Accessibility MCP servers for AI coding agents (2026)
The accessibility tooling ecosystem and the AI coding tool ecosystem are merging fast. MCP (Model Context Protocol) servers are now the connective tissue between AI coding agents (Claude Code, Cursor, Windsurf, GitHub Copilot in agent mode) and specialised accessibility engines.
The list of options is short but growing. This post compares the ones that are actually shipping in April 2026, so you can pick the right one for your workflow without wading through marketing pages.
What an "accessibility MCP server" actually is
A short refresher, because the category is new enough that the definition still wobbles.
An MCP server is a process that exposes tools, resources, and prompts to an AI client over a standard protocol. An accessibility MCP server is one whose tools are about finding, fixing, or proving accessibility outcomes. The AI agent calls those tools the same way a developer would call an API: send input, get back structured findings or guidance.
That structure matters. A linter tells you "missing alt attribute" in your terminal. An MCP tool returns the same finding as JSON the agent can reason about, fix, and verify in the same turn.
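To make that concrete, here is a minimal sketch of what such a tool can look like, written against the official MCP TypeScript SDK (`@modelcontextprotocol/sdk`). The server name, the `scan_html` tool, the naive alt-text regex, and the finding fields are all illustrative; none of the servers below work exactly this way.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "a11y-example", version: "0.1.0" });

// One tool: scan an HTML fragment and return findings as structured JSON
// the agent can reason about, fix, and re-check in the same turn.
server.tool(
  "scan_html",
  { html: z.string().describe("HTML fragment to check") },
  async ({ html }) => {
    const findings: Array<{ rule: string; wcag: string; snippet: string }> = [];
    // Deliberately naive check: <img> elements with no alt attribute.
    for (const match of html.matchAll(/<img\b[^>]*>/gi)) {
      if (!/\balt\s*=/i.test(match[0])) {
        findings.push({ rule: "image-alt", wcag: "1.1.1", snippet: match[0] });
      }
    }
    return {
      content: [{ type: "text" as const, text: JSON.stringify(findings, null, 2) }],
    };
  }
);

await server.connect(new StdioServerTransport());
```

A real server would delegate to a proper engine such as axe-core rather than a regex, but the contract is the same: the agent sends input, the tool returns structured findings it can act on.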
The shipping options (April 2026)
axe MCP server (Deque)
Deque ships an axe MCP server that wraps the axe-core scanning engine. Strong runtime DOM analysis, mature rule library, well-documented for VPATs and audit reports.
- What it's good at: Rendered DOM scanning, runtime checks against WCAG 2.1/2.2 AA, integrating into existing axe-based workflows; a sketch of that kind of scan follows this list.
- What it doesn't do: Persistent tracking across sessions, structured evidence for compliance audits, fix verification loops. The agent finds issues, but each session starts from zero.
- Pick it when: You already use axe and want your AI assistant to call into the same engine that produces your VPAT.
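For a sense of what "the same engine" returns, here is a hedged sketch of a rendered-DOM scan using axe-core driven through Playwright (assuming the `@axe-core/playwright` builder API). The URL and tag filter are placeholders, and Deque's actual MCP tool names may differ from whatever wraps a call like this.

```typescript
import { chromium } from "playwright";
import { AxeBuilder } from "@axe-core/playwright";

const browser = await chromium.launch();
const page = await browser.newPage();
await page.goto("https://example.com"); // placeholder URL

// Run axe-core against the rendered DOM, scoped to WCAG 2.x A/AA rules.
const results = await new AxeBuilder({ page })
  .withTags(["wcag2a", "wcag2aa", "wcag21aa"])
  .analyze();

// Each violation carries a rule id, impact, help text, and the offending nodes:
// the structured shape an agent needs in order to locate and fix the issue.
for (const violation of results.violations) {
  console.log(violation.id, violation.impact, violation.nodes.length);
}

await browser.close();
```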
Community Access agents (open source)
The Community Access project ships 57 specialist accessibility agents for Claude Code, GitHub Copilot, and Claude Desktop. Not strictly an MCP server — it's an agent collection — but it's the dominant open-source option in this space and worth listing.
- What it's good at: Free, open source, deep specialisation per criterion (one agent for focus order, one for colour contrast, etc.); a sketch of the contrast check follows this list.
- What it doesn't do: Centralised dashboard, evidence trail, fix verification. You get findings; you do not get proof.
- Pick it when: You want broad WCAG coverage inside Claude Code without paying anyone, and you have your own way of tracking outcomes.
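To show how narrow one specialist's job is, the colour-contrast check reduces to the WCAG 2.x relative-luminance formula. The sketch below is a generic implementation of that formula, not code taken from the Community Access project.

```typescript
// Linearise one sRGB channel (0-255) per the WCAG relative-luminance definition.
function linearise(channel: number): number {
  const s = channel / 255;
  return s <= 0.03928 ? s / 12.92 : ((s + 0.055) / 1.055) ** 2.4;
}

function relativeLuminance(r: number, g: number, b: number): number {
  return 0.2126 * linearise(r) + 0.7152 * linearise(g) + 0.0722 * linearise(b);
}

// Contrast ratio between foreground and background colours. WCAG 2.2 AA
// (criterion 1.4.3) requires at least 4.5:1 for normal text, 3:1 for large text.
function contrastRatio(fg: [number, number, number], bg: [number, number, number]): number {
  const l1 = relativeLuminance(...fg);
  const l2 = relativeLuminance(...bg);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// Mid-grey text (#767676) on white comes out around 4.54:1, just over the AA bar.
console.log(contrastRatio([0x76, 0x76, 0x76], [0xff, 0xff, 0xff]).toFixed(2));
```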
BrowserStack Accessibility DevTools
BrowserStack's Accessibility DevTools lints code in real time inside the editor and during builds. Not strictly MCP yet, but they have agent integrations on the roadmap.
- What it's good at: Real-time linting, cross-browser/device testing, established QA workflows.
- What it doesn't do: Native MCP server (today). Inline guidance in AI coding tools is limited compared with native MCP integrations.
- Pick it when: Your team already lives in BrowserStack for cross-browser testing and you want a single vendor.
Siteimprove "agentic accessibility"
Siteimprove's framework for agentic accessibility positions AI agents as autonomous remediators across content sites. Enterprise-focused, more about content/CMS than developer workflows.
- What it's good at: Large content sites, enterprise CMS integration, content team workflows.
- What it doesn't do: Developer-first integration. The "agent" lives more in the content layer than in the IDE.
- Pick it when: Your accessibility problem is content scale, not code scale.
Jeikin
Jeikin is the option we build, so treat this section as biased — but the differences are concrete.
- What it's good at: Closing the loop. Findings persist across sessions in a project dashboard, every fix is verified through quality checks, and every finding is timestamped against a specific WCAG 2.2 criterion (an illustrative record shape follows this list). The same dashboard gives compliance teams an audit trail mapped to the EAA, the ADA, Section 508, and EN 301 549. Free during beta, up to 3 projects per account.
- What it doesn't do: Replace runtime DOM scanners. Jeikin focuses on the enforcement layer — find, report, fix, verify, evidence — rather than competing with axe-core on rule coverage. The roadmap includes axe-core as a runtime sensor.
- Pick it when: You need to prove the work was done, not just do the work. If a regulator, client, or board asks "where's your evidence?", Jeikin answers; most other tools don't.
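To make "evidence" concrete, here is an illustrative shape for a persisted finding record. The field names are hypothetical and are not Jeikin's actual schema; the point is that find, fix, and verify each get their own timestamp.

```typescript
// Illustrative only: the kind of record an enforcement layer persists so that
// "what did we fix last week?" has an answer. Field names are hypothetical,
// not Jeikin's actual schema.
interface FindingRecord {
  id: string;
  wcagCriterion: string;            // e.g. "1.4.3 Contrast (Minimum)" (WCAG 2.2)
  location: { file: string; selector?: string };
  foundAt: string;                  // ISO 8601 timestamp: when the issue was detected
  fixedAt?: string;                 // set when a fix lands
  verifiedAt?: string;              // set only after a re-check confirms the fix
  evidence: {
    beforeSnippet: string;
    afterSnippet?: string;
    verificationMethod?: string;    // e.g. "re-scan", "manual review"
  };
}
```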
How to choose
Three questions, in this order:
- Do you need evidence, or do you need findings? If a compliance, legal, or procurement team will ever ask for proof, you need persistent, timestamped tracking. That rules out anything session-scoped.
- Where does your team already work? A tool your developers won't open is worth nothing. The MCP servers above all fit inside Claude Code, Cursor, or Windsurf — pick the integration that matches your stack.
- What's your runtime story? Static rules catch a fraction of issues. If you need rendered-DOM analysis, you'll want axe-core (directly or via Deque's MCP server) somewhere in the chain.
The honest answer for most teams is more than one tool. A scanning engine for runtime DOM checks. An enforcement layer for evidence. A dashboard for tracking. The good news: MCP makes those layers compose cleanly for the first time.
What's missing from the category
A few gaps worth flagging, because they shape what you should expect over the next twelve months:
- No tool yet handles WCAG 3.0 outcomes well. The W3C published a new WCAG 3.0 Working Draft in March 2026. The shift from "success criteria" to "outcomes" is going to break most rule-based scanners. Tools that already separate finding from verification will adapt faster.
- Evidence formats are not standardised. Each vendor exports different report shapes. Procurement teams get confused. Expect convergence around something VPAT-shaped.
- Verification is the weak link everywhere. Most tools find issues. Almost none verify the fix actually worked. This is the single biggest gap in the category, and the one we focus on at Jeikin.
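A verify step does not have to be elaborate. The sketch below re-runs a check after a fix and only closes the finding if it no longer reproduces; `runCheck` is a hypothetical placeholder for whatever scanner sits in your chain.

```typescript
// Hypothetical types and runCheck: the point is the loop shape, not the scanner.
interface Finding {
  rule: string;
  selector: string;
}

async function verifyFix(
  runCheck: () => Promise<Finding[]>, // re-runs the same scan against the fixed code
  original: Finding
): Promise<boolean> {
  const remaining = await runCheck();
  // Only treat the fix as verified if the exact finding no longer reproduces.
  return !remaining.some(
    (f) => f.rule === original.rule && f.selector === original.selector
  );
}
```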
Try the comparison yourself
The fastest way to see the difference: open Claude Code in any project, run an accessibility review with whichever tool you have today, then ask the agent in a fresh session "what did we fix last week?"
If the silence after that question makes you uncomfortable, you've found the enforcement gap. Closing it is what turns "we use AI for accessibility" into something a regulator will accept.
MCP standardised the wiring. The interesting question now is what gets wired in. For accessibility, the answer increasingly looks like a small stack of specialised tools, not one monolith.