Accessibility belongs where developers already work
A developer I spoke with recently said something that stuck with me: "I can just add accessibility rules to my CLAUDE.md. Why would I need a tool for that?"
He's right that you can. You can write "follow WCAG AA" in your AI instructions and your coding assistant will try. It will add alt text sometimes. It will use semantic HTML when it remembers. It will suggest ARIA attributes in contexts where it has seen them before.
But try this: ask the same AI to review a codebase for accessibility tomorrow. It won't remember what it found yesterday. Ask it whether a fix it applied last week actually passed. It can't tell you. Ask it to prove to your compliance officer that every component was reviewed. There's nothing to show.
Instructions without enforcement are suggestions. And suggestions don't survive deadlines, team changes, or the EAA inspector who shows up asking for evidence.
The enforcement gap
Since June 2025, the European Accessibility Act has been in active enforcement. France issued legal notices to four major retailers within days. Penalties reach up to three million euros or 4% of annual revenue. The ADA lawsuit count in the US keeps climbing. The UK Equality Act already covers websites.
The question for development teams has shifted from "should we care about accessibility?" to "can we prove we do?"
And that's where the gap appears. An AI coding assistant can write accessible code. But it can't track what it reviewed, verify that fixes actually passed quality checks, or produce an evidence trail for an auditor. Those are system-level capabilities, not instruction-level ones.
The difference matters. "We told our AI to follow WCAG" is not compliance evidence. "Here's a dashboard showing 86 criteria evaluated, 12 issues found, 12 fixes verified" is.
What the AI sees versus what it misses
WCAG 2.2 has 86 success criteria across three conformance levels (A, AA, AAA). Some are things AI handles well: structural issues like heading hierarchy, missing alt text, buttons without labels. A well-prompted AI catches these reliably.
But many criteria resist automation entirely. Focus order only shows up at runtime. Contrast ratios depend on computed styles, not source code. Keyboard traps require simulating tab traversal. Content reflow at 320px needs a real viewport. Color vision accessibility requires simulating what deuteranopia, protanopia, and tritanopia actually look like.
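Contrast is a good illustration of why source code alone isn't enough. WCAG SC 1.4.3 defines contrast in terms of the relative luminance of the colors actually rendered, which is why you need computed styles rather than design tokens. A minimal sketch of the math (not any particular tool's implementation):

```typescript
// sRGB channel (0-255) -> linearized value per the WCAG relative-luminance definition.
function linearize(channel: number): number {
  const c = channel / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Relative luminance of a "#rrggbb" color.
function relativeLuminance(hex: string): number {
  const [r, g, b] = [1, 3, 5].map((i) => parseInt(hex.slice(i, i + 2), 16));
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// Contrast ratio (L1 + 0.05) / (L2 + 0.05), lighter color in the numerator.
function contrastRatio(fg: string, bg: string): number {
  const [l1, l2] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (l1 + 0.05) / (l2 + 0.05);
}

console.log(contrastRatio("#767676", "#ffffff").toFixed(2)); // ~4.54, just above the 4.5:1 AA threshold for body text
```

The formula is simple; the hard part is knowing which two colors to feed it, and that answer only exists in the rendered page.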
The industry is converging on a layered approach. Static analysis (ESLint rules, code patterns) catches about 30% of issues. Runtime scanning (axe-core against the rendered DOM) adds another 30%. The remaining 40% requires guided human review: reading order, cognitive load, sensory characteristics, orientation handling.
No single tool covers everything. The question is how these layers coordinate. If each tool produces its own report in its own format, and nobody tracks which issues were found where and whether they were fixed, you end up with a false sense of coverage.
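One way to avoid that false sense of coverage is to normalize every layer's output into a shared shape and key findings by criterion and location, so the same issue reported twice is tracked once. A minimal sketch; the field names here are illustrative, not Jeikin's schema:

```typescript
// Hypothetical normalized finding: every layer (linter, runtime scanner,
// human review) maps its own report format into this one shape.
interface Finding {
  tool: string;      // e.g. "eslint-plugin-jsx-a11y", "axe-core"
  criterion: string; // WCAG success criterion, e.g. "1.1.1"
  file: string;
  locator: string;   // selector or code location
}

// Merge per-tool reports, deduplicating by criterion + location and
// recording which layers saw each issue.
function mergeFindings(
  reports: Finding[][]
): Map<string, { finding: Finding; seenBy: string[] }> {
  const merged = new Map<string, { finding: Finding; seenBy: string[] }>();
  for (const report of reports) {
    for (const f of report) {
      const key = `${f.criterion}|${f.file}|${f.locator}`;
      const entry = merged.get(key);
      if (entry) entry.seenBy.push(f.tool);
      else merged.set(key, { finding: f, seenBy: [f.tool] });
    }
  }
  return merged;
}
```

With one deduplicated view, "was this fixed?" has a single answer instead of one per tool.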
The workflow that actually works
We've been thinking about this problem for months, and we keep arriving at the same conclusion: accessibility tools need to be invisible. Not invisible in the sense of overlays that hide problems, but invisible in the sense that the developer never switches context.
Today we shipped Jeikin to two places where developers already spend their time.
In the editor
The Jeikin extension for VS Code connects your AI coding assistant to a compliance system. Install it, click Connect, pick your compliance level. From that point forward, every AI interaction has your project's accessibility rules loaded via MCP.
This isn't just "instructions in a file." When the AI finds an issue, it reports it to a tracking system. When it fixes something, it has to verify the fix against quality checks. When the checks fail, it can't mark the issue as done. There's a system enforcing the loop: find, report, fix, verify. Skip a step and the dashboard shows the gap.
The extension itself is tiny (under 40 KB). It handles onboarding and shows your open issue count in the status bar. The real work happens through MCP, which means it works with Claude Code, GitHub Copilot, Cursor, Windsurf, and Cline.
In the pull request
The Jeikin GitHub App reviews every PR before merge. Issues appear as inline annotations in the code diff, not in a separate report. Each annotation includes the severity, the specific WCAG criterion, a plain-language explanation of who is affected, and a one-click link to fix it in Cursor or VS Code.
Critical violations block the merge. You can't ship inaccessible code by accident.
This catches what the AI in the editor missed. Maybe a junior developer didn't use the AI for that component. Maybe the AI hallucinated an ARIA attribute that doesn't exist. The PR review is the safety net.
On the dashboard
Everything flows to a central dashboard. Issues found, fixes verified, criteria evaluated, evidence preserved. The developer never needs to open it. But their engineering manager, their compliance officer, or their client can see exactly what was reviewed and what the results were.
Why this is different from adding rules to a file
There's a specific objection worth addressing directly, because it's the most reasonable one: "My AI already follows accessibility rules I've written in my project instructions."
Here's what project instructions can do:
- Tell the AI to use semantic HTML
- Remind it to add alt text to images
- Set a general standard like "follow WCAG AA"
Here's what they can't do:
- Track which files were actually reviewed and which were skipped
- Enforce that a fix was verified before marking it done
- Prove to an auditor what was checked and when
- Coordinate across tools (editor AI, PR bot, runtime scanner) into one view
- Remember findings across sessions (every new conversation starts from zero)
- Run quality gates like APCA contrast, readability scoring, color vision simulation, or focus order tracing
- Block a merge when a critical accessibility violation is present
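The last item is the easiest to make concrete: a merge gate is just a hard failure whenever a critical issue is still unresolved, because in CI a nonzero exit code is what actually blocks the merge. A hedged sketch under assumed names (not the actual gate logic):

```typescript
interface GateIssue {
  criterion: string; // WCAG success criterion, e.g. "2.1.2"
  severity: "critical" | "serious" | "moderate" | "minor";
  resolved: boolean;
}

// The issues that should block the merge: critical and still unresolved.
function blockingIssues(issues: GateIssue[]): GateIssue[] {
  return issues.filter((i) => i.severity === "critical" && !i.resolved);
}

// Exit code for the CI step: 0 lets the merge proceed, 1 blocks it.
function gate(issues: GateIssue[]): number {
  return blockingIssues(issues).length === 0 ? 0 : 1;
}
```

Project instructions can ask for this behavior; only something sitting in the merge path can enforce it.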
Instructions are the input. A compliance system is the loop: input, review, evidence, enforcement. The loop is what auditors need to see. The loop is what prevents regressions. The loop is what makes "we care about accessibility" into a provable claim.
The landscape is moving
Accessibility tooling is converging with AI coding tools faster than most teams realize. Deque shipped an axe MCP server. AccessiMind does real-time WCAG analysis in VS Code. TestParty embeds remediation into GitHub workflows. The era of accessibility as a separate activity is ending.
The tools that win will be the ones developers don't have to think about. Not because accessibility doesn't matter, but because it matters too much to depend on someone remembering to run a scan.
Try it
Install the VS Code extension and ask your AI to review your code for accessibility.
Or install the GitHub App and open your next PR.
The first review usually surfaces things you didn't expect. Not because your code is bad, but because accessibility barriers are invisible until something looks for them systematically.
Every week, we find more evidence that the gap between "we care about accessibility" and "we can prove it" is where teams get stuck. We're building the bridge.