Accessibility Lead Pathway
The person inside the organisation who owns accessibility end-to-end. Sits between product, engineering, design, legal, and the executive layer. Translates between all five.
Last reviewed: 2026-04-07
Building the accessibility programme from scratch
If you've just been handed this title with no programme to inherit, the first 90 days decide whether anything actually gets built.
What the law says
The EAA doesn't mandate a specific organisational structure for accessibility. It requires outcomes: conformance, documentation, complaint handling, ongoing maintenance. How you structure things to deliver those outcomes is up to you. The standard reference for organisational maturity is the W3C Web Accessibility Initiative's Accessibility Maturity Model, which describes five levels from inactive to optimised.
What it means in practice
First 30 days: scope and baseline. Walk through every product, every site, every customer surface. Note which ones fall under the EAA. For each, run a quick automated scan and a 15-minute manual triage. The goal isn't a complete audit — it's an inventory of where the worst problems live and roughly how big the work is. Write it up as your 'state of the nation' brief.

First 60 days: governance. Find an executive sponsor. Get an accessibility statement live on every consumer-facing surface, even if it's an interim version that admits to known limitations. Stand up a complaint inbox. Define what 'done' means for new feature work and get at least one product team to adopt it as a pilot.

First 90 days: operating rhythm. A quarterly executive update with the risk register and budget status. A monthly working group with PM, eng, design, legal, and support representatives. Weekly office hours where any team can drop in with accessibility questions. An annual external audit cadence agreed with finance.

What goes wrong is starting with the audit. A full audit before any governance work produces a 300-page document that nobody acts on. Start with the lightweight baseline, get the operating rhythm running, then layer in the full audit as you have capacity to actually act on the findings.

Use the Self-Assessment Pipeline as the structural backbone for your audit work. It tracks the scoping, the test plan, and the results, and produces a Word export you can hand to leadership. The Compliance Programme Template gives you the governance shell to drop the pipeline into.
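The 30-day inventory is really just a small structured dataset: surfaces, EAA scope flags, and rough issue counts from the triage. A minimal sketch of what the 'state of the nation' brief boils down to — all surface names, counts, and field names here are illustrative assumptions, not real audit data:

```python
from dataclasses import dataclass

@dataclass
class Surface:
    name: str
    in_eaa_scope: bool    # does the EAA apply to this surface?
    critical_issues: int  # from the quick automated scan + 15-minute triage
    high_issues: int

def state_of_the_nation(surfaces):
    """Summarise the baseline: scoped surfaces only, worst offenders first."""
    scoped = [s for s in surfaces if s.in_eaa_scope]
    scoped.sort(key=lambda s: (s.critical_issues, s.high_issues), reverse=True)
    return {
        "surfaces_in_scope": len(scoped),
        "total_critical": sum(s.critical_issues for s in scoped),
        "worst_first": [s.name for s in scoped],
    }

# Hypothetical inventory: the internal admin tool is out of EAA scope,
# so it is excluded from the brief even though it has the most issues.
inventory = [
    Surface("web app", True, 12, 30),
    Surface("marketing site", True, 3, 9),
    Surface("internal admin", False, 20, 5),
]
brief = state_of_the_nation(inventory)
print(brief)
```

The point of the structure is the sort order: the brief leads with where the worst problems live, which is exactly what the executive conversation needs.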
Common pitfalls
- Starting with a giant audit and no governance. The audit takes three months, by which time the org has moved on and the findings are stale.
- Trying to fix everything everywhere at once. Accessibility programmes survive by picking the highest-impact changes and showing visible progress.
- No executive sponsor. Without one, you have no authority and no budget — you're a complaint department with extra steps.
How to verify it
After 90 days, can you answer these five questions? Who is the executive sponsor? Where is the budget line? What is the 12-month plan? What are the top three risks? What is the operating rhythm? If yes, the programme is real. If not, you're still in firefighting mode and need to reset.
AccessibilityRef tools that help
- Self-Assessment Pipeline — the structural backbone for audit work
- Compliance Programme Template — governance and operating rhythm template
- Provisional Assessment — fast 50-point baseline for the first 30 days
- Statement Wizard — interim accessibility statement in the first 60 days
Translating between legal, product, and engineering
The Accessibility Lead's hardest skill isn't WCAG knowledge. It's making three different professional cultures agree on the same thing.
What the law says
The EAA relies on operational implementation of legal requirements. Article 13 conformity assessments need both technical and legal substance. Annex V statements need both. Annex VI burden assessments need both. The role that bridges those isn't defined in the directive at all, but it's essential for any compliant operator.
What it means in practice
Lawyers want certainty about exposure. Product wants user impact. Engineers want clear acceptance criteria. The same accessibility issue gets described three different ways depending on who's listening.

A missing form label looks like this to a developer: 'WCAG 1.3.1 fail, add a `<label>` element with a `for` attribute'. To a PM: 'screen reader users can't complete the signup flow, blocking conversion for an estimated X% of users'. To legal: 'a documented WCAG 2.1 AA non-conformance affecting a consumer-facing service under EAA Annex I, with an audit trail in our remediation backlog'. The technical fact behind all three is identical. The framing is what makes each audience actually move.

Learn to do that translation automatically, in your head. Then in any meeting you can pivot the framing to whoever you're trying to move. It's the Accessibility Lead's superpower, and it takes roughly a year to develop.

For formal documents, keep three artefacts in sync: the engineering bug tracker (technical), the product roadmap (impact), and the compliance register (legal). All three track the same underlying issues with the framing each audience needs. When something moves in one, it has to move in the others. It's dull spreadsheet work, and it's the difference between a programme that gets resourced and one that gets ignored.

For difficult conversations — 'we shipped a regression, who knew, why didn't they raise it' — walk in with the technical facts and the impact framing both ready. Lead with whichever the audience values more. Don't argue about legal exposure with an engineer who just wants to know what to fix.
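The 'one source of truth, three framings' idea can be sketched in a few lines: a single issue record, with one render function per audience. The record fields and render functions here are hypothetical illustrations, not a real tracker's API:

```python
# One canonical issue record. Everything each audience sees derives from it,
# so the three framings can never drift apart.
issue = {
    "id": "A11Y-101",
    "wcag": "1.3.1",
    "technical": "form input missing a programmatically associated <label>",
    "user_impact": "screen reader users cannot complete the signup flow",
    "surface": "signup",
}

def for_engineering(i):
    # Engineers want a clear acceptance criterion.
    return f"{i['id']}: WCAG {i['wcag']} fail — {i['technical']}"

def for_product(i):
    # Product wants user impact.
    return f"{i['id']}: {i['user_impact']} on the {i['surface']} flow, blocking conversion"

def for_legal(i):
    # Legal wants documented exposure with an audit trail.
    return (f"{i['id']}: documented WCAG 2.1 AA non-conformance ({i['wcag']}) "
            f"on a consumer-facing service, tracked in the remediation backlog")

for render in (for_engineering, for_product, for_legal):
    print(render(issue))
```

In practice the 'render functions' are the export templates from whatever tracker holds the canonical record; the design choice that matters is that the bug tracker, roadmap, and compliance register are views of one dataset, not three datasets.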
Common pitfalls
- Speaking to the board in WCAG criterion numbers. Eyes glaze over, the budget conversation goes the wrong way.
- Speaking to engineers in 'compliance risk' terms. They push back because it sounds like pressure rather than problem-solving. Lead with the user.
- Three different versions of the truth in three different trackers, none of which agree. Pick one source of truth and reflect it everywhere else.
How to verify it
Pick a single accessibility issue from the last quarter. Can you write three one-paragraph descriptions of it — one for engineering, one for product, one for legal? If yes, you're translating well. If all three sound the same, you're losing one or two of those audiences.
AccessibilityRef tools that help
- Self-Assessment Pipeline — single source of truth across audiences
- Compliance Report Builder — audit-grade reporting in legal-friendly format
- Accessibility Debt Calculator — translates findings into cost for finance conversations
Working with disabled users and external testers
An accessibility programme that doesn't include disabled users in testing isn't an accessibility programme. It's a guess.
What the law says
Neither the EAA nor EN 301 549 explicitly requires user testing with disabled participants. But Annex C of EN 301 549 includes user-based test methods, and the functional performance criteria in Clause 4 (perceivable without sight, operable without hearing, and so on) can only really be verified by users who have the relevant disabilities. The practical reality: a regulator looking at your conformity assessment will give more weight to evidence that includes real user testing than to evidence that's purely automated.
What it means in practice
Recruit a panel of disabled users you can test with regularly. The right size depends on your release cadence — for a quarterly cadence, 8 to 12 testers across a range of disabilities is enough. For monthly, double it. Pay them properly. Research participation rates for disabled users should match or exceed whatever you pay for standard usability research.

The panel should cover the major disability categories: blind and low-vision (screen reader users), deaf and hard-of-hearing, motor (keyboard-only or switch users), cognitive (dyslexia, attention, language). One tester per category isn't enough — aim for two or three per category at minimum, because the variation within each category is substantial.

For each session, brief the tester in writing in their preferred format, give them realistic tasks, and let them work through the product without interruption. Record the session if they consent. Debrief at the end with open questions, not a checklist. The output is a list of issues with reproduction steps, severity, and the user's own words about the impact.

Use the User Testing Log on this site as the documentation format. It captures the consent, the tasks, the findings, and produces a clean report you can attach to your conformity assessment and your remediation tickets.

Don't make this the only testing you do. User testing catches the issues that automated tools and expert reviews miss, but it's expensive and slow. Run automated tests on every release, expert reviews on every major change, and user testing every quarter or against every critical flow.
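The session output described above (consent, tasks, findings with reproduction steps, severity, and the user's own words) maps naturally onto a small record structure. A hedged sketch — the field names are illustrative, not the actual User Testing Log schema:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    steps: str       # reproduction steps
    severity: str    # e.g. "critical", "high"
    user_quote: str  # the user's own words about the impact

@dataclass
class Session:
    tester_category: str  # e.g. "screen reader user"
    consent_recorded: bool
    tasks: list = field(default_factory=list)      # realistic tasks, set in advance
    findings: list = field(default_factory=list)

    def report_ready(self):
        # Only attach a session to the conformity assessment if consent
        # is on record and the tester worked through real tasks.
        return self.consent_recorded and len(self.tasks) > 0

session = Session(
    tester_category="screen reader user",
    consent_recorded=True,
    tasks=["complete signup", "check order status"],
)
session.findings.append(Finding(
    steps="Tab into the email field on the signup form",
    severity="critical",
    user_quote="I had no idea which field I was in.",
))
print(session.report_ready())
```

Keeping the user's own words as a first-class field is deliberate: it's the impact framing you'll reuse in product and legal conversations.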
Common pitfalls
- One disabled tester per major release. The variation between users is too wide — one tester is anecdote, not data.
- Recruiting only through accessibility advocacy groups. You get experienced testers who are already comfortable with workarounds, and you miss what a casual user would actually experience.
- Treating user testing as the proof of compliance and skipping the formal Annex C tests. Both are needed. Neither replaces the other.
How to verify it
How many disabled users tested your product last quarter? If the answer is zero, your accessibility programme has a structural blind spot. If it's one, double it. The right number is almost always higher than you'd expect.
AccessibilityRef tools that help
- User Testing Log — documentation format for testing with disabled participants
- AT Interoperability Checklist — cross-reference for assistive technology coverage
Measuring and reporting accessibility
If you can't measure the programme, you can't defend the budget. If you can't report it, leadership eventually stops believing in it.
What the law says
EAA Article 13 conformity assessments are documented snapshots. The directive doesn't require ongoing metrics, but it does require ongoing conformance — and the only way to know whether you've still got it is to measure.
What it means in practice
Pick three or four metrics, track them monthly, report them quarterly. The right metrics depend on the size of the org, but a workable starter set is:

- **Critical and high-severity issues open.** The headline number. It should trend down. If it trends up, either nothing is shipping or new debt is being introduced faster than old debt is being fixed.
- **Conformance percentage by surface.** For each major surface (web app, marketing site, mobile app, help centre), the percentage of WCAG 2.1 AA criteria that pass. It's a stable number that changes slowly. Report it quarterly with the trend line attached.
- **Mean time to remediation.** From the moment a critical issue is discovered to the moment a fix ships. Long mean times point at a process problem. Short mean times tell you the team can move when motivated.
- **Complaints received versus resolved.** Both numbers matter. A growing complaint queue with a stable resolution rate is a workload problem. A stable complaint queue with a falling resolution rate is a process problem.

Report these to the executive sponsor monthly in a one-page dashboard, to the board quarterly in a one-slide summary, and to the wider engineering org monthly in a more detailed format. Repetition matters more than depth — the number people remember is the one they've seen five times in a row.

The Metrics Dashboard on this site lets you track scores over time and produces simple charts you can drop straight into a board pack. For more formal reporting, the Compliance Report Builder generates an audit-grade summary.
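All four starter metrics are simple computations over data you already hold. A hedged sketch with invented sample records — a real programme would pull these from the bug tracker and support system rather than hard-coding them:

```python
from datetime import date

# Invented sample data for illustration only.
issues = [
    {"severity": "critical", "found": date(2026, 1, 5), "fixed": date(2026, 1, 25)},
    {"severity": "critical", "found": date(2026, 2, 1), "fixed": None},  # still open
    {"severity": "high",     "found": date(2026, 1, 10), "fixed": date(2026, 1, 20)},
]
criteria = {"passes": 45, "total": 50}      # WCAG 2.1 AA checks for one surface
complaints = {"received": 8, "resolved": 6}

# Metric 1: critical and high-severity issues currently open.
open_serious = sum(1 for i in issues
                   if i["severity"] in ("critical", "high") and i["fixed"] is None)

# Metric 2: conformance percentage for the surface.
conformance_pct = 100 * criteria["passes"] / criteria["total"]

# Metric 3: mean time to remediation, in days, over fixed issues.
fix_times = [(i["fixed"] - i["found"]).days for i in issues if i["fixed"]]
mean_time_to_remediation = sum(fix_times) / len(fix_times)

# Metric 4: complaint resolution rate.
resolution_rate = complaints["resolved"] / complaints["received"]

print(open_serious, conformance_pct, mean_time_to_remediation, resolution_rate)
```

Each metric is one line of arithmetic; the hard part is keeping the input data honest and the baseline fixed so the trend lines mean something.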
Common pitfalls
- Tracking 30 metrics nobody reads. Pick four. Track them well. Report them consistently.
- Reporting only the good-news numbers. Leadership stops trusting you the moment they realise the dashboard is filtered.
- No baseline. The first month's number is your baseline — everything after compares against it. Without one you can't actually show progress.
How to verify it
Show me last quarter's accessibility report. Does it have the same metrics as the previous quarter? Are the trends visible? If the answer is no, the reporting is theatre. If yes, you've got a real measurement programme running.
AccessibilityRef tools that help
- Metrics Dashboard — track scores over time with charts
- Compliance Report Builder — audit-grade quarterly reporting
Important Legal Disclaimer
This tool is a self-assessment aid only and does not constitute legal advice or a formally certified compliance assessment. Outputs — including reports, scores, checklists, and accessibility statements — are for internal use and should be reviewed by a qualified legal representative or independent accessibility auditor before being relied upon for regulatory, procurement, or public-disclosure purposes. All assessment risk lies with the internal assessor. AccessibilityRef, its developers, and staff accept zero liability for losses arising from use of or reliance on these outputs. Always verify against official sources: the W3C WCAG 2.2 Recommendation, the European Accessibility Act (Directive 2019/882), and your national enforcement authority.
This tool is a self-assessment aid only and does not constitute legal advice or a formally certified compliance assessment. Outputs — including reports, scores, checklists, and accessibility statements — are for internal use and should be reviewed by a qualified legal representative or independent accessibility auditor before being relied upon for regulatory, procurement, or public-disclosure purposes. All assessment risk lies with the internal assessor. accessibilityref, its developers, and staff accept zero liability for losses arising from use of or reliance on these outputs. Always verify against official sources: the W3C WCAG 2.2 Recommendation, the European Accessibility Act (Directive 2019/882), and your national enforcement authority.