Knowledge Foundations
Accessibility review is not a single skill or a checklist exercise. It draws on four interconnected bodies of knowledge, and the most effective reviews hold all four in view simultaneously. Understanding these foundations helps clarify why rigorous accessibility review requires more than running an automated scan — and why the findings from a thorough review are meaningfully different from a surface-level compliance check.
The four knowledge pillars
Each pillar represents a distinct area of expertise that contributes to a complete review. In practice, they are deeply interconnected: you cannot do strong barrier analysis without disability knowledge; you cannot evaluate a learning experience without UDL fluency; you cannot connect findings to action without understanding governance and law.
1. Disability and learner variability
Effective accessibility review begins with understanding disability — not as a fixed category, but as a relationship between a person and their environment. This includes familiarity with major disability categories (visual, auditory, motor, cognitive, neurological, and situational), how assistive technologies work and what they require from content, and how the social model of disability shifts the focus from "fixing the person" to "fixing the environment." Understanding learner variability also means recognizing that many barriers affect people without formal disability designations — including those with temporary impairments, older learners, or people accessing content in constrained environments.
2. Universal Design for Learning (UDL) and inclusive design
Universal Design for Learning is a framework developed by CAST that guides the design of learning experiences to accommodate variability from the outset. It is built around three principles:
- Multiple means of Engagement — the why of learning; recruiting interest, sustaining effort, enabling self-regulation
- Multiple means of Representation — the what of learning; offering content through different modalities and formats
- Multiple means of Action and Expression — the how of learning; allowing learners to demonstrate knowledge in varied ways
UDL is distinct from, but complementary to, technical accessibility standards. WCAG tells you whether a video has captions; UDL asks whether the learning experience gives all learners genuine ways to engage with, access, and demonstrate the knowledge it contains.
3. Technical accessibility standards
The Web Content Accessibility Guidelines (WCAG), currently at versions 2.1 and 2.2, provide the technical baseline for digital accessibility. They are organized around four principles — content must be Perceivable, Operable, Understandable, and Robust (POUR) — and apply to web content, LMS platforms, digital documents, multimedia, and interactive learning objects. Standards fluency means being able to interpret and apply criteria in context, not simply run automated checks and report pass/fail results.
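Of the WCAG success criteria, colour contrast is one of the few precise enough to compute directly. As a concrete illustration, the sketch below implements the WCAG 2.x relative-luminance and contrast-ratio formulas; the example colours are arbitrary:

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance for an (r, g, b) tuple of 0-255 integers."""
    def channel(c):
        c = c / 255
        # Linearize the sRGB channel per the WCAG 2.x definition
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colours, per WCAG 2.x (ranges 1.0 to 21.0)."""
    l1, l2 = relative_luminance(fg), relative_luminance(bg)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

# Mid-grey (#777777) text on white: roughly 4.48:1, which narrowly
# fails the WCAG AA threshold of 4.5:1 for normal-size text.
print(round(contrast_ratio((119, 119, 119), (255, 255, 255)), 2))
```

This is exactly the kind of check automated tools do well; judging whether the text is legible in its actual context still requires a human reviewer.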
4. Laws, governance, and institutional context
Technical findings only create lasting change when they connect to real accountability structures. This requires understanding relevant legislation (discussed in detail in Section 5), but also institutional governance: who has authority to require accessibility in procurement, which workflows create barriers that no individual designer can fix, and how accessibility responsibilities are — or aren't — distributed across an organization. Reviews that ignore institutional context produce findings that sit in reports without generating change.
A practical hierarchy for using these frameworks together: Use WCAG to audit the artifact. Use UDL to evaluate the learning experience. Use inclusive design principles to assess the system. Strong accessibility review holds all three lenses simultaneously — and situates findings within the institutional context in which they need to be acted on.
What distinguishes expertise from checklist compliance
Automated accessibility tools can detect a meaningful subset of issues: missing alternative text, poor heading structure, insufficient colour contrast, ARIA implementation errors. These are genuinely useful starting points. But automated tools can only identify approximately 30 to 40 percent of real accessibility barriers — the rest require human judgment.
Automated tools cannot assess whether a learning experience is equitably designed; whether assessment tasks assume a narrow range of abilities; whether the interaction patterns in an LMS are usable with a screen reader in practice (as opposed to technically passing); or whether the barriers that most affect a specific learner population are actually addressed. The gap between "technically passes" and "actually works for everyone" is where human expertise is irreplaceable.
Accommodation versus design: an important distinction
Accommodation addresses a specific individual's access need after the fact — providing an alternative format, extending a deadline, offering a different assessment method. Accessible design reduces the need for accommodation by anticipating learner variability from the outset and building flexibility into the learning experience itself.
Neither accommodation nor accessible design eliminates the other. Accommodation will always be necessary for some learners in some situations. But organizations that rely primarily on accommodation as their accessibility strategy have not yet addressed the underlying design problem — they are managing barriers, not removing them. The purpose of accessibility review is to understand both what barriers exist and where they live: in the content, the technology, the workflow, or the institutional structures that shape how learning is designed and delivered.
Methodology
Effective accessibility review is structured, iterative, and multidimensional. The six stages below represent a comprehensive approach. In practice, scope and depth will vary based on context, available resources, and organizational need — but understanding the full methodology helps clarify what is gained or lost when particular stages are abbreviated or skipped.
Stage 1: Scoping and context
Before any review begins, it is essential to understand the learning environment, the learner population, and the institutional context. What types of content are in scope — web pages, LMS courses, documents, multimedia, assessments, interactive tools? Which platforms and systems are involved? Who are the learners, and what is already known about the barriers they face?
Scoping also establishes a shared understanding of what "accessible" means in this specific context, and what a realistic level of review looks like given available time and resources. This stage shapes the relevance and usefulness of everything that follows.
What this stage produces
- Shared documentation of scope and review boundaries
- Learner population context, including known barriers and assistive technology use
- Prioritized list of content and systems for review
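The scoping outputs above can be captured in a lightweight, shareable structure. The sketch below is illustrative only: the field names, example platforms, and priority scheme are assumptions, not a standard schema.

```python
# Hypothetical scope document for a review; every value here is an example.
review_scope = {
    "content_types": ["LMS courses", "PDF documents", "lecture video", "assessments"],
    "platforms": ["LMS (e.g. Moodle)", "institutional web pages"],
    "learner_context": {
        "known_assistive_tech": ["NVDA", "VoiceOver", "keyboard-only navigation"],
        "known_barriers": ["uncaptioned video", "scanned image-only PDFs"],
    },
    # Lower number = reviewed first, driven by enrolment and known complaints
    "priorities": [
        {"item": "First-year core course shells", "priority": 1},
        {"item": "Orientation microsite", "priority": 2},
        {"item": "Archived elective content", "priority": 3},
    ],
}

in_order = sorted(review_scope["priorities"], key=lambda p: p["priority"])
print([p["item"] for p in in_order])
```

Writing the scope down this explicitly, in whatever format, is what makes "what a realistic level of review looks like" a shared agreement rather than an assumption.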
Stage 2: Barrier analysis
Barrier analysis looks across four dimensions — physical and motor, cognitive, sensory, and situational — to identify where learners encounter obstacles. A key distinction is between minor friction and task-blocking barriers: not all issues carry equal weight, and effective review prioritizes those that most significantly affect access.
Equally important is locating where the barrier lives. Barriers in content are addressed through remediation. Barriers in technology platforms require vendor engagement or platform changes. Barriers in workflows or institutional policy require structural responses that no individual designer can implement alone. Content-focused audits often miss the latter two categories entirely.
What this stage produces
- Categorized barrier inventory by type (physical, cognitive, sensory, situational) and severity
- Clear distinction between access blockers and usability friction
- Identification of workflow and policy-level barriers alongside content issues
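One way to keep the barrier inventory consistent is a small typed structure that encodes the four dimensions, the blocker-versus-friction distinction, and where each barrier lives. A minimal sketch, with illustrative categories and example barriers:

```python
from dataclasses import dataclass
from enum import Enum

class Dimension(Enum):
    PHYSICAL = "physical"
    COGNITIVE = "cognitive"
    SENSORY = "sensory"
    SITUATIONAL = "situational"

class Severity(Enum):
    BLOCKER = 1   # learner cannot complete the task at all
    FRICTION = 2  # task is possible but costly or confusing

class Location(Enum):
    CONTENT = "content"    # fixable through remediation
    PLATFORM = "platform"  # needs vendor engagement or platform change
    WORKFLOW = "workflow"  # needs a structural or policy response

@dataclass
class Barrier:
    description: str
    dimension: Dimension
    severity: Severity
    location: Location

def blockers_first(inventory):
    """Order the inventory so access blockers surface before usability friction."""
    return sorted(inventory, key=lambda b: b.severity.value)

inventory = [
    Barrier("Timed quiz offers no extension option", Dimension.COGNITIVE,
            Severity.BLOCKER, Location.WORKFLOW),
    Barrier("Low-contrast link colour in course theme", Dimension.SENSORY,
            Severity.FRICTION, Location.PLATFORM),
    Barrier("Lecture video lacks captions", Dimension.SENSORY,
            Severity.BLOCKER, Location.CONTENT),
]
for b in blockers_first(inventory):
    print(b.severity.name, b.location.value, "-", b.description)
```

Recording the `location` field per barrier is what stops workflow- and platform-level issues from disappearing into a content-only remediation list.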
Stage 3: Technical audit, automated and manual
Automated tools — such as Axe, IBM Equal Access Checker, and browser-based accessibility checkers — can rapidly detect a subset of accessibility issues: missing alternative text, insufficient heading structure, colour contrast failures, and ARIA implementation errors. They are a useful and efficient starting point.
However, automated tools can only identify a portion of real barriers. Manual testing using assistive technologies — including screen readers such as NVDA, JAWS, and VoiceOver, keyboard-only navigation, and magnification tools — is essential for capturing issues that require human judgment. A result of "no automated errors detected" does not mean a page or course is accessible. Results should always be interpreted in context, not simply reported as pass or fail.
What this stage produces
- WCAG compliance mapping by criterion and content type
- Screen reader and keyboard navigation findings
- Colour contrast, visual structure, and readability analysis
- Documentation of issues requiring manual verification
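Two of the checks named above, missing alternative text and skipped heading levels, are simple enough to sketch with nothing but the Python standard library. This is an illustration of what an automated pass can and cannot see, not a substitute for a tool like Axe:

```python
from html.parser import HTMLParser

class QuickA11yScan(HTMLParser):
    """Toy scanner: flags images with no alt attribute and skipped heading levels.
    Note: alt="" is valid for decorative images, so only a missing attribute
    is flagged here; a real checker distinguishes these cases with more care."""
    def __init__(self):
        super().__init__()
        self.findings = []
        self.last_heading = 0

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.findings.append("img missing alt text: " + attrs.get("src", "?"))
        if tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
            level = int(tag[1])
            if self.last_heading and level > self.last_heading + 1:
                self.findings.append(
                    f"heading jumps from h{self.last_heading} to h{level}")
            self.last_heading = level

scanner = QuickA11yScan()
scanner.feed("""
<h1>Module 1</h1>
<img src="diagram.png">
<h4>Details</h4>
""")
for finding in scanner.findings:
    print(finding)
```

Both findings here are mechanical. Whether the alt text that eventually gets written is actually meaningful, or whether the heading structure reflects how a screen reader user navigates the page, is exactly the judgment the manual testing stage exists to supply.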
Stage 4: User-centered evaluation
Legal enforcement increasingly scrutinizes whether users can independently complete tasks on digital platforms — not just whether technical standards are met on paper. Engaging learners with disabilities in structured evaluation surfaces the gap between "technically compliant" and "actually usable."
This includes heuristic evaluation against established accessibility guidelines, task-based usability testing with participants who use assistive technologies, and systematic documentation and analysis of user feedback. Small, focused tests that examine specific tasks or interface elements typically yield more actionable insights than large, undifferentiated studies.
What this stage produces
- Task completion analysis and friction mapping
- Heuristic evaluation findings from disability-centered perspectives
- Documented user feedback with analysis
- Identification of usability gaps not captured by technical audit
Stage 5: Learning design and UDL assessment
A technically accessible course can still exclude learners if its learning design is inflexible or its assessments assume a narrow range of abilities. This stage evaluates whether the learning experience offers genuine multiple means across all three UDL dimensions: Engagement (the why of learning), Representation (the what), and Action and Expression (the how).
It also assesses whether assessment tasks are designed equitably — whether the format of assessment is measuring the intended learning outcome or inadvertently measuring a learner's ability to perform under specific conditions that have nothing to do with the outcome being assessed. This is distinct from whether assessment platforms pass a technical checklist.
What this stage produces
- UDL alignment map across all three dimensions
- Assessment equity analysis identifying format-based barriers
- Recommendations for flexible pathways and varied representation
- Identification of where inflexibility creates barriers for specific learner groups
Stage 6: Reporting and remediation planning
Findings are only useful if they can be acted on. Effective reporting translates technical and learning design findings into a prioritized remediation roadmap — distinguishing quick wins from longer-term structural changes, and connecting recommendations to real governance, workflow, and procurement contexts.
Reports should serve multiple audiences with different needs. Technical teams need specific, reproducible guidance. Learning designers need design principles and concrete alternatives. Leadership needs strategic framing, risk context, and understanding of what systemic changes are required. A single report that attempts to serve all three audiences typically serves none of them well.
What this stage produces
- Prioritized remediation roadmap with effort and impact framing
- Differentiated outputs for technical, design, and leadership audiences
- Connection of findings to governance, policy, and procurement contexts
- Framework for continuous improvement, not one-time remediation
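The quick-win-versus-structural-change distinction can be made explicit with a simple impact/effort quadrant. The scores, labels, and thresholds below are illustrative assumptions, not a standard rating scale:

```python
# Hypothetical findings, each rated 1 (low) to 5 (high) on impact and effort.
findings = [
    {"issue": "Missing captions on orientation video",   "impact": 5, "effort": 2},
    {"issue": "LMS date picker unusable by keyboard",    "impact": 5, "effort": 5},
    {"issue": "Decorative images lack empty alt",        "impact": 2, "effort": 1},
    {"issue": "Procurement has no accessibility clause", "impact": 4, "effort": 4},
]

def quadrant(f):
    """Classify a finding by the classic impact/effort quadrant."""
    high_impact = f["impact"] >= 4
    low_effort = f["effort"] <= 2
    if high_impact and low_effort:
        return "quick win"
    if high_impact:
        return "structural change"
    if low_effort:
        return "fill-in"
    return "reassess"

# Surface quick wins first, then the longer-term structural work.
order = {"quick win": 0, "structural change": 1, "fill-in": 2, "reassess": 3}
for f in sorted(findings, key=lambda f: order[quadrant(f)]):
    print(quadrant(f), "-", f["issue"])
```

A classification like this is a communication device, not an algorithm: its value is in forcing the report to say explicitly which items a designer can fix this week and which require leadership, vendors, or policy change.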
Key Decisions That Shape a Review
No two accessibility reviews look the same. The decisions an organization makes about scope, depth, timing, and involvement determine what kind of review is appropriate and what it will produce. Understanding these dimensions helps set realistic expectations — and make better use of the review process.
- Scope: what is being reviewed
- Depth: how thorough the review is
- Learner involvement: who participates in evaluation
- Timing: when the review happens
- Focus area: what kind of review it primarily is
On rapid versus comprehensive approaches: Rapid assessments trade completeness for speed and can be genuinely useful when resources are constrained. The important thing is clarity about what a rapid assessment can and cannot detect — and treating its findings as a starting point rather than a complete picture of the accessibility landscape.
Learner Impact
Accessibility review creates change at two different horizons. The shorter-term outcomes are tangible and measurable. The longer-term shift is cultural — and has a broader effect on how an organization designs and delivers learning for everyone.
Shorter-term outcomes: what becomes visible and fixable
| Outcome | What this looks like in practice |
|---|---|
| **Barriers are named and prioritized.** A clear picture of what is blocking learners, organized by severity and remediation effort. | Teams move from vague awareness of "accessibility issues" to a specific, actionable inventory they can work through systematically. |
| **Immediate issues are remediated.** Common barriers — missing captions, poor contrast, absent alternative text, inaccessible documents — are addressed once identified. | Many high-impact, low-effort fixes are implemented quickly. Learners who were previously excluded gain access to content they were being denied. |
| **Compliance is documented.** A formal review and remediation roadmap provide evidence of due diligence for regulatory and governance purposes. | Organizations can demonstrate to regulatory bodies, leadership, and learners that they are taking accessibility obligations seriously and acting on them. |
| **Team knowledge grows.** Designers, developers, and faculty develop practical accessibility literacy through engagement with the review process. | The review itself becomes a capacity-building experience. Teams emerge better equipped to identify and address issues in future work. |
Longer-term change: what becomes embedded in practice
| Outcome | What this looks like in practice |
|---|---|
| **Accessibility becomes a design habit.** Rather than a retrofit, accessible design is incorporated from the beginning of content development. | Future remediation costs decrease significantly. Designers stop asking "is this accessible?" at the end of a project and start making accessibility-informed decisions throughout. |
| **All learners benefit.** Better structure, clearer language, flexible formats, and consistent navigation improve the experience for every learner — not only those with disabilities. | The "curb cut effect" in learning: features designed for learners with disabilities improve usability, comprehension, and engagement for the whole cohort. |
| **Institutional culture shifts.** The focus moves from individual accommodation toward proactive inclusion — affecting procurement, faculty development, and organizational identity. | Accessibility stops being "someone else's job" and becomes a shared expectation embedded in how the organization hires, trains, procures, and governs. |
| **Governance and procurement improve.** Organizations develop the literacy to require and evaluate accessibility in vendor contracts and new platform selection. | Accessibility requirements appear in RFPs. Platform evaluations include assistive technology testing. Vendors are held to explicit standards as a condition of partnership. |
On the evidence for accessibility and learning outcomes: Research consistently shows that accessibility improvements support retention, participation, and equity — not just compliance. The strongest case for accessibility investment to organizational leadership is not the legal argument alone, but the evidence that inclusive design produces better learning environments for all learners.
Legal and Regulatory Context in Canada and BC
Understanding the legal context helps organizations make sense of their obligations, the gaps in current enforcement, and why the direction of travel matters even where specific technical requirements are still developing. The following is intended as an orientation to the landscape, not legal advice.
The Accessible Canada Act (ACA) establishes a framework for a barrier-free Canada by 2040 across federally regulated sectors, including broadcasting, telecommunications, banking, and federal government services. Organizations subject to the Act must publish accessibility plans, provide public feedback mechanisms, and report on progress annually.
The ACA has been praised for its ambition but also critiqued for relatively weak enforcement mechanisms and a lack of specific technical standards or binding timelines. Non-compliance can result in penalties reaching $250,000. Enforcement is managed through sector-specific regulatory bodies including the Canadian Radio-television and Telecommunications Commission (CRTC) and the Canadian Human Rights Commission.
The Canadian Accessibility Standards Development Organization (CASDO) — whose board is required by legislation to include a majority of people with disabilities — is responsible for developing specific technical standards, including those related to digital information and communications technology. CASDO's work will produce more specific requirements over time.
The Accessible British Columbia Act applies to the BC provincial government, public sector organizations, educational institutions, and school districts. Organizations in scope must form accessibility committees, develop accessibility plans, and create mechanisms for public feedback on those plans.
Current standards focus on employment accessibility and accessible service delivery. The Act does not yet specify technical web accessibility standards, but future regulations are expected to incorporate established guidelines such as WCAG. British Columbia post-secondary institutions are explicitly within scope of the Act.
Amendments to the Private Training Act and the establishment of Accessible Education and Training (AET) programs at public post-secondary institutions reflect the Act's particular attention to educational settings and the specific needs of students with cognitive disabilities.
The gap between legislation and practice
Both the ACA and the Accessible BC Act have been critiqued by disability advocates for the absence of specific enforceable technical standards and clear implementation timelines. In practice, organizations currently have significant latitude in how they interpret their accessibility obligations — but the regulatory direction is clearly toward greater specificity over time.
Understanding this landscape helps organizations make informed decisions — not just about minimum compliance, but about where the field is heading and how to build genuinely inclusive learning environments within that context. Organizations that engage proactively with accessibility tend to be better positioned as regulatory requirements tighten.
On the relationship between legal compliance and genuine accessibility: Meeting current legal requirements does not guarantee that learning environments are genuinely accessible. The gap between minimum compliance and meaningful inclusion is where most of the real work in educational accessibility lives. Legal frameworks provide a floor, not a ceiling.
Organizational Reflection
The questions below are designed to help you think through your organization's current relationship with accessibility. They are not a formal assessment — there are no scores or categories. They are intended to surface useful questions and help identify where the most productive focus might be for your context.
Take your time with these. The most useful answers are the honest ones, not the aspirational ones. You may find it helpful to work through them with colleagues who have different roles and perspectives within your organization.
On your current knowledge of barriers
- What do you actually know about the accessibility barriers your learners currently face? How did you find out, and how confident are you in that picture?
- Are there learner groups whose accessibility experience you know relatively little about? What would it take to understand it better?
- Where in your organization do accessibility barriers most likely go unreported — and why?
On your design and development practice
- At what point in your content development process does accessibility currently get considered? What happens as a result of that consideration?
- Where does accessibility get dropped — at the point of production, during review, or in the handoff between teams? What does that tell you about where the gap is?
- Are the people responsible for accessibility in your organization supported with the time, tools, and authority they need to do the work well?
On institutional structures and governance
- Does your organization have documented accessibility policies or an accessibility plan? If so, how connected are they to day-to-day practice?
- Where does accessibility still get treated as someone else's responsibility — in procurement decisions, vendor evaluation, curriculum governance, or platform selection?
- What would need to change — structurally, not just culturally — for accessibility to be consistently embedded in how your organization operates?
On capacity and sustainability
- How are you building accessibility expertise across your organization, so progress doesn't depend on one or two individuals?
- How do you know whether your accessibility efforts are actually improving outcomes for learners with disabilities? What evidence do you have, and what's missing?
- What would a realistic next step look like for your organization — given your current resources, constraints, and the people who have the knowledge and authority to move things forward?
These questions don't have quick answers, and that's the point. The organizations that make the most meaningful progress on accessibility tend to be those that create space for honest conversations about where the gaps actually are — not where they'd like them to be. The other sections of this resource are intended to provide useful context for those conversations.
Sources and Further Reading
The following sources ground the frameworks, methodology, and evidence base presented in this resource. They are organized by the knowledge area they most directly support, though many span multiple domains.
Universal Design for Learning
The foundational framework providing principles for flexible learning environments that address learner variability. The UDL Guidelines are the primary scholarly-practice bridge for inclusive learning design and the basis for evaluating whether learning experiences offer multiple means of engagement, representation, and action and expression. Available at udlguidelines.cast.org.
A meta-analysis confirming UDL's positive impact on diverse learners across educational settings. Useful when making the evidence-based case for proactive inclusive design over accommodation-based approaches.
Synthesizes UDL research evidence across the literature, useful for understanding the breadth of application and the state of the field as it matured into mainstream instructional practice.
Connects universal design and UDL to systemic inclusivity in higher education, grounding the frameworks in broader institutional and equity contexts rather than treating them as purely instructional tools.
Inclusive Design
Outlines practices for creating adaptable digital learning content. Complements both UDL and WCAG by situating inclusive design within broader questions of open education, content flexibility, and equitable access across diverse learner populations and contexts.
Accessible Course and Platform Design
Links WCAG technical standards with UDL principles specifically in the context of online and blended learning environments. Practically useful for reviews that span both the technical audit and the learning design assessment stages of the methodology.
Professional Standards and Audit Framework
The clearest cross-disciplinary foundation for accessibility review competency, integrating disability knowledge, universal design, and management systems in a single framework. For technical review, pair with the WAS Body of Knowledge and WCAG 2.1/2.2. Available at accessibilityassociation.org.
A practical note on using these sources together: For accessibility reviews, use UDL to evaluate whether learning experiences offer genuine multiple means of engagement, representation, and expression. Use WCAG to audit the digital content and platforms. Use inclusive design scholarship to assess the organization's systems and processes. For reporting to leadership, pair standards-based findings with the evidence base showing that accessibility supports retention, participation, and equity — not only legal compliance.