Accessible Learning Labs — Knowledge Resource

Understanding Accessibility Review in Learning Environments

A guide to what rigorous accessibility review involves — its knowledge foundations, methodology, key decision points, and what it means for learners over time.

Screen reader and keyboard-optimised version. This is a fully linear, plain-structure version of this resource designed for use with assistive technologies. All content is presented in reading order with no tabs, accordions, or JavaScript. An interactive version with tabs and expandable sections is also available.

Knowledge Foundations

Accessibility review is not a single skill or a checklist exercise. It draws on four interconnected bodies of knowledge, and the most effective reviews hold all four in view simultaneously. Understanding these foundations helps clarify why rigorous accessibility review requires more than running an automated scan — and why the findings from a thorough review are meaningfully different from a surface-level compliance check.

The four knowledge pillars

Each pillar represents a distinct area of expertise that contributes to a complete review. In practice, they are deeply interconnected: you cannot do strong barrier analysis without disability knowledge; you cannot evaluate a learning experience without UDL fluency; you cannot connect findings to action without understanding governance and law.

1. Disability and learner variability

Effective accessibility review begins with understanding disability — not as a fixed category, but as a relationship between a person and their environment. This includes familiarity with major disability categories (visual, auditory, motor, cognitive, neurological, and situational), how assistive technologies work and what they require from content, and how the social model of disability shifts the focus from "fixing the person" to "fixing the environment." Understanding learner variability also means recognizing that many barriers affect people without formal disability designations — including those with temporary impairments, older learners, or people accessing content in constrained environments.

2. Universal Design for Learning (UDL) and inclusive design

Universal Design for Learning is a framework developed by CAST that guides the design of learning experiences to accommodate variability from the outset. It is built around three principles:

  • Multiple means of Engagement — the why of learning; recruiting interest, sustaining effort, enabling self-regulation
  • Multiple means of Representation — the what of learning; offering content through different modalities and formats
  • Multiple means of Action and Expression — the how of learning; allowing learners to demonstrate knowledge in varied ways

UDL is distinct from, but complementary to, technical accessibility standards. WCAG tells you whether a video has captions; UDL asks whether the learning experience gives all learners genuine ways to engage with, access, and demonstrate the knowledge it contains.

3. Technical accessibility standards

The Web Content Accessibility Guidelines (WCAG), currently at versions 2.1 and 2.2, provide the technical baseline for digital accessibility. They are organized around four principles — content must be Perceivable, Operable, Understandable, and Robust (POUR) — and apply to web content, LMS platforms, digital documents, multimedia, and interactive learning objects. Standards fluency means being able to interpret and apply criteria in context, not simply run automated checks and report pass/fail results.
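To make one POUR criterion concrete, here is a minimal Python sketch of the WCAG contrast-ratio calculation (success criterion 1.4.3, which requires at least 4.5:1 for normal body text at Level AA). The formula for relative luminance follows the WCAG 2.x definition; the function names themselves are illustrative, not part of any standard tooling.

```python
def _channel(c: int) -> float:
    """Linearize one 8-bit sRGB channel per the WCAG 2.x relative-luminance formula."""
    s = c / 255
    return s / 12.92 if s <= 0.03928 else ((s + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """Relative luminance of an sRGB colour: L = 0.2126 R + 0.7152 G + 0.0722 B."""
    r, g, b = (_channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """WCAG contrast ratio (L_lighter + 0.05) / (L_darker + 0.05), from 1:1 to 21:1."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white background yields the maximum ratio of 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

Note that this is exactly the kind of check automation handles well; judging whether a colour choice works for a specific learner population in a specific interface remains a human task.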

4. Laws, governance, and institutional context

Technical findings only create lasting change when they connect to real accountability structures. This requires understanding relevant legislation (discussed in detail in Section 5), but also institutional governance: who has authority to require accessibility in procurement, which workflows create barriers that no individual designer can fix, and how accessibility responsibilities are — or aren't — distributed across an organization. Reviews that ignore institutional context produce findings that sit in reports without generating change.

A practical hierarchy for using these frameworks together: Use WCAG to audit the artifact. Use UDL to evaluate the learning experience. Use inclusive design principles to assess the system. Strong accessibility review holds all three lenses simultaneously — and situates findings within the institutional context in which they need to be acted on.

What distinguishes expertise from checklist compliance

Automated accessibility tools can detect a meaningful subset of issues: missing alternative text, poor heading structure, insufficient colour contrast, ARIA implementation errors. These are genuinely useful starting points. But automated tools can only identify approximately 30 to 40 percent of real accessibility barriers — the rest require human judgment.

What automated tools cannot assess includes: whether a learning experience is equitably designed; whether assessment tasks assume a narrow range of abilities; whether the interaction patterns in an LMS are usable with a screen reader in practice (as opposed to technically passing); or whether the barriers that most affect a specific learner population are actually addressed. The gap between "technically passes" and "actually works for everyone" is where human expertise is irreplaceable.

Accommodation versus design: an important distinction

Accommodation addresses a specific individual's access need after the fact — providing an alternative format, extending a deadline, offering a different assessment method. Accessible design reduces the need for accommodation by anticipating learner variability from the outset and building flexibility into the learning experience itself.

Neither accommodation nor accessible design eliminates the other. Accommodation will always be necessary for some learners in some situations. But organizations that rely primarily on accommodation as their accessibility strategy have not yet addressed the underlying design problem — they are managing barriers, not removing them. The purpose of accessibility review is to understand both what barriers exist and where they live: in the content, the technology, the workflow, or the institutional structures that shape how learning is designed and delivered.

Methodology

Effective accessibility review is structured, iterative, and multidimensional. The six stages below represent a comprehensive approach. In practice, scope and depth will vary based on context, available resources, and organizational need — but understanding the full methodology helps clarify what is gained or lost when particular stages are abbreviated or skipped.

Stage 1
Intake and Scoping
Understanding the context before assessing it

Before any review begins, it is essential to understand the learning environment, the learner population, and the institutional context. What types of content are in scope — web pages, LMS courses, documents, multimedia, assessments, interactive tools? Which platforms and systems are involved? Who are the learners, and what is already known about the barriers they face?

Scoping also establishes a shared understanding of what "accessible" means in this specific context, and what a realistic level of review looks like given available time and resources. This stage shapes the relevance and usefulness of everything that follows.

What this stage produces

  • Shared documentation of scope and review boundaries
  • Learner population context, including known barriers and assistive technology use
  • Prioritized list of content and systems for review

Note: Skipping or rushing this stage is one of the most common reasons accessibility reviews produce findings that cannot be acted on — because they are not connected to the actual context of the organization.

Stage 2
Barrier Analysis
Identifying what blocks participation — and where

Barrier analysis looks across four dimensions — physical and motor, cognitive, sensory, and situational — to identify where learners encounter obstacles. A key distinction is between minor friction and task-blocking barriers: not all issues carry equal weight, and effective review prioritizes those that most significantly affect access.

Equally important is locating where the barrier lives. Barriers in content are addressed through remediation. Barriers in technology platforms require vendor engagement or platform changes. Barriers in workflows or institutional policy require structural responses that no individual designer can implement alone. Content-focused audits often miss the latter two categories entirely.
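The categorization above — barrier type, severity, and where the barrier lives — can be modelled as a small inventory structure. A minimal Python sketch under assumed names (Barrier, blockers_first, and the enum values are illustrative, not drawn from any standard audit tool):

```python
from dataclasses import dataclass
from enum import Enum

class Dimension(Enum):
    PHYSICAL = "physical"
    COGNITIVE = "cognitive"
    SENSORY = "sensory"
    SITUATIONAL = "situational"

class Severity(Enum):
    FRICTION = 1   # usability friction: slows learners down
    BLOCKER = 2    # task-blocking: prevents completion outright

class Locus(Enum):
    CONTENT = "content"     # addressed through remediation
    PLATFORM = "platform"   # requires vendor engagement or platform change
    WORKFLOW = "workflow"   # requires a structural or policy response

@dataclass
class Barrier:
    description: str
    dimension: Dimension
    severity: Severity
    locus: Locus

def blockers_first(inventory: list[Barrier]) -> list[Barrier]:
    """Order findings so task-blocking barriers surface before friction."""
    return sorted(inventory, key=lambda b: b.severity.value, reverse=True)
```

Recording the locus alongside type and severity is what keeps platform- and workflow-level barriers from disappearing into a content-only punch list.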

What this stage produces

  • Categorized barrier inventory by type (physical, cognitive, sensory, situational) and severity
  • Clear distinction between access blockers and usability friction
  • Identification of workflow and policy-level barriers alongside content issues

Note: Multidisciplinary barrier analysis — involving educators, technologists, and learners with disabilities — consistently produces more actionable findings than single-discipline reviews conducted in isolation.
Stage 3
Technical Standards Audit
Automated and manual testing against WCAG

Automated tools — such as Axe, IBM Equal Access Checker, and browser-based accessibility checkers — can rapidly detect a subset of accessibility issues: missing alternative text, insufficient heading structure, colour contrast failures, and ARIA implementation errors. They are a useful and efficient starting point.

However, automated tools can only identify a portion of real barriers. Manual testing using assistive technologies — including screen readers such as NVDA, JAWS, and VoiceOver, keyboard-only navigation, and magnification tools — is essential for capturing issues that require human judgment. A result of "no automated errors detected" does not mean a page or course is accessible. Results should always be interpreted in context, not simply reported as pass or fail.
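As an illustration of the kind of check automated tools perform, here is a minimal Python sketch that flags images missing an alt attribute, using only the standard library. It is a toy example of one automatable check, not a substitute for the tools named above:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flag <img> tags that lack an alt attribute entirely.

    An empty alt="" is allowed: it marks an image as decorative, which is
    valid under WCAG. A *missing* alt attribute is what fails the check.
    """
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            self.missing_alt.append(attributes.get("src", "(no src)"))

checker = AltTextChecker()
checker.feed('<img src="chart.png"><img src="divider.png" alt="">')
print(checker.missing_alt)  # ['chart.png']
```

Whether `chart.png` then receives *useful* alternative text — text that conveys what the chart actually shows a learner — is precisely the judgment no automated check can make.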

What this stage produces

  • WCAG compliance mapping by criterion and content type
  • Screen reader and keyboard navigation findings
  • Colour contrast, visual structure, and readability analysis
  • Documentation of issues requiring manual verification

Note: Automated tools detect roughly 30 to 40 percent of real accessibility issues. A review that relies on automation alone will miss the majority of what affects learners in practice.
Stage 4
User-Centered Evaluation
Centering the lived experience of disabled learners

Legal enforcement increasingly scrutinizes whether users can independently complete tasks on digital platforms — not just whether technical standards are met on paper. Engaging learners with disabilities in structured evaluation surfaces the gap between "technically compliant" and "actually usable."

This includes heuristic evaluation against established accessibility guidelines, task-based usability testing with participants who use assistive technologies, and systematic documentation and analysis of user feedback. Small, focused tests that examine specific tasks or interface elements typically yield more actionable insights than large, undifferentiated studies.
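A sketch of how task-based session data might be summarized into completion rates and a tally of friction points. The session fields (task, completed, friction) are illustrative assumptions, not a standard schema:

```python
from collections import Counter, defaultdict

def completion_summary(sessions):
    """Summarize task-based usability sessions.

    Each session is a dict: {'task': str, 'completed': bool, 'friction': [str, ...]}.
    Returns per-task completion rates and a tally of observed friction points.
    """
    totals = defaultdict(int)
    completed = defaultdict(int)
    friction = Counter()
    for s in sessions:
        totals[s["task"]] += 1
        completed[s["task"]] += s["completed"]  # True counts as 1
        friction.update(s["friction"])
    rates = {task: completed[task] / totals[task] for task in totals}
    return rates, friction
```

Even this simple aggregation makes the gap visible: a page can pass every automated check while its core task shows a 50 percent completion rate for screen reader users.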

What this stage produces

  • Task completion analysis and friction mapping
  • Heuristic evaluation findings from disability-centered perspectives
  • Documented user feedback with analysis
  • Identification of usability gaps not captured by technical audit

Note: Experiences that meet technical compliance standards but remain unusable with screen readers or other assistive technologies have been frequently cited in accessibility complaints and legal settlements. Technical compliance and genuine usability are not the same thing.
Stage 5
Learning Design and UDL Assessment
Is the learning experience itself inclusive?

A technically accessible course can still exclude learners if its learning design is inflexible or its assessments assume a narrow range of abilities. This stage evaluates whether the learning experience offers genuine multiple means across all three UDL dimensions: Engagement (the why of learning), Representation (the what), and Action and Expression (the how).

It also assesses whether assessment tasks are designed equitably — whether the format of assessment is measuring the intended learning outcome or inadvertently measuring a learner's ability to perform under specific conditions that have nothing to do with the outcome being assessed. This is distinct from whether assessment platforms pass a technical checklist.

What this stage produces

  • UDL alignment map across all three dimensions
  • Assessment equity analysis identifying format-based barriers
  • Recommendations for flexible pathways and varied representation
  • Identification of where inflexibility creates barriers for specific learner groups

Note: Traditional assessment methods often conflict with UDL principles, favouring standardized formats that measure performance under specific conditions rather than mastery of learning outcomes. An equity-oriented review examines whether assessment design is a barrier in itself.
Stage 6
Reporting and Remediation Planning
Findings that can actually be acted on

Findings are only useful if they can be acted on. Effective reporting translates technical and learning design findings into a prioritized remediation roadmap — distinguishing quick wins from longer-term structural changes, and connecting recommendations to real governance, workflow, and procurement contexts.

Reports should serve multiple audiences with different needs. Technical teams need specific, reproducible guidance. Learning designers need design principles and concrete alternatives. Leadership needs strategic framing, risk context, and understanding of what systemic changes are required. A single report that attempts to serve all three audiences typically serves none of them well.
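The "quick wins versus longer-term structural changes" distinction is often expressed as a simple effort/impact matrix. A minimal Python sketch, with bucket names and rating fields chosen purely for illustration:

```python
def triage(findings):
    """Bucket findings into a simple effort/impact matrix.

    Each finding is a dict with 'name', plus 'impact' and 'effort'
    rated 'low' or 'high' (illustrative fields, not a standard schema).
    """
    buckets = {"quick win": [], "strategic": [], "fill-in": [], "reconsider": []}
    for f in findings:
        if f["impact"] == "high":
            key = "quick win" if f["effort"] == "low" else "strategic"
        else:
            key = "fill-in" if f["effort"] == "low" else "reconsider"
        buckets[key].append(f["name"])
    return buckets
```

The value of even a crude matrix like this is sequencing: quick wins build momentum and restore access immediately, while strategic items get the governance attention they need rather than being quietly deferred.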

What this stage produces

  • Prioritized remediation roadmap with effort and impact framing
  • Differentiated outputs for technical, design, and leadership audiences
  • Connection of findings to governance, policy, and procurement contexts
  • Framework for continuous improvement, not one-time remediation

Note: Treating accessibility as a one-time audit rather than a continuous practice is one of the most common failures in organizational accessibility work. Barriers accumulate over time as content is created, platforms are updated, and contexts change. Periodic review prevents the buildup of unresolved issues.

Key Decisions That Shape a Review

No two accessibility reviews look the same. The decisions an organization makes about scope, depth, timing, and involvement determine what kind of review is appropriate and what it will produce. Understanding these dimensions helps set realistic expectations — and make better use of the review process.

Scope: what is being reviewed

Spot audit
A single course, module, document, or platform feature. Useful for prioritization or addressing a known concern. Limited in what it reveals about systemic patterns.
Program review
A full learning pathway or credential. Reveals patterns across content types and surfaces inconsistencies in how accessibility is applied across a curriculum.
Institutional review
Systems, policies, and practices organization-wide. Required to address workflow, governance, and procurement-level barriers that individual content reviews cannot reach.

Depth: how thorough the review is

Rapid assessment
Key barriers identified efficiently, useful for initial prioritization. Trades methodological completeness for speed. Findings should be treated as a starting point, not a complete picture.
Comprehensive audit
Full WCAG, UDL, and user-centered evaluation across all six stages. Produces the most complete findings but requires more time and resources.
Ongoing review
Iterative review embedded in the design and development cycle. The most effective long-term approach, as it prevents accumulation of barriers rather than addressing them retrospectively.

Learner involvement: who participates in evaluation

Expert review only
Specialist assessment without direct user testing. Efficient and appropriate for many contexts, but misses the gap between expert prediction and actual learner experience.
Expert plus user testing
Structured participation from learners with disabilities alongside expert review. Surfaces issues that expert review alone does not capture. Requires careful ethical and logistical planning.
Co-design
Learners with disabilities involved in shaping remediation and redesign decisions. Produces the most learner-centered outcomes and builds genuine community trust.

Timing: when the review happens

Pre-development
Informing design decisions before content is built. Least costly and most effective approach — accessibility built in from the start.
Pre-launch
Catching issues before content goes live. More costly than pre-development review but significantly less costly than post-launch remediation.
Post-launch
Addressing barriers in existing content. Most commonly needed but most costly to address. Often reveals the need for both content remediation and systemic change.

Focus area: what kind of review it primarily is

Digital content
Web pages, LMS content, documents, multimedia, and assessments. The most common starting point for accessibility review in educational settings.
Learning design
UDL alignment, assessment equity, and the overall flexibility of the learning experience. Extends beyond technical compliance to equity in learning design.
Systems and governance
Institutional policy, procurement practices, and organizational workflows. Required for systemic and sustainable change.

On rapid versus comprehensive approaches: Rapid assessments trade completeness for speed and can be genuinely useful when resources are constrained. The important thing is clarity about what a rapid assessment can and cannot detect — and treating its findings as a starting point rather than a complete picture of the accessibility landscape.

Learner Impact

Accessibility review creates change at two different horizons. The shorter-term outcomes are tangible and measurable. The longer-term shift is cultural — and has a broader effect on how an organization designs and delivers learning for everyone.

Shorter-term outcomes: what becomes visible and fixable

Outcomes typically achievable within the review cycle and immediate remediation period
Barriers are named and prioritized
A clear picture of what is blocking learners, organized by severity and remediation effort. Teams move from vague awareness of "accessibility issues" to a specific, actionable inventory they can work through systematically.

Immediate issues are remediated
Common barriers — missing captions, poor contrast, absent alternative text, inaccessible documents — addressed once identified. Many high-impact, low-effort fixes are implemented quickly. Learners who were previously excluded gain access to content they were being denied.

Compliance is documented
A formal review and remediation roadmap provide evidence of due diligence for regulatory and governance purposes. Organizations can demonstrate to regulatory bodies, leadership, and learners that they are taking accessibility obligations seriously and acting on them.

Team knowledge grows
Designers, developers, and faculty develop practical accessibility literacy through engagement with the review process. The review itself becomes a capacity-building experience. Teams emerge better equipped to identify and address issues in future work.

Longer-term change: what becomes embedded in practice

Outcomes that develop over repeated review cycles and sustained organizational commitment
Accessibility becomes a design habit
Rather than a retrofit, accessible design is incorporated from the beginning of content development. Future remediation costs decrease significantly. Designers stop asking "is this accessible?" at the end of a project and start making accessibility-informed decisions throughout.

All learners benefit
Better structure, clearer language, flexible formats, and consistent navigation improve the experience for every learner — not only those with disabilities. The "curb cut effect" in learning: features designed for learners with disabilities improve usability, comprehension, and engagement for the whole cohort.

Institutional culture shifts
The focus moves from individual accommodation toward proactive inclusion — affecting procurement, faculty development, and organizational identity. Accessibility stops being "someone else's job" and becomes a shared expectation embedded in how the organization hires, trains, procures, and governs.

Governance and procurement improve
Organizations develop the literacy to require and evaluate accessibility in vendor contracts and new platform selection. Accessibility requirements appear in RFPs. Platform evaluations include assistive technology testing. Vendors are held to explicit standards as a condition of partnership.

On the evidence for accessibility and learning outcomes: Research consistently shows that accessibility improvements support retention, participation, and equity — not just compliance. The strongest case for accessibility investment to organizational leadership is not the legal argument alone, but the evidence that inclusive design produces better learning environments for all learners.

Legal and Regulatory Context

Understanding the legal context helps organizations make sense of their obligations, the gaps in current enforcement, and why the direction of travel matters even where specific technical requirements are still developing. The following is intended as an orientation to the landscape, not legal advice.

The gap between legislation and practice

Both the Accessible Canada Act (ACA) and the Accessible BC Act have been critiqued by disability advocates for the absence of specific enforceable technical standards and clear implementation timelines. In practice, organizations currently have significant latitude in how they interpret their accessibility obligations — but the regulatory direction is clearly toward greater specificity over time.

Understanding this landscape helps organizations make informed decisions — not just about minimum compliance, but about where the field is heading and how to build genuinely inclusive learning environments within that context. Organizations that engage proactively with accessibility tend to be better positioned as regulatory requirements tighten.

On the relationship between legal compliance and genuine accessibility: Meeting current legal requirements does not guarantee that learning environments are genuinely accessible. The gap between minimum compliance and meaningful inclusion is where most of the real work in educational accessibility lives. Legal frameworks provide a floor, not a ceiling.

Organizational Reflection

The questions below are designed to help you think through your organization's current relationship with accessibility. They are not a formal assessment — there are no scores or categories. They are intended to surface useful questions and help identify where the most productive focus might be for your context.

Take your time with these. The most useful answers are the honest ones, not the aspirational ones. You may find it helpful to work through them with colleagues who have different roles and perspectives within your organization.

On your current knowledge of barriers

  • What do you actually know about the accessibility barriers your learners currently face? How did you find out, and how confident are you in that picture?
  • Are there learner groups whose accessibility experience you know relatively little about? What would it take to understand it better?
  • Where in your organization do accessibility barriers most likely go unreported — and why?

On your design and development practice

  • At what point in your content development process does accessibility currently get considered? What happens as a result of that consideration?
  • Where does accessibility get dropped — at the point of production, during review, or in the handoff between teams? What does that tell you about where the gap is?
  • Are the people responsible for accessibility in your organization supported with the time, tools, and authority they need to do the work well?

On institutional structures and governance

  • Does your organization have documented accessibility policies or an accessibility plan? If so, how connected are they to day-to-day practice?
  • Where does accessibility still get treated as someone else's responsibility — in procurement decisions, vendor evaluation, curriculum governance, or platform selection?
  • What would need to change — structurally, not just culturally — for accessibility to be consistently embedded in how your organization operates?

On capacity and sustainability

  • How are you building accessibility expertise across your organization, so progress doesn't depend on one or two individuals?
  • How do you know whether your accessibility efforts are actually improving outcomes for learners with disabilities? What evidence do you have, and what's missing?
  • What would a realistic next step look like for your organization — given your current resources, constraints, and the people who have the knowledge and authority to move things forward?

These questions don't have quick answers, and that's the point. The organizations that make the most meaningful progress on accessibility tend to be those that create space for honest conversations about where the gaps actually are — not where they'd like them to be. The other sections of this resource are intended to provide useful context for those conversations.

Sources and Further Reading

The following sources ground the frameworks, methodology, and evidence base presented in this resource. They are organized by the knowledge area they most directly support, though many span multiple domains.

Universal Design for Learning

Inclusive Design

Accessible Course and Platform Design

Professional Standards and Audit Framework

A practical note on using these sources together: For accessibility reviews, use UDL to evaluate whether learning experiences offer genuine multiple means of engagement, representation, and expression. Use WCAG to audit the digital content and platforms. Use inclusive design scholarship to assess the organization's systems and processes. For reporting to leadership, pair standards-based findings with the evidence base showing that accessibility supports retention, participation, and equity — not only legal compliance.
