Section 1
Knowledge Foundations
Inclusive course and curriculum design is not a method applied to content — it is a stance embedded in every decision made during the design process. The knowledge it draws on spans learning theory, equity-oriented pedagogy, accessibility standards, and institutional systems. Understanding what underpins this work clarifies why it cannot be reduced to a checklist, and why getting it right requires more than good intentions.
The knowledge pillars
Rigorous inclusive design draws on several interconnected bodies of knowledge. None of them is sufficient on its own. In practice, they work together — the frameworks providing structure, the pedagogy providing values, and the systems knowledge ensuring that changes are durable.
Learning design theory
Backward design and constructive alignment — beginning with learning outcomes and working backward to assessments and activities — are not just efficiency tools. When used well, they are equity tools: they force the question of whose knowledge counts, what evidence of learning looks like, and whether the design serves the full range of learners in the room, not a hypothetical average.
Inclusive and anti-oppressive pedagogy
Understanding how race, class, gender, disability, language, and other social positions shape access to learning — and how curriculum content, representation, and classroom dynamics can reinforce or disrupt those patterns. This includes actively critiquing hidden bias and deficit assumptions in existing curricula, and designing from a stance that centres equity rather than neutrality.
Decolonizing curriculum
Examining whose knowledge, whose epistemologies, and whose ways of knowing are treated as authoritative — and whose are absent, tokenized, or subordinated. Decolonizing curriculum is not a one-time addition of diverse authors; it is a structural rethinking of what counts as expertise, evidence, and valid ways of demonstrating competence.
Universal Design for Learning
UDL — developed by CAST — offers a practical framework for building flexibility into learning from the outset. Its three principles call for multiple means of engagement (why learners engage), representation (what they engage with), and action and expression (how they demonstrate what they know). The core commitment: design for learner variability, not the mythical average student.
Learner variability and differentiated instruction
Learners bring different prior experiences, languages, cognitive profiles, cultural frameworks, and socio-emotional needs. Designing for variability means building in multiple pathways, not just one pathway with accommodations added afterward. This includes understanding how neurodiversity, disability, and situational factors shape how learning happens — and designing accordingly.
Digital accessibility
WCAG-aligned practices — captioned media, alternative text, readable document structure, keyboard-navigable interfaces — are the technical floor of accessible learning design, not the ceiling. Accessibility standards apply to every piece of digital content produced in a course: slides, documents, videos, assessments, and LMS pages. They are built into design from the start, not added at the end.
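Some of this technical floor can be checked automatically. As a deliberately minimal illustration (not a production audit tool, and no substitute for manual review with assistive technologies), the sketch below uses Python's standard-library HTML parser to flag two common WCAG failures: images without alternative text, and heading levels that skip. The class name and sample HTML are invented for illustration.

```python
from html.parser import HTMLParser

class AccessibilityLinter(HTMLParser):
    """Minimal sketch: flag two common WCAG failures in an HTML fragment:
    <img> elements with no alt attribute, and skipped heading levels."""

    def __init__(self):
        super().__init__()
        self.issues = []
        self.last_heading = 0  # most recent heading level seen (0 = none yet)

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # WCAG 1.1.1: non-text content needs a text alternative.
        if tag == "img" and "alt" not in attrs:
            self.issues.append(f"img missing alt text: {attrs.get('src', '?')}")
        # Headings that jump levels break screen-reader navigation.
        if tag in {"h1", "h2", "h3", "h4", "h5", "h6"}:
            level = int(tag[1])
            if self.last_heading and level > self.last_heading + 1:
                self.issues.append(
                    f"heading skips from h{self.last_heading} to h{level}"
                )
            self.last_heading = level

linter = AccessibilityLinter()
linter.feed('<h1>Module 1</h1><h4>Readings</h4><img src="chart.png">')
for issue in linter.issues:
    print(issue)
```

A real audit layers tools like this under manual testing with screen readers and keyboard-only navigation; automated checks catch only a fraction of barriers, which is why they come first but never last.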
Community, belonging, and psychological safety
Structurally inclusive learning environments are necessary but not sufficient. Learners must also feel they belong — that their presence, contributions, and ways of knowing are genuinely valued. Belonging-centred design attends to the conditions that make learning possible: psychological safety, respectful interaction across difference, and relational pedagogy that acknowledges the whole person.
Amplifying what organizations already know
Inclusive curriculum design does not begin from zero. Every institution — whether a university faculty, a public sector learning team, or a research organization — carries deep disciplinary knowledge, existing relationships with learner communities, and hard-won understanding of the contexts in which their learners work. That expertise is irreplaceable — and it is the starting material for inclusive design, not an obstacle to it.
The most effective curriculum redesign processes amplify what organizations already know about their learners and their field, while bringing new lenses — equity, accessibility, UDL — to bear on how that knowledge is structured, sequenced, and made available to learners. The goal is not to replace disciplinary expertise with pedagogical frameworks, but to integrate them so that the result is both academically rigorous and genuinely accessible.
Human rights, accommodation, and equity frameworks
Inclusive design exists within a legal and institutional context. Human rights legislation requires that educational institutions not discriminate on grounds including disability, and that accommodation be provided to the point of undue hardship. In Canada, this obligation flows from the Canadian Human Rights Act, provincial human rights codes, the Accessible Canada Act (2019), and — in British Columbia — the Accessible British Columbia Act (2021). For internationally operating organizations, equivalent obligations exist in the jurisdictions in which they operate and in which their learners are based.
But accommodation is a floor, not a goal. Equity frameworks push beyond legal minimums toward the substantive changes in design and structure that reduce the need for individual accommodation in the first place. An organization that relies on accommodation as its primary accessibility strategy is managing barriers, not removing them. Proactive, inclusive design reduces the volume and urgency of accommodation requests — and distributes the responsibility for access across the design rather than concentrating it on individual learners.
The central distinction
Accommodation responds to barriers after they appear, one learner at a time. Inclusive design removes barriers before they appear, for everyone. The first is a legal obligation; the second is the goal, and the sections that follow describe the process for pursuing it.
Section 2
The Design Process
Inclusive learning design is not a linear sequence followed once. It is an iterative process: ideas are developed, tested with the people closest to the learning, revised in light of what they surface, and refined through repeated cycles before and after launch. This approach consistently produces more accessible, more rigorous, and more durable learning environments than a single-pass design-and-deliver model.
Intake and context
Understanding the learning environment before designing it
Before any design work begins, the process starts with careful listening. Who are the learners — their professional contexts, prior experiences, language backgrounds, access needs, and the communities they serve? What does the institution or organization already know about where learners struggle and where they thrive? What are the constraints: platform capabilities, faculty capacity, assessment governance, timeline?
This stage also surfaces the existing expertise that makes the design authentic. Subject matter experts bring irreplaceable knowledge of their discipline, their learners, and their field. That knowledge shapes everything that follows.
- Shared understanding of learner population, context, and known barriers
- Documented institutional constraints and design parameters
- An inventory of existing expertise and materials to build from
- Clarity on scope, timeline, and decision-making authority
Outcomes development
What learners will be able to do — and for whom
Learning outcomes are not bureaucratic requirements. They are the design's first equity decision: they determine what counts as learning, whose ways of demonstrating competence are legitimate, and how the course will be assessed. Outcomes developed without attention to equity often inadvertently centre one kind of learner, one disciplinary tradition, or one way of knowing.
Inclusive outcome development asks: are these outcomes genuinely discipline-relevant, or are they measuring ability to perform under specific conditions? Do they allow for diverse ways of demonstrating competence? Are they written in language accessible to learners, not just designers? Do they reflect the actual professional contexts learners will enter?
This stage is iterative — outcomes are drafted, reviewed with subject matter experts and, where possible, with learners, and revised in light of what that review surfaces.
- Measurable, discipline-authentic learning outcomes aligned to program goals
- Outcomes mapped to assessment methods and learning activities
- A shared language for what success looks like in this context
Assessment design
The most consequential equity decision in the course
Assessment design is treated as its own stage because it is where inclusion most often breaks down — and where the most significant equity gains are available. Assessment decisions determine which learners can demonstrate their competence and which cannot, regardless of what they actually know.
This stage develops assessment approaches that are flexible without sacrificing rigour, transparent in criteria, and genuinely aligned to learning outcomes rather than to performance under specific conditions. It also addresses the institutional anxieties that often block assessment innovation.
See Section 3 below for a full treatment of assessment design.
- An assessment strategy aligned to learning outcomes and equity principles
- Flexible and alternative assessment formats where appropriate
- Transparent rubrics and criteria developed with subject matter experts
- A rationale for assessment decisions that can be shared with governance bodies
Content and modality development
Multiple means, multiple pathways
Content is developed iteratively, in working cycles with subject matter experts, rather than handed off for production after a single design session. This keeps the discipline knowledge and the design sensibility in conversation throughout — and allows for course correction when content doesn't land the way it was intended.
UDL principles shape every content decision: is material available in more than one format? Does it assume a particular language background or cultural reference point? Are representations of practitioners and communities in the discipline diverse and authentic? Does the sequencing build understanding or assume it?
Delivery mode — online, hybrid, in-person — shapes both what is possible and what is required. Each mode has distinct accessibility implications. Online delivery requires WCAG-aligned digital content and platform accessibility. Hybrid delivery requires deliberate design so that neither modality is a degraded version of the other. In-person delivery requires attention to physical access, sensory access, and psychological safety in the room.
- Multimodal learning materials developed in working cycles with SMEs
- WCAG-aligned digital content: captioned media, accessible documents, navigable LMS pages
- Content sequenced and scaffolded for the actual learner population
- Representations and examples that reflect the diversity of the field
Review with learners and subject matter experts
The people closest to the barrier are the most useful guides
Before any course goes live, a structured review cycle involves the people who will actually use it. This is not a perfunctory sign-off — it is where the design is tested against reality. Learners with disabilities, learners from underrepresented backgrounds, and learners with different professional contexts often surface barriers that subject matter experts and designers, working together, did not anticipate.
Beta review is structured: specific tasks, specific questions, specific dimensions of the learning experience to evaluate. Feedback is documented systematically and used to make targeted revisions. This is also where accessibility testing against WCAG criteria happens — automated tools first, then manual review with assistive technologies.
- Documented feedback from learner review, organized by theme and priority
- WCAG accessibility review findings for all digital content
- Targeted revisions to content, assessment, and navigation based on review findings
- A record of what was changed and why
Launch, iteration, and handoff
Inclusive design as ongoing practice, not one-time delivery
Launch is not the end of the design process. Learner experience data, facilitator observations, and formal evaluation all surface new information about what is working and what is not. A robust handoff ensures that the people responsible for running and maintaining the course understand its design rationale — so that future edits don't inadvertently reintroduce barriers that were deliberately removed.
Sustainable inclusive design requires institutional infrastructure: accessible templates, design guidelines, procurement standards that reflect accessibility requirements, and faculty development that builds capacity over time.
- A documented design rationale that can guide future maintenance and iteration
- Evaluation framework for tracking learner experience and identifying emerging barriers
- Recommendations for institutional systems and infrastructure
- A plan for the next iteration cycle
Section 3
Assessment Design
Assessment is where inclusion most often breaks down — and where the most significant equity gains are available. The format of an assessment can exclude learners who have fully achieved the intended learning outcomes. Addressing this is not about lowering standards; it is about ensuring that assessments measure what they are designed to measure.
Why assessment is the central equity question
A technically accessible course — captioned videos, readable documents, navigable platforms — can still systematically exclude learners if its assessments assume a narrow set of abilities unrelated to the learning outcomes. A timed written examination in a second language measures language proficiency and processing speed as much as it measures disciplinary knowledge. An oral presentation requirement excludes learners with communication disabilities regardless of the depth of their understanding. A traditional research essay may privilege the conventions of a single academic tradition while penalizing other equally rigorous ways of synthesizing and communicating knowledge.
The question at the centre of inclusive assessment design is not "how do we accommodate learners who can't do the standard assessment?" It is "what does this assessment actually measure — and is that what we intend to measure?"
The critical distinction
Assessing the learning outcome means measuring whether a learner has achieved the intended competency. Assessing performance under specific conditions means measuring whether a learner can demonstrate that competency in one particular way, under one particular set of constraints. These are not the same thing — and conflating them is where most assessment-based exclusion originates.
What flexible assessment is — and is not
Flexible and alternative assessment is not the same as reduced expectations, grade inflation, or academic dishonesty. It is the design of assessment tasks that allow multiple ways of demonstrating the same competency — so that the format of the assessment is not itself the barrier.
In research and graduate education contexts, this might mean: a literature synthesis completed as a traditional annotated bibliography, a structured evidence map, or a recorded conference-style presentation — each demonstrating the same research competency. In public sector professional development, it might mean a policy analysis produced as a written brief, a stakeholder presentation, or a structured decision memo — because the outcome is synthesizing and communicating evidence, not producing one specific document type.
What makes alternative assessment rigorous is not the format — it is the quality of criteria. When assessment criteria are clearly tied to the learning outcome, the format becomes a choice about how to demonstrate competency, not a loophole around demonstrating it.
Designing with transparent criteria
Transparent assessment criteria — rubrics co-developed with subject matter experts, shared with learners before they begin, and applied consistently regardless of format — are one of the most powerful equity levers available to curriculum designers. When learners understand what success looks like and how it will be evaluated, the hidden advantage of cultural familiarity with academic conventions is reduced. Everyone is working from the same specification.
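To make this concrete, the sketch below represents a rubric as a single, format-agnostic specification: each criterion is tied to a learning outcome, carries an explicit weight, and applies identically whether the learner submits a brief, a memo, or a presentation. The criteria, descriptors, and weights here are invented for illustration, not drawn from any real rubric.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    """One rubric criterion, tied to a learning outcome, never to a format."""
    outcome: str      # the competency being measured
    descriptor: str   # what success looks like, in learner-facing language
    weight: float     # share of the total grade

# Hypothetical rubric for a policy-analysis task: note that submission
# format appears nowhere in the specification.
rubric = [
    Criterion("Synthesize evidence",
              "Claims are supported by relevant, appraised sources", 0.4),
    Criterion("Analyse the policy problem",
              "Causes, constraints, and trade-offs are identified", 0.3),
    Criterion("Communicate to the audience",
              "Recommendations are clear and actionable", 0.3),
]

def score(ratings: dict[str, float]) -> float:
    """Weighted total from per-criterion ratings (0.0 to 1.0)."""
    return sum(c.weight * ratings[c.outcome] for c in rubric)

# Two learners who chose different formats are scored against the
# same specification:
print(score({"Synthesize evidence": 0.9,
             "Analyse the policy problem": 0.8,
             "Communicate to the audience": 1.0}))
```

The design point is that comparability lives in the criteria object, shared with learners before they begin, while format becomes a free variable.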
Authentic assessment in professional disciplines
In higher education and research contexts, authentic assessment — tasks that reflect what graduates and practitioners actually do — has a dual advantage. It is more motivating for learners, and it is often more inclusive by default, because it draws on the knowledge and professional contexts learners already bring, rather than requiring mastery of a narrowly defined academic form.
A researcher who designs and documents a genuine inquiry process, a public sector professional who analyses a real policy problem from their organizational context, or a graduate student who produces a discipline-relevant output for an actual audience is demonstrating competency in a format that directly reflects their working reality. The assessment is rigorous because the task is real, not because the format is conventional.
Addressing the institutional anxieties
Assessment innovation in professional and academic programs often runs into legitimate institutional concerns. It is worth addressing them directly.
On academic integrity
Flexible assessment formats, particularly those tied to authentic professional contexts, are often more difficult to plagiarize than standardized formats — because they require the learner to apply knowledge to a specific situation, not reproduce a general argument. Authentic tasks produce more individualized work, not less.
On comparability
Comparability across learners is produced by shared, transparent criteria — not by identical formats. If the criteria are clear and consistently applied, two learners who demonstrate the same competency through different formats can be equitably evaluated. This is standard practice in professional licensure and competency-based education.
On grade equity
Research on flexible and alternative assessment consistently shows that learner performance improves when assessment formats are better aligned to learning outcomes — particularly for learners from underrepresented groups who are often disadvantaged by assessment formats that assume a narrow cultural and linguistic background.
On governance
Assessment innovation requires a clear rationale that can be articulated to academic committees, professional bodies, and accreditation reviewers. The rationale is straightforward: these assessments measure the intended learning outcomes more directly than the alternatives, and they do so in ways that give all learners a genuine opportunity to demonstrate their competency.
Section 4
Key Design Decisions
No two curriculum design engagements look the same. The decisions an organization makes about scope, entry point, who is involved, and what constraints are in play determine what kind of design process is appropriate and what it can realistically produce.
Scope: what is being designed or redesigned
- Module — a single unit, topic, or learning sequence. Useful for targeted improvement or piloting new approaches.
- Course — a full standalone learning experience, including outcomes, assessments, content, and delivery.
- Program — a credential, pathway, or curriculum sequence. Reveals and addresses systemic patterns across a full learning arc.
- Institutional — policies, templates, and design standards that shape how all programs are built. Required for systemic, durable change.
Entry point: where the engagement begins
- New build — designing from a brief and subject matter knowledge. The most effective entry point for inclusive design: inclusion built in from the start.
- Redesign — rebuilding an existing course with new principles. More effective than retrofit, but requires honest examination of what is and is not working.
- Retrofit — adding accessibility and flexibility to existing materials. The most common entry point; also the most limited, because it can remove surface barriers without reaching the assumptions built into outcomes and assessments.
- Audit-first — reviewing what exists before deciding what to change. Often the most responsible starting point when the design history is unclear.
Who is at the table
- Subject matter experts — disciplinary and contextual expertise. Essential in every engagement.
- Learners — people who have taken or will take the course. The most reliable source of information about where the design creates friction.
- Community members — people affected by the discipline's practice. Particularly relevant where the curriculum trains people who will work directly with specific communities or populations.
- Equity consultants — specialists in decolonization, anti-oppressive practice, or specific equity domains relevant to the discipline.
Delivery mode
- Online — asynchronous, synchronous, or both. Requires WCAG-aligned content, accessible LMS navigation, and deliberate community-building design.
- Hybrid — combining in-person and online in the same course. Requires deliberate design so that neither modality is a degraded version of the other.
- In-person — face-to-face, with digital support materials. Requires attention to physical access, sensory access, and psychological safety in the room.
- Blended program — different modes for different courses in a sequence. Requires coherent design across modes so learners are not navigating a different experience in every course.
Section 5
Illustrated Scenario
The following is a composite, illustrative scenario. It is not a description of a real institution or specific engagement, but it is constructed from real patterns — the kinds of barriers, decisions, and turning points that appear consistently in curriculum design work with higher education institutions, public sector organizations, and research bodies.
Illustrative scenario — composite
Redesigning a professional certificate program for a globally distributed learner population
A well-established professional development program in global public health had been running successfully in a face-to-face format for over a decade. When the organization moved to a blended online delivery model to reach practitioners in lower-income settings — including health workers in sub-Saharan Africa, South Asia, and Central America — they discovered that the program as designed did not travel well. Completion rates were lower than expected. Participants in the new cohorts were not completing key assessments. Facilitators were spending significant time managing accommodation requests they hadn't anticipated. The program's content was strong. The design was not.
The curriculum redesign began with a thorough intake process. Conversations with current and former participants, facilitators, and regional coordinators revealed several patterns that had not been visible from headquarters: the program's default assumption of reliable high-bandwidth internet access was creating barriers for practitioners in rural settings; the primary assessment — a 3,000-word policy analysis written in formal academic English — was functioning as a test of academic writing in a second or third language, not a test of public health policy competency; and the visual and cultural references in the case studies were almost entirely drawn from North American and European contexts, making them feel abstract to participants working in very different epidemiological and systemic environments.
Subject matter experts — senior public health practitioners, not just academics — were brought into the design process as genuine co-designers. Their knowledge of what practitioners in the field actually needed to know and be able to do fundamentally reshaped the learning outcomes. Several outcomes that had been written around academic conventions were revised to reflect real professional tasks: analysing a disease burden dataset and producing a brief for a ministry of health; facilitating a community consultation in a resource-limited setting; adapting an evidence-based intervention to a specific epidemiological and cultural context.
The assessment design moment
When the central barrier became visible
The most consequential single decision in the redesign was rethinking the policy analysis assessment. The original assessment asked participants to write a 3,000-word evidence synthesis in formal academic English, structured according to conventions established in North American and European public health scholarship. For participants who had trained in those traditions, the task was familiar. For practitioners who had spent their careers in public health practice — producing briefs, policy memos, programme reports, and ministerial presentations — the format was foreign even though the underlying competency was not.
The assessment was replaced with a format choice: participants could produce a policy brief in the genre conventions of their own country's public health system, a structured programme recommendation, or a practitioner-facing evidence summary. A shared rubric — developed with subject matter experts and piloted with a small beta group — made the criteria explicit regardless of format. Format was not evaluated; competency was.
The outcome was a significant improvement in completion rates for the assessment, and qualitative feedback from facilitators indicating that the quality of the submissions — the depth of analysis and the sophistication of the recommendations — was noticeably higher than in the original format.
What the beta review surfaced
The barriers that designers and experts had not anticipated
A structured beta review with twelve practitioners from three regions — before the revised program went live — surfaced two significant barriers that the design team had not anticipated.
The first was a navigational issue in the LMS: the way the course was structured assumed that participants would work through modules sequentially and have uninterrupted blocks of two to three hours for each module. In practice, many participants were accessing the course in short windows between clinical and community responsibilities, often on mobile devices with intermittent connectivity. A redesign of the module structure and the addition of explicit low-bandwidth alternatives for video content addressed this directly.
The second was more subtle: several participants noted that the discussion forums were producing interactions that felt extractive to participants from lower-income settings. Participants based in well-resourced contexts were drawing on the contextual knowledge of participants from lower-income settings without reciprocating, and the forum prompts were inadvertently structured in ways that encouraged this dynamic. The facilitation guidelines and forum prompts were revised with attention to power dynamics and knowledge exchange.
Both barriers were invisible to the design team and the subject matter experts — not because they were careless, but because neither group occupied the same position as the learners. The people closest to the barrier are the most useful guides to finding it. Structured learner review is not a validation step; it is a design resource.
Section 6
Learner Impact
Inclusive curriculum design creates change at two different horizons. The shorter-term outcomes are tangible and often measurable. The longer-term shift is cultural — and has a much broader effect on how an organization thinks about who its learners are and what learning environments can do.
Shorter-term outcomes
| Outcome | What this looks like in practice |
|---|---|
| Barriers are named and removed at source | Rather than being managed through individual accommodation, barriers are identified in the design and addressed structurally. Learners who were previously excluded gain access to the learning — not through a workaround, but through a course that was designed for them. |
| Assessment reflects what learners actually know | When assessment formats are better aligned to learning outcomes, completion rates improve and the quality of demonstrated competency increases — particularly for learners whose prior experience differs from the assumed norm. |
| Accommodation requests decrease | When flexibility is built into the design — in assessment formats, content delivery, pacing, and navigation — many individual accommodation requests become unnecessary. The need for accommodation is not eliminated, but its volume and urgency are significantly reduced. |
| Faculty and team knowledge grows | The design process itself is a capacity-building experience. Faculty and staff who co-design inclusive curriculum emerge with practical knowledge about UDL, assessment equity, and accessibility that they carry into future work. |
Longer-term shifts
| Outcome | What this looks like in practice |
|---|---|
| Inclusion becomes a design habit, not a retrofit | When teams have experienced inclusive design in practice, the questions change. Accessibility, learner variability, and equity considerations enter the design process at the beginning rather than being raised as problems at the end. |
| All learners benefit | The curb cut effect is real in curriculum: multimodal content, clear navigation, transparent assessment criteria, and flexible delivery benefit every learner, not only those for whom they were specifically designed. |
| Retention and completion improve across groups | Research on inclusive curriculum consistently shows that accessibility improvements — particularly in assessment design and content flexibility — have the largest positive effects on completion and retention for learners from underrepresented groups, without negative effects on the cohort overall. |
| Institutional culture shifts | Organizations that have invested in genuine inclusive curriculum design tend to ask different questions about new programs: not "what accommodation do we need to add?" but "what does this design assume about who learners are — and is that assumption warranted?" |
Section 7
Organizational Reflection
The questions below are designed to help curriculum teams and faculty think honestly about where a program or course currently sits in relation to inclusive design. They are not a scoring instrument. They are meant to be worked through with colleagues who hold different roles and perspectives, because the most useful answers are rarely held by one person alone.
On whose knowledge is centred
- Whose epistemologies, authors, and examples dominate the current curriculum, and whose are absent, tokenized, or subordinated?
- What counts as expertise and evidence in this program, and who decided?
On what assessments actually measure
- For each major assessment: does it measure the intended learning outcome, or performance under one specific set of conditions?
- Could a learner who has fully achieved the outcome still fail because of the format?
On flexibility and accommodation
- Is accommodation the primary accessibility strategy, or is flexibility built into the design itself?
- Which recurring accommodation requests point to a barrier that could be removed at source?
On whose presence is assumed
- What does the current design assume about learners' language, connectivity, schedule, and prior academic training? Are those assumptions warranted?
On what next steps are possible
- What scope and entry point (module, course, program, or institutional) are realistic given current capacity?
- Who needs to be at the table, and who is missing?