Inclusive course and curriculum design is not a method applied to content — it is a stance embedded in every decision made during the design process. The knowledge it draws on spans learning theory, equity-oriented pedagogy, accessibility standards, and institutional systems. Understanding what underpins this work clarifies why it cannot be reduced to a checklist, and why getting it right requires more than good intentions.
What underpins rigorous inclusive design
Learning design theory
Backward design and constructive alignment — beginning with learning outcomes and working backward to assessments and activities — are not just efficiency tools. When used well, they are equity tools: they force the question of whose knowledge counts, what evidence of learning looks like, and whether the design serves the full range of learners in the room, not a hypothetical average.
Inclusive and anti-oppressive pedagogy
Understanding how race, class, gender, disability, language, and other social positions shape access to learning — and how curriculum content, representation, and classroom dynamics can reinforce or disrupt those patterns. This includes actively critiquing hidden bias and deficit assumptions in existing curricula, and designing from a stance that centres equity rather than neutrality.
Decolonizing curriculum
Examining whose knowledge, whose epistemologies, and whose ways of knowing are treated as authoritative — and whose are absent, tokenized, or subordinated. Decolonizing curriculum is not a one-time addition of diverse authors; it is a structural rethinking of what counts as expertise, evidence, and valid ways of demonstrating competence.
Universal Design for Learning
UDL — developed by CAST — offers a practical framework for building flexibility into learning from the outset. Its three principles call for multiple means of Engagement (why learners engage), Representation (what they engage with), and Action and Expression (how they demonstrate what they know). The underlying premise: design for learner variability, not the mythical average student.
Learner variability and differentiated instruction
Learners bring different prior experiences, languages, cognitive profiles, cultural frameworks, and socio-emotional needs. Designing for variability means building in multiple pathways, not just one pathway with accommodations added afterward. This includes understanding how neurodiversity, disability, and situational factors shape how learning happens — and designing accordingly.
Digital accessibility
WCAG-aligned practices — captioned media, alternative text, readable document structure, keyboard-navigable interfaces — are the technical floor of accessible learning design, not the ceiling. Accessibility standards apply to every piece of digital content produced in a course: slides, documents, videos, assessments, and LMS pages. They are built into design from the start, not added at the end.
Community, belonging, and psychological safety
Structurally inclusive learning environments are necessary but not sufficient. Learners must also feel they belong — that their presence, contributions, and ways of knowing are genuinely valued. Belonging-centred design attends to the conditions that make learning possible: psychological safety, respectful interaction across difference, and relational pedagogy that acknowledges the whole person.
Human rights, accommodation, and equity frameworks
Inclusive design exists within a legal and institutional context. Human rights legislation requires that educational institutions not discriminate on grounds including disability, and that accommodation be provided to the point of undue hardship. But accommodation is a floor, not a goal. Equity frameworks push beyond legal minimums toward the substantive changes in design and structure that reduce the need for individual accommodation in the first place.
What organizations already know
Inclusive curriculum design does not begin from zero. Every institution — whether a university faculty, a public sector training unit, or a research organization — carries deep disciplinary knowledge, existing relationships with learner communities, and hard-won understanding of the contexts in which their learners work. That expertise is irreplaceable — and it is the starting material for inclusive design, not an obstacle to it.
The most effective curriculum redesign processes amplify what organizations already know about their learners and their field, while bringing new lenses — equity, accessibility, UDL — to bear on how that knowledge is structured, sequenced, and made available to learners. The goal is not to replace disciplinary expertise with pedagogical frameworks, but to integrate them so that the result is both academically rigorous and genuinely accessible.
Inclusive learning design is not a linear sequence followed once. It is an iterative process: ideas are developed, tested with the people closest to the learning, revised in light of what they surface, and refined through repeated cycles before and after launch. This approach consistently produces more accessible, more rigorous, and more durable learning environments than a single-pass design-and-deliver model.
How the design process unfolds
Before any design work begins, the process starts with careful listening. Who are the learners — their professional contexts, prior experiences, language backgrounds, access needs, and the communities they serve? What does the institution or organization already know about where learners struggle and where they thrive? What are the constraints: platform capabilities, faculty capacity, assessment governance, timeline?
This stage also surfaces the existing expertise that makes the design authentic. Subject matter experts bring irreplaceable knowledge of their discipline, their learners, and their field. That knowledge shapes everything that follows.
- Shared understanding of learner population, context, and known barriers
- Documented institutional constraints and design parameters
- An inventory of existing expertise and materials to build from
- Clarity on scope, timeline, and decision-making authority
Learning outcomes are not bureaucratic requirements. They are the design's first equity decision: they determine what counts as learning, whose ways of demonstrating competence are legitimate, and how the course will be assessed. Outcomes developed without attention to equity often inadvertently centre one kind of learner, one disciplinary tradition, or one way of knowing.
Inclusive outcome development asks: are these outcomes genuinely discipline-relevant, or are they measuring ability to perform under specific conditions? Do they allow for diverse ways of demonstrating competence? Are they written in language accessible to learners, not just designers? Do they reflect the actual professional contexts learners will enter?
This stage is iterative — outcomes are drafted, reviewed with subject matter experts and, where possible, with learners, and revised in light of what that review surfaces.
- Measurable, discipline-authentic learning outcomes aligned to program goals
- Outcomes mapped to assessment methods and learning activities
- A shared language for what success looks like in this context
Assessment design is treated as its own stage because it is where inclusion most often breaks down — and where the most significant equity gains are available. Assessment decisions determine which learners can demonstrate their competence and which cannot, regardless of what they actually know.
This stage develops assessment approaches that are flexible without sacrificing rigour, transparent in criteria, and genuinely aligned to learning outcomes rather than to performance under specific conditions. It also addresses the institutional anxieties that often block assessment innovation — including academic integrity, comparability, and grade equity.
A full treatment of this stage appears in the Assessment Design section below.
- An assessment strategy aligned to learning outcomes and equity principles
- Flexible and alternative assessment formats where appropriate
- Transparent rubrics and criteria developed with subject matter experts
- A rationale for assessment decisions that can be shared with learners and governance bodies
Content is developed iteratively, in working cycles with subject matter experts, rather than handed off for production after a single design session. This keeps the discipline knowledge and the design sensibility in conversation throughout — and allows for course correction when content doesn't land the way it was intended.
UDL principles shape every content decision: is material available in more than one format? Does it assume a particular language background or cultural reference point? Are representations of practitioners and communities in the discipline diverse and authentic? Does the sequencing build understanding or assume it?
Delivery mode — online, hybrid, in-person — shapes both what is possible and what is required. Each mode has distinct accessibility implications. Online delivery requires WCAG-aligned digital content and platform accessibility. Hybrid delivery requires deliberate design so that neither modality is a degraded version of the other. In-person delivery requires attention to physical access, sensory access, and psychological safety in the room.
- Multimodal learning materials developed in working cycles with SMEs
- WCAG-aligned digital content: captioned media, accessible documents, navigable LMS pages
- Content sequenced and scaffolded for the actual learner population
- Representations and examples that reflect the diversity of the field
Before any course goes live, a structured review cycle involves the people who will actually use it. This is not a perfunctory sign-off — it is where the design is tested against reality. Learners with disabilities, learners from underrepresented backgrounds, and learners with different professional contexts often surface barriers that subject matter experts and designers, working together, did not anticipate.
Beta review is structured: specific tasks, specific questions, specific dimensions of the learning experience to evaluate. Feedback is documented systematically and used to make targeted revisions. This is also where accessibility testing against WCAG criteria happens — automated tools first, then manual review with assistive technologies.
- Documented feedback from learner review, organized by theme and priority
- WCAG accessibility review findings for all digital content
- Targeted revisions to content, assessment, and navigation based on review findings
- A record of what was changed and why — useful for governance and future iterations
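The automated first pass of the accessibility review can be as simple as a structural scan of course HTML. The sketch below, using only Python's standard library, flags two common WCAG failures: images with no text alternative and videos with no captions track. It is illustrative only — a hypothetical stand-in for dedicated audit tools, and no substitute for manual review with assistive technologies.

```python
from html.parser import HTMLParser

class AccessibilityCheck(HTMLParser):
    """Flags two common WCAG failures: <img> with no alt attribute and
    <video> with no captions track. Illustrative sketch only; real audits
    need dedicated tools plus manual assistive-technology review."""

    def __init__(self):
        super().__init__()
        self.issues = []
        self._video_line = None
        self._video_has_captions = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            # alt="" is valid for decorative images, so only a missing
            # attribute is flagged here
            self.issues.append(f"img missing alt attribute (line {self.getpos()[0]})")
        elif tag == "video":
            self._video_line = self.getpos()[0]
            self._video_has_captions = False
        elif tag == "track" and attrs.get("kind") == "captions":
            self._video_has_captions = True

    def handle_endtag(self, tag):
        if tag == "video" and not self._video_has_captions:
            self.issues.append(f"video missing captions track (line {self._video_line})")

checker = AccessibilityCheck()
checker.feed('<img src="chart.png"><video><source src="a.mp4"></video>')
print(checker.issues)
```

A scan like this catches only what is mechanically detectable; whether the alt text is meaningful, or the captions accurate, still requires human review — which is why the process above sequences automated tools first and manual review second.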
Launch is not the end of the design process. Learner experience data, facilitator observations, and formal evaluation all surface new information about what is working and what is not. A robust handoff ensures that the people responsible for running and maintaining the course understand its design rationale — so that future edits don't inadvertently reintroduce barriers that were deliberately removed.
Sustainable inclusive design requires institutional infrastructure: accessible templates, design guidelines, procurement standards that reflect accessibility requirements, and faculty development that builds capacity over time. The handoff is also the point at which those systemic needs are identified and named.
- A documented design rationale that can guide future maintenance and iteration
- Evaluation framework for tracking learner experience and identifying emerging barriers
- Recommendations for institutional systems and infrastructure
- A plan for the next iteration cycle
Assessment is where inclusion most often breaks down — and where the most significant equity gains are available. The format of an assessment can exclude learners who have fully achieved the intended learning outcomes. Addressing this is not about lowering standards; it is about ensuring that assessments measure what they are designed to measure, rather than measuring a learner's ability to perform under conditions unrelated to the outcome.
Why assessment is the central equity question
A technically accessible course — captioned videos, readable documents, navigable platforms — can still systematically exclude learners if its assessments assume a narrow set of abilities unrelated to the learning outcomes. A timed written examination in a second language measures English proficiency and processing speed as much as it measures disciplinary knowledge. An oral presentation requirement excludes learners with communication disabilities regardless of the depth of their understanding. A traditional research essay may privilege the conventions of a single academic tradition while penalizing other equally rigorous ways of synthesizing and communicating knowledge.
The question at the centre of inclusive assessment design is not "how do we accommodate learners who can't do the standard assessment?" It is "what does this assessment actually measure — and is that what we intend to measure?"
The critical distinction
Assessing the learning outcome means measuring whether a learner has achieved the intended competency. Assessing performance under specific conditions means measuring whether a learner can demonstrate that competency in one particular way, under one particular set of constraints. These are not the same thing — and conflating them is where most assessment-based exclusion originates.
What flexible assessment is — and is not
Flexible and alternative assessment is not the same as reduced expectations, grade inflation, or academic dishonesty. It is the design of assessment tasks that allow multiple ways of demonstrating the same competency — so that the format of the assessment is not itself the barrier.
In research and graduate education contexts, this might mean: a literature synthesis completed as a traditional annotated bibliography, a structured evidence map, or a recorded conference-style presentation — each demonstrating the same research competency. In public sector professional development, it might mean a policy analysis produced as a written brief, a stakeholder presentation, or a structured decision memo — because the outcome is synthesizing and communicating evidence, not producing one specific document type.
What makes alternative assessment rigorous is not the format — it is the quality of criteria. When assessment criteria are clearly tied to the learning outcome, the format becomes a choice about how to demonstrate competency, not a loophole around demonstrating it.
Designing with transparent criteria
Transparent assessment criteria — rubrics co-developed with subject matter experts, shared with learners before they begin, and applied consistently regardless of format — are one of the most powerful equity levers available to curriculum designers. When learners understand what success looks like and how it will be evaluated, the hidden advantage of cultural familiarity with academic conventions is reduced. Everyone is working from the same specification.
Authentic assessment in professional disciplines
In higher education and research contexts, authentic assessment — tasks that reflect what graduates and practitioners actually do — has a dual advantage. It is more motivating for learners, and it is often more inclusive by default, because it draws on the knowledge and professional contexts learners already bring, rather than requiring mastery of a narrowly defined academic form.
A researcher who designs and documents a genuine inquiry process, a public sector professional who analyses a real policy problem from their organizational context, or a graduate student who produces a discipline-relevant output for an actual audience is demonstrating competency in a format that directly reflects their working reality. The assessment is rigorous because the task is real, not because the format is conventional.
Addressing the institutional anxieties
Assessment innovation in professional and academic programs often runs into legitimate institutional concerns. It is worth addressing them directly.
On academic integrity
Flexible assessment formats, particularly those tied to authentic professional contexts, are often more difficult to plagiarize than standardized formats — because they require the learner to apply knowledge to a specific situation, not reproduce a general argument. Authentic tasks produce more individualized work, not less.
On comparability
Comparability across learners is produced by shared, transparent criteria — not by identical formats. If the criteria are clear and consistently applied, two learners who demonstrate the same competency through different formats can be equitably evaluated. This is standard practice in professional licensure and competency-based education.
On grade equity
Research on flexible and alternative assessment consistently shows that learner performance improves when assessment formats are better aligned to learning outcomes — particularly for learners from underrepresented groups who are often disadvantaged by assessment formats that assume a narrow cultural and linguistic background. The concern should be about equity of outcomes, not equivalence of formats.
On governance
Assessment innovation requires a clear rationale that can be articulated to academic committees, professional bodies, and accreditation reviewers. The rationale is straightforward: these assessments measure the intended learning outcomes more directly than the alternatives, and they do so in ways that give all learners a genuine opportunity to demonstrate their competency.
No two curriculum design engagements look the same. The decisions an organization makes about scope, entry point, who is involved, and what constraints are in play determine what kind of design process is appropriate and what it can realistically produce. Understanding these dimensions helps set honest expectations — and make better use of the process.
What shapes a curriculum design engagement
- Scope: what is being designed or redesigned?
- Entry point: where does this engagement begin?
- Who is at the table: whose knowledge shapes the design?
- Delivery mode: how will learners engage with the learning?
- Timeline and iteration: how much time and how many cycles?
- Institutional constraints: what shapes the design space?
A note on retrofit vs. new build
Retrofit — adding accessibility and flexibility to existing materials — is the most common entry point, and it is also the most limited. When a course is built on inaccessible assumptions about who learners are and what they can do, retrofitting adds layers of accommodation without addressing the underlying design problem. The result is often a course that technically passes an accessibility checklist while still excluding learners in practice.
This does not mean retrofit is never appropriate — sometimes the constraints of time, resources, and faculty capacity make it the realistic option. But it means being honest about what retrofit can and cannot produce, and identifying the longer-term redesign work that retrofitting defers.
The following is a composite, illustrative scenario. It is not a description of a real organization or specific engagement, but it is constructed from real patterns — the kinds of barriers, decisions, and turning points that appear consistently in curriculum design work with higher education institutions, public sector organizations, and research bodies. It is intended to show what the design process looks like when it is working well, and what it surfaces that other approaches miss.
A graduate program in global public health practice
Illustrative scenario — composite
Redesigning a professional certificate program for a globally distributed learner population
A well-established professional development program in global public health had been running successfully in a face-to-face format for over a decade. When the organization moved to a blended online delivery model to reach practitioners in lower-income settings — including health workers in sub-Saharan Africa, South Asia, and Central America — they discovered that the program as designed did not travel well. Completion rates were lower than expected. Participants in the new cohorts were dropping key assessments. Facilitators were spending significant time managing accommodation requests they hadn't anticipated. The program's content was strong. The design was not.
The curriculum redesign began with a thorough intake process. Conversations with current and former participants, facilitators, and regional coordinators revealed several patterns that had not been visible from headquarters: the program's default assumption of reliable high-bandwidth internet access was creating barriers for practitioners in rural settings; the primary assessment — a 3,000-word policy analysis written in formal academic English — was functioning as a test of academic writing in a second or third language, not a test of public health policy competency; and the visual and cultural references in the case studies were almost entirely drawn from North American and European contexts, making them feel abstract to participants working in very different epidemiological and systemic environments.
Subject matter experts — senior public health practitioners, not just academics — were brought into the design process as genuine co-designers, not reviewers. Their knowledge of what practitioners in the field actually needed to know and be able to do fundamentally reshaped the learning outcomes. Several outcomes that had been written around academic conventions were revised to reflect real professional tasks: analysing a disease burden dataset and producing a brief for a ministry of health; facilitating a community consultation in a resource-limited setting; adapting an evidence-based intervention to a specific epidemiological and cultural context.
The assessment design moment
When the central barrier became visible
The most consequential single decision in the redesign was rethinking the policy analysis assessment. The original assessment asked participants to write a 3,000-word evidence synthesis in formal academic English, structured according to conventions established in North American and European public health scholarship. For participants who had trained in those traditions, the task was familiar. For practitioners who had spent their careers in public health practice — producing briefs, policy memos, programme reports, and ministerial presentations — the format was foreign even though the underlying competency was not.
The assessment was replaced with a format choice: participants could produce a policy brief in the genre conventions of their own country's public health system, a structured programme recommendation, or a practitioner-facing evidence summary — all of which mapped to the same learning outcome (synthesizing evidence to support a public health decision) but reflected the actual formats practitioners use in their real working contexts.
A shared rubric — developed with subject matter experts and piloted with a small beta group of practitioners before the course ran — made the criteria explicit regardless of format. Assessors evaluated the quality of the evidence synthesis, the rigour of the argument, and the appropriateness of recommendations for the stated context. Format was not evaluated; competency was.
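A format-agnostic rubric of this kind can be sketched as a small data structure: shared criteria with weights, and a scoring function that never takes format as an input. The criterion names and weights below are hypothetical, chosen to echo the dimensions described above, not the program's actual instrument.

```python
# Hypothetical criteria and weights, echoing the rubric dimensions above.
RUBRIC = {
    "evidence_synthesis": 0.40,  # quality of the evidence synthesis
    "argument_rigour": 0.35,     # rigour of the argument
    "context_fit": 0.25,         # appropriateness of recommendations for the stated context
}

def score(ratings: dict[str, int]) -> float:
    """Weighted score on a 0-4 scale. Format is never an input:
    only the shared criteria are evaluated."""
    assert set(ratings) == set(RUBRIC), "every criterion must be rated"
    return sum(RUBRIC[c] * r for c, r in ratings.items())

# Two submissions in different formats, rated on identical criteria,
# receive identical scores.
policy_brief = score({"evidence_synthesis": 4, "argument_rigour": 3, "context_fit": 4})
evidence_summary = score({"evidence_synthesis": 4, "argument_rigour": 3, "context_fit": 4})
assert policy_brief == evidence_summary
```

The design point the sketch makes is the one in the text: comparability comes from the shared criteria and their consistent application, not from forcing every learner through the same format.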
The academic integrity concern was addressed directly: because each assessment was tied to a specific, real-world context that the participant named and described at the outset, the opportunity for generic plagiarism was significantly reduced. What participants submitted was necessarily individualized to their stated context.
The outcome was a significant improvement in completion rates for the assessment, and qualitative feedback from facilitators indicating that the quality of the submissions — the depth of analysis and the sophistication of the recommendations — was noticeably higher than in the original format.
What the beta review surfaced
The barriers that designers and experts had not anticipated
A structured beta review with twelve practitioners from three regions — before the revised program went live — surfaced two significant barriers that the design team had not anticipated.
The first was a navigational issue in the LMS: the way the course was structured assumed that participants would work through modules sequentially, and that they would have uninterrupted blocks of two to three hours for each module. In practice, many participants were accessing the course in short windows between clinical and community responsibilities, often on mobile devices with intermittent connectivity. The navigation structure — which did not allow easy bookmarking or re-entry at a specific point — was creating significant friction. A redesign of the module structure and the addition of explicit low-bandwidth alternatives for video content addressed this directly.
The second was more subtle: several participants in the beta group noted that the discussion forums — designed to build a learning community across the cohort — were producing interactions that felt extractive to participants from lower-income settings. Participants based in well-resourced contexts were drawing on the contextual knowledge of participants from lower-income settings without reciprocating, and the forum prompts were inadvertently structured in ways that encouraged this dynamic. The facilitation guidelines and forum prompts were revised with attention to power dynamics and knowledge exchange, not just community building.
Both barriers were invisible to the design team and the subject matter experts — not because they were careless, but because neither group occupied the same position as the learners. The people closest to the barrier are the most useful guides to finding it. This is not a failure of the design team; it is an argument for structured learner review as a non-negotiable stage of any curriculum redesign.
Inclusive curriculum design creates change at two different horizons. The shorter-term outcomes are tangible and often measurable: reduced friction, more pathways, better completion. The longer-term shift is cultural — and has a much broader effect on how an organization thinks about who its learners are and what learning environments can do.
What changes — and when
Shorter term — within the course cycle and immediate redesign period
Longer term — across repeated cycles and sustained organizational commitment
The questions below are designed to help curriculum teams and faculty think honestly about where a program or course currently sits in relation to inclusive design. They are not a scoring instrument — there are no categories or ratings. They are meant to be worked through with colleagues who hold different roles and perspectives, because the most useful answers to these questions are rarely held by one person alone.
Take your time. The most productive conversations these questions generate are rarely comfortable. That is the point.
On whose knowledge is centred
On what assessments actually measure
On flexibility and accommodation
On whose presence is assumed
On what next steps are possible