Knowledge Resource

Inclusive Course and Curriculum Design

A guide to what principled, equity-centred learning design involves — its knowledge foundations, process, key decisions, and what it means for learners over time. Designed for faculty, learning designers, department heads, and curriculum committees.

Use the tabs to explore at whatever depth is useful to you. A screen-reader- and keyboard-optimised version of this resource is also available.

Inclusive course and curriculum design is not a method applied to content — it is a stance embedded in every decision made during the design process. The knowledge it draws on spans learning theory, equity-oriented pedagogy, accessibility standards, and institutional systems. Understanding what underpins this work clarifies why it cannot be reduced to a checklist, and why getting it right requires more than good intentions.

What underpins rigorous inclusive design

Learning design theory

Backward design and constructive alignment — beginning with learning outcomes and working backward to assessments and activities — are not just efficiency tools. When used well, they are equity tools: they force the question of whose knowledge counts, what evidence of learning looks like, and whether the design serves the full range of learners in the room, not a hypothetical average.

Inclusive and anti-oppressive pedagogy

Understanding how race, class, gender, disability, language, and other social positions shape access to learning — and how curriculum content, representation, and classroom dynamics can reinforce or disrupt those patterns. This includes actively critiquing hidden bias and deficit assumptions in existing curricula, and designing from a stance that centres equity rather than neutrality.

Decolonizing curriculum

Examining whose knowledge, whose epistemologies, and whose ways of knowing are treated as authoritative — and whose are absent, tokenized, or subordinated. Decolonizing curriculum is not a one-time addition of diverse authors; it is a structural rethinking of what counts as expertise, evidence, and valid ways of demonstrating competence.

Universal Design for Learning

UDL — developed by CAST — offers a practical framework for building flexibility into learning from the outset. It calls for multiple means of Engagement (why learners engage), Representation (what they engage with), and Action and Expression (how they demonstrate what they know). The key principle: design for learner variability, not the mythical average student.

Learner variability and differentiated instruction

Learners bring different prior experiences, languages, cognitive profiles, cultural frameworks, and socio-emotional needs. Designing for variability means building in multiple pathways, not just one pathway with accommodations added afterward. This includes understanding how neurodiversity, disability, and situational factors shape how learning happens — and designing accordingly.

Digital accessibility

WCAG-aligned practices — captioned media, alternative text, readable document structure, keyboard-navigable interfaces — are the technical floor of accessible learning design, not the ceiling. Accessibility standards apply to every piece of digital content produced in a course: slides, documents, videos, assessments, and LMS pages. They are built into design from the start, not added at the end.
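One of these practices — readable document structure — can be partially machine-checked. As a purely illustrative sketch (not a real compliance tool, and no substitute for manual review with assistive technologies), the following standard-library Python script flags heading levels that skip (for example an h3 directly under an h1), a common marker of broken structure for screen-reader navigation. The class and function names are hypothetical, and the script assumes simple, well-formed HTML.

```python
# Illustrative sketch only: flag skipped heading levels in an HTML page.
# A skipped level (h1 followed directly by h3) breaks the outline that
# screen-reader users navigate by. Names here are hypothetical examples.
from html.parser import HTMLParser


class HeadingOrderChecker(HTMLParser):
    """Records heading levels and flags jumps deeper than one level."""

    def __init__(self):
        super().__init__()
        self.skips = []       # (previous_level, offending_level) pairs
        self._last_level = 0  # 0 = no heading seen yet

    def handle_starttag(self, tag, attrs):
        # Heading tags are h1..h6: an "h" followed by a single digit.
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            level = int(tag[1])
            if level > self._last_level + 1:
                self.skips.append((self._last_level, level))
            self._last_level = level


def heading_skips(html: str) -> list[tuple[int, int]]:
    """Return every (previous level, offending level) skip in the page."""
    checker = HeadingOrderChecker()
    checker.feed(html)
    return checker.skips


print(heading_skips("<h1>Course</h1><h3>Week 1</h3>"))  # → [(1, 3)]
```

A check like this catches only one narrow structural issue; it illustrates the broader point that document structure is inspectable and fixable at authoring time, not only at remediation time.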

Community, belonging, and psychological safety

Structurally inclusive learning environments are necessary but not sufficient. Learners must also feel they belong — that their presence, contributions, and ways of knowing are genuinely valued. Belonging-centred design attends to the conditions that make learning possible: psychological safety, respectful interaction across difference, and relational pedagogy that acknowledges the whole person.

Human rights, accommodation, and equity frameworks

Inclusive design exists within a legal and institutional context. Human rights legislation requires that educational institutions not discriminate on grounds including disability, and that accommodation be provided to the point of undue hardship. But accommodation is a floor, not a goal. Equity frameworks push beyond legal minimums toward the substantive changes in design and structure that reduce the need for individual accommodation in the first place.

The central distinction: Inclusion as a structural principle is built into the design of a learning environment from the beginning — into outcomes, assessments, content, and delivery. Inclusion as a bolt-on is a diversity statement in the syllabus, one alternative format added after the fact, or a single guest speaker representing an entire community. The difference is not just pedagogical; it determines whether marginalized learners experience genuine access or a gesture toward it.

What organizations already know

Inclusive curriculum design does not begin from zero. Every institution — whether a university faculty, a public sector training unit, or a research organization — carries deep disciplinary knowledge, existing relationships with learner communities, and hard-won understanding of the contexts in which their learners work. That expertise is irreplaceable — and it is the starting material for inclusive design, not an obstacle to it.

The most effective curriculum redesign processes amplify what organizations already know about their learners and their field, while bringing new lenses — equity, accessibility, UDL — to bear on how that knowledge is structured, sequenced, and made available to learners. The goal is not to replace disciplinary expertise with pedagogical frameworks, but to integrate them so that the result is both academically rigorous and genuinely accessible.

On the evidence base: systematic reviews of UDL implementation in higher education consistently show that learner variability — not disability category — is the most useful unit of analysis for inclusive design. Designing for the widest range of learners produces better outcomes for all learners, not only those with identified disabilities.

Inclusive learning design is not a linear sequence followed once. It is an iterative process: ideas are developed, tested with the people closest to the learning, revised in light of what they surface, and refined through repeated cycles before and after launch. This approach consistently produces more accessible, more rigorous, and more durable learning environments than a single-pass design-and-deliver model.

How the design process unfolds

The stages below describe a comprehensive approach. In any given engagement, scope, timeline, and institutional context shape which stages are most intensive and which are abbreviated. Understanding the full process clarifies what is gained or lost when particular stages are compressed.

Before any design work begins, the process starts with careful listening. Who are the learners — their professional contexts, prior experiences, language backgrounds, access needs, and the communities they serve? What does the institution or organization already know about where learners struggle and where they thrive? What are the constraints: platform capabilities, faculty capacity, assessment governance, timeline?

This stage also surfaces the existing expertise that makes the design authentic. Subject matter experts bring irreplaceable knowledge of their discipline, their learners, and their field. That knowledge shapes everything that follows.

This stage produces
  • Shared understanding of learner population, context, and known barriers
  • Documented institutional constraints and design parameters
  • An inventory of existing expertise and materials to build from
  • Clarity on scope, timeline, and decision-making authority
Skipping or rushing this stage is one of the most common reasons curriculum redesigns produce materials that don't land — because they aren't built from or for the real context of the learners.

Learning outcomes are not bureaucratic requirements. They are the design's first equity decision: they determine what counts as learning, whose ways of demonstrating competence are legitimate, and how the course will be assessed. Outcomes developed without attention to equity often inadvertently centre one kind of learner, one disciplinary tradition, or one way of knowing.

Inclusive outcome development asks: are these outcomes genuinely discipline-relevant, or are they measuring ability to perform under specific conditions? Do they allow for diverse ways of demonstrating competence? Are they written in language accessible to learners, not just designers? Do they reflect the actual professional contexts learners will enter?

This stage is iterative — outcomes are drafted, reviewed with subject matter experts and, where possible, with learners, and revised in light of what that review surfaces.

This stage produces
  • Measurable, discipline-authentic learning outcomes aligned to program goals
  • Outcomes mapped to assessment methods and learning activities
  • A shared language for what success looks like in this context

Assessment design is treated as its own stage because it is where inclusion most often breaks down — and where the most significant equity gains are available. Assessment decisions determine which learners can demonstrate their competence and which cannot, regardless of what they actually know.

This stage develops assessment approaches that are flexible without sacrificing rigour, transparent in criteria, and genuinely aligned to learning outcomes rather than to performance under specific conditions. It also addresses the institutional anxieties that often block assessment innovation — including academic integrity, comparability, and grade equity.

See the Assessment Design tab for a full treatment of this stage.

This stage produces
  • An assessment strategy aligned to learning outcomes and equity principles
  • Flexible and alternative assessment formats where appropriate
  • Transparent rubrics and criteria developed with subject matter experts
  • A rationale for assessment decisions that can be shared with learners and governance bodies

Content is developed iteratively, in working cycles with subject matter experts, rather than handed off for production after a single design session. This keeps the discipline knowledge and the design sensibility in conversation throughout — and allows for course correction when content doesn't land the way it was intended.

UDL principles shape every content decision: is material available in more than one format? Does it assume a particular language background or cultural reference point? Are representations of practitioners and communities in the discipline diverse and authentic? Does the sequencing build understanding or assume it?

Delivery mode — online, hybrid, in-person — shapes both what is possible and what is required. Each mode has distinct accessibility implications. Online delivery requires WCAG-aligned digital content and platform accessibility. Hybrid delivery requires deliberate design so that neither modality is a degraded version of the other. In-person delivery requires attention to physical access, sensory access, and psychological safety in the room.

This stage produces
  • Multimodal learning materials developed in working cycles with SMEs
  • WCAG-aligned digital content: captioned media, accessible documents, navigable LMS pages
  • Content sequenced and scaffolded for the actual learner population
  • Representations and examples that reflect the diversity of the field

Before any course goes live, a structured review cycle involves the people who will actually use it. This is not a perfunctory sign-off — it is where the design is tested against reality. Learners with disabilities, learners from underrepresented backgrounds, and learners with different professional contexts often surface barriers that subject matter experts and designers, working together, did not anticipate.

Beta review is structured: specific tasks, specific questions, specific dimensions of the learning experience to evaluate. Feedback is documented systematically and used to make targeted revisions. This is also where accessibility testing against WCAG criteria happens — automated tools first, then manual review with assistive technologies.

This stage produces
  • Documented feedback from learner review, organized by theme and priority
  • WCAG accessibility review findings for all digital content
  • Targeted revisions to content, assessment, and navigation based on review findings
  • A record of what was changed and why — useful for governance and future iterations
The organizations that produce the most genuinely accessible learning environments are those that treat learner review as a design resource, not a validation step. The goal is to find the barriers before learners encounter them in live contexts — not to confirm that the design was good.
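The "automated tools first, then manual review" pass described for this stage can be sketched minimally. The example below — illustrative only, with hypothetical names, and using nothing beyond Python's standard library — checks a single WCAG 1.1.1 criterion: every image must carry a text alternative. Real accessibility reviews use full toolchains and assistive-technology testing; this only demonstrates why the automated pass is a cheap first filter.

```python
# Illustrative sketch only: a first-pass automated check for one WCAG
# criterion (images need an alt attribute). An empty alt="" is accepted,
# since it is the correct markup for purely decorative images.
from html.parser import HTMLParser


class AltTextChecker(HTMLParser):
    """Collects the src of every <img> element with no alt attribute."""

    def __init__(self):
        super().__init__()
        self.missing_alt = []  # src values of images lacking alt text

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_dict = dict(attrs)
            if "alt" not in attr_dict:
                self.missing_alt.append(attr_dict.get("src", "(no src)"))


def check_page(html: str) -> list[str]:
    """Return the src of each image missing an alt attribute."""
    checker = AltTextChecker()
    checker.feed(html)
    return checker.missing_alt


sample = '<img src="chart.png"><img src="logo.png" alt="Programme logo">'
print(check_page(sample))  # → ['chart.png']
```

What no automated pass can tell you is whether the alt text that *is* present actually conveys the image's meaning — which is exactly why the manual review with assistive technologies, and the learner beta review, remain non-negotiable.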

Launch is not the end of the design process. Learner experience data, facilitator observations, and formal evaluation all surface new information about what is working and what is not. A robust handoff ensures that the people responsible for running and maintaining the course understand its design rationale — so that future edits don't inadvertently reintroduce barriers that were deliberately removed.

Sustainable inclusive design requires institutional infrastructure: accessible templates, design guidelines, procurement standards that reflect accessibility requirements, and faculty development that builds capacity over time. The handoff is also the point at which those systemic needs are identified and named.

This stage produces
  • A documented design rationale that can guide future maintenance and iteration
  • Evaluation framework for tracking learner experience and identifying emerging barriers
  • Recommendations for institutional systems and infrastructure
  • A plan for the next iteration cycle
On co-design vs. consultation: Consultation asks stakeholders to react to decisions already made. Co-design involves them in making the decisions. The difference in what gets built is significant. Subject matter experts who are genuine partners in the design process — rather than reviewers of a completed draft — produce learning materials that are more accurate, more contextually grounded, and more likely to be sustained after the engagement ends. The same is true when learners are involved. The people closest to a barrier are usually the most useful guides to removing it.

Assessment is where inclusion most often breaks down — and where the most significant equity gains are available. The format of an assessment can exclude learners who have fully achieved the intended learning outcomes. Addressing this is not about lowering standards; it is about ensuring that assessments measure what they are designed to measure, rather than measuring a learner's ability to perform under conditions unrelated to the outcome.

Why assessment is the central equity question

A technically accessible course — captioned videos, readable documents, navigable platforms — can still systematically exclude learners if its assessments assume a narrow set of abilities unrelated to the learning outcomes. A timed written examination in a second language measures English proficiency and processing speed as much as it measures disciplinary knowledge. An oral presentation requirement excludes learners with communication disabilities regardless of the depth of their understanding. A traditional research essay may privilege the conventions of a single academic tradition while penalizing other equally rigorous ways of synthesizing and communicating knowledge.

The question at the centre of inclusive assessment design is not "how do we accommodate learners who can't do the standard assessment?" It is "what does this assessment actually measure — and is that what we intend to measure?"

The critical distinction

Assessing the learning outcome means measuring whether a learner has achieved the intended competency. Assessing performance under specific conditions means measuring whether a learner can demonstrate that competency in one particular way, under one particular set of constraints. These are not the same thing — and conflating them is where most assessment-based exclusion originates.

What flexible assessment is — and is not

Flexible and alternative assessment is not the same as reduced expectations, grade inflation, or academic dishonesty. It is the design of assessment tasks that allow multiple ways of demonstrating the same competency — so that the format of the assessment is not itself the barrier.

In research and graduate education contexts, this might mean: a literature synthesis completed as a traditional annotated bibliography, a structured evidence map, or a recorded conference-style presentation — each demonstrating the same research competency. In public sector professional development, it might mean a policy analysis produced as a written brief, a stakeholder presentation, or a structured decision memo — because the outcome is synthesizing and communicating evidence, not producing one specific document type.

What makes alternative assessment rigorous is not the format — it is the quality of criteria. When assessment criteria are clearly tied to the learning outcome, the format becomes a choice about how to demonstrate competency, not a loophole around demonstrating it.

Designing with transparent criteria

Transparent assessment criteria — rubrics co-developed with subject matter experts, shared with learners before they begin, and applied consistently regardless of format — are one of the most powerful equity levers available to curriculum designers. When learners understand what success looks like and how it will be evaluated, the hidden advantage of cultural familiarity with academic conventions is reduced. Everyone is working from the same specification.

Authentic assessment in professional disciplines

In higher education and research contexts, authentic assessment — tasks that reflect what graduates and practitioners actually do — has a dual advantage. It is more motivating for learners, and it is often more inclusive by default, because it draws on the knowledge and professional contexts learners already bring, rather than requiring mastery of a narrowly defined academic form.

A researcher who designs and documents a genuine inquiry process, a public sector professional who analyses a real policy problem from their organizational context, or a graduate student who produces a discipline-relevant output for an actual audience is demonstrating competency in a format that directly reflects their working reality. The assessment is rigorous because the task is real, not because the format is conventional.

Addressing the institutional anxieties

Assessment innovation in professional and academic programs often runs into legitimate institutional concerns. It is worth addressing them directly.

On academic integrity

Flexible assessment formats, particularly those tied to authentic professional contexts, are often more difficult to plagiarize than standardized formats — because they require the learner to apply knowledge to a specific situation, not reproduce a general argument. Authentic tasks produce more individualized work, not less.

On comparability

Comparability across learners is produced by shared, transparent criteria — not by identical formats. If the criteria are clear and consistently applied, two learners who demonstrate the same competency through different formats can be equitably evaluated. This is standard practice in professional licensure and competency-based education.

On grade equity

Research on flexible and alternative assessment consistently shows that learner performance improves when assessment formats are better aligned to learning outcomes — particularly for learners from underrepresented groups who are often disadvantaged by assessment formats that assume a narrow cultural and linguistic background. The concern should be about equity of outcomes, not equivalence of formats.

On governance

Assessment innovation requires a clear rationale that can be articulated to academic committees, professional bodies, and accreditation reviewers. The rationale is straightforward: these assessments measure the intended learning outcomes more directly than the alternatives, and they do so in ways that give all learners a genuine opportunity to demonstrate their competency.

The strongest case for inclusive assessment design is not the equity argument alone, though that case is compelling. It is that better-aligned assessments are more valid measures of learning — and that validity is the foundational requirement for any assessment in a professional education context.

No two curriculum design engagements look the same. The decisions an organization makes about scope, entry point, who is involved, and what constraints are in play determine what kind of design process is appropriate and what it can realistically produce. Understanding these dimensions helps set honest expectations — and make better use of the process.

What shapes a curriculum design engagement

Scope

What is being designed or redesigned?

Module — a single unit, topic, or learning sequence
Course — a full standalone learning experience
Program — a credential, pathway, or curriculum sequence
Institutional — policies, templates, and design standards across programs

Entry point

Where does this engagement begin?

New build — designing from a brief and subject matter knowledge
Redesign — rebuilding an existing course with new principles
Retrofit — adding accessibility and flexibility to existing materials
Audit-first — reviewing what exists before deciding what to change

Who is at the table

Whose knowledge shapes the design?

Subject matter experts — disciplinary and contextual expertise
Learners — people who have taken or will take the course
Community members — people affected by the discipline's practice
Equity consultants — specialists in decolonization, anti-oppressive practice

Delivery mode

How will learners engage with the learning?

Online — asynchronous, synchronous, or both
Hybrid — combining in-person and online in the same course
In-person — face-to-face, with digital support materials
Blended program — different modes for different courses in a sequence

Timeline and iteration

How much time and how many cycles?

Rapid — targeted improvements to an existing course under time pressure
Phased — multiple iterative cycles over a longer engagement
Embedded — ongoing design partnership through multiple course runs

Institutional constraints

What shapes the design space?

LMS and platform — what the technology can and cannot do
Assessment governance — what committees, accreditors, or professional bodies require
Faculty capacity — how much time and support is available
Equity and accessibility policy — existing institutional commitments
On constraints as design material: Institutional constraints — what the LMS can do, what assessment governance permits, what faculty capacity allows — are not obstacles to inclusive design. They are design parameters. The most effective inclusive curriculum designers work within real constraints while identifying which constraints are genuinely fixed and which are open to negotiation with the right evidence and the right relationships. Part of the value of an external design partner is being able to name, honestly, which constraints are structural and which are habitual.

A note on retrofit vs. new build

Retrofit — adding accessibility and flexibility to existing materials — is the most common entry point, and it is also the most limited. When a course is built on inaccessible assumptions about who learners are and what they can do, retrofitting adds layers of accommodation without addressing the underlying design problem. The result is often a course that technically passes an accessibility checklist while still excluding learners in practice.

This does not mean retrofit is never appropriate — sometimes the constraints of time, resources, and faculty capacity make it the realistic option. But it means being honest about what retrofit can and cannot produce, and identifying the longer-term redesign work that retrofitting defers.

The following is a composite, illustrative scenario. It is not a description of a real organization or specific engagement, but it is constructed from real patterns — the kinds of barriers, decisions, and turning points that appear consistently in curriculum design work with higher education institutions, public sector organizations, and research bodies. It is intended to show what the design process looks like when it is working well, and what it surfaces that other approaches miss.

A graduate program in global public health practice

Illustrative scenario — composite

Redesigning a professional certificate program for a globally distributed learner population

A well-established professional development program in global public health had been running successfully in a face-to-face format for over a decade. When the organization moved to a blended online delivery model to reach practitioners in lower-income settings — including health workers in sub-Saharan Africa, South Asia, and Central America — they discovered that the program as designed did not travel well. Completion rates were lower than expected. Participants in the new cohorts were not completing key assessments. Facilitators were spending significant time managing accommodation requests they hadn't anticipated. The program's content was strong. The design was not.

The curriculum redesign began with a thorough intake process. Conversations with current and former participants, facilitators, and regional coordinators revealed several patterns that had not been visible from headquarters: the program's default assumption of reliable high-bandwidth internet access was creating barriers for practitioners in rural settings; the primary assessment — a 3,000-word policy analysis written in formal academic English — was functioning as a test of academic writing in a second or third language, not a test of public health policy competency; and the visual and cultural references in the case studies were almost entirely drawn from North American and European contexts, making them feel abstract to participants working in very different epidemiological and systemic environments.

Subject matter experts — senior public health practitioners, not just academics — were brought into the design process as genuine co-designers, not reviewers. Their knowledge of what practitioners in the field actually needed to know and be able to do fundamentally reshaped the learning outcomes. Several outcomes that had been written around academic conventions were revised to reflect real professional tasks: analysing a disease burden dataset and producing a brief for a ministry of health; facilitating a community consultation in a resource-limited setting; adapting an evidence-based intervention to a specific epidemiological and cultural context.

The assessment design moment

When the central barrier became visible

The most consequential single decision in the redesign was rethinking the policy analysis assessment. The original assessment asked participants to write a 3,000-word evidence synthesis in formal academic English, structured according to conventions established in North American and European public health scholarship. For participants who had trained in those traditions, the task was familiar. For practitioners who had spent their careers in public health practice — producing briefs, policy memos, programme reports, and ministerial presentations — the format was foreign even though the underlying competency was not.

What the redesign changed

The assessment was replaced with a format choice: participants could produce a policy brief in the genre conventions of their own country's public health system, a structured programme recommendation, or a practitioner-facing evidence summary — all of which mapped to the same learning outcome (synthesising evidence to support a public health decision) but reflected the actual formats practitioners use in their real working contexts.

A shared rubric — developed with subject matter experts and piloted with a small beta group of practitioners before the course ran — made the criteria explicit regardless of format. Assessors evaluated the quality of the evidence synthesis, the rigour of the argument, and the appropriateness of recommendations for the stated context. Format was not evaluated; competency was.

The academic integrity concern was addressed directly: because each assessment was tied to a specific, real-world context that the participant named and described at the outset, the opportunity for generic plagiarism was significantly reduced. What participants submitted was necessarily individualized to their stated context.

The outcome was a significant improvement in completion rates for the assessment, and qualitative feedback from facilitators indicating that the quality of the submissions — the depth of analysis and the sophistication of the recommendations — was noticeably higher than in the original format.

What the beta review surfaced

The barriers that designers and experts had not anticipated

A structured beta review with twelve practitioners from three regions — before the revised program went live — surfaced two significant barriers that the design team had not anticipated.

The first was a navigational issue in the LMS: the way the course was structured assumed that participants would work through modules sequentially, and that they would have uninterrupted blocks of two to three hours for each module. In practice, many participants were accessing the course in short windows between clinical and community responsibilities, often on mobile devices with intermittent connectivity. The navigation structure — which did not allow easy bookmarking or re-entry at a specific point — was creating significant friction. A redesign of the module structure and the addition of explicit low-bandwidth alternatives for video content addressed this directly.

The second was more subtle: several participants in the beta group noted that the discussion forums — designed to build a learning community across the cohort — were producing interactions that felt extractive to participants from lower-income settings. Participants based in well-resourced contexts were drawing on the contextual knowledge of participants from lower-income settings without reciprocating, and the forum prompts were inadvertently structured in ways that encouraged this dynamic. The facilitation guidelines and forum prompts were revised with attention to power dynamics and knowledge exchange, not just community building.

What this illustrates

Both barriers were invisible to the design team and the subject matter experts — not because they were careless, but because neither group occupied the same position as the learners. The people closest to the barrier are the most useful guides to finding it. This is not a failure of the design team; it is an argument for structured learner review as a non-negotiable stage of any curriculum redesign.

What didn't change: Not everything in this redesign was possible to change. The program's accreditation framework required certain assessment formats and certain credit-hour structures that constrained the design. Some platform limitations in the LMS could not be resolved within the timeline. Faculty who had taught in the program for years had strong views about what the program should look like, and those views shaped what the design team could and could not propose. Inclusive curriculum design operates within real constraints. What changes is the honesty with which those constraints are named, and the commitment to addressing what can be addressed — rather than treating existing design choices as if they were inevitable.

Inclusive curriculum design creates change at two different horizons. The shorter-term outcomes are tangible and often measurable: reduced friction, more pathways, better completion. The longer-term shift is cultural — and has a much broader effect on how an organization thinks about who its learners are and what learning environments can do.

What changes — and when

Shorter term — within the course cycle and immediate redesign period

Barriers are named and removed at source
Rather than being managed through individual accommodation, barriers are identified in the design and addressed structurally. Learners who were previously excluded gain access to the learning — not through a workaround, but through a course that was designed for them.
Assessment reflects what learners actually know
When assessment formats are more closely aligned with learning outcomes, completion rates improve and the quality of demonstrated competency rises — particularly for learners whose prior experience and background differ from the assumed norm.
Accommodation requests decrease
When flexibility is built into the design — in assessment formats, content delivery, pacing, and navigation — many individual accommodation requests become unnecessary. The need for accommodation is not eliminated, but its volume and urgency are significantly reduced.
Faculty and team knowledge grows
The design process itself is a capacity-building experience. Faculty and staff who co-design inclusive curriculum emerge with practical knowledge — about UDL, about assessment equity, about accessibility — that they carry into future teaching and design work.

Longer term — across repeated cycles and sustained organizational commitment

Inclusion becomes a design habit, not a retrofit
When teams have experienced inclusive design in practice — not just in training — the questions change. Accessibility, learner variability, and equity considerations enter the design process at the beginning rather than being raised as problems at the end.
All learners benefit
The curb cut effect is real in curriculum: multimodal content, clear navigation, transparent assessment criteria, and flexible delivery benefit every learner, not only those for whom they were specifically designed. Inclusive design produces better learning environments for everyone in the cohort.
Retention and completion improve across groups
Research on inclusive curriculum consistently shows that accessibility improvements — particularly in assessment design and content flexibility — have the largest positive effects on completion and retention for learners from underrepresented groups, without negative effects on the cohort overall.
Institutional culture shifts
Organizations that have invested in genuine inclusive curriculum design tend to ask different questions about new programs: not "what accommodation do we need to add?" but "what does this design assume about who learners are — and is that assumption warranted?" That shift, when it takes hold, is the deepest form of institutional change.
On the evidence: systematic reviews of UDL implementation in higher education, and the broader literature on inclusive curriculum and belonging, consistently show that inclusive design supports retention, participation, and equity across learner populations. The strongest institutional case for investment in curriculum redesign is not the legal argument — though that argument is real — but the evidence that better-designed learning produces better outcomes for more learners.

The questions below are designed to help curriculum teams and faculty think honestly about where a program or course currently sits in relation to inclusive design. They are not a scoring instrument — there are no categories or ratings. They are meant to be worked through with colleagues who hold different roles and perspectives, because the most useful answers to these questions are rarely held by one person alone.

Take your time. The most productive conversations these questions generate are rarely comfortable. That is the point.

This reflection takes approximately fifteen to twenty minutes if worked through carefully. It works best as a structured team conversation rather than an individual exercise.

On whose knowledge is centred

Whose knowledge, frameworks, and ways of knowing are represented as authoritative in this curriculum — and whose are absent, marginal, or presented only as context for the dominant perspective?
Which scholarly traditions, geographic regions, or professional communities are over-represented in the curriculum's readings, case studies, and examples — and which are not present, or present only as research subjects rather than knowledge producers?
What would it take to meaningfully integrate the knowledge of the communities that this discipline works with and serves — not as data or case studies, but as epistemological contributions?

On what assessments actually measure

For each major assessment in this course or program: what does it actually measure? Is it measuring the stated learning outcome, or is it measuring something else — facility with a particular format, academic writing in a second language, performance under time pressure?
Which assessment formats in this curriculum are genuinely required by the learning outcome — and which are habitual, inherited from prior iterations, or assumed to be "standard" in the discipline without examination?
Who in your current or typical learner population is systematically disadvantaged by the assessment formats in this curriculum? Have you tested this assumption, or is it theoretical?

On flexibility and accommodation

When a learner in this program needs an accommodation, what does that process look like — for the learner, for the faculty member, and for the administrative team? Is it designed to be navigable, or is it designed to be rare?
What flexibility is already built into this curriculum — in timing, format, pacing, or content pathways? What flexibility does not yet exist that would reduce the need for individual accommodation requests?
Is "flexibility" in this curriculum currently experienced by learners as genuine choice, or as the accommodation track — something that signals difference rather than affirming variability as normal?

On whose presence is assumed

Who is the implied learner in this curriculum — the person the design assumes, whether explicitly or not? What language background, educational history, technological access, professional context, and physical and cognitive profile is that learner assumed to have?
In what ways does the curriculum currently require learners to adapt to it — rather than the curriculum having been designed to work for the range of people who actually enrol?
If your most marginalized learner navigated this curriculum from start to finish, where would they encounter friction — in the content, the assessments, the navigation, the facilitation, or the community dynamics of the cohort?

On what next steps are possible

What one change to this curriculum's assessment design would have the largest positive effect on the most learners — and what would it take to make that change within your current institutional constraints?
Who needs to be involved in any meaningful redesign of this curriculum — and who has historically not been at that table? What would it take to include them?
What is the most honest description of where this curriculum currently is in relation to inclusive design — and what is a realistic next step, given your actual resources, relationships, and institutional context?
These questions don't have quick answers, and that is by design. The programs that make the most meaningful progress on inclusive curriculum tend to be those that create genuine space for honest conversations about where the gaps actually are — not where they would like them to be. The other sections of this resource are intended to provide useful context and language for those conversations.