Knowledge Resource

Inclusive Course and Curriculum Design

A guide to what principled, equity-centred learning design involves — its knowledge foundations, process, key decisions, and what it means for learners over time.

Screen reader and keyboard-optimised version. This is a fully linear, plain-structure version of this resource designed for use with assistive technologies. All content is presented in reading order with no tabs, accordions, or JavaScript. An interactive version with tabs and expandable sections is also available.

Section 1

Knowledge Foundations

Inclusive course and curriculum design is not a method applied to content — it is a stance embedded in every decision made during the design process. The knowledge it draws on spans learning theory, equity-oriented pedagogy, accessibility standards, and institutional systems. Understanding what underpins this work clarifies why it cannot be reduced to a checklist, and why getting it right requires more than good intentions.

The knowledge pillars

Rigorous inclusive design draws on several interconnected bodies of knowledge. None of them is sufficient on its own. In practice, they work together — the frameworks providing structure, the pedagogy providing values, and the systems knowledge ensuring that changes are durable.

Learning design theory

Backward design and constructive alignment — beginning with learning outcomes and working backward to assessments and activities — are not just efficiency tools. When used well, they are equity tools: they force the question of whose knowledge counts, what evidence of learning looks like, and whether the design serves the full range of learners in the room, not a hypothetical average.

Inclusive and anti-oppressive pedagogy

Understanding how race, class, gender, disability, language, and other social positions shape access to learning — and how curriculum content, representation, and classroom dynamics can reinforce or disrupt those patterns. This includes actively critiquing hidden bias and deficit assumptions in existing curricula, and designing from a stance that centres equity rather than neutrality.

Decolonizing curriculum

Examining whose knowledge, whose epistemologies, and whose ways of knowing are treated as authoritative — and whose are absent, tokenized, or subordinated. Decolonizing curriculum is not a one-time addition of diverse authors; it is a structural rethinking of what counts as expertise, evidence, and valid ways of demonstrating competence.

Universal Design for Learning

UDL — developed by CAST — offers a practical framework for building flexibility into learning from the outset. It organizes design around multiple means of Engagement (why learners engage), of Representation (what they engage with), and of Action and Expression (how they demonstrate what they know). The key principle: design for learner variability, not the mythical average student.

Learner variability and differentiated instruction

Learners bring different prior experiences, languages, cognitive profiles, cultural frameworks, and socio-emotional needs. Designing for variability means building in multiple pathways, not just one pathway with accommodations added afterward. This includes understanding how neurodiversity, disability, and situational factors shape how learning happens — and designing accordingly.

Digital accessibility

WCAG-aligned practices — captioned media, alternative text, readable document structure, keyboard-navigable interfaces — are the technical floor of accessible learning design, not the ceiling. Accessibility standards apply to every piece of digital content produced in a course: slides, documents, videos, assessments, and LMS pages. They are built into design from the start, not added at the end.
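To make the "automated tools" side of this technical floor concrete, the sketch below flags images that have no text alternative at all (part of WCAG Success Criterion 1.1.1), using only Python's standard library. It is a minimal illustration, not a complete audit tool; real reviews pair many such checks with manual assistive-technology testing.

```python
# Illustrative sketch of one automated WCAG check: <img> elements with
# no alt attribute. An empty alt ("") is valid for decorative images,
# so only a truly missing attribute is flagged here.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects the src of every <img> tag lacking an alt attribute."""
    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.flagged.append(attrs.get("src", "(no src)"))

def find_images_missing_alt(html_text):
    checker = MissingAltChecker()
    checker.feed(html_text)
    return checker.flagged

page = '<p><img src="chart.png"><img src="logo.png" alt=""></p>'
print(find_images_missing_alt(page))  # prints ['chart.png']
```

A check like this catches the mechanical cases quickly; whether an alt text is actually meaningful for the image still requires human judgment.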

Community, belonging, and psychological safety

Structurally inclusive learning environments are necessary but not sufficient. Learners must also feel they belong — that their presence, contributions, and ways of knowing are genuinely valued. Belonging-centred design attends to the conditions that make learning possible: psychological safety, respectful interaction across difference, and relational pedagogy that acknowledges the whole person.

Amplifying what organizations already know

Inclusive curriculum design does not begin from zero. Every institution — whether a university faculty, a public sector learning team, or a research organization — carries deep disciplinary knowledge, existing relationships with learner communities, and hard-won understanding of the contexts in which their learners work. That expertise is irreplaceable — and it is the starting material for inclusive design, not an obstacle to it.

The most effective curriculum redesign processes amplify what organizations already know about their learners and their field, while bringing new lenses — equity, accessibility, UDL — to bear on how that knowledge is structured, sequenced, and made available to learners. The goal is not to replace disciplinary expertise with pedagogical frameworks, but to integrate them so that the result is both academically rigorous and genuinely accessible.

Inclusive design exists within a legal and institutional context. Human rights legislation requires that educational institutions not discriminate on grounds including disability, and that accommodation be provided to the point of undue hardship. In Canada, this obligation flows from the Canadian Human Rights Act, provincial human rights codes, the Accessible Canada Act (2019), and — in British Columbia — the Accessible British Columbia Act (2021). For internationally operating organizations, equivalent obligations exist in the jurisdictions in which they operate and in which their learners are based.

But accommodation is a floor, not a goal. Equity frameworks push beyond legal minimums toward the substantive changes in design and structure that reduce the need for individual accommodation in the first place. An organization that relies on accommodation as its primary accessibility strategy is managing barriers, not removing them. Proactive, inclusive design reduces the volume and urgency of accommodation requests — and distributes the responsibility for access across the design rather than concentrating it on individual learners.

The central distinction

Inclusion as a structural principle is built into the design of a learning environment from the beginning — into outcomes, assessments, content, and delivery. Inclusion as a bolt-on is a diversity statement in the syllabus, one alternative format added after the fact, or a single guest speaker representing an entire community. The difference is not just pedagogical; it determines whether marginalized learners experience genuine access or a gesture toward it.
On the evidence base: systematic reviews of UDL implementation in higher education consistently show that learner variability — not disability category — is the most useful unit of analysis for inclusive design. Designing for the widest range of learners produces better outcomes for all learners, not only those with identified disabilities.

Section 2

The Design Process

Inclusive learning design is not a linear sequence followed once. It is an iterative process: ideas are developed, tested with the people closest to the learning, revised in light of what they surface, and refined through repeated cycles before and after launch. This approach consistently produces more accessible, more rigorous, and more durable learning environments than a single-pass design-and-deliver model.

The stages below describe a comprehensive approach. In any given engagement, scope, timeline, and institutional context shape which stages are most intensive and which are abbreviated. Understanding the full process clarifies what is gained or lost when particular stages are compressed.

Intake and context

Understanding the learning environment before designing it

Before any design work begins, the process starts with careful listening. Who are the learners — their professional contexts, prior experiences, language backgrounds, access needs, and the communities they serve? What does the institution or organization already know about where learners struggle and where they thrive? What are the constraints: platform capabilities, faculty capacity, assessment governance, timeline?

This stage also surfaces the existing expertise that makes the design authentic. Subject matter experts bring irreplaceable knowledge of their discipline, their learners, and their field. That knowledge shapes everything that follows.

This stage produces
  • Shared understanding of learner population, context, and known barriers
  • Documented institutional constraints and design parameters
  • An inventory of existing expertise and materials to build from
  • Clarity on scope, timeline, and decision-making authority
Skipping or rushing this stage is one of the most common reasons curriculum redesigns produce materials that don't land — because they aren't built from or for the real context of the learners.

Outcomes development

What learners will be able to do — and for whom

Learning outcomes are not bureaucratic requirements. They are the design's first equity decision: they determine what counts as learning, whose ways of demonstrating competence are legitimate, and how the course will be assessed. Outcomes developed without attention to equity often inadvertently centre one kind of learner, one disciplinary tradition, or one way of knowing.

Inclusive outcome development asks: are these outcomes genuinely discipline-relevant, or do they reward performance under specific conditions? Do they allow for diverse ways of demonstrating competence? Are they written in language accessible to learners, not just designers? Do they reflect the actual professional contexts learners will enter?

This stage is iterative — outcomes are drafted, reviewed with subject matter experts and, where possible, with learners, and revised in light of what that review surfaces.

This stage produces
  • Measurable, discipline-authentic learning outcomes aligned to program goals
  • Outcomes mapped to assessment methods and learning activities
  • A shared language for what success looks like in this context

Assessment design

The most consequential equity decision in the course

Assessment design is treated as its own stage because it is where inclusion most often breaks down — and where the most significant equity gains are available. Assessment decisions determine which learners can demonstrate their competence and which cannot, regardless of what they actually know.

This stage develops assessment approaches that are flexible without sacrificing rigour, transparent in criteria, and genuinely aligned to learning outcomes rather than to performance under specific conditions. It also addresses the institutional anxieties that often block assessment innovation.

See Section 3 below for a full treatment of assessment design.

This stage produces
  • An assessment strategy aligned to learning outcomes and equity principles
  • Flexible and alternative assessment formats where appropriate
  • Transparent rubrics and criteria developed with subject matter experts
  • A rationale for assessment decisions that can be shared with governance bodies

Content and modality development

Multiple means, multiple pathways

Content is developed iteratively, in working cycles with subject matter experts, rather than handed off for production after a single design session. This keeps the discipline knowledge and the design sensibility in conversation throughout — and allows for course correction when content doesn't land the way it was intended.

UDL principles shape every content decision: is material available in more than one format? Does it assume a particular language background or cultural reference point? Are representations of practitioners and communities in the discipline diverse and authentic? Does the sequencing build understanding or assume it?

Delivery mode — online, hybrid, in-person — shapes both what is possible and what is required. Each mode has distinct accessibility implications. Online delivery requires WCAG-aligned digital content and platform accessibility. Hybrid delivery requires deliberate design so that neither modality is a degraded version of the other. In-person delivery requires attention to physical access, sensory access, and psychological safety in the room.

This stage produces
  • Multimodal learning materials developed in working cycles with SMEs
  • WCAG-aligned digital content: captioned media, accessible documents, navigable LMS pages
  • Content sequenced and scaffolded for the actual learner population
  • Representations and examples that reflect the diversity of the field

Review with learners and subject matter experts

The people closest to the barrier are the most useful guides

Before any course goes live, a structured review cycle involves the people who will actually use it. This is not a perfunctory sign-off — it is where the design is tested against reality. Learners with disabilities, learners from underrepresented backgrounds, and learners with different professional contexts often surface barriers that subject matter experts and designers, working together, did not anticipate.

Beta review is structured: specific tasks, specific questions, specific dimensions of the learning experience to evaluate. Feedback is documented systematically and used to make targeted revisions. This is also where accessibility testing against WCAG criteria happens — automated tools first, then manual review with assistive technologies.
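One of the automated checks mentioned above can be sketched in a few lines. The example below looks for skipped heading levels (an h2 followed directly by an h4, for instance), which undermines the document outline that screen reader users navigate by. It is illustrative only, and complements rather than replaces manual review with assistive technologies.

```python
# Illustrative structural check: heading levels that skip (h2 -> h4).
# Screen readers expose headings as a navigable outline, so a skipped
# level suggests the tag was chosen for its visual size, not structure.
from html.parser import HTMLParser

class HeadingOrderChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.previous_level = 0
        self.skips = []  # (previous_level, new_level) pairs

    def handle_starttag(self, tag, attrs):
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            level = int(tag[1])
            if self.previous_level and level > self.previous_level + 1:
                self.skips.append((self.previous_level, level))
            self.previous_level = level

def find_heading_skips(html_text):
    checker = HeadingOrderChecker()
    checker.feed(html_text)
    return checker.skips

page = "<h1>Course</h1><h2>Module 1</h2><h4>Activity</h4>"
print(find_heading_skips(page))  # prints [(2, 4)]
```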

This stage produces
  • Documented feedback from learner review, organized by theme and priority
  • WCAG accessibility review findings for all digital content
  • Targeted revisions to content, assessment, and navigation based on review findings
  • A record of what was changed and why
The organizations that produce the most genuinely accessible learning environments are those that treat learner review as a design resource, not a validation step.

Launch, iteration, and handoff

Inclusive design as ongoing practice, not one-time delivery

Launch is not the end of the design process. Learner experience data, facilitator observations, and formal evaluation all surface new information about what is working and what is not. A robust handoff ensures that the people responsible for running and maintaining the course understand its design rationale — so that future edits don't inadvertently reintroduce barriers that were deliberately removed.

Sustainable inclusive design requires institutional infrastructure: accessible templates, design guidelines, procurement standards that reflect accessibility requirements, and faculty development that builds capacity over time.

This stage produces
  • A documented design rationale that can guide future maintenance and iteration
  • Evaluation framework for tracking learner experience and identifying emerging barriers
  • Recommendations for institutional systems and infrastructure
  • A plan for the next iteration cycle
On co-design vs. consultation: Consultation asks stakeholders to react to decisions already made. Co-design involves them in making the decisions. Subject matter experts who are genuine partners — rather than reviewers of a completed draft — produce learning materials that are more accurate, more contextually grounded, and more likely to be sustained after the engagement ends. The same is true when learners are involved. The people closest to a barrier are usually the most useful guides to removing it.

Section 3

Assessment Design

Assessment is where inclusion most often breaks down — and where the most significant equity gains are available. The format of an assessment can exclude learners who have fully achieved the intended learning outcomes. Addressing this is not about lowering standards; it is about ensuring that assessments measure what they are designed to measure.

Why assessment is the central equity question

A technically accessible course — captioned videos, readable documents, navigable platforms — can still systematically exclude learners if its assessments assume a narrow set of abilities unrelated to the learning outcomes. A timed written examination in a second language measures English proficiency and processing speed as much as it measures disciplinary knowledge. An oral presentation requirement excludes learners with communication disabilities regardless of the depth of their understanding. A traditional research essay may privilege the conventions of a single academic tradition while penalizing other equally rigorous ways of synthesizing and communicating knowledge.

The question at the centre of inclusive assessment design is not "how do we accommodate learners who can't do the standard assessment?" It is "what does this assessment actually measure — and is that what we intend to measure?"

The critical distinction

Assessing the learning outcome means measuring whether a learner has achieved the intended competency. Assessing performance under specific conditions means measuring whether a learner can demonstrate that competency in one particular way, under one particular set of constraints. These are not the same thing — and conflating them is where most assessment-based exclusion originates.

What flexible assessment is — and is not

Flexible and alternative assessment is not the same as reduced expectations, grade inflation, or academic dishonesty. It is the design of assessment tasks that allow multiple ways of demonstrating the same competency — so that the format of the assessment is not itself the barrier.

In research and graduate education contexts, this might mean: a literature synthesis completed as a traditional annotated bibliography, a structured evidence map, or a recorded conference-style presentation — each demonstrating the same research competency. In public sector professional development, it might mean a policy analysis produced as a written brief, a stakeholder presentation, or a structured decision memo — because the outcome is synthesizing and communicating evidence, not producing one specific document type.

What makes alternative assessment rigorous is not the format — it is the quality of criteria. When assessment criteria are clearly tied to the learning outcome, the format becomes a choice about how to demonstrate competency, not a loophole around demonstrating it.

Designing with transparent criteria

Transparent assessment criteria — rubrics co-developed with subject matter experts, shared with learners before they begin, and applied consistently regardless of format — are one of the most powerful equity levers available to curriculum designers. When learners understand what success looks like and how it will be evaluated, the hidden advantage of cultural familiarity with academic conventions is reduced. Everyone is working from the same specification.
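The idea that comparability comes from shared criteria rather than identical formats can be made concrete by representing a rubric as data, so the score is computed from criterion ratings alone and the submission format never enters the calculation. The criterion names and weights below are invented for illustration, not a recommended instrument.

```python
# Illustrative only: a format-blind rubric. Criterion names and weights
# are invented examples. The structural point: the score is a function
# of criterion ratings, so format is a choice, not a variable.
RUBRIC = {
    "evidence_synthesis": 0.4,  # quality of synthesis across sources
    "analysis_depth":     0.3,  # reasoning from evidence to conclusions
    "communication":      0.3,  # clarity for the intended audience
}

def score(ratings, rubric=RUBRIC, scale=4):
    """Weighted score in [0, 1] from per-criterion ratings on 0..scale."""
    assert set(ratings) == set(rubric), "rate every criterion, no extras"
    return sum(rubric[c] * ratings[c] / scale for c in rubric)

# Two learners, two different formats, one rubric:
essay_ratings = {"evidence_synthesis": 4, "analysis_depth": 3, "communication": 3}
video_ratings = {"evidence_synthesis": 4, "analysis_depth": 3, "communication": 3}
print(score(essay_ratings) == score(video_ratings))  # prints True
```

Whether implemented in software or on paper, the design choice is the same: the rubric evaluates competency against outcome-aligned criteria, and the format disappears from the evaluation.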

Authentic assessment in professional disciplines

In higher education and research contexts, authentic assessment — tasks that reflect what graduates and practitioners actually do — has a dual advantage. It is more motivating for learners, and it is often more inclusive by default, because it draws on the knowledge and professional contexts learners already bring, rather than requiring mastery of a narrowly defined academic form.

A researcher who designs and documents a genuine inquiry process, a public sector professional who analyses a real policy problem from their organizational context, or a graduate student who produces a discipline-relevant output for an actual audience is demonstrating competency in a format that directly reflects their working reality. The assessment is rigorous because the task is real, not because the format is conventional.

Addressing the institutional anxieties

Assessment innovation in professional and academic programs often runs into legitimate institutional concerns. It is worth addressing them directly.

On academic integrity

Flexible assessment formats, particularly those tied to authentic professional contexts, are often more difficult to plagiarize than standardized formats — because they require the learner to apply knowledge to a specific situation, not reproduce a general argument. Authentic tasks produce more individualized work, not less.

On comparability

Comparability across learners is produced by shared, transparent criteria — not by identical formats. If the criteria are clear and consistently applied, two learners who demonstrate the same competency through different formats can be equitably evaluated. This is standard practice in professional licensure and competency-based education.

On grade equity

Research on flexible and alternative assessment consistently shows that learner performance improves when assessment formats are better aligned to learning outcomes — particularly for learners from underrepresented groups who are often disadvantaged by assessment formats that assume a narrow cultural and linguistic background.

On governance

Assessment innovation requires a clear rationale that can be articulated to academic committees, professional bodies, and accreditation reviewers. The rationale is straightforward: these assessments measure the intended learning outcomes more directly than the alternatives, and they do so in ways that give all learners a genuine opportunity to demonstrate their competency.

The strongest case for inclusive assessment design is not the equity argument alone, though that case is compelling. It is that better-aligned assessments are more valid measures of learning — and validity is the foundational requirement for any assessment in a professional education context.

Section 4

Key Design Decisions

No two curriculum design engagements look the same. The decisions an organization makes about scope, entry point, who is involved, and what constraints are in play determine what kind of design process is appropriate and what it can realistically produce.

These decisions fall into four areas:
  • Scope: what is being designed or redesigned
  • Entry point: where the engagement begins
  • Who is at the table: which roles, perspectives, and communities are involved in the design
  • Delivery mode: online, hybrid, or in-person

On retrofit vs. new build: Retrofit — adding accessibility and flexibility to existing materials — is the most common entry point, and it is also the most limited. When a course is built on inaccessible assumptions about who learners are and what they can do, retrofitting adds layers of accommodation without addressing the underlying design problem. This does not mean retrofit is never appropriate — sometimes the constraints of time, resources, and faculty capacity make it the realistic option. But it means being honest about what retrofit can and cannot produce, and identifying the longer-term redesign work that retrofitting defers.

Section 5

Illustrated Scenario

The following is a composite, illustrative scenario. It is not a description of a real institution or specific engagement, but it is constructed from real patterns — the kinds of barriers, decisions, and turning points that appear consistently in curriculum design work with higher education institutions, public sector organizations, and research bodies.

Illustrative scenario — composite

Redesigning a professional certificate program for a globally distributed learner population

A well-established professional development program in global public health had been running successfully in a face-to-face format for over a decade. When the organization moved to a blended online delivery model to reach practitioners in lower-income settings — including health workers in sub-Saharan Africa, South Asia, and Central America — they discovered that the program as designed did not travel well. Completion rates were lower than expected. Participants in the new cohorts were dropping key assessments. Facilitators were spending significant time managing accommodation requests they hadn't anticipated. The program's content was strong. The design was not.

The curriculum redesign began with a thorough intake process. Conversations with current and former participants, facilitators, and regional coordinators revealed several patterns that had not been visible from headquarters: the program's default assumption of reliable high-bandwidth internet access was creating barriers for practitioners in rural settings; the primary assessment — a 3,000-word policy analysis written in formal academic English — was functioning as a test of academic writing in a second or third language, not a test of public health policy competency; and the visual and cultural references in the case studies were almost entirely drawn from North American and European contexts, making them feel abstract to participants working in very different epidemiological and systemic environments.

Subject matter experts — senior public health practitioners, not just academics — were brought into the design process as genuine co-designers. Their knowledge of what practitioners in the field actually needed to know and be able to do fundamentally reshaped the learning outcomes. Several outcomes that had been written around academic conventions were revised to reflect real professional tasks: analysing a disease burden dataset and producing a brief for a ministry of health; facilitating a community consultation in a resource-limited setting; adapting an evidence-based intervention to a specific epidemiological and cultural context.

The assessment design moment

When the central barrier became visible

The most consequential single decision in the redesign was rethinking the policy analysis assessment. The original assessment asked participants to write a 3,000-word evidence synthesis in formal academic English, structured according to conventions established in North American and European public health scholarship. For participants who had trained in those traditions, the task was familiar. For practitioners who had spent their careers in public health practice — producing briefs, policy memos, programme reports, and ministerial presentations — the format was foreign even though the underlying competency was not.

What the redesign changed

The assessment was replaced with a format choice: participants could produce a policy brief in the genre conventions of their own country's public health system, a structured programme recommendation, or a practitioner-facing evidence summary. A shared rubric — developed with subject matter experts and piloted with a small beta group — made the criteria explicit regardless of format. Format was not evaluated; competency was.

The outcome was a significant improvement in completion rates for the assessment, and qualitative feedback from facilitators indicating that the quality of the submissions — the depth of analysis and the sophistication of the recommendations — was noticeably higher than in the original format.

What the beta review surfaced

The barriers that designers and experts had not anticipated

A structured beta review with twelve practitioners from three regions — before the revised program went live — surfaced two significant barriers that the design team had not anticipated.

The first was a navigational issue in the LMS: the way the course was structured assumed that participants would work through modules sequentially and have uninterrupted blocks of two to three hours for each module. In practice, many participants were accessing the course in short windows between clinical and community responsibilities, often on mobile devices with intermittent connectivity. A redesign of the module structure and the addition of explicit low-bandwidth alternatives for video content addressed this directly.

The second was more subtle: several participants noted that the discussion forums were producing interactions that felt extractive to participants from lower-income settings. Participants based in well-resourced contexts were drawing on the contextual knowledge of participants from lower-income settings without reciprocating, and the forum prompts were inadvertently structured in ways that encouraged this dynamic. The facilitation guidelines and forum prompts were revised with attention to power dynamics and knowledge exchange.

What this illustrates

Both barriers were invisible to the design team and the subject matter experts — not because they were careless, but because neither group occupied the same position as the learners. The people closest to the barrier are the most useful guides to finding it. Structured learner review is not a validation step; it is a design resource.

What didn't change: Not everything in this redesign was possible to change. The program's accreditation framework required certain assessment formats and credit-hour structures. Some platform limitations in the LMS could not be resolved within the timeline. Faculty who had taught in the program for years had strong views that shaped what the design team could and could not propose. Inclusive curriculum design operates within real constraints. What changes is the honesty with which those constraints are named, and the commitment to addressing what can be addressed.

Section 6

Learner Impact

Inclusive curriculum design creates change at two different horizons. The shorter-term outcomes are tangible and often measurable. The longer-term shift is cultural — and has a much broader effect on how an organization thinks about who its learners are and what learning environments can do.

Shorter-term outcomes — within the course cycle and immediate redesign period
Barriers are named and removed at source
Rather than being managed through individual accommodation, barriers are identified in the design and addressed structurally. Learners who were previously excluded gain access to the learning — not through a workaround, but through a course that was designed for them.

Assessment reflects what learners actually know
When assessment formats are better aligned to learning outcomes, completion rates improve and the quality of demonstrated competency increases — particularly for learners whose prior experience differs from the assumed norm.

Accommodation requests decrease
When flexibility is built into the design — in assessment formats, content delivery, pacing, and navigation — many individual accommodation requests become unnecessary. The need for accommodation is not eliminated, but its volume and urgency are significantly reduced.

Faculty and team knowledge grows
The design process itself is a capacity-building experience. Faculty and staff who co-design inclusive curriculum emerge with practical knowledge about UDL, assessment equity, and accessibility that they carry into future work.
Longer-term outcomes — across repeated cycles and sustained organizational commitment
Inclusion becomes a design habit, not a retrofit
When teams have experienced inclusive design in practice, the questions change. Accessibility, learner variability, and equity considerations enter the design process at the beginning rather than being raised as problems at the end.

All learners benefit
The curb-cut effect is real in curriculum: multimodal content, clear navigation, transparent assessment criteria, and flexible delivery benefit every learner, not only those for whom they were specifically designed.

Retention and completion improve across groups
Research on inclusive curriculum consistently shows that accessibility improvements — particularly in assessment design and content flexibility — have the largest positive effects on completion and retention for learners from underrepresented groups, without negative effects on the cohort overall.

Institutional culture shifts
Organizations that have invested in genuine inclusive curriculum design tend to ask different questions about new programs: not "what accommodation do we need to add?" but "what does this design assume about who learners are — and is that assumption warranted?"
On the evidence: systematic reviews of UDL implementation in higher education, and the broader literature on inclusive curriculum and belonging, consistently show that inclusive design supports retention, participation, and equity across learner populations. The strongest institutional case for investment in curriculum redesign is not the legal argument — though that argument is real — but the evidence that better-designed learning produces better outcomes for more learners.
↑ Back to contents

Section 7

Organizational Reflection

The questions below are designed to help curriculum teams and faculty think honestly about where a program or course currently sits in relation to inclusive design. They are not a scoring instrument. They are meant to be worked through with colleagues who hold different roles and perspectives, because the most useful answers are rarely held by one person alone.

On whose knowledge is centred

Whose knowledge, frameworks, and ways of knowing are represented as authoritative in this curriculum — and whose are absent, marginal, or presented only as context for the dominant perspective?
Which scholarly traditions, geographic regions, or professional communities are over-represented in the curriculum's readings, case studies, and examples — and which are not present, or present only as research subjects rather than knowledge producers?
What would it take to meaningfully integrate the knowledge of the communities that this discipline works with and serves — not as data or case studies, but as epistemological contributions?

On what assessments actually measure

For each major assessment in this course or program: what does it actually measure? Is it measuring the stated learning outcome, or is it measuring something else — facility with a particular format, academic writing in a second language, or performance under time pressure?
Which assessment formats in this curriculum are genuinely required by the learning outcome — and which are habitual, inherited from prior iterations, or assumed to be "standard" without examination?
Who in your current or typical learner population is systematically disadvantaged by the assessment formats in this curriculum? Have you tested this assumption, or is it theoretical?

On flexibility and accommodation

When a learner in this program needs an accommodation, what does that process look like — for the learner, for the faculty member, and for the administrative team? Is it designed to be navigable, or is it designed to be rare?
What flexibility is already built into this curriculum — in timing, format, pacing, or content pathways? What flexibility does not yet exist that would reduce the need for individual accommodation requests?
Is "flexibility" in this curriculum currently experienced by learners as genuine choice, or as the accommodation track — something that signals difference rather than affirming variability as normal?

On whose presence is assumed

Who is the implied learner in this curriculum — the person the design assumes, whether explicitly or not? What language background, educational history, technological access, professional context, and physical and cognitive profile is that learner assumed to have?
In what ways does the curriculum currently require learners to adapt to it — rather than the curriculum having been designed to work for the range of people who actually enrol?
If your most marginalized learner navigated this curriculum from start to finish, where would they encounter friction — in the content, the assessments, the navigation, the facilitation, or the community dynamics of the cohort?

On what next steps are possible

What one change to this curriculum's assessment design would have the largest positive effect on the most learners — and what would it take to make that change within your current institutional constraints?
Who needs to be involved in any meaningful redesign of this curriculum — and who has historically not been at that table? What would it take to include them?
What is the most honest description of where this curriculum currently is in relation to inclusive design — and what is a realistic next step, given your actual resources, relationships, and institutional context?
These questions don't have quick answers, and that is by design. The programs that make the most meaningful progress on inclusive curriculum tend to be those that create genuine space for honest conversations about where the gaps actually are — not where they would like them to be.
↑ Back to contents