From education to employment

Assessment Is Being Rewritten: What the Latest Research Means for FE & HE Right Now

Lindsey Poole

Assessment is at a crossroads. AI is accelerating change, student needs are shifting, and colleges are being asked to deliver fairer, more flexible, more future-proofed assessment, often with fewer resources. The good news? A wave of new research is giving us practical, evidence-based ways forward. Here’s what leaders, curriculum teams and quality managers need to know.

Flexible Assessment: From “Nice-to-Have” to Non-Negotiable

A major 2025 review in TechTrends finds that flexible assessment improves engagement and reduces anxiety, especially for diverse and working learners, a profile that mirrors much of the FE student body.
This includes flexibility in deadlines, task formats, and assessment weighting.

But the review also highlights a caution: flexibility is only effective when paired with clear scaffolding and boundaries, otherwise students face choice-overload and staff face unmanageable marking demands.

The QAA’s national framework on optionality in assessment echoes this, offering practical guidance for colleges trialling student choice.

Why it matters: Flexibility directly supports apprentices, adult learners, carers, and students balancing study with employment. These are exactly the groups FE is trying to retain and progress.

Formative Assessment Still Outperforms Everything Else

Recent studies are unequivocal: formative assessment works, especially when students self-assess and peer-assess.

A 2025 study found that students who reviewed high-quality peer work produced stronger assignments and deeper reflection.

Another review highlights how learning analytics and digital tools are enhancing the quality and frequency of peer feedback.

Why it matters: With large cohorts and staff time under pressure, peer- and self-assessment offer a scalable way to boost learning without increasing marking load.

AI Isn’t Just a Threat, It’s a Design Challenge

The sector’s instinct has been to “AI-proof” assessments, but the research points to something more productive: designing assessments that work with AI, not against it.

a) Human + AI Marking Models Are Emerging

A 2024–25 pilot compared human-marked exam scripts with the same scripts graded by a generative-AI model using the same rubric. The scores were closely aligned, though the AI-generated grades still required human verification.
The authors argue for hybrid models that free up staff time while retaining academic judgement.

b) Assessment Twins: Two Tasks, One Outcome

A striking new proposal, Assessment Twins, pairs two assessments that measure the same learning outcome using different modes (e.g., a written task + oral defence).
This approach strengthens validity and reduces AI-enabled malpractice without resorting to surveillance.

c) Hybrid Grading Can Reduce Workload by 88%

In another pilot, a hybrid grading system (AI first, educator second) cut marking time by 88% while maintaining high reliability.
For teams facing rising class sizes or increasing assessment points, this is a significant opportunity.

Why it matters: FE and HE assessment workloads are high, especially in programmes with frequent checkpoints. Hybrid models allow educators to spend more time teaching and less time triaging scripts.

Neurodiversity: Designing Assessment That Works for Every Brain

Neurodiverse learners, including those with ADHD, dyslexia, autism, or working memory differences, are driving an important shift in assessment design across FE and HE.

Recent research shows that assessment flexibility, uncluttered task design, and predictable structures significantly improve performance and reduce anxiety for neurodiverse students (sector-wide findings mirrored across QAA guidance and disability support research).

Key findings emerging across the literature:

  • Flexible formats (e.g., choice between written, oral, or multimedia submissions) allow learners to demonstrate knowledge in strengths-based ways.
  • Chunked assessments and staged submissions support working memory and reduce cognitive overload.
  • Formative check-ins and clear exemplars promote confidence and clarity.
  • Reduced ambiguity in instructions increases fairness across diverse cognitive profiles.

Raised expectations for accessibility, along with the 2025 sector focus on inclusive practice, mean FE providers are being encouraged to design assessments that are intrinsically inclusive, not retrofitted with adjustments.

Why it matters: FE settings often serve higher proportions of neurodiverse learners. Inclusive assessment isn’t just good practice, it directly improves retention, success, and learner wellbeing.

Beware the Over-Assessment Trap

A recent Times Higher Education analysis warns that universities, and by extension FE, are increasingly over-assessing in response to AI. Frequent assessments intended to “catch out” AI use can lead to student burnout and excessive staff workload.

Implications: Quality teams should resist increasing assessment volume without evidence. Fewer, better-designed assessments are more effective, and more manageable for staff.

Authentic Assessment Must Be Re-Imagined

AI tools have exposed a flaw in traditional “authentic” assessment: it’s not enough to set real-world tasks if AI can complete much of the work.

Sector analyses argue for critical authenticity tasks that require judgement, decision-making, personalisation, collaboration, or iterative development. These elements are significantly harder to outsource to AI.

Why it matters: Vocational and apprenticeship programmes are primed for this shift, offering rich opportunities for workplace-situated, multimodal assessment.

What FE Leaders Should Do Next

  • Build flexibility into assessment policies, with structure
  • Increase formative and peer-assessment to boost learning
  • Pilot hybrid human-AI marking models
  • Experiment with new designs such as Assessment Twins
  • Use gamification to increase engagement
  • Avoid increasing assessment volume “because of AI”
  • Focus on tasks that require judgement, not regurgitation

Assessment Is Changing: FE and HE Can Lead the Way

Assessment is no longer a static process. It is being reshaped by technology, pedagogy, and policy, and FE in particular is uniquely placed to innovate.

With diverse learners, applied curricula, and close employer links, FE can model what fair, flexible, authentic, AI-aware assessment looks like.

The research is giving us a roadmap. Now the challenge is sector-wide leadership and experimentation.

By Lindsey Poole, Functional Skills lead, academic mentor at the University of Exeter
