Assessment Reform in the Age of AI: A Turning Point for the Skills Sector

Kavitha Ravindran

Across the education and skills sector, conversations about artificial intelligence have been dominated by a single concern: cheating. Can students use AI to write essays? How do we detect AI-generated work? And what does this mean for the credibility of qualifications?

These are important questions. But they may also be the wrong starting point.

AI is not simply a challenge to assessment integrity. It is also revealing something deeper about the way we currently assess learning. In many ways, the rapid advancement of AI is exposing weaknesses that already existed in assessment systems: slow feedback cycles, heavy marking workloads, and assessment formats that often prioritise recall over genuine understanding.

Rather than viewing AI purely as a threat, the sector now has an opportunity to rethink assessment more fundamentally.

This moment could represent a turning point.

The Hidden Strain in Assessment Systems

For years, assessment has quietly carried a significant operational burden across the skills sector.

Teachers, trainers and assessors spend countless hours marking written responses, evaluating portfolios, and providing feedback. In many contexts, particularly in vocational education and apprenticeships, assessment often happens alongside full teaching workloads and administrative responsibilities.

The result is a system where feedback, arguably the most important part of assessment, is often delayed. Learners may wait days or weeks to receive insights on their work, long after the learning moment has passed.

In theory, assessment is designed to support learning. In practice, it can sometimes become a process focused primarily on grading and compliance.

This is where the emergence of AI begins to change the conversation.

AI and the Shift from Marking to Feedback

When people talk about AI in assessment, the assumption is often that technology will replace human markers. This fear is understandable, but it may miss the more meaningful opportunity.

AI has the potential to shift the focus of assessment away from marking as an administrative task and towards feedback as a learning tool.

Used responsibly, AI systems can analyse learner responses rapidly and generate structured insights that help educators understand where learners are struggling, where misconceptions lie, and where additional support may be needed.

Rather than waiting for the end of an assignment cycle, feedback could become faster, more consistent, and more actionable.

Importantly, this does not remove the role of the assessor. Instead, it allows human expertise to be directed where it matters most: interpreting complex responses, guiding learners, and exercising professional judgement.

In this model, AI supports the assessment process rather than replacing it.

What Regulators Are Signalling

Regulators are also beginning to shape how AI may be used within assessment systems. Recent work by Ofqual exploring the use of artificial intelligence in marking highlights both the potential and the challenges of the technology. The regulator emphasises that any use of AI must align with core principles of fairness, transparency and trust, and that AI should not replace human judgement in high-stakes assessment decisions.

Across the UK and internationally, there is a growing consensus that AI cannot be used as the sole decision-maker in high-stakes assessment. Transparency, fairness and accountability remain fundamental principles.

However, there is also growing recognition that AI may play a role in supporting assessment processes — particularly in areas such as feedback generation, quality assurance, and the analysis of learner responses.

In other words, the emerging regulatory model is not one of prohibition, but of careful integration.

Human oversight remains central, but technology can assist in improving the efficiency and consistency of assessment systems.

Rethinking What We Assess

Beyond the marking process itself, AI is also forcing the sector to reconsider the design of assessments.

If a learner can easily generate an essay using generative AI, then the question becomes: what skills are we truly trying to measure?

This challenge is already prompting educators to explore more authentic forms of assessment. Scenario-based tasks, applied problem-solving, professional discussions, and portfolio-based evidence may become increasingly important in evaluating real competence.

For the skills sector, this shift could be particularly powerful. Vocational qualifications are inherently designed to measure applied knowledge and practical ability: qualities that are often difficult to replicate through AI-generated responses alone.

AI may therefore act as a catalyst, accelerating a move toward assessments that better reflect real-world capability.

A Turning Point for the Skills Sector

The conversation about AI in education often focuses on disruption. But disruption can also create opportunities.

The skills sector has long been characterised by innovation, employer engagement, and a strong emphasis on practical competence. These strengths place it in a unique position to lead the conversation about how assessment can evolve in the age of AI.

If approached thoughtfully, AI could help reduce administrative burdens on educators, deliver faster and more meaningful feedback to learners, and support more robust and scalable assessment systems.

None of this will happen overnight. It will require collaboration between regulators, awarding organisations, training providers and technology developers.

But the direction of travel is becoming clear.

AI is not simply challenging assessment systems; it is inviting the sector to redesign them.

And for the skills sector, that redesign may represent one of the most important opportunities in a generation.

By Kavitha Ravindran, Co-Founder & Chief Growth Officer at sAInaptic 