From education to employment

Maintaining Assessment Integrity in the Age of AI

Dr. James Gupta

Why maintaining academic integrity now requires secure infrastructure, proportionate controls and institutional oversight

There is growing evidence that generative AI tools are exposing structural weaknesses in traditional assessment models, particularly coursework and unsupervised online exams. What was once a largely theoretical concern has quickly become a systemic issue, with institutions reporting increasing difficulty in distinguishing independent student work from AI-assisted outputs.

Research indicates that the vast majority of UK undergraduates now use generative AI tools in their studies, with adoption accelerating rapidly. Recent survey data suggests that as many as 88% of students are using AI in assessments, up from just over half the previous year. At the same time, sector guidance highlights the growing challenge of evidencing academic misconduct in digital environments, particularly where assessment conditions are difficult to verify.

Taken together, these developments point to a shift in the nature of academic integrity itself. It can no longer be understood solely as a matter of pedagogy or assessment design, but must be considered within a broader institutional context that includes governance, compliance and risk.

From teaching issues to institutional risk

This shift is occurring alongside the widespread adoption of digital assessment. Online and hybrid approaches are now embedded across much of UK higher education, reflecting broader changes in how teaching and learning are delivered. 

However, while delivery models have evolved rapidly, governance frameworks have not always kept pace. Institutions are therefore increasingly exposed to risks ranging from inconsistent assessment conditions to limited visibility over how work is completed, as well as challenges in responding to appeals and regulatory scrutiny.

Regulators have emphasised the importance of maintaining credible and reliable assessment practices, particularly in digital contexts. This places greater responsibility on institutions not only to design robust assessments but also to demonstrate that they are delivered and monitored consistently and defensibly.

The case for risk-based assessment approaches

One of the central challenges in this environment is recognising that not all forms of assessment carry the same level of risk. A high-stakes, time-constrained examination presents a very different exposure to misuse than a piece of coursework completed over several weeks.

Increasingly, sector guidance points towards risk-based approaches that align the level of control applied to the level of risk involved. This reflects principles seen in other regulated domains, where proportionality is essential to maintaining both effectiveness and fairness.

The implication is that institutions need more nuanced models of assessment governance, where different types of assessment are supported by different levels of oversight, rather than relying on a single, uniform approach.

Proportionate control, not blanket surveillance

Debates around academic integrity are often framed in terms of stricter monitoring versus student privacy. In practice, sector bodies emphasise a more balanced approach, where assessment design remains a primary safeguard, supported by proportionate and transparent controls, where appropriate.

Designing assessments that require application, interpretation and critical thinking can reduce opportunities for misuse, while time constraints and structured formats can help ensure consistency. However, design alone is not always sufficient, particularly in high-stakes contexts, which is why institutions are increasingly combining multiple forms of assurance.

Crucially, the aim is not to maximise surveillance, but to ensure that any controls in place are justified, proportionate and aligned with the level of risk being managed.

Rethinking digital assessment infrastructure

As digital assessment becomes more complex, institutions are also reassessing the role of their core systems. Learning management systems remain central to teaching delivery, but were not originally designed to support high-stakes, synchronous assessment or detailed evidencing of assessment conditions.

This is reflected in wider technology strategy trends, with many higher education leaders shifting investment away from legacy infrastructure towards more flexible digital platforms that better support teaching, learning and assessment. Assessment delivery is therefore increasingly being treated as core institutional infrastructure, rather than an extension of teaching systems, with implications for performance, reliability and oversight.

Why auditability now matters as much as prevention

In an AI-enabled environment, preventing misconduct is only part of the challenge. Institutions must also be able to demonstrate that assessments are fair, consistent and defensible when questioned.

Guidance highlights the importance of maintaining clear audit trails that show how assessments were delivered, what controls were in place and whether conditions were applied consistently. Without this level of visibility, even well-designed assessments can be difficult to defend against appeals or regulatory scrutiny. With it, institutions are far better positioned to maintain confidence in their processes and outcomes.

A governance challenge, not just a pedagogical one

The rapid adoption of generative AI has fundamentally changed the conversation around academic integrity. The question is no longer whether students are using these tools, but whether institutional approaches to assessment are evolving quickly enough to keep pace.

Addressing this challenge requires a shift in perspective. Academic integrity must be understood not only as a teaching concern, but as a governance issue that spans policy, technology, compliance and leadership.

Maintaining trust in qualifications will increasingly depend on whether institutions can demonstrate that their assessment frameworks are robust, proportionate, transparent and grounded in evidence. In this context, integrity is not just about preventing misconduct, but about sustaining confidence in the credibility of the entire assessment system.

By Dr. James Gupta, CEO and Founder of the online exam platform Synap
