AI and Higher Education: Rethinking Quality and Assessment

Imran Mir

Artificial intelligence (AI) is no longer a future prospect. It is well and truly here, and it is here to stay. Yet the widespread embrace of AI also raises serious ethical and economic questions. Much of this technology is driven by large commercial interests, and its rapid adoption risks normalising systems whose implications for data privacy, intellectual labour and academic integrity are not yet fully understood. Students are already using apps such as ChatGPT to generate essays, run simulations and even obtain feedback. Universities, meanwhile, are experimenting with AI tutors, adaptive learning platforms and automated marking. Yet the regulatory frameworks which underpin higher education quality are struggling to keep pace. This article examines how universities and regulators are adapting quality and assessment frameworks in response to AI, and what more is needed to balance innovation with integrity.

A recent Jisc (2023) survey found that more than 22 per cent of UK students have used apps such as ChatGPT to generate essays for their studies, often without their institution’s knowledge. If that figure continues to rise, as is predicted, then questions of quality, assessment validity and academic integrity will become unavoidable. The issue is not whether students will use AI, but how universities and regulators will respond.

Quality frameworks under pressure

The Office for Students (the higher education regulator in England) sets what are known as “B Conditions,” which require universities to deliver high standards of teaching, reliable assessments, and positive outcomes for students. However, these expectations are built on a traditional model of learning and assessment. With the arrival of AI, questions arise about whether such systems can still assure academic integrity and measure genuine student achievement. If students can generate high-quality essays in seconds, is a timed exam or coursework submission still able to measure what it once did?

Similarly, the QAA Quality Code places strong emphasis on academic integrity. Universities have responded with detection tools, but these are often one step behind AI models that are continually updated. Rather than choosing between redesigning assessment and policing AI, universities may need a more nuanced approach: integrating AI responsibly into assessment design while maintaining academic integrity and authentic learning.

Opportunities as well as risks

There are reasons for optimism. AI could make teaching and learning more inclusive. Adaptive learning platforms could personalise content for each learner. Recent studies (Luckin, 2024; UNESCO, 2023) show that AI-driven personalisation can improve engagement for some learners, though its impact depends heavily on the quality and transparency of the data models used. This could offer neurodiverse learners more flexible pathways through complex material. AI-driven feedback systems can give learners instant feedback, reducing the lag between submission and improvement. However, this benefit depends on the accuracy and quality of the models’ feedback: poor data or bias could reinforce misconceptions rather than correct them. For staff facing heavy workloads, automating routine marking and administration could free up time for teaching and mentoring. Still, there is a risk that excessive automation may reduce teachers’ professional autonomy and devalue the relational aspects of teaching.

Without governance, however, AI could also widen gaps between students. Access to advanced tools is uneven: some students can pay for premium services while others cannot. Data privacy is a further concern. Student data used to train AI systems must be handled transparently and ethically.

What policy needs to do

To date, universities have been creating their own AI policies, and the result is inconsistent across institutions: some ban generative AI outright, others allow it cautiously, and a few are actively integrating it into teaching.

This patchwork has created uncertainty for both students and staff about what constitutes appropriate AI use. A student who uses ChatGPT responsibly at one university could face penalties at another. Such inconsistency risks undermining trust in academic standards, the credibility of qualifications and public confidence in higher education quality.

A coordinated, sector-wide approach is needed. Policymakers should:
• Issue national guidance on what constitutes acceptable use of AI in learning and assessment.
• Ensure that clear, consistent expectations are communicated across institutions so that both students and staff understand the boundaries of ethical AI use.
• Work with regulators such as the Office for Students and QAA to maintain public trust in higher education quality and integrity.

The killer fact

The World Economic Forum (2020) predicted that by 2025, 85 million jobs may be displaced by automation, while 97 million new roles emerge, many of which will require digital fluency. Universities must adapt assessment and teaching to this reality, or graduates risk being left behind.

The takeaway

AI is not simply a technical challenge for IT departments; it is a quality challenge for the whole higher education sector. The key question is not how fast universities can adapt, but how thoughtfully they can do so, ensuring that AI enhances rather than erodes the values that underpin higher education. Policymakers, regulators and providers will need to work together to make sure that AI strengthens, rather than undermines, the credibility of the sector.

By Imran Mir SFHEA, FSET, CMgr MCMI, FRSA

