From education to employment

Building the Foundations for Ethical AI in Post 16 Education

Patti West-Smith

The UK’s education system is undergoing its most significant reform in over a decade. The Curriculum and Assessment Review (CAR) sets out a vision for a world-class curriculum that equips young people with the knowledge and skills needed to succeed in an increasingly tech-dominated education landscape. 

As part of this review, there is a clear commitment to embed artificial intelligence and digital literacy across all phases of education, including the development of a new Level 3 qualification in data science and AI for post 16 learners.

This ambition reflects the growing demand for AI-driven skills in the workplace and the need for students to navigate these complex changes responsibly. However, introducing AI into the curriculum is not just a matter of adding technical content. Success depends on robust governance, increased educator learning, and ethical frameworks that ensure AI enhances learning rather than undermines it.

Why governance matters for academic integrity

Clear governance is at the heart of responsible AI integration. Without it, institutions risk inconsistent practices and academic misconduct, not to mention diminishing the educational experience of their students.

To ensure responsible use, educational institutions should consider creating academic integrity policies that directly address AI-generated content. The CAR emphasises the importance of clarity and coherence in curriculum design, and this same principle should be applied to AI use.

Clear guidelines that support both teachers and students can help institutions ensure a shared understanding of ethical AI use while reducing the risk of misconduct.

When building these policies, it is important that all stakeholders are considered. By involving everyone in the process, each group has the opportunity to voice their perceptions and concerns, creating a policy that benefits everyone and secures genuine commitment. To help achieve this, some institutions have started to develop forums and working groups solely dedicated to promoting responsible AI use. This, paired with regular workshops, feedback and training sessions, can help reinforce the institution's commitment to positive AI use.

To uphold the established policies, educators should ensure they are specific about when and how AI tools may be used, based on the type of assessment. For example, if the purpose of the assessment is to evaluate a student's project planning skills, the expectations and tolerances around AI use will be different to those assessing their critical thinking and writing skills. Being clear and explicit about exactly what skills or content are being assessed brings clarity to these expectations, for both educators and students.

Finally, when creating and rolling out these policies, clear communication and regular updates are essential. Policies should be visible, written in plain language, and supported by examples of acceptable and unacceptable use. Students should feel confident about what is permitted and how to disclose AI use.

By embedding these practices, institutions can transform AI from a perceived threat into an opportunity for deeper learning and stronger academic integrity.

AI in education starts in the classroom

The success of AI adoption relies heavily on educators' expertise and confidence. Their competence with emerging technologies determines whether these reforms translate into meaningful learning experiences. The CAR highlights that curriculum change must be supported by professional learning for educators, and this is particularly important for AI: only one-third of educators feel prepared to use it in teaching, a significant skills gap.

Effective professional learning should blend practical skills with ethical understanding, covering areas such as prompt design, bias awareness, data privacy, and classroom strategies that enhance rather than replace key skills. Teachers need time and a supportive environment to explore new approaches and adapt confidently. Embedding AI skills within existing curriculum objectives ensures they become part of everyday learning rather than just an add-on.

Building the pillars for responsible AI adoption

The Curriculum and Assessment Review is an important step towards setting the foundations for embedding AI and digital literacy in post 16 education. Aspiration alone will not deliver impact, however. Governance frameworks, educator learning, and ethical guidance are the pillars that protect academic integrity and enable high-quality learning. When institutions make expectations clear, invest in professional learning, redesign assessments to value process, and cultivate transparent dialogue, AI can become a tool for empowerment rather than misconduct.

Collaboration between policymakers, leaders, and educators will be key in turning reform into reality. By building these foundations now, educational institutions can ensure students are ready to succeed in an AI-driven world.

By Patti West-Smith, Senior Director of Customer Engagement at Turnitin
