
UK universities draw up guiding principles on AI literacy

UK universities have announced they are drawing up a set of guiding principles to ensure that students and staff are AI literate.

The news comes as the sector struggles to adapt teaching and assessment methods to deal with the growing use of generative artificial intelligence.

Vice-chancellors at the 24 research-intensive Russell Group universities have signed up to the principles, which they say will help universities capitalise on the opportunities of AI while protecting academic rigour and integrity in higher education.

Welcoming the news, John Kirk, Group Deputy CEO at ITG, said:

“Having a dedicated plan in place to ensure the highest standards of teaching and learning in AI across our universities is a wise move which will ensure the next generation is better equipped to bring additional skills into the workplace. The reality is that this technology is here to stay and, deployed correctly, can enhance our creative industries and help businesses transform marketing and customer interactions for the long term.

“With the digital skills shortfall still causing headaches for many companies, having systems in place to better understand such a high-impact technology is a step in the right direction,” Kirk added.

MarTech expert Sjuul van der Leeuw, CEO of Deployteq, added:

“Generative AI has huge potential to transform organisations and empower the creative industries, so this initiative will help students and staff better understand its full potential. The global race to embrace AI and other key transformative technologies shows no sign of slowing down, so collaboration between academics, businesses and the next generation to drive a better understanding of how it can be utilised, while adhering to high ethical standards of best practice, is a positive step forward.”

While there was once talk of banning software such as ChatGPT in education to prevent cheating, the guidance says students should be taught to use AI appropriately in their studies and made aware of the risks of plagiarism, bias and inaccuracy in generative AI.

Staff will also have to be trained so they are equipped to help students, many of whom are already using ChatGPT in their assignments. New ways of assessing students are likely to emerge to reduce the risk of cheating.

All 24 Russell Group universities have reviewed their academic conduct policies and guidance to reflect the emergence of generative AI. The new guidance says:

“These policies make it clear to students and staff where the use of generative AI is inappropriate, and are intended to support them in making informed decisions and to empower them to use these tools appropriately and acknowledge their use where necessary.”

Developed in partnership with experts in AI and education, the principles represent a first step in what promises to be a challenging period of change in higher education as the world is increasingly transformed by AI.

The five guiding principles state that universities will support both students and staff to become AI literate; staff will be equipped to help students use generative AI tools appropriately; the sector will adapt teaching and assessment to incorporate the “ethical” use of AI and to ensure equal access to it; universities will uphold academic integrity; and the sector will share best practice as the technology evolves.

Dr Tim Bradshaw, the Russell Group chief executive, said:

“The transformative opportunity provided by AI is huge and our universities are determined to grasp it. This statement of principles underlines our commitment to doing so in a way that benefits students and staff and protects the integrity of the high-quality education Russell Group universities provide.”

Prof Andrew Brass, head of the School of Health Sciences at the University of Manchester, said: “We know that students are already utilising this technology, so the question for us as educators is how do you best prepare them for this, and what are the skills they need to have to know how to engage with generative AI sensibly?

“From our perspective, it’s clear that this can’t be imposed from the top down, but by working really closely with our students to co-create the guidance we provide. If there are restrictions, for example, it’s crucial that it’s clearly explained to students why they are in place, or we will find that people find a way around it.”

Prof Michael Grove, deputy pro-vice-chancellor (education policy and standards) at the University of Birmingham, said:

“The rapid rise of generative AI will mean we need to continually review and re-evaluate our assessment practices, but we should view this as an opportunity rather than a threat.

“We have an opportunity to rethink the role of assessment and how it can be used to enhance student learning and to help students appraise their own educational gain.”

Gillian Keegan, the education secretary, launched a call for evidence on the use of generative AI in education last month, which asked for views on risks, ethical considerations, and training for education workers.


Sector Response

Ross Sleight, Chief Strategy Officer, EMEA, CI&T:

“Education is yet to be transformed by AI. It’s centuries old in how it’s done, but that doesn’t mean change isn’t on the horizon.

“Exams and essays can risk regurgitation over critical thinking. Institutions must ask themselves: what is the most effective way to facilitate and consolidate knowledge, and can new technology better support this?

“Technology such as ChatGPT is here to stay, and while it does pose challenges for the education sector, fighting against it is a losing battle. Institutions need to work with it and use it to their advantage. Great innovation can come from it.”
