What Do Education Providers Really Think About AI?
A YouGov survey of more than 1,000 domestic undergraduate students at UK universities revealed how they use AI in their studies, highlighting sentiment around existing university policies, grey areas and more. Rob Telfer, Director of Higher Education at D2L, unpacks the findings and explains what must be done to ensure all students use AI appropriately and effectively.
Whilst AI’s role in further and higher education is still developing, there is no doubt that it is here to stay. The study found that 18 percent of students say their university takes a hardline stance against AI and actively discourages its use. Banning the use of AI tools doesn’t stop students from using them. It simply means they won’t know how to engage with AI in an ethical or analytical way. Institutions that view it as a threat, rather than maximising its potential as a tool, risk falling behind those that use it to their advantage.
On the other hand, a shockingly low 11 percent said their university encourages responsible AI use, while almost 45 percent reported that boundaries are in place but that no skills training is offered to help them use AI effectively. This leaves a huge gap in the skills development of students about to enter the workforce. Boundaries and regulation are important, but they can only go so far without proper skills teaching. Setting boundaries without developing skills means students graduate with uneven ideas about how to use AI ethically: some may excel, while others struggle to keep up.
The dangers of inconsistent AI policy
These findings illustrate a reactive approach to AI policy. Many education providers are treating AI as a compliance issue instead of leading the charge and viewing it as a skills opportunity that prepares students for the workplace while enhancing their own academic offerings.
Where universities discourage AI use entirely and fail to provide proper training, their policies appear to be driven more by fear of plagiarism than by long-term educational strategy. While academic integrity is crucial, innovation is equally important. In fact, by setting vague or inconsistent rules around AI, universities risk driving students towards misuse and plagiarism, rather than fostering the responsible habits that prevent it.
With ethical AI fluency fast becoming as fundamental as traditional digital literacy, education providers cannot afford to be slow to modernise. Inconsistent approaches to AI will not equip students for the job market they are about to enter.
Building ethical AI literacy
AI literacy is not about turning every student into an expert. It is about promoting critical thinking, where learners meaningfully engage with AI outputs, understand the technology's limitations and apply it ethically. Training and structured support, combined with clear boundaries, are crucial in achieving this, helping students feel empowered and confident in how they use AI.
When used responsibly, AI can enhance education for both students and educators. For students, it can make course content more dynamic and personalised through gamification, real-time feedback and adaptive support. For educators, AI can help manage heavy workloads and larger cohorts, freeing up more time for meaningful interaction with students. However, this all starts with a transparent and supportive AI policy.
Discouraging the use of AI will not keep it out of education. Instead, it will widen the gap between academic learning and real-world application, alienating learners and leaving many unprepared for the workplace. Clear, consistent guidance and proactive training will preserve trust, maintain academic standards and ensure students are equipped with the fluency employers now demand.
By Rob Telfer, Director of Higher Education at D2L