Why FE Must Shape the Terms of Gen AI Adoption, Not Just React to It
Generative AI is no longer on the horizon. It is already reshaping how we teach, lead and learn across the Further Education (FE) and Skills system. The question is no longer ‘if’ we engage with it, but ‘how’, ‘when’ and, crucially, ‘on what terms’.
In a recent national roundtable that brought together sector leaders from education, research, and policy, the message was clear: the FE and Skills workforce is ready to lead, but must do so with clarity, collaboration and care.
We are beyond the experimental phase
Colleges and training providers across the country are already trialling Generative AI to personalise feedback, generate lesson content, and support learners beyond the classroom. Some are exploring tailored prompts to encourage growth mindset thinking or scaffold metacognitive skills, reframing AI as a non-judgemental coach rather than an answer engine.
But adoption is uneven, and understanding is patchy. While some institutions are confidently experimenting, others remain cautious or under-informed. And understandably so: any large-scale technological shift without a guiding ethos risks confusion, fear and fragmentation.
Intention must be communicated, not assumed
A key insight from the roundtable was the importance of transparent, values-led communication. Where staff do not understand why AI is being introduced, or how it will be used, they will fill the gap with uncertainty, especially when the narrative is shaped around efficiency or cost saving.
It must be stated clearly: AI is here to support, not replace, human educators. The purpose of AI must be framed around pedagogy, inclusion and professional practice. As we explore the positive aspects of embedding AI into our profession, we can draw on growing data to support its time-saving capacity; a recent report on teachers’ use of AI in the US found teachers using AI saved an average of 5.9 hours per week, equivalent to six weeks per school year. Ofsted’s latest report on AI use in schools and FE colleges also reflects leaders’ views on its potential to ease teacher workload. However, realising these benefits depends on having a clear and purposeful approach to AI adoption, underpinned by shared ethical frameworks. This clarity is essential for building the trust professionals need to engage confidently and critically with new technologies.
Meanwhile, there remain important questions about ownership and influence. As AI systems are increasingly developed by commercial actors, there is a growing concern about the role of profit motives in shaping pedagogical priorities. Sector voices are increasingly calling for a critical, values-led lens on edtech procurement and governance.
Inclusion and ethics are not optional extras
The ‘Digital Matthew Effect’, where those already advantaged benefit most from innovation, is a live risk with AI, particularly in relation to learners’ access to the knowledge, skills, resources and support needed to participate in the digital world. As highlighted in Neil Selwyn’s text, Education and Technology, unless actively mitigated, digital tools risk reproducing and deepening systemic inequalities. Without careful planning, digital poverty could worsen, and AI could exacerbate existing inequality through algorithmic bias or lack of representative data.
AI tools are not neutral. They inherit the assumptions and exclusions of their training data. Without scrutiny, algorithms can easily compound and reinforce systemic inequalities and biases at scale, a process that has been described as ‘digital structural violence’. To mitigate this risk, ethical design must be embedded from the start, not retrofitted as a compliance measure. One part of the solution for the FE and Skills sector is the co-creation of principles to guide the responsible use of AI in learning and leadership.
Learners need support to build digital and communication skills
An often-overlooked consequence of AI adoption is its impact on learners’ interpersonal development. Roundtable attendees reported that many younger learners now prefer using AI chatbots to speaking with teachers or peers, raising concerns about the long-term erosion of verbal and social confidence.
If we want AI to enhance, not impair, learner confidence and employability, we must actively design learning experiences that develop both digital literacy and interpersonal resilience. This means embedding AI not just as a tool for automation, but as a provocation for dialogue, critical thinking, and human connection.
From Gen AI to Agentic AI: the next frontier
While much of the sector’s focus has rightly been on Generative AI, we are now seeing the rapid rise of Agentic AI: systems that can take proactive steps, complete tasks, and even make decisions on our behalf. This shift from ‘assistive’ to ‘agentic’ intelligence represents a significant leap in how AI will integrate into daily workflows and learning environments.
An ‘AI-first’ mindset, where automation becomes the starting point for planning rather than a bolt-on, is already emerging in industry and across the education ecosystem. The FE and Skills sector must now consider how it prepares both staff and learners for this new paradigm. Productivity gains are possible, but they must not come at the cost of human judgement, reflection or equity.
AI is a professional issue, not just a technical one
The adoption of AI in FE and Skills is not a niche concern or a future project. We know it is already influencing practice across classrooms, staffrooms, and strategy rooms. But how we respond will determine whether AI enhances or erodes the values we hold in education.
This is not just a technical challenge; it is a professional one. It speaks directly to our collective civic mission to support learning, promote inclusion, and uphold public trust.
Leadership across the sector must continue to listen, learn, and collaborate. Collective wisdom, rather than isolated decisions, will ensure the FE and Skills system doesn’t just adapt to AI but helps shape its ethical, inclusive, and learner-centred future.
We know the FE and Skills workforce has the capability to respond with innovation and humanity. Now is the time to do just that and to make sure that the future of AI in FE and Skills is one that reflects our values, not just our technologies.
By Jon Adams, Chief Strategy Officer, Activate Learning; Dr James Robson, Director of the Oxford University Centre on Skills, Knowledge, and Organisational Performance (SKOPE); and Dr Vikki Smith, Executive Director, Education & Standards, Education and Training Foundation