From education to employment

Becoming a Plumber Will Not Save You If You Cannot Think

Geoffrey Hinton and I have something in common. We both studied and worked in the Computer Science and Artificial Intelligence department at the University of Sussex, though not at the same time, and I make no claim to his particular distinction. He went on to become widely described as the godfather of AI. I did not. But we share an alma mater, a department, and a subject, and I have been paying close attention to what he says about the future of work, because he speaks with a seriousness that the debate deserves.

Jobs most likely to survive automation for a long time are physically adaptive ones

His recent argument, repeated in several interviews, is that the jobs most likely to survive automation for a long time are physically adaptive ones. He has named plumbing specifically. Jensen Huang of Nvidia has made a similar case, pointing to the hundreds of thousands of tradespeople who will be needed to build the physical infrastructure that AI itself requires. City and Guilds reports that completed plumbing apprenticeships in England, Wales and Northern Ireland nearly doubled in the year to October 2025.

Are skilled trades safe from AI?

The narrative is gaining momentum and FE leaders will recognise it immediately. Skilled trades are safe from AI. The message, even when it is not stated this bluntly, is beginning to shape how parents, schools, and learners think about vocational routes. And it is partly right. Physical, site-specific, adaptive work is genuinely harder to automate than many knowledge-based tasks. But the narrative contains a risk that I do not think has yet been fully confronted.

If the implicit message to vocational learners is that AI does not really apply to them, we will produce a generation of tradespeople unprepared for the AI-assisted diagnostic tools, compliance platforms, and job management systems that are already reshaping how trade businesses operate. The safety of a skilled trade in the age of AI is conditional. It depends on workers who understand their craft, can exercise genuine judgement, and can critically evaluate information, including information produced by an AI system. A plumber who cannot explain why they made a decision is not a safe plumber, regardless of what generated the initial recommendation.

The concern that is keeping some people up at night

Anthropic recently published findings from a survey of over 80,000 users of its Claude AI system across 159 countries. The most-cited concern among users globally was not job displacement. It was not loss of autonomy. It was AI errors: hallucinations, plausible-looking outputs that turn out to be wrong.

This reframes the conversation FE should be having about AI. The question is not only whether AI will automate particular job roles. The question is whether our learners can tell when AI is wrong. A learner who uses AI fluently but cannot evaluate its outputs is not AI-literate. They are AI-dependent. In a classroom that distinction may not matter much. In a care setting, on a construction site, in a clinical environment, it matters enormously.

The question is whether our learners can tell when AI is wrong

Knowing how to verify an AI output, how to check a claim against a primary source, how to recognise the patterns that suggest a hallucination, is a teachable skill. It is a vocational competency for the current era, as fundamental to professional practice as knowing how to read a technical drawing or interpret a safeguarding framework. FE can build it in. The question is whether FE is choosing to do so deliberately, or leaving it to chance.

A lesson from Higher Education that FE should not ignore

Wonkhe published new research this month on how students in UK universities are experiencing AI and assessment. It is a university study and I am not claiming its findings transfer directly to FE. But there is one pattern in the data that FE leaders would be unwise to dismiss.

Thirty-eight per cent of university students admitted to submitting assessed work they could not fully explain without going back to their sources. Nearly half worried their grades did not reflect what they actually know. The Wonkhe researchers are careful to note that AI did not create this gap. Assessment systems that reward polished outputs over demonstrated thinking have existed for decades. What AI has done is industrialise the gap, and make the choice to stop learning explicit in a way it never was before.

The relevant question for FE is whether the vocational assessment systems we use are genuinely immune to this pattern. Apprenticeship end-point assessments, competency observations, professional discussions: these are the accountability moments that should make genuine understanding visible. But only if they are designed to do that reliably, rather than functioning as formats that can be passed by someone who has produced something plausible without fully understanding it.

The Wonkhe research offers an instructive finding here. Students use AI very differently when they know a visible verification moment is coming. Those who know they will need to explain their work in person describe interrogating AI outputs, checking their own reasoning, and pushing back on what the model produces. Those without a downstream accountability moment describe using AI on autopilot. The accountability moment does not stop AI use. It changes how it is used. FE has those moments built into its structure. The question is whether FE is exercising that function with sufficient rigour.

What FE can do now

Three things strike me as immediately actionable for FE leaders and curriculum teams.

The first is to treat AI literacy as a vocational competency rather than a generic digital skills add-on. The specific capability that matters is evaluative: can a learner assess the reliability of an AI-generated output in the context of their professional field? A childcare practitioner needs to do this in relation to developmental guidance. An engineering technician needs to do it in relation to materials specifications. A business administrator needs to do it in relation to compliance requirements. The skill is contextual. Teaching it generically is not sufficient.

The second is to review whether AI guidance is coherent across teaching teams. The Wonkhe research found that the most common student complaint was not that policy was too strict or too permissive. It was that different tutors on the same programme said different things. A ten-minute conversation between members of a teaching team, producing a shared position on what AI use is and is not appropriate at each stage of a programme, would do more than another round of policy revision.

The third is to ensure that at least one moment in each programme requires the learner to account for their work in person, in a way that cannot be delegated to an AI. Not a formal viva for every assignment. A brief walkthrough of process. A structured peer explanation. A conversation in a tutorial. The presence of such a moment changes how learners engage throughout, including how they engage with AI along the way.

Geoffrey Hinton is right that the trades are likely to be safer than many knowledge-work roles, for many years. But safety in the age of AI requires something more than physical skill. It requires the judgement to question, verify, and take responsibility for decisions that an AI system may have helped to inform.

That kind of thinking has always been at the heart of vocational education. The task for FE now is to make it explicit, to design it in deliberately, and to resist the comfortable idea that because the trades are less exposed to automation, the AI conversation does not really apply here. It does. The question is whether FE leads it or catches up with it later.

Professor Rose Luckin is Professor of Learner Centred Design at UCL Institute of Education and founder of Educate Ventures Research. She publishes The Skinny on AI for Education and is the author of Machine Learning and Human Intelligence and AI for School Teachers.

Sources

Anthropic, “Global survey of 80,000 Claude users across 159 countries”, March 2026. Available at anthropic.com.

City and Guilds, plumbing qualifications data, year to October 2025. Cited in the Financial Times, “Is a plumbing career the future?”, 22 March 2026.

Jim Dickinson and Mack Marshall, “Trained to Stop Learning? How students are experiencing assessment and learning in an age of AI”, Wonkhe, March 2026. Available at wonkhe.com.
