From education to employment

Why All Vocations Need AI Literacy, Even If They Don’t Use It

AI literacy is often framed as a specialist skill: something required only in particular sectors, or by those who actively use artificial intelligence as part of their role. Yet an incident earlier this month demonstrated just how deeply AI is already embedded in everyday working life, including for those who may never consciously choose to use it, and why AI literacy now matters across every profession.

After a 3.3-magnitude earthquake struck North-West England on 3rd December, an AI-generated image depicting severe bridge damage circulated widely online. The image was convincing enough that rail providers halted services as a precaution while checks were carried out. No structural damage was ultimately found, but the disruption was real. A piece of fabricated imagery had triggered operational decisions with economic and safety implications.

It may appear to be a niche episode, but it reveals something fundamental about the environment in which all professionals now operate. Synthetic media has reached a level of plausibility where it can influence real-world systems. In a world where AI is increasingly ubiquitous, even those who do not actively use it must be prepared for the consequences of its presence. FE colleges, as providers of vocational education, are uniquely placed to deliver that preparation.

Beyond prompting: what AI literacy now involves

AI literacy is frequently reduced to the ability to use generative tools effectively. Recent scholarship, however, argues for a much broader understanding. A 2024 review published in Computers and Education: Artificial Intelligence describes AI literacy as encompassing conceptual understanding of how AI systems work, practical competence in using and evaluating them, ethical awareness of issues such as bias and privacy, and critical judgement about when and how AI should influence decisions.

Under this definition, AI literacy is not about training everyone to become a technologist. It is about equipping people in every occupation to interpret information, evaluate risk and exercise judgement in environments where AI increasingly shapes the data they see and the systems they rely on.

Even learners who never directly prompt a generative tool will encounter its outputs. They will see AI-generated reports, schedules, diagrams, promotional materials, imagery, datasets and workflows. The question is no longer whether a particular vocation ‘uses’ AI, but whether its practitioners can navigate a world in which AI-mediated information is pervasive.

Misinformation with material consequences

The rail incident is not an isolated case. Research from York University examined the impact of AI-generated imagery during emergency scenarios and found that synthetic images can significantly distort public understanding and hinder effective response. In controlled studies, participants frequently struggled to distinguish fabricated disaster imagery from authentic reporting, with some scenarios showing delays in decision-making and reduced trust in legitimate information.

The authors warn that as synthetic media becomes more persuasive, frontline workers from emergency responders to infrastructure teams will require new competencies in verification, evidential reasoning and information triage. This is no longer simply a matter of traditional media literacy. It concerns the operational integrity of services that depend on reliable information flows. When a deceptive image or a confidently phrased AI-generated report can alter how professionals act, the boundary between online content and real-world outcomes dissolves.

The workforce is adopting AI faster than it understands it

Evidence from across the labour market reinforces the urgency of this challenge. Reports from organisations such as the Organisation for Economic Co-operation and Development (OECD) and PricewaterhouseCoopers (PwC) consistently show that employers expect AI to reshape job roles across most sectors, while simultaneously expressing concern about workforce readiness and governance. The OECD notes that while AI adoption is accelerating, many organisations lack the internal capability to manage its risks effectively, particularly in relation to oversight, judgement and ethical use.

Similarly, PwC’s Global Workforce Hopes and Fears Survey highlights a growing gap between the speed at which AI tools are entering workplaces and the confidence workers feel in understanding how to use them responsibly. Productivity gains are frequently reported when AI is integrated into routine tasks, but these gains are most reliably realised when workers understand both the capabilities and the limitations of the systems they are using.

AI literacy, in this context, is not simply a ‘nice to have’. It is increasingly a precondition for safe, effective and equitable adoption.

From preventing misuse to functioning in an AI world

Across the FE sector, responses to AI have taken different forms. Early reactions understandably focused on prevention: concerns around plagiarism, academic integrity and misuse. Alongside this, many colleges have invested in guidance, staff development and experimentation, recognising that generative AI is now embedded in learners’ lives and workplaces.

What is emerging now is something slightly different. The focus is shifting away from students’ individual use of AI tools and towards the wider environments they are entering. These are professional environments in which AI shapes decisions, systems and information flows even when individuals are not actively engaging with it. Whether learners move into health, construction, rail engineering, early years, creative industries or public services, they will do so in contexts where AI-generated content and AI-mediated systems are commonplace. Preparing them to function confidently and critically in such settings is becoming a core responsibility of Further Education providers.

What FE providers need to do next

This means embedding AI literacy across vocational curricula, not as an optional module but as an integrated strand linked to real-world tasks and professional standards. It requires sustained staff development so that educators themselves feel confident modelling critical and reflective engagement with AI. It also demands that employability strategies foreground verification, critical questioning, digital resilience and ethical awareness alongside technical competence, supported by active dialogue with employers about how AI is reshaping workplace expectations.

None of this is about turning every learner into an AI specialist. It is about ensuring that no learner is disadvantaged by being unprepared for the systems, tools and information ecosystems that will shape their working lives.

If AI literacy is now pan-vocational, it cannot be left to computing departments or isolated champions. It requires leadership that recognises AI as a structural shift, staff who feel supported to explore and interrogate its implications, and curricula that reflect the realities of contemporary work.

The bridge-image incident may seem minor in isolation, but it symbolises something profound about the age we are entering and the skills that our graduates will need to succeed.

By Dr Gary F. Fisher, Academic Developer in Online Education, Liverpool School of Tropical Medicine

