
Preparing Learners For A World Where Truth Is Harder To See

Elizabeth Anderson, CEO of the Digital Poverty Alliance

Colleges are now operating in an environment where the traditional markers of credibility have weakened to the point of near collapse. Information that once arrived through distinct channels now reaches people through a single stream where expert reporting, speculation, synthetic imagery, and AI-generated text appear almost indistinguishable from one another. The problem is not simply that false material exists. The deeper issue is that its visual and rhetorical style now mirrors the presentation of truth. In this landscape, judgment cannot rely on surface cues, yet those cues are often all that learners have.

The majority of 16- to 24-year-olds now encounter news through online platforms, including social media

Ofcom’s recent work on media use shows how far the ground has shifted. The majority of 16- to 24-year-olds now encounter news through online platforms, including social media, where the origin of a story is often hidden by the design of the feed. A long-form investigation and a one-sentence claim can appear in the same frame, given equal visual weight by the interface. When this becomes the main route into news, it alters how people decide what to believe. Familiarity, repetition, and visual polish start to stand in for reliability.

Deepfakes

Deepfake technology has made visual material less dependable than it appears. Internet Matters has reported that significant numbers of children and young people have already encountered deepfake imagery, and that many parents and carers feel unsure about how to recognise or respond to manipulated content. An artificial picture can carry the same apparent authority as a real photograph. The eye no longer provides an easy shortcut to truth. Learners are asked to navigate a world in which an image can be persuasive without ever having existed in reality.

Generative AI creates a similar problem in language. Research on large language models, including work from organisations such as the Ada Lovelace Institute and the Alan Turing Institute, has highlighted that these systems can generate confident, fluent responses that nevertheless contain inaccuracies or invented details. Their strength lies in coherence, not verification. Public attitudes research, including surveys from the Centre for Data Ethics and Innovation, shows that many people remain unsure how AI systems work, even as they encounter them more frequently. In practice, that means a well-structured paragraph can appear authoritative even when the reasoning behind it has not been checked.

Small screens compress context

Research from the Digital Poverty Alliance (DPA) offers a parallel finding: many learners who rely primarily on smartphones face reduced opportunities to check sources or compare information. Small screens compress context. Origin cues become harder to see. Information appears in shorter bursts that encourage rapid acceptance rather than examination. Inequality in access to devices and connectivity becomes inequality in the space available for judgment.

The combined effect of these forces is visible in everyday teaching. Tutors report discussions in which strongly held views have no clear origin, and classroom debates in which the tone of certainty does not match the strength of the evidence. Learners are not indifferent to truth. They are drawing on information that has reached them through systems designed to maximise attention, and they have rarely been invited to examine how those systems work. When a claim has appeared often enough, it can feel well founded even if the underlying support is thin.

Safeguarding and the erosion of everyday judgment

Safeguarding in this context must move beyond a narrow focus on obvious online harms. Identifying and responding to extreme risks remains essential, but it does not address the more pervasive issue that now shapes daily life: the erosion of everyday judgment. Learners need a clear understanding of how information is created, selected, and presented to them. They need to know why a particular story or clip appears when it does, how AI tools generate their answers, and how synthetic images or audio can be constructed to look and sound convincing.

This is not simply a matter of technical instruction. It is a question of intellectual habit. Learners benefit when they are encouraged to ask where a claim originated, what evidence supports it, which perspectives are missing, and whether a piece of content has been designed to inform or to influence. These questions are straightforward to describe, but they are difficult to apply consistently without practice. Critical inquiry needs to be built into ordinary teaching, not reserved for special occasions.

Media analysis, AI literacy, and verification as central competences

Colleges are well placed to lead this work because they teach at the point where independence expands and digital influence is strongest. If they treat media analysis, AI literacy, and verification as central competences rather than optional extras, they can help restore distinctions that the online world has blurred. That means deliberately examining examples of synthetic media, comparing AI-generated answers with trusted sources, and unpacking how recommendation systems shape what appears on screen. It also means acknowledging that some learners will begin from a position of digital disadvantage and will need more support to develop the same level of judgment.

AI and misinformation

Public debate about AI and misinformation often gravitates toward dramatic scenarios, but the most persistent risks are quieter. They appear in misunderstandings about public policy, health, finance, or identity that spread through repetition and design rather than through open argument. They appear when a person trusts an AI explanation more than a textbook or a teacher, without realising that the explanation has never been tested. They appear when synthetic images circulate without scrutiny and weaken confidence in visual evidence. Addressing these realities does not require alarm. It requires steady, systematic education.

Colleges cannot redesign the information systems that dominate modern life. They can, however, ensure that learners are not left to interpret those systems alone. That is the core of AI safeguarding in education. It is less about restricting tools and more about demystifying them. It is less about telling people what to avoid and more about showing them how to think when they encounter content that appears convincing.

In a world where truth, imitation, and fabrication are increasingly difficult to distinguish at a glance, the ability to separate them becomes a central educational responsibility. If colleges choose to treat that responsibility as part of their core mission, they can give learners something the digital environment will never supply by itself: a stable framework for judgment that does not disappear when the feed changes.

By Elizabeth Anderson, CEO of the Digital Poverty Alliance

