What the 2025 ‘UKSIC Appropriate Filtering and Monitoring Definitions’ mean for FE institutions

Rob Faulkner, VP Prevention & Enablement at Smoothwall

The UK Safer Internet Centre (UKSIC) has recently released updated Appropriate Filtering and Monitoring Definitions that represent a clear shift in digital safeguarding expectations for further education institutions.

With the revisions, the focus has moved further towards learner-centric protection across all devices, platforms, and emerging technologies. For institutions navigating the digital independence of young people who grew up as digital natives, alongside increased mobile learning and diverse learner needs, the guidance sets an evolved standard for safeguarding.

There’s also greater focus on acting on threats in real time, as misinformation and harmful content are being generated and spread through AI more quickly than ever, making them difficult to contain. Real-time filtering, which detects, assesses and blocks risks as they appear, is the most effective way to prevent students from accessing harmful content before it can do damage, helping make AI significantly safer to use in education.
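To make that concrete, the sketch below shows the shape of such a real-time decision: page content is assessed at the moment it is requested, rather than checked against a static blocklist. The risk categories, the assess_text() classifier and the 0.8 threshold are illustrative assumptions for this sketch, not a description of any specific product.

```python
# A minimal sketch of a real-time filtering decision, assuming a
# hypothetical assess_text() classifier; not any vendor's implementation.

BLOCK_THRESHOLD = 0.8  # assumed score above which content is blocked


def assess_text(page_text: str) -> dict[str, float]:
    """Hypothetical classifier returning a risk score per category."""
    # In practice this would be a trained model or vendor service;
    # here it only illustrates the shape of the output.
    return {"self_harm": 0.0, "intimate_image_abuse": 0.0,
            "disinformation": 0.0}


def filter_request(page_text: str) -> str:
    """Detect, assess and block risk as the content appears."""
    scores = assess_text(page_text)
    category = max(scores, key=scores.get)
    if scores[category] >= BLOCK_THRESHOLD:
        return f"BLOCK ({category})"  # stop the page before it renders
    return "ALLOW"
```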

The New Risk Landscape 

The 2025 guidance has expanded and enhanced definitions of both illegal and inappropriate online content and, crucially, now includes coercive control, intimate image abuse, racially motivated harm, and mis/disinformation. These issues have grown in frequency and significance among the post-16 age group, where young adults are starting to experience more digital freedom without the levels of supervision found in earlier phases of education.

The additions reflect a broader understanding of how FE students are navigating online spaces and how digital risks are starting to manifest in their lives. As such, the message to FE providers is clear: safeguarding must now address context, not just content. Institutions must watch for patterns of behaviour and the potential for risk escalation, wherever students connect.

Realities of Increased Mobile Learning 

Students in further education are seldom confined to college networks and campuses. Many study on placement and use personal devices for research and notes. The refreshed definitions now call for hybrid filtering and device-level monitoring that keeps pace with the complexity of today’s FE landscape, focusing more on how and where content is accessed, not just what is accessed. 

Filtering and monitoring must be “effective across all devices used to access the internet”, including “web and in-app filtering across a range of browsers and device types”. For FE providers, this has immediate implications: they need to consider how they are protecting users across every route by which students access digital and online resources.
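One way to picture that requirement is a single policy engine consulted by every enforcement point, whether a request comes from a managed browser, an in-app webview or an agent on a personal device. The class and method names below are illustrative assumptions, not a vendor API.

```python
# Sketch of one policy engine behind many enforcement points.
# Names are illustrative, not a real product's API.

from dataclasses import dataclass


@dataclass
class Request:
    user_id: str
    url: str
    source: str  # "managed-browser", "in-app-webview", "byod-agent", ...


class PolicyEngine:
    """Central decision point shared by every device and browser."""

    def decide(self, req: Request) -> bool:
        # The verdict depends on the learner and the content requested,
        # never on which device, app or network the request came from.
        return self.is_safe_for(req.user_id, req.url)

    def is_safe_for(self, user_id: str, url: str) -> bool:
        # Placeholder for the institution's actual risk assessment.
        return True
```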

The more personalised digital access and learning become, the more safeguarding must follow the learner. These updated standards challenge FE providers to break free of outdated assumptions and embrace an evolving, always-on approach to online safety through tools like content-aware filtering.

How The Use Of AI Is Impacted 

One of the most notable inclusions in the 2025 UKSIC definitions is the explicit reference to and guidance surrounding AI-generated content, marking a pivotal moment in the acknowledgement of the role artificial intelligence tools play in digital safeguarding.  

AI is increasingly integrated into the FE student’s experience, from coursework to research, and providers can no longer ignore its presence. The updated definitions state that monitoring and filtering systems “must be capable of identifying and managing risks associated with the use of AI technologies.” This is a clear call to action for decision makers in further education to adopt such systems and keep on top of AI-related threats in real time.
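As a hedged illustration of what that could look like in practice, AI-tool activity can be routed through the same monitoring pipeline as any other web traffic. The domain list, threshold and flag_for_review() hook below are assumptions made for this sketch, not UKSIC requirements.

```python
# Sketch: AI-tool activity flows through the normal safeguarding
# checks. Domains, threshold and hooks are illustrative assumptions.

KNOWN_AI_TOOLS = {"chat.openai.com", "gemini.google.com"}  # examples only


def flag_for_review(text: str, source: str) -> None:
    """Stand-in for alerting the safeguarding team."""
    print(f"[{source}] flagged for review: {text[:60]}")


def monitor_event(domain: str, text: str, risk_score: float) -> None:
    """Apply one monitoring policy to AI tools and the wider web alike."""
    source = "AI tool" if domain in KNOWN_AI_TOOLS else "web"
    if risk_score >= 0.8:  # assumed escalation threshold
        flag_for_review(text, source=source)
```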

Institutions cannot assume AI is neutral or self-contained; it must be treated as a potential carrier of risk. This places greater responsibility on the institution itself to take a proactive approach to safeguarding and to moderating how AI is used. AI usage must be incorporated into safeguarding frameworks, not viewed as a separate issue for IT.

Safeguarding As A Strategic Priority 

How digital safeguarding is understood and implemented across the FE sector is changing. The 2025 UKSIC definitions now reflect the real-world behaviours and vulnerabilities of post-16 learners, so safeguarding strategies must follow suit. 

It’s time to move beyond firewalls and domain lists. FE leaders should use the guidance to look beyond technical compliance and consider how proper safeguarding measures support both institutional values and student wellbeing. These new standards call for investment and collaboration, but for those willing to lead, it’s an opportunity to build safer, smarter digital environments that empower students to thrive.  

By Rob Faulkner, VP Prevention & Enablement at Smoothwall

