The Online Safety Bill marks a milestone in the fight for a new digital age

The Online Safety Bill will be introduced in Parliament today to help protect young people and clamp down on racist abuse online, while safeguarding freedom of expression.

  • World-first online safety laws introduced in Parliament: Parliament to approve what types of ‘legal but harmful’ content platforms must tackle
  • New measures include tougher and quicker criminal sanctions for tech bosses and new criminal offences for falsifying and destroying data

Internet users are one step closer to a safer online environment as the government’s new world-leading online safety laws are brought before Parliament today.

The Online Safety Bill marks a milestone in the fight for a new digital age which is safer for users and holds tech giants to account. It will protect children from harmful content such as pornography and limit people’s exposure to illegal content, while protecting freedom of speech.

It will require social media platforms, search engines and other apps and websites allowing people to post their own content to protect children, tackle illegal activity and uphold their stated terms and conditions.

The regulator Ofcom will have the power to fine companies failing to comply with the laws up to ten per cent of their annual global turnover, force them to improve their practices and block non-compliant sites.

Today the government is announcing that executives whose companies fail to cooperate with Ofcom’s information requests could face prosecution or jail time within two months of the Bill becoming law, instead of the two years allowed for in the previous draft.

A raft of other new offences has also been added to the Bill to make in-scope companies’ senior managers criminally liable for destroying evidence, for failing to attend or providing false information in interviews with Ofcom, and for obstructing the regulator when it enters company offices.

Digital Secretary Nadine Dorries said:

“The internet has transformed our lives for the better. It’s connected us and empowered us. But on the other side, tech firms haven’t been held to account when harm, abuse and criminal behaviour have run riot on their platforms. Instead they have been left to mark their own homework.

“We don’t give it a second’s thought when we buckle our seat belts to protect ourselves when driving. Given all the risks online, it’s only sensible we ensure similar basic protections for the digital age. If we fail to act, we risk sacrificing the wellbeing and innocence of countless generations of children to the power of unchecked algorithms.

“Since taking on the job I have listened to people in politics, wider society and industry and strengthened the Bill, so that we can achieve our central aim: to make the UK the safest place to go online.”

In the UK, the tech industry is blazing a trail in investment and innovation. The Bill is balanced and proportionate, with exemptions for low-risk tech businesses and for non-tech businesses with an online presence. It aims to increase people’s trust in technology, which will in turn support our ambition for the UK to be the best place for tech firms to grow.

The Bill will strengthen people’s rights to express themselves freely online and ensure social media companies are not removing legal free speech. For the first time, users will have the right to appeal if they feel their post has been taken down unfairly.

It will also put requirements on social media firms to protect journalism and democratic political debate on their platforms. News content will be completely exempt from any regulation under the Bill.

And, in a further boost to freedom of expression online, another major improvement announced today means social media platforms will only be required to tackle ‘legal but harmful’ content, such as material promoting self-harm or eating disorders and harassment, in categories set by the government and approved by Parliament.

Previously they would have had to consider whether additional content on their sites met the definition of legal but harmful material. This change removes any incentive or pressure for platforms to over-remove legal content or controversial comments, and will clear up the grey area around what constitutes legal but harmful material.

Ministers will also continue to consider how to ensure platforms do not remove content from recognised media outlets.

Minister of State for Security and Borders Damian Hinds said:

“As a society and as individuals, the internet has broadened our horizons and given us new opportunities to connect globally. But alongside this, the most depraved criminals have been given fresh avenues to exploit vulnerable people and ruin lives, whether that be by stealing the innocence of children or destroying finances.

“Our utmost priority is to protect children and ensure public safety. The trailblazing Online Safety Bill will ensure social media companies are finally held to account and are taking ownership of the massive effect they have on all of our lives. Fraudsters will have fewer places to hide and abusers will be ardently pursued to feel the full force of the law.”

Bill introduction and changes over the last year

The Bill will be introduced in the Commons today. This is the first step in its passage through Parliament to become law, beginning a new era of accountability online. It follows a period in which the government has significantly strengthened the Bill since it was first published in draft in May 2021. The changes made since the draft Bill are detailed below.

Sector Response

Melanie Thomson, Online Safety Representative at RM, has shared some thoughts on the importance of schools, parents, carers and online technology providers working together to identify potential risks:

“From our work with schools across the country, RM believes that it’s more crucial than ever to protect children online, given the increasing role that technology is playing within our education system – something that was brought to the fore during the pandemic. While many schools adapted well to hybrid learning – which relies in part on pupils having access to connected devices – we know from our own research that many parents do not apply parental controls to devices that their children have access to in the home. This presents a significant challenge in ensuring that every child is safe online.

“While many school-age children are adept at using technology, the onus remains on the adults to ensure they are able to use it wisely and safely. For schools, this means implementing online filtering and monitoring controls to help shield children from inappropriate content on school devices as well as engaging with parents and carers to share best practice on keeping children safe online at home. For parents, it is key that they take the advice passed from their child’s school – and other sources – and act upon it. In a “new normal” where children are spending much more of their time online – for learning, entertainment and even to socialise – it’s essential that schools, parents, carers and online technology providers continue to work together. This way they can collectively identify potential risks, leading to the creation of these new Bills that help ensure everyone uses the internet happily, securely and safely.”

Dame Melanie Dawes, Ofcom Chief Executive, said:

“Today marks an important step towards creating a safer life online for the UK’s children and adults. Our research shows the need for rules that protect users from serious harm, but which also value the great things about being online, including freedom of expression. We’re looking forward to starting the job.”

Ian Russell, Molly Rose Foundation, said:

“The Molly Rose Foundation and Molly’s family urge Parliamentarians to deliver a safer internet for all, especially our young. The first reading of the Online Safety Bill in Parliament is another important step towards ending the damaging era of tech self-regulation. Increasingly, we are all reminded of the appalling consequences created by harmful online content.

“Even nations and governments can struggle to protect themselves from the damaging use of digital technology, so we must do more to safeguard the lives of our young and vulnerable. It is time for the laws, regulations, and freedoms of our offline democracies to be reflected in the digital domain.”

Matthew Fell, CBI Chief Policy Director, said:

“This landmark legislation is important and necessary to keep people safe online – business wholeheartedly back this ambition. The goal should be to make the UK an international leader, not an outlier, in shaping the future of the internet. 

“However, the Bill in its current form raises some red flags, including extending the scope to legal but harmful content. Not only will this deter investment at a time when our country needs it most, but it will also fail to deliver on the aims of this legislation.

“This is an incredibly complex set of regulations and businesses will be combing through the detail over the coming days. Ensuring the Bill is feasible for companies to implement is essential – they will work with policymakers to make that happen as the Bill moves through Parliament.”

Further improvements to the Bill confirmed today:

Criminal liability for senior managers

The Bill gives Ofcom powers to demand information and data from tech companies, including on the role of their algorithms in selecting and displaying content, so it can assess how they are shielding users from harm.

Ofcom will be able to enter companies’ premises to access data and equipment, request interviews with company employees and require companies to undergo an external assessment of how they’re keeping users safe.

The Bill was originally drafted with a power for senior managers of large online platforms to be held criminally liable for failing to ensure their company complies with Ofcom’s information requests in an accurate and timely manner.

In the draft Bill, this power was deferred and so could not be used by Ofcom for at least two years after it became law. The Bill introduced today reduces the period to two months to strengthen penalties for wrongdoing from the outset.

Additional information-related offences have been added to the Bill to toughen the deterrent against companies and their senior managers providing false or incomplete information. They will apply to every company in scope of the Online Safety Bill. They are:

  • offences for companies in scope and/or employees who suppress, destroy or alter information requested by Ofcom;
  • offences for failing to comply with, obstructing or delaying Ofcom when exercising its powers of entry, audit and inspection, or providing false information;
  • offences for employees who fail to attend or provide false information at an interview.

Falling foul of these offences could lead to up to two years’ imprisonment or a fine.

Ofcom must treat the information gathered from companies sensitively. For example, it will not be able to share or publish data without consent unless tightly defined exemptions apply, and it will have a responsibility to ensure its powers are used proportionately.

Changes to requirements on ‘legal but harmful’ content

Under the draft Bill, ‘Category 1’ companies – the largest online platforms with the widest reach including the most popular social media platforms – must address content harmful to adults that falls below the threshold of a criminal offence.

Category 1 companies will have a duty to carry out risk assessments on the types of legal harms against adults which could arise on their services. They will have to set out clearly in their terms of service how they will deal with such content and enforce these terms consistently. If companies intend to remove, limit or allow particular types of content, they will have to say so.

The agreed categories of legal but harmful content will be set out in secondary legislation and subject to approval by both Houses of Parliament. Social media platforms will only be required to act on the priority legal harms set out in that secondary legislation, meaning decisions on what types of content are harmful are not delegated to private companies or left to the whim of internet executives.

It will also remove the threat of social media firms being overzealous and removing legal content because it upsets or offends someone, even if it is not prohibited by their terms and conditions. This will end situations such as the incident last year when TalkRadio was forced offline by YouTube for an “unspecified” violation, with no clarity on how it had breached the platform’s terms and conditions.

The move will help uphold freedom of expression and ensure people remain able to have challenging and controversial discussions online.

The DCMS Secretary of State has the power to add more categories of priority legal but harmful content via secondary legislation should they emerge in the future. Companies will be required to report emerging harms to Ofcom.

Proactive technology

Platforms may need to use tools for content moderation, user profiling and behaviour identification to protect their users.

Additional provisions have been added to the Bill to allow Ofcom to set expectations for the use of these proactive technologies in codes of practice and force companies to use better and more effective tools, should this be necessary.

Companies will need to demonstrate that they are using the right tools to address harms, that they are transparent about how they do so, and that any technologies they develop meet the standards of accuracy and effectiveness required by the regulator. Ofcom will not be able to recommend that these tools be applied to private messaging or to legal but harmful content.

Reporting child sexual abuse

A new requirement will mean companies must report child sexual exploitation and abuse content they detect on their platforms to the National Crime Agency.

The CSEA reporting requirement will replace the UK’s existing voluntary reporting regime and reflects the Government’s commitment to tackling this horrific crime.

Reports to the National Crime Agency will need to meet a set of clear standards to ensure law enforcement receives the high quality information it needs to safeguard children, pursue offenders and limit lifelong re-victimisation by preventing the ongoing recirculation of illegal content.

To be exempt from this requirement, in-scope companies will need to demonstrate that they already have equivalent reporting obligations outside the UK, which will avoid duplicating companies’ efforts.


The Online Safety Bill launched today to keep children safe, stop racial hate and protect democracy online

11th May 2021: New internet laws will be published today (11 May) in the draft Online Safety Bill to protect children online and tackle some of the worst abuses on social media, including racist hate crimes.

  • Milestone Online Safety Bill will help safeguard young people and clamp down on racist abuse online
  • Bill to be published today includes new measures to uphold democratic debate online
  • Financial fraud on social media and dating apps included to protect people from romance scams and fake investment opportunities

Ministers have added landmark new measures to the Bill to safeguard freedom of expression and democracy, ensuring necessary online protections do not lead to unnecessary censorship.

The draft Bill marks a milestone in the Government’s fight to make the internet safe. Despite the fact that we are now using the internet more than ever, over three quarters of UK adults are concerned about going online, and fewer parents feel the benefits outweigh the risks of their children being online – falling from 65 per cent in 2015 to 50 per cent in 2019.

The draft Bill includes changes to put an end to harmful practices, while ushering in a new era of accountability and protections for democratic debate, including:

  • New additions to strengthen people’s rights to express themselves freely online, while protecting journalism and democratic political debate in the UK.

  • Further provisions to tackle prolific online scams such as romance fraud, which have seen people manipulated into sending money to fake identities on dating apps.

  • Social media sites, websites, apps and other services hosting user-generated content or allowing people to talk to others online must remove and limit the spread of illegal and harmful content such as child sexual abuse, terrorist material and suicide content.

  • Ofcom will be given the power to fine companies failing in a new duty of care up to £18 million or ten per cent of annual global turnover, whichever is higher, and have the power to block access to sites (a short illustrative sketch of this penalty formula follows this list).

  • A new criminal offence for senior managers has been included as a deferred power. This could be introduced at a later date if tech firms don’t step up their efforts to improve safety.
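
To make the penalty ceiling concrete, here is a minimal sketch of the ‘whichever is higher’ formula described in the list above. It is illustrative only, not drawn from the Bill’s legal text, and the turnover figures in the usage examples are hypothetical.

```python
def max_fine_gbp(annual_global_turnover_gbp: float) -> float:
    """Maximum fine under the proposed duty of care: GBP 18 million
    or 10% of annual global turnover, whichever is higher (illustrative)."""
    return max(18_000_000.0, 0.10 * annual_global_turnover_gbp)

# Hypothetical examples: a platform with GBP 1bn turnover faces up to GBP 100m,
# while one with GBP 50m turnover is still subject to the GBP 18m floor.
print(f"£{max_fine_gbp(1_000_000_000):,.0f}")  # £100,000,000
print(f"£{max_fine_gbp(50_000_000):,.0f}")     # £18,000,000
```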

Digital Secretary Oliver Dowden said:

“Today the UK shows global leadership with our groundbreaking laws to usher in a new age of accountability for tech and bring fairness and accountability to the online world.

“We will protect children on the internet, crack down on racist abuse on social media and, through new measures to safeguard our liberties, create a truly democratic digital age.”

Home Secretary Priti Patel said:

“This new legislation will force tech companies to report online child abuse on their platforms, giving our law enforcement agencies the evidence they need to bring these offenders to justice.

“Ruthless criminals who defraud millions of people and sick individuals who exploit the most vulnerable in our society cannot be allowed to operate unimpeded, and we are unapologetic in going after them.

“It’s time for tech companies to be held to account and to protect the British people from harm. If they fail to do so, they will face penalties.”

The draft Bill will be scrutinised by a joint committee of MPs before a final version is formally introduced to Parliament.

The following elements of the Bill aim to create the most progressive, fair and accountable system in the world. This comes only weeks after a boycott of social media by sports professionals and governing bodies in protest at the racist abuse of footballers online, and at a time when concerns continue to be raised about social media platforms arbitrarily removing content and blocking users.

Duty of care

In line with the government’s response to the Online Harms White Paper, all companies in scope will have a duty of care towards their users so that what is unacceptable offline will also be unacceptable online.

They will need to consider the risks their sites may pose to the youngest and most vulnerable people and act to protect children from inappropriate content and harmful activity.

They will need to take robust action to tackle illegal abuse, including swift and effective action against hate crimes, harassment and threats directed at individuals, and to keep their promises to users about their standards.

The largest and most popular social media sites (Category 1 services) will need to act on content that is lawful but still harmful such as abuse that falls below the threshold of a criminal offence, encouragement of self-harm and mis/disinformation. Category 1 platforms will need to state explicitly in their terms and conditions how they will address these legal harms and Ofcom will hold them to account.

The draft Bill contains reserved powers for Ofcom to pursue criminal action against named senior managers whose companies do not comply with Ofcom’s requests for information. These will be introduced if tech companies fail to live up to their new responsibilities. A review will take place at least two years after the new regulatory regime is fully operational.

The final legislation, when introduced to Parliament, will contain provisions that require companies to report child sexual exploitation and abuse (CSEA) content identified on their services. This will ensure companies provide law enforcement with the high-quality information they need to safeguard victims and investigate offenders.

Freedom of expression

The Bill will ensure people in the UK can express themselves freely online and participate in pluralistic and robust debate.

All in-scope companies will need to consider and put in place safeguards for freedom of expression when fulfilling their duties. These safeguards will be set out by Ofcom in codes of practice but, for example, might include having human moderators take decisions in complex cases where context is important.

People using their services will need to have access to effective routes of appeal for content removed without good reason, and companies must reinstate that content if it has been removed unfairly. Users will also be able to appeal to Ofcom and these complaints will form an essential part of Ofcom’s horizon-scanning, research and enforcement activity.

Category 1 services will have additional duties. They will need to conduct and publish up-to-date assessments of their impact on freedom of expression and demonstrate they have taken steps to mitigate any adverse effects.

These measures remove the risk that online companies adopt restrictive measures or over-remove content in their efforts to meet their new online safety duties. An example of this could be AI moderation technologies falsely flagging innocuous content as harmful, such as satire.

Democratic content

Ministers have added new and specific duties to the Bill for Category 1 services to protect content defined as ‘democratically important’. This will include content promoting or opposing government policy or a political party ahead of a vote in Parliament, election or referendum, or campaigning on a live political issue.

Companies will also be forbidden from discriminating against particular political viewpoints and will need to apply protections equally to a range of political opinions, no matter their affiliation. Policies to protect such content will need to be set out in clear and accessible terms and conditions and firms will need to stick to them or face enforcement action from Ofcom.

When moderating content, companies will need to take into account the political context around why the content is being shared and give it a high level of protection if it is democratically important.

For example, a major social media company may choose to prohibit all deadly or graphic violence. A campaign group could release violent footage to raise awareness about violence against a specific group. Given its importance to democratic debate, the company might choose to keep that content up, subject to warnings, but it would need to be upfront about the policy and ensure it is applied consistently.

Journalistic content

Content on news publishers’ websites is not in scope. This includes both their own articles and user comments on these articles.

Articles by recognised news publishers shared on in-scope services will be exempted and Category 1 companies will now have a statutory duty to safeguard UK users’ access to journalistic content shared on their platforms.

This means they will have to consider the importance of journalism when undertaking content moderation, have a fast-track appeals process for journalists’ removed content, and will be held to account by Ofcom for the arbitrary removal of journalistic content. Citizen journalists’ content will have the same protections as professional journalists’ content.

Online fraud

Measures to tackle user-generated fraud will be included in the Bill. It will mean online companies will, for the first time, have to take responsibility for tackling fraudulent user-generated content, such as posts on social media, on their platforms. This includes romance scams and fake investment opportunities posted by users on Facebook groups or sent via Snapchat.

Romance fraud occurs when a victim is tricked into thinking that they are striking up a relationship with someone, often through an online dating website or app, when in fact this is a fraudster who will seek money or personal information.

Analysis by the National Fraud Intelligence Bureau found in 2019/20 there were 5,727 instances of romance fraud in the UK (up 18 per cent year on year). Losses totalled more than £60 million.

Fraud via advertising, emails or cloned websites will not be in scope because the Bill focuses on harm committed through user-generated content.

The Government is working closely with industry, regulators and consumer groups to consider additional legislative and non-legislative solutions. The Home Office will publish a Fraud Action Plan after the 2021 spending review and the Department for Digital, Culture, Media and Sport will consult on online advertising, including the role it can play in enabling online fraud, later this year.

NAHT comments on new laws requiring social media companies to do more to protect children online 

Commenting on the government’s Online Safety Bill, announced in the Queen’s Speech yesterday, Paul Whiteman, general secretary of school leaders’ union NAHT, said:

“NAHT has previously called for a statutory duty of care to guarantee that social media companies will prioritise the safety and wellbeing of children and young people, and remove content quickly that is inappropriate or harmful.

“Social media companies need to be more proactive. They need to be on the ball looking for material and have a clearer line on what is and isn’t acceptable, particularly where children and young people are concerned.

“Social media providers should look at not only illegal content but also legal material that could be harmful. These companies need to ask themselves: ‘Could this content cause harm to children or young people?’ If the answer is yes, then the content needs to come down or more needs to be done to prevent children and young people from accessing it.”

Responding to the Online Safety Bill, Policy Chair of the City of London Corporation, Catherine McGuinness, said:

“We welcome the proposals announced in today’s Queen’s Speech of an Online Safety Bill, which will ensure consumers are better protected from the devastating financial and emotional harm caused by scams. It is vital that the UK is the safest place in the world to be online, so we hope that the final Bill will guarantee that web platforms are responsible for ensuring that fraudulent content is not hosted on their sites.

“We all have a role to play in tackling scams and it is important that the financial services sector works closely with the Government on this vital issue. Together we must ensure people continue to benefit from being online and fight back against fraudsters.”

Ian Russell, Molly Rose Foundation, said:

“The Molly Rose Foundation and Molly’s family say government internet regulation can’t come soon enough and welcome this important step towards a safer internet for all.

“It is vital to focus the minds of the tech platforms, to change their corporate culture and to reduce online harms, especially for the young and the vulnerable. Now is the time for the platforms to prioritise safety rather than profit; it is time for countries to change the internet for good.”

Dr Alex George, The UK Government’s Youth Mental Health Ambassador, said:

“This is a landmark moment here in the UK. The problem of online abuse has escalated into a real epidemic which is affecting people physically as well as psychologically and it is time that something is done.

“That’s why I welcome today’s announcement about the Online Safety Bill and the protection it will provide people. Social media companies must play their part in protecting those who consume and engage with their content.”

Dame Melanie Dawes, Ofcom Chief Executive, said:

“Today’s Bill takes us a step closer to a world where the benefits of being online, for children and adults, are no longer undermined by harmful content.

“We’ll support Parliament’s scrutiny of the draft Bill, and soon say more about how we think this new regime could work in practice – including the approach we’ll take to secure greater accountability from tech platforms.”

Yesterday the Digital Secretary visited Charlton Athletic FC to hear about the club’s work on diversity and inclusion and met players from the first, women’s and academy teams. He also spoke to representatives from UK safety tech firm Crisp.

Charlton Athletic academy player Wassim Aouachria said:

“I am very pleased to hear that action is being taken to stamp out discriminatory abuse on social media. I was on the receiving end of abuse on social media a few months ago and it was difficult to understand for myself and my family.

“I was grateful for the support I got from the club and more needs to be done so people are held accountable for their actions. Hopefully the upcoming online safety bill can help us create a safer, more welcoming and inclusive environment for players, managers, staff, fans and everyone associated with football.”

Adam Hildreth, CEO of UK safety tech start-up Crisp, said:

“We set up Crisp in 2005 with a vision of helping to create a digital world that is safe for everyone. We’ve been working alongside the UK government during that time to make sure legislation keeps up with changes in the online environment.

“We’re proud to have been contributors to the groundbreaking Online Safety Bill and we’re pleased to play a part in the successful UK safety tech story.”

  • The Online Safety Bill follows the publication of the Online Harms White Paper in April 2019. An initial Government response to the consultation was published in February 2020, and a full Government response in December 2020. The full government response set out in detail the regulatory framework, which will be taken forward through this bill.
  • The legislation will be published later today in draft, and will be subject to pre-legislative scrutiny by a joint committee of MPs in this session. The make-up of the committee will be confirmed in due course.
