A lot has been said about the scope and likely impact of the Online Safety Bill, which is often hyped as a silver bullet that will keep children and other vulnerable people safe online. The Bill is very complex and, whilst it will hopefully have the effect of, for example, largely preventing children from accessing vast amounts of porn, it is also limited, and a lot will turn on the resources and teeth of the regulator, Ofcom. This article gives an overview of the Bill and looks behind the hype.
Background
Whilst the criminal law applies to online activity in the same way as offline activity, there is no overall regulator of online activity in the UK, and for content that is harmful but not illegal, social media platforms self-regulate through “community standards” and “terms of use” that users agree to when joining. Even though many platforms have a minimum age for use (often set at 13), there is no age verification system, and previous legislative provisions which imposed age verification requirements were never brought into force.
As is well known, the self-regulation model can and does allow material which promotes violence, self-harm or cyberbullying, and which includes indecent, disturbing or misleading content. There is an increasing body of evidence of harm being caused, to children in particular, by exposure to indecent content (whether legal or not), as well as by cyberbullying, revenge porn and sites promoting eating disorders (so-called ‘pro-ana’ sites) and suicide. Platforms used by millions for posting social content can and do become echo chambers in which algorithm-driven filter bubbles repeatedly expose the user to one side of an argument rather than a range of opinion, leading to widespread disinformation.
The basis of the current system, in which platforms are largely immune from liability for content posted by others, is rooted in section 230 of the Communications Decency Act 1996 – US legislation which provides immunity to providers of any ‘interactive computer service’ for anything posted online by third parties:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
This creates a conundrum: even assuming there are statutory measures to prevent or remove content that is illegal (though query their effectiveness), what about content that is legal but potentially harmful?
For example, 88% of all pornographic material reviewed on a “mainstream” pornographic website – Pornhub – involved physical aggression by men towards women: commercial pornography has coalesced around a “homogenous script involving violence and female degradation”. This is not necessarily illegal, but repeated exposure of curious children and teens to it is believed to have long-term adverse effects on their psyche, their views of what is ‘normal’ and what is acceptable behaviour, and their ability to form healthy sexual relationships going forward. The outcry following the Everyone’s Invited website and Ofsted’s Rapid Review provides a snapshot of how widespread and detrimental these behaviour patterns are, and the young age at which they start being formed.
The Online Safety Bill
The ongoing calls for statutory regulation have been met with the Online Safety Bill, published in May 2021 following the Online Harms White Paper of 2019. The Bill is currently undergoing pre-legislative scrutiny: a Joint Committee is examining it and has taken evidence from tech companies, children's charities and Ofcom. The Committee was due to report by 10 December 2021 but, at the time of writing, does not appear to have done so, and indeed was still taking written evidence on that date.
The Bill has the following core aims: to address illegal and harmful content online (especially terrorist content, racist abuse and fraud) by imposing a duty of care in respect of such content; to protect children from child sexual exploitation and abuse (CSEA) content; to protect users’ rights to freedom of expression and privacy; and to promote media literacy. The Bill designates the Office of Communications (Ofcom), the UK’s existing telecommunications and broadcast regulator, to oversee and enforce the new regime.
The Bill is in 7 parts as follows:
The new duties contained in Parts 2 and 3 of the Bill apply to providers of “regulated services”, which are of two types: user-to-user (“U2U”) services and search services.
Providers must have links with the UK, which clause 3 defines as having a significant number of UK users, or UK users forming one of the target markets for the service.
The Bill imposes a “duty of care” to protect users from illegal and harmful content. A cornerstone of the Bill’s approach is risk assessment, followed by duties to mitigate and effectively manage the risks identified. All providers of regulated U2U services must comply with the following duties:
Where regulated U2U services are likely to be accessed by children, however, providers must also comply with the following duties:
“Content that is harmful to children” is defined in clause 45 as meaning:
Clause 15 is potentially an important provision forming part of the safeguarding system: it requires the service provider to operate a system that allows users and affected persons to “easily report content” that:
A similar duty of care / risk assessment structure applies to search services, with the relevant duties contained in clauses 17, 19 and 21-24.
The legislation sets out a general definition of harmful content and activity. This will apply to content and activity where there is a ‘reasonable, foreseeable risk of significant adverse physical or psychological impact on individuals’. A limited number of priority categories will be set out in secondary legislation, which will cover:
The detail is then to be found in the relevant Code of Practice to be issued by Ofcom as regulator, which will set out the actions that companies must take to comply with their safety duties in respect of illegal content. There is an interim voluntary code on child sexual exploitation and abuse (CSEA)[1], published in December 2020, but a more detailed Code can be expected once the Bill passes into law.
Exemptions/outside scope
There is an extensive list of matters which either fall outside the scope of the Bill or are exempt from it. For example, ‘regulated content’, in relation to U2U services, means user-generated content except (clause 39):
Also, it should be noted that internal business services are exempt – this includes services such as business intranets, content management systems, database management software and collaboration tools.
Impact
It seems reasonably clear from the Bill and the surrounding material that it will not, for example:
However, there is reason to be optimistic that the Bill will:
Nonetheless, serious concerns have been expressed about the Bill itself: that it “will be catastrophic for freedom of speech of British citizens online in its current form” by forcing tech platforms to delete “harmful” content or face big fines, and that this will lead to many lawful posts being deleted without actually making people safer online.[2]
The task of the Bill is a considerable one, but its aims are laudable.
Samantha Broadfoot QC is a specialist practitioner in public law and human rights, with an interest in data protection and regulation. She is an Assistant Coroner and a Recorder.
[1] https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/944034/1704__HO__INTERIM_CODE_OF_PRACTICE_CSEA_v.2.1_14-12-2020.pdf
[2] https://committees.parliament.uk/writtenevidence/41410/default/