Censorship by Proxy: A Critical Analysis of the UK's Online Safety Act and the Erosion of Free Expression
By Richard Miller (Londonistan)
The United Kingdom's Online Safety Act, now in full effect, represents a sweeping expansion of state-mediated internet regulation. Ostensibly framed as a measure to protect children and vulnerable populations from harmful content, the Act marks a decisive shift from liberal democratic commitments to free expression toward a model of paternalistic censorship. I will critically assess the legal architecture, linguistic ambiguity, and practical implications of the Act, arguing that it constitutes a de facto "censorship charter" whose long-term effects may prove corrosive to open discourse, political dissent, and democratic accountability.
The Online Safety Act, developed under the Conservative government and brought into force with refinements by the Labour administration, signifies a bipartisan consensus on the need for online regulation. However, such consensus belies the Act's deeply authoritarian potential. The Act grants disproportionate powers to regulators and social media platforms, while incentivising pre-emptive censorship under the guise of harm reduction.
The Act criminalises the transmission of false information intended to cause "non-trivial psychological or physical harm." Specifically, it states:
A person commits an offence if –
(a) the person sends a message,
(b) the message conveys information that the person knows to be false,
(c) at the time of sending it, the person intended the message, or the information in it, to cause non-trivial psychological or physical harm to a likely audience, and
(d) the person has no reasonable excuse for sending the message.
While appearing reasonable on its surface, this formulation is plagued by epistemic and juridical ambiguities. What constitutes "non-trivial" psychological harm? How is intent to harm distinguished from provocation, satire, or robust political critique? The Act effectively empowers both Ofcom and private tech companies to make determinations about intent and harm without judicial oversight, a recipe for capricious and politically motivated enforcement.
Of particular concern is the Act's mechanism of delegated censorship, wherein social media platforms face fines of up to £18 million or 10% of global revenue (whichever is greater) for failing to act on content deemed harmful. This economic pressure creates a perverse incentive structure: platforms are encouraged to over-remove content to avoid liability. The result is algorithmic overcompliance, a process by which the boundaries of permissible speech are defined not by law but by opaque moderation practices.
Examples have already surfaced. Posts discussing politically sensitive topics such as grooming gang scandals or immigration have been quietly removed. In many cases, no explanation is provided, merely a notice that content was "restricted in your region," a chilling euphemism for state-endorsed silencing.
The Online Safety Act aligns with broader international trends in preventive censorship, a philosophy most clearly articulated by European Commission President Ursula von der Leyen as "pre-bunking", a form of information "vaccination" designed to pre-empt the spread of misinformation.
While rhetorically sophisticated, this paradigm treats public discourse as a public health hazard and speech as contagion. Such metaphors, once confined to totalitarian propaganda, are now normalised in democratic contexts. They function to pathologise dissent, shifting the burden of proof from censors to citizens and assuming that falsehood is inherently more dangerous than suppression.
Perhaps most disturbingly, proponents of the Act rarely acknowledge its implications for civil liberties. Political leaders frequently cloak their support in euphemism, invoking child protection, online safety, and platform responsibility, while refusing to use the term "censorship." This refusal constitutes a technocratic evasion of moral accountability. As John Stuart Mill warned, the greatest threats to liberty often come not from tyrants but from "well-meaning" functionaries convinced of their own moral superiority.
The Online Safety Act is not a narrow measure targeted at extremism or child abuse. It is a comprehensive regime of content control, constructed in vague language, enforced through coercive economic levers, and operationalised by private companies with limited transparency. It constitutes an abandonment of the liberal democratic ideal of open discourse in favour of a regulatory framework more consistent with soft authoritarianism.
While its immediate effects may appear benign or even laudable to some, its long-term implications for the health of political pluralism and public reason are severe. If history teaches us anything, it is that the tools of censorship, once normalised, are rarely surrendered voluntarily, and almost never used solely for the purposes claimed by their architects.
https://dailysceptic.org/2025/07/30/the-online-safety-act-is-a-censors-charter/
"The baton has passed. On Friday, the previous Conservative government's Online Safety Bill, newly refined by Labour, came into full force. This collaboration between Left and Right is all the evidence we need that one of the core aspects of woke ideology has prevailed. Specifically, the unevidenced belief that words cause real-world harm and therefore censorship is essential for the sake of social cohesion.
This has been a long time coming. Opposition from MPs has been lacklustre; most of our political class simply does not understand why the principle of free speech should take priority in a civilised society. This was evident from Keir Starmer's comments this week during his joint interview with Donald Trump at the Trump-Turnberry golf course in Scotland. With nuclear-strength audacity, Starmer claimed that he was "not censoring anyone". Rather, his Government was simply putting measures in place "to protect children, in particular from sites like suicide sites".
It all sounds noble enough, until one realises that the impact of the Online Safety Act will not simply stop at child protection. Social media platforms are now liable for "false communications" that may cause "non-trivial psychological harm", a crime that can result in a jail term of up to 51 weeks. Here is the specific section of the Act:
A person commits an offence if –
(a) the person sends a message,
(b) the message conveys information that the person knows to be false,
(c) at the time of sending it, the person intended the message, or the information in it, to cause non-trivial psychological or physical harm to a likely audience, and
(d) the person has no reasonable excuse for sending the message.
As with all 'hate speech' legislation, one suspects that the ambiguity is the point. If the standard is psychological harm, then almost anyone who speaks in public is vulnerable. I certainly receive abuse regularly that would qualify, but I would much sooner block these angry trolls than see them arrested. Moreover, we have already seen the claim of psychological harm weaponised against perfectly legitimate and sensible points of view. In other words, this nebulous legislation is wide open to exploitation by activists looking to silence their critics.
This act will limit the parameters of discussion because no social media platform is going to risk falling foul of the legislation. The fines for non-compliance can be up to £18 million, or 10% of global turnover (whichever is higher). Overzealous censorship is inevitable. Where content is controversial, it will be far easier for social media companies to err on the side of deletion rather than risk such stringent financial penalties.
The UK is now essentially in 'pre-bunking' mode, the term used by the president of the European Commission, Ursula von der Leyen, to describe her intention to roll out censorship online. At last year's Copenhagen Democracy Summit, she argued that when it comes to misinformation, "prevention is preferable to cure". She continued: "Perhaps if you think of information manipulation as a virus. Instead of treating an infection, once it has taken hold, that is debunking. It is much better to vaccinate so that the body is inoculated. Pre-bunking is the same approach."
This is sinister stuff. It also makes me wonder why those calling for censorship are invariably too timid to utter the word. Why must they insist that they support free speech and resort to these endless euphemisms? I would have far more respect for a technocrat who came out and said it: "I do not trust the masses to speak freely, and that is why they must be censored." It would be terrifying, but at least the honesty would be refreshing.
And so now in the UK, social media users are experiencing a curated version of the internet. Many examples have already been posted online. A thread on X by Benjamin Jones of the Free Speech Union includes a number of screenshots of posts that have been quietly 'disappeared'. For instance, a post about the grooming gangs scandal by Conservative MP Katie Lam has been replaced with a message explaining that the content has been restricted.
All of which makes it clear that the scope of censorship under the new act will far exceed the remit of protecting children from inappropriate material. Few will have failed to notice that censored posts seem to be those that the government might be glad to see suppressed.
Whether this is coincidence or not, the vagueness of the legislation will make it far easier for the government to crack down on its detractors. Worse still, it will establish a precedent whose end point will be impossible to predict. Those who are happy to cheer on online censorship now may not be so buoyant once they realise that these restrictions could also apply to them.
A combination of complacency and ignorance has led our political class to all but abandon the principles of free speech upon which our democracy was founded. While the list of citizens arrested or jailed for wrong speak continues to grow, our Government has now exacerbated the problem by insisting that social media platforms censor on its behalf.
This will not end well. Don't believe me? Read a few history books."