Free speech is under attack. When the Online Safety Act was first pitched, the message was simple: it would protect children from predators, scammers, and harmful content. Few objected to the goal of shielding minors from the darker corners of the internet. Successive governments had been criticised for not doing enough to keep children safe online, and campaigners had spent years calling for tighter regulation of tech giants.

The principle sounded uncontroversial. Who could possibly oppose removing child abuse material, terrorist propaganda, or scams designed to trick vulnerable people? It was framed as a common-sense reform that would clean up the online world and place responsibility firmly on the shoulders of powerful Silicon Valley platforms.

But buried deep in the legislation is a clause that seems to have little to do with child protection: immigration.

Specifically, the Act incorporates offences under Section 25 of the Immigration Act 1971—“assisting unlawful immigration”—into the list of illegal content platforms must monitor and remove. This means posts, videos, or even news stories that show Channel crossings or refugees arriving in Britain could be flagged as unsafe. In effect, it turns platforms into silent enforcers of immigration policy, policing not only smugglers but potentially journalists and ordinary users who dare to post on a controversial subject.

Critics argue this is classic “policy creep.” The law was sold to Parliament as child-focused, yet its scope has ballooned into areas of political debate. Campaigners warn that public discussions around migration—already one of the most divisive topics in Britain—could now be censored under the banner of “safety.”

Digital rights organisations such as the Open Rights Group and Index on Censorship have pointed out that legislation framed around protecting children often becomes a gateway to wider state control. They note that while it is right to criminalise smuggling networks that advertise their services on Facebook or Telegram, the drafting of the law is so broad that it risks sweeping up ordinary people who are not breaking any law. A charity raising awareness of asylum rights, or a citizen journalist filming migrant arrivals on the south coast, could find their content removed or their accounts restricted.

The government insists the measure is about cracking down on gangs who promote illegal crossings online. Home Office briefings have highlighted cases where smugglers used TikTok to post glossy adverts promising “safe passage” to Britain, complete with fake assurances about visas and asylum success rates. Ministers say ignoring this would be reckless, and that online platforms have a duty to remove such content as quickly as they would terrorist propaganda.

Yet sceptics are not convinced. The wording of the Act does not draw a clear line between criminal advertising and ordinary discussion. Given the way free speech has come under attack since Labour took office, there is reason to be suspicious. The result is that platforms, fearing massive fines, are likely to "over-comply" and remove anything remotely contentious. In practice, this creates a chilling effect: immigration debate online, already subject to heavy moderation on mainstream platforms, may shrink further as companies take the safe option and delete.

This is not just a theoretical risk. Science Secretary Peter Kyle has already admitted that immigration-related content has been restricted under the Act's enforcement, confirming that scope creep is not a distant possibility but a present reality. His acknowledgement has fuelled claims that the Online Safety Bill was never just about children, but was always intended as a broader tool of control, one aimed at the silent majority and at curbing our freedom of speech and freedom of expression.

The implications go beyond free speech and freedom of expression. Journalists covering Channel crossings could find their reporting algorithmically buried. Campaigners raising concerns about asylum policy could see their posts flagged as harmful. Even satirical or critical takes could end up in moderation queues, stripped from feeds long before any human moderator reviews them.

For some, this is a feature, not a bug. Supporters of the legislation argue that the internet has become a breeding ground for disinformation and hostility, and that decisive regulation is the only way to create a “safe online environment.” But for others, the risk is that Britain ends up with a sterilised internet where only government-approved narratives survive.

The question is obvious: if a law designed to protect children can be used to shape what we say about immigration, what else might be added to the list in future? Today it is Channel crossings. Tomorrow it could be climate change denial, election scepticism, or criticism of international institutions.

The Online Safety Act was meant to guard children from predators. Instead, it has opened the door to something far more controversial: the quiet policing of political debate.

The real irony is that social media giants are still pushing suicide-related content to teens despite the new law. Researchers who set up dummy accounts posing as a 15-year-old girl were bombarded with self-harm and depression posts. Full Story

We want to hear from you: how do you see the Online Safety Act affecting everyday people? Comment below.


By Editor
