When the Online Safety Act was introduced, the government sold it as a child-protection measure. Ministers promised it would clean up the internet, tackle grooming gangs, and shield children from harmful material. Yet tucked away in the final legislation, in its schedule of “priority offences”, is a provision that has little to do with child safety: Section 25 of the Immigration Act 1971.
What Is Section 25?
Section 25 makes it an offence to “assist unlawful immigration to the UK.” Traditionally, this meant cracking down on smugglers, sellers of forged documents, or anyone physically organising illegal entry into the country. It carries serious penalties: the maximum sentence, raised from fourteen years by the Nationality and Borders Act 2022, is now life imprisonment.
By listing Section 25 among the Online Safety Act’s priority offences, Parliament effectively gave Ofcom the power to require platforms to remove content that could be interpreted as “assisting” illegal immigration. That includes adverts and posts in which smugglers openly market their services, a growing problem on platforms like TikTok and Facebook. Officials argue that these networks use slick videos, testimonials, and WhatsApp groups to lure vulnerable migrants into dangerous Channel crossings.
Why It Matters Online
In practice, platforms now have a legal duty to proactively detect and remove material that falls under Section 25, rather than merely respond to reports. Failure to act can bring fines of up to £18 million or 10% of global annual turnover, whichever is greater. For tech companies, the choice is stark: remove questionable content or risk financial disaster.
Supporters say this is common sense. Stopping smugglers from recruiting customers online is a vital step in breaking their business model. By targeting the digital side of their operations, the government hopes to reduce demand for crossings and save lives.
The Transatlantic Shock
But the law didn’t just alarm campaigners in Britain. It also rattled the boardrooms of Silicon Valley. The Online Safety Act creates criminal liability for senior managers, most directly where a company fails to comply with Ofcom’s information notices and enforcement decisions. In theory, this means that tech executives could face jail time if their platforms allowed illegal content, including material tied to unlawful immigration, to slip through the cracks and the company then stonewalled the regulator.
This provision made headlines in the U.S. when lobbyists warned that Britain was setting a dangerous precedent. For the first time, an allied democracy was threatening American tech leaders with prison for failing to moderate user-generated content to the satisfaction of a foreign regulator. The clash highlighted how far the UK was prepared to go to enforce its digital border policies, even if that meant tangling with the world’s most powerful tech firms.
The Controversy
Critics argue that extending Section 25 into the digital sphere blurs the line between facilitation and reporting. Could a journalist’s footage of small boats arriving be flagged as “assisting” unlawful immigration? What about a charity explaining asylum procedures, or a citizen posting images from the coast? Faced with the risk of fines and criminal liability, the safest course for platforms is to over-remove content.
Civil liberties groups warn this will chill public debate on immigration, already one of the most divisive issues in British politics. Once a piece of content is flagged as potentially breaching Section 25, it may disappear instantly, with little recourse for the user who posted it.

A Precedent for Expansion?
The inclusion of Section 25 in the Online Safety Act shows how a law pitched as child protection has expanded into border control, and has even put the executives of international tech giants at risk of jail time. Critics call it the thin end of the wedge: if immigration offences can be policed under a safety law, what might be added next? Climate change “disinformation”? Election coverage? Criticism of government policy?
For now, the Act has given Ofcom the authority to treat smuggler adverts as seriously as child exploitation material. Whether it stops there — or becomes a wider tool of censorship and criminalisation — remains to be seen.
