One-World Governance for Social Media

If you think online censorship is bad now, just wait until you see the future, at least if the people running the EU and Great Britain get their way.

The European Union’s Digital Services Act (DSA) and the U.K.’s proposed Online Safety Bill are among the latest government policies designed to make social media companies responsible for “hate speech” and “disinformation” posted by users.

The two laws point to a potential slippery slope of regulations, in the U.S. and overseas, that stifle the spread of information deemed inconvenient for governments and other powerful actors under the guise of “combating disinformation.”

In the U.S., these proposals include a government “disinformation board” and a bill pending before Congress, the Digital Services Oversight and Safety Act.

The EU’s new regulations, experts say, may have far-reaching impacts beyond Europe.

Michael Rectenwald, author of “Google Archipelago: The Digital Gulag and the Simulation of Freedom,” said he can foresee a future in which such regulations might affect all speech — not just speech on social media platforms:

“[T]he EU’s DSA represents a major step toward one-world governance of social media and Internet search and one step closer to global government.

“Since the distinction between ‘on-line’ and ‘off-line’ activity will lose all meaning as the Internet includes the Internet of Things and Bodies, the DSA may become the law of the land.”

In timing that coincided with Elon Musk’s bid to purchase Twitter, the EU announced the passage of the Digital Services Act on April 23.

The DSA seeks to tackle the spread of “misinformation and illegal content” and will apply “to all online intermediaries providing services in the EU,” in proportion to “the nature of the services concerned” and the number of users of each platform.

According to the DSA, “very large online platforms” (VLOPs) and “very large online search engines” (VLOSEs) — those with more than 45 million monthly active users in the EU — will be subject to the most stringent of the DSA’s requirements.

Big Tech companies will be obliged to perform annual risk assessments to ascertain the extent to which their platforms “contribute to the spread of divisive material that can affect issues like health,” and independent audits to determine the steps the companies are taking to prevent their platforms from being “abused.”

These steps come as part of a broader crackdown on the “spread of disinformation” called for by the Act, requiring platforms to “flag hate speech, eliminate any kind of terrorist propaganda” and implement “frameworks to quickly take down illicit content.”

Regarding alleged “disinformation,” these platforms will be mandated to create a “crisis response mechanism” to combat the spread of such content, with the Act specifically citing the conflict between Russia and Ukraine and the “manipulation” of online content that has ensued.

Companies violating the provisions of the DSA would risk fines of up to 6% of their total global annual revenue, while repeat offenders could be banned from the EU altogether, despite the “open internet” principle of net neutrality enshrined in EU law.

According to TechCrunch, the DSA will not fully come into effect until early 2024. However, rules for VLOPs have a shorter implementation period and may be enforced by early 2023.
