The new rules are based on old promises the tech companies made to Europe’s regulators, although recent events show those promises weren’t nearly enough.
European Commissioner for Internal Market Thierry Breton told Reuters that upcoming regulations on the tech industry would be a “backbone” for enforcing disinformation moderation.
Big tech companies are getting a kick in the keister to start sorting out the mess of disinformation on their online platforms or else face a financial spanking with a pretty massive paddle.
Reuters reported that the European Commission plans to release new rules on Thursday that will require big tech companies to deal with both deepfakes and fake information on their platforms. The new rules will require companies to hand over information that could help combat falsehoods online. Fines could be as big as 6% of a company’s global turnover, according to a leaked European Union document seen by Reuters reporters. That could mean a hefty financial hit for those who don’t play ball.
It’s all part of the EU’s efforts to constrain tech giants like Meta, Microsoft, and Twitter through the Digital Services Act, which is already in the process of becoming law. According to an excerpt of the document obtained by reporters, signatories will need to implement “clear policies regarding impermissible manipulative behaviours [sic] and practices on their services.” The new rules are co-regulatory, meaning responsibility is shared between the regulators (AKA individual EU countries) and the companies themselves.
There is currently a voluntary code on the books that asks companies to combat disinformation by implementing policy roadmaps. That code was initially enacted in 2018 and was signed by Facebook, Google, Microsoft, Twitter, Mozilla, and more.
EU Commissioner for Internal Market Thierry Breton told Reuters that the DSA is intended to provide “a legal backbone” to those codes of practice, which includes those heavy financial sanctions.
Some of those prior commitments, while lengthy, are several years out of date. The signature forms filed by Facebook (now named Meta) and Twitter, to name two, include links that had become defunct at the time of reporting. The EU conducted self-assessment reviews in the years following those reports, showing that, despite claims of progress, some companies fell behind on their commitments. By September 2020, the commission was calling for a structured monitoring program to keep track of the companies.
It’s clear from the last few years that disinformation and fake accounts have remained a huge issue on every platform. Leaked documents like the Facebook Papers show that company in particular knew about and did little to prevent disinformation that led to events like the Jan. 6 insurrection. That was two years after the company pledged to do better. It’s unclear how much of a role Twitter played in the 2021 storming of the Capitol, but the company is still playing a lethargic game of whack-a-mole with the rash of fake info being propagated on its platform about the war in Ukraine.
Commission VP Věra Jourová told reporters the new regulations will also help countries be better prepared to counter disinformation coming from Russia.
After the code becomes law, companies will have six months to implement their anti-disinformation measures. They will have to report how they are moderating content and explain how that bogus information is propagating on their sites in the first place.
Meta and Twitter did not immediately return Gizmodo’s request for comment.
Reporting mentions deepfakes in particular, which are an ongoing problem on social media and have been used in scams and spam alike. There are limits to the technology, though, and some research has questioned how effective deepfakes are at convincing people compared to more mundane scams. Still, reports have shown disinformation remains rampant on practically all platforms, whether it’s political, medical, or environmental.