Public security meets disinformation threats
IT'S MONDAY, AND THIS IS DIGITAL POLITICS. I'm Mark Scott, and I'll be in Amsterdam next week to present this work at this year's DSA and Platform Regulation Conference. If you're also in town, drop me a line to say hi.
— As defense types meet at the Munich Security Conference this week, protecting the online information environment from abuse has never been more important. But doing so comes with significant perils.
— The European Commission's latest regulatory move against TikTok has less to do with potential harm on the platform, and more to do with sending a policymaking message, at home and abroad.
— The rise of increasingly polarized social media has led many users to disengage from these online platforms.
Let's get started:
THE PUBLIC SECURITY INDUSTRIAL COMPLEX
IF DAVOS IS WHERE THE GREAT AND THE GOOD of the business world meet to swap notes, then the Munich Security Conference, which gets underway on Feb 13, is where their equivalents in the defense world gather to break bread. They will have a lot to talk about. From the almost four-year war between Russia and Ukraine to the fraying transatlantic alliance to Europe's renewed efforts to stand on its own two feet, this year's gathering in the southern German city marks a new era that has yet to be defined.
Among the topics to be discussed (alongside the ubiquitous AI hype-fest) will be the ongoing toxic nature of the online world and how it potentially harms countries' public security.
For many policymakers, this sits at the intersection of ongoing accusations — some real, some not — that Russia continues to meddle in Western elections via a spidery web of disinformation agents and so-called hybrid attacks. It also takes in increased public spending on government efforts to thwart such digital trickery, as well as proposals like the European Commission's Democracy Shield, aimed at boosting collective resilience through a mixture of media literacy, public support for independent media and greater research into social media platforms.
It wouldn't be an international conference without some shade from the United States. Details are still thin on the ground. But I would expect senior White House and federal government officials to double down on accusations that Europe's online safety rules are akin to censorship; that Europe needs to embrace its historic cultural heritage; and that only more free speech can combat the legitimate real-world harms seeping out of some of these global digital services.
Let's leave aside the US' significant critique of any form of online safety or disinformation-busting efforts. More on that here.
Thanks for reading Digital Politics. If you've been forwarded this newsletter (and like what you've read), please sign up here. For those already subscribed, reach out on digitalpolitics@protonmail.com
Many other countries are realizing there's a significant public security threat associated with unfettered — and, in most jurisdictions, unregulated — online spaces. Yet many fall into a policymaking fallacy about where the real threat lies. That reduces their ability to truly marshal sufficient resources to provide a safe online environment — while, it should go without saying, upholding fundamental free speech rights.
First, the fallacy. While each country is different — and some jurisdictions (like Moldova and Germany) face significantly more Russian meddling than others — the Kremlin, on average, is not the main driver of politically-motivated disinformation and online polarization that many believe it to be. This over-indexing on Russian actors pushes national security and digital policymaking to focus on a small subset of threats, at the expense of the broader issues currently affecting social media.
Yes, Russian state-affiliated actors are still doing what they can to shift public opinion. That includes everything from spoofed websites that impersonate Western media outlets to spread falsehoods, to large bot farms — on all social media platforms — that try to shift the conversation, one way or the other.
These tactics have evolved since they first hit the headlines around the 2016 US presidential election. Though, arguably, they existed decades earlier, often in analogue form. But what has also shifted over the last decade is the online attention economy. Now, roughly the top two percent of online creators garner 60 percent, if not more, of the time in people's social media feeds. That means most Russian-affiliated content just doesn't get the eyeballs it once did.
If a Kremlin bot creates a sophisticated disinformation campaign, but no one (apart from other bots) sees it, does it even exist? In my view, no. No, it doesn't.
Such ongoing attempts to cast Russia as the bogeyman — especially given its ongoing atrocities in Ukraine — have fixated many policymakers and, increasingly, national security types on the "what," and not the "why," of social media. By that, I mean it's too easy to focus on finding potentially harmful, politicized disinformation (see here) and not on the systems that amplify potentially polarizing content to national audiences.
The 'why' in this context is the increasingly sophisticated social media recommendation algorithms that have made each user's feed a bespoke mix of content that these companies believe will keep people interested (and, therefore, glued to the platform).
Gone are the days when people's feeds consisted mostly of updates from friends and family — such posts now represent roughly seven percent and 17 percent of what users see on Instagram and Facebook, respectively.
Instead, these recommender systems, whose operations remain closed off from scrutiny, have been tuned to maximize engagement, even if that engagement comes via party-political polarization and other content that potentially harms wider public security.
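To make that dynamic concrete, here is a minimal, purely hypothetical sketch of what an engagement-maximizing ranker looks like in principle. Everything in it (the field names, weights and sample posts) is invented for illustration; real platform systems are vastly more complex and not publicly documented. The structural point is what matters: the ranking objective never mentions polarization, yet polarizing content wins because it tends to score highest on predicted engagement.

```python
# A minimal, hypothetical sketch of an engagement-maximizing feed ranker.
# All names, weights and sample posts below are invented for illustration;
# real platform systems are far more complex and not publicly documented.
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str
    predicted_clicks: float  # model's estimate of click/interaction probability
    predicted_dwell: float   # expected seconds a user will linger on the post


def engagement_score(post: Post) -> float:
    # The objective only rewards attention. Note what is absent: there is
    # no term for accuracy, civility or public-interest value.
    return 2.0 * post.predicted_clicks + 0.1 * post.predicted_dwell


def build_feed(posts: list[Post], limit: int = 3) -> list[Post]:
    # Rank every candidate purely by predicted engagement.
    return sorted(posts, key=engagement_score, reverse=True)[:limit]


if __name__ == "__main__":
    candidates = [
        Post("friend", "Holiday photos from the coast", 0.10, 8.0),
        # Emotionally charged posts tend to attract more clicks and dwell
        # time, so their engagement predictions come out higher.
        Post("pundit", "THEY are destroying this country", 0.45, 30.0),
        Post("news_outlet", "Budget committee publishes report", 0.08, 12.0),
    ]
    for post in build_feed(candidates):
        print(f"{engagement_score(post):.2f}  {post.author}: {post.text}")
```

Run it and the outrage-bait post tops the feed, not because anyone wrote a rule to promote divisive content, but because the objective rewards whatever holds attention.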
This is where I start to get queasy. I am a big fan of free speech, and I do not believe national security agencies should be poking around in my business, yours or companies'. But just as too much time is spent hunting down Russian actors online, not enough time is dedicated to unpicking how these social media algorithms operate. These systems can actually harm people in the real world — more so, in my opinion, than the specter of Kremlin-backed bot farms.
There needs to be greater coordination between outward-looking national security agencies and inward-looking regulators and policymakers focused on online safety. Currently, that relationship either doesn't exist or is only starting to take shape.
That will involve national security officials finding a way to keep their distance from monitoring what happens within their own countries' borders — a barrier which, legitimately, must be upheld to protect people's fundamental rights.
But to suggest that protecting the information environment is merely a foreign issue — that whatever foreign actors do overseas to target a country's population stands apart from how social media promotes specific content, at home — is a false dichotomy.
To combat online threats that may affect public security — all while upholding free speech rights and other individual freedoms — new connections must be formed between national security and online safety officials. That is not going to be an easy lift, given how each community approaches the digital topics that fall within their overlapping mandates.
But not to try is to relegate ourselves to living in a world defined by what happened in 2016 (and the specific characteristics of a singular US presidential election).
The world has moved on. So should we.
Chart of the Week
A RESEARCHER AT THE UNIVERSITY OF AMSTERDAM found a correlation between the rise in polarized posts (at least on Facebook and X) and the number of users who disengaged from those platforms between the 2020 and 2024 US presidential elections.
The first set of charts (on the left) highlights how, between the 2020 and 2024 election cycles, all social media sites — with the exception of TikTok and Reddit — lost users, particularly among the young and the elderly.
The second set of charts (on the right) shows that posting on both X and Facebook rose significantly over that period among users who were more polarized than their mainstream counterparts.
Source: Petter Törnberg
THE ANATOMY OF A EUROPEAN COMMISSION ANNOUNCEMENT
THE BERLAYMONT BUILDING in central Brussels can be a weird place. Amid the smattering of European languages and EU officials busily going about their business, the center of the European Commission is a labyrinth of complexity, double-speak and really (and I mean really) bad coffee.
So when the EU's executive branch announced on Feb 6 it had found TikTok in preliminary violation of the bloc's Digital Services Act, I took note. But not for the reason you might think.
Under the yet-to-be-finalized decision, the European Commission said it believed the China-linked app had not adequately assessed the addictive features baked into the popular social media service. That included allegedly rewarding users with new content to keep them doomscrolling and sending people (and particularly children) notifications during the wee hours of the morning.
"Social media addiction can have detrimental effects on the developing minds of children and teens," Henna Virkkunen, the European Commissioner in charge of tech policy, said in a statement. In response, TikTok denied the accusations and said it would fight Brussels' preliminary decision.
So far, so good.
But the European Commission's announcement wasn't really about TikTok. I mean, it was about the China-linked platform — the investigation that led to this preliminary ruling dates back to 2024. But the true audience, for my money, was Europeans and, to a lesser degree, Americans.
On the first, the TikTok ruling was specifically designed to tee up the EU's upcoming Digital Fairness Act, which is slated to be published in the fourth quarter of the year. Those proposals are aimed, in part, at so-called "dark patterns" of addictive design that — shockingly — are central to Brussels' claims against TikTok.
What better way to show the need for more rulemaking than a real-world case of harm (the TikTok preliminary decision) that can then be used to make the case for the Digital Fairness Act in late 2026?
On the second, it's telling that the European Commission chose TikTok, and not Facebook, for its preliminary ruling. Officials say that separate case (around similar issues linked to addictive design) is still ongoing, and may (or may not) lead to a preliminary ruling.
But in the wake of the US House of Representatives holding another hearing around alleged European online censorship — and US officials traveling to Munich this week to make similar accusations — it's helpful, politically, to show that Europe's digital rulebook isn't just targeting Silicon Valley. In truth, more Chinese firms (AliExpress, Temu, TikTok) have faced decisions under the bloc's online safety rules than US counterparts (so far, just X).
Sign up for Digital Politics
Thanks for getting this far. Enjoyed what you've read? Why not receive weekly updates on how the worlds of technology and politics are colliding like never before? The first two weeks of any paid subscription are free.
This is where you have every right to call me a conspiracy theorist. That's not how regulatory enforcement works, I hear you saying. Brussels is just enforcing the rules as outlined within its regulation.
To which I say: yes, but only up to a point. As I mentioned above, the Berlaymont Building is a strange place. The European Commission sits in an odd regulatory position where it both writes and enforces the rules. Political decisions — particularly in light of the strained relationship with the US — always factor into how the bloc's legislation is enforced. That's especially true for something like the Digital Services Act, which includes new enforcement powers that no one within the European Commission has ever wielded before.
In that context, a regulatory decision is not just a regulatory decision.
It's a political marker to demonstrate, to both internal and external audiences, where the region is heading with its digital rulebook. Choosing TikTok and its alleged addictive design therefore serves two purposes: it provides political cover for the upcoming Digital Fairness Act, and it allows EU leaders to tell Washington the bloc's rules apply to everyone — and not just US Big Tech.
What I'm reading
— The European Artificial Intelligence & Society Fund outlines its strategy for the next five years. More here.
— The Lowy Institute published a deep dive into how the so-called "sovereign citizen movement" has gone global via digital platforms. More here.
— Ahead of next week's AI Impact Summit in India, researchers have written the second annual International AI Safety Report which documents efforts to safeguard the emerging technology. More here.
— Media companies still want to work with online platforms to access their audiences and global reach, despite reservations about how their content is monetized by these tech companies, argues Rasmus Kleis Nielsen in Digital Journalism.
— Australia's eSafety Commissioner hosts a series of analyses of emerging technologies and their impact on online safety. More here.