The miseducation of Jan 6
HAPPY NEW YEAR. This is Digital Politics. I'm Mark Scott, and as many of us head back to work after the holiday season, I bring you live footage of my first day in the office. Be gentle.
Before we begin, a logistics note: I'm teaming up with Ben Whitelaw (and his excellent Everything in Moderation newsletter) and Georgia Iacovou (and her equally good Horrific/Terrific newsletter) for an in-person discussion/drinks about tech policy in 2025.
If you're in London on Jan 30, sign up to attend, for free, here.
— Jan 6 marks the fourth anniversary of the deadly attack on the US Capitol. Social media's willingness to police content in the United States has only diminished since then.
— The New Year brings renewed efforts to corral artificial intelligence. Not all these governance attempts will work out.
— Ever wondered how the European Union's Digital Services Act actually works? I've got a chart for that.
Beware those who say all is well
JAN 6 MARKS ONE OF THE DARKEST DAYS in modern US history. Just two months after Joe Biden beat Donald Trump to win the White House, a violent mob of roughly 2,000 people attacked the United States Capitol Building. Many believed the November 2020 election had been stolen from Trump — and they wanted to take it back. The insurrection eventually cost the lives of nine people, including four police officers who died by suicide in the aftermath. Around 1,600 defendants have pleaded guilty to charges related to Jan 6, and another 200 have been convicted at trial. For a full breakdown, read the final report of the US House Select Committee that investigated the Jan 6 attack.
You're probably familiar with all these facts — many of which are now openly questioned by those seeking to rewrite history. But, over the break, I found myself revisiting the internal Facebook documents leaked by Frances Haugen. Yes, it was quite a vacation. I had access to them during my time at POLITICO, after we joined a consortium of media outlets granted access to this treasure trove of information — much of which related to how Facebook handled crises like Jan 6. The Wall Street Journal's Jeff Horwitz had been given first crack at the documents.
Thanks for reading Digital Politics. If you've been forwarded this newsletter (and like what you've read), please sign up here. For those already subscribed, reach out on digitalpolitics@protonmail.com
Re-reading how Facebook approached the build-up to Jan 6 (and the subsequent violence on the day), as documented in these leaked files, was troubling. They paint a picture of a social media giant struggling to come to terms with coordinated efforts to spread the "Stop the Steal" message on its platform; an unwillingness to tackle so-called 'harmful non-violating narratives,' or posts that did not explicitly break the company's terms of service; and internal content algorithms that, within days, promoted QAnon theories to a mass audience. Meta subsequently banned QAnon-linked posts from its platforms.
"We recently saw non-violating content delegitimizing the US election results go viral on our platforms," according to an internal analysis of what happened on Facebook in the build-up to Jan 6. "Retrospectively, external sources told us that the on-platform experiences on this narrative may have had substantial negative impacts, including contributing materially to the Capitol riot."
Well, duh.
To be fair to Facebook, the platform was not the only engine spreading conspiracy theories around the 2020 election. As someone enmeshed in that world four years ago, I can say social media, writ large, was a major catalyst for how those lies circulated. At the center of that coordination were fringe platforms — most notably Telegram — where little, if any, content moderation existed or, even now, exists. Such sophisticated online communities had flourished during the Covid-19 pandemic.
Within that context, Facebook should be considered a comparatively good corporate citizen, even if the internal documents revealed it failed to clamp down on election-related conspiracies that, in part, fueled online anger and, eventually, offline violence.
For more on social media's impact on Jan 6, read the House Committee's own findings here, and an analysis of that investigation here.
It's indisputable that social media emboldened those who disliked the outcome of the 2020 US presidential election to take to the streets on Jan 6. What the Haugen documents reveal, at least within Facebook, are internal processes that were not adequately set up to handle such unprecedented domestic US political events. They show legitimate concerns about infringing people's free speech becoming entangled with the political reality that Facebook executives did not want to be seen as taking sides in a highly contentious election. They highlight internal Facebook teams — whose counterparts also existed at YouTube and Twitter — struggling to get senior managers to respond quickly enough to dampen conspiracy theories that morphed into real-world violence.
But one overriding niggle I couldn't shake when re-reading these hundreds of pages of internal Facebook angst was that, in early 2025, they sounded exceedingly quaint given how much social media giants have changed over the last four years.
Yes, the likes of YouTube, Instagram and TikTok still take a strong approach to foreign interference, even if state-backed meddling outside the US remains rife on these platforms. They also have highly robust terms of service stating that illegal online content like hate speech and overt calls to violence will not be tolerated. They speak eloquently about the threat of disinformation created via generative AI, and how they are working, as an industry, to thwart such abuse.
And yet, would any of these platforms take similar measures in 2025 to throttle the spread of overtly political conspiracy theories — even those associated with offline actions — as they did four years ago? Honestly, I'm not so sure.
You're reading the free version of Digital Politics. Here's what paid subscribers had access to over the last month:
— What role did TikTok really play in Romania's presidential election?; The new and old digital policy faces in Brussels and Washington; Western countries' split digital ambitions. More here.
— Lessons from the 2024 (digital) election-palooza: Everything you need to know about how tech shaped last year's global election cycle. More here.
— Digital Politics' 2025 predictions: A renewed focus on national security; AI lobbying leads to governance results; Efforts to quell online competition abuse falter. More here.
If that sounds up your street, you can upgrade your subscription here.
Many of the election integrity and trust and safety teams at these platforms have been culled almost to insignificance. Some firms, like Elon Musk's rebranded X, have embraced an all-or-nothing vision of free speech that fundamentally misunderstands how the First Amendment applies to such private networks. With Trump's return to the White House only weeks away, many of these platforms' chief executives are doing whatever they can to stay on the right side of arguably the most powerful person in the world. A politician, it is worth noting, who was banned from all mainstream social media platforms in the wake of Jan 6.
In this new political environment, two things are happening. First, there is an ongoing effort to reframe the content moderation discussion within the US — one that was most evident in debates over social media's role around Jan 6 — as proof that platforms have gone too far in quelling people's free speech. (We'll come back to why that's happening in subsequent newsletters.) Second, given this emphasis on free speech fundamentalism, social media giants are now unwilling to "break the glass" and throttle people's problematic online posts in times of emergency.
Before I get angry emails, I understand that companies say they will enforce existing terms of service on all users, and that content moderation, especially around elections, is paramount. I also understand that people within these firms are still trying to live by that ethos.
Chart of the Week
The EU's social media laws are almost one year old. Investigations into the likes of Meta, X and TikTok abound. But how does the bloc's rulebook actually operate?
Cardiff University's Nora Jansen put together this (very complicated) overview of how all the pieces of the DSA puzzle interlink.
It includes regulators like the European Commission and national Digital Services Coordinators. It includes outside groups like auditors and 'trusted flaggers.' It includes the Very Large Online Platforms and Search Engines.
To say the structure is complex would be an understatement.
Source: https://shorturl.at/XyZ1V
They said what, now?
"As a new year begins, I have come to the view that this is the right time for me to move on from my role as President, Global Affairs at Meta," Nick Clegg, the former UK deputy prime minister, wrote on his Facebook page. "And no one could pick up from where I’ve left off with greater skill and integrity than my deputy, Joel Kaplan."
AI governance at the beginning of 2025
I HAVE GOOD NEWS AND BAD NEWS for those interested in the policing of next-generation artificial intelligence systems. In late December, South Korea became the second jurisdiction after the EU to pass comprehensive AI rules. That's no mean feat given the country's recent political turmoil. The AI Safety Institutes of the US and United Kingdom also conducted a joint evaluation of OpenAI's latest model, in what is expected to become standard practice before other firms release their own models into the wild. In early February, French President Emmanuel Macron will welcome the great-and-the-good (and me) to Paris for France's AI Action Summit, an effort to shepherd the technology toward the light and away from apocalyptic uses.
This year will also see AI governance efforts gain steam in the EU, via its AI Act; at the Council of Europe, via its AI Convention; and in other regions where policymakers are charting their own paths to harness the technology for economic development.
That's the good news. Now here comes the bad. I'm not sure this will end well. I had promised not to be a 'fun sponge' this year, and I do believe we'll see new forms of AI governance take root in 2025. I'm just not convinced it's the type of governance many of us had envisioned.
Let's take the EU's AI Act. If you listen to the bloc's leaders, the legislation will corral the worst-case scenarios while unleashing Europe's economic potential. It is expected to become the gold standard on which others — like South Korea — base their own legislation. It will somehow be both a hands-off means to jumpstart growth and a regulatory deterrent to stop firms from abusing the technology. What's not to like?
And yet, in early 2025, we're still 18 months away from all parts of the AI Act coming into force. Yes, some of the most stringent provisions, including bans on certain AI use cases, will kick in next month. But we're still a long way from a meaningful regulatory rulebook — and even Brussels' AI Office, the European Commission's linchpin for implementing the AI Act, is still working with a skeleton crew (it's still hiring). Effective regulatory oversight, as of Jan 6, 2025, it is not.
That takes us to the other side of the Atlantic, where the future of the US AI Safety Institute — and pretty much all of Joe Biden's White House Executive Order on AI — is up in the air ahead of Donald Trump's swearing-in ceremony on Jan 20. Publicly, the incoming US president has said he will kill his predecessor's AI governance plans. I'm not so sure. The Trump 1.0 Administration issued its own Executive Order on AI, and incoming tech policymakers like Lynne Parker may temper efforts to quash all forms of AI governance.
And yet, that leaves the US AI Safety Institute, whose mandate includes spearheading much of this policy work, in limbo until those political decisions are made. It also places Washington's position in broader global discussions around AI governance — including those to be held in Paris on Feb 10-11 at the AI Action Summit — on equally shaky ground.
My best guess is that Trump 2.0 keeps some, but not all, of Biden's AI efforts, especially those related to national security and economic productivity. Having AI experts in senior positions in all federal agencies, for instance, is just good politics.
Given that the AI Safety Institute sits within the US Commerce Department, I would also bet it survives under the incoming administration. But I wouldn't put much money on the White House pushing anything more than voluntary commitments from AI companies when it comes to transparency, accountability and greater oversight.
Here's one wild card for you: the United Nations. Its AI Advisory Body has already called for global AI governance efforts to fall mostly under the international body's remit. That would give the likes of China and Russia an equal say with democratic countries. Something that hasn't exactly worked out well for the UN's separate Cybercrime Treaty.
Watch out for more power grabs by the UN over how AI systems are governed during 2025. It's 100 percent legitimate that the international body wants to make such discussions more equitable, including for Global Majority countries. But if these negotiations lead to authoritarian governments running roughshod over fundamental rights, then we will start to have a problem.
What I'm reading
— The US Treasury Department added a number of Russian and Iranian nationals to its sanctions list related to cyber attacks and foreign interference. More here.
— Julie Inman Grant, Australia's eSafety Commissioner, explained the importance of newly created codes of practice under the country's Online Safety Act. More here.
— The outgoing Italian G7 Presidency finalized reporting frameworks for how the most advanced forms of AI would be overseen. More here.
— Researchers at the Friedrich Naumann Foundation for Freedom outlined China's ever-evolving tactics in cyber operations and disinformation. More here.
— Ahead of the TikTok hearing in the US Supreme Court on Jan 10, here's an overview of the amicus briefs related to the case.