Social media at a time of war
WELCOME BACK TO DIGITAL POLITICS. I'm Mark Scott, and I have many feelings about Sora, OpenAI's new AI-generated social media platform, many of which are encapsulated in this video by Casey Neistat. #FreeTheSlop.
— The world's largest platforms have failed to respond to the highest level of global conflict since World War II.
— The semiconductor wars between China and the United States are creating a massive barrier between the world's two largest economies.
— China's DeepSeek performs significantly worse than its US counterparts on a series of benchmark tests.
Let's get started:
WHEN PLATFORM GOVERNANCE MEETS GLOBAL CONFLICT
OCT 7 MARKED THE SECOND ANNIVERSARY of the Hamas attack on Israel, which killed roughly 1,200 people and engulfed the region in a seemingly endless conflict. Tens of thousands of Palestinians have died, many more have been displaced, and attacks (or the threat of attack) against both Israelis and Jews worldwide have skyrocketed.
I won't pretend to understand the complexities of the Israel-Hamas war (more on that here, here and here). But the last two years have seen a slow degradation of the checks and safeguards that social media companies once had in place to protect users from the war-related propaganda and illegal content now rife wherever you look online.
First, let's be clear. This isn't just an Israel-Hamas issue. As we hurtle toward the end of 2025, there are almost 60 active state-based conflicts worldwide, and global peace is at its lowest level in 80 years, according to the Institute for Economics and Peace.
That is not social media's fault. As much as it's easy to blame TikTok, YouTube and Instagram for the ills of the world, real-world violence is baked into generational conflicts, multitudes of overlapping socio-economic issues and other analogue touch-points that have nothing to do with people swiping on their phones.
But it's also true the recent spike in global conflicts has come at a time of collective retrenchment on trust and safety issues from social media giants that, at the bare minimum, have failed to stop some of the offline violence from spreading widely within online communities. Again, there's a causation versus correlation issue here that we must be careful with. But at a time of heightened polarization (and not just in the US and Europe), the capacity for tech platforms to be used to foment real-world instability and violence has never been higher.
Before I get irate complaints from those of you working within these companies: social media platforms do have clear terms of service that are supposed to limit war-related content from spreading among users. You can review them here, here, here and here. But it's one thing to have clear-cut rules, and another to actively enforce them.
Thanks for reading the free monthly version of Digital Politics. Paid subscribers receive at least one newsletter a week. If that sounds like your jam, please sign up here.
Here's what paid subscribers read in September:
— A series of legal challenges to online safety legislation call into question how these rules are implemented; The unintended consequences of failing to define "tech sovereignty;" Where the money really goes within the chip industry. More here.
— What most people don't understand about Brussels' strategy toward technology; Unpicking the dual antitrust decisions against Google from Brussels and Washington; AI chatbots still return too much false information. More here.
— The next transatlantic trade dispute will be about digital antitrust, not online safety; Washington's new foreign policy ambitions toward AI; The US' spending spree on data centers. More here.
— An inside look into the United Nations' takeover of AI governance; How the United Kingdom embraced the US "AI Stack;" People view the spread of false information as a higher threat than a faltering global economy. More here.
— Washington's proposed deal to untangle TikTok US from Bytedance is not what it first appears; How social media companies are talking out of both sides of their mouths on online safety; AI's expected boost to global trade. More here.
Social media companies' neglect of conflicts outside the Western world has been a feature for years (more on that here). Now that same neglect has seeped into conflicts that are closer to home for the Western public, including those in the Middle East and Ukraine.
There are many reasons for this shift.
First, companies like Alphabet and Meta have pared back their commitments to independent fact-checking, which had provided at least some pushback against government and non-state efforts to peddle falsehoods associated with these global conflicts. A shift to crowdsourced fact-checking — initially rolled out by X, and then followed by Meta — has yet to fill that void. That's mostly because a crowdsourced fact-check is only published once a consensus emerges among users, and companies have found such consensus hard to reach on divisive topics, including those related to warfare.
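To see why that consensus bar is so hard to clear, here is a minimal, purely hypothetical sketch of the "bridging" idea behind crowdsourced fact-checks. It is not any platform's actual algorithm; the group labels, thresholds and function names are all illustrative. The point is simply that a note only goes live when raters from otherwise opposed camps both find it helpful.

```python
# Hypothetical illustration of "bridging" consensus for crowdsourced fact-checks.
# This is NOT X's or Meta's actual algorithm; it only shows why notes on
# divisive topics rarely clear the bar: every rater camp must agree.

from collections import defaultdict

def should_publish(ratings, threshold=0.7, min_raters=5):
    """ratings: list of (rater_group, is_helpful) tuples.
    Publish only if every group of raters independently finds the note helpful."""
    by_group = defaultdict(list)
    for group, is_helpful in ratings:
        by_group[group].append(is_helpful)

    for group, votes in by_group.items():
        if len(votes) < min_raters:
            return False  # not enough raters from this group yet
        if sum(votes) / len(votes) < threshold:
            return False  # this group does not find the note helpful
    return True

# On a divisive, war-related claim the two camps split, so nothing is published:
contested = [("group_a", True)] * 8 + [("group_b", False)] * 8
print(should_publish(contested))   # False

# On an uncontroversial claim both camps agree, and the note goes out:
agreed = [("group_a", True)] * 8 + [("group_b", True)] * 7 + [("group_b", False)]
print(should_publish(agreed))      # True
```

On war-related claims, the opposing camps rarely agree, so under this kind of rule the note sits unpublished while the original falsehood keeps circulating.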
Second, social media platforms have spent the last three years gutting their trust and safety teams to the point where the industry is on life support. This was initially done for economic reasons. Faced with a struggling advertising sector in 2022, company executives sought cost savings wherever they could, and internal trust and safety teams bore the brunt of those efforts. Fast forward to 2025, and there has been an ideological shift toward "free speech" among many of these firms, one that makes any form of content moderation anathema to the current (US-focused) zeitgeist.
Third: politics. The current White House's aversion to online safety is well known. So too are the US Congress' accusations that other countries' digital regulations unfairly infringe on American citizens' First Amendment rights. But from India to Slovakia, there are growing local efforts to quell platforms' content moderation programs — and the associated domestic legislation that has sprouted up from Brazil to the United Kingdom. In that geopolitical context, social media firms have instituted a "go slow" on many of their internal systems — even if (at least in countries with existing online safety regulation) they still comply with domestic rules.
Fourth, and making things more difficult, is the platforms' increasingly adversarial relationship with outsiders seeking to hold these firms to account for their stated trust and safety policies. (Disclaimer: My day job puts me in this category, though my interactions with the companies remain cordial.) Researchers have found it increasingly difficult to access publicly available social media data. Others have faced legal challenges to analyses that cast social media giants in an unfavorable light. Industry-linked funding for such independent "red-teaming" of platform weaknesses has fallen off a cliff.
Taken together, these four points represent a fundamental change in what had been, until recently, a progressive multi-stakeholder approach to ridding global social media platforms of illegal and gruesome content — and not just content related to warfare.
Previously, companies, policymakers and outside groups worked together (often with difficulty) to make these social media networks a safe space for people to express themselves in ways that respected free speech rights and safeguarded individuals from hate. That coalition has now disintegrated amid a combination of hard-nosed economics, shifting geopolitics and fundamental differences over what constitutes tech companies' trust and safety obligations.
Each of the above shifts occurred separately. No one set out thinking that cutting back on internal trust and safety teams, ending relationships with fact-checkers, kowtowing to a shift in geopolitics and reducing ties to outside researchers would, together, make it easier for conflict-related content to spread across these social media networks.
And yet, that is what happened.
Go onto any social media platform, and within a few clicks (if you know what you're doing) you can come face-to-face with gruesome wartime content — or, at least, material purportedly associated with one of the 59 state-based conflicts active worldwide. Even if you're not seeking out such material, the collective pullback on trust and safety has raised the chances that you will stumble across it in your daily doomscroll.
That is the paradox we find ourselves in at the end of 2025.
In many ways, social media has become even more ingrained in everything from politics to the latest meme craze (cue: the rise of OpenAI's Sora.) But these platforms are less secure and protected than they have ever been — at a time when the world is engulfed in the highest level of subnational, national and regional warfare in multiple generations.
Chart of the Week
THE US CENTER FOR AI STANDARDS AND INNOVATION ran a series of tests — across four well-known benchmark categories associated with the performance of large language models — comparing services offered by OpenAI, Anthropic and DeepSeek.
You have to take these results with a pinch of salt, as they come from a US federal agency. But across the board, China's LLM performed significantly worse than its US rivals.
Source: Center for AI Standards and Innovation
THE AI WARS: SEMICONDUCTOR EDITION
COMMON WISDOM IS THAT YOU NEED three elements to compete in the global race around artificial intelligence. In your "AI Stack," you need world-leading microchips, you need cloud computing infrastructure that's cheap and almost universal, and you need applications like large language models that can sit on top and drive user engagement. On that first component — semiconductors — China and the US are increasingly going down different paths.
Looking back, it was almost inevitable. Washington has long sought to keep world-leading chips (from both American firms and those of its allies) out of Beijing's hands via export bans and other strong-arm tactics. The goal: to ensure China's AI Stack was always one step behind its US counterpart.
Yet that strategy is starting to backfire. Yes, Western AI chips are still better than their Chinese equivalents. But the lack of access to such semiconductors has forced the world's second largest economy to invest billions in domestic production in the hope of eventually catching up with — and surpassing — the likes of Nvidia or the Taiwan Semiconductor Manufacturing Company.
What has galvanized this Chinese resolve are the repeated efforts by both the Trump and Biden administrations to hobble Chinese firms' ability to access the latest semiconductors. In this never-ending 'will they, or won't they?' game of national security ping-pong, the Trump 2.0 administration agreed in August to allow Nvidia and AMD to sell pared-down versions of their latest chips to China — as long as they gave the US federal government a 15 percent slice of that export revenue. Principled diplomacy, it was not.
That plan appears to have backfired. Nvidia is now under an antitrust investigation by Chinese authorities over its 2020 takeover of Israeli chipmaker Mellanox. The Cyberspace Administration of China has also reportedly told the country's largest tech firms, including Alibaba, ByteDance and Baidu, not to buy Nvidia's semiconductors. Jensen Huang, chief executive of the US chip firm, said he was "disappointed" by that move (which has never been officially confirmed).
If you're interested in sponsoring Digital Politics, please get in touch on digitalpolitics@protonmail.com
Nvidia has invested millions to design China-specific microchips that both meet the national security limitations demanded by Washington and can be sold directly into the Middle Kingdom in ways that placate Beijing. If Chinese officials close the door — and require local firms to use domestic alternatives, many of which are reportedly almost on par with their Western rivals — then it's another indicator the US and China are on diverging paths when it comes to technological development.
Again, a lot of this was foreseeable. Successive White House administrations urged American and Western chip and equipment firms to steer clear of China. In response, Beijing invested billions in local semiconductor production, much of which has remained at a lower level of sophistication. But as in other tech-related industries, Chinese manufacturers have steadily risen through the stack to offer world-beating hardware. It would not be unusual for the same thing, eventually, to happen in semiconductors.
Sign up for Digital Politics
Thanks for getting this far. Enjoyed what you've read? Why not receive weekly updates on how the worlds of technology and politics are colliding like never before? The first two weeks of any paid subscription are free.
What does this all mean for the politics of technology?
First, Western semiconductor firms offering pared-back versions of their latest chips to China may have the door shut on them. Beijing may need these manufacturers, in the short term. But don't expect that welcome to remain warm — especially as Western officials continue to rattle sabres.
Second, the need for Chinese firms to rely on (currently sub-par, but rapidly advancing) homegrown chips will lead to the kind of scrappy innovation once associated only with Silicon Valley. We can debate whether the meteoric rise of DeepSeek was truly as unique as first believed (given the company's ties to the wider Chinese tech ecosystem). But relying on second-tier semiconductors will force Chinese AI firms to be more nimble than their US counterparts, which enjoy seemingly unlimited access to chips, compute power and data.
Third, the "splinternet" will come to hardware. I wrote this in 2017 to explain how the digital world was being balkanized into regional fiefdoms. The creation of rival semiconductor stacks — one led by the US, one led by China — will extend that division into the offline world. Companies will try to make the respective hardware interoperable. But as the divide widens over which semiconductors can work with which infrastructure worldwide, it won't be in either party's interest to maintain that networking capability.
In short, the global race between AI Stacks has entered a new era.
What I'm reading
— The Wikimedia Foundation published a human rights impact assessment on artificial intelligence and machine learning. More here.
— The European Centre of Excellence for Countering Hybrid Threats assessed the current strengths and weaknesses in the transatlantic fight against state-backed disinformation. More here.
— The Canadian government launched an AI Strategy Task Force and outlined its agenda for public feedback on the emerging technology. More here.
— The Appeals Centre Europe, which allows citizens to seek redress from social media companies under the EU's Digital Services Act, published its first transparency report. More here.
— Researchers outlined, for the University of Oxford, the growing differences in how countries are approaching the oversight and governance of artificial intelligence. More here.