🚨AI propaganda factories🏭 are now operational. My study shows how small, open-weight models can run as fully automatic generators in influence campaigns, mechanising personas, engagement, and cadence. This is possible for state, non-state, and micro-actors, including bedroom ones. This is a wake-up call, with implications for AI Safety: alignment in frontier, closed-source models won't save you. 💥 Next-generation information warfare.

Paper: arxiv.org/abs/2508.20186

in reply to Lukasz Olejnik

I really think this was already happening on Twitter circa 2010 and isn't a new threat at all. I noticed a lot of it in the run-up to the Syrian war, and it leaned politically toward a Western agenda.

The only thing that HAS changed for sure is the accessibility.

in reply to Lukasz Olejnik

Disclosure of AI use in any communications, private or public, should be made a legal requirement.
in reply to Lukasz Olejnik

My finding a couple of years back, when I made mastodon.social/@StochasticEnt… (a cached version of the now-dead account), was that with €100 anyone could create a propaganda factory using nothing but GitHub Actions - github.com/tanepiper/Stochasti…
in reply to Lukasz Olejnik

I’m not sure how anonymous, global social media will survive the coming AI propaganda bot onslaught, as anything but an agitprop channel.

Perhaps in the EU, our eID or banking apps could authenticate us as people… perhaps even without knowing/saving the precise social media account name we're logging into…

But globally? If you're posting from tyrannies, wars, and genocides, they'll just drown your carefully TOR/VPN-routed posts in slop.

#AI #SocialMedia

in reply to Lukasz Olejnik

After reading the paper, I am a little confused about what is new here. Llama has been runnable on a laptop and LangChain has had social media integrations since 2023; LangGraph has had self-reflective agent examples that could be used for updating personas since March 2024, and has maintained a full-on social media agent framework since November 2024. Are the measurements from the LLM judge the main result? Or the countermeasures? I'm having a hard time telling from the discussion what I should take away from the results section.
in reply to Lukasz Olejnik

For every 'tech bro' or mindless toxic-enabler type out there who has ever unironically used the phrase 'force multiplier' when describing tech, specifically including LLMs posing as AI:

arxiv.org/abs/2508.20186