


To the Field First, Comrades!


cross-posted from: lemmy.ml/post/33172838

Michael Thomas Carter
Jul 12, 2025

Mamdani’s success, according to mainstream narratives and prominent pundits, is due to a mixture of individual political acumen, social media savvy, a talented video production team, and his appealing message of a more affordable city for all New Yorkers. All of this helped, but the fact that Mamdani secured the most total votes in a primary in New York City’s history marks the culmination of a grassroots political project that began at least back in 2015, when the Democratic Socialists of America (DSA) announced a “New Strategy for a New Era,” energized by the early days of Bernie Sanders’s first presidential run.

Over the past nine years, NYC-DSA has built a field organizing machine that is arguably the strongest electoral operation in municipal politics nationwide. Through wins and losses in local, state, and federal elections, NYC-DSA has learned strategic lessons, developed significant logistical capacity, created a volunteer base for canvassing and outreach, and nurtured a cadre of experienced electoral campaign workers who work on endorsed campaigns.

#USA


Seeking interop testing for geosocial ActivityPub client



Hey, all! I’m seeking some help testing an application I whipped up for the Geosocial task force of the W3C Social Web Community Group. It’s called https://checkin.swf.pub/ , and it’s a barebones checkin service, similar to Swarm, but implemented as a pure Web client. You can watch the application in action.

videopress.com/embed/zCMu0OeZ?…

It logs into your account on an ActivityPub server using OAuth 2.0. It then reads your inbox, filtering the activities there to show only geosocial ones. You can use the browser's geolocation services, and the places.pub/ service as a place vocabulary, to find nearby places. You can then “check in” to one of those places, add a note, and control the privacy of the activity.

Geosocial activities are part of the core Activity Vocabulary that underlies ActivityPub. But, they’re not as widely implemented as other activities in the vocabulary. This app is trying to change that, by making them available on the network, and making it easy to create them.
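
For readers who haven't seen one, a rough sketch of the kind of object the client looks for may help. The names, IDs and coordinates below are invented; only the "Arrive" and "Place" types and the location property come from the Activity Vocabulary itself:

```typescript
// Hypothetical example of a geosocial check-in activity (all values are made up).
interface Place {
  type: "Place";
  name: string;
  latitude?: number;
  longitude?: number;
}

interface Activity {
  "@context": string;
  type: string;
  actor: string;
  location?: Place;
  content?: string;
  to?: string[];
}

const checkin: Activity = {
  "@context": "https://www.w3.org/ns/activitystreams",
  type: "Arrive",
  actor: "https://example.social/users/alice",          // hypothetical actor
  location: { type: "Place", name: "Example Café", latitude: 40.0, longitude: -74.0 },
  content: "Great espresso ☕",
  to: ["https://www.w3.org/ns/activitystreams#Public"],  // or a narrower audience for privacy
};

// Conceptually, the inbox filter boils down to something like this:
const geosocial = (items: Activity[]) =>
  items.filter((a) => a.type === "Arrive" || a.location !== undefined);
```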

To test the client, your service will need to support:

To test federation, your service will need to support:

As of this writing, Mastodon does not work for either of these. If you want to test receiving federated messages, follow me on evan@onepage.pub . I’ve been using it a lot!

Code for the checkin application is here: github.com/social-web-foundati…

This is my second ActivityPub API client (ap, the command-line client, was my first), and my first one for the Web. I found this process really fun and invigorating. I was able to create a new kind of social networking application (well, new on the Fediverse…) purely from the client side. The app saves no data to the server; everything is done in the browser.

Please reach out on GitHub or comment here if you want to work on interoperability. I’m happy to help debug connections if needed.



in reply to Bonfire

> We're tracking mockups and implementation progress here

FYI I'd love to help. But from Jan 1 2025 onwards, I refuse to do anything that requires logging in to GritHub. For the same reason I refuse to maintain an account on FarceBook.

Even reading GH pages on mobile is starting to require allowing this BorgSoft-controlled platform to run JS in my browser.

#GitHub #DataFarms

@evanprodromou @Jeremiah @herebox

in reply to Strypey

Copied the GH issues and mockups into an article and published it on our bonfire instance, for you to read and participate in directly from the fediverse 🔥
here you go bonfire.cafe/post/01K0V1SMG293…

@evanprodromou @Jeremiah @herebox





Cameroon's President Biya, 92, announces bid for eighth term in office


Cameroon's President Paul Biya, the world's oldest serving head of state at 92, has announced he will run in this year's presidential election in October.

"I am a candidate in the presidential election. Rest assured that my determination to serve you matches the urgency of the challenges we face," he posted on his X (formerly Twitter) account on Sunday.

A new term would keep Biya in office until he is nearly 100. He came to power more than four decades ago in 1982, when his predecessor Ahmadou Ahidjo resigned. The country has had only two presidents since its independence from France and the United Kingdom in the early 1960s.

Biya scrapped presidential term limits in 2008, clearing the way for him to run indefinitely. He won the 2018 election with 71.28 percent of the vote, although opposition parties alleged there were widespread electoral irregularities.

His re-election bid had been widely anticipated, although his age and health are the subject of frequent speculation and criticism.

Biya also used social media to announce his candidacy for the 2018 presidential contest, a rare show of direct engagement with the public.

In his recent post, Biya described an "increasingly restrictive international environment" and acute challenges for Cameroon, adding that he had decided to "respond favourably to the urgent calls coming from the 10 regions of our country and from the diaspora" to stand for election.

Members of the ruling Cameroon People's Democratic Movement (CPDM) and other supporters have publicly called for Biya to seek another term since last year.

Grégoire Owona, deputy secretary-general of the CPDM, told RFI: "At the party level, we had no doubts about this candidacy."

However, two former allies have quit the ruling coalition and announced their own plans to run in the election.

Issa Tchiroma Bakary, Minister of Employment and Vocational Training, left the government before declaring his presidential candidacy under the banner of his party, the FSNC.

Bello Bouba Maïgari, a minister of state and former prime minister – and a long-standing ally of Biya's for nearly 30 years – also declared his candidacy.

Opposition parties and some civil society groups argue that Biya's long rule has stifled economic and democratic development. However, the opposition remains deeply divided and is struggling to unite behind a single candidate.

Maurice Kamto, Biya's fiercest opponent, who came second in the 2018 presidential election, and Cabral Libii, a prominent opposition figure, are already in the running for the presidency.

Sunday's announcement has revived the debate over Biya's fitness for office. He seldom makes public appearances, often delegating responsibilities to the chief of staff of the president's office.

Last October, he left Cameroon for 42 days with no explanation, sparking speculation that he was unwell. The government responded by banning any discussion on his health, saying it was a matter of national security.

Under Biya's rule, Cameroon has faced economic challenges and insecurity on several fronts, including a drawn-out separatist conflict in its English-speaking regions and ongoing incursions from the armed Islamist group Boko Haram in the north.

The date of the presidential election, 12 October, was set last Friday by the head of state himself. Candidates have until 21 July to declare their intention to run.


(with newswires)

in reply to xiao

Last October, he left Cameroon for 42 days with no explanation, sparking speculation that he was unwell. The government responded by banning any discussion on his health, saying it was a matter of national security.


facepalm



in reply to culprit

Nah they're completely different, klan members were proud to be so, dressing up in their silly costume so everyone could see that they are pieces of shit,
members of ICE are so ashamed of what they are that they hide it
in reply to culprit

Some of those that work forces, are the same burn crosses




Tesla’s Autopilot is under scrutiny in a rare jury trial


How will the jury respond?
in reply to BrikoX

McGee was using Autopilot, but had dropped his phone and was inattentive at the time of the crash.


Seems like this could be a factor.

in reply to threelonmusketeers

Ah yes, the magical get-out-of-jail card. Just add a random disclaimer on page 424 that contradicts everything the company has said for years in its official communications and marketing.




The voice of intimacy: Elis Martins tells her story in music


With “Dentro me”, Elis Martins bursts onto the music scene with an explosion of authenticity and artistic vision. A song that doesn't just tell a story: it lives, breathes and wins you over.

In this interview, Elis talks to us about the fulfilment of a dream and her journey with producer Salvo De Vita.

Elis Martins, we have finally heard your new single “Dentro me”.
How did you feel when it came out?

I felt great satisfaction at the fulfilment of a dream that, before meeting producer Salvo De Vita, I never thought could come true, and I thank him for his professionalism and dedication in guiding me, step by step, through every phase of pre- and post-production.

Could you tell us something about its production?

Of course... the song “Dentro me” and its video clip were made entirely in Naples. Two days of pure emotion, a full immersion in which I was catapulted into a world completely unknown to me, a world where everything I had dreamed of finally came to life and I was its protagonist, with an entire staff completely dedicated to me, following and guiding me at every moment with enormous competence and organisation, all of course under the supervision of the producer, Dott. Salvo De Vita.

What did you expect to happen after its release? What were your expectations?

What did I expect?... My only expectation was to reach as many people as possible and, thanks to the focused and constant work of the MP Press Office, that goal has been reached and keeps growing.

Yours is an autobiographical song. Do you think you have touched people's hearts with “Dentro me”?

As I was saying a moment ago, “Dentro me” has reached a more than satisfying number of views on social media and on the platforms and, judging by the public's interactions... Yes!... I can say I have touched the hearts of many people, and this fills me with immense joy and with the desire to continue along this path.

What can we expect from this great musical project in the future?

My path is constantly evolving and every day I discuss future projects with my producer; to tell the truth there are many of them, and they are truly interesting. On 24 August, for example, I will take part in the event of the Engel Von Bergeiche Literary Association at the Castle of Venosa and, giving you a little spoiler, a new single of mine is planned for release in 2026.

Follow me to discover all the news!

Article: Dott.ssa Elisa Mietto

Service director: Dott. Salvo De Vita

Supervisor and publication manager: Ufficio Stampa e Produzioni MP

Distribution: Urban Dream di Mietto Elisa



What workers really want from AI


#AII


SORELLA DI PERFEZIONE by Giuseppe Iannozzi - The new book trailer


SORELLA DI PERFEZIONE by Giuseppe Iannozzi - The new book trailer

💥💥💥 SORELLA DI PERFEZIONE by Giuseppe Iannozzi, LFA Publisher - And a bonus poem 🥳🥳🥳

FOR YOU I AM DEAD (*)

For you I am dead, I am dead!
I shouted it loud and clear at you
I did not beg you to remember me,
to be a little immortal
on the steps of your poetry

For you I am dead,
unfortunately not enough,
so I keep begging you to throw me
down the stairs of those thoughts of yours
that now and then bring you back to me,
for only then will I, poor fool,
find peace

(c) Iannozzi Giuseppe

(*) This poem is not included in "Sorella di Perfezione".

"Sorella di Perfezione" è la mia ultima opera. ❤️❤️❤️ La poesia è come la magia: sorprende il lettore, lo emoziona, gli fa palpitare il cuore. La poesia è la forma più nobile di letteratura. Con questo non voglio assolutamente dire di essere un profeta. Ho scritto un libro di circa 230 pagine. Ho impiegato ben quattro anni per scrivere le poesie che sono nell'antologia “Sorella di perfezione” (LFA Publisher). Perché dovreste leggere le mie poesie? Una risposta non ce l'ho, e non intendo stupirvi con effetti speciali e paroloni.
"Sorella di Perfezione" accoglie tante poesie, circa duecento. Ogni lirica affronta uno o più temi e tutti di grande attualità: amore, vita, morte, povertà, malattia, guerra, etc. Sostanzialmente parlo degli ultimi e dei penultimi, parlo di persone che dalla vita hanno ottenuto poco o niente. - Giuseppe Iannozzi

REPETITA IUVANT! 😊 "SORELLA DI PERFEZIONE" DI GIUSEPPE IANNOZZI". ❤️❤️❤️ LEGGETE CON ATTENZIONE, HO DELLE INFORMAZIONI IMPORTANTI DA DARVI. GRAZIE. 😘😘😘

⚠️⚠️⚠️ ATTENZIONE: Alcuni store online potrebbero segnalare che il libro non è disponibile. In realtà "Sorella di Perfezione" è disponibile. 🌹🌹🌹
Nel momento in cui effettuate l'ordine di acquisto online, l'ordine viene immediatamente inviato al distributore Libro Co Italia che subito provvederà a inviare il libro "Sorella di Perfezione". E scomparirà anche l'eventuale avviso "non disponibile". 💪💪💪

☎️☎️☎️ Sono a Vostra disposizione; se avete dubbi o perplessità, non esitate a contattarmi. Grazie infinite. 😘

Fatevi un regalo, regalate e regalatevi "Sorella di Perfezione". Un po' di buona poesia non può che fare bene all'anima. 😘😘😘❤️❤️❤️

🛒🛒🛒 ACQUISTA ON LINE 🛒🛒🛒

➡️ On IBS:

ibs.it/sorella-di-perfezione-l…

➡️ On La Feltrinelli:

lafeltrinelli.it/sorella-di-pe…

➡️ On Mondadori Store:

mondadoristore.it/sorella-di-p…

➡️ On Amazon:

amazon.it/Sorella-perfezione-G…

➡️ On Libraccio:

libraccio.it/libro/97888334382…

➡️ On Librerie UBIK:

ubiklibri.it/book-978883343828…

➡️ On Libro Co Italia:

libroco.it/dl/Giuseppe-Iannozz…

➡️ On Unilibro:

unilibro.it/libro/iannozzi-giu…

➡️ On Libreria Universitaria:

libreriauniversitaria.it/sorel…

➡️ On Hoepli:

hoepli.it/libro/sorella-di-per…

➡️ On AbeBooks:

abebooks.it/9788833438283/Sore…

➡️ On Punto Einaudi di Brescia:

puntoeinaudibrescia.it/scheda-…

➡️ On Ancora Store:

ancorastore.it/scheda-libro/gi…

➡️ On Librerie Coop:

librerie.coop/libri/9788833438…

⚡⚡⚡The new book trailer is also on my YouTube channel:

youtube.com/shorts/wc59RFhOhgM

⚡⚡⚡You can also buy or order the book directly at your local bookshop.

🎯🎯🎯 And if you feel like it, subscribe to my YouTube channel. Heartfelt thanks to everyone. 😘🌹❤️

(@GiuseppeIannozzi)



The Butlerian Jihad is NOT a warning against AI






Turkey becomes the first country to censor AI chatbot Grok


#AII


Are AI existential risks real—and what should we do about them?


  • There have long been concerns about the existential risks that might be posed by highly capable AI systems, spanning from loss of control to extinction.
  • Some industry leaders believe AI is close to matching or surpassing human intelligence, though some evidence shows improvements in the technology have slowed recently.
  • While such intelligence might be reached and could pose extreme risks, there are more urgent issues and AI harms that should first be addressed, especially as researchers face more limited resources.
#AII





in reply to misk

If only sites would switch to a non-JavaScript option. I block all JavaScript by default (NoScript) on new sites, and Anubis in its current form is cancer, which just leads to me adding these sites to a blocklist and them losing human traffic as a result. Time will tell I guess.


in reply to return2ozma

Since this came right after a post from !theonion@sh.itjust.works on my feed, I honestly thought it was The Onion.


Are a few people ruining the internet for the rest of us?


cross-posted from: lemmy.bestiver.se/post/493495



Mamdani appoints top DNC and Obama adviser in bid to secure Democratic Party establishment support


Is he building links, or is he a sheep in wolf's clothing?

Not A Good Sign






Stubsack: weekly thread for sneers not worth an entire post, week ending 20th July 2025 - awful.systems


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.


(Credit and/or blame to David Gerard for starting this.)

in reply to blakestacey

Sanders why gizmodo.com/bernie-sanders-rev…

Sen. Sanders: I have talked to CEOs. Funny that you mention it. I won’t mention his name, but I’ve just gotten off the phone with one of the leading experts in the world on artificial intelligence, two hours ago.

. . .

Second point: This is not science fiction. There are very, very knowledgeable people—and I just talked to one today—who worry very much that human beings will not be able to control the technology, and that artificial intelligence will in fact dominate our society. We will not be able to control it. It may be able to control us. That’s kind of the doomsday scenario—and there is some concern about that among very knowledgeable people in the industry.


taking a wild guess it's Yudkowsky. "very knowledgeable people" and "many/most experts" is staying on my AI apocalypse bingo sheet.

even among people critical of AI (who don't otherwise talk about it that much), the AI apocalypse angle seems really common and it's frustrating to see it normalized everywhere. though I think I'm more nitpicking than anything because it's not usually their most important issue, and maybe it's useful as a wedge issue just to bring attention to other criticisms about AI? I'm not really familiar with Bernie Sanders' takes on AI or how other politicians talk about this. I don't know if that makes sense, I'm very tired

in reply to blakestacey

Sex pest billionaire Travis Kalanick says AI is great for more than just vibe coding. It's also great for vibe physics.


STORIE AL PASSO: a poetic-performative headphone walk along the Anello di Davide in Bore (Parma), Saturday 19 and Sunday 20 July


STORIE AL PASSO
A poetic-performative silent walk with headphones along the Anello di Davide
Curated by Gabriele Anzaldi, Simone Baroni, Rita Di Leo, Giorgia Favoti
Music and sounds by Gabriele Anzaldi
Production: Fondazione Federico Cornoni
In collaboration with the Municipality of Bore

Part of the Canile Drammatico festival, promoted by Fondazione Federico Cornoni ETS with the support of the Emilia-Romagna Region, the Municipality of Parma, Fondazione Cariparma and Confesercenti Parma, and under the patronage of the Municipality of Bore and the University of Parma.

The festival, dedicated to contemporary theatre for a young audience, arrives in Bore with a project born from research into the local area and from the stories of its inhabitants, which became the dramaturgical basis of the event.
“Storie al passo” is a performative walk along the Anello di Davide, among the beech woods of Monte Carameto, curated by the Foundation's Artistic Committee and created in memory of Federico, a young actor from Parma. A narrative that weaves together collective memory, the Resistance, old trades and emigration.

On Sunday 20 July at 3:30 pm, in the Multimedia Hall of the former Colonia Leoni (via Roma 83), the book “Donne resistenti” by Fausto Ferrari will be presented, with testimonies of partisan women from the mountains between Piacenza and Parma.

On both days, from 10 am to 6 pm, also at the former Colonia Leoni, the project's backstage footage will be screened on a loop, featuring the voices of some of the residents involved: Giuseppe and Valentino Campana, Iole Chiesa, Lorenzo Conti, Marisa Cornoni, Paolo Dondi, Fausto and Gaetano Ferrari, Michele Lalli.

The initiative is part of the FaTiCa a margine project, which links several festivals to bring marginalised communities closer to the theatre.

INFO AND BOOKINGS
Departure: Strada Comunale (loc. Orsi), 10 am and 5 pm – Punctuality required
Route: 3 km, 225 m elevation gain – Duration approx. 1h30
Comfortable clothing – Headphones provided
Bookings: 348-8229334 – organizzazione@fondazionefedericocornoni.it
www.fondazionefedericocornoni.it – FB @Canile drammatico – IG @caniledrammatico_festival



[Technical] Why not Fanout via static files or CDNs in the Fediverse?


Current Fediverse Implementation


From my understanding, the prominent fediverse implementations implement fanout via writing to other instances.

In other words, if user A on instance A makes post A, instance A will write or sync post A in all instances that have followers for user A. So user B on instance B will read post A from instance B.

Why this is Done


From my understanding, this is done to prevent a case where post A goes viral, everyone wants to read it, and instance A's database gets overwhelmed with reads. It also serves to replicate content.

My Question: Why not rely on static files instead of database reads / writes to propagate content?


Instead of the above, if someone follows user A, they can get user A's posts via a static file that contains all of User A's posts. Do the same for everyone you follow.

Reading this file will be a lot less resource intensive than a database read, and with a CDN would be even better.
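
As a purely illustrative sketch of what that could look like (the URL, file layout and field names below are invented, not part of any existing spec), a follower's server or client would just issue an HTTP GET against a static document:

```typescript
// Hypothetical static "all posts" file for user A, and a follower-side fetch of it.
interface StaticPost {
  id: string;
  published: string; // ISO 8601 timestamp
  content: string;
}

interface StaticFeed {
  actor: string;       // e.g. "https://instance-a.example/users/a"
  posts: StaticPost[]; // newest first
}

async function fetchFeed(url: string): Promise<StaticFeed> {
  const res = await fetch(url); // a CDN can sit in front of this URL
  if (!res.ok) throw new Error(`failed to fetch ${url}: ${res.status}`);
  return (await res.json()) as StaticFeed;
}

// Usage (hypothetical URL):
// const feed = await fetchFeed("https://instance-a.example/users/a/posts.json");
```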

Cons


  • posts are less “real time”. Why? Because when post A is made, the static file must be updated (though the fediverse does this already), and user B or instance B must fetch it. User B / instance B do not have the post pushed to them, so the post arrives with a delay depending on how frequently they fetch. But frequent fetches are okay, and it is easier to handle heavy loads from fetches than from database reads.
  • if using a CDN for the static files, there's another delay based on the TTL and invalidation. This should still be small, up to a couple minutes at most.


Pros


  • hosting a fediverse server is more accessible and cheaper, and it could scale better.
  • Federation woes of posts not federating to other instances can potentially be resolved, as the fanout architecture is less complex (no longer necessary to write to dozens or hundreds of instances for a single post).
  • Clients can have greater freedom in implementing how they create news feeds. You don't have to rely on your instance to do it. Instances primarily make content available, and clients can handle creating news feeds, content sorting and filtering (optional), etc.

What are your thoughts on this?

in reply to django

  1. I write a post, and send a request to the server to publish it
  2. The server takes the post and prepends it to the file housing all my posts
  3. Now, when someone requests my posts, they will see my new one

If a CDN is involved, we would have to properly take care of the invalidations and what not. We would have to run a batch process to update the CDN files, so that we are not doing it too often, but doing it every minute or so is still plenty fast for social media use cases.
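
A minimal sketch of those steps on the server side, with the CDN purge batched as described above (the file path and the invalidation call are placeholders; a real CDN exposes its own purge API):

```typescript
import { promises as fs } from "node:fs";
import { randomUUID } from "node:crypto";

// Hypothetical path of the static file that houses all of this user's posts.
const FEED_PATH = "/srv/static/users/a/posts.json";
const pendingInvalidations = new Set<string>();

// Steps 1-2: accept a new post and prepend it to the static file.
async function publishPost(content: string): Promise<void> {
  const raw = await fs
    .readFile(FEED_PATH, "utf8")
    .catch(() => '{"actor":"a","posts":[]}'); // first post ever: start an empty feed
  const feed = JSON.parse(raw) as { actor: string; posts: unknown[] };
  feed.posts.unshift({ id: randomUUID(), published: new Date().toISOString(), content });
  await fs.writeFile(FEED_PATH, JSON.stringify(feed));
  // Step 3 is implicit: the next GET of the static file already sees the new post.
  pendingInvalidations.add(FEED_PATH); // the CDN copy is refreshed later, in batch
}

// Batch process: purge changed files every minute or so instead of on every write.
setInterval(() => {
  for (const path of pendingInvalidations) {
    console.log(`would call the CDN purge API for ${path}`); // placeholder
  }
  pendingInvalidations.clear();
}, 60_000);
```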

I have to emphasize that I am not an expert, so I may be missing a big pitfall here.

in reply to matcha_addict

So I have to constantly check all files from everyone I follow for new entries in order to have a working timeline?
in reply to tofu

Yes, precisely. The existing implementation in the Fediverse does the opposite: everyone you follow has to insert their posts into the feed of everyone that follows them, which has its own issues.
in reply to matcha_addict

But only once. If an account doesn't post/interact for a year, it doesn't cause any traffic. With your approach, I constantly need to pull that account's profile to see if something new showed up.
in reply to tofu

Sure, but constantly having to do it is not really a bad thing, given it is automated and those reads are quite inexpensive compared to a database query. It's a lot easier to handle heavy loads when serving static files.
in reply to matcha_addict

I'm really not sure about that being inexpensive. The files will grow and the list of people to follow usually grows as well. This just doesn't scale well.

I follow 700 people on Mastodon. That's 700 requests every interval. With 100-10000 posts or possibly millions of interactions in each file.

Of course you can do stuff like pagination or something like that. But some people follow 10000 accounts and want to have their timeline updated in short intervals.

Pulling like this is usually used when the author can't send you something directly, and it works in RSS feeds. But most people don't follow hundreds of RSS feeds. Which reminds me that every Mastodon profile offers an RSS feed - you can already do what you described with an RSS reader.

in reply to tofu

Bringing up RSS feeds is actually very good, because although you can paginate or partition your feeds, I have never seen a feed that does that, even when they have decades of history. But if needed, partitioning is an option so you don't have to pull all of a user's posts but only recent ones, or a date/time range.

I would also respectfully disagree that people don't subscribe to 100's of RSS feeds. I would bet most people who consistently use RSS feed readers will have more than 100 feeds, me included.

And last, even if you follow 10,000, yes it would require a lot more time than reading from a single database, but it is still on the order of double-digit seconds at most. If you compare 10,000 static file fetches with 10,000 database writes across different instances, I think the static files would fare better. And that's without mentioning that you are more likely to have to write a lot than read a lot (users with 100k followers are far more common than users with 100k subscriptions).

And just to emphasize, I do agree that double-digit seconds would be quite long for a user's loading time, which is why I would expect to fetch regularly so the user logs onto a pre-made news feed.

in reply to matcha_addict

Sorry, I meant your timeline, where you see other peoples posts.
in reply to django

Oh my bad, I can explain that.

Before I do, one benefit of this method is that your timeline is entirely up to your client. Your instance becomes primarily tasked with making your posts available, and clients have the freedom of implementing the reading and news feed / timeline formation.

Hence, there are a few ways to do this. The best one is probably a mix of those.

Naive approach: fetch posts and build news feed when user requests it


This is not a good approach, but I mention it first because it'll make explaining the next one easier.

  • User opens app or website, thereby requesting their timeline / news feed
  • server fetches list of user's subscriptions and followees
  • for each followee or subscription, server fetches their content via their static file wherever they are hosted
  • server performs whatever filtering and ordering of content they want
  • user sees the result

Cons: loading time for the user may be long; depending on how many subscriptions they have, it could be several seconds. P90 may even be in double digits.
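
For concreteness, here is a sketch of that naive flow, reusing the hypothetical static-feed shape from earlier in the thread (followee URLs, field names and limits are all invented):

```typescript
interface TimelinePost {
  actor: string;
  published: string; // ISO 8601
  content: string;
}

// Naive approach: build the timeline only when the user asks for it.
async function buildTimeline(followeeFeedUrls: string[], limit = 50): Promise<TimelinePost[]> {
  // Fetch every followee's static file in parallel; a failed feed is simply skipped.
  const results = await Promise.allSettled(
    followeeFeedUrls.map(async (url) => {
      const res = await fetch(url);
      if (!res.ok) throw new Error(`${url}: ${res.status}`);
      return (await res.json()) as {
        actor: string;
        posts: { published: string; content: string }[];
      };
    }),
  );

  const posts: TimelinePost[] = [];
  for (const r of results) {
    if (r.status !== "fulfilled") continue;
    for (const p of r.value.posts) posts.push({ actor: r.value.actor, ...p });
  }

  // Filtering and ordering are entirely up to the client; reverse-chronological here.
  posts.sort((a, b) => b.published.localeCompare(a.published));
  return posts.slice(0, limit);
}
```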

Better approach: pre-build user's timeline periodically.


Think of it as a periodic job (hourly, or every 10 min, etc.) which fetches posts in a similar manner as described above, but instead of doing it when the user requests it, it is done in advance.

Pros:
- fast loading time compared to previous solution
- when the job runs, if users on the same instance share a followee or subscription, we don't have to query it twice (This benefit already exists on current fediverse implementations)
Cons: posts aren't real-time, delayed by the batch job frequency.
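
A sketch of this pre-build variant follows; it reuses buildTimeline and TimelinePost from the naive sketch above, and the cache, interval and loadSubscriptions() hook are all placeholders:

```typescript
// Pre-build approach: timelines are assembled on a schedule, not per request.
const timelineCache = new Map<string, TimelinePost[]>(); // userId -> pre-built timeline

async function rebuildTimelines(subscriptions: Map<string, string[]>): Promise<void> {
  // If two users on this instance follow the same feed, fetch it only once per run.
  const feedMemo = new Map<string, Promise<TimelinePost[]>>();
  const fetchOnce = (url: string) => {
    if (!feedMemo.has(url)) feedMemo.set(url, buildTimeline([url], 1000));
    return feedMemo.get(url)!;
  };

  for (const [userId, urls] of subscriptions) {
    const perFeed = await Promise.all(urls.map(fetchOnce));
    const merged = perFeed.flat().sort((a, b) => b.published.localeCompare(a.published));
    timelineCache.set(userId, merged.slice(0, 200));
  }
}

// Run every 10 minutes; when the user opens the app, their pre-made feed is read
// straight out of timelineCache (or a database/file in a real deployment).
// setInterval(() => rebuildTimelines(loadSubscriptions()), 10 * 60_000);
```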

Best approach: hybrid


In this approach, we primarily do the second method, to achieve fast loading time. But to get more up-to-date content, we also simultaneously fetch the latest in the background, and interleave or add the latest posts as the user scrolls.

This way we get both fast initial load times and recent posts.

Surely there's other good approaches. As I said in the beginning, clients have the freedom to implement this however they like.



How a simple mistake ruined my new PC (and my YouTube channel)






sysadministrative procrastination: adding the lines is late-night stuff…


If anyone were ever looking for proof of my absolute laziness, or at any rate of my by-now unchallenged procrastination, they certainly wouldn't have much trouble finding it… between the times I don't make my bed or don't dust my room, or how I always literally leave studying to the day before (that is, precisely today, 14 July, but that's another story), or Duolingo to 11:55 pm, or how so many of my posts get delayed during the day and quite often even disappear, or simply how I always end up in bed 2 hours later than normal, in short… 💀

And yet, even though my existence is nothing but a string of failures, some mistakes are more wrong than others, as they say… What I think is the simplest and most glaring demonstration of my inability to get things done showed up yesterday evening, when I finally decided to fix a source of despair that had been partially gnawing at me: I added a WebManifest to my Shiori instance, so that I can install the site as a PWA on Android from Chromium too, and not only from Firefox (where I instead have my rotten userscript for forcing any site into a PWA)… okay, so what? 😴

Well, this was something that trivially should have been done ages ago… not only because Shiori's native app is a pain (so I don't use it), and the webapp in Firefox just as much (given that Firefox itself is a pain, seeing as it takes about three times as long as Chromium to start and then lags too)… but because all it took was adding one (1) line to my nginx configuration: sub_filter '</head>' '<link rel=\'manifest\' href=\'data:application/json;utf8,{ ... big blob of stuff between name and icons ... }\' /></head>';. That's it, (at least in its simplest form) that was all. 😐
[Screenshot: what I just described, with the nginx file in the editor in the terminal and the Application tab of the Firefox desktop DevTools]
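
(Purely as a made-up illustration of what the blob elided above could look like: it is ordinary Web App Manifest JSON, and the data: URI that sub_filter injects can be assembled like this. Every name and icon path here is invented, not what my config actually contains.)

```typescript
// Hypothetical manifest contents; the real values in the config are elided above.
const manifest = {
  name: "Shiori",                 // made-up
  short_name: "Shiori",
  start_url: "/",
  display: "standalone",
  icons: [
    { src: "/assets/logo-192.png", sizes: "192x192", type: "image/png" }, // made-up paths
    { src: "/assets/logo-512.png", sizes: "512x512", type: "image/png" },
  ],
};

// The value that ends up inside href='...' in the injected <link rel="manifest"> tag.
const dataUri = "data:application/json;utf8," + encodeURIComponent(JSON.stringify(manifest));
console.log(`<link rel='manifest' href='${dataUri}' />`);
```
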
…I mean, let's take a moment to appreciate the situation. I procrastinated for years — I no longer remember how many by now, but definitely too many, considering that when I started using this software I was still in high school and still hosting on the Raspino — over a procedure that amounted to spending 5 minutes copying the links to the icons from the HTML page source, pasting them into one single damn line like that, and dumping it all into an already existing configuration file. All things I've already done in plenty of other cases, mind you, which therefore didn't require me to rack my brains even a little, but, for some reason, damn it all, when I felt like it I didn't remember, and when it was actually needed I couldn't be bothered. 😭

The twist (whose presence, as I say every time, is for me the mark of authenticity of my desperate stories) this time is that I did this simple operation, which I should have done literal years ago, practically the day right after the one on which I released Pignio… software that in itself has nothing to do with this but which, with the next updates, could potentially absorb all the features [that I need] of Shiori, in which case it would be absolutely obvious for me to get rid of a piece of software that would turn out to be completely redundant. (There is actually a reason for this coincidence; this time it wasn't the spirits telling me to do it… there's a more logical sequence which, if it comes to it, I'll go into.) 😾

Just for clarity, anyway: Shiori actually does include a WebManifest, but only since 4-5 months ago, judging by the commits; very little time compared to how long I've had this damned application… and I was about to say that in theory I should therefore already have had the feature by now, but no, because the maintainers of this project are great procrastinators too, and haven't put out a precompiled release since January, and I'm obviously not going to go to the trouble of building from source. Better this way, honestly… otherwise I'd have had to admit that I'm so lazy I haven't updated the software since the day I installed it on the new server, ~2 years ago! (Ok, no, jokes aside, I'm not that lazy… it's actually even worse: I haven't updated since I first installed it, because if I did I would lose access to a vulnerability that I myself discovered and reported to the developers, but which I make use of… if it were patched on my instance, a script I wrote back then would no longer work properly and, needless to say, having to fix that too would annoy me tremendously… I really am beyond saving!!!)

#nginx #pigrizia #procrastinazione #sysadmin #webapps




Microsoft Soars as AI Cloud Boom Drives $595 Price Target








The Media's Pivot to AI Is Not Real and Not Going to Work



Every time "tech" comes up with a journalism "solution," journalists get laid off while the product gets worse. First it was SEO, then Facebook, then Twitter ... you'd think people trained to detect patterns can do better than just hopping on the latest hype that kills traffic.


The Media's Pivot to AI Is Not Real and Not Going to Work


On May 23, we got a very interesting email from Ghost, the service we use to make 404 Media. “Paid subscription started,” the email said, which is the subject line of all of the automated emails we get when someone subscribes to 404 Media. The interesting thing about this email was that the new subscriber had been referred to 404 Media directly from chatgpt.com, meaning the person clicked a link to 404 Media from within a ChatGPT window. It is the first and only time that ChatGPT has ever sent us a paid subscriber.

From what I can tell, ChatGPT.com has sent us 1,600 pageviews since we founded 404 Media nearly two years ago. To give you a sense of where this slots in, this is slightly fewer than the Czech news aggregator novinky.cz, the Hungarian news portal Telex.hu, the Polish news aggregator Wykop.pl, and barely more than the Russian news aggregator Dzen.ru, the paywall jumping website removepaywall.com, and a computer graphics job board called 80.lv. In that same time, Google has sent roughly 3 million visitors, or 187,400 percent more than ChatGPT.

This is really neither here nor there because we have tried to set our website up to block ChatGPT from scraping us, though it is clear this is not always working. But even for sites that don’t block ChatGPT, new research from the internet infrastructure company CloudFlare suggests that OpenAI is crawling 1,500 individual webpages for every one visitor that it is sending to a website. Google traffic has begun to dry up as both Google’s own AI snippets and AI-powered SEO spam have obliterated the business models of many media websites.

This general dynamic—plummeting traffic because of AI snippets, ChatGPT, AI slop, Twitter no workie so good no more—has been called the “traffic apocalypse” and has all but killed some smaller websites and has been blamed by executives for hundreds of layoffs at larger ones.

Despite the fact that generative AI has been a destructive force against their businesses, their industry, and the truth more broadly, media executives still see AI as a business opportunity and a shiny object that they can tell investors and their staffs that they are very bullish on. They have to say this, I guess, because everything else they have tried hasn’t worked, and pretending that they are forward thinking or have any clue what they are doing will perhaps allow a specific type of media executive to squeeze out a few more months of salary.

But pivoting to AI is not a business strategy. Telling journalists they must use AI is not a business strategy. Partnering with AI companies is a business move, but becoming reliant on revenue from tech giants who are creating a machine that duplicates the work you’ve already created is not a smart or sustainable business move, and therefore it is not a smart business strategy. It is true that AI is changing the internet and is threatening journalists and media outlets. But the only AI-related business strategy that makes any sense whatsoever is one where media companies and journalists go to great pains to show their audiences that they are human beings, and that the work they are doing is worth supporting because it is human work that is vital to their audiences. This is something GQ’s editorial director Will Welch recently told New York magazine: “The good news for any digital publisher is that the new game we all have to play is also a sustainable one: You have to build a direct relationship with your core readers,” he said.

Becoming an “AI-first” media company has become a buzzword that execs can point at to explain that their businesses can use AI to become more ‘efficient’ and thus have a chance to become more profitable. Often, but not always, this message comes from executives who are laying off large swaths of their human staff.

In May, Business Insider laid off 21 percent of its workforce. In her layoff letter, Business Insider’s CEO Barbara Peng said “there’s a huge opportunity for companies who harness AI first.” She told the remaining employees there that they are “fully embracing AI,” “we are going all-in on AI,” and said “over 70 percent of Business Insider employees are already using Enterprise ChatGPT regularly (our goal is 100%), and we’re building prompt libraries and sharing everyday use cases that help us work faster, smarter, and better.” She added they are “exploring how AI can boost operations across shared services, helping us scale and operate more efficiently.”

Last year, Hearst Newspapers executives, who operate 78 newspapers nationwide, told the company in an all-hands meeting audio obtained by 404 Media that they are “leaning into [AI] as Hearst overall, the entire corporation.” Examples given in the meeting included using AI for slide decks, a “quiz generation tool” for readers, translations, a tool called Dispatch, which is an email summarization tool, and a tool called “Assembly,” which is “basically a public meeting monitor, transcriber, summarizer, all in one. What it does is it goes into publicly posted meeting videos online, transcribes them automatically, [and] automatically alerts journalists through Slack about what’s going on and links to the transcript.”

The Washington Post and the Los Angeles Times are doing all sorts of fucked up shit that definitely no one wants but are being imposed upon their newsrooms because they are owned by tech billionaires who are tired of losing money. The Washington Post has an AI chatbot and plans to create a Forbes contributor-esque opinion section with an AI writing tool that will assist outside writers. The Los Angeles Times introduced an AI bot that argues with its own writers and has written that the KKK was not so bad, actually. Both outlets have had massive layoffs in recent months.

The New York Times, which is actually doing well, says it is using AI to “create initial drafts of headlines, summaries of Times articles and other text that helps us produce and distribute the news.” Wirecutter is hiring a product director for AI and recently instructed its employees to consider how they can use AI to make their journalism better, New York magazine reported. Kevin Roose, an, uhh, complicated figure in the AI space, said “AI has essentially replaced Google for me for basic questions,” and said that he uses it for “brainstorming.” His Hard Fork colleague Casey Newton said he uses it for “research” and “fact-checking.”

Over at Columbia Journalism Review, a host of journalists and news execs, myself included, wrote about how AI is used in their newsrooms. The responses were all over the place and were occasionally horrifying, and ranged from people saying they were using AI as personal assistants to brainstorming partners to article drafters.

In his largely incoherent screed that shows how terrible he was at managing G/O Media, which took over Deadspin, Kotaku, Jezebel, Gizmodo, and other beloved websites and ran them into the ground at varying speeds, Jim Spanfeller nods at the “both good and perhaps bad” impacts of AI on news. In a truly astounding passage of a notably poorly written letter that manages to say less than nothing, he wrote: “AI is a prime example. It is here to a degree but there are so many more shoes to drop [...] Clearly this technology is already having a profound impact. But so much more is yet to come, both good and perhaps bad depending on where you sit and how well monitored and controlled it is. But one thing to keep in mind, consumers seek out content for many reasons. Certainly, for specific knowledge, which search and search like models satisfy in very effective ways. But also, for insights, enjoyment, entertainment and inspiration.”

At the MediaPost Publishing Insider Conference, a media industry business conference I just went to in New Orleans, there was much chatter about AI. Alice Ting, an executive for the Daily Mail, gave a pretty interesting talk about how the Daily Mail is protecting its journalism from AI scrapers in order to eventually strike deals with AI companies to license their content.

“What many of you have seen is a surge in scraping of our content, a decline in traffic referrals, and an increase in hallucinated outputs that often misrepresent our brands,” Ting said. “Publishers can provide decades of vetted and timestamped content, verified, fact checked, semantically organized, editorially curated. And in addition offer fresh content on an almost daily basis.”

Ting is correct in that several publishers have struck lucrative deals with AI companies, but she also suggested that AI licensing would be a recurring revenue stream for publishers, which would require a series of competing LLMs to want to come in and license the same content over and over again. Many LLMs have already scraped almost everything there is to scrape, it’s not clear that there are going to consistently be new LLMs from companies wanting to pay to train on data that other LLMs have already trained on, and it’s not clear how much money the Daily Mail’s blogs of the day are going to be worth to an AI company on an ongoing basis. Betting that this time, hinging the future of our industry on massive, monopolistic tech giants will work out is the most Lucy with the football thing I can imagine.

There is not much evidence that selling access to LLMs will work out in a recurring way for any publisher, outside of the very largest publishers like, perhaps, the New York Times. Even at the conference, panel moderator Upneet Grover, founder of LH2 Holdings, which owns several smaller blogs, suggested that “a lot of these licensing revenues are not moving the needle, at least from the deals we’ve seen, but there’s this larger threat of more referral traffic being taken away from news publishers [by AI].”
youtube.com/embed/La2R3iIL9BE?…
In my own panel at the conference I made the general argument that I am making in this article, which is that none of this is going to work.

“We’re not just competing against large-scale publications and AI slop, we are competing against the entire rest of the internet. We were publishing articles and AI was scraping and republishing them within five minutes of us publishing them,” I said. “So many publications are leaning into ‘how can we use AI to be more efficient to publish more,’ and it’s not going to work. It’s not going to work because you’re competing against a child in Romania, a child in Bangladesh who is publishing 9,000 articles a day and they don’t care about facts, they don’t care about accuracy, but in an SEO algorithm it’s going to perform and that’s what you’re competing against. You have to compete on quality at this point and you have to find a real human being audience and you need to speak to them directly and treat them as though they are intelligent and not as though you are trying to feed them as much slop as possible.”

It makes sense that journalists and media execs are talking about AI because everyone is talking about AI, and because AI presents a particularly grave threat to the business models of so many media companies. It’s fine to continue to talk about AI. But the point of this article is that “we’re going to lean into AI” is not a business model, and it’s not even a business strategy, any more than pivoting to “video” was a strategy or chasing Facebook Live views was a strategy.

In a harrowing discussion with Axios, in which he excoriates many of the deals publishers have signed with OpenAI and other AI companies, Matthew Prince, the CEO of Cloudflare, said that the AI-driven traffic apocalypse is a nightmare for people who make content online: “If we don’t figure out how to fix this, the internet is going to die,” he said.
youtube.com/embed/H5C9EL3C82Y?…
So AI is destroying traffic, ripping off our work, creating slop that destroys discoverability and further undermines trust, and allowing random people to create news-shaped objects that social media and search algorithms either can’t or don’t care to distinguish from real news. And yet media executives have decided that the only way to compete with this is to make their workers use AI to make content in a slightly more efficient way than they were already doing journalism.

This is not going to work, because “using AI” is not a reporting strategy or a writing strategy, and it’s definitely not a business strategy.

AI is a tool (sorry!) that people who are bad at their jobs will use badly and that people who are good at their jobs will maybe, possibly find some uses for. People who are terrible at their jobs (many executives), will tell their employees that they “need” to use AI, that their jobs depend on it, that they must become more productive, and that becoming an AI-first company is the strategy that will save them from the old failed strategy, which itself was the new strategy after other failed business models.

The only journalism business strategy that works, and that will ever work in a sustainable way, is if you create something of value that people (human beings, not bots) want to read or watch or listen to, and that they cannot find anywhere else. This can mean you’re breaking news, or it can mean that you have a particularly notable voice or personality. It can mean that you’re funny or irreverent or deeply serious or useful. It can mean that you confirm people’s priors in a way that makes them feel good. And you have to be trustworthy, to your audience at least. But basically, to make money doing journalism, you have to publish “content,” relatively often, that people want to consume.

This is not rocket science, and I am of course not the only person to point this out. There have been many, many features about the success of Feed Me, Emily Sundberg’s newsletter about New York, culture, and a bunch of other stuff. As she has pointed out in many interviews, she has been successful because she writes about interesting things and treats her audience like human beings. The places that are succeeding right now are individual writers who have a perspective, news outlets like WIRED that are fearless, publications that have invested in good reporters like The Atlantic, publications that tell you something that AI can’t, and worker owned, journalist-run outlets like us, Defector, Aftermath, Hellgate, Remap, Hearing Things, etc. There are also a host of personality-forward, journalism-adjacent YouTubers, TikTok influencers, and podcasters who have massive, loyal audiences, yet most of the traditional media is utterly allergic to learning anything from them.

There was a short period of time where it was possible to make money by paying human writers—some of them journalists, perhaps—to spam blog posts onto the internet that hit specific keywords, trending topics, or things that would perform well on social media. These were the early days of Gawker, Buzzfeed, VICE, and Vox. But the days of media companies tricking people into reading their articles using SEO or hitting a trending algorithm are over.

They are over because other people are doing it better than them now, and by “better,” I mean, more shamelessly and with reckless abandon. As we have written many times, news outlets are no longer just competing with each other, but with everyone on social media, and Netflix, and YouTube, and TikTok, and all the other people who post things on the internet. They are not just up against the total fracturing of social media, the degrading and enshittification of the discovery mechanisms on the internet, algorithms that artificially ding links to articles, AI snippets and summaries, etc. They are also competing with sophisticated AI slop and spam factories often being run by people on the other side of the world publishing things that look like “news” that is being created on a scale that even the most “efficient” journalist leveraging AI to save some perhaps negligible amount of time cannot ever hope to measure up to.

Every day, I get emails from AI spam influencers who are selling tools that allow slop peddlers to clone any website with one click, automatically generate newsletters about any topic, or generate plausible-seeming articles that are engineered to perform well in a search algorithm. Examples: “Clone any website in 9 seconds with Clonely AI,” “The future of video creation is here—and it’s faceless, seamless & limitless,” “just a straightforward path to earning 6-figures with an AI-powered newsletter that’s working right now.” These people do not care at all about truth or accuracy or our information ecosystem or anything else that a media company or a journalist would theoretically care about. If you want an example of what this looks like, consider the series of “Good Day” newsletters, which are AI generated and are in 355 small towns across America, many of which no longer have newspapers. These businesses are economically viable because they are being run by one person (or a very small team of people) who disproportionately live in low cost of living areas and who have essentially zero overhead.

And so becoming more “efficient” with AI is the wrong thing to do, and it’s the wrong thing to ask any journalist to do. The only thing that media companies can do in order to survive is to lean into their humanity, to teach their journalists how to do stories that cannot be done by AI, and to help young journalists learn the skills needed to do articles that weave together complicated concepts and, again, that focus on our shared human experience, in a way that AI cannot and will never be able to.

AI as buzzword and shiny object has been here for a long time. And I actually do not think AI is fake and sucks (I also don’t really believe that anyone thinks AI is “fake,” because we can see the internet collapsing around us). We report every day on the ways that AI is changing the web, in part because it is being shoved down our throats by big tech companies, spammers, etc. But I think that Princeton’s Arvind Narayanan and Sayash Kapoor are basically correct when they say that AI is “normal technology” that will not change everything but that over time will lead to modest improvements in people’s workflows as they get integrated into existing products or as they help around the edges. We—yes, even you—are using some version of AI, or some tools that have LLMs or machine learning in them in some way shape or form already, even if you hate such tools.

In early 2023, when I was the editor-in-chief of Motherboard, I was asked to put together a presentation for VICE executives about AI and how I thought it would change both our journalism and the business of journalism. I was asked to do this because our team was writing a lot about AI, and there was a sense that the company could do something with AI to make money, or do better journalism, or some combination of those things. There was no sense at the time, at least from what I was told, that VICE was planning to use AI as a pretext for replacing human journalists or cutting costs—it had already entered a cycle where it was constantly laying off journalists—but there was a sense that this was going to be the big new opportunity/threat, a new potential savior for a company that had already created a “virtual office” in Decentraland, a crypto-powered metaverse that last year had 42 daily active users.

I never got to give the presentation, because the executive who asked me to put it together left the company, and the new people either didn’t care or didn’t have time for me to give it. The company went bankrupt almost immediately after this change, and I left VICE soon after to make 404 Media with my co-founders, who also left VICE.

But my message at the time, and my message now two years later, is that AI has already changed our world, and that we have the opportunity to report on the technology as it already exists and is already being used—to justify layoffs, to dehumanize people, to spam the internet, etc. At the time, we had already written 840 articles that were tagged “AI,” which included articles about biased sentencing algorithms, predictive policing, facial recognition, deepfakes, AI romantic relationships, AI-powered spam and scams, etc.

The business opportunity then, as now, was to be an indispensable, very human guide to a technology that people—human beings—are making tons of money off of, using as an excuse to lay off workers, and doing wild shit with. There was no magic strategy in which we could use AI to quadruple our output, replace workers, rise to the top of Google rankings, etc. There was, however, great risk in attempting to do this: “PR NIGHTMARE,” read one of the slides I wrote about the risks of using AI: “CNET plagiarism scandal. Big backlash from artists and writers to generative AI. Copyright issues. Race to the bottom.”

My other thought was that any efficiencies that could be squeezed out of AI in our day-to-day jobs were already being realized by good reporters and video producers at the company. There could be no top-down forced pivot to AI, because research and time-saving uses of AI were already being naturally integrated into our work by smart people, in ways that were totally reasonable and mostly helpful, if not groundbreaking. The AI-as-force-multiplier was already happening, and while, yes, this probably helped the business in some way, it helped in ways that were not then and were never going to be actually perceptible to a company’s bottom line. AI was not a savior then, and it is not a savior now. For journalists and for media companies, there is no real “pivot to AI” that is possible unless that pivot means firing all of the employees and putting out a shittier product (which some companies have called a strategy). This is because the pivot has already occurred, and the business prospects for media companies have gotten worse, not better. If Kevin Roose is using AI so much, in such a new and groundbreaking way, why aren’t his articles noticeably different from what they were before, and why aren’t there far more of them? Where are the formerly middling journalists who are now pumping out incredible articles thanks to efficiencies granted by AI?

To be concrete: Many journalists, including me, at least sometimes use some sort of AI transcription tool for their less sensitive interviews. This saves me many hours; the tools have gotten better (but are still not perfect, absolutely require double checking, and should not be used for sensitive sources or sensitive stories). YouTube’s transcript feature is an incredible reporting tool that has allowed me to do stories that would never have been possible even a few years ago. YouTube’s built-in translations and subtitles, and its transcript tool, are some of the only reasons I was able to do this investigation into Indian AI slop creators; they allowed me to get the gist of what was happening in a given video before we handed the videos to human translators for exact translations. Most podcasts I know of now use Descript, Riverside, or a similar tool to record and edit; these have built-in AI transcription, AI camera switching, and text-to-video editing tools. Most media outlets use the captioning built into Adobe Premiere or CapCut for their vertical videos and their YouTube videos (and then double-check it). If you want to get extremely annoying about it, various machine learning algorithms are in Pro Tools, Audition, CapCut, Premiere, Canva, etc., for things like photo editing, sound leveling, noise reduction, and so on.
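To make that transcript-first triage concrete, here is a minimal sketch of what pulling YouTube captions in bulk can look like. It is illustrative only, not a description of how that investigation was actually done: it assumes the third-party youtube-transcript-api Python package (older releases expose the get_transcript class method used here; newer ones use an instance-based fetch method) and a hypothetical list of video IDs, and the output still has to be read and verified by a human.

```python
# Illustrative sketch only: batch-pull YouTube auto-captions so a human can
# skim them before deciding which videos to send out for proper translation.
# Assumes the third-party `youtube-transcript-api` package
# (pip install youtube-transcript-api); older releases expose the class
# method used below, newer ones use YouTubeTranscriptApi().fetch(...).
from youtube_transcript_api import YouTubeTranscriptApi

# Hypothetical video IDs, stand-ins for whatever clips are being triaged.
VIDEO_IDS = ["dQw4w9WgXcQ"]

for video_id in VIDEO_IDS:
    try:
        # Prefer Hindi captions, fall back to English if none exist.
        segments = YouTubeTranscriptApi.get_transcript(video_id, languages=["hi", "en"])
    except Exception as exc:  # no captions, private video, etc.
        print(f"{video_id}: no transcript available ({exc})")
        continue

    # Each segment is a dict with "text", "start", and "duration" keys.
    text = " ".join(seg["text"] for seg in segments)
    print(f"--- {video_id} ({len(segments)} segments) ---")
    print(text[:500])  # preview only; a human still reads and verifies
```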

There are other journalists who feel very comfortable coding and doing data analysis and analyzing huge sets of documents. There are journalists out there who are already using AI to do some of these tasks and some of the resulting articles are surely good and could not have been done without AI.

But the people doing this well are doing so in a way where they are catching and fixing AI hallucinations, because the stakes for fucking up are so incredibly high. If you are one of the people who is doing this, then, great. I have little interest in policing other people’s writing processes so long as they are not publishing AI fever dreams or plagiarizing, and there are writers I respect who say they have their little chats with ChatGPT to help them organize their thoughts before they do a draft or who have vibecoded their own productivity tools or data analysis tools. But again, that’s not a business model. It’s a tool that has enabled some reporters to do their jobs, and, using their expertise, they have produced good and valuable work. This does not mean that every news outlet or every reporter needs to learn to shove the JFK documents into ChatGPT and have it shit out an investigation.
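For what it’s worth, the pattern the careful practitioners follow, an LLM as a first-pass filter whose every output gets checked against the source documents, looks roughly like the sketch below. This is illustrative only and not anyone’s actual newsroom pipeline: it assumes the openai Python package (version 1.x), an API key in the environment, and a hypothetical local folder of text files, and nothing it flags should be published without a human reading the underlying document.

```python
# Illustrative sketch only: use an LLM as a first-pass filter over a document
# set, keeping the source text alongside every answer so a human can verify it.
# Assumes the `openai` package (>=1.0) and OPENAI_API_KEY in the environment;
# the folder name and the prompt are hypothetical.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

for doc in sorted(Path("documents").glob("*.txt")):
    text = doc.read_text(errors="ignore")[:8000]  # truncate very long files
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Does this document mention payments to contractors? "
                       "Answer YES or NO, then quote the exact sentence.\n\n" + text,
        }],
    )
    answer = resp.choices[0].message.content
    # Any quoted sentence must be checked against the original document by a
    # human before anything is published; hallucinated quotes are common.
    print(f"{doc.name}: {answer}")
```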

I also know that our credibility and the trust of our audience are the only things that separate us from anyone else. That trust is the only “business model” we have and that I am certain works: We trade good, accurate, interesting, human articles for money and attention. Carelessly offloading that trust to an AI is the biggest possible risk factor we could have as a business. Having an article go out where someone goes “Actually, a robot wrote this” is one of the worst possible things that could ever happen to us, and so we have made the brave decision to not do that.

This is part of what is so baffling about the Chicago Sun-Times’ response to the somewhat complicated fiasco of the AI-generated reading list in its summer guide. Under its new owner, Chicago Public Media, the Sun-Times has in recent years spent an incredible amount of time and effort rebuilding the image and goodwill that its previous private equity owners destroyed. And yet in its apology note, Melissa Bell, the CEO of Chicago Public Media, said that more AI is coming: “Chicago Public Media will not back away from experimenting and learning how to properly use AI,” she wrote, adding that the team was working with a fellow paid for by the Lenfest Institute, a nonprofit funded by OpenAI and Microsoft.

Bell does realize what makes the paper stand apart, though: “We must own our humanity,” Bell wrote. “Our humanity makes our work valuable.”

This is something the New York Times’s Roose brought up recently that I thought was quite smart, and yet it is not something he seems to have internalized when he talks about how AI is going to change everything and how its widespread adoption is inevitable and the only path forward: “I wonder if [AI is] going to catalyze some counterreaction,” he said. “I’ve been thinking a lot recently about the slow-food movement and the farm-to-table movement, both of which came up in reaction to fast food. Fast food had a lot going for it—it was cheap, it was plentiful, you could get it in a hurry. But it also opened up a market for a healthier, more artisanal way of doing things. And I wonder if something similar will happen in creative industries—a kind of creative renaissance for things that feel real and human and aren’t just outputs from some A.I. company’s slop machine.”

This has ALREAAAAADDDDYYYYYY HAPPPENEEEEEDDDDDD, and it is quite literally the only path forward for all but perhaps the most gigantic of media companies. There is no reason for an individual journalist or an individual media company to make the fast food of the internet. It’s already being made, by spammers and the AI companies themselves. It is impossible to make it cheaper or better than them, because it is what they exist to do. The actual pivot that is needed is one to humanity. Media companies need to let their journalists be human. And they need to prove why they’re worth reading with every article they do.



in reply to Keineanung

privatising the government for “non-essential” tasks tends to just hurt normal people and enrich the oligarchs

edit: username checks out ;)
