


The European Commission says that France, Spain, Italy, Denmark, and Greece will test a blueprint for an age verification app meant to protect children online


The release of this blueprint launches a pilot phase during which a software solution for age verification will be tested and further customised in collaboration with Member States, online platforms, and end-users. Denmark, France, Greece, Italy and Spain will be the first to adopt the technical solution, with a view to integrating it into their national digital wallets or publishing a customised national age verification app on the app stores. Market players can also take up the software solution and develop it further.


Probe launched into Westminster group’s Israel funding


An official inquiry has been launched after Declassified revealed that an Israeli state-owned weapons firm had funded a group of British MPs.

RUK Advanced Systems Ltd sells weapons including urban combat missiles and “hard kill” torpedoes. But records show it is part of the defence giant Rafael, which is owned by the Israeli government.

Our investigation found the company had paid at least £1,499 to partner with the All-Party Parliamentary Group (APPG) on Defence Technology, which provides “opportunities to network with MPs”. The money was paid directly to the group’s secretariat.

But parliamentary rules say that APPGs should not “accept the services of a secretariat funded directly or indirectly by a foreign government”.




Mamdani appoints top DNC and Obama adviser in bid to secure Democratic Party establishment support


Is he building links, or is he a sheep in wolf's clothing?

Not A Good Sign





Filch Stealer: A new infostealer leveraging old techniques











Stubsack: weekly thread for sneers not worth an entire post, week ending 20th July 2025 - awful.systems


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.


(Credit and/or blame to David Gerard for starting this.)

in reply to blakestacey

Sanders why gizmodo.com/bernie-sanders-rev…

Sen. Sanders: I have talked to CEOs. Funny that you mention it. I won’t mention his name, but I’ve just gotten off the phone with one of the leading experts in the world on artificial intelligence, two hours ago.

. . .

Second point: This is not science fiction. There are very, very knowledgeable people—and I just talked to one today—who worry very much that human beings will not be able to control the technology, and that artificial intelligence will in fact dominate our society. We will not be able to control it. It may be able to control us. That’s kind of the doomsday scenario—and there is some concern about that among very knowledgeable people in the industry.


taking a wild guess it's Yudkowsky. "very knowledgeable people" and "many/most experts" is staying on my AI apocalypse bingo sheet.

even among people critical of AI (who don't otherwise talk about it that much), the AI apocalypse angle seems really common and it's frustrating to see it normalized everywhere. though I think I'm more nitpicking than anything because it's not usually their most important issue, and maybe it's useful as a wedge issue just to bring attention to other criticisms about AI? I'm not really familiar with Bernie Sanders' takes on AI or how other politicians talk about this. I don't know if that makes sense, I'm very tired

in reply to blakestacey

Sex pest billionaire Travis Kalanick says AI is great for more than just vibe coding. It's also great for vibe physics.


STORIE AL PASSO: a poetic-performative headphone walk along the Anello di Davide in Bore (Parma), Saturday 19 and Sunday 20 July


STORIE AL PASSO
A poetic-performative silent walk with headphones along the Anello di Davide
Curated by Gabriele Anzaldi, Simone Baroni, Rita Di Leo, Giorgia Favoti
Music and sounds by Gabriele Anzaldi
Production: Fondazione Federico Cornoni
In collaboration with the Municipality of Bore

Part of the Canile Drammatico festival, promoted by Fondazione Federico Cornoni ETS with support from the Emilia-Romagna Region, the Municipality of Parma, Fondazione Cariparma, and Confesercenti Parma, under the patronage of the Municipality of Bore and the University of Parma.

The festival, dedicated to contemporary theatre for young audiences, arrives in Bore with a project born from research into the local area and from the stories of its inhabitants, which became the dramaturgical basis of the event.
"Storie al passo" is a performative walk along the Anello di Davide, through the beech woods of Monte Carameto, curated by the Foundation's Artistic Committee and created in memory of Federico, a young actor from Parma. A narrative that interweaves collective memory, the Resistance, old trades, and emigration.

On Sunday 20 July at 3:30 PM, in the Multimedia Hall of the former Colonia Leoni (via Roma 83), there will be a presentation of the book "Donne resistenti" by Fausto Ferrari, with testimonies from partisan women of the mountains between Piacenza and Parma.

On both days, from 10 AM to 6 PM, again at the former Colonia Leoni, the project's backstage footage will be screened on a loop, featuring the voices of some of the locals involved: Giuseppe and Valentino Campana, Iole Chiesa, Lorenzo Conti, Marisa Cornoni, Paolo Dondi, Fausto and Gaetano Ferrari, Michele Lalli.

The initiative is part of the FaTiCa a margine project, which links several festivals to bring marginal communities closer to the theatre.

INFO AND BOOKINGS
Departure: Strada Comunale (loc. Orsi), 10 AM and 5 PM – punctuality required
Route: 3 km, 225 m of elevation gain – duration approx. 1h30
Comfortable clothing – headphones provided
Bookings: 348-8229334 – organizzazione@fondazionefedericocornoni.it
www.fondazionefedericocornoni.it – FB @Canile drammatico – IG @caniledrammatico_festival



[Technical] Why not Fanout via static files or CDNs in the Fediverse?


Current Fediverse Implementation


From my understanding, the prominent fediverse implementations implement fanout via writing to other instances.

In other words, if user A on instance A makes post A, instance A will write or sync post A to all instances that have followers of user A. So user B on instance B will read post A from instance B.
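A minimal sketch of that push model in Python, assuming a hypothetical JSON "inbox" endpoint on each remote instance (real ActivityPub delivery adds signatures, retries, and queues):

import json
import urllib.request

def fanout(post: dict, follower_instances: list[str]) -> None:
    # One write per remote instance that hosts at least one follower.
    body = json.dumps(post).encode()
    for base_url in follower_instances:
        req = urllib.request.Request(
            f"{base_url}/inbox",  # illustrative endpoint, not the real protocol
            data=body,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)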

Why this is Done


From my understanding, this is to prevent the case where post A goes viral and everyone wants to read it, overwhelming instance A's database with reads. It also serves to replicate content.

My Question: Why not rely on static files instead of database reads / writes to propagate content?


Instead of the above, if someone follows user A, they can get user A's posts via a static file that contains all of User A's posts. Do the same for everyone you follow.

Reading this file will be a lot less resource intensive than a database read, and with a CDN it would be even better.
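The read side of the proposal could then be as simple as the following sketch, where the /users/<name>/posts.json layout is an assumption for illustration:

import json
import urllib.request

def fetch_posts(instance: str, user: str) -> list[dict]:
    # On the server this is a plain static-file read, not a database query.
    url = f"https://{instance}/users/{user}/posts.json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)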

Cons


  • posts are less "real time". Why? Because when post A is made, the static file must be updated (though the fediverse does this already), and user B or instance B must fetch it. User B / instance B do not have the post pushed to them, so the post arrives with a delay depending on how frequently they fetch. But frequent fetches are okay, and heavy loads are easier to handle when serving static files than when doing database reads.
  • if using a CDN for the static files, there's another delay based on the TTL and invalidation. This should still be small, up to a couple minutes at most.


Pros


  • hosting a fediverse server is more accessible and cheaper, and it could scale better.
  • Federation woes of posts not federating to other instances can potentially be resolved, as the fanout architecture is less complex (it's no longer necessary to write to dozens or hundreds of instances for a single post).
  • Clients can have greater freedom in implementing how they create news feeds. You don't have to rely on your instance to do it. Instances primarily make content available, and clients can handle creating news feeds, content sorting and filtering (optional), etc.

What are your thoughts on this?

in reply to django

  1. I write a post, and send a request to the server to publish it
  2. The server takes the post and prepends it to the file housing all my posts
  3. Now, when someone requests my posts, they will see my new one

If a CDN is involved, we would have to properly take care of the invalidations and whatnot. We would have to run a batch process to update the CDN files, so that we are not doing it too often, but doing it every minute or so is still plenty fast for social media use cases.
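A minimal sketch of steps 1-3, with the file path and post shape as assumptions:

import json
from pathlib import Path

def publish(post: dict, feed_file: Path) -> None:
    # Prepend the new post to the static file holding all of the author's posts.
    posts = json.loads(feed_file.read_text()) if feed_file.exists() else []
    posts.insert(0, post)  # newest first
    feed_file.write_text(json.dumps(posts))
    # With a CDN in front, a batch job would then invalidate or refresh this
    # file's cache entry, e.g. once per minute as described above.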

Have to emphasize that I am not an expert, so I may be missing a big pitfall here.

in reply to matcha_addict

So I have to constantly check all files from everyone I follow for new entries in order to have a working timeline?
in reply to tofu

Yes, precisely. The existing implementation in the Fediverse does the opposite: everyone you follow has to insert their posts into the feed of everyone that follows them, which has its own issues.
in reply to matcha_addict

But only once. If an account doesn't post/interact for a year, it doesn't cause any traffic. With your approach, I constantly need to pull that account's profile to see if something new showed up.
in reply to tofu

Sure, but constantly having to do it is not really a bad thing, given it is automated and those reads are quite inexpensive compared to a database query. It's a lot easier to handle heavy loads when serving static files.
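One reason the polling can stay cheap: static file servers and CDNs generally support HTTP conditional requests, so an unchanged feed costs only a 304 response with no body. A sketch of a poller using If-None-Match (the feed URL scheme is still an assumption):

import urllib.request
from urllib.error import HTTPError

def poll(url: str, etag: str | None) -> tuple[bytes | None, str | None]:
    req = urllib.request.Request(url)
    if etag:
        req.add_header("If-None-Match", etag)
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.read(), resp.headers.get("ETag")
    except HTTPError as e:
        if e.code == 304:
            return None, etag  # nothing new; almost no bytes transferred
        raise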
in reply to matcha_addict

I'm really not sure about that being inexpensive. The files will grow and the list of people to follow usually grows as well. This just doesn't scale well.

I follow 700 people on Mastodon. That's 700 requests every interval. With 100-10000 posts or possibly millions of interactions in each file.

Of course you can do stuff like pagination or something like that. But some people follow 10000 accounts and want to have their timeline updated in short in intervals.

Pulling like this is usually used when the author can't send you something directly, and it works for RSS feeds. But most people don't follow hundreds of RSS feeds. Which reminds me that every Mastodon profile offers an RSS feed - you can already do what you described with an RSS reader.

in reply to tofu

Bringing up RSS feeds is actually very good, because although you can paginate or partition your feeds, I have never seen a feed that does that, even when they have decades of history. But if needed, partitioning is an option, so you don't have to pull all of a user's posts but only recent ones, or a date/time range.

I would also respectfully disagree that people don't subscribe to hundreds of RSS feeds. I would bet most people who consistently use RSS feed readers have more than 100 feeds, me included.

And last, even if you follow 10,000 accounts, yes, it would require a lot more time than reading from a single database, but it is still on the order of double-digit seconds at most. If you compare 10,000 static file fetches with 10,000 database writes across different instances, I think the static files would fare better. And that's not to mention that you are more likely to have to write more than read more (users with 100k followers are far more common than users with 100k subscriptions).

And just to emphasize, I do agree that double-digit seconds would be quite long for a user's loading time, which is why I would expect to fetch regularly, so the user logs onto a pre-made news feed.
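For what it's worth, those fetches can also be issued concurrently rather than one by one, so 10,000 pulls need not cost 10,000 sequential round trips. A sketch, with the worker count as an illustrative choice:

import concurrent.futures
import urllib.request

def fetch(url: str) -> bytes:
    with urllib.request.urlopen(url) as resp:
        return resp.read()

def fetch_all(urls: list[str]) -> list[bytes]:
    # Fetch many static feeds in parallel threads.
    with concurrent.futures.ThreadPoolExecutor(max_workers=100) as pool:
        return list(pool.map(fetch, urls))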

in reply to matcha_addict

Sorry, I meant your timeline, where you see other peoples posts.
in reply to django

Oh my bad, I can explain that.

Before I do, one benefit of this method is that your timeline is entirely up to your client. Your instance becomes primarily tasked with making your posts available, and clients have the freedom of implementing the reading and news feed / timeline formation.

Hence, there are a few ways to do this. The best one is probably a mix of those.

Naive approach: fetch posts and build news feed when user requests it


This is not a good approach, but I mention it first because it'll make explaining the next one easier.

  • User opens app or website, thereby requesting their timeline / news feed
  • server fetches list of user's subscriptions and followees
  • for each followee or subscription, server fetches their content via their static file wherever they are hosted
  • server performs whatever filtering and ordering of content they want
  • user sees the result

Cons: loading time for the user may be long; depending on how many subscriptions they have, it could be several seconds. P90 may even be in the double digits.
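A minimal sketch of this naive flow; the feed URL layout and a created_at field on posts are assumptions:

import json
import urllib.request

def build_timeline(followee_feed_urls: list[str]) -> list[dict]:
    # Fetch every followee's static feed, merge, and sort newest-first.
    posts: list[dict] = []
    for url in followee_feed_urls:
        with urllib.request.urlopen(url) as resp:
            posts.extend(json.load(resp))
    return sorted(posts, key=lambda p: p["created_at"], reverse=True)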

Better approach: pre-build user's timeline periodically.


Think of it like a periodic job (hourly, every 10 min, etc.), which fetches posts in a similar manner as described above, but instead of doing it when the user requests it, it is done in advance.

Pros:
  • fast loading time compared to the previous solution
  • when the job runs, if users on the same instance share a followee or subscription, we don't have to query it twice (this benefit already exists in current fediverse implementations)

Cons: posts aren't real-time; they are delayed by the batch job frequency.
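A minimal sketch of that job, reusing the same naive build and caching its result so the user loads a pre-made feed instantly (interval and field names are illustrative):

import json
import time
import urllib.request

cached_timeline: list[dict] = []

def build_timeline(feed_urls: list[str]) -> list[dict]:
    posts: list[dict] = []
    for url in feed_urls:
        with urllib.request.urlopen(url) as resp:
            posts.extend(json.load(resp))
    return sorted(posts, key=lambda p: p["created_at"], reverse=True)

def run_prebuild_job(feed_urls: list[str], interval_s: int = 600) -> None:
    # Rebuild the cached timeline every 10 minutes; user requests read the cache.
    global cached_timeline
    while True:
        cached_timeline = build_timeline(feed_urls)
        time.sleep(interval_s)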

Best approach: hybrid


In this approach, we primarily do the second method, to achieve fast loading time. But to get more up-to-date content, we also simultaneously fetch the latest in the background, and interleave or add the latest posts as the user scrolls.

This way we get both fast initial load times and recent posts.
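The interleaving step might look like this sketch, assuming posts carry id and created_at fields:

def merge_fresh(cached: list[dict], fresh: list[dict]) -> list[dict]:
    # Slot background-fetched posts into the pre-built timeline, newest first,
    # skipping anything the cache already has.
    seen = {p["id"] for p in cached}
    new_posts = [p for p in fresh if p["id"] not in seen]
    return sorted(new_posts + cached, key=lambda p: p["created_at"], reverse=True)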

Surely there's other good approaches. As I said in the beginning, clients have the freedom to implement this however they like.


in reply to basiclemmon98

Israel wants to make Palestinians so miserable that they choose to "voluntarily migrate".


How a simple mistake ruined my new PC (and my YouTube channel)






sysadmin procrastination: adding the lines is a late-night job…


If anyone were ever looking for proof of my absolute laziness, or at least of my by-now unchallenged procrastination, they certainly wouldn't have much trouble finding it… between the times I don't make my bed or don't dust my room, or how I always literally leave studying to the day before (that is, today, July 14, but that's another story), or wait until 11:55 PM to do Duolingo, or how so many of my posts get delayed during the day and often even disappear, or how I routinely end up in bed 2 hours later than normal, well… 💀

And yet, even though my existence is nothing but a string of failures, some mistakes are more wrong than others, as they say… What I think is the simplest and most glaring demonstration of my inability to get things done showed itself last night, when I finally decided to fix a source of despair that had been partially gnawing at me: I added a WebManifest to my Shiori instance, so that I can install the site as a PWA on Android from Chromium too, and not just from Firefox (where I instead have my rotten userscript to force any site into a PWA)… okay, so what? 😴

Well, this was something that frankly should have been done ages ago… not just because Shiori's native app is a pain (so I don't use it), and the webapp in Firefox likewise (since Firefox itself is a pain, taking roughly three times as long as Chromium to start and then lagging on top of that)… but because all it took was adding one (1) line to my nginx configuration. sub_filter '</head>' '<link rel=\'manifest\' href=\'data:application/json;utf8,{ ... big blob of stuff with name and icons ... }\' /></head>';. That's it, (at least in its simplest form) that was all. 😐
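Laid out as a config block, the directive looks roughly like this; the proxy_pass upstream and the data: URI payload are placeholders for the real setup and for the name-and-icons blob elided above (sub_filter requires nginx built with the sub module):

location / {
    proxy_pass http://127.0.0.1:8080;  # placeholder for the Shiori backend
    # Inject a WebManifest link into every served page:
    sub_filter '</head>' '<link rel=\'manifest\' href=\'data:application/json;utf8,{"name":"Shiori","icons":[]}\' /></head>';
    sub_filter_once on;
}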
[Screenshot: the nginx config file open in a terminal editor, and the Application tab of Firefox desktop's DevTools]
…I mean, let's take a moment to appreciate the situation. I procrastinated for years (I no longer remember how many by now, but definitely too many, considering that when I started using this software I was still in high school and still hosting on the Raspino) on a procedure that amounted to spending 5 minutes copying the icon links from the HTML page source, pasting them into a single fucking line like that, and dumping it all into an already-existing config file. All things I've already done in plenty of other cases, mind you, which therefore didn't require me to rack my brain even a little, but, for some reason, for fuck's sake, when I felt like it I didn't remember, and when it was needed I couldn't be bothered. 😭

The irony (whose presence, as I say every time, is with me the constant mark of authenticity of my tales of despair) this time is that I performed this simple operation, which I should have done literal years ago, practically the day right after the one on which I released Pignio… software that in itself has nothing to do with this, but which, with the next updates, could potentially absorb all the features [that I need] of Shiori, in which case it would be absolutely obvious for me to get rid of a piece of software that would turn out completely redundant. (There is actually a reason for this coincidence; this time it wasn't the spirits telling me to do it… there's a more logical sequence that, if it comes to it, I'll go into.) 😾

Just for clarity, though: Shiori actually does include a WebManifest, but only since 4-5 months ago, judging by the commits; a tiny amount of time compared to how long I've had this damned application… and I was about to say that in theory I should therefore have had the feature by now, but no, because this project's maintainers are great procrastinators too, and haven't put out a precompiled release since January, and I'm obviously not going to bother compiling from source. Better this way, honestly… otherwise I'd have had to admit I'm so lazy that I haven't updated the software since the day I installed it on the new server, ~2 years ago! (Okay, no, jokes aside, I'm not that lazy… it's actually worse: I haven't updated since I first installed it, because if I did I'd lose access to a vulnerability that I myself discovered and reported to the developers, but which I make use of… if it were patched on my instance, a script I made back then would no longer work properly and, needless to say, having to fix that too would annoy me tremendously… I'm truly beyond saving!!!)

#nginx #pigrizia #procrastinazione #sysadmin #webapps




Microsoft Soars as AI Cloud Boom Drives $595 Price Target








The Media's Pivot to AI Is Not Real and Not Going to Work




Every time "tech" comes up with a journalism "solution," journalists get laid off while the product gets worse. First it was SEO, then Facebook, then Twitter ... you'd think people trained to detect patterns can do better than just hopping on the latest hype that kills traffic.


The Media's Pivot to AI Is Not Real and Not Going to Work


On May 23, we got a very interesting email from Ghost, the service we use to make 404 Media. “Paid subscription started,” the email said, which is the subject line of all of the automated emails we get when someone subscribes to 404 Media. The interesting thing about this email was that the new subscriber had been referred to 404 Media directly from chatgpt.com, meaning the person clicked a link to 404 Media from within a ChatGPT window. It is the first and only time that ChatGPT has ever sent us a paid subscriber.

From what I can tell, ChatGPT.com has sent us 1,600 pageviews since we founded 404 Media nearly two years ago. To give you a sense of where this slots in, this is slightly fewer than the Czech news aggregator novinky.cz, the Hungarian news portal Telex.hu, the Polish news aggregator Wykop.pl, and barely more than the Russian news aggregator Dzen.ru, the paywall jumping website removepaywall.com, and a computer graphics job board called 80.lv. In that same time, Google has sent roughly 3 million visitors, or 187,400 percent more than ChatGPT.

This is really neither here nor there because we have tried to set our website up to block ChatGPT from scraping us, though it is clear this is not always working. But even for sites that don’t block ChatGPT, new research from the internet infrastructure company Cloudflare suggests that OpenAI is crawling 1,500 individual webpages for every one visitor that it is sending to a website. Google traffic has begun to dry up as both Google’s own AI snippets and AI-powered SEO spam have obliterated the business models of many media websites.

This general dynamic—plummeting traffic because of AI snippets, ChatGPT, AI slop, Twitter no workie so good no more—has been called the “traffic apocalypse” and has all but killed some smaller websites and has been blamed by executives for hundreds of layoffs at larger ones.

Despite the fact that generative AI has been a destructive force against their businesses, their industry, and the truth more broadly, media executives still see AI as a business opportunity and a shiny object that they can tell investors and their staffs that they are very bullish on. They have to say this, I guess, because everything else they have tried hasn’t worked, and pretending that they are forward thinking or have any clue what they are doing will perhaps allow a specific type of media executive to squeeze out a few more months of salary.

But pivoting to AI is not a business strategy. Telling journalists they must use AI is not a business strategy. Partnering with AI companies is a business move, but becoming reliant on revenue from tech giants who are creating a machine that duplicates the work you’ve already created is not a smart or sustainable business move, and therefore it is not a smart business strategy. It is true that AI is changing the internet and is threatening journalists and media outlets. But the only AI-related business strategy that makes any sense whatsoever is one where media companies and journalists go to great pains to show their audiences that they are human beings, and that the work they are doing is worth supporting because it is human work that is vital to their audiences. This is something GQ’s editorial director Will Welch recently told New York magazine: “The good news for any digital publisher is that the new game we all have to play is also a sustainable one: You have to build a direct relationship with your core readers,” he said.

Becoming an “AI-first” media company has become a buzzword that execs can point at to explain that their businesses can use AI to become more ‘efficient’ and thus have a chance to become more profitable. Often, but not always, this message comes from executives who are laying off large swaths of their human staff.

In May, Business Insider laid off 21 percent of its workforce. In her layoff letter, Business Insider’s CEO Barbara Peng said “there’s a huge opportunity for companies who harness AI first.” She told the remaining employees there that they are “fully embracing AI,” “we are going all-in on AI,” and said “over 70 percent of Business Insider employees are already using Enterprise ChatGPT regularly (our goal is 100%), and we’re building prompt libraries and sharing everyday use cases that help us work faster, smarter, and better.” She added they are “exploring how AI can boost operations across shared services, helping us scale and operate more efficiently.”

Last year, Hearst Newspapers executives, who operate 78 newspapers nationwide, told the company in an all-hands meeting audio obtained by 404 Media that they are “leaning into [AI] as Hearst overall, the entire corporation.” Examples given in the meeting included using AI for slide decks, a “quiz generation tool” for readers, translations, a tool called Dispatch, which is an email summarization tool, and a tool called “Assembly,” which is “basically a public meeting monitor, transcriber, summarizer, all in one. What it does is it goes into publicly posted meeting videos online, transcribes them automatically, [and] automatically alerts journalists through Slack about what’s going on and links to the transcript.”

The Washington Post and the Los Angeles Times are doing all sorts of fucked up shit that definitely no one wants but are being imposed upon their newsrooms because they are owned by tech billionaires who are tired of losing money. The Washington Post has an AI chatbot and plans to create a Forbes contributor-esque opinion section with an AI writing tool that will assist outside writers. The Los Angeles Times introduced an AI bot that argues with its own writers and has written that the KKK was not so bad, actually. Both outlets have had massive layoffs in recent months.

The New York Times, which is actually doing well, says it is using AI to “create initial drafts of headlines, summaries of Times articles and other text that helps us produce and distribute the news.” Wirecutter is hiring a product director for AI and recently instructed its employees to consider how they can use AI to make their journalism better, New York magazine reported. Kevin Roose, an, uhh, complicated figure in the AI space, said “AI has essentially replaced Google for me for basic questions,” and said that he uses it for “brainstorming.” His Hard Fork colleague Casey Newton said he uses it for “research” and “fact-checking.”

Over at Columbia Journalism Review, a host of journalists and news execs, myself included, wrote about how AI is used in their newsrooms. The responses were all over the place and were occasionally horrifying, and ranged from people saying they were using AI as personal assistants to brainstorming partners to article drafters.

In his largely incoherent screed that shows how terrible he was at managing G/O Media, which took over Deadspin, Kotaku, Jezebel, Gizmodo, and other beloved websites and ran them into the ground at varying speeds, Jim Spanfeller nods at the “both good and perhaps bad” impacts of AI on news. In a truly astounding passage of a notably poorly written letter that manages to say less than nothing, he wrote: “AI is a prime example. It is here to a degree but there are so many more shoes to drop [...] Clearly this technology is already having a profound impact. But so much more is yet to come, both good and perhaps bad depending on where you sit and how well monitored and controlled it is. But one thing to keep in mind, consumers seek out content for many reasons. Certainly, for specific knowledge, which search and search like models satisfy in very effective ways. But also, for insights, enjoyment, entertainment and inspiration.”

At the MediaPost Publishing Insider Conference, a media industry business conference I just went to in New Orleans, there was much chatter about AI. Alice Ting, an executive for the Daily Mail, gave a pretty interesting talk about how the Daily Mail is protecting its journalism from AI scrapers in order to eventually strike deals with AI companies to license their content.

“What many of you have seen is a surge in scraping of our content, a decline in traffic referrals, and an increase in hallucinated outputs that often misrepresent our brands,” Ting said. “Publishers can provide decades of vetted and timestamped content, verified, fact checked, semantically organized, editorially curated. And in addition offer fresh content on an almost daily basis.”

Ting is correct in that several publishers have struck lucrative deals with AI companies, but she also suggested that AI licensing would be a recurring revenue stream for publishers, which would require a series of competing LLMs to want to come in and license the same content over and over again. Many LLMs have already scraped almost everything there is to scrape, it’s not clear that there are going to consistently be new LLMs from companies wanting to pay to train on data that other LLMs have already trained on, and it’s not clear how much money the Daily Mail’s blogs of the day are going to be worth to an AI company on an ongoing basis. Betting that this time, hinging the future of our industry on massive, monopolistic tech giants will work out is the most Lucy with the football thing I can imagine.

There is not much evidence that selling access to LLMs will work out in a recurring way for any publisher, outside of the very largest publishers like, perhaps, the New York Times. Even at the conference, panel moderator Upneet Grover, founder of LH2 Holdings, which owns several smaller blogs, suggested that “a lot of these licensing revenues are not moving the needle, at least from the deals we’ve seen, but there’s this larger threat of more referral traffic being taken away from news publishers [by AI].”
In my own panel at the conference I made the general argument that I am making in this article, which is that none of this is going to work.

“We’re not just competing against large-scale publications and AI slop, we are competing against the entire rest of the internet. We were publishing articles and AI was scraping and republishing them within five minutes of us publishing them,” I said. “So many publications are leaning into ‘how can we use AI to be more efficient to publish more,’ and it’s not going to work. It’s not going to work because you’re competing against a child in Romania, a child in Bangladesh who is publishing 9,000 articles a day and they don’t care about facts, they don’t care about accuracy, but in an SEO algorithm it’s going to perform and that’s what you’re competing against. You have to compete on quality at this point and you have to find a real human being audience and you need to speak to them directly and treat them as though they are intelligent and not as though you are trying to feed them as much slop as possible.”

It makes sense that journalists and media execs are talking about AI because everyone is talking about AI, and because AI presents a particularly grave threat to the business models of so many media companies. It’s fine to continue to talk about AI. But the point of this article is that “we’re going to lean into AI” is not a business model, and it’s not even a business strategy, any more than pivoting to “video” was a strategy or chasing Facebook Live views was a strategy.

In a harrowing discussion with Axios, in which he excoriates many of the deals publishers have signed with OpenAI and other AI companies, Matthew Prince, the CEO of Cloudflare, said that the AI-driven traffic apocalypse is a nightmare for people who make content online: “If we don’t figure out how to fix this, the internet is going to die,” he said.
So AI is destroying traffic, ripping off our work, creating slop that destroys discoverability and further undermines trust, and allowing random people to create news-shaped objects that social media and search algorithms either can’t or don’t care to distinguish from real news. And yet media executives have decided that the only way to compete with this is to make their workers use AI to make content in a slightly more efficient way than they were already doing journalism.

This is not going to work, because “using AI” is not a reporting strategy or a writing strategy, and it’s definitely not a business strategy.

AI is a tool (sorry!) that people who are bad at their jobs will use badly and that people who are good at their jobs will maybe, possibly find some uses for. People who are terrible at their jobs (many executives), will tell their employees that they “need” to use AI, that their jobs depend on it, that they must become more productive, and that becoming an AI-first company is the strategy that will save them from the old failed strategy, which itself was the new strategy after other failed business models.

The only journalism business strategy that works, and that will ever work in a sustainable way, is if you create something of value that people (human beings, not bots) want to read or watch or listen to, and that they cannot find anywhere else. This can mean you’re breaking news, or it can mean that you have a particularly notable voice or personality. It can mean that you’re funny or irreverent or deeply serious or useful. It can mean that you confirm people’s priors in a way that makes them feel good. And you have to be trustworthy, to your audience at least. But basically, to make money doing journalism, you have to publish “content,” relatively often, that people want to consume.

This is not rocket science, and I am of course not the only person to point this out. There have been many, many features about the success of Feed Me, Emily Sundberg’s newsletter about New York, culture, and a bunch of other stuff. As she has pointed out in many interviews, she has been successful because she writes about interesting things and treats her audience like human beings. The places that are succeeding right now are individual writers who have a perspective, news outlets like WIRED that are fearless, publications that have invested in good reporters like The Atlantic, publications that tell you something that AI can’t, and worker owned, journalist-run outlets like us, Defector, Aftermath, Hellgate, Remap, Hearing Things, etc. There are also a host of personality-forward, journalism-adjacent YouTubers, TikTok influencers, and podcasters who have massive, loyal audiences, yet most of the traditional media is utterly allergic to learning anything from them.

There was a short period of time where it was possible to make money by paying human writers—some of them journalists, perhaps—to spam blog posts onto the internet that hit specific keywords, trending topics, or things that would perform well on social media. These were the early days of Gawker, Buzzfeed, VICE, and Vox. But the days of media companies tricking people into reading their articles using SEO or hitting a trending algorithm are over.

They are over because other people are doing it better than them now, and by “better,” I mean, more shamelessly and with reckless abandon. As we have written many times, news outlets are no longer just competing with each other, but with everyone on social media, and Netflix, and YouTube, and TikTok, and all the other people who post things on the internet. They are not just up against the total fracturing of social media, the degrading and enshittification of the discovery mechanisms on the internet, algorithms that artificially ding links to articles, AI snippets and summaries, etc. They are also competing with sophisticated AI slop and spam factories often being run by people on the other side of the world publishing things that look like “news” that is being created on a scale that even the most “efficient” journalist leveraging AI to save some perhaps negligible amount of time cannot ever hope to measure up to.

Every day, I get emails from AI spam influencers who are selling tools that allow slop peddlers to clone any website with one click, automatically generate newsletters about any topic, or generate plausible-seeming articles that are engineered to perform well in a search algorithm. Examples: “Clone any website in 9 seconds with Clonely AI,” “The future of video creation is here—and it’s faceless, seamless & limitless,” “just a straightforward path to earning 6-figures with an AI-powered newsletter that’s working right now.” These people do not care at all about truth or accuracy or our information ecosystem or anything else that a media company or a journalist would theoretically care about. If you want an example of what this looks like, consider the series of “Good Day” newsletters, which are AI-generated and exist in 355 small towns across America, many of which no longer have newspapers. These businesses are economically viable because they are being run by one person (or a very small team of people) who disproportionately live in low cost of living areas and who have essentially zero overhead.

And so becoming more “efficient” with AI is the wrong thing to do, and it’s the wrong thing to ask any journalist to do. The only thing that media companies can do in order to survive is to lean into their humanity, to teach their journalists how to do stories that cannot be done by AI, and to help young journalists learn the skills needed to do articles that weave together complicated concepts and, again, that focus on our shared human experience, in a way that AI cannot and will never be able to.

AI as buzzword and shiny object has been here for a long time. And I actually do not think AI is fake and sucks (I also don’t really believe that anyone thinks AI is “fake,” because we can see the internet collapsing around us). We report every day on the ways that AI is changing the web, in part because it is being shoved down our throats by big tech companies, spammers, etc. But I think that Princeton’s Arvind Narayanan and Sayash Kapoor are basically correct when they say that AI is “normal technology” that will not change everything but that over time will lead to modest improvements in people’s workflows as they get integrated into existing products or as they help around the edges. We—yes, even you—are using some version of AI, or some tools that have LLMs or machine learning in them in some way shape or form already, even if you hate such tools.

In early 2023, when I was the editor-in-chief of Motherboard, I was asked to put together a presentation for VICE executives about AI, and how I thought it would change both our journalism and the business of journalism. The reason I was asked to do this was because our team was writing a lot about AI, and there was a sense that the company could do something with AI to make money, or do better journalism, or some combination of those things. There was no sense or thought at the time, at least from what I was told, that VICE was planning to use AI as a pretext for replacing human journalists or cutting costs—it had already entered a cycle where it was constantly laying off journalists—but there was a sense that this was going to be the big new opportunity/threat, a new potential savior for a company that had already created a “virtual office” in Decentraland, a crypto-powered metaverse that last year had 42 daily active users.

I never got to give the presentation, because the executive who asked me to put it together left the company, and the new people either didn’t care or didn’t have time for me to give it. The company went bankrupt almost immediately after this change, and I left VICE soon after to make 404 Media with my co-founders, who also left VICE.

But my message at the time, and my message now two years later, is that AI has already changed our world, and that we have the opportunity to report on the technology as it already exists and is already being used—to justify layoffs, to dehumanize people, to spam the internet, etc. At the time, we had already written 840 articles that were tagged “AI,” which included articles about biased sentencing algorithms, predictive policing, facial recognition, deepfakes, AI romantic relationships, AI-powered spam and scams, etc.

The business opportunity then, as now, was to be an indispensable, very human guide to a technology that people—human beings—are making tons of money off of, using as an excuse to lay off workers, and are doing wild shit with. There was no magic strategy in which we could use AI to quadruple our output, replace workers, rise to the top of Google rankings, etc. There was, however, great risk in attempting to do this: “PR NIGHTMARE,” one of my slides about the risks of using AI I wrote said: “CNET plagiarism scandal. Big backlash from artists and writers to generative AI. Copyright issues. Race to the bottom.”

My other thought was that any efficiencies that could be squeezed out of AI, in our day-to-day jobs, were already being done so by good reporters and video producers at the company. There could be no top-down forced pivot to AI, because research and time-saving uses of AI were already being naturally integrated into our work by people who were smart in ways that were totally reasonable and mostly helpful, if not groundbreaking. The AI-as-force-multiplier was already happening, and while, yes, this probably helped the business in some way, it helped in ways that were not then and were never going to be actually perceptible to a company’s bottom line. AI was not a savior then, and it is not a savior now. For journalists and for media companies, there is no real “pivot to AI” that is possible unless that pivot means firing all of the employees and putting out a shittier product (which some companies have called a strategy). This is because the pivot has already occurred and the business prospects for media companies have gotten worse, not better. If Kevin Roose is using AI so much, in such a new and groundbreaking way, why aren’t his articles noticeably different than they ever were before, or why aren’t there way more of them than there were before? Where are the journalists who were formerly middling who are now pumping out incredible articles thanks to efficiencies granted by AI?

To be concrete: Many journalists, including me, at least sometimes use some sort of AI transcription tool for some of their less sensitive interviews. This saves me many hours, the tools have gotten better (but are still not perfect, and absolutely require double checking and should not be used for sensitive sources or sensitive stories). YouTube’s transcript feature is an incredible reporting tool that has allowed me to do stories that would have never been possible even a few years ago. YouTube’s built-in translations and subtitles, and its transcript tool are some of the only reasons that I was able to do this investigation into Indian AI slop creators, which allowed me to get the gist of what was happening in a given video before we handed them to human translators to get exact translations. Most podcasts I know of now use Descript, Riverside, or a similar tool to record and edit their podcasts; these have built-in AI transcription tools, built-in AI camera switching, and built-in text-to-video editing tools. Most media outlets use captioning that is built into Adobe Premiere or CapCut for their vertical videos and their YouTube videos (and then double check them). If you want to get extremely annoying about it, various machine learning algorithms are in ProTools, Audition, CapCut, Premiere, Canva, etc for things like photo editing, sound leveling, noise reduction, etc.

There are other journalists who feel very comfortable coding and doing data analysis and analyzing huge sets of documents. There are journalists out there who are already using AI to do some of these tasks and some of the resulting articles are surely good and could not have been done without AI.

But the people doing this well are doing so in a way where they are catching and fixing AI hallucinations, because the stakes for fucking up are so incredibly high. If you are one of the people who is doing this, then, great. I have little interest in policing other people’s writing processes so long as they are not publishing AI fever dreams or plagiarizing, and there are writers I respect who say they have their little chats with ChatGPT to help them organize their thoughts before they do a draft or who have vibecoded their own productivity tools or data analysis tools. But again, that’s not a business model. It’s a tool that has enabled some reporters to do their jobs, and, using their expertise, they have produced good and valuable work. This does not mean that every news outlet or every reporter needs to learn to shove the JFK documents into ChatGPT and have it shit out an investigation.

I also know that our credibility and the trust of our audience is the only thing that separates us from anyone else. It is the only “business model” that we have and that I am certain works: We trade good, accurate, interesting, human articles for money and attention. The risks of offloading that trust to an AI in a careless way is the biggest possible risk factor that we could have as a business. Having an article go out where someone goes “Actually, a robot wrote this,” is one of the worst possible things that could ever happen to us, and so we have made the brave decision to not do that.

This is part of what is so baffling about the Chicago Sun-Times’ response to its somewhat complicated summer guide AI-generated reading list fiasco. Under its new owners, Chicago Public Media, the Sun-Times has in recent years spent an incredible amount of time and effort rebuilding the image and good will that its previous private equity owners destroyed. And yet in its apology note, Melissa Bell, the CEO of Chicago Public Media, said that more AI is coming: “Chicago Public Media will not back away from experimenting and learning how to properly use AI,” she wrote, adding that the team was working with a fellow paid for by the Lenfest Institute, a nonprofit funded by OpenAI and Microsoft.

Bell does realize what makes the paper stand apart, though: “We must own our humanity,” Bell wrote. “Our humanity makes our work valuable.”

This is something that the New York Times’s Roose recently brought up that I thought was quite smart and yet is not something that he seems to have internalized when talking about how AI is going to change everything and that its widespread adoption is inevitable and the only path forward: “I wonder if [AI is] going to catalyze some counterreaction,” he said. “I’ve been thinking a lot recently about the slow-food movement and the farm-to-table movement, both of which came up in reaction to fast food. Fast food had a lot going for it—it was cheap, it was plentiful, you could get it in a hurry. But it also opened up a market for a healthier, more artisanal way of doing things. And I wonder if something similar will happen in creative industries—a kind of creative renaissance for things that feel real and human and aren’t just outputs from some A.I. company’s slop machine.”

This has ALREAAAAADDDDYYYYYY HAPPPENEEEEEDDDDDD, and it is quite literally the only path forward for all but perhaps the most gigantic of media companies. There is no reason for an individual journalist or an individual media company to make the fast food of the internet. It’s already being made, by spammers and the AI companies themselves. It is impossible to make it cheaper or better than them, because it is what they exist to do. The actual pivot that is needed is one to humanity. Media companies need to let their journalists be human. And they need to prove why they’re worth reading with every article they do.




Life found in underwater brine lakes



in reply to Keineanung

privatising the government's "non-essential" tasks tends to just hurt normal people and enrich the oligarchs

edit: username checks out ;)



[2025] Event suggestions / feedback 💜


Canvas 2025 has ended!

Tip the staff: tips.sc07.com/ 💜

Leave feedback and recommendations for this year's Canvas or any events you'd like to see

in reply to grant 🍞

If there are controversial pixel wars (see the whole Hungary saga), maybe make a subchat on the Matrix for that



Can I embed/crosspost a Mastodon post in a PieFed post?


I'd like, if possible, to embed the complete content of a Mastodon post in a PieFed post.

I tried by posting the URL in a "Link" type post. See piefed.social/post/1040253

That "sucked in" the image (at least, a thumbnail of the image) from the post but nothing else.






Italy weather: rising heat and thunderstorms on the way between the Northeast and the Adriatic | Meteo POP




in reply to YTG123

That's fine, move along. No need to crap on the hard work of the OSS people that work on anything.
in reply to boaratio

I'll have you notice that there's also a gigachad in the meme, not just Kirk

in reply to Davriellelouna

The worst part is their châtlet joke is actually good 😭.

I hope someone tags over all this.



Scientists make game-changing breakthrough that could slash costs of solar panels: 'Has the potential to contribute to the energy transition'


cross-posted from: slrpnk.net/post/24690127

Solar energy experts in Germany are putting sun-catching cells under the magnifying glass with astounding results, according to multiple reports.

The Fraunhofer Institute for Solar Energy Systems team is perfecting the use of lenses to concentrate sunlight onto solar panels, reducing size and costs while increasing performance, Interesting Engineering and PV Magazine reported.

The "technology has the potential to contribute to the energy transition, facilitating the shift toward more sustainable and renewable energy sources by combining minimal carbon footprint and energy demand with low levelized cost of electricity," the researchers wrote in a study published by the IEEE Journal of Photovoltaics.

The sun-catcher is called a micro-concentrating photovoltaic, or CPV, cell. The lens makes it different from standard solar panels that convert sunlight to energy with average efficiency rates around 20%, per MarketWatch. Fraunhofer's improved CPV cell has an astounding 36% rate in ideal conditions and is made with lower-cost parts. It cuts semiconductor materials "by a factor of 1,300 and reduces module areas by 30% compared to current state-of-the-art CPV systems," per IE.

Unknown parent

lemmy - Link to the original
themurphy

It does. Also seems weird nobody thought of a magnifying glass before.

But it's also the beauty of science. Now somebody else has thought about it, and they might work harder to fix the next problem: heat.

If that gets better now, solar panels will increase in output even more. There are so many technologies going into one product, and each field has its own experts.

I'm excited.

Unknown parent

This was my first question too! I thought heat makes them wear out faster.


in reply to nebulaone

Space stopped making sense,

walls laughed,

directions stopped directioning,

the endless maze of hallways became the process of living.



It’s Not WordPress. It’s the Plugins.


After managing hundreds of WordPress sites over the years, one thing is clear: the core is solid – it's the outdated, poorly written plugins that open the doors to attacks. At OSDay 2025, I attended a talk that confirmed this and shed light on a massive bug bounty hunt.

One of the reasons I’m always so happy to attend conferences and technical events (the real ones – not the flashy, sponsor-driven ones designed just to sell products or services) is because I get to meet amazing people and always come away having learned something new.

I’ve been using WordPress since 2006 and have been managing hundreds of installations from a sysadmin perspective. Over time, I’ve noticed a clear pattern: most hacks and compromises happen through plugins or outdated installations. And often, these installations (and plugins) become outdated because they’ve been patched together so messily that updating them becomes nearly impossible – especially when the PHP version changes.

In March 2025, I attended a fantastic conference: OSDay 2025. I gave a talk on why I believe it makes perfect sense to consider the BSDs in 2025, but many of the other talks were truly eye-opening.

To mark the launch of the BSD Cafe Journal, I’d like to share the link to a particularly interesting talk by Maciek Palmowski: “How we closed almost 1k plugins in a month — the biggest WordPress bug bounty hunt.”

What struck me right away was how much his analysis of WordPress security aligned with what I’ve seen over the years: WordPress, out of the box, is reasonably secure. It’s the plugins – often old, unmaintained, or poorly written – that make it vulnerable.

I highly recommend watching his talk. It’s definitely worth your time.

youtube.com/watch?v=Y3HsjvRAof…


Announcing The BSD Cafe Journal!


Dear friends of the BSD Cafe,

This idea has been in my mind since the very beginning of this adventure, almost two years ago. Over time, several people have suggested it. But until recently, I felt the timing just wasn’t right — for many reasons. Today, I believe it finally is.

So I’m happy to announce a new service: The BSD Cafe Journal.


What is The BSD Cafe Journal?


At first, I thought I’d use BSSG for it (I even added multi-author support with this in mind), but in the end, it didn’t feel like the right tool for the job.

The idea is to create a multi-author space, with content published on a fairly regular basis. A reference point for news, updates, tutorials, technical articles — a place to inform and connect.

Just like people in Italy used to stop by cafés to read the newspaper and chat about the day’s news, the BSD Cafe Journal aims to be a space for reading, sharing, and staying informed — all in the spirit of the BSD Cafe.


What it’s Not


  • It’s not here to replace personal blogs, or excellent newsletters like Vermaden’s.
  • It’s not an aggregator.

What it Is


  • A place where authors can write original content.
  • A space to share links to posts on their own blogs or elsewhere.
  • A platform to publish guides, offer insights, or dive into technical explanations.

Our Guiding Principles


The guiding principles are the same as always: positivity, constructive discussion, promoting BSDs and open source in general.

  • No hype: Sharing a cool new service is fine, posting non-stop about the latest trend is not.
  • No drama, no politics: The goal is to bring people together, not divide them. To inform, not inflame.
  • Respect, tolerance, and inclusivity are key. Everyone should feel welcome reading the BSD Cafe Journal — never judged, offended, or excluded.

Why WordPress?


The platform I’ve chosen is WordPress, for several reasons:

  • It’s portable (runs well on all BSDs).
  • It has great built-in role management (contributors, authors, etc.).
  • And — last but not least — it supports ActivityPub.

This means every author will have their own identity in the Fediverse and can be followed directly, and it’ll also be possible to follow the whole Journal.

Original and educational content is encouraged, but it’s also perfectly fine to link to existing articles elsewhere. Personally, I’ll link my technical posts from ITNotes whenever I publish them there.

The goal is simple: a news-oriented site, rich in content, ad-free, respectful of privacy — all under the BSD Cafe umbrella.


Getting Involved


Content coordination will happen in a dedicated Matrix room for authors. There’ll also be a public room for discussing ideas, giving feedback, and sharing suggestions.

Of course, I can’t do this alone. A journal with no content is just an empty shell.

So here’s my call to action:

Who’s ready to lend a hand? If you enjoy writing, explaining, sharing your knowledge — the Journal is waiting for you!





Videogiochi Liberi: a Matrix/Telegram community for playing free (libre) games together 🌸



Public service announcement: I maintain a community called Videogiochi Liberi, where we talk about non-proprietary video games (and play them 😁).

It would be nice to start organizing game nights again to nerd out together, so here are the links for anyone interested! The two platforms are bridged, but for the full experience I recommend Matrix, the first link (it has several rooms).

Matrix: matrix.to/#/#videogiochiliberi…

Telegram: https://t.me/videogiochi_liberi



Caparezza Announces “Orbit Orbit”: Between Comics and Music, His Ninth Album Arrives October 31, 2025


Caparezza is back to surprise his fans, and he does so with an ambitious project, “Orbit Orbit”, which unites two great passions: music and comics.

READ IT ON ATOMHEARTMAGAZINE.COM




in reply to Mas

“Successful exploitation requires a combination of specific conditions. An attacker must first gain physical access to a target eUICC and use publicly known keys,” Kigen said. “This enables the attacker to install a malicious JavaCard applet.”


If an attacker has physical access, they can do whatever the fuck they want with the device. All bets are off.

If I had physical access to a server, I could just fucking drop in my own hard drive full of malware if I wanted to. It doesn’t matter how good the security software/firmware on the server is when I can physically remove that software/firmware and substitute my own. That doesn’t mean every single server is “exposed to malicious attacks” in the colloquial sense of the phrase.



Stopping the rot when good software goes bad means new rules


The 21st century is turning out weirder than we thought. For the entire history of art, for example, tools could be used and abused and would work more or less well, but generally helped the wishes and skills of the user. They did not plot against us. Now they can – and do.

Take the painter's palette. A simple, essential, and harmless tool – just don't lick the chrome yellow, Vincent – affording complete control of the visual spectrum while being an entirely benign piece of wood. Put the same idea into software, and it becomes a thief, a deceiver, and a spy. Not a paranoid fever dream of an absinthe-soaked dauber, but the observed behavior of a Chrome extension color picker. Not a skanky chunk of code picked up from the back streets of cyberland, but a Verified by Google extension from Google's own store.

This seems to have happened because when the extension was first uploaded to the store, it was as simple, useful, and harmless as its physical antecedents. Somewhere in its life since then, an update slipped through with malicious code that delivered activity data to the privacy pirates. It's not alone in taking this path to evil.


One wonders what might be different if making a living wage didn't usually involve deceit of some form.



About The BSD Cafe Journal



Welcome to The BSD Cafe Journal! This platform is an extension of the BSD Cafe, born from a long-held vision to create a central, multi-author space for the BSD and open-source communities. Just like the traditional Italian cafés (called “bar”) where people gathered to read the news and chat, our Journal aims to be a vibrant hub for reading, sharing, and staying informed.


What You’ll Find Here


The BSD Cafe Journal is dedicated to providing original, educational, and insightful content on a regular basis. You can expect:

  • News and Updates: Stay current with the latest happenings in the BSD and Open Source worlds.
  • Tutorials and Guides: Learn new skills and deepen your understanding.
  • Technical Articles: Dive into detailed explanations and analyses.
  • Author Contributions: Our writers will share original articles, link to their own blog posts (but not only), and offer unique perspectives.

This isn’t an aggregator, nor is it here to replace your favorite personal blogs or newsletters. Instead, it’s a complementary space where authors can contribute high-quality content that informs and connects our community.


Our Guiding Principles


The BSD Cafe Journal operates on the same core values that define the BSD Cafe:

  • Positivity and Constructive Discussion: We aim to build up, not tear down. Supporters, not haters.
  • Promoting BSDs and Open Source: Our focus is on the advancement and sharing of knowledge within these communities.
  • No Hype, No Drama, No Politics: We steer clear of trends, conflicts, and divisive topics. Our goal is to unite, not inflame.
  • Respect, Tolerance, and Inclusivity: Everyone is welcome here. We strive to create an environment where readers and contributors alike feel respected, never judged, offended, or excluded.

Our Platform: WordPress and the Fediverse


We’ve chosen WordPress for its robust features, including excellent role management for contributors and authors, its portability across BSDs, and crucially, its ActivityPub support. This means:

  • Individual Author Identities: Each author can have their own identity within the Fediverse and be followed directly.
  • Follow the Journal: You can also follow the entire Journal to get all our updates.

We encourage original content, but also welcome links to relevant articles published elsewhere. For example, the founder will link technical posts from ITNotes when they’re published. Our aim is a content-rich, ad-free, privacy-respecting news site under the BSD Café umbrella.


Join the Conversation!


Content coordination for authors happens in a dedicated, private Matrix room. We also have a public Matrix room (matrix.to/#/#bsdcafejournal:bs…) where you can discuss ideas, provide feedback, and share suggestions with the community.

The success of the Journal depends on its contributors. If you enjoy writing, explaining, and sharing your knowledge, we invite you to join us! The Journal is waiting for your unique voice.

in reply to Ángel

Who knows, maybe in the future... AI will replace us, and we'll all gather at the "real" BSD Cafe to talk about the 'good old days of technology', just like the pirates in Monkey Island at the SCUMM Bar, drinking coffee and eating Tiramisu (instead of Grog).


Monday, July 14, 2025


Russian drones kill 1, injure 9 in Sumy Oblast amid attack on civilian, critical infrastructure — More Russians will fall from windows — Russia’s summer offensive has fallen far short of expectations — North Korea supplied Russia with 12 million rounds of 152mm shells


The Kyiv Independent [unofficial]



Russia’s war against Ukraine

[Photo: Standing with workers before they install a new flag pole on the South Lawn, U.S. President Donald Trump talks with journalists outside the White House on June 18, 2025, in Washington, DC. (Chip Somodevilla / Getty Images)]
[Photo: A building is seen on fire after a Russian missile strike hit the city of Sloviansk, Donetsk Oblast, Ukraine, on July 12, 2025. (Vincenzo Circosta / Anadolu via Getty Images)]

Trump says US will send Patriot missiles to Ukraine. “We will send them Patriots, which they desperately need, because (Russian President Vladimir) Putin really surprised a lot of people. He talks nice and then bombs everybody in the evening,” U.S. President Donald Trump said on July 13.

Russia’s summer offensive has fallen ‘far short of expectations,’ Zelensky says. Moscow’s ongoing summer offensive has not reached the Kremlin’s expectations as Ukrainian troops continue to thwart Russian attacks on various regions, President Volodymyr Zelensky claimed on July 13.

Russia launched over 1,800 drones on Ukraine in one week, Zelensky says. Over 1,200 glide bombs and 83 missiles of various types were also launched on Ukraine in the past week, President Volodymyr Zelensky said on July 13.


Pro-Ukrainian partisans destroy car used by Chechen unit in occupied Mariupol, Atesh claims. “We send greetings to the kadyrovtsy,” the group wrote, referring to the notoriously ruthless troops named for Chechen strongman Ramzan Kadyrov.

SBU claims liquidation of Russian agents responsible for killing officer in Kyiv. The alleged Russian agents were killed during a shootout in an SBU special operation on July 13 in Kyiv Oblast, according to the agency.

North Korea supplied Russia with 12 million rounds of 152mm shells, South Korean intelligence estimates. The report estimated that North Korea could have provided Russia with around 28,000 containers containing weapons and artillery shells to date.

Read our exclusives


Ukraine war latest: German-funded long-range weapons to arrive in Ukraine by late July; NATO chief to visit Washington on July 14

Ukraine will begin receiving hundreds of domestically produced long-range weapon systems by the end of July under a German-financed agreement, German Major General Christian Freuding told the German ZDF news channel.


The origins and meaning of the tryzub, the Ukrainian coat of arms

The trident, known in Ukrainian as tryzub, is instantly recognizable as the central element of Ukraine’s modern coat of arms. But beyond its official role, the tryzub has taken on profound symbolic meaning in recent years amid Russia’s full-scale invasion of Ukraine.


Human cost of Russia’s war


Russian drones kill 1, injure 9 in Sumy Oblast amid attack on civilian, critical infrastructure. A Russian drone attack on Ukraine’s northeastern Sumy Oblast on July 13 killed one person and injured nine others, Governor Oleh Gryhorov reported, amid a larger attack on the region’s critical infrastructure.

Russian attacks across Ukraine kill 8, injure 21 over past day. Deadly attacks on civilians were reported in Donetsk, Dnipropetrovsk, Sumy, and Kherson oblasts, according to regional authorities.

International response


‘The game is about to change’ — Republican Senator Graham expects influx of US weapons shipments to Ukraine ahead of Trump’s ‘major’ announcement. U.S. Senator Lindsey Graham said in an interview with CBS News on July 13 that he expects an influx of U.S. weapons shipments to Ukraine to begin “in the coming days,” as U.S. President Donald Trump prepares to make a “major statement” on the war in Ukraine on July 14.

NATO chief to visit Washington on July 14 as Trump prepares ‘major statement’ on Russia. NATO Secretary General Mark Rutte will visit Washington on July 14-15, the military alliance’s press service announced on July 13. The visit comes after U.S. President Donald Trump said he intends to make a “major” announcement on Russia on July 14, potentially signaling a major policy shift on the war in Ukraine.

Balkan countries release joint statement supporting Ukrainian NATO accession after summit. The joint summit declaration was released by Ukraine and the Croatian government on July 12.

Over $4 billion in new funds pledged for Ukraine’s reconstruction after Recovery Conference, ministry says. Ukrainian officials signed agreements, memorandums, and joint statements on raising funds totalling 3.55 billion euros ($4.15 billion) following the Ukraine Recovery Conference 2025 on July 10-11 in Rome, Ukraine’s Ministry for Development of Communities and Territories announced July 13.

US aid swings and mysterious deaths in Russia | Ukraine This Week

In other news


Russia denies Putin pushed Iran for ‘zero enrichment’ nuclear deal. Western countries and Israel suspect Tehran of seeking to develop a nuclear weapon, a claim Iran denies, defending what it calls its “non-negotiable” right to develop a civilian nuclear program.

Russia scales up propaganda operations across Africa, Ukrainian intelligence says. By the end of the year, Russia Today plans to launch broadcasting in Amharic for an audience in Ethiopia, HUR said.


Today’s Ukraine Daily was brought to you by Francis Farrell, Natalia Yermak, Dmytro Basmat, Olena Goncharova, and Volodymyr Ivanyshyn.




The Photon frontend has been updated to version 2.0


Just a heads-up that the alternative Lemmy front-end called Photon has been updated to version 2.0: github.com/Xyphyn/photon/relea…

If you want to test it with Feddit you can do so at fdd.lealternative.net/ where it has already been updated to this new release...

You can log in with your Feddit account and use it normally in place of feddit.it; it's just an alternative front-end, so only the way the information on Feddit is displayed changes.

#Main


How I'm sending incremental Btrfs snapshots on an Asustor NAS to a LUKS disk


Hi, I recently finished setting up my Asustor NAS, and I found its snapshotting setup a bit confusing, so I'm writing this as a quick reference that will hopefully be useful to others.

For context, my device is an AS1102TL running ADM 4.3.3, but I imagine this should apply to all recent Asustor devices.

First of all, the reason I picked Asustor instead of e.g. Synology is that it was not clear whether the latter actually supported LUKS full disk encryption on an external USB HDD. On an Asustor you have to SSH into your NAS, but you can definitely do it.

The only gotcha: if you created the LUKS volume recently on another system, it's likely using Argon2 for key stretching, a memory-intensive algorithm for which the 1GB of memory provided by my AS1102TL is not enough. The solution is simply to add another key with a different algorithm, e.g. PBKDF2, or to just create the volume on your Asustor. Either way, you'll be able to read and write it both from the NAS and from another Linux machine.
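For example, from another Linux machine (where Argon2 is no problem) you can add a second, PBKDF2-based key slot and then use that passphrase on the NAS. A minimal sketch, assuming the disk is /dev/sdb1 as in the commands below:

$ # Add a second passphrase in a new key slot, stretched with PBKDF2 instead of Argon2
$ sudo cryptsetup luksAddKey --pbkdf pbkdf2 /dev/sdb1
$ # Inspect the LUKS2 header: each keyslot reports the PBKDF it uses
$ sudo cryptsetup luksDump /dev/sdb1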

$ sudo $(which cryptsetup) luksOpen /dev/sdb1 encrypted # "encrypted" is just the name that I gave to the device, pick anything... remember the `-S` flag if you need to select a key in a different slot
$ sudo mount /dev/mapper/encrypted /mnt/USB1
... do what you want ...
$ sudo umount /mnt/USB1/
$ sudo $(which cryptsetup) luksClose encrypted

Unfortunately, this doesn't mean that, once mounted, the disk will be integrated into the ADM UI (you're not going to see it in the "External Devices" UI, nor be able to select it as a destination in the "Backup & Restore" UI).

Normally, mounted external drives are available at paths like /share/USB0 or /share/USB1. It might be possible to mount your disk there (or symlink your mount point) to make it usable from ADM, but by default /share is an immutable loop mount of /volume0/.@system/sharebase.loop

$ lsattr -d /share/
-----i------- /share/

Trying to work around that with chattr, and maybe manually modifying sharebase.loop, felt riskier than necessary, so I didn't attempt it (the ADM UI doesn't provide btrfs send functionality anyway, so it's not very interesting for our purposes).

Now, there are two different approaches to accomplishing incremental backups of btrfs snapshots: one where you create the snapshots yourself from the CLI, and one where you try to reuse the snapshots created by ADM.

  1. Create snapshots from the CLI


sudo btrfs subvolume snapshot -r /volume1 /volume1/.@snapshots/v20250710-0951 # -r creates a read-only snapshot, which btrfs send requires

Then pick a parent snapshot and send the incremental changes:
sudo btrfs send -p /volume1/.@snapshots/v20250710-0936 /volume1/.@snapshots/v20250710-0951/ | sudo btrfs receive -v /mnt/USB1/

I'm using the same naming convention and the same location used for the snapshots created by ADM (you wouldn't get conflicts anyhow, unless you create another one in the exact same minute).

I recommend the -v verbose flag for btrfs receive, otherwise you're not going to see progress while the operation is ongoing.

That's it! Of course, the first send has to happen without specifying a parent with -p, so that it does a full clone.
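If you do this regularly, the two steps are easy to wrap in a script. A minimal sketch (the paths match my setup above; picking the parent by sorting relies on the v%Y%m%d-%H%M naming convention, an assumption you should verify against your own snapshot names):

#!/bin/sh
# Snapshot /volume1 and send it incrementally to the backup disk.
SNAP_DIR=/volume1/.@snapshots
DEST=/mnt/USB1
NEW="$SNAP_DIR/v$(date +%Y%m%d-%H%M)"

# Most recent existing snapshot, if any, to use as the incremental parent
PARENT=$(ls -d "$SNAP_DIR"/v* 2>/dev/null | sort | tail -n 1)

sudo btrfs subvolume snapshot -r /volume1 "$NEW"

if [ -n "$PARENT" ]; then
    sudo btrfs send -p "$PARENT" "$NEW" | sudo btrfs receive -v "$DEST"
else
    # First run: no parent yet, send a full clone
    sudo btrfs send "$NEW" | sudo btrfs receive -v "$DEST"
fi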

  2. Reuse snapshots created in ADM

There are two problems with this: the snapshots created by ADM are not read-only, and they sit right under the top level.

To address these issues:

sudo mount /dev/md1 -o subvol=/ /mnt/rootvol # mount the top-level subvolume (id 5)
sudo btrfs property set /mnt/rootvol/v2025079-2324/ ro true # make the snapshot read-only, as btrfs send requires

Then pick a parent snapshot and send the incremental changes:
sudo btrfs send -p /mnt/rootvol/v2025079-0824/ /mnt/rootvol/v2025079-2324 | sudo btrfs receive -v /mnt/USB1/

As above, use the -p and -v flags as needed. That's it!
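One more note: for the next incremental send you only need the most recent snapshot as a parent (it must exist on both sides), so older ones can be cleaned up. A sketch, reusing the hypothetical names above; if ADM's Snapshot Center created the snapshot, it's probably safer to delete it from that UI instead:

$ # Delete a snapshot that's no longer needed as an incremental parent
$ sudo btrfs subvolume delete /mnt/rootvol/v2025079-0824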

If you're wondering why we had to mount the / subvol, you can try without:

You can mount the snapshots directly from ADM's Snapshot Center by toggling the Preview switch for a snapshot. In that case they are still going to be RW subvolumes, though mounted read-only. You can deal with that by remounting: sudo mount -o remount,rw /volume1/.@snapshots/v2025079-2324/ && sudo btrfs property set /volume1/.@snapshots/v2025079-2324/ ro true

You can then try to send the changes, but what you're going to get is:

$ sudo btrfs send -p /volume1/.@snapshots/v2025079-0824/ /volume1/.@snapshots/v2025079-2324
ERROR: not on mount point: /volume1/.@snapshots/v2025079-2324

The error is a bit confusing (you have mounted the volume! why is that not good enough?), but you can get a bit of clarity with btrfs subvolume list.
$ sudo btrfs subvolume list /volume1 -qua
ID 256 gen 159842 top level 5 parent_uuid -                                    uuid cbc37b20-901f-b043-8cf1-59b814814140 path <FS_TREE>/base
ID 258 gen 151914 top level 5 parent_uuid -                                    uuid 5039c206-1a89-dc45-a9fe-43f8959cb672 path <FS_TREE>/.iscsi
ID 259 gen 159840 top level 5 parent_uuid -                                    uuid 06d46207-9aa2-2944-ba38-e5736963ec12 path <FS_TREE>/.@plugins
ID 2758 gen 157876 top level 5 parent_uuid cbc37b20-901f-b043-8cf1-59b814814140 uuid 256c36c5-7033-a945-a2db-b6a334a8419f path <FS_TREE>/v2025079-0824
ID 2759 gen 157859 top level 5 parent_uuid cbc37b20-901f-b043-8cf1-59b814814140 uuid 88942ee6-8b52-3d4d-b972-5de2d6764728 path <FS_TREE>/v2025079-2324
ID 2762 gen 159833 top level 256 parent_uuid cbc37b20-901f-b043-8cf1-59b814814140 uuid 6d636914-35a0-3f42-9486-bf5d673b94c5 path base/.@snapshots/v20250710-0936
ID 2763 gen 159836 top level 256 parent_uuid cbc37b20-901f-b043-8cf1-59b814814140 uuid e99df217-4946-a740-bba9-99f64f1a0d69 path base/.@snapshots/v20250710-0951

Now, compare with the output when listing /mnt/rootvol:
$ sudo btrfs subvolume list /mnt/rootvol/ -qua
ID 256 gen 159867 top level 5 parent_uuid -                                    uuid cbc37b20-901f-b043-8cf1-59b814814140 path base
ID 258 gen 151914 top level 5 parent_uuid -                                    uuid 5039c206-1a89-dc45-a9fe-43f8959cb672 path .iscsi
ID 259 gen 159840 top level 5 parent_uuid -                                    uuid 06d46207-9aa2-2944-ba38-e5736963ec12 path .@plugins
ID 2758 gen 157876 top level 5 parent_uuid cbc37b20-901f-b043-8cf1-59b814814140 uuid 256c36c5-7033-a945-a2db-b6a334a8419f path v2025079-0824
ID 2759 gen 157859 top level 5 parent_uuid cbc37b20-901f-b043-8cf1-59b814814140 uuid 88942ee6-8b52-3d4d-b972-5de2d6764728 path v2025079-2324
ID 2762 gen 159833 top level 256 parent_uuid cbc37b20-901f-b043-8cf1-59b814814140 uuid 6d636914-35a0-3f42-9486-bf5d673b94c5 path <FS_TREE>/base/.@snapshots/v20250710-0936
ID 2763 gen 159836 top level 256 parent_uuid cbc37b20-901f-b043-8cf1-59b814814140 uuid e99df217-4946-a740-bba9-99f64f1a0d69 path <FS_TREE>/base/.@snapshots/v20250710-0951

As you can see, the snapshots created by ADM sit directly under top level 5, and if you list them from /volume1 (which is just the mount point for the /base subvolume) they are not found directly underneath (despite being mounted there), which is why they appear under their own <FS_TREE>.

Conversely, the ones you create directly from the CLI under /volume1 appear as top level 256, and they show up under /base if you list the subvolumes from /mnt/rootvol.

I hope this has been useful.

in reply to berdario

PS: while I was closing the dozens of tabs I had opened to investigate how everything fits together, a note on what I wrote earlier:

this doesn’t mean that once mounted, the disk will be integrated in the ADM ui (you’re not going to be able to see it in the “External Devices” ui, nor be able to select it as a destination in the “Backup & Restore” ui).


Mounting the disk on a path already accessible by the ADM file explorer doesn't work because of permission issues, similar to the immutable flag you can see with lsattr... but someone on Reddit had another workaround:

reddit.com/r/asustor/comments/…

I mounted the opened device to another path already accessible by the ADM file explorer. I don't think it really matters where. In my case I made two partitions on an external USB drive: the first is a small exFAT partition (10GB); the second takes up the rest of the drive and is formatted with cryptsetup (LUKS). Finally, to use this I open the LUKS device and mount it to a location in the first partition (which is automatically mounted by ADM).
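In concrete terms, I'd expect that workaround to look something like this (a sketch; device names and paths are assumptions based on the quoted setup, with /dev/sdb1 as the small exFAT partition auto-mounted by ADM at /share/USB1 and /dev/sdb2 as the LUKS partition):

$ sudo $(which cryptsetup) luksOpen /dev/sdb2 encrypted
$ # Mount the LUKS mapping inside the auto-mounted exFAT partition,
$ # so ADM's file explorer and backup tools can reach it
$ sudo mkdir -p /share/USB1/secure
$ sudo mount /dev/mapper/encrypted /share/USB1/secure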


How to Recognize a TRADING SCAM GURU


All the “signals” for recognizing someone who wants to scam you with online trading.

in reply to skariko

Well, simple: if they talk about online trading, they're a scam guru by definition.

It would be more honest if they talked about horoscopes or tarot cards.



Why Does Linux Have So Much Drama?!


in reply to AbnormalHumanBeing

Because there is choice. There is very little choice on Windows or Mac, so there's not really anything to argue about 😅 Champagne problems, if ya ask me.
in reply to AbnormalHumanBeing

Most of the conspiracy theorists (and similar personality types) gravitate to Linux.