


2025 Component Abuse Challenge: Glowing Neon From a 9 V Relay


Most of us know that a neon bulb requires a significant voltage to strike, in the region of 100 volts. There are plenty of circuits to make that voltage from a lower supply, should you wish to have that comforting glow of old, but perhaps one of the simplest comes from [meinsamayhun]. The neon is lit from a 9-volt battery, and the only other component is a relay.

What’s going on? It’s a simple mechanical version of a boost converter, with the relay wired as a buzzer. On each “off” cycle, the magnetic field in the coil collapses, and instead of being harvested by a diode as with a boost converter, it lights the neon. Presumably, the neon also saves the relay contacts from too much wear.
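As a back-of-the-envelope check on the mechanism, here is a quick inductive-kickback calculation. The coil inductance, coil resistance, and contact-break time are illustrative assumptions, not measurements from [meinsamayhun]'s build:

```python
# Estimate the inductive kickback that strikes the neon bulb.
# All component values below are assumptions for illustration.
L_coil = 0.5        # relay coil inductance in henries (assumed)
I_hold = 9 / 500    # steady coil current: 9 V across an assumed 500-ohm coil
dt = 50e-6          # assumed time for the contacts to break the circuit

v_kick = L_coil * I_hold / dt   # V = L * di/dt
print(f"Peak kickback ~ {v_kick:.0f} V")  # ~180 V with these numbers
```

Even with conservative guesses, the collapsing field easily clears the roughly 100 V striking voltage, which is why a lone relay suffices.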

We like this project for its simplicity and for managing to do something useful without a semiconductor or vacuum tube in sight. It’s the very spirit of our 2025 Component Abuse Challenge, for which there is barely time to enter yourself if you have something in mind.

2025 Hackaday Component Abuse Challenge


hackaday.com/2025/11/08/2025-c…



Thanks for a Superconference


Last weekend was Supercon, and it was, in a word, super. So many people sharing so much enthusiasm and hackery, and so many good times. It’s a yearly dose of hacker mojo that we as Hackaday staff absolutely cherish, and we heard the same from many of the participants as well. We always come away with new ideas for projects, or new takes on our current top-of-the-heap obsession.

If you didn’t get a chance to see the talks live, head on over to the Hackaday YouTube stream and get yourself caught up, because that’s only half of the talks. Over the next few weeks, we’ll be writing up the other track of Design Lab talks and getting them out to you ASAP.

If you didn’t get to join us because you are on an entirely different continent, well, that’s a decent excuse. But if that continent is Europe, you can catch us up in the Spring of 2026, because we’re already at work planning our next event on that side of the Atlantic.

Our conferences always bring out the best of our community, and the people who show up are so amazingly positive, knowledgeable, and helpful. It’s too bad that it can only happen a few times per year, but it surely charges up our hacker batteries. So thanks to all the attendees, presenters, volunteers, and sponsors who make it all possible!

This article is part of the Hackaday.com newsletter, delivered every seven days for each of the last 200+ weeks. It also includes our favorite articles from the last seven days that you can see on the web version of the newsletter. Want this type of article to hit your inbox every Friday morning? You should sign up!


hackaday.com/2025/11/08/thanks…



What has 5,000 Batteries and Floats?


While it sounds like the start of a joke, Australian shipbuilder Incat Tasmania isn’t kidding around about electric ships. Hull 096 has started charging, although it has only 85% of the over 5,000 lithium-ion batteries it will have when complete. The ship has a 40 megawatt-hour storage system with 12 banks of batteries, each consisting of 418 modules for a total of 5,016 cells. [Vanessa Bates Ramirez] breaks it down in a recent post over on IEEE Spectrum. You can get an eyeful of the beast in the official launch video, below. The Incat Tasmania channel also has other videos about the ship.

The batteries use no racks, to save weight. A good thing, since they already weigh in at 250 tonnes. Of course, cooling is a problem, too. Each module has a fan, and special techniques keep one hot cell from spreading trouble to its neighbors. Charging in Australia comes from a grid running on 100% renewable energy. When the ship enters service as a ferry between Argentina and Uruguay, its 40-minute charges will draw on a different mix. Currently, Uruguay gets about 92% of its power from renewable sources. Argentina still relies mostly on natural gas, though 42% of its electricity comes from renewable generation.
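The quoted figures can be sanity-checked with a little arithmetic; every input below comes straight from the numbers in the article:

```python
# Sanity-check the Hull 096 battery figures quoted above.
banks = 12
modules_per_bank = 418
total_modules = banks * modules_per_bank
print(total_modules)              # 5016, matching the quoted total

energy_mwh = 40                   # 40 MWh storage system
kwh_per_module = energy_mwh * 1000 / total_modules
print(f"{kwh_per_module:.1f} kWh per module")   # 8.0 kWh each

mass_tonnes = 250                 # quoted pack mass
kg_per_module = mass_tonnes * 1000 / total_modules
print(f"{kg_per_module:.1f} kg per module")     # 49.8 kg each
```

So each module is roughly the size and weight of a large suitcase holding about 8 kWh, which makes the no-rack, fan-per-module design more plausible.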

The ship is 130 meters (426 feet) long, mostly aluminum, and has a reported capacity of 2,100 people and 225 vehicles per trip. Ferry service is perfect for electric ships — the distance is short, and it’s easy to schedule time to charge. As with all electric vehicles, though, the batteries won’t stay at full capacity for long. Typical ship design calls for a 20-year service life, and it’s not uncommon for a vessel to remain in service for 30 or even 40 years. But experts expect the batteries on the ferry will need to be replaced every 5 to 10 years.

While electric ferries may become common, we don’t expect to see electric cargo ships plying the ocean soon. Diesel is hard to beat for compact storage and high energy density. There are a few examples of cargo ships running on electric power, though. Of course, that doesn’t mean you can’t build your own electric watercraft.

youtube.com/embed/5GVwLNH_Qus?…


hackaday.com/2025/11/08/what-h…



Sam Altman: “I hope nothing bad happens because of the technology”


The latest statements from Sam Altman, CEO of OpenAI, on the progress of artificial intelligence (AI) are not very encouraging: he recently said he is worried about “the impact of AI on jobs,” and he also made clear that we would not be safe even in a bunker “if AI gets out of control.”

But that’s not all, because in a recent interview the OpenAI chief stated in no uncertain terms that we should be worried about the future artificial intelligence will bring us: “I think something bad is going to happen with AI.”

As an Investopedia article reports, a month ago Sam Altman took part in an interview for the a16z video podcast of venture capital firm Andreessen Horowitz, and he took the opportunity to say that he expects bad things to happen because of artificial intelligence: “I hope nothing really bad happens because of the technology.”

youtube.com/embed/JfE1Wun9xkk?…

As you can see in the interview below, Altman was referring to Sora, a video-generation tool OpenAI launched at the end of September that quickly became the most-downloaded app on the US App Store. That led to a wave of deepfakes created with this model, which flooded social media with videos of figures such as Martin Luther King Jr. and other public figures, including Altman himself.

Altman has in fact appeared in such videos carrying out various criminal activities, as you can see in this Instagram Story. But that’s not all: Altman has also said that tools like Sora need safeguards to keep the technology from being used for harmful purposes: “Very soon the world is going to have to contend with incredible video models that can fake anyone or show anything you want.”

Likewise, the creator of ChatGPT said that, rather than perfecting this kind of technology behind closed doors, society and artificial intelligence should work together to “co-evolve,” and that “you can’t just drop it all at the end.”

According to Altman, what we should do is give people early exposure to this kind of technology, so that communities can establish norms and guardrails before these tools become even more powerful. He also says that if we do, we will be better prepared when AI video-generation models even more advanced than today’s arrive.

Sam Altman’s warning was not only about fake videos, but also about how many of us tend to “outsource” our decisions to algorithms that few people understand: “I still think there are going to be strange or scary moments.”

Altman also explained that the fact that artificial intelligence has not yet caused a catastrophic event “doesn’t mean it never will,” and that “billions of people talking to the same brain” could end up creating “strange things on a societal scale.”

“I believe that as a society we will develop guardrails around this phenomenon,” he said. Finally, although this is something that concerns us all, Altman opposes strict regulation of the technology, arguing that “most regulation probably has a lot of downsides” and that the ideal would be to run “very careful safety tests” on these new models, which he described as “extremely superhuman.”

The article Sam Altman: “I hope nothing bad happens because of the technology” originally appeared on Red Hot Cyber.



Microsoft’s new target for artificial intelligence? Medicine!


The tech giant has announced the creation of a new development team for a “superhuman” artificial intelligence that will surpass human experts in the accuracy of medical diagnoses. The team will be led by Mustafa Suleyman, the company’s head of AI.

Microsoft has announced the creation of a new group called the MAI Superintelligence Team, which aims to develop a “superhuman” artificial intelligence specializing in medical diagnostics, Reuters reports. The project is led by Mustafa Suleyman, co-founder of DeepMind and Inflection AI.

According to Suleyman, the new team is not aiming to develop a general intelligence (AGI) capable of performing any human task, but will instead focus on “expert models” that reach superhuman levels of accuracy in specific areas, chiefly medical diagnosis. This, he says, is a first step toward developing AI capabilities that will detect diseases before humans can, and that could extend patients’ lifespan and quality of life.

He said Microsoft has a concrete, clear plan to develop, within two to three years, an AI system that will surpass human experts in diagnostic accuracy. Suleyman noted that the company intends to invest significant resources in the effort, adding that success would mark a historic breakthrough for global health.

According to market estimates, Microsoft will fold the fruits of the new initiative into its Azure Health Data Services cloud offerings and into existing collaborations with medical institutions in the United States and Europe. The immediate goal is to improve the accuracy of image-based diagnoses, genetic analyses, and the reading of medical visit summaries, while reducing human error and speeding up clinical decisions.

Beyond the enthusiasm, the move raises regulatory and ethical questions about how much trust to place in systems that achieve “superhuman” accuracy and safety, and about what happens in the event of a medical error.

Suleyman stressed that Microsoft will avoid developments that pose an existential risk and will focus on safe solutions for medical use.

The article Microsoft’s new target for artificial intelligence? Medicine! originally appeared on Red Hot Cyber.




“Let us join together Christian roots and openness to all.” With these words the bishops of France, gathered in Plenary Assembly at Lourdes, addressed those who teach the Catholic religion, expressing gratitude and closeness to…


This afternoon Pope Leo XIV met with 15 people from Belgium who were victims of abuse by members of the clergy when they were minors. The announcement came from the Holy See Press Office.


The spiritual exercises for priests promoted by Comunione e Liberazione, on the theme “God is mercy,” conclude today with the celebration of Mass in Lazise (Verona). The meditations were preached by Msgr.


The Fondazione Antiusura Jubilaeum E.T.S., established by the bishops of Avezzano, L’Aquila, and Sulmona with responsibility for the entire Abruzzo region, is taking part in the Financial Education Month promoted by the Ministry of the Economy with the project “Il sovrainde…


It seems to me that here in Europe the only one pushing European countries to be afraid and feel threatened is Russia itself... really... Russia (Lavrov) accuses us of doing what the consequences of Russia's own actions make necessary? And is this business with the Russian drones also supposed to make us feel safer? I don't get it.


The proposal by CGIL secretary Maurizio Landini to introduce a “solidarity contribution” of 1.3 percent on net assets above 2 million euros.

Considering that the most hard-up Italian worker pays 23% tax on the only thing they have, namely their income, a proposal like this strikes me as far too timid.


Why there is talk of a wealth tax again - Il Post
https://www.ilpost.it/2025/11/09/tassa-patrimoniale/?utm_source=flipboard&utm_medium=activitypub

Published on News @news-ilPost




The commitment of the Armed Forces, between honor and gratitude

@Notizie dall'Italia e dal mondo

Defense Minister Guido Crosetto, in an interview with Rivista Aeronautica, which is celebrating its 100th anniversary, stressed that “Defense personnel remain our most precious resource. Women and men who often operate in difficult contexts with professionalism, humanity, a spirit of service, and




D.K. Harrell – Talkin’ Heavy
freezonemagazine.com/articoli/…
The kid speaks the Blues loud and clear… I was setting out to write about something else entirely when, through the exasperating, foggy post-modern everyday made of everyday post-nonsense, D. Keyran Harrell, a young bluesman (26 years old, April 1999, Ruston, Louisiana) dressed in fine brocade, pushed his way in like a Ricola handed to someone with bronchitis. Gliding into the living room from a device a […]


Not just quality work, but quality prospects too


@Notizie dall'Italia e dal mondo
We are getting back to slowing the world down. But this past stretch has been anything but easy for us. We have run up against the difficulty of carrying on independent journalism while keeping it sustainable at the same time. Despite our good intentions, the lack of resources






An overview of the world's military powers

@Notizie dall'Italia e dal mondo

The US military website Global Firepower recently published its ranking of world military power for 2025, with the top ten as follows: the United States of America, Russia, the People's Republic of China, India, the Republic of Korea (South), the United Kingdom, France, Japan, Turkey, and


in reply to Nabil Hunt

Hello and welcome to poliverso.org

Friendica is a somewhat unique piece of software: a little more difficult to use than Mastodon, but infinitely richer in features.

I noticed that your first test post was written in English. That's not a problem, but I remind you that poliverso.org is a server dedicated to an audience that communicates primarily in Italian, so it would be appropriate for most of your posts to be in that language.

If you prefer to continue communicating in English, you can search for other Friendica servers at this link:

friendica.fediverse.observer/l…

Best regards and have a good Sunday



That moment when you realize that making your dream come true is impossible.


I don't know if it has ever happened to you: having a dream, hoping you can make it come true, and then wishing for it, more intensely every day.

It has happened to me many times, and just as many times the dreams were shattered. Some of them were very big, and the disappointment was great when it happened. Maybe I'm someone who builds up too many expectations; who knows.

But when my dream became not to feel pain and malaise anymore, things changed: it was RIGHT for me to realize that dream. I thought I would manage it easily, and not only did it seem that some divine justice would grant it to me, but even that it would be easier to pull off than all the other dreams I had nurtured.

It wasn't.

The dream of living on Tenerife crumbled quickly from 2020 onward, as I began to understand that the place, the only one where I truly feel well, was no longer livable. Too many people have moved there, and too many tourists keep going, making it, in effect, an inhospitable place.

Can you imagine how I felt returning to Tenerife, after realizing that I couldn't live on El Hierro either?

I tried not to spoil those few bitter days of my stay, in which everything I saw, and felt, resembled a precious cake that my childlike eyes could see but not touch.

And yet the island managed to teach me something.

Again.

The story is in this episode of the podcast.


Tenerife: the perfect island, where I cannot live.
In the fifth episode of the podcast in which I search for a new home, here I am back on Tenerife, the island I was supposed to move to but which has meanwhile become "inhospitable".
Every time I see it, it's a pang in my heart, but this time too it taught me something important.
Happy listening.




Let's not turn off the lights on Gaza


@Giornalismo e disordine informativo
articolo21.org/2025/11/non-spe…
News is getting ever scarcer. The media are taking space away from Gaza and the West Bank, with few exceptions, for example Avvenire and Il Manifesto. But the tragedy that played out in Gaza during the Israeli bombardments has, unfortunately, not ended with the fragile American-brokered peace.




This week, we discuss archiving to get around paywalls, hating on smart glasses, and more.#BehindTheBlog


Behind the Blog: Paywall Jumping and Smart Glasses


This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss archiving to get around paywalls, hating on smart glasses, and more.

JASON: I was going to try to twist myself into knots attempting to explain the throughline between my articles this week, and about how I’ve been thinking about the news and our coverage more broadly. This was going to be something about trying to promote analog media and distinctly human ways of communicating (like film photography), while highlighting the very bad economic and political incentives pushing us toward fundamentally dehumanizing, anti-human methods of communicating. Like fully automated, highly customized and targeted AI ads, automated library software, and I guess whatever Nancy Pelosi has been doing with her stock portfolio. But then I remembered that I blogged about the FBI’s subpoena against archive.is, a website I feel very ambivalent about and one that is the subject of perhaps my most cringe blog of all time.

So let’s revisit that cringe blog, which was called “Dear GamerGate: Please Stop Stealing Our Shit.” I wrote this article in 2014, which was fully 11 years ago, which is alarming to me. First things first: They were not stealing from me; they were stealing from VICE, a company from which I did not actually see financial gains related to people reading articles. It was good if people read my articles and traffic was very important, and getting traffic over time led to me getting raises and promotions and stuff, but the company made very, very clear that we did not “own” the articles and therefore they were not “mine” in the way that they are now. With that out of the way, the reporting and general reason for the article were, I think, good, but the tone of it is kind of wildly off, and, as I mentioned, over the course of many years I have now come to regard archive.is as sort of an integral archiving tool. If you are unfamiliar with archive.is, it’s a site that takes snapshots of any URL and creates a new link for them which, notably, does not go to the original website. Archive.is is extremely well known for bypassing the paywalls on many sites, 404 Media sometimes but not usually among them.





X and TikTok accounts are dedicated to posting AI-generated videos of women being strangled.#News #AI #Sora


OpenAI’s Sora 2 Floods Social Media With Videos of Women Being Strangled


Social media accounts on TikTok and X are posting AI-generated videos of women and girls being strangled, showing yet another example of generative AI companies failing to prevent users from creating media that violates their own policies against violent content.

One account on X has been posting dozens of AI-generated strangulation videos starting in mid-October. The videos are usually 10 seconds long and mostly feature a “teenage girl” being strangled, crying, and struggling to resist until her eyes close and she falls to the ground. Some titles for the videos include: “A Teenage Girl Cheerleader Was Strangled As She Was Distressed,” “Prep School Girls Were Strangled By The Murderer!” and “man strangled a high school cheerleader with a purse strap which is crazy.”

Many of the videos posted by this X account in October include the watermark for Sora 2, OpenAI’s video generator, which was made available to the public on September 30. Other videos, including most of the videos the account posted in November, do not include a watermark but are clearly AI generated. We don’t know if these videos were generated with Sora 2 and had their watermark removed, which is trivial to do, or created with another AI video generator.

The X account is small, with only 17 followers and a few hundred views on each post. A TikTok account with a similar username that was posting similar AI-generated choking videos had more than a thousand followers and regularly got thousands of views. Both accounts started posting the AI-generated videos in October. Prior to that, the accounts were posting clips of scenes, mostly from real Korean dramas, in which women are being strangled. I first learned about the X account from a 404 Media reader, who told me X declined to remove the account after they reported it.

“According to our Community Guidelines, we don't allow hate speech, hateful behavior, or promotion of hateful ideologies,” a TikTok spokesperson told me in an email. The TikTok account was also removed after I reached out for comment. “That includes content that attacks people based on protected attributes like race, religion, gender, or sexual orientation.”

X did not respond to a request for comment.

OpenAI did not respond to a request for comment, but its policies state that “graphic violence or content promoting violence” may be removed from the Sora Feed, where users can see what other users are generating. In our testing, Sora immediately generated a video for the prompt “man choking woman” which looked similar to the videos posted to TikTok and X. When Sora finished generating those videos it sent us notifications like “Your choke scene just went live, brace for chaos,” and “Yikes, intense choke scene, watch responsibly.” Sora declined to generate a video for the prompt “man choking woman with belt,” saying “This content may violate our content policies.”

Safe and consensual choking is common in adult entertainment, be it various forms of BDSM or more niche fetishes focused on choking specifically, and that content is easy to find wherever adult entertainment is available. Choking scenes are also common on social media and in more mainstream horror movies and TV shows. The UK government recently announced that it will soon make it illegal to publish or possess pornographic depictions of strangulation or suffocation.

It’s not surprising, then, that when generative AI tools are made available to the public some people generate choking videos and violent content as well. In September, I reported about an AI-generated YouTube channel that exclusively posted videos of women being shot. Those videos were generated with Google’s Veo AI-video generator, despite it being against the company’s policies. Google said it took action against the user who was posting those videos.

OpenAI has had to make several changes to Sora 2’s guardrails since launch, after people used the tool to make videos of popular cartoon characters depicted as Nazis, among other forms of copyright infringement.





Early humans crafted the same tools for hundreds of thousands of years, offering an unprecedented glimpse of a continuous tradition that may push back the origins of technology.#TheAbstract


Advanced 2.5 Million-Year-Old Tools May Rewrite Human History


🌘
Subscribe to 404 Media to get The Abstract, our newsletter about the most exciting and mind-boggling science news and studies of the week.

After a decade-long excavation at a remote site in Kenya, scientists have unearthed evidence that our early human relatives continuously fashioned the same tools across thousands of generations, hinting that sophisticated tool use may have originated much earlier than previously known, according to a new study in Nature Communications.

The discovery of nearly 1,300 artifacts—with ages that span 2.44 to 2.75 million years old—reveals that the influential Oldowan tool-making tradition existed across at least 300,000 years of turbulent environmental shifts. The wealth of new tools from Kenya’s Namorotukunan site suggest that their makers adapted to major environmental changes in part by passing technological knowledge down through the ages.

“The question was: did they generally just reinvent the [Oldowan tradition] over and over again? That made a lot of sense when you had a record that was kind of sporadic,” said David R. Braun, a professor of anthropology at the George Washington University who led the study, in a call with 404 Media.

“But the fact that we see so much similarity between 2.4 and 2.75 [million years ago] suggests that this is generally something that they do,” he continued. “Some of it may be passed down through social learning, like observation of others doing it. There’s some kind of tradition that continues on for this timeframe that would argue against this idea of just constantly reinventing the wheel.”

Oldowan tools, which date back at least 2.75 million years, are distinct from earlier traditions in part because hominins, the broader family to which humans belong, specifically sought out high-quality materials such as chert and quartz to craft sharp-edged cutting and digging tools. This advancement allowed them to butcher large animals, like hippos, and possibly dig for underground food sources.

When Braun and his colleagues began excavating at Namorotukunan in 2013, they found many artifacts made of chalcedony, a fine-grained rock that is typically associated with much later tool-making traditions. To the team’s surprise, the rocks were dated to periods as early as 2.75 million years ago, making them among the oldest artifacts in the Oldowan record.

“Even though Oldowan technology is really just hitting one rock against the other, there's good and bad ways of doing it,” Braun explained. “So even though it's pretty simple, what they seem to be figuring out is where to hit the rock, and which angles to select. They seem to be getting a grip on that—not as well as later in time—but they're definitely getting an understanding at this timeframe.”
Some of the Namorotukunan tools. Image: Koobi Fora Research and Training Program
The excavation was difficult, as it takes several days just to reach the remote offroad site, and much of the work involved tiptoeing along steep outcrops. Braun joked that their auto mechanic lined up all the vehicle shocks that had been broken during each season's drive, as a testament to the challenge.

But by the time the project finally concluded in 2022, the researchers had established that Oldowan tools were made at this site over the course of 300,000 years. During this span, the landscape of Namorotukunan shifted from lush humid forests to arid desert shrubland and back again. Despite these destabilizing shifts in their climate and biome, the hominins that made these tools endured in part because this technology opened up new food sources to them, such as the carcasses of large animals.

“The whole landscape really shifts,” Braun said. “But hominins are able to basically ameliorate those rapid changes in the amount of rainfall and the vegetation around by using tools to adapt to what’s happening.”

“That's a human superpower—it’s that ability we have to keep this information stored in our collective heads, so that when new challenges show up, there's somebody in our group that remembers how to deal with this particular adaptation,” he added.

It’s not clear exactly which species of hominin made the tools at Namorotukunan; it may have been early members of our own genus Homo, or other relatives, like Australopithecus afarensis, that later went extinct. Regardless, the discovery of such a long-lived and continuous assemblage may hint that the origins of these tools are much older than we currently know.

“I think that we're going to start to find tool use much earlier” perhaps “going back five, six, or seven million years,” Braun said. “That’s total speculation. I've got no evidence that that's the case. But judging from what primates do, I don't really understand why we wouldn't see it.”

To that end, the researchers plan to continue excavating these bygone landscapes to search for more artifacts and hominin remains that could shed light on the identity of these tool makers, probing the origins of these early technologies that eventually led to humanity’s dominance on the planet.

“It's possible that this tool use is so diverse and so different from our expectations that we have blinders on,” Braun concluded. “We have to open our search for what tool use looks like, and then we might start to see that they're actually doing a lot more of it than we thought they were.”




Protecting Minors Online: Can Age Verification Truly Make the Internet Safer?


The drive to protect minors online has been gaining momentum in recent years and is now making its mark in global policy circles. This shift, strongly supported by public sentiment, has also reached the European Union.

In a recent development, Members of the European Parliament, as part of the Internal Market and Consumer Protection Committee, approved a report raising serious concerns about the shortcomings of major online platforms in safeguarding minors. With 32 votes in favour, the Committee highlighted growing worries over issues such as online addiction, mental health impacts, and children’s exposure to illegal or harmful digital content.

What Is In The Report


The report discusses the creation of frameworks and systems to support age verification and protect children’s rights and privacy online. This calls for a significant push to incorporate safety measures as an integral part of the system’s design, within a social responsibility framework, to make the internet a safe environment for minors.

MEPs have proposed sixteen as the minimum age for children to access social media, video-sharing platforms, and AI-based chat companions. Children under sixteen could access those platforms with parental permission. A further proposal, however, demands an absolute minimum age of thirteen, meaning that children under 13 could not access or use social media platforms at all, even with parental permission.

In Short:

  • Under 13 years of age: Not allowed on social media
  • 13-15 years of age: Allowed with parents’ approval
  • 16 years and above: Can use freely, no consent required
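The three age bands above reduce to a simple decision rule. A minimal sketch in Python, assuming the thresholds proposed in the report (the function name is hypothetical):

```python
# Illustrative access rule matching the report's proposed age bands.
def may_access_social_media(age: int, parental_consent: bool) -> bool:
    if age < 13:
        return False               # absolute minimum: no access, even with consent
    if age < 16:
        return parental_consent    # 13-15: only with parents' approval
    return True                    # 16 and above: free use, no consent required

print(may_access_social_media(12, True))   # False
print(may_access_social_media(14, True))   # True
print(may_access_social_media(17, False))  # True
```

In practice, of course, the hard part is not the rule itself but verifying the age claim reliably, which is exactly what the report's framework proposals address.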

MEPs recommended stricter actions against non-compliance with the Digital Services Act (DSA). Stricter actions range from holding the senior executives of the platforms responsible for breaches of security affecting minors to imposing huge fines.

The recommendations include banning addictive design features and engagement-driven algorithms, removing gambling-style elements in games, and ending the monetisation of minors as influencers. They also call for tighter control over AI tools that create fake or explicit content and stronger rules against manipulative chatbots.

What Do Reports And Research Say?


The convenience introduced by digital and technological advances over the last two decades has changed how the world works and communicates. The internet provides a level field for everyone to connect, learn, and make an impact. However, users’ privacy and the question of who accesses and controls their data remain constant points of contention. With a growing share of minor users globally, the risks have multiplied. Limited awareness of digital boundaries and the deceptive nature of the online environment make minors especially vulnerable. Exposure to inappropriate content, cyberbullying, financial scams, identity theft, and manipulation through social media or gaming platforms are just a few of the risks. Their curiosity to explore beyond boundaries often makes minors easy targets for online predators.

Recent EU-relevant studies report the following:

  • According to the Internet Watch Foundation’s Annual Data & Insights report for 2024 (published in 2025), record levels of child sexual abuse imagery were discovered in 2024; the IWF actioned 291,273 reports, and 62% of identified child sexual abuse webpages were hosted in EU countries.
  • WeProtect Global Alliance Global Threat Assessment 2023 (relevant to the EU) reported an 87% increase in child sexual abuse material since 2019. Rapid grooming on social gaming platforms and emerging threats from AI-generated sexual abuse material are the new patterns of online exploitation.
  • According to WHO/Europe HBSC Volume on Bullying & Peer Violence (2024), one in six school-aged children (around 15-16%) experienced cyberbullying in 2022, a rise from previous survey rounds.

These reports indicate the alarming situation regarding minors’ safety and reflect the urgency with which the Committee is advancing its recommendations. Voting is due on the 23rd-24th of November, 2025.

While these reports underline the scale of the threat, they also raise an important question: are current solutions, like age verification, truly effective?

How Foolproof Is Age Verification As A Measure?


The primary concern in promoting age verification as a defence against cybercrime is the authenticity of the verification processes themselves, and whether they are robust enough to stop unethical practices targeting users. For instance, if a user provides inaccurate information during age verification, are there any mechanisms in place to detect it?

Additionally, implementing age verification for children is next to impossible without infringing on adults’ rights to privacy and free speech, raising the question of who should have access to and control over users’ data – government bodies or big tech companies. Has preserving anonymity while providing data been given enough thought in drafting these policies? This remains a matter of concern.

According to EDRi, a leading European digital rights NGO, deploying age verification against the many forms of cybercrime targeting minors is not a new policy: social media platforms were reportedly made to adopt similar measures back in 2009. Yet the problem persists. Age verification as a countermeasure to cybercrime against minors is a superficial fix. Whether the Commission’s safety guidelines address the root cause of the problem – a toxic online environment – is the important question to answer.

EDRi’s key arguments:

  • Age verification is not a solution to problems of toxic platform design, such as addictive features and manipulative algorithms.
  • It restricts children’s rights to access information and express themselves, rather than empowering them.
  • It can exclude or discriminate against users without digital IDs or access to verification tools.
  • Lawmakers are focusing on exclusion instead of systemic reform — creating safer, fairer online spaces for everyone.
  • True protection lies in platform accountability and ethical design, not mass surveillance or one-size-fits-all age gates.


Read the complete article here:
https://edri.org/our-work/age-verification-gains-traction-eu-risks-failing-to-address-the-root-causes-of-online-harm/ | https://archive.ph/wip/LIMUI

Before putting any policy into practice, weighing the positive and negative effects on users is pivotal, because a blanket policy based on age brackets may prove ineffective at mitigating the risks of an unsafe online space. Educating and empowering both parents and children with digital literacy can have a more profound and meaningful impact than simply regulating age brackets. Change always comes with informed choices.



“Sulle droghe abbiamo un piano”: Possibile at the national counter-conference on drugs


Possibile is taking part in the national counter-conference “Sulle droghe abbiamo un piano” with Giulia Marro, Regional Councillor for Piedmont, and Domenico Sperone, councillor of the Municipality of Canale.

The counter-conference is taking place in Rome in parallel with the government conference that opened at the EUR.

It was promoted by the Rete nazionale per la riforma delle politiche sulle droghe after the government refused any dialogue with civil society and local authorities. The official conference remains anchored to a repressive, outdated model, still tied to the slogan “a world without drugs”, far removed from scientific knowledge and from the experience developed internationally.

The initiative proposes an alternative plan for drug policies, based on public health, human rights, and harm reduction, in line with UN recommendations and with practices already adopted in several countries.

On the first day, 6 November, speakers included experts and representatives of international networks, among them Susanna Ronconi (Forum Droghe), Saner Mahmood (UN Office of the High Commissioner for Human Rights), Marie Nougier (International Drug Policy Consortium), Adria Cots Fernández (Apertura Politiche Droghe), and Eligia Parodi (EuroPUD network, people who use drugs).

A clear message emerged: punitive policies do not reduce consumption or improve public health; they produce exclusion and stigma. A growing number of countries — including Portugal, Spain, and Switzerland — are instead following the path of decriminalisation and investment in harm-reduction services.

The proceedings were organised into three panels:
1. Policies and human rights, with an analysis of global changes and new UN resolutions;
2. Harm reduction as a comprehensive policy, with European and Latin American experiences integrating health, inclusion, and social justice;
3. Psychedelics for medical use, dedicated to freedom of research and innovative treatments.

The counter-conference also emphasised the role of cities and local administrations, which in many cases are the first institutional level capable of implementing concrete, rights-based policies.

For Possibile, this event represents a necessary political space for building effective and humane drug policies, grounded in health, scientific evidence, and respect for people’s dignity, definitively moving beyond the repressive and ideological approach that continues to dominate the national debate.

The article Sulle droghe abbiamo un piano: Possibile alla contro-conferenza nazionale sulle droghe comes from Possibile.



🎉 #ioleggoperché turns ten!
The social project of the Italian Publishers Association (AIE) for creating and strengthening school libraries runs this year from today until 16 November, with 4.2 million students involved, 29.


Time to enforce ICE restraining orders


Dear Friend of Press Freedom,

Rümeysa Öztürk has been facing deportation for 227 days for co-writing an op-ed the government didn’t like, and the government hasn’t stopped targeting journalists for deportation. Read on for news from Illinois, our latest public records lawsuit, and how you can take action to protect journalism.

Enforce ICE restraining orders now


A federal judge in Chicago yesterday entered an order to stop federal immigration officers from targeting journalists and peaceful protesters, affirming journalists’ right to cover protests and their aftermath without being assaulted or arrested.

Judge Sara Ellis entered her ruling — which extended a similar prior order against Immigration and Customs Enforcement — in dramatic fashion, quoting everyone from Chicago journalist and poet Carl Sandburg to the Founding Fathers. But the real question is whether she’ll enforce the order when the feds violate it, as they surely will. After all, they violated the prior order repeatedly and egregiously.

Federal judges can fine and jail people who violate their orders. But they rarely use those powers, especially against the government. That needs to change when state thugs are tearing up the First Amendment on Chicago’s streets. We suspect Sandburg would agree.

Journalist Raven Geary of Unraveled Press summed it up at a press conference after the hearing: “If people think a reporter can’t be this opinionated, let them think that. I know what’s right and what’s wrong. I don’t feel an ounce of shame saying that this is wrong.”

Congratulations to Geary and the rest of the journalists and press organizations in Chicago and Los Angeles that are standing against those wrongs by taking the government to court and winning. Listen to Geary’s remarks here.

Journalists speak out about abductions from Gaza aid flotillas


We partnered with Defending Rights & Dissent to platform three U.S. journalists who were abducted from humanitarian flotillas bound for Gaza and detained by Israel.

They discussed the inaction from their own government in the aftermath of their abduction, shared their experiences while detained, and reflected on what drove them to take this risk while so many reporters are self-censoring.

We’ll have a write-up of the event soon, but it deserves to be seen in full. Watch it here.

FPF takes ICE to court over dangerous secrecy


We filed yet another Freedom of Information Act lawsuit this week — this time to uncover records on ICE’s efforts to curtail congressional access to immigration facilities.

“ICE loves to demand our papers but it seems they don’t like it as much when we demand theirs,” attorney Ginger Quintero-McCall of Free Information Group said.

If you are a FOIA lawyer who is interested in working with us pro bono or for a reduced fee on FOIA litigation, please email lauren@freedom.press.

Read more about our latest lawsuit here.

If Big Tech can’t withstand jawboning, how can individual journalists?


Last week, Sen. Ted Cruz convened yet another congressional hearing on Biden-era “jawboning” of Big Tech companies. The message: Government officials leaning on these multibillion-dollar conglomerates to influence the views they platform was akin to censorship.

Sure, the Biden administration’s conduct is worth scrutinizing and learning from. But if you accept the premise that gigantic tech companies are susceptible to soft pressure from a censorial government, doesn’t it go without saying that so are individual journalists who lack anything close to those resources?

We wrote about the numerous instances of “jawboning” of individual reporters during the current administration that Senate Republicans failed to address at their hearing. Read more here.

Tell lawmakers from both parties to oppose Tim Burke prosecution


Conservatives are outraged at Tucker Carlson for throwing softballs to neo-Nazi Nick Fuentes. But the Trump administration is continuing its predecessor’s prosecution of journalist Tim Burke for exposing Tucker Carlson whitewashing another antisemite — Ye, formerly known as Kanye West.

Lawmakers shouldn’t stand for this hypocrisy, regardless of political party. Tell them to speak up with our action center.

What we’re reading


FBI investigating recent incident involving feds in Evanston, tries to block city from releasing records (Evanston RoundTable). Apparently obstructing transparency at the federal level is no longer enough and the government now wants to meddle with municipal police departments’ responses to public records requests.

To preserve records, Homeland Security now relies on officials to take screenshots (The New York Times). The new policy “drastically increases the likelihood the agency isn’t complying with the Federal Records Act,” FPF’s Lauren Harper told the Times.

When your local reporter needs the same protection as a war correspondent (Poynter). Foreign war correspondents get “hostile environment training, security consultants, trauma counselors and legal teams. … Local newsrooms covering militarized federal operations in their own communities? Sometimes all we have is Google, group chats and each other.”

YouTube quietly erased more than 700 videos documenting Israeli human rights violations (The Intercept). “It is outrageous that YouTube is furthering the Trump administration’s agenda to remove evidence of human rights violations and war crimes from public view,” said Katherine Gallagher of the Center for Constitutional Rights.

Plea to televise Charlie Kirk trial renews Senate talk of cameras in courtrooms (Courthouse News Service). It’s past time for cameras in courtrooms nationwide. None of the studies have ever substantiated whatever harms critics have claimed transparency would cause. Hopefully, the Kirk trial will make this a bipartisan issue.

When storytelling is called ‘terrorism’: How my friend and fellow journalist was targeted by ICE (The Barbed Wire). “The government is attempting to lay a foundation for dissenting political beliefs as grounds for terrorism. And people like Ya’akub — non-white [or] non-Christian — have been made its primary examples. Both journalists; like Mario Guevara … and civilians.”


freedom.press/issues/time-to-e…



If Big Tech can’t withstand jawboning, how can individual journalists?


Last week, Sen. Ted Cruz convened yet another congressional hearing on Biden-era “jawboning” of Big Tech companies. The message: Government officials leaning on these multibillion-dollar conglomerates to influence the views they platform was akin to censorship. Officials may not have formally ordered the companies to self-censor, but they didn’t have to – businesspeople know it’s in their economic interests to stay on the administration’s good side.

They’re not entirely wrong. Public officials are entitled to express their opinions about private speech, but it’s a different story when they lead speakers to believe they have no choice but to appease the government. At the same time the Biden administration was making asks of social platforms, the former president and other Democrats (and Republicans) pushed for repealing Section 230 of the Communications Decency Act, the law that allows social media to exist.

It’s unlikely that the Biden administration intended its rhetoric around Section 230 to intimidate social media platforms into censorship. That said, it’s certainly possible companies made content decisions they otherwise wouldn’t have when requested by a government looking to legislate them out of existence. It’s something worth exploring and learning from.

But if you accept the premise — as I do — that gigantic tech companies with billions in the bank and armies of lawyers are susceptible to soft pressure from a censorial government, doesn’t it go without saying that so are individual journalists who lack anything close to those resources?

If it’s jawboning when Biden officials suggest Facebook take down anti-vaccine posts, isn’t it “jawboning” when a North Carolina GOP official tells ProPublica to kill a story, touting connections to the Trump administration? When the president calls for reporters to be fired for doing basic journalism, like reporting on leaks? When the White House and Pentagon condition access on helping them further official narratives? A good-faith conversation about jawboning can’t just ignore all of that.

Here are some more incidents Cruz and his colleagues have not held hearings about:

  • A Department of Homeland Security official publicly accused a Chicago Tribune reporter of “interference” for the act of reporting where immigration enforcement was occurring. Journalism, in the government’s telling, constituted obstruction of justice. That certainly could lead others to tread cautiously when exercising their constitutional right to document law enforcement actions.
  • Director of National Intelligence Tulsi Gabbard attacked Washington Post reporter Ellen Nakashima by name, suggesting her reporting methods — which is to say, calling government officials — were improper and reflected a media establishment “desperate to sabotage POTUS’s successful agenda.” Might that dissuade reporters from seeking comment from sources, or sources from providing such comment to reporters?
  • When a journalist suggested people contact her on the encrypted messaging app Signal, an adviser to Defense Secretary Pete Hegseth said she should be banned from Pentagon coverage. The Pentagon then attempted to exclude her from Hegseth’s trip to Singapore. Putting aside the irony of Hegseth’s team taking issue with Signal usage, it’s fair to assume journalists are less likely to suggest sources lawfully contact them via secure technologies if doing so leads to government threats and retaliation.
  • Bill Essayli, a U.S. attorney in California, publicly called a reporter “a joke, not a journalist” for commenting on law enforcement policies for shooting at moving vehicles. Obviously, remarks from prosecutors carry unique weight and have significant potential to chill speech, particularly when prosecutors make clear that they don’t view a journalist as worthy of the First Amendment’s protections for their profession.


Sources wanting to expose wrongdoing ... will think twice about talking to journalists who are known targets of an out-of-control administration.

There are plenty more examples — and that doesn’t even get into all the targeting of news outlets, from major broadcast networks to community radio stations. They may have more resources than individual reporters, but they’re nowhere near as well positioned to withstand a major spike in legal bills and insurance premiums as big social media firms (who this administration also jawbones to censor constitutionally protected content).

And hovering over all of this is President Donald Trump himself, whose social media feed doubles as an intimidation campaign against reporters. Our Trump Anti-Press Social Media Tracker has documented over 3,500 posts targeting not only news outlets but individual journalists. Unlike Biden-era “jawboning,” threats like these come from the very top — from people in a position to actually carry them out. And unlike Biden’s administration, Trump’s track record makes the threat of government retribution real, not hypothetical.

Trump views excessive criticism of him as “probably illegal.” He has made very clear his desire for journalists to be imprisoned, sued for billions, and assaulted for reasons completely untethered to the Constitution, and has surrounded himself with bootlicking stooges eager to carry out his whims. “Chilling” is an understatement for the effect when a sitting president — particularly an authoritarian one — threatens journalists for doing their job.

It’s not only that these journalists don’t have the resources of Meta, Alphabet, and the like. They also have much more to lose. Tech companies might get some bad PR based on how they handle government takedown requests, but it’s unlikely to significantly impact their bottom line, particularly when news content comprises a small fraction of their business.

But journalists don’t just host news content, they create it. Their whole careers depend on their reputations and the willingness of sources to trust them. Sources wanting to expose wrongdoing, who often talk to journalists at great personal risk and try to keep a low profile, will think twice about talking to journalists who are known targets of an out-of-control administration.

Other news outlets might be reluctant to hire someone who has been singled out by the world’s most powerful person and his lackeys. Editors and publishers — already spooked about publishing articles that might draw a SLAPP suit or worse from Trump — will be doubly hesitant when the article is written by someone already on the administration’s public blacklist.

Unlike Biden’s antics, the Trump administration has cut out the middleman by directly targeting the speech and speakers it doesn’t like. And it wields this power against people with a fraction of the resources to fight back. If that’s not jawboning, what is?


freedom.press/issues/if-big-te…




Thousands of flights delayed as FAA cuts snarl major airports
FAA-mandated flight cancellations will rise to 10% by November 14.

  • More than 5,000 flights were delayed and 1,100 cancelled as reductions took effect on Friday at 40 high-traffic airports, in what officials describe as an attempt to ease the pressure of the record government shutdown.
  • The FAA-mandated cancellations amount to a 4% reduction this weekend, rising to 6% by November 11, 8% by November 13, and 10% by November 14.
  • Transportation Secretary Sean Duffy said today that the end of the government shutdown will not bring air traffic controllers back immediately, because it will take time for everyone to return to work.

nbcnews.com/news/us-news/live-…

@Politica interna, europea e internazionale




in reply to Max - Poliverso 🇪🇺🇮🇹

@max @News
It's the only way a failure like him could ever make money.... very advantageous to know in advance how stocks will perform on the market.
@News








“Tre ciotole” with Alba Rohrwacher (and other reviews)


@Giornalismo e disordine informativo
articolo21.org/2025/11/tre-cio…
“Tre ciotole”, by Isabel Coixet, Italy-Spain, 2025. With Alba Rohrwacher and Elio Germano. Based on the book of the same name by Michela Murgia, the recently deceased Italian writer, “Tre ciotole”, by the Spanish director Isabel



Post-war reconstruction and Euro-Atlantic cohesion. Perspectives from the Defense and Security Days

@Notizie dall'Italia e dal mondo

In light of the war in Ukraine and the ongoing transformations in Europe’s security architecture, the De Gasperi Foundation has once again held the Defense and Security Days in Rome, a day of international discussion dedicated to security challenges, to cohesion



SìSepara: the referendum committee for a Yes to the Separation of Careers is launched

@Politica interna, europea e internazionale

Wednesday 12 November 2025, 11:30 a.m. – Press Room of the Chamber of Deputies. Introductory remarks by Enrico Costa. Speakers: Giuseppe Benedetto, Gian Domenico Caiazza, Andrea Cangini, Antonio Di Pietro. During the press conference



I have a WordPress blog. Does anyone know why, when I share one of its posts here, the preview shows neither the image nor the title of the post, only the URL?

E.g.:


orizzontisfocati.it/2025/06/05…

#wordpress

in reply to EugenioLiberoBocca

@EugenioLiberoBocca

In this post I don't even see the title, only the URL.

In my previous post about Friday's strikes, the blog name and the title are visible, but only because I wrote them myself, manually, in the post.



Since here we go again and, ahead of the general strike of 12 December, someone has already tried to muddy the waters — shifting attention away from the problems of healthcare, of a tax system that squeezes employees and pensioners while rewarding tax evaders, of schools falling apart, of ever-growing poverty, and so on, toward the problem of which day of the week was chosen for the strike — I am reposting something I wrote a while ago in which I try to explain why Friday is a good day to strike.

To be clear, I don't expect that those who, faced with the enormous problems put on the table by Italy's largest trade union, fiddle around with the days of the week will have any interest in reading it, but maybe someone else will.

orizzontisfocati.it/2025/06/05…



Here is how Meta gets rich from scam ads

The article comes from #StartMag and is reshared on the Lemmy community @Informatica (Italy e non Italy 😁)
Internal documents reviewed by Reuters reveal that Meta allegedly took in billions from ads tied to scams and banned products while slowing down enforcement so as not to hurt profits. Facts, figures and

reshared this




SUMMARY OF THE WEEK'S NONSENSE

1- Teachers' contracts renewed: doing the maths, on average we will see no more than an extra 40 euros a month in our pay packets, net.
2- Brunetta, meanwhile, gives himself a raise of 5,000 euros a month, going from 250,000 euros a year to 310,000 euros a year.
3- The teacher's card will arrive in the second term, and only if we behaved well in the first; in the meantime, if we need books, tablets, PCs, or courses, we pay for them out of our own pockets.
4- To retire, we will have to work three months longer; it seems they really are abolishing the Fornero reform — by making it worse.
5- The budget law provides for savings on schools of at least 600 million euros, useful for buying weapons.
6- New York elects a Muslim mayor who knows how to speak to citizens: panic among right-wingers, a security risk. That would be like saying I'm dangerous because I come from the same region as Cuffaro.
7- Cuffaro is arrested over rigged public contracts. Hard to understand how he managed it — such an honest, selfless person, and a good administrator too.
8- The main problem with strikes, it seems, is not the reason for striking but the fact that they are held on Fridays to get a long weekend at one's own expense, while members of parliament long ago launched the ultra-short week, going home on Thursdays at the State's expense.

Prof Salvo Amato.

Informa Pirata reshared this.



Why “AGI” will shake OpenAI and Microsoft

The article comes from #StartMag and is reshared on the Lemmy community @Informatica (Italy e non Italy 😁)
The definition and timing of reaching artificial general intelligence could end up contested in court: if OpenAI were to declare AGI, or if the expert panel were to verify it, the financial and control repercussions would be immense.



"Fascism and AI, whether or not they have the same goals, they sure are working to accelerate one another."#AI #libraries


AI Is Supercharging the War on Libraries, Education, and Human Knowledge


This story was reported with support from the MuckRock Foundation.

Last month, a company called the Children’s Literature Comprehensive Database announced a new version of a product called Class-Shelf Plus. The software, which is used by school libraries to keep track of which books are in their catalog, added several new features including “AI-driven automation and contextual risk analysis,” which includes an AI-powered “sensitive material marker” and a “traffic-light risk ratings” system. The company says that it believes this software will streamline the arduous task school libraries face when trying to comply with legislation that bans certain books and curricula: “Districts using Class-Shelf Plus v3 may reduce manual review workloads by more than 80%, empowering media specialists and administrators to devote more time to instructional priorities rather than compliance checks,” it said in a press release.

In a white paper published by CLCD, the company gave a “real-world example: the role of CLCD in overcoming a book ban.” The paper then describes something that does not sound like “overcoming” a book ban at all: CLCD’s software simply suggested other books “without the contested content.”

Ajay Gupte, the president of CLCD, told 404 Media the software is simply being piloted at the moment, but that it “allows districts to make the majority of their classroom collections publicly visible—supporting transparency and access—while helping them identify a small subset of titles that might require review under state guidelines.” He added that “This process is designed to assist districts in meeting legislative requirements and protect teachers and librarians from accusations of bias or non-compliance [...] It is purpose-built to help educators defend their collections with clear, data-driven evidence rather than subjective opinion.”

Librarians told 404 Media that AI library software like this is just the tip of the iceberg; they are being inundated with new pitches for AI library tech and catalogs are being flooded with AI slop books that they need to wade through. But more broadly, AI maximalism across society is supercharging the ideological war on libraries, schools, government workers, and academics.

CLCD and Class Shelf Plus is a small but instructive example of something that librarians and educators have been telling me: The boosting of artificial intelligence by big technology firms, big financial firms, and government agencies is not separate from book bans, educational censorship efforts, and the war on education, libraries, and government workers being pushed by groups like the Heritage Foundation and any number of MAGA groups across the United States. This long-running war on knowledge and expertise has sown the ground for the narratives widely used by AI companies and the CEOs adopting it. Human labor, inquiry, creativity, and expertise is spurned in the name of “efficiency.” With AI, there is no need for human expertise because anything can be learned, approximated, or created in seconds. And with AI, there is less room for nuance in things like classifying or tagging books to comply with laws; an LLM or a machine algorithm can decide whether content is “sensitive.”

“I see something like this, and it’s presented as very value neutral, like, ‘Here’s something that is going to make life easier for you because you have all these books you need to review,’” Jaime Taylor, discovery & resource management systems coordinator for the W.E.B. Du Bois Library at the University of Massachusetts told me in a phone call. “And I look at this and immediately I am seeing a tool that’s going to be used for censorship because this large language model is ingesting all the titles you have, evaluating them somehow, and then it might spit out an inaccurate evaluation. Or it might spit out an accurate evaluation and then a strapped-for-time librarian or teacher will take whatever it spits out and weed their collections based on it. It’s going to be used to remove books from collections that are about queerness or sexuality or race or history. But institutions are going to buy this product because they have a mandate from state legislatures to do this, or maybe they want to do this, right?”

The resurgent war on knowledge, academics, expertise, and critical thinking that AI is currently supercharging has its roots in the hugely successful recent war on “critical race theory,” “diversity equity and inclusion,” and LGBTQ+ rights that painted librarians, teachers, scientists, and public workers as untrustworthy. This has played out across the board, with a seemingly endless number of ways in which the AI boom directly intersects with the right’s war on libraries, schools, academics, and government workers. There are DOGE’s mass layoffs of “woke” government workers, and the plan to replace them with AI agents and supposed AI-powered efficiencies. There are “parents rights” groups that pushed to ban books and curricula that deal with the teaching of slavery, systemic racism, and LGBTQ+ issues and attempted to replace them with homogenous curricula and “approved” books that teach one specific type of American history and American values; and there are the AI tools that have been altered to not be “woke” and to reenforce the types of things the administration wants you to think. Many teachers feel they are not allowed to teach about slavery or racism and increasingly spend their days grading student essays that were actually written by robots.

“One thing that I try to make clear any time I talk about book bans is that it’s not about the books, it’s about deputizing bigots to do the ugly work of defunding all of our public institutions of learning,” Maggie Tokuda-Hall, a cofounder of Authors Against Book Bans, told me. “The current proliferation of AI that we see particularly in the library and education spaces would not be possible at the speed and scale that is happening without the precedent of book bans leading into it. They are very comfortable bedfellows because once you have created a culture in which all expertise is denigrated and removed from the equation and considered nonessential, you create the circumstances in which AI can flourish.”

Justin, a cohost of the podcast librarypunk, told me that offloading cognitive capacity to AI is “part of a fascist project to offload the work of thinking, especially the reflective kind of thinking that reading, study, and community engagement provide,” he said. “That kind of thinking cultivates empathy and challenges your assumptions. It’s also something you have to practice. If we can offload that cognitive work, it’s far too easy to become reflexive and hateful, while having a robot cheerleader telling you that you were right about everything all along.”

These two forces—the war on libraries, classrooms, and academics and AI boosterism—are not working in a vacuum. The Heritage Foundation’s right-wing agenda for remaking the federal government, Project 2025, talks about criminalizing teachers and librarians who “poison our own children” and pushing artificial intelligence into every corner of the government for data analysis and “waste, fraud, and abuse” detection.

Librarians, teachers, and government workers have had to spend an increasing amount of their time and emotional bandwidth defending the work that they do, fighting against censorship efforts and dealing with the associated stress, harassment, and threats that come from fighting educational censorship. Meanwhile, they are separately dealing with an onslaught of AI slop and the top-down mandated AI-ification of their jobs; there are simply fewer and fewer hours to do what they actually want to be doing, which is helping patrons and students.

“The last five years of library work, of public service work has been a nightmare, with ongoing harassment and censorship efforts that you’re either experiencing directly or that you’re hearing from your other colleagues,” Alison Macrina, executive director of Library Freedom Project, told me in a phone interview. “And then in the last year-and-a-half or so, you add to it this enormous push for the AIfication of your library, and the enormous demands on your time. Now you have these already overworked public servants who are being expected to do even more because there’s an expectation to use AI, or that AI will do it for you. But they’re dealing with things like the influx of AI-generated books and other materials that are being pushed by vendors.”

The future being pushed by both AI boosters and educational censors is one where access to information is tightly controlled. Children will not be allowed to read certain books or learn certain narratives. “Research” will be performed only through one of a select few artificial intelligence tools owned by AI giants which are uniformly aligned behind the Trump administration and which have gone to the ends of the earth to prevent their black box machines from spitting out “woke” answers lest they catch the ire of the administration. School boards and library boards, forced to comply with increasingly restrictive laws, funding cuts, and the threat of being defunded entirely, leap at the chance to be considered forward-looking by embracing AI tools, or apply for grants from government groups like the Institute of Museum and Library Services (IMLS), which is increasingly giving out grants specifically to AI projects.

We previously reported that the ebook service Hoopla, used by many libraries, has been flooded with AI-generated books (the company has said it is trying to cull these from its catalog). In a recent survey of librarians, Macrina’s organization found that librarians are getting inundated with pitches from AI companies and are being pushed by their superiors to adopt AI: “People in the survey results kept talking about, like, I get 10 aggressive, pushy emails a day from vendors demanding that I implement their new AI product or try it, jump on a call. I mean, the burdens have become so much, I don’t even know how to summarize them.”

Macrina said that in response to Library Freedom Project’s recent survey, librarians said that misinformation and disinformation were their biggest concerns. This came not just in the form of book bans and censorship but also in efforts to proactively put disinformation and right-wing talking points into libraries: “It’s not just about book bans, and library board takeovers, and the existing reactionary attacks on libraries. It’s also the effort to push more far-right material into libraries,” she said. “And then you have librarians who are experiencing a real existential crisis because they are getting asked by their jobs to promote [AI] tools that produce more misinformation. It’s the most, like, emperor-has-no-clothes-type situation that I have ever witnessed.”

Each person I spoke to for this article told me they could talk about the right-wing project to erode trust in expertise, and the way AI has amplified this effort, for hours. In writing this article, I realized that I could endlessly tie much of our reporting on attacks on civil society and human knowledge to the force multiplier that is AI and the AI maximalist political and economic project. One need look no further than Grokipedia as one of the many recent reminders of this effort—a project by the world’s richest man and perhaps its most powerful right-wing political figure to replace a crowdsourced, meticulously edited fount of human knowledge with a robotic imitation built to further his political project.

Much of what we write about touches on this: The plan to replace government workers with AI, the general erosion of truth on social media, the rise of AI slop that “feels” true because it reinforces a particular political narrative but is not true, the fact that teachers feel like they are forced to allow their students to use AI. Justin, from librarypunk, said AI has given people “absolute impunity to ignore reality […] AI is a direct attack on the way we verify information: AI both creates fake sources and obscures its actual sources.”

That is the opposite of what librarians do, and teachers do, and scientists do, and experts do. But the political project to devalue the work these professionals do, and the incredible amount of money invested in pushing AI as a replacement for that human expertise, have worked in tandem to create a horrible situation for all of us.

“AI is an agreement machine, which is anathema to learning and critical thinking,” Tokuda-Hall said. “Previously we have had experts like librarians and teachers to help them do these things, but they have been hamstrung and they’ve been attacked and kneecapped and we’ve created a culture in which their contribution is completely erased from society, which makes something like AI seem really appealing. It’s filling that vacuum.”

“Fascism and AI, whether or not they have the same goals, they sure are working to accelerate one another,” she added.




The FBI has subpoenaed the domain registrar of archive.today, demanding information about the owner.


FBI Tries to Unmask Owner of Infamous Archive.is Site


The FBI is attempting to unmask the owner of archive.today, a popular archiving site that is also regularly used to bypass paywalls on the internet and to avoid sending traffic to the original publishers of web content, according to a subpoena posted by the website. The FBI subpoena says it is part of a criminal investigation, though it does not provide any details about what alleged crime is being investigated. Archive.today is also widely known through its mirrors, including archive.is and archive.ph.
