


“Seek out, Mother, those who have strayed from the holy Church: may your gaze reach them where ours cannot, tear down the walls that divide us and bring them home with the strength of your love”.


“Mother, teach the nations that wish to be your daughters not to divide the world into irreconcilable factions, not to allow hatred to mark their history nor lies to write their memory”.


Weird Email Appliance Becomes AI Terminal


The Landel Mailbug was a weird little thing. It combined a keyboard and a simple text display, and was intended to be a low-distraction method for checking your email. [CiferTech] decided to repurpose it, though, turning it into an AI console instead.

The first job was to crack the device open and figure out how to interface with the keyboard. The design was conventional, so reading the rows and columns of the key matrix was a cinch. [CiferTech] used PCF8574 IO expanders to make it easy to read the matrix with an ESP32 microcontroller over I2C. The ESP32 is paired with a small audio output module to allow it to run a text-to-speech system, and a character display to replace the original from the Mailbug itself. It uses its WiFi connection to query the ChatGPT API. Thus, when the user enters a query, the ESP32 runs it by ChatGPT, and then displays the output on the screen while also speaking it aloud.
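Matrix scanning of this kind can be sketched in a few lines. This is illustrative Python, not [CiferTech]'s actual ESP32 firmware, and the 4x4 layout is hypothetical; the idea is simply to drive one column low at a time through the expander's output port, read the row port, and see which rows follow it low:

```python
# Illustrative sketch of key-matrix scanning through an I/O expander
# such as the PCF8574. Hardware access is simulated here; on real
# hardware write_cols/read_rows would be I2C transfers.

# Hypothetical 4x4 keypad layout for demonstration.
KEYMAP = [
    ["1", "2", "3", "A"],
    ["4", "5", "6", "B"],
    ["7", "8", "9", "C"],
    ["*", "0", "#", "D"],
]

def scan_matrix(write_cols, read_rows, n_cols=4, n_rows=4):
    """Drive one column low at a time and record which rows follow it low."""
    pressed = []
    for col in range(n_cols):
        write_cols(~(1 << col) & 0xFF)   # all columns high except this one
        rows = read_rows()               # active-low row byte
        for row in range(n_rows):
            if not rows & (1 << row):    # row pulled low -> key closed
                pressed.append(KEYMAP[row][col])
    return pressed

# Simulated hardware: the "5" key (row 1, column 1) is held down.
state = {"col": 0xFF}
def write_cols(value):
    state["col"] = value
def read_rows():
    # Row 1 reads low only while column 1 is driven low.
    return 0xFF if state["col"] & (1 << 1) else ~(1 << 1) & 0xFF

print(scan_matrix(write_cols, read_rows))  # ['5']
```

The active-low convention matches how the PCF8574's quasi-bidirectional pins are usually wired for keypads: pins are written high to act as weak inputs, and a pressed key pulls a row down to the one column being driven low.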

[CiferTech] notes the build was inspired by AI terminals in retro movies, though we’re not sure what specifically it might be referencing. In any case, it does look retro and it does let you speak to a computer being, of a sort, so the job has been done. Overall, though, the project shows that you can create something clean and functional just by reusing and interfacing a well-built commercial product.

youtube.com/embed/pRIfY21PpyI?…


hackaday.com/2025/12/12/weird-…



Amnesty lands on the Dark Web: why it has opened its .onion site


Amnesty International has activated its own site accessible via a .onion domain on the Tor network, offering a new secure channel for consulting the organization’s information and research. The initiative, officially launched in December 2023, stems from the need to guarantee access to its content even in countries where the main site is blocked or heavily monitored.

The decision comes in a global context marked by growing digital restrictions. In states such as Russia, Iran and China, Amnesty International’s entire portal is blocked, preventing citizens from freely informing themselves about human rights violations. In several other regions, browsing is exposed to government surveillance, with direct risks for activists, journalists and dissidents.

Tor, short for The Onion Router, is a fundamental tool for circumventing these limitations. The network uses a series of relays run by volunteers and applies multiple layers of encryption, making it extremely difficult to trace the user’s IP address. This system allows a higher level of anonymity than ordinary browsing.

In ordinary browsers, access to a site happens through DNS lookups and direct connections to the server, a process that exposes users to tracking of their IP address. This element, comparable to the return address on a letter, can be used to map a person’s digital activities without their knowledge.

The Tor browser, by contrast, forwards data through a chain of distributed nodes, masking the true origin of the traffic. When accessing a .onion domain, the communication never leaves the Tor network and benefits from end-to-end encryption, further reducing the risk of interception or identification.
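The layering can be illustrated with a toy model. This is a deliberately simplified Python sketch: a repeating-key XOR stands in for each relay's cipher, whereas real Tor negotiates per-hop keys through circuit handshakes and uses real cryptography.

```python
# Toy illustration of onion routing's layered encryption.
# NOT real Tor cryptography: XOR is a stand-in for each relay's cipher.
def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Hypothetical per-relay keys for a three-hop circuit.
relay_keys = [b"guard-key", b"middle-key", b"exit-key"]

def wrap(message: bytes) -> bytes:
    # The client encrypts for the exit first, then middle, then guard,
    # so each relay can remove exactly one layer.
    for key in reversed(relay_keys):
        message = xor(message, key)
    return message

def route(cell: bytes) -> bytes:
    # Each relay peels its own layer; only the exit sees the payload,
    # and no single relay knows both the source and the destination.
    for key in relay_keys:
        cell = xor(cell, key)
    return cell

cell = wrap(b"GET /index.html")
assert cell != b"GET /index.html"      # opaque on the wire
assert route(cell) == b"GET /index.html"
```

The point of the nesting is that the guard relay sees who you are but not where the traffic is going, and the exit sees the destination but not who sent it.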

Amnesty International chose this infrastructure precisely to protect users who consult sensitive material related to complaints, investigations and human rights campaigns. The goal is to allow access to independent information even in authoritarian contexts, without forcing those who browse to expose their digital identity.

The need for tools like Tor became even more evident after investigations such as the 2021 Pegasus Project, in which Amnesty documented the use of NSO Group’s spyware to monitor up to 50,000 mobile devices. Technologies of this kind, used by governments of various nationalities, have targeted activists, lawyers, journalists and political opponents.

In an era marked by advanced surveillance and targeted censorship, the opening of Amnesty International’s .onion site is part of a broader strategy to defend digital freedom. Providing secure access to content is a concrete step toward letting people inform themselves without putting their privacy or personal safety at risk.

The article Amnesty lands on the Dark Web: why it has opened its .onion site comes from Red Hot Cyber.





By way of inquiry, to get an idea, Trump could meanwhile ask all US citizens in Texas whether, to avoid a war with Russia, they would be willing to go live in another state (at their own expense) and hand the keys of their homes over to Putin... who knows what they would answer.


“I thought we were very close to a deal with Russia, and I thought we were close to a deal with Ukraine. Beyond President Zelensky, his people appreciate the idea of the deal”

Trump doesn't even know what the opposing political side in the United States thinks (and even his own is debatable), and he doesn't represent all US citizens. Let alone knowing better than Zelensky what Ukrainians think. It's improbable. At most he knows what that 1 in 100,000 thinks who, by the law of large numbers, are already die-hard fascists...



Data, networks and deterrence. What emerges from the Cyber Eagle exercise

@Notizie dall'Italia e dal mondo

One of Italy's main exercises dedicated to digital defense has come to a close, an event that offers a concrete snapshot of how the cyber domain is now an integral part of national security. The Italian Air Force presented the results of



BOLIVIA. Former president Luis Arce arrested, fears for Evo Morales


@Notizie dall'Italia e dal mondo
In Bolivia, prosecutors have arrested the center-left former president Luis Arce for embezzlement of public funds. Evo Morales' supporters fear that the next to be arrested will be the former leader of the Movement for Socialism
The article BOLIVIA. Former president Luis Arce arrested



This Week in Security: Hornet, Gogs, and Blinkenlights


Microsoft has published a patch-set for the Linux kernel, proposing the Hornet Linux Security Module (LSM). If you haven’t been keeping up with the kernel contributor scoreboard, Microsoft is #11 at time of writing and that might surprise you. The reality is that Microsoft’s biggest source of revenue is their cloud offering, and Azure is over half Linux, so Microsoft really is incentivized to make Linux better.

The Hornet LSM is all about more secure eBPF programs, which requires another aside: What is eBPF? First implemented as the Berkeley Packet Filter, it’s a virtual machine in the kernel that allows executing programs in kernel space. It was quickly realized that this ability to run a script in kernel space was useful for far more than just filtering packets, and the extended Berkeley Packet Filter was born. eBPF is now used for load balancing, system auditing, security and intrusion detection, and lots more.
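The original packet-filter idea can be sketched as a tiny register machine. This is a deliberately toy Python model with a made-up two-instruction ISA; real eBPF has eleven 64-bit registers, maps, helper calls, and a verifier that checks programs before they run:

```python
# Toy "packet filter VM" in the spirit of classic BPF (vastly simplified).
# A program is a list of (opcode, argument) pairs run against a packet.
def run_filter(program, packet: bytes) -> bool:
    acc = 0
    for op, arg in program:
        if op == "LD":           # load the byte at offset `arg`
            acc = packet[arg]
        elif op == "JEQ_RET":    # accept the packet iff acc == arg
            return acc == arg
    return False                 # default: drop

# "Accept only packets whose first byte is 0x45" -- the typical first
# byte of an IPv4 header (version 4, header length 5).
prog = [("LD", 0), ("JEQ_RET", 0x45)]
assert run_filter(prog, bytes([0x45, 0x00])) is True
assert run_filter(prog, bytes([0x60, 0x00])) is False
```

The kernel runs the filter on every packet without a context switch to user space, which is exactly the property that later made eBPF attractive for tracing, auditing, and security tooling as well.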

This unique ability to load scripts from user space into kernel space has made eBPF useful for malware and spyware applications, too. There is already a signature scheme to restrict eBPF programs, but Hornet allows for stricter checks and auditing. The patch is considered a Request For Comments (RFC), and points out that this existing protection may be subject to Time Of Check / Time Of Use (TOCTOU) attacks. It remains to be seen whether Hornet passes muster and lands in the upstream kernel.

Patch Tuesday


Linux obviously isn’t the only ongoing concern for Microsoft, and it’s the time of month to talk about Patch Tuesday. There are 57 fixes that are considered vulnerabilities, plus additional changes that are classified internally as plain bug fixes. Three of those vulnerabilities were publicly known before the fix, and one was known to be actively exploited in the wild.

CVE-2025-62221 was an escalation-of-privilege flaw in the Windows Cloud Files Mini Filter Driver. In Windows, a minifilter is a kernel driver that attaches to the file system stack to monitor or modify file operations. This flaw was a use-after-free that allowed a less-privileged attacker to gain SYSTEM privileges.

Gogs


Researchers at Wiz found an active exploitation campaign that uses CVE-2025-8110, a previously unknown vulnerability in Gogs. The Go Git Service, hence the name, is a self-hosted GitHub/GitLab alternative written in Go. It’s reasonably popular, with 1,400 instances exposed to the Internet.

The vulnerability was a bypass of CVE-2024-55947, a path traversal vulnerability that allowed a malicious user to upload files to arbitrary locations. That was fixed with Gogs 0.13.1, but the fix failed to account for symbolic links (symlinks). Namely, as far as the git protocol is concerned, symlinks are completely legal. The path traversal checking doesn’t check for symlinks during normal git access, so a symlink pointing outside the repository can easily be created. And then the HTTPS file API can be used to upload a file to that symlink, again allowing for arbitrary writes.
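The class of bug is easy to demonstrate. The following is a hedged Python sketch, not Gogs' actual Go code: a lexical traversal check approves a path that, once symlinks are resolved, lands outside the repository.

```python
# Sketch of a symlink bypass of a lexical path-traversal check
# (illustrative only; not the actual Gogs implementation).
import os
import tempfile

def naive_check(repo: str, rel_path: str) -> bool:
    # Rejects "../" escapes textually, but never resolves symlinks.
    target = os.path.normpath(os.path.join(repo, rel_path))
    return target.startswith(repo + os.sep)

def symlink_aware_check(repo: str, rel_path: str) -> bool:
    # Resolving symlinks first catches the escape.
    target = os.path.realpath(os.path.join(repo, rel_path))
    return target.startswith(os.path.realpath(repo) + os.sep)

base = tempfile.mkdtemp()
repo = os.path.join(base, "repo")
os.makedirs(repo)
# A perfectly legal git artifact: a symlink pointing outside the repo.
os.symlink(base, os.path.join(repo, "escape"))

path = "escape/.ssh/authorized_keys"
print(naive_check(repo, path))          # True  -- upload would be allowed
print(symlink_aware_check(repo, path))  # False -- write lands outside the repo
```

The naive check sees a path that lexically stays inside the repository; only resolving the symlink reveals that a write through it would land in an arbitrary location, which is the essence of the bypass described above.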

The active exploitation of this vulnerability is particularly widespread. Of the 1,400 Gogs instances on the Internet, over 700 show signs of compromise, in the form of new repositories with randomized names. It’s possible that even more instances have been compromised, with the signs covered up. The attack added a symlink to .git/config, then overwrote that file with a new config that defines the sshCommand setting. After exploitation, Supershell malware was installed, establishing ongoing remote control.

The most troubling element of this story is that the vulnerability was first discovered in the wild back in July and was reported to the Gogs project at that time. As of December 11, the vulnerability has not been fixed or acknowledged. After five months of exploitation without a patch, it seems time to acknowledge that Gogs is effectively unmaintained. There are a couple of active forks that don’t seem to be vulnerable to this attack; time to migrate.

Blinkenlights


There’s an old story I always considered apocryphal, that data could be extracted from the blinking lights of network equipment, leading a few ISPs to boast that they covered all their LEDs with tape for security. While there may have been a bit of truth to that idea, it definitely served as inspiration for [Damien Cauquil] at Quarkslab, reverse engineering a very cheap smart watch.

The watches were €11.99 last Christmas, and a price point that cheap tickles the curiosity of nearly any hacker. What’s on the inside? What does the firmware look like? The microcontroller is a fairly obscure JieLi part, with no good way to pull the firmware back off. With no leads there, [Damien] turned to the Android app and the Bluetooth Low Energy connection. One of the functions of the app is uploading custom watch dials. Which of course had to be tested by creating a custom watch face featuring a certain Rick Astley.

But those custom watch faces have a quirk. The format internally uses byte offsets, and the watch doesn’t check whether an offset runs out of bounds. A ridiculous scheme was concocted to abuse this memory leak to push firmware bytes out as pixel data. It took a Raspberry Pi Pico sniffing the SPI bus to actually recover those bytes, but it worked! Quite the epic hack.
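The underlying bug class can be simulated in a few lines. The format and memory layout below are hypothetical, not the real JieLi ones; the point is that a parser which trusts a file-supplied offset will happily return adjacent memory as "pixels".

```python
# Simulation of an out-of-bounds read through a trusted file offset
# (hypothetical watch-face format, not the actual JieLi firmware).
device_memory = bytearray(b"FACE-DATA-------" + b"SECRET-FIRMWARE!")
FACE_BASE, FACE_SIZE = 0, 16   # the face blob occupies the first 16 bytes

def read_pixels(offset: int, length: int) -> bytes:
    # BUG: no check that offset + length stays within the face region.
    start = FACE_BASE + offset
    return bytes(device_memory[start:start + length])

# A well-formed face reads its own bytes...
assert read_pixels(0, 4) == b"FACE"
# ...but an out-of-range offset leaks adjacent "firmware" as pixel data.
leak = read_pixels(FACE_SIZE, 16)
print(leak)  # b'SECRET-FIRMWARE!'
```

Repeat that read with a sliding offset and you can walk out the contents of memory one "watch face" at a time, which is broadly the shape of the exfiltration described above.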

Bits and Bytes


Libpng has an out-of-bounds read vulnerability that was just fixed in 1.6.52. What’s weird about this one is that it can be triggered by completely legitimate PNG images. The good news is that the vulnerability only affects the simplified API, so not every user of libpng is in the blast radius.

And finally, Google has pushed out an out-of-band update to Chrome, fixing a vulnerability that is being exploited in the wild. The Hacker News managed to connect the bug ID to a pull request in the ANGLE library, a translation layer that converts OpenGL ES calls into Direct3D, Vulkan, and Metal. The details there suggest the flaw is limited to the macOS platform, as the fix is in the Metal renderer. Regardless, time to update!


hackaday.com/2025/12/12/this-w…



Linux Foundation creates the AAIF: the new control room of global Artificial Intelligence?


The creation of the Agentic AI Foundation (AAIF), a dedicated fund under the umbrella of the Linux Foundation, was jointly announced by several companies that dominate the technology and artificial intelligence fields.

With the creation of the AAIF, Anthropic announced the donation of the MCP protocol to the Linux Foundation, a non-profit organization committed to promoting sustainable open source ecosystems through neutral governance, community development and shared infrastructure. The AAIF will operate as a directed fund within the Linux Foundation.

Founding members include Anthropic, OpenAI and Block, with further support from Google, Microsoft, AWS, Cloudflare, Docker and Bloomberg.

Originally conceived by Anthropic, the MCP protocol was designed to let different applications communicate with one another and to allow data to be extracted from them.

Immediately after its release, OpenAI and Google adopted it without hesitation. Since then, its development has accelerated rapidly, enabling the harmonious management of a wide range of tools while also reducing latency in agents' complex workflows.

The foundation's initial portfolio also includes OpenAI's AGENTS.md and Block's Goose project, both donated to the AAIF.

The foundation will oversee their future operation and coordination, ensuring that these initiatives remain aligned with the principles of technical neutrality, openness and community stewardship, thereby promoting innovation across the entire AI ecosystem.

Anthropic stressed that this transition will not change the MCP protocol's governance model. The project maintainers will continue to prioritize community feedback and transparent decision-making, underscoring Anthropic's commitment to preserving MCP as an open, vendor-independent standard.

The article Linux Foundation creates the AAIF: the new control room of global Artificial Intelligence? comes from Red Hot Cyber.





#ScuolaFutura, from December 12 to 15, the traveling campus on teaching innovation promoted by the #MIM will stop in #Sanremo with national orientation workshops dedicated to music and the #STEM disciplines.




Disney grants OpenAI the use of 200 characters to generate videos


Disney will invest $1 billion in OpenAI and officially license its characters for use in its video generator Sora. The deal comes amid a heated debate in Hollywood over how the rapid progress of artificial intelligence is changing the entertainment industry and affecting the rights of content creators.

Under the three-year licensing agreement, Sora users will be able to create short videos for social media featuring more than 200 characters from the Disney, Marvel, Pixar and Star Wars universes. However, the terms of the deal set specific restrictions: the likenesses and voices of the actors associated with these franchises will not be used.

This is OpenAI's most significant step toward Hollywood since the controversial launch of Sora and a series of complaints about its generative video tool. The service has already found itself at the center of controversy over possible copyright violations and the appearance of videos showing famous figures in provocative and offensive depictions.

Racist images of Martin Luther King Jr. and the use of Malcolm X, which his daughter called painful and disrespectful, sparked particular outrage, prompting the company to begin restricting such requests more severely.

Disney itself also actively protects its intellectual property from uncontrolled use in generative services. In the fall, the company filed a harsh claim against Character.AI, accusing it of copyright infringement over chatbots based on Disney characters.

Later, according to trade publications, Disney's lawyers asked Google to stop using the studio's characters in its artificial intelligence systems. The new contract with OpenAI aims to make the handling of the characters a contractual, manageable matter.

Disney CEO Bob Iger presented the collaboration as a way to combine the company's recognizable stories with the capabilities of generative artificial intelligence and to expand narrative formats, while preserving protection for creators and their works. OpenAI CEO Sam Altman, for his part, promoted the deal as an example of how technology companies and the entertainment industry can build a more responsible partnership, one that balances innovation with respect for creative work and copyright law.

Disney will not merely license characters for Sora; it will also become a major OpenAI customer. The company plans to use OpenAI's APIs to create new digital products and internal tools, as well as to roll out ChatGPT to its employees. A separate section of user-generated videos created in Sora will appear in the Disney+ catalog, strengthening the integration of generative video into the media holding's ecosystem.

The deal shows how a major studio and an AI developer are trying to establish new rules of the game at a time when screenwriters, actors, visual effects artists and other market players are protesting against the replacement of human labor with algorithms and the use of their appearance and likeness without consent.

In this context, the agreement between Disney and OpenAI is becoming a test bed for whether the commercial benefits of artificial intelligence can be reconciled with real guarantees for content creators.

The article Disney grants OpenAI the use of 200 characters to generate videos comes from Red Hot Cyber.



Chinese components in American cars? Congress raises a national security alarm

@Notizie dall'Italia e dal mondo

The global automotive production chain has now become a front in the strategic competition between the United States and China. The message emerged clearly during a hearing of the House Select Committee on China, in which the leaders of the



“To us who, in need of trainers, always go looking for tutorials, the great master of faith and Doctor of the Church Pope Benedict XVI comes to our aid”.



56 years ago, the Piazza Fontana massacre in Milan. A fascist massacre, in which 17 people died and 88 were wounded. The state blamed the anarchists and arrested the railway worker Giuseppe Pinelli, who, as it happens, died falling from a window of the Milan police headquarters. They said he had thrown himself out... A prayer for the victims of the massacre and for Pinelli, innocent victim of a state killing.

The story of the Piazza Fontana massacre: why it changed Italy
geopop.it/strage-di-piazza-fon…




“Make sure that your actions are always proportionate to the common good being pursued, and that the protection of national security always and in any case guarantees people's rights, their private and family life, their freedom of consc…




Trump wants to crack Europe open like a mussel, but apparently, contrary to what he says, the EU seems rather indigestible to Trump. He wouldn't badmouth it every day if it weren't. And the fact that he badmouths it shows that it is anything but weak and compliant. In some ways Trump's reaction is confirmation that in Europe we are on the right track. You can't defend your own interests and please Trump and Putin at the same time.


"The Bank of Russia will sue the EU over the use of Russian assets"

After taking up arms, now they want to file lawsuits? How ridiculous.



BENIN. Coup attempt foiled thanks to intervention by France and ECOWAS


@Notizie dall'Italia e dal mondo
To foil the attempted coup in Benin, troops of ECOWAS, a regional alliance loyal to Paris, intervened. France does not want to lose its residual influence in Africa after Burkina Faso, Mali and Niger moved closer to Moscow
The article BENIN. Coup attempt foiled thanks



OSINT in the Investigation of the Assault on the United States Capitol


@Privacy Pride
Christian Bernieri's full post is on his blog: garantepiracy.it/blog/osint-ca…
After the great piece on the ecoceronti, Claudia returns to treat us to a new gem dedicated to OSINT. It's not just nerd stuff; on the contrary, it's something that belongs to us culturally and that we've been learning since kindergarten.




Sara Gioielli – Gioielli neri
freezonemagazine.com/articoli/…
When talent meets study and passion, artistic paths with great development potential are born. This is the case of Sara Gioielli, a pianist with a diploma in jazz singing from that Sancta Sanctorum that is the San Pietro a Majella conservatory in Naples, an extraordinary forge of artists and composers ever since its founding […]
The article Sara Gioielli – Gioielli neri comes from


UNITED STATES. ICE persecutes workers. Employers and unions react


@Notizie dall'Italia e dal mondo
What tactics are farms, factories, restaurants and other workplaces using to protect immigrant employees from ICE raids?
The article UNITED STATES. ICE persecutes workers. Employers and




Yesterday, at the #MIM, with the lighting of the Christmas tree, in the presence of Minister Giuseppe Valditara and Undersecretary Paola Frassinetti, the #NextGenArt workshops came to a close. #Natale


Digital Fights: Digital Lights: We fight against phone searches of refugees


netzpolitik.org/2025/digital-f…



How come oil is falling in price while fuel prices are rising? 🤨🧐😠

Oil closes lower in New York at $57.60 a barrel - Breaking news - Ansa.it
ansa.it/sito/notizie/topnews/2…






Journalists warn of silenced sources


From national outlets to college newspapers, reporters are running into the same troubling trend: sources who are afraid to speak to journalists because they worry about retaliation from the federal government.

This fear, and how journalists can respond to it, was the focus of a recent panel discussion hosted by Freedom of the Press Foundation (FPF), the Association of Health Care Journalists, and the Society of Environmental Journalists. Reporters from a range of beats described how the second Trump administration has changed the way people talk to the press, and what journalists do to reassure sources and keep them safe.

youtube.com/embed/rIyRDQFEl4k?…

For journalist Grace Hussain, a solutions correspondent at Sentient Media, this shift became unmistakable when sources who relied on federal funding suddenly backed out of participating in her reporting. “Their concerns were very legitimate,” Hussain said. “It was possible that their funding could get retracted or withdrawn” for speaking to the press.

When Hussain reached out to other reporters, she found that sources’ reluctance to speak to the press for fear of federal retaliation is an increasingly widespread issue that’s already harming news coverage. “There are a lot of stories that are under-covered, and it’s just getting more difficult at this point to do that sort of coverage with the climate that we’re in,” she said.

Lizzy Lawrence, who covers the Food and Drug Administration for Stat, has seen a different but equally unsettling pattern. Lawrence has found that more government sources want to talk about what’s happening in their agencies, but often only if they’re not named. Since Trump returned to office, she said, many sources “would request only to speak on the condition of anonymity, because of fears of being fired.” As a result, her newsroom is relying more on confidential sources, with strict guardrails, like requiring multiple sources to corroborate information.

For ProPublica reporter Sharon Lerner, who’s covered health and the environment across multiple administrations, the heightened fear is impossible to miss. Some longtime sources have cut off communication with her, including one who told her they were falsely suspected of leaking.

And yet, she added, speaking to the press may be one of the last options left for employees trying to expose wrongdoing. “So many of the avenues for federal employees to seek justice or address retaliation have been shut down,” Lerner said.

This chilling effect extends beyond federal agencies. Emily Spatz, editor-in-chief of Northeastern’s independent student newspaper The Huntington News, described how fear spread among international students after federal agents detained Mahmoud Khalil and Rümeysa Öztürk. Visa revocations of students at Northeastern only deepened the concern.

Students started asking the newspaper to take down previously published op-eds they worried could put them at risk, a step Spatz took after careful consideration. The newsroom ultimately removed six op-eds but posted a public website documenting each removal to preserve transparency.

Even as the paper worked hard to protect sources, many became reluctant to participate in their reporting. One student, for instance, insisted the newspaper remove a photo showing the back of their head, a method the paper had used specifically to avoid identifying sources.

Harlo Holmes, the chief information security officer and director of digital security at FPF, said these patterns mirror what journalists usually experience under authoritarian regimes, but — until now — have not been seen in the United States. Whistleblowing is a “humongously heroic act,” Holmes said, “and it is not always without its repercussions.”

She urged reporters to adopt rigorous threat-modeling practices and to be transparent with sources about the tools and techniques they use to keep them safe. Whether using SecureDrop, Signal, or other encrypted channels, she said journalists should make it easy for sources to find out how to contact them securely. “A little bit of education goes a long way,” she said.

For more on how journalists are working harder than ever to protect vulnerable sources, watch the full event recording here.


freedom.press/issues/journalis…



Covering immigration in a climate of fear


As the federal government ramps up immigration enforcement, sweeping through cities, detaining citizens and noncitizens, separating families, and carrying out deportations, journalists covering immigration have had to step up their work, too.

Journalists on the immigration beat today are tasked with everything from uncovering government falsehoods to figuring out what their communities need to know and protecting their sources. Recently, Freedom of the Press Foundation (FPF) hosted a conversation with journalists Maritza Félix, the founder and director of Conecta Arizona; Arelis Hernández, a reporter for The Washington Post; and Lam Thuy Vo, an investigative reporter with Documented. They discussed the challenges they face and shared how they report on immigration with humanity and accuracy, while keeping their sources and themselves safe.

youtube.com/embed/OPPo0YzKfnA?…

Immigration reporting has grown a lot more difficult, explained Hernández, as sources increasingly fear retaliation from the government. “I spend a lot of time at the front end explaining, ‘Where will this go? What will it look like?’” Hernández said, describing her process of working with sources to ensure they participate in reporting knowingly and safely. She also outlined her own precautions, from using encrypted devices to carrying protective gear, highlighting just how unsafe conditions have become, even for U.S.-born reporters.

Like Hernández, Félix also emphasized the intense fear and uncertainty many immigrant sources experience. Other sources, however, may be unaware of the possible consequences of speaking to reporters and need to be protected as well. “I think when we’re talking about sources, particularly with immigration, we’re talking about people who are sharing their most vulnerable moments in their life, and I think the way that we treat it is going to be very decisive on their future,” she said.

Journalists who are themselves immigrants must also manage personal risk, Félix said, “but the risk is always going to be there just because of who we are and what we represent in this country.” She pointed to the arrest and deportation of journalist Mario Guevara in Georgia, saying it “made me think that could have been me” before she became a U.S. citizen. She recommended that newsrooms provide security training, mental health resources, and operational protocols for both staff and freelancers.

Both Félix and Vo, who work in newsrooms by and for immigrant communities, emphasized the need for journalists to actively listen to the people they cover. “If you’re trying to serve immigrants, build a listening mechanism, some kind of way of continuing to listen to both leaders in the community, service providers, but also community members,” Vo advised. She also recommended that journalists use risk assessments and threat modeling to plan how to protect themselves and their sources.

Watch the full discussion here.


freedom.press/issues/covering-…



Storm and cold batter 850,000 displaced people, victims of the genocidal state of israel.
Rahaf, an eight-month-old girl, died of cold in Khan Younis
differx.noblogs.org/2025/12/11…

#Gaza #genocidio #israhell #tempesta #tempestabyron

reshared this





‘Architects of AI’ Wins Time Person of the Year, Sends Gambling Markets Into a Meltdown #TimePersonoftheYear




The degenerate gamblers of Polymarket and Kalshi who bet that “AI” would win the Time Person of the Year are upset because the magazine has named the “Architects of AI” the person of the year. The people who make AI tools and AI infrastructure are, notably, not “AI” themselves, and thus both Kalshi and Polymarket have decided that people who bet “AI” do not win the bet. On Polymarket alone, people spent more than $6 million betting on AI gracing the cover of Time.

As writer Parker Molloy pointed out, people who bet on AI are pissed. “ITS THE ARCHITECTS OF AI THISNIS [sic] LITERALLY THE BET FUCK KALSHI,” one Kalshi bettor said.

“This pretty clearly should’ve resolved to yes. If you bought AI, reach out to Kalshi support because ‘AI’ is literally on the cover and in the title ‘Architects of AI.’ They’re not going to change anything unless they hear from people,” said another.

“ThE aRcHiTeCtS oF AI fuck you pay me,” said a third.

“Another misleading bet by Kalshi,” said another gambler. “Polymarket had fair rules and Kalshi did not. They need to fix this.”

But bag holders on Polymarket are also pissed. “This is a scam. It should be resolved to a cancellation and a full refund to everyone,” said a gambler who’d put money down on Jensen Huang and lost. Notably, on Kalshi, anyone who bet on any of the “Architects of AI” (meaning Sam Altman, Elon Musk, Jensen Huang, Dario Amodei, Mark Zuckerberg, Lisa Su, and Demis Hassabis) won the bet, while anyone who bet on their products, “ChatGPT” and “OpenAI,” did not. On Polymarket, the rules were even stricter: people who bet “Jensen Huang” lost but people who bet “Other” won.

“FUCK YOU FUCKING FUCK Shayne Coplan [CEO of Polymarket],” said someone who lost about $50 betting on AI to make the cover.

Polymarket made its reasoning clear in a note of “additional context” on the market.

“This market is about the person/thing named as TIME's Person of the Year for 2025, not what is depicted on the cover. Per the rules, “If the Person of the Year is ‘Donald Trump and the MAGA movement,’ this would qualify to resolve this market to ‘Trump.’ However if the Person of the Year is ‘The MAGA movement,’ this would not qualify to resolve this market to ‘Trump’ regardless of whether Trump is depicted on the cover,” it said.

“Accordingly, a Time cover which lists ‘Architects of AI’ as the person of the year will not qualify for ‘AI’ even if the letters ‘AI’ are depicted on the cover, as AI itself is not specifically named.”

It should be noted how incredibly stupid all of this is, which is perhaps appropriate for the year 2025, in which most of the economy consists of reckless gambling on AI. People spent more than $55 million betting on the Time Person of the Year on Polymarket, and more than $19 million betting on the Time Person of the Year on Kalshi. It also presents one of the many downsides of spending money to bet on random things that happen in the world. One of the most common and dumbest things that people continue to do to this day despite much urging otherwise is anthropomorphize AI, which is distinctly not a person and is not sentient.

Time almost always actually picks a “person” for its Person of the Year cover, but it does sometimes get conceptual with it, at times selecting groups of people (“The Silence Breakers” of the #MeToo movement, the “Whistleblowers,” the “Good Samaritans,” “You,” and the “Ebola Fighters,” for example). In 1982 it selected “The Computer” as its “Machine of the Year,” and in 1988 it selected “The Endangered Earth” as “Planet of the Year.”

Polymarket’s users have been upset several times over the resolution of bets in the past few weeks and their concerns highlight how easy it is to manipulate the system. In November, an unauthorized edit of a live map of the Ukraine War allowed gamblers to cash in on a battle that hadn’t happened. Earlier this month, a trader made $1 million in 24 hours betting on the results of Google’s 2025 Year In Search Rankings and other users accused him of having inside knowledge of the process. Over the summer, Polymarket fought a war over whether or not President Zelenskyy had worn a suit. Surely all of this will continue to go well and be totally normal moving forward, especially as these prediction markets begin to integrate themselves with places such as CNN.




With OpenAI investment, Disney will officially begin putting AI slop into its flagship streaming product.#AIPorn #OpenAI #Disney


Disney Invests $1 Billion in the AI Slopification of Its Brand


The first thing I saw this morning when I opened X was an AI-generated trailer for Avengers: Doomsday. Robert Downey Jr’s Doctor Doom stood in a shapeless void alongside Captain America and Reed Richards. It was obvious slop, but it was also close in tone and feel to the last five years of Disney’s Marvel movies. As media empires consolidate, nostalgia intensifies, and AI tools spread, Disney’s blockbusters feel more like an excuse to slam recognizable characters together in a contextless morass.

So of course Disney has announced it signed a deal with OpenAI today that will soon allow fans to make their own officially licensed Disney slop using Sora 2. The house that the mouse built, and which has been notoriously protective of its intellectual property, opened up the video generator, saw the videos featuring Nazi Spongebob and criminal Pikachu, and decided: We want in.

According to a press release, the deal is a three-year licensing agreement that will allow the AI company’s short-form video platform Sora to generate slop videos using characters like Mickey Mouse and Iron Man. As part of the agreement, Disney is investing $1 billion of equity into OpenAI, said it will become a major customer of the company, and promised that fan and corporate AI-generated content would soon come to Disney+, meaning that Disney will officially begin putting AI slop into its flagship streaming product.

The deal extends to ChatGPT as well and, starting in early 2026, users will be able to crank out officially approved Disney slop on multiple platforms. When Sora 2 launched in October, it had little to no content moderation or copyright guidelines, and videos of famous franchise characters doing horrible things flooded the platform. Pikachu stole diapers from a CVS, Rick and Morty pushed cryptocurrencies, and Disney characters shouted slurs in the aisles of Wal-Mart.

It is worth mentioning that, although Disney has traditionally been extremely protective of its intellectual property, the company’s princesses have become one of the most common fictional subjects of AI porn on the internet; 404 Media has found at least three different large subreddits dedicated to making AI porn of characters like Elsa, Snow White, Rapunzel, and Tinkerbell. In this case, Disney is fundamentally throwing its clout behind a technology that has thus far most commonly been used to make porn of its iconic characters.

After the hype of the launch, OpenAI added an “opt-in” policy to Sora that was meant to prevent users from violating the rights of copyright holders. It is trivial, however, to break this policy and circumvent the guardrails preventing a user from making a lewd Mickey Mouse cartoon or episode of The Simpsons. The original sin of Sora and other AI systems is that the training data is full of copyrighted material and the models cannot be retrained without great cost, if at all.

If you can’t beat the slop, become the slop.

“The rapid advancement of artificial intelligence marks an important moment for our industry, and through this collaboration with OpenAI we will thoughtfully and responsibly extend the reach of our storytelling through generative AI, while respecting and protecting creators and their works,” Bob Iger, CEO of Disney, said in the press release about the agreement.

The press release explained that Sora users will soon have “official” access to 200 characters in the Disney stable, including Loki, Thanos, Darth Vader, and Minnie Mouse. In exchange, Disney will begin to use OpenAI’s APIs to “build new products” and it will deploy “ChatGPT for its employees.”

I’m imagining a future where AI-generated fan trailers of famous characters standing next to each other in banal liminal spaces are the norm. People have used Sora 2 to generate some truly horrifying videos, but the guardrails have become more aggressive. As Disney enters the picture, I imagine the platform will become even more anodyne. Persistent people will slip through and generate videos of Goofy and Iron Man sucking and fucking, sure, but the vast majority of what’s coming will be safe corporate gruel that resembles a Marvel movie.




Portugal paralyzed by its first general strike in 12 years


@Notizie dall'Italia e dal mondo
Portuguese unions called the strike against a government plan that would make layoffs easier and extend precarious working conditions.
The article: https://pagineesteri.it/2025/12/11/europa/il-portogallo-paralizzato-dal-primo-sciopero-generale-dopo-12-anni/