because of this government, a useful project like the bridge is at risk.
The Court of Auditors rejects the Strait of Messina Bridge. Meloni attacks: 'Yet another act of judicial overreach' - News - Ansa.it
The audit magistrates: 'No to the certification of legitimacy'. The reasoning will follow within 30 days. Salvini: 'A political decision, we are going ahead'. Schlein against the premier: 'She wants to put herself above the law' (ANSA) Agenzia ANSA
What can we do to avoid having to sign up just to use the hardware we buy?
Even though I paid dearly for them, in order to use them I have to register with my first name, last name and access to my hairy backside; otherwise I can only use them as doorstops.
Isn't the good money I already gave them enough?
Bah, I guess I'll go back to pen and paper.
The new Pasta Grannies video: youtube.com/watch?v=_S-7VfwGSb…
@Cucina e ricette
#Gaza, the truce that never was
Gaza, the truce that never was
The Netanyahu regime formally returned to observing the ceasefire on Wednesday afternoon, after many hours of intense bombardment of the Gaza Strip ordered by the prime minister/war criminal following alleged violations by… www.altrenotizie.org
📢 Hannah Jadagu – Describe:
A hermit in an enchanted forest: for Hannah Jadagu time stops flowing while she writes a cathartic album, arrived from another dimension to dig deep inside us.
Journalists in the crosshairs: the Ossigeno report
Report by Greta Giglio and Leonardo Macciocca
The article 'Journalists in the crosshairs: the Ossigeno report' is on Lumsanews.
The video is here ▶️ youtube.com/watch?v=LVmAOa_vcd…
The infographic is here ▶️ unica.istruzione.
Ministero dell'Istruzione
Password security is the topic of the first #Sicurnauti session. Check out the content for students and parents on #UNICA. The video is here ▶️ https://www.youtube.com/watch?v=LVmAOa_vcdg The infographic is here ▶️ https://unica.istruzione. Telegram
This Reactor is on Fire! Literally…
If I mention nuclear reactor accidents, you’d probably think of Three Mile Island, Fukushima, or maybe Chernobyl (or, now, Chornobyl). But there have been others that, for whatever reason, aren’t as well publicized. Did you know there is an International Nuclear Event Scale? Like the Richter scale, but for nuclear events. A zero on the scale is a little oopsie. A seven is like Chernobyl or Fukushima, the only two such events at that scale so far. Three Mile Island and the event you’ll read about in this post were both level five events. That other level five event? The Windscale fire incident in October of 1957.
If you imagine this might have something to do with the Cold War, you are correct. It all started back in the 1940s. The British decided they needed a nuclear bomb project and started their version of the Manhattan Project called “Tube Alloys.” But in 1943, they decided to merge the project with the American program.
The British, rightfully so, saw themselves as co-creators of the first two atomic bombs. However, in post-World War II paranoia, the United States shut down all cooperation on atomic secrets with the 1946 McMahon Act.
We Are Not Amused
The British were not amused and knew that to secure a future seat at the world table, they would need to develop their own nuclear capability, so they resurrected Tube Alloys. If you want a detour about the history of Britain's bomb program, the BBC has a video for you that you can see below.
youtube.com/embed/8WcMm31RbMw?…
Of course, post-war Britain wasn’t exactly flush with cash, so they had to limit their scope a bit. While the Americans had built bombs with both uranium and plutonium, the UK decided to focus on plutonium, which could create a stronger bomb with less material.
Of course, that also means you have to create plutonium, so they built two reactors — or piles, as they were known then. They were both in the same location near Seascale, Cumberland.
Inside a Pile
The Windscale Piles in 1951 (photo from gov.uk website).
The reactors were pretty simple. There was a big block of graphite with channels drilled through it horizontally. You inserted uranium fuel cartridges in one end, pushing the previous cartridge through the block until they fell out the other side into a pool of water.
The cartridges were encased in aluminum and had cooling fins. These things got hot! Immediately, though, practical concerns — that is, budgets — got in the way. Water cooling was a good idea, but there were problems. First, you needed ultra-pure water. Next, you needed to be close to the sea to dump radioactive cooling water, but not too close to any people. Finally, you had to be willing to lose a circle around the site about 60 miles in diameter if the worst happened.
The US facility at Hanford, indeed, had a 30-mile escape road for use if they had to abandon the site. They dumped water into the Columbia River, which, of course, turned out to be a bad idea. The US didn’t mind spending on pure water.
Since the British didn’t like any of those constraints, they decided to go with air cooling using fans and 400-foot-tall chimneys.
Our Heroes
Most of us can relate to being on a project where the rush to save money causes problems. A physicist, Terence Price, wondered what would happen if a fuel cartridge split open. For example, one might miss the water pool on the other side of the reactor. There would be a fire and uranium oxide dust blowing out the chimney.
The idea of filters in each chimney was quickly shut down. Since the stacks were almost complete, they’d have to go up top, costing money and causing delays. However, Sir John Cockcroft, in charge of the construction, decided he’d install the filters anyway. The filters became known as Cockcroft’s Follies because they were deemed unnecessary.
So why are these guys the heroes of this story? It isn’t hard to guess.
A Rush to Disaster
The government wanted to quickly produce a bomb before treaties would prohibit them from doing so. That put them on a rush to get H-bombs built by 1958. There was no time to build more reactors, so they decided to add materials, including magnesium, to the fuel cartridges to produce tritium. The engineers were concerned about flammability, but no one wanted to hear it.
They also decided to make the fins of the cartridges smaller to raise the temperature, which was good for production. This also allowed them to stuff more fuel inside. Engineers again complained. Hotter, more flammable fuel. What could go wrong? When no one would listen, the director, Christopher Hinton, resigned.
The Inevitable
The change in how heat spread through the core was dangerous. But the sensors in place were set for the original patterns, so the increased heat went undetected. Everything seemed fine.
It was known that graphite tends to store some energy from neutron bombardment for later release, which could be catastrophic. The solution was to heat the core to a point where the graphite started to get soft, which would gradually release the potential energy. This was a regular part of operating the reactors. The temperature would spike and then subside. Operations would then proceed as usual.
By 1957, they’d done eight of these release cycles and prepared for a ninth. However, this one didn’t go as planned. Usually, the core would heat evenly. This time, one channel got hot and the rest didn’t. They decided to try the release again. This time it seemed to work.
As the core started to cool as expected, there was an anomaly. One part of the core was heating up instead, reaching up to 400C. They sped up the fans, and the radiation monitors showed that there was a leak up the chimney.
Memories
Remember the filters? Cockcroft's Follies? Well, radioactive dust had gone up the chimney before. In fact, it had happened pretty often. As predicted, the fuel would miss the pool and burst.
With the one spot getting hotter, operators assumed a cartridge had split open in the core. They were wrong. The cartridge was on fire. The Windscale reactor was on fire.
Of course, speeding up the fans just made the fire worse. Two men donned protective gear and went to peek at an inspection port near the hot spot. They saw four channels of fuel glowing “bright cherry red”. At that point, the reactor had been on fire for two days. The Reactor Manager suited up and climbed the 80 feet to the top of the reactor building so he could assess the backside of the unit. It was glowing red also.
Fight Fire with ???
The fans only made the fire worse. They tried to push the burning cartridges out with metal poles. They came back melted and radioactive. The reactor was now white hot. They then tried about 25 tonnes of carbon dioxide, but getting it to where it was needed proved to be too difficult, so that effort was ineffective.
By the 11th of October, an estimated 11 tonnes of uranium were burning, along with magnesium in the fuel for tritium production. One thermocouple was reading 3,100C, although that almost had to be a malfunction. Still, it was plenty hot. There was fear that the concrete containment building would collapse from the heat.
You might think water was the answer, and it could have been. But when water hits molten metal, hydrogen gas results, which, of course, is going to explode under those conditions. They decided, though, that they had to try. The manager once again took to the roof and tried to listen for any indication that hydrogen was building up. A dozen firehoses pushed into the core didn’t make any difference.
Sci Fi
If you read science fiction, you probably can guess what did work. Starve the fire for air. The manager, a man named Tuohy, and the fire chief remained and sent everyone else out. If this didn’t work, they were going to have to evacuate the nearby town anyway.
They shut off all cooling and ventilation to the reactor. It worked. The temperature finally started going down, and the firehoses were now having an effect. It took 24 hours of water flow to get things completely cool, and the water discharge was, of course, radioactive.
If you want a historical documentary on the event, here's one from Spark:
youtube.com/embed/S0DXndsQ0H4?…
Aftermath
The government kept a tight lid on the incident and underreported what had been released. But much less radioactive iodine, cesium, plutonium, and polonium was released than there might have been, thanks to the chimney filters. Cockcroft's Folly had paid off.
While it wasn't ideal, official estimates are that 240 extra cancer cases were due to the accident. Unofficial estimates are higher, but still comparatively modest. Also, there had been hushed-up releases earlier, so it is probable that the true number due to this one accident is even lower, although if it is your cancer, you probably don't care much which accident caused it.
Milk from the area was dumped into the sea for a while. Today, the reactor is sealed up, and the site is called Sellafield. It still contains thousands of damaged fuel elements. The site is largely stable, although the costs of remediating the area have been, and will continue to be, staggering.
This isn’t the first nuclear slip-up that could have been avoided by listening to smart people earlier. We’ve talked before about how people tend to overestimate or sensationalize these kinds of disasters. But it still is, of course, something you want to avoid.
Featured image: “HD.15.003” by United States Department of Energy
Restoring the E&L MMD-1 Mini-Micro Designer Single-Board Computer from 1977
Over on YouTube [CuriousMarc] and [TubeTimeUS] team up for a multi-part series, E&L MMD-1 Mini-Micro Designer Restoration.
The E&L MMD-1 is a microcomputer trainer and breadboard for the Intel 8080. It’s the first ever single-board computer. What’s more, they mention in the video that E&L actually invented the breadboard with the middle trench for the ICs which is so familiar to us today; their US patent 228,136 was issued in August 1973.
The MMD-1 trainer has support circuits providing control logic, clock, bus drivers, voltage regulator, memory decoder, memory, I/O decoder, keyboard encoder, three 8-bit ports, an octal keyboard, and other support interconnects. They discuss in the video the Intel 1702 which is widely accepted as the first commercially available EPROM, dating back to 1971.
In the first video they repair the trainer then enter a “chasing lights” assembly language program for testing and demonstration purposes. This program was found in 8080 Microcomputer Experiments by Howard Boyet on page 76. Another book mentioned is The Bugbook VI by David Larsen et al.
In the second video they wire in some Hewlett-Packard HP 5082-7300 displays which they use to report on values in memory.
A third episode is promised, so stay tuned for that! If you’re interested in the 8080 you might like to read about its history or even how to implement one in an FPGA!
youtube.com/embed/eCAp3K7yTlQ?…
youtube.com/embed/Sfe18oyRvGk?…
Expert Systems: The Dawn of AI
We’ll be honest. If you had told us a few decades ago we’d teach computers to do what we want, it would work some of the time, and you wouldn’t really be able to explain or predict exactly what it was going to do, we’d have thought you were crazy. Why not just get a person? But the dream of AI goes back to the earliest days of computers or even further, if you count Samuel Butler’s letter from 1863 musing on machines evolving into life, a theme he would revisit in the 1872 book Erewhon.
Of course, early real-life AI was nothing like you wanted. Eliza seemed pretty conversational, but you could quickly confuse the program. Hexapawn learned how to play an extremely simplified version of chess, but you could just as easily teach it to lose.
But the real AI work that looked promising was the field of expert systems. Unlike our current AI friends, expert systems were highly predictable. Of course, like any computer program, they could be wrong, but if they were, you could figure out why.
Experts?
As the name implies, expert systems drew from human experts. In theory, a specialized person known as a “knowledge engineer” would work with a human expert to distill his or her knowledge down to an essential form that the computer could handle.
This could range from the simple to the fiendishly complex, and if you think it was hard to do well, you aren’t wrong. Before getting into details, an example will help you follow how it works.
From Simple to Complex
One simple fake AI game is the one where the computer tries to guess an animal you think of. This was a very common Basic game back in the 1970s. At first, the computer would ask a single yes or no question that the programmer put in. For example, it might ask, “Can your animal fly?” If you say yes, the program guesses you are thinking of a bird. If not, it guesses a dog.
Suppose you say it does fly, but you weren’t thinking of a bird. It would ask you what you were thinking of. Perhaps you say, “a bat.” It would then ask you to tell it a question that would distinguish a bat from a bird. You might say, “Does it use sonar?” The computer will remember this, and it builds up a binary tree database from repeated play. It learns how to guess animals. You can play a version of this online and find links to the old source code, too.
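To see how little machinery that takes, here is a minimal Python sketch of the same idea, assuming made-up prompts and starting animals: the whole "knowledge base" is just a binary tree of yes/no questions with animals at the leaves, grown by one node every time the program guesses wrong.

class Node:
    def __init__(self, text, yes=None, no=None):
        self.text = text          # a question, or an animal name at a leaf
        self.yes, self.no = yes, no

    def is_leaf(self):
        return self.yes is None and self.no is None

def ask(prompt):
    return input(prompt + " (y/n) ").strip().lower().startswith("y")

def play(node):
    if not node.is_leaf():
        play(node.yes if ask(node.text) else node.no)
        return
    if ask("Is it a " + node.text + "?"):
        print("I guessed it!")
        return
    # Wrong guess: learn a question that separates the new animal from the old one.
    animal = input("What were you thinking of? ")
    question = input("Type a yes/no question that is true for a " + animal +
                     " but not for a " + node.text + ": ")
    node.text, node.yes, node.no = question, Node(animal), Node(node.text)

root = Node("Can your animal fly?", yes=Node("bird"), no=Node("dog"))
while ask("Think of an animal. Ready?"):
    play(root)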
Of course, this is terrible. It is easy to populate the database with stupid questions or ones you aren’t sure of. Do ants live in trees? We don’t think of them living in trees, but carpenter ants do. Besides, sometimes you may not know the answer or maybe you aren’t sure.
So let’s look at a real expert system, Mycin. Mycin, from Stanford, took data from doctors and determined what bacteria a patient probably had and what antibiotic would be the optimal treatment. Turns out, most doctors you see get this wrong a lot of the time, so there is a lot of value to giving them tools for the right treatment.
This is really a very specialized animal game where the questions are preprogrammed. Is it gram positive? Is it in a normally sterile site? What's more, Mycin used Bayesian math so that you could assign values to how sure you were of an answer, or even say that you didn't know. So, for example, -1 might mean definitely not, +1 means definitely, 0 means I don't know, and -0.5 means probably not, but maybe. You get the idea. The system ran on a DEC PDP-10 and had about 600 rules.
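The bookkeeping behind those numbers is usually described as a certainty-factor calculus. As an illustration only, not Mycin's actual code, the combination rule most often quoted for Mycin-style systems looks like this in Python:

def combine_cf(a, b):
    # Combine two certainty factors in [-1, 1] for the same hypothesis,
    # using the combination rule usually quoted for Mycin-style systems.
    if a >= 0 and b >= 0:
        return a + b * (1 - a)
    if a <= 0 and b <= 0:
        return a + b * (1 + a)
    return (a + b) / (1 - min(abs(a), abs(b)))

print(combine_cf(0.4, 0.3))   # two weakly suggestive rules reinforce: 0.58
print(combine_cf(0.4, -0.3))  # conflicting evidence pulls back toward 0: ~0.14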
The system used LISP and could paraphrase rules into English. For example:
(defrule 52
if (site culture is blood)
(gram organism is neg)
(morphology organism is rod)
(burn patient is serious)
then .4
(identity organism is pseudomonas))
Rule 52:
If
1) THE SITE OF THE CULTURE IS BLOOD
2) THE GRAM OF THE ORGANISM IS NEG
3) THE MORPHOLOGY OF THE ORGANISM IS ROD
4) THE BURN OF THE PATIENT IS SERIOUS
Then there is weakly suggestive evidence (0.4) that
1) THE IDENTITY OF THE ORGANISM IS PSEUDOMONAS
In trials, the program did as well as real doctors, even specialists. Of course, it was never used in practice because of ethical concerns and the poor usability of entering data into a timesharing terminal. You can see a 1988 video about Mycin below.
youtube.com/embed/a65uwr_O7mM?…
Under the Covers
Mycin wasn’t the first or only expert system. Perhaps the first was SID. In 1982, SID produced over 90% of the VAX 9000’s CPU design, although many systems before then had dabbled in probabilities and other similar techniques. For example, DENDRAL from the 1960s used rules to interpret mass spectrometry data. XCON started earlier than SID and was DEC’s way of configuring hardware based on rules. There were others, too. Everyone “knew” back then that expert systems were the wave of the future!
Expert systems generally fall into two categories: forward chaining and backward chaining. Mycin was a backward chaining system.
What's the difference? You can think of each rule as an if statement. Just like the example, Mycin knew that "if the site is in the blood and it is gram negative and…. then…." A forward chaining expert system starts from the known facts and keeps firing whatever rules match, adding their conclusions as new facts, until nothing more can be derived.
Of course, you can take some shortcuts. So, in the sample, if a hypothetical forward-chaining Mycin asked whether the site was blood and the answer was no, then it was done with rule 52.
However, the real Mycin was backward chaining. It would assume something was true and then set out to prove or disprove it. As it received more answers, it could see which hypotheses to prioritize and which to discard. As rules became more likely, one would eventually emerge.
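As a rough sketch of what that goal-driven style looks like, here is a toy backward chainer in Python. It is not Mycin's actual engine, and the facts and rules are invented, but it shows the key property: it only asks about premises that matter for the hypothesis currently being tested.

RULES = [
    # (premises, conclusion) -- hypothetical, loosely modeled on rule 52 above
    ({"site is blood", "gram is negative", "morphology is rod"}, "organism is pseudomonas"),
    ({"site is blood", "gram is positive", "morphology is coccus"}, "organism is staphylococcus"),
]

known = {}  # facts already derived or asked about

def holds(fact):
    if fact in known:
        return known[fact]
    concluding = [premises for premises, conclusion in RULES if conclusion == fact]
    if concluding:
        # Derived fact: true if any rule concluding it has all premises satisfied.
        result = any(all(holds(p) for p in premises) for premises in concluding)
    else:
        # Primitive fact: nothing concludes it, so ask the user.
        result = input(fact + "? (y/n) ").strip().lower().startswith("y")
    known[fact] = result
    return result

for goal in ("organism is pseudomonas", "organism is staphylococcus"):
    if holds(goal):
        print("Hypothesis supported:", goal)
        break
else:
    print("No hypothesis supported.")

A forward chainer would instead start from whatever answers it already had and fire every rule whose premises were satisfied, regardless of which hypothesis they supported.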
If that’s not clear, you can try a college lecture on the topic from 2013, below.
youtube.com/embed/ZhTt-GG7PiQ?…
Of course, in a real system, rules may also trigger other rules. There were probably as many actual approaches as there were expert systems. Some, like Mycin, were written in LISP. Some in C. Many used Prolog, which has features aimed at just the kind of things you need for an expert system.
What Happened?
Expert systems are actually very useful for a certain class of problems, and there are still examples of them hanging around (for example, Apache Drools). However, some problems that expert systems tried to solve — like speech recognition — were much better handled by neural networks.
Part of the supposed charm of expert systems was that — like all new technology — they were supposed to mature to the point where management could get rid of those annoying programmers. That really wasn't the case. (It never is.) The programmers just got new titles as knowledge engineers.
Even NASA got in on the action. They produced CLIPS, which lets you build expert systems in C; it was available to the public and still is. If you want to try your hand, there is a good book out there.
Meanwhile, you can chat with Eliza if you don’t want to spend time chatting with her more modern cousins.
183 million Gmail accounts hacked! It's false: it was just a hoax
For the second time in recent months, Google has been forced to deny reports of a massive Gmail data breach. The story was triggered by reports of a "hack of 183 million accounts" spreading online, even though there has been no actual breach or incident involving Google's servers.
As company representatives explained, this is not a new attack but rather old databases of logins and passwords collected by attackers through infostealers and other attacks over recent years.
"Reports of a 'Gmail breach affecting millions of users' are false. Gmail and its users are reliably protected," Google representatives said. The company also stressed that the source of the rumors of a major leak was a database containing infostealer logs, as well as credentials stolen in phishing and other attacks.
The fact is that this database was recently made public via the threat-intelligence platform Synthient and was then added to the leak aggregator Have I Been Pwned (HIBP).
HIBP's creator, Troy Hunt, confirmed that the Synthient database contains about 183 million credentials, including logins, passwords and the web addresses where they were used. According to Hunt, this is not a single leak: the information was collected over the years from Telegram channels, forums, the dark web and other sources. Moreover, these accounts are not tied to a single platform but to thousands, if not millions, of different websites and services.
In addition, 91% of the records had already appeared in other leaks and were already in the HIBP database, while only 16.4 million addresses were new.
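The flip side of HIBP ingesting these corpora is that anyone can check their own passwords against it. A minimal Python sketch, assuming HIBP's public Pwned Passwords range endpoint as it works at the time of writing and using an obviously throwaway test password, shows that only the first five characters of the SHA-1 hash ever leave your machine:

import hashlib
import urllib.request

def pwned_count(password):
    # k-anonymity lookup: send only the first 5 hex chars of the SHA-1 hash,
    # then scan the returned suffixes locally for a match.
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    with urllib.request.urlopen("https://api.pwnedpasswords.com/range/" + prefix) as resp:
        for line in resp.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

print(pwned_count("password123"))  # non-zero means the password appears in known leaks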
Synthient representatives confirmed that most of the data in the database was not obtained through hacking but by infecting individual users' systems with malware. In total, the researchers collected 3.5 TB of information (23 billion rows), including email addresses, passwords and the exposed website addresses where the compromised credentials were used.
Google stresses that it regularly discovers and uses such databases for security checks, helping users reset leaked passwords and secure their accounts again.
The company also points out that, even though Gmail was not hacked, old usernames and passwords that have already leaked can still pose a threat. To mitigate these risks, Google recommends enabling multi-factor authentication or switching to passkeys, which are more secure than traditional passwords.
As a reminder, back in September 2025 Google had already denied reports of a massive breach of Gmail user data. At the time, media reports claimed that Google had sent a mass notification to all Gmail users (about 2.5 billion people) asking them to urgently change their passwords and enable two-factor authentication. Google representatives then denied that the story was true.
The article '183 million Gmail accounts hacked! It's false: it was just a hoax' originally appeared on Red Hot Cyber.
Nvidia launches NVQLink for quantum computing
Nvidia has not built its own quantum computer, but CEO Jensen Huang is betting that the company will play a key role in the technology's future. In his keynote at Nvidia's GTC conference in Washington, D.C., on Tuesday, Huang announced NVQLink, an interconnect technology that links quantum processors to the AI supercomputers they need in order to operate effectively.
He said: "NVQLink is the key to connecting quantum and classical supercomputers." Quantum processors represent an entirely new way of computing, using the principles of quantum physics to solve problems that today's classical computers cannot.
Their applications are vast, from scientific discovery to finance. However, to deliver meaningful results for companies and researchers, they must be integrated with high-performance classical computers, which perform calculations the quantum machines cannot complete on their own and correct the natural errors in their answers, a process known as error correction.
Tim Costa, NVIDIA's General Manager of Industrial and Quantum Engineering, said there is broad consensus in the industry on the need for this hybrid infrastructure combining quantum processors (QPUs) and AI chips such as NVIDIA GPUs, in part because running full error correction requires AI.
Costa said some companies have already tried to integrate quantum processors with AI supercomputers, but those technologies failed to deliver the speed and scalability needed for fast, scalable error correction. NVIDIA says its new interconnect technology is the first solution to provide the speed and scale needed to realize the true promise of large-scale quantum computing.
To that end, NVIDIA is working with more than a dozen quantum companies, including IonQ, Quantinuum and Infleqtion, as well as several national laboratories, including Sandia National Laboratories, Oak Ridge National Laboratory and Fermilab. The interconnect technology is based on an open architecture and is suited to different quantum modalities, including trapped ions, superconductors and photons. Costa said this openness is crucial, meaning national laboratories will now be able to build supercomputers that can take advantage of quantum capabilities as soon as they become available.
Costa said that in the future "every supercomputer will use quantum processors to expand the range of problems it can handle, and every quantum processor will rely on supercomputers to function properly." When will we see quantum technologies generate significant commercial value? Nvidia's Costa said that any answer he could give would be wrong.
The article 'Nvidia launches NVQLink for quantum computing' originally appeared on Red Hot Cyber.
Civil society: criticism of the debate over "abuse of social benefits"
Tasting the Exploit: HackerHood tests the Microsoft WSUS Exploit CVE-2025-59287
The cybersecurity landscape was recently shaken by the discovery of a critical Remote Code Execution (RCE) vulnerability in Microsoft's Windows Server Update Services (WSUS).
Identified as CVE-2025-59287 and scored CVSS 9.8 (Critical), this flaw poses a high and immediate risk to organizations that use WSUS for centralized update management.
The vulnerability is particularly dangerous because it allows a remote, unauthenticated attacker to execute arbitrary code with system privileges on affected WSUS servers.
After Microsoft released an emergency "out-of-band" patch on October 23, 2025, needed because the initial October patch had not fully fixed the issue, active exploitation in the wild was observed almost immediately.
The US Cybersecurity and Infrastructure Security Agency (CISA) quickly added the vulnerability to its Known Exploited Vulnerabilities (KEV) catalog, underlining the urgency of an immediate response by system administrators.
Details of the problem
WSUS is a fundamental tool in corporate networks, acting as the trusted source for distributing software patches.
Its nature as a key infrastructure service makes it a high-value target, since compromising it can provide a beachhead for lateral movement and widespread compromise of the network.
The root of the problem is a case of unsafe deserialization of untrusted data, with Remote Code Execution (RCE) as the end result.
This technical flaw can be exploited through several known attack paths:
- GetCookie() endpoint: an attacker can send a specially crafted request to the GetCookie() endpoint, causing the server to improperly deserialize an AuthorizationCookie object using the unsafe BinaryFormatter.
- ReportingWebService: an alternative path targets the ReportingWebService to trigger unsafe deserialization via SoapFormatter.
In both scenarios, the attacker can get the system to run malicious code at the highest privilege level: System.
The vulnerability is specific to systems with the WSUS Server role enabled; Microsoft Windows Server 2012, 2012 R2, 2016, 2019, 2022 (including version 23H2) and 2025 are affected.
Technical Details of the Exploit
Following the recent public disclosure of a Proof-of-Concept (PoC) for the exploit, we ran a lab test to analyze how it works and what its potential impact is.
The PoC, available at the link: gist.github.com/hawktrace/76b3…
It is specifically designed to target Windows Server Update Services (WSUS) instances publicly exposed on the default TCP ports 8530 (HTTP) and 8531 (HTTPS).
Running the PoC is simple: launch the script and pass the vulnerable http/https target as an argument.
This triggers the launch of malicious PowerShell commands via child processes and uses the second exploitation path explained above (ReportingWebService); in this specific case the calc.exe process (the system calculator) is opened.
The malicious commands are present in Base64-encoded form and are executed during a deserialization step inside the WSUS service.
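If you are triaging logs and come across such a Base64 blob, it is worth remembering that PowerShell's -EncodedCommand convention is Base64 over UTF-16LE text, which may or may not be how a given payload was built. A small Python sketch, using a harmless placeholder string rather than the PoC's actual payload, shows the round trip:

import base64

# PowerShell -EncodedCommand payloads are Base64 over UTF-16LE text.
sample = base64.b64encode("calc.exe".encode("utf-16-le")).decode("ascii")
print(sample)                                        # what the blob looks like in a log
print(base64.b64decode(sample).decode("utf-16-le"))  # the recovered command text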
This deserialization mechanism is the crucial point where an attacker can inject any other command to carry out reconnaissance or post-exploitation activity.
The process sequence observed in the lab test was as follows:
WsusService.exe -> Cmd.exe -> Calc.exe -> win32calc.exe
These are the child processes spawned by the legitimate WSUS process (wsusservice.exe).
(Note: in this specific PoC, running calc.exe (the system calculator) serves as a non-destructive proof that remote code execution succeeded.)
The command is also persistent: when the server or the service restarts, the previously injected RCE runs again. The same happens when the MMC (Microsoft Management Console) is started with the WSUS snap-in, which is used to configure and monitor the service.
Demo video: the full run of this PoC can be seen in the video available at this link: youtube.com/watch?v=CH4Ped59SL…
youtube.com/embed/CH4Ped59SLY?…
Key Monitoring Points and Artifacts
The following table summarizes the key artifacts to examine and the detection criteria for identifying a possible compromise via this CVE:
Here is an example of the log during the tests.
The contents of SoftwareDistribution.log
And finally the IIS log, where an anomalous user agent appears.
(This log is only available if the "HTTP Logging" feature is installed, under the server's "Web Server / Health and Diagnostics" sub-category.)
Global attack surface
The attack surface associated with this vulnerability is extremely significant.
At present, thousands of WSUS instances remain exposed to the Internet worldwide and are potentially vulnerable. A Shodan search shows more than 480,000 results globally, over 500 of them in Italy alone, classified as open services on the two default WSUS ports.
Recommendations and Mitigations
The primary recommendation is to apply the emergency security patches released by Microsoft immediately.
For organizations that cannot deploy the updates right away, Microsoft has suggested the following temporary measures to mitigate the risk, to be treated as stopgaps:
- Disable the WSUS server role: disable or completely remove the WSUS role from the server to eliminate the attack vector. Note that this breaks the server's ability to manage and distribute updates to client systems.
- Block the high-risk ports: block all inbound traffic on TCP ports 8530 and 8531 via the host firewall (a quick way to verify the block from another machine is sketched below). As with disabling the role, this prevents the server from operating.
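To confirm from another host that the block is actually in effect, a plain TCP connection test is enough. The following Python sketch uses a placeholder hostname and simply reports whether the default WSUS ports still accept connections:

import socket

HOST = "wsus.example.internal"   # placeholder: your WSUS server
for port in (8530, 8531):        # default WSUS HTTP/HTTPS ports
    try:
        with socket.create_connection((HOST, port), timeout=3):
            print(f"{HOST}:{port} is still reachable")
    except OSError:
        print(f"{HOST}:{port} appears blocked or closed")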
It is essential that organizations follow this guidance and engage in threat hunting to detect any signs of compromise or earlier exploitation attempts.
The article 'Tasting the Exploit: HackerHood tests the Microsoft WSUS Exploit CVE-2025-59287' originally appeared on Red Hot Cyber.
In real time: US deportation agency ICE keeps expanding its surveillance arsenal
Why the new challenges of global competition are coming from the Arctic
@Notizie dall'Italia e dal mondo
The first national conference on the Arctic, promoted by the Undersecretary of State for Defence, Isabella Rauti, brought together representatives of the institutions, the armed forces and the diplomatic and scientific communities at the Centre for Higher Defence Studies (Casd). The objective: to recognize
“The shameless use of covert recording technology at massage parlours to gain likes, attention, and online notoriety is both disgusting and dangerous.”#News #ballotinitiatives #1201
The Internet of Agents based on Large Language Models
News from the Nexa Center for Internet & Society at the Politecnico di Torino, via @Etica Digitale (Feddit)
A TIM-financed project which aims to address the technological, economic, political and social impacts of the widespread diffusion and use of agentic AI tools.
The post The Internet of Agents based on Large Language Models appeared
The price of copper in Ecuador: the Shuar Arutam people's fight for survival
@Notizie dall'Italia e dal mondo
Index The original version of this article was published by the Arcc foundation, in English. The translation is by the author. "We, the Shuar, are a warrior people. For thousands of years we have lived in the forest that runs down from the
Cybersecurity: four EU calls worth 50 million euros, including for submarine cables
@Informatica (Italy e non Italy 😁)
The new European cybersecurity calls. The European Commission is launching four new calls dedicated to cybersecurity, with a total investment of more than 50 million euros under the Digital Europe Programme (Digital Europe
fnsi.it/giornalisti-minacciati…
FREE ASSANGE Italia
FNSI - Threatened journalists: cases up 78% in 2025 https://www.fnsi.it/giornalisti-minacciati-nel-2025-casi-aumentati-del-78 Telegram
Looking beyond the AI hype
THIS IS A BONUS EDITION OF DIGITAL POLITICS. I'm Mark Scott, and I'm breaking my rule of not discussing my day job in this newsletter. I'm in New York to present this report about the current gaps in social media data access and the public-private funding opportunities to meet those challenges.
The project is part of my underlying thesis that much, if not all, of digital policymaking is done in a vacuum without quantifiable evidence.
When it comes to the effects of social media on society and the potential need for lawmakers to intervene, we are maddeningly blind to how these platforms operate; what impact they have on people's online-offline habits; and what interventions, if any, are required to make social media more transparent and accountable to all of us.
The report is a call-to-arms for practical steps required to move beyond educated guesses in digital policymaking to evidence-based oversight.
It's a cross-post from Tech Policy Press.
Let's get started:
THE CASE FOR SUPPORTING SOCIAL MEDIA DATA ACCESS
IN THE HIERARCHY OF DIGITAL POLICYMAKING PRIORITIES, it’s artificial intelligence, not platform governance, that is now the cause célèbre.
From the United States’ public aim to dominate the era of AI to the rise of so-called AI slop created by apps such as OpenAI’s Sora, the emerging technology has seemingly become the sole priority across governments, tech companies, philanthropic organizations and civil society groups.
This fixation on AI is a mistake.
It’s a mistake because it relegates equally pressing areas of digital rulemaking — especially those related to social media’s impact on the wider world — down the pecking order at a time when these global platforms have a greater say on people’s online, and increasingly offline, habits than ever before.
Current regulatory efforts, primarily in Europe, to rein in potential abuses linked to platforms controlled by Meta, Alphabet and TikTok have so far been more bark than bite. Social media giants remain black boxes to outsiders seeking to shine a light on how the companies’ content algorithms determine what people see in their daily feeds.
On Oct 24, the European Commission announced a preliminary finding under the European Union's Digital Services Act that Meta and TikTok had failed to meet their obligations to make it easier for researchers to access public data on their platforms.
These companies’ ability to decide how their users consume content on everything from the Israel-Hamas conflict to elections from Germany to Argentina is now equally interwoven into Washington’s attempts to roll back international online safety legislation in the presumed defense of US citizens’ First Amendment rights.
Confronted with this cavalcade of ongoing social media-enabled problems, the collective digital policymaking shift to focus almost exclusively on artificial intelligence is the epitome of the distracted boyfriend meme.
While governments, industry and civil society compete to outdo themselves on AI policymaking, the current ills associated with social media are being left behind — a waning after-thought in the global AI hype that has transfixed the public, set off a gold rush between industrial rivals and consumed governments in search of economic growth.
But where to focus?
In a report published via Columbia World Projects at Columbia University and the Hertie School’s Centre for Digital Governance on Oct 23, my co-authors and I lay out practical first steps in what can often seem like a labyrinthine web of problems associated with social media.
Our starting point is simple: the world currently has limited understanding about what happens within these global platforms despite companies’ stated commitments, through their terms of service, to uphold basic standards around accountability and transparency.
It’s impossible to diagnose the problem without first identifying the symptoms. And in the world of platform governance, that requires improved access to publicly-available and private social media data — in the form of engagement statistics and details on how so-called content recommender systems function.
Thankfully, the EU and, soon, the United Kingdom have passed the world's first regulated regimes that mandate social media giants provide such information to outsiders, as long as they meet certain requirements like being associated with an academic institution or a civil society organization.
Elsewhere, particularly in the US, researchers are often reliant on voluntary commitments from companies growing increasingly adversarial in their interactions with outsiders whose work may shine unwanted attention on problematic areas within these global platforms.
Our report outlines the current gaps in how social media data access works. It builds on a year of workshops during which more than 120 experts from regulatory agencies, academia, civil society groups and data infrastructure providers identified the existing data access limitations and outlined recommendations for public-private funding to address those failings.
All told, it represents a comprehensive review of current global researcher data access efforts, based on inputs from those actively engaged in the policy area worldwide.
At a time when the US government has pulled back significantly from funding digital policymaking and many philanthropies are shifting gears from social media to artificial intelligence, it can feel like a hard sell to urge both public and private funders to open up their wallets to support a digital policymaking area fraught with political uncertainty.
But our recommendations are framed as practical attempts to fill current shortfalls that, with just a little support, could have an exponential impact on improving the transparency and accountability pledges that all of the world’s largest social media companies say they remain committed to.
Some of the ideas will require more of a collective effort than others.
Participants in the workshops highlighted the need for widely-accessible data access infrastructure — akin to what was offered via Meta’s CrowdTangle data analytics tool before the tech giant shut it down in 2024 — as a starting point, even though such projects, collectively, will likely cost in the tens of millions of dollars each year.
But many of the opportunities are more short-term than long-term.
That was by design. The workshops underpinning the report made clear the independent research community needed technical and capacity-building support more than it needed moonshot projects which may fail to deliver on the dual focus on increased transparency and accountability for social media.
The recommendations include expanded funding support to ensure academics and civil society groups are trained in world-class data protection and security protocols — preferably standardized across the whole research community — so that data about people’s social media habits are kept safe and not misused like what happened in the Cambridge Analytica scandal in 2018.
It also includes programs to allow new researchers to gain access to social media data access regimes that often remain accessible to only a handful of organizations, as well as attempts to create international standards across different countries' regulated regimes so that jurisdictions can align, as much as they can, on their approach to social media data access.
Such day-to-day digital policymaking does not have the bells and whistles associated with the current AI hype. It's borne out of the realities of independent researchers and regulators seeking to address near-and-present harms tied to social media, and not in the alarmism that artificial intelligence may, some day, represent an existential threat to humanity.
That, too, was by design. Often, digital policymaking, especially on AI, can become overly-complex — lost in technical jargon and misconceptions of what technology can, and can not, do.
By outlining where public and private funders can meet immediate needs on society-wide problems tied to social media, my co-authors and I are clear where digital policymaking priorities should lie: in the need to improve people’s understanding of how these global platforms increasingly shape the world around us.
marcoboccaccio
in reply to Andrea R.: I thought it was all saved on the device, but instead it isn't even in the account, which I can now only access on a PC by going through the Feltrinelli one.