Wave of attacks against Palo Alto Networks: over 2,200 IPs involved in the new campaign
Starting October 7, 2025, there has been a large-scale intensification of targeted attacks against Palo Alto Networks' GlobalProtect (PAN-OS) login portals. Over 2,200 unique IP addresses have been involved in reconnaissance activity.
This is a notable increase from the 1,300 IP addresses initially detected only a few days earlier. According to monitoring by GreyNoise Intelligence, it represents the most intense scanning activity of the past 90 days.
On October 3, 2025, a 500% spike in scanning activity marked the start of the reconnaissance campaign: roughly 1,300 unique IP addresses were observed probing Palo Alto login portals that day, the highest level of scanning recorded in the preceding three months.
In the 90 days leading up to that event, daily scan volumes had almost never reached the threshold of 200 IPs.
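As a rough illustration of the kind of baseline comparison GreyNoise describes (a sudden jump measured against a 90-day history), the sketch below flags any day whose unique-IP count exceeds a multiple of the trailing mean. The daily counts are invented for illustration; this is not GreyNoise tooling.

```python
# Illustrative sketch: flag scanning spikes against a rolling baseline.
# The daily unique-IP counts below are fabricated example data.
from statistics import mean

def flag_spikes(daily_unique_ips, window=90, factor=5.0):
    """Return indices of days whose unique-IP count exceeds
    `factor` times the mean of the preceding `window` days."""
    spikes = []
    for i, count in enumerate(daily_unique_ips):
        history = daily_unique_ips[max(0, i - window):i]
        if history and count > factor * mean(history):
            spikes.append(i)
    return spikes

# 90 quiet days under 200 IPs, then a 1,300-IP spike on day 90.
series = [180] * 90 + [1300]
print(flag_spikes(series))  # → [90]
```

A real detector would also account for weekly seasonality and use medians to resist outliers, but the core idea is the same comparison against the trailing window.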
GreyNoise's analysis showed that the overwhelming majority of the malicious IP addresses, fully 91%, are located in the United States, with further concentrations in the United Kingdom, the Netherlands, Canada and Russia.
The substantial infrastructure investment behind the operation is underscored by security specialists finding roughly 12% of the subnets of ASN 11878 dedicated entirely to scanning Palo Alto login gateways. The threat actors are likely working systematically through large credential databases: the failed-authentication patterns point to automated brute-force operations against GlobalProtect SSL VPN portals.
GreyNoise has released a comprehensive dataset of the unique usernames and passwords harvested from the monitored Palo Alto login attempts, allowing security teams to assess potential credential exposure. Technical analysis shows that 93% of the IP addresses involved were classified as suspicious, while 7% were judged malicious.
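For defenders, the failed-authentication pattern described above can be hunted with a simple heuristic: a single source cycling through many distinct usernames. The sketch below assumes a hypothetical log format of (source_ip, username) pairs and is purely illustrative, not a GreyNoise or Palo Alto tool.

```python
# Illustrative sketch: flag sources that try many distinct usernames,
# a pattern consistent with automated credential testing.
# The (source_ip, username) event format is a hypothetical assumption.
from collections import defaultdict

def flag_bruteforce(failed_logins, min_usernames=10):
    """failed_logins: iterable of (source_ip, username) pairs taken
    from failed VPN authentication events."""
    seen = defaultdict(set)
    for ip, user in failed_logins:
        seen[ip].add(user)
    return {ip for ip, users in seen.items() if len(users) >= min_usernames}

events = [("203.0.113.7", f"user{i}") for i in range(25)]  # one noisy source
events += [("198.51.100.2", "alice")] * 3                  # a fat-fingered user
print(flag_bruteforce(events))  # → {'203.0.113.7'}
```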
Examination of the scanning activity reveals several regional clustering patterns with distinct TCP fingerprints, suggesting that multiple organized threat groups are operating concurrently. Security researchers have also identified possible links between the Palo Alto scanning wave and simultaneous reconnaissance operations against Cisco ASA devices.
Both campaigns share dominant TCP fingerprints tied to infrastructure in the Netherlands, along with similar regional clustering behavior and tooling characteristics. The multi-technology targeting suggests a broader reconnaissance campaign against enterprise remote-access solutions.
The article Wave of attacks against Palo Alto Networks: over 2,200 IPs involved in the new campaign originally appeared on il blog della sicurezza informatica.
Homebrew Dam Control System Includes all the Bells and Whistles
Over on brushless.zone, we’ve come across an interesting write-up that details the construction of a dam control system. This is actually the second part; in the first, we learned that some friends purchased an old, dysfunctional 80 kW dam with the intention of restoring it. One friend was in charge of the business paperwork, one the mechanical side of things, and the other was responsible for the electronics — you can probably guess which one we’re interested in.
The site controller is built around a Nucleo-H753 featuring the STM32H753ZI microcontroller, which was selected due to it being the largest single-core version of the dev board available. This site controller board features a dozen output light switches, sixteen front-panel button inputs, dual 24 V PSU inputs, multiple non-isolated analog inputs, atmospheric pressure and temperature sensors, multiple analog multiplexers, a pair of SSD1309 OLED screens, and an ESP32 for internet connectivity. There’s also fiber optic TX and RX for talking to the valve controller, a trio of isolated hall-effect current sensors for measuring the generator phase current, through current transformers, four contactor outputs (a contactor is a high-current relay), a line voltage ADC, and the cherry on top — an electronic buzzer.
The valve controller has: 48 V input from either the PSU or battery, motor phase output, motor field drive output, 8 kV rated isolation relay, limit switch input, the other side of the optical fiber TX and RX for talking to the site controller board, and connectors for various purposes.
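The write-up doesn’t describe the protocol running over the fiber TX/RX pair, so here’s one minimal framing scheme a site/valve link like this might plausibly use: a start byte, a length byte, the payload, and a CRC-16 so corrupted frames are rejected rather than acted on. Everything below (opcode values, CRC variant, frame layout) is an assumption for illustration, not taken from the project.

```python
# Hypothetical minimal framing for a point-to-point serial/fiber link:
# [START][len][payload...][CRC-16 big-endian]. Not the project's protocol.
import struct

START = 0x7E

def crc16(data: bytes) -> int:
    # CRC-16/CCITT-FALSE, a common choice on microcontrollers
    crc = 0xFFFF
    for b in data:
        crc ^= b << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def encode(payload: bytes) -> bytes:
    body = struct.pack("B", len(payload)) + payload
    return bytes([START]) + body + struct.pack(">H", crc16(body))

def decode(frame: bytes) -> bytes:
    assert frame[0] == START, "bad start byte"
    n = frame[1]
    body = frame[1:2 + n]
    (crc,) = struct.unpack(">H", frame[2 + n:4 + n])
    assert crc16(body) == crc, "CRC mismatch"
    return body[1:]

cmd = b"\x01\x42"  # e.g. a made-up 'set valve position' opcode + argument
assert decode(encode(cmd)) == cmd
```

On a safety-relevant link like a dam valve drive, the CRC matters more than the framing: a flipped bit should cause a retransmit, never a spurious valve command.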
If you’re interested in seeing this dam control system being tested, check out the video embedded below.
youtube.com/embed/8laQxXGqc38?…
Social media at a time of war
WELCOME BACK TO DIGITAL POLITICS. I'm Mark Scott, and I have many feelings about Sora, OpenAI's new AI-generated social media platform. Many of which are encapsulated by this video by Casey Neistat. #FreeTheSlop.
— The world's largest platforms have failed to respond to the highest level of global conflict since World War II.
— The semiconductor wars between China and the United States are creating a massive barrier between the world's two largest economies.
— China's DeepSeek performs significantly worse than its US counterparts on a series of benchmark tests.
Let's get started:
WHEN PLATFORM GOVERNANCE MEETS GLOBAL CONFLICT
OCT 7 MARKED THE 2-YEAR ANNIVERSARY of Hamas militants attacking Israel, killing roughly 1,200 citizens and engulfing the region in a seemingly endless conflict. Tens of thousands of Palestinians have died, many more have been displaced, and attacks (or the threat of attack) against both Israelis and Jews, worldwide, have skyrocketed.
I won't pretend to understand the complexities of the Israeli-Hamas war (more on that here, here and here). But the last two years have seen a slow degradation of the checks and safeguards that social media companies once had in place to protect users from war-related content, propaganda and illegal content now rife wherever you look online.
First, let's be clear. This isn't just an Israeli-Hamas issue. As we hurtle toward the end of 2025, there are currently almost 60 active state-based conflicts worldwide and global peace is at its lowest level in 80 years, according to statistics from the Institute for Economics and Peace.
That is not social media's fault. As much as it's easy to blame TikTok, YouTube and Instagram for the ills of the world, real-world violence is baked into generational conflicts, multitudes of overlapping socio-economic issues and other analogue touch-points that have nothing to do with people swiping on their phones.
But it's also true the recent spike in global conflicts has come at a time of collective retrenchment on trust and safety issues from social media giants that, at the bare minimum, have failed to stop some of the offline violence from spreading widely within online communities. Again, there's a causation versus correlation issue here that we must be careful with. But at a time of heightened polarization (and not just in the US and Europe), the capacity for tech platforms to be used to foment real-world instability and violence has never been higher.
Before I get irate complaints from those of you working within these companies: social media platforms do have clear terms of service that are supposed to limit war-related content from spreading among users. You can review them here, here, here and here. But it's one thing to have clear-cut rules, and another to actively implement them.
Thanks for reading the free monthly version of Digital Politics. Paid subscribers receive at least one newsletter a week. If that sounds like your jam, please sign up here.
Here's what paid subscribers read in September:
— A series of legal challenges to online safety legislation is testing how these rules are implemented; The unintended consequences of failing to define "tech sovereignty;" Where the money really goes within the chip industry. More here.
— What most people don't understand about Brussels' strategy toward technology; Unpicking the dual antitrust decisions against Google from Brussels and Washington; AI chatbots still return too much false information. More here.
— The next transatlantic trade dispute will be about digital antitrust, not online safety; Washington's new foreign policy ambitions toward AI; The US' spending spree on data centers. More here.
— An inside look into the United Nations' takeover of AI governance; How the United Kingdom embraced the US "AI Stack;" People view the spread of false information as a higher threat than a faltering global economy. More here.
— Washington's proposed deal to untangle TikTok US from Bytedance is not what it first appears; How social media companies are speaking from both sides of their mouths on online safety; AI's expected boost to global trade. More here.
Social media companies' neglect related to conflicts outside the Western world has been a feature for years (more on that here.) Now, that same level of omission has seeped into conflicts, including those within the Middle East and Ukraine, that are closer to home for the Western public.
There are many reasons for this shift.
Companies like Alphabet and Meta have pared back their commitments to independent fact-checking which provided at least some pushback to government and non-state efforts to peddle falsehoods associated with these global conflicts. A shift to crowdsourced fact-checking — initially rolled out by X, and then followed by Meta — has yet to fill that void. That's mostly because companies have found it difficult to find consensus among their users about often divisive topics (including those related to warfare) which is required before these crowdsourced fact-checks are published.
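The consensus requirement described above can be modeled crudely. X's actual Community Notes system uses matrix factorization over rater/note data; the function below is only a toy stand-in that requires "helpful" ratings from raters on both sides of a divide before a note publishes. The sides, thresholds and data shape are all invented for illustration.

```python
# Toy model of a "bridging" consensus gate: a fact-check publishes only
# if raters from BOTH viewpoint clusters found it helpful. This is a
# deliberate simplification, not the real Community Notes algorithm.
def note_publishable(ratings, min_per_side=2):
    """ratings: list of (rater_side, is_helpful) with side 'A' or 'B'."""
    helpful = {"A": 0, "B": 0}
    for side, is_helpful in ratings:
        if is_helpful:
            helpful[side] += 1
    return all(v >= min_per_side for v in helpful.values())

# Helpful ratings from only one side are not enough:
print(note_publishable([("A", True)] * 5))                      # → False
print(note_publishable([("A", True)] * 3 + [("B", True)] * 2))  # → True
```

The toy makes the article's point concrete: on divisive war-related topics, cross-divide agreement is rare, so many notes never clear the gate.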
Social media platforms have similarly spent the last three years gutting their existing trust and safety teams to the point where the industry is on life support. This was initially done for economic reasons. Faced with a struggling advertising sector in 2022, company executives sought cost savings, wherever they could, and internal trust and safety teams felt the brunt of those efforts. Fast forward to 2025, and there has been an ideological shift to "free speech" among many of these firms which makes any form of content moderation anathema to the current (US-focused) zeitgeist.
Third: politics. The current White House's aversion to online safety is well known. So too are the US Congress's accusations that other countries' digital regulation unfairly infringes on American citizens' First Amendment rights. But from India to Slovakia, there are growing local efforts to quell platforms' content moderation programs — and the associated domestic legislation that has sprouted up from Brazil to the United Kingdom. In that geopolitical context, social media firms have instituted a "go slow" on many of their internal systems — even if (at least in countries with existing online safety regulation) they still comply with domestic rules.
Making things more difficult is the platforms' increasingly adversarial relationship with outsiders seeking to hold these firms to account for their stated trust and safety policies. (Disclaimer: My day job puts me in this category, though my interactions with the companies remain cordial.) Researchers have found it increasingly difficult to access publicly-available social media data. Others have faced legal challenges to analyses which cast social media giants in an unfavorable light. Industry-linked funding for such independent "red-teaming" of platform weaknesses has fallen off a cliff.
Taken together, these four points represent a fundamental change in what had been, until now, a progressive multi-stakeholder approach to ridding global social media platforms of illegal and gruesome content — and not just related to warfare.
Before, companies, policymakers and outside groups worked together (often with difficulty) to make these social media networks a safe space for people to express themselves in ways that represented free speech rights and safeguarded individuals from hate. That coalition has now disintegrated amid a combination of hard-nosed economics, shifting geopolitics and fundamental differences over what constitutes tech companies' trust and safety obligations.
Each of the above points occurred separately. No one set out thinking that cutting back on internal trust and safety teams; ending relations with fact-checkers; kowtowing to a shift in geopolitics; and reducing ties to outside researchers would make it easier for conflict-related content to spread among these social media networks.
And yet, that is what happened.
Go onto any social media platform, and within a few clicks (if you know what you're doing), you can come face-to-face with gruesome war-torn content — or, at least, material purportedly associated with one of the 59 state-based conflicts active worldwide. Even if you're not seeking out such material, the collective pullback on trust and safety has raised the possibility that you will stumble over such content in your daily doomscroll.
That is the paradox we find ourselves in at the end of 2025.
In many ways, social media has become even more ingrained in everything from politics to the latest meme craze (cue: the rise of OpenAI's Sora.) But these platforms are less secure and protected than they have ever been — at a time when the world is engulfed in the highest level of subnational, national and regional warfare in multiple generations.
Chart of the Week
THE US CENTER FOR AI STANDARDS AND INNOVATION ran a series of tests — across four well-known sectors associated with the performance of large language models — between services offered by OpenAI, Anthropic and DeepSeek.
You have to take these results with a pinch of salt, as they come from a US federal agency. But across the board, China's LLM performed significantly worse than its US rivals.
Source: Center for AI Standards and Innovation
THE AI WARS: SEMICONDUCTOR EDITION
COMMON WISDOM IS THAT YOU NEED three elements to compete in the global race around artificial intelligence. In your "AI Stack," you need world-leading microchips, you need cloud computing infrastructure that's cheap and almost universal, and you need applications like large language models that can sit on top and drive user engagement. On that first component — semiconductors — China and the US are increasingly going down different paths.
Looking back, it almost was inevitable. Washington has long safeguarded world-leading chips (from both American firms and those of its allies) from Beijing via export bans and other strong-arm tactics. The goal: to ensure China's AI Stack was always one step behind its US counterpart.
Yet that strategy is starting to backfire. Yes, Western AI chips are still better than their Chinese equivalents. But the lack of access to such semiconductors has forced the world's second largest economy to invest billions in domestic production in the hopes of eventually catching up with — and surpassing — the likes of Nvidia or Taiwan Semiconductor Manufacturing Company (TSMC).
What has galvanized this Chinese resolve is the repeated efforts by both the Trump and Biden administrations to hobble Chinese firms' ability to access the latest semiconductors. In this never-ending 'will they, or won't they?' game of national security ping-pong, the Trump 2.0 administration agreed in August to allow Nvidia and AMD to sell pared-down versions of their latest chips to China — as long as they gave the US federal government a 15 percent slice of that export revenue. Principled diplomacy, it was not.
That plan appears to have backfired. Nvidia is now under an antitrust investigation by Chinese authorities over its 2020 takeover of Israeli chipmaker Mellanox. The Cyberspace Administration of China has also reportedly told the country's largest tech firms, including Alibaba, ByteDance and Baidu, not to buy Nvidia's semiconductors. Jensen Huang, chief executive of the US chip firm, said he was "disappointed" with that move (which has never been officially confirmed.)
If you're interested in sponsoring Digital Politics, please get in touch on digitalpolitics@protonmail.com
Nvidia has invested millions to design China-specific microchips that both meet the national security limitations demanded by Washington and can be sold directly into the Middle Kingdom in ways that placate Beijing. If Chinese officials close the door — and require local firms to use domestic alternatives, many of which are reportedly almost on par with their Western rivals — then it's another indicator the US and China are on diverging paths when it comes to technological development.
Again, a lot of this was foreseeable. Repeated White House administrations urged American and Western chip and equipment firms to steer clear of China. In response, Beijing invested billions into local semiconductor production, much of which has remained at a lower level of sophistication. But as in other tech-related industries, Chinese manufacturers have steadily risen up the stack to offer world-beating hardware. There's little reason to expect semiconductors, eventually, to be any different.
Sign up for Digital Politics
Thanks for getting this far. Enjoyed what you've read? Why not receive weekly updates on how the worlds of technology and politics are colliding like never before. The first two weeks of any paid subscription are free.
What does this all mean for the politics of technology?
First, Western semiconductor firms offering pared-back versions of their latest chips to China may have the door shut on them. Beijing may need these manufacturers, in the short term. But don't expect that welcome to remain warm — especially as Western officials continue to rattle sabres.
Second, the need for Chinese firms to rely on (currently sub-par, but rapidly advancing) homegrown chips will lead to scrappy innovation once associated just with Silicon Valley. We can debate whether the meteoric rise of DeepSeek was truly as unique as first believed (based on the company's ties to the wider Chinese tech ecosystem.) But relying on second-tier semiconductors will force Chinese AI firms to be more nimble compared to their US counterparts with seemingly unlimited access to chips, compute power and data.
Third, the "splinternet" will come to hardware. I wrote this in 2017 to explain how the digital world was being balkanized into regional fiefdoms. The creation of rival semiconductor stacks — one led by the US, one led by China — will extend that division into the offline world. Companies will try to make the respective hardware interoperable. But it won't be in the interests of either party, as the separation expands between which semiconductors can work with other infrastructure worldwide, to maintain such networking capability.
In short, the global race between AI Stacks has entered a new era.
What I'm reading
— The Wikimedia Foundation published a human rights impact assessment on artificial intelligence and machine learning. More here.
— The European Centre of Excellence for Countering Hybrid Threats assessed the current strengths and weaknesses in the transatlantic fight against state-backed disinformation. More here.
— The Canadian government launched an AI Strategy Task Force and outlined its agenda for public feedback on the emerging technology. More here.
— The Appeals Centre Europe, which allows citizens to seek redress from social media companies under the EU's Digital Services Act, published its first transparency report. More here.
— Researchers outlined the growing differences between how countries are approaching the oversight and governance of artificial intelligence for the University of Oxford. More here.
North Korean hackers: $2 billion in cryptocurrency stolen in nine months of fraud
A network of hackers linked to North Korea stole over $2 billion in cryptocurrency in the first nine months of 2025. Analysts at Elliptic call this the largest figure ever recorded, with three months of the year still to go.
The total amount stolen to date is estimated to exceed $6 billion, and according to the United Nations and several government agencies, these funds finance North Korea's missile and nuclear weapons programs.
According to Elliptic, the real figure may be even higher, since attributing specific thefts to Pyongyang is difficult, requiring blockchain analysis, money-laundering examination and intelligence work. In some cases incidents only partially match the characteristic patterns of North Korean groups, while other episodes may never have been reported.
The main source of the record losses was February's hack of the Bybit exchange, which resulted in the theft of $1.46 billion in cryptocurrency. Other confirmed incidents this year include attacks on LND.fi, WOO X and Seedify. Elliptic also links more than 30 additional, publicly unreported incidents to North Korea. The total is nearly triple last year's figure and significantly exceeds the previous record, set in 2022, when assets were stolen from services such as Ronin Network and Horizon Bridge.
The attack vector has also changed significantly. Where cybercriminals previously exploited vulnerabilities in the infrastructure of crypto services, they now increasingly rely on social engineering. The major losses of 2025 stem from deception, not technical flaws.
Wealthy individuals who lack enterprise-grade security mechanisms are at risk. They are targeted through fake contacts, phishing messages and convincing communication schemes, sometimes because of their connections to organizations holding large amounts of digital assets. The weak link in the crypto sector is thus gradually becoming the human element.
At the same time, a race is under way between analysts and launderers. As blockchain-tracing tools grow more accurate, criminals devise ever more sophisticated schemes for moving stolen assets. A recent Elliptic report describes new approaches to covering their tracks: multi-stage transaction mixing, cross-chain transfers across the Bitcoin, Ethereum, BTTC and Tron blockchains, use of obscure networks with little analytic coverage, and exploitation of "refund addresses" that redirect funds to fresh wallets. Sometimes the criminals create and trade their own tokens, issued directly on the networks where the laundering takes place. All of this turns investigations into a cat-and-mouse game between investigators and highly skilled groups operating under state control.
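The investigators' advantage described in the report, that every coin leaves a trail, comes down to graph traversal. The toy sketch below follows funds from a victim address through intermediary wallets with a breadth-first walk; the transaction graph and address labels are entirely fabricated for illustration and bear no relation to any real case.

```python
# Toy illustration of fund tracing: a breadth-first walk over a
# fabricated transaction graph finds every address reachable from the
# theft, including intermediary "peel" wallets and cash-out points.
from collections import deque

def trace(tx_graph, stolen_from):
    """tx_graph: {address: [addresses it sent funds to]}."""
    reached, queue = set(), deque([stolen_from])
    while queue:
        addr = queue.popleft()
        for nxt in tx_graph.get(addr, []):
            if nxt not in reached:
                reached.add(nxt)
                queue.append(nxt)
    return reached

graph = {
    "victim_exchange": ["mixer_in"],
    "mixer_in": ["peel_1", "peel_2"],
    "peel_1": ["cashout_a"],
    "peel_2": ["cashout_b"],
}
print(sorted(trace(graph, "victim_exchange")))
# → ['cashout_a', 'cashout_b', 'mixer_in', 'peel_1', 'peel_2']
```

Real chain analysis is vastly harder, with cross-chain hops, token swaps and address clustering heuristics, but the transparency the article mentions is exactly this: the edges of the graph are public.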
Still, blockchain transparency remains a key advantage for investigators. Every stolen coin leaves a digital trail that can be analyzed and linked to other transactions. According to the researchers, this makes the cryptocurrency ecosystem more resilient and reduces North Korea's ability to fund its military programs.
Two billion dollars stolen in just nine months is a worrying signal of the scale of the threat. North Korean cyber units are growing ever more inventive, but blockchain-based forensic tools help maintain the balance, ensuring transparency and increasing accountability among market operators. This ongoing battle for control of digital flows will decide not only the fate of the cryptocurrency market but also questions of international security.
The article North Korean hackers: $2 billion in cryptocurrency stolen in nine months of fraud originally appeared on il blog della sicurezza informatica.
Building the DVD Logo Screensaver with LEGO
The completed Lego DVD screensaver. (Credit: Grant Davis, YouTube)
There’s something extremely calming and pleasing about watching a screensaver that merely bounces some kind of image around, with the DVD logo screensaver of a DVD player being a good example. The logical conclusion is thus that it would be great to replicate this screensaver in Lego, because it’d be fun and easy. That’s where [Grant Davis]’s life got flipped upside-down, as this turned out to be anything but an easy task in his chosen medium.
Things got off to a rocky start with figuring out how to make the logo bounce against the side of the ‘screen’, instead of having it merely approach before backing off. The right approach here seemed to be Lego treads, as used on e.g. excavators, which give the motion that nice pause before ‘bouncing’ back in the other direction.
With that seemingly solved, most of the effort went into assembling a functional yet sturdy frame, all driven by a single Lego Technic electromotor. Along the way there were many cases of rapid self-disassembly, ultimately leading to a complete redesign using worm gears, thus requiring running the gears both ways with help from a gearbox.
Since the screensaver is supposed to run unattended, many end-stop and toggle mechanisms were tried and discarded before settling on the design that would be used for the full-sized build. Naturally, scaling up always goes smoothly, so everything got redesigned and beefed up once again, with more motors added and multiple gearbox design changes attempted after some unfortunate shredded gears.
Ultimately [Grant] got what he set out to do: the DVD logo bouncing around on a Lego ‘TV’ in a very realistic fashion, set to the noise of Lego Technic gears and motors whirring away in the background.
Thanks to [Carl Foxmarten] for the tip.
youtube.com/embed/1sPK42-fzqU?…
When designating a DPO, the appointment must not be a secret!
Designating a DPO follows the procedure set out in Art. 37(7) GDPR, which requires two steps: publishing the DPO's contact details and communicating them to the supervisory authority. A formal appointment is therefore a necessary but not sufficient condition, which is why the Italian data protection authority (Garante Privacy) has repeatedly weighed in on the matter, sanctioning mostly public bodies for skipping these additional steps.
These steps, to be clear, are anything but mere formalities: completing them establishes some of the fundamental preconditions for the effective performance of the DPO's duties.
Otherwise, the organization is unable to provide the DPO's point of contact to either data subjects or the supervisory authority, which reduces the role to a nomination on paper with no operational link.
Why this is not a formality.
Publishing the DPO's contact details serves to guarantee the role vis-à-vis data subjects, as expressly provided by Art. 38(4) GDPR:
Data subjects may contact the data protection officer with regard to all issues related to processing of their personal data and to the exercise of their rights under this Regulation.
This entails setting up a dedicated channel whose communications are kept confidential, overcoming potential reluctance, above all among internal staff, to report non-compliance or raise doubts.
Communicating the contact details to the authority, in turn, enables the DPO to act as the point of contact with the supervisory authority under Art. 39(1)(e) GDPR, easing the dialogue through which, for example, the Garante Privacy can request clarifications or further information, and obtain timely responses.
The Garante Privacy's dedicated procedure requires providing at least one e-mail address (ordinary e-mail or certified e-mail, PEC) and one telephone number (landline or mobile).
This applies regardless of whether the DPO is internal or external.
As for publication of the contact details, the designating body is asked to indicate how it has chosen to publish them, and may also point to contact forms, for example.
Must the DPO's name be published?
Given that the name must in any case be communicated to the supervisory authority, the question remains whether there is an obligation to publish the DPO's name. Since this is not specifically required, it is at most a recognized and widely shared good practice. The final word rests with the controller or processor, which, having weighed the circumstances, decides whether that information may be necessary or useful for better protecting data subjects' rights.
For internal staff, however, the WP 243 Guidelines on Data Protection Officers recommend communicating the name, for example by publishing it on the intranet, in the organizational chart, or in the privacy notices provided to employees.
The reason is simply to ensure the operational integration of the role, making the DPO both easy to identify and easy to reach.
In short: designating a DPO must not remain on paper.
Nor can it be left forgotten in a drawer.
The article When designating a DPO, the appointment must not be a secret! originally appeared on il blog della sicurezza informatica.
Redox OS enables multithreading by default and improves performance
The developers of Redox OS, an operating system written in Rust, have enabled multithreading support by default on x86 systems. The feature was previously experimental, but after several bug fixes it has become an integral part of the platform, bringing a significant performance boost on modern desktops and laptops.
Redox OS was built from scratch and implemented entirely in Rust, a language focused on safety and fault tolerance. The move to a multithreaded model lets the system use CPU resources more efficiently and run parallel tasks faster, which matters especially in desktop and server scenarios.
The team has also introduced several important optimizations: handling of small files has been improved, system installation has been sped up, and LZ4 compression support has been added to the RedoxFS file system.
The developers call these changes a "fundamental step" in improving the operating system's speed and responsiveness.
The update also includes improvements to applications and the user experience, touching the core tools and the interface and making the system more stable and easier to use day to day.
A striking demonstration of the project's capabilities was the successful boot of Redox OS on BlackBerry KEY2 LE and Google Pixel 3 smartphones. While these are still test builds, the developers stress that the kernel and driver model are already versatile enough for mobile devices.
Redox OS remains one of the few operating systems developed from scratch in Rust and independent of Linux or BSD code. The project develops its own file system, kernel and environment, making it a unique example of a "pure" Rust approach to systems programming.
The article Redox OS enables multithreading by default and improves performance originally appeared on il blog della sicurezza informatica.
Got Teams? You’re a Target! The Microsoft platform in the crosshairs of nation-states and criminals
The Microsoft Teams collaboration platform has become a prized target for attackers: its vast adoption makes it a high-value objective, and its messaging, calling, and screen-sharing features are being exploited for malicious ends. According to a Microsoft advisory, both state-sponsored threat actors and cybercriminals are increasingly abusing Teams features and capabilities in their attack chains.
Threat actors misuse its core features, namely messaging (chat), calls, meetings, and video-based screen sharing, at multiple points along the attack chain.
This raises the stakes for defenders, who must monitor, detect, and respond proactively. Although Microsoft’s Secure Future Initiative (SFI) has strengthened security, the company stresses that security teams must use the available controls to harden their enterprise Teams environments.
Attackers are exploiting the entire attack lifecycle within the Teams ecosystem, from initial reconnaissance to final impact, Microsoft said. It is a multi-stage process in which the platform’s trusted status is leveraged to infiltrate networks, steal data, and distribute malware.
The attack chain often starts with reconnaissance, during which threat actors use open-source tools such as TeamsEnum and TeamFiltration to enumerate users, groups, and tenants. They map organizational structures and pinpoint security weaknesses, such as permissive external-communication settings.
Attackers then move on to resource exploitation, compromising legitimate tenants or creating new ones with custom branding to impersonate trusted entities such as IT support. Once a credible identity is established, they proceed to initial access, often via social-engineering tactics including tech-support scams.
A textbook case is the threat actor Storm-1811, which posed as a support technician tasked with fixing supposed email problems and used that cover to deploy ransomware. A similar modus operandi was adopted by affiliates of the 3AM ransomware, who flooded employees with unsolicited emails and then used Teams calls to persuade them to grant remote access.
Once a foothold is established, threat actors focus on maintaining persistence and escalating privileges. They may add their own guest accounts, abuse device code authentication flows to steal access tokens, or use phishing lures to deliver malware that grants long-term access.
The financially motivated group Octo Tempest has been observed using aggressive social engineering on Teams to defeat multi-factor authentication (MFA) on privileged accounts. With elevated access, attackers move on to discovery and lateral movement, using tools such as AzureHound to map the compromised organization’s Microsoft Entra ID configuration and hunt for valuable data.
Mesmerizing Patterns from Simple Rules
Nature is known for the intense beauty of its patterns and bright colors; enjoying it, however, requires going outside. Who has time for that insanity!?!? [Bleuje] provides the perfect solution with his mesmerizing display of particle behavior.
Agents follow defined paths created by other agents.
These patterns of color and structure, based on 36 points, are formed from simple particles, also called agents. Each agent leaves behind a trail that adds to the pattern formation. Additionally, these trails act almost like pheromone trails, attracting other particles. This dispersion and attraction to trails creates feedback loops similar to those found in ant colony behavior or slime mold.
Complex patterns created by the algorithm can resemble many different biological formations including slime mold.
Of course, none of this behavior would be much fun to mess with if you couldn’t change the parameters on the fly, and that is one of the main features of [Bleuje]’s implementation of the 36 points’ ideas. Being able to change settings quickly and interact with the environment itself lets you play with natural-feeling patterns without leaving your house!
If you want to try out the simulation yourself, make sure to check out [Bleuje]’s GitHub repository for the project! While getting out of the house can be difficult, sometimes it’s good for you to see real natural patterns. For a great example of this hard work leading to great discoveries, look to this bio-inspired way of protecting boat hulls!
Thanks Adrian for the tip!
Tips for C Programming from Nic Barker
If you’re going to be a hacker, learning C is a rite of passage. If you don’t have much experience with C, or if your experience is out of date, you very well may benefit from hearing [Nic Barker] explain tips for C programming.
In his introduction he notes that C, invented in the 70s by Dennis Ritchie, is now more than 50 years old. This old language still appears in lists of the most popular languages, although admittedly not at the top!
He notes that the major versions of C, named for the year they were released, are C89, C99, C11, and C23. His recommendation is C99, because it has some features he doesn’t want to live without, particularly declaring variables where they are used and initializing structs with named members using designated initializers. C89 is also plagued by integer types of platform-dependent width, which C99 fixes with stdint.h. Other niceties of C99 include compound literals and // for single-line comments.
He recommends the use of clang arguments -std=c99 to enable C99, -Wall to enable all warnings, and -Werror to treat warnings as errors, then he explains the Unity Build where you simply include all of your module files from your main.c file.
It’s stressed that printf debugging is not the way to go in C and that you definitely want to be using a debugger. To elaborate on this point he explains what a segfault is and how they happen.
He goes on to explain memory corruption and how ASAN (short for AddressSanitizer) can help you find it when it happens. Then he covers C’s support for arrays and strings, which is, admittedly, not very much! He shows that it’s pretty easy to make your own array and string types, though, potentially supporting slices as well.
Finally he explains how to use arenas for memory allocation and management for static, function, and task related memory.
youtube.com/embed/9UIIMBqq1D4?…
Building a Diwheel to Add More Tank Controls to Your Commute
It’s often said that one should not reinvent the wheel, but that doesn’t mean you can’t reinterpret how said wheel is used. After initially taking the rather zany concept of a monowheel for a literal ride, [Sam Barker] decided to shift gears, asked ‘what if’, and slapped a second monowheel next to the first to create a diwheel vehicle. With much thicker steel for the wheels and an overall much more robust construction than his monowheel, the welding could commence.
It should be said here that the concept of a diwheel, or dicycle, isn’t entirely new, but the monowheel – distinct from a unicycle – is much older, with known builds at least as far back as the 19th century. Confusingly, self-balancing platforms like Segways are also referred to as ‘dicycles’, while a diwheel seems to refer specifically to what [Sam] built here. That said, diwheels are naturally stable even without gyroscopic action, which is definitely a big advantage.
The inner frame for [Sam]’s diwheel is built out of steel too, making it both very robust and very heavy. High-tech features include suspension for that smooth ride, and SLS 3D-printed nylon rollers between the inner frame and the wheels. After some mucking about with a DIY ‘lathe’ to work around some measurement errors, a lot more welding and some questionable assembly practices, everything came together in the end.
This is just phase one, however, as [Sam] will not be installing pedals like it’s an old-school monowheel. Instead it’ll have electrical drive, which should make it a bit less terrifying than the Ford Ka-based diwheel we featured in 2018, but rather close to the electric diwheel called EDWARD which we featured back in 2011. We hope to see part two of this build soon, in which [Sam] will hopefully take this beast for its first ride.
youtube.com/embed/xKOHrBmhexU?…
JawnCon Returns This Weekend
For those local to the Philadelphia area, a “jawn” can be nearly anything or anyone — and at least for this weekend, it can even be a hacker con building up steam as it enters its third year. Kicking off this Friday at Arcadia University, JawnCon0x2 promises to be another can’t-miss event for anyone with a curious mind that lives within a reasonable distance of the Liberty Bell.
The slate of talks leans slightly towards the infosec crowd, but there’s really something for everyone on the schedule. Presentations such as Nothing is Safe: An Introduction to Hardware (In)Security and Making the GameTank – A New, Real 8-Bit Game Machine will certainly appeal to those of us who keep a hot soldering iron within arm’s reach, while Rolling Recon & Tire Prints: Perimeter Intrusion Detection and Remote Shenanigans via Rogue Tire Stem RF and Get More Radio Frequency Curious should prove equally irresistible to the radio enthusiasts.
Speaking of which, anyone who wants to make their interest in radio official can sit in on the Saturday study group led by Ed “N2XDD” Wilson, the Director of the American Radio Relay League (ARRL) Hudson Division. After lunch, you can take your exam to become a licensed ham, and still have time to check out the lockpicking demonstrations from the local TOOOL chapter, the Retro Show ‘n Tell area, and rummage through the self-replenishing table of free stuff that’s looking for a new home.
Attendees can also take part in a number of unique challenges and competitions inspired by the shared professional experience of the JawnCon organizers. One of the events will have attendees putting together the fastest Digital Subscriber Line (DSL) broadband connection, as measured by era-appropriate commercial gear. Easy enough with a spool of copper wire, but the trick here is to push the legendary resilience of DSL to the limit by using unusual conductors. Think wet strings and cooked pasta. There’s also a Capture The Flag (CTF) competition that will pit teams against each other as they work their way through customer support tickets at a fictional Internet service provider.
We were on the ground for JawnCon in 2024, and even had the good fortune to be present for the inaugural event back in 2023. While it may not have the name recognition of larger East Coast hacker cons, JawnCon is backed by some of the sharpest and most passionate folks we’ve come across in this community, and we’re eager to see the event grow in 2025 and beyond.
Qualcomm Introduces the Arduino Uno Q Linux-Capable SBC
Generally people equate the Arduino hardware platforms with MCU-centric options that are great for things like low-powered embedded computing, but less so for running desktop operating systems. That looks set to change with the Arduino Uno Q, which keeps the familiar Uno form factor but features both a single-core Cortex-M33 STM32U575 MCU and a quad-core Cortex-A53 Qualcomm Dragonwing QRB2210 SoC.
According to the store page the board will ship starting October 24, with the price being $44 USD. This gets you a board with the aforementioned SoC and MCU, as well as 2 GB of LPDDR4 and 16 GB of eMMC. There’s also a WiFi and Bluetooth module present, which can be used with whatever OS you decide to install on the Qualcomm SoC.
This new product comes right on the heels of Arduino being acquired by Qualcomm. Whether the Uno Q is a worthy purchase mostly depends on what you intend to use the board for, with the SoC’s I/O going via a single USB-C connector which is also used for its power supply. This means that a USB-C expansion hub is basically required if you want to have video output, additional USB connectors, etc. If you wish to run a headless OS install this would of course be much less of a concern.
2025 Hackaday Supercon: More Wonderful Speakers
Supercon is just around the corner, and we’re absolutely thrilled to announce the second half of our slate! Supercon will sell out, so get your tickets now before it’s too late. If you’re on the fence, we hope this pushes you over the line. And if it doesn’t, stay tuned — we’ve still got to tell you everything about the badge and the fantastic keynote speaker lineup.
(What? More than one keynote speaker? Unheard of!)
And as if that weren’t enough, there’s delicious food, great live music, hot soldering irons, and an absolutely fantastic crowd of the Hackaday faithful, and hopefully a bunch of new folks too. If you’re a Supercon fan, we’re looking forward to seeing you again, and if it’s your first time, we’ll be sure to make you feel welcome.
Amie Dansby and Karl Koscher
Hands-On Hardware: Chip Implants, Weird Hacks, and Questionable Decisions
What happens when your body is the dev board? Join Amie Dansby, who’s been living with four biochip implants for years, and Karl Koscher as they dive into the wild world of biohacking, rogue experiments, and deeply questionable decisions in the name of science, curiosity, and chaos.
Arsenio Menendez
Long Waves, Short Talk: A Practical IR Spectrum Guide
Whether you’re a seasoned sensor engineer or a newcomer join us in exploring the capabilities of SWIR, MWIR, and LWIR infrared bands. Learn how each wavelength range enables enhanced vision across a variety of environments, as well as how the IR bands are used in surveillance, industrial inspection, target tracking, and more.
Daniel [DJ] Harrigan
Bringing Animatronics to Life
This talk explores the considerations behind designing a custom Waldo/motion capture device that allows him to remotely puppet a complex animatronic with over twenty degrees of freedom. We’ll discuss the electrical, mechanical, and software challenges involved in creating a responsive, robust remote controller.
Daryll Strauss
Covert Regional Communication with Meshtastic
Learn how Meshtastic uses low-cost LORA radios to build ad hoc mesh networks for secure, decentralized communication. We’ll cover fundamentals, hardware, configuration tips, and techniques to protect against threats, whether for casual chats, data sharing, or highly covert group communication.
Allie Katz and SJ Jones
Fireside Chat: Metal 3D Printing … in space?!
Metal 3D Printing … in space?! SJ Jones is an additive manufacturing solutions engineer and nobody knows metal printing for intense applications like they do. In this discussion they’ll be talking with designer and 3D printing expert Allie Katz about computational design, artful engineering, and 3D prints that can survive a rocket trip.
Davis DeWitt
Movie Magic and the Value of Practical Effects
What does it take to create something that’s never been seen before? In film and TV, special effects must not only work, but also feel alive. This talk explores how blending hardware hacking with art creates functional and emotional storytelling. From explosive stunts to robots with personality, these projects blur the lines between disciplines.
Aaron Eiche
The Magic of Electropermanent Magnets!
Electropermanent magnets are like magic: an electromagnet that stays switched after just a brief pulse of current lasting a few microseconds. Aaron shares his adventures in building his own from cheap off-the-shelf components and explains how to make them work empirically.
Fangzheng Liu
CircuitScout: Probing PCBs the Easy Way
Debugging PCBs can be challenging and time-consuming. This talk dives into CircuitScout, an open-source DIY project: a small desktop machine that automates debugging by letting you select pads from your schematic, locating them, and driving a probe for safe, hands-free testing.
Joe Needleman
From Sunlight to Silicon
AI workloads consume significant energy, but what if they didn’t? This hands-on session shows how to design and run a solar-powered computer cluster, focusing on NVIDIA Jetson Orin hardware, efficient power pipelines, and software strategies for high performance under tight energy limits.
John Duffy
The Circuits Behind Your Multimeter
Everyone uses a multimeter, but do you know what’s inside? This talk explores the inner workings, plus insights from building one, the design choices, and the tradeoffs behind common models. Discover the hidden engineering that makes this everyday tool accurate, safe, and reliable.
Josh Martin
DIY Depth: Shooting and Printing 3D Images
3D photography isn’t just for vintage nerds or high-end tech! Learn how stereoscopic film cameras work, the mechanics of lenticular lenses and how to print convincing 3D images at home, plus dive into digitizing, aligning, and processing 3D images from analog sources.
Kay Antoniak
From bytes to bobbins: Driving an embroidery machine
This talk explores how an embroidery machine brings out the best of tinkering: production, customization, and creative hacks. Learn how to run your first job on that dusty makerspace machine, create your own patch using open-source tools, and see what extra capabilities lie beyond the basics.
Keith Penney
Ghostbus: Simpler CSR Handling in Verilog
Designing FPGA applications means wrangling CSRs and connecting busses, a tedious & error-prone task. This talk introduces Ghostbus, an approach that automates address assignment and bus routing entirely in Verilog to keep designs clean, maintainable, and functional.
Kumar Abhishek
Laser ablating PCBs
Once too expensive, PCB fabrication via laser ablation of copper is now accessible via commodity fiber laser engravers. This talk shares experiences in making boards using this chemical-free technique and how it can help in rapid prototyping.
Karl Koscher
rtlsdr.tv: Broadcast TV in your browser
This talk introduces rtlsdr.tv and will cover the basics of digital video streams, programmatically feeding live content to video through Media Source Extensions, and using WebUSB to interact with devices that previously required kernel drivers.
If you’re still here, get your tickets!
Can a Coin Cell Make 27 Volts?
We have all no doubt at some point released the magic smoke from a piece of electronics; it’s part of what we do. But sometimes it’s a piece of electronics we’re not quite ready to let go, and something has to be fixed. [Chris Greening] had a board just like that, a 27 volt generator from an LCD panel, and he crafted a new circuit for it.
The original circuit (which we think he may have drawn incorrectly) uses a small boost converter IC with the expected inductor and diode. His replacement is the tried and tested joule thief, but with a much higher base resistor than in its usual role of squeezing the last energy out of a battery. It draws 10 mA from the battery and is regulated with a Zener diode, but there’s still further room for improvement. Adding an extra transistor and using the Zener as a feedback component causes the oscillator to shut off as the voltage increases, something which in this application is fine.
It’s interesting to see a joule thief pushed into a higher voltage application like this, but we sense perhaps it could be made more efficient by seeking out an equivalent to the boost converter chip. Or even a flyback converter.
Smart Bulbs Are Turning Into Motion Sensors
If you’ve got an existing smart home rig, motion sensors can be a useful addition to your setup. You can use them for all kinds of things, from turning on lights when you enter a room, to shutting off HVAC systems when an area is unoccupied. Typically, you’d add dedicated motion sensors to your smart home to achieve this. But what if your existing smart light bulbs could act as the motion sensors instead?
The Brightest Bulb In The Bulb Box
Most traditional motion sensors use passive infrared detection, wherein the sensor picks up on the infrared radiation emitted by a person entering a room. Other types of sensors include break-beam sensors, ultrasonic sensors, and cameras running motion-detection algorithms. All of these technologies can readily be used with a smart home system if so desired. However, they all require the addition of extra hardware. Recently, smart home manufacturers have been exploring methods to enable motion detection without requiring the installation of additional dedicated sensors.
Hue Are You?
The technology uses data on radio propagation between multiple smart bulbs to determine whether or not something (or someone) is moving through an area. Credit: Ivani
Philips has achieved this goal with its new MotionAware technology, which will be deployed on the company’s new Hue Bridge Pro base station and Hue smart bulbs. The company’s smart home products use Zigbee radios for communication. By monitoring small fluctuations in the Zigbee communications between the smart home devices, it’s possible to determine if a large object, such as a human, is moving through the area. This can be achieved by looking at fluctuations in signal strength, latency, and bit error rates. This allows motion detection using Hue smart bulbs without any dedicated motion detection hardware.
Using MotionAware requires end users to buy the latest Philips Hue Bridge Pro base station. Is there some special magic built into this device, or does Philips merely want to charge users to upgrade to the new feature? Well, Philips claims the new bridge is required because it’s powerful enough to run the AI-powered algorithms that sift the radio data and determine whether motion is occurring. The tech is based on IP from a company called Ivani, which developed Sensify, an RF sensing technology that works with WiFi, Bluetooth, and Zigbee signals.
To enable motion detection, multiple Hue bulbs must be connected to the same Hue Bridge Pro, with three to four lights used to create a motion sensing “area” in a given room. When setting up the system, the room must be vacated so the system can calibrate itself. This involves determining how the Zigbee radio signals propagate between devices when nobody—humans or animals—is inside. The system then uses variations from this baseline to determine if something is moving in the room. The system works whether the lights themselves are on or off, because the light isn’t used for sensing—as long as the bulb has power, it can use its radio for sensing motion. Philips notes this increases standby power consumption by only 1%, and by a completely negligible amount while the light is actually “on” and outputting light.
There are some limitations to the use of this system. It’s primarily for indoor use, as Philips notes that the system benefits from the way radio waves bounce off surrounding interior walls and objects. Lights should also be separated by 1 to 7 meters for optimal use, effectively creating a volume between them in which motion sensing is most effective. Depending on local conditions, it’s also possible that the system may detect motion on adjacent levels or in nearby rooms, so sensitivity adjustment or light repositioning may be necessary. Notably, though, you won’t need new bulbs to use MotionAware. The system will work with all the Hue mains-powered bulbs manufactured since 2014.
The WiZ Kids Were Way Ahead
Philips aren’t the only ones offering built-in motion sensing with their smart home bulbs. WiZ also has a product in this space, which is less of a coincidence than it sounds, given the company was acquired in 2019 by Philips’ own former lighting division. Unlike Philips Hue, WiZ products rely on WiFi for communication. The company’s SpaceSense technology again relies on perturbations in radio signals between devices, but using WiFi signals instead of Zigbee. What’s more, the company has been at this since 2022.
There are some notable differences in WiZ’s technology. SpaceSense is able to work with just two devices at a minimum, and not just lights: you can use any of the company’s newer lights, smart switches, or other devices, as long as they’re compatible with SpaceSense, which covers the vast majority of the company’s recent products.
youtube.com/embed/fPsBXFMXNAM?…
Ultimately, WiZ beat Philips to this tech by years. However, perhaps due to its lower market penetration, it didn’t make the same waves when SpaceSense dropped in 2022.
Radio Magic
We’ve seen similar feats before. It’s actually possible to get all kinds of useful information out of modern radio chipsets for physical sensing purposes. We’ve seen systems that measure a person’s heart rate using nothing more than perturbations in WiFi transmission over short distances, for example. When you know what you’re looking for, a properly-built algorithm can let you dig usable motion information out of your radio hardware.
Ultimately, it’s neat to see smart home companies expanding their offerings in this way. By leveraging the radio chipsets in existing smart bulbs, engineers have been able to pull out granular enough data to enable this motion-sensing parlour trick. If you’ve ever wanted your loungeroom lights to turn on when you walk in, or a basic security notification when you’re out of the house… now you can do these kinds of things without having to add more hardware. Expect other smart home platforms to replicate this sort of thing in future if it proves practical and popular with end users.
A Childhood Dream, Created and Open Sourced
Some kids dream about getting a pony, others dream about a small form factor violin-style MIDI controller. [Brady Y. Lin] was one of the latter, and now, with the skills he’s learning at Northwestern, he can make that dream a reality — and share it with all of us as an open source hardware project.
The dream instrument’s name is Stradex1, and it’s a lovely bit of kit. The “fretless” neck is a SoftPot linear potentiometer sampled by an ADS1115 ADC — that’s a 16-bit unit, so while one might pedantically argue that there are discrete frets, there are 2^15 of them, which is functionally the same as none at all. Certainly it’s enough resolution for continuous-sounding pitch control, as well as vibrato, as you can see at 3:20 in the demo video below. The four buttons that correspond to the four strings of a violin aren’t just push-buttons, but also contain force sensors (again, sampled by the 16-bit ADC) to allow for fine volume control of each tone.
A few other potentiometers flesh out the build, allowing control over different MIDI parameters, such as what key [Brady] is playing on. The body is a combination of 3D printed plastic and laser-cut acrylic, but [Brady] suggests you could also print the front and back panels if you don’t happen to have a laser cutter handy.
This project sounds great, and it satisfies the maker’s inner child, so what’s not to love? We’ve had lots of MIDI controllers on Hackaday over the years — everything from stringless guitars to wheel-less Hurdy-Gurdies, to say nothing of laser harps galore — but somehow, we’ve never had a MIDI violin. The violin hacks we have featured tend to be either 3D printed or comically small.
If you like this project but don’t feel like fabbing and populating the PCB, [Brady] is going to be giving one away to his 1000th YouTube subscriber. As of this writing, he’s only got 800, so that could be you!
youtube.com/embed/0cMQYN_HLao?…
A Lorenz Teletype Shows Us Its Secrets
When we use the command line on Linux, we often refer to it as a terminal. It’s a word with a past invoking images of serial terminals, rows of green-screened machines hooked up to a central computer somewhere. Those in turn were electronic versions of mechanical teletypes, and it’s one of these machines we’re bringing you today. [DipDoT] has a Lorenz teletype from the 1950s, and he’s taking us through servicing and cleaning it, eventually showing us its inner workings.
The machine in question had been in storage for many years, but remained in good condition. Being out of use that long, though, meant it needed a thorough clean, so he sets about oiling the many hundreds of maintenance points listed in a Lorenz manual. It’s a pleasant surprise to see the keyboard and printer unit come away from the chassis for servicing so easily, and by stepping through its operation step by step we can see how it works in detail. It even incorporates an identifier key — think of it as a mechanical ROM that stores a sequence of letters — which leads him to believe it may have come from a New York news office. The video is below the break, and makes for an interesting watch.
He’s going to use it with a relay computer, but if you don’t have one of those there are more modern ways to do it.
youtube.com/embed/XKv8w1sUX_o?…
DK 10x05 - ChatControl must die
Here we are again talking about ChatControl, something you would expect only in China or North Korea.
This is not a matter of political sides: sink it, and let’s never speak of it again.
To vote in favor of blanket surveillance of the population under the pretext of protecting children, you have to be an idiot, or acting in bad faith, or both.
spreaker.com/episode/dk-10x05-…
RediShell: una RCE da score 10 vecchia di 13 anni è stata aggiornata in Redis
Una falla critica di 13 anni, nota come RediShell, presente in Redis, permette l’esecuzione di codice remoto (RCE) e offre agli aggressori la possibilità di acquisire il pieno controllo del sistema host sottostante.
Il problema di sicurezza, è stato contrassegnato come CVE-2025-49844 ed è stato rilevato da Wiz Research. A questo problema è stato assegnato il massimo livello di gravità secondo la scala CVSS, con un punteggio di 10,0, una valutazione che indica le vulnerabilità di sicurezza più critiche.
l’analisi condotta da Wiz Research ha rivelato un’ampia superficie di attacco, con circa 330.000 istanze Redis esposte a Internet. È allarmante notare che circa 60.000 di queste istanze non hanno alcuna autenticazione configurata.
La falla di sicurezza, viene causata da un errore di tipo Use-After-Free (UAF) nella gestione della memoria, è presente nel codice di Redis da circa tredici anni. Questa vulnerabilità può essere sfruttata da un utente malintenzionato, dopo aver completato l’autenticazione, attraverso l’invio di uno script Lua appositamente realizzato.
Poiché lo scripting Lua è una funzionalità predefinita, l’aggressore può uscire dall’ambiente sandbox Lua per ottenere l’esecuzione di codice arbitrario sull’host Redis.
Il controllo completo viene garantito all’aggressore a questo livello di accesso, permettendogli di dirottare le risorse di sistema per attività come il mining di criptovalute, di muoversi lateralmente sulla rete, nonché di rubare, eliminare o crittografare i dati.
Il potenziale impatto è amplificato dall’ubiquità di Redis. Si stima che il 75% degli ambienti cloud utilizzi l’archivio dati in-memory per la memorizzazione nella cache, la gestione delle sessioni e la messaggistica.
Il flusso di attacco inizia con l’invio da parte dell’aggressore di uno script Lua dannoso all’istanza vulnerabile di Redis. Dopo aver sfruttato con successo il bug UAF per uscire dalla sandbox, l’aggressore può stabilire una reverse shell per l’accesso persistente. Da lì, possono compromettere l’intero host rubando credenziali come chiavi SSH e token IAM, installando malware ed esfiltrando dati sensibili sia da Redis che dalla macchina host.
On October 3, 2025, Redis released a security advisory and patched versions addressing CVE-2025-49844. All Redis users are strongly advised to update their instances immediately, prioritizing those exposed to the Internet or lacking authentication.
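Until a patched build is deployed, some long-standing redis.conf directives can shrink the attack surface. This is an illustrative hardening sketch, not the vendor's official mitigation list (and note that rename-command is deprecated in recent Redis releases in favor of ACLs such as `-@scripting`):

```conf
# redis.conf: illustrative defense-in-depth while patching

# The UAF requires an authenticated session, so an instance with no
# password is the worst case: always set one.
requirepass choose-a-long-random-secret

# Never expose the instance directly to the Internet.
bind 127.0.0.1

# Block the Lua entry points on legacy deployments
# (deprecated mechanism; on Redis 6+ prefer ACLs: -@scripting).
rename-command EVAL ""
rename-command EVALSHA ""
```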
The article RediShell: a 13-year-old score-10 RCE patched in Redis originally appeared on il blog della sicurezza informatica.
A New Cartridge for an Old Computer
Although largely recognizable to anyone who had a video game console in the 80s or 90s, cartridges have long since disappeared from the computing world. These squares of plastic with a few ROM modules were a major route to get software for a time, not only for consoles but for PCs as well. Perhaps most famously, the Commodore VIC-20 and Commodore 64 had cartridge slots for both gaming and other software packages. As part of the Chip Hall of Fame created by IEEE Spectrum, [James] found himself building a Commodore cartridge more than three decades after last working in front of one of these computers.
[James] points out that even by the standards of the early 80s the Commodore cartridges were pretty low on specs. They’re limited to 16 kB, which means programming in assembly and doing things like interacting with video hardware directly. Luckily there’s a treasure trove of documentation about the C64 nowadays as well as a number of modern programming tools for them, in contrast to the 80s when tools and documentation were scarce or nonexistent. Hardware these days is cheap as well; the cartridge PCB and other hardware cost only a few dollars, and the case for it can easily be 3D printed.
Burning the software to the $3 ROM chip was straightforward as well with a TL866 programmer, although [James] left a piece of memory management code in the first pass which caused the C64 to lock up. Removing this code and flashing the chip again got the demo up and running though, and it’ll be on display at their travelling “Chips that Changed the World” exhibit. If you find yourself in the opposite situation, though, we’ve also seen projects that cleverly pull the data off of ancient C64 ROM chips for preservation.
Google Confirms Non-ADB APK Installs Will Require Developer Registration
After the news cycle recently exploded with the announcement that Google would require every single Android app to be from a registered and verified developer, while killing third-party app stores and sideloading in the process, Google has now tried to put out some of the fires with a new Q&A blog post and a video discussion (also embedded below).
When we first covered the news, all that was known for certain was the schedule, with the first trials beginning in October of 2025 before a larger rollout the next year. One of the main questions pertained to installing apps from sources other than the Google Play Store. The answer is that the only way to install an app from a developer who has not gone through the verification process is via the Android Debug Bridge, or adb for short.
The upcoming major release of Android 16 will feature a new process called the Android Developer Verifier, which will maintain a local cache of popular verified apps. The remaining ones will require a call back to the Google mothership, where the full database will be maintained. In order to be a verified Android developer you must have a Google Play account, pay the $25 fee, and send Google a scan of your government-issued ID. This doesn’t mean that you cannot also distribute your app via F-Droid; it does, however, mean that you need to be a registered Play Store developer, negating many of the benefits of those third-party app stores.
Although Google states that it will also introduce a ‘free developer account type’, this will only allow your app to be installed on a limited number of devices, and no exact number has been provided so far. Effectively this would leave having users install unsigned APKs via the adb tool as the sole way to circumvent the new system once it is fully rolled out by 2027. On an unrelated note, Google’s blog post is also soliciting feedback from the public on these changes.
youtube.com/embed/A7DEhW-mjdc?…
Finding Simpler Schlieren Imaging Systems
Perhaps the most surprising thing about shadowgraphs is how simple they are: you simply take a point source of light, pass the light through the volume of air to be imaged, and record the pattern projected on a screen; as light passes through the transition between areas with different refractive indices, it gets bent in a different direction, creating shadows on the viewing screen. [Degree of Freedom] started with these simple shadowgraphs, moved on to the more advanced schlieren photography, and eventually came up with a technique sensitive enough to register the body heat from his hand.
The most basic component in a shadowgraph is a point light source, such as the sun, which in experiments was enough to project the image of an escaping stream of butane onto a sheet of white paper. Better point sources make the imaging work over a wider range of distances from the source and projection screen, and a magnifying lens makes the image brighter and sharper, but smaller. To move from shadowgraphy to schlieren imaging, [Degree of Freedom] positioned a razor blade in the focal plane of the magnifying lens, so that it cut off light refracted by air disturbances, making their shadows darker. Interestingly, if the light source is small and point-like enough, adding the razor blade makes almost no difference in contrast.
With this basic setup under his belt, [Degree of Freedom] moved on to more unique schlieren setups. One of these replaced the magnifying lens with a standard camera lens in which the aperture diaphragm replaced the razor blade, and another replaced the light source and razor with a high-contrast black-and-white pattern on a screen. The most sensitive technique was what he called double-pinhole schlieren photography, which used a pinhole for the light source and another pinhole in place of the razor blade. This could image the heated air rising from his hand, even at room temperature.
The high-contrast background imaging system is reminiscent of this technique, which uses a camera and a known background to compute schlieren images. If you’re interested in a more detailed look, we’ve covered schlieren photography in depth before.
youtube.com/embed/kRyE-n9UaIg?…
Thanks to [kooshi] for the tip!
2G Gone? Bring It Back Yourself!
Some parts of the world still have ample 2G coverage; for those of us in North America, 2G is long gone and 3G has either faded into dusk or is beginning its sunset. The legendary [dosdude1] shows us it need not be so, however: Building a Custom 2G GSM Cellular Base Station is not out of reach, if you are willing to pay for it. His latest videos show us how.
Before you start worrying about the FCC or its equivalents, the power here is low enough not to penetrate [dosdude]’s walls, but technically this does rely on flying under the radar. The key component is a Nuand BladeRF x40 full-duplex Software Defined Radio, which is a lovely bit of open-source hardware, but not exactly cheap. Aside from that, all you need is a half-decent PC (it at least needs USB 3.0 to communicate with the SDR), the “YateBTS” software (for which [dosdude1] promises to provide a setup guide in a subsequent video), and a SIM card reader. Plus some old phones, of course, which is rather the whole point of this exercise.
The 2G sunset, especially when followed by 3G, wiped out whole generations of handhelds — devices with unique industrial design and forgotten internet protocols that are worth remembering and keeping alive. By the end of the video, he has his own little network, with the phones able to call and text one another on the numbers he set up, and even (slowly) access the internet through the miniPC’s network connection.
Unlike most of the hacks we’ve featured from [dosdude1], you won’t even need a soldering iron, never mind a reflow oven for BGA.
youtube.com/embed/CMWvA4Ty1Wk?…
Logitech POP Buttons are About to go Pop
For those who missed out on the past few years of ‘smart home’ gadgets, the Logitech POP buttons were introduced in 2018 as a way to control smart home devices using the buttons and a central hub. After a few years of Logitech gradually turning off features on this $100+ system, it seems that Logitech will turn off the lights two weeks from now. Remaining POP Button users are getting emails from Logitech in which they are informed of the shutdown on October 15, 2025, along with a 15% off coupon code for the Logitech store.
Along with the coupon code being usable only by US-based customers, this move appears to disable the hub and with it any interactions with smart home systems like Apple HomeKit, Sonos, IFTTT and Philips Hue. If Logitech’s claim in the email that the buttons and connected hub will ‘lose all functionality’ is accurate, it shatters the hopes of those who had wanted to keep using these buttons in a local fashion.
Suffice it to say that this is a sudden and rather customer-hostile move by Logitech. Whether the hub can be made to work in a local fashion remains to be seen. At first glance there don’t seem to be any options for this, and it’s rather frustrating that Logitech doesn’t seem to be interested in the goodwill that it would generate to enable this option.
Know Audio: Distortion Part Two
It’s been a while since the last installment in our Know Audio series, in which we investigated distortion as it applies to Hi-Fi audio. Now it’s time to return with part two of our look at distortion, and attempt some real-world distortion measurements on the bench.
Last time, we examined distortion from a theoretical perspective, as the introduction of unwanted harmonics as a result of non-linearities in the signal path. Sometimes that’s a desired result, as with a guitar pedal, but in a Hi-Fi system where the intention is to reproduce as faithfully as possible a piece of music from a recording, the aim is to make any signal path components as linear as possible. When we measure the distortion, usually expressed as THD, for Total Harmonic Distortion, of a piece of equipment we are measuring the ratio of those unwanted harmonics in the output to the frequencies we want, and the resulting figure is commonly expressed in dB, or as a percentage.
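Since the measurements in this article all come back to that ratio, a short numerical sketch may help. This is purely illustrative (the function name, windowing, and harmonic count are our own choices, not part of any measurement standard): it estimates THD from a sampled waveform via an FFT.

```python
import numpy as np

def thd_percent(signal, fs, f0, n_harmonics=5):
    """Estimate Total Harmonic Distortion of a sampled signal:
    sqrt(sum of harmonic amplitudes squared) / fundamental amplitude."""
    n = len(signal)
    # window to limit spectral leakage, then take the magnitude spectrum
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, 1 / fs)

    def peak(f):
        # amplitude of the bin nearest frequency f
        return spectrum[np.argmin(np.abs(freqs - f))]

    fundamental = peak(f0)
    harmonics = [peak(f0 * k) for k in range(2, 2 + n_harmonics)]
    return 100 * np.sqrt(sum(h ** 2 for h in harmonics)) / fundamental

# a 1 kHz tone with a 1% second harmonic, sampled at 48 kHz
fs, f0 = 48000, 1000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * f0 * t) + 0.01 * np.sin(2 * np.pi * 2 * f0 * t)
print(round(thd_percent(x, fs, f0), 2))  # ≈ 1.0
```

A modern audio analyser does essentially this in hardware and software, sweeping the stimulus frequency as well.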
The Cheapest Of Audio Kits, Analysed
The Hackaday audio test bench in all its glory.
Having explained what we are trying to do, it’s on to the device in question and the instruments required. On the bench in front of me I have my tube headphone amplifier project, a Chinese 6J1 preamp kit modified with transformers on its output for impedance matching. I’ve investigated the unmodified version of this kit here in the past, and measured a THD of 0.03% when it’s not driven into distortion, quite an acceptable figure.
To measure the distortion I’m using my audio signal generator, a Levell TG200DMP that I was lucky enough to obtain through a friend. It’s not the youngest of devices, but it’s generally reckoned to be a pretty low distortion oscillator. It’s set to 1 kHz and a 1 V peak-to-peak line-level audio output, which feeds the headphone amplifier input. The output from the amplifier is feeding a set of headphones, and my trusty HP334A distortion analyser is monitoring the result.
How Does A Distortion Analyser Work Then?
The business end of my trusty HP.
A distortion analyser is two instruments in one, a sensitive audio level meter, and an extremely high quality notch filter. In an instrument as old as this one everything is analogue, while in a modern audio analyser everything including the signal source is computer controlled.
The idea is that the analyser is first calibrated against the incoming audio using the voltmeter, and then the filter is switched into the circuit. The filter is then adjusted to reject the fundamental frequency, in this case 1 kHz, leaving behind only the harmonic distortion. The audio level meter can then be used to read the distortion. If you’re interested in how these work in greater detail I made one a few years ago in GNU Radio for an April Fool post about gold cables.
Using the HP offers an experience that’s all too rare in 2025, that of tuning an analogue circuit. It settles down over time, so after you first tune it for minimum 1 kHz level it can be retuned to a lower level a while later. Mine has therefore been running but idle for the last few hours, in order to reach maximum stability. I’m measuring 0.2% THD for the headphone amplifier, which is entirely expected given that the transformers it uses are not of high quality at all.
An Instrument Too Expensive For A Hackaday Expense Claim
An Audio Precision APx525 audio analyzer. Bradp723 (CC-BY-SA 3.0)
It’s important to state that I’ve measured the THD at only one frequency, namely 1 kHz. This is the frequency at which most THD figures are measured, so it’s an easy comparison, but a high-end audio lab will demand measurements across a range of frequencies. That’s entirely possible with the Levell and the HP, but it becomes a tedious manual process of repetitive calibration and measurement.
As you might expect, a modern audio analyser has all these steps computerised, having in place of the oscillator and meter a super-high-quality DAC and ADC, and instead of the 334A’s filter tuning dial, a computer controlled switched filter array. Unsurprisingly these instruments can be eye-wateringly expensive.
So there in a nutshell is a basic set-up to measure audio distortion. It’s extremely out of date, but in its simplicity I hope you find an understanding of the topic. Keep an eye out for a 334A and snap it up if you see one for not a lot. I did, and it’s by far the most beautifully-made piece of test equipment I own.
Huawei: alleged data sale on the dark web
On October 3, 2025, a thread was published on a well-known dark web forum by a user identified as KaruHunters. In the post, the user claims to have compromised the systems of Huawei Technologies Co., Ltd. and to be offering data for sale.
At present we cannot confirm the authenticity of the claim, as the organization has not yet published an official statement about the incident on its website. The information reported here comes from publicly accessible sources on underground sites and should therefore be treated as intelligence, not as definitive confirmation.
According to the post, the alleged breach provided access to the company’s “source code” and “internal tools”. The author is also open to selling the material or negotiating privately, with an entry price of $1,000. No public technical details are given about the attack method or the specific nature of the stolen files, but the availability of source code suggests an intrusion targeting development systems or internal repositories.
The fact that the actor mentions “internal tools” suggests the access went beyond a simple credential dump, pointing to a potentially deep level of intrusion.
As of now, Huawei’s official channels offer no public evidence confirming the breach. Following standard CTI and journalistic practice, in the absence of verifiable samples or an official statement, the case remains “to be confirmed” and should be handled with caution.
Analysis of the attached “Tree”
Analysis of the attached file listing, “Tree” (paste.txt), shows that the listed contents belong to the TeX/CWEB ecosystem and the TeX Live distribution, including open-source projects such as dvisvgm, brotli, woff2 and potrace, and an installation tree named “install-tl-20251003”.
Based on this evidence, no namespaces, pipelines, or internal domains emerge that would point to a leak of intellectual property attributable to Huawei.
The actor in question
According to OSINT feeds, KaruHunters appears in threat tracking as an actor focused on monetizing alleged data leaks through private sales and the release of limited teasers.
According to the same sources, attribution remains tentative in the absence of independent forensic evidence or official confirmation.
Conclusion
Pending official confirmation, the alleged Huawei breach remains a case to monitor, not a verified incident. If confirmed, it could have significant impact in several areas:
- IP leakage risk: exposure of source code and internal tools could facilitate targeted exploits.
- Corporate reputation: further pressure on Huawei, a company already under frequent scrutiny over security concerns.
- Geopolitical implications: given the company’s strategic nature, a compromise could have consequences at the state level.
Organizations in the sector should maintain a high level of alert, keep their monitoring of underground forums up to date, and check for correlations with previous campaigns attributed to known groups.
RHC will monitor the situation and publish further news on the blog should substantial developments emerge. Anyone with knowledge of the facts who wishes to provide information anonymously can use the whistleblower’s encrypted email.
The article Huawei: alleged data sale on the dark web originally appeared on il blog della sicurezza informatica.
Weaving Circuits from Electronic Threads
Though threading is an old concept in computer science, and fabric computing has been a term for about thirty years, the terminology has so far been more metaphorical than strictly descriptive. [Cedric Honnet]’s FiberCircuits project, on the other hand, takes a much more literal approach to weaving technology “into the fabric of everyday life,” to borrow the phrase from [Mark Weiser]’s vision of computing which inspired this project. [Cedric] realized that some microcontrollers are small enough to fit into fibers no thicker than a strand of yarn, and used them to design these open-source threads of electronics (open-access paper).
The physical design of the FiberCircuits was inspired by LED filaments: a flexible PCB wrapped in a protective silicone coating, optionally with a protective layer of braiding surrounding it. There are two kinds of fiber: the main fiber and display fibers. The main fiber (1.5 mm wide) holds an STM32 microcontroller, a magnetometer, an accelerometer, and a GPIO pin to interface with external sensors or other fibers. The display fibers are thinner at only one millimeter, and hold an array of addressable LEDs. In testing, the fibers could withstand six Newtons of force and be bent ten thousand times without damage; fibers protected by braiding even survived 40 cycles in a washing machine without any damage. [Cedric] notes that finding a PCB manufacturer that will make the thin traces required for this circuit board is a bit difficult, but if you’d like to give it a try, the design files are on GitHub.
[Cedric] also showed off a few interesting applications of the thread, including a cyclist’s beanie with integrated automatic turn signals, a woven fitness tracker, and a glove that senses the wearer’s hand position; we’re sure the community can find many more uses. The fibers could be embroidered onto clothing, or embedded into woven or knitted fabrics. On the programming side, [Cedric] ported support for this specific STM32 core to the Arduino ecosystem, and it’s now maintained upstream by the STM32duino project, which should make integration (metaphorically) seamless.
One area for future improvement is power, which is currently supplied by small lithium batteries; it would be interesting to see an integration of this with power over skin. This might be a bit more robust, but it isn’t the first knitted piece of electronics we’ve seen. Of course, rather than making wearables more unobtrusive, you can go in the opposite direction.
youtube.com/embed/OA_IuWRBbfM?…
Airbags, and How Mercedes-Benz Hacked Your Hearing
Airbags are an incredibly important piece of automotive safety gear. They’re also terrifying—given that they’re effectively small pyrotechnic devices that are aimed directly at your face and chest. The myth has persisted that they “kill more people than they save,” in part due to a hilarious episode of The Simpsons. Despite this, they’re credited with saving tens of thousands of lives over the years by cushioning fleshy human bodies from heavy impacts and harsh decelerations.
While an airbag is generally there to help you, it can also hurt you in regular operation. The immense sound pressure generated when an airbag fires is not exactly friendly to your ears. However, engineers at Mercedes-Benz have found a neat workaround to protect your hearing from the explosive report of these safety devices. It’s a nifty hack that takes advantage of an existing feature of the human body. Let’s explore how air bags work, why they’re so darn loud, and how that can be mitigated in the event of a crash.
A Lot Of Hot Air
The first patent for an airbag safety device was filed over 100 years ago, intended for use in aircraft. Credit: US Patent Office
Once an obscure feature only found in luxury vehicles, airbags became common safety equipment in many cars and trucks by the mid-1990s. Indeed, a particular turning point was when they became mandatory in vehicles sold in the US market from late 1998 onwards, which made them near-universal equipment in many other markets worldwide. Despite their relatively recent mainstream acceptance, the concept of the airbag actually dates back a lot farther.
The basic invention of the airbag is typically credited to two English dentists—Harold Round and Arthur Parrott—who submitted a patent for the concept all the way back in 1919. The patent regarded the concept of creating an air cushion to protect occupants in aircraft during serious impacts. Specific attention was given to the fact that the air cushion should “yield readily without developing the power to rebound,” which could cause further injury. This was achieved by giving the device air outlet passages that would vent as a person impacted the device, which would allow the cushion to absorb the hit gently while reducing the chance of injury.
The concept only became applicable to automobiles later, when Walter Linderer filed for a German patent in 1951 and John W. Hetrick filed for a US patent in 1952. Both engineers devised airbags based on the release of compressed air, triggered either by human intervention or by automated mechanical means. These concepts ultimately proved infeasible, as compressed air could not feasibly be released quickly enough to inflate an airbag in time to be protective in an automobile crash.
It would only be later in the 1960s that workable versions using explosive or pyrotechnic inflation came to the fore. The concept was simple—use a chemical reaction to generate a great deal of gas near-instantaneously, inflating the airbag fractions of a second before vehicle occupants come into contact with the device. The airbags are fitted with vents that only allow the gas to escape slowly. This means that as a person hits the airbag, they are gently decelerated as their impact pushes the gas out of the restrictive vents. This helps reduce injuries that would typically be incurred if the occupants instead hit interior parts of the car without any protection at all.
In a crash, it’s much nicer to faceplant into an air-filled pillow than a hard, unforgiving dashboard. Credit: DaimlerChrysler AG, CC BY SA 3.0
The Big Bang
The use of pyrotechnic gas generators to inflate airbags was the leap forward that made airbags practical and effective for use in automobiles. However, as you might imagine, releasing a massive burst of gas in under 50 milliseconds does create a rather large pressure wave, which we experience as an incredibly loud sound. If you’ve ever seen airbags detonated outside of a vehicle, you’ve probably noticed they sound rather akin to fireworks or a gun going off. Indeed, the sound of an airbag can exceed 160 decibels (dB), more than enough to cause instant damage to the ear. Noise generated in a vehicle impact is often incredibly loud too, of course. None of this is great for the occupants of the vehicle, particularly their hearing. Ultimately, an airbag deployment is a carefully considered trade-off: the general consensus is that impact protection in a serious crash is preferable, even if your ears are worse for wear afterwards.
However, there is a technique that can mitigate this problem. In particular, Mercedes-Benz developed a system to protect the hearing of vehicle occupants in the event that the airbags are fired. The trick is in using the body’s own reactions to sound to reduce damage to the ear from excessive sound pressure levels.
In humans, the stapedius muscle can be triggered reflexively to protect the ear from excess sound levels, though the mechanism is slow enough that it can’t respond well to sudden loud impulses. However, pre-emptively triggering it before a loud event can be very useful. Credit: Mercedes Benz
The stapedius reflex (also known as the acoustic reflex) is one of the body’s involuntary, instantaneous movements in response to an external stimulus—in this case, certain sound levels. When a given sound stimulus occurs to either ear, muscles inside both ears contract, most specifically the stapedius muscle in humans. When the muscle contracts, it has a stiffening effect on the ossicular chain—the three tiny bones that connect the ear drum to the cochlea in the inner ear. Under this condition, less vibrational energy is transferred, reducing damage to the cochlea from excessive sound levels.
The threshold at which the reflex is triggered is usually 10 to 20 dB lower than the point at which the individual feels discomfort; typical levels are from around 70 to 100 dB. When triggered by particularly loud sounds of 20 dB above the trigger threshold, the muscle contraction is enough to reduce the sound level at the cochlea by a full 15 dB. Notably, the reflex is also triggered by vocalization—reducing transmission through to the inner ear when one begins to speak.
Mercedes-Benz engineers realized that the stapedius reflex could be pre-emptively triggered ahead of firing the airbags, in order to provide a protective effect for the ears. To this end, the company developed the PRE-SAFE Sound system. When the vehicle’s airbag control unit detects a collision, it triggers the vehicle’s sound system to play a short-duration pink noise signal at a level of 80 dB. This is intended to be loud enough to trigger the stapedius reflex without itself doing damage to the ears. Typically it takes higher sound levels, closer to 100 dB, to reliably trigger the reflex in a wide range of people, but Mercedes-Benz engineers found that the broadband frequency content of pink noise enables the reflex to be switched on at a much lower, and safer, sound level. With the reflex turned on, when the airbags do fire a fraction of a second later, less energy from the intense pressure spike is transferred to the inner ear, protecting the delicate structures that provide the sense of hearing.
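Pink noise itself is easy to synthesize digitally. Here is a hedged sketch of one common approach, frequency-domain shaping of white noise; this is not Mercedes-Benz’s implementation, and the function name and parameters are our own:

```python
import numpy as np

def pink_noise(n, fs=44100, seed=0):
    """Synthesize n samples of pink (1/f) noise by shaping white
    noise in the frequency domain: amplitude falls as 1/sqrt(f),
    so power falls as 1/f, giving equal energy per octave."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, 1 / fs)
    shaping = np.ones_like(freqs)
    shaping[1:] = 1.0 / np.sqrt(freqs[1:])  # leave the DC bin alone
    samples = np.fft.irfft(spectrum * shaping, n)
    return samples / np.max(np.abs(samples))  # normalize to full scale

burst = pink_noise(4410)  # a 100 ms burst at 44.1 kHz
```

Played back at a calibrated 80 dB, a short burst like this is the kind of broadband stimulus the PRE-SAFE system relies on; the exact signal Mercedes-Benz plays is not public.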
youtube.com/embed/vTmLYY-Z2rc?…
Mercedes-Benz first released the technology in production models almost a decade ago.
The stapedius reflex does have some limitations. It can be triggered with a latency of just 10 milliseconds, but it can take up to 100 milliseconds for the muscle in the ear to reach full tension and confer the full protective effect. This limits the ability of the reflex to protect against short, intense noises. However, since the Mercedes-Benz system triggers the sound before airbag inflation where possible, the muscles engage prior to the peak sound level being reached. The protective effect of the stapedius reflex also only lasts for a few seconds, as the muscle contraction cannot be maintained beyond that point. In a vehicle impact scenario, however, the airbags typically all fire very quickly, usually well within a second, negating this issue.
Mercedes-Benz was working on the technology from at least the early 2010s, having run human trials to trigger the stapedius reflex with pink noise in 2011. It deployed the technology on its production vehicles almost a decade ago, first offering PRE-SAFE Sound on E-Class models for the 2017 model year. Despite the simple nature of the technology, few to no other automakers have publicly reported implementing the technique.
Car crashes are, thankfully, rather rare. Few of us are actually in an automobile accident in any given year, and fewer still in ones serious enough to cause an airbag deployment. However, if you are unlucky enough to be in a severe collision, and you’re riding in a modern Mercedes-Benz, your ears will likely thank you for the added protection, just as your body will be grateful for the cushioning of the airbags themselves.
GuitarPie Uses Guitar as Interface, No Raspberries Needed
We’ve covered plenty of interesting human input devices over the years, but how about an instrument? No, not as a MIDI controller, but as a way to interact with what’s going on on-screen. That’s the job of GuitarPie, a guitar-driven pie menu produced by a group at the University of Stuttgart.
The idea is pretty simple: the computer is listening for one specific note, which cues the pie menu on screen. Options on the pie menu can be selected by playing notes on adjacent strings and frets. (Check it out in action in the video embedded below). This is obviously best for guitar players, and has been built into a tablature program they’re calling TabCTRL. For those not in the loop, tablature, also known as tabs, is an instrument-specific notation system for stringed instruments that’s quite popular with guitar players. So TabCTRL is a music-learning program, that shows how to play a given song.
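Listening for one specific note boils down to pitch detection. The team’s actual detection method isn’t described here, but a minimal autocorrelation sketch (the function name, frame size, and frequency limits are our own assumptions) looks like this:

```python
import numpy as np

def detect_pitch(frame, fs, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency of a mono audio frame by
    autocorrelation: the correlation peaks at a lag of one period."""
    frame = frame - frame.mean()  # remove DC offset
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(fs / fmax)  # shortest period we accept
    lag_max = int(fs / fmin)  # longest period we accept
    lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return fs / lag

fs = 44100
t = np.arange(2048) / fs
a_string = np.sin(2 * np.pi * 110 * t)  # open A string, 110 Hz
print(round(detect_pitch(a_string, fs)))  # ≈ 110
```

A real implementation would also threshold on signal energy so silence or noise doesn’t trigger the menu, which is presumably part of how TabCTRL’s lock-out logic works.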
With this pairing you can rock out to the tablature, and the guitarist need never take their hands off the frets. You might be wondering: why isn’t the menu triggered during regular play? Well, the boffins at Stuttgart thought of that; in TabCTRL, the menu is locked out while play mode is active. (It keeps track of tempo for you, too, highlighting the current musical phrase.) A moment’s silence (say, after you made a mistake and want to restart the song) stops play mode, and you can then activate the menu. It’s a well-thought-out UI. It’s also open source, with all the code going up on GitHub by the end of October.
The neat thing is that this is pure software; it will work with any unmodified guitar and computer. You only need a microphone in front of the amp to pick up the notes. One could, of course, use voice control– we’ve seen no shortage of hacks with that–but that’s decidedly less fun. Purists can comfort themselves that at least this time the computer interface is a real guitar, and not a guitar-shaped MIDI controller.
youtube.com/embed/ItJGNO-IQDw?…
Social media at a time of war
IT'S MONDAY, AND THIS IS DIGITAL POLITICS. I'm Mark Scott, and I have many feelings about Sora, OpenAI's new AI-generated social media platform. Many of which are encapsulated by this video by Casey Neistat. #FreeTheSlop.
— The world's largest platforms have failed to respond to the highest level of global conflict since World War II.
— The semiconductor wars between China and the United States are creating a massive barrier between the world's two largest economies.
— China's DeepSeek performs significantly worse than its US counterparts on a series of benchmark tests.
Let's get started:
Italy is on the zero-day map! Leonardo and Almaviva are the first Italian CNAs!
Very little has been said about this development, which I personally consider strategically important and a sign of major change in how undocumented vulnerabilities are handled in Italy.
In March 2024 I wrote an article describing a rather bleak Italian landscape: the culture around undocumented bugs, zero-days, was practically nonexistent, and no CNA (CVE Numbering Authority) was active in our country.
Vulnerability management was often left to chance or, worse, hidden behind a veil of secrecy, incapable of building a dialogue with the researcher community. That piece, published on Red Hot Cyber, spread across social media and drew many reactions, a sign that something was changing, but at the time few could have imagined it would foreshadow real change.
The Italian approach, and how it is changing
The prevailing approach among Italian software vendors has long been characterized either by a lack of familiarity with practices for handling undocumented vulnerabilities, or by a deliberate choice of “security by obscurity”, in the belief that hiding bugs guarantees safety.

This model, although widespread, is intrinsically fragile: it ignores the reality of contemporary cybersecurity, in which every unmanaged vulnerability is an open door for targeted, sophisticated, and increasingly frequent attacks.

Indeed, the culture of obscurity has often meant neglect, slow response, and, ultimately, concrete risks for citizens, institutions, and customers, all the way up to national security.

Today, at last, things are changing. Since September 2024, two major Italian companies, Almaviva and Leonardo, have officially become CNAs.

This means they can assign CVE identifiers to the vulnerabilities they discover or manage through the hacker community, thereby joining an international circuit of responsible security. This is no technical footnote: it demonstrates that Italy is starting to take undocumented vulnerabilities seriously and to build security processes aligned with global standards.
Image taken from cve.org as of 06/10/2025
Coordinated Vulnerability Disclosure: the keystone
The transition is not only about discovering bugs, but about how security itself is conceived. CVD (Coordinated Vulnerability Disclosure) becomes the tool through which companies collaborate with researchers, share information securely, and fix issues without leaving room for malicious exploitation before patches are released. CVD is, in practice, the bridge between vulnerability discovery and responsible management, a principle that until recently seemed almost utopian in the Italian context.

What is striking is how this new approach shows that transparency, ethics, and collaboration are not obstacles to business, but factors that strengthen it.

Handling bugs openly drastically reduces the risk of attack, improves corporate reputation, and builds trust with customers and partners. Italy is learning that security is not an accessory value but an enabling factor that can generate tangible value. This is also the result of a slow but significant shift in Italy’s cybersecurity culture, supported by the efforts of the National Cybersecurity Agency (ACN), which, with persistence and determination, is charting a path toward greater awareness and professionalism in the sector.
The legacy of open source and the “destination”
If we look at the open source experience, we find a well-established model: open projects thrive on collaboration and knowledge sharing. Bugs, patches, and improvements become common heritage, and the whole community benefits from the results. The lesson is clear: in cybersecurity above all, cooperation is not a risk but a precious resource, capable of turning potential threats into opportunities for growth.

I have often stressed a key concept: “hacking is a journey, not a destination”. For Italian companies, moving from a lack of zero-day culture, or worse from security by obscurity, to open and responsible vulnerability management is not just a technical act: it is a genuine growth path, a deep cultural change that demands vision, awareness, and openness to dialogue with the researcher community.

It means accepting that security cannot be treated as a trade secret, but as a shared commitment to the community, to customers, and to society as a whole. It takes courage, vision, and leadership, but it opens the way to more resilient and sustainable digital ecosystems.
Two Italian CNAs: a concrete commitment to change
Almaviva and Leonardo are concretely showing the way: they not only acknowledge their responsibility to their customers, but also value the ethical role of independent researchers and the hacker community, adopting standards and processes that enable the management of undocumented vulnerabilities.

This model demonstrates that transparency and collaboration are not incompatible with competitiveness; on the contrary, they strengthen it, turning risk into an opportunity for continuous innovation and product improvement.

Italy’s new course also reflects a broader change in mindset: security is not only technical, but social, cultural, and ethical. Responsible vulnerability management requires dialogue, trust, and cooperation among companies, researchers, and communities, principles that form the foundation of a healthy and sustainable digital ecosystem.

The road is still long, and the effort to spread CNAs and CVD across all Italian companies has only just begun. But the fact that we can now count on two official CNAs represents concrete change, the first trace of a new paradigm.
And a dream still in the drawer
Despite the progress made, Italy and the whole of Europe still rely on US processes for vulnerability management: from the National Vulnerability Database (NVD) to the CNA numbering authorities, it is the American model that sets the global standards.

Although a European project exists, the European Vulnerability Database (EUVD) managed by ENISA, it is still embryonic and far from having a vulnerability classification model as structured as the American one developed by MITRE and NIST.

From the perspective of European strategic autonomy, it would be desirable to develop a system similar to the American one, integrating numbering, risk assessment, and coordinated vulnerability management. Such a model already exists in China with the CNNVD, the national repository that combines numbering with risk assessment processes, demonstrating how a national (and European) approach can guarantee control, consistency, and timeliness in handling critical bugs.

The dream, then, is to see a mature and independent European system, in which ENISA manages a European model for classifying and handling undocumented bugs, with clear standards, shared risk assessment processes, and a transparent repository accessible to researchers, companies, and institutions. It would not be merely a technical act: it would represent a cultural and strategic leap for the entire cybersecurity community, a signal that Europe wants to build autonomy in cybersecurity, value collaboration with the hacker community, and protect its citizens with its own modern, reliable tools.

Until then, every step taken by Italian companies, every CNA established, and every CVD handled responsibly remains a small but fundamental piece of this long journey: a journey that leads from dependence on others toward aware, ethical, and autonomous security.
The article “L’Italia nel mondo degli Zero Day c’è! Le prime CNA Italiane sono Leonardo e Almaviva!” originally appeared on il blog della sicurezza informatica.
Datacrazia. Politica, cultura, algoritmica e conflitti al tempo dei big data
Data are the lifeblood of artificial intelligence. That is how Nello Cristianini describes the engine of AI, meaning that data are the raw material from which the machine extracts its predictions and decisions. The Italian professor, who teaches artificial intelligence at the University of Bath, says so in Datacrazia. Politica, cultura, algoritmica e conflitti al tempo dei big data (D Editore), a 2018 book that predates his well-known trilogy published by il Mulino: La scorciatoia (2023), Machina Sapiens (2024), and Sovrumano (2025). He raises a question on which he does not seem to have changed his mind, at least as far as the value of data is concerned: data must be accurate and reliable to allow machines to “think”. To think as Turing foresaw, that is, in the sense of machines able to simulate intelligent behavior, as they would later prove capable of doing, without claiming that this is the same “thinking” humans do.

The book is a collective anthology edited by Daniela Gambetta. It tackles the socio-political implications of data management, from digital creative production to the hoarding carried out by social networks, and goes on to warn us about the biases present in AI training. These concerns have already received some attention, but they are still understudied. For that reason, in the part of the book that addresses them, it is fair to say that the contributors managed to define a critical interpretive framework for innovation, one that can guide the analysis of network technologies regardless of how current the specific AI solutions happen to be.

At the time of the book’s publication, for example, champion Garry Kasparov had already been beaten at chess by a computer, and the same had happened to Lee Sedol at Go at the hands of a machine-learning system; Microsoft’s chatbot Tay had already been poisoned in its training data by an army of Twitter trolls to the point of praising Hitler, but ChatGPT was yet to come. And yet the ethical questions the book raises remain unresolved. Who decides what is good and what is evil? The machine or the human? It is easy to answer: the human who governs it. But what happens with autonomous systems that involve no human intervention? Datacrazia raises exactly those social and philosophical questions we still grapple with today: from digital sovereignty to AI-powered fake news.
ESP32 Decodes S/PDIF Like A Boss (Or Any Regular Piece of Hi-Fi Equipment)
S/PDIF has been around for a long time; it’s still a really great way to send streams of digital audio from device A to device B. [Nathan Ladwig] has got the ESP32 decoding S/PDIF quite effectively, using an onboard peripheral outside its traditional remit.
On the ESP32, the Remote Control Transceiver (RMT) peripheral was intended for use with infrared transceivers—think TV remotes and the like. However, this peripheral is actually quite flexible, and can be used for sending and receiving a range of different signals. [Nathan] was able to get it to work with S/PDIF quite effectively. Notably, it has no defined bitrate, which allows it to work with signals of different sample rates quite easily. Instead, it uses biphase mark code to send data. With one or two transitions for each transmitted bit, it’s possible to capture the timing and determine the correct clock from the signal itself.
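To make biphase mark code concrete: a 0 occupies one full bit cell with a transition only at the cell boundary, while a 1 adds a mid-cell transition, so a decoder can classify the gaps between transitions against the recovered cell length. A toy Python model of that classification (illustrative only, not [Nathan]’s RMT-based implementation):

```python
def decode_bmc(intervals, cell, tol=None):
    """Decode biphase-mark-coded bits from the time gaps between signal
    transitions. `cell` is the nominal bit-cell duration, which a real
    decoder recovers from the signal itself rather than taking as input."""
    tol = tol if tol is not None else cell / 4
    bits, i, half = [], 0, cell / 2
    while i < len(intervals):
        if abs(intervals[i] - cell) < tol:
            bits.append(0)        # one full-cell gap encodes a 0
            i += 1
        elif (abs(intervals[i] - half) < tol and i + 1 < len(intervals)
              and abs(intervals[i + 1] - half) < tol):
            bits.append(1)        # two half-cell gaps encode a 1
            i += 2
        else:
            raise ValueError(f"gap {i} is out of tolerance")
    return bits

print(decode_bmc([1.0, 0.5, 0.5, 1.0, 0.5, 0.5], 1.0))
```

Because every cell carries at least one transition, the decoder never sees a long flat stretch, which is what makes clock recovery from the signal itself practical.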
[Nathan] achieved this feat as part of his work to create an ESP32-based RTP streaming device. The project allows an ESP32 to work as a USB audio device or take an S/PDIF signal as input, then transmit that audio stream over RTP to a receiver, which delivers the audio at the other end via USB audio or as an S/PDIF output. It’s a nifty project with applications for anyone who regularly needs to get digital audio from one place to another. It can also run a simple visualizer with some attached LEDs.
It’s not the first time we’ve seen S/PDIF decoded on a microcontroller; it’s quite achievable if you know what you’re doing. Meanwhile, if you’re cooking up your own digital audio hacks, we’d love to hear about it. Digitally, of course, because we don’t accept analog phone calls here at Hackaday. Video after the break.
youtube.com/embed/k_nE87P4N88?…
How we trained an ML model to detect DLL hijacking
DLL hijacking is a common technique in which attackers replace a library called by a legitimate process with a malicious one. It is used by both creators of mass-impact malware, like stealers and banking Trojans, and by APT and cybercrime groups behind targeted attacks. In recent years, the number of DLL hijacking attacks has grown significantly.
Trend in the number of DLL hijacking attacks. 2023 data is taken as 100% (download)
We have observed this technique and its variations, like DLL sideloading, in targeted attacks on organizations in Russia, Africa, South Korea, and other countries and regions. Lumma, one of 2025’s most active stealers, uses this method for distribution. Threat actors trying to profit from popular applications, such as DeepSeek, also resort to DLL hijacking.
Detecting a DLL substitution attack is not easy because the library executes within the trusted address space of a legitimate process. So, to a security solution, this activity may look like a trusted process. Directing excessive attention to trusted processes can compromise overall system performance, so you have to strike a delicate balance between a sufficient level of security and sufficient convenience.
Detecting DLL hijacking with a machine-learning model
Artificial intelligence can help where simple detection algorithms fall short. Kaspersky has been using machine learning for 20 years to identify malicious activity at various stages. The AI expertise center researches the capabilities of different models in threat detection, then trains and implements them. Our colleagues at the threat intelligence center approached us with a question of whether machine learning could be used to detect DLL hijacking, and more importantly, whether it would help improve detection accuracy.
Preparation
To determine if we could train a model to distinguish between malicious and legitimate library loads, we first needed to define a set of features highly indicative of DLL hijacking. We identified the following key features:
- Wrong library location. Many standard libraries reside in standard directories, while a malicious DLL is often found in an unusual location, such as the same folder as the executable that calls it.
- Wrong executable location. Attackers often save executables in non-standard paths, like temporary directories or user folders, instead of %Program Files%.
- Renamed executable. To avoid detection, attackers frequently save legitimate applications under arbitrary names.
- Library size has changed, and it is no longer signed.
- Modified library structure.
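The indicators above can be read as a handful of boolean features per library-load event. A simplified, hypothetical rendering in Python (the directory lists and field names are invented for illustration; the real features are derived from telemetry, not local path inspection):

```python
import ntpath  # handles Windows-style paths regardless of host OS

# Hypothetical "standard" locations used only for this sketch
STANDARD_DLL_DIRS = {r"c:\windows\system32", r"c:\windows\syswow64"}
STANDARD_EXE_ROOTS = (r"c:\program files", r"c:\windows")

def hijack_features(exe_path, dll_path, dll_signed):
    """Boolean features mirroring the indicators listed above."""
    exe_dir = ntpath.dirname(exe_path).lower()
    dll_dir = ntpath.dirname(dll_path).lower()
    return {
        "dll_in_nonstandard_dir": dll_dir not in STANDARD_DLL_DIRS,
        "exe_in_nonstandard_dir": not exe_dir.startswith(STANDARD_EXE_ROOTS),
        "dll_next_to_exe": dll_dir == exe_dir,   # DLL beside the caller
        "dll_unsigned": not dll_signed,
    }

f = hijack_features(r"C:\ProgramData\SystemSettings.exe",
                    r"C:\ProgramData\SystemSettings.dll", dll_signed=False)
print(f)
```

None of these features is damning on its own; the point of the model is to weigh their combinations.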
Training sample and labeling
For the training sample, we used dynamic library load data provided by our internal automatic processing systems, which handle millions of files every day, and anonymized telemetry, such as that voluntarily provided by Kaspersky users through Kaspersky Security Network.
The training sample was labeled in three iterations. Initially, we could not automatically pull event labeling from our analysts that indicated whether an event was a DLL hijacking attack. So, we used data from our databases containing only file reputation, and labeled the rest of the data manually. We labeled as DLL hijacking those library-call events where the process was definitively legitimate but the DLL was definitively malicious. However, this labeling was not enough because some processes, like “svchost”, are designed mainly to load various libraries. As a result, the model we trained on this data had a high rate of false positives and was not practical for real-world use.
In the next iteration, we additionally filtered malicious libraries by family, keeping only those known to exhibit DLL-hijacking behavior. The model trained on this refined data showed significantly better accuracy and essentially confirmed our hypothesis that we could use machine learning to detect this type of attack.
At this stage, our training dataset had tens of millions of objects. This included about 20 million clean files and around 50,000 definitively malicious ones.
| Status | Total | Unique files |
| --- | --- | --- |
| Unknown | ~ 18M | ~ 6M |
| Malicious | ~ 50K | ~ 1,000 |
| Clean | ~ 20M | ~ 250K |
We then trained subsequent models on the results of their predecessors, which had been verified and further labeled by analysts. This process significantly increased the efficiency of our training.
Loading DLLs: what does normal look like?
So, we had a labeled sample with a large number of library loading events from various processes. How can we describe a “clean” library? Using a process name + library name combination does not account for renamed processes. Besides, a legitimate user, not just an attacker, can rename a process. If we used the process hash instead of the name, we would solve the renaming problem, but then every version of the same library would be treated as a separate library. We ultimately settled on using a library name + process signature combination. While this approach considers all identically named libraries from a single vendor as one, it generally produces a more or less realistic picture.
To describe safe library loading events, we used a set of counters that included information about the processes (the frequency of a specific process name for a file with a given hash, the frequency of a specific file path for a file with that hash, and so on), information about the libraries (the frequency of a specific path for that library, the percentage of legitimate launches, and so on), and event properties (that is, whether the library is in the same directory as the file that calls it).
The result was a system with multiple aggregates (sets of counters and keys) that could describe an input event. These aggregates can contain a single key (e.g., a DLL’s hash sum) or multiple keys (e.g., a process’s hash sum + process signature). Based on these aggregates, we can derive a set of features that describe the library loading event. The diagram below provides examples of how these features are derived:
Feature extraction from aggregates
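The counter-based aggregates described above can be sketched as follows (a toy Python model; the field names and the two example aggregates are invented, not the actual telemetry schema):

```python
from collections import Counter, defaultdict

class Aggregates:
    """Toy version of the aggregate scheme: each aggregate is a counter
    keyed by one or more event fields, and features are frequencies
    read back out of those counters."""

    def __init__(self):
        self.counters = defaultdict(Counter)

    def update(self, event):
        # single-key and multi-key aggregates, as in the article
        self.counters["name_by_hash"][(event["exe_hash"], event["exe_name"])] += 1
        self.counters["path_by_dll"][(event["dll_name"], event["dll_path"])] += 1
        self.counters["dll_total"][event["dll_name"]] += 1

    def features(self, event):
        total = self.counters["dll_total"][event["dll_name"]] or 1
        seen = self.counters["path_by_dll"][(event["dll_name"], event["dll_path"])]
        return {
            # how common is this path for this library?
            "dll_path_freq": seen / total,
            # has this process name been seen for this file hash before?
            "known_name_for_hash": self.counters["name_by_hash"][
                (event["exe_hash"], event["exe_name"])] > 0,
        }
```

A production system maintains many more such counters, but the derivation pattern (count over a window, then normalize into a feature) is the same.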
Loading DLLs: how to describe hijacking
Certain feature combinations (dependencies) strongly indicate DLL hijacking. These can be simple dependencies. For some processes, the clean library they call always resides in a separate folder, while the malicious one is most often placed in the process folder.
Other dependencies can be more complex and require several conditions to be met. For example, a process renaming itself does not, on its own, indicate DLL hijacking. However, if the new name appears in the data stream for the first time, and the library is located on a non-standard path, it is highly likely to be malicious.
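A compound dependency like that one can be written down directly; the boolean inputs here are hypothetical feature names, chosen only for this sketch:

```python
def strong_hijack_signal(process_renamed, name_seen_before, dll_path_is_standard):
    """Encodes the compound dependency above: renaming alone is weak
    evidence, but a first-seen name plus a non-standard library path
    is a strong indicator of DLL hijacking."""
    return process_renamed and not name_seen_before and not dll_path_is_standard
```

In the actual models, such interactions are learned from the data rather than hand-written; this rule only illustrates the shape of the dependency.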
Model evolution
Within this project, we trained several generations of models. The primary goal of the first generation was to show that machine learning could be applied to detecting DLL hijacking at all. When training this model, we used the broadest possible interpretation of the term.
The model’s workflow was as simple as possible:
- We took a data stream and extracted a frequency description for selected sets of keys.
- We took the same data stream from a different time period and obtained a set of features.
- We used type 1 labeling, where events in which a legitimate process loaded a malicious library from a specified set of families were marked as DLL hijacking.
- We trained the model on the resulting data.
First-generation model diagram
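The four-step workflow can be compressed into a toy run on synthetic data (the article does not name the model family, so gradient boosting here is purely an assumption for illustration, as are the synthetic features and the type-1-style label):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-ins: four "frequency" features, and a type-1-style
# label that marks rare values of the first feature as hijacking.
rng = np.random.default_rng(0)
X = rng.random((1000, 4))
y = (X[:, 0] < 0.1).astype(int)

clf = GradientBoostingClassifier().fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```

The real pipeline differs in one important way: features are extracted from one time window of the stream, while the events being scored come from a later window, so the counters describe history rather than the event itself.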
The second-generation model was trained on data that had been processed by the first-generation model and verified by analysts (labeling type 2). Consequently, the labeling was more precise than during the training of the first model. Additionally, we added more features to describe the library structure and slightly complicated the workflow for describing library loads.
Second-generation model diagram
Based on the results from this second-generation model, we were able to identify several common types of false positives. For example, the training sample included potentially unwanted applications. These can, in certain contexts, exhibit behavior similar to DLL hijacking, but they are not malicious and rarely belong to this attack type.
We fixed these errors in the third-generation model. First, with the help of analysts, we flagged the potentially unwanted applications in the training sample so the model would not detect them. Second, in this new version, we used an expanded labeling that included useful detections from both the first and second generations. Additionally, we expanded the feature description through one-hot encoding — a technique for converting categorical features into a binary format — for certain fields. Also, since the volume of events processed by the model increased over time, this version added normalization of all features based on the data flow size.
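One-hot encoding and flow-size normalization, the two additions mentioned above, are simple enough to show directly (the signer vocabulary is a made-up example, not a field from the real feature set):

```python
def one_hot(value, vocabulary):
    """Convert a categorical value into one binary column per known value."""
    return [1 if value == v else 0 for v in vocabulary]

SIGNER_VOCAB = ["microsoft", "other_known", "unsigned"]  # illustrative

def normalize(counter_value, window_event_count):
    """Divide a raw counter by the number of events in the window, so a
    growing data stream does not shift the feature distribution."""
    return counter_value / max(window_event_count, 1)

print(one_hot("unsigned", SIGNER_VOCAB), normalize(50, 1000))
```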
Third-generation model diagram
Comparison of the models
To evaluate the evolution of our models, we applied them to a test data set none of them had worked with before. The graph below shows the ratio of true positive to false positive verdicts for each model.
Trends in true positives and false positives from the first-, second-, and third-generation models
As the models evolved, the percentage of true positives grew. While the first-generation model achieved a relatively good result (0.6 or higher) only with a very high false positive rate (10⁻³ or more), the second-generation model reached this at 10⁻⁵. The third-generation model, at the same low false positive rate, produced 0.8 true positives, which is considered a good result.
Evaluating the models on the data stream at a fixed score shows that the absolute number of new events labeled as DLL Hijacking increased from one generation to the next. That said, evaluating the models by their false verdict rate also helps track progress: the first model has a fairly high error rate, while the second and third generations have significantly lower ones.
False positives rate among model outputs, July 2024 – August 2025 (download)
Practical application of the models
All three model generations are used in our internal systems to detect likely cases of DLL hijacking within telemetry data streams. We receive 6.5 million security events daily, linked to 800,000 unique files. Aggregates are built from this sample at a specified interval, enriched, and then fed into the models. The output data is then ranked by model and by the probability of DLL hijacking assigned to the event, and then sent to our analysts. For instance, if the third-generation model flags an event as DLL hijacking with high confidence, it should be investigated first, whereas a less definitive verdict from the first-generation model can be checked last.
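The triage ordering described here, newest generation first and then by assigned probability, can be expressed as a simple sort (the event structure is hypothetical):

```python
# Hypothetical flagged events: model generation plus hijack probability
events = [
    {"id": 1, "model_gen": 1, "score": 0.99},
    {"id": 2, "model_gen": 3, "score": 0.80},
    {"id": 3, "model_gen": 3, "score": 0.95},
]

# Third-generation verdicts outrank older ones; ties break on score
queue = sorted(events, key=lambda e: (-e["model_gen"], -e["score"]))
print([e["id"] for e in queue])
```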
Simultaneously, the models are tested on a separate data stream they have not seen before. This is done to assess their effectiveness over time, as a model’s detection performance can degrade. The graph below shows that the percentage of correct detections varies slightly over time, but on average, the models detect 70–80% of DLL hijacking cases.
DLL hijacking detection trends for all three models, October 2024 – September 2025 (download)
Additionally, we recently deployed a DLL hijacking detection model into the Kaspersky SIEM, but first we tested the model in the Kaspersky MDR service. During the pilot phase, the model helped to detect and prevent a number of DLL hijacking incidents in our clients’ systems. We have written a separate article about how the machine learning model for detecting targeted attacks involving DLL hijacking works in Kaspersky SIEM and the incidents it has identified.
Conclusion
Based on the training and application of the three generations of models, the experiment to detect DLL hijacking using machine learning was a success. We were able to develop a model that distinguishes events resembling DLL hijacking from other events, and refined it to a state suitable for practical use, not only in our internal systems but also in commercial products. Currently, the models operate in the cloud, scanning hundreds of thousands of unique files per month and detecting thousands of files used in DLL hijacking attacks each month. They regularly identify previously unknown variations of these attacks. The results from the models are sent to analysts who verify them and create new detection rules based on their findings.
Detecting DLL hijacking with machine learning: real-world cases
Introduction
Our colleagues from the AI expertise center recently developed a machine-learning model that detects DLL-hijacking attacks. We then integrated this model into the Kaspersky Unified Monitoring and Analysis Platform SIEM system. In a separate article, our colleagues shared how the model had been created and what success they had achieved in lab environments. Here, we focus on how it operates within Kaspersky SIEM, the preparation steps taken before its release, and some real-world incidents it has already helped us uncover.
How the model works in Kaspersky SIEM
The model’s operation generally boils down to a step-by-step check of all DLL libraries loaded by processes in the system, followed by validation in the Kaspersky Security Network (KSN) cloud. This approach allows local attributes (path, process name, and file hashes) to be combined with a global knowledge base and behavioral indicators, which significantly improves detection quality and reduces the probability of false positives.
The model can run in one of two modes: on a correlator or on a collector. A correlator is a SIEM component that performs event analysis and correlation based on predefined rules or algorithms. If detection is configured on a correlator, the model checks events that have already triggered a rule. This reduces the volume of KSN queries and the model’s response time.
This is how it looks:
A collector is a software or hardware component of a SIEM platform that collects and normalizes events from various sources, and then delivers these events to the platform’s core. If detection is configured on a collector, the model processes all events associated with various processes loading libraries, provided these events meet the following conditions:
- The path to the process file is known.
- The path to the library is known.
- The hashes of the file and the library are available.
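A collector-side precondition check matching the list above might look like this (the event field names are hypothetical, not the Kaspersky SIEM schema):

```python
def eligible_for_model(event):
    """Only events with a known process path, library path, and both
    hashes are worth sending to the DLL-hijacking model."""
    return all([
        event.get("process_path"),
        event.get("dll_path"),
        event.get("process_hash"),
        event.get("dll_hash"),
    ])
```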
This method consumes more resources, and the model’s response takes longer than it does on a correlator. However, it can be useful for retrospective threat hunting because it allows you to check all events logged by Kaspersky SIEM. The model’s workflow on a collector looks like this:
It is important to note that the model is not limited to a binary “malicious/non-malicious” assessment; it ranks its responses by confidence level. This allows it to be used as a flexible tool in SOC practice. Examples of possible verdicts:
- 0: data is being processed.
- 1: maliciousness not confirmed. This means the model currently does not consider the library malicious.
- 2: suspicious library.
- 3: maliciousness confirmed.
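The verdict scale maps naturally onto constants, with alerting on any score above 1, mirroring the threshold in the SIEM rule that follows:

```python
# Verdict codes as listed above
VERDICTS = {
    0: "data is being processed",
    1: "maliciousness not confirmed",
    2: "suspicious library",
    3: "maliciousness confirmed",
}

def should_alert(verdict_code):
    """Alert on suspicious (2) and confirmed (3) verdicts only."""
    return verdict_code > 1

print([c for c in VERDICTS if should_alert(c)])
```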
A Kaspersky SIEM rule for detecting DLL hijacking would look like this:
N.KL_AI_DLLHijackingCheckResult > 1
Embedding the model into the Kaspersky SIEM correlator automates the process of finding DLL-hijacking attacks, making it possible to detect them at scale without having to manually analyze hundreds or thousands of loaded libraries. Furthermore, when combined with correlation rules and telemetry sources, the model can be used not just as a standalone module but as part of a comprehensive defense against infrastructure attacks.
Incidents detected during the pilot testing of the model in the MDR service
Before being released, the model (as part of the Kaspersky SIEM platform) was tested in the MDR service, where it was trained to identify attacks on large datasets supplied by our telemetry. This step was necessary to ensure that detection works not only in lab settings but also in real client infrastructures.
During the pilot testing, we verified the model’s resilience to false positives and its ability to correctly classify behavior even in non-typical DLL-loading scenarios. As a result, several real-world incidents were successfully detected where attackers used one type of DLL hijacking — the DLL Sideloading technique — to gain persistence and execute their code in the system.
Let us take a closer look at the three most interesting of these.
Incident 1. ToddyCat trying to launch Cobalt Strike disguised as a system library
In one incident, the attackers successfully leveraged the vulnerability CVE-2021-27076 to exploit a SharePoint service that used IIS as a web server. They ran the following command:
c:\windows\system32\inetsrv\w3wp.exe -ap "SharePoint - 80" -v "v4.0" -l "webengine4.dll" -a \\.\pipe\iisipmd32ded38-e45b-423f-804d-34471928538b -h "C:\inetpub\temp\apppools\SharePoint - 80\SharePoint - 80.config" -w "" -m 0
After the exploitation, the IIS process created files that were later used to run malicious code via the DLL sideloading technique (T1574.001 Hijack Execution Flow: DLL):
C:\ProgramData\SystemSettings.exe
C:\ProgramData\SystemSettings.dll
SystemSettings.dll is the name of a library associated with the Windows Settings application (SystemSettings.exe). The original library contains code and data that the Settings application uses to manage and configure various system parameters. However, the library created by the attackers has malicious functionality and is only pretending to be a system library.
Later, to establish persistence in the system and launch a DLL sideloading attack, a scheduled task was created, disguised as a Microsoft Edge browser update. It launches a SystemSettings.exe file, which is located in the same directory as the malicious library:
Schtasks /create /ru "SYSTEM" /tn "\Microsoft\Windows\Edge\Edgeupdates" /sc DAILY /tr "C:\ProgramData\SystemSettings.exe" /F
The task is set to run daily.
When the SystemSettings.exe process is launched, it loads the malicious DLL. As this happened, the process and library data were sent to our model for analysis and detection of a potential attack.
Example of a SystemSettings.dll load event with a DLL Hijacking module verdict in Kaspersky SIEM
The resulting data helped our analysts highlight a suspicious DLL and analyze it in detail. The library was found to be a Cobalt Strike implant. After loading it, the SystemSettings.exe process attempted to connect to the attackers’ command-and-control server.
DNS query: connect-microsoft[.]com
DNS query type: AAAA
DNS response: ::ffff:8.219.1[.]155;
8.219.1[.]155:8443
After establishing a connection, the attackers began host reconnaissance to gather various data to develop their attack.
C:\ProgramData\SystemSettings.exe
whoami /priv
hostname
reg query HKLM\SOFTWARE\Microsoft\Cryptography /v MachineGuid
powershell -c $psversiontable
dotnet --version
systeminfo
reg query "HKEY_LOCAL_MACHINE\SOFTWARE\VMware, Inc.\VMware Drivers"
cmdkey /list
REG query "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v PortNumber
reg query "HKEY_CURRENT_USER\Software\Microsoft\Terminal Server Client\Servers
netsh wlan show profiles
netsh wlan show interfaces
set
net localgroup administrators
net user
net user administrator
ipconfig /all
net config workstation
net view
arp -a
route print
netstat -ano
tasklist
schtasks /query /fo LIST /v
net start
net share
net use
netsh firewall show config
netsh firewall show state
net view /domain
net time /domain
net group "domain admins" /domain
net localgroup administrators /domain
net group "domain controllers" /domain
net accounts /domain
nltest / domain_trusts
reg query HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run
reg query HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\RunOnce
reg query HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer\Run
reg query HKEY_LOCAL_MACHINE\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Run
reg query HKEY_CURRENT_USER\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\RunOnce
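A burst of discovery commands like the one above is itself a usable signal. A minimal sketch (not Kaspersky's actual model; the tool list and threshold are illustrative) that scores a process chain by how many known discovery tools it runs:

```python
# Common Windows discovery tools seen in host reconnaissance.
RECON_CMDS = {"whoami", "hostname", "systeminfo", "ipconfig", "netstat",
              "tasklist", "arp", "route", "nltest", "cmdkey", "net", "netsh"}

def recon_score(command_lines):
    """Fraction of known discovery tools observed in a list of command lines."""
    seen = {line.split()[0].lower() for line in command_lines if line.strip()}
    return len(seen & RECON_CMDS) / len(RECON_CMDS)

observed = ["whoami /priv", "hostname", "systeminfo", "ipconfig /all",
            "netstat -ano", "tasklist", "arp -a", "route print",
            "net user", "netsh wlan show profiles"]

print(recon_score(observed) >= 0.5)  # True: likely host reconnaissance
```

In practice the score would be computed per process tree within a short time window, since single legitimate uses of these tools are common.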
Based on the attackers’ TTPs, such as loading Cobalt Strike as a DLL, using the DLL sideloading technique (1, 2), and exploiting SharePoint, we can say with a high degree of confidence that the ToddyCat APT group was behind the attack. Thanks to the prompt response of our model, we were able to respond in time and block this activity, preventing the attackers from causing damage to the organization.
Incident 2. Infostealer masquerading as a policy manager
Another example was discovered by the model after a client was connected to MDR monitoring: a legitimate system file located in an application folder attempted to load a suspicious library that was stored next to it.
C:\Program Files\Chiniks\SettingSyncHost.exe
C:\Program Files\Chiniks\policymanager.dll E83F331BD1EC115524EBFF7043795BBE
The SettingSyncHost.exe file is a system host process for synchronizing settings between one user’s different devices. Its 32-bit and 64-bit versions are usually located in C:\Windows\System32\ and C:\Windows\SysWOW64\, respectively. In this incident, the file location differed from the normal one.
Example of a policymanager.dll load event with a DLL Hijacking module verdict in Kaspersky SIEM
Analysis of the library file loaded by this process showed that it was malware designed to steal information from browsers.
Graph of policymanager.dll activity in a sandbox
The file directly accesses browser files that contain user data.
C:\Users\<user>\AppData\Local\Google\Chrome\User Data\Local State
The library file is on the list of files used for DLL hijacking, as published in the HijackLibs project. The project contains a list of common processes and libraries employed in DLL-hijacking attacks, which can be used to detect these attacks.
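Such a list can be applied mechanically: for each tracked DLL name, compare the directory it was loaded from against its expected locations. A Python sketch of that lookup (the entry below is illustrative, not copied from HijackLibs, which publishes its entries as YAML):

```python
from pathlib import PureWindowsPath

# Illustrative HijackLibs-style entry: tracked DLL -> expected directories.
HIJACKLIBS = {
    "policymanager.dll": {r"c:\windows\system32", r"c:\windows\syswow64"},
}

def dll_load_suspicious(dll_path):
    """True if a tracked DLL was loaded from outside its expected locations."""
    p = PureWindowsPath(dll_path)
    expected = HIJACKLIBS.get(p.name.lower())
    if expected is None:
        return False  # DLL not tracked by the list
    return str(p.parent).lower() not in expected

print(dll_load_suspicious(r"C:\Program Files\Chiniks\policymanager.dll"))  # True
print(dll_load_suspicious(r"C:\Windows\System32\policymanager.dll"))       # False
```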
Incident 3. Malicious loader posing as a security solution
Another incident discovered by our model occurred when a user connected a removable USB drive:
Example of a Kaspersky SIEM event where a wsc.dll library was loaded from a USB drive, with a DLL Hijacking module verdict
The root of the connected drive contained hidden folders, each paired with a shortcut of the same name. The shortcuts used icons typically assigned to folders. Because file extensions were hidden by default, the user might have mistaken a shortcut for a folder and launched it. In turn, the shortcut opened the corresponding hidden folder and ran an executable using the following command:
"%comspec%" /q /c "RECYCLER.BIN\1\CEFHelper.exe [$DIGITS] [$DIGITS]"
CEFHelper.exe is a legitimate Avast Antivirus executable that, through DLL sideloading, loaded the wsc.dll library, which is a malicious loader.
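This launch pattern, cmd.exe (`%comspec%`) quietly running an executable out of a fake recycle-bin folder on removable media, lends itself to a simple command-line rule. A hedged Python sketch (the regex and rule are illustrative, not a production detection):

```python
import re

# Executables launched from a RECYCLER/RECYCLER.BIN folder, as USB worms
# often stage them there to mimic the real recycle bin.
PATTERN = re.compile(r"recycler(\.bin)?\\.*\.exe", re.IGNORECASE)

def fake_recycler_launch(cmdline):
    """True if cmd.exe is used to run an .exe out of a fake recycle bin."""
    return "%comspec%" in cmdline.lower() and bool(PATTERN.search(cmdline))

cmd = '"%comspec%" /q /c "RECYCLER.BIN\\1\\CEFHelper.exe 123 456"'
print(fake_recycler_launch(cmd))  # True
```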
Code snippet from the malicious file
The loader opens a file named AvastAuth.dat, which contains an encrypted backdoor. The library reads the data from the file into memory, decrypts it, and executes it. After this, the backdoor attempts to connect to a remote command-and-control server.
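One generic tell for companion payloads like AvastAuth.dat is byte entropy: encrypted blobs sit near the 8 bits/byte maximum, while text and most structured data sit well below it. A small sketch of that check (the thresholds are illustrative):

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0..8)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

encrypted_like = os.urandom(4096)                       # stand-in for an encrypted blob
plain_like = b"The library reads the data from the file into memory. " * 80

print(shannon_entropy(encrypted_like) > 7.5)  # True for random/encrypted data
print(shannon_entropy(plain_like) < 5.0)      # True for readable text
```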
The library file, which contains the malicious loader, is on the list of known libraries used for DLL sideloading, as presented on the HijackLibs project website.
Conclusion
Integrating the model into the product provided the means of early and accurate detection of DLL-hijacking attempts which previously might have gone unnoticed. Even during the pilot testing, the model proved its effectiveness by identifying several incidents using this technique. Going forward, its accuracy will only increase as data accumulates and algorithms are updated in KSN, making this mechanism a reliable element of proactive protection for corporate systems.
IoC
Legitimate files used for DLL hijacking
E0E092D4EFC15F25FD9C0923C52C33D6 loads SystemSettings.dll
09CD396C8F4B4989A83ED7A1F33F5503 loads policymanager.dll
A72036F635CECF0DCB1E9C6F49A8FA5B loads wsc.dll
Malicious files
EA2882B05F8C11A285426F90859F23C6 SystemSettings.dll
E83F331BD1EC115524EBFF7043795BBE policymanager.dll
831252E7FA9BD6FA174715647EBCE516 wsc.dll
Paths
C:\ProgramData\SystemSettings.exe
C:\ProgramData\SystemSettings.dll
C:\Program Files\Chiniks\SettingSyncHost.exe
C:\Program Files\Chiniks\policymanager.dll
D:\RECYCLER.BIN\1\CEFHelper.exe
D:\RECYCLER.BIN\1\wsc.dll
Apple’s Continuing Failing Repair Score With the AirPods Pro 3
It takes quite a bit of effort to earn a 0 out of 10 repairability score from iFixit, but in-ears like Apple’s AirPods are well on course for a clean streak there, with the AirPods Pro 3 making an abysmal showing in their vitriolic teardown video alongside their summary article. The conclusion is that while they are really well-engineered devices with a good feature set, the moment the battery wears out they are effectively e-waste. The inability to open them without causing at least some level of cosmetic damage is bad, and that’s before trying to glue the device back together. Never mind effecting any repairs beyond this.
Worse, this glued-together nightmare continues with the charging case. Although you’d expect to be able to open the case for a battery swap, it too is glued shut to the point where non-destructive entry is basically impossible. As iFixit rightly points out, there are plenty of examples of how to do it better, such as the Fairbuds in-ears. We have seen other in-ears in the past that can have some maintenance performed without resorting to violence, which makes Apple’s choices here look deliberate.
Although, judging by the comments on the video, there are plenty of happy AirPods users for whom the expected 2-3 year lifespan is no objection, it’s clear that the AirPods are still getting zero love from the iFixit folks.
25,000 Kilometers: Seacom 2.0 Is the New Submarine Cable Connecting Europe, Africa, and Asia
Seacom, an African submarine-infrastructure operator, has announced the launch of Seacom 2.0, an international cable system designed to connect Europe, the Middle East, Africa, and Asia.
The project calls for a 25,000-kilometer (15,534-mile) route carrying 48 fiber pairs, with 20 landing points spread across 15 countries.
According to the company, the new cable answers growing demand for artificial-intelligence, cloud, and real-time data-transfer services. Seacom claims the network could cut connectivity costs by up to 300%, fostering the development of cloud services, fintech, and the regional technology ecosystem.
The planned route starts in Marseille, France, crosses the Mediterranean and the Red Sea, and then splits into two branches: the first continues east to Singapore via India and Pakistan; the second serves the coasts of North, West, and Southern Africa.
In the official announcement, CEO Alpheus Mangale described Seacom 2.0 as “more than just a cable,” stressing that the initiative aims to consolidate the region’s digital sovereignty and promote open access and regional integration. The company states its goal is a resilient, sustainable, and inclusive network.
Technically, Seacom 2.0 is conceived as a high-capacity, low-latency infrastructure optimized for artificial-intelligence workloads. The plan includes transforming its landing stations into true “AI communication nodes” intended to link Africa’s sovereign AI infrastructure to global data centers.
The project also sets out strategic economic and operational goals: stimulating GDP growth (building on the positive impact already seen from earlier submarine cables, which helped lift African per-capita GDP by more than 6%), supporting smart infrastructure (IoT-enabled ports, AI-driven urban planning, edge computing), and empowering small and medium-sized businesses with enterprise-grade connectivity for access to the cloud and digital markets.
Seacom 2.0 builds on the operator’s existing network, launched in 2009 in partnership with Tata Communications. Seacom operates landing stations along Africa’s east coast and holds capacity on major international systems such as WACS, TEAMS, EASSy, Main One, Equiano, and Peace. Its investors include Industrial Promotion Services (Aga Khan Fund for Economic Development), Remgro, Solcon Capital, and Sanlam.
The announcement also cites macro projections: Seacom is positioning Seacom 2.0 for a forecast 10 billion AI agents by 2030 and for the expected demographic growth of the Indian Ocean basin, which estimates suggest will be home to half the world’s population by 2050.
The article “25.000 Chilometri, è il nuovo cavo sottomarino Seacom2.0 per collegare Europa, Africa e Asia” originally appeared on il blog della sicurezza informatica.
IO E CHATGPT E19: Collaborating as a Team
ChatGPT is not just a personal tool: it can also improve communication and efficiency in work groups. We discuss this in this episode.
zerodays.podbean.com/e/io-e-ch…
Splashflag: Raising the Flag on a Pool Party
Some things are more fun when there are more folks involved, and enjoying time in the pool is one of those activities. Knowing this, [Bert Wagner] started thinking of ways to best coordinate pool activities with his kids and their neighborhood friends. Out of this came the Splashflag, an IoT device built from the ground up that provides fun pool parties and a great learning experience along the way.
The USB-powered Splashflag is housed in a 3D-printed case, with a simple 2×16 LCD mounted on the front to display the notification. There’s also a small servo mounted to the rear that raises a 3D-printed flag when the notification comes in—drawing your attention to it a bit more than just text alone would. Hidden on the back is also a reset button: a long press factory-resets the device to connect to a different Wi-Fi network, and a quick press clears the notification to return the device to its resting state.
Inside is an ESP32-S3 that drives the servo and display and handles Wi-Fi. A captive portal makes it easy to connect the device to a wireless network. Once connected, the ESP32 joins an MQTT broker hosted by [Bert Wagner], which lets invitations be sent quickly and easily through the web app he built.
Thanks, [Bert Wagner], for sharing the process of building this fun, unique IoT device—be sure to read all the details on his website or check out the code and design files available over on his GitHub. Check out some of our other IoT projects if this project has you interested in making your own.