
Scared for a Drink?


Dark lab setup with scientific looking drink dispenser

Halloween is about tricks and treats, but who wouldn’t fancy a drink to go with that? [John Sutley] decided to complete his Halloween party with a drink dispenser that looks as though it was salvaged from a backstreet laboratory. It’s not only an impressive-looking separating funnel, it even runs on an Arduino. The setup combines lab glassware, servo motors, and an industrial control panel straight from a process plant.

Power management proved to be the most challenging part: the three servos drew more current than one Arduino could handle, leading to voltage sag, brownouts, and ghostly resets. A healthy 1000 µF capacitor across the 5-volt rail fixed it. With a bit of PWM control and some C++, [John] finished up his interactive bar system, where guests could seal their own doom by pressing simple buttons.
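
The firmware side of such a build boils down to very little Arduino C++. Below is a minimal sketch of the idea, not [John]’s actual code: the pin numbers, servo angle, and pour time are assumptions made purely for illustration.

```cpp
#include <Servo.h>  // standard Arduino servo library; the servo is driven with PWM pulses

Servo valveServo;              // one of the three dispensing servos
const int BUTTON_PIN = 2;      // guest-facing push button (assumed wiring)
const int SERVO_PIN  = 9;      // PWM-capable pin carrying the servo signal

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);  // button pulls the pin to ground when pressed
  valveServo.attach(SERVO_PIN);
  valveServo.write(0);                // start with the separating funnel valve closed
}

void loop() {
  if (digitalRead(BUTTON_PIN) == LOW) {  // a guest seals their doom
    valveServo.write(90);                // open the valve (angle is an assumption)
    delay(2000);                         // pour for two seconds
    valveServo.write(0);                 // close it again
    delay(500);                          // crude debounce before the next victim
  }
}
```

The 1000 µF capacitor lives entirely in hardware across the 5 V rail; the sketch doesn’t change, it simply stops the servos’ current spikes from resetting the board.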

This build combines the thrill of Halloween with ‘the ghost in the machine’. Setting aside the question of whether you should ever drink from a test tube – what color would you pick? Lingonberry juice or aqua regia, who could tell? Judging from the video, we wouldn’t trust the bartender on it – but build it yourself and see what it brings you!

youtube.com/embed/8pY9TOOwmnc?…


hackaday.com/2025/10/31/scared…


2025 Component Abuse Challenge: An Input Is Now An Output


Part of setting up a microcontroller when writing a piece of firmware usually involves configuring its connections to the outside world. You define a mapping of physical pins to internal peripherals to decide which is an input, an output, an analogue pin, or whatever else is available. In some cases though that choice isn’t available, and once you’ve used all the available output pins you’re done. But wait – can you use an input as an output? With [SCART VADER]’s lateral thinking, you can.

The whole thing takes advantage of the internal pull-up resistor that a microcontroller has among its internal kit of parts. Driving a transistor from an output pin usually requires a base resistor, so would it be possible to use the pull-up as a base resistor? If the microcontroller can enable or disable the resistor on an input pin, then yes it can: a transistor can be turned off and on with nary an output to be seen. In this case the chip comes from the ATmega parts bin, so we’re not sure if the trick is possible on other manufacturers’ devices.
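
On an Arduino-flavoured ATmega the trick amounts to toggling the pull-up on a pin that is never configured as an output. Here is a minimal sketch of the concept; the pin number and the transistor wiring are assumptions for illustration, not details from [SCART VADER]’s build.

```cpp
// The NPN transistor's base is wired to this "input" pin. The internal pull-up
// (roughly 20-50 kilohms on ATmega parts) stands in for the usual base resistor.
const int FAKE_OUTPUT = 3;  // assumed pin; any GPIO with a pull-up would do

void setup() {
  // Note that pinMode(FAKE_OUTPUT, OUTPUT) is never called: the pin remains an input.
}

void loop() {
  pinMode(FAKE_OUTPUT, INPUT_PULLUP);  // pull-up enabled: base current flows, transistor switches on
  delay(500);
  pinMode(FAKE_OUTPUT, INPUT);         // pull-up disabled: pin floats, transistor switches off
  delay(500);
}
```

The available base current is tiny, on the order of 100 µA, so the trick is best suited to switching small loads or driving a second transistor rather than anything beefy.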

As part of our 2025 Component Abuse Challenge, this one embodies the finest principles of using a part in a way it was never intended to be used, and we love it. At the time of writing you’ve still got a few days to make an entry yourself, so bring out your own hacks!

2025 Hackaday Component Abuse Challenge


hackaday.com/2025/10/31/2025-c…


Speech Synthesis on A 10 Cent Microcontroller


Speech synthesis has been around since roughly the middle of the 20th century. Once upon a time, it took remarkably advanced hardware just to choke out a few words. But as [atomic14] shows with this project, these days it only takes some open source software and a 10-cent microcontroller.

The speech synth is implemented on a CH32V003 microcontroller, known for its remarkably low unit cost when ordered in quantity. It’s a speedy little RISC-V chip running at 48 MHz, albeit with the limitation of just 16 KB of Flash and 2 KB of SRAM on board.

The microcontroller is hooked up to a speaker via a simple single-transistor circuit, which allows for audio output. [atomic14] first demonstrates this by having the chip play back six seconds of low-quality audio, with some nifty space-saving techniques to squeeze it into the limited flash available. Then, [atomic14] shows how he implemented the Talkie library on the chip, which is a software implementation of Texas Instruments’ LPC speech synthesis architecture, which you probably know from the famous Speak & Spell toys. It’s got a ton of built-in vocabulary out of the box, and you can even encode your own words with some freely available tools.
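
For reference, on a stock Arduino the Talkie library needs only a few lines of user code; [atomic14]’s CH32V003 port naturally differs, and the vocabulary header and word constant below are assumptions included purely to show the shape of the API.

```cpp
#include "Talkie.h"           // LPC speech synthesis library
#include "Vocab_US_Large.h"   // one of the bundled vocabulary headers (name assumed)

Talkie voice;  // the library generates the audio as PWM on a speaker pin

void setup() {
  // say() plays back one pre-encoded LPC word; sp2_DANGER stands in for
  // whichever word constant the vocabulary header actually provides.
  voice.say(sp2_DANGER);
}

void loop() {
}
```

Each word is stored as a compact stream of LPC filter coefficients rather than raw samples, which is exactly why a whole vocabulary fits alongside the synthesizer in a few kilobytes of flash.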

We’ve seen [atomic14] tinker with these chips before, too.

youtube.com/embed/RZvX95aXSdM?…


hackaday.com/2025/10/31/speech…


AzureHound: the “legitimate” tool for cloud assaults


AzureHound, part of the BloodHound suite, was created as an open-source tool to help security teams and red teams identify vulnerabilities and escalation paths in Microsoft Azure and Entra ID environments.

Today, however, it is increasingly used by criminal groups and state-sponsored actors for very different purposes: mapping cloud infrastructures, identifying privileged roles, and planning targeted attacks.

Why AzureHound has become a dangerous tool


Written in Go and available for Windows, Linux, and macOS, AzureHound queries the Microsoft Graph and Azure REST APIs to collect information on the identities, roles, applications, and resources present in the tenant.

Its mode of operation, designed for legitimate purposes, is just as useful to anyone looking to attack:

  1. It can be run remotely, without directly accessing the victim’s network.
  2. It produces JSON output compatible with BloodHound, which turns it into graphs of relationships, privileges, and potential attack paths.

In other words, AzureHound automates the reconnaissance phase that once required time and manual expertise, turning cloud reconnaissance into a fast and precise process.
Execution of AzureHound to enumerate users.
BloodHound illustration of available key vaults.

From legitimate use to abuse


Over the course of 2025, several cybercriminal groups have adopted AzureHound for offensive purposes.
According to threat intelligence analyses, Curious Serpens (also known as Peach Sandstorm), Void Blizzard, and the Storm-0501 group have used the tool to enumerate Entra ID environments, identify misconfigurations, and plan privilege escalation.
This shows how tools born for security can become an integral part of compromise campaigns, especially when cloud environments are not adequately monitored.

How it is exploited


After gaining initial access to an Azure tenant through compromised credentials, phishing, or vulnerable service accounts, malicious operators run AzureHound to:

  • collect information on users, roles, and relationships;
  • identify privileged identities or service principals with excessive permissions;
  • discover indirect privilege escalation paths;
  • build, through BloodHound, a graphical representation of the entire environment.

This visibility makes it possible to plan the next steps with precision: from escalation to lateral movement, all the way to data exfiltration or ransomware deployment.

How to defend yourself


Organizations using Azure and Microsoft Entra ID should implement targeted controls to detect and block anomalous behavior linked to the misuse of tools such as AzureHound.

  • Monitor the APIs for unusual enumeration patterns against the Graph and REST APIs.
  • Create alerts for bulk queries or requests with suspicious user agents.
  • Limit the permissions of applications and service principals, adopting the principle of least privilege.
  • Enforce MFA and strict controls on synchronized accounts with elevated privileges.
  • Integrate hunting rules into SIEMs (such as Microsoft Sentinel or Defender XDR) to detect behavior consistent with automated data collection.

Conclusion


AzureHound is a concrete example of how tools created to improve security can become a weapon in the wrong hands.
Understanding how these tools are abused is essential to building effective defense strategies, improving visibility into cloud environments, and reducing reaction time in the event of a compromise.
Only by knowing the same techniques used by attackers is it possible to anticipate them and keep control of one’s digital infrastructure.

The article AzureHound: the “legitimate” tool for cloud assaults comes from Red Hot Cyber.


Learn What a Gaussian Splat Is, Then Make One


Gaussian Splats is a term you have likely come across, probably in relation to 3D scenery. But what are they, exactly? This blog post explains precisely that in no time at all, complete with great interactive examples and highlights of their strengths and relative weaknesses.
Gaussian splats excel at making colorful, organic subject matter look great.
Gaussian splats are a lot like point clouds, except the points are each differently-shaped “splats” of color, arranged in such a way that the resulting 3D scene looks fantastic — photorealistic, even — from any angle.
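
One way to picture the data is a struct per splat. This is only an illustrative sketch of what each “point” typically carries, not the layout of any particular implementation; real renderers also store spherical-harmonic coefficients so colour can vary with viewing direction.

```cpp
#include <array>

// One Gaussian splat: an ellipsoidal blob of colour positioned in 3D space.
struct GaussianSplat {
    std::array<float, 3> position;  // centre of the Gaussian (its mean)
    std::array<float, 3> scale;     // ellipsoid radii along its local axes
    std::array<float, 4> rotation;  // quaternion orienting those axes
    std::array<float, 3> color;     // base RGB colour of the splat
    float opacity;                  // how strongly it contributes when alpha-blended
};

// A scene is simply millions of these, depth-sorted and alpha-blended per view.
```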

All of the real work is in the initial setup of the splats into the scene. Once that work is done, viewing is the easy part. Not only are the resulting file sizes of the scenes small, but rendering is computationally simple.

There are a few pros and cons to gaussian splats compared to 3D meshes, but in general they look stunning for any kind of colorful, organic scene. So how does one go about making or using them?

That’s where the second half of the post comes in handy. It turns out that making your own gaussian splats is simply a matter of combining high-quality photos with the right software. In that sense, it has a lot in common with photogrammetry.

Even early on, gaussian splats were notable for their high realism. And since this space has more than its share of lateral-thinkers, the novel concept of splats being neither pixels nor voxels has led some enterprising folks to try to apply the concept to 3D printing.


hackaday.com/2025/10/31/learn-…


There’s Nothing Boring About Web Search on Retro Amigas


The most exciting search engine 68k can handle.

Do you have a classic Amiga computer? Do you want to search the web with iBrowse, but keep running into all that pesky modern HTML5 and HTTPS? In that case, [Nihirash] created BoringSearch.com just for you!

BoringSearch was explicitly inspired by [ActionRetro]’s FrogFind search portal, and works similarly in practice: both serve as search engines and strip the websites they return down to pure HTML so old browsers can handle them.
Boring search in its natural habitat, iBrowse on Amiga.
The biggest difference we can see betwixt the two is that FrogFind will link to images, while BoringSearch either loads them inline or strips them out entirely, depending on the browser you test with and how the page was formatted to begin with. (Ironically, modern Firefox doesn’t get images from BoringSearch’s page simplifier.) BoringSearch also gives you the option of searching with DuckDuckGo or Google via SerpAPI, though note that poor [Nihirash] is paying out-of-pocket for Google searches.

BoringSearch is explicitly aimed at the iBrowse browser for late-stage Amigas, but should work equally well with any modern browser. Apparently this project only exists because FrogFind went down for a week, and without the distraction of retrocomputer websurfing, [Nihirash] was able to bash out his own version from scratch in Rust. If you want to self-host or see how it was done, [Nihirash] put the code on GitHub under a donationware license.

If you’re scratching your head why on earth people are still going on about Amiga in 2025, here’s one take on it.


hackaday.com/2025/10/31/theres…


Windows 11 Task Manager bug: how to fix it


Microsoft’s Windows 11 updates often contain inexplicable bugs, particularly patches for new features, such as the recently released KB5067036. Although KB5067036 is an optional update, it introduced a completely new Start menu along with updates to the taskbar and File Explorer, making it highly anticipated.

However, a bug has been found in Task Manager.

The bug is that when a user closes the Task Manager window as usual, the program is not actually closed and keeps running in the background.

If the program is reopened, another instance is spawned. In testing, up to 100 background processes were generated.

Given that it also consumes memory and CPU resources, this bug can slow down the system and even cause computers to freeze.

How can this bug be fixed?

There are workarounds on the Reddit thread where the problem was reported. One is to use the “End task” function in Task Manager to kill the processes running in the background, one by one.

If you have several windows open, the steps above can get a bit tedious. In that case, you can try the command line: open CMD and type the following command:

taskkill /im taskmgr.exe /f

This command terminates all the background Task Manager processes at once, restoring peace and quiet to the world.

Of course, the definitive fix still rests with Microsoft. This patch is still a preview, and the final version is due within two weeks, so Microsoft should have time to sort out the problem. Don’t worry if you haven’t installed the update yet: there may be other bugs still waiting to be discovered, and there’s no rush to try the new Start menu.

The article Windows 11 Task Manager bug: how to fix it comes from Red Hot Cyber.


Linux and gaming: an increasingly reliable pairing


According to Boiling Steam, the number of Windows games that run reliably on Linux is the highest ever recorded. The analysis is based on statistics from ProtonDB, which collects user reports on launching games through Proton and WINE.

The researchers point out that games are divided into five categories: Platinum (works perfectly out of the box); Gold (needs minimal tweaking); Silver (playable but with issues); Bronze (somewhere in between); Borked (does not work at all).

These ratings are only partially comparable to the Steam Deck Verified system, which takes the performance of a specific device into account.

The chart published in the article shows a steady increase in the number of games in the Platinum and Gold categories. Currently, around 90% of Windows games run correctly on Linux, and the share of “broken” games has fallen to an all-time low of roughly 10%.

These improvements are credited to the ongoing work of the Proton and WINE developers, as well as to initiatives by Valve, which increasingly works with publishers ahead of game releases to ensure compatibility with the Steam Deck.

Some titles, such as the MOBA March of Giants, still refuse to launch, often due to outright bans by the developers, as ProtonDB reports confirm. However, cases of games such as Blade & Soul NEO moving from broken to partially compatible are becoming increasingly common.

There are still projects that require manual DLL fiddling, such as the visual novel Sickly Days and Summer Traces, which only launches after installing protontricks. On another front, progress is held back by anti-cheat software that does not support Linux, a problem that, according to Boiling Steam, can only be overcome by the widespread adoption of Linux devices.

The growing compatibility makes Linux and SteamOS increasingly attractive to gaming system manufacturers: today they can be said to offer full support for 80% of the most popular games. Moreover, on hardware with AMD processors, Linux often delivers better performance than Windows.

HDR support on desktop systems is also improving, and overall rendering quality and driver stability are bringing Linux closer to the level of dedicated gaming platforms.

The authors at Boiling Steam conclude: five years ago, few believed in gaming on Linux, but now its success is becoming impossible to ignore.

The article Linux and gaming: an increasingly reliable pairing comes from Red Hot Cyber.


Creators of the Medusa malware arrested by officials of the Russian Ministry of Internal Affairs


The group of Russian programmers behind the Medusa malware has been arrested by officials of the Russian Ministry of Internal Affairs, with the support of police in the Astrakhan region.

According to investigators, three young IT specialists were involved in developing, distributing, and deploying viruses designed to steal digital data and breach security systems. This was reported by Irina Volk on her Telegram channel, which included a video of the arrests.

Investigators established that the group’s activities began about two years ago. At the time, the suspects had created and published on hacker forums a program called Medusa, capable of stealing user accounts, cryptocurrency wallets, and other confidential information. The virus spread rapidly through closed communities, where it was used to attack private and corporate networks.

One of the recorded incidents was a cyberattack in May 2025 on a government agency in the Astrakhan region. Using their own software, the attackers gained unauthorized access to official data and transferred it to servers under their control. Criminal proceedings were opened under Part 2 of Article 273 of the Russian Criminal Code, which covers liability for creating and distributing malware.

Investigators from the cybercrime department of the Russian Ministry of Internal Affairs, with the support of the Russian National Guard, arrested the suspects in the Moscow region. During the searches, computers, mobile devices, credit cards, and other items were seized, confirming their involvement in crimes against information security.

The investigation revealed that the developers of Medusa had also created another malicious tool. This software was designed to bypass antivirus solutions, disable defense mechanisms, and build botnets, that is, networks of infected computers used to launch large-scale cyberattacks.

Pre-trial detention measures have been imposed on all three suspects. The investigation continues in order to identify possible accomplices and further instances of illegal activity.

The article Creators of the Medusa malware arrested by officials of the Russian Ministry of Internal Affairs comes from Red Hot Cyber.


Hacking Together an Expensive-Sounding Microphone At Home


When it comes to microphones, [Roan] has expensive tastes. He fancies the famous Telefunken U-47, but doesn’t quite have the five-figure budget to afford a real one. Thus, he set about getting as close as he possibly could with a build of his own.

[Roan] was inspired by [Jim Lill], who is notable for demonstrating that the capsule used in a mic has probably the greatest effect on its sound overall, compared to trivialities like the housing or the grille. Thus, [Roan’s] build is based around a 3U Audio M7 capsule. It’s a large diaphragm condenser capsule that is well regarded for its beautiful sound, and can be had for just a few hundred dollars. [Roan] then purchased a big metal lookalike mic housing that would hold the capsule and all the necessary electronics to make it work. The electronics themselves were harvested from an old ADK microphone, with some challenges faced due to its sturdy construction. When the tube-based amplifier circuit was zip-tied into its new housing along with the fancy mic capsule, everything worked! Things worked even better when [Roan] spotted a wiring error and got the backplate voltage going where it was supposed to go. Some further tweaks to the tube and capacitors helped dial in the sound.

If you’ve got an old mic you can scrap for parts and a new capsule you’re dying to use, you might pursue a build like [Roan’s]. Or, you could go wilder and try building your own ribbon mic with a gum wrapper. Video after the break.

youtube.com/embed/hFXfJk1FC9E?…

[Thanks to Keith Olson for the tip!]


hackaday.com/2025/10/30/hackin…


PhantomRaven Attack Exploits NPM’s Unchecked HTTP URL Dependency Feature



An example of RDD in a package’s dependencies list. It’s not even counted as a ‘real’ dependency. (Credit: Koi.ai)
Having another security threat emanating from Node.js’ Node Package Manager (NPM) feels like a weekly event at this point, but this newly discovered one is among the more refined. It exploits not only the remote dynamic dependencies (RDD) ‘feature’ in NPM, but also uses the increased occurrence of LLM-generated non-existent package names to its advantage. Called ‘slopsquatting’, it’s only the first step in this attack that the researchers over at [Koi] stumbled over by accident.

Calling it the PhantomRaven attack for that cool vibe, they found that it had started in August of 2025, with some malicious packages detected and removed by NPM, but eighty subsequent packages evaded detection. A property of these packages is that their dependencies list uses RDD to download malicious code from an HTTP URL. It was this traffic to the same HTTP domain that tipped off the researchers.

For some incomprehensible reason, allowing these HTTP URLs as a package dependency is an integral part of the RDD feature. Since the malicious URL is not found in the package code itself, it slips past security scanners, and because the download is never cached, the attackers retain significantly more control. This fake dependency runs automatically, without user interaction or any notification, and immediately begins scanning the filesystem for credentials and anything else of use.

The names of the fake packages were also chosen specifically to match incomplete package names that an LLM might spit out, such as unused-import instead of the full package name eslint-plugin-unused-imports. This serves to highlight why you should not only strictly validate direct dependencies, but also their dependencies. As for why RDD is even a thing, that is something NPM will hopefully explain soon.

Top image: North American Common Raven (Corvus corax principalis) in flight at Muir Beach in Northern California (Credit: Copetersen, Wikimedia)


hackaday.com/2025/10/30/phanto…


100-Year Old Wagon Wheel Becomes Dynamometer


If you want to dyno test your tuner car, you can probably find a couple of good facilities in any nearby major city. If you want to do similar testing at a smaller scale, though, you might find it’s easier to build your own rig, like [Lou] did.

[Lou’s] dynamometer is every bit a DIY project, relying on a 100-year-old wagon wheel as the flywheel, installed in a simple frame cobbled together from 6×6 timber beams. As you might imagine, a rusty old wagon wheel probably wouldn’t be in great condition, and that was entirely true here. [Lou] put in the work to balance it up with some added weights, before measuring its inertia with a simple falling weight test. The wheel is driven via a chain with a 7:1 gear reduction to avoid spinning it too quickly. Logging the data is a unit from BlackBoxDyno, which uses Hall effect sensors to measure engine RPM and flywheel RPM. With this data and a simple calibration, it’s possible to calculate the torque and horsepower of a small engine hooked up to the flywheel.
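
The arithmetic behind the logger is straightforward: once the flywheel’s moment of inertia is known, torque follows from how quickly the engine spins the wheel up, and power is torque times angular velocity. A small worked example follows; the inertia figure and RPM samples are invented for illustration and are not [Lou]’s measurements.

```cpp
#include <cstdio>

int main() {
    const double PI = 3.14159265358979;

    // Assumed flywheel moment of inertia from the falling-weight test, in kg*m^2.
    const double inertia = 4.2;
    const double gearReduction = 7.0;  // 7:1 chain drive between engine and flywheel

    // Two flywheel RPM samples taken 0.5 s apart during a pull (illustrative numbers).
    const double rpm1 = 200.0, rpm2 = 260.0, dt = 0.5;

    // Convert RPM to angular velocity in rad/s, then get angular acceleration.
    const double w1 = rpm1 * 2.0 * PI / 60.0;
    const double w2 = rpm2 * 2.0 * PI / 60.0;
    const double alpha = (w2 - w1) / dt;

    // Torque at the flywheel, then referred back through the reduction to the crank.
    const double wheelTorque  = inertia * alpha;
    const double engineTorque = wheelTorque / gearReduction;

    // Power is torque times angular velocity; the engine turns 7x faster than the wheel.
    const double powerWatts = engineTorque * (w2 * gearReduction);
    const double horsepower = powerWatts / 745.7;

    std::printf("Engine torque %.1f Nm, power %.1f hp\n", engineTorque, horsepower);
    return 0;
}
```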

Few of us are bench testing our lawnmowers for the ultimate performance, but if you are, a build like this could really come in handy. We’ve seen other dyno builds before, too. Video after the break.

youtube.com/embed/61-e-HK6HdU?…


hackaday.com/2025/10/30/100-ye…


Iconic Xbox Prototype Brought to Life


When Microsoft decided they wanted to get into the game console market, they were faced with a problem. Everyone knew them as a company that developed computer software, and there was a concern that consumers wouldn’t understand that their new Xbox console was a separate product from their software division. To make sure they got the message though, Microsoft decided to show off a prototype that nobody could mistake for a desktop computer.

The giant gleaming X that shared the stage with Bill Gates and Seamus Blackley at the 2000 Game Developers Conference became the stuff of legend. We now know the machine wasn’t actually a working Xbox, but at the time, it generated enormous buzz. But could it have been a functional console? That’s what [Tito] of Macho Nacho Productions wanted to find out — and the results are nothing short of spectacular.

The key to this project is the enclosure itself, but this is no simple project box we’re talking about here. Milled from a solid block of aluminum, the original prototype’s shell reportedly cost Microsoft $18,000 to have produced, which would be around $36,000 when adjusted for inflation. Luckily, the state of the art has moved forward a bit in the intervening two decades. So after working with [Wesk] to create a 3D model from reference images (including some that [Tito] took himself of one of the surviving prototypes on display in New York), the design was sent away to PCBWay for production. It still cost the better part of $6k to produce, but that’s a hell of a savings compared to the original. Though [Tito] still had to polish the aluminum himself to recreate the original’s mirror-like shine.

To say the rest of the project was “easy” would be something of an understatement, but it was at least more familiar territory. Unlike the original prototype, this machine would actually play Xbox games, so [Tito] focused on cramming the original era-appropriate hardware (plus a few modern homebrew tweaks, such as HDMI-out) into the hollow X using a clever system of integrated rails and 3D printed mounts.

Some of the original parts, like the power supply, were simply too large to use. That’s where [Redherring32] came in. He designed a custom USB-C power supply that could satisfy the original console’s energy needs in a much smaller footprint. There’s also a modern SSD in place of the 8 GB of spinning rust that the console shipped with back in 2001. But overall, it’s still real Xbox hardware — no emulation or other funny tricks here.

At this point, the team had already exceeded what Microsoft pulled off in 2000, but they weren’t done yet. Wanting to really set this project apart, [Tito] decided to replace the center jewel with something a bit more modern. The original was little more than a backlit piece of plastic, but on this build it’s a circular LCD driven by a Raspberry Pi Pico, capable of showing a number of custom full-motion animations thanks to the efforts of [StuckPixel].

The end result of this team effort is a machine that’s not only better looking than Microsoft’s original, but also more functional. It’s a project that’s destined for more than just sitting on a shelf collecting dust, so we’re happy to hear that [Tito] plans on taking it on a tour of different gaming events to give the public a chance to see it in person. He’s even had a custom crate made so he can transport it around in style and safety.

youtube.com/embed/0OMP8JvGWNY?…


hackaday.com/2025/10/30/iconic…


Build Your Own Force-Feedback Joystick


Force feedback joysticks are prized for creating a more realistic experience when used with software like flight sims. Sadly, you can’t say the same thing about using them with mech games, because mechs aren’t real. In any case, [zeroshot] whipped up their own stick from scratch for that added dose of realistic feedback in-game.

[zeroshot] designed a simple gimbal to allow the stick to move in two axes, relying primarily on 3D-printed components combined with a smattering of off-the-shelf bearings. For force feedback, an Arduino Micro drives a pair of stepper motors via TMC2208 stepper drivers, which can apply force to the stick on each axis through belt-driven pulleys. Meanwhile, the joystick’s position on each axis is tracked via magnetic encoders. The Arduino feeds this data to an attached computer by acting as a USB HID device.
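
At its core, the control loop is a software spring: read the stick’s deflection, then command a torque back towards centre in proportion. The sketch below illustrates that idea for one axis; readEncoderDegrees() and stepMotor() are hypothetical placeholders for the magnetic encoder driver and the TMC2208 step/dir interface, not functions from [zeroshot]’s firmware.

```cpp
const float SPRING_GAIN = 4.0f;  // "stiffness": correction steps per degree of deflection
const float DEADBAND    = 0.5f;  // degrees around centre where no force is applied

float readEncoderDegrees() {
  // Placeholder: a real build would query the magnetic encoder over SPI or I2C here.
  return 0.0f;
}

void stepMotor(int steps, bool towardsCentre) {
  // Placeholder: a real build would pulse the TMC2208 STEP pin 'steps' times,
  // with the DIR pin set according to 'towardsCentre'.
  (void)steps;
  (void)towardsCentre;
}

void setup() {}

void loop() {
  float deflection = readEncoderDegrees();
  if (fabs(deflection) > DEADBAND) {
    // Push back towards centre proportionally to the deflection,
    // so the stick feels as though it is mounted on a spring.
    int correction = (int)(SPRING_GAIN * fabs(deflection));
    stepMotor(correction, deflection > 0);
  }
}
```

Swapping the spring law for other force profiles (detents, rumble, stall resistance) is then just a matter of changing how the correction is computed.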

We’ve seen some other great advanced joystick projects over the years, too. Never underestimate how much a little haptic feedback can add to immersion.

youtube.com/embed/YdNP5jIJ0dU?…


hackaday.com/2025/10/30/build-…


The Time Of Year For Things That Go Bump In The Night


Each year around the end of October we feature plenty of Halloween-related projects, usually involving plastic skeletons and LED lights, or other fun tech for decorations to amuse kids. It’s a highly commercialised festival of pretend horrors which our society is content to wallow in, but beyond the plastic ghosts and skeletons there’s both a history and a subculture of the supernatural and the paranormal which has its own technological quirks. We’re strictly in the realm of the science here at Hackaday so we’re not going to take you ghost hunting, but there’s still an interesting journey to be made through it all.

Today: Fun For Kids. Back Then: Serious Business

A marble-carved skull on a 17th century monument in the church of St. Mary & St. Edburga, Stratton Audley, Oxfordshire. English churches abound with marble-carved symbols of death.
Halloween as we know it has its roots in All Hallows’ Eve, the day before the remembrance festivals of All Saints’ Day and All Souls’ Day in European Christianity. Though it has adopted a Christian dressing, its many trappings are thought to have their origin in pagan traditions such as, for those of us where this is being written, the Gaelic Samhain (pronounced something like “sow-ain”). The boundary between living and dead was thought to be particularly porous at this time of year, hence all the ghosts and other trappings of the season you’ll see today.

To grow up in a small English village as I did is to be surrounded by the remnants of ancient belief. These things survive from an earlier time, hundreds of years ago, when they were seen as very real indeed, as playground rhymes at the village school or hushed superstitions, such as that it would be bad luck to walk around the churchyard in an anticlockwise direction.

As a small child they formed part of the thrills and mild terrors of discovering the world around me, but of course decades later when it was my job to mow the grass and trim the overhanging branches in the same churchyard it mattered little which direction I piloted the Billy Goat. I was definitely surrounded by the mortal remains of a millennium’s worth of my neighbours, but I never had any feeling that they were anything but at peace.

Some Unexplained Phenomena Are Just That

A sliding stone in Death Valley, USA. A previously unexplained phenomenon in the appropriately named Death Valley. Jon Sullivan, Public domain.
So as you might expect, nothing has persuaded me to believe in ghosts. I can and have walked through an ancient churchyard at night as I grew up next to it, and never had so much as a creepy feeling. I do however believe in unexplained phenomena, but before you throw a book at your computer I mean it in the exact terms given: observable phenomena we know occur, but can’t immediately explain.

To illustrate, a good example of a believable unexplained phenomenon was those moving rocks in an American desert; they moved but nobody could explain how they did it. It’s now thought to be due to the formation of ice underneath them in certain meteorological circumstances, so that’s one that’s no longer unexplained.

As another slightly less cut-and-dried example there are enough credible reports of marsh lights to believe that they could exist, but the best explanation we have, of spontaneous combustion of high concentrations of organic decomposition products, remains for now a theory. I hope one day a scientist researching fenland ecosystems captures one on their instruments by chance, and we can at last confirm or deny it.
A collection of apparatus and cameras, sepia photo. The ghost hunting kit of 1920s paranormal investigator Harry Price. Harry Price, Public domain.
The trouble with unexplained phenomena is that there are folks who would prefer to explain them in their own way, because that’s what they want to believe. “I want to believe” is the slogan from the X-Files TV show for exactly that reason.

People who want a marsh light, or the sounds made by an old house as it settles under thermal contraction at night, to be made by a ghost are going to look for ghosts, and will clutch at anything which helps them “prove” their theories. In this they have naturally enlisted the help of technology, and thus there are all manner of gizmos taken into cemeteries or decaying mansions in the service of the paranormal. And of course in this we have the chance for some fun searching the web for electronic devices.

All The Fun Of Scam Devices


In researching this it’s been fascinating to see a progression of paranormal detection equipment over the decades, following the technological trends of the day. From early 20th century kits that resembled those used by detectives, to remote film cameras like the underwater Kodak Instamatic from a 1970s Nessie hunt we featured earlier this year, to modern multispectral imaging devices, with so much equipment thrown at the problem you’d expect at least one of them to have found something!
My cheap EM meter, a handheld rectangular black plastic device with an LCD display on top. I coulda found GHOSTS with this thing, had I only thought of it!
I’ve found that these instruments can be broadly divided into two camps: “normal” devices pressed into ghost-hunting service such as thermal cameras or audio recorders, and “special” instruments produced for the purpose. The results from either source may be digitally processed to “reveal” information, much in the manner of the famous “dead salmon paper“, which used an fMRI scan of a dead fish to make a sarcastic comment about some research methodologies.

I’ve even discovered that I may have inadvertently reviewed one a few years ago, a super-cheap electric field meter touted as helping prevent some medical conditions, which I found to be mostly useful for detecting cables in my walls. Surprisingly I found it to be well engineered and in principle doing what it was supposed to for such an instrument, but completely uncalibrated and fitted with an alarm that denounced the mildest of fields as lethal. At least it was a lot cheaper than an e-meter.

Tomorrow night, there will be those who put on vampire costumes to be shepherded around their neighbourhoods in search of candy, and somewhere in the quiet country churchyard of an Oxfordshire village, something will stir. Is it a spectre, taking advantage of their yearly opportunity for a sojourn in the land of the living? No, it’s a solitary fox, hoping to find some prey under the moonlight in the undergrowth dividing the churchyard from a neighbouring field.

Wherever you are, may your Halloween be a quiet and only moderately scary one.

Header: Godstone, Surrey: Gravestone with skull and bones by Dr Neil Clifton, CC BY-SA 2.0.


hackaday.com/2025/10/30/the-ti…


Why You Shouldn’t Trade Walter Cronkite for an LLM


Graph showing accuracy vs model

Has anyone noticed that news stories have gotten shorter and pithier over the past few decades, sometimes seeming like summaries of what you used to peruse? In spite of that, huge numbers of people are relying on large language model (LLM) “AI” tools to get their news in the form of summaries. According to a study by the BBC and European Broadcasting Union, 47% of people find news summaries helpful. Over a third of Britons say they trust LLM summaries, and they probably ought not to, according to the beeb and co.

It’s a problem we’ve discussed before: as OpenAI researchers themselves admit, hallucinations are unavoidable. This more recent BBC-led study took a microscope to LLM summaries in particular, to find out how often and how badly they were tainted by hallucination.

Not all of those errors were considered a big deal, but in 20% of cases (on average) there were “major issues”, though that’s more-or-less independent of which model was being used. If there’s good news here, it’s that those numbers are better than they were when the beeb last performed this exercise earlier in the year. The whole report is worth reading if you’re a toaster-lover interested in the state of the art. (Especially if you want to see if this human-produced summary works better than an LLM-derived one.) If you’re a luddite, by contrast, you can rest easy that your instincts not to trust clanks remain reasonable… for now.

Either way, for the moment, it might be best to restrict the LLM to game dialog, and leave the news to totally-trustworthy humans who never err.


hackaday.com/2025/10/30/why-yo…


Atroposia: la piattaforma MaaS che fornisce un Trojan munito di scanner delle vulnerabilità


I ricercatori di Varonis hanno scoperto la piattaforma MaaS (malware-as-a-service) Atroposia. Per 200 dollari al mese, i suoi clienti ricevono un Trojan di accesso remoto con funzionalità estese, tra cui desktop remoto, gestione del file system, furto di informazioni, credenziali, contenuto degli appunti, wallet di criptovalute, dirottamento DNS e uno scanner integrato per le vulnerabilità locali.

Secondo gli analisti, Atroposia ha un’architettura modulare. Il malware comunica con i server di comando e controllo tramite canali crittografati ed è in grado di bypassare il Controllo Account Utente (UAC) per aumentare i privilegi in Windows.

Una volta infettato, fornisce un accesso persistente e non rilevabile al sistema della vittima. I moduli chiave di Atroposia sono:

HRDP Connect avvia una sessione di desktop remoto nascosta in background, consentendo agli aggressori di aprire applicazioni, leggere documenti ed e-mail e, in generale, interagire con il sistema senza alcun segno visibile di attività dannosa. I ricercatori sottolineano che gli strumenti standard di monitoraggio dell’accesso remoto potrebbero “non rilevare” questa attività.

Il file manager funziona come un familiare Esplora risorse di Windows: gli aggressori possono visualizzare, copiare, eliminare ed eseguire i file. Il componente grabber cerca i dati per estensione o parola chiave, li comprime in archivi ZIP protetti da password e li invia al server di comando e controllo utilizzando metodi in-memory, riducendo al minimo le tracce dell’attacco sul sistema.

Stealer raccoglie dati di accesso salvati, dati del portafoglio di criptovalute e file di chat. Il gestore degli appunti intercetta tutto ciò che l’utente copia (password, chiavi API, indirizzi del portafoglio) in tempo reale e lo conserva per gli aggressori.

Il modulo di spoofing DNS sostituisce i domini con gli indirizzi IP degli aggressori a livello di host, reindirizzando silenziosamente le vittime verso server controllati dagli hacker. Questo apre le porte a phishing, attacchi MitM, falsi aggiornamenti, iniezione di adware o malware e furto di dati tramite query DNS.

Lo scanner di vulnerabilità integrato analizza il sistema della vittima alla ricerca di vulnerabilità non corrette, impostazioni non sicure e software obsoleto. I risultati vengono inviati agli operatori di malware sotto forma di punteggio, che gli aggressori possono utilizzare per pianificare ulteriori attacchi.

I ricercatori avvertono che questo modulo è particolarmente pericoloso negli ambienti aziendali: il malware potrebbe rilevare un client VPN obsoleto o una vulnerabilità di escalation dei privilegi, che può quindi essere sfruttata per ottenere informazioni più approfondite sull’infrastruttura della vittima. Inoltre, lo scanner analizza i sistemi vulnerabili nelle vicinanze per rilevare eventuali movimenti laterali.

Varonis osserva che Atroposia prosegue la tendenza verso la democratizzazione del crimine informatico.

Insieme ad altre piattaforme MaaS (come SpamGPT e MatrixPDF), riduce la barriera tecnica all’ingresso, consentendo anche ad aggressori poco qualificati di condurre efficaci “attacchi in abbonamento”.

L'articolo Atroposia: la piattaforma MaaS che fornisce un Trojan munito di scanner delle vulnerabilità proviene da Red Hot Cyber.


Self-Driving Cars and the Fight Over the Necessity of Lidar


If you haven’t lived underneath a rock for the past decade or so, you will have seen a lot of arguing in the media by prominent figures and their respective fanbases about what the right sensor package is for autonomous vehicles, or ‘self-driving cars’ in popular parlance. As the task here is to effectively replicate what is achieved by the human Mark 1 eyeball and associated processing hardware in the evolutionary layers of patched-together wetware (‘human brain’), it might seem tempting to think that a bunch of modern RGB cameras and a zippy computer system could do the same vision task quite easily.

This is where reality throws a couple of curveballs. Although RGB cameras lack evolutionary glitches like an inverted image sensor and a big dead spot where the optic nerve punches through said sensor layer, it turns out that the preprocessing performed in the retina, the processing in the visual cortex and the analysis in the rest of the brain are really quite good at detecting objects, no doubt helped by millions of years in which only those who managed to not get eaten by predators procreated in significant numbers.

Hence the solution of sticking something like a Lidar scanner on a car makes a lot of sense. Not only does this provide advanced details on one’s surroundings, but also isn’t bothered by rain and fog the way an RGB camera is. Having more and better quality information makes subsequent processing easier and more effective, or so it would seem.

Computer Vision Things

A Waymo Jaguar I-Pace car in San Francisco. (Credit: Dllu, Wikimedia)
Giving machines the ability to see and recognize objects has been a dream for many decades, and the subject of nearly an infinite number of science-fiction works. For us humans this ability is developed over the course of our development from a newborn with a still developing visual cortex, to a young adult who by then has hopefully learned how to identify objects in their environment, including details like which objects are edible and which are not.

As it turns out, just the first part of that challenge is pretty hard, with interpreting a scene as captured by a camera subject to many possible algorithms that seek to extract edges, infer connections based on various hints as well as the distance to said object and whether it’s moving or not. All just to answer the basic question of which objects exist in a scene, and what they are currently doing.

Approaches to object detection can be subdivided into conventional and neural network approaches, with methods employing convolutional neural networks (CNNs) being the most prevalent these days. These CNNs are typically trained with a dataset that is relevant to the objects that will be encountered, such as while navigating in traffic. This is what is used for autonomous cars today by companies like Waymo and Tesla, and is why they need to have both access to a large dataset of traffic videos to train with, as well as a large collection of employees who watch said videos in order to tag as many objects as possible. Once tagged and bundled, these videos then become CNN training data sets.

This raises the question of how accurate this approach is. With purely RGB camera images as input, the answer appears to be ‘sorta’. Although only considered to be a Level 2 autonomous system according to the SAE’s 0-5 rating system, Tesla vehicles with the Autopilot system installed have failed to recognize hazards on multiple occasions, including the side of a white truck in 2016 and a concrete barrier between a highway and an offramp in 2018, as well as running a red light and rear-ending a fire truck in 2019.

This pattern continues year after year, with the Autopilot system failing to recognize hazards and engaging the brakes, including in so-called ‘Full-Self Driving’ (FSD) mode. In April of 2024, a motorcyclist was run over by a Tesla in FSD mode when the system failed to stop, but instead accelerated. This made it the second fatality involving FSD mode, with the mode now being called ‘FSD Supervised’.

Compared to the considerably less crash-prone Level 4 Waymo cars with their hard to miss sensor packages strapped to the car, one could conceivably make the case that perhaps just a couple of RGB cameras is not enough for reliable object detection, and that quite possibly blending of sensors is a more reliable method for object detection.

Which is not to say that Waymo cars are perfect, of course. In 2024 one Waymo car managed to hit a utility pole at low speeds during a pullover maneuver, when the car’s firmware incorrectly assessed its response to a situation where a ‘pole-like object’ was present, but without a hard edge between said pole and the road.

This gets us to the second issue with self-driving cars: taking the right decision when confronted with a new situation.

Acting On Perception

The Tesla Hardware 4 mainboard with its redundant custom SoCs. (Credit: Autopilotreview.com)
Once you know what objects are in a scene and merge this with the known state of the vehicle, the next step for an autonomous vehicle is to decide what to do with this information. Although the tempting answer might be to also use ‘something with neural networks’ here, this has turned out to be a non-viable method. Back in 2018 Waymo created a recurrent neural network (RNN) called ChauffeurNet which was trained on both real-life and synthetic driving data to have it effectively imitate human drivers.

The conclusion of this experiment was that while deep learning has a place here, you need to lean mostly on a solid body of rules that provides it with explicit reasoning that copes better with what is called the ‘long tail’ of possible situations, as you cannot put every conceivable situation in a data set.

This thus again turns out to be a place where human input and intelligence are required, as while an RNN or similar can be trained on an impressive data set, it will never be able to learn the reasons why a decision was made in a training video, nor provide its own reasoning and make reasonable adaptations when faced with a new situation. This is where human experts have to define explicit rules, taking into account the known facts about the current surroundings and state of the vehicle.

Here is where having details like the explicit distance to an obstacle, its relative speed and dimensions, as well as the room available to divert to prevent a crash, is not just nice to have. Adding sensors like radar and Lidar provides solid data that an RGB camera plus CNN may also provide if you’re lucky, but maybe not quite. When you’re talking about highway speeds and potentially the lives of multiple people at risk, certainty always wins out.

Tesla Hardware And Sneaky Radars

Arbe Phoenix radar module installed in a Tesla car as part of the Hardware 4 Autopilot hardware. (Credit: @greentheonly, Twitter)Arbe Phoenix radar module installed in a Tesla car as part of the Hardware 4 Autopilot hardware. (Credit: @greentheonly, Twitter)
One of the poorly kept secrets about Tesla’s Autopilot system is that it’s had a front-facing radar sensor for most of the time. Starting with Hardware 1 (HW1), it featured a single front-facing camera behind the top of the windshield and a radar behind the lower grille, in addition to 12 ultrasonic sensors around the vehicle.

Notably, Tesla did not initially use the radar in a primary object detection role here, meaning that object detection and emergency stop functionality were performed using the RGB cameras. This changed after the RGB camera system failed to notice a white trailer against a bright sky, resulting in a spectacular crash. The subsequent firmware update gave the radar system the same role as the camera system, which likely would have prevented that particular crash.

HW1 used Mobileye’s EyeQ3, but after Mobileye cut ties with Tesla, NVidia’s Drive PX 2 was used instead for HW2. This upped the number of cameras to eight, providing a surround view of the car’s surroundings, with a similar forward-facing radar. After an intermediate HW2.5 revision, HW3 was the first to use a custom processor, featuring twelve Arm Cortex-A72 cores clocked at 2.6 GHz.

HW3 initially also had a radar sensor, but in 2021 this was eliminated with the ‘Tesla Vision’ system, which resulted in a significant uptick in crashes. In 2022 it was announced that the ultrasonic sensors for short-range object detection would be removed as well.

Then in January of 2023 HW4 started shipping, with even more impressive computing specs and 5 MP cameras instead of the previous 1.2 MP ones. This revision also reintroduced the forward-facing radar, apparently the Arbe Phoenix radar with a 300 meter range, but not in the Model Y. This indicates that RGB camera-only perception is still the primary mode for Tesla cars.

Answering The Question


At this point we can say with a high degree of certainty that by just using RGB cameras it is exceedingly hard to reliably stop a vehicle from smashing into objects, for the simple reason that you are reducing the amount of reliable data that goes into your decision-making software. While the object-detecting CNN may give a 29% probability of an object being right up ahead, the radar or Lidar will have told you that a big, rather solid-looking object is lying on the road. Your own eyes would have told you that it’s a large piece of concrete that fell off a truck in front of you.

This then mostly leaves the question of whether the front-facing radar that’s present in at least some Tesla cars is about as good as the Lidar contraption that’s used by other car manufacturers like Volvo, as well as the roof-sized version by Waymo. After all, both work according to roughly the same basic principles.

That said, Lidar is superior when it comes to aspects like accuracy, as radar uses longer wavelengths. At the same time a radar system isn’t bothered as much by weather conditions, while generally being cheaper. For Waymo the choice for Lidar over radar comes down to this improved detail, as they can create a detailed 3D image of the surroundings, down to the direction that a pedestrian is facing, and hand signals by cyclists.

Thus the shortest possible answer is that yes, Lidar is absolutely the best option, while radar is a pretty good option to at least not drive into that semitrailer and/or pedestrian. Assuming your firmware is properly configured to act on said object detection, natch.


hackaday.com/2025/10/30/self-d…


0days as weapons: he sold 8 US defense 0day exploits to Moscow


Peter Williams, a former employee of a defense contractor, has pleaded guilty in a US federal court to two counts of trade secret theft, admitting that he sold eight zero-day vulnerabilities to a Russian broker for millions of dollars in cryptocurrency.

According to the case file, Williams, 39, who worked for a subsidiary of a company called Trenchant, illegally copied internal software components created exclusively for the US government and its allies over the course of three years. He resold these tools, designed for cyber operations, to a broker that openly presents itself as a supplier of exploits to various clients.

The investigation established that the transactions took place over encrypted communication channels from 2022 until this year. Williams entered into contracts with the intermediary, referred to in court documents as “Company No. 3”, and received payments in cryptocurrency, part of which he then spent on luxury goods.

During the hearing, prosecutors clarified that this designation refers to Operation Zero, a platform that describes itself as “the only official marketplace for purchasing zero-day vulnerabilities”.

Prosecutors cited a social media post by Operation Zero in which the company offered millions of dollars for iOS and Android exploits, stressing that the end customer was a “non-NATO country”. According to the prosecution, this wording matches the text of an advertisement published in 2023.

L’agenzia ha riferito che Williams aveva precedentemente prestato servizio presso l’Australian Signals Directorate e poi era stato trasferito a Trenchant, dove aveva avuto accesso a software sviluppati per operazioni informatiche di sicurezza nazionale. Fu durante questo periodo che rubò il codice sorgente e gli sviluppi interni.

Il Dipartimento di Giustizia ha stimato il danno subito dall’appaltatore della difesa in 35 milioni di dollari, affermando che il trasferimento di strumenti così sofisticati avrebbe potuto fornire ad attori stranieri i mezzi per condurre attacchi informatici contro “numerose vittime ignare”.

Ogni accusa prevede una pena massima di 10 anni di carcere e una multa fino a 250.000 dollari o il doppio dell’importo dei profitti illeciti. Secondo le linee guida federali, il giudice Lauren Alikhan determinerà una pena compresa tra sette anni e tre mesi e nove anni. Williams sarà inoltre condannato a pagare una multa fino a 300.000 dollari e a un risarcimento di 1,3 milioni di dollari. Williams è stato posto agli arresti domiciliari in attesa della sentenza, prevista per gennaio.

Il Dipartimento di Giustizia ha definito le azioni di Williams “un tradimento degli interessi degli Stati Uniti e del suo stesso datore di lavoro”, sottolineando la natura deliberata del crimine. La Procura ha osservato che i trafficanti di exploit internazionali stanno diventando “una nuova generazione di trafficanti d’armi” e ha sottolineato che indagare su casi simili contro insider e intermediari rimane una priorità per le agenzie di intelligence.

The article 0days as weapons: he sold 8 US defense 0day exploits to Moscow comes from Red Hot Cyber.


Why Sodium-Ion Batteries are Terrible for Solar Storage


These days just about any battery storage solution connected to PV solar or similar uses LiFePO4 (LFP) batteries. The reason for this is obvious: they have a very practical charge and discharge curve that chargers and inverters love, along with great round trip efficiency. Meanwhile some are claiming that sodium-ion (Na+) batteries would be even better, but this is not borne out by the evidence, with [Will Prowse] testing and tearing down an Na+ battery to prove the point.
The OCV curve for LFP vs Na+ batteries.
The Hysincere brand battery that [Will] has on the test bench claims a nominal voltage of 12 V and a 100 Ah capacity, which all appears to be in place based on the cells found inside. The lower nominal voltage compared to LFP’s 12.8 V is only part of the picture, as can be seen in the OCV curve. Virtually all of LFP’s useful capacity is found in a very narrow voltage band, with significant excursions only when reaching around >98% or <10% state of charge.

What this means is that with existing chargers and inverters, there is a whole chunk of the Na+ discharge curve that’s impossible to use, and chargers will refuse to charge Na+ batteries that are technically still healthy due to the low cell voltage. In numbers, this means that [Will] got a capacity of 82 Ah out of this particular 100 Ah battery, despite the battery costing twice that of a comparable LFP one.

Yet even after correcting for that, the internal resistance of these Na+ batteries appears to be significantly higher, giving a round trip efficiency of 60% to 92%, which is a far cry from the 95% to 99% of LFP. Until things change here, [Will] doesn’t see much of a future for Na+ beyond perhaps grid-level storage and as a starter battery for very cold climates.
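
A quick back-of-the-envelope comparison makes the gap concrete. The numbers below come from the figures above, with the mid-points of the quoted efficiency ranges taken as assumptions.

```cpp
#include <cstdio>

int main() {
    // LFP: 12.8 V nominal, the full 100 Ah usable, ~97% round trip (middle of 95-99%).
    const double lfpWh = 12.8 * 100.0 * 0.97;

    // Na+: 12 V nominal, only 82 Ah reachable with existing chargers and inverters,
    // ~76% round trip (middle of the measured 60-92% spread).
    const double naWh = 12.0 * 82.0 * 0.76;

    std::printf("LFP usable round-trip energy: %.0f Wh\n", lfpWh);  // about 1242 Wh
    std::printf("Na+ usable round-trip energy: %.0f Wh\n", naWh);   // about 748 Wh
    return 0;
}
```

That works out to roughly 40% less energy returned per cycle, from a battery costing about twice as much.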

youtube.com/embed/zSPSCC3_hHw?…


hackaday.com/2025/10/30/why-so…


How Simple Can A Superhet Be


If you cultivate an interest in building radios it’s likely that you’ll at some point make a simple receiver. Perhaps a regenerative receiver, or maybe a direct conversion design; it’ll take a couple of transistors or maybe some simple building-block analogue ICs. More complex designs for analogue radios require far more devices; if you’re embarking on a superheterodyne receiver, in which an oscillator and mixer are used to generate an intermediate frequency, then you know it’ll be a hefty project. [VK3YE] is here to explode that assumption, with a working AM broadcast band superhet that uses only two transistors.
The circuit diagram of the radio. It doesn’t get much simpler than this.
A modern portable radio will almost certainly use an all-in-one SDR-based chip, but in the golden age of the transistor radio the first stage of the receiver would be a single transistor that was simultaneously RF amplifier, oscillator, and mixer. The circuit in the video below does this, with a ferrite rod, the familiar red-cored oscillator coil, and a yellow-cored IF transformer filtering out the 455 kHz mixer product between oscillator and signal.

There would normally follow at least one more transistor amplifying the 455 kHz signal, but instead the next device is both a detector and an audio amplifier. Back in the day that would have been a germanium point contact diode, but now the transistor has a pair of 1N4148s in its biasing. We’re guessing this applies a DC bias to counteract the relatively high forward voltage of a silicon diode, but we could be wrong.

We like this radio for its unexpected simplicity and clever design, but also because he’s built it spiderweb-style. We never expected to see a superhet this simple, and even if you have no desire to build a radio we hope you’ll appreciate the ingenuity of using simple transistors to the max.

youtube.com/embed/on9My8dMfbU?…


hackaday.com/2025/10/30/how-si…


The SEO of deception! The ghost network uncovered by RHC that penalizes the SERP


RHC analysis of the “BHS Links” network and of global automated Black Hat SEO infrastructures

An internal Red Hot Cyber analysis of its own domain has brought to light a global Black Hat SEO network dubbed “BHS Links”, capable of manipulating Google’s algorithms through automated backlinks and synthetic content.

Many of these sites, hosted on proxy networks distributed across Asia, generated automated backlinks and synthetic content with the goal of manipulating search engine ranking algorithms.

These infrastructures combined rotating IPs, residential proxies and publishing bots to simulate traffic and authority signals, a strategy designed to make the attack indistinguishable from organic activity and to evade the search engines’ automated checks.

From the Asian infrastructures to “BHS Links”


During the investigation, however, among the various clusters observed one in particular stood out for its size, consistency and operational persistence: the non-Asian network dubbed “BHS Links”, active since at least May 2025.

Unlike the fragmented Asian groups, BHS Links presents itself as a structured “Black Hat SEO as a Service” ecosystem, exploiting automation, anti-forensic techniques and compromised domains to sell temporary rankings to anonymous customers in various sectors, often ones with a high reputational risk (betting, pharma, trading, adult).

Architecture and domains involved


The BHS Links infrastructure comprises dozens of coordinated domains, including:

  • bhs-links-anchor.online
  • bhs-links-ass.online
  • bhs-links-boost.online
  • bhs-links-blast.online
  • bhs-links-blastup.online
  • bhs-links-crawlbot.online
  • bhs-links-clicker.online
  • bhs-links-edge.online
  • bhs-links-elite.online
  • bhs-links-expert.online
  • bhs-links-finder.online
  • bhs-links-fix.online
  • bhs-links-flux.online
  • bhs-links-family.online
  • bhs-links-funnel.online
  • bhs-links-genie.online
  • bhs-links-hub.online
  • bhs-links-hubs.online
  • bhs-links-hive.online
  • bhs-links-info.online
  • bhs-links-insight.online
  • bhs-links-keyword.online
  • bhs-links-launch.online
  • bhs-links-move.online
  • bhs-links-net.online
  • bhs-links-power.online
  • bhs-links-pushup.online
  • bhs-links-rankboost.online
  • bhs-links-rise.online
  • bhs-links-signal.online
  • bhs-links-snap.online
  • bhs-links-spark.online
  • bhs-links-stack.online
  • bhs-links-stacker.online
  • bhs-links-stats.online
  • bhs-links-storm.online
  • bhs-links-strategy.online
  • bhs-links-target.online
  • bhs-links-traffic.online
  • bhs-links-vault.online
  • bhs-links-zone.online

Each domain acts as a redistribution node: it aggregates backlinks, generates new pages, replicates HTML code from legitimate sites and points back to the official Telegram channel t.me/bhs_links.

Many of the domains are protected by Cloudflare and hosted on offshore servers, making them hard to trace. Forensic logs also show selective filtering of Googlebot and patterns of deliberate cloaking.

Active cloaking detected on bhs-links-blaze.online


A test run by RHC with curl, using different User-Agent strings, revealed selective cloaking behavior, a practice forbidden by the Google Search Essentials.

C:\Users\OSINT>curl -I -A "Googlebot/2.1 (+google.com/bot.html)" bhs-links-blaze.online
HTTP/1.1 403 Forbidden
Server: cloudflare

C:\Users\OSINT>curl -I -A "Mozilla/5.0 (Windows NT 10.0; Win64; x64)" bhs-links-blaze.online
HTTP/1.1 200 OK
Server: cloudflare


The site deliberately blocks Google’s crawlers, making its promotional content invisible in order to avoid penalties. The Cloudflare rule looks something like:

Rule: Block Googlebot
Condition: (http.user_agent contains “Googlebot”)
Action: Block


From a forensic standpoint this is a deliberate anti-forensic technique, useful for evading Google’s automated checks, hiding the network of customers and artificially generated backlinks, and hindering crawl-based OSINT analysis.
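This kind of dual User-Agent check is easy to automate. Below is a minimal Python sketch of the idea (not RHC’s tooling; the target URL and the use of the third-party requests library are assumptions) that fetches a page with two different User-Agent strings and flags diverging status codes or body hashes:

import hashlib
import requests  # third-party: pip install requests

UA_GOOGLEBOT = "Googlebot/2.1 (+http://www.google.com/bot.html)"
UA_BROWSER = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"

def fetch(url: str, user_agent: str):
    """Return (status_code, sha256 of body) for a given User-Agent."""
    r = requests.get(url, headers={"User-Agent": user_agent}, timeout=15)
    return r.status_code, hashlib.sha256(r.content).hexdigest()

def check_cloaking(url: str) -> None:
    bot_status, bot_hash = fetch(url, UA_GOOGLEBOT)
    usr_status, usr_hash = fetch(url, UA_BROWSER)
    if bot_status != usr_status or bot_hash != usr_hash:
        print(f"[!] possible cloaking on {url}: "
              f"bot={bot_status}/{bot_hash[:12]} user={usr_status}/{usr_hash[:12]}")
    else:
        print(f"[ok] {url} serves identical responses")

check_cloaking("https://example.com/")  # hypothetical target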

Italian targets and the exploitation of “local trust”


While analyzing the source code of several BHS domains, RHC detected hundreds of links to legitimate Italian sites, including:

Ansa, repubblica.it, gazzetta.it, fanpage.it, legaseriea.it, adm.gov.it, gdf.gov.it, liceoissel.edu.it, meteofinanza.com, aranzulla.it, superscudetto.sky.it.

All of the domains mentioned are passive victims of algorithmic citation and are not involved in any illicit activity.

These websites are not compromised: they are cited passively in order to exploit their reputation. It is a strategy based on so-called semantic trust, where the mere co-occurrence of a trustworthy site and a malicious domain leads the algorithm to interpret the latter as credible. In other words, BHS Links does not hack the sites, it uses them as reputational spotlights. It is a tactic that lets customers obtain temporary ranking boosts, above all in the gambling, forex and adult sectors.

How the links are hidden


A recurring element appears in the source code of the pages analyzed: a list wrapped in a <ul style="display:none"> block. This HTML/CSS syntax literally means “create an unordered list, but do not show it to the user”: the browser receives the markup but does not render it, because the CSS rule display:none suppresses the element and everything inside it.

At first glance it may look harmless, but it is actually one of the most insidious tactics of semantic cloaking: the links are made invisible to human visitors, yet they remain in the source and are therefore readable by search engine crawlers.

In this way the BHS Links network injects dozens of hidden references to external domains, forums, online casinos and affiliate sites, all carrying the signature “TG @BHS_LINKS – BEST SEO LINKS – https://t.me/bhs_links”. The server can serve two versions of the same page, a public, “clean” one for users and one meant for bots, or it can leave the same HTML which, although hidden via CSS, still gets indexed as shadow content: ghost content that lives in the code but not on the visible page.

Googlebot and other crawlers parse the source and its links even when they are hidden via CSS; as a result the invisible references are interpreted as signals of co-occurrence and authority, lending the malicious domain a false credibility. In practical terms, BHS Links thus builds an artificial reputational bridge between its own domains and real portals (news outlets, regulated sites, authoritative blogs). To the user everything looks normal; to the algorithm it looks like a network rich in thematic, authoritative links. It is precisely this discrepancy, between what a human sees and what the algorithm interprets, that makes semantic poisoning so effective and so hard to detect.
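Hidden blocks like these are easy to surface from a locally saved copy of a page. The sketch below is a minimal illustration (it assumes the third-party BeautifulSoup library and a hypothetical capture file named normal.html) that lists every link sitting inside an element styled with display:none:

from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

def hidden_links(html: str):
    """Yield (href, text) for every <a> inside an element hidden with display:none."""
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup.find_all(style=True):
        style = tag["style"].replace(" ", "").lower()
        if "display:none" in style:
            for a in tag.find_all("a", href=True):
                yield a["href"], a.get_text(strip=True)

with open("normal.html", encoding="utf-8", errors="replace") as f:  # hypothetical capture
    for href, text in hidden_links(f.read()):
        print(f"hidden link -> {href} ({text})")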

The evidence of the semantic deception: beyond the injection


In all the cases analyzed, after the code injection already described, the technical evidence converges on two further recurring clues that complete the triad of semantic deception:

  • different hashes between the “normal” version and the one served to Googlebot,
  • semantic rotation of the CMS’s dynamic blocks

Taken together, these elements constitute the recurring technical signature of the BHS Links operation.

The diverging hashes


The SHA-256 hashes computed for each file precisely confirm the semantic manipulation.
In the sample case, the recorded values show two distinct versions of the same page:

  • 2C65F50C023E58A3E8E978B998E7D63F283180495AC14CE74D08D96F4BD81327 – normal.html, the version served to the real user
  • 6D9127977AACF68985B9EF374A2B4F591A903F8EFCEE41512E0CF2F1EDBBADDE – googlebot.html, the version served to Google’s crawler

The discrepancy between the two hashes is the most direct proof of active cloaking: the server returns two different HTML documents depending on who makes the request.
The file diff.txt, with hash FF6B59BB7F0C76D63DDA9DFF64F36065CB2944770C0E0AEBBAF75AD7D23A00C6, documents the lines that actually differ between the two versions, constituting the forensic trace of the manipulation.
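The hash-and-diff workflow described above can be reproduced with standard tooling. Here is a minimal Python sketch under the assumption that the two captures are saved locally with the same file names used in the analysis:

import difflib
import hashlib
from pathlib import Path

def sha256_of(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest().upper()

for name in ("normal.html", "googlebot.html"):
    print(f"{sha256_of(name)}  {name}")

# Write the line-level differences to diff.txt, the forensic trace of the cloaking
normal = Path("normal.html").read_text(encoding="utf-8", errors="replace").splitlines()
bot = Path("googlebot.html").read_text(encoding="utf-8", errors="replace").splitlines()
diff = difflib.unified_diff(normal, bot, fromfile="normal.html", tofile="googlebot.html", lineterm="")
Path("diff.txt").write_text("\n".join(diff), encoding="utf-8")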

This, by contrast, is how one of the cited sites appears: left intact and not altered by cloaking.


Semantic rotation: the invisible rewrite


After the hash check, analysis of the code reveals a further manipulation strategy: semantic rotation of the content.

In this scheme, the Bitrix24 CMS generates dynamic blocks with different IDs depending on the user agent. The files normal.html and googlebot.html show the same content but in inverted order, a semantic rotation that changes the logical priority of the internal links. In Googlebot’s eyes the site appears semantically rewritten: some sections, often the ones containing hidden references to the BHS Links brand, gain more weight in the semantic graph, influencing the authority assessment. It is an invisible yet precise manipulation that acts on the algorithm’s cognitive hierarchy.

To verify the anomaly, RHC compared the two locally captured versions of several sites: normal.html (real user) and googlebot.html (Google crawler).
In the code served to Googlebot, different CMS-generated section IDs appear, such as helpdesk_article_sections_lGqiW and helpdesk_article_sections_0A6gh, while in the normal version the same blocks take on different IDs, for example C7TgM and pAZJs.

This variation does not change the visual appearance of the page, but it alters the logical structure read by the search engine: Googlebot interprets the content with a different hierarchy, assigning greater relevance to certain internal links. This is the mechanism of semantic rotation: an invisible rewrite that steers the algorithmic understanding of the page.

In addition, the bot version of the code contains a line that does not exist in the normal file:
form.setProperty("url_page","https://helpdesk.bitrix24.it/open/19137184/,TG @BHS_LINKS - BEST SEO LINKS - https://t.me/bhs_links")
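One quick way to spot this kind of ID and ordering drift is to extract the generated section IDs from both captures and compare the sequences. The sketch below is only an illustration: the ID prefix follows the examples above, and the capture file names are assumed:

import re
from pathlib import Path

ID_PATTERN = re.compile(r"helpdesk_article_sections_[A-Za-z0-9]+")

def section_ids(path: str) -> list[str]:
    """Return CMS-generated section IDs in the order they appear in the file."""
    return ID_PATTERN.findall(Path(path).read_text(encoding="utf-8", errors="replace"))

normal_ids = section_ids("normal.html")
bot_ids = section_ids("googlebot.html")

if normal_ids != bot_ids:
    print("[!] section IDs or their ordering differ between the two versions")
    print("    user     :", normal_ids)
    print("    googlebot:", bot_ids)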

Semantic theft: when Black Hat SEO becomes a reputational weapon


The same semantic manipulation techniques used by the BHS Links network, when turned against legitimate domains, become Negative SEO: a reputational weapon capable of contaminating search results, duplicating content and leading Google’s algorithm to devalue the original source.

The Red Hot Cyber case


During the analysis, RHC documented the duplication of its institutional headline

“La cybersecurity è condivisione. Riconosci il rischio, combattilo, condividi le tue esperienze ed incentiva gli altri a fare meglio di te.” (“Cybersecurity is sharing. Recognize the risk, fight it, share your experiences and encourage others to do better than you.”)

This sentence, which belongs to the official Red Hot Cyber portal, appeared on spam portals and compromised domains of various origins, placed next to pornographic or clickbait titles.

The evidence collected shows Google results such as:

  • peluqueriasabai.es – Donna cerca uomo Avezzano contacted the booker and set up
  • restaurantele42.fr – Emiok OnlyFans porn I have seen a few delightful FBSM
  • lucillebourgeon.fr – La Camila Cruz sex total GFE and I walked away as super
  • benedettosullivan.fr – Baad girl Sandra her images caught my eye and I had time
  • serrurier-durand.fr – Sexs web the girl is a striking bisexual African American

In every case, the description under the title reproduced Red Hot Cyber’s text, creating a paradoxical effect: pornographic or spam content presented with the tone of a trustworthy cybersecurity outlet.

This mechanism is the heart of semantic theft: Google’s algorithm automatically combines title and description on the basis of semantic cues, generating hybrid, apparently credible results.
In this way, real brands and authoritative sentences become involuntary reputational bait used to push malicious networks up the rankings.

In the Red Hot Cyber case, the original sentence was extracted from the main domain, indexed in cache and reused to build fake authority snippets that reinforce the compromised sites’ image of trustworthiness.

It is a third-generation form of Negative SEO: it does not directly destroy the target site, but reuses its identity to deceive the ranking algorithms and, with them, the very perception of digital reputation.

The second layer of the deception: the TDS circuit


Behind the semantic theft lies a deeper, functional structure: the Traffic Direction System (TDS) of the BHS Links network.
Analysis of the HTML dumps and of the decoded Base64 strings made it possible to trace this infrastructure, designed to route and monetize the traffic manipulated through SEO.

The redirects identified point to a stable group of domains that forms the heart of the network’s dating-affiliate circuit, active for months and already observed in international contexts.

Among the main ones, seekfinddate.com acts as the central routing node, present in almost all of the dumps analyzed.
From there, traffic is directed to romancetastic.com, singlegirlsfinder.com, finddatinglocally.com, sweetlocalmatches.com and luvlymatches.com, which operate as landing pages for affiliate networks traceable to circuits such as Traffic Company, AdOperator and ClickDealer.

Linking these layers are bridge domains such as go-to-fl.com, bt-of-cl.com and bt-fr-cl.com, which mask the redirects and often rely on Cloudflare to hide the origin of the traffic.
The chain is completed by alternative front ends such as mydatinguniverse.com, chilloutdate.com, privatewant.com and flirtherher.com, which redirect dynamically based on the user’s IP address, language or device.

In practice, the compromised or synthetic pages of the BHS network include encrypted redirects that lead first to the TDS nodes and then to the affiliate landing pages or dating-themed scams.
Analysis of the parameters (tdsid, click_id, utm_source, __c) confirms the typical affiliate tracking scheme: a BHS page, a TDS domain (for example seekfinddate.com), and finally a commercial or fraudulent landing page.
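Those tracking parameters are easy to pull out of captured redirect chains with nothing but the Python standard library; the URL in the sketch below is made up for illustration:

from urllib.parse import urlparse, parse_qs

TRACKING_KEYS = {"tdsid", "click_id", "utm_source", "__c"}

def affiliate_params(url: str) -> dict:
    """Return the affiliate-tracking parameters found in a redirect URL."""
    qs = parse_qs(urlparse(url).query)
    return {k: v for k, v in qs.items() if k in TRACKING_KEYS}

# Made-up example of a redirect captured from a BHS-style page
sample = "https://seekfinddate.com/land?tdsid=42&click_id=abc123&utm_source=bhs&__c=99"
print(affiliate_params(sample))
# {'tdsid': ['42'], 'click_id': ['abc123'], 'utm_source': ['bhs'], '__c': ['99']}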

Many of these domains are hosted on Cloudflare (AS13335) or on offshore servers in the Netherlands, Cyprus or Panama, with very low TTLs and recent registrations, an operational signature typical of SEO cloaking networks.

Cross-referencing IP addresses and autonomous systems (ASNs) confirms the infrastructural overlap between the two layers of the network.
The domains of the “dating-affiliate” circuit, such as seekfinddate.com, romancetastic.com, singlegirlsfinder.com and mydatinguniverse.com, are hosted on Amazon AWS (AS16509), while the domains of the BHS Links network, such as bhs-links-zone.online, bhs-links-anchor.online and bhs-links-suite.online, are served by Cloudflare (AS13335).

This dual architecture suggests a precise division of roles: Amazon hosts the routing and monetization nodes, while Cloudflare provides obfuscation and persistence for the SEO domains.
The repetition of the same IP blocks and the overlap between ASNs show that this is a coordinated infrastructure, in which reputation is manipulated on one front and monetized on the other.

Related case: backlinks.directory and automated indexes


The investigation also surfaced the domain backlinks.directory (a mirror of backlinksources.com), a portal that publishes automated lists of more than a million domains, organized in numbered blocks of 1000 records each (e.g. domain-list-403, domain-list-404).

Technical checks carried out with different User-Agent strings (Googlebot and a standard browser) returned an HTTP 200 OK response in both cases, ruling out cloaking or selective blocking. The domain is fully accessible and browsable even by scanning bots, suggesting a role as an automated archive rather than as anti-forensic infrastructure.

The progressive URL structure and the presence of parameters such as “Domain Power” point to the use of automated crawlers or scrapers to replicate and classify backlinks on a large scale. These indexes can be used to feed second-generation link farms, employed as proof-of-delivery for Black Hat SEO services or to simulate organic growth of purchased backlinks.

Negative SEO: its boundary with and relationship to Black Hat SEO (BHS)


Negative SEO (NSO) is the offensive variant of the same techniques used in Black Hat SEO (BHS), but with a destructive purpose. Instead of improving a site’s visibility, the goal is to damage a competing domain, compromising its digital reputation to the point of causing ranking losses or even an algorithmic or manual penalty from the search engines.

The most common BHS practices, link farms, PBN (Private Blog Network) networks, cloaking, deceptive redirects and CMS compromises, become, when applied against third parties, genuine Negative SEO weapons, capable of infecting the reputation of a legitimate domain with thousands of toxic backlinks or manipulated content.

What makes the phenomenon more insidious is that many BHS services born for promotional purposes, such as selling backlinks or guest posts on compromised sites, can generate Negative SEO side effects even without any direct malicious intent. Large-scale automated link distribution, with no quality filters or control over the origin of the domains, ends up creating a web of digital contamination that hits victims and attackers alike, making the boundary between promotion and sabotage ever more blurred.

Detecting and analyzing the signals of an SEO attack


Forensic SEO diagnostics often starts from signals visible directly in Google Search Console (GSC), which is the first alerting tool in case of pollution or attack.
The most frequent symptoms include:

  • a sudden collapse in organic traffic, not explained by algorithm updates or seasonality;
  • an anomalous loss of ranking on strategic keywords, often replaced by results from low-quality sites;
  • the appearance of Manual Actions for unnatural links or suspicious content.

Taken together, these clues suggest that the domain may have been exposed to toxic link campaigns or to manipulation schemes typical of Negative SEO. From here one moves on to technical backlink analysis, verification of suspicious referrals and, if necessary, cleanup via disavow tools.

Backlink audit


The backlink audit is one of the most important phases in diagnosing SEO compromises.
Through systematic analysis of the inbound links, it is possible to distinguish organic, progressive links, generated over time by genuine content or spontaneous citations, from artificial or toxic ones, produced en masse by automated networks such as BHS Links.

An analysis of this kind does not simply count links, but evaluates the semantic quality, thematic coherence and geographic distribution of the sources. When numerous backlinks come from freshly registered domains, with similar HTML structure or repetitive anchors, the signal becomes clear: you are looking at an ecosystem built to alter rankings.

In the specific case of BHS Links, link tracking highlighted recurring patterns: sudden spikes in outbound links, anchors manipulated with commercial keywords, and cross-references to hidden directories. All typical clues of an artificial SEO operation, aimed not only at pushing its own domains but also at semantically polluting the legitimate ones connected to them.
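A first pass over an exported backlink list can be scripted. The sketch below assumes a hypothetical CSV export with source_domain and anchor_text columns, and flags referring domains whose links almost all carry the same anchor:

import csv
from collections import Counter, defaultdict

# Hypothetical export: one row per backlink, columns "source_domain" and "anchor_text"
anchors_by_domain = defaultdict(Counter)
with open("backlinks_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        anchors_by_domain[row["source_domain"]][row["anchor_text"].lower()] += 1

for domain, anchors in anchors_by_domain.items():
    total = sum(anchors.values())
    anchor, hits = anchors.most_common(1)[0]
    # Many links from one domain, nearly all with the same commercial anchor: a red flag
    if total >= 20 and hits / total > 0.8:
        print(f"[!] {domain}: {total} links, {hits} with anchor '{anchor}'")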

Response and mitigation


When a domain shows signs of compromise or receives toxic backlinks, the first action is to map and isolate the suspicious domains. The harmful links can be collected in a simple text file (.txt, UTF-8 encoding) in the following format:

domain:bhs-links-hive.online
domain:bhs-links-anchor.online
domain:bhs-links-blaze.online
domain:backlinks.directory

The file is then uploaded to Google Search Console, in the Disavow Tool section, to tell the search engine to ignore links coming from those domains. It is important to monitor the effects of the operation over time: removing the negative impact can take weeks, depending on how often Googlebot crawls the site.
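Building and sanity-checking that file can also be scripted from the list of domains flagged during the audit; a minimal sketch (the input list is an assumption) could look like this:

from pathlib import Path

# Domains flagged during the backlink audit (assumed input)
suspect_domains = [
    "bhs-links-hive.online",
    "bhs-links-anchor.online",
    "bhs-links-blaze.online",
    "backlinks.directory",
]

# The disavow format accepts one "domain:" entry per line, UTF-8 encoded
lines = sorted(f"domain:{d.strip().lower()}" for d in set(suspect_domains))
Path("disavow.txt").write_text("\n".join(lines) + "\n", encoding="utf-8")
print(f"wrote disavow.txt with {len(lines)} entries")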

In the event of a manual penalty, a reconsideration request can be submitted, providing clear documentation of the actions taken:

  • describe the type of manipulation or attack suffered (e.g. unnatural links, automatically generated content);
  • explain in detail the corrective measures adopted (removal and/or disavow of the links, server cleanup, removal of spam content);
  • attach relevant documentation (for example screenshots, a list of the changes, the disavow file, logs of link removal requests) to illustrate the work done;
  • verify that the site is accessible to Googlebot (no blocks in robots.txt, key pages indexable and sitemaps up to date)


Preventive defense and monitoring


A genuinely effective defense strategy relies on continuous prevention and constant monitoring of the digital ecosystem. The most recommended practices include:

  • periodic backlink audits (at least monthly), to quickly intercept anomalous spikes or new domains of suspicious origin;
  • regular checks of the .htaccess and robots.txt files, to promptly detect any code injection, unauthorized redirects or improper blocks on the crawler;
  • monitoring of DNS and shared IP classes (Class C), useful for spotting risky co-hosting or connections to compromised networks;
  • in-house SEO training and staff awareness, to avoid working with vendors or agencies that use “black hat” techniques disguised as aggressive link-building strategies


The damage caused by a Negative SEO operation


A Negative SEO operation can begin with a series of malicious actions aimed at compromising a target’s reputation in the eyes of the search engines. Attackers can, as in this case, generate thousands of low-quality backlinks from spammy or penalized sites, making it look as though the portal is trying to artificially manipulate its own positioning. This type of attack can lead Google to reduce its trust in the domain, with a consequent drastic drop in organic ranking and a significant loss of traffic.

A typical case is content duplication, which can happen when distinctive elements of the portal, such as original headlines or slogans, are copied and maliciously reused by third-party sites. For example, the headline “La cybersecurity è condivisione. Riconosci il rischio, combattilo, condividi le tue esperienze ed incentiva gli altri a fare meglio di te.”, originally conceived to promote Red Hot Cyber’s philosophy, was detected in several posts published on unknown or low-quality portals, as seen above, used for black SEO practices.

The damage from a black SEO operation can be deep and long lasting, going well beyond a simple loss of search engine positioning. In addition to the drop in organic traffic and the reduced visibility, the portal can suffer an erosion of trust from both users and ranking algorithms. When a site is associated, even indirectly, with spam, link farming or content duplication, Google’s and Bing’s filters can apply algorithmic or manual penalties that take months to lift.

Conclusions


The BHS Links network is an emblematic case of industrialized Black Hat SEO, in which once-marginal manipulation techniques turn into automated global services.

The inclusion of real Italian sites inside the HTML code shows how the frontier of Black Hat SEO and Negative SEO no longer runs through traditional hacking, but through a subtler form of semantic and reputational overlap, capable of confusing ranking algorithms and authority criteria.

For Google and for webmasters the risk is twofold:

  • the temporary loss of ranking, with immediate economic impact;
  • the erosion of trust in the evaluation algorithms on which the entire search ecosystem is founded

When trust is the only algorithm that cannot be corrupted, defending transparency is no longer a technical choice but an act of digital resistance.

The article The SEO of deception! The ghost network uncovered by RHC that penalizes the SERP comes from Red Hot Cyber.


Making RAM for a TMS9900 Homebrew Computer


Schematic diagram of part of RAM

Over on YouTube [Usagi Electric] shows us how to make RAM for the TMS9900.

He starts by remarking that the TI-99/4A computer is an excellent place to start if you’re interested in getting into retro-computing. Particularly there are a lot of great resources online, including arcadeshopper.com and the AtariAge forums.

The CPU in the TI-99 is the TMS9900. As [Usagi Electric] explains in the video this CPU only has a few registers and most actual “registers” are actually locations in RAM. Because of this you can’t do much with a TMS9900 without RAM attached. So he sets about making some RAM for his homebrew TMS9900 board. He uses Mitsubishi M58725P 16 kilobit (2 kilobyte) static RAM integrated circuits; each has 11 address lines and 8 data lines, so by putting two side-by-side we get support for 16-bit words. Using six M58725Ps, in three pairs, we get 6 kilowords (12 kilobytes).

He builds out his RAM boards by adding 74LS00 Quad 2-input NAND gates and 74LS32 Quad 2-input OR gates. Anticipating the question as to why he uses NAND gates and OR gates he explains that he uses these because he has lots of them! Hard to fault that logic. (See what we did there?)

After a quick introduction to the various animals in his household [Usagi Electric] spends the rest of the video assembling and testing his RAM. For testing the RAM with full feature coverage a simple assembly program is written and then compiled with a custom tool chain built around a bunch of software available on the internet. Success is claimed when the expected trace is seen on the oscilloscope.

Of course we’ve seen plenty of TMS9900 builds before, such as with this TMS9900 Retro Build.

youtube.com/embed/YMdh74Grcqo?…


hackaday.com/2025/10/29/making…


Hello World in C Without Linking in Libraries


If there’s one constant with software developers, it is that sometimes they get bored. At these times, they tend to think dangerous thoughts, usually starting with ‘What if…’. Next you know, they have gone down a dark and winding rabbit hole and found themselves staring at something so amazing that the only natural conclusion that comes to mind is that while educational, it serves no immediate purpose.

The idea of applying this to snipping out the <stdio.h> header in C and the printf() function that it provides is definitely a good example here. Starting from the typical Hello World example in C, [Old Man Yells at Code] over at YouTube first takes us from the standard dynamically linked binary at a bloated 16 kB, to the statically linked version at an eye-popping 767 kB.

To remove any such dynamic linkages, and to keep file sizes somewhat sane, he then proceeds to first use the write() function from the <unistd.h> header, which does indeed cut out the <stdio.h> include, before doing the reasonable thing and removing all includes by rewriting the code in x86 assembly.

While this gets the final binary size down to 9 kB and needs no libraries to link with, it still performs a syscall, after setting appropriate register values, to hand control back to the kernel for doing the actual printing. If you try doing something similar with syscall(), you have to link in libc, so it might very well be that this is the real way to do Hello World without includes or linking in libraries; the asm keyword is, after all, part of C, although one could argue that at this point you could just as well write everything in x86 ASM.

Of course, one cannot argue that this experience isn’t incredibly educational, and decidedly answers the original ‘What if…’ question.

youtube.com/embed/gVaXLlGqQ-c?…


hackaday.com/2025/10/29/hello-…


10 Cent Microcontroller Makes Tracker Music


We are absurdly spoiled these days by our microcontrollers. Take the CH32V00X family– they’ve been immortalized by meme as “the ten cent micro” but with a clock speed of 48MHz and 32-bit registers to work with, they’re astoundingly capable machines even by the standards of home computers of yore. That’s what motivated [Tim] to see if he could use one to play MOD files, with only minimal extra parts– and quite specifically no DAC.

Well, that’s part of what motivated him. The other part was seeing Hackaday feature someone use a CH32V003 making chiptune-like beeps. [Tim] apparently saw that post as a gauntlet thrown down, and he picked it up with an even smaller chip: the CH32V002, which he proceeded to turn into a MOD player. For those of you who slept through the 80s and early 90s (or for those precocious infants reading this who hadn’t yet been born), MOD files are an electronic music format, pioneered on the Amiga home computers. Like MIDI, the file specifies when to play specific voices rather than encoding the sound directly. Unlike MIDI, MOD files are self-contained, with the samples/voices used being stored inside the file. The original version targeted four-channel sound, and that’s what [Tim] is using here.

As you can see from the demo video, it sounds great. He pulled it off by using the chip’s built-in PWM timer. Since the timer’s duty cycle is determined by a variable that can be changed by DMA, the CPU doesn’t end up with very much to do here. In the worst case, with everything in flash memory instead of SRAM, the CPU is only taxed at 24%, so there’s plenty of power to say, add graphics for a proper demo. Using the existing MODPlay Library, [Tim]’s player fits into 4kB of memory, leaving a perfectly-usable 12kB for the MOD file. As far as external components needed, it’s just an RC filter to get rid of PWM noise.

[Tim] has put his code up on GitHub for anyone interested, and has perhaps inadvertently cast down another gauntlet for anyone who wants to use these little RISC-V microcontrollers for musical tasks. If you can do better, please do, and let us know.

youtube.com/embed/IQmQ0Qlt3V8?…


hackaday.com/2025/10/29/10-cen…


Microsoft 365 goes down: a DNS anomaly paralyzes services worldwide


A DNS service outage was detected by Microsoft on 29 October 2025, affecting access to core services such as Microsoft Azure and Microsoft 365. The anomaly was detected at 21:37 GMT+5:30, causing widespread delays across various applications and blocking user access to the Microsoft 365 admin area.

According to early reports, DNS resolution difficulties were hampering proper traffic management, with negative effects on authentication and service endpoints. Dependence on these platforms for email, collaboration and cloud computing services led to service unavailability.

The outage hit numerous geographic areas, prompting plenty of complaints on social media and tech forums in North America, Europe and Asia. Office 365 tenant administrators were met with errors, while users experienced delays in applications such as SharePoint, Teams and Outlook.

Azure storage services and virtual machines saw episodes of intermittent unavailability, which could disrupt development workflows and data processing operations.

Cybersecurity specialists noted that, despite the absence of any reports of data breaches, the event exposed the weaknesses in cloud dependency chains, where an isolated DNS anomaly can ripple widely across interconnected services.

Microsoft’s status page confirmed that the scope included the admin portals and the main productivity tools, but spared some ancillary features such as OneDrive file sync in isolated cases.

Microsoft’s engineering teams quickly identified the root cause of the problem: malfunctioning network and hosting infrastructure. At 21:51 GMT+5:30 they began unblocking the affected systems and redistributing traffic to mitigate the issue.

A subsequent update at 21:58 provided details of a deeper analysis of the health of the infrastructure, followed by the rerouting to healthy alternative paths announced at 22:06.

So this was an isolated internal problem, not a cyber attack. Recovery efforts were still under way at the time of writing, and Microsoft urged users to keep an eye on the Azure status page for real-time updates. The company confirmed that restoration work was proceeding without pause.

In recent days we have seen a series of problems, to which this incident is now added; outages and disruptions on AWS (as happened a few days ago) or on Azure (as now) easily cause cascading problems that should be foreseeable and manageable.

The article Microsoft 365 goes down: a DNS anomaly paralyzes services worldwide comes from Red Hot Cyber.


Tor Browser says NO to artificial intelligence! Security comes first


It is interesting to note that, while big companies such as Microsoft and Google are actively adding artificial intelligence features to their browsers, the Tor development team has chosen to remove them.

@henry, a Tor project contributor, pointed out that the team has not been able to fully verify the training process and the “black box” behavior of AI models, so it decided to eliminate the risks up front.

Although some users might be willing to “accept Mozilla’s risks” for certain features, the Tor project explicitly prioritizes not integrating them.

The removed components include Mozilla’s AI-based chat sidebar, introduced in March this year, and the page-summary link preview feature, launched in May.

In addition, Tor Browser 15.0a4 has also removed some Mozilla/Firefox branding elements, such as the fox icon, the Firefox homepage and the new history sidebar. The history sidebar has been reverted to the old Tor Browser 14.5 interface and is accessible via the Ctrl+H shortcut.

As for the address bar, Tor Browser will no longer hide the protocol part of the URL (such as http or https) in the desktop version, but this part will still be hidden on the Android mobile platform.

Other minor updates include: improved emoji rendering on the Linux platform (now bundled with the Noto Color Emoji font), the addition of the Jigmo font to improve the display of Chinese, Japanese and Korean characters, and improved dark theme support for Tor Browser’s own interface.

The article Tor Browser says NO to artificial intelligence! Security comes first comes from Red Hot Cyber.


Supercon 2025 Badge Gets Vintage Star Trek Makeover


There are still a few days before the doors open on this year’s Hackaday Supercon in Pasadena, but for the most dedicated attendees, the badge hacking has already begun…even if they don’t have a badge yet.

By referencing the design files we’ve published for this year’s Communicator badge, [Thomas Flummer] was able to produce this gorgeous 3D printed case that should be immediately recognizable to fans of the original Star Trek TV series.
Metal hinge pin? Brass inserts? Scotty would be proud.
Although the layout of this year’s badge is about as far from the slim outline of the iconic flip-up Trek communicator as you can get, [Thomas] managed to perfectly capture its overall style. By using the “Fuzzy Skin” setting in the slicer, he was even able to replicate the leather-like texture seen on the original prop.

Between that and the “chrome” trim, the finished product really nails everything Jadzia Dax loved about classic 23rd century designs. It’s not hard to imagine this could be some companion device to the original communicator that we just never got to see on screen.

While there’s no denying that the print quality on the antenna lid is exceptional, we’d really like to see that part replaced with an actual piece of brass mesh at some point. Luckily, [Thomas] has connected it to the body of the communicator with a removable metal hinge pin, so it should be easy enough to swap it out.

Considering the incredible panel of Star Trek artists that have been assembled for the Supercon 2025 keynote, we imagine this won’t be the last bit of Trek-themed hacking that we see this weekend — which is fine by us.


hackaday.com/2025/10/29/superc…


FLOSS Weekly Episode 853: Hardware Addiction; Don’t Send Help


This week Jonathan and Rob chat with Cody Zuschlag about the Xen project! It’s the hypervisor that runs almost everywhere. Why is it showing up in IoT devices and automotive? And what’s coming next for the project? Watch to find out!


youtube.com/embed/z1bXf5mTzcY?…

Did you know you can watch the live recording of the show right on our YouTube Channel? Have someone you’d like us to interview? Let us know, or have the guest contact us! Take a look at the schedule here.

play.libsyn.com/embed/episode/…

Direct Download in DRM-free MP3.

If you’d rather read along, here’s the transcript for this week’s episode.

Places to follow the FLOSS Weekly Podcast:


Theme music: “Newer Wave” Kevin MacLeod (incompetech.com)

Licensed under Creative Commons: By Attribution 4.0 License


hackaday.com/2025/10/29/floss-…


2025 Component Abuse Challenge: The Opto Flasher


There’s a part you’ll find in almost every mains powered switch mode power supply that might at first appear to have only one application. An optocoupler sits between the low voltage and the high voltage sides, providing safely isolated feedback. Can it be used for anything else? [b.kainka] thinks so, and has proved it by making an optocoupler powered LED flasher.

If a part can be made to act as an amplifier with a gain greater than one, then it should also be possible to make it oscillate. We’re reminded of the old joke about it being very easy to make an oscillator except when you want to make one, but in this case when an optocoupler is wired up as an inverting amplifier with appropriate feedback, it will oscillate. Here, a rather large capacitor gives a longish period, enough to flash an LED.

We like this circuit, combining as it does an unexpected use for a part, and a circuit in which the unusual choice might just be practical. It’s part of our 2025 Component Abuse Challenge, for which you just about still have time to make an entry yourself if you have one.

2025 Hackaday Component Abuse Challenge


hackaday.com/2025/10/29/2025-c…


This Reactor is on Fire! Literally…


If I mention nuclear reactor accidents, you’d probably think of Three Mile Island, Fukushima, or maybe Chernobyl (or, now, Chornobyl). But there have been others that, for whatever reason, aren’t as well publicized. Did you know there is an International Nuclear Event Scale? Like the Richter scale, but for nuclear events. A zero on the scale is a little oopsie. A seven is like Chernobyl or Fukushima, the only two such events at that scale so far. Three Mile Island and the event you’ll read about in this post were both level five events. That other level five event? The Windscale fire incident in October of 1957.

If you imagine this might have something to do with the Cold War, you are correct. It all started back in the 1940s. The British decided they needed a nuclear bomb project and started their version of the Manhattan Project called “Tube Alloys.” But in 1943, they decided to merge the project with the American program.

The British, rightfully so, saw themselves as co-creators of the first two atomic bombs. However, in post-World War paranoia, the United States shut down all cooperation on atomic secrets with the 1946 McMahon Act.

We Are Not Amused


The British were not amused and knew that to secure a future seat at the world table, they would need to develop their own nuclear capability, so they resurrected Tube Alloys. If you want a detour about the history of Britain’s bomb program, the BBC has a video for you that you can see below.

youtube.com/embed/8WcMm31RbMw?…

Of course, post-war Britain wasn’t exactly flush with cash, so they had to limit their scope a bit. While the Americans had built bombs with both uranium and plutonium, the UK decided to focus on plutonium, which could create a stronger bomb with less material.

Of course, that also means you have to create plutonium, so they built two reactors — or piles, as they were known then. They were both in the same location near Seascale, Cumberland.

Inside a Pile

The Windscale Piles in 1951 (photo from gov.uk website).
The reactors were pretty simple. There was a big block of graphite with channels drilled through it horizontally. You inserted uranium fuel cartridges in one end, pushing the previous cartridge through the block until they fell out the other side into a pool of water.

The cartridges were encased in aluminum and had cooling fins. These things got hot! Immediately, though, practical concerns — that is, budgets — got in the way. Water cooling was a good idea, but there were problems. First, you needed ultra-pure water. Next, you needed to be close to the sea to dump radioactive cooling water, but not too close to any people. Finally, you had to be willing to lose a circle around the site about 60 miles in diameter if the worst happened.

The US facility at Hanford, indeed, had a 30-mile escape road for use if they had to abandon the site. They dumped water into the Columbia River, which, of course, turned out to be a bad idea. The US didn’t mind spending on pure water.

Since the British didn’t like any of those constraints, they decided to go with air cooling using fans and 400-foot-tall chimneys.

Our Heroes


Most of us can relate to being on a project where the rush to save money causes problems. A physicist, Terence Price, wondered what would happen if a fuel cartridge split open. For example, one might miss the water pool on the other side of the reactor. There would be a fire and uranium oxide dust blowing out the chimney.

The idea of filters in each chimney was quickly shut down. Since the stacks were almost complete, they’d have to go up top, costing money and causing delays. However, Sir John Cockcroft, in charge of the construction, decided he’d install the filters anyway. The filters became known as Cockcroft’s Follies because they were deemed unnecessary.

So why are these guys the heroes of this story? It isn’t hard to guess.

A Rush to Disaster


The government wanted to quickly produce a bomb before treaties would prohibit them from doing so. That put them on a rush to get H-bombs built by 1958. There was no time to build more reactors, so they decided to add material to the fuel cartridges to produce tritium, including magnesium. The engineers were concerned about flammability, but no one wanted to hear it.

They also decided to make the fins of the cartridges smaller to raise the temperature, which was good for production. This also allowed them to stuff more fuel inside. Engineers again complained. Hotter, more flammable fuel. What could go wrong? When no one would listen, the director, Christopher Hinton, resigned.

The Inevitable


The change in how heat spread through the core was dangerous. But the sensors in place were set for the original patterns, so the increased heat went undetected. Everything seemed fine.

It was known that graphite tends to store some energy from neutron bombardment for later release, which could be catastrophic. The solution was to heat the core to a point where the graphite started to get soft, which would gradually release the potential energy. This was a regular part of operating the reactors. The temperature would spike and then subside. Operations would then proceed as usual.

By 1957, they’d done eight of these release cycles and prepared for a ninth. However, this one didn’t go as planned. Usually, the core would heat evenly. This time, one channel got hot and the rest didn’t. They decided to try the release again. This time it seemed to work.

As the core started to cool as expected, there was an anomaly. One part of the core was rising instead, reaching up to 400C. They sped up the fans and the radiation monitors determined that they had a leak up the chimney.

Memories


Remember the filters? Cockcroft’s Follies? Well, radioactive dust had gone up the chimney before. In fact, it had happened pretty often. As predicted, the fuel would miss the pool and burst.

With the one spot getting hotter, operators assumed a cartridge had split open in the core. They were wrong. The cartridge was on fire. The Windscale reactor was on fire.

Of course, speeding up the fans just made the fire worse. Two men donned protective gear and went to peek at an inspection port near the hot spot. They saw four channels of fuel glowing “bright cherry red”. At that point, the reactor had been on fire for two days. The Reactor Manager suited up and climbed the 80 feet to the top of the reactor building so he could assess the backside of the unit. It was glowing red also.

Fight Fire with ???


The fans only made the fire worse. They tried to push the burning cartridges out with metal poles. They came back melted and radioactive. The reactor was now white hot. They then tried about 25 tonnes of carbon dioxide, but getting it to where it was needed proved to be too difficult, so that effort was ineffective.

By the 11th of October, an estimated 11 tonnes of uranium were burning, along with magnesium in the fuel for tritium production. One thermocouple was reading 3,100C, although that almost had to be a malfunction. Still, it was plenty hot. There was fear that the concrete containment building would collapse from the heat.

You might think water was the answer, and it could have been. But when water hits molten metal, hydrogen gas results, which, of course, is going to explode under those conditions. They decided, though, that they had to try. The manager once again took to the roof and tried to listen for any indication that hydrogen was building up. A dozen firehoses pushed into the core didn’t make any difference.

Sci Fi


If you read science fiction, you probably can guess what did work. Starve the fire for air. The manager, a man named Tuohy, and the fire chief remained and sent everyone else out. If this didn’t work, they were going to have to evacuate the nearby town anyway.

They shut off all cooling and ventilation to the reactor. It worked. The temperature finally started going down, and the firehoses were now having an effect. It took 24 hours of water flow to get things completely cool, and the water discharge was, of course, radioactive.

If you want a historical documentary on the event, here’s one from Spark:

youtube.com/embed/S0DXndsQ0H4?…

Aftermath


The government kept a tight lid on the incident and underreported what had been released. But there was much less radioactive iodine, cesium, plutonium, and polonium release because of the chimney filters. Cockcroft’s Folly had paid off.

While it wasn’t ideal, official estimates are that 240 extra cancer cases were due to the accident. Unofficial estimates are higher, but still comparatively modest. Also, there had been hushed-up releases earlier, so it is probable that the true number due to this one accident is even lower, although if it is your cancer, you probably don’t care much which accident caused it.

Milk from the area was dumped into the sea for a while. Today, the reactor is sealed up, and the site is called Sellafield. It still contains thousands of damaged fuel elements within. The site is largely stable, although the costs of remediating the area have been, and will continue to be staggering.

This isn’t the first nuclear slip-up that could have been avoided by listening to smart people earlier. We’ve talked before about how people tend to overestimate or sensationalize these kinds of disasters. But it still is, of course, something you want to avoid.

Featured image: “HD.15.003” by United States Department of Energy


hackaday.com/2025/10/29/this-r…


Restoring the E&L MMD-1 Mini-Micro Designer Single-Board Computer from 1977


A photo of the MMD-1 on the workbench.

Over on YouTube [CuriousMarc] and [TubeTimeUS] team up for a multi-part series E&L MMD-1 Mini-Micro Designer Restoration.

The E&L MMD-1 is a microcomputer trainer and breadboard for the Intel 8080. It’s the first ever single-board computer. What’s more, they mention in the video that E&L actually invented the breadboard with the middle trench for the ICs which is so familiar to us today; their US patent 228,136 was issued in August 1973.

The MMD-1 trainer has support circuits providing control logic, clock, bus drivers, voltage regulator, memory decoder, memory, I/O decoder, keyboard encoder, three 8-bit ports, an octal keyboard, and other support interconnects. They discuss in the video the Intel 1702 which is widely accepted as the first commercially available EPROM, dating back to 1971.

In the first video they repair the trainer then enter a “chasing lights” assembly language program for testing and demonstration purposes. This program was found in 8080 Microcomputer Experiments by Howard Boyet on page 76. Another book mentioned is The Bugbook VI by David Larsen et al.

In the second video they wire in some Hewlett-Packard HP 5082-7300 displays which they use to report on values in memory.

A third episode is promised, so stay tuned for that! If you’re interested in the 8080 you might like to read about its history or even how to implement one in an FPGA!

youtube.com/embed/eCAp3K7yTlQ?…

youtube.com/embed/Sfe18oyRvGk?…


hackaday.com/2025/10/29/restor…


183 million Gmail accounts hacked! It’s false: it was just a hoax


For the second time in recent months, Google has been forced to deny reports of a massive Gmail data breach. The story was sparked by reports of a “hack of 183 million accounts” circulating online, even though there was no actual breach or incident involving Google’s servers.

As company representatives explained, this is not a new attack, but rather old databases of logins and passwords collected by attackers through infostealers and other attacks over recent years.

“Reports of a ‘Gmail breach affecting millions of users’ are false. Gmail and its users are reliably protected,” Google representatives stated. The company also stressed that the source of the rumors of a major leak was a database containing infostealer logs, as well as credentials stolen in phishing and other attacks.

The fact is that this database was recently made public through the threat analysis platform Synthient and was then added to the breach aggregator Have I Been Pwned (HIBP).

HIBP’s creator, Troy Hunt, confirmed that the Synthient database contains around 183 million credentials, including logins, passwords and the web addresses where they were used. According to Hunt, this is not a single leak: the information was gathered over the years from Telegram channels, forums, the dark web and other sources. Moreover, these accounts are not tied to a single platform, but to thousands, if not millions, of different websites and services.

In addition, 91% of the records had already appeared in other leaks and were present in the HIBP database, while only 16.4 million addresses were new.

Synthient representatives confirmed that most of the data in the database was not obtained through hacking, but by infecting individual users’ systems with malware. In total, the researchers collected 3.5 TB of information (23 billion rows), including email addresses, passwords and the exposed website addresses where the compromised credentials were used.

Google points out that it regularly discovers and uses such databases for security checks, helping users reset leaked passwords and re-secure their accounts.

The company also stresses that, even though Gmail was not hacked, old usernames and passwords that have already leaked can still pose a threat. To mitigate these risks, Google recommends enabling multi-factor authentication or switching to passkeys, which are more secure than traditional passwords.
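For individual passwords, the Have I Been Pwned “Pwned Passwords” range API offers a privacy-preserving check of the kind of corpus behind this story: only the first five characters of the SHA-1 hash ever leave the machine (k-anonymity). A minimal Python sketch, assuming the public endpoint and the third-party requests library:

import hashlib
import requests  # third-party: pip install requests

def pwned_count(password: str) -> int:
    """Return how many times a password appears in the Pwned Passwords corpus."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # Only the 5-character prefix is sent; the full hash never leaves the machine
    r = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=15)
    r.raise_for_status()
    for line in r.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(pwned_count("correct horse battery staple"))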

As a reminder, in September 2025 Google had already denied reports of a massive breach of Gmail user data. At the time, media reports had emerged claiming that Google had sent a mass notification to all Gmail users (around 2.5 billion people) asking them to urgently change their passwords and enable two-factor authentication. Google representatives subsequently denied the truth of that story.

The article 183 million Gmail accounts hacked! It’s false: it was just a hoax comes from Red Hot Cyber.


Tasting the Exploit: HackerHood tests the Microsoft WSUS exploit CVE-2025-59287


The cybersecurity landscape has recently been shaken by the discovery of a critical Remote Code Execution (RCE) vulnerability in Microsoft’s Windows Server Update Services (WSUS).

Identified as CVE-2025-59287 and scored CVSS 9.8 (Critical), this flaw poses a high and immediate risk to organizations that use WSUS for centralized update management.

The vulnerability is particularly dangerous because it allows a remote, unauthenticated attacker to execute arbitrary code with SYSTEM privileges on affected WSUS servers.

After Microsoft released an emergency “out-of-band” patch on 23 October 2025, necessary because the initial October patch had not fully fixed the problem, active exploitation was immediately observed in the wild.

The US Cybersecurity and Infrastructure Security Agency (CISA) quickly added this vulnerability to its Known Exploited Vulnerabilities (KEV) catalog, underlining the urgency of an immediate response from system administrators.

Details of the Problem


WSUS is a fundamental tool in corporate networks, acting as a trusted source for distributing software patches.

Its nature as a key infrastructure service makes it a high-value target, since compromising it can give an attacker a beachhead for lateral movement and widespread compromise of the network.

The root of the problem is a case of unsafe deserialization of untrusted data, with Remote Code Execution (RCE) as the end result.

This technical flaw can be exploited through several known attack paths:

  • GetCookie() endpoint: an attacker can send a specially crafted request to the GetCookie() endpoint, causing the server to improperly deserialize an AuthorizationCookie object using the unsafe BinaryFormatter.
  • ReportingWebService: an alternative path targets the ReportingWebService to trigger unsafe deserialization via SoapFormatter.

In both scenarios, the attacker can induce the system to execute malicious code at the highest privilege level: SYSTEM.
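The flaw itself lives in .NET's legacy BinaryFormatter and SoapFormatter, but the class of bug is easy to illustrate in any language whose serializer can reconstruct arbitrary objects. Purely as an analogy (this is not the WSUS code path), here is a Python sketch using pickle, whose __reduce__ hook plays roughly the role that gadget chains play against the .NET formatters:

import pickle

# An object whose deserialization runs attacker-chosen code.
# In .NET, gadget chains against BinaryFormatter/SoapFormatter achieve
# the same effect without such an obvious hook.
class Payload:
    def __reduce__(self):
        import os
        return (os.system, ("echo code executed during deserialization",))

untrusted_bytes = pickle.dumps(Payload())  # what an attacker would send

# The vulnerable pattern: deserializing bytes received from the network.
pickle.loads(untrusted_bytes)  # the side effect fires here, before any type check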

The vulnerability only affects systems with the WSUS Server role enabled; Microsoft Windows Server 2012, 2012 R2, 2016, 2019, 2022 (including version 23H2), and 2025 are vulnerable.

Technical Details of the Exploit


Following the recent public disclosure of a Proof-of-Concept (PoC) for the exploit, we ran a lab test to analyze how it works and its potential impact.

The PoC is available at: gist.github.com/hawktrace/76b3…

It is specifically designed to target Windows Server Update Services (WSUS) instances publicly exposed on the default TCP ports 8530 (HTTP) and 8531 (HTTPS).

Running the PoC is straightforward: launch the script and pass the vulnerable http/https target as an argument.

This triggers malicious PowerShell commands via child processes and uses the second exploitation path described above (ReportingWebService); in this specific case, the calc.exe process (the system calculator) is launched.

The malicious commands are Base64-encoded and are executed during a deserialization phase inside the WSUS service.

This deserialization mechanism is the crucial point where an attacker can inject any other command to carry out reconnaissance or post-exploitation activity.

The process sequence in the lab test was as follows:

WsusService.exe -> Cmd.exe -> Calc.exe -> win32calc.exe

These are the child processes spawned by the legitimate WSUS process (wsusservice.exe).

(Note: in this specific PoC, launching calc.exe (the system calculator) serves as a non-destructive proof that remote code execution succeeded.)
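In our test the most reliable hunting signal was the parent/child relationship itself. The sketch below is one way to flag it from process-creation telemetry (for example Sysmon Event ID 1) exported to CSV; the Image, ParentImage, UtcTime, and CommandLine column names are assumptions based on Sysmon's usual fields, so adapt them to whatever your EDR exports:

import csv
import sys

# Shells spawned directly by the WSUS service are a strong indicator for this CVE.
SUSPICIOUS_CHILDREN = ("cmd.exe", "powershell.exe", "pwsh.exe")

def hunt(csv_path: str):
    with open(csv_path, newline="", encoding="utf-8") as fh:
        for event in csv.DictReader(fh):
            parent = event.get("ParentImage", "").lower()
            child = event.get("Image", "").lower()
            if parent.endswith("wsusservice.exe") and child.endswith(SUSPICIOUS_CHILDREN):
                print(f"ALERT {event.get('UtcTime', '?')}: {parent} -> {child} "
                      f"cmdline={event.get('CommandLine', '')!r}")

if __name__ == "__main__":
    hunt(sys.argv[1])  # CSV export of process-creation events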

Moreover, the command persists: when the server or the service is restarted, the previously injected RCE runs again. The same happens when the MMC (Microsoft Management Console) is launched with the WSUS snap-in used to configure and monitor the service.

Demo video: the full operation of this PoC can be seen in the video available at this link: youtube.com/watch?v=CH4Ped59SL…

youtube.com/embed/CH4Ped59SLY?…

Key Monitoring Points and Artifacts


The following table summarizes the crucial artifacts to examine and the related detection criteria for identifying a possible compromise via this CVE:

Here is an example of the log captured during the tests.

The contents of SoftwareDistribution.log.

And finally the IIS log, where an anomalous user agent is indeed present.

(This log is only available if the "HTTP Logging" feature has been installed under the server's "Web Server / Health and Diagnostics" subcategory.)
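As a rough starting point for that kind of triage, the sketch below walks an IIS W3C log, keeps only requests to the WSUS web services, and flags user agents outside a short allow-list. The field names follow the standard #Fields header, and the allow-list is illustrative only, not an official detection rule:

import sys

# User agents we normally expect against WSUS endpoints; anything else is flagged.
# This allow-list is illustrative and must be tuned per environment.
EXPECTED_AGENTS = ("Windows-Update-Agent", "SCCM", "Microsoft-Delivery-Optimization")

def scan_iis_log(path: str):
    fields = []
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            line = line.strip()
            if line.startswith("#Fields:"):
                fields = line.split()[1:]          # e.g. ['date', 'time', 'cs-uri-stem', ...]
                continue
            if not line or line.startswith("#") or not fields:
                continue
            row = dict(zip(fields, line.split()))
            uri = row.get("cs-uri-stem", "")
            agent = row.get("cs(User-Agent)", "")
            # Focus on the WSUS web services abused by the PoC.
            if "webservice" in uri.lower():
                if not any(tag.lower() in agent.lower() for tag in EXPECTED_AGENTS):
                    print(f"suspicious: {row.get('c-ip', '?')} {uri} agent={agent}")

if __name__ == "__main__":
    scan_iis_log(sys.argv[1])  # e.g. a file from C:\inetpub\logs\LogFiles\W3SVC1\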

Global Attack Surface


The attack surface associated with this vulnerability is extremely significant.

As things stand, thousands of WSUS instances worldwide remain exposed to the Internet and are potentially vulnerable. A Shodan search shows more than 480,000 results globally, more than 500 of them in Italy alone, classified as services open on the two default ports attributable to WSUS.
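Readers who want to reproduce the measurement can do so with the official shodan Python library. The sketch below only counts hosts with the default WSUS ports open, so it will include plenty of false positives, and the API key is a placeholder:

import shodan

API_KEY = "YOUR_SHODAN_API_KEY"  # placeholder, not a real key

def wsus_exposure():
    api = shodan.Shodan(API_KEY)
    # Count hosts with the default WSUS ports open, worldwide and in Italy.
    for label, query in (
        ("worldwide", "port:8530,8531"),
        ("italy", "port:8530,8531 country:IT"),
    ):
        result = api.count(query)
        print(f"{label}: {result['total']} hosts")

if __name__ == "__main__":
    wsus_exposure()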

Recommendations and Mitigations


The primary recommendation is to apply Microsoft's emergency security patches immediately.

For organizations that cannot deploy the updates immediately, Microsoft has suggested the following temporary measures to mitigate the risk, to be understood as stopgap solutions:

  1. Disable the WSUS Server role: disable or remove the WSUS role from the server entirely to eliminate the attack vector. Note that this breaks the server's ability to manage and distribute updates to client systems.
  2. Block the high-risk ports: block all inbound traffic on TCP ports 8530 and 8531 with the host-level firewall. As with disabling the role, this prevents the server from operating.

It is essential that organizations follow these guidelines and engage in threat-hunting activity to detect any signs of compromise or past exploitation attempts.

The article "Tasting the Exploit: HackerHood Tests the Microsoft WSUS Exploit CVE-2025-59287" originally appeared on Red Hot Cyber.


Expert Systems: The Dawn of AI


We’ll be honest. If you had told us a few decades ago we’d teach computers to do what we want, it would work some of the time, and you wouldn’t really be able to explain or predict exactly what it was going to do, we’d have thought you were crazy. Why not just get a person? But the dream of AI goes back to the earliest days of computers or even further, if you count Samuel Butler’s letter from 1863 musing on machines evolving into life, a theme he would revisit in the 1872 book Erewhon.

Of course, early real-life AI was nothing like you wanted. Eliza seemed pretty conversational, but you could quickly confuse the program. Hexapawn learned how to play an extremely simplified version of chess, but you could just as easily teach it to lose.

But the real AI work that looked promising was the field of expert systems. Unlike our current AI friends, expert systems were highly predictable. Of course, like any computer program, they could be wrong, but if they were, you could figure out why.

Experts?


As the name implies, expert systems drew from human experts. In theory, a specialized person known as a “knowledge engineer” would work with a human expert to distill his or her knowledge down to an essential form that the computer could handle.

This could range from the simple to the fiendishly complex, and if you think it was hard to do well, you aren’t wrong. Before getting into details, an example will help you follow how it works.

From Simple to Complex


One simple fake AI game is the one where the computer tries to guess an animal you think of. This was a very common Basic game back in the 1970s. At first, the computer would ask a single yes or no question that the programmer put in. For example, it might ask, “Can your animal fly?” If you say yes, the program guesses you are thinking of a bird. If not, it guesses a dog.

Suppose you say it does fly, but you weren’t thinking of a bird. It would ask you what you were thinking of. Perhaps you say, “a bat.” It would then ask you to tell it a question that would distinguish a bat from a bird. You might say, “Does it use sonar?” The computer will remember this, and it builds up a binary tree database from repeated play. It learns how to guess animals. You can play a version of this online and find links to the old source code, too.
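Under the hood the whole game is just a binary tree of questions with animals at the leaves; when the program guesses wrong, it grafts a new question node where the wrong leaf used to be. Here is a minimal Python sketch of the idea (our own wording, not the original Basic listing):

class Node:
    """A question node has yes/no children; a leaf just holds an animal name."""
    def __init__(self, text, yes=None, no=None):
        self.text, self.yes, self.no = text, yes, no

    def is_leaf(self):
        return self.yes is None

def ask(prompt):
    return input(prompt + " (y/n) ").strip().lower().startswith("y")

def play(root):
    # Assumes the root is a question node, as in the starter tree below.
    node, parent, went_yes = root, None, True
    while not node.is_leaf():                      # walk down the question tree
        parent, went_yes = node, ask(node.text)
        node = node.yes if went_yes else node.no
    if ask("Is it a " + node.text + "?"):
        print("I win!")
        return
    # Learn: graft a new question in place of the wrong leaf.
    animal = input("What were you thinking of? ")
    question = input("Type a yes/no question that is true for a " + animal + ": ")
    graft = Node(question, yes=Node(animal), no=node)
    if went_yes:
        parent.yes = graft
    else:
        parent.no = graft

tree = Node("Can your animal fly?", yes=Node("bird"), no=Node("dog"))
while True:
    play(tree)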

Of course, this is terrible. It is easy to populate the database with stupid questions or ones you aren’t sure of. Do ants live in trees? We don’t think of them living in trees, but carpenter ants do. Besides, sometimes you may not know the answer or maybe you aren’t sure.

So let’s look at a real expert system, Mycin. Mycin, from Stanford, took data from doctors and determined what bacteria a patient probably had and what antibiotic would be the optimal treatment. Turns out, most doctors you see get this wrong a lot of the time, so there is a lot of value to giving them tools for the right treatment.

This is really a very specialized animal game where the questions are preprogrammed. Is it gram positive? Is it in a normally sterile site? What’s more, Mycin used Bayesian math so that you could assign values to how sure you were of an answer, or even if you didn’t know. So, for example, -1 might mean definitely not, +1 means definitely, 0 means I don’t know, and -0.5 means probably not, but maybe. You get the idea. The system ran on a DEC PDP-10 and had about 600 rules.
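To get a feel for how those certainty factors behave, here is a small sketch of the usual MYCIN-style combination rule, which merges two pieces of evidence about the same hypothesis into a single value between -1 and +1. This is a simplified illustration, not Mycin's actual LISP:

def combine_cf(a: float, b: float) -> float:
    """Combine two certainty factors (each in [-1, 1]) for the same hypothesis."""
    if a >= 0 and b >= 0:
        return a + b * (1 - a)              # two supporting pieces of evidence reinforce
    if a < 0 and b < 0:
        return a + b * (1 + a)              # two contradicting pieces reinforce the negative
    return (a + b) / (1 - min(abs(a), abs(b)))  # conflicting evidence partially cancels

# Example: "probably" (+0.6) plus "weakly suggestive" (+0.4) gives stronger belief...
print(combine_cf(0.6, 0.4))    # 0.76
# ...while strong belief and moderate doubt mostly cancel.
print(combine_cf(0.8, -0.5))   # 0.6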

The system used LISP and could paraphrase rules into English. For example:
(defrule 52
if (site culture is blood)
(gram organism is neg)
(morphology organism is rod)
(burn patient is serious)
then .4
(identity organism is pseudomonas))
Rule 52:
If
1) THE SITE OF THE CULTURE IS BLOOD
2) THE GRAM OF THE ORGANISM IS NEG
3) THE MORPHOLOGY OF THE ORGANISM IS ROD
4) THE BURN OF THE PATIENT IS SERIOUS
Then there is weakly suggestive evidence (0.4) that
1) THE IDENTITY OF THE ORGANISM IS PSEUDOMONAS

In practice, the program did as well as real doctors, even specialists. Of course, it was never used in practice because of ethical concerns and the poor usability of entering data into a timesharing terminal. You can see a 1988 video about Mycin below.

youtube.com/embed/a65uwr_O7mM?…

Under the Covers


Mycin wasn’t the first or only expert system. Perhaps the first was SID. In 1982, SID produced over 90% of the VAX 9000’s CPU design, although many systems before then had dabbled in probabilities and other similar techniques. For example, DENDRAL from the 1960s used rules to interpret mass spectrometry data. XCON started earlier than SID and was DEC’s way of configuring hardware based on rules. There were others, too. Everyone “knew” back then that expert systems were the wave of the future!

Expert systems generally fall into two categories: forward chaining and backward chaining. Mycin was a backward chaining system.

What’s the difference? You can think of each rule as an if statement. Just like the example, Mycin knew that “if the site is in the blood and it is gram negative and… then…” A forward-chaining expert system starts from the facts it already has and keeps firing rules whose conditions are satisfied, chaining their conclusions forward until an answer emerges.

Of course, you can make some assumptions. So, in the sample, if a hypothetical forward-chaining Mycin asked if the site was the blood and the answer was no, then it was done with rule 52.

However, the real Mycin was backward chaining. It would assume something was true and then set out to prove or disprove it. As it receives more answers, it can see which hypothesis to prioritize and which to discard. As rules become more likely, one will eventually emerge.
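To make the contrast concrete, here is a toy sketch that runs both strategies over the same rule set. Rules are just (premises, conclusion) pairs and the starting facts stand in for the answers a user would type; it is our own illustration and far simpler than Mycin's real engine:

# Toy rules in the spirit of the Mycin example; not medical advice.
RULES = [
    ({"site is blood", "gram is negative", "morphology is rod"}, "organism is pseudomonas"),
    ({"organism is pseudomonas"}, "treat with gentamicin"),
]

# Forward chaining: start from known facts, fire rules until nothing new appears.
def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Backward chaining: start from a hypothesis and recursively try to prove its premises.
def backward_chain(goal, facts):
    if goal in facts:
        return True
    return any(
        all(backward_chain(p, facts) for p in premises)
        for premises, conclusion in RULES
        if conclusion == goal
    )

known = {"site is blood", "gram is negative", "morphology is rod"}
print(forward_chain(known))                            # derives both conclusions
print(backward_chain("treat with gentamicin", known))  # True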

If that’s not clear, you can try a college lecture on the topic from 2013, below.

youtube.com/embed/ZhTt-GG7PiQ?…

Of course, in a real system, too, rules may trigger more rules. There were probably as many actual approaches as there were expert systems. Some, like Mycin, were written in LISP. Some in C. Many used Prolog, which has some features aimed at just the kind of things you need for an expert system.

What Happened?


Expert systems are actually very useful for a certain class of problems, and there are still examples of them hanging around (for example, Apache Drools). However, some problems that expert systems tried to solve — like speech recognition — were much better handled by neural networks.

Part of the supposed charm of expert systems is that — like all new technology — it was supposed to mature to the point where management could get rid of those annoying programmers. That really wasn’t the case. (It never is.) The programmers just get new titles as knowledge engineers.

Even NASA got in on the action. They produced CLIPS, allowing expert systems in C, which was available to the public and still is. If you want to try your hand, there is a good book out there.

Meanwhile, you can chat with Eliza if you don’t want to spend time chatting with her more modern cousins.


hackaday.com/2025/10/29/expert…


Looking beyond the AI hype


THIS IS A BONUS EDITION OF DIGITAL POLITICS. I'm Mark Scott, and I'm breaking my rule of not discussing my day job in this newsletter. I'm in New York to present this report about the current gaps in social media data access and the public-private funding opportunities to meet those challenges.

The project is part of my underlying thesis that much, if not all, of digital policymaking is done in a vacuum without quantifiable evidence.

When it comes to the effects of social media on society and the potential need for lawmakers to intervene, we are maddeningly blind to how these platforms operate; what impact they have on people's online-offline habits; and what interventions, if any, are required to make social media more transparent and accountable to all of us.

The report is a call-to-arms for practical steps required to move beyond educated guesses in digital policymaking to evidence-based oversight.

It's a cross-post from Tech Policy Press.

Let's get started:


THE CASE FOR SUPPORTING SOCIAL MEDIA DATA ACCESS


IN THE HIERARCHY OF DIGITAL POLICYMAKING PRIORITIES, it’s artificial intelligence, not platform governance, that is now the cause célèbre.

From the United States’ public aim to dominate the era of AI to the rise of so-called AI slop created by apps such as OpenAI’s Sora, the emerging technology has seemingly become the sole priority across governments, tech companies, philanthropic organizations and civil society groups.

This fixation on AI is a mistake.

It’s a mistake because it relegates equally pressing areas of digital rulemaking — especially those related to social media’s impact on the wider world — down the pecking order at a time when these global platforms have a greater say on people’s online, and increasingly offline, habits than ever before.

Current regulatory efforts, primarily in Europe, to rein in potential abuses linked to platforms controlled by Meta, Alphabet and TikTok have so far been more bark than bite. Social media giants remain black boxes to outsiders seeking to shine a light on how the companies’ content algorithms determine what people see in their daily feeds.

On Oct 24, the European Commission announced a preliminary finding under the European Union's Digital Services Act that Meta and TikTok had failed to meet their obligations to make it easier for researchers to access public data on their platforms.

These companies’ ability to decide how their users consume content on everything from the Israel-Hamas conflict to elections from Germany to Argentina is now equally interwoven into Washington’s attempts to roll back international online safety legislation in the presumed defense of US citizens’ First Amendment rights.

Confronted with this cavalcade of ongoing social media-enabled problems, the collective digital policymaking shift to focus almost exclusively on artificial intelligence is the epitome of the distracted boyfriend meme.

While governments, industry and civil society compete to outdo themselves on AI policymaking, the current ills associated with social media are being left behind — a waning after-thought in the global AI hype that has transfixed the public, set off a gold rush between industrial rivals and consumed governments in search of economic growth.

But where to focus?

In a report published via Columbia World Projects at Columbia University and the Hertie School’s Centre for Digital Governance on Oct 23, my co-authors and I lay out practical first steps in what can often seem like a labyrinthine web of problems associated with social media.

Our starting point is simple: the world currently has limited understanding about what happens within these global platforms despite companies’ stated commitments, through their terms of service, to uphold basic standards around accountability and transparency.

It’s impossible to diagnose the problem without first identifying the symptoms. And in the world of platform governance, that requires improved access to publicly-available and private social media data — in the form of engagement statistics and details on how so-called content recommender systems function.

Thankfully, the EU and, soon, the United Kingdom have passed the world’s first regulated regimes that mandate social media giants provide such information to outsiders, as long as they meet certain requirements like being associated with an academic institution or a civil society organization.

Elsewhere, particularly in the US, researchers are often reliant on voluntary commitments from companies growing increasingly adversarial in their interactions with outsiders whose work may shine unwanted attention on problematic areas within these global platforms.

Our report outlines the current gaps in how social media data access works. It builds on a year of workshops during which more than 120 experts from regulatory agencies, academia, civil society groups and data infrastructure providers identified the existing data access limitations and outlined recommendations for public-private funding to address those failings.

All told, it represents a comprehensive review of current global researcher data access efforts, based on inputs from those actively engaged in the policy area worldwide.

At a time when the US government has pulled back significantly from funding digital policymaking and many philanthropies are shifting gears from social media to artificial intelligence, it can feel like a hard sell to urge both public and private funders to open up their wallets to support a digital policymaking area fraught with political uncertainty.

But our recommendations are framed as practical attempts to fill current shortfalls that, with just a little support, could have an exponential impact on improving the transparency and accountability pledges that all of the world’s largest social media companies say they remain committed to.

Some of the ideas will require more of a collective effort than others.

Participants in the workshops highlighted the need for widely-accessible data access infrastructure — akin to what was offered via Meta’s CrowdTangle data analytics tool before the tech giant shut it down in 2024 — as a starting point, even though such projects, collectively, will likely cost in the tens of millions of dollars each year.

But many of the opportunities are more short-term than long-term.

That was by design. The workshops underpinning the report made clear the independent research community needed technical and capacity-building support more than it needed moonshot projects which may fail to deliver on the dual focus on increased transparency and accountability for social media.

The recommendations include expanded funding support to ensure academics and civil society groups are trained in world-class data protection and security protocols — preferably standardized across the whole research community — so that data about people’s social media habits are kept safe and not misused like what happened in the Cambridge Analytica scandal in 2018.

It also includes programs to allow new researchers to gain access to social media data access regimes that often remain accessible to only a handful of organizations, as well as attempts to create international standards across different countries’ regulated regimes so that jurisdictions can align, as much as possible, in their approach to social media data access.

Such day-to-day digital policymaking does not have the bells and whistles associated with the current AI hype. It's borne out of the realities of independent researchers and regulators seeking to address near-and-present harms tied to social media, and not in the alarmism that artificial intelligence may, some day, represent an existential threat to humanity.

That, too, was by design. Often, digital policymaking, especially on AI, can become overly-complex — lost in technical jargon and misconceptions of what technology can, and can not, do.

By outlining where public and private funders can meet immediate needs on society-wide problems tied to social media, my co-authors and I are clear where digital policymaking priorities should lie: in the need to improve people’s understanding of how these global platforms increasingly shape the world around us.



digitalpolitics.co/newsletter0…


Recreating a Homebrew Game System from 1987


We often take for granted how easy it is to get information in today’s modern, Internet-connected world. Especially around electronics projects, datasheets are generally a few clicks away, as are instructions for building almost anything. Not so in the late 80s, when ordering physical catalogs of chips and their datasheets was generally required.

Mastering this landscape took a different skillset and far more determination than today, which is what makes the fact that a Japanese electronics hobbyist built a complete homebrew video game system from scratch in 1987 all the more impressive. [Alex] recently discovered this project and produced a replica of it with a few modern touches.

The original console, called the Z80 TV Game, was built on an 8-bit Z80 processor. The rest of the circuitry is fairly intuitive, as it uses various integrated circuits with straightforward wiring. It supports cartridges with up to 32 KB of ROM and outputs a 168×210 black-and-white image along with 1-bit audio.

There are around 26 games for this platform developed mostly by the original creator of the console and another developer named [Inufuto] who also developed a multi-platform compiler for the system. [Alex]’s version adds PCBs to the overall design making assembly much easier. He’s also added a cartridge port for the various games and included controller ports for Genesis or Master System controllers.

Even outside the context of the 80s the console is an impressive build, encouraging development of homebrew games that continues to this day. The original creator maintains a site about the console as well (Google Translate from Japanese). There are a number of development tools still available for this platform that allow modern gamers and enthusiasts to interact with it, and all of [Alex]’s schematics and other information are available on his website as well.

For a more modern take on a homebrew system, take a look at this one based on a PIC32 that can not only run homebrew games, but original Game Boy ROMs as well.


hackaday.com/2025/10/29/recrea…


Microsoft Acquires 27% of OpenAI for $135 Billion


After almost a year of negotiations with its long-time backer Microsoft, OpenAI has granted the latter a 27% stake. This move removes a significant source of uncertainty for both companies and paves the way for the ChatGPT developer's transformation into a for-profit enterprise.

In a statement released on Tuesday, both companies said that under the revised agreement Microsoft will acquire roughly $135 billion worth of OpenAI shares. In addition, Microsoft will have access to the artificial intelligence (AI) startup's technology through 2032, including models that have already reached the benchmark for artificial general intelligence (AGI).

OpenAI has spent much of this year pushing for the restructuring, turning itself into a more traditional for-profit company; Microsoft, which has invested around $13.75 billion, had been the biggest obstacle among OpenAI's investors.

“OpenAI has completed a recapitalization and simplified its corporate structure,” OpenAI chairman Bret Taylor said in a statement. “The non-profit organization still controls the for-profit entity, guaranteeing it direct access to critical resources before the arrival of AGI.”

As part of the restructuring, OpenAI's non-profit entity, the OpenAI Foundation, will also receive equity worth around $130 billion. The foundation initially plans to focus on funding projects such as “accelerating innovations in healthcare.”

OpenAI said that its co-founder and CEO, Sam Altman, will not receive any equity stake in the newly restructured company.

On Tuesday, Microsoft shares rose as much as 4.2%, reaching $553.72. Many Wall Street analysts believe the changing relationship with OpenAI had previously been a major source of uncertainty for the software maker.

Industry research analyst Anurag Rana said that Microsoft retaining intellectual property rights to OpenAI's products and models through 2032 is “the most important point” of the revised agreement. “Microsoft is developing its own models while also using OpenAI or Anthropic models in its Copilot product.”

A crucial point in the months-long negotiations between Microsoft and OpenAI was what would happen once OpenAI reached AGI (Artificial General Intelligence), that is, an AI that surpasses human capabilities in most tasks. Under the new agreement, this threshold must be verified by an “independent panel of experts” and, once it is reached, Microsoft will no longer receive any share of OpenAI's revenue.

Azure was long OpenAI's exclusive provider, but Microsoft later allowed it to buy services from other providers such as Oracle, while Microsoft retained a right of first refusal. OpenAI will invest a further $250 billion in Azure.

Both parties stated that Microsoft's rights to use OpenAI's technology will not include consumer hardware. OpenAI will also be able to “jointly develop certain products with third parties.”

The article "Microsoft Acquires 27% of OpenAI for $135 Billion" originally appeared on Red Hot Cyber.


If You Receive an Email Saying You're Dead… It's the New Phishing Campaign Against LastPass


The developers of the LastPass password manager have warned users about a large-scale phishing campaign that began in mid-October 2025. The attackers are sending emails containing fake emergency-access requests for the password vault, tied to the supposed death of the user.

According to experts, the financially motivated hacker group CryptoChameleon (also known as UNC5356) is behind this campaign. The group specializes in cryptocurrency theft and already attacked LastPass users in April 2024.

The new campaign has proved extensive and technologically advanced: the attackers are now hunting not only master passwords but passkeys as well.

CryptoChameleon uses a specialized phishing kit that targets cryptocurrency wallets on Binance, Coinbase, Kraken, and Gemini. In its attacks, the group makes heavy use of fake login pages for Okta, Gmail, iCloud, and Outlook.

In the new campaign, the scammers are abusing LastPass's emergency access feature. The password manager has an inheritance mechanism that allows trusted contacts to request access to the vault in the event of the account owner's death or incapacity.

When such a request is received, the account owner is notified and, if they do not cancel the request within a specified period of time, access to the account is granted automatically.

In their emails, the phishers claim that a family member has requested access to the victim's vault by uploading a death certificate. To make the message more convincing, a fake request ID is included. The recipient is urged to cancel the request immediately, if they are still alive, by clicking the link provided.

Naturally, such links lead to the fraudulent site lastpassrecovery[.]com, where the victim is asked to enter their master password. Researchers noted that in some cases the attackers even called victims, posing as LastPass employees, and persuaded them to enter their credentials on a phishing site.

A distinctive feature of this campaign is its emphasis on stealing passkeys. To this end, the attackers use specialized domains such as mypasskey[.]info and passkeysetup[.]com.

Passkeys are a modern passwordless authentication standard based on the FIDO2/WebAuthn protocols. Instead of traditional passwords, the technology uses asymmetric cryptography. Modern password managers (including LastPass, 1Password, Dashlane, and Bitwarden) can store and synchronize passkeys across all devices. And, as experience shows, attackers have adapted quickly to these changes.
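Conceptually, a passkey login is just a challenge-response signature. The sketch below uses the third-party cryptography package to show the idea: the private key stays on the device, the site stores only the public key, and what travels over the wire is a one-time signature that is useless to replay. This is a conceptual sketch, not the actual WebAuthn message format:

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Registration: the authenticator creates a key pair; only the public key
# ever reaches the website (the "relying party").
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()   # stored server-side

# Login: the server sends a fresh random challenge...
challenge = os.urandom(32)

# ...the authenticator signs it with the private key, which never leaves the device...
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the server verifies the signature against the stored public key.
try:
    public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("login OK: nothing reusable was transmitted")
except InvalidSignature:
    print("login rejected")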

LastPass users are advised to stay vigilant and pay close attention to any email concerning emergency access or inheritance requests. The developers remind users to always check URLs before entering their credentials, and also stress that LastPass representatives will never call users asking them to enter their password on any website.

The article "If You Receive an Email Saying You're Dead… It's the New Phishing Campaign Against LastPass" originally appeared on Red Hot Cyber.


All Hail The OC71


Such is the breadth of functions delivered by integrated circuits that it’s now rare to see a simple small-signal transistor project on these pages. But if you delve back into the roots of solid-state electronics you’ll find a host of clever ways to get the most from the most basic of active parts.

Everyone was familiar with their part numbers and characteristics, and if you were an electronics enthusiast in Europe it’s likely there was one part above all others that made its way onto your bench. [ElectronicsNotes] takes a look at the OC71, probably the most common PNP germanium transistor on the side of the Atlantic this is being written on.

When this device was launched in 1953 the transistor itself had only been invented a few years earlier, so while its relatively modest specs look pedestrian by today’s standards, they represented a leap ahead in performance at the time. He touches on the thermal runaway that could affect germanium devices, and talks about the use of black silicone filling to reduce light sensitivity.

The OC71 was old hat by the 1970s, but electronics books of the era hadn’t caught up. Thus many engineers born long after the device’s heyday retain a soft spot for it. We recently even featured a teardown of a dead one.

youtube.com/embed/BpxeOxuXjr0?…


hackaday.com/2025/10/29/all-ha…