Get Statistical About Your Pet With This Cat Tracking Dashboard
Cats can be wonderful companions, but they can also be aloof and boring to hang out with. If you want to get a little more out of the relationship, consider obsessively tracking your cat’s basic statistics with this display from [Matthew Sylvester].
The build is based around the Seeedstudio ReTerminal E1001/E1002 devices—basically an e-paper display with a programmable ESP32-S3 built right in. It’s upon this display that you will see all kinds of feline statistics being logged and graphed. The data itself comes from smart litterboxes, with [Matthew] figuring out how to grab data on weight and litterbox usage via APIs. In particular, he’s got the system working with PetKit gear as well as the Whisker Litter Robot 4. His dashboard can separately track data for four cats and merely needs the right account details to start pulling in data from the relevant cat cloud service.
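Neither PetKit nor Whisker documents a public API, so [Matthew] had to work out the endpoints himself; the sketch below skips authentication and fetching entirely and only illustrates the aggregation step, turning litterbox visit events into per-cat daily weight averages. The record format and sample values here are made up for illustration, not taken from either vendor's service.

```python
from collections import defaultdict
from datetime import datetime, timezone

# Hypothetical visit records of the kind a litterbox cloud API might return
# after authentication; real PetKit / Litter Robot payloads will differ.
SAMPLE_EVENTS = [
    {"cat": "Gnocci", "timestamp": 1700000000, "weight_g": 4200},
    {"cat": "Gnocci", "timestamp": 1700003600, "weight_g": 4260},
    {"cat": "Mochi",  "timestamp": 1700000500, "weight_g": 3900},
]

def daily_weight_averages(events):
    """Group visits by (cat, UTC calendar day) and average the weights."""
    buckets = defaultdict(list)
    for ev in events:
        day = datetime.fromtimestamp(ev["timestamp"], tz=timezone.utc).date()
        buckets[(ev["cat"], day)].append(ev["weight_g"])
    return {key: sum(ws) / len(ws) for key, ws in buckets.items()}

averages = daily_weight_averages(SAMPLE_EVENTS)
```

Plotted over weeks, exactly this kind of rolling average is what makes a gradual weight change, and hence a brewing health issue, visible at a glance.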
For [Matthew], the build wasn’t just a bit of fun—it also proved very useful. When one of his cats had a medical issue recently, he was quickly able to pick up that something was wrong and seek the help required. That’s a pretty great result for any homebrew project. On an unrelated note, Gnocci is a great name for a cat, so hats off for that one.
We’ve featured some other fun cat-tracking projects over the years, too. If you’re whipping up your own neat hardware to commune with, entertain, or otherwise interact with your cat, don’t hesitate to let us know on the tips line.
User Serviceable Parts
Al and I were talking on the podcast about the Home Assistant home automation hub software. In particular, about how devilishly well designed it is for extensibility. It’s designed to be added on to, and that makes all of the difference.
That doesn’t mean that it’s trivial to add your own wacky control or sensor elements to the system, but that it’s relatively straightforward, and that it accommodates you. If your use case isn’t already covered, there is probably good documentation available to help guide you in the right direction, and that’s all a hacker really needs. As evidence for why you might care, take the RTL-HAOS project that we covered this week, which adds nearly arbitrary software-defined radio functionality to your setup.
And contrast this with many commercial systems that are hard to hack on because they are instead focused on making sure that the least-common-denominator user is able to get stuff working without reading a single page of documentation. They are so focused on making everything that’s in-scope easy that they spend no thought on expansion, or worse, actively prevent it.
Of course, it’s not trivial to make a system that’s both extremely flexible and relatively easy to use. We all know examples where the configuration of even the most basic cases is a nightmare simply because the designer wanted to accommodate everything. Somehow, Home Assistant has managed to walk the fine line in the middle: it’s easy enough to use that you don’t have to be a wizard, but flexible enough that you can make it do what you want if you are. Hence it got spontaneous hat-tips from both Al and myself. Food for thought if you’re working on a complex system that’s aimed at the DIY / hacker crowd.
This article is part of the Hackaday.com newsletter, delivered every seven days for each of the last 200+ weeks. It also includes our favorite articles from the last seven days that you can see on the web version of the newsletter. Want this type of article to hit your inbox every Friday morning? You should sign up!
Biohack Your Way To Lactose Tolerance (Through Suffering)
A significant fraction of people can’t handle lactose, [HGModernism] among them. Rather than accept a cruel, ice-cream-free existence, she decided to do something you really shouldn’t try: biohacking her way to lactose tolerance.
The hack is very simple, and based on a peer-reviewed study: consume lactose constantly, and suffer constantly, until… well, you can tolerate lactose. If you’re lactose intolerant, you’re probably horrified at the implications of the words “suffer constantly” in a way that those milk-digesting weirdos could never understand. They probably think it is hyperbole; it is not. On the plus side, [HGModernism]’s symptoms began to decline after only one week.
The study dates back to the 1980s, and discusses a curious phenomenon where American powdered milk was cluelessly distributed during an African famine. Initially that did more harm than good, but after a few weeks mainlining the white stuff, the lactose-intolerant Africans stopped bellyaching about their bellyaches.
Humans all start out with a working lactase gene for the sake of breastfeeding, but in most it turns off naturally in childhood. It’s speculated that rather than some epigenetic change turning the gene for lactose tolerance back on — which probably is not possible outside actual genetic engineering — the gut biome of the affected individuals shifted to digest lactose painlessly on behalf of their human hosts. [HGModernism] found this worked, but it took two weeks of chugging a slurry of powdered milk and electrolytes, formulated to avoid dehydration from the obvious source of fluid loss. After the two weeks, lactose tolerance was achieved.
Should you try this? Almost certainly not. [HGModernism] doesn’t recommend it, and neither do we. Still, we respect the heck out of any human willing to hack their way around the limitations of their own genetics. Speaking of, at least one hacker did try genetically engineering themselves to skip the suffering involved in this process. Gene hacking isn’t just for ice-cream sundaes; when applied by real medical professionals, it can save lives.
youtube.com/embed/h90rEkbx95w?…
Thanks to [Kieth Olson] for the tip!
Agenzia delle Entrate: Admin Access for Sale at $500? Here's Why the Numbers Don't Add Up
On the well-known Dark Forum, a user identified as “espansive” has put up for sale what they describe as access to the admin panel of the Agenzia delle Entrate, Italy's revenue agency.
However, a closer analysis of the offer and of the Italian agency's security infrastructure suggests a threat far smaller in scope than the sensational headline implies.
Agenzia delle Entrate access and the underground forum listing
The listing, which appeared on December 11, 2025, is disarmingly brief. With a laconic ‘sell access admin panel of agenzia dell’entrate‘, the attacker claims to hold the keys to the Italian tax portal. The operational details emerge from the side chat, where a decidedly anomalous asking price for a target of this caliber appears: just 500 dollars, with direct contact to be arranged via a Telegram account.
The first inconsistency that stands out is the price itself. On the cybercrime market, persistent administrator-level access (RCE or admin panel) to critical government infrastructure in a G7 country would never be sold off for such a trivial sum.
Access of this caliber is usually traded privately for four- or five-figure sums, or exploited directly for ransomware attacks or data exfiltration, both of which are highly attractive to underground markets.
The active defenses: SPID and MFA
The technical point that all but demolishes the credibility of a “working” access concerns the authentication procedures adopted by Italy's public administration.
Access to the Agenzia delle Entrate portals, for citizens and professionals alike, is protected by security layers that go beyond a simple username/password pair. Today, logging in requires one of:
- SPID (Sistema Pubblico di Identità Digitale), level 2 or 3;
- CIE (Carta d’Identità Elettronica), the electronic identity card;
- CNS (Carta Nazionale dei Servizi), the national services card.
Furthermore, mandatory MFA (multi-factor authentication) is standard practice for internal administrative accounts. This means that even with the correct credentials, the attacker would be blocked by the request for an OTP (one-time password) code sent to the legitimate owner's device.
The infostealer hypothesis: unverified credentials
If access is so difficult, what is “espansive” actually selling? The most likely hypothesis is raw logs from an infostealer.
It is plausible that malware infected the computer of an employee or an accountant, exfiltrating all the data saved in the browser: cookies, history and, indeed, login credentials saved for convenience. The attacker sees the string agenziaentrate.gov.it associated with a username and password in the log, and tries to resell it sight unseen.
It is almost certainly an untested credential. If the threat actor attempted a login, they would run into the second authentication factor. So instead they try to monetize quickly, selling the illusion of access to an inexperienced buyer for 500 dollars; this is presumably a scam aimed at other cybercriminals.
Conclusions
The listing by “espansive” looks like yet another scam attempt, or the sale of junk data within the cybercriminal community, rather than a real compromise of the Agenzia delle Entrate's infrastructure.
Still, this episode is a vital reminder of the importance of enabling two-factor authentication (2FA/MFA) on all critical accounts. It is precisely thanks to these barriers that the thousands of credentials stolen every day by infostealers become, in most cases, worthless.
The article “Agenzia delle Entrate: Admin Access for Sale at $500? Here's Why the Numbers Don't Add Up” originally appeared on Red Hot Cyber.
PJON, Open Single-Wire Bus Protocol, Goes Verilog
Did OneWire of DS18B20 sensor fame ever fascinate you in its single-data-line simplicity? If so, then you’ll like PJON (Padded Jittering Operative Network) – a single-wire-compatible protocol for up to 255 devices. One disadvantage is that you need to check up on the bus pretty often, trading hardware complexity for software complexity. Now, this is no longer something for the gate-wielders among us to worry about – [Giovanni] tells us that there’s a hardware implementation of PJDL (Padded Jittering Data Link), a PJON-based bus.
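Like most serial buses, PJON frames carry addressing, length, and an error check. As a rough flavor of the integrity-check side, here is a generic bit-wise MSB-first CRC8 in Python; the 0x97 polynomial and the frame bytes are illustrative placeholders, so consult the PJON specification for the exact framing and CRC parameters:

```python
def crc8(data: bytes, poly: int = 0x97) -> int:
    """Bit-wise MSB-first CRC8, init 0, no final XOR (generic, illustrative)."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

# A sender appends the CRC; a receiver re-runs the CRC over payload + CRC
# byte and expects zero if the frame arrived intact.
frame = bytes([0x01, 0x04, 0x41, 0x42])       # hypothetical id/length/payload
frame_with_crc = frame + bytes([crc8(frame)])
```

The appeal of the Verilog module is precisely that checks like this, along with the fussy bit timing of the shared wire, happen in gates rather than in an interrupt handler you have to service constantly.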
This implementation is written in Verilog, and allows you to offload a lot of your low-level PJDL tasks, essentially giving you a PJDL peripheral for all your inter-processor communication needs. Oh, and as [Giovanni] says, this module has recently been taped out as part of the CROC chip project, an educational SoC project. What’s not to love?
PJON is a fun protocol, soon to be a decade old. We’ve previously covered [Giovanni] using PJON to establish a data link through a pair of LEDs, and it’s nice to see this nifty small-footprint protocol gain that much more of a foothold, now, in our hardware-level projects.
We thank [Giovanni Blu Mitolo] for sharing this with us!
The Near Space Adventures of Bradfield the Bear
Admit it or not, you probably have a teddy bear somewhere in your past that you were — or maybe are — fond of. Not to disparage your bear, but we think Bradfield might have had a bigger adventure than yours has. Bradfield was launched in November on a high-altitude balloon by Year 7 and 8 students at Walhampton School in the UK in connection with Southampton University. Dressed in a school uniform, he was supposed to ride to near space, but ran into some turbulence. The BBC reported that poor Bradfield couldn’t hold on any longer and fell from around 17 miles up. The poor bear looked fairly calm for being so high up.
A camera recorded the unfortunate stuffed animal’s plight. Apparently, a companion plushie, Bill the Badger (the Badger being the Southampton mascot), successfully completed the journey, returning to Earth with a parachute.
There have been some news reports that Bradfield may have been recovered, but we haven’t seen anything definitive yet. Of course, there are plenty of things you can launch on a balloon, but what a great idea to let kids send a mascot aloft with your serious science payloads and radio gear. Because we know you are launching a balloon for a serious purpose, right? Sure, we won’t tell anyone you just want the cool pictures.
No offense to Bradfield, but sending humans aloft on balloons requires a little more care. We aren’t as well-equipped to drop 17 miles. The Hackaday Supercon has even been the site of an uncrewed (and unbeared) balloon launch. Meanwhile, if you are around Reading and spot Bradfield, be sure to give him a cookie and call the Walhampton school.
youtube.com/embed/l7JiO5356KY?…
Super-Sizing Insects and the Benefits of Bones
One swol mealworm amidst its weaker brethren. (Credit: The Thought Emporium, YouTube)
Have you ever found yourself looking at the insects of the Paleozoic era, including the dragonfly Meganeuropsis permiana with its 71 cm wingspan and wondered what it would be like to have one as a pet? If so, you’re in luck because the mad lads over at [The Thought Emporium] have done a lot of the legwork already to grow your own raven-sized moths and more. As it turns out, all it takes is hijacking the chemical signals that control the development phases, to grow positively humongous mealworms and friends.
The growth process of the juveniles, such as mealworms – the larval form of the yellow mealworm beetle – goes through a number of molting stages (instars), with the insect juvenile hormone levels staying high until it is time for the final molt and transformation into a pupa from which the adult form emerges. The pyriproxyfen insecticide is a juvenile hormone analog that prevents this event. Although at high doses larvae perish, the video demonstrates that lower doses work to merely inhibit the final molt.
Hormone levels in an insect across its larval and pupa stages.
That proof-of-concept is nice if you really want to grow larger grubs, but it doesn’t ultimately affect the final form, as they simply go through the same number of instars. Changing this requires another hormone/insecticide, called ecdysone, which regulates the number of instars before the final molt and pupal stage.
Amusingly, this hormone is expressed by plants to mess with larvae as they feed on plant tissues, with spinach expressing a very significant amount of this phyto-ecdysone. Incidentally, in humans it interacts with the estrogen receptor beta, which helps with building muscle. Ergo, bodybuilding supplies provide a ready-to-use source of this hormone as ‘beta ecdysterone’ to make swol insects with.
Unfortunately, this hormone turned out to be very tricky to apply, as adding it to their feed like with pyriproxyfen merely resulted in the test subjects losing weight or outright dying. For the next step it would seem that a more controlled exposure method is needed, which may or may not involve some DNA editing. Clearly creating Mothra is a lot harder than just blasting a hapless insect with some random ionizing radiation or toxic chemicals.
Gauromydas heros, the largest true fly alive today. (Credit: Biologoandre)
A common myth about insect size is that the only reason they got so big during the Paleozoic was the high oxygen content of the atmosphere. This is in fact completely untrue. There is nothing in insect physiology that prevents them from growing much larger, as they even have primitive lungs, as well as a respiratory and circulatory system to support this additional growth. Consequently, even today we have some pretty large insects for this reason, including some humongous flies, like the 7 cm long, 10 cm wingspan Gauromydas heros.
The real reason appears to be the curse of exoskeletons, which require constant stressful molting and periods of complete vulnerability. In comparison, us endoskeleton-equipped animals have bones that grow along with the muscles and other tissues around them, which ultimately seems to be just the better strategy if you want to grow big. Evolutionarily speaking, this makes it more attractive for insects and other critters with exoskeletons to stay small and fly under the proverbial radar.
The upshot of all this, of course, is that we can totally have dog-sized moths as pets, which surely is the goal of the upcoming video.
youtube.com/embed/D0PjLvlBsWw?…
Liberating AirPods with Bluetooth Spoofing
Apple’s AirPods can pair with their competitors’ devices and work as basic Bluetooth earbuds, but to no one’s surprise most of their really interesting features are reserved for Apple devices. What is surprising, though, is that simple Bluetooth device ID spoofing unlocks these features, a fact which [Kavish Devar] took advantage of to write LibrePods, an AirPods controller app for Android and Linux.
In particular, LibrePods lets you control noise reduction modes, use ear detection to pause and unpause audio, detect head gestures, reduce volume when the AirPods detect you’re speaking, work as configurable hearing aids, connect to two devices simultaneously, and configure a few other settings. To use the hearing aid feature, you’ll need an existing audiogram – the app can’t create one itself, as that requires too much precision. Of particular interest to hackers, the app has a debug mode to send raw Bluetooth packets to the AirPods. Unfortunately, a bug in the Android Bluetooth stack means that LibrePods requires root on most devices.
This isn’t the first time we’ve seen a hack enable hearing aid functionality without official Apple approval. However, while we have seen some people alter the hardware, AirPods can’t really be called hacker- or repair-friendly.
Thanks to [spiralbrain] for the tip!
Hidden Camera Build Proves You Can’t Trust Walnuts
Typically, if you happened across a walnut lying about, you might consider eating it or throwing it to a friendly squirrel. However, as [Penguin DIY] demonstrates, it’s perfectly possible to turn the humble nut into a clandestine surveillance device. It turns out the walnut worriers were right all along.
The build starts by splitting and hollowing out the walnut. From there, small holes are machined into the mating faces of the walnut, into which [Penguin DIY] glues small neodymium magnets. These allow the walnut to be opened and snapped shut as desired, while remaining indistinguishable from a regular walnut at a distance.
The walnut shell is loaded with nine tiny lithium-polymer cells, for a total of 270 mAh of battery capacity at 3.7 volts. Charging the cells is achieved via a deadbugged TP4056 charge module to save space, with power supplied via a USB-C port. Holes are machined in the walnut shell for the USB-C port as well as the camera lens, though one imagines the former could have been hidden purely inside for a stealthier look. The camera itself appears to be an all-in-one module with a transmitter built in, with the antenna installed in the top half of the walnut shell and connected via pogo pins. The video signal can be picked up at a distance via a receiver hooked up to a smartphone. No word on longevity, but the included batteries would probably provide an hour or two of transmission over short ranges if you’re lucky.
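That runtime guess is easy to sanity-check with napkin math. Assuming the camera-plus-transmitter module draws somewhere around 150 mA on average (a guess; actual draw depends on the module and transmit power), 270 mAh gives a bit under two hours:

```python
capacity_mah = 270        # nine small cells combined, per the build
draw_ma = 150             # assumed average draw for camera + transmitter
runtime_hours = capacity_mah / draw_ma
print(f"{runtime_hours:.1f} hours")   # prints "1.8 hours"
```

A more frugal module would stretch that accordingly, which is where the "hour or two" spread comes from.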
If you have a walnut tree in your backyard, please do not email us about your conspiracy theories that they are watching you. We get those more than you might think, and they are always upsetting to read. If, however, you’re interested in surveillance devices, we’ve featured projects built for detecting them before with varying levels of success. Video after the break.
youtube.com/embed/j0rs7ny2t8A?…
Kali Linux 2025.4 Released! Improvements and New Features in the Security-Focused Distribution
The recent 2025.4 release of Kali Linux has been made available to the public, introducing significant improvements to the GNOME, KDE, and Xfce desktop environments. From now on, Wayland is the default windowing system, an important change from previous releases.
The latest update builds on September's 2025.3 release and brings refined desktop experiences, improved VM guest support under Wayland, and a range of new offensive security tools.
Desktop environment updates
Notably, GNOME has completely dropped X11 support, pushing Kali to adopt Wayland as its sole display server. Kali's developers describe the transition as smooth, thanks to extensive testing and additional configuration for virtual machine environments.
All three supported desktop environments (GNOME, KDE Plasma, and Xfce) have received considerable attention. GNOME 49 introduces a more coherent and responsive interface. Among the most significant changes is a new categorization of tools in the app grid, aligned with Kali's traditional menu structure. The Totem video player has been replaced with the lighter Showtime app, and the long-requested keyboard shortcuts for launching terminals (Ctrl+Alt+T / Win+T) are now active on all GNOME desktops.
In Xfce, which remains Kali's default desktop, a new color management system has been added, finally reaching parity with the visual customization available in GNOME and KDE. Users can now change icon themes, GTK/Qt colors, and window decorations through Xfce's Appearance tool and the qt5ct / qt6ct utilities.
New tools
Three new tools have been added to the Kali repositories:
- bpf-linker – A simple static linker for BPF (Berkeley Packet Filter) programs.
- evil-winrm-py – A Python rewrite of Evil-WinRM, which enables command execution on remote Windows machines via WinRM.
- hexstrike-ai – An MCP server that makes it easier for AI agents to run tools autonomously.
Beyond the toolchain updates, Kali Linux has moved to kernel version 6.16, ensuring compatibility with new hardware and features.
Wayland and fully working VM guest support
Kali 2025.4 marks a milestone in the transition to Wayland, which has replaced X11 across the distribution. While GNOME now officially enforces Wayland-only sessions, Kali has shipped Wayland by default for KDE since release 2023.1.
One of the main obstacles to wider Wayland adoption has been incomplete support for VM guest utilities, particularly clipboard sharing and dynamic window resizing. Kali's developers report that these issues have been resolved in this release. Virtualized Kali installations on VirtualBox, VMware, and QEMU now offer full guest-additions support on Wayland, matching the X11 experience.
Other updates
The NetHunter platform for Android has received several updates, including early Android 16 support for the Samsung Galaxy S10 variants, the OnePlus Nord, and Xiaomi Mi 9 devices. The NetHunter terminal is working again, with support for Magisk interactive mode, which improves session stability.
The Wifipumpkin3 phishing tool has been updated with new templates for common platforms such as Instagram, iCloud, and Snapchat, and a new in-app terminal (alpha version) is being tested.
Anyone interested in trying Kali Linux 2025.4 can download it here. Due to Cloudflare CDN limits on file sizes (~5 GB) and the steadily growing size and complexity of the packages, the Live ISO image is now available only via BitTorrent.
The article “Kali Linux 2025.4 Released! Improvements and New Features in the Security-Focused Distribution” originally appeared on Red Hot Cyber.
Rats Get Even Better at Playing DOOM
We all know that you can play DOOM on nearly anything, but what about the lesser known work being done to let other species get in on the action? For ages now, our rodent friends haven’t been able to play the 1993 masterpiece, but [Viktor Tóth] and colleagues have been working hard to fix this unfortunate oversight.
If you’ve got the feeling this isn’t the first time you’ve read about rats attempting to slay demons, it’s probably because [Viktor] has been working on this mission for years now — with a previous attempt succeeding in allowing rats to navigate the DOOM landscape. Getting the rodents to actually play through the game properly has proved slightly more difficult, however.
Improving on the previous attempt, V2 has the capability to allow rats to traverse through levels, be immersed in the virtual world with a panoramic screen, and take out enemies. Rewards are given for successful behaviors in the form of sugar water through a solenoid-powered dispenser.
While this current system looks promising, the rats haven’t gotten too far through the game due to time constraints. But they’ve managed to travel through the levels and shoot, which is still pretty impressive for rodents.
DOOM has been an indicator of just how far we can take technology for decades. While this particular project has taken the meme into a slightly different direction, there are always surprises. You can even play DOOM in KiCad when you’re tired of using it to design PCBs.
Review: Cherry G84-4100 Keyboard
The choice of a good keyboard is something which consumes a lot of time for many Hackaday readers, judging by the number of custom input device projects which make it to these pages. I live by my keyboard as a writer, but I have to admit that I’ve never joined in on the special keyboard front; for me it’s been a peripheral rather than an obsession. But I’m hard on keyboards, I type enough that I wear them out. For the last five years my Hackaday articles have come via a USB Thinkpad keyboard complete with the little red stick pointing device, but its keys have started parting company with their switches so it’s time for a replacement.
I Don’t Want The Blackpool Illuminations
Is it a gamer’s keyboard, or the Blackpool seafront at night? I can’t tell any more. Mark S Jobling, Public domain.
For a non-keyboard-savant peering over the edge, this can be a confusing choice. There’s much obsessing about different types of mechanical switch, and for some reason I can’t quite fathom, an unreasonable number of LEDs.
I don’t want my keyboard to look like the Blackpool Illuminations (translation for Americans: Las Vegas strip), I just want to type on the damn thing. More to the point, many of these “special” keyboards carry prices out of proportion to their utility, and it’s hard to escape the feeling that like the thousand quid stereo the spotty kid puts in his Opel Corsa, you’re being asked to pay just for bragging rights.
Narrowing down my needs then, I don’t need any gimmicks, I just need a small footprint keyboard that’s mechanically robust enough to survive years of my bashing out Hackaday articles on it. I’m prepared to pay good money for that.
The ‘board I settled upon is probably one of the most unglamorous decent quality keyboards on the market. The Cherry G84-4100 is sold to people in industry who need a keyboard that fits in a small space, and I’ve used one to the deafening roar of a cooling system in a data centre rack. It’s promising territory for a Hackaday scribe. I ordered mine from the Cherry website, and it cost me just under £70 (about $93), with the postage being extra. It’s available with a range of different keymaps, and I ordered the UK one. In due course the package arrived, a slim cardboard box devoid of consumer branding, inside of which was the keyboard, a USB-to-PS/2 adaptor, and a folded paper manual. I’m using it on a USB machine so the adaptor went in my hoard, but I’m pleased to be able to use this with older machines when necessary.
Hello My Old Data Centre Friend
It’s not shift-3 for the £ sign that’s important, but shift-2 for the quote. You have no idea how annoying not having that is on an international layout.
For my money, I got a keyboard described as “compact”, or 75%. It’s 282 by 132 by 26 mm in size, which means it takes up a little less space than the Thinkpad one it replaces, something of a win to my mind. It doesn’t have a numeric keypad, but I don’t need that. The switches are Cherry mechanical ones rather than the knock-offs you’ll find on so many competitors, and they have something of the mechanical sound but not the racket of an IBM buckled spring key switch. Cherry claim they’re good for 20 million activations, so even I shouldn’t wear them out.
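That 20-million figure is easy to put in perspective with some back-of-the-envelope numbers; the keystroke counts below are assumptions about a heavy writing workload, not anything Cherry publishes:

```python
rated_activations = 20_000_000   # Cherry's claimed switch lifetime
keystrokes_per_day = 60_000      # assumed: a very heavy day of typing
busiest_key_share = 0.10         # assumed share hitting the most-used key

days = rated_activations / (keystrokes_per_day * busiest_key_share)
years = days / 365
print(f"about {years:.0f} years on the busiest switch")
```

Even under pessimistic assumptions, the busiest switch should outlast most of the machines this keyboard gets plugged into.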
The keymap is of course the standard UK one I’m used to, but what makes or breaks a ‘board like this one is how they arrange the other keys. I really like that the control key is in the bottom left-hand corner rather than, as on so many others, the function key, but I am taking a little while to get used to the insert and delete keys being to the left of the arrow keys in the bottom right-hand corner. Otherwise my muscle memory isn’t being taxed too much by it.
There are a couple of little feet at the back underneath that can be flipped up to raise the ‘board at an angle. Since after years of typing the heel of my hand becomes inflamed if I rest it on the surface I elevate my wrist by about an inch with a rest, thus I use the keyboard tilt. I’ve been typing with the Cherry for a few weeks now, and it remains comfortable.
The Cherry G84-4100, then. It’s not a “special” keyboard in any way; in fact it’s about as utilitarian as it gets in a peripheral. But for me a keyboard is a tool, and just like my Vernier caliper or my screwdrivers I demand that it does its job repeatably and flawlessly for many years to come. So its unglamorous nature is its strength, because I’ve paid for the engineering which underlies it rather than the bells and whistles that adorn some others. Without realising it, you’ll be seeing a lot of this peripheral in my work over the coming years.
Hackaday Podcast Episode 349: Clocks, AI, and a New 3D Printer Guy
Hackaday Editors Elliot Williams and Al Williams met up to cover the best of Hackaday this week, and they want you to listen in. There was a hodgepodge of hacks this week, ranging from home automation with RF to volumetric displays in glass, with some crazy clocks thrown in too.
Ever see a typewriter that uses an ink pen? Elliot and Al hadn’t either. Want time on a supercomputer? It isn’t free, but it is pretty cheap these days. Finally, the guys discussed how to focus on a project like Dan Maloney, who finally got a 3D printer, and talked about Maya Posch’s take on LLM intelligence.
Check out the links below if you want to follow along, and as always, tell us what you think about this episode in the comments!
html5-player.libsyn.com/embed/…
Download the human-generated podcast in mostly mono, but sometimes stereo, MP3.
Where to Follow Hackaday Podcast
Places to follow Hackaday podcasts:
Episode 349 Show Notes:
News:
What’s that Sound?
- What is that sound? Al didn’t guess, but if you can, you could win a coveted Hackaday Podcast T-Shirt.
Interesting Hacks of the Week:
- Bridging RTL-433 To Home Assistant
- The Key To Plotting
- Reverse Sundial Still Tells Time
- Trace Line Clock Does It With Magnets
- Volumetric Display With Lasers And Bubbly Glass
- The Engineering That Makes A Road Cat’s Eye Self-Cleaning
Quick Hacks:
- Elliot’s Picks:
- Production KiCad Template Covers All Your Bases
- RP2350 Done Framework Style
- Neat Techniques To Make Interactive Light Sculptures
- Al’s Picks:
- Super Simple Deadbuggable Bluetooth Chip
- LED Hourglass Is A Great Learning Project
- Your Supercomputer Arrives In The Cloud
Can’t Miss Articles:
IRS Database With 18 Million 401(k) Records for Sale on the Dark Web
An alleged database containing sensitive information on 18 million US citizens over the age of 65 has appeared for sale on a well-known dark web forum.
The seller, who uses the pseudonym “Frenshyny”, claims to have exfiltrated the data directly from the government portal irs.gov, which handles, among other things, tax documentation and information on 401(k) retirement plans.
Disclaimer: This report includes screenshots and/or text taken from publicly accessible sources. The information provided is intended solely for threat intelligence and cybersecurity risk awareness purposes. Red Hot Cyber condemns any unauthorized access, improper dissemination, or illicit use of such data. At present, the authenticity of the reported information cannot be independently verified, as the organization involved has not yet released an official statement on its website. Consequently, this article should be considered for informational and intelligence purposes only.
What the database allegedly contains
The listing enumerates an impressive amount of personal data, described as:
- First and last name
- Age
- State and city
- Address
- ZIP code
- Phone number
According to the seller, the information relates to beneficiaries of 401(k) benefit funds, the well-known US retirement savings plan in place since the 1980s.
The sheer quantity (18 million records) suggests an extremely broad compromise, one that, if confirmed, would be the largest data breach ever recorded against the US private pension system.
The forum context
The listing was posted in a section dedicated to the sale of stolen databases. The seller presents themselves as a “V.I.P.” member of the forum, underlining a certain standing within the community, and offers contact via Telegram for “proof and pricing”.
The post also contains a lengthy description of how 401(k) plans work, probably included to make the claimed provenance of the data more credible.
Why this leak would be extremely dangerous
If authentic, a database of this scale would expose millions of elderly citizens to:
- Large-scale financial fraud, including retirement-related scams.
- Identity theft, enabled by the complete package of personal information.
- Targeted social engineering attacks, particularly effective against more vulnerable people.
- Fraudulent access to accounts or services linked to retirement plans.
People over 65 are among cybercriminals’ favorite targets, and holding verified data on them makes such records extremely attractive for fraud campaigns.
Another sign of the boom in financial cybercrime
The appearance of this alleged database confirms a now well-established trend: the financial and pension sector has become one of the primary targets for threat actors.
Data like that contained in pension records commands a particularly high value on underground markets.
If verified, this sale would represent yet another blow to the security of US government systems and an enormous risk for millions of retirees.
The article “Banca dati IRS con 18 milioni di record 401(k) in vendita nel dark web” originally appeared on Red Hot Cyber.
Weird Email Appliance Becomes AI Terminal
The Landel Mailbug was a weird little thing. It combined a keyboard and a simple text display, and was intended to be a low-distraction method for checking your email. [CiferTech] decided to repurpose it, though, turning it into an AI console instead.
The first job was to crack the device open and figure out how to interface with the keyboard. The design was conventional, so reading the rows and columns of the key matrix was a cinch. [CiferTech] used PCF8574 IO expanders to make it easy to read the matrix with an ESP32 microcontroller over I2C. The ESP32 is paired with a small audio output module to allow it to run a text-to-speech system, and a character display to replace the original from the Mailbug itself. It uses its WiFi connection to query the ChatGPT API. Thus, when the user enters a query, the ESP32 runs it by ChatGPT, and then displays the output on the screen while also speaking it aloud.
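The row/column scanning logic behind a key matrix like the Mailbug’s can be sketched in plain Python, with the hardware access abstracted behind callables (on the real build, those would talk to the PCF8574 expanders over I2C; the 2×3 layout and pin behavior below are purely illustrative, not taken from [CiferTech]’s firmware):

```python
# Sketch of key-matrix scanning: drive one row low at a time, then read the
# columns to see which keys on that row are pressed. Hardware I/O is passed
# in as plain callables so the logic runs anywhere.
LAYOUT = [
    ["q", "w", "e"],
    ["a", "s", "d"],
]

def scan_matrix(drive_row, read_columns, layout=LAYOUT):
    """Return the list of currently pressed keys.

    drive_row(i)   -- pull row i low, all other rows high
    read_columns() -- return a list of booleans, True = column pulled low
    """
    pressed = []
    for row_idx, row_keys in enumerate(layout):
        drive_row(row_idx)
        for col_idx, active in enumerate(read_columns()):
            if active:
                pressed.append(row_keys[col_idx])
    return pressed

# Simulate a keyboard with "s" held down (row 1, column 1).
state = {"row": None}

def fake_drive_row(i):
    state["row"] = i

def fake_read_columns():
    return [False, state["row"] == 1, False]

print(scan_matrix(fake_drive_row, fake_read_columns))  # ['s']
```

The same loop works regardless of whether the rows and columns sit on native GPIO or behind an I2C expander, which is what makes the expander approach so convenient for retrofits like this one.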
[CiferTech] notes the build was inspired by AI terminals in retro movies, though we’re not sure what specifically it might be referencing. In any case, it does look retro and it does let you speak to a computer being, of a sort, so the job has been done. Overall, though, the build shows that you can build something clean and functional just by reusing and interfacing a well-built commercial product.
youtube.com/embed/pRIfY21PpyI?…
Linux Foundation Creates the AAIF: A New Command Center for Global Artificial Intelligence?
The establishment of the Agentic AI Foundation (AAIF), a dedicated fund under the umbrella of the Linux Foundation, has been jointly announced by several companies that dominate the technology and artificial intelligence space.
With the creation of the AAIF, Anthropic announced the donation of the MCP protocol to the Linux Foundation, a non-profit organization committed to promoting sustainable open source ecosystems through neutral governance, community development, and shared infrastructure. The AAIF will operate as a directed fund within the Linux Foundation.
Founding members include Anthropic, OpenAI, and Block, with further support from Google, Microsoft, AWS, Cloudflare, Docker, and Bloomberg.
Originally conceived by Anthropic, the MCP protocol was designed to let different applications communicate with one another and to allow data to be extracted from them.
Shortly after its release, OpenAI and Google adopted it without hesitation. From that point on, its development accelerated rapidly, enabling the smooth orchestration of a wide range of tools while reducing latency in agents’ complex workflows.
The foundation’s initial portfolio also includes OpenAI’s AGENTS.md and Block’s Goose project, both donated to the AAIF.
The foundation will oversee their future operation and coordination, ensuring that these initiatives remain aligned with the principles of technical neutrality, openness, and community stewardship, thereby promoting innovation across the entire AI ecosystem.
Anthropic stressed that this transition will not change the MCP protocol’s governance model. The project maintainers will continue to prioritize community feedback and transparent decision-making, underscoring Anthropic’s commitment to preserving MCP as an open, vendor-neutral standard.
The article “Linux Foundation crea l’AAIF: la nuova cabina di regia dell’Intelligenza Artificiale globale?” originally appeared on Red Hot Cyber.
This Week in Security: Hornet, Gogs, and Blinkenlights
Microsoft has published a patch-set for the Linux kernel, proposing the Hornet Linux Security Module (LSM). If you haven’t been keeping up with the kernel contributor scoreboard, Microsoft is #11 at time of writing and that might surprise you. The reality is that Microsoft’s biggest source of revenue is their cloud offering, and Azure is over half Linux, so Microsoft really is incentivized to make Linux better.
The Hornet LSM is all about more secure eBPF programs, which requires another aside: what is eBPF? First implemented as the Berkeley Packet Filter, it’s a virtual machine in the kernel that allows executing programs in kernel space. It was quickly realized that this ability to run a script in kernel space was useful for far more than just filtering packets, and the extended Berkeley Packet Filter was born. eBPF is now used for load balancing, system auditing, security and intrusion detection, and lots more.
This unique ability to load scripts from user space into kernel space has made eBPF useful for malware and spyware applications, too. There is already a signature scheme to restrict eBPF programs, but Hornet allows for stricter checks and auditing. The patch is considered a Request For Comments (RFC), and points out that this existing protection may be subject to Time Of Check / Time Of Use (TOCTOU) attacks. It remains to be seen whether Hornet passes muster and lands in the upstream kernel.
Patch Tuesday
Linux obviously isn’t the only ongoing concern for Microsoft, and it’s that time of the month to talk about Patch Tuesday. There are 57 fixes that are considered vulnerabilities, plus additional changes that are classified internally as simple bug fixes. Three of those vulnerabilities were publicly known before the fix, and one was known to be actively exploited in the wild.
CVE-2025-62221 was an escalation of privilege flaw in the Windows Cloud Files Mini Filter Driver. In Windows, a minifilter is a kernel driver that attaches to the file system stack to monitor or modify file operations. This flaw was a use-after-free that allowed a lesser-privileged attacker to gain SYSTEM privileges.
Gogs
Researchers at Wiz found an active exploitation campaign that uses CVE-2025-8110, a previously unknown vulnerability in Gogs. The GO Git Service, hence the name, is a self-hosted GitHub/GitLab alternative written in Go. It’s reasonably popular, with 1,400 of them exposed to the Internet.
The vulnerability was a bypass of CVE-2024-55947, a path traversal vulnerability that allowed a malicious user to upload files to arbitrary locations. That was fixed with Gogs 0.13.1, but the fix failed to account for symbolic links (symlinks). Namely, as far as the git protocol is concerned, symlinks are completely legal. The path traversal checking doesn’t check for symlinks during normal git access, so a symlink pointing outside the repository can easily be created. And then the HTTPS file API can be used to upload a file to that symlink, again allowing for arbitrary writes.
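The failure mode here, a path check that normalizes the textual path but never resolves symlinks, can be illustrated with a short Python sketch. This is an illustration of the class of bug, not Gogs code; the directory names are invented:

```python
import os
import tempfile

def naive_in_repo(repo, relpath):
    # Textual check only: rejects "../" traversal but never looks at symlinks.
    return os.path.normpath(os.path.join(repo, relpath)).startswith(repo + os.sep)

def safe_in_repo(repo, relpath):
    # Resolve symlinks first, then compare against the resolved repo root.
    real = os.path.realpath(os.path.join(repo, relpath))
    return real.startswith(os.path.realpath(repo) + os.sep)

base = tempfile.mkdtemp()
repo = os.path.join(base, "repo")
outside = os.path.join(base, "outside")
os.mkdir(repo)
os.mkdir(outside)
# A perfectly "legal" git object: a symlink inside the repo pointing outside it.
os.symlink(outside, os.path.join(repo, "escape"))

target = os.path.join("escape", "pwned.txt")
print(naive_in_repo(repo, target))  # True  -- the naive check is fooled
print(safe_in_repo(repo, target))   # False -- realpath() exposes the escape
```

The fix for the original CVE stopped the textual traversal but, as the sketch shows, a write through a symlink never contains a `..` for the naive check to catch.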
The active exploitation of this vulnerability is particularly widespread. Of the 1,400 Gogs instances on the Internet, over 700 show signs of compromise, in the form of new repositories with randomized names. It’s possible that even more instances have been compromised, with the signs covered up. The attack added a symlink to .git/config, then overwrote that file with a new config that defines the sshCommand setting. After exploitation, Supershell malware was installed, establishing ongoing remote control.
The most troubling element of this story is that the vulnerability was first discovered in the wild back in July and was reported to the Gogs project at that time. As of December 11, the vulnerability has not been fixed or acknowledged. After five months of exploitation without a patch, it seems time to acknowledge that Gogs is effectively unmaintained. There are a couple of active forks that don’t seem to be vulnerable to this attack; time to migrate.
Blinkenlights
There’s an old story I always considered apocryphal, that data could be extracted from the blinking lights of network equipment, leading a few ISPs to boast that they covered all their LEDs with tape for security. While there may have been a bit of truth to that idea, it definitely served as inspiration for [Damien Cauquil] at Quarkslab while reverse engineering a very cheap smart watch.
The watches were €11.99 last Christmas, and a price point that cheap tickles the curiosity of nearly any hacker. What’s on the inside? What does the firmware look like? The microcontroller is an obscure JieLi part, with no good way to pull the firmware back off. With no leads there, [Damien] turned to the Android app and the Bluetooth Low Energy connection. One of the functions of the app is uploading custom watch dials. Which of course had to be tested by creating a custom watch face featuring a certain Rick Astley.
But those custom watch faces have a quirk. The format internally uses byte offsets, and the watch doesn’t check for that offset to be out of bounds. A ridiculous scheme was concocted to abuse this memory leak to push firmware bytes out as pixel data. It took a Raspberry Pi Pico sniffing the SPI bus to actually recover those bytes, but it worked! Quite the epic hack.
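The out-of-bounds read at the heart of the hack can be mimicked in a few lines of Python. The “watch face” format below is invented purely for illustration; the real format and offsets are JieLi-specific:

```python
# Toy model of the watch-face parser: a face record stores a byte offset into
# device memory where its pixel data supposedly lives. The parser never
# checks that offset against the face buffer's bounds, so a crafted offset
# reads adjacent "firmware" bytes back out as pixels.
FACE_PIXELS = bytes([0x11, 0x22, 0x33, 0x44])
FIRMWARE = bytes([0xDE, 0xAD, 0xBE, 0xEF])
DEVICE_MEMORY = FACE_PIXELS + FIRMWARE   # firmware sits right after the face

def render_pixels(memory, offset, count):
    # No bounds check on offset -- this is the bug being modeled.
    return memory[offset:offset + count]

# Legitimate face: offset 0 reads the real pixel data.
print(render_pixels(DEVICE_MEMORY, 0, 4).hex())  # 11223344
# Malicious face: an offset past the pixel data leaks "firmware" bytes.
print(render_pixels(DEVICE_MEMORY, 4, 4).hex())  # deadbeef
```

Walk the offset across memory one chunk at a time and you can dump everything the renderer can reach, which is exactly the loop the SPI-sniffing Pico was capturing.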
Bits and Bytes
Libpng has an out-of-bounds read vulnerability that was just fixed in 1.6.52. What’s weird about this one is that the vulnerability can be triggered by completely legitimate PNG images. The good news is that this vulnerability only affects the simplified API, so not every user of libpng is in the blast radius.
And finally, Google has pushed out an out-of-band update to Chrome, fixing a vulnerability that is being exploited in the wild. The Hacker News managed to connect the bug ID to a pull request in the ANGLE library, a translation layer from OpenGL ES calls into Direct3D, Vulkan, and Metal. The details there suggest the flaw is limited to the macOS platform, as the fix is in the Metal renderer. Regardless, time to update!
Israel’s Decisive Role in Global Technological Innovation
On December 7, speaking on the sidelines of his meeting with German Chancellor Friedrich Merz in Jerusalem, Israeli Prime Minister Benjamin Netanyahu laid out a clear vision of Tel Aviv’s strategic doctrine, claiming that in an era of profound technological innovation “we can lead this process and become not a secondary power, but a primary power”.
Benjamin Netanyahu declares that Israel will have total sovereign control from the Jordan River to the Mediterranean Sea as the nation rises from a secondary power to a primary one.He says Israel is entering a “new age,” claiming the country is on the cusp of an era where… pic.twitter.com/se3Gx1Ennh
— Shadow of Ezra (@ShadowofEzra) December 7, 2025
Netanyahu has a point. This is Israel’s strategic outlook, and it is precisely the technological vitality of the Jewish State’s ecosystem that makes Tel Aviv so influential in the eyes of the Western camp and its alliance so often indispensable to a range of actors. The best-known cases involve cyber and its related technologies, on full display in the September 2024 operation with exploding pagers against Hezbollah in Lebanon, and in the intelligence gathering that prepared the opening strikes of last June’s war against Iran.
Israel’s cyber boom
This scenario also became familiar through the various espionage scandals in the West involving Israeli spyware, which in Italy raised deep questions about the country’s real technological sovereignty. Israel, moreover, has seen the war it was immersed in during 2023–2025 act as an enabler for further investment: in terms of capital raised in the cyber sector, Security Boulevard notes, “Israel recorded $4.4 billion across 130 rounds in 2025, with a 46% year-over-year increase in deal activity, its strongest performance in a decade of tracking”.
Less explored, except for its repercussions in the “Gaza laboratory”, is the decisive role Israel plays in artificial intelligence and its related technologies. Israel makes heavy use of the technologies of Palantir, the American giant of data mining and the strategic use of AI, whose CEO Alex Karp is radically pro-Tel Aviv. But that is not all: there is also a strong Israeli hand in the current boom in mass generative AI.
Consider the reigning AI company par excellence, Nvidia. The Santa Clara giant has grown into the largest player on the world’s stock markets and the most highly capitalized company in financial history thanks to its ability to supply the most modern and powerful chips and compute units to players in the generative AI sector, delivering processors that are longer-lived, more dynamic, and more structured.
The Nvidia–Mellanox marriage
An acquisition completed in 2020 helped make all of this possible from late 2022 onward: Nvidia bought the Israeli firm Mellanox for nearly $7 billion, integrating it fully into its ecosystem. Mellanox, founded by the Israeli entrepreneur Eyal Waldman and listed on the Nasdaq since 2007, built the interconnect systems inside data centers, and between data centers and storage systems, that are fundamental to accelerating compute power, the true driver of AI development.
Nvidia’s long march (today it is the company whose earnings reports market watchers scrutinize to gauge the future of the AI sector) would not have been possible without Israeli technology. Alessandro Aresu, analyst at Limes and author of “Geopolitica dell’intelligenza artificiale”, recalls on StartMag that, in discussing Jensen Huang’s group, it is often forgotten that “around 13% of NVIDIA’s employees are in Israel, and CFO Colette Kress often starts her days talking with the Israeli managers”. Huang’s bet has been fully vindicated.
Nvidia’s Israeli division, headquartered in Yokneam in Galilee, churns out profits, as Calcalistech recalls:
In the last quarter, networking revenues grew 46% over the previous quarter and nearly doubled year over year, reaching $7.25 billion in the second quarter alone. In other words, in the last quarter alone, the R&D center established through the Mellanox acquisition generated more revenue for Nvidia than the cost of the acquisition itself.
Israel’s technology race
Nvidia is actively investing to expand its Israeli presence. The Yokneam campus will be expanded, and in southern Israel Nvidia will build a new complex for its research laboratory in Beersheba: “The new site, located in Beersheba’s Gav Yam technology park, spans roughly 3,000 square meters and is expected to be fully operational by the end of the first half of 2026”, notes the Times of Israel, adding that “as part of the expansion, Nvidia is looking to hire hundreds of additional employees in the southern region, including chip developers, hardware and software engineers, architects, students, and graduates”.
It is a sign of Tel Aviv’s important role in global innovation, expressed through a quiet but decisive influence in shaping the activities of Nvidia, a true “industry of industries” of the AI era.
Israel, which invests over 6% of its GDP in research and development each year, also profits from the continuous revolving door between strategic agencies, industry, and start-ups, one that actively connects economic development, innovation, and national security and pushes the government to fund the most promising projects. Israel is well positioned on the new frontier of quantum computing too: Netanyahu’s cynical words about his country’s future in the region rest on a real foundation. Tel Aviv is a fundamental hub for the world technology market, and its weight is set to grow.
The article “Il ruolo decisivo di Israele per l’innovazione tecnologica globale” originally appeared on InsideOver.
NASA May Have Lost the MAVEN Mars Orbiter
When the orbit of NASA’s Mars Atmosphere and Volatile EvolutioN (MAVEN) spacecraft took it behind the Red Planet on December 6th, ground controllers expected a temporary loss of signal (LoS). Unfortunately, the Deep Space Network hasn’t heard from the science orbiter since. Engineers are currently trying to troubleshoot this issue, but without a sign of life from the stricken spacecraft, there are precious few options.
As noted by [Stephen Clark] over at Ars Technica, this is a pretty big deal. Even though MAVEN was launched in November of 2013, it’s a spring chicken compared to the other Mars orbiters. The two other US orbiters, Mars Reconnaissance Orbiter (MRO) and Mars Odyssey, are older by around a decade. Of the two ESA orbiters, Mars Express and ExoMars, the latter is fairly new (2016) and could serve as at least a partial backup for MAVEN’s communication relay functionality with ground-based units, in particular the two active rovers. ExoMars has a less ideal orbit for large data transfers, however, which would hamper scientific research.
With neither the Chinese nor UAE orbiters capable of serving as a relay, this puts the burden on a potential replacement orbiter, such as the suggested Mars Telecommunications Orbiter, which was cancelled in 2005. Even if contact with MAVEN is restored, it would only have fuel for a few more years. This makes a replacement essential if we wish to keep doing ground-based science missions on Mars, as well as any potential manned missions.
Following the digital trail: what happens to data stolen in a phishing attack
Introduction
A typical phishing attack involves a user clicking a fraudulent link and entering their credentials on a scam website. However, the attack is far from over at that point. The moment the confidential information falls into the hands of cybercriminals, it immediately transforms into a commodity and enters the shadow market conveyor belt.
In this article, we trace the path of the stolen data, starting from its collection through various tools – such as Telegram bots and advanced administration panels – to the sale of that data and its subsequent reuse in new attacks. We examine how a once leaked username and password become part of a massive digital dossier and why cybercriminals can leverage even old leaks for targeted attacks, sometimes years after the initial data breach.
Data harvesting mechanisms in phishing attacks
Before we trace the subsequent fate of the stolen data, we need to understand exactly how it leaves the phishing page and reaches the cybercriminals.
By analyzing real-world phishing pages, we have identified the most common methods for data transmission:
- Send to an email address.
- Send to a Telegram bot.
- Upload to an administration panel.
It also bears mentioning that attackers may use legitimate services for data harvesting to make their server harder to detect. Examples include online form services like Google Forms, Microsoft Forms, etc. Stolen data repositories can also be set up on GitHub, Discord servers, and other websites. For the purposes of this analysis, however, we will focus on the primary methods of data harvesting.
Data entered into an HTML form on a phishing page is sent to the cybercriminal’s server via a PHP script, which then forwards it to an email address controlled by the attacker. However, this method is becoming less common due to several limitations of email services, such as delivery delays, the risk of the hosting provider blocking the sending server, and the inconvenience of processing large volumes of data.
As an example, let’s look at a phishing kit targeting DHL users.
The index.php file contains the phishing form designed to harvest user data – in this case, an email address and a password.
Phishing form imitating the DHL website
The data that the victim enters into this form is then sent via a script in the next.php file to the email address specified within the mail.php file.
Telegram bots
Unlike the previous method, the script used to send stolen data specifies a Telegram API URL with a bot token and the corresponding Chat ID, rather than an email address. In some cases, the link is hard-coded directly into the phishing HTML form. Attackers create a detailed message template that is sent to the bot after a successful attack. Here is what this looks like in the code:
Code snippet for data submission
Compared to sending data via email, using Telegram bots provides phishers with enhanced functionality, which is why they are increasingly adopting this method. Data arrives in the bot in real time, with instant notification to the operator. Attackers often use disposable bots, which are harder to track and block. Furthermore, their performance does not depend on the quality of phishing page hosting.
Automated administration panels
More sophisticated cybercriminals use specialized software, including commercial frameworks like BulletProofLink and Caffeine, often as a Platform as a Service (PaaS). These frameworks provide a web interface (dashboard) for managing phishing campaigns.
Data harvested from all phishing pages controlled by the attacker is fed into a unified database that can be viewed and managed through their account.
Sending data to the administration panel
These admin panels are used for analyzing and processing victim data. The features of a specific panel depend on the available customization options, but most dashboards typically have the following capabilities:
- Sorting of real-time statistics: the ability to view the number of successful attacks by time and country, along with data filtering options
- Automatic verification: some systems can automatically check the validity of the stolen data like credit cards and login credentials
- Data export: the ability to download the data in various formats for future use or sale
Example of an administration panel
Admin panels are a vital tool for organized cybercriminals.
One campaign often employs several of these data harvesting methods simultaneously.
Sending stolen data to both an email address and a Telegram bot
The data cybercriminals want
The data harvested during a phishing attack varies in value and purpose. In the hands of cybercriminals, it becomes a method of profit and a tool for complex, multi-stage attacks.
Stolen data can be divided into the following categories, based on its intended purpose:
- Immediate monetization: the direct sale of large volumes of raw data or the immediate withdrawal of funds from a victim’s bank account or online wallet.
  - Banking details: card number, expiration date, cardholder name, and CVV/CVC.
  - Access to online banking accounts and digital wallets: logins, passwords, and one-time 2FA codes.
  - Accounts with linked banking details: logins and passwords for accounts that contain bank card details, such as online stores, subscription services, or payment systems like Apple Pay or Google Pay.
- Subsequent attacks for further monetization: using the stolen data to conduct new attacks and generate further profit.
  - Credentials for various online accounts: logins and passwords. Importantly, email addresses or phone numbers, which are often used as logins, can hold value for attackers even without the accompanying passwords.
  - Phone numbers, used for phone scams, including attempts to obtain 2FA codes, and for phishing via messaging apps.
  - Personal data: full name, date of birth, and address, abused in social engineering attacks.
- Targeted attacks, blackmail, identity theft, and deepfakes.
  - Biometric data: voice and facial projections.
  - Scans and numbers of personal documents: passports, driver’s licenses, social security cards, and taxpayer IDs.
  - Selfies with documents, used for online loan applications and identity verification.
  - Corporate accounts, used for targeted attacks on businesses.
We analyzed phishing and scam attacks conducted from January through September 2025 to determine which data was most frequently targeted by cybercriminals. We found that 88.5% of attacks aimed to steal credentials for various online accounts, 9.5% targeted personal data (name, address, and date of birth), and 2% focused on stealing bank card details.
Distribution of attacks by target data type, January–September 2025
Selling data on dark web markets
Except for real-time attacks or those aimed at immediate monetization, stolen data is typically not used instantly. Let’s take a closer look at the route it takes.
- Sale of data dumps
Data is consolidated and put up for sale on dark web markets in the form of dumps: archives that contain millions of records obtained from various phishing attacks and data breaches. A dump can be offered for as little as $50. The primary buyers are often not active scammers but rather dark market analysts, the next link in the supply chain.
- Sorting and verification
Dark market analysts filter the data by type (email accounts, phone numbers, banking details, etc.) and then run automated scripts to verify it. This checks validity and reuse potential, for example, whether a Facebook login and password can be used to sign in to Steam or Gmail. Data stolen from one service several years ago can still be relevant for another service today because people tend to use identical passwords across multiple websites. Verified accounts with an active login and password command a higher price at the point of sale.
Analysts also focus on combining user data from different attacks. Thus, an old password from a compromised social media site, a login and password from a phishing form mimicking an e-government portal, and a phone number left on a scam site can all be compiled into a single digital dossier on a specific user.
- Selling on specialized markets
Stolen data is typically sold on dark web forums and via Telegram. The instant messaging app is often used as a storefront to display prices, buyer reviews, and other details.
Offers of social media data, as displayed in Telegram
The prices of accounts can vary significantly and depend on many factors, such as account age, balance, linked payment methods (bank cards, online wallets), 2FA, and service popularity. Thus, an online store account may be more expensive if it is linked to an email, has 2FA enabled, and has a long history with a large number of completed orders. For gaming accounts, such as Steam, expensive game purchases are a factor. Online banking data sells at a premium if the victim has a high account balance and the bank itself has a good reputation.
The table below shows prices for various types of accounts found on dark web forums as of 2025*.
| Category | Price | Average price |
|---|---|---|
| Crypto platforms | $60–$400 | $105 |
| Banks | $70–$2,000 | $350 |
| E-government portals | $15–$2,000 | $82.50 |
| Social media | $0.40–$279 | $3 |
| Messaging apps | $0.065–$150 | $2.50 |
| Online stores | $10–$50 | $20 |
| Games and gaming platforms | $1–$50 | $6 |
| Global internet portals | $0.20–$2 | $0.90 |
| Personal documents | $0.50–$125 | $15 |

*Data provided by Kaspersky Digital Footprint Intelligence
- High-value target selection and targeted attacks
Cybercriminals take particular interest in valuable targets. These are users who have access to important information: senior executives, accountants, or IT systems administrators.
Let’s break down a possible scenario for a targeted whaling attack. A breach at Company A exposes data associated with a user who was once employed there but now holds an executive position at Company B. The attackers analyze open-source intelligence (OSINT) to determine the user’s current employer (Company B). Next, they craft a sophisticated phishing email to the target, purportedly from the CEO of Company B. To build trust, the email references some facts from the target’s old job, though other scenarios exist too. By disarming the user’s vigilance, the cybercriminals gain a foothold from which to compromise Company B in a further attack.
Importantly, these targeted attacks are not limited to the corporate sector. Attackers may also be drawn to an individual with a large bank account balance or someone who possesses important personal documents, such as those required for a microloan application.
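The consolidation step described above, where fragments from unrelated leaks are merged into one dossier per person, can be sketched in a few lines of Python. The records and field names here are invented for illustration:

```python
from collections import defaultdict

# Fragments from three unrelated incidents, each knowing only part of the
# picture. The common join key is the email address.
leaks = [
    {"email": "jane@example.com", "password": "hunter2", "source": "old social media breach"},
    {"email": "jane@example.com", "phone": "+1-555-0100", "source": "scam site form"},
    {"email": "jane@example.com", "gov_login": "jane.d", "source": "e-government phishing"},
]

def build_dossiers(records):
    """Merge leak records keyed by email into per-person dossiers."""
    dossiers = defaultdict(dict)
    for record in records:
        person = dossiers[record["email"]]
        for field, value in record.items():
            if field == "source":
                person.setdefault("sources", []).append(value)
            else:
                person[field] = value
    return dict(dossiers)

dossier = build_dossiers(leaks)["jane@example.com"]
print(sorted(dossier))          # ['email', 'gov_login', 'password', 'phone', 'sources']
print(len(dossier["sources"]))  # 3
```

Three individually low-value fragments become one high-value record, which is exactly why old leaks keep resurfacing in new attacks.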
Takeaways
The journey of stolen data is like a well-oiled conveyor belt, where every piece of information becomes a commodity with a specific price tag. Today, phishing attacks leverage diverse systems for harvesting and analyzing confidential information. Data flows instantly into Telegram bots and attackers’ administration panels, where it is then sorted, verified, and monetized.
It is crucial to understand that data, once lost, does not simply vanish. It is accumulated, consolidated, and can be used against the victim months or even years later, transforming into a tool for targeted attacks, blackmail, or identity theft. In the modern cyber-environment, caution, the use of unique passwords, multi-factor authentication, and regular monitoring of your digital footprint are no longer just recommendations – they are a necessity.
What to do if you become a victim of phishing
- If a bank card you hold has been compromised, call your bank as soon as possible and have the card blocked.
- If your credentials have been stolen, immediately change the password for the compromised account and any online services where you may have used the same or a similar password. Set a unique password for every account.
- Enable multi-factor authentication for all accounts that support it.
- Check the sign-in history for your accounts and terminate any suspicious sessions.
- If your messaging service or social media account has been compromised, alert your family and friends about potential fraudulent messages sent in your name.
- Use specialized services to check if your data has been found in known data breaches.
- Treat any unexpected emails, calls, or offers with extreme vigilance – they may appear credible because attackers are using your compromised data.
Consider This Pocket Machine For Your iPhone Backups
What if you’re an iPhone owner who wants a local backup solution — no wireless tech involved, no sending data off to someone else’s server, just an automatic device-to-device file sync? Check out [Giovanni]’s ios-backup-machine project, a small Linux-powered device with an e-ink screen that backs up your iPhone whenever you plug the two together with a USB cable.
The system relies on libimobiledevice, and is written to make simple, no-interaction automatic backups work seamlessly. The backup status is displayed on the e-ink screen, and at boot, it shows owner information of your choice, say, a phone number — helpful if the device is ever lost. To prevent data loss, [Giovanni] recommends a small uninterruptible power supply; the system as described on GitHub is built around a PiSugar board, though you could go without or swap in a different one. Backups are encrypted through the iPhone’s internal mechanisms, so while you might not be able to dig into one, they are perfectly usable for restoring your device should it get corrupted, or for provisioning a new phone to replace one you just lost.
Easy to set up, fully open, and straightforward to use — what’s not to like? Just put a few off-the-shelf boards together, print the case, and run the setup instructions, and you’ll have a pocket backup machine ready to go. Now, if you’re considering this as a way to decrease your iTunes dependency, you might as well check out this nifty tool that helps you extract the metadata for the music you’ve bought on iTunes.
Turn me on, turn me off: Zigbee assessment in industrial environments
We all encounter IoT and home automation in some form or another, from smart speakers to automated sensors that control water pumps. These services appear simple and straightforward to us, but many devices and protocols work together under the hood to deliver them.
One of those protocols is Zigbee. Zigbee is a low-power wireless protocol (based on IEEE 802.15.4) used by many smart devices to talk to each other. It’s common in homes, but is also used in industrial environments where hundreds or thousands of sensors may coordinate to support a process.
There are many guides online about performing security assessments of Zigbee. Most focus on the Zigbee you see in home setups. They often skip the Zigbee used at industrial sites, what I call ‘non-public’ or ‘industrial’ Zigbee.
In this blog, I will take you on a journey through Zigbee assessments. I’ll explain the basics of the protocol and map the attack surface likely to be found in deployments. I’ll also walk you through two realistic attack vectors that you might see in facilities, covering the technical details and common problems that show up in assessments. Finally, I will present practical ways to address these problems.
Zigbee introduction
Protocol overview
Zigbee is a wireless communication protocol designed for low-power applications in wireless sensor networks. Based on the IEEE 802.15.4 standard, it was created for short-range and low-power communication. Zigbee supports mesh networking, meaning devices can connect through each other to extend the network range. It operates on the 2.4 GHz frequency band and is widely used in smart homes, industrial automation, energy monitoring, and many other applications.
You may be wondering why there’s a need for Zigbee when Wi-Fi is everywhere. The answer depends on the application. In most home setups, Wi-Fi works well for connecting devices. But imagine you have a battery-powered sensor that isn’t connected to your home’s electricity. If it used Wi-Fi, its battery would drain quickly – maybe in just a few days – because Wi-Fi consumes much more power. In contrast, the Zigbee protocol allows for months or even years of uninterrupted operation.
Now imagine an even more extreme case. You need to place sensors in a radiation zone where humans can’t go. You drop the sensors from a helicopter and they need to operate for months without a battery replacement. In this situation, power consumption becomes the top priority. Wi-Fi wouldn’t work, but Zigbee is built exactly for this kind of scenario.
Also, Zigbee has a big advantage if the area is very large, covering thousands of square meters and requiring thousands of sensors: it supports thousands of nodes in a mesh network, while Wi-Fi is usually limited to hundreds at most.
There are lots more ins and outs, but these are the main reasons Zigbee is preferred for large-scale, low-power sensor networks.
Since both Zigbee and IEEE 802.15.4 define wireless communication, many people confuse the two. The difference between them, to put it simply, concerns the layers they support. IEEE 802.15.4 defines the physical (PHY) and media access control (MAC) layers, which basically determine how devices send and receive data over the air. Zigbee (as well as other protocols like Thread, WirelessHART, 6LoWPAN, and MiWi) builds on IEEE 802.15.4 by adding the network and application layers that define how devices form a network and communicate.
Zigbee operates in the 2.4 GHz wireless band, which it shares with Wi-Fi and Bluetooth. The Zigbee band includes 16 channels, each with a 2 MHz bandwidth and a 5 MHz gap between channels.
This shared frequency means Zigbee networks can sometimes face interference from Wi-Fi or Bluetooth devices. However, Zigbee’s low power and adaptive channel selection help minimize these conflicts.
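As a quick reference, the center frequency of each channel follows directly from the IEEE 802.15.4 numbering for the 2.4 GHz band (channels 11–26, 5 MHz apart starting at 2405 MHz). A minimal sketch:

```python
def channel_to_mhz(channel: int) -> int:
    """Center frequency in MHz for an IEEE 802.15.4 channel
    in the 2.4 GHz band (channels 11-26, 5 MHz spacing)."""
    if not 11 <= channel <= 26:
        raise ValueError("2.4 GHz channels are numbered 11-26")
    return 2405 + 5 * (channel - 11)

# channel_to_mhz(11) -> 2405, channel_to_mhz(26) -> 2480
```

This is also why a sniffer has to be told the channel number rather than a frequency: every tool in this post speaks in channels 11–26.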
Devices and network
There are three main types of Zigbee devices, each of which plays a different role in the network.
- Zigbee coordinator
The coordinator is the brain of the Zigbee network. A Zigbee network is always started by a coordinator and can only contain one coordinator, which has the fixed address 0x0000.
It performs several key tasks:
- Starts and manages the Zigbee network.
- Chooses the Zigbee channel.
- Assigns addresses to other devices.
- Stores network information.
- Chooses the PAN ID: a 2-byte identifier (for example, 0x1234) that uniquely identifies the network.
- Sets the Extended PAN ID: an 8-byte value, often an ASCII name representing the network.
The coordinator can have child devices, which can be either Zigbee routers or Zigbee end devices.
- Zigbee router
The router works just like a router in a traditional network: it forwards data between devices, extends the network range and can also accept child devices, which are usually Zigbee end devices.
Routers are crucial for building large mesh networks because they enable communication between distant nodes by passing data through multiple hops.
- Zigbee end device
The end device, also referred to as a Zigbee endpoint, is the simplest and most power-efficient type of Zigbee device. It only communicates with its parent, either a coordinator or router, and sleeps most of the time to conserve power. Common examples include sensors, remotes, and buttons.
Zigbee end devices do not accept child devices unless they are configured as both a router and an endpoint simultaneously.
Each of these device types, also known as Zigbee nodes, has two types of address:
- Short address: two bytes long, similar to an IP address in a TCP/IP network.
- Extended address: eight bytes long, similar to a MAC address.
Both addresses can be used in the MAC and network layers, unlike in TCP/IP, where the MAC address is used only in Layer 2 and the IP address in Layer 3.
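To make the two address types concrete, here is a small sketch (the example addresses are made up) that renders a short address the way sniffers usually show it, and an extended address in the familiar colon-separated MAC style:

```python
def format_short(addr: int) -> str:
    """Render a 16-bit Zigbee short address, e.g. 0xe8fa."""
    return f"0x{addr:04x}"

def format_extended(addr: int) -> str:
    """Render a 64-bit extended (EUI-64) address as colon-separated hex."""
    return ":".join(f"{b:02x}" for b in addr.to_bytes(8, "big"))

# format_short(0xE8FA)                -> "0xe8fa"
# format_extended(0x00124B0012345678) -> "00:12:4b:00:12:34:56:78"
```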
Zigbee setup
Zigbee exposes many attack surfaces, from protocol fuzzing targets to low-level radio attacks. In this post, however, I’ll focus on application-level attacks. Our test setup covers two attack vectors and is intentionally small to make the concepts clear.
In our setup, a Zigbee coordinator is connected to a single device that functions as both a Zigbee endpoint and a router. The coordinator also has other interfaces (Ethernet, Bluetooth, Wi-Fi, LTE), while the endpoint has a relay attached that the coordinator can switch on or off over Zigbee. This relay can be triggered by events coming from any interface, for example, a Bluetooth command or an Ethernet message.
Our goal will be to take control of the relay and toggle its state (turn it off and on) using only the Zigbee interface. Because the other interfaces (Ethernet, Bluetooth, Wi-Fi, LTE) are out of scope, the attack must work by hijacking Zigbee communication.
For the purposes of this research, we will attempt to hijack the communication between the endpoint and the coordinator. The two attack vectors we will test are:
- Spoofed packet injection: sending forged Zigbee commands made to look like they come from the coordinator to trigger the relay.
- Coordinator impersonation (rejoin attack): impersonating the legitimate coordinator to trick the endpoint into joining the attacker-controlled coordinator and controlling it directly.
Spoofed packet injection
In this scenario, we assume the Zigbee network is already up and running and that both the coordinator and endpoint nodes are working normally. The coordinator has additional interfaces, such as Ethernet, and the system uses those interfaces to trigger the relay. For instance, a command comes in over Ethernet and the coordinator sends a Zigbee command to the endpoint to toggle the relay. Our goal is to toggle the relay by injecting spoofed Zigbee packets that pass for legitimate ones, using only the Zigbee link.
Sniffing
The first step in any radio assessment is to sniff the wireless traffic so we can learn how the devices talk. For Zigbee, a common and simple tool is the nRF52840 USB dongle by Nordic Semiconductor. With the official nRF Sniffer for 802.15.4 firmware, the dongle can run in promiscuous mode to capture all 802.15.4/Zigbee traffic. Those captures can be opened in Wireshark with the appropriate dissector to inspect the frames.
How do you find the channel that’s in use?
Zigbee runs on one of the 16 channels that we mentioned earlier, so we must set the sniffer to the same channel that the network uses. One practical way to scan the channels is to change the sniffer channel manually in Wireshark and watch for Zigbee traffic. When we see traffic, we know we’ve found the right channel.
After selecting the channel, we will be able to see the communication between the endpoint and the coordinator, though it will most likely be encrypted:
In the “Info” column, we can see that Wireshark only identifies packets as Data or Command without specifying their exact type, and that’s because the traffic is encrypted.
Even when Zigbee payloads are encrypted, the network and MAC headers remain visible. That means we can usually read things like source and destination addresses, PAN ID, short and extended MAC addresses, and frame control fields. The application payload (i.e., the actual command to toggle the relay) is typically encrypted at the Zigbee network/application layer, so we won’t see it in clear text without encryption keys. Nevertheless, we can still learn enough from the headers.
Decryption
Zigbee supports several key types and encryption models. In this post, we’ll keep it simple and look at a case involving only two devices: a Zigbee coordinator and a device that is both an endpoint and a router. That way, we only need the network encryption model, whereas with, say, large mesh networks there can be various encryption models in use.
The network encryption model is a common concept. The traffic that we sniffed earlier is typically encrypted using the network key. This key is a symmetric AES-128 key shared by all devices in a Zigbee network. It protects network-layer packets (hop-by-hop) such as routing and broadcast packets. Because every router on the path shares the network key, this encryption method is not considered end-to-end.
Depending on the specific implementation, Zigbee can use two approaches for application payloads:
- Network-layer encryption (hop-by-hop): the network key encrypts the Application Support Sublayer (APS) data, the sublayer of the application layer in Zigbee. In this case, each router along the route can decrypt the APS payload. This is not end-to-end encryption, so it is not recommended for transmitting sensitive data.
- Link key (end-to-end) encryption: a link key, which is also an AES-128 key, is shared between two devices (for example, the coordinator and an endpoint).
The link key provides end-to-end protection of the APS payload between the two devices.
Because the network key could allow an attacker to read and forge many types of network traffic, it must be random and protected. Exposing the key effectively compromises the entire network.
When a new device joins, the coordinator (Trust Center) delivers the network key using a Transport Key command. That transport packet must be protected by a link key so the network key is not exposed in clear text. The link key authenticates the joining device and protects the key delivery.
The image below shows the transport packet:
There are two common ways link keys are provided:
- Pre-installed: the device ships with an installation code or link key already set.
- Key establishment: the device runs a key-establishment protocol.
A common historical problem is the global default Trust Center link key, “ZigBeeAlliance09”. It was included in early versions of Zigbee (pre-3.0) to facilitate testing and interoperability. However, many vendors left it enabled on consumer devices, and that has caused major security issues. If an attacker knows this key, they can join devices and read or steal the network key.
Newer versions – Zigbee 3.0 and later – introduced installation codes and procedures to derive unique link keys for each device. An installation code is usually a factory-assigned secret (often encoded on the device label) that the Trust Center uses to derive a unique link key for the device in question. This helps avoid the problems caused by a single hard-coded global key.
Unfortunately, many manufacturers still ignore these best practices. During real assessments, we often encounter devices that use default or hard-coded keys.
How can these keys be obtained?
If an endpoint has already joined the network and communicates with the coordinator using the network key, there are two main options for decrypting traffic:
- Guess or brute-force the network key. This is usually impractical because a properly generated network key is a random AES-128 key.
- Force the device to rejoin and capture the transport key. If we can make the endpoint leave the network and then rejoin, the coordinator will send the transport key. Capturing that packet can reveal the network key, but the transport key itself is protected by the link key. Therefore, we still need the link key.
To obtain the network and link keys, many approaches can be used:
- The well-known default link key, ZigBeeAlliance09. Many legacy devices still use it.
- Identify the device manufacturer and search for the default keys used by that vendor. We can find the manufacturer by:
- Checking the device MAC/OUI (the first three bytes of the 64-bit extended address often map to a vendor).
- Physically inspecting the device (label, model, chip markings).
- Extract the firmware from the coordinator or device if we have physical access and search for hard-coded keys inside the firmware images.
Once we have the relevant keys, the decryption process is straightforward:
- Open the capture in Wireshark.
- Go to Edit -> Preferences -> Protocols -> Zigbee.
- Add the network key and any link keys in our possession.
- Wireshark will then show decrypted APS payloads and higher-level Zigbee packets.
After successful decryption, packet types and readable application commands will be visible, such as Link Status or on/off cluster commands:
Choose your gadget
Now that we can read and potentially decrypt traffic, we need hardware and software to inject packets over the Zigbee link between the coordinator and the endpoint. To keep this practical and simple, I opted for cheap, widely available tools that are easy to set up.
For the hardware, I used the nRF52840 USB dongle, the same device we used for sniffing. It’s inexpensive, easy to find, and supports IEEE 802.15.4/Zigbee, so it can sniff and transmit.
The dongle needs firmware we can build on, and a good platform for that is Zephyr RTOS. Zephyr has an IEEE 802.15.4 radio API that enables the device to receive raw frames, essentially providing a sniffer mode, as well as send raw frames.
Using this API and other components, we created a transceiver implementation written in C, compiled it to firmware, and flashed it to the dongle. The firmware can expose a simple runtime interface, such as a USB serial port, which allows us to control the radio from a laptop.
At runtime, the dongle listens on the serial port (for example, /dev/ttyACM1). Using a script, we can send it raw bytes, which the firmware will pass to the radio API and transmit on the channel. The following is an example of a tiny Python script to open the serial port:
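A minimal version might look like this. The one-byte length prefix is an assumed framing scheme for the dongle firmware (adjust it to match whatever your firmware actually expects), and pyserial is assumed to be installed for the serial port:

```python
def frame_payload(raw: bytes) -> bytes:
    """Prefix a raw 802.15.4 frame with a one-byte length
    (hypothetical framing; match your firmware's protocol)."""
    if len(raw) > 127:  # IEEE 802.15.4 maximum PHY payload size
        raise ValueError("frame too long for 802.15.4")
    return bytes([len(raw)]) + raw

def send_frame(port: str, raw: bytes) -> None:
    """Open the dongle's serial port and hand it one framed packet."""
    import serial  # pyserial; imported here so frame_payload stays stdlib-only
    with serial.Serial(port, 115200, timeout=1) as ser:
        ser.write(frame_payload(raw))

# send_frame("/dev/ttyACM1", b"\x03\x08\x01\xff\xff\xff\xff\x07")
```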
I used the Scapy tool with the 802.15.4/Zigbee extensions to build Zigbee packets. Scapy lets us assemble packets layer-by-layer – MAC → NWK → APS → ZCL – and then convert them to raw bytes to send to the dongle. We will talk about APS and ZCL in more detail later.
Here is an example of how we can use Scapy to craft an APS layer packet:
from scapy.layers.dot15d4 import Dot15d4, Dot15d4FCS, Dot15d4Data, Dot15d4Cmd, Dot15d4Beacon, Dot15d4CmdAssocResp
from scapy.layers.zigbee import ZigbeeNWK, ZigbeeAppDataPayload, ZigbeeSecurityHeader, ZigBeeBeacon, ZigbeeAppCommandPayload
Before sending, the packet must be properly encrypted and signed so the endpoint accepts it. That means applying AES-CCM (AES-128 with a MIC) using the network key (or the correct link key) and adhering to Zigbee’s rules for packet encryption and MIC calculation. We implemented the encryption and MIC in Python (using a cryptographic library) after building the Scapy packet, then sent the final bytes to the dongle.
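For reference, the nonce layout AES-CCM* uses at the Zigbee network layer is small enough to sketch in pure Python. The example values are made up; the actual encryption call would then go through an AES-CCM implementation (for instance, the `cryptography` package’s AESCCM with a 4-byte tag), with the MAC/NWK headers passed in as associated data:

```python
import struct

def ccm_star_nonce(src_ext_addr: int, frame_counter: int, sec_control: int) -> bytes:
    """Build the 13-byte CCM* nonce used for Zigbee NWK security:
    8-byte source extended address || 4-byte frame counter ||
    1-byte security control field, little-endian as on the wire."""
    return struct.pack("<QIB", src_ext_addr, frame_counter, sec_control)

nonce = ccm_star_nonce(0x00124B0012345678, 42, 0x28)
# len(nonce) == 13; nonce[8:12] is the frame counter 42 in little-endian
```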
Crafting the packet
Now that we know how to inject packets, the next question is what to inject. To toggle the relay, we simply need to send the same type of command that the coordinator already sends. The easiest way to find that command is to sniff the traffic and read the application payload. However, when we look at captures in Wireshark, we can see many packets under ZCL marked [Malformed Packet].
A “malformed” ZCL packet usually means Wireshark could not fully interpret the packet because the application layer is non-standard or lacks details Wireshark expects. To understand why this happens, let’s look at the Zigbee application layer.
The Zigbee application layer consists of four parts:
- Application Support Sublayer (APS): routes messages to the correct profile, endpoint, and cluster, and provides application-level security.
- Application Framework (AF): contains the application objects that implement device functionality. These objects reside on endpoints (logical addresses 1–240) and expose clusters (sets of attributes and commands).
- Zigbee Cluster Library (ZCL): defines standard clusters and commands so devices can interoperate.
- Zigbee Device Object (ZDO): handles device discovery and management (out of scope for this post).
To make sense of application traffic, we must introduce three concepts:
- Profile: a rulebook for how devices should behave for a specific use case. Public (standard) profiles are managed by the Connectivity Standards Alliance (CSA). Vendors can also create private profiles for proprietary features.
- Cluster: a set of attributes and commands for a particular function. For example, the On/Off cluster contains On and Off commands and an OnOff attribute that stores the current state.
- Endpoint: a logical “port” on the device where a profile and clusters reside. A device can host multiple endpoints for different functions.
Putting all this together, in the standard home automation traffic we see APS pointing to the home automation profile, the On/Off cluster, and a destination endpoint (for example, endpoint 1). In ZCL, the byte 0x00 often means “Off”.
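For the public On/Off cluster, the ZCL payload is just three bytes, which is easy to sketch (cluster-specific frame control, a transaction sequence number, and the standard command IDs 0x00 = Off, 0x01 = On):

```python
def zcl_onoff_frame(seq: int, on: bool) -> bytes:
    """Minimal ZCL cluster-specific command frame for the On/Off cluster:
    frame control (0x01 = cluster-specific, client to server), transaction
    sequence number, and command id (0x00 = Off, 0x01 = On)."""
    return bytes([0x01, seq & 0xFF, 0x01 if on else 0x00])

# zcl_onoff_frame(0x2A, on=False) -> b"\x01\x2a\x00"
```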
In many industrial setups, vendors use private profiles or custom application frameworks. That’s why Wireshark can’t decode the packets; the AF payload is custom, so the dissector doesn’t know the format.
So how do we find the right bytes to toggle the switch when the application is private? Our strategy has two phases.
- Passive phase
Sniff traffic while the system is driven legitimately. For example, trigger the relay from another interface (Ethernet or Bluetooth) and capture the Zigbee packets used to toggle the relay. If we can decrypt the captures, we can extract the application payload that correlates with the on/off action.
- Active phase
With the legitimate payload at hand, we can craft our own packet. There are two options. The first is to replay the captured application payload exactly as it is; this works if there are no freshness checks such as sequence numbers. Otherwise, we have to reverse-engineer the payload and adjust any counters or fields that prevent replay. For instance, many applications include an application-level counter: if the device ignores packets with a lower counter, we must locate and increment it when crafting our packet.
Another important protective measure is the frame counter inside the Zigbee security header (in the network header security fields). The frame counter prevents replay attacks; the receiver expects the frame counter to increase with each new packet, and will reject packets with a lower or repeated counter.
So, in the active phase, we must:
- Sniff the traffic until the coordinator sends a valid packet to the endpoint.
- Decrypt the packet, extract the counters and increase them by one.
- Build a packet with the correct APS/AF fields (profile, endpoint, cluster).
- Include a valid ZCL command or the vendor-specific payload that we identified in the passive phase.
- Encrypt and sign the packet with the correct network or link key.
- Make sure both the application counter (if used) and the Zigbee frame counter are modified so the packet is accepted.
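As an illustration of the counter-handling step, here is a sketch that patches the frame counter in a captured NWK auxiliary security header before re-encryption. The 14-byte layout assumes the extended-nonce, network-key case described above; an application-level counter would be handled the same way inside the decrypted payload:

```python
import struct

def bump_frame_counter(aux_header: bytes) -> bytes:
    """Increment the 4-byte frame counter in a Zigbee NWK auxiliary
    security header laid out as: security control (1 byte) || frame
    counter (4 bytes, little-endian) || source extended address (8 bytes)
    || key sequence number (1 byte)."""
    if len(aux_header) != 14:
        raise ValueError("expected a 14-byte auxiliary security header")
    counter = struct.unpack_from("<I", aux_header, 1)[0]
    return aux_header[:1] + struct.pack("<I", counter + 1) + aux_header[5:]
```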
The whole strategy for this phase will look like this:
If all of the above are handled correctly, we will be able to hijack the Zigbee communication and toggle the relay (turn it off and on) using only the Zigbee link.
Coordinator impersonation (rejoin attack)
The goal of this attack vector is to force the Zigbee endpoint to leave its original coordinator’s network and join our spoofed network so that we can take control of the device. To do this, we must achieve two things:
- Force the endpoint to leave the original network.
- Spoof the original coordinator and trick the node into joining our fake coordinator.
Force leaving
To better understand how to manipulate endpoint connections, let’s first describe the concept of a beacon frame. Beacon frames are periodic announcements sent by a coordinator and by routers. They advertise the presence of a network and provide join information, such as:
- PAN ID and Extended PAN ID
- Coordinator address
- Stack/profile information
- Device capacity (for example, whether the coordinator can accept child devices)
When a device wants to join, it sends a beacon request across Zigbee channels and waits for beacon replies from nearby coordinators/routers. Even if the network is not beacon-enabled for regular synchronization, beacon frames are still used during the join/discovery process, so they are mandatory when a node tries to discover networks.
Note that beacon frames exist at both the Zigbee and IEEE 802.15.4 levels. The MAC layer carries the basic beacon structure that Zigbee then extends with network-specific fields.
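The beacon request itself is a tiny MAC-layer command frame, simple enough to build by hand (no FCS shown; many radios, including the nRF dongle, append it in hardware):

```python
def beacon_request(seq: int) -> bytes:
    """Raw IEEE 802.15.4 beacon request: frame control 0x0803 (MAC command
    frame, 16-bit broadcast destination, no source address), sequence
    number, broadcast PAN ID and address (0xffff), MAC command id 0x07."""
    return b"\x03\x08" + bytes([seq & 0xFF]) + b"\xff\xff\xff\xff\x07"

# beacon_request(1) -> b"\x03\x08\x01\xff\xff\xff\xff\x07"
```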
Now, we can force the endpoint to leave its network by abusing how Zigbee handles PAN conflicts. If a coordinator sees beacons from another coordinator using the same PAN ID and the same channel, it may trigger a PAN ID conflict resolution. When that happens, the coordinator can instruct its nodes to change PAN ID and rejoin, which causes them to leave and then attempt to join again. That rejoin window gives us an opportunity to advertise a spoofed coordinator and capture the joining node.
In the capture shown below, packet 7 is a beacon generated by our spoofed coordinator using the same PAN ID as the real network. As a result, the endpoint with the address 0xe8fa leaves the network (see packets 14–16).
Choose me
After forcing the endpoint to leave its original network by sending a fake beacon, the next step is to make the endpoint choose our spoofed coordinator. At this point, we assume we already have the necessary keys (network and link keys) and understand how the application behaves.
To impersonate the original coordinator, our spoofed coordinator must reply to any beacon request the endpoint sends. The beacon response must include the same Extended PAN ID (and other fields) that the endpoint expects. If the endpoint deems our beacon acceptable, it may attempt to join us.
I can think of two ways to make the endpoint prefer our coordinator.
- Jam the real coordinator
Use a device that reduces the real coordinator’s signal at the endpoint so that it appears weaker, forcing the endpoint to prefer our beacon. This requires extra hardware.
- Exploit undefined or vendor-specific behavior
Zigbee stacks sometimes behave slightly differently across vendors. One useful field in a beacon is the Update ID field. It increments when a coordinator changes network configuration.
If two coordinators advertise the same Extended PAN ID but one has a higher Update ID, some stacks will prefer the beacon with the higher Update ID. This is undefined behavior across implementations; it works on some stacks but not on others. In my experience, sometimes it works and sometimes it fails. There are lots of other similar quirks we can try during an assessment.
Even if the endpoint chooses our fake coordinator, the connection may be unstable. One main reason for that is timing. The endpoint expects ACKs for the frames it sends to the coordinator, as well as fast responses to connection initiation packets. If our responder is implemented in Python on a laptop that receives packets, builds responses, and forwards them to a dongle, the round trip will be too slow. The endpoint will not receive timely ACKs or responses and will drop the connection.
In short, we’re not just faking a few packets; we’re trying to reimplement parts of Zigbee and IEEE 802.15.4 that must run quickly and reliably. This is usually too slow for production stacks when done in high-level, interpreted code.
A practical fix is to run a real Zigbee coordinator stack directly on the dongle. For example, the nRF52840 dongle can act as a coordinator if flashed with the right Nordic SDK firmware (see Nordic’s network coordinator sample). That provides the correct timing and ACK behavior needed for a stable connection.
However, that simple solution has one significant disadvantage. In industrial deployments we often run into incompatibilities. In my tests I compared beacons from the real coordinator and the Nordic coordinator firmware. Notable differences were visible in stack profile headers:
The stack profile identifies the network profile type. Common values include 0x00, which is a network-specific (private) profile, and 0x02, which is a Zigbee Pro (public) profile.
If the endpoint expects a network-specific profile (i.e., it uses a private vendor profile) and we provide Zigbee Pro, the endpoint will refuse to join. Devices that only understand private profiles will not join public-profile networks, and vice versa. In my case, I could not change the Nordic firmware to match the proprietary stack profile, so the endpoint refused to join.
Because of this discrepancy, the “flash a coordinator firmware on the dongle” fix was ineffective in that environment. This is why the standard off-the-shelf tools and firmware often fail in industrial cases, forcing us to continue working with and optimizing our custom setup instead.
Back to the roots
In our previous test setup we used a sniffer in promiscuous mode, which receives every frame on the air regardless of destination. Real Zigbee (IEEE 802.15.4) nodes do not work like that. At the MAC/802.15.4 layer, a node filters frames by PAN ID and destination address. A frame is only passed to upper layers if the PAN ID matches and the destination address is the node’s address or a broadcast address.
We can mimic that real behavior on the dongle by running Zephyr RTOS and making the dongle act as a basic 802.15.4 coordinator. In that role, we set a PAN ID and short network address on the dongle so that the radio only accepts frames that match those criteria. This is important because it allows the dongle to handle auto-ACKs and MAC-level timing: the dongle will immediately send ACKs at the MAC level.
With the dongle doing MAC-level work (sending ACKs and PAN filtering), we can implement the Zigbee logic in Python. Scapy helps a lot with packet construction: we can create our own beacons with the headers matching those of the original coordinator, which solves the incompatibility problem. However, we must still implement the higher-level Zigbee state machine in our code, including connection initiation, association, network key handling, APS/AF behavior, and application payload handling. That’s the hardest part.
There is one timing problem that we cannot solve in Python: the very first steps of initiating a connection require immediate packet responses. To handle this issue, we implemented the time-critical parts in C on the dongle firmware. For example, we can statically generate the packets for connection initiation in Python and hard-code them in the firmware. Then, using “if” statements, we can determine how to respond to each packet from the endpoint.
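The split between canned firmware responses and the slower Python side can be sketched as a simple dispatch table. The prefixes and replies below are placeholders, not real Zigbee frames, and in the actual setup this logic lives in the C firmware:

```python
from typing import Dict, Optional

def pick_response(frame: bytes, canned: Dict[bytes, bytes]) -> Optional[bytes]:
    """Mirror of the firmware's hard-coded 'if' chain: match an incoming
    frame against known prefixes and return the pre-built reply, or None
    to defer the frame to the Python side over the serial link."""
    for prefix, reply in canned.items():
        if frame.startswith(prefix):
            return reply
    return None

canned = {b"\x23": b"\x10\x20", b"\x63": b"\x30"}  # placeholder frames
# pick_response(b"\x23\xaa", canned) -> b"\x10\x20"
# pick_response(b"\x99", canned)     -> None
```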
So, we let the dongle (C/Zephyr) handle MAC-level ACKs and the initial association handshake, but let Python build higher-level packets and instruct the dongle what to send next when dealing with the application level. This hybrid model reduces latency and maintains a stable connection. The final architecture looks like this:
Deliver the key
Here’s a quick recap of how joining works: a Zigbee endpoint broadcasts beacon requests across channels, waits for beacon responses, chooses a coordinator, and sends an association request, followed by a data request to identify its short address. The coordinator then sends a transport key packet containing the network key. If the endpoint has the correct link key, it can decrypt the transport key packet and obtain the network key, meaning it has now been authenticated. From that point on, network traffic is encrypted with the network key. The entire process looks like this:
The sticking point is the transport key packet. This packet is protected using the link key, a per-device key shared between the coordinator (Trust Center) and the joining endpoint. Before the link key can be used for encryption, it often needs to be processed (hashed/derived) according to Zigbee’s key derivation rules. Since there is no readily available Python implementation of this hashing algorithm, we had to implement it ourselves.
I implemented the required key derivation; the code is available on our GitHub.
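The post doesn’t reproduce the derivation code here, but the algorithm itself is public: Zigbee’s keyed hash (spec Annex B) is the FIPS-198 HMAC construction built over the AES-128 Matyas-Meyer-Oseas hash, with a one-byte input selecting which key is derived (0x00 for the key-transport key). A from-the-spec sketch, assuming the third-party `cryptography` package for AES; the `ZigBeeAlliance09` value below is only the well-known default link key, used as a placeholder:

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_mmo(data: bytes) -> bytes:
    """Matyas-Meyer-Oseas hash with AES-128: H_i = E_{H_(i-1)}(M_i) XOR M_i."""
    # MD-style padding: append 0x80, zero-fill, end the last block with
    # the message bit length as a 16-bit big-endian value.
    bitlen = len(data) * 8
    data += b"\x80"
    while (len(data) + 2) % 16:
        data += b"\x00"
    data += bitlen.to_bytes(2, "big")
    h = bytes(16)  # all-zero initial hash value
    for i in range(0, len(data), 16):
        block = data[i:i + 16]
        enc = Cipher(algorithms.AES(h), modes.ECB()).encryptor()
        h = bytes(a ^ b for a, b in zip(enc.update(block), block))
    return h

def keyed_hash(key: bytes, input_byte: int) -> bytes:
    """HMAC per FIPS 198 with AES-MMO as the hash (block size 16 bytes);
    `key` is a 16-byte link key."""
    ipad = bytes(k ^ 0x36 for k in key)
    opad = bytes(k ^ 0x5C for k in key)
    return aes_mmo(opad + aes_mmo(ipad + bytes([input_byte])))

# Key-transport key derived from a placeholder (default) link key:
transport_key = keyed_hash(b"ZigBeeAlliance09", 0x00)
```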
Now that we’ve managed to obtain the hashed link key and deliver it to the endpoint, we can successfully mimic a coordinator.
The final success
If we follow the steps above, we can get the endpoint to join our spoofed coordinator. Once the endpoint joins, it will often remain associated with our coordinator, even after we power it down (until another event causes it to re-evaluate its connection). From that point on, we can interact with the device at the application layer using Python. Getting access as a coordinator allowed us to switch the relay on and off as intended, but also provided much more functionality and control over the node.
Conclusion
In conclusion, this study demonstrates why private vendor profiles in industrial environments complicate assessments: common tools and frameworks often fail, necessitating the development of custom tools and firmware. We tested a simple two-node scenario, but with multiple nodes the attack surface changes drastically and new attack vectors emerge (for example, attacks against routing protocols).
As we saw, a misconfigured Zigbee setup can lead to a complete network compromise. To improve Zigbee security, use the latest specification’s security features, such as installation codes to derive unique link keys for each device, and avoid hard-coded or default keys. Finally, do not rely on network-key encryption alone: add another layer of protection on top of it with end-to-end encryption at the application level.
Copy and Paste and Your Microsoft 365 Account Is Gone! ConsentFix Arrives and MFA Is at Risk
A new scheme called “ConsentFix” extends the capabilities of the already well-known ClickFix social-engineering attack and makes it possible to hijack Microsoft accounts without passwords or multi-factor authentication. To do so, attackers abuse a legitimate Azure CLI application and OAuth authentication features, turning the standard sign-in process into a hijacking tool.
ClickFix relies on showing the user pseudo-system instructions, asking them to run commands or perform a series of steps, supposedly to fix an error or prove their identity.
The “ConsentFix” variant, described by the Push Security team, keeps the same general deception scenario, but instead of installing malware it aims to steal an OAuth 2.0 authorization code, which is then used to obtain an Azure CLI access token.
The attack begins with a visit to a compromised legitimate website that ranks well on Google for relevant queries. A fake Cloudflare Turnstile widget appears on the page, requesting a valid email address. The attackers’ script checks the entered address against a predefined target list and filters out bots, analysts, and casual visitors. Only selected victims are shown the next step, structured like a typical ClickFix script with seemingly harmless verification steps.
The victim is asked to click the sign-in button, after which the genuine Microsoft domain opens in a separate tab. Instead of the usual login form, however, it uses an Azure authorization page that generates an OAuth code specific to the Azure CLI. If the user is already signed in to a Microsoft account, selecting it is enough; otherwise, sign-in proceeds normally through the authentic form.
After authorization, the browser is redirected to localhost, and the address bar displays a URL containing the Azure CLI authorization code tied to the account. The final step of the deception is to paste this address back into the malicious page, as instructed. At that point, the attacker can exchange the code for an access token and control the account through the Azure CLI without knowing the password or completing multi-factor authentication. During an active session, no sign-in is actually requested. To reduce the risk of exposure, the script runs only once per IP address.
Push Security’s experts advise security teams to monitor unusual Azure CLI activity, including sign-ins from unusual IP addresses, and to watch the use of legacy Graph permissions, which this scheme relies on to evade standard detection tools.
The article “Copy and Paste and Your Microsoft 365 Account Is Gone! ConsentFix Arrives and MFA Is at Risk” originally appeared on Red Hot Cyber.
A Pay Raise? Relax, the Only One Getting Paid Is the Hacker, Thanks to Your Negligence
A recent study by Datadog Security Labs reveals an ongoing operation targeting organizations that use Microsoft 365 and Okta for Single Sign-On (SSO) authentication. Using sophisticated techniques, the operation bypasses security controls with the goal of stealing session tokens.
Just as end-of-year performance reviews are about to be communicated to employees, this elaborate phishing scam has begun to spread, turning what looked like a pay raise into a cybersecurity threat.
Since early December 2025, the campaign has been ruthlessly exploiting company benefits. Unsuspecting recipients receive emails disguised as official communications from HR departments or payroll services such as ADP or Salesforce.
The subject lines are designed to trigger immediate urgency and curiosity, using phrases such as “Action required: review your 2026 salary and bonus information” or “Confidential: compensation update”.
According to the report, security researchers note that “the phishing URLs include a URL parameter indicating the targeted Okta tenant. It forwards any request to the original .okta.com domain, ensuring that all customizations to the Okta authentication page are preserved, making the phishing page appear more legitimate.”
Some attacks use PDF attachments encrypted with a password supplied in the email body: a classic tactic for bypassing email security scanners.
The threat is even more insidious when the victim lands on a counterfeit Microsoft 365 login page. The malicious code covertly inspects browser traffic. Once it detects, via a specific JSON field called FederationRedirectUrl, that the user is authenticating through Okta, the traffic is immediately intercepted.
Once the user enters their credentials, a client-side script named inject.js kicks in. It logs keystrokes to capture usernames and passwords, but its primary goal is session hijacking.
The infrastructure behind these attacks is evolving rapidly.
The threat actors use Cloudflare to hide their malicious sites from security bots and constantly refine their code.
The article “A Pay Raise? Relax, the Only One Getting Paid Is the Hacker, Thanks to Your Negligence” originally appeared on Red Hot Cyber.
A Security Vulnerability in PowerShell: A New Command Injection on Windows
An urgent security update has been released to fix a critical vulnerability in Windows PowerShell that allows attackers to execute malicious code on affected systems. The flaw, tracked as CVE-2025-54100, was disclosed on December 9, 2025 and poses a considerable threat to the integrity of computer systems worldwide.
Microsoft classifies the vulnerability as Important, with a CVSS severity score of 7.8. The weakness, identified as CWE-77, concerns the improper neutralization of special elements used in command injection attacks.
Microsoft considers exploitation of this vulnerability in real-world attacks unlikely. The vulnerability has already been publicly disclosed. Attackers need local access and user interaction to carry out the attack, so they must trick users into opening malicious files or running suspicious commands.
Microsoft has released security patches across several platforms. Organizations running Windows Server 2025, Windows 11 versions 24H2 and 25H2, and Windows Server 2022 should prioritize applying the patches via KB5072033 or KB5074204.
The flaw occurs when special elements in Windows PowerShell are improperly neutralized during command injection attacks, allowing unauthorized attackers to execute arbitrary code locally via specially crafted commands.
Microsoft recommends using the UseBasicParsing option to prevent script code in web content from executing. Organizations should also follow the guidance in KB5074596 on PowerShell 5.1 security measures to mitigate script execution risks.
The vulnerability affects a wide range of Windows operating systems, including Windows 10, Windows 11, Windows Server 2008 through 2025, and various system configurations. Users on Windows 10 and earlier versions need separate updates, such as KB5071546 or KB5071544.
The article “A Security Vulnerability in PowerShell: A New Command Injection on Windows” originally appeared on Red Hot Cyber.
Telegram Is Losing Its Status as a Convenient Platform for Cybercriminals
Telegram, which over the course of its history has become one of the most popular messaging apps in the world, is gradually losing its status as a convenient platform for cybercriminals.
Kaspersky Lab analysts tracked the life cycle of hundreds of underground channels and concluded that stricter moderation is literally pushing the underground out of the messenger.
The experts point out that Telegram lags behind dedicated secure messengers in privacy protection: chats do not use end-to-end encryption by default, the entire infrastructure is centralized, and the server code is closed.
While this is probably not a significant problem for the average user, for criminals it means dependence on third parties and a risk of deanonymization. It is no coincidence that proposals to ban Telegram entirely for “business” purposes are appearing ever more frequently on underground forums.
Comparison of messengers’ anonymity criteria (Kaspersky Lab)
Yet it is precisely the service’s built-in features that make it a convenient business platform for criminals.
Bots handle order intake and payment, and sell infostealer logs, MaaS subscriptions, doxxing services, credit card fraud, and other petty online scams. This “lean”, highly automated criminal activity fits Telegram’s model perfectly: the owner stays largely removed from operations, and files published in channels are stored indefinitely.
However, exclusive goods, such as access to corporate networks and zero-day exploits, remain on traditional darknet forums with their reputation systems, escrow deposits, and transaction guarantees.
A separate section of the study is devoted to the lifespan of underground channels. Based on data from more than 800 blocked resources, the analysts estimated their average lifespan at about seven months. The median, however, has grown: whereas in 2021-2022 a channel lasted five months on average, in 2023-2024 it reached nine. This does not mean the pressure has eased: the takedown chart shows a sharp spike in 2022, linked to hacktivist activity, and consistently high levels through mid-2025. Even the lows of late 2024 are comparable to the peaks of 2023.
Cybercriminals are trying to adapt: they switch channels “on demand”, post “innocuous” messages to camouflage their identity, and annotate posts with disclaimers and statements about the legality of the content. An analysis of long-running resources, however, shows that these measures are applied sporadically and generally fail to prevent takedowns.
As a result, large communities are beginning to look for alternatives. In 2025, for example, one of the largest groups, BFRepo, with nearly 9,000 subscribers, announced its move to the decentralized messenger SimpleX after a series of Telegram bans. Another well-known group, Angel Drainer, went even further and launched its own closed messenger with purported support for modern cryptographic protocols, while advising users to abandon Telegram.
The report’s authors conclude unambiguously: Telegram once seemed a relatively safe haven for criminals, but that era is ending. Growing moderation and pressure from various actors, from copyright holders to hacktivist groups, are making the messenger’s underground infrastructure increasingly unstable.
The disappearance of underground channels from Telegram, however, does not mean a reduction in cyber threats: criminal communities are simply migrating to other services or developing their own solutions. The analysts urge companies and security specialists to closely monitor platform migration and adapt their monitoring systems to new hotbeds of cybercriminal activity.
The article “Telegram Is Losing Its Status as a Convenient Platform for Cybercriminals” originally appeared on Red Hot Cyber.
DIY Synth Takes Inspiration From Fretted Instruments
There are a million and one MIDI controllers and synths on the market, but sometimes it’s just more satisfying to make your own. [Turi Scandurra] very much went his own way when he put together his Diapasonix instrument.
Right away, the build is somewhat reminiscent of a stringed instrument, what with its buttons laid out in four “strings” of six “frets” each. Only they’re not so much buttons as individual pads on a capacitive touch controller. A Raspberry Pi Pico 2 is responsible for reading the 24 pads, with the aid of two MPR121 capacitive touch ICs.
The Diapasonix can be played as an instrument in its own right, using the AMY synthesis engine. This provides a huge range of patches from the Juno 6 and DX7 synthesizers of old. Onboard effects like delay and reverb can be used to alter the sound. Alternatively, it can be used as a MIDI controller, feeding its data to a PC attached over USB. It can be played in multiple modes, with either direct note triggers or with a “strumming” method instead.
We’ve featured a great many MIDI controllers over the years, from the artistic to the compact. Video after the break.
youtube.com/embed/DMDRZ1dwdG4?…
Step into my Particle Accelerator
If you get a chance to visit a computer history museum and see some of the very old computers, you’ll think they took up a full room. But if you ask, you’ll often find that the power supply was in another room and the cooling system in yet another. So when a computer fit on, say, a large desk, with maybe a few tape drives, all together in a normal-sized office, people thought of it as “small.” We’re seeing a similar evolution in particle accelerators, which a new startup company says can be room-sized, according to a post by [Charles Q. Choi] over at IEEE Spectrum.
Usually, when you think of a particle accelerator, you think of a giant housing like the 3.2-kilometer-long SLAC accelerator. That’s because these machines use magnets to accelerate the particles, and just like a car needs a certain distance to get to a particular speed, you have to have room for the particle to accelerate to the desired velocity.
A relatively new technique, though, doesn’t use magnets. Instead, very powerful (but very short) laser pulses create plasma from gas. The plasma oscillates in the wake of the laser, accelerating electrons to relativistic speeds. These so-called wakefield accelerators can, in theory, produce very high-energy electrons and don’t need much space to do it.
The startup company, TAU Systems, is about to roll out a commercial system that can generate 60 to 100 MeV at 100 Hz. They also intend to increase the output over time. For reference, SLAC generates 50,000 MeV. But, then again, it takes two miles of raceway to do it.
The initial market is likely to be radiation testing for space electronics. Higher energies will open the door to next-generation X-ray lithography for IC production, and more. There are likely applications for accelerated electrons that we don’t see today because it isn’t feasible to generate them without a massive facility.
On the other hand, don’t get your checkbook out yet. The units will cost about $10 million at the bottom end. Still a bargain compared to the alternatives.
You can do some of this now on a chip. Particle accelerators have come a long way.
Photo from Tau Systems.
Designing a Simpler Cycloidal Drive
Cycloidal drives have an entrancing motion, as well as a few other advantages – high torque and efficiency, low backlash, and compactness among them. However, much as [Sergei Mishin] likes them, it can be difficult to 3D-print high-torque drives, and it’s sometimes inconvenient to have the input and output shafts in-line. When, therefore, he came across a video of an industrial three-ring reducing drive, which works on a similar principle, he naturally designed his own 3D-printable drive.
The main issue with 3D-printing a normal cycloidal drive is with the eccentrically-mounted cycloidal plate, since the pins which run through its holes need bearings to keep them from quickly wearing out the plastic plate at high torque. This puts some unfortunate constraints on the size of the drive. A three-ring drive also uses an eccentric drive shaft to cause cycloidal plates to oscillate around a set of pins, but the input and output shafts are offset so that the plates encompass both the pins and the eccentric driveshaft. This simplifies construction significantly, and also makes it possible to add more than one input or output shaft.
As the name indicates, these drives use three plates 120 degrees out of phase with each other; [Sergei] tried a design with only two plates 180 degrees out of phase, but since there was a point at which the plates could rotate just as easily in either direction, it jammed easily. Unlike standard cycloidal gears, these plates use epicycloidal rather than hypocycloidal profiles, since they move around the outside of the pins. [Sergei] helpfully wrote a Python script that can generate profiles, animate them, and export to DXF. The final performance of these drives will depend on their design parameters and printing material, but [Sergei] tested a 20:1 drive and reached a respectable 9.8 Newton-meters before it started skipping.
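[Sergei]’s own script isn’t reproduced here, but generating such a profile comes down to sampling the curve family involved. A minimal sketch with placeholder parameters (an epitrochoid; setting d equal to r gives a true epicycloid):

```python
import math

def epitrochoid(R, r, d, n=720):
    """Sample an epitrochoid: a circle of radius r rolling around the
    outside of a fixed circle of radius R, tracing a point at distance
    d from the rolling circle's center. Integer R/r closes the curve
    after one revolution with R/r lobes."""
    pts = []
    k = (R + r) / r
    for i in range(n):
        t = 2 * math.pi * i / n
        x = (R + r) * math.cos(t) - d * math.cos(k * t)
        y = (R + r) * math.sin(t) - d * math.sin(k * t)
        pts.append((x, y))
    return pts

# e.g. a 20-lobe plate profile for a 20:1-style drive (placeholder numbers):
profile = epitrochoid(R=20.0, r=1.0, d=0.75)
```

Point sets like this can then be exported to DXF or stepped through for an animation.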
Even without this design’s advantages, it’s still possible to 3D-print a cycloidal drive, its cousin the harmonic drive, or even more exotic drive configurations.
youtube.com/embed/WMgny-yDjvs?…
CSRA: Why We Need a New Way of Perceiving What We Can No Longer Control
Cybersecurity today lives in an almost paradoxical contradiction: the more data grows, the more our ability to understand what is happening shrinks. SOCs overflow with logs, alerts, metrics, and dashboards, yet the most serious attacks, from ransomware to stealthy espionage campaigns, keep escaping notice at precisely the decisive moment: when everything is about to begin.
The problem is not information. The problem is the gaze: the ability to visualize what matters!
There are excellent tools for counting, classifying, and correlating; far fewer for perceiving.
So, while the digital surface grows in complexity and speed, we keep observing cyberspace as an inventory of objects (servers, IPs, packets) when in reality it is a living, dynamic, pulsing environment made of relationships that keep changing shape.
To try to change this, Cyber Situational-Relational Awareness (CSRA) was born: a model that puts perception at the center.
1. The real problem is not technical: it is perceptual.
We live in a digital ecosystem where everything produces signals. Every service, application, sensor, user, and automated process leaves one or more traces. Defenders have never had so much “visibility” at their disposal. And yet the paradox persists: attacks succeed, indicators are not enough, complexity overwhelms us.
The reason is simple: we have an enormous amount of data and information, but very little awareness of what it means at the moment it changes.
Traditional tools tell us what happened, sometimes what is happening right now, almost never what is about to happen.
What is missing is the perception of change, of the subtle variation that precedes the incident.
Conceptually, CSRA was created precisely to capture these minimal transformations, which today get lost in background noise.
2. Cyberspace is not an inventory of objects but an ecosystem of relationships
For thirty years we have heard cyberspace described as a set of isolated entities: hosts, servers, IPs, routers. We built tools designed to monitor these objects, each with its own measurable attributes and behaviors. But the reality of modern attacks paints a completely different picture.
When a threat manifests itself, it is not the object that betrays it, but the relationship between objects.
It is not the anomalous server that tells us something is happening: it is the way it is communicating with other nodes.
It is not a single strange packet: it is the rhythm of its interactions.
It is not an isolated event: it is the change in a constellation of micro-events.
Here is CSRA’s first great insight: a cyber node is not a device but a small ecosystem composed of an entity (human or automated) and the technology it uses to communicate. To understand it, we must observe how it evolves over time, not what it is statically.
3. Understanding an attack means listening to the rhythm of the network.
Every network has a rhythm of its own: some alternate between peaks and quiet, others show regularity in their connections, and then there are corporate networks whose patterns repeat day after day.
It is precisely when this natural rhythm breaks, even almost imperceptibly, that something starts to move.
CSRA focuses on this: the ability to perceive a change in rhythm before it becomes a full-blown incident. Of course, not every deviation implies an attack: sometimes it is a malfunction or a change of habits. But it is always better to check.
In its early stages, an attack is almost invisible. It modifies a sequence of actions, alters a habit, creates a small vibration in the network. An automated procedure that becomes more active than usual; a node that establishes unexpected connections; a cluster that seems to move more restlessly.
These vibrations are the early signals of an incident. CSRA is designed to listen for them.
4. The “local space”: where the network tells us to look.
The network is too vast to be observed all in the same way.
And here comes CSRA’s second insight: not everything deserves attention, at least not at the same time. Every significant transformation starts from a precise point in the network. It is there, in that small region, that the rhythm changes. CSRA identifies this emerging region and turns it into a “local space”: a kind of dynamic lens focused exactly where the change is taking place.
The local space is not predefined: it is generated by the network itself. It is the network that signals where to look. This mechanism is what allows a CSRA system not to drown in data while still noticing the right signals at the right time.
5. The network as a landscape: when the invisible becomes evident
One of CSRA’s most fascinating aspects is its ability to turn the network into a visual landscape. Not a dashboard full of numbers, but an almost geographic representation of activity: heights indicating intensity, valleys revealing stability, ridges signaling turbulence.
For the first time, the analyst can “see” the incident the way one sees a weather disturbance: a zone heating up, deforming, moving. It is a more natural, almost intuitive way of perceiving cyberspace, which makes it possible to immediately notice what does not add up, even without knowing in advance what it is.
This representation is not a graphic flourish: it is a form of awareness.
What was hidden becomes visible.
6. Why CSRA could potentially see what other tools do not.
The reason is simple and, at the same time, revolutionary: CSRA does not search for what it knows; it observes what changes. Tools based on signatures, rules, and patterns recognize only what has already been codified. They work very well against known threats, much less so against new techniques, creative attacks, or deliberately irregular behavior.
CSRA, by contrast, approaches cyberspace as an organism in constant mutation.
When a node starts behaving unusually, when two systems improvise an unexpected dialogue, when a part of the network comes alive anomalously, CSRA perceives it immediately.
It is an attention similar to human attention: instinctive, sensitive to movement, oriented toward variation. Behind this perceptual capability lies precise mathematics: that of variations, of entropy, and of changes between connected nodes. No formulas are needed to grasp it: what counts is the principle of measuring how the network transforms over time.
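The article deliberately stops short of formulas, but as one illustrative reading of “measuring how the network transforms over time”: compute the Shannon entropy of each time window’s connection distribution and flag windows where it jumps. Everything below (the window format and the threshold) is an assumption for illustration:

```python
import math
from collections import Counter

def entropy(events):
    """Shannon entropy (bits) of a window of (src, dst) connection pairs."""
    counts = Counter(events)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def rhythm_breaks(windows, threshold=1.0):
    """Indices of windows whose entropy shifts from the previous window
    by more than `threshold` bits."""
    h = [entropy(w) for w in windows]
    return [i for i in range(1, len(h)) if abs(h[i] - h[i - 1]) > threshold]

# Three quiet windows, then one where a node fans out to many new peers:
quiet = [("a", "b")] * 8 + [("a", "c")] * 8
noisy = [("a", f"n{i}") for i in range(16)]
breaks = rhythm_breaks([quiet, quiet, quiet, noisy])
```

A real system would of course look at many such signals at once; the sketch only shows the principle of reacting to variation rather than to known signatures.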
7. CSRA is, first of all, a new way of thinking.
It is not a piece of software to add, nor a module to integrate. It is a new mindset for approaching security in a world that changes too fast. CSRA carries a clear message: we can no longer limit ourselves to collecting data. We must learn to perceive transformations.
A network observed through CSRA is not a collection of technical objects but a living environment. The SOC stops being an alarm center and becomes a place of interpretation, where defense is not a late reaction but a continuous exercise in understanding.
Conclusion: CSRA is the cyber perception of the 21st century.
CSRA does not introduce yet another tool into the already endless collection. It introduces something more important: a new way of looking at cyberspace.
Not as a list of isolated entities, but as a fabric of relationships that breathes, changes, and speaks to us, if we know how to listen.
In an era in which attacks transform faster than our defense models, perceiving change becomes a strategic necessity. And CSRA represents the first step toward this new form of awareness.
It is not just technology. It is a new human capability: seeing what was previously invisible.
The article “CSRA: Why We Need a New Way of Perceiving What We Can No Longer Control” originally appeared on Red Hot Cyber.
Amiibo Emulator Becomes Pocket 2.4 GHz Spectrum Analyzer
As technology marches on, gear that once required expensive lab equipment is now showing up in devices you can buy for less than a nice dinner. A case in point: those tiny displays originally sold as Nintendo amiibo emulators. Thanks to [ATC1441], one of these pocket-sized gadgets has been transformed into a 2.4 GHz spectrum analyzer.
These emulators are built around the Nordic nRF52832 SoC, the same chip found in tons of low-power Bluetooth devices, and most versions come with either a small LCD or OLED screen plus a coin cell or rechargeable LiPo. Because they all share the same core silicon, [ATC1441]’s hack works across a wide range of models. Don’t expect lab-grade performance; the analyzer only covers the range the Bluetooth chip inside supports. But that’s exactly where Wi-Fi, Bluetooth, Zigbee, and a dozen other protocols fight for bandwidth, so it’s perfect for spotting crowded channels and picking the least congested one.
Flashing the custom firmware is dead simple: put the device into DFU mode, drag over the .zip file, and you’re done. All the files, instructions, and source are up on [ATC1441]’s PixlAnlyzer GitHub repo. Check out some of the other amiibo hacks we’ve featured as well.
youtube.com/embed/kgrsfGIeL9w?…
Extremely Rare Electric Piano Restoration
Not only are pianos beautiful musical instruments that have stood the test of many centuries of time, they’re also incredible machines. Unfortunately, all machines wear out over time, which means it’s often not feasible to restore every old piano we might come across. But a few are worth the trouble, and [Emma] had just such a unique machine roll into her shop recently.
What makes this instrument so unique is that it’s among the first electric pianos to be created, and one of only three known of this particular model that survive to the present day. This is a Vivi-Tone Clavier piano which dates to the early 1930s. In an earlier video she discusses more details of its inner workings, but essentially it uses electromagnetic pickups like a guitar to detect vibrations in plucked metal reeds.
To begin the restoration, [Emma] removes the action and then lifts out all of the keys from the key bed. This instrument is almost a century old so it was quite dirty and needed to be cleaned. The key pins are lubricated, then the keys are adjusted so that they all return after being pressed. From there the keys are all adjusted so that they are square and even with each other. With the keys mostly in order, her attention turns to the action, where all of the plucking mechanisms can be filed and other adjustments made. The last step was perhaps the most tedious, which is “tuning” the piano by adjusting the pluckers so that all of the keys produce a similar volume of sound, and then adding some solder to the reeds that were slightly out of tune.
With all of those steps completed, the piano is back in working order, although [Emma] notes that since these machines were so rare and produced so long ago there’s no real way to know if the restoration sounds like what it would have when it was new. This is actually a similar problem we’ve seen before on this build that hoped to model the sound of another electric instrument from this era called the Luminaphone.
youtube.com/embed/cEG7hD28dW4?…
Thanks to [Eluke] for the tip!
Jenny’s Daily Drivers: Haiku R1/beta5
Back in the mid 1990s, the release of Microsoft’s Windows 95 operating system cemented the Redmond software company’s dominance over most of the desktop operating system space. Apple were still in the doldrums, waiting for Steve Jobs to return with his NeXT, while other would-be challengers such as IBM’s OS/2 or Commodore’s Amiga were sinking into obscurity.
Into this unpromising marketplace came Be Inc., with their BeBox computer and its very nice BeOS operating system. To try it out as we did at a trade show some time in the late ’90s was to step into a very polished multitasking multimedia OS, but sadly one which failed to gather sufficient traction to survive. The story ended in the early 2000s as Be were swallowed by Palm, and a dedicated band of BeOS enthusiasts set about implementing a free successor OS. This has become Haiku, and while it’s not BeOS it retains API compatibility with, and certainly feels a lot like, its inspiration. It’s been on my list for a Daily Drivers article for a while now, so it’s time to download the ISO and give it a go. I’m using the AMD64 version.
A Joy To Use, After A Few Snags
If you ignore the odd font substitution in WebPositive, it’s a competent browser.
This isn’t the first time I’ve given Haiku a go in an attempt to write about it for this series, and I have found it consistently isn’t happy with my array of crusty old test laptops. So this time I pulled out something newer, my spare Lenovo Thinkpad X280. I was pleased to see that the Haiku installation USB volume booted and ran fine on this machine, and I was soon at the end of the install and ready to start my Haiku journey.
Here I hit my first snag, because sadly the OS hadn’t quite managed to set up its UEFI booting correctly. I thus found myself unexpectedly in a GRUB prompt, as the open source bootloader was left in place from a previous Linux install. Fixing this wasn’t too onerous as I was able to copy the relevant Haiku file to my UEFI partition, but it was a little unexpected. On with the show then, and in to Haiku.
In use, this operating system is a joy. Its desktop look and feel is polished, in a late-90s sense. There was nothing jarring or unintuitive, and though I had never used Haiku before I was never left searching for what I needed. It feels stable too, I was expecting the occasional crash or freeze, but none came. When I had to use the terminal to move the UEFI file it felt familiar to me as a Linux user, and all my settings were easy to get right.
Never Mind My Network Card
If only the network setup on my Thinkpad was as nice as the one in the VM.
I hit a problem when it came to network setup, though: I found its wireless networking to be intermittent. I could connect to my network, but while DHCP would give it an IP address it failed to pick up the gateway and thus wasn’t a useful external connection. I could fix this by going to a fixed IP address and entering the gateway and DNS myself, and that gave me a connection, but not a reliable one. I would have it for a minute or two, and then it would be gone. Enough time for a quick software update and to load Hackaday on its WebPositive web browser, but not enough time to do any work. We’re tantalisingly close to a useful OS here, and I don’t want this review to end on that note.
The point of this series has been to try each OS in as real a situation as possible, to do my everyday Hackaday work of writing articles and manipulating graphics. I have used real hardware to achieve this, a motley array of older PCs and laptops. As I’ve described in previous paragraphs I’ve reached the limits of what I can do on real hardware due to the network issue, but I still want to give this one a fair evaluation. I have thus here for the first time used a test subject in a VM rather than on real hardware. What follows then is courtesy of Gnome Boxes on my everyday Linux powerhouse, so please excuse the obvious VM screenshots.
This One Is A True Daily Driver
There’s plenty of well-ported software, but nothing too esoteric.
With a Haiku install having a working network connection, it becomes an easy task to install software updates, and install new software. The library has fairly up-to-date versions of many popular packages, so I was easily able to install GIMP and LibreOffice. WebPositive is WebKit-based and up-to-date enough that the normally-picky Hackaday back-end doesn’t complain at me, so it more than fulfils my Daily Drivers requirement for an everyday OS I can do my work on. In fact, the ’90s look-and-feel and the Wi-Fi issues notwithstanding, this OS feels stable and solid in a way that many of the other minority OSes I’ve tried do not. I could use this day-to-day, and the Haiku Thinkpad could accompany me on the road.
There is a snag though, and it’s not the fault of the Haiku folks but probably a function of the size of their community; this is a really great OS, but sadly there are software packages that simply aren’t available for it. They’ve concentrated on multimedia, the web, games, and productivity in their choice of software to port, and some of the more esoteric or engineering-specific stuff I use is unsurprisingly not there. I cannot fault them for this given the obvious work that’s gone into this OS, but it’s something to consider if your needs are complex.
Haiku then, it’s a very nice desktop operating system that’s polished, stable, and a joy to use. Excuse it a few setup issues and take care to ensure your Wi-Fi card is on its nice list, and you can use it day-to-day. It will always have something of the late ’90s about it, but think of that as not a curse but the operating system some of us wished we could have back in the real late ’90s. I’ll be finding a machine to hang onto a Haiku install, this one bears further experimentation.
700,000 Records from an Italian Professional Registry for Sale on the Dark Web
A new alarm surfaced from the cybercrime underground just a few hours ago. It was raised by ParagonSec, a company specializing in monitoring the activities of cyber gangs and clandestine marketplaces, which reported the appearance on an underground forum of an alleged database containing more than 700,000 records belonging to an unspecified Italian professional registry.
The listing, posted by a user signing as gtaviispeak, advertises the availability of a “fresh db” containing an impressive quantity of sensitive information from an as-yet-unidentified database holding extremely detailed personal data.
Disclaimer: This report includes screenshots and/or text taken from publicly accessible sources. The information is provided exclusively for threat-intelligence purposes and to raise awareness of cybersecurity risks. Red Hot Cyber condemns any unauthorized access, improper dissemination, or illicit use of such data. At present it is not possible to independently verify the authenticity of the information reported, as the organization involved has not yet released an official statement on its website. Consequently, this article should be considered for informational and intelligence purposes only.
Screenshot provided by Paragon Sec to Red Hot Cyber
The database contents: an extremely high risk
According to the post, the database allegedly includes a long list of fields, among them:
- Full personal details: first name, last name, sex, place of birth, date of birth
- Tax code (codice fiscale)
- Email addresses and phone numbers (landline and mobile)
- Passwords (it is not known which site they refer to)
- Employment data: employer, role, professional category
- Residence and domicile addresses
- Postal code, province, municipality
- Any group membership and professional-status information
- Administrative registration data
- IP address associated with the user
The presence of passwords in the clear (or in any case available in the dump) considerably increases the risk of follow-on compromises, especially if users reuse the same credentials on other services.
The sale is taking place on Telegram
The seller invites interested parties to contact them via a dedicated Telegram channel, a practice now well established in the sale of illegally obtained databases. The post also includes a link to an alleged sample of the dataset, intended to demonstrate the authenticity of the material.
A concrete threat to citizens and businesses
If confirmed, this leak poses a significant risk of:
- Tax fraud, thanks to the availability of the tax code
- Highly targeted phishing (spear phishing) based on personal and professional data
- Identity theft through combinations of personal details, contact information, and credentials
- Attacks against public or professional bodies, exploiting employment data and the associated email addresses
The level of detail of the listed fields suggests this is an institutional database, or at any rate one originating from an administrative platform holding certified data.
Although a forum user noted that the accounts may not be ‘fresh’, that matters little: information such as personal details, tax codes, and contact details does not change over time. The material therefore remains extremely sensitive and can easily be exploited for various kinds of fraud.
Data leaks from public bodies and professional registries are on the rise across Europe. Cybercriminals are increasingly targeting certified, official databases, as they enable more credible and profitable attacks.
The article “700,000 Records from an Italian Professional Registry for Sale on the Dark Web” originally appeared on Red Hot Cyber.
tinyCore Board Teaches Core Microcontroller Concepts
Looking for an educational microcontroller board to get you or a loved one into electronics? Consider the tinyCore – a small and nifty hexagon-shaped ESP32 board by [MR. INDUSTRIES], simplified for learning yet featureful enough to offer plenty of growth, and fully open.
The tinyCore board’s hexagonal shape makes it more flexible for building wearables than the vaguely rectangular boards we’re used to, and it’s got a good few onboard gadgets. Apart from the expected WiFi, BLE, and GPIOs, you get battery management, a 6DoF IMU (LSM6DSOX) in the center of the board, a micro SD card slot for all your data needs, and two QWIIC connectors. As such, you could easily turn it into, say, a smartwatch, a motion-sensitive tracker, or a controller for a small robot – there are even a few sample projects for you to try.
You can buy one, or assemble a few yourself thanks to the open-source-ness – and, to us, the biggest factor is the [MR.INDUSTRIES] community, with documentation, examples, and people learning with this board and sharing what they make. Want a device with a big display that similarly wields a library of examples and a community? Perhaps check out the Cheap Yellow Display hacks!
youtube.com/embed/3Nd6zynJclk?…
We thank [Keith Olson] for sharing this with us!
Earthquake? No, AI Fake! An AI-Generated Image Paralyzes British Trains
In England, trains were suspended for several hours over the supposed collapse of a railway bridge, prompted by a fake image generated by a neural network. Following an overnight earthquake felt by residents of Lancashire and the southern Lake District, images circulated on social media showing the Carlisle Bridge in Lancaster severely damaged.
Network Rail, the state-owned company that manages Britain’s railway infrastructure, said it became aware of the image at around 00:30 GMT and, as a precaution, suspended rail traffic over the bridge until engineers could verify its condition.
At around 02:00 GMT the line was fully reopened: no damage was found, and a BBC journalist who visited the site confirmed that the bridge structure was intact.
A photo taken by a BBC North West Tonight reporter showed the bridge intact
A BBC journalist submitted the image to an AI chatbot, which highlighted several telltale signs of a possible fake. Network Rail stresses, however, that any safety warning must be treated as if the image were genuine, since human lives are at stake.
The company reported that 32 trains, both passenger and freight, were delayed by the incident. Some had to be stopped or slowed on their approach to the bridge, while others were held up because their route was blocked by services already running late. Given the length of the West Coast Main Line, the knock-on effects also reached trains heading north to Scotland.
Network Rail urged the public to consider the potential consequences of such fake images. According to a company spokesperson, creating and distributing them causes entirely unnecessary delays, costs taxpayers money, and adds to the workload of staff already working flat out to keep the railway running smoothly and safely.
The company stressed that the safety of passengers and staff remains an absolute priority, and that any potential threat to the infrastructure is therefore taken extremely seriously.
British Transport Police confirmed they had been informed of the situation, but no specific investigation into the incident is currently under way.
The article “Earthquake? No, AI Fake! An AI-Generated Image Paralyzes British Trains” originally appeared on Red Hot Cyber.
Creating User-Friendly Installers Across Operating Systems
After you have written the code for some awesome application, you of course want other people to be able to use it. Although simply directing them to the source code on GitHub or similar is an option, not every project lends itself to the traditional configure && make && make install, with dependencies often being the sticking point.
Asking the user to install dependencies and set up any filesystem links is an option, but having an installer of some type tackle all this is of course significantly easier. Typically this would contain the precompiled binaries, along with any other required files which the installer can then copy to their final location before tackling any remaining tasks, like updating configuration files, tweaking a registry, setting up filesystem links and so on.
As simple as this sounds, it comes with a lot of gotchas, with Linux distributions in particular being a tough nut. Whereas on MacOS, Windows, Haiku and many other OSes you can provide a single installer file for the respective platform, for Linux things get interesting.
Windows As Easy Mode
For all the flak directed at Windows, it is hard to deny that it is a stupidly easy platform to target with a binary installer, with equally flexible options available on the side of the end-user. Although Microsoft has nailed down some options over the years, such as enforcing the user’s home folder for application data, it’s still among the easiest platforms to install an application on.
While working on the NymphCast project, I found myself looking at a pleasant installer to wrap the binaries into, initially opting to use the NSIS (Nullsoft Scriptable Install System) installer as I had seen it around a lot. While this works decently enough, you do notice that it’s a bit crusty and especially the more advanced features can be rather cumbersome.
This is where a friend who was helping out with the project suggested using the more modern Inno Setup instead, which is rather like the well-known InstallShield utility, except OSS and thus significantly more accessible. Thus the pipeline on Windows became the following:
- Install dependencies using vcpkg.
- Compile project using NMake and the MSVC toolchain.
- Run the Inno Setup script to build the .exe-based installer.
Installing applications on Windows is helped massively both by having a lot of freedom where to install the application, including on a partition or disk of choice, and by having the start menu structure be just a series of folders with shortcuts in them.
The Qt-based NymphCast Player application’s .iss file covers essentially such a basic installation process, while the one for NymphCast Server also adds the option to download a pack of wallpaper images, and asks for the type of server configuration to use.
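For illustration, a minimal Inno Setup script covering such a basic install could look like the sketch below. This is a hedged example under assumed names, not NymphCast’s actual .iss file; the application name, source path, and binary name are placeholders.

```ini
; Illustrative minimal Inno Setup script (placeholder names, not the real NymphCast .iss)
[Setup]
AppName=ExampleApp
AppVersion=1.0
; Let the user pick the folder; default to Program Files
DefaultDirName={autopf}\ExampleApp

[Files]
; Copy the prebuilt binary into the chosen install folder
Source: "build\example.exe"; DestDir: "{app}"

[Icons]
; Start menu entry: just a shortcut in a folder, as described above
Name: "{autoprograms}\ExampleApp"; Filename: "{app}\example.exe"
```

Inno Setup compiles this into a single .exe installer, and generates the matching uninstaller automatically.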
Uninstalling such an application basically reverses the process, with the uninstaller installed alongside the application and registered in the Windows registry together with the application’s details.
MacOS As Proprietary Mode
Things get a bit weird with MacOS, with many application installers coming inside a DMG image or PKG file. The former is just a disk image that can be used for distributing applications, and the user is generally provided with a way to drag the application into the Applications folder. The PKG file is more of a typical installer as on Windows.
Of course, the problem with anything MacOS is that Apple really doesn’t want you to do anything with MacOS if you’re not running MacOS already. This can be worked around, but just getting to the point of compiling for MacOS without running XCode on MacOS on real Apple hardware is a bit of a fool’s errand. Not to mention Apple’s insistence on signing these packages, if you don’t want the end-user to have to jump through hoops.
Although I have built both iOS and OS X/MacOS applications in the past – mostly for commercial projects – I decided to not bother with compiling or testing my projects like NymphCast for Apple platforms without easy access to an Apple system. Of course, something like Homebrew can be a viable alternative to the One True Apple Way if you merely want to get OSS on MacOS. I did add basic support for Homebrew in NymphCast, but without a MacOS system to test it on, who knows whether it works.
Anything But Linux
The world of desktop systems is larger than just Windows, MacOS and Linux, of course. Even mobile OSes like iOS and Android can be considered to be ‘desktop OSes’ with the way that they’re being used these days, also since many smartphones and tablets can be hooked up to a larger display, keyboard and mouse.
How to bootstrap Android development, and how to develop native Android applications has been covered before, including putting APK files together. These are the typical Android installation files, akin to other package manager packages. Of course, if you wish to publish to something like the Google Play Store, you’ll be forced into using app bundles, as well as various ways of signing the resulting package.
The idea of using a package for a built-in package manager instead of an executable installer is a common one on many platforms, with iOS and kin being similar. On FreeBSD, which also got a NymphCast port, you’d create a bundle for the pkg package manager, although you can also whip up an installer. In the case of NymphCast there is a ‘universal installer’ built into the Makefile, invoked after compilation via the fully automated setup.sh shell script; it exploits the fact that OSes like Linux, FreeBSD and even Haiku are quite similar at the filesystem level.
That said, the Haiku port of NymphCast is still as much of a Beta as Haiku itself, as detailed in the write-up which I did on the topic. Once Haiku is advanced enough I’ll be creating packages for its pkgman package manager as well.
The Linux Chaos Vortex
There is a simple, universal way to distribute software across Linux distributions, and it’s called the ‘tar.gz method’, referring to the time-honored method of distributing source as a tarball, for local compilation. If this is not what you want, then there is the universal RPM installation format which died along with the Linux Standard Base. Fortunately many people in the Linux ecosystem have worked tirelessly to create new standards which will definitely, absolutely, totally resolve the annoying issue of having to package your applications into RPMs, DEBs, Snaps, Flatpaks, ZSTs, TBZ2s, DNFs, YUMs, and other easily remembered standards.
It is this complete and utter chaos with Linux distros which has made me not even try to create packages for these, and instead offer only the universal .tar.gz installation method. After un-tar-ing the server code, simply run setup.sh and lean back while it compiles the thing. After that, run install_linux.sh and presto, the whole shebang is installed without further ado. I also provided an uninstall_linux.sh script to complete the experience.
That said, at least one Linux distro has picked up NymphCast and its dependencies like Libnymphcast and NymphRPC into their repository: Alpine Linux. Incidentally FreeBSD also has an up to date package of NymphCast in its repository. I’m much obliged to these maintainers for providing this service.
Perhaps the lesson here is that if you want to get your neatly compiled and packaged application on all Linux distributions, you just need to make it popular enough that people want to use it, so that it ends up getting picked up by package repository contributors?
Wrapping Up
With so many details to cover, there’s also the easily forgotten topic that was so prevalent in the Windows installer section: integration with the desktop environment. On Windows, the Start menu is populated via simple shortcut files, while one sort-of standard on Linux (and FreeBSD as a corollary) are Freedesktop’s XDG Desktop Entry files. Or .desktop files for short, which purportedly should give you a similar effect.
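For reference, a minimal Desktop Entry file looks something like the following. The names here are illustrative placeholders rather than NymphCast’s actual shipped file.

```ini
[Desktop Entry]
Type=Application
Name=NymphCast Player
Comment=Cast audio and video to a NymphCast server
# Executable and icon names are assumptions for this example
Exec=nymphcast_player
Icon=nymphcast_player
Categories=AudioVideo;Player;
```

Dropped into a directory such as ~/.local/share/applications, a file like this is what is supposed to make the application show up in the desktop environment’s launcher menus.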
Only that’s not how anything works with the Linux ecosystem, as every single desktop environment has its own ideas on how these files should be interpreted, where they should be located, or whether to ignore them completely. My own experience there is that relying on them for more advanced features, such as auto-starting a graphical application on boot (which cannot be done with Systemd, natch) without something throwing an XDG error or not finding a display, is basically a fool’s errand. Perhaps things are better here if you use KDE Plasma as your DE, but this was an installer thing that I failed to solve after months of trial and error.
Long story short, OSes like Windows are pretty darn easy to install applications on, MacOS is okay as long as you have bought into the Apple ecosystem and don’t mind hanging out there, while FreeBSD is pretty simple until it touches the Linux chaos via X11 and graphical desktops. Meanwhile, for your sanity’s sake, I’d strongly advise distributing software on Linux only as a tarball.
Hunting for Mythic in network traffic
Post-exploitation frameworks
Threat actors frequently employ post-exploitation frameworks in cyberattacks to maintain control over compromised hosts and move laterally within the organization’s network. While they once favored closed-source frameworks, such as Cobalt Strike and Brute Ratel C4, open-source projects like Mythic, Sliver, and Havoc have surged in popularity in recent years. Malicious actors are also quick to adopt relatively new frameworks, such as Adaptix C2.
Analysis of popular frameworks revealed that their development focuses heavily on evading detection by antivirus and EDR solutions, often at the expense of stealth against systems that analyze network traffic. While obfuscating an agent’s network activity is inherently challenging, agents must inevitably communicate with their command-and-control servers. Consequently, an agent’s presence in the system and its malicious actions can be detected with the help of various network-based intrusion detection systems (IDS) and, of course, Network Detection and Response (NDR) solutions.
This article examines methods for detecting the Mythic framework within an infrastructure by analyzing network traffic. This framework has gained significant traction among various threat actors, including Mythic Likho (Arcane Wolf) and GOFFEE (Paper Werewolf), and continues to be used in APT and other attacks.
The Mythic framework
Mythic C2 is a multi-user command and control (C&C, or C2) platform designed for managing malicious agents during complex cyberattacks. Mythic is built on a Docker container architecture, with its core components – the server, agents, and transport modules – written in Python. This architecture allows operators to add new agents, communication channels, and custom modifications on the fly.
Since Mythic is a versatile tool for the attacker, from the defender’s perspective, its use can align with multiple stages of the Unified Kill Chain, as well as a large number of tactics, techniques, and procedures in the MITRE ATT&CK® framework.
- Pivoting is a tactic where the attacker uses an already compromised system as a pivot point to gain access to other systems within the network. In this way, they gradually expand their presence within the organization’s infrastructure, bypassing firewalls, network segmentation, and other security controls.
- Collection (TA0009) is a tactic focused on gathering and aggregating information of value to the attacker: files, credentials, screenshots, and system logs. In the context of network operations, collection is often performed locally on compromised hosts, with data then packaged for transfer. Tools like Mythic automate the discovery and selection of data sought by the adversary.
- Exfiltration (TA0010) is the process of moving collected information out of the secured network via legitimate or covert channels, such as HTTP(S), DNS, or SMB. Attackers may use resident agents or intermediate relays (pivot hosts) to conceal the exfiltration source and route.
- Command and Control (TA0011) encompasses the mechanisms for establishing and maintaining a communication channel between the operator and compromised hosts to transmit commands and receive status updates. This includes direct connections, relaying through pivot hosts, and the use of covert protocols. Frameworks like Mythic provide advanced C2 capabilities, such as scheduled command execution, tunneling, and multi-channel communication, which complicate the detection and blocking of their activity.
This article focuses exclusively on the Command and Control (TA0011) tactic, whose techniques can be effectively detected within the network traffic of Mythic agents.
Detecting Mythic agent activity in network traffic
At the time of writing, Mythic supports data transfer over HTTP/S, WebSocket, TCP, SMB, DNS, and MQTT. The platform also boasts over a dozen different agents, written in Go, Python, and C#, designed for Windows, macOS, and Linux.
Mythic employs two primary architectures for its command network:
- P2P: in this model, agents communicate with adjacent agents, forming a chain of connections which eventually leads to a node communicating directly with the Mythic C2 server. For this purpose, agents utilize TCP and SMB.
- Egress: in this model, agents communicate directly with the C2 server via HTTP/S, WebSocket, MQTT, or DNS.
P2P communication
Mythic provides pivoting capabilities via named SMB pipes and TCP sockets. To detect Mythic agent activity in P2P mode, we will examine their network traffic and create corresponding Suricata detection rules (signatures).
P2P communication via SMB
When managing agents via the SMB protocol, a named pipe is used by default for communication, with its name matching the agent’s UUID.
Although this parameter can be changed, it serves as a reliable indicator and can be easily described with a regular expression. Example:
[a-z0-9]{8}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{12}
For SMB communication, agents encode and encrypt data according to the pattern: base64(UUID+AES256(JSON)). This data is then split into blocks and transmitted over the network. The screenshot below illustrates what a network session for establishing a connection between agents looks like in Wireshark.
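The encoding scheme can be sketched in a few lines of Python. This is an illustration of the base64(UUID+AES256(JSON)) layout only; the encrypt_stub function below is a placeholder, since a real agent performs AES-256 with a key negotiated at check-in.

```python
import base64
import json
import uuid

def encrypt_stub(plaintext: bytes) -> bytes:
    # Placeholder transform standing in for AES-256 encryption; a real
    # Mythic agent encrypts the JSON task data with a negotiated key.
    return plaintext[::-1]

def frame_message(agent_uuid: str, task: dict) -> bytes:
    # Sketch of Mythic's base64(UUID + AES256(JSON)) message layout
    payload = encrypt_stub(json.dumps(task).encode())
    return base64.b64encode(agent_uuid.encode() + payload)

agent_id = str(uuid.uuid4())
blob = frame_message(agent_id, {"action": "checkin"})

# A single Base64 decode immediately exposes the 36-character UUID prefix
decoded = base64.b64decode(blob)
print(decoded[:36].decode())  # the agent UUID, readable after one decode
```

It is this UUID prefix surviving a single Base64 decode that makes signature-based detection of the traffic feasible.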
Commands and their responses are packaged within the MythicMessage data structure. This structure contains three header fields, as well as the commands themselves or the corresponding responses:
- Total size (4 bytes)
- Number of data blocks (4 bytes)
- Current block number (4 bytes)
- Base64-encoded data
The screenshot below shows an example of SMB communication between agents.
The agent (10.63.101.164) sends a command to another agent in the MythicMessage format. The first three Write Requests transmit the total message size, total number of blocks, and current block number. The fourth request transmits the Base64-encoded data. This is followed by a sequence of Read Requests, which are also transmitted in the MythicMessage format.
Below are the data transmitted in the fourth field of the MythicMessage structure.
The content is encoded in Base64. Upon decoding, the structure of the transmitted information becomes visible: it begins with the UUID of the infected host, followed by a data block encrypted using AES-256.
The fact that the data starts with a UUID string can be leveraged to create a signature-based detection rule that searches network packets for the identifier pattern.
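As a sanity check before committing the logic to a signature, the UUID-prefix heuristic can be prototyped in a few lines of Python; the helper below is our own illustration, not part of any detection product.

```python
import base64
import re

# 8-4-4-4-12 lowercase alphanumeric groups, matching the pattern shown above
UUID_RE = re.compile(rb"^[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12}")

def looks_like_mythic_payload(data: bytes) -> bool:
    """True if `data` is valid Base64 whose decoded form starts with a UUID."""
    try:
        decoded = base64.b64decode(data, validate=True)
    except ValueError:  # binascii.Error is a ValueError subclass
        return False
    return bool(UUID_RE.match(decoded))

inner = b"0b12fa34-aaaa-bbbb-cccc-0123456789ab" + b"\x01\x02\x03"
print(looks_like_mythic_payload(base64.b64encode(inner)))   # True
print(looks_like_mythic_payload(b"no.base64.here!"))        # False
```

The two steps, a Base64 decode followed by a UUID match, map directly onto the base64_decode and pcre keywords used in the Suricata rules.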
To search for packets containing a UUID, the following signature can be applied. It uses specific request types and protocol flags as filters (Command: Ioctl (11), Function: FSCTL_PIPE_WAIT (0x00110018)), followed by a check to see if the pipe name matches the UUID pattern.
alert tcp any any -> any [139, 445] (msg: "Trojan.Mythic.SMB.C&C"; flow: to_server, established; content: "|fe|SMB"; offset: 4; depth: 4; content: "|0b 00|"; distance: 8; within: 2; content: "|18 00 11 00|"; distance: 48; within: 12; pcre: "/\x48\x00\x00\x00[\x00-\xFF]{2}([a-z0-9]\x00){8}\-\x00([a-z0-9]\x00){4}\-\x00([a-z0-9]\x00){4}\-\x00([a-z0-9]\x00){4}\-\x00([a-z0-9]\x00){12}$/R"; threshold: type both, track by_src, count 1, seconds 60; reference: url, github.com/MythicC2Profiles/sm… classtype: ndr1; sid: 9000101; rev: 1;)
Agent activity can also be detected by analyzing data transmitted in SMB WriteRequest packets with the protocol flag Command: Write (9) and a distinct packet structure where the BlobOffset and BlobLen fields are set to zero. If the Data field is Base64-encoded and, after decoding, begins with a UUID-formatted string, this indicates a command-and-control channel.
alert tcp any any -> any [139, 445] (msg: "Trojan.Mythic.SMB.C&C"; flow: to_server, established; dsize: > 360; content: "|fe|SMB"; offset: 4; depth: 4; content: "|09 00|"; distance: 8; within: 2; content: "|00 00 00 00 00 00 00 00 00 00 00 00|"; distance: 86; within: 12; base64_decode: bytes 64, offset 0, relative; base64_data; pcre: "/^[a-z0-9]{8}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{12}/"; threshold: type both, track by_src, count 1, seconds 60; reference: url, github.com/MythicC2Profiles/sm… classtype: ndr1; sid: 9000102; rev: 1;)
Below is the KATA NDR user interface displaying an alert about detecting a Mythic agent operating in P2P mode over SMB. In this instance, the first rule – which checks the request type, protocol flags, and the UUID pattern – was triggered.
It should be noted that these signatures have a limitation. If the SMBv3 protocol with encryption enabled is used, Mythic agent activity cannot be detected with signature-based methods. A possible alternative is behavioral analysis. However, in this context, it suffers from low accuracy and a high false-positive rate. The SMB protocol is widely used by organizations for various legitimate purposes, making it difficult to isolate behavioral patterns that definitively indicate malicious activity.
P2P communication via TCP
Mythic also supports P2P communications via TCP. The connection initialization process appears in network traffic as follows:
As with SMB, the MythicMessage structure is used for transmitting and receiving data. First, the data length (4 bytes) is sent as a big-endian DWORD in a separate packet. Subsequent packets transmit the number of data blocks, the current block number, and the data itself. However, unlike SMB packets, the value of the current block number field is always 0x00000000, due to TCP’s built-in packet fragmentation support.
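The TCP framing just described can be illustrated with a short Python sketch; the field order follows this article's description of the on-wire traffic rather than Mythic's actual source code.

```python
import struct

def frame_tcp_message(b64_payload: bytes) -> list:
    """Split a message into the MythicMessage fields seen in P2P TCP traffic."""
    return [
        struct.pack(">I", len(b64_payload)),  # total size: big-endian DWORD, sent first
        struct.pack(">I", 1),                 # number of data blocks
        struct.pack(">I", 0),                 # current block number: always 0 over TCP
        b64_payload,                          # base64(UUID + AES256(JSON))
    ]

packets = frame_tcp_message(b"QUJDRA==")
print(packets[0].hex())  # 00000008 for this 8-byte payload
```

The constant zero in the third field, a side effect of TCP handling fragmentation itself, is one of the small structural quirks the detection rules below can lean on.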
The data encoding scheme is also analogous to what we observed with SMB and appears as follows: base64(UUID+AES256(JSON)). Below is an example of a network packet containing Mythic data.
The decoded data appears as follows:
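To make the framing concrete, here is a minimal Python sketch of the structure described above. The function names and the exact contents of the length field are our inference from the traffic analysis, not Mythic's source code:

```python
import base64
import struct

def encode_payload(agent_uuid: str, aes_blob: bytes) -> bytes:
    """base64(UUID + AES256(JSON)): aes_blob stands in for the
    AES-256-encrypted JSON described in the article."""
    return base64.b64encode(agent_uuid.encode("ascii") + aes_blob)

def frame_tcp_message(encoded: bytes):
    """TCP P2P framing: a 4-byte big-endian length sent in its own packet,
    then block count, current block number (always 0 over TCP), and data."""
    length_packet = struct.pack(">I", len(encoded))
    data_packet = struct.pack(">II", 1, 0) + encoded  # 1 block, block number 0
    return length_packet, data_packet
```

The constant zero in the block-number field is exactly the kind of fixed byte sequence that signature-based rules can anchor on.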
Similar to communication via SMB, signature-based detection rules can be created for TCP traffic to identify Mythic agent activity by searching for packets containing UUID-formatted strings. Below are two Suricata detection rules. The first rule is a utility rule. It does not generate security alerts but instead tags the TCP session with an internal flag, which is then checked by another rule. The second rule verifies the flag and applies filters to confirm that the current packet is being analyzed at the beginning of a network session. It then decodes the Base64 data and searches the resulting content for a UUID-formatted string.
alert tcp any any -> any any (msg: "Trojan.Mythic.TCP.C&C"; flow: from_server, established; dsize: 4; stream_size: server, <, 6; stream_size: client, <, 3; content: "|00 00|"; depth: 2; pcre: "/^\x00\x00[\x00-\x5C]{1}[\x00-\xFF]{1}$/"; flowbits: set, mythic_tcp_p2p_msg_len; flowbits: noalert; threshold: type both, track by_src, count 1, seconds 60; reference: url, github.com/MythicC2Profiles/tc… classtype: ndr1; sid: 9000103; rev: 1;)
alert tcp any any -> any any (msg: "Trojan.Mythic.TCP.C&C"; flow: from_server, established; dsize: > 300; stream_size: server, <, 6000; stream_size: client, <, 6000; flowbits: isset, mythic_tcp_p2p_msg_len; content: "|00 00 00|"; depth: 3; content: "|00 00 00 00|"; distance: 1; within: 4; base64_decode: bytes 64, offset 0, relative; base64_data; pcre: "/^[a-z0-9]{8}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{12}/"; threshold: type both, track by_src, count 1, seconds 60; reference: url, github.com/MythicC2Profiles/tc… classtype: ndr1; sid: 9000104; rev: 1;)
Below is the NDR interface displaying an example of the two rules detecting a Mythic agent operating in P2P mode over TCP.
Egress transport modules
Covert Egress communication
For stealthy operations, Mythic allows agents to be managed through popular services. This makes its activity less conspicuous within network traffic. Mythic includes transport modules based on the following services:
- Discord
- GitHub
- Slack
Of these, only the first two remain relevant at the time of writing. Communication via Slack (the Slack C2 Profile transport module) is no longer supported by the developers and is considered deprecated, so we will not examine it further.
The Discord C2 Profile transport module
The use of the Discord service as a mediator for C2 communication within the Mythic framework has been gaining popularity recently. In this scenario, agent traffic is indistinguishable from normal Discord activity, with commands and their execution results masquerading as messages and file attachments. Communication with the server occurs over HTTPS and is encrypted with TLS, so detecting Mythic traffic requires decrypting this traffic first.
Analyzing decrypted TLS traffic
Let’s assume we are using an NDR platform in conjunction with a network traffic decryption (TLS inspection) system to detect suspicious network activity. In this case, we operate under the assumption that we can decrypt all TLS traffic. Let’s examine possible detection rules for that scenario.
Agent and server communication occurs via Discord API calls to send messages to a specific channel. Communication between the agent and Mythic uses the MythicMessageWrapper structure, which contains the following fields:
- message: the transmitted data
- sender_id: a GUID generated by the agent, included in every message
- to_server: a direction flag – a message intended for the server or the agent
- id: not used
- final: not used
Of particular interest to us is the message field, which contains the transmitted data encoded in Base64. The MythicMessageWrapper message is transmitted in plaintext, making it accessible to anyone with read permissions for messages on the Discord server.
Below is an example of data transmission via messages in a Discord channel.
To establish a connection, the agent authenticates to the Discord server via the API call /api/v10/gateway/bot. We observe the following data in the network traffic:
After successful initialization, the agent gains the ability to receive and respond to commands. To create a message in the channel, the agent makes a POST request to the API endpoint /channels/<channel.id>/messages. The network traffic for this call is shown in the screenshot below.
After decoding the Base64, the content of the message field appears as follows:
A structure characteristic of a UUID is visible at the beginning of the packet.
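The core check the detection rules perform (Base64-decode the message field, then look for a UUID at the very start of the result) can be prototyped in a few lines of Python. This is a triage sketch for illustration, not production detection logic:

```python
import base64
import re

# Same pattern as the rules' pcre: 8-4-4-4-12 groups of lowercase hex/digits
UUID_AT_START = re.compile(
    rb"^[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12}"
)

def message_field_is_suspicious(message_b64: str) -> bool:
    """Decode a Base64 'message' field and test for a UUID at offset 0."""
    try:
        decoded = base64.b64decode(message_b64, validate=True)
    except ValueError:
        return False
    return UUID_AT_START.match(decoded) is not None
```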
After processing the message, the agent deletes it from the channel via a DELETE request to the API endpoint /channels/{channel.id}/messages/{message.id}.
Below is a Suricata rule that detects the agent’s Discord-based communication activity. It checks the API activity for creating HTTP messages for the presence of Base64-encoded data containing the agent’s UUID.
alert tcp any any -> any any (msg: "Trojan.Mythic.HTTP.C&C"; flow: to_server, established; content: "POST"; http_method; content: "/api/"; http_uri; content: "/channels/"; distance: 0; http_uri; pcre: "/\/messages$/U"; content: "|7b 22|content|22|"; depth: 20; http_client_body; content: "|22|sender_id"; depth: 1500; http_client_body; pcre: "/\x22sender_id\x5c\x22\x3a\x5c\x22[a-z0-9]{8}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{12}/"; threshold: type both, track by_src, count 1, seconds 60; reference: url, github.com/MythicC2Profiles/di… classtype: ndr1; sid: 9000105; rev: 1;)
Below is the NDR user interface displaying an example of detecting the activity of the Discord C2 Profile transport module for a Mythic agent within decrypted HTTP traffic.
Analyzing encrypted TLS traffic
If Discord usage is permitted on the network and there is no capability to decrypt traffic, it becomes nearly impossible to detect agent activity. In this scenario, behavioral analysis of requests to the Discord server may prove useful. Below is network traffic showing frequent TLS connections to the Discord server, which could indicate commands being sent to an agent.
In this case, we can use a Suricata rule to detect the frequent TLS sessions with Discord servers:
alert tcp any any -> any any (msg: "NetTool.PossibleMythicDiscordEgress.TLS.C&C"; flow: to_server, established; tls_sni; content: "discord.com"; nocase; threshold: type both, track by_src, count 4, seconds 420; reference: url, github.com/MythicC2Profiles/di… classtype: ndr3; sid: 9000106; rev: 1;)
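The threshold keyword in this rule (type both, count 4, seconds 420) is what turns individual Discord connections into a single rate-based alert. Its approximate semantics can be sketched in Python; this is a simplification of Suricata's actual implementation:

```python
from collections import defaultdict, deque

class Threshold:
    """Sketch of Suricata's 'threshold: type both' semantics: alert once
    per time window when at least `count` events from the same source
    occur within `seconds`."""
    def __init__(self, count: int, seconds: int):
        self.count, self.seconds = count, seconds
        self.events = defaultdict(deque)   # src -> recent event timestamps
        self.alerted_until = {}            # src -> end of suppression window

    def event(self, src: str, ts: float) -> bool:
        q = self.events[src]
        q.append(ts)
        while q and ts - q[0] > self.seconds:  # drop events outside window
            q.popleft()
        if ts < self.alerted_until.get(src, 0):
            return False                   # already alerted in this window
        if len(q) >= self.count:
            self.alerted_until[src] = ts + self.seconds
            return True
        return False
```

As the article notes later, the count and seconds values are precisely the knobs that must be tuned per infrastructure to keep false positives manageable.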
Another method for detecting these communications involves tracking multiple DNS queries to the discord.com domain.
The following rule can be applied to detect these:
alert udp any any -> any 53 (msg: "NetTool.PossibleMythicDiscordEgress.DNS.C&C"; content: "|01 00 00 01 00 00 00 00 00 00|"; depth: 10; offset: 2; content: "|07|discord|03|com|00|"; nocase; distance: 0; threshold: type both, track by_src, count 4, seconds 60; reference: url, github.com/MythicC2Profiles/di… classtype: ndr3; sid: 9000107; rev: 1;)
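The `|07|discord|03|com|00|` byte sequence in this rule is the DNS wire-format QNAME: each label is prefixed by its length, and the name ends with a zero byte. A small helper makes the encoding explicit:

```python
def dns_qname(domain: str) -> bytes:
    """DNS wire-format QNAME (RFC 1035, section 3.1): length-prefixed
    labels, terminated by a zero byte."""
    encoded = b""
    for label in domain.split("."):
        encoded += bytes([len(label)]) + label.encode("ascii")
    return encoded + b"\x00"
```

For example, `dns_qname("discord.com")` reproduces the content match in the rule above, and `dns_qname("api.github.com")` yields the `|03|api|06|github|03|com|00|` pattern used in the GitHub rule later in this article.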
Below is the NDR user interface showing an example of a custom rule in operation, detecting the activity of the Discord C2 Profile transport module for a Mythic agent within encrypted traffic based on characteristic DNS queries.
The proposed rule options have low accuracy and can generate a high number of false positives. Therefore, they must be adapted to the specific characteristics of the infrastructure in which they will run. Threshold and count parameters, which control the triggering frequency and time window, require tuning.
GitHub C2 Profile transport module
GitHub’s popularity has made it an attractive choice as a mediator for managing Mythic agents. The core concept is the same as in other covert Egress communication transport modules. Communication with GitHub utilizes HTTPS. Successful operation requires an account on the target platform and the ability to communicate via API calls. The transport module utilizes the GitHub API to send comments to pre-created Issues and to commit files to a branch within a repository controlled by the attackers. In this model, the agent interacts only with GitHub: it creates and reads comments, uploads files, and manages branches. It does not communicate with any other servers. The communication algorithm via GitHub is as follows:
- The agent posts a comment (check-in) to a designated Issue on GitHub, intended for agents to report their results.
- The Mythic server validates the comment, deletes it, and posts a reply in an issue designated for server use.
- The agent creates a branch with a name matching its UUID and writes a get_tasking file to it (performs a push request).
- The Mythic server reads the file and writes a response file to the same branch.
- The agent reads the response file, deletes the branch, pauses, and repeats the cycle.
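As an illustration of that exchange, here is a hedged sketch of the request bodies the agent would produce. The endpoints are standard GitHub REST API routes; the use of the agent UUID as branch name and commit message follows the traffic patterns described in this section, and the helper names are ours:

```python
import base64
import json

def checkin_comment_body(encoded_checkin: str) -> str:
    # POST /repos/{owner}/{repo}/issues/{issue}/comments
    return json.dumps({"body": encoded_checkin})

def branch_create_body(agent_uuid: str, base_sha: str) -> str:
    # POST /repos/{owner}/{repo}/git/refs -- branch named after the agent UUID
    return json.dumps({"ref": "refs/heads/" + agent_uuid, "sha": base_sha})

def file_put_body(agent_uuid: str, payload: bytes) -> str:
    # PUT /repos/{owner}/{repo}/contents/{path} -- Base64 file content;
    # the UUID is reused as the commit message and the target branch
    return json.dumps({
        "message": agent_uuid,
        "content": base64.b64encode(payload).decode("ascii"),
        "branch": agent_uuid,
    })
```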
Analyzing decrypted TLS traffic
Let’s consider an approach to detecting agent activity when traffic decryption is possible.
Agent communication with the server utilizes API calls to GitHub. The payload is encoded in Base64 and published in plaintext; therefore, anyone who can view the repository or analyze the traffic contents can decode it.
Analysis of agent communication revealed that the most useful traffic for creating detection rules is associated with publishing check-in comments, creating a branch, and publishing a file.
During the check-in phase, the agent posts a comment to register a new agent and establish communication.
The transmitted data is encoded in Base64 and contains the agent’s UUID and the portion of the message encrypted using AES-256.
This allows for a signature that detects UUID-formatted substrings within GitHub comment creation requests.
alert tcp any any -> any any (msg: "Trojan.Mythic.HTTP.C&C"; flow: to_server, established; content: "POST"; http_method; content: "api.github.com"; http_host; content: "/repos/"; depth: 8; http_uri; pcre: "/\/comments$/U"; content: "|22|body|22|"; depth: 8; http_client_body; base64_decode: bytes 300, offset 2, relative; base64_data; pcre: "/^[a-z0-9]{8}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{12}/"; threshold: type both, track by_src, count 1, seconds 60; reference: url, github.com/MythicC2Profiles/gi… classtype: ndr1; sid: 9000108; rev: 1;)
Another stage suitable for detection is when the agent creates a separate branch with its UUID as the name. All subsequent relevant communication with the server will occur within this branch. Here is an example of a branch creation request:
Therefore, we can create a detection rule to identify UUID-formatted strings within branch creation requests.
alert tcp any any -> any any (msg: "Trojan.Mythic.HTTP.C&C"; flow: to_server, established; content: "POST"; http_method; content: "api.github.com"; http_host; content: "/repos/"; depth: 100; http_uri; content: "/git/refs"; distance: 0; http_uri; content: "|22|ref|22 3a|"; depth: 10; http_client_body; content: "refs/heads/"; distance: 0; within: 50; http_client_body; pcre: "/refs\/heads\/[a-z0-9]{8}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{12}\x22/"; threshold: type both, track by_src, count 1, seconds 60; reference: url, github.com/MythicC2Profiles/gi… classtype: ndr1; sid: 9000109; rev: 1;)
After creating the branch, the agent writes a file to it (sends a push request), which contains Base64-encoded data.
Therefore, we can create a rule to trigger on file publication requests to a branch whose name matches the UUID pattern.
alert tcp any any -> any any (msg: "Trojan.Mythic.HTTP.C&C"; flow: to_server, established; content: "PUT"; http_method; content: "api.github.com"; http_host; content: "/repos/"; depth:8; http_uri; content: "/contents/"; distance: 0; http_uri; content: "|22|content|22|"; depth: 100; http_client_body; pcre: "/\x22message\x22\x3a\x22[a-z0-9]{8}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{12}\x22/"; threshold: type both, track by_src, count 1, seconds 60; reference: url, github.com/MythicC2Profiles/gi… classtype: ndr1; sid: 9000110; rev: 1;)
The screenshot below shows how the NDR solution logs all suspicious communications using the GitHub API and subsequently identifies the Mythic agent’s activity. The result is an alert with the verdict Trojan.Mythic.HTTP.C&C.
Analyzing encrypted TLS traffic
Communication with GitHub occurs over HTTPS; therefore, in the absence of traffic decryption capability, signature-based methods for detecting agent activity cannot be applied. Let's consider a behavioral approach to detecting agent activity instead.
For instance, it is possible to detect connections to GitHub servers that are atypical in frequency and purpose, originating from network segments where this activity is not expected. The screenshot below shows an example of an agent’s multiple TLS sessions. The traffic reflects the execution of several commands, as well as idle time, manifested as constant polling of the server while awaiting new tasks.
Multiple TLS sessions with the GitHub service from uncharacteristic network segments can be detected using the rule presented below:
alert tcp any any -> any any (msg:"NetTool.PossibleMythicGitHubEgress.TLS.C&C"; flow: to_server, established; tls_sni; content: "api.github.com"; nocase; threshold: type both, track by_src, count 4, seconds 60; reference: url, github.com/MythicC2Profiles/gi… classtype: ndr3; sid: 9000111; rev: 1;)
Additionally, multiple DNS queries to the service can be logged in the traffic.
This activity is detected with the help of the following rule:
alert udp any any -> any 53 (msg: "NetTool.PossibleMythicGitHubEgress.DNS.C&C"; content: "|01 00 00 01 00 00 00 00 00 00|"; depth: 10; offset: 2; content: "|03|api|06|github|03|com|00|"; nocase; distance: 0; threshold: type both, track by_src, count 12, seconds 180; reference: url, github.com/MythicC2Profiles/gi… classtype: ndr3; sid: 9000112; rev: 1;)
The screenshot below shows the NDR interface with an example of the first rule in action, detecting traces of the GitHub profile activity for a Mythic agent within encrypted TLS traffic.
The suggested rule options can produce false positives, so to improve their effectiveness, they must be adapted to the specific characteristics of the infrastructure in which they will run. The parameters of the threshold keyword – specifically the count and seconds values, which control the number of events required to generate an alert and the time window for their occurrence in NDR – must be configured.
Direct Egress communication
The Egress communication model allows agents to interact directly with the C2 server via the following protocols:
- HTTP(S)
- WebSocket
- MQTT
- DNS
The first two protocols are the most prevalent. The DNS-based transport module is still under development, and the module based on MQTT sees little use among operators. We will not examine them within the scope of this article.
Communication via HTTP
HTTP is the most common protocol for building a Mythic agent control network. The HTTP transport container acts as a proxy between the agents and the Mythic server. It allows data to be transmitted in both plaintext and encrypted form. Crucially, the metadata is not encrypted, which enables the creation of signature-based detection rules.
Below is an example of unencrypted Mythic network traffic over HTTP. During a GET request, data encoded in Base64 is passed in the value of the query parameter.
After decoding, the agent’s UUID – generated according to a specific pattern – becomes visible. This identifier is followed by a JSON object containing the key parameters of the host, collected by the agent.
If data encryption is applied, the network traffic for agent communication appears as shown in the screenshot below.
After decrypting the traffic and decoding from Base64, the communication data reveals the familiar structure: UUID+AES256(JSON).
Therefore, to create a detection signature for this case, we can also rely on the presence of a UUID within the Base64-encoded data in POST requests.
alert tcp any any -> any any (msg: "Trojan.Mythic.HTTP.C&C"; flow: to_server, established; content: "POST"; http_method; content: "|0D 0A 0D 0A|"; base64_decode: bytes 80, offset 0, relative; base64_data; content: "-"; offset: 8; depth: 1; content: "-"; distance: 4; within: 1; content: "-"; distance: 4; within: 1; content: "-"; distance: 4; within: 1; pcre: "/[0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12}/"; threshold: type both, track by_src, count 1, seconds 180; reference: md5, 6ef89ccee639b4df42eaf273af8b5ffd; classtype: trojan1; sid: 9000113; rev: 2;)
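The offset arithmetic in this rule can be hard to read in Suricata syntax: the four content matches anchor a "-" character at bytes 8, 13, 18, and 23 of the decoded data, i.e. the dash layout of a textual UUID. The equivalent check in Python:

```python
def has_uuid_dash_layout(decoded: bytes) -> bool:
    """Mirror of the rule's content anchors: a textual UUID has '-' at
    byte offsets 8, 13, 18, and 23 of the decoded data."""
    return len(decoded) >= 36 and all(
        decoded[i] == ord("-") for i in (8, 13, 18, 23)
    )
```

The pcre in the rule then confirms the full hexadecimal 8-4-4-4-12 pattern; the fixed-offset content matches exist mainly to keep the expensive regular expression off the hot path.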
The screenshot below shows how the NDR platform detects agent communication with the server over HTTP, generating an alert with the name Trojan.Mythic.HTTP.C&C.
Communication via HTTPS
Mythic agents can communicate with the server via HTTPS using the corresponding transport module. In this case, data is encrypted with TLS and is not amenable to signature-based analysis. However, the activity of Mythic agents can be detected if they use the default SSL certificate. Below is an example of network traffic from a Mythic agent with such a certificate.
For this purpose, the following signature is applied:
alert tcp any any -> any any (msg: "Trojan.Mythic.HTTPS.C&C"; flow: established, from_server, no_stream; content: "|16 03|"; content: "|0B|"; distance: 3; within: 1; content: "Mythic C2"; distance: 0; reference: url, github.com/its-a-feature/Mythi… classtype: ndr1; sid: 9000114; rev: 1;)
WebSocket
The WebSocket protocol enables full-duplex communication between a client and a remote host. Mythic can utilize it for agent management.
The process of agent communication with the server via WebSocket is as follows:
- The agent sends a request to the WebSocket container to change the protocol for the HTTP(S) connection.
- The agent and the WebSocket container switch to WebSocket to send and receive messages.
- The agent sends a message to the WebSocket container requesting tasks from the Mythic container.
- The WebSocket container forwards the request to the Mythic container.
- The Mythic container returns the tasks to the WebSocket container.
- The WebSocket container forwards these tasks to the agent.
It is worth mentioning that in this communication model, both the WebSocket container and the Mythic container reside on the Mythic server. Below is a screenshot of the initial agent connection to the server.
An analysis of the TCP session shows that the actual data is transmitted in the data field in Base64 encoding.
Decoding reveals the familiar data structure: UUID+AES256(JSON).
Therefore, we can use an approach similar to those discussed above to detect agent activity. The signature should rely on the UUID string at the beginning of the data field. The rule first verifies that the session data matches the data:base64 format, then decodes the data field and searches for a string matching the UUID pattern.
alert tcp any any -> any any (msg: "Trojan.Mythic.WebSocket.C&C"; flow: established, from_server; content: "|7B 22|data|22 3a 22|"; depth: 14; pcre: "/^[0-9a-zA-Z\/\+]+[=]{0,2}\x22\x7D\x0A$/R"; content: "|7B 22|data|22 3a 22|"; depth: 14; base64_decode: bytes 48, offset 0, relative; base64_data; pcre: "/^[a-z0-9]{8}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{12}/i"; threshold: type both, track by_src, count 1, seconds 30; reference: url, github.com/MythicAgents/; classtype: ndr1; sid: 9000115; rev: 2;)
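The same two-step logic as the rule (verify the {"data": "<base64>"} wrapper, then decode and look for a leading UUID) can be prototyped like this; again a sketch for illustration, not the production signature:

```python
import base64
import json
import re

UUID_PREFIX = re.compile(
    rb"^[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12}", re.I
)

def websocket_frame_is_suspicious(raw: bytes) -> bool:
    """Two-step check mirroring the rule: the payload must be a JSON
    object with a Base64 'data' field, and the decoded data must start
    with a UUID-formatted string."""
    try:
        obj = json.loads(raw)
        decoded = base64.b64decode(obj["data"], validate=True)
    except (ValueError, KeyError, TypeError):
        return False
    return UUID_PREFIX.match(decoded) is not None
```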
Below is the Trojan.Mythic.WebSocket.C&C signature triggering on Mythic agent communication over WebSocket.
Takeaways
The Mythic post-exploitation framework continues to gain popularity and evolve rapidly. New agents are emerging, designed for covert persistence within target infrastructures. Despite this evolution, the various implementations of network communication in Mythic share many common characteristics that remain largely consistent over time. This consistency enables IDS/NDR solutions to effectively detect the framework’s agent activity through network traffic analysis.
Mythic supports a wide array of agent management options utilizing several network protocols. Our analysis of agent communications across these protocols revealed that agent activity can be detected by searching for specific data patterns within network traffic. The primary detection criterion involves tracking UUID strings in specific positions within Base64-encoded transmitted data. However, while the general approach to detecting agent activity is similar across protocols, each requires protocol-specific filters. Consequently, creating a single, universal signature for detecting Mythic agents in network traffic is challenging; individual detection rules must be crafted for each protocol. This article has provided signatures that are included in Kaspersky NDR.
Kaspersky NDR is designed to identify current threats within network infrastructures. It enables the detection of all popular post-exploitation frameworks based on their characteristic traffic patterns. Since the network components of these frameworks change infrequently, employing an NDR solution ensures high effectiveness in agent discovery.
Kaspersky verdicts in Kaspersky solutions (Kaspersky Anti-Targeted Attack with NDR module and Kaspersky NGFW)
Trojan.Mythic.SMB.C&C
Trojan.Mythic.TCP.C&C
Trojan.Mythic.HTTP.C&C
Trojan.Mythic.TLS.C&C
Trojan.Mythic.WebSocket.C&C
Iteration3D is Parametric Python in the Cloud
It’s happened to all of us: you find the perfect model for your needs — a bracket, a box, a cable clip, but it only comes in STL, and doesn’t quite fit. That problem will never happen if you’re using Iteration3D to get your models, because every single thing on the site is fully parametric, thanks to an open-source toolchain leveraging Build123D and Blender.
Blender gives you preview renderings, including colors where the models are set up for multi-material printing. Build123D is the CAD behind the curtain — if you haven’t heard of it, think OpenSCAD, but in Python and with chamfers and fillets. It actually leverages the same OpenCascade that’s behind everyone’s other favorite open-source CAD suite, FreeCAD. Anything you can do in FreeCAD, you can do in Build123D, but with code. Except you don’t need to learn the code if the model is on Iteration3D; you just set the parameters and push a button to get an STL of your exact specifications.
The downside is that, as of now, you are limited to the hard-coded templates provided by Iteration3D. You can modify their parameters to get the configuration and dimensions you need, but not the pythonic Build123D script that generates them. Nor can you currently upload your own models to be shared and parametrically altered, like Thingiverse had with their OpenSCAD-based customizer. That said, we were told that user uploads are in the pipeline, which is great news and may well turn Iteration3D into our new favorite.
Right now, if you’re looking for a box or a pipe hanger or a bracket, plugging your numbers into Iteration3D’s model generator is going to be a lot faster than rolling your own, whether that rolling be done in OpenSCAD, FreeCAD, or one of those bits of software people insist on paying for. There’s a good variety of templates — 18 so far — so it’s worth checking out. Iteration3D is still new, having started in early 2025, so we will watch their career with great interest.
Going back to the problem in the introduction, if Iteration3D doesn’t have what you need and you still have an STL you need to change the dimensions of, we can help you with that.
Thanks to [Sylvain] for the tip!
EDR Is Useless! The DeadLock Hackers Have Found a Universal “Kill Switch”
Cisco Talos has identified a new ransomware campaign called DeadLock: the attackers exploit a vulnerable Baidu antivirus driver (CVE-2024-51324) to disable EDR systems using the Bring Your Own Vulnerable Driver (BYOVD) technique. The group does not operate a data leak site, instead communicating with victims via Session Messenger.
According to Talos, the attacks are carried out by a financially motivated operator who gains access to the victim’s infrastructure at least five days before encryption and gradually prepares the system for the deployment of DeadLock.
One of the key elements of the chain is BYOVD: the attackers themselves plant a legitimate but vulnerable Baidu Antivirus driver, BdApiUtil.sys, disguised as DriverGay.sys, along with their own loader, EDRGay.exe. The loader initializes the driver from user mode, opens a handle to it via CreateFile(), and begins enumerating processes in search of antivirus and EDR solutions.
Next, CVE-2024-51324 is exploited: a privilege management flaw in the driver. The loader sends a special DeviceIoControl() command to the driver with IOCTL code 0x800024b4 and the PID of the target process.
On the kernel side, the driver interprets this as a process termination request, but because of the vulnerability it does not verify the privileges of the caller. Running with kernel privileges, the driver simply invokes ZwTerminateProcess() and instantly “kills” the security service, clearing the way for the attackers’ next steps.
Before launching the ransomware, the operator runs a preparatory PowerShell script on the victim’s machine. First, it checks the current user’s privileges and, if necessary, relaunches itself with administrative rights via RunAs, bypassing UAC and relaxing PowerShell’s default restrictions.
Once it has administrator rights, the script disables Windows Defender and other security tools, then stops and disables backup services, databases, and other software that could interfere with encryption. It also deletes all Volume Shadow Copy snapshots, depriving the victim of the standard recovery tools, and finally deletes itself, complicating forensic analysis.
The script also includes a detailed list of exceptions for system-critical services. These include network services (WinRM, DNS, DHCP), authentication mechanisms (KDC, Netlogon, LSM), and core Windows components (RPCSS, Plug and Play, the system event log).
This lets the attackers disable as many security and application components as possible without crashing the entire system, so that the victim can still read the ransom note, contact the ransomware operators, and pay.
Talos noted that some sections of the script, covering the deletion of network shares and alternative methods of killing processes, were commented out, suggesting the authors intended them as “options” for specific situations. The script also loads some of its exceptions dynamically from an external run[.]txt file.
Telemetry indicates that the attackers access the victim’s network through compromised legitimate accounts. After initial access, they set up persistent remote access: using the reg add command, they change the fDenyTSConnections registry value to enable RDP. Then, using netsh advfirewall, they create a rule opening port 3389, set the RemoteRegistry service to start on demand, and launch it, allowing remote management of the registry.
The day before encryption, the operator installs a fresh instance of AnyDesk on one of the machines, even though other installations of the software are already present in the infrastructure, which makes this deployment suspicious.
AnyDesk is deployed silently, with launch at Windows startup enabled, a password configured for unattended access, and updates disabled so they cannot interrupt the attackers’ sessions. Active reconnaissance and lateral movement then begin: nltest is used to find domain controllers and map the domain structure, net localgroup/domain to enumerate privileged groups, ping and quser to check host availability and active users, and finally mstsc and mmc compmgmt.msc to connect to other hosts via RDP or through the Remote Desktops management snap-in.
Potential access to internal web resources is revealed by launches of iexplore.exe with internal IP addresses as arguments.
The article “EDR Is Useless! The DeadLock Hackers Have Found a Universal ‘Kill Switch’” originally appeared on Red Hot Cyber.