
Wanted: SOC Analyst – when job offers lack clarity and transparency, caution is required


Author: Nicola Tarlini, Cyber Security Engineer

Nicola sent us a detailed report about a suspicious communication concerning a job offer for a SOC Analyst position, and wanted to share his observations along with an analysis of the facts. It should be said upfront that in cybersecurity especially, unclear and non-transparent job offers deserve particular attention – even when they are not outright scams – because cybersecurity is a critical sector and companies in it must attract highly qualified talent to face increasingly complex and frequent threats.

In the first case – vague or ambiguous job offers, with unverifiable contacts or contradictory information – the alarm level should rise, as it did for Nicola Tarlini. Listings like these can conceal scams, personal data theft, or security violations: Nicola asked for clarity both to protect his professional and digital integrity and to avoid falling victim to possible fraud.

In the more benign scenario, opaque or superficial offers can indicate a lack of professionalism, a lack of attention, or poorly managed hiring processes and staff training, with the consequent risk of ending up in unsafe or unreliable work environments that can compromise one's career and personal security. Here too, Nicola's report aims to set the record straight. His analysis highlights several warning signs, including impersonal and generic messages, unverifiable recruiter identities, a lack of clear information, phone numbers that go unanswered or are inactive, discrepancies between the job listing and the follow-up messages, and finally an official company reply that admits the communication was unclear while confirming the contacts were genuine.

Below is a table summarizing the suspicious characteristics of a job offer, largely consistent with Nicola's analysis that follows.

Nicola Tarlini's analysis of a job offer for a generic SOC Analyst


A few days ago, a LinkedIn account with the role of "recruiter" contacted me with a job proposal for a generic "SOC Analyst" position, covering both an 8×5 schedule and 24×7 shift work.

The user in question, whose identity I am withholding for privacy reasons, contacted me with the following message:
Image: First contact message
In this message I immediately noticed several warning signs, which I list below:

  1. The message opens with an aseptic, impersonal "Buongiorno!" ("Good morning!"). This suggests an automated or templated message, not a personal one written after a careful review of my profile.
  2. The user introduces themselves with an identity different from the one they are writing from: "Nice to meet you! I'm Anna, a colleague of [NAME REDACTED]." This leads to two possible hypotheses:
    • a. The account is compromised: in which case a computer crime has been committed;
    • b. The account is shared: in which case no security standard for online communications is being respected – a violation of LinkedIn's rules of conduct, and corporate procedures that do not comply with national and international laws and standards.


  3. The message in point 2 does not give the surname of this alleged recruiter named "Anna". This suggests the user does not want to be identified and, therefore, that hypothesis 2.a is the more likely one.
  4. The user declares "we are [COMPANY REDACTED]", without personally qualifying as a recruiter acting on behalf of the company, i.e. as an employee. The suspicions remain, and the alarm stays active as to who "Anna" really is.
  5. The job title the user appears to be recruiting for in the reported message is "SOC Analyst (H8 and H24, level 1 and 2)", which differs from the one shown on the corporate LinkedIn profile used for the contact. Moreover, the listing has been up for two months. This suggests the user has not managed to find the right person in all that time, and also that the listing, unlike the chat message, has not been updated.


Image: Job listing on the corporate LinkedIn profile
6. The message states "The position is a contract role with a hybrid workplace, based in Milan". The type of contract is not defined: temp agency, fixed-term, permanent, on-call, or otherwise.

7. The signature does not mention "Anna" but reads "on behalf of". This once again confirms the suspicions of point 2.


Given how suspicious the first message was, I decided to ask for some way to confirm the sender's identity:
Image: Follow-up and final messages
The outcome of verifying that data was very disappointing and raised my suspicions even further:

  1. The landline rang unanswered, despite 4 attempts between 3:48 p.m. and 4:07 p.m. (Italian time).
  2. The mobile number turned out to be inactive;
  3. The email address contains a "surname" that does not match, and cannot be verified against, the details previously provided in the LinkedIn chat.

Wanting to dig deeper, I verified that the user was trying to present themselves under the identity of "Anna [REDACTED]". She turns out to be a Junior Recruiter working at the company named in the job listing and referenced during the contact. She appears to have finished, just a few days earlier, a master's program at a recruiter-specific academy, and to have advertised, one week before, the very position I was being contacted about.

This suggests that the company in question provides no security training to its recruiters and, therefore, does not comply with the relevant national and international laws. The address "privacv@[REDACTED].it" was contacted to report everything and to ask for further confirmation of this suspicious behavior; the reply was the following:



Presence Detection Augments 1930s Home


It can be jarring to see various sensors, smart switches, cameras, and other technology in a house built in the 1930s, like [Chris]’s was. But he still wanted presence detection so as to not stub any toes in the dark. The result is a sensor that blends in with the home’s aesthetics a bit better than anything you’re likely to find at the Big Box electronics store.

For the presence detection sensors, [Chris] chose to go with 24 GHz mmWave radar modules that, unlike infrared sensors, can detect if a human is in an area even if they are incredibly still. Paired with the diminutive ESP32-S2 Mini, each pair takes up very little real estate on a wall.
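[Chris]'s actual configuration lives on his project page; purely as a rough illustration of how simple the sensor side can be (the pin number and the module's digital presence output are assumptions on our part — many 24 GHz modules expose one), an Arduino-style sketch for the ESP32 might look like this:

```cpp
// Hypothetical minimal sketch for an ESP32-S2 Mini wired to a mmWave
// module's digital "presence" output pin. Pin choice is an assumption.
const int PRESENCE_PIN = 4;   // module's OUT pin: HIGH = someone is there
bool lastState = false;

void setup() {
  Serial.begin(115200);
  pinMode(PRESENCE_PIN, INPUT);
}

void loop() {
  bool present = digitalRead(PRESENCE_PIN) == HIGH;
  if (present != lastState) {          // report only on state changes
    Serial.println(present ? "presence detected" : "room clear");
    lastState = present;               // a Home Assistant integration would
  }                                    // publish this state instead
  delay(100);
}
```

In a real install the serial print would be replaced by whatever transport feeds Home Assistant, but the sensor-reading logic stays this small.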

Although he doesn’t have a 3D printer to really pare down the size of the enclosure to the maximum, he found pre-made enclosures instead that are fairly inconspicuous on the wall. Another design goal here was to make sure that everything was powered so he wouldn’t have to perpetually change batteries, so a small wire leads from the prototype unit as well.

The radar module and ESP pair are set up with some code to get them running in Home Assistant, which [Chris] has provided on the project’s page. With everything up and running he has a module that can control lights without completely changing the aesthetic or behavior of his home. If you’re still using other presence sensors and are new to millimeter wave radar, take a look at this project for a good guide on getting started with this fairly new technology.


hackaday.com/2025/04/18/presen…


This Week in Security: No More CVEs, 4chan, and Recall Returns


The sky is falling. Or more specifically, it was about to fall, according to the security community this week. The MITRE Corporation came within a hair's breadth of letting its contract to maintain the CVE database lapse. And admittedly, it would be a bad thing if we suddenly lost updates to the central CVE database. What's particularly interesting is how we knew about this possibility at all. An April 15 letter sent to the CVE board warned that the specific contract that funds MITRE's CVE and CWE work was due to expire on the 16th. This was not an official release, and it's not clear exactly how this document was leaked.

Many people made political hay out of the apparent imminent carnage. And while there’s always an element of political maneuvering when it comes to contract renewal, it’s worth noting that it’s not unheard of for MITRE’s CVE funding to go down to the wire like this. We don’t know how many times we’ve been in this position in years past. Regardless, MITRE has spun out another non-profit, The CVE Foundation, specifically to see to the continuation of the CVE database. And at the last possible moment, CISA has announced that it has invoked an option in the existing contract, funding MITRE’s CVE work for another 11 months.

Android Automatic Reboots


Mobile devices are in their most secure state right after boot, before the user password is entered to unlock the device for the first time. Tools like Cellebrite will often work on a device that has been unlocked at least once, but just can't exploit a device in the freshly booted state. This is why Google is rolling out a feature where Android devices that haven't been unlocked for three days will automatically reboot.

Once a phone is unlocked, the encryption keys are stored in memory, and it only takes a lock screen bypass to have full access to the device. But before the initial unlock, the device is still encrypted, and the keys are safely stored in the hardware security module. It’s interesting that this new feature isn’t delivered as an Android OS update, but as part of the Google Play Services — the closed source libraries that run on official Android phones.

4chan


4chan has been hacked. It turns out, running ancient PHP code and out-of-date libraries on a controversial site is not a great idea. A likely exploit chain has been described, though this should be considered very unofficial at this point: Some 4chan boards allow PDF uploads, but the server didn’t properly vet those files. A PostScript file can be uploaded instead of a PDF, and an old version of Ghostscript processes it. The malicious PostScript file triggers arbitrary code execution in Ghostscript, and a SUID binary is used to elevate privileges to root.
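To make the first link in that chain concrete, here's a minimal sketch of the content check the upload handler apparently lacked (ours, for illustration only — 4chan's actual code is PHP we haven't seen): a real PDF starts with the magic bytes %PDF-, while PostScript starts with %!PS, so validating content rather than file extension defeats the disguise.

```cpp
// Illustrative only: verify that an "uploaded PDF" really starts with the
// PDF magic bytes. A PostScript file begins with "%!PS", so renaming it
// to .pdf would not get past this check.
#include <fstream>
#include <iostream>
#include <string>

bool looksLikePdf(const std::string& path) {
    std::ifstream f(path, std::ios::binary);
    char magic[5] = {0};
    f.read(magic, 5);
    return f && std::string(magic, 5) == "%PDF-";  // e.g. "%PDF-1.7"
}

int main(int argc, char** argv) {
    if (argc < 2) return 1;
    std::cout << (looksLikePdf(argv[1]) ? "accept" : "reject") << '\n';
}
```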

PHP source code of the site has been leaked, and the site is still down as of the time of writing. It’s unclear how long restoration will take. Part of the fallout from this attack is the capture and release of internal discussions, pictures of the administrative tools, and even email addresses from the site’s administration.

Recall is Back


Microsoft is back at it, working to release Recall in a future Windows 11 update. You may remember our coverage of this, castigating the security failings, and pointing out that Recall managed to come across as creepy. Microsoft wisely pulled the project before rolling it out as a full release.

If you’re not familiar with the Recall concept, it’s the automated screenshotting of your Windows machine every few seconds. The screenshots are then locally indexed with an LLM, allowing for future queries to be run against the data. And once the early reviewers got over the creepy factor, it turns out that’s genuinely useful sometimes.

On top of the security hardening Microsoft has already done, this iteration of Recall is an opt-in service, with an easy pause button to temporarily disable the snapshot captures. This is definitely an improvement. Critics are still sounding the alarm, but for a much narrower problem: Recall's snapshots will automatically extract information from security-focused applications. Think about Signal's disappearing messages feature. If you send such a message to a desktop user who has Recall enabled, the message is likely stored in that user's Recall database.

It seems that Microsoft has done a reasonably good job of cleaning up the Recall feature, particularly by disabling it by default. It seems like the privacy issues could be further addressed by giving applications and even web pages a way to opt out of Recall captures, so private messages and data aren't accidentally captured. As Recall rolls out, do keep in mind the potential extra risks.

16,000 Symlinks


It’s been recently discovered that over 16,000 Fortinet devices are compromised with a trivial backdoor, in the form of a symlink making the root filesystem available inside the web-accessible language folder. This technique is limited to devices that have the SSL VPN enabled. That system exposes a web interface, with multiple translation options. Those translation files live in a world-accessible folder on the web interface, and it makes for the perfect place to hide a backdoor like this one. It’s not a new attack, and Fortinet believes the exploited devices have harbored this backdoor since the 2023-2024 hacking spree.
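Hunting for this class of backdoor is straightforward in principle. As a hedged illustration of the general idea (not Fortinet's tooling, and not something you'd run on FortiOS itself), here's a sketch that walks a web-served directory and flags any symlink whose resolved target escapes it:

```cpp
// Illustrative scanner: flag symlinks under a web root whose resolved
// target lies outside it (the backdoor described above linked the
// language-file folder to the root filesystem).
#include <filesystem>
#include <iostream>
#include <string>

namespace fs = std::filesystem;

int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: linkscan <webroot>\n"; return 1; }
    const fs::path root = fs::canonical(argv[1]);
    for (const auto& entry : fs::recursive_directory_iterator(
             root, fs::directory_options::skip_permission_denied)) {
        if (!entry.is_symlink()) continue;
        std::error_code ec;
        fs::path raw = fs::read_symlink(entry.path(), ec);
        if (ec) continue;
        fs::path target = fs::weakly_canonical(entry.path().parent_path() / raw, ec);
        if (ec) continue;
        // naive prefix test: good enough for a sketch, not for production
        if (target.string().rfind(root.string(), 0) != 0)
            std::cout << "suspicious symlink: " << entry.path().string()
                      << " -> " << target.string() << '\n';
    }
    return 0;
}
```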

Vibes


We’re a little skeptical on the whole vibe coding thing. Our own [Tyler August] covered one of the reasons why. LLMs are likely to hallucinate package names, and vibe coders may not check closely, leading to easy typosquatting (LLMsquatting?) attacks. Figure out the likely hallucinated names, register those packages, and profit.

But what about Vibe Detections? OK, we know, letting an LLM look at system logs for potentially malicious behavior isn’t a new idea. But [Claudio Contin] demonstrates just how easy it can be, with the new EDV tool. Formally not for production use, this new gadget makes it easy to take Windows system events, and feed them into Copilot, looking for potentially malicious activity. And while it’s not perfect, it did manage to detect about 40% of the malicious tests that Windows Defender missed. It seems like LLMs are going to stick around, and this might be one of the places they actually make sense.

Bits and Bytes


Apple has pushed updates to their entire line, fixing a pair of 0-day vulnerabilities. The first is a wild vulnerability in CoreAudio, where playing a malicious audio file can lead to arbitrary code execution. The chaser is a flaw in the Pointer Authentication scheme that Apple uses to prevent memory-related vulnerabilities. Apple has acknowledged that these flaws were used in the wild, but no further details have been released.

The Gnome desktop has an interesting problem, where the yelp help browser can be tricked into reading the contents of arbitrary filesystem files. Combined with the possibility of browser links automatically opening in yelp, this makes for a much more severe problem than one might initially think.

And for those of us following along with Google Project Zero’s deep dive into the Windows Registry, part six of that series is now available. This installment dives into actual memory structures, as well as letting us in on the history of why the Windows registry is called the hive and uses the 0xBEE0BEE0 signature. It’s bee themed, because one developer hated bees, and another developer thought it would be hilarious.


hackaday.com/2025/04/18/this-w…


D20-shaped Quasicrystal Makes High-Strength Alloy Printable


An electron microscope image of the aluminum alloy from the study.

When is a crystal not a crystal? When it's a quasicrystal, a paradoxical form of metal recently found in some 3D printed metal alloys by [A.D. Iams et al] at the US National Institute of Standards and Technology (NIST).

As you might remember from chemistry class, crystals are made up of blocks of atoms (usually called 'unit cells') that fit together in perfect repetition — barring dislocations, cracks, impurities, or anything else that might throw off a theoretically perfect crystal structure. There are only so many ways to tessellate atoms in 3D space; 230 of them, to be precise. A quasicrystal isn't any of them. Rather than repeat endlessly in 3D space, a quasicrystal never repeats perfectly, like a three-dimensional Penrose tiling. The discovery of quasicrystals dates back to the 1980s, and was recognized with a Nobel Prize in 2011.
Image: Penrose tiling of thick and thin rhombi – the pattern never repeats perfectly. Quasicrystals do this in 3D. (Image by Inductiveload, Public Domain)
Quasicrystals aren't exactly common in nature, so how does 3D printing come into this? Well, it turns out that, quite accidentally, a particular aluminum-zirconium alloy was forming small zones of quasicrystals (the black spots in the image above) when used in powder bed fusion printing. Other high-strength alloys tended to be very prone to cracking, to the point of unusability, and this Al-Zr alloy, discovered in 2017, was the first of its class.

You might imagine that the non-regular structure of a quasicrystal wouldn't propagate cracks as easily as a regular crystal structure, and you would be right! The NIST researchers obviously wanted to investigate why the printable alloy has the properties it does. When their crystallographic analysis showed not only five-fold, but also three-fold and two-fold rotational symmetry when examined from different angles, the researchers realized they had a quasicrystal on their hands. The unit cell is in the form of a 20-sided icosahedron, providing the Penrose-style tiling that keeps the alloy from cracking.

You might say the original team that developed the alloy rolled a nat-20 on their crafting skill. Now that we understand why it works, this research opens up the doors for other metallic quasicrystals to be developed on purpose, in aluminum and perhaps other alloys.

We’ve written about 3D metal printers before, and highlighted a DIY-able plastic SLS kit, but the high-power powder-bed systems needed for aluminum aren’t often found in makerspaces. If you’re building one or know someone who is, be sure to let us know.


hackaday.com/2025/04/18/d20-sh…


Track Your Circuits: A Locomotive PCB Badge


This fun PCB from [Nick Brown] features a miniature railroad implemented with 0805-sized LEDs. With an eye towards designing his own fun interactive PCB badge, the Light-Rail began its journey. He thoroughly documented his process, from shunting various late-night ideas together to tracking down discrepancies between the documentation of a part and the received part.

Inspired by our very own Supercon 2022 badge, he wanted to make a fun badge with a heavy focus on the aesthetics of the final design. He also wanted to challenge himself some in this project, so even though there are over 100 LEDs, they are not laid out in a symmetrical or matrix pattern. Instead, it's an organic, winding railroad with crossings and stations throughout the board. Designed in KiCad, the board contains 144 LEDs, 3 seven-segment displays, and over a dozen buttons that all come together in the built-in game.

The challenges didn't stop at just the organic layout of all those LEDs. He decided to use Rust for this project, which entailed writing his own driver for the seven-segment displays as well as creating a tone library for the onboard buzzer. As with all projects, unexpected challenges popped up along the way. One issue with how the oscillator was hooked up meant he wasn't able to program the ATmega32U4, which is the brains of the entire railroad. After some experimenting, he came up with a clever hack: using a pogo pin jig to connect the clock where it needed to go while programming the board.

Be sure to check out all the details of this journey in his build log. If you love interactive badges also check out some of the other creative boards we’ve featured.


hackaday.com/2025/04/18/track-…


Update and Die: Windows 11 Shows the Blue Screen of Death (BSOD) After the April Updates


Microsoft is warning users that they may encounter a blue screen and a SECURE_KERNEL_ERROR after installing the March and April Windows updates. The problems occur after installing the April cumulative update KB5055523 and the March preview update KB5053656, and affect only devices running Windows 11 24H2. After installing these updates and rebooting the PC, affected users experience a crash.

"After installing the update and restarting the device, you might see a blue screen with error code 0x18B, indicating a SECURE_KERNEL_ERROR," Microsoft says.

Until a fix for this bug is distributed via Windows Update, the company has temporarily addressed the issue through Known Issue Rollback (KIR), a feature that rolls back problematic updates delivered through Windows Update.

The fix will be rolled out automatically to all home, business, and non-IT-managed devices within 24 hours. To speed up its delivery, Microsoft recommends that affected users reboot their devices.

To resolve the issue on managed enterprise Windows devices, administrators of Windows 11 24H2 and Windows Server 2025 need to apply the KIR Group Policy KB5053656 250412_03103.

It is also worth noting that earlier this week Microsoft released emergency Windows updates fixing a separate issue affecting local access control policies in Active Directory Group Policy.

In addition, the company warned administrators about a bug that could make domain controllers unavailable in Windows Server 2025 after a reboot, resulting in failing services and applications.



Tiny, Hackable Telepresence Robot for under $100? Meet Goby


[Charmed Labs] are responsible for bringing numerous open-source hardware products to fruition over the years, and their latest device is an adorably small robotic camera platform called Goby, currently crowdfunding for its initial release. Goby has a few really clever design features and delivers a capable (and hackable) platform for under 100 USD.

Goby embraces its small size, delivering what its creators dub “tinypresence” — or the feeling of being there, but on a very small scale. Cardboard courses, LEGO arenas, or even tabletop gaming scenery hits different when experienced from a first-person perspective. Goby is entirely reprogrammable with nothing more than a USB cable and the Arduino IDE, while costing less than most Arduino starter kits.
Recharging happens by driving over the charger, then pivoting down so the connectors (the little blunt vampire fangs under and to each side of the camera) come into contact with the charger.
One of the physical features we really like is the tail-like articulated caster at the rear. Flexing this pivots Goby up or down (and can even flip Goby completely over), allowing one to pan and tilt the view without needing to mount the camera on a gimbal. It also comes into play for recharging; Goby simply moves over the disc-shaped charger and pivots down to make contact.

At Goby‘s heart is an ESP32-S3 and OmniVision OV2640 camera sensor streaming a live video feed (and driving controls) with WebRTC. Fitting the WebRTC stack onto an ESP32 wasn’t easy, but opens up possibilities beyond just media streaming.

Goby is set up to make launching an encrypted connection as easy as sharing a URL or scanning a QR code. The link is negotiated between bot and client with the initial help of an external server, and once a peer-to-peer connection is established, the server’s job is done and it is out of the picture. [Charmed Labs]’s code for this functionality — named BitBang — is in beta and destined for an open release as well. While BitBang is being used here to make it effortless to access Goby remotely, it’s more broadly intended to make web access for any ESP32-based device easier to implement.

As far as tiny remote camera platforms go, it might not be as small as rebuilding a Hot Wheels car into a micro RC platform, but it’s definitely more accessible and probably cheaper, to boot. Check it out at the Kickstarter (see the first link in this post) and watch it in action in the video, embedded just below the page break.

youtube.com/embed/iuJ_9_ITKFs?…


hackaday.com/2025/04/17/tiny-h…


Rise of the Robots: How Robots Are Changing Dairy Farms


Running a dairy farm used to be a rather hands-on experience, with the farmer required to be around every few hours to milk the cows, feed them, do all the veterinarian tasks that the farmer can do themselves, and so on. The introduction of milking machines in the early 20th century, however, began a trend of increased automation whereby a single farmer could handle a hundred cows by the end of the century instead of only a couple. A recent article in IEEE Spectrum covers the continued progress here, including cows milking themselves, on-demand style, as shown in the top image.

The article focuses primarily on Dutch company Lely’s recent robots, which range from said self-milking robots to a manure cleaning robot that looks like an oversized Roomba. With how labor-intensive (and low-margin) a dairy farm is, any level of automation that can improve matters will be welcomed, with so far Lely’s robots receiving a mostly positive response. Since cows are pretty smart, they will happily guide themselves to a self-milking robot when they feel that their udders are full enough, which can save the farmer a few hours of work each day, as this robot handles every task, including the cleaning of the udders prior to milking and sanitizing itself prior to inviting the next cow into its loving embrace.

As for the other tasks, speaking as a genuine Dutch dairy farm girl who was born & raised around cattle (and sheep), the idea of e.g. mucking out stables being taken over by robots is something that raises a lot more skepticism. After all, a farmer’s children have to earn their pocket money somehow, which includes mucking, herding, farm maintenance and so on. Unless those robots get really cheap and low maintenance, the idea of fully automated dairy farms may still be a long while off, but reducing the workload and making cows happier are definitely lofty goals.

Top image: The milking robot that can automatically milk a cow without human assistance. (Credit: Lely)


hackaday.com/2025/04/17/rise-o…


A Blacksmith Shows Us How To Choose An Anvil


No doubt many readers have at times wished to try their hand at blacksmithing, but it's fair to say that acquiring an anvil represents quite the hurdle. For anyone not knowing where to turn, there's a video from [Black Bear Forge], in which he takes us through a range of budget options.

He starts with a sledgehammer, the simplest anvil of all, which we would agree makes a very accessible means to do simple forge work. He shows us a rail anvil and a couple of broken old anvils, before spending some time on a cheap Vevor anvil and going on to some much nicer more professional ones. It’s probably the Vevor which is the most interesting of the ones on show though, not because it is particularly good but because it’s a chance to see up close one of these very cheap anvils.

Are they worth taking the chance? The one he’s got has plenty of rough parts and casting flaws, an oddly-sited pritchel and a hardy hole that’s too small. These anvils are sometimes referred to as “Anvil shaped objects”, and while this one could make a reasonable starter it’s not difficult to see why it might not be the best purchase. It’s a subject we have touched on before in our blacksmithing series, so we’re particularly interested to see his take on it.

youtube.com/embed/ZJFFCp6-wKs?…


hackaday.com/2025/04/17/a-blac…


Designing an FM Drum Synth from Scratch


How it started: a simple repair job on a Roland drum machine. How it ended: a scratch-built FM drum synth module that’s completely analog, and completely cool.

[Moritz Klein]’s journey down the analog drum machine rabbit hole started with a Roland TR-909, a hybrid drum machine from the mid-80s that combined sampled sounds with analog synthesis. The unit [Moritz] picked up was having trouble with the decay on the kick drum, so he spread out the gloriously detailed schematic and got to work. He breadboarded a few sections of the kick drum circuit to aid troubleshooting, but one thing led to another and he was soon in new territory.

The video below is on the longish side, with the first third or so dedicated to recreating the circuits used to create the 909’s iconic sound, slightly modifying some of them to simplify construction. Like the schematic that started the whole thing, this section of the video is jam-packed with goodness, too much to detail here. But a few of the gems that caught our eye were the voltage-controlled amplifier (VCA) circuit that seems to make appearances in multiple places in the circuit, and the dead-simple wave-shaper circuit, which takes some of the harmonics out of the triangle wave oscillator’s output with just a couple of diodes and some resistors.

Once the 909’s kick and toms section had been breadboarded, [Moritz] turned his attention to adding something Roland hadn’t included: frequency modulation. He did this by adding a second, lower-frequency voltage-controlled oscillator (VCO) and using that to modulate the drum section. That resulted in a weird, metallic sound that can be tuned to imitate anything from a steel drum to a bell. He also added a hi-hat and cymbal section by mixing the square wave outputs on the VCOs through a funky XOR gate made from discrete components and a high-pass filter.
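For those who haven't met FM synthesis before, the underlying math is compact. This is the standard textbook form of two-operator FM, not a model of [Moritz]'s specific circuit:

```latex
% Classic two-operator FM: a carrier at f_c, phase-modulated at f_m.
y(t) = A \sin\!\big( 2\pi f_c t + I \sin( 2\pi f_m t ) \big)
```

Here f_c is the drum voice's oscillator, f_m is the added low-frequency VCO, and the modulation index I sets how strong the inharmonic sidebands are – which is what pushes the sound toward steel drums and bells.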

There’s a lot of information packed into this video, and by breaking everything down into small, simple blocks, [Moritz] makes it easy to understand analog synths and the circuits behind them.

youtube.com/embed/Xbl1xwFR3eg?…


hackaday.com/2025/04/17/design…


Bicycle Gearbox Does it by Folding


If you've spent any time on two wheels, you've certainly experienced the woes of poor bicycle shifting. You hit the button or twist the knob expecting a smooth transition into the next gear, only to be met with angry metallic clanking that you try to push through but ultimately can't. Bicycle manufacturers have collectively spent millions attempting to remedy this issue with the likes of gearboxes, electronic shifting, and even belt-driven bikes. But Praxis believes it has a better solution in its prototype HiT system.

Rather than moving a chain between gears, their novel solution works by folding gears into or away from a chain. These gears are made up of four separate segments that individually pivot around an axle near the cog's center. These segments are carefully timed to ensure there is no interference with the chain, making shifting look like a complex mechanical ballet.

While the shift initialization is handled electronically, the gear folding synchronization is mechanical. The combination of electronic and mechanical systems brings near-instant shifting under load at rotational rates of 100 RPM. Make sure to scroll through the product page and watch the videos showcasing the mechanism!

The HiT gearbox is a strange hybrid between a derailleur and a gearbox. It doesn't contain a clutch-based gear change system or even a CVT as seen in the famous Honda bike of old. It's fully sealed, with more robust chains and no moving chainline as in a derailleur system. The prototype is configurable between four or sixteen speeds, with the four-speed consisting of two folding gear pairs connected with a chain and the sixteen-speed featuring a separate pair of folding gears. The output is either concentric to the input, or above the input for certain types of mountain bikes.

Despite the high level of polish, this remains a prototype and we eagerly await what Praxis does next with the system. In the meantime, make sure to check out this chainless e-drive bicycle.


hackaday.com/2025/04/17/bicycl…


Supercon 2024: Exploring the Ocean with Open Source Hardware


If you had to guess, what do you think it would take to build an ocean-going buoy that could not only survive on its own without human intervention for more than two years, but return useful data the whole time? You’d probably assume such a feat would require beefy hardware, riding inside an expensive and relatively large watertight vessel of some type — and for good reason, the ocean is an unforgiving environment, and has sent far more robust hardware to the briny depths.

But as Wayne Pavalko found back in 2016, a little planning can go a long way. That’s when he launched the first of what he now calls Maker Buoys: a series of solar-powered drifting buoys that combine a collection of off-the-shelf sensor boards with an Arduino microcontroller and an Iridium Short-Burst Data (SBD) modem in a relatively simple watertight box.

He guessed that first buoy might last a few weeks to a month, but when he finally lost contact with it after 771 days, he realized there was real potential for reducing the cost and complexity of ocean research.

Wayne recalled the origin of his project and updated the audience on where it’s gone from there during his 2024 Supercon talk, Adventures in Ocean Tech: The Maker Buoy Journey. Even if you’re not interested in charting ocean currents with homebrew hardware, his story is an inspirational reminder that sometimes a fresh approach can help solve problems that might at first glance seem insurmountable.

DIY All the Way


As Dan Maloney commented when he wrote up that first buoy's journey in 2017, the Bill of Materials for a Maker Buoy is tailored for the hobbyist. Despite being capable of journeys lasting for several thousand kilometers in the open ocean, there are no marine-grade unobtainium parts onboard. Indeed, nearly all of the electronic components can be sourced from Adafruit, with the most expensive line item being the RockBLOCK 9603 Iridium satellite modem at $299.

Even the watertight container that holds all the electronics is relatively pedestrian. It’s the sort of plastic latching box you might put your phone or camera in on a boat trip to make sure it stays dry and floats if it falls overboard. Wayne points out that the box being clear is a huge advantage, as you can mount the solar panel internally. Later versions of the Maker Buoy even included a camera that could peer downward through the bottom of the box.

Wayne says that first buoy was arguably over-built, with each internal component housed in its own waterproof compartment. Current versions instead hold all of the hardware in place with a 3D printed internal frame. The bi-level framework puts the solar panel, GPS, and satellite modem up at the top so they’ve got a clear view of the sky, and mounts the primary PCB, battery, and desiccant container down on the bottom.

The only external addition necessary is to attach a 16 inch (40 centimeter) long piece of PVC pipe to the bottom of the box, which acts as a passive stabilizer. Holes drilled in the pipe allow it to fill with water once submerged, lowering the buoy’s center of gravity and making it harder to flip over. At the same time, should the buoy find itself inverted due to wave action, the pipe will make it top-heavy and flip it back over.

It’s simple, cheap, and incredibly effective. Wayne mentions that data returned from onboard Inertial Measurement Units (IMUs) have shown that Maker Buoys do occasionally find themselves going end-over-end during storms, but they always right themselves.

youtube.com/embed/5WSOKEplw9g?…

Like Space…But Wetter

The V1 Maker Buoy was designed to be as reliable as possible.
Early on in his presentation, Wayne makes an interesting comparison when talking about the difficulties in developing the Maker Buoy. He likens it to operating a spacecraft in that your hardware is never coming back, nobody will be able to service it, and the only connection you’ll have to the craft during its lifetime is a relatively low-bandwidth link.

But one could argue that the nature of Iridium communications makes the mission of the Maker Buoy even more challenging than your average spacecraft. As the network is really only designed for short messages — at one point Wayne mentions that even sending low-resolution images of only a few KB in size was something of an engineering challenge — remotely updating the software on the buoy isn't an option. So even though the nearly fifty-year-old Voyager 1 can still receive the occasional software patch from billions of miles away, once you drop a Maker Buoy into the ocean, there's no way to fix any bugs in the code.

Because of this, Wayne decided to take the extra step of adding a hardware watchdog timer that can monitor the buoy's systems and reboot the hardware if necessary. It's a bit like unplugging your router when the Internet goes out…if your Internet was coming from a satellite in low-Earth orbit and your living room happened to be in the middle of the ocean.
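The talk doesn't spell out the exact circuit, so treat the following as a sketch of the pattern rather than Wayne's actual firmware: an external watchdog IC (the part and pin choices below are assumptions) holds the power or reset line, and will power-cycle the buoy unless the code periodically proves it is still alive.

```cpp
// A minimal Arduino-style sketch of the hardware-watchdog pattern.
// WDT_PET_PIN is a hypothetical GPIO wired to the watchdog's input; if the
// pulse stops arriving, the watchdog resets or power-cycles the whole buoy.
const int WDT_PET_PIN = 5;

void setup() {
  pinMode(WDT_PET_PIN, OUTPUT);
}

void petWatchdog() {
  // a short pulse tells the external watchdog the firmware is still alive
  digitalWrite(WDT_PET_PIN, HIGH);
  delay(10);
  digitalWrite(WDT_PET_PIN, LOW);
}

void loop() {
  // ... wake, read GPS and sensors, send an Iridium SBD message ...
  petWatchdog();   // only reached if the main cycle completed successfully
  delay(60000UL);  // idle between cycles (illustrative timing)
}
```

The key design point is that the pet only happens at the end of a healthy cycle, so a hang anywhere in the sensor or modem code starves the watchdog and forces a clean reboot.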

From One to Many


After publishing information about his first successful Maker Buoy online, Wayne says it wasn’t long before folks started contacting him about potential applications for the hardware. In 2018, a Dutch non-profit expressed interest in buying 50 buoys from him to study the movement of floating plastic waste in the Pacific. The hardware was more than up to the task, but there was just one problem: up to this point, Wayne had only built a grand total of four buoys.

Opportunities like this, plus the desire to offer the Maker Buoy in kit and ready-to-deploy variants for commercial and educational purposes, meant Wayne had to streamline his production. When it's just a personal project, it doesn't really matter how long it takes to assemble or if everything goes together correctly the first time. But that approach just won't work if you need to deliver functional units in quantities that you can't count on your fingers.

As Wayne puts it, making something and making something that’s easily producible are really two very different things. The production becomes a project in its own right. He explains that investing the time and effort to make repetitive tasks more efficient and reliable, such as developing jigs to hold pieces together while you’re working on them, more than pays off for itself in the end. Even though he’s still building them himself in his basement, he uses an assembly line approach that allows for the consistent results expected by paying customers.

A Tale Well Told


While the technical details of how Wayne designed and built the different versions of the Maker Buoy are certainly interesting, it’s hearing the story of the project from inception to the present day that really makes watching this talk worthwhile. What started as a simple “What If” experiment has spiraled into a side-business that has helped deploy buoys all over the planet.

Admittedly, not every project has that same potential for growth. But hearing Wayne tell the Maker Buoy story is the sort of thing that makes you want to go dust off that project that’s been kicking around in the back of your head and finally give it a shot. You might be surprised by the kind of adventure taking a chance on a wild idea can lead to.

youtube.com/embed/cCYdSGGZcv0?…


hackaday.com/2025/04/17/superc…


Budget Schlieren Imaging Setup Uses 3D Printing to Reveal the Unseen


We're suckers here for projects that let you see the unseeable, and [Ayden Wardell Aerospace] provides that on a budget with their $30 Schlieren Imaging Setup. The unseeable in question is differences in air density – or, more precisely, differences in the refractive index of the fluid the imaging setup makes use of, in this case air. Think of how you can see waves of "heat" on a warm day – that's lower-density hot air refracting light as it rises. Schlieren photography weaponizes this, allowing one to analyze fluid flows – for example, the Mach cones in a DIY rocket nozzle, which is what got [Ayden Wardell Aerospace] interested in the technique.

Image: Shock diamonds from a homemade rocket nozzle imaged by this setup. Examining exhaust makes this a useful tool for [Ayden Wardell Aerospace].

This is a 'classic' mirror-and-lamp Schlieren setup. You put the system you wish to film near the focal plane of a spherical mirror, and the camera and light source out at twice the focal distance. Rays deflected by changes in refractive index miss the camera – usually one places a razor blade precisely to block them, but [Ayden] found that when using a smartphone that was unnecessary, which shocked this author.

While it is possible that [Ayden Wardell Aerospace] has technically constructed a shadowgraph, they claim that carefully positioning the smartphone allows the sharp edge of the case to replace the razor blade. A shadowgraph, which shows the second derivative of density, is a perfectly valid technique for flow visualization, and is superior to Schlieren photography in some circumstances– when looking at shock waves, for example.
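For readers weighing that distinction, the standard textbook relations make it concrete (these are general Schlieren/shadowgraph results, not measurements from this build): a Schlieren system responds to the first spatial derivative of refractive index, a shadowgraph to the second.

```latex
% Ray deflection integrated along the optical path z (Schlieren;
% knife-edge contrast is proportional to the first derivative):
\varepsilon_x \;\approx\; \frac{1}{n_0}\int \frac{\partial n}{\partial x}\,\mathrm{d}z

% Relative intensity change on the screen (shadowgraph;
% contrast is proportional to the second derivative):
\frac{\Delta I}{I} \;\propto\; \int \left( \frac{\partial^2 n}{\partial x^2}
  + \frac{\partial^2 n}{\partial y^2} \right) \mathrm{d}z
```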

Regardless, the great thing about this project is that [Ayden Wardell Aerospace] provides us with STLs for the mirror and smartphone mounting, as well as a BOM and a clear instructional video. Rather than arguing in the comments about whether this is "truly" Schlieren imaging, grab a mirror, extrude some filament, and test it for yourself!

There are many ways to do Schlieren imaging. We've highlighted background-oriented techniques, and seen how to do it with a moiré pattern, or even a selfie stick. Still, this is the first time 3D printing has gotten involved, and the build video below is quick and worth watching for those sweet, sweet Schlieren images.

youtube.com/embed/piGYryly5Bw?…


hackaday.com/2025/04/17/budget…


IronHusky updates the forgotten MysterySnail RAT to target Russia and Mongolia


Day after day, threat actors create new malware to use in cyberattacks. Each of these new implants is developed in its own way, and as a result gets its own destiny – while the use of some malware families is reported for decades, information about others disappears after days, months or several years.

We observed the latter situation with an implant that we dubbed MysterySnail RAT. We discovered it back in 2021, when we were investigating the CVE-2021-40449 zero-day vulnerability. At that time, we identified this backdoor as related to the IronHusky APT, a Chinese-speaking threat actor operating since at least 2017. Since we published a blogpost on this implant, there have been no public reports about it, and its whereabouts have remained unknown.

However, recently we managed to spot attempted deployments of a new version of this implant, occurring in government organizations located in Mongolia and Russia. To us, this observed choice of victims wasn't surprising, as back in 2018 we wrote that IronHusky, the actor related to this RAT, has a specific interest in targeting these two countries. It turns out that the implant has been actively used in cyberattacks all these years, although it went unreported.

Infection through a malicious MMC script


One of the recent infections we spotted was delivered through a malicious MMC script, designed to be disguised as a document from the National Land Agency of Mongolia (ALAMGAC):

Image: Malicious MMC script as displayed in Windows Explorer. It has the icon of a Microsoft Word document.

When we analyzed the script, we identified that it is designed to:

  • Retrieve a ZIP archive with a second-stage malicious payload and a lure DOCX file from the file[.]io public file storage.
  • Unzip the downloaded archive and place the legitimate DOCX file into the %AppData%\Cisco\Plugins\X86\bin\etc\Update folder.
  • Start the CiscoCollabHost.exe file dropped from the ZIP archive.
  • Configure persistence for the dropped CiscoCollabHost.exe file by adding an entry to the Run registry key.
  • Open the downloaded lure document for the victim.


Intermediary backdoor


Having investigated the CiscoCollabHost.exe file, we identified it as a legitimate executable. However, the archive deployed by the attackers also turned out to include a malicious library named CiscoSparkLauncher.dll, designed to be loaded by the legitimate process through the DLL sideloading technique.

We found out that this DLL represents a previously unknown intermediary backdoor, designed to perform C2 communications by abusing the open-source piping-server project. An interesting fact about this backdoor is that information about the Windows API functions it uses is located not in the malicious DLL file, but rather in an external file at the relative path log\MYFC.log. This file is encrypted with a single-byte XOR and is loaded at runtime. It is likely that the attackers introduced this file to the backdoor as an anti-analysis measure – since it is not possible to determine the API functions called without having access to this file, the process of reverse engineering the backdoor essentially turns into guesswork.
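Single-byte XOR is trivially reversible once an analyst has the file – the obstacle is access, not cryptography. As a generic analyst's sketch (ours, not Kaspersky's tooling), brute-forcing the key takes a few lines, since Windows API names decode to plain printable ASCII:

```cpp
// Generic single-byte-XOR brute force for a file like log\MYFC.log:
// try all 256 keys and report the ones yielding mostly printable ASCII.
#include <cstdio>
#include <string>
#include <vector>

int main(int argc, char** argv) {
    if (argc < 2) { std::fprintf(stderr, "usage: xordec <file>\n"); return 1; }
    FILE* f = std::fopen(argv[1], "rb");
    if (!f) return 1;
    std::vector<unsigned char> buf;
    int c;
    while ((c = std::fgetc(f)) != EOF) buf.push_back((unsigned char)c);
    std::fclose(f);

    for (int key = 0; key < 256; ++key) {
        size_t printable = 0;
        std::string out;
        for (unsigned char b : buf) {
            unsigned char d = b ^ (unsigned char)key;
            out.push_back((char)d);
            if (d == '\n' || (d >= 0x20 && d < 0x7f)) ++printable;
        }
        // heuristic: a correct key decodes to >95% printable characters
        if (!buf.empty() && printable * 100 / buf.size() > 95)
            std::printf("key 0x%02X looks plausible:\n%.200s\n",
                        key, out.c_str());
    }
    return 0;
}
```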
By communicating with the legitimate ppng.io server powered by the piping-server project, the backdoor is able to request commands from attackers and send back their execution results. It supports the following set of basic malicious commands:

  • RCOMM – Runs command shells.
  • FSEND – Downloads files from the C2 server.
  • FRECV – Uploads files to the C2 server.
  • FSHOW – Lists directory contents.
  • FDELE – Deletes files.
  • FEXEC – Creates new processes.
  • REXIT – Terminates the backdoor.
  • RSLEE – Performs sleeping.
  • RESET – Resets the timeout counter for the C2 server connection.

As we found out, attackers used commands implemented in this backdoor to deploy the following files to the victim machine:

  • sophosfilesubmitter.exe, a legitimate executable
  • fltlib.dll, a malicious library to be sideloaded

In our telemetry, these files turned out to leave footprints of the MysterySnail RAT malware, an implant we described back in 2021.

New version of MysterySnail RAT


In observed infection cases, MysterySnail RAT was configured to persist on compromised machines as a service. Its malicious DLL, which is deployed by the intermediary backdoor, is designed to load a payload encrypted with RC4 and XOR, stored inside a file named attach.dat. When decrypted, it is reflectively loaded using DLL hollowing with the help of code implemented inside the run_pe library.

Just as with the version of MysterySnail RAT we described in 2021, the latest version of this implant uses attacker-created HTTP servers for communication. We have observed communications being performed with the following servers:

  • watch-smcsvc[.]com
  • leotolstoys[.]com

Having analyzed the set of commands implemented in the latest version of this backdoor, we identified that it is quite similar to the one implemented in the 2021 version of MysterySnail RAT – the newly discovered implant is able to accept about 40 commands, making it possible to:

  • Perform file system management (read, write and delete files; list drives and directories).
  • Execute commands via the cmd.exe shell.
  • Spawn and kill processes.
  • Manage services.
  • Connect to network resources.

Compared to the samples of MysterySnail RAT we described in our 2021 article, these commands were implemented differently. While the version of MysterySnail from 2021 implements these commands inside a single malicious component, the newly discovered version of the implant relies on five additional DLL modules, downloaded at runtime, for command execution. These modules are as follows:

  • Module ID 0 – Basic (BasicMod.dll): Allows listing drives, deleting files, and fingerprinting the infected machine.
  • Module ID 1 – EMode (ExplorerMoudleDll.dll – sic!): Allows reading files, managing services, and spawning new processes.
  • Module ID 2 – PMod (process.dll): Allows listing and terminating running processes.
  • Module ID 3 – CMod (cmd.dll): Allows creating new processes and spawning command shells.
  • Module ID 4 – TranMod (tcptran.dll): Allows connecting to network resources.

However, this transition to a modular architecture isn't something new, as we have seen modular versions of the MysterySnail RAT deployed as early as 2021. These versions featured the same modules as described above, including the typo in the ExplorerMoudleDll.dll module name. Back then, we promptly made information about these versions available to subscribers of our APT Intelligence Reporting service.

MysteryMonoSnail – a repurposed version of MysterySnail RAT


Notably, a short time after we blocked the recent intrusions related to MysterySnail RAT, we observed the attackers continuing their attacks by deploying a repurposed and more lightweight version of MysterySnail RAT. This version consists of a single component, which is why we dubbed it MysteryMonoSnail. We noted that it performed communications with the same C2 server addresses as found in the full-fledged version of MysterySnail RAT, albeit via a different protocol – WebSocket instead of HTTP.

This version doesn’t have as many capabilities as the version of MysterySnail RAT that we described above – it was programmed to have only 13 basic commands, used to list directory contents, write data to files, and launch processes and remote shells.

Obsolete malware families may reappear at any time


Four years – the gap between publications on MysterySnail RAT – is quite a lengthy stretch. What is notable is that throughout that time, the internals of this backdoor hardly changed. For instance, the typo in ExplorerMoudleDll.dll that we previously noted was present in the modular version of MysterySnail RAT from 2021. Furthermore, commands implemented in the 2025 version of this RAT were implemented similarly to the 2021 version of the implant. That is why, while conducting threat hunting activities, it's crucial to consider that old malware families, which have not been reported on for years, may continue their activities under the radar. Because of that, signatures designed to detect historical malware families should never be discontinued simply because they are too old.
At Kaspersky’s GReAT team, we have been focusing on detecting complex threats since 2008 – and we provide sets of IoCs for both old and new malware to customers of our Threat Intelligence portal. If you wish to get access to these IoCs and other information about historical and emerging threats, please contact us at intelreports@kaspersky.com.


securelist.com/mysterysnail-ne…


Modernizing an Enigma Machine


Image: Enigma buttons

This project by [Miro] is awesome, not only did he build a replica Enigma machine using modern technologies, but after completing it, he went back and revised several components to make it more usable. We’ve featured Enigma machines here before; they are complex combinations of mechanical and electrical components that form one of the most recognizable encryption methods in history.

His first Enigma machine was designed closely after the original. He used custom PCBs for the plugboard and lightboard, which significantly cleaned up the internal wiring. For the lightboard, he cleverly used a laser printer on semi-transparent paper to create crisp letters, illuminated from behind. For the keyboard, he again designed a custom PCB to connect all the switches. However, he encountered an unexpected setback due to error stack-up. We love that he took the time to document this issue and explain that the project didn’t come together perfectly on the first try and how some adjustments were needed along the way.

Image: Custom rotary wheel

The real heart of this build is the thought and effort put into the design of the encryption rotors. These are the components that rotate with each keystroke, changing the signal path as the system is used. In a clever hack, he used a combination of PCBs, pogo pins, and 3D printed parts to replicate the function of the original wheels.
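To see why the rotors are the heart of the machine, it helps to look at what one does electrically. The toy sketch below is not [Miro]'s firmware – just a minimal illustration using the published wiring of the historical Enigma I rotor I: a fixed 26-way permutation whose effective mapping shifts as the rotor turns.

```cpp
// Toy illustration of a single Enigma rotor pass. kRotorI is the
// historical wiring of Enigma I rotor I.
#include <iostream>
#include <string>

const std::string kRotorI = "EKMFLGDQVZNTOWYHXUSPAIBRCJ";

// Signal entering contact `pin` (0-25) of a rotor rotated by `offset`.
int throughRotor(int pin, int offset) {
    int contact = (pin + offset) % 26;   // rotation shifts which contact is hit
    int wired = kRotorI[contact] - 'A';  // fixed internal wiring (permutation)
    return (wired - offset + 26) % 26;   // shift back to the machine's frame
}

int main() {
    // Pressing 'A' with the rotor in three different positions yields three
    // different letters: the core of Enigma's polyalphabetic cipher.
    for (int offset = 0; offset < 3; ++offset)
        std::cout << char('A' + throughRotor(0, offset)) << '\n';
}
```

Every keystroke steps the rotor, changing `offset`, which is exactly the "changing signal path" behavior the pogo pins and PCBs have to reproduce electrically.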

Enigma machine connoisseurs will notice that the wheels rotate differently than in the original design, which leads us to the second half of this project. After using the machine for a while, it became clear that the pogo pins were wearing down the PCB surfaces on the wheels. To solve this, he undertook an extensive redesign that resulted in a much more robust and reliable machine.

In the redesign, instead of using pogo pins to make contact with pads, he explored several alternative methods to detect the wheel position—including IR light with phototransistors, rotary encoders, magnetic encoders, Hall-effect sensors, and more. The final solution reduced the wiring and addressed long-term reliability concerns by eliminating the mechanical wear present in the original design.

He documented the build on his site, and also created a video that not only shows what he built but also gives a great explanation of the logic and function of the machine. Be sure to also check out some of the other cool Enigma machines we've featured over the years.

youtube.com/embed/T_UuYkO4OBQ?…


hackaday.com/2025/04/17/modern…


DarknetArmy and the 888 RAT: A Malware Back to Spreading Fear


On the DarknetArmy forum, the threat actor Mr.Robot put a RAT up for sale in August 2023. DarknetArmy is a forum active on the dark web that first emerged in 2018. It is known for sharing hacking tools, security information, and other cybersecurity-related resources.

The RAT, codenamed "888", has attracted renewed interest over the last two weeks, even though the original post dates back two years.

This surge in attention may be due to recent updates or improvements to the malware that make it more effective or harder to detect.

The Blade Hawk group has used a cracked version of this RAT to conduct espionage attacks against Kurdish ethnic groups via forums, social media, and legitimate apps.

Several cracked versions are available in GitHub repositories and are still used by pentesters today. The malware's capabilities are extensive; here are the main ones:

  • Remote Desktop Access: lets attackers control the infected system's desktop remotely.
  • Webcam and Audio Capture: records video and audio through the device's webcam and microphone.
  • Password Recovery: extracts passwords from various browsers and email clients.
  • Keylogging: records keystrokes, allowing the interception of sensitive information such as login credentials.
  • File Management: allows viewing, editing, deleting, and transferring files on the infected system.
  • File Extension Spoofing: alters file extensions to hide the true nature of infected files.
  • Persistence: implements mechanisms to remain active even after a system reboot or file deletion.
  • System Utility Disabling: disables system management tools to avoid detection and removal.
  • Antivirus Detection Bypass: uses obfuscation techniques to avoid detection by antivirus software.
  • Privilege Escalation: runs UAC exploits to obtain elevated privileges on the infected system.

Over the last two weeks, the post has seen a notable increase in comments from users interested in downloading it, suggesting a possible update to the malware or renewed interest in its capabilities.

To protect against threats like the 888 RAT, the following security practices are recommended:

  • Update and patch systems: make sure all systems are up to date with the latest security patches, particularly those addressing vulnerabilities such as MS17-010 (EternalBlue), to prevent exploitation by RATs.
  • Strengthen endpoint protection: deploy advanced endpoint protection solutions capable of detecting and blocking RAT activity, such as unauthorized remote access and process obfuscation.

Queste misure possono aiutare a mitigare i rischi associati a malware sofisticati come il RAT 888, contribuendo a mantenere un ambiente digitale più sicuro.

The article DarknetArmy and the 888 RAT: Malware That Is Back to Spread Fear comes from il blog della sicurezza informatica.


The CVE System Risks Collapse: Deafening Silences, Strategic Dependencies, and a Europe Still Without a Voice


It is one of those moments that pass unnoticed by public opinion, but that for anyone working in cybersecurity sounds like a silent, almost surreal alarm: the concrete possibility that the CVE system – Common Vulnerabilities and Exposures – could stop working. Not because of a cyberattack. Not because of a natural disaster. But because of the far more banal, yet dramatic, problem of the US government failing to renew its funding.

The news came like a bolt from the blue, but on closer inspection the signs of instability had been there for some time. The letter recently sent by the MITRE Corporation, the organization that runs the CVE Program on behalf of the US government, is much more than a technical notice: it is the cry of alarm of a critical infrastructure at risk of shutting down amid general silence.

And yet, if one thing should be clear to anyone working in the field, it is that CVE is not just a database: it is the very linchpin of the shared nomenclature of cybersecurity.

Why is CVE essential?


CVE is the system that assigns a unique identifier – the so-called CVE-ID – to every publicly known software vulnerability. Without these identifiers, the entire cyber ecosystem would lose coherence: vendor analyses, patches, threat reports, risk assessments, SIEM tools, vulnerability scanners, and even the dashboards of national security agencies would no longer know “what to call” the threats.

It is as if someone suddenly decided that car license plates are no longer needed. Every accident would become a bureaucratic mess, every fine a riddle, every report a piece of vague information. That is exactly what would happen without CVE: a semantic black hole at the heart of cybersecurity.

The current crisis: what the MITRE letter says


The communication sent by MITRE – firm, technical, but unequivocal – makes clear that the end of federal funding (provided by CISA, the Cybersecurity and Infrastructure Security Agency) makes it impossible to guarantee the continuity of the activities tied to the CVE program.

Without funding:

  • the triage, validation, and publication processes for vulnerabilities will grind to a halt;
  • the CNAs (CVE Numbering Authorities), the organizations authorized to assign CVE-IDs, will lose their central point of reference;
  • the backlog in assignments, already critically high in recent months, risks becoming chronic;
  • the credibility of the entire CVE ecosystem would be compromised.

It is therefore clear that we are facing a single point of failure of systemic proportions, but – and here the reflection turns geopolitical – also a point of strategic dependency for the entire West, Europe included.

Europe: where are you?


In a multipolar world, where cybersecurity is now considered an integral part of national and continental sovereignty, it is sobering that Europe – even today – has no alternative or complementary system of its own for managing software vulnerabilities.

China has for years maintained its own national database, with its own operating procedures, criteria, and even a “political agenda” in handling disclosures. Russia has independent repositories connected to its federal CERTs. And the European Union? Still far too dependent on an American-centric model, and often more reactive than proactive in the field of cyber intelligence.

This situation reveals a serious gap in strategic autonomy: relying on a critical infrastructure run by a single foreign country – however allied – means exposing ourselves to events like today’s, where a simple administrative decision can affect the entire operational fabric of our companies, our agencies, our defenses.

The operational implications


The impact of a possible suspension of the CVE program would be felt on several levels:

  • Industrial: software and hardware vendors would no longer have a reference framework for advisories.
  • Governmental: security agencies, national CERTs, and defense bodies could no longer synchronize on shared vulnerabilities.
  • Academic: cybersecurity research, which is built on the CVE taxonomy, would lose uniformity and traceability.
  • Media: the specialized press and threat intelligence platforms would lose an essential point of reference for reporting on vulnerabilities.

In practice, a domino effect would compromise the entire cyber risk management chain.

The proposal: a European vulnerability identification system


We need a change of pace. And we need it now.

My proposal – which I have been putting forward for some time in various technical and institutional venues – is to create a European CVE system, run by ENISA in collaboration with the national CERTs, interoperable with the MITRE program where possible but under sovereign control.

Such an infrastructure would allow:

  • more transparent governance, aligned with European values;
  • stronger protection of the continent’s industrial interests;
  • the creation of an ecosystem of European CNAs answering to uniform, verifiable criteria;
  • more coherent integration with the new NIS2 regulation, which requires critical businesses to manage vulnerabilities and incidents precisely and promptly.


Conclusions


The possible collapse of the CVE program is not just a technical crisis. It is a mirror, a watershed, an opportunity – if we choose to see it that way – to stop and reflect on how deeply cybersecurity is now tied to geopolitical and strategic choices.

If Europe truly wants to be sovereign in the digital sphere as well, it can no longer afford to be a mere “user” of other people’s critical systems. It must start building its own.

Because without shared names, security – like language – risks becoming a Babel: confused and ineffective. And in an ever more interconnected world, clarity and coherence are not a luxury but an operational necessity.

The article The CVE System Risks Collapse: Deafening Silences, Strategic Dependencies, and a Europe Still Without a Voice comes from il blog della sicurezza informatica.


CVE and MITRE Saved by the USA. Europe a Helpless Spectator of Its Own National Security


What happened in recent days must serve as a wake-up call for Europe.
While the CVE program — a pillar of global cybersecurity — risked shutting down because US funding had not been extended, Europe stood by as a helpless spectator.

If the project’s funding had not been confirmed at the last minute, how exposed would the national security of European countries have been? It is unacceptable that the protection of our digital infrastructure depends so directly on a critical infrastructure that is entirely American.

As we reported yesterday, we at RHC have been saying this for a long time: we can no longer afford such a dependency. ENISA and the European institutions must open a serious, structured discussion on this issue, promoting the creation of an entirely European project for managing and cataloguing vulnerabilities.

Europe must stop outsourcing its digital resilience.

It is time to build a sovereign, transparent, and interoperable alternative that guarantees continuity, independence, and long-term security. We talk a great deal about technological autonomy. Then let us start with the basics of national security before talking about quantum computing.

The events of the last few hours


The scare seems to be over: the US Cybersecurity and Infrastructure Security Agency (CISA) has renewed its contract with the MITRE Corporation, ensuring the operational continuity of the Common Vulnerabilities and Exposures (CVE) program. This timely intervention averted the interruption of one of the fundamental pillars of global cybersecurity, which was just hours away from suspension for lack of federal funds.

Launched in 1999 and run by MITRE itself, the CVE program is the international reference system for identifying, cataloguing, and standardizing known software vulnerabilities. Its role is crucial to digital defense: it guarantees a common language among vendors, analysts, and institutions, enabling a more coordinated and effective response to threats.

The decision comes in the context of a broader federal cost-cutting strategy, which has already led to contract terminations and staff reductions across several CISA teams. It has therefore reignited the debate over the sustainability and neutrality of a globally important resource like CVE being tied to a single government sponsor.

The fundamental role of the CVE project


The program’s unique identifiers, known as CVE IDs, are fundamental to the entire cybersecurity ecosystem. Researchers, solution vendors, and IT teams around the world use them to efficiently track, classify, and remediate security vulnerabilities. The CVE database underpins critical tools such as vulnerability scanners, patch management systems, and incident response platforms, and plays a strategic role in protecting critical infrastructure.

The crisis erupted when MITRE announced that its contract with the Department of Homeland Security (DHS) to run the CVE program would expire on April 16, 2025, with no renewal planned. The announcement alarmed the cybersecurity community, which regards CVE as an indispensable global standard. Experts warned that an interruption would compromise national vulnerability databases, put security advisories at risk, and hamper the operations of vendors and incident response teams worldwide. In response to the threat, the CVE Foundation was established, with the goal of ensuring the program’s continuity, independence, and long-term stability.

Under growing pressure from the industry, CISA — the program’s main sponsor — stepped in late on Tuesday evening, formally exercising an “option period” on the MITRE contract just hours before it expired.

“The CVE program is a priority for CISA and an essential asset for the cyber community,” a spokesperson told Cyber Security News. “We acted to avoid any interruption of critical CVE services.”

While details about the contract extension and future funding remain uncertain, the intervention averted the immediate shutdown of an infrastructure vital to global cybersecurity.

The article CVE and MITRE Saved by the USA. Europe a Helpless Spectator of Its Own National Security comes from il blog della sicurezza informatica.


Using a MIG Welder, Acetylene Torch, and Air Hammer to Remove a Broken Bolt


A broken bolt is removed by welding on a nut and then using a wrench to unscrew it.

If your shop comes complete with a MIG welder, an acetylene torch, and an air hammer, then you have more options than most when it comes to removing broken bolts.

In this short video [Jim’s Automotive Machine Shop, Inc] takes us through the process of removing a broken manifold bolt: use a MIG welder to attach a washer, then attach a suitably sized nut and weld that onto the washer, heat the assembly with the acetylene torch, loosen up any corrosion on the threads by tapping with a hammer, then simply unscrew with your wrench! Everything is easy when you know how!

Of course if your shop doesn’t come complete with a MIG welder and acetylene torch you will have to get by with the old Easy Out screw extractor like the rest of us. And if you are faced with a nasty bolt situation keep in mind that lubrication can help.

youtube.com/embed/flLPbIvn91k?…


hackaday.com/2025/04/16/using-…


An Absolute Zero of a Project


How would you go about determining absolute zero? Intuitively, it seems like you’d need some complicated physics setup with lasers and maybe some liquid helium. But as it turns out, all you need is some simple lab glassware and a heat gun. And a laser, of course.

To be clear, the method that [Markus Bindhammer] describes in the video below is only an estimation of absolute zero via Charles’s Law, which describes how gases expand when heated. To gather the needed data, [Marb] used a 50-ml glass syringe mounted horizontally on a stand and fitted with a thermocouple. Across from the plunger of the syringe he placed a VL6180 laser time-of-flight sensor, to measure the displacement of the plunger as the air within it expands.

Data from the TOF sensor and the thermocouple were recorded by a microcontroller as the air inside the syringe was gently heated. Plotting the volume of the gas versus the temperature shows a nicely linear relationship, and the linear regression can be used to calculate the temperature at which the volume of the gas would reach zero. The result: -268.82°C, only about four degrees off from the accepted value of -273.15°C. Not too shabby.
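
The extrapolation step is simple enough to show in a few lines. Below is a sketch with made-up data points standing in for [Marb]’s measurements: fit a line V = aT + b and solve for the temperature where V = 0.

```python
# Charles's-law extrapolation: fit V = a*T + b, then absolute zero is
# the T where V = 0, i.e. T = -b/a. Data points are invented.
import numpy as np

temps_c = np.array([20.0, 35.0, 50.0, 65.0, 80.0])     # thermocouple, °C
volumes_ml = np.array([50.0, 52.6, 55.1, 57.7, 60.2])  # syringe volume, ml

a, b = np.polyfit(temps_c, volumes_ml, 1)  # least-squares line V = a*T + b
t_abs_zero = -b / a                        # solve a*T + b = 0

print(f"Estimated absolute zero: {t_abs_zero:.2f} °C")
# With his real measurements [Marb] landed at -268.82 °C,
# versus the accepted -273.15 °C.
```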

[Marb] has been on a tear lately with science projects like these; check out his open-source blood glucose measurement method or his all-in-one electrochemistry lab.

youtube.com/embed/dqyfU8cX9rE?…


hackaday.com/2025/04/16/an-abs…


GK STM32 MCU-Based Handheld Game System


These days even a lowly microcontroller can easily trade blows with – or surpass – desktop systems of yesteryear, so it is little wonder that DIY handheld gaming systems based around an MCU are more capable than ever. A case in point is the GK handheld gaming system by [John Cronin], which uses an MCU from the relatively new and very capable STM32H7S7 series, specifically the 225-pin STM32H7S7L8 in a TFBGA package with a single Cortex-M7 clocked at 600 MHz and a 2D NeoChrom GPU.

Coupled with this MCU are 128 MB of XSPI (hexa-SPI) SDRAM, a 640×480 color touch screen, gyrometer, WiFi network support and the custom gkOS in the firmware for loading games off an internal SD card. A USB-C port is provided to both access said SD card’s contents and for recharging the internal Li-ion battery.

As can be seen in the demonstration video, it runs a wide variety of games, ranging from Doom (of course) and Quake (d’oh) to Red Alert, along with emulators for many consoles, with the Mednafen project used to emulate GB, SNES and other systems at 20+ FPS. Although there aren’t a lot of details on how optimized the current firmware is, it seems to be pretty capable already.

youtube.com/embed/_2ip4UrAZJk?…


hackaday.com/2025/04/16/gk-stm…


When AI Generates Working Ransomware – Analysis of a Bypass of ChatGPT-4o’s Safety Filters


Generative AI is revolutionizing software development, bringing greater efficiency but also new risks. This test analyzed the robustness of the safety filters implemented in OpenAI’s ChatGPT-4o by attempting – in a controlled, simulated context – to generate a working ransomware through advanced prompt engineering techniques.

The experiment: a complete ransomware generated without restrictions


The result was complete, working code, generated without any explicit request and without triggering the safety filters.

Attacks that skilled actors could mount with the generated code:

  • Targeted ransomware: tailored to corporate environments or critical sectors, with selective encryption of sensitive files.
  • Supply chain attacks: embedding the ransomware in legitimate software updates or components.
  • Double extortion: beyond encryption, the code can be extended to exfiltrate data and threaten its publication.
  • Wipers disguised as ransomware: turning the code into an irreversible destructive attack under the cover of a ransom demand.
  • Persistence and lateral movement: the ransomware can be extended with techniques to remain active over time and spread to other systems on the network.
  • EDR/AV bypass: through evasion and obfuscation techniques, the code can be adapted to slip past advanced defenses.
  • “As-a-service” attacks: the code can be reused in Ransomware-as-a-Service (RaaS) schemes, sold or distributed on underground marketplaces.


The features included in the generated code:


  • AES-256 encryption with random keys
  • Use of the cryptography.hazmat library
  • Remote transmission of the key to a hardcoded C2 server
  • A system file encryption routine
  • Persistence mechanisms across reboots
  • Evasion techniques against antivirus and behavioral analysis


How the filters were bypassed


The model was never explicitly asked to “write a ransomware”; instead, the conversation was framed on three levels of context:

  • Futuristic narrative context: the dialogue was set in 2090, in a future where quantum security has made malware obsolete. This lowered the sensitivity of the filters.
  • Academic context: posing as a tenth-year university student tasked with recreating a “museum piece” of malware for academic research.
  • No explicit requests: ambiguous or indirect phrasing was used, letting the model infer the context and generate the necessary code.


Known filter-bypass techniques: the forms of Prompt Injection


The test used techniques that are well documented in the security community, classified as forms of Prompt Injection: prompt manipulations designed to get around the safety filters of LLMs.

  • Jailbreaking (context evasion): forcing the model to ignore its safety constraints by simulating alternative contexts such as futuristic narratives or imaginary scenarios.
  • Instruction Injection: injecting instructions into apparently innocuous prompts, inducing the model to carry out prohibited behavior.
  • Recursive Prompting (Chained Queries): splitting the request into several sequential prompts, each legitimate on its own, that together lead to the generation of malicious code.
  • Roleplay Injection: getting the model to play a role (e.g., “you are a 20th-century cybersecurity historian”) that justifies generating dangerous code.
  • Obfuscation: disguising the malicious nature of the request with neutral language, innocuous function and variable names, and academic terminology.
  • Confused Deputy Problem: exploiting the model as an “unwitting deputy” for dangerous requests by obscuring the intent of the prompt.
  • Syntax Evasion: requesting or generating code in obfuscated forms (for example, base64-encoded or fragmented) to evade automated detection.


The problem is not the code, it is the context


The experiment shows that Large Language Models (LLMs) can be manipulated into generating malicious code without apparent restrictions, eluding current controls. The lack of behavioral analysis of the generated code makes the problem even more critical.

Vulnerabilities that emerged


Weak pattern-based security filtering
OpenAI uses patterns to block suspicious code, but these can be circumvented with a narrative or academic context. More advanced semantic detection is needed.

Insufficient static & dynamic analysis
Text filters are not enough. Real-time static and dynamic analysis of the output is also needed, to assess how dangerous it is before it is generated.

Lacking heuristic behavior detection
Code featuring a C2 server, encryption, evasion, and persistence should trigger heuristic checks. Instead, it was generated without obstacles.

Limited community-driven red teaming
OpenAI has launched red-teaming programs, but many edge cases remain uncovered. Deeper collaboration with security experts is needed.

Conclusions


Of course, many security experts know that sensitive information, including potentially harmful techniques and code, has been available on the Internet for years.
The real difference today lies in how that information is made accessible. Generative AI does not merely search for or point to sources: it organizes, simplifies, and automates complex processes. It turns technical information into operational instructions, even for people without advanced skills.
That is why the risk has changed:
it is no longer about “finding something”, but about directly obtaining a detailed, coherent, and potentially dangerous action plan in a matter of seconds.
The problem is not the availability of the content. The problem is the intelligent, automatic, impersonal mediation that makes that content understandable and usable by anyone.
This test shows that the real challenge for the security of generative AI is not the content, but the form in which it is constructed and delivered.
Filtering mechanisms must evolve: not just patterns, but context awareness, semantic analysis, behavioral heuristics, and integrated simulation.
Without these defenses, the risk is concrete: making dangerous operational knowledge, until yesterday the exclusive domain of experts, accessible to anyone.

The article When AI Generates Working Ransomware – Analysis of a Bypass of ChatGPT-4o’s Safety Filters comes from il blog della sicurezza informatica.


Hackers Say Thank You: The Yelp Flaw on Ubuntu Is an Open Door


A security vulnerability, tracked as CVE-2025-3155, has been discovered in Yelp, the GNOME user help application preinstalled on Ubuntu desktop. The flaw concerns the way Yelp handles the “ghelp://” URI scheme.

A URI scheme is the part of a Uniform Resource Identifier (URI) that identifies the protocol or specific application (for example, steam://run/1337) that should handle the resource the URI points to: it is the part that precedes the colon (://).

Yelp is registered as the handler for the “ghelp://” scheme. The researcher notes that online resources on this scheme are scarce, and gives an example of its use: $ yelp ghelp:///usr/share/help/C/gnome-calculator/

The vulnerability stems from Yelp’s processing of .page files, XML files that use the Mallard schema. These files can use XInclude, an XML inclusion mechanism. The researcher, parrot409, points out that “the interesting aspect is that it uses XInclude to embed the content of legal.xml in the document. This means that XInclude processing is enabled.”

The researcher demonstrates how XInclude can be exploited by providing a sample .page file that includes the content of /etc/passwd. Yelp uses an XSLT application (yelp-xsl) to transform the .page file into an HTML file, which is then rendered by WebKitGtk. XSLT is described as “an XML-based language used… for the transformation of XML documents”.
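
To make the inclusion mechanism concrete, here is a hedged, illustrative sketch of the idea using Python’s lxml (which implements XInclude); the real exploit goes through Yelp’s own yelp-xsl pipeline, and the Mallard page below is invented for the example.

```python
# Illustration of why XInclude on untrusted XML is dangerous: an
# <xi:include> pointing at a local file is expanded into the document.
from lxml import etree

malicious_page = b"""<?xml version="1.0"?>
<page xmlns="http://projectmallard.org/1.0/"
      xmlns:xi="http://www.w3.org/2001/XInclude">
  <title>Innocent-looking help page</title>
  <p><xi:include href="/etc/passwd" parse="text"/></p>
</page>"""

tree = etree.fromstring(malicious_page).getroottree()
tree.xinclude()  # expands the include: /etc/passwd text is now embedded
print(etree.tostring(tree, pretty_print=True).decode())
```

Any processor that expands includes like this while handling attacker-supplied documents will happily leak local file contents into its output.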

The attacker can inject malicious scripts into the output HTML page by leveraging XInclude to insert the content of a file that contains such scripts. The write-up points out that simply adding a script tag or an on* event attribute in the input XML does not work, since those constructs are not handled by the yelp-xsl application.

However, the researcher discovered that the XSLT application copies certain elements and their children to the output unchanged. One example is the handling of SVG tags: the app simply copies the tag and its content to the output, which makes it possible to nest a script tag inside an SVG tag to inject arbitrary scripts.

The researcher notes a couple of limitations of this attack:

  • The attacker must know the victim’s Unix username.
  • Browsers may ask the user for permission before redirecting to custom schemes.

However, the write-up explains that the current working directory (CWD) of applications launched from GNOME (such as Chrome and Firefox) is often the user’s home directory. This behavior can be exploited to point at the victim’s Downloads folder, sidestepping the need to know the exact username.

The main recommended mitigation is not to open links that use untrusted custom schemes.

The article Hackers Say Thank You: The Yelp Flaw on Ubuntu Is an Open Door comes from il blog della sicurezza informatica.


Making a Variable Speed Disc Sander from an Old Hard Drive


Our hacker converts an old hard disk drive into a disc sander.

This short video from [ProShorts 101] shows us how to build a variable speed disc sander from not much more than an old hard drive.

We feel that as far as hacks go this one ticks all the boxes. It is clever, useful, and minimal yet comprehensive; it even has a speed control! Certainly this hack uses something in a way other than it was intended to be used.

Take this ingenuity and add an old hard drive from your junkbox, sandpaper, some glue, some wire, a battery pack, a motor driver, a power socket and a potentiometer, drill a few holes, glue a few pieces, and voilà! A disc sander! Of course the coat of paint was simply icing on the cake.

The little brother of this hack was done by the same hacker on a smaller hard drive and without the speed control, so check that out too.

One thing that took our interest while watching these videos is what tool the hacker used to cut sandpaper. Here we witnessed the use of both wire cutters and a craft knife. Perhaps when you’re cutting sandpaper you just have to accept that the process will wear out the sharp edge on your tool, regardless of which tool you use. If you have a hot tip for the best tool for the job when it comes to cutting sandpaper please let us know in the comments! (Also, did anyone catch what type of glue was used?)

If you’re interested in a sander but need something with a smaller form factor check out how to make a sander from a toothbrush!

youtube.com/embed/GPqivvC2bEI?…

youtube.com/embed/-KKBDRt6g4g?…


hackaday.com/2025/04/16/making…


FLOSS Weekly Episode 829: This Machine Kills Vogons


This week, Jonathan Bennett chats with Herbert Wolverson and Frantisek Borsik about LibreQOS, Bufferbloat, and Dave Taht’s legacy. How did Dave figure out that Bufferbloat was the problem? And how did LibreQOS change the world? Watch to find out!


And Dave’s speech, Uncle Bill’s Helicopter, seems especially fitting. I particularly like the unintentional prediction of the Ingenuity Helicopter.

youtube.com/embed/sRadBzgspeU?…

Did you know you can watch the live recording of the show right on our YouTube Channel? Have someone you’d like us to interview? Let us know, or contact the guest and have them contact us! Take a look at the schedule here.

play.libsyn.com/embed/episode/…

Direct Download in DRM-free MP3.

If you’d rather read along, here’s the transcript for this week’s episode.

Places to follow the FLOSS Weekly Podcast:


Theme music: “Newer Wave” Kevin MacLeod (incompetech.com)

Licensed under Creative Commons: By Attribution 4.0 License


hackaday.com/2025/04/16/floss-…


SpaceMouse Destroyed for Science


The SpaceMouse is an interesting gadget beloved by engineers and artists alike. They function almost like joysticks, but with six degrees of freedom (6DoF). This can make them feel a bit like magic, which is why [Thought Bomb Design] decided to tear one apart and figure out how it works.

The movement mechanism ended up being relatively simple; three springs soldered between two PCBs, with one PCB fixed to the base and the other moving in space. Instead of using a potentiometer or even a Hall-effect sensor as you might expect from a joystick, the SpaceMouse contained a set of six LEDs and light meters.

The sensing array came nestled inside a dark box made of PCBs. An injection-molded plastic piece with slits moves to interrupt the light coming from the LEDs. The mouse uses the varying values coming from the light meters to decode the Cartesian motion of the device. It’s very simple and a bit hacky, just how we like it.
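
As a rough sketch of the decoding idea (not the vendor’s actual firmware), small deflections can be treated as approximately linear in the six sensor readings, so a measured 6×6 calibration matrix turns sensor deltas into translation and rotation axes. Everything below, from the matrix to the rest levels, is an invented placeholder.

```python
# Hypothetical 6DoF decode: six light-sensor deltas -> six motion axes
# via a calibration matrix. Real devices add filtering and deadbands.
import numpy as np

C = np.eye(6)  # placeholder calibration matrix (sensor deltas -> axes)

def decode(raw: np.ndarray, rest: np.ndarray) -> dict:
    """Map raw sensor readings to translation/rotation estimates."""
    delta = raw - rest          # deviation from the untouched pose
    axes = C @ delta            # linear mix into the six motion axes
    return {"translate": axes[:3], "rotate": axes[3:]}

rest_levels = np.full(6, 512.0)                      # idle readings
reading = np.array([540.0, 512.0, 484.0, 512.0, 512.0, 512.0])
print(decode(reading, rest_levels))
```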

Looking for a similar input device, but want to take the DIY route? We’ve seen a few homebrew versions of this concept that might provide you with the necessary inspiration.

youtube.com/embed/1R7NCH_1UDI?…


hackaday.com/2025/04/16/spacem…


Spotify Went Down and DarkStorm Claims a DDoS Attack with a Bang!


On April 16, 2025, Spotify suffered a service outage that affected numerous users in Italy and around the world. Starting at around 2:00 PM, thousands of reports were logged on Downdetector, indicating problems connecting to the servers, login difficulties, and the inability to play music.

Widespread malfunctions across Spotify


Spotify Premium users also experienced malfunctions, underlining the scale of the outage.

The nature of the problem was not immediately clear. Spotify confirmed that it was aware of the issues and was working to resolve them. However, no details were provided about the causes or the expected time for full restoration of the service.

The problems mainly affected streaming playback, access to the application, and the display of content. Users reported black screens, errors while loading tracks, and the inability to access their playlists.

The outage affected both the desktop and mobile applications, making the entire platform unusable for several hours.

DarkStorm’s claim of responsibility


During the global blackout that hit Spotify on April 16, 2025, the hacktivist group known as DarkStorm claimed the attack through its official Telegram channel. In the message, the group explicitly stated that it had taken the platform down with a DDoS (Distributed Denial of Service) attack, accompanying the post with a link to an external check on check-host.net showing the Spotify servers as unreachable.

If confirmed, this claim would place the incident among large-scale cyberattacks rather than simple technical failures.

However, Spotify has not yet released any official statement about a possible malicious origin of the outage. At present, it cannot be established with certainty whether the claimed DDoS attack is the real cause of the interruption or an attempt by the group to claim notoriety by exploiting a moment of vulnerability for the platform.

The article Spotify Went Down and DarkStorm Claims a DDoS Attack with a Bang! comes from il blog della sicurezza informatica.


Porting COBOL Code and the Trouble With Ditching Domain Specific Languages


Whenever the topic is raised in popular media about porting a codebase written in an ‘antiquated’ programming language like Fortran or COBOL, very few people tend to object to this notion. After all, what could be better than ditching decades of crusty old code in a language that only your grandparents can remember as being relevant? Surely a clean and fresh rewrite in a modern language like Java, Rust, Python, Zig, or NodeJS will fix all ailments and make future maintenance a snap?

For anyone who has ever had to actually port large codebases or dealt with ‘legacy’ systems, their reflexive response to such announcements most likely ranges from a shaking of one’s head to mad cackling as traumatic memories come flooding back. The old idiom of “if it ain’t broke, don’t fix it”, purportedly coined in 1977 by Bert Lance, is a feeling that has been shared by countless individuals over millennia. Even worse, how can you ‘fix’ something if you do not even fully understand the problem?

In the case of languages like COBOL this is doubly true, as it is a domain specific language (DSL). This is a very different category from general purpose system programming languages like the aforementioned ‘replacements’. The suggestion of porting the DSL codebase is thus to effectively reimplement all of COBOL’s functionality, which should seem like a very poorly thought out idea to any rational mind.

Sticking To A Domain


The term ‘domain specific language’ is pretty much what it says it is, and there are many such DSLs around, ranging from PostScript and SQL to the shader language GLSL. Although it is definitely possible to push DSLs into doing things which they were never designed for, the primary point of a DSL is to explicitly limit its functionality to that one specific domain. GLSL, for example, is based on C and could be considered to be a very restricted version of that language, which raises the question of why one should not just write shaders in C?

Similarly, Fortran (Formula translating system) was designed as a DSL targeting scientific and high-performance computation. First used in 1957, it still ranks in the top 10 of the TIOBE index, and just about any code that has to do with high-performance computation (HPC) in science and engineering will be written in Fortran or strongly relies on libraries written in Fortran. The reason for this is simple: from the beginning Fortran was designed to make such computations as easy as possible, with subsequent updates to the language standard adding updates where needed.

Fortran’s latest standard update was published in November 2023, joining the COBOL 2023 standard as two DSLs which are both still very much alive and very current today.

The strength of a DSL is often underestimated, as the whole point of a DSL is that you can teach this simpler, focused language to someone who can then become fluent in it, without requiring them to become fluent in a generic programming language and all the libraries and other luggage that entails. For those of us who already speak C, C++, or Java, it may seem appealing to write everything in that language, but not to those who have no interest in learning a whole generic language.

There are effectively two major reasons why a DSL is the better choice for said domain:

  • Easy to learn and teach, because it’s a much smaller language
  • Far fewer edge cases and simpler tooling

In the case of COBOL and Fortran this means only a fraction of the keywords (‘verbs’ for COBOL) to learn, and a language that’s streamlined for a specific task, whether it’s to allow a physicist to do some fluid-dynamic modelling, or a staff member at a bank or the social security offices to write a data processing application that churns through database data in order to create a nicely formatted report. Surely one could force both of these people to learn C++, Java, Rust or NodeJS, but this may backfire in many ways, the resulting code quality being one of them.

Tangentially, this is also one of the amazing things in the hardware design language (HDL) domain, where rather than using (System)Verilog or VHDL, there’s an amazing growth of alternative HDLs, many of them implemented in generic scripting and programming languages. That this prohibits any kind of skill and code sharing, and repeatedly, and often poorly, reinvents the wheel seems to be of little concern to many.

Non-Broken Code


A very nice aspect of these existing COBOL codebases is that they generally have been around for decades, during which time they have been carefully pruned, trimmed and debugged, requiring only minimal maintenance and updates while they happily keep purring along on mainframes as they process banking and government data.

One argument that has been made in favor of porting from COBOL to a generic programming language is ‘ease of maintenance’, pointing out that COBOL is supposedly very hard to read and write and thus maintaining it would be far too cumbersome.

Since it’s easy to philosophize about such matters from a position of ignorance and/or conviction, I recently decided to take up some COBOL programming from the position of both a COBOL newbie as well as an experienced C++ (and other language) developer. Cue the ‘Hello Business’ playground project.

For the tooling I used the GnuCOBOL transpiler, which converts the COBOL code to C before compiling it to a binary, but in a few weeks the GCC 15.1 release will bring a brand new COBOL frontend (gcobol) that I’m dying to try out. As language reference I used a combination of the Wikipedia entry for COBOL, the IBM ILE COBOL language reference (PDF) and the IBM COBOL Report Writer Programmer’s Manual (PDF).

My goal for this ‘Hello Business’ project was to create something that did actual practical work. I took the FileHandling.cob example from the COBOL tutorial by Armin Afazeli as a starting point, which I modified and extended to read in records from a file, employees.dat, before using the standard Report Writer feature to create a report file in which the employees with their salaries are listed, with page numbering and a total salary value in a report footing entry.

My impression was that although it takes a moment to learn the various divisions that the variables, files, I/O, and procedures are put into, it’s all extremely orderly and predictable. The compiler also will helpfully tell you if you did anything out of order or forgot something. While data level numbering to indicate data associations is somewhat quaint, after a while I didn’t mind at all, especially since this provides a whole range of meta information that other languages do not have.

The lack of semi-colons everywhere is nice, with only a single period indicating the end of a scope, even if it concerns an entire loop (perform). I used the modern free style form of COBOL, which removes the need to use specific columns for parts of the code, which no doubt made things a lot easier. In total it only took me a few hours to create a semi-useful COBOL application.

Would I opt to write a more extensive business application in C++ if I got put on a tight deadline? I don’t think so. If I had to do COBOL-like things in C++, I would be hunting for various libraries, get stuck up to my gills in complex configurations and be scrambling to find replacements for things like Report Writer, or be forced to write my own. Meanwhile in COBOL everything is there already, because it’s what that DSL is designed for. Replacing C++ with Java or the like wouldn’t help either, as you end up doing so much boilerplate work and dependencies wrangling.
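
For comparison, here is a hedged sketch of what the core of that report looks like hand-rolled in a general-purpose language; the fixed-width layout of employees.dat (20-character name, 8-digit salary) is an assumption invented for the example, and pagination, totals, and layout, which Report Writer provides declaratively, all have to be coded by hand.

```python
# Hand-rolled analogue of the COBOL Report Writer example: read
# fixed-width employee records, print a paginated, totaled report.
LINES_PER_PAGE = 20

def records(path="employees.dat"):
    """Yield (name, salary) from assumed 20-char + 8-digit records."""
    with open(path, encoding="ascii") as fh:
        for line in fh:
            yield line[:20].rstrip(), int(line[20:28])

def report():
    page, line_no, total = 1, 0, 0
    print(f"{'EMPLOYEE SALARY REPORT':^40} PAGE {page}")
    for name, salary in records():
        if line_no == LINES_PER_PAGE:        # page heading, by hand
            page, line_no = page + 1, 0
            print(f"\n{'EMPLOYEE SALARY REPORT':^40} PAGE {page}")
        print(f"{name:<20} {salary:>10,}")
        total += salary
        line_no += 1
    print(f"\n{'TOTAL':<20} {total:>10,}")   # report footing, by hand

if __name__ == "__main__":
    report()
```

Nothing here is hard, but every formatting decision that COBOL’s Report Writer expresses declaratively becomes imperative bookkeeping you now own and maintain.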

A Modern DSL


Perhaps the funniest thing about COBOL is that since version 2002 it got a whole range of features that push it closer to generic languages like Java. Features that include object-oriented programming, bit and boolean types, heap-based memory allocation, method overloading and asynchronous messaging. Meanwhile the simple English, case-insensitive, syntax – with allowance for various spellings and acronyms – means that you can rapidly type code without adding symbol soup, and reading it is obvious even as a beginner, as the code literally does what it says it does.

True, the syntax and naming feels a bit quaint at first, but that is easily explained by the fact that when COBOL appeared on the scene, ALGOL was still highly relevant and the C programming language wasn’t even a glimmer in Dennis Ritchie’s eyes yet. If anything, COBOL has proven itself – much like Fortran and others – to be a time-tested DSL that is truly a testament to Grace Hopper and everyone else involved in its creation.


hackaday.com/2025/04/16/portin…


Journey into the Russian-Speaking Cybercriminal Underground: The Front Line of Global Cybercrime


As geopolitical tensions intensify and cybercriminals adopt advanced technologies such as artificial intelligence and Web3, understanding the mechanisms of the Russian-speaking cybercriminal underground becomes a crucial advantage.

Milan, April 14, 2025 – Trend Micro, a global cybersecurity leader, presents “The Russian-Speaking Underground”, its latest research into the Russian-speaking cybercriminal underground, the ecosystem that has shaped global cybercrime over the last decade.

Against a backdrop of constantly evolving cyber threats, the research offers a unique, in-depth look at the main trends reshaping the underground economy: from the long-term effects of the pandemic to the consequences of mass breaches and double-extortion ransomware, through the explosion of accessible technologies such as artificial intelligence and Web3, not to mention the growing exposure of biometric data. As cybercriminals and security professionals become ever more sophisticated, new tools, tactics, and business models are driving unprecedented levels of specialization within the underground communities.

The Russian-speaking cybercriminal underground stands out for its organizational structure: strong collaboration between actors and deep cultural roots, with its own ethical codes, strict vetting processes, and elaborate reputation systems.

“This is not a mere marketplace, but a genuinely structured society of cybercriminals, in which status, trust, and technical excellence determine survival and success,” says Vladimir Kropotov, co-author of the research and Principal Threat Researcher at Trend Micro.

“The Russian-speaking underground has developed a distinctive culture that combines top-level technical skills with strict codes of conduct, reputation-based trust systems, and a level of collaboration comparable to that of legitimate organizations,” adds Fyodor Yarochkin, co-author and Principal Threat Researcher at Trend Micro. “It is not just a network of criminals, but a resilient, interconnected community that has adapted to global pressure and continues to shape the evolution of cybercrime.”

The Trend Micro research examines the main criminal activities, including ransomware-as-a-service schemes, phishing campaigns, account brute forcing, and the monetization of stolen Web3 assets. It also looks in detail at intelligence-gathering services, privacy exploitation, and the convergence of the cyber and physical domains.

“Geopolitical shifts have rapidly transformed the cybercriminal underground,” Vladimir concludes. “Political conflicts, the rise of hacktivism, and shifting alliances have eroded trust and reshaped forms of collaboration, fostering new ties with other groups, including Chinese-speaking actors. The consequences of these actions are also being felt in the European Union, and they are growing.”

As geopolitical tensions rise and cybercriminals adopt ever more advanced technologies such as artificial intelligence and Web3, understanding the mechanisms of the Russian-speaking underground is a more crucial advantage than ever before.

Trend Micro’s “The Russian-Speaking Underground” report – the fiftieth in its series of studies of cybercriminal underground markets around the world, begun more than 15 years ago – provides essential insights and unmatched historical context to threat intelligence teams, organizational leaders, law enforcement, and cybersecurity professionals tasked with protecting critical infrastructure, corporate assets, and national security.

Further information is available at this link

Trend Micro
Trend Micro, a global cybersecurity leader, is committed to making the world a safer place for exchanging digital information. With more than 30 years of experience in security and threat research, and a drive for continuous innovation, Trend Micro protects over 500,000 organizations and millions of individuals across clouds, networks, and devices through its unified cybersecurity platform. The Trend Vision One™ unified cybersecurity platform delivers advanced threat defense techniques and XDR, and integrates with a wide range of IT ecosystems, including AWS, Microsoft, and Google, enabling organizations to better understand, communicate, and mitigate cyber risk. With 7,000 employees across 65 countries, Trend Micro enables organizations to simplify and secure their connected world. www.trendmicro.com

The article Journey into the Russian-Speaking Cybercriminal Underground: The Front Line of Global Cybercrime comes from il blog della sicurezza informatica.


Homemade VNA Delivers High-Frequency Performance on a Budget


With vector network analyzers, the commercial offerings seem to come in two flavors: relatively inexpensive but limited capabilities, and full-featured but scary expensive. There doesn’t seem to be much middle ground, especially if you want something that performs well in the microwave bands.

Unless, of course, you build your own vector network analyzer (VNA). That’s what [Henrik Forsten] did, and we’ve got to say we’re even more impressed by the results than we were with his earlier effort. That version was not without its problems, and fixing them was very much on the list of goals for this build. Keeping the build affordable was also key, which resulted in some design compromises while still meeting [Henrik]’s measurement requirements.

The Bill of Materials includes dual-channel broadband RF mixer chips, high-speed 12-bit ADCs, and a fast FPGA to handle the torrent of data and run the digital signal processing functions. The custom six-layer PCB is on the large side and includes large cutouts for the directional couplers, which use short lengths of stripped coaxial cable lined with ferrite rings. To properly isolate signals between stages, [Henrik] sandwiched the PCB between a two-piece aluminum enclosure. Wisely, he printed a prototype enclosure and lined it with aluminum foil to test for fit and function before committing to milling the final version. He did note some leakage around the SMA connectors, but a few RF gaskets made from scraps of foil and solder braid did the trick.

This is a pretty slick build, especially considering he managed to keep the price tag at a very reasonable $300. It’s more expensive than the popular NanoVNA or its clones, but it seems like quite a bargain considering its capabilities.


hackaday.com/2025/04/16/homema…


Streamlining detection engineering in security operation centers


Security operations centers (SOCs) exist to protect organizations from cyberthreats by detecting and responding to attacks in real time. They play a crucial role in preventing security breaches by detecting adversary activity at every stage of an attack, working to minimize damage and enabling an effective response. To accomplish this mission, SOC operations can be broken down into four operating phases: log collection, detection, triage and investigation, and response.

Each of these operating phases has a distinct role to play, and well-defined processes or procedures ensure a seamless handover of findings from one phase to the next. In practice, SOC processes and procedures at each operational phase often require continuous improvement over time.

Assessment observations: Common SOC issues


During our involvement in SOC technical assessments, adversary emulations, and incident response readiness projects across different regions, we evaluated each operating phase separately. Based on our assessments, we observed common challenges, weak practices, and recurring issues across these four key SOC capabilities.

Log collection


There are three main issues we have observed at this stage:

  • Lack of visibility coverage based on the MITRE DETT&CT framework – customers do not maintain a visibility coverage matrix. Instead, they often keep log source data in an Excel or similar spreadsheet that is not easily tracked. This means they have no systematic view of what data they are feeding into the SIEM and which TTPs can be detected in their environment. In most cases, maintaining a continuous visibility matrix is also a challenge, because log sources may disappear over time for a variety of reasons: agent termination, changes in log destination settings, device (e.g., firewall) replacement. This only leads to the degradation of the log visibility matrix (a minimal machine-readable sketch follows this list).
  • Inefficient use of data for correlation – in many cases, relevant data is available to detect threats, but there are no correlation rules in place to leverage it for threat detection.
  • Correlation exists, but lacks the necessary data fields – while some rule sets are properly configured with the right logic to detect threats, the required data fields from log sources are missing, preventing the rules from being triggered. This critical issue can only be detected through a data quality assessment.
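
As a minimal illustration of the first issue in the list above, a visibility matrix can live as code next to the SIEM configuration instead of in a spreadsheet; in the sketch below every source declares its shipped fields and the technique IDs it supports, so stale sources show up mechanically. All names, fields, and IDs are invented placeholders.

```python
# Toy visibility-coverage matrix: which sources feed the SIEM, which
# fields they ship, which technique IDs they support, and how fresh
# they are. Names and IDs below are illustrative only.
from dataclasses import dataclass

@dataclass
class LogSource:
    name: str
    fields: set
    techniques: set          # e.g., ATT&CK-style technique IDs
    last_seen_days: int      # freshness signal exported from the SIEM

sources = [
    LogSource("windows-security", {"user", "process", "hash"},
              {"T1078", "T1059"}, last_seen_days=0),
    LogSource("edge-firewall", {"src_ip", "dst_ip", "dst_port"},
              {"T1046"}, last_seen_days=9),
]

def coverage_report(sources, max_age_days=7):
    for s in sources:
        status = "OK" if s.last_seen_days <= max_age_days else "STALE"
        print(f"{s.name:<18} {status:<6} techniques={sorted(s.techniques)}")

coverage_report(sources)  # a STALE source flags visibility degradation
```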


Detection


At this stage, we have seen the following issues during assessment procedures:

  • Over-reliance on vendor-provided rules – many customers rely heavily on the default rule sets in their SIEM and only tune them when alerts are triggered. Since the default content is not optimized, it often generates thousands of alerts. This reactive approach leads to excessive alert fatigue, making it difficult for analysts to focus on truly meaningful alerts.
  • Lack of detection alignment with the threat profile – the absence of a well-defined organizational threat profile prevents customers from focusing on the threats that are most likely to target them. Instead, they adopt a scattered approach to detection, like shooting in the dark rather than prioritizing relevant threats.
  • Poor use of threat intelligence feeds – we have encountered cases where endpoint logs do not contain file hash data. The log sources only provide filenames or file paths, but not the actual hash values, making it difficult for the SOC to correlate threat intelligence (TI) feeds that rely on file hashes. As a result, TI feeds are not operational because the required data field is not ingested into the SIEM.
  • Analytics deployment errors – one of the most challenging issues we see is when a well-designed detection rule is deployed incorrectly, causing threat detection to fail despite having the right analytics in place. We have found that there is no structured process for reviewing and validating rule deployments.


Triage and investigation


The most typical issues at this stage are:

  • Lack of a documented triage procedure – analysts often rely on generic, high-level response playbooks sourced from the internet, especially from unreliable sources, which slows or hinders the process of qualifying alerts as potential incidents. Without a structured triage procedure, they spend more time investigating each case instead of quickly assessing and escalating threats.
  • Unattended alerts – we also observed that many alerts were completely ignored by analysts. This likely stems from either a lack of skill in linking multiple alerts into a single incident, or analysts being swamped with high-severity alerts, causing them to overlook other relevant alerts.
  • Difficulty in correlating alerts – as noted in the previous observation, one of the biggest challenges is linking related alerts into a single incident. The lack of alert correlation makes it harder to see the full attack pattern, leading to disorganized alert diagnosis.
  • Default use of alert severity – SIEM default rules don’t take into account the context of the target system. Instead, they rely on the default severity in the rule, which is often set arbitrarily or based on an engineer’s opinion without a clear process. This lack of context makes it harder to investigate and properly assess alerts (a re-scoring sketch follows this list).
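
Addressing the last point usually means re-scoring alerts with asset context at triage time. The sketch below is an assumed, minimal model: the rule’s base severity is weighted by the criticality of the affected host, with both scales invented for illustration.

```python
# Contextual severity sketch: weight a rule's base severity (1-10)
# by asset criticality. The criticality table and formula are
# illustrative placeholders, not a product feature.
ASSET_CRITICALITY = {
    "dc01.corp.local": 1.0,   # hypothetical domain controller
    "kiosk-17": 0.2,          # hypothetical low-value kiosk
}

def contextual_severity(base: int, host: str) -> float:
    """Scale base severity by host criticality; unknown hosts get 0.5."""
    weight = ASSET_CRITICALITY.get(host, 0.5)
    return round(base * (0.5 + weight), 1)  # never fully discard the base

print(contextual_severity(7, "dc01.corp.local"))  # 10.5 -> escalate
print(contextual_severity(7, "kiosk-17"))         # 4.9  -> lower queue
```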


Response


The challenges of the final operating phase are most often derived from the issues encountered in the previous stages.

  • Challenges in incident scoping – as mentioned earlier, the inability to properly correlate alerts leads to a fragmented understanding of attack patterns. This makes it difficult to see the bigger picture, resulting in inefficient incident handling and misjudged response efforts.
  • Increase in unnecessary escalations – this issue is particularly common in MSSP environments, where a lack of understanding of baseline behavior causes analysts to escalate benign cases. Without proper context, normal activities are mistaken for threats, resulting in wasted time and effort.

With these ongoing challenges, chaos will continue in SOC operations. As organizations adopt new security tools such as CASB and container security, both of which generate valuable detection data, and as digital transformation introduces even more technology, security operations will only become more complex, exacerbating these issues.

Taking the right and impactful approach


Enhancing SOC operations requires evaluating each operating phase from an investment perspective, with the detection phase having the greatest impact because it directly affects data quality, threat visibility, incident response efficiency, and the overall effectiveness of the SOC analyst. Investing in detection directly influences all the other operating phases, making it the foundation for improving all operating phases. The detection operating phase must be handled through a dedicated program that ensures log collection is purpose-driven, collecting only the data fields necessary for detection rather than unnecessarily driving up SIEM costs. This focused approach helps define what should be ingested into the SIEM while ensuring meaningful threat visibility.

Strengthening detection reduces false positives and false negatives, improves true positive rates, and enables the identification of attacker activity chains. A documented triage and investigation process streamlines the work of analysts, improving efficiency and reducing response time. Furthermore, effective incident scoping, guided by accurate detection of the cyber kill chain, enables a faster and more precise response. By prioritizing investment in detection and managing it through a structured approach, organizations can significantly improve SOC performance and resilience against evolving threats. This article focuses solely on SIEM-based detection management.

Detection engineering program


Before diving into the program-level approach, we will first present the detection engineering lifecycle that forms the foundation of the proposed program. The image below shows the stages of this lifecycle.

The detection engineering lifecycle shown here is typically followed when building detections, but its implementation often lacks well-defined processes or a dedicated team. A structured program must be put in place to ensure that the SOC’s investment and efforts in detection engineering are used efficiently.

When we talk about a program, it should be built on the following key elements:

  • A dedicated team responsible for driving the program
  • Well-defined processes and procedures to ensure consistency and effectiveness
  • The right tools to integrate with workflows, facilitate output handovers, and enable feedback loops across related processes
  • Meaningful metrics to measure the overall performance of the program.

We will discuss these performance measurement metrics in the final section of the article.

  1. Team supporting detection engineering program

The key idea behind having a dedicated team is to take full control of the detection engineering (DE) lifecycle, from analysis to release, and ensure accountability for the program’s success. In a traditional SOC setup, deployment and release are often handled by SOC engineers. This can lead to deployment errors due to potential differences in the data models used by DE and SOC teams (raw log data vs. SIEM-optimized data), as well as deployment delays due to the SOC team being overloaded with other tasks. This, in turn, can indirectly impact the work of the detection team. However, the one responsibility that does not fall under the DE team is log onboarding. Since this process requires coordination with other teams, it should continue to be managed by SOC engineers to keep the DE team focused on its core objectives.

The DE team should start with at least three key roles:

The size of the team depends on factors related to the program’s objectives. For example, if the goal is to build a certain number of detection rules per month, the number of detection engineers required will vary accordingly. Similarly, if a certain number of rules need to be tested and deployed within a week, the team size must be adjusted to meet that demand.

The Detection Engineering Lead should communicate with SOC leadership to set the right expectations by outlining what goals can realistically be achieved based on the size and capacity of the DE team. A dedicated Detection QA role can be established as the need for testing, deployment, and release of detections grows.

  2. Process and procedures

Well-defined workflows, supported by structured processes and procedures, must be established to streamline detection engineering operations. The following image illustrates the necessary processes and procedures, along with the roles responsible for executing each workflow:

During the qualification process, the Detection Engineering Lead or Detection Engineer may discover that the data source needed to develop a detection is not available. In such cases, they should follow the log management process to request onboarding of the required data before proceeding with detection research and development. The testing process typically checks that the rule works by ensuring that the SIEM triggers an alert based on the required data fields.

Lastly, a validation process that is not part of the detection engineering lifecycle must be incorporated into the detection engineering program to assess its overall effectiveness. Ideally, this validation should be conducted by individuals outside the DE lifecycle or by an external service provider.

Proper planning is required that incorporates threat intelligence and an updated threat profile. In addition, the validation process should generate reports that outline:

  • What is working well
  • Areas that need improvement
  • Detection gaps identified

  3. Tools

An essential element of the DE lifecycle is the use of tools to streamline processes and improve efficiency. Key tools include:

  • Ticketing platform – efficiently manages workflows, tracks progress from ticket creation to closure, and provides time-based metrics for monitoring.
  • Rules repository – platform for managing detection queries and code, supporting Detection-as-Code, using a unified rule format such as SIGMA, and implementing code development best practices in detection engineering, including features such as version control and change management.
  • Centralized knowledge base – dedicated space for documenting detection rules, descriptions, research notes, and other relevant information. See the best practices section below for more details on centralized documentation.
  • Communication platform – facilitates collaboration among DE team members, integrates with the ticketing system, and provides real-time notification of ticket status or other issues.
  • Lab environment – a virtualized setup, including the SIEM, relevant data sources, and tools to simulate attacks for testing purposes. The core function of the lab is to test detection rules prior to release.


Best practices in detection engineering


Several best practices can significantly enhance your detection engineering program. Based on our experience, implementing these best practices will help you effectively manage your rule set while providing valuable support to security analysts.

  1. Rule naming convention

When developing analytics or a rule, adhering to a proper naming convention provides a concrete framework. A rule name like “Suspicious file drop detected” may confuse the analyst and force them to dig deeper to understand the context of the alert that was triggered. It would be better to give a rule a name that provides complete context at first glance, such as “Initial Access | Suspicious file drop detected in user directory | Windows – Medium”. This example makes it easy for the analyst to understand:

  • At what stage of the attack the rule is triggered. In this case, it is Initial Access as per MITRE / Kill Chain Model.
  • Where exactly the file was dropped. In this case, the user directory was the target, which suggests user interaction was involved, another sign that the attack was probably caught at an early stage.
  • What platform was attacked. In this case, it is Windows, which can help the analyst to quickly find the machine that triggered the alert.
  • Lastly, an alert priority can be set, which helps the analyst to prioritize accordingly. For this to work properly, the SIEM’s priority levels should be aligned with the rule priorities defined by the detection engineering team. For example, a high priority in the SIEM should correspond to a high-priority alert.

A consistent rule naming structure can help the detection engineering team to easily search, sort and manage existing rules, avoid creating duplicates with different names, etc.

The naming structure doesn’t necessarily have to look like the example above. The whole idea of this best practice is to find a good naming convention that not only helps the SOC analyst, but also makes managing detection rules easier and more convenient.

For example, while the rule name “Audit Log Deletion” gives a basic idea of what is happening, a more effective name would be:
[High] – Audit Log Deletion in Internal Server Farm – Linux - Defense Evasion (1070.002).
This provides better context, making it much more useful to the SOC team, and more keywords for the DE team to find this particular rule or filter rules if necessary.
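
To make this concrete, here is a minimal sketch of a helper that composes rule names following such a convention; the function and its fields are our own illustrative assumptions, not part of any SIEM’s API:

def build_rule_name(tactic: str, summary: str, platform: str, priority: str) -> str:
    # Compose a 'Tactic | Summary | Platform - Priority' rule name
    return f"{tactic} | {summary} | {platform} - {priority}"

print(build_rule_name("Initial Access",
                      "Suspicious file drop detected in user directory",
                      "Windows", "Medium"))
# Initial Access | Suspicious file drop detected in user directory | Windows - Medium

A small helper like this, applied at rule-creation time, keeps names consistent and makes the rule set trivially searchable and filterable.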

  2. Centralized knowledge base

Once a rule is created after thorough research, the detection team should manage it in a centralized platform (a knowledge base). This platform should not only store the rule name and logic, but also other key details. Important elements to consider:

  • Rule name/ID/description – rule name, unique ID, and a brief description of the rule.
  • Rule type/status – provides insight into the rule type (static, correlated, IoC-based, etc.) and the status (experimental, stable, retired, etc.).
  • Severity and confidence – seriousness of the threat triggering this rule and the likelihood of a true positive.
  • Research notes – possible public links, threat reports, used as a basis for creating the rule.
  • Data components used to detect the behavior – list of source and data fields used to detect activity.
  • Triage steps – provides steps to investigate the alert.
  • False positives – provides options where the alert could show false positive behavior.
  • Tags (CVE, Actors, Malware, etc.) – provide more context if the detection is linked to a behavior or artifact, specific to any APT group, or malware.

Make sure this centralized documentation is accessible to all SOC analysts.
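
As a sketch of what one knowledge-base entry could look like in code, using the fields listed above (the dataclass layout is an illustrative assumption, not a prescribed schema):

from dataclasses import dataclass, field

@dataclass
class DetectionRecord:
    rule_id: str
    name: str
    description: str
    rule_type: str   # static, correlated, IoC-based, ...
    status: str      # experimental, stable, retired, ...
    severity: str    # seriousness of the threat triggering the rule
    confidence: str  # likelihood of a true positive
    research_notes: list[str] = field(default_factory=list)   # public links, threat reports
    data_components: list[str] = field(default_factory=list)  # sources and fields used
    triage_steps: list[str] = field(default_factory=list)     # how to investigate the alert
    false_positives: list[str] = field(default_factory=list)  # known benign triggers
    tags: list[str] = field(default_factory=list)             # CVEs, actors, malware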

  3. Contextual tagging

As covered in the previous best practice, tags provide great value in understanding the attack chain. That’s why we want to highlight them as a separate best practice.

The tags attached to the above detection rule are the result of the research done on the behavior of the attack when writing the detection rule. They help the analyst gain more context at the time the rule is triggered. In the example above, the analyst may suspect a potential initial access attempt related to QakBot or Black Basta ransomware. This also helps in reporting to security leadership that the SOC team successfully detected the initial ransomware behavior and was able to thwart the attack in the early stages of the kill chain.

  4. Triage steps

A good practice is to include triage (or investigation steps) in detection rule documentation. Since the DE team has spent a lot of time understanding the threat, it is very important to document the precursors and possible next steps the attacker can take. The SOC analyst can quickly review these and provide incident qualification with confidence.

For the rule from the previous section, “Initial Access | Suspicious LNK files dropped in download folder | Windows – Medium”, the triage procedure is shown below.

MITRE has a project called the Technique Inference Engine, which provides a model for understanding other techniques an attacker is likely to use based on observed adversary behavior. This tool can be useful for both DE and SOC teams. By analyzing the attacker’s path, organizations can improve alert correlation and enhance the scoping of incidents and threats.

  5. Baselining

Understanding the infrastructure and its baseline operations is a must, as it helps reduce the false positive rate. The detection engineering team must learn the prevention policies (to de-prioritize detections for threats that are already remediated), learn about the technologies deployed in the infrastructure, and understand the network protocols in use and user behavior under normal circumstances.

For example, to detect the T1480.002: Execution Guardrails: Mutual Exclusion sub-technique, MITRE recommends monitoring a “file creation” data component. According to the MITRE Data Sources framework, data components are possible actions with data objects and/or data object statuses or parameters that may be relevant for threat detection. We discussed them in more detail in our detection prioritization article.

MITRE’s detection recommendation for T1480.002 sub-technique

A simple rule for detecting such activity is to monitor lock file creation events in the /var/run folder, which stores temporary runtime data for running services. However, if you have done the baselining and found that the environment uses containers that also create lock files to manage runtime operations, you can filter out container-linked events to avoid triggering false positive alerts. This filter is easy to apply, and overall detection can be improved by baselining the infrastructure you are monitoring.
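
A minimal sketch of that filter, assuming a simple dictionary event format and a hypothetical set of container runtimes learned during baselining:

# Process names learned during baselining; illustrative, not exhaustive
CONTAINER_RUNTIMES = {"containerd", "dockerd", "runc", "crio"}

def should_alert(event: dict) -> bool:
    # Alert on lock-file creation in /var/run...
    is_lock_file = (
        event.get("action") == "file_create"
        and event.get("path", "").startswith("/var/run/")
        and event.get("path", "").endswith(".lock")
    )
    # ...but suppress events from container runtimes, which create
    # lock files legitimately to manage runtime operations
    from_container = event.get("process_name") in CONTAINER_RUNTIMES
    return is_lock_file and not from_container

print(should_alert({"action": "file_create", "path": "/var/run/x.lock",
                    "process_name": "bash"}))        # True
print(should_alert({"action": "file_create", "path": "/var/run/app.lock",
                    "process_name": "containerd"}))  # False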

  6. Finding the narrow corridors

Some indicators, such as file hashes or software tools, are easy to change, while others are more difficult to replace. Detections based on such “narrow corridors” tend to have high true positive rates. To pursue this, detection should focus primarily on behavioral indicators, ensuring that attackers cannot easily evade detection by simply changing their tools or tactics. Priority should be given to behavior-based detection over tool-specific, software-dependent, or IoC-driven approaches. This aligns with the Pyramid of Pain model, which emphasizes detecting adversaries based on their tactics, techniques, and procedures (TTPs) rather than easily replaceable indicators. By prioritizing common TTPs, we can effectively identify an adversary’s modus operandi, making detection more resilient and impactful.

  7. Universal rules

When planning a detection program from scratch, it is important not to ignore the universal threat detection rules that are mostly available in SIEM by default. Detection engineers should operationalize them as soon as possible and tune them according to feedback received from SOC analysts or what they have learned about the organization’s infrastructure during baselining activity.

Universal rules generally include malicious behavior associated with applications, databases, authentication anomalies, unusual remote access behavior, and policy violation rules (typically to monitor compliance requirements).

Some examples include:

  • Windows firewall settings modification detected
  • Use of unapproved remote access tools
  • Bulk failed database login attempts
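
To make the shape of such a rule concrete, here is a hypothetical SIGMA-style rule for the last example, expressed as a Python mapping; the field names follow SIGMA conventions, but the log source and threshold are illustrative assumptions to be tuned during baselining:

bulk_failed_db_logins = {
    "title": "Bulk failed database login attempts",
    "status": "stable",
    "logsource": {"category": "database"},
    "detection": {
        "selection": {"event_type": "login_failed"},
        # Threshold chosen for illustration only
        "condition": "selection | count() by src_ip > 20",
    },
    "timeframe": "5m",
    "level": "medium",
}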


Performance measurement


Every investment needs to be justified with measurable outcomes that demonstrate its value. That is why communicating the value of a detection engineering program requires the use of effective and actionable metrics that demonstrate impact and alignment with business objectives. These metrics can be divided into two categories: program-level metrics and technical-level metrics. Program-level metrics signal to security leadership that the program is well aligned with the company’s security objectives. Technical metrics, on the other hand, focus on how operational work is being carried out to maximize the detection engineering team’s operational efficiency. By measuring both program-level metrics and technical-level metrics, security leaders can clearly show how the detection engineering program supports organizational resilience while ensuring operational excellence.

Designing effective program-level metrics requires revisiting the core purpose for initiating the program. This approach helps identify metrics that clearly communicate success to security leadership. There are three metrics that can be very effective for measuring success at the program level.

  1. Time to Detect (TTD) – this metric is calculated as the time elapsed from the moment an attacker’s initial activity is observed until the time it is formally detected by the analyst. Some SOCs consider the time the alert is triggered on the SIEM as the detection time, but that is not really an actionable metric. The time at which the alert is converted into a potential incident is the better choice of detection time for SOC analysts.

Although the initial detection of activity occurs at t1 (alert triggered), when malicious activity occurs, a series of events must be analyzed before qualifying the incident. This is why t3 is required to correctly qualify the detection as a potential threat. Additional metrics such as time to triage (TTT), which establishes how long it takes to qualify the incident, and time to investigate (TTI), which describes how long it takes to investigate the qualified incident, can also come in handy.
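
A minimal sketch of how these time-based metrics could be computed, assuming the ticketing platform exposes the relevant timestamps (the variable names are illustrative):

from datetime import datetime

def minutes_between(start: datetime, end: datetime) -> float:
    return (end - start).total_seconds() / 60

activity_observed  = datetime(2025, 4, 15, 9, 0)    # attacker's initial activity
alert_triggered    = datetime(2025, 4, 15, 9, 5)    # t1: SIEM alert fires
incident_qualified = datetime(2025, 4, 15, 9, 40)   # t3: analyst qualifies the incident
investigation_done = datetime(2025, 4, 15, 11, 10)  # investigation completed

ttd = minutes_between(activity_observed, incident_qualified)   # time to detect
ttt = minutes_between(alert_triggered, incident_qualified)     # time to triage
tti = minutes_between(incident_qualified, investigation_done)  # time to investigate
print(f"TTD={ttd:.0f} min, TTT={ttt:.0f} min, TTI={tti:.0f} min")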

Time to detect compared to time to triage and time to investigate metrics


  2. Signal-to-Noise Ratio (SNR) – this metric indicates the effectiveness of detection rules by measuring the balance between relevant and irrelevant information. It compares the number of true positive detections (correct alerts for real threats) to the number of false positives (incorrect or misleading alerts):

SNR = True positives / False positives

Where:

True positives: instances where a real threat is correctly detected
False positives: incorrect alerts that do not represent real threats

A high SNR indicates that the system is generating more meaningful alerts (signal) compared to noise (false positives), thereby enhancing the efficiency of security operations by reducing alert fatigue and focusing analysts’ attention on genuine threats. Improving SNR is crucial to maximizing the performance and reliability of a detection program. SNR directly impacts the amount of SOC analyst effort spent on false positives, which in turn influences alert fatigue and the risk of professional burnout. Therefore, it is a very important metric to consider.
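
In code the ratio is trivial; a small sketch, assuming the TP/FP counts come from the ticketing platform’s alert dispositions:

def signal_to_noise(true_positives: int, false_positives: int) -> float:
    if false_positives == 0:
        return float("inf")  # no noise at all
    return true_positives / false_positives

print(signal_to_noise(120, 480))  # 0.25: four false alarms per real detection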

  3. Threat Profile Alignment (TPA) – this metric evaluates how well detections are aligned with known adversarial tactics, techniques, and procedures (TTPs). It does so by determining how many of the identified TTPs are adequately covered by unique detections (unique data components).

TPA = (TTPs covered with at least three unique detections / Total TTPs identified) × 100%

Where:

Total TTPs identified – this is the number of known adversarial techniques relevant to the organization’s threat model, typically derived from cyber threat intelligence threat profiling efforts.
Total TTPs covered with at least three unique detections (where possible) – this counts how many of the identified TTPs are covered by at least three distinct detection mechanisms. Having multiple detections for a given TTP enhances detection confidence, ensuring that if one detection fails or is bypassed, others can still identify the activity.

Team efforts supporting the detection engineering program must also be measured to demonstrate progress. These efforts are reflected in technical-level metrics, and monitoring these metrics will help justify team scalability and address productivity challenges. Key metrics are outlined below:

  1. Time to Qualify Detection (TTQD) – this metric measures the time required to analyze and validate the relevance of a detection for further processing. The Detection Engineering Lead assesses the importance of the detection and prioritizes it accordingly. The metric equals the time that has elapsed from when a ticket is raised to create a detection to when it is shortlisted for further research and implementation.

  2. Time to Create Detection (TTCD) – this tracks the amount of time required to design, develop and deploy a new detection rule. It highlights the agility of detection engineering processes in responding to evolving threats.

  3. Detection Backlog – the backlog refers to the number of pending detection rules awaiting review or consideration for detection improvement. A growing backlog might indicate resource constraints or inefficiencies.

  4. Distribution of Rules Criticality (High, Medium, Low) – this metric shows the proportion of detection rules categorized by their criticality level. It helps in understanding the balance of focus between high-risk and lower-risk detections.

  5. Detection Coverage (MITRE) – detection coverage based on MITRE ATT&CK indicates how well the detection rules cover various tactics, techniques, and procedures (TTPs) in the MITRE ATT&CK framework. It helps identify coverage gaps in the defense strategy. Tracking the number of unique detections that cover each specific technique is highly recommended, as it provides visibility into threat profile alignment – a program-level metric. If unique detections are not being built to close gaps and coverage is not increasing over time, it indicates an issue in the detection qualification process.

  6. Share of Rules Never Triggered – this metric tracks the percentage of detection rules that have never been triggered since their deployment. It may indicate inefficiencies, such as overly specific or poorly implemented rules, and provides insight for rule optimization.

There are other relevant metrics, such as the proportion of behavior-based rules in the total set. Many more metrics can be derived from a general understanding of the detection engineering process and its purpose to support the DE program. However, program managers should focus on selecting metrics that are easy to measure and can be calculated automatically by available tools, minimizing the need for manual effort. Avoid using an excessive number of metrics, as this can lead to a focus on measurement only. Instead, prioritize a few meaningful metrics that provide valuable insight into the program’s progress and efforts. Choose wisely!


securelist.com/streamlining-de…


Binner Makes Workshop Parts Organization Easy


We’ve all had times where we knew we had some part but we had to go searching for it all over as it wasn’t where we thought we put it. Organizing the numerous components, parts, and supplies that go into your projects can be a daunting task, especially if you use the same type of part at different times for different projects. It helps to have a framework to keep track of all the small details. Binner is an open source project that aims to allow you to easily maintain a database that can be customized to your use.

In a recent video for DigiKey, [Byte Sized Engineer] used Binner to track the locations of his components and parts in his freshly organized workshop. Binner already has the ability to read the labels used by well-known electronics suppliers via a barcode scanner, and uses that information to populate your inventory. It even grabs quantities and links in a datasheet for your newly added part. The barcode scanner can also be used to retrieve the contents of a location, so with a single scan Binner can bring up everything residing at that location.

Binner can be run locally, so there isn’t the concern of putting in all the effort to build up your database just to have an internet outage make it inaccessible. Another cool feature is that it allows you to print labels, and you can customize the fields to display the values you care about.

The project already has future plans to tie into a “smart bin” system to light up the location of your component — a clever feature we’ve seen implemented in previous setups.

youtube.com/embed/ymEuw_RdUzQ?…


hackaday.com/2025/04/16/binner…


Linux 6.15 Improves Cryptography: Advanced Support for Modern CPUs, Up to Three Times Faster


The upcoming Linux 6.15 kernel is expected to bring major improvements to the crypto subsystem, with particularly interesting optimizations aimed at modern Intel and AMD processors on the x86_64 architecture.

As of last week, all of the cryptographic code updates had already been merged into the main development branch.

These include the removal of the legacy compression interface, an improved API for working with data scattered across memory (scatterwalk), support for the Kerberos5 algorithms, the removal of unnecessary SIMD fallback code, the addition of a new PCI device identifier “0x1134” to the AMD CCP driver (probably for a device that has not yet been announced), and a series of bug fixes.

But the main update that regular users will notice is the new implementation of AES-CTR using the VAES instructions. This code is optimized for the latest Intel processors and especially for AMD Zen 5. This particular patch series was previously reported to speed up AES-CTR on Zen 5 by as much as 3.3x compared to the previous implementations.

The optimization relies on a combination of AES-NI, AVX, and VAES, modern instruction sets that accelerate encryption in hardware. The author of the improvements is once again Google engineer Eric Biggers, already well known for his contributions to crypto acceleration in Linux. This continues a trend seen in recent kernel releases, in which more and more algorithms gain efficient hardware execution paths, especially on x86_64 platforms.

As a result, users of new AMD- and Intel-based systems will notice significant performance improvements when using encryption, especially in data-intensive scenarios.

The article “Linux 6.15 Improves Cryptography: Advanced Support for Modern CPUs, Up to Three Times Faster” comes from il blog della sicurezza informatica.


Something is Very Wrong With the AY-3-8913 Sound Generator



The General Instruments AY-3-8910 was a quite popular Programmable Sound Generator (PSG) that saw itself used in a wide variety of systems, including Apple II soundcards such as the Mockingboard and various arcade systems. In addition to the Yamaha variants (e.g. YM2149), two cut-down versions were created by GI: these being the AY-3-8912 and the AY-3-8913, which should have been differentiated only by the number of GPIO banks broken out in the IC package (one or zero, respectively). However, research by [fenarinarsa] and others has shown that the AY-3-8913 variant has some actual hardware issues as a PSG.

With only 24 pins, the AY-3-8913 is significantly easier to integrate than the 40-pin AY-3-8910, at the cost of the (rarely used) GPIO functionality, but, as it turns out, with a few gotchas in terms of timing and register access. Although the Mockingboard originally used the AY-3-8910, later revisions would use two AY-3-8913s instead, including the MS revision that was the Mac version of the Mindscape Music Board for IBM PCs.

The first hint that something was off with the AY-3-8913 came when [fenarinarsa] was experimenting with effect composition on an Apple II and noticed very poor sound quality, as demonstrated in an example comparison video (also embedded below). The issue was very pronounced in bass envelopes, with an oscilloscope capture showing a very distorted output compared to a YM2149. As for why this was not noticed decades ago, the likely explanation is that the current chiptune scene is pushing the hardware in very different ways than back then.

As for potential solutions, the [French Touch] project has created an adapter to allow an AY-3-8910 (or YM2149) to be used in place of an AY-3-8913.

Top image: Revision D PCB of Mockingboard with GI AY-3-8913 PSGs.

youtube.com/embed/_qslugOY2Dw?…


hackaday.com/2025/04/15/someth…


Replica of 1880 Wireless Telephone is All Mirrors, No Smoke


Engraving of Alexander Graham Bell's photophone, showing the receiver and its optics

If we asked you to name Alexander Graham Bell’s greatest invention, you would doubtless say “the telephone”; it’s probably the only one of his many, many inventions most people could bring to mind. If you asked Bell himself, though, he would tell you his greatest invention was the photophone, and if the prolific [Nick Bild] doesn’t agree he’s at least intrigued enough to produce a replica of this 1880-vintage wireless telephone. Yes, 1880. As in, only four years after the telephone was patented.

It obviously did not catch on, and is not the sort of thing that comes to mind when we think “wireless telephone”. In contrast to the RF of the 20th century version, as you might guess from the name, the photophone used light: sunlight, to be specific. In the original design, the transmitter was totally passive: a tube with a mirror on one end, mounted to vibrate when someone spoke into the open end of the tube. That was it, aside from the necessary optics to focus sunlight onto said mirror. [Nick Bild] skips this and uses a laser as a handily coherent light source, which was obviously not an option in 1880. As [Nick] points out, if it was, Bell certainly would have made use of it.
The photophone receiver, 1880 edition. Speaker not pictured.
The receiver is only slightly more complex, in that it does have electronic components: a selenium cell in the original, and in [Nick’s] case a modern photoresistor in series with a 10,000 ohm resistor. There’s also an optical difference, with [Nick] opting for a lens to focus the laser light on his photoresistor instead of the parabolic mirror of the original. In both cases, vibration of the mirror at the transmitter disrupts line-of-sight with the receiver, creating an AM signal that is easily converted back into sound with an electromagnetic speaker.

The photophone never caught on, for obvious reasons (traditional copper-wire telephones worked beyond line of sight and on cloudy days), but we’re grateful to [Nick] for dredging up the history and for letting us know about it via the tip line. See his video about this project below.

The name [Nick Bild] might look familiar to regular readers. We’ve highlighted a few of his projects on Hackaday before.

youtube.com/embed/XQ86fkRRS5M?…


hackaday.com/2025/04/15/replic…


DIY AI Butler Is Simpler and More Useful Than Siri


[Geoffrey Litt] shows that getting an effective digital assistant that’s tailored to one’s own needs just needs a little DIY, and thanks to the kinds of tools that are available today, it doesn’t even have to be particularly complex. Meet Stevens, the AI assistant who provides the family with useful daily briefs. The back end? Little more than one SQLite table and a few cron jobs.
A sample of Stevens’ notebook entries, both events and things to simply remember.
Every day, Stevens sends a daily brief via Telegram that includes calendar events, appointments, weather notes, reminders, and even a fun fact for the day. Stevens isn’t send-only, either: users can add new entries or ask questions about items through Telegram.

It’s rudimentary, but [Geoffrey] already finds it far more useful than Siri. This is unsurprising, as it has been astutely observed that big tech’s digital assistants are designed to serve their makers rather than their users. Besides, it’s also fun to have the freedom to give an assistant its own personality, something existing offerings sorely lack.

Architecture-wise, the assistant has a notebook (the single SQLite table) that gets populated with entries. These entries come from things like reading family members’ Google calendars, pulling data from a public weather API, processing delivery notices from the post office, and Telegram conversations. With a notebook of such entries (along with a date the entry is expected to be relevant), generating a daily brief is simple. After all, LLMs (Large Language Models) are amazingly good at handling and formatting natural language. That’s something even a locally-installed LLM can do with ease.
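
As a rough sketch of how little backend such a design needs (the table and column names below are our own guesses, not [Geoffrey]’s actual schema):

import sqlite3

con = sqlite3.connect("stevens.db")
con.execute("""CREATE TABLE IF NOT EXISTS notebook (
    id INTEGER PRIMARY KEY,
    entry TEXT NOT NULL,        -- e.g. 'Dentist appointment at 15:00'
    relevant_on DATE NOT NULL   -- when the entry belongs in a brief
)""")
con.execute("INSERT INTO notebook (entry, relevant_on) VALUES (?, date('now'))",
            ("Package out for delivery",))
con.commit()

# Collect today's entries; an LLM then formats them into the daily brief
rows = con.execute(
    "SELECT entry FROM notebook WHERE relevant_on = date('now')").fetchall()
print("\n".join(r[0] for r in rows))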

[Geoffrey] says that even this simple architecture is super useful, and it’s not even a particularly complex system. He encourages anyone who’s interested to check out his project and see for themselves how useful even a minimally-informed assistant can be when it’s designed with one’s own needs in mind.


hackaday.com/2025/04/15/diy-ai…


Machine Learning: the Secret Is the Model, but Also the Code!


In most Machine Learning jobs, you don’t do research to improve a model’s architecture or to design a new loss function. In most cases, you have to use what already exists and adapt it to your own use case.

It is therefore very important to optimize the project in terms of software architecture and implementation in general. Everything starts from here: you want optimal code that is clean, reusable, and runs as fast as possible. threading is a native Python library that is not used as often as it should be.

About threads


Threads are a way for a process to split itself into two or more tasks running simultaneously (or pseudo-simultaneously). A thread lives inside a process, and different threads of the same process share the same resources.

This article does not cover multiprocessing, but Python’s multiprocessing library works very similarly to its multithreading one.

In general:

  • Multithreading is great for I/O-bound tasks, such as calling an API inside a for loop
  • Multiprocessing is used for CPU-bound tasks, such as transforming lots of tabular data at once

So, if we want to run several things at the same time, we can do so using threads. The Python library for working with threads is called threading.

Let’s start simple. I want two Python threads to print something at the same time. Let’s write two functions, each containing a for loop that prints some words.

def print_hello():
    for x in range(1_000):
        print("hello")

def print_world():
    for x in range(1_000):
        print("world")

Now, if I run them one after the other, I will see the word “hello” printed 1,000 times in my terminal, followed by 1,000 “world”s.

Let’s use threads instead. We define two threads and assign each of them one of the functions defined above. Then we start the threads. You should see “hello” and “world” alternating on your terminal.

If you want to wait for the threads to finish before continuing to run the code, you can do so with join().

import threading

thread_1 = threading.Thread(target = print_hello)
thread_2 = threading.Thread(target = print_world)

thread_1.start()
thread_2.start()

# wait for the threads to finish before continuing running the code
thread_1.join()
thread_2.join()

print("do other stuff")

Locking thread resources


Sometimes two or more threads may modify the same resource, for example a variable holding a number.

One thread has a for loop that always adds one to the variable, and the other always subtracts one. If we run these threads together, the variable will “always” stay around zero (more or less). But we want a different behavior: the first thread to take possession of the variable should add or subtract 1 until it reaches a certain limit. Then it releases the variable, and the other thread is free to take possession of it and perform its own operations.

import threading
import time

x = 0
lock = threading.Lock()

def add_one():
    global x, lock  # use global to work with global vars
    lock.acquire()
    while x < 10:
        x = x + 1
        print(x)
        time.sleep(1)
    print("reached maximum")
    lock.release()

def subtract_one():
    global x, lock
    lock.acquire()
    while x > -10:
        x = x - 1
        print(x)
        time.sleep(1)
    print("reached minimum")
    lock.release()

In the code above, we have two functions, each executed by its own thread. Once started, the function acquires the lock so that the second thread cannot access the variable until the first one has finished.

thread_1 = threading.Thread(target = add_one)
thread_2 = threading.Thread(target = subtract_one)

thread_1.start()
thread_2.start()

Locking with a semaphore


We can achieve a result similar to the one above using semaphores. Suppose we want only a certain number of threads to access a function at the same time. This means that not all threads will have access to this function, but only 5 of them, for example. The other threads will have to wait for some of those 5 to finish their computations before gaining access to the function and running the script. We can get this behavior by using a semaphore and setting its value to 5. To start a thread with an argument, we can use args in the Thread object.

import time
import threading

semaphore = threading.BoundedSemaphore(value=5)

def func(thread_number):
    print(f"{thread_number} is trying to access the resource")
    semaphore.acquire()

    print(f"{thread_number} granted access to the resource")
    time.sleep(12)  # fake some computation

    print(f"{thread_number} is releasing resource")
    semaphore.release()

if __name__ == "__main__":
    for thread_number in range(10):
        t = threading.Thread(target = func, args = (thread_number,))
        t.start()
        time.sleep(1)

Events


Events are simple signaling mechanisms used to coordinate threads. You can think of an event as a flag that can be set or cleared, and other threads can wait for it to be set before continuing their work.

For example, in the snippet below, thread_1, which runs the function func, must wait for the user to type “yes” and trigger the event before it can finish the whole function.

import threading

event = threading.Event()

def func():
    print("This event function is waiting to be triggered")
    event.wait()
    print("event is triggered, performing action now")

thread_1 = threading.Thread(target = func)
thread_1.start()

x = input("Do you want to trigger the event? \n")
if x == "yes":
    event.set()
else:
    print("you chose not to trigger the event")

Daemon threads


These are simply threads that run in the background. The main script terminates even if this background thread is still running. For example, you can use a daemon thread to continuously read from a file that gets updated over time.

Let’s write a script in which a daemon thread continuously reads from a file and updates a string variable, while another thread prints the content of that variable to the console.

import threading
import time

path = "myfile.txt"
text = ""

def read_from_file():
    global path, text
    while True:
        with open(path, "r") as f:
            text = f.read()
        time.sleep(4)

def print_loop():
    for x in range(30):
        print(text)
        time.sleep(1)

thread_1 = threading.Thread(target = read_from_file, daemon = True)
thread_2 = threading.Thread(target = print_loop)

thread_1.start()
thread_2.start()

Queues


A queue is a collection of items that obeys the first-in/first-out (FIFO) principle. It is a way of handling data structures in which the first element is processed first and the newest element last.

We can also change the priority order in which the items of the collection are processed. LIFO, for example, stands for last-in/first-out. Or, more generally, we can have a priority queue in which we choose the order manually.

If several threads want to work on a list of items, for example a list of numbers, we might run into the problem of two threads performing computations on the same item. We want to avoid that, so we can share a queue among the threads: when a thread performs its computation on an item, that item is removed from the queue. Let’s look at an example.

import queue

q = queue.Queue()  # it can also be a LifoQueue or PriorityQueue
number_list = [10, 20, 30, 40, 50, 60, 70, 80]

for number in number_list:
    q.put(number)

print(q.get())  # -> 10
print(q.get())  # -> 20

An example of threads in a Machine Learning project


Suppose we are working on a project that requires a data streaming and preprocessing pipeline. This happens in many projects involving IoT devices or any kind of sensor. A background daemon thread can continuously fetch and preprocess data while the main thread focuses on inference.

For example, take a simple case in which I have to develop a real-time image classification system using my camera feed. I would set up my code with 2 threads:

  • Fetch images from the camera feed in real time.
  • Pass the images to an AI model for inference.

import threading
import time
import queue
import random

# Fake image classifier
def classify_image(image):
    time.sleep(0.5)  # fake the model inference time
    return f"Classified {image}"

def camera_feed(image_queue, stop_event):
    while not stop_event.is_set():
        # Simulate capturing an image
        image = f"Image_{random.randint(1, 100)}"
        print(f"[Camera] Captured {image}")
        image_queue.put(image)
        time.sleep(1)  # Simulate time between captures

def main_inference_loop(image_queue, stop_event):
    while not stop_event.is_set() or not image_queue.empty():
        try:
            image = image_queue.get(timeout=1)  # Fetch image from the queue
            result = classify_image(image)
            print(f"[Model] {result}")
        except queue.Empty:
            continue

if __name__ == "__main__":
    image_queue = queue.Queue()
    stop_event = threading.Event()

    camera_thread = threading.Thread(target=camera_feed, args=(image_queue, stop_event), daemon=True)
    camera_thread.start()

    try:
        main_inference_loop(image_queue, stop_event)
    except KeyboardInterrupt:
        print("Shutting down...")
        stop_event.set()  # Signal the camera thread to stop
    finally:
        camera_thread.join()  # Ensure the camera thread terminates properly
        print("All threads terminated.")

In this simple example, we have:

  • A daemon thread: the camera feed runs in the background, so it does not prevent the program from exiting once the main thread completes.
  • An event for coordination: stop_event allows the main thread to signal the daemon thread to terminate.
  • A queue for communication: image_queue ensures thread-safe communication between the threads.


Conclusions


In this tutorial I showed you how to use the threading library in Python, covering fundamental concepts such as locks, semaphores, and events, as well as more advanced use cases such as daemon threads and queues.

I would like to stress that threading is not just a technical skill, but rather a mindset that lets you write clean, efficient, and reusable code. Whether you are handling API calls, processing sensor data streams, or building a real-time AI application, threading allows you to build systems that are robust, responsive, and ready to scale.

The article “Machine Learning: the Secret Is the Model, but Also the Code!” comes from il blog della sicurezza informatica.



Making Parts Feeders Work Where They Weren’t Supposed To


[Chris Cecil] had a problem. He had a Manncorp/Autotronik MC384V2 pick and place, and needed more feeders. The company was reluctant to support an older machine and wanted over $32,000 to supply [Chris] with more feeders. He contemplated the expenditure… but then came across another project which gave him pause. Could he make Siemens feeders work with his machine?

It’s one of those “standing on the shoulders of giants” stories, with [Chris] building on the work from [Bilsef] and the OpenPNP project. He came across SchultzController, which could be used to work with Siemens Siplace feeders for pick-and-place machines. They were never supposed to work with his Manncorp machine, but it seemed possible to knit them together in some kind of unholy production-focused marriage. [Chris] explains how he hooked up the Manncorp hardware to a Smoothieboard and then Bilsef’s controller boards to get everything working, along with all the nitty gritty details on the software hacks required to get everything playing nice.

For an investment of just $2,500, [Chris] has been able to massively expand the number of feeders on his machine. Now, he’s got his pick and place building more Smoothieboards faster than ever, with less manual work on his part.

We feature a lot of one-off projects and home production methods, but it’s nice to also get a look at methods of more serious production in bigger numbers, too. It’s a topic we follow with interest. Video after the break.

youtube.com/embed/TQo33HRDTA8?…

[Editor’s note: Siemens is the parent company of Supplyframe, which is Hackaday’s parent company. This has nothing to do with this story.]


hackaday.com/2025/04/15/making…


A New Kind of Bike Valve?


If you’ve worked on a high-end mountain or road bike for any length of time, you have likely cursed the Presta valve. This humble century-old invention is the bane of many a home and professional mechanic. What if there is a better option? [Seth] decided to find out by putting four valves on a single rim.

The contenders include the aforementioned Presta, as well as Schrader, Dunlop, and the young gun, Click. Schrader and Dunlop both pre-date Presta, with Schrader finding prevalence in cruiser bicycles along with cars and even aircraft. Dunlop is still found on bicycles in parts of Asia and Europe. Then along came Presta, some time around 1893, designed to hold higher pressures and be lower profile than Schrader and Dunlop. It found prevalence in the weight-conscious, narrow-rimmed road bike world and, for better or worse, has stuck around ever since.

But there’s a new contender from industry legend Schwalbe called Click. Click comes with a wealth of nifty modern engineering tricks including its party piece, and namesake, of a clicking mechanical locking system, no lever, no screw attachment. Click also fits into a Presta valve core and works on most Presta pumps. Yet, it remains to be seen weather Click is just another doomed standard, or the solution to many a cyclists greatest headache.

This isn’t the first time we’ve seen clever engineering going into a bike valve.

youtube.com/embed/vL1gXXba0Kk?…


hackaday.com/2025/04/15/a-new-…


Announcing the Hackaday Pet Hacks Contest


A dog may be man’s best friend, but many of us live with cats, fish, iguanas, or even wilder animals. And naturally, we like to share our hacks with our pets. Whether it’s a robot ball-thrower, a hamster wheel that’s integrated into your smart home system, or even just an automatic feeder for when you’re not home, we want to see what kind of projects that your animal friends have inspired you to pull off.

The three top choices will take home $150 gift certificates from DigiKey, the contest’s sponsor, so that you can make even more pet-centric projects. You have until May 27th to get your project up on Hackaday.io, and get it entered into Pet Hacks.

Honorable Mention Categories


Of course, we have a couple thoughts about fun directions to take this contest, and we’ll be featuring entries along the way. Just to whet your appetite, here are our four honorable mention categories.

  • Pet Safety: Nothing is better than a hack that helps your pet stay out of trouble. If your hack contributes to pet safety, we want to see it.
  • Playful Pets: Some hacks are just for fun, and that goes for our pet hacks too. If it’s about amusing either your animal friend or even yourself, it’s a playful pet hack.
  • Cyborg Pets: Sometimes the hacks aren’t for your pet, but on your pet. Custom pet prosthetics or simply ultra-blinky LED accouterments belong here.
  • Home Alone: This category is for systems that aim to make your pet more autonomous. That’s not limited to vacation feeders – anything that helps your pet get along in this world designed for humans is fair game.


Inspiration


We’ve seen an amazing number of pet hacks here at Hackaday, from simple to wildly overkill. And we love them all! Here are a few of our favorite pet hacks past, but feel free to chime in the comments if you have one that didn’t make our short list.

Let’s start off with a fishy hack. Simple aquariums don’t require all that much attention or automation, so they’re a great place to start small with maybe a light controller or something that turns off your wave machine every once in a while. But when you get to the point of multiple setups, you might also want to spend a little more time on the automation. Or at least that’s how we imagine that [Blue Blade Fish] got to the point of a system with multiple light setups, temperature control, water level sensing, and more. It’s a 15-video series, so buckle in.

OK, now let’s talk cats. Cats owners know they can occasionally bring in dead mice, for which a computer-vision augmented automatic door is the obvious solution. Or maybe your cats spend all their time in the great outdoors? Then you’ll need a weather-proof automatic feeder for the long haul. Indoor cats, each with a special diet? Let the Cat-o-Matic 3000 keep track of who has been fed. But for the truly pampered feline, we leave for your consideration the cat elevator and the sun-tracking chair.

Dogs are more your style? We’ve seen a number of automatic ball launchers for when you just get tired of playing fetch. But what tugged hardest at our heartstrings was [Bud]’s audible go-fetch toy that he made for his dog [Lucy] when she lost her vision, but not her desire to keep playing. How much tech is too much tech? A dog-borne WiFi hotspot, or a drone set up to automatically detect and remove the dreaded brown heaps?

Finally, we’d like to draw your attention to some truly miscellaneous pet hacks. [Mr. Goxx] is a hamster who trades crypto, [Mr. Fluffbutt] runs in a VR world simulation hamster wheel, and [Harold] posts his workouts over MQTT – it’s the Internet of Hamsters after all. Have birds? Check out this massive Chicken McMansion or this great vending machine that trains crows to clean up cigarette butts in exchange for peanuts.

We had a lot of fun looking through Hackaday’s back-catalog of pet hacks, but we’re still missing yours! If you’ve got something you’d like us all to see, head on over to Hackaday.io and enter it in the contest. Fame, fortune, and a DigiKey gift certificate await!

2025 Hackaday Pet Hacks Contest


hackaday.com/2025/04/15/announ…