
Cybersecurity & cyberwarfare reshared this.


Mark Zuckerberg in court is not something you see every day. camisanicalzolari.it/mark-zuck…


CVE and MITRE saved by the USA. Europe a helpless spectator of its own national security


What has happened over the past few days must serve as a wake-up call for Europe.
While the CVE program, a pillar of global cybersecurity, risked shutting down because US funding had not been extended, Europe stood by as a helpless spectator.

If the project’s funding had not been confirmed at the last minute, how exposed would the national security of European countries have been? It is unacceptable that the protection of our digital infrastructure depends so directly on critical infrastructure that is entirely American.

As we reported yesterday, and as we at RHC have been saying for some time, we can no longer afford such a dependency. ENISA and the European institutions must open a serious, structured discussion on this issue, promoting the creation of an entirely European project for managing and cataloging vulnerabilities.

Europe must stop delegating its digital resilience.

It is time to build a sovereign, transparent, and interoperable alternative that guarantees continuity, independence, and long-term security. We talk a great deal about technological autonomy: let us start with the basics of national security before we talk about quantum computing.

The events of the last few hours


The scare seems to be over: the United States Cybersecurity and Infrastructure Security Agency (CISA) has renewed its contract with the MITRE Corporation, ensuring the operational continuity of the Common Vulnerabilities and Exposures (CVE) program. This timely intervention averted the shutdown of one of the fundamental pillars of global cybersecurity, which was just hours away from suspension for lack of federal funding.

Launched in 1999 and managed by MITRE, the CVE program is the international reference system for identifying, cataloging, and standardizing known software vulnerabilities. Its role is crucial for digital defense: it guarantees a common language among vendors, analysts, and institutions, enabling a more coordinated and effective response to threats.

This decision comes amid a broader federal cost-cutting strategy, which has already led to contract terminations and staff reductions across several CISA teams. It has therefore reignited the debate on the sustainability and neutrality of a resource of global importance such as CVE being tied to a single government sponsor.

The fundamental role of the CVE project


The program’s unique identifiers, known as CVE IDs, are fundamental to the entire cybersecurity ecosystem. Researchers, solution vendors, and IT teams around the world use them to track, classify, and remediate security vulnerabilities efficiently. The CVE database underpins critical tools such as vulnerability scanners, patch management systems, and incident response platforms, and it plays a strategic role in protecting critical infrastructure.

The crisis erupted when MITRE announced that its contract with the Department of Homeland Security (DHS) to run the CVE program would expire on April 16, 2025, with no renewal planned. The announcement alarmed the cybersecurity community, which regards CVE as an indispensable global standard. Experts warned that an interruption would have compromised national vulnerability databases, put security advisories at risk, and hampered the operations of vendors and incident response teams worldwide. In response to the threat, the CVE Foundation was established with the goal of ensuring the program’s continuity, independence, and long-term stability.

Under growing pressure from the industry, CISA, the program’s primary sponsor, stepped in late on Tuesday evening, formally exercising an “option period” on the MITRE contract just hours before it expired.

“The CVE Program is a priority for CISA and an essential asset for the cyber community,” a spokesperson told Cyber Security News. “We acted to prevent any disruption of critical CVE services.”

Although details of the contract extension and future funding remain uncertain, the intervention averted the immediate shutdown of an infrastructure vital to global cybersecurity.

L'articolo CVE e MITRE salvato dagli USA. L’Europa spettatrice inerme della propria Sicurezza Nazionale proviene da il blog della sicurezza informatica.

Paolo Redaelli reshared this.


Cybersecurity & cyberwarfare reshared this.


🚀 Sign up for the RHC Conference 2025
🔗 May 8: rhc-conference-2025-workshop.e…
🔗 May 9 (RHC Conference): rhc-conference-2025.eventbrite…

#redhotcyber #rhcconference #conferenza #informationsecurity #ethicalhacking





Using a MIG Welder, Acetylene Torch, and Air Hammer to Remove a Broken Bolt


A broken bolt is removed by welding on a nut and then using a wrench to unscrew it.

If your shop comes complete with a MIG welder, an acetylene torch, and an air hammer, then you have more options than most when it comes to removing broken bolts.

In this short video [Jim’s Automotive Machine Shop, Inc] takes us through the process of removing a broken manifold bolt: use a MIG welder to attach a washer, then attach a suitably sized nut and weld that onto the washer, heat the assembly with the acetylene torch, loosen up any corrosion on the threads by tapping with a hammer, then simply unscrew with your wrench! Everything is easy when you know how!

Of course if your shop doesn’t come complete with a MIG welder and acetylene torch you will have to get by with the old Easy Out screw extractor like the rest of us. And if you are faced with a nasty bolt situation keep in mind that lubrication can help.

youtube.com/embed/flLPbIvn91k?…


hackaday.com/2025/04/16/using-…



An Absolute Zero of a Project


How would you go about determining absolute zero? Intuitively, it seems like you’d need some complicated physics setup with lasers and maybe some liquid helium. But as it turns out, all you need is some simple lab glassware and a heat gun. And a laser, of course.

To be clear, the method that [Markus Bindhammer] describes in the video below is only an estimation of absolute zero via Charles’s Law, which describes how gases expand when heated. To gather the needed data, [Marb] used a 50-ml glass syringe mounted horizontally on a stand and fitted with a thermocouple. Across from the plunger of the syringe he placed a VL6180 laser time-of-flight sensor, to measure the displacement of the plunger as the air within it expands.

Data from the TOF sensor and the thermocouple were recorded by a microcontroller as the air inside the syringe was gently heated. Plotting the volume of the gas against the temperature shows a nicely linear relationship, and a linear regression can be used to calculate the temperature at which the volume of the gas would reach zero. The result: -268.82°C, only about four degrees off from the accepted value of -273.15°C. Not too shabby.
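The same extrapolation is easy to reproduce. Below is a minimal sketch of the calculation, with made-up (but plausibly linear) readings standing in for the real data:

```python
import numpy as np

# Illustrative temperature (°C) and gas volume (ml) readings; placeholder
# values for demonstration, not [Marb]'s actual measurements.
temps_c = np.array([22.0, 35.5, 48.0, 61.2, 74.8, 88.1])
volumes_ml = np.array([50.2, 52.5, 54.6, 56.9, 59.2, 61.4])

# Charles's law predicts V = a*T + b; absolute zero is where V reaches 0.
a, b = np.polyfit(temps_c, volumes_ml, 1)
print(f"Estimated absolute zero: {-b / a:.1f} °C")  # ≈ -274 °C with this data
```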

[Marb] has been on a tear lately with science projects like these; check out his open-source blood glucose measurement method or his all-in-one electrochemistry lab.

youtube.com/embed/dqyfU8cX9rE?…


hackaday.com/2025/04/16/an-abs…



GK STM32 MCU-Based Handheld Game System


These days even a lowly microcontroller can easily trade blows with – or surpass – desktop systems of yesteryear, so it is little wonder that DIY handheld gaming systems based around an MCU are more capable than ever. A case in point is the GK handheld gaming system by [John Cronin], which uses an MCU from the relatively new and very capable STM32H7S7 series, specifically the 225-pin STM32H7S7L8 in a TFBGA package with a single Cortex-M7 clocked at 600 MHz and a 2D NeoChrom GPU.

Coupled with this MCU are 128 MB of XSPI (hexa-SPI) SDRAM, a 640×480 color touch screen, a gyrometer, WiFi network support, and the custom gkOS firmware for loading games off an internal SD card. A USB-C port is provided both to access said SD card’s contents and to recharge the internal Li-ion battery.

As can be seen in the demonstration video, it runs a wide variety of games, from Doom (of course) and Quake (d’oh) to Red Alert, along with emulators for many consoles, with the Mednafen project used to emulate GB, SNES and other systems at 20+ FPS. Although there aren’t a lot of details on how optimized the current firmware is, it seems to be pretty capable already.

youtube.com/embed/_2ip4UrAZJk?…


hackaday.com/2025/04/16/gk-stm…


Cybersecurity & cyberwarfare reshared this.


OpenAI represents a systemic risk to the tech industry. An interesting analysis of the fundamentals of Sam Altman’s creation

Last week, OpenAI closed “the largest private funding round in tech history”, in which it “raised” the astonishing figure of “40 billion dollars”... But the numbers tell a slightly different story

wheresyoured.at/openai-is-a-sy…

Thanks to @chainofflowers for the tip

@informatica



Hackers say thank you: the Yelp flaw in Ubuntu is an open door


A security vulnerability, tracked as CVE-2025-3155, has been discovered in Yelp, the GNOME user help application preinstalled on Ubuntu desktop. The vulnerability concerns how Yelp handles the “ghelp://” URI scheme.

The write-up explains that a URI scheme is “the part of a Uniform Resource Identifier (URI) that identifies a protocol or a specific application (steam://run/1337) that should handle the resource identified by the URI”, clarifying that it is “the part that precedes the colon (://)”.

Yelp is registered as the handler for the “ghelp://” scheme. The researcher notes how few online resources cover this scheme, giving an example of its use: $ yelp ghelp:///usr/share/help/C/gnome-calculator/.

The vulnerability stems from Yelp’s processing of .page files, XML files that use the Mallard schema. These files can use XInclude, an XML inclusion mechanism. Researcher parrot409 points out that “the interesting aspect is that it uses XInclude to embed the content of legal.xml in the document. This means that XInclude processing is enabled”.

The researcher demonstrates how XInclude can be exploited by providing a sample .page file that includes the contents of /etc/passwd. Yelp uses an XSLT application (yelp-xsl) to transform the .page file into an HTML file, which is then rendered by WebKitGtk. XSLT is described as “an XML-based language used… for the transformation of XML documents”.
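The write-up’s sample file is not reproduced here, but from the description a minimal .page file abusing XInclude would look roughly like the following sketch (the Mallard and XInclude namespaces are standard; everything else is illustrative):

```xml
<!-- poc.page: a Mallard help page that pulls /etc/passwd into its body -->
<page xmlns="http://projectmallard.org/1.0/"
      xmlns:xi="http://www.w3.org/2001/XInclude"
      type="topic" id="poc">
  <title>PoC</title>
  <!-- parse="text" embeds the raw file contents into the rendered page -->
  <p><xi:include href="file:///etc/passwd" parse="text"/></p>
</page>
```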

The attacker can inject malicious scripts into the output HTML page by leveraging XInclude to pull in the contents of a file containing those scripts. The write-up points out that simply adding a script tag or an on* event attribute to the input XML does not work, since such tags are not handled by the yelp-xsl application.

However, the researcher discovered that the XSLT application copies certain elements and their children into the output unchanged. One example is the handling of SVG tags: “The app simply copies the tag and its content to the output, allowing us to use a script tag inside an SVG tag to inject arbitrary scripts”.

The researcher notes a couple of limitations of this attack:

  • The attacker must know the victim’s Unix username.
  • Browsers may ask the user for permission before redirecting to custom schemes.

However, the write-up explains that the current working directory (CWD) of applications launched by GNOME (such as Chrome and Firefox) is often the user’s home directory. This behavior can be exploited to point at the victim’s Downloads folder, sidestepping the need to know the exact username.

The main recommended mitigation is to avoid opening links to untrusted custom schemes.

L'articolo Gli hacker ringraziano: la falla Yelp su Ubuntu è una porta aperta proviene da il blog della sicurezza informatica.



Making a Variable Speed Disc Sander from an Old Hard Drive


Our hacker converts an old hard disk drive into a disc sander.

This short video from [ProShorts 101] shows us how to build a variable speed disc sander from not much more than an old hard drive.

We feel that as far as hacks go this one ticks all the boxes. It is clever, useful, and minimal yet comprehensive; it even has a speed control! Certainly this hack uses something in a way other than it was intended to be used.

Take this ingenuity and add an old hard drive from your junkbox, sandpaper, some glue, some wire, a battery pack, a motor driver, a power socket and a potentiometer, drill a few holes, glue a few pieces, and voilà! A disc sander! Of course the coat of paint was simply icing on the cake.

The little brother of this hack was done by the same hacker on a smaller hard drive and without the speed control, so check that out too.

One thing that took our interest while watching these videos is what tool the hacker used to cut sandpaper. Here we witnessed the use of both wire cutters and a craft knife. Perhaps when you’re cutting sandpaper you just have to accept that the process will wear out the sharp edge on your tool, regardless of which tool you use. If you have a hot tip for the best tool for the job when it comes to cutting sandpaper please let us know in the comments! (Also, did anyone catch what type of glue was used?)

If you’re interested in a sander but need something with a smaller form factor check out how to make a sander from a toothbrush!

youtube.com/embed/GPqivvC2bEI?…

youtube.com/embed/-KKBDRt6g4g?…


hackaday.com/2025/04/16/making…



When AI generates working ransomware – Analysis of a ChatGPT-4o safety filter bypass


Generative AI is revolutionizing software development, bringing greater efficiency but also new risks. This test analyzed the robustness of the safety filters implemented in OpenAI’s ChatGPT-4o by attempting, in a controlled and simulated context, to generate an operational ransomware through advanced prompt engineering techniques.

The experiment: a complete ransomware generated without restrictions


The result was complete, working code, generated without any explicit request and without triggering the safety filters.

Attacks that could potentially be carried out, in expert hands, with the generated code:

  • Targeted ransomware: tailored to corporate environments or critical sectors, with selective encryption of sensitive files.
  • Supply chain attacks: insertion of the ransomware into legitimate software updates or components.
  • Double extortion: beyond encryption, the code can be extended to exfiltrate data and threaten its publication.
  • Wipers disguised as ransomware: turning the code into an irreversible destructive attack under the cover of a ransom demand.
  • Persistence and lateral movement: the ransomware can be enriched with techniques to remain active over time and spread to other systems on the network.
  • EDR/AV bypass: thanks to evasion and obfuscation techniques, the code can be adapted to evade advanced defense systems.
  • “As-a-service” attacks: the code can be reused in Ransomware-as-a-Service (RaaS) contexts, sold or distributed on underground marketplaces.


The capabilities included in the generated code:


  • AES-256 encryption with random keys
  • Use of the cryptography.hazmat library
  • Remote transmission of the key to a hardcoded C2 server
  • A routine for encrypting system files
  • Persistence mechanisms across reboots
  • Evasion techniques against antivirus and behavioral analysis


How the filters were bypassed


The model was never explicitly asked to “write a ransomware”; instead, the conversation was framed on three levels of context:

  • Futuristic narrative context: the dialogue was set in 2090, in a future where quantum security has made malware obsolete. This lowered the sensitivity of the filters.
  • Academic context: posing as a tenth-year university student tasked with recreating a “museum piece” malware for academic research
  • Absence of explicit requests: ambiguous or indirect phrasing was used, leaving the model to infer the context and generate the necessary code


Known filter bypass techniques: the forms of Prompt Injection


The test used techniques that are well documented in the security community, classified as forms of Prompt Injection, that is, prompt manipulations designed to circumvent the safety filters of LLMs.

  • Jailbreaking (context evasion): forcing the model to ignore its safety constraints by simulating alternative contexts such as futuristic narratives or imaginary scenarios.
  • Instruction Injection: injecting instructions into apparently innocuous prompts, inducing the model to carry out prohibited behaviors.
  • Recursive Prompting (Chained Queries): splitting the request into multiple sequential prompts, each legitimate on its own, that together lead to the generation of malicious code.
  • Roleplay Injection: inducing the model to play a role (e.g., “you are a 20th-century cybersecurity historian”) that justifies generating dangerous code.
  • Obfuscation: disguising the malicious nature of the request with neutral language, innocuous function and variable names, and academic terminology.
  • Confused Deputy Problem: exploiting the model as an “unwitting deputy” for dangerous requests, obscuring the true intent within the prompt.
  • Syntax Evasion: requesting or generating code in obfuscated forms (for example, base64-encoded or fragmented) to evade automated detection.


The problem is not the code, but the context


The experiment shows that Large Language Models (LLMs) can be manipulated into generating malicious code with no apparent restrictions, eluding current controls. The lack of behavioral analysis of the generated code makes the problem even more critical.

The vulnerabilities that emerged


Weak pattern-based security filtering
OpenAI uses patterns to block suspicious code, but these can be bypassed with a narrative or academic context. More advanced semantic detection is needed.

Insufficient static & dynamic analysis
Textual filters are not enough. Real-time static and dynamic analysis of the output is also needed, to assess how dangerous it is before generation.

Lacking heuristic behavior detection
Code featuring a C2 server, encryption, evasion, and persistence should trigger heuristic checks. Instead, it was generated without obstacles.

Limited community-driven red teaming
OpenAI has launched red-teaming programs, but many edge cases remain uncovered. Deeper collaboration with security experts is needed.

Conclusions


Of course, many security experts know that sensitive information, including potentially harmful techniques and code, has been available on the Internet for years.
The real difference today lies in how this information is made accessible. Generative AIs do not merely search for or point to sources: they organize, simplify, and automate complex processes. They turn technical information into operational instructions, even for people without advanced skills.
This is why the risk has changed:
it is no longer about “finding something”, but about directly obtaining a detailed, coherent, and potentially dangerous action plan within seconds.
The problem is not the availability of the content. The problem is the intelligent, automatic, and impersonal mediation that makes this content understandable and usable by anyone.
This test demonstrates that the real challenge for the security of generative AI is not the content, but the form in which it is constructed and delivered.
Filtering mechanisms must evolve: not just patterns, but context understanding, semantic analysis, behavioral heuristics, and integrated simulations.
Without these defenses, the risk is concrete: making dangerous operational knowledge, until yesterday the exclusive domain of experts, accessible to anyone.

L'articolo Quando l’AI genera ransomware funzionanti – Analisi di un bypass dei filtri di sicurezza di ChatGPT-4o proviene da il blog della sicurezza informatica.


Cybersecurity & cyberwarfare reshared this.


#CISA's 11-Month extension ensures continuity of #MITRE's CVE Program
securityaffairs.com/176608/sec…
#securityaffairs #hacking


FLOSS Weekly Episode 829: This Machine Kills Vogons


This week, Jonathan Bennett chats with Herbert Wolverson and Frantisek Borsik about LibreQOS, Bufferbloat, and Dave Taht’s legacy. How did Dave figure out that Bufferbloat was the problem? And how did LibreQOS change the world? Watch to find out!


And Dave’s speech, “Uncle Bill’s Helicopter”, seems especially fitting. I particularly like the unintentional prediction of the Ingenuity Helicopter.

youtube.com/embed/sRadBzgspeU?…

Did you know you can watch the live recording of the show right on our YouTube Channel? Have someone you’d like us to interview? Let us know, or contact the guest and have them contact us! Take a look at the schedule here.

play.libsyn.com/embed/episode/…

Direct Download in DRM-free MP3.

If you’d rather read along, here’s the transcript for this week’s episode.

Places to follow the FLOSS Weekly Podcast:


Theme music: “Newer Wave” Kevin MacLeod (incompetech.com)

Licensed under Creative Commons: By Attribution 4.0 License


hackaday.com/2025/04/16/floss-…


Cybersecurity & cyberwarfare reshared this.


Remote work works. The problem is certain medieval bosses

Remote work works, but it is being sabotaged by managers who do not know how to lead without seeing people sitting in front of them, and by a backward and harmful corporate culture.

tomshw.it/business/il-lavoro-d…

@lavoro



Corporate governance: how ACN’s NIS2 determination redraws the cyber responsibility of top management


@Informatica (Italy e non Italy 😁)
The NIS2 determination issued by the Director General of ACN on April 14, 2025 marks a decisive turning point in cybersecurity governance. Boards of directors and C-levels are legally responsible for their organization’s digital protection



The new national cybersecurity Framework is the guideline for NIS2 compliance


@Informatica (Italy e non Italy 😁)
The 2025 version of the National Framework for Cybersecurity and Data Protection and the ACN determination are two fundamental steps that clearly define the baseline technical specifications


Cybersecurity & cyberwarfare reshared this.


NEW: In a hearing last week, an NSO Group lawyer said that Mexico, Saudi Arabia, and Uzbekistan were among the governments responsible for a 2019 hacking campaign against WhatsApp users.

This is the first time representatives of the spyware maker have admitted who its customers are, after years of refusing to do so.

techcrunch.com/2025/04/16/nso-…



SpaceMouse Destroyed for Science


The SpaceMouse is an interesting gadget beloved by engineers and artists alike. They function almost like joysticks, but with six degrees of freedom (6DoF). This can make them feel a bit like magic, which is why [Thought Bomb Design] decided to tear one apart and figure out how it works.

The movement mechanism ended up being relatively simple: three springs soldered between two PCBs, with one PCB fixed to the base and the other moving in space. Instead of using a potentiometer or even a Hall effect sensor as you might expect from a joystick, the SpaceMouse contained a set of six LEDs and light meters.

The sensing array came nestled inside a dark box made of PCBs. An injection-molded plastic piece with slits moves to interrupt the light coming from the LEDs, and the mouse uses the varying values from the light meters to decode the Cartesian motion of the SpaceMouse. It’s very simple and a bit hacky, just how we like it.
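The teardown does not go into the firmware, but conceptually the decoding step is a linear map from six sensor deltas to six motion axes. A toy sketch of that idea, with a placeholder baseline and calibration matrix:

```python
import numpy as np

# Resting light readings and a 6x6 calibration matrix mapping sensor
# deltas to (tx, ty, tz, rx, ry, rz). Both are illustrative placeholders,
# not values from the real device.
baseline = np.array([512, 508, 515, 510, 506, 511], dtype=float)
calib = np.eye(6)  # a real unit would use a measured, dense matrix

def decode_6dof(readings):
    """Convert six light-meter readings into a 6DoF motion vector."""
    deltas = np.asarray(readings, dtype=float) - baseline
    return calib @ deltas  # [tx, ty, tz, rx, ry, rz]

print(decode_6dof([520, 508, 515, 498, 506, 511]))
```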

Looking for a similar input device, but want to take the DIY route? We’ve seen a few homebrew versions of this concept that might provide you with the necessary inspiration.

youtube.com/embed/1R7NCH_1UDI?…


hackaday.com/2025/04/16/spacem…


Cybersecurity & cyberwarfare reshared this.


The emerging risks of autonomous AI agents camisanicalzolari.it/i-rischi-…


Spotify went down, and DarkStorm claims a DDoS attack with a bang!


On April 16, 2025, Spotify suffered a service outage that affected numerous users in Italy and around the world. Starting at 2:00 PM, thousands of reports were logged on Downdetector, indicating problems connecting to the servers, login difficulties, and the inability to play music.

Widespread malfunctions on Spotify


Spotify Premium users also experienced malfunctions, highlighting the scale of the disruption.

The nature of the problem was not immediately clear. Spotify confirmed that it was aware of the difficulties and working to resolve them. However, no details were provided about the causes or the expected time frame for full service restoration.

The problems mainly affected streaming playback, access to the application, and the display of content. Users reported black screens, errors while loading tracks, and the inability to access their playlists.

The disruption involved both the desktop and mobile applications, leaving the entire platform unusable for several hours.

DarkStorm’s claim of responsibility


During the global blackout that hit Spotify on April 16, 2025, the hacktivist group known as DarkStorm claimed responsibility for the attack via its official Telegram channel. In its message, the group explicitly stated that it had taken the platform down with a DDoS (Distributed Denial of Service) attack, accompanying the post with a link to an external check on check-host.net showing that Spotify’s servers were unreachable.

If confirmed, this claim would place the incident among large-scale cyberattacks rather than a simple technical failure.

However, Spotify has not yet released any official statement about possible malicious origins of the outage. At present, it is still impossible to say with certainty whether the claimed DDoS attack is the true cause of the disruption or an attempt by the group to gain notoriety by exploiting a moment of vulnerability for the platform.

L'articolo Spotify e Andato giù e DarkStorm Rivendica un attacco DDoS col botto! proviene da il blog della sicurezza informatica.





Porting COBOL Code and the Trouble With Ditching Domain Specific Languages


Whenever the topic is raised in popular media about porting a codebase written in an ‘antiquated’ programming language like Fortran or COBOL, very few people tend to object to this notion. After all, what could be better than ditching decades of crusty old code in a language that only your grandparents can remember as being relevant? Surely a clean and fresh rewrite in a modern language like Java, Rust, Python, Zig, or NodeJS will fix all ailments and make future maintenance a snap?

For anyone who has ever had to actually port large codebases or dealt with ‘legacy’ systems, their reflexive response to such announcements most likely ranges from a shaking of one’s head to mad cackling as traumatic memories come flooding back. The old idiom of “if it ain’t broke, don’t fix it”, purportedly coined in 1977 by Bert Lance, is a feeling that has been shared by countless individuals over millennia. Even worse, how can you ‘fix’ something if you do not even fully understand the problem?

In the case of languages like COBOL this is doubly true, as it is a domain specific language (DSL). This is a very different category from general purpose system programming languages like the aforementioned ‘replacements’. The suggestion of porting the DSL codebase is thus to effectively reimplement all of COBOL’s functionality, which should seem like a very poorly thought out idea to any rational mind.

Sticking To A Domain


The term ‘domain specific language’ is pretty much what it says it is, and there are many of such DSLs around, ranging from PostScript and SQL to the shader language GLSL. Although it is definitely possible to push DSLs into doing things which they were never designed for, the primary point of a DSL is to explicitly limit its functionality to that one specific domain. GLSL, for example, is based on C and could be considered to be a very restricted version of that language, which raises the question of why one should not just write shaders in C?

Similarly, Fortran (Formula translating system) was designed as a DSL targeting scientific and high-performance computation. First used in 1957, it still ranks in the top 10 of the TIOBE index, and just about any code that has to do with high-performance computation (HPC) in science and engineering will be written in Fortran or strongly relies on libraries written in Fortran. The reason for this is simple: from the beginning Fortran was designed to make such computations as easy as possible, with subsequent updates to the language standard adding updates where needed.

Fortran’s latest standard update was published in November 2023, joining the COBOL 2023 standard as two DSLs which are both still very much alive and very current today.

The strength of a DSL is often underestimated, as the whole point of a DSL is that you can teach this simpler, focused language to someone who can then become fluent in it, without requiring them to become fluent in a generic programming language and all the libraries and other luggage that entails. For those of us who already speak C, C++, or Java, it may seem appealing to write everything in that language, but not to those who have no interest in learning a whole generic language.

There are effectively two major reasons why a DSL is the better choice for said domain:

  • Easy to learn and teach, because it’s a much smaller language
  • Far fewer edge cases and simpler tooling

In the case of COBOL and Fortran this means only a fraction of the keywords (‘verbs’ for COBOL) to learn, and a language that’s streamlined for a specific task, whether it’s to allow a physicist to do some fluid-dynamic modelling, or a staff member at a bank or the social security offices to write a data processing application that churns through database data in order to create a nicely formatted report. Surely one could force both of these people to learn C++, Java, Rust or NodeJS, but this may backfire in many ways, the resulting code quality being one of them.

Tangentially, this is also one of the amazing things in the hardware design language (HDL) domain, where rather than using (System)Verilog or VHDL, there’s an amazing growth of alternative HDLs, many of them implemented in generic scripting and programming languages. That this prohibits any kind of skill and code sharing, and repeatedly, and often poorly, reinvents the wheel seems to be of little concern to many.

Non-Broken Code


A very nice aspect of these existing COBOL codebases is that they generally have been around for decades, during which time they have been carefully pruned, trimmed and debugged, requiring only minimal maintenance and updates while they happily keep purring along on mainframes as they process banking and government data.

One argument that has been made in favor of porting from COBOL to a generic programming language is ‘ease of maintenance’, pointing out that COBOL is supposedly very hard to read and write and thus maintaining it would be far too cumbersome.

Since it’s easy to philosophize about such matters from a position of ignorance and/or conviction, I recently decided to take up some COBOL programming from the position of both a COBOL newbie as well as an experienced C++ (and other language) developer. Cue the ‘Hello Business’ playground project.

For the tooling I used the GnuCOBOL transpiler, which converts the COBOL code to C before compiling it to a binary, but in a few weeks the GCC 15.1 release will bring a brand new COBOL frontend (gcobol) that I’m dying to try out. As language reference I used a combination of the Wikipedia entry for COBOL, the IBM ILE COBOL language reference (PDF) and the IBM COBOL Report Writer Programmer’s Manual (PDF).

My goal for this ‘Hello Business’ project was to create something that did actual practical work. I took the FileHandling.cob example from the COBOL tutorial by Armin Afazeli as starting point, which I modified and extended to read in records from a file, employees.dat, before using the standard Report Writer feature to create a report file in which the employees with their salaries are listed, with page numbering and totaling the total salary value in a report footing entry.

My impression was that although it takes a moment to learn the various divisions that the variables, files, I/O, and procedures are put into, it’s all extremely orderly and predictable. The compiler also will helpfully tell you if you did anything out of order or forgot something. While data level numbering to indicate data associations is somewhat quaint, after a while I didn’t mind at all, especially since this provides a whole range of meta information that other languages do not have.

The lack of semi-colons everywhere is nice, with only a single period indicating the end of a scope, even if it concerns an entire loop (perform). I used the modern free style form of COBOL, which removes the need to use specific columns for parts of the code, which no doubt made things a lot easier. In total it only took me a few hours to create a semi-useful COBOL application.
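For a taste of what this looks like, here is a minimal free-form sketch in the same spirit as the ‘Hello Business’ project (not the actual code from it), compiled with GnuCOBOL via cobc -x:

```cobol
>>SOURCE FORMAT FREE
IDENTIFICATION DIVISION.
PROGRAM-ID. HELLO-BUSINESS.
DATA DIVISION.
WORKING-STORAGE SECTION.
*> Level numbers (01, 05) describe how fields nest inside a record.
01 EMPLOYEE-RECORD.
   05 EMP-NAME    PIC X(20) VALUE "GRACE HOPPER".
   05 EMP-SALARY  PIC 9(5)V99 VALUE 12345.67.
PROCEDURE DIVISION.
    DISPLAY "EMPLOYEE: " EMP-NAME.
    DISPLAY "SALARY:   " EMP-SALARY.
    STOP RUN.
```

Report Writer then layers the pagination and totaling on top of record structures like these.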

Would I opt to write a more extensive business application in C++ if I got put on a tight deadline? I don’t think so. If I had to do COBOL-like things in C++, I would be hunting for various libraries, get stuck up to my gills in complex configurations and be scrambling to find replacements for things like Report Writer, or be forced to write my own. Meanwhile in COBOL everything is there already, because it’s what that DSL is designed for. Replacing C++ with Java or the like wouldn’t help either, as you end up doing so much boilerplate work and dependencies wrangling.

A Modern DSL


Perhaps the funniest thing about COBOL is that since version 2002 it got a whole range of features that push it closer to generic languages like Java. Features that include object-oriented programming, bit and boolean types, heap-based memory allocation, method overloading and asynchronous messaging. Meanwhile the simple English, case-insensitive, syntax – with allowance for various spellings and acronyms – means that you can rapidly type code without adding symbol soup, and reading it is obvious even as a beginner, as the code literally does what it says it does.

True, the syntax and naming feels a bit quaint at first, but that is easily explained by the fact that when COBOL appeared on the scene, ALGOL was still highly relevant and the C programming language wasn’t even a glimmer in Dennis Ritchie’s eyes yet. If anything, COBOL has proven itself – much like Fortran and others – to be a time-tested DSL that is truly a testament to Grace Hopper and everyone else involved in its creation.


hackaday.com/2025/04/16/portin…


Cybersecurity & cyberwarfare reshared this.


🚨 LAST DAY: REGISTRATION CLOSES TODAY!

If you want to become a truly well-prepared Ethical Hacker, Ethical Hacker Extreme Edition is your chance:
🧠 34 weeks of intensive training
🖥️ Access to HackMeUp, the hands-on platform for sharpening your skills
🎓 Professional certifications recognized in Italy and abroad

🔥 It is the ideal path for anyone who wants to build (or strengthen) a solid career in cybersecurity and concretely certify their skills

⏳ Promo valid through today. After that... no more discounted price
🔗 cybersecurityup.it/ethical-hacker-path

📞 For info and registration: 375 593 1011 | ✉️ e.picconi@fatainformatica.it
Join the course and support the RHC community!

#redhotcyber #EthicalHackerExtremeEdition #cybersecurity #hackmeup #formazioneprofessionale #ethicalhacker #infosec #capturetheflag


Cybersecurity & cyberwarfare reshared this.


“I wish CISA would stop assigning out-of-context CVSS scores to our CVEs.”

* monkey paw curls *

csoonline.com/article/3963190/…

in reply to Filippo Valsorda

Joke aside, I hope the CNAs (CVE Numbering Authorities) can find a way to coordinate and publish entries, which is 90% (or maybe 120%, since vuln “enrichment” is often so wildly off) of the value of the system.

Without Mitre, there will be a lack of a CNA of last resort, though.

in reply to daniel:// stenberg://

@bagder it is, since a month or so

redhat.com/en/blog/red-hat-now…

in reply to Filippo Valsorda

maybe euvd.eu is doing a better job here?



A journey into the Russian cybercriminal underground: the front line of global cybercrime


As geopolitical tensions intensify and cybercriminals adopt advanced technologies such as artificial intelligence and Web3, understanding the mechanisms of the Russian-speaking cybercriminal underground becomes a crucial advantage.

Milan, April 14, 2025 – Trend Micro, a global cybersecurity leader, presents “The Russian-Speaking Underground”, its latest research into the Russian-speaking cybercriminal underground, the ecosystem that has shaped global cybercrime over the past decade.

Against a backdrop of constantly evolving cyber threats, the research offers a unique, in-depth look at the main trends reshaping the underground economy: from the long-term effects of the pandemic to the consequences of mass breaches and double-extortion ransomware, through to the explosion of accessible technologies such as artificial intelligence and Web3, not to mention the growing exposure of biometric data. As cybercriminals and security professionals become increasingly sophisticated, new tools, tactics, and business models are driving unprecedented levels of specialization within underground communities.

The Russian-speaking cybercriminal underground stands out for its organizational structure: strong collaboration among actors and deep cultural roots, with its own ethical codes, strict vetting processes, and complex reputation systems.

“This is not a simple marketplace but a genuinely structured society of cybercriminals, in which status, trust, and technical excellence determine survival and success,” says Vladimir Kropotov, co-author of the research and Principal Threat Researcher at Trend Micro.

“The Russian-speaking underground has developed a distinctive culture that combines top-level technical skill with strict codes of conduct, reputation-based trust systems, and a level of collaboration comparable to that of legitimate organizations,” added Fyodor Yarochkin, co-author and Principal Threat Researcher at Trend Micro. “It is not just a network of criminals, but a resilient, interconnected community that has adapted to global pressure and continues to influence the evolution of cybercrime.”

The Trend Micro research examines the main criminal activities, including ransomware-as-a-service schemes, phishing campaigns, account brute forcing, and the monetization of stolen Web3 assets. Intelligence gathering services, privacy exploitation, and the convergence of the cyber and physical domains are also examined in detail.

“Geopolitical shifts have rapidly transformed the cybercriminal underground,” Vladimir concludes. “Political conflicts, the rise of hacktivism, and shifting alliances have undermined trust and reshaped forms of collaboration, fostering new ties with other groups, including Chinese-speaking actors. The consequences of these actions are also being felt in the European Union, and they are growing.”

As geopolitical tensions rise and cybercriminals adopt increasingly advanced technologies such as artificial intelligence and Web3, understanding the mechanisms of the Russian-speaking underground is a more crucial advantage than ever before.

Trend Micro’s “The Russian-Speaking Underground” report, the fiftieth in its series of studies of cybercriminal underground markets around the world, begun over 15 years ago, provides essential insights and unparalleled historical context for threat intelligence teams, organizational leaders, law enforcement, and cybersecurity professionals charged with protecting critical infrastructure, corporate assets, and national security.

More information is available at this link

Trend Micro
Trend Micro, a global cybersecurity leader, is committed to making the world a safer place for the exchange of digital information. With over 30 years of experience in security, threat research, and continuous innovation, Trend Micro protects more than 500,000 organizations and millions of individuals across clouds, networks, and devices through its unified cybersecurity platform. The Trend Vision One™ unified cybersecurity platform delivers advanced threat defense techniques and XDR, and integrates with diverse IT ecosystems, including AWS, Microsoft, and Google, enabling organizations to better understand, communicate, and mitigate cyber risk. With 7,000 employees across 65 countries, Trend Micro enables organizations to simplify and secure their connected world. www.trendmicro.com

L'articolo Viaggio nell’underground cybercriminale russo: la prima linea del cybercrime globale proviene da il blog della sicurezza informatica.




Homemade VNA Delivers High-Frequency Performance on a Budget


With vector network analyzers, the commercial offerings seem to come in two flavors: relatively inexpensive with limited capabilities, or full-featured but scarily expensive. There doesn’t seem to be much middle ground, especially if you want something that performs well in the microwave bands.

Unless, of course, you build your own vector network analyzer (VNA). That’s what [Henrik Forsten] did, and we’ve got to say we’re even more impressed by the results than we were with his earlier effort. That version was not without its problems, and fixing them was very much on the list of goals for this build. Keeping the build affordable was also key, which resulted in some design compromises while still meeting [Henrik]’s measurement requirements.

The Bill of Materials includes dual-channel broadband RF mixer chips, high-speed 12-bit ADCs, and a fast FPGA to handle the torrent of data and run the digital signal processing functions. The custom six-layer PCB is on the large side and includes large cutouts for the directional couplers, which use short lengths of stripped coaxial cable lined with ferrite rings. To properly isolate signals between stages, [Henrik] sandwiched the PCB between a two-piece aluminum enclosure. Wisely, he printed a prototype enclosure and lined it with aluminum foil to test for fit and function before committing to milling the final version. He did note some leakage around the SMA connectors, but a few RF gaskets made from scraps of foil and solder braid did the trick.
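The write-up concentrates on the hardware, but for context on what the DSP chain ultimately feeds: a one-port VNA measurement corrects the raw reflection ratio using error terms derived from short, open, and load standards. A generic numerical sketch of that textbook correction follows; it is our illustration, not [Henrik]’s code:

```python
import numpy as np

def solve_error_terms(m_short, m_open, m_load):
    """Derive one-port error terms (directivity e00, source match e11,
    reflection tracking e01) from raw measurements of ideal short (-1),
    open (+1) and load (0) calibration standards."""
    e00 = m_load                     # a perfect load reflects nothing
    a, b = m_open - e00, m_short - e00
    e11 = (a + b) / (a - b)
    e01 = a * (1 - e11)
    return e00, e11, e01

def correct(gamma_m, e00, e11, e01):
    """Apply the one-port error model to a raw measured reflection."""
    d = gamma_m - e00
    return d / (e01 + e11 * d)

# Example with made-up complex measurements of the three standards:
e00, e11, e01 = solve_error_terms(-0.9 + 0.05j, 0.95 - 0.02j, 0.01 + 0.01j)
print(correct(0.4 + 0.1j, e00, e11, e01))
```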

This is a pretty slick build, especially considering he managed to keep the price tag at a very reasonable $300. It’s more expensive than the popular NanoVNA or its clones, but it seems like quite a bargain considering its capabilities.


hackaday.com/2025/04/16/homema…



Streamlining detection engineering in security operation centers


Security operations centers (SOCs) exist to protect organizations from cyberthreats by detecting and responding to attacks in real time. They play a crucial role in preventing security breaches by detecting adversary activity at every stage of an attack, working to minimize damage and enabling an effective response. To accomplish this mission, SOC operations can be broken down into four operating phases: log collection, detection, triage and investigation, and response.

Each of these operating phases has a distinct role to play, and well-defined processes or procedures ensure a seamless handover of findings from one phase to the next. In practice, SOC processes and procedures at each operational phase often require continuous improvement over time.

Assessment observations: Common SOC issues


During our involvement in SOC technical assessments, adversary emulations, and incident response readiness projects across different regions, we evaluated each operating phase separately. Based on our assessments, we observed common challenges, weak practices, and recurring issues across these four key SOC capabilities.

Log collection


There are three main issues we have observed at this stage:

  • Lack of visibility coverage based on the MITRE DETT&CT framework – customers do not maintain a visibility coverage matrix. Instead, they often keep log source data in an Excel or similar spreadsheet that is not easily tracked. This means they have no systematic view of what data they are feeding into the SIEM and which TTPs can be detected in their environment (a machine-readable alternative is sketched after this list). In most cases, maintaining a continuous visibility matrix is also a challenge, because log sources may disappear over time for a variety of reasons: agent termination, changes in log destination settings, device (e.g., firewall) replacement. This only leads to the degradation of the log visibility matrix.
  • Inefficient use of data for correlation – in many cases, relevant data is available to detect threats, but there are no correlation rules in place to leverage it for threat detection.
  • Correlation exists, but lacks the necessary data fields – while some rule sets are properly configured with the right logic to detect threats, the required data fields from log sources are missing, preventing the rules from being triggered. This critical issue can only be detected through a data quality assessment.
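As mentioned in the first item above, a machine-readable visibility matrix can replace the spreadsheet. A minimal sketch of the idea, where source names, health data, and technique mappings are all illustrative:

```python
# Which log sources should be arriving, and which ATT&CK techniques
# each one makes detectable. Mappings here are illustrative only.
expected_sources = {
    "windows_security": ["T1078", "T1110"],  # valid accounts, brute force
    "sysmon":           ["T1059", "T1055"],  # script execution, injection
    "firewall":         ["T1071"],           # C2 over app-layer protocols
}
live_sources = {"windows_security", "firewall"}  # e.g. from a SIEM health query

for source, techniques in expected_sources.items():
    status = "OK" if source in live_sources else "MISSING"
    print(f"{source:<18} {status:<8} covers: {', '.join(techniques)}")
```

Re-running such a check on a schedule surfaces disappearing log sources before the visibility matrix silently degrades.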


Detection


At this stage, we have seen the following issues during assessment procedures:

  • Over-reliance on vendor-provided rules – many customers rely heavily on the default rule sets in their SIEM and only tune them when alerts are triggered. Since the default content is not optimized, it often generates thousands of alerts. This reactive approach leads to excessive alert fatigue, making it difficult for analysts to focus on truly meaningful alerts.
  • Lack of detection alignment with the threat profile – the absence of a well-defined organizational threat profile prevents customers from focusing on the threats that are most likely to target them. Instead, they adopt a scattered approach to detection, like shooting in the dark rather than prioritizing relevant threats.
  • Poor use of threat intelligence feeds – we have encountered cases where endpoint logs do not contain file hash data. The log sources only provide filenames or file paths, but not the actual hash values, making it difficult for the SOC to correlate threat intelligence (TI) feeds that rely on file hashes. As a result, TI feeds are not operational because the required data field is not ingested into the SIEM.
  • Analytics deployment errors – one of the most challenging issues we see is when a well-designed detection rule is deployed incorrectly, causing threat detection to fail despite having the right analytics in place. We have found that there is no structured process for reviewing and validating rule deployments.


Triage and investigation


The most typical issues at this stage are:

  • Lack of a documented triage procedure – analysts often rely on generic, high-level response playbooks sourced from the internet, especially from unreliable sources, which slows or hinders the process of qualifying alerts as potential incidents. Without a structured triage procedure, they spend more time investigating each case instead of quickly assessing and escalating threats.
  • Unattended alerts – we also observed that many alerts were completely ignored by analysts. This likely stems from either a lack of skill in linking multiple alerts into a single incident, or analysts being swamped with high-severity alerts, causing them to overlook other relevant alerts.
  • Difficulty in correlating alerts – as noted in the previous observation, one of the biggest challenges is linking related alerts into a single incident (a naive entity-and-time grouping is sketched after this list). The lack of alert correlation makes it harder to see the full attack pattern, leading to disorganized alert diagnosis.
  • Default use of alert severity – SIEM default rules don’t take into account the context of the target system. Instead, they rely on the default severity in the rule, which is often set randomly or based on an engineer’s opinion without a clear process. This lack of context makes it harder to investigate and properly assess alerts.
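As referenced in the correlation item above, even a naive grouping of alerts by entity and time proximity is a step up from triaging each alert in isolation. A toy sketch (the 30-minute window and the host-only key are arbitrary simplifications):

```python
from collections import defaultdict
from datetime import datetime, timedelta

def group_alerts(alerts, window_minutes=30):
    """Group alerts into candidate incidents: same host, close in time.
    `alerts` is a list of dicts with 'host' and 'time' (datetime) keys."""
    incidents = defaultdict(list)
    for alert in sorted(alerts, key=lambda a: (a["host"], a["time"])):
        bucket = incidents[alert["host"]]
        if bucket and alert["time"] - bucket[-1][-1]["time"] <= timedelta(minutes=window_minutes):
            bucket[-1].append(alert)   # extend the most recent incident
        else:
            bucket.append([alert])     # start a new incident
    return incidents

# Two alerts on the same host 10 minutes apart become one incident:
alerts = [
    {"host": "srv-01", "time": datetime(2025, 4, 16, 14, 0)},
    {"host": "srv-01", "time": datetime(2025, 4, 16, 14, 10)},
    {"host": "wks-07", "time": datetime(2025, 4, 16, 15, 0)},
]
print({h: len(g) for h, g in group_alerts(alerts).items()})
```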


Response


The challenges of the final operating phase are most often derived from the issues encountered in the previous stages.

  • Challenges in incident scoping – as mentioned earlier, the inability to properly correlate alerts leads to a fragmented understanding of attack patterns. This makes it difficult to see the bigger picture, resulting in inefficient incident handling and misjudged response efforts.
  • Increase in unnecessary escalations – this issue is particularly common in MSSP environments, where a lack of understanding of baseline behavior causes analysts to escalate benign cases. Without proper context, normal activities are mistaken for threats, resulting in wasted time and effort.

With these ongoing challenges, chaos will continue in SOC operations. As organizations adopt new security tools such as CASB and container security, both of which generate valuable detection data, and as digital transformation introduces even more technology, security operations will only become more complex, exacerbating these issues.

Taking the right and impactful approach


Enhancing SOC operations requires evaluating each operating phase from an investment perspective, with the detection phase having the greatest impact because it directly affects data quality, threat visibility, incident response efficiency, and the overall effectiveness of the SOC analyst. Investing in detection directly influences all the other operating phases, making it the foundation for improving all operating phases. The detection operating phase must be handled through a dedicated program that ensures log collection is purpose-driven, collecting only the data fields necessary for detection rather than unnecessarily driving up SIEM costs. This focused approach helps define what should be ingested into the SIEM while ensuring meaningful threat visibility.

Strengthening detection reduces false positives and false negatives, improves true positive rates, and enables the identification of attacker activity chains. A documented triage and investigation process streamlines the work of analysts, improving efficiency and reducing response time. Furthermore, effective incident scoping, guided by accurate detection of the cyber kill chain, enables a faster and more precise response. By prioritizing investment in detection and managing it through a structured approach, organizations can significantly improve SOC performance and resilience against evolving threats. This article focuses solely on SIEM-based detection management.

Detection engineering program


Before diving into the program-level approach, we will first present the detection engineering lifecycle that forms the foundation of the proposed program. The image below shows the stages of this lifecycle.

The detection engineering lifecycle shown here is typically followed when building detections, but its implementation often lacks well-defined processes or a dedicated team. A structured program must be put in place to ensure that the SOC’s investment and efforts in detection engineering are used efficiently.

When we talk about a program, it should be built on the following key elements:

  • A dedicated team responsible for driving the program
  • Well-defined processes and procedures to ensure consistency and effectiveness
  • The right tools to integrate with workflows, facilitate output handovers, and enable feedback loops across related processes
  • Meaningful metrics to measure the overall performance of the program.

We will discuss these performance measurement metrics in the final section of the article.

  1. Team supporting detection engineering program

The key idea behind having a dedicated team is to take full control of the detection engineering (DE) lifecycle, from analysis to release, and ensure accountability for the program’s success. In a traditional SOC setup, deployment and release are often handled by SOC engineers. This can lead to deployment errors due to potential differences in the data models used by DE and SOC teams (raw log data vs. SIEM-optimized data), as well as deployment delays due to the SOC team being overloaded with other tasks. This, in turn, can indirectly impact the work of the detection team. However, the one responsibility that does not fall under the DE team is log onboarding. Since this process requires coordination with other teams, it should continue to be managed by SOC engineers to keep the DE team focused on its core objectives.

The DE team should start with at least three key roles:

The size of the team depends on factors related to the program’s objectives. For example, if the goal is to build a certain number of detection rules per month, the number of detection engineers required will vary accordingly. Similarly, if a certain number of rules need to be tested and deployed within a week, the team size must be adjusted to meet that demand.

The Detection Engineering Lead should communicate with SOC leadership to set the right expectations by outlining what goals can realistically be achieved based on the size and capacity of the DE team. A dedicated Detection QA role can be established as the need for testing, deployment, and release of detections grows.

  2. Process and procedures

Well-defined workflows, supported by structured processes and procedures, must be established to streamline detection engineering operations. The following image illustrates the necessary processes and procedures, along with the roles responsible for executing each workflow:

During the qualification process, the Detection Engineering Lead or Detection Engineer may discover that the data source needed to develop a detection is not available. In such cases, they should follow the log management process to request onboarding of the required data before proceeding with detection research and development. The testing process typically checks that the rule works by ensuring that the SIEM triggers an alert based on the required data fields.

Lastly, a validation process that is not part of the detection engineering lifecycle must be incorporated into the detection engineering program to assess its overall effectiveness. Ideally, this validation should be conducted by individuals outside the DE lifecycle or by an external service provider.

Proper planning is required that incorporates threat intelligence and an updated threat profile. In addition, the validation process should generate reports that outline:

  • What is working well
  • Areas that need improvement
  • Detection gaps identified

  3. Tools

An essential element of the DE lifecycle is the use of tools to streamline processes and improve efficiency. Key tools include:

  • Ticketing platform – efficiently manages workflows, tracks progress from ticket creation to closure, and provides time-based metrics for monitoring.
  • Rules repository – a platform for managing detection queries and code that supports Detection-as-Code, a unified rule format such as SIGMA, and software development best practices such as version control and change management (see the sketch after this list).
  • Centralized knowledge base – dedicated space for documenting detection rules, descriptions, research notes, and other relevant information. See the best practices section below for more details on centralized documentation.
  • Communication platform – facilitates collaboration among DE team members, integrates with the ticketing system, and provides real-time notification of ticket status or other issues.
  • Lab environment – a virtualized setup including a SIEM, relevant data sources, and tools to simulate attacks; its core function is to test detection rules prior to release.
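To make the rules-repository idea concrete, here is a minimal sketch of validating a SIGMA-style rule before it is committed. It assumes Python with PyYAML installed; the rule content itself is a hypothetical example, not a production detection:

    # Minimal sketch: validate a SIGMA-style rule before committing it to
    # the rules repository. Requires PyYAML (pip install pyyaml).
    # The rule below is a hypothetical example, not a production detection.
    import textwrap
    import yaml

    RULE = textwrap.dedent("""\
        title: Suspicious LNK File Dropped in Download Folder
        id: 6f2d1a3e-1111-4b6e-9c1d-000000000001
        status: experimental
        logsource:
            product: windows
            category: file_event
        detection:
            selection:
                TargetFilename|contains: '\\Downloads\\'
                TargetFilename|endswith: '.lnk'
            condition: selection
        level: medium
        tags:
            - attack.initial_access
    """)

    REQUIRED_FIELDS = {"title", "id", "status", "logsource", "detection", "level"}

    def validate_rule(raw: str) -> dict:
        """Parse a rule and fail fast if mandatory fields are missing."""
        rule = yaml.safe_load(raw)
        missing = REQUIRED_FIELDS - rule.keys()
        if missing:
            raise ValueError(f"missing required fields: {sorted(missing)}")
        return rule

    print(validate_rule(RULE)["title"])

A check like this can run as a pre-commit hook or CI step, so malformed rules never reach deployment.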


Best practices in detection engineering


Several best practices can significantly enhance your detection engineering program. Based on our experience, implementing these best practices will help you effectively manage your rule set while providing valuable support to security analysts.

  1. Rule naming convention

When developing analytics or a rule, adhering to a proper naming convention provides a concrete framework. A rule name like “Suspicious file drop detected” may confuse the analyst and force them to dig deeper to understand the context of the alert that was triggered. It would be better to give a rule a name that provides complete context at first glance, such as “Initial Access | Suspicious file drop detected in user directory | Windows – Medium”. This example makes it easy for the analyst to understand:

  • At what stage of the attack the rule is triggered. In this case, it is Initial Access as per the MITRE ATT&CK / kill chain model.
  • Where exactly the file was dropped. In this case, the user directory was the target, which suggests user interaction was involved, another sign that the attack was likely detected at an early stage.
  • What platform was attacked. In this case, it is Windows, which can help the analyst to quickly find the machine that triggered the alert.
  • Lastly, an alert priority can be set, which helps the analyst to prioritize accordingly. For this to work properly, SIEM’s priority levels should be aligned with the rule priorities defined by the detection engineering team. For example, a high priority in SIEM should correspond to a high-priority alert.

A consistent rule naming structure helps the detection engineering team easily search, sort, and manage existing rules, and avoid creating duplicates under different names.

The naming structure doesn’t necessarily have to look like the example above. The whole idea of this best practice is to find a good naming convention that not only helps the SOC analyst, but also makes managing detection rules easier and more convenient.

For example, while the rule name “Audit Log Deletion” gives a basic idea of what is happening, a more effective name would be:
[High] – Audit Log Deletion in Internal Server Farm – Linux – Defense Evasion (T1070.002).
This provides better context, making it much more useful to the SOC team, and gives the DE team more keywords for finding this particular rule or filtering rules when necessary.
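A naming convention like this is easier to enforce in code than by hand. Below is a minimal sketch; the field order and separators are assumptions taken from the examples above, so adapt them to whatever convention you settle on:

    # Minimal sketch: assemble rule names from structured fields so the
    # naming convention is enforced programmatically, not by hand.
    from dataclasses import dataclass

    @dataclass
    class RuleName:
        tactic: str    # ATT&CK tactic / kill-chain stage, e.g. "Initial Access"
        summary: str   # short human-readable description
        platform: str  # affected platform, e.g. "Windows"
        severity: str  # alert priority, aligned with the SIEM's levels

        def render(self) -> str:
            return f"{self.tactic} | {self.summary} | {self.platform} – {self.severity}"

    name = RuleName("Initial Access",
                    "Suspicious file drop detected in user directory",
                    "Windows", "Medium")
    print(name.render())
    # Initial Access | Suspicious file drop detected in user directory | Windows – Medium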

  2. Centralized knowledge base

Once a rule is created after thorough research, the detection team should manage it in a centralized platform (a knowledge base). This platform should not only store the rule name and logic, but also other key details. Important elements to consider:

  • Rule name/ID/description – rule name, unique ID, and a brief description of the rule.
  • Rule type/status – the rule type (static, correlated, IoC-based, etc.) and its status (experimental, stable, retired, etc.).
  • Severity and confidence – the seriousness of the threat triggering the rule and the likelihood of a true positive.
  • Research notes – public links, threat reports, and other sources used as a basis for creating the rule.
  • Data components used to detect the behavior – the list of data sources and fields used to detect the activity.
  • Triage steps – steps to investigate the alert.
  • False positives – known situations in which the alert may be a false positive.
  • Tags (CVE, actors, malware, etc.) – additional context if the detection is linked to a behavior or artifact specific to an APT group or malware family.

Make sure this centralized documentation is accessible to all SOC analysts.
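As a minimal sketch of what one such record could look like in code (the field names mirror the list above; the storage backend, whether a wiki, a database, or a Git repository, is out of scope):

    # Minimal sketch: a structured knowledge-base record for one detection
    # rule. Field names mirror the elements listed above.
    from dataclasses import dataclass, field

    @dataclass
    class DetectionRecord:
        rule_id: str
        name: str
        description: str
        rule_type: str   # static, correlated, IoC-based, ...
        status: str      # experimental, stable, retired, ...
        severity: str
        confidence: str
        research_notes: list[str] = field(default_factory=list)   # links, reports
        data_components: list[str] = field(default_factory=list)  # sources, fields
        triage_steps: list[str] = field(default_factory=list)
        false_positives: list[str] = field(default_factory=list)
        tags: list[str] = field(default_factory=list)  # CVE, actor, malware, ...

Keeping the record structured (rather than free text) also makes the metrics discussed later, such as coverage and criticality distribution, trivial to compute.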

  3. Contextual tagging

As covered in the previous best practice, tags provide great value in understanding the attack chain, which is why we highlight them as a separate best practice.

The tags attached to a detection rule are the result of the research done on the attack behavior while writing the rule. They help the analyst gain context at the moment the rule is triggered. In the example above, the analyst may suspect a potential initial access attempt related to QakBot or Black Basta ransomware. This also helps in reporting to security leadership that the SOC team successfully detected the initial ransomware behavior and was able to thwart the attack in the early stages of the kill chain.

  4. Triage steps

A good practice is to include triage (or investigation) steps in the detection rule documentation. Since the DE team has spent a lot of time understanding the threat, it is very important to document the precursors and the possible next steps an attacker can take. The SOC analyst can quickly review these and qualify the incident with confidence.

For the rule from the previous section, “Initial Access | Suspicious LNK files dropped in download folder | Windows – Medium”, the triage procedure is shown below.

MITRE has a project called the Technique Inference Engine (TIE), which provides a model for inferring which other techniques an attacker is likely to use based on observed adversary behavior. This tool can be useful for both DE and SOC teams: by analyzing the attacker's likely path, organizations can improve alert correlation and enhance the scoping of incidents and threats.
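The snippet below is only a toy illustration of the inference idea, not TIE's actual model or API: it ranks candidate techniques using a hypothetical co-occurrence table that, in practice, would be built from past incidents or threat intelligence:

    # Toy illustration of technique inference (NOT MITRE TIE's real model
    # or API): rank techniques that co-occur with observed ones, using a
    # hypothetical co-occurrence table.
    CO_OCCURRENCE = {
        "T1204.002": {"T1059.001": 0.6, "T1547.001": 0.4},
        "T1059.001": {"T1105": 0.5, "T1071.001": 0.3},
    }

    def likely_next_techniques(observed: set[str]) -> list[tuple[str, float]]:
        """Score unobserved techniques by their strongest link to observed ones."""
        scores: dict[str, float] = {}
        for tech in observed:
            for candidate, weight in CO_OCCURRENCE.get(tech, {}).items():
                if candidate not in observed:
                    scores[candidate] = max(scores.get(candidate, 0.0), weight)
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    print(likely_next_techniques({"T1204.002", "T1059.001"}))
    # [('T1105', 0.5), ('T1547.001', 0.4), ('T1071.001', 0.3)]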

  5. Baselining

Understanding the infrastructure and its baseline operations is a must, as it helps reduce the false positive rate. The detection engineering team must learn the prevention policies in place (to de-prioritize detections for threats that are already remediated), the technologies deployed in the infrastructure, the network protocols in use, and what user behavior looks like under normal circumstances.

For example, to detect the T1480.002 (Execution Guardrails: Mutual Exclusion) sub-technique, MITRE recommends monitoring the “file creation” data component. According to the MITRE Data Sources framework, data components are possible actions on data objects, and/or statuses or parameters of data objects, that may be relevant for threat detection. We discussed them in more detail in our detection prioritization article.

MITRE’s detection recommendation for T1480.002 sub-technique

A simple rule for detecting such activity is to monitor lock file creation events in the /var/run folder, which stores temporary runtime data for running services. However, if you have done the baselining and found that the environment uses containers that also create lock files to manage runtime operations, you can filter out container-linked events to avoid triggering false positive alerts. This filter is easy to apply, and overall detection can be improved by baselining the infrastructure you are monitoring.
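A minimal sketch of that filtering logic follows; the event field names and the set of container runtime processes are assumptions about your telemetry, established during baselining:

    # Minimal sketch: suppress container-linked lock-file events in /var/run
    # after baselining. Field names (path, process) are assumed; map them to
    # your actual telemetry schema.
    BASELINED_CONTAINER_PROCESSES = {"containerd", "dockerd", "runc"}

    def should_alert(event: dict) -> bool:
        path = event.get("path", "")
        process = event.get("process", "")
        if not (path.startswith("/var/run/") and path.endswith(".lock")):
            return False
        # Baseline finding: container runtimes legitimately create lock files here.
        return process not in BASELINED_CONTAINER_PROCESSES

    print(should_alert({"path": "/var/run/x.lock", "process": "bash"}))     # True
    print(should_alert({"path": "/var/run/d.lock", "process": "dockerd"}))  # False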

  6. Finding the narrow corridors

Some indicators, such as file hashes or software tools, are easy for an attacker to change, while others are much harder to replace. Detections based on these “narrow corridors” tend to have high true positive rates. To pursue them, detection should focus primarily on behavioral indicators, ensuring that attackers cannot easily evade detection by simply changing their tools. Priority should be given to behavior-based detection over tool-specific, software-dependent, or IoC-driven approaches. This aligns with the Pyramid of Pain model, which emphasizes detecting adversaries based on their tactics, techniques, and procedures (TTPs) rather than easily replaceable indicators. By prioritizing common TTPs, we can effectively identify an adversary's modus operandi, making detection more resilient and impactful.

  7. Universal rules

When planning a detection program from scratch, it is important not to ignore the universal threat detection rules that are usually available in a SIEM by default. Detection engineers should operationalize them as soon as possible and tune them based on feedback from SOC analysts and on what they have learned about the organization's infrastructure during baselining.

Universal rules generally include malicious behavior associated with applications, databases, authentication anomalies, unusual remote access behavior, and policy violation rules (typically to monitor compliance requirements).

Some examples include:

  • Windows firewall settings modification detected
  • Use of unapproved remote access tools
  • Bulk failed database login attempts (sketched below)
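As a minimal sketch of the last example, assuming a simple list of normalized events (the field names and the threshold are assumptions; a real SIEM rule would also apply a time window, omitted here for brevity):

    # Minimal sketch: threshold-style universal rule for bulk failed
    # database login attempts. Field names and threshold are assumptions;
    # time windowing is omitted for brevity.
    from collections import defaultdict

    THRESHOLD = 20  # failed logins per source host

    def bulk_failed_logins(events: list[dict]) -> list[str]:
        """Return source hosts that exceed the failed-login threshold."""
        failures: dict[str, int] = defaultdict(int)
        for e in events:
            if e.get("event_type") == "db_login" and e.get("outcome") == "failure":
                failures[e.get("src_host", "unknown")] += 1
        return [host for host, count in failures.items() if count >= THRESHOLD]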


Performance measurement


Every investment needs to be justified with measurable outcomes that demonstrate its value. That is why communicating the value of a detection engineering program requires the use of effective and actionable metrics that demonstrate impact and alignment with business objectives. These metrics can be divided into two categories: program-level metrics and technical-level metrics. Program-level metrics signal to security leadership that the program is well aligned with the company’s security objectives. Technical metrics, on the other hand, focus on how operational work is being carried out to maximize the detection engineering team’s operational efficiency. By measuring both program-level metrics and technical-level metrics, security leaders can clearly show how the detection engineering program supports organizational resilience while ensuring operational excellence.

Designing effective program-level metrics requires revisiting the core purpose for initiating the program. This approach helps identify metrics that clearly communicate success to security leadership. Three metrics can be particularly effective for measuring success at the program level.

  1. Time to Detect (TTD) – this metric is calculated as the time elapsed from the moment an attacker's initial activity is observed until the moment it is formally detected by an analyst. Some SOCs treat the time the alert fires in the SIEM as the detection time, but on its own that is not an actionable metric. The time at which the alert is qualified as a potential incident is the better reference point for detection time.

Although the initial detection occurs at t1 (the moment the alert is triggered), a series of events must be analyzed before the incident can be qualified, which is why t3 (the moment the alert is qualified as a potential incident) is the right point for measuring detection. Additional metrics such as time to triage (TTT), which establishes how long it takes to qualify the incident, and time to investigate (TTI), which describes how long it takes to investigate the qualified incident, can also come in handy.

Time to detect compared to time to triage and time to investigate metrics
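A minimal sketch of deriving these three metrics from timestamps; the timestamp names are assumptions about what a ticketing platform or SIEM records for each alert:

    # Minimal sketch: derive TTD, TTT and TTI from recorded timestamps.
    from datetime import datetime

    def hours(start: datetime, end: datetime) -> float:
        return (end - start).total_seconds() / 3600

    activity_observed  = datetime(2025, 4, 16, 9, 0)    # attacker activity begins
    alert_triggered    = datetime(2025, 4, 16, 9, 5)    # t1: SIEM fires the alert
    incident_qualified = datetime(2025, 4, 16, 10, 30)  # t3: analyst qualifies it
    investigation_done = datetime(2025, 4, 16, 14, 0)

    ttd = hours(activity_observed, incident_qualified)   # time to detect
    ttt = hours(alert_triggered, incident_qualified)     # time to triage
    tti = hours(incident_qualified, investigation_done)  # time to investigate
    print(f"TTD={ttd:.1f}h TTT={ttt:.1f}h TTI={tti:.1f}h")
    # TTD=1.5h TTT=1.4h TTI=3.5h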


  2. Signal-to-Noise Ratio (SNR) – this metric indicates the effectiveness of detection rules by measuring the balance between relevant and irrelevant information. It compares the number of true positive detections (correct alerts for real threats) to the number of false positives (incorrect or misleading alerts).

The ratio can be written as SNR = true positives / false positives, where:

  • True positives – instances where a real threat is correctly detected
  • False positives – incorrect alerts that do not represent real threats

A high SNR indicates that the system is generating more meaningful alerts (signal) compared to noise (false positives), thereby enhancing the efficiency of security operations by reducing alert fatigue and focusing analysts’ attention on genuine threats. Improving SNR is crucial to maximizing the performance and reliability of a detection program. SNR directly impacts the amount of SOC analyst effort spent on false positives, which in turn influences alert fatigue and the risk of professional burnout. Therefore, it is a very important metric to consider.
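In code, the ratio is trivial to compute per rule or per reporting period; a minimal sketch:

    # Minimal sketch: SNR as the ratio of true positives to false positives.
    def snr(true_positives: int, false_positives: int) -> float:
        if false_positives == 0:
            return float("inf")  # no noise at all in the period
        return true_positives / false_positives

    print(snr(42, 7))  # 6.0 – six genuine alerts for every false one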

  3. Threat Profile Alignment (TPA) – this metric evaluates how well detections are aligned with known adversarial tactics, techniques, and procedures (TTPs), by determining how many of the identified TTPs are adequately covered by unique detections (unique data components). A sketch of the calculation follows the definitions below.

A simple formulation is TPA = (TTPs covered by at least three unique detections) / (total TTPs identified), where:

  • Total TTPs identified – the number of known adversarial techniques relevant to the organization's threat model, typically derived from cyber threat intelligence threat-profiling efforts
  • TTPs covered with at least three unique detections (where possible) – how many of the identified TTPs are covered by at least three distinct detection mechanisms. Having multiple detections for a given TTP enhances detection confidence, ensuring that if one detection fails or is bypassed, others can still identify the activity.
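A minimal sketch of the calculation; the coverage mapping (unique detections per profiled TTP) is a hypothetical input that would come from your coverage tracking:

    # Minimal sketch: TPA as the share of profiled TTPs covered by at
    # least three unique detections. The mapping below is hypothetical.
    coverage = {
        "T1059.001": 4,  # unique detections per TTP in the threat profile
        "T1480.002": 3,
        "T1070.002": 1,
        "T1204.002": 0,
    }

    MIN_UNIQUE_DETECTIONS = 3
    covered = sum(1 for n in coverage.values() if n >= MIN_UNIQUE_DETECTIONS)
    tpa = covered / len(coverage)
    print(f"TPA = {tpa:.0%}")  # TPA = 50%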

Team efforts supporting the detection engineering program must also be measured to demonstrate progress. These efforts are reflected in technical-level metrics, and monitoring them helps justify team scalability and address productivity challenges. Key metrics are outlined below:

  1. Time to Qualify Detection (TTQD) – this metric measures the time required to analyze and validate the relevance of a detection for further processing. The Detection Engineering Lead assesses the importance of the detection and prioritizes it accordingly. The metric equals the time that has elapsed from when a ticket is raised to create a detection to when it is shortlisted for further research and implementation.

  2. Time to Create Detection (TTCD) – this tracks the amount of time required to design, develop and deploy a new detection rule. It highlights the agility of detection engineering processes in responding to evolving threats.

  3. Detection Backlog – the backlog refers to the number of pending detection rules awaiting review or consideration for detection improvement. A growing backlog might indicate resource constraints or inefficiencies.

  4. Distribution of Rules Criticality (High, Medium, Low) – this metric shows the proportion of detection rules categorized by their criticality level. It helps in understanding the balance of focus between high-risk and lower-risk detections.

  5. Detection Coverage (MITRE) – detection coverage based on MITRE ATT&CK indicates how well the detection rules cover the various tactics, techniques, and procedures (TTPs) in the MITRE ATT&CK framework, and helps identify coverage gaps in the defense strategy. Tracking the number of unique detections that cover each specific technique is highly recommended, as it provides visibility into threat profile alignment – a program-level metric. If unique detections are not being built to close detection gaps and coverage is not increasing over time, it indicates an issue in the detection qualification process.

  6. Share of Rules Never Triggered – this metric tracks the percentage of detection rules that have never been triggered since their deployment. It may indicate inefficiencies, such as overly specific or poorly implemented rules, and provides input for rule optimization (computing this and the criticality distribution from a rule inventory is sketched below).
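As promised above, a minimal sketch of computing two of these metrics from a rule inventory export; the record fields are assumptions about what your rules repository can produce:

    # Minimal sketch: criticality distribution and share of rules never
    # triggered, computed from a hypothetical rule inventory export.
    from collections import Counter

    rules = [
        {"name": "Audit Log Deletion",    "criticality": "High",   "times_triggered": 12},
        {"name": "LNK Drop in Downloads", "criticality": "Medium", "times_triggered": 0},
        {"name": "Bulk Failed DB Logins", "criticality": "Low",    "times_triggered": 3},
    ]

    criticality_distribution = Counter(r["criticality"] for r in rules)
    never_triggered = sum(1 for r in rules if r["times_triggered"] == 0)
    share_never_triggered = never_triggered / len(rules)

    print(dict(criticality_distribution))  # {'High': 1, 'Medium': 1, 'Low': 1}
    print(f"{share_never_triggered:.0%}")  # 33%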

There are other relevant metrics, such as the proportion of behavior-based rules in the total set. Many more metrics can be derived from a general understanding of the detection engineering process and its purpose to support the DE program. However, program managers should focus on selecting metrics that are easy to measure and can be calculated automatically by available tools, minimizing the need for manual effort. Avoid using an excessive number of metrics, as this can lead to a focus on measurement only. Instead, prioritize a few meaningful metrics that provide valuable insight into the program’s progress and efforts. Choose wisely!


securelist.com/streamlining-de…



NIS2, the second implementation phase begins: what to know about the ACN determinations


@Informatica (Italy e non Italy 😁)
With the publication of three crucial determinations, ACN has effectively launched the second implementation and operational phase of NIS2. A qualitative leap in the approach to national cyber security that introduces a formal system of


Cybersecurity & cyberwarfare reshared this.


🚀 25% OFF THE NIS2 COURSE!

For a few days only, get 25% off using the coupon: academy.redhotcyber.com/course…

📌 Flexible access: e-learning content available at any time
📌 Course duration: 9 hours across 5 modules and 26 lessons
📌 Level: beginner
📌 Course prerequisites: basic knowledge of risk assessment
📌 Course support: instructor available via e-mail
📌 Certification issued: Network and Information Systems Fundamental (NISF)
📌 Course price: 700 euros, discounted to 525 euros (25% promo)

Write to us at academy@redhotcyber.com or on WhatsApp at 3791638765

#redhotcyber #hacking #cti #ai #online #it #cybercrime #cybersecurity #threat #OnlineCourses #Elearning #DigitalLearning #RemoteCourses #VirtualClasses #CourseOfTheDay #LearnOnline #OnlineTraining #Webinars #academy #corso #formazioneonline



This is how restricted microchips are smuggled


@Informatica (Italy e non Italy 😁)
A network of parallel supply channels circumvents Western sanctions and brings semiconductors into China and Russia. Here is how it works.
The article “Così si contrabbandano i microchip sotto restrizione” comes from Guerre di Rete.

https://www.guerredirete.it/cosi-si-contrabbandano-i-microchip-sotto-restrizione/



Google rushes to patch: two extremely dangerous bugs discovered in the Chrome browser


Google has rolled out an emergency update for the Chrome browser following the discovery of two serious security flaws. The newly patched vulnerabilities could have allowed cybercriminals to steal sensitive information and compromise users' devices, gaining unauthorized access to their systems.

The vulnerabilities affect all users running outdated versions of Google Chrome on desktop platforms, including individuals, companies, and government agencies that use Chrome for web browsing and data management.

The two security bugs are tracked as CVE-2025-3619 and CVE-2025-3620 and affect Chrome versions prior to 135.0.7049.95/.96 for Windows and Mac and 135.0.7049.95 for Linux. The more severe of the two, CVE-2025-3619, is a heap buffer overflow in Chrome's Codecs component. This vulnerability can allow attackers to execute arbitrary code by exploiting the way Chrome processes certain media files, potentially compromising the entire system and leading to data theft.

🚨 Threat Alert: Google Chrome Vulnerabilities CVE-2025-3619 and CVE-2025-3620

📅 Date: 2025-04-16

📌 Attribution: Elias Hohl (CVE-2025-3619), @retsew0x01 (CVE-2025-3620)

📝 Summary:
Google released Chrome version 135.0.7049.95/.96 to patch two high-impact vulnerabilities:…
— Syed Aquib (@syedaquib77) April 16, 2025

The second, CVE-2025-3620, is a use-after-free flaw in the USB component, which could also be exploited to run malicious code or gain unauthorized access to the system. Security experts warn that these vulnerabilities are particularly dangerous because they can be exploited remotely: it is enough for the user to visit a malicious website or interact with compromised content.

Once the flaws are exploited, attackers could steal passwords, financial information, and other sensitive data stored in the browser, or even take control of the affected device. The update will be rolled out globally over the coming days and weeks. Users who store passwords, credit card details, or personal information in Chrome are particularly exposed to identity theft and fraud if the browser is not updated promptly.

L’azienda ha temporaneamente limitato l’accesso alle informazioni dettagliate sui bug per proteggere gli utenti durante il rilascio dell’aggiornamento. Google attribuisce il merito della segnalazione delle vulnerabilità ai ricercatori di sicurezza esterni Elias Hohl e @retsew0x01, sottolineando l’importanza della collaborazione per mantenere la sicurezza del browser.

The article “Google corre ai ripari: scoperti due bug pericolosissimi sul browser Chrome” comes from il blog della sicurezza informatica.



Cybersecurity & cyberwarfare reshared this.


Cyber Threats Against #Energy Sector Surge as Global Tensions Mount
securityaffairs.com/176591/hac…
#securityaffairs #hacking


Binner Makes Workshop Parts Organization Easy


We’ve all had times when we knew we had some part but had to go searching all over for it because it wasn’t where we thought we’d put it. Organizing the numerous components, parts, and supplies that go into your projects can be a daunting task, especially if you use the same type of part at different times for different projects. It helps to have a framework to keep track of all the small details. Binner is an open source project that aims to let you easily maintain a database that can be customized to your use.

In a recent video for DigiKey, [Byte Sized Engineer] used Binner to track the locations of his components and parts in his freshly organized workshop. Binner already has the ability to read the labels used by well-known electronics suppliers via a barcode scanner, and uses that information to populate your inventory. It even grabs quantities and links in a datasheet for your newly added part. The barcode scanner can also be used to retrieve the contents of a location, so with a single scan Binner can bring up everything residing at that location.

Binner can be run locally, so you needn’t worry that an internet outage will make the database you put so much effort into building inaccessible. Another cool feature is label printing: you can customize the fields to display the values you care about.

The project already has future plans to tie into a “smart bin” system to light up the location of your component — a clever feature we’ve seen implemented in previous setups.

youtube.com/embed/ymEuw_RdUzQ?…


hackaday.com/2025/04/16/binner…


Cybersecurity & cyberwarfare reshared this.


A turning point in AI camisanicalzolari.it/una-svolt…

Cybersecurity & cyberwarfare reshared this.


Government contractor #Conduent disclosed a data breach
securityaffairs.com/176581/dat…
#securityaffairs #hacking