
A phishing attempt against PagoPA? Here's how I got the malicious site taken down in 3 hours


Thanks to our community, I recently became aware of a phishing attempt against PagoPA and decided to do two things. The first was to take action personally to damage the campaign and its authors. The second was to write this article to share what I did with the community, hoping to encourage more people to act by explaining the strategy and methodology I adopted.

As we can see, the email is very light on content, referring only to an unspecified “ticket”. This suggests a twofold strategy on the attacker's part: on the one hand, the generic content tries to reach a much wider audience; on the other, the email is so bare that it may push the average user to click the “View the ticket” button. I clicked their URL too but, as we will see shortly, that turned out to be a bad experience for the attacker.

Getting to the heart of the attack


Clicking the link (structured as username@domain) redirects you to a fake blog on Blogspot, which serves only as a second redirect to the actual phishing site:

Here we discover that our “ticket” is nothing more than a fake fine, and that the phishing campaign is therefore aimed at PagoPA, an organization we have covered before and one that has recently seen an increase in attacks.

As the image shows, this is classic phishing designed to push the user into paying quickly, threatening an increase in the amount demanded if payment is not made within 20 hours. As we well know, creating urgency is typical of phishing campaigns, which aim to maximize results with minimal effort.

At this point I had two options in front of me: ignore the campaign or take action against it. I chose the latter, and I will now explain what I did and how, getting the site shut down less than 3 hours after I first learned of it.


Strategy adopted and services used


The first thing to do when you come across a suspicious URL is to analyze it with VirusTotal, and this is the result of my first scan:

As you can see, at the time of my first scan the most recent analysis had been performed 22 hours earlier, and only one vendor flagged the URL as “Phishing”.

A result far too low to stop the campaign from spreading, to properly mitigate the risk and, above all, to act on the weakest link in the chain: the human factor.
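
For readers who want to script this first step instead of using the web interface, here is a minimal sketch of submitting a URL to VirusTotal through its public v3 API. This is an illustration of the API only, not part of the actions described in this article; it assumes libcurl and an API key exported in the VT_API_KEY environment variable:

/* vt_submit.c - minimal sketch: send a URL to VirusTotal for analysis.
 * Build: gcc vt_submit.c -lcurl -o vt_submit
 */
#include <stdio.h>
#include <stdlib.h>
#include <curl/curl.h>

int main(int argc, char *argv[]) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s <suspicious-url>\n", argv[0]);
        return 1;
    }
    const char *key = getenv("VT_API_KEY");
    if (!key) {
        fprintf(stderr, "set VT_API_KEY first\n");
        return 1;
    }

    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    /* the API key travels in the x-apikey header */
    char header[256];
    snprintf(header, sizeof(header), "x-apikey: %s", key);
    struct curl_slist *headers = curl_slist_append(NULL, header);

    /* the URL to be scanned is sent as a form field named "url" */
    char *encoded = curl_easy_escape(curl, argv[1], 0);
    char body[4096];
    snprintf(body, sizeof(body), "url=%s", encoded);

    curl_easy_setopt(curl, CURLOPT_URL, "https://www.virustotal.com/api/v3/urls");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);

    CURLcode res = curl_easy_perform(curl);  /* the JSON response goes to stdout */
    if (res != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

    curl_free(encoded);
    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    return res == CURLE_OK ? 0 : 1;
}

The JSON response contains an analysis identifier that can then be polled for the individual vendor verdicts.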

At that point all that was left to do was act, so I reported the URL to the dedicated Google Safe Browsing service, which blocked it after about half an hour:

Now any user on Chrome or a Chromium-based browser who visits this site is shown a warning that should discourage them from proceeding.
Another pass on VirusTotal and Google Safe Browsing now shows up correctly
The second vendor I chose to report the phishing resource to was Netcraft; however, I must admit their reply surprised me, as they claimed not to have detected any threats:

Netcraft's reply stating that no threats were detected on the site and that, for the average user, the site is therefore safe. As you can see from the screenshot, I opened a dispute about the classification; at the time of writing, the dispute has not yet been resolved (but, as we will see later on, by now it would be completely pointless)
As a third option I used Cloudflare's URL Scanner service, where the site was initially unclassified; after my report I obtained this result:
The site correctly classified as phishing
Next, I used Fortinet's dedicated service, obtaining this result:
The site's category successfully updated to phishing
I also used other services, which you can find listed in full here

The latest VirusTotal detection for the phishing site under examination

Conclusions and final result


I based my entire course of action on a few considerations:

  • Phishing is a type of attack with an exclusively financial motivation, low cost and low risk
  • The weak link in the chain is the human factor: the only way a phishing attack can succeed is if people click the link in the email
  • To stop people from clicking the link, the link must no longer be active and must therefore be rendered harmless


The phishing site has been successfully taken down
Now imagine a type of attack based entirely on the human factor and on timing being hit precisely on those two factors; after a while it would probably become almost useless…

Have you pictured it?

Now imagine this result achieved by a group of skilled professionals who, as soon as they identify a phishing campaign against organizations in their own country, take action as described in this article: the potential impact, with minimal effort, would be enormous.

With this short article I have tried to show how important it is to move from ignoring the threat to countering it, because a threat aimed at an indiscriminate target will inevitably claim some victims.

Until we take action, these campaigns will keep growing rather than shrinking, because phishing is very easy to carry out, cheap, and it delivers results.

Perhaps the time has come to reverse the trend and think about a protocol for reporting the phishing threats we come across, so as to nip them in the bud and strike at the two factors phishing relies on: the human factor and the financial motivation.

The article A phishing attempt against PagoPA? Here's how I got the malicious site taken down in 3 hours originally appeared on il blog della sicurezza informatica.


The (Data) Plot Thickens


You’ve generated a ton of data. How do you analyze it and present it? Sure, you can use a spreadsheet. Or break out some programming tools. Or try LabPlot. Sure, it is sort of like a spreadsheet. But it does more. It has object management features, worksheets like a Jupyter notebook, and a software development kit, in case it doesn’t do what you want out of the box.

The program is made to deal with very large data sets. There are tons of output options, including the usual line plots, histograms, and more exotic things like Q-Q plots. You can have hierarchies of spreadsheets (for example, a child spreadsheet can compute statistics about a parent spreadsheet). There are tons of regression analysis tools, likelihood estimation, and numerical integration and differentiation built in.

Fourier transforms and filters? Of course. The title graphic shows the program pulling SOS out of the noise using signal processing techniques. It also works as a front end for programs ranging from Python and Julia, to Scilab and Octave, to name a few. If you insist, it can read Jupyter projects, too. A lot of features? That’s not even a start. For example, you can input an image file of a plot and extract data from it. It is an impressive piece of software.

A good way to get the flavor of it is to watch one of the many videos on the YouTube channel (you can see one below). Or, since you can download it for Windows, Mac, Linux, FreeBSD, or Haiku, just grab it and try it out.

If you’ve been putting off Jupyter notebooks, this might be your excuse to skip them. If you think spreadsheets are just fine for processing signals and other big data sets, you aren’t wrong. But it sure is hard.

youtube.com/embed/Ngf1g3S5C0A?…


hackaday.com/2025/08/28/the-da…


Microsoft Teams down: opening embedded Office documents blocked


A black Thursday for millions of Microsoft Teams users around the world. A key feature of the collaboration platform, opening embedded Office documents, suddenly broke down, causing frustration and slowdowns in companies and organizations that rely on the service every day.


The heart of collaboration grinds to a halt


Teams was created with a clear goal: to provide a single, integrated environment where chats, channels and documents come together to make work faster and more collaborative.

But today, opening a Word, Excel or PowerPoint file directly from Teams turned into mission impossible: endless loading screens, cryptic errors, blank windows. A broken workflow that forces users to look for alternative routes to get even the simplest tasks done.

Imagine having to update an Excel sheet during a meeting, or going through a report in real time with colleagues: what normally takes a single click now becomes an obstacle that slows down entire teams.

Microsoft confirms: it is a global incident


The Redmond company has officially acknowledged the problem, publishing an advisory on the Microsoft 365 Service Health Dashboard with ID TM1143347.

According to the advisory, Microsoft engineers are already analyzing diagnostic data to pinpoint the root cause of the failure and restore the functionality as soon as possible.

In the meantime, the user community has stepped in with temporary workarounds: opening documents directly from the Office apps or from OneDrive/SharePoint, thus bypassing the malfunction inside Teams.

Concrete impact on businesses


While for an individual user this means a few lost minutes, for companies that use Teams as their central collaboration hub the problem has a direct impact on productivity.

Slowed-down meetings, postponed decisions, inaccessible presentations: the disruption hits the very core of modern work, which is built on instant sharing and seamless access to information.


The lesson behind the outage


Incidents like this remind us how fragile our digital daily life is and how dependent it is on centralized ecosystems. When one piece breaks, the whole machine slows down.

That is why it becomes essential not only to rely on the cloud, but also to plan continuity procedures, operational alternatives and employee training in the use of parallel tools.

Today it was Teams' turn; tomorrow it could be another critical service. The only certainty is that digital infrastructure, however indispensable, remains a delicate balance, ready to crack at a single bug.

The article Microsoft Teams down: opening embedded Office documents blocked originally appeared on il blog della sicurezza informatica.


Acoustic Coupling Like it’s 1985


Before the days of mobile broadband, and before broadband itself even, there was a time when Internet access was provided by phone lines. To get onto a BBS or chat on ICQ required dialing a phone number and acoustically coupling a computer to the phone system. The digital data transmitted as audio didn’t have a lot of bandwidth by today’s standards but it was revolutionary for the time. [Nino] is taking us back to that era by using a serial modem at his house and a device that can communicate to it through any phone, including a public pay phone.

As anyone in the present day can imagine, a huge challenge of this project wasn’t technical. Simply finding a working public phone in an era of smartphones was a major hurdle, and at one point involved accidentally upsetting local drug dealers. Eventually [Nino] finds a working pay phone that takes more than one type of coin and isn’t in a loud place, so he can duct tape the receiver to his homebrew modem and connect back to the computer at his house over the phone line like it’s 1994 again.

Of course with an analog connection like this on old, public hardware there were bound to be a few other issues as well. There were some quirks with the modems, including them not hanging up properly and not processing commands quickly enough. [Nino] surmises that something like this hasn’t been done in 20 years, and while this might be true for pay phones, we have seen other projects that use VoIP systems at desk phones to accomplish a similar task.

youtube.com/embed/1h9UcyUPYJs?…


hackaday.com/2025/08/27/acoust…


Pascal? On my Arduino? It’s More Likely Than You Think


Screenshot of AVRpascal

The Arduino ecosystem is an amazing learning tool, but even those of us who love it admit that the simplified C Arduino uses isn’t the ideal teaching language. Those of us who remember learning Pascal as our first “real” programming language in schools (first aside from BASIC, at least) might look fondly on the AVRPascal project by [Andrzej Karwowski].

[Andrzej] is using FreePascal’s compiler tools and AVRdude to pipe compiled code onto the microcontroller. Those tools are built into his AVRPascal code editor to create a Pascal-based alternative to the Arduino IDE for programming AVR-based microcontrollers. The latest version, 3.3, even includes a serial port monitor compatible with the Arduino boards.
This guy, but with Pascal. What’s not to love?
The Arduino comparisons don’t stop there: [Andrzej] also maintains UnoLib, a Pascal library for the Arduino Uno and compatible boards with some of the functionality you’d expect from Arduino libraries: easy access to I/O (digital and analog ports), timers, serial communication, and even extras like i2c, LCD and sensor libraries.

He’s distributing the AVRPascal editor as freeware, but it is not open source. It’s too bad, because Pascal is a great choice for microcontrollers: compiled, it isn’t much slower than C, but it can be as easy to write as Python. Micropython shows there’s a big market for “easy” embedded programming; Pascal could help fill it in a more performant way. Is the one-man license holding this project back, or is it just that people don’t use Pascal much these days?

While AVR programming is mostly done in C, this is hardly the first time we’ve seen alternatives. While some have delved into the frightening mysteries of assembly, others have risen to higher abstraction to run LISP or even good old fashioned BASIC. Pascal seems like a good middle road, if you want to go off the beaten path away from C.

Via reddit.


hackaday.com/2025/08/27/pascal…


JuiceBox Rescue: Freeing Tethered EV Chargers From Corporate Overlords



The JuiceBox charger in its natural environment. (Credit: Nathan Matias)
Having a charger installed at home for your electric car is very convenient, not only for the obvious home charging, but also for having scheduling and other features built-in. Sadly, like with so many devices today, these tend to be tethered to a remote service managed by the manufacturer. When [Nathan Matias] and many of his neighbors bought into the JuiceBox charger years ago, it and the associated JuiceNet service were still part of a quirky startup. After the startup got snapped up by a large company, things got so bad that [Nathan] and others found themselves forced to find a way to untether their EV chargers.

The drama began back in October of last year, when the North American branch of the parent company – Enel X Way – announced that it’d shut down operations. After backlash, the online functionality was kept alive while a buyer was sought. That’s when [Nathan] and other JuiceBox owners got an email informing them that the online service would be shut down, severely crippling their EV chargers.

Ultimately both a software and a hardware solution were developed: the former is the JuicePass Proxy project, which keeps the original hardware and associated app working; the other is a complete brain transplant, created by the folks over at OpenEVSE, which enables interoperability with e.g. Home Assistant through standard protocols like MQTT.

Stories like these make one wonder how much of this online functionality is actually required, and how much of it is just a way for manufacturers to get consumers to install a terminal in their homes for online subscription services.


hackaday.com/2025/08/27/juiceb…


A New Screen Upgrade for the GBA


The Game Boy Advance (GBA) was released in 2001 to breathe some new life into the handheld market, and it did it with remarkable success. Unfortunately, the original models had a glaring problem: their unlit LCD screens could be very difficult to see. For that reason, console modders who work on these systems tend to improve the screen first like this project which brings a few other upgrades as well.

The fully open-source modification is called the Open AGB Display and brings an IPS display to the classic console. The new screen has a 480×480 resolution, which is slightly larger than the original, but it handles upscaling with no noticeable artifacts and even supports adding some back in, like scanlines and pixelation, to keep the early 00s aesthetic. The build does require permanently modifying the case, but for the original GBA we don’t see much downside. [Tobi] also goes through a ton of detail on how the mod works as well, for those who want to take a deep dive into the background theory.

There has been a lot of activity in the Game Boy Advance communities lately, though, as the hardware and software become better understood. If you don’t want to modify original hardware, want an upgraded experience, but still want to use the original game cartridges, we might recommend something like the Game Bub instead.


hackaday.com/2025/08/27/a-new-…


FLOSS Weekly Episode 844: Simulated Word-of-Mouth


This week Jonathan, Doc, and Aaron chat about Open Source AI, advertisements, and where we’re at in the bubble roller coaster!


youtube.com/embed/MKEJAJger4M?…

Did you know you can watch the live recording of the show right on our YouTube Channel? Have someone you’d like us to interview? Let us know, or contact the guest and have them contact us! Take a look at the schedule here.

play.libsyn.com/embed/episode/…

Direct Download in DRM-free MP3.

If you’d rather read along, here’s the transcript for this week’s episode.

Places to follow the FLOSS Weekly Podcast:


Theme music: “Newer Wave” Kevin MacLeod (incompetech.com)

Licensed under Creative Commons: By Attribution 4.0 License


hackaday.com/2025/08/27/floss-…


VPNs are booming in France! Pornhub, YouPorn and RedTube blocked? Just one click away!


Since June 4, anyone trying to access Pornhub, YouPorn or RedTube will only see “Liberty Leading the People”, accompanied by a statement from the Aylo group. In its press release, the Canadian adult-industry giant explains that it has decided to suspend access to its sites from France in response to the entry into force, on June 9, of the SREN law and its double-anonymity age verification. According to the company, this method of checking visitors' age puts their privacy at risk.

Inevitably, this decision has created frustration among the nearly 3.8 million users of porn sites; enough to push them to look for ways around the block to reach their “intimate” platforms.

Up to 1,000% more French users


If you go for a VPN, you will not be the only one. Since the afternoon of June 4, most of these services have seen their number of French customers explode. As the logical consequence of the disappearance of a widely used service in France, the increase has reached impressive levels, even for ProtonVPN.

The company shared a screenshot of its statistics showing an increase of more than 1,000% in new account sign-ups.

For the record, ProtonVPN even says this growth exceeds what it recorded during the TikTok block in the United States, which shows just how much adult sites are part of many French people's daily lives. NordVPN reports a 170% increase in sign-ups in France.

This appears to be a global phenomenon.

Is it legal to use a VPN to access Pornhub or YouPorn?


One question remains, however: is it legal to use a VPN to access these sites, which are now generally unreachable in the country?

The answer is simple: yes. Using virtual private networks is legal in France, and these sites are not inaccessible as the result of a legal or administrative decision, but rather of an internal decision taken by the company Aylo.

Finally, the VPN providers are bracing for criticism, pointing out that the increase in sign-ups concerns French adults, since the sign-up process requires access to a credit card.

NordVPN also takes the opportunity to express clear support for the Aylo group, noting that this kind of phenomenon is particularly visible in countries “where digital freedoms are under threat”.

The article VPNs are booming in France! Pornhub, YouPorn and RedTube blocked? Just one click away! originally appeared on il blog della sicurezza informatica.


Homebrew Tire Pressure Monitoring System


When [upir] saw that you could buy tire valve stem caps that read pressure electronically, he decided to roll his own Tire Pressure Monitoring System (TPMS) like the one found on modern cars. An ESP32 and an OLED display read the pressure values. He didn’t have a car tire on his workbench though, so he had to improvise there.

Of course, a real TPMS sensor goes inside the tire, but screwing them on the valve stem is much easier to deal with. The sensors use Bluetooth Low Energy and take tiny batteries. In theory, you’re supposed to connect them to your phone, although two different apps failed to find the sensors. Even a BLE scanner app wouldn’t pick them up. Turns out — and this makes sense — the sensors don’t send data if there’s no pressure on them, so as not to run down the batteries. Putting pressure on them made them pop up on the scanner.

The scanner was able to read the advertisement and then correlate pressure to the data. He discovered that someone had already decoded standard TPMS BLE data, except the advertisements he found were significantly longer than his own. Eventually he was able to find a good reference.

The data includes a status byte, the battery voltage, the temperature, and pressure. Once you know the format, it is straightforward to read it and create your own display. Many people would have ended the video there, but [upir] goes into great detail — the video is nearly an hour long. If you want to duplicate the project, there’s plenty of info and a code repository, too.
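
To give a sense of what the decoding step looks like, here is a sketch in C of parsing one commonly documented layout for these BLE valve-cap sensors. The offsets, units and sample bytes below are assumptions to check against your own sensor's advertisements, not the exact format shown in the video:

/* Decode a BLE TPMS manufacturer-data payload (assumed layout:
 * little-endian pressure in Pa at offset 6, temperature in 1/100 degC
 * at offset 10, battery percent and alarm flag at offsets 14 and 15).
 */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    float pressure_kpa;
    float temperature_c;
    uint8_t battery_pct;
    uint8_t alarm;
} tpms_reading;

static uint32_t le32(const uint8_t *p) {
    return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
           ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

/* data points at the manufacturer-specific payload, len is its length */
int tpms_decode(const uint8_t *data, int len, tpms_reading *out) {
    if (len < 16)
        return -1;                            /* too short for this layout */
    uint32_t pressure_pa = le32(&data[6]);    /* assumed offset */
    int32_t temp_centi = (int32_t)le32(&data[10]);
    out->pressure_kpa = pressure_pa / 1000.0f;
    out->temperature_c = temp_centi / 100.0f;
    out->battery_pct = data[14];
    out->alarm = data[15];
    return 0;
}

int main(void) {
    /* made-up advertisement bytes standing in for a captured packet */
    const uint8_t adv[16] = {0x00, 0x01, 0x80, 0xEA, 0xCA, 0x10,
                             0x90, 0x38, 0x03, 0x00,   /* 211088 Pa */
                             0xF4, 0x08, 0x00, 0x00,   /* 22.92 degC */
                             0x5A, 0x00};              /* 90%, no alarm */
    tpms_reading r;
    if (tpms_decode(adv, sizeof adv, &r) == 0)
        printf("%.1f kPa, %.1f C, battery %u%%\n",
               r.pressure_kpa, r.temperature_c, (unsigned)r.battery_pct);
    return 0;
}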

If you need to read the regular RF TPMS sensors, grab a software-defined radio. Many of these sensors follow their own format though, so be prepared.

youtube.com/embed/P85tkCbQGo8?…


hackaday.com/2025/08/27/homebr…


From 2026, no more “ghost” apps: Android will only accept verified developers


Google representatives have announced that, starting in 2026, only apps from verified developers will be installable on certified Android devices. The measure is aimed at fighting malware and financial fraud and will apply to apps installed from third-party sources.

The requirement will apply to all “certified Android devices”, that is, devices that run Play Protect and have the Google apps preinstalled.

In 2023, the Google Play Store introduced similar requirements and, according to the company, this led to a sharp drop in malware and fraud. The requirements will now become mandatory for any app, including those distributed through third-party app stores and via sideloading (when the user downloads the APK file to the device on their own).

“Think of it like an ID check at the airport: it confirms the traveler's identity, but it is a separate check from inspecting their luggage. We will verify the developer's identity, but not the contents of their app or where it comes from,” the company writes.

In this way, Google wants to fight “convincing fake apps” and make life harder for attackers who start distributing new malware shortly after Google removes the previous one. According to a recent analysis, the third-party sources from which apps are sideloaded contain 50 times more malware than the apps available in the Google Play Store.

At the same time, Google stresses that “developers will retain the same freedom to distribute their apps directly to users via third-party sources or to use any app store they prefer”. To implement the new initiative, a separate, streamlined Android Developer Console will be created, aimed above all at developers who distribute their apps outside the Google Play Store. After verifying their identity, developers will have to register the package names and signing keys for their apps.

Those who distribute apps through the Google Play Store are “likely already meeting the verification requirements through the current Play Console process”, which requires organizations to provide a DUNS number (Data Universal Numbering System, a unique nine-digit identifier for legal entities). The new verification system will begin testing in October of this year, when the first Android developers will gain access to it. The mechanism will become available to everyone starting in March 2026.

The verification requirement will first take effect in September 2026 in Brazil, Indonesia, Singapore and Thailand. Google explains that these countries are “particularly affected by these forms of fraudulent apps”. Then, in 2027, developer verification will start to be enforced globally.

The article From 2026, no more “ghost” apps: Android will only accept verified developers originally appeared on il blog della sicurezza informatica.


The Android Bluetooth Connection


Suppose someone came to talk to you and said, “I need your help. I have a Raspberry Pi-based robot and I want to develop a custom Android app to control it.” If you are like me, you’ll think about having to get the Android developer tools updated, and you’ll wonder if you remember exactly how to sign a manifest. Not an appealing thought. Sure, you can buy things off the shelf that make it easier, but then it isn’t custom, and you have to accept how it works. But it turns out that for simple things, you can use an old Google Labs project that is, surprisingly, still active and works well: MIT’s App Inventor — which, unfortunately, should have the acronym AI, but I’ll just call it Inventor to avoid confusion.

What’s Inventor? It lives in your browser. You lay out a fake phone screen using drag and drop, much like you’d use QT Designer or Visual Basic. You can switch views and attach actions using a block language sort of like Scratch. You can debug in an emulator or on your live phone wirelessly. Then, when you are ready, you can drop an APK file ready for people to download. Do you prefer an iPhone? There’s some support for it, although that’s not as mature. In particular, it appears that you can’t easily share an iPhone app with others.

Is it perfect? No, there are some quirks. But it works well and, with a little patience, can make amazingly good apps. Are they as efficient as some handcrafted masterpiece? Probably not. Does it matter? Probably not. I think it gets a bad rep because of the colorful blocks. Surely it’s made for kids. Well, honestly, it is. But it does a fine job, and just like TinkerCad or Lego, it is simple enough for kids, but you can use it to do some pretty amazing things.

How Fast?


How fast is it to create a simple Android app? Once you get used to it, it is very fast, and there are plenty of tutorials. Just for fun, I wrote a little custom web browser for my favorite website. It is hard to tell from the image, but there are several components present. The web browser at the bottom is obvious, and there are three oval buttons. The Hackaday logo is also clickable (it takes you home). What you can’t see is that there is a screen component you get by default. In there is a vertical layout that stacks the toolbar with the web browser. Then the toolbar itself is a horizontal layout (colored yellow, as you can see).

The black bar at the bottom and the very top bar are parts of the fake phone, although you can also pick a fake monitor or tablet if you want more space to work.

What you can’t see is that there are two more hidden components. There’s a clock: if you are on the home page for an hour, the app refreshes the page. There’s also a share component that the share button will use. You can see three views of the app below: a design view where you visually build the interface, a block view where you create code, and the final result running on a real phone.

Code


Putting all that on the screen took just a few minutes. Sure, I played with the fonts and colors, but just to get the basic layout took well under five minutes. But what about the code? That’s simple, too, as you can see.

The drab boxes are for control structures like event handlers and if/then blocks. Purple boxes are for subroutine calls, and you can define your own subroutines, although that wasn’t needed here. The green blocks are properties, like the browser’s URL. You can try it yourself if you want.

Rather than turn this into a full-blown Inventor tutorial, check out any of the amazingly good tutorials on the YouTube channel, like the one below.

youtube.com/embed/eSvtXWpZ6os?…

Half the Story


Earlier, I mentioned that your friend wants a robot controller to talk to a Raspberry Pi. I was surprised at how hard this turned out to be, but it wasn’t Inventor’s fault. There are three obvious choices: the system can make web requests, connect via Bluetooth, or work with a serial port.

I made the mistake of deciding to use Bluetooth serial using the Bluetooth client component. From Inventor’s point of view, this is easy, if not very sophisticated. But the Linux side turned out to be a pain.

There was a time when Bluez, the Linux Bluetooth stack, had a fairly easy way to create a fake serial port that talked over Bluetooth. There are numerous examples of this circulating on the Internet. But they decided that wasn’t good for some reason and deprecated it. Modern Linux doesn’t like all that and expects you to create a dbus program that can receive bus messages from the Bluetooth stack.

To Be Fair…


Ok, in all fairness, you can reload the Bluetooth stack with a compatibility flag — at least for now — and it will still work the old way. But you know they’ll eventually turn that off, so I decided I should do it the right way. Instead of fighting it, though, I found some code on GitHub that created a simple client or server for SPP (the serial port profile). I stripped it down to just work as a server, and then bolted on a separate function bt_main() where you can just write code that works with streams. That way, all the hocus pocus — and there is a lot of it — stays out of your way.

You can find my changes to the original code, also on GitHub. Look at the spp_bridge.c file, and you’ll see it is a lot of messy bits to interact with Bluez via dbus. It registers a Profile1 interface and forks a worker process for each incoming connection. The worker runs the user-defined bt_main() function, which you will normally override. The worker reads from the Bluetooth socket and writes to your code via a normal FILE *. You can send data back the same way.

Here’s the default bt_main function:
int bt_main(int argc, char *argv[], FILE *in, FILE *out) {
    // Default demo: echo lines, prefixing with "ECHO: "
    fprintf(stderr, "[bt_main] Default echo mode.\n");
    setvbuf(out, NULL, _IOLBF, 0);
    char buf[1024];
    while (fgets(buf, sizeof(buf), in)) {
        fprintf(stderr, "[bt_main] RX: %s", buf);
        fprintf(out, "ECHO: %s", buf);
        fflush(out);
    }
    fprintf(stderr, "[bt_main] Input closed. Exiting.\n");
    return 0;
}

In retrospect, it might have been better to just use the compatibility flag on the Bluez server to restore the old behavior. At least, for as long as it lasts. This involves finding where your system launches the Bluez service (probably in a systemd service, these days) and adding the --compat flag to the command line. There may be a newer version of rfcomm that supports the latest Bluez setup, too, but KDE Neon didn’t have it.

On the other hand, this does work. The bt_main function is easy to write and lets you focus on solving your problem rather than how to set up and tear down the Bluetooth connection.
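
As a taste of what that looks like, here is a sketch of a command-parsing bt_main; the FWD/STOP commands and the printf stubs are illustrative stand-ins for real robot I/O, not code from the project or from the next installment:

#include <stdio.h>
#include <string.h>

/* Sketch only: replaces the echo demo with a tiny command parser. */
int bt_main(int argc, char *argv[], FILE *in, FILE *out) {
    char buf[128];
    setvbuf(out, NULL, _IOLBF, 0);           /* line-buffered replies */
    while (fgets(buf, sizeof(buf), in)) {
        buf[strcspn(buf, "\r\n")] = '\0';    /* strip the line ending */
        if (strcmp(buf, "FWD") == 0) {
            printf("motors: forward\n");     /* stub: drive the robot */
            fprintf(out, "OK\n");
        } else if (strcmp(buf, "STOP") == 0) {
            printf("motors: stop\n");        /* stub: stop the robot */
            fprintf(out, "OK\n");
        } else {
            fprintf(out, "ERR unknown command\n");
        }
    }
    return 0;
}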

Next Time


Next time, I’ll show you a more interesting bt_main along with an Android app that sends and receives data with a custom server. You could use this as the basis of, for example, a custom macropad or an Android app to control a robot.


Auchan data leak: hundreds of thousands of customers hit by a hacker attack


French retailer Auchan has informed hundreds of thousands of customers that their personal data was stolen as the result of a hacker attack.

In notifications sent to users last week, the company said the breach involved names, email addresses, phone numbers and loyalty card numbers, but stressed that no banking information, passwords or PINs were compromised.

“We inform you that Auchan has been the victim of a cyberattack. This attack resulted in unauthorized access to some of the personal data associated with your loyalty program account,” the notice reads.

Auchan says it has taken all the measures necessary to contain the attack and improve the security of its systems, and that it has informed law enforcement and the regulators about the incident.

The company advises affected customers to watch out for potential phishing and fraud attempts, since the attackers may try to use the stolen information.

The retailer told French media that the incident affected “hundreds of thousands of customers”. However, the company did not specify exactly how the leak occurred, who was behind the cyberattack, or whether the incident was linked to extortion.

The article Auchan data leak: hundreds of thousands of customers hit by a hacker attack originally appeared on il blog della sicurezza informatica.


Lynx-R1 Headset Makers Release 6DoF SLAM Solution As Open Source


Some readers may recall the Lynx-R1 headset — it was conceived as an Android virtual reality (VR) and mixed reality (MR) headset with built-in hand tracking, designed to be open where others were closed, allowing developers and users access to inner workings in defiance of walled gardens. It looked very promising, with features rivaling (or surpassing) those of its contemporaries.

Founder [Stan Larroque] recently announced that Lynx’s 6DoF SLAM (simultaneous location and mapping) solution has been released as open source. ORB-SLAM3 (GitHub repository) takes in camera images and outputs a 6DoF pose, and does so effectively in real-time. The repository contains some added details as well as a demo application that can run on the Lynx-R1 headset.
The unusual optics are memorable. (Hands-on Lynx-R1 by Antony Vitillo)
As a headset the Lynx-R1 had a number of intriguing elements. The unusual optics, the flip-up design, and built-in hand tracking were impressive for its time, as was the high-quality mixed reality pass-through. That last feature refers to the headset using its external cameras as inputs to let the user see the real world, but with the ability to have virtual elements displayed and apparently anchored to real-world locations. Doing this depends heavily on the headset being able to track its position in the real world with both high accuracy and low latency, and this is what ORB-SLAM3 provides.

A successful crowdfunding campaign for the Lynx-R1 in 2021 showed that a significant number of people were on board with what Lynx was offering, but developing brand new consumer hardware is a challenging road for many reasons unrelated to developing the actual thing. There was a hands-on at a trade show in 2021 and units were originally intended to ship out in 2022, but sadly that didn’t happen. Units still occasionally trickle out to backers and pre-orders according to the unofficial Discord, but it’s safe to say things didn’t really go as planned for the R1.

It remains a genuinely noteworthy piece of hardware, especially considering it was not a product of one of the tech giants. If we manage to get our hands on one of them, we’ll certainly give you a good look at it.


hackaday.com/2025/08/27/lynx-r…


Exploits and vulnerabilities in Q2 2025


Vulnerability registrations in Q2 2025 proved to be quite dynamic. Vulnerabilities that were published impact the security of nearly every computer subsystem: UEFI, drivers, operating systems, browsers, as well as user and web applications. Based on our analysis, threat actors continue to leverage vulnerabilities in real-world attacks as a means of gaining access to user systems, just like in previous periods.

This report also describes known vulnerabilities used with popular C2 frameworks during the first half of 2025.

Statistics on registered vulnerabilities


This section contains statistics on assigned CVE IDs. The data is taken from cve.org.

Let’s look at the number of CVEs registered each month over the last five years.

Total vulnerabilities published each month from 2021 to 2025

This chart shows the total volume of vulnerabilities that go through the publication process. The number of registered vulnerabilities is clearly growing year-on-year, both as a total and for each individual month. For example, around 2,600 vulnerabilities were registered as of the beginning of 2024, whereas in January 2025, the figure exceeded 4,000. This upward trend was observed every month except May 2025. However, it’s worth noting that the registry may include vulnerabilities with identifiers from previous years; for instance, a vulnerability labeled CVE-2024-N might be published in 2025.

We also examined the number of vulnerabilities assigned a “Critical” severity level (CVSS > 8.9) during the same period.

Total number of critical vulnerabilities published each month from 2021 to 2025

The data for the first two quarters of 2025 shows a significant increase when compared to previous years. Unfortunately, it’s impossible to definitively state that the total number of registered critical vulnerabilities is growing, as some security issues aren’t assigned a CVSS score. However, we’re seeing that critical vulnerabilities are increasingly receiving detailed descriptions and publications – something that should benefit the overall state of software security.

Exploitation statistics


This section presents statistics on vulnerability exploitation for Q2 2025. The data draws on open sources and our telemetry.

Windows and Linux vulnerability exploitation


In Q2 2025, as before, the most common exploits targeted vulnerable Microsoft Office products that contained unpatched security flaws.

Kaspersky solutions detected the most exploits on the Windows platform for the following vulnerabilities:

  • CVE-2018-0802: a remote code execution vulnerability in the Equation Editor component
  • CVE-2017-11882: another remote code execution vulnerability, also affecting Equation Editor
  • CVE-2017-0199: a vulnerability in Microsoft Office and WordPad allowing an attacker to gain control over the system

These vulnerabilities are traditionally exploited by threat actors more often than others, as we’ve detailed in previous reports. These are followed by equally popular issues in WinRAR and exploits for stealing NetNTLM credentials in the Windows operating system:

  • CVE-2023-38831: a vulnerability in WinRAR involving improper handling of files within archive contents
  • CVE-2025-24071: a Windows File Explorer vulnerability that allows for the retrieval of NetNTLM credentials when opening specific file types (.library-ms)
  • CVE-2024-35250: a vulnerability in the ks.sys driver that allows arbitrary code execution


Dynamics of the number of Windows users encountering exploits, Q1 2024 — Q2 2025. The number of users who encountered exploits in Q1 2024 is taken as 100%

All of the vulnerabilities listed above can be used for both initial access to vulnerable systems and privilege escalation. We recommend promptly installing updates for the relevant software.

For the Linux operating system, exploits for the following vulnerabilities were detected most frequently:

  • CVE-2022-0847, also known as Dirty Pipe: a widespread vulnerability that allows privilege escalation and enables attackers to take control of running applications
  • CVE-2019-13272: a vulnerability caused by improper handling of privilege inheritance, which can be exploited to achieve privilege escalation
  • CVE-2021-22555: a heap overflow vulnerability in the Netfilter kernel subsystem. The widespread exploitation of this vulnerability is due to the fact that it employs popular memory modification techniques: manipulating msg_msg primitives, which leads to a Use-After-Free security flaw.


Dynamics of the number of Linux users encountering exploits, Q1 2024 — Q2 2025. The number of users who encountered exploits in Q1 2024 is taken as 100%

It’s critically important to install security patches for the Linux operating system, as it’s attracting more and more attention from threat actors each year – primarily due to the growing number of user devices running Linux.

Most common published exploits


In Q2 2025, we observed that the distribution of published exploits by software type continued the trends from last year. Exploits targeting operating system vulnerabilities continue to predominate over those targeting other software types that we track as part of our monitoring of public research, news, and PoCs.

Distribution of published exploits by platform, Q1 2025

Distribution of published exploits by platform, Q2 2025

In Q2, no public information about new exploits for Microsoft Office systems appeared.

Vulnerability exploitation in APT attacks


We analyzed data on vulnerabilities that were exploited in APT attacks during Q2 2025. The following rankings are informed by our telemetry, research, and open-source data.

TOP 10 vulnerabilities exploited in APT attacks, Q2 2025

The Q2 TOP 10 list primarily draws from the large number of incidents described in public sources. It includes both new security issues exploited in zero-day attacks and vulnerabilities that have been known for quite some time. The most frequently exploited vulnerable software includes remote access and document editing tools, as well as logging subsystems. Interestingly, low-code/no-code development tools were at the top of the list, and a vulnerability in a framework for creating AI-powered applications appeared in the TOP 10. This suggests that the evolution of software development technology is attracting the attention of attackers who exploit vulnerabilities in new and increasingly popular tools. It’s also noteworthy that the web vulnerabilities were found not in AI-generated code but in the code that supported the AI framework itself.

Judging by the vulnerabilities identified, the attackers’ primary goals were to gain system access and escalate privileges.

C2 frameworks


In this section, we’ll look at the most popular C2 frameworks used by threat actors and analyze the vulnerabilities whose exploits interacted with C2 agents in APT attacks.

The chart below shows the frequency of known C2 framework usage in attacks on users during the first half of 2025, according to open sources.

TOP 13 C2 frameworks used by APT groups to compromise user systems in Q1–Q2 2025

The four most frequently used frameworks – Sliver, Metasploit, Havoc, and Brute Ratel C4 – can work with exploits “out of the box” because their agents provide a variety of post-compromise capabilities. These capabilities include reconnaissance, command execution, and maintaining C2 communication. It should be noted that the default implementation of Metasploit has built-in support for exploits that attackers use for initial access. The other three frameworks, in their standard configurations, only support privilege escalation and persistence exploits in a compromised system and require additional customization tailored to the attackers’ objectives. The remaining tools don’t work with exploits directly and were modified for specific exploits in real-world attacks. We can therefore conclude that attackers are increasingly customizing their C2 agents to automate malicious activities and hinder detection.

After reviewing open sources and analyzing malicious C2 agent samples that contained exploits, we found that the following vulnerabilities were used in APT attacks involving the C2 frameworks mentioned above:

  • CVE-2025-31324: a vulnerability in SAP NetWeaver Visual Composer Metadata Uploader that allows for remote code execution and has a CVSS score of 10.0
  • CVE-2024-1709: a vulnerability in ConnectWise ScreenConnect 23.9.7 that can lead to authentication bypass, also with a CVSS score of 10.0
  • CVE-2024-31839: a cross-site scripting vulnerability in the CHAOS v5.0.1 remote administration tool, leading to privilege escalation
  • CVE-2024-30850: an arbitrary code execution vulnerability in CHAOS v5.0.1 that allows for authentication bypass
  • CVE-2025-33053: a vulnerability caused by improper handling of working directory parameters for LNK files in Windows, leading to remote code execution

Interestingly, most of the data about attacks on systems is lost by the time an investigation begins. However, the list of exploited vulnerabilities reveals various approaches to the vulnerability–C2 combination, offering insight into the attack’s progression and helping identify the initial access vector. By analyzing the exploited vulnerabilities, incident investigations can determine that, in some cases, attacks unfold immediately upon exploit execution, while in others, attackers first obtain credentials or system access and only then deploy command and control.

Interesting vulnerabilities


This section covers the most noteworthy vulnerabilities published in Q2 2025.

CVE-2025-32433: vulnerability in the SSH server, part of the Erlang/OTP framework


This remote code execution vulnerability can be considered quite straightforward. The attacker needs to send a command execution request, and the server will run it without performing any checks – even if the user is unauthenticated. The vulnerability occurs during the processing of messages transmitted via the SSH protocol when using packages for Erlang/OTP.

CVE-2025-6218: directory traversal vulnerability in WinRAR


This vulnerability is similar to the well-known CVE-2023-38831: both target WinRAR and can be exploited through user interaction with the GUI. Vulnerabilities involving archives aren’t new and are typically exploited in web applications, which often use archives as the primary format for data transfer. These archives are processed by web application libraries that may lack checks for extraction limits. Typical scenarios for exploiting such vulnerabilities include replacing standard operating system configurations and setting additional values to launch existing applications. This can lead to the execution of malicious commands, either with a delay or upon the next OS boot or application startup.

To exploit such vulnerabilities, attackers need to determine the location of the directory to modify, as each system has a unique file layout. Additionally, the process is complicated by the need to select the correct characters when specifying the extraction path. By using specific combinations of special characters, archive extraction outside of the working directory can bypass security mechanisms, which is the essence of CVE-2025-6218. A PoC for this vulnerability appeared rather quickly.

Hex dump of the PoC file for CVE-2025-6218

As seen in the file dump, the archive extraction path is altered not due to its complex structure, but by using a relative path without specifying a drive letter. As we mentioned above, a custom file organization on the system makes such an exploit unstable. This means attackers will have to use more sophisticated social engineering methods to attack a user.
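
To make the defensive side concrete, here is a minimal sketch of the kind of entry-name check an extraction routine needs; it is generic C written for illustration, not WinRAR's actual code:

/* Reject archive entry names that could escape the extraction directory.
 * Illustrative only: real extractors must also canonicalize the final
 * joined path and handle symlinks.
 */
#include <stdbool.h>
#include <string.h>
#include <ctype.h>

bool entry_name_is_safe(const char *name) {
    /* absolute paths and Windows drive letters are rejected outright */
    if (name[0] == '/' || name[0] == '\\')
        return false;
    if (isalpha((unsigned char)name[0]) && name[1] == ':')
        return false;

    /* walk the path components and reject any ".." */
    const char *p = name;
    while (*p) {
        size_t seg = strcspn(p, "/\\");   /* length of this component */
        if (seg == 2 && p[0] == '.' && p[1] == '.')
            return false;
        p += seg;
        if (*p)
            p++;                          /* skip the separator */
    }
    return true;
}

Anything that fails a check like this gets extracted under a sanitized name or skipped entirely, which is exactly the kind of guard the malicious archives behind CVE-2025-6218 are designed to slip past.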

CVE-2025-3052: insecure data access vulnerability in NVRAM, allowing bypass of UEFI signature checks


UEFI vulnerabilities almost always aim to disable the Secure Boot protocol, which is designed to protect the operating system’s boot process from rootkits and bootkits. CVE-2025-3052 is no exception.

Researchers were able to find a set of vulnerable UEFI applications in which a function located at offset 0xf7a0 uses the contents of a global non-volatile random-access memory (NVRAM) variable without validation. The vulnerable function incorrectly processes and can modify the data specified in the variable. This allows an attacker to overwrite Secure Boot settings and load any modules into the system – even those that are unsigned and haven’t been validated.

CVE-2025-49113: insecure deserialization vulnerability in Roundcube Webmail


This vulnerability highlights a classic software problem: the insecure handling of serialized objects. It can only be exploited after successful authentication, and the exploit is possible during an active user session. To carry out the attack, a malicious actor must first obtain a legitimate account and then use it to access the vulnerable code, which lies in the lack of validation for the _from parameter.

Post-authentication exploitation is quite simple: a serialized PHP object in text format is placed in the vulnerable parameter for the attack. It’s worth noting that an object injected in this way is easy to restore for subsequent analysis. For instance, in a PoC published online, the payload creates a file named “pwned” in /tmp.

Example of a payload published online

According to the researcher who discovered the vulnerability, the defective code had been used in the project for 10 years.

CVE-2025-1533: stack overflow vulnerability in the AsIO3.sys driver


This vulnerability was exploitable due to an error in the design of kernel pool parameters. When implementing access rights checks for the AsIO3.sys driver, developers incorrectly calculated the amount of memory needed to store the path to the file requesting access to the driver. If a path longer than 256 characters is created, the system will crash with a “blue screen of death” (BSOD). However, in modern versions of NTFS, the path length limit is not 256 but 32,767 characters. This vulnerability demonstrates the importance of a thorough study of documentation: it not only helps to clearly understand how a particular Windows subsystem operates but also impacts development efficiency.
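
The snippet below illustrates the general class of mistake in plain user-mode C; it is not the driver's actual code, and the 256-character limit is simply the assumption described in the report:

/* Illustration of the bug class: a buffer sized for an assumed 256-character
 * path limit, while NTFS allows paths up to 32,767 characters. Not the
 * actual AsIO3.sys code.
 */
#include <stdio.h>
#include <string.h>

#define ASSUMED_MAX_PATH 256

/* BAD: silently assumes the requester's image path fits in 256 bytes */
void check_access_broken(const char *requester_path) {
    char buf[ASSUMED_MAX_PATH];
    strcpy(buf, requester_path);              /* overflows on long paths */
    printf("checking access for %s\n", buf);
}

/* Better: bound the copy, or size the buffer from the real path length */
void check_access_fixed(const char *requester_path) {
    char buf[ASSUMED_MAX_PATH];
    snprintf(buf, sizeof(buf), "%s", requester_path);
    printf("checking access for %s\n", buf);
}

In kernel mode the same oversight corrupts the stack and brings the machine down with a BSOD, which is the behavior described for CVE-2025-1533.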

Conclusion and advice


The number of vulnerabilities continues to grow in 2025. In Q2, we observed a positive trend in the registration of new CVE IDs. To protect systems, it’s critical to regularly prioritize the patching of known vulnerabilities and use software capable of mitigating post-exploitation damage. Furthermore, one way to address the consequences of exploitation is to find and neutralize C2 framework agents that attackers may use on a compromised system.

To secure infrastructure, it’s necessary to continuously monitor its state, particularly by ensuring thorough perimeter monitoring.

Special attention should be paid to endpoint protection. A reliable solution for detecting and blocking malware will ensure the security of corporate devices.

Beyond basic protection, corporate infrastructures need to implement a flexible and effective system that allows for the rapid installation of security patches, as well as the configuration and automation of patch management. It’s also important to constantly track active threats and proactively implement measures to strengthen security, including mitigating risks associated with vulnerabilities. Our Kaspersky Next product line helps to detect and analyze vulnerabilities in the infrastructure in a timely manner for companies of all sizes. Moreover, these modern comprehensive solutions also combine the collection and analysis of security event data from all sources, incident response scenarios, an up-to-date database of cyberattacks, and training programs to improve the level of employees’ cybersecurity awareness.


securelist.com/vulnerabilities…


A Tool-changing 3D Printer For the Masses


A preproduction U1 sitting on a workbench

Modern multi-material printers certainly have their advantages, but all that purging has a way of adding up to oodles of waste. Tool-changing printers offer a way to do multi-material prints without the purge waste, but at the cost of complexity. Plastic’s cheap, though, so the logic has been that you could never save enough on materials cost to make up for the added capital cost of a tool-changer — that is, until now.

Currently active on Kickstarter, the Snapmaker U1 promises to change that equation. [Albert] got his hands on a pre-production prototype for a review on 247Printing, and what we see looks promising.

The printer features the ubiquitous 235 mm x 235 mm bed size — pretty much the standard for a printer these days, but quite a lot smaller than the bed of what’s arguably the machine’s closest competition, the tool-changing Prusa XL. On the other hand, at under one thousand US dollars, it’s one quarter the price of Prusa’s top of the line offering. Compared to the XL, it’s faster in every operation, from heating the bed and nozzle to actual printing and even head swapping. That said, as you’d expect from Prusa, the XL comes dialed-in for perfect prints in a way that Snapmaker doesn’t manage — particularly for TPU. You’re also limited to four tool heads, compared to the five supported by the Prusa XL.

The U1 is also faster in multi-material printing than its price-equivalent competitors from Bambu Lab, with total print times up to two to three times shorter, depending on the print. It’s worth noting that the actual print speed is comparable, but the Snapmaker takes the lead when you factor in all the time wasted purging and changing filaments.

The assisted spool loading on the sides of the machine uses RFID tags to automatically track the colour and material of Snapmaker filament. That feature seems to take a certain inspiration from the Bambu Lab Mini-AMS, but it is an area [Albert] identifies as needing particular attention from Snapmaker. In the beta configuration he got his hands on, it only loads filament about 50% of the time. One can only imagine the final production models will do better than that!

In spite of that, [Albert] says he’s backing the Kickstarter. Given Snapmaker is an established company — we featured an earlier Snapmaker CNC/Printer/Laser combo machine back in 2021 — that’s less of a risk than it could be.

youtube.com/embed/oWUTe1TjjKA?…


hackaday.com/2025/08/27/a-tool…


Which Italian e-commerce site with 500 orders/month will soon be shouting “Data Breach”?


SinCity is back in the spotlight, this time putting up for sale administrative access to another Italian online shop based on PrestaShop.

According to what the threat actor states in the thread, the exploit allows webshells to be uploaded directly to the site, opening the way to further compromise and abuse of the e-commerce platform.

Dettagli dell’annuncio:


  • Piattaforma: PrestaShop
  • Accesso: Admin e possibilità di exploit tramite webshell
  • Settore: non specificato
  • Sistema di pagamento: PayPal e carta di credito con gateway di pagamento NEXI
  • Ordini attivi: circa 600 al mese (dati giugno e luglio)
  • Prezzo di partenza: 200$ – rilanci da 100$
  • Durata asta: 12h dall’ultimo rilancio

Questa vendita espone tutti i clienti del portale a rischi concreti: esfiltrazione di dati personali e finanziari, campagne di formjacking o skimming tramite JavaScript malevolo, compromissione della reputazione e potenziali danni legali e reputazionali.

Inoltre, l’elevato volume di ordini mensili rende l’accesso molto appetibile per operatori di ransomware o gruppi focalizzati su frodi con carte di credito.

Questo è l’ennesimo caso che conferma come le PMI italiane – in particolare quelle del commercio elettronico – siano un bersaglio ricorrente degli Initial Access Broker. La vendita di accessi privilegiati su marketplace criminali resta uno dei principali vettori di ingresso per ransomware e attacchi supply chain.

L’importanza della Cyber Threat Intelligence


Quanto emerso dalla vendita di accessi da parte degli Initial Access Broker (IAB) evidenzia, ancora una volta, quanto sia fondamentale disporre di un solido programma di Cyber Threat Intelligence (CTI). Gli IAB rappresentano un anello critico nella catena del cybercrime, fornendo a gruppi ransomware o altri attori malevoli l’accesso iniziale alle infrastrutture compromesse. Identificare tempestivamente queste dinamiche, monitorando forum underground e canali riservati, consente di anticipare minacce prima che si traducano in veri e propri attacchi.

Oggi la CTI non è più un’opzione, ma un pilastro strategico dei programmi di sicurezza informatica. Non si tratta solo di analizzare indicatori di compromissione (IoC) o condividere report: si parla di comprendere il contesto, le motivazioni degli attori ostili e il loro interesse verso specifici settori o target. Una corretta attività di intelligence avrebbe potuto, in questo caso, segnalare anomalie o pattern riconducibili a un potenziale interesse da parte di un broker d’accesso verso l’azienda in questione, fornendo al team di sicurezza il tempo per prepararsi e rafforzare i propri sistemi.

In uno scenario dove il tempo tra l’intrusione iniziale e l’escalation dell’attacco si riduce sempre più, la capacità di raccogliere, analizzare e agire sulle informazioni strategiche rappresenta un vantaggio competitivo e operativo. Integrare la CTI nei processi aziendali non solo riduce il rischio, ma consente di passare da una postura reattiva a una proattiva. Oggi più che mai, nessuna organizzazione può permettersi di ignorare il valore della Cyber Threat Intelligence.

Se sei interessato ad approfondire il mondo del dark web e della Cyber Threat Intelligence, Red Hot Cyber organizza corsi formativi dedicati sia in Live Class che in modalità eLearning. Le Live Class sono lezioni interattive svolte in tempo reale con i nostri esperti, che ti permettono di porre domande, confrontarti con altri partecipanti e affrontare simulazioni pratiche guidate. I corsi in eLearning, invece, sono fruibili in autonomia, disponibili 24/7 tramite piattaforma online, ideali per chi ha bisogno di massima flessibilità, con contenuti aggiornati, quiz e laboratori hands-on. Entrambe le modalità sono pensate per rendere accessibile e comprensibile la CTI anche a chi parte da zero, offrendo un percorso concreto per sviluppare competenze operative nel campo della sicurezza informatica. Per informazioni contatta formazione@redhotcyber.com oppure tramite WhatsApp al numero 379 163 8765.

L'articolo Quale E-commerce italiano da 500 ordini/mese a breve griderà “Data Breach”? proviene da il blog della sicurezza informatica.


Simulating the Commodore PET


A view of the schematics for each major component.

Over on his blog our hacker [cpt_tom] shows us how to simulate the hardware for a Commodore PET. Two of them in fact, one with static RAM and the other with dynamic RAM.

This project is serious business. The simulation environment used is Digital, a digital logic designer and circuit simulator built for educational purposes. It’s a Java program, so it runs anywhere there’s a JVM. It works with .dig files, which are XML files describing the simulated hardware components. You don’t need to write the XML by hand; there is a GUI for that.

This digital simulation from [cpt_tom] is based on the original schematics. To run [cpt_tom]’s code you first need to clone his GitHub repository: github.com/innot/PET-Digital-S…. You will need to install Digital and configure it with the PETComponentsDigitalPlugin.jar Java library that ships with [cpt_tom]’s code (the details are in the blog post linked above).

What’s not in the documentation is that you will need to update the paths to the ROM binaries. This means searching the .dig XML files for “C:\Users\thoma\Documents\Projects\PET-Digital-Simulation” and replacing that path with whichever path actually contains your ROM binaries (they are included in the code from GitHub and keep the same directory structure). The simulation is complete, and the hardware components defined can actually run the binaries in the emulated ROMs.
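If you’d rather not hand-edit every file, a few lines of Python can patch the paths in bulk. This is a hypothetical helper, not part of [cpt_tom]’s repository; it simply performs a text substitution of the hard-coded Windows path in every .dig file under your clone.

```python
# Hypothetical helper: replace the hard-coded ROM path in all .dig files.
# Adjust NEW_BASE to wherever you actually cloned the PET-Digital-Simulation repo.
from pathlib import Path

OLD_BASE = r"C:\Users\thoma\Documents\Projects\PET-Digital-Simulation"
NEW_BASE = "/home/you/PET-Digital-Simulation"   # assumption: your clone location

for dig in Path(NEW_BASE).rglob("*.dig"):
    text = dig.read_text(encoding="utf-8")
    if OLD_BASE in text:
        dig.write_text(text.replace(OLD_BASE, NEW_BASE), encoding="utf-8")
        print(f"patched {dig}")
```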

It is immensely satisfying after you’ve got everything running to enter at the keyboard:
10 PRINT "HELLO, WORLD"
RUN

To be greeted with:
HELLO, WORLD
READY.

This is what technology is all about! 😀

If you do go through the process of downloading this code and loading it in the Digital simulator you will be presented with a complete schematic comprised of the following components: CPU, IEEE-488 Interface, Cassette and Keyboard, ROMS, RAMS, Master Clock, Display Logic, and Display RAMs. All the bits you need for a complete and functional computer!

If you’re interested in the Commodore PET you might also like to check out A Tricky Commodore PET Repair And A Lesson About Assumptions.

Thanks to [Thomas Holland] for writing in to let us know about this one.


hackaday.com/2025/08/26/simula…


Google Will Require Developer Verification Even for Sideloading


Do you like writing software for Android, perhaps even sideload the occasional APK onto your Android device? In that case some big changes are heading your way, with Google announcing that they will soon require developer verification for all applications installed on certified Android devices – meaning basically every mainstream device. Those of us who have distributed Android apps via the Google app store will have noticed this change already, with developer verification in the form of sending in a scan of your government ID now mandatory, along with providing your contact information.

What this latest change thus effectively seems to imply is that workarounds like sideloading or using alternative app stores, like F-Droid, will no longer suffice to escape these verification demands. According to the Google blog post, these changes will be trialed starting in October of 2025, with developer verification becoming ‘available’ to all developers in March of 2026, followed by Google-blessed Android devices in Brazil, Indonesia, Thailand and Singapore becoming the first to require this verification starting in September of 2026.

Google expects that this system will be rolled out globally starting in 2027, meaning that every Google-blessed Android device will maintain a whitelist of ‘verified developers’, not unlike the locked-down Apple mobile ecosystem. Although Google’s claim is that this is for ‘security’, it does not prevent the regular practice of scammers buying up existing – verified – developer accounts, nor does it harden Android against unscrupulous apps. More likely is that this will wipe out Android as an actual alternative to Apple’s mobile OS offerings, especially for the hobbyist and open source developer.


hackaday.com/2025/08/26/google…


Avocado Harvester is A Cut Above


For a farmer or gardener, fruit trees offer a way to make food (and sometimes money) with a minimum of effort, especially when compared to growing annual vegetables. Mature trees can be fairly self-sufficient, and may only need to be pruned once a year if at all. But getting the fruit down from these heights can be a challenge, even if it is on average less work than managing vegetable crops. [Kladrie] created this avocado snipper to help with the harvest of this crop.

Compounding the problem for avocados, even compared to other types of fruit, is their inscrutable ripeness schedule. Some have suggested that cutting the avocados out of the trees rather than pulling them is a way to help solve this issue as well, so [Kladrie] modified a pair of standard garden shears to mount on top of a long pole. A string is passed through the handle so that the user can operate them from the ground, and a small basket catches the fruit before it can plummet to the Earth. A 3D-printed guide helps ensure that the operator can reliably snip the avocados off the tree on the first try without having to flail about with the pole and hope for the best, and the same part holds the basket to the pole as well.

For those living in more northern climates, this design is similar to many tools made for harvesting apples, but the addition of the guide solves one of the biggest problems those tools can have, which is that it’s easy to miss the stems on the first try. Another problem with pulling fruit off the tree, regardless of species, is that it can sometimes fling off its branch in unpredictable ways, which the snipping tool solves as well. Although it might not work as well for avocados, if you end up using this tool for apples we also have a suggestion for what to do with them next.


hackaday.com/2025/08/26/avocad…


Battery Repair By Reverse Engineering


Ryobi is not exactly the Cadillac of cordless tools, but one still has certain expectations when buying a product. For most of us “don’t randomly stop working” is on the list. Ryobi 18-volt battery packs don’t always meet that expectation, but fortunately for the rest of us [Badar Jahangir Kayani] took matters into his own hands and reverse-engineered the pack to find all the common faults– and how to fix them.

[Badar]’s work was specifically on the Ryobi PBP005 18-volt battery packs. He’s reproduced the schematic for them and given a fairly comprehensive troubleshooting guide on his blog. The most common issue (65%) with the large number of batteries he tested had nothing to do with the cells or the circuit, but was the result of some sort of firmware lock.

It isn’t totally clear what caused the firmware to lock the batteries in these cases. We agree with [Badar] that it is probably some kind of glitch in a safety routine. Regardless, if you have one of these batteries that won’t charge and exhibits the characteristic flash pattern (flashing once, then again four times when pushing the battery test button), [Badar] has the fix for you. He has actually written up fixes for a few flash patterns, but the firmware lockout is the one that needed the most work.

[Badar] took the time to find the JTAG pins hidden on the board and dump the firmware from the NXP microcontroller that runs the show. Having done that, some snooping and comparison between bricked and working batteries found a single-byte difference at a specific hex address. Writing that byte to zero and reflashing the firmware results in batteries as good as new. At least as good as they were before the firmware lockdown kicked in, anyway.
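The comparison step is easy to reproduce once you have two dumps in hand. The sketch below (with placeholder file names, not taken from [Badar]’s write-up) walks both images and prints every offset where they differ; with a bricked dump and a healthy one, that list should be very short.

```python
# Compare two firmware dumps and list every differing byte offset.
# File names are placeholders; substitute your own dumps from the MCU.
good = open("pack_working.bin", "rb").read()
bad = open("pack_bricked.bin", "rb").read()

assert len(good) == len(bad), "dumps should be the same size"

for offset, (g, b) in enumerate(zip(good, bad)):
    if g != b:
        print(f"0x{offset:06X}: working=0x{g:02X} bricked=0x{b:02X}")
```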

He also discusses how to deal with unbalanced packs, dead diodes, and more. Thanks to the magic of buying a lot of dead packs on eBay, [Badar] was able to tally up the various failure modes; the firmware lockout discussed above was by far the most common, at 65%. [Badar]’s work is both comprehensive and impressive, and his blog is worth checking out even if you don’t use the green brand’s batteries. We’ve also embedded his video below if you’d rather watch than read and/or want to help [Badar] get pennies from YouTube monetization. We really do have to give him kudos for providing such a good write-up along with the video.

This isn’t the first attempt we’ve seen at tearing into Ryobi batteries. When they’re working, the cheap packs are an excellent source of power for everything from CPAP machines to electric bicycles.

Thanks to [Badar] for the tip.

youtube.com/embed/NQ_lyDyzEHY?…


hackaday.com/2025/08/26/batter…


Automated Brewing


There’s little more to making alcoholic beverages than sugar, water, yeast, and time. Of course those with more refined or less utilitarian tastes may want to invest a bit more care and effort into making their concoctions. For beer making especially this can be a very involved task, but [Fieldman] has come up with a machine that helps automate the process and take away some of the tedium.

[Fieldman] has been making beers in relatively small eight-liter batches for a while now, and although it’s smaller than a lot of home brewers, it lends itself perfectly to automation. Rather than use a gas stove for a larger boil this process is done on a large hot plate, which is much more easily controlled by a microcontroller. The system uses an ESP32 for temperature control, and it also runs a paddle stirrer and controls a screen which lets the brewer know when it’s time to add ingredients or take the next step in the process. Various beers can be programmed in, and the touchscreen makes it easy to know at a glance what’s going on.
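The control side of a build like this can be surprisingly simple. The sketch below shows the general idea of an on/off (hysteresis) temperature controller, written as a plain Python simulation rather than the ESP32 firmware [Fieldman] actually uses; the thermal model and setpoints are invented, and real hardware would swap them for sensor reads and relay writes.

```python
# Minimal on/off (hysteresis) control loop, sketched as a simulation rather
# than real ESP32 firmware; swap the crude thermal model for sensor/relay I/O.
TARGET_C = 67.0       # mash temperature for this step (illustrative)
HYSTERESIS_C = 0.5    # switching band to avoid relay chatter

temp_c = 20.0         # simulated kettle temperature
heater_on = False

for step in range(600):                       # one simulated step per "second"
    if temp_c < TARGET_C - HYSTERESIS_C:
        heater_on = True
    elif temp_c > TARGET_C + HYSTERESIS_C:
        heater_on = False
    # crude thermal model: the hot plate adds heat, the kettle loses a little
    temp_c += 0.15 if heater_on else 0.0
    temp_c -= 0.01 * (temp_c - 20.0) / 10
    if step % 60 == 0:
        print(f"t={step:3d}s  temp={temp_c:5.1f} C  heater={'ON' if heater_on else 'off'}")
```

A real build would likely add safety checks (sensor failure, maximum element temperature) and perhaps a PID loop for tighter control, but the skeleton stays the same.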

For a setup of this size, this is a perfect way to take away some of the hassle of beer brewing, like making sure the heat doesn’t accidentally creep too high or that everything is adequately stirred over the many hours a brew might take, while still leaving the brewer in charge of the important steps.

Beer brewing is a hobby with a lot of rabbit holes to jump down, and it can get as complicated as you like. Just take a look at this larger brewery setup that automates more tasks on a much larger scale.

youtube.com/embed/2098iAXmmrU?…


hackaday.com/2025/08/26/automa…


Picture by Paper Tape


The April 1926 issue of “Science and Invention” had a fascinating graphic. It explained, for the curious, how a photo of a rescue at sea could be in the New York papers almost immediately. It was the modern miracle of the wire photo. But how did the picture get from Plymouth, England, to New York so quickly? Today, that’s no big deal, but set your wayback machine to a century ago.

Of course, the answer is analog fax. But think about it. How would you create an analog fax machine in 1926? The graphic is quite telling. (Click on it to enlarge, you won’t be disappointed.)

If you are like us, when you first saw it you thought: “Oh, sure, paper tape.” But a little more reflection makes you realize that solves nothing. How do you actually scan the photo onto the paper tape, and how can you reconstitute it on the other side? The paper tape is clearly digital, right? How do you do an analog-to-digital converter in 1926?

It Really is a Wire PHOTO


The graphic is amazingly technical in its description. Getting the negative from Plymouth to London is a short plane hop. From there, a photographer creates five prints on specially-coated zinc plates. Where the emulsion stays, the plate won’t conduct electricity. Where the developer removes it, electricity will flow.
The picture of the vessel S.S. Antione sinking (including a magnified inset)
Why five? Well, each print is successively darker. All five get mounted to a drum with five brushes making contact with the plate. Guess how many holes are in the paper tape? If you guessed five, gold star for you.

As you can see in the graphic, each brush drives a punch solenoid. It literally converts the brightness of the image into a digital code, because the photographer made five prints, each one darker than the last. So something totally covered on all five plates gets no holes, and something totally uncovered gets five holes. Everything else gets something in between. This isn’t a five-bit converter: you can only get 00000, 00001, 00011, 00111, 01111, and 11111 out of the machine, for six levels of brightness.
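In modern terms this is a thermometer (unary) code rather than binary. A tiny sketch makes the encode/decode symmetry obvious; the brightness levels and hole patterns are exactly the six values listed above, and the names here are just for illustration.

```python
# Thermometer-code model of the 1926 wire photo tape: N holes = brightness N.
LEVELS = 6  # 0..5 holes punched across the five tracks

def encode(brightness):
    """Return the five-track punch pattern for a brightness level 0-5."""
    return [1 if track < brightness else 0 for track in range(5)]

def decode(holes):
    """The receiver effectively just sums the light passing through the holes."""
    return sum(holes)

for level in range(LEVELS):
    pattern = encode(level)
    assert decode(pattern) == level
    print(level, "".join(map(str, reversed(pattern))))
```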

Decoding


The decoding is also clever. A light passes through the five holes, and optics collimates the light into a single beam. That’s it. If there are no holes in the tape, the beam is dark. The more holes, the brighter it gets. The light hits a film, and then it is back to a darkroom on the other side of the ocean.

The rest of the process is nothing more than the usual way a picture gets printed in a newspaper.

If you want to see the graphic in context, you can grab a copy of the whole magazine (another Hugo Gernsback rag) at the excellent World Radio History site. You’ll also see that you could buy a rebuilt typewriter for $3 and that the magazine was interested if the spirits of the dead can find each other in the afterlife. Note this was the April issue. Be sure to check out the soldering iron described on page 1114. You’ll also see on that page that Big Mouth Bill Bass isn’t the recent fad you thought it was.

We are always fascinated by what smart people would develop if they had no better options. It is easy to think that the old days were full of stone knives and bear skins, but human ingenuity is seemingly boundless. If you want to see really old fax technology, it goes back much further than you would think.


hackaday.com/2025/08/26/pictur…


Troubled USB Device? This Tool Can Help


Close up of a multi-USB tester PCB

You know how it goes — some gadgets stick around in your toolbox far longer than reason dictates, because maybe one day you’ll need it. How many of us held onto ISA diagnostic cards long past the death of the interface?

But unlike ISA, USB isn’t going away anytime soon. Which is exactly why this USB (and more) tester by [Iron Fuse] deserves a spot in your toolbox. This post is not meant to lure you into buying something, but seeing how compact it is, it would be a shame to challenge anyone to reinvent this ‘wheel’ instead of just ordering one.

So, to get into the details. This is far from the first USB tester to appear on these pages, but it is one of the most versatile ones we’ve seen so far. On the surface, it looks simple: a hand-soldered 14×17 cm PCB with twelve different connectors, all broken out to labelled test points. Hook up a dodgy cable or device, connect a known-good counterpart, and the board makes it painless to probe continuity, resistance, or those pesky shorts where D+ suddenly thinks it’s a ground line.

You’ll still need your multimeter (automation is promised for a future revision), but the convenience of not juggling probes into microscopic USB-C cavities is hard to overstate. Also, if finding out whether you have a power-only or a data cable is your goal, this might be the tool for you instead.


hackaday.com/2025/08/26/troubl…


Where There Is No Down: Measuring Liquid Levels in Space


As you can probably imagine, we get tips on a lot of really interesting projects here at Hackaday. Most are pretty serious, at least insofar as they aim to solve a specific problem in some new and clever way. Some, though, are a little more lighthearted, such as a fun project that came across the tips line back in May. Charmingly dubbed “pISSStream,” the project taps into NASA’s official public telemetry stream for the International Space Station to display the current level of the urine tank on the Space Station.

Now, there are a couple of reactions to a project like this when it comes across your desk. First and foremost is bemusement that someone would spend time and effort on a project like this — not that we don’t appreciate it; the icons alone are worth the price of admission. Next is sheer amazement that NASA provides access to a parameter like this in its public API, with a close second being the temptation to look at what other cool endpoints they expose.

But for my part, the first thing I thought of when I saw that project was, “How do they even measure liquid levels in space?” In a place where up and down don’t really have any practical meaning, the engineering challenges of liquid measurement must be pretty interesting. That led me down the rabbit hole of low-gravity process engineering, a field that takes everything you know about how fluids behave and flushes it into the space toilet.

What’s Up?


Before even considering the methods used to measure liquid levels in space, you really have to do away with the concept of “levels.” That’s tough to do for anyone who has spent a lifetime at the bottom of a gravity well, a place where the gravity vector is always straight down at 1 g, fluids always seek their own level, and the densest stuff eventually makes its way to the bottom of a container. None of this applies in space, a place where surface tension and capillary action take the lead role in determining how fluids behave.

We’ve all seen clips of astronauts aboard the Space Shuttle or ISS having fun playing with a bit of water liberated from a drinking pouch, floating in a wobbly spheroid until it gets sucked up with a straw. That’s surface tension in action, forcing the liquid to assume the minimum surface area for a given volume. In the absence of an acceleration vector, fluids will do exactly the same thing inside a tank on a spacecraft. In the Apollo days, NASA used cameras inside the fuel tanks of their Saturn rockets to understand fluid flow during flights. These films showed the fuel level rapidly decreasing while the engines were burning, but the remaining fuel rushing to fill the entire tank with individual blobs of floating liquid once in free-fall. SpaceX does the same today with their rockets, with equally impressive results — apologies for the soundtrack.

youtube.com/embed/u656se4e34M?…

So, getting propellants to the outlets of tanks in rockets turns out to be not much of a chore, at least for boosters, since the acceleration vector is almost always directed toward the nominal bottom of the tank, where the outlets are located. For non-reusable stages, it doesn’t really matter if the remaining fuel floats around once the engines turn off, since it and the booster are just going to burn up upon reentry or end up at the bottom of the ocean. But for reusable boosters, or for rockets that need to be restarted after a period of free fall, the fuel and oxidizer need to be settled back into their tanks before the engines can use them again.

Ullage Motors, Bookkeeping, and Going With The Flow

Ullage motor from a Saturn IB rocket. Motors like these provided a bit of acceleration to settle propellants to the nominal bottom of their tanks. Source: Clemens Vasters, CC BY 2.0.
Settling propellants requires at least a little acceleration in the right direction, which is provided by dedicated ullage motors. In general, ullage refers to the empty space in a closed container, and ullage motors are used to consolidate the mix of gas and liquid in a tank into a single volume. On the Saturn V rockets of the Apollo era, for example, up to a dozen solid-fuel ullage motors on the two upper stages were used to settle propellants.

With all the effort that goes into forcing liquid propellants to the bottom of their tanks, at least for most of the time, you’d think it would be pretty simple to include some sort of level gauging sensor, such as an ultrasonic sensor at the nominal top of the tank to measure the distance to the rapidly receding liquid surface as the engines burn. But in practice, there’s little need for sensing the volume of propellants left in the tank. Rather, the fuel remaining in the tank can be inferred from flow sensors in lines feeding the engines. If you know the flow rate and the starting volume, it’s easy enough to calculate the fuel remaining. SpaceX seems to use this method for their boosters, although they don’t expose a lot of detail to the public on their rocket designs. For the Saturn S-1C, the first stage of the Saturn V rocket, it was even simpler — they just filled the tanks with a known volume of propellants and burned them until they were basically empty.

In general, this is known as the bookkeeping or flow accounting method. This method has the disadvantage of compounding errors in flow measurement over time, but it’s still good enough for applications where some engineering wiggle room can be built in. In fact, this is the method used to monitor the urine tank level in the ISS, except in reverse. When the tank is emptied during resupply missions, the volume resets to zero, and each operation of one of the three Waste & Hygiene Compartments (WHC) aboard the station results in approximately 350 ml to 450 ml of fluid — urine, some flush water, and a small amount of liquid pretreatment — flowing into the urine holding tank. By keeping track of the number of flushes and by measuring the outflow of pretreated urine to the Urine Processing Assembly (UPA), which recycles the urine into potable water, the level of the tank can be estimated.
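A minimal sketch of that bookkeeping approach, with made-up numbers roughly matching the 350 ml to 450 ml per-flush figure: each flush adds a volume in a known range, metered transfers to the UPA subtract from the running total, and the estimate simply resets when the tank is emptied.

```python
# Bookkeeping ("flow accounting") gauge: integrate known in/out flows over time.
# Volumes are illustrative, loosely based on the 350-450 ml per flush figure.
import random

tank_ml = 0.0

def flush():
    """Each WHC use adds urine, flush water, and pretreat to the holding tank."""
    global tank_ml
    tank_ml += random.uniform(350, 450)

def transfer_to_upa(volume_ml):
    """Metered outflow to the Urine Processing Assembly reduces the estimate."""
    global tank_ml
    tank_ml = max(0.0, tank_ml - volume_ml)

for _ in range(20):
    flush()
transfer_to_upa(4000)
print(f"estimated tank level: {tank_ml / 1000:.1f} litres")
```

As the article notes, the weakness of any such method is that measurement errors accumulate until the next hard reset, here the tank being emptied during resupply.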

PUGS In Space

Propellant Utilization Gage Subsystem (PUGS) display from an Apollo Command Module. This is an early version that totals fuel and oxidizer in pounds rather than displaying the percent remaining. The dial indicates if fuel and oxidizer flows become unbalanced. Source: Steve Jurvetson, CC BY 2.0.
Monitoring pee on a space station may be important, but keeping track of propellants during crewed flights is a matter of life or death. During the Apollo missions, a variety of gauging methods were employed for fuel and oxidizer measurements, most of which relied on capacitance probes inside the tanks. The Apollo service module’s propulsion system used the Propellant Utilization Gauging Subsystem, or PUGS, to keep track of the fuel and oxidizer levels onboard.

PUGS relied primarily on capacitive probes mounted axially within the tank. For the fuel tanks, the sensor was a sealed Pyrex tube with a silver coating on the inside. The glass acted as the dielectric between the silver coating and the conductive Aerozine 50 fuel. In the oxidizer tanks, the inhibited nitrogen tetroxide acted as the dielectric, filling the space between concentric electrodes. Once settled with an ullage burn, the level of propellants could be determined by measuring the capacitance across the probes, which would vary with liquid level. Each probe also had a series of point contacts along its length. Measuring the impedance across the contacts would show which points were covered by propellant and which weren’t, giving a lower-resolution reading as a backup to the primary capacitive sensors.
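The principle reduces to a partially filled capacitor: the measured capacitance interpolates between its dry and fully wetted values as the liquid dielectric rises along the probe. Here is a simplified sketch assuming a linear probe and invented calibration values, not actual Apollo numbers.

```python
# Idealized capacitive level probe: C varies linearly with fill fraction.
# C_EMPTY_PF and C_FULL_PF are assumed calibration values for illustration.
C_EMPTY_PF = 120.0   # probe capacitance with gas only
C_FULL_PF = 480.0    # probe capacitance fully immersed in propellant

def fill_fraction(measured_pf):
    """Invert the linear model to recover the fraction of the probe covered."""
    frac = (measured_pf - C_EMPTY_PF) / (C_FULL_PF - C_EMPTY_PF)
    return min(1.0, max(0.0, frac))

for reading in (120.0, 210.0, 300.0, 480.0):
    print(f"{reading:6.1f} pF -> {fill_fraction(reading) * 100:5.1f} % full")
```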

For the Lunar Module, propellant levels for the descent stage were monitored with a similar but simpler Propellant Quantity Gage System (PQGS). Except for an initial ullage burn, fuel settling wasn’t needed during descent thanks to lunar gravity. The LM also used the same propellants as the service module, so the PQGS capacitive probes were the same as the PUGS probes, except for the lack of auxiliary impedance-based sensors. The PQGS capacitive readings were used to calculate the percent of fuel and oxidizer remaining, which was displayed digitally on the LM control panel.

The PQGS probe on the early Apollo landings gave incorrect readings of the remaining propellants thanks to sloshing inside the tanks, a defect that was made famous by Mission Control’s heart-stopping callouts of how many seconds of fuel were left during the Apollo 11 landing. This was fixed after Apollo 12 by adding new anti-slosh baffles to the PQGS probes.

Counting Bubbles


For crewed flights, ullage burns to settle fuel and get accurate tank level measurements are easy to justify. But not so for satellites and deep-space probes, which are lofted into orbit at great expense. These spacecraft can only carry a limited amount of propellant for maneuvering and station-keeping, which has to last months or even years, and the idea of wasting any of that precious allotment on ullage is a non-starter.

To work around this, engineers have devised clever methods to estimate the amount of propellants or other liquids in tanks under microgravity conditions. The pressure-volume-temperature (PVT) method can estimate the volume of fluid remaining based on measurements from pressure and temperature sensors inside the tank and the ideal gas law. Like the flow accounting method, the accuracy of the PVT method tends to decrease over time, mainly because the resolution of pressure sensors tends to get worse as the pressure decreases.
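As a worked example of the PVT idea: if the amount of pressurant gas in the tank is known, the ideal gas law gives the ullage (gas) volume from measured pressure and temperature, and the liquid volume is whatever is left over. The sketch below uses invented numbers purely to illustrate the arithmetic.

```python
# PVT gauging sketch: ullage volume from the ideal gas law, liquid by subtraction.
# All numbers are invented for illustration only.
R = 8.314                  # J/(mol*K)

tank_volume_m3 = 0.50      # total internal tank volume
gas_moles = 30.0           # known amount of pressurant gas in the ullage
pressure_pa = 300e3        # measured tank pressure
temperature_k = 290.0      # measured tank temperature

ullage_m3 = gas_moles * R * temperature_k / pressure_pa   # V = nRT / P
liquid_m3 = tank_volume_m3 - ullage_m3

print(f"ullage: {ullage_m3 * 1000:.0f} L, liquid remaining: {liquid_m3 * 1000:.0f} L")
```

The catch described above is visible in the formula: as the tank empties and pressure drops, small sensor errors translate into progressively larger errors in the computed volume.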

For some fluids, the thermal gauging method might be employed. This is a variation of the PVT method, which involves applying heat to the tank while monitoring the pressure and temperature of its contents. If the thermal characteristics of the process fluid are well known, it’s possible to infer the volume remaining. The downside is that a good thermal model of the tank and its environment is needed; it wouldn’t do, for instance, to have unaccounted heat gain from solar radiation during a measurement, or loss of heat due to conduction to space via the structure of the spacecraft.
Schematic of ECVT, which can be used to measure the volume of fluids floating in a tank in free fall. The capacitance between pairs of electrodes depends on the total dielectric constant of the gas and liquid in between them. Scanning all combinations of electrodes results in a map of the material in a tank. Source: Marashdeh, CC BY-SA 4.0.
For better accuracy, a more recent development in microgravity tank gauging is electrical capacitance volume sensing (ECVS), and the closely related electrical capacitance volume tomography (ECVT). The two methods use arrays of electrodes on the inside surface of a tank. The mixed-phase fluid in the tank acts as a dielectric, allowing capacitance measurements between any pair of electrodes. Readings are collected across each combination of electrodes, which can be used to build a map of where fluid is located within the tank. This is especially useful for tanks where liquid is floating in discrete spheroids. The volume of each of these blobs can be calculated and totalled to give the amount of liquid in the tank.

One promising gauging method, especially for deep-space missions, is radio frequency mass gauging, or RFMG. This method uses a small antenna to inject RF signals into a tank. The liquid inside the tank reflects these signals; analyzing the spectrum of these reflections can be used to calculate the amount of liquid inside the tank. RFMG was tested on the ISS before heading to the Moon aboard Intuitive Machines’ IM-1 lander, which touched down softly on the lunar surface in February of 2024, only to tip over onto its side. Luckily, the RFMG system had nothing to do with the landing anomaly; in fact, the sensor was critical to determining that cryogenic fuel levels in the lander were correct when temperature sensors indicated the tank was colder than expected, potentially pointing to a leak.

youtube.com/embed/OdB_i74H3dc?…


hackaday.com/2025/08/26/where-…


Father and Son Wanted by the FBI: $10 Million Reward for the Hackers Who Worked with the GRU


The FBI is offering a generous reward to anyone who helps locate Amin Stigal, 23, and Timur Stigal, 47, father and son. They are accused of breaching the computer systems of government agencies in Ukraine and in dozens of Western countries. Their record also includes alleged “subversive actions” carried out in collaboration with Russian GRU officers, trafficking in stolen credit card data, extortion, and more.

The Stigal family reportedly now lives in Saratov.

In a conversation with journalists, Timur Stigal admitted to having taken part in certain operations against foreign intelligence services. However, he denies that his son Amin is guilty.

It is worth noting that Amin Stigal himself was also taken by surprise when, on June 26, 2024, an unknown person sent him a link to a Telegram post stating that the FBI had placed him on its wanted list.

At the time, the young man was studying at a technical institute in Khasavyurt, majoring in “information resources operator”, but he was expelled in his first year for truancy. According to US intelligence agencies, Amin, in collaboration with five officers of the GRU of the General Staff of the Armed Forces of the Russian Federation, allegedly took part in hacking computer systems starting in December 2020; in particular, he is said to have been involved in attacks on the Ukrainian portal Diya (the counterpart of Russia’s “Gosuslugi”). At the time, Amin was not even 20 years old.

As a result, in August 2024 a US court issued an arrest warrant for Amin and the FBI posted a reward.

For this reason, he lives in a state of constant stress. He believes he has been accused of something he did not commit.

A report by the UK’s National Cyber Security Centre (NCSC) also implicated the GRU in attempts to interfere in the elections that brought Donald Trump to power. According to the NCSC, the GRU is associated with several hacking groups: Fancy Bear; Sofacy; Pawnstorm; Sednit; CyberCaliphate; CyberBerkut; Voodoo Bear; BlackEnergy Actors; Strontium; Tsar Team; and Sandworm.

Germany, the Netherlands, Australia, and New Zealand have also accused the GRU of conducting a worldwide campaign of “malicious” cyberattacks.

The article Father and Son Wanted by the FBI: $10 Million Reward for the Hackers Who Worked with the GRU comes from il blog della sicurezza informatica.


Confirmation of Record 220 PeV Cosmic Neutrino Hit on Earth


One of the photo-detector spheres of ARCA (Credit: KM3NeT)

Neutrinos are exceedingly common in the Universe, with billions of them zipping around us throughout the day from a variety of sources. Due to their extremely low mass and lack of electric charge they hardly ever interact with other particles, making these so-called ‘ghost particles’ very hard to detect. That said, when they do interact the result is rather spectacular, as they impart significant kinetic energy. The resulting flash of energy is what neutrino detectors look for, with detected neutrino energies generally topping out at around 10 petaelectronvolts (PeV), except for a 2023 event.

This neutrino event, which occurred on February 13th back in 2023, was detected by the KM3NeT/ARCA detector and has now been classified as an ultra-high-energy neutrino event at 220 PeV, suggesting that it was likely a cosmogenic neutrino. When we originally reported on this KM3-230213A event, the data was still being analyzed based on a muon detected from the neutrino interaction event, with the researchers also having to exclude the possibility of a sensor glitch.

By comparing the KM3-230213A event data with data from other events at other detectors, it was possible to deduce that the most likely explanation was one of these ultra-high-energy neutrinos. Since these are relatively rare compared to neutrinos that originate within or near our solar system, it’ll likely take a while before more of them are detected. As the KM3NeT/ARCA detector grid is still being expanded, we may catch many more of them in Earth’s oceans. After all, if a neutrino hits a particle but there’s no sensor around to detect it, we’d never know it happened.


Top image: One of the photo-detector spheres of ARCA (Credit: KM3NeT)


hackaday.com/2025/08/26/confir…


Happy Birthday Windows 95: 30 Years of a System That Changed PCs Forever!


August 24, 2025 marked 30 years since the launch of Windows 95, Microsoft’s first 32-bit consumer operating system aimed at the mass market, which profoundly changed the world of personal computing. In an era of limited home Internet connectivity, software was sold in boxes, and demand was record-breaking: one million copies were sold in the first four days and roughly 40 million within a year.

A modern operating system


Windows 95 marked a turning point in the company’s strategy. After the success of Windows 3.0, Microsoft set out to merge the separate worlds of MS-DOS and Windows into a single user experience. To reach the widest possible audience, the minimum requirements were kept very low: a 386DX processor, 4 MB of RAM, and 50-55 MB of disk space. In practice, many 16-bit “gaming” PCs of the time did not meet these standards, which led to mixed reactions from users at launch.

The main innovations quickly became industry standards: a Start button and Start menu, a unified interface based on Windows Explorer, a full 32-bit Win32 API, and preemptive multitasking.

The system could run software from three generations at once (DOS programs, 16-bit Windows applications, and new 32-bit applications) thanks to a hybrid architecture in which the 16-bit DOS “kernel” acted as a bootloader and compatibility layer. Even the installer relied on several mini-systems in order to support the widest possible range of PC configurations.

The foundations of today’s operating systems


Contrary to popular belief, it was not “DOS 7 with a shell” but a fully fledged 32-bit multitasking operating system, and it set new rules both technologically and in marketing.

Official support for Windows 95 ended in December 2001, but its influence is still felt today, from computing habits to approaches to software development and distribution.

The thirtieth anniversary of Windows 95 is not just a nostalgic celebration: it is recognition of an operating system that marked a turning point in consumer computing. With its unified interface, the Start menu, and 32-bit multitasking, Windows 95 laid the groundwork for the standards of modern operating systems and changed the way millions of people interact with computers.

Its immediate success and mass adoption show how important it was to make technology accessible, keeping minimum requirements low and combining innovation with practicality. Even today, many of the ideas introduced back in 1995, from the user experience to the integration of legacy software, influence the design of contemporary operating systems, confirming Windows 95’s lasting legacy in computing.

youtube.com/embed/wRdl1BjTG7c?…

An explosive launch with the Rolling Stones


For the launch of Windows 95, Microsoft chose the Rolling Stones’ famous song “Start Me Up” as the soundtrack for its advertising campaign. The decision not only made the operating system’s debut memorable, it also marked a turning point in technology marketing.

The choice of “Start Me Up” proved perfect: the song’s title matched the new Start button introduced in Windows 95. Securing the rights to use the track, however, was not easy. According to Brad Chase, head of marketing for Windows 95, Microsoft faced difficult negotiations with the Rolling Stones’ representatives, who initially asked for a considerable sum for the use of the song.

Despite the challenges, an agreement was reached and “Start Me Up” became the soundtrack of one of the most iconic ads in the history of technology. The advertising campaign, which also featured celebrities such as Jay Leno, Jennifer Aniston, and Matthew Perry, helped turn Windows 95 into a cultural phenomenon, drawing the attention of millions of consumers around the world.

The move demonstrated the importance of creative, targeted marketing, capable of tying a technology product to elements of popular culture and creating an emotional bond with the public. The use of “Start Me Up” turned the launch of Windows 95 into a memorable event, cementing its place in computing history.

The article Happy Birthday Windows 95: 30 Years of a System That Changed PCs Forever! comes from il blog della sicurezza informatica.


STAGERSHELL: When Malware Leaves No Trace. The Malware Forge Analysis


In early 2025, an Italian organization fell victim to a stealthy intrusion. No spectacular exploit, no textbook attack. What opened the door to the attackers was a VPN account that had remained active after a former employee left. A simple oversight that allowed the attackers to slip into the network with no apparent effort. From there, the rest was a game of patience: quiet lateral movement, privilege escalation, and months of hidden presence inside the infrastructure.

The analysis was carried out by Manuel Roccon, Alessio Stefan, Bajram Zeqiri (aka Frost), Agostino Pellegrino, Sandro Sana, and Bernardo Simonetto.

Download the STAGERSHELL report produced by Malware Forge

The discovery of StagerShell


During incident response operations, a Blue Team identified two suspicious artifacts. They were not the usual executable files, but PowerShell scripts capable of acting directly in memory.

This is where the protagonist of the report published by Malware Forge, Red Hot Cyber’s malware analysis laboratory, enters the scene: the lab has named it StagerShell. It is a component invisible to less advanced systems, designed to prepare the ground for a more aggressive second stage, in all likelihood a ransomware.

StagerShell’s defining characteristic is its fileless nature. It leaves no files on disk and does not litter the environment with obvious traces; instead, it slips into processes in memory. This approach lets it evade most traditional defenses. In practice, the malware is not a thief breaking down the door but an intruder who quietly blends in with those already living in the building, becoming hard to recognize.

The name is no accident: StagerShell is a “stager”, that is, a springboard. Its job is not to deliver the final blow, but to open an invisible channel through which the real payload can arrive. In other words, it paves the way and makes the work of the main malware simpler and faster; that malware often comes onto the scene only in the final, most devastating phase. It is the prelude to an attack that materializes once the attackers have already gained an enormous tactical advantage.

Similarities with major criminal groups


Malware Forge analysts noted a strong similarity between StagerShell and tools already used by criminal groups such as Black Basta. After that brand collapsed, many of its affiliates moved into organizations such as Akira and Cactus, which are very active in Italy as well, particularly in the North-East, the same area hit in this incident. It was not possible to attribute the attack with certainty, but the context leaves little doubt: this was a ransomware campaign interrupted before the encryption phase. The shadow of exfiltration remains, however: a 16 GB file was found on a Domain Controller, a clear sign that data had already been stolen.

Trivial mistakes and lessons learned


This case clearly shows that it is often not super-exploits that put companies in crisis, but everyday management mistakes. An account that was never disabled, a forgotten credential, a missing control. Add to this the attackers’ ability to combine well-known tools with advanced evasion techniques capable of confusing antivirus software and less advanced defense systems. It is the perfect combination: carelessness on the victims’ side, creativity on the criminals’ side.

The report sends a clear message: security is not static. Firewalls and antivirus are not enough if they are not accompanied by continuous monitoring, constant review of access rights, and the ability to respond quickly to incidents. In this case it was the EDR alerts and the Blue Team’s readiness that prevented the worst. But it is clear that without greater attention, the operation could have ended in mass encryption and a total shutdown of the infrastructure.

Fileless attacks are neither rare nor limited to large multinationals. They are an everyday reality affecting businesses of all sizes. For attackers, Italy, and its industrial areas in particular, is a profitable target: critical supply chains, manufacturing companies that cannot afford downtime, and valuable information to resell or use as leverage for extortion. It is a systemic problem that must be addressed with awareness and seriousness.

Why read the report


The Malware Forge document is not just a technical deep dive. It is a practical tool for understanding how attackers really operate, which mistakes they exploit, and which countermeasures can make a difference. It tells the story of a real case, with concrete actors and dynamics, and turns it into lessons that any organization can apply. It is not theory; it is field experience.

Publishing this kind of analysis means turning knowledge into defense. That is the idea behind Red Hot Cyber: recognize the risk, tell its story, share what has been learned. It is not an academic exercise but a concrete way to make life harder for attackers and to help the security community mature. In this field, knowledge is not power if it stays locked in a drawer: it becomes power only when it is shared.

A wake-up call for everyone


StagerShell teaches us that the most dangerous intrusion is often the one you do not see. It is the signal that has not yet made any noise, the silent presence preparing a devastating attack. Reading the report means becoming aware of these dynamics and taking away three simple convictions: shut down what is not needed, see what matters, react without hesitation.

The next intrusion may already be under way. And, as StagerShell shows, what you cannot see is exactly what puts you most at risk.

Who the Malware Forge specialists are


The Malware Forge specialists are the technical core of the Red Hot Cyber sub-community dedicated to malware analysis. They are professionals with advanced skills in malware analysis, reverse engineering, and offensive and defensive security, able to reconstruct the complex behavior of malicious code and translate it into practical information for companies, institutions, and industry professionals.

Their work is not limited to simply identifying a piece of malware: the specialists study how attackers operate, which vulnerabilities they exploit, how they move laterally within networks, and how they plan their campaigns (also in collaboration with the other Red Hot Cyber groups, such as HackerHood, specialized in ethical hacking, and Dark Lab, specialized in cyber threat intelligence). Thanks to this expertise, Malware Forge produces detailed reports and technical documents, turning raw data into operational and tactical intelligence that helps prevent and mitigate real attacks.

Anyone interested in joining the community and collaborating with Malware Forge can find all the information and details on how to apply in this official article: Malware Forge: nasce il laboratorio di Malware Analysis di Red Hot Cyber.

The article STAGERSHELL: When Malware Leaves No Trace. The Malware Forge Analysis comes from il blog della sicurezza informatica.


VIC-20 Gets ISA Slot, Networking


There are few computing collapses more spectacular than the downfall of Commodore, but its rise as a home computer powerhouse in the early 80s was equally impressive. Driven initially by the VIC-20, this was the first home computer model to sell over a million units thanks to its low cost and accessibility for people outside of niche markets and hobbyist communities.

The VIC-20 would quickly be eclipsed by the much more famous Commodore 64, but for those still using these older machines there are a few tweaks that can add functionality the computer was never originally designed for, like this build which gives it an ISA bus.

To begin adapting the VIC-20 to the ISA standard, [Lee] built a fixed interrupt line handled with a simple transistor circuit. From there he started mapping memory and timing signals. The first attempt to find a portion of memory to use failed, as it wasn’t as unused as he had thought, but eventually he settled on using the I/O area instead, although he still had to solve some problems with quirky ISA timing. There’s also a programmable logic chip, which was needed to generate three additional signals for proper communication.

After solving some other issues around interrupts [Lee] was finally able to get the ISA bus working, specifically so he could add a 3Com networking card and get his VIC-20 on his LAN. Although the ISA bus has since gone out of fashion on modern computers, if you still have a computer with one (or build one onto your VIC-20), it is a surprisingly versatile expansion port.

Thanks to [Stephen] for the tip!


hackaday.com/2025/08/26/vic-20…


RDP Under Fire! 30,000 Unique IP Addresses Probe Exposed Services for Targeted Attacks


Security researchers at GreyNoise have detected a large coordinated scanning operation against Microsoft Remote Desktop Protocol (RDP) services, in which attackers operating from more than 30,000 unique IP addresses probed Microsoft RD Web Access and RDP Web Client authentication portals for weaknesses.

The attack methodology centers on timing-based authentication enumeration, a technique that exploits subtle differences in server response times to identify valid usernames without triggering traditional brute-force detection mechanisms.

This approach allows attackers to build comprehensive target lists for subsequent credential stuffing and password spraying operations while maintaining maximum operational stealth.
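To make the idea of timing-based enumeration concrete, here is a purely illustrative sketch: the usernames and response times are fabricated, no network requests are made, and the point is only that responses associated with existing accounts can stand out from a timing baseline. It is not a reproduction of the attackers’ tooling.

```python
# Illustration only: how response-time differences can hint that a username exists.
# All timings and usernames below are fabricated; no network traffic is involved.
from statistics import median

observed_ms = {          # simulated login-portal response times per username tried
    "mrossi":   512.0,
    "lbianchi": 118.0,
    "gverdi":   121.0,
    "aneri":    503.0,
    "tbruno":   117.0,
}

baseline_ms = median(observed_ms.values())

for user, t in observed_ms.items():
    verdict = "likely valid account" if t > 2 * baseline_ms else "probably unknown"
    print(f"{user:10s} {t:6.1f} ms -> {verdict}")
```

The defensive takeaway is the mirror image: authentication endpoints should aim for constant-time responses regardless of whether an account exists, and unusual volumes of failed logins spread across many usernames deserve alerting.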

According to the GreyNoise researchers, the campaign represents one of the largest coordinated RDP reconnaissance operations observed in recent years, signaling potential preparation for large-scale credential-based attacks. The scanning operation began with a first wave on August 21, 2025, involving nearly 2,000 IP addresses at once.

The timing of the campaign coincides with the back-to-school period in the United States, when educational institutions typically deploy RDP-enabled lab environments and remote access systems for incoming students. This targeting window is strategically significant, since education networks often use predictable username schemes (student IDs, firstname.lastname formats) that make enumeration attacks easier.

Analysis of network telemetry reveals that 92% of the scanning infrastructure consists of IP addresses previously classified as malicious, with source traffic heavily concentrated in Brazil (73% of observed origins) and aimed exclusively at RDP endpoints based in the United States.

The campaign escalated dramatically on August 24, however, when security researchers detected more than 30,000 unique IP addresses conducting coordinated probes using identical client signatures, pointing to a sophisticated botnet infrastructure or a coordinated tooling deployment. The uniform client-signature patterns across 1,851 of the initial 1,971 scanning hosts suggest centralized command-and-control infrastructure typical of Advanced Persistent Threat (APT) operations.

The threat actors are conducting multi-stage reconnaissance, first identifying exposed RD Web Access and RDP Web Client endpoints and then testing authentication workflows for information-disclosure weaknesses. This systematic approach enables the creation of comprehensive target databases containing valid usernames and reachable endpoints for future exploitation campaigns.

Security researchers also observed the same IP infrastructure performing parallel scans for open proxy services and web crawling, indicating a multi-purpose threat toolkit designed for broad network reconnaissance.

The article RDP Under Fire! 30,000 Unique IP Addresses Probe Exposed Services for Targeted Attacks comes from il blog della sicurezza informatica.


The Two Cyber Romans Made It! The Cyberpandino Reaches the Finish Line of the Mongol Rally 2025!


Hello everyone, we are happy (and a little incredulous) to announce that the Cyberpandino has officially reached the finish line of the Mongol Rally 2025! An adventure of more than 17,000 km across 20 countries, with a number of breakdowns, setbacks, and improvised repairs that only a trip like this could provide.

We are tired, yes, but even more motivated: this experience has us dreaming up new ideas, projects, and competitions we would like to take part in. That is exactly why we have decided to bring the first Cyberpandino back to Italy, to keep it alive and share it at trade fairs and industry events, together with the brands that made it possible.

And among those brands is Red Hot Cyber!

Over these 40 days we have collected an enormous amount of photo and video material, which we are organizing into a rich editorial plan for the coming months.

  • Napapijri will premiere a short film dedicated to the adventure in London on September 10.
  • We will produce a mini-documentary covering everything from day zero to the finish line, with the goal not only of generating visibility but also of inspiring other people to throw themselves into unconventional projects.

None of this would have been possible without your support.

Thank you for believing in us and in our first project: now that we are on the road home, we are committed to preparing the agreed content and getting your go-ahead to bring you with us to events and trade fairs, with your brand inscribed on this first Cyberpandino.

To give you an idea of the reach we achieved, here are some figures from Instagram (the channel we used most during the trip):

  • From the departure from Lampedusa around July 1 to today: 2.4M content views, 40K individual interactions (likes, comments, saves), +6K followers (audience 90% male, aged 24-44).
  • From the final work on the car around the end of May to today: more than 4M content views and 70K individual interactions.

These results show how much value we generated together and how much attention our partners’ products and services attracted; they proved truly indispensable to the success of the trip, and we will cover them in detail in the wrap-up content we will be producing.

A sincere thank-you from the whole team,
Roberto, Matteo, and the Cyberpandino

The article The Two Cyber Romans Made It! The Cyberpandino Reaches the Finish Line of the Mongol Rally 2025! comes from il blog della sicurezza informatica.


Nessun Miracolo! L’Università Pontificia Salesiana cade vittima del ransomware


Nella notte del 19 agosto l’infrastruttura informatica dell’Università Pontificia Salesiana (UPS) è stata vittima di un grave attacco informatico che ha reso temporaneamente inaccessibili il sito web e tutti i servizi digitali dell’Ateneo. L’incidente ha determinato un blocco immediato delle attività online, generando disagi per studenti, docenti e personale amministrativo. Non sappiamo se si tratti di ransomware ma le parole “valutare i danni e avviare le operazioni di ripristino” del comunicato stampa fanno pensare a questo.

A seguito dell’attacco, l’Agenzia per la Cybersicurezza Nazionale e la Polizia Postale sono prontamente intervenute per condurre le indagini necessarie e adottare le misure di contenimento. Le autorità competenti stanno infatti lavorando per comprendere le modalità con cui è stato portato a termine l’attacco e per attuare tutte le azioni necessarie alla sicurezza delle infrastrutture digitali coinvolte.

Attualmente è ancora in corso la fase di analisi tecnica per valutare l’effettiva portata del danno. Solo al termine di questa attività sarà possibile stabilire con precisione l’impatto subito e avviare in maniera mirata le operazioni di ripristino. Fino a quel momento, i siti e i servizi online dell’Università Pontificia Salesiana restano non disponibili.

La sospensione riguarda anche la casella di posta elettronica istituzionale con dominio @unisal.it, che al momento non risulta funzionante. Questo ha reso necessario predisporre un indirizzo email alternativo per garantire i contatti urgenti: universitapontificiasalesiana@gmail.com.
Di seguito il testo del comunicato ufficiale diffuso dall’Ateneo:

Nella notte del 19 agosto l’infrastruttura informatica dell’Università Pontificia Salesiana (UPS) è stata oggetto di un grave attacco informatico che ha reso temporaneamente inaccessibili il sito web e tutti i servizi digitali dell’Ateneo. L’Agenzia per la Cybersicurezza Nazionale e la Polizia Postale sono immediatamente intervenute e stanno conducendo tutte le azioni necessarie. È tuttora in corso la fase di analisi per comprendere la reale portata dell’attacco, valutare i danni e avviare le operazioni di ripristino. Al momento i siti e i servizi online dell’UPS non sono disponibili.

Ci scusiamo per il disagio e forniremo aggiornamenti sull’avanzamento dei lavori di riattivazione attraverso i canali ufficiali, compresi i social media e il Canale WhatsApp.

La casella di posta elettronica @unisal.it risulta al momento non funzionante. In caso di necessità, è possibile contattare l’Ateneo scrivendo all’indirizzo: universitapontificiasalesiana@gmail.com
L’Ateneo ha comunicato che continuerà a fornire aggiornamenti sull’andamento delle operazioni di ripristino attraverso i propri canali ufficiali, inclusi i social media e il canale WhatsApp. In questo modo si cerca di mantenere informata la comunità universitaria nonostante l’indisponibilità dei servizi digitali abituali.

Nel messaggio ufficiale, l’Università Pontificia Salesiana ha espresso le proprie scuse per i disagi causati, assicurando il massimo impegno per il ritorno alla piena operatività nel più breve tempo possibile. Le autorità competenti e i tecnici dell’Ateneo restano al lavoro per garantire sicurezza e continuità delle attività didattiche e amministrative.

L'articolo Nessun Miracolo! L’Università Pontificia Salesiana cade vittima del ransomware proviene da il blog della sicurezza informatica.


Very Efficient APFC Circuit in Faulty Industrial 960 Watt Power Supply


The best part about post-mortem teardowns of electronics is when you discover some unusual design features, whether or not these are related to the original fault. In the case of a recent [DiodeGoneWild] video involving the teardown of an industrial DIN-rail mounted 24 V, 960 Watt power supply, the source of the reported bang was easy enough to spot. During the subsequent teardown of this very nicely modular PSU the automatic power factor correction (APFC) board showed it to have an unusual design, which got captured in a schematic and is explained in the video.

Choosing such an APFC design seems to have been done in the name of efficiency, bypassing two of the internal diodes in the bridge rectifier with external MOSFETs and ultrafast diodes. In short, it avoids some of the usual diode voltage drops by removing diodes from the current path.

Although not a new design, as succinctly pointed out in the comments by [marcogeri], the video explains how cutting out even one diode’s worth of voltage drop in a PSU like this can save 10 Watts of losses. Since DIN rail PSUs rarely feature fans for active cooling, this kind of APFC design is highly relevant and helps keep passively cooled PSUs from spiraling into even more of a thermal nightmare.
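
As a rough sanity check (our own back-of-the-envelope numbers, not from the video), assume low-line operation at 115 V AC, roughly 94 % efficiency, and about 1 V of forward drop per conducting diode:

$$
I_{\text{in}} \approx \frac{P_{\text{out}}/\eta}{V_{\text{line}}} = \frac{960\,\text{W}/0.94}{115\,\text{V}} \approx 8.9\,\text{A},
\qquad
P_{\text{diode}} \approx V_f \, I_{\text{in}} \approx 1\,\text{V} \times 8.9\,\text{A} \approx 9\,\text{W}
$$

So shaving one diode out of the conduction path plausibly buys back on the order of 10 W, which matters in a sealed, convection-cooled DIN-rail brick.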

As for the cause behind the sooty skid marks on one of the PCBs, that will be covered in the next video.

youtube.com/embed/UsY1xzpdJPU?…


hackaday.com/2025/08/25/very-e…


The Shady School


We can understand why shaderacademy.com chose that name over “the shady school,” but whatever they call it, if you are looking to brush up on graphics programming with GPUs, it might be just what you are looking for.

The website offers challenges that task you to draw various 2D and 3D graphics using code in your browser. Of course, this presupposes you have WebGPU enabled in your browser which means no Firefox or Safari. It looks like you can do some exercises without WebGPU, but the cool ones will need you to use a Chrome-style browser.

You can search by level of difficulty, so maybe start with “Intro” and try doing “the fragment shader.” You’ll notice they already provide some code for you along with a bit of explanation. It also shows you a picture of what you should draw and what you really drew. You get a percentage based on the matching. There’s also a visual diff that can show you what’s different about your picture from the reference picture.
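
The site doesn’t document how the score is computed, but grading a render by comparing it pixel-by-pixel against the reference image is the obvious guess. Here is a minimal, purely hypothetical sketch of that kind of scoring (NumPy assumed; the tolerance value is made up):

```python
import numpy as np

def match_percentage(rendered: np.ndarray, reference: np.ndarray, tol: int = 8) -> float:
    """Toy scoring: percentage of pixels whose RGB channels are all within
    `tol` of the reference. Both arrays are HxWx3 uint8 images."""
    if rendered.shape != reference.shape:
        raise ValueError("images must have the same shape")
    diff = np.abs(rendered.astype(np.int16) - reference.astype(np.int16))
    close_enough = np.all(diff <= tol, axis=-1)   # per-pixel pass/fail
    return 100.0 * close_enough.mean()

# A solid-red render graded against a solid-red reference scores 100%.
ref = np.full((64, 64, 3), (255, 0, 0), dtype=np.uint8)
print(f"{match_percentage(ref.copy(), ref):.1f}%")
```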

We admit that one is pretty simple. Consider moving on to “Easy” with options like “two images blend,” for example. There are problems at every level of difficulty. Although there is a part for compute shaders, none seem to be available yet. Too bad, because that’s what we find most interesting. If you prefer a different approach, there are other tutorials out there.


hackaday.com/2025/08/25/the-sh…


There’s nothing Mini About this Mini Hasselblad-Style Camera’s Sensor


The camera, lens off to show the 1" sensor.

When someone hacks together a digital camera with a Raspberry Pi, the limiting factor for serious photography is usually the sensor. No offense to the fine folks at the foundation, but even the “HQ” camera, while very good, isn’t quite professional grade. That’s why when photographer [Malcom Wilson] put together this “Mini Hasselblad” style camera, he hacked in a 1″ sensor.

The sensor in question came in the form of a OneInchEye V2, from [Will Whang] on Tindie. The OneInchEye is a great project in its own right: it takes a Sony IMX283 one-inch CMOS image sensor and packages it with an IMU and thermal sensor on a board that hooks up to the 4-lane MIPI interface on the Raspberry Pi CM4 and Pi 5.

Sensor in hand, [Malcom] only needed to figure out power and view-finding. Power is provided by a Geekworm X1200 battery hat. That’s the nice thing about the Pi ecosystem: with so many modules, it’s like Lego for makers. The viewfinder, too, uses a 4″ HDMI screen sold for Pi use, and he’s combined it with a Mamiya C220 TLR viewfinder to give that look-down-and-shoot effect that gives the project the “Mini Hasselblad” moniker.

These are a few images [Malcom] took with the camera. We’re no pros, but at least at this resolution they look good. The steel-PLA case doesn’t hurt in that regard either, with the styling somewhat reminiscent of vintage film cameras. The “steel” isn’t just a colour in this case, and the metal actually makes the PLA conductive, which our photographer friend learned the hard way. Who hasn’t fried components on a surface they didn’t realize was conductive, though? We bet the added weight of the steel in the PLA makes this camera much nicer to hold than it would be in plain plastic, at least.

The OneInchEye module came set up for C-mount lenses, and [Malcom] stuck with that, using some Fujinon TV lenses he already had on hand. [Malcom] has released STL files of his build under a creative-commons noncommercial license, but he’s holding the code back for subscribers to his Substack.

This isn’t the first Pi-based camera we’ve seen from [Malcom], and there have been quite a few others on these pages over the years. There was even a Hackaday version, to test out the “official” module [Malcom] eschewed.

Thanks to [Malcom] for the tip.


hackaday.com/2025/08/25/theres…


Gli Indicator of Attack (IoA): la protezione proattiva in ambito cybersecurity


Con la Threat Intelligence Olympos Consulting supporta le aziende per una cybersecurity predittiva.

Nel panorama della cybersecurity contemporanea, la differenza tra un approccio reattivo e uno proattivo può determinare il successo o il fallimento di una strategia difensiva. Mentre gli Indicatori di Compromissione (IoC) rappresentano ormai uno strumento consolidato, ma limitato principalmente a certificare un attacco già avvenuto, gli Indicatori di Attacco (IoA) sono emersi come un vero e proprio game changer nella lotta alle minacce informatiche.

La vera rivoluzione degli IoA risiede nella loro capacità di interpretare il comportamento dei Threat Actor piuttosto che limitarsi a catalogare evidenze postume. Si tratta di un cambio di paradigma fondamentale: se gli IoC ti dicono “sei stato attaccato” (sigh!), gli IoA ti avvertono “stanno per attaccarti”.

Gli IoA infatti rappresentano pattern di attività che indicano un attacco in corso o in fase di preparazione, ancor prima che l’attacco raggiunga il suo obiettivo. Gli IoA si basano sull’osservazione di tecniche, tattiche e procedure (TTP) utilizzate dai threat actor.

CrowdStrike, colosso americano della cybersecurity focalizzato su threat intelligence e rilevamento proattivo delle minacce, spiega la differenza tra IoC e IoA con un esempio efficace: in una rapina in banca, gli IoC sono le tracce lasciate dopo l’evento – come un cappello dei Baltimore Ravens, un trapano e dell’azoto liquido. Ma cosa accade se lo stesso rapinatore torna con un cappello da cowboy e un piede di porco? In quel caso, riesce comunque nel colpo, perché chi sorveglia si è basato solo su vecchi indicatori (gli IoC), ormai inutili per fermarlo.

Come scritto prima, un IoA riflette, al contrario, una serie di azioni che un cybercriminale (o rapinatore) deve necessariamente compiere per avere successo: entrare nella banca, disattivare gli allarmi, accedere alla cassaforte, e così via.

Il punto di forza dell’approccio basato sugli IoA è la capacità di osservare e analizzare in tempo reale ciò che accade sulla rete, monitorando i comportamenti mentre si manifestano. In questo modo, a differenza degli IoC che reagiscono a un attacco già avvenuto, gli IoA consentono di intervenire in anticipo e bloccare l’attacco prima che provochi danni.

I threat actor utilizzano tecniche sempre più sofisticate, rapide e mirate. Per eludere i controlli, modificano continuamente gli IoC e sfruttano file legittimi del sistema operativo (i cosiddetti LOLBin), che non possono essere semplicemente bloccati senza compromettere il funzionamento dei sistemi. Al contrario, le TTP (Tattiche, Tecniche e Procedure) su cui si basano – come lo sfruttamento di vulnerabilità note o l’uso malevolo di strumenti legittimi, ad esempio msbuild.exe per eseguire codice dannoso direttamente in memoria e aggirare gli antivirus – sono molto più difficili da mascherare. Per questo motivo, risultano più affidabili e durature nel tempo per individuare comportamenti anomali e prevenire gli attacchi.
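
Per fissare le idee, ecco uno schizzo puramente dimostrativo (nostro, non una regola reale di alcun prodotto) di come un IoA su msbuild.exe guardi al comportamento del processo invece che al binario in sé:

```python
# Schizzo ipotetico: msbuild.exe è legittimo, quindi non lo blocchiamo;
# segnaliamo invece i comportamenti tipici del suo abuso come LOLBin.
SUSPICIOUS_PARENTS = {"winword.exe", "excel.exe", "outlook.exe", "powershell.exe"}

def msbuild_ioa(process_name: str, parent_name: str, has_network: bool) -> bool:
    """True se msbuild.exe mostra un comportamento anomalo da segnalare."""
    if process_name.lower() != "msbuild.exe":
        return False
    if parent_name.lower() in SUSPICIOUS_PARENTS:
        return True           # msbuild avviato da un'applicazione utente o da una shell
    return has_network        # msbuild che apre connessioni di rete verso l'esterno

print(msbuild_ioa("MSBuild.exe", "WINWORD.EXE", False))   # True
```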

Adottare un approccio basato sul comportamento dei Threat Actor permette di identificare attività sospette in tempo reale, bloccare attacchi nella loro fase iniziale e rilevare anche minacce sconosciute come gli zero-day.

Gli IoA sono categorizzati in base allo scopo delle azioni osservate: ad esempio, scansioni di porte non autorizzate suggeriscono attività di Reconnaissance, mentre tentativi di brute-force su RDP o accessi da località insolite indicano spesso una fase di Initial Access. Allo stesso modo, comunicazioni anomale verso server esterni possono rivelare la presenza di un canale C2 (Command and Control Server).
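
Sempre a titolo illustrativo (eventi e soglie inventati, non uno strumento reale), la stessa logica di categorizzazione si può immaginare come un insieme di regole applicate alla telemetria:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    kind: str                       # es. "port_scan", "rdp_login_failure", "outbound_connection"
    src: str
    details: dict = field(default_factory=dict)

def classify_ioa(ev: Event) -> str | None:
    """Mappa (giocattolo) dal comportamento osservato alla categoria di IoA."""
    if ev.kind == "port_scan" and ev.details.get("ports_touched", 0) > 100:
        return "Reconnaissance"
    if ev.kind == "rdp_login_failure" and ev.details.get("failures_per_min", 0) > 20:
        return "Initial Access (brute force su RDP)"
    if ev.kind == "login_success" and ev.details.get("geo") not in ("IT",):
        return "Initial Access (accesso da località insolita)"
    if ev.kind == "outbound_connection" and ev.details.get("beacon_interval_s"):
        return "Command and Control"
    return None

print(classify_ioa(Event("port_scan", "10.0.0.5", {"ports_touched": 500})))   # Reconnaissance
```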

Un caso d’uso esemplificativo è quello di Morphing Meerkat.

Nel 2024 è stato identificato un Threat Actor noto con il nome in codice Morphing Meerkat, specializzato nell’offerta di servizi di phishing-as-a-service (PHaaS). La loro piattaforma, scoperta grazie a un’attività di OSINT e threat hunting avanzato, consente a chiunque, dietro pagamento, di lanciare campagne di phishing sofisticate, con moduli pronti all’uso.

es. di analisi comportamentale del Threat Actor Morphing Meerkat

Grazie all’analisi degli IoA è stato possibile identificare attività anomale tra le quali possiamo ricordare: la falsificazione del mittente delle email; l’adozione del protocollo DoH (DNS over HTTPS) per cifrare le richieste DNS; la creazione di pagine di phishing dinamiche sfruttando informazioni ottenute interrogando i record MX del DNS; il reindirizzamento verso infrastrutture legittime.

È proprio in questo contesto che l’esperienza di Olympos Consulting fa la differenza. Combinando behavioral analysis avanzato con threat intelligence derivata da fonti OSINT e dark web, il nostro approccio trasforma dati apparentemente eterogenei in un sistema di Early Warning efficace.

In questo specifico caso abbiamo generato alert tempestivi per i clienti prima che l’attacco avesse effetto e abbiamo suggerito tecniche di rilevamento comportamentale, fornendo una lista di azioni per interrompere la kill chain al primo passo.

Esempi di azioni suggerite: disabilitare l’uso di DoH nei browser permessi in azienda attraverso Group Policy; filtrare i DNS per bloccare gli endpoint DoH noti (es. Cloudflare, Google, Quad9); abilitare la decrittazione SSL/TLS sui Secure Web Gateway (SWG) per analizzare il traffico cifrato DoH.
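
A titolo puramente esemplificativo, il filtraggio del secondo punto si può abbozzare come un controllo delle query DNS verso resolver DoH pubblici noti (lista indicativa, da alimentare con feed di threat intelligence):

```python
# Schizzo minimale: segnala o blocca le risoluzioni verso endpoint DoH noti.
KNOWN_DOH_ENDPOINTS = {
    "dns.google",
    "cloudflare-dns.com",
    "mozilla.cloudflare-dns.com",
    "dns.quad9.net",
}

def should_block(query_name: str) -> bool:
    """True se la query riguarda un endpoint DoH noto."""
    name = query_name.rstrip(".").lower()
    return any(name == ep or name.endswith("." + ep) for ep in KNOWN_DOH_ENDPOINTS)

for q in ("dns.google.", "www.example.com.", "mozilla.cloudflare-dns.com."):
    print(q, "->", "BLOCCA" if should_block(q) else "consenti")
```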

Questa metodologia trasforma la cybersecurity da costoso esercizio di remediation a strategia predittiva.

Come si può capire dagli esempi fatti, l’utilizzo degli IoA permette di passare dalla reazione all’azione. Una cybersecurity proattiva si basa sulla capacità di prevedere i comportamenti del nemico ed interrompere la kill chain prima che l’attacco raggiunga la fase finale, migliorando la resilienza aziendale.

In un mondo dove gli attacchi zero-day e le campagne polimorfiche (e il nome Morphing Meerkat la dice lunga) sono diventati la norma, affidarsi a soluzioni convenzionali significa condannarsi all’obsolescenza. Olympos Consulting, con il suo mix unico di competenze tecniche e intelligence operativa, offre alle aziende la possibilità non solo di difendersi, ma di farlo con un vantaggio temporale che spesso fa la differenza tra un incidente contenuto e una violazione catastrofica.

La cybersecurity del futuro non sarà decisa da chi ha i migliori strumenti per documentare gli attacchi subiti, ma da chi saprà interpretare per primo le intenzioni degli avversari. In questa nuova era, l’analisi comportamentale dei cybercriminali rappresenta la chiave di volta e Olympos Consulting si conferma come partner strategico per le organizzazioni che intendono davvero trasformare la propria postura di sicurezza da passiva a predittiva. Gli Indicatori di Attacco rappresentano il tassello mancante per costruire una difesa davvero efficace. Vuoi scoprire come? Scrivici oggi stesso a “info [@] olymposconsulting [.] it” e trasforma la tua strategia di cybersecurity con l’aiuto dei nostri esperti.

L'articolo Gli Indicator of Attack (IoA): la protezione proattiva in ambito cybersecurity proviene da il blog della sicurezza informatica.


Butta Melta Stops Rock-solid Butter From Tearing Your Toast


Ever ruin a perfectly serviceable piece of toast by trying (and failing) to spread a little pat of rock-solid butter? [John Dingley] doesn’t! Not since he created the Butta Melta to cozily snug a single butter serving right up against a warm beverage, softening it just enough to get nice and spreadable. Just insert one of those foil-wrapped pats of butter into the Melta, hang its chin on the edge of your mug, and you’ll have evenly softened butter in no time.

The Butta Melta is intentionally designed with a bit of personality, but it also has a few features we think are worth highlighting. One is the way it’s clearly designed with 3D printing in mind, making it an easy print on just about any machine in no time at all. The second is the hinge point, which really helps the Butta Melta conform to a variety of cup designs, holding the payload as close as possible to the heat regardless of cup shape. A couple of minutes next to a hot beverage is all it takes for the butter to soften enough to become easily spreadable.

You may remember [John] (aka [XenonJohn]) from his experimental self-balancing scooters, or from a documentary he made about domestic ventilator development during COVID. He taught himself video editing and production to make that, and couldn’t resist using those skills to turn a video demo of the Butta Melta into a mock home shopping style advertisement. Watch it below, embedded just under the page break, then print one and save yourself from the tyranny of torn toast.

youtube.com/embed/hc3DUhguNoI?…


hackaday.com/2025/08/25/butta-…


Pi Port Protection PCB


We’re used to interfaces such as I2C and one-wire as easy ways to hook up sensors and other peripherals to microcontrollers. While they’re fine within the confines of a small project, they do have a few limitations. [Vinnie] ran straight into those limitations while using a Raspberry Pi with agricultural sensors. The interfaces needed to work over long cable runs, and to be protected from ESD due to lightning strikes. The solution? A custom Pi interface board packing differential drivers and protection circuits aplenty.

The I2C connection is isolated using an ISO1541 bus isolator from TI, feeding a PCA9615DP differential I2C bus driver from NXP. 1-wire is handled by a Dallas DS2482S 1-wire bus master and an ESD protection diode network. Even the 5-volt power supply is delivered through an isolated module.
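
For anyone poking at a similar setup from the Pi side, the DS2482 family is driven entirely over I2C with a handful of documented command bytes. A minimal sketch (Python with the smbus2 package; 0x18 is the chip’s default address, though your board’s strapping may differ) that resets the bridge and checks for devices on the 1-wire bus might look like this:

```python
import time
from smbus2 import SMBus

DS2482_ADDR      = 0x18   # default I2C address (AD0/AD1 low); adjust for your board
CMD_DEVICE_RESET = 0xF0   # reset the DS2482 itself
CMD_1WIRE_RESET  = 0xB4   # issue a reset/presence pulse on the 1-wire bus
CMD_SET_READ_PTR = 0xE1
REG_STATUS       = 0xF0   # read-pointer code for the status register

with SMBus(1) as bus:
    bus.write_byte(DS2482_ADDR, CMD_DEVICE_RESET)
    time.sleep(0.001)

    bus.write_byte(DS2482_ADDR, CMD_1WIRE_RESET)
    time.sleep(0.001)                               # crude wait instead of polling the busy bit

    bus.write_byte_data(DS2482_ADDR, CMD_SET_READ_PTR, REG_STATUS)
    status = bus.read_byte(DS2482_ADDR)
    print("presence pulse detected" if status & 0x02 else "no devices answered")   # PPD bit
```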

Whether or not you need this Raspberry Pi board, this is still an interesting project for anyone working with these interfaces. If you’re interested, we’ve looked at differential I2C in the past.


hackaday.com/2025/08/25/pi-por…


CERN’s Large Hadron Collider Runs on A Bendix G-15 in 2025


The Bendix G-15 refurbished by [David at Usagi Electric] is well known as the oldest digital computer in North America. The question [David] gets most is “what can you do with it?”. Well, as a general-purpose computer, it can do just about anything. He set out to prove it. Can a 1950s-era vacuum tube computer handle modern physics problems? This video, several years in the making, follows a journey from [David’s] home base in Texas all the way to CERN’s Large Hadron Collider (LHC) in Switzerland.

Command breakdown

The G-15 can run several “high-level” programming languages, including Algol. The most popular, though, was Intercom. Intercom is an interactive programming language – you can type your program in right at the typewriter. It’s much closer to working with a BASIC interpreter than, say, a batch-processed IBM 1401 with punched cards. We’re still talking about the 1950s, though, so the language mechanics are quite a bit different from what we’re used to today.

To start with, the G-15 is a numeric machine. It can’t even handle the full alphabet. What’s more, all numbers on the G-15 are stored as floating-point values. Commands are sent via operation codes. For example, ADD is operation 43. You have to wrangle an index register and an address as well. Intercom feels a bit like a cross between assembler and tokenized BASIC.

If you’d like to play along, the Intercom manual is available on Bitsavers. (Thanks [Al]!)

In the second half of the video, things take a modern turn. [David’s] friend [Lloyd] recently wrote a high-speed algorithm for the ATLAS detector running at the Large Hadron Collider at CERN. [Lloyd] was instrumental in getting the G-15 up and running. Imagine a career stretching from the early days of computing to modern high-speed data processing. Suffice to say, [Lloyd] is a legend.

There’s some hardcore physics and high-speed data collection involved in ATLAS. [Allison] from SMU does a great job of explaining it all. The short version is: when particles are smashed together, huge amounts of information are collected by detectors and calorimeters. On the order of 145 TB/s (yes, terabytes per second). It would be impossible to store and analyze all that data. Topoclustering is an algorithm that determines whether any given event is important to the researchers or not. The algorithm has to run in less than 1 microsecond, which is why it’s highly pipelined and lives inside an FPGA.
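
Topological clustering is a well-documented idea in its own right. As a rough illustration only (a toy, single-threshold-pair sketch of the seed-and-grow concept, not the actual ATLAS code), picture a 2D grid of calorimeter cells where loud cells seed clusters and quieter neighbours get absorbed:

```python
# Toy seed-and-grow clustering on a 2D grid of cell energies (arbitrary units).
# The real algorithm works on signal significance, in 3D, with more thresholds.
from collections import deque

def topocluster(grid, seed_thr=4.0, grow_thr=2.0):
    rows, cols = len(grid), len(grid[0])
    visited = [[False] * cols for _ in range(rows)]
    clusters = []
    for r in range(rows):
        for c in range(cols):
            if visited[r][c] or grid[r][c] < seed_thr:
                continue
            cluster, queue = [], deque([(r, c)])    # breadth-first growth from the seed
            visited[r][c] = True
            while queue:
                y, x = queue.popleft()
                cluster.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not visited[ny][nx] and grid[ny][nx] >= grow_thr):
                        visited[ny][nx] = True
                        queue.append((ny, nx))
            clusters.append(cluster)
    return clusters

grid = [[0, 0, 1, 0],
        [0, 5, 3, 0],
        [0, 2, 0, 0],
        [0, 0, 0, 6]]
print(topocluster(grid))   # two clusters: one around (1, 1), one lone cell at (3, 3)
```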

Even though it’s written in Verilog, topoclustering is still an algorithm. This means the G-15, being a general-purpose computer, can run it. To that end, [Lloyd] converted the Verilog code to C. But the Bendix doesn’t run C code. That’s where G-15 historian [Rob Kolstad] came in. [Rob] ported the C code to Intercom. [David] punched the program and a sample dataset on a short tape. He loaded up Intercom, then the topoclustering program, and sent the run command. The G-15 sprang to life and performed flawlessly, proving that it is a general-purpose computer capable of running modern algorithms.

youtube.com/embed/2y0DO8d7Az0?…

Curious about the history of this particular Bendix G-15? Check out some of our earlier articles!


hackaday.com/2025/08/25/cerns-…