
Ask Hackaday: Are You Wearing 3D Printed Shoes?


We love 3D printing. We’ll print brackets, brackets for brackets, and brackets to hold other brackets in place. Perhaps even a guilty-pleasure Benchy. But 3D printed shoes? That’s where we start to have questions.

Every few months, someone announces a new line of 3D-printed footwear. Do you really want your next pair of sneakers to come out of a nozzle? Most of the shoes are either limited editions or fail to become very popular.

First World Problem


You might be thinking, “Really? Is this a problem that 3D printing is uniquely situated to solve?” You might assume that this is just some funny designs on some of the 3D model download sites. But no. Adidas, Nike, and Puma have shoes that are at least partially 3D printed. We have to ask why.

We are pretty happy with our shoes just the way that they are. But we will admit, if you insist on getting a perfectly fitting shoe, maybe having a scan of your foot and a custom or semi-custom shoe printed is a good idea. Zellerfeld lets you scan your feet with your phone, for example. [Stefan] at CNC Kitchen had a look at those in a recent video. The company is also in many partnerships, so when you hear that Hugo Boss, Mallet London, and Sean Wotherspoon have a 3D-printed shoe, it might actually be their design from Zellerfeld.

youtube.com/embed/4id0-vvu-u0?…

Or, try a Vivobiome sandal. We aren’t sold on the idea that we can’t buy shoes off the rack, but custom fits might make a little sense. We aren’t sure about 3D-printed bras, though.

Maybe the appeal of 3D-printed shoes lies in their personalizability? Creating self-printed shoes might make sense, so you can change their appearance or otherwise customize them. Maybe you’d experiment with different materials, colors, or subtle changes in designs. Nothing like 30 hours of printing and three filament changes to make one shoe. And that doesn’t explain why the majors are doing it.

Think of the Environment!


There is one possible plus to printing shoes. According to industry sources, more than 20 billion pairs of shoes are made every year, and almost all will end up in landfills. Up to 20% of these shoes will go straight to the dump without being worn even once.

So maybe you could argue that making shoes on demand would help reduce waste. We know of some shoe companies that offer you a discount if you send in an old pair for recycling, although we don’t know if they use them to make new shoes or not. Your tolerance for how much you are willing to pay might correlate to how much of a problem you think trash shoes really are.

But mass-market 3D-printed shoes? What’s the appeal? If you’re desperate for status, consider grabbing a pair of 3D-printed Gucci shoes for around $1,300. But for most of us, are you planning on dropping a few bucks on a pair of 3D-printed shoes? Why or why not? Let us know in the comments.

If you are imagining the big guys printing shoes on an Ender 3, that’s probably not the case. The shoes we’ve seen are made on big commercial printers.


hackaday.com/2025/07/10/ask-ha…


Need a Product Key for Microsoft Windows? No Problem, Just Ask ChatGPT


ChatGPT has once again proven vulnerable to unconventional manipulation: this time it produced valid Windows product keys, including one registered to the major bank Wells Fargo. The vulnerability was discovered during a kind of intellectual provocation: a researcher suggested that the language model play a guessing game, turning the situation into a way around its safety restrictions.

The essence of the vulnerability was a simple but effective bypass of the protection system's logic. ChatGPT 4.0 was invited to take part in a guessing game built around a string, with the stipulation that the string had to be a genuine Windows 10 serial number.

The rules stated that the model could answer guesses only with "yes" or "no" and, when it heard the phrase "I give up", it had to reveal the string. The model accepted the game and, following its built-in logic, actually returned a string corresponding to a Windows license key after the trigger phrase.

The author of the study noted that the main weakness here lies in how the model perceives the context of the interaction. The framing as a "game" temporarily overrode the built-in filters and restrictions, because the model accepted the conditions as an acceptable scenario.

The exposed keys included not only publicly available default keys, but also corporate licenses, including at least one registered to Wells Fargo. This was presumably possible because sensitive information had previously leaked and ended up in the model's training set. There have already been cases of internal information, including API keys, being exposed publicly (for example via GitHub) and accidentally being included in AI training data.

Screenshot of a conversation with ChatGPT (Marco Figueroa)

The second trick used to bypass the filters was the use of HTML tags. The original serial number was "wrapped" inside invisible tags, which let it slip past the keyword-based filter. Combined with the game framing, this method worked as a genuine bypass mechanism, granting access to data that would normally have been blocked.

The situation highlights a fundamental problem in modern language models: despite efforts to build protective barriers (so-called guardrails), the context and form of a request can still be used to get around the filters. To prevent similar incidents in the future, experts suggest strengthening contextual awareness and introducing multi-level validation of requests.

The author stresses that the vulnerability can be exploited not only to obtain keys, but also to bypass filters that protect against unwanted content, from adult material to malicious URLs and personal data. This means that protection methods should become not only stricter, but also much more flexible and proactive.

The article "Need a Product Key for Microsoft Windows? No Problem, Just Ask ChatGPT" originally appeared on il blog della sicurezza informatica.


Code highlighting with Cursor AI for $500,000


Attacks that leverage malicious open-source packages are becoming a major and growing threat. This type of attack now seems commonplace, with reports of infected packages in repositories like PyPI or npm appearing almost daily. It would seem that increased scrutiny from researchers on these repositories should have long ago minimized the profits for cybercriminals trying to make a fortune from malicious packages. However, our investigation into a recent cyber incident once again confirmed that open-source packages remain an attractive way for attackers to make easy money.

Infected out of nowhere


In June 2025, a blockchain developer from Russia reached out to us after falling victim to a cyberattack. He’d had around $500,000 in crypto assets stolen from him. Surprisingly, the victim’s operating system had been installed only a few days prior. Nothing but essential and popular apps had been downloaded to the machine. The developer was well aware of the cybersecurity risks associated with crypto transactions, so he was vigilant and carefully reviewed his every step while working online. Additionally, he used free online services for malware detection to protect his system, but no commercial antivirus software.

The circumstances of the infection piqued our interest, and we decided to investigate the origins of the incident. After obtaining a disk image of the infected system, we began our analysis.

Syntax highlighting with a catch


As we examined the files on the disk, a file named extension.js caught our attention. We found it at %userprofile%\.cursor\extensions\solidityai.solidity-1.0.9-universal\src\extension.js. Below is a snippet of its content:

A request sent by the extension to the server

This screenshot clearly shows the code requesting and executing a PowerShell script from the web server angelic[.]su: a sure sign of malware.

It turned out that extension.js was a component of the Solidity Language extension for the Cursor AI IDE, which is based on Visual Studio Code and designed for AI-assisted development. The extension is available in the Open VSX registry, used by Cursor AI, and was published about two months ago. At the time of this research, the extension had been downloaded 54,000 times; the figure was likely inflated. According to its description, the extension offers numerous features to optimize work with Solidity smart contract code, in particular syntax highlighting:

The extension’s description in the Open VSX registry

We analyzed the code of every version of this extension and confirmed that it was a fake: neither syntax highlighting nor any of the other claimed features were implemented in any version. The extension has nothing to do with smart contracts. All it does is download and execute malicious code from the aforementioned web server. Furthermore, we discovered that the description of the malicious plugin was copied by the attackers from the page of a legitimate extension, which had 61,000 downloads.

How the extension got on the computer


So, we found that the malicious extension had 54,000 downloads, while the legitimate one had 61,000. But how did the attackers manage to lull the developer’s vigilance? Why would he download a malicious extension with fewer downloads than the original?

We found out that while trying to install a Solidity code syntax highlighter, the developer searched the extension registry for solidity. This query returned the following:

Search results for “solidity”: the malicious (red) and legitimate (green) extensions

In the search results, the malicious extension appeared fourth, while the legitimate one was only in eighth place. Thus, while reviewing the search results, the developer clicked the first extension in the list with a significant number of downloads – which unfortunately proved to be the malicious one.

The ranking algorithm trap


How did the malicious extension appear higher in search results than the legitimate one, especially considering it had fewer downloads? It turns out the Open VSX registry ranks search results by relevance, which considers multiple factors, such as the extension rating, how recently it was published or updated, the total number of downloads, and whether the extension is verified. Consequently, the ranking is determined by a combination of factors: for example, an extension with a low number of downloads can still appear near the top of search results if that metric is offset by its recency. This is exactly what happened with the malicious plugin: the fake extension’s last update date was June 15, 2025, while the legitimate one was last updated on May 30, 2025. Thus, due to the overall mix of factors, the malicious extension’s relevance surpassed that of the original, which allowed the attackers to promote the fake extension in the search results.
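
The report doesn't spell out the registry's exact formula, but a purely illustrative score shows how weighting recency against download count can let a newer extension leapfrog a more popular one (the weights and reference date below are invented for demonstration, and rating and verification status are omitted):

from datetime import date

def relevance(downloads, last_update, today=date(2025, 7, 1)):
    # Purely illustrative weights and formula, not Open VSX's actual algorithm.
    recency = max(0.0, 1.0 - (today - last_update).days / 365)  # 1.0 = updated today
    popularity = min(1.0, downloads / 1_000_000)                # saturates at 1M downloads
    return 0.7 * recency + 0.3 * popularity

legit = relevance(61_000, date(2025, 5, 30))
fake = relevance(54_000, date(2025, 6, 15))
print(f"legit: {legit:.3f}  fake: {fake:.3f}")  # the fresher fake edges ahead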

The developer, who fell into the ranking algorithm trap, didn’t get the functionality he wanted: the extension didn’t do any syntax highlighting in Solidity. The victim mistook this for a bug, which he decided to investigate later, and continued his work. Meanwhile, the extension quietly installed malware on his computer.

From PowerShell scripts to remote control


As mentioned above, when the malicious plugin was activated, it downloaded a PowerShell script from https://angelic[.]su/files/1.txt.

The PowerShell script contents

The script checks if the ScreenConnect remote management software is installed on the computer. If not, it downloads a second malicious PowerShell script from: https://angelic[.]su/files/2.txt. This new script then downloads the ScreenConnect installer to the infected computer from https://lmfao[.]su/Bin/ScreenConnect.ClientSetup.msi?e=Access&y=Guest and runs it. From that point on, the attackers can control the infected computer via the newly installed software, which is configured to communicate with the C2 server relay.lmfao[.]su.

Data theft


Further analysis revealed that the attackers used ScreenConnect to upload three VBScripts to the compromised machine:

  • a.vbs
  • b.vbs
  • m.vbs

Each of these downloaded a PowerShell script from the text-sharing service paste.ee. The download URL was obfuscated, as shown in the image below:

The obfuscated URL for downloading the PowerShell script

The downloaded PowerShell script then retrieved an image from archive[.]org. A loader known as VMDetector was then extracted from this image. VMDetector attacks were previously observed in phishing campaigns that targeted entities in Latin America. The loader downloaded and ran the final payload from paste.ee.

Our analysis of the VBScripts determined that the following payloads were downloaded to the infected computer:

  • Quasar open-source backdoor (via a.vbs and b.vbs),
  • Stealer that collected data from browsers, email clients, and crypto wallets (via m.vbs). Kaspersky products detect this malware as HEUR:Trojan-PSW.MSIL.PureLogs.gen.

Both implants communicated with the C2 server 144.172.112[.]84, which resolved to relay.lmfao[.]su at the time of our analysis. With these tools, the attackers successfully obtained passphrases for the developer’s wallets and then syphoned off cryptocurrency.

New malicious package


The malicious plugin didn't last long in the extension store and was taken down on July 2, 2025. By that time, it had already been detected not only by us as we investigated the incident but also by other researchers. However, the attackers continued their campaign: just one day after the removal, they published another malicious package named "solidity", this time exactly replicating the name of the original legitimate extension. The functionality of the fake remained unchanged: the plugin downloaded a malicious PowerShell script onto the victim's device. However, the attackers sought to inflate the number of downloads dramatically: the new extension was supposedly downloaded around two million times. Until recently, the following results appeared when users searched for solidity within the Cursor AI development environment (the plugin has since been removed thanks to our efforts).

Updated search results for “solidity”

The updated search results showed the legitimate and malicious extensions appearing side-by-side in the search rankings, occupying the seventh and eighth positions respectively. The developer names look identical at first glance, but the legitimate package was uploaded by juanblanco, while the malicious one was uploaded by juanbIanco. The font used by Cursor AI makes the lowercase letter l and uppercase I appear identical.

Therefore, the search results displayed two seemingly identical extensions: the legitimate one with 61,000 downloads and the malicious one with two million downloads. Which one would the user choose to install? Making the right choice becomes a real challenge.

Similar cyberattacks


It’s worth noting that the Solidity extensions we uncovered are not the only malicious packages published by the attackers behind this operation. We used our open-source package monitoring tool to find a malicious npm package called “solsafe”. It uses the URL https://staketree[.]net/1.txt to download ScreenConnect. In this campaign, it’s also configured to use relay.lmfao[.]su for communication with the attackers.

We also discovered that April and May 2025 saw three malicious Visual Studio Code extensions published: solaibot, among-eth, and blankebesxstnion. The infection method used in these threats is strikingly similar to the one we described above. In fact, we found almost identical functionality in their malicious scripts.

Scripts downloaded by the VS Code extension (left) vs. Solidity Language (right)

In addition, all of the listed extensions perform the same malicious actions during execution, namely:

  • Download PowerShell scripts named 1.txt and 2.txt.
  • Use a VBScript with an obfuscated URL to download a payload from paste.ee.
  • Download an image with a payload from archive.org.

This leads us to conclude that these infection schemes are currently being widely used to attack blockchain developers. We believe the attackers won’t stop with the Solidity extensions or the solsafe package that we found.

Takeaways


Malicious packages continue to pose a significant threat to the crypto industry. Many projects today rely on open-source tools downloaded from package repositories. Unfortunately, packages from these repositories are often a source of malware infections. Therefore, we recommend extreme caution when downloading any tools. Always verify that the package you’re downloading isn’t a fake. If a package doesn’t work as advertised after you install it, be suspicious and check the downloaded source code.

In many cases, malware installed via fake open-source packages is well-known, and modern cybersecurity solutions can effectively block it. Even experienced developers must not neglect security solutions, as these can help prevent an attack in case a malicious package is installed.

Indicators of compromise


Hashes of malicious JS files
2c471e265409763024cdc33579c84d88d5aaf9aea1911266b875d3b7604a0eeb
404dd413f10ccfeea23bfb00b0e403532fa8651bfb456d84b6a16953355a800a
70309bf3d2aed946bba51fc3eedb2daa3e8044b60151f0b5c1550831fbc6df17
84d4a4c6d7e55e201b20327ca2068992180d9ec08a6827faa4ff3534b96c3d6f
eb5b35057dedb235940b2c41da9e3ae0553969f1c89a16e3f66ba6f6005c6fa8
f4721f32b8d6eb856364327c21ea3c703f1787cfb4c043f87435a8876d903b2c

Network indicators
https://angelic[.]su/files/1.txt
https://angelic[.]su/files/2.txt
https://staketree[.]net/1.txt
https://staketree[.]net/2.txt
https://relay.lmfao[.]su
https://lmfao[.]su/Bin/ScreenConnect.ClientSetup.msi?e=Access&y=Guest
144.172.112[.]84


securelist.com/open-source-pac…


An Emulated Stroll Down Macintosh Memory Lane


Screenshot of "Frame of Preference"

If you’re into Macs, you’ll always remember your first. Maybe it was the revolutionary classic of 1984 fame, perhaps it was the adorable G3 iMac in 1998, or even a shiny OS X machine in the 21st century. Whichever it is, you’ll find it emulated in [Marcin Wichary]’s essay “Frame of preference: A history of Mac settings, 1984–2004” — an exploration of the control panel and its history.
Image of PowerBook showing the MacOS 8.0 desktop. That's not a photograph, it's an emulator. (At least on the page. Here, it's a screenshot.)
[Marcin] is a UI designer as well as an engineer and tech historian, and his UI chops come out in full force, commenting and critiquing Cupertino's coercions. The writing is excellent, as you'd expect from the man who wrote the book on keyboards, and it provides a fascinating look at the world of retrocomputing through the eyes of a designer. That design-focused outlook is very apropos for Apple in particular. (And NeXT, of course, because you can't tell the story of Apple without it.)

There are ten emulators on the page, provided by [Mihai Parparita] of Infinite Mac. It's like a virtual museum with a particularly knowledgeable tour guide — and it's a blast getting hands-on with the design changes being discussed. There's a certain amount of gamification, with each system having suggested tasks and a completion score when you finish reading. There are even Easter eggs.

This is everything we wish the modern web was like: the passionate deep-dives of personal sites on the Old Web, but enhanced and enabled by modern technology. If you’re missing those vintage Mac days and don’t want to explore them in browser, you can 3D print your own full-size replica, or a doll-sized picoMac.


hackaday.com/2025/07/10/an-emu…


CloudFlare, WordPress, and an API Key at Risk Because of an Innocent Autocomplete


Can a missing attribute on an API key input field really pose a risk?

You have surely noticed that the browser suggests data after you have filled in a form before. Autocomplete is exactly that: the feature that remembers what we type.

By default, browsers remember the information users enter into fields on websites. This is what allows them to offer these automatic suggestions.

How can this become a danger?

Let's examine the CloudFlare plugin for WordPress:

In this example we look at a CloudFlare plugin that lets you connect a WordPress site to a Cloudflare instance.

This makes it possible to perform various tasks, such as purging the cache, which is often useful after a site update.

github.com/cloudflare/Cloudfla…

Once installed, the configuration asks you to enter an email address and an API key.

The API key is a generated string to which a set of permissions can be assigned.

With that key, the web app can then talk to CloudFlare to perform various actions, such as the cache purge mentioned above.

Let's see what happens if we had already entered an API key in the past.

The form remembers our API key: that, in simple terms, is the autocomplete feature at work.

What actually happens when the browser saves this information?

Every browser keeps a local file where it saves this autocomplete data.

In Chrome, for example, this data is stored in a file at this path:

C:\Users\[user]\AppData\Local\Google\Chrome\User Data\Default\Web Data

Opening the file, you can in fact recognize the key that was just entered in the form.

Why, then, could this be a risk?


This API key can be stolen if the user's system is compromised and the file is exfiltrated.

This is just one example: as mentioned earlier, fields like these could also contain credit card data and much more.

We can see in analysis reports on various infostealers that autocomplete data is precisely one of their targets.

Reading some reports from SonicWall and Avira, we see that many of these infostealers target exactly these files.

sonicwall.com/blog/infostealer…

Very often, infostealers hunt for these files in order to exfiltrate them from the compromised system.

Further evidence in this Avira report:

avira.com/en/blog/fake-office-…

Both reports clearly show that these files are of interest to infostealers.

Finally, we can dig even deeper, going as far as IntelX, for further confirmation against a real leak.

How can this risk be mitigated?


Inputs such as passwords are not saved by the browser in this way (more precisely, the browser's password keychain is used instead).

If, on the other hand, the API key uses a plain input field, you can add the autocomplete="off" attribute to tell the browser that this value must not be stored in the autocomplete file.

For example, you can add it to the individual input fields (the username, password, or API key inputs), or to the form element as a whole.

Setting autocomplete="off" on form fields has two effects:

It tells the browser not to save the data entered by the user for later autocompletion in similar forms (some browsers make exceptions for special cases, for example asking users whether to save passwords).

It prevents the browser from caching the form data in the session history. When form data is cached in the session history, the information the user entered is shown again if the user has submitted the form and then clicks the Back button to return to the original form page.

And indeed, analyzing the plugin's HTML code, this attribute is not present on the API key field.

Conclusions


Most browsers have a feature for remembering data entered in HTML forms.

These features are usually enabled by default, but they can be a problem for users, so browsers also allow them to be turned off.

However, some data submitted in forms is of no use beyond the current interaction (for example, a one-time PIN), or contains sensitive information (for example, a unique government identifier or a credit card security code), or an API token.

The autocomplete data stored by the various browsers can be captured by a malicious actor.

Moreover, an attacker who finds a separate vulnerability in the application, such as cross-site scripting, might be able to exploit it to retrieve the credentials stored by the browser. For security and privacy reasons, JavaScript cannot access the browser's autofill data directly; however, autofill can populate HTML fields automatically, and JavaScript can read the values of those fields once they have been filled in.
const email = document.querySelector('#email').value;
console.log(email); // If the browser has already filled in the field, this value will be accessible
That said, failing to disable autocomplete may also cause problems when seeking PCI compliance (PortSwigger).

The article "CloudFlare, WordPress, and an API Key at Risk Because of an Innocent Autocomplete" originally appeared on il blog della sicurezza informatica.


Generatively-Designed Aerospike Test Fired


The aerospike engine holds great promise for spaceflight, but for various reasons, has remained slightly out of reach for decades. But thanks to Leap 71, the technology has moved one step closer to a spacecraft near you with the test fire of their generatively-designed, 3D printed aerospike.

We reported on the original design process of the engine, but at the time it hadn’t been given a chance to burn its liquid oxygen and kerosene fuel. The special sauce was the application of a computational physics model to tackle the complex issue of keeping the engine components cool enough to function while directing 3,500˚C exhaust around the eponymous spike.

Printed via a powder bed process out of CuCrZr, cleaned, heat treated, and then prepped by the University of Sheffield’s Race 2 Space Team, the rocket produced 5,000 Newtons (1,100 lbf) of thrust during its test fire. For comparison, VentureStar, the ill-fated aerospike single stage to orbit project from the 1990s, was projected to produce more than 1,917 kilonewtons (431,000 lbf) from each of its seven RS-2200 engines. Leap 71 obviously has some scaling up to do before this can propel any crewed spacecraft.

If you want to build your own aerospike or 3D printed rocket nozzles we encourage you to read, understand, and follow all relevant safety guidelines when handling your rockets. It is rocket science, after all!


hackaday.com/2025/07/10/genera…


It Smiles, Speaks 15 Languages, and Will Never Answer "Let's Talk Tomorrow", Because It's Made of Plastic


While some debate the potential risks of artificial intelligence, others are confidently integrating it into everyday life. The American company Realbotix, which specializes in the development of humanoid robots, has presented a major update: its humanoid now speaks 15 languages fluently and understands another 147, thanks to a cloud-based speech processing system.

The technology is conceived as a bridge between machines and people. A robot that can speak the user's native language is not just convenient: it also creates the illusion of real communication. This feature is particularly relevant for sectors where empathy and personal contact are key: tourism, hotels, healthcare, museums, and even amusement parks.

Realbotix stresses that multilingualism makes it possible to establish contact with people from different cultures and professions. The language barrier remains one of the main obstacles to good service, and a universal assistant that speaks Japanese, Arabic, French, Hindi, and dozens of other languages can replace dozens of specialists, with no need for retraining or staff replacement.

Another important innovation is the ability to connect to third-party AI platforms. Thanks to this, the same robot can take on different roles and try its hand at hundreds of professions, simply by switching the software scenario.

Externally, the android looks almost human: skin, facial expressions, realistic movements. Its task is not just to provide information, but to establish a kind of contact with the person. In a clinic, for example, it can greet a patient, ask questions, pick up on their emotional state, and pass the data on to the doctor, saving time and reducing the workload of nursing staff.

In public spaces such as hotels, terminals, and information centers, a humanoid can replace an information desk: explaining where the exit or the check-in counter is, in the right language and with visual cues. And all of this without fatigue, mistakes, or bad moods.

The developers emphasize that the goal is not to replace people, but to make life in society as comfortable as possible. Humanoid robots can compensate for staff shortages and provide a consistent level of service. Realbotix's philosophy is to create technologies that make interaction warmer, not colder.

According to Research and Markets data published by EE News Europe, the global humanoid robot market will grow from $2.93 billion in 2025 to $243.4 billion by 2035. The reasons are labor shortages, growing demand for automation, and the increasing need for customer support.

The cost of such a device ranges from $20,000 to $175,000, depending on its functions. That is less than many years' salary in large cities, and the device will never ask for vacation or call in sick at the most critical moment.

The article "It Smiles, Speaks 15 Languages, and Will Never Answer 'Let's Talk Tomorrow', Because It's Made of Plastic" originally appeared on il blog della sicurezza informatica.


0day RCE Exploits for WinRAR and WinZIP for Sale on exploit.in for Devastating Phishing Emails


In recent days, on the well-known underground forum exploit.in (currently closed and accessible by invitation only), exploits for a 0day vulnerability affecting the popular WinRAR and WinZIP software have been put up for sale. The listing, posted by the user zeroplayer, offers the exploits for between $80,000 and $100,000.

The seller specifies that this is not a simple 1day (that is, an exploit for an already known vulnerability such as CVE-2025-6218), but an unknown and not yet patched bug.

What exploits are and what "0day" means


Exploits are tools or pieces of code that take advantage of software vulnerabilities to obtain behavior the program never intended, such as executing malicious code, stealing data, or taking full control of a system.

When we talk about 0days, we mean vulnerabilities that are not yet known to the software vendor and for which no patches exist: this is exactly why they are so valuable on the black market and so incredibly dangerous.

Why bugs in software like WinRAR or ZIP tools are so critical


WinZIP and WinRAR are among the most widely used programs in the world for handling compressed archives such as ZIP and RAR files. An RCE (Remote Code Execution) vulnerability in this type of program allows an attacker to execute malicious code simply by getting the victim to open or preview a compromised archive.

A possible attack scenario involves phishing emails in which the user receives a seemingly harmless ZIP or RAR attachment. A single click is enough to trigger the exploit and fully compromise the system, installing malware, ransomware, or backdoors for remote control.

The role of underground forums like exploit.in


Closed forums like exploit.in act as genuine marketplaces for buying and selling vulnerabilities, malware, stolen data, and other tools used in cybercrime. Users selling exploits, as in the case of zeroplayer, often offer guarantees of reliability through internal escrow services called Garant, which act as intermediaries to prevent scams between criminals.

The user zeroplayer, who posted the listings, appears to be a new profile that has not yet built up an established reputation. Registered on the exploit.in forum only on June 30, 2025, the account has just 3 posts and has not yet completed any transactions certified through the platform's internal Garant system, which usually serves to reduce the risk of scams between sellers and buyers.

Although the account was registered with a paid registration (a common practice on the more closed underground forums to filter out fake and inactive accounts), this element alone is not enough to make it trustworthy in the eyes of the community. Such a recent account could point to two opposite scenarios: on the one hand, a vendor who really does possess a very valuable exploit and chooses to open a new profile for anonymity; on the other, an attempted fraud to monetize the fear around a critical and still unknown vulnerability. The lack of feedback and past activity makes it hard to tell the two apart, but it underlines how difficult it is, even in cybercrime circles, to trust anyone without concrete proof that the exploit on offer exists and works.

The sale of a 0day exploit for WinRAR represents a serious threat, given the software's global reach. It is another reminder of the importance of keeping programs up to date, using reliable security tools, and paying close attention to suspicious emails, especially those containing compressed attachments.

The article "0day RCE Exploits for WinRAR and WinZIP for Sale on exploit.in for Devastating Phishing Emails" originally appeared on il blog della sicurezza informatica.


Solder Smarts: Hands-Free Fume Extractor Hack


fume extractor

[Ryan] purchased a large fume extractor designed to sit on the floor below the work area and pull solder fumes down into its filtering elements. The only drawback to this new filter was that its controls were located near his feet. Rather than kicking at his new equipment, he devised a way to automate it.

By adding a Wemos D1 Mini microcontroller running ESPHome, a relay board, and a small AC-to-DC transformer, [Ryan] can now control the single push button used to cycle through speed settings wirelessly. Including the small transformer inside was a clever touch, as it allows the unit to require only a single power cable while keeping all the newfound smarts hidden inside.

The relay controls the button in parallel, so the physical button still works. Now that the extractor is integrated with Home Assistant, he can automate it. The fan can be controlled via his phone, but even better, he automated it to turn on by monitoring the power draw on the smart outlet his soldering iron is plugged into. When he turns on his iron, the fume extractor automatically kicks in.

Check out some other great automations we’ve featured that take over mundane tasks.


hackaday.com/2025/07/09/solder…


Volume Controller Rejects Skeumorphism, Embraces the Physical


The volume slider on our virtual desktops is a skeuomorphic callback to the volume sliders on professional audio equipment on actual, physical desktops. [Maker Vibe] decided that this skeuomorphism was so last century, and made himself a physical audio control box for his PC.

Since he has three audio outputs he needs to consider, the peripheral he creates could conceivably be called a fader. It certainly has that look, anyway: each output is controlled by a volume slider — connected to a linear potentiometer — and a mute button. Seeing a linear potentiometer used for volume control threw us for a second, until we remembered this was for the computer’s volume control, not an actual volume control circuit. The computer’s volume slider already does the logarithmic conversion. A Seeed Studio Xiao ESP32S3 lives at the heart of this thing, emulating a Bluetooth gamepad using a library by LemmingDev. A trio of LEDs round out the electronics to provide an indicator for which audio channels are muted or active.

Those Bluetooth signals are interpreted by a Python script feeding software called Voicemeeter Banana, because [Maker Vibe] uses Windows, and Redmond's finest operating system doesn't expose audio controls in an easily-accessible way. Voicemeeter Banana (and its attendant Python script) takes care of telling Windows what to do.

The whole setup lives on [Maker Vibe]'s desk in a handsome 3D printed box. He used a Cricut vinyl cutter to cut out masks so he could airbrush different colours onto the print after sanding down the layer lines. That's another one for the archive of how to make front panels.

If volume sliders aren't doing it for you, perhaps you'd prefer to control your audio with a conductor's baton.

youtube.com/embed/e0OYAANYKug?…


hackaday.com/2025/07/09/volume…


How To Train A New Voice For Piper With Only A Single Phrase


[Cal Bryant] hacked together a home automation system years ago, which more recently utilizes Piper TTS (text-to-speech) voices for various undisclosed purposes. Not satisfied with the robotic-sounding standard voices available, [Cal] set about an experiment to fine-tune the Piper TTS AI voice model using a clone of a single phrase created by a commercial TTS voice as a starting point.

Before the release of Piper TTS in 2023, existing free-to-use TTS systems such as espeak and Festival sounded robotic and flat. Piper delivered much more natural-sounding output, without requiring massive resources to run. To change the voice style, the Piper AI model can be either retrained from scratch or fine-tuned with less effort. In the latter case, the problem to be solved first was how to generate the necessary volume of training phrases to run the fine-tuning of Piper’s AI model. This was solved using a heavyweight AI model, ChatterBox, which is capable of so-called zero-shot training. Check out the Chatterbox demo here.
As the loss function gets smaller, the model’s accuracy gets better
Training began with a corpus of test phrases in text format to ensure decent coverage of everyday English. [Cal] used ChatterBox to clone audio from a single test phrase generated by a ‘mystery TTS system’ and created 1,300 test phrases from this new voice. This audio set served as training data to fine-tune the Piper AI model on the lashed-up GPU rig.

To verify accuracy, [Cal] used OpenAI’s Whisper software to transcribe the audio back to text, in order to compare with the original text corpus. To overcome issues with punctuation and differences between US and UK English, the text was converted into phonemes using espeak-ng, resulting in a 98% phrase matching accuracy.
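
As a rough sketch of what that phoneme-level check could look like (assuming espeak-ng is on the PATH; the file names and match threshold here are our own invention, not [Cal]'s exact pipeline):

import difflib
import subprocess

def to_phonemes(text):
    # Ask espeak-ng for an IPA phoneme rendering without speaking it aloud
    # (-q = quiet, --ipa = write phonemes to stdout).
    result = subprocess.run(["espeak-ng", "-q", "--ipa", text],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

def phrase_matches(original, transcript, threshold=0.95):
    # Compare phoneme strings rather than raw text, so punctuation and
    # US/UK spelling differences don't register as errors.
    ratio = difflib.SequenceMatcher(None, to_phonemes(original),
                                    to_phonemes(transcript)).ratio()
    return ratio >= threshold

# Hypothetical usage: corpus.txt holds the original phrases, one per line,
# and transcripts.txt the Whisper output in the same order.
with open("corpus.txt") as f1, open("transcripts.txt") as f2:
    results = [phrase_matches(a.strip(), b.strip()) for a, b in zip(f1, f2)]
print(f"{100 * sum(results) / len(results):.1f}% of phrases matched")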

After down-sampling the training set using SoX, it was ready for the Piper TTS training system. Despite all the preparation, running the software felt anticlimactic. A few inconsistencies in the dataset necessitated the removal of some data points. After five days of training parked outside in the shade due to concerns about heat, TensorBoard indicated that the model’s loss function was converging. That’s AI-speak for: the model was tuned and ready for action! We think it sounds pretty slick.

If all this new-fangled AI speech synthesis is too complex and, well, a bit creepy for you, may we offer a more 1980s solution to making stuff talk? Finally, most people take the ability to speak for granted, until they can no longer do so. Here’s a team using cutting-edge AI to give people back that ability.


hackaday.com/2025/07/09/how-to…


No Tension for Tensors?


We always enjoy [FloatHeadPhysics] explaining any math or physics topic. We don’t know if he’s acting or not, but he seems genuinely excited about every topic he covers, and it is infectious. He also has entertaining imaginary conversations with people like Feynman and Einstein. His recent video on tensors begins by showing the vector form of Ohm’s law, making it even more interesting. Check out the video below.

If you ever thought you could use fewer numbers for many tensor calculations, [FloatHeadPhysics] had the same idea. Luckily, imaginary Feynman explains why this isn’t right, and the answer shows the basic nature of why people use tensors.

The spoiler: vectors and even scalars are just a special case of tensors, so you use tensors all the time, you just don’t realize it. He works through other examples, including an orbital satellite and a hydroelectric dam.
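
As a quick numerical illustration of that Ohm's law point (the conductivity values below are made up, and this snippet isn't from the video), compare a plain scalar conductivity with a rank-2 conductivity tensor:

import numpy as np

E = np.array([1.0, 0.0, 0.0])  # electric field along x

# Scalar (rank-0 tensor) conductivity: current is always parallel to E.
J_isotropic = 5.0 * E

# Rank-2 conductivity tensor for an anisotropic material (illustrative values):
# the off-diagonal terms let current flow at an angle to the applied field.
sigma = np.array([[5.0, 1.0, 0.0],
                  [1.0, 3.0, 0.0],
                  [0.0, 0.0, 2.0]])
J_anisotropic = sigma @ E

print(J_isotropic)    # [5. 0. 0.] -- parallel to E
print(J_anisotropic)  # [5. 1. 0.] -- picks up a sideways component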

We love videos that help us have aha moments about complex math or physics. It is easy to spew formulas, but there’s no substitute for having a “feeling” about how things work.

The last time we checked in with [FloatHeadPhysics], he convinced us we were already travelling at the speed of light. We’ve looked at a simple tensor explainer before, if you want a second approach.

youtube.com/embed/k2FP-T6S1x0?…


hackaday.com/2025/07/09/no-ten…


FLOSS Weekly Episode 840: End-of-10; Not Just Some Guy in a Van


This week Jonathan chats with Joseph P. De Veaugh-Geiss about KDE’s eco initiative and the End of 10 campaign! Is Open Source really a win for environmentalism? How does the End of 10 campaign tie in? And what does Pewdiepie have to do with it? Watch to find out!


youtube.com/embed/COdArYxZWgg?…

Did you know you can watch the live recording of the show right on our YouTube Channel? Have someone you’d like us to interview? Let us know, or contact the guest and have them contact us! Take a look at the schedule here.

play.libsyn.com/embed/episode/…

Direct Download in DRM-free MP3.

If you’d rather read along, here’s the transcript for this week’s episode.

Places to follow the FLOSS Weekly Podcast:


Theme music: “Newer Wave” Kevin MacLeod (incompetech.com)

Licensed under Creative Commons: By Attribution 4.0 License


hackaday.com/2025/07/09/floss-…


Dithering With Quantization to Smooth Things Over


It should probably come as no surprise to anyone that the images which we look at every day – whether printed or on a display – are simply illusions. That cat picture isn’t actually a cat, but rather a collection of dots that when looked at from far enough away tricks our brain into thinking that we are indeed looking at a two-dimensional cat and happily fills in the blanks. These dots can use the full CMYK color model for prints, RGB(A) for digital images or a limited color space including greyscale.

Perhaps more interesting is the use of dithering to further trick the mind into seeing things that aren't truly there by adding noise. Simply put, dithering is the process of adding noise to reduce quantization error, which in images shows up as artefacts like color banding. Within the field of digital audio, dithering is also used, for similar reasons. Part of the process of going from an analog signal to a digital one involves throwing away data that falls outside the sampling rate and quantization depth.

By adding dithering noise these quantization errors are smoothed out, with the final effect depending on the dithering algorithm used.

The Digital Era

Plot of a quantized signal and its error. (Source: Wikimedia)
For most of history, humanity's methods of visual-auditory recording and reproduction were analog, starting with methods like drawing and painting. Until fairly recently, reproducing music required assembling skilled performers; that only changed with the arrival of analog recording and playback technologies. Then suddenly, with the rise of computer technology in the second half of the 20th century, we gained the ability to not only perform analog-to-digital conversion, but also store the resulting digital format in a way that promised near-perfect reproduction.

Digital optical discs and tapes found themselves competing with analog formats like the compact cassette and vinyl records. While video and photos remained analog for a long time in the form of VHS tapes and film, eventually these all gave way to the fully digital world of digital cameras, JPEGs, PNGs, DVDs and MPEG. Despite the theoretical pixel- and note-perfect reproduction of digital formats, considerations like sampling speed (Nyquist frequency) and the aforementioned quantization errors mean a range of new headaches to address.

That said, the first use of dithering was actually in the 19th century, when newspapers and other printed media were looking to print photos without the hassle of having a woodcut or engraving made. This led to the invention of halftone printing.

Polka Dots

Left: halftone dot pattern with increasing size downwards. Right: how the human eye would see this, when viewed from a sufficient distance. (Source: Wikimedia)
With early printing methods, illustrations were limited to an all-or-nothing approach with their ink coverage. This obviously meant serious limitations when it came to more detailed illustrations and photographs, until the arrival of the halftone printing method. First patented in 1852 by William Fox Talbot, his approach used a special screen to break down an image into discrete points on a photographic plate. After developing this into a printing plate, these plates would then print this pattern of differently sized points.

Although the exact halftone printing methods were refined over the following decades, the basic principle remains the same to this day: by varying the size of the dot and the surrounding empty (white) space, the perceived brightness changes. When this method got extended to color prints with the CMYK color model, printing these four inks as adjoining dots allowed full-color photographs to be printed in newspapers and magazines despite having only so few ink colors available.

While it’s also possible to do CMYK printing with blending of the inks, as in e.g. inkjet printers, this comes with some disadvantages especially when printing on thin, low-quality paper, such as that used for newspapers, as the ink saturation can cause the paper to rip and distort. This makes CMYK and monochrome dithering still a popular technique for newspapers and similar low-fidelity applications.

Color Palettes


In an ideal world, every image would have an unlimited color depth. Unfortunately we sometimes have to adapt to a narrower color space, such as when converting to the Graphics Interchange Format (GIF), which is limited to 8 bits per pixel. This 1987-era and still very popular format thus provides an astounding 256 possible colors (albeit chosen from a full 24-bit color space), which poses a bit of a challenge when using a 24-bit PNG or similar format as the source. Simply reducing the bit depth causes horrible color banding, which means that we should use dithering to ease these sharp transitions, like the very common Floyd-Steinberg dithering algorithm:
From left to right: original image. Converted to web safe color. Web safe with Floyd-Steinberg dithering. (Source: Wikipedia)
The Floyd-Steinberg dithering algorithm was created in 1976 by Robert W. Floyd and Louis Steinberg. Its approach to dithering is based on error diffusion, meaning that it takes the quantization error that causes the sharp banding and distributes it across neighboring pixels. This way transitions are less abrupt, even if it means that there is noticeable image degradation (i.e. noise) compared to the original.

This algorithm is quite straightforward, working its way down the image one pixel at a time without affecting previously processed pixels. After obtaining the current pixel’s quantization error, this is distributed across the subsequent pixels following and below the current one, as in the below pseudo code:
for each y from top to bottom do
    for each x from left to right do
        oldpixel := pixels[x][y]
        newpixel := find_closest_palette_color(oldpixel)
        pixels[x][y] := newpixel
        quant_error := oldpixel - newpixel
        pixels[x + 1][y    ] := pixels[x + 1][y    ] + quant_error × 7 / 16
        pixels[x - 1][y + 1] := pixels[x - 1][y + 1] + quant_error × 3 / 16
        pixels[x    ][y + 1] := pixels[x    ][y + 1] + quant_error × 5 / 16
        pixels[x + 1][y + 1] := pixels[x + 1][y + 1] + quant_error × 1 / 16

The implementation of the find_closest_palette_color() function is key here; for a greyscale image a simple round(oldpixel / 255) suffices, or trunc(oldpixel + 0.5) as suggested in this CS 559 course material from 2000 by the University of Wisconsin-Madison.
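
As a minimal runnable sketch of the same pseudocode (assuming an 8-bit greyscale image held in a NumPy array, and using the simplest possible palette of pure black and white), the error diffusion might look like this:

import numpy as np

def floyd_steinberg_1bit(img):
    # img: 2-D uint8 array of greyscale values (0..255).
    # Returns a dithered copy containing only 0 and 255.
    out = img.astype(np.float32).copy()
    height, width = out.shape
    for y in range(height):
        for x in range(width):
            old = out[y, x]
            new = 255.0 if old >= 128 else 0.0  # find_closest_palette_color()
            out[y, x] = new
            err = old - new
            # Diffuse the quantization error onto the neighbours not yet visited.
            if x + 1 < width:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < height:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < width:
                    out[y + 1, x + 1] += err * 1 / 16
    return np.clip(out, 0, 255).astype(np.uint8)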

As basic as Floyd-Steinberg is, it’s still commonly used today due to the good results that it gives with fairly minimal effort. Which is not to say that there aren’t other dithering algorithms out there, with the Wikipedia entry on dithering helpfully pointing out a number of alternatives, both within the same error diffusion category as well as other categories like ordered dithering. In the case of ordered dithering there is a distinct crosshatch pattern that is both very recognizable and potentially off-putting.

Dithering is of course performed here to compensate for a lack of bit-depth, meaning that it will never look as good as the original image, but the less obnoxious the resulting artefacts are, the better.

Dithering With Audio


Although at first glance dithering with digital audio seems far removed from dithering the quantization error with images, the same principles apply here. When, for example, the original recording has to be downsampled to CD-quality (i.e. 16-bit) audio, we can either round or truncate the original samples to get the desired sample size, but we'd get distortion in either case. This distortion is highly noticeable by the human ear, as the quantization errors create new frequencies and harmonics; this is quite apparent in the 16- to 6-bit downsampling examples provided in the Wikipedia entry.

In the sample with dithering, there is clearly noise audible, but the sine wave now sounds pretty close to the original signal. This is done by adding random noise to each sample, so that each one rounds up or down at random and the errors average out. Although random noise is clearly audible in the final result, it's significantly better than the undithered version.
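
As a small sketch of that idea (assuming floating-point samples in the -1..1 range, and using roughly one least-significant bit of triangular dither, which is a common choice rather than anything prescribed above):

import numpy as np

def quantize_with_dither(samples, bits=6):
    # samples: float array in the range -1.0..1.0; returns requantized floats.
    levels = 2 ** (bits - 1)          # e.g. 32 levels each side of zero for 6 bits
    scaled = samples * levels
    # Triangular (TPDF) dither: the sum of two uniform values spans -1..+1 LSB,
    # so individual samples round up or down at random but the error averages out.
    noise = (np.random.random(scaled.shape) - 0.5) + (np.random.random(scaled.shape) - 0.5)
    quantized = np.round(scaled + noise)
    return np.clip(quantized, -levels, levels - 1) / levels

# Illustrative usage: a 440 Hz sine at 48 kHz reduced to 6 bits with dither.
t = np.arange(48000) / 48000
dithered = quantize_with_dither(0.5 * np.sin(2 * np.pi * 440 * t), bits=6)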

Random noise distribution is also possible with images, but more refined methods tend to give better results. For audio processing there are alternative noise distributions and noise shaping approaches.

Regardless of which dither method is being applied, it remains fascinating how the humble printing press and quantization errors have led to so many different ways to trick the human eye and ear into accepting lower fidelity content. As many of the technical limitations that existed during the time of their conception – such as expensive storage and low bandwidth – have now mostly vanished, it will be interesting to see how dithering usage evolves over the coming years and decades.

Featured image: “JJN Dithering” from [Tanner Helland]’s great dithering writeup.


hackaday.com/2025/07/09/dither…


Kids vs Computers: Chisanbop Remembered


If you are a certain age, you probably remember the ads and publicity around Chisanbop — the supposed ancient art of Korean finger math. Was it Korean? Sort of. Was it faster than a calculator? Sort of. [Chris Staecker] offers a great look at Chisanbop, not just how to do it, but also how it became such a significant cultural phenomenon. Take a look at the video below. Long, but worth it.

Technically, the idea is fairly simple. Your right-hand thumb is worth 5, and each finger is worth 1. So to identify 8, you hold down your thumb and the first three digits. The left hand has the same arrangement, but everything is worth ten times the right hand, so the thumb is 50, and each digit is worth 10.

With a little work, it is easy to count and add using this method. Subtraction is just the reverse. As you might expect, multiplication is just repeated addition. But the real story here isn’t how to do Chisanbop. It is more the story of how a Korean immigrant’s system went viral decades before the advent of social media.
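
As a quick illustrative sketch of the encoding (ours, not from the video), here is how a number from 0 to 99 maps onto the two hands:

def chisanbop(n):
    # Right hand counts ones (thumb = 5, each finger = 1);
    # left hand counts tens (thumb = 50, each finger = 10).
    if not 0 <= n <= 99:
        raise ValueError("Chisanbop covers 0..99 on two hands")
    tens, ones = divmod(n, 10)
    return {
        "left_thumb_down": tens >= 5,     # worth 50
        "left_fingers_down": tens % 5,    # worth 10 each
        "right_thumb_down": ones >= 5,    # worth 5
        "right_fingers_down": ones % 5,   # worth 1 each
    }

print(chisanbop(8))   # right thumb plus three fingers, as in the example above
print(chisanbop(73))  # left: thumb + 2 fingers; right: thumb + 3 fingers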

You can argue that this is a shortcut that hurts math understanding. Or, you could argue the reverse. However, the truth is that this was around the time the calculator became widely available. Math education would shift from focusing on getting the right answer to understanding the underlying concepts. In a world where adding ten 6-digit numbers is easy with a $5 device, being able to do it with your fingers isn’t necessarily a valuable skill.

If you enjoy unconventional math methods, you may appreciate peasant multiplication.

youtube.com/embed/Rsaf4ncxlyA?…


hackaday.com/2025/07/09/kids-v…


Crunching The News For Fun And Little Profit


Do you ever look at the news, and wonder about the process behind the news cycle? I did, and for the last couple of decades it’s been the subject of one of my projects. The Raspberry Pi on my shelf runs my word trend analysis tool for news content, and since my journey from curious geek to having my own large corpus analysis system has taken twenty years it’s worth a second look.

How Career Turmoil Led To A Two Decade Project

A hanging sign surrounded by ornate metalwork, with the legend "Cyder house". This is very much a minority spelling. Colin Smith, CC BY-SA 2.0.
In the middle of the 2000s I had come out of the dotcom crash mostly intact, and was working for a small web shop. When they went bust I was casting around as one does, and spent a while as a Google quality rater while I looked for a new permie job. These teams are employed by the search giant through temporary employment agencies, and in loose terms their job is to be the trained monkeys against whom the algorithm is tested. The algorithm chose X, and if the humans also chose X, the algorithm is probably getting it right. Being a quality rater is not in any way a high-profile job, but with the big shiny G on my CV I soon found myself in demand from web companies seeking some white-hat search engine marketing expertise. What I learned mirrored my lesson from a decade earlier in the CD-ROM business, that on the web as in any other electronic publishing medium, good content well presented has priority over any black-hat tricks.

But what makes good content? Forget an obsession with stuffing bogus keywords in the text, and instead talk about the right things, and do it authoritatively. What are the right things in this context? If you are covering a subject, you need to do so using the right language; that which the majority uses rather than language only you use. I can think of a bunch of examples which I probably shouldn’t talk about, but an example close to home for me comes in cider. In the UK, cider is a fermented alcoholic drink made from apples, and as a craft cidermaker of many years standing I have a good grasp of its vocabulary. The accepted spelling is “Cider”, but there’s an alternate spelling of “Cyder” used by some commercial producers of the drink. It doesn’t take long to realise that online, hardly anyone uses cyder with a Y, and thus pages concentrating on that word will do less well than those talking about cider.
A graph of the word football versus the word soccer in British news. We Brits rarely use the word "soccer" unless there's a story about the Club World Cup in America.
I started to build software to analyse language around a given topic, with the aim of discerning the metaphorical cider from the cyder. It was a great surprise a few years later to discover that I had invented for myself the already-existing field of computational linguistics, something that would have saved me a lot of time had I known about it when I began. I was taking a corpus of text and computing the frequencies and collocates (words that appear alongside each other) of the words within it, and from that I could quickly see which wording mattered around a subject, and which didn’t. This led seamlessly to an interest in what the same process would look like for news data with a time axis added, so I created a version which harvested its corpus from RSS feeds. Thus began my decades-long project.
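
As a rough stand-in for that kind of analysis (the original ran on PHP5 and MySQL; this simplified sketch only shows the word frequency and windowed collocate counting):

import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def frequencies_and_collocates(documents, window=5):
    # documents: an iterable of article texts, e.g. pulled from RSS items.
    word_counts = Counter()
    pair_counts = Counter()
    for doc in documents:
        words = tokenize(doc)
        word_counts.update(words)
        # Count co-occurrences of words appearing within a few words of each other.
        for i, w in enumerate(words):
            for other in words[i + 1 : i + 1 + window]:
                pair_counts[tuple(sorted((w, other)))] += 1
    return word_counts, pair_counts

articles = [
    "The cider press arrived at the orchard",
    "Craft cider makers press apples every autumn",
]
freqs, collocs = frequencies_and_collocates(articles)
print(freqs["cider"], collocs[("cider", "press")])  # 2 2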

From Project Idea, To Corpus Appliance


In 2005 I knew how to create websites in the manner of the day, so I used the tools I had: PHP5 and MySQL. I know PHP is unfashionable these days, but at the time this wasn't too controversial, and aside from all the questionable quality PHP code out there it remains a useful scripting language. Using MySQL, however, would cause me immense problems. I had done what seemed the right thing and created a structured database with linked tables, but I hadn't fully appreciated just how huge a task I had taken on. Harvesting the RSS firehose across multiple media outlets brings in thousands of stories every week, so queries which were near-instantaneous during my first development stages grew to take many minutes as my corpus expanded. It was time to come up with an alternative, and I found it in the most basic of OS features, the filesystem.
A graph of the words cat and dog in British news. I have no idea why British news has more dog stories than cat stories.
Casting back to the 1990s, when you paid for web hosting it was priced in terms of the storage space it came with. The processing power required to run your CGI scripts, or later server-side interpreters such as ASP or PHP, wasn't considered. It thus became normal practice to try to reduce storage use and not think about processing, and I had without thinking followed this path.

But by the 2000s the price of storage had dropped hugely while that of processing hadn’t. This was the decade in which cloud services such as AWS made an appearance, and as well as buying many-gigabyte hard disks for not a lot, you could also for the first time rent a cloud bucket for pennies. My corpus analysis system didn’t need to spend all its time computing if I could use a terabyte hard drive to make up for less processor usage, so I turned my system on its head. When collecting the RSS stories my retrieval script would pre-compute the final data and store it in a vast tree of tiny JSON files accessible at high speed through the filesystem, and then my analysis software could simply retrieve them and make its report. The system moved from a hard-working x86 laptop to a whisper-quiet and low powered Raspberry Pi with a USB hard disk, and there it has stayed in some form ever since.
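As a rough sketch of that "precompute on ingest, store tiny files" idea, something like the following would work; the feedparser library, the date-based directory layout, and the per-outlet JSON word-count files are our assumptions, not a copy of the actual system.

import json
import re
from collections import Counter
from datetime import date
from pathlib import Path

import feedparser  # pip install feedparser

CORPUS_ROOT = Path("corpus")

def harvest(feed_url, outlet):
    # Pull today's stories for one outlet and pre-compute its word counts
    feed = feedparser.parse(feed_url)
    counts = Counter()
    for entry in feed.entries:
        text = f"{entry.get('title', '')} {entry.get('summary', '')}"
        counts.update(re.findall(r"[a-z']+", text.lower()))
    today = date.today()
    # e.g. corpus/2025/07/09/bbc.json -- one small file per outlet per day
    out_dir = CORPUS_ROOT / f"{today:%Y/%m/%d}"
    out_dir.mkdir(parents=True, exist_ok=True)
    (out_dir / f"{outlet}.json").write_text(json.dumps(counts))

def trend(word, days):
    # Reporting is then just reading a handful of tiny JSON files per day,
    # no database query required.
    return {d: sum(json.loads(p.read_text()).get(word, 0)
                   for p in (CORPUS_ROOT / d).glob("*.json"))
            for d in days}

The report side only ever touches a few small files per day queried, which is why a Raspberry Pi with a USB hard disk can keep up where a relational database struggled.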

Just What Can This Thing Do?

A bubble cloud for the week of 2016-06-23, when the UK Brexit referendum happened. Big words are EU, Brexit, referendum, leave, and vote. No prizes for guessing what happened this week.
So I have a news corpus that has taken me a long time to build. I can take one or more words, and I can compare their occurrence over time. I can watch the news cycle, I can see stories build up over time. I can even see trends which sometimes go against received opinion, such as spotting that the eventual winner of the 2016 UK Labour leadership race was likely to be Jeremy Corbyn early on while the herd were looking elsewhere. Sometimes, as with the performance of the word "Brexit" over the middle of the last decade, I can see the great events of our times in stark relief, but perhaps it's in the non-obvious that there's most value. If you follow a topic and it suddenly dries up for a couple of days, expect a really big story on day three, for example. I can also see which outlets cover one story more than another, something helpful when trying to ascertain if a topic is being pushed on behalf of a particular lobby.

My experiment in text analysis then turned into something much more, even dare I say it, something I find of help in figuring out what’s really going on in turbulent times. But from a tech point of view it’s taught me a huge amount, about statistics, about language, about text parsing, and even about watching the number of available inodes on a hard drive. Believe me, many millions of tiny files in a tree can become unwieldy. But perhaps most of all, after a lifetime of mucking about with all manner of projects but generating little of lasting significance, I can look at this one and say I created something useful. And that is something to be happy about.


hackaday.com/2025/07/09/crunch…


PIC Burnout: Dumping Protected OTP Memory in Microchip PIC MCUs


Normally you can’t read out the One Time Programming (OTP) memory in Microchip’s PIC MCUs that have code protection enabled, but an exploit has been found that gets around the copy protection in a range of PIC12, PIC14 and PIC16 MCUs.

This exploit is called PIC Burnout, and was developed by [Prehistoricman], with the cautious note that although this process is non-invasive, it does damage the memory contents. This means that you likely will only get one shot at dumping the OTP data before the memory is ‘burned out’.

The copy protection normally returns scrambled OTP data, with an example of PIC Burnout provided for the PIC16LC63A. After entering programming mode by setting the ICSP CLK pin high, an excessively high programming voltage and duration are applied repeatedly while checking whether an area that normally reads as zero now reads back proper data. After this the OTP should be read out repeatedly to ensure that the scrambling has been circumvented.

The trick appears to be that while there are over-voltage and similar protections on much of the Flash, this approach can still be used to affect the entire flash bit column. Suffice it to say that this method isn't very kind to the flash memory cells and can take hours to get a good dump. Even after this you need to know the exact scrambling method used, which is fortunately often documented in Microchip datasheets.
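For a sense of what the host side of such a loop might look like, here is a purely hypothetical Python sketch; the Programmer class and its methods are invented stand-ins for whatever drives the ICSP lines and VPP, and this is not the actual PIC Burnout code.

# Illustrative only: the Programmer interface below is a hypothetical
# stand-in, not the real PIC Burnout tooling or any real programmer library.
import time

class Programmer:
    """Placeholder for whatever drives the ICSP lines and VPP on the target PIC."""
    def enter_programming_mode(self): ...
    def pulse_overvoltage(self, duration_ms): ...
    def read_word(self, address): ...

def burnout_dump(prog, canary_addr, otp_range, max_attempts=10000):
    prog.enter_programming_mode()
    # Keep stressing the array until a location that normally reads as
    # zero starts returning real data, i.e. the scrambling has collapsed.
    for _ in range(max_attempts):
        prog.pulse_overvoltage(duration_ms=50)
        if prog.read_word(canary_addr) != 0:
            break
        time.sleep(0.01)
    else:
        raise RuntimeError("canary never changed; protection still active")
    # Read the OTP out several times, since results may still be unstable.
    return [[prog.read_word(addr) for addr in otp_range] for _ in range(3)]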

Thanks to [DjBiohazard] for the tip.


hackaday.com/2025/07/09/pic-bu…


Programming Like It’s 1986, For Fun and Zero Profit


screenshot of C programming on Macintosh Plus

Some people slander retrocomputing as an old man's game, just because most of those involved are more ancient than the hardware they're playing with. But there are veritable children involved too — take [ComputerSmith], who is recreating Conway's Game of Life on a Macintosh Plus that could very well be as old as his parents. If there's any nostalgia here, it's at least a generation removed — thus proving to the haters that there's more than a misplaced desire to relive one's youth in exploring these ancient machines.

So what does a young person get out of programming on a 1980s Mac? Well, aside from internet clout, and possible YouTube monetization, there's the sheer intellectual challenge of the thing. You can't go sniffing around StackExchange or LLMs for code to copy-paste when writing C for a 1986 machine, not if you're going to be fully authentic. ANSI C only dates to 1987, after all, and figuring out the quirks and foibles of the specific C implementation is both half the fun and not easily outsourced. Object Pascal would also have been an option (and quite likely more straightforward — at least the language was clearly defined), but [ComputerSmith] seems to think the exercise will improve his chops with C, and he's likely to be right.

Apparently [ComputerSmith] brought this project to VCS Southwest, so anyone who was there doesn’t have to wait for Part 2 of the video to show up to see how this turns out, or to snag a copy of the code (which was apparently available on diskette). If you were there, let us know if you spotted the youngest Macintosh Plus programmer, and if you scored a disk from him.

If the idea of coding in this era tickles the dopamine receptors, check out this how-to for a prizewinning Amiga demo. If you think pre-ANSI C isn't retro enough, perhaps you'd prefer programming by card?

youtube.com/embed/P4QYKMXJ108?…


hackaday.com/2025/07/09/progra…


Anonymous claims alleged doxxing of 16 members of Turkey's AKP party


On 5 July 2025, the YourAnonFrench_ account, linked to the Anonymous network, published a post on the X platform (formerly Twitter) claiming to have launched a doxxing operation against 16 members of the Turkish AKP party (Adalet ve Kalkınma Partisi).

The published material includes an image listing names, institutional email addresses, phone numbers with international dialing codes, and other personal data attributed to party members. The image closes with the phrase "More coming soon…", hinting that further disclosures may follow shortly.

According to the group, the action is part of the broader #OpTurkey campaign, which has now been active for more than a decade.

A long-running campaign: what is #OpTurkey


The #OpTurkey tag is nothing new in the hacktivist landscape. It first appeared in 2011, when Anonymous launched a series of attacks against Turkish government websites to protest the government's attempts at Internet censorship.

As reported by Dark Reading, one of the first targets was the national telecommunications authority (BTK), in response to a proposal for mandatory filtering of online content.

Shortly afterwards, Turkish authorities responded with a wave of arrests: 32 people, including several minors, were detained on charges of belonging to the collective. This was confirmed by Reuters, which followed the case closely.

In 2013, at the height of the Gezi Park protests, Anonymous relaunched the operation, openly siding with the demonstrators. As documented by Bianet, DDoS attacks and defacements were carried out against police and government portals, including those of the AKP party.

Two years later, in 2015, the group declared outright "cyber war" on Turkey, accusing the Erdoğan government of alleged ties to the Islamic State. In that phase, as reported by Hürriyet Daily News, Anonymous attacked thousands of .tr domains, generating DDoS traffic in excess of 40 Gbps.

The July 2025 action appears to be a new phase of the same campaign, with a clear shift in strategy: no longer just technical attacks on digital infrastructure, but the deliberate use of doxxing as a tool of political pressure.

A check of the AKP party's official website (akparti.org.tr) and its associated social media channels turned up no official statement, denial, or comment on the claim. For now, there is no confirmation from the organization involved.

The article Anonymous claims alleged doxxing of 16 members of Turkey's AKP party originally appeared on il blog della sicurezza informatica.


The Cyberpandino is ready for the 2025 Mongol Rally: RHC is rooting for you guys! Full digital throttle!


The Cyberpandino project is not just a crazy idea but a great four-wheeled adventure designed and built by two brilliant minds from Rome – Matteo Errera and Roberto Zaccardi – who took a 2003 Fiat Panda, bought for just €800, and turned it into a genuine hi-tech laboratory on wheels.

This is the Cyberpandino, ready to tackle a full 14,000 km of European and Asian dirt roads in the legendary Mongol Rally, a non-competitive international rally run by Bristol-based company The Adventurists and held since 2004.

The Mongol Rally


The Mongol Rally started out as a pure charity event from 2004 to 2006, with entry fees used to organize the event and the remainder going to charity. That changed from 2007 onwards, when League of Adventurists International Ltd, a private company, took over its management.

Consider that the 2007 edition of the Mongol Rally set off from Hyde Park, London, on 21 July and was limited to 200 teams. More entries were received than the organizers had anticipated, so much so that the first 100 places were claimed in 22 seconds. Because of this unexpected popularity, the last 50 entries were allocated through a random draw.

The idea behind the Mongol Rally is simple, yet incredible. "We give you a starting point and a finishing point, but where you go and what you do in between is entirely your own steaming bundle of adventurous magic. We recommend you don't waste too much time planning your route or consulting maps or useful guides. Find out what's there when you arrive. Unleash the unexpected." Those are the terms of the challenge. And that's no small thing, is it?

These are the dates for this adventure:

  • 12 July: launch party
  • 13 July: launch day
  • 16 August: first finish-line party
  • 23 August: finish-line party and closing ceremony

The Mongol Rally website also states:

This year we can't cross Russia, which means we can't get all the way to Mongolia, but there will still be a huge slice of Central Asian chaos between you and the finish line. Get lost on the Pamir Highway, wreck your car on the mountain trails of Kyrgyzstan, get stranded on the road to the Gates of Hell in Turkmenistan… Add a dash of Uzbek chaos and you have a gigantic adventure across the mighty 'stans. The 2025 finish line lies on the other side of the desert, in the far east of Kazakhstan. We are already scouting locations in the Oskemen region, around the Irtysh river and Lake Zaysan, so the exact spot will be confirmed soon.

A Panda… turned budget "Cybertruck"


The old Panda, powered by a classic 1.1 Fire engine with 140,000 km under its belt, has been completely reworked. It now sports 3D-printed LED headlights, a touchscreen interface called "Panda OS" with a cartoonish, thoroughly eco-nerd style, and sophisticated digital instrumentation built entirely by the two Romans' team.

All of this open source technology was developed in a garage, perfectly reflecting the team's maker spirit. The Cyberpandino is also fitted with a satellite transmitter supplied by Telespazio, allowing the crew to get online in the most remote corners of the planet.
Dashboard and on-board instrumentation of the Cyberpandino's main panel

Details of the hi-tech restomod


  • Touch infotainment and avionics-inspired toggle switches, integrated into a React.js system running on a Raspberry Pi.
  • Telespazio satellite connectivity, to stay online even in the middle of the desert.
  • OBD2, GPS, IMU, and even air-quality sensors: the Panda has become a sort of mobile station for data, monitoring, and continuous storytelling.
  • Reinforced suspension, spare fuel cans, and off-road wheels – everything designed to tackle the punishing tracks that separate Prague from Mongolia.


Red Hot Cyber hits the track… or rather, goes off-road


We at Red Hot Cyber proudly stand alongside Matteo and Roberto – the team we now affectionately call the "Magic Team" – as they throw themselves into this crazy, extraordinary 14,000-kilometre challenge across Europe and Asia.

It is a race of courage, ingenuity, and passion, with no outside assistance, where every line of code and every bolt tightened with a pair of pliers counts more than you might imagine. What could possibly go wrong?

Well, everything could go wrong… but that is exactly the beauty of the Mongol Rally. Our team is ready for anything: they have already dealt with system crashes, snapped bolts, and compilers more stubborn than desert sand. And they will keep doing it with a smile!

We firmly believe this venture embodies 100% of what we believe in too: hacker and maker culture, made of creativity, resilience, sharing, and that healthy recklessness that makes you say: "Let's take a Panda, bolt some sensors onto it, write some software for it, and drive across half – yes, half – the world."

Go Cyberpandino! RHC is rooting for you!


The Cyberpandino's journey has begun!

Your cheering warms the engine and the heart!

Onward with Matteo and Roberto, armed with soldering irons, scripts, dreams, and a huge desire to explore. Let's follow them, share, and make some noise: admiration, support, and adrenaline are the fuel of this epic race!

The article The Cyberpandino is ready for the 2025 Mongol Rally: RHC is rooting for you guys! Full digital throttle! originally appeared on il blog della sicurezza informatica.


The dark side of DeepSeek: low prices, fleeing users, and a long-cherished dream of AGI


One hundred and twenty-eight days after launch, DeepSeek R1 has upended the entire large-model market. Its impact was felt first of all on costs: the announcement of R1 alone helped push inference prices down. OpenAI, for example, updated the pricing of its o3 model in June, cutting it by 20% compared with the previous o1 version. This change took place in an increasingly tight competitive landscape, where economic efficiency has become a key strategic lever.

Usage of DeepSeek models on third-party platforms has exploded, but not without contradictions. Demand has grown almost 20-fold since the first release, driving the expansion of many cloud companies. Yet DeepSeek's own platform – both the web interface and the API – has seen steadily declining traffic. According to SemiAnalysis data, in May only 16% of the tokens generated by the model came from DeepSeek itself. This signals a growing user preference for alternative solutions that perform better and are less frustrating in terms of latency.

Behind the apparent success lies an extreme cost-cutting strategy. DeepSeek has deliberately sacrificed user experience to limit the consumption of compute resources. Its official APIs suffer from high latency, with significant delays before the first token is delivered. By comparison, platforms such as Parasail or Friendli offer minimal latency at low cost. Others, like Azure, are more expensive but deliver markedly better performance. The context window DeepSeek provides – limited to 64k – is also considered insufficient for complex tasks such as programming, where competing platforms offer up to 2.5 times more context at the same price.

DeepSeek's choice is clear: invest in intelligence, not in the service. All of its optimizations point to a single goal: reducing the load of public inference in order to concentrate compute power on internal development. This approach also explains the absence of real investment in proprietary chatbots or competitive API offerings. In parallel, DeepSeek pursues an open source strategy to drive adoption of its models through external providers, consolidating its influence over the AI ecosystem without bearing the costs of scale.

The second half of the LLM game is all about token quality. While DeepSeek aims at building AGI, Claude, for example, seeks a compromise between performance and profitability. It has slowed down slightly to contain compute consumption, but maintains a good user experience. The Claude Sonnet 4 model has seen a 40% drop in speed, yet remains more responsive than DeepSeek. Moreover, models like Claude optimize their responses to consume fewer tokens, whereas DeepSeek and Gemini may need three times as many tokens for the same answer. At this stage of the competition, efficiency and intelligence are no longer just a matter of price or speed, but of long-term vision.

The article The dark side of DeepSeek: low prices, fleeing users, and a long-cherished dream of AGI originally appeared on il blog della sicurezza informatica.


Five-minute(ish) Beanie is the Fastest We’ve Seen Yet


Yes, you read that right– not benchy, but beanie, as in the hat. A toque, for those of us under the Maple Leaf. It's not 3D printed, either, except perhaps by the loosest definition of the word: it is knit, by [Kevr102]'s motorized turbo knitter.

The turbo-knitter started life as an Addi Express King knitting machine. These circular knitting machines are typically crank-operated, functioning with a cam that turns around to raise and lower special hooked needles that grab and knit the yarn. This particular example was not in good working order when [Kevr102] got a hold of it. Rather than a simple repair, they opted to improve on it.

A 12 volt motor with a printed gear and mount served for motorizing the machine. The original stitch counter proved a problem, so it was replaced with an Arduino Nano and a hall effect sensor driving a 7-digit display. In theory, the Arduino could be interfaced with the motor controller and set to run the motor for a specific number of stitches, but in practice there's no point, as the machine needs to be babysat to maintain tension and avoid dropping stitches and the like. Especially, we imagine, when it runs fast enough to crank out a hat in under six minutes. Watch it go in the oddly cropped demo video embedded below.

Five minutes would still be a very respectable time for benchy, but it’s not going to get you on the SpeedBoatRace leaderboards against something like the minuteman we covered earlier.

If you prefer to take your time, this knitting machine clock might be more your fancy. We don’t see as many fiber arts hacks as perhaps we should here, so if you’re tangled up in anything interesting in that scene, please drop us a line.

youtube.com/embed/QWRVQVrnILk?…


hackaday.com/2025/07/08/five-m…


Oscillator Negativity is a Good Thing


Many people who get analog electronics still struggle a bit to design oscillators. Even common simulators often need a trick to simulate some oscillating circuits. The Barkhausen criteria state that for stable oscillation, the loop gain must be one, and the phase shift around the feedback loop must be a multiple of 360 degrees. [All Electronics Channel] provides a thorough exploration of oscillators and, specifically, negative resistance, which is punctuated by practical measurements using a VNA. Check it out in the video below.

The video does have a little math and even mentions differential equations, but don’t worry. He points out that the universe solves the equation for you.

In an LC circuit, you can consider the losses in the circuit as a resistor. That makes sense. No component is perfect. But if you could provide a negative resistance, it would cancel out the parasitic resistance. With no loss, the inductor and capacitor will go back and forth, electrically, much like a pendulum.
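To make that concrete, here is a rough numerical sketch of our own (not from the video): a series LC loop stepped in time, once with only the loss resistance and once with an equal negative resistance cancelling it. Component values are arbitrary.

L = 1e-3       # inductance, henries (arbitrary value)
C = 1e-9       # capacitance, farads (arbitrary value)
R_LOSS = 10.0  # parasitic series resistance, ohms

def final_amplitude(neg_r=0.0, steps=100_000, dt=1e-8):
    v, i = 1.0, 0.0             # capacitor charged to 1 V, no current flowing
    r_total = R_LOSS - neg_r    # a negative resistance subtracts from the loss
    amp = 0.0
    for n in range(steps):
        i += (v - i * r_total) / L * dt   # inductor: L di/dt = v - i*R_total
        v += -i / C * dt                  # capacitor: C dv/dt = -i
        if n > steps - 2000:              # record the peak amplitude near the end
            amp = max(amp, abs(v))
    return amp

print("lossy LC only:     ", final_amplitude(neg_r=0.0))   # rings down towards zero
print("loss cancelled out:", final_amplitude(neg_r=10.0))  # keeps ringing near 1 V

With the loss resistance alone the ringing dies away; cancel it exactly and the pendulum swings on, which is the whole point of the negative resistance view of oscillators.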

So, how do you get a negative resistance? You’ll need an active device. He presents some example oscillator architectures and explains how they generate negative resistances.

Crystals are a great thing to look at with a VNA. That used to be a high-dollar piece of test gear, but not anymore.

youtube.com/embed/EG3BSn8MzHc?…


hackaday.com/2025/07/08/oscill…


View a Beehive Up Close with this 3D Printed Hive


3 yellow modules are connected with bees filling 2 out of 3

Bees are incredible insects that live and die for their hive, producing rich honey in complicated hive structures. The problem is that, as the average beekeeper, you wouldn't see much of these intricate structures without disturbing the hive. So why not 3D print an observation hive? With [Teddy Hatcher]'s 3D printing creativity, that is exactly what he did.

A yellow 3D printed hexagonal panel

Hexagonal sections allow for viewing of entire panels of hexagonal cells, growing new workers, and storing the rich syrup we all enjoy. Each module has two cell panels, giving depth to the hive for heat/humidity gradients. The rear of a module has a plywood backing and an acrylic front for ample viewing. [Teddy] uses three modules plus a Flow Hive for a single colony, enough room for more bees than we here at Hackaday would ever consider letting in the front door.

As with many 3D printed projects involving food or animals, questions remain about long-term health effects. Plastic can bio-accumulate in hives, which is a valid concern for anyone wanting to add the honey to their morning coffee. On the other hand, the printed plastic is not what honey is added to, nor what the actual cell panels are made from. As for the collected honey, it comes from the connected Flow Hive rather than from anything directly in contact with 3D printed plastic.

Beehives might not always need a fancy 3D printed enclosure; the standard wooden crates seem to work just fine for most, but there's a time and place for some bio-ingenuity. Conditions in a hive might vary, creating problems for your honey production, so you'd better check out this monitoring system dedicated to just that!

youtube.com/embed/Qi3-rIL5Fbw?…

Thanks to [George Graves] for the tip!


hackaday.com/2025/07/08/view-a…


Better Solid State Heat Pumps Through Science


If you need to cool something, the gold standard is using a gas compressor arrangement. Of course, there are definite downsides to that, like weight, power consumption, and vibrations. There are solid-state heat pumps — the kind you see in portable coolers, for example. But, they are not terribly efficient and have limited performance.

However, researchers at Johns Hopkins, working with Samsung, have developed a new thin-film thermoelectric heat pump, which they claim is easy to fabricate, scalable, and significantly more efficient. You can see a video about the new research below.

Manufacturing requires similar processes to solar cells, and the technology can make tiny heat pumps or — in theory — coolers that could provide air conditioning for large buildings. You can read the full paper in Nature.

CHESS stands for Controlled Hierarchically Engineered Superlattice Structures. These are nano-engineered thin-film superlattices (around 25 μm thick). The design optimizes their performance in this application.

The new devices claim to be 100% more efficient at room temperature than traditional devices. In practical devices, thermoelectric devices and the systems using them have improved by around 70% to 75%. The material can also harvest power from heat differences, such as body heat. The potential small size of devices made with this technology would make them practical for wearables.

We’ve looked at the traditional modules many times. They sometimes show up in cloud chambers.

youtube.com/embed/dOw_fzZh7MM?…


hackaday.com/2025/07/08/better…


Budget Brilliance: DHO800 Function Generator


DHO800 function generator

The Rigol oscilloscopes have a long history of modifications and hacks, and this latest from [Matthias] is an impressive addition; he’s been working on adding a function generator to the DHO800 line of scopes.

The DHO800 series offers many great features: it’s highly portable with a large 7-inch touchscreen, powered by USB-C, and includes plenty of other goodies. However, there’s room for enhancements. [Matthias] realized that while software mods exist to increase bandwidth or unlock logic analyzer functions, the hardware needed to implement the function generator—available in the more expensive DHO900 series—was missing.

To address this, he designed a daughterboard to serve as the function generator hardware, enabling features that software tweaks can unlock. His goal was to create an affordable, easy-to-produce, and easy-to-assemble interface board that fits in the space reserved for the official daughterboard in higher-end scopes.

Once the board is installed and the software is updated, the new functionality becomes available. [Matthias] clearly explains some limitations of his implementation. However, these shortcomings are outweighed by the tremendous value this mod provides. A 4-channel, 200 MHz oscilloscope with function generator capabilities for under $500 is a significant achievement. We love seeing these Rigol mods enhance tool functionality. Thanks, [Matthias], for sharing this project—great job bringing even more features to this popular scope.


hackaday.com/2025/07/08/budget…


Could Space Radiation Mutate Seeds For The Benefit of Humanity?


Humans have forever been using all manner of techniques to better secure the food we need to sustain our lives. The practice of agriculture is intimately tied to the development of society, while techniques like selective breeding and animal husbandry have seen our plants and livestock deliver greater and more nourishing bounty as the millennia have gone by. More recently, more direct tools of genetic engineering have risen to prominence, further allowing us to tinker with our crops to make them do more of what we want.

Recently, however, scientists have been pursuing a bold new technique. Researchers have explored using radiation from space to potentially create greater crops to feed more of us than ever.

“Cosmic Crops”


Most recently, an effort at “space mutagenesis” has been spearheaded by the International Atomic Energy Agency, a body which has been rather more notable for other works of late. In partnership with the UN’s Food and Agriculture Organization (FAO), it has been examining the effects that the space-based environment might have on seeds. Ideally, these effects would be positive, producing hardier crops with greater yields for the benefit of humanity.
The sorghum seeds that spent five months on the ISS as part of the joint FAO/IAEA research project. Credit: Katy Laffan/IAEA, CC BY 2.0
The concept is simple enough—put a bunch of seeds on the International Space Station (ISS), and see what happens. Specifically, researchers placed half the seeds outside the ISS, where they would be exposed to extreme cold and maximum doses of cosmic radiation. The other half were left inside the station as a control, where they would experience microgravity but otherwise be safe from temperature and radiation extremes. The hope was that the radiation might cause some random but beneficial mutations in the seeds' genetics that would provide better crops for use on Earth.
Plant breeder and geneticist Anupama Hingane examines a sorghum plant grown at the FAO/IAEA Plant Breeding & Genetics Laboratory. Credit: Katy Laffan / IAEA, CC BY 2.0
Two types of seeds were sent up for the first trial by the IAEA and UN—sorghum, a nutrient-filled cereal grain, and arabidopsis, a fast-growing cress. After their flight on the ISS, they were returned to Earth to be germinated, grown, and examined for desirable traits. Of course, DNA sequencing was also on the table, to compare mutations generated in space with seeds kept inside the ISS and those irradiated under laboratory conditions.

The only thing missing from the IAEA’s experiment? A research paper. The seeds returned from space in April 2023, and were sent to the Plant Breeding and Genetics Laboratory in Seibersdorf, Austria soon after. We’ve seen pictures of the plants that sprouted from the seeds in space, but researchers are yet to publish full results or findings from the project.

Proven Benefits


It might sound like an oddball idea, particularly given the results from the IAEA’s project are yet to be delivered. However, space mutagenesis has been tried and tested to a greater degree than you might think. Chinese scientists have been experimenting with the technique of space mutagenesis for over 30 years, finding that it often delivers more beneficial mutations compared to using gamma rays in terrestrial labs.

Chinese efforts have seen many thousands of seeds irradiated via satellites and space stations, including a trip around the moon on the Chang'e-5 mission. Having been exposed to space radiation for anywhere from days to months, the seeds have returned to Earth and been planted and examined for beneficial mutations. While not every seed comes back better than before, some show rare mutations that offer breakthrough benefits in yield, drought resistance, fruit size, or temperature hardiness. These crops can then be bred further to refine the gains. Chinese efforts have experimented with everything from cotton to tomatoes, watermelons, and corn, among others. A particular success story was Yujiao 1 – a sweet pepper variety released in 1990 boasting better fruit and resistance to disease, along with 16.4% higher yield than some comparable varieties.
A comparison of mutated peppers Yujiao 1 (Y1), Yujiao 2 (Y2), and Yujiao 3 (Y3) with comparable Longjiao wild types (marked W1,W2). Credit: research paper
The results of space mutagenesis are tracked very carefully, both by researchers involved and wider authorities. Notably, the IAEA maintains a Mutant Variety Database for plants that have been modified either by space-based radiation or a variety of other physical or chemical methods. This is important, and not only for reaping the benefits from mutagenic organisms. It's also important to help researchers understand the mechanisms involved, and to help make sure that the risk of any negative traits breaking out into broader wild plant populations is mitigated.

Ultimately, space mutagenesis is just another tool in the toolbox for scientists looking to improve crops. It’s far from cheap to send seeds to space, let alone to do the research to weed out those with beneficial mutations from the rest. Still, the benefits on offer can be huge when scaled to the size of modern agriculture, so the work will go on regardless. It’s just another way to get more, something humans can never quite get enough of.


hackaday.com/2025/07/08/could-…


Turning PET Plastic Into Paracetamol With This One Bacterial Trick


Over the course of evolution microorganisms have evolved pathways to break down many materials. The challenge with the many materials that we humans have created over just the past decades is that we cannot wait for evolution to catch up, ergo we have to develop such pathways ourselves. One such example is demonstrated by [Nick W. Johnson] et al. with a recent study in Nature Chemistry that explicitly targets PET plastic, which is very commonly used in plastic bottles.

The researchers modified regular E. coli bacteria to use PET plastic as an input via a Lossen rearrangement, which converts hydroxamate esters to isocyanates; the pathway ends in para-aminobenzoate (PABA), from which biosynthesis produces paracetamol, the active ingredient in Tylenol. This new pathway is also completely harmless to the bacterium, which is always a potential pitfall with this kind of biological pathway engineering.

In addition to this offering a potential way to convert PET bottles into paracetamol, the researchers note that their findings could be very beneficial to studies targeting other ‘waste’ products from biological pathways.

Thanks to [DjBiohazard] for the tip.


hackaday.com/2025/07/08/turnin…


The End Of The Hackintosh Is Upon Us


From the very dawn of the personal computing era, the PC and Apple platforms have gone very different ways. IBM compatibles surged in popularity, while Apple was able to more closely guard the Macintosh from imitators wanting to duplicate its hardware and run its software.

Things changed when Apple announced it would hop aboard the x86 bandwagon in 2005. Soon enough was born the Hackintosh. It was difficult, yet possible, to run MacOS on your own computer built with the PC parts your heart desired.

Only, the Hackintosh era is now coming to an end. With the transition to Apple Silicon all but complete, macOS will abandon the Intel world once more.

End Of An Era

macOS Tahoe is slated to drop later this year. Credit: Apple
2025 saw the 36th Worldwide Developers Conference take place in June, and with it came the announcement of macOS Tahoe. The latest version of Apple's full-fat operating system will offer more interface customization, improved search features, and the new attractive 'Liquid Glass' design language. More critically, however, it will also be the last version of the modern macOS to support Apple's now aging line of x86-based computers.

The latest OS will support both Apple Silicon machines as well as a small list of older Macs. Namely, if you’ve got anything with an M1 or newer, you’re onboard. If you’re Intel-based, though, you might be out of luck. It will run on the MacBook Pro 16 inch from 2019, as well as the MacBook Pro 13-inch from 2020, but only the model with four Thunderbolt 3 ports. It will also support iMacs and Mac Minis from 2020 or later. As for the Mac Pro, you’ll need one from 2019 or later, or 2022 or later for the Mac Studio.

Basically, beyond the release of Tahoe, Apple will stop releasing versions of its operating system for x86 systems. Going forward, it will only be compiling MacOS for ARM-based Apple Silicon machines.

How It Was Done


Of course, it’s worth remembering that Apple never wanted random PC builders to be able to run macOS to begin with. Yes, it will eventually stop making an x86 version of its operating system, but it had already gone to great lengths trying to stop macOS from running on non-authorized hardware. The dream of a Hackintosh was to build a powerful computer on the cheap, without having to pay Apple’s exorbitant prices for things like hard drive, CPU, and memory upgrades. However, you always had to jump through hoops, using hacks to fool macOS into running on a computer that Apple never built.

youtube.com/embed/0Afw3fchl9o?…

Installing macOS on a PC takes some doing.

Getting a Hackintosh running generally involved pulling down special patches crafted by a dedicated community of hackers. Soon after Apple started building x86 machines, hackers rushed to circumvent security features in what was then called Mac OS X, allowing it to run on non-Apple approved machines. The first patches landed just over a month after the first x86 Macs. Each subsequent Apple update to OS X locked things down further, only for the community to release new patches unlocking the operating system in quick succession. Sometimes this involved emulating the EFI subsystem which contemporary Macs used in place of a traditional PC's BIOS. Sometimes it was as involved as tweaking the kernel to stick to older SSE2 instructions when Apple's use of SSE3 instructions stopped the operating system running on older hardware. Depending on the precise machine you were building, and the version of OS X or macOS that you hoped to run, you'd use different patches or hacks to get your machine booting, installing, and running the operating system.
Hackintosh communities maintain lists of bugs and things that don’t work quite right—no surprise given Apple’s developers put little thought into making their OS work on unofficial hardware. Credit: eliteMacx86.com via Screenshot
Running a Hackintosh often involved dealing with limitations. Apple’s operating system was never intended to run on just any hardware, after all. Typical hurdles included having to use specific GPUs or WiFi cards, for example, since broad support for the wide range of PC parts just wasn’t there. Similarly, sometimes certain motherboards wouldn’t work, or would require specific workarounds to make Apple’s operating system happy in a particularly unfamiliar environment.

Of course, you can still build a Hackintosh today. Instructions exist for installing and running macOS Sequoia (macOS 15), macOS Sonoma (macOS 14), as well as a whole host of earlier versions all the way back to when it was still called Mac OS X. When macOS Tahoe drops later this year, the community will likely work to make the x86 version run on any old PC hardware. Beyond that, though, the story will end, as Apple continues to walk farther into its ARM-powered future.

Ultimately, what the Hackintosh offered was choice. It wasn’t convenient, but if you were in love with macOS, it let you do what Apple said was verboten. You didn’t have to pay for expensive first party parts, and you could build your machine in the manner to which you were accustomed. You could have your cake and eat it too, which is to say that you could run the Mac version of Photoshop because that apparently mattered to some people. Now, all that’s over, so if you love weird modifier keys on your keyboard and a sleek, glassy operating system, you’ll have to pay the big bucks for Apple hardware again. The Hackintosh is dead. Long live Apple Silicon, so it goes.


hackaday.com/2025/07/08/the-en…


Touch Lamp Tracks ISS with Style


In the comments of a recent article, the question came up as to where to find projects from the really smart kids the greybeards remember being in the 70s. In the case of [Will Dana] the answer is YouTube, where he’s done an excellent job of producing an ISS-tracking lamp, especially considering he’s younger than almost all of the station’s major components.*

There's nothing ground-breaking here, and [Will] is honest enough to call out his inspiration in the video. Choosing to make a ground-track display with an off-the-shelf globe is a nice change from the pointing devices we've featured most recently. Inside the globe is a pair of stepper motors configured for alt/az control – which means the device must reset every orbit, since [Will] didn't have slip rings or a 360 degree stepper on hand. A pair of magnets couples the motion system inside the globe to the 3D printed ISS model (with a lovely paintjob thanks to [Will]'s girlfriend – who may or may not be from Canada, but did show up in the video to banish your doubts as to her existence), letting it slide magically across the surface. (Skip to the end of the embedded video for a timelapse of the globe in action.) The lamp portion is provided by some LEDs in the base, which are touch-activated thanks to some conductive tape inside the 3D printed base.

It’s all controlled by an ESP32, which fetches the ISS position with a NASA API. Hopefully it doesn’t go the way of the sighting website, but if it does there’s more than enough horsepower to calculate the position from orbital parameters, and we are confident [Will] can figure out the code for that. That should be pretty easy compared to the homebrew relay computer or the animatronic sorting hat we featured from him last year.
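As a hedged sketch of what that tracking loop might look like, consider the following; the real firmware runs on an ESP32, and both the open-notify.org endpoint and the 200-step motors here are our assumptions rather than details from [Will]'s build.

import json
import time
import urllib.request

STEPS_PER_REV = 200  # assumed full-step count of the hypothetical motors

def iss_position():
    # Public ISS position feed; treat the exact endpoint as an assumption
    url = "http://api.open-notify.org/iss-now.json"
    with urllib.request.urlopen(url, timeout=10) as response:
        pos = json.load(response)["iss_position"]
    return float(pos["latitude"]), float(pos["longitude"])

def to_steps(lat, lon):
    # One motor spins the globe through longitude, the other tilts the magnet
    # arm through latitude; both are mapped onto absolute step counts.
    az_steps = round((lon % 360.0) / 360.0 * STEPS_PER_REV)
    alt_steps = round((lat + 90.0) / 180.0 * (STEPS_PER_REV // 2))
    return alt_steps, az_steps

for _ in range(3):                 # the real lamp would loop forever
    lat, lon = iss_position()
    print(f"ISS at {lat:.2f}, {lon:.2f} -> steps {to_steps(lat, lon)}")
    time.sleep(10)                 # and command the steppers here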

Our thanks to [Will] for the tip. The tip line is for hackers of all ages, but we admit that it’s great to see what the new generation is up to.

*Only the Roll Out Solar Array, unless you only count on-orbit age, in which case the Nauka module would qualify as well.

youtube.com/embed/nbEe-BCNutg?…


hackaday.com/2025/07/08/touch-…


Approach to mainframe penetration testing on z/OS. Deep dive into RACF


In our previous article we dissected penetration testing techniques for IBM z/OS mainframes protected by the Resource Access Control Facility (RACF) security package. In this second part of our research, we delve deeper into RACF by examining its decision-making logic, database structure, and the interactions between the various entities in this subsystem. To facilitate offline analysis of the RACF database, we have developed our own utility, racfudit, which we will use to perform possible checks and evaluate RACF configuration security. As part of this research, we also outline the relationships between RACF entities (users, resources, and data sets) to identify potential privilege escalation paths for z/OS users.

This material is provided solely for educational purposes and is intended to assist professionals conducting authorized penetration tests.

RACF internal architecture

Overall role


z/OS access control diagram

To thoroughly analyze RACF, let’s recall its role and the functions of its components within the overall z/OS architecture. As illustrated in the diagram above, RACF can generally be divided into a service component and a database. Other components exist too, such as utilities for RACF administration and management, or the RACF Auditing and Reporting solution responsible for event logging and reporting. However, for a general understanding of the process, we believe these components are not strictly necessary. The RACF database stores information about z/OS users and the resources for which access control is configured. Based on this data, the RACF service component performs all necessary security checks when requested by other z/OS components and subsystems. RACF typically interacts with other subsystems through the System Authorization Facility (SAF) interface. Various z/OS components use SAF to authorize a user’s access to resources or to execute a user-requested operation. It is worth noting that while this paper focuses on the operating principle of RACF as the standard security package, other security packages like ACF2 or Top Secret can also be used in z/OS.

Let’s consider an example of user authorization within the Time Sharing Option (TSO) subsystem, the z/OS equivalent of a command line interface. We use an x3270 terminal emulator to connect to the mainframe. After successful user authentication in z/OS, the TSO subsystem uses SAF to query the RACF security package, checking that the user has permission to access the TSO resource manager. The RACF service queries the database for user information, which is stored in a user profile. If the database contains a record of the required access permissions, the user is authorized, and information from the user profile is placed into the address space of the new TSO session within the ACEE (Accessor Environment Element) control block. For subsequent attempts to access other z/OS resources within that TSO session, RACF uses the information in ACEE to make the decision on granting user access. SAF reads data from ACEE and transmits it to the RACF service. RACF makes the decision to grant or deny access, based on information in the relevant profile of the requested resource stored in the database. This decision is then sent back to SAF, which processes the user request accordingly. The process of querying RACF repeats for any further attempts by the user to access other resources or execute commands within the TSO session.

Thus, RACF handles identification, authentication, and authorization of users, as well as granting privileges within z/OS.

RACF database components


As discussed above, access decisions for resources within z/OS are made based on information stored in the RACF database. This data is kept in the form of records, or as RACF terminology puts it, profiles. These contain details about specific z/OS objects. While the RACF database can hold various profile types, four main types are especially important for security analysis:

  1. User profile holds user-specific information such as logins, password hashes, special attributes, and the groups the user belongs to.
  2. Group profile contains information about a group, including its members, owner, special attributes, list of subgroups, and the access permissions of group members for that group.
  3. Data set profile stores details about a data set, including access permissions, attributes, and auditing policy.
  4. General resource profile provides information about a resource or resource class, such as resource holders, their permissions regarding the resource, audit policy, and the resource owner.

The RACF database contains numerous instances of these profiles. Together, they form a complex structure of relationships between objects and subjects within z/OS, which serves as the basis for access decisions.

Logical structure of RACF database profiles


Each profile is composed of one or more segments. Different profile types utilize different segment types.

For example, a user profile instance may contain the following segments:

  • BASE: core user information in RACF (mandatory segment);
  • TSO: user TSO-session parameters;
  • OMVS: user session parameters within the z/OS UNIX subsystem;
  • KERB: data related to the z/OS Network Authentication Service, essential for Kerberos protocol operations;
  • and others.

User profile segments

Different segment types are distinguished by the set of fields they store. For instance, the BASE segment of a user profile contains the following fields:

  • PASSWORD: the user’s password hash;
  • PHRASE: the user’s password phrase hash;
  • LOGIN: the user’s login;
  • OWNER: the owner of the user profile;
  • AUTHDATE: the date of the user profile creation in the RACF database;
  • and others.

The PASSWORD and PHRASE fields are particularly interesting for security analysis, and we will dive deeper into these later.

RACF database structure


It is worth noting that the RACF database is stored as a specialized data set with a specific format. Grasping this format is very helpful when analyzing the DB and mapping the relationships between z/OS objects and subjects.

As discussed in our previous article, a data set is the mainframe equivalent of a file, composed of a series of blocks.

RACF DB structure

The image above illustrates the RACF database structure, detailing the data blocks and their offsets. From the RACF DB analysis perspective, and when subsequently determining the relationships between z/OS objects and subjects, the most critical blocks include:

  • The header block, or inventory control block (ICB), which contains various metadata and pointers to all other data blocks within the RACF database. By reading the ICB, you gain access to the rest of the data blocks.
  • Index blocks, which form a singly linked list that contains pointers to all profiles and their segments in the RACF database – that is, to the information about all users, groups, data sets, and resources.
  • Templates: a crucial data block containing templates for all profile types (user, group, data set, and general resource profiles). The templates list fields and specify their format for every possible segment type within the corresponding profile type.

Upon dissecting the RACF database structure, we identified the need for a utility capable of extracting all relevant profile information from the DB, regardless of its version. This utility would also need to save the extracted data in a convenient format for offline analysis. Performing this type of analysis provides a comprehensive picture of the relationships between all objects and subjects for a specific z/OS installation, helping uncover potential security vulnerabilities that could lead to privilege escalation or lateral movement.

Utilities for RACF DB analysis


At the previous stage, we defined the following functional requirements for an RACF DB analysis utility:

  1. The ability to analyze RACF profiles offline without needing to run commands on the mainframe
  2. The ability to extract exhaustive information about RACF profiles stored in the DB
  3. Compatibility with various RACF DB versions
  4. Intuitive navigation of the extracted data and the option to present it in various formats: plaintext, JSON, SQL, etc.


Overview of existing RACF DB analysis solutions


We started by analyzing off-the-shelf tools and evaluating their potential for our specific needs:

  • Racf2john extracts user password hashes (from the PASSWORD field) encrypted with the DES and KDFAES algorithms from the RACF database. While this was a decent starting point, we needed more than just the PASSWORD field; specifically, we also needed to retrieve content from other profile fields like PHRASE.
  • Racf2sql takes an RACF DB dump as input and converts it into an SQLite database, which can then be queried with SQL. This is convenient, but the conversion process risks losing data critical for z/OS security assessment and identifying misconfigurations. Furthermore, the tool requires a database dump generated by the z/OS IRRDBU00 utility (part of the RACF security package) rather than the raw database itself.
  • IRRXUTIL allows querying the RACF DB to extract information. It is also part of the RACF security package. It can be conveniently used with a set of scripts written in REXX (an interpreted language used in z/OS). However, these scripts demand elevated privileges (access to one or more IRR.RADMIN.** resources in the FACILITY resource class) and must be executed directly on the mainframe, which is unsuitable for the task at hand.
  • Racf_debug_cleanup.c directly analyzes a RACF DB from a data set copy. A significant drawback is that it only parses BASE segments and outputs results in plaintext.

As you can see, existing tools don’t satisfy our needs. Some utilities require direct execution on the mainframe. Others operate on a data set copy and extract incomplete information from the DB. Moreover, they rely on hardcoded offsets and signatures within profile segments, which can vary across RACF versions. Therefore, we decided to develop our own utility for RACF database analysis.

Introducing racfudit


We have written our own platform-independent utility racfudit in Golang and tested it across various z/OS versions (1.13, 2.02, and 3.1). Below, we delve into the operating principles, capabilities and advantages of our new tool.

Extracting data from the RACF DB


To analyze RACF DB information offline, we first needed a way to extract structured data. We developed a two-stage approach for this:

  • The first stage involves analyzing the templates stored within the RACF DB. Each template describes a specific profile type, its constituent segments, and the fields within those segments, including their type and size. This allows us to obtain an up-to-date list of profile types, their segments, and associated fields, regardless of the RACF version.
  • In the second stage, we traverse all index blocks to extract every profile with its content from the RACF DB. These collected profiles are then processed and parsed using the templates obtained in the first stage.

The first stage is crucial because RACF DB profiles are stored as unstructured byte arrays. The templates are what define how each specific profile (byte array) is processed based on its type.

Thus, we defined the following algorithm to extract structured data.

Extracting data from the RACF DB using templates


  1. We offload the RACF DB from the mainframe and read its header block (ICB) to determine the location of the templates.
  2. Based on the template for each profile type, we define an algorithm for structuring specific profile instances according to their type.
  3. We use the content of the header block to locate the index blocks, which store pointers to all profile instances.
  4. We read all profile instances and their segments sequentially from the list of index blocks.
  5. For each profile instance and its segments we read, we apply the processing algorithm based on the corresponding template.
  6. All processed profile instances are saved in an intermediate state, allowing for future storage in various formats, such as plaintext or SQLite.

The advantage of this approach is its version independence. Even if templates and index blocks change their structure across RACF versions, our utility will not lose data because it dynamically determines the structure of each profile type based on the relevant template.

Analyzing extracted RACF DB information


Our racfudit utility can present collected RACF DB information as an SQLite database or a plaintext file.

RACF DB information as an SQLite DB (top) and text data (bottom)

Using SQLite, you can execute SQL queries to identify misconfigurations in RACF that could be exploited for privilege escalation, lateral movement, bypassing access controls, or other pentesting tactics. It is worth noting that the set of SQL queries used for processing information in SQLite can be adapted to validate current RACF settings against security standards and best practices. Let’s look at some specific examples of how to use the racfudit utility to uncover security issues.

Collecting password hashes


One of the primary goals in penetration testing is to get a list of administrators and a way to log in with their credentials. This can be useful for maintaining persistence on the mainframe, moving laterally to other mainframes, or even pivoting to servers running different operating systems. Administrators are typically found in the SYS1 group and its subgroups. The example below shows a query to retrieve hashes of passwords (PASSWORD) and password phrases (PHRASE) for privileged users in the SYS1 group.
select ProfileName,PHRASE,PASSWORD,CONGRPNM from USER_BASE where CONGRPNM LIKE "%SYS1%";
Of course, to log in to the system, you need to crack these hashes to recover the actual passwords. We cover that in more detail below.

Searching for inadequate UACC control in data sets


The universal access authority (UACC) defines the default access permissions to the data set. This parameter specifies the level of access for all users who do not have specific access permissions configured. Insufficient control over UACC values can pose a significant risk if elevated access permissions (UPDATE or higher) are set for data sets containing sensitive data or for APF libraries, which could allow privilege escalation. The query below helps identify data sets with default ALTER access permissions, which allow users to read, delete and modify the data set.
select ProfileName, UNIVACS from DATASET_BASE where UNIVACS LIKE "1%";
The UACC field is not limited to data set profiles; it is also found in other profile types. Weak control over this field's configuration can give a penetration tester access to additional resources.

RACF profile relationships


As mentioned earlier, various RACF entities have relationships. Some are explicitly defined; for example, a username might be listed in a group profile within its member field (USERID field). However, there are also implicit relationships. For instance, if a user group has UPDATE access to a specific data set, every member of that group implicitly has write access to that data set. This is a simple example of implicit relationships. Next, we delve into more complex and specific relationships within the RACF database that a penetration tester can exploit.

RACF profile fields


A deep dive into RACF internal architecture reveals that misconfigurations of access permissions and other attributes for various RACF entities can be difficult to detect and remediate in some scenarios. These seemingly minor errors can be critical, potentially leading to mainframe compromise. The explicit and implicit relationships within the RACF database collectively define the mainframe’s current security posture. As mentioned, each profile type in the RACF database has a unique set of fields and attributes that describe how profiles relate to one another. Based on these fields and attributes, we have compiled lists of key fields that help build and analyze relationship chains.
User profile fields

  • SPECIAL: indicates that the user has privileges to execute any RACF command and grants them full control over all profiles in the RACF database.
  • OPERATIONS: indicates whether the user has authorized access to all RACF-protected resources of the DATASET, DASDVOL, GDASDVOL, PSFMPL, TAPEVOL, VMBATCH, VMCMD, VMMDISK, VMNODE, and VMRDR classes. While actions for users with this field specified are subject to certain restrictions, in a penetration testing context the OPERATIONS field often indicates full data set access.
  • AUDITOR: indicates whether the user has permission to access audit information.
  • AUTHOR: the creator of the user. The author has certain privileges over the user, such as the ability to change their password.
  • REVOKE: indicates whether the user can log in to the system.
  • Password TYPE: specifies the hash type (DES or KDFAES) for passwords and password phrases. This field is not natively present in the user profile, but it can be created based on how different passwords and password phrases are stored.
  • Group-SPECIAL: indicates whether the user has full control over all profiles within the scope defined by the group or groups field. This is a particularly interesting field that we explore in more detail below.
  • Group-OPERATIONS: indicates whether the user has authorized access to all RACF-protected resources of the DATASET, DASDVOL, GDASDVOL, PSFMPL, TAPEVOL, VMBATCH, VMCMD, VMMDISK, VMNODE and VMRDR classes within the scope defined by the group or groups field.
  • Group-AUDITOR: indicates whether the user has permission to access audit information within the scope defined by the group or groups field.
  • CLAUTH (class authority): allows the user to create profiles within the specified class or classes. This field enables delegation of management privileges for individual classes.
  • GROUPIDS: contains a list of groups the user belongs to.
  • UACC (universal access authority): defines the UACC value for new profiles created by the user.

Group profile fields

  • UACC (universal access authority): defines the UACC value for new profiles that the user creates when connected to the group.
  • OWNER: the creator of the group. The owner has specific privileges in relation to the current group and its subgroups.
  • USERIDS: the list of users within the group. The order is essential.
  • USERACS: the list of group members with their respective permissions for access to the group. The order is essential.
  • SUPGROUP: the name of the superior group.

General resource and data set profile fields

  • UACC (universal access authority): defines the default access permissions to the resource or data set.
  • OWNER: the creator of the resource or data set, who holds certain privileges over it.
  • WARNING: indicates whether the resource or data set is in WARNING mode.
  • USERIDS: the list of user IDs associated with the resource or data set. The order is essential.
  • USERACS: the list of users with access permissions to the resource or data set. The order is essential.


RACF profile relationship chains


The fields listed above demonstrate the presence of relationships between RACF profiles. We have decided to name these relationships similarly to those used in BloodHound, a popular tool for analyzing Active Directory misconfigurations. Below are some examples of these relationships – the list is not exhaustive.

  • Owner: the subject owns the object.
  • MemberOf: the subject is part of the object.
  • AllowJoin: the subject has permission to add itself to the object.
  • AllowConnect: the subject has permission to add another object to the specified object.
  • AllowCreate: the subject has permission to create an instance of the object.
  • AllowAlter: the subject has the ALTER privilege for the object.
  • AllowUpdate: the subject has the UPDATE privilege for the object.
  • AllowRead: the subject has the READ privilege for the object.
  • CLAuthTo: the subject has permission to create instances of the object as defined in the CLAUTH field.
  • GroupSpecial: the subject has full control over all profiles within the object’s scope of influence as defined in the group-SPECIAL field.
  • GroupOperations: the subject has permissions to perform certain operations with the object as defined in the group-OPERATIONS field.
  • ImpersonateTo: the subject grants the object the privilege to perform certain operations on the subject’s behalf.
  • ResetPassword: the subject grants another object the privilege to reset the password or password phrase of the specified object.
  • UnixAdmin: the subject grants superuser privileges to the object in z/OS UNIX.
  • SetAPF: the subject grants another object the privilege to set the APF flag on the specified object.

These relationships serve as edges when constructing a graph of subject–object interconnections. Below are examples of potential relationships between specific profile types.

Examples of relationships between RACF profiles
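
As a sketch of how such a graph could be assembled from racfudit's SQLite output, the snippet below builds just two of the edge types listed above (MemberOf and Owner) using networkx. The way the CONGRPNM group list is split is an assumption, and the remaining edge types would be extracted from their corresponding fields in the same fashion.

import sqlite3
import networkx as nx  # third-party graph library, used here purely for illustration

def build_graph(db_path):
    con = sqlite3.connect(db_path)
    g = nx.DiGraph()
    # MemberOf edges: user -> group, from the CONGRPNM field used in the queries above
    # (we assume a whitespace-separated group list; the real encoding may differ)
    for user, groups in con.execute("select ProfileName, CONGRPNM from USER_BASE"):
        for group in (groups or "").split():
            g.add_edge(user, group, rel="MemberOf")
    # Owner edges: creator (AUTHOR) -> user
    for user, author in con.execute("select ProfileName, AUTHOR from USER_BASE"):
        if author:
            g.add_edge(author.strip(), user, rel="Owner")
    return g

# Example: enumerate candidate escalation paths from a low-privileged user
# g = build_graph("racf.sqlite")
# print(list(nx.all_simple_paths(g, "TESTUSR", "IBMUSER", cutoff=5)))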

Visualizing and analyzing these relationships helped us identify specific chains that describe potential RACF security issues, such as a path from a low-privileged user to a highly-privileged one. Before we delve into examples of these chains, let’s consider another interesting and peculiar feature of the relationships between RACF database entities.

Implicit RACF profile relationships


We have observed a fascinating characteristic of the group-SPECIAL, group-OPERATIONS, and group-AUDITOR fields within a user profile. If the user has any group specified in one of these fields, that group’s scope of influence extends the user’s own scope.

Scope of influence of a user with a group-SPECIAL field

For instance, consider USER1 with GROUP1 specified in the group-SPECIAL field. If GROUP1 owns GROUP2, and GROUP2 subsequently owns USER5, then USER1 gains privileges over USER5. This is not just about data access; USER1 essentially becomes the owner of USER5. A unique aspect of z/OS is that this level of access allows USER1 to, for example, change USER5’s password, even if USER5 holds privileged attributes like SPECIAL, OPERATIONS, ROAUDIT, AUDITOR, or PROTECTED.

Below is an SQL query, generated using the racfudit utility, that identifies all users and groups where the specified user possesses special attributes:
select ProfileName, CGGRPNM, CGUACC, CGFLAG2 from USER_BASE WHERE (CGFLAG2 LIKE '%10000000%');
Here is a query to find users whose owners (AUTHOR) are not the standard default administrators:
select ProfileName,AUTHOR from USER_BASE WHERE (AUTHOR NOT LIKE '%IBMUSER%' AND AUTHOR NOT LIKE 'SYS1%');
Let’s illustrate how user privileges can be escalated through these implicit profile relationships.

Privilege escalation via the group-SPECIAL field

In this scenario, the user TESTUSR has the group-SPECIAL field set to PASSADM. This group, PASSADM, owns the OPERATOR user. This means TESTUSR’s scope of influence expands to include PASSADM’s scope, thereby granting TESTUSR control over OPERATOR. Consequently, if TESTUSR’s credentials are compromised, the attacker gains access to the OPERATOR user. The OPERATOR user, in turn, has READ access to the IRR.PASSWORD.RESET resource, which allows them to assign a password to any user who does not possess privileged permissions.

Having elevated privileges in z/OS UNIX is often sufficient for compromising the mainframe. These can be acquired through several methods:

  • Grant the user READ access to the BPX.SUPERUSER resource of the FACILITY class.
  • Grant the user READ access to UNIXPRIV.SUPERUSER.* resources of the UNIXPRIV class.
  • Set the UID field to 0 in the OMVS segment of the user profile.

For example, the DFSOPER user has READ access to the BPX.SUPERUSER resource, making them privileged in z/OS UNIX and, by extension, across the entire mainframe. However, DFSOPER does not have the explicit privileged fields SPECIAL, OPERATIONS, AUDITOR, ROAUDIT and PROTECTED set, meaning the OPERATOR user can change DFSOPER’s password. This allows us to define the following sequence of actions to achieve high privileges on the mainframe:

  1. Obtain and use TESTUSR’s credentials to log in.
  2. Change OPERATOR’s password and log in with those credentials.
  3. Change DFSOPER’s password and log in with those credentials.
  4. Access the z/OS UNIX Shell with elevated privileges.

We uncovered another implicit RACF profile relationship that enables user privilege escalation.

Privilege escalation from a chain of misconfigurations

In another example, the TESTUSR user has READ access to the OPERSMS.SUBMIT resource of the SURROGAT class. This implies that TESTUSR can create a task under the identity of OPERSMS using the ImpersonateTo relationship. OPERSMS is a member of the HFSADMIN group, which has READ access to the TESTAUTH resource of the TSOAUTH class. This resource indicates whether the user can run an application or library as APF-authorized – this requires only READ access. Therefore, if APF access is misconfigured, the OPERSMS user can escalate their current privileges to the highest possible level. This outlines a path from the low-privileged TESTUSR to obtaining maximum privileges on the mainframe.

At this stage, the racfudit utility only allows these connections to be identified manually, through a series of SQLite database queries. However, we plan to add support for other output formats, including Neo4j DBMS integration, to automatically visualize the interconnected chains described above.
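
To give an idea of what that integration could enable, here is a rough Cypher sketch of the kind of query we have in mind. The node labels, relationship types, and properties below are assumptions modeled on the edge names listed earlier, not an existing racfudit export format.

// Hypothetical schema: User/Group/Dataset nodes, relationships named after the edges above
MATCH p = shortestPath(
  (low:User {name: 'TESTUSR'})
    -[:MemberOf|Owner|GroupSpecial|ImpersonateTo|AllowUpdate*1..6]->
  (high:User {special: true})
)
RETURN p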

Password hashes in RACF


To escalate privileges and gain mainframe access, we need the credentials of privileged users. We previously used our utility to extract their password hashes. Now, let’s dive into the password policy principles in z/OS and outline methods for recovering passwords from these collected hashes.

The primary password authentication methods in z/OS, based on RACF, are PASSWORD and PASSPHRASE. PASSWORD is a password composed by default of ASCII characters: uppercase English letters, numbers, and special characters (@#$). Its length is limited to 8 characters. PASSPHRASE, or a password phrase, has a more complex policy, allowing 14 to 100 ASCII characters, including lowercase or uppercase English letters, numbers, and an extended set of special characters (@#$&*{}[]()=,.;’+/). Hashes for both PASSWORD and PASSPHRASE are stored in the user profile within the BASE segment, in the PASSWORD and PHRASE fields, respectively. Two algorithms are used to derive their values: DES and KDFAES.

It is worth noting that we use the terms “password hash” and “password phrase hash” for clarity. When using the DES and KDFAES algorithms, user credentials are stored in the RACF database as encrypted text, not as a hash sum in its classical sense. Nevertheless, we will continue to use “password hash” and “password phrase hash” as is customary in IBM documentation.

Let’s discuss the operating principles and characteristics of the DES and KDFAES algorithms in more detail.

DES


When the DES algorithm is used, the computation of PASSWORD and PHRASE values stored in the RACF database involves classic DES encryption. Here, the plaintext data block is the username (padded to 8 characters if shorter), and the key is the password (also padded to 8 characters if shorter).

PASSWORD


The username is encrypted with the password as the key via the DES algorithm, and the 8-byte result is placed in the user profile’s PASSWORD field.

DES encryption of a password

Keep in mind that both the username and password are encoded with EBCDIC. For instance, the username USR1 would look like this in EBCDIC: e4e2d9f140404040. The byte 0x40 serves as padding for the plaintext to reach 8 bytes.
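
A minimal Python sketch of this scheme (using the pycryptodome package) is shown below. It follows the description above literally: EBCDIC-encode, pad with 0x40, and run a single DES-ECB block. The real RACF implementation may apply additional key preprocessing, so treat this as illustrative rather than a drop-in verifier; cp037 is used here as the EBCDIC code page, which matches for uppercase letters and digits.

from Crypto.Cipher import DES  # pip install pycryptodome

def racf_des_candidate(username: str, password: str) -> bytes:
    # Both values are EBCDIC-encoded and padded to 8 bytes with 0x40 (EBCDIC space)
    user_block = username.upper().encode("cp037").ljust(8, b"\x40")
    key = password.upper().encode("cp037").ljust(8, b"\x40")
    return DES.new(key, DES.MODE_ECB).encrypt(user_block)  # candidate PASSWORD field value

# "USR1" encodes to e4e2d9f140404040, as in the example above
print(racf_des_candidate("USR1", "SECRET").hex())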

This password can be recovered quite fast, given the small keyspace and low computational complexity of DES. For example, a brute-force attack powered by a cluster of NVIDIA 4090 GPUs takes less than five minutes.

The hashcat tool includes a module (Hash-type 8500) for cracking RACF passwords with the DES algorithm.
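
For instance, a mask attack covering the default PASSWORD policy described above (uppercase letters, digits, and @#$, up to 8 characters) could be launched roughly as follows. The $racf$*USER*HASH input format shown matches hashcat's published example hash for mode 8500; verify it against your extracted data before starting a long job.

# hashes.txt: one entry per line in the form  $racf$*<USERNAME>*<16 hex characters>
# -1 defines a custom charset matching the default PASSWORD policy
hashcat -m 8500 -a 3 -1 '?u?d@#$' hashes.txt '?1?1?1?1?1?1?1?1' --increment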

PASSPHRASE


PASSPHRASE encryption is a bit more complex, and a detailed description of its algorithm is not readily available. However, our research uncovered certain interesting characteristics.

First, the final hash length in the PHRASE field matches the original password phrase length. Essentially, the encrypted data output from DES gets truncated to the input plaintext length without padding. This design can clearly lead to collisions and incorrect authentication under certain conditions. For instance, if the original password phrase is 17 bytes long, it will be encrypted in three blocks, with the last block padded with seven bytes. These padded bytes are then truncated after encryption. In this scenario, any password whose first 17 encrypted bytes match the encrypted PASSPHRASE would be considered valid.

The second interesting feature is that the PHRASE field value is also computed using the DES algorithm, but it employs a proprietary block chaining mode. We will informally refer to this as IBM-custom mode.

DES encryption of a password phrase

Given these limitations, we can use the hashcat module for RACF DES to recover the first 8 characters of a password phrase from the first block of encrypted data in the PHRASE field. In some practical scenarios, recovering the beginning of a password phrase allowed us to guess the remainder, especially when weak dictionary passwords were used. For example, if we recovered Admin123 (8 characters) while cracking a 15-byte PASSPHRASE hash, then it is plausible the full password phrase was Admin1234567890.

KDFAES


Computing passwords and password phrases generated with the KDFAES algorithm is significantly more challenging than with DES. KDFAES is a proprietary IBM algorithm that leverages AES encryption. The encryption key is generated from the password using the PBKDF2 function with a specific number of hashing iterations.

PASSWORD


The diagram below outlines the multi-stage KDFAES PASSWORD encryption algorithm.

KDFAES encryption of a password

The first stage mirrors the DES-based PASSWORD computation algorithm. Here, the plaintext username is encrypted using the DES algorithm with the password as the key. The username is also encoded in EBCDIC and padded if it’s shorter than 8 bytes. The resulting 8-byte output serves as the key for the second stage: hashing. This stage employs a proprietary IBM algorithm built upon PBKDF2-SHA256-HMAC. A randomly generated 16-byte string (salt) is fed into this algorithm along with the 8-byte key from the first stage. This data is then iteratively hashed using PBKDF2-SHA256-HMAC. The number of iterations is determined by two parameters set in RACF: the memory factor and the repetition factor. The output of the second stage is a 32-byte hash, which is then used as the key for AES encryption of the username in the third stage.
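
The hashing stage can be pictured with the standard library's PBKDF2 primitive. Two caveats: the exact mapping from the memory and repetition factors to an iteration count is proprietary, and the real algorithm wraps PBKDF2 in additional IBM-specific steps, so the snippet below only conveys the general shape of the second stage.

import hashlib

def kdfaes_stage2(stage1_des_output: bytes, salt: bytes, iterations: int) -> bytes:
    # PBKDF2-HMAC-SHA256 over the 8-byte stage-one result and the 16-byte salt;
    # `iterations` stands in for whatever the memory/repetition factors resolve to
    return hashlib.pbkdf2_hmac("sha256", stage1_des_output, salt, iterations, dklen=32)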

The final output is 16 bytes of encrypted data. The first 8 bytes are appended to the end of the PWDX field in the user profile BASE segment, while the other 8 bytes are placed in the PASSWORD field within the same segment.

The PWDX field in the BASE segment has the following structure:

Offset | Size | Field | Comment
0–3 | 4 bytes | Magic number | In the profiles we analyzed, we observed only the value E7D7E66D
4–7 | 4 bytes | Hash type | In the profiles we analyzed, we observed only two values: 00180000 for PASSWORD hashes and 00140000 for PASSPHRASE hashes
8–9 | 2 bytes | Memory factor | A value that determines the number of iterations in the hashing stage
10–11 | 2 bytes | Repetition factor | A value that determines the number of iterations in the hashing stage
12–15 | 4 bytes | Unknown value | In the profiles we analyzed, we observed only the value 00100010
16–31 | 16 bytes | Salt | A randomly generated 16-byte string used in the hashing stage
32–39 | 8 bytes | First half of the password hash | The first 8 bytes of the final encrypted data
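
Given this layout, splitting a raw 40-byte PWDX value into its components is straightforward. A short sketch is shown below; the field names follow the table above, and the integers are treated as big-endian, as is native on z/OS.

import struct

def parse_pwdx(pwdx: bytes) -> dict:
    # 40-byte PWDX layout as described in the table above
    magic, hash_type, mem_factor, rep_factor, unknown = struct.unpack(">IIHHI", pwdx[:16])
    return {
        "magic": f"{magic:08X}",          # expected E7D7E66D
        "hash_type": f"{hash_type:08X}",  # 00180000 = PASSWORD, 00140000 = PASSPHRASE
        "memory_factor": mem_factor,
        "repetition_factor": rep_factor,
        "salt": pwdx[16:32],
        "hash_first_half": pwdx[32:40],   # the other 8 bytes live in the PASSWORD field
    }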

You can use the dedicated module in the John the Ripper utility for offline password cracking. While an IBM KDFAES module for an older version of hashcat exists publicly, it was never integrated into the main branch. Therefore, we developed our own RACF KDFAES module compatible with the current hashcat version.

The time required to crack a RACF KDFAES hash is significantly longer than for RACF DES, largely due to the integration of PBKDF2. For instance, if the memory factor and repetition factor are set to 0x08 and 0x32 respectively, the hashing stage can reach 40,000 iterations. This can extend the password cracking time to several months or even years.

PASSPHRASE


KDFAES encryption of a password phrase

Encrypting a password phrase hash with KDFAES shares many similarities with encrypting a password hash. According to public sources, the primary difference lies in the key used during the second stage. For passwords, data derived from DES-encrypting the username was used, while for a password phrase, its SHA256 hash is used. During our analysis, we could not determine the exact password phrase hashing process – specifically, whether padding is involved, if a secret key is used, and so on.

Additionally, when a password phrase is used, the final hash is stored in the PHRASE and PHRASEX fields instead of PASSWORD and PWDX, respectively, with the PHRASEX value having a structure similar to PWDX.

Conclusion


In this article, we have explored the internal workings of the RACF security package, developed an approach to extracting information, and presented our own tool developed for the purpose. We also outlined several potential misconfigurations that could lead to mainframe compromise and described methods for detecting them. Furthermore, we examined the algorithms used for storing user credentials (passwords and password phrases) and highlighted their strengths and weaknesses.

We hope that the information presented in this article helps mainframe owners better understand and assess the potential risks associated with incorrect RACF security suite configurations and take appropriate mitigation steps. Transitioning to the KDFAES algorithm and password phrases, controlling UACC values, verifying access to APF libraries, regularly tracking user relationship chains, and other steps mentioned in the article can significantly enhance your infrastructure security posture with minimal effort.

In conclusion, it is worth noting that only a small percentage of the RACF database structure has been thoroughly studied. Comprehensive research would involve uncovering additional relationships between database entities, further investigating privileges and their capabilities, and developing tools to exploit excessive privileges. The topic of password recovery is also not fully covered because the encryption algorithms have not been fully studied. IBM z/OS mainframe researchers have immense opportunities for analysis. As for us, we will continue to shed light on the obscure, unexplored aspects of these devices, to help prevent potential vulnerabilities in mainframe infrastructure and associated security incidents.


securelist.com/zos-mainframe-p…


Forget Drones! Now We Steer Beetles with a Joystick: Meet the Cyber-Beetles


Scientists at the University of Queensland have unveiled an unusual invention that could significantly speed up search-and-rescue operations. They have turned ordinary darkling beetles (Zophobas morio) into genuine cybernetic insects, fitting them with miniaturized microchips and a remote control system.

These "super rescuers" wear removable backpacks packed with electronics that allow the insects to be steered in the desired direction. Control is via joysticks similar to those used for video games, which makes it possible to guide the beetles' movements precisely without harming them or shortening their lifespan.

The idea of using insects for complex tasks such as searching the rubble of collapsed buildings or mines is no accident. According to project lead Dr. Tang Vo-Doan, beetles already possess remarkable natural abilities: they move nimbly across complex surfaces, squeeze into the narrowest crevices, and climb vertical surfaces with confidence, where traditional equipment and even miniature robots are helpless.

The backpacks are built around electrodes that send signals to the beetle's antennae or to its elytra, the hard protective plates covering its wings. This lets the team steer the insects in the desired direction. Tests have already been carried out successfully: the beetles move confidently both horizontally and vertically upward, and can even carry an additional load equal to their own weight.
A Zophobas morio, or darkling beetle, fitted with a removable microchip backpack that researchers can use to prompt its movements.
Although some tests are currently being run with an external power source, the developers are already preparing improved prototypes with compact batteries and miniature cameras. This will make it possible not only to track the insect's movements but also to receive real-time images from the scene, which is crucial for rescue operations.

The team plans to test the new technology in real emergency situations within the next five years. If the project successfully passes its implementation stages, the cyber-beetles will be able to quickly inspect rubble, locate survivors, and relay information to rescuers, significantly speeding up the delivery of aid.

The University of Queensland work is part of a global trend toward bio-hybrid technologies. Similar developments are already under way in several countries: specialists at Nanyang Technological University in Singapore can turn cockroaches into controllable "robots" in a matter of minutes, and a few years ago bioengineers created a cybernetic version of the Venus flytrap capable of gently grasping small objects.

The article Forget Drones! Now We Steer Beetles with a Joystick: Meet the Cyber-Beetles originally appeared on il blog della sicurezza informatica.


Managing Temperatures for Ultrafast Benchy Printing


A blue 3DBenchy is visible on a small circular plate extending up through a cutout in a flat, reflective surface. Above the Benchy is a roughly triangular metal 3D printer extruder, with a frost-covered ring around the nozzle. A label below the Benchy reads “2 MIN 03 SEC.”

Commercial 3D printers keep getting faster and faster, but we can confidently say that none of them is nearly as fast as [Jan]’s Minuteman printer, so named for its goal of eventually printing a 3DBenchy in less than a minute. The Minuteman uses an air bearing as its print bed, feeds four streams of filament into one printhead for faster extrusion, and in [Jan]’s latest video, printed a Benchy in just over two minutes at much higher quality than previous two-minute Benchies.

[Jan] found that the biggest speed bottleneck was in cooling a layer quickly enough that it would solidify before the printer laid down the next layer. He was able to get his layer speed down to about 0.6-0.4 seconds per layer, but had trouble going beyond that. He was able to improve the quality of his prints, however, by varying the nozzle temperature throughout the print. For this he used [Salim BELAYEL]’s postprocessing script, which increases hotend temperature when volumetric flow rate is high, and decreases it when flow rate is low. This keeps the plastic coming out of the nozzle at an approximately constant temperature. With this, [Jan] could print quite good sub-four and sub-three minute Benchies, with almost no print degradation from the five-minute version. [Jan] predicts that this will become a standard feature of slicers, and we have to agree that this could help even less speed-obsessed printers.
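
As a rough illustration of the idea (and not [Salim BELAYEL]'s actual script), a post-processor only needs to estimate each move's volumetric flow from the extrusion amount, travel distance and feedrate, then map that onto a temperature and emit an M104 before the move. A toy Python sketch under those assumptions:

FIL_AREA = 2.405           # mm^2, cross-section of 1.75 mm filament
T_MIN, T_MAX = 210, 250    # hotend temperatures to interpolate between
FLOW_MAX = 20.0            # mm^3/s treated as "full flow" for this example

def temp_for_move(extruded_mm: float, xy_distance_mm: float, feedrate_mm_min: float) -> int:
    # volumetric flow of one printing move, then a linear map onto the temperature range
    seconds = xy_distance_mm / (feedrate_mm_min / 60.0)
    flow = extruded_mm * FIL_AREA / seconds            # mm^3/s
    frac = min(flow / FLOW_MAX, 1.0)
    return round(T_MIN + (T_MAX - T_MIN) * frac)       # emit as "M104 S<temp>" before the move

# e.g. a 100 mm move at 12000 mm/min extruding 6 mm of filament
print(f"M104 S{temp_for_move(6.0, 100.0, 12000)}")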

Now onto less generally-applicable optimizations: [Jan] still needed stronger cooling to get faster prints, so he designed a circular duct that directed a plane of compressed air horizontally toward the nozzle, in the manner of an air knife. This wasn’t quite enough, so he precooled his compressed air with dry ice. This made it both colder and denser, both of which made it a better coolant. The thermal gradient this produced in the print bed seemed to cause it to warp, making bed adhesion inconsistent. However, it did increase build quality, and [Jan]’s confident that he’s made the best two-minute Benchy yet.

If you’re curious about Minuteman’s motion system, we’ve previously looked at how that was built. Of course, it’s also possible to speed up prints by simply adding more extruders.

youtube.com/embed/EaORGjZbS-c?…


hackaday.com/2025/07/08/managi…


Hack the System: The OMNIA and WithSecure Event Where RHC Demonstrated a Live Ransomware Attack


Milan, July 2, 2025 – How do you stop a cyberattack that, starting from a simple phishing email, can bring an entire company to its knees in a matter of minutes?

We didn't just talk about it: we did it. Live!

This was "Hack the System", the exclusive event organized by Omnia Srl in collaboration with the Red Hot Cyber publication and HackerHood, together with the cybersecurity vendor WithSecure, and hosted in the futuristic STACK EMEA datacenter. A training day for a select audience of industry professionals that turned the theory of risk into tangible reality.

From phishing email to (averted) disaster


The heart of the event was an impressive live attack simulation. Before the participants' eyes, an ethical hacker demonstrated how, by exploiting a single email, it is possible to penetrate a corporate network, escalate privileges, and launch a devastating ransomware attack.

The audience witnessed two scenarios:

  1. The attack without adequate defenses: files are encrypted, systems are locked. Business grinds to a halt.
  2. The same attack, but against WithSecure technologies: the security solution intercepted and neutralized the threat in real time, demonstrating that speed of response is everything.

The event was also an opportunity to dig into crucial regulatory topics such as NIS2 and DORA, and to discuss how managed MDR (Managed Detection and Response) services have become essential for ensuring resilience and compliance.
From the slide deck that guided the live ransomware attack demo, run by Red Hot Cyber's HackerHood team, composed of Antonio Montillo and Alessandro Moccia of Framework Security.

Voices from the event: "An experience that makes you think"


The real success of the event, though, lies in the words of those who were there. The comments on social media and the feedback we gathered speak for themselves:

Diego Sarnataro, CEO of 10punto10 and an industry expert, shared a powerful reflection:

"Seeing attacks in action live is always an experience that makes you think. The evolution is impressive: what used to take me weeks of preparation is now done in minutes. Speed of response is everything."

He was echoed by Davide Rogai, CSMO & Co-founder of Comm.it s.r.l.:

"A truly educational day… in a technological setting (at times straight out of science fiction) genuinely aimed at industry professionals!"

Federico Mariotti, director of Wi-Fi Communication S.r.l.:

"A truly educational experience full of concrete takeaways. Watching such a realistic simulation made it even clearer how tangible cyber threats are and how essential it is to adopt an active approach to security. Kudos to Omnia and Red Hot Cyber for the excellent organization."

Stefano Torracca, IT, Tarros SPA:

"An extremely educational and practical day. The attack simulation showed, without filters, what it means to face a ransomware incident. A high-level initiative by Omnia and Red Hot Cyber."

Alessio Civita, IT – Network, Security & Support – Supervisor:

"Congratulations to Jacopo, Filippo, and everyone at Omnia for the idea and for managing to put it into practice! Seeing 'the dark side' in action live, and getting even a small demonstration of what we are exposed to, is worth more than a thousand slides or videos. It makes us realize that cyber risk is real and imminent, and that we can no longer afford to ignore it.

I especially appreciated the bold choice of handling the inevitable hiccups live: a highly successful experiment that absolutely deserves a repeat.

Congratulations!"

Simone Peroncini – WiFi Communication S.r.l.:

"A top-tier event, able to combine practicality and educational impact flawlessly. The hacker attack simulation presented during the session was not only technically well built but also extremely realistic: seeing how an attack based on a classic yet still all-too-effective phishing scheme can work its way into business processes until it compromises entire systems was a real wake-up call.

What made the session even more interesting was Omnia's presentation of its software, designed to provide proactive defense against this type of attack. A modern, intelligent approach to cybersecurity that aims not only to detect and block intrusions but also to prevent them by acting on behaviors, configurations, and continuous user education.

Congratulations also to Red Hot Cyber, which together with Omnia built a high-impact format capable of engaging and prompting reflection even among less technical attendees. Experiences like this should be the norm in corporate training strategies: concrete, current, and action-oriented."

Collaboration always wins


"Hack the System" demonstrated a fundamental principle, summed up perfectly by Diego Sarnataro: "Cybersecurity is a field where collaboration always beats competition."

The partnership between a system integrator like Omnia, a technology leader like WithSecure, and a reference point for information like Red Hot Cyber is the winning formula for raising the awareness and the defenses of Italian companies.

Is your company ready to react just as fast? If watching this live hacking session raised any doubts, now is the right time to act.

Contact Omnia Srl for an analysis of your security posture and find out how we can help you avoid becoming the next victim.


The article Hack the System: The OMNIA and WithSecure Event Where RHC Demonstrated a Live Ransomware Attack originally appeared on il blog della sicurezza informatica.


Discovering Drumrlu: The IAB Doing Big Business Between Turkey and Venezuela


After the Far East and Iran, we continue our series of articles on IAB-type actors with an actor believed to be based in Europe, in a NATO country.

Origin and attribution


According to researchers at KelaCyber (a cyber threat intelligence vendor based in Tel Aviv, Israel), the actor Drumrlu is an IAB presumably based in Turkey.

Drumrlu is also known by the name/moniker "3LV4N".

As we saw in the first article of this series, on the access broker miyako, researching victims' revenue is a very common practice for IABs, whose posts tend to mention victim revenue to entice potential buyers. The assumption is that organizations with higher revenue can potentially yield a multi-million-dollar ransom.



"drumrlu" (aka 3lv4n) is an initial access broker and credential database seller active on underground forums since at least May 2020. drumrlu has sold access to the domains of various organizations in many countries around the world (EMEA, APAC, and AMER), in the education, utilities, insurance, healthcare, cryptocurrency, gaming, and government sectors. In October 2020, the actor began selling root access to VMware ESXi software, at prices between 250 and 500 dollars.

Analysts at Outpost24 observed that "Nosophoros", the actor behind the Thanos Ransomware as a Service (RaaS), likely collaborates with (is a customer of) drumrlu. On July 18, 2020, Nosophoros posted the following message on the "Exploit" forum: "drumrlu is a good vendor, I vouched for him before and I still do. Glad you are back".

Simon Roe, researcher and product manager at Outpost24, highlighted in one of his reports how drumrlu/3LV4N operated within the Thanos RaaS operation.

drumrlu also left a review on Nosophoros's profile, calling him "Best RaaS, Best Programmer". Another comment, from the actor "peterveliki", confirms the potential partnership between drumrlu and Nosophoros: "I bought access from this seller – everything went smoothly. A very helpful dude. He also recommended using Thanos from Nosphorus; which turned out to be very helpful in this case. Good seller, I recommend".

Attack chain


According to Proofpoint, an attack chain built on the Thanos RaaS, with initial access provided by the IAB drumrlu, could look like this:

1. Emails containing a malicious Office document are sent out

2. A victim downloads the document and enables the macros, which drop a payload (a RAT and/or infostealer)

3. The actor uses the backdoor access to exfiltrate system information and credentials

4. At this point, the initial access broker can sell the access to other actors

5. The malware's backdoor access can also be used to deploy Cobalt Strike, enabling lateral movement within the network

6. Full domain compromise is then achieved via Active Directory

7. The RaaS affiliate deploys the ransomware to every workstation joined to the domain.

Example of credential stealers delivered via phishing email


A possible credential-theft phishing email with a malicious Office document attached, sent from a free GMX.COM address.

An Excel XLSM file carrying the GuLoader malware (aka CloudEyE or vbdropper).

Source: Proofpoint

Some scenarios in which the actor has operated


  • An electric power company in Amman, Jordan.
  • A German hospital in Saudi Arabia.
  • An insurance group in Thailand.
  • An insurance group in Saudi Arabia.
  • A government entity in Kuwait.


Target countries

Australia, the United States, Thailand, Pakistan, France, Italy, Switzerland, the United Arab Emirates, Jordan, Israel, Egypt, Kuwait, and Saudi Arabia.

Target sectors

Education, utilities, insurance, healthcare, cryptocurrency, gaming, and government entities.

Nota bene on Thanos: the alleged creator of the RaaS… a Venezuelan doctor!?


The US DoJ alleges that a cardiologist is the developer behind the Thanos ransomware: Moises Luis Zagala Gonzalez, 55, a French and Venezuelan citizen residing in Ciudad Bolivar, Venezuela, is charged with attempted computer intrusions and conspiracy to commit computer intrusions, according to a US criminal complaint unsealed on Monday, May 16, 2022.

Zagala allegedly sold and rented ransomware packages he developed to cybercriminals. He is also accused of training aspiring attackers and affiliates on how to use his products to extort victims, and of later boasting about successful attacks.

A series of mistakes by Zagala allegedly allowed investigators to identify him as a suspect, the DoJ stated. In September 2020, an undercover FBI agent reportedly purchased a Thanos license from Zagala and downloaded the software. In addition, an FBI informant discussed with Zagala the possibility of setting up an affiliate program using Thanos, again according to the DoJ document.

Zagala allegedly boasted publicly on the dark web that the Thanos RaaS, his creation, had been used by an Iranian state-sponsored threat actor group to attack Israeli companies.

Source: portswigger.net/daily-swig/med…

PDF source (FBI.GOV)

fbi.gov/wanted/cyber/moises-lu…

Conclusion


In this article of our series on initial access brokers, we have seen how credential theft happens through phishing campaigns with Office attachments carrying malware and infostealers. So let us recall some of the best practices mentioned previously, to be ready for any eventuality:

  • Strong access controls / use of multi-factor authentication
  • Employee training and awareness
  • Network segmentation / micro-segmentation
  • Continuous monitoring and threat detection


Bibliography


KelaCyber

kelacyber.com/blog/uncovering-…

kelacyber.com/blog/the-secret-…

Outpost24 report

slideshare.net/slideshow/outpo…

Thanos RaaS on Recorded Future

recordedfuture.com/research/th…

Bleeping Computer on Thanos

bleepingcomputer.com/news/secu…

PortSwigger on Thanos

portswigger.net/daily-swig/med…

Proofpoint on Thanos

proofpoint.com/us/blog/threat-…

Malpedia Fraunhofer on GuLoader/CloudEyE

malpedia.caad.fkie.fraunhofer.…

FBI

fbi.gov/wanted/cyber/moises-lu…

The article Discovering Drumrlu: The IAB Doing Big Business Between Turkey and Venezuela originally appeared on il blog della sicurezza informatica.


When is a synth a woodwind? When it’s a Pneumatone


Ever have one of those ideas that’s just so silly, you just need to run with it? [Chris] from Sound Workshop ran into that when he had the idea that became the Pneumatone: a woodwind instrument that plays like a synth.

In its 3D printed case, it looks like a giant polyphonic analog synth, but under the plastic lies a pneumatic heart: the sound is actually being made by slide whistles. We always thought of the slide whistle as a bit of a gag instrument, but this might change our minds. The sliders on the synth-box obviously couple to the sliders in the whistles. The ‘volume knobs’ are actually speed controllers for computer fans that feed air into the whistles. The air path is possibly not ideal (there’s a bit of warbling in the whistles at some pitches) but the idea is certainly a fun one. Notes are played by not blocking the air path out of the whistle, as you can see in the video embedded below.

Since the fans are always on, this is an example of a drone instrument, like bagpipes or the old hacker’s favourite, the hurdy gurdy. [Chris] actually says in his tip (for which we are very thankful) that this project takes inspiration not from those, but from Indian instruments like the Shruthi Box and Tanpura. We haven’t seen those on Hackaday yet, but if you know of any hacks involving them, please leave a tip.

youtube.com/embed/oL1cb8jFiyI?…


hackaday.com/2025/07/07/when-i…


IR Point and Shoot Has a Raspberry Heart in a 35mm Body


Photography is great, but sometimes it can get boring just reusing the same wavelengths over and over again. There are other options, though and when [Malcolm Wilson] decided he wanted to explore them, he decided to build a (near) IR camera.
The IR images are almost ethereal.
Image: Malcolm Wilson.
The housing is an old Yashica Electro 35 (apparently this model was prone to electrical issues, and there are a lot of broken camera bodies floating around) which hides a Pi NoIR Camera v3. That camera module, paired with an IR pass filter, makes for infrared photography like the old Yashica used to do with special film. The camera module is plugged into a Pi Zero 2 W, and it’s powered by a PiSugar battery. There’s a tiny (0.91″) OLED display, but it’s only for status messages. The viewfinder is 100% optical, as the designers of this camera intended. Point, shoot, shoot again.

There’s something pure in that experience; we sometimes find stopping to look at previews pulls one out of the creative zone of actually taking pictures. This camera won’t let you do that, though of course you do get to skip developing photos. [Malcolm] has the Pi set up to connect to his Wi-Fi when he gets home, and he grabs the RAW (he is a photographer, after all) image files via SSH. Follow the link above to [Malcolm]’s Substack, and you’ll get some design details and his Python code.

The Raspberry Pi Foundation’s NoIR camera shows up on these pages from time to time, though rarely so artistically. We’re more likely to see it spying on reptiles, or making magic wands work. So we are quite grateful to [Malcolm] for the tip, via Petapixel. Yes, photographers and artists of all stripes are welcome to use the tips line to tell us about their work.
Follow the links in this article for more images like this.
Image: Malcolm Wilson


hackaday.com/2025/07/07/ir-poi…


The Hackaday Summer Reading List: No AI Involvement, Guaranteed


If you have any empathy at all for those of us in the journalistic profession, have some pity for the poor editor at the Chicago Sun-Times, who let through an AI-generated summer reading list made up of novels which didn’t exist. The fake works all had real authors and thus looked plausible, thus we expect that librarians and booksellers throughout the paper’s distribution area were left scratching their heads as to why they’re not in the catalogue.

Here at Hackaday we’re refreshingly meat-based, so with a guarantee of no machine involvement, we’d like to present our own summer reading list. They’re none of them new works but we think you’ll find them as entertaining, informative, or downright useful as we did when we read them. What are you reading this summer?

Surely You’re Joking, Mr. Feynman!


Richard P. Feynman was a Nobel-prize-winning American physicist whose career stretched from the nuclear weapons lab at Los Alamos in the 1940s to the report on the Challenger shuttle disaster in the 1980s, along the way working at the boundaries of quantum physics. He was also something of a character, and that side of him comes through in this book based on a series of taped interviews he gave.

We follow him from his childhood when he convinced his friends he could see into the future by picking up their favourite show from a distant station that broadcast it at an earlier time, to Los Alamos where he confuses security guards by escaping through a hole in the fence, and breaks into his colleagues’ safes. I first read this book thirty years ago, and every time I read it again I still find it witty and interesting. A definite on the Hackaday reading list!

Back Into The Storm


A lot of us are fascinated by the world of 1980s retrocomputers, and here at Hackaday we’re fortunate to have among our colleagues a number of people who were there as it happened, and who made significant contributions to the era.

Among them is Bil Herd, whose account of his time working at Jack Tramiel’s Commodore from the early to mid 1980s captures much more than just the technology involved. It’s at the same time an insider’s view of a famous manufacturer and a tale redolent with the frenetic excesses of that moment in computing history. The trade shows and red-eye flights, the shonky prototypes demonstrated to the world, and the many might-have-been machines which were killed by the company’s dismal marketing are all recounted with a survivor’s eye, and really give a feeling for the time. We reviewed it in 2021, and it’s still very readable today.

The Cuckoo’s Egg


In the mid 1980s, Cliff Stoll was a junior academic working as a university sysadmin, whose job was maintaining the system that charged for access to their timesharing system. Chasing a minor discrepancy in this financial system led him to discover an unauthorised user, which in turn led him down a rabbit-hole of computer detective work chasing an international blackhat that’s worthy of James Bond.

This book is one of the more famous break-out books about the world of hacking, and is readable because of its combination of storytelling and the wildly diverse worlds in which it takes place. From the hippyish halls of learning to three-letter agencies, where he gets into trouble for using a TOP SECRET stamp, it will command your attention from cover to cover. We reviewed it back in 2017 and it was already a couple of decades old then, but it’s a book which doesn’t age.

The Code Book


Here’s another older book, this time Simon Singh’s popular mathematics hit, The Code Book. It’s a history of cryptography from Roman and medieval cyphers to the quantum computer, and its value lies in providing comprehensible explanations of how each one works.

Few of us need to know the inner workings of RSA or the Vigenère square in our everyday lives, but we live in a world underpinned by encryption. This book provides a very readable introduction, and much more than a mere bluffer’s guide, to help you navigate it.

The above are just a small selection of light summer reading that we’ve been entertained by over the years, and we hope that you will enjoy them. But you will have your own selections too, would you care to share them with us?

Header image: Sheila Sund, CC BY 2.0.


hackaday.com/2025/07/07/the-ha…


Splice CAD: Cable Harness Design Tool


splice-cad assembly

Cable harness design is a critical yet often overlooked aspect of electronics design, just as essential as PCB design. While numerous software options exist for PCB design, cable harness design tools are far less common, making innovative solutions like Splice CAD particularly exciting. We’re excited to share this new tool submitted by Splice CAD.

Splice CAD is a browser-based tool for designing cable assemblies. It allows users to create custom connectors and cables while providing access to a growing library of predefined components. The intuitive node editor enables users to drag and connect connector pins to cable wires and other pinned connectors. Those familiar with wire harnesses know the complexity of capturing all necessary details, so having a tool that consolidates these properties is incredibly powerful.

Among the wire harness tools we’ve featured, Splice CAD stands out as the most feature-rich to date. Users can define custom connectors with minimal details, such as the number of pins, or include comprehensive information like photos and datasheets. Additionally, by entering a manufacturer’s part number, the tool automatically retrieves relevant data from various distributor websites. The cable definition tool is equally robust, enabling users to specify even the most obscure cables.

Once connectors, cables, and connections are defined, users can export their designs in multiple formats, including SVG or PDF for layouts, and CSV for a detailed bill of materials. Designs can also be shared via a read-only link on the Splice CAD website, allowing others to view the harness and its associated details. For those unsure if the tool meets their needs, Splice CAD offers full functionality without requiring an account, though signing in (which is free) is necessary to save or export designs. The tool also includes a version control system, ideal for tracking design changes over time. Explore our other cable harness articles for more tips and tricks on building intricate wire assemblies.

youtube.com/embed/JfQVB_iTD1I?…


hackaday.com/2025/07/07/splice…


A Criminal Hacker Threatens to Leak 106 GB of Data Stolen from Telefónica


A hacker has threatened to leak 106 GB of data allegedly stolen from the Spanish telecommunications company Telefónica. The company denies any breach or data leak. The attacker, going by the handle Rey, claims the attack took place on May 30 and that he spent more than 12 hours exfiltrating data from the corporate network before his access was cut off.

He has now publicly released a 2.6 GB archive, which expands to roughly five gigabytes of data and more than 20,000 files once decompressed. Rey is a member of the Hellcat ransomware group, which claimed responsibility for another cyberattack on Telefónica in January 2025 that compromised the company's internal Jira development and ticketing server.

Rey told journalists that he stole 385,311 files, totaling 106.3 GB, from the corporate network. The files allegedly contained internal communications (including tickets and emails), purchase orders, internal logs, customer records, and employee data. The hacker also claims the new attack was again caused by misconfigured Jira settings and that it took place after the first breach.

Rey shared with the publication data samples and a file tree allegedly stolen from Telefónica. Some of the files contained invoices for business customers in countries such as Hungary, Germany, Spain, Chile, and Peru. The files also contained email addresses of employees in Spain, Germany, Peru, Argentina, and Chile, as well as invoices issued to business partners in European countries.

Despite journalists' repeated attempts to contact Telefónica representatives, the only response received was that the alleged incident was an extortion attempt and that the attackers were using outdated information obtained in a previous attack.

The most recent file journalists could find among the data samples provided by the hacker is dated 2021, which supports the company representative's statement. Nevertheless, Rey continues to claim that the data came from a new cyberattack carried out on May 30. To back up his claim, he has started releasing the company's data publicly.

"Since Telefónica is denying the recent 106 GB leak containing data from its internal infrastructure, I will publish 5 GB as proof. I will soon publish the full archive, and if Telefónica does not comply, the entire archive will be released over the coming weeks," Rey writes.

Initially, the allegedly stolen data was distributed via the PixelDrain service, but it was taken down a few hours later for legal reasons. The attacker later shared another download link for the dump, this time on the Kotizada service, which Google Chrome flags as dangerous and strongly advises users to avoid.

Although Telefónica has made no official statement, journalists note that some of the email addresses included in the leak belong to current company employees.

The article A Criminal Hacker Threatens to Leak 106 GB of Data Stolen from Telefónica originally appeared on il blog della sicurezza informatica.