Belting out the Audio
Today, it is hard to imagine a world without recorded audio, and for the most part that started with Edison’s invention of the phonograph. However, for most of its history, the phonograph was a one-way medium. Although early phonographs could record with a separate needle cutting into foil or wax, most record players play only records made somewhere else. The problem is, this cuts down on what you can do with them. When offices were full of typists and secretaries, there was the constant problem of telling the typist what to type. Whole industries developed around that problem, including the Dictaphone company.
The issue is that most people can talk faster than others can write or type. As a result, taking dictation is frustrating as you have to stop, slow down, repeat yourself, or clarify dubious words. Shorthand was one way to equip a secretary to write as fast as the boss can talk. Steno machines were another way. But the dream was always a way to just speak naturally, at your convenience, and somehow have it show up on a typewritten page. That’s where the Dictaphone company started.
History of the Dictaphone
Unsurprisingly, Dictaphone’s founder was the famous Alexander Graham Bell. Although Edison invented the phonograph, Bell made many early improvements to the machine, including the use of wax instead of foil as a recording medium. He actually started the Volta Graphophone Company, which merged with the American Graphophone Company that would eventually become Columbia Records.
In 1907, the Columbia Phonograph Company trademarked the term Dictaphone. While drum-based machines were out of style in other realms, having been replaced by platters, the company wanted to sell drum-based machines that let executives record audio that would be played back by typists. By 1923, the company spun off on its own.
Edison, of course, also created dictation machines. There were many other companies that made some kind of dictation machine, but Dictaphone became the standard term for any such device, sort of like Xerox became a familiar term for any copier.
Dictaphones were an everyday item in early twentieth-century offices for dictation, phone recording, and other audio applications. Not to mention a few other novel uses. In 1932, a vigilante organization used a Dictaphone to bug the office of a lawyer suspected of being part of a kidnapping.
Some machines could record and play back. Others, usually reserved for typists, were playback-only. In addition, some machines could “shave” wax cylinders to erase a cylinder for future use. Of course, eventually you’d shave it down to the core, and then it was done.
The Computer History Archives has some period commercials and films from Dictaphone, and you can see them in the videos below.
youtube.com/embed/76r81PEEjFY?…
As mentioned, Dictaphone wasn’t the only game in town. Edison was an obvious early competitor. We were amused that the Edison devices had a switch that allowed them to operate on AC or DC current.
Later, other companies like IBM would join in. Some, like the Gray Audograph and the SoundScriber, used record-like disks instead of belts or drums. Of course, eventually, magnetic tape cassettes were feasible, too, and many people made recorders that could be used for dictation and many other recording duties.
youtube.com/embed/jQqyLIxfWZo?…
The Dictabelt
For the first half of the twentieth century, Dictaphones used wax cylinders. However, in 1947, they began making machines that pressed a groove into a Lexan belt — a “Dictabelt,” at first called a “Memobelt.” These were semi-permanent and, since you couldn’t simply shave or smooth the recording away the way you could with wax, difficult to tamper with, which helped make them admissible in court. Apparently, you could play a Dictabelt back about 20 times before it would be too beat up to play.
These belts found many uses. For one, Dictaphone was a major provider to police departments and other similar services, recording radio traffic and telephone calls. In the late 1970s, the House Select Committee on Assassinations used Dictaphone belts recorded by the Dallas police department in 1963 for audio analysis of the Kennedy assassination. Many Dictaphones found homes in courtrooms, too.
As you can see in the commercials in the video, Dictabelts would fit in an envelope: they are about 3.5 in x 12 in or 89 mm x 300 mm. The “portable” machine promised to let you dictate from anywhere, keep meeting minutes, and more. A single belt held 15 minutes of audio, and the color gives you an idea of when the belt was made.
youtube.com/embed/M-edbrhcrWY?…
Magnetic Personality
Of course, Dictaphone wasn’t the only game in town for machines like this. IBM released one that used a magnetic belt called a “Magnabelt” that you could edit. Dictaphone followed suit. These, of course, were erasable.
Even as late as 1977, you could find Dictaphones in “word processing operations” like the one in the video with the catchy tune, below. Of course, computers butted into both word processing and dictation with products like ViaVoice or DragonDictate. Oddly, DragonDictate is from Nuance, which bought what was left of Dictaphone.
Insides
Since this is Hackaday, of course, you want to see the insides of some of these machines. A video from [databits] gives us a peek below.
youtube.com/embed/wb0yENdMQeI?…
Offices have certainly changed. Most people do their own typing now. Your phone can record many hours of crystal-clear audio. Computers can even take your dictation now, if you insist.
Should you ever find a Dictabelt and want to digitize it for posterity, you might find the video below from [archeophone] useful. They make a modern playback unit for old cylinders and belts.
youtube.com/embed/_nRREq3v6vY?…
We’d love to see a homebrew Dictabelt recorder or player using more modern tech. If you make one, be sure to let us know. People recorded on the darndest things. Tape caught on primarily because of World War II Germany and Bing Crosby.
How Big is Your Video Again? Square vs Rectangular Pixels
[Alexwlchan] noticed something funny. He knew that not putting a size for a video embedded in a web page would cause his page to jump around after the video loaded. So he put the right numbers in. But with some videos, the page would still refresh its layout. He learned that not all video sizes are equal and not all pixels are square.
For a variety of reasons, some videos have pixels that are rectangular, and it is up to your software to take this into account. For example, when he put one of the suspect videos into QuickTime Player, it showed the resolution as 1920×1080 (1350×1080): two different sizes for the same video. That’s the non-square pixel at work.
So just pulling the size out of a video isn’t always sufficient to get a real idea of how it looks. [Alex] shows his old Python code that returns the incorrect number and how he managed to make it right. The mediainfo library seems promising, but suffers from some rounding issues. Instead, he calls out to ffprobe, an external program that ships with ffmpeg. So even if you don’t use Python, you can do the same trick, or you could go read the ffprobe source code.
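For anyone who wants to try the same trick, here is a minimal sketch (not [Alex]’s actual code) that shells out to ffprobe and derives the display size from the stored size and the sample aspect ratio; the field names follow ffprobe’s JSON output, and the filename is just a placeholder.

```python
import json
import subprocess
from fractions import Fraction

def video_display_size(path):
    """Return (stored_w, stored_h, display_w, display_h) for the first video stream."""
    result = subprocess.run(
        [
            "ffprobe", "-v", "error",
            "-select_streams", "v:0",
            "-show_entries", "stream=width,height,sample_aspect_ratio",
            "-of", "json", path,
        ],
        capture_output=True, text=True, check=True,
    )
    stream = json.loads(result.stdout)["streams"][0]
    w, h = stream["width"], stream["height"]
    # sample_aspect_ratio is "num:den"; "1:1" (or missing) means square pixels.
    sar = stream.get("sample_aspect_ratio", "1:1").replace(":", "/")
    ratio = Fraction(sar) if sar not in ("0/1", "N/A") else Fraction(1)
    return w, h, round(w * ratio), h

print(video_display_size("example.mp4"))   # placeholder filename
```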
[Alex] admits that there are not many videos that have rectangular pixels, but they do show up.
If you like playing with ffmpeg and videos, try this in your browser. Think rectangular pixels are radical? There has been work on variable-shaped pixels.
What Europeans really think about tech
IT'S MONDAY, AND THIS IS DIGITAL POLITICS. I'm Mark Scott, and you find me this week splitting my time between Berlin and Brussels. As we hurtle toward the end of 2025, this is my current permanent state of mind. Almost there, folks.
— What are Europeans' views on digital policymaking? I teamed up with YouGov, the polling firm, to find out.
— The United States' latest National Security Strategy reaffirms a step change in how Washington approaches tech.
— A litany of recent trade deals highlights how US officials are linking digital policymaking with access to the American market.
Let's get started:
USA: mega phishing site responsible for $10 million in annual fraud taken down
The United States Department of Justice has announced the takedown of a phishing website used by scammers in Myanmar to steal thousands of dollars from victims. According to the department, the seized domain was tickmilleas[.]com, a fake copy of the legitimate trading platform TickMill, through which the scammers convinced people to invest by showing them fictitious, “profitable” investments.
The resource was traced back to a large fraud compound, Tai Chang, in Kyaukhat, which international law enforcement raided three weeks ago. This is the third domain associated with Tai Chang that US authorities have shut down as part of the ongoing operation.
The FBI determined that victims were shown fake deposits and artificially “inflated” returns to convince them to transfer cryptocurrency to the bogus platform. Although the domains were only registered in early November 2025, special agents were able to identify several victims who had already lost large sums. The scammers also urged people to download related apps; after being reported, some of these have since been removed.
The seized website now displays a law enforcement banner. The US measures are part of a broader campaign against scam centers in Southeast Asia, which steal billions of dollars from Americans every year. Through the newly created Scam Center Strike Force, the FBI has deployed agents to Bangkok, where they are working with the Royal Thai Police to dismantle such centers, including Tai Chang, located on the border between Myanmar and Thailand.
The Tai Chang compound is run by the Democratic Karen Benevolent Army (DKBA), a group allied with Myanmar’s military government. In November, the US Department of Justice linked the DKBA’s activities to Chinese organized crime and to Thai companies that are now under Treasury sanctions. The Scam Center Strike Force aims to dismantle the infrastructure of the Chinese criminal networks and the officials in Myanmar, Cambodia, and Laos who stand behind these organizations.
The FBI estimates that similar scam centers steal roughly $10 billion from Americans every year, mostly through “pig butchering” schemes run over messaging apps and social media. As part of the Tai Chang operation, the task force also worked with Meta, which deleted around 2,000 accounts involved in the fraudulent operations.
The article USA: mega phishing site responsible for $10 million in annual fraud taken down appeared first on Red Hot Cyber.
Off-Axis Rotation For Amiga-Themed Levitating Lamp
Do you remember those levitating lamps that were all the rage some years ago? Floating light bulbs, globes, you name it. After the initial craze of expensive desk toys, a wave of cheap kits became available from the usual suspects. [RobSmithDev] wanted to make a commemorative lamp for the Amiga’s 40th anniversary, but… it was missing something. Sure, the levitating red-and-white “boing” ball looked good, but in the famous demo, the ball is spinning at a jaunty angle. You can’t do that with mag-lev… not without a hack, anyway.
The hack [RobSmith] decided on is quite simple: the levitator works in the usual manner, but rather than mount his “boing ball” directly to the magnet, the magnet is glued to a Dalek-lookalike plinth. The plinth holds a small motor, which is mounted at an angle to the base. Since the base stays vertical, the motor’s shaft provides the jaunty angle for the 3D-printed boing ball’s rotation. The motor is powered by the same coil that came with the kit to power the LEDs; indeed, the original LEDs are reused. An interesting twist is that the pickup inductor alone could not provide enough power to run even the motor by itself: [Rob] had to add a capacitor to tune the LC circuit to the ~100 kHz frequency of the base coil. While needing to tune an antenna shouldn’t be any sort of surprise, neither we nor [Rob] were thinking of this as an antenna, so it was a neat detail to learn.
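As a back-of-the-envelope check on that tuning step, the resonant frequency of an LC tank is f = 1/(2π√(LC)), so the capacitor needed to hit the base coil’s roughly 100 kHz falls out once you know or measure the pickup coil’s inductance. The 1 mH figure below is an assumed example, not a value from [Rob]’s build.

```python
import math

def resonant_cap(f_hz, l_henries):
    """Capacitance that makes an LC tank resonate at f_hz with inductance l_henries."""
    return 1.0 / ((2 * math.pi * f_hz) ** 2 * l_henries)

f = 100e3   # target: ~100 kHz drive frequency of the mag-lev base coil
L = 1e-3    # assumed pickup coil inductance of 1 mH (illustrative only)
C = resonant_cap(f, L)
print(f"Tuning capacitor: {C * 1e9:.1f} nF")   # about 2.5 nF for these numbers
```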
With the hard drive-inspired base — which eschews inserts in favor of self-tapping screws — the resulting lamp makes a lovely homage to the Amiga computer in its 40th year.
We’ve seen these mag-lev modules before, but the effect is always mesmerizing. Of course, if you want to skip the magnets, you can still pretend to levitate a lamp with tensegrity.
youtube.com/embed/r8hF-ykyDfQ?…
A Touchscreen MIDI Controller For The DIY Set
MIDI controllers are easy to come by these days. Many modern keyboards have USB functionality in this regard, and there are all kinds of pads and gadgets that will spit out MIDI, too. But you might also like to build your own, like this touchscreen design from [Nick Culbertson].
The build takes advantage of a device colloquially called the Cheap Yellow Display. It consists of a 320 x 240 TFT touchscreen combined with a built-in ESP32-WROOM-32, available under the part number ESP32-2432S028R.
[Nick] took this all-in-one device and turned it into a versatile MIDI controller platform. It spits out MIDI data over Bluetooth and has lots of fun modes. There’s a straightforward keyboard, which works just like you’d expect, and a nifty beat sequencer too. There are also more creative ideas, like the bouncing-ball Zen mode, a physics-based note generator, and an RNG mode. If you liked Electroplankton on the Nintendo DS, you’d probably dig some of these. Files are on GitHub if you want to replicate the build.
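Under the hood, every one of those modes ultimately boils down to sending ordinary MIDI messages, and the Bluetooth transport just wraps the same bytes. As a small illustration (drawn from the MIDI spec rather than [Nick]’s code), here is how a note-on/note-off pair is packed.

```python
def note_on(channel, note, velocity):
    """Channel-voice Note On: status 0x90 | channel, then note and velocity (0-127)."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(channel, note):
    """Note Off: status 0x80 | channel; a velocity of 0 is fine for most synths."""
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0x00])

# Middle C (note 60) on channel 0 at moderate velocity.
print(note_on(0, 60, 100).hex())   # '903c64'
print(note_off(0, 60).hex())       # '803c00'
```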
These days, off-the-shelf hardware is super capable, so you can whip up a simple MIDI controller really quickly. Video after the break.
youtube.com/embed/ALDQR1RCIdE?…
Wavebird Controller Soars Once More with Open Source Adapter
After scouring the second-hand shops and the endless pages of eBay for original video game hardware, a pattern emerges. The size of the accessory matters. If a relatively big controller originally came with a tiny wireless dongle, after twenty years, only the controller will survive. It’s almost as if these game controllers used to be owned by a bunch of irresponsible children who lose things (wink). Such is the case today when searching for a Nintendo Wavebird controller, and [James] published a wireless receiver design to make sure that the original hardware can be resurrected.
The project bears the name Wave Phoenix. The goal was to bring new life to a legendary controller by utilizing inexpensive, readily available parts. Central to the design is the RF-BM-BG22C3 Bluetooth module. Its low power draw and diminutive footprint made it a great fit for the limited controller port space of a Nintendo GameCube. The module itself is smaller than the GameCube’s proprietary controller connector. Luckily for projects like this, there are plenty of third-party connector options available.
When it comes to assembly, [James] insists it is possible to wire everything up by hand. He included an optional custom PCB design for those of us who aren’t point-to-point soldering masters. The PCB nestles cleanly into the 3D-printed outer casing seen in the image above in the iconic GameCube purple. Once the custom firmware for the Bluetooth module is flashed, pairing is as simple as pressing the Wave Phoenix adapter pairing button, followed by pressing X and Y simultaneously on the Wavebird controller. The two devices should stay paired as long as the controller’s wireless channel dial remains on the same channel. Better yet, any future firmware updates can be transferred wirelessly over Bluetooth.
Those who have chosen to build their own Wave Phoenix adapter have been pleased with the performance. The video below from Retrostalgia on YouTube shows that input responsiveness seems to be on par with the original Nintendo adapter. Mix in a variety of 3D printed shell color options, and this project goes a long way to upcycle Wavebird controllers that may have been doomed to end up in a dumpster. So it might be time to fire up a round of Kirby Air Ride and mash the A button unencumbered by a ten-foot cord.
youtube.com/embed/uDWsT5ocKWY?…
There are even more open source video game controller designs out there like this previous post about the Alpakka controller by Dave.
Hackaday Links: December 7, 2025
We stumbled upon a story this week that really raised our eyebrows and made us wonder if we were missing something. The gist of the story is that U.S. Secretary of Energy Chris Wright, who has degrees in both electrical and mechanical engineering, has floated the idea of using the nation’s fleet of emergency backup generators to reduce the need to build the dozens of new power plants needed to fuel the AI data center building binge. The full story looks to be a Bloomberg exclusive and thus behind a paywall — hey, you don’t get to be a centibillionaire by giving stuff away, you know — so we might be missing some vital details, but this sounds pretty stupid to us.
First of all, saying that 35 gigawatts of generation capacity sits behind the big diesel and natural gas-powered generators tucked behind every Home Depot and Walmart in the land might be technically true, but it seems to ignore the fact that backup generators aren’t engineered to run continuously. In our experience, even the best backup generators are only good for a week or two of continuous operation before something — usually the brushes — gives up the ghost. That’s perfectly acceptable for something that is designed to be operated only a few times a year, and maybe for three or four days tops before grid power is restored. Asking these units to run continuously to provide the base load needed to run a data center is a recipe for rapid failure. And even if these generators could be operated continuously, there’s still the issue of commandeering private property for common use, as well as the fact that you’d be depriving vital facilities like hospitals and fire stations of their backup power. But at least we’d have chatbots.
Well, that won’t buff right out. Roscosmos, the Russian space agency, suffered a serious setback last week when it damaged the launchpad at Site 31/6 during a Soyuz launch. This is bad news because that facility is currently the only one in the world capable of launching Soyuz and Progress, both crucial launch vehicles for the continued operation of the International Space Station. As usual, the best coverage of the accident comes from Scott Manley, who has all the gory details. His sources inform him that the “service cabin,” a 20-ton platform that slides into position under the rocket once it has been erected, is currently situated inside the flame trench rather than being safely tucked into a niche in the wall. He conjectures that the service cabin somehow got sucked into the flame trench during launch, presumably by the negative pressure zone created by the passage of all that high-velocity rocket exhaust. Whatever the cause of the accident, it poses some problems for the Russians and the broader international space community. An uncrewed Progress launch to resupply the ISS was scheduled for December 20, and a crewed Soyuz mission is scheduled for July 2026. But without that service cabin, neither mission seems likely. Hopefully, the Russians will be able to get things tidied up quickly, but it might not matter anyway since there’s currently a bit of a traffic jam at the ISS.
We saw a really nice write-up over at Make: Magazine by Dom Dominici about his impressions from his first Supercon visit. Spoiler alert: he really liked it! He describes it as “an intimate, hands-on gathering that feels more like a hacker summer camp than a tech expo,” and that’s about the best summary of the experience that we’ve seen yet. His reaction to trying to find what he assumed would be a large convention center, but only finding a little hole-in-the-wall behind a pizza place off the main drag in Pasadena, is priceless; yes, that mystery elevator actually goes somewhere. For those of you who still haven’t made the pilgrimage to Pasadena, the article is a great look at what you’re missing.
And finally, we know we were a little rough on the Russians a couple of weeks back for their drunk-walking robot demo hell, but it really served to demonstrate just how hard it is to mimic human walking with a mechanical system. After all, it takes the better part of two years for a new human to even get the basics, and a hell of a lot longer than that to get past the random face-plant stage. But still, some humanoid robots are better than others, to the point that there’s now a Guinness Book of World Records category for longest walk by a humanoid robot. The current record was set last August, with a robot from Shanghai-based Agibot Innovations going on a 106-km walkabout without falling or (apparently) recharging. The journey took place in temperatures approaching 40°C and took 24 hours to complete, which means the robot kept up a pretty brisk walking pace over the course, which we suppose didn’t have any of the usual obstacles.
USB Video Capture Devices: Wow! They’re All Bad!!
[VWestlife] purchased all kinds of USB video capture devices — many of them from the early 2000s — and put them through their paces in trying to digitize VHS classics like Instant Fireplace and Buying an Auxiliary Sailboat. The results were actually quite varied, but almost universally bad. They all worked, but they also brought unpleasant artifacts and side effects when it came to the final results. Sure, the analog source isn’t always the highest quality, but could it really be this hard to digitize a VHS tape?
The best results for digitizing VHS came from an old Sony device that was remarkably easy to use on a more modern machine.
It turns out there’s an exception to all the disappointment: the Sony Digital Video Media Converter (DVMC) is a piece of vintage hardware released in 1998 that completely outperformed the other devices [VWestlife] tested. There is a catch, but it’s a small one. More on that in a moment.
Unlike many other capture methods, the DVMC has a built-in time base corrector that stabilizes analog video signals by buffering them and correcting any timing errors that would cause problems like jitter or drift. This is a feature one wouldn’t normally find on budget capture devices, but [VWestlife] says the Sony DVMC can be found floating around on eBay for as low as 20 USD. It even has composite and S-Video inputs.
For an old device, [VWestlife] says using the DVMC was remarkably smooth. It needed no special drivers, defaults to analog input mode, and can be powered over USB. That last one may sound trivial, but it means there’s no worry about lacking some proprietary wall adapter with an oddball output voltage.
The catch? It isn’t really a USB device, and requires a FireWire (IEEE-1394) port in order to work. But if that’s not a deal-breaker, it does a fantastic job.
So if you’re looking to digitize older analog media, [VWestlife] says it might be worth heading to eBay and digging up a used Sony DVMC. But if one wants to get really serious about archiving analog media, capturing RF signals direct from the tape head is where it’s at.
Thanks to [Keith Olson] for the tip!
youtube.com/embed/OTOChbbTRgs?…
Neat Techniques To Make Interactive Light Sculptures
[Voria Labs] has created a whole bunch of artworks referred to as Lumanoi Interactive Light Sculptures. A new video explains the hardware behind these beautiful glowing pieces, as well as the magic that makes their interactivity work.
The basic architecture of the Lumanoi pieces starts with a custom main control board, based around the ESP32-S3-WROOM-2. It’s got two I2C buses onboard, as well as an extension port with some GPIO breakouts. The controller also has lots of protection features and can shut down the whole sculpture if needed. The main control board works in turn with a series of daisy-chained “cell” boards attached via a 20-pin ribbon cable. The cable carries 24-volt power, a bunch of grounds, and LED and UART data that can be passed from cell to cell. The cells are responsible for spitting out data to addressable LEDs that light the sculpture, and also have their own microcontrollers and photodiodes, allowing them to do all kinds of neat tricks.
As for interactivity, simple sensors provide ways for the viewer to interact with the glowing artwork. Ambient light sensors connected via I2C can pick up the brightness of the room as well as respond to passing shadows, while touch controls give a more direct interface to those interacting with the art.
[Voria Labs] has provided a great primer on building hardcore LED sculptures in a smart, robust manner. We love a good art piece here, from the mechanical to the purely illuminatory. Video after the break.
youtube.com/embed/pELnzIHmLkI?…
What if things went sour with the USA tomorrow and they switched off our cloud? Europe? Paralyzed in two seconds
In recent months, two seemingly unrelated episodes have exposed an uncomfortable truth: Europe no longer controls its own digital infrastructure. And in an increasingly tense geopolitical landscape, that dependence is not just an economic risk; it is a systemic vulnerability.
The first alarm bell rang when Microsoft abruptly cut off Azure access for an Israeli intelligence unit. The unit in question was Unit 8200, previously accused of spying on Palestinians in Israeli-controlled territories using Microsoft technology. No warning, no phase-out: a switch was flipped, showing just how much decision-making power over critical infrastructure sits in the hands of a handful of global corporations.
The second episode, from just today and even more emblematic, concerns the recent dispute between X (Elon Musk’s platform) and the European Commission. After a €120 million fine for violations of European rules, X deleted the Commission’s advertising account, accusing it of misusing the platform’s tools. Brussels replied that it had suspended all advertising on X months ago and had only used the tools the platform itself provides.
Beyond the back-and-forth between multinationals and institutions, the message is clear: the EU’s ability to communicate on social media, inform its citizens, and carry out digital policy is subordinate to the commercial will of non-European companies over which it has no control.
The end of the illusion: technology is not neutral
These episodes underline a key point: whoever controls the technology also controls behavior, communication, and even the politics of nations. Europe discovered this late, having spent years convinced that digital globalization was synonymous with neutrality.
But building technology of our own means facing a very different reality:
- it takes years of research and development;
- it takes long-term political vision, independent of electoral cycles;
- it takes seeing technology not as an industrial sector but as a structural pillar of national security.
And above all, it means accepting that no country is truly sovereign if it depends on others for cloud services, operating systems, chips, and critical digital infrastructure.
A dependence that exposes us to global shocks
In today’s hyperconnected world, every technology incident has immediate, global effects. We saw it recently when the AWS, Azure, and Cloudflare blackouts (first incident and second incident) paralyzed public services, businesses, banks, newspapers, transport, and healthcare across half the world. These are no longer “technical problems”; they are genuine systemic risks.
Europe, lacking a complete technology stack of its own, lives in a condition of total dependence. We are a highly digitized continent resting… on foundations built elsewhere.
We need a European cloud. And a European operating system. And European hardware
Asserting technological sovereignty does not mean rejecting the cloud; quite the opposite: it means building our own.
A European cloud, built on European hardware, European operating systems, European hypervisors, and European software. A complete supply chain, from the hardware to the application.
How long would it take? Twenty years!
A long, complex, expensive road. But today it is more necessary than developing weapons, because the weapons of the future, the “economic” weapons, are exactly these, and they can be used with a single click.
- We need a European operating system, the foundation of the entire chain.
- We need a European hypervisor, to ensure that virtualization does not depend on outside interests.
- We need a European cloud that answers to European law, not to the laws of other countries.
- We need a hardware supply chain, from the microchip to the server.
And above all, we need a political choice: to plan this effort over 20-30 years, not over a single legislature.
Because on today’s geopolitical chessboard, whoever controls the technology controls economies, defenses, and democracies.
And today Europe controls none of it.
The article What if things went sour with the USA tomorrow and they switched off our cloud? Europe? Paralyzed in two seconds appeared first on Red Hot Cyber.
Anatomy Of A Minimalist Home Computer
There are plenty of well-known models among the 8-bit machines of the 1980s, and most readers could rattle them off without a thought. They were merely the stars among a plethora of others, and even for a seasoned follower of the retrocomputing world, there are fresh models from foreign markets that continue to surprise and delight. [Dave Collins] is treating us to an in-depth look at the VTech VZ-200, a budget machine that did particularly well in Asian markets. On the way, we learn a lot about a very cleverly designed machine.
The meat of the design centres not on the Z80 microprocessor or the 6847 video chip, but on the three 74LS chips handling both address decoding and timing for video RAM access. That they managed this with only three devices is the exceptionally clever part. While there are some compromises similar to other minimalist machines in what memory ranges can be addressed, they are not sufficient to derail the experience.
Perhaps the most ingenuity comes in using not just the logic functions of the chips, but their timings. The designers of this circuit really knew the devices and used them to their full potential. Here in 2025, this is something novice designers using FPGAs have to learn; back then, it was learned the hard way on the breadboard.
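To make the address-decoding idea concrete, here is a toy decoder written in Python. The ranges are illustrative only, not the VZ-200’s documented memory map, and the real machine does this job combinationally with those three 74LS packages rather than in software.

```python
def decode(addr):
    """Toy address decoder: map a 16-bit Z80 address to a chip-select signal.

    The ranges below are purely illustrative; a real machine does this with a
    handful of 74LS gates looking at the top few address lines.
    """
    a = addr & 0xFFFF
    if a < 0x4000:
        return "ROM"        # firmware/BASIC ROM in the bottom 16 K
    elif 0x7000 <= a < 0x7800:
        return "VIDEO_RAM"  # a small block shared with the video chip
    elif a >= 0x7800:
        return "RAM"        # user RAM above the video region
    return "UNMAPPED"       # I/O latches and gaps elsewhere

for test in (0x0000, 0x7200, 0x9000, 0x6800):
    print(f"{test:04X} -> {decode(test)}")
```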
All in all, it’s a fascinating read from a digital logic perspective as much as a retrocomputing one. If you want more, it seems this isn’t the only hacker-friendly VTech machine.
The Key to Plotting
Plotters aren’t as common as they once were. Today, many printers can get high enough resolution with dots that drawing things with a pen isn’t as necessary as it once was. But certainly you’ve at least seen or heard of machines that would draw graphics using a pen. Most of them were conceptually like a 3D printer with a pen instead of a hotend and no real Z-axis. But as [biosrhythm] reminds us, some plotters were suspiciously like typewriters fitted with pens.
Instead of type bars, type balls, or daisy wheels, machines like the Panasonic Penwriter used a pen to draw your text on the page, as you can see in the video below. Some models had direct computer control via a serial port, if you wanted to plot using software. At least one model included a white pen so you could cover up any mistakes.
If you didn’t have a computer, the machine had its own way to input data for graphs. How did that work? Read for yourself.
Panasonic wasn’t the only game in town, either. Silver Reed — a familiar name in old printers — had a similar model that could connect via a parallel port. Other familiar names are Smith Corona, Brother, Sharp, and Sears.
Since all the machines take the same pens, they probably have very similar insides. According to the post, Alps was the actual manufacturer of the internal plotting mechanism, at least.
The video doesn’t show it, but the machines would draw little letters just as well as graphics. Maybe better since you could change font sizes and shapes without switching a ball. They could even “type” vertically or at an angle, at least with external software.
Since plotters are, at heart, close to 3D printers, it is pretty easy to build one these days. If plotting from keystrokes is too mundane for you, try voice control.
youtube.com/embed/7IGrUeB100I?…
The ZX Spectrum Finally Got An FPS
The ZX Spectrum is known for a lot of things, but it’s not really known for a rich and deep library of FPS titles. However, there is finally such a game for the platform, thanks to [Jakub Trznadel]—and it’s called World of Spells.
Like so many other games of this type, it was inspired by the 3D raycasting techniques made so popular by Wolfenstein 3D back in the day. For that reason, it has a very similar look in some regards, but a very different look in others—the latter mostly due to the characteristic palette available on the ZX Spectrum. A playable FPS is quite a feat to achieve on such limited hardware, but [Jakub] pulled it off well, with the engine able to reach up to 80 frames per second.
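For anyone wondering how a raycaster manages to run on such limited hardware at all, the core loop is tiny: cast one ray per screen column across a grid map, and the distance to the first wall it hits sets how tall a slice to draw. The sketch below is a generic, deliberately naive Python illustration of that idea (a simple ray-march rather than the cell-stepping DDA real engines use), not [Jakub]’s Z80 code.

```python
import math

# 1 = wall, 0 = empty; a tiny illustrative map.
MAP = [
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
]

def cast_ray(px, py, angle):
    """March a ray from (px, py) in small steps until it hits a wall cell."""
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < 20.0:
        dist += 0.01
        x, y = px + dx * dist, py + dy * dist
        if MAP[int(y)][int(x)]:
            return dist
    return dist

# One ray per screen column across a rough 60-degree field of view.
for col in range(0, 32, 8):
    angle = -0.5 + col / 32.0            # radians, roughly -0.5..+0.25 here
    d = cast_ray(2.5, 2.0, angle)
    print(f"column {col:2d}: wall at {d:.2f} units -> slice height ~{int(100 / d)} px")
```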
The game is available for download, and you can even order it on tape if you so desire. You might also like to check out the walkthrough on YouTube, where the game is played on an emulator. Don’t worry, though—the game works on real ZX Spectrum 48k hardware just fine.
The Speccy retains a diehard fanbase to this day. You can even build a brand new one thanks to a buoyant supply of aftermarket parts.
youtube.com/embed/52ukFKy3Awk?…
[thanks to losr for the tip!]
The December 5, 2025 Cloudflare outage caused by the React Server patches: a technical analysis
Cloudflare suffered a significant outage on the morning of December 5, 2025, when at 08:47 UTC part of its infrastructure began generating internal errors. The incident, which lasted about 25 minutes in total, ended at 09:12 with services fully restored.
According to the company, roughly 28% of the HTTP traffic it handles globally was affected. As its engineers explained, the impact was limited to customers using a specific combination of configurations.
Cloudflare clarified that the outage was not connected to any hostile activity: no cyberattack, intrusion attempt, or malicious behavior contributed to the event. The problem was instead caused by an update introduced to mitigate a recently disclosed vulnerability in React Server components, tracked as CVE-2025-55182.
How the incident came about
The outage stemmed from a change to the system that parses HTTP request bodies, part of the measures adopted to protect users of React-based applications. The change raised the Web Application Firewall (WAF)’s internal memory buffer from 128 KB to 1 MB, a value that matches the default limit in Next.js frameworks.
This first change was deployed as a gradual rollout. During the rollout, engineers found that an internal WAF test tool was not compatible with the new limit. Judging that component unnecessary for real traffic, Cloudflare proceeded with a second change intended to disable it.
It was this second change, pushed through the global configuration system, which does not support gradual rollouts, that set off the chain of events leading to the HTTP 500 errors. The change reached every server on the network within a few seconds.
At that point, a particular version of the FL1 proxy found itself executing a piece of Lua code containing a latent bug. The result was that processing of some requests failed and the affected servers returned 500 errors.
Who was affected
According to Cloudflare’s engineers, those affected were customers using the FL1 proxy together with the Cloudflare Managed Ruleset. Requests to sites configured this way began returning 500 errors, with very few exceptions (such as some test endpoints, for example /cdn-cgi/trace).
Customers with different configurations, and those served by Cloudflare’s network operating in China, were not affected.
The technical cause
The problem was traced back to how the WAF’s rule system works. Some rules, through the “execute” action, trigger the evaluation of additional rule sets. The killswitch system, used to quickly disable problematic rules, had never before been applied to a rule with an “execute” action.
When the change disabled the test rule set, the system correctly skipped execution of the rule, but it did not handle the absence of the “execute” object in the subsequent results-processing phase. That is what produced the Lua error behind the HTTP 500s.
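The failure mode is easy to picture with a schematic example. The sketch below is Python rather than the Lua actually running in FL1, and the data layout is invented, but it has the same shape: one code path stops producing the “execute” result while a later stage still assumes it is there.

```python
# A minimal Python analog of the FL1 failure mode; the data layout is invented
# for illustration, and Cloudflare's proxy does this in Lua.

ruleset = {
    "react-body-check": {"action": "block"},
    "waf-test-set": {"action": "execute", "killswitch": True},   # the newly disabled rule
}

def evaluate(ruleset):
    """First phase: run rules, recording a result only for rules that actually ran."""
    outcome = {}
    for name, rule in ruleset.items():
        if rule.get("killswitch"):
            continue          # rule correctly skipped, so no entry is recorded
        if rule["action"] == "execute":
            outcome[name] = {"execute": {"sub_rules_run": 12}}
        else:
            outcome[name] = {"matched": False}
    return outcome

def process_results(ruleset, outcome):
    """Second phase: assumes every 'execute' rule left an 'execute' object behind."""
    for name, rule in ruleset.items():
        if rule["action"] == "execute":
            # KeyError here, the Python stand-in for the Lua nil-handling error
            # that turned into HTTP 500s.
            print(name, outcome[name]["execute"]["sub_rules_run"])

process_results(ruleset, evaluate(ruleset))
```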
Cloudflare pointed out that this bug does not exist in the FL2 proxy, written in Rust, thanks to a different type system that rules out this kind of scenario.
Connection with the November 18 incident
The company recalled that a similar dynamic had played out on November 18, 2025, when another, unrelated change caused a widespread malfunction. Following that episode, several projects were announced to make configuration updates safer and to reduce the impact of individual mistakes.
Initiatives still in progress include:
- a stricter versioning and rollback system,
- “break glass” procedures to keep critical functions running even under exceptional conditions,
- fail-open handling of configuration errors.
Cloudflare admitted that, had these measures already been fully operational, the impact of the December 5 incident could have been smaller. For now, the company has paused all changes to its network until the new mitigation systems are complete.
Essential timeline of the event (UTC)
- 08:47 – Incident begins after the configuration change propagates
- 08:48 – Impact extends across the entire affected part of the network
- 08:50 – The internal alerting system flags the problem
- 09:11 – The configuration change is reverted
- 09:12 – Traffic fully restored
Cloudflare reiterated its apologies to customers and confirmed that, within the following week, it will publish a full analysis of the projects under way to improve the resilience of its entire infrastructure.
The article The December 5, 2025 Cloudflare outage caused by the React Server patches: a technical analysis appeared first on Red Hot Cyber.
Palo Alto Networks' GlobalProtect is under active scanning. Enable MFA!
An increasingly aggressive campaign aimed squarely at remote-access infrastructure has threat actors actively attempting to exploit vulnerabilities in Palo Alto Networks' GlobalProtect VPN portals.
On December 5, Palo Alto Networks issued an urgent advisory urging customers to adopt multi-factor authentication (MFA), to limit portal exposure with firewalls, and to apply the latest patches.
According to a monitoring report from GreyNoise, which observed scanning and exploitation attempts from more than 7,000 unique IP addresses worldwide, organizations relying on the popular VPN solution to secure remote work have been put on alert.
Targeting observed by IP (Source: GreyNoise)
Since late November 2025, attacks have been detected exploiting vulnerabilities in GlobalProtect gateways, especially those publicly reachable over UDP port 4501.
Palo Alto Networks' GlobalProtect has long been a prime target because of its ubiquity in enterprise environments. Historical flaws, such as CVE-2024-3400 (a critical command-injection vulnerability fixed in April 2024, with a CVSS score of 9.8), continue to haunt systems that remain unpatched.
The recent waves exploit misconfigurations that allow pre-authentication access, including default credentials or exposed admin portals. Attackers use tools such as custom scripts that mimic Metasploit modules to enumerate portals, brute-force logins, and drop malware for persistence.
According to data from Shadowserver and other threat-intelligence feeds, the source IPs include residential proxies, bulletproof hosting providers, and compromised VPS instances across Asia, Europe, and North America.
Indicators of compromise include anomalous spikes in UDP traffic on port 4501, followed by HTTP requests to the /global-protect/login.urd endpoints. In confirmed breaches, intruders exfiltrated session tokens, enabling lateral movement inside corporate networks.
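For a quick and dirty sweep of your own web logs for that second indicator, a few lines of Python will do. The endpoint string comes from the report above; the combined-style access log format, and the script itself, are assumptions you will need to adapt to your environment.

```python
import re
import sys

# Flag requests that touch the GlobalProtect login endpoint named in the report.
# Assumes Apache/Nginx "combined" style access logs on stdin; adjust for yours.
SUSPECT_PATH = "/global-protect/login"
LINE_RE = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+)')

hits = {}
for line in sys.stdin:
    m = LINE_RE.match(line)
    if m and SUSPECT_PATH in m.group(4):
        ip = m.group(1)
        hits[ip] = hits.get(ip, 0) + 1

# Print offending source IPs, busiest first.
for ip, count in sorted(hits.items(), key=lambda kv: -kv[1]):
    print(f"{count:6d}  {ip}")
```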
The article Palo Alto Networks' GlobalProtect is under active scanning. Enable MFA! appeared first on Red Hot Cyber.
F1 Light Box Helps You Know the Current Race Status
[joppedc] wrote in to let us know that the Formula 1® season is coming to an end, and that the final race should be bangin’. To get ready, he built this ultra-sleek logo light box last week that does more than just sit there looking good, although it does that pretty well. This light box reacts to live race events, flashing yellow for safety cars, red for red flags, and green for, well, green flags.
The excellent light box itself was modeled in Fusion 360, and the files are available on MakerWorld. The design is split into four parts — the main body, a backplate to mount the LEDs, the translucent front plate, and an enclosure for an ESP32.
Doing it this way allowed [joppedc] not only to print in manageable pieces but also to use different materials. Getting the front panel to diffuse light correctly took some experimenting to find the right thickness. Eventually, [joppedc] landed on 0.4 mm (two layers) of matte white PLA.
There isn’t much in the way of brains behind this beauty, just an ESP32, a strip of WS2812B addressable LEDs, and a USB-C port for power. But it’s the software stack that ties everything together. The ESP32 runs WLED, Home Assistant runs the show, and of course, there is the F1 sensor integration to pull in live race data.
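The glue is thin enough that you could also drive the box without Home Assistant. As a rough illustration, WLED exposes a JSON API at /json/state that accepts segment colors, so a few lines of Python can push a flag color straight to the strip; the IP address and the flag-to-color mapping below are invented for the example.

```python
import requests

WLED_HOST = "192.168.1.50"   # placeholder address of the light box

FLAG_COLORS = {
    "green":  [0, 200, 0],
    "yellow": [255, 180, 0],
    "red":    [255, 0, 0],
}

def show_flag(flag):
    """Push a solid color to the WLED strip via its JSON API."""
    payload = {"on": True, "bri": 200, "seg": [{"col": [FLAG_COLORS[flag]]}]}
    requests.post(f"http://{WLED_HOST}/json/state", json=payload, timeout=2)

show_flag("yellow")   # safety car!
```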
If you’re looking for more of an F1 dashboard, then we’ve got you covered.
Standalone USB-PD Stack For All Your Sink Needs
USB PD is a fun protocol to explore, but it can be a bit complex to fully implement. It makes sense we’re seeing new stacks pop up all the time, and today’s stack is a cool one as far as code reusability goes. [Vitaly] over on Hackaday.io brings us pdsink – a C++ based PD stack with no platform dependencies, and fully-featured sink capabilities.
This stack can do SPR (5/9/15/20V) just like you’d expect, but it also does PPS without breaking a sweat – perfect for your Lithium Ion battery charging or any other current-limited shenanigans. What’s more, it can do EPR (28V and up) – for all your high-power needs. For reference, the SPR/PPS/EPR combination is all you could need from a PD stack intended for fully taking advantage of any USB-PD charger’s capabilities. The stack is currently tailored to the classic FUSB302, but [Vitaly] says it wouldn’t be hard to add support for a PD PHY chip of your choice.
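To give a feel for what “doing PPS” involves on the sink side, programmable requests are quantized, with the output voltage expressed in 20 mV steps and the current limit in 50 mA steps per the USB PD spec. Here is a sketch of just that rounding step; the rest of the request message’s bit layout is deliberately left out, and none of this is pdsink’s actual API.

```python
def pps_fields(volts, amps):
    """Quantize a PPS request: voltage in 20 mV units, current limit in 50 mA units."""
    v_units = round(volts / 0.020)
    i_units = round(amps / 0.050)
    return v_units, i_units

# e.g. a 4.2 V / 1.5 A lithium-ion charge target
v, i = pps_fields(4.2, 1.5)
print(f"voltage field: {v} (={v * 20} mV), current field: {i} (={i * 50} mA)")
```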
It’s nice to have a choice in how you want your PD interactions to go – we’ve covered a few stacks before, and each of them has strong and weak sides. Now, if you have the CPU bandwidth, you could go seriously low-tech and talk PD with just a few resistors, transistors, and GPIOs! Need to debug a particular USB-C edge case? Don’t forget a logger.
Lessons Learned After Trying MeshCore for Off-grid Text Messaging
[Michael Lynch] recently decided to delve into the world of off-grid, decentralized communications with MeshCore, because being able to communicate wirelessly with others in a way that does not depend on traditional communication infrastructure is pretty compelling. After getting his hands on a variety of hardware and trying things out, he wrote up his thoughts from the perspective of a hardware-curious software developer.
He ends up testing a variety of things: MeshCore firmware installed on a Heltec V3 board (used via an app over Bluetooth), a similar standalone device with antenna and battery built in (SenseCAP T-1000e, left in the header image), and a Lilygo T-Deck+ (right in the header image above). These all run MeshCore, which targets much the same LoRa hardware as Meshtastic, a framework we have featured in the past.
The cheapest way to get started is with a board like the Heltec v3, pictured here. It handles the LoRa wireless communications part, and one interfaces to it over Bluetooth.
The first two devices are essentially MeshCore gateways, to which the user connects over Bluetooth. The T-Deck is a standalone device that resembles a Blackberry, complete with screen and keypad. [Michael] dove into what it was like to get them up and running.
Probably his most significant takeaway was that the whole process of onboarding seemed a lot more difficult and much less clear than it could be. This is an experience many of us can relate to: the fragmented documentation that exists seems written both by and for people who are already intimately familiar with the project in its entirety.
Another thing he learned was that while LoRa is a fantastic technology capable of communicating wirelessly over great distances with low power, those results require good antennas and line of sight. In a typical urban-ish environment, range is going to be much more limited. [Michael] was able to get a maximum range of about five blocks between two devices. Range could be improved by purchasing and installing repeaters or by having more devices online and in range of one another, but that’s where [Michael] drew the line. He felt he had gotten a pretty good idea of the state of things by then, and not being a radio expert, he declined to purchase repeater hardware without any real sense of where he should put them, or what performance gains he could expect by doing so.
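To put rough numbers on why line of sight matters so much, even the ideal free-space path loss grows quickly with distance: FSPL(dB) = 20 log10(d_km) + 20 log10(f_MHz) + 32.44. The sketch below compares a few distances; the transmit power and sensitivity figures are typical LoRa values, not measurements from [Michael]’s setup, and real urban losses come on top of these numbers.

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB (ideal, no obstructions or fading)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

TX_POWER_DBM = 20        # typical LoRa transmit power (assumed)
SENSITIVITY_DBM = -130   # rough receive sensitivity at a slow spreading factor (assumed)

for d_km in (0.4, 2.0, 10.0):       # ~5 city blocks, edge of town, hilltop to hilltop
    loss = fspl_db(d_km, 915)       # US ISM band; use 868 for Europe
    margin = TX_POWER_DBM - loss - SENSITIVITY_DBM
    print(f"{d_km:5.1f} km: FSPL {loss:6.1f} dB, margin {margin:6.1f} dB before buildings eat it")
```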
Probably the most surprising discovery was that MeshCore is not entirely open source, which seems odd for an off-grid decentralized communications framework. Some parts are open, but the official clients (the mobile apps, web app, and T-Deck firmware) are not. [Michael] found this out when, being primarily a software developer, he took a look at the code to see if there was anything he could do to improve the poor user experience on the T-Deck and found that the firmware was proprietary.
[Michael]’s big takeaway as a hardware-curious software developer is that the concept is great and accessible (hardware is not expensive and there is no licensing requirement for LoRa), but it’s not really there yet in terms of whether it’s practical for someone to buy a few to distribute among friends for use in an emergency. Not without getting into setting up enough repeaters to ensure connectivity, anyway.
Bridging RTL-433 To Home Assistant
If you’ve got an RTL-SDR compatible receiver, you’ve probably used it for picking up signals from all kinds of weird things. Now, [Jaron McDaniel] has built a tool to integrate many such devices into the world of Home Assistant.
It’s called RTL-HAOS, and it’s intended to act as a bridge. Whatever you can pick up using the RTL_433 tool, you can set up with Home Assistant using RTL-HAOS. If you’re unfamiliar with RTL_433, it’s a multitalented data receiver for picking up all sorts of stuff on a range of bands using RTL-SDR receivers, as well as a range of other hardware. While it’s most closely associated with products that communicate in the 433 MHz band, it can also work with products that talk in 868 MHz, 315 MHz, 345 MHz, and 915 MHz, assuming your hardware supports it. Out of the box, it’s capable of working with everything from keyless entry systems to thermostats, weather stations, and energy monitors. You can even use it to listen to the tire pressure monitors in your Fiat Abarth 124 Spider, if you’re so inclined.
[Jaron’s] tool integrates these devices nicely into Home Assistant, where they’ll appear automatically thanks to MQTT discovery. It also offers nice signal metrics like RSSI and SNR, so you can determine whether a given link is stable. You can even use multiple RTL-SDR dongles at once. If you’re eager to pull some existing environmental sensors into your smart home, this may prove a very easy way to do it.
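If you would rather see the moving parts, the underlying trick is that rtl_433 can emit each decoded transmission as a line of JSON, which a bridge then republishes to MQTT topics Home Assistant can pick up. A stripped-down sketch of that idea follows; the broker address and topic scheme are invented, and [Jaron]’s tool adds discovery messages, availability handling, and plenty more on top.

```python
import json
import subprocess
import paho.mqtt.client as mqtt

BROKER = "homeassistant.local"   # placeholder MQTT broker address

client = mqtt.Client()           # paho-mqtt 1.x style; 2.x also wants a CallbackAPIVersion
client.connect(BROKER, 1883)
client.loop_start()

# rtl_433 prints one JSON object per decoded transmission with -F json.
proc = subprocess.Popen(["rtl_433", "-F", "json"], stdout=subprocess.PIPE, text=True)

for line in proc.stdout:
    try:
        event = json.loads(line)
    except json.JSONDecodeError:
        continue                                   # skip status/log lines
    model = str(event.get("model", "unknown")).replace(" ", "_")
    dev_id = event.get("id", "0")
    topic = f"rtl433/{model}/{dev_id}"             # invented topic scheme
    client.publish(topic, json.dumps(event))
```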
The cool thing about Home Assistant is that hackers are always working to integrate more gear into the ecosystem. Oftentimes, they’re far faster and more efficient at doing this than big-name corporations. Meanwhile, if you’re working on your own hacks for this popular smart home platform, we’d probably like to know about it. Be sure to hit up the tips line in due time.
Emulate ROMs at 12MHz With Pico2 PIO
Nothing lasts forever, and that includes the ROMs required to make a retrocomputer run. Even worse, what if you’re rolling your own firmware? Period-appropriate EPROMs and their programmers aren’t always cheap or easy to get a hold of these days. [Kyo-ta04] had that problem, and thanks to them, we now all have a solution: Pico2ROMEmu, a ROM emulator based on, you guessed it, the Raspberry Pi Pico2.
The Pico2ROMEmu in its natural habitat on a Z80 SBC.
The ROM emulator has been tested at 10MHz with a Z80 processor and 12MHz with an MC68000. An interesting detail here is that rather than use the RP2350’s RISC-V or ARM cores, [kyo-ta04] is doing all the work using the chip’s powerful PIO. PIO means “programmable I/O,” and if you need a primer, check this out. Using PIO means the main core of the microcontroller needn’t be involved, which in this context means a faster ROM emulator.
We’ve seen ROM emulators before, of course — the OneROM comes to mind, which can also use the RP2350 and its PIOs. That project hasn’t been chasing these sorts of speeds as it is focused on older, slower machines. That may change in the newest revision. It’s great to see another contender in this space, though, especially one to serve slightly higher-performance retrocomputers. Code and Gerbers for the Pico2ROMEmu are available on GitHub under an MIT license.
Thanks to [kyo-ta04] for the tip.
Something New Every Day, Something Relevant Every Week?
The site is called Hackaday, and has been for 21 years. But it was only for maybe the first half-year that it was literally a hack a day. By the 2010s, we were putting out four or more per day, and in the later 20-teens, we settled into our current cadence of eight hacks per day, plus some original pieces over the top. That’s a lot of hacks per day! (But “Eight-to-Ten-Hacks-a-Day” just isn’t as catchy.)
With that many posts daily, we also tend to reach out to a broader array of interests. Quite simply, not every hack is necessarily going to be just exactly what you are looking for, but we wouldn’t be writing it up if we didn’t think that someone was looking for it. Maybe you don’t like CAN bus hacks, but you’re into biohacking, or retrocomputing. Our broad group of writers helps to make sure that we’ll get you covered sooner or later.
What’s still surprising to me, though, is that a couple of times per week, there is a hack that is actually relevant to a particular project that I’m currently working on. It’s one thing to learn something new every day, and I’d bet that I do, but it’s entirely another to learn something new and relevant.
So I shouldn’t have been shocked when Tom and I were going over the week’s hacks on the podcast, and he picked an investigation of injecting spray foam into 3D prints. I liked that one too, but for me it was just “learn something new”. Tom has been working on an underwater ROV, and it perfectly scratched an itch that he has – how to keep the top of the vehicle more buoyant, while keeping the whole thing waterproof.
That kind of experience is why I’ve been reading Hackaday for 21 years now, and it’s all of our hope that you get some of that too from time to time. There is a lot of “new” on the Internet, and that’s a wonderful thing. But the combination of new and relevant just can’t be beat! So if you’ve got anything you want to hear more about, let us know.
This article is part of the Hackaday.com newsletter, delivered every seven days for each of the last 200+ weeks. It also includes our favorite articles from the last seven days that you can see on the web version of the newsletter. Want this type of article to hit your inbox every Friday morning? You should sign up!
Electronic Dice Built The Old Fashioned Way
If you wanted to build an electronic dice, you might grab an Arduino and a nice OLED display to whip up something fancy. You could even choose an ESP32 and have it log your rolls to the cloud. Or, you could follow the lead of [Axiometa] and do it the old-school way.
The build is based around the famous 555 timer IC. It’s paired with a 4017 decade counter IC, which advances every time it receives a clock signal from the 555. With the aid of some simple transistor logic, this lights the corresponding LEDs for the numbers 1 to 6, which are laid out like the face of a typical six-sided die. For an added bit of fun, a tilt sensor is used to trigger the 555 and thus the roll of the dice. An extra tweak to the circuit ensures the 555 keeps counting for a short while after you stop shaking, which makes the action feel like an actual dice roll.
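If the hardware description is hard to picture, the behavior is easy to model: the 555 is just a clock, the 4017 is a counter that wraps after six states, and the roll ends wherever the clock happens to stop. A few lines of Python capture the idea; the counts below are arbitrary stand-ins for the real shake time and RC tail.

```python
import random

def roll(counts=None):
    """Model the 555 clocking a 4017 that wraps after six states."""
    if counts is None:
        counts = random.randint(30, 120)   # stand-in for shake time plus the RC tail
    state = 0
    for _ in range(counts):
        state = (state + 1) % 6            # the 4017 advancing on each 555 pulse
    return state + 1                       # faces are numbered 1 to 6

print("You rolled a", roll())
```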
Schematics are available for the curious. We’d love to see this expanded to emulate a range of other dice—like a D20 version that could blink away on the D&D table. We’ve covered some very exciting technology in that area as well.
youtube.com/embed/cZxBj7fzkkk?…
Sudo Clean Up My Workbench
[Engineezy] might have been watching a 3D printer move when inspiration struck: Why not build a robot arm to clean up his workbench? Why not, indeed? Well, all you need is a 17-foot-long X-axis and a gripper mechanism that can pick up any strange thing that happens to be on the bench.
Like any good project, he did it step by step. Mounting a 17-foot linear rail on an accurately machined backplate required professional CNC assistance. He was shooting for a 1mm accuracy, but decided to settle for 10mm.
With the long axis done, the rest seemed anticlimactic, at least for moving it around. The system can actually support his bodyweight while moving. The next step was to control the arm manually and use a gripper to open a parts bin.
The arm works, but is somewhat slow and needs some automation. A great start to a project that might not be practical, but is still a fun build and might inspire you to do something equally large.
We have large workbenches, but we tend to use tiny ones more often in our office. We also enjoy ones that are portable.
youtube.com/embed/iarVef8tFiw?…
Blue Hedgehog, Meet Boing Ball: Can Sonic Run on Amiga?
The Amiga was a great game system in its day, but there were some titles it was just never going to get. Sonic the Hedgehog was one of them; SEGA would never in a million years have been willing to port its flagship platformer to another system. Well, SEGA might not in a million years, but [reassembler] has started that process after only thirty-four.
Both the SEGA Mega Drive (that’s the Genesis for North Americans) and Amiga have Motorola 68k processors, but that doesn’t mean you can run code from one on the other: the memory maps don’t match, and the way graphics are handled is completely different. The SEGA console uses so-called “chunky” graphics, which is how we do it today. The Amiga, on the other hand, is all about the bitplanes; that’s why it didn’t get a DOOM port back in the day, which may or may not be what killed the platform.
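The difference is easier to see in code than in prose: in a chunky layout, each byte (or nibble) holds one pixel’s color index, while in a planar layout, bit n of every pixel is gathered into bitplane n. Here is a small, generic chunky-to-planar sketch for one eight-pixel span, nothing Amiga- or Sonic-specific.

```python
def chunky_to_planar(pixels, depth=4):
    """Convert 8 chunky pixels (one color index each) into `depth` bitplane bytes.

    Bitplane n holds bit n of every pixel, packed one pixel per bit, which is
    the layout the Amiga's display hardware expects.
    """
    assert len(pixels) == 8
    planes = []
    for n in range(depth):
        byte = 0
        for i, color in enumerate(pixels):
            if color & (1 << n):
                byte |= 0x80 >> i       # leftmost pixel is the most significant bit
        planes.append(byte)
    return planes

# Eight pixels using color indices 0-15 (a 16-color, 4-bitplane image).
span = [0, 1, 2, 3, 15, 15, 2, 1]
print([f"{b:08b}" for b in chunky_to_planar(span)])
```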
In this first video of what promises to be a series, [reassembler] takes us through his process of migrating code from the Mega Drive to Amiga, starting specifically with the SEGA loading screen animation, with a preview of the rest of the work to come. While watching someone wrestle with 68k assembler is always interesting, the automation he’s building up to do it with Python is the real star here. Once this port is done, that toolkit should really grease the wheels of bringing other Mega Drive titles over.
It should be noted that since the Mega Drive was a 64-colour machine, [reassembler] is targeting the A1200 for his Sonic port, at least to start. He plans to reprocess the graphics for a smaller-palette A500 version once that’s done. That’s good, because it would be a bit odd to have a DOOM clone for the A500 while being told a platformer like Sonic is too much to ask. If anyone can be trusted to pull this project off, it’s [reassembler], whose OutRun: Amiga Edition is legendary in the retro world, even if we seem to have missed covering it.
If only someone had given us a tip-off, hint hint.
youtube.com/embed/Xb94oUw7_K4?…
Adding Electronics to a Classic Game
Like many classic board games, Ludo offers its players numerous opportunities to inflict frustration on other players. Despite this, [Viktor Takacs] apparently enjoys it, which motivated him to build a thoroughly modernized, LED-based, WiFi-enabled game board for it (GitHub repository).
The new game board is built inside a stylish 3D-printed enclosure with a thin white front face, under which the 115 LEDs sit. Seven LEDs in the center represent a die, and the rest mark out the track around the board and each user’s home row. Up to six people can play on the board, and different colors of the LEDs along the track represent their tokens’ positions. To prevent light leaks, a black plastic barrier surrounds each LED. Each player has one button to control their pieces, with a combination of long and short presses serving to select one of the possible actions.
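As a rough idea of how one button can carry several commands, here is a short Python sketch that simply times how long the button is held and classifies the press. It is our own illustration, not code from [Viktor]’s repository, and the threshold value is an assumption.

```python
import time

LONG_PRESS_S = 0.6   # assumed threshold between a short and a long press

def classify_press(is_pressed):
    """Block until one press completes and return 'short' or 'long'.
    `is_pressed` is any callable returning the debounced button state."""
    while not is_pressed():          # wait for the press to start
        time.sleep(0.01)
    start = time.monotonic()
    while is_pressed():              # wait for the release
        time.sleep(0.01)
    held = time.monotonic() - start
    return "long" if held >= LONG_PRESS_S else "short"

# e.g. a short press might step to the next movable token, a long press confirm the move
```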
The electronics themselves are mounted on seven circuit boards, which were divided into sections to reduce their size and therefore their manufacturing cost. For component placement reasons, [Viktor] used a barrel connector instead of USB, but for more general compatibility also created an adapter from USB-C to a barrel plug. The board is controlled by an ESP32-S3, which hosts a server that can be used to set game rules, configure player colors, save and load games, and view statistics for the game (who rolled the most sixes, who sent other players home most often, etc.).
If you prefer your games a bit more complex, we’ve also seen electronics added to Settlers of Catan. On a rather larger scale, there is also this LED-based board game which invites humans onto the board itself.
youtube.com/embed/l1b1UZjEF5Y?…
Thanks to [Victoria Bei] for the tip!
Magic Magikarp Makes Moves
One of the most influential inventions of the 20th century was Big Mouth Billy Bass. A celebrity bigger than the biggest politicians or richest movie stars, there’s almost nothing that could beat Billy. That is, until [Kiara] from Kiara’s Workshop built a Magikarp version of Big Mouth Billy Bass.
Sizing in at over two entire feet, the orange karp is able to dance, it is able to sing, and it is able to stun the crowd. Magikarp functions the same way as its predecessor; a small button underneath allows the show to commence. Of course, this did not come without its challenges.
Starting the project was easy: just a model found online and some Blender fun to create a basic mold. Dissecting a Big Mouth Billy Bass gave direct inspiration for how to construct the new idol in terms of servos and joints. Programming wasn’t even all that much work, thanks to Bottango handling the animations. Filling the mold with silicone proved to be a bit more of a challenge.
After multiple attempts with some minor variations in procedure, [Kiara] got the fish star’s skin just right. All it took was a paint job and some foam filling for the final touches. While this wasn’t the most mechanically challenging animatronic project, we have seen our fair share of more advanced mechanics. For example, check out this animatronic that sees through its own eyes!
youtube.com/embed/spsPT778ws0?…
Garage Fridge Gets New DIY Controller
[Rick] had a problem. His garage refrigerator was tasked with a critical duty—keeping refreshing beverages at low temperature. Unfortunately, it had failed—the condenser was forever running, or not running at all. The beverages were either frozen, or lukewarm, regardless of the thermostat setting. There was nothing for it—the controller had to be rebuilt from scratch.
Thankfully, [Rick]’s junk drawer was obliging. He was able to find an Arduino Uno R4, complete with WiFi connectivity courtesy of the ESP32 microcontroller onboard. This was paired with a DHT11 sensor, which provided temperature and humidity measurements. [Rick] began testing the hardware by spitting out temperature readings on the Uno’s LED matrix.
Once that was working, the microcontroller had to be given control over the fridge itself. This was achieved by programming it to activate a Kasa brand smart plug, which could switch mains power to the fridge as needed. The Uno simply emulated the action of the Kasa phone app to switch the smart plug on and off to control the fridge’s temperature, with the fridge essentially running flat out whenever it was switched on. The Uno also logs temperature to a server so [Rick] can make sure temperatures remain in the proper range.
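The temperature control itself is classic bang-bang control with hysteresis, so the compressor isn’t cycled every time the reading wobbles. Below is a minimal Python sketch of that logic; the sensor read and smart-plug switching functions are hypothetical stand-ins rather than [Rick]’s actual Kasa-emulation code.

```python
import time

SETPOINT_C = 4.0    # target beverage temperature
HYSTERESIS = 1.0    # +/- band so the compressor isn't short-cycled

def control_loop(read_temp_c, plug_on, plug_off, period_s=60):
    """read_temp_c() returns the current temperature in Celsius; plug_on() and
    plug_off() switch mains power to the fridge (hypothetical stand-ins for
    whatever smart-plug command the controller actually sends)."""
    cooling = False
    while True:
        t = read_temp_c()
        if not cooling and t > SETPOINT_C + HYSTERESIS:
            plug_on()
            cooling = True
        elif cooling and t < SETPOINT_C - HYSTERESIS:
            plug_off()
            cooling = False
        time.sleep(period_s)
```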
We’ve seen some great beverage-cooling hacks over the years. If you’ve mastered your own hacky methods of keeping the colas chilled, don’t hesitate to let us know on the tipsline.
Reason versus Sentimental Attachment for Old Projects
We have probably all been there: digging through boxes full of old boards for projects and related parts. Often it’s not because we’re interested in the contents of said box, but because we found ourselves wondering why in the name of project management we have so many boxes of various descriptions kicking about. This is the topic of [Joe Barnard]’s recent video on his BPS.shorts YouTube channel, as he goes through box after box of stuff.
For some of the ‘trash’ the answer is pretty simple: an old rocket that isn’t too complex can have its electronics removed and the bare tube tossed, which at least reduces the volume of ‘stuff’. Then there are the boxes with old projects, each of which is a tangible reminder of milestones, setbacks, friendships, and so on. Sentimental stuff, basically.
Safety rules make at least one decision obvious: every single Li-ion battery gets removed when it’s not in use and stored in its own fire-resistant box. That still leaves box after box full of parts and components that were ordered for projects once, but never fully used up. Do you keep all of it, just in case it will be needed again Some Day? The same goes for boxes full of expensive cut-off cable, rare and less rare connectors, and so on.
One escape clause is of course that you can always sell things rather than just tossing them, assuming they’re valuable enough. In [Joe]’s case, many people have watched his videos and would love to own a piece of that history, but this is not an option open to most of us. That leaves the question of whether gritting one’s teeth and simply tossing the ‘value-less’ sentimental stuff and cheap components is the way to go.
Although there is always the option of renting storage somewhere, this feels like a cheat, and will likely only result in the volume of ‘stuff’ expanding to fill the void. Ultimately [Joe] is basically begging his viewers to help him to solve this conundrum, even as many of them and our own captive audience are likely struggling with a similar problem. Where is the path to enlightenment here?
youtube.com/embed/IPXQ_6CMm28?…
Hackaday Podcast Episode 348: 50 Grams of PLA Hold a Ton, Phreaknic Badge is Off The Shelf, and Hackers Need Repair Manuals
Join Hackaday Editors Elliot Williams and Tom Nardi as they go over their picks for the best stories and hacks from the previous week. Things start off with a warning about the long-term viability of SSD backups, after which the discussion moves on to the limits of 3D printed PLA, the return of the Pebble smart watch, some unconventional aircraft, and an online KiCad schematic repository that has plenty of potential. You’ll also hear about a remarkable conference badge made from e-waste electronic shelf labels, filling 3D prints with foam, and a tiny TV powered by the ESP32. The episode wraps up with our wish for hacker-friendly repair manuals, and an interesting tale of underwater engineering from D-Day.
Check out the links below if you want to follow along, and as always, tell us what you think about this episode in the comments!
html5-player.libsyn.com/embed/…
As always, this episode is available in DRM-free MP3.
Where to Follow Hackaday Podcast
Places to follow Hackaday podcasts:
Episode 348 Show Notes:
News:
What’s that Sound?
- Congratulations to [for_want_of_a_better_handle] for guessing the data center ambiance!
Interesting Hacks of the Week:
- Designing PLA To Hold Over A Metric Ton
- The New Pebble: Now 100% Open Source
- Magnus Effect Drone Flies, Looks Impossible
- An Online Repository For KiCad Schematics
- Shelf Life Extended: Hacking E-Waste Tags Into Conference Badges
- On The Benefits Of Filling 3D Prints With Spray Foam
Quick Hacks:
- Elliot’s Picks:
- Get To The Games On Time With This Ancient-Style Waterclock
- How To Design 3D Printed Pins That Won’t Break
- Necroprinting Isn’t As Bad As It Sounds
- Tiny Little TV Runs On ESP32
- Tom’s Picks:
- Little Lie Detector Is Probably No Worse Than The Big Ones
- Build Your Own Glasshole Detector
- Portable Plasma Cutter Removes Rust, Packs A (Reasonable) Punch
Can’t-Miss Articles:
- Give Us One Manual For Normies, Another For Hackers
- How Cross-Channel Plumbing Fuelled The Allied March On Berlin
hackaday.com/2025/12/05/hackad…
Mac System 7 On a G4? Why Not!
Over the many years Apple Computer have been in operation, they have made a success of nearly-seamlessly transitioning multiple times between operating systems and underlying architectures. There have been many overlapping versions, but there’s always a point at which a certain OS won’t run on newer hardware. Now [Jubadub] has pushed one of those limits a little further than Apple intended, by persuading classic Mac System 7 to run on a G4.
System 7 was the OS your Mac would have run some time in the mid ’90s, whether it was a later 68000 machine or a first-gen PowerMac. In its day it gave Windows 3.x and even 95 a run for their money, but it relied on an older Mac ROM architecture than the one found in a G4. The hack here lies in leaked ROMs, hidden backwards compatibility, and an unreleased but preserved System 7 version originally designed for the ’90s Mac clone programme axed by Steve Jobs. It’s not perfect, but it achieves something long thought impossible.
As to why, it seems there’s a significant amount of software that needs System 7 to run, something mirrored in the non-Mac retrocomputing world. This isn’t even the most surprising System 7 hack we’ve seen recently; as an example, someone made a version that runs on x86 machines.
Thumbnail Image Art: Apple PowerMac G4 by baku13, CC BY-SA 3.0
This Week in Security: React, JSON Formatting, and the Return of Shai Hulud
After a week away recovering from too much turkey and sweet potato casserole, we’re back for more security news! And if you need something to shake you out of that turkey-induced coma, React Server has a single-request Remote Code Execution flaw in versions 19.0.1, 19.1.2, and 19.2.1.
The issue is insecure deserialization in the Flight protocol, as implemented right in React Server, and notably also used in Next.js. Both projects have issued security advisories for CVSS 10.0 CVEs.
There are reports of a public Proof of Concept (PoC), but the repository that has been linked explicitly calls out that it is not a true PoC, but merely research into how the vulnerability might work. As far as I can tell, there is not yet a public PoC, but reputable researchers have been able to reverse engineer the problem. This implies that mass exploitation attempts are not far off, if they haven’t already started.
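Setting the specifics of the Flight protocol aside, the underlying class of bug is worth spelling out. The classic Python pickle example below is only an analogy, not how React’s deserializer works: any format that lets attacker-controlled bytes decide what code runs during decoding is one crafted request away from RCE.

```python
import os
import pickle

class Evil:
    def __reduce__(self):
        # pickle calls __reduce__ when loading and then invokes whatever it returns
        return (os.system, ("echo attacker code ran",))

payload = pickle.dumps(Evil())   # what an attacker would send over the wire
pickle.loads(payload)            # never deserialize untrusted input like this
```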
Legal AI Breaks Attorney-Client Privilege
We often cover security flaws that are discovered by merely poking around the source of a web interface. [Alex Schapiro] went above and beyond the call of duty, manually looking through minified JS, to discover a major data leak in the Filevine legal AI. And the best part? The problem isn’t even in the AI agent this time.
The story starts with subdomain enumeration: the process of searching DNS records, Google results, and other sources for valid subdomains. That resulted in a valid subdomain and a not-quite-valid web endpoint. This is where [Alex] started digging through the JavaScript, and found an Amazon AWS endpoint and a reference to BOX_SERVICE. Making requests against the listed endpoint resulted in both boxFolders and a boxToken in the response. What are those, and what is Box?
Box is a file sharing system, similar to Google Drive or even Microsoft SharePoint. And that boxToken was a valid admin-level token for a real law firm, containing plenty of confidential records. It was at this point that [Alex] stopped interacting with the Filevine endpoints and contacted their security team. There was a reasonably quick turnaround, and when [Alex] re-tested the flaw a month later, it had been fixed.
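Subdomain enumeration, the first step in that story, needs nothing exotic. Here is a minimal sketch of the dictionary-based variant: try likely names and keep the ones that resolve. This is a generic illustration, not the tooling [Alex] used, and example.com is just a placeholder target.

```python
import socket

def enumerate_subdomains(domain, words):
    """Yield candidate subdomains that actually resolve in DNS."""
    for word in words:
        host = f"{word}.{domain}"
        try:
            socket.getaddrinfo(host, None)
        except socket.gaierror:
            continue
        yield host

# example.com is purely a placeholder target for illustration
print(list(enumerate_subdomains("example.com", ["www", "api", "staging", "admin"])))
```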
JSON Formatting As A Service
The web is full of useful tools, and I’m sure we all use them from time to time. Or maybe I’m the only lazy one that types a math problem into Google instead of opening a dedicated calculator program. I’m also guilty of pasting base64 data into a conversion web site instead of just piping it through base64 and xxd in the terminal. Watchtowr researchers are apparently familiar with such laziness, er, efficiency, in the form of JSONformatter and CodeBeautify. Those two tools have an interesting feature: an online save function.
You may see where this is going. Many of us use GitHub Gists, which supports secret gists protected by long, random URLs. JSONformatter and CodeBeautify offer no such protection. Their URLs are short enough to enumerate, not to mention there is a Recent Links page on both sites. Between the two sites, there are over 80,000 saved JSON snippets. What could possibly go wrong? Not all of that JSON was intended to be public, and it’s not hard to predict that snippets containing secrets were leaked through these sites.
And then on to the big question: Is anybody watching? Watchtowr researchers beautified a JSON file containing a Canarytoken in the form of AWS credentials. The JSON was saved with the 24-hour timeout, and 48 hours later, the Canarytoken was triggered. That means that someone is watching and collecting those JSON snippets, and looking for secrets. The moral? Don’t upload your passwords to public sites.
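The difference between a “secret” link and an enumerable one is really just keyspace, and a back-of-the-envelope sketch makes the point (the ID lengths and request rate here are assumptions for illustration):

```python
# A gist-style URL with ~32 hex characters: far too many possibilities to guess.
print(f"32 hex chars: {16**32:.2e} possible IDs")

# A 6-character alphanumeric ID is a different story, and sequential IDs are worse still.
print(f"6 alphanumerics: {36**6:,} possible IDs")

# Walking 80,000 saved snippets at a modest 100 requests per second:
print(f"{80_000 / 100 / 60:.1f} minutes to fetch them all")
```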
Shai Hulud Rises Again
NPM continues to be a bit of a security train wreck, with the Shai Hulud worm making another appearance, this time with some upgraded smarts. This time around, the automated worm managed to infect 754 packages. It comes with a new trick: pushing the pilfered secrets directly to GitHub repositories, to overcome the rate limiting that affected this worm the first time around. There were over 33,000 unique credentials captured in this wave. When researchers at GitGuardian tested that list a couple of days later, about 10% were still valid.
This wave was launched by a compromised PostHog credential that allowed a malicious update to the PostHog NPM package. The nature of Node.js dependency trees means the worm was able to spread very quickly to other packages whose maintainers had the compromised package installed. Version 2.0 of Shai Hulud also includes another nasty surprise, in the form of a remote control mechanism stealthily installed on compromised machines. That implies this is not the last time we’ll see Shai Hulud causing problems.
Bits and Bytes
[Vortex] at ByteRay took a look at an industrial cellular router, and found a couple of major issues. This ALLNET router has an RCE, due to CGI handling of unauthenticated HTTP requests. It’s literally just /cgi-bin/popen.cgi?command=whoami to run code as root. That’s not the only issue here, as there’s also a hardcoded username and password. [Vortex] was able to derive that backdoor account information and use hashcat to crack the password. I was unable to confirm whether patched firmware is available.
Google is tired of their users getting scammed by spam phone calls and texts. Their latest salvo in trying to defeat such scams is in-call scam protection. This essentially detects when a banking app is opened as a result of a phone call. When that scenario is detected, a warning dialog is presented that suggests the user hang up, and a 30-second waiting period is enforced. While this may sound terrible for sophisticated users, it is likely to help prevent fraud against our collective parents and grandparents.
What seemed to be just an illegal gambling ring of web sites now appears to be a front for an Advanced Persistent Threat (APT). That term, btw, usually refers to a government-sponsored hacking effort. In this case, rather than gambling fraud targeting Indonesians, the operation appears to be targeting Western infrastructure. One of the strongest arguments for this claim is the fact that this network has been operating for over 14 years and includes a mind-boggling 328,000 domains. Quite the odd one.
React2Shell = Log4Shell: 87,000 Servers in Italy at Risk of Compromise
In 2025, the IT and security communities are abuzz over a single name: “React2Shell”. With the disclosure of a new vulnerability, CVE-2025-55182, rated CVSS 10.0, developers and security experts around the world are warning about its severity, even calling it “the 2025 Log4Shell”.
Roughly 8,777,000 servers worldwide are affected by this threat, of which around 87,000 are in Italy. With a severity score of 10, this could turn out to be one of the most significant threats of the entire year, and it is now becoming “active”.
The New Log4Shell of 2025
Indeed, it has been confirmed that the Chinese hacking community has already begun large-scale attack tests against exposed servers using an exploit for this vulnerability. CVE-2025-55182 is not simply a software bug. It is a structural flaw in the RSC serialization protocol that can be exploited with nothing more than the default configuration, without any mistake on the developers’ part. Authentication is not even required.
That is why security experts around the world are calling it “the 2025 version of Log4Shell”. The React2Shell Checker vulnerability scanner analyses multiple paths and flags endpoints as either Safe or Vulnerable, and screenshots show that several researchers are already running automated scans against RSC-based servers.
The problem is that these tools become weapons that attackers can use. Chinese hackers are already running successful RCE tests. According to data gathered from the Chinese hacking community, attackers have already injected React2Shell PoCs into Next.js-based services, collected the results via the DNSLog service, and verified the attack vector.
The PoC Exploit Used in the Scans
A manipulated payload is sent with Burp Repeater and the server creates an external DNS record, which shows the attack being verified in real time. The attackers have already completed the following steps:
- Upload the payload to the target server
- Trigger the RSC serialization vulnerability
- Confirm successful command execution via an external DNSLog
- Verify that child_process can be executed on the server side.
This is no longer a “theoretical vulnerability”, but proof that a working attack vector has already been developed.
Chinese hackers are, at this very moment, successfully carrying out RCE.
PoCs have been published on GitHub, and some researchers have run them, confirming that the Windows Calculator (Calc.exe) was executed remotely.
Sending the payload via the Burp Suite Repeater resulted in the immediate execution of Calc.exe on the server, which means full remote code execution is possible.
Remotely launching the calculator is a common way for the security research community to demonstrate a successful “RCE”, i.e. that an attacker has taken control of a server.
The 87,000 servers reported in the FOFA screenshot show that a significant number of web services run by Italian companies with React/Next.js-based RSC features enabled are at risk. The problem is that most of them:
- use server-side rendering
- keep the default RSC settings
- expose API routes, and can therefore be targeted by large-scale attacks.
In particular, since FOFA search results are a common source of information that hacking groups also use to select targets, it is highly likely that these servers are already under active scanning.
Why Is React2Shell Dangerous?
Experts describe this vulnerability as “unprecedented” for the following reasons:
- Unauthenticated RCE (unauthenticated remote code execution): the attacker does not need to log in.
- Zero-click potential: no user action is required.
- Immediately usable PoCs: already published in large numbers on GitHub and X.
- Hundreds of thousands of services worldwide rely on React 19/Next.js: risk of large-scale, supply-chain-level proliferation.
- The default configuration itself is vulnerable: it is difficult for developers to defend against.
This combination is very similar to the Log4Shell incident of 2021.
However, unlike Log4Shell, which was limited to Java’s Log4j, React2Shell is more serious because it targets frameworks used across the entire global web-services ecosystem.
What the Signs of an Actual Attack Look Like
Attackers are already running the following attack routine:
- Collecting exposed React/Next.js assets, country by country, from FOFA
- Running the React2Shell PoC automation script
- Checking whether the command executed by watching DNSLog
- Swapping in the real payload once vulnerable servers have been identified
- Taking control of the system via the final RCE
This phase is not a pre-scan, but rather the step immediately before the attack. Given the particularly high number of servers in Italy, the probability of large-scale RCE attacks against national institutions and companies is very high. Vulnerability assessment tools and other utilities are being uploaded to the security community.
Mitigating the Security Bug
Experts recommend emergency measures such as immediate patching, vulnerability scanning, log analysis, and updated WAF blocking policies.
The React team announced on the 3rd that it had urgently released a patch for CVE-2025-55182, fixing a structural flaw in the RSC serialization protocol. However, because React does not update itself automatically, the vulnerability persists unless companies and development organizations manually update and rebuild their deployments.
Next.js-based services in particular require a rebuild and redeploy after the React patch is applied, which means there will likely be a significant delay before the fix actually reaches production environments. Experts warn that “the patch has been released, but most servers are still at risk”.
Many Next.js applications run with RSC enabled by default, often without the internal development teams even being aware of it. Companies therefore need to inspect their codebases carefully for the use of Server Components and Server Actions. With large-scale scanning attempts already confirmed in several countries, including Korea, tightening blocking policies is essential.
Moreover, with React2Shell automated scanners and PoC code spreading worldwide, attackers are mass-scanning exposed servers at this very moment. Security experts have therefore stressed that companies must immediately scan their own domains, subdomains, and cloud instances using external attack-surface assessment tools.
They also point out that if internal logs show traces of DNSLog callbacks, a spike in unusual multipart POST requests, or large payloads sent to RSC endpoints, it is very likely that an attack attempt has already taken place or that a partial compromise has occurred, which calls for a rapid response.
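As a starting point for that sort of log review, even a crude script will surface candidate requests. The sketch below is only an illustration of the indicators described above; the patterns and log format are assumptions, not an official detection rule.

```python
import re
import sys

# Crude indicators drawn from the advice above: multipart POSTs aimed at RSC
# endpoints and callbacks to DNS-logging style services (assumed patterns).
SUSPICIOUS = [
    re.compile(r"POST .+ multipart/form-data", re.I),
    re.compile(r"dnslog|oast|interact\.sh", re.I),
]

def scan(logfile):
    with open(logfile, errors="replace") as f:
        for lineno, line in enumerate(f, 1):
            if any(p.search(line) for p in SUSPICIOUS):
                print(f"{logfile}:{lineno}: {line.rstrip()}")

if __name__ == "__main__":
    for path in sys.argv[1:]:
        scan(path)
```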
The article React2Shell = Log4Shell: 87,000 Servers in Italy at Risk of Compromise originally appeared on Red Hot Cyber.
Warnings About Retrobright Damaging Plastics After 10 Year Test
Within the retro computing community there exists a lot of controversy about so-called ‘retrobrighting’, which involves methods that seek to reverse the yellowing that many plastics suffer over time. While some are all in on this practice that restores yellowed plastics to their previous white luster, others actively warn against it after bad experiences, such as [Tech Tangents] in a recent video.
Uneven yellowing on a North American SNES console. (Credit: Vintage Computing)
After a decade of trying out various retrobrighting methods, he found for example that a Sega Dreamcast shell which he treated with hydrogen peroxide ten years ago actually yellowed faster than the untreated plastic right beside it. Similarly, the use of ozone as another way to achieve the oxidation of the brominated flame retardants that are said to underlie the yellowing was also attempted, with highly dubious results.
While streaking after retrobrighting with hydrogen peroxide can be attributed to an uneven application of the compound, there are many reports of the treatment damaging the plastics and making it brittle. Considering the uneven yellowing of e.g. Super Nintendo consoles, the cause of the yellowing is also not just photo-oxidation caused by UV exposure, but seems to be related to heat exposure and the exact amount of flame retardants mixed in with the plastic, as well as potentially general degradation of the plastic’s polymers.
Pending more research on the topic, retrobrighting should perhaps not be banished completely. But considering the damage we may be doing to potentially historical artifacts, it would behoove us to at least take a step back and ask whether a given item really needs retrobrighting today, or whether it can wait until we better understand the implications.
youtube.com/embed/_n_WpjseCXA?…
Cloudflare Down Again: Problems with the Dashboard, APIs, and Now the Workers Too
Cloudflare is back in the spotlight after a new wave of outages which, on 5 December 2025, is affecting several components of the platform.
Beyond the Dashboard and API problems already reported by users around the world, the company has confirmed that it is also working on a significant increase in errors affecting Cloudflare Workers, the serverless service used by thousands of developers to automate critical functions of their applications.
Yet another piece added to an already far-from-negligible mosaic of problems.
As numerous security experts have pointed out for years, entrusting the web’s basic infrastructure to a handful of companies creates structural bottlenecks. And when one of those nodes seizes up, as is happening with Cloudflare, the whole ecosystem feels it.
A hiccup can block automations, custom APIs, redirect logic, authentication functions, and even integrated security systems. A single malfunction can trigger a domino effect far larger than expected.
To complicate matters further, scheduled maintenance is also under way today in the DTW data center in Detroit, with possible traffic rerouting and increased latency for users in the area. Although the maintenance is planned and managed, its overlap with the Workers and Dashboard problems raises the level of uncertainty. In some specific cases, such as PNI/CNI customers connecting directly to the data center, certain network interfaces may be temporarily unavailable, forcing failover onto alternative paths.
The crucial point remains the same: this centralization exposes the web to enormous operational and security risks. When a platform like Cloudflare creaks, even for just a few hours, DDoS protections, anti-bot systems, and firewall rules are weakened, opening windows of vulnerability that well-prepared attackers could try to exploit.
Depending on a single giant for such sensitive functions is a point of fragility that can no longer be ignored.
The previous global blackout, documented with great transparency by Cloudflare itself and analysed by Red Hot Cyber, showed how an internal backbone configuration error could knock significant portions of the world’s traffic offline.
Today we are not (yet) facing an outage of that magnitude, but the combination of several simultaneous problems brings that case back to mind and raises doubts about the overall resilience of the infrastructure.
The new Cloudflare outage, this time spread across several layers of the platform, shows how fragile the modern Internet is and how much its reliability depends on a handful of players. Companies, small or large, that build their services on these foundations should start seriously considering multi-provider redundancy plans, because when a single point fails, half the web risks going down with it.
The article Cloudflare Down Again: Problems with the Dashboard, APIs, and Now the Workers Too originally appeared on Red Hot Cyber.
Off-Grid, Small-Scale Payment System
An effective currency needs to be widely accepted, easy to use, and stable in value. By now most of us have recognized that cryptocurrencies fail at all three things, despite lofty ideals revolving around decentralization, transparency, and trust. But that doesn’t mean that all digital currencies or payment systems are doomed to failure. [Roni] has been working on an off-grid digital payment node called Meshtbank, which works on a much smaller scale and could be a way to let a much smaller community set up a basic banking system.
The node uses Meshtastic as its backbone, letting the payment system use the same long-range low-power system that has gotten popular in recent years for enabling simple but reliable off-grid communications for a local area. With Meshtbank running on one of the nodes in the network, accounts can be created, balances reported, and digital currency exchanged using the Meshtastic messaging protocols. The ledger is also recorded, allowing transaction histories to be viewed as well.
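A toy version of that ledger logic is easy to picture. The sketch below is our own illustration of the concept, not Meshtbank’s actual code or message format: a single trusted node keeps balances and applies the transfer commands it receives over the mesh.

```python
from dataclasses import dataclass, field

@dataclass
class Bank:
    balances: dict = field(default_factory=dict)
    ledger: list = field(default_factory=list)

    def handle(self, msg):
        """msg is a parsed text command received over the mesh, e.g. ('PAY', 'alice', 'bob', 5)."""
        cmd, *args = msg
        if cmd == "OPEN":
            (who,) = args
            self.balances.setdefault(who, 0)
        elif cmd == "PAY":
            src, dst, amount = args
            if self.balances.get(src, 0) >= amount > 0:
                self.balances[src] -= amount
                self.balances[dst] = self.balances.get(dst, 0) + amount
                self.ledger.append(msg)
        elif cmd == "BAL":
            (who,) = args
            return self.balances.get(who, 0)

bank = Bank()
bank.handle(("OPEN", "alice"))
bank.balances["alice"] = 10          # seed some credit for the demo
bank.handle(("OPEN", "bob"))
bank.handle(("PAY", "alice", "bob", 5))
print(bank.balances, bank.ledger)
```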
A system like this could have great value anywhere barter-style systems exist, or could be used for community credits, festival credits, or any place that needs to track off-grid local transactions. As a thought experiment or proof of concept it shows that this is at least possible. It does have a few weaknesses though — Meshtastic isn’t as secure as modern banking might require, and the system also requires trust in an administrator. But it is one of the more unique uses we’ve seen for this communications protocol, right up there with a Meshtastic-enabled possum trap.
The Digital Supply Chain: Why a Supplier Can Become a Critical Weak Point
The exponential growth of digital interconnection in recent years has created a deep operational interdependence between organizations and their third-party service providers. This digital supply chain model, while it optimizes efficiency and scalability, also introduces a critical systemic risk: a vulnerability or failure in a single node of the chain can trigger a cascade of consequences that put the integrity and resilience of the entire business at risk.
The recent attack on the systems of myCicero S.r.l., a service provider for the Consorzio UnicoCampania, is a textbook example of that risk.
The data breach notification sent to users (Figure 1), issued in compliance with the General Data Protection Regulation (GDPR), goes beyond mere formal compliance. It is proof that a single vulnerability within the supply chain can lead to the unauthorized exposure of the personal data of thousands of users, including, as in this case, potentially sensitive data from identity documents and student travel passes.
Figure 1. UnicoCampania notification
The myCicero – UnicoCampania Case
The Consorzio UnicoCampania, the body responsible for regional fare integration and for issuing subsidized student travel passes, has officially confirmed a serious data breach affecting the infrastructure of one of its key suppliers: myCicero S.r.l.
The incident, described as a “sophisticated cyber attack perpetrated by unidentified external actors”, took place between 29 and 30 March 2025.
The complexity of the case lies in the layering of data-processing roles. In managing the travel pass service, the Consorzio UnicoCampania acted in several capacities:
- Data Controller or Joint Controller: for managing user accounts, credentials, and the issuing of travel passes.
- Data Processor (on behalf of the Regione Campania): for acquiring and verifying the documentation needed to prove eligibility for fare subsidies.
The attack led to the exfiltration of sensitive, unencrypted data, including:
- Personal and contact details, and authentication credentials (usernames and passwords, the latter at least encrypted);
- Images of identity documents, data declared for ISEE certification, and special categories of data (e.g. health information, such as disability status) where these appear in the ISEE documentation [1].
- Personal data belonging to minors and to their parents [1].
Data relating to credit cards or other payment instruments was not involved, as it is not hosted on myCicero’s systems.
Figure 2. Exfiltrated data
In response to the incident, myCicero immediately filed a formal criminal complaint and activated a remediation plan to harden its infrastructure. In parallel, the UnicoCampania consortium promptly informed the competent authorities and took a drastic step to mitigate the risk from the compromised passwords: all affected credentials that users had not changed by 30 September 2025 were permanently deleted and disabled on 1 October 2025.
Action and Defence: How to React
Faced with an incident of this scale, the end user often feels a sense of vulnerability. To reduce exposure and limit the potential damage from a data breach, the following mitigation and hardening measures are recommended:
- Credential management:
- Use long, complex strings that combine numbers, symbols, and a mix of upper- and lower-case characters (see the short sketch after this list);
- Do not use common words, predictable sequences, or personal data (e.g. name, date of birth) as passwords;
- Apply the uniqueness principle: use unique credentials for each service;
- Change your credentials periodically and avoid reusing them over time;
- Enable multi-factor authentication (MFA) wherever possible;
- Phishing prevention:
- If you receive suspicious e-mails or text messages, always verify the sender’s identity and never provide sensitive data in reply;
- Verify the authenticity of any urgent request (especially those concerning data verification or payments) exclusively by contacting the operator through its official channels (website or a known support number);
- Avoid clicking on hyperlinks or opening attachments that are unexpected or come from unverified sources;
- Pay particular attention to requests that create a sense of urgency or lean on psychological pressure to extract information.
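On the password-strength point in the list above, a few lines of Python show why length and randomness matter; the character sets and lengths here are only examples, not a policy taken from the advisory.

```python
import math
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate(length=16):
    """Generate a random password drawn from the full printable set."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Entropy comparison: an 8-character lowercase password vs. a 16-character random one.
print(f"8 lowercase letters : {8 * math.log2(26):5.1f} bits")
print(f"16 random printable : {16 * math.log2(len(ALPHABET)):5.1f} bits")
print("example:", generate())
```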
The article The Digital Supply Chain: Why a Supplier Can Become a Critical Weak Point originally appeared on Red Hot Cyber.
Biogas Production For Surprisingly Little Effort
Probably most people know that when organic matter such as kitchen waste rots, it can produce flammable methane. As a source of free energy it’s attractive, but making a biogas plant sounds difficult, doesn’t it? Along comes [My engines] with a well-thought-out biogas plant that seems within the reach of most of us.
It’s based around a set of plastic barrels and plastic waste pipe, and he shows us the arrangement of feed pipe and residue pipe needed to ensure a flow through the system. The gas produced has CO2 and H2S as undesirable by-products, both of which can be removed with some surprisingly straightforward chemistry. The home-made gas holder, meanwhile, comes courtesy of a pair of plastic drums, one inside the other.
Perhaps the greatest surprise is that the whole thing can produce a reasonable supply of gas from as little as 2 kg of organic kitchen waste daily. We can see that this is a set-up for someone with the space and the ability to handle methane safely, but you have to admit, from watching the video below, that it’s an attractive idea. Who knows, if the world faces environmental collapse, you might just need it.
youtube.com/embed/0EC0RMQUN68?…
Building a Microscope without Lenses
It’s relatively easy to understand how optical microscopes work at low magnifications: one lens magnifies an image, the next magnifies the already-magnified image, and so on until it reaches the eye or sensor. At high magnifications, however, that model starts to fail when the feature size of the specimen nears the optical system’s diffraction limit. In a recent video, [xoreaxeax] built a simple microscope, then designed another microscope to overcome the diffraction limit without lenses or mirrors (the video is in German, but with automatic English subtitles).
The first part of the video goes over how lenses work and how they can be combined to magnify images. The first microscope was made out of camera lenses, and could resolve onion cells. The shorter the focal length of the objective lens, the stronger the magnification is, and a spherical lens gives the shortest focal length. [xoreaxeax] therefore made one by melting a bit of soda-lime glass with a torch. The picture it gave was indistinct, but highly magnified.
A cross section of the diffraction pattern of a laser diode shining through a pinhole, built up from images at different focal distances.
Besides the dodgy lens quality given by melting a shard of glass, at such high magnification some of the indistinctness was caused by the specimen acting as a diffraction grating and directing some light away from the objective lens. [xoreaxeax] visualized this by taking a series of pictures of a laser shining through a pinhole at different focal lengths, thus getting cross sections of the light field emanating from the pinhole. When repeating the procedure with a section of onion skin, it became apparent that diffraction was strongly scattering the light, which meant that some light was being diffracted out of the lens’s field of view, causing detail to be lost.
To recover the lost details, [xoreaxeax] eliminated the lenses and simply captured the interference pattern produced by passing light through the sample, then wrote a ptychography algorithm to reconstruct the original structure from the interference pattern. This required many images of the subject under different lighting conditions, which a rotating illumination stage provided. The algorithm was eventually able to recover a sort of image of the onion cells, but it was less than distinct. The fact that the lens-free setup was able to produce any image at all is nonetheless impressive.
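Full ptychography scans overlapping illumination positions, but the core trick, bouncing between the sample plane and the detector plane while enforcing the measured intensities, can be sketched in a few lines. The code below is a bare Gerchberg-Saxton-style phase-retrieval illustration, not [xoreaxeax]’s algorithm.

```python
import numpy as np

def retrieve_phase(measured_amplitude, support, iterations=200):
    """Recover a complex field whose Fourier magnitude matches `measured_amplitude`
    and which is zero outside `support` (a boolean mask in the sample plane)."""
    rng = np.random.default_rng(0)
    field = support * np.exp(1j * rng.uniform(0, 2 * np.pi, support.shape))
    for _ in range(iterations):
        far = np.fft.fft2(field)
        far = measured_amplitude * np.exp(1j * np.angle(far))  # keep the measured magnitude
        field = np.fft.ifft2(far)
        field = support * field                                # enforce the sample-plane constraint
    return field

# Toy example: a small square "specimen" and its (noise-free) diffraction magnitude.
support = np.zeros((64, 64), dtype=bool)
support[24:40, 24:40] = True
measured = np.abs(np.fft.fft2(support.astype(float)))
print(np.round(np.abs(retrieve_phase(measured, support))[30:34, 30:34], 2))
```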
To see another approach to ptychography, check out [Ben Krasnow’s] approach to increasing microscope resolution. With an electron microscope, ptychography can even image individual atoms.
youtube.com/embed/lhJhRuQsiMU?…
Preventing a Mess with the Weller WDC Solder Containment Pocket
Resetting the paraffin trap. (Credit: MisterHW)
Have you ever tipped all the stray bits of solder out of your tip cleaner by mistake? [MisterHW] is here with a bit of paraffin wax to save the day.
Hand soldering can be a messy business, especially when you wipe the soldering iron tip on those common brass wool bundles that have largely come to replace moist sponges. The Weller Dry Cleaner (WDC) is one such holder for brass wool, but the large tray in front of the brass wool opening has confused many as to its exact purpose. In short, it’s there so that you can slap the iron against the side to flick contaminants and excess solder off the tip.
Along with catching some of the bits of mostly-solder that fly off during cleaning in the brass wool section, quite a lot of debris can be collected this way. Yet as many can attest, it’s all too easy to flip over a brass wool holder and send those bits flying everywhere.
The trap in action. (Credit: MisterHW)
That’s where [MisterHW]’s pit of particulate holding comes into play, using folded sheet metal and some wax (e.g. paraffin) to create a trap that serves to catch any debris that enters it and smother it in the wax. To reset the trap, simply heat it up with e.g. the iron and you’ll regain a nice fresh surface to capture the next batch of crud.
As the wax is cold when in use, even if you were to tip the holder over, the debris should not go careening all over your ESD-safe work surface and any parts on it, and the wax can be filtered if needed to remove the particulates. When using leaded solder alloys, this setup also helps to prevent lead contamination of the area and generally eases clean-up, as bumping or tipping a soldering iron stand no longer means weeks, months, or years of accumulated crud scooting off everywhere.