The Many Questions and Challenges with DIY Hydroelectric Generators
The concept of building your own hydroelectric generator seems simple at face value: use gravity to impart as much force as possible onto a turbine, which spins a generator, thus generating electricity. If you’re like the bloke over at [FarmCraft101] trying to DIY this with your farm pond and a lot of PVC pipes, you may have some significantly more in-depth questions, especially pertaining to what kind of generator to use. This and other questions, some of which were raised after the previous video in which the first prototype generator was assembled, are answered in this follow-up video.
When you DIY such a hydroelectric system, you have a number of options for the turbine design alone, with the Kaplan-style turbine being one of the most straightforward – especially if you use a fixed pitch instead of adjustable blades – but you can get pretty far into the weeds with alternatives. As for the sharp drop-off after the turbine in this design, the technical term is a draft tube, which is actually more efficient in this kind of low-head, high-flow hydroelectric dam situation.
After getting his money back for the unusable ‘3 kW’ generator, there were three options left: try an eBay special, get a purpose-built one from a US company, or rewind an alternator stator for a higher voltage output than the standard 12/24 V. Ultimately option four was chosen, as in ‘all of the above’, so that comparison is coming up in a future video.
There were also questions from viewers about why he opted to rectify the AC power from the generator and use DC transmission to the nearest farm building. The main reason is efficiency, as DC transmission lines lack the skin effect losses. The other is that the grid-tie inverter that he plans to use needs DC input anyway. Not having to deal with AC transmission issues like losses and reactive power shenanigans is a major plus here.
Once the three new generator versions are tested, it will be interesting to see how they perform. One issue with the Kaplan-style turbine is that too high an RPM induces cavitation, which will erode the propeller pretty quickly. Car alternators generally require a fairly high RPM, so that may not work out too well. There is also the question of the DC voltage generated, as for DC transmission you want as high a voltage as possible to reduce the current.
The purpose-built generator he purchased tops out at 48 V, which is quite low. The goal is to have at least 230 VAC before rectification, so a step-up transformer may be needed. Unfortunately, three-phase transformers are pretty pricey, again making the rewound alternator seem less crazy. The wild card here is whether the eBay-purchased generator is a diamond in the rough that works out of the box as hoped.
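The voltage argument is plain Ohm's-law arithmetic: for a fixed power, the current is P/V, and the resistive line loss is I²R, so stepping up from 48 V to 230 V cuts the loss by a factor of (230/48)², roughly 23. A quick shell sketch, with an entirely made-up line resistance (none of these numbers come from the video):

```shell
#!/bin/sh
# I^2 * R line loss for the same 1 kW at two transmission voltages.
# The 0.5 ohm round-trip line resistance is an illustrative guess.
line_loss() { # args: power_W voltage_V resistance_ohm
  awk -v P="$1" -v V="$2" -v R="$3" \
    'BEGIN { I = P / V; printf "%.1f\n", I * I * R }'
}

line_loss 1000 48 0.5   # ~20.8 A in the wire -> 217.0 W lost
line_loss 1000 230 0.5  # ~4.3 A in the wire  ->   9.5 W lost
```

Same wire, same power, over twenty times less heat wasted in the cable run.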
youtube.com/embed/45DNG8eUhwY?…
Tired of Burnt Fingers? Try PID Tuning the Hot Glue Gun
Hot glue guns are pretty simple beasts: there’s an on/off switch, a heating element, and a source of current, be it battery or wired. You turn it on, and the heater starts warming up; eventually you can start extruding the thermoplastic sticks we call “hot glue”. Since there’s no temperature control, the longer you run the gun, the warmer it gets, until it is inevitably hotter than you actually want, either burning you or oozing thermoplastic out the tip. [Mellow_Labs] was sick of that after a marathon hot-glue session, and decided to improve his hot glue gun with PID tuning in the video embedded below.
PID tuning is probably a familiar concept to most of you, particularly those who have 3D printers, where it’s used in exactly the same way [Mellow_Labs] puts it to work in the hot glue gun. By varying the input (in this case, the power to the heater) in Proportion to the error between the measured temperature and the target, as well as to the Integral and Derivative of that error, you can get much steadier control than naive algorithms like the simple “on/off” thermostat, which leads to large temperature swings.
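Since [Mellow_Labs] hasn't published his code, here's a rough sketch of what one iteration of such a controller looks like, written as a shell function with awk doing the floating-point work; the gains, timestep, and 0-100 duty-cycle clamp are all arbitrary placeholder values, not anything from the video:

```shell
#!/bin/sh
# One iteration of a textbook PID update. The caller keeps the state
# (integral accumulator and previous error) between calls.
pid_step() { # args: setpoint measured integral prev_err kp ki kd dt
  awk -v sp="$1" -v pv="$2" -v i="$3" -v pe="$4" \
      -v kp="$5" -v ki="$6" -v kd="$7" -v dt="$8" 'BEGIN {
    err = sp - pv            # proportional term input
    i  += err * dt           # accumulate the integral of the error
    d   = (err - pe) / dt    # derivative of the error
    out = kp*err + ki*i + kd*d
    if (out > 100) out = 100 # clamp to a 0-100 % heater duty cycle
    if (out < 0)   out = 0
    # emit: duty cycle, new integral, new previous-error
    printf "%.2f %.4f %.4f\n", out, i, err
  }'
}

# e.g. setpoint 180, measured 170, integral 0, previous error 10,
# Kp=2 Ki=0.5 Kd=1, dt=0.1 s
pid_step 180 170 0 10 2 0.5 1 0.1   # -> 20.50 1.0000 10.0000
```

Run in a loop against the thermistor reading, this is the whole trick: the proportional term does the heavy lifting, the integral term trims out the steady-state offset, and the derivative term damps the overshoot.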
In this case, [Mellow_Labs] implements the PID control using a thermistor that looks like it came from a 3D printer, and a MOSFET driven by an RP2040. The microcontroller gets its power from the hot glue gun’s battery, fed through a buck converter. Since he has them, a small OLED screen displays the temperature, which is set with a pair of push-buttons. Thus, one can set a temperature hot enough to melt the glue, but low enough to avoid oozing or third-degree burns.
He does not share the code he’s running on the RP2040, but if you are inspired to replicate this project and don’t want to roll your own, there are plenty of example PID scripts out there, like the one in this lovely robot. No, PID isn’t reserved for thermostats, and if you are controlling heat, it isn’t reserved for electric heating, either. Some intrepid soul built a PID controller for a charcoal BBQ once.
youtube.com/embed/DKgOyBBh7eE?…
PiStorm68K Offers Supercharged Retro Amiga Experience
[AmiCube] has announced their new PiStorm68K special edition MiniMig accelerator board. This board was developed to replace the 68000 CPU in a MiniMig — a recreation of the original Amiga chipset in an FPGA that allows a genuine 68000 CPU to operate alongside it.
The PiStorm68K itself can host a genuine 68000 CPU, but it can also host various Raspberry Pi models that emulate a 68000. So if you combine a PiStorm68K with a MiniMig you can, at your option, boot into an emulated environment with massively increased performance, or boot into an original environment, with its reliable and charming sluggishness.
In the introduction video below, [AmiCube] uses the SYSINFO utility software to compare the CPU speed when using emulation (1531 MIPS) versus the original (4.47 MIPS), where MIPS means Millions of Instructions Per Second. As you can see the 68000 emulated by the Raspberry Pi is way faster than the original. The Raspberry Pi also emulates a floating-point unit (FPU) which the original doesn’t include and a memory management unit (MMU) which isn’t used.
If you’re interested in old Amiga tech you might also like to read about Chip Swap Fixes A Dead Amiga 600 or The Many-Sprites Interpretation Of Amiga Mechanics.
youtube.com/embed/6b-HfLYA1E8?…
Linux Fu: Yet Another Shell Script Trick
I’m going to go ahead and admit it: I really have too many tray icons. You know the ones. They sit on your taskbar, perhaps doing something in the background or, at least, giving you fingertip access to some service. You’d think that creating a custom tray icon would be hard, but on Linux, it can be surprisingly simple. Part of the reason is that the Freedesktop people created standards, so you don’t typically have to worry about how it works on KDE vs. GNOME or any of the other desktop environments. That’s a big win.
In fact, it is simple enough that you can even make your own tray icons with a lowly shell script. Well, of course, like most interesting shell scripts, you need some helper programs and, in this case, we’ll use YAD — which is “yet another dialog,” a derivative of Zenity. It’s a GTK program that may cause minor issues if you primarily use KDE, but they are nothing insurmountable.
The program is somewhat of a Swiss army knife. You can use it to make dialogs, file pickers, color selectors, printer dialogs, and even — in some versions — simple web browsers. We’ve seen plenty of tools to make pretty scripts, of course. However, the ability to quickly make good-looking taskbar icons is a big win compared to many other tools.
Docs
Depending on what you want to do, YAD will read things from a command line, a file, or standard input. There are dozens of options, and it is, honestly, fairly confusing. Luckily, [Ingemar Karlsson] wrote the Yad Guide, which is very digestible and full of examples.
Exactly what you need will depend on what you want to do. In my case, I want a tray icon that picks up the latest posts from my favorite website. You know. Hackaday?
The Web Connection
YAD can render HTML using WebKit. However, I ran into immediate problems. The version in the repos for the Linux I use was too old to include the HTML option. I found a supposedly statically linked version, but it was missing dependencies. Even after I fixed that, the program still reported errors related to the NVIDIA OpenGL stack.
I quickly abandoned the idea of using a web browser. I turned to two other YAD features. First, the basic dialog can hold text and, in most cases, renders quasi-HTML because it uses the Pango library. However, there is also a text-info dialog built in. Unlike most other YAD features, the text-info dialog reads its input from standard input. However, it doesn’t render markup.
In the end, I decided to try them both. Why not? It is simple enough. But first, I needed a tray icon.
The Tray
YAD can provide a “notification,” which is what it calls a tray icon. You can specify an icon, some text, and a right-click context menu. In addition, it can react when someone clicks on the icon.

Can you find the tray icon we’re talking about?
I decided to write a script with multiple personalities. If you run it with no arguments, it sets up the tray icon. If you pass anything to it, it will show a dialog with the latest Hackaday articles from the RSS feed. I wanted to make those links clickable, and that turned out to be a bit of a wrinkle. Both versions will do the job, but they each need a different approach, as you will see.
Here’s the tray code:
yad --notification --image="$0.icon.png" --text="Hackaday Now" \
--menu="Quit!quit!gtk-quit" --command="$0 show" --no-middle
You can probably guess at most of this without the manual. The image is stored in a file with the same name as the script, but with .icon.png at the end. That’s the icon in the tray. The simple menu provides an option to exit the program. If you click the icon, it calls the same script again, but with the “show” argument. The script doesn’t care what the argument is, but maybe one day it will.
So that part of the project was extremely simple. The next job is making the dialog appear.
Text Info
Grabbing the RSS feed with wget is trivial. You could use grep, sed, and bash pattern replacement to extract the titles and URLs, but I opted for awk and a brute-force parsing approach.

This works, but the URLs are long and not terribly attractive. The list is scrollable, and there are more links below the visible ones.
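The actual awk from my script isn't reproduced here, but the brute-force idea can be sketched like this, with awk's record and field separators doing the "parsing"; the inline sample stands in for the wget output, and a real feed also has channel-level tags this toy version doesn't handle:

```shell
#!/bin/sh
# Split the XML on '<' and '>' so each tag name lands in $1 and its
# text content in $2. Crude, but enough for title/link pairs.
parse_feed() {
  awk -v RS='<' -F'>' '
    $1 == "title" { t = $2 }           # remember the last title seen
    $1 == "link"  { print t ": " $2 }  # emit "title: url" per item
  '
}

# stand-in for: wget -q -O - <feed url> | parse_feed
parse_feed <<'EOF'
<item><title>First Post</title><link>https://example.com/1</link></item>
<item><title>Second Post</title><link>https://example.com/2</link></item>
EOF
```

No XML library in sight, which is exactly the kind of thing awk lets you get away with for a known, well-behaved feed.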
The standard output of awk pipes into YAD, but you can’t readily apply formatting or hyperlinks. You can use formatting in regular dialog text, which will appear before the other output. That’s where the yellow “Hackaday Today!” title in the adjacent screenshot gets set. In addition, you can automatically detect URLs and make them clickable using the --show-uri option.
Here’s the relevant command:
yad --text-info \
--text "<span foreground='$TITLECOLOR'><b><big><big>Hackaday Today!</big></big></b></span>" \
--show-uri --window-icon="$0.icon.png" \
--uri-color=$LINKCOLOR --width=$WIDTH --height=$HEIGHT \
--title "Hackaday Posts" --button="Close!gtk-ok" \
--buttons-layout=center --escape-ok 2>/dev/null
You’ll notice that the --text option does take Pango formatting, and the --show-uri option makes the links clickable. By default, dialogs have an Open and a Cancel button, but I forced this one to have a single Close button, made it accept the escape key, and centered the button.
As you can see in the screenshot, the result isn’t bad, but having each title followed by a long clickable URL is a little ugly.
Stock Dialog
Using a standard dialog instead of text-info allows better formatting.
Since the --text option works with any dialog and handles formatting, I decided to try that. The awk code was nearly the same, except for the output formatting. In addition, the output now needs to go on the command line instead of through a pipe.
This does make the script a bit more unwieldy. The awk script sets a variable, since jamming the command into the already busy YAD command line would make the script more complicated to read and work with.
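For illustration, building that variable might look like the following; the "title|url" intermediate format and the exact markup are assumptions for this sketch, not the awk from my actual script:

```shell
#!/bin/sh
# Turn "title|url" lines into Pango markup for yad's --text option.
# GTK labels accept <a href="..."> links, which is what makes the
# titles clickable in the dialog.
format_posts() {
  awk -F'|' '{ printf "<b>%s</b>\n<a href=\"%s\">%s</a>\n\n", $1, $2, $1 }'
}

DATA="$(printf 'First Post|https://example.com/1\n' | format_posts)"
echo "$DATA"
# then: yad --text="$DATA" ...
```

Since the markup wraps the title itself in the link, the long URL never has to appear on screen at all.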
The YAD command is still simple, though:
yad \
--text="$DATA" \
--window-icon="$0.icon.png" \
--width=$WIDTH --height=$HEIGHT \
--title "Hackaday Posts" --button="Close!gtk-ok" \
--buttons-layout=center --escape-ok
The DATA variable has the formatted output text. The result looks better, as you can see in the screenshot. In either version, if you click an underlined link, your default browser should open the relevant post.
Other Choices
If you want to install either script, you can get it from GitHub. Of course, you could do this in Python or any other conventional language. There are also programs for “minimizing” another program to the tray, like AllTray or KDocker, although some of these may only work with X11 and not Wayland.
It would have been nice to have an integrated browser, although, thanks again to FreeDesktop, it is simple enough to open a URL and launch the system’s default browser.
Prefer your Hackaday feed on the command line? Check out the comments for this post. Meanwhile, send us a tip (you know, a link to your project, not a gratuity) and maybe you’ll see your own project show up on the feed.
The use of Ultrasound to take on Cancerous Tumors
As areas of uncontrolled cell growth, cancerous growths form a major problem for a multi-celled organism like us humans. Thus, before they can begin to affect our long-term prospects of a continued existence, eradicating these cells-gone-wrong is essential. Unfortunately, doing so without significantly affecting healthy cells is tough. Treatments such as chemotherapy are correspondingly rough on the body, while radiation therapy is a lot more directed. Perhaps one of the more fascinating treatments involves ultrasound, with IEEE Spectrum magazine recently covering one company providing histotripsy equipment.

Diagram showing how HIFU can be used to destroy tissue in the body. An acoustic lens is used to focus sound to a small point in the body. (Credit: James Ross McLaughlan, Wikimedia)
Ultrasound has found many applications in the medical field far beyond imaging, with therapeutic ultrasound by itself covering a variety of methods to perform actions within the body without breaking the skin. By using high-energy ultrasound, everything from kidney stones to fat cells and cancerous cells can be accurately targeted and destroyed. For liver tumors, the application of so-called histotripsy has become quite common, allowing certain types of tumors to be ablated non-invasively, after which the body can handle the clean-up.
Histotripsy is a form of high-intensity focused ultrasound (HIFU) that uses either continuous or pulsed waves to achieve the desired effect, with the HIFU transducer equipped with an acoustic lens to establish a focal point. In the case of histotripsy, cavitation is induced at this focal point, which ends up destroying the local tissue. Beyond liver tumors, the expectation is that other tumors will soon be treated in a similar manner, which could be especially good news for solid tumors.
Along with new approaches like CAR T cell immunotherapy, the prospects for cancer becoming a very treatable set of diseases would seem to be brighter than ever.
How Advanced Autopilots Make Airplanes Safer When Humans go AWOL
It’s a cliché in movies that whenever an airplane’s pilots are incapacitated, some distraught crew member queries the self-loading freight if any of them know how to fly a plane. For small airplanes we picture a hapless passenger taking over the controls so that a heroic traffic controller can talk them through the landing procedure and save the day.
Back in reality, there have been zero cases of large airliners being controlled by passengers in this fashion, while it has happened a few times in small craft, but with variable results. And in each of these cases, another person in the two- to six-seater aircraft was present to take over from the pilot, which may not always be the case.
To provide a more reliable backup, a range of automated systems have been proposed and implemented. Recently, the Garmin Emergency Autoland system got its first real use: a Beechcraft B200 Super King Air landed safely with two conscious pilots on board, but they let the Autoland do its thing due to the “complexity” of the situation.
Human In The Loop
For most of the history of aviation, a human pilot has been a crucial component, for fairly obvious reasons, such as not flying past the destination airport, or casually into terrain or rough weather. This changed a few decades ago with the advent of more advanced sensors, fast computing systems, and landing assistance such as the ILS radio navigation system. It’s now easier than ever to automate things like take-off and landing, which are generally considered to be the hardest parts of any flight.
Meanwhile, the use of an autopilot of some description has become indispensable since the first long-distance flights became a thing around the 1930s. This was followed by a surge in long-distance aviation and precision bombing runs during World War II, which in turn resulted in a massive boost in R&D on airplane automation.

A USAF C-54 Skymaster. (Credit: US Air Force)
While the early gyroscopic autopilots provided basic controls that kept the airplane level and roughly on course, the push remained to increase the level of automation. This resulted in the first fully automatic take-off, flight, and landing being performed on September 22, 1947, by a USAF C-54 Skymaster. As the military version of the venerable DC-4 commercial airplane, its main adaptations included extended fuel capacity, which allowed it to safely perform this autonomous flight from Newfoundland to the UK.
In the absence of GNSS satellites, two ships were stationed along the flight path to relay bearings to the airplane’s onboard computer via radio. As the C-54 approached the airfield at Brize Norton, a radio beacon provided the glide slope and other information necessary for a safe landing. The fact that this feat was performed just over twenty-eight years after the non-stop Atlantic crossing of Alcock and Brown in their Vickers Vimy shows just how fast technology progressed at the time.
Nearly eighty years later, it bears asking the question why we still need human pilots, especially in this age of GNSS navigation, machine vision, and ILS beacons at any decently sized airfield. The other question that comes to mind is why we accept that airplanes effectively fall out of the sky the moment that they run out of functioning human pilots to push buttons, twist dials, and fiddle with sticks.
State of the Art
In the world of aviation, increased automation has become the norm, with Airbus in particular taking the lead. This means that Airbus has also taken the lead in spectacular automation-related mishaps: Flight 296Q in 1988 and Air France Flight 447 in 2009. While some have blamed the 296Q accident on the automation interfering with the pilot’s attempt to increase thrust for a go-around, the official explanation is that the pilots simply failed to notice that they were flying too low and thus tried to blame the automation.

The Helios Airways 737-300, three days before it would become a ghost flight. (Credit: Mila Daniel)
For the AF447 crash the cause was less ambiguous, even if it took a few years to recover the flight recorders from the seafloor. Based on the available evidence, it was clear that the automation had functioned as designed, with the autopilot disengaging after the pitot tubes iced up and produced inconsistent airspeed readings. Suddenly handed the reins, the pilots reacted incorrectly to the airspeed information, stalled the plane, and crashed into the ocean.
One could perhaps say that AF447 shows that there ought to be either more automation, or better pilot training so that the human element can fly an airplane unassisted by an autopilot. When we then consider the tragic case of Helios Airways Flight 522, the ‘ghost flight’ that flew on autopilot with no conscious souls on board due to hypoxia, we can imagine a dead-man switch that auto-lands the airplane instead of leaving onlookers powerless to do anything but watch the airplane run out of fuel and crash.
Be Reasonable
Although there are still a significant number of people who would not dare set foot on an airliner that doesn’t have at least two full-blooded, breathing human pilots on board, there is definitely a solid case to be made for emergency landing systems becoming a feature on airplanes, starting small. Much like the Cirrus Airframe Parachute System (CAPS) – a whole-airplane parachute system that has saved many lives as well as airframes – the Garmin Autoland feature targets smaller airplanes.

The Garmin Autoland system communicates with ATC and nearby traffic and lands unassisted. (Credit: Garmin)
After a recent successful test with a HondaJet, this unscheduled event with the Beechcraft B200 Super King Air twin-prop turned out to be effectively another test. As the two pilots were flying between airports on a repositioning flight, the cabin suddenly lost pressurization. Although both pilots were able to don their oxygen masks, the Autoland system engaged due to the dangerous cabin conditions, and they did not disengage it since they didn’t know the full extent of the situation.
This effectively kept both pilots ready to take full control of the airplane should the need have arisen to interfere, but with the automated system making a textbook descent, approach and landing, it’s clear that even if their airplane had turned into another ghost flight, they would have woken up groggy but whole on the airstrip, surrounded by emergency personnel.
Considering how many small airplanes fly each year in the US alone, systems like CAPS and Autoland stand to save many lives, both in the air and on the ground, in the coming years. Combine this with increased ATC automation at towers and elsewhere, such as the FAA’s STARS and Saab’s I-ATS, and a picture begins to form of increased automation that takes the human element out of the loop as much as possible.
Although we’re still a long way off from the world imagined in 1947 where ‘electronic brains’ would unerringly fly all airplanes and more for us, it’s clear that we are moving in that direction, with such technology even within the reach of the average owner of an airplane of some description.
Super Mario 64, Now With Microtransactions
Besides being a fun way to pass time, video gaming is a surprisingly affordable hobby per unit time. A console or budget PC might only cost a few hundred dollars, and modern games like Hollow Knight: Silksong can provide 40-60 hours of experience for only around $20 USD. This value proposition wasn’t really there in the 80s, when arcade cabinets like Gauntlet might have cost an inflation-adjusted $8 per hour in quarters. This paradigm shift is great for gamers, but hasn’t been great for arcade owners. [PrintAndPanic] wanted to bring some of that old coin-munching vibe into console gaming, and so added a credit system to Super Mario 64.
The project is a fork of a decompilation of Super Mario 64, which converts the original machine code into a human-friendly format so bugs can be fixed and other modern features added. With the code available, essentially anyone can add features to the game that weren’t there already. In this case, [PrintAndPanic] is using a Raspberry Pi connected to a coin slot, so when coins are inserted like on an old arcade machine, the Raspberry Pi tells the modified version of Super Mario 64 to add credits. These credits allow the player to run and jump, and when they run out, Mario becomes extremely limited and barely able to outrun even the slowest Bob-ombs and Goombas.
With some debugging out of the way and the custom game working, [PrintAndPanic] built a custom enclosure for the game and the coin slot to turn it into a more self-contained arcade-style machine. The modified code for this project is available on the project’s GitHub page for those who want to play a tedious version of a favorite video game that costs more money than it should.
There are plenty of other modifications for this classic as well, most of which involve improving the game instead of adding a modern microtransaction-based system.
youtube.com/embed/Z_uFcPic5kE?…
Tying up Loose Ends on a Rope-based Robot Actuator
One of the perennial challenges of building robots is minimizing the size and weight of drive systems while preserving power. One established way to do this, at least on robots with joints, is to fit each joint with a quasi-direct-drive motor integrating a brushless motor and gearbox in one device. [The 5439 Workshop] wanted to take this approach with his own robot project, but since commercial drives were beyond his budget, he designed his own powerful, printable actuator.
The motor reduction mechanism was the biggest challenge: most quasi-direct drives use a planetary gearbox, but this would have been difficult to 3D-print without either serious backlash or limited torque. A cycloidal drive was an option, but previous printable cycloidal drives seemed to have low efficiency, and he didn’t want to work with strain-wave gearing. Instead, he decided to use a rope drive (which seems to be another name for a kind of Capstan drive), which doesn’t require particularly strong materials or high precision. These normally use a rope wound around two side-by-side drums, which are difficult to integrate into a compact actuator, but he solved the issue by putting the drums in-line with the motor, with two pairs of pulleys guiding the rope between them in a “C”-shaped path.
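The reason a rope wrapped on a drum can transmit real torque without slipping, exotic materials, or tight tolerances is the capstan effect: the holding tension grows exponentially with wrap angle, T_load = T_hold * e^(mu * theta). A quick illustration, where the friction coefficient of 0.3 is a guessed value for rope on printed plastic, not anything measured in the video:

```shell
#!/bin/sh
# Capstan equation: ratio of load-side to hold-side rope tension
# after n full wraps around a drum, e^(mu * 2*pi*n).
capstan_ratio() { # args: friction_coefficient wraps
  awk -v mu="$1" -v n="$2" \
    'BEGIN { printf "%.1f\n", exp(mu * n * 2 * 3.14159265358979) }'
}

capstan_ratio 0.3 1  # a single wrap already multiplies grip ~6.6x
capstan_ratio 0.3 2
capstan_ratio 0.3 3
```

A couple of wraps and the rope is, for practical purposes, locked to the drum, which is why these drives can be zero-backlash without any gear teeth at all.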
To build the actual motor, he used a hand-wound stator inside a 3D-printed rotor with magnets epoxied into it, and Dyneema rope in the reducer for its high strength. The printed rotor proved problematic when the magnetic attraction between its magnets and the stator caused it to flex and scrape against the housing, and it eventually had to be reinforced with some thin metal sheets. After fixing this, the actuator reached five Newton-meters of torque at one amp and nine Newton-meters at five amps. The diminishing returns seem to be because the 3D-printed pulley wheels broke under higher torque, which should be easy to fix in the future.
This looks like a promising design, but if you don’t need the output shaft inline with the motors, it’s probably easier to build a simple Capstan drive, the mathematics of which we’ve covered before. Both makers we’ve previously seen build Capstan drives used them to make robot dogs, which says something for their speed and responsiveness.
youtube.com/embed/02vmEU2-5d4?…
Putting the M in a UNI-T MSO
[Kerry Wong] points out that the Uni-T MSO oscilloscopes have a logic analyzer built in — that’s the MSO, or Mixed Signal Oscilloscope, part — but you have to add the probes. He shows you how it works in a recent video below.
He’s looked at the scope’s analog capabilities before and was not unimpressed. The probes aren’t inexpensive, but they do unlock the mixed signal capabilities of the instrument.
Although simple logic analyzers are very affordable today, having the capability integrated with your scope has several advantages, including integrated triggering and the simple convenience of being able to switch measurement modes with no problem.
In many cases, being able to do things like decode UART signals without dragging out a laptop and firing up software is a nice feature. If all you’ve used are the super-cheap USB logic analyzers, you may find some of the features of a more serious instrument surprising.
Is it worth the extra expense? That depends on you and what you are doing. But if you ever wondered if it was worth splurging on digital probes for a UNI-T scope, [Kerry] can help you decide.
Not that simple logic analyzers aren’t useful, and they certainly cost less. Some of them will even work as a scope, too.
youtube.com/embed/ceYI-TNx2gA?…
Commodore Disk Drive Becomes General Purpose Computer
The Commodore 1541 was built to do one job—to save and load data from 5.25″ diskettes. [Commodore History] decided to see whether the drive could be put to other purposes, though. Namely, operating as a standalone computer in its own right!
It might sound silly, but there’s a very obvious inspiration behind this hack. It’s all because the Commodore 1541 disk drive contains a MOS 6502 CPU, along with some RAM, ROM, and other necessary supporting hardware. As you might remember, that’s the very same CPU that powers the Commodore 64 itself, along with a wide range of other 1980s machines. With a bit of work, that CPU can indeed be made to act like a general purpose computer instead of a single-purpose disk controller.
[Commodore History] compares the 1541 to the Commodore VIC-20, noting that the disk drive has a very similar configuration, but less than half the RAM. The video then explains how the drive can be reconfigured to run like the even-simpler MOS Technology KIM-1 — a very primitive but well-known 8-bit machine. What’s wild is that this can be achieved with no hardware modifications. It’s not just a thought exercise, either. We get a full “Hello World!” example running in both BASIC and machine code to demonstrate that it really works.
Code is on GitHub for the curious. We’ve featured hacks with the chunky Commodore 1541 before, too.
youtube.com/embed/6loDwvG4CP8?…
Thanks to [Bruce] and [Stephen] for the tip!
Hands On With The Raspberry Pi Compute Module Zero
We are all familiar enough by now with the succession of boards that have come from Raspberry Pi in Cambridge over the years, and when a new one comes out we’ve got a pretty good idea what to expect. The “classic” Pi model B+ form factor has been copied widely by other manufacturers, as has their current Compute Module. If you buy the real Raspberry Pi, you know you’ll get a solid board with exceptionally good software support.
Every now and then, though, they surprise us with a board that follows a completely different path, which brings us to the one on our bench today. The Compute Module Zero packs the same quad-core RP3 system-on-chip (SoC) and Wi-Fi module as the Pi Zero 2 W, along with 512 MB of SDRAM, onto a tiny 39 mm by 33 mm postage-stamp module. It’s a Pi, but not as you know it, so what is it useful for?
A Pi Zero 2 As You Haven’t Seen It Before
If you don’t mind the wait for shipping from China, LCSC have stock.
The first clue as to where this module sits in the Pi range comes from how it came to me. I have a bare module and the dev kit on loan from a friend who’s evaluating them with the idea of incorporating them into a product. Instead of buying it from a store here in Europe, he had to have it shipped from LCSC in China. It’s Chinese-made and distributed, and it’s not a consumer part in the way your Pi 5 is. Instead it’s an OEM part, and one which appears, from where we’re sitting, to be tailored specifically to the needs of OEMs manufacturing in China. Would you like a Linux computer with useful software updates and support built into your product? Look no further.
I put up a quick video showing it in detail which you can see at the bottom of the page. Physically it appears to carry the same parts we’re used to from the Zero 2, with the addition of an eMMC storage chip and with an antenna socket in place of the PCB antenna on the Zero. All the available interfaces are brought out to the edge of the board including some not seen on the Zero. The module is available with a variety of different storage options, including the version with no eMMC which my friend has. He’s also bought one with the storage on the dev board, so you can see both types.
The bottom-end CM0 has no onboard eMMC.
The dev board is similar to a Pi model A+ in size, with a bit of extra PCB at the bottom for the USB and HDMI connectors. Like the Zero it has Micro-USB connectors for power and USB, but it carries a full-size HDMI socket. There are connectors for an LCD display, a camera, a micro SD card if you’re using the version without eMMC, and 40-pin GPIO header.
In addition, there’s an external stick-on antenna in the box. Electrically it’s nothing you won’t have seen before; after all, it’s little more than a Pi Zero 2 on a different board, and with less memory. This one is fresh from the box and doesn’t have an OS installed, but since we all already know how well a Pi Zero 2 runs and the likely implications of 512 MB of memory I’ve left it that way for my friend.
What Can This Board Do For Us?
The idea of a bottom-end Raspberry Pi as a component module for your Chinese assembly house is a good one. It has to be the RP3 on board, because as we’ve noted, the earlier Pi architecture is heading into the sunset and that is now their lowest-power 64-bit silicon. It could use more memory, but 512 MB is enough for many undemanding Linux applications and more than appears on many SoCs.
For tiny little computer applications, it’s an attractive component, but it’s a little bit expensive. Depending on the version, and whether it comes with the dev board, it ranges from about $25 to $38, and we can imagine that even with a quantity price break that may be too much for many manufacturers. A Chinese SoC, albeit with worse long-term Linux support, can be had for much less. If this SBC form factor catches on, we’d expect to see knockoff boards appear for a more reasonable price in due course.
Perhaps as the price of memory eventually comes down they will increase the spec a little, but we’d hazard a guess that a lower price would mean more success. A low power, plug-innable computer for $20 would be interesting for a number of projects where size really matters. Only time will tell, but meanwhile if you’re designing a product you have a new Linux option for it, and for the rest of us it’s time to look out for these modules appearing in things we buy.
Would you use one of these, and for what?
youtube.com/embed/jtdAFIAMueM?…
Popular Science Experiments in Sound During the 19th-Century
Check one, two; check one, two; is this thing on? Over on The Public Domain Review [Lucas Thompson] takes us for a spin through sound, as it was in Britain around and through the 1800s.
The article begins by introducing the Father of Acoustics, German physicist Ernst Chladni. After placing grains of sand on a thin metal plate and drawing a violin bow along one edge, he watched Chladni figures appear, making manifest that which previously could only be heard: sound waves.
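If you want to play along at home without the sand, the classic textbook approximation for a square plate’s standing waves is easy to compute. This toy Python sketch prints an ASCII rendering of where the sand would settle; the (n, m) mode numbers and the 0.05 nodal threshold are arbitrary choices of ours, and a real clamped plate is messier than this idealized model.

```python
import math

def chladni(x, y, n=3, m=5, eps=0.05):
    """Toy model of a square Chladni plate (unit side length): sand
    collects along nodal lines, where the standing-wave amplitude
    cos(n*pi*x)*cos(m*pi*y) + cos(m*pi*x)*cos(n*pi*y) is near zero."""
    amp = math.cos(n * math.pi * x) * math.cos(m * math.pi * y) \
        + math.cos(m * math.pi * x) * math.cos(n * math.pi * y)
    return abs(amp) < eps  # True where sand would accumulate

# Render a coarse ASCII picture of the nodal lines for the (3, 5) mode.
size = 40
for row in range(size):
    y = row / (size - 1)
    print("".join("#" if chladni(col / (size - 1), y) else "."
                  for col in range(size)))
```

Nudging n and m reproduces the different figures Chladni coaxed out by bowing at different points along the plate’s edge.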
It’s fun to think that it wasn’t so long ago that the physics of sound was avant-garde. Middle class Victorian society was encouraged to reproduce cutting edge experiments with equipment in their own homes, participating in a popular science which was at the same time part entertainment and part instruction, for young and old alike. Throughout the rest of his article [Lucas] lists a number of popular science books from the period and talks a little about what was to be found within.
See the video below the break for a demonstration of Chladni figures from The Royal Institution. Of course the present state of the art regarding sonics is well advanced as compared with that of the 19th century. If you’re interested to know more check out Building A Wall-Mounted Sound Visualizer and Seeing Sound For Under $200.
youtube.com/embed/OLNFrxgMJ6E?…
2025: As The Hardware World Turns
If you’re reading this, that means you’ve successfully made it through 2025! Allow us to be the first to congratulate you — that’s another twelve months of skills learned, projects started, and hacks… hacked. The average Hackaday reader has a thirst for knowledge and an insatiable appetite for new challenges, so we know you’re already eager to take on everything 2026 has to offer.
But before we step too far into the unknown, we’ve found that it helps to take a moment and reflect on where we’ve been. You know how the saying goes: those that don’t learn from history are doomed to repeat it. That whole impending doom bit obviously has a negative connotation, but we like to think the axiom applies for both the lows and highs in life. Sure you should avoid making the same mistake twice, but why not have another go at the stuff that worked? In fact, why not try to make it even better this time?
As such, it’s become a Hackaday tradition to rewind the clock and take a look at some of the most noteworthy stories and trends of the previous year, as seen from our rather unique viewpoint in the maker and hacker world. With a little luck, reviewing the lessons of 2025 can help us prosper in 2026 and beyond.
Love it or Hate it, AI is Here
While artificial intelligence software — or at least, what passes for it by current standards — has been part of the technical zeitgeist for a few years, 2025 was definitely the year that AI seemed to be everywhere. So much so that the folks at Merriam-Webster decided to make “slop”, as in computer-generated garbage content, their Word of the Year. They also gave honorable mention to “touch grass”, which they describe as a phrase that’s “often aimed at people who spend so much time online that they become disconnected from reality.” But we’re going to ignore that one for personal reasons.
At Hackaday, we’ve obviously got some strong feelings on AI. For those who earn a living by beating the written word into submission seven days a week, the rise of AI is nothing less than an existential crisis. The only thing we have going for us is the fact that the average Hackaday reader is sharp enough to recognize the danger posed by a future in which all of our media is produced by a Python script running on somebody’s graphics card and will continue to support us, warts and all.
Like all powerful tools, AI can get you into trouble if you aren’t careful.
But while most of us are on the same page about AI in regards to things like written articles or pieces of art, it’s not so clear cut when it comes to more utilitarian endeavours. There’s a not insignificant part of our community that’s very interested in having AI help out with tedious tasks such as writing code, or designing PCBs; and while the technology is still in its infancy, there’s no question the state of the art is evolving rapidly.
For a practical example we can take a look at the personal projects of two of our own writers. Back in 2023, Dan Maloney had a hell of a time getting ChatGPT to help him design a latch in OpenSCAD. Fast forward to earlier this month, and Kristina Panos convinced it to put together a customized personal library management system with minimal supervision.
We’ve also seen an uptick in submitted projects that utilized AI in some way. Kelsi Davis used a large language model (LLM) to help get Macintosh System 7 running on x86 in just three days, Stable Diffusion provided the imagery for a unique pizza-themed timepiece, Parth Parikh used OpenAI’s Speech API to bring play-by-play commentary to PONG, and Nick Bild used Google Gemini to help turn physical tomes into DIY audio books.
Make no mistake, an over-reliance on AI tools can be dangerous. In the best case, the user is deprived of the opportunity to actually learn the material at hand. In the worst case, you make an LLM-enhanced blunder that costs you time and money. But when used properly, the takeaway seems to be that a competent maker or hacker can leverage these new AI tools to help bring more of their projects across the finish line — and that’s something we’ve got a hard time being against.
Meshtastic Goes Mainstream
Another tech that gained steam this year is Meshtastic. This open source project aims to allow anyone to create an off-grid, decentralized, mesh network with low cost microcontrollers and radio modules. We fell in love with the idea as soon as we heard about it, as did many a hacker. But the project has reached such a level of maturity that it’s starting to overflow into other communities, with the end result being a larger and more capable mesh that benefits everyone.
Part of the appeal is really how ridiculously cheap and easy it is to get started. If you’re starting from absolutely zero, connecting up to an existing mesh network — or creating your own — can cost you as little as $10 USD. But if you’re reading Hackaday, there’s a good chance you’ve already got a supported microcontroller (or 10) laying around, in which case you may just need to spring for the LoRa radio module and wire it up. Add a 3D printed case, and you’re meshin’ with the best of them.
There are turn-key Meshtastic options available for every budget, from beginner to enthusiast.
If you’re OK with trading some money for time, there’s a whole world of ready to go Meshtastic devices available online from places like Amazon, AliExpress, and even Etsy for that personal touch. Fans of the retro aesthetic would be hard pressed to find a more stylish way to get on the grid than the Hacker Pager, and if you joined us in Pasadena this year for Hackaday Supercon, you even got to take home a capable Meshtastic device in the form of the Communicator Badge.
Whether you’re looking for a backup communication network in the event of a natural disaster, want to chat with neighbors without a megacorp snooping on your discussion, or are simply curious about radio communications, Meshtastic is a fantastic project to get involved with. If you haven’t taken the plunge already, point your antenna to the sky and see who’s out there, you might be surprised at what you find.
Arduino’s New Overlord
In terms of headlines, the acquisition of Arduino by Qualcomm was a pretty big one for our community. Many a breathless article was written about what this meant for the future of the company. And things only got more frantic a month later, when the new Arduino lawyers updated the website’s Terms and Conditions.
But you didn’t see any articles about that here on Hackaday. The most interesting part of the whole thing to us was the new Arduino Uno Q: an under $50 USD single-board computer that can run Linux while retaining the classic Uno layout. With the cost of Raspberry Pi hardware steadily increasing over the years, some competition on the lower end of the price spectrum is good for everyone.
The Arduino Uno Q packs enough punch to run Linux.
As for the Qualcomm situation — we’re hackers, not lawyers. Our immediate impression of the new ToS changes was that they only applied to the company’s web services — “The Platform” in the contract — and had no bearing on the core Arduino software and hardware offerings that we’re all familiar with. The company eventually released a blog post saying more or less the same thing: evolving privacy requirements for online services meant they had to codify certain best practices, and their commitment to open source is unwavering.
For now, that’s good enough for us. But the whole debacle does bring to mind a question: if future Arduino software development went closed-source tomorrow, how much of an impact would it really have on the community at this point? Today when somebody talks about doing something with Arduino they are more likely to be talking about the IDE and development environment than one of the company’s microcontroller boards — the licenses for which mean the versions we have now will remain open in perpetuity. The old AVR Arduino code is GPLed, after all, as are the newer cores for microcontrollers like the ESP32 and RP2040, which weren’t written by Arduino anyway. On the software side, we believe that we have nothing to lose.
But Arduino products have also always been open hardware, and we’ve all gained a lot from that. This is where Qualcomm could still upset the applecart, but we don’t see why they would, and they say they won’t. We’ll see in 2026.
The Year of Not-Windows on the Desktop?
The “Year of Linux on the Desktop” is a bit like fusion power, in that no matter how many technical hurdles are cleared, it seems to be perennially just over the horizon. At this point it’s become a meme, so we won’t do the cliché thing and claim that 2025 (or even 2026) is going to finally be the year when Linux breaks out of the server room and becomes a mainstream desktop operating system. But it does seem like something is starting to shift.
That’s due, at least in part, to Microsoft managing to bungle the job so badly with their Windows 11 strategy. In spite of considerable push-back in the tech community over various aspects of the operating system, the Redmond software giant seems hell-bent on getting users upgraded. At the same time, making it a hard requirement that all Windows 11 machines have a Trusted Platform Module means that millions of otherwise perfectly usable computers are left out in the cold.
What we’re left with is a whole lot of folks who either are unwilling, or unable, to run Microsoft’s latest operating system. At the same time desktop Linux has never been more accessible, and thanks in large part to the efforts of Valve, it can now run the majority of popular Windows games. That last bit might not seem terribly exciting to folks in our circles, but historically, the difficulty involved in playing AAA games on Linux has kept many a techie from making the switch.
Does that mean everyone is switching over to Linux? Well, no. Certainly Linux is seeing an influx of new users, but for the average person, it’s more likely they’d switch to Mac or pick up a cheap Chromebook if all they want to do is surf the web and use social media.
Of course, there’s an argument to be made that Chromebook users are technically Linux users, even if they don’t know it. But for that matter, you could say anyone running macOS is a BSD user. In that case, perhaps the “Year of *nix” might actually be nigh.
Grandma is 3D Printing in Color
There was a time when desktop 3D printers were made of laser-cut wood, used literal strings instead of belts, and more often than not, came as a kit you had to assemble with whatever assistance you could scrounge up from message boards and IRC channels — and we liked it that way. A few years later, printers were made out of metal and became more reliable, and within a decade or so you could get something like an Ender 3 for a couple hundred bucks on Amazon that more or less worked out of the box. We figured that was as mainstream as 3D printing was likely to get…but we were very wrong.
A Prusa hotend capable of printing a two-part liquid silicone.
Today 3D printing is approaching a point where the act of downloading a model, slicing it, and manifesting it into physical form has become, dare we say it, mundane. While we’re not always thrilled with the companies that make them and their approach to things that are important to us like repairability, open development, and privacy, we have to admit that the new breed of printers on the market today are damn good at what they do. Features like automatic calibration and filament run-out sensors, once the sort of capabilities you’d only see on eye-wateringly expensive prosumer machines, have become standard equipment.
While it’s not quite at the point where it’s an expected feature, the ability to print in multiple materials and colors is becoming far more common. Pretty much every printer manufacturer has their own approach, and the prices on compatible machines are falling rapidly. We’re even starting to see printers capable of laying down more exotic materials such as silicone.
Desktop 3D printing still hasn’t reached the sort of widespread adoption that all those early investors would have had us believe in the 2000s, where every home would one day have their own Star Trek style personal replicator. But they are arguably approaching the commonality of something like a table saw or drill press — specialized but affordable and reliable tools that act as a force multiplier rather than a tinkerer’s time sink.
Tariffs Take Their Toll
Finally, we couldn’t end an overview of 2025 without at least mentioning the ongoing tariff situation in the United States. While it hasn’t ground DIY electronics to a halt as some might have feared, it’s certainly had an impact.
A tax on imported components is nothing new. We first ran into that back in 2018, and though it was an annoyance, it didn’t have too much of an impact at the hobbyist scale. When an LED costs 20 cents, even a 100% tariff wouldn’t be much of a hit to the wallet at the scale most of us are operating at. Plus there are domestic, or at least non-Chinese, options for some jellybean components. The surplus market can also help here — you can often find great deals on things like partial reels of SMD capacitors and resistors on eBay if you keep an eye out for them.
We’ve heard more complaints about PCB production than anything. After years of being able to get boards made overseas for literal pennies, seeing an import tax added at checkout can be quite a shock. But just like the added tax on components, while annoying, it’s not enough to actually keep folks from ordering. Even with the tariffs, the cost of getting a PCB made at OSH Park is going to be much higher than at any Chinese board house.
Truth be told, if an import tax on Chinese-made PCBs and components resulted in a boom of affordable domestic alternatives, we’d be all over it. The idea that our little hobby boards needed to cross an ocean just to get to us always seemed unsustainable anyway. It wouldn’t even have to be domestic, there’s an opportunity for countries with a lower import tariff to step in. Instead of having our boards made in China, why not India or Mexico?
But unfortunately, the real-world is more complex than that. Building up those capabilities, either at home or abroad, takes time and money. So while we’d love to see this situation lead to greater competition, we’ve got a feeling that the end result is just more money out of our pockets.
Thanks for Another Year of Hacks
One thing that absolutely didn’t change in 2025 was you — thanks to everyone that makes Hackaday part of their daily routine, we’ve been able to keep the lights on for another year. Everyone here knows how incredibly fortunate we are to have this opportunity, and your ongoing support is never taken for granted.
We’d love to hear what you thought the biggest stories or trends of 2025 were, good and bad. Let us know what lessons you’ll be taking with you into 2026 down below in the comments.
hackaday.com/2026/01/05/2025-a…
Moving From Windows To FreeBSD As The Linux Chaos Alternative
Back in the innocent days of Windows 98 SE, I nearly switched to Linux on account of how satisfied I was with my Windows experience. This started with the Year of the Linux Desktop in 1999 that sta…Hackaday
What will happen in tech policy during 2026?
IT'S MONDAY, AND THIS IS DIGITAL POLITICS. I'm Mark Scott, and Happy New Year!
As I plan for the year ahead, I'm looking to arrange more in-person events — mostly because it's great to connect with people in real life. If that sounds like something you'd be interested in, please fill out this survey to help my planning.
Just as the last newsletter looked back over what happened in 2025, this first edition of the new year focuses on how global tech policy will evolve over the next 12 months. I've skipped the clichés — 'AI will consume everything,' 'Washington and Brussels won't get along' — to highlight macro trends that, imo, will underpin what will likely be a bumpy road ahead.
Some of my predictions will be wrong. That's OK — no one's perfect.
What follows is my best guess at the topics which will dominate 2026 at a time when geopolitics, technology and economic competitiveness have become intertwined like never before.
Let's get started:
GitHub Disables Rockchip’s Linux MPP Repository After DMCA Request
Recently GitHub disabled the Rockchip Linux MPP repository, following a DMCA takedown request from the FFmpeg team. As of writing the affected repository remains unavailable. At the core of this issue is the Rockchip MPP framework, which provides hardware-accelerated video operations on Rockchip SoCs. Much of the code for this was lifted verbatim from FFmpeg, with the allegation being that the original copyright notices and author attributions were removed in the process. The Rockchip MPP framework was further re-licensed from LGPL 2.1 to the Apache license.
Most egregious of all is perhaps that the FFmpeg team privately contacted Rockchip about this nearly two years ago, with clearly no action taken since. Thus FFmpeg demands that Rockchip either undo these actions that violate the LGPL, or remove all infringing files.
This news and further context is also covered by [Brodie Robertson] in a video. What’s interesting is that Rockchip, in public communications and in GitHub issues, is clearly aware of this license issue, but seems to defer dealing with it until some undefined point in the future. Clearly that was the wrong choice by Rockchip, though it remains a major question what will happen next. [Brodie] speculates that Rockchip will keep ignoring the issue, but is hopeful that he’ll be proven wrong.
Unfortunately, these sorts of long-standing license violations aren’t uncommon in the open source world.
youtube.com/embed/cYvvYPth1fo?…
Bicycle Tows 15,000 Pounds
An old joke in physics is that of the “spherical cow”, poking fun at some of the assumptions physicists make when tackling a new problem. Simplifying a problem like this can make its fundamentals easier to understand, but those assumptions are quickly challenged when applied to real-world problems. That is exactly what happened when [Seth] from Berm Peak attempted to tow a huge trailer with a bicycle: while in theory the bike just needs a big enough gear ratio, he quickly found other problems with the setup that had to be solved.
[Seth] decided on a tandem bike for this build. Not only does the second rider add power, but the longer wheelbase makes it less likely that the tongue weight of the trailer will lift the front wheel off the ground. It was modified with a Class 3 trailer hitch, as well as a battery to activate the electric trailer brakes in case of an emergency. But after hooking the trailer up the first time the problems started cropping up. At such a high gear ratio the bike is very slow and hard to keep on a straight line. Some large, custom training wheels were added between the riders to keep it stable, but even then the huge weight still caused problems with the chain and even damaged the bike’s freehub at one point.
Eventually, though, [Berm Peak] was able to flat tow a Ford F-150 Lightning pulling a trailer a few yards up a hill, at least demonstrating this proof of concept. It might be the absolute most a bicycle can tow without help from an electric motor, although real-world applications for something like this are likely a bit limited. He’s been doing some other bicycle-based projects with more utility lately, including a few where he brings abandoned rental e-bikes back to life by removing proprietary components.
youtube.com/embed/8hDQXP3xSj4?…
Print Pixel Art to a Floppy Disk
Here at Hackaday we love floppy disks. While they are by no means a practical or useful means of storing data in the age of solid state storage, there is something special about the little floppy disc of magnetic film inside that iconic plastic case. That’s why we were so excited to see the tool [dbalsom] developed for printing pixel art in a floppy’s track timing diagrams!
Floppy timing diagrams are usually used to analyze the quality of an individual disk. Such a diagram represents the flux transitions within a single floppy track as a 2D graph. But it’s also perfectly possible to “paint” images on a floppy this way. Granted, you can’t see these images without printing out a timing diagram, but if you’re painting images onto a floppy, that’s probably the point.
This is where the pbm2track tool comes in handy! It takes bitmap images and encodes them onto floppy images for emulators, or onto actual floppies. The results are quite excellent, with near-perfect recreation in floppy graphical views. The results on real floppies are also recognizable as the original image. The concept is similar to a previous tool [dbalsom] created, PNG2disk.
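To give a feel for the idea, here’s a toy Python sketch — not pbm2track’s actual encoding, which has to respect the floppy’s real modulation scheme — that maps each image row to a track and each pixel to a flux-transition interval, so a timing plot of the track would redraw the picture. The interval values are made-up round numbers.

```python
# Toy sketch of the concept behind pbm2track: dark pixels get a longer
# flux-transition interval than light ones, so plotting interval length
# against position along the track reproduces the image row by row.
BASE_NS = 2_000   # hypothetical interval for a "white" cell, nanoseconds
DARK_NS = 4_000   # hypothetical interval for a "black" cell

def row_to_intervals(row):
    """Convert one row of 0/1 pixels to flux-transition intervals."""
    return [DARK_NS if px else BASE_NS for px in row]

def image_to_tracks(image):
    """One track's worth of intervals per image row."""
    return [row_to_intervals(row) for row in image]

# A tiny 5x5 "X" sprite (1 = dark pixel).
sprite = [
    [1, 0, 0, 0, 1],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [1, 0, 0, 0, 1],
]
tracks = image_to_tracks(sprite)
print(tracks[0])  # intervals for the first track
```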
If you too love the nearly forgotten physical likeness of the save button, make sure to check out this modern Linux on a floppy hack next!
Thanks [gloriouscow] for the tip!
Modifying a QingPing Air Quality Monitor for Local MQTT Access
The QingPing Air Quality Monitor 2 is an Android-based device that not only features a touch screen with the current air quality statistics of the room, but also includes an MQTT interface that normally is used in combination with the QingPing mobile app and the Xiaomi IoT ecosystem. Changing it to report to a local MQTT server instead for integration with e.g. Home Assistant can be done in an official way that still requires creating a cloud account, or you can just do it yourself via an ADB shell and some file modifications as [ea] has done.
By default these devices do not enumerate when you connect a computer to their USB-C port, but that’s easily resolved by enabling Android’s developer mode. This involves seven taps on the Device Name line in the About section of settings. After this you can enter Developer Options to toggle on Debug Mode and Adbd Debugging, which creates the option to connect to the device via USB with ADB and open up a shell with adb shell.
From there you can kill off the QingSnow2 app and the watchdog.sh script that’s running in the background, disable IPv6, and edit /etc/hosts to redirect all the standard cloud server calls to a local server. Apparently there is even SSH access at this point, with root access and the password rockchip. The MQTT configuration is found in settings.ini under /data/etc/, which is used by the QingPing app, so editing it redirects all of that traffic.
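As a rough illustration of the settings.ini edit — the [mqtt] section and the host/port key names here are our assumptions for illustration, so check the actual file pulled from your device first — a small Python helper could rewrite the broker address before pushing the file back over ADB:

```python
import configparser
import io

def point_mqtt_at_local(ini_text, host="192.168.1.10", port=1883):
    """Rewrite the broker address in a QingPing-style settings.ini.
    Section and key names are guesses, not the device's actual schema."""
    cfg = configparser.ConfigParser()
    cfg.read_string(ini_text)
    if "mqtt" in cfg:
        cfg["mqtt"]["host"] = host
        cfg["mqtt"]["port"] = str(port)
    buf = io.StringIO()
    cfg.write(buf)
    return buf.getvalue()

# Hypothetical example of what the relevant section might look like.
example = "[mqtt]\nhost = mqtt.qingping.example\nport = 8883\n"
print(point_mqtt_at_local(example))
```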
Of course, the device also queries a remote server for weather data for your location, so if you modify this you have to provide a proxy, which [ea] did with a simple MQTT server that’s found along with other files on the GitHub project page.
Sleeping Rough in Alaska with a USPS Cargo Bike
Out of all 49 beautiful US states (plus New Jersey), the one you’d probably least want to camp outside in during the winter is arguably Alaska. If you were to spend a night camping out in the Alaskan winter, your first choice of shelter almost certainly wouldn’t be a USPS electric cargo trike, but over on YouTube [Matt Spears] shows that it’s not that hard to make a lovely little camper out of the mail bike.
We’re not sure how much use these sorts of cargo trikes get in Alaska, but [Matt] seems to have acquired this one surplus after an entirely-predictable crash took one of the mirrors off. A delta configuration trike — single wheel in front — is tippy at the best of times, but the high center of gravity you’d get from loading the rear with mail just makes it worse. That evidently did not deter the United States Postal Service, and it didn’t deter [Matt] either.
His conversion is rather minimal: to turn the cargo compartment into a camper, he only adds a few lights, a latch on the inside of the rear door, and a wood-burning stove for heat. Rather than have heavy insulation shrink the already-small cargo compartment, [Matt] opts to insulate himself with a pile of warm sleeping bags. Some zip-tie tire chains even let him get the bike moving (slowly) in a winter storm that he claims got his truck stuck.
While it might not be a practical winter vehicle, at least on un-plowed mountain roads, starting with an electric-assist cargo trike Uncle Sam already paid for represented a huge cost and time savings vs starting from scratch like this teardrop bike camper we featured a while back. While not as luxurious, it seems more practical for off-roading than another electric RV we’ve seen.
youtube.com/embed/s9MqbLbFRDQ?…
Ray Marching in Excel
3D graphics are made up of little more than very complicated math. With enough time, you could probably march a ray through a scene by hand. Or, you could set up Excel to do it for you!
Ray marching is a form of ray tracing, where a ray is stepped along based on how close it is to the nearest surface. By taking advantage of signed distance functions, such an algorithm can be quite effective, and in some instances much more efficient than traditional ray tracing algorithms. But the fact that ray marching is so mathematically well defined is probably why [ExcelTABLE] used it to make a ray traced game in Excel.
Under the hood, the ray marching works by casting a ray out from the camera and measuring its distance from a set of three dimensional functions. If that distance is below a certain value, this is considered a surface hit. On surface hits, a simple normal shader computes pixel brightness. This is then rendered out by variable formatting in the cells of the spreadsheet.
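The loop [ExcelTABLE] builds out of spreadsheet formulas is easier to see in a few lines of conventional code. Here is a minimal Python sketch of the same idea with a single sphere as the signed distance function; the scene, step limits, and epsilon values are our picks, not the spreadsheet’s.

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 3.0), radius=1.0):
    """Signed distance from point p to a sphere: negative inside,
    zero on the surface, positive outside."""
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return math.sqrt(dx*dx + dy*dy + dz*dz) - radius

def march(origin, direction, max_steps=64, hit_eps=1e-3, max_dist=20.0):
    """Step the ray along by the SDF value until we hit or give up."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(origin[i] + t * direction[i] for i in range(3))
        d = sphere_sdf(p)
        if d < hit_eps:
            return t       # hit: distance travelled along the ray
        t += d             # safe to step this far without overshooting
        if t > max_dist:
            break
    return None            # miss

# A ray fired straight down +z hits the unit sphere at t = 2.
print(march((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))
```

The Excel version does the same thing per cell, then maps the surface-hit brightness onto cell formatting instead of calling print.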
For those of you following along at home, the tutorial should work just fine in any modern spreadsheet software including Google Sheets and LibreOffice Calc. It also provides a great explanation of the math and concepts of ray marching, so is worth a read regardless of your opinions on Excel’s status as a so-called “programming language.”
This is not the first time we have come across a ray tracing tutorial. If computer graphics are your thing, make sure to check out this ray tracing in a weekend tutorial next!
Thanks [Niklas] for the tip!
Exploring Nintendo 64DD Code Remnants in Ocarina of Time
What if you took a Nintendo 64 cartridge-based game and allowed it to also use a large capacity magnetic disc format alongside it? This was the premise of the Nintendo 64DD peripheral, and the topic of a recent video by [Skawo] in which an archaeological code dig is performed to see what traces of the abandoned product may remain.
The 64DD slots into the bottom of the console where the peripheral connector is located, following which the console can read and write the magnetic discs of the 64DD. At 64 MB it matched the cartridge in storage capacity, while also being writable unlike cartridges or CDs. It followed on previous formats like the Famicom Disk System.
For 1998’s Game of the Year title The Legend of Zelda: Ocarina of Time such a 64DD-based expansion was worked on for a while before being cancelled along with the 64DD. With this Zelda game now decompiled, its source code has been shown to still be full of 64DD-related code that [Skawo] takes us through in the video.
The Nintendo 64DD discs resembled ZIP discs. (Credit: Evan-Amos, Wikimedia)
As is typical for CD- and magnetic storage formats like these 64DD discs, their access times and transfer speeds are atrociously slow next to a cartridge’s mask ROM, which clearly left the developers scrambling to find some way to use the 64DD as an actual enhancement. Considering that the 64DD never was released outside of Japan and had a very short life, it would seem apparent that, barring PlayStation-level compromises, disc formats just weren’t a good match for the console.
The interface with the 64DD in the game’s code gives some idea of what the developers had in mind, which mostly consisted of swapping on-cartridge resources like dungeon maps with different ones. Ultimately this content did make its way into a commercial release, in the form of the Master Quest option on the game’s re-release on the GameCube.
Although this doesn’t enable features once envisioned, such as tracking the player’s entire route and storing permanent map changes during gameplay, it at least gives us a glimpse of what the expansion game on the 64DD could have looked like.
youtube.com/embed/2xyk-EozojY?…
Top image: N64 with stacked 64DD, credit: Evan-Amos
Are We Ready for AR Smart Glasses Yet?
In a recent article for IEEE Spectrum, [Alfred Poor] asks the question: what do consumers really want in smart glasses? And are you finally ready to hang a computer screen on your face?
[Alfred] says that since Google Glass was introduced in 2012, smart glasses haven’t yet found their compelling use-case. It looks like while virtual reality (VR) might be out, augmented reality (AR) might be in. And of course now we have higher levels of “AI” in the mix, whatever that means.
According to the article in the present day there are two competing visions of what smart glasses might be: we have One Pro from Xreal in Beijing, and AI Glasses from Halliday in Singapore, each representing different design concepts evolving in today’s market. The article goes into further detail. The video below the break is promotional material from Halliday showing people’s reactions to their AI Glasses product.
[Alfred] talks with Louis Rosenberg, CEO and chief scientist of Unanimous AI, who says he believes “that within five years, immersive AI-powered glasses will replace the smartphone as the primary mobile device in our digital lives.” Predicting the future is hard, but what do you think? Sound off in the comments!
All in all smart glasses remain a hot topic. If you’d like to read more check out our recent articles Making Glasses That Detect Smartglasses and Mentra Brings Open Smart Glasses OS With Cross-Compat.
youtube.com/embed/C0Iwq2auR_g?…
Quote Printer Keeps Receipts
In the world of social media, “keeping receipts” refers to the practice of storing evidence that may come in handy for a callout post at a later date. For [Teddy Warner], though, it’s more applicable to a little printer he whipped up to record the very best banter from his cadre of friends.
[Teddy’s] idea was simple. He hoped to capture amusing or interesting quotes his friends made in his apartment, and store them in a more permanent form. He also wanted to allow his friends to do the same. To that end, he whipped up a small locally-hosted web interface which his friends could use to record quotes, along with proper attribution. Hosted on a Raspberry Pi 5, the web interface can then truck those quotes out to an 80 mm thermal receipt printer. The anecdote, epithet, or witticism is then spat out with a timestamp in a format roughly approximating a receipt you might get from your local gas station. What’s neat is that [Teddy] was also able to install the entire system within the housing of the Miemieyo receipt printer, by 3D printing a custom base that could house the Pi and a suitable power supply.
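None of [Teddy’s] code is reproduced here, but the receipt-formatting half of such a system is easy to sketch. In this hypothetical Python snippet, the field layout and the 32-character line width are our assumptions rather than [Teddy’s] actual format:

```python
import textwrap
from datetime import datetime

LINE_WIDTH = 32  # a common character width for 80 mm thermal printers

def format_quote(quote: str, author: str, when: datetime) -> str:
    """Render a quote as receipt-style text (the layout is our invention)."""
    lines = ["QUOTE RECEIPT".center(LINE_WIDTH), "-" * LINE_WIDTH]
    lines += textwrap.wrap(f'"{quote}"', LINE_WIDTH)   # wrap long banter
    lines.append(f"-- {author}".rjust(LINE_WIDTH))     # attribution
    lines.append(when.strftime("%Y-%m-%d %H:%M").center(LINE_WIDTH))
    lines.append("-" * LINE_WIDTH)
    return "\n".join(lines)

receipt = format_quote("I invented the Malaga bit", "Gerald",
                       datetime(2025, 12, 26, 19, 30))
print(receipt)
```

In the real build, the resulting text would then be pushed to the 80 mm printer over USB or serial, for example with a library like python-escpos.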
Beyond being fun, this system also serves a critical purpose. It creates a paper trail, such that in-jokes, rumors, and insults alike can be traced back to their originating source. No more can Crazy Terry claim to have invented “the Malaga bit,” because the server and the receipt clearly log that Gerald dropped it first at the Boxing Day do.
We’ve seen similar projects before, too. There’s just something neat about holding a bit of paper in your hand.
youtube.com/embed/F5_00bj8dHo?…
FPGA Dev Kit Unofficially Brings MSX Standard Back
In the 1980s there were an incredible number of personal computers of all shapes, sizes, and operating system types, and there was very little interoperability. Unlike today’s Windows-Mac duopoly, this era was much more of a free-for-all, but that didn’t mean companies like Microsoft weren’t trying to clean up all of this mess. In 1983 they introduced the MSX standard for computers, hoping to coalesce users around a single design. It eventually became very successful in Japan and saw some use in a few other places, but is now relegated to the dustbin of history; a new FPGA kit, however, unofficially supports this standard.
The kit is called the OneChip Book and, unlike most FPGA kits, includes essentially everything needed to get it up and running, including screen, keyboard, and I/O, all in a pre-built laptop case. At its core it’s just that: an FPGA kit. But its original intent was to recreate this old 80s computer standard with modern hardware. The only problem is they never asked for permission, and their plans were quickly quashed. The development kit is still available, though, and [electricadventures] goes through the steps to get this computer set up to emulate this unofficially-supported retro spec. He’s also able to get original MSX cartridges running on it when everything is said and done.
Although MSX is relatively unknown in North America and Western Europe, it remains a fairly popular platform for retro computing enthusiasts in much of the rest of the world. We’ve seen a few similar projects related to this computer standard like this MSX-inspired cyberdeck design, but also others that bring new hardware to this old platform.
youtube.com/embed/Iy7R29bjuJ8?…
Apollo Lunar Module Thrust Meter Lives Again
[Mike Stewart] powers up a thrust meter from an Apollo lunar module. This bit of kit passed inspection on September 25, 1969. Fortunately [Mike] was able to dig up some old documentation which included the pin numbers. Score! It’s fun to see the various revisions this humble meter went through. Some of the latest revisions are there to address an issue where there was no indication upon failure, so they wired in a relay which could flip a lamp indicator if the device lost power.
This examination of the lunar module’s thrust meter is a good example of how a system’s complexity can quickly get out of hand. Rather than one pin there are two pins to indicate auto or manual thrust, each working with different voltage levels; the manual thrust is given directly, but the auto thrust is only the part of the thrust that gets added to a baseline thrust, so they need to be handled differently, requiring extra logic and wiring for biasing the thrust meter when appropriate. The video goes into further detail. Toward the end of the video [Mike] shows us what the meter’s backlights look like when powered.
If you’re interested in Apollo mission technology be sure to check out Don Eyles Walks Us Through The Lunar Module Source Code.
youtube.com/embed/H3bxe7gynQk?…
Teardown of Boeing 777 Cabin Pressure Control System
Modern passenger airliners are essentially tubes-with-wings; they just happen to be tubes that are stuffed full of fancy electronics. Some of the most important of these are related to keeping the bits of the tube with humans inside at temperatures and pressures that keep them alive and happy. Case in point: the Boeing 777, of which [Michel] of Le Labo de Michel on YouTube recently obtained the Cabin Pressure Control System (CPCS) for a teardown.
The crucial parts of the system are the two Nord-Micro C0002 piezoresistive pressure transducers, which measure the pressure inside the aircraft. These sensors, one of which is marked as ‘backup’, are read out by multiple ADCs connected to a couple of FPGAs. The system further has an ARINC 429 transceiver for communicating with the other avionics components. Naturally, the multiple PCBs are conformally coated and fitted with vibration-proof interconnects.
Although it may seem like a lot of hardware just to measure air pressure with, this kind of hardware is meant to work without errors over the span of years, meaning significant amounts of redundancy and error checking have to be built in. Tragic accidents such as Helios Airways Flight 522, involving a 737-300, highlight the importance of these systems. Although in that case human error had disabled the cabin pressurization, it shows just how hard it can be to detect hypoxia before it is too late.
youtube.com/embed/rsCxEcR-AYE?…
The Setun Was a Ternary Computer from the USSR in 1958
[Codeolences] tells us about the FORBIDDEN Soviet Computer That Defied Binary Logic. The Setun, the world’s first ternary computer, was developed at Moscow State University in 1958. Its troubled and short-lived history is covered in the video. The machine itself uses “trits” (ternary digits) instead of “bits” (binary digits).
When your digits have three discrete values, there are many ways of assigning meaning to each state; the Setun uses a system known as balanced ternary, where each digit can be -1, 0, or 1, and which otherwise uses a place-value system in the normal way.
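To get a feel for balanced ternary, here’s a short Python sketch (our own illustration, not anything from the Setun) converting integers to and from balanced-ternary digit lists:

```python
def to_balanced_ternary(n: int) -> list:
    """Return digits (most significant first), each -1, 0, or 1."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3          # remainder 0, 1, or 2
        if r == 2:
            r = -1         # a 2 becomes -1, carrying 1 into the next place
        digits.append(r)
        n = (n - r) // 3
    return digits[::-1]

def from_balanced_ternary(digits: list) -> int:
    """Evaluate the place-value sum of powers of three."""
    value = 0
    for d in digits:
        value = value * 3 + d
    return value

print(to_balanced_ternary(5))  # [1, -1, -1], i.e. 9 - 3 - 1
```

A neat property of balanced ternary is that negating a number is just flipping the sign of every digit, so no separate sign bit or two’s-complement trick is needed.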
An interesting factoid that comes up in the video is that base 3 (also known as radix 3) is the most efficient way to represent numbers, because three is the closest integer to the base of the natural logarithm, e ≈ 2.718.
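That efficiency claim is about radix economy: representing a number N takes about log_b(N) digits of b states each, so the hardware cost scales with b/ln(b), which is minimized at b = e. A quick Python check (our own back-of-envelope, not from the video):

```python
import math

def radix_cost(base: int) -> float:
    """Relative cost per number: digit count times states per digit.

    Representing N in base b needs about log_b(N) digit positions, each with
    b possible states, so cost scales as b / ln(b); the ln(N) factor is the
    same for every base and cancels out of the comparison."""
    return base / math.log(base)

for b in (2, 3, 4, 10):
    print(b, round(radix_cost(b), 3))
# base 3 has the lowest cost of any integer base, since e ≈ 2.718
```

Bases 2 and 4 tie exactly (4/ln 4 = 2/ln 2), which is a fun corollary of the same formula.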
If you’re interested to know more about ternary computing check out There Are 10 Kinds Of Computers In The World and Building The First Ternary Microprocessor.
youtube.com/embed/4vwOJE0Dq38?…
Pickle Diodes, Asymmetric Jacobs Ladders, and Other AC Surprises
While we’re 100 years past Edison’s fear, uncertainty, and doubt campaign, the fact of the matter is that DC is a bit easier to wrap one’s head around. It’s just so honest in its directness. AC, though? It can be a little shifty, and that results in some unexpected behaviors, as seen in this video from [The Action Lab].
He starts off with a very relatable observation: have you ever noticed that when you plug in a pickle, only half of it lights up? What’s up with that? Well, it’s related to the asymmetry he sees on his Jacobs ladder that has one side grow hotter than the other. In fact, it goes back to something welders who use DC know about well: the Debye sheath.
The arc of a welder, or a Jacobs ladder, or a pickle lamp is a plasma: ions and free electrons. Whichever electrode is negative is going to repel the plasma’s electrons, resulting in a sheath of positive charge around it. The positively charged ions in the Debye sheath then accelerate into that electrode, and voila! Heating. That’s why it matters which way the current goes when you’re welding.
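For the curious, the characteristic thickness of that positively charged layer is the Debye length; the standard plasma-physics expression (not something derived in the video) is

```latex
\lambda_D = \sqrt{\frac{\varepsilon_0 \, k_B \, T_e}{n_e \, e^2}}
```

where T_e is the electron temperature, n_e the electron density, and e the elementary charge; hotter, more tenuous plasmas have thicker sheaths.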
With DC, that makes sense. In AC, well — one side starts as negatively charged, and that’s all it takes. It heats preferentially by creating a temporary Debye sheath. The hotter electrode is going to preferentially give off electrons compared to its colder twin — which amplifies the effect every time it swings back to negative. It seems like there’s no way to get a pure AC waveform across a plasma; there’s a positive feedback loop at whatever electrode starts negative that wants to introduce a DC bias. That’s most dramatically demonstrated with a pickle: it lights up on the preferentially heated side, showing the DC bias. Technically, that makes the infamous electric pickle a diode. We suspect the same thing would happen in a hot dog, which gives us the idea for the tastiest bridge rectifier. Nobody tell OSHA.
[The Action Lab] explains in more detail in his video, and demonstrates with a ring-shaped electrode how geometry can introduce its own bias. For those of us who spend most of our time slinging solder in low-voltage DC applications, this sort of thing is fascinating. It might be old hat to others here; if the science of a plain Jacobs ladder no longer excites you, maybe you’d find it more electrifying built into a blade.
youtube.com/embed/_59b75Vql38?…
Printing in Metal with DIY SLM
An accessible 3D printer for metals has been the holy grail of amateur printer builders since at least the beginning of the RepRap project, but as tends to be the case with holy grails, it’s proven stubbornly elusive. If you have the resources to build it, though, it’s possible to replicate the professional approach with a selective laser melting (SLM) printer, such as the one [Travis Mitchell] built (this is a playlist of nine videos, but if you want to see the final results, the last video is embedded below).
Most of the playlist shows the process of physically constructing the machine, with only the last two videos getting into testing. The heart of the printer is a 500 Watt fiber laser and a galvo scan head, which account for most of the cost of the final machine. The print chamber has to be purged of oxygen with shielding gas, so [Travis] minimized the volume to reduce the amount of argon needed. The scan head therefore isn’t located in the chamber, but shines down into it through a window in the chamber’s roof. A set of repurposed industrial servo motors raises and lowers the two pistons which form the build plate and powder dispenser, and another servo drives the recoater blade which smooths on another layer of metal powder after each layer.
As with any 3D printer, getting good first-layer adhesion proved troublesome, since too much power caused the powder to melt and clump together, and too little could result in incomplete fusion. Making sure the laser was in focus improved things significantly, though heat management and consequent warping remained a challenge. The recoater blade was originally made out of printed plastic, with a silicone cord along the edge. Scraping along hot fused metal in the early tests damaged it, so [Travis] replaced it with a stainless steel blade, which gave much more consistent performance. The final results looked extremely promising, though [Travis] notes that there is still room for redesign and improvement.
This printer joins the very few other DIY SLM machines we’ve seen, though there is an amazingly broad range of other creative ideas for homemade metal printers, from electrochemical printers to those that use precise powder placement.
youtube.com/embed/MPXp3hpsdjA?…
Zork Running on 4-Bit Intel Computer
Before DOOM would run on any computing system ever produced, and indeed before it even ran on its first computer, the game that would run on any computer of the pre-DOOM era was Zork. This was a text-based adventure game first published in the late 70s that could run on a number of platforms thanks to a virtual machine that interpreted the game code. This let the programmers write a new VM for each platform rather than porting the game every time. [smbakeryt] wanted to see how far he could push this design and got the classic game running on one of the oldest computers ever produced.
The processor in question is the ubiquitous Intel 4004, the first commercially available general-purpose microprocessor. This was a four-bit machine, and it predates the release of Zork by about eight years. As discussed earlier, though, the only thing needed to get Zork to run on any machine is a Z-machine for that platform, so [smbakeryt] got to work. He’s working on a Heathkit H9 terminal, and the main limitation here is the amount of RAM needed to run the game. He was able to extend the address bus to increase the available memory in hardware, but getting the Z-machine running in software took some effort as well. There are a number of layers of software abstraction here that are a bit surprising for 70s-era computing, but which make it an extremely interesting challenge and project.
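The Z-machine itself is a full-featured VM with its own instruction set, but the portability trick it relies on can be illustrated with a toy. In this Python sketch (entirely our own illustration, not real Z-code), only the small interpreter loop has to be rewritten for each platform, while the "game" bytecode stays identical everywhere:

```python
# Toy stack-based VM: opcodes are (name, arg) tuples. Porting the "game"
# to a new platform only means reimplementing this small loop there.
def run(program):
    stack, output = [], []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "PRINT":
            output.append(str(stack.pop()))
    return " ".join(output)

# The same "game" bytecode runs on any host that implements run().
game = [("PUSH", 2), ("PUSH", 40), ("ADD", None), ("PRINT", None)]
print(run(game))  # 42
```

Infocom’s real interpreter additionally handled text decoding, objects, and dictionaries, but the per-platform effort is the same in spirit: port the loop, not the game.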
As far as [smbakeryt]’s goal of finding the “least amount of computer” that would play Zork, we’d have a hard time thinking of anything predating the 4004 that would have any reasonable user experience, but we’d always encourage others to challenge this thought and [smbakeryt]’s milestone. Similarly, DOOM has a history of running on machines far below the original recommended minimum system requirements, and one of our favorites was getting it to run on the NES.
youtube.com/embed/VcTQyA80Apg?…
Benchmarking Windows Against Itself, from Windows XP to Windows 11
Despite faster CPUs, RAM and storage, today’s Windows experience doesn’t feel noticeably different from back in the 2000s when XP and later Windows 7 ruled the roost. To quantify this feeling, [TrigrZolt] decided to run a series of benchmarks on a range of Windows versions.
Covering Windows XP, Vista, 7, 8.1, 10 and 11, the Pro version of each with the latest service packs and updates was installed on the same laptop: a Lenovo ThinkPad X220. It features an Intel i5 2520M CPU, 8 GB of RAM, built-in Intel HD Graphics 3000 and a 256 GB HDD.
For start-up, Windows 8.1 won the race, probably thanks to its Fast Boot feature, while Windows 11 came in dead last: it showed the desktop but struggled to show the task bar. Windows XP had the smallest install size and the lowest idle RAM usage at 800 MB, versus 3.3 GB for last-place Windows 11.
Using the Chrome-based Supermium browser, memory management was tested, with XP performing as poorly as Windows 11, while Windows 7 and 8.1 took home the gold at over two hundred tabs open before hitting the total RAM usage limit of 5 GB. That XP performed so poorly was, however, due to an issue with virtual memory rather than hitting the RAM limit, which means that Windows 11 is the real dunce here.
This is a pattern that keeps repeating: Windows 11 was last in the battery test, took longer to render a video project in OpenShot, took its sweet time opening the File Explorer window, and opening built-in applications like MS Paint left enough time to fetch a fresh cup of coffee. Not to mention Windows 11 taking the longest to open websites and scoring worst of all in single-threaded CPU-Z.
Much seems to be due to the new code in Windows 11, as Microsoft has opted to start doing major rewrites since Windows 7, hitting a crescendo with Windows 11. Although there’s the unhelpful fact that Windows 11 by default encrypts the storage with the very slow software-based BitLocker, its massive RAM usage and general sluggishness are such a big deal that even Microsoft has acknowledged this and added workarounds for the slow File Explorer in Windows 11 by preloading components into RAM.
All of this appears to be part of a wider trend in software development, where ever more resources are used because developers target the latest hardware, and performance increasingly takes a backseat to abstractions and indirections that effectively add bloat and latency.
youtube.com/embed/7VZJO-hOT4c?…
A Steam Machine Clone For An Indeterminate but Possibly Low Cost
For various reasons, crypto mining has fallen by the wayside in recent years. Partially because it was never useful other than as a speculative investment, and partially because other speculative investments have been more popular lately, there are all kinds of old mining hardware available at bargain prices. One of those is the Asrock AMD BC250, which is essentially a cut-down PlayStation 5, but which has almost everything built into it that a gaming PC would need to run Steam, and [ETA PRIME] shows us how to get this system set up.
The first steps are to provide the computer with power, an SSD, and a fan for cooling. It’s meant to live in a server rack, so this part at least is pretty straightforward. After getting it powered up there are a few changes to make in the BIOS, mostly related to memory management. [ETA PRIME] is using Bazzite as an operating system, which helps to get games up and running easily. It plays modern games and even AAA titles at respectable resolutions and framerates almost out-of-the-box, which perhaps shouldn’t be surprising, since this APU has a six-core Zen 2 processor with a fairly powerful RDNA2 graphics card, all on one board.
It’s worth noting that this build is a few weeks old now, and the video has gotten popular enough that the BC250 cards that [ETA PRIME] was able to find for $100 are reported to be much more expensive now. Still, though, even at double or triple the price this might still be an attractive price point for a self-contained, fun, small computer that lets you game relatively easily and resembles the Steam Machine in concept. There are plenty of other builds based on old mining hardware as well, so don’t limit yourself to this one popular piece of hardware. This old mining rig, for example, made an excellent media server.
youtube.com/embed/q_CxcbS5HI8?…
Qron0b: a Minimalist, Low-Power BCD Wristwatch
Over the decades we have seen many DIY clocks and wristwatches presented, but few are as likely to either draw in the crowds or get you quietly snickered at behind your back as a binary watch of some description. A wristwatch like [qewer]’s qron0b project, which uses BCD encoding to display the current time, is among our rarer project types here, with us having to go all the way back to 2018 for a similar project as well as a BCD desk clock.
As is typical, a single CR2032 coin cell powers the entire PCB, with an ATtiny24A or compatible as the MCU, a DS1302 RTC and the requisite 4×4 LED matrix to display the hours and minutes. Technically three LEDs are unneeded here, but it looks nicely symmetrical this way, and the extra LEDs can be used for other tasks as the firmware is expanded from the current setting and reading of the time.
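The display logic of such a watch is easy to sketch. This Python snippet (our own illustration of BCD timekeeping, not [qewer]’s actual firmware) splits the time into four decimal digits and expands each into the four bits of one matrix column:

```python
def time_to_bcd_columns(hours: int, minutes: int):
    """Return four columns of 4 bits each (LSB first): HH:MM as BCD."""
    columns = []
    for digit in (hours // 10, hours % 10, minutes // 10, minutes % 10):
        columns.append([(digit >> bit) & 1 for bit in range(4)])
    return columns

# 13:37 -> decimal digits 1, 3, 3, 7, one column of LEDs per digit
for col in time_to_bcd_columns(13, 37):
    print(col)
```

Note the digit ranges: the hours tens digit never exceeds 2 and the minutes tens digit never exceeds 5, so only 13 of the 16 LEDs are strictly required, which is where the three spare LEDs come from.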
The AVR C firmware can be found in the above linked GitHub repository, along with the KiCad PCB project and FreeCAD design files for the watch body. The body accepts a 22 mm GT2/GT3-style watch strap to complete the assembly. With a single CR2032 you’re assured of at least a few months of runtime.
Adding Solar Power to an Electric Tractor
In my country, we have a saying: the sun is a deadly lazer. Well, it’s not so much a folk saying as a meme, and not so much in one country as “the internet”. In any case, [LiamTronix] was feeling those cancer rays this harvest season when running his electric tractor, and realized that, since he’s already charging it with ground-mounted solar panels anyway, if he’s going to build a roof for his ride, he might as well make it charge the batteries.
Another bonus is safety: the old Massey-Ferguson at the heart of the electric tractor build didn’t come with any rollover protection from the factory back in the 1960s. Since having however many tons of tractor roll onto you was bad enough before it got a big hefty battery pack, we heartily approve of including a roll cage in this build. Speaking of battery packs, he’s taking this chance to upgrade to a larger LiFePO4 pack from the Li-ion pack he installed when we first featured this conversion in 2024.
Atop the new roll cage, and above the new battery, [Liam] installed four second-hand 225 W solar panels. Since that’s under 1 kW even if the panels haven’t degraded, the tractor isn’t going to be getting much charge as it runs. In the northern winter, [Liam] is only able to pull 80 W from the set. That’s not getting much work done, but who wants a tractor without a cab or heater when it’s below freezing? In the summer it’s a much better story, and [Liam] estimates that the roof-mounted panels should provide all of the energy needed to run the tractor for the couple of hours a day he expects to use it.
If you’re wondering how practical all this is, yes, it can farm — we covered [Liam] putting the project through its paces in early 2025.
Jailbreaking the Amazon Echo Show
As locked-down as the Amazon Echo Show line of devices is, they’re still just ARM-based Android devices, which makes repurposing them somewhat straightforward, as long as what you want is another Android device.
Running Home Assistant on an Echo Show 8 with LineageOS.
In this case, we’re talking about the first-generation Amazon Echo Show 8, which is a 2019-era device that got jailbroken back in November by [Rortiz2]. The process was then demonstrated in a video by [Dammit Jeff].
Currently only two devices are supported by this jailbreak, with the Echo Show 5 being the other one. If there’s enough interest, there doesn’t appear to be any technical reason why this support couldn’t be extended to other devices. One major reason for jailbreaking is to put LineageOS on your Echo device, as these Echo Show devices recently began showing advertisements, with no way to disable them.
The process of jailbreaking and installing the LineageOS ROM is somewhat long, as usual, with plenty of points where you can make a tragic mistake. Fortunately it’s pretty simple as long as you follow the steps, and afterwards you can even install the Google apps package if that’s your thing. Just mind the 1 GB of RAM and 8 GB of storage on the Echo Show 8. [Jeff] mostly replicated the home automation and entertainment features of Amazon’s FireOS with far less locked-down alternatives like Home Assistant.
youtube.com/embed/h0-MlJ38BXw?…
Hackaday Podcast Ep 351: Hackaday Goes To Chaos Communication Congress
Elliot was off at Europe’s largest hacker convention: Chaos Communication Congress. He had an awesome time, saw more projects than you might think humanly possible, and got the flu. But he pulled through and put this audio tourbook together for you.
So if you’ve never been to CCC, give it a listen!
html5-player.libsyn.com/embed/…
In the far future, all the cool kids will be downloading MP3s of their favorite podcasts.
Where to Follow Hackaday Podcast
Places to follow Hackaday podcasts:
What’s That Sound
- If you think you know what the mystery sound was this week, give us your best guess for a chance at a Hackaday Podcast t-shirt.
hackaday.com/2026/01/02/hackad…
Low-Cost, Portable Streaming Server
Thanks to the Raspberry Pi, we have easy access to extremely inexpensive machines running Linux that have all kinds of GPIO as well as various networking protocols. And as the platform has improved over the years, we’ve seen more demanding applications on them as well as applications that use an incredibly small amount of power. This project combines all of these improvements and implements a media streaming server on a Raspberry Pi that uses a tiny amount of energy, something that wouldn’t have been possible on the first generations of Pi.
Part of the reason this server uses such low power, coming in just around two watts, is that it’s based on the Pi Zero 2W. It’s running a piece of software called Mini-Pi Media Server which turns the Pi into a DLNA server capable of streaming media over the network, in this case WiFi. Samba is used to share files and Cockpit is onboard for easy web administration. In testing, the server was capable of streaming video to four different wireless devices simultaneously, all while plugged in to a small USB power supply.
For anyone who wants to try this out, the files for it as well as instructions are also available on a GitHub page. We could think of a number of ways that this would be useful over a more traditional streaming setup, specifically in situations where power demand must remain low such as on a long car trip or while off grid. We also don’t imagine the Pi will be doing much transcoding or streaming of 4K videos with its power and processing limitations, but it would be unreasonable to expect it to do so. For that you’d need something more powerful.
youtube.com/embed/rvEQalALV6Y?…
Thanks to [Richard] for the tip!
Liquid CO2 For Grid Scale Energy Storage Isn’t Just Hot Air
There’s folk wisdom in just about every culture that teaches about renewable energy — things like “make hay while the sun shines”. But as an industrial culture, we want to make hay 24/7 and not be at the whims of some capricious weather god! Alas, renewable energy puts a crimp in that. Once again, energy supplies are slowly becoming tied to the sun and the wind.
Since “Make compute while the wind blows” doesn’t have a great ring to it, clearly our civilization needs to come up with some grid-scale storage. Over in Sardinia they’re testing an idea that sounds like hot air, but isn’t — because the working gas is CO2.
The principle is simple: when power is available, carbon dioxide is compressed, cooled, and liquefied into pressure vessels, as happens at millions of industrial facilities worldwide every day. When power is required, the compressed CO2 can be run through a turbine to generate sweet, sweet electricity. Since venting tonnes of CO2 into the atmosphere is kind of the thing we’re trying to avoid with this whole rigmarole, the greenhouse gas slash working fluid is stored in a giant bag. It sits, waiting for the next charge cycle, like the world’s heaviest and saddest dirigible. In the test project in Sardinia, backed by Google amongst others, the gas bag holds 2000 tonnes and can produce 20 megawatts of power for up to 10 hours.
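Running the quoted figures through a quick back-of-envelope calculation (a sketch using only the numbers given above) shows what the bag buys you:

```python
# Back-of-envelope numbers from the Sardinia pilot figures in the article.
power_mw = 20        # turbine output
duration_h = 10      # discharge time
co2_tonnes = 2000    # gas held in the storage bag

energy_mwh = power_mw * duration_h               # total stored energy
kwh_per_tonne = energy_mwh * 1000 / co2_tonnes   # specific energy of the store
print(f"{energy_mwh} MWh total, {kwh_per_tonne:.0f} kWh per tonne of CO2")
```

That works out to about 100 Wh per kilogram of working fluid, if our reading of the quoted figures is right, and unlike a battery the fluid itself is cheap and reusable indefinitely.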
The scheme does require pressure vessels the size of buildings, which may make some nervous.
That’s not exactly astounding. It gets you through the night, but leaves you hanging if the next day is cloudy. But it’s scalable. The turbine is 20 megawatts, sure, but all you need is land to add extra energy capacity. The 200 MWh pilot plant is a five hectare facility, which is only about 12.3 acres, or roughly 1/10th the size of the Mall of America. It seems like increasing capacity would be fairly trivial; unlike, say, pumped hydro storage, no special topography is required. Ten hours of storage is also notably longer than the six to eight hours grid-scale battery farms usually aim for.
As of this writing, there’s only one of these plants in operation, but expect that to change rapidly. In 2026 the company behind the Sardinia project, Energy Dome, plans on putting in grid-scale storage based on its technology in India and Wisconsin, and that’s before Google gets into it. They’re hoping to roll this technology out at a number of data centers worldwide, though the exact details of the deal aren’t public.
We’ve talked about grid-scale energy storage before, using everything from liquid tin to electric car batteries and big piles of gravel. This methodology has a lot to recommend it over those others, and should worst come to worst, at least it won’t burn for days like certain batteries we could name. Releasing 2000 tonnes of CO2 might not be as benign as a failure from a liquid air battery, but storing liquid CO2 under pressure is a lot easier than holding onto cryogenic air.
All images credited to Luigi Avantaggiato.
Print Your Own Standardized Wire Spool Storage
Hardware hackers tend to have loads of hookup wire, and that led [firstgizmo] to design a 3D printable wire and cable spool storage system. As a bonus, it’s Gridfinity-compatible!
The slot to capture loose ends is a nice touch, and the units can be assembled without external hardware.
There are a lot of little design touches we love. For example, we like the little notch into which the wire ends are held, which provides a way to secure the loose ends without any moving parts. Also, while at first glance these holders look like something that goes together with a few screws, they actually require no additional hardware and can be assembled entirely with printed parts. But should one wish to do so, [firstgizmo] has an alternate design that goes together with some M3 bolts instead.
Want to adjust something? The STEP files are included, which we always love to see because it makes modifications to the models so much more accessible. One thing that hasn’t changed over the years is that making engineering-type adjustments to STL files is awful, at best.
If there is one gotcha, it is that one must remove wire from their old spools and re-wind onto the new to use this system. However, [firstgizmo] tries to make that as easy as possible by providing two tools to make re-spooling easier: one for hand-cranking, and one for using a hand drill to do the work for you.
It’s a very thoughtful design, and as mentioned, can also be used with the Gridfinity system, which seems to open organizational floodgates in most people’s minds. Most of us are pinched for storage space, and small improvements in space-saving really, really add up.
Al via il corso “Cyber Offensive Fundamentals” di RHC! 40 ore in Live Class
Vuoi smettere di guardare tutorial e iniziare a capire davvero come funziona la sicurezza informatica?
Se la risposta è SI, ti consigliamo di leggere questo articolo.
Il panorama della sicurezza informatica cambia velocemente: nuove vulnerabilità, attacchi sempre più sofisticati e infrastrutture complesse rendono indispensabile comprendere come i criminali informatici agiscono, per poter mettere in sicurezza le organizzazioni.
È fondamentale disporre di specialisti in grado di comprendere come i criminal hacker aggirano le misure di sicurezza, così da individuare in anticipo vulnerabilità e potenziali vettori di attacco all’interno di un’infrastruttura informatica, prima che possano essere sfruttati da aggressori reali.
For more information about the course, see the online syllabus, contact us via WhatsApp at 379 163 8765, or write to formazione@redhotcyber.com.
Think like an attacker, act like a defender
Beyond regulations and technology, the foundation of information security has always had a single goal: stopping cybercriminals' attacks.
To achieve that goal, it is essential to know how hackers operate. And that is precisely the leitmotif of this course: think like an attacker, act like a defender. Only by understanding how cybercriminals exploit systems is it possible to define coherent, effective strategies to stop them.
Knowing the theory is no longer enough. To truly protect a digital infrastructure, you need a deep understanding of how attackers, who are fundamentally technicians, behave. Every vulnerability, every misconfiguration, every flaw can become an entry point for an attack. The ability to anticipate a criminal hacker's moves and turn that knowledge into defensive strategies is now a decisive competitive advantage for companies.
In the job market, the demand for professionals specialized in offensive security (which encompasses the art of penetration testing and ethical hacking) far exceeds the supply. The cybersecurity market is growing rapidly, and companies of all sizes, from startups to large multinationals, are constantly looking for people who can simulate attackers and isolate attack vectors for preventive purposes. The employment opportunities are huge, concrete, and set to keep growing for many years.
To meet this need, Red Hot Cyber has launched a new training Live Class, "Cyber Offensive Fundamentals", designed for those who want to truly enter the world of hands-on cybersecurity.
With a highly practical approach, technical labs, and direct support from the instructors, the Live Class guides participants step by step in building the essential skills to carry out a penetration test in a modern, professional, and ethical way.
What the course covers
The course offers a complete path that starts from the basics and reaches the operational techniques used every day in penetration testing and ethical hacking. The main topics include:
- Networking: OSI model, IP addressing, subnetting, fundamental protocols, traffic analysis, and controlled sniffing.
- Hands-on labs: setting up virtual environments, Kali Linux, Metasploitable, Windows 10/11, isolated network configurations.
- Recon & OSINT: gathering public information, network scanning, fingerprinting, enumeration, advanced use of Nmap and Nessus, and CVE/CVSS analysis.
- Malware education: reverse shells, backdoors, persistence, evasion, and detection techniques.
- Exploitation: controlled attacks using public exploits, Metasploit, payloads, and real-world methodologies.
- Post-exploitation: privilege escalation, lateral movement, persistence, password cracking, tools such as LinPEAS and WinPEAS.
- Web application security: SQLi, XSS, RCE, path traversal, testing with Burp Suite, exercises on DVWA.
- Active Directory: architecture, authentication, attack vectors, professional tools (BloodHound, nxc, Responder, Impacket).
By the end of the course, participants will have sharpened their operational autonomy, becoming able to conduct a complete, documented penetration test, from initial reconnaissance through post-exploitation.
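As a small taste of the networking fundamentals listed above, subnetting can be explored with a few lines of Python using only the standard-library `ipaddress` module. This is an illustrative sketch, not course material:

```python
import ipaddress

def subnet_summary(cidr: str) -> dict:
    """Return basic facts about an IPv4 subnet given in CIDR notation."""
    # strict=False lets us pass any host address, not just the network address
    net = ipaddress.ip_network(cidr, strict=False)
    return {
        "network": str(net.network_address),
        "broadcast": str(net.broadcast_address),
        "netmask": str(net.netmask),
        # Subtract network and broadcast addresses to get usable host count
        "usable_hosts": max(net.num_addresses - 2, 0),
    }

# Example: which /26 does the host 192.168.1.77 belong to?
print(subnet_summary("192.168.1.77/26"))
```

Running this shows that 192.168.1.77/26 sits in the 192.168.1.64/26 network, with broadcast address 192.168.1.127 and 62 usable hosts, exactly the kind of mental arithmetic a penetration tester performs when mapping a target network.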
The instructor: Alessio Lauro
Guiding participants through this technical journey will be a professional and instructor who has been working with Red Hot Cyber for some time: Alessio Lauro. He is an ethical hacker with years of experience in penetration testing, network security, and vulnerability analysis. He holds international certifications and works as a trainer with several Italian professional organizations. He combines a practical approach with close attention to today's threats, bringing to the classroom concrete examples and the operational methodologies used by professionals in the field.
Learning format
The Live Class follows a highly hands-on training method:
- 40 hours of live lessons, with direct interaction with the instructor;
- isolated technical labs, reproducible on your own;
- step-by-step demonstrations of the techniques used in real-world scenarios;
- a learning-by-doing approach: every theoretical concept is immediately applied in practice;
- the same tools used by professionals.
Upon passing the final exam, the course also grants Red Hot Cyber's COF (Cyber Offensive Fundamentals) certification.
Course difficulty
The course is entry level, but strongly hands-on: it does not require advanced skills, but you must have:
- the ability to browse the Internet,
- minimal knowledge of information security,
- familiarity with computers and operating systems.
It is suitable both for those who want to get into cybersecurity and for those already working in IT who want to acquire up-to-date offensive skills. The idea behind this course is simple yet, in our view, fundamental: you must know how adversaries operate technically if you want to defend an organization effectively.
Entering the job market
This Live Class is a genuine entry point into the world of offensive security and the art of penetration testing.
Although it is an entry-level course, it lets you:
- build solid technical foundations,
- understand the operational logic of a penetration test,
- use the main professional tools,
- become familiar with real attack and defense scenarios,
- develop a penetration tester's mindset.
Always remember that this profession has no finish line. Indeed, as we often say, "hacking is a journey, not a destination", but the course lays solid, concrete foundations for starting a career in the field, giving students a launching point into this highly technical world.
Once the course is completed, participants will be able to join HackerHood Academy, a community closely tied to our ethical hacker collective, HackerHood. This space will let them get familiar with the environment and with professionals in the field, encouraging the sharing of experiences, ideas, and career paths in ethical hacking.
The article RHC's "Cyber Offensive Fundamentals" Course Kicks Off! 40 Hours in Live Class originally appeared on Red Hot Cyber.