The D in DNS Stands for DOOM



As literally everything ought to be able to play DOOM in some fashion, [Adam Rice] recently set out to make the venerable DNS finally play the game after far too many decades of being DOOM-less. You may be wondering how video games and a boring domain records database relate to each other. This is where DNS TXT records come into play, which are essentially fields for arbitrary data with no requirements or limitations on this payload, other than a 2,000 character limit.

Add to this the concept of DNS zones, which can contain thousands of records, and the inkling of a plan begins to form. Essentially the entire game (in C#) is fetched from TXT records, loaded into memory and run from there. This is in some ways a benign demonstration of how DNS TXT records can be abused by those with less friendly intentions, though [Adam] admits to using the Claude chatbot to help with the code, so YMMV.

The engine and WAD file with the game’s resources are compressed to fit into 1.7 MB along with a 1.2 MB DLL bundle, requiring 1,966 TXT records in Base64 encoding on a Cloudflare Pro DNS zone. With a free Cloudflare account you’d need to split it across multiple zones. With the TXT records synced across the globe, every DNS server in the world now has a copy of DOOM on it, for better or worse.
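For the curious, here is roughly what pulling a payload back out of DNS looks like. This is a minimal Python sketch using the dnspython library; the chunk-numbering scheme (chunk0 through chunk1965 under a zone) is purely our own assumption for illustration, not necessarily how [Adam] laid out his records.

```python
# Minimal sketch of pulling a Base64-encoded payload out of DNS TXT records.
# The record naming scheme (chunk0.example.com ... chunkN.example.com) is an
# assumption for illustration; it is not necessarily what the project uses.
import base64

import dns.resolver  # pip install dnspython


def fetch_chunk(name: str) -> str:
    """Return the concatenated TXT strings for a single record name."""
    answer = dns.resolver.resolve(name, "TXT")
    # A TXT record may be split into multiple <=255-byte strings; join them.
    return "".join(
        part.decode() for rdata in answer for part in rdata.strings
    )


def fetch_payload(zone: str, chunks: int) -> bytes:
    """Reassemble and decode a payload spread across sequential TXT records."""
    encoded = "".join(fetch_chunk(f"chunk{i}.{zone}") for i in range(chunks))
    return base64.b64decode(encoded)


if __name__ == "__main__":
    blob = fetch_payload("doom.example.com", chunks=1966)
    with open("payload.bin", "wb") as f:
        f.write(blob)
```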

You can find the project source on GitHub if you want to give this a shake yourself.

Thanks to [MrRTFM] for the tip.


hackaday.com/2026/03/31/the-d-…


PDP-11 Lives in Literal Computer Desk Once More



The ikea desk, with the spectrometer on the far left.

When you think of iconic pairings, your brain probably goes more to “cookies and milk” than “DEC and Ikea”, but after watching [Dave]’s latest on Usagi Electric, where he puts a PDP-11 into an Ikea desk, you may rethink that.

The PDP-11 is vintage hardware that actually lived inside of a different desk, once upon a time, serving as the control unit for an FTIR spectrometer. While the lab equipment has thankfully survived the decades, the desk did not, and when [Dave] got the unit it was a pile of parts. He revived it, of course – it’s kind of what he does – but it didn’t get a new desk for years, until his latest shop re-organization.

The one concession to modernity – and missing parts – is using switching power supplies rather than the bulky linear PSU that would have originally powered the unit. It’s a good thing, too, or we’d have trouble picturing how everything would fit! This particular PDP-11 comes with the high-performance vector processing unit in order to crunch those spectra, and apparently those chips idle at about 60 °C, so the desk-case got some decent-sized 120V fans to keep everything cool and running for years to come.

This isn’t the prettiest or fanciest case-mod we’ve seen, being made mostly of surplus plywood and scrap metal fittings, but it certainly gets the job done. Given that the PDP-11 has been crammed into every form-factor known to man, from a system-on-a-chip (before anybody really talked about SoCs) to desktop workstations, and of course the hulking cabinets with their iconic blinkenlights – it’s hard to say that this installation isn’t reasonably authentic, even if it isn’t the original desk.

youtube.com/embed/mG3XGbbvWH8?…


hackaday.com/2026/03/30/pdp-11…


See The Computers That Powered The Voyager Space Program



A Univac 1219 cabinet

Have you ever wanted to see the computers behind the first (and for now only) man-made objects to leave the heliosphere? [Gary Friedman] shows us, with an archived tour of JPL building 230 in the ’80s.

A NASA employee picks up a camcorder and decides to record a tour of the place “before they replace it all with mainframes”. They show us computers that would seem prehistoric compared to anything modern: early Univac and IBM machines whose power is outmatched today by even an ESP32, yet which made the Voyager program possible all the way back in 1977. There are countless peripherals to see, from punch card writers to Univac debug panels where you can see the registers, and from impressive cabinets full of computing hardware to the zip-tied hack “attaching” a small box they call the “NIU” to the inner wall of one cabinet. And don’t forget the tape drives that are as tall as a refrigerator!

We could go on ad nauseam, nerding out about the computing history, but why don’t you see it for yourself in the video after the break?

youtube.com/embed/T_bqc76_3xU?…

Thanks to [Michael] for the tip!


hackaday.com/2026/03/30/see-th…


This Front Panel Makes Its Own Clean-Edged Drill Guides



We haven’t seen an instrument panel quite like [bluesyann]’s, which was made by curing UV resin directly onto plywood with the help of a 3D printer and a bit of software work. The result is faintly-raised linework that also makes hand drilling holes both cleaner and more accurate.

The process begins by designing the 2D layout in Inkscape, which has the advantage of letting one work in 1:1 dimensions: a 10 mm diameter circle will print as 10 mm, a nice advantage when designing for physical components. After making the layout, one uses OpenSCAD to import the .svg and turn it into a 3D model that’s 0.5 mm tall. That 3D model gets loaded into the resin printer, and the goal is to put it directly onto a sheet of plywood.
A little donut shape makes a drill centering feature, and the surrounding ring keeps the edges of the hole clean.
To do that, [bluesyann] sticks the plywood directly onto the 3D printer’s build platform with double-sided tape. With the plywood taking the place of the usual build surface, the printer can cure resin directly onto its surface. Cleanup still involves washing uncured resin off the board, but it’s nothing a soak in isopropyl alcohol and an old toothbrush can’t take care of.

[bluesyann] has a few tips for getting the best results, and one of our favorites is a way to make drilling holes easier and cleaner. Marking the center of a drill hit with a small donut-shaped feature makes a fantastic centering guide, making hand drilling much more accurate. And adding a thick ring around the drill hole ensures clean edges with no stray wood fibers, so no post-drilling cleanup required. Don’t want the ring to stick around after drilling? Just peel it off. There’s a load of other tips too, so be sure to check it out.

A nice front panel really does make a project better, and we’ve seen many different approaches over the years. One can stick laminated artwork onto an enclosure, or one can perform toner transfer onto 3D printed surfaces by putting the design on top of the 3D printer’s build surface, and letting the heat of molten plastic do the work of transferring the toner. And if one should like the idea of a plywood front panel but balk at resin printing onto it, old-fashioned toner transfer works great on wood.


hackaday.com/2026/03/30/this-f…


Retro Open Source Camera Straight from the ’90s



In our modern society, we have started to take the humble camera for granted. Perhaps because of this, trendy standalone cameras have started to take off. Unfortunately, most of the time these cameras are expensive and no better than those in our everyday smartphones. If only there were an open-source option that let you build and customize your own standalone device. [Yutani] has done just that with the SATURNIX.

Simple microcontrollers and cameras meant for Raspberry Pis are a dime a dozen these days. Because of this, it’s no surprise to hear that the SATURNIX is based on recognizable hardware: a Raspberry Pi Zero 2 W and an Arducam 16 MP sensor. The Pi Zero powers both the sensor’s capture abilities and the interactive LCD display.

Some sample filtered shots from the SATURNIX
With a simple visual design, the device could certainly fit into the same market occupied by so many other standalone cameras. Pictures from the camera look great on their own, or with the included filter options if you want a more retro look. While there do appear to be some speed improvements needed, the best part of open source is that you yourself can help out!

We always love ambitious open source projects that look to build a true base for others to work on, and this seems like no exception! If you want similarly impressive feats of optical trickery, look no further than using scotch tape as a camera lens!


hackaday.com/2026/03/30/retro-…


Recreating One of the First Hackintoshes



Apple’s Intel era was a boon for many, especially for software developers who were able to bring their software to the platform much more easily than in the PowerPC era. Macs at the time were even able to run Windows fairly easily, which was unheard of. A niche benefit to a few was that it made it much easier to build Hackintosh-style computers, which were built from hardware not explicitly sanctioned by Apple but could be tricked into running OS X nonetheless. Although the Hackintosh scene exploded during this era, it actually goes back much farther, and [This Does Not Compute] has put together one of the earliest examples, going all the way back to the 1980s.

The build began with a Macintosh SE which had the original motherboard swapped out for one with a CPU accelerator card installed. This left the original motherboard free, and rather than accumulate spare parts [This Does Not Compute] decided to use it to investigate the Hackintosh scene of the late 80s. There were a few publications put out at the time that documented how to get this done, so following those as guides he got to work. The only original Apple part needed for this era was a motherboard, which at the time could be found used for a bargain price. The rest of the parts could be made from PC components, which can also be found for lower prices than most Mac hardware. The cases at the time would be literally hacked together as well, but in the end a working Mac would come out of the process at a very reasonable cost.

[This Does Not Compute]’s case isn’t scrounged from 80s parts bins, though. He’s using a special beige filament to print a case with the appropriate color aesthetic for a computer of this era. There are also some modern parts that make this style computer a little easier to use in today’s world like a card that lets the Mac output a VGA signal, an SD card reader, and a much less clunky power supply than the original would have had. He’s using an original floppy disk drive though, so not everything needs to be modernized. But, with these classic Macintosh computers, modernization can go to whatever extreme suits your needs.

Thanks to [Stephen] for the tip!

youtube.com/embed/RUUVNi_X8w8?…


hackaday.com/2026/03/30/recrea…


Medieval Alhambra’s Pulser Pump and Other Aquatic Marvels



Reflective pool of the Court of the Myrtles, looking north towards the Comares Tower. (Credit: Tuxyso, Wikimedia)

Recently the Practical Engineering YouTube channel featured a functional recreation of a pump design that is presumed by some to have been used to pump water up to the medieval Alhambra palace and its fortress, located in what is today Spain. This so-called pulser pump design is notable for not featuring any moving parts, but the water pump was just one of many fascinating engineering achievements that made the Alhambra a truly unique place before the ravages of time had their way with it.

Although the engineering works were said to still have been functional in the 18th century, this pumping system and many other elements from the complex’s heyday had already vanished by the 19th century for a number of reasons. During that century a Spanish engineering professor, Cáceres, tried to reconstruct the mechanism as best he could from the surviving descriptions, but sadly we’ll likely never know for certain whether his reconstruction matches what actually existed there.

Similarly, the speculated time-based fountain in the Court of the Lions and other elements are now forever lost to time, but we have plenty of theories on how all of this worked in a pre-industrial era.

Alhambra

Evening panorama of Alhambra from Mirador de San Nicolás, Granada, Spain. (Credit: Slaunger, Wikimedia)
A UNESCO World Heritage Site since 1984, the Alhambra saw its first construction in 1238 CE by Muhammad I, the first Nasrid emir. The Nasrid dynasty would last from 1238 to 1491 CE when the Muslim state of al-Andalus fell during the Christian Reconquista.

Even after the end of the Nasrid dynasty, the Alhambra saw further construction under Charles V in the 16th century. This made the Alhambra a rather unique amalgamation of Islamic and Renaissance-era architecture and engineering. Sadly, by the 18th century the structure had been abandoned and invaded by squatters, and in 1812 it was partially destroyed by Napoleon’s troops.

Only after these troubled times did an appreciation for such cultural heritage begin to flourish, with European and American tourists alike frequenting the area. One of them – US author Washington Irving – was so inspired by his visit in 1828 that he’d end up writing Tales of the Alhambra, containing many myths, stories, sketches, and essays pertaining to the site. This book in particular was instrumental in making an international audience aware of this site and its legacy.

This renewed attention resulted in the site becoming recognized first as a Spanish Cultural Heritage monument in 1870 and subsequently by UNESCO more than a century later.

Water Features


Most fortresses of the era relied primarily on water cisterns that collected rainwater, as well as access to local rivers in some form, usually requiring human or animal labor to transport the latter. This was also how the Alhambra started in its initial fortress form, called the Alcazaba, meaning ‘citadel’ in Spanish, from Arabic al-qaṣabah. The water from this cistern didn’t just provide drinking water, but also supplied the bathhouse (hammam) and water features like pools and fountains for the houses in the interior urban area. These houses additionally featured latrines that were flushed using this cistern water.

As the Alhambra expanded, with many palaces and related structures added, its water requirements increased correspondingly. Rather than some small decorative water features for a dozen houses and a communal bath, there were now reflective pools, fountains and a much larger population. This necessitated finding more efficient ways to get more water up the hill on which the Alhambra was constructed.
Aqueduct of the Alhambra as it enters the wall. (Credit: Sharon Mollerus, Wikimedia)
In addition to the aforementioned pump, there was also an aqueduct (the Acequia Real) that carried water from the Darro River. At a distance of 6.1 km from the fortress the river is at a sufficiently high elevation to provide water using just gravity. This aqueduct additionally provided water via additional branches to gardens and settlements beyond the Alhambra’s walls.

Many details can be found in this 2019 summary of applied hydraulic techniques at al-Andalus fortresses by Luis José García-Pulido and Sara Peñalver Martín.

As noted in that overview article, the reason for the Alhambra being significantly more advanced than other fortresses in the al-Andalus region was that it was the seat of the Nasrid dynasty, ergo it was only natural that it’d not only get all the palaces and comforts, but also the most advanced technologies for supplying water.

Unfortunately the unique pumping device that was used to supply the Alcazaba with water from the aqueduct was replaced in the 18th century with a more basic syphon system, and the original device was removed. Up until that point the device had continued to work, despite the new owners of the Alhambra not understanding its operating principles. This left 19th century researchers like Cáceres relying almost entirely on notes made during the previous century.

That said, there are also hints that the Alcazaba of the Antequera fortress used a similar device to pump water uphill, featuring ceramic pipes and other features described by Sancho de Toledo in 1545. Unfortunately these accounts were all written by people who lacked the engineering know-how of the original Nasrid engineers – or any engineering knowledge at all – and thus had no understanding of the workings of these pumps.

This means that we will unfortunately never know exactly what this device looked like or how it worked, but we can still look at some mechanisms which we are familiar with today that could have been used. The concept of the hydraulic ram or pulser pump would seem to come closest compared to what little we do know.

Self-Powered Pumps

1) Inlet – drive pipe; 2) Free flow at waste valve; 3) Outlet – delivery pipe; 4) Waste valve; 5) Delivery check valve; 6) Pressure vessel (Source: Wikimedia)
Unlike a water pump that uses e.g. an impeller to impart kinetic energy and thus move the liquid, a self-powered pump exploits physical phenomena – like the water hammer effect, or the fact that gas in a liquid will rise – to produce a pumping action. The hydraulic ram, for example, uses the water hammer effect and relies only on the kinetic energy of the incoming water.

The basic hydraulic ram functional sequence involves the water current pushing the normally open waste valve closed, at which point the water hammer effect from the sudden cessation of flow forces the delivery valve open, pushing water into the delivery pipe.

This process will reverse again after a short while, sending a pressure wave upstream and eventually leading to the waste valve reopening. The downstream flow will then resume again, restarting the whole process.

In terms of technological complexity this is a very straightforward design, with the most complex parts being the valves and the pressure vessel that cushions the system against pressure shocks. This is however a design that would have been technologically quite feasible to manufacture and operate.
Basic pulser pump design. (Credit: Belbury, Wikimedia)
Another, similar type of pump is the gas lift pump. A very small variant of this is commonly used in devices like coffee percolators, with the pulser pump being in effect a very large implementation of the same general principle. Rather than applying heat to the water reservoir in order to create gas (i.e. steam), the pulser pump uses an air compressing effect that’s also used with water-powered trompe air compressors.

As water falls down a pipe it drags air bubbles along with it, which eventually arrive at the bottom where said air is trapped in a cavity while the water flows on to a lower elevation.

The thinner pipe through which water ultimately is pumped is inserted into this air chamber in such a way that it’ll alternately ingest water and air as the level of the latter varies over time. This way pockets of water become trapped between pockets of air, with a resulting pulsing output of water at the end of this pipe.

Whether the original device at the Alhambra or Antequera exactly matches either pump design will likely remain forever a mystery, but neither was beyond the technological means of the time, with the pulser pump arguably even simpler since it requires no valves or pressure vessels.

Time Or Reflective Fountain


Although the Practical Engineering video focuses on this pump design, its author – Grady – was inspired by a Primal Space video that’s basically just history slop content, not citing any proper sources and propagating myths and misinformation as fact. The worst offender is probably the myth that the fountain found in the Court of the Lions was time-activated, with the only evidence that it was a clock being the fact that there are twelve lion statues and two times twelve hours in a day.
Court of the Lions and its fountain in 2021. (Credit: Sean Adams, Wikimedia)
When we consider the archaeological evidence that exists so far, as well as the findings during the recent restorations, it seems clear that the marble block with its many holes through which the water entered the bowl was intended to diffuse the flow. Around the bowl we can see a corresponding poem of twelve verses by the vizier and poet Ibn Zamrak.

In verses 3 through 7 it specifically refers to “[..] which runs to that which is still, that we know not which of them is flowing”. This quite strongly suggests that the theme was similar to that of the many reflective pools that were so popular around the Alhambra and elsewhere. The idea of it being a time-controlled mechanism would thus seem to be a purely Western interpretation, barring some hitherto unknown evidence appearing.

Lossy History


Perhaps the most cruel aspect of history is that, much like time itself, it has no concern for those of us who live in the present. Throughout the eons as empires rise and crumble back into dust, wondrous inventions are made and soon again forgotten, leaving behind only echoes of deeds and wonder.

If we’re lucky some of it is recorded in a form as durable as Sumerian clay tablets buried underneath desert sands, but if not then what once was shall never be again. This impermanence is the eternal curse of the past, and also the reason why it’s always so important to make multiple copies of your important data.

Due to the passage of time, history is mostly just ruins, potsherds and bones buried in mud and sand. Some will try to spruce things up with their imagination, resulting in faux romanticism, but this naturally bears little connection to the past. That today the Alhambra has been largely restored is testament to how much more respectfully we now approach the past, but the parts that were erased after the demise of the Nasrid dynasty are sadly likely to be lost forever.

Featured image: Reflective pool of the Court of the Myrtles, looking north towards the Comares Tower. (Credit: Tuxyso, Wikimedia)


hackaday.com/2026/03/30/mediev…


Tame the Tape: Open-Source Dotterboard for Bulk SMT Parts



Dotterboard smt counter

One of the great things about building electronics today is how affordable SMT components have become — sometimes just fractions of a cent each. That low price often means ordering far more than you need so you’ll have spares on hand the next time a project calls for them. Keeping track of exactly how many of each part you actually have, though, is rarely easy. To solve that problem, [John] built the Dotterboard, an open-source SMT tape counter.

While working on some of his other projects, [John] found himself managing thousands of tiny SMT parts and decided it was time to automate the counting. The Dotterboard takes inspiration from the BeanCounter — a compact, portable SMT tape counter — but expands the design to handle larger components beyond the 8 mm tapes the BeanCounter targets.

The Dotterboard is mostly 3D-printed and uses just a few common hardware parts such as springs and ball bearings. An OLED displays the current count, which comes from an encoder that tracks the tape’s movement, multiplied by the number of components per hole. At the heart sits an RP2040 Zero that needs nothing more than a single USB-C cable for power, unlike the bulky industrial SMT counters that demand AC outlets and desk space.
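To make that arithmetic concrete, here is a minimal Python sketch of how raw encoder ticks might become a parts count. The tick and pitch numbers below are purely illustrative assumptions, not the Dotterboard’s actual calibration values.

```python
# Back-of-the-envelope sketch of how an encoder reading becomes a parts count.
# All numbers here are illustrative assumptions, not the Dotterboard's actual
# calibration values.

TICKS_PER_SPROCKET_HOLE = 24   # encoder ticks per 4 mm sprocket-hole pitch (assumed)
PARTS_PER_HOLE = 1.0           # e.g. one part per hole; 0.5 for 2 mm pitch parts


def parts_counted(encoder_ticks: int,
                  ticks_per_hole: int = TICKS_PER_SPROCKET_HOLE,
                  parts_per_hole: float = PARTS_PER_HOLE) -> int:
    """Convert raw encoder ticks into a whole number of components."""
    holes_advanced = encoder_ticks / ticks_per_hole
    return int(holes_advanced * parts_per_hole)


if __name__ == "__main__":
    print(parts_counted(2400))  # 2400 ticks -> 100 holes -> 100 parts
```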

Be sure to check out all the details of the build on [John]’s website, and grab the files from his GitHub if you want to make your own. Let us know about the projects you’ve built to save yourself the headache of doing the same task by hand for hours on end.

youtube.com/embed/WIFQgdVEmkg?…


hackaday.com/2026/03/30/tame-t…


Spy Tech: Conflicts Bring a New Number Station



If you know much about radios and espionage, you’ve probably encountered number stations. These are mysterious stations that read out groups of numbers or otherwise encoded messages to… well… someone. Most of the time, we don’t know who is receiving the messages. You’d be excused for thinking that this is an old technology. After all, satellite phones, the Internet, and a plethora of options now exist to allow the home base to send spies secret instructions. However, the current-day global conflict has seen at least one new number station appear, apparently associated with the United States and, presumably, targeting some recipients in Iran, according to priyom.org.

As you might expect, these stations don’t identify themselves, but the Enigma Control List names this one V32. It broadcasts two two-hour blocks a day: one at 0200 UTC and a repeat at 1800 UTC. Each message starts with the Farsi word for “attention”, followed by what is assumed to be header information as two 5-digit groups. Then there is a set of 181 five-digit groups. Each message is padded out to take 20 minutes, and there are six messages in each transmission.

How Do You Know?


While this could, in theory, be from (and to) anywhere, direction finding has traced the signal to a US base near Stuttgart, Germany. In addition to using Farsi, Iran has repeatedly attempted to jam the signal, causing V32 to change frequencies a few times. There’s also a more recent, so far unidentified, jammer trying to block the signal.

In addition to direction finding, there is a surprising amount of information you can glean from the audio. The first few days of broadcasts had specific beeps in the background, which appear to be warning tones from a specific type of American military transmitter that warns the operator when encryption is not engaged. At first, a human read the numbers. Eventually, the station switched to an automated voice.

Oops


In addition, there have been a few times when Windows 10 system sounds have leaked into the transmission. Other oddities are several cases where a word was read out in the middle of the numbers. We aren’t cryptographers, but that suggests the numbers refer to words in some sort of codebook, and the words read aloud are ones that book doesn’t contain.

If you want to try your hand at decoding, you can hear the station on USB just under 8 MHz, or just listen to the recordings made by others (like the ones below or this one). You might like to read what other people say about it, too.

youtube.com/embed/3-eg3i9XYt4?…

youtube.com/embed/r6CzkwAXltk?…

We are fascinated by spy stations. Even when they aren’t really number stations.


hackaday.com/2026/03/30/spy-te…


Platform governance goes to court




IT'S MONDAY, AND THIS IS DIGITAL POLITICS. I'm Mark Scott, and as the war in the Middle East enters its second month, here's a map that explains why we are all in for major energy price hikes (or shortages) in April.

— American and European courts are doing more for social media oversight than the growing list of online safety regimes worldwide.

— Middle Powers are now testbeds for different forms of AI, competition and platform governance regulation. Some will work, others will not.

— Two-thirds of people polled worldwide say they have used AI, in some form, over the last 12 months.

Let's get started:


LITIGATION VERSUS LEGISLATION?


ONE OF THE HALLMARKS OF DIGITAL policymaking over the last five years is the drive toward national or regional online safety rules. The likes of Australia's Online Safety Act; the European Union's Digital Services Act; and the United Kingdom's Online Safety Act epitomize lawmakers' efforts to create greater accountability and transparency for social media giants. That, in turn, has led to a pushback from some of these companies and the United States (at least within the federal government), which criticized these rules as either being overly cumbersome or an illegitimate attack on people's free speech rights.

My day job means I'm pretty clued up on most of these (Western) online safety regimes. If you want a wonky policy discussion about mandatory data access requirements or the inner workings of companies' annual risk assessments and external audits, then I'm your man. Yet we need to be honest about the current state of play of these online safety regimes. They are often too cumbersome, under-resourced and overly-politicized to meaningfully improve people's experiences online — at least in the short term.

In contrast, four recent court decisions — two in the US, two in the EU — demonstrate how judges and juries have had a more significant impact on platform governance than the growing number of national/regional online safety rulebooks. For countries similarly seeking to create greater transparency and oversight for the likes of TikTok and YouTube, this "litigation over legislation" strategy may be worth pursuing. That's especially true given how the current White House is embedding provisions to ward off future digital regulation in its trade negotiations with third-party countries.




Before we get to the court cases and their implications, let's lay out two caveats.

First, it's not a question of litigation or legislation. These policy levers do different things. For officials, it's more about potentially front-running lawsuits, based on existing statutory oversight, before more long-term online safety regulation can navigate countries' often labyrinthine democratic processes. Second, litigation often builds on existing regulatory playbooks, providing individual citizens the ability to fill in gaps where slow-moving — and often untested — legislation has yet to take hold.

OK, caveats covered. Now, to the cases. I'll keep these brief, given how much coverage there has been, especially related to the American lawsuits.

In a one-two punch, two US courts took swings at the tech giants: one in California at Meta and Google, the other in New Mexico at Meta alone. On March 25, a jury in Los Angeles awarded $6 million in damages to a plaintiff who had accused YouTube and Instagram of deliberate design choices that had made her addicted to both platforms. In response, both Meta and Google rejected those assertions and said they would appeal.


In New Mexico, a separate jury on March 24 found that Meta had violated state consumer protection laws and ordered the tech giant to pay $375 million in damages. The case revolved around accusations from the state's attorney general that the social media company's services were designed to maximize engagement for children without embedding the appropriate safety measures to protect minors. In response, Meta said it kept people safe on its platforms and would similarly appeal.

In Europe, a regional court in southern Germany on March 11 upheld a complaint initially filed by a local consumer protection agency. It required YouTube to stop online influencers from posting sponsored content if the underlying advertiser was not disclosed and clearly stated. The court decision is not yet final. But the preliminary ruling may force YouTube to place a "sponsored post" label across all such videos, as well as require content creators to make public who is paying for such ads. It's a clarification of the EU's Digital Services Act (Article 26) and how online platforms must handle transparency around online advertising.

Finally, a Dutch court on March 26 forced Elon Musk's xAI to stop generating and distributing sexually-explicit images of people without their consent in the Netherlands — or face daily fines of around $115,000. The case had been brought by Offlimits, a local advocacy group. It followed global outrage — and regulatory investigations — into how xAI's Grok artificial intelligence tool and X, which hosted it, had been used to create realistic deepfake explicit images of women and children. xAI's lawyers had said it was impossible to remove all such abuse from the social media platform. The company also stopped Grok from creating such images in early 2026, though the Dutch judge believed there was still reasonable doubt that xAI's attempts would be effective.

Four legal cases, four slightly different legal issues. More lawsuits are pending, and the current cases may still be overturned on appeal.

Yet what is striking are the similarities between these lawsuits — and what they do compared to slow-moving online safety regulation.

Two central criticisms aimed at legislation targeting social media are that 1) these platforms have significant liability carve-outs for what people post online, and 2) the likes of Australia's and the UK's Online Safety Acts represent illegal attacks on free speech rights. The four separate cases outlined above mostly circumvent these issues by focusing on the design of these platforms, not on how they moderate individual social media posts.

This is a significant distinction and one, to be fair, also baked into national/regional online safety rules. The point of these lawsuits was not to dictate what could be posted online. Instead, they took aim at the intrinsic design choices that the likes of xAI, YouTube and Instagram had made that, at least in the views of the American juries and European judges, failed to live up to these companies' obligations under existing legislation. It's hard to accuse these decisions of undermining free speech rights when they focus exclusively on the wonkiness of how content recommender systems operate or the transparency requirements related to influencers' sponsored posts.

The Censorship Industrial Complex, it is not.

The second meaningful difference between these lawsuits and the ongoing conveyor belt of online safety regulation is how much more personal such litigation makes the potential harms associated with social media.

I can count, on one hand, the number of experts who have read the EU's most recent risk assessments and external audits related to how so-called Very Large Online Platforms and Search Engines combat alleged systemic risks under the bloc's Digital Services Act. I joke. But only just. These documents run into the hundreds of pages; are inherently legalistic in both tone and nature; and — after two years of these reports being published — have not provided meaningful transparency for the average EU citizen.

In contrast, the often personal (and routinely tragic) stories at the heart of such lawsuits, as well as the spectacle of high-profile tech executives taking the stand to defend their platforms, cut through to the average social media user more effectively than decades' worth of dense policy documents. They demonstrate the potential real-world harms resulting from poor design choices in a way that is just not possible via online safety regulation, which inherently takes a systemic view of such problems.

Inherently, platform governance litigation does something different than online safety legislation. It is not one over the other. But at a time when regulatory headwinds are gathering against countries' attempts to pass such regulation, a shift toward national courts — as a means to boost transparency and accountability for some of the world's largest companies by centering these debates in the lived experiences of individuals — is a much-needed step.


Chart of the day


THE US LIKES TO THINK IT'S THE CENTER of the AI revolution. And at least when it comes to where these systems are built, that certainly is true.

But Americans remain behind the curve in the use of artificial intelligence tools and applications compared to their peers across both the West and the Global Majority, based on a worldwide survey conducted between September and October 2025.

On average, 66 percent of those polled said they had used such services over the last 12 months. At 88 percent, Nigeria was the most AI-savvy country compared with Japan where only 45 percent of people said they had used AI over the last year.
Source: Google / Ipsos


MIDDLE POWERS: LABORATORIES OF DIGITAL POLICYMAKING


IN THE EARLY 1930s, THE US SUPREME COURT justice Louis Brandeis referred to US states as so-called "laboratories of democracy." By this, he meant smaller jurisdictions within the Union could try different social and economic models without these experiments unduly harming all 50 states. I can't help but think of that expression as I look over the similar digital policymaking crucible underway in so-called Middle Power countries, or states like Brazil, Japan and the UK that sit slightly outside the trifecta of global digital policymaking powers of the US, EU and China.

Taken together, three ongoing experiments in each of these jurisdictions demonstrate how national lawmakers and officials are meeting local needs in an increasingly globalized digital world. They offer potential alternatives for other Middle Power countries that do not necessarily want to be rule-takers from global powers when it comes to digital competition, artificial intelligence and data protection.

First, to London. I remain skeptical the UK government (of all political flavors) has the will to implement a serious digital policymaking agenda. Other, that is, than one that prioritizes foreign direct investment over all other demands. Yet the country's Competition and Markets Authority (CMA) is slowly implementing the so-called Digital Markets, Competition and Consumers Act, an updated digital antitrust rulebook that offers an alternative to the more bureaucratic approach under the EU's Digital Markets Act.

A quick snapshot about how these competition rules operate. Under the UK's regime, regulators first determine if a company has so-called "Strategic Market Status," and then create specific rules to ensure its dominance doesn't skew the market. Under the EU's rulebook, companies are designated as "gatekeepers," and then — collectively — European regulators determine if these firms' activities infringe on smaller players.

Before I get angry emails, yes, Doug Gurr — a former senior Amazon executive — was appointed as chairman of the British competition agency in February. That has raised concerns the CMA will pull back on its digital enforcement work. But since early 2026, the regulator has issued two statements — one linked to how people and businesses interact with Google's search product; another aimed at leveling the playing field in both Google's and Apple's app stores — that are worth tracking.

Both are designed to loosen these services' control of what are now viewed as dominant parts of the online economy. Critics will say they don't go far enough to hobble these services. But the UK's revamped digital antitrust regime is designed to create bespoke interventions, based on individual companies' services, that may prove more nimble than the one-size-fits-all approach outlined within the EU's Digital Markets Act.

What's worth paying attention to, for other Middle Powers, is whether the proposed changes to both Google search and the app stores give greater breathing space to competitors, as well as allowing users to more easily swap to rival search products. If that does start to happen in the UK (and it's still an 'if'), then London's digital antitrust approach may be worth adopting.

Shifting gears — both geographically and thematically — takes us to Japan where the country's AI regulation is now more than six months old. Unlike the top-down legislation outlined by the likes of South Korea and Europe, Japan has instead implemented mostly voluntary guidelines, backed up with expanded enforcement powers for existing regulatory agencies, to create a flexible approach to AI oversight. At least, that is what Tokyo would like you to believe.




The legislation also includes a public commitment to invest $6.3 billion, over five years, in AI-linked emerging technology, as well as other high-tech industries like drones and quantum computing. The idea is to combine a co-regulatory approach to AI governance — again, supported by stricter enforcement from the likes of the country's privacy regulator — with direct investment in local firms competing on the global stage.

Japan's approach stands somewhere between that of the 'regulate-first' EU and the 'don't-regulate' US, albeit via an AI governance framework that relies heavily on voluntary corporate compliance. Still, it may represent a potential third way for other countries both concerned about how AI is rolled out nationally and wanting to support local industries in this global technology race.

Finally, to Brazil. Latin America's most populous country enacted its so-called ECA Digital law earlier this month, specifically aimed at protecting children online. The regulation gained widespread traction after local YouTube channels were found to be profiting off sexualized videos of children.

Key provisions include: age verification requirements for platforms that host potentially inappropriate content for minors; such age-gating must take place when an account is created; providers of digital services must prevent addictive design practices like infinite social media scrolling; companies must remove child-related criminal content and notify authorities; a new law enforcement center was created to coordinate potential violations.


What's different from other online safety regimes is that it puts a significant onus on companies, not the government, to enforce individual provisions. That will inevitably create higher regulatory burdens for companies operating in Brazil — and some firms will likely pull out because of that.

But for other countries, which don't have the financial resources to implement a UK-style Online Safety Act, the outsourcing of such requirements to digital services where much of the potential harm is housed may offer a way forward in adopting online safety regulation without similarly incurring hefty increases in public money to support such oversight.


What I'm reading


— The Reuters Institute at the University of Oxford delves into the online media/news habits of Generation Alpha. It's more social media, fewer websites. More here.

— The DSA Observatory explains what lessons it learned when its application for data access to privately-held social media data was rejected under the EU's Digital Services Act. More here.

— The Oversight Board published recommendations for how Meta should implement its community notes instrument across its global platforms. More here.

— New York University's Center on Tech Policy produced a comparison of the 60 bills, across nearly 30 US states, aimed at regulating companion AI chatbots. More here.

— Almost 70 countries within the World Trade Organization agreed to an interim pathway toward global rules for digital trade, even though a final deal is unlikely in the short term. More here.



digitalpolitics.co/newsletter0…


Using a Scientific Satellite for Passive Radar



An overlay is shown on a topographical map. High points are highlighted in blue. The letters "A" and "B" are shown in red text at two points.

The basic principle of radar systems is simple enough: send a radio signal out, and measure the time it takes for a reflection to return. Given the abundant sources of RF signals – television signals, radio stations, cellular carriers, even Wi-Fi – that surround most of us, it’s not even necessary to transmit your own signal. This is the premise of passive radar, which uses these ambient signals as its illumination to form an image. The RF signal doesn’t even need to come from a terrestrial source, as [Jean Michel Friedt] demonstrated with a passive radar illuminated by the NISAR radar-imaging satellite (pre-print paper).

NISAR is a synthetic-aperture radar satellite jointly built by NASA and ISRO, and it completes a pass over the world every twelve days. It uses an L-band chirp radar signal, which can be picked up with GNSS antennas. One antenna points up towards the satellite, and has a ground plane blocking the signal from directly reaching the second antenna, which picks up reflections from the landscape under observation. Since the satellite would illuminate the scene for less than a minute, [Jean-Michel] had to predict the moment of peak intensity, and achieved an accuracy of about three seconds.

The signals themselves were recorded with an SDR and a Raspberry Pi. High-end, high-resolution SDRs such as the Ettus B210 gave the best results, but an inexpensive homebuilt MAX2771-based SDR also produced recognizable images. This setup won’t be providing any particularly detailed images, but it did accurately show the contours of the local geography – quite a good result for such a simple setup.
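If you want a feel for the processing involved, the heart of any passive radar is a cross-correlation between the reference (direct-path) channel and the surveillance (reflected) channel: the lag of the correlation peak gives the echo’s extra path delay. Here is a toy Python/NumPy sketch on synthetic data; the sample rate and simulated delay are arbitrary assumptions and have nothing to do with [Friedt]’s actual recordings.

```python
# Toy illustration of the core passive-radar step: cross-correlate a reference
# (direct-path) channel with a surveillance (reflected) channel to find the
# extra propagation delay of an echo. Synthetic data only; sample rate and
# delay values are arbitrary assumptions, not from the actual setup.
import numpy as np

C = 3.0e8           # speed of light, m/s
FS = 10e6           # sample rate, Hz (assumed)
DELAY_SAMPLES = 42  # simulated extra path delay of the echo

rng = np.random.default_rng(0)
reference = rng.standard_normal(1 << 16)                   # stand-in for the radar chirp
surveillance = 0.1 * np.roll(reference, DELAY_SAMPLES)     # weak, delayed copy
surveillance += 0.5 * rng.standard_normal(reference.size)  # receiver noise

# FFT-based circular cross-correlation; the peak sits at the echo's lag.
corr = np.fft.ifft(np.fft.fft(surveillance) * np.conj(np.fft.fft(reference)))
lag = int(np.argmax(np.abs(corr)))

print(f"estimated delay: {lag} samples "
      f"(extra path length of roughly {lag / FS * C:.0f} m)")
```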

If you’re more interested in tracking aircraft than surveying landscapes, check out this ADS-B-synchronized passive radar system. Although passive radar doesn’t require a transmitter license, that doesn’t mean it’s free from legal issues, as the KrakenSDR team can testify.


hackaday.com/2026/03/30/using-…


The Hazards of Charging USB-C Equipped Cells In-Situ



Can you charge Li-ion cells that have built-in USB-C charging ports without taking them out of the device? While the answer would seem to be an unequivocal ‘yes’, [Colin] recently found out that doing so could easily have destroyed the device they were to be installed in.

After being tasked with finding a better way to keep the electronics of some exercise bikes powered than simply swapping the C cells all the time, [Colin] was led to consider using these Li-ion cells in such a manner. Fortunately, rather than just sticking the whole thing together and calling it a day, he decided to take some measurements to satisfy some burning safety questions.

As it turns out, at least the cells that he tested – using a twin USB-C charging lead on a single USB-A plug – have all the negative terminals and USB-C grounds connected. Since the cells are installed in a typical series configuration in the device, charging them both from the same supply would effectively short one of the cells through that shared ground – an interesting outcome indeed. Although you can of course use separate USB-C leads and chargers per cell, it’s still somewhat disconcerting to run it without any kind of electrical isolation.

In this regard, the suggestion by some commenters to use NiMH cells and trickle-charge them in situ – much like garden solar lights do – might be one of the least crazy solutions.

youtube.com/embed/sOCF46_d0Sk?…


hackaday.com/2026/03/30/the-ha…


Writing an Open-World Engine for the Nintendo 64



Anyone who has ever played Nintendo 64 games is probably familiar with the ways that large worlds in these games got split up, with many loading zones. Another noticeable aspect is that of the limited drawing distance, which is why even a large open area such as in Ocarina of Time‘s Hyrule Field has many features that limit how far you can actually see, such as hills and a big farming homestead in the center. Yet as [James Lambert] demonstrates in a recent video, it’s actually possible to create an open world on the N64, including large drawing distances.

As explained in the video, the drawing distance is something the developer controls, and may want to restrict to hit certain performance goals. In effect, the developer sets where the far clipping plane sits, beyond which items are no longer rendered. Of course, there are issues with just ramping up the distance to the far clipping plane, as the N64 only has a 15-bit Z-buffer: push it too far and you get ‘Z-fighting’, where render order becomes an issue as it’s no longer clear what is in front of what.

One fix is to push the near clipping plane further away from the player, but this comes with its own share of issues. Ergo [James] fixed it by doing two render passes: first all the far-away objects with Z-buffer disabled, and then all the nearby objects. These far-away objects can be rendered back-to-front with low level-of-detail (LoD), so this is relatively fast and also saves a lot of RAM, as the N64 is scraping by in this department at the best of times.
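For those who prefer code to prose, here is the gist of that two-pass ordering expressed as a plain Python sketch rather than actual N64/libultra code; all object and function names are made up for illustration.

```python
# Structural sketch of the two-pass idea described above: draw far geometry
# back-to-front with the Z-buffer off, then draw near geometry normally.
# Object and function names are hypothetical, purely for illustration.
from dataclasses import dataclass


@dataclass
class SceneObject:
    name: str
    distance: float  # distance from the camera


def draw(obj: SceneObject, z_buffer: bool, low_detail: bool) -> None:
    # Stand-in for the real draw call.
    mode = f"z={'on' if z_buffer else 'off'}, lod={'low' if low_detail else 'full'}"
    print(f"draw {obj.name:10s} ({mode})")


def render(scene: list[SceneObject], near_far_split: float) -> None:
    far = [o for o in scene if o.distance >= near_far_split]
    near = [o for o in scene if o.distance < near_far_split]

    # Pass 1: far objects, painter's-algorithm order, no Z-buffer, low LoD.
    for obj in sorted(far, key=lambda o: o.distance, reverse=True):
        draw(obj, z_buffer=False, low_detail=True)

    # Pass 2: near objects with the Z-buffer enabled as usual.
    for obj in near:
        draw(obj, z_buffer=True, low_detail=False)


if __name__ == "__main__":
    render(
        [SceneObject("mountain", 900.0), SceneObject("castle", 400.0),
         SceneObject("tree", 30.0), SceneObject("player", 2.0)],
        near_far_split=100.0,
    )
```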

In the video the full details of this rendering approach, as well as a new fog rendering method, are explained, with the code and such available on GitHub for those who wish to tinker with it themselves. [James] and friends intend to develop a full game using this engine as well, so that’s definitely something to look forward to.

youtube.com/embed/lXxmIw9axWw?…


hackaday.com/2026/03/29/writin…


Training a Transformer with 1970s-era Technology



Although generative language models have found little widespread, profitable adoption outside of putting artists out of work and giving tech companies an easy scapegoat for cutting staff, their underlying technology remains a fascinating area of study. Stepping back to the more innocent time of the late 2010s, before the cultural backlash, we could examine these models in their early stages. Or, we could see how even older technology processes these types of machine learning algorithms in order to understand more about their fundamentals. [Damien] has put a 60s-era IBM as well as a PDP-11 to work training a transformer algorithm in order to take a closer look at it.

For such old hardware, the task [Damien] is training his transformer to do is to reverse a list of digits. This is a trivial problem for something like a Python program but much more difficult for a transformer. The model relies solely on self-attention and a residual connection. To fit within the 32KB memory limit of the PDP-11, it employs fixed-point arithmetic and lookup tables to replace computationally expensive functions. Training is optimized with hand-tuned learning rates and stochastic gradient descent, achieving 100% accuracy in 350 steps. In the real world, this means that he was able to get the training time down from hours or days to around five minutes.
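To give a flavor of the lookup-table trick, here is a small Python sketch that swaps the exponential inside softmax for a precomputed table of fixed-point values. The Q8.8 format and table size are our own assumptions for illustration, not necessarily what [Damien] used on the PDP-11.

```python
# Toy demonstration of the general trick described above: replace expensive
# float math (here, exp() inside softmax) with Q8.8 fixed-point values and a
# precomputed lookup table. The table size and Q-format are assumptions for
# illustration only.
import math

FRAC_BITS = 8
ONE = 1 << FRAC_BITS                      # 1.0 in Q8.8

# Table of exp(-x) for x in [0, 8) with 1/32 resolution, stored as Q8.8.
STEP = 1 / 32
EXP_TABLE = [int(round(math.exp(-i * STEP) * ONE)) for i in range(256)]


def exp_neg_fixed(x_fixed: int) -> int:
    """exp(-x) for a non-negative Q8.8 input, via table lookup."""
    index = min((x_fixed * 32) >> FRAC_BITS, len(EXP_TABLE) - 1)
    return EXP_TABLE[index]


def softmax_fixed(scores_fixed: list[int]) -> list[int]:
    """Softmax over Q8.8 scores, returning Q8.8 probabilities."""
    m = max(scores_fixed)
    exps = [exp_neg_fixed(m - s) for s in scores_fixed]   # exp(s - m), since s <= m
    total = sum(exps)
    return [(e * ONE) // total for e in exps]


if __name__ == "__main__":
    scores = [int(v * ONE) for v in (0.5, 2.0, -1.0)]     # attention logits
    probs = softmax_fixed(scores)
    print([p / ONE for p in probs])   # roughly [0.17, 0.79, 0.04]
```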

Not only does a project like this help understand these tools, but it also goes a long way towards demonstrating that not every task needs a gigawatt datacenter to be useful. In fact, we’ve seen plenty of large language models and other generative AI running on computers no more powerful than an ESP32 or, if you need slightly more computing power, on consumer-grade PCs with or without GPUs.


hackaday.com/2026/03/29/traini…


Hackaday Links: March 29, 2026



Hackaday Links Column Banner

Whether it’s a new couch or a rare piece of hardware picked up on eBay, we all know what it feels like to eagerly await a delivery truck. But the CERN researchers involved in a delivery earlier this week weren’t transporting anyone’s Amazon Prime packages; they were hauling antimatter.

Moving antimatter, specifically antiprotons, via trucks might seem a bit ridiculous. But ultimately CERN wants to transfer samples between various European laboratories, and that means they need a practical and reliable way of getting the temperamental stuff from point A to B. To demonstrate this capability, the researchers loaded a truck with 92 antiprotons and drove it around for 30 minutes. Of course, you can’t just put antiprotons in a cardboard box; the experiment utilized a cryogenically cooled magnetic containment unit that they hope will eventually be able to keep antimatter from rudely annihilating itself on trips lasting as long as 8 hours.

Speaking of deliveries, anyone building a new computer should be careful when ordering components. Shady companies are looking to capitalize on the currently sky high prices of solid-state drives by counterfeiting popular models, and according to the Japanese site AKIBA PC Hotline, there are some examples in the wild that would fool all but the most advanced users. They examine a bootleg drive that’s a nearly identical replica of the Samsung 990 PRO — the unit and its packaging are basically a mirror image of the real deal, the stated capacity appears valid, and it even exhibits similar performance when put through a basic benchmark test.

But while the drive’s sequential read and write speeds are within striking distance of the official numbers from Samsung, things start to fall apart when doing random speed tests or performing real-world operations. It took the fake drive over 25 minutes to write a 370 GB file, while the authentic one ripped through the same file in less than four, giving true write speeds of 261 MB/s and 1,861 MB/s, respectively.
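If you want to run a similar sanity check on your own hardware, a rough-and-ready version is just timing a long sequential write. Here is a short Python sketch along those lines; the path and file size are placeholders, and the test will add a little wear to the drive.

```python
# Quick-and-dirty sustained-write check along the lines described above:
# time how long it takes to stream a few GB to the drive and report the
# effective rate. The path and total size are placeholders; larger sizes
# expose cache exhaustion on the drive under test more reliably.
import os
import time

TEST_FILE = "write_test.bin"             # put this on the drive under test
TOTAL_BYTES = 4 * 1024**3                # 4 GiB test file (adjust to taste)
CHUNK = os.urandom(8 * 1024**2)          # 8 MiB of incompressible data

start = time.monotonic()
written = 0
with open(TEST_FILE, "wb") as f:
    while written < TOTAL_BYTES:
        f.write(CHUNK)
        written += len(CHUNK)
    f.flush()
    os.fsync(f.fileno())                 # make sure it actually hit the drive
elapsed = time.monotonic() - start

print(f"{written / 1024**2 / elapsed:.0f} MB/s sustained")
os.remove(TEST_FILE)
```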

Luckily, you don’t have to time how long it takes to dump 100+ GB of data on the drive just to see if it’s legitimate; Samsung offers a tool that can communicate with the drive and determine whether it’s an original or not. If they don’t already, we imagine other manufacturers will roll out similar capabilities in an effort to combat these sophisticated clones.

Of course, computers aren’t the only things in our modern world that are impacted by the rising prices of memory and flash storage. On Friday, Sony announced that they would be implementing higher prices across their PlayStation line starting this week to compensate for what they call “pressures in the global economic landscape.”

Starting April 2nd (presumably they didn’t want consumers to think this was a joke), the base model PS5 will be bumped up to $649.99 in the US and €649.99 in Europe, while the PS5 Pro will be set at an eye-watering 899.99 in both currencies. Admittedly we’ve done absolutely no research to support this, but surely that must make the latter system the most expensive home game console in history by a considerable margin. In comparison, Microsoft’s top of the line Xbox Series X is currently priced at $799, though the model with the smaller 1 TB drive is still available for $649.

One might think that the skyrocketing cost of memory would force developers to take a lesson from the early days of computing, and usher in a new era of highly optimized code that manages to do more with less. That would be nice. Instead, we now have DOOM rendered in the browser using CSS.

As Niels Leenheer explains in the write-up, the original goal was to have the entire game running in CSS. But he quickly ran into issues trying to implement the game logic. So he settled for letting Claude port the open source C code for the base game over to JavaScript, which freed him up to work on doing the graphics in CSS.
If you’re interested in web development it’s a fascinating look at how far the modern browser can be pushed, and even if you aren’t, it’s a surprisingly smooth way to play the classic shooter without having to install anything.

Lastly, the public is finally getting some information about the health scare aboard the International Space Station that triggered the first-ever medical evacuation from the orbiting laboratory back in January. As we predicted in our previous coverage, NASA was unwilling to put personal information about one of their astronauts on the public record, and has remained tight-lipped about the situation. So it was Crew-11 Pilot Mike Fincke himself who decided not only to come forward as the individual who experienced the issue, but to detail what he went through in an interview with the Associated Press.

So what happened? Well, nobody is quite sure yet. Fincke says he was eating dinner the night before he was scheduled to go on a spacewalk outside the Station, and suddenly realized he couldn’t speak. His crewmates realized he was in distress, and contacted medical personnel at Mission Control on his behalf. Testing performed both on the Station and back on Earth has yet to provide any explanation for the episode. It lasted approximately 20 minutes, and he’s experienced no issues since. Space is kinda crazy like that sometimes.


See something interesting that you think would be a good fit for our weekly Links column? Drop us a line, we’d love to hear about it.


hackaday.com/2026/03/29/hackad…


Laser Ranging Makes GPS Satellites More Accurate



Although GNSS systems like GPS have made pin-pointing locations on Earth’s sphere-approximating surface significantly easier and more precise, it’s always possible to go a bit further. The latest innovation involves strapping laser retroreflector arrays (LRAs) to newly launched GPS satellites, enabling ground-based lasers to accurately determine the distance to these satellites.

Similar to the retroreflector array that was left on the Moon during the Apollo missions, these LRAs will be most helpful with scientific pursuits, such as geodesy. This is the science of studying Earth’s shape, gravity and rotation over time, which is information that is also incredibly useful for Earth-observing satellites.

Laser ranging is also essential for determining the geocentric orbit of a satellite, which enables precise calibration of altimeters and increases the accuracy of long-term measurements. Now that the newly launched GPS III SV-09 satellite is operational, NASA’s geodesy project gets more information to work with, and GPS measurements should only get more accurate as more of the yet-to-be-launched satellites are equipped with LRAs.
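The ranging principle itself is plain time-of-flight: a ground station fires a short laser pulse at the retroreflector and times the round trip. A back-of-the-envelope sketch follows; the ~20,200 km figure is the nominal GPS orbital altitude, used here only as a placeholder rather than a measured range to SV-09:

```python
# Toy satellite-laser-ranging calculation: convert a round-trip light travel
# time into a one-way range, using the nominal GPS altitude as an example.
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_round_trip(t_seconds: float) -> float:
    """One-way distance in metres from a two-way light travel time."""
    return C * t_seconds / 2

if __name__ == "__main__":
    altitude_m = 20_200e3                  # nominal GPS orbital altitude (placeholder)
    round_trip = 2 * altitude_m / C        # time the ground station would measure
    print(f"round trip time : {round_trip * 1e3:.1f} ms")
    print(f"recovered range : {range_from_round_trip(round_trip) / 1e3:.0f} km")
```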


hackaday.com/2026/03/29/laser-…


Clean Enclosures, No Printing Necessary



Unless you’re into circuit sculptures, generally speaking, a working circuit isn’t the end-point of a lot of electronics projects. To protect your new creation from grabby hands, curious paws, and the ravages of nature, you’ll probably want some kind of enclosure. These days a lot of us would probably run it off on the 3D printer, but some people would rather stay electronics hobbyists without getting into the 3D printing hobby. For those people, [mircemk] shares how he creates professional-looking enclosures with hand tools.

The name [mircemk] will seem familiar to longtime readers– we’ve featured many of his projects, and they’ve always stood out for the simple but elegant enclosures he uses. The secret, it turns out, is thin PVC sheeting from a sign shop. At thicknesses up to and including 5 mm, the material can be bent by hand and cut with hobby knives. It’s also amenable to drilling and cutting with woodworking tools. Drilling is especially useful to make holes for indicator LEDs. [mircemk] recommends cyanoacrylate ‘crazy’ glue to hold pieces together. For holding down the PCB, the suggested double-sided tape will work for components that won’t get too hot.

Rather than paint, the bold contrasting colours we’ve become used to are applied using peel-and-stick wallpaper, which is a great idea. It’s quick, zero mess, and the colour is guaranteed to be evenly applied. It might even help hold the PVC enclosure together ever so slightly. You can watch him do it in the video embedded below.

We hate to say it, but for a one-off project, this technique probably does beat a 3D printed box for professional looks, assuming you have [mircemk]’s motor skills. If you don’t have said motor skills, check out this parametric project box generator. If you’d rather avoid PVC while making a square box to hold a PCB, have you considered using PCBs?

Thanks to [mircemk] for the tip! If you have a tip or technique you want to share, please box it up and send it to the tipsline

youtube.com/embed/t9KfsZ-eU5M?…


hackaday.com/2026/03/29/clean-…


Self-healing CMOS Imager to Withstand Jupiter’s Radiation Belt



Ionizing radiation damage from electrons, protons and gamma rays will over time damage a CMOS circuit, through e.g. degrading the oxide layer and damaging the lattice structure. For a space-based camera inside a probe orbiting a planet like Jupiter, it’s thus a bit of a bummer if this massively shortens the useful observation time before the sensor is fully degraded. A potential workaround is to use thermal energy to anneal the damaged parts of a CMOS imager.

The first step is to detect damaged pixels by performing a read-out while the sensor is not exposed to light. If a pixel still carries significant current it’s marked as damaged, and a high current is passed through it to significantly raise its temperature. For the digital logic part of the circuit a similar approach is used, where the detection of logic errors triggers a high-voltage pulse that should likewise anneal any damage.
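That dark read-out step is conceptually the same trick astrophotographers use to map hot pixels. Here is a toy sketch of the idea in Python; the threshold and the frame data are made up, the real chip does this in hardware, and the annealing current injection is represented only by a list of flagged pixels:

```python
# Toy model of radiation-damage detection on an image sensor: read the array
# with the shutter closed, and flag any pixel whose dark signal exceeds a
# leakage threshold as a candidate for thermal annealing.
import random

WIDTH, HEIGHT = 8, 8
LEAK_THRESHOLD = 50           # arbitrary dark-signal limit (made-up units)

# Simulated dark frame: mostly quiet pixels, plus a few leaking hard.
dark_frame = [[random.randint(0, 10) for _ in range(WIDTH)] for _ in range(HEIGHT)]
for x, y in [(1, 2), (5, 5), (6, 0)]:
    dark_frame[y][x] = random.randint(80, 200)

def find_damaged(frame, threshold):
    """Return (x, y) coordinates of pixels leaking above the threshold."""
    return [(x, y) for y, row in enumerate(frame)
                   for x, value in enumerate(row) if value > threshold]

anneal_list = find_damaged(dark_frame, LEAK_THRESHOLD)
print(f"{len(anneal_list)} pixels flagged for annealing: {anneal_list}")
```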

During testing the chip was exposed to the same level of radiation as it would experience during thirty days in orbit around Jupiter, which rendered the sensor basically unusable with a massive increase in leakage current. After four rounds of annealing the imager was almost restored to full health, showing that it is a viable approach.

Naturally, this self-healing method is only intended as another line of defense against ionizing radiation, with radiation shielding and radiation-resistant semiconductor technologies serving as the primary defenses.


hackaday.com/2026/03/29/self-h…


Multicolor 5-Axis 3D Printing



A 3D printer is shown, with the print bed pitched sharply toward the camera. The hotend is depositing plastic on a model at a sharp angle to the print bed.

Usually, when we see non-planar 3D printers, they’re rather rudimentary prototypes, intended more as development frames than as workhorse machines. [multipoleguy]’s Archer five-axis printer, on the other hand, breaks this trend with automatic four-hotend toolchanging, a CoreXY motion system, and print results as good-looking as any Voron’s.

The print bed rests on three ball joints, two on one side and one in the center of the opposite side. Each joint can be raised and lowered on an independent rail, which allows the bed to be tilted on two axes. The dimensions of the extruders and their motion system limit how much the bed can be angled when the extruder is close to the bed, but it can reach sharp angles further out.

The biggest difficulty with non-planar printing is the slicer; [multipoleguy] is developing his own, MaxiSlicer, and while it’s still a work in progress, it already seems to be working rather well, to the point that [multipoleguy] has been optimizing purge settings for tool changes. It seems that when a toolhead is docked, the temperature inside the melt chamber rises above the normal temperature in use, which causes stringing. To compensate for this, the firmware runs a more extensive purge when a hotend’s been sitting for a longer time. The results speak for themselves: a full three-color double helix, involving 830 tool changes, could be printed with as little as six grams of purge waste.

As three-axis 3D printers become consumer products, hackers have kept looking for further improvements to make, which perhaps explains the number of non-planar printing projects appearing recently, including a few five-axis machines. Alternatively, some have experimented with non-planar print ironing.

youtube.com/embed/Y44QV1gQqq0?…


hackaday.com/2026/03/29/multic…


Soviet CDs And CD Players Existed, And They Were Strange



Until the fall of the Soviet Union around 1990, you’d be forgiven as a proud Soviet citizen for thinking that the USSR’s technology was on par with the decadent West. After the Iron Curtain lifted it became quite clear, however, just how outdated Soviet consumer electronics in particular were, with technologies like digital audio CDs and their players being one good point of comparison. In a recent video by a railways/retro tech YouTube channel we get a look at one of the earliest Soviet CD players.

A good overall summary of how CD technology slowly developed in the Soviet Union despite limitations can be found in this 2025 article by [Artur Netsvetaev]. Soviet technology was characterized mostly by glossy announcements and promises of ‘imminent’ serial production prior to a slow fading into obscurity. Soviet engineers had come up with the Luch-001 digital audio player in 1979, using glass discs. More prototypes followed, but with no means for mass-production and Soviet bureaucracy getting in the way, these efforts died during the 1980s.

During the 1980s CD players were produced in Soviet Estonia in small batches, using Philips internals to create the Estonia LP-010. Eventually sanctions on the USSR would strangle these efforts, however. Thus it wouldn’t be until 1991 that the Vega PKD-122 would become the first mass-produced CD player, with one example featured in this video.

The video helpfully includes a teardown of the player after a rundown of its controls and playback demonstration, so that we can ogle its internals. This system uses mostly localized components, with imported components like the VF display and processors gradually getting replaced over time. The DAC and optical-mechanical assembly would still be imported from Japan until 1995 when the factory went bankrupt.
Insides of the Vega 122S CD player. (Credit: Railways | Retro Tech | DIY, YouTube)
This difference between the imported and localized parts is captured succinctly in the video with a comparison to Berlin in 1999: you can clearly see the difference between East and West. The CD mechanism is produced by Sanyo, with a Sanyo DAC IC on the mainboard. The power supply, display and logic board (using Soviet TTL ICs) are all Soviet-produced. A sticker inside the case identifies this unit as having been produced in 1994.

Amusingly, the front buttons are directly coupled into the mainboard without ESD protection, which means that in a Siberian winter with practically zero relative humidity inside you’d often fry the mainboard by merely using these buttons.

After this exploration the video goes on to explain how Soviet CD production began in 1989, using imported technology and know-how. This factory was set up in Moscow with outdated West German CD pressing equipment, and makes for a fascinating topic all by itself.

Finally, the video explores the CD player’s manual and how to program the player, as well as how to obtain your own Soviet CD player. Interestingly, a former employee of the old factory has taken over the warehouse and set up a web shop selling new old stock as well as repaired units and replacement parts.

youtube.com/embed/utcfnmQtGxA?…


hackaday.com/2026/03/29/soviet…


Play a .WAV Instead of Typing Line After Line Into Vintage Microcomputer



[Casey Bralla] got his hands on a Rockwell AIM 65 microcomputer, a fantastic example of vintage computing from the late 70s. It sports a full QWERTY keyboard and a twenty-character-wide display, complemented by a small thermal printer. The keyboard is remarkably comfortable, but doing software development on a one-line, twenty-character display is just not anyone’s idea of a good time. [Casey] made his own tools to let him write programs on his main PC and transfer them easily to the AIM 65 instead.
A one-line, twenty-character wide display was a fantastic feature, but certainly lacking for development work.
Moving data wasn’t as straightforward in 1978 as it is today. While the Rockwell AIM 65 is a great machine, it has no disk drive and no filesystem. Programs can be written in assembler or BASIC (which had ROM support) but getting them into running memory where they could execute is not as simple as it is on modern machines. One can type a program in by hand, but no one wants to do that twice.

Fortunately the AIM 65 had a tape interface (two, actually) and could read and store data in an audio-encoded format. Rather than typing a program by hand, one could play an audio tape instead.

This is the angle [Casey]’s tools take, in the form of two Python programs: one for encoding into audio, and one for decoding. He can write a program on his main desktop, and encode it into a .wav file. To load the program, he sets up the AIM 65 then hits play on that same .wav file, sending the audio to the AIM 65 and essentially automating the process of typing it in. We’ve seen people emulate vintage tape drive hardware, but the approach of simply encoding text to and from .wav files is much more fitting in this case.
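The core idea of any such encoder is the same as any computer-tape format: map each bit of the byte stream to a burst of audio tones and write the result out as a .wav the cassette input will accept. The sketch below shows the general shape of a two-tone encoder; note that the mark/space frequencies, bit rate, and framing here are placeholders, not Rockwell’s actual tape format (which [Casey]’s tools implement from the documentation):

```python
# Generic two-tone ("FSK-style") text-to-WAV encoder, as an illustration of the
# approach.  Frequencies, bit rate, and framing are placeholders and do NOT
# match the Rockwell AIM 65 tape format.
import math
import struct
import wave

SAMPLE_RATE = 44100
BAUD = 300
FREQ_0, FREQ_1 = 1200, 2400            # placeholder tones for 0 and 1 bits

def tone(freq: float, seconds: float) -> bytes:
    """A sine burst of the given frequency and duration as 16-bit PCM."""
    n = int(SAMPLE_RATE * seconds)
    samples = (int(32767 * 0.8 * math.sin(2 * math.pi * freq * i / SAMPLE_RATE))
               for i in range(n))
    return b"".join(struct.pack("<h", s) for s in samples)

def encode_byte(byte: int) -> bytes:
    bits = [0] + [(byte >> i) & 1 for i in range(8)] + [1]   # start/stop framing
    return b"".join(tone(FREQ_1 if b else FREQ_0, 1 / BAUD) for b in bits)

def text_to_wav(text: str, path: str) -> None:
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)          # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(b"".join(encode_byte(b) for b in text.encode("ascii")))

text_to_wav('10 PRINT "HELLO"\n', "program.wav")
```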

The audio encoding format Rockwell used for the AIM is very well-documented, but no tools existed that [Casey] could find, so he made his own with the help of Anthropic’s Claude AI. The results were great, as Claude was able to read the documentation and, with [Casey]’s direction, generate working encoding and decoding tools that implemented the spec perfectly. It went so swimmingly that he went on to make a two-pass assembler and source code formatter for the AIM as well. With them, development is far friendlier.

Watch a demonstration in the video [Casey] made (embedded under the page break) that shows the encoded data being transferred at a screaming 300 baud, before being run on the AIM 65.

youtube.com/embed/C5hO1vE4pxM?…


hackaday.com/2026/03/28/play-a…


Watch Electricity Slosh: Visualizing Impedance Matching



Y-circuit comparison for a water and real electrical circuit

It’s one thing to learn about transmission lines in theory, and quite another to watch a voltage pulse bounce off an open connector. [Alpha Phoenix] bridges the gap between knowledge and understanding in the excellent videos after the break. With a simple circuit, he uses an oscilloscope to visualize the propagation of electricity, showing us exactly how signals travel, reflect, and interfere.

The experiment relies on a twisted-pair Y-harness, where one leg is left open and the other is terminated by a resistor. By stitching together oscilloscope traces captured at regular intervals along the wire, [Alpha Phoenix] constructs a visualization of the voltage pulse propagating. To make this intuitive, [Alpha Phoenix] built a water model of the same circuit with acrylic channels, and the visual result is almost identical to the electrical traces.

For those who dabble in the dark art of RF and radio, the real payoff is the demonstration of impedance matching in the second video. He swaps resistors on the terminated leg to show how energy “sloshes” back when the resistance is too high or too low. However, when the resistor matches the line’s characteristic impedance, the reflection vanishes entirely—the energy is perfectly dissipated. It really makes it click how a well-matched, low SWR antenna is crucial for performance and protecting your radio.
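For anyone who wants to put numbers on that “sloshing”, the fraction of the wave that bounces back from a termination is governed by the reflection coefficient Γ = (R_load − Z0) / (R_load + Z0). Here’s a quick sketch; the 100 Ω characteristic impedance is just a placeholder for whatever the twisted pair in the video actually presents:

```python
# Reflection coefficient and reflected power fraction for a resistively
# terminated transmission line.  Z0 = 100 ohms is only a placeholder value.
def reflection_coefficient(r_load: float, z0: float) -> float:
    if r_load == float("inf"):
        return 1.0                      # open circuit reflects everything
    return (r_load - z0) / (r_load + z0)

Z0 = 100.0
for r in (0.0, 50.0, 100.0, 200.0, float("inf")):
    gamma = reflection_coefficient(r, Z0)
    print(f"R = {r:>6} ohm  ->  gamma = {gamma:+.2f}, "
          f"reflected power = {gamma**2 * 100:.0f}%")
```

The matched case (R = Z0) is the only row where the reflected power drops to zero, which is exactly what the oscilloscope traces show.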

[Alpha Phoenix] is a genius at making physics visible. He even managed to “film” a laser beam traveling at light speed.

youtube.com/embed/2AXv49dDQJw?…

youtube.com/embed/RkAF3X6cJa4?…


hackaday.com/2026/03/28/watch-…


Playful ‘Space Dice’ Kit Shows Off Clever Design



[Tommy] at Oskitone has been making hardware synth kits for years, and his designs are always worth checking out. His newest offering, Space Dice, is an educational kit that combines a vintage sci-fi space-laser sound generator with a six-sided die roller. What’s more, as a kit it represents an effort to be genuinely educational, rather than just using the word as a meaningless marketing term.

There are several elements we find pretty interesting in Space Dice. One is the fact that, like most of [Tommy]’s designs, there isn’t a microcontroller in sight. Synthesizers based mostly on CMOS logic chips have been a mainstay of DIY electronics for years, as have “electronic dice” circuits. This device mashes both together in an accessible way that uses a minimum of components.

There are only three chips inside: a CD4093 quad NAND with Schmitt-trigger inputs used as a relaxation oscillator, a CD4040 binary counter used as a prescaler, and a CD4017 decade counter responsible for spinning a signal around six LEDs while sound is generated, to represent an electronic die. Sound emerges from a speaker on the backside of the PCB, which we’re delighted to see is driven not by a separate amplifier chip, but by unused gates on the CD4093 acting as a simple but effective square wave booster.
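The usual reason a spinning-counter die like this rolls fair (and, we assume, the principle at work here) is timing: the counter steps through its six states far faster than a human can perceive, so the state it lands in when the spin stops is effectively random. A toy simulation of that principle follows; the 6 kHz step rate and the human timing jitter are invented numbers, not anything measured from Space Dice:

```python
# Toy demonstration of why "stop a fast counter" dice circuits roll fair:
# human timing jitter spans many full counter cycles, so the counter position
# at the stopping instant is close to uniformly distributed.
import random
from collections import Counter

STEP_HZ = 6000          # invented counter step rate, steps per second
ROLLS = 60_000

def roll() -> int:
    # Human press duration: ~0.3 s with tens of milliseconds of jitter.
    duration = random.gauss(0.3, 0.05)
    steps = int(duration * STEP_HZ)
    return steps % 6 + 1           # counter output 0..5 -> die face 1..6

histogram = Counter(roll() for _ in range(ROLLS))
for face in range(1, 7):
    print(f"face {face}: {histogram[face] / ROLLS:.3f}")   # all close to 1/6
```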

In addition, [Tommy] puts effort into minimizing part count and complexity, ensuring that physical assembly does not depend on separate fasteners or adhesives. We also like the way he uses a lever assembly to make the big activation button — mounted squarely above the 9 V battery — interface with a button on the PCB that is physically off to the side. The result is an enclosure that is compact and tidy.

We recommend checking out [Tommy]’s concise writeup on the design details of Space Dice for some great design insights, and take a look at the assembly guide to see for yourself the attention paid to making the process an educational one. We love the concept of presenting an evolving schematic diagram, which changes and fills out as each assembly step is performed and tested.

Watch it in action in a demo video, embedded just below. Space Dice is available for purchase but if you prefer to roll your own, all the design files and documentation are available online from the project’s GitHub repository.

player.vimeo.com/video/1172325…


hackaday.com/2026/03/28/playfu…


Apple’s Most Repairable Laptop is Thanks to Right-to-Repair



An upside down laptop with its cover removed on a grey surface. The inside of the laptop is a series of black modules connected to the frame with glorious amounts of screws and not glue!

The common narrative around device design is that you can have repairability or a low price, but that they are inversely proportional to each other. Apple’s new budget MacBook Neo seems to attempt a bit of both.

Brittle snap-fit enclosures or glue can make a device pop together quickly during manufacture, but are a headache when it comes time to repair or hack it. Our friends at iFixit tore down the Neo and found it to be the most repairable MacBook since the 2012 unibody model. A screwed-in battery and modules for many of the individual components, including the USB ports and headphone jack, make it fairly simple to replace individual parts. Most of those components are even accessible as soon as you pop the bottom cover, instead of requiring major surgery.

As someone who has done a keyboard replacement on a 2010 MacBook, the 41 screws holding the keyboard in brought back (bad) memories. While this is a great improvement over Apple’s notoriously painful repair processes, we’re still only looking at an overall 6/10 score from iFixit versus a 10/10 from Framework or Lenovo.

The real story here is that these improvements from Apple were spurred by Right-to-Repair developments, particularly in the EU, that were the result of pressure from hackers like you.

If you want to push a Neo even further, how about water cooling it? If you’d rather have user-upgradeable RAM and storage too in a Mac, you’ve got to go a bit older.

youtube.com/embed/PbPCGqoBB4Y?…


hackaday.com/2026/03/28/apples…


Turning Tesla Model 3’s Computer Into a Desktop PC



Like many high-tech companies, Tesla runs a bug bounty program. But in the case of a car manufacturer, this means that you either already have one of their cars, are interested in buying one, or can gain access to its software bits in another legal manner. Being a Tesla-less individual, yet with an interest in hunting bugs, [David Schütz] thus decided to pursue the option of obtaining the required parts from crashed Tesla cars.

Specifically [David] was interested in the Tesla Model 3 and its combined Media Control Unit (MCU) and Autopilot computer (AAP) assembly. In addition to the main unit, it also requires – obviously – a power supply, and the proprietary display. These were all obtained fairly easily, but unfortunately the devices all had their cables cut off, leaving just a sad little stump of wiring with the still plugged-in connectors.

After trying his luck with an incompatible BMW LVDS cable from one of their head unit infotainment systems, he then proceeded to try to use the cable stumps with some creative patching. This briefly worked, but some debris fell onto the MCU board and blew a power rail IC.

Ultimately this IC got swapped, but only after [David] had already purchased a whole new Model 3 computer, leaving him with two units. He also took the easy way out and bought the Dashboard Wiring Harness cable loom containing the Rosenberger connectors he needed to connect the display to the main unit.


hackaday.com/2026/03/28/turnin…


For Art’s Sake



Hackers can be a strange folk. Our idea of beauty, for instance, can be rather odd. This week, Hackaday saw a few projects that were not just functional – the aesthetics were the goal. I don’t think we’ll be taking over the fine art world any time soon, but I’m absolutely convinced that the same muse that guides the hand that holds the paintbrush sometimes also guides the hand holding the soldering iron.

Take “circuit sculpture”, for instance. Heck, we even give it an art-inspired name that classifies it correctly. This week’s project that got me thinking about the aesthetics of hand-bent wire circuits was this marvelous clock build, but the works of Mohit Bhoite or Kelly Heaton are also absolute must-sees in this category.

Outside of the Hackaday orbit, one of my all-time favorite artists in this genre was Peter Vogel, who made complex audience-reactive sound sculptures that looked as good as they sound.

Is an animated wireframe jellyfish art? It was certainly intended to be beautiful, and I personally find it so. Watch some of the video clips attached to the project to get a better sense of it.

In the sculpture world, there is a sub-genre of kinetic art pieces where the work itself is secondary to the beauty of the motions that the pieces pull off. Think ballet, but mechanical. Perhaps my absolute favorite of these artists is Arthur Ganson. If you haven’t seen his work before, check out “Thinking Chair” for the beauty of movement, but don’t miss “Machine with Concrete” if you’re feeling more conceptual.

If you’re willing to buy an insane geartrain as art, what about these 3D printed wire strippers? Is this “art”? It’s clear that they were designed with real intent and attention to the aesthetics of the final form, and am I wrong for finding the way they move literally beautiful?

What’s your favorite offbeat hacker artform?

This article is part of the Hackaday.com newsletter, delivered every seven days for each of the last 200+ weeks. It also includes our favorite articles from the last seven days that you can see on the web version of the newsletter. Want this type of article to hit your inbox every Friday morning? You should sign up!


hackaday.com/2026/03/28/for-ar…


Magic-less 8 Ball Finds New Life With Pi Pico Inside



There’s an old saying that goes: when life gives you lemons, make lemonade. [lds133] must have heard that saying, because when life took the magic liquid out of his Magic 8 Ball, [lds133] made not eight-ball-ade, but an electronic replacement with a Raspberry Pi Pico and a round TFT display.

In case the Magic 8 Ball is unknown in some corners of the globe, it is a toy that consists of a twenty-sided die with a set of oracular messages engraved on it, enclosed in a magical blue liquid — and by magical, we mean isopropyl alcohol and dye. The traditional use is to ask a question, shake the eight-ball, and then ignore its advice and do whatever you wanted to do anyway.

[lds133]’s version replicates the original behavior exactly by using the accelerometer to detect the shaking, the round display to show an icon of the die, and a Raspberry Pi Pico to do the hard work. There’s also the obligatory lithium pouch cell for power, which is managed by one of the usual TP4056 breakout boards. One very nice detail is that instead of a distracting battery indicator, the virtual die changes color as the battery wears out.

We’ve seen digital 8 Balls before, like this one that used an STM32, or another that used a Raspberry Pi to display reaction GIFs. Some projects are just perennial.


hackaday.com/2026/03/28/magic-…


Making a Nichrome Wirewound Power Resistor



Although not really a cost-effective or required skill unless you have some very specific needs not met by off-the-shelf power resistor options, making your own wirewound power resistor is definitely educational, as well as a fascinating look at a common part that few people spare a thought for. Cue [TheElectronBench]’s video tutorial on how to make one of these components from scratch.

The resistance value is determined by the length of nichrome wire, which is an alloy of nickel and chromium (NiCr) with a resistivity of around 1.12 µΩ·m. It’s also extremely durable when heated, as it forms a protective outer layer of chromium oxide. This makes it suitable for very high power levels, but also requires the rest of the power resistor assembly to be able to take a similar punishment.

For the inner tube of this DIY power resistor a tube of alumina ceramic was used, around which the nichrome wire is wound. This resistor targets 15 Ohm at a maximum load of 50 Watt, which means a current of about 1.83 A is expected at 27.4 V. The nichrome wire used has a measured resistance of 10.4 Ohm per meter, so about 1.44 meters has to be cut and wound.
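The sizing math is straightforward: pick a target resistance, divide by the wire’s resistance per meter to get the length, then check the current and voltage the resistor will see at its rated power. A quick sketch using the figures quoted above:

```python
# Wire-length and operating-point arithmetic for a DIY wirewound resistor,
# using the figures from the video: 15 ohm target, 50 W rating, and wire
# measured at roughly 10.4 ohm per metre.
import math

TARGET_OHM = 15.0
RATED_WATTS = 50.0
WIRE_OHM_PER_M = 10.4

length_m = TARGET_OHM / WIRE_OHM_PER_M
current_a = math.sqrt(RATED_WATTS / TARGET_OHM)    # from P = I^2 * R
voltage_v = math.sqrt(RATED_WATTS * TARGET_OHM)    # from P = V^2 / R

print(f"wire length : {length_m:.2f} m")           # ~1.44 m
print(f"max current : {current_a:.2f} A")          # ~1.83 A
print(f"voltage     : {voltage_v:.1f} V")          # ~27.4 V
```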

This entire assembly is then embedded in refractory cement (fireproof cement), as this will keep the wire in place while also being able to take the intense temperature cycling during operation. As a bonus this will prevent toasting the surrounding environment too much, never mind lighting things on fire as the nichrome wire heats up.

As explained in the video, this is hardly the only way to create such a power resistor, with multiple types of alternative alloys available, different cores to wind around and various options to embed the assembly. The demonstrated method is however one that should give solid results and be well within the capabilities and budget of a hobbyist.

An important point with nichrome is that you cannot really solder to it, so you’ll need something along the lines of a mechanical (crimping) connection. There are also different winding methods that can affect the inductance of the resistor, since this type of resistor is by its design also a coil. This is however not covered in the video as for most applications it’s not an issue.

Overall, this video tutorial would seem to be a solid introduction to nichrome power resistors, including coverage of many issues you may encounter along the way. Feel free to sound off in the comment section with your own experiences with power resistors, especially if you made them as well.


hackaday.com/2026/03/28/making…


SEGA Music to MODfile, (Semi)Automatically



One thing SEGA’s MegaDrive/Genesis and the Commodore Amiga had in common – aside from the Motorola 68000 processor – was being known for excellent music in games. As [reassembler] continues his quest to de-assemble Sonic the Hedgehog and re-assemble the code to run on the Amiga, getting the music right is a key challenge. Rather than pull MIDI info or recreate the sound by ear, [reassembler] has written a program called Sonic2MOD to automatically take the music data from the MegaDrive cartridge’s assembly files and turn it into an Amiga-style MOD file. He’s also made a video about it that you’ll find embedded below.

Of course, how music gets made differs wildly between the two systems. The Amiga famously has Paula, a custom ASIC designed for sampling, allowing you to play four eight-bit voices. The Sega, of course, has that glorious FM-synthesis chip from Yamaha synthesizing five channels of CD-quality sound and one channel of samples. It’s not as well known, but the Sega also has a bonus TI-compatible programmable sound generator (PSG) that can handle three square-wave tone channels and one noise channel. That’s ten total channels to the Amiga’s four, and CD-quality versus 8-bit voices. Knowing all that, we were very curious how close to SEGA’s original music [reassembler] could get on the Amiga.

Before he could show us, [reassembler] needed to decode the SMPS files used in Sonic the Hedgehog and many other MegaDrive games. Presumably he could have gotten a MIDI file online somewhere – there are oodles – but the goal was to reverse engineer Sonic from its cartridge for the Amiga, not download a lot of resources from the web. SMPS is a sort of programming language for sound, telling the Yamaha and PSG chips what to do.

In some ways, it’s not unlike the Amiga’s MOD format, which programmatically specifies how to play the sampled voices also stored in the file. Translating from one to the other is a matter of reading the SMPS files, extracting the timing, volume, vibrato, et cetera, and translating all that into a form the MOD file can use. Then [reassembler] needed to generate samples, which was an added hiccup because the Amiga can only handle three octaves versus the seven of the SEGA’s FM synthesizer. He’s able to solve this simply by generating multiple samples to span the Yamaha chip’s range, though, again, at only 8-bit fidelity. It doesn’t sound half bad.
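One way to picture that sample-generation step: since a single Paula sample can only be transposed over roughly three octaves, you pre-render the same waveform at several octave-spaced base pitches and let the converter pick whichever copy keeps the target note inside a comfortable range. The sketch below generates octave-spaced single-cycle square waves as 8-bit signed PCM; it is purely an illustration of the idea, not [reassembler]’s actual Sonic2MOD code, and the playback rate is a placeholder:

```python
# Illustrative sample generation: render one waveform (a plain square wave
# here) at several octave-spaced base pitches, so any target note sits within
# a narrow transposition range of at least one pre-made sample.
SAMPLE_RATE = 16574          # placeholder playback rate, samples per second

def square_cycle(freq_hz: float) -> bytes:
    """One cycle of a square wave as signed 8-bit PCM (two's complement)."""
    length = max(2, round(SAMPLE_RATE / freq_hz))
    half = length // 2
    # +100 for the first half-cycle, -100 (stored as 156) for the second.
    return bytes((100 if i < half else 156) for i in range(length))

# Base pitches an octave apart, roughly spanning a synth chip's note range.
base_freqs = [55.0 * 2 ** octave for octave in range(5)]   # 55 Hz .. 880 Hz
samples = {f: square_cycle(f) for f in base_freqs}

for freq, pcm in samples.items():
    print(f"{freq:7.1f} Hz base sample: {len(pcm)} bytes per cycle")
```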

What about the four-channel limit? That’s where a bit of artistry comes in; the automated tool produces MOD files with more voices, which MOD trackers can handle at increased computational load. Computational load you don’t need when trying to play a game. Scaling down the soundtrack to the Amiga’s limits is something [reassembler] already has practice with from his famous OutRun port, though, so we’re sure he’ll get it done.

All of this effort just to match the Mega Drive makes us appreciate what a capable little computer the Sega console was; why, you can even check your stocks with it! We’ve already featured [reassembler]’s Sonic port once before, but this music tool was interesting enough we couldn’t help ourselves coming back to it. The ability to play MOD files was pretty impressive when the Amiga came out, but nowadays all you need is a ten-cent microcontroller.

youtube.com/embed/E4dZzJFroAY?…


hackaday.com/2026/03/27/sega-m…


Using FireWire on a Raspberry Pi Before Linux Drops Support



Once the premium option for data transfers and remote control for high-end audiovisual and other devices, FireWire (IEEE 1394) has been dying a slow death ever since Apple and Sony switched over to USB. Recently Apple correspondingly dropped support for it in MacOS 26, and Linux will follow in 2029. The bright side of this when you’re someone like [Jeff Geerling] is that this means three more years of Linux support for one’s FireWire gear, including on the Raspberry Pi with prosumer gear from 1999.

If you’re not concerned about running the latest and greatest – and supported – software, then using an old or modern Mac or PC is of course an option, but with Linux support still available [Jeff] really wanted to get it working on Linux. Particularly on a Raspberry Pi in order to stay on brand.

Adding a FireWire port to a Raspberry Pi SBC is easy enough with an RPi 5 board, as you can put a Mini PCIe HAT on it into which you slot a mini PCIe to FireWire adapter. At this point lspci shows the new device, but to use it you need to recompile the Linux kernel with FireWire support. On the Raspberry Pi you then also need to enable it in the device tree overlay, as shown in the article.

With this you now have FireWire 400 support right off the bat, but to use the FireWire 800 port you need to also connect external power to the adapter, which [Jeff]’s Canon GL1 video camera with its FW400 port does not require, so he didn’t bother with that.

Capturing the video from the GL1 via FW400 was done using the dvgrab utility, and a subsequent capture attempt proved successful. This means that at least until 2029 [Jeff] will be happily using his GL1 camera this way.

Meanwhile over on the Dark Side, you can still happily install FireWire drivers made for older Windows versions on Windows 10 and 11, which is great news for e.g. people who have expensive DAW gear kicking around. Perhaps the demise of FireWire is still a long while off as long as you’re not too picky about the OS you’re running.

youtube.com/embed/BuKeW45OL-g?…


hackaday.com/2026/03/27/using-…


Water Cooling the MacBook Neo Laptop to Double Gaming Performance



Recently [ETA Prime] felt a bit underwhelmed by the raw performance of his MacBook Neo when it came to running for extended periods under full load, such as when gaming. Thus the obvious solution was to mildly over-engineer a cooling solution that takes care of issues like thermal throttling.

The Apple MacBook Neo with its repurposed iPhone 16 SoC seems to have leaned hard into answering the question of whether a smartphone can be a good general purpose personal computer. Ignoring the lack of I/O, it’s overall not a bad SoC for a laptop, but just as when you push the CPU and GPU on a smartphone, they do get pretty toasty. Due to the minimalistic cooling solution in the MacBook Neo it’ll easily hit the 105°C thermal throttle limit.

Technically the ‘heatsink’ for this laptop is the aluminium case, as the SoC is coupled via a thermal pad to the case. This doesn’t leave a lot of space and the case will heat soak pretty fast, while also making retrofitting a cooling solution a challenge.

Amusingly, simply replacing the existing thermal pad with a thin copper plate already dropped temperatures by about 20 degrees and massively reduced the thermal throttling of the A18 Pro SoC. In Geekbench 6 this bumped multi-core scores up by 9.7% and single-core by 15.2%. Definitely a promising glimpse at how much performance could still be extracted from this SoC.

For the next step a thermo-electric cooler (TEC) with built-in water cooling loop was used, which happened to be one of those overkill smartphone cooling systems that you’d stick to the back of the phone. Here the cooler was attached similarly, directly to the bottom aluminium of the case.

With this solution in place Geekbench 6 results mostly showed a solid bump for single-core results, while multi-core results showed diminishing returns. For Cinebench results this gave a 19% increase over stock cooling in multi-core and 23.5% for single-core.

Perhaps most interesting of all was that playing a video game for a while without thermal throttling meant framerates of over 80 FPS instead of hitting that thermal wall with 30 FPS. This shows just how much performance is left on the table due to the cooling choices for the system, even with this still rather inefficient cooling solution.

That said, this probably isn’t some kind of nefarious scheme by Apple, but rather the result of designing the thermal solution to not heat the case up to temperatures that are deemed unsafe or uncomfortable for the user. After all, if the case is the heatsink, you don’t want to feel like you’re literally handling one. This is sadly the compromise when venting out hot air is deemed an unacceptable solution.

youtube.com/embed/lswbpVtAhrc?…


hackaday.com/2026/03/27/water-…


Laser Welding Helps YouTuber Get Ahead with Aluminum Sheet



Laser welding is apparently the new hotness, in part because these sci-fi rayguns masquerading as tools are really cool. They cut! They weld! They make julienne fries! Well, maybe not that last one. In any case, perhaps feeling the need to cancel out that coolness as quickly as he possibly could, YouTuber [Wesley Treat] decided to make a giant version of his own head.

[Wesley] had previously been 3D scanned as part of the maker scans project, which you can find over on Printables. Those of you who really hate YouTubers, take note: finally you have something to take your frustrations out on. [Wesley] takes that model into Blender to decimate and decapitate – fans of the band Tyr may wonder if the model questioned his sword – before feeding that head through an online papercraft tool called PaperMaker to generate cut files for his CNC. There are also a lot of welding montages interspersed there as he practices with the new tool. [Wesley] did first try out his new raygun on steel in a previous video, but even knowing that, he makes the learning curve on these lasers look quite scalable.

While we’re not likely to follow in [Wesley]’s footsteps and create our own low-poly Zardoz – Zardozes? Zardii? – using a papercraft toolchain and CNC equipment with sheet aluminum is absolutely a great idea worth stealing. It’s very similar to what another hacker did with PCBs, though that project was perhaps more reasonable in scale and ego.

We are no strangers to papercrafts that use actual paper here, either, having featured everything from model retrocomputers to fully-mobile strandbeasts.

youtube.com/embed/eKwoDYrec4U?…


hackaday.com/2026/03/27/laser-…


Use a Gap-Cap to Embed Hardware In Your Next 3D Print



Embedding fasteners or other hardware into 3D prints is a useful technique, but it can bring challenges when applied to large or non-flat objects. The solution? Use a gap-cap.

The gap-cap technique is essentially a 3D printed lid. One pauses a print, inserts hardware, then covers it with a lid before resuming the print. The lid — or gap-cap — does three things. It seals in the part, it fills in empty space left above the component, and it provides a nice flat surface for subsequent layers which makes the whole process much cleaner and more reliable.

This whole technique is a bit reminiscent of the idea of manual supports, except that the inserted piece is intended to be sealed into the print along with the embedded hardware under it.

If you have never inserted anything larger than a nut or small magnet into a 3D print, you may wonder why one needs to bother with a gap-cap at all. The short version is that what works for printing over small bits doesn’t reliably carry over to big, odd-shaped bits.

For one thing, filament generally doesn’t like to stick to embedded hardware. As the size of the inserted object increases, especially if it isn’t flat, it increasingly complicates the printer’s ability to seal it in cleanly. Because most nuts are small, even if the printer gets a little messy it probably doesn’t matter much. But what works for small nuts won’t work for something like an LED strip mounted on its side, as shown here.
Cross-section of a print with an embedded LED strip. The print pauses (A), LED strip is inserted and capped with a gap-cap (B, C), then printing resumes and completes (D).
In cases like these a gap-cap is ideal. By pre-printing a form-fitting cap that covers the inserted hardware, one seals the component in snugly while providing a smooth, flat surface upon which to resume printing.

If needed, a bit of glue can help ensure a gap-cap doesn’t shift and cause trouble when printing resumes, but we can’t help but recall the pause-and-attach technique of embedding printed elements with the help of a LEGO-like connection. Perhaps a gap-cap designed in such a way would avoid needing any kind of adhesive at all.


hackaday.com/2026/03/27/use-a-…


Hackaday Podcast Episode 363: The History of PLA, Laser DIY PCBs, and Corporate Craziness



What did Elliot Williams and Al Williams read on Hackaday last week? Tune in and find out. After a bit of news, [Vik Oliver] chimes in with some deep PLA knowledge. Then the topic changed to pressure advance measurements, SDRs, making super-resolution PCBs with a fiber laser, and more.

Want to 3D print wire strippers? A robot arm? Or just make your own Z-80? Those hacks are in there, too.

For the long articles, we talked about old tech, including the :CueCat and the Iomega Zip Drive. Let us know if you had either one in the comments.

What do you think? Leave us a comment or record something and send it to our mailbag.

html5-player.libsyn.com/embed/…

Download a copy of the podcast with no corporate trackers in the clean MP3.


hackaday.com/2026/03/27/hackad…


Luthier Crafts Guitar from Cardboard



The people at Signal Snowboards are well known not only for producing quality snowboards, but for doing one-off builds out of unusual and perhaps questionable materials just to see what’s possible. From pennies to glass, if it can go on their press (and sometimes even if it can’t) they’ll build a snowboard out of it. At some point they were challenged to build different types of boards from paper products, which resulted in a few interesting final products and pushed them to see what else they could build from paper. Now they’re here with an acoustic guitar fashioned almost entirely from cardboard.

For this build, the luthiers are modeling the cardboard guitar on a 50s-era archtop jazz guitar called a Benedetto. The parts can’t all just be CNC machined out of stacks of glued-up cardboard, though. Not only because of the forces involved in their construction, but because the parts are crucial to a guitar’s sound. The top and back are pressed using custom molds to get exactly the right shape needed for a working soundboard, and the sides have another set of molds. The neck, which has the added duty of supporting the tension of the strings, gets special attention here as well. Each piece is filled with resin before being pressed in a manner surprisingly similar to producing snowboards. From there, the parts go to the luthier in Detroit.

At this point all of the parts are treated much as they would be when building a wood guitar. The parts are trimmed down on a table saw, glued together, and then cleaned up with a router before getting some other finishing treatments. From there the bridge, tuning pegs, pickups, and strings are added to wrap up the build. The result is impressive; without looking closely or being told, it’s not obvious that cardboard was the featured material here.

Some of the snowboards that Signal produced during their Every Third Thursday series had similar results as well, and we actually featured a few of their more tech-oriented builds around a decade ago like their LED snowboard and another one which changes music based on how the snowboard is being ridden.

youtube.com/embed/u1u_Z0yjq5g?…


hackaday.com/2026/03/27/luthie…


This Week in Security: Second Verse, Worse Than the First



Isn’t there some claim that events come in threes? After the extremely rare leak of the iOS Coruna exploit chain recently, we now have details from Google on a second significant exploit in the wild, dubbed Darksword.

Like Coruna, Darksword appears to have followed the path of government security contractors, to different government actors, to crypto stealer. It appears to focus on exploits already fixed in modern iOS releases, with most affecting iOS 18 and all patched by iOS 26.3.

Going from almost no public examples of modern iOS exploits to two in as many weeks is wild, so if mobile device security is of interest, be sure to check out the Google write-up.

Another FBI Router Warning


The second repeat security item – too early to be retro, but too important to ignore – is another alert from the FBI cautioning about end-of-life consumer network hardware under active exploitation, with the Bureau tracking almost 400,000 device infections so far.

Like the warning two weeks ago, the FBI calls out a handful of consumer routers – but this time they’re devices that may actually still be in service in some of our homes (or those of our less cutting-edge friends and family) – from Netgear, TP-Link, D-Link, and Zyxel:

  • Netgear DGN2200v4 and AC1900 R700
  • TP-Link Archer C20, TL-WR840N, TL-WR849N, and WR841N
  • D-Link DIR-818LW, 850L, and 860L
  • Zyxel EMG6726-B10A, VMG1312-B10D, VMG1312-T20B, VMG3925-B10A, VMG3925-B10C, VMG4825-B10A, VMG4927-B50A, VMG8825-T50K

While many of these devices are over ten years old, they still support modern networking – some of them even supporting 802.11ac (also called Wi-Fi 5). Unfortunately, since the manufacturers have ended support, publicly disclosed vulnerabilities have not been patched – and now never officially will be.

Once infected, the routers are enrolled in the AVRecon malware network, which includes the now-typical suite of behavior of remote control, remote VPN access to the internal and external networks, DNS hijacking, and DDoS (distributed denial of service) attacks. This sort of network malware is used by attackers to exploit internal systems like un-patched Windows or IOT devices on the local network, and as a launching point to hide behavior as coming from a certain country or state by using the public Internet connection as a VPN. It’s also often monetized by unscrupulous apps selling cheap VPN service.

The worst type of vulnerability affecting home routers is one which can be triggered remotely from the Internet without user interaction – for instance CVE-2024-12988 which allows arbitrary code execution remotely on Netgear devices, but even vulnerabilities which are only accessible from the local network can be combined with cross-site vulnerabilities or vulnerabilities in other devices to exploit home routers. A malware infection on a Windows system can be leveraged to install additional, permanent malware installs on routers and IOT devices, and malware on a router can be used to redirect the user to install more malware on an internal PC via manipulating the network, or allow direct attack of internal systems via a proxy.

A slight upside is that this batch of vulnerable hardware is often modern enough to run OpenWRT or other replacement firmware. OpenWRT supports thousands of routers and access points – and often forms the basis of the commercial firmware the device was shipped with, before the manufacturer abandoned it. Converting a device to OpenWRT may be intimidating for some, but for anyone with one of the listed devices, the time to try is now! It’s cheaper than buying a new device, and worst case scenario, you’d have to replace that router anyway!

You can use the OpenWRT Table of Hardware to see if there is a version for your device.

Unfortunately, vulnerabilities in home routers don’t offer many lessons: there’s rarely a need to log into them to see if there is a pending update, and almost nothing the typical home user can do except buy a new device when the manufacturer stops supplying security fixes.

Trivy Compromised


The Trivy security scanner suffered a breach itself, leading to a cascading series of breaches of other tools. Trivy is an automatic scanner for finding vulnerabilities in the dependencies of Docker and other container images, package repositories, and language packages in Go, PHP, Python, Node, and many other popular languages. Trivy is often integrated into the CI/CD (continuous integration and continuous deployment) process of other open and closed source projects and internal company processes.

According to the timeline published by Aqua, in late February 2026 a misconfigured GitHub workflow allowed the theft of authentication tokens for the Trivy project. While the attack was detected and the credentials revoked, not all of them were properly revoked, which allowed the attackers to complete the attack on March 19, 2026.

Once compromised, all but one release of the Trivy GitHub Actions were replaced with trojaned malicious copies, spreading the malware payload to any project which used the Trivy scanner actions.

GitHub Actions is the part of GitHub that allows scripts to run when repository events like a pull request or merge occur. Actions can be used to check that a change compiles properly, scan for security issues, generate documentation, or produce release binaries, and they are typically allowed to make changes to the repository itself. GitHub workflows can include actions from other repositories via the Action Marketplace. By replacing the Trivy actions, the attackers essentially gained access to every repository using Trivy to scan for vulnerabilities in its own codebase.

The hijacked Trivy actions collected and exfiltrated access tokens for Docker, Google Cloud, Azure, and AWS, Git credentials, SSH keys, and any other secrets from projects using the Trivy actions. With these keys, the controllers of the original malware are able to attack those projects directly, such as the immensely popular LiteLLM Python interface to AI LLM models from multiple companies.

The compromise of LiteLLM also stole credentials to cloud services, SSH, git, Docker, and Kubernetes on any system that ran the trojaned setup scripts, as well as infecting any connected Kubernetes systems found in the configurations.

There are also reports that the malware actors are infecting NPM node packages with malware which automatically updates itself from a blockchain-based control system and steals NPM authentication tokens to inject itself into any NPM packages the victim may have authored.

Supply-chain attacks have been happening for years with varying levels of success, but the Trivy attack may be the most successful yet at spreading compromised packages into multiple package repositories. It’s difficult to avoid supply chain attacks, especially when the vulnerability scanner itself is the source of the problem. GitHub has introduced immutable releases – tagged build versions which cannot be updated once released – and the immutable release of Trivy was the only version not compromised by the attackers. As more packages shift to immutable versions it may become harder to insert malware into the supply chain, but we’re nowhere near a tipping point of projects using immutable releases yet.
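On the consuming side, one practical mitigation is pinning third-party actions to a full commit SHA instead of a mutable tag, so a hijacked release can’t silently swap code out from under you. As a rough illustration (a quick hypothetical audit script, not an endorsed tool), this flags `uses:` lines in workflow files that aren’t pinned to a 40-character commit hash:

```python
# Rough audit script: flag GitHub Actions 'uses:' references in workflow files
# that point at a mutable tag or branch instead of a full 40-character commit
# SHA.  Illustrative only; adjust the path and patterns to taste.
import pathlib
import re

USES_RE = re.compile(r"^\s*(?:-\s*)?uses:\s*([^\s#]+)")
SHA_RE = re.compile(r"@[0-9a-f]{40}$")

def unpinned_actions(workflow_dir: str = ".github/workflows"):
    for path in pathlib.Path(workflow_dir).glob("*.y*ml"):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            match = USES_RE.match(line)
            if not match:
                continue
            ref = match.group(1)
            # Local actions ("./path") and docker:// references are out of scope.
            if ref.startswith(("./", "docker://")):
                continue
            if not SHA_RE.search(ref):
                yield path, lineno, ref

if __name__ == "__main__":
    for path, lineno, ref in unpinned_actions():
        print(f"{path}:{lineno}: not SHA-pinned: {ref}")
```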


hackaday.com/2026/03/27/this-w…


Technological Sovereignty and Strategic Autonomy



Arturo Di Corinto at Demarcazioni, the geopolitics festival

It was a real pleasure to take part in Demarcazioni, the first geopolitics festival, which has just concluded in Ascoli Piceno.

On Friday, April 20th, 2026, in my session, “The Corridors of Power” (“I corridoi del potere”), which also featured Minister Francesco Lollobrigida and the president of the Marche region, Francesco Acquaroli, we discussed digital and technological sovereignty together with Giulia Pastorella, Franco Spicciariello, and other speakers, moderated by Porta a Porta colleague Paola Ferazzoli.

For my part, I stressed that today digital sovereignty takes the form of strategic autonomy, meaning that it requires establishing and actively maintaining dynamic, targeted international collaborations in order to proactively confront threats to that very sovereignty (a framing I owe to Roberto Baldoni), and that sovereignty is not exercised solely through control over the data generated by citizens, businesses, and public administration while relying on secure and trustworthy technologies.

Sovereignty and autonomy are in fact also guaranteed through political and regulatory control. That is why there is no sovereignty without Europe.
Of course, to escape the paradox of sovereignty without technology, Europe must do more and better develop its capacity to innovate and to produce useful, reliable technologies.

Italy is trying, through the Agenzia per la Cybersicurezza Nazionale and in collaboration with many partners such as the European Cybersecurity Competence Centre (ECCC).

So, thank you to the organizers for this excellent event.

DEMARCAZIONI Geopolitics Festival 2026, Ascoli Piceno


dicorinto.it/formazione/sovran…


Understand Your Printer Better With The Interactive Inkjet Simulator



A screenshot of the inkjet simulator project

Love them or hate them, inkjets are still a very popular technology for putting text and images on paper, and with good reason. They work and are inexpensive, or would be, if not for the cartridge racket. There’s a bit of mystery about exactly what’s going on inside the humble inkjet that can be difficult to describe in words, though, which is why [Dennis Kuppens] recently released his Interactive Printing Simulator.

[Dennis] would likely object to that introduction, however, as the simulator targets functional inkjet printing, not graphical. Think traces of conductive ink, or light masks where even a single droplet out-of-place can lead to a non-functional result. If you’re just playing with this simulator to get an idea of what the different parameters are, and the effects of changing them, you might not care. There are some things you can get away with in graphics printing you really cannot with functional printing, however, so this simulator may seem a bit limited in its options to those coming from the artistic side of things.

You can edit parameters of the nozzle head manually, or select a number of industrial printers that come pre-configured. Likewise there are pre-prepared patterns, or you can try and draw the Jolly Wrencher as the author clearly failed to do. Then hit ‘start printing’ and watch the dots get laid down.

[Dennis] has released it under an AGPL-3.0 license, but notes that he doesn’t plan on developing the project further. If anyone else wants to run with this, they are apparently more than welcome to, and the license enables that.

Did you know that there’s an inkjet in space? Hopefully NASA got a deal on cartridges. If not, maybe they could try hacking the printer for continuous ink flow. Of course that’s all graphics stuff; functional printing is more like this inkjet 3D printer.


hackaday.com/2026/03/27/unders…


This Flow Battery Operates With No Pump Required



Flow batteries are rather unique. They generate electricity from two fluids flowing on either side of a membrane. Typically, this involves the use of some kind of pump to get everything moving. However, [Dusan Caf] has demonstrated another way to make a flow battery operate.

[Dusan]’s build is a zinc-iodide flow battery. It uses two 3D printed reservoirs, each holding a ZnI2 solution and a graphite electrode. Unlike traditional flow batteries, there is no mechanism included to mechanically push the fluid around. Instead, fluid motion is generated by the magnetohydrodynamic effect, which you may know from that Japanese boat that didn’t work very well.

When charging the liquid-based cell, current flows through the conductive electrolyte that sits between both electrodes. This sees zinc electroplated onto the graphite anode, while iodide ions are oxidized at the cathode. There’s also a permanent magnet installed beneath the electrodes, which provides a stable magnetic field. This field, combined with the current flowing through the electrolyte, sees the Lorentz force pushing the electrolyte along, allowing the flow battery to operate. When the cell is being discharged, the reactions happen in reverse, with the flow through the electrodes changing direction in turn. Neatly, as current draw or supply increases, the flow rate increases in turn, naturally regulating the system.
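To get a feel for why this only works at small scale, consider the force actually available: the Lorentz body force on the electrolyte is roughly F = B · I · L, the field strength times the current times the length of the current path sitting in that field. With the sort of numbers a small demo cell might see (all placeholders below, not measurements from [Dusan]’s build), the push amounts to a fraction of a millinewton:

```python
# Back-of-the-envelope Lorentz force for a magnetohydrodynamically pumped cell:
# F = B * I * L for a current path of length L sitting in a field B.
# All numbers are placeholders for a small demo cell, not measured values.
B_TESLA = 0.3        # field from a small permanent magnet near its surface
CURRENT_A = 0.05     # charge/discharge current through the electrolyte
PATH_LEN_M = 0.02    # length of the current path exposed to the field

force_n = B_TESLA * CURRENT_A * PATH_LEN_M
print(f"Lorentz force on the electrolyte: {force_n * 1e3:.2f} mN")
print(f"...equivalent to the weight of about {force_n / 9.81 * 1e3:.3f} g")
```

A few tenths of a millinewton is plenty to circulate a thimbleful of electrolyte, but nowhere near enough to pump the liters per minute a grid-scale stack would need, which is exactly the limitation noted below.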

[Dusan] notes this isn’t feasible for large batteries, due to the limited flow rate, but it’s fine for small-scale demos regarding the operation of a flow battery. We’ve featured some more typical flow battery designs in the past, too.

youtube.com/embed/p2LaPcJia7U?…

youtube.com/embed/i3Abqr1r-mk?…


hackaday.com/2026/03/27/this-f…