This Week In Security: Getting Back Up to Speed
Editor’s Note: Over the course of nearly 300 posts, Jonathan Bennett set a very high bar for this column, so we knew it needed to be placed in the hands of somebody who could do it justice. That’s why we’re pleased to announce that Mike Kershaw AKA [Dragorn] will be taking over This Week In Security! Mike is a security researcher with decades of experience, a frequent contributor to 2600, and perhaps best known as the creator of the Kismet wireless scanner.
He’ll be bringing the column to you regularly going forward, but given the extended period since we last checked in with the world of (in)security, we thought it would be appropriate to kick things off with a review of some of the stories you may have missed.
Hacking like it’s 2009, or 1996
Hello all! It’s a pleasure to be here, and a theme of the new year so far seems to be bringing back the old bugs – what’s old is new again, and 2026 has already seen fixes for some increasingly ancient bugs.
Telnet
Reported on the OpenWall list, the GNU inetutils suite brings an update to its telnet server (yes, telnet) that closes a login bug, present since 2015, linked to environment variable sanitization.
Under the covers, the telnet daemon uses /bin/login to perform user authentication, but it also has the ability to pass environment variables from the client to the host. One of these variables, USER, is passed directly to login — unfortunately with no checking of what it contains. By simply passing a USER variable of “-froot”, login would accept the “-f” argument, or “treat this user as already logged in”. Instant root!
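As a sketch of the class of fix involved, the patched daemon has to refuse client-supplied values that login could mistake for options. The function below is a hypothetical illustration in Python (the real inetutils patch is in C and differs in detail):

```python
import re

def safe_login_args(user: str) -> list[str]:
    """Build an argument list for /bin/login from a client-supplied
    USER variable, refusing anything that could be parsed as an option.

    Hypothetical sketch of the kind of check the fix adds.
    """
    # A leading '-' is what lets "-froot" be read as login's -f flag
    # ("treat this user as already authenticated").
    if user.startswith("-") or not re.fullmatch(r"[A-Za-z0-9._-]+", user):
        raise ValueError("refusing suspicious USER value: %r" % user)
    # "--" conventionally ends option parsing for getopt-based programs,
    # so the username can never be interpreted as a flag.
    return ["/bin/login", "--", user]

args = safe_login_args("alice")   # ["/bin/login", "--", "alice"]
```

The combination of an allow-list for characters and an explicit end-of-options marker is belt-and-suspenders, which is about right for anything that ends in a root shell when it goes wrong.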
If this sounds vaguely familiar, it might be because the exact same bug was found in the Solaris telnetd service in 2007, including using the “-f” argument in the USER variable. An extremely similar bug targeting other variables (LD_PRELOAD) was found in the FreeBSD telnetd service in 2009, and other historical similar bugs have afflicted AIX and other Unix systems in the past.
Of course, nobody in 2026 should be running a telnet service, especially not exposed to the Internet, but it’s always interesting to see the old style of bugs resurface.
Glibc
Also reported on the OpenWall list, glibc — the GNU LibC library which underpins most binaries on Linux systems, providing kernel interfaces, file and network I/O, string manipulation, and most other common functions programmers expect — has killed another historical bug, present since 1996 in the DNS resolver functions which could be used to expose some locations in the stack.
Although not directly exploitable, the getnetbyaddr resolution functions could still help break ASLR, making other exploits viable.
Address Space Layout Randomization (ASLR) is a common method of randomizing where in memory a process and its data are loaded, making trivial exploits like buffer overflows much harder to execute. Being able to expose the location of the binary in memory by leaking stack locations weakens this mechanism, possibly exposing a vulnerable program to more traditional attacks.
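The arithmetic behind such a leak is simple enough to show. ASLR slides an entire module by a single random offset, so one leaked pointer to a known symbol is all the subtraction needs (the addresses below are made up for illustration):

```python
def rebase(leaked_addr: int, known_offset: int) -> int:
    """Recover a module's randomized load base from one leaked pointer.

    If a leak reveals the runtime address of any symbol whose offset
    within the module is known from the binary on disk, subtraction
    recovers the randomized base for the whole module.
    """
    return leaked_addr - known_offset

# Example: a leaked return address, and that symbol's offset in the file.
base = rebase(0x7f3a12c45120, 0x45120)   # base is 0x7f3a12c00000
# Every other gadget's runtime address is then base + its file offset.
gadget = base + 0x1b33
```

That is why even an "unexploitable" stack disclosure matters: it converts every offset an attacker already knows into a live address.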
MSHTML
In February, Microsoft released fixes under CVE-2026-21513 for the MSHTML Trident renderer – the one used in Internet Explorer 5. Apparently still present in Windows, and somehow still reachable through specific shortcut links, it’s the IE5 and ActiveX gift that keeps on giving, and it is being actively exploited.
Back in the modern era…
After that bit of computing nostalgia, let’s look at some interesting stories involving slightly more contemporary subjects.
Server-side JS
It’s easy to think of JavaScript as simply a client-side language, but of course it also powers server frameworks like Node.js, and React, the latter being used heavily in the server components of the popular Next.js framework.
Frameworks like React blur the lines between client and server, using the same coding style and framework conventions in the browser and in the server-side engine. React and Next.js allow server-side functions to be called from the client and mix client- and server-side rendering of content, but due to a deserialization bug, React allowed any function to be called by a non-privileged client.
Cleverly named React2Shell, it has rapidly become a target for bulk exploitation, with Internet-scale monitoring firm GreyNoise reporting 8 million logged attempts by early January 2026. At this point, it’s safe to assume any Internet-exposed vulnerable service has been compromised.
Too much AI
As previously covered by Hackaday, the Curl project is officially ending bug bounties due to the flood of bogus submissions from AI tools. Founder and project lead Daniel Stenberg has been critical of AI-generated bug bounty reports in the past, and has finally decided the cost is no longer worth the gains.
In many ways this calls to mind the recent conflict between the ffmpeg team and Google, where Google Project Zero discovered a flaw in the decoding of a relatively obscure codec, assigning it a 90-day disclosure deadline and raising the ire of the open source volunteer team.
The influx of AI-generated reports is the latest facet of the friction between volunteer-led open source projects, and paid bug bounties or other commercial interests. Even with sponsorship backing, the reach of popular open-source libraries and tools like Curl, OpenSSL, BusyBox, and more is often far, far greater than the compensation offered by the biggest users of those libraries — often trillion dollar multinational companies.
Many open source projects are the passion project of a small set of people, even if they become massively popular and critical to commercial tools and infrastructure. While AI tooling may generate actionable reports, when it is deployed by users who may not themselves be programmers and are unable to verify the results, it puts the time drain of determining the validity, and at times, arguing with the submitter, entirely on the project maintainers. As the asymmetry increases, more small open source teams may start rejecting clearly AI generated reports as well.
OpenSSL, Again
The OpenSSL library, another critical piece of Internet infrastructure maintained by a very small team, suffers from a vulnerability in PKCS12 parsing. It appears to be a relatively traditional memory bug, leading to null pointer dereferences, stack corruption, or buffer overflows, which in the best case cause a crash and in the worst allow arbitrary code execution. (Insert obligatory XKCD reference here.)
PKCS12 is a certificate storage format which bundles multiple certificates and private keys in a single file – similar to a zip or tar for certificate credentials. Fortunately, PKCS12 files are typically already trusted, and methods to upload them are not often exposed to the Internet at large. Unfortunately, potential code execution, even when limited to a trusted network interface, is rarely a positive thing.
Notepad++
The Notepad++ team has released a write-up about the infrastructure compromise which appears to have enabled a state-level actor to deliver infected updates to select customers.
Notepad++ is a fairly popular alternative to the classic Notepad app found on Windows, with support for syntax highlighting, multiple programming languages, and basic IDE functionality. According to the write-up by the team based on findings by independent researchers, in June 2025 the shared hosting service which served updates to Notepad++ was compromised, and remained so until September of 2025.
The root of the issue lies in the update library WinGUp, used by Notepad++, which did not validate the downloaded update, leaving it vulnerable to redirection and modification. With control of the update servers, the attackers were able to send specific customers to modified, trojaned updates.
An important take-away for all developers: if your project can self-update, make sure the update process is secure against malicious actors. That can mean not only the complex business of validating the certificate chain, but sometimes also embedding trusted certificates in your software (or firmware) and using them to verify that the update file itself has not been modified.
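As a minimal sketch of that verification step, assuming a digest delivered through a separately trusted channel (real updaters, including a fixed WinGUp, should verify an asymmetric signature against a key embedded in the application, so a compromised download server cannot forge both file and digest):

```python
import hashlib
import hmac

def verify_update(payload: bytes, expected_sha256_hex: str) -> bool:
    """Check a downloaded update against a digest obtained out-of-band.

    Minimal sketch: hash the downloaded bytes and compare against the
    expected digest. This only helps if the digest arrives over a
    channel the attacker does not control (e.g. a signed manifest).
    """
    actual = hashlib.sha256(payload).hexdigest()
    # Constant-time comparison avoids leaking how many leading
    # characters of the digest matched.
    return hmac.compare_digest(actual, expected_sha256_hex)
```

The key property: the thing you trust (a pinned key or digest) ships inside the application, not on the same server the attacker just compromised.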
WiFi Isolation
Finally, we have a new paper on WiFi security, with a new attack dubbed “AirSnitch”. From a team of collaborators including Mathy Vanhoef (a frequent publisher of modern WiFi attacks including the WPA2 KRACK attacks, and a driving force behind deprecating WPA2), AirSnitch defeats a protection in wireless networks known as “client isolation”.
Client isolation acts essentially as a firewall mechanism, which attempts to offer wireless clients an additional layer of security by preventing communication between clients on the same network. Optimally, this would prevent a hostile or infected client from communicating with other clients, despite being on the same shared network.
On a WPA-encrypted WiFi network, each client has an individual key used for encryption, plus a shared group key used by all clients for broadcast and multicast traffic. For one client to communicate with another, the access point must decrypt the traffic from the first and re-encrypt it to the second, so preventing communication between clients should be as simple as the access point declining to do so. However, by cloning the MAC address of the target client, establishing a second connection to the access point, and further manipulating the access point’s internal state with injected packets, a hostile device can cause the access point to share the target’s data, essentially converting the network’s behavior to that of a legacy Ethernet hub.
How significantly this might impact you will vary wildly, and likely the full impacts of the attack will take some time to be understood. An attacker still needs access to the network – for a WPA network this means the PSK must be known, and for an Enterprise network, login credentials are still required. Typically home networks don’t use client isolation at all – most home users expect devices to be able to communicate directly, and most public access networks use no encryption at all, leaving clients exposed to the same level of risk by default. Networks with untrusted clients, like educational campus networks or business bring-your-own-device networks, are likely at the greatest risk, but time will tell.
Linux Hotplug Events Explained
There was a time when Linux was much simpler. You’d load a driver, it would find your device at boot up, or it wouldn’t. That was it. Now, though, people plug and unplug USB devices all the time and expect the system to react appropriately. [Arcanenibble] explains all “the gory details” about what really happens when you plug or unplug a device.
You might think, “Oh, libusb handles that.” But, of course, it doesn’t do the actual work. In fact, there are two possible backends: netlink or udev. However, the libusb developers strongly recommend udev. Turns out, udev also depends on netlink underneath, so if you use udev, you are sort of using netlink anyway.
If netlink sounds familiar, it is a generic BSD-socket-like API the kernel can use to send notifications to userspace. The post shows example code for listening to kernel event messages via netlink, just like udev does.
When udev sees a device add message from netlink, it resends a related udev message using… netlink! Turns out, netlink can send messages between two userspace programs, not just between the kernel and userspace. That means that the code to read udev events isn’t much different from the netlink example.
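The kernel side of those messages is easy to illustrate: a uevent read from an AF_NETLINK socket is a summary line like add@/devices/…, followed by NUL-separated KEY=VALUE pairs. A rough Python sketch of the parsing (udev’s own re-broadcast messages prepend a binary header, which this ignores):

```python
def parse_uevent(payload: bytes) -> dict[str, str]:
    """Parse a raw kernel uevent payload into its properties.

    Kernel uevents are an "action@devpath" summary followed by
    NUL-separated KEY=VALUE pairs. Sketch for illustration only.
    """
    fields = payload.split(b"\x00")
    props = {}
    for field in fields[1:]:   # fields[0] is the "action@devpath" summary
        if b"=" in field:
            key, _, value = field.partition(b"=")
            props[key.decode()] = value.decode()
    return props

# A plausible (made-up) payload for a USB device being plugged in:
msg = (b"add@/devices/usb1/1-1\x00ACTION=add\x00"
       b"DEVPATH=/devices/usb1/1-1\x00SUBSYSTEM=usb\x00")
props = parse_uevent(msg)
```

The ACTION, DEVPATH, and SUBSYSTEM keys are what a listener typically filters on before deciding whether it cares about the event.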
The next hoop is the udev event format. It uses a version number, but it seems stable at version 0xfeedcafe. Part of the structure contains a hash code that allows a bloom filter to quickly weed out uninteresting events, at least most of the time.
The post documents much of the obscure inner workings of USB hotplug events. However, there are some security nuances that aren’t clear. If you can explain them, we bet [Arcanenibble] would like to hear from you.
If you like digging into the Linux kernel and its friends, you might want to try creating kernel modules. If you get overwhelmed trying to read the kernel source, maybe go back a few versions.
Exploits and vulnerabilities in Q4 2025
The fourth quarter of 2025 went down as one of the most intense periods on record for high-profile, critical vulnerability disclosures, hitting popular libraries and mainstream applications. Several of these vulnerabilities were picked up by attackers and exploited in the wild almost immediately.
In this report, we dive into the statistics on published vulnerabilities and exploits, as well as the known vulnerabilities leveraged with popular C2 frameworks throughout Q4 2025.
Statistics on registered vulnerabilities
This section contains statistics on registered vulnerabilities. The data is taken from cve.org.
Let’s take a look at the number of registered CVEs for each month over the last five years, up to and including the end of 2025. As predicted in our last report, Q4 saw a higher number of registered vulnerabilities than the same period in 2024, and the year-end totals also cleared the bar set the previous year.
Total published vulnerabilities by month from 2021 through 2025
Now, let’s look at the number of new critical vulnerabilities (CVSS > 8.9) for that same period.
Total number of published critical vulnerabilities by month from 2021 to 2025
The graph shows that the volume of critical vulnerabilities remains quite substantial; however, in the second half of the year, we saw those numbers dip back down to levels seen in 2023. This was due to vulnerability churn: a handful of published security issues were revoked. The widespread adoption of secure development practices and the move toward safer languages also pushed those numbers down, though even that couldn’t stop the overall flood of vulnerabilities.
Exploitation statistics
This section contains statistics on the use of exploits in Q4 2025. The data is based on open sources and our telemetry.
Windows and Linux vulnerability exploitation
In Q4 2025, the most prevalent exploits targeted the exact same vulnerabilities that dominated the threat landscape throughout the rest of the year. These were exploits targeting Microsoft Office products with unpatched security flaws.
Kaspersky solutions detected the most exploits on the Windows platform for the following vulnerabilities:
- CVE-2018-0802: a remote code execution vulnerability in Equation Editor.
- CVE-2017-11882: another remote code execution vulnerability, also affecting Equation Editor.
- CVE-2017-0199: a vulnerability in Microsoft Office and WordPad that allows an attacker to assume control of the system.
The list has remained unchanged for years.
We also see that attackers continue to adapt exploits for directory traversal vulnerabilities (CWE-35) when unpacking archives in WinRAR. They are being heavily leveraged to gain initial access via malicious archives on the Windows operating system:
- CVE-2023-38831: a vulnerability stemming from the improper handling of objects within an archive.
- CVE-2025-6218 (formerly ZDI-CAN-27198): a vulnerability that enables an attacker to specify a relative path and extract files into an arbitrary directory. This can lead to arbitrary code execution. We covered this vulnerability in detail in our Q2 2025 report.
- CVE-2025-8088: a vulnerability we analyzed in our previous report, analogous to CVE-2025-6218. The attackers used NTFS streams to circumvent controls on the directory into which files were being unpacked.
As in the previous quarter, we see a rise in the use of archiver exploits, with fresh vulnerabilities increasingly appearing in attacks.
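The underlying mistake in all of these archiver bugs is extracting a member to whatever path its name claims. A sketch of the guard a patched extractor effectively performs, in Python for illustration (real archivers must also handle NTFS streams and symlinks, as CVE-2025-8088 showed):

```python
import os

def safe_member_path(dest_dir: str, member_name: str) -> str:
    """Resolve an archive member name inside dest_dir, rejecting traversal.

    The resolved path must stay inside the extraction directory, so
    names like "../../Startup/evil.exe" are refused. Sketch only.
    """
    dest_dir = os.path.realpath(dest_dir)
    target = os.path.realpath(os.path.join(dest_dir, member_name))
    if os.path.commonpath([dest_dir, target]) != dest_dir:
        raise ValueError("archive member escapes extraction dir: %r"
                         % member_name)
    return target
```

Resolving with realpath before the containment check matters: a lexical check alone can be bypassed with symlinks planted by an earlier member of the same archive.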
Below are the exploit detection trends for Windows users over the last two years.
Dynamics of the number of Windows users encountering exploits, Q1 2024 – Q4 2025. The number of users who encountered exploits in Q1 2024 is taken as 100%
The vulnerabilities listed here can be used to gain initial access to a vulnerable system. This highlights the critical importance of timely security updates for all affected software.
On Linux-based devices, the most frequently detected exploits targeted the following vulnerabilities:
- CVE-2022-0847, also known as Dirty Pipe: a vulnerability that allows privilege escalation and enables attackers to take control of running applications.
- CVE-2019-13272: a vulnerability caused by improper handling of privilege inheritance, which can be exploited to achieve privilege escalation.
- CVE-2021-22555: a heap overflow vulnerability in the Netfilter kernel subsystem.
- CVE-2023-32233: another vulnerability in the Netfilter subsystem that creates a use-after-free condition, allowing for privilege escalation due to the improper handling of network requests.
Dynamics of the number of Linux users encountering exploits, Q1 2024 – Q4 2025. The number of users who encountered exploits in Q1 2024 is taken as 100%
We are seeing a massive surge in Linux-based exploit attempts: in Q4, the number of affected users doubled compared to Q3. Our statistics show that the final quarter of the year accounted for more than half of all Linux exploit attacks recorded for the entire year. This surge is primarily driven by the rapidly growing number of Linux-based consumer devices. This trend naturally attracts the attention of threat actors, making the installation of security patches critically important.
Most common published exploits
The distribution of published exploits by software type in Q4 2025 largely mirrors the patterns observed in the previous quarter. The majority of exploits we investigate through our monitoring of public research, news, and PoCs continue to target vulnerabilities within operating systems.
Distribution of published exploits by platform, Q1 2025
Distribution of published exploits by platform, Q2 2025
Distribution of published exploits by platform, Q3 2025
Distribution of published exploits by platform, Q4 2025
In Q4 2025, no public exploits for Microsoft Office products emerged; the bulk of the vulnerabilities were issues discovered in system components. When calculating our statistics, we placed these in the OS category.
Vulnerability exploitation in APT attacks
We analyzed which vulnerabilities were utilized in APT attacks during Q4 2025. The following rankings draw on our telemetry, research, and open-source data.
TOP 10 vulnerabilities exploited in APT attacks, Q4 2025
In Q4 2025, APT attacks most frequently exploited fresh vulnerabilities published within the last six months. We believe that these CVEs will remain favorites among attackers for a long time, as fixing them may require significant structural changes to the vulnerable applications or the user’s system. Often, replacing or updating the affected components requires a significant amount of resources. Consequently, the probability of an attack through such vulnerabilities may persist. Some of these new vulnerabilities are likely to become frequent tools for lateral movement within user infrastructure, as the corresponding security flaws have been discovered in network services that are accessible without authentication. This heavy exploitation of very recently registered vulnerabilities highlights the ability of threat actors to rapidly implement new techniques and adapt old ones for their attacks. Therefore, we strongly recommend applying the security patches provided by vendors.
C2 frameworks
In this section, we will look at the most popular C2 frameworks used by threat actors and analyze the vulnerabilities whose exploits interacted with C2 agents in APT attacks.
The chart below shows the frequency of known C2 framework usage in attacks against users during Q4 2025, according to open sources.
TOP 10 C2 frameworks used by APTs to compromise user systems in Q4 2025
Despite the significant footprints it can leave when used in its default configuration, Sliver continues to hold the top spot among the most common C2 frameworks in our Q4 2025 analysis. Mythic and Havoc were second and third, respectively. After reviewing open sources and analyzing malicious C2 agent samples that contained exploits, we found that the following vulnerabilities were used in APT attacks involving the C2 frameworks mentioned above:
- CVE-2025-55182: a React2Shell vulnerability in React Server Components that allows an unauthenticated user to send commands directly to the server and execute them from RAM.
- CVE-2023-36884: a vulnerability in the Windows Search component that allows the execution of commands on a system, bypassing security mechanisms built into Microsoft Office applications.
- CVE-2025-53770: a critical insecure deserialization vulnerability in Microsoft SharePoint that allows an unauthenticated user to execute commands on the server.
- CVE-2020-1472, also known as Zerologon, allows for compromising a vulnerable domain controller and executing commands as a privileged user.
- CVE-2021-34527, also known as PrintNightmare, exploits flaws in the Windows print spooler subsystem, enabling remote access to a vulnerable OS and high-privilege command execution.
- CVE-2025-8088 and CVE-2025-6218 are similar directory-traversal vulnerabilities that allow extracting files from an archive to a predefined path without the archiving utility notifying the user.
The set of vulnerabilities described above suggests that attackers have been using them for initial access and early-stage maneuvers in vulnerable systems to create a springboard for deploying a C2 agent. The list of vulnerabilities includes both zero-days and well-known, established security issues.
Notable vulnerabilities
This section highlights the most noteworthy vulnerabilities that were publicly disclosed in Q4 2025 and have a publicly available description.
React2Shell (CVE-2025-55182): a vulnerability in React Server Components
We typically describe vulnerabilities affecting a specific application. CVE-2025-55182 stood out as an exception, as it was discovered in React, a library primarily used for building web applications. This means that exploiting the vulnerability could potentially disrupt a vast number of applications that rely on the library. The vulnerability itself lies in the interaction mechanism between the client and server components, which is built on sending serialized objects. If an attacker sends serialized data containing malicious functionality, they can execute JavaScript commands directly on the server, bypassing all client-side request validation. Technical details about this vulnerability and an example of how Kaspersky solutions detect it can be found in our article.
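The underlying pattern, deserialization that can invoke arbitrary functions, is easiest to demonstrate outside of React. This Python pickle analogy is not React’s mechanism, but it is the same class of bug: the serialized data itself decides what gets called.

```python
import pickle

class Evil:
    """An object whose mere deserialization invokes a function."""
    def __reduce__(self):
        # The unpickler will call abs(-5) for us; in a real attack this
        # would be os.system or similar instead of a harmless builtin.
        return (abs, (-5,))

blob = pickle.dumps(Evil())
result = pickle.loads(blob)   # result == 5: deserializing ran code
```

The React fix, like every fix for this bug class, comes down to never letting untrusted serialized input name the function to be invoked.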
CVE-2025-54100: command injection during the execution of curl (Invoke-WebRequest)
This vulnerability represents a data-handling flaw that occurs when retrieving information from a remote server: when executing the curl or Invoke-WebRequest command, Windows launches Internet Explorer in the background. This can lead to a cross-site scripting (XSS) attack.
CVE-2025-11001: a vulnerability in 7-Zip
This vulnerability reinforces the trend of exploiting security flaws found in file archivers. The core of CVE-2025-11001 lies in the incorrect handling of symbolic links. An attacker can craft an archive so that when it is extracted into an arbitrary directory, its contents end up in the location pointed to by a symbolic link. The likelihood of exploiting this vulnerability is significantly reduced because utilizing such functionality requires the user opening the archive to possess system administrator privileges.
This vulnerability was associated with a wave of misleading news reports claiming it was being used in real-world attacks against end users. This misconception stemmed from an error in the security bulletin.
RediShell (CVE-2025-49844): a vulnerability in Redis
The year 2025 saw a surge in high-profile vulnerabilities, several of which were significant enough to earn a unique nickname. This was the case with CVE-2025-49844, also known as RediShell, which was unveiled during a hacking competition. This vulnerability is a use-after-free issue related to how the load command functions within Lua interpreter scripts. To execute the attack, an attacker needs to prepare a malicious script and load it into the interpreter.
As with any named vulnerability, RediShell was immediately weaponized by threat actors and spammers, albeit in a somewhat unconventional manner. Because technical details were initially scarce following its disclosure, the internet was flooded with fake PoC exploits and scanners claiming to test for the vulnerability. In the best-case scenario, these tools were non-functional; in the worst, they infected the system. Notably, these fraudulent projects were frequently generated using LLMs. They followed a standardized template and often cross-referenced source code from other identical fake repositories.
CVE-2025-24990: a vulnerability in the ltmdm64.sys driver
Driver vulnerabilities are often discovered in legitimate third-party applications that have been part of the official OS distribution for a long time. Thus, CVE-2025-24990 has existed within code shipped by Microsoft throughout nearly the entire history of Windows: the vulnerable driver has shipped since at least Windows 7 as a third-party driver for Agere modems. According to Microsoft, the driver is no longer supported and, following the discovery of the flaw, was removed from the OS distribution entirely.
The vulnerability itself is straightforward: insecure handling of IOCTL codes leading to a null pointer dereference. Successful exploitation can lead to arbitrary command execution or a system crash resulting in a blue screen of death (BSOD) on modern systems.
CVE-2025-59287: a vulnerability in Windows Server Update Services (WSUS)
CVE-2025-59287 represents a textbook case of insecure deserialization. Exploitation is possible without any form of authentication; due to its ease of use, this vulnerability rapidly gained traction among threat actors. Technical details and detection methodologies for our product suite have been covered in our previous advisories.
Conclusion and advice
In Q4 2025, the rate of vulnerability registration showed no signs of slowing down. Consequently, consistent monitoring and the timely application of security patches are more critical than ever. To ensure resilient defense, it is vital to regularly assess and remediate known vulnerabilities while implementing technology designed to mitigate the impact of potential exploits.
Continuous monitoring of infrastructure, including the network perimeter, allows for the timely identification of threats and prevents them from escalating. Effective security also demands tracking the current threat landscape and applying preventative measures to minimize risks associated with system flaws. Kaspersky Next serves as a reliable partner in this process, providing real-time identification and detailed mapping of vulnerabilities within the environment.
Securing the workplace remains a top priority. Protecting corporate devices requires the adoption of solutions capable of blocking malware and preventing it from spreading. Beyond basic measures, organizations should implement adaptive systems that allow for the rapid deployment of security updates and the automation of patch management workflows.
Building a Heading Sensor Resistant To Magnetic Disturbances
Light aircraft often use a heading indicator as a way to know where they’re going. Retired instrumentation engineer [Don Welch] recreated a heading indicator of his own, using cheap off-the-shelf hardware to get the job done.
The heart of the build is a Teensy 4.0 microcontroller. It’s paired with a BNO085 inertial measurement unit (IMU), which combines a 3-axis gyro, 3-axis accelerometer, and 3-axis magnetometer in a single package. [Don] wanted a heading indicator immune to magnetic disturbances, so he ignored the magnetometer readings entirely and used the rest of the IMU data instead.
Upon startup, the Teensy 4.0 initializes a small round TFT display, and draws the usual compass rose with North at the top of the display. Any motion after this will update the heading display accordingly, with [Don] noting the IMU has a fast update rate of 200 Hz for excellent motion tracking. The device does not self-calibrate to magnetic North; instead, an encoder can be used to calibrate the device to match a magnetic compass you have on hand. Or, you can just ensure it’s already facing North when you turn it on.
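The magnetometer-free approach boils down to integrating the gyro’s yaw rate over time, with the encoder supplying the initial offset. A rough sketch of the idea (the function and names are ours for illustration, not [Don]’s code):

```python
def update_heading(heading_deg: float, yaw_rate_dps: float,
                   dt: float) -> float:
    """Advance a heading estimate by integrating gyro yaw rate.

    At a 200 Hz update rate dt is 0.005 s. Pure integration drifts
    over time, which is why the BNO085's onboard sensor fusion
    normally blends in other references; this sketch ignores drift.
    """
    heading_deg += yaw_rate_dps * dt
    return heading_deg % 360.0   # wrap into [0, 360)
```

Run at 200 Hz, a 30°/s turn advances the heading by 0.15° per update, which is why the display can stay so smooth without any compass input at all.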
Thanks to the power of the Teensy 4.0 and the rapid updates of the BNO085, the display updates are nicely smooth and responsive. However, [Don] notes that it’s probably not quite an aircraft-spec build. We’ve featured some interesting investigations of just how much you can expect out of MEMS-based sensors like these before, too.
youtube.com/embed/UoS7PKGJVlE?…
Ebike Charges At Car Charging Stations
Electric vehicles are everywhere these days, and with them comes a whole slew of charging infrastructure. The fastest are high-power machines that can deliver enough energy to charge a car in well under an hour, but there are plenty of slower chargers that take much longer. These don’t tend to require specialized equipment, which makes them easier to install in homes and other places where there isn’t as much power available. In fact, these chargers generally amount to fancy extension cords, and [Matt Gray] realized he could use them for other things, like charging his electric bicycle.
To begin the build, [Matt] started with an electric car charging socket and designed a housing for it with CAD software. The housing also holds the actual battery charger for his VanMoof bicycle, connected internally directly to the car charging socket. These lower powered chargers don’t require any communication from the vehicle either, which simplifies the process considerably. They do still need to be turned on via a smartphone app so the energy can be metered and billed, but with all that out of the way [Matt] was able to take his test rig out to a lamppost charger and boil a kettle of water.
After the kettle experiment, he worked on miniaturizing his project so it fits more conveniently inside the 3D-printed enclosure on the rear rack of his bicycle. The only real inconvenience of this project, though, is that since these chargers are meant for passenger vehicles they’re a bit bulky for smaller vehicles like e-bikes. But this will greatly expand [Matt]’s ability to use his ebike for longer trips, and car charging infrastructure like this has started being used in all kinds of other novel ways as well.
youtube.com/embed/i6IyukCIia8?…
California’s Problematic Attempt to add Age-Verification to Software
Last year California’s Digital Age Assurance Act (AB 1043) was signed into law, requiring among other things that operating system providers implement an API for age-verification purposes. With the implementation date of January 1, 2027 steadily approaching, people are understandably agitated. So what are the requirements, and what will the impact be? The law affects not only OS developers, but also application stores and application developers.
The required features for OS developers include an interface at account setup during which the person indicates which of the four age brackets they fit into. This age category then has to be used by application developers and application stores to filter access to the software. Penalties for non-compliance go up to $2,500 per affected child if the cause is neglect and up to $7,500 if the violation was intentional.
As noted in the Tom’s Hardware article, CA governor Newsom issued a statement when signing the unanimously passed bill, saying that he hopes it gets amended due to how problematic it would be to implement, and the unintended effects it may have. Of course, the bigger question is whether this change requires more than adding a few input fields and checkboxes to an OS’s account setup and an API call or two.
When we look at the full text of this very short bill, the major question is whether this bill has any teeth at all. From reading the bill’s text, we can see that the person creating the account is merely asked to provide their birth date, age, or both. This makes it at first glance as effective as those ‘pick your age’ selection boxes before entering an age-gated part of a website. What would make this new ‘age-verification feature’ any more reliable than that?
Although the OS developer is required to provide this input option and an API feature of undefined nature that provides the age bracket in some format via some method, the onus is seemingly never put on the user who creates or uses the OS account. Enforcement under section 1798.503 is vague: ‘[a] person that violates this title’ shall have a civil action filed against them. What happens if a 9-year-old child indicates that they’re actually 35, for example? Or when a user account is shared on a family computer?
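For what it’s worth, the entire ‘age-verification feature’ the bill demands could plausibly be satisfied by something as thin as the following sketch. The four brackets follow the bill’s under-13 / 13-15 / 16-17 / adult split, but every name and signature here is our invention for illustration, not anything an actual OS vendor has published:

```python
from enum import Enum

class AgeBracket(Enum):
    """The four brackets the bill requires the OS to report."""
    UNDER_13 = "under 13"
    FROM_13_TO_15 = "13-15"
    FROM_16_TO_17 = "16-17"
    ADULT = "18 or older"

def bracket_from_birth_year(birth_year, current_year=2027):
    """Map a self-reported birth year to a bracket: no verification at all."""
    age = current_year - birth_year
    if age < 13:
        return AgeBracket.UNDER_13
    if age < 16:
        return AgeBracket.FROM_13_TO_15
    if age < 18:
        return AgeBracket.FROM_16_TO_17
    return AgeBracket.ADULT

# What an app store would get back from the OS's "API call or two":
print(bracket_from_birth_year(2018))  # a 9-year-old who told the truth
print(bracket_from_birth_year(1992))  # the same child claiming to be 35
```

Nothing in that mapping is any harder to lie to than the age dropdowns we have had for thirty years, which is rather the point.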
All taken together, this bill appears from every angle to add a lot of nuisance, and potential civil-lawsuit exposure for FOSS developers in particular, all in order to circuitously reimplement the much-beloved age dropdown selection widget that’s been around since at least the 1990s.
They could give this bill real teeth by requiring photo ID when registering an (online-only) OS account, much like the recent social media restrictions and the Discord age-verification kerfuffle, but that would run right over the ‘privacy-preserving’ elements of this same bill.
Prevent your Denon Receiver Turning on From Rogue Nvidia Shield CEC Requests
In theory HDMI’s CEC feature is great, as it gives HDMI devices the ability to do useful things such as turning on multiple HDMI devices with a single remote control. Of course, such a feature inevitably comes with bugs. A case in point is the Nvidia Shield, which has often been reported to turn on other HDMI devices that should stay off. After getting ticked off by such issues one time too many, [Matt] decided to implement a network firewall project to prevent his receiver from getting messed with by the Shield.
The project is a Python-based network service that listens for the rogue HDMI-CEC Zone 2 requests responsible and talks to the Denon/Marantz receiver to keep it from turning on unnecessarily. Of course, when you do want a Zone 2 request to do its thing, you have to disable the script.
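We haven’t picked apart [Matt]’s actual script, but the core logic, watching the receiver’s network control port and immediately countering an unwanted Zone 2 power-on, can be sketched in a few lines of Python. The Z2ON/Z2OFF strings come from the Denon/Marantz telnet control protocol; the host name and the rest of the scaffolding here are placeholders:

```python
import socket

DENON_HOST = "avr.local"  # placeholder: use your receiver's IP or hostname
DENON_PORT = 23           # the Denon/Marantz telnet control port

def counter_command(event):
    """Decide how to push back against one event line from the receiver.

    When the receiver reports Zone 2 switching on ("Z2ON"), answer with
    "Z2OFF"; every other event is left alone.
    """
    if event.strip() == "Z2ON":
        return "Z2OFF\r"
    return None

def run_firewall():
    """Watch the receiver's event stream and veto Zone 2 power-ons."""
    with socket.create_connection((DENON_HOST, DENON_PORT)) as sock:
        buf = b""
        while True:
            chunk = sock.recv(1024)
            if not chunk:
                break  # receiver closed the connection
            buf += chunk
            while b"\r" in buf:  # the protocol terminates events with CR
                line, buf = buf.split(b"\r", 1)
                reply = counter_command(line.decode("ascii", "ignore"))
                if reply:
                    sock.sendall(reply.encode("ascii"))

if __name__ == "__main__":
    run_firewall()
```

The nice part of doing this on the network side rather than the HDMI side is that no CEC hardware is needed at all; the receiver simply gets told to stand back down faster than you can notice it woke up.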
Unfortunately, HDMI-CEC is such a PITA that people keep running into issues like these over and over again, to the point where many simply disable the feature altogether. That said, Nvidia did recently release a Shield update that’s claimed to fix CEC issues, so maybe this is one CEC bug down already.
Capacitor Memory Makes Homebrew Relay Computer Historically Plausible
It’s one thing to create your own relay-based computer; that’s already impressive enough, but what really makes [DiPDoT]’s design special, at least after this latest video, is swapping the SRAM he had been using for historically-plausible capacitor-based memory.
A relay-based computer is really a 1940s type of design. There are various memory types that would have been available in those days, but suitable CRTs for Williams tubes are hard to come by these days, mercury delay lines have the obvious toxicity issue, and core rope memory requires granny-level threading skills. That leaves mechanical or electromechanical memory like [Konrad Zuse] used in the ’30s, or capacitors. [DiPDoT] chose to make his memory with capacitors.
It’s pretty obvious when you think about it that you can use a capacitor as memory: each capacitor stores one bit, with charged as 1 and discharged as 0. Of course, to read the capacitor it must be discharged (if it was charged), but most early memory has that same read-means-erase pattern. More annoying is that you can’t overwrite a 1 with a 0; a separate ‘clear’ circuit is needed to empty the capacitor first. Since his relay computer was using SRAM, it wasn’t set up to do this clear operation.
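If you want to play along at home, the read-means-erase and clear-before-write behavior is easy to model in software. This sketch models the logic only, not [DiPDoT]’s actual relay circuit:

```python
class CapacitorWord:
    """Model of one word of capacitor memory: charged = 1, discharged = 0."""

    def __init__(self, bits):
        self.caps = [0] * bits  # all capacitors start discharged

    def read(self):
        """Destructive read: sensing the charge also drains it."""
        value, self.caps = self.caps, [0] * len(self.caps)
        return value

    def write(self, value):
        """A 1 can't be overwritten with a 0, so clear first, then set the 1s."""
        self.caps = [0] * len(self.caps)  # the separate 'clear' cycle
        for i, bit in enumerate(value):
            if bit:
                self.caps[i] = 1          # charging can only ever set a 1

mem = CapacitorWord(4)
mem.write([1, 0, 1, 1])
print(mem.read())  # [1, 0, 1, 1] -- and the word is now erased
print(mem.read())  # [0, 0, 0, 0]
```

The write-back after every read that the model implies is exactly the chore the auto-clearing relay circuit takes care of in hardware.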
He demonstrates an auto-clearing memory circuit on a breadboard, using three relays and a capacitor, so the existing relay computer architecture doesn’t need to change. Addressing is a bit of a cheat, in terms of 1940s tech, as he’s using modern diodes, though of course tube diodes or point-contact diodes could conceivably be pressed into service if one were playing purist. He’s also using LEDs to avoid the current draw and power requirements of incandescent indicator lamps. Call it a hack.
He demonstrates his circuit on a breadboard, first with a 4-bit word and then scaled up to 16 bits, before going all the way to a massive 8 bytes hooked into the backplane of his Altair-esque relay computer. If you watch nothing else, jump fifteen minutes in to have the rare pleasure of watching a program being input via front panel with a complete explanation. If you have a few extra seconds, stay for the satisfyingly clicky run of the loop. The bonus 8-byte program [DiPDoT] runs at the end of the video is pure ASMR, too.
Yeah, it’s not going to solve the rampocalypse, any more than the initial build of this computer helped with GPU prices. That’s not the point. The point is clack clack clack clack clack, and if that doesn’t appeal, we don’t know what to tell you.
youtube.com/embed/EtDyzEDMOoo?…
Railway End Table Powered By Hand Crank
Most end tables that you might find in a home are relatively static objects. However, [Peter Waldraff] of Tiny World Studios likes to build furniture that’s a little more interesting. Thus came about this beautiful piece with a real working railway built right in.
The end table was built from scratch, with [Peter] going through all the woodworking steps required to assemble the piece. The three-legged wooden table is topped with a tiny N-scale model railway layout, and you get to see it put together including the rocks, the grass, and a beautiful epoxy river complete with a bridge. The railway runs a Kato Pocket Line trolley, but the really neat thing is how it’s powered.
[Peter] shows us how a small gearmotor used as a generator was paired with a bridge rectifier and a buck converter to fill up a supercapacitor that runs the train and lights up the tree on the table. Just 25 seconds of cranking will run the train anywhere from 4 to 10 minutes, depending on whether the tree is lit as well. To top it all off, there’s even a perfect coaster spot for [Peter]’s beverage of choice.
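Those numbers pass a quick sanity check, too. All of the figures below are our assumptions rather than measurements from the video, but a plausible hand-crank output and a roughly one-watt load land squarely in the reported 4-to-10-minute window:

```python
# Back-of-the-envelope check on the hand-crank runtime (all values assumed).
CRANK_POWER_W = 15.0  # plausible sustained hand-crank output after losses
CRANK_TIME_S = 25.0   # cranking time quoted in the video
LOAD_POWER_W = 1.0    # guessed draw of an N-scale trolley plus tree LEDs

stored_j = CRANK_POWER_W * CRANK_TIME_S       # energy banked in the supercap
runtime_min = stored_j / LOAD_POWER_W / 60.0  # how long the load can run

print(f"{stored_j:.0f} J stored, about {runtime_min:.1f} minutes of runtime")
```

Dim the tree and the load drops, stretching the runtime toward the 10-minute end; light it and you slide back toward 4, just as [Peter] reports.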
It’s a beautiful kinetic sculpture and a really fun way to build a small model railway that fits perfectly in the home. We’ve featured some other great model railway builds before, too.
youtube.com/embed/9cLuf6BuB3A?…
Keebin’ with Kristina: the One With the Beginner’s Guide to Split Keyboards
Curious about split keyboards, but overwhelmed by the myriad options for every little thing? You should start with [thehaikuza]’s excellent Beginner’s Guide to Split Keyboards.
Image by [thehaikuza] via reddit
Your education begins with the why, so you can skip that if you must, but the visuals are a nice refresher on that front.
He then gets into the types of keyboards: you’ve got your standard row-staggered rectangles that we all grew up on, column-staggered, and straight-up ortholinear, which no longer enjoys the popularity it once did.
At this point, the guide becomes a bit of a Choose Your Own Adventure story. If you want a split but don’t want to learn to change much if at all about your typing style, keep reading, because there are definitely options.
But if you’re ready to commit to typing correctly for the sake of ergonomics, you can skip the Alice and other baby ergo choices and get your membership to the light side. First are features — you must decide what you need to get various jobs done. Then you learn a bit about key map customization, including using a non-QWERTY layout. Finally, there’s the question of buying versus DIYing. All the choices are yours, so go for it!
Via reddit
Is That a Bat In Your Pocket?
Need something ultra-portable for those impromptu sessions at the coffee shop (when you can actually find a table)? You can’t get much smaller than the 28-key Koumori by [fata1err0r81], which means “bat” in Japanese. Here’s the repo.
Image by [fata1err0r81] via reddit
This unibody beauty runs on an RP2040 Zero using QMK firmware. That 40 mm Cirque track pad has a glass overlay, which is a really nice touch. It’s actually a screen protector for a smart watch, and the purple bit is some craft vinyl cut to size.
Protecting that glass overlay is a case with a handle and a magnetic lid. Both the PCB and the case were designed in Ergogen, which as you know, I really like to see people using.
As you might have guessed, those are Kailh V1 choc switches with matching key caps. If you want a bat for your pocket, the build guide is simple, and there aren’t even any microscopic parts involved.
The Centerfold: [arax20]’s Been Workin’ On the Railroader
Image by [arax20] via reddit
Okay, before you do anything, go check out the image gallery to see this baby glowing and being worn like a katana or something. Yeah.
So [arax20] built this as a gift for an ex. She likes the ergonomics of splits, but didn’t want cables between the halves and feels the space between is otherwise wasted. Really? There’s so much you can put there, from cats to mice to coffee mugs.
Do you rock a sweet set of peripherals on a screamin’ desk pad? Send me a picture along with your handle and all the gory details, and you could be featured here!
Historical Clackers: the Mysterious Rico
Frustratingly little is known about the Rico, a 1932 index machine out of Nuremberg, Germany. But the Antikey Chop has over a dozen books on typewriters, and only two have any mention of the Rico: Adler’s Antique Typewriters, From Creed to QWERTY, and Dingwerth’s Kleines Lexikon Historischer Schreibmaschinen.
Image via The Antikey Chop
Adler calls it a “pleasant toy typewriter with indicator selecting letters from a rectangular index”, saying nothing more descriptive. Dingwerth’s volume both dates the Rico and lists the maker as Richard Koch & Co. of Nuremberg.
The Rico was ambitiously declared the No. A1 model, though there is no evidence of any other model in existence. It was made mostly of stamped tin, though the type element was made of brass. The type element looked like a tube cut in half lengthwise, and worked in a similar fashion to the Chicago typewriter with its type sleeve.
There are some interesting things about the Rico nonetheless. The platen could not accommodate paper wider than 4″, for one thing. There is also no inking system to speak of. Weirder still, this oversight isn’t mentioned in the original instructions. Most people just taped a couple inches of typewriter ribbon between the element and the platen and called it good.
To use the thing, you would move the center lever to the character you wanted. The lever has a pin in the bottom, and each character has a dimple in it for the pin to sit. The lever on the left side was used to pivot the carriage toward the type element in order to print. In total, the Rico typed 74 characters plus Space.
Finally, Someone’s Made a Braille Keyboard, and It’s Inexpensive
Once upon a time, New Jersey high schooler Umang Sharma saw an ad for a Braille keyboard. The price? A cool seven grand. For a keyboard. No problem, he thought. I can build my own.
Image via NJ.com
The astute among you will notice that there’s a Logitech keyboard in the picture, with what look like key cap hats. That is exactly what’s happening here. Sharma starts with a standard keyboard base, one that is usually either donated or was previously discarded.
He then focuses on the most important accessibility layer, which is tactile Braille key caps that are both readable and durable. In 2022, Sharma launched the non-profit Jdable to bring affordable, accessible design to people with disabilities.
He designed the key caps himself, and uses a combination of 3D printing and other materials to create them in bulk. They’re printed using a combination of PETG for toughness, TPU for grippiness, and resin for definition. The key caps are attached to the standard set with a strong adhesive.
Sharma has a team of student volunteers that help him build the keyboards and distribute them, and they have reached nearly 1,000 blind or visually-impaired students in the U.S. and abroad.
Got a hot tip that has like, anything to do with keyboards? Help me out by sending in a link or two. Don’t want all the Hackaday scribes to see it? Feel free to email me directly.
A Live ISO For Those Vibe Coding Experiments
Vibe coding is all the rage at the moment if you follow certain parts of the Internet. It’s very easy to dunk upon it, whether it’s to mock the sea of people who’ve drunk the Kool-Aid and want the magic machine to make them a million dollar app with no work, or the vibe coded web apps with security holes you could drive a bus through.
But AI-assisted coding is now a thing that will stick around whether you like it or not, and there are many who want to dip a toe in the water to see what the fuss is about. For those who don’t quite trust the magic machines in their inner sanctum, [jscottmiller] is here with Clix, a bootable live Linux environment which puts Claude Code safely in a sandbox away from your family silver.
Physically it’s a NixOS live USB image with the Sway tiling Wayland compositor, and as he puts it: “Claude Code ready to go”. It has a shared partition for swapping files with Windows or macOS machines, and it’s persistent. The AI side of it has permissive settings, which means the mechanical overlord can reach parts of the OS you wouldn’t normally let it anywhere near; the point of having it in a live environment in the first place.
We can see the attraction of using an environment such as this one for experimenting without commitment, but we’d be interested to hear your views in the comments. It’s about a year since we asked you all about vibe coding, has the art moved forward in that time?
SpyTech: The Underwater Wire Tap
In the 1970s, the USSR had an undersea cable connecting a major naval base at Petropavlovsk to the Pacific Fleet headquarters at Vladivostok. The cable traversed the Sea of Okhotsk, which, at the time, the USSR claimed. It was off limits to foreign vessels, heavily patrolled, and laced with detection devices. How much more secure could it be? Against the US Navy, apparently not very secure at all. For about a decade starting in 1972, the Navy delivered tapes of all the traffic on the cable to the NSA.
Top Secret
You need a few things to make this a success. First, you need a stealthy submarine. The Navy had the USS Halibut, which has a strange history. You also need some sort of undetectable listening device that can operate on the ocean floor. Finally, you need a crew that is sworn to secrecy.
That last part was hard to manage. It takes a lot of people to mount a secret operation to the other side of the globe, so they came up with a cover story: officially, the Halibut was in Okhotsk to recover parts of a Soviet weapon for analysis. Only a few people knew the real mission. The whole operation was known as Operation Ivy Bells.
The Halibut
The Halibut is possibly the strangest submarine ever. It started life destined to be a diesel sub. However, before it launched in 1959, it had been converted to nuclear power. In fact, the sub was the first designed to launch guided missiles and was the first sub to successfully launch a guided missile, although it had to surface to launch.
Oddly enough, the sub carried nuclear cruise missiles, and its specific target, should the world go to nuclear war, was the Soviet naval base at Petropavlovsk.
By 1965, the sub had been replaced for missile duty by newer submarines. It was tapped to be converted for “special operations.” Under the guise of being a deep-sea recovery vehicle, the Halibut received skids to settle on the seabed, side thrusters, specialized anchors, and a host of electronic equipment, including “the Fish,” a 12-foot-long array of cameras, sonar, and strobe lights weighing nearly two tons. The “rescue vehicle” on its stern didn’t actually detach. It was a compartment for deploying saturation divers.
An early mission was Operation Sand Dollar. Halibut found the wreck of the Soviet K-129, which the US would go on to recover in another top secret mission, looking for secrets and Soviet technology.
When it came time to deploy the listening device on an underwater cable, Halibut was perfect. It could park a safe distance away, deploy saturation divers, and recover them. If you want to see more about the Halibut, check out the [Defence Central] video below.
youtube.com/embed/mrgR8cMWKVo?…
The Listening Device
A later undersea wire tap device (Soviet photograph)
This wasn’t a hidden microphone in a briefcase. It was a 20-foot, six-ton pressure vessel parked on the ocean floor. Details are murky, but there was another part, probably smaller, that clamped around the cable. Working inductively, it didn’t pierce the cable for fear the Soviets would notice that. In addition, if they raised the cable for maintenance, the device was made to break away and sink to the bottom.
Needless to say, tapping a cable on the ocean floor isn’t easy. First, they had to locate the cable. Luckily, there were signs at either end telling fishing vessels to avoid the area. That helped, but they still had to search for the 5-inch wide cables. They found them at least 400 feet below the surface, some 120 miles offshore.
Saturation diving was a relatively new idea at the time, and the Navy’s SeaLab experiments had given them several years of experience with the technology. While commercial saturation dives started in 1965, it was still exotic technology in 1971. The first mission simply recorded a bit of data on the submarine and returned it. Once it was proven, the sub returned with the giant tap device and installed it.
It took four divers to position the big tap. Even then, you couldn’t just leave it there. The device used tapes and required service once a month. So Halibut or another sub had to visit each month to swap tapes out. We couldn’t find out what the power source for the bug was, so they probably had to change the batteries, too.
The Soviets didn’t consider the cable to be at risk for eavesdropping, so much of the traffic on the cable was in the clear. It was a gold mine of intelligence information, and many credit the information gained as crucial to closing the SALT II treaty talks.
Secondary Mission
Most of the crews participating in Operation Ivy Bells didn’t have clearance to know what was going on. Instead, they thought they were on a different secret mission to retrieve debris from Soviet anti-ship missiles.
To keep the story believable, the crew actually did recover a large number of parts from the subject Soviet missiles. Turns out, analysis of the debris did reveal some useful information, so two spy missions for the price of one.
Presumably, the assumption would be that if the Soviets heard a sub was scavenging missile parts, it might qualify as a secret, but it would hardly be a surprise. They couldn’t have imagined the real purpose of the submarine.
Future Taps
Later undersea taps, which targeted other Soviet phone lines, used radioisotope power sources and could store a year’s worth of data between visits. The submarines Parche, Richard B. Russell, and Seawolf saw duty with some of these other taps, as well as taking over for Halibut when it retired four years after the start of Operation Ivy Bells.
The original Okhotsk tap would have operated for many more years if it were not for [Ronald Pelton]. A former NSA employee, he found himself bankrupt, with over $65,000 of debt. In 1980, he showed up at the Soviet embassy in Washington and offered to sell what he knew.
He knew a number of things, including what was going on with Operation Ivy Bells. That data netted him $5,000 and, overall, he got about $35,000 or so. Oh, he also got life in prison when, in 1985, a Soviet defector revealed he had been the initial contact for [Pelton].
The Soviets didn’t immediately act on [Pelton’s] intel, but by 1981, the Americans knew something was up. A small fleet of ships was parked right over the device. The USS Parche was sent to retrieve it, but they couldn’t find it. Today, it (or, perhaps, a replica) is in the Great Patriotic War Museum in Moscow.
A surprising amount of the Cold War was waged under the sea. Not to mention in the air.
LEGO Space Computer Made Full Size, 47 Years On
There’s just something delightful about scaled items. Big things shrunk down, like LEGO’s teeny tiny terminal brick? Delightful. Taking that terminal brick and scaling it back to a full-sized computer? Even better. That’s what designer [Paul Staal] has done with his M2x2 project.
In spite of the name, it actually has a Mac Mini M4 as its powerful beating heart. An M2 might have been more on-brand, but it’s probably a case of wanting the most horsepower possible in what [Paul] apparently uses as his main workstation these days. The build itself is simple, but has some great design details. As you probably expected, the case is 3D printed. You may not have expected that he can use the left stud as a volume control, thanks to an IKEA Symfonisk remote hidden beneath. The right stud comes off to allow access to a wireless charger.
The minifigs aren’t required to charge those AirPods, but they’re never out of place.
The 7″ screen can display anything, but [Paul] mostly uses it either for a custom Home Assistant dashboard or to display an equalizer, both loosely styled after the ‘screen’ on the original brick. We have to admit, as cool as it looked with the minifigs back in the day, that sharp angle to the screen isn’t exactly ergonomic for humans.
Perhaps the best detail was putting LEGO-compatible studs on top of the 10:1 scaled up studs, so the brick that inspired the project can sit securely atop its scion. [Paul] has provided a detailed build guide and the STLs necessary to print off a brick, should anyone want to put one of these nostalgic machines on their own desk.
We’ve covered the LEGO computer brick before, but going the other way: putting a microcontroller and display in the brick itself to run DOOM. We’ve also seen it scaled up before, but that project was a bit more modest in size and computing power.
Using a Solid-State Elastocaloric Cooler to Freeze Water
Elastocaloric materials are a class of materials that exhibit a big change in temperature when exposed to mechanical stress. This could potentially make them useful as a solid-state replacement for both vapor-compression refrigeration systems and Peltier coolers.
The entire assembled elastocaloric device. (Credit: Guoan Zhou, Nature, 2026)
So far one issue has been that reaching freezing temperatures was impossible, but a recently demonstrated solution (online PDF via IEEE Spectrum) using NiTi-based shape-memory alloys addresses that issue, reaching a final temperature of −12°C within 15 minutes from room temperature.
In the paper by [Guoan Zhou] et al. the cascade cooler is described, with eight stages, each consisting of three tubular, thin-walled NiTi structures. Each of these stages is mechanically loaded by a ceramic head that provides the 900 MPa of mechanical stress required to transfer thermal energy through the stages from one side of the device to the other, alternately absorbing or releasing the energy with a CaCl2 solution as the heat-exchange fluid.
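To put that 900 MPa in perspective, the absolute force on each thin-walled tube stays in mechanical-press rather than hydraulic-ram territory, which is part of why a compact ceramic loading head can do the job. A rough estimate follows; the tube dimensions are our illustrative guesses, not numbers from the paper:

```python
import math

# Rough force needed to reach 900 MPa on one thin-walled NiTi tube.
# Dimensions below are illustrative guesses, not values from the paper.
STRESS_PA = 900e6  # target compressive stress
OUTER_D_M = 5e-3   # assumed 5 mm outer diameter
WALL_M = 0.2e-3    # assumed 0.2 mm wall thickness

# Cross-sectional area of a thin ring: pi * (mean diameter) * wall thickness
area_m2 = math.pi * (OUTER_D_M - WALL_M) * WALL_M
force_n = STRESS_PA * area_m2

print(f"~{force_n / 1000:.1f} kN per tube")
```

A few kilonewtons per tube is well within reach of a modest mechanical drive, and shrinking the wall thickness trades force for fragility, hence the careful ceramic interface.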
NiTi alloys are regarded as just about the ideal type of SMA for this elastocaloric purpose, so how much further the technology can be pushed remains to be seen. For stationary refrigeration applications it might just be the ticket.
Running Video Through A Guitar Effects Pedal
Guitar pedals are designed to take in a sound signal, do fun stuff to it, and then spit it out to your amplifier where it hopefully impresses other people. However, [Liam Taylor] decided to see what would happen if you fed video through a guitar pedal instead.
The device under test is a Boss ME-50 multi-effects unit. It’s capable of serving up a wide range of effects, from delay to chorus to reverb, along with compression and distortion and a smattering of others. [Liam] hooked up the composite video output from an old Sony camcorder from the 2000s to a 3.5 mm audio jack, and plugged it straight into the auxiliary input of the ME-50 (notably, not the main guitar input of the device).
The multi-effects pedal isn’t meant to work with an analog video signal, but it can pass it through and do weird things to it regardless. Using the volume pedal on the ME-50 puts weird lines on the signal, while using a wah effect makes everything a little wobbly. [Liam] then steps through a whole range of others, like ring modulation, octave effects, and reverb, all of which do different weird things to the visuals. Particularly fun are some of the periodic effects which create predictable variation to the signal. True to its name, the distortion effect did a particularly good job of messing things up overall.
It’s a fun experiment, and reminds us of some of the fantastic analog video synths of years past. Video after the break.
youtube.com/embed/WG0KVxWWH18?…
Trying a Vibe-Coded Operating System
If you were to read the README of the Vib-OS project on GitHub, you’d see it advertised as a Unix-like OS that was written from scratch, runs on ARM64 and x86_64, and comes with a full GUI, networking and even full Doom game support. Unfortunately, what you are seeing there isn’t the beginnings of a new promising OS that might go toe to toe with the likes of Linux or Haiku, but rather a vibe-coded confabulation. Trying to actually use the OS as [tirimid] recently did sends you down a vibe-coded rabbit hole of broken code, more bugs than you can shake a bug zapper at, and most of the promised features being completely absent.
[tirimid] is one of those people who have a bit of a problem, in that they like to try out new OSes just to see what they’re like. The fun starts with simply making the thing run at all in any virtual machine environment, as the author apparently uses macOS, where it presumably ‘runs fine’.
After this the graphical desktop does in fact load and some applications open, but it’s not possible to create new folders in the ‘file explorer’, the function keys simply switch between wallpapers, there’s no networking or Doom support despite the promises made, there’s no Python or Nano at all, and so on.
It’s still got some of the hallmarks of a functioning OS, and it’s sort of nice that you don’t need to know what you’re doing to create a sort-of-OS, but it will not appease those who feel that vibe-coding is killing Open Source software.
youtube.com/embed/JxknDQaDrao?…
Embossing Precision Ball Joints for a Micromanipulator
[Diffraction Limited] has been working on a largely 3D-printed micropositioner for some time now, and previously reached a resolution of about 50 nanometers. There was still room for improvement, though, and his latest iteration improves the linkage arms by embossing tiny ball joints into them.
The micro-manipulator, which we’ve covered before, uses three sets of parallel rod linkages to move a platform. Each end of each rod rotates on a ball joint. In the previous iteration, the parallel rods were made out of hollow brass tubing with internal chamfers on the ends. The small area of contact between the ball and socket created unnecessary friction, and being hollow made the rods less stiff. [Diffraction Limited] wanted to create spherical ball joints, which could retain more lubricant and distribute force more evenly.
The first step was to cut six lengths of solid two-millimeter brass rod and sand them to equal lengths, then chamfer them with a 3D-printed jig and a utility knife blade. Next, he made two centering sleeves to hold small ball bearings at the ends of the rod being worked on, while an anti-buckling sleeve surrounded the rest of the rod. The whole assembly went between the jaws of a pair of digital calipers, which were zeroed. When one of the jaws was tapped with a hammer, the ball bearings pressed into the ends of the brass rod, creating divots. Since the calipers measured the amount of indentation created, he was able to emboss all six rods equally. The mechanism is designed not to transfer force into the calipers, but he still recommends using a dedicated pair.
In testing, the new ball joints had about a tenth the friction of the old joints. He also switched out the original 3D-printed ball mount for one made out of a circuit board, which was more rigid and precisely manufactured. In the final part of the video, he created an admittedly unnecessary, but useful and fun, machine to automatically emboss ball joints with a linear rail, stepper motor, and position sensor.
On such a small scale, a physical ball joint is clearly simpler, but on larger scales it’s also possible to make flexures that mimic a ball joint’s behavior.
youtube.com/embed/NM2KXvRGmpg?…
Vape-powered Car Isn’t Just Blowing Smoke
Disposable vapes aren’t quite the problem/resource stream they once were, with many jurisdictions moving to ban the absurdly wasteful little devices, but there are still a lot of slightly-smelly lithium batteries in the wild. You might be forgiven for thinking that most of them seem to be in [Chris Doel]’s UK workshop, given that he’s now cruising around what has to be the world’s only vape-powered car.
Technically, anyway; some motorheads might object to calling the donor vehicle [Chris] starts with a car, but the venerable G-Wiz has four wheels, four seats, lights and a windscreen, so what more do you want? Horsepower in excess of 17 ponies (12.6 kW)? Top speeds in excess of 50 mph (80 km/h)? Something other than the dead weight of 20-year-old lead-acid batteries? Well, [Chris] at least fixes that last part.
The conversion is amazingly simple: he just straps his 500-vape battery pack, the same one that was powering his shop, into the back seat of the G-Wiz, and it’s off to the races. Not quickly, mind you, but with 500 lightly-used lithium cells in the back seat, how fast would you want to go? Hopefully the power bank goes back on the wall after the test drive, or he finds a better mounting solution. To [Chris]’s credit, he did renovate his pack with extra support and insulation, and put all the cells in an insulated aluminum box. Still, the low speed has to count as a safety feature at this point.
Charging isn’t fast either, as [Chris] has made the probably-controversial decision to use USB-C. We usually approve of USB-Cing all the things, but a car might be taking things too far, even one with such a comparatively tiny battery. Perhaps his earlier (equally nicotine-soaked) e-bike project would have been a better fit for USB charging.
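How slow are we talking? Here’s a rough estimate; the cell capacity, charger power, and efficiency below are all our guesses rather than [Chris]’s published figures:

```python
# Rough charge-time estimate for a 500-vape pack over USB-C PD (all assumed).
CELLS = 500
CELL_CAPACITY_AH = 0.55  # typical disposable-vape cell, ~550 mAh
CELL_VOLTAGE_V = 3.7     # nominal Li-ion cell voltage
USB_PD_POWER_W = 100.0   # a beefy USB-C PD charger
EFFICIENCY = 0.9         # guessed charger conversion efficiency

pack_wh = CELLS * CELL_CAPACITY_AH * CELL_VOLTAGE_V
hours = pack_wh / (USB_PD_POWER_W * EFFICIENCY)

print(f"~{pack_wh:.0f} Wh pack, roughly {hours:.0f} h from empty over USB-C")
```

Call it the better part of half a day on the best USB-C charger you own, which rather puts the ‘fast’ in fast charging into perspective.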
Thanks to [Vaughna] for the tip!
youtube.com/embed/HwoZg3BCigU?…
FLOSS Weekly Episode 865: Multiplayer Firewall
This week Jonathan chats with Philippe Humeau about CrowdSec! That company created a Web Application Firewall as an Open Source project, and now runs it as a Multiplayer Firewall. What does that mean, and how has it worked out as a business concept? Watch to find out!
youtube.com/embed/cFlhtWiCHNw?…
Did you know you can watch the live recording of the show right on our YouTube Channel? Have someone you’d like us to interview? Let us know, or have the guest contact us! Take a look at the schedule here.
play.libsyn.com/embed/episode/…
Direct Download in DRM-free MP3.
If you’d rather read along, here’s the transcript for this week’s episode.
Places to follow the FLOSS Weekly Podcast:
Theme music: “Newer Wave” Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
hackaday.com/2026/03/04/floss-…
Linux Fu: The USB WiFi Dongle Exercise
The TX50U isn’t very Linux-friendly
If you’ve used Linux for a long time, you know that we are spoiled these days. Getting a new piece of hardware back in the day was often a horrible affair, requiring custom kernels and lots of work. Today, it should be easier. The default drivers on most distros cover a lot of ground, kernel modules make adding drivers easier, and dkms can automate the building of modules for specific kernels, even if it isn’t perfect.
So ordering a cheap WiFi dongle to improve your old laptop’s network connection should be easy, right? Obviously, the answer is no or this would be a very short post.
Plug and Pray
The USB dongle in question is a newish TP-Link Archer TX50U. It is probably perfectly serviceable for a Windows computer, and I got a “deal” on it. Plugging it in caused it to show up in the list of USB devices, but no driver attached to it, nor were any lights on the device blinking. Bad sign. Pro tip: lsusb -t will show you what drivers are attached to which devices. If you see a device with no driver, you know you have a problem. Use -tv if you want a little more detail.
The lsusb output shows the device as a Realtek, so that tells you a little about the chipset inside. Unfortunately, it doesn't tell you exactly which chip is in use.
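As a concrete sketch of grabbing the numbers you'll need later, the vendor:product pair sits in the sixth field of lsusb's standard output. The sample line below is illustrative rather than captured from the actual dongle; on a real system you would pipe `lsusb` itself instead of a saved string:

```shell
# Illustrative lsusb line; replace with:  lsusb | grep -i <your device>
line='Bus 003 Device 005: ID 37ad:0103 Realtek 802.11ax WLAN Adapter'

# Field 6 of lsusb's standard format is the vendor:product ID pair.
usb_id=$(printf '%s\n' "$line" | awk '{print $6}')
echo "$usb_id"   # 37ad:0103
```

That vendor:product pair is what has to match an entry in the driver's USB ID table for the kernel to bind it.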
Internet to the Rescue?
Note that most devices (including the network card) have drivers since this was taken after the driver install. The fingerprint scanner (port 5 device 3) does not have a driver, however.
My first attempt to install a Realtek driver from GitHub failed because it was for what turned out to be the wrong chipset. But I did find info that the adapter had an RTL8832CU chip inside. Armed with that nugget, I found [morrownr] had several versions, and I picked up the latest one.
Problem solved? Turns out, no. I should have read the documentation, but, of course, I didn’t. So after going through the build, I still had a dead dongle with no driver or blinking lights.
Then I decided to read the file in the repository that tells you what USB IDs the driver supports. According to that file, the code matches several Realtek IDs, an MSI device, one from Sihai Lianzong, and three from TP-Link. All of the TP-Link devices use the 35B2 vendor ID, and the last two of those use device IDs of 0101 and 0102.
Suspiciously, my dongle uses 0103 but with a vendor ID of 37AD. Still, it seemed like it would be worth a shot. I did a recursive grep for 0x0102 and found a table that sets the USB IDs in os_dep/linux/usb_intf.c.
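To make that check concrete, here is the same recursive grep run against a stand-in source tree. The directory layout and file contents are mocked up for illustration; in practice you would point the grep at the real driver checkout:

```shell
# Work in a scratch directory with a mock of the driver's USB ID table.
cd "$(mktemp -d)"
mkdir -p os_dep/linux
cat > os_dep/linux/usb_intf.c <<'EOF'
{USB_DEVICE_AND_INTERFACE_INFO(0x35b2, 0x0101, 0xff, 0xff, 0xff), .driver_info = RTL8852C},
{USB_DEVICE_AND_INTERFACE_INFO(0x35b2, 0x0102, 0xff, 0xff, 0xff), .driver_info = RTL8852C},
EOF

# The dongle's actual device ID (0x0103) is not in the table yet:
if grep -rq '0x0103' os_dep; then
    echo "device ID already supported"
else
    echo "device ID missing: the table needs a new entry"
fi
```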
Of course, since I had already installed the driver, I had to change the dkms source, not the download from GitHub. That was, on my system, in /usr/src/rtl8852cu-v1.19.22-103/os_dep/linux/usb_intf.c. I copied the 0x0102 line and changed both IDs so there was now a 0x0103 line, too:
/* TP-Link Archer TX50U */
{USB_DEVICE_AND_INTERFACE_INFO(0x37ad, 0x0103, 0xff, 0xff, 0xff), .driver_info = RTL8852C},
Now it was a simple matter of asking dkms to rebuild and reinstall the driver. Blinking lights were a good sign and, in fact, it worked and worked well.
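The rebuild itself is a pair of dkms commands. The module name and version below follow the /usr/src path on my system; check the output of `dkms status` on your own machine before copying them, and note that all of this needs root, so treat it as a sketch rather than something to run blind:

```shell
# Rebuild the module from the patched dkms source tree, then reinstall it.
# Name/version taken from the /usr/src directory; yours may differ.
sudo dkms build rtl8852cu/v1.19.22-103 --force
sudo dkms install rtl8852cu/v1.19.22-103 --force
sudo modprobe 8852cu   # exact module name varies; `dkms status` will show it
```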
DKMS
If you haven’t used DKMS much, it is a reasonable system that can rebuild drivers for specific Linux kernels. It basically copies each driver and version to a directory (usually /usr/src) and then has ways to build them against your kernel’s symbols and produce loadable modules.
The system also maintains a build/install state database in /var/lib. A module is “added” to DKMS, then “built” for one or more kernels, and finally “installed” into the corresponding location for use by that kernel. When a new kernel appears, DKMS detects the event — usually via package manager hooks or distribution-specific kernel install triggers — and automatically rebuilds registered modules against the new kernel headers. The system tracks which module versions are associated with which kernels, allowing parallel kernel installations without conflicts. This separation of source registration from per-kernel builds is what allows DKMS to scale cleanly across multiple kernel versions.
If you didn’t use DKMS, you’d have to manually rebuild kernel modules every time you did a kernel update. That would be very inconvenient for things that are important, like video drivers for example.
Of course, not everything is rosy. The NVidia drivers, for example, often depend on kernel internals that are prone to change between Linux releases. So one day, you get a kernel update, reboot, and you have no screen. DKMS is the first place to check. You'll probably find it has some errors when building the graphics drivers.
Your choices are to look for a new driver, see if you can patch the old driver, or roll back to a previous working kernel. Sometimes the changes are almost trivial like when an API changes names. Sometimes they are massive changes and you really do want to wait for the next release. So while DKMS helps, it doesn’t solve all problems all the time.
Extras and Thoughts
I skipped over the part of turning off secure boot because I was too lazy to add a signing key to my BIOS. I’ll probably go back and do that later. Probably.
You have to wonder why this is so hard. There is already a way to pass module options. It seems like you might as well let a user jam a USB ID in. Sure, that wouldn't have helped for the enumeration case, but it would have been perfectly fine with me if I had just had to pass a parameter to modprobe or insmod to make the card work. Even though I'm set up for rebuilding kernel modules and kernels, many people aren't, and it seems silly to force them to recompile for a minor change like this.
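For what it's worth, the kernel's USB core does support exactly this for drivers that register dynamic IDs, via a `new_id` file in sysfs; whether a given out-of-tree module exposes it depends on how it registers with the USB core. The driver name in the path below is illustrative, and this needs root:

```shell
# Ask an already-loaded USB driver to also claim vendor 37ad, product 0103.
# Driver directory name is an assumption; list /sys/bus/usb/drivers/ to find it.
echo "37ad 0103" | sudo tee /sys/bus/usb/drivers/rtl8852cu/new_id
```

When the driver supports it, this makes the kernel re-probe the device with the new ID, no recompile needed.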
Of course, another fun answer would be to have vendors actually support their devices for Linux. Wouldn’t that be nice?
You could write your own drivers if you have sufficient documentation or the desire to reverse-engineer the Windows drivers. But it can take a long time. User-space drivers are a little less scary, and some people like using Rust.
What’s your Linux hardware driver nightmare story? We know you have one. Let us hear about it in the comments.
Success With FreeDOS on a Modern Platform
Last summer we took a look at FreeDOS as part of the Daily Drivers series, and found a faster and more complete successor to the DOS of old. The sojourn into the 16-bit OS wasn't perfect though, as we couldn't find drivers for the 2010-era network card on our newly DOS-ified netbook. Here's [Inkbox] following the same path, and bringing with him a fix for that networking issue.
The video below is an affectionate look at the OS alongside coding a TRON clone in assembler, and it shows a capable environment within the limitations of the 16-bit mode. The modern laptop here can’t emulate a BIOS as it’s UEFI only, and after trying a UEFI-to-BIOS emulator with limited success, he hits on a different approach. With just enough Linux to support QEMU, he has a lightweight and extremely fast x86 BIOS platform with the advantage of legacy emulation of network cards and the like.
The point of Daily Drivers is wherever possible to use real hardware and not an emulator, as it's trying to be the machine you'd use day to day. But we can see that in a world where a BIOS is no longer a thing, it becomes ever more necessary to improvise, and this approach is better than just firing up an emulator from a full-fat Linux desktop. If you fancy giving it a try, it seems less painful than the route we took.
You can read our look at FreeDOS 1.4 here.
youtube.com/embed/mwLIgdRj5bI?…
FreeDOS logo: Bas Snabilie for the FreeDOS Project, CC BY 2.5.
New Artemis Plan Returns to Apollo Playbook
In their recent announcement, NASA has made official what pretty much anyone following the Artemis lunar program could have told you years ago — humans won’t be landing on the Moon in 2028.
It was always an ambitious timeline, especially given the scope of the mission. It wouldn’t be enough to revisit the Moon in a spidery lander that could only hold two crew members and a few hundred kilograms of gear like in the 60s. This time, NASA wants to return to the lunar surface with hardware capable of setting up a sustained human presence. That means a new breed of lander that dwarfs anything the agency, or humanity for that matter, has ever tried to place on another celestial body.
Unsurprisingly, developing such vehicles and making sure they’re safe for crewed missions takes time and requires extensive testing. The simple fact is that the landers, being built by SpaceX and Blue Origin, won’t be ready in time to support the original Artemis III landing in 2028. Additionally, development of the new lunar extravehicular activity (EVA) suits by Axiom Space has fallen behind schedule. So even if one of the landers would have been ready to fly in 2028, the crew wouldn’t have the suits they need to actually leave the vehicle and work on the surface.
But while the Artemis spacecraft and EVA suits might be state of the art, NASA’s revised timeline for the program is taking a clear step back in time, hewing closer to the phased approach used during Apollo. This not only provides their various commercial partners with more time to work on their respective contributions, but critically, provides an opportunity to test them in space before committing to a crewed landing.
Artemis II Remains Unchanged
Given its imminent launch, there are no changes planned for the upcoming Artemis II mission. In fact, had there not been delays in getting the Space Launch System (SLS) rocket ready for launch, the mission would have already flown by now. Considering how slow the gears of government tend to turn, one wonders if the original plan was to announce these program revisions after the conclusion of the mission. The launch is currently slated for April, but could always slip again if more issues arise.
Artemis II Crew
At any rate, the goals for Artemis II have always been fairly well-aligned with its Apollo counterpart, Apollo 8. Just like the 1968 mission, this flight is designed to test the crew capsule and collect real-world experience in the vicinity of the Moon, but without the added complexity of attempting a landing. Now, as then, the decision to test the crew capsule without its lander wasn't made purely out of an abundance of caution.
As originally envisioned, Apollo 8 would have seen both the command and service module (CSM) and the lunar module (LM) tested in low Earth orbit. But due to delays in LM production, it was decided to fly the completed CSM without a lander on a modified mission that would put it into orbit around the Moon. This would give NASA an opportunity to demonstrate the critical translunar injection (TLI) maneuver and gain experience operating the CSM in lunar orbit — tasks which were originally scheduled to be part of the later Apollo 10 mission.
In comparison, Artemis II was always intended to be flown with only the Orion crew capsule. NASA’s goal has been to keep the program relatively agnostic when it came to landers, with the hope being that private industry would furnish an array of vehicles from which the agency could choose depending on the mission parameters. The Orion capsule would simply ferry crews to the vicinity of the Moon, where they would transfer over to the lander — either by docking directly, or by using the Lunar Gateway station as a rallying point.
There’s no lander waiting at the Moon for Artemis II, and the fate of Lunar Gateway is still uncertain. But for now, that’s not important. On this mission, NASA just wants to demonstrate that the Orion capsule can take a crew of four to the Moon and bring them back home safely.
Artemis III Kicks the Tires
For Artemis III, the previous plan was to have the Orion capsule mate up with a modified version of SpaceX’s Starship — known in NASA parlance as the Human Landing System (HLS) — which would then take the crew down to the lunar surface. While the HLS contract did stipulate that SpaceX was to perform an autonomous demonstration landing before Artemis III, the aggressive nature of the overall timeline made no provision for testing the lander with a crew onboard ahead of the actual landing attempt — a risky plan even in the best of circumstances.
Docked CSM and LM during Apollo 9
The newly announced timeline resolves this issue not only by delaying the actual Moon landing until 2028, when it will take place during Artemis IV, but by changing Artemis III into a test flight of the lander from the relative safety of low Earth orbit in 2027. The crew will lift off from Kennedy Space Center and rendezvous with the lander in orbit. Once docked, the crew will practice maneuvering the mated vehicles and potentially perform an EVA to test Axiom's space suits.
This new plan closely follows the example of Apollo 9, which saw the CSM and LM tested together in Earth orbit. At this point in the program, the CSM had already been thoroughly tested, but the LM had never flown in space or had a crew onboard. After the two craft docked, the crew performed several demonstrations, such as verifying that the mated craft could be maneuvered with both the CSM and LM propulsion systems.
The two craft then separated, and the LM was flown independently for several hours before once again docking with the CSM. The crew also performed a brief EVA to test the Portable Life Support System (PLSS) which would eventually be used on the lunar surface.
Orion docked to landers from SpaceX and Blue Origin
While the Artemis III and Apollo 9 missions have a lot in common, there’s at least one big difference. At this point, NASA isn’t committing to one particular lander. If Blue Origin gets their hardware flying before SpaceX, that’s what they’ll go with. There’s even a possibility, albeit remote, that they could test both landers during the mission.
Artemis IV Takes a Different Path
After the success of Apollo 9, there was consideration given to making the first landing attempt on the following mission. But key members of NASA such as Director of Flight Operations Christopher C. Kraft felt there was still more to learn about operating the spacecraft in lunar orbit, and it was ultimately decided to make Apollo 10 a dress rehearsal for the actual landing.
The CSM and LM would head to the Moon, separate, and go through the motions of preparing to land. The LM would begin its descent to the lunar surface, but stop at an altitude of 14.4 kilometers (9 miles). After taking pictures of the intended landing site, it would return to the CSM and the crew would prepare for the return trip to Earth. With these maneuvers demonstrated, NASA felt confident enough to schedule the history-making landing for the next mission, Apollo 11.
But this time around, NASA will take that first option. Rather than do a test run out to the Moon with the Orion capsule and attached lander, the plan is to make the first landing attempt on Artemis IV. This is partially because we now have a more complete understanding of orbital rendezvous and related maneuvers in lunar orbit. But also because by this point, SpaceX and Blue Origin should have already completed their autonomous demonstration missions to prove the capabilities of their respective landers.
Entering Uncharted Territory
At this point, the plans for anything beyond Artemis IV are at best speculative. NASA says they will work to increase mission cadence, which includes streamlining SLS operations so the megarocket can be launched at least once per year, and work towards establishing a permanent presence on the Moon. But of course none of that can happen until these early Artemis missions have been successfully executed. Until then it’s all just hypothetical.
While Apollo was an incredible success, one can only follow its example so far. Despite some grand plans, the program petered out once it was clear the Soviet Union was no longer in the game. It cemented NASA’s position as the preeminent space agency, but the dream of exploring the lunar surface and establishing an outpost remained unfulfilled. With China providing a modern space rival, and commercial partners rapidly innovating, perhaps Artemis may be able to succeed where Apollo fell short.
Creating an Ultra-Stable Lunar Clock With a Cryogenic Silicon Cavity Laser
Phase-coherent lasers are crucial for many precision tasks, including timekeeping. Here on Earth the most stable optical oscillators are used in e.g. atomic clocks and many ultra-precise scientific measurements, such as gravitational wave detection. Since these optical oscillators use cryogenic silicon cavities, it’s completely logical to take this principle and build a cryogenic silicon cavity laser on the Moon.
In the pre-print article by [Jun Ye] et al., the researchers go through the design parameters and construction details of such a device in one of the permanently shadowed regions (PSRs) of the Moon, as well as the applications for it. This would include the establishment of a very precise lunar clock, optical interferometry and various other scientific and telecommunication applications.
Although these PSRs are briefly called ‘cold’ in the paper’s abstract, this is fortunately quickly corrected, as the right term is ‘well-insulated’. These regions never warm up: with no atmosphere to carry thermal energy in, and the Sun’s rays never piercing their darkness, the only significant heat input is whatever the instrument itself generates. Thus, with some radiators to shed that little bit of thermal energy and the typical three layers of thermal shielding, the system should stay very much cryogenic.
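To see roughly why a well-shielded instrument in a PSR can stay cryogenic, a back-of-the-envelope radiative balance helps. The symbols and numbers here are ours for illustration, not taken from the paper: a radiator of area $A$ and emissivity $\varepsilon$ that must dump a parasitic heat load $P$ settles at the Stefan–Boltzmann equilibrium temperature

```latex
% Radiative equilibrium of a shielded instrument
% (sigma = 5.67 x 10^-8 W m^-2 K^-4)
P = \varepsilon \sigma A T^{4}
\quad\Longrightarrow\quad
T = \left( \frac{P}{\varepsilon \sigma A} \right)^{1/4}
```

With illustrative numbers of $P = 1\,\mathrm{W}$, $\varepsilon = 0.9$ and $A = 1\,\mathrm{m^2}$, that works out to roughly 67 K, so as long as the shielding keeps the parasitic load small, a passive radiator alone gets you deep into cryogenic territory.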
Add to this the natural vacuum on the lunar surface, with PSRs even escaping the solar wind’s particulates, and maintaining a cryogenic, ultra-high vacuum inside the silicon cavity should be a snap, with less noise than on Earth. Whether we’ll see this deployed to the Moon any time soon remains to be seen, but with various manned missions and even Moon colony plans in the charts, this could be just one of the many technologies to be deployed on the lunar surface over the next few decades.
Postcard from Brussels: the digital vibe shift
WELCOME BACK TO THE FREE MONTHLY EDITION of Digital Politics. I'm Mark Scott, and the world appears to be veering out of control (again). You're here for digital policy. But for the latest on the evolving crisis in the Middle East, see here, here, here, here and here.
— The mood within European Union policymaking circles has markedly changed when it comes to digital sovereignty, online competition and platform governance.
— The likelihood of a digital-focused transatlantic trade war has risen significantly in the wake of the US Supreme Court's overturning of Donald Trump's tariff regime.
— Who's actually funding Europe's AI industry? The answer isn't who you would think.
Let's get started:
THE NEW REALITY OF THE BRUSSELS BUBBLE
THE EU QUARTER CAN BE A STRANGE PLACE. Among the glass-fronted European Commission buildings, the hustle and bustle of multilingual lobbyists and the cavalcade of European Parliament lawmakers that most people have never heard of, it's difficult to separate fact from fiction. I've spent most of the last two weeks entrenched in the so-called Brussels bubble. I come bearing news: the EU's collective digital policymaking priorities are in flux — and a new reality is starting to emerge.
First, a caveat. This analysis is based on conversations before the US and Israeli attacks on Iran over the weekend. Such an open-ended conflict will inevitably change political priorities, including those associated with tech. I don't know how that will shake out. Reader discretion is advised.
What is unmistakable, however, is that three fundamental shifts are underway in how the 27-country bloc approaches digital policymaking. This shift is couched in 1) the deregulatory environment created by Mario Draghi's 2024 competitiveness report; 2) the dominance of the center-right European People's Party across all EU institutions; and 3) a relegation of tech-related issues behind those linked to Ukraine and trade.
First, the EU is implementing a version of digital sovereignty that will try to onshore infrastructure and seek to reduce the Continent's dependence on US tech giants. This move began before Donald Trump's second term in the White House. But over the last 12 months, even staunch US allies in Eastern Europe and the Baltics have come to recognize that Washington can no longer be seen as a short-term trusted partner. That has jumpstarted a policy agenda aimed at investing public European money into local alternatives to gradually wean the bloc off US tech.
This is still an early-stage movement. Many within more defense-focused policy circles fret that a so-called "rip-and-replace" strategy, which would see the likes of AWS infrastructure give way to a European alternative, would create systemic vulnerabilities which would not be in EU member countries' short-term national interests. More fiscally hawkish officials also worry that throwing EU public funds at often legacy industrial players — many of which are the only ones currently positioned to offer alternatives to Silicon Valley — would not represent good value for money.
Thanks for reading the free monthly version of Digital Politics. Paid subscribers receive at least one newsletter a week. If that sounds like your jam, please sign up here.
Here's what paid subscribers read in February:
— Digital policymaking needs a fundamental rethink; US attacks against Europe's online safety regime are not really about the bloc's online safety regime; Southeast Asia still dominates the world's semiconductor industry. More here.
— Public security and combating disinformation are increasingly intertwined, often in ways that should leave us feeling queasy; How Brussels' latest regulatory enforcement about TikTok plays into the EU's wider legislative agenda; Polarized social media has led to a public exodus from these platforms. More here.
— Be wary of anyone at India's AI Impact Summit peddling easy solutions for AI governance; The rise of kids' social media bans is an example of the lack of quantifiable evidence in digital policymaking; The Global Majority is missing from the global data center boom. More here.
— What is, and what is not, working within the EU's Digital Services Act; Debrief from the AI Impact Summit: more trade show than policymaking; One-third of US teenagers use AI chatbots every day. More here.
And yet, my conversations with EU officials over the last two weeks made it clear that such a "Make Europe Great Again" digital sovereignty strategy — including now open discussions of funding European alternatives to American social media companies — has been baked into the bloc's policy priorities.
Second (and this is related to the first point) is a growing awareness of, and willingness to use, the EU's digital competition rulebook to fast-track the newly-empowered digital sovereignty strategy.
While some officials and advocates would like to pour money into European alternatives (and that inevitably will happen), others are taking a more nuanced approach. That includes galvanizing the EU's Digital Markets Act to reduce market concentration which, in turn, would open up space for European alternatives to flourish.
This strategy is based on the somewhat naive belief that if only Big Tech didn't control the market, then a steady flow of European and non-European firms would be able to compete in everything from social media to online marketplaces to cloud computing infrastructure. Such a theory misunderstands the network effects from which consumers benefit when such services are bundled together — often at a cheaper price compared to buying such digital wares individually.
But as the DMA undergoes its current review, policymakers hope to extend the competition levers within this legislation to more aggressively hobble US tech firms, as well as expand areas of interoperability so that smaller firms can build on top of these platforms by offering people the ability to connect often rival services to each other. This is already available for messaging services within the bloc, and some EU startups now offer that ability.
Policymakers are also looking to extend that functionality — and, goes the theory, reduce Big Tech's market dominance and boost the bloc's digital sovereignty — to the likes of social media.
Third: the era of vigorous enforcement of the bloc's online safety and platform governance rules will be replaced by more nuanced policymaking aimed at balancing internal political priorities with those coming from outside the bloc.
That may sound odd, given my take on the EU's online safety landscape from last week. But the political winds have shifted away from comprehensive enforcement on topics like platform design and disinformation (editor's note: this does not constitute illegal content under the bloc's Digital Services Act). In its place, there will be more kneejerk policymaking attempts around populist topics like social media bans for teenagers, which meet short-term priorities for national leaders without addressing the long-term harm derived from how these platforms are designed.
It would be wrong to think that attacks from the US on the DSA had not played a role in this shift. The European Commission is a political beast. The repeated (and unfounded) claims that these rules equate to censorship of Americans' First Amendment rights have been heard at the very top of the Berlaymont building.
But, in truth, the shift away from aggressive, fast and comprehensive enforcement of the bloc's online safety rules has been driven by a change in the EU's internal dynamics.
Many center-right politicians — and such lawmakers now hold a majority in the European Commission, European Parliament and Council of the EU — are openly skeptical of the need for these rules. The complexities of implementing the DSA, in which Brussels enforcers are struggling to have a meaningful impact, have run up against shifting political priorities that promote deregulation and a more populist digital agenda.
That doesn't mean Brussels won't continue enforcing the DSA. But it is no longer first among legislative equals as EU officials turn their attention to digital sovereignty and the use of the bloc's competition rules to lift up European alternatives to their US and Chinese competitors.
Chart of the week
EUROPE WANTS TO GO IT ALONE ON AI. But which investors lie at the heart of the Continent's strategic ambitions for the emerging technology?
The University of Amsterdam's Leevi Saari crunched investment data for all AI-linked European startups (including those from non-EU countries). He then ranked which investors were central to these deals, scoring them on so-called "betweenness centrality," a figure that measures how important certain actors are to how the Continent's AI startups grow.
At the top of the list is French public investor BPI France, which plays a central role in the country's AI scene. Only one American Big Tech firm — Nvidia — makes the list (at number six). Top-tier US venture capitalists and Europe's billionaire class, like Xavier Niel, also underpin how AI investment works across Europe, based on Saari's analysis.
Source: Leevi Saari
ARE WE HEADING TOWARD A TRANSATLANTIC (DIGITAL) TRADE WAR?
THE RECENT US SUPREME COURT 6-3 DECISION to invalidate 60 percent of US tariffs against third-party countries feels like a lifetime ago. In truth, it only happened on Feb 20. The world is rightly preoccupied with other matters. But the ongoing global omnishambles shouldn't take away from the fact that the EU-US trade deal — known as the Turnberry Framework — is on life support after the European Parliament refused to ratify it; and US President Trump threatened a new round of potential tariffs, including those that targeted the 27-country bloc (more on that below).
Trade negotiators, on both sides, are seeking a compromise. Maroš Šefčovič, the EU's trade czar, held meetings with his US counterpart, and said that "full respect for the EU-US deal is paramount."
If only things were that simple.
I still believe that any future transatlantic trade beef would likely be limited to the offline, not online, world. The US runs a significant trade surplus with the EU on digital services, whereas it runs an almost identical trade deficit on non-digital goods. If Washington really wants to hurt Brussels (and other European capitals), then it makes a lot more sense to slap tariffs on French wine and German cars than it does to tax incoming digital services from the likes of <<checks notes>> almost no EU-based firm (I joke, but only just.)
This, however, is where things get sticky. As part of the White House's new arsenal of potential tariff measures are so-called Section 301 investigations. These probes allow the US Trade Representative to look into any country's trading practices to determine if they are discriminatory or unfair against US firms. When it comes to Europe, the Trump administration has already made clear its anger toward the bloc's digital rulebook.
"The European Union and certain EU Member States have persisted in a continuing course of discriminatory and harassing lawsuits, taxes, fines, and directives against U.S. service providers," the USTR said in late 2025. "If the EU and EU Member States insist on continuing to restrict, limit, and deter the competitiveness of U.S. service providers through discriminatory means, the United States will have no choice but to begin using every tool at its disposal to counter these unreasonable measures."
Shots fired, if you will.
Sign up for Digital Politics
Thanks for getting this far. Enjoyed what you've read? Why not receive weekly updates on how the worlds of technology and politics are colliding like never before. The first two weeks of any paid subscription are free.
There is still a long way to go before Washington starts specific 301 investigations into Europe — let alone before it leads to a tit-for-tat trade war with Washington. US President Trump, however, is looking for any opportunity to impose new tariffs. And for the EU, that's most likely connected to the bloc's competition laws, known as the Digital Markets Act, and national digital services taxes, which almost exclusively are paid by American tech firms.
That contrasts with the public attention some in the White House have focused on the bloc's online safety rules. Such ire may represent red meat in the ongoing culture war over platform governance. But for almost all US tech giants, the bigger issue remains EU digital competition rules and these unilateral digital services taxes.
If I were a betting man, I would put all my money on upcoming 301 investigations focusing on these two digital issues in how Washington responds to last month's US Supreme Court decision. Former administrations, on both sides of the aisle, have raised objections to these laws. Competition rules and digital services taxes would neatly fit into the definition required to start such investigations. And the focus on tech — compared to more analogue products — provides the White House with a strong corporate lobbying constituency willing to back a more aggressive stance with Europe.
For now, such speculation remains what it is: speculation. Officials' attention is also drawn elsewhere.
But in the coming months, I would wager the US will attempt to use such digital-focused 301 investigations to force the issue. In response, Europe already has a suite of tech-focused tariff responses that would be aimed at Silicon Valley — including potential hefty EU tariffs and, if things really go badly, potential Continent-wide bans on certain digital services.
Hopefully, we do not get to such a stage, for the sake of officials on both sides of the Atlantic. In the wake of the Turnberry deal (almost none of which touched the digital world), most people breathed a sigh of relief that we had avoided a transatlantic trade war. That threat is now back — and all bets may soon be off.
What I'm reading
— Chatham House published an analysis of how so-called middle-power countries could navigate the dual hegemony of the US and China on AI. More here.
— A report from Citrini, a research group, into the potential labor force and market impact of mass adoption of AI led to a significant fall in US financial markets. Read the analysis here.
— We are living in a 'sovereignty paradox' in which the more governments and companies try to build their own AI systems, the more they rely on a small number of foreign providers, argues Damien Kopp for the Digital New Deal.
— More than 60 data protection authorities from around the world signed a joint voluntary statement on the privacy impact of AI-generated imagery. More here.
— The US federal government ordered all agencies to stop using Anthropic's AI systems after the company refused to agree to certain commitments, including allowing its technology to be used to surveil American citizens within the country and to power unmanned military equipment. Here is Anthropic's statement. And here is the statement from OpenAI's Sam Altman after the company agreed to work with the US Department of War.
Mobile malware evolution in 2025
Starting from the third quarter of 2025, we have updated our statistical methodology based on the Kaspersky Security Network. These changes affect all sections of the report except for the installation package statistics, which remain unchanged.
To illustrate trends between reporting periods, we have recalculated the previous year’s data; consequently, these figures may differ significantly from previously published numbers. All subsequent reports will be generated using this new methodology, ensuring accurate data comparisons with the findings presented in this article.
Kaspersky Security Network (KSN) is a global network for analyzing anonymized threat intelligence, voluntarily shared by Kaspersky users. The statistics in this report are based on KSN data unless explicitly stated otherwise.
The year in figures
According to Kaspersky Security Network, in 2025:
- Over 14 million attacks involving malware, adware or unwanted mobile software were blocked.
- Adware remained the most prevalent mobile threat, accounting for 62% of all detections.
- Over 815 thousand malicious installation packages were detected, including 255 thousand mobile banking Trojans.
The year’s highlights
In 2025, cybercriminals launched an average of approximately 1.17 million attacks per month against mobile devices using malicious, advertising, or unwanted software. In total, Kaspersky solutions blocked 14,059,465 attacks throughout the year.
Attacks on Kaspersky mobile users in 2025 (download)
Beyond the malware mentioned in previous quarterly reports, 2025 saw the discovery of several other notable Trojans. Among these, in Q4 we uncovered the Keenadu preinstalled backdoor. This malware is integrated into device firmware during the manufacturing stage. The malicious code is injected into libandroid_runtime.so – a core library for the Android Java runtime environment – allowing a copy of the backdoor to enter the address space of every app running on the device. Depending on the specific app, the malware can then perform actions such as inflating ad views, displaying banners on behalf of other apps, or hijacking search queries. The functionality of Keenadu is virtually unlimited, as its malicious modules are downloaded dynamically and can be updated remotely.
Cybersecurity researchers also identified the Kimwolf IoT botnet, which specifically targets Android TV boxes. Infected devices are capable of launching DDoS attacks, operating as reverse proxies, and executing malicious commands via a reverse shell. Subsequent analysis revealed that Kimwolf’s reverse proxy functionality was being leveraged by proxy providers to use compromised home devices as residential proxies.
Another notable discovery in 2025 was the LunaSpy Trojan.
LunaSpy Trojan, distributed under the guise of an antivirus app
Disguised as antivirus software, this spyware exfiltrates browser passwords, messaging app credentials, SMS messages, and call logs. Furthermore, it is capable of recording audio via the device’s microphone and capturing video through the camera. This threat primarily targeted users in Russia.
Mobile threat statistics
815,735 new unique installation packages were observed in 2025, showing a decrease compared to the previous year. While the decline in 2024 was less pronounced, this past year saw the figure drop by nearly one-third.
Detected Android-specific malware and unwanted software installation packages in 2022–2025 (download)
The overall decrease in detected packages is primarily due to a reduction in apps categorized as not-a-virus. Conversely, the number of Trojans has increased significantly, a trend clearly reflected in the distribution data below.
Detected packages by type
Distribution* of detected mobile software by type, 2024–2025 (download)
* The data for the previous year may differ from previously published data due to some verdicts being retrospectively revised.
A significant increase in Trojan-Banker and Trojan-Spy apps was accompanied by a decline in AdWare and RiskTool files. The most prevalent banking Trojans were Mamont (accounting for 49.8% of apps) and Creduz (22.5%). Leading the persistent adware category were MobiDash (39%), Adlo (27%), and HiddenAd (20%).
Share* of users attacked by each type of malware or unwanted software out of all users of Kaspersky mobile solutions attacked in 2024–2025 (download)
* The total may exceed 100% if the same users encountered multiple attack types.
Trojan-Banker malware saw a significant surge in 2025, not only in terms of unique file counts but also in the total number of attacks. Nevertheless, this category ranked fourth overall, trailing far behind the Trojan file category, which was dominated by various modifications of Triada and Fakemoney.
TOP 20 types of mobile malware
Note that the malware rankings below exclude riskware and potentially unwanted apps, such as RiskTool and adware.
| Verdict | % 2024* | % 2025* | Difference in p.p. | Change in ranking |
|---|---|---|---|---|
| Trojan.AndroidOS.Triada.fe | 0.04 | 9.84 | +9.80 | |
| Trojan.AndroidOS.Triada.gn | 2.94 | 8.14 | +5.21 | +6 |
| Trojan.AndroidOS.Fakemoney.v | 7.46 | 7.97 | +0.51 | +1 |
| DangerousObject.Multi.Generic | 7.73 | 5.83 | –1.91 | –2 |
| Trojan.AndroidOS.Triada.ii | 0.00 | 5.25 | +5.25 | |
| Trojan-Banker.AndroidOS.Mamont.da | 0.10 | 4.12 | +4.02 | |
| Trojan.AndroidOS.Triada.ga | 10.56 | 3.75 | –6.81 | –6 |
| Trojan-Banker.AndroidOS.Mamont.db | 0.01 | 3.53 | +3.51 | |
| Backdoor.AndroidOS.Triada.z | 0.00 | 2.79 | +2.79 | |
| Trojan-Banker.AndroidOS.Coper.c | 0.81 | 2.54 | +1.72 | +35 |
| Trojan-Clicker.AndroidOS.Agent.bh | 0.34 | 2.48 | +2.14 | +74 |
| Trojan-Dropper.Linux.Agent.gen | 1.82 | 2.37 | +0.55 | +4 |
| Trojan.AndroidOS.Boogr.gsh | 5.41 | 2.06 | –3.35 | –8 |
| DangerousObject.AndroidOS.GenericML | 2.42 | 1.97 | –0.45 | –3 |
| Trojan.AndroidOS.Triada.gs | 3.69 | 1.93 | –1.76 | –9 |
| Trojan-Downloader.AndroidOS.Agent.no | 0.00 | 1.87 | +1.87 | |
| Trojan.AndroidOS.Triada.hf | 0.00 | 1.75 | +1.75 | |
| Trojan-Banker.AndroidOS.Mamont.bc | 1.13 | 1.65 | +0.51 | +8 |
| Trojan.AndroidOS.Generic. | 2.13 | 1.47 | –0.66 | –6 |
| Trojan.AndroidOS.Triada.hy | 0.00 | 1.44 | +1.44 | |
* Unique users who encountered this malware as a percentage of all attacked users of Kaspersky mobile solutions.
The list is largely dominated by the Triada family, which is distributed via malicious modifications of popular messaging apps. Another infection vector involves tricking victims into installing an official messaging app within a “customized virtual environment” that supposedly offers enhanced configuration options. Fakemoney scam applications, which promise fraudulent investment opportunities or fake payouts, continue to target users frequently, ranking third in our statistics. Meanwhile, the Mamont banking Trojan variants occupy the 6th, 8th, and 18th positions by number of attacks. The Triada backdoor preinstalled in the firmware of certain devices reached the 9th spot.
Region-specific malware
This section describes malware families whose attack campaigns are concentrated within specific countries.
| Verdict | Country* | %** |
|---|---|---|
| Trojan-Banker.AndroidOS.Coper.a | Türkiye | 95.74 |
| Trojan-Dropper.AndroidOS.Hqwar.bj | Türkiye | 94.96 |
| Trojan.AndroidOS.Thamera.bb | India | 94.71 |
| Trojan-Proxy.AndroidOS.Agent.q | Germany | 93.70 |
| Trojan-Banker.AndroidOS.Coper.c | Türkiye | 93.42 |
| Trojan-Banker.AndroidOS.Rewardsteal.lv | India | 92.44 |
| Trojan-Banker.AndroidOS.Rewardsteal.jp | India | 92.31 |
| Trojan-Banker.AndroidOS.Rewardsteal.ib | India | 91.91 |
| Trojan-Dropper.AndroidOS.Rewardsteal.h | India | 91.45 |
| Trojan-Banker.AndroidOS.Rewardsteal.nk | India | 90.98 |
| Trojan-Dropper.AndroidOS.Agent.sm | Türkiye | 90.34 |
| Trojan-Dropper.AndroidOS.Rewardsteal.ac | India | 89.38 |
| Trojan-Banker.AndroidOS.Rewardsteal.oa | India | 89.18 |
| Trojan-Banker.AndroidOS.Rewardsteal.ma | India | 88.58 |
| Trojan-Spy.AndroidOS.SmForw.ko | India | 88.48 |
| Trojan-Dropper.AndroidOS.Pylcasa.c | Brazil | 88.25 |
| Trojan-Dropper.AndroidOS.Hqwar.bf | Türkiye | 88.15 |
| Trojan-Banker.AndroidOS.Agent.pp | India | 87.85 |
* Country where the malware was most active.
** Unique users who encountered the malware in the indicated country as a percentage of all users of Kaspersky mobile solutions who were attacked by the same malware.
Türkiye saw the highest concentration of attacks from Coper banking Trojans and their associated Hqwar droppers. In India, Rewardsteal Trojans continued to proliferate, exfiltrating victims’ payment data under the guise of monetary giveaways. Additionally, India saw a resurgence of the Thamera Trojan, which we previously observed frequently attacking users in 2023. This malware hijacks the victim’s device to illicitly register social media accounts.
The Trojan-Proxy.AndroidOS.Agent.q campaign, concentrated in Germany, utilized a compromised third-party application designed for tracking discounts at a major German retail chain. Attackers monetized these infections through unauthorized use of the victims’ devices as residential proxies.
In Brazil, 2025 saw a concentration of Pylcasa Trojan attacks. This malware is primarily used to redirect users to phishing pages or illicit online casino sites.
Mobile banking Trojans
The number of new banking Trojan installation packages surged to 255,090, representing a several-fold increase over previous years.
Mobile banking Trojan installation packages detected by Kaspersky in 2022–2025 (download)
Notably, the total number of attacks involving bankers grew by 1.5 times, maintaining the same growth rate seen in the previous year. Given the sharp spike in the number of unique malicious packages, we can conclude that these attacks yield significant profit for cybercriminals. This is further evidenced by the fact that threat actors continue to diversify their delivery channels and accelerate the production of new variants in an effort to evade detection by security solutions.
TOP 10 mobile bankers
| Verdict | % 2024* | % 2025* | Difference in p.p. | Change in ranking |
|---|---|---|---|---|
| Trojan-Banker.AndroidOS.Mamont.da | 0.86 | 15.65 | +14.79 | +28 |
| Trojan-Banker.AndroidOS.Mamont.db | 0.12 | 13.41 | +13.29 | |
| Trojan-Banker.AndroidOS.Coper.c | 7.19 | 9.65 | +2.46 | +2 |
| Trojan-Banker.AndroidOS.Mamont.bc | 10.03 | 6.26 | –3.77 | –3 |
| Trojan-Banker.AndroidOS.Mamont.ev | 0.00 | 4.10 | +4.10 | |
| Trojan-Banker.AndroidOS.Coper.a | 9.04 | 4.00 | –5.04 | –4 |
| Trojan-Banker.AndroidOS.Mamont.ek | 0.00 | 3.73 | +3.73 | |
| Trojan-Banker.AndroidOS.Mamont.cb | 0.64 | 3.04 | +2.40 | +26 |
| Trojan-Banker.AndroidOS.Faketoken.pac | 2.17 | 2.95 | +0.77 | +5 |
| Trojan-Banker.AndroidOS.Mamont.hi | 0.00 | 2.75 | +2.75 | |
* Unique users who encountered this malware as a percentage of all users of Kaspersky mobile solutions who encountered banking threats.
In 2025, we observed a massive surge in activity from Mamont banking Trojans. They accounted for approximately half of all new apps in their category and were also used in half of all banking Trojan attacks.
Conclusion
The year 2025 saw a continuing trend toward a decline in total unique unwanted software installation packages. However, we noted a significant year-over-year increase in specific threats – most notably mobile banking Trojans and spyware – even though adware remained the most frequently detected threat overall.
Among the mobile threats detected, we have seen an increased prevalence of preinstalled backdoors, such as Triada and Keenadu. Consistent with last year’s findings, certain mobile malware families continue to proliferate via official app stores. Finally, we have observed a growing interest among threat actors in leveraging compromised devices as proxies.
Neither Android nor iOS: DIY Smartphone Runs on ESP32!
You may or may not be reading this on a smartphone, but odds are that even if you aren’t, you own one. Well, possess one, anyway — it’s debatable if the locked-down, one-way relationships we have with our addiction slabs count as ownership. [LuckyBor], aka [Breezy], on the other hand — fully owns his 4G smartphone, because he made it himself.
OK, sure, it’s only rocking a 4G modem, not 5G. But with an ESP32-S3 for a brain, that’s probably going to provide plenty of bandwidth. It does what you expect from a phone: thanks to its SIMCom A7682E modem, it can call and text. The OV2640 Arducam module allows it to take pictures, and yes, it surfs the web. It even has features certain flagship phones lack, like a 3.5 mm audio jack, and with its 3.5″ touchscreen, the ability to fit in your pocket. Well, once it gets a case, anyway.
It talks, it texts, it… does not make julienne fries, but that’s arguably a good thing.
This is just an alpha version, a brick of layered modules. [LuckyBor] plans on fitting everything into a slimmer form factor with a four-layer PCB that will also include an SD-card adapter, and will open-source the design at that time, both hardware and software. Since [LuckyBor] has also promised the world documentation, we don’t mind waiting a few months.
It’s always good to see another open-source option, and this one has us especially chuffed. Sure, we’ve written about postmarketOS and other Linux options like Nix, and someone even put the Rust-based Redox OS on a phone, but those are still on the same potentially-backdoored commercial hardware. That’s why this project is so great, even if its performance is decidedly weak compared to flagship phones that have as much horsepower as some of our laptops.
We very much hope [LuckyBor] carries through with the aforementioned promise to open source the design.
Old Desk Phone Gets DOOM Port
Old desk phones are fairly useless these days unless you’re building a corporate PBX in your house. However, they can be fun to hack on, as [0x19] demonstrates by porting DOOM to a Snom 360 office phone.
The Snom 360 is a device from the early VoIP era, with [0x19] laying their hands on some examples from 2005. The initial plan was just to do some telephony with Asterisk, but [0x19] soon realized more was possible. Digging into a firmware image revealed the device ran a Linux kernel on a MIPS chip, so the way forward became obvious.
They set about hacking the phone to run DOOM on its ancient single-color LCD. Doing so was no mean feat. It required compilation of custom firmware, pulling over a better version of BusyBox, and reworking doomgeneric to run on this oddball platform. It also required figuring out how the keyboard was read and the screen was driven to write custom drivers—not at all trivial things on a bespoke phone platform. With all that done, though, [0x19] had a dodgy version of DOOM running slowly on a desk phone’s barely-legible LCD.
Porting DOOM is generally a task done more for the technical thrill than to actually play the game on terribly limited hardware. We love seeing it done, whether the game is ported to a LEGO brick or a pair of earbuds. If you’re doing your own silly port, don’t hesitate to notify the tipsline—just make sure it’s one we haven’t seen before.
Inside SKALA: How Chernobyl’s Reactor Was Actually Controlled
Entering SKALA codes during RBMK operation. (Credit: Pripyat-Film studio)
Running a nuclear power plant isn’t an easy task, even with the level of automation available to a 1980s Soviet RBMK reactor. In their continuing efforts to build a full-sized, functional replica of an RBMK control room as found at the Chornobyl Nuclear Power Plant – retired in the early 2000s – the [Chornobyl Family] channel has now moved on to the SKALA system.
Previously we saw how they replicated the visually very striking control panel for the reactor core, with its many buttons and status lights. SKALA is essentially the industrial control system, with multiple V-3M processor racks (‘frames’), each with 20k 24-bit words of RAM. Although less powerful than a PDP-11, its task was to gather all the sensor information and process it in real time, which was done in dedicated racks.
Output from SKALA’s DREG program was also the source of the last messages from the doomed #4 reactor. Unfortunately, an industrial control system can only do so much if its operators have opted to disable every single safety feature. By the time the accident unfolded, the hardware was unable to even keep up with the rapid changes, and not all sensor information could even be recorded on the high-speed drum printer or RTA-80 teletypes, leaving gaps in our knowledge of the accident.
(Credit: Chornobyl Family, YouTube)
Setting up a genuine RTA-80 teletype is still one of the goals, but these old systems are not easy to use. The same goes for the original software that ran on these V-3M computer frames, which was loaded from paper tape (the ‘library’), including the aforementioned DREG program. This process creates executable code that is put on magnetic tape, which is also used for storage.
(Credit: Chornobyl Family, YouTube)
The workings of the SKALA system and its individual programs, including KRV, DREG, and PRIZMA, are explained in the video, each having its own focus on a part of the RBMK reactor’s status and overall health. Interacting with SKALA occurs via a special keyboard, on which the operator enters command codes to change settings such as set points, with the parameters encoded in the code itself.
Using this method, RBMK operators can set and request values, with parameters and any error codes displayed on a dedicated display. There is also the Mnemonic Display for the SKALA system which provides feedback to the operator on the status of the SKALA system, including any faults.
Although to many people the control system of a power plant is just the control room, with its many confusing buttons, switches, lights and displays, there is actually a lot more to it, with systems like SKALA and their associated hardware an often overlooked aspect. It’s great to see this kind of knowledge being preserved, and even poured into a physical model that simulates the experience of using the system.
The long-lived nature of nuclear power reactors means that even today 1960s and 1970s-era industrial automation systems are still in active use, but once the final reactor goes offline – or is modernized during refurbishment – a lot of the institutional knowledge of these systems tends to vanish, and with it a big part of history.
youtube.com/embed/Sjk2B0SzXUU?…
Designing A Pen Clip That Never Bends Out Of Shape
If you’ve ever used a ballpoint pen with a clip on the top, you’ve probably noticed they bend pretty easily. The clip relies on you only bending it a small amount to clip it on to things; bend it too far, and it ends up permanently deformed. [Craighill] decided to develop a pen clip that didn’t suffer this ugly malady.
The wire clip design easily opens wide because the spring wire is not actually deforming much at all. Credit: YouTube video, via screenshot
The problem with regular pen clips comes down to simple materials science. Bend the steel clip a little bit, and the stress in the material remains below the elastic limit—so it springs back to its original shape. Push it too far, though, and you’ll end up getting into the plastic deformation region, where you’ve applied so much stress that the material is permanently deformed.
[Craighill] noted this problem, and contemplated whether a better type of clip was possible. An exploration of carabiner clips served to highlight possible solutions. Some carabiners used elastically-deformed closures that faced the same problem, while others used more complicated spring closures or a nifty bent-wire design. This latter solution seemed perfect for building a non-deforming pen clip. The bent wire is effectively a small spring, which allows it to act as a clip to hold the pen on to something. However, it’s also able to freely rotate out from the pen body, limiting the amount of actual stress put on the material itself, which stops it entering the plastic deformation region that would ruin it.
It’s some neat materials science combined with a pleasant bit of inventing, which we love to see. Sometimes there is joy to be had in contemplating and improving even the simplest of things. Video after the break.
youtube.com/embed/bFDt3lUzVPc?…
youtube.com/embed/3i9FGaakX-Y?…
[Thanks to Keith Olson for the tip!]
Exploring Security Vulnerabilities in a Cheapo WiFi Extender
If all you want is just a basic WiFi extender that gets some level of network connectivity to remote parts of your domicile, then it might be tempting to grab one of those $5 300 Mbit extenders off Temu, as [Low Level] recently did for a security audit. Naturally, as he shows in the subsequent analysis of its firmware, you really don’t want to stick this thing into your LAN. In this context it is also worrying that the product page claims that over 100,000 of these have been sold.
The security audit starts by using $(reboot) as the WiFi password, just to see whether the firmware passes this value to a shell without sanitizing it. Shockingly, this soft-bricks the device with an infinite reboot loop until a factory reset is performed by long-pressing the reset button. Amusingly, after this the welcome page changed to the ‘Breed web recovery console’ interface, in Chinese.
Here we also see that it uses a Qualcomm Atheros QCA953X SoC, which incidentally is OpenWRT compatible. On this new page you can perform a ‘firmware backup’, making it easy to dump and reverse-engineer the firmware in Ghidra. Based on this code it was easy to determine that full remote access to these devices was available due to a complete lack of sanitization, proving once again that a lack of input sanitization is still the #1 security risk.
In the video it’s explained that they tried to find and contact the manufacturer about these security issues, but this proved to be basically impossible. This leaves probably thousands of these vulnerable devices scattered around on networks, but on the bright side they could be nice targets for OpenWRT and custom firmware development.
youtube.com/embed/KsiuA5gOl1o?…
The Perfect Cheat’s Racing Bicycle
One of the ongoing rumors and scandals in professional cycle sport concerns “motor doping” — the practice of concealing an electric motor in a bicycle to provide the rider with an unfair advantage. It’s investigated in a video from [Global Cycling Network], in which they talk about the background and then prove it’s possible by creating a motor-doped racing bike.
To do this they’ve recruited a couple of recent graduate engineers, who get to work in a way most of us would be familiar with: prototyping with a set of 18650 cells, some electronics, and electromagnets. It uses what they call a “Magic wheel”, which features magnets embedded in its rim that engage with hidden electromagnets. It gives somewhere just under 20 W boost, which doesn’t sound much, but could deliver those crucial extra seconds in a race.
Perhaps the most interesting part is the section which looks at the history of motor doping with some notable cases mentioned, and the steps taken by cycling competition authorities to detect it. They use infra-red cameras, magnetometers, backscatter detectors, and even X-ray machines, but even these haven’t killed persistent rumors in the sport. It’s a fascinating video we’ve placed below the break, and we thank [Seb] for the tip. Meanwhile the two lads who made the bike are looking for a job, so if any Hackaday readers are hiring, drop them a line.
youtube.com/embed/ZdDHtLP3oEs?…
Get Your Green Power On!
Nobody likes power cords, and batteries always need recharging or replacing. What if your device could run on only the power it could gather together by itself from the world around it? It would be almost like free energy, although without breaking the laws of physics.
Hackaday’s 2026 Green-Powered Challenge asks you to show us your devices, contraptions, and hacks that can run on the power they can harvest. Whether it’s heat, light, vibration, or any other source of energy that your device gathers to keep running, we’d like to see it.
The top three entries will receive $150 shopping sprees courtesy of the contest’s sponsor, DigiKey, so get your entry in before April 24, 2026, to be eligible to win.
Honorable Mentions
As always, we have several honorable mention categories to get your creative juices flowing:
- Solar: In terms of self-powered anything, photovoltaic cells are probably the easiest way to go, yet good light-harvesting designs aren’t exactly trivial either. Let’s see what you can run on just the sun. (Or even room lighting?)
- Anything But PV: Harnessing the light is too easy for you, then? How about piezo-electric power or a heat generator? Show us your best self-powering projects that work even when it’s dark out.
- Least Power: Maybe the smartest way to make your project run forever is to just cut down on the juice. If your project can run on its own primarily because of clever energy savings, it’s eligible for this mention.
- Most Power: How much of a challenge is building a solar-powered desk calculator in 2026? How about pushing it to the other extreme? Let’s see how much power you can consume while still running without batteries or cords. Does your off-grid shack count here? Let’s see it!
Prior Art
We’ve seen a lot of green-powered projects on Hackaday over the years, ranging from a solar-powered web server to a microcontroller powered by a BPW34 photodiode. Will your entry run off the juice harvested by an LED? It’s not inconceivable!
Solar cells only work when the sun shines, though. As long as your body is putting out heat, this Seebeck-effect ring will keep on running. (Matrix vibes notwithstanding!) Or maybe you want to go straight from heat to motion with a Stirling engine. And our favorite environmental-energy-harvester of all has to be the Beverly Clock and its relatives, running on the daily heat cycles and atmospheric pressure changes.
Your Turn
So what’s your energy-harvesting project? Batteries are too easy. Take it to the next level! All you have to do to enter is put your project up on Hackaday.io, pull down the “Submit Project to…” widget on the right, and you’re in. It’s that easy, and we can’t wait to see what you are all up to.
And of course, stay tuned to Hackaday, as we pick from our favorites along the way.
A Look Inside the Creative MB-10 MIDI Blaster
Before it became viable to distribute and play music tracks on home computers, the use of FM and Wavetable synthesis was very common, with MIDI Wavetable-based devices like the Roland MT-32 and SC-55 still highly sought after today. The Creative Midi Blaster MB-10 that [Yeo Kheng Meng] reviewed and tore down for an analysis isn’t quite as famous or sought after, but it provides a good example of what Creative Labs was doing at the time in this space.
Released in 1993, it definitely has more of a popular style vibe to it than the utilitarian Roland devices, even if this means highly impractical curves. In the list of features it claims Roland MT-32 emulation, which would have made it quite a bit more useful to the average user, including gamers of the era. Games like DOOM supported these MIDI devices for audio, for example.
In terms of price only the Roland SC-55ST comes close to the MB-10, similarly dropping a screen and a host of features. In terms of features the MB-10 claims far fewer instruments than the SC-55 variants, with even the slightly higher-priced SC-55ST massively outgunning it in raw specs. So would you ever buy the MB-10 back then and consider it a ‘good deal’? If $100 in 1990s money was worth losing full MIDI compatibility for, then it seems the answer was ‘yes’.
During the teardown of the MB-10 we can find an 8051-based Siemens processor that handles the MIDI interfaces and a Dream SAM8905 effects processor. Most of the remaining ICs are ROM chips that contain the firmware and MIDI banks, with the ROM dumps found in this GitHub repository.
The analog output stage includes the venerable TL074CN opamp and TDA1545 DAC, as well as a TDA2822M power amplifier IC. All of which is typical off-the-shelf for the era and also not something where Creative spent big bucks. It also appears that the 20-note polyphony claims on the box are false, as the Dream processor can only do 16 notes, which a quick test confirmed.
Despite being the cheaper option, it seems that most people with the spare cash to splurge on an external MIDI Wavetable device opted for a Roland one. These days it’s correspondingly quite hard to find an MB-10 for sale, unlike Roland MT-32 and SC-55 variants, yet considering software support you really want to just stick with MT-32 and SC-55 compatibility anyway.
Back to Basics: Hacking on Key Matrixes
A lot of making goes on in this community these days, but sometimes you’ve just gotta do some old fashioned hacking. You might have grabbed an old Speak and Spell that you want to repurpose as an interface for a horrifyingly rude chatbot, or you’ve got a calculator that is going to become the passcode keypad for launching your DIY missiles. You want to work with the original hardware, but you need to figure out how to interface all the buttons yourself.
Thankfully, this is usually an easy job. The vast majority of buttons and keypads and keyboards are all implemented pretty much the same way. Once you know the basics of how to work with them, hooking them up is easy. It’s time to learn about key matrixes!
Wire ‘Em Up
A simple 3 x 3 matrix layout that allows six pins to read nine buttons. The buttons are organized into three rows and three columns. Credit: author
Imagine you have a piece of consumer hardware, like a desk phone or an old control panel or something. You’d like to hook up a microcontroller to read all the buttons. Only, there’s 10, or 20, or 100 buttons… and your microcontroller just doesn’t have that many I/O pins! If you’re only familiar with hooking up a couple of push buttons to a couple of pins on an Arduino with some pull-up resistors, this can feel like an overbearing limitation. However, thankfully—there is a better way!
Enter the key matrix. It’s a very simple way of hooking up more buttons to fewer I/O pins. Imagine, for example, a nine-button keypad, arranged in a 3 x 3 square. Assign three pins for columns, and three pins for rows. Each button in the keypad is hooked up to one row pin and one column pin. You can then, for example, energize each row pin in turn with a high output on a microcontroller, and detect whether any of the column pins go high by setting them to inputs. Do this quickly enough, and you can detect the state of all nine buttons with just six pins. In fact, the technique is generalizable—for n pins, you can address (n/2)² buttons. For six pins, that’s nine buttons.
In this diagram, each circle represents a button, which is connected to the pins whose lines intersect within. With this method, it’s possible to address many more buttons with the same amount of I/O pins as a regular row-column layout. Credit: author, inspired by work from Touchscreen1
You can even take it further, if you abandon the concept of a grid-like row-and-column layout. Instead, take six pins, for example, treating each one as its own “row.” Place a button between it and every other pin, doing the same for each following pin in turn. You can then energize pin 1 while scanning pins 2 through 6 to see which buttons were pressed, and so on through the rest of the pins. This nets you more buttons per pin: (n² − n)/2, in fact. For our six-pin example, you could address 15 buttons this way.
When you expect multiple button presses at a time, you should add diodes into the button matrix to prevent current paths taking unexpected directions, and you might be lucky enough to find that your device already has them. There are even more advanced techniques, like Charlieplexing, that can address n² − n switches, but you’re less likely to come across this in the wild except for pin-constrained LED circuits.
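The pin-count math for the three schemes is easy to check in code. Here’s a quick sketch (function names are our own, chosen for illustration):

```python
def grid_buttons(pins: int) -> int:
    # Classic row-column grid: split the pins into rows and columns
    # as evenly as possible; buttons = rows * columns, so (n/2)^2 for even n.
    rows = pins // 2
    cols = pins - rows
    return rows * cols

def pair_buttons(pins: int) -> int:
    # One button between every unordered pair of pins: n*(n-1)/2.
    return pins * (pins - 1) // 2

def charlieplexed(pins: int) -> int:
    # With diodes, each pin pair works in both directions: n*(n-1).
    return pins * (pins - 1)

print(grid_buttons(6), pair_buttons(6), charlieplexed(6))  # 9 15 30
```

With six pins, the grid gets you 9 buttons, the pin-pair trick 15, and Charlieplexing 30.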
These techniques are commonly referred to as multiplexing, and you’ll find them in all sorts of places. Everything from TV remotes to desktop calculators uses this sort of technique to address many buttons without requiring lots of individual I/O pins.
Sometimes you’ll find a piece of hardware with neat little test pads that link up with the rows and columns of the keypad matrix. This makes things easy! Credit: author
Once you’re aware of this, it generally becomes straightforward to open up any such piece of hardware and figure out how the buttons work. All you need to do is hunt down the traces that connect from button to button, and slowly map out how they’re all connected. Mapping out the board can be challenging, though, because designers don’t always make the traces easy to follow. While something like a keypad may be logically connected in a grid-type layout, for example, it might not actually look like that on the PCB. The traces might be going every which way, complicating your efforts to figure out what’s connected to what.
A multimeter set to continuity mode is a great tool for this work. It lets you tap around a PCB to figure out which side of each button is connected to which other buttons, allowing you to figure out how the matrix is laid out. For example, if you were working with a phone keypad, you might start by putting a multimeter lead on one of the contacts of the “1” button. You might then find that it’s connected to one side of the buttons for 3, 5, 9, and *. You can then probe the other side of each of those buttons to find out what they’re connected to as well. Put all this data into a spreadsheet, and you’ll eventually see which two pins you need to check to determine the status of any button on the keypad.
Generally, you’ll also find all the traces lead back to some main chip or connector, where you can easily solder on leads to hook up your own microcontroller to read all the buttons. It’s not always this easy—some boards will help you out with accessible test pads, while others will only provide tiny solder points for fine-pitch connectors. In a worst-case scenario, you might have to scrape solder resist off some traces so you can solder your wires in that way.
Once you’ve got a microcontroller hooked up to your button pads, the hard part is over. You just need to write some simple code to scan the key matrix and detect button presses. You can use a pre-baked library if you so desire, or you can do it yourself. Ultimately, a simple way is to energize a row with an output I/O pin while setting all the column pins to inputs to see if any buttons are currently pressed, then step through the rest of the rows in turn. You can get fancier if things like latency or anti-ghosting are critical to you, but that’s a discussion for another time. With the high clock speeds of modern microcontrollers, it’s trivial to read even a large key matrix at a rapid pace.
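The scan loop itself is just a pair of nested loops. Here’s a minimal Python sketch of the idea, with the actual pin access mocked out behind a callback; on real hardware, `read_columns` would drive the row pin high and read the column input registers instead:

```python
def scan_matrix(read_columns, n_rows=3):
    """Energize each row in turn; read_columns(row) models reading the
    column input pins while that row is driven high. Returns a list of
    (row, col) pairs for every closed switch."""
    pressed = []
    for row in range(n_rows):
        for col, high in enumerate(read_columns(row)):
            if high:
                pressed.append((row, col))
    return pressed

# Model a 3 x 3 keypad with the middle key (row 1, col 1) held down:
closed = {(1, 1)}
read_columns = lambda row: [(row, c) in closed for c in range(3)]
print(scan_matrix(read_columns))  # [(1, 1)]
```

Swap the callback for real GPIO reads and add debouncing, and you have the core of a working keypad driver.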
youtube.com/embed/Yiq4fkdly04?…
Figuring out how to interface button pads on random hardware is a fun hacking skill to learn, and is accessible for beginners.
It’s worth noting that you might also have to cut some traces going to components of the original circuit, depending on what you’re hacking on. Oftentimes it’s not necessary, particularly if you’re unfussed about what happens to any original circuitry on the board. If you do intend to restore the item to its original function, though, it might not be wise to probe the keypad with a 5 V microcontroller when the original hardware all ran at 3.3 V. You might hurt the original chips on the board if some voltage ends up where you didn’t intend it to go.
If you’ve ever dreamed of turning an air conditioner remote into a secret access panel for your home security system, or making your microwave into a cellular phone, these techniques will serve you well. Go forth, hunt down the matrix, and hack an appliance’s original user interface into the control panel of your dreams.
DK 10x24 - Non è colpa della IA
From the Industrial Revolution onward, the point has never been whether machines can truly replace workers, but who gains bargaining power through the machines, and who loses it...
spreaker.com/episode/dk-10x24-…
Building a Hackerspace Entry System
A hackerspace is a place that generally needs to be accessed by a wide group of people, often at weird and unusual hours. Handing around keys and making sure everything is properly locked up can be messy, too. To make it easy for hackers to get into [Peter]’s local hackerspace, a simple electronic system was whipped up to grant access.
The combined use of a QR code and PIN adds a layer of security.
The basic components of the system are a keypad, a QR code and barcode scanner, a stepper motor, an Arduino Nano, and a Raspberry Pi. The keypad is read by an Arduino Nano, which is also responsible for talking to a stepper motor driver to actuate the lock cylinder.
The system works on the basis of two-factor authentication. Regular users authenticate by presenting a QR code or barcode and entering a matching PIN. The system can also be set up for PIN-only entry on a temporary basis.
For example, if the hackerspace is running an event, a simple four-digit PIN can allow relatively free access for the duration without compromising long-term security. Actual authentication is handled by the Raspberry Pi, which takes in the scanned barcode and/or PIN, hashes it, and checks it against a backend database to determine whether the credentials are valid for entry.
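The project’s actual code isn’t reproduced here, but the check described (hash the scanned credential plus PIN, then look it up in a database) might look something like this sketch. The in-memory set, function names, and badge format are all our own invention standing in for the real backend:

```python
import hashlib

# Toy in-memory "database" of allowed credential hashes; the real
# system presumably uses a proper backend on the Raspberry Pi.
allowed = set()

def _digest(badge: str, pin: str) -> str:
    # Combine the scanned code and PIN, then hash, so raw credentials
    # never need to be stored.
    return hashlib.sha256(f"{badge}:{pin}".encode()).hexdigest()

def enroll(badge: str, pin: str) -> None:
    allowed.add(_digest(badge, pin))

def may_enter(badge: str, pin: str) -> bool:
    return _digest(badge, pin) in allowed

enroll("MEMBER-0042", "1234")
print(may_enter("MEMBER-0042", "1234"))  # True
print(may_enter("MEMBER-0042", "9999"))  # False
```

Storing only hashes means a stolen copy of the database doesn’t directly leak anyone’s badge-and-PIN combination.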
While it’s not technically necessary for a project like this — in fact, you could argue it’s preposterously overkill — we have to take particular note of the machined aluminum enclosure for the keypad. Mere mortals could just run it off on their 3D printers, but if you’ve got access to a CNC router and a suitably chunky piece of aluminum, why not show off a bit?
It’s a nifty system that has served the hackerspace well over some time. We’ve featured some neat access control systems before, too. If you’ve got your own solution to this common problem, don’t hesitate to notify the tipsline!
DK 10x23 - Un mondo di sbobba
It turns out that not only do language models generate bullshit by design, but that marketers can nudge them toward which bullshit to generate...
spreaker.com/episode/dk-10x23-…
Building a Dependency-Free GPT on a Custom OS
The construction of a large language model (LLM) depends on many things: banks of GPUs, vast reams of training data, massive amounts of power, and matrix manipulation libraries like Numpy. For models with lower requirements though, it’s possible to do away with all of that, including the software dependencies. As someone who’d already built a full operating system as a C learning project, [Ethan Zhang] was no stranger to intimidating projects, and as an exercise in minimalism, he decided to build a generative pre-trained transformer (GPT) model in the kernel space of his operating system.
As with a number of other small demonstration LLMs, this was inspired by [Andrej Karpathy]’s MicroGPT, specifically by its lack of external dependencies. The first step was to strip away every unnecessary element from MooseOS, the operating system [Ethan] had previously written, including the GUI, most drivers, and the filesystem. All that’s left is the kernel, and KernelGPT runs on this. To get around the lack of a filesystem, the training data was converted into a header to keep it in memory — at only 32,000 words, this was no problem. Like the original MicroGPT, this is trained on a list of names, and predicts new names. Due to some hardware issues, [Ethan] hasn’t yet been able to test this on a physical computer, but it does work in QEMU.
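Baking a dataset into a header is a classic trick for dependency-free environments. A generator script along these lines could produce it; the array name and exact layout are hypothetical, not taken from [Ethan]’s code:

```python
def words_to_header(words, array_name="TRAINING_WORDS"):
    """Emit C header text embedding a word list as a string array,
    so a kernel with no filesystem can still access the data."""
    lines = [f"static const char *{array_name}[] = {{"]
    for w in words:
        lines.append(f'    "{w}",')
    lines.append("};")
    lines.append(f"static const int {array_name}_LEN = {len(words)};")
    return "\n".join(lines)

print(words_to_header(["emma", "olivia", "ava"]))
```

The resulting header compiles straight into the kernel image, which is how a 32,000-word dataset can live entirely in memory.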
It’s quite impressive to see such a complex piece of software written solely in C, running directly on hardware; for a project which takes the same starting point and goes in the opposite direction, check out this browser-based implementation of MicroGPT. For more on the math behind GPTs, check out this visualization.
youtube.com/embed/i43kzMwv04o?…
C64 Gets A Modern Interactive Disassembler
If you want to pull apart a program to see how it ticks, you’re going to need a disassembler. [Ricardo Quesada] has built Regenerator 2000 for just that purpose. It’s a new interactive disassembler for the Commodore 64 platform.
Naturally, Regenerator 2000 is built with full support for the 6502 instruction set, including undocumented op-codes as well. It’s able to automatically create labels and comments and can be paired with the VICE C64 emulator for live debugging. You can do all the usual debug stuff like inspecting registers, stepping through code, and setting breakpoints and watchpoints when you’re trying to figure out how something works. It can even show you sprites, bitmaps, and character sets right in the main window.
Files are on GitHub if you’re ready to dive in. You might find this tool a useful companion to the C64 assembly tools we’ve featured previously, as well. If you’re pulling off your own retro development hacks, be sure to notify the tipsline.
[Thanks to Stephen Waters for the tip!]
NASA Uses Mars Global Localization as GNSS Replacement for the Perseverance Rover
Unlike Earth, Mars doesn’t have dozens of satellites whizzing around to provide satellite navigation functionality. Recently, NASA’s JPL engineers tried something with the Perseverance Mars rover that can give Marsbound vehicles the equivalent without launching GPS satellites into Mars orbit: Mars Global Localization.
Although its remote operators back on Earth have the means to tell the rover where it is, it’d be incredibly helpful if it could determine this autonomously, so that the rover doesn’t have to constantly stop and ask its human operators for directions. To this end, the processor originally used to communicate with the rover’s Ingenuity helicopter companion was repurposed and reprogrammed to run an algorithm that compares panoramic images from the rover’s navigation cameras with its onboard orbital terrain maps.
Much like the terrain-based navigation used in cruise missiles back on Earth, this can provide excellent results depending on how accurate your terrain maps are. This terrain-mapping process used to be done back on Earth, but over the past few years engineers have worked to give the rover its own means to perform the task.
Ingenuity: left behind but not forgotten. (Credit: NASA, JPL)
Because the off-the-shelf processor in the rover’s Helicopter Base Station (HBS) is much faster than the custom, radiation-hardened processors that control the rover, the decision was made to try the algorithm on the HBS, especially since Ingenuity was left behind after it fatally damaged its propeller during a rough landing. This left the HBS unused and free to be repurposed.
Repurposing such off-the-shelf hardware also provided a good way to check for radiation damage to standard hardware that was never certified for high-radiation environments. To validate reliability, the algorithm was run multiple times on the HBS, with the results compared by the main computer. This found some discrepancies, attributed to damage to about 25 bits out of 1 GB of RAM.
By isolating these damaged bits, the algorithm could run reliably, while giving another nod to the genius of the Ingenuity program that enabled such new features with what was at the time an unproven and relatively low-budget side-project tacked onto the Perseverance rover.
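JPL hasn’t published this code, but the validation strategy described, running the same computation repeatedly and flagging positions where the results disagree, can be sketched like so (the flaky-memory model below is entirely our own toy example):

```python
def validate_runs(run, attempts=3):
    """Run the same computation several times and flag positions where
    the results disagree: a stand-in for spotting flipped bits in
    unreliable RAM."""
    results = [run() for _ in range(attempts)]
    bad = set()
    for r in results[1:]:
        for i, (a, b) in enumerate(zip(results[0], r)):
            if a != b:
                bad.add(i)
    return results[0], bad

# Model a computation where byte 2 gets corrupted on every other run:
calls = {"n": 0}
def flaky():
    out = [10, 20, 30, 40]
    calls["n"] += 1
    if calls["n"] % 2 == 0:
        out[2] ^= 0x01  # a single flipped bit
    return out

result, suspect = validate_runs(flaky, attempts=8)
print(suspect)  # {2}
```

Once the suspect positions are known, they can simply be excluded from use, which is essentially how the team isolated the roughly 25 damaged bits and got the algorithm running reliably.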
Thanks to [Nevyn] for the tip.
youtube.com/embed/KofTfRGO4Zs?…