This Week In Security: Getting Back Up to Speed
Editor’s Note: Over the course of nearly 300 posts, Jonathan Bennett set a very high bar for this column, so we knew it needed to be placed in the hands of somebody who could do it justice. That’s why we’re pleased to announce that Mike Kershaw AKA [Dragorn] will be taking over This Week In Security! Mike is a security researcher with decades of experience, a frequent contributor to 2600, and perhaps best known as the creator of the Kismet wireless scanner.
He’ll be bringing the column to you regularly going forward, but given the extended period since we last checked in with the world of (in)security, we thought it would be appropriate to kick things off with a review of some of the stories you may have missed.
Hacking like it’s 2009, or 1996
Hello all! It’s a pleasure to be here, and a theme of the new year so far seems to be bringing back old bugs – what’s old is new again, and 2026 has already seen fixes for some increasingly ancient bugs.
Telnet
Reported on the OpenWall list, the GNU inetutils suite brings an update to its telnet server (yes, telnet) that closes a login bug, present since 2015, linked to environment variable sanitization.
Under the covers, the telnet daemon uses /bin/login to perform user authentication, but also has the ability to pass environment variables from the client to the host. One of these variables, USER, is passed directly to login — unfortunately, with no checking of what it contains. By simply passing a USER variable of “-froot”, login would accept the “-f” argument, or “treat this user as already logged in”. Instant root!
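The fix amounts to refusing to pass anything that login could parse as a command-line option. Here is a minimal sketch of that kind of check in Python — illustrative only, not the actual inetutils patch, and the exact character policy is an assumption:

```python
import re

def sanitize_user(user: str) -> str:
    """Reject client-supplied usernames that /bin/login could
    misinterpret as command-line options (e.g. "-froot")."""
    if not user or user.startswith("-"):
        raise ValueError("refusing option-like username")
    # Allow only conservative username characters; '-' may not lead.
    if not re.fullmatch(r"[A-Za-z0-9._][A-Za-z0-9._-]*", user):
        raise ValueError("refusing username with unexpected characters")
    return user
```

With this in place, sanitize_user("alice") passes through untouched, while the “-froot” trick is rejected before login ever sees it.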
If this sounds vaguely familiar, it might be because the exact same bug was found in the Solaris telnetd service in 2007, including using the “-f” argument in the USER variable. An extremely similar bug targeting other variables (LD_PRELOAD) was found in the FreeBSD telnetd service in 2009, and similar historical bugs have afflicted AIX and other Unix systems in the past.
Of course, nobody in 2026 should be running a telnet service, especially not exposed to the Internet, but it’s always interesting to see the old style of bugs resurface.
Glibc
Also reported on the OpenWall list, glibc — the GNU C library which underpins most binaries on Linux systems, providing kernel interfaces, file and network I/O, string manipulation, and most of the other common functions programmers expect — has killed another historical bug. Present since 1996, this bug in the DNS resolver functions could be used to expose some locations on the stack.
Although not directly exploitable, the getnetbyaddr resolution functions could still help break ASLR, making other exploits viable.
Address Space Layout Randomization (ASLR) is a common method of randomizing where in memory a process and its data are loaded, making trivial exploits like buffer overflows much harder to execute. Being able to expose the location of the binary in memory by leaking stack locations weakens this mechanism, possibly exposing a vulnerable program to more traditional attacks.
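To see why a single leaked address matters, consider the usual arithmetic: a shared library is loaded at a randomized base, but symbols keep fixed offsets within that mapping, so one leak pins down everything else. All addresses and offsets below are made up for illustration:

```python
# Hypothetical values: a pointer into libc leaked via an info-leak bug,
# plus symbol offsets known from the matching libc build on disk.
leaked_addr = 0x7F3A1C2E4A10   # address the attacker managed to leak
leak_offset = 0x24A10          # offset of that symbol within libc

# Randomized load base recovered from a single leak:
libc_base = leaked_addr - leak_offset

# Any other symbol is now just base + known offset:
target_offset = 0x4F550        # e.g. offset of some useful function
target_addr = libc_base + target_offset

print(hex(libc_base), hex(target_addr))
```

Once the base is known, ASLR no longer hides anything in that mapping, which is why even a “harmless” stack disclosure gets treated seriously.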
MSHTML
In February, Microsoft released fixes under CVE-2026-21513 for the MSHTML Trident renderer – the one used in Internet Explorer 5. Apparently still present in Windows, and somehow still accessible through specific shortcut links, it’s the IE5 and ActiveX gift that keeps on giving, and it is being actively exploited.
Back in the modern era…
After that bit of computing nostalgia, let’s look at some interesting stories involving slightly more contemporary subjects.
Server-side JS
It’s easy to think of JavaScript as simply a client-side language, but of course it’s also used in server frameworks like Node.js and React, with the latter powering server components in the popular Next.js framework.
Frameworks like React blur the lines between client and server, using the same coding style and framework conventions in the browser and in the server-side engine. React and Next.js allow calling server-side functions from the client, mixing client- and server-side rendering of content, but due to a deserialization bug, React allowed any function to be called by an unprivileged client.
Cleverly named React2Shell, the bug has rapidly become a target for bulk exploitation, with Internet-scale monitoring firm GreyNoise reporting 8 million logged attempts by early January 2026. At this point, it’s safe to assume any Internet-exposed vulnerable service has been compromised.
Too much AI
As previously covered by Hackaday, the Curl project is officially ending bug bounties due to the flood of bogus submissions from AI tools. The founder and project lead, Daniel Stenberg, has been critical of AI-generated bug bounty reports in the past, and has finally decided the cost is no longer worth the gains.
In many ways this calls to mind the recent conflict between the ffmpeg team and Google, where Google Project Zero discovered a flaw in the decoding of a relatively obscure codec, assigning it a 90-day disclosure deadline and raising the ire of the open source volunteer team.
The influx of AI-generated reports is the latest facet of the friction between volunteer-led open source projects and paid bug bounties or other commercial interests. Even with sponsorship backing, the reach of popular open-source libraries and tools like Curl, OpenSSL, BusyBox, and others is often far, far greater than the compensation offered by their biggest users — often trillion-dollar multinational companies.
Many open source projects are the passion project of a small group of people, even when they become massively popular and critical to commercial tools and infrastructure. While AI tooling may generate actionable reports, when it is deployed by users who may not themselves be programmers and cannot verify the results, the time drain of determining validity, and at times arguing with the submitter, falls entirely on the project maintainers. As the asymmetry increases, more small open source teams may start rejecting clearly AI-generated reports as well.
OpenSSL, Again
The OpenSSL library, another critical component of Internet infrastructure maintained by a very small team, suffers from a vulnerability in PKCS12 parsing. It appears to be a relatively traditional memory bug, leading to null pointer dereferences, stack corruption, or buffer overflows, which in the best case causes a crash and in the worst case allows arbitrary code execution. (Insert obligatory XKCD reference here.)
PKCS12 is a certificate storage format which bundles multiple certificates and private keys in a single file – similar to a zip or tar for certificate credentials. Fortunately, PKCS12 files are typically already trusted, and methods to upload them are not often exposed to the Internet at large. Unfortunately, potential code execution, even when limited to a trusted network interface, is rarely a positive thing.
Notepad++
The Notepad++ team has released a write-up about the infrastructure compromise which appears to have enabled a state-level actor to deliver infected updates to select customers.
Notepad++ is a fairly popular alternative to the classic Notepad app found on Windows, with syntax highlighting for multiple programming languages and basic IDE functionality. According to the team’s write-up, based on findings by independent researchers, the shared hosting service which served updates to Notepad++ was compromised in June 2025, and remained so until September of 2025.
The root of the issue lies in the update library WinGUp, used by Notepad++, which did not validate the downloaded update, leaving the process vulnerable to redirection and modification. With control of the update servers, the attackers were able to serve specific customers modified, trojaned updates.
An important take-away for all developers: if your project can self-update, make sure the update process is secure against malicious actors. That can mean not only the complex issue of validating the certificate chain, but sometimes embedding trusted certificates in your software (or firmware) and using them to verify that the update file itself has not been modified.
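At minimum, the updater should check the downloaded payload against something the client already trusts rather than whatever the server hands back. A real updater should verify a proper signature with an embedded public key; the stripped-down sketch below shows only the digest-pinning half of the idea, with illustrative values:

```python
import hashlib
import hmac

def verify_update(payload: bytes, expected_sha256: str) -> bool:
    """Compare a downloaded update against a digest the client
    already trusts (e.g. shipped inside the signed application
    itself, or fetched over a separately authenticated channel)."""
    actual = hashlib.sha256(payload).hexdigest()
    # Constant-time comparison, out of general caution.
    return hmac.compare_digest(actual, expected_sha256)
```

A compromised mirror can then still refuse to serve updates, but it can no longer silently swap in a trojaned one.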
WiFi Isolation
Finally, we have a new paper on WiFi security, with a new attack dubbed “AirSnitch”. From a team of collaborators including Mathy Vanhoef (a frequent publisher of modern WiFi attacks including the WPA2 KRACK attacks, and a driving force behind deprecating WPA2), AirSnitch defeats a protection in wireless networks known as “client isolation”.
Client isolation acts essentially as a firewall mechanism, offering wireless clients an additional layer of security by preventing communication between clients on the same network. Ideally, this prevents a hostile or infected client from communicating with other clients on the same shared network.
On a WPA-encrypted WiFi network, each client has an individual key used for encryption, and a shared group key used by all clients for broadcast and multicast communication. For one client to communicate with another, the access point must decrypt the traffic from the first and re-encrypt it to the second. Preventing communication between clients should therefore be as simple as not performing that re-encryption. However, by cloning the MAC address of the target client, establishing a second connection to the access point, and further manipulating the internal state of the access point with injected packets, a hostile device can cause the access point to share the target’s data, essentially converting the behavior of the network into that of a legacy Ethernet hub.
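A toy model shows why MAC cloning is dangerous here: if the access point keys its per-client crypto state on the MAC address alone, a second association with the same MAC can displace the legitimate client’s entry. This is a simplified illustration of the state-confusion idea, not the actual AirSnitch technique, and the “encryption” is a stand-in XOR:

```python
class ToyAccessPoint:
    """Keys per-client state on MAC address alone (the flaw)."""

    def __init__(self):
        self.pairwise = {}  # MAC -> per-client key

    def associate(self, mac: str, key: bytes):
        # A later association silently replaces the earlier state.
        self.pairwise[mac] = key

    def deliver(self, dst_mac: str, plaintext: bytes) -> bytes:
        # Stand-in "encryption": XOR with the stored per-client key.
        key = self.pairwise[dst_mac]
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

ap = ToyAccessPoint()
ap.associate("aa:bb:cc:00:11:22", b"victim-key")
# Attacker clones the victim's MAC and associates with their own key:
ap.associate("aa:bb:cc:00:11:22", b"attacker-key")
# Traffic meant for the victim now goes out under the attacker's key:
frame = ap.deliver("aa:bb:cc:00:11:22", b"secret")
```

In the toy model the attacker, holding “attacker-key”, can now read the frame that was destined for the victim; the real attack is far more involved, but the root cause is the same kind of per-MAC state confusion.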
How significantly this might impact you will vary wildly, and likely the full impacts of the attack will take some time to be understood. An attacker still needs access to the network – for a WPA network this means the PSK must be known, and for an Enterprise network, login credentials are still required. Typically home networks don’t use client isolation at all – most home users expect devices to be able to communicate directly, and most public access networks use no encryption at all, leaving clients exposed to the same level of risk by default. Networks with untrusted clients, like educational campus networks or business bring-your-own-device networks, are likely at the greatest risk, but time will tell.