Salta al contenuto principale



Come si chiama il Threat Actors? Microsoft e CrowdStrike ora li chiamano con un unico nome


Nel settore della sicurezza informatica, ogni azienda usa nomi propri per identificare i Threat Actors. Questo ha sempre creato problemi: gli analisti parlavano dello stesso nemico, ma lo chiamavano con nomi completamente diversi.

In questo contesto, Microsoft e CrowdStrike hanno annunciato il lancio di un’iniziativa congiunta: hanno unito i loro sistemi di denominazione dei gruppi e pubblicato un manuale di riferimento aggiornato, in cui ogni attore della minaccia viene confrontato con diverse tassonomie contemporaneamente.

Questo approccio non trasforma il mercato in una palude di informazioni, ma consente agli specialisti di trovare rapidamente un terreno comune e parlare la stessa lingua, anche quando si tratta di nomi diversi per lo stesso gruppo di criminali informatici.

Secondo il responsabile della sicurezza di Microsoft, il nuovo database è diventato un punto di partenza per identificare rapidamente gli aggressori e aumentare l’efficacia delle indagini. Ora, in una situazione in cui un’organizzazione riceve segnalazioni da diversi fornitori contemporaneamente, non è più necessario perdere tempo a selezionare manualmente le corrispondenze: tutto è riunito in un unico e chiaro database di riferimento.

Tra i futuri partecipanti al progetto figurano Google/Mandiant e Unit 42 di Palo Alto Networks, che forniranno i propri dati per accelerare l’identificazione. Microsoft prevede che altri importanti attori si uniranno a questa iniziativa, che aumenterà significativamente la trasparenza del mercato della cyber intelligence e accelererà la risposta agli attacchi.

L’azienda ha sottolineato che l’obiettivo principale non è stabilire uno standard rigido, ma fornire agli specialisti uno strumento per una rapida sincronizzazione reciproca. È già stato possibile eliminare la confusione nei nomi di oltre 80 gruppi attivi: stiamo parlando dei team più pericolosi e tecnicamente attrezzati che operano in tutto il mondo.

In futuro, l’alleanza promette di svilupparsi ulteriormente: i database saranno aggiornati regolarmente con nuovi nomi e, in futuro, è previsto lo scambio automatico di dati telemetrici tra i partecipanti. Ciò semplificherà notevolmente il lavoro e renderà i report delle aziende più compatibili.

Microsoft e CrowdStrike ritengono che ciò di cui il settore ha bisogno sia la collaborazione e la sincronizzazione volontaria, non i tentativi di imporre tutto in un unico schema. L’iniziativa è aperta a nuovi partecipanti ed è progettata per eliminare il caos dei nomi e risparmiare ai difensori il grattacapo di capire chi c’è realmente dietro un attacco.

L'articolo Come si chiama il Threat Actors? Microsoft e CrowdStrike ora li chiamano con un unico nome proviene da il blog della sicurezza informatica.



Making a LEGO Vehicle Which Can Cross Large Gaps


A Lego vehicle crossing a gap between two benches.

Here is a hacker showing off their engineering chops. This video shows successive design iterations for a LEGO vehicle which can cross increasingly large gaps.

At the time of writing this video from [Brick Experiment Channel] has been seen more than 110,000,000 times, which is… rather a lot. We guess with a view count like that there is a fairly good chance that many of our readers have already seen this video, but this is the sort of video one could happily watch twice.

This video sports a bunch of engineering tricks and approaches. We particularly enjoy watching the clever use of center of gravity. They hack gravity to make some of their larger designs work.

It is a little surprising that we haven’t already covered this video over here on Hackaday as it has been on YouTube for over three years now. But we have heard from [Brick Experiment Channel] before with videos such as Testing Various Properties Of LEGO-Compatible Axles and LEGO Guitar Is Really An Ultrasonically-Controlled Synth.

And of course we’ve covered heaps of LEGO stuff in the past too, such as Building An Interferometer With LEGO and Stepping On LEGO For Science.

youtube.com/embed/pwglOlD7e0M?…

Thanks to [Keith Olson] for writing in to remind us about the [Brick Experiment Channel].


hackaday.com/2025/06/03/making…



[url=https://poliverso.org/photo/1573303163683fb55a9fae4016739025-0.jpeg][img]https://poliverso.org

Kal Bhairav 🕉️

Arun Shah reshared this.



Building An Automatic Wire Stripper And Cutter


Stripping and cutting wires can be a tedious and repetitive part of your project. To save time in this regard, [Red] built an automatic stripper and cutter to do the tiring work for him.

An ESP32 runs the show in this build. Via a set of A4988 stepper motor drivers, it controls two NEMA 17 stepper motors which control the motion of the cutting and stripping blades via threaded rods. A third stepper controls a 3D printer extruder to move wires through the device. There’s a rotary encoder with a button for controlling the device, with cutting and stripping settings shown on a small OLED display. It graphically represents the wire for stripping, so you can select the length of the wire and how much insulation you want stripped off each end. You merely need select the measurements on the display, press a button, and the machine strips and cuts the wire for you. The wires end up in a tidy little 3D-printed bin for collection.

The build should be a big time saver for [Red], who will no longer have to manually cut and strip wires for future builds. We’ve featured some other neat wire stripper builds before, too. Video after the break.

youtube.com/embed/pbuzLy1ktKM?…


hackaday.com/2025/06/03/buildi…



Building An Eight Channel Active Mixer


There are plenty of audio mixers on the market, and the vast majority all look the same. If you wanted something different, or just a nice learning experience, you could craft your own instead. That’s precisely what [Something Physical] did.

The build was inspired by an earlier 3-channel mixer designed by [Moritz Klein]. This project stretches to eight channels, which is nice, because somehow it feels right that a mixer’s total channels always land on a multiple of four. As you might expect, the internals are fairly straightforward—it’s just about lacing together all the separate op-amp gain stages, pots, and jacks, as well as a power LED so you can tell when it’s switched on. It’s all wrapped up in a slant-faced wooden box with an aluminum face plate and Dymo labels. Old-school, functional, and fit for purpose.

It’s a simple build, but a satisfying one; there’s something beautiful about recording on audio gear you’ve hewn yourself. Once you’ve built your mixer, you might like to experiment in the weird world of no-input mixing. Video after the break.

youtube.com/embed/SHH72r4SKWQ?…

youtube.com/embed/SHH72r4SKWQ?…


hackaday.com/2025/06/03/buildi…



Ai cybersecurity: come affrontare le sfide emergenti


@Informatica (Italy e non Italy 😁)
Gli effetti dell’AI nella cybersecurity sono noti ed evidenti. La necessità di progredire con le contromisure, invece, non è altrettanto evidente: le aziende si stanno muovendo, ma con una velocità e un ritmo che rischiano di creare un inedito gap fra attacco e difesa
L'articolo Ai



Mi sento come una particella di sodio


Non ho quasi mai alcuna reazione ai miei post, né commenti, né like, né condivisioni. Su Facebook avevo 100-150 amici e ogni post aveva qualche like, qualche commento, qualche condivisione. Mi sembra di non esistere.

Qualche giorno fa ho messo un post di test chiedendo a chi lo leggeva se era visibile e ho avuto una sola reazione (tra l'altro, di una persona iscritta a questa stessa istanza).

A me sono venute in mente due possibili spiegazioni, ma magari ce ne sono anche altre, non so.

1. La mia uscita da Facebook ha coinciso con un calo verticale della mia capacità di scrivere o condividere cose che interessino qualcun altro oltre a me;

2. Non ho capito bene come funziona il Fediverso e per qualche motivo i miei post invece di arrivare alle migliaia di persone che lo affollano (come mi aspetterei) non arrivano a nessun'altra istanza (o arrivano in un numero insignificante di istanze), benché io usi sempre la permission "public".

Ci sono altre spiegazioni che mi sfuggono?

in reply to Max su Poliverso 🇪🇺🇮🇹

Re: Mi sento come una particella di sodioNon ho quasi mai alcuna reazione ai miei post, né commenti, né like, né condivisioni.


max@poliverso.org
Non credo che i i post scritti da una istanza raggiungano le altre istanze se non c'è un server relay che prende i post provenienti da tutte le istanze e li ridistribuisca a tutte le altre istanze in una timeline globale. Si porrebbe il problema di chi mantiene il server relay. I tuoi post raggiungono le istanze di chi in quelle istanze ti segue. Comunque gli amministratori di istanza possono collegare la propria istanza a specifici relay che prendono e rilanciano i messaggi di altre specifiche istanze. Io nella mia istanza Mastodon, come in foto, ho attivato alcuni ripetitori da altre istanze che mi interessano e vedo i messaggi degli utenti di quelle istanze nella mia timeline federata. Ma i miei messaggi non raggiungono tutti gli utenti di quelle istanze, raggiungono solo quelli che in qualche modo mi seguono.
Questa voce è stata modificata (3 giorni fa)
in reply to Piero Bosio

@Piero Bosio

Non credo che i i post scritti da una istanza raggiungano le altre istanze se non c'è un server relay che prende i post provenienti da tutte le istanze e li ridistribuisca a tutte le altre istanze in una timeline globale

È giusto così: non tutti i post devono essere distribuiti in tutte le altre istanze, ma soltanto quelli degli utenti che sono connessi ad altri utenti appartenenti a tali istanze.

I relay sono comodi Se vuoi creare un network di istanze, Oppure se vuoi Popolare artificialmente la tua istanza con determinati contenuti provenienti da determinate istanze.
Ma i relay comportano un carico di lavoro per il server che è assolutamente ingiustificabile. Per una questione puramente statistica, più della metà dei contenuti prodotti in una istanza sono insignificanti per un utente qualsiasi. È proprio per questo che la distribuzione dei messaggi viene focalizzata in base al rapporto tra follower

@Max 🇪🇺🇮🇹

Fediverso reshared this.



Piattaforme di interazione digitale libere (Fediverso). 10 Giugno 2025 dalle 18.00 alle 19.30

Esiste uno svariato numero di servizi liberi, decentralizzati e federati che permettono di scambiarsi messaggi e altri materiali con la nostra cerchia di persone senza finire nei recinti creati dalla piattaforme commerciali più note. Tali servizi, raggruppati sotto il nome di Fediverso, si distinguono per avere una rete di istanze (nodi della rete) indipendenti a livello di esecuzione, ciascuna avente i propri termini di servizio e le proprie regole per la riservatezza e per la moderazione, e interconnesse tra loro con il protocollo ActivityPub.
Durante il corso della serata scopriremo quali sono, quali istanze scegliere e cosa possiamo farci.
Mastodon, ad esempio, è un software libero e una rete sociale di microblogging decentralizzato che permette di pubblicare messaggi brevi. Simile a X e creato nel 2016.
Pixelfed è una piattaforma di condivisione di immagini condivise simile a Instagram e connessa con tutto il Fediverso.

bibliotecacivica.rovereto.tn.i…

@Che succede nel Fediverso?

reshared this

Unknown parent

mastodon - Collegamento all'originale
LinuxTrent
@emanuelecariati @rresoli Domani pubblichiamo il video. Se qualcuno avesse seguito la diretta, scusate per l'assenza della chat, dobbiamo sistemarla.


La mia opinione su tinylist app


ho provato tinylist app per un po' di tempo e questa è la mia opinione.

PRO
leggerissima (soprattutto perché è una pagina web);
graficamente minimale, ma gradevole;
leggibile su praticamente ogni dispositivo;
facile da usare

CONTRO
non si possono aprire le note a tutto schermo (mantenendo la formattazione);
se hai poso segnale è irraggiungibile (a Lucca C&G è inutile)

VOTO
🌔 molto utile e versatile

reshared this



Abbiamo sempre contestato la decisione politica di dotare le forze dell’ordine di taser. Se verrà confermato che la causa della morte del trentenne a Pescara è stata causata dall’uso del taser non sarà il primo caso. La responsabilità di questa morte non ricade solo sulla destra ma è stata bipartisan. La sperimentazione del taser è [...]


Open Source Watch Movement Really Ticks All the Boxes


When you think of open-source hardware, you probably think of electronics and maker tools– RepRap, Arduino, Adafruit, et cetera. Yet open source is an ethos and license, and is in no way limited to electronics. The openmovement foundation is a case in point– a watch case, to be specific. The “movment” in Openmovement is a fully open-source and fully mechanical watch movement.

Openmovement has already released STEP files of OM10 the first movement developed by the group. (You do need to sign up to download, however.) They say the design is meant to be highly serviceable and modular, with a robust construction suited for schools and new watchmakers. The movement uses a “swiss pallets escarpent” we think that’s an odd translation of lever escarpment, but if you’re a watchmaker let us know in the comments), and runs at 3.5 Hz / 25,200 vph. An OM20 is apparently in the works, as well, but it looks like only OM10 has been built from what we can see.

If you don’t have the equipment to finely machine brass from the STEP files, Openmovement is running a crowdfunding campaign to produce kits of the OM10, which you can still get in on until the seventh of June.

If you’re wondering what it takes to make a mechanical watch from scratch, we covered that last year. Spoiler: it doesn’t look easy. Just assembling the tiny parts of an OM10 kit would seem daunting to most of us. That might be why most of the watches we’ve covered over the years weren’t mechanical, but at least they tend to be open source, too.


hackaday.com/2025/06/03/open-s…


in reply to simona

@simona no, anzi...
Incorporare un link è consigliabile soprattutto se vuoi creare un riferimento ipertestuale, ma se incolli semplicemente un link, ti viene caricata anche l'anteprima

linkiesta.it/2025/05/delfini-c…

Personalmente preferisco sempre aggiungere una descrizione, perché questo aiuta anche la ricerca testuale da parte di altri utenti, ma non è necessario.

Tieni conto che non tutti i siti presentano tag open graph che agevolano il caricamento delle anteprime

in reply to Franc Mac

@simona il sito Linkiesta per esempio non ha dei buoni link open graph e, anche quando viene pubblicato da mastodon, non carica quasi mai l'anteprima


#Iran, la commedia della Casa Bianca


altrenotizie.org/primo-piano/1…


3D Printed Tank Has a Cannon to Boot


Few of us will ever find ourselves piloting a full-sized military tank. Instead, you might like to make do with the RC variety. [TRDB] has whipped up one of their own design which features a small little pellet cannon to boot.

The tank is assembled from 3D printed components — with PETG filament being used for most of the body and moving parts, while the grippy parts of the treads are printed in TPU. The tank’s gearboxes consist of printed herringbone gears, and are driven by a pair of powerful 775 brushed DC motors, which are cooled by small 40 mm PC case fans. A rather unique touch are the custom linear actuators, used to adjust the tank’s ride height and angle relative to the ground. The small cannon on top is a flywheel blaster that fires small plastic pellets loaded from a simple drum magazine. Running the show is an ESP32, which responds to commands from [TRDB]’s own custom RC controller built using the same microcontroller.

As far as DIY RC tanks go, this is a very complete build. We’ve seen some other great work in this space, like this giant human-sized version that’s big enough to ride in.

youtube.com/embed/IDQrcb_U2eE?…


hackaday.com/2025/06/03/3d-pri…




#Ucraina, verso il baratro


altrenotizie.org/primo-piano/1…


Pornhub, Redtube e YouPorn si ritirano dalla Francia per colpa della legge sulla verifica dell’età


Secondo diverse indiscrezioni, il proprietario di Pornhub, Redtube e YouPorn ha intenzione di interrompere il servizio agli utenti francesi già mercoledì pomeriggio per protestare contro le misure governative che impongono di verificare l’età dei propri utenti.

Aylo Freesites e altre piattaforme pornografiche sono obbligate per legge a implementare soluzioni di verifica dell’età entro il 7 giugno. Il governo francese ha approvato queste misure nel 2023 per proteggere i minori da contenuti inappropriati. Secondo le statistiche di Pornhub, la Francia è il secondo Paese al mondo per numero di spettatori .

Solomon Friedman, socio di Ethical Capital Partners, società proprietaria di Aylo Freesites, ha dichiarato martedì ai giornalisti che la legge è “pericolosa”, “potenzialmente lesiva della privacy” e “inefficace”. Friedman sostiene invece la verifica dell’età a livello di dispositivo.

“Siamo ansiosi di collaborare con i produttori di sistemi operativi, gli app store e altri partner tecnologici per garantire che solo gli adulti accedano alla piattaforma. Non si tratta di non volersi assumere la responsabilità. Si tratta di affermare che è necessario bloccare l’accesso alla fonte”, hanno affermato.

Martedì, la ministra francese per l’intelligenza artificiale e la tecnologia digitale, Clara Chappaz, ha risposto alle affermazioni di Friedman sulla legge francese. “Mentire quando non si vuole rispettare la legge e prendere in ostaggio gli altri è inaccettabile”, ha scritto su X. “Gli adulti sono liberi di consumare materiale pornografico, ma non a scapito della protezione dei nostri figli”.

L'articolo Pornhub, Redtube e YouPorn si ritirano dalla Francia per colpa della legge sulla verifica dell’età proviene da il blog della sicurezza informatica.



Il vertice Meloni-Macron può rilanciare l’intesa franco-italiana. Parla Nelli Feroci (Iai)

@Notizie dall'Italia e dal mondo

Italia e Francia si incontrano a Roma in un momento complesso per le loro relazioni bilaterali. Mentre l’Europa riconosce la necessità di muoversi verso una maggiore integrazione strategica e industriale, le tensioni sulla cantieristica (che



Nuovo patto tra sicurezza e crescita economica. Patalano (Kcl) spiega la difesa secondo il Labour

@Notizie dall'Italia e dal mondo

La Strategic Defence Review pubblicata ieri dal governo britannico con un evento a Glasgow alla presenza del primo ministro Sir Keir Starmer e del segretario John Healey racconta le sfide e la strategia britannica in un mondo che, si legge, è segnato da una crescente



Referendum, Corrado (PD) contro Meloni: “Ma che razza di risposta è?”


@Politica interna, europea e internazionale
“Ma che razza di risposta è?”: con queste parole Annalisa Corrado, europarlamente del Partito Democratico, ha criticato la presa di posizione della Presidente del Consiglio Giorgia Meloni che ieri, in occasione della festa della Repubblica, ha dichiarato in vista dell’appuntamento alle urne dell’8 e



Supercon 2024: How To Track Down Radio Transmissions


You turn the dial on your radio, and hear a powerful source of interference crackle in over the baseline noise. You’re interested as to where it might be coming from. You’re receiving it well, and the signal strength is strong, but is that because it’s close or just particularly powerful? What could it be? How would you even go about tracking it down?

When it comes to hunting down radio transmissions, Justin McAllister and Nick Foster have a great deal of experience in this regard. They came down to the 2024 Hackaday Superconference to show us how it’s done.

Transmissions From Where?


youtube.com/embed/3vLtIsfRu_o?…

Nick Foster opens the talk by discussing how the first job is often to figure out what you’re seeing when you pick up a radio transmission. “The moral of this talk is that your hardware is always lying to you,” says Nick. “In this talk, we’re going to show you how your radio lies to you, what you can do about it, and if your hardware is not lying to you, what is that real station that you’re looking at?” It can be difficult to tease out the truth of what the radio might seem to be picking up. “How do we determine what a signal actually is?” he asks. “Is it a real signal that we’re looking at which is being transmitted deliberately from somebody else, or is it interference from a bad power supply, or is it a birdie—a signal that’s created entirely within my own radio that doesn’t exist at all?”

There are common tools used to perform this work of identifying just what the radio is actually picking up and where it’s coming from. Justin goes over some of the typical hardware, noting that the RX-888 is a popular choice for software-defined radio that can be tuned across HF, VHF, and UHF bands. It’s highly flexible, and it’s affordable to boot, as is the Web-888 which can be accessed conveniently over a web browser. Other common SDRs are useful, too, as are a variety of filters that can aid with more precise investigations.
Justin demonstrates an errant radio emission from the brushed motor in his furnace, noting how it varies in bandwidth—a surefire tell versus intentional radio transmissions.
Establishing a grounding in reality is key, Justin steps up to explain. “We turn our SDR on, we stick [on] the little antenna that comes with it, and we start looking at something,” says Justin. “Are the signals that we see there actually real?” He notes that there are some basics to consider right off the bat. “One key point to make is that nobody makes money or has good communication using an unmodulated carrier,” he points out. “If you just see a tone somewhere, it might be real, but there’s a good chance that it’s not.”

It’s perhaps more likely unintentional radiation, noise, or something generated inside the hardware itself on your end. It’s also worth looking at whether you’re looking at a fixed frequency or a changing frequency to pin things down further. Gesturing to a spectrogram, he notes that the long, persistent lines on the spectrogram are usually clues to more intentional transmissions. Intermittent squiggles are more often unintentional. Justin points at some that he puts down to the emissions from arc welders, sparking away as they do, and gives an example of what emissions from typical switching power supplies look like.

There are other hints to look out for, too. Real human-made signals tend to have some logic to them. Justin notes that real signals usually make “efficient” use of spectrum without big gaps or pointless repetition. It’s also possible to make judgement calls as to whether a given signal makes sense for the band it appears to be transmitted in. Schedule can be a tell, too—if a signal always pops up when your neighbor gets home at 6 PM, it might just be coming from their garage door remote. Justin notes a useful technique for hunting down possible nearby emitters—”Flipping on and off switches is a real good way of figuring out—is it close to me or not?”
SDRs are hugely flexible, but they also have very open front-ends that can lead to some confusing output.
Nick follows up by discussing the tendency of sampling radios to show up unique bizarre transmissions that aren’t apparent on an analog receiver. “One of the curses of the RTL-SDR is actually one of its strengths… it has a completely wide open front end,” notes Nick. “Its ADC which is sampling and capturing the RF has basically nothing except an amplifier in between it and whatever crud you’re putting into it.” This provides great sensitivity and frequency agility, but there’s a catch—”It will happily eat up and spit out lots of horrible stuff,” says Nick. He goes on to explain various ways such an SDR might lie to the user. A single signal might start popping up all over the frequency band, or interfere with other signals coming in from the antenna. He also highlights a great sanity check for hunting down birdies—”If it’s always there, if it’s unchanging, if you unplug your antenna and you still hear it—it’s probably generated in your radio!”

The rest of the talk covers locating transmissions—are they in your house, in the local community, or from even farther afield? It explores the technique of multilateration, where synchronized receivers and maths are used to measure the time differences seen in the signal at each point to determine exactly where a transmission is coming from. The talk also goes over common sources of noise in residential settings—cheap PWM LED lights, or knock-off laptop chargers being a prime example in Nick’s experience. There’s also a discussion of how the noise floor has shifted up a long way compared to 50 years ago, now that the world is full of so many more noise-emitting appliances.

Ultimately, the duo of Justin and Nick brought us a great pun-filled talk on sleuthing for the true source of radio transmissions. If you’ve ever wondered about how to track down some mystery transmitter, you would do well to watch and learn from the techniques explored within!


hackaday.com/2025/06/03/superc…




Simulation and Motion Planning for 6DOF Robotic Arm


ManiPylator focusing its laser pointer at a page.

[Leo Goldstien] recently got in touch to let us know about a fascinating update he posted on the Hackaday.io page for ManiPylator — his 3D printed Six degrees of freedom, or 6DOF robotic arm.

This latest installment gives us a glimpse at what’s involved for command and control of such a device, as what goes into simulation and testing. Much of the requisite mathematics is introduced, along with a long list of links to further reading. The whole solution is based entirely on free and open source (FOSS) software, in fact a giant stack of such software including planning and simulation software on top of glue like MQTT message queues.

The practical exercise for this installment was to have the arm trace out the shape of a heart, given as a mathematical equation expressed in Python code, and it fared quite well. Measurements were taken! Science was done!

We last brought you word about this project in October of 2024. Since then, the project name has changed from “ManiPilator” to “ManiPylator”. Originally the name was a reference to the Raspberry Pi, but now the focus is on the Python programming language. But all the bot’s best friends just call him “Manny”.

If you want to get started with your own 6DOF robotic arm, [Leo] has traced out a path for you to follow. We’d love to hear about what you come up with!

youtube.com/embed/as9t7umI3mM?…


hackaday.com/2025/06/03/simula…



Don’t empower agencies to gut free speech


Federal agencies are transforming into the speech police under President Donald Trump. So why are some Democrats supporting the Kids Online Safety Act, a recently reintroduced bill that would authorize the MAGA-controlled Federal Trade Commission to enforce censorship?

As Freedom of the Press Foundation (FPF) senior advocacy adviser Caitlin Vogus wrote for The Boston Globe, there’s never an excuse for supporting censorship bills, but especially when political loyalists are at the FTC sure to abuse any power they’re given to stifle news on disfavored topics.

Vogus wrote, “KOSA’s supporters argue that it’s about keeping children under 17 safe from the harms of social media. But at the heart of the bill is something everyone should oppose: empowering the government to decide what speech children should be forbidden from seeing online.”

Read the article here.


freedom.press/issues/dont-empo…



Aggiornamenti Android giugno 2025, corrette 36 vulnerabilità: aggiorniamo i dispositivi


@Informatica (Italy e non Italy 😁)
Google ha rilasciato l’Android Security Bulletin per il mese di giugno 2025, con gli aggiornamenti per 36 vulnerabilità: la più grave, identificata nel componente System, potrebbe causare un'escalation locale dei privilegi senza alcuna



Rai diretta streaming estero: guida alla visione dei programmi Tv


@Informatica (Italy e non Italy 😁)
Per guardare la diretta streaming della Rai dall'estero è consigliato l'uso di una VPN (Virtual Private Network). Le VPN permettono di cambiare il proprio indirizzo IP, simulando una connessione dall'Italia, il che consente di aggirare i blocchi imposti dalle restrizioni



Il codice e l’etica. L’intelligenza che cambia il mondo

L'articolo proviene da #Euractiv Italia ed è stato ricondiviso sulla comunità Lemmy @Intelligenza Artificiale
Il 13 giugno l’Avv. Federica De Stefani condurrà a Mantova il convegno “Il codice e l’etica. L’intelligenza che cambia il mondo”, evento conclusivo del secondo anno del Percorso di Eccellenza dell’Università degli

Intelligenza Artificiale reshared this.



Anti-porn laws can't stop porn, but they can stop free speech. In the meantime, people will continue to get off to anything and everything.

Anti-porn laws canx27;t stop porn, but they can stop free speech. In the meantime, people will continue to get off to anything and everything.#porn #sex

#sex #x27 #porn



Add Wood Grain Texture to 3D Prints – With a Model of a Log


Adding textures is a great way to experiment with giving 3D prints a different look, and [PandaN] shows off a method of adding a wood grain effect in a way that’s easy to play around with. It involves using a 3D model of a log (complete with concentric tree rings) as a print modifier. The good news is that [PandaN] has already done the work of creating one, as well as showing how to use it.
The model of the stump — complete with concentric tree rings — acts as a modifier for the much-smaller printed object (in this case, a small plate).
In the slicer software one simply uses the log as a modifier for an object to be printed. When a 3D model is used as a modifier in this way, it means different print settings get applied everywhere the object to be printed and the modifier intersect one another.

In the case of this project, the modifier shifts the angle of the fill pattern wherever the models intersect. A fuzzy skin modifier is used as well, and the result is enough to give a wood grain appearance to the printed object. When printed with a wood filament (which is PLA mixed with wood particles), the result looks especially good.

We’ve seen a few different ways to add textures to 3D prints, including using Blender to modify model surfaces. Textures can enhance the look of a model, and are also a good way to hide layer lines.

In addition to the 3D models, [PandaN] provides a ready-to-go project for Bambu slicer with all the necessary settings already configured, so experimenting can be as simple as swapping the object to be printed with a new 3D model. Want to see that in action? Here’s a separate video demonstrating exactly that step-by-step, embedded below.

youtube.com/embed/dPqu9Sk01jc?…


hackaday.com/2025/06/03/add-wo…



Obsolescenza Tecnologica. Accesso Sicuro della Regione Toscana su server obsoleto da 9 anni


L’Obsolescenza tecnologia è una brutta bestia!

Ti ritrovi con dispositivi e applicazioni perfettamente funzionanti, ma ormai inutilizzabili perché i sistemi non sono più supportati, le app smettono di aggiornarsi e le patch di sicurezza diventano un miraggio. Non si tratta solo di un fastidio pratico: è anche un rischio concreto, soprattutto in ambito informatico. Tecnologie obsolete diventano il bersaglio preferito dei cyber attacchi, rendendo aziende, enti pubblici e utenti finali estremamente vulnerabili. Restare aggiornati non è più un’opzione: è una necessità.

Certo, non è affatto semplice affrontare un replatforming, cioè la migrazione di applicazioni e sistemi verso nuove piattaforme quando cambiano le major release dei prodotti. Questo processo può richiedere risorse, competenze e tempo. Tuttavia, prevedere questi ammodernamenti nel ciclo di vita del software è una responsabilità imprescindibile. Ignorarli significa accumulare debito tecnologico e mettere a rischio l’intero ecosistema digitale su cui si basa l’organizzazione.

Il Portale Accesso Sicuro della Regione Toscana


Il sito accessosicuro.rete.toscana.it/… è il portale ufficiale della Regione Toscana per l’accesso ai servizi online ad autenticazione sicura. Attraverso questo portale, cittadini, professionisti e operatori possono svolgere varie pratiche amministrative e sanitarie da remoto, in modo protetto e conforme alle normative vigenti.

A parte dei Directory Listing presenti all’interno del sito, la cosa che ci è stata portata all’attenzione da N3m0D4m è la versione del server Apache Tomcat presente in esercizio. Si parla della versione 6.0.33 che è andata in de supporto il 31 dicembre 2016.

Quindi stiamo parlando di un server che non riceve più patch di sicurezza da ben 9 anni.
Pagina di errore sul portale rete.toscana.it che mostra la presenza di un application server obsoleto

Cos’è l’obsolescenza tecnologica


L’obsolescenza tecnologica si verifica quando un sistema, un dispositivo o un software diventa inadeguato non perché smetta di funzionare, ma perché non riceve più supporto tecnico o aggiornamenti di sicurezza da parte del produttore.

Nel contesto della sicurezza informatica, questo è un problema altamente critico. Quando un sistema non viene più aggiornato, le vulnerabilità scoperte nel tempo restano aperte. Ogni bug o falla non corretta rappresenta una porta lasciata socchiusa che un attaccante può facilmente sfruttare. E quando si tratta di una applicazione esposta su internet, i rischi diventano molto ma molto più importanti.

Cosa succede esattamente?


  • Nessuna patch di sicurezza: i cyber criminali cercano attivamente sistemi non aggiornati, perché sanno esattamente come colpirli.
  • Espansione del rischio nel tempo: più tempo passa dall’ultimo aggiornamento, maggiore sarà il numero di vulnerabilità pubblicamente note che colpiscono quel sistema.
  • Incompatibilità con nuove tecnologie di difesa: software EDR/XDR, antivirus, firewall o sistemi di monitoraggio moderni non sempre funzionano su tecnologie obsolete, lasciando interi ambienti scoperti.
  • Utilizzo in attacchi a catena: un sistema vulnerabile può essere il punto d’ingresso per compromettere tutta una rete aziendale o istituzionale, anche se il server non eroga alcun servizio.


Un problema per tutti


Questo non è un rischio teorico: le cronache sono piene di casi in cui un semplice componente obsoleto ha permesso a un attaccante di prendere il controllo di interi sistemi. Nella pubblica amministrazione, dove spesso convivono tecnologie datate e dati sensibili, l’obsolescenza è una vera e propria bomba a orologeria.

E non si tratta solo di malware o ransomware: le falle non corrette possono permettere furti di dati personali, accessi non autorizzati, modifiche ai contenuti e molto altro.

Come affrontare l’obsolescenza tecnologica


Affrontare l’obsolescenza tecnologica richiede una visione strategica e un approccio proattivo.

Non basta reagire quando un sistema smette di funzionare o non riceve più aggiornamenti: è fondamentale pianificare in anticipo il ciclo di vita delle tecnologie utilizzate. Questo significa monitorare le date di fine supporto (EoL – End of Life) dei software e dei sistemi critici, stabilire una roadmap per gli aggiornamenti e prevedere risorse per interventi di manutenzione evolutiva. Ogni organizzazione dovrebbe adottare un inventario IT aggiornato, con indicatori chiari sullo stato e sulla criticità dei vari componenti, per prevenire l’accumulo di debito tecnologico.

La chiave è l’ammodernamento continuo, non l’intervento d’emergenza.

Quando possibile, è utile adottare soluzioni modulari e scalabili, che facilitino futuri aggiornamenti senza dover ricostruire da zero l’intera infrastruttura. Inoltre, il replatforming – per quanto complesso – deve essere considerato parte integrante della manutenzione a lungo termine. È importante coinvolgere team tecnici, responsabili della sicurezza e decisori strategici in un dialogo costante per valutare impatti, costi e benefici. Solo così è possibile garantire continuità operativa, sicurezza e sostenibilità nel tempo.

Concludendo


L’obsolescenza tecnologica non è solo una questione di efficienza o modernizzazione: è una questione di sicurezza. Continuare a utilizzare software o infrastrutture non più supportate equivale, nel mondo digitale, a lasciare la porta di casa aperta in un quartiere ad alto rischio.

Nel contesto attuale, dove gli attacchi informatici crescono in complessità e frequenza, ignorare gli aggiornamenti significa esporsi volontariamente al pericolo. E il prezzo da pagare non è solo tecnico: può tradursi in danni reputazionali, sanzioni per mancato rispetto delle normative (come il GDPR), perdita di dati sensibili e costi di ripristino elevati.

Aggiornare non è più una scelta, è un dovere di responsabilità digitale — sia per i privati, sia (e soprattutto) per le istituzioni pubbliche che gestiscono servizi essenziali per i cittadini.

La tecnologia evolve, ma la sicurezza non può restare indietro.

Come nostra consuetudine, lasciamo sempre spazio ad una dichiarazione dell’organizzazione qualora voglia darci degli aggiornamenti su questa vicenda e saremo lieti di pubblicarla con uno specifico articolo dando risalto alla questione.

RHC monitorerà l’evoluzione della vicenda in modo da pubblicare ulteriori news sul blog, qualora ci fossero novità sostanziali. Qualora ci siano persone informate sui fatti che volessero fornire informazioni in modo anonimo possono accedere utilizzare la mail crittografata del whistleblower.

L'articolo Obsolescenza Tecnologica. Accesso Sicuro della Regione Toscana su server obsoleto da 9 anni proviene da il blog della sicurezza informatica.



My Winter of ’99: The Year of the Linux Desktop is Always Next Year


Growing up as a kid in the 1990s was an almost magical time. We had the best game consoles, increasingly faster computers at a pace not seen before, the rise of the Internet and World Wide Web, as well the best fashion and styles possible between neon and pastel colors, translucent plastic and also this little thing called Windows 95 that’d take the world by storm.

Yet as great as Windows 95 and its successor Windows 98 were, you had to be one of the lucky folks who ended up with a stable Windows 9x installation. The prebuilt (Daewoo) Intel Celeron 400 rig with 64 MB SDRAM that I had splurged on with money earned from summer jobs was not one of those lucky systems, resulting in regular Windows reinstalls.

As a relatively nerdy individual, I was aware of this little community-built operating system called ‘Linux’, with the online forums and the Dutch PC magazine that I read convincing me that it would be a superior alternative to this unstable ‘M$’ Windows 98 SE mess that I was dealing with. Thus it was in the Year of the Linux Desktop (1999) that I went into a computer store and bought a boxed disc set of SuSE 6.3 with included manual.

Fast-forward to 2025, and Windows is installed on all my primary desktop systems, raising the question of what went wrong in ’99. Wasn’t Linux the future of desktop operating systems?

Focus Groups

Boxed SuSE Linux 6.3 software. (Source: Archive.org)Boxed SuSE Linux 6.3 software. (Source: Archive.org)
Generally when companies gear up to produce something new, they will determine and investigate the target market, to make sure that the product is well-received. This way, when the customer purchases the item, it should meet their expectations and be easy to use for them.

This is where SuSE Linux 6.3 was an interesting experience for me. I’d definitely have classified myself in 1999 as your typical computer nerd who was all about the Pentiums and the MHz, so at the very least I should have had some overlap with the nerds who wrote this Linux OS thing.

The comforting marketing blurbs on the box promised an easy installation, bundled applications for everything, while suggesting that office and home users alike would be more than happy to use this operating system. Despite the warnings and notes in the installation section of the included manual, installation was fairly painless, with YAST (Yet Another Setup Tool) handling a lot of the tedium.

However, after logging into the new operating system and prodding and poking at it a bit over the course of a few days, reality began to set in. There was the rather rough-looking graphical interface, with what I am pretty sure was the FVWM window manager for XFree86, no font aliasing and very crude widgets. I would try the IceWM window manager and a few others as well, but to say that I felt disappointed was an understatement. Although it generally worked, the whole experience felt unfinished and much closer to using CDE on Solaris than the relatively Windows 98 or the BeOS Personal Edition 5 that I would be playing with around that time as well.

That’s when a friend of my older brother slipped me a completely legit copy of Windows 2000 plus license key. To my pleasant surprise, Windows 2000 ran smoothly, worked great and was stable as a rock even on my old Celeron 400 rig that Windows 98 SE had struggled with. I had found my new forever home, or so I thought.

Focus Shift

Start-up screen of FreeSCO. (Credit: Lewis “Lightning” Baughman, Wikimedia)Start-up screen of FreeSCO. (Credit: Lewis “Lightning” Baughman, Wikimedia)
With Windows 2000, and later XP, being my primary desktop systems, my focus with Linux would shift away from the desktop experience and more towards other applications, such as the FreeSCO (en français) single-floppy router project, and the similar Smoothwall project. After upgrading to a self-built AMD Duron 600 rig, I’d use the Celeron 400 system to install various Linux distributions on, to keep tinkering with them. This led me down the path of trying out Wine to try out Windows applications on Linux in the 2000s, along with some Windows games ported by Loki Entertainment, with mostly disappointing results. This also got me to compile kernel modules, to make the onboard sound work in Linux.

Over the subsequent years, my hobbies and professional career would take me down into the bowels of Linux and similar with mostly embedded (Yocto) development, so that by now I’m more familiar with Linux from the perspective of the command line and architectural level. Although I have many Linux installations kicking around with a perfectly fine X/Wayland installation on both real hardware and in virtual machines, generally the first thing I do after logging in is pop open a Bash terminal or two or switching to a different TTY.

Yet now that the rainbows-and-sunshine era of Windows 2000 through Windows 7 has come to a fiery end amidst the dystopian landscape of Windows 10 and with Windows 11 looming over the horizon, it’s time to ask whether I would make the jump to the Linux desktop now.

Linux Non-Standard Base


Bringing things back to the ‘focus group’ aspect, perhaps one of the most off-putting elements of the Linux ecosystem is the completely bewildering explosion of distributions, desktop environments, window managers, package managers and ways of handling even basic tasks. All the skills that you learned while using Arch Linux or SuSE/Red Hat can be mostly tossed out the moment you are on a Debian system, never mind something like Alpine Linux. The differences can be as profound as when using Haiku, for instance.

Rather than Linux distributions focusing on a specific group of users, they seem to be primarily about doing what the people in charge want. This is illustrated by the demise of the Linux Standard Base (LSB) project, which was set up in 2001 by large Linux distributions in order to standardize various fundamentals between these distributions. The goals included a standard filesystem hierarchy, the use of the RPM package format and binary compatibility between distributions to help third-party developers.

By 2015 the project was effectively abandoned, and since then distributing software across Linux distributions has become if possible even more convoluted, with controversial ‘solutions’ like Canonical’s Snap, Flatpak, AppImage, Nix and others cluttering the landscape and sending developers scurrying back in a panic to compiling from source like it’s the 90s all over again.

Within an embedded development context this lack of standardization is also very noticeable, between differences in default compiler search paths, broken backwards compatibility — like the removal of ifconfig — and a host of minor and larger frustrations even before hitting big ticket items like service management flittering between SysV, Upstart, Systemd or having invented their own, even if possibly superior, alternatives like OpenRC in Alpine Linux.

Of note here is also that these system service managers generally do not work well with GUI-based applications, as CLI Linux and GUI Linux are still effectively two entirely different universes.

Wrong Security Model


For some inconceivable reason, Linux – despite not having UNIX roots like BSD – has opted to adopt the UNIX filesystem hierarchy and security model. While this is of no concern when you look at Linux as a wannabe-UNIX that will happily do the same multi-user server tasks, it’s an absolutely awful choice for a desktop OS. Without knowledge of the permission levels on folders, basic things like SSH keys will not work, and accessing network interfaces with Wireshark requires root-level access and some parts of the filesystem, like devices, require the user to be in a specific group.

When the expectation of a user is that the OS behaves pretty much like Windows, then the continued fight against an overly restrictive security model is just one more item that is not necessarily a deal breaker, but definitely grates every time that you run into it. Having the user experience streamlined into a desktop-friendly experience would help a lot here.

Unstable Interfaces


Another really annoying thing with Linux is that there is no stable kernel driver API. This means that with every update to the kernel, each of the kernel drivers have to be recompiled to work. This tripped me up in the past with Realtek chipset drivers for WiFi and Bluetooth. Since these were too new to be included in the Realtek driver package, I had to find an online source version on GitHub, run through the whole string of commands to compile the kernel driver and finally load it.

After running a system update a few days later and doing a restart, the system was no longer to be found on the LAN. This was because the WiFi driver could no longer be loaded, so I had to plug in Ethernet to regain remote access. With this experience in mind I switched to using Wireless-N WiFi dongles, as these are directly supported.

Experiences like this fortunately happen on non-primary systems, where a momentary glitch is of no real concern, especially since I made backups of configurations and such.

Convoluted Mess


This, in a nutshell, is why moving to Linux is something that I’m not seriously considering. Although I would be perfectly capable of using Linux as my desktop OS, I’m much happier on Windows — if you ignore Windows 11. I’d feel more at home on FreeBSD as well as it is a far more coherent experience, not to mention BeOS’ successor Haiku which is becoming tantalizingly usable.

Secretly my favorite operating system to switch to after Windows 10 would be ReactOS, however. It would bring the best of Windows 2000 through Windows 7, be open-source like Linux, yet completely standardized and consistent, and come with all the creature comforts that one would expect from a desktop user experience.

One definitely can dream.


hackaday.com/2025/06/03/my-win…



Host-based logs, container-based threats: How to tell where an attack began



The risks associated with containerized environments


Although containers provide an isolated runtime environment for applications, this isolation is often overestimated. While containers encapsulate dependencies and ensure consistency, the fact that they share the host system’s kernel introduces security risks.

Based on our experience providing Compromise Assessment, SOC Consulting, and Incident Response services to our customers, we have repeatedly seen issues related to a lack of container visibility. Many organizations focus on monitoring containerized environments for operational health rather than security threats. Some lack the expertise to properly configure logging, while others rely on technology stacks that don’t support effective visibility of running containers.

Environments that suffer from such visibility issues are often challenging for threat hunters and incident responders because it can be difficult to clearly distinguish between processes running inside a container and those executed on the host itself. This ambiguity makes it difficult to determine the true origin of an attack and whether it started in a compromised container or directly on the host.

The aim of this blog post is to explain how to restore the execution chain inside a running container using only host-based execution logs, helping threat hunters and incident responders determine the root cause of a compromise.

How containers are created and operate


To effectively investigate security incidents and hunt for threats in containerized environments, it’s essential to understand how containers are created and how they operate. Unlike virtual machines, which run as separate operating systems, containers are isolated user-space environments that share the host OS kernel. They rely on namespaces, control groups (cgroups), union filesystems, Linux capabilities, and other Linux features for resource management and isolation.

Because of this architecture, every process inside a container technically runs on the host, but within a separate namespace. Threat hunters and incident responders typically rely on host-based execution logs to gain a retrospective view of executed processes and command-line arguments. This allows them to analyze networks that lack dedicated containerization environment monitoring solutions. However, some logging configurations may lack critical attributes such as namespaces, cgroups, or specific syscalls. In such cases, rather than relying solely on missing log attributes, we can bridge this visibility gap by understanding the process execution chain of a running container from a host perspective.

Overview of the container creation workflow
Overview of the container creation workflow

End users interact with command-line utilities, such as Docker CLI, kubectl and others, to create and manage their containers. On the backend, these utilities communicate with an engine that facilitates communication with a high-level container runtime, most commonly containerd or CRI-O. These high-level container runtimes leverage low-level container runtimes like runc (the most common) to do the heavy lifting of interacting with the Linux OS kernel. This interaction allocates cgroups, namespaces, and other Linux capabilities for creating and killing containers based on a bundle provided by the high-level runtime. The high-level runtime is, in its turn, based on user-provided arguments. The bundle is a self-contained directory that defines the configuration of a container according to the Open Container Initiative (OCI) Runtime Specification. It mainly consists of:

  1. A rootfs directory that serves as the root filesystem for the container. It is created by extracting and combining the layers from a container image, typically using a union filesystem like OverlayFS.
  2. A config.json file describing an OCI runtime configuration that specifies the necessary process, mounts, and other configurations necessary for creating the container.

It’s important to note which mode runc has been executed in, since it supports two modes: foreground mode and detached mode. The resulting process tree may vary depending on the chosen mode. In foreground mode, a long-running runc process remains in the foreground as a parent process for the container process, primarily to handle the stdio so the end user can interact with the running container.

Process tree of a container created in foreground mode using runc
Process tree of a container created in foreground mode using runc

In detached mode, however, there will be no long-running runc process. After creating the container, runc exits, leaving the caller process to take care of the stdio. In most cases, this is containerd or CRI-O. As we can see in the screenshot below, when we execute a detached container using runc, the runc process will create it and immediately exit. Hence, the parent process of the container is the host’s PID 1 (systemd process).

Process tree of a container created in detached mode using runc
Process tree of a container created in detached mode using runc

However, if we create a detached container using Docker CLI, for example, we’ll notice that the parent of the container process is a shim process, not PID 1!

Process tree of a container created in detached mode using Docker CLI
Process tree of a container created in detached mode using Docker CLI

In modern architectures, communication between high- and low-level container runtimes is proxied through a shim process. This allows containers to run independently of the high-level container runtime, ensuring the sustainability of the running container even if the high-level container runtime crashes or restarts. The shim process also manages the stdio of the container process so users can later attach to running containers via commands like docker exec -it <container>, for example. The shim process can also redirect stdout and stderr to log files that users can later inspect either directly from the filesystem or via commands like kubectl logs <pod> -c <container>.

When a detached container is created using Docker CLI, the high-level container runtime, for example, containerd, executes a shim process that calls runc as a low-level container runtime for the sole purpose of creating the container in detached mode. After that, runc immediately exits. To avoid orphan processes or reparenting to the PID 1, as in the case when we executed runc ourselves, the shim process explicitly sets itself as a subreaper to adopt the container processes after runc exits. A Linux subreaper process is a designated parent that takes care of orphaned child processes in its chain (instead of init), allowing it to manage and clean up its entire process tree.

Detached containers will be reparented to the shim process after creation
Detached containers will be reparented to the shim process after creation

This is implemented in the latest V2 shim and is the default in the modern containerd implementations.

The shim process sets itself as a subreaper process during creation
The shim process sets itself as a subreaper process during creation

When we check the help message of the containerd-shim-runc-v2 process, for example, we notice that it accepts the container ID as a command-line argument, and calls it the id of the task.

Help message of the shim process
Help message of the shim process

We can confirm this by checking the command-line arguments of the running containerd-shim-runc-v2 processes and comparing them with the running containers.

The shim process accepts the ID of the relevant container as a command-line argument
The shim process accepts the ID of the relevant container as a command-line argument

So far, we’ve successfully identified container processes from the host’s perspective. In modern architectures, one of the following processes will typically be seen as a predecessor process for the containerized processes:

  • A shim process, in the case of detached mode; or
  • A runc process, in the case of foreground (interactive) mode.

We can also use the command-line arguments of the shim process to determine which container the process belongs to.

Process tree of the containers from the host perspective
Process tree of the containers from the host perspective

Although tracking the child processes of the shim process can sometimes lead to easy wins, it is often not as easy as it sounds, especially when there are a lot of subprocesses between the shim process and the malicious process. In this case, we can take a bottom-to-top approach, pivoting from the malicious process, tracking its parents all the way up to the shim process to confirm that it was executed inside a running container. It then becomes a matter of choosing the process whose behavior we may need to check for malicious or suspicious activities.

Since containers typically run with minimal dependencies, attackers often rely on shell access to either execute commands directly, or install missing dependencies for their malware. This makes container shells a critical focus for detection. But how exactly do these shells behave? Let’s take a closer look at one of the key shell processes in containerized environments.

How do BusyBox and Alpine execute commands?


In this post, we focus on the behavior of BusyBox-based containers. We also include Alpine-based containers as an example of an image base that relies on BusyBox to implement many core Linux utilities, which helps keep the image lightweight. For the sake of demonstration, Alpine images that depend on additional, non-BusyBox utilities are outside the scope of this post.

BusyBox provides minimalist replacements for many commonly used UNIX utilities, combining them into one small executable. This allows for the creation of lightweight containers with significantly reduced image sizes. But how does the BusyBox executable actually work?

BusyBox has its own implementation of system utilities, known as applets. Each applet is written in C and stored in the busybox/coreutils/ directory as part of the source code. For example, the UNIX cat utility has a custom implementation named cat.c. At runtime, BusyBox creates an applet table that maps applet names to their corresponding functions. This table is used to determine which applet to execute based on the command-line argument provided. This mechanism is defined in the appletlib.c file.

Snippet of the appletlib.c file
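From a user's point of view, the resulting applet table can be listed directly; for example, in the official busybox image:

  # Print the applets compiled into the BusyBox binary, one name per line (cat, ls, sh, ...)
  $ docker run --rm busybox busybox --list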

When an executed command calls an installed utility that is not a default applet, BusyBox relies on the PATH environment variable to determine the utility’s location. Once the path is identified, BusyBox spawns the utility as a child process of the BusyBox process itself. This dynamic execution mechanism is critical to understanding how command execution works within a BusyBox-based container.

Applet/program execution logic

Now that we have a clear understanding of how the BusyBox binary operates, let’s explore how it functions when running inside a container. What happens, for example, when you execute the sh command inside such containers?

In both BusyBox and Alpine containers, executing the sh command to access the shell doesn't actually invoke a standalone binary called sh. Instead, the BusyBox binary itself is executed. In BusyBox containers, we can verify that /bin/sh is replaced by BusyBox by comparing the inodes of /bin/sh and /bin/busybox using ls -li and confirming that both have the same inode number. We can also print their MD5 hashes to see that they are identical, and by executing /bin/sh --help we'll see that it is the BusyBox banner that gets printed.

/bin/sh is replaced by /bin/busybox in BusyBox-based containers
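For example, inside an official busybox image, these checks might look as follows (hash values and the BusyBox version will vary by image):

  $ docker run --rm busybox sh -c 'ls -li /bin/sh /bin/busybox; md5sum /bin/sh /bin/busybox'
  # both paths report the same inode number and the same MD5 hash

  $ docker run --rm busybox sh -c '/bin/sh --help 2>&1 | head -n 1'
  # prints the BusyBox banner rather than a standalone sh usage message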

On the other hand, in the Alpine containers, /bin/sh is a symbolic link to /bin/busybox. This means that when you run the sh command, it actually executes the BusyBox executable referred to by the symbolic link. This can be confirmed by executing readlink -f /bin/sh and observing the output.

/bin/sh is a symbolic link to /bin/busybox in the Alpine-based containers
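The equivalent check in an Alpine image, as a minimal sketch:

  $ docker run --rm alpine sh -c 'ls -l /bin/sh; readlink -f /bin/sh'
  # lrwxrwxrwx ... /bin/sh -> /bin/busybox
  # /bin/busybox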

Hence, inside BusyBox- or Alpine-based containers, all shell commands are either executed directly by the BusyBox process or are launched as child processes under the BusyBox process. These processes run within isolated namespaces on the host operating system, providing the necessary containerization while still utilizing the shared kernel of the host.
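One hedged way to confirm this from the host is to compare the namespaces of a containerized process with those of PID 1 (assumes util-linux lsns and root privileges; $pid is the containerized PID found earlier):

  $ sudo lsns -p "$pid" -o NS,TYPE,NPROCS,PID,COMMAND   # namespaces of the containerized process
  $ sudo ls -l /proc/1/ns/ /proc/"$pid"/ns/             # different symlink targets => different namespaces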

From a threat hunting perspective, a shell process that is non-standard for the host OS, like BusyBox in this case, should prompt further investigation. Why would a BusyBox shell process be running on a Debian or Red Hat host? Combining this conclusion with the previous one allows us to confirm that the shell was executed inside a container whenever runc or a shim is observed as a predecessor of the BusyBox process. The same reasoning applies not only to the BusyBox process but to any other process executed inside a running container, and it is crucial for determining the origin of suspicious behavior when hunting for threats in host execution logs.

Some security tools, such as Kaspersky Container Security, are designed to monitor container activity and detect suspicious behavior. Others, such as Auditd, provide enriched logging at the kernel level based on preconfigured rules that capture system calls, file access, and user activity. However, these rules are often not optimized for containerized environments, further complicating the distinction between host and container activity.
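As a rough, non-container-aware baseline, a pair of Auditd rules that record every execve so that parent chains can be reconstructed later might look like this (noise tuning is deliberately omitted):

  # Record every process execution (64-bit and 32-bit syscalls), tagged for later searching
  $ auditctl -a always,exit -F arch=b64 -S execve -k exec_monitor
  $ auditctl -a always,exit -F arch=b32 -S execve -k exec_monitor
  # Review the captured events
  $ ausearch -k exec_monitor -i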

Investigation value


While investigating execution logs, threat hunters and incident responders might overlook some activities on Linux machines, thinking they are part of normal operations. However, the same activities performed inside a running container should raise suspicion. For example, installing utilities such as Docker CLI may be normal on the host, but not inside a container. Recently, in a Compromise Assessment project, we discovered a crypto mining campaign in which the threat actor installed Docker CLI inside a running container in order to easily communicate with dockerd APIs.

Confirming that the docker.io installation occurred inside a running container

In this example, we detected the installation of Docker CLI inside a container by tracing the process chain. We then determined the origin of the executed command and confirmed the container in which the command was executed by checking the command-line argument of the shim process.

During another investigation, we detected an interesting event where the process name was systemd while the process executable path was /.redtail. To identify the origin of this process, we followed the same procedure of tracking the parent processes.

Determining the container in which the suspicious event occurred

Another interesting fact we can leverage is that a Docker container is always created by a runc process as the low-level container runtime. The runc help message reveals the command-line arguments used to create, run or start a container.

runc help message

Monitoring these events helps threat hunters and incident responders identify the ID of the container in question and detect any abnormal entrypoints. A container's entrypoint is its main process, and it is the process spawned by runc. The screenshot below shows an example of the creation of a malicious container, detected by hunting for entrypoints with suspicious command-line arguments; in this case, the command line contains a malicious base64-encoded command.

Hunting for suspicious container entrypoints
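On a live Docker host, a container's configured entrypoint and arguments can be reviewed, for example, as follows; the config.json location is an assumption typical of containerd's v2 runtime with the moby namespace and may differ per setup (<container> and <container-id> are placeholders):

  # Entrypoint binary and arguments as Docker reports them
  $ docker inspect -f '{{.Path}} {{.Args}}' <container>

  # The OCI spec handed to runc (requires jq; path is an assumption, see above)
  $ jq '.process.args' /run/containerd/io.containerd.runtime.v2.task/moby/<container-id>/config.json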

Conclusion


Containerized environments are now part of most organizations' networks because of the ease of deployment and dependency encapsulation they provide. However, they are often overlooked by security teams and decision makers because of a common misunderstanding about container isolation. This leads to undesirable situations when these containers are compromised and the security team lacks the knowledge or tools to support response activities, or even to monitor and detect the compromise in the first place.

The approach discussed in this post is one of the procedures that we typically follow in our Compromise Assessment and Incident Response services when we need to hunt for threats in historical host execution logs with container visibility issues. However, in order to detect container-based threats in time, it is crucial to protect your systems with a solid containerization monitoring solution, such as Kaspersky Container Security.


securelist.com/host-based-logs…



Digital sovereignty: how the EU Parliament wants to make Europe more independent


netzpolitik.org/2025/digitale-…



With great joy, today we can celebrate the defeat of the oil company Rockhopper, which had demanded 190 million euros in compensation from Italy. They will receive 0 euros and will no longer be able to blackmail our community as they did before. I proudly claim credit for the fact that we managed to stop the devastating project of a gigantic floating refinery, Ombrina 2 [...]