Meet the Experts: AI between hype and reality
Is AI really our ally? Or does it hide vulnerabilities we cannot ignore?
On September 25 at Factory NoLo (Milan) we will discuss this with Prof. Alessandro Piva, who will present the data from the Osservatorio AI. Alongside him, Matteo Macina and Arturo Di Corinto, professionals of proven experience, will share scenarios, opportunities, and the risks of depending on AI. The discussion will be moderated by journalist Gianni Rusconi.
Not a conventional talk, but meaningful reflections, live answers and, why not, a few provocations for skeptics and the highly aware alike. Not a one-way format, but a concrete, contemporary exchange from which to take home ideas, awareness, and some first-hand experience.
September 25 | 16:00 – 20:00 – Factory NoLo, Milan
Limited seats
Register for free: bit.ly/4p9u8ZA
Stay tuned to discover the other speakers
#evento #milano #cybersecurity #ai #technology
Heart Rate Monitoring via WiFi
Before you decide to click away, thinking we’re talking about some heart rate monitor that connects to a display using WiFi, wait! Pulse-Fi is a system that monitors heart rate using the WiFi signal itself as a measuring device. No sensor, no wires, and it works on people up to ten feet away.
Researchers at UC Santa Cruz, including a visiting high school student researcher, put together a proof of concept. Apparently, your heart rate can modify WiFi channel state information. By measuring actual heart rate and the variations in the WiFi signal, the team was able to fit data to allow for accurate heart rate prediction.
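The signal chain behind this is conceptually simple: the heartbeat imposes a tiny periodic ripple on the WiFi channel state information (CSI) amplitudes, and a band-pass filter plus a spectral peak search can turn that ripple into a beats-per-minute estimate. Below is a minimal, hypothetical Python sketch of the idea; the CSI capture itself (say, an ESP32 streaming CSI packets) is assumed to already exist, and the 0.8–2.5 Hz band and all parameter values are our own assumptions rather than details from the UC Santa Cruz work.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_bpm(csi_amplitude, sample_rate_hz):
    """Estimate heart rate from a 1-D series of CSI amplitude samples."""
    # Remove the slowly varying channel component so only the ripple remains.
    detrended = csi_amplitude - np.mean(csi_amplitude)

    # Band-pass 0.8-2.5 Hz, i.e. roughly 48-150 BPM (assumed plausible band).
    low, high = 0.8, 2.5
    b, a = butter(4, [low, high], btype="bandpass", fs=sample_rate_hz)
    filtered = filtfilt(b, a, detrended)

    # Find the dominant spectral peak inside the band and convert to BPM.
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / sample_rate_hz)
    band = (freqs >= low) & (freqs <= high)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0

# Synthetic check: a 72 BPM (1.2 Hz) ripple buried in noise, sampled at 50 Hz.
fs = 50.0
t = np.arange(0, 30, 1 / fs)
fake_csi = 1.0 + 0.02 * np.sin(2 * np.pi * 1.2 * t) + 0.01 * np.random.randn(len(t))
print(f"Estimated heart rate: {estimate_bpm(fake_csi, fs):.1f} BPM")
```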
The primary device used was an ESP32, although a Raspberry Pi performed the same trick using data generated in Brazil. The Pi appeared to work better, but it is also more expensive. That also implies that different WiFi chipsets probably need their own training, which, we suppose, makes sense.
Like you, we’ve got a lot of questions about this one, including how repeatable this is in a real-world environment. But it does make you wonder what else we could use WiFi perturbations to detect. Or other ubiquitous RF signals like Bluetooth.
No need for a clunky wristband. If you could sense enough things like this, maybe you could come up with a wireless polygraph.
IT threat evolution in Q2 2025. Mobile statistics
The mobile section of our quarterly cyberthreat report includes statistics on malware, adware, and potentially unwanted software for Android, as well as descriptions of the most notable threats for Android and iOS discovered during the reporting period. The statistics in this report are based on detection alerts from Kaspersky products, collected from users who consented to provide anonymized data to Kaspersky Security Network.
Quarterly figures
According to Kaspersky Security Network, in Q2 2025:
- Our solutions blocked 10.71 million malware, adware, and unwanted mobile software attacks.
- Trojans, the most common mobile threat, accounted for 31.69% of total detected threats.
- Just under 143,000 malicious installation packages were detected, of which:
- 42,220 were mobile banking Trojans;
- 695 packages were mobile ransomware Trojans.
Quarterly highlights
Mobile attacks involving malware, adware, and unwanted software dropped to 10.71 million.
Attacks on users of Kaspersky mobile solutions, Q4 2023 — Q2 2025 (download)
The trend is mainly due to a decrease in the activity of RiskTool.AndroidOS.SpyLoan. These are applications typically associated with microlenders and containing a potentially dangerous framework for monitoring borrowers and collecting their data, such as contact lists. Curiously, such applications have been found pre-installed on some devices.
In Q2, we found a new malicious app for Android and iOS that was stealing images from the gallery. We were able to determine that this campaign was linked to the previously discovered SparkCat, so we dubbed it SparkKitty.
Fake app store page distributing SparkKitty
Like its “big brother”, the new malware most likely targets recovery codes for crypto wallets saved as screenshots.
Trojan-DDoS.AndroidOS.Agent.a was this past quarter’s unusual discovery. Malicious actors embedded an SDK for conducting dynamically configurable DDoS attacks into apps designed for viewing adult content. The Trojan allows for sending specific data to addresses designated by the attacker at a set frequency. Building a DDoS botnet from mobile devices with adult apps installed may seem like a questionable venture in terms of attack efficiency and power – but apparently, some cybercriminals have found a use for this approach.
In Q2, we also encountered Trojan-Spy.AndroidOS.OtpSteal.a, a fake VPN client that hijacks user accounts. Instead of the advertised features, it uses the Notification Listener service to intercept OTP codes from various messaging apps and social networks, and sends them to the attackers’ Telegram chat via a bot.
Mobile threat statistics
The number of Android malware and potentially unwanted app samples decreased from Q1, reaching a total of 142,762 installation packages.
Detected malware and potentially unwanted app installation packages, Q2 2024 — Q2 2025 (download)
The distribution of detected installation packages by type in Q2 was as follows:
Detected mobile malware by type, Q1 — Q2 2025 (download)
* Data for the previous quarter may differ slightly from previously published data due to some verdicts being retrospectively revised.
Banking Trojans remained in first place, with their share increasing relative to Q1. The Mamont family continues to dominate this category. In contrast, spy Trojans dropped to fifth place as the surge in the number of APK files for the SMS-stealing Trojan-Spy.AndroidOS.Agent.akg subsided. The number of Agent.amw spyware files, which masquerade as casino apps, also decreased.
RiskTool-type unwanted apps and adware ranked second and third, respectively, while Trojans – with most files belonging to the Triada family – occupied fourth place.
Share* of users attacked by the given type of malicious or potentially unwanted apps out of all targeted users of Kaspersky mobile products, Q1 — Q2 2025 (download)
* The total may exceed 100% if the same users experienced multiple attack types.
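That footnote follows directly from how the shares are computed: each user is counted once per threat type they encountered, and a single user can encounter several types. A small, hypothetical Python illustration of the bookkeeping:

```python
from collections import defaultdict

# Hypothetical detection log: (user_id, threat_type) pairs for attacked users.
detections = [
    ("u1", "Trojan"), ("u1", "AdWare"),   # one user hit by two threat types
    ("u2", "Trojan"),
    ("u3", "RiskTool"), ("u3", "Trojan"),
]

users_by_type = defaultdict(set)
for user, threat_type in detections:
    users_by_type[threat_type].add(user)

all_attacked_users = {user for user, _ in detections}

for threat_type, users in sorted(users_by_type.items()):
    share = 100.0 * len(users) / len(all_attacked_users)
    print(f"{threat_type}: {share:.1f}% of attacked users")

# The shares add up to more than 100% because u1 and u3 each appear in two categories.
total = sum(100.0 * len(u) / len(all_attacked_users) for u in users_by_type.values())
print(f"Sum of shares: {total:.1f}%")
```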
The distribution of attacked users remained close to that of the previous quarter. The increase in the share of backdoors is linked to the discovery of Backdoor.Triada.z, which came pre-installed on devices. As for adware, the proportion of users affected by the HiddenAd family has grown.
TOP 20 most frequently detected types of mobile malware
Note that the malware rankings below exclude riskware or potentially unwanted software, such as RiskTool or adware.
Verdict | %* Q1 2025 | %* Q2 2025 | Difference (p.p.) | Change in rank |
Trojan.AndroidOS.Fakemoney.v | 26.41 | 14.57 | -11.84 | 0 |
Trojan-Banker.AndroidOS.Mamont.da | 11.21 | 12.42 | +1.20 | +2 |
Backdoor.AndroidOS.Triada.z | 4.71 | 10.29 | +5.58 | +3 |
Trojan.AndroidOS.Triada.fe | 3.48 | 7.16 | +3.69 | +4 |
Trojan-Banker.AndroidOS.Mamont.ev | 0.00 | 6.97 | +6.97 | |
Trojan.AndroidOS.Triada.gn | 2.68 | 6.54 | +3.86 | +3 |
Trojan-Banker.AndroidOS.Mamont.db | 16.00 | 5.50 | -10.50 | -4 |
Trojan-Banker.AndroidOS.Mamont.ek | 1.83 | 5.09 | +3.26 | +7 |
DangerousObject.Multi.Generic | 19.30 | 4.21 | -15.09 | -7 |
Trojan-Banker.AndroidOS.Mamont.eb | 1.59 | 2.58 | +0.99 | +7 |
Trojan.AndroidOS.Triada.hf | 3.81 | 2.41 | -1.40 | -4 |
Trojan-Downloader.AndroidOS.Dwphon.a | 2.19 | 2.24 | +0.05 | 0 |
Trojan-Banker.AndroidOS.Mamont.ef | 2.44 | 2.20 | -0.24 | -2 |
Trojan-Banker.AndroidOS.Mamont.es | 0.05 | 2.13 | +2.08 | |
Trojan-Banker.AndroidOS.Mamont.dn | 1.46 | 2.13 | +0.67 | +5 |
Trojan-Downloader.AndroidOS.Agent.mm | 1.45 | 1.56 | +0.11 | +6 |
Trojan-Banker.AndroidOS.Agent.rj | 1.86 | 1.45 | -0.42 | -3 |
Trojan-Banker.AndroidOS.Mamont.ey | 0.00 | 1.42 | +1.42 | |
Trojan-Banker.AndroidOS.Mamont.bc | 7.61 | 1.39 | -6.23 | -14 |
Trojan.AndroidOS.Boogr.gsh | 1.41 | 1.36 | -0.06 | +3 |
* Unique users who encountered this malware as a percentage of all attacked users of Kaspersky mobile solutions.
The activity of Fakemoney scam apps noticeably decreased in Q2, but they still held the top position. Almost all the other entries on the list are variants of the popular banking Trojan Mamont, pre-installed Trojans like Triada and Dwphon, and modified messaging apps with the Triada Trojan built in (Triada.fe, Triada.gn, Triada.ga, and Triada.gs).
Region-specific malware
This section describes malware types that mostly affected specific countries.
Verdict | Country* | %** |
Trojan-Banker.AndroidOS.Coper.c | Türkiye | 98.65 |
Trojan-Banker.AndroidOS.Coper.a | Türkiye | 97.78 |
Trojan-Dropper.AndroidOS.Rewardsteal.h | India | 95.62 |
Trojan-Banker.AndroidOS.Rewardsteal.lv | India | 95.48 |
Trojan-Dropper.AndroidOS.Agent.sm | Türkiye | 94.52 |
Trojan.AndroidOS.Fakeapp.hy | Uzbekistan | 86.51 |
Trojan.AndroidOS.Piom.bkzj | Uzbekistan | 85.83 |
Trojan-Dropper.AndroidOS.Pylcasa.c | Brazil | 83.06 |
* The country where the malware was most active.
** Unique users who encountered this Trojan variant in the indicated country as a percentage of all Kaspersky mobile security solution users attacked by the same variant.
In addition to the typical banking Trojans for this category – Coper, which targets users in Türkiye, and Rewardsteal, active in India – the list also includes the fake job search apps Fakeapp.hy and Piom.bkzj, which specifically target Uzbekistan. Both families collect the user’s personal data. Meanwhile, new droppers named “Pylcasa” operated in Brazil. They infiltrate Google Play by masquerading as simple apps, such as calculators, but once launched, they open a URL provided by malicious actors – similar to Trojans of the Fakemoney family. These URLs may lead to illegal casino websites or phishing pages.
Mobile banking Trojans
The number of banking Trojans detected in Q2 2025 was slightly lower than in Q1 but still significantly exceeded the figures for 2024. Kaspersky solutions detected a total of 42,220 installation packages of this type.
Number of installation packages for mobile banking Trojans detected by Kaspersky, Q2 2024 — Q2 2025 (download)
The bulk of mobile banking Trojan installation packages still consists of various modifications of Mamont, which account for 57.7%. In terms of the share of affected users, Mamont also outpaced all its competitors, occupying nearly all the top spots on the list of the most widespread banking Trojans.
TOP 10 mobile bankers
Verdict | %* Q1 2025 | %* Q2 2025 | Difference (p.p.) | Change in rank |
Trojan-Banker.AndroidOS.Mamont.da | 26.68 | 30.28 | +3.59 | +1 |
Trojan-Banker.AndroidOS.Mamont.ev | 0.00 | 17.00 | +17.00 | |
Trojan-Banker.AndroidOS.Mamont.db | 38.07 | 13.41 | -24.66 | -2 |
Trojan-Banker.AndroidOS.Mamont.ek | 4.37 | 12.42 | +8.05 | +2 |
Trojan-Banker.AndroidOS.Mamont.eb | 3.80 | 6.29 | +2.50 | +2 |
Trojan-Banker.AndroidOS.Mamont.ef | 5.80 | 5.36 | -0.45 | -2 |
Trojan-Banker.AndroidOS.Mamont.es | 0.12 | 5.20 | +5.07 | +23 |
Trojan-Banker.AndroidOS.Mamont.dn | 3.48 | 5.20 | +1.72 | +1 |
Trojan-Banker.AndroidOS.Agent.rj | 4.43 | 3.53 | -0.90 | -4 |
Trojan-Banker.AndroidOS.Mamont.ey | 0.00 | 3.47 | +3.47 | 9 |
Conclusion
In Q2 2025, the number of attacks involving malware, adware, and unwanted software decreased compared to Q1. At the same time, Trojans and banking Trojans remained the most common threats, particularly the highly active Mamont family. Additionally, the quarter was marked by the discovery of the second spyware Trojan of 2025 to infiltrate the App Store, along with a fake VPN client stealing OTP codes and a DDoS bot concealed within porn-viewing apps.
IT threat evolution in Q2 2025. Non-mobile statistics
The statistics in this report are based on detection verdicts returned by Kaspersky products unless otherwise stated. The information was provided by Kaspersky users who consented to sharing statistical data.
The quarter in numbers
In Q2 2025:
- Kaspersky solutions blocked more than 471 million attacks originating from various online resources.
- Web Anti-Virus detected 77 million unique links.
- File Anti-Virus blocked nearly 23 million malicious and potentially unwanted objects.
- There were 1,702 new ransomware modifications discovered.
- Just under 86,000 users were targeted by ransomware attacks.
- Of all ransomware victims whose data was published on threat actors’ data leak sites (DLS), 12% were victims of Qilin.
- Almost 280,000 users were targeted by miners.
Ransomware
Quarterly trends and highlights
Law enforcement success
The alleged malicious actor behind the Black Kingdom ransomware attacks was indicted in the U.S. The Yemeni national is accused of infecting about 1,500 computers in the U.S. and other countries through vulnerabilities in Microsoft Exchange, and of demanding a ransom of $10,000 in bitcoin, the amount victims saw in the ransom note. He is also alleged to be the developer of the Black Kingdom ransomware.
A Ukrainian national was extradited to the U.S. in the Nefilim case. He was arrested in Spain in June 2024 on charges of distributing ransomware and extorting victims. According to the investigation, he had been part of the Nefilim Ransomware-as-a-Service (RaaS) operation since 2021, targeting high-revenue organizations. Nefilim uses the classic double extortion scheme: cybercriminals steal the victim’s data, encrypt it, then threaten to publish it online.
Also arrested was a member of the Ryuk gang, charged with organizing initial access to victims’ networks. The accused was apprehended in Kyiv in April 2025 at the request of the FBI and extradited to the U.S. in June.
A man suspected of being involved in attacks by the DoppelPaymer gang was arrested. In a joint operation by law enforcement in the Netherlands and Moldova, the 45-year-old was arrested in May. He is accused of carrying out attacks against Dutch organizations in 2021. Authorities seized around €84,800 and several devices.
A 39-year-old Iranian national pleaded guilty to participating in RobbinHood ransomware attacks. Among the targets of the attacks, which took place from 2019 to 2024, were U.S. local government agencies, healthcare providers, and non-profit organizations.
Vulnerabilities and attacks
Mass exploitation of a vulnerability in SAP NetWeaver
In May, it was revealed that several ransomware gangs, including BianLian and RansomExx, had been exploiting CVE-2025-31324 in SAP NetWeaver software. Successful exploitation of this vulnerability allows attackers to upload malicious files without authentication, which can lead to a complete system compromise.
Attacks via the SimpleHelp remote administration tool
The DragonForce group compromised a managed service provider (MSP), attacking its clients with the help of the SimpleHelp remote administration tool. According to researchers, the attackers exploited a set of vulnerabilities (CVE-2024-57727, CVE-2024-57728, CVE-2024-57726) in the software to launch the DragonForce ransomware on victims’ hosts.
Qilin exploits vulnerabilities in Fortinet
In June, news broke that the Qilin gang (also known as Agenda) was actively exploiting critical vulnerabilities in Fortinet devices to infiltrate corporate networks. The attackers allegedly exploited the vulnerabilities CVE-2024-21762 and CVE-2024-55591 in FortiGate software, which allowed them to bypass authentication and execute malicious code remotely. After gaining access, the cybercriminals encrypted data on systems within the corporate network and demanded a ransom.
Exploitation of a Windows CLFS vulnerability
April saw the detection of attacks that leveraged CVE-2025-29824, a zero-day vulnerability in the Windows Common Log File System (CLFS) driver, a core component of the Windows OS. This vulnerability allows an attacker to elevate privileges on a compromised system. Researchers have linked these incidents to the RansomExx and Play gangs. The attackers targeted companies in North and South America, Europe, and the Middle East.
The most prolific groups
This section highlights the most prolific ransomware gangs by number of victims added to each group’s DLS during the reporting period. In the second quarter, Qilin (12.07%) proved to be the most prolific group. RansomHub, the leader of 2024 and the first quarter of 2025, seems to have gone dormant since April. Clop (10.83%) and Akira (8.53%) swapped places compared to the previous reporting period.
Number of each group’s victims according to its DLS as a percentage of all groups’ victims published on all the DLSs under review during the reporting period (download)
Number of new variants
In the second quarter, Kaspersky solutions detected three new families and 1,702 new ransomware variants. This is significantly fewer than in the previous reporting period. The decrease is linked to the renewed decline in the count of Trojan-Ransom.Win32.Gen verdicts, following a spike last quarter.
Number of new ransomware modifications, Q2 2024 — Q2 2025 (download)
Number of users attacked by ransomware Trojans
Our solutions protected a total of 85,702 unique users from ransomware during the second quarter.
Number of unique users attacked by ransomware Trojans, Q2 2025 (download)
Geography of attacked users
TOP 10 countries and territories attacked by ransomware Trojans
Country/territory* | %** | |
1 | Libya | 0.66 |
2 | China | 0.58 |
3 | Rwanda | 0.57 |
4 | South Korea | 0.51 |
5 | Tajikistan | 0.49 |
6 | Bangladesh | 0.45 |
7 | Iraq | 0.45 |
8 | Pakistan | 0.38 |
9 | Brazil | 0.38 |
10 | Tanzania | 0.35 |
* Excluded are countries and territories with relatively few (under 50,000) Kaspersky users.
** Unique users whose computers were attacked by ransomware Trojans as a percentage of all unique users of Kaspersky products in the country/territory.
TOP 10 most common families of ransomware Trojans
Name | Verdict | %* | |
1 | (generic verdict) | Trojan-Ransom.Win32.Gen | 23.33 |
2 | WannaCry | Trojan-Ransom.Win32.Wanna | 7.80 |
3 | (generic verdict) | Trojan-Ransom.Win32.Encoder | 6.25 |
4 | (generic verdict) | Trojan-Ransom.Win32.Crypren | 6.24 |
5 | (generic verdict) | Trojan-Ransom.Win32.Agent | 3.75 |
6 | Cryakl/CryLock | Trojan-Ransom.Win32.Cryakl | 3.34 |
7 | PolyRansom/VirLock | Virus.Win32.PolyRansom / Trojan-Ransom.Win32.PolyRansom | 3.03 |
8 | (generic verdict) | Trojan-Ransom.Win32.Crypmod | 2.81 |
9 | (generic verdict) | Trojan-Ransom.Win32.Phny | 2.78 |
10 | (generic verdict) | Trojan-Ransom.MSIL.Agent | 2.41 |
* Unique Kaspersky users attacked by the specific ransomware Trojan family as a percentage of all unique users attacked by this type of threat.
Miners
Number of new variants
In the second quarter of 2025, Kaspersky solutions detected 2,245 new modifications of miners.
Number of new miner modifications, Q2 2025 (download)
Number of users attacked by miners
During the second quarter, we detected attacks using miner programs on the computers of 279,630 unique Kaspersky users worldwide.
Number of unique users attacked by miners, Q2 2025 (download)
Geography of attacked users
TOP 10 countries and territories attacked by miners
Country/territory* | %** | |
1 | Senegal | 3.49 |
2 | Panama | 1.31 |
3 | Kazakhstan | 1.11 |
4 | Ethiopia | 1.02 |
5 | Belarus | 1.01 |
6 | Mali | 0.96 |
7 | Tajikistan | 0.88 |
8 | Tanzania | 0.80 |
9 | Moldova | 0.80 |
10 | Dominican Republic | 0.80 |
* Excluded are countries and territories with relatively few (under 50,000) Kaspersky users.
** Unique users whose computers were attacked by miners as a percentage of all unique users of Kaspersky products in the country/territory.
Attacks on macOS
Among the threats to macOS, one of the biggest discoveries of the second quarter was the PasivRobber family. This spyware consists of a huge number of modules designed to steal data from QQ, WeChat, and other messaging apps and applications that are popular mainly among Chinese users. Its distinctive feature is that the spyware modules get embedded into the target process when the device goes into sleep mode.
Closer to the middle of the quarter, several reports (1, 2, 3) emerged about attackers stepping up their activity, posing as victims’ trusted contacts on Telegram and convincing them to join a Zoom call. During or before the call, the user was persuaded to run a seemingly Zoom-related utility that was actually malware. The infection chain led to the download of a backdoor written in the Nim language and bash scripts that stole data from browsers.
TOP 20 threats to macOS
* Unique users who encountered this malware as a percentage of all attacked users of Kaspersky security solutions for macOS (download)
* Data for the previous quarter may differ slightly from previously published data due to some verdicts being retrospectively revised.
A new piece of spyware named PasivRobber, discovered in the second quarter, immediately became the most widespread threat, attacking more users than the fake cleaners and adware typically seen on macOS. Also among the most common threats were the password- and crypto wallet-stealing Trojan Amos and the general detection Trojan.OSX.Agent.gen, which we described in our previous report.
Geography of threats to macOS
TOP 10 countries and territories by share of attacked users
Country/territory | %* Q1 2025 | %* Q2 2025 |
Mainland China | 0.73% | 2.50% |
France | 1.52% | 1.08% |
Hong Kong | 1.21% | 0.84% |
India | 0.84% | 0.76% |
Mexico | 0.85% | 0.76% |
Brazil | 0.66% | 0.70% |
Germany | 0.96% | 0.69% |
Singapore | 0.32% | 0.63% |
Russian Federation | 0.50% | 0.41% |
South Korea | 0.10% | 0.32% |
* Unique users who encountered threats to macOS as a percentage of all unique Kaspersky users in the country/territory.
IoT threat statistics
This section presents statistics on attacks targeting Kaspersky IoT honeypots. The geographic data on attack sources is based on the IP addresses of attacking devices.
In the second quarter of 2025, there was another increase in both the share of attacks using the Telnet protocol and the share of devices connecting to Kaspersky honeypots via this protocol.
Distribution of attacked services by number of unique IP addresses of attacking devices (download)
Distribution of attackers’ sessions in Kaspersky honeypots (download)
TOP 10 threats delivered to IoT devices
Share of each threat delivered to an infected device as a result of a successful attack, out of the total number of threats delivered (download)
In the second quarter, the share of the NyaDrop botnet among threats delivered to our honeypots grew significantly to 30.27%. Conversely, the number of Mirai variants on the list of most common malware decreased, as did the share of most of them. Additionally, after a spike in the first quarter, the share of BitCoinMiner miners dropped to 1.57%.
During the reporting period, the list of most common IoT threats expanded with new families. The activity of the Agent.nx backdoor (4.48%), controlled via P2P through the BitTorrent DHT distributed hash table, grew markedly. Another newcomer to the list, Prometei, is a Linux version of a Windows botnet that was first discovered in December 2020.
Attacks on IoT honeypots
Geographically speaking, the percentage of SSH attacks originating from Germany and the U.S. increased sharply.
Country/territory | Q1 2025 | Q2 2025 |
Germany | 1.60% | 24.58% |
United States | 5.52% | 10.81% |
Russian Federation | 9.16% | 8.45% |
Australia | 2.75% | 8.01% |
Seychelles | 1.32% | 6.54% |
Bulgaria | 1.25% | 3.66% |
The Netherlands | 0.63% | 3.53% |
Vietnam | 2.27% | 3.00% |
Romania | 1.34% | 2.92% |
India | 19.16% | 2.89% |
The share of Telnet attacks originating from China and India remained high, with more than half of all attacks on Kaspersky honeypots coming from these two countries combined.
Country/territory | Q1 2025 | Q2 2025 |
China | 39.82% | 47.02% |
India | 30.07% | 28.08% |
Indonesia | 2.25% | 5.54% |
Russian Federation | 5.14% | 4.85% |
Pakistan | 3.99% | 3.58% |
Brazil | 12.03% | 2.35% |
Nigeria | 3.01% | 1.66% |
Germany | 0.09% | 1.47% |
United States | 0.68% | 0.75% |
Argentina | 0.01% | 0.70% |
Attacks via web resources
The statistics in this section are based on detection verdicts by Web Anti-Virus, which protects users when suspicious objects are downloaded from malicious or infected web pages. Cybercriminals create malicious pages deliberately, but websites that host user-generated content, such as message boards, as well as compromised legitimate sites, can also become infected.
Countries that served as sources of web-based attacks: TOP 10
This section gives the geographical distribution of sources of online attacks blocked by Kaspersky products: web pages that redirect to exploits; sites that host exploits and other malware; botnet C2 centers, and the like. Any unique host could be the source of one or more web-based attacks.
To determine the geographic source of web attacks, we matched the domain name with the real IP address where the domain is hosted, then identified the geographic location of that IP address (GeoIP).
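As a rough illustration, this kind of attribution can be reproduced with off-the-shelf tooling: resolve the host name to an IP address, then look that address up in a GeoIP database. The sketch below is only indicative; it assumes a local GeoLite2 country database and the geoip2 Python package, which is not necessarily the tooling used for the figures in this report.

```python
import socket
import geoip2.database  # pip install geoip2; needs a GeoLite2-Country.mmdb file

def attack_source_country(hostname, mmdb_path="GeoLite2-Country.mmdb"):
    """Map a malicious host name to the country hosting it (best effort)."""
    ip_address = socket.gethostbyname(hostname)   # DNS resolution
    reader = geoip2.database.Reader(mmdb_path)    # local GeoIP database
    try:
        record = reader.country(ip_address)
        return ip_address, record.country.name
    finally:
        reader.close()

# Hypothetical usage with a placeholder host name:
# print(attack_source_country("malicious.example.com"))
```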
In the second quarter of 2025, Kaspersky solutions blocked 471,066,028 attacks from internet resources worldwide. Web Anti-Virus responded to 77,371,384 unique URLs.
Web-based attacks by country, Q2 2025 (download)
Countries and territories where users faced the greatest risk of online infection
To assess the risk of malware infection via the internet for users’ computers in different countries and territories, we calculated the share of Kaspersky users in each location who experienced a Web Anti-Virus alert during the reporting period. The resulting data provides an indication of the aggressiveness of the environment in which computers operate in different countries and territories.
This ranked list includes only attacks by malicious objects classified as Malware. Our calculations leave out Web Anti-Virus detections of potentially dangerous or unwanted programs, such as RiskTool or adware.
Country/territory* | %** | |
1 | Bangladesh | 10.85 |
2 | Tajikistan | 10.70 |
3 | Belarus | 8.96 |
4 | Nepal | 8.45 |
5 | Algeria | 8.21 |
6 | Moldova | 8.16 |
7 | Turkey | 8.08 |
8 | Qatar | 8.07 |
9 | Albania | 8.03 |
10 | Hungary | 7.96 |
11 | Tunisia | 7.95 |
12 | Portugal | 7.93 |
13 | Greece | 7.90 |
14 | Serbia | 7.84 |
15 | Bulgaria | 7.79 |
16 | Sri Lanka | 7.72 |
17 | Morocco | 7.70 |
18 | Georgia | 7.68 |
19 | Peru | 7.63 |
20 | North Macedonia | 7.58 |
* Excluded are countries and territories with relatively few (under 10,000) Kaspersky users.
** Unique users targeted by Malware attacks as a percentage of all unique users of Kaspersky products in the country.
On average during the quarter, 6.36% of internet users’ computers worldwide were subjected to at least one Malware web-based attack.
Local threats
Statistics on local infections of user computers are an important indicator. They include objects that penetrated the target computer by infecting files or removable media, or initially made their way onto the computer in non-open form. Examples of the latter are programs in complex installers and encrypted files.
Data in this section is based on analyzing statistics produced by anti-virus scans of files on the hard drive at the moment they were created or accessed, and the results of scanning removable storage media. The statistics are based on detection verdicts from the On-Access Scan (OAS) and On-Demand Scan (ODS) modules of File Anti-Virus. This includes malware found directly on user computers or on connected removable media: flash drives, camera memory cards, phones, and external hard drives.
In the second quarter of 2025, our File Anti-Virus recorded 23,260,596 malicious and potentially unwanted objects.
Countries and territories where users faced the highest risk of local infection
For each country and territory, we calculated the percentage of Kaspersky users whose devices experienced a File Anti-Virus triggering at least once during the reporting period. This statistic reflects the level of personal computer infection in different countries and territories around the world.
Note that this ranked list includes only attacks by malicious objects classified as Malware. Our calculations leave out File Anti-Virus detections of potentially dangerous or unwanted programs, such as RiskTool or adware.
Country/territory* | %** | |
1 | Turkmenistan | 45.26 |
2 | Afghanistan | 34.95 |
3 | Tajikistan | 34.43 |
4 | Yemen | 31.95 |
5 | Cuba | 30.85 |
6 | Uzbekistan | 28.53 |
7 | Syria | 26.63 |
8 | Vietnam | 24.75 |
9 | South Sudan | 24.56 |
10 | Algeria | 24.21 |
11 | Bangladesh | 23.79 |
12 | Belarus | 23.67 |
13 | Gabon | 23.37 |
14 | Niger | 23.35 |
15 | Cameroon | 23.10 |
16 | Tanzania | 22.77 |
17 | China | 22.74 |
18 | Iraq | 22.47 |
19 | Burundi | 22.30 |
20 | Congo | 21.84 |
* Excluded are countries and territories with relatively few (under 10,000) Kaspersky users.
** Unique users on whose computers Malware local threats were blocked, as a percentage of all unique users of Kaspersky products in the country/territory.
Overall, 12.94% of user computers globally faced at least one Malware local threat during the second quarter.
The figure for Russia was 14.27%.
Capture and Plot Serial Data in the Browser
If you’re working with a microcontroller that reads a sensor, the chances are that at some point you’re faced with a serial port spitting out continuous readings. The workflow of visualizing this data can be tedious, involving a cut-and-paste from a terminal to a CSV file. What if there were a handy all-in-one serial data visualization tool, a serial data oscilloscope, if you will? [Atomic14] has you covered, with the web serial plotter.
It’s a browser-based tool that uses the WebSerial API, so sadly if you’re a Firefox user you’re not invited to the party. Serial data can be plotted and exported, and there are a range of options for viewing. Behind the scenes there’s some Node and React magic happening, but should you wish to avoid getting your hands dirty there’s an online demo you can try.
Looking at it, we’re ashamed to have been labouring under a complex workflow, particularly as we find this isn’t the first such tool to appear on these pages.
youtube.com/embed/MEQCPBF99FQ?…
Supercomputers: Italy in sixth and tenth place in the 2025 TOP500 ranking
The world of supercomputers has entered the era of exascale computing. The June 2025 TOP500 list features three American systems at the top, a striking debut from Europe, and a notable presence of cloud and industrial installations in the top ten. High-performance computing is no longer just about national laboratories; it now also involves commercial clouds and industrial centers.
Supercomputer performance is measured in floating-point operations per second (FLOPS). In the TOP500 list, systems are ranked by the LINPACK benchmark (HPL), in which the key indicator, Rmax, reflects the sustained speed at which a large system of linear equations is solved. Today’s leaders demonstrate stable exaflop-level performance in this demanding benchmark.
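As a back-of-the-envelope illustration of what Rmax means, the HPL benchmark factors a dense N-by-N linear system, which costs roughly 2/3·N³ + 2·N² floating-point operations; dividing that count by Rmax gives an estimated run time. The Python sketch below is only an order-of-magnitude exercise, and the problem size is an assumed value, not a published HPL configuration.

```python
def hpl_runtime_hours(n, rmax_flops):
    """Estimate HPL run time from problem size and sustained performance."""
    flop_count = (2 / 3) * n**3 + 2 * n**2   # classic LU-factorization operation count
    return flop_count / rmax_flops / 3600.0

# El Capitan: Rmax ~1.742 exaflops; assume a hypothetical problem size of N = 24 million.
rmax = 1.742e18
n = 24_000_000
print(f"Estimated HPL run time: {hpl_runtime_hours(n, rmax):.1f} hours")
```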
The top three, from the USA
- El Capitan (USA, Lawrence Livermore National Laboratory, Rmax 1.742 exaflops). The most powerful supercomputer in the world. Built on the HPE Cray EX255a architecture with fourth-generation AMD EPYC processors and Instinct MI300A accelerators connected via a Slingshot-11 network. It leads not only the LINPACK benchmark but also the more demanding HPCG benchmark, demonstrating balanced performance on real scientific workloads. El Capitan has held first place since late 2024, establishing exaflops as the standard for American laboratories.
- Frontier (USA, Oak Ridge, Rmax 1.353 exaflops). The first exaflop system in history, and still a point of reference. It uses HPE Cray EX235a cabinets with AMD EPYC processors and Instinct MI250X GPUs, combined via Slingshot-11. It held first place from 2022 to 2024 and has now moved to second. This change in position does not reflect a decline, but rather the pace of progress in the field. Frontier continues to work at the limit of its capabilities in research on energy, materials, biology, and astrophysics.
- Aurora (USA, Argonne National Laboratory, Rmax 1.012 exaflops). The third American exaflop-class system. Built on the HPE Cray EX platform with Intel Xeon CPU Max processors and Intel Data Center GPU Max accelerators, connected via Slingshot-11. After years of gradual rollout, Aurora has secured third place. Its purpose is to combine simulation with the use of artificial intelligence in science: from research into thermonuclear fusion and climate to large-scale experiments with language models.
Italy in sixth and tenth place
- HPC6 (Italy, Eni Green Data Center, Rmax 477.9 petaflops). Italy’s flagship industrial system. Built on HPE Cray EX235a with AMD EPYC processors and Instinct MI250X GPUs over a Slingshot-11 network. Its main workloads are seismic exploration, reservoir modeling, and low-carbon energy research. At the end of 2024 the system briefly became the European leader, but in 2025 it gave way to JUPITER Booster and now sits in sixth place.
- Leonardo (Italy, EuroHPC / CINECA, Rmax 241.2 petaflops). A modular BullSequana XH2000 system with Intel Xeon Platinum 8358 processors and NVIDIA A100 GPUs on an HDR100 InfiniBand network, equipped with efficient liquid cooling. In 2025 it remains in the top ten despite competition from new systems based on Grace Hopper and MI300A, showing that a successful architecture can stay competitive for several consecutive years.
The rest of the top 10
- JUPITER Booster (Germany, EuroHPC / Juelich, Rmax 793.4 petaflops). Europe’s new pride. Built on BullSequana XH3000 with hybrid NVIDIA GH200 Grace Hopper chips and a quad-rail NDR200 InfiniBand network, it is the accelerator module of the modular JUPITER architecture. The system immediately took first place in Europe and established itself among the world elite, underscoring the rapid progress of the EuroHPC program.
- Fugaku (Japan, RIKEN R-CCS, Rmax 442.0 petaflops). A Fujitsu ARM-based system with A64FX processors and a Tofu-D network. The leader of 2020-2021, it currently remains strong in the HPCG benchmark, which makes it particularly effective in workloads with heavy memory and communication demands. Despite slipping down the ranking, its scientific output remains enormous.
- Alps (Switzerland, CSCS, Rmax 434.9 petaflops). One of the most versatile new-generation European systems. Based on HPE Cray EX254n with NVIDIA Grace processors and GH200 GPUs, combined via Slingshot-11. The platform reflects the current trend of combining AI training and physics-based models on the same hardware.
- LUMI (Finland, EuroHPC / CSC, Rmax 379.7 petaflops). A European system based on HPE Cray EX235a with AMD EPYC processors and Instinct MI250X GPUs. It runs on renewable energy and uses heat recovery. From 2022 to 2024 it was Europe’s most important project, but with the arrival of Alps and JUPITER Booster it has dropped to ninth place. It remains a key tool for climatology, materials science, and big data analysis.
The next frontier in the race is the zettaflop level, roughly one thousand exaflops. Japan is already working on the Fugaku Next project, scheduled for launch around 2030, which could shift the global balance of power. However, the full picture is still unclear: China no longer submits data to the TOP500, and the real scale of its systems is unknown. This adds intrigue: who will be first to reach the zettaflop level remains an open question.
The article Supercomputer: l’Italia al sesto e decimo posto nella classifica TOP500 del 2025 originally appeared on il blog della sicurezza informatica.
Underwater warfare is at the gates! Will China’s cable cutters become a global threat?
The new cable cutter, developed by the China Shipbuilding Research Center, is designed for use on advanced submersibles such as the Fengdou and Haidou series. The device can sever armored steel-and-polymer communication cables, which play a fundamental role in maintaining, or disrupting, the global communications network that carries 95% of the world’s data traffic.
The Chinese deep-sea cable cutter can sever cables up to 2.4 inches thick at depths of nearly 4,000 meters. (Source: Sustainability Time)
The device is fitted with a diamond grinding wheel spinning at 1,600 rpm, which allows it to tear through steel cables without damaging the seabed. A titanium alloy body and an oil-compensated damping system let the device withstand the extreme pressures of the deep sea. Controlled by a robotic manipulator with an advanced positioning system, it delivers high precision even in low-visibility conditions.
The device is powered by a 1-kilowatt motor with an 8:1 gear ratio, balancing performance and energy efficiency. Although it is designed for civilian applications such as underwater salvage and deep-sea mining, it raises international security concerns. The ability to cut cables at strategic locations, such as near Guam, could seriously compromise global communications.
China’s development of underwater technology is part of a broader strategy to expand its infrastructure and influence across the world’s oceans. With the world’s largest submersible fleet, China has access to vast areas of the ocean.
The new cable-cutting equipment, which can be operated from hard-to-detect uncrewed platforms, offers a tactical advantage for striking strategically important chokepoints.
Although Chinese scientists state that the device is designed for “the exploitation of marine resources”, its military potential should not be underestimated.
Tests have shown that the device can cut cables up to 6.5 cm thick.
The article La Guerra Subacquea è alle porte! Il tagliacavi cinesi saranno una minaccia globale? originally appeared on il blog della sicurezza informatica.
16 billion credentials stolen from Apple, Meta, and Google on sale for $121,000
The Darklab team, Red Hot Cyber’s community of threat intelligence experts, has identified a listing on the dark web marketplace “Tor Amazon”, the criminal counterpart of the famous clear-web e-commerce giant. The listing offers an unprecedented archive: 16 billion compromised credentials from leading platforms such as Apple, Facebook, Google, Binance, Coinbase, and many others.
The offer, priced at 1 Bitcoin (about $121,000), represents one of the largest and most diverse data collections ever to appear in underground circuits.
Images from the post published on the TOR Amazon underground market (Source: Red Hot Cyber)
Origin and nature of the leak
According to Darklab’s analysis, the package does not come from a single data breach, but from 30 separate collections generated through malware campaigns.
The threat actors reportedly used corrupted files and social engineering techniques to infect victims’ devices, harvesting credentials mainly from users who reused weak passwords or did not enable advanced security measures.
This characteristic makes the dataset particularly interesting from an investigative standpoint, since it reveals not only platform vulnerabilities but also users’ bad habits and the real-world impact of malware on everyday security.
Sample of the data offered for sale on the TOR Amazon underground market (Source: Red Hot Cyber)
Size and geographic distribution
- Volume: the collections range from 16 million to 3.5 billion records each, with an average of roughly 550 million credentials per batch.
- Geographic concentration: the data is particularly dense in Asia and Latin America, regions often more exposed to large-scale breaches due to less resilient digital infrastructure and lower user awareness.
- Platform diversity: the leak spans heterogeneous environments – social networks, email services, financial platforms, and developer portals – offering a cross-cutting view of attack surfaces.
Implications for cybercrime and research
The sale on Tor Amazon reflects the growing threat of criminal marketplaces, which replicate mechanisms typical of legitimate e-commerce: escrow systems for transactions, buyer feedback, and after-sales support.
For cybercriminals, the data is a resource that can be monetized immediately through:
- large-scale phishing campaigns;
- account takeover and financial fraud;
- compromise of crypto wallets and connected services.
For researchers and analysts, on the other hand, the dataset is a valuable source for:
- studying malware distribution patterns;
- understanding the impact of poor digital hygiene;
- outlining historical and economic trends in breaches on a global scale.
Final considerations
The discovery made by Darklab highlights how the criminal ecosystem of the dark web is evolving toward increasingly structured and competitive models.
At the same time, it underscores the need to adopt minimum protective measures – password managers, multi-factor authentication, continuous monitoring for data leaks – which remain the most effective defenses against threats of this scale.
In this scenario, the monitoring and analysis carried out by communities such as Darklab proves crucial in bringing to light phenomena that, if ignored, risk compromising entire digital ecosystems.
The article 16 miliardi di credenziali rubate da Apple, Meta e Google in vendita per 121.000 dollari originally appeared on il blog della sicurezza informatica.
The General Court of the European Union “saves” personal data transfers to the United States. For now
Last April 1 was no April Fool’s joke: the first hearing in Latombe v. Commission set September 3 as the date for a ruling on the action seeking annulment of the adequacy decision underlying the Data Privacy Framework.
An adequacy decision is the legal instrument provided for by Article 45 GDPR through which the Commission recognizes that a third country or an organization guarantees an adequate level of protection, possibly limited to a specific territory or sector, thereby allowing international transfers of personal data without the need for further authorizations or conditions.
In a press release, the General Court of the European Union announced the outcome of the case, dismissing the action and recognizing that the legal framework in force in the United States guarantees a level of protection “essentially equivalent” to that guaranteed by European data protection law.
In short: for now, a “Schrems III” effect has been avoided. But a further level of appeal is still possible.
The precedent of the “Schrems rulings” and noyb’s reaction.
The history of agreements between the United States and the European Union on personal data transfers is particularly troubled: first the Safe Harbor, then the Privacy Shield were annulled following the Schrems I and Schrems II rulings. That is why the new Data Privacy Framework agreement was adopted in 2023.
The crucial issues raised in Latombe’s action concern the effectiveness of judicial protection and the lawfulness of the collection and use of data by U.S. intelligence agencies.
The judgment reviews the strengthening of fundamental guarantees implemented by the United States in the field of personal data processing through the establishment and operation of the Data Protection Review Court (DPRC), the court responsible for overseeing data protection. The independence of the DPRC, contested in the action, was confirmed on the finding that it is ensured by presidential executive orders. As for the bulk collection of personal data by intelligence agencies, the Court held that the judicial protection offered by the DPRC is equivalent to that guaranteed under EU law.
The reaction of noyb, the digital rights NGO founded by Max Schrems, contests the Court’s decision and suggests that, with different grounds of challenge or an appeal against the judgment before the CJEU, the outcome would be quite different. One of the critical points concerns independence, which is guaranteed by a presidential order rather than by law and which, in the current context of the Trump administration, can hardly be considered a sufficient safeguard.
The uncertain future of the Data Privacy Framework.
The fate of the Data Privacy Framework remains uncertain. This can also be inferred from one of the passages of the judgment, which recalls that the Commission must assess any changes in the legal context over time.
In short: there is no “forever and always”. The adequacy finding may be re-examined and then suspended, amended, or revoked in whole or in part by the Commission itself, or annulled as a result of an intervention by the CJEU.
Should the Data Privacy Framework fall, it would still be possible to rely on appropriate safeguards or to obtain a specific authorization from a supervisory authority, as provided for by Article 46 GDPR and as has happened in the past.
All of this is déjà vu. But rather than a glitch in the Matrix, it expresses the inevitable interference of international politics with the stability of the rules, making the opposing claims of regulatory overreach by both the United States and the European Union evident and recurring.
The article Il Tribunale dell’Unione Europea “salva” il trasferimento dei dati personali verso gli Stati Uniti. Per ora originally appeared on il blog della sicurezza informatica.
Powering a Submarine with Rubber Bands
A look underneath the water’s surface can be fun and informative! However, making a device to go under the surface poses challenges with communication and waterproofing. That’s what this rubber band powered submarine by [PeterSripol] attempts to fix!
The greatest challenge of building such a submersible was the active depth control system. The submarine is slightly positively buoyant, so that once the band power runs out, it returns to the surface. Diving is controlled by pitch fins, which pitch downward under the torque applied by the rubber bands. Once the rubber band power runs out, elastic returns the fins to their natural pitch-up position, encouraging the submarine to surface. However, this results in uncontrolled dives and risks loss of the submersible.
Therefore, a float was added to deflect the fins when a certain depth was reached. Yet this proved ineffective, so a final solution of electronic depth control was implemented. While this may not be in the spirit of a rubber band powered submarine, it is technically still rubber band powered.
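The write-up doesn’t detail the electronics, but the usual recipe for this kind of depth hold is a small microcontroller reading a pressure sensor and nudging the dive fins. The Python sketch below is purely illustrative of that control idea; the sensor, servo, set-point, and gain are all assumptions, not details taken from [PeterSripol]’s build.

```python
import time

# --- Hypothetical hardware stubs (the real sensor and servo are unknown) ---
def read_depth_m():
    """Return the current depth in metres from a pressure sensor (stubbed here)."""
    return 0.5  # placeholder reading

def set_fin_angle(degrees):
    """Command the dive-fin servo; positive pitches the nose down (stubbed here)."""
    print(f"fin angle -> {degrees:+.1f} deg")

# --- Simple proportional depth hold ---
TARGET_DEPTH_M = 0.75   # assumed set-point
KP = 40.0               # assumed gain: degrees of fin per metre of depth error
FIN_LIMIT_DEG = 15.0    # assumed mechanical limit of the fins

def depth_hold_step():
    error = TARGET_DEPTH_M - read_depth_m()   # positive error: we are too shallow
    command = max(-FIN_LIMIT_DEG, min(FIN_LIMIT_DEG, KP * error))
    set_fin_angle(command)

if __name__ == "__main__":
    for _ in range(3):   # a few iterations of the control loop
        depth_hold_step()
        time.sleep(0.1)
```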
After a prototype with a single rubber band holder, a second version was built which uses a gearbox and three rubber band inputs, providing approximately 10 minutes of run time. An electronics failure meant the submarine failed its final test in the wild, but the project was nonetheless a fun look at powering a submersible with elastic.
This is not the first time we have looked at strange rubber band powered vehicles. Make sure to check out this rubber band powered airplane next!
youtube.com/embed/CDs-k-l0ZPo?…
TFINER is an Atompunk Solar Sail Lookalike
It’s not every day we hear of a new space propulsion method. Even rarer to hear of one that actually seems halfway practical. Yet that’s what we have in the case of TFINER, a proposal by [James A. Bickford] we found summarized on Centauri Dreams by [Paul Gilster].
TFINER stands for Thin Film Isotope Nuclear Engine Rocket, and it’s a hoot. The word “rocket” is in the name, so you know there’s got to be some reaction mass, but this thing looks more like a solar sail. The secret is that the “sail” is the rocket: as the name implies, it hosts a thin film of nuclear material whose decay products provide the reaction mass. (In the Phase I study for NASA’s Innovative Advanced Concepts office (NIAC), it’s alpha particles from Thorium-228 or Radium-228.) Alpha particles go pretty quick (about 5% c for these isotopes), so the Isp on this thing is amazing: 1.81 million seconds!
Figure 3-1 from Bickford’s Phase I report shows the basic idea.
Now you might be thinking, “nuclear decay is isotropic! The sail will thrust equally in both directions and go nowhere!” – which would be true, if the sail was made of Thorium or Radium. It’s not; the radioisotope is a 9.5 um thin film on a 35 um beryllium back-plane that’s going to absorb any alpha emissions going the wrong way around. 9.5 um is thin enough that most of the alphas from the initial isotope and its decay products (let’s not forget that most of this decay chain are alpha emitters: 5 in total for both Th and Ra) aimed roughly normal to the surface will make it out.
Since the payload is behind the sail, it’s going to need a touch of shielding or rather long shrouds; the reference design calls for 400 m cables. Playing out or reeling in the cables would allow for some degree of thrust-vectoring, but this thing isn’t going to turn on a dime.
It’s also not going to have oodles of thrust, but the small thrust it does produce is continuous, and will add up to large deltaV over time. After a few years, the thrust is going to fall off (the half-life of Th-228 is 1.91 years, or 5.74 for Ra-228; either way the decay products are too short-lived to matter) but [Bickford]’s paper gives terminal/cruising velocity in either case of ~100 km/s.
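Those headline numbers hang together with simple rocketry arithmetic: specific impulse is just effective exhaust velocity divided by g0, and the Tsiolkovsky equation says a ~100 km/s delta-V needs only a sliver of the craft’s mass as expelled alphas. The Python sketch below reproduces the orders of magnitude; it is an idealized check using the figures quoted above, not numbers from [Bickford]’s paper, and it ignores the alphas that are absorbed or leave off-axis.

```python
import math

C = 299_792_458.0   # speed of light, m/s
G0 = 9.80665        # standard gravity, m/s^2

alpha_speed = 0.05 * C               # ~5% c, as quoted for the alpha particles
isp_from_alpha = alpha_speed / G0    # Isp if every alpha left perfectly collimated
print(f"Isp from 5% c alphas alone: {isp_from_alpha:,.0f} s")

# The quoted Isp of 1.81 million seconds implies an effective exhaust velocity of:
quoted_isp = 1.81e6
v_eff = quoted_isp * G0
print(f"Effective exhaust velocity: {v_eff / 1e3:,.0f} km/s")

# Idealized Tsiolkovsky estimate of the expelled mass fraction for ~100 km/s delta-V.
delta_v = 100e3
mass_fraction_expelled = 1 - math.exp(-delta_v / v_eff)
print(f"Expelled mass fraction for 100 km/s: {mass_fraction_expelled * 100:.2f}%")
```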
Sure, that’s not fast enough to be convenient to measure as a fraction of the speed of light, and maybe it’s not great for a quick trip to Alpha Centauri, but that’s plenty fast enough to reach the furthest corners of our solar system. For a flyby, anyway: like a solid-fueled rocket, once your burn is done, it’s done. Stopping isn’t really on offer here. The proposal references extra-solar comets like Oumuamua as potential flyby targets. That, and the focus of the Sun’s gravitational lens effect. Said focus is fortunately not a point, but a line, so no worries about a “blink and you miss it” fast-flyby. You can imagine we love both of those ideas.
NASA must have too, since NIAC was interested enough to advance this concept to a Phase II study. As reported at Centauri Dreams, the Phase II study will involve some actual hardware, albeit a ~1 square centimeter demonstrator rather than anything that will fly. We look forward to it. Future work also apparently includes the idea of combining the TFINER concept with an actual solar sail to get maximum possible delta-V from an Oberth-effect sundive. We really look forward to that one.
Repairing a Tektronix 577 Curve Tracer
Over on his YouTube channel our hacker [Jerry Walker] repairs a Tektronix 577 curve tracer.
A curve tracer is a piece of equipment which plots I-V (current vs voltage) curves, among other things. This old bit of Tektronix kit is rocking a CRT, which dates it. According to TekWiki the Tektronix 577 was introduced in 1972.
In this repair video [Jerry] goes to use his Tektronix 577 only to discover that it is nonfunctional. He begins his investigation by popping off the back cover and checking the voltages on the voltage rails. His investigations suggest a short circuit, so he presses on, which means removing the side panel to follow a lead into the guts of the machine.
Then, in order to find the shorted component he suspects exists, [Jerry] breaks out the old thermal cam. And the thermal cam leads to the fault: a shorted tantalum capacitor, just as he suspected to begin with! After replacing the shorted tantalum capacitor, this old workhorse is like new.
There are probably quite a number of repair lessons in this video, but we think that an important takeaway is just how useful a thermal camera can be when it comes time for fault finding. If you’re interested in electronics repair, a thermal cam is a good trick to have up your sleeve; it excels at finding short circuits.
If you’re interested in repairing old Tektronix gear be sure to check out Repairing An Old Tektronix TDS8000 Scope.
youtube.com/embed/SI-DL0fU0Pc?…
Tips for Homebrewing Inductors
How hard can it be to create your own inductors? Get a wire. Coil it up. Right? Well, the devil is definitely in the details, and [Nick] wants to share his ten tips for building “the perfect” inductor. We don’t know about perfect, but we do think he brings up some very good points. Check out his video below.
Whether you are winding wire around your finger (or, as it appears in the video, a fork) or using a beefy ferrite core, you’ll find something interesting here.
Of course, the issue with inductors is that wires aren’t perfect, nor are core materials. Factors like this lead to inefficiency and loss, sometimes in a frequency-dependent way.
It looks like [Nick] is building a large switching power supply, so the subject inductor is a handful. He demonstrates some useful computational tools for analyzing data about cores, for example.
We learned a lot watching the tricks, but we were more interested in the inductor’s construction. We have to admit that the computed inductance of the coil matched the measured value quite closely.
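For reference, a computed inductance of this sort typically comes from the textbook closed-core formula L = μ0·μr·N²·A / l, or equivalently the core’s AL value. A quick, hypothetical Python version of that estimate; the turn count, permeability, core area, and path length below are made-up example values, not [Nick]’s actual part.

```python
import math

MU0 = 4 * math.pi * 1e-7   # permeability of free space, H/m

def inductance_henries(turns, relative_permeability, core_area_m2, path_length_m):
    """Textbook estimate for a coil on a closed core (ignores fringing and saturation)."""
    return MU0 * relative_permeability * turns**2 * core_area_m2 / path_length_m

# Made-up example: 40 turns on a ferrite core, ur = 2000, 1 cm^2 area, 8 cm magnetic path.
L = inductance_henries(turns=40, relative_permeability=2000,
                       core_area_m2=1e-4, path_length_m=0.08)
print(f"Estimated inductance: {L * 1e6:.0f} uH")
```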
Need a variable inductor? No problem. Before ferrite cores, good coils were a lot harder to wind.
youtube.com/embed/PEme07iCH-s?…
Designing an Open Source Micro-Manipulator
When you think about highly-precise actuators, stepper motors probably aren’t the first device that comes to mind. However, as [Diffraction Limited]’s sub-micron capable micro-manipulator shows, they can reach extremely fine precision when paired with external feedback.
The micro-manipulator is made of a mobile platform supported by three pairs of parallel linkages, each linkage actuated by a crank mounted on a stepper motor. Rather than attaching to the structure with the more common flexures, these linkages swivel on ball joints. To minimize the effects of friction, the linkage bars are very long compared to the balls, and the wide range of allowed angles lets the manipulator’s stage move 23 mm in each direction.
To have precision as well as range, the stepper motors needed closed-loop control, which a magnetic rotary encoder provides. The encoder can divide a single rotation of a magnet into 100,000 steps, but this wasn’t enough for [Diffraction Limited]; to increase its resolution, he attached an array of alternating-polarity magnets to the rotor and positioned the magnetic encoder near these. As the rotor turns, the encoder’s local magnetic field rotates rapidly, creating a kind of magnetic gear.
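The payoff of that magnetic-gear arrangement is easy to quantify: if the encoder resolves 100,000 steps per field rotation and the magnet array makes the field rotate several times per shaft revolution, the effective counts per revolution multiply accordingly. A small Python arithmetic sketch; the pole-pair count and crank radius below are illustrative assumptions, not values measured from the build.

```python
import math

ENCODER_STEPS_PER_FIELD_REV = 100_000  # quoted resolution of the magnetic encoder
POLE_PAIRS_ON_ROTOR = 8                # assumed number of alternating magnet pairs
CRANK_RADIUS_MM = 10.0                 # assumed crank radius driving the linkage

effective_steps = ENCODER_STEPS_PER_FIELD_REV * POLE_PAIRS_ON_ROTOR

# Worst-case linear motion per encoder step at the crank pin; the stage moves less
# still, because the linkage geometry de-amplifies the motion further.
angle_per_step_rad = 2 * math.pi / effective_steps
crank_pin_motion_nm = CRANK_RADIUS_MM * 1e6 * angle_per_step_rad

print(f"Effective steps per shaft revolution: {effective_steps:,}")
print(f"Crank-pin motion per encoder step: ~{crank_pin_motion_nm:.0f} nm")
```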
A Raspberry Pi Pico 2 and three motor drivers control this creation; even here, the attention to detail is impressive. The motor drivers couldn’t have internal charge pumps or clocked logic units, since these introduce tiny timing errors and motion jitter. The carrier circuit board is double-sided and uses through-hole components for ease of replication; in a nice touch, the lower silkscreen displays pin numbers.
To test the manipulator’s capabilities, [Diffraction Limited] used it to position a chip die under a microscope. To test its accuracy and repeatability, he traced the path a slicer generated for the first layer of a Benchy, vastly scaled-down, with the manipulator. When run slowly to reduce thermal drift, it could trace a Benchy within a 20-micrometer square, and had a resolution of about 50 nanometers.
He’s already used the micro-manipulator to couple an optical fiber with a laser, but [Diffraction Limited] has some other uses in mind, including maskless lithography (perhaps putting the stepper in “wafer stepper”), electrochemical 3D printing, focus stacking, and micromachining. For another promising take on small-scale manufacturing, check out the RepRapMicron.
youtube.com/embed/MgQbPdiuUTw?…
Thanks to [Nik282000] for the tip!
Returning To An Obsolete Home Movie Format
A few years ago, I bought an 8 mm home movie camera in a second hand store. I did a teardown on it here and pulled out for your pleasure those parts of it which I considered interesting. My vague plan was to put a Raspberry Pi in it, but instead it provided a gateway into the world of 8mm film technology. Since then I’ve recreated its Single 8 cartridge as a 3D printable model, produced a digital Super 8 cartridge, and had a movie camera with me at summer hacker camps.
When I tore down that Single 8 camera though, I don’t feel I did the subject justice. I concentrated on the lens, light metering, and viewfinder parts of the system, and didn’t bring you the shutter and film advance mechanism. That camera also lacked a couple of common 8 mm camera features; its light metering wasn’t through the lens, and its zoom lens was entirely manual. It’s time to dig out another 8 mm camera for a further teardown.
A Different Camera To Tear Down
The camera with a Super 8 cartridge inserted.
My test camera is a battered and scuffed Minolta XL-250 that I found in a second hand store for not a lot. It takes Super 8 cartridges, of which I have an expired Kodachrome example for the pictures, and it has the advantage of an extremely well-thought-out design that makes dismantling it very easy. So out it comes to be laid bare for Hackaday.
Once the sides have come off the camera, immediately you can see a set of very early-70s-analogue PCBs containing the light metering circuitry. Typically this would involve a CdS cell and a simple transistor circuit, and the aperture is controlled via a moving coil meter mechanism. This camera also has a large mostly-unpopulated PCB, giving a clue to some of the higher-end features found on its more expensive sibling.
Turning our attention inside the camera to the film gate, we can see the casting that the film cartridge engages with, and the frame opening for the shutter. To the left of that opening is a metal claw that engages with the sprocket holes in the film, thus providing the primary film advancement. The metal claw is attached to a slider on the back of the film gate, which in turn is operated by the rotation of the shutter, which is the next object of our attention.
The shutter is a disc that spins at the frame rate, in this case 18 frames per second. It sits in the light path between the back of the lens system and the film gate. It has a segment cut out of the disc to let light through for part of the rotation; this is how it operates as a shutter. On its reverse is the cam which operates the slider for the film advancement claw, while its front is mirrored. This forms part of the through-the-lens light metering system, which we’ll come to next.
The mirror on the front of the shutter is angled, which means that when the shutter is closed, the light is instead reflected upwards at right angles into a prism, which in turn directs the light to the light meter cell. The PCB on the other side must have a charge pump which takes this 18 Hz interrupted analogue signal and turns it into a DC level to drive the moving coil mechanism. There’s a 10 uF capacitor which may be part of this circuit.
Finally, we come to the powered zoom feature that was missing from the previous camera. On the top of the camera is a W/T (Wide/Telephoto) rocker that operates the zoom. It is connected to a set of levers inside the case, which emerge as a pin at the front of the camera below the lens. This engages with a small gearbox that drives a knurled ring on the lens body, and selects forward and reverse to turn the ring. It’s driven by the same motor as the shutter, so it only works when the camera is operating.
I hope this look at my Minolta has filled in some of the gaps left by the previous article, and maybe revealed that there’s more than meets the eye when it comes to 8 mm movies. Careful though. If you dip a toe into this particular puddle it may suck you in head first!
Optimizing VLF Antennas
Using digital techniques has caused a resurgence of interest in VLF — very low frequency — radio. Thanks to software-defined radio, you no longer need huge coils. However, you still need a suitable antenna. [Electronics Unmessed] has been experimenting and asks the question: What really matters when it comes to VLF loops? The answer he found is in the video below.
This isn’t the first video about the topic he’s made, but it covers new ground about what changes make the most impact on received signals. You can see via graphs how everything changes performance. There are several parameters varied, including different types of ferrite, various numbers of loops in the antenna, and wire diameter. Don’t miss the comment section, either, where some viewers have suggested other parameters that might warrant experimentation.
Don’t miss the 9-foot square antenna loop in the video. We’d like to see it suspended in the air. Probably not a good way to ingratiate yourself with your neighbors, though.
Between software-defined radio and robust computer simulation, there’s never been a better time to experiment with antennas and radios. We first saw these antennas in an earlier post. VLF sure is easier than it used to be.
youtube.com/embed/S7nQ2fnaA3Y?…
From the Commodore 64 to GitHub! Gates and Allen's BASIC goes open source after 48 years
Microsoft has officially released the source code of its first version of BASIC for the MOS 6502 processor, which for decades existed only in the form of leaks, museum copies, and unofficial builds. Now, for the first time, it is published under a free license and available for study and modification.
The first version of Microsoft BASIC appeared in 1975 for the Altair 8800 microcomputer, based on the Intel 8080 processor. It was written by the company's founders, Bill Gates and Paul Allen. A year later Gates, together with Microsoft's second employee, Rick Weiland, ported BASIC to the MOS 6502 processor. In 1977 Commodore bought a license for $25,000 and built it into its PET, VIC-20, and Commodore 64 systems.
The latter two computers sold millions of units and became one of the decisive factors in the mass adoption of computing.
The version released is 1.1, which incorporates the garbage collector improvements proposed in 1978 by Commodore engineer John Feagans and by Gates himself. On PET machines this version was known as BASIC V2. The code, 6,955 lines of assembler, is published on GitHub under the MIT license, which allows unrestricted use and resale.
Microsoft has supplied the repository with historical notes and commits carrying timestamps of "48 years ago". The source code implements conditional compilation for several platforms of the era: Apple II, Commodore PET, Ohio Scientific, and KIM-1.
Features include a full set of BASIC statements, array support, string handling, floating-point arithmetic, I/O, string garbage collection, and dynamic variable storage.
Particular emphasis is placed on efficient memory use, essential on 8-bit systems. The code also contains Bill Gates' Easter eggs, hidden in the STORDO and STORD0 labels and confirmed by Gates himself in 2010.
The MOS 6502, for which the interpreter was created, became an industry legend.
It was the foundation of the Apple II, of the Atari 2600 and NES game consoles, and of an entire line of Commodore home computers. The simplicity and efficiency of its architecture made it popular with manufacturers and shaped the emerging personal computer market. Interest in the 6502 shows no sign of fading today: enthusiasts build FPGA replicas, develop emulators, and are even preparing a new "official" reissue of the Commodore 64 based on programmable logic.
Microsoft stresses that it was BASIC that made the company a significant player in the market, long before the arrival of MS-DOS and Windows. Mass licensing of this interpreter to various manufacturers became the foundation of Microsoft's business model in its early years.
From 1977 to today, BASIC lives on: from the blinking cursor on a Commodore screen to the FPGA versions of 2025. Today the historic code is not only preserved but handed over to the community for study, adaptation, and new experiments, free of charge.
The article From the Commodore 64 to GitHub! Gates and Allen's BASIC goes open source after 48 years comes from il blog della sicurezza informatica.
Bootstrapping Android Development: a Survival Guide
Developing Android applications seems like it should be fairly straightforward if you believe the glossy marketing by Google and others. It’s certainly possible to just follow the well-trodden path, use existing templates and example code – or even use one of those WYSIWYG app generators – to create something passable that should work okay for a range of common applications. That’s a far cry from learning general Android development, of course.
The process has changed somewhat over the years, especially with the big move from the Eclipse-based IDE with the Android Development Tools (ADT) plugin, to today’s Jetbrains IntelliJ IDEA-based Android Studio. It’s fortunately still possible to download just the command-line tools to obtain the SDK components without needing the Google-blessed IDE. Using the CLI tools it’s not only possible to use your preferred code editor, but also integrate with IDEs that provide an alternate Android development path, such as Qt with its Qt Creator IDE.
Picking Poison
Both Qt Creator and ADT/Android Studio offer a WYSIWYG experience for GUI design, though the former’s design tools are incomparably better. Much of this appears to be due to how Qt Creator’s GUI design tools follow the standard desktop GUI paradigms, with standard elements and constraint patterns. After over a decade of having wrangled the – also XML-based – UI files and WYSIWYG design tools in ADT/Android Studio, it never ceases to amaze how simple things like placing UI elements and adding constraints love to explode on you.
The intuitive Android Studio WYSIWYG experience.
Somewhat recently the original Android API layouts also got ditched in favor of the ‘refactored’ AndroidX API layouts, with Jetpack Compose now apparently being the preferred (high-level) way to build UIs. Over the years that I have developed for Android, many APIs and tools have been introduced, deprecated and removed at an increasingly rapid pace, to the point where having Android Studio or the CLI tools not freak out when confronted with a one-year-old project is a pleasant surprise.
Designing GUIs in Qt Creator’s Designer mode.
Although Qt isn’t the only alternative to the Android Studio experience, it serves to highlight the major differences encountered when approaching Android development. In fact, Qt for Android offers a few options, including building a desktop Qt application for Android, which can also use the Qt Quick elements, or including Qt Quick within your existing Android application. For Qt Quick you want to either create the UIs by hand or use Qt Quick Designer, though I have so far mostly just stuck with Qt Creator and liberally applied stylesheets to make the UI fit the target Android UI.
Whichever way you choose, it’s important to know your requirements and take some time to work through a few test projects before investing a lot of time in a single approach.
The Build System
No matter what approach you choose, the build system for Android is based on what is objectively one of the worst build automation tools conceivable, in the form of Gradle. Not only does it take ages to even start doing anything, it’s also agonizingly slow, insists on repeating tasks that should already have been completed previously, provides few ways to interact or get more information without getting absolutely swamped in useless verbosity, and loves to fail silently if you get just the wrong Gradle version installed in your Android project.
Did I mention yet that the entire Gradle tool is a permanent fixture of your Android project? Android Studio will want to upgrade it almost every time you open the project, and if you don’t use an IDE like it which automates Gradle upgrades, you better learn how to do it manually. Don’t forget to install the right Java Development Kit (JDK) either, or Android Studio, Gradle or both will get very upset.
If your IDE doesn’t pave over many of these inane issues, then getting familiar with the Gradle wrapper CLI commands belongs right at the top of your list, as you will need them. Fortunately, sticking to an IDE here tends to avoid the biggest pitfalls, although each build session still leaves you enough time to fetch a coffee and go on a brisk walk before returning to address the next build failure.
There are no real solutions here, just a call for perseverance and documenting solutions that worked previously, because you will always encounter the same errors again some day.
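For reference, these are the handful of Gradle wrapper invocations that cover most day-to-day needs. They are standard Gradle tasks and flags rather than anything project-specific:

```
./gradlew assembleDebug        # build a debug APK
./gradlew installDebug         # build and install on the connected device or AVD
./gradlew clean                # wipe build outputs when things get weird
./gradlew build --stacktrace   # full build, with a stack trace when it fails
```

Adding --info or --debug cranks up the verbosity when you really do need to see what Gradle thinks it is doing.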
Test, Debug And Deploy
Creating a new virtual Android device.
Even if you have built that shiny APK or app bundle, there’s a very high likelihood that there will be issues while running it. Fortunately the one advantage of JVM-based environments is that you get blasted with details when something violently explodes. Of course, that is unless someone screwed up exception handling in the code and your backtrace explodes somewhere in thin air instead. For the same reason using a debugger is pretty easy too, especially if you are using an IDE like Android Studio or Qt Creator that provides easy debugger access.
Logging in Android tends to be rather verbose, with the LogCat functionality providing you with a veritable flood of logging messages, most of which you want to filter out. Using the filter function of your IDE of choice is basically essential here. Usually when I do Android application debugging, I am either already running Qt Creator where I can start up a debug session, or I can fire up Android Studio and do the same here as at its core it’s the same Gradle-based project.
The NymphCast Player Android build, with default skin.
Of course, in order to have something catch on fire you first need to run the application, which is where you get two options: run on real hardware or use an emulator. Real hardware is easier in some ways, as unlike an emulated Android Virtual Device (AVD) your application can directly access the network and internet, whereas an AVD instance requires you to wrangle with network redirects each session.
On the other hand, using an AVD can be handy as it allows you to create devices with a wide range of screen resolutions, so it can be quite nifty for testing applications that do not require you to connect to external services via the network. If you want to know, for example, how well your UI scales across screen sizes, and how it looks on something like a tablet, then using an AVD is a pretty good option.
Some hardware devices are also quite annoyingly locked-down, such as Xiaomi phones that at least for a while have refused to allow you to toggle on remote debugging via USB unless you install a SIM card. Fortunately this could be circumvented by clicking through an alternate path that the Xiaomi developers had not locked down, but these are just some of the obnoxious hurdles that you may encounter with real hardware.
With that out of the way, deploying to an AVD or real device is basically the same: either use the ‘Start’ or similar function in your IDE of choice with the target device selected, or do so via the command line, either with ADB or via the Gradle wrapper with ./gradlew installDebug or equivalent.
This will of course be a debug build, with creating a release build also being an option, but this will not be signed by default. Signing an APK requires a whole other procedure, and is honestly something that’s best done via the friendly-ish dialogs in an IDE, rather than by burning a lot of time and sanity on the command-line. Whether you want to sign the APK or app bundle depends mostly on your needs/wants and what a potential app store demands.
Ever since Google began to demand that all developers – including Open Source hobbyists – send in a scan of their government ID and full address, I have resorted to just distributing the Android builds of my NymphCast player and server via GitHub, from where they can be sideloaded.
This NymphCast server is incidentally the topic of the next installment in this mini-series, in which we will be doing a deep dive into native Android development using the NDK.
New MintsLoader Campaign: New Phishing Attacks via PEC Are Underway
After a long summer break, yesterday CERT-AgID published a new advisory about a new MintsLoader campaign, the first since the one recorded last June.
Compared with previous waves, the email template is only slightly modified and keeps the already familiar scheme. In particular:
- instead of the usual hyperlink on the word "Fattura" (invoice), the email now carries a ZIP file attachment;
- inside the compressed archive is an obfuscated JavaScript file that starts the compromise chain.
Abuse of PEC mailboxes
As already observed in previous waves, this campaign keeps the same distribution pattern: the messages are sent from compromised PEC (certified email) mailboxes to the recipients' PEC mailboxes, exploiting the certified channel to increase the credibility and effectiveness of the attack.
The final goal is to compromise the victims' systems, in particular Windows machines (version 10 onward), where the availability of the cURL command is abused to start the infection chain and install malware, generally belonging to the infostealer category.
Timing strategy
It is also interesting to note that the same timing strategy described in the previous CERT-AGID advisory has been maintained, with activity resuming to coincide with the return to work after the summer break.
Recommendations
Users are advised to be wary of suspicious email messages, in particular those with subjects referring to overdue invoices and containing ZIP attachments, and in any case to avoid extracting the archives or interacting with the files inside them. When in doubt, suspicious emails can always be forwarded to the malware@cert-agid.gov.it mailbox.
Countermeasures
Countermeasures have already been put in place with the support of the PEC providers. The IoCs related to the campaign have been distributed through the CERT-AGID IoC feed to the PEC providers and to accredited organizations.
The article New MintsLoader Campaign: New Phishing Attacks via PEC Are Underway comes from il blog della sicurezza informatica.
Microsoft BASIC For 6502 Is Now Open Source
An overriding memory for those who used 8-bit machines back in the day was of using BASIC to program them. Without a disk-based operating system as we would know it today, these systems invariably booted into a BASIC interpreter. In the 1970s the foremost supplier of BASIC interpreters was Microsoft, whose BASIC could be found in Commodore and Apple products among many others. Now we can all legally join in the fun, because the software giant has made version 1.1 of Microsoft BASIC for the 6502 open source under an MIT licence.
This version comes from mid-1978, and supports the Commodore PET as well as the KIM-1 and early Apple models. It won’t be the same as the extended versions found in later home computers such as the Commodore 64, but it still provides plenty of opportunities for retrocomputer enthusiasts to experiment. It’s also not entirely new to the community, because it’s a version that has been doing the rounds unofficially for a long time, but now with any licensing worries cleared up. A neat touch can be found in the GitHub repository, with the dates on the files being 48 years ago.
We look forward to seeing what the community does with this new opportunity, and given that the 50-year-old 6502 is very much still with us we expect some real-hardware projects. Meanwhile this isn’t the first time Microsoft has surprised us with an old product.
Header image: Michael Holley, Public domain.
Netshacker: Retrogaming and Real Hacking on the Commodore 64
In the landscape of Commodore 64 games, Netshacker stands out as a project that defies the conventions of modern gaming, taking players back to the roots of 1980s home computing. It is not a simple nostalgic homage, but a small yet remarkable, authentic and credible hacking experience, developed with the technical precision of an engineer and, at the same time, the passion of a retro gamer.
A Revolutionary Concept for the C64
Netshacker is not a game that "pretends" to be retro: it is a product born from an old-school mindset but built with the care and precision of a modern project.
The goal is clear: to recreate the authentic experience of a 1990s hacker, when networks were still uncharted territory and every command could reveal new digital horizons.
The game presents itself as a complete operating system for the C64, with two distinct environments, one Linux-style and one DOS-style, each with its own quirks and specific commands.
A Shell that Breathes
The heart of Netshacker is its interactive shell, a command-line interface that does not merely simulate commands but actually implements them: every user input has logical, consistent consequences. Files physically exist in the C64's memory, permissions are managed by realistic rules, and errors provide sensible feedback that guides the player toward the solution. There are no shortcuts or bugs to exploit: progression relies entirely on ingenuity and an understanding of the systems.
Missions that Reward Creativity
Netshacker's mission system is designed to reward creativity and logical deduction. Each objective requires a deep understanding of the available tools and of the underlying logic. Missions range from port scanning to social engineering, from handling protected files to forensic analysis. The game does not hand out solutions; it lets the player discover the paths through exploration and experimentation.
Period Communication Tools
One of Netshacker's most fascinating features is its communication system, inspired by the BBSes and underground networks of the 1990s.
Atmosphere and Immersion
Netshacker's atmosphere is meticulously crafted to recreate the authentic experience of a 1990s hacker. The C64's colors are used strategically to differentiate the various kinds of output, SID sounds create an appropriate soundscape, and the interface remains visually faithful to the systems of the era. Every detail, from error messages to command prompts, has been designed to preserve immersion without compromising playability.
Compatibility and Accessibility
Netshacker is designed to run both on real hardware and on emulators, guaranteeing an authentic experience regardless of the platform. The game is localized in Italian, removing language barriers and making the experience more accessible to Italian players.
The game is distributed in two forms: a digital version in .d64 format priced at 8 euros, and a physical edition with floppy disk and printed manual at 69 euros.
A Project that Respects History
Netshacker is not just a game: it is a small yet great tribute to the hacker culture of the 1990s, an opportunity for today's players to experience the challenges and satisfactions of an era when computing was still uncharted territory.
The project demonstrates that complexity and depth are not incompatible with the C64's hardware limitations, and that creativity can overcome apparent technical constraints.
Conclusions
Netshacker demonstrates that it is possible to create an authentic and engaging hacking experience without compromising on quality or historical fidelity.
The project challenges players to think like real hackers, using real tools and deductive logic to overcome its challenges.
It is not a game for everyone, but for those who appreciate technical depth and historical authenticity, Netshacker offers an unusual and compelling experience. It is not just a game: it is a journey through time, an opportunity to rediscover the roots of hacking and information security through the lens of a platform that made home computing history.
For demanding retrogamers and information security enthusiasts, it is a must-have that combines nostalgia, intellectual challenge, and technical authenticity in a unique, unrepeatable package.
You should absolutely go download the demo from netshacker.com/ and then buy the full version. For anyone who played System 15000 on the C64, or who simply loves a challenge, it is an opportunity not to be missed.
Kudos to the author, Stefano Basile.
The article Netshacker: Retrogaming and Real Hacking on the Commodore 64 comes from il blog della sicurezza informatica.
CPU Utilization Not as Easy as It Sounds
If you ever develop an embedded system in a corporate environment, someone will probably tell you that you can only use 80% of the CPU or some other made-up number. The theory is that you will need some overhead for expansion. While that might have been a reasonable thing to do when CPUs and operating systems were very simple, those days are long gone. [Brendan Long] explains at least one problem with the idea in some recent tests he did related to server utilization.
[Brendan] recognizes that a modern CPU doesn’t actually scale like you would think. When lightly loaded, a modern CPU might run faster because it can keep other CPUs in the package slower and cooler. Increase the load, and more CPUs may get involved, but they will probably run slower. Beyond that, a newfangled processor often has fewer full CPUs than you expect. The test machine was a 24-core AMD processor. However, there are really 12 complete CPUs that can fast switch between two contexts. You have 24 threads that you can use, but only 12 at a time. So that skews the results, too.
Of course, our favorite problem is even more subtle. A modern OS will use whatever resources would otherwise go to waste. Even at 100% load, your program may work, but very slowly. So assume the boss wants you to do something every five seconds. You run the program. Suppose it is using 80% of the CPU and 90% of the memory. The program can execute its task every 4.6 seconds. So what? It may be that the OS is giving you that much because it would otherwise be idle. If you had 50% of the CPU and 70% of the memory, you might still be able to work in 4.7 seconds.
A better method is to have a low-priority task consume the resources you are not allowed to use, run the program, and verify that it still meets the required time. That solves a lot of [Brendan’s] observations, too. What you can’t do is scale the measurement linearly for all these reasons and probably others.
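As a rough illustration of that idea (our own sketch, not code from [Brendan]’s write-up), you can occupy the headroom you are not allowed to use with low-priority burner processes and then time the real workload against its deadline. The 20% reservation and five-second deadline are example numbers, and niceness only approximates a hard reservation; a stricter setup would use CPU affinity or cgroups.

```python
# Soak up "reserved" CPU headroom with low-priority busy loops, then time the
# real workload against its deadline. Illustrative sketch only.
import multiprocessing as mp
import os
import time

def burner():
    os.nice(19)          # POSIX only: lowest priority, eats otherwise-idle cycles
    while True:
        pass

def measure(workload, deadline_s, reserved_fraction=0.2):
    n_burners = max(1, int(os.cpu_count() * reserved_fraction))
    procs = [mp.Process(target=burner, daemon=True) for _ in range(n_burners)]
    for p in procs:
        p.start()
    try:
        start = time.perf_counter()
        workload()                        # the task that must finish on time
        elapsed = time.perf_counter() - start
    finally:
        for p in procs:
            p.terminate()
    return elapsed, elapsed <= deadline_s

if __name__ == "__main__":
    elapsed, ok = measure(lambda: sum(i * i for i in range(10_000_000)), deadline_s=5.0)
    print(f"{elapsed:.2f} s, meets deadline: {ok}")
```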
Not every project needs to worry about performance. But if you do, measuring and predicting it isn’t as straightforward as you might think. If you are interested in displaying your current stats, may we suggest analog? You have choices.
Hexstrike-AI unleashes chaos! Zero-days exploited in record time
The release of Hexstrike-AI marks a turning point in the cybersecurity landscape. The framework, presented as a next-generation tool for red teams and researchers, can orchestrate more than 150 specialized AI agents capable of autonomously carrying out scanning, exploitation, and persistence against targets. Within hours of its release, however, it became the subject of dark web discussions, where several actors tried to use it against zero-day vulnerabilities with the goal of installing webshells for unauthenticated remote code execution.
Hexstrike-AI had been presented as a "revolutionary AI-powered offensive security framework", designed to combine professional tools and autonomous agents. Its release, however, quickly drew the interest of malicious actors, who discussed using it to exploit three critical vulnerabilities in Citrix NetScaler ADC and Gateway disclosed on August 26. Within a few hours, a tool meant to strengthen defense had been turned into a real exploitation engine.
Dark web posts discussing HexStrike AI shortly after its release. (Source: Check Point)
The framework's architecture stands out for its layer of abstraction and orchestration, which lets models such as GPT, Claude, and Copilot drive security tools without direct supervision. At the heart of the system are the so-called MCP Agents, which connect the language models to the offensive functions. Every tool, from Nmap scanning to persistence modules, is wrapped in callable functions, making integration and automation seamless. The framework also includes resilience logic to ensure operational continuity even in the event of errors.
Particularly significant, according to the Check Point article, is the system's ability to translate generic commands into technical workflows, drastically reducing complexity for operators. This removes the need for long manual phases and turns instructions such as "exploit NetScaler" into precise, adaptive sequences of actions. Complex operations thus become accessible and repeatable, lowering the barrier to entry for anyone seeking to exploit advanced vulnerabilities.
HexStrike AI MCP Toolkit. (Source: Check Point)
The timing of the release amplifies the risks. Citrix has disclosed three zero-day vulnerabilities: CVE-2025-7775, already exploited in the wild, with webshells observed on compromised systems; CVE-2025-7776, a high-risk memory handling flaw; and CVE-2025-8424, an access control issue in the management interfaces. Traditionally, exploiting these flaws would have required weeks of development and advanced expertise. With Hexstrike-AI, the timeline shrinks to a few minutes and the actions can be parallelized at scale.
The consequences are already visible: in the hours following the disclosure of the CVEs, several underground forums carried discussions on how to use the framework to find and exploit vulnerable instances. Some actors even put compromised systems up for sale, signaling a qualitative leap in the speed and commercialization of intrusions. Chief among the risks is the drastic shrinking of the window between disclosure and mass exploitation, which makes a paradigm shift in defense urgent.
Top panel: a dark web post claiming to have successfully exploited the latest Citrix CVEs using HexStrike AI, originally in Russian; bottom panel: the same post translated into English using the Google Translate add-on. (Source: Check Point)
The suggested mitigations point to a clear path. It is essential to apply the patches released by Citrix without delay and to strengthen authentication and access controls. At the same time, organizations are called upon to evolve their defenses by adopting adaptive detection, defensive AI, faster patching pipelines, and constant monitoring of dark web discussions. In addition, the design of resilient systems is recommended, based on segmentation, least privilege, and recovery capabilities, so as to reduce the impact of any compromise.
The article Hexstrike-AI unleashes chaos! Zero-days exploited in record time comes from il blog della sicurezza informatica.
CISO vs DPO: collaboration or cold war inside companies?
Managing security is not at all simple; it is not something that can be standardized and, above all, it cannot be done by throwing "solutions" at the problem. It takes planning, analysis, the ability to maintain an overall view and, above all, steady pursuit of the goal of keeping data and systems at an acceptable level of security. The most common causes of crisis are the disconnect between what has been done and what one would like to do, or worse still, what one believes has been done. In short: both the case in which the wish list cannot actually be achieved and the case in which one deludes oneself into feeling safe are the source of most of the problems found in organizations of every size.
This is why there are functions, or rather offices, tasked not only with a sort of management control over security, but also, and above all, with continuously advising top management so that hallucinations of various kinds can be countered. Not least, that of so-called paper security: security that is written down and never implemented, in the belief that a formality can keep any threat actor at bay.
These offices are those of the CISO and the DPO. The former has a decidedly broader field of action, while the latter is focused on the handling of personal data, which also includes security. The correlation between data privacy and data security recurs throughout most regulations, management systems, and the practical experience of organizations.
What is needed, however, is for the CISO and the DPO to operate as a tag team in managing security, rather than as competitors. Even when these functions are outsourced and hungry for upselling. But does management know how to employ them and, above all, how to verify that they are doing their job properly? Here lies the sore point. Companies often bring in a CISO because it looks fancy, or a DPO because it is mandatory, but they rarely know how to answer the question of whether these people are doing their job well, settling for periodic reports and a few slides put together to justify the fees paid.
Let's clear up a misunderstanding: both the CISO and the DPO can be held accountable, and this does not undermine their contribution, whether they are internal or external. Like every body within the organization, including the OdV (supervisory board), they must demonstrate that they have fulfilled their contractual obligations and the duties the role requires. Some might argue, indeed have argued, that this compromises the independence of the function, raising "Nessuno mi può giudicare" ("nobody can judge me") vibes. They are deeply mistaken. What cannot be second-guessed is the discretionary scope entrusted to the control functions and the outcome of their assessments, not whether they carry out their mandate properly.
So by all means involve them and put them to work, and better still, understand how to get the best out of them: building on the strengths and mitigating the critical points.
Strengths: cooperation between CISO and DPO.
"Together we stand, divided we fall", as Pink Floyd remind us. In security, this is a leitmotif common to many functions and a recurring one for the CISO and the DPO. But how do you actually cooperate? Sitting at the same working tables is certainly important, but knowing what each can contribute to a project, and what the boundaries of intervention are, is fundamental.
Good practice dictates sharing projects even where the final word naturally belongs to the CISO or to the DPO, for example, respectively, the decision on a security measure or an opinion on its adequacy with respect to the risks for the data subjects. In short: shared goals, shared projects, and respect for roles.
Management must therefore set up the information flows, but also involve these figures in the security working groups, knowing what to ask of whom, so that the roadmap for implementing and overseeing data and system security can be managed as well as possible.
Clarifying the terms and methods of cooperation is useful not only to avoid pointless duplication, but above all to prevent the conflicts that can arise when there are overlapping areas of intervention.
Critical points: competition between CISO and DPO.
Overlap between the CISO's and the DPO's work is inevitable, but it must be managed properly. Otherwise it becomes competition. And by now the fairy tale of "coopetition" has decidedly faded, since it raises the level of internal conflict in the company and leads to inevitable derailments from the track of the security objectives.
Management must not only refrain from fostering conflict, but prevent it by introducing the functions properly and making clear what results are expected. This can mean setting KPIs, requesting joint or coordinated opinions, or assigning risk assessments with integration or comparison in mind, for example.
In short: the CISO and the DPO can improve security management.
But only if you have read the instruction manual carefully.
The article CISO vs DPO: collaboration or cold war inside companies? comes from il blog della sicurezza informatica.
Revolutionize security models with the Unified SASE framework
A unified, secure approach to support digital transformation, enable hybrid work, and reduce operational complexity.
By Federico Saraò, Specialized System Engineer SASE, Fortinet Italy
The nature of a company's digital operations has changed dramatically over the last decade. The traditional model of working at a terminal in the office has been completely overturned to make room for a dynamic model in which business activities are increasingly distributed, both inside and outside the workplace, and therefore need to be performed promptly from any kind of terminal.
To guarantee this operational flexibility, companies must migrate to a new architectural model that allows easy, continuous, yet always secure access to their infrastructure.
Federico Saraò, Specialized System Engineer SASE, Fortinet Italy
This flexibility, however, cannot be separated from three fundamental aspects:
- guaranteeing a high level of security across all components of the corporate architecture, from the user to the device, inevitably passing through the network and the applications, and ensuring consistency and uniformity of access to applications wherever they are delivered (public cloud, private cloud, on-premise DC) and for all users, wherever they are (on company premises or remote);
- improving effectiveness in terms of management, control, and monitoring, so that critical events are handled correctly, promptly, and above all simply, since operational complexity has always been a critical factor in security;
- addressing the compliance requirements of the regulations that apply in the various industries and geographic areas in which the company operates, which are increasingly stringent in order to guarantee the correct delivery of services.
The SASE (Secure Access Service Edge) framework was created precisely in response to these needs, defining an evolving security model that in recent years has moved beyond its origins, when it was seen mainly as an enabler for remote work.
SASE is much more than an innovative solution for managing remote access: it proposes a model that natively integrates multiple networking and security solutions and functions on a single cloud platform, to simplify operations, make visibility and monitoring more efficient, apply security policies across the board, and enable secure digital transformation at scale.
The new architectural models adopted by companies to better manage the delocalization of users and resources have certainly optimized performance and user experience, but at the same time they have drastically increased the networks' potential attack surface, making it highly heterogeneous given the diverse nature of the services involved.
A company's adoption of a SASE solution cannot be separated from one primary assumption: the security of the infrastructure.
It therefore becomes essential to find a new implementation model that can, in a unified way, manage and protect all components of the infrastructure, delivering in parallel a heterogeneous set of security services, all aimed at protecting the infrastructure end to end.
Among these services, the main ones characterizing a Unified SASE model are NGFWaaS (Next-Gen Firewall as a Service), SWG (Secure Web Gateway), SD-WAN, CASB (Cloud Access Security Broker), DLP (Data Loss Prevention), RBI (Remote Browser Isolation), Endpoint Security, Sandboxing, and DEM (Digital Experience Monitoring), all within a single platform.
In addition to these services, a fundamental role within the SASE framework is played by the concept of Universal Zero Trust Network Access (ZTNA), which enables a global security policy capable of providing a consistent security experience for all the users and resources of a corporate network.
The goal of Universal ZTNA is to guarantee maximum protection for access to corporate resources and services by verifying the state and compliance of each user and each device before every session. It is a continuous, granular verification of identity and context, in real time, able to immediately identify any change in the state of the network and the devices and to react accordingly, protecting the infrastructure and giving users a predictable, reliable connection experience.
The combination of Universal ZTNA with the other services offered within the Unified SASE framework represents the real shift in the security paradigm that companies need to better protect their networks.
It is important that the solution can adapt to the needs of users and their devices, delivering the service both through a unified agent (usable on mobile devices as well) and in agentless mode, guaranteeing flexible deployment options. Equally essential is the availability of a broad network of global POPs that can enforce security policies according to the geographic proximity of users and company sites, meeting compliance and performance requirements.
Finally, it is essential that the solution provide simplified management interfaces and AI-based assistance tools to reduce operating costs and identify threats promptly, preventing attacks and reducing risk for the company. This can take the form of SOC-as-a-service and Forensic Analysis services integrated into the platform, designed to support security teams in their analysis work.
Adopting a Unified SASE framework means enabling an architectural model capable of meeting the scalability, security, and performance requirements of modern infrastructures, giving companies not only more security but also more agility and competitiveness.
SASE is not just a technological evolution but a strategic accelerator for enabling digital business in a secure and resilient way: the decisive step for facing tomorrow's digital challenges with confidence.
The article Revolutionize security models with the Unified SASE framework comes from il blog della sicurezza informatica.
Restoring a Vintage Intel Prompt 80 8080 Microcomputer Trainer
Over on his blog our hacker [Scott Baker] restores a Prompt 80, which was a development system for the 8-bit Intel 8080 CPU.
[Scott] acquired this broken trainer on eBay and then set about restoring it. The trainer provides I/O for programming, probing, and debugging an attached CPU. The first problem discovered when opening the case is that the CPU board is missing. The original board was an 80/10 but [Scott] ended up installing a newer 80/10A board he scored for fifty bucks. Later he upgraded to an 80/10B which increased the RAM and added a multimodule slot.
[Scott] had some luck fixing the failed power supply by recapping some of the smaller electrolytic capacitors, which were showing high ESR. Once he had the board installed and the power supply functional, he was able to enter his first assembly program: a Cylon LED program that makes artistic use of the LEDs attached to the parallel port. You can see the results in the video embedded below.
[Scott] then went all in and pared down a version of Forth which was “ROMmable”, ending up with 5 KB of fig-Forth plus 3 KB of monitor, 8 KB in total, which fit in four 2716 chips on the 80/10B board.
To take the multimodule socket on the 80/10B for a spin [Scott] attached his SP0256A-AL2 speech multimodule and wrote two assembly language programs to say “Scott Was Here” and “This is an Intel Prompt 80 Computer”. You can hear the results in the embedded video.
youtube.com/embed/C9CFD0suW_0?…
Thanks to [BrendaEM] for writing in to let us know about [Scott]’s YouTube channel.
CP/M Gently
If you are interested in retrocomputers, you might be like us and old enough to remember the old systems and still have some of the books. But what if you aren’t? No one is born knowing how to copy a file with PIP, for example, so [Kraileth] has the answer: A Gentle Introduction to CP/M.
Of course, by modern standards, CP/M isn’t very hard. You had disks, and they held a single level of files. No subdirectories. We did eventually get user areas, and the post covers that near the end. It was a common mod to treat user 0 as a global user, but that wasn’t the default.
That leads to one of the classic chicken-and-egg problems. PIP copies files, among other things. It knows about user areas, too, but only for source files: you can copy from user 3, for example, but you can’t copy to user 3. And therein lies the problem.
Suppose you switch to user 3 for the first time. The disk is empty. So there’s no PIP command. To get it, you’ll need to copy it from user 0, but… you can’t without PIP. The solution is either genius or madness. You essentially load PIP into memory as user 0, switch users, then dump memory out to the disk. Who wouldn’t think of that?
Some people would load PIP with the debugger instead, but it is the same idea. But this is why you need some kind of help to use this important but archaic operating system.
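In concrete terms, the classic recipe looks something like this. It is a sketch from memory rather than a quote from the post, and the page count for SAVE depends on the size of your particular PIP.COM (29 pages suits the stock CP/M 2.2 version):

```
A>USER 0
A>DDT PIP.COM        (DDT loads PIP into the TPA; note the NEXT address it prints)
-G0                  (leave DDT, with the image still sitting in memory)
A>USER 3
A>SAVE 29 PIP.COM    (write 29 256-byte pages of the TPA back out as PIP.COM)
```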
Of course, this just gets you started. Formatting disks and adapting software to your terminal were always a challenge with CP/M. But at least this gives you a start.
Can’t afford a vintage CP/M machine? Build one. Or just emulate it.
Over-Engineering an Egg Cracking Machine
Eggs are perhaps the most beloved staple of breakfast. However, they come with a flaw: they are incredibly messy to work with. Cracking in particular leaves egg on one’s hands and countertop, requiring frequent hand washing. This fundamental flaw of eggs inspired [Stuff Made Here] to fix it with an over-engineered egg cracking robot.
The machine works on the principle of scoring a line along an egg shell to weaken it, then gently tapping it to fracture the shell. A simple theory that proved complex to build into a machine. The first challenge was merely holding an egg, as eggs come in all shapes and sizes. [Stuff Made Here] ended up settling on silicone over-molded onto a 3D printed structure. After numerous prototypes, this evolved to include over-molded arms for added stiffness and a vacuum seal for added rigidity.
After making two of these holders, [Stuff Made Here] added them to a roughly C-shaped holder, which could spin the egg around and slide the holders to accommodate any egg shape. To this was added an arm which included a scoring blade and a tiny hammer to crack the egg. The hammer can even be turned off while the blade is in use.
The over-molded egg holder
The mechanism runs through a sequence of score, hammer, dump, and eject. [Stuff Made Here] attempted to run this sequence off a single crank, but it ended up not working for a number of reasons, not least of which being that some eggs required more scoring than others. Nonetheless, we love the mechanical computing mechanism used. Ultimately, while frivolous, the project provides a wonderful look at the highs and lows of the prototyping process, with all its numerous broken eggs.
If you like over-engineered solutions to simple problems, [Stuff Made Here] has you covered. Make sure to check out this automatic postcard machine next!
youtube.com/embed/vJ43DjwLPGA?…
One Camera Mule to Rule Them All
A mule isn’t just a four-legged hybrid created of a union betwixt Donkey and Horse; in our circles, it’s much more likely to mean a testbed device you hang various bits of hardware off in order to evaluate. [Jenny List]’s 7″ touchscreen camera enclosure is just such a mule.
In this case, the hardware to be evaluated is camera modules: she’s starting out with the official RPi HQ camera, but the modular nature of the construction means it’s easy to swap modules for evaluation. The camera modules live on 3D printed front plates held to the similarly-printed body with self-tapping screws.
Any Pi will do, though depending on the camera module you may need one of the newer versions. [Jenny] has got Pi4 inside, which ought to handle anything. For control and preview, [Jenny] is using an old first-gen 7″ touchscreen from the Raspberry Pi foundation. Those were nice little screens back in the day, and they still serve well now.
There’s no provision for a battery because [Jenny] doesn’t need one; this isn’t a working camera, after all, it’s just a test mule for the sensors. Having it tethered to a wall wart or power bank is no problem in this application. All files are on GitHub under a CC 4.0 license, and not just STLs, either, but proper CAD files that you can actually make your own. (SCAD files in this case, but who doesn’t love OpenSCAD?) That means if you love the look of this thing and want to squeeze in a battery or add a tripod mount, you can! It’s no shock that our own [Jenny List] would follow best practice for open source hardware, but so few people do that it’s worth calling out when we see it.
Thanks to [Jenny] for the tip, and don’t forget that the tip line is open to everyone, and everyone is equally welcome to toot their own horn.
New extortion tactics: if you don't pay, we'll feed all your data to AI!
The LunaLock hacker group has added a new twist to the classic extortion scheme, playing on the fears of artists and their clients. On August 30, a message appeared on the Artists&Clients website, which connects independent illustrators with clients: the attackers reported that they had stolen and encrypted all of the site's data.
The hackers promised to publish the site's source code and users' personal information on the darknet if the owner did not pay $50,000 in cryptocurrency. But the main pressure lever was the prospect of handing the stolen works and information over to companies that train neural networks, for inclusion in the training sets of their models.
The site displayed a note with a countdown timer stating that if the victim refused to pay, the files would be made public. The authors warned of possible penalties for violations of the GDPR and other laws. Payment was demanded in Bitcoin or Monero. Screenshots of the notice circulated on social networks, and Google even managed to index the page carrying the message, after which Artists&Clients stopped working: users attempting to access it now see a Cloudflare error.
Most of the text reads like a standard ransomware message. What is new is the hint at an intention to hand the stolen drawings and data over to AI developers. Experts noted that this is the first time they have seen access to training sets used as a means of pressure. Until now, this possibility had only been discussed in theory, for example the idea that criminals might analyze the data to calculate the ransom amount.
It is not yet clear exactly how the attackers would transfer the artwork to the developers of the algorithms. They could publish the images on an ordinary website and wait for language-model crawlers to pick them up. Another option is to upload the images through the services themselves, if their rules allow user content to be used for training. Either way, the threat itself pushes the community of artists and clients to pressure the site's administrators into paying a ransom to keep control over their works.
At the moment, the Artists&Clients website remains unreachable. Meanwhile, users continue to discuss the threat and share screenshots online, which only increases the attack's visibility.
The article New extortion tactics: if you don't pay, we'll feed all your data to AI! comes from il blog della sicurezza informatica.
FLOSS Weekly Episode 845: The Sticky Spaghetti Gauge
This week Jonathan and Randal talk Flutter and Dart! Is Google killing Flutter? What’s the challenge Randal sees in training new senior developers, and what’s the solution? Listen to find out!
youtube.com/embed/HzZQacDIxZg?…
Did you know you can watch the live recording of the show right on our YouTube Channel? Have someone you’d like us to interview? Let us know, or contact the guest and have them contact us! Take a look at the schedule here.
play.libsyn.com/embed/episode/…
Direct Download in DRM-free MP3.
If you’d rather read along, here’s the transcript for this week’s episode.
Places to follow the FLOSS Weekly Podcast:
Theme music: “Newer Wave” Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
hackaday.com/2025/09/03/floss-…
Ask Hackaday: Now You Install Your Friends’ VPNs. But Which One?
Something which may well unite Hackaday readers is the experience of being “The computer person” among your family or friends. You’ll know how it goes, when you go home for Christmas, stay with the in-laws, or go to see some friend from way back, you end up fixing their printer connection or something. You know that they would bridle somewhat if you asked them to do whatever it is they do for a living as a free service for you, but hey, that’s the penalty for working in technology.
Bad Laws Just Make People Avoid Them
There’s a new one that’s happened to me and no doubt other technically-minded Brits over the last few weeks: I’m being asked to recommend, and sometimes install, a VPN service. The British government recently introduced the Online Safety Act, which is imposing ID-backed age verification for British internet users when they access a large range of popular websites. The intent is to regulate access to pornography, but the net has been spread so wide that many essential or confidential services are being caught up in it. To be a British Internet user is to have your government peering over your shoulder, and while nobody’s on the side of online abusers, understandably a lot of my compatriots want no part of it. We’re in the odd position of having 4Chan and the right-wing Reform Party alongside Wikipedia among those at the front line on the matter. What a time to be alive.
VPN applications have shot to the top of all British app download charts, prompting the government to flirt with, and then deny, the idea of banning them, but as you might imagine therein lies a problem. Aside from the prospect of dodgy VPN apps to trap the unwary, the average Joe has no idea how to choose from the plethora of offerings. A YouTuber being paid to shill “that” VPN service is as close as they’ve ever come to a VPN, so they are simply unequipped to make a sound judgement when it comes to trusting a service with their web traffic. They have no hope of rolling their own VPN; setting up WireGuard, let alone finding a friend elsewhere in the world prepared to act as their endpoint, is impractical.
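For context on what “rolling your own” actually involves, a bare-bones WireGuard client configuration looks something like the sketch below. The keys, addresses, and endpoint are placeholders, and you still need a server somewhere, run by you or that obliging friend, to terminate the tunnel:

```
[Interface]
PrivateKey = <client-private-key>     # generated with `wg genkey`
Address = 10.0.0.2/32                 # the client's address inside the tunnel
DNS = 10.0.0.1                        # resolve names through the tunnel

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.net:51820      # the remote endpoint, i.e. the hard part
AllowedIPs = 0.0.0.0/0, ::/0          # route all traffic through the VPN
```

Not rocket science for Hackaday readers, but you can see why the average Joe reaches for an app instead.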
It therefore lies upon us, their tech-savvy friends, to lead them through this maze. Which brings me to the point of this piece; are we even up to the job ourselves? I’ve been telling my friends to use ProtonVPN because their past behaviour means I trust Proton more than I do some of the other well-known players, but is my semi-informed opinion on the nose here? Even I need help!
Today Brits, Tomorrow The Rest Of You
At the moment it’s Brits who are scrambling for VPNs, but it seems very likely that with the EU yet again flirting with their ChatControl snooping law, and an American government whose actions are at best unpredictable, soon enough many of the rest of you will too. The question is then: where do we send the non-technical people, and how good are the offerings? A side-by-side review of VPNs has been done to death by many other sites, so there’s little point in repeating. Instead let’s talk to some experts. You lot, or at least those among the Hackaday readership who know their stuff when it comes to VPNs. What do you recommend for your friends and family?
Header image: Nenad Stojkovic, CC BY 2.0.
One ROM: the Latest Incarnation of the Software Defined ROM
Retrocomputers need ROMs, but they’re just so read only. Enter the latest incarnation of [Piers]’s One ROM to rule them all, now built with an RP2350, because the newest stepping of that chip is 5 V tolerant. It can replace the failing ROMs in your old Commodore gear with a sweet design on a two-layer PCB, using a cheap microcontroller.
[Piers] wanted to use the RP2350 from the beginning, but there simply wasn’t space on the board for the 23 level shifters which would have been required. Now that the A4 stepping adds 5 V tolerance, [Piers] has been able to reformulate his design.
The C64 in the demo has three different ROMs: the BASIC ROM, the KERNAL ROM, and the character ROM. A single One ROM can emulate all three. The firmware is performance critical: it needs to convert requests on the address pins into results on the data bus as fast as it possibly can, and [Piers] employs a number of tricks to meet these requirements.
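To make the timing constraint concrete, here is a minimal sketch of what such a hot loop can look like with the Pico SDK. The pin assignments, the 8 KB image size, and the use of plain GPIO polling (rather than whatever PIO, DMA, or assembly tricks the real firmware uses) are our assumptions for illustration, not [Piers]’s actual code.

```c
#include "pico/stdlib.h"
#include "hardware/gpio.h"

// Illustrative pin map (an assumption, not the One ROM schematic):
// GPIO 0-12  : 13 address inputs (enough for an 8 KB ROM image)
// GPIO 16-23 : 8 data outputs
#define ADDR_BASE 0
#define ADDR_BITS 13
#define DATA_BASE 16
#define ADDR_MASK (((1u << ADDR_BITS) - 1u) << ADDR_BASE)
#define DATA_MASK (0xFFu << DATA_BASE)

static uint8_t rom_image[1u << ADDR_BITS]; // copied into RAM at boot for speed

int main(void) {
    for (uint pin = ADDR_BASE; pin < ADDR_BASE + ADDR_BITS; pin++) {
        gpio_init(pin);
        gpio_set_dir(pin, GPIO_IN);
    }
    for (uint pin = DATA_BASE; pin < DATA_BASE + 8; pin++) {
        gpio_init(pin);
        gpio_set_dir(pin, GPIO_OUT);
    }

    // Hot loop: read the address lines, look up the byte, drive the data bus.
    // Every nanosecond between an address change and valid data counts, so
    // there are no function calls or branches beyond the table lookup.
    while (true) {
        uint32_t addr = (gpio_get_all() & ADDR_MASK) >> ADDR_BASE;
        gpio_put_masked(DATA_MASK, (uint32_t)rom_image[addr] << DATA_BASE);
    }
}
```

In practice, chip-select handling, releasing the bus when the ROM isn’t addressed, and the 5 V level question all complicate matters, which is exactly where the clever tricks come in.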
The PCB layout for the RP2350 required extensive changes from the larger STM32 in the previous version. Because the RP2350 uses large power and ground pads underneath the IC, this area, which was originally used to drop vias to the other side of the board, was no longer available for signal routing. And of course [Piers] is constrained by the size of the board needing to fit in the original form factor used by the C64.
The One ROM code is available over on GitHub, and the accompanying video from [Piers] is an interesting look into the design process and how tradeoffs and compromises and hacks are made in order to meet functional requirements.
youtube.com/embed/Zy8IMe6fMI4?…
Thanks to [Piers] for writing in to let us know about the new version of his project.
LockBit 5.0: Signs of a Possible New “Rebirth”?
LockBit is one of the longest-running and most structured ransomware gangs of recent years, with a Ransomware-as-a-Service (RaaS) model that has left a deep mark on the criminal ecosystem.
Following the international Operation Cronos, conducted in February 2024, which led to the seizure of large parts of its infrastructure and the compromise of its affiliate management panels, the group seemed destined for an irreversible decline. In recent weeks, however, new evidence on the onion network has been fueling speculation about a resurrection of the LockBit brand under the name LockBit 5.0.
A brief history of the group
- 2019 – The first LockBit variants appear, characterized by rapid automated propagation in Windows environments and advanced encryption techniques.
- 2020-2021 – Consolidation of the RaaS model and strong expansion across the cybercrime scene; introduction of data leak sites as a double-extortion tool.
- 2022 – LockBit becomes one of the most active groups worldwide, releasing LockBit 2.0 and 3.0, with implementations in multiple languages and cross-platform payloads.
- 2023 – Further diversification with Go and Linux payloads, and targeted campaigns against supply chains and critical sectors.
- 2024 (Operation Cronos) – Coordinated by Europol and the FBI, the operation leads to the seizure of more than 30 servers, onion domains, and internal tools. For the first time, a public decryptor is distributed on a large scale.
Recent evidence
An analysis of the group's underground site shows a portal reachable over the onion network under the LockBit 5.0 brand, using the same queue-panel scheme already observed in earlier versions run by the group. The interface again displays logos for Monero (XMR), Bitcoin (BTC), and Zcash (ZEC) as payment methods, suggesting that the extortion model would remain centered on highly anonymous cryptocurrencies.
The message “You have been placed in a queue, awaiting forwarding to the platform” recalls the classic mechanics of LockBit affiliate panels, where the user (or affiliate) is routed to the operational backend.
Technical analysis and possible scenarios
The appearance of LockBit 5.0 can be read through three main scenarios:
- Genuine resurrection attempt: a part of the core team untouched by Operation Cronos may have rebuilt a reduced infrastructure, aiming to recruit affiliates once again.
- Deception operation (honeypot): it cannot be ruled out that this is a lure set up by researchers or law enforcement to monitor traffic and identify surviving affiliates.
- Opportunistic rebranding: third-party actors, trading on the LockBit “brand”, could be reusing it to gain immediate visibility and credibility in the underground scene.
Conclusions
Although there is currently no concrete evidence of new compromises attributable to LockBit 5.0, the presence of an onion portal carrying the official brand is fueling speculation about a possible rebirth of the group. It will be crucial to monitor:
- any new intrusion campaigns with TTPs traceable to LockBit's past,
- active leak sites publishing victims,
- recruitment signals on the dark web.
The episode once again demonstrates the resilience and adaptability of cyber gangs, which often manage to regenerate even after law enforcement operations of global reach.
The article LockBit 5.0: Signs of a Possible New “Rebirth”? originally appeared on il blog della sicurezza informatica.
Field Guide to North American Crop Irrigation
Human existence boils down to one brutal fact: however much food you have, it’s enough to last for the rest of your life. Finding your next meal has always been the central organizing fact of life, and whether that meal came from an unfortunate gazelle or the local supermarket is irrelevant. The clock starts ticking once you finish a meal, and if you can’t find the next one in time, you’ve got trouble.
Working around this problem is basically why humans invented agriculture. As tasty as they may be, gazelles don’t scale well to large populations, but it’s relatively easy to grow a lot of plants that are just as tasty and don’t try to run away when you go to cut them down. The problem is that growing a lot of plants requires a lot of water, often more than Mother Nature provides in the form of rain. And that’s where artificial irrigation comes into the picture.
We’ve been watering our crops with water diverted from rivers, lakes, and wells for almost as long as we’ve been doing agriculture, but it’s only within the last 100 years or so that we’ve reached a scale where massive pieces of infrastructure are needed to get the job done. Above-ground irrigation is a big business, both in terms of the investment farmers have to make in the equipment and the scale of the fields it turns from dry, dusty patches of dirt into verdant crops that feed the world. Here’s a look at the engineering behind some of the more prevalent methods of above-ground irrigation here in North America.
Crop Circles
Center-pivot irrigation machines are probably the most recognizable irrigation methods, both for their sheer size — center-pivot booms can be a half-mile long or more — and for the distinctive circular and semi-circular crop patterns they result in. Center-pivot irrigation has been around for a long time, and while it represents a significant capital cost for the farmer, both in terms of the above-ground machinery and the subsurface water supply infrastructure that needs to be installed, the return on investment time can be as low as five years, depending on the crop.
Image: Pivot tower in an alfalfa field in Oregon. You can clearly see the control panel, riser pipe, swivel elbow, and the boom. The slip rings for electrical power distribution live inside the gray dome atop the swivel. Note the supporting arch in the pipe created by the trusses underneath. Source: Tequask, CC BY-SA 4.0.
Effective use of pivot irrigation starts with establishing a water supply to the pivot location. Generally, this will be at the center of a field, allowing the boom to trace out a circular path. However, semi-circular layouts with the water supply near the edge of the field or even in one corner of a square field are also common. The source must also be able to supply a sufficient amount of water; depending on the emitter heads selected, the boom can flow approximately 1,000 gallons per minute.
The pivot tower is next. It’s generally built on a sturdy concrete pad, although there are towable pivot machines where the center tower is on wheels. The tower needs to stand tall enough that the rotating boom clears the crop when it’s at its full height, which can be substantial for crops like corn. Like almost all parts of the machine, the tower is constructed of galvanized steel, which resists corrosion by letting the zinc coating act as a sacrificial anode for the underlying metal.
The tower is positioned over a riser pipe that connects to the water supply and is topped by a swivel fitting to change the water flow from vertical to horizontal and to let the entire boom rotate around the tower. For electrically driven booms, a slip ring will also be used to transfer power and control signals from the fixed control panel on the tower along the length of the boom. The slip ring connector is located in a weather-tight enclosure mounted above the exact center of the riser pipe.
The irrigation boom is formed from individual sections of pipe, called spans. In the United States, each span is about 180 feet long, a figure that makes it easy to build a system that will fit within the Public Land Survey System (PLSS), a grid-based survey system based on even divisions called sections, one mile on a side and 640 acres in area. These are divided down into half-, quarter-, and finally quarter-quarter sections, which are a quarter mile on a side and cover 40 acres. A boom built from seven spans will be about 1,260 feet long and will be able to irrigate a 160-acre quarter-section, which is a half-mile on a side.
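As a rough sanity check on those figures (our arithmetic, using the standard 43,560 ft² per acre, not numbers from any manufacturer): the boom needs to reach about half the width of the quarter-section, and the circle it traces waters roughly 126 of the 160 acres, which is why the corners get special treatment further down.

\[
7 \times 180\,\text{ft} = 1260\,\text{ft}, \qquad \frac{2640\,\text{ft}}{2} = 1320\,\text{ft}
\]
\[
\pi \,(1320\,\text{ft})^2 \approx 5.47 \times 10^{6}\,\text{ft}^2 \approx \frac{5.47 \times 10^{6}}{43{,}560}\ \text{acres} \approx 126\ \text{acres}
\]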
The pipe for each span is usually made from galvanized steel, but aluminum is also sometimes used. Because of the flow rates, large-diameter pipe is used, and it needs to be supported lest it sag when filled. To do this, the pipe is put into tension with a pair of truss rods that run the length of the span, connecting firmly to each end. The truss rods and the pipe are connected by a series of triangular trusses attached between the bottom of the pipe and the truss rods, bending the pipe into a gentle arch. The outer end of each span is attached to a wheeled tower, sized to support the pipe at the same height as the center tower. The boom is constructed by connecting spans to each other and to the center pivot using flexible elastomeric couplings, which allow each span some flexibility to adjust for the terrain of the field. Sprinkler heads (drops) are attached to the span by elbows that exit at the top of the pipe. These act as siphon breakers, preventing water from flowing out of the sprinkler heads once water flow in the boom stops.
Different sprinkler heads are typically used along the length of the boom, with lower flow rate heads used near the center pivot. Sprinkler heads are also often spaced further apart close to the pivot. Both of these limit the amount of water delivered to the field where the boom’s rotational speed is lower, to prevent crops at the center of the field from getting overwatered. Most booms also have an end gun, which is similar to the impulse sprinklers commonly used for lawn irrigation, but much bigger. The end gun can add another 100′ or more of coverage to the pivot, without the expense of another length of pipe. End guns are often used to extend coverage into the corners of square fields, to make better use of space that otherwise would go fallow. In this case, an electrically driven booster pump can be used to drive the end gun, but only when the controller senses that the boom is within those zones.
Image: Many center-pivot booms have an end gun, which is an impulse sprinkler that extends coverage by 100 feet or more without having to add an extra span. They can help fill in the corners of square fields. Source: Ingeniero hidr., CC BY-SA 3.0.
Most center-pivot machines are electrically driven, with a single motor mounted on each span’s tower. The motor drives both wheels through a gearbox and driveshaft. In electrically driven booms, only the outermost span rotates continuously. The motors on the inboard spans are kept in sync through a position-sensing switch that’s connected to the next-furthest-out span through mechanical linkages. When the outboard span advances, it eventually trips a microswitch that tells the motor on the inboard span to turn on. Once that span catches up to the outboard span, the motor turns off. The result is a ripple of movement that propagates along the boom in a wave.
Image: Electrically driven pivots use switches to keep each span in sync. The black cam is attached to the next-further span by a mechanical linkage, which operates a microswitch to run the motor on that span. Source: Everything About Irrigation Pivots, by SmarterEveryDay, via YouTube.
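For the curious, the start/stop behavior is easy to model. The following toy sketch is our illustration only; the tower count, trip angle, and polling structure are assumptions, not taken from any real pivot controller. It captures the essential logic: the outermost tower runs continuously, and each inboard tower’s motor switches on only when the tower outboard of it has pulled ahead by more than the trip angle.

```c
#include <stdbool.h>

#define NUM_TOWERS     7
#define TRIP_ANGLE_DEG 1.0   // misalignment that "trips the microswitch"

typedef struct {
    double angle_deg;  // angular position of this tower around the pivot
    bool   motor_on;
} tower_t;

// Decide which motors run. The outermost tower (highest index) always runs;
// every other tower chases the one outboard of it, switching on above the
// trip angle and off again once it has caught up. The dead band in between
// keeps the previous state, which is what produces the ripple of movement.
void update_towers(tower_t towers[NUM_TOWERS]) {
    towers[NUM_TOWERS - 1].motor_on = true;
    for (int i = NUM_TOWERS - 2; i >= 0; i--) {
        double lag = towers[i + 1].angle_deg - towers[i].angle_deg;
        if (lag > TRIP_ANGLE_DEG) {
            towers[i].motor_on = true;
        } else if (lag <= 0.0) {
            towers[i].motor_on = false;
        }
        // else: keep previous state (hysteresis)
    }
}
```

Advance each tower’s angle a little on every time step while its motor is on, and the characteristic wave of motion falls out of the model on its own.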
While electrically driven center-pivot machines are popular, they do have significant disadvantages. Enterprising thieves often target them for copper theft; half a mile of heavy-gauge, multi-conductor cable sitting unattended in a field that could take hours for someone to happen upon is a tempting target indeed. To combat this, some manufacturers use hydrostatic drives, with hydraulic motors on each wheel and a powerful electric- or diesel-driven hydraulic pump at the pivot. Each tower’s wheels are controlled by a proportioning valve connected to the previous span via linkages, to run the motors faster when the span is lagging behind the next furthest-out tower.
Aside from theft deterrence, hydrostatic-drive pivots tend to be mechanically simpler and safer to work on, although it’s arguable that the shock hazard from the 480 VAC needed for the motors on electrically driven pivots is any less dangerous than hydraulic injection injuries from leaks. Speaking of leaks, hydrostatic pivots also pose an environmental hazard that electric rigs don’t; a hydraulic leak could potentially contaminate an entire field. To mitigate that risk, hydrostatic pivots generally use a non-toxic hydraulic fluid specifically engineered for pivots.
Occasionally, you’ll see center-pivot booms in fields that aren’t circular. Some rectangular fields can be irrigated with pivot-style booms that are set up with drive wheels at both ends. These booms travel up and down the length of a field with all motors running at the same speed. Generally, water is supplied via a suction hose dipping down from one end of the boom into an irrigation ditch or canal running alongside the field. At the end of the field, the boom reverses and heads back down the way it came. Alternatively, the boom can pivot 180 degrees at the end of the field and head back to the other end, tracing out a racetrack pattern. There are also towers where the wheels can swivel rather than being fixed perpendicularly to the boom; this setup allows individual spans or small groups to steer independently of the main boom, accommodating odd-shaped fields.
Image: While pivot-irrigation is labor-efficient, it leaves quite a bit of land fallow. Many of these pivots use the end gun to get a few extra rows in each of the corner quadrants, increasing land use. Source: go_turk06, via Adobestock.
Rolling, Rolling, Rolling
While center-pivot machines are probably the ultimate in above-ground irrigation, they’re not perfect for every situation. They’re highly automated, but at great up-front cost, and even with special tricks, it’s still not possible to “square the circle” and make use of every bit of a rectangular field. For those fields, a lower-cost method like wheel line irrigation might be used. In this setup, lengths of pipe are connected to large spoked wheels about six feet in diameter. The pipe passes through the center of the wheel, acting as an axle. Spans of pipe are connected end-to-end on either side of a wheeled drive unit, forming a line the width of the field, up to a quarter-mile long, with the drive unit at the center of the line.
Image: Wheel-line system in action on alfalfa in British Columbia. The drive unit at the center powers the whole string, moving it across the field a few times a day. It’s far more labor-intensive than a pivot, but far cheaper. Source: nalidsa, via Adobestock.
In use, the wheel line is rolled out into the field about 25 feet from the edge. When the line is in position, one end is connected to a lateral line installed along the edge of the field, which typically has fittings every 50 feet or so, roughly matching the coverage of the sprinkler heads attached at regular intervals along the pipe. The sprinklers are usually impulse-type and attached to the pipe by weighted swivel fittings, so they always remain vertical no matter where the line stops in its rotation. The heads were traditionally made of brass or bronze for long wear and corrosion resistance, but thieves attracted to them for their scrap value have made plastic heads more common.
Despite their appearance, wheel lines do not continually move across the field. They need to be moved manually, often several times a day, by running the drive unit at the center of the line. This is generally powered by a small gasoline engine which rotates the pipe attached to either side, rolling the entire string across the field as a unit. Disconnecting the water, rolling the line, and reconnecting the line to the supply is quite labor-intensive, so it tends to be used only where labor is cheap.
Reeling In The Years
A method of irrigation that lives somewhere between the labor-intensive wheel line and the hands-off center-pivot is hose reel irrigation. It’s more commonly used for crop irrigation in Europe, but it does make an occasional appearance in US agriculture, particularly in fields where intensive watering all season long isn’t necessary.
As the name suggests, hose reel irrigation uses a large reel of flexible polyethylene pipe, many hundreds of feet in length. The reel is towed into the field, typically positioned in the center or at its edge. Large spades on the base of the reel are lowered into the ground to firmly anchor the reel before it’s connected to the water supply via hoses or pipes. The free end of the hose reel is connected to a tower-mounted gun, which is typically a high-flow impulse sprinkler. The gun tower is either wheeled or on skids, and a tractor is used to drag it out into the field away from the reel. Care is taken to keep the hose between rows to prevent damage to the crops.
Once the water is turned on, water travels down the hose and blasts out of the gun tower, covering a circle or semi-circle a hundred feet or more in diameter. The water pressure also turns a turbine inside the hose reel, which drives a gearbox that slowly winds the hose back onto the reel through a chain and sprocket drive. As the hose retracts, it pulls the gun back to the center of the field, evenly irrigating a large rectangular swath of the field. Depending on how the reel is set up, it can take a day or more for the gun to return to the reel, where an automatic shutoff valve shuts off the flow of water. The setup is usually moved to another point further down the field and the process is repeated until the whole field is irrigated.
Image: Hose reel system being deployed for potatoes in Maine. The end gun on the right is about to be towed into the field, pulling behind it the large-diameter hose from the reel. The reel’s turbine and gearbox will wind the hose back up, pulling the gun in over a day or two. Source: Irrigation Hustle Continues, Bell’s Farming, via YouTube.
Although hose reels still need tending to, they’re nowhere near as labor-intensive as wheel lines. Farmers can generally look in on a reel setup once a day to make sure everything is running smoothly, and can often go several days between repositioning. Hose reels also have the benefit of being much easier to scale up and down than either center-pivot machines or wheel lines; there are hose reels that store thousands of feet of large-diameter hose, and ones that are small enough for lawn irrigation that use regular garden hose and small impulse sprinklers.
The KING of DDoS! Cloudflare blocks a monstrous 11.5-terabit-per-second attack
The record for the largest DDoS attack ever seen, set in June 2025, has already been broken. Cloudflare says it recently blocked the largest DDoS attack in history, which peaked at 11.5 Tbps.
“Cloudflare’s defenses are operating around the clock. In recent weeks we have blocked hundreds of hyper-volumetric DDoS attacks, the largest of which peaked at 5.1 billion packets per second and 11.5 Tbps,” Cloudflare stated.
According to the company, the attack was a UDP flood originating from several cloud and IoT providers, including Google Cloud. Cloudflare representatives said they plan to publish a detailed report on the incident in the near future. Judging by an image attached to the company’s statement, the record-breaking attack lasted only about 35 seconds.
The previous record had been set in June of this year, when Cloudflare reported neutralizing a DDoS attack aimed at an unidentified hosting provider, with a peak of 7.3 Tbps.
That attack was in turn roughly 30% larger than the previous record of 5.6 Tbps, reported in January 2025.
At the time, experts noted that an enormous amount of data was transferred in just 45 seconds: 37.4 TB, equivalent to roughly 7,500 hours of HD streaming or the transfer of 12,500,000 JPEG photos.
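As a quick sanity check on those figures (our arithmetic, not Cloudflare's), the stated volume and duration imply an average rate of

\[
\frac{37.4\ \text{TB} \times 8\ \text{bit/byte}}{45\ \text{s}} \approx 6.6\ \text{Tbit/s},
\]

which sits comfortably below the reported 7.3 Tbit/s peak, as an average over the whole attack window should.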
In its report for the first quarter of 2025, Cloudflare said it had blocked a total of 21.3 million DDoS attacks against its customers over the past year, plus more than 6.6 million attacks against the company's own infrastructure.
The article The KING of DDoS! Cloudflare blocks a monstrous 11.5-terabit-per-second attack originally appeared on il blog della sicurezza informatica.
The Nintendo Famicom Reimagined as a 2003-era Family Computer
If there’s one certainty in life, it is that Nintendo Famicom and similar NES clone consoles are quite literally everywhere. What’s less expected is that they were used for a half-serious attempt at making an educational family computer in the early 2000s. This is however what [Nicole Branagan] tripped over at the online Goodwill store, in the form of a European market Famiclone that was still in its original box. Naturally this demanded an up-close investigation and teardown.
The system itself comes in the form of a keyboard that seems to have been used for a range of similar devices, judging by the cut-outs for what looks like some kind of alarm clock on the top left side and a patched-over hatch on the rear. Inside are the typical epoxied-over chips, but based on some scattered hints it likely uses one of V.R. Technology’s VTxx-series Famiclone chips. The manufacturer, and any further products by them, will sadly remain unknown for now.
There’s a cartridge slot that takes the provided 48-in-1 cartridge (with 32 kB of RAM-banked SRAM for Family BASIC), but compatibility with Famicom software is somewhat spotty due to the remapped keys and the lack of any ability to save. You can still use it to play the usual array of Famicom/NES games, as with any typical cartridge-slot-equipped Famiclone. Whether the provided custom software really elevates this Famiclone that much is debatable, but it sure is a fascinating entry.
Reverse-Engineering Mystery TV Equipment: The Micro-Scan
[VWestlife] ended up with an obscure piece of 80s satellite TV technology, shown above. The Micro-Scan is a fairly plain metal box with a single “Tune” knob on the front. At the back are a power switch and connectors for TV Antenna, TV Set, and “MW” (probably meaning microwave). There’s no other data. What was this, and what was it for?
Satellite TV worked by having a dish receive microwave signals, but televisions could not use those signals directly. A downconverter was needed to turn the signal into something an indoor receiver box (to which the television was attached) could use, allowing the user to select a channel to feed into the TV.
At first, [VWestlife] suspected the Micro-Scan was a form of simple downconverter, but that turned out to not be the case. Testing showed that the box didn’t modify signals at all. Opening it up revealed the Micro-Scan acts as a combination switchbox and variable power supply, sending a regulated 12-16 V (depending on knob position) out the “MW” connector.
So what is it for, and what does that “Tune” knob do? When powered off, the Micro-Scan connected the TV (plugged into the “TV Set” connector) to its normal external antenna (connected to “TV Antenna”) and the TV worked like a normal television. When powered on, the TV would instead be connected to the “MW” connector, probably to a remote downconverter. In addition, the Micro-Scan supplied a voltage (the 12-16 V) on that connector, which was probably a control voltage responsible for tuning the downconverter. The resulting signal was passed unmodified to the TV.
It can be a challenge to investigate vintage equipment that modern TV no longer needs, especially hardware that doesn’t fit the usual way things were done and lacks documentation. If you’d like to see a walkthrough and some hands-on with the Micro-Scan, check out the video (embedded below).
youtube.com/embed/dhxh9BZcFXg?…
Online safety's day in court
WELCOME TO DIGITAL POLITICS. I'm Mark Scott, and this edition marks the one-year anniversary for this newsletter. That's 61 newsletters, roughly 130,000 words and, hopefully, some useful insight into the world of global digital policymaking.
To thank all subscribers for your support, I'm offering a discount on the paid version of Digital Bridge published each Monday. You can go for either a monthly or an annual subscription — at 25 percent off the regular price. You can also keep receiving these monthly free updates.
Also, for anyone in Brussels, I'll be in town next week from Sept 8 - 11. Drop me a line if you're free for coffee.
— The outcome to a series of legal challenges to online safety legislation will be made public in the coming weeks. The results may challenge how these laws are implemented.
— We are starting to see the consequences of what happens when policymakers fail to define what "tech sovereignty" actually means.
— The vast amount of money within the semiconductor industry comes from the design, not manufacture, of high-end microchips.
Let's get started:
LEGAL CHALLENGES TO ONLINE SAFETY RULES
WE'RE ABOUT TO FIND OUT WHERE THE limits are to some of the Western world's attempts to rein in social media platforms and e-commerce giants.
On Sept. 3, Zalando, the German online shopping site (my decade-old profile here), will find out if one of the European Union's top courts agrees that it should not be designated as a Very Large Online Platform, or VLOP, under the bloc's Digital Services Act. The Berlin-based retailer claims it doesn't represent a so-called "systemic risk" within the EU. Zalando also argues that its focus on business customers (in contrast to retail customers) means the platform does not technically have 45 million users within the EU. Expect a decision from the European Court of Justice before midday CET on Sept. 3 (documents here.)
By challenging Brussels' ability to designate which tech companies fall within its VLOP definition (in which the requirement to have at least 45 million local users is critical), Zalando is taking on a central component of the EU's online safety regime. Under the DSA, these large firms take on greater responsibilities and reporting requirements — and are overseen directly by the European Commission, and not EU national regulators — compared to their smaller counterparts.
Currently, how the bloc determines the 45 million user threshold is cloaked in secrecy — mostly because officials typically have to rely on company estimates to make such adjudications. Telegram, for instance, maintains it has fewer users than that benchmark, allowing it to avoid the most strenuous oversight offered by the DSA. Zalando's case challenges the European Commission's (opaque) methodology, and no matter the outcome, it will force Brussels to up its game when determining which companies fall within its VLOP definition.
Thanks for reading the free monthly version of Digital Politics. Paid subscribers receive at least one newsletter a week. If that sounds like your jam, please sign up here.
Here's what paid subscribers read in August:
— Google and Meta's separate decisions to end political ads in Europe are a mistake; What Big Tech's quarterly earnings teach us about geopolitics; Most Brits have yet to jump on the AI bandwagon. More here.
— Everything you need to know about India's AI Impact Summit; How Russia's propaganda machine weaponized the Trump-Putin meeting in Alaska; Who's Who in the shake-up in the European Commission's DG CNECT. More here.
— Why focusing on protecting kids online should not come at the price of breaking encryption; What Kremlin-backed media took from the Trump-Putin summit; The cottage industry of copyright lawsuits targeting AI companies. More here.
— The US, EU and China are building rival "AI Stacks" that will split the world into competing camps; How to understand the EU-US trade framework when it comes to tech and future tensions; The "AI Divide" is playing out in global research. More here.
Next up are Meta and TikTok. In a dual ruling on Sept. 10 (documents here and here), one of Europe's top courts will again rule on a key part of the bloc's online safety rules. This time, both tech giants claim the so-called DSA supervisory fee, the annual levy all VLOPs must pay to fund the regulation's implementation, is disproportionate and opaque.
The fee — which increased 21 percent this year, to €58.2 million — is based on the European Commission's calculation of up to a 0.05 percent charge on these tech firms' annual global net income. Both Meta and TikTok (and, in a separate legal challenge, Google) say those figures should only come from each firm's profit within the 27-country bloc, and not from their overall global income. In response, Brussels says such levies — a tiny slice of these firms' annual profits — do not violate companies' rights.
Depending on how the court rules, the decision will have ramifications for the DSA's (stuttering) implementation.
Currently, the European Commission has scores of open investigations. Temu, the Chinese online retailer, was the latest firm to be accused of breaching the rules. A potential separate enforcement action against X is expected in the coming weeks. These probes cost money. If Europe's top judges start cutting the funds available for DSA enforcement — based on TikTok and Meta's claims — then the regulation's implementation will similarly slow.
If one of Europe's top courts sides with the tech giants, then expect Brussels to claim business-as-usual, and likely dedicate additional resources from the bloc's almost €2 trillion budget. But the ability to charge VLOPs for DSA supervision is a pillar of how these online safety rules are supposed to work. It forms the basis for the European Commission's multi-year work plan on DSA supervision and enforcement. To suggest everything is fine, if next week's court decision goes against Brussels, would be a fantasy.
The next legal challenge takes us across the English Channel to the United Kingdom's Online Safety Act, or OSA. There is already growing disquiet after the country's so-called "age assurance" rules came into force late last month. Now, 4chan and Kiwi Farms filed a lawsuit in a federal court in the United States to challenge how the UK's online safety rules apply to these US-based online platforms.
I'm no lawyer. But the lawsuit is worth a read for two reasons.
First, 4Chan and Kiwi Farms — both of which received requests from Ofcom, the UK's online safety regulator, to comply with mandatory transparency demands — relied heavily on history to suggest they did not have to comply with the British rules. (Disclaimer: I sit on an independent advisory committee at Ofcom, so anything I say here is done so in a personal capacity.)
"Where Americans are concerned, the Online Safety Act purports to legislate the Constitution out of existence," lawyers for both firms wrote in the lawsuit. "Parliament does not have that authority. That issue was settled, decisively, 243 years ago in a war that the UK’s armies lost and are not in any position to re-litigate.”
Shots fired, if you will.
Under the UK's online safety regime, a company does not have to have a physical presence within the country to fall under the legislation. Technically, a site only needs to be accessible to British internet users for the regulatory requirements to apply, most of which focus on mandating a base level of transparency about how companies apply their internal online safety protocols. That means thousands of sites worldwide fall under the UK's OSA — even though almost none of them will ever be contacted the way 4Chan and Kiwi Farms were.
Determining how far the UK's OSA can extend to sites with no physical presence in the country — even if that comes via a US federal court — is a marker for how countries can extend their online safety rules in the name of protecting their citizens.
The second reason the case is important is more political.
Expect the federal lawsuit against Ofcom to be name-checked during a Congressional hearing, overseen by Congressman Jim Jordan, on Sept 3 entitled: "Europe's Threat to American Speech and Innovation." It will start at 10am ET and the current witness list includes noted online safety expert (jk!) Nigel Farage. Former European Commissioner Thierry Breton was invited, though he preferred to respond in an OpEd for The Guardian.
The 4Chan/Kiwi Farm lawsuit is important as it represents a new attack from some in the US who view any form of online safety regulation as a direct threat to Americans' First Amendment rights.
These individuals — most commonly associated with the "Censorship Industrial Complex" — have already accused researchers of acting in unison with the US federal government and social media platforms to censor those mostly on the political right. So far, there has been no evidence to back up those allegations.
Now, many are turning to non-US online safety legislation, most notably the EU's DSA and the UK's OSA, as a new attack vector to claim Americans' free speech rights are under attack. The 4Chan/Kiwi Farm lawsuit's arguments, including the illegal extraterritoriality of the British rules, are likely to be re-used in these ongoing efforts to thwart countries' push to protect their own citizens against online abuse and illegal content like terrorist propaganda.
Chart of the Week
EVERY COUNTRY UNDER THE SUN wants to be a semiconductor superpower. That's especially true in the global battle between rival "AI stacks" reliant on next generation semiconductors.
But there's a significant difference between those who make semiconductors and where the value resides in the overall global chip market.
The chart on the left depicts the 2024 worldwide revenue, divided as a percentage per company, for semiconductor foundries, or facilities that manufacture microchips. The chart on the right represents overall market values for semiconductor companies (based on Dec. 31, 2024 prices), divided by companies and countries.
On manufacturing, Taiwan is the global leader, by some margin. But in overall semiconductor value, the US (and, to a large degree, NVIDIA) are the ones to beat.
That's a reminder for any country spending taxpayers' dollars to entice semiconductor foundries to be built locally. Just because you back such in-country manufacturing doesn't mean the overall value within the semiconductor supply chain will follow.
Source: JPMorgan; Companiemarketcap.com. Data as of Dec. 2024
The consequences of tech sovereignty
I KNOW I'M BIASED. BUT IT'S HARD NOT TO VIEW the first eight months of 2025 as a demonstration of what happens when countries blend politics and technology in ways that lead to bad outcomes. Think the US-China stand-off on pretty much everything. Think the EU-US dispute over trade/digital regulation. Think the failure of Middle Powers to articulate a path on digital that is different to that offered by China, the US and Europe, respectively.
This is what happens when politicians and policymakers put forward a vision of "tech sovereignty" without thinking through what happens when you mix national/regional political needs with the global nature of how technology actually works.
Back in March, I made a plea for a more joined-up approach to that amorphous definition that, ever since European Commission president Ursula von der Leyen went hard on "tech sovereignty" five years ago, has been plagued with false starts, conflicting efforts and a failure to understand how such digital policymaking would end up playing out in the real world.
Fast forward to late(ish) 2025, and we are starting to feel the consequences of rival and, frustratingly, allied countries implementing "tech sovereignty" concepts that will inevitably harm citizens' fundamental rights and their ability to take advantage of what technology has to offer.
For me, those concepts include: countries asserting legal claims over the global internet; politicians subsidizing the creation/support of domestic industries that do not have the scale to compete on the global stage; the creation of artificial barriers between digital markets/goods that undermine fundamental rights; the politicization of apolitical digital regulation aimed at quelling abuse.
Some of these issues were almost inevitable, given the vast differences between how countries approach both digital policymaking and industrial policy. The US — based on its financial muscle, deregulatory stance and domestic industry — is just in a different place to, say, Singapore, which must approach questions about how technology affects its society in ways that meet its own domestic needs.
What worries me, though, is that the push toward 'tech sovereignty' has reached a point where it may be difficult to bring countries back from the edge of creating siloed digital worlds. That goes for everything from high-tech manufacturing that may face high import tariffs elsewhere to digital regulation aimed at safeguarding people's fundamental rights.
As technology has become a powerful engine, both for politics and industry, it was inevitable that politicians would want to exert greater power over digital areas of the economy and society. Where we are currently, however, is nearing a point of potentially killing the golden goose.
Technology, at its basic level, is an apolitical tool. And ever since the Web 1.0 era, that has been based on a borderless, hands-off approach to digital oversight — something that just isn't possible given the geopolitical nature of technology in 2025.
What I've been thinking a lot about is how we can marry the best of this laissez-faire approach to technology — one that allows firms and people to connect to each other, within seconds, across the globe — with the ability for politicians and policymakers to both protect citizens from harm and harness what technology has to offer to serve domestic economic interests.
Right now, that balance is failing, and badly.
It is leading to the siloing of citizens within national/regional digital fiefdoms. It is embracing a top-down approach to "tech sovereignty" that relegates people to passive spectators as their digital experiences are dictated for them. It is leaving millions behind as digital policymaking falls into three camps: led by China, the EU and US, respectively.
Watch this space for thoughts on how to fix that.
What I'm reading
— The Molly Rose Foundation analyzed TikTok and Instagram and found an ongoing high-level risk of exposure for minors to content linked to suicide, self-harm and depression-related material. More here.
— Bits of Freedom, a Dutch non-profit organization, filed a lawsuit against Meta so that its local users could access their Instagram and Facebook feeds in ways not based on user profiling. More here.
— Researchers from the University of Amsterdam created a social media network based on AI agents, and discovered the platform quickly recreated levels of polarization seen in real-world networks. More here.
— The White House's recent AI Action Plan is full of contradictory policies that may lead to the concentration of power of the emerging technology, argue three former Joe Biden-appointed officials for Tech Policy Press.
— Meta's Oversight Board published its annual report, including details on the 217 voluntary policy recommendations it has made to the tech firm since 2021. More here.
An Amiga Demo With No CPU Involved
Of the machines from the 16-bit era, the Commodore Amiga arguably has the most active community decades later, and it’s a space which still has the power to surprise. Today we have a story which perhaps pushes the hardware farther than ever before: a demo challenge for the Amiga custom chips only, no CPU involved.
The Amiga was for a time around the end of the 1980s the most exciting multimedia platform, not because of the 68000 CPU it shared with other platforms, but because of its set of custom co-processors that handled tasks such as graphics manipulation, audio, and memory. Each one is a very powerful piece of silicon capable of many functions, but traditionally it would have been given its tasks by the CPU. The competition aims to find out just how far an Amiga demo can run entirely on these chips, using the CPU only for a loader application, with the custom chip programming coming entirely from a pre-configured memory map which forms the demo.
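To give a flavour of what “programming from a memory map” means on the Amiga, here is a minimal, classic Copper list sketched as a C array. It is purely illustrative and not part of the competition's loader: once its address is written to the COP1LC registers, the Copper walks this list every frame on its own, changing the background colour partway down the screen without the 68000 executing a single instruction.

```c
#include <stdint.h>

// A classic, minimal Copper list: two MOVEs and a WAIT, terminated by the
// customary "impossible" WAIT. Register offsets are the standard Amiga
// custom-chip offsets (COLOR00 = $180); this is an illustration only.
static const uint16_t copper_list[] = {
    0x0180, 0x0000,   // MOVE #$0000 -> COLOR00: black background at top of frame
    0x6401, 0xFF00,   // WAIT for raster line $64 (horizontal position ignored)
    0x0180, 0x0F00,   // MOVE #$0F00 -> COLOR00: switch the background to red
    0xFFFF, 0xFFFE    // WAIT for a position that never arrives: end of list
};
// In a real setup the list lives in chip RAM and its address is written to
// COP1LC ($DFF080/$DFF082); the Copper then restarts it every vertical blank.
```

The entire visual effect is encoded in those sixteen bytes of chip RAM, which is precisely the spirit of the challenge: the "demo" is the memory map itself.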
The demoscene is a part of our community known for pushing hardware to its limits, and we look forward to seeing just what they do with this one. If you have never been to a demo party before, you should, after all everyone should go to a demo party!
Amiga CD32 motherboard: Evan-Amos, Public domain.