
2025 One Hertz Challenge: An Arduino-Based Heart Rate Sensor


How fast does your heart beat? It’s a tough question to answer, because our heart rate changes all the time depending on what we’re doing and how our body is behaving. However, [Ludwin] noted that resting heart rates often settle somewhere near 60 bpm on average. Thus, they entered a heart rate sensor in our 2025 One Hertz Challenge!

The build is based around a Wemos D1 mini, an ESP8266 development board. It’s hooked up to a MAX30102 heart rate sensor, which uses pulse oximetry to determine heart rate with a photosensor and LEDs. Basically, it’s possible to determine the oxygenation of blood by measuring its absorbance of red and infrared wavelengths, usually done by passing light through a finger. Meanwhile, by measuring the change in absorption of light in the finger as blood flows with the beat of the heart, it’s also possible to measure a person’s pulse rate.

The Wemos D1 takes the reading from the MAX30102, and displays it on a small OLED display. It reports heart rate in both beats per minute and in hertz. If you happen to get your heart rate to exactly 60 beats per minute, it will be beating at precisely 1 Hz. Perhaps, then, it’s the person using [Ludwin]’s build that is actually eligible for the One Hertz Challenge, since they’re the one doing something once per second?
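For anyone wanting to play along at home, the reading loop can be quite compact. Here’s a minimal sketch of the idea, assuming SparkFun’s MAX3010x Arduino library and its beat-detection helper, with serial output standing in for the OLED (and note that [Ludwin]’s actual firmware may well differ):

```cpp
#include <Wire.h>
#include <MAX30105.h>   // SparkFun MAX3010x sensor library
#include <heartRate.h>  // SparkFun beat-detection helper

MAX30105 sensor;
long lastBeat = 0;      // timestamp of the previous detected beat (ms)

void setup() {
  Serial.begin(115200);
  if (!sensor.begin(Wire, I2C_SPEED_FAST)) {
    Serial.println("MAX30102 not found");
    while (true) {}
  }
  sensor.setup();       // default config: red + IR LEDs on
}

void loop() {
  long ir = sensor.getIR();                  // raw IR reflectance sample
  if (checkForBeat(ir)) {                    // true on each detected pulse
    long now = millis();
    float bpm = 60000.0 / (now - lastBeat);  // beats per minute
    lastBeat = now;
    if (bpm > 20 && bpm < 255) {             // discard obvious glitches
      Serial.print(bpm);
      Serial.print(" BPM = ");
      Serial.print(bpm / 60.0, 3);           // 60 BPM -> 1.000 Hz
      Serial.println(" Hz");
    }
  }
}
```

The only math the One Hertz angle needs is that last conversion: hertz is just beats per minute divided by 60.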

In any case, it shows just how easy it is to pick up biometric data these days. You only need a capable microcontroller and some off-the-shelf sensors, and you’re up and running.

youtube.com/embed/1OFFsdR9g3k?…

2025 Hackaday One Hertz Challenge


hackaday.com/2025/08/15/2025-o…


Gentle Processing Makes Better Rubber That Cracks Less


Rubber! It starts out as a goopy material harvested from special trees, and is then processed into a resilient, flexible material used for innumerable important purposes. In the vast majority of applications, rubber is prized for its elasticity, which eventually goes away with repeated stress cycles, exposure to heat, and time. When a rubber part starts to show cracks, it’s generally time to replace it.

Researchers at Harvard have now found a way to potentially increase rubber’s ability to withstand cracking. The paper, published in Nature Sustainability, outlines how the material can be treated to provide far greater durability and toughness.

Big Flex

Note the differences between a short-chain crosslinked structure, and the longer-chain tanglemer structure with far fewer crosslinks. The latter is far better at resisting crack formation, since the longer chain can deconcentrate stress over a longer distance, allowing far greater stretch before failure. Credit: research paper
The traditional method of producing rubber products starts with harvesting the natural rubber latex from various types of rubber tree. The trees are tapped to release their milky sap, which is then dried, processed with additives, and shaped into the desired form before heating with sulfur compounds to vulcanize the material. It’s this last step that is key to producing the finished product we know as rubber, as used in products like tires, erasers, and o-rings. The vulcanization process causes the creation of short crosslinked polymer chains in the rubber, which determine the final properties and behavior of the material.

Harvard researchers modified the traditional rubber production process to be gentler. Typical rubber production includes heavy-handed mixing and extruding steps which tend to “masticate” the polymers in the material, turning them into shorter chains. The new, gentler process better preserves the long polymer chains initially present in the raw rubber. When put through the final stages of processing, these longer chains form into a structure referred to as a “tanglemer”, where the tangles of long polymer chains actually outnumber the sparse crosslinks between the chains in the structure.
The gentler production method involves drying the latex and additive mixture at room temperature to form a film, before hot-pressing it to form the final tanglemer structure. This process isn’t practical for producing large, thick parts. Credit: research paper
This tanglemer structure is much better at resisting crack formation. “At a crack tip in the tanglemer, stress deconcentrates over a long polymer strand between neighbouring crosslinks,” notes the research paper. “The entanglements function as slip links and do not impede stress deconcentration, thus decoupling modulus and fatigue threshold.” Plus, these long, tangled polymer chains are just generally better at spreading out stress in the material than the shorter crosslinked chains found in traditional vulcanized rubber. With the stress more evenly distributed, the rubber is less likely to crack or fail in any given location. The material is thus far tougher, more durable, and more flexible. These properties hold up even over repeated loading cycles.

youtube.com/embed/UvGIVUFnsjg?…

Overall, the researchers found the material to be four times better at resisting crack growth during repeated stretch cycles. It also proved to be ten times tougher than traditional rubber. However, the new gentler processing method is fussy, and cannot outperform traditional rubber processing in all regards. After all, there’s a reason things are done the way they are in industry. Most notably, it relies on a lot of water evaporation, and it’s not currently viable for thick-wall parts like tires, for example. For thinner rubber parts, though, the mechanical advantages are all there—and this method could prove useful.

Ultimately, don’t expect to see this new ultra-rubber revolutionizing the tire market or glove manufacturing overnight. However, the research highlights an important fact—rubber can be made with significantly improved properties if the longer polymer chains can be preserved during processing, and tangled instead of excessively cross-linked. There may be more fruitful ground to explore to find other ways in which we can improve rubber by giving it a better, more resilient structure.


hackaday.com/2025/08/15/gentle…


Hackaday Podcast Episode 333: Nightmare Whiffletrees, 18650 Safety, and a Telephone Twofer


This week, Hackaday’s Elliot Williams and Kristina Panos met up over the tubes to bring you the latest news, mystery sound, and of course, a big bunch of hacks from the previous week.

In Hackaday news, get your Supercon 2025 tickets while they’re hot! Also, the One Hertz Challenge ticks on, but time is running out. You have until Tuesday, August 19th to show us what you’ve got, so head over to Hackaday.IO and get started now. Finally, it’s the end of eternal September as AOL discontinues dial-up service after all these years.

On What’s That Sound, Kristina got sort of close, but this is neither horseshoes nor hand grenades. Can you get it? If so, you could win a limited edition Hackaday Podcast t-shirt!

After that, it’s on to the hacks and such, beginning with a talking robot that uses typewriter tech to move its mouth. We take a look at hacking printed circuit boards to create casing and instrument panels for a PDP-1 replica. Then we explore a fluid simulation business card, witness a caliper shootout, and marvel at one file in six formats. Finally, it’s a telephone twofer as we discuss the non-hack-ability of the average smart phone, and learn about what was arguably the first podcast.

Check out the links below if you want to follow along, and as always, tell us what you think about this episode in the comments!

html5-player.libsyn.com/embed/…

Download in DRM-free MP3 and savor at your leisure.

hackaday.com/2025/08/15/hackad…


Why Lorde’s Clear CD has so Many Playback Issues


2003 Samsung CD player playing a clear vs normal audio CD. (Credit: Adrian's Digital Basement)

Despite the regularly proclaimed death of physical media, new audio albums are still being published on CD and vinyl. There’s something particularly interesting about Lorde’s new album Virgin however — the CD is a completely clear disc. Unfortunately there have been many reports of folks struggling to get the unique disc to actually play, and some sharp-eyed commentators have noted that the CD doesn’t even claim to be Red Book compliant, given the absence of the Compact Disc logo.
The clear Lorde audio CD in all its clear glory. (Credit: Adrian’s Digital Basement, YouTube)
To see what CD players see, [Adrian] of Adrian’s Digital Basement got out some tools and multiple CD players to dig into the issue. The players include a 2003 Samsung, a 1987 NEC, and a cheap portable Coby unit. But as all audio CDs are supposed to adhere to the Red Book standard, a 2025 CD should play just as happily on a 1980s CD player as vice versa.

The first step in testing was to identify the laser pickup (RF) signal test point on the PCB of each respective player. With this hooked up to a capable oscilloscope, you can begin to see the eye pattern forming. In addition to being useful for tuning the CD player, it’s also an indication of the signal quality that the rest of the CD player has to work with. Incidentally, this is also a factor when it comes to CD-R compatibility.

While the NEC player was happy with regular and CD-R discs, its laser pickup failed to get any solid signal off the clear Lorde disc. With the much newer Samsung player (see top image), the clear CD does play, but as the oscilloscope shot shows, it only barely gets a usable signal from the pickup. Likewise, the very generic Coby player also plays the audio CD, which indicates that any somewhat modern CD player with its generally much stronger laser and automatic gain control ought to be able to play it.

That said, it seems that very little of the laser’s light actually makes it back to the pickup’s sensor, which means that the gain probably gets cranked up to 11, and with that its remaining lifespan will be significantly shortened. Ergo it’s probably best to just burn that CD-R copy of the album and listen to that instead.

youtube.com/embed/s3IvVUh3mt4?…


hackaday.com/2025/08/15/why-lo…


This Week in Security: The AI Hacker, FortMajeure, and Project Zero


One of the hot topics currently is using LLMs for security research. Poor quality reports written by LLMs have become the bane of vulnerability disclosure programs. But there is an equally interesting effort going on to put LLMs to work doing actually useful research. One such story is [Romy Haik] at ULTRARED, trying to build an AI Hacker. This isn’t an over-eager newbie naively asking an AI to find vulnerabilities; [Romy] knows what he’s doing. We know this because he tells us plainly that the LLM-driven hacker failed spectacularly.

The plan was to build a multi-LLM orchestra, with a single AI sitting at the top that maintains state through the entire process. Multiple LLMs sit below that one, deciding what to do next and exactly how to approach the problem, and actually generating commands for the tools in use. Then yet another AI takes the output and figures out if the attack was successful. The tooling was assembled, and [Romy] set it loose on a few intentionally vulnerable VMs.

As we hinted at up above, the results were fascinating but dismal. The LLM successfully found one Remote Code Execution (RCE), one SQL injection, and three Cross-Site Scripting (XSS) flaws. This whole post is sort of sneakily an advertisement for ULTRARED’s actual automated scanner, which uses more conventional methods to scan for vulnerabilities. But it’s a useful comparison, and it found nearly 100 vulnerabilities among the collection of targets.

The AI did what you’d expect, finding plenty of false positives. Ask an AI to describe a vulnerability, and it will gladly do so — no real vulnerability required. But the real problem was the multitude of times that the AI stack did demonstrate a problem, and failed to realize it. [Romy] has thoughts on why this attempt failed, and two points stand out. The first is that while the LLM can be creative in making attacks, it’s really terrible at accurately analyzing the results. The second observation is one of the most important to keep in mind regarding today’s AIs: it doesn’t actually want to find a vulnerability. One of the marks of security researchers is the near obsession they have with finding a great score.

DARPA


Don’t take the previous story to mean that AI will never be able to do vulnerability research, or even that it’s not a useful tool right now. The US DARPA sponsored a competition at this year’s DEF CON, and another security professional pointed out that the Buttercup AI Cyber Reasoning System (CRS) is the second place winner. It’s now available as an Open Source project.

This challenge was a bit different from an open-ended attack on a VM. In the DARPA challenge, the AI tools are given specific challenges, and a C or Java codebase, and told to look for problems. Buttercup uses an AI-guided fuzzing approach, and one of the notable advantages with this challenge is that oftentimes a vulnerability will cause an outright crash in the program, and that’s hard to miss, even for an AI.

Team Atlanta took first place, and has some notes on their process. Their first-place finish was almost derailed from the start, due to a path-checking rule meant to comply with the contest rules. The AI tools were provided fuzzing harnesses that they were not allowed to modify, and the end goal was for the AIs to actually write patches to fix the issues found. All of the challenges were delivered inside directories containing ossfuzz, triggering the code that protected against breaking the no-modification rules. A hasty code hacking session right at the last moment managed to clear this, and saved the entire competition.

FortMajeure


We have this write-up from [0x_shaq], finding a very fun authentication bypass in FortiWeb. The core problem is a lack of validation on one part of the session cookie. This cookie has a couple of sections that we care about: the Era field is a single-digit integer that seems to indicate a protocol version or session type, while the Payload and AuthHash fields are the encrypted session information and the signed hash used for verification.

That Era field is only ever expected to be a 0 or a 1, but the underlying code processes the other eight possible values the same way: by accessing the nth element of an array, even if the array doesn’t actually have that many initialized elements. And one of the things that array will contain is the encryption/signing key for the session cookie. This uninitialized memory is likely to be mostly or entirely nulls, making for a very predictable session key.
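In code, the bug class looks something like this hypothetical sketch (our illustration of the pattern [0x_shaq] describes, not FortiWeb’s actual source):

```cpp
#include <cstring>

struct KeySlot { unsigned char key[16]; };

// Only two eras are real; these slots get filled in at boot.
static KeySlot key_table[2];

// BUG: 'era' comes straight from the attacker's cookie. Values 2..9
// index past the table into adjacent, often zeroed, memory, handing
// back a predictable signing key.
void get_session_key(int era, unsigned char out[16]) {
    std::memcpy(out, key_table[era].key, 16);
}

// The fix is the boring bounds check the code was missing:
// if (era < 0 || era > 1) reject_request();

int main() {
    unsigned char k[16];
    get_session_key(1, k);   // legitimate era
    // get_session_key(7, k);  // attacker's Era digit: out-of-bounds read
    return 0;
}
```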

Project Zero


Google has a couple of interesting items on their Project Zero blog. The first is from late July, and outlines a trial change to disclosure timelines. The problem here is that a 90 day disclosure window gives the immediate vendor plenty of time to patch an issue, but even with a 30 day extension, it’s a race for all of the downstream users to apply, test, and distribute the fix. The new idea is to add a one-week vulnerability pre-disclosure: one week after a vulnerability is found, its existence is added to the chart of upcoming releases. So if you ship Dolby’s Unified Decoder in a project or product, mark your calendar for September 25, among the other dozen or so pre-released vulnerabilities.

The second item from Project Zero is a vulnerability found in Linux that could be triggered from within the Chrome renderer sandbox. At the heart of the matter is the Out Of Band (OOB) byte that can be sent over Unix sockets. This is a particularly obscure feature, yet it’s enabled by default, which is a great combination for security research.

The kernel logic for this feature could get confused when dealing with multiples of these one-byte messages, and eventually free kernel memory while a pointer is still pointing to it. Use the recv() syscall again on that socket, and the freed memory is accessed. This results in a very nice kernel memory read primitive, but also a very constrained write primitive. In this case, it’s to increment a single byte, 0x44 bytes into the now-freed data structure. Turning this into a working exploit was challenging but doable, and mainly consisted of constructing a fake object in user-controlled memory, triggering the increment, and then using the socket again to coerce the kernel into using the fake object.
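If you’ve never run into the feature, here’s a minimal demonstration of the MSG_OOB corner of AF_UNIX sockets that the bug lives in, using just the standard API on a reasonably recent Linux kernel (AF_UNIX only gained MSG_OOB support a few years back), and not the exploit itself:

```cpp
#include <sys/socket.h>
#include <cstdio>

int main() {
    int fd[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fd) != 0) return 1;

    send(fd[0], "A", 1, 0);        // ordinary in-band byte
    send(fd[0], "B", 1, MSG_OOB);  // the obscure "out of band" path

    char c;
    recv(fd[1], &c, 1, MSG_OOB);   // fetch the OOB byte first
    printf("OOB byte: %c\n", c);   // prints 'B'
    recv(fd[1], &c, 1, 0);         // then the normal stream
    printf("in-band:  %c\n", c);   // prints 'A'

    // The kernel bug: juggling several such one-byte OOB messages could
    // leave a dangling pointer to freed memory, read back via recv().
    return 0;
}
```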

Bits and Bytes


Cymulate has the story of a Microsoft NTLM patch that wasn’t quite enough. The original problem was that a Windows machine could be convinced to connect to a remote NTLM server to retrieve a .ico file. The same bug can be triggered by creating a shortcut that implies the .ico is embedded inside the target binary itself, and putting that on a remote SMB share. It’s particularly bad because this one will access the server, and leak the NTLM hash, just by displaying the icon on the desktop.

Xerox FreeFlow Core had a pair of exploits, the more serious of which could enable an unauthenticated RCE. The first is an XML External Entity (XXE) injection issue, where a user request could result in the server fetching remote content while processing the request. The more serious is a simple file upload with path traversal, making for an easy webshell dropper.

Claroty’s Team82 dug into the Axis Communications protocol for controlling security cameras, and found some interesting items. The Axis.Remoting protocol uses mutual TLS, which is good. But those are self-signed certificates that are never validated, allowing for trivial man-in-the-middle attacks. The most serious issue was a JSON deserialization vulnerability, allowing for RCE on the service itself. Patches are available, and are particularly important for Axis systems that are exposed on the open Internet.


hackaday.com/2025/08/15/this-w…


Google Fixes a Critical Bug in Gemini That Allowed Users to Be Tracked


Google’s developers have fixed a bug that allowed malicious Google Calendar invites to take remote control of Gemini agents running on the victim’s device and steal user data. Gemini is Google’s Large Language Model (LLM) integrated into Android apps.

SafeBreach researchers discovered that by sending the victim an invitation with an embedded Google Calendar prompt (which could be hidden, for example, in the event title), attackers were able to extract email content and calendar information, track the user’s location, control smart home devices via Google Home, open Android apps, and start Zoom video calls.

In their report, the experts point out that this kind of attack required no white-box access to the model, and was not blocked by Gemini’s prompt filters or other defense mechanisms.

The attack begins by sending the victim an event invitation via Google Calendar whose title contains a malicious message. Once the victim interacts with Gemini, for example by asking “What events are on my calendar today?”, the AI pulls a list of events from Calendar, including the malicious one.

As a result, the malicious prompt became part of Gemini’s context window, and the assistant treated it as part of the conversation, without realizing that the instruction was hostile to the user.

Depending on the prompt used, attackers could launch various tools or agents to delete or modify Calendar events, open URLs to determine the victim’s IP address, join Zoom calls, use Google Home to control devices, and access emails to exfiltrate data.

The researchers observed that an attacker could send six invitations, embedding the malicious prompt only in the last one, to ensure the attack works while still maintaining a certain level of stealth.

The trick is that Calendar only displays the five most recent events, with the rest hidden behind the “Show more” button. When queried, however, Gemini parses all of them, including the malicious one. At the same time, the user won’t see the malicious title unless they manually expand the event list.

Google responded to the SafeBreach report by stating that the company is continuously rolling out new defenses for Gemini to counter a wide range of attacks, and that many of these measures are planned for imminent deployment or are already being rolled out.

The article Google Fixes a Critical Bug in Gemini That Allowed Users to Be Tracked comes from il blog della sicurezza informatica.


Teletext Around the World, Still


When you mention Teletext or Videotex, you probably think of the 1970s British system, the well-known system in France, or the short-lived US attempt to launch the service. Before the Internet, there were all kinds of crazy ways to deliver customized information into people’s homes. Old-fashioned? Turns out Teletext is alive and well in many parts of the world, and [text-mode] has the story of both the past and the present with a global perspective.

The whole thing grew out of the desire to send closed caption text. In 1971, Philips developed a way to do that by using the vertical blanking interval that isn’t visible on a TV. Of course, there needed to be a standard, and since standards are such a good thing, the UK developed three different ones.

The TVs of the time weren’t exactly the high-resolution devices we think of these days, so the 1976 Level 1 standard allowed for regular (but Latin) characters and an alternate set of blocky graphics you could show on an expansive 40×24 grid in glorious color, as long as you think seven colors is glorious. Level 1.5 added characters the rest of the world might want, and this so-called “World System Teletext” is still the basis of many systems today. It was better, but still couldn’t handle the 134 characters in Vietnamese.

Meanwhile, the French also wanted in on the action and developed Antiope, which had more capabilities. The United States would, at least partially, adopt this standard as well. In fact, the US fragmented between both systems along with a third system out of Canada until they converged on AT&T’s PLP system, renamed as North American Presentation Layer Syntax or NAPLPS. The post makes the case that NAPLPS was built on both the Canadian and French systems.

That was in 1986, and the Internet was getting ready to turn all of these developments, like the $200 million Canadian system, into a roaring dumpster fire. The French even abandoned their homegrown system in favor of World System Teletext. The post says that as of 2024, at least 15 countries still maintain teletext.

So that was the West. What about behind the Iron Curtain, the Middle East, and in Asia? Well, that’s the last part of the post, and you should definitely check it out.
Japan’s version of teletext, still in use as of the mid-1990s, was one of the most advanced.
If you are interested in the underlying technology, teletext data lives in the vertical blanking interval between frames on an analog TV system. Data had page numbers. If you requested a page, the system would either retrieve it from a buffer or wait for it to appear in the video signal. Some systems send a page at a time, while others send bits of a page on each field. In theory, the three-digit page number can range from 100 to 0x8FF, although in practice, too many pages slow down the system, and normal users can’t key in hex numbers.

For PAL, for example, the data resides in even lines between 6 and 22, or in lines 318 to 335 for odd lines. Systems can elect to use fewer lines. A black signal is a zero, while a 66% white signal is a one, and the data is in NRZ line coding. There is a framing code to identify where the data starts. Other systems have slight variations, but the overall bit rate is around 5 to 6 Mbit/s. Character speeds are slightly slower due to error correction and other overhead.
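As a concrete example of the numbering, here’s a quick sketch of how a page number splits into the magazine and page fields carried in the packet headers, on the usual World System Teletext convention that magazine 8 is transmitted as 0:

```cpp
#include <cstdio>

int main() {
    int page = 0x8FF;                  // "page 8FF", the top of the range
    int magazine = (page >> 8) & 0x7;  // 3 bits on the wire, so 8 wraps to 0
    int pageInMag = page & 0xFF;       // two hex digits within the magazine
    printf("magazine bits: %d, page: %02X\n", magazine, pageInMag);
    // Regular remotes only reach pages 100-899, since viewers can't key
    // in hex digits, but the format itself allows A-F.
    return 0;
}
```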

Honestly, we thought this was all ancient history. You have to wonder which country will be the last one standing as the number of Teletext systems continues to dwindle. Of course, we still have closed captions, but with digital television, it really isn’t the same thing. Can Teletext run Doom? Apparently, yes, if you stretch your definition of success a bit.


hackaday.com/2025/08/15/telete…


One ん Too Many! Phishing Impersonating Booking.com with the Homoglyph Technique


Attackers have started using an unusual trick to disguise phishing links, making them look like Booking.com addresses. The new malware campaign uses the Japanese hiragana character “ん” (U+3093). In some fonts and interfaces it visually resembles a forward slash, making the URL look like an ordinary path on the site, when it actually leads to a fake domain.

Researcher JAMESWT discovered that in the phishing emails the link looks like this:

admin.booking.com/hotel/hotela…

but actually sends the user to an address of the form

account.booking.comんdetail

Everything before “www-account-booking[.]com” is just a subdomain mimicking the structure of the real site. The actual registered domain belongs to the attackers. Clicking the link lands the victim on the page

www-account-booking[.]com/c.php?a=0

from which a malicious MSI file is downloaded from the CDN node updatessoftware.b-cdn[.]net.

According to analysis from MalwareBazaar and ANY.RUN, the installer deploys additional components, most likely infostealers or remote access tools.

The technique relies on homoglyphs, characters that look like others but belong to different alphabets or Unicode sets. Such characters are often used in homograph attacks and phishing. One example is the Cyrillic “О” (U+041E), which is almost indistinguishable from the Latin “O” (U+004F). Although browser and service developers keep adding protections against such substitutions, the attacks continue.
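A crude version of the first line of defense is easy to sketch: flag any hostname containing bytes outside plain ASCII, which catches characters like “ん” hiding in a UTF-8 URL. Real IDN and homograph policies in browsers are considerably more nuanced; this only shows the idea:

```cpp
#include <string>
#include <cstdio>

// Flag hostnames containing any non-ASCII byte, a cheap homoglyph red flag.
bool looksSuspicious(const std::string& host) {
    for (unsigned char c : host)
        if (c >= 0x80) return true;  // non-ASCII byte: possible homoglyph
    return false;
}

int main() {
    // "\xE3\x82\x93" is the UTF-8 encoding of the hiragana ん
    printf("%d\n", looksSuspicious("account.booking.com\xE3\x82\x93" "detail.example"));  // 1
    printf("%d\n", looksSuspicious("admin.booking.com"));                                 // 0
    return 0;
}
```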

This isn’t the first time Booking.com has been used as phishing bait. In March, Microsoft Threat Intelligence reported emails disguised as the booking service that used the ClickFix technique to infect hotel employees’ computers. And in April, Malwarebytes researchers reported a similar scheme.

Still, homoglyphs like “ん” can fool even the most attentive users, so it’s important to back up caution with up-to-date antivirus software capable of blocking malicious downloads.

The article One ん Too Many! Phishing Impersonating Booking.com with the Homoglyph Technique comes from il blog della sicurezza informatica.


redhotcyber.com/post/e-bastata…


Open Source Lithium-Titanate Battery Management System


Lithium-titanate (LTO) is an interesting battery chemistry that is akin to Li-ion but uses Li2TiO3 nanocrystals instead of carbon for the anode. This makes LTO cells capable of much faster charging and with better stability characteristics, albeit at the cost of lower energy density. Much like LiFePO4 cells, this makes them interesting for a range of applications where the highest possible energy density isn’t the biggest concern, while providing even more stability and long-term safety.

That said, LTO is uncommon enough that finding a battery management system (BMS) can be a bit of a pain. This is where [Vlastimil Slintak]’s open source LTO BMS project may come in handy, which targets single cell (1S) configurations with the typical LTO cell voltage of around 1.7 – 2.8V, with 3 cells in parallel (1S3P). This particular BMS was designed for low-power applications like Meshtastic nodes, as explained on the accompanying blog post which also covers the entire development and final design in detail.

The BMS design features all the stuff that you’d hope is on there, like under-voltage, over-voltage and over-current protection, with an ATtiny824 MCU providing the brains. Up to 1 A of discharge and charge current is supported, for about 2.4 watts at the average cell voltage. With the triple 1,300 mAh LTO cells in the demonstrated pack you’d have over 9 Wh of capacity, and the connected hardware can query the BMS over I2C for a range of statistics.
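As a taste of what talking to a BMS like this over I2C might look like from the host side, here’s a hypothetical Arduino sketch. The address and register numbers are made up for illustration; the real register map lives in [Vlastimil]’s documentation:

```cpp
#include <Wire.h>

const uint8_t BMS_ADDR = 0x42;        // hypothetical I2C address
const uint8_t REG_VOLTAGE_MV = 0x01;  // hypothetical: cell voltage in mV, 16-bit

// Read a 16-bit little-endian register from the BMS.
uint16_t readReg16(uint8_t reg) {
  Wire.beginTransmission(BMS_ADDR);
  Wire.write(reg);
  Wire.endTransmission(false);        // repeated start
  Wire.requestFrom(BMS_ADDR, (uint8_t)2);
  uint16_t lo = Wire.read();
  uint16_t hi = Wire.read();
  return (hi << 8) | lo;
}

void setup() {
  Serial.begin(115200);
  Wire.begin();
}

void loop() {
  Serial.print("Cell: ");
  Serial.print(readReg16(REG_VOLTAGE_MV));
  Serial.println(" mV");              // LTO: expect roughly 1700-2800 mV
  delay(1000);
}
```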

Thanks to [Marcel] for the tip.


hackaday.com/2025/08/15/open-s…


Rediscovering Microsoft’s Oddball Music Generator From The 1990s


There has been a huge proliferation in AI music creation tools of late, and a corresponding uptick in the number of AI artists appearing on streaming services. Well before the modern neural network revolution, though, there was an earlier tool in this same vein. [harke] tells us all about Microsoft Music Producer 1.0, a forgotten relic from the 1990s.

The software wasn’t ever marketed openly. Instead, it was a part of Microsoft Visual InterDev, a web development package from 1997. It allowed the user to select a style, a personality, and a band to play the song, along with details like key, tempo, and the “shape” of the composition. It would then go ahead and algorithmically generate the music using MIDI instruments and in-built synthesized sounds.

As [harke] demonstrates, there is a huge number of genres to choose from. Pick one, and you’ll most likely find it sounds nothing like the contemporary genre it’s supposed to be recreating. The more gamey genres, though, like “Adventure” or “Chase”, actually sound pretty okay. The moods are hilariously specific, too — you can have a “noble” song, or a “striving” or “serious” one. [harke] also demonstrates building a full song with the “7AM Illusion” preset, exporting the MIDI, and then adding her own instruments and vocals in a DAW to fill it out. The result is what you’d expect from a composition relying on the Microsoft GS Wavetable synth.

Microsoft might not have cornered the generative music market in the 1990s, but generative AI is making huge waves in the industry today.

youtube.com/embed/EdL6b8ZZRLc?…


hackaday.com/2025/08/14/redisc…


Calibration, Good Old Calibration


Do you calibrate your digital meters? Most of us don’t have the gear to do a proper calibration, but [Mike Wyatt] shares his simple way to calibrate his DMMs using a precision resistor coupled with a thermistor. The idea is to use a standard dual banana plug along with a 3D-printed housing to hold the simple electronics.

The calibration element is a precision resistor. But the assembly includes a 1% thermistor. In addition to the banana plugs, there are test points to access the resistor and another pair for the thermistor.

In use, you plug the device into the unit you want to test. Then you clip a separate temperature sensor to the integrated thermistor. Because the thermistor is in close proximity to the meter’s input, it can tell you the difference between the ambient temperature and the temperature at the meter. [Mike] says the bench meters get warmer than hand-held units.
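The correction itself is back-of-envelope math: scale the nominal resistance by its temperature coefficient times the temperature offset. A quick sketch with example values (not [Mike]’s actual parts):

```cpp
#include <cstdio>

int main() {
    const double r_nominal = 1000.0;  // 1 kohm reference, specified at 23 C
    const double tempco_ppm = 10.0;   // e.g. a 10 ppm/C precision part
    double t_measured = 31.5;         // from the thermistor, meter running warm

    // R(T) = Rnom * (1 + tempco * dT)
    double r_actual = r_nominal *
        (1.0 + tempco_ppm * 1e-6 * (t_measured - 23.0));
    printf("expected reading: %.4f ohms\n", r_actual);  // ~1000.0850
    return 0;
}
```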

This is, of course, not a perfect setup if you are a real metrology stickler. But it can be helpful. [Mike] suggests the precision resistor be over 100 ohms since anything less really isn’t a candidate for a precision measurement with two wires. Debating over calibration? We do that, too.


hackaday.com/2025/08/14/calibr…


Bench-Top Wireless Power Transmission


A photo of the power supply, distribution board, and primary and secondary windings on a bench top.

[mircemk] has been working on wireless power transmission. Using a Class-E Tesla coil with 12 turns on the primary, 8 turns on the secondary, and a 12 volt input, he can send a few milliwatts to power an LED over a distance of more than 40 centimeters, or power a 10 watt bulb over a distance of about 10 centimeters. With the DC input set at 24 volts, the apparatus can deliver 5 watts over a distance of a few centimeters, and a light is still visible after separating the primary and secondary coils by more than 30 centimeters.

There are many types of Tesla coil, and we can’t go into the details here, but they include Spark-Gap Tesla Coils (SGTC) and Solid-State Tesla Coils (SSTC), among others. The Class-E coil demonstrated in this project is a type of SSTC, which in general is more efficient than an SGTC alternative.

Please bear in mind that while it is perfectly safe to watch a YouTube video of a person demonstrating a functional Tesla coil, building your own is hazardous and probably not a good idea unless you really understand what you’re doing! Particularly high voltages can be involved and EMI/RFI emissions can violate regulations. You can damage your body with RF burns while not feeling any pain, and without even knowing that it’s happening.

If you’d like to read more about wireless power transmission, it’s certainly a topic we’ve covered here at Hackaday in the past; you might like to check out Wireless Power Makes For Cable-Free Desk or Transmitting Wireless Power Over Longer Distances.

youtube.com/embed/6k1Oj8ioWsg?…


hackaday.com/2025/08/14/bench-…


DIY Wind Turbine Gets a 3-Phase Rectifier


[Electronoobs] is using some brushless motors to make a DIY wind turbine. His recent video isn’t about the turbine itself, but a crucial electronic part: the three-phase rectifier. The rectifier is so important because of the use of brushless motors; normal motors are not ideal for generating power for several reasons, as explained in the video below.

The brushless motors have three windings and generate three outputs, each out of phase with the others. You can’t just join them together because they are 120 degrees out of phase. But a special rectifier can merge the inputs efficiently and output a low-ripple DC voltage.

The rectifier will have to handle a lot of power, so it uses beefy devices with heat sinks. The design is very similar to a full-wave bridge rectifier, but instead of two legs, each with two diodes, this one has three legs. This is still not as efficient as you would like. A synchronous rectifier would be even more efficient but also more complicated.
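A tiny numerical sketch shows why the three-leg bridge’s output is so smooth. With ideal diodes, the output simply rides the largest line-to-line voltage at any instant:

```cpp
#include <cstdio>
#include <cmath>

int main() {
    const double TWO_PI = 6.283185307179586;
    double vmin = 1e9, vmax = -1e9;
    for (int i = 0; i < 3600; i++) {          // sweep one electrical cycle
        double wt = TWO_PI * i / 3600.0;
        double a = sin(wt);                   // the three phase EMFs,
        double b = sin(wt - TWO_PI / 3.0);    // 120 degrees apart
        double c = sin(wt + TWO_PI / 3.0);
        double hi = fmax(a, fmax(b, c));      // upper diode rail conducts
        double lo = fmin(a, fmin(b, c));      // lower diode rail conducts
        double vout = hi - lo;                // ideal bridge output
        vmin = fmin(vmin, vout);
        vmax = fmax(vmax, vout);
    }
    printf("ripple: %.1f%% peak-to-peak\n", 100.0 * (vmax - vmin) / vmax);
    return 0;
}
```

That roughly 13% peak-to-peak wiggle (around 4% RMS) is far gentler than the 100% swing of a single-phase full-wave rectifier, which is why the motor’s three phases are worth the extra diodes.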

Still, we have no doubt the board will do its job. We’re anxious to see the turbine come together. Want to build your own? Maybe start smaller. Too big? You can strip it down even further.

youtube.com/embed/4hBOTZeXqbc?…


hackaday.com/2025/08/14/diy-wi…


2025 One Hertz Challenge: Blinking An LED With The Aid Of Radio Time


If you want to blink an LED once every second, you could use just about any old timer circuit to create a 1 Hz signal. Or, you could go the complicated route like [Anthony Vincz] and grab 1 Hz off a radio clock instead.

The build is an entry for the 2025 One Hertz Challenge, with [Anthony] pushing himself to whip up a simple entry on a single Sunday morning. He started by grabbing an NE567 tone decoder IC, which uses a phase-locked loop to trigger an output when it detects a tone of a given frequency. [Anthony] had previously used this chip hooked up to an Arduino as a Morse decoder, picking up sound from an electret mic and decoding it into readable output.

However, he realized he could repurpose the NE567 to blink in response to the output of radio time stations like the 60 kHz British and 77.5 kHz German broadcasts. He thus grabbed a software-defined radio, tuned it to one of the time stations, and adjusted the signal to produce a regular 800 Hz tone from his computer’s speakers that cycled once every second. He then tweaked the NE567 so it would trigger off this repetitive tone every second, flashing an LED.
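The NE567’s lock point is set by a single RC pair, with a center frequency of roughly f0 = 1/(1.1·R1·C1) per the datasheet. A quick check with example values (not necessarily [Anthony]’s) that land near the 800 Hz tone used here:

```cpp
#include <cstdio>

int main() {
    double r1 = 5600.0;  // timing resistor in ohms (example value)
    double c1 = 0.2e-6;  // timing capacitor in farads (example value)
    double f0 = 1.0 / (1.1 * r1 * c1);  // NE567 datasheet approximation
    printf("f0 = %.0f Hz\n", f0);       // ~812 Hz, close enough to lock on 800
    return 0;
}
```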

Is it the easiest way to flash an LED? No. It’s complicated, but it’s also creative. They say a one hertz signal is always in the last place you look.

youtube.com/embed/vjqnhFVmqjU?…

2025 Hackaday One Hertz Challenge


hackaday.com/2025/08/14/2025-o…


For Americans Only: Estimating Celsius and Other Mental Metrics


I know many computer languages, but I’ve struggled all my life to learn a second human language. One of my problems is that I can’t stop trying to translate in my head. Just like Morse code, you need to understand things directly, not translate. But you have to start somewhere. One of the reasons metric never caught on in the United States is that it is hard to do exact translations while you are developing intuition about just how hot 35 °C is or how long 8 cm is.

If you travel, temperature is especially annoying. When the local news tells you the temperature is going to be 28, it is hard to do the math in your head to decide if you need a coat or shorts.

Ok, you are a math whiz. And you have a phone with a calculator and, probably, a voice assistant. So you can do the right math, which is (9/5) x °C + 32. But for those of us who can’t do that in our heads, there is an easier way.

Field Expedient

Close enough for a quick estimate
Most of us can’t multiply by 9/5 in our heads. But 9/5 is very nearly two. So if you double the Celsius temperature, you are halfway there. Of course, the number will be too high. But to make up for it, instead of adding 32, just add 30. For weather temperatures, this gives you a ballpark estimate. For 0 °C, you get 30 °F instead of 32. For 20 °C, you get 70 °F instead of 68. For 35 °C, you get 100 °F instead of 95. All close enough.

If you want to flip the error as the temperature goes up, you can remember to add 25 instead of 30 if the temperature is more than, say, 25 °C. Then 35 °C gives you 95 °F on the dot, although other temperatures will still have some error, of course.

The error gets worse as the temperature rises, but it has to get fairly high before it gets useless. For example, my AMD CPU is currently at 48 °C. Using the +25 estimate, that’s 121 °F, instead of the correct 118. So it probably won’t help you set up your metal smelting furnace.
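If you’d rather let a computer check the mental math, the two shortcuts compare to the exact formula like this:

```cpp
#include <cstdio>

int main() {
    int samples[] = {0, 20, 28, 35, 48};
    for (int c : samples) {
        double exact = 9.0 / 5.0 * c + 32.0;     // the real conversion
        int quick = 2 * c + (c > 25 ? 25 : 30);  // double, add 30 (25 when hot)
        printf("%3d C -> exact %5.1f F, estimate %3d F\n", c, exact, quick);
    }
    return 0;
}
```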

Other Estimates

Centimeters to inches the easy way.
This is a useful way to embrace metric. Find rough estimates for units you deal with. For example, 2.54 cm/inch is not the easiest thing to apply. But if you remember that 5 cm is about 2 in, that works well. So a 160 mm rod is 16 cm. If you think of that as 3 x 5 + 1, you’ll know it is 6 inches plus an extra centimeter. The right answer is about 6.3 inches. Not close enough to start cutting things, but it does give you a feel for how big a thing you are talking about.

If you lived through the time when gasoline in the US went from less than $1/gallon to over it, you might remember that many gas stations switched to liters because the pumps couldn’t be set for prices over a dollar. The reason is that a liter is very nearly a quart, and there are four quarts to a gallon. So 12 liters is practically 12 quarts, or 3 gallons. This turns out to be very close.

Kilograms and kilometers are a bit trickier. The right way to imprecisely convert kilograms to pounds is to multiply by 2.2. But a nice mental math trick is to double it, then add that doubled value back in with its last digit dropped, and finally put the digit you dropped after the decimal point. So 8 kg would be 16 + 1 (throwing away the six) or 17 pounds. Then put the 0.6 in for the correct answer of 17.6 pounds. Of course, the conversion factor isn’t exactly 2.2, but that’s what most people use anyway. If you are trying to be scientifically accurate, none of this is going to help you.
Estimating kilometers.
The factor for kilometers is roughly 0.6 miles/km, or 1.6 km/mile. If you halve the kilometers, that will get you a fairly low estimate. So 35 km (21.7 miles) is easy to guess as more than 17.5 miles. That’s a pretty big difference, though. But if you then add 10% of the 35 back (3.5), you get 21 miles, which is close.
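And the kilogram and kilometer tricks, spelled out the same way:

```cpp
#include <cstdio>

int main() {
    // kg -> lb: double it, add the doubled value back in with its last
    // digit dropped, then hang the dropped digit after the decimal point.
    int kg = 8;
    int doubled = 2 * kg;                 // 16
    int lastDigit = doubled % 10;         // 6
    double lb = (doubled + doubled / 10)  // 16 + 1 = 17
              + lastDigit / 10.0;         // -> 17.6, exactly 2.2 * kg
    printf("%d kg ~= %.1f lb\n", kg, lb);

    // km -> miles: halve it, then add back 10% of the original.
    double km = 35.0;
    double miles = km / 2.0 + km * 0.10;  // 17.5 + 3.5 = 21
    printf("%.0f km ~= %.0f miles (exact: %.1f)\n", km, miles, km * 0.621371);
    return 0;
}
```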

Advice


I’m not trying to say that these rule-of-thumb tricks are good when you need an exact answer. But they are handy when you simply want a gut feel over some measure. Over time, you’ll just naturally know that 35 °C is summer-weather hot and you need more than a coffee mug to hold 3 liters.

Do you have a favorite fast conversion back or forth from metric? Share it in the comments. Americans love their strange measuring system. Turns out, one of the reasons we didn’t go metric was pirates, as you can see in the video below.

youtube.com/embed/WoUBpPbv0zs?…

Featured image: Wood thermometer on white background by Marco Verch under Creative Commons 2.0


hackaday.com/2025/08/14/for-am…


3D Printing a Self-cleaning Water Filter


No one likes cleaning out water spouts. [NeedItMakeIt] wanted to collect rainwater and was interested in using a Coanda filter, like those used on hydroelectric plants, to separate out debris. Ultimately, he decided to design his own and 3D print it.

The design uses a sloping surface with teeth on it to coax water to go in one direction and debris to go in another. It fits into a typical spout, and seems like it works well enough. Some commenters note that varying volumes of rain and different types of debris behave differently, which is probably true. However, there are similar commercial products, so you’d guess there would be some value to using the technique.

The water pushes the debris off the slope, so you end up losing a little water with the debris. So as always, there’s a trade-off. You can see in the video that if the water flow isn’t substantial, the debris tends to stall on the slope. Could the filter be improved? That was the point of trying a second design.

It wasn’t a big improvement. That’s where there’s a plot twist. Well, actually, a literal twist. Instead of making a flat slope, the new design is a conic shape with a spiral channel. That improved flow quite a bit. It wasn’t clear from the video exactly where the debris was going with the last version.

Usually, when we think of the Coanda effect, we are thinking aerodynamics. It can be quite uplifting.

youtube.com/embed/wy9lKx8X1HI?…


hackaday.com/2025/08/14/3d-pri…


How The Widget Revolutionized Canned Beer


Walk into any pub and order a pint of Guinness, and you’ll witness a mesmerizing ritual. The bartender pulls the tap, fills the glass two-thirds full, then sets it aside to settle before topping it off with that iconic creamy head. But crack open a can of Guinness at home, and something magical happens without any theatrical waiting period. Pour it out, and you get that same cascading foam effect that made the beer famous.

But how is it done? It’s all thanks to a tiny little device that is affectionately known as The Widget.

Beer Engineering

A pint of Guinness, pictured with the iconic foamy head. Credit: Sami Keinänen, CC BY SA 2.0
In 1959, draught Guinness diverged from other beers. The pints served from the tap at the pub were charged with a combination of nitrogen gas and carbon dioxide, rather than just carbon dioxide alone. Nitrogen is less soluble in beer than carbon dioxide, and low temperatures and higher pressures are required to get it to stay in the fluid. Charging the beer in this way, and then forcing it through a tap with a restrictor plate with many fine holes, allows the pouring of a beer with small, fine bubbles. This is what gives Guinness its signature smooth, creamy texture and characteristic dense head. The lower carbon dioxide level also contributes to the flavor, removing some of the sharp taste present in regular carbonated beers.

When Guinness started using the nitrogenation method, it quickly gained popularity and became the default way to serve the draught beer. The problem was that it wasn’t initially practical to do the same for bottled Guinness. Without being poured through the fine holes of a special tap under pressure, it wasn’t possible to create the same foamy head. Bottled Guinness thus remained carbonated in the traditional manner, and it was thus very much unlike the draught beer served at the pub. The desire was to produce a better version—”bottled draught Guinness” was a term often bandied about. The company experimented with a variety of methods of serving nitrogenated Guinness from a bottle or can. It even sold some bottles with a special “initiator” syringe to generate head in select markets, but it was all too clumsy to catch on with the beer drinking public. A better solution was needed.
The modern floating Guinness widget, pictured in a can that has been cut open. Credit: Duk, CC BY SA 3.0
The modern widget was developed as the technological solution to this fundamental problem in beverage physics. Guinness tackled this challenge by essentially putting a tiny pressure vessel inside the larger pressure vessel of the can itself. The widget is a small plastic sphere, hollow inside, with a tiny hole on the surface. The widget and beer are placed inside the can on the production line. Liquid nitrogen is then added, before the can’s lid is sealed. The can is then inverted as the liquid nitrogen quickly boils off into a gas. This effectively fills the widget with gaseous nitrogen under pressure, often along with a small amount of beer. It’s a charged pressure vessel lurking inside the can itself.

The magic happens when the beverage is served. When you crack open the can, the pressure inside drops rapidly to atmospheric pressure. The nitrogen under pressure in the widget thus wants to equalize with the now lower-pressure environment outside. Thus, the nitrogen sprays out through the tiny hole with tremendous force, creating countless microscopic bubbles that act as nucleation sites for the rest of the nitrogen dissolved in the surrounding beer. As the beer is poured into a glass, a foamy head forms, mimicking the product served fresh from the tap at the local pub.

Today’s widget, first marketed in 1997, is the floating sphere type, but the original version was a little different. The original widget launched in 1989 was a flat disc, which was mounted in the bottom of the can, but fundamentally worked in the same way. However, it had a tendency to cause rapid overflowing of the beer if opened when warm. The floating spherical widget reduced this tendency, though the precise engineering reasons why aren’t openly explained by the company. The fixed widget actually had a surprise return in 2020 due to COVID-19 supply chain issues, suggesting it was still mostly fit for purpose in the brewery’s eyes.

The key to the widget’s performance is in the filling and the construction. It’s important to ensure the widget is filled with pressurized gas, hence the inversion step used in the filling process. If the pressurized nitrogen was allowed to simply sit in the empty space in the top of the can, it would just vent out on opening without making any head. The orifice size on the widget is also critical. Too large, and the pressure equalizes too quickly without creating the necessary turbulence. Too small, and insufficient gas and beer volume flows through to generate adequate nucleation. The widget as it stands today is the result of much research and development to optimize its performance.
A finned “rocket” widget as used in Guinness beer bottles. Credit: Joeinwap, CC0
Further different widget designs have emerged over the years. The company had mastered draught Guinness in a can, though it needed to be poured into a glass to be drunk properly. The company later looked to create draught Guinness that could be drunk straight from the bottle. This led to the creation of the “rocket widget.” It worked largely in the same way, but was designed to float while remaining in the correct orientation inside the neck of the bottle. Fins ensured it wouldn’t fall out of the bottle during drinking. It would charge the beer with bubbles when first opened, and continue to boost the head to a lesser degree each time the bottle was tilted for a sip.

Guinness could have left this problem unsolved. It could have remained a beautiful tap-based beer, while selling its lesser carbonated products in bottles and cans for home consumption. Instead, it innovated, finding a way to create the same creamy tap-poured experience right out of the can.

The next time you crack open a widget-equipped can and watch that mesmerizing cascade of bubbles, you’re witnessing a masterpiece of beverage engineering that took years to perfect. It’s a reminder that sometimes the most elegant engineering solutions hide in the most ordinary places, waiting for someone clever enough to recognize that a tiny plastic ball could revolutionize how we experience beer outside the pub.


hackaday.com/2025/08/14/how-th…


Fortinet VPN Under Attack: GreyNoise Detects a New Wave of Brute-Force Attacks


GreyNoise detected two major waves of attacks against Fortinet devices in early August 2025. The first, a brute-force attack targeting Fortinet’s SSL VPN on August 3, was followed by an abrupt shift to FortiManager on August 5, carrying a new traffic signature. The researchers warn that spikes of this kind precede the publication of critical vulnerabilities in 80% of cases.

According to GreyNoise, the August 3 spike involved dictionary-based login attempts against the FortiOS SSL VPN. The JA4+ network fingerprint, which uses TLS fingerprinting to classify encrypted traffic, indicated a possible match with activity observed in June. That traffic came from a residential IP address associated with the ISP Pilot Fiber Inc. While this doesn’t prove a specific attribution, the researchers suggest the same toolkit or infrastructure is being reused.

On August 5, a different picture emerged. The attacker pivoted from the SSL VPN to FortiManager, launching brute-force attacks against the FGFM service, part of Fortinet’s management system. Although GreyNoise’s filters kept triggering on the old “Fortinet SSL VPN Bruteforcer” tag, the traffic signature itself had changed. The new flow no longer matched FortiOS, but matched the FortiManager (FGFM) profile exactly. This points to a change of target using the same tools, or a continuation of the campaign with a new focus.

GreyNoise stresses that these scans are not the usual reconnaissance: exploratory activity tends to be broad in scope, moderate in frequency, and free of password guessing. Here, the activity looks like a preparatory phase ahead of an exploitation attempt. The goal may not simply be to discover reachable endpoints, but to carry out preliminary reconnaissance and gauge the value of potential targets, followed by an attack on a real, not-yet-public vulnerability.

According to GreyNoise statistics, recorded spikes of activity, particularly those carrying this tag, correlate strongly with future CVEs in Fortinet products. Most such incidents end with a vulnerability being published within six weeks. Defenders should therefore not write them off as attempts to exploit long-patched bugs. On the contrary, now is the time to harden defenses, especially on external interfaces, and to restrict access to admin panels by IP.

GreyNoise has also published a list of the IP addresses involved in both waves of attacks, and recommends blocking them on all Fortinet devices.

According to the analysts, the same group is behind these addresses, running adaptive tests and changing tactics in real time. Companies using FortiGate, FortiManager, or Fortinet’s SSL VPN should therefore urgently strengthen their authentication policies, enable brute-force protection, apply rate limiting, and, where possible, restrict access to management interfaces to trusted VPNs or IP whitelists.

The article Fortinet VPN Under Attack: GreyNoise Detects a New Wave of Brute-Force Attacks comes from il blog della sicurezza informatica.


Hacking the Bluetooth-Enabled Anker Prime Power Bank


Selling power banks these days isn’t easy, as you can only stretch the reasonable limits of capacity and output wattage so far. Fortunately there is now a new game in town, with ‘smart’ power banks, like the Anker one that [Aaron Christophel] recently purchased for reverse-engineering. It features Bluetooth (BLE), a ‘smart app’, and a rather fancy screen on the front with quite a bit of information. This also means that there’s a lot to hack here beyond basic battery management system (BMS) features.

As detailed on the GitHub project page, after you get past the glue-and-plastic-clip top, you will find inside a PCB with a GD32F303 MCU, a Telink TLSR8253 BLE IC and the 240×240 ST7789 LCD, in addition to a few other ICs to handle BMS functions, RTC and such. Before firmware version 1.6.2 you could simply overwrite the firmware, but Anker added a signature check to later firmware updates.

The BLE feature is used to communicate with the Anker app, which the official product page advertises as being good for real-time stats, smart charging and finding the power bank by making a loud noise. [Aaron] already reverse-engineered the protocol and offers his own alternative on the project page. Naturally updating the firmware is usually also done via BLE.

Although the BLE and mobile app feature is decidedly a gimmick, hacking it could allow for some interesting UPS-like and other features. We just hope that battery safety features aren’t defined solely in software, lest these power banks be compromised by a nefarious or improper firmware update.

youtube.com/embed/WtEIjkMUH_8?…


hackaday.com/2025/08/14/hackin…


Steampunk Copper PC is as Cool as it Runs


Copper! The only thing it does better than conduct heat is conduct a great steampunk vibe. [Billet Labs]’ latest video is an artfully done wall PC that makes full use of both of those properties.

The parts are what you’d expect in a high-end workstation PC: a Ryzen 9 and a 3090 Ti with oodles of RAM. It’s the cooling loop where all the magic happens: from the copper block on the CPU, to the plumbing fixtures that give the whole thing a beautiful brewery-chic shine when polished up. Hopefully the water-block in the GPU is equally cupriferous, but given the attention to detail in the rest of the build, we cannot imagine [Billet Labs] making such a rookie mistake as to invite Mr. Galvanic Corrosion to the party.

There’s almost no visible plastic or paint; the GPU and PSU are hidden by brass plates, and even the back panel everything mounts to is shiny metal. Even the fans on the radiator are metal, and customized to look like a quad throttle body or four-barreled carburetor on an old race car. (Though they sound more like a jet takeoff.)

The analog gauges are a particular treat, which push this build firmly into “steampunk” territory. Unfortunately the temperature gauge glued onto the GPU only measures the external temperature of the GPU, not the temperature at the die or even the water-block. On the other hand, given how well this cooling setup seems to work later in the video, GPU temps are likely to stay pretty stable. The other gauges do exactly what you’d expect, measuring the pressure and temperature of the water in the coolant loop and voltage on the twelve volt rail.

Honestly, once it gets mounted on the wall, this build looks more like an art piece than any kind of computer— only the power and I/O cables do anything to give the game away. Now that he has the case, perhaps some artful peripherals are in order?

youtube.com/embed/4qN130ySBqE?…

Thanks to prolific tipster [Keith Olson] for cluing us into this one. If you see a project you take a shine to, why not drop us a tip?


hackaday.com/2025/08/14/steamp…


F-NORM ritorna e ci racconta La favola dell’anonimato online


The fable of online anonymity gets told as the cause of all evils. The problem is that some people believe it. And not only that. The problem is also that among the believers is a political class that thinks it can answer a fake problem with a stupid proposed solution. So stupid that it is even justified by the intention of making the Internet and digital ecosystems “safer” by effectively putting them behind access control.

Someone will say Digital Services Act, but that is more like a common cold. The anonymity sickness is being spread by spinning tales of fatal slippery slopes and straw-man arguments. And, what a coincidence, it leans on something you cannot say no to: “save the children”.

Which is fine as a principle; the pity is that the explanation of how to do it is always missing.
Otherwise it is just the umpteenth “defend the children” used as moral cover, the sugar coating on the pill of control.
Nothing new here: every regime has had its sacred excuse, its noble reason to profile, censor, and surveil.
The children. Racial purity. The enemy within. Public order.

All excellent reasons, until you find yourself knocking on your own front door to ask permission to live there.

Because when it comes to Internet freedom, everyone is a little distracted.
Too busy applauding proclamations and easy solutions.
And perhaps they will only realize they made the unacceptable possible once the forms are already signed and the firewalls have been set to bless the central archive.

Ring any bells? If so, I bet it is nothing that ended well.

Save the children (and the logins, too).


“Save the children” is the ultimate argument.

The one that shuts down every discussion. That silences people, stiffens shoulders, and shifts the conversation from reasoning to reaction.

Because if you disagree, then you are against the children.

And anyone against the children, as we all know, deserves the stake.

It does not matter that the proposal is written with the logic of a bar-room flyer.
It does not matter that there are no proofs, data, strategies, countermeasures, or verification.
What counts is that it works on the gut, not the brain.
And that is exactly where it digs: into the need to feel virtuous without understanding anything, to condemn without doubts, to fight without thinking.

It is the great moral blackmail of digital modernity: I give you a prefabricated fear, and you give me everything else.
Your consent. Your rights. Your silence.

We are back in the Middle Ages, only with a fiber connection.
There is no longer a witch to burn, but there is an anonymous user to flush out.
No longer a heretic to torture, but a suspicious profile to report.
Anyone who does not fall in line is instantly an “accomplice”, “suspicious”, “strange”.
The different, the critical, the anonymous: all guilty until they prove they have nothing to hide.

Fear becomes the perfect fuel.
Fear of pedophiles, of hackers, of criminals, of drug addicts, of immigrants, of conspiracy theorists, of anyone who can be used as an example to splash across the front page.
And in the middle, the average citizen. The one who says “eh, but what if it were your kid?” between one beer and the next.
The one who shares lynching videos and then writes “that will teach them”.
The one who does not want justice, but quick revenge, broadcast live and preferably with blood.

And so yes, let us surveil everything.
Because “you can’t say anything anymore”, “it’s full of sickos out there”, “we need a bit of order”.
And here we are: trading everyone’s freedom for the fear of a few.
Building a system in which anyone can be monitored, filed, classified.
Turning every citizen into a potential target, every dissent into deviance, every piece of data into evidence.

But relax, we are doing it for the children.
For them we can sacrifice anything.
Except, of course, the responsibilities of those who write the laws, approve the decrees, and manage the data.
Not those. Those are off the table.

Because when the purpose is noble, nobody should ask too many questions.
And woe to anyone who does not applaud.

Shall we fix everything with an APT, while we are at it?


In short, the solution is: let us track everything online. What is the harm, after all. Every activity. Everything, absolutely everything.
And so what good is the GDPR if, in the name of some abstract and barely demonstrated danger, a bit of media alarm and a tickle to the fearful, reactionary gut is enough to tear it all down, because “if you have nothing to hide, what is your problem with being surveilled?”.

In practice, the plan is to solve every problem with one gigantic APT dropped onto the shoulders of every user who wants to set foot on the Internet or use some service.

APT, for those who do not live in technical jargon, stands for Advanced Persistent Threat.
Translated: a sophisticated, hidden, constant cyberattack.
Not the thief who breaks in, steals, and runs.
The thief who breaks in, stays, watches you, and takes notes.
Who knows you better than your psychoanalyst. Who knows what you search for, what you read, what you write, and perhaps even when you are about to slip up.

Except here there is no Russian hacker in a black hoodie with the dark web glowing in the background. No.
Here the “thief” wears a jacket and tie, carries a government badge, and has a three-year plan.
And the APT is no longer an attack: it is a system feature.
A new normal, where you yourself hand over every piece of data, every movement, every shared thought, in the name of security.
The monster does not knock at the door: you install it yourself by accepting the terms of service.

And of course nobody holds anyone to account.
Nobody who asks: “who watches the watchmen?”
Because demanding transparency, traceable decisions, concrete responsibility… sounds inconvenient.
It smacks too much of “mature democracy”, too much of “accountability”.
It is far more convenient to fire off a new rule, shout “security!”, and never answer a single question afterwards.

Which, in the end, makes these proposals a bit like fables: you write them with grand ideals, but you print them on the same paper where rights were once written.
Might as well recycle.
Flip the page over, write on the back, and off you go.
Justice is served.

What they are trying to sell us.


Wrapped up in good intentions, what are they actually selling us? The illusion of security. Something that gets more beautiful the more esoteric you make it. One great abracadabra makes the monsters I have told you about disappear, and the crowds will burst into enthusiastic applause.

Some politician will renew their mandate this way, someone will get a seat on Observatories and Working Groups, a few conferences will be staged, with professional-ethics credits handed out to convince practitioners to align with the new thinking: the more monitored, the safer.

Then, whether the monitoring is done by big platforms or by who-knows-who inside the state apparatus matters little. Even less whether that data is collected en masse and poorly protected, and will be plundered. Or used improperly. Or rather: plundered again. Because a first raid has already happened, but perhaps we did not notice, busy as we were applauding, bickering, or praising how much safer we now are.

Perhaps just a little less free, but maybe those are only neuroses.

But who is the big bad wolf?


They are only abolishing digital habeas corpus, but honestly, what is the big deal.
No trial, no presumption of innocence, no room for error, doubt, or ambiguity.
Who is letting online anonymity be downgraded from a right to a problem?
Spoiler: you.

Yes, you, reading this and feeling virtuous because you play the good citizen, get outraged on a programmed rotation, and share posts with the right hashtag.
You, who feel safe because you always line up on the “right” side and delegate everything to someone else.
Because it is convenient to think there is always someone else whose job it is to solve, to decide, to get their hands dirty.
Someone else to vote for, follow, denounce, cancel.

The big bad wolf does not come from outside. It is us.

It is us, with our craving to be special.
Crystalline snowflakes who declare themselves unique and sacred in a brutal world that does not understand us.
Us, who treat our tendencies, our identities, our vices and virtues as inviolable totems.
Who pin labels on ourselves like medals, and sign up for micro-movements and communities that always tell us we are right. Or that we fool ourselves into creating.
And inside those comfort zones made of likes and self-affirmation, we convince ourselves that other people are the problem.

Whoever does not talk like us? Dangerous.
Whoever does not behave like us? Suspicious.
Whoever dares to question us? Guilty.

And so, yes, we need rules.
Iron rules.
To identify them, file them, unmask them, punish them, eliminate them.
And let us hand power to some executioner with a sharp blade. Dystopia does not need a dictator; a public well schooled in spite and vengeance is enough.
A crowd that applauds every new repressive measure, as long as it hits “the wrong people”.
And so, while you think you are fighting monsters, it is you who are drawing up the blueprint of the cage.
Beautiful, functional, intelligent.
You even plant flowers outside it.
If you behave, maybe they will let you choose the color.
Perhaps once your social score is high enough.
But a cage it remains.

The article F-NORM Returns to Tell Us the Fable of Online Anonymity comes from il blog della sicurezza informatica.


Self-Programming AI: 2025 Could Mark a Turning Point for Programmers


From machines that learn to machines that improve themselves: the evolutionary leap that is rewriting the code of the future

As you read this article, a machine somewhere in a data center is quite probably writing code more efficient than a senior engineer’s. This is not science fiction: it is the reality of July 2025, where AI programs itself, marking an epochal turning point for the future of programmers. The question is no longer whether a machine will surpass us in intelligence, but when it will happen. According to Mark Zuckerberg, that moment could arrive within 12 to 18 months, with the majority of code generated by artificial intelligence¹.

An Evolution Inspired by Turing


The question Alan Turing posed in the 1950s, “Can machines think?”, is transforming every aspect of society, from laws to economic systems, from cybersecurity to the design of the data centers that host these advanced artificial intelligences. Benchmarks, standardized tests that measure specific abilities such as language understanding or logical reasoning, are the yardstick of the global technology race. For example, a recent analysis by the ARC Prize Foundation highlights how AI systems have surpassed human capabilities on many benchmarks, such as those for language understanding or visual reasoning². This competition between learning and verification has created a virtuous cycle: every three to four months a new model or an innovative test emerges, fueling relentless research, as noted by Professor Nello Cristianini, an artificial intelligence expert at the University of Bath and author of the “thinking machines” trilogy⁴.

The Acceleration of Machine Learning


This progress has been made possible by an unprecedented acceleration in machine learning, with algorithms trained on enormous quantities of data: libraries of books, much of the web, billions of images and videos. Researchers measure this progress with benchmarks, which evaluate specific abilities such as language understanding or solving complex problems. To understand this phenomenon, we rely on the analyses of Cristianini, who points out how this competition between learning and verification has created a virtuous cycle of innovation⁴.

The Rubber Wall of Scaling


Unlike today’s “weak” AI systems, limited to specific tasks, researchers are aiming for Artificial General Intelligence (AGI), an intelligence with cognitive capabilities comparable to those of a top-tier mathematician or physicist. Two main strategies have been pursued toward this goal. The first, known as the “scaling conjecture”, rests on the idea that larger models, trained with more computational power and ever-larger quantities of data, lead to better performance. Until recently, this approach seemed unstoppable. However, it has run into a physical limit: the exhaustion of high-quality data. As Cristianini explains: “We have ‘finished’ the Internet and the purchasable publishing catalogs”⁴.

The Revolutionary Path of Formal Reasoning


This obstacle has pushed toward a second strategy: formal reasoning. Here, machines learn step by step, from premises to conclusions, without direct human intervention. This recently emerged approach is particularly effective in structured domains such as mathematics, physics, and programming. The real breakthrough is that, for a few months now, these machines have been able to improve themselves, eliminating the need for human supervision. Cristianini puts it plainly: “Humans are the weak point. Removing them unshackles the machine”⁴. An example? Transfer learning: a machine that trains on programming can improve its performance in mathematics, transferring knowledge across domains.

The Digital Battlefield


Software engineering has become the main battleground of this revolution. Models such as DeepSeek-R1 and OpenAI o3 compete on benchmarks like SWE-Bench, which assesses the ability to write complex code, and on multilingual coding tests. January 20, 2025 marked a turning point with the release of DeepSeek-R1², while OpenAI reached 75.7% on the ARC-AGI benchmark, showing progress in visual and logical reasoning².
The most disruptive novelty is recursive self-improvement: systems that autonomously identify and optimize code, without needing data or human supervision. The three pillars of the new generation of AI include:

  • DeepSeek-R1: uses reinforcement learning to improve its reasoning, iteratively correcting its own errors, as described in a recent paper².
  • OpenAI o3: with 87.5% accuracy on ARC-AGI, it demonstrates advanced formal reasoning capabilities thanks to test-time compute techniques that work out solutions on the fly².
  • Recursive self-improvement: models like those described in “Absolute Zero” rewrite their own code to optimize it, creating a continuous improvement loop.


The Self-Fulfilling Prophecy


Eric Schmidt, former CEO of Google, has stated: “A significant percentage of routine code is already written by AI systems”¹. Moreover, Zuckerberg predicts that within 12 to 18 months most code will be generated by AI, moving from autocomplete to systems capable of running complex tests and producing high-quality code¹.

The Dark Side of Evolution


A worrying aspect emerges from recent studies: an AI trained for cyberattacks can develop malicious behavior in other domains too, as demonstrated by the phenomenon of negative transfer learning⁴. This raises crucial questions for cybersecurity:

  • Evolved threat modeling: how do we protect against attacks generated by autonomous AIs?
  • Attribution forensics: how do we identify automatically generated malicious code?
  • Defense automation: will AI-based defense systems be needed to counter AI attacks?

Competition in software engineering has thus become a digital arms race, with economic, strategic, and military implications.

Lessons from the Fate of Translators


Translation professionals are a case in point: twenty years ago, translating was a specialist skill; today it is a nearly free service. The same is happening to routine programmers (work requiring little creativity), with tasks such as building simple websites or video games increasingly automated. The difference is speed: what took twenty years for translators could happen to programmers in just a few.

The GPU War


No country can afford to fall behind. Computing power is crucial: the Leonardo supercomputer in Bologna has nearly 15,000 GPUs, while the data centers of Meta, Amazon, and Google hold hundreds of thousands. Recently, xAI introduced Grok 4, an AI model whose computing power is driven by an impressive cluster of 200,000 GPUs in the Colossus supercomputer, setting a new standard in the global race for computational supremacy³. This technological “ReArm” will determine who leads the development of advanced AI models.

Course Set for the Unknown


AGI is only a step toward Artificial Super Intelligence (ASI), an intelligence that exceeds human capabilities. Cristianini defines it thus: “Either it performs our tasks better than we do, or it understands things we cannot grasp”⁴. The second scenario is the more unsettling: an AI that produces scientific knowledge beyond our comprehension, posing questions we do not know how to address. This raises a crucial issue: how do we govern and manage an entity whose cognitive paradigms are alien to us?

Time to Act


For tech professionals, the future is already here. Cristianini warns: “It is better to address these issues now rather than patch up disasters afterwards”⁴. What to do:

  • Strategic upskilling: specialize in creativity, oversight, and AI governance.
  • Security first: prepare to counter threats from autonomous AIs.
  • Policy engagement: take part in regulatory debates.
  • Continuous learning: keep up with advances in AI.

Social scientists, psychologists, and education experts are essential to managing this transition. The road to AGI shows no obvious scientific obstacles. The world has already changed, and the “when” is closer than many think.

References


  1. Zuckerberg, M. (2025). India Today.
  2. DeepSeek-AI. (2025). DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning. arXiv:2501.12948.
  3. The Emergence of Grok 4: A Deep Dive into xAI’s Flagship AI Model. (2025). Medium.
  4. Cristianini, N. (2025). Speech at We Make Future: Present and Near Future of Artificial Intelligence.


The article Self-Programming AI: 2025 Could Mark a Turning Point for Programmers comes from il blog della sicurezza informatica.


Flyback Converter Revealed


As [Sam Ben-Yaakov] points out in a recent video, you don’t often see flyback converters these days. That’s because there are smarter ways to get the same effect, which is to convert between two voltages. If you work on old gear, you’ll see plenty of these, and going through the analysis is educational, even if you’ll never actually work with the circuit. That’s what the video below shows: [Sam’s] analysis of why this circuit works.

The circuit in question uses a bridge rectifier to get a high-voltage DC voltage directly from the wall. Of course, you could just use a transformer to convert the AC to a lower AC voltage first, but then you probably need a regulator afterwards to get a stable voltage.

The converter operates as an oscillator. The duty cycle of the oscillator varies depending on the difference between the output voltage and a zener diode reference. These circuits are often difficult to model in a simulator, but [Sam] shows an LTSpice simulation that did take a few tweaks.
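
To get a feel for that regulation loop, it helps to play with an idealized model. In continuous conduction, an ideal flyback obeys V_out = V_in * (Ns/Np) * D / (1 - D), and the sketch below (a toy Python model with assumed values, not [Sam]'s circuit) nudges the duty cycle D against a zener-style reference until the output settles:

```python
# Toy model of duty-cycle regulation in an idealized CCM flyback converter.
# All component values here are assumptions for illustration.

V_IN = 325.0        # rectified mains, volts (roughly 230 VAC peak)
TURNS_RATIO = 0.05  # Ns/Np, assumed
V_REF = 12.0        # zener-style reference voltage, assumed
GAIN = 0.001        # duty-cycle correction per cycle, assumed

def v_out(duty: float) -> float:
    """Ideal CCM flyback transfer function."""
    return V_IN * TURNS_RATIO * duty / (1.0 - duty)

duty = 0.1
for _ in range(2000):
    error = V_REF - v_out(duty)                        # compare against the reference
    duty = min(max(duty + GAIN * error, 0.01), 0.90)   # nudge and clamp the duty cycle

print(f"settled duty = {duty:.3f}, V_out = {v_out(duty):.2f} V")
```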

As he mentions, today you’d get a switching regulator on a chip and be done with it. But it is still interesting to understand how the design works. Another common flyback circuit used an oscillator driving the primary of a CRT’s flyback transformer, more or less. If you want to learn more, we can help with that, too.

youtube.com/embed/-fL3WSUYRxQ?…


hackaday.com/2025/08/13/flybac…


2025 One Hertz Challenge: Digital Clock Built With Analog Timer


You can use a microcontroller to build a clock. After all, a clock is just something that counts the passage of time. The only problem is that microcontrollers can’t track time very accurately. They need some kind of external timing source that doesn’t drift as much as the microcontroller’s primary clock oscillator. To that end, [Josh] wanted to try using a rather famous IC with his Arduino to build a viable timepiece.

[Josh]’s idea was straightforward—employ a 555 timer IC to generate a square wave at 1 Hz. He set up an Arduino Uno to count the pulses using edge detection. This allowed for a reliable count which would serve as the timebase for a simple 24-hour clock. The time was then displayed on an OLED display attached over I2C, while raw pulses from the 555 were counted on a 7-segment display as a useful debugging measure. Setting the time is easy, with a few pushbuttons hooked up to the Arduino for this purpose.
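
The core loop is little more than edge detection and a rollover counter. Here is the idea sketched in Python rather than [Josh]'s actual Arduino code, with read_555() standing in for the digital pin read (faked here with the system clock so the sketch runs anywhere):

```python
# Sketch of the clock's counting logic. On the real hardware, read_555()
# would be a digitalRead() of the pin wired to the 555's output.
import time

def read_555() -> int:
    """Stand-in for the 555 input: fakes a 1 Hz square wave from the system clock."""
    return int(time.time() * 2) % 2

last_level = 0
seconds = 0

while True:
    level = read_555()
    if level == 1 and last_level == 0:   # rising edge: one pulse, one second
        seconds = (seconds + 1) % 86400  # roll over after 24 hours
        hh, rem = divmod(seconds, 3600)
        mm, ss = divmod(rem, 60)
        print(f"{hh:02d}:{mm:02d}:{ss:02d}")
    last_level = level
```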

[Josh] claims a drift of “only ~0.5 seconds” but does not state over what time period this drift occurs. In any case, 555s are not really used for timekeeping purposes in this way, because timers based on resistor-capacitor circuits tend to drift a lot and are highly susceptible to temperature changes. However, [Josh] could easily turn this into a highly accurate clock merely by replacing the 555 square wave input with a 1PPS clock source from another type of timer or GPS device.

We’ve had quite a few clocks entered into the One Hertz Competition already, including this hilariously easy Nixie clock build. You’ve got until August 19 to get your own entry in, so wow us with your project that does something once a second!

2025 Hackaday One Hertz Challenge


hackaday.com/2025/08/13/2025-o…


Digital Etch-A-Sketch Also Plays Snake


The Etch-A-Sketch has been a popular toy for decades. It can be fun to draw on, but you have to get things right the first time, because there’s no undo button. [Tekavou] decided to recreate this popular toy in digital form instead to give it more capabilities.

The build relies on an Inkplate e-paper screen as a display, which is probably as close as you can get in appearance to the aluminium dust and glass screen used in an Etch-a-Sketch. The display is hooked up to an ESP32 microcontroller, which is charged with reading inputs from a pair of rotary encoders. In standard drawing mode, it emulates the behavior of an Etch-A-Sketch, with the ESP32 drawing to the e-paper display as the user turns the encoders to move the cursor. However, it has a magical “undo” feature, where pressing the encoder undoes the last movement, allowing you to craft complex creations without having to get every move perfect on your first attempt. As a fun aside, [Tekavou] also included a Snake game. More specifically, it’s inspired by NIBBLES.BAS, a demo program included with Microsoft QBasic back in the day.
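
The video doesn't spell out how the undo works internally, but one natural way to do it is to store the drawing as a list of cursor moves and replay the list after popping the last entry. A minimal Python sketch of that idea follows; the display calls are placeholders, and the real firmware runs on the ESP32:

```python
# Hypothetical undo mechanism: keep every (dx, dy) step, and rebuild the
# screen from the move list after popping. Not [Tekavou]'s actual firmware.

moves: list[tuple[int, int]] = []  # one entry per cursor step
x, y = 0, 0

def apply_move(dx: int, dy: int) -> None:
    """Record one encoder step and advance the cursor."""
    global x, y
    moves.append((dx, dy))
    x, y = x + dx, y + dy
    # draw_pixel(x, y) on the e-paper display would go here

def undo() -> None:
    """Drop the last step, then replay the rest to rebuild the picture."""
    global x, y
    if moves:
        moves.pop()
    x, y = 0, 0
    # clear_display() would go here
    for dx, dy in moves:
        x, y = x + dx, y + dy
        # draw_pixel(x, y) for each replayed step
```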

We’ve seen all kinds of Etch-A-Sketch builds around these parts, including this impressive roboticized version. Video after the break.

youtube.com/embed/dxsY7SYraeA?…


hackaday.com/2025/08/13/digita…


2025 One Hertz Challenge: A Game Of Life


The 2025 One Hertz Challenge asks you to build a project that does something once every second. While that has inspired a lot of clock and timekeeping builds, we’re also seeing some that do entirely different things on a 1 Hz period. [junkdust] has entered the contest with a project that does something rather mathematical once every second.

[junkdust] wanted to get better acquainted with the venerable ATtiny85, so decided to implement Conway’s Game of Life on it. The microcontroller is hooked up to a 0.91″ OLED display with a resolution of 128 x 32 pixels; however, [junkdust] elected to implement only a 32 x 32 grid for the game itself, using the rest of the display area to report the vital statistics of the game. On power-up, the grid is seeded with a random population, and the game proceeds, updating once every second.
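
For reference, the rules being evaluated once per second are easy to state. Below is one generation on a 32 x 32 wrapping grid, sketched in Python; [junkdust]'s version naturally runs as compiled code on the ATtiny85, so take this as the algorithm rather than the implementation:

```python
# One generation of Conway's Game of Life on a 32 x 32 grid with wraparound
# edges. Algorithm only; the ATtiny85 implementation details will differ.
SIZE = 32

def step(grid: list[list[int]]) -> list[list[int]]:
    new = [[0] * SIZE for _ in range(SIZE)]
    for y in range(SIZE):
        for x in range(SIZE):
            # Count the eight neighbors, wrapping at the edges.
            n = sum(
                grid[(y + dy) % SIZE][(x + dx) % SIZE]
                for dy in (-1, 0, 1)
                for dx in (-1, 0, 1)
                if (dx, dy) != (0, 0)
            )
            # Live cells survive with 2 or 3 neighbors; dead cells are born with 3.
            new[y][x] = 1 if n == 3 or (n == 2 and grid[y][x]) else 0
    return new
```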

It’s a neat little desk toy, but more importantly, it served as a suitably complicated test project for [junkdust] to get familiar with working inside the limitations of the ATtiny85. It may be a humble part, but it can do great things, as we’ve seen many times before!

2025 Hackaday One Hertz Challenge


hackaday.com/2025/08/13/2025-o…


FLOSS Weekly Episode 842: Will the Real JQ Please Stand Up


We’re back! This week Jonathan chats with Mattias Wadman and Michael Farber about JQ! It’s more than just a JSON parser; JQ is a whole scripting language! Tune in to find out more about it.


youtube.com/embed/2NyQuMhzl2I?…

Did you know you can watch the live recording of the show right on our YouTube Channel? Have someone you’d like us to interview? Let us know, or contact the guest and have them contact us! Take a look at the schedule here.

play.libsyn.com/embed/episode/…

Direct Download in DRM-free MP3.

If you’d rather read along, here’s the transcript for this week’s episode.

Places to follow the FLOSS Weekly Podcast:


Theme music: “Newer Wave” Kevin MacLeod (incompetech.com)

Licensed under Creative Commons: By Attribution 4.0 License


hackaday.com/2025/08/13/floss-…


The World’s First Podcaster?


When do you think the first podcast occurred? Did you guess in the 1890s? That’s not a typo. Telefonhírmondó was possibly the world’s first true “telephone newspaper.” People in Budapest could dial a phone number and listen to what we would think of now as radio content. Surprisingly, the service lasted until 1944, although after 1925, it was rebroadcasting a radio station’s programming.
Tivadar Puskás, the founder of Budapest’s “Telephone Newspaper” (public domain)
The whole thing was the brainchild of Tivadar Puskás, an engineer who had worked with Thomas Edison. At first, the service had about 60 subscribers, but Puskás envisioned the service one day spanning the globe. Of course, he wasn’t wrong. There was a market for worldwide audio programs, but they were not going to travel over phone lines to the customer.

The Hungarian government kept tight control over newspapers in those days. However, as we see in modern times, new media often slips through the cracks. After two weeks of proving the concept out, Puskás asked for formal approval and for a 50-year exclusive franchise for the city of Budapest. They would eventually approve the former, but not the latter.

Unfortunately, a month into the new venture, Puskás died. His brother Albert took over and continued talks with the government. The phone company wanted a piece of the action, as did the government. Before anything was settled, Albert sold the company to István Popper. He finalized the deal, which included rules requiring signed copies of the news reports to be sent to the police three times a day. The affair must have been lucrative. The company would eventually construct its own telephone network independent of the normal phone system. By 1907, they boasted 15,000 subscribers, including notable politicians and businesses, including hotels.

Invention


This was all possible because of Puskás’ 1892 invention of a telephone switchboard with a mechanism that could send a signal to multiple lines at once. The Canadian patent was titled “Telephonic News Dispenser.”

There had been demonstrations of similar technology going back to 1881 when Clément Ader piped stereo music (then called the slightly less catchy binauricular audition) from the Paris Grand Opéra to the city’s Electrical Exhibition. Fictionally, the 1888 novel Looking Backward: 2000-1887 also predicted such a service:

“All our bedchambers have a telephone attachment at the head of the bed by which any person who may be sleepless can command music at pleasure, of the sort suited to the mood.”


No Bluetooth for her. (Public Domain)
The 1881 demonstration turned into a similar service in Paris, although it was mostly used for entertainment programming with occasional news summaries. It didn’t really qualify as a newspaper. It also wasn’t nearly as successful, having 1,300 subscribers in 1893. London was late to the game in 1895, but, again, the focus was on live performances and church services. Both services collapsed in 1925 due to radio.

Attempts to bring a similar service to the United States were made in several states during the early 1900s. None of them had much success, and all were gone and forgotten within a year or two.

In Budapest, they rapidly abandoned the public phone lines and created a network that would eventually span 1,100 miles (1,800 km), crisscrossing Budapest. Impressive, considering that there were no active amplifiers yet. From reading the Canadian patent, it seems they used “induction coils.” We imagine the carbon microphones at the studio also ran at very high voltages compared to a regular phone, but it is hard to say for sure. As you might expect, you’d need a lot of input signal for this to work.

To that end, the company hired especially loud announcers who worked in ten-minute shifts as they were effectively screaming into the microphones. The signal would run to the central office, to one of 27 districts, and then out to people’s homes. We had hoped a 1907 article about the system in Scientific American might have more technical detail, but it didn’t. However, The Electrical World did have a bit more detail:

…the arrangement which he adopts is to have a separate primary and secondary coil for each subscriber, all the primaries being connected in series with the single transmitter…


Last Mile


In a subscriber’s home, there were two earpieces. You could put one on each ear, or share with a friend. There was a buzzer to let you know about special alerts. An American who returned from Budapest in 1901 said that the news was “highly satisfactory,” but wasn’t impressed with the quality of musical programs on the service (see page 640 of The World’s Work, Volume 1).
Concert room at the studio (Public Domain).
The company issued daily schedules you could hang on the wall. Programs included news, news recaps, stories, poetry readings, musical performances, lectures, and language lessons. Typically, transmissions ran from 10:30 in the morning to 10:30 at night, although this was somewhat flexible.

You are probably wondering what all this cost. A year’s service — including a free receiver — was 18 krones. At the time, that was about US$7.56. That doesn’t sound like much, but in 1901 Budapest, you could buy about 44 pounds (20 kg) of coffee for that much money. The service also ran ads, costing 1 krone for a 12-second spot. They also had some coin-operated receivers to generate revenue.

Radio


It makes sense that in 1925, the service opened Budapest’s first radio station. The programming was shared, and by 1930, the service had over 91,000 subscribers. The private phone network, however, didn’t survive World War II, and that was the end of telephonic newspapers, at least in Budapest.

The technology was also put to use in Italy. A US businessman tried to make a go of it in New Jersey for about a year and then in Oregon for another year before throwing in the towel. Ironically, the tube technology that finally made phones capable of covering long distances clearly also doomed phone broadcasting. Those same tubes would make radio practical.

Why Budapest?


You have to wonder why the only really successful operation was in Budapest. We don’t know if it was the politics that made an independent news source with a little less scrutiny attractive, or if it was just that Popper ran an excellent business. After all, Popper and the Puskás brothers anticipated the market for radio. And Popper, in fact, successfully embraced radio instead of letting it sink his business.

We talked about Hugo Gernsback’s predictions that doctors would operate by telephone. He also predicted telephone music in 1916. Of course, music by phone is still a thing. If you are on hold.

Featured image: “A TelefonHírmondó announcer reading the news in 1901 (Public Domain)”

Thumbnail image: “Telefon Hirmondo – Home subscriber” in the public domain.


hackaday.com/2025/08/13/the-wo…


PCB Business Card Plays Pong, Attracts Employer


Facing the horrifying realization that he’s going to graduate soon, EE student [Colin Jackson] AKA [Electronics Guy] needed a business card. Not just any business card: a PCB business card. Not just any PCB business card: a PCB business card that can play pong.

[Colin] was heavily inspired by the card [Ben Eater] was handing out at OpenSauce last year, and openly admits to copying the button-cell holder from it. We can’t blame him: the routed-out fingers to hold a lithium button cell were a great idea. The original idea, a 3D persistence-of-vision display, was a little too ambitious to fit on a business card, so [Colin] repurposed the 64-LED matrix and STM32 processor to play Pong. Aside from the LEDs and the microprocessor, it looks like the board has a shift register to handle all those outputs and a pair of surface-mount buttons.

Of course, you can’t fit two players on a business card, so the microprocessor serves as the opponent. With only 64 LEDs, there’s no room for score-keeping — but apparently even the first, nonworking prototype was good enough to get [Colin] a job, so not only can we not complain, we offer our congratulations.
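
We don't know how [Colin] wrote his computer opponent, but on an 8 x 8 matrix it doesn't have to be clever. Something as simple as stepping the paddle one row toward the ball each frame, as in this Python sketch, already makes for a playable game:

```python
# A guess at a minimal Pong opponent: chase the ball one row per frame.
# Not [Colin]'s actual code.

def opponent_move(paddle_y: int, ball_y: int, top: int = 0, bottom: int = 7) -> int:
    """Return the paddle's new row, stepping toward the ball within bounds."""
    if ball_y > paddle_y and paddle_y < bottom:
        return paddle_y + 1
    if ball_y < paddle_y and paddle_y > top:
        return paddle_y - 1
    return paddle_y
```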

The video is a bit short on detail, but [Colin] promises a PCB-business card tutorial at a later date. If you can’t wait for that, or just want to see other hackers take on the same idea, take a gander at some of the entries to last year’s Business Card Challenge.

youtube.com/embed/x8Cdz36BOXc?…


hackaday.com/2025/08/13/pcb-bu…


Ore Formation: Introduction and Magmatic Processes


Hackaday has a long-running series on Mining and Refining that tracks elements of interest on the human-made road from rocks to riches. What author Dan Maloney doesn’t address in that series is the natural history that comes before the mine. You can’t just plunk down a copper mine or start squeezing oil from any old stone, after all: first, you need ore. Ore has to come from somewhere. In this series, we’re going to get down and dirty with the geology of ore-forming processes to find out from whence come the rocks that hold our elements of interest.

What’s In an Ore?


Though we’re going to be talking about Planetary Science in this series, we should recognize the irony that “ore” is a word without any real scientific meaning. What distinguishes ore from other rock is its utility to human industry: it has elements or compounds, like gems, that we want, and that we think we can get out economically. That changes over time, and one generation’s “rock” can be another generation’s “ore deposits”. For example, these days prospectors are chasing copper in porphyry deposits at concentrations as low as 1000 ppm (0.1%) that simply were not economic in previous decades. The difference? Improvements in mining and refining, as well as a rise in the price of copper.
Image of Downtown Kirkland LakeThis may or may not be the fabled “mile of gold”. Image: “Main Street Kirkland Lake” by P199.
There’s a story everyone tells in my region, about a street in Kirkland Lake, Ontario that had been paved using waste rock from one of the local gold mines, and then torn up when the price of gold rose enough to make reprocessing the pavement worthwhile for its part-per-million content of microscopic flakes of yellow metal. That story is apocryphal: history records that there was mine product accidentally used in road works, but it does not seem it has ever been deemed economic to dig it back up. (Or if it was, there’s no written record of it I could find.)

It is an established fact that they did drain and reprocess 20th-century tailings ponds from Kirkland Lake’s gold mines, however. Tailings are, by definition, what you leave behind when concentrating the ore. How did the tailings become ore? When somebody wanted to process them, because it had become economic to do so.

It’s similar across the board. “Aluminum ore” was a meaningless phrase until the 1860s; before that, aluminum was a curiosity of a metal extracted in laboratories. Even now, the concentration of aluminum in its main ore, bauxite, is lower than in some aluminum silicate rocks– but we can’t get aluminum out of silicate rock economically. Bauxite, we can. Bauxite, thus, is the ore, and concentration be damned.

So, there are two things needed for a rock to be an ore: an element must be concentrated to a high enough level, and it must be in a form from which we can extract it economically. No wonder, then, that almost all of the planet’s crust doesn’t meet the criteria– and that will hold true on every rocky body in the solar system.

Blame Archimedes


It’s not the planetary crusts’ fault; blame instead Archimedes and Sir Isaac Newton. Rocky crusts, you see, are much depleted in metals because of those two– or, rather, the physical laws they are associated with. To understand, we have to go back, way back, to the formation of the solar system.
It might be metal, but there’s no ore in the core. Image: nau.edu, CC3.0
The solid bodies that first coalesced out of the protoplanetary disk around a young Sol had a primitive elemental abundance, and our crust is depleted in metals compared to it. The reason is simple: as unaltered bodies accreted to form larger objects, the collisions released a great deal of energy, causing the future planetoid to melt, and stay molten. Heat rejection isn’t easy in the thermos-like vacuum of space, after all. Something planetoid-sized could stay molten long enough for gravity to start acting on its constituent elements.

Like a very slow centrifuge, the heavier elements sank and the lighter ones rose by Archimedes’ principle. That’s where almost all of Earth’s metals are to this day: in the core. Even the Moon has an iron core thanks to this process of differentiation.

In some ways, you can consider this the first ore-forming process, though geologists don’t yet count planetary differentiation on their lists of such. If we ever start to mine the nickel-iron asteroids, they’ll have to change their tune, though: those metallic space-rocks are fragments of the cores of destroyed planetoids, concentrated chunks of metal created by differentiation. That’s also where most of the metal in the Earth’s crust and upper mantle is supposed to have come from, during the Late Heavy Bombardment.

Thank the LHB

Image: “Comet Crash” by Ben Crowder. Repeat 10000x.
The Late Heavy Bombardment is exactly what it sounds like: a period in the history of this solar system 3.8 to 4.1 billion years ago that saw an uncommonly elevated number of impacts on inner solar system objects like the Earth, Moon, and Mars. Most of our evidence for this event comes from the Moon, in the form of isotopic dating of lunar rocks brought back by the Apollo missions, but the topography of Mars and what little geologic record we have on Earth are consistent with the theory. Not all of these impactors were differentiated: many are likely to have been comets, but those still had the primordial abundance of metals. Even cometary impacts, then, would have served to enrich the planet’s crust and upper mantle in metals.

Is that the story, then? Metal ores on Earth are the remnants of the Late Heavy Bombardment? In a word: no. Yes, those impacts probably brought metals back to the lithosphere, but there are very few rocks of that age left on the surface of this planet, and none of them are ore-bearing. There has been a lot of geology since the LHB– not just on Earth, but on other worlds like the Moon and Mars, too. Just like the ore bodies here on Earth, any ore we find elsewhere is likely to be from other processes.
It looks impressive, but don’t start digging just yet. (Image: Stromboli Eruption by Petr Novak)
One thing that seems nearly universal on rocky bodies is volcanism, and the so-called magmatic ore-forming processes are among the easiest to understand, so we’ll start there.

Igneous rocks are rocks formed of magma — or lava, if it cools on the surface. Since all the good stuff is down below, and there are slow convection currents in the Earth’s mantle, it stands to reason some material might make its way up. Yet no one is mining the lava fields of Hawaii or Iceland– it’s not just a matter of magma = metals. Usually some geochemical process has to happen to that magma in order to enrich it, and those are the magmatic ore-forming processes, with one exception.

Magmatic Ore Formation: Kimberlite Pipes

Cross-sectional diagram of a kimberlite deposit. You can see why it’s called a pipe. The eruption would be quite explosive. (Image: Kansas Geological Survey)
Kimberlite pipes are formations of ultramafic (very high in magnesium) rock that explode upward from the mantle, creating vertical, carrot-shaped pipes. The olivine that is the main rock type in these pipes isn’t a desirable magnesium ore because it’s too hard to refine.

What’s interesting economically is what is often brought to the surface in these pipes: diamonds, and occasionally gold. Diamonds can only form under the intense pressures beneath the Earth’s crust, so the volcanic processes that created kimberlite pipes are our main source of them. (Though not all pipes contain diamonds, as many a prospector has discovered to their disappointment.)

Kimberlite pipes seem to differ from ordinary vulcanism both in the composition of the rock — ultramafic rocks from relatively deep in the mantle — and in the speed of that rock’s ascent, at up to 400 m/s. Diamonds aren’t stable in magma at low pressures, so the magma that makes up a kimberlite pipe must erupt very quickly (in geologic terms) from the depths. The hypothesis is that these are a form of mantle plume.

A different mantle plume is believed to drive volcanism in Hawaii, but that plume expresses itself as a steady stream and contains no diamonds. Hawaii’s lava creates basalt, a rock less magnesium-rich than olivine, and comes from shallower strata of the Earth’s mantle. Geochemically, the rocks in Hawaii are very similar to the oceanic crust that the mantle plume is pushing through. Kimberlite pipes, on the other hand, have only been found in ancient continental crusts, though no one seems entirely sure why.
You bet your Tanpi that Mars has had mantle plumes! (Image: NASA)
The great shield volcanoes on Mars show that mantle plumes have occurred on that planet, and there’s no reason to suppose kimberlite-type eruptions could not have occurred there as well. While some of the diamond-creating carbon in the Earth’s mantle comes from subducted carbonate rocks, some of it seems to be primordial to the mantle.

It is thus not unreasonable to suppose that there may be some small diamond deposits on Mars, if anyone ever goes to look. Venus, too, though it’s doubtful anyone will ever go digging to check. The Moon, on the other hand, lacks the pressure gradients required for diamond formation even if it does have vulcanism. What the Moon likely does possess (along with the three terrestrial planets) is another type of ore body: layered igneous intrusions.

A Delicious Cake of Rock

Chromite layers in the Bushveld Igneous Complex. Image: Kevin Walsh.
Layered igneous intrusions are, as the name suggests, layered. They aren’t always associated with ore bodies, but when they are, they’re big names like Stillwater (USA) and Bushveld (South Africa). The principle of ore formation is pretty simple: magma in underground chambers undergoes a slow cooling that causes it to fractionate into layers of similar minerals.

Fractional crystallization also has its role to play in concentrating minerals: as the melt cools, it’s natural that some compounds will have higher melting points and freeze out first. These crystals may sink to the bottom of the melt chamber or float to the top, depending on their density relative to the surrounding melt. Like the process of differentiation writ in miniature, heavy minerals sink to the bottom and light ones float to the top, concentrating minerals by density and creating the eponymous layers. Multiple flows of lava can create layers upon layers upon layers of the same, or similar, stacks of minerals.

There’s really no reason to suspect that this ore formation process should not be possible on any terrestrial planet: all one needs is a rich magma and slow cooling. Layered igneous intrusions are a major source of chromium, mainly in the form of chromite, an iron-chromium oxide, but they are also economically important sources of iron, nickel, copper, and platinum group elements (PGEs), amongst other metals. If nickel, copper, or PGEs are present in this kind of deposit and are going to be economically extractable, it will be in the form of a sulfide. So-called sulfide melt deposits can coexist within layered igneous intrusions (as at Bushveld, where they produce a notable fraction of the world’s nickel) or as stand-alone deposits.

When Magma Met Sulfur


One of the problems with igneous rocks from a miner’s perspective is that they’re too chemically stable. Take olivine: it’s chock full of magnesium you cannot extract. If you want an easily-refined ore, rarely do you look at silicate rock first. Igneous rocks, though, even when ultramafic like in kimberlite pipes or layered melt deposits, are still silicates.

There’s an easy way to get ore from a magma: just add sulfur. Sulfur pulls metals out of the melt to create sulfide minerals, which are both very concentrated sources of metals and, equally importantly, very easy to refine. Sulfide melt deposits are some of the most economically important ones on this planet, and there’s no reason to think we couldn’t find them elsewhere. (The moon isn’t terribly depleted in sulfur.)
The Bear Stream Quarry is one of many Ni/Cu mines created by the Siberian Traps. (Image: Nikolay Zhukov, CC3.0)
Have you heard of the Siberian Traps? That was a series of volcanoes that produced a flood basalt, like the lunar mare. The volcanoes of the Siberian Traps were a primary cause of the End-Permian mass extinction, and they put out somewhere between two and four million cubic kilometers of rock. Most of that rock is worthless basalt. Most, except in Norilsk.

The difference? In Norilsk, there was enough sulfur in the melt, thanks to existing sedimentary rocks, to pull metals out of the melt. 250 million years after it cooled, this became Eurasia’s greatest source of nickel and platinum group elements, with tonnes and tonnes of copper brought to the surface as a bonus.

Norilsk’s great rival in the Cold War was Sudbury, Canada– another sulfide melt deposit, this one believed to be associated with the meteorite impact that created the Sudbury Basin. The titanic impact that created the basin also melted a great deal of rock, and as it cooled, terrestrial sulfur combined with metals that had existed in the base rock, and any brought down in the impactor, to freeze out of the melt as sulfides.
Most mining still ongoing in the Sudbury Basin is deep underground, like at Nickel Rim South Mine. (Image: P199.)
While some have called Sudbury “humanity’s first asteroid mine”, it’s a combination of sulfur and magma that created the ore body; there is little evidence to suggest the impactor was itself a nickel-iron asteroid. Once the source of the vast majority of the world’s nickel, peaking at over 80% before WWI, Sudbury remains the largest hard-rock mining centre in North America, and one of the largest in the world, on the weight of all that sulfide.

Since the Moon does not seem to be terribly depleted in sulfur, and has more flood basalt and impact craters than you can shake a stick at, it’s a fairly safe bet that if anyone ever tries to mine metals on Luna, it will be from sulfide melt deposits. There’s no reason not to expect Mars to possess its fair share as well.


hackaday.com/2025/08/13/ore-fo…


Josef Prusa Warns Open Hardware 3D Printing is Dead


It’s hard to overstate the impact desktop 3D printing has had on the making and hacking scene. It drastically lowered the barrier for many to create their own projects, and much of the prototyping and distribution of parts and tools that we see today simply wouldn’t be possible via traditional means.

What might not be obvious to those new to the game is that much of what we take for granted today in the 3D printing world has its origins in open source hardware (OSHW). Unfortunately, [Josef Prusa] has reason to believe that this aspect of desktop 3D printing is dead.

If you’ve been following 3D printing for a while, you’ll know how quickly the industry and the hobby have evolved. Just a few years ago, the choice was between spending the better part of $1,000 USD on a printer with all the bells and whistles, or taking your chances with a stripped-down clone for half the price. But today, you can get a machine capable of self calibration and multi-color prints for what used to be entry-level prices. According to [Josef] however, there’s a hidden cost to consider.

A chart showing the growth in patents after 2020. (Data from the Espacenet international database, European Patent Organization, March 2025.) A major point made by Prusa concerns the number of patents filed by certain large-name companies.
With major development come major incentives. In 3D printing’s case, we can see it in Chinese market dominance: printers can be sold at a loss, and patents are filed freely when you can rely on government reimbursements, all of which helped create the market majority we see today. These advantages have made it difficult for companies such as Prusa Research to remain competitive, despite continually improving their own printers.

That [Josef] has become disillusioned with open source hardware is unfortunately not news to us. Prusa’s CORE One, as impressive as it is, marked a clear turning point in how the company released their designs. Still, [Prusa]’s claims are not unfounded. Many similar issues have arisen in 3D printing before. One major innovation was even falsely patented twice, slowing adoption of “brick layering” 3D prints.

Nevertheless, no amount of patent trolling or market dominance is going to stop hackers from hacking. So while the companies that are selling 3D printers might not be able to offer them as OSHW, we feel confident the community will continue to embrace the open source principles that helped 3D printing become as big as it is today.

Thanks to [JohnU] for the tip.


hackaday.com/2025/08/13/josef-…


Running Guitar Effects on a PlayStation Portable


A red Sony PSP gaming console is shown, displaying the lines “Audio Mechanica,” “Brek Martin 2006-2025,” and “Waiting for Headphones.”

If your guitar needs more distortion, lower audio fidelity, or another musical effect, you can always shell out some money to get a dedicated piece of hardware. For a less conventional route, though, you could follow [Brek Martin]’s example and reprogram a handheld game console as a digital effects processor.

[Brek] started with a Sony PSP 3000 handheld, with which he had some prior programming experience, having previously written a GPS maps program and an audio recorder for it. The PSP has a microphone input as part of the connector for a headset and remote, though [Brek] found that a Sony remote’s PCB had to be plugged in before the PSP would recognize the microphone. To make things a bit easier to work with, he made a circuit board that connected the remote’s hardware to a microphone jack and an output plug.

[Brek] implemented three effects: a flanger, a bitcrusher, and crossover distortion. Crossover distortion distorts the signal as it crosses zero, the bitcrusher reduces the sample rate to make the signal choppier, and the flanger mixes the current signal with a variably-delayed copy of itself. [Brek] would have liked to implement more effects, but the program’s lag would have made it impractical. He notes that the program could run more quickly if there were a way to reduce the sample chunk size from 1024 samples, but if there is a way to do so, he has yet to find it.
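
All three effects are simple enough to express in a few lines. Here is a rough numpy sketch of each, a generic take on the signal processing rather than [Brek]'s PSP code, which processes the audio in 1024-sample chunks on the handheld:

```python
# Generic sketches of the three effects; parameter values are assumptions.
import numpy as np

RATE = 44100  # sample rate in Hz, assumed

def crossover_distortion(x: np.ndarray, dead_zone: float = 0.1) -> np.ndarray:
    """Mute a small band around zero, distorting the signal at each zero crossing."""
    return np.where(np.abs(x) < dead_zone, 0.0, x - np.sign(x) * dead_zone)

def bitcrush(x: np.ndarray, factor: int = 8) -> np.ndarray:
    """Reduce the effective sample rate by holding each sample for `factor` samples."""
    return np.repeat(x[::factor], factor)[: len(x)]

def flanger(x: np.ndarray, depth_s: float = 0.003, rate_hz: float = 0.5) -> np.ndarray:
    """Mix the signal with a copy of itself whose delay sweeps slowly up and down."""
    n = np.arange(len(x))
    delay = (depth_s * RATE / 2) * (1 + np.sin(2 * np.pi * rate_hz * n / RATE))
    delayed = x[np.clip(n - delay.astype(int), 0, len(x) - 1)]
    return 0.5 * (x + delayed)
```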

If you’d like a more dedicated digital audio processor, you can also build one, perhaps using some techniques to reduce lag.

youtube.com/embed/MlPtfeSyyak?…


hackaday.com/2025/08/13/runnin…


New trends in phishing and scams: how AI and social media are changing the game



Introduction


Phishing and scams are dynamic types of online fraud that primarily target individuals, with cybercriminals constantly adapting their tactics to deceive people. Scammers invent new methods and improve old ones, adjusting them to fit current news, trends, and major world events: anything to lure in their next victim.

Since our last publication on phishing tactics, there has been a significant leap in the evolution of these threats. While many of the tools we previously described are still relevant, new techniques have emerged, and the goals and methods of these attacks have shifted.

In this article, we will explore:

  • The impact of AI on phishing and scams
  • How the tools used by cybercriminals have changed
  • The role of messaging apps in spreading threats
  • Types of data that are now a priority for scammers


AI tools leveraged to create scam content

Text


Traditional phishing emails, instant messages, and fake websites often contain grammatical and factual errors, incorrect names and addresses, and formatting issues. Now, however, cybercriminals are increasingly turning to neural networks for help.

They use these tools to create highly convincing messages that closely resemble legitimate ones. Victims are more likely to trust these messages, and therefore, more inclined to click a phishing link, open a malicious attachment, or download an infected file.

Example of a phishing email created with DeepSeek

The same is true for personal messages. Social networks are full of AI bots that can maintain conversations just like real people. While these bots can be created for legitimate purposes, they are often used by scammers who impersonate human users. In particular, phishing and scam bots are common in the online dating world. Scammers can run many conversations at once, maintaining the illusion of sincere interest and emotional connection. Their primary goal is to extract money from victims by persuading them to pursue “viable investment opportunities” that often involve cryptocurrency. This scam is known as pig butchering. AI bots are not limited to text communication, either; to be more convincing, they also generate plausible audio messages and visual imagery during video calls.

Deepfakes and AI-generated voices


As mentioned above, attackers are actively using AI capabilities like voice cloning and realistic video generation to create convincing audiovisual content that can deceive victims.

Beyond targeted attacks that mimic the voices and images of friends or colleagues, deepfake technology is now being used in more classic, large-scale scams, such as fake giveaways from celebrities. For example, YouTube users have encountered Shorts where famous actors, influencers, or public figures seemingly promise expensive prizes like MacBooks, iPhones, or large sums of money.

Deepfake YouTube Short

The advancement of AI technology for creating deepfakes is blurring the lines between reality and deception. Voice and visual forgeries can be nearly indistinguishable from authentic messages, as traditional cues used to spot fraud disappear.

Recently, automated calls have become widespread. Scammers use AI-generated voices and number spoofing to impersonate bank security services. During these calls, they claim there has been an unauthorized attempt to access the victim’s bank account. Under the guise of “protecting funds”, they demand a one-time SMS code. This is actually a 2FA code for logging into the victim’s account or authorizing a fraudulent transaction.
media.kasperskycontenthub.com/…
Example of an OTP (one-time password) bot call

Data harvesting and analysis


Large language models like ChatGPT are well-known for their ability to not only write grammatically correct text in various languages but also to quickly analyze open-source data from media outlets, corporate websites, and social media. Threat actors are actively using specialized AI-powered OSINT tools to collect and process this information.

The data so harvested enables them to launch phishing attacks that are highly tailored to a specific victim or a group of victims – for example, members of a particular social media community. Common scenarios include:

  • Personalized emails or instant messages from what appear to be HR staff or company leadership. These communications contain specific details about internal organizational processes.
  • Spoofed calls, including video chats, from close contacts. The calls leverage personal information that the victim would assume could not be known to an outsider.

This level of personalization dramatically increases the effectiveness of social engineering, making it difficult for even tech-savvy users to spot these targeted scams.

Phishing websites


Phishers are now using AI to generate fake websites too. Cybercriminals have weaponized AI-powered website builders that can automatically copy the design of legitimate websites, generate responsive interfaces, and create sign-in forms.

Some of these sites are well-made clones nearly indistinguishable from the real ones. Others are generic templates used in large-scale campaigns, without much effort to mimic the original.

Phishing pages mimicking travel and tourism websites

Often, these generic sites collect any data a user enters and are not even checked by a human before being used in an attack. The following are examples of sites with sign-in forms that do not match the original interfaces at all. These are not even “clones” in the traditional sense, as some of the brands being targeted do not offer sign-in pages.

These types of attacks lower the barrier to entry for cybercriminals and make large-scale phishing campaigns even more widespread.

Login forms on fraudulent websites

Telegram scams


With its massive popularity, open API, and support for crypto payments, Telegram has become a go-to platform for cybercriminals. This messaging app is now both a breeding ground for spreading threats and a target in itself. Once they get their hands on a Telegram account, scammers can either leverage it to launch attacks on other users or sell it on the dark web.

Malicious bots


Scammers are increasingly using Telegram bots not just to create phishing websites, but also as an alternative or a complement to them. For example, a website might be used to redirect a victim to a bot, which then collects the data the scammers need. Here are some common schemes that use bots:

  • Crypto investment scams: fake token airdrops that require a mandatory deposit for KYC verification

Telegram bot seemingly giving away SHIBARMY tokens


  • Phishing and data collection: scammers impersonate an official postal service to get a user’s details under the pretense of arranging delivery for a business package.

Phishing site redirects the user to an “official” bot.


  • Easy money scams: users are offered money to watch short videos.

Phishing site promises easy earnings through a Telegram bot.

Unlike a phishing website that the user can simply close and forget about when faced with a request for too much data or a commission payment, a malicious bot can be much more persistent. If the victim has interacted with a bot and has not blocked it, the bot can continue to send various messages. These might include suspicious links leading to fraudulent or advertising pages, or requests to be granted admin access to groups or channels. The latter is often framed as being necessary to “activate advanced features”. If the user gives the bot these permissions, it can then spam all the members of these groups or channels.

Account theft


When it comes to stealing Telegram user accounts, social engineering is the most common tactic. Attackers use various tricks and ploys, often tailored to the current season, events, trends, or the age of their target demographic. The goal is always the same: to trick victims into clicking a link and entering the verification code.

Links to phishing pages can be sent in private messages or posted to group chats or compromised channels. Given the scale of these attacks and users’ growing awareness of scams within the messaging app, attackers now often disguise these phishing links using Telegram’s message-formatting tools, which let the visible link text differ from the actual URL.

The link in this phishing message does not lead to the URL shown

New ways to evade detection

Integrating with legitimate services


Scammers are actively abusing trusted platforms to keep their phishing resources under the radar for as long as possible.

  • Telegraph is a Telegram-operated service that lets anyone publish long-form content without prior registration. Cybercriminals take advantage of this feature to redirect users to phishing pages.

Phishing page on the telegra.ph domain


  • Google Translate is a machine translation tool from Google that can translate entire web pages and generate links like https://site-to-translate-com.translate.goog/… Attackers exploit it to hide their assets from security vendors. They create phishing pages, translate them, and then send out the links to the localized pages. This allows them to both avoid blocking and use a subdomain at the beginning of the link that mimics a legitimate organization’s domain name, which can trick users.

Localized phishing page


  • CAPTCHA protects websites from bots. Lately, attackers have been increasingly adding CAPTCHAs to their fraudulent sites to avoid being flagged by anti-phishing solutions and evade blocking. Since many legitimate websites also use various types of CAPTCHAs, phishing sites cannot be identified by their use of CAPTCHA technology alone.

CAPTCHA on a phishing site

Blob URL


Blob URLs (blob:example.com/…) are temporary links generated by browsers to access binary data, such as images and HTML code, locally. They are limited to the current session. While this technology was originally created for legitimate purposes, such as previewing files a user is uploading to a site, cybercriminals are actively using it to hide phishing attacks.

Blob URLs are created with JavaScript. The links start with “blob:” and contain the domain of the website that hosts the script. The data is stored locally in the victim’s browser, not on the attacker’s server.
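
To illustrate the mechanism (this is a minimal sketch, not code from any real phishing kit; the payload and variable names are made up), generating a blob: URL takes only a few lines of standard browser API calls:

```typescript
// Minimal sketch: a page wraps attacker-controlled HTML in a Blob and asks
// the browser for a temporary blob: URL. The payload here is illustrative.
const fakePage: string =
  "<html><body><h1>Sign in</h1><!-- credential form goes here --></body></html>";

// The browser returns something like blob:https://hosting-site.example/<uuid>.
// The HTML lives only in the local session, never on a server that a
// URL scanner could crawl, which is exactly why phishers like this trick.
const blob = new Blob([fakePage], { type: "text/html" });
const blobUrl: string = URL.createObjectURL(blob);

// Navigating to the blob URL renders the embedded page in this browser only.
window.location.href = blobUrl;
```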

Blob URL generation script inside a phishing kit

Hunting for new data


Cybercriminals are shifting their focus from stealing usernames and passwords to obtaining irrevocable or immutable identity data, such as biometrics, digital signatures, handwritten signatures, and voiceprints.

For example, a phishing site that asks for camera access supposedly to verify an account on an online classifieds service allows scammers to collect your biometric data.

Phishing for biometrics

For corporate targets, e-signatures are a major focus for attackers. Losing control of these can cause significant reputational and financial damage to a company. This is why services like DocuSign have become a prime target for spear-phishing attacks.

Phishers targeting DocuSign accounts

Even old-school handwritten signatures are still a hot commodity for modern cybercriminals, as they remain critical for legal and financial transactions.

Phishing for handwritten signatures

These types of attacks often go hand-in-hand with attempts to gain access to e-government, banking and corporate accounts that use this data for authentication.

These accounts are typically protected by two-factor authentication, with a one-time password (OTP) sent in a text message or a push notification. The most common way to get an OTP is by tricking users into entering it on a fake sign-in page or by asking for it over the phone.

Attackers know users are now more aware of phishing threats, so they have started to offer “protection” or “help for victims” as a new social engineering technique. For example, a scammer might send a victim a fake text message with a meaningless code. Then, using a believable pretext – like a delivery person dropping off flowers or a package – they trick the victim into sharing that code. Since the message sender indeed looks like a delivery service or a florist, the story may sound convincing. Then a second attacker, posing as a government official, calls the victim with an urgent message, telling them they have just been targeted by a tricky phishing attack. They use threats and intimidation to coerce the victim into revealing a real, legitimate OTP from the service the cybercriminals are actually after.

Fake delivery codes

Takeaways


Phishing and scams are evolving at a rapid pace, fueled by AI and other new technology. As users grow increasingly aware of traditional scams, cybercriminals change their tactics and develop more sophisticated schemes. Whereas they once relied on fake emails and websites, today, scammers use deepfakes, voice cloning and multi-stage tactics to steal biometric data and personal information.
Here are the key trends we are seeing:

  • Personalized attacks: AI analyzes social media and corporate data to stage highly convincing phishing attempts.
  • Abuse of legitimate services: scammers are misusing trusted platforms like Google Translate and Telegraph to bypass security filters.
  • Theft of immutable data: biometrics, signatures, and voiceprints are becoming highly sought-after targets.
  • More sophisticated methods of circumventing 2FA: cybercriminals are using complex, multi-stage social engineering attacks.


How do you protect yourself?


  • Critically evaluate any unexpected calls, emails, or messages. Avoid clicking links in these communications, even if they appear legitimate. If you do plan to open a link, verify its destination by hovering over it on a desktop or long-pressing on a mobile device.
  • Verify sources of data requests. Never share OTPs with anyone, regardless of who they claim to be, even if they say they are a bank employee.
  • Analyze content for fakery. To spot deepfakes, look for unnatural lip movements or shadows in videos. You should also be suspicious of any videos featuring celebrities who are offering overly generous giveaways.
  • Limit your digital footprint. Do not post photos of documents or sensitive work-related information, such as department names or your boss’s name, on social media.

securelist.com/new-phishing-an…


That’s no Moon, er, Selectric


If you learned to type anytime around the middle of the 20th century, you probably either had or wanted an IBM Selectric. These were workhorses, and they changed typing by moving from typebars to a replaceable typeball. They were expensive, though worth it, since many of them still work (including mine). But few of us could afford the $1,000 or more that these machines cost back in the day, especially when you consider that $1,000 was enough to buy a nice car for most of that time. [Tech Tangents] looks at something different: a clone Selectric from the sewing machine and printer company Juki.

The typewriter was the brainchild of [Thomas O’Reilly]. He sold typewriters and knew that a $500 compatible machine would sell. He took the prototype to Juki, which was manufacturing typewriters for Olivetti at the time.

Although other typewriters used typeballs, none of them were actual clones that could take IBM typeballs. Juki even made their own typeballs. You’d think IBM might have been upset, but they were already moving towards the “Wheelwriter”, which used a daisywheel element. Juki would later make a Xerox-compatible daisywheel printer, again at a fraction of the cost of the original.

Even the Juki manual was essentially a rip-off of the IBM Selectric manual. Sincerest form of flattery, indeed. It did appear that the ribbon was not a standard IBM cartridge. That makes them hard to find compared to Selectric ribbons, but they are nice since they have correction tape built in. The video mentions that you can find them on eBay and similar sites.

There were a few other cost savings. First, the Juki was narrower than most Selectrics. It also had a plastic case, although if you have ever had to carry a Selectric up a few flights of stairs, you might consider that a feature.

The Juki in the video doesn’t quite work, but it is a quirky machine with an odd history. Today, you can print your own typeballs. We wonder if these would be amenable to computer control like the Selectrics?

youtube.com/embed/EQMOWNUJq7U?…


hackaday.com/2025/08/12/thats-…


Creating a New Keyboard Flex for an Old Calculator


[Menadue] had a vintage Compucorp 326 calculator with an aging problem. Specifically, the flex cable that connects the button pad had corroded over time. However, thanks to the modern PCB industrial complex, replacing the obscure part was relatively straightforward!

The basic idea was simple enough: measure the original flex cable, and recreate it with the flat-flex PCB options available at the many modern PCB houses that cater to small orders and hobbyists. [Menadue] had some headaches, having slightly misjudged the pitch of the individual edge-connector contacts. However, he figured that if it lined up just right, it was close enough to still work. With the new flex installed, the calculator sprang to life, only several keys weren’t working. Making a new version with the correct pitch made all the difference, however, and the calculator was restored to full functionality.

It goes to show that as long as your design skills are up to scratch, you can replace damaged flex-cables in old hardware with brand new replacements. There’s a ton of other cool stuff you can do with flex PCBs, too.

youtube.com/embed/QmJaNzWDqbY?…


hackaday.com/2025/08/12/creati…


LEDs That Flow: A Fluid Simulation Business Card


Flip card

Fluid-Implicit-Particle or FLIP is a method for simulating particle interactions in fluid dynamics, commonly used in visual effects for its speed. [Nick] adapted this technique into an impressive FLIP business card.

The first thing you’ll notice about this card is its 441 LEDs arranged in a 21×21 matrix. These LEDs are controlled by a Raspberry Pi RP2350, which interfaces with a LIS2DH12TR accelerometer to detect card movement and with a small 32 Mb memory chip. The centerpiece is a fluid simulation where tilting the card makes the LEDs flow like water in a container. Written in Rust, the firmware implements a FLIP simulation, treating the LEDs as particles in a virtual fluid for a natural, flowing effect.
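
A full FLIP solver splats particle velocities onto a grid, enforces incompressibility there, and transfers the velocity change back to each particle. As a rough illustration of just the particle half of that loop (plain particle integration, not [Nick]’s actual Rust firmware; every name and constant here is illustrative), the tilt-to-LED mapping could look something like this:

```typescript
// Heavily simplified stand-in for the card's fluid loop: particles are
// integrated under a gravity vector taken from the accelerometer, bounce
// off the container walls, and are quantized onto the 21x21 LED grid.
// A real FLIP solver would also enforce incompressibility on a grid.
const GRID = 21;
const DT = 0.016;  // seconds per frame (illustrative)
const DAMP = 0.6;  // energy lost on a wall bounce

interface Particle { x: number; y: number; vx: number; vy: number; }

function step(particles: Particle[], gx: number, gy: number): boolean[] {
  const lit = new Array<boolean>(GRID * GRID).fill(false);
  for (const p of particles) {
    // Accelerometer tilt becomes the gravity vector.
    p.vx += gx * DT;
    p.vy += gy * DT;
    p.x += p.vx * DT;
    p.y += p.vy * DT;
    // Bounce off the walls of a unit-square container.
    if (p.x < 0) { p.x = 0; p.vx = -p.vx * DAMP; }
    if (p.x > 1) { p.x = 1; p.vx = -p.vx * DAMP; }
    if (p.y < 0) { p.y = 0; p.vy = -p.vy * DAMP; }
    if (p.y > 1) { p.y = 1; p.vy = -p.vy * DAMP; }
    // Light the LED under each particle.
    const col = Math.min(GRID - 1, Math.floor(p.x * GRID));
    const row = Math.min(GRID - 1, Math.floor(p.y * GRID));
    lit[row * GRID + col] = true;
  }
  return lit;
}
```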

This eye-catching business card uses clever tricks to stay slim. The PCB is just 0.6mm thick—compared to the standard 1.6mm—and the 3.6mm-thick 3.7V battery sits in a cutout to distribute its width across both sides of the board. The USB-C connection for charging and programming uses clever PCB cuts, allowing the plug to slide into place as if in a dedicated connector.

Inspired by a fluid simulation pendant we previously covered, this board is just as eye-catching. Thanks to [Nick] for sharing the design files for this unique business card. Check out other fluid dynamics projects we’ve featured in the past.


hackaday.com/2025/08/12/leds-t…


3D-Printing A Full-Sized Kayak In Under A Day


If you want to get active out on the water, you could buy a new kayak, or hunt one down on Craigslist. Or, you could follow [Ivan Miranda]’s example, and print one out instead.

[Ivan] is uniquely well positioned to pursue a build like this. That’s because he has a massive 3D printer which uses a treadmill as a bed. It’s perfect for building long, thin things, and a kayak fits the bill perfectly. [Ivan] has actually printed a kayak before, but it took an excruciating 7 days to finish. This time, he wanted to go faster. He made some extruder tweaks that would allow his treadmill printer to go much faster, and improved the design to use as much of the belt width as possible. With the new setup capable of extruding over 800 grams of plastic per hour, [Ivan] then found a whole bunch of new issues thanks to the amount of heat involved. He steps through the issues one at a time until he has a setup capable of extruding an entire kayak in less than 24 hours.

This isn’t just a dive into 3D printer tech, though. It’s also about watercraft! [Ivan] finishes the print with a sander and a 3D pen to clean up some imperfections. The body is also filled with foam in key areas and coated with epoxy to make it watertight. It’s not the easiest craft to handle, and probably isn’t what you’d choose for ocean use. It’s too narrow, and it scrapes [Ivan] up when he tries to get in. It might be a floating and functional kayak, just barely, for a smaller individual, but [Ivan] suggests he’d need to make changes to actually use this thing properly.

Overall, it’s a project that shows you can 3D print big things quite quickly with the right printer, and that maritime engineering principles are key for producing viable watercraft. Video after the break.

youtube.com/embed/9DpMkYDCq9Y?…


hackaday.com/2025/08/12/3d-pri…


2025 One Hertz Challenge: Abstract Aircraft Sculpture Based On Lighting Regulations


The 2025 One Hertz Challenge is really heating up with all kinds of projects that do something once every second. [The Baiko] has given us a rather abstract entry that looks like a plane…if you squint at it under the right conditions.

It’s actually quite an amusing abstract build. If you’ve ever seen planes flying in the night sky, you’ve probably noticed they all have similar lights. Navigation lights, or position lights as they are known, consist of a red light on the left side and a green light on the right side. [The Baiko] assembled two such LEDs on a small sliver of glass along with an ATtiny85 microcontroller.

Powered by a coin cell, they effectively create an abstract representation of a plane in the night sky, paired with a flashing strobe that meets the requirements of the contest. [The Baiko] isn’t exactly sure of the total power draw, but notes it must be low, given the circuit has run for weeks on a 30 mAh coin cell.

It’s an amusing piece of PCB art, though from at least one angle, it does appear the red LED might be on the wrong side to meet FAA regulations. Speculate on that in the comments.

In any case, we’ve had a few flashers submitted to the competition thus far, and you’ve got until August 19 to get your own entry in!

2025 Hackaday One Hertz Challenge


hackaday.com/2025/08/12/2025-o…


Design Review: LattePanda Mu NAS Carrier


It is a good day for design review! Today’s board is the MuBook, a LattePanda Mu SoM (System-on-Module) carrier from [LtBrain], optimized for a NAS with 4 SATA and 2 NVMe ports. It is cheap to manufacture and put together; the changes are not extensive, but they do make the board easier to assemble, and the result is a decent-footprint x86 NAS board you can even order assembled somewhere like JLCPCB.

This board is based on the Lite Carrier KiCad project that the LattePanda team open-sourced to promote their Mu boards. I enjoy seeing people start their project from a known-working open-source design – they can save themselves lots of work, avoid reinventing the wheel and whole categories of mistakes, and they can learn a bunch of design techniques/tips through osmosis, too. This is a large part of why I argue everyone should open-source their projects to the highest extent possible, and why I try my best to open-source all the PCBs I design.

Let’s get into it! The board’s on GitHub as linked, already containing the latest changes.

Git’ting Better


I found the very first review item when downloading the repo onto my computer. It took a surprising amount of time, which led me to believe the repo contained a fair number of binary files – something quite counterproductive to keep in Git. My first guess was that the repo had no .gitignore for KiCad, and indeed – it had the backups/ directory with a heap of hefty .zips, as well as a fair bit of stuff like gerbers and footprint/symbol cache files. I checked in with [LtBrain] that these wouldn’t be an issue to delete, and then added a .gitignore from the Blepis project.

This won’t make the repo easier to check out in the future, sadly – the hefty auto-generated files are still in the repo history. However, at least it won’t grow further as KiCad puts new archives into the backups/ directory, and, it’s good to keep .gitignore files in your KiCad repos so you can easily steal them every time you start a new project.

Apart from that, a .gitignore also makes working with your repository way, way easier! The changes overview in git status or GitHub Desktop becomes way nicer to read, and you even get a shot at reviewing changes in your commits to make sure you’re not adding something you don’t want in the repository. Oh, and, you don’t risk leaking your personal details as much, since things like auto-generated KiCad lockfiles will sometimes contain your computer name or your user name.
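
For reference, a minimal KiCad-oriented .gitignore along these lines (a typical set of entries, not necessarily the exact file borrowed from the Blepis project) could look like:

```gitignore
# KiCad auto-generated files that don't belong in version control
*-backups/
_autosave-*
*.kicad_prl
*.kicad_pcb-bak
*.kicad_sch-bak
fp-info-cache
# Session lock files - these can embed your user or computer name
*.lck

# Manufacturing outputs that can be regenerated on demand
gerbers/
```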

Now that the PCB Git-ability has been improved, let’s take a look at the board, first and foremost; the schematic changes here are fairly minimal, and already reviewed by someone else.

Cheap With Few Compromises


There’s plenty of PCIe, USB3, and SATA on this board – as such, it has to be at least four layers, and this one is. The SIG-GND-GND-SIG arrangement is only slightly compromised by a VDC (12 V to 15 V) polygon on one of the inner layers, taking up about 30% of the space, used to provide input power to the Mu and to the onboard 3.3 V and 5 V regulators.

Of course, with so many interfaces, you’ll also want to go small – you’ll have to fit a lot of diffpairs on the board, and you don’t want them flowing too close to each other to avoid interference. This board uses approximately 0.1 mm / 0.1 mm clearances, which, thankfully, work well enough for JLCPCB – the diffpairs didn’t even need to be redrawn much. Apart from that, the original design used 0.4 mm / 0.2 mm vias. Problem? JLC has a $30 surcharge for such vias for a board of this size. No such thing for 0.4 mm / 0.3 mm vias, surprisingly, even though the annular ring is way smaller.

I went and changed all 0.4 mm / 0.2 mm vias to 0.4 mm / 0.3 mm vias, and that went surprisingly well – no extra DRC errors. The hole-to-copper distance is set pretty low in this project, to 0.15 mm, because that’s inherited from the LattePanda carrier files, so I do hope that JLC doesn’t balk at those vias during the pre-production review. Speaking of DRC, I also set all courtyard errors to “ignore” – not only does this category have a low signal-to-noise ratio, the LattePanda module courtyard would also raise problems for all items placed under the module, even though there’s plenty of space as long as you use a tall enough DDR socket.

One thing looked somewhat critical to me, though – the VDC polygon, specifically, the way it deprived quite a few diffpairs from GND under them.

Redraw, Nudge, Compromise


Remember, you want a ground polygon all along the underside of the differential pair, from start to finish, without interruptions – that ground polygon is where ground return current flows, and it’s also crucial in reaching the right differential pair impedance. The VDC polygon did interrupt a good few pairs, however.

Most of those interruptions were fixed easily by lifting the VDC polygon. Highlighting the net (` keyboard key) showed that there’s only really 4 consumers of the VDC power input, and all of them were above the overwhelming majority of the diffpairs. REFCLKs for M.2 sockets had to be rerouted to go over ground all throughout, though, and I also added a VDC cutout to pull gigabit Ethernet IC PCIe RX/TX pairs over VDC for most of their length.

This polygon carries a fair bit of current, a whole N100 (x86) CPU’s worth and then some, and remember – inner layers are half as thick, only 0.5 oz instead of the 1 oz you get for outer layers by default. So, while we can cut into it, the VDC path has to be clear enough. A lot of items on VDC, like some gigabit controller power lines, ended up being moved from the VDC polygon layer to the opposite inner layer – now, they’re technically on the layer under the PCIe and gigabit Ethernet pairs, but it’s a better option than compromising VDC power delivery. I also moved some VDC layer tracks to B.Cu and F.Cu; remember, with high-speed stuff you really want to minimize the number of inner layer tracks.

Loose Ends


With the vias changed and the polygon redrawn, only a few changes remained. Not all diffpair layer crossings had enough vias next to them, and not all GND pads had vias either – particularly on the Mu and M.2 slots, what with high-speed communications and all, you have to make sure that all GND pads have GND vias on them. Again, highlight the GND net (`) and go hunting. Afterwards, check whether you broke any polygons on inner layers – I sure did accidentally make a narrow passage on VDC even more narrow with my vias, but it didn’t take much to fix. Remember, it’s rare that extra vias cost you extra, so going wild on them is generally safe.

The SATA connector footprint from Digikey was faulty – instead of plated holes for through-hole pins, it had non-plated holes. Not the kind of error I’ve ever seen with easyeda2kicad, gotta say. As an aside, it was quite a struggle to find the proper datasheet on Digikey – I had to open like five different PDFs before I found one with footprint dimension recommendations.

A few nets were NC – as it turned out, mostly because some SATA ports had conflicting names; a few UART testpoints were present in the schematic but not on the board, so I wired them real quick, too. DRC highlighted some unconnected tracks – always worth fixing, so that KiCad can properly merge small segments into longer tracks, and so that your track moves don’t then result in small track snippets interfering with the entire plan. Last but not least, the BIOS sheet in the schematic was broken for some reason; KiCad said that it was corrupted. Turned out that instead of BIOS.kicad_sch, the file was named bios.kicad_sch – go figure.

Production Imminent


These changes helped [LtBrain] reduce the PCB manufacturing cost, removed some potential problems with high-speed signal functioning, and fixed some crucial issues like the SATA port mounting pins – pulling an otherwise SMD-pad-only SATA port off the board by accident is really easy! They’re all on GitHub now, as you’d expect, and you too can benefit from this board.


Continuous-Path 3D Printed Case is Clearly Superior


[porchlogic] had a problem. The desire was to print a crystal-like case for an ESP32 project, reminiscent of so many glorious game consoles and other transparent hardware of the 1990s. However, with 3D printing the only realistic option on offer, it seemed difficult to achieve a nice visual result. The solution? Custom G-code to produce as nice a print as possible, by having the hot end trace a single continuous path.

The first job was to pick a filament. Transparent PLA didn’t look great, and was easily dented – something [porchlogic] didn’t like, given the device was intended to be pocketable. PETG promised better results, but stringing was common and tended to reduce the visual appeal. The way to avoid stringing would be to stop the hot end from lifting away from the print and moving to different areas of the part. Thus, [porchlogic] had to find a way to make the hot end move in a single continuous path – something that isn’t exactly a regular feature of common 3D printing slicer utilities.

The enclosure itself was designed from the ground up to enable this method of printing. Rhino and Grasshopper were used to create the enclosure and generate the custom G-code for an all-continuous print. Or, almost—there is a single hop across the USB port opening, which creates a small blob of plastic that is easy to remove once the print is done, along with strings coming off the start and end points of the print.
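
As a sanity check on the concept (this is not [porchlogic]’s Grasshopper definition, and every number here is made up), here’s a sketch of how a continuous single-path “spiral” wall can be emitted as G-code, with the nozzle climbing as it goes so it never lifts and never travels without extruding:

```typescript
// Minimal sketch: emit G-code for one unbroken extrusion path, a square
// "spiral vase" wall. Z rises a quarter layer per edge, so the nozzle
// never stops, lifts, or travels - there is nothing to string.
const layerHeight = 0.2;    // mm gained per full loop (illustrative)
const side = 40;            // mm, square footprint (illustrative)
const loops = 150;          // number of full perimeter loops
const extrudePerMm = 0.05;  // mm of filament per mm of XY travel (assumed)
const feedrate = 1800;      // mm/min

const corners: [number, number][] = [[0, 0], [side, 0], [side, side], [0, side]];
const gcode: string[] = [
  "G90 ; absolute XYZ",
  "M83 ; relative extrusion",
  `G1 X0 Y0 Z${layerHeight.toFixed(3)} F${feedrate}`,
];

let z = layerHeight;
for (let loop = 0; loop < loops; loop++) {
  // Visit the next three corners and return to the first.
  for (let i = 1; i <= 4; i++) {
    const [x, y] = corners[i % 4];
    z += layerHeight / 4;
    const e = side * extrudePerMm; // every edge of a square is the same length
    gcode.push(`G1 X${x} Y${y} Z${z.toFixed(3)} E${e.toFixed(4)}`);
  }
}
console.log(gcode.join("\n"));
```

A real part needs a floor, unequal edge lengths, and per-segment extrusion computed from actual travel distance, but the core trick, one unbroken list of extrusion moves, is the same.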

Designing an enclosure in this way isn’t easy, per se, but it did net [porchlogic] the desired results. We’ve seen some other neat hacks in this vein before, too, like using innovative non-planar infill techniques to improve the strength of prints.

youtube.com/embed/2Sy50BrlDMo?…

Thanks to [Uxorious] and [Keith Olson] for the tip!


hackaday.com/2025/08/12/contin…