Build a $35 400 MHz Logic Analyzer

What do you do when you’re a starving student and you need a 400 MHz logic analyzer for your digital circuit investigations? As [nanofix] shows in a recent video, you find one that’s available as an open hardware project and build it yourself.

The project, aptly named LogicAnalyzer, was developed by [Dr. Gusman] a few years back and has actually graced these pages in the past. In the video below, [nanofix] concentrates on the mechanics of actually putting the board together, with a focus on soldering. At the heart of the build are the Raspberry Pi Pico 2 and the TXU0104 level shifters.

If you’d like to follow along at home, all the build instructions and design files are available on GitHub. For your convenience, the Gerber files have been shared at PCBWay.

Of course we have heaps of material here at Hackaday covering logic analyzers. If you’re interested in budget options, check out $13 Scope And Logic Analyzer Hits 18 Msps or how to build one using a ZX Spectrum! If you’re just getting started with logic analyzers (or if you’re not sure why you should), check out Logic Analyzers: Tapping Into Raspberry Pi Secrets.

youtube.com/embed/NaSM0-yAvQs?…


hackaday.com/2025/06/12/build-…



Assumed Breach. The Perimeter Has Fallen: Welcome to the War "Inside" the Network


From the Red Team's perspective, a critical analysis of the fallacy of perimeter security and of the imperative to test internal detection and response capabilities.

For decades, the dominant paradigm in cybersecurity has been the "fortress": building ever higher and more sophisticated perimeter defenses to keep attackers out. Next-generation firewalls, intrusion prevention systems, web and email security gateways: an imposing arsenal deployed to guard the digital castle. Yet from the perspective of those who, like Red Teams, are tasked with simulating the most determined adversaries, this vision collides daily with a more complex reality: the perimeter, however robust, is destined to be breached.

Whether through an ingenious spear-phishing campaign, the exploitation of a zero-day vulnerability, a flaw in the supply chain, or human error, initial access is often a question of "when", not "if". This is where the strategic approach of "Assumed Breach" emerges, not as a pessimistic theory, but as a pragmatic necessity for testing and improving an organization's real resilience.

When the Walls Give Way: The Internal Battlefield


Once an attacker (or a Red Team simulating one) gains an initial foothold inside the network, such as a compromised endpoint or valid credentials, the scenario changes radically. Perimeter security has failed its first task, and the game shifts entirely to internal defense, detection, and response capabilities. It is in this scenario that the Red Team operates most effectively, laying bare weaknesses that are often overlooked. Our activities typically focus on:

  1. Internal Reconnaissance and Foothold Consolidation: mapping the internal network, identifying high-value segments, enumerating users and services, and stabilizing the initial access.
  2. Lateral Movement: this is perhaps the most critical phase and, often, the noisiest if detection systems are well tuned. We use a variety of techniques, from classic Pass-the-Hash/Ticket to abusing protocols such as RDP, SMB, and WinRM, all the way to exploiting trust relationships between systems and misconfigured applications. The lack of effective network segmentation (micro-segmentation) and of East-West traffic monitoring makes these maneuvers enormously easier.
  3. Privilege Escalation: the goal is to elevate privileges from the initially compromised account up to administrative access (Domain Admin, root, cloud administrator). This is achieved by exploiting weak or reused credentials, services configured with excessive permissions, known unpatched vulnerabilities on internal systems, or structural weaknesses in Active Directory (e.g. Kerberoasting, permissive ACLs); a minimal detection sketch for one of these techniques follows this list.
  4. Evasion and Persistence: maintaining persistent access and operating under the radar is essential. Evasion techniques aim to bypass EDR/XDR, antivirus, and other local controls, while persistence is established via scheduled tasks, services, modified registry keys, or more sophisticated backdoors.
  5. Reaching the Objective (Data Exfiltration, Impact): whether it is exfiltrating sensitive data (simulating techniques to bypass DLP solutions), encrypting systems (in ransomware scenarios), or compromising critical processes, this is the culmination of the operation.
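
As a concrete illustration of the defensive side of point 3, below is a minimal, hypothetical hunting sketch in Python: it flags accounts that request an unusually large number of RC4-encrypted service tickets (Windows event 4769 with encryption type 0x17), a common Kerberoasting signature. The CSV column names and the threshold are assumptions for illustration, not a vendor rule.

```python
# Hypothetical hunt over an exported log of Windows 4769 events.
import csv
from collections import Counter

SUSPICIOUS_ENCRYPTION = "0x17"   # RC4-HMAC, commonly requested by Kerberoasting tools
THRESHOLD = 10                   # distinct services requested by one account

def hunt_kerberoasting(csv_path: str):
    requests = []
    with open(csv_path, newline="") as f:
        # Assumed columns: EventID, Account, ServiceName, TicketEncryptionType
        for row in csv.DictReader(f):
            if row["EventID"] == "4769" and row["TicketEncryptionType"] == SUSPICIOUS_ENCRYPTION:
                requests.append((row["Account"], row["ServiceName"]))

    # Count distinct services requested per account.
    services_per_account = Counter()
    seen = set()
    for account, service in requests:
        if (account, service) not in seen:
            seen.add((account, service))
            services_per_account[account] += 1

    return [acct for acct, n in services_per_account.items() if n >= THRESHOLD]

# Example: print(hunt_kerberoasting("security_4769_export.csv"))
```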


The Effectiveness of "Assumed Breach" Testing


Adopting an "Assumed Breach" testing methodology means starting the assessment from a scenario in which a compromise has already occurred (e.g. giving the Red Team low-privilege credentials or access to an "infected" endpoint). The benefits are manifold:

  • Focus on Detection & Response: it directly evaluates the ability of the Blue Team and the SOC to detect malicious internal activity and respond effectively.
  • Greater Realism: it more faithfully simulates advanced attack scenarios, where the attacker is already operating inside.
  • Optimized Investments: it helps identify where investments in internal security (EDR, NDR, SIEM, SOAR, IAM, segmentation) are most lacking, or where configurations need improvement.
  • Validation of Incident Response Playbooks: it puts the effectiveness and practicality of incident response procedures to the test.


Strengthening the Fortress from Within: Implications for Defenders


The "Assumed Breach" approach is not an indictment of perimeter defenses, but a necessary complement. Defenders must ask themselves:

  • Internal Visibility: do we have adequate visibility into what happens inside our network? Are our logs sufficient and properly correlated?
  • Advanced Detection Capabilities: can our tools (EDR/XDR, NDR, UEBA) detect post-exploitation TTPs rather than just known malware signatures?
  • Least Privilege and Zero Trust: are we really applying these principles, or are there shortcuts and excessive permissions that facilitate lateral movement?
  • Response Readiness: how quickly can we isolate a compromised system, revoke credentials, analyze the incident, and recover?


From Prevention to Resilience


The evolving threat landscape demands a paradigm shift: from hoping for infallible prevention to building robust resilience. The "Assumed Breach" approach, embodied in Red Teaming activities, is a fundamental tool on this path. It is not just about finding vulnerabilities, but about testing the entire defense ecosystem (people, processes, and technologies) in its ability to withstand, detect, and respond to an adversary who has already made it past the front lines.

Only by accepting the possibility of compromise can we prepare to manage it effectively, turning potential disasters into contained incidents and lessons learned.

The article Assumed Breach. The Perimeter Has Fallen: Welcome to the War "Inside" the Network originally appeared on il blog della sicurezza informatica.



Hit by Ransomware? The Government Will Help Your Company… with a Hefty Fine


Australia is launching a new phase in the fight against cybercrime, requiring large companies to formally report any ransom payment made to ransomware operators to the government. The obligation is established by the Cyber Security Bill 2024, which came into force on May 30.

The rules apply to all companies with annual revenue above 3 million Australian dollars, roughly 1.92 million US dollars. They have 72 hours to inform the Australian Signals Directorate (ASD) that they have transferred money to extortionists. The payments themselves are not banned, but the authorities stress that they are officially discouraged.

Last year's ASD report listed only 121 investigations into such incidents, while the actual number of attacks was far higher. For this reason, the authorities are counting on new reports.

Once the law takes effect, companies will have exactly six months to "adjust": during this period, only the most serious violations will be prosecuted. From 2026, the reporting obligation will apply in full. Non-compliance will carry a fine of 19,800 Australian dollars, though this amount may increase in the future.

Organizations will now be required to provide not only their business number but also the details of the incident: the time of the attack, whether information was stolen or encrypted, which vulnerabilities the attackers exploited, the estimated losses, and the amount and currency of the ransom.

Australian authorities explain that collecting this data will allow them to better understand which ransomware families most often target local companies and how widespread the problem is. Analysis of the statistics could influence the development of future legislative initiatives.

The threshold for mandatory reporting is fairly high: according to government estimates, the new rules will affect less than 7% of the country's companies. However, it is precisely these companies that hold the largest volumes of personal data and are of greatest interest to attackers.

Similar laws are currently being drafted in other countries. In the United States, CISA is working on the final rules for reporting ransom payments. The United Kingdom plans to go even further: banning ransom payments by the public sector outright, requiring large non-state companies to report all ransom payments, and introducing a special procedure in which the victim must first obtain government approval before paying the attackers.

The article Hit by Ransomware? The Government Will Help Your Company… with a Hefty Fine originally appeared on il blog della sicurezza informatica.



Denmark says goodbye to Windows and Word: the Ministry of Digitalisation switches to Linux and LibreOffice | DDay.it

dday.it/redazione/53314/la-dan…

The Danish ministry has begun the transition to open source software to strengthen digital sovereignty and reduce dependence on US Big Tech.




Boston Dyke March Friday


Join us at the Boston Dyke March this Friday, June 13th, 6-8pm.

Other upcoming events:


masspirates.org/blog/2025/06/1…



Simple Open Source Photobioreactor


[Bhuvanmakes] says that he has the simplest open source photobioreactor. Is it? Since it is the only photobioreactor we are aware of, we’ll assume that it is. According to the post, other designs are difficult to recreate since they require PC boards, sensors, and significant coding.

This project uses no microcontroller, so it has no coding. It also has no sensors. The device is essentially an acrylic tube with an air pump and some LEDs.

The base is 3D printed and contains very limited electronics. Beyond the normal construction steps, the cylinder apparently has to be very clean before you introduce the bioreactant.

Of course, you also need something to bioreact, if that’s even a real word. The biomass of choice in this case was Scenedesmus algae. While photobioreactors are used in commercial settings where you need to grow something that requires light, like algae, this one appears to mostly be for decorative purposes. Sort of an aquarium for algae. Then again, maybe someone has some use for this. If that’s you, let us know what your plans are in the comments.

We’ve seen a lantern repurposed into a bioreactor. It doesn’t really have the photo part, but we’ve seen a homebrew bioreactor for making penicillin.

youtube.com/embed/He-LUacT_SY?…


hackaday.com/2025/06/12/simple…



New Berlin domestic intelligence (Verfassungsschutz) law: more surveillance, less oversight, harder access to information


netzpolitik.org/2025/neues-ber…



Today's confirmation by the Supreme Court of the six-year prison sentence for former Argentine president Cristina Fernández de Kirchner represents yet another attack on democracy in Latin America and a threat to the autonomy of politics from economic and judicial powers. The trial known as "Vialidad", already widely contested for its [...]


COTS Components Combine to DIY Solar Power Station


They’re marketed as “Solar Generators” or “Solar Power Stations” but what they are is a nice box with a battery, charge controller, and inverter inside. [DoItYourselfDad] on YouTube decided that since all of those parts are available separately, he could put one together himself.

The project is a nice simple job for a weekend afternoon. (He claims 2 hours.) Because it’s all COTS components, it’s just a matter of wiring everything together and sticking it into a box. [DoItYourselfDad] walks his viewers through this process very clearly, including installing a shunt to monitor the battery. (This is the kind of video you could send to your brother-in-law in good conscience.)

Strictly speaking, he didn’t need the shunt, since his fancy LiFePO4 pack from TimeUSB has one built in with Bluetooth connectivity. Having a dedicated screen is nice, though, as is the ability to charge from wall power or solar via the two different charge controllers [DoItYourselfDad] includes. If it were our power station, we’d be sure to put in a DC-DC converter for USB-PD functionality, but his use case must be different, as he has a 120 V inverter as the only output. That’s the nice thing about doing it yourself, though: you can include all the features you want, and none that you don’t.
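
For readers wondering what the shunt actually buys you, here is a minimal sketch of the underlying idea, coulomb counting: integrating measured current over time to estimate remaining capacity. The capacity figure and readings below are made up; in this build, the pack's built-in monitor reports such values over Bluetooth.

```python
# Minimal coulomb-counting sketch with simulated shunt readings.
BATTERY_CAPACITY_AH = 100.0          # assumed pack rating
state_of_charge = 1.0                # start full

def update_soc(current_a: float, dt_s: float) -> float:
    """Positive current = discharge. Returns the new state of charge (0..1)."""
    global state_of_charge
    delta_ah = current_a * dt_s / 3600.0
    state_of_charge = min(1.0, max(0.0, state_of_charge - delta_ah / BATTERY_CAPACITY_AH))
    return state_of_charge

# Example: a 5 A load for one hour drops a 100 Ah pack by 5 %.
for _ in range(3600):
    soc = update_soc(5.0, 1.0)
print(f"State of charge: {soc:.1%}")   # ~95.0%
```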

We’re not totally sure about his claim that the clear cargo box was chosen because he was inspired by late-90s Macintosh computers, but it’s a perfectly usable case, and the build quality is probably as good as the cheapest options on TEMU.

This project is simple, but it does the job. Have you made a more sophisticated battery box, or another more impressive project? Don’t cast shade on [DoItYourselfDad]: cast light on your work by letting us know about it!

youtube.com/embed/g_v6E-MYMdc?…


hackaday.com/2025/06/12/cots-c…



Brazilian Constitutional Court: social media platforms should be liable for users' posts


netzpolitik.org/2025/brasilian…



ATmosphere Report – #120

WordPress plugins on ATProto, managing digital badges and attestations, and more.

I also run a weekly newsletter, where you get all the articles I published this week directly in your inbox, as well as additional analysis. You can sign up right here, and get the next edition tomorrow!

The News


The Linux Foundation has announced FAIR, a package manager project for WordPress. It is “a federated and independent repository of trusted plugins and themes for web hosts, commercial plugin and tool developers in the WordPress ecosystem and end users.” To achieve this independent and federated repository of tools for the WordPress ecosystem, FAIR uses ATProto underneath. FAIR has built its own protocol, the FAIR protocol, on top of ATProto. It uses DID PLC as an identifier for the packages, and ATProto for indexing and discoverability. As the project has just launched and some of the final parts are still being ironed out, there are no packages yet that use the FAIR system. As such, I cannot yet give good context for what discoverability of WordPress packages over ATProto actually looks like. The chaos of the last year around the management of WordPress shows a need for a decentralised repository of packages and plugins, and FAIR already shows that ATProto can be much more than only a microblogging network.
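
For a sense of what DID PLC identifiers are, here is a minimal sketch of resolving one over HTTP against the public PLC directory at plc.directory. The DID shown is a placeholder, and how FAIR actually maps packages to DIDs is not documented here.

```python
# Resolve a did:plc identifier to its DID document via the public PLC directory.
import json
import urllib.request

def resolve_did_plc(did: str) -> dict:
    """Fetch the DID document for a did:plc identifier."""
    with urllib.request.urlopen(f"https://plc.directory/{did}") as resp:
        return json.load(resp)

# Hypothetical package DID; a real one resolves to handles and service endpoints.
doc = resolve_did_plc("did:plc:examplexxxxxxxxxxxxxxxx")
print(doc.get("alsoKnownAs"), [s.get("serviceEndpoint") for s in doc.get("service", [])])
```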

Gnosco is a new tool for digital badges and attestations on ATProto. It acts as a secure middleman between the application that issues the badge and your PDS. This allows applications to create a signed record awarding a badge or attestation to a user. The badge is not placed into the user’s PDS right away, but instead held in escrow by Gnosco. Users can then log into Gnosco with their ATProto account and review the badges. If they approve, the signed badge is then added to their own PDS.

It took me a while to wrap my head around what Gnosco is and what it does, but it tackles the following problem. Badges, awards, and other attestations need to be accepted and signed by both the issuer and the receiver. But it is not known in advance for every attestation whether the user actually wants to receive it and store it on their PDS. So there needs to be a way for the user to accept or reject a badge or attestation that is issued. Gnosco provides this platform-neutral interface, where users can accept or reject any attestation or badge.
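
As a rough mental model of that accept/reject flow (not Gnosco's actual API or lexicon), a minimal sketch:

```python
# Illustrative escrow flow: issuer deposits a signed badge, the recipient
# reviews it, and only an accepted badge is written to the recipient's PDS.
from dataclasses import dataclass, field

@dataclass
class Badge:
    issuer_did: str
    recipient_did: str
    name: str
    issuer_signature: str          # created by the issuing application

@dataclass
class Escrow:
    pending: dict = field(default_factory=dict)   # recipient_did -> [Badge]

    def deposit(self, badge: Badge):
        self.pending.setdefault(badge.recipient_did, []).append(badge)

    def review(self, recipient_did: str):
        return list(self.pending.get(recipient_did, []))

    def accept(self, recipient_did: str, badge: Badge, pds: list):
        self.pending[recipient_did].remove(badge)
        pds.append(badge)          # stand-in for a record-creation call on the PDS

    def reject(self, recipient_did: str, badge: Badge):
        self.pending[recipient_did].remove(badge)

# The badge lands on the user's PDS only after explicit approval.
escrow, my_pds = Escrow(), []
b = Badge("did:plc:issuer", "did:plc:me", "Conference Speaker 2025", "sig…")
escrow.deposit(b)
for pending in escrow.review("did:plc:me"):
    escrow.accept("did:plc:me", pending, my_pds)
```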

Photo-sharing platform Grain now has their own moderation system on their own infrastructure. Grain is building a social photo-sharing network on ATProto that is separate from Bluesky, using their own lexicon. One reason why image-sharing platforms have so far tended to be alternate Bluesky clients is that this means the client does not have to be responsible for moderation. For Grain, the goal is to build their own independent social network, and thus their own moderation system is mandatory as well. The Grain developer also released a stand-alone app to embed Grain galleries on your own website.

Blacksky is proposing to make a soft-fork of the Bluesky client for the Blacksky community. With their own forked app, Blacksky can set some default values that benefit their community, such as setting the default feed to the Blacksky Trending feed, and setting the Blacksky moderation as default moderation. The organisation is looking for 2500 USD in recurring monthly donations, and they are close to reaching that goal.

ATProto chatroom app Roomy has released another alpha version. Besides offering public chatrooms, Roomy continues to experiment with features for collecting and aggregating chat messages into longer-lived places for text. In this update they included ‘boards’, where people can create simple markdown pages as well as collect ‘threads’ that are pulled out of the chat log. Roomy is on the bleeding edge of technology when it comes to using ATProto, combining it with Conflict-free Replicated Data Types (CRDTs). The Roomy blogs go into more detail on why they are building the architecture this way, but the current practical problem is that CRDTs are new enough that what Roomy needs is still in development.
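
For readers new to the term, a tiny example of a CRDT, a grow-only counter, shows why the data type suits collaborative apps: replicas can be edited independently and always merge to the same state. This is only a primer on the concept, not Roomy's implementation.

```python
# Grow-only counter CRDT: each replica increments its own slot,
# and merging takes the per-slot maximum, so merges never conflict.
class GCounter:
    def __init__(self, replica_id: str):
        self.replica_id = replica_id
        self.counts: dict[str, int] = {}

    def increment(self, n: int = 1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self) -> int:
        return sum(self.counts.values())

    def merge(self, other: "GCounter"):
        for rid, n in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), n)

# Two replicas diverge offline, then merge to the same total on both sides.
a, b = GCounter("alice"), GCounter("bob")
a.increment(3); b.increment(2)
a.merge(b); b.merge(a)
assert a.value() == b.value() == 5
```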

Tech updates and news


  • ATStudio is a new developer-focused tool that allows people to interact with ATProto. It allows you to “experiment with the protocol and debug code paths by making direct XRPC requests and executing @ATProtocol SDK methods using the integrated dashboard.”
  • Boost Blue is a new Bluesky client for Android and iOS, that has a few in-demand features that the main Bluesky client is missing, such as repost muting by user, drafts and bookmarks.
  • Bluesky’s latest update adds a ‘share’ button to every post, while the announced ability to get notifications for likes on reposts has been pushed back to the next update, which will contain more notification filters.
  • An update by Skylight on how they are building their algorithm.
  • Work on the Deer client is paused for the summer.
  • Graze announced they are backing Party Starter with a 1k USD grant, a “toolkit for creating short-lived, location-aware events”. Not much else is known yet about Party Starter.
  • A “minor change to the PLC Directory service, with the aim of expanding compatibility with non-atproto apps and services”.
  • A tool to run raffles on Bluesky posts.
  • A new PDS browser with a retro interface.


The Links


That’s all for this week, thanks for reading! If you want more analysis, you can subscribe to my newsletter. Every week you get an update with all this week’s articles, as well as extra analysis not published anywhere else. You can subscribe below, and follow this blog @fediversereport.com and my personal account @laurenshof.online on Bluesky.

#bluesky

fediversereport.com/atmosphere…




Israel Attacks Iran. Explosions in Tehran


@Notizie dall'Italia e dal mondo
The new war unleashed by Netanyahu follows yesterday's revelations by two American TV networks that the attack was imminent
The article Israel Attacks Iran. Explosions in Tehran originally appeared on Pagine Esteri.

pagineesteri.it/2025/06/13/med…



Exclusive: An FTC complaint led by the Consumer Federation of America outlines how therapy bots on Meta and Character.AI have claimed to be qualified, licensed therapists to users, and why that may be breaking the law.#aitherapy #AI #AIbots #Meta


Telescopes perched on the Andes Mountains glimpsed elusive encounters fueled by the first of the first stars in the universe more than 13 billion years ago.#TheAbstract


✍ All the changes to #Maturità2025, explained by Dr. Flaminia Giorda, National Coordinator of the Inspectorate Service and of the Technical Structure for State Exams.

Watch the video here ➡️ youtu.be/af_bqCfx9nc

#MIMaturo



The Billionth Repository On GitHub is Really Shitty


What’s the GitHub repository you have created that you think is of most note? Which one do you think of as your magnum opus, the one that you will be remembered by? Was it the CAD files and schematics of a device for ending world hunger, or perhaps it was software designed to end poverty? Spare a thought for [AasishPokhrel] then, for his latest repository is one that he’ll be remembered by for all the wrong reasons. The poor guy created a repository with a scatological name, no doubt to store random things, but had the misfortune to inadvertently create the billionth repository on GitHub.

At the time of writing, the 💩 repository sadly contains no commits. But he seems to have won an unexpectedly valuable piece of Internet real estate judging by the attention it’s received, and if we were him we’d be scrambling to fill it with whatever wisdom we wanted the world to see. A peek at his other repos suggests he’s busy learning JavaScript, and we wish him luck in that endeavor.

We think everyone will at some time or another have let loose some code into the wild perhaps with a comment they later regret, or a silly name that later comes back to haunt them. We know we have. So enjoy a giggle at his expense, but don’t give him a hard time. After all, this much entertainment should be rewarded.


hackaday.com/2025/06/12/the-bi…



End of an Era: NOAA’s Polar Sats Wind Down Operations


Since October 1978, the National Oceanic and Atmospheric Administration (NOAA) has operated its fleet of Polar-orbiting Operational Environmental Satellites (POES) — the data from which has been used for a wide array of environmental monitoring applications, from weather forecasting to the detection of forest fires and volcanic eruptions. But technology marches on, and considering that even the youngest member of the fleet has been in orbit for 16 years, NOAA has decided to retire the remaining operational POES satellites on June 16th.
NOAA Polar-orbiting Operational Environmental Satellite (POES)
Under normal circumstances, the retirement of weather satellites wouldn’t have a great impact on our community. But in this case, the satellites in question utilize the Automatic Picture Transmission (APT), Low-Rate Picture Transmission (LRPT), and High Resolution Picture Transmission (HRPT) protocols, all of which can be received by affordable software defined radios (SDRs) such as the RTL-SDR and easily decoded using free and open source software.

As such, many a radio hobbyist has pointed their DIY antennas at these particular satellites and pulled down stunning pictures of the Earth. It’s the kind of thing that’s impressive enough to get new folks interested in experimenting with radio, and losing it would be a big blow to the hobby.
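
For the curious, here is a minimal sketch of what an APT decoder does with such a recording, assuming you already have an FM-demodulated pass saved as a WAV file (the file name below is a placeholder). Real decoders such as noaa-apt or SatDump add sync detection, calibration, and map overlays on top of this.

```python
# Minimal APT decoding sketch: envelope-detect the 2.4 kHz subcarrier,
# resample to the APT word rate, and stack 2080-sample lines into an image.
import numpy as np
from scipy.io import wavfile
from scipy.signal import hilbert, resample
from PIL import Image

SAMPLES_PER_LINE = 2080                       # APT: 2 lines/second at 4160 words/second
fs, audio = wavfile.read("noaa_pass.wav")     # hypothetical capture file
if audio.ndim > 1:
    audio = audio[:, 0]                       # use one channel
audio = audio.astype(np.float64)

# The image is AM-modulated on a 2.4 kHz subcarrier: recover the envelope.
envelope = np.abs(hilbert(audio))

# Resample so that one APT line is exactly 2080 samples (4160 Hz word rate).
n_out = int(len(envelope) * 4160 / fs)
envelope = resample(envelope, n_out)

# Scale to 8-bit pixels and stack lines into an image (no sync alignment here,
# so the picture will be slanted unless the recording starts on a line edge).
lines = len(envelope) // SAMPLES_PER_LINE
pixels = envelope[: lines * SAMPLES_PER_LINE].reshape(lines, SAMPLES_PER_LINE)
pixels -= pixels.min()
pixels = (255 * pixels / pixels.max()).astype(np.uint8)
Image.fromarray(pixels).save("noaa_pass.png")
```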

Luckily, it’s not all bad news. While one of the NOAA satellites slated for retirement is already down for good, at least two remaining birds should be broadcasting publicly accessible imagery for the foreseeable future.

Not For Operational Use


The story starts in January, when NOAA announced that it would soon stop actively maintaining the three remaining operational POES satellites: NOAA-15, NOAA-18, and NOAA-19. At the time, the agency said there were currently no plans to decommission the spacecraft, and that anything they transmitted back down to Earth should be considered “data of opportunity” rather than a reliable source of information.

However, things appeared to have changed by April when NOAA sent out an update with what seemed like conflicting information. The update said that delivery of all data from the satellites would be terminated on June 16th, and that any users should switch over to other sources. Taken at face value, this certainly sounded like the end of amateurs being able to receive images from these particular satellites.

This was enough of a concern for radio hobbyists that Carl Reinemann, who operates the SDR-focused website USRadioguy.com, reached out to NOAA’s Office of Satellite and Product Operations for clarification. It was explained that the intent of the notice was to inform the public that NOAA would no longer be using or disseminating any of the data collected by the POES satellites, not that they would stop transmitting data entirely.

Further, the APT, LRPT, and HRPT services were to remain active and operate as before. The only difference now would be that the agency couldn’t guarantee how long the data would be available. Should there be any errors or failures on the spacecraft, NOAA won’t address them. In official government parlance, from June 16th, the feeds from the satellites would be considered unsuitable for “operational use.”

In other words, NOAA-15, NOAA-18, and NOAA-19 are free to beam Earth images down to anyone who cares to listen, but when they stop working, they will very likely stop working for good.

NOAA-18’s Early Retirement


As it turns out, it wouldn’t take long before this new arrangement was put to the test. At the end of May, NOAA-18’s S-band radio suffered some sort of failure, causing its output power to drop from its normal 7 watts down to approximately 0.8 watts. This significantly degraded both the downlinked images and the telemetry coming from the spacecraft. This didn’t just make reception by hobbyists more difficult. Even NOAA’s ground stations were having trouble sifting through the noise to get any useful data. To make matters even worse, the failing radio was also the only one left onboard the spacecraft that could actually receive commands from the ground.

While the transmission power issue seemed intermittent, there was clearly something very wrong with the radio, and there was no backup unit to switch over to. Concerned that they might lose control of the satellite entirely, ground controllers quickly made the decision to decommission NOAA-18 on June 6th.

Due to their limited propulsion systems, the POES satellites are unable to de-orbit themselves. So the decommissioning process instead tries to render the spacecraft as inert as possible. This includes turning off all transmitters, venting any remaining propellant into space, and finally, disconnecting all of the batteries from their chargers so they will eventually go flat.

At first glance, this might seem like a rash decision. After all, it was just a glitchy transmitter. What does it matter if NOAA wasn’t planning on using any more data from the satellite in a week or two anyway? But the decision makes more sense when you consider the fate of earlier NOAA POES satellites.

Curse of the Big Four


When one satellite breaks up in orbit, it’s an anomaly. When a second one goes to pieces, it’s time to start looking for commonality between the events. But when four similar spacecraft all explode in the same way…it’s clear you’ve got a serious problem.

That’s precisely what happened with NOAA-16, NOAA-17, and two of their counterparts from the Defense Meteorological Satellite Program (DMSP), DMSP F11 and DMSP F13, between 2015 and 2021. While it’s nearly impossible to come to a definitive conclusion about what happened to the vehicles, collectively referred to as the “Big Four” in the NOAA-17 Break-up Engineering Investigation’s 2023 report, the most likely cause is a violent rupture of the craft’s Ni-Cd battery pack due to extreme overcharging.

What’s interesting is that NOAA-16 and 17, as well as DMSP F11, had gone through the decommissioning process before their respective breakups. As mentioned earlier, the final phase of the deactivation process is the disconnection of all batteries from the charging system. The NOAA-17 investigation was unable to fully explain how the batteries on these spacecraft could have become overcharged in this state, but speculated it may be possible that some fault in the electrical system inadvertently allowed the batteries to be charged through what normally would have been a discharge path.

As such, there’s no guarantee that the now decommissioned NOAA-18 is actually safe from a design flaw that destroyed its two immediate predecessors. But considering the risk of not disconnecting the charge circuits on a spacecraft design that’s known to be prone to overcharging its batteries, it’s not hard to see why NOAA went ahead with the shutdown process while they still had the chance.

The Future of Satellite Sniffing

GOES-16 Image, Credit: USRadioguy.com
While there are no immediate plans to decommission NOAA-15 and 19, it’s clear that the writing is on the wall. Especially considering the issues NOAA-15 has had in the past. These birds aren’t getting any younger, and eventually they’ll go dark, especially now that they’re no longer being actively managed.

So does that mean the end of DIY satellite imagery? Thankfully, no. While it’s true that NOAA-15 and 19 are the only two satellites still transmitting the analog APT protocol, the digital LRPT and HRPT protocols are currently in use by the latest Russian weather satellites. Meteor-M 2-3 was launched in June 2023, and Meteor-M 2-4 went up in February 2024, so both should be around for quite some time. In addition, at least four more satellites in the Meteor-M family are slated for launch by 2042.

So, between Russia’s Meteor fleet and the NOAA GOES satellites in geosynchronous orbit, hobbyists should still have plenty to point their antennas at in the coming years.

Want to grab your own images? There are tutorials. You can even learn how to listen to the Russian birds.


hackaday.com/2025/06/12/end-of…



2025 Pet Hacks Contest: Cat at the Door


Cat at the door

This Pet Hacks Contest entry from [Andrea] opens the door to a great collaboration of sensors to solve a problem. The Cat At The Door project’s name is a bit of a giveaway to its purpose, but this project has something for everyone, from radar to e-ink, LoRa to 3D printing. He wanted a sensor to watch the door his cats frequent and, when one of them is detected, send an alert to wherever he is in the house.

There are several ways to detect a cat; in this project [Andrea] went with mmWave radar, which is ideal here: it allows the sensor to sit protected inside, it works day or night, and it doesn’t stop working should the cat stand still. In his project log he has a chapter going into how he dialed in the settings on the LD2410C radar board.

How do you know if you’re detecting your cat, some other cat, a large squirrel, or a small child? It helps if you first give your cats a MAC address, in the form of a BLE tag. Once the radar detects the presence of a suspected cat, the ESP32-S3 starts looking over Bluetooth, and if a known tag is found it will identify which cat or cats are outside waiting.
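
As a rough sketch of that "known tag" check (not [Andrea]'s actual firmware), here is what scanning for a list of known BLE tag addresses could look like in MicroPython on an ESP32; the addresses are placeholders.

```python
# Scan for known BLE tag addresses after the radar reports presence.
import bluetooth

_IRQ_SCAN_RESULT = 5
_IRQ_SCAN_DONE = 6

KNOWN_CATS = {
    b"\xaa\xbb\xcc\x11\x22\x33": "Felix",   # hypothetical tag addresses
    b"\xaa\xbb\xcc\x44\x55\x66": "Luna",
}

found = set()

def bt_irq(event, data):
    if event == _IRQ_SCAN_RESULT:
        addr_type, addr, adv_type, rssi, adv_data = data
        name = KNOWN_CATS.get(bytes(addr))
        if name:
            found.add(name)
    elif event == _IRQ_SCAN_DONE:
        print("Cats at the door:", ", ".join(found) or "none")

ble = bluetooth.BLE()
ble.active(True)
ble.irq(bt_irq)
ble.gap_scan(5000, 30000, 30000)   # scan for 5 s once presence is detected
```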

Once the known cat has been identified, it’s time to notify [Andrea] that his cat is waiting for his door opening abilities. To do this he selected an ESP32 board that includes a SX1262 LoRa module for communicating with the portable notification device. This battery-powered device has a low-power e-paper display showing which cat is waiting, as well as an audio buzzer to help alert you.

For more details, head over to the GitHub page, which includes a very impressive 80-page step-by-step guide to making your own. Also, be sure to check out the other entries in the 2025 Pet Hacks Contest.

youtube.com/embed/0kiuHv76AjQ?…

2025 Hackaday Pet Hacks Contest


hackaday.com/2025/06/12/2025-p…



Learning the Basics of Astrophotography Editing


Astrophotography isn’t easy. Even with good equipment, simply snapping a picture of the night sky won’t produce anything particularly impressive. You’ll likely just get a black void with a few pinpricks of light for your troubles. It takes some editing magic to create stunning images of the cosmos, and luckily [Karl Perera] has a guide to help get you started.

The guide demonstrates a number of editing techniques specifically geared to bring the extremely dim light of the stars into view, using Photoshop and additionally a free software tool called Siril specifically designed for astrophotography needs. The first step is to “stretch” the image, essentially expanding the histogram by increasing the image’s contrast. A second technique called curve adjustment performs a similar procedure for smaller parts of the image. A number of other processes are performed as well, which reduce noise, sharpen details, and make sure the image is polished.
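
To make the stretching step concrete, here is a minimal sketch of a linear percentile stretch with a gamma-style curve in Python; it is not the exact algorithm Siril or Photoshop use, just the general idea.

```python
# Simple histogram stretch for a 16-bit frame loaded as a NumPy array.
import numpy as np

def stretch(image: np.ndarray, low_pct=0.1, high_pct=99.9, gamma=0.45):
    """Expand the histogram so faint stars become visible."""
    img = image.astype(np.float64)
    lo, hi = np.percentile(img, [low_pct, high_pct])
    img = np.clip((img - lo) / (hi - lo), 0.0, 1.0)  # linear stretch
    img = img ** gamma                               # brighten the midtones
    return (img * 65535).astype(np.uint16)

# Example: raw = astropy.io.fits.getdata("m31_stack.fits"); out = stretch(raw)
```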

While the guide does show some features of non-free software like Photoshop, it’s not too hard to extrapolate these tasks into free software like GIMP. It’s an excellent primer for bringing out the best of your astrophotography skills once the pictures have been captured. And although astrophotography itself might have a reputation for being incredibly expensive just to capture those pictures in the first place, it can be much more accessible if you use this Pi-based setup as a starting point.

youtube.com/embed/2cNANnSnJBs?…


hackaday.com/2025/06/12/learni…



Crowdsourcing SIGINT: Ham Radio at War


I often ask people: What’s the most important thing you need to have a successful fishing trip? I get a lot of different answers about bait, equipment, and boats. Some people tell me beer. But the best answer, in my opinion, is fish. Without fish, you are sure to come home empty-handed.

On a recent visit to Bletchley Park, I thought about this and how it relates to World War II codebreaking. All the computers and smart people in the world won’t help you decode messages if you don’t already have the messages. So while Alan Turing and the codebreakers at Bletchley are well-known, at least in our circles, fewer people know about Arkley View.

The problem was apparent to the British. The Axis powers were sending lots of radio traffic. It would take a literal army of radio operators to record it all. Colonel Adrian Simpson sent a report to the director of MI5 in 1938 explaining that the three listening stations were not enough. The proposal was to build a network of volunteers to handle radio traffic interception.

That was the start of the Radio Security Service (RSS), which started operating out of some unused cells at a prison in London. The volunteers? Experienced ham radio operators who used their own equipment, at first, with the particular goal of intercepting transmissions from enemy agents on home soil.

At the start of the war, ham operators had their transmitters impounded. However, they still had their receivers and, of course, could all read Morse code. Further, they were probably accustomed to pulling out Morse code messages under challenging radio conditions.

Over time, this volunteer army of hams would swell to about 1,500 members. The RSS also supplied some radio gear to help in the task. MI5 checked each potential member, and the local police would visit to ensure the applicant was trustworthy. Keep in mind that radio intercepts were also done by servicemen and women (especially women) although many of them were engaged in reporting on voice communication or military communications.

Early Days


The VIs (voluntary interceptors) were asked to record any station they couldn’t identify and submit a log that included the messages to the RSS.

Arkley View ([Aka2112] CC-BY-SA-3.0)
The hams of the RSS noticed that there were German signals that used standard ham radio codes (like Q signals and the prosign 73). However, these transmissions also used five-letter code groups, a practice forbidden to hams.

Thanks to a double agent, the RSS was able to decode the messages that were between agents in Europe and their Abwehr handlers back in Germany (the Abwehr was the German Secret Service) as well as Abwehr offices in foreign cities. Later messages contained Enigma-coded groups, as well.

Between the RSS team’s growth and the fear of bombing, the prison was traded for Arkley View, a large house near Barnet, north of London. Encoded messages went to Bletchley and, from there, to others up to Churchill. Soon, the RSS had orders to concentrate on the Abwehr and their SS rivals, the Sicherheitsdienst.

Change in Management


In 1941, MI6 decided that since the RSS was dealing with foreign radio traffic, they should be in charge, and thus RSS became SCU3 (Special Communications Unit 3).

There was fear that some operators might be taken away for normal military service, so some operators were inducted into the Army — sort of. They were put in uniform as part of the Royal Corps of Signals, but not required to do much of what you’d expect from an Army recruit.

Those who worked at Arkley View would process logs from VIs and other radio operators to classify them and correlate them in cases where there were multiple logs. One operator might miss a few characters that could be found in a different log, for example.

Going 24/7


National HRO Receiver ([LuckyLouie] CC-BY-SA-3.0)
It soon became clear that the RSS needed full-time monitoring, so they built a number of Y stations with two National HRO receivers from America at each listening position. There were also direction-finding stations built in various locations to attempt to identify where a remote transmitter was.

Many of the direction finding operators came from VIs. The stations typically had four antennas in a directional array. When one of the central stations (the Y stations) picked up a signal, they would call direction finding stations using dedicated phone lines and send them the signal.
Map of the Y-stations (interactive map at the Bletchley Park website)
The operator would hear the phone signal in one ear and the radio signal in the other. Then, they would change the antenna pattern electrically until the signal went quiet, indicating the antenna was electrically pointing away from the signals.

The DF operator would hear this signal in one earpiece. They would then tune their radio receiver to the right frequency and match the signal from the main station in one ear to the signal from their receiver in the other ear. This made sure they were measuring the correct signal among the various other noise and interference. The DF operator would then take a bearing by rotating the dial on their radiogoniometer until the signal faded out. That indicated the antenna was pointing the wrong way, which meant you could deduce which way it should be pointing.

The central station could plot lines from three direction finding stations and tell the source of a transmission. Sort of. It wasn’t incredibly accurate, but it did help differentiate signals from different transmitters. Later, other types of direction-finding gear saw service, but the idea was still the same.
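
For the curious, plotting bearings and finding where they cross is straightforward to sketch: the example below does a least-squares intersection of bearing lines on a flat-map approximation, with made-up station positions and bearings.

```python
# Estimate a transmitter position from compass bearings taken at several DF stations.
import numpy as np

def bearing_to_direction(bearing_deg):
    """Compass bearing (degrees clockwise from north) -> x/y unit vector (x=east, y=north)."""
    rad = np.radians(bearing_deg)
    return np.array([np.sin(rad), np.cos(rad)])

def triangulate(stations, bearings_deg):
    """Least-squares intersection of the bearing lines from several stations."""
    A, b = [], []
    for (x0, y0), brg in zip(stations, bearings_deg):
        dx, dy = bearing_to_direction(brg)
        # A point (x, y) on the bearing line satisfies dy*x - dx*y = dy*x0 - dx*y0.
        A.append([dy, -dx])
        b.append(dy * x0 - dx * y0)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

stations = [(0, 0), (100, 0), (50, 80)]   # km east/north of a reference point
bearings = [45.0, 315.0, 180.0]           # measured compass bearings
print("Estimated transmitter position (km):", triangulate(stations, bearings))  # ~(50, 50)
```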

Interesting VIs


Most of the VIs, like most hams at the time, were men. But there were a few women, including Helena Crawley. She was encouraged to marry her husband Leslie, another VI, so they could be relocated to Orkney to copy radio traffic from Norway.

In 1941, a single VI was able to record an important message of 4,429 characters. He was bedridden from a landmine injury during the Great War. He operated from bed using mirrors and special control extensions. For his work, he received the British Empire Medal and a personal letter of gratitude from Churchill.

Results


Because of the intercepts of the German spy agency’s communications, many potential German agents were known before they arrived in the UK. Of about 120 agents arriving, almost 30 were turned into double agents. Others were arrested and, possibly, executed.

By the end of the war, the RSS had decoded around a quarter of a million intercepts. It was very smart of MI5 to realize that it could leverage a large number of trained radio operators both to cover the country with receivers and to free up military stations for other uses.

Meanwhile, on the other side of the Atlantic, the FCC had a similar plan.

The BBC did a documentary about the work the hams did during the war. You can watch it below.

youtube.com/embed/RwbzV2Jx5Qo?…


hackaday.com/2025/06/12/crowds…



CVE-2025-32710: The zero-click flaw in RDP services that can lead to total compromise of your server


A critical security vulnerability in Windows Remote Desktop Services, tracked as CVE-2025-32710, allows unauthorized attackers to execute arbitrary code remotely without authentication. The flaw stems from a use-after-free condition combined with a race condition in the Remote Desktop Gateway service, allowing attackers to gain complete control over vulnerable systems through network-based exploitation.

CVE-2025-32710 is a sophisticated memory-corruption vulnerability classified under two Common Weakness Enumeration (CWE) categories: CWE-416 (Use After Free) and CWE-362 (Concurrent Execution Using Shared Resource with Improper Synchronization). Published on June 10, 2025, it affects multiple Windows Server versions and carries a CVSS score of 8.1, indicating high severity with the potential for significant system compromise.

Microsoft has identified several Windows Server versions vulnerable to CVE-2025-32710, ranging from legacy systems to current releases. Affected platforms include Windows Server 2008 (both 32-bit and x64 systems with Service Pack 2), Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, Windows Server 2016, Windows Server 2019, Windows Server 2022, and the latest Windows Server 2025.

The vulnerability's CVSS vector string, CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:H/E:U/RL:O/RC:C, indicates a network-based attack vector that requires high attack complexity but no privileges or user interaction. The technical exploitation mechanism involves an attacker connecting to a system running the Remote Desktop Gateway role and triggering a race condition that creates a use-after-free scenario.

This memory corruption allows the attacker to manipulate freed memory regions, potentially leading to arbitrary code execution with system-level privileges.
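
To make the 8.1 rating cited above concrete, here is a minimal sketch that recomputes the CVSS v3.1 base score from the base metrics in the vector (AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:H), using the published CVSS 3.1 equations; the temporal metrics (E:U/RL:O/RC:C) are ignored here.

```python
# Recompute the CVSS v3.1 base score for this vulnerability's base metrics.
import math

AV, AC, PR, UI = 0.85, 0.44, 0.85, 0.85   # Network / High / None / None
C = I = A = 0.56                          # High / High / High
scope_changed = False                     # S:U

iss = 1 - (1 - C) * (1 - I) * (1 - A)
impact = 6.42 * iss if not scope_changed else 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
exploitability = 8.22 * AV * AC * PR * UI

def roundup(x):
    # CVSS 3.1 "roundup": smallest one-decimal number >= x
    return math.ceil(x * 10) / 10

base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
print(base)   # 8.1
```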

The impact assessment shows maximum severity across all three security dimensions: confidentiality, integrity, and availability are all rated "High". The attack complexity is rated high because successful exploitation requires winning a race condition, which makes it a challenging but not impossible task for determined threat actors.

This means that successful exploitation could compromise the entire system, with unauthorized access to sensitive data, modification of system configurations, and potential denial-of-service conditions that could disrupt business operations. Security researchers SmallerDragon and ʌ!ɔ⊥ojv of Kunlun Lab are credited with discovering and responsibly disclosing this vulnerability through coordinated disclosure processes.

The article CVE-2025-32710: The zero-click flaw in RDP services that can lead to total compromise of your server originally appeared on il blog della sicurezza informatica.



Cybersecurity, critical infrastructure and defending the nation: technology and culture to win the challenges of the future


By Aldo Di Mattia, Director of Specialized Systems Engineering and Cybersecurity Advisor Italy and Malta at Fortinet

In 2024, cybercriminals significantly intensified their attacks on critical infrastructure, both in Italy and globally. As shown by the FortiGuard Labs data published in the latest Clusit Report, Italy was hit by 2.91% of global threats, a significant increase from 0.79% the previous year. This is a clear snapshot of the country's growing exposure to cyberattacks, which involve every category of actor: cybercriminals driven by financial interests, hacktivist groups, and state-sponsored attacks.

The data collected by FortiGuard Labs is not limited to publicly known incidents; it also includes scanning activity, detected attacks, and malware, offering a more complete picture. Particularly notable is the increase in Active Scanning Techniques detected in Italy, which rose by 1,076% in 2024, from 4.21 billion to 49.46 billion. The figure for reconnaissance activity is the most worrying, as it covers the active and passive techniques attackers use to gather information about the infrastructure, people, and systems they intend to hit. This kind of activity is an important warning sign: where there is intense information gathering, more targeted and sophisticated attacks should be expected.
Aldo Di Mattia, Director of Specialized Systems Engineering and Cybersecurity Advisor Italy and Malta at Fortinet
Denial of Service (DoS) attacks have also escalated sharply: from 657.06 million to over 4.22 billion in Italy (+542.42%), and from 576.63 billion to 1.07 trillion globally (+85.25%).

By sector, the data shows that Healthcare and Telecommunications are the most targeted industries globally: 232.8 billion attempted attacks on healthcare infrastructure and 243 billion against telcos. They are followed by Energy & Utilities with 22.4 billion, Transport and Logistics with 10.8 billion, and finally Finance and Government with 72.2 and 60.3 billion attacks respectively. In Italy, by contrast, manufacturing emerges as the main target of cybercriminals, both for its strategic importance and for the relative structural vulnerability of SMEs, which often lack adequate defenses.

Deception, Threat Intelligence and Artificial Intelligence: innovation in the service of cybersecurity


To face this scenario, it is essential for companies to adopt advanced technologies that make it possible not only to detect attacks promptly, but also to prevent and neutralize them. Deception, Threat Intelligence, and Artificial Intelligence are among the most effective tools available today, yet they are still too often underused.

Deception technologies, for example, make it possible to create "traps" and decoy assets that confuse attackers, slow down their operations, and provide valuable insight into the techniques they use. Threat Intelligence makes it possible to anticipate cybercriminals' moves through the analysis and sharing of threat information. Finally, Artificial Intelligence, taken as the sum of its most widely used algorithms (Machine Learning, Deep Neural Networks, GenAI), can detect behavioral anomalies, automate incident response, support every phase of analysis and response, handle enormous volumes of data in real time, and much more.

In this context, it is important to stress that AI today is a double-edged sword. While it improves defensive capabilities, it is also used by attackers to automate phishing campaigns, generate deepfakes, write malicious code, and bypass identity controls. Large language models (LLMs) are already being used to create scripts capable of compromising OT infrastructure, such as industrial plants, power grids, transport, and even financial systems.

Training and awareness: building a solid cybersecurity culture


Technology alone, however, is not enough. To respond to a threat landscape that keeps growing and evolving, it is essential to strengthen cybersecurity awareness and culture at every level, from company employees all the way to students. Phishing attacks, increasingly sophisticated thanks to AI, aim precisely at the weakest link in the chain: the human being.

According to Fortinet's Security Awareness and Training Global Research Report, 86% of business leaders in Italy view cybersecurity training programs positively, and 84% say they have observed concrete improvements in their organization's security posture. However, for training to be effective, it must be engaging, well designed, and properly calibrated.

To meet these needs, Fortinet has launched several educational programs, such as the Fortinet Academic Program, active in universities for years, which offers free materials, cloud labs, and certification vouchers. Of particular note is the new project aimed at Italian primary, middle, and high schools, which seeks to extend cybersecurity training to younger students nationwide. The initiative aims not only to spread a culture of cybersecurity but also to close the skills gap that today represents one of the main obstacles to the country's digital security.

Public-private partnership: the strength of cooperation to stay one step ahead of cybercrime


To build a solid and resilient defense system, it is essential that companies, institutions, and organizations work together. Cybersecurity can no longer be fought as an individual battle: it requires a cohesive ecosystem in which skills and resources are shared.

In line with this vision, Fortinet recently signed a memorandum of understanding with the Italian National Cybersecurity Agency (ACN). The memorandum is aimed at defining subsequent implementation agreements covering potential areas of collaboration on several topics, such as sharing best practices, exchanging information, analysis methods and cyber threat intelligence programs, and launching initiatives on subjects such as training, with educational events across the country designed to spread and raise awareness of cybersecurity risks and knowledge. A similar memorandum of understanding has been signed with the Postal Police.

These collaborations are part of a broader commitment that also sees Fortinet active in international initiatives such as the Partnership Against Cybercrime (PAC) and the World Economic Forum's Cybercrime Atlas. The latter project, of which Fortinet is a founding member, aims to map the infrastructure and networks used by cybercriminals, offering a global view for coordinating targeted countermeasures.

New European regulations such as DORA and NIS2 are another step toward greater resilience. DORA aims to guarantee the operational continuity of financial entities in the event of cyberattacks, while NIS2 extends security obligations to supply chain vendors and partners as well. The data suggests that these regulations are already having a positive effect, contributing to a reduction in incidents in regulated sectors.

Looking to the future with an integrated IT/OT vision


Future cyberattacks will be increasingly sophisticated, automated, and hard to detect. Criminals will exploit agentic AI to run targeted campaigns autonomously, evade defenses, and manipulate physical and digital infrastructure. In this scenario, it will therefore also be essential to secure artificial intelligence systems themselves, which have become both targets and attack tools.

The defense perimeter can no longer be limited to the technical boundaries of IT. What is needed is an integrated vision that embraces IT and OT, involves people, promotes training, and fosters cooperation between the public and private sectors. Only in this way will it be possible to successfully face the cybersecurity challenges ahead, in 2025 and beyond, both as individual organizations and, above all, at the national level.

The article Cybersecurity, critical infrastructure and defending the nation: technology and culture to win the challenges of the future originally appeared on il blog della sicurezza informatica.



The reporter documenting 10 years of Trump’s anti-media posts


61,989.

That’s how many social media posts by President Donald Trump over the past decade that journalist Stephanie Sugars has single-handedly reviewed.

At all hours of the day, Trump posts about everything from foreign policy to personnel matters. “It’s a staggering amount of posts,” said Sugars, a senior reporter at the U.S. Press Freedom Tracker, a project of Freedom of the Press Foundation (FPF). But for the past several years, Sugars has trawled through Trump’s prolific activity on X and TruthSocial in search of something specific: anti-media rhetoric.

Since Trump’s first term as president, Sugars has managed an extensive database that documents each and every anti-media post from Trump. In them, Trump sometimes attacks individual journalists. Other times, he takes aim at specific outlets or the media in general. Some posts include all three. While the content varies, Sugars said, the goal appears to be the same: to discredit the media that holds him accountable.



“He is consolidating narrative power and asserting that he is the ultimate, if not singular, conveyor of what is actually true, which, to no one’s surprise, is what is most favorable to him,” Sugars said.

Monday, June 16, marks 10 years since Trump famously descended a golden escalator at New York City’s Trump Tower in 2015 and launched his first winning bid for the Oval Office. The first anti-media post recorded in the database came one day after “Golden Escalator Day,” on June 17, 2015. In it, Trump lambasts the New York Daily News. “Loses fortune & has zero gravitas,” he said about the paper. “Let it die!”

“He is consolidating narrative power and asserting that he is the ultimate, if not singular, conveyor of what is actually true, which, to no one’s surprise, is what is most favorable to him.”

Over the course of his ensuing campaign, Sugars said Trump primarily targeted individual journalists — high-profile ones like Megyn Kelly, Joe Scarborough, and Anderson Cooper — as well as Fox News for what Trump considered to be inadequate support.

“Once he entered office, it was a pretty stark shift,” Sugars said. Trump continued insulting individual journalists, she said, but he also began to target entire outlets, as well as the media as a whole.

For years, Trump has lambasted the mainstream media, which he accuses of bias, as “the enemy of the people.” Attacks on individual journalists and news outlets feed into those broader attacks on the media writ large, according to Sugars.

“His intention appears to be to erode our understanding of what truth is, to erode trust in the media, and to position himself as the ultimate source of truth for his supporters,” she said.

Out of all of the posts Sugars has reviewed, one from Feb. 17, 2017, sticks out to her. “The FAKE NEWS media,” Trump wrote, taking specific aim at The New York Times, CNN, and NBC, “is not my enemy, it is the enemy of the American people.”

That post was published at 4:32 a.m., but at some point it was deleted and replaced with a revision at 4:48 p.m. The revised post was nearly identical, except that Trump had added two more news outlets to the list of so-called enemies: ABC and CBS. “It just demonstrated this doubling down,” Sugars said.

Trump’s account on then-Twitter was permanently suspended on Jan. 8, 2021, two days after the Jan. 6 insurrection at the Capitol. At that point, the database had documented more than 2,500 anti-media posts from Trump’s campaign and first term.

“His intention appears to be to erode our understanding of what truth is, to erode trust in the media, and to position himself as the ultimate source of truth for his supporters.”

During the Biden administration, Sugars said she would have similarly monitored anti-media posts from President Joe Biden, but he didn’t make such statements. Meanwhile, Trump’s anti-media posts have continued at a similar rate since he returned to the White House in January, according to Sugars. “He picked up just where he left off,” Sugars said.

But one primary difference between Trump’s first and second term, Sugars added, is that this time around, Trump is increasingly framing “the media” as an opposition party of sorts or as partners of the Democratic Party. Sugars said she has also noticed an uptick in posts that demonize leakers and pledge that the administration will crack down on whistleblowers.

In the Tracker: Trump and the media



Trump’s hostile rhetoric against the media is the backdrop for more concrete attacks on media freedom, according to Sugars, including lawsuits, investigations by the Federal Communications Commission, the co-optation of the White House press pool, and the revocation of Biden-era policies that protected journalists in leak investigations.

Given those more concrete attacks on media freedom and just how frequently Trump posts on social media, Sugars said it can be easy to dismiss the anti-press posts. But doing so would be a mistake, said Sugars, who thinks it’s important to take what Trump writes seriously because his supporters take it seriously.

“What these posts end up doing is shifting the entire window of how we are understanding the world,” she said.

Watch the full interview:

youtube.com/embed/N4jeJDQjGD8?…


freedom.press/issues/the-repor…



CYBERWARFARE - Definitions & Concepts - Mirko Campochiari's livestream with Riccardo Evarisco

Cyberwarfare is not just hybrid warfare: it is a true domain of war, spanning strategy, tactics, and logistics. It integrates with traditional armed forces and involves both international law and the law of war.

youtube.com/live/ZAmb9z4C0Ms

@Informatica (Italy e non Italy 😁)



Is it possible that there is no one left who no longer recognizes themselves in this United States? Everyone meek and everyone silent? And in a country where even children are armed? How can a bloodbath not happen? Tanks practically being used to put down the revolt? But whose revolt?



How strange... I never would have guessed.


As good old Emilio Fede would say, what a sh....y showing 🤣🤣🤣🤣🤣🤣 Calenda, useless in the Senate. Only in this country, run by charlatans, could he end up a senator.


Do you see why people no longer believe the rhetoric of good intentions?


One hand washes the other: help me and I'll help you, turn a blind eye here and another there (after all, it's always the citizens and/or the workers who pay the price), and here comes the loyalty bonus...
ilfattoquotidiano.it/2025/06/1…


The US Dems' mistake was not destroying the orange psychopath over these 4 years.
Exactly the same mistake we made with the fascists back in the day.



Rutte-Meloni: industry and defense lay the foundations of tomorrow's NATO

@Notizie dall'Italia e dal mondo

NATO is united and must strengthen itself, also with the help of its Atlantic pillar, of which Italy is a strategic part. In his meeting with Giorgia Meloni at Palazzo Chigi, Mark Rutte did not only stress the structural priorities of the Atlantic alliance but, in "an era



So in the end, where are the savings with electric cars? Charging the car costs as much as the kWh you use at home (and it was obvious it would turn out this way...)
in reply to simona

And if you then consider that to drive from Milan to Bari you can't easily find charging stations on the motorway and have to exit, wasting time, I'll keep my petrol car.