Ben(e)detto on the Separation of Careers
@Politica interna, europea e internazionale
The article Ben(e)detto on the Separation of Careers comes from Fondazione Luigi Einaudi.
Politica interna, europea e internazionale reshared this.
Investment strategies for advanced data centers
@Informatica (Italy e non Italy 😁)
Data-center market experts foresee a huge mismatch between demand and supply: robust growth in demand for data centers, driven by exponential data growth, set against bottlenecks on the supply side. Here are the factors behind the market's uncertainties
Informatica (Italy e non Italy 😁) reshared this.
The new issue of The Post Internazionale is out. You can buy the digital edition starting today
@Politica interna, europea e internazionale
The new issue of The Post Internazionale is out. The magazine, available right now in its digital version on our App and, from tomorrow, Friday 16 May, in all newsstands, offers every two weeks investigations and in-depth reporting on business and power in
Politica interna, europea e internazionale reshared this.
Mylar Space Blankets As RF Reflectors
Metalized Mylar “space blankets” are sold as a survivalist’s accessory, primarily due to their propensity for reflecting heat. They’re pretty cheap, and [HamJazz] has performed some experiments on their RF properties. Do they reflect radio waves as well as they reflect heat? As it turns out, yes they do.
Any antenna system that’s more than a simple radiator relies on using conductive components as reflectors. These can either be antenna elements, or the surrounding ground acting as an approximation to a conductor. Radio amateurs will often use wires laid on the ground or buried within it to improve its RF conductivity, and it’s in this function that he’s using the Mylar sheet. Connection to the metalized layer is made with a magnet and some aluminium tape, and the sheet is strung up from a line at an angle. It’s a solution for higher frequencies only due to the restricted size of the thing, but it’s certainly interesting enough to merit further experimentation.
As you can see in the video below, his results are derived in a rough and ready manner with a field strength meter. But they certainly show a much stronger field on one side resulting from the Mylar, and also in an antenna that tunes well. We would be interested to conduct a received signal strength test over a much greater distance rather than a high-level field strength test so close to the antenna, but it’s interesting to have a use for a space blanket that’s more than just keeping the sun away from your tent at a hacker camp. Perhaps it could even form a parabolic antenna.
youtube.com/embed/X1sDYBe5wY0?…
Thanks [Fl.xy] for the tip!
The Trump administration has set its sights on African countries. The goal: drumming up business for Elon Musk.
- "Maximum pressure": the State Department ran a months-long campaign to push a small African country to help Musk's satellite internet company, as recordings and interviews show.
- "Ram This Through": working closely with Starlink executives, the US government mounted a global effort to help Musk expand his business empire in developing countries.
- "Crony capitalism": diplomats said these events marked a worrying departure from standard practice, both for the tactics employed and for the person who stood to gain the most.
reshared this
Remembering More Memory: XMS and a Real Hack
Last time we talked about how the original PC has a limit of 640 kB for your programs and 1 MB in total. But of course those restrictions chafed. People demanded more memory, and there were workarounds to provide it.
However, the workarounds were made primarily to work with the old 8088 CPU. Expanded memory (EMS) swapped pages of memory into page frames that lived above the 640 kB line (but below 1 MB). The scheme would still work with newer CPUs, but those newer CPUs could already address more memory. That led to new standards, workarounds, and even a classic hack.
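If the page-frame idea feels abstract, here is a toy C sketch (my own illustration, not the real LIM EMS INT 67h interface) of what bank switching buys you: a fixed 64 kB window above the 640 kB line whose four 16 kB slots can be re-pointed at any page of a much larger pool of expanded memory.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Toy model of EMS-style bank switching: four 16 kB slots in a 64 kB
   page frame, each of which can "map in" any 16 kB logical page. */
#define PAGE_SIZE     (16 * 1024)
#define FRAME_SLOTS   4
#define LOGICAL_PAGES 256                    /* 4 MB of expanded memory */

static uint8_t expanded[LOGICAL_PAGES][PAGE_SIZE]; /* memory on the EMS board */
static uint8_t *frame[FRAME_SLOTS];                /* what the CPU sees in the frame */

/* Point one of the page-frame slots at a logical page on the board. */
static void ems_map(int slot, int logical_page)
{
    frame[slot] = expanded[logical_page];
}

int main(void)
{
    memcpy(expanded[42], "hello from page 42", 19);

    ems_map(0, 42);                          /* bank-switch page 42 into slot 0 */
    printf("%s\n", frame[0]);

    ems_map(0, 7);                           /* same addresses, different contents */
    printf("slot 0 now shows page 7, first byte = %d\n", frame[0][0]);
    return 0;
}
```

The program always reads through the same window, so even an 8088 stuck with 20 address bits can touch many megabytes, just not all at once.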
XMS
If you had an 80286 or above, you might be better off using extended memory (XMS). This took advantage of the fact that the CPU could address more memory. You didn’t need a special board to load 4 MB of RAM into an 80286-based PC. You just couldn’t get to it with MSDOS. In particular, the memory above 1 MB was — in theory — inaccessible to real-mode programs like MSDOS.
Well, that’s not strictly true in two cases. One, you’ll see in a minute. The other case is because of the overlapping memory segments on an 8088, or in real mode on later processors. Address FFFF:000F was the top of the 1 MB range.
PCs with more than 20 bits of address space ran into problems, since some programs “knew” that memory accesses above that limit would wrap around. That is, FFFF:0010 on an 8088 is the same as 0000:0000. So these machines would block A20, the 21st address bit, by default. However, you could turn that block off in software, although exactly how that worked varied by the type of motherboard — yet another complication.
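To make the wraparound concrete, here is a small C sketch (mine, not from the article) of real-mode address formation: segment times 16 plus offset, optionally masked to 20 bits the way a blocked A20 line does it in hardware.

```c
#include <stdint.h>
#include <stdio.h>

/* Real-mode address formation: physical = segment * 16 + offset. */
static uint32_t real_mode_address(uint16_t segment, uint16_t offset, int a20_enabled)
{
    uint32_t addr = ((uint32_t)segment << 4) + offset;  /* can reach 0x10FFEF */
    if (!a20_enabled)
        addr &= 0xFFFFF;            /* 20-bit mask: wraps around like an 8088 */
    return addr;
}

int main(void)
{
    /* FFFF:000F is the last byte of the 1 MB range. */
    printf("FFFF:000F           -> %05X\n", (unsigned)real_mode_address(0xFFFF, 0x000F, 0));

    /* One byte further wraps to zero when A20 is blocked... */
    printf("FFFF:0010 (A20 off) -> %05X\n", (unsigned)real_mode_address(0xFFFF, 0x0010, 0));

    /* ...but lands just above 1 MB, in the HMA, when A20 is enabled. */
    printf("FFFF:0010 (A20 on)  -> %06X\n", (unsigned)real_mode_address(0xFFFF, 0x0010, 1));
    printf("FFFF:FFFF (A20 on)  -> %06X\n", (unsigned)real_mode_address(0xFFFF, 0xFFFF, 1));
    return 0;
}
```

With A20 enabled, the addresses from 0x100000 to 0x10FFEF become reachable from real mode, which is exactly the almost-64 kB high memory area described next.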
XMS allowed MSDOS programs to allocate and free blocks of memory that were above the 1 MB line and map them into that special area above FFFF:0010, the so-called high memory area (HMA).
The 640 kB user area, 384 kB system area, and almost 64 kB of HMA in a PC (80286 or above)
Because of its transient nature, XMS wasn’t very useful for code, but it was a way to store data. If you weren’t using it, you could load some TSRs into the HMA to keep them from taking memory away from MSDOS.
Protected Mode Hacks
There is another way to access memory above the 1 MB line: protected mode. In protected mode, you still have a segment and an offset, but the segment is just an index into a table that tells you where the segment is and how big it is. The offset is just an offset into the segment. So by setting up the segment table, you can access any memory you like. You can even set up a segment that starts at zero and is as big as all the memory you can have.
A protected mode segment table entry
You can use segments like that in a lot of different ways, but many modern operating systems do set them up very simply. All segments start at address 0 and then go up to the top of user memory. Modern processors, 80386s and up, have a page table mechanism that lets you do many things that segments were meant to do in a more efficient way.
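For a feel of how a segment becomes “just an index into a table”, here is a short C sketch (my own, assuming the common 32-bit 80386-style descriptor layout) that pulls the base and limit out of an 8-byte descriptor and turns an offset into a linear address, the way a flat base-0 segment covers everything.

```c
#include <stdint.h>
#include <stdio.h>

/* A 32-bit segment descriptor scatters its base and limit across 8 bytes. */
struct descriptor { uint8_t b[8]; };

static uint32_t desc_base(const struct descriptor *d)
{
    return (uint32_t)d->b[2] | ((uint32_t)d->b[3] << 8) |
           ((uint32_t)d->b[4] << 16) | ((uint32_t)d->b[7] << 24);
}

static uint32_t desc_limit(const struct descriptor *d)
{
    uint32_t limit = (uint32_t)d->b[0] | ((uint32_t)d->b[1] << 8) |
                     (((uint32_t)d->b[6] & 0x0F) << 16);
    if (d->b[6] & 0x80)              /* granularity bit: limit counted in 4 kB pages */
        limit = (limit << 12) | 0xFFF;
    return limit;
}

/* Linear address = descriptor base + offset, as long as the offset is in bounds. */
static int translate(const struct descriptor *d, uint32_t offset, uint32_t *linear)
{
    if (offset > desc_limit(d))
        return -1;                   /* the CPU would raise a protection fault here */
    *linear = desc_base(d) + offset;
    return 0;
}

int main(void)
{
    /* A "flat" data segment: base 0, limit 0xFFFFF in 4 kB pages, i.e. 4 GB. */
    struct descriptor flat = { { 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x92, 0xCF, 0x00 } };
    uint32_t linear;

    if (translate(&flat, 0x00123456, &linear) == 0)
        printf("offset 00123456 -> linear %08X (base %08X, limit %08X)\n",
               (unsigned)linear, (unsigned)desc_base(&flat), (unsigned)desc_limit(&flat));
    return 0;
}
```

Because the CPU caches these base and limit values internally when a segment register is loaded, a descriptor like this is also the seed of the unreal mode trick described below.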
However, MS-DOS can’t deal with any of that directly. There were many schemes that would switch to protected mode to deal with upper memory using EMS or XMS and then switch back to real mode.
Unfortunately, switching back to real mode was expensive because, typically, you had to set a bit in non-volatile memory and reboot the computer! On boot, the BIOS would notice that you weren’t really rebooting and put you back where you were in real mode. Quite a kludge!
There was a better way to run MSDOS in protected mode called Virtual86 mode. However, that was complex to manage and required many instructions to run in an emulated mode, which wasn’t great for performance. It did, however, avoid the real mode switch penalty as you tried to access other memory.
Unreal Mode
In true hacker fashion, several of us figured out something that later became known as Unreal Mode. In the CPU documentation, they caution you that before switching to real mode, you need to set all the segment tables to reflect what a segment in real mode looks like. Obviously, you have to think, “What if I don’t?”
Well, if you don’t, then your segments can be as big as you like. Turns out, apparently, some people knew about this even though it was undocumented and perhaps under a non-disclosure agreement. [Michal Necasek] has a great history about the people who independently discovered it, or at least, the ones who talked about it publicly.
The method was doomed, though, because of Windows. Windows ran in protected mode and did its own messing with the segment registers. If you wanted to play with that, you needed a different scheme, but that’s another story.
Modern Times
These days, we don’t even use video cards with a paltry 1 MB or even 100 MB of memory! Your PC can adroitly handle tremendous amounts of memory. I’m writing this on a machine with 64 GB of physical memory. Even my smallest laptop has 8 GB and at least one of the bigger ones has more.
Then there’s virtual memory, and if you have solid state disk drives, that’s probably faster than the old PC’s memory, even though today it is considered slow.
Modern memory systems barely resemble these old systems, even though we abstract them to pretend they do. Your processor really runs code out of cache memory. The memory system probably manages several levels of cache. It fills the cache from the actual RAM and fills that from the paging device. Each program can have a totally different view of physical memory, with its own idea of what physical memory is at any given address. It is a lot to keep track of.
Times change. EMS, XMS, and Unreal mode seemed perfectly normal in their day. It makes you wonder what things we take for granted today will be considered backward and antiquated in the coming decades.
Machine1337: The threat actor claiming access to 4 million Microsoft 365 accounts
A new name is rapidly gaining visibility in the depths of the cybercrime underground: Machine1337, a malicious actor active on the well-known forum XSS[.]is, where he has published an impressive series of alleged data breaches targeting tech giants and popular platforms, including Microsoft, TikTok, Huawei, Steam, Temu, 888.es and others.
The shock post: 4 million Office 365 accounts
Among the most striking listings is a post claiming possession of 4 million Microsoft Office 365 and Microsoft 365 credentials, offered for 5,000 dollars. The post also provides a link to download a 1,000-record sample, a common practice on underground markets to "prove" the genuineness of the material on sale. The actor gives a direct Telegram contact and points to his official channel @Machine******. The post is accompanied by an image showing the Microsoft logo and promotional phrases such as:
🔥 DAILY LIVE SALES 🔥
📌 Premium Real-Time Phone Numbers for Sale
good luck! 🚀
Rhetoric that mixes irony, branding and a seemingly professional tone, typical of actors trying to establish themselves as "reliable sellers" on the dark web.
A barrage of breaches: from TikTok to Huawei
In the hours following the Microsoft listing, Machine1337 published further posts claiming the compromise of:
- Huawei – 129 million records
- TikTok – 105 million records
- Steam – 89 million accounts, posted several times with updates
- Temu – 17 million records
- 888.es – 13 million records
These alleged breaches appear to be part of a coordinated offensive meant to demonstrate the threat actor's "firepower" and attract buyers to his Telegram channels.
Identity and contacts: the Telegram ecosystem
Machine1337 presents himself as an organized actor. On his Telegram channel he also posts an important notice to prevent impersonation:
"My only official contact is: @Energy************
Official channel: @Machine************
Beware of fakes. I am not responsible for problems with impostors."
In another message he recommends visiting a further channel belonging to another underground community where, in his own words, "real shit goes down there."
Possible implications and authenticity
Although the amount of data claimed is impressive, the authenticity of every published dump cannot be confirmed with 100% certainty, at least until confirmation emerges from the companies involved or from independent OSINT/DFIR analysis.
Nevertheless, the volume and frequency of Machine1337's announcements suggest either genuine access to compromised sources (e.g. initial access brokers, infostealer botnets) or an aggressive disinformation and marketing ploy to raise funds from users of the XSS forum.
On 13 May the Machine1337 Telegram channel was officially created, the central hub of communication for the threat actor of the same name. In the welcome message, the user introduces himself as a Red Teamer, Penetration Tester and Offensive Security Researcher, specifying that he is currently studying API Testing and malware analysis. The tone is that of a cybersecurity professional who nonetheless moves along a double track: legitimate skills on one side, activities clearly tied to cybercrime on the other.
The channel also mentions an intention to contribute to open source projects and a "Visitor Count" section, presumably used to monitor user activity. The channel itself is linked to a discussion group (accessible only on request) and offers a paid subscription, presumably for access to premium content or full databases.
The presence on Telegram confirms once again how this tool is one of the preferred hubs for threat actors to distribute their content and establish a direct channel with potential buyers.
Conclusion
Machine1337 belongs to a new generation of threat actors who not only monetize stolen data but build full-fledged branding and marketing strategies to establish themselves in cybercrime circles.
With advertised daily sales, multiple dumps and a very active Telegram presence, it will be essential to keep this figure under observation in the coming months.
The article Machine1337: The threat actor claiming access to 4 million Microsoft 365 accounts comes from il blog della sicurezza informatica.
"Thinking about your ex 24/7? There's nothing wrong with you. Chat with their AI version—and finally let it go," an ad for Closure says. I tested a bunch of the chatbot startups' personas.
"Thinking about your ex 24/7? Therex27;s nothing wrong with you. Chat with their AI version—and finally let it go," an ad for Closure says. I tested a bunch of the chatbot startupsx27; personas.#AI #chatbots
This Chatbot Promises to Help You Get Over That Ex Who Ghosted You
"Thinking about your ex 24/7? There's nothing wrong with you. Chat with their AI version—and finally let it go," an ad for Closure says. I tested a bunch of the chatbot startups' personas.
Samantha Cole (404 Media)
MEXICO. The silent gentrification of Puerto Escondido
@Notizie dall'Italia e dal mondo
The arrival of tourists with more capital in areas previously inhabited by low-income communities drives up the price of rent, services and food. Residents are often forced to leave their homes and move further out into the periphery, where life costs less.
Notizie dall'Italia e dal mondo reshared this.
Uncertainty must not lead to inaction. Cavo Dragone's warning from the NATO Military Committee
@Notizie dall'Italia e dal mondo
NATO is cohesive, in transformation, and aware that collective security is no longer just a diplomatic formula but a strategic necessity to be translated into concrete operational choices. This is the picture that emerged from the meeting of the Military Committee
Notizie dall'Italia e dal mondo reshared this.
Ministero dell'Istruzione
#Scuola, a crackdown on language certifications. The bodies authorised to issue certifications of language and communication skills, besides having to guarantee the quality of the exams and the transparency of assessments, are called…
Telegram
reshared this
FPV Drone Takes Off From a Rocketing Start
Launching rockets into the sky can be a thrill, but why not make the fall just as interesting? That is exactly what [I Build Stuff] thought when attempting to build a self-landing payload. The idea is to release a can-sized “satellite” from a rocket at an altitude upwards of 1 km, which will then fly back down to the launch point.
The device itself is a first-person view (FPV) drone running the popular Betaflight firmware. With arms that swing out, driven by some of the smallest brushless motors you’ve ever seen (albeit not the smallest motor), the satellite is surprisingly capable. Unfortunately, due to concerns over the legality of an autonomous payload, the drone is human-controlled on the descent.
Thanks to a collaborative effort, a successful launch was flown with the satellite making it to the ground unharmed, at least for the most part. While the device did show it was capable of flying back, human error led to a manual recovery. Of course, this is far from the only rocketry hack we have seen here at Hackaday. If you are more into making the flight itself interesting, here is a record breaking one from USC students.
youtube.com/embed/7yVFZn87TkY?…
Thank you [Hari Wiguna] for the great tip!
A fleet of drones? Moscow's navy is creating its own unmanned units
@Notizie dall'Italia e dal mondo
The Voenno-morskoj Flot will include among its ranks units specialised in the use of unmanned systems. The news was reported by the Russian outlet Izvestia, according to which the new formations being set up will operate unmanned systems of every type, from aerial to ground ones and to
Notizie dall'Italia e dal mondo reshared this.
Italy meets the NATO target on military spending, but attention now turns to 5%
@Notizie dall'Italia e dal mondo
Italy has reached the target of 2% of GDP in defence spending. A symbolic and political milestone, coming ten years after the commitment made at the 2014 NATO summit in Wales, when the Allies pledged to strengthen their military budgets in
Notizie dall'Italia e dal mondo reshared this.
Libsophia #15 – David Hume with Ermanno Ferretti
@Politica interna, europea e internazionale
The article Libsophia #15 – David Hume with Ermanno Ferretti comes from Fondazione Luigi Einaudi.
Politica interna, europea e internazionale reshared this.
Foreign Minister Antonio Tajani at the NATO summit: "Italy has reached 2% of GDP in defence spending"
@Politica interna, europea e internazionale
Italy has already informed its NATO allies that it has reached 2 percent of GDP in defence and security spending. The announcement came today from the deputy prime minister, foreign minister and leader of Forza
Politica interna, europea e internazionale reshared this.
More age checks, less addictive pull: how the EU envisions a child-friendly internet
EU ruling: tracking-based advertising by Google, Microsoft, Amazon and X has no legal basis anywhere in Europe
A landmark court ruling against the "TCF" consent pop-ups found on 80% of the Internet
Google, Microsoft, Amazon, X and the entire tracking-based advertising industry rely on the "Transparency & Consent Framework" (TCF) to obtain "consent" for data processing. This evening the Belgian Court of Appeal ruled that the TCF is illegal. The TCF is active on 80% of the Internet. (link)
Today's decision is the result of enforcement by the Belgian Data Protection Authority, prompted by complainants coordinated by Dr Johnny Ryan, Director of Enforce at the Irish Council for Civil Liberties. The group of complainants consists of Dr Johnny Ryan of Enforce, Katarzyna Szymielewicz of the Panoptykon Foundation, Dr Jef Ausloos, Dr Pierre Dewitte, Stichting Bits of Freedom and the Ligue des Droits Humains
reshared this
Paid influencers: study warns of covert political manipulation online
Light and shadow in the digitalisation of public administration
The article comes from #StartMag and is reshared on the Lemmy community @Informatica (Italy e non Italy 😁)
What emerges from a study by the Piepoli Institute on the digitalisation of public administration
reshared this
Bastian’s Night #425, May 15th
Every Thursday of the week, Bastian’s Night is broadcast from 21:30 CET (new time).
Bastian’s Night is a live talk show in German with lots of music, a weekly round-up of news from around the world, and a glimpse into the host’s crazy week in the pirate movement aka Cabinet of Curiosities.
If you want to read more about @BastianBB: –> This way
What do you call that situation where you have resigned and have 13 working days left, but your colleagues keep dumping on you requests and projects that will take weeks or months to complete, eating into the precious time you need to write up the documentation to leave for posterity?
The word escapes me right now, but surely one must exist!
Instagram and beyond: Meta's moves and lobbying aimed at the EU legislator
The article comes from #StartMag and is reshared on the Lemmy community @Informatica (Italy e non Italy 😁)
Instagram and teenager accounts: Meta is now asking the EU legislator for rules that guarantee parental control, while flooding us with big numbers to convince Europe that
Informatica (Italy e non Italy 😁) reshared this.
Gamma Knife and accurate information
We were sent a post that mixes genuine medical information with misleading claims that can confuse anyone faced with this wall of text: With an MRI I discover I have two meningiomas firmly lodged in my brain…
maicolengel butac (Butac – Bufale Un Tanto Al Chilo)
History of the Palestinian Nakba, a "catastrophe" that continues
@Notizie dall'Italia e dal mondo
After 77 years, the fear of being expelled from one's own land has returned even more strongly, in the face of what is happening in Gaza, with the expulsion plans put forward and backed by Israel and the United States. Pagine Esteri offers a historical account of some of the events of 1948
Notizie dall'Italia e dal mondo reshared this.
Striking figures on the referendum news blackout
@Giornalismo e disordine informativo
articolo21.org/2025/05/clamoro…
The other day AGCOM felt the need to remind RAI and the other broadcasters of the need to guarantee adequate news coverage of the referendums on labour and citizenship. Already from the tone of the Authority's statements
Giornalismo e disordine informativo reshared this.
Nuclear base under Greenland's glaciers: NASA's discovery
According to the Wall Street Journal, there were living quarters, latrines, laboratories and a mess hall housing around 200 soldiers
Redazione Adnkronos (Adnkronos)
More on marriage equality
I'd like to point you to this podcast where the topic was discussed:
You can find it on pretty much every platform, including Antenna Pod.
In particular, it explains why the text tied to the signature drive that is currently circulating is not accurate.
open.spotify.com/episode/2QAdk…
#MatrimonioEgualitario #matrimonio #diritti #EQUITA #referendum #gaymarriage #samesexmarriage #Matrimoniogay #lgbtquia #lgbt #italia #repubblicadellebanane
Online Safety Act: A Guide for Organisations Working with the Act
This document is intended as a full overview of the Online Safety Act (OSA, or the Act) and how it works for organisations attempting to understand it and its implications. We explain the OSA’s key regulatory provisions as they impact moderation decisions and practices and its knock-on effect on the rights and freedoms of Internet businesses and users. For the enforcement mechanisms available to Ofcom see our detailed analysis of the problems the Act creates, published alongside this guide.
1. Who is Regulated by the Online Safety Act 2023?
The Online Safety Act 2023 is a complex piece of legislation that places extensive duties on Internet service providers regarding content moderation, transparency reporting, and age verification. It also creates new criminal offences in respect of online communication. In this section we highlight these new key duties and offences.
Part 3 of the Act imposes duties on regulated user-to-user and search services. A user-to-user service is defined broadly as any service “by means of which content that is generated directly on the service by a user of the service, or uploaded to or shared on the service by a user of the service, may be encountered by another user, or other users, of the service” [Section 3(1)]. Email services, one-to-one aural communication platforms like Skype, messaging platforms that share content between telephone numbers like WhatsApp, and user-review sites like Trustpilot are excluded as long as the service provider does not use them to supply pornographic content. Similarly, workplace platforms set up internally within public or private organisations are excluded, as are education and childcare platforms [Schedule 1]. A search service is regulated if it searches multiple websites or databases rather than a single site or database [Section 229].
Regulated services need not be based in the UK but must have “links” with the UK, either because they have a “significant number” of UK users, or they view the UK as a target market, or that they can be accessed from the UK and there “are reasonable grounds to believe that there is a material risk of significant harm to individuals” in the UK from their content [Sections 4(2), 4(5), 4(6)].
All services that meet this broad description are regulated. This means, for instance, that a website hosting a forum that enables users to share content is potentially in scope and must comply with the general duties imposed by the OSA. Yet there are thresholds above which the Act applies particular additional duties.
These depend on as-yet unwritten secondary legislation. Regulated services will be classified into one of three categories for the purposes of assigning their duties. User-to-user services – social media – are classed as Category 1 or Category 2B, depending on their size, functionality, and other features. Category 2A services are large search engines.
The additional duties imposed on each category are outlined in the table below:
Table 1: Categorisation and duties, from Ofcom’s report of March 2024 entitled ‘Categorisation: Advice Submitted to the Secretary of State’, p.4. See also sections 94 and 95 OSA.
The threshold conditions may change for these categories, as the legislation is open-ended and designed to allow changes over time. In preliminary research and advice as of March 2024, Ofcom recommended that Category 1 user-to-user services should be regarded as any service that uses a content recommender system and has more than 34 million UK users (half the population), or that uses a content recommender system, allows users to forward or share user-generated content, and has more than 7 million UK users (10% of the population). Similarly, a category 2A search engine has more than 7 million UK users. A category 2B user-to-user service allows users to send direct messages and has more than 3 million UK users (5% of the population). Services that fall below these thresholds, if they are adopted, will still have to comply with the duties the Act imposes on all services, but will not have to perform the additional requirements.
However, charities from the mental health and children’s sectors are lobbying the new Labour government to lower the threshold to bring even small websites within the scope of the Act.
2. Part 3 Duties
The duties imposed by Part 3 are the OSA’s key regulatory elements. They are designed to make regulated services do the bulk of the work, rather than tasking Ofcom with policing all services in detail. Instead, all regulated services will perform mandatory self-assessments in line with the general regulatory framework. They must implement appropriate measures in response to their own findings, in line with guidance and Codes of Practice prepared by Ofcom for the Secretary of State and laid before Parliament as secondary legislation. Ofcom will supervise their implementation, in practice focusing on the larger, categorised services.
Risk assessments are the critical device in Part 3. All regulated services must produce risk assessments in relation to specified categories of content and behaviour on their platform and keep those assessments up to date. Changes to their systems and Terms of Service cannot be made without first updating the relevant risk assessment. Transparency requirements are intended to ensure that Ofcom can review the relevant measures and assess overall compliance.
Where necessary, Ofcom can intervene directly, and extensive enforcement powers are available to it when services and individual senior managers fail to comply. Ideally, however, Ofcom will simply ensure compliance with the mandatory requirements. Ofcom plays a meta-regulatory role, co-ordinating best practices in a manner explicitly intended to develop and evolve iteratively across the sector in response to its own operations.
Part 3 places the following duties on all regulated user-to-user services, whether category 1 or 2B:
- duties about illegal content risk assessments set out in section 9,
- duties about illegal content set out in sections 10(2) to (8),
- a duty about content reporting set out in section 20,
- duties about complaints procedures set out in section 21,
- duties about freedom of expression and privacy set out in sections 22(2) and (3), and
- duties about record-keeping and review set out in sections 23(2) to (6).
All regulated user-to-user services that are “likely to be accessed by children” have additional duties:
- duties to carry out children’s risk assessments under section 11,
- duties to protect child safety online under section 12.
Category 1 services (large user-to-user platforms) have the following additional duties:
- a further duty regarding illegal content risk assessments set out in section 10(9),
- a further duty about children’s risk assessments set out in section 12(14),
- duties about assessments related to ‘adult user empowerment’ set out in section 14,
- duties to empower adult users set out in section 15,
- duties to protect content of democratic importance set out in section 17,
- duties to protect news publisher content set out in section 18,
- duties to protect journalistic content set out in section 19,
- duties about freedom of expression and privacy set out in sections 22(4), (6) and (7), and
- further duties about record-keeping set out in sections 23(9) and (10).
We now briefly explain each category of duty, in turn.
Illegal Content and behaviour
This duty requires all regulated platforms to assess the risk that illegal content or behaviour will be published or carried out using their services. This is a broad category, but in practice there are specific “priority” categories that require close attention: terrorist content, child sexual exploitation and abuse material (CSEA), and 39 other kinds of wide-ranging priority content listed in Schedule 7, including assisting suicide, public order offences of causing fear of violence or provoking violence, harassment, stalking, making threats to kill, racially aggravated harassment or abuse, supplying drugs, firearms, knives, and other weapons, and facilitating “foreign interference” in the UK’s public affairs under the National Security Act 2023.
Risk assessments must factor in the user base, algorithms used in recommender systems and moderation systems, the process of disseminating content, the business model, the governance system, the use of any “proactive” technology (defined in section 231 as content identification, user profiling, or behaviour identification technologies), any media literacy initiatives, and any other systems and processes that affect these risks. To this end, Ofcom will provide “risk models” as guidance that will set out baseline factors to consider. All Category 1 services must publish a summary of their most recent risk assessment.
Risk assessment must in turn feed into the design and implementation of proportionate measures, applied across all relevant elements of the design and operation of the regulated service or part of the service, to prevent users encountering priority illegal content and mitigate and manage the risk that the service may be used to commit a priority offence. The risks of harm from illegal content identified in the most recent risk assessment must be minimised by implementing proportionate systems and processes that minimise the length of time that illegal content is present and allow it to be swiftly taken down once notified of its presence. Services must also act in respect of their compliance arrangements, their functionalities and algorithms, their policies and terms of use and blocking users, content moderation policies, user control options, support measures, and internal staff policies. Measures including the use of proactive technology may be applied, and services must spell such measures out to users in clear and accessible policies and terms of service. Measures to take down or restrict illegal material must be applied consistently.
We expand on illegal harm duties further in Part 3 below.
Children’s risk assessment
The same model of self-assessment and response applies to risks relating to child safety for all social media and search services “likely to be accessed by children”. Risk profiles must pay heed to the user base, including different age groups of child users; each kind of “primary priority content” harmful to children; each kind of priority content; and “non-designated content”. This all puts an additional onus on services to consider potential harms not specified by either legislation or regulator.
As with illegal content, services must assess the levels of risk presented by each category of content, having regard to its specific functions, algorithms, and use characteristics. The legislation specifically includes consideration of any functions that enable adults to search out and contact child users. For each element, the service must consider “nature, and severity, of the harm that might be suffered by children” (s11(g)), and how the design, operation, business model, governance, use of proactive technology, media literacy initiatives and “other systems and processes” may reduce or increase these risks. Again, these rather open-ended criteria are to be fleshed out by model risk profiles that are to be created and updated over time by Ofcom.
Section 12 creates a duty to mitigate and manage the risks identified by the assessment in a proportionate manner. Sections 60 and 61 divide content harmful to children into three broad headings: “Primary Priority Content” (PPC), “Priority Content” (PC), and Non-designated content (NDC).
PPC is non-textual pornographic content and any content that encourages, promotes, or provides instructions for suicide, self-harm, or eating disorders.
PC includes several broad categories:
- Abusive content, or content which incites hatred, on grounds of race, religion, sex, sexual orientation, disability, or gender reassignment;
- Bullying content;
- Violent content which encourages, promotes, or provides instructions for a serious act of violence against a person, or which graphically depicts real or realistic serious violence against a person, animal, or fictional creature;
- Harmful substances content that encourages taking or abusing harmful substances or substances in a harmful quantity;
- Dangerous stunts and challenges content.
NDC is any other content “which presents a material risk of significant harm to an appreciable number of children in the UK”.
Of particular note is section 12(3), which stipulates that children of any age must be prevented from encountering “primary priority content” that is harmful to children. Section 12(3) OSA requires all regulated user-to-user services that are “likely to be accessed by children” to use “proportionate systems and processes” to prevent children from encountering primary priority content that is harmful to children, including pornographic content, except where such content is prohibited on the service for all users.
Section 12(4) requires the use of highly effective age assurance measures to prevent children encountering PPC on a service except where PPC is prohibited for all users. Age assurance is not mandated for user-to-user services except in relation to PPC, but it is listed as a potential measure for proportionately addressing other duties to children.
Section 29(2) requires search services to take “proportionate measures” to mitigate and manage the risk and impact of harm to children by search content. Large general search engines should apply safe search settings that filter out PPC for all users believed to be children. Users should not be able to switch this off.
By contrast, children must be protected from the risk of encountering “priority content that is harmful”. This is defined at section 62 as including content abusive on grounds of race, religion, sex, sexual orientation, disability or gender reassignment, or content which incites hatred on the same grounds. It also includes content that encourages or instructs “an act of serious violence against a person”; “bullying content”; the graphic depiction of a realistic injury or realistic violence against a real or fictional person, animal, or creature; content encouraging or instructing people to perform dangerous stunts and challenges; and content that encourages a person to ingest, inject, inhale, or otherwise take a harmful substance. Bullying content is defined as content targeted against a person that conveys a serious threat, is humiliating or degrading, and is part of a “campaign of mistreatment”. The categories of harmful content are deliberately intended to be iterative and mutable – section 63 requires Ofcom to review the incidence of such content and the severity of harm that children suffer, or may suffer, as a result and produce advisory reports no less than once every three years with recommended changes.
All these categories of harmful communication must be operationalised via the content moderation systems used on all regulated services that children can access. The duty to prevent any encounter with primary priority content, however, leads logically to section 12(4), which requires the implementation of age verification or age estimation systems. According to section 12(6), age verification or estimation systems must be “highly effective at correctly determining” whether a user is a child or an adult.
User Empowerment – “Legal but harmful” content
In the initial Online Safety White Paper, the government proposed a requirement for regulated services to assess and manage the risks to all users from “legal but harmful” communication. Such a measure would have imposed blanket censorship for all users, regardless of their subjective preferences, in the name of politically-defined population-level “harms”. During the Act’s passage through Parliament these censorious provisions were replaced with “user empowerment” duties, which require Category 1 services to enable individual users to select what kinds of content gets filtered out of their online experience.
Section 14 “assessments related to adult user empowerment” must include considering the “likelihood” (arguably a synonym for risk) that users will encounter different kinds of “relevant content”, including the likelihood of “adult users with a certain characteristic or who are members of a certain group encountering relevant content which particularly affects them” [Section 14(5)(d)]. Then proportionate measures must be taken to give adult users the ability to filter out “relevant content”, self-referentially defined at section 15(2): “A duty to include in a service, to the extent that it is proportionate to do so, features which adult users may use or apply if they wish to increase their control over content to which this subsection applies.” Section 16 holds that such content includes anything that encourages, promotes, or gives instructions on suicide, self-harm, or eating disorders (s16(3)). It includes anything that is “abusive” in relation to race, religion (or lack thereof), sex, sexual orientation, disability, or gender reassignment, and anything that incites hatred against people of a particular race, religion, or sexual orientation, anyone with a disability, or anyone with the characteristic of gender reassignment, meaning “the person is proposing to undergo, is undergoing or has undergone a process (or part of a process) for the purpose of reassigning the person’s sex by changing physiological or other attributes of sex”. This definition is thus a subset of the list of priority content deemed harmful to children. But whereas it applies by law to all services on which children may be present, it only applies to adults who self-select and therefore avoids the complex questions of freedom of expression that would otherwise arise.
Category 1 services (large social media platforms) must offer features that reduce the likelihood of users encountering “relevant” content, or that alert them if such content is present on the site. Users must also be enabled to block all non-verified users – people who have not confirmed their real identity to the platform – from contacting them or from uploading content that they may encounter. Related, all Category 1 services must offer adult users the option to verify their identity.
In practice, this amounts to an “opt-in” version of the curtailment of “legal but harmful” content. To provide this, Category 1 platforms will have to use automated semantic analysis and user-driven reporting systems to identify such content so that the filters work effectively. The key priority for services – and users – is that their filtering systems accurately do what they purport to do with a proportionate degree of reliability. While services likely take these steps anyway in the course of moderating content, the state’s imposition of such a requirement is an interference with their freedom of expression as private companies.
Pro-Free Speech Duties
Section 17 places a general duty on all services to use “proportionate systems and processes to ensure that the importance of the free expression of content of democratic importance is taken into account” in respect of taking content moderation action or acting against a user, whether a warning, suspension, or ban, for generating or sharing such content. Such measures must apply “in the same way to a wide diversity of political opinion” (s17(3)). Although a “wide diversity” of opinion is not defined, it implies that some political opinions do not count as democratically important.
At section 17(7), content of democratic importance is defined as news publisher content or user-generated content that "is or appears to be specifically intended to contribute to democratic political debate in the UK or a part or area of the UK".
Section 18 creates a prospective duty to protect “news publisher” content by imposing steps that must be taken before action can be taken to moderate or remove content or user accounts from news media organizations. First, notice must be given of the intended action, along with reasons and an account of how the specific duty to protect news content was considered. A reasonable period for representations must be allowed, then a considered decision must be given with reasons. These steps can only be skipped where there is a “reasonable consideration” that the material would incur criminal or civil penalties, with the publisher then entitled to retroactively appeal. Clear and accessible descriptions of these provisions must be included in the Terms of Service.
This provision was added following lobbying by the British press. Ironically, they themselves, especially the tabloid papers, have historically been responsible for promoting content that might meet the criteria of harmful or even illegal, yet their political power is such that they successfully campaigned for special treatment in the OSA to mitigate the risk of losing readership, ad revenue, and reputational integrity through moderation action taken against their social media posts. Consequently, the press has ended up with much better procedural protections than ordinary citizens.
Section 19 applies a similar duty to protect UK- linked “journalistic content”, which includes content for the purpose of journalism, even if the producer is not a journalist by profession. Rather than giving advance notice, however, services must create a dedicated expedited complaints procedure to allow users to appeal against the removal of any content, or any action taken against them, with swift reinstatement when complaints are upheld.
Section 22 imposes a duty to have regard “to the importance of protecting users’ right to freedom of expression within the law”. This means that, when deciding on and implementing safety measures and policies, services must have “particular regard” to the “importance of protecting users from a breach of any statutory provision or rule of law concerning privacy that is relevant to the use or operation of a user-to-user service (including, but not limited to, any such provision or rule concerning the processing of personal data)”.
However, for reasons set out above regarding the mechanics of moderation, and in light of the extensive illegal harm and children’s duties discussed above, it is unlikely that such measures will be operationalised with any real impact.
All of this is part of the auditing processes mandated by the OSA. Category 1 services must produce impact assessments on how their safety measures and policies will affect freedom of expression and privacy, with particular information on news publisher and journalistic content. They must keep the assessment updated and specify positive steps they are taking in response to identified problems.
Reports and Complaints
There is a duty imposed at s20 to create systems for users and/or affected others to easily report illegal content or content harmful to children on a service that children can access. The affected person must be in the UK and the subject of the content, or of a class of people targeted, or a parent, carer, or adult assistant who is the subject of the content.
At section 21, there is a duty to take “appropriate action” in response to complaints that are relevant to the duties imposed by the Act. The complaints and response process must be accessible, easy to use, and transparent in its effects, including for child users. This also includes complaints about the way the service uses proactive safety-oriented technology such as algorithmic classifiers. Section 23 creates record-keeping and review duties that apply to all risk assessment duties.
Search Engines
The duties discussed so far apply to “user-to-user” social media. Sections 24-34 OSA reproduce several of these duties in respect of search engine services. Search engines are not required to verify the age of users or to comply with user empowerment duties, but they must take proportionate measures to mitigate and manage the risks of harm from illegal content and content harmful to children, including minimising (but not absolutely preventing) the risk that children might encounter primary priority content harmful to children. There are similar duties in respect of content reporting, complaints, freedom of expression and privacy, and record-keeping and review (see section 24).
3. Pornographic Content Providers
Under Part 5 of the OSA, and expressly under section 81, services that publish or display pornographic content have a duty to implement “highly effective age assurance” (referred to in draft guidance as HEAA) to ensure that “children are not normally able to encounter pornographic content” on the service. This means that the method chosen has to be highly effective in principle, and that it has to be implemented in a manner that is highly effective.
This overlaps with section 12 duties in respect of harm to children. As explained above, all content harmful to children is regulated by Part 3 of the OSA, and expressly includes pornographic content (other than purely textual content) as “Primary Priority Content” (PPC), ranking it alongside any content that encourages, promotes, or provides instructions for suicide, self-harm, or eating disorders. In respect of PPC, child users must be “prevented” from access, rather than merely “protected” from the risk of encountering it. Firm age-verifying gateways around pornographic content on specific porn sites are, in other words, doubly mandated by the OSA. The duplication may be explained by the fact that there is no minimum number of users required for the provisions of Part 5 to bite. Any site that provides pornographic content and that has a “significant” number of UK users or that targets the UK market is caught by the OSA, meaning that even small niche providers of pornographic content are required to implement HEAA measures.
Where a user is deemed to be a child by a regulated user-to-user site, three types of safety measure can be applied, depending on its profile: first, access controls that prevent access to the service or part thereof; second, content controls that protect children from encountering harmful content; and finally, measures that prevent recommender systems from promoting the site’s harmful content to children.
The question of which method should apply depends on the type of service provided and the risk level it presents:
- In relation to access to the service, strong age assurance gatekeeping must apply to all user-to-user services that principally host or disseminate Primary Priority Content or Priority Content. Access must be entirely controlled by HEAA, meaning users must somehow show that they are over-18 in order to gain access to any user-to-user service that focuses on pornography, suicide and self-harm, or eating disorders.
- In relation to content controls, where disseminating PPC and PC is not the principal purpose of the service, but the service does not prohibit such content, and (in respect of PC) where the service is rated as a high or medium risk for hosting PC, HEAA must apply to content control measures to ensure that children who access the service, or indeed any user who does not demonstrate that they are an adult, are protected from encountering it via content filtering methods. This would apply to any social media service that tolerates user-generated and promoted pornography, like X (formerly known as Twitter), where accounts advertising pornographic material are not banned.
- User-to-user services that operate recommender systems to select and amplify content, and which are rated high or medium risk for PPC or PC (excluding bullying), must use HEAA methods to control the recommender system settings.
- Other relevant differences in the services provided to children and adults include private messaging settings, the ability to search for suicide, self-harm, and eating disorder content, and signposting users to sources of support, amongst other design features aimed at minimising the risk of harm to children.
In effect, to provide a fully uncensored service hosting lawful social media content intended for adult users who do not mind encountering pornographic, violent, or other controversial material even if that material is not the reason for most users’ interest in the service, such platforms must still estimate or verify that their users are adults. Age assurance unlocks the adult versions of regulated user-to-user services:
- Access to services that exist to host and disseminate PPC
- Content controls allowing access to identified PPC/PC on other services
- The absence of content moderation measures to identify and filter PPC/PC
- Recommender systems that recommend PPC/PC
We explore the practicalities of this in the main report.
4. New Criminal Offences
Besides provisions designed to combat online fraud propagated by regulated services, the Act creates several new communication offences. Aside from enabling the arrest and prosecution of individuals, these provisions broaden the scope of “illegal content and behaviour” and the associated risks of encountering it on regulated services. Therefore, they are an extension of the new mandated regime of moderation requirements. Regulated services must take steps to mitigate the risk of these offences or risk failing to comply with their duties. The new offences are:
S 179 False communications offence
It is an offence to send, without reasonable excuse, a message one knows to be false if it is intended to cause “non-trivial psychological or physical harm to a likely audience”. A “likely audience” is anyone who can be reasonably foreseen to encounter the message. It need not be a specific person.
S 181 Threatening messages
It is an offence to send a message by any medium conveying a threat of death or serious harm with the intention to cause fear that it will be carried out, or where the sender is reckless as to the effect. The message need not be directly sent to the victim provided that it is communicated such that they may “encounter” it – for instance, posting it to a social media site or message board.
S 183 Offences of sending or showing flashing images electronically
It is an offence to send messages, or cause messages to be sent, containing flashing images to a person known to have, or believed to have, epilepsy with the intention that they see the images and are harmed by them.
S 184 Offence of encouraging or assisting serious self-harm
It is an offence to intentionally encourage or assist another person to cause serious self-harm, whether in person or online. “Serious” means acts that cause the equivalent of grievous bodily harm and includes successive acts of less severe self-harm that cumulatively cross the threshold. These offences apply to acts done outside the UK provided the defendant is habitually resident or incorporated in the UK.
S 187 Sending photograph or film of genitals
This provision criminalizes “cyberflashing” – that is, intentionally sending or giving a photograph or film of any person’s genitals to another, if doing so is intended to cause the recipient alarm, distress or humiliation, or if the sender obtains sexual gratification regardless of the recipient’s response.
S 188 Sharing or threatening to share intimate photography or film
This provision modifies the Sexual Offences Act 2003 to make it an offence to share or threaten to share intimate images or videos depicting another person, with an exemption for children or those without capacity where images are shared for medical purposes, and exemptions for images ordinarily shared between family members.
How to Fix the Online Safety Act: A Rights First Approach
Environmental protection. Illegal trafficking of waste oils, international investigations by the Carabinieri
"Petrolio dorato" (golden oil) is the name given to the Carabinieri operation that led to arrests and searches, not only in Italy, on a warrant from the Bologna preliminary investigations judge, uncovering an illegal trade in used vegetable oils suitable for producing biodiesel.
As part of the activities, the Carabinieri of the Environmental Protection and Energy Security Group of Venice, supported by several other units of the force, were coordinated by the Public Prosecutor's Office (District Anti-Mafia and Anti-Terrorism Directorate) of Bologna.
The investigation documented the operations of a criminal association which, through companies licensed to collect used vegetable oils, drew unlawful profits from the proceeds of treating and reselling this valuable waste, which is used to produce biodiesel. 22 people and 2 companies are currently entered in the register of suspects, held responsible in various capacities for criminal association, organized activities for the illegal trafficking of waste, aiding and abetting, false statements by a private individual in a public document, and abuse of office, in relation to events ascertained in Emilia-Romagna, Veneto, Trentino Alto-Adige and Campania between 2021 and 2022.
The investigation involved Europol at various stages for international police cooperation, since the suspects appear to run businesses in Greece and Spain as well and to trade with Austria, Belgium, Hungary, Bulgaria, Slovakia, Malta and Libya.
Granting the prosecutor's requests, the preliminary investigations judge ordered precautionary measures against 11 suspects (5 under house arrest, 3 with residence requirements and 3 banned from running companies or holding executive roles in waste-management firms) and the preventive seizure of the corporate assets and business facilities of the two companies at the centre of the investigation. As part of the operation, 17 personal and premises searches will also be carried out, together with environmental inspections of facilities for the collection, management and treatment of used vegetable oils.
#Armadeicarabinieri #carabinierinoe #carabinieritase #biodiesel #olivegetaliesausti #ambiente #europol
Ambiente - Gruppo sulla sostenibilità e giustizia climatica reshared this.