
Press freedom: Journalists worldwide in danger


netzpolitik.org/2025/pressefre…


Transparency report, Q1 2025: Our income and expenses, and the search for substance


netzpolitik.org/2025/transpare…


Why is UNOV important for PPI?


Over almost a decade of UN activities, we have consistently attended events at the UN Headquarters in New York and the office in Geneva. Around 2019 we sent a few delegates to Vienna, but COVID ended most of our activities. This year we are placing new emphasis on networking and lobbying at UNOV, because many important global policies relevant to our movement are decided there, including on crime, drugs, trade, outer space, and nuclear weapons.

Over this past Easter holiday, the chair of PPI, Keith Goldstein, met with members of the Pirate Party of Austria, including PPI's main representative at UNOV, Kay Schroeder. They discussed plans for PPI activities in Vienna this year, including establishing a side event, hosting further PPI visits to UNOV, making statements, crowdfunding, and grant writing to establish projects.

PPI has held special consultative status with the UN Economic and Social Council (ECOSOC) since 2017, granting us the right to attend UN events, submit statements, and engage with global policymakers. UNOV representatives have focused on events such as the UN Office on Drugs and Crime's 68th Commission on Narcotic Drugs in March 2025. We have also participated for several years now in the Cybercrime panels. Many international actors do not share our belief in a free and open internet. We hope that our presence at these events can at a minimum keep us informed about changes to the interpretation of cybercrimes, or at best stop regulations that treat innocent civilians as criminals. We have also attempted to inquire about nuclear safety issues, but most of those meetings are not open to NGOs.

We share a few pictures from the most recent trip to UNOV. Please let us know if you are also interested in participating in UNOV activities. We hope to inform you soon about PPI events in the area.


pp-international.net/2025/05/u…


Why ‘Predictive’ Policing Must be Banned


The UK Government is trying to use algorithms to predict which people are most likely to become killers, drawing on sensitive personal data of hundreds of thousands of people. The secretive project, originally called ‘The Homicide Prediction Project’, was discovered by Statewatch. They described how “data from people not convicted of any criminal offence will be used as part of the project, including personal information about self-harm and details relating to domestic abuse.”

It may sound like something from a sci-fi film or dystopian novel, but the “Homicide Prediction Project” is just the tip of the iceberg. Police forces across the UK are increasingly using so-called “predictive policing” technology to try to predict crime. Police claim these tools “help cut crime, allowing officers and resources to be deployed where they are most needed.” In reality, the tech is built on existing, flawed police data.

As a result, communities who have historically been most targeted by police are more likely to be identified as “at risk” of future criminal behaviour. This leads to more racist policing and more surveillance, particularly for Black and racialised communities, lower income communities and migrant communities. These technologies infringe human rights and are weaponised against the most marginalised in our society. It is time that we ban them for good.

That is why we are calling for a ban on predictive policing technologies, which needs to be added to any future AI Act, or the current Crime and Policing Bill. We are urgently asking MPs to demand this ban from the government, before these racist systems become any further embedded into policing.

The illusion of objectivity


The Government argues that algorithms remove human bias from decision-making. In reality, these tools are only as “objective” as the data they are fed. Historical crime data reflects decades of racist and discriminatory policing practices: targeting poorer neighbourhoods by labelling them “crime-hotspots” and “microbeats” synonymous with drugs and violence, and racial profiling that uses the language of “gang” and “gang-affiliated” as a dog whistle for young Black men and boys. When algorithms are built on discriminatory data, they don’t neutralise bias, they amplify it.
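To make this self-reinforcing mechanism concrete, here is a toy simulation (our own illustrative sketch with made-up numbers, not a model of any real police system). Two districts have the same underlying rate of offending, but one starts with more recorded incidents because it was historically over-policed; allocating patrols by “predicted” crime then simply perpetuates the skew.

```python
# Toy simulation: identical true crime rates, biased historical records.
import random

random.seed(0)
TRUE_RATE = 0.1                 # same underlying crime rate in both districts
recorded = {"A": 50, "B": 10}   # district A was historically over-policed

for year in range(10):
    # "predictive" step: allocate 100 patrols proportional to past records
    total = sum(recorded.values())
    patrols = {d: round(100 * n / total) for d, n in recorded.items()}
    # more patrols -> more incidents observed, despite equal true rates
    for d in recorded:
        recorded[d] += sum(random.random() < TRUE_RATE
                           for _ in range(patrols[d]))

share_a = recorded["A"] / sum(recorded.values())
print(f"district A's share of recorded crime after 10 years: {share_a:.0%}")
```

Because patrols follow past records and new records follow patrols, the over-policed district keeps its inflated share of recorded crime indefinitely; nothing in the loop allows the data to correct itself.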

There are two main types of “predictive policing” systems: those which focus on geographies, seeking to “predict” where crimes may take place, and those which aim to “predict” an individual’s likelihood of committing a future crime.

In 2021, the Home Office funded 20 police forces from across the UK to roll out a geographic “predictive policing” programme called ‘Grip.’ The tech was described as “a place-based policing intervention that focuses police resources and activities on those places where crime is most concentrated.”1

However, research by Amnesty International has highlighted that there has been no conclusive evidence to demonstrate that the programme had any impact on crime. What’s more, there is evidence that the programme reinforced and contributed to racial profiling and racist policing.

Rather than investing in addressing the root causes of crime, such as the rising cost of living and lack of access to mental health services, the Government is wasting time and money on technologies that automate police racism and criminalise entire neighbourhoods.

Lack of transparency and accountability


So-called “predictive policing” systems are not only harmful in that they reinforce racism and discrimination; there is also a lack of transparency and accountability over their use. In practice, this means people often do not know when or how they, or their community, have been subject to “predictive policing,” but they can still be impacted in various areas of their life.

This includes being unjustly stopped and searched, handcuffed and harassed by police. And because data from these systems is often shared between public services, people can experience harms in multiple areas of their life, including in their dealings with schools and colleges, local authorities and the Department for Work and Pensions. This can affect people’s access to education, benefits, housing and other essential public services.

Even when individuals seek to access information on whether they have been profiled by a tool, they are often met with blanket refusals or contradictory statements. The lack of transparency means people often cannot challenge how or why they were targeted, or all the different places that their data may have been shared.

In an age where “predictive policing” technologies are being presented as a silver bullet to crime, police forces should be legally required to disclose all the “predictive policing” systems that they are using, including what they do, how they are used, what data operationalises them and the decisions they influence.

It should also be legally required that individuals are notified when they have been profiled by “predictive policing” systems, with clearly defined routes to challenge all places that their data is being held. Without full transparency and enforceable accountability mechanisms, these systems risk eroding the very foundations of a democratic society.

The Pre-Crime Surveillance State


The expansion of “predictive policing” into public services represents a dangerous move towards a surveillance state. The scope of “predictive policing” is not only limited to the criminal legal system. The Government is expanding algorithmic, automated and data-based systems into spaces of healthcare, education and welfare as well.

Research conducted by Medact on the Prevent Duty in healthcare evidenced how health workers are required to identify and report those who they believe are “at risk” of being drawn into terrorism. This risks undermining therapeutic relationships, confidentiality and trust in medical practitioners and expands the role of policing and counter-terror into healthcare.

Those targeted by these kinds of systems are not afforded the right to be presumed innocent until proven guilty. Instead, they are profiled, risk-scored and surveilled based on where they live or what flawed data says about them or who they associate with.

This is how a surveillance state embeds itself into the everyday. Without committing a crime, you can be branded a threat; without access to redress, you can be punished; and without transparency, you may never know it happened.

But the rise of pre-crime policing is not inevitable – it is a political choice. That is why we must take a stand and call on the government to ban “predictive policing” systems once and for all.

Beyond ‘predictive’ policing, towards community safety


The failures of “predictive” policing have been well documented – from reinforcing racist policing to undermining human rights. But rejecting these technologies does not mean giving up on public safety. On the contrary, it means shifting resources and attention to solutions that are proven to work, that respect human rights and that are based on trust, not fear. This means investing in secure housing, mental health services, youth centres and community-based support services for people experiencing hardship or distress. If safety is the goal, prevention, not prediction, should be the priority.

Ban crime-predicting police tech


‘Crime predicting’ AI doesn’t prevent crime – it creates fear and undermines our fundamental right to be presumed innocent.
Sign the petition


Support ORG
Become a member


openrightsgroup.org/blog/why-p…


Account suspended: They reviewed porn, then Instagram threw them out


netzpolitik.org/2025/account-g…


mSpy: netzpolitik.org is supposed to advertise a spyware app


netzpolitik.org/2025/mspy-netz…


Court of Human Rights: Serbia should put its sonic weapon away


netzpolitik.org/2025/gerichtsh…


Study: Pressure and surveillance via work apps


netzpolitik.org/2025/studie-dr…


Against deepfakes: US Congress passes the Take It Down Act


netzpolitik.org/2025/gegen-dee…


Hungary's Pride ban: MEPs put pressure on the EU Commission


netzpolitik.org/2025/ungarns-p…


EU rules for AI models: If my AI doesn't start a nuclear war, is it allowed to be racist?


netzpolitik.org/2025/eu-regeln…


What Do Political Parties Really Know About You?


This Thursday (1 May 2025), voters will go to the polls in 1,641 council seats across 24 local authorities. You may have spoken to a canvasser, filled out a political survey, or received campaign leaflets — but have you ever stopped to wonder how political parties know where to find you, how likely you are to vote, or even what you care about?

How Political Parties use your data


Behind the scenes, political parties are using sophisticated data systems to profile, segment, and target voters — and many people have little to no idea this is happening.

Five years ago, Open Rights Group published a report called What Do They Know? revealing how political parties were building detailed databases of voter information. In the run-up to last year’s General Election, we revisited this issue — and what we found was even more troubling.

We invited supporters to submit subject access requests (SARs) to political parties, allowing individuals to see what data parties held on them. We have compiled a CSV full of some of the data fields we learned about.

This time, we also provided new tools to help people opt out of automated profiling and algorithmic decision-making. In parallel, we carried out a technical audit of canvassing apps used by major parties and published the results in our report, Moral Hazard: Voter Data Privacy and Politics in Election Canvassing Apps.

Here are four lessons we’ve learned.

ONE
Credit agency Experian is embedded in Labour’s voter targeting infrastructure


We uncovered an uncomfortably close relationship between the Labour Party and Experian, a credit referencing agency best known for scoring people’s creditworthiness.

Experian plays a role in hosting or developing key parts of Labour’s canvassing database. Labour’s privacy policy admits that the party collects “demographic data about you from our commercial supplier (Experian),” but provides little detail about the nature of this data or how it’s used.

Subject access requests suggest that Experian’s Mosaic data is used to algorithmically score voters — including, worryingly, assigning a score for a person’s likelihood of being at home during the day.

Credit reference agencies like Experian have extraordinary powers to harvest personal data. Their involvement in electoral profiling raises serious questions about data separation and accountability.

We believe the Information Commissioner’s Office (ICO) should investigate how data flows between Experian and political parties, and whether such relationships breach the principles of data protection law.

TWO
Political parties are still failing to respect people’s data rights


No political party performed well in handling data access or opt-out requests.

  • The Conservatives, though relatively quick to respond, treated requests to opt out of profiling as if they were simply requests to stop receiving marketing emails.
  • We had reports from members that the Liberal Democrats claimed they were too busy during the election to respond to some SARs.
  • Labour introduced bureaucratic hurdles, questioning the validity of requests submitted through third-party tools — ironically, despite most email addresses also being third-party services.

This isn’t about pointing fingers at one party over another — it’s a systemic failure across the political spectrum. Established parties like Labour and the Conservatives are just as culpable as newer entrants like Reform UK or the Workers Party of Britain. The underlying problem is that compliance isn’t prioritised in political campaigning — funding and staffing go to ads and outreach, not rights and transparency.

Most parties offer some ability to opt out of direct marketing, but none are prepared to honour opt-outs from automated profiling. That’s concerning, because some voters may want to hear from candidates — but not be profiled or scored based on commercially available data.

If political parties want to earn the trust of privacy-conscious voters, they need to take these rights seriously.

THREE
Profiling by race has declined — but class-based targeting remains widespread


When we first looked at voter profiling in the late 2010s, it wasn’t unusual to find parties making assumptions about race, religion, and ethnicity. The Liberal Democrats analysed surnames to predict ethnic origin. Labour used Experian’s “Mosaic Origins” data field. The Conservatives had a “Mysticism” field to guess someone’s religion.

Our most recent SAR data shows fewer signs of this kind of profiling – a welcome shift. But class-based targeting remains widespread, and the Conservatives were still using a ‘mother tongue’ field, which could serve as a proxy for racial and cultural profiling.

Voters are still being profiled based on wealth and income indicators, often sourced from third-party commercial datasets. Parties routinely use marked registers — which show who has voted in past elections — to estimate a person’s likelihood to vote.

Together, these tools can lead to a troubling outcome: if parties believe certain people are unlikely to vote, they’re less likely to contact them. And those people, in turn, become even more disengaged.

It creates a vicious cycle of disenfranchisement — especially for those from lower-income or precarious backgrounds.
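The disengagement cycle can be sketched in a few lines (a purely illustrative toy model with invented scores, not any party's actual methodology): parties contact only voters whose past-turnout score clears a threshold, and being contacted nudges turnout up while being ignored nudges it down.

```python
# Toy model of turnout scoring and selective contact (invented numbers).
voters = {"frequent_voter": 0.8, "occasional_voter": 0.45, "rare_voter": 0.2}

for election in range(5):
    for name, score in voters.items():
        contacted = score > 0.5          # campaign only knocks on "likely" doors
        delta = 0.05 if contacted else -0.05
        voters[name] = min(1.0, max(0.0, score + delta))

print(voters)
```

After a handful of election cycles, the “rare voter” has drifted to zero engagement while the “frequent voter” is saturated with contact – exactly the dynamic that marked registers and likelihood-to-vote scoring encourage.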

The use of credit data (again, often via Experian) can exacerbate these issues, as debt history and postcode data are used to profile voting behaviour.

Politicians should remember: ignoring voters who don’t vote might backfire — especially when a new party comes along with a message that resonates with those left out.

FOUR
Canvassing apps: security flaws and lack of transparency


Our Moral Hazard report revealed major privacy and security concerns in the canvassing apps used by political parties:

  • Labour’s web-based Reach, Doorstep and Contact Creator apps were found to be integrated with infrastructure owned by Experian. It’s unclear how data was shared and processed between the two entities.
  • Static Application Security Testing analysis of the Liberal Democrats’ MiniVan App found it was deployed on infrastructure with a history of known vulnerabilities.
  • The Conservatives’ Share2Win app raised privacy concerns including potential location tracking.
  • All parties appear to be reliant on international commercial entities to run key parts of their digital campaigning infrastructure.

The lack of transparency over these tools — and how data is being stored, shared, or secured — raises serious questions about voter privacy and legality. We believe the ICO must investigate these tools as part of a broader inquiry into the data ecosystems underpinning modern campaigning.

Time for Political Data Reform


Our investigations show that voter data rights are still not being respected — by any political party.

Political parties collect vast amounts of personal data to drive increasingly precise and opaque targeting. But the systems they use are poorly regulated, frequently intrusive, and not subject to meaningful oversight.

If democracy is to be fair, voters must have the right to understand, challenge, and opt out of how they’re being profiled. We need:

  • Stronger enforcement by the ICO.
  • Greater transparency from political parties.
  • Tools and rights that put power back in the hands of voters.
  • Funding for parties to get compliance issues right.

As voters head to the polls in local elections, we urge parties to clean up their data practices – and we urge voters to ask: What do they know about me? And what are they doing with it?

ACCESS WHAT INFORMATION POLITICAL PARTIES HAVE ABOUT YOU


Use our tool to find out what data political parties hold about you
Take action

Voter Data Privacy and Politics in Election Canvassing Apps


Read ORG’s report into canvassing apps used by UK Political Parties
Find out more



openrightsgroup.org/blog/what-…


New interior minister: Dobrindt's second attempt


netzpolitik.org/2025/neuer-inn…


IP catching: The surveillance measure that is supposed to stay secret


netzpolitik.org/2025/ip-catchi…


Breaking up Big Tech: Why things could get tight for Alphabet, Meta & Co.


netzpolitik.org/2025/zerschlag…


OnlyFans: Sweden wants to criminalise paying for cam shows


netzpolitik.org/2025/onlyfans-…


Designated federal cabinet: From lobbyist to digital minister


netzpolitik.org/2025/designier…


Net neutrality: Complaint against Telekom over deliberate network throttling


netzpolitik.org/2025/netzneutr…


Week 17: The week we waited for uncomfortable answers


netzpolitik.org/2025/kw-17-die…


Extended chat privacy: New WhatsApp feature delivers a false sense of security


netzpolitik.org/2025/erweitert…


Digital Markets Act: Competition fines running into the millions for Apple and Meta


netzpolitik.org/2025/digital-m…


Global Majority House: How digital activists want to promote global perspectives at the EU


netzpolitik.org/2025/global-ma…


Global Majority House: How activists want to bring Global Majority perspectives into EU tech policy


netzpolitik.org/2025/global-ma…


Biometrics worldwide: Where protesters are tracked with facial recognition


netzpolitik.org/2025/biometrie…


Week 16: The week that made our hair stand on end.


netzpolitik.org/2025/kw-16-die…


Interview: “We need a new vision of digitalisation”


netzpolitik.org/2025/interview…


Digitalisation: How administration and the justice system could be automated


netzpolitik.org/2025/digitalis…


Administration in the cloud: Federal government makes itself dependent on Amazon and co.


netzpolitik.org/2025/verwaltun…


US analytics software: Palantir makes police and military political


netzpolitik.org/2025/us-analys…


Submission of the E-ID law referendum – Removals and changes on the board of the Pirate Party


We hereby inform you about our general assembly of 5 April and the current situation regarding the E-ID referendum:

On 5 April, the members at the Pirate Party's general assembly confirmed their president Jorgo Ananiadis in office with a three-quarters majority. In addition, Pat Mächler, Melanie Hartmann, Michel Baetscher and Renato Sigg were newly elected. Only three members voted for Nicole Rüegger and Jonas Sulzer to remain on the board.

Independently of this, Ms Rüegger and Mr Sulzer remain responsible for the signature collection against the E-ID law. They received this mandate from the Pirate Party in October 2024, together with corresponding start-up funding.
Since the mandate was awarded, the PPS board has been kept informed by the referendum leadership about its plans and activities inadequately, late, incompletely, or not at all. Mediation efforts were ignored, and offers of support for the campaign were frequently declined. As a result, Philippe Burger resigned from the board of the Zurich section at the beginning of the year, and finally, a month ago, as vice-president of the PPS, also leaving the party. In the wake of further incidents, he decided to stop making his private flat available to the referendum committee.

After the referendum leadership stood for election to the PPS presidency at the general assembly, demanded the removal of the current president Jorgo Ananiadis, and failed with both motions, the Pirate Party's branding was removed from the website of the referendum against the E-ID without consultation. As we learned from a press release, 20,000 signatures collected by the Pirate Party were submitted to the Federal Chancellery today without the board being informed.

For the E-ID referendum campaign, the communication channels, signature sheets, etc. were run in the name of the PPS. All signatories and supporters therefore assume that the referendum is being led by the Pirate Party, and the Pirate Party Switzerland stands by that.
The association “E-ID-Gesetz-Nein” was founded solely to provide administrative support for the referendum. That is why all signature sheets and campaign materials always include the line “A referendum of the Pirate Party”.

We thank everyone involved for their efforts.


piratenpartei.ch/2025/04/17/ei…


Internal documents: EU states tread water on chat control


netzpolitik.org/2025/interne-d…


For better cooperation: Will the EU manage to readjust data protection?


netzpolitik.org/2025/fuer-bess…


Bad Ads: Targeted Disinformation, Division and Fraud on Meta’s Platforms


openrightsgroup.org/app/upload…
Download

Read ORG’s report on targeted disinformation, division and fraud on Meta’s platforms

Executive Summary


This report lays out clear evidence of how Meta enables bad actors to use its targeted advertising system to manipulate elections, spread disinformation, fuel division, and facilitate fraud.

Meta’s social media platforms, Facebook and Instagram, sit at the intersection of the attention economy and surveillance capitalism. Meta’s business model is built on maximising user attention while tracking behaviours and interests and harvesting personal information. This surveillance is used to categorise people into ‘types’. Meta uses this profiling to sell the attention of these ‘types’ to would-be advertisers – a practice known as surveillance advertising.

This report brings together existing and new evidence of how bad actors can use, and have used, Meta’s targeted advertising system to access the attention of certain types of users with harmful adverts. These ‘bad ads’ seek to mislead, to divide, and to undermine democracy. Through a series of case studies, it shows how bad actors — from political campaigns to financial scammers — have used Meta’s profiling and ad-targeting tools to cause societal harm.

The case studies in this report examine how bad actors use Meta’s advertising systems to spread bad ads across five areas:

  • Democracy
    Voter suppression, the targeting of minorities, electoral disinformation, and political manipulation by the Trump campaign, Musk-backed dark money groups, and Kremlin-linked actors.
  • Science
    The COVID infodemic, vaccine disinformation and climate crisis obfuscation.
  • Hate
    Sectarian division, far-right propaganda, antisemitism, and Islamophobia.
  • Fear
    Targeting of vulnerable communities, UK Home Office migrant deterrence, and the reinforcement of trauma.
  • Fraud
    Deepfake scams, financial fraud, and the use of targeted adverts to facilitate black market activities on Meta’s platforms.


This report evidences the individual and collective harms enabled by Meta’s advertising model. Three major issues emerge, each requiring urgent action:

The Transparency Problem


Meta’s ad system is insufficiently transparent about the profiled targeting categories advertisers choose. This opacity facilitates harmful advertising and prevents public scrutiny of disinformation, fraud, and manipulation.

Recommendation
Meta must be required to publish full ad targeting details in its public Ad Library. This should include all demographic, interest-based, and behavioural categories used by each advertiser for each advert. Greater transparency would deter some forms of harmful targeting and enable greater public scrutiny of harmful ad targeting on Meta’s platforms.
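As a sketch of what such a disclosure could look like (all field names below are hypothetical; Meta's actual Ad Library exposes far less targeting detail than this), a single transparency record might contain:

```python
# Hypothetical schema for a fully transparent ad-library record.
# All fields and values are illustrative, not Meta's real API.
import json

ad_record = {
    "ad_id": "123456789",
    "advertiser": "Example Campaign Ltd",
    "targeting": {
        "demographics": {"age_min": 18, "age_max": 65, "genders": ["all"]},
        "locations": ["GB"],
        "interest_categories": ["politics", "immigration"],
        "behavioural_categories": ["engaged_with_political_content"],
        "custom_audiences": ["uploaded_contact_list"],
    },
    "spend_range_gbp": [500, 999],
    "impressions_range": [10000, 49999],
}

print(json.dumps(ad_record, indent=2))
```

Publishing every demographic, interest and behavioural category per advert, as above, is what would let researchers and regulators see who is being targeted and with what.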

The Moderation Problem


Meta’s moderation relies heavily on users reporting bad ads once they are already circulating, rather than preventing them from appearing in the first place. Meta’s largely automated approval process and lax approach to ad moderation enable targeted disinformation and harm.

Recommendation
Meta must significantly expand both the human and technological resources allocated to pre-publication ad moderation, tackling obvious disinformation, fraud and harmful ads upstream of publication rather than downstream of harm. A useful starting point is provided by provisions in the Digital Services Act, which already establish transparency obligations covering both the content moderation decisions of online platforms and the criteria advertisers use to target advertisements. The DSA also introduces so-called anti-dark-pattern provisions, which prohibit online service providers from misleading, tricking or otherwise forcing users into accepting targeted advertising against their best interests.

The Profiling Problem


Meta’s business model is built on profiling users by harvesting vast amounts of personal and behavioural data, yet it offers users no effective way to opt out of surveillance and targeting to protect themselves from the harms evidenced in this report. Additionally, there is little awareness of how users can opt out of their data being used to train generative AI.

Recommendation
Users should be presented with a clear and explicit opt-in option for profiling and targeting, and be warned that this means they can be targeted by bad actors seeking to mislead or defraud them. Given the invasive nature of the data collection and Meta’s inability to demonstrate it can protect citizens from harm, informed consent must be required above and beyond the acceptance of lengthy terms and conditions. For users who do not opt in to surveillance advertising, Meta should adopt contextual advertising within broad geographies – targeting ads based on the content users are presently engaging with rather than on the surveillance and profiling of citizens.
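The difference between the two models can be shown in a few lines (a toy sketch with a hypothetical inventory, not any real ad system): under contextual advertising, the ad is selected from the content being viewed, and no user data enters the decision.

```python
# Minimal contextual ad selection: the page topic is the only input.
AD_INVENTORY = {
    "cycling": ["bike helmet ad", "cycle insurance ad"],
    "cooking": ["cookware ad", "recipe box ad"],
}

def pick_ad(page_topic: str) -> str:
    """Select an ad from the content topic alone; the reader is never profiled."""
    ads = AD_INVENTORY.get(page_topic, ["generic brand ad"])
    return ads[0]

print(pick_ad("cycling"))   # ad matches the page, not the person
print(pick_ad("finance"))   # unknown topic falls back to an untargeted ad
```

Because no profile is consulted, there is nothing for a bad actor to aim at: an advert can follow a topic, but it cannot follow a person.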

Despite Meta’s public assurances that it does not allow disinformation, voter suppression, hate and division, or fraudulent adverts on its platforms, the case studies in this report demonstrate it consistently enables these harms. This is not solely a problem of policy enforcement, but an issue with the fundamental architecture of Meta’s opaque and poorly moderated advertising model built on surveillance, profiling, and microtargeting.

Given Meta’s unwillingness to mitigate the harms it enables, its surveillance advertising continues to facilitate societal harm. Without legal or regulatory intervention, these threats to democratic integrity, public safety, and social stability will persist, and citizens globally will continue to be deprived of the right to a reality that is not shaped by opaque, targeted advertising systems available for hire by bad actors.



openrightsgroup.org/publicatio…


New report: How Meta is monetising the migrant crisis


  • An Open Rights Group investigation reveals that fraudulent adverts offering fake driving licences and passports are still running on Facebook.
  • The investigation is part of a new report which examines how Meta’s surveillance advertising model enables vulnerable people to be targeted with disinformation, fraudulent ads and divisive content.
  • Meta monetises the migrant crisis through ads placed by both criminals and the Home Office.


Bad Ads: Targeted Disinformation, Division and Fraud on Meta’s Platforms

Read the report

How Meta monetises the migrant crisis


Social media giant Meta is profiting from the global migrant crisis – enabling both criminals and the UK Home Office to target vulnerable migrants with adverts.

Research by ORG found that fraudulent adverts offering fake identity documents are still being run on Facebook. This is despite Meta claiming it had stamped out the problem after previous media coverage exposed the issue months ago.

Now ORG has discovered that these scams have not gone away. Instead, fraudsters are using Facebook pages disguised as gaming communities to target migrants with illegal services.

One ad, placed by a page pretending to be about video gaming, offered EU identity documents and British passports for sale. It targeted men aged 18 and over in Belgium, France, Germany, Italy, Malta, the Netherlands, Poland, Portugal and Spain.

These findings expose the stark reality: even after public scandal and media scrutiny, Meta’s systems are still enabling fraudsters to reach vulnerable people — simply by disguising criminal adverts behind harmless-looking pages.


Criminals Aren’t the Only Ones Targeting Migrants


But Meta’s profiling and microtargeting enable it to profit from all aspects of the global migrant crisis. Its vast surveillance infrastructure is also being used by the UK Home Office to target refugees with fear-based advertising — including campaigns designed to deter people from crossing the Channel in small boats.

Previous research funded by the Scottish Institute for Policing Research found that Meta’s profiling tools allowed the Home Office to build ‘patchwork profiles’ of likely refugees — stitching together interests, behaviours, and language categories to target them with scare campaigns.

ORG’s Platform Power Programme Manager James Baker said:

“Meta are turning the migrant crisis into a marketplace, profiting while first criminal gangs then the UK Government target them with adverts. This isn’t just a failure of Meta to moderate content, it’s a feature of their intrusive surveillance capitalism model.”

Independent researcher James Riley, who carried out the investigation for ORG, said:

“Behind the façade of personalisation, Meta’s surveillance-based ad system enables covert campaigns that target vulnerable people, promote illegal goods, and undermine democracy. Profiling and microtargeting continue to be weaponised to deceive, divide, and inflict harm. We need far stronger external oversight, far greater transparency, and serious action from Meta to prevent the harms exposed in this report from continuing.”


Bad Ads: Targeted Disinformation, Division and Fraud on Meta’s Platforms



Open Rights Group’s report brings together existing and new evidence of how bad actors can use, and have used, Meta’s targeted advertising system to access the attention of certain types of users with harmful adverts across five areas:

Democracy
Voter suppression, the targeting of minorities, electoral disinformation, and political manipulation by the Trump campaign, Musk-backed dark money groups, and Kremlin-linked actors.

Science
The COVID infodemic, vaccine disinformation and climate crisis obfuscation.

Hate
Sectarian division, far-right propaganda, antisemitism, and Islamophobia.

Fear
Targeting of vulnerable communities, UK Home Office migrant deterrence, and the reinforcement of trauma.

Fraud
Deepfake scams, financial fraud, and the use of targeted adverts to facilitate black market activities on Meta’s platforms.


ORG is calling for:


  • A clear and explicit opt-in for targeted advertising on Meta platforms.
  • Full transparency on how ads are targeted in Meta’s public Ad Library.
  • Proper resourcing for pre-publication ad moderation — to catch fraud and harmful content before it reaches users.




openrightsgroup.org/press-rele…


Police databases: No Palantir competitor in sight


netzpolitik.org/2025/polizeida…




App-based delivery services: Disposable jobs for marginalised people


netzpolitik.org/2025/app-basie…


Happy 15th Birthday to Pirate Parties International! 🎉


Today, April 16th, 2025, we are proudly celebrating 15 remarkable years since the founding of Pirate Parties International. On this very day in 2010, Pirates from around the world gathered in Brussels for a three-day conference that established an international office for Pirate parties around the world.

As we celebrate our birthday, we’re reminded of the collective strength of our global movement. Over the years, PPI has grown into a vibrant community with members in over 40 countries. We are intercontinental. We are recognized by the UN. We have helped members get elected to parliaments in numerous countries. We represent an international voice advocating for digital freedom, transparency, and human rights in the digital age.

To all our members, supporters, and friends worldwide: Thank you for being part of this incredible journey.

Here’s to many more years ahead!

🏴‍☠️ Happy Birthday, PPI! 🏴‍☠️


pp-international.net/2025/04/h…


Centre for digital sovereignty: Without a strategy, it's just a fig leaf


netzpolitik.org/2025/zentrum-f…