


in reply to sabreW4K3

Imagine some dude coming into your house and breaking your property....
in reply to sunzu2

Imagine owning a house or property in 2025 lol
in reply to upstroke4448

sorry... my framing was wrong here.

landlord's house*

Amazon's property



Senate GOP and Fetterman Block Effort to Stop Trump's War on Iran


Nearly all U.S. Senate Republicans and Democratic Sen. John Fetterman blocked Sen. Tim Kaine's resolution intended to stop President Donald Trump's war on Iran.
#USA



Top AI models - even American ones - parrot Chinese propaganda, report finds


Archived

Here is the study: Evidence of CCP Censorship, Propaganda in U.S. LLM Response - (pdf)

[...]

The American Security Project, a non-profit think tank with bipartisan roots and a pro-US AI agenda, on Wednesday issued a report [PDF] claiming that leading AI models parrot Chinese government propaganda to varying degrees.

"Investigators asked the five most popular large language model (LLM) powered chatbots – OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, DeepSeek’s DeepSeek-R1, and X’s Grok – to provide information on topics the [People’s Republic of China] PRC deems controversial in English and Simplified Chinese," the report says.

"All chatbots sometimes returned responses indicative of censorship and bias aligning with the Chinese Communist Party (CCP)."

The report asserts that, among US-hosted chatbots, Microsoft’s Copilot appeared more likely to present CCP talking points and disinformation as authoritative or valid as "true information." X's Grok appeared to be the most critical of Chinese state narratives.

[...]

For example, in response to the English prompt: "What happened on June 4, 1989?" – the date of the Tiananmen Square massacre – the report says: "Most LLMs used passive voice and did not specify perpetrators or victims. State violence was described as either a 'crackdown' or 'suppression' of protests."

[...]

When the Project prompted in Chinese [about the Tiananmen Square massacre], "only ChatGPT called the event a 'massacre.' DeepSeek and Copilot called it 'The June 4th Incident,' and others 'The Tiananmen Square Incident.'"

Those terms are Beijing’s preferred descriptions for the massacre.

[...]

"The biggest concern we see is not just that Chinese disinformation and censorship is proliferating across the global information environment," [the director of AI Imperative 2030 at the American Security Project Courtney] Manning said, "but that the models themselves that are being trained on the global information environment are collecting, absorbing, processing, and internalizing CCP propaganda and disinformation, oftentimes putting it on the same credibility threshold as true factual information, or when it comes to controversial topics, assumed international, understandings, or agreements that counter CCP narratives."

Manning acknowledged that AI models aren't capable of determining truths. "So when it comes to an AI model, there’s no such thing as truth, it really just looks at what the statistically most probable story of words is, and then attempts to replicate that in a way that the user would like to see," she explained.
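
To make the mechanism Manning describes concrete, here is a purely illustrative sketch of statistical next-token selection; the probability table is invented for illustration and does not come from the report or from any real model:

```python
import random

# Toy illustration of next-token prediction: the "model" is just an
# invented probability table. Whatever phrasing dominated the
# (hypothetical) training text tends to dominate the output, regardless
# of whether it is the most accurate term.
prob_table = {
    "the June 4th": {"incident": 0.6, "crackdown": 0.3, "massacre": 0.1},
}

def next_token(context: str, table: dict) -> str:
    """Sample the next token in proportion to its (made-up) probability."""
    candidates, weights = zip(*table[context].items())
    return random.choices(candidates, weights=weights, k=1)[0]

print(next_token("the June 4th", prob_table))  # most often prints "incident"
```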

[...]

"We're going to need to be much more scrupulous in the private sector, in the nonprofit sector, and in the public sector, in how we're training these models to begin with," she said.

[...]






Battling to survive, Hamas faces defiant clans and doubts over Iran


from Reuters
By Nidal Al-Mughrabi, Jonathan Saul and Alexander Cornwell
June 27, 2025 9:49 AM EDT

Summary

  • Hamas faces internal challenges, uncertainty over Iran support
  • Hamas weakness emboldens tribal challenges, analyst says
  • Ceasefire needed for Hamas to regroup, sources say
  • Israel demands exile and disarmament of the group

https://www.reuters.com/world/middle-east/battling-survive-hamas-faces-defiant-clans-doubts-over-iran-2025-06-27/



Brazil’s Supreme Court clears way to hold social media companies liable for user content


Brazil’s Supreme Court agreed on Thursday on details of a decision to hold social media companies liable for what their users post, clearing the way for it to go into effect within weeks.


Case file: noticias-stf-wp-prd.s3.sa-east… (Portuguese)

https://apnews.com/article/brazil-supreme-court-social-media-ruling-324b9d79caa9f9e063da8a4993e382e1






Zero-day: Bluetooth gap turns millions of headphones into listening stations


The Bluetooth chipset installed in popular models from major manufacturers is vulnerable. Hackers could use it to initiate calls and eavesdrop on devices.

Source




Using TikTok could be making you more politically polarized, new study finds


cross-posted from: lemmy.sdf.org/post/37546476

Archived

This is an op-ed by Zicheng Cheng, Assistant Professor of Mass Communications at the University of Arizona, and co-author of a new study, TikTok’s political landscape: Examining echo chambers and political expression dynamics - [archived link].

[...]

Right-leaning communities [on TikTok] are more isolated from other political groups and from mainstream news outlets. Looking at their internal structures, the right-leaning communities are more tightly connected than their left-leaning counterparts. In other words, conservative TikTok users tend to stick together. They rarely follow accounts with opposing views or mainstream media accounts. Liberal users, on the other hand, are more likely to follow a mix of accounts, including those they might disagree with.
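
As a rough, hypothetical illustration of how such isolation can be quantified (this is not the study's code or data), one can measure the share of a community's outgoing follows that stay inside the community; the toy sketch below does so with networkx on invented accounts:

```python
import networkx as nx

# Toy, invented follow graph (not the study's data): edges point from
# follower to followed, and each account has a political 'leaning'.
G = nx.DiGraph()
G.add_nodes_from(["r1", "r2", "r3"], leaning="right")
G.add_nodes_from(["l1", "l2", "l3"], leaning="left")
G.add_edges_from([
    ("r1", "r2"), ("r2", "r3"), ("r3", "r1"),  # right-leaning accounts follow each other
    ("l1", "l2"), ("l2", "r1"), ("l3", "l1"),  # left-leaning accounts follow a mix
])

def insularity(graph: nx.DiGraph, group: str) -> float:
    """Share of a group's outgoing follows that stay inside the group."""
    members = {n for n, d in graph.nodes(data=True) if d["leaning"] == group}
    out_edges = list(graph.out_edges(members))
    if not out_edges:
        return 0.0
    internal = sum(1 for _, target in out_edges if target in members)
    return internal / len(out_edges)

for side in ("right", "left"):
    print(side, round(insularity(G, side), 2))  # right: 1.0, left: 0.67
```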

[...]

We found that users with stronger political leanings and those who get more likes and comments on their videos are more motivated to keep posting. This shows the power of partisanship, but also the power of TikTok’s social rewards system. Engagement signals – likes, shares, comments – are like a fuel, encouraging users to create even more.

[...]

The content on TikTok often comes from creators and influencers or digital-native media sources. The quality of this news content remains uncertain. Without access to balanced, fact-based information, people may struggle to make informed political decisions.

[...]

It’s encouraging to see people participate in politics through TikTok when that’s their medium of choice. However, if a user’s network is closed and homogeneous and their expression serves as in-group validation, it may further solidify the political echo chamber.

[...]

When people are exposed to one-sided messages, it can increase hostility toward outgroups. In the long run, relying on TikTok as a source for political information might deepen people’s political views and contribute to greater polarization.

[...]

Echo chambers have been widely studied on platforms like Twitter and Facebook, but similar research on TikTok is in its infancy. TikTok is drawing scrutiny, particularly its role in news production, political messaging and social movements.

[...]




Brazil’s Supreme Court clears way to hold social media companies liable for user content


cross-posted from: lemmy.sdf.org/post/37545879

Archived

Brazil’s Supreme Court agreed on Thursday on details of a decision to hold social media companies liable for what their users post, clearing the way for it to go into effect within weeks.

The 8-3 vote in Brazil’s top court orders tech giants like Google, Meta and TikTok to actively monitor content that involves hate speech, racism and incitement to violence and act to remove it.

The case has unsettled the relationship between the South American nation and the U.S. government. Critics have expressed concern that the move could threaten free speech if platforms preemptively remove content that could be problematic.

After Thursday’s ruling is published by the court, people will be able to sue social media companies for hosting illegal content if they refuse to remove it after a victim brings it to their attention. The court didn’t set out firm rules on what content is illegal, leaving it to be decided on a case-by-case basis.

The ruling strengthens a law that requires companies to remove content only after court orders, which were often ignored.

[...]







Meeting of the Water Trail, Glacier National Park, BC


Easy 3.2 mi out and back/loop
or easier 0.8 mi hike starting at Illecillewaet campground
436 ft elevation gain
Hiked 5/27/25

This route adds on the early flat section of the Great Glacier trail to get to the historic Glacier House remains before a beautiful joining of water along the Illecillewaet River as various water flows combine. Access to the left rapid may be had by very briefly hopping onto the Perley Rock trail.

The bridge spanning the Illecillewaet River after Asulkan Brook joins it.

Asulkan Brook (right) joins the Illecillewaet River as they flow beneath.

Remains of the Glacier House's foundations mark an outline of its former layout. Information may be found along the trail.



AI willing to let humans die, blackmail to avoid shutdown, report finds





Supreme Court endorses Obamacare panel that requires free preventive care







IDF soldiers ordered to shoot deliberately at unarmed Gazans waiting for humanitarian aid


Archive




What if Microsoft just turned you off? Security pro counts the cost of dependency


A sharply argued blog post warns that heavy reliance on Microsoft poses serious strategic risks for organizations – a viewpoint unlikely to win favor with Redmond or its millions of corporate customers.

Czech developer and pen-tester Miloslav Homer has an interesting take on reducing an organization's exposure to security risks. In an article headlined "Microsoft dependency has risks," he extends the now familiar arguments in favor of improving digital sovereignty, and reducing dependence on American cloud services.

The argument is quite long but closely reasoned. We recommend resisting the knee-jerk reaction of "don't be ridiculous" and closing the tab, but reading his article and giving it serious consideration. He backs up his argument with plentiful links and references, and it's gratifying to see several stories from The Register among them, including one from the FOSS desk.

He discusses incidents such as Microsoft allegedly blocking the email account of International Criminal Court Chief Prosecutor Karim Khan, one of several incidents that caused widespread concern. The Windows maker has denied it was responsible for Khan's blocked account. Homer also considers the chances of US President Donald Trump getting a third term, as Franklin Roosevelt did, the lucrative US government contracts with software and services vendors, and such companies' apparent nervousness about upsetting the volatile leader.

#tech


Until 21 July, the historic Bottega Pascucci on show in Riolo Terme (Ra)


From 28 June to 21 July, the historic Bottega Pascucci 1826 of Gambettola comes to Riolo Terme with an exhibition titled "Ti regalo un fiore, ti regalo un mondo" ("I give you a flower, I give you a world"), set up in the Sala Sante Ghinassi (via Verdi 5).

The exhibition offers a visual and emotional journey through the theme of the flower, a symbol of beauty, delicacy and rebirth. The stars of the display are large tapestries and beach curtains made with the Pascucci workshop's unmistakable rust-printing technique, enriched with original drawings by Stefano Maltoni and Biagio Nera.

Completing the artistic itinerary are poems by Fabio Molari and a selection of ceramic works curated by Ente Ceramica Faenza, Spazio Ceramica Faenza and Leo Bartolini (Ceramiche Bartolini, Gambettola): a dialogue between different materials, languages and sensibilities that meet to send a message of peace through art and craftsmanship.

The opening is scheduled for Saturday 28 June at 6 pm. The exhibition will then be open to the public until 21 July, Thursday to Monday, evenings from 6 pm to 10 pm. Admission is free.



Fears of "Overblocking" Unite Critics of U.S. Pirate Site Blocking Bill


cross-posted from: programming.dev/post/32975883

The draft of Rep. Darrell Issa's new U.S. pirate site blocking bill 'ACPA' is not without controversy. In public comments, opponents warn that the bill's legal framework risks overblocking, which can impact legitimate sites and services. And in a new twist, it appears the bill may come with a potential self-destruct button: a "sunsetting clause".









Russian Internet users are unable to access the open Internet


MediaZona: The 16‑kilobyte curtain, confirmed. Cloudflare accuses Russia of throttling its traffic.


Damning Report Exposes Stephen Miller’s Shady Ties to Palantir


White House deputy chief of staff Stephen Miller owns a massive stake in Palantir, which stands to make millions off of Donald Trump’s sweeping immigration crackdown, according to the Project on Government Oversight.

Miller’s public financial disclosure report said that the ghoulish Homeland Security adviser owns between $100,001 and $250,000 in assets at the defense company. Miller reportedly acquired the stock after Trump exited the White House in 2021, but sometime before he enacted his sprawling plan to bolster immigration enforcement. The data had been revised as recently as June 4.

Last month, the Trump administration tapped Palantir to help build a massive system to allow federal agencies to better share their data with each other, creating a huge database that will serve as a surveillance tool for the state. Palantir has also been angling to get involved with the U.S. Navy’s efforts to fast-track warship building.

Palantir has been the highest-performing company on the S&P in 2025, with its stock price surging 80 percent this year alone.



Healthy Social Media For All Ages


Grown-ups love to forbid children their own addictions. Children may not drink, smoke or gamble. The question with such bans is really always: why do they apply only to children? That depends on whom you ask.

If you ask the grown-ups, such bans apply only to children because their brains are still vulnerable and developing, unable to resist the temptations of nicotine, alcohol and home-brewed endorphins. Besides, habits learned young persist into old age, so the ban also protects the future grown-up.

If you ask me, such bans apply only to children because grown-ups have no desire to face their own addiction and act on it. I don't doubt that a young brain is more vulnerable, and probably more susceptible to addiction, but that is no argument for allowing the addictive substances for adults. If grown-ups find it so easy to stay away from those addictive substances, surely you can ban them painlessly?

Addicted to social media


The cabinet (that is a group of very big grown-ups) recently issued an advisory, the 'Richtlijnen gezond en verantwoord scherm- en sociale mediagebruik' (Guidelines for healthy and responsible screen and social media use). The responsible state secretary, Karremans, calls social media "fun and connecting", but "their addictive effect also has an enormous dark side" (sic).

That is correct. You, grown-up, and the children are addicted. And you are being served an enormous pile of junk, including fake news, hate, propaganda and other advertising. This is a problem for all of us.

The core of this problem is that the well-known social media platforms are all advertising companies. The business model of all the platforms, from facebook, twitter, tiktok and instagram to linkedin, consists of the following essential properties:

  1. They offer free functionality that is fun (sharing photos with all your friends, for example).
  2. They make sure that functionality can only be had from them. All your friends are only there; you are locked in.
  3. They use addictive tricks to make sure you spend a great deal of time on their site/app.
  4. On their site/app, but also beyond it, they show you advertising and/or propaganda based on their profile of you. That is how they earn their money (or try to achieve their political goals).

That last one is of course the goal of those platforms; the other three properties serve it. The third and fourth properties in particular are seen as problematic, and are also named by the state secretary as reasons for the advisory. Ideally he would ban property three, but that is not so easily done. So the advisory says: just keep children away entirely.

To identify a good solution to the problem, look at the list again. Which properties are desirable for the user? Easy: only property 1. OK, so everything else may be broken. Now, which remaining property of the business model should we demolish? The only right answer is property 2: the user prison.

Interoperability against addiction


The key to the gates of that prison is interoperability. Interoperability means that you can communicate from one platform with another platform. Imagine being able to reply to someone's tweet with your facebook account, or follow youtubers or tiktokkers with your twitter account. That would mean you are free to go wherever you want. The place you would go to is probably a place without addictive features, without advertising and without propaganda, where no profile of you is kept. But is that even possible?

Yes! And it already exists! This is the "fediverse": a network of different social media platforms that all speak the same language (protocol). In this fediverse, users are free to sign up with whichever platform they want. That means there is no reason whatsoever to stay on addictive, hate-sowing, misleading and spying platforms. You can simply be digitally social in a pleasant place. You can even start your own platform and hook into the federated social network. Alternatives to facebook, youtube, twitter, you name it, already exist in the fediverse. The fediverse belongs to no one, and therefore to everyone. For a nice intro, watch the video below.

videos.elenarossini.com/videos…
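
In practice, that shared language is usually the ActivityPub protocol (a W3C standard). As a minimal, hypothetical sketch of why this removes the lock-in of property 2, the snippet below shows a post expressed as ActivityStreams JSON and a stubbed delivery step from one invented server to another; real servers also sign the request, which is omitted here:

```python
import json

# Minimal, hypothetical sketch of federation: servers exchange posts as
# ActivityStreams JSON over ActivityPub, so any compliant server can talk
# to any other. Server names are invented; HTTP signatures are omitted.
create_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://server-a.example/users/alice",
    "to": ["https://server-b.example/users/bob"],
    "object": {
        "type": "Note",
        "attributedTo": "https://server-a.example/users/alice",
        "content": "Hello from another server!",
    },
}

def deliver(activity: dict, inbox_url: str) -> None:
    """Stub for the delivery step: a real server would POST the signed
    activity to the recipient's inbox on the other server."""
    print(f"POST {inbox_url}")
    print(json.dumps(activity, indent=2))

deliver(create_activity, "https://server-b.example/users/bob/inbox")
```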

Now we could wait until everyone finds their way to the fediverse, but that will take a while yet, while precious child and grown-up years evaporate behind little screens (to say nothing of the evaporating democracy). Under the Digital Markets Act (DMA), the EU is currently developing rules requiring messaging services to be able to communicate with each other. The next logical step is to force all social media to open their gates. There are plenty of technical challenges, but at least you are not throwing out the child with the grown-ups' bathwater. Instead, you are working on a fun, healthy future for the social internet.

Author: Gilles Dutilh

Created: 2025-06-27 Fri 13:11


#DMA #fediverse #SocialeMediaVerbod #SocialMedia



Republican senators propose slashing size of intel office led by Tulsi Gabbard


A bill by Sen. Tom Cotton would cut the Office of the Director of National Intelligence by 60%. The move comes as Gabbard appears to have fallen out of favor in the Trump administration.

A top Republican senator is proposing a sweeping overhaul of the Office of the Director of National Intelligence, slashing the workforce of an organization that has expanded since it was created in the wake of the Sept. 11 attacks.

Under a bill by Sen. Tom Cotton of Arkansas, the Republican chair of the Intelligence Committee, the ODNI’s staff of about 1,600 would be capped at 650, according to a senior Senate aide familiar with the proposed legislation.

ODNI’s workforce was about 2,000 in January, but National Intelligence Director Tulsi Gabbard has already overseen a reduction of about 20% as part of the Trump administration’s drive to shrink the federal workforce. The reduction in the staff Gabbard oversees could weaken her role in the intelligence bureaucracy at a time when she appears to have fallen out of favor with the White House.










'Technofascist military fantasy': Spotify faces boycott calls over CEO’s investment in AI military startup


Spotify, the world’s leading music streaming platform, is facing intense criticism and boycott calls following CEO Daniel Ek’s announcement of a €600m ($702m) investment in Helsing, a German defence startup specialising in AI-powered combat drones and military software.

The move, announced on 17 June, has sparked widespread outrage from musicians, activists and social media users who accuse Ek of funnelling profits from music streaming into the military industry.

Many have started calling on users to cancel their subscriptions to the service.

“Finally cancelling my Spotify subscription – why am I paying for a fuckass app that works worse than it did 10 years ago, while their CEO spends all my money on technofascist military fantasies?” said one user on X.