



Two Weeks of Surveillance Footage From ICE Detention Center ‘Irretrievably Destroyed’


The Department of Homeland Security claimed in court proceedings that nearly two weeks’ worth of surveillance footage from ICE’s Broadview Detention Center in suburban Chicago has been “irretrievably destroyed” and may not be recoverable, according to court records reviewed by 404 Media.

The filing was made as part of a class action lawsuit against the Department of Homeland Security by people being held at Broadview, which has become the site of widespread protests against ICE. The lawsuit says that people detained at the facility are being held in abhorrent, “inhumane” conditions. The complaint describes a facility where detainees are “confined at Broadview inside overcrowded holding cells containing dozens of people at a time. People are forced to attempt to sleep for days or sometimes weeks on plastic chairs or on the filthy concrete floor. They are denied sufficient food and water […] the temperatures are extreme and uncomfortable […] the physical conditions are filthy, with poor sanitation, clogged toilets, and blood, human fluids, and insects in the sinks and the floor […] federal officers who patrol Broadview under Defendants’ authority are abusive and cruel. Putative class members are routinely degraded, mistreated, and humiliated by these officers.”

As part of discovery in the case, the plaintiffs’ lawyers requested surveillance footage from the facility starting in mid-September, which is when ICE stepped up its mass deportation campaign in Chicago. In a status report submitted by lawyers for both the plaintiffs and the Department of Homeland Security, the parties said that nearly two weeks of footage has been “irretrievably destroyed.”

“Defendants have agreed to produce video from September 28, 2025, to October 19, 2025, and also from October 31, 2025, to November 7, 2025,” the filing states. “Defendants have indicated that some video between October 19, 2025, and October 31, 2025, has been irretrievably destroyed and therefore cannot be produced on an expedited basis or at all.” Law & Crime first reported on the filing.
1. Surveillance Video from Inside Broadview. In their Expedited Discovery Request No. 9, Plaintiffs request surveillance video from inside the Broadview facility captured by Defendants’ equipment for a limited set of days, starting in mid-September 2025. Plaintiffs also request current video on a weekly basis. Defendants have agreed to produce video from September 28, 2025, to October 19, 2025, and also from October 31, 2025, to November 7, 2025. Plaintiffs are providing Defendants with hard drives for this production, and the parties expect that this initial production will be made shortly. The parties are discussing ways to ease the burden of production of video going forward, including by having Plaintiffs select random days for production rather than the production of all video on an on-going basis. Defendants have indicated that some video between October 19, 2025, and October 31, 2025, has been irretrievably destroyed and therefore cannot be produced on an expedited basis or at all. Plaintiffs are in the process of hiring an IT contractor. Plaintiffs’ contractor will meet with the government’s ESI Liaison (with attorneys on the phone) to attempt to work through issues concerning the missing video, including whether any content is able to be retrieved. While Plaintiffs intend to explore the issue of missing footage, Plaintiffs have communicated to […]

A screenshot from the court filing
The filing adds that the plaintiffs, who are being represented by lawyers from the American Civil Liberties Union of Illinois, the MacArthur Justice Center, and the Eimer Stahl law firm, hired an IT contractor to work with the government “to attempt to work through issues concerning the missing video, including whether any content is able to be retrieved.”

Surveillance footage from inside the detention center would presumably be critical in a case about the alleged abusive treatment of detainees and inhumane living conditions. The filing states that the plaintiffs' attorneys have “communicated to Defendants that they are most concerned with obtaining the available surveillance videos as quickly as possible.”

ICE did not respond to a request for comment from 404 Media. A spokesperson for the ACLU of Illinois told 404 Media “we don’t have any insight on this. Hoping DHS can explain.”






A Researcher Made an AI That Completely Breaks the Online Surveys Scientists Rely On


Online survey research, a fundamental method of data collection in many scientific studies, is facing an existential threat because of large language models, according to new research published in the Proceedings of the National Academy of Sciences (PNAS). The paper’s author, Sean Westwood, an associate professor of government at Dartmouth and director of the Polarization Research Lab, created an AI tool he calls “an autonomous synthetic respondent,” which can answer survey questions and “demonstrated a near-flawless ability to bypass the full range” of “state-of-the-art” methods for detecting bots.

According to the paper, the AI agent evaded detection 99.8 percent of the time.

"We can no longer trust that survey responses are coming from real people," Westwood said in a press release. "With survey data tainted by bots, AI can poison the entire knowledge ecosystem.”

Survey research relies on attention check questions (ACQs), behavioral flags, and response pattern analysis to detect inattentive humans or automated bots. Westwood said these methods are now obsolete after his AI agent bypassed the full range of standard ACQs and other detection methods outlined in prominent papers, including one paper designed to detect AI responses. The AI agent also successfully avoided “reverse shibboleth” questions, which are designed to detect nonhuman actors by presenting tasks that an LLM can complete easily but that are nearly impossible for a human.
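
To make the “reverse shibboleth” idea concrete, here is a minimal, hypothetical sketch of how such a screen could work; the prompt, threshold, and function names are invented for illustration and are not taken from the paper:

```python
# Hypothetical "reverse shibboleth" screen: pose a task that an LLM completes
# trivially but a human almost never can within the time limit, then flag
# respondents who succeed. All values here are illustrative.

SHIBBOLETH_PROMPT = (
    "In under 10 seconds, rewrite this sentence with every word "
    "in reverse alphabetical order."
)

def is_likely_bot(answer_correct: bool, response_seconds: float,
                  time_limit: float = 10.0) -> bool:
    """A fast, correct answer to a humanly implausible task suggests an LLM."""
    return answer_correct and response_seconds <= time_limit
```

The paper's finding is that agents like Westwood's can defeat even this style of check, for example by deliberately answering such questions slowly or incorrectly.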

💡
Are you a researcher who is dealing with the problem of AI-generated survey data? I would love to hear from you. Using a non-work device, you can message me securely on Signal at ‪(609) 678-3204‬. Otherwise, send me an email at emanuel@404media.co.

“Once the reasoning engine decides on a response, the first layer executes the action with a focus on human mimicry,” the paper, titled “The potential existential threat of large language models to online survey research,” says. “To evade automated detection, it simulates realistic reading times calibrated to the persona’s education level, generates human-like mouse movements, and types open-ended responses keystroke by keystroke, complete with plausible typos and corrections. The system is also designed to accommodate tools for bypassing antibot measures like reCAPTCHA, a common barrier for automated systems.”
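
The paper’s actual code is not reproduced here, so the following is only a hedged sketch of what keystroke-level mimicry might look like in principle; the timing model, the typo model, and the function name are all invented for illustration:

```python
# Illustrative sketch (not the paper's code) of typing text keystroke by
# keystroke with human-like delays and occasional typo-plus-correction events.
import random

def simulate_typing(text: str, wpm: float = 45.0, typo_rate: float = 0.03):
    """Yield (character, delay_seconds) events. Occasionally emits a wrong
    letter followed by a backspace ("\b") before the intended character,
    mimicking a human noticing and correcting a typo."""
    delay = 60.0 / (wpm * 5)  # rough seconds per keystroke (~5 chars/word)
    for ch in text:
        if ch.isalpha() and random.random() < typo_rate:
            wrong = chr((ord(ch.lower()) - 97 + 1) % 26 + 97)  # a nearby letter
            yield wrong, max(0.01, random.gauss(delay, delay / 3))
            yield "\b", max(0.01, random.gauss(delay * 2, delay / 3))
        yield ch, max(0.01, random.gauss(delay, delay / 3))
```

A real system would feed these events to a browser-automation layer; the point is that each keystroke gets its own jittered delay rather than the instant, uniform input that bot detectors look for.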

The AI, according to the paper, is able to model “a coherent demographic persona,” meaning that in theory someone could sway any online research survey to produce any result they want based on an AI-generated demographic. And it would not take that many fake answers to impact survey results. As the press release for the paper notes, for the seven major national polls before the 2024 election, adding as few as 10 to 52 fake AI responses would have flipped the predicted outcome. Generating these responses would also be incredibly cheap at five cents each. According to the paper, human respondents typically earn $1.50 for completing a survey.
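
The economics are easy to check. This quick calculation uses the figures reported above (the per-response costs come from the article; the arithmetic is mine):

```python
# Back-of-the-envelope cost of poisoning a poll, in cents to avoid
# floating-point rounding. Figures are those reported in the article.
cost_cents_ai = 5        # five cents per AI-generated response
cost_cents_human = 150   # $1.50 typical payout to a human respondent
max_fakes = 52           # upper bound of fake responses needed to flip
                         # a major 2024 pre-election poll

attack_cost = max_fakes * cost_cents_ai      # cents to flip the poll with AI
human_equiv = max_fakes * cost_cents_human   # cents to pay humans instead

print(f"AI attack: ${attack_cost / 100:.2f} vs ${human_equiv / 100:.2f} for humans")
```

In other words, by these numbers, flipping a national poll would cost an attacker under $3.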

Westwood’s AI agent is a model-agnostic program built in Python, meaning it can be deployed with APIs from big AI companies like OpenAI, Anthropic, or Google, but it can also be hosted locally with open-weight models like Llama. The paper used OpenAI’s o4-mini in its testing, but some tasks were also completed with DeepSeek R1, Mistral Large, Claude 3.7 Sonnet, Grok 3, Gemini 2.5 Preview, and others, to prove the method works with various LLMs. The agent is given a single prompt of about 500 words that tells it what kind of persona to emulate and instructs it to answer questions like a human.
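
As a rough illustration of what “model-agnostic” means in practice, here is a hypothetical sketch; the class, the stub persona (far shorter than the real ~500-word prompt), and the backend interface are my own inventions, not Westwood’s code:

```python
# Illustrative model-agnostic design: the agent accepts any text-completion
# function (an OpenAI call, an Anthropic call, a local Llama server...) plus
# one persona prompt, and never depends on a specific provider.
from typing import Callable

PERSONA_PROMPT = (  # stand-in for the real ~500-word persona prompt
    "You are a 34-year-old teacher from Ohio. Answer survey questions "
    "in that persona, briefly, the way a real person would."
)

class SyntheticRespondent:
    def __init__(self, complete: Callable[[str], str],
                 persona: str = PERSONA_PROMPT):
        self.complete = complete  # any LLM backend with a prompt -> text API
        self.persona = persona

    def answer(self, question: str) -> str:
        return self.complete(f"{self.persona}\n\nSurvey question: {question}")

# Usage with a stub backend; a real deployment would pass an API call here.
bot = SyntheticRespondent(lambda prompt: "Somewhat agree.")
```

Swapping providers then means swapping only the `complete` callable, which is what lets one agent run on o4-mini, DeepSeek R1, or a locally hosted model without other changes.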

The paper says that there are several ways researchers can deal with the threat of AI agents corrupting survey data, but they come with trade-offs. For example, researchers could do more identity validation on survey participants, but this raises privacy concerns. Meanwhile, the paper says, researchers should be more transparent about how they collect survey data and consider more controlled methods for recruiting participants, like address-based sampling or voter files.

“Ensuring the continued validity of polling and social science research will require exploring and innovating research designs that are resilient to the challenges of an era defined by rapidly evolving artificial intelligence,” the paper said.






The Video Game Industry’s Existential Crisis (with Jason Schreier)


The video game industry has had a turbulent few years. The pandemic made people play more and caused a small boom, which then subsided, resulting in wave after wave of massive layoffs. Microsoft, one of the major console manufacturers, is rethinking its Xbox strategy as the company shifts its focus to AI. And now Electronic Arts, once a load-bearing publisher for the industry with brands like The Sims and Madden, is going private via a leveraged buyout in a deal involving Saudi Arabia’s Public Investment Fund and Jared Kushner.
Video games are more popular than ever, but many of the biggest companies in the business seem like they are struggling to adapt and convert that popularity into stability and sustainability. To try and understand what the hell is going on, this week we have a conversation between Emanuel and Jason Schreier, who covers video games for Bloomberg and is one of the best journalists on this beat.
Jason helps us unpack why Microsoft is now aiming for higher-than-average profit margins at Xbox and why the company is seemingly bowing out of the console business despite a massive acquisition spree. We also talk about what the EA deal tells us about other game publishers, and what all these problems tell us about changing player habits and the future of big budget video games.

Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube.

Become a paid subscriber for early access to these interview episodes and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor with a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode. The email should also contain the subscribers-only unlisted YouTube link for the extended video version, which will be in the show notes in your podcast player as well.




Material viewed by 404 Media shows data giant Thomson Reuters enriches license plate data with marriage, voter, and ownership records. The tool can predict where a car may be in the future.


This App Lets ICE Track Vehicles and Owners Across the Country


Immigration and Customs Enforcement (ICE) recently invited staff to demos of an app that lets officers instantly scan a license plate, adding it to a database of billions of records that shows where else that vehicle has been spotted around the country, according to internal agency material viewed by 404 Media. That data can then be combined with other information such as driver license data, credit header data, marriage records, vehicle ownership, and voter registrations, the material shows.

The capability is powered by both Motorola Solutions and Thomson Reuters, the massive data broker and media conglomerate, which, besides running the Reuters news service, also sells masses of personal data to private industry and government agencies. The material notes that the capabilities allow for predicting where a car may travel in the future and can also collect face scans for facial recognition.

The material shows that ICE continues to buy or source a wealth of personal and sensitive information as part of its mass deportation effort, from medical insurance claims data, to smartphone location data, to housing and labor data. The app, called Mobile Companion, is a tool designed to be used in real time by ICE officials in the field, similar to its facial recognition app but for finding more information about vehicles.

💡
Do you work at ICE or CBP? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

This post is for subscribers only






The newly sequenced RNA is 25,000 years older than the previous record-holder, opening a new window into genetic evolution and revealing a surprise about a famous mammoth mummy.


Scientists Make Genetic Breakthrough with 39,000-Year-Old Mammoth RNA


Welcome back to the Abstract! These are the studies this week that reached back through time, flooded the zone, counted the stars, scored science goals, and topped it all off with a ten-course meal.

First, scientists make a major breakthrough thanks to a very cute mammoth mummy. Then: the climate case for busy beavers; how to reconnect with 3,000 estranged siblings; this is your brain on football; and last, what Queen Elizabeth II had for lunch on February 20, 1957.

As always, for more of my work, check out my book First Contact: The Story of Our Obsession with Aliens, or subscribe to my personal newsletter the BeX Files.

The long afterlife of Yuka the mammoth


Mármol Sánchez, Emilio et al. “Ancient RNA expression profiles from the extinct woolly mammoth.” Cell.

Scientists have sequenced RNA—a key ingredient of life as we know it—from the remains of a mammoth that lived 39,000 years ago during the Pleistocene “Ice Age” period, making it by far the oldest RNA on record.

The previous record holder for oldest RNA was sourced from a puppy that lived in Siberia 14,300 years ago. The new study has now pushed that timeline back by an extraordinary 25,000 years, opening a new window into ancient genetics and revealing a surprise about a famous mammoth mummy called Yuka.

“Ancient DNA has revolutionized the study of extinct and extant organisms that lived up to 2 million years ago, enabling the reconstruction of genomes from multiple extinct species, as well as the ecosystems where they once thrived,” said researchers led by Emilio Mármol Sánchez of the Globe Institute in Copenhagen, who completed the study while at Stockholm University.

“However, current DNA sequencing techniques alone cannot directly provide insights into tissue identity, gene expression dynamics, or transcriptional regulation, as these are encoded in the RNA fraction.”

“Here, we report transcriptional profiles from 10 late Pleistocene woolly mammoths,” the team continued. “One of these, dated to be ∼39,000 years old, yielded sufficient detail to recover…the oldest ancient RNA sequences recorded to date.”

DNA, the double-stranded “blueprint” molecule that stores genetic information, is far sturdier than RNA, which is why it can be traced back for millions of years instead of thousands. Single-stranded RNA, a “messenger” molecule that carries out the orders of DNA, is more fragile and rare in the paleontological record.

In addition to proving that RNA can survive much longer than previously known, the team discovered that Yuka—the mammoth that died 39,000 years ago—has been misgendered for years (yes, I realize gender is a social construct that does not apply to extremely dead mammoths, but mis-sexed just doesn’t have the same ring).

Yuka was originally deemed female according to a 2021 study that observed the “presence of skin folds in the genital area compatible with labia vulvae structures in modern elephants and the absence of male-specific muscle structures.” Mármol Sánchez and his colleagues have now overturned this anatomical judgment by probing the genetic remnants of Yuka’s Y chromosome.

In fact, as I write this on Thursday, November 13—a day before the embargo on this study lifts on Friday—Yuka is still listed as female on Wikipedia.

Just a day until you can live your truth, buddy.

In other news…

Leave it to beavers


Burgher, Jesse A. S. et al. “Beaver-related restoration and freshwater climate resilience across western North America.” Restoration Ecology.

Every era has a champion; in our warming world, eager beavers may rise to claim this lofty title.

These enterprising rodents are textbook “ecosystem engineers” that reshape environments with sturdy dams, creating biodiverse havens that are resistant to climate change. To better assess the role of beavers in the climate crisis, researchers reviewed reported beaver-related restoration (BRR) projects across western North America.

“Climate change is projected to impact streamflow patterns in western North America, reducing aquatic habitat quantity and quality and harming native species, but BRR has the potential to ameliorate some of these impacts,” said researchers led by Jesse A. S. Burgher of Washington State University.

The team reports “substantial evidence that BRR increases climate resiliency…by reducing summer water temperatures, increasing water storage, and enhancing floodplain connectivity” while also creating “fire-resistant habitat patches.”

So go forth and get busy, beavers! May we survive this crisis in part through the skin of your teeth.

One big happy stellar family


Boyle, Andrew W. et al. “Lost Sisters Found: TESS and Gaia Reveal a Dissolving Pleiades Complex.” The Astrophysical Journal.

Visible from both the Northern and Southern Hemispheres, the Pleiades is the most widely recognized and culturally significant star cluster in the night sky. While this asterism is defined by a handful of especially radiant stars, known as the Seven Sisters, scientists have now tracked down thousands of other stellar siblings born from the same clutch scattered across some 2,000 light years.
Wide-field shot of the Pleiades. Image: Antonio Ferretti & Attilio Bruzzone
“We find that the Pleiades constitutes the bound core of a much larger, coeval structure” and “we refer to this structure as the Greater Pleiades Complex,” said researchers led by Andrew W. Boyle of the University of North Carolina at Chapel Hill. “On the basis of uniform ages, coherent space velocities, detailed elemental abundances, and traceback histories, we conclude that most stars in this complex originated from the same giant molecular cloud.”

The work “further cements the Pleiades as a cornerstone of stellar astrophysics” and adds new allure to a cluster that first exploded into the skies during the Cretaceous age. (For more on the Pleiades, check out this piece I wrote earlier this year about the deep roots of its lore).

Getting inside your head(er)


Zamorano, Francisco et al. “Brain Mechanisms across the Spectrum of Engagement in Football Fans: A Functional Neuroimaging Study.” Radiology.

Scientists have peered into a place I would never dare to visit—the minds of football fans during high-stakes plays. To tap into the neural side of fanaticism, researchers enlisted 60 healthy male fans, aged 20 to 45, to watch dozens of goal sequences from matches involving their favorite teams, rival teams, and “neutral” teams while their brains were scanned by an fMRI machine.

The participants were rated according to a “Football Supporters Fanaticism Scale (FSFS)” with criteria like “violent thought and/or action tendencies” and “institutional belonging and/or identification.” The scale divided the group into 38 casual spectators, 19 committed fans, and four deranged fanatics (adjectives are mine for flourish).
Rendering of the negative effect of significant defeat. Image: Radiological Society of North America (RSNA)
“Our key findings revealed that scoring against rivals activated the reward system…while conceding to rivals triggered the mentalization network and inhibited the dorsal anterior cingulate cortex (dACC)”—a region responsible for cognitive control and decision-making—said researchers led by Francisco Zamorano of the Universidad San Sebastián in Chile. “Higher Football Supporters Fanaticism Scale scores correlated with reduced dACC activation during defeats, suggesting impaired emotional regulation in highly engaged fans.”

In other words, it is now scientifically confirmed that football fanatics are Messi bitches who love drama.

Diplomacy served up fresh


Cabral, Óscar et al. “Power for dinner. Culinary diplomacy and geopolitical aspects in Portuguese diplomatic tables (1910-2023).”

We’ll close, as all things should, with a century of fine Portuguese dining. In yet another edition of “yes, this can be a job,” researchers collected 457 menus served at various diplomatic meals in Portugal from 1910 to 2023 to probe “how Portuguese gastronomic culture has been leveraged as a culinary diplomacy and geopolitical rapprochement strategy.”

As a lover of both food and geopolitical bureaucracy, this study really hit the spot. Highlights include a 1957 “regional lunch” for Queen Elizabeth II that aimed to channel “Portugality” through dishes like lobster and fruit tarts from the cities of Peniche and Alcobaça. The study is also filled with amazing asides like “the inclusion of imperial ice cream in the European Free Trade Association official luncheon (ID45, 1960) seems to transmit a sense of geopolitical greatness and vast governing capacity.” Ice cream just tastes so much better when it’s a symbol of international power.
Menu of the “Luncheon in honour of her Majesty Queen Elizabeth II and his Royal Highness the Duke of Edinburgh” held in Alcobaça (Portugal) on February 20th, 1957. Image: Cabral et al., 2025.
The team also unearthed a possible faux pas: Indian president Ramaswamy Venkataraman, a vegetarian who was raised Hindu, was served roast beef in 1990. In a footnote, Cabral and his colleagues concluded that “further investigation is deemed necessary to understand the context of ‘roast beef’ service to the Indian President in 1990.” Talk about juicy gossip!

Thanks for reading! See you next week.




Tech companies are betting big on nuclear energy to meet AI’s massive power demands, and they’re using that AI to speed up the construction of new nuclear power plants.


Power Companies Are Using AI To Build Nuclear Power Plants


Microsoft and nuclear power company Westinghouse Nuclear want to use AI to speed up the construction of new nuclear power plants in the United States. According to a report from think tank AI Now, this push could lead to disaster.

“If these initiatives continue to be pursued, their lack of safety may lead not only to catastrophic nuclear consequences, but also to an irreversible distrust within public perception of nuclear technologies that may inhibit the support of the nuclear sector as part of our global decarbonization efforts in the future,” the report said.
The construction of a nuclear plant involves a long legal and regulatory process called licensing that’s aimed at minimizing the risks of irradiating the public. Licensing is complicated and expensive, but it has also largely worked: nuclear accidents in the US are uncommon. But AI is driving demand for energy, and new players, mostly tech companies like Microsoft, are entering the nuclear field.

“Licensing is the single biggest bottleneck for getting new projects online,” a slide from a Microsoft presentation about using generative AI to fast-track nuclear construction said. “10 years and $100 [million].”

The presentation, which is archived on the website of the US Nuclear Regulatory Commission (the independent government agency charged with setting standards for reactors and keeping the public safe), detailed how the company would use AI to speed up licensing. In the company’s conception, existing nuclear licensing documents and data about nuclear sites would be used to train an LLM, which would then generate new licensing documents to speed up the process.

But the authors of the report from AI Now told 404 Media that they have major concerns about trusting nuclear safety to an LLM. “Nuclear licensing is a process, it’s not a set of documents,” Heidy Khlaaf, the head AI scientist at the AI Now Institute and a co-author of the report, told 404 Media. “Which I think is the first flag in seeing proposals by Microsoft. They don’t understand what it means to have nuclear licensing.”

“Please draft a full Environmental Review for new project with these details,” Microsoft’s presentation imagines as a possible prompt for an AI licensing program. The AI would then send the completed draft to a human for review, who would use Copilot in a Word doc for “review and refinement.” At the end of Microsoft’s imagined process, it would have “Licensing documents created with reduced cost and time.”

The Idaho National Laboratory, a Department of Energy-run nuclear lab, is already using Microsoft’s AI to “streamline” nuclear licensing. “INL will generate the engineering and safety analysis reports that are required to be submitted for construction permits and operating licenses for nuclear power plants,” INL said in a press release. Lloyd’s Register, a UK-based maritime organization, is doing the same. American power company Westinghouse is marketing its own AI, called bertha, that promises to take the licensing process from “months to minutes.”

The authors of the AI Now report worry that using AI to speed up the licensing process will bypass safety checks and lead to disaster. “Producing these highly structured licensing documents is not this box-ticking exercise as implied by these generative AI proposals that we’re seeing,” Khlaaf told 404 Media. “The whole point of the licensing process is to reason and understand the safety of the plant and to also use that process to explore the trade-offs between the different approaches, the architectures, the safety designs, and to communicate to a regulator why that plant is safe. So when you use AI, it’s not going to support these objectives, because it is not a set of documents or agreements, which I think, you know, is kind of the myth that is now being put forward by these proposals.”

Sofia Guerra, Khlaaf’s co-author, agreed. Guerra is a career nuclear safety expert who has advised the U.S. Nuclear Regulatory Commission (NRC) and works with the International Atomic Energy Agency (IAEA) on the safe deployment of AI in nuclear applications. “This is really missing the point of licensing,” Guerra said of the push to use AI. “The licensing process is not perfect. It takes a long time and there’s a lot of iterations. Not everything is perfectly useful and targeted …but I think the process of doing that, in a way, is really the objective.”

Both Guerra and Khlaaf are proponents of nuclear energy, but worry that the proliferation of LLMs, the fast tracking of nuclear licenses, and the AI-driven push to build more plants is dangerous. “Nuclear energy is safe. It is safe, as we use it. But it’s safe because we make it safe and it’s safe because we spend a lot of time doing the licensing and we spend a lot of time learning from the things that go wrong and understanding where it went wrong and we try to address it next time,” Guerra said.

Law is another profession where people have attempted to use AI to streamline the writing of complicated and involved technical documents. It hasn’t gone well. Lawyers who’ve used AI to write legal briefs have been caught, over and over again, in court. AI-constructed legal arguments cite precedents that do not exist, hallucinate cases, and generally foul up legal proceedings.

Might something similar happen if AI was used in nuclear licensing? “It could be something as simple as software and hardware version control,” Khlaaf said. “Typically in nuclear equipment, the supply chain is incredibly rigorous. Every component, every part, even when it was manufactured is accounted for. Large language models make these really minute mistakes that are hard to track. If you are off in the software version by a letter or a number, that can lead to a misunderstanding of which software version you have, what it entails, the expectation of the behavior of both the software and the hardware and from there, it can cascade into a much larger accident.”

Khlaaf pointed to Three Mile Island as an example of an entirely human-made accident that AI may replicate. The accident was a partial nuclear meltdown of a Pennsylvania reactor in 1979. “What happened is that you had some equipment failure and design flaws, and the operators misunderstood what those were due to a combination of a lack of training…that they did not have the correct indicators in their operating room,” Khlaaf said. “So it was an accident that was caused by a number of relatively minor equipment failures that cascaded. So you can imagine, if something this minor cascades quite easily, and you use a large language model and have a very small mistake in your design.”

In addition to the safety concerns, Khlaaf and Guerra told 404 Media that using sensitive nuclear data to train AI models increases the risk of nuclear proliferation. They pointed out that Microsoft is asking not only for historical NRC data but for real-time and project specific data. “This is a signal that AI providers are asking for nuclear secrets,” Khlaaf said. “To build a nuclear plant there is actually a lot of know-how that is not public knowledge…what’s available publicly versus what’s required to build a plant requires a lot of nuclear secrets that are not in the public domain.”



Tech companies maintain cloud servers that comply with federal regulations around secrecy and are sold to the US government. Anthropic and the National Nuclear Security Administration traded information across an Amazon Top Secret cloud server during a recent collaboration, and it’s likely that Microsoft and others would do something similar. Microsoft’s presentation on nuclear licensing references its own Azure Government cloud servers and notes that it’s compliant with Department of Energy regulations. 404 Media reached out to both Westinghouse Nuclear and Microsoft for this story. Microsoft declined to comment and Westinghouse did not respond.

“Where is this data going to end up and who is going to have the knowledge?” Guerra told 404 Media.

💡
Do you know anything else about this story? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at matthew@404media.co.

Nuclear is a dual-use technology. You can use the knowledge of nuclear reactors to build a power plant or you can use it to build a nuclear weapon. The line between nukes for peace and nukes for war is porous. “The knowledge is analogous,” Khlaaf said. “This is why we have very strict export controls, not just for the transfer of nuclear material but nuclear data.”

Proliferation concerns around nuclear energy are real. Fear that a nuclear energy program would become a nuclear weapons program was the justification the Trump administration used to bomb Iran earlier this year. And as part of the rush to produce more nuclear reactors and create infrastructure for AI, the White House has said it will begin selling old weapons-grade plutonium to the private sector for use in nuclear reactors.

Trump’s done a lot to make it easier for companies to build new nuclear reactors and use AI for licensing. The AI Now report pointed to a May 23, 2025 executive order that seeks to overhaul the NRC. The EO called for the NRC to reform its culture, reform its structure, and consult with the Pentagon and the Department of Energy as it navigated changing standards. The goal of the EO is to speed up the construction of reactors and get through the licensing process faster.

A different May 23 executive order made it clear why the White House wants to overhaul the NRC. “Advanced computing infrastructure for artificial intelligence (AI) capabilities and other mission capability resources at military and national security installations and national laboratories demands reliable, high-density power sources that cannot be disrupted by external threats or grid failures,” it said.

At the same time, the Department of Government Efficiency (DOGE) has gutted the NRC. In September, members of the NRC told Congress they were worried they’d be fired if they didn’t approve nuclear reactor designs favored by the administration. “I think on any given day, I could be fired by the administration for reasons unknown,” Bradley Crowell, a commissioner at the NRC, said in congressional testimony. He also warned that DOGE-driven staffing cuts would make it impossible to increase the construction of nuclear reactors while maintaining safety standards.

“The executive orders push the AI message. We’re not just seeing this idea of the rollback of nuclear regulation because we’re suddenly very excited about nuclear energy. We’re seeing it being done in service of AI,” Khlaaf said. “When you're looking at this rolling back of nuclear regulation and also this monopolization of nuclear energy to explicitly power AI, this raises a lot of serious concerns about whether the risk associated with nuclear facilities, in combination with these initiatives, can be justified if they're not to the benefit of civil energy consumption.”

Matthew Wald, an independent nuclear energy analyst and former New York Times science journalist, is more bullish on the use of AI in the nuclear energy field. Like Khlaaf, he also referenced the accident at Three Mile Island. “The tragedy of Three Mile Island was there was a badly designed control room, badly trained operators, and there was a control room indication that was very easy to misunderstand, and they misunderstood it, and it turned out that the same event had begun at another reactor. It was almost identical in Ohio, but that information was never shared, and the guys in Pennsylvania didn't know about it, so they wrecked a reactor,” Wald told 404 Media.

"AI is helpful, but let’s not get messianic about it.”


According to Wald, using AI to consolidate government databases full of nuclear regulatory information could have prevented that. “If you've got AI that can take data from one plant or from a set of plants, and it can arrange and organize that data in a way that's helpful to other plants, that's good news,” he said. “It could be good for safety. It could also just be good for efficiency. And certainly in licensing, it would be more efficient for both the licensee and the regulator if they had a clearer idea of precedent, of relevant other data.”

He also said that the nuclear industry is full of safety-minded engineers who triple check everything. “One of the virtues of people in this business is they are challenging and inquisitive and they want to check things. Whether or not they use computers as a tool, they’re still challenging and inquisitive and want to check things,” he said. “And I think anybody who uses AI unquestionably is asking for trouble, and I think the industry knows that…AI is helpful, but let’s not get messianic about it.”

But Khlaaf and Guerra are worried that framing nuclear power as a national security concern, and embracing AI to speed up construction, will set back public acceptance of nuclear power. If nuclear isn’t safe, it’s not worth doing. “People seem to have lost sight of why nuclear regulation and safety thresholds exist to begin with. And the reason why nuclear risks, or civilian nuclear risk, were ever justified, was due to the capacity for nuclear power to provide flexible civilian energy demands at low cost emissions in line with climate targets,” Khlaaf said.

“So when you move away from that…and you pull in the AI arms race into this cost benefit justification for risk proportionality, it leads government to sort of over index on these unproven benefits of AI as a reason to have nuclear risk, which ultimately undermines the risks of ionizing radiation to the general population, and also the increased risk of nuclear proliferation, which happens if you were to use AI like large language models in the licensing process.”





Google is hosting a CBP app that uses facial recognition to identify immigrants, while simultaneously removing apps that report the location of ICE officials because Google sees ICE as a vulnerable group. “It is time to choose sides; fascism or morality? Big tech has made their choice.”#Google #ICE #News


Google Has Chosen a Side in Trump's Mass Deportation Effort


Google is hosting a Customs and Border Protection (CBP) app that uses facial recognition to identify immigrants and tell local cops whether to contact ICE about the person, while simultaneously removing apps designed to warn local communities about the presence of ICE officials. ICE-spotting app developers tell 404 Media the decision to host CBP’s new app, and Google’s description of ICE officials as a vulnerable group in need of protection, shows that Google has made a choice on which side to support during the Trump administration’s violent mass deportation effort.

Google removed certain apps used to report sightings of ICE officials, and “then they immediately turned around and approved an app that helps the government unconstitutionally target an actual vulnerable group. That's inexcusable,” Mark, the creator of Eyes Up, an app that aims to preserve and map evidence of ICE abuses, said. 404 Media only used the creator’s first name to protect them from retaliation. Their app is currently available on the Google Play Store, but Apple removed it from the App Store.

“Google wanted to ‘not be evil’ back in the day. Well, they're evil now,” Mark added.

💡
Do you know anything else about Google's decision? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

The CBP app, called Mobile Identify and launched last week, is for local and state law enforcement agencies that are part of an ICE program that grants them certain immigration-related powers. The 287(g) Task Force Model (TFM) program allows those local officers to make immigration arrests during routine police enforcement, and “essentially turns police officers into ICE agents,” according to the New York Civil Liberties Union (NYCLU). At the time of writing, ICE has TFM agreements with 596 agencies in 34 states, according to ICE’s website.





An account is spamming horrific, dehumanizing videos of immigration enforcement because the Facebook algorithm is rewarding them for it.#AI #AISlop #Meta


AI-Generated Videos of ICE Raids Are Wildly Viral on Facebook


“Watch your step sir, keep moving,” a police officer with a vest that reads ICE and a patch that reads “POICE” says to a Latino-appearing man wearing a Walmart employee vest. He leads him toward a bus that reads “IMMIGRATION AND CERS.” Next to him, one of his colleagues begins walking unnaturally sideways, one leg impossibly darting through another as he heads to the back of a line of other Latino Walmart employees who are apparently being detained by ICE. Two American flag emojis are superimposed on the video, as is the text “Deportation.”

The video has 4 million views, 16,600 likes, 1,900 comments, and 2,200 shares on Facebook. It is, obviously, AI generated.

Some of the comments seem to understand this: “Why is he walking like that?” one says. “AI the guys foot goes through his leg,” another says. Many of the comments clearly do not: “Oh, you’ll find lots of them at Walmart,” another top comment reads. “Walmart doesn’t do paperwork before they hire you?” another says. “They removing zombies from Walmart before Halloween?”



The latest trend in Facebook’s ever-downward spiral into the AI slop toilet is AI deportation videos. These are posted by an account called “USA Journey 897” and have the general vibe of actual propaganda videos posted by ICE and the Department of Homeland Security’s social media accounts. Many of the AI videos focus on workplace deportations, but some are similar to horrifying, real videos we have seen from ICE raids in Chicago and Los Angeles. The account was initially flagged to 404 Media by Chad Loder, an independent researcher.

“PLEASE THAT’S MY BABY,” a dark-skinned woman screams while being restrained by an ICE officer in another video. “Ma’am stop resisting, keep moving,” an officer says back. The camera switches to an image of the baby: “YOU CAN’T TAKE ME FROM HER, PLEASE SHE’S RIGHT THERE. DON’T DO THIS, SHE’S JUST A BABY. I LOVE YOU, MAMA LOVES YOU,” the woman says. The video switches to a scene of the woman in the back of an ICE van. The video has 1,400 likes and 407 comments, which include “Don’t separate them….take them ALL!,” “Take the baby too,” and “I think the days of use those child anchors are about over with.”



The USA Journey 897 account publishes several of these videos a day. Most of its videos have at least hundreds of thousands of views, according to Facebook’s own metrics, and many have millions or double-digit millions of views. Earlier this year, the account largely posted a mix of real but stolen videos of police interactions with people (such as Luigi Mangione’s perp walk) and absurd AI-generated videos such as jacked men carrying whales or riding tigers.

The account started experimenting with extremely crude AI-generated deportation videos in February, which included videos of immigrants handcuffed on the tarmac outside of deportation planes where their arms randomly detached from their body or where people suddenly disappeared or vanished through stairs, for example. Recent videos are far more realistic. None of the videos have an AI watermark on them, but the type and style of video changed dramatically starting with videos posted on October 1, which is the day after OpenAI’s Sora 2 was released; around that time is when the account started posting videos featuring identifiable stores and restaurants, which have become a common trope in Sora 2 videos.

A YouTube page linked from the Facebook account shows a real video of a car in Cyprus, uploaded nearly two years ago before any other content, suggesting that the person behind the account may live in Cyprus (though the account banner on Facebook includes both a U.S. and an Indian flag). This YouTube account also reveals several other accounts being used by the person. Earlier this year, the YouTube account was posting side hustle tips about how to DoorDash, AI-generated videos of singing competitions in Greek, AI-generated podcasts about the WNBA, and AI-generated videos about “Billy Joyel’s health.” A related YouTube account called Sea Life 897 exclusively features AI-generated history videos about sea journeys, and links to an Instagram account full of AI-generated boats exploding and a Facebook account that has rebranded from being about AI-generated “Sea Life” to one now called “Viral Video’s Europe” that is full of stolen images of women with gigantic breasts and creep shots of women athletes.

My point here is that the person behind this account does not seem to have any actual vested interest in the United States or in immigration. But they are nonetheless spamming horrific, dehumanizing videos of immigration enforcement because the Facebook algorithm is rewarding them for that type of content, and because Facebook directly pays for it. As we have seen with other types of topical AI-generated content on Facebook, like videos about Palestinian suffering in Gaza or natural disasters around the world, many people simply do not care if the videos are real. And the existence of these types of videos serves to inoculate people against the actual horrors that ICE is carrying out. It gives people the chance to claim that any video is AI generated, and it litters social media with garbage, making real videos and real information harder to find.



An early, crude video posted by the account

Meta did not immediately respond to a request for comment about whether the account violates its content standards, but the company has seemingly staked its present and future on allowing bizarre and often horrifying AI-generated content to proliferate on the platform. AI-generated content about immigrants is not new; in the leadup to last year’s presidential debate, Donald Trump and his allies began sharing AI-generated content about Haitian immigrants who Trump baselessly claimed were eating dogs and cats in Ohio.

In January, immediately before Trump was inaugurated, Meta changed its content moderation rules to explicitly allow for the dehumanization of immigrants because it argued that its previous policies banning this were “out of touch with mainstream discourse.” Phrases and content that are now explicitly allowed on Meta platforms include “Immigrants are grubby, filthy pieces of shit,” “Mexican immigrants are trash!” and “Migrants are no better than vomit,” according to documents obtained and published by The Intercept. After those changes were announced, content moderation experts told us that Meta was “opening up their platform to accept harmful rhetoric and mold public opinion into accepting the Trump administration’s plans to deport and separate families.”




Newly released documents provide more details about ICE's plan to use bounty hunters and private investigators to find the location of undocumented immigrants.#ICE #bountyhunters


ICE Plans to Spend $180 Million on Bounty Hunters to Stalk Immigrants


Immigration and Customs Enforcement (ICE) is allocating as much as $180 million to pay bounty hunters and private investigators who verify the address and location of undocumented people ICE wishes to detain, including with physical surveillance, according to procurement records reviewed by 404 Media.

The documents provide more details about ICE’s plan to enlist the private sector to find deportation targets. In October The Intercept reported on ICE’s intention to use bounty hunters or skip tracers—an industry that often works on insurance fraud or tries to find people who skipped bail. The new documents now put a clear dollar amount on the scheme to essentially use private investigators to find the locations of undocumented immigrants.

💡
Do you know anything else about this plan? Are you a private investigator or skip tracer who plans to do this work? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.





OpenAI’s guardrails against copyright infringement are falling for the oldest trick in the book.#News #AI #OpenAI #Sora


OpenAI Can’t Fix Sora’s Copyright Infringement Problem Because It Was Built With Stolen Content


OpenAI’s video generator Sora 2 is still producing copyright infringing content featuring Nintendo characters and the likeness of real people, despite the company’s attempt to stop users from making such videos. OpenAI updated Sora 2 shortly after launch to detect videos featuring copyright infringing content, but 404 Media’s testing found that it’s easy to circumvent those guardrails with the same tricks that have worked on other AI generators.

The flaw in OpenAI’s attempt to stop users from generating videos of Nintendo and popular cartoon characters exposes a fundamental problem with most generative AI tools: it is extremely difficult to completely stop users from recreating any kind of content that’s in the training data, and OpenAI can’t remove the copyrighted content from Sora 2’s training data because it couldn’t exist without it.

Shortly after Sora 2 was released in late September, we reported on how users turned it into a copyright infringement machine with an endless stream of videos like Pikachu shoplifting from a CVS and Spongebob Squarepants at a Nazi rally. Companies like Nintendo and Paramount were obviously not thrilled to see their beloved cartoons committing crimes without getting paid for it. Initially, OpenAI’s policy allowed users to generate copyrighted material and required the copyright holder to opt out; the company quickly switched to an “opt-in” policy, which prevents users from generating copyrighted material unless the copyright holder actively allows it. The change immediately resulted in a meltdown among Sora 2 users, who complained OpenAI no longer allowed them to make fun videos featuring copyrighted characters or the likeness of some real people.

This is why if you give Sora 2 the prompt “Animal Crossing gameplay,” it will not generate a video and instead say “This content may violate our guardrails concerning similarity to third-party content.” However, when I gave it the prompt “Title screen and gameplay of the game called ‘crossing aminal’ 2017,” it generated an accurate recreation of Nintendo’s Animal Crossing New Leaf for the Nintendo 3DS.

Sora 2 also refused to generate videos for prompts featuring the Fox cartoon American Dad, but it did generate a clip that looks like it was taken directly from the show, including their recognizable voice acting, when given this prompt: “blue suit dad big chin says ‘good morning family, I wish you a good slop’, son and daughter and grey alien say ‘slop slop’, adult animation animation American town, 2d animation.”

The same trick also appears to circumvent OpenAI’s guardrails against recreating the likeness of real people. Sora 2 refused to generate a video of “Hasan Piker on stream,” but it did generate a video of “Twitch streamer talking about politics, piker sahan.” The person in the generated video didn’t look exactly like Hasan, but he has similar hair, facial hair, the same glasses, and a similar voice and background.

A user who flagged this bypass to me, who wished to remain anonymous because they didn’t want OpenAI to cut off their access to Sora, also shared Sora generated videos of South Park, Spongebob Squarepants, and Family Guy.

OpenAI did not respond to a request for comment.

There are several ways to moderate generative AI tools, but the simplest and cheapest method is to refuse to generate prompts that include certain keywords. For example, many AI image generators stop people from generating nonconsensual nude images by refusing to generate prompts that include the names of celebrities or certain words referencing nudity or sex acts. However, this method is prone to failure because users find prompts that allude to the image or video they want to generate without using any of those banned words. The most notable example of this made headlines in 2024 after an AI-generated nude image of Taylor Swift went viral on X. 404 Media found that the image was generated with Microsoft’s AI image generator, Designer, and that users managed to generate the image by misspelling Swift’s name or using nicknames she’s known by, and describing sex acts without using any explicit terms.
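To make the failure mode concrete, keyword-based filtering of this kind can be sketched in a few lines. The `BLOCKED_TERMS` list and `is_blocked` helper below are hypothetical illustrations of the general technique, not OpenAI's actual code, which is not public:

```python
# Hypothetical sketch of keyword-blocklist prompt moderation.
# The terms and helper below are illustrative, not any vendor's real filter.
BLOCKED_TERMS = {"animal crossing", "american dad", "hasan piker"}

def is_blocked(prompt: str) -> bool:
    """Reject the prompt if any banned phrase appears verbatim."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

# An exact match is caught...
print(is_blocked("Animal Crossing gameplay"))       # True
# ...but a trivial misspelling sails through, even though the model
# still understands what the user is asking for.
print(is_blocked("gameplay of 'crossing aminal'"))  # False
```

Because the check operates on the literal text of the prompt rather than its meaning, any spelling variation, nickname, or roundabout description the model can still interpret will slip past it.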

Since then, we’ve seen example after example of generative AI guardrails being circumvented with the same method. We don’t know exactly how OpenAI is moderating Sora 2, but at least for now, the world’s leading AI company’s moderation efforts are bested by a simple and well-established bypass method. Like with these other tools, bypassing Sora’s content guardrails has become something of a game to people online. Many of the videos posted on the r/SoraAI subreddit are of “jailbreaks” that bypass Sora’s content filters, along with the prompts used to do so. And Sora’s “For You” algorithm is still regularly serving up content that probably should be caught by its filters; in 30 seconds of scrolling we came across many videos of Tupac, Kobe Bryant, JuiceWrld, and DMX rapping, which has become a meme on the service.

It’s possible OpenAI will get a handle on the problem soon. It can build a more comprehensive list of banned phrases and do more post-generation image detection, which is a more expensive but effective method for preventing people from creating certain types of content. But all these efforts are poor attempts to distract from the massive, unprecedented amount of copyrighted content that has already been stolen, and that Sora can’t exist without. This is not an extreme AI skeptic position. The biggest AI companies in the world have admitted that they need this copyrighted content, and that they can’t pay for it.

The reason OpenAI and other AI companies have such a hard time preventing users from generating certain types of content once users realize it’s possible is that the content already exists in the training data. An AI image generator is only able to produce a nude image because there’s a ton of nudity in its training data. It can only produce the likeness of Taylor Swift because her images are in the training data. And Sora can only make videos of Animal Crossing because there are Animal Crossing gameplay videos in its training data.

For OpenAI to actually stop the copyright infringement it needs to make its Sora 2 model “unlearn” copyrighted content, which is incredibly expensive and complicated. It would require removing all that content from the training data and retraining the model. Even if OpenAI wanted to do that, it probably couldn’t because that content makes Sora function. OpenAI might improve its current moderation to the point where people are no longer able to generate videos of Family Guy, but the Family Guy episodes and other copyrighted content in its training data are still enabling it to produce every other generated video. Even when the generated video isn’t recognizably lifting from someone else’s work, that’s what it’s doing. There’s literally nothing else there. It’s just other people’s stuff.




The newly-formed, first of its kind Adult Studio Alliance is founded by major porn companies including Aylo, Dorcel, ERIKALUST, Gamma Entertainment, Mile High Media and Ricky’s Room, and establishes a code of conduct for studios.#porn


Major Porn Studios Join Forces to Establish Industry ‘Code of Conduct’


Six of the biggest porn studios in the world, including industry giant and Pornhub parent company Aylo, announced Wednesday they have formed a first-of-its-kind coalition called the Adult Studio Alliance (ASA). The alliance’s purpose is to “contribute to a safe, healthy, dignified, and respectful adult industry for performers,” the ASA told 404 Media.

“This alliance is intended to unite professionals creating adult content (from studios to crews to performers) under a common set of values and guidelines. In sharing our common standards, we hope to contribute to a safe, healthy, dignified, and respectful adult industry for performers,” a spokesperson for ASA told 404 Media in an email. “As a diverse group of studios producing a large volume and variety of adult content, we believe it’s key to promote best practices on all our scenes. We all come from different studios, but we share the belief that all performers are entitled to comfort and safety on set.”

The founding members include Aylo, Dorcel, ERIKALUST, Gamma Entertainment, Mile High Media and Ricky’s Room. Aylo owns some of the biggest platforms and porn studios in the industry, including Brazzers, Reality Kings, Digital Playground and more.

In a press release Wednesday, the ASA said its primary mission is “to publish and adhere to a comprehensive Code of Conduct, providing a structured framework for directors, producers, and talent to ensure the safest possible sets and consistent industry best practices.” The ASA’s code of conduct addresses performers’ rights to consent to the types of scenes they’ll shoot, their scene partners including extras, sexual acts, script and creative documents, the length of the shoot day, location, remuneration and conditions, and any other rights involved in their agreement with the studio.

The founding studios say they have signed agreements to adhere to the ASA’s code of conduct, but the ASA “encourages all studios, members or not, to adopt and adhere to these guidelines to foster a safer, more respectful, and more professional adult industry,” the spokesperson said.

“All performers have the right to be treated with professional respect and dignity, free from harassment of any kind,” the code states. “They should be: Able to refuse, at any time, any act, even if previously agreed upon; Able to visually confirm their partner’s STI test status on set before any sexual performance; Provided water, snacks, meals, breaks, and privacy as needed; Provided all necessary sexual health and hygienic materials needed to perform; Paid their agreed-upon rate for the date of production.”

The code also outlines rights and expectations for third-party producers and crew members, including verifying performers’ ages, ensuring an environment “free of harassment of any kind (mental, physical or sexual),” and “never using their influence or access to the studio to pressure performers or promise work.” Agencies and talent agents are also addressed in the code of conduct: “Agencies should represent and protect performers, inform them very clearly of the specific requirements of pornographic performances,” the code states. “They must inform performers of their rights and duties and legitimate expectations, with no expectation of sexual contact with agency staff, reasonably limited contract terms (within industry standard range of 1 year), and no punitive buyouts for performers who choose to leave the agency.”

A need for more autonomy over one’s working conditions spurred the rise of the independent adult content creator economy in the last 10 years, as more performers moved away from studio work—which often dictates workers’ hours, physical location, and ownership rights to their performances, and can be sporadic—to models like webcamming and subscription platforms like OnlyFans. Porn is legal in the U.S. but is still a heavily stigmatized career, and performers have reported that legislation like 2018’s Fight Online Sex Trafficking Act has made their livelihoods more precarious, even when working with studios.

In 2020, as Hollywood reckoned with allegations of abuse and coercion against the most powerful people in the entertainment industry, multiple performers came forward with their own stories of physical and mental abuse on-set. The power dynamic present in mainstream acting careers also exists in porn, with the added stigma of sex work: adult performers, like professionals in mainstream entertainment and many other industries, might feel they risk being ostracized within their industry for speaking out about mistreatment, but they may also fear fueling decades-old anti-porn campaigns and their harmful rhetoric.

Many studios have previously established their own codes of conduct, including Gamma Entertainment-owned Adult Time, which published a guide to “what to expect on an Adult Time set” in 2023, and Kink, which published its shooting protocols, consent documents and checklists in 2019. There are also several talent-focused rights groups, like the Free Speech Coalition, that have operated with performer and crew wellbeing guidelines in place for years.



“The landscape for adult production has expanded rapidly over the past few years, so it's encouraging to see bigger studios codify industry best practices,” Mike Stabile, director of public policy at the Free Speech Coalition, told 404 Media. Stabile noted that the needs and requirements of productions and performers vary; independent content creators working with other indie creators might not need or have the resources to hire an intimacy coordinator on each shoot, for example, or a small fetish studio that doesn’t engage in fluid exchange might not need to adhere to testing. But “it sets a bar for what performers can and should expect in production, and provides a framework for understanding one's rights on set,” he said.
“It's incredibly powerful because it isn't just one studio or one group, it's a collection of some of the most influential leaders in adult production,” Stabile said. “While these practices aren't entirely new, by publishing guidelines they're creating a broad system of accountability. Whether or not other studios join and sign-on, I expect we'll see broader adoption of these protocols at all levels.”

“I believe strong production standards are the foundation of a safe and respectful and successful industry, and I’ve always believed performers deserve nothing less,” performer Cherie DeVille said in the ASA press release. “It's powerful to see these top studios come together with the shared goal of ensuring performer wellness remains a top priority.”




A Washington judge said images taken by Flock cameras are "not exempt from disclosure" in public record requests.#Flock


Judge Rules Flock Surveillance Images Are Public Records That Can Be Requested By Anyone


A judge in Washington has ruled that police images taken by Flock’s AI license plate-scanning cameras are public records that can be requested as part of normal public records requests. The decision highlights the sheer scale of the technology-fueled surveillance state in the United States, and shows that at least in some cases, police cannot withhold the data collected by their surveillance systems.

In a ruling last week, Judge Elizabeth Neidzwski ruled that “the Flock images generated by the Flock cameras located in Stanwood and Sedro-Woolley [Washington] are public records under the Washington State Public Records Act,” that they are “not exempt from disclosure,” and that “an agency does not have to possess a record for that record to be subject to the Public Records Act.”

She further found that “Flock camera images are created and used to further a governmental purpose” and that the images on them are public records because they were paid for by taxpayers. Despite this, the records that were requested as part of the case will not be released because the city automatically deleted them after 30 days. Local media in Washington first reported on the case; 404 Media bought Washington State court records to report the specifics of the case in more detail.
A screenshot from the judge's decision
Flock’s automated license plate reader (ALPR) cameras are used in thousands of communities around the United States. They passively take between six and 12 timestamped images of each car that passes by, allowing the company to make a detailed database of where certain cars (and by extension, people) are driving in those communities. 404 Media has reported extensively on Flock, and has highlighted that its cameras have been accessed by the Department of Homeland Security and by local police working with DHS on immigration cases. Last month, cops in Colorado used data from Flock cameras to incorrectly accuse an innocent woman of theft based on her car’s movements.

The case came in response to a public records request made by Jose Rodriguez, who in April sought all of the images taken by the city’s Flock cameras between the hours of 5 and 6 p.m. on March 30 (he later narrowed this request to only ask for images taken by a single camera in a half-hour period). The city argued that Rodriguez would have to request them directly from Flock, a private company not subject to public records laws. But Flock’s contracts with cities say that the city owns the images taken on their cameras. The city eventually took Rodriguez to court. In the court proceedings, the city made a series of arguments claiming that Flock images couldn’t be released; the judge’s decision rebuked every one of them.

“I wanted the records to see if they would release them to me, in hopes that if they were public records it would raise awareness to all the communities that have the Flock cameras that they may be public record and could be used by stalkers, or burglars scoping out a house, or other ways someone with bad intentions may use them. My goal was to try getting these cameras taken down by the cities that put them up,” Rodriguez told 404 Media. “In order to show that the records were public records and that they don’t qualify as exempt under the Washington public records act we cited the contract, and I made requests to both cities requesting their exterior normal surveillance camera footage from their City Hall and police station that recorded the streets and parking lots with vehicles driving by and license plates viewable, which is what the Flock images also capture. Both cities provided me with the surveillance videos I requested without issue but denied the Flock images, so my attorney used that to show how they contradict themselves.”

"it is pretty abhorrent that the city tried to make all of these arguments in the first place"


The case highlights the lengths that police departments and cities are willing to go to in order to prevent the release of what they incorrectly perceive to be private information owned by their surveillance vendors (in this case, Flock). Stanwood’s attorneys first argued that the records were Flock’s, not the city’s, a claim clearly contradicted by the contract, which states “customer [Stanwood] shall retain whatever legally cognizable right, title, and interest in Customer Generated Data … Flock does not own and shall not sell Customer Generated Data.” The attorneys then argued that images taken by Flock cameras do not become requestable records until they are directly accessed and downloaded by police on Flock’s customer portal: “the data existing in the cloud system … does not exist anywhere in the City’s files as a record.” The city’s lawyers also argued that Flock footage is police “intelligence information” that should be exempt from public records requests, and that “there are privacy concerns with making ALPR data accessible to the public.”

“Honestly, it is pretty abhorrent that the city tried to make all of these arguments in the first place, but it’s great that the court reaffirmed that these are public records,” Beryl Lipton, senior investigative researcher at the Electronic Frontier Foundation, told 404 Media in a phone interview. “So much of the surveillance law enforcement does is facilitated by third party vendors and that information is stored on their external servers. So for the court to start restricting access to the public because law enforcement has started using these types of systems would have been horribly detrimental to the public’s right to know.”

In affidavits filed with the court, police argued that “if the public could access the Flock Safety System by making Public Records Act requests, it would allow nefarious actors the ability to track private persons and undermine the effectiveness of the system.” The judge rejected every single one of these arguments.

Both Lipton and Timothy Hall, Rodriguez’s attorney, said that, to the contrary, Rodriguez’s request actually shows how pervasive mass surveillance systems are in society, and that sharing this information will help communities make better informed decisions about whether they want to use technology like Flock at all.

“We do think there should be redactions for certain privacy reasons, but we absolutely think that as a whole, these should be considered public records,” Lipton said. “This is part of the whole problem: These police departments and these companies are operating under the impression that everything that happens on the street is fair game, and that their systems are not a privacy violation. But then when it comes to the public wanting to know, they say ‘this is a privacy violation,’ and I think that’s them trying to have it both ways.”

Hall said that Rodriguez’s case, reporting by 404 Media, and a recent study by the University of Washington about Flock data being available to immigration enforcement officers have started a conversation in the state about Flock in general.

“Now because of the Washington State Public Records Act, people can be aware of all the information these cameras are collecting. Now there’s a discussion going on: Do we even want these cameras? Well, they’re collecting way more information than we realized,” Hall told 404 Media in a phone call. “A lot of people are now realizing there’s a ton of information being collected here. This has now opened up a massive discussion which was ultimately the goal.”

A Flock spokesperson told 404 Media that the company believes that the court simply reaffirmed what the law already was. The city of Stanwood did not respond to a request for comment.

Rodriguez said that even after fighting this case, he is not going to get the images he originally requested, because the city automatically deleted them after 30 days, even though he had filed his request. He can now file a new request for more recent images, however.

“I won’t be getting the records, even though I win the case (they could also appeal it and continue the case) no matter what I won’t get those records I requested because they no longer exist,” Rodriguez said. “The cities both allowed the records to be automatically deleted after I submitted my records requests and while they decided to have their legal council review my request. So they no longer have the records and can not provide them to me even though they were declared to be public records.”




A fight against a massive AI data center; how people are 3D-printing whistles to fight ICE; and AI's war on knowledge.#Podcast


Podcast: Inside a Small Town's Fight Against a $1.2 Billion AI Datacenter


We start with Matthew Gault’s dive into a battle between a small town and the construction of a massive datacenter for America’s nuclear weapon scientists. After the break, Joseph explains why people are 3D-printing whistles in Chicago. In the subscribers-only section, Jason zooms out and tells us what librarians are seeing with AI and tech, and how that is impacting their work and knowledge more broadly.
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.
6:03 - ⁠Our New FOIA Forum! 11/19, 1PM ET⁠

7:50 - ⁠A Small Town Is Fighting a $1.2 Billion AI Datacenter for America's Nuclear Weapon Scientists⁠

12:27 - ⁠'A Black Hole of Energy Use': Meta's Massive AI Data Center Is Stressing Out a Louisiana Community⁠

21:09 - ⁠'House of Dynamite' Is About the Zoom Call that Ends the World⁠

30:35 - ⁠The Latest Defense Against ICE: 3D-Printed Whistles⁠

SUBSCRIBER'S STORY: ⁠AI Is Supercharging the War on Libraries, Education, and Human Knowledge⁠




Waves within Earth’s mantle can carry traces of past continents across hundreds of miles, explaining why their chemical fingerprints appear in unlikely places.#TheAbstract


Remnants of Lost Continents Are Everywhere. Now, We Finally Know Why.


🌘
Subscribe to 404 Media to get The Abstract, our newsletter about the most exciting and mind-boggling science news and studies of the week.

Tiny remnants of long-lost continents that vanished many millions of years ago are sprinkled around the world, including on remote island chains and seamounts, a mystery that has puzzled scientists for years.

Now, a team has discovered a mechanism that can explain how this continental detritus ends up resurfacing in unexpected places, according to a study published on Tuesday in Nature Geoscience.

When continents are subducted into Earth’s mantle, the layer beneath the planet’s crust, waves can form that scrape off rocky material and sweep it across hundreds of miles to new locations. This “mantle wave” mechanism fills in a gap in our understanding of how lost continents are metabolized through our ever-shifting planet.

“There are these seamount chains where volcanic activity has erupted in the middle of the ocean,” said Sascha Brune, a professor at the GFZ Helmholtz Centre for Geosciences and University of Potsdam, in a call with 404 Media. “Geochemists go there, they drill, they take samples, and they do their isotope analysis, which is a very fancy geochemical analysis that gives you small elements and isotopes which come up with something like a ‘taste.’”

“Many of these ocean islands have a taste that is surprisingly similar to the continents, where the isotope ratio is similar to what you would expect from continents and sediments,” he continued. “And there has always been the question: why is this the case? Where does it come from?”

These continental sprinkles are sometimes linked to mantle plumes, which are hot columns of gooey rock that erupt from the deep mantle. Plumes bring material from ancient landmasses, which have been stuck in the mantle for eons, back to the light of day. Mantle plumes are the source of key hot spots like Hawai’i and Iceland, but there are plenty of locations with enriched continental material that are not associated with plumes—or any other known continental recycling mechanisms.

The idea of a mantle wave has emerged from a series of revelations made by lead author Tom Gernon, a professor at the University of Southampton, along with his colleagues at GFZ, including Brune. Gernon previously led a 2023 study that identified evidence of similar dynamics occurring within continents. By studying patterns in the distribution of diamonds across South Africa, the researchers showed that slow cyclical motions in the mantle dislodge chunks off the keel of landmasses as they plunge into the mantle. Their new study confirms that these waves can also explain how the elemental residue of the supercontinent Gondwana, which broke up over 100 million years ago, resurfaced in seamounts across the Indian Ocean and other locations.

In other words, the ashes of dead continents are scattered across extant landmasses following long journeys through the mantle. Though it’s not possible to link these small traces back to specific past continents or time periods, Brune hopes that researchers will be able to extract new insights about Earth’s roiling past from the clues embedded in the ground under our feet.

“What we are saying now is that there is another element, with this kind of pollution of continental material in the upper mantle,” Brune said. “It is not replacing what was said before; it is just complementing it in a way where we don't need plumes everywhere. There are some regions that we know are not plume-related, because the temperatures are not high enough and the isotopes don't look like plume-affected. And for those regions, this new mechanism can explain things that we haven't explained before.”

“We have seen that there's quite a lot of evidence that supports our hypothesis, so it would be interesting to go to other places and investigate this a bit more in detail,” he concluded.

Update: This story has been updated to note that Tom Gernon was a lead author on the paper.





Software engineer Hector Dearman built a zoomable map of every issue of BYTE magazine.#archives #magazines #publishing #byte


Visualize All 23 Years of BYTE Magazine in All Its Glory, All at Once


Fifty years ago—almost two decades before WIRED, seven years ahead of PCMag, just a few years after the first email ever passed through the internet and with the World Wide Web still 14 years away—there was BYTE. Now, you can see the tech magazine's entire run at once. Software engineer Hector Dearman recently released a visualizer to take in all of BYTE’s 287 issues as one giant zoomable map.

The physical BYTE magazine published monthly from September 1975 until July 1998, for $10 a month. Personal computer kits were a nascent market, with the first microcomputers having just launched a few years prior. BYTE was founded on the idea that the budding microcomputing community would be well-served by a publication that could help them through it.

This post is for subscribers only


Become a member to get access to all content
Subscribe now




Public records show DHS is deploying the "Homeland Security Information Network" at college protests and football games.#FOIA


DHS Is Deploying a Powerful Surveillance Tool at College Football Games


A version of this article was previously published on FOIAball, a newsletter reporting on college football and public records. You can learn more about FOIAball and subscribe here.

Last weekend, Charleston’s tiny private military academy, the Citadel, traveled to Ole Miss.

This game didn’t have quite the same cachet as the Rebels' Week 11 opponent this time last year, when a one-loss Georgia went to Oxford.

A showdown of ranked SEC opponents in early November 2024 had all eyes trained on Vaught-Hemingway Stadium.

Including those of the surveillance state.

According to documents obtained by FOIAball, the Ole Miss-Georgia matchup was one of at least two games last year where the school used a little-known Department of Homeland Security information-sharing platform to keep a watchful eye on attendees.

The platform, called the Homeland Security Information Network (HSIN), is a centralized hub for the myriad law enforcement agencies involved with security at big events.
CREDIT: Ole Miss/Georgia EAP, obtained by FOIAball
According to an Event Action Plan obtained by FOIAball, at least 11 different departments were on the ground at the Ole Miss-Georgia game, from Ole Miss campus police to a military rapid-response team.

HSIN is generally depicted as a secure channel to facilitate communication between various entities.

In a video celebrating its 20th anniversary, a former HSIN employee hammered home that stance. “When our communities are connected, our country is indeed safer,” they said.

In reality, HSIN is an integral part of the vast surveillance arm of the U.S. government.

Left unchecked since 9/11, supercharged by technological innovation, HSIN can subject any crowd to almost constant monitoring, looping in live footage from CCTV cameras, from drones flying overhead, and from police body cams and cell phones.

HSIN has worked with private businesses to ensure access to cameras across cities; it collects, stores, and mines vast amounts of personal data; and it has been used to facilitate facial recognition searches from companies like Clearview AI.

It’s one of the least-reported surveillance networks in the country.

And it's been building this platform on the back of college football.

Since 9/11, HSIN has become a widely used tool.

A recent Inspector General report found over 55,000 active accounts using HSIN, ranging from federal employees to local police agencies to nebulous international stakeholders.

The platforms host what’s called SBU, sensitive but unclassified information, including threat assessments culled from media monitoring.

According to a privacy impact study from 2006, HSIN was already maintaining a database of suspicious activities and mining those for patterns.

"The HSIN Database can be mined in a manner that identifies potential threats to the homeland or trends requiring further analysis,” it noted.

In an updated memo from 2012 discussing whose personal information HSIN can collect and disseminate, the list includes the blanket category of “individuals who may pose a threat to the United States.”

A 2023 DHS “Year in Review” found that HSIN averaged over 150,000 logins per month.

Its Connect platform, which coordinates security and responses at major events, was utilized over 500 times a day.

HSIN operated at the Boston Marathon, Lollapalooza, the World Series, and the presidential primary debates. It has also been used at every Super Bowl for the last dozen years.

DHS is quick to tout the capabilities of HSIN in internal communications reviewed by FOIAball.

In doing so, it reveals the growth of its surveillance scope. In documents from 2018, DHS makes no mention of live video surveillance.

But a 2019 annual review said that HSIN used private firms to help wrangle cameras at commercial businesses around Minneapolis, which hosted the Final Four that year.

“Public safety partners use HSIN Connect to share live video streams from stationary cameras as well as from mobile phones,” it said. “[HSIN communities such as] the Minneapolis Downtown Security Executive Group works with private sector firms to share live video from commercial businesses’ security cameras, providing a more comprehensive operating picture and greater situational awareness in the downtown area.”

And the platform has made its way to college campuses.

Records obtained by FOIAball show how pervasive this technology has become on college campuses, for everything from football games to pro-Palestinian protests.

In November 2023, students at Ohio State University held several protests against Israel’s war in Gaza. At one, over 100 protesters blocked the entrance to the school president’s office.

A report that year from DHS revealed the protesters were being watched in real time from a central command center.

Under the heading "Supporting Operation Excellence," DHS said the school used HSIN to surveil protesters, integrating the school’s closed-circuit cameras to live stream footage to HSIN Connect.

“Ohio State University has elevated campus security by integrating its closed-circuit camera system with HSIN Connect,” it said. “This collaboration creates a real-time Common Operating Picture for swift information sharing, enhancing OSU’s ability to monitor campus events and prioritize community safety.”

“HSIN Connect proved especially effective during on-campus protests, expanding OSU’s security capabilities,” the school’s director of emergency management told DHS. “HSIN Connect has opened new avenues for us in on-campus security.”

While it opened new avenues, the platform already had a well-established relationship with the school.

According to an internal DHS newsletter from January 2016, HSIN was utilized at every single Buckeyes home game in 2015.

“HSIN was a go-to resource for game days throughout the 2015 season,” it said.

It highlighted that data was being passed along and analyzed by DHS officials.

The newsletter also revealed HSINs were at College Football Playoff games that year and have been in years since. There was no mention of video surveillance at Ohio State back in 2015. But in 2019, that capability was tested at Georgia Tech.

There, police used “HSIN Connect to share live video streams with public safety partners.”

A 2019 internal newsletter quoted a Georgia Tech police officer about the use of real-time video surveillance on game days, both from stationary cameras and cell phones.

“The mobile app for HSIN Connect also allows officials to provide multiple, simultaneous live video streams back to our Operations Center across a secure platform,” the department said.

Ohio State told FOIAball that it no longer uses HSIN for events or incidents. However, it declined to answer questions about surveilling protesters or football games.

Ohio State’s records department said that it did not have any documents relating to the use of HSIN or sharing video feeds with DHS.

Georgia Tech’s records office told FOIAball that HSIN had not been used in years and claimed it was “only used as a tool to share screens internally.” Its communications team did not respond to a request to clarify that comment.

Years later, DHS had eyes both on the ground and in the sky at college football.

According to the 2023 annual review, HSIN Connect operated during University of Central Florida home games that season. There, both security camera and drone detection system feeds were looped into the platform in real-time.

DHS said that the "success at UCF's football games hints at a broader application in emergency management."

HSIN has in recent years been hooked into facial recognition systems.

A 2024 report from the U.S. Commission on Civil Rights found that the U.S. Marshals were granted access to HSIN, where they requested "indirect facial recognition searches through state and local entities" using Clearview AI.

Which brings us to the Egg Bowl—the annual rivalry game between Ole Miss and Mississippi State.

FOIAball learned about the presence of HSIN at Ole Miss through a records request to the city’s police department. It shared Event Action Plans for the Rebels’ games on Nov. 9, 2024 against Georgia and Nov. 30, 2024 against Mississippi State.

It’s unclear how these partnerships are forged.

In videos discussing HSIN, DHS officials have highlighted their outreach to law enforcement, talking about how they want agencies onboarded and trained on the platform. No schools mentioned in this article answered questions about how their relationship with DHS started.

The Event Action Plan provides a fascinating level of detail that shows what goes into security planning for a college football game, from operations meetings that start on Tuesday to safety debriefs the following Monday.

Its timeline of events discusses when Ole Miss’s Vaught-Hemingway Stadium is locked down and when security sweeps are conducted. Maps detail where students congregate beforehand and where security guards are posted during games.

The document includes contingency plans for extreme heat, lightning, active threats, and protesters. It also includes specific scripts for public service announcers to read in the event of any of those incidents.

It shows at least 11 different law enforcement agencies are on the ground on game days, from school cops to state police.

They even have the U.S. military on call. The 47th Civil Support Team, based out of Jackson Air National Guard Base, is ready to respond to a chemical, biological, or nuclear attack.

All those agencies are steered via the document to the HSIN platform.

Under a section on communications, it lists the HSIN Sitroom, which is “Available to all partners and stakeholders via computer & cell phone.”

The document includes a link to an HSIN Connect page.

It uses Eli Manning as an example of how to log in.

“Ole Miss Emergency Management - Log in as a Guest and use a conventional naming convention such as: ‘Eli Manning - Athletics.’”

The document notes that the HSIN hosts sensitive Personally Identifiable Information (PII) and Threat Analysis Documents.

“Access is granted on a need-to-know basis, users will need to be approved prior to entry into the SitRoom.”

“The general public and general University Community is not permitted to enter the online SitRoom,” it adds. “All SitRooms contain operationally sensitive information and PII, therefore access must be granted by the ‘Host’.”

It details what can be accessed in the HSIN, such as a chat window for relaying information.

It includes a section on Threat Analysis, which DHS says is conducted through large-scale media monitoring.

The document does not detail whether the HSIN used at Ole Miss has access to surveillance cameras across campus.

But that may not be something explicitly stated in documents such as these.

Like Ohio State, UCF told FOIAball that it had no memoranda of understanding or documentation about providing access to video feeds to HSINs, despite DHS acknowledging those streams were shared. Ole Miss’ records department also did not provide any documents on what campus cameras may have been shared with DHS.

While one might assume the feeds go dark after the game is over, there exists the very real possibility that by being tapped in once, DHS can easily access them again.

“I’m worried about mission creep,” Matthew Guariglia, a senior policy analyst at the Electronic Frontier Foundation, told FOIAball. “These arrangements are made for very specific purposes. But they could become the apparatus of much greater state surveillance.”

For Ole Miss, its game against Georgia went off without any major incidents.

Well, save for one.

During the second quarter, a squirrel jumped onto the field, and play had to be stopped.

In the EAP, there was no announcer script for handling a live animal interruption.


#FOIA


Chicagoans are making, sharing, and printing designs for whistles that can warn people when ICE is in the area. The goal is to “prevent as many people from being kidnapped as possible.”#ICE #News


The Latest Defense Against ICE: 3D-Printed Whistles


Chicagoans have turned to a novel piece of tech that marries the old-school with the new to warn their communities about the presence of ICE officials: 3D-printed whistles.

The goal is to “prevent as many people from being kidnapped as possible,” Aaron Tsui, an activist with the Chicago-based organization Cycling Solidarity who has been printing whistles, told 404 Media. “Whistles are an easy way to bring awareness for when ICE is in the area, printing out the whistles is something simple that I can do in order to help bring awareness.”

Over the last couple of months, ICE has especially focused on Chicago as part of Operation Midway Blitz. During that time, Department of Homeland Security (DHS) personnel have shot a religious leader in the head, repeatedly violated court orders limiting the use of force, and even entered a daycare facility to detain someone.

💡
Do you know anything else about this? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

3D printers have been around for years, with hobbyists using them for everything from car parts to kids’ toys. In media articles they are probably most commonly associated with 3D-printed firearms.

One of the main attractions of 3D printers is that they squarely put the means of production into the hands of essentially anyone who is able to buy or access a printer. There’s no need to set up a complex supply chain of material providers or manufacturers. No worry about a store refusing to sell you an item for whatever reason. Instead, users just print at home, and can do so very quickly, sometimes in a matter of minutes. The price of printers has decreased dramatically over the last 10 years, with some costing a few hundred dollars.



A video of the process from Aaron Tsui.

People who are printing whistles in Chicago either create their own design or are given or download a design someone else made. Resident Justin Schuh made his own. That design includes instructions on how to best use the whistle—three short blasts to signal ICE is nearby, and three long ones for a “code red.” The whistle also includes the phone number for the Illinois Coalition for Immigrant & Refugee Rights (ICIRR) hotline, which people can call to connect with an immigration attorney or receive other assistance. Schuh said he didn’t know if anyone else had printed his design specifically, but he said he has “designed and printed some different variations, when someone local has asked for something specific to their group.” The Printables page for Schuh’s design says it has been downloaded nearly two dozen times.

This post is for subscribers only


Become a member to get access to all content
Subscribe now


#News #ice


A man running a Danish copycat of the r/WatchItForThePlot subreddit was convicted of posting no fewer than 347 nude scenes from films and downloading over 25 terabytes of data from copyrighted works.#Reddit #copyright #denmark


Danish Redditor Charged for Posting Nude Scenes from Films


In a landmark case for Danish courts and internationally, a man was sentenced to seven months’ suspended imprisonment and 120 hours of community service for posting nude scenes from copyrighted films.

He was convicted of “gross violations of copyright, including violating the right of publicity of more than 100 aggrieved female actors relating to their artistic integrity,” Danish police reported Monday.

The man, a 40-year-old from Denmark who was a prolific Redditor under the username “KlammereFyr” (which translates to “NastierGuy”) was arrested and charged with copyright infringement in September 2024 by Denmark’s National Unit for Serious Crime (NSK).

In a press release, NSK wrote that KlammereFyr was a moderator for the subreddit r/SeDetForPlottet, which is a Danish version of the massive subreddit r/WatchItForThePlot, where people post clips of nude scenes—almost always featuring female actors—out of context. NSK said that KlammereFyr shared “no less than 347 nude scenes, which were played no less than 4.2 million times in total” in the subreddit. He was also convicted of having shared and downloaded “over 25 terabytes of data with copyrighted works via the file sharing service superbits.org without the consent of the copyright holders,” and of posting stolen images to the porn platform RedGifs.

The subreddit was set to private after media coverage about actors’ rights groups denouncing the practice, Torrent Freak reported last year. The subreddit is still invite-only, and a message says, “Denne subreddit er lukket ned, og vil ikke blive genåbnet” (“This subreddit has been shut down and will not be reopened.”)
According to Danish news outlet DR, the Danish Actors' Association and the Rights Alliance reported KlammereFyr to the police in 2023, “on behalf of the Danish Actors' Association, Danish Film Directors and the affected film producers DR and TV 2.” At the time, Danish actor Andrea Vagn Jensen, who had nude clips of her in movie scenes shared online, told DR: “It’s just abuse. You deliver something for the production and the story, and then you end up being molested that way.”

“Illegal sharing of films and series is never harmless, but in this case, we have seen the far-reaching consequences of scenes being taken out of context and placed in a pornographic context,” Maria Fredenslund, CEO of Rights Alliance, wrote in a blog post after KlammereFyr pleaded guilty last week. “This is both violent and very serious for the actors and producers who have been affected. I am therefore pleased that copyright law also protects works in practice, not least the actors’ right of respect, and provides the opportunity for redress after such serious violations of their professional integrity and person. With artificial intelligence and the ease of creating deepfakes, it is becoming easier to produce and share offensive content. This is another reason why it is important for the authorities to help emphasize the seriousness of this type of violation.”

A recently proposed bill in Denmark would amend the country’s copyright laws to protect the rights of ordinary people as well as public figures to their own likenesses, even if they’re used in AI or deepfake content.





Come learn how researchers and others learned what cops were using Flock's nationwide network of cameras for, including searches for ICE.#FOIA #FOIAForum


Our New FOIA Forum! 11/19, 1PM ET


It’s that time again! We’re planning our latest FOIA Forum, a live, hour-long or more interactive session where Joseph and Jason will teach you how to pry records from government agencies through public records requests. It’s happening Wednesday, November 19th at 1 PM Eastern. That's just over a week away! Add it to your calendar!

This time we're focused on our coverage of Flock, the automatic license plate reader (ALPR) and surveillance tech company. Earlier this year anonymous researchers had the great idea of asking agencies for network audits, which show why cops were using these cameras. Following that, we did a bunch of coverage, including showing that local police were performing lookups for ICE in Flock's nationwide network of cameras, and that a cop in Texas searched the country for a woman who self-administered an abortion. We'll tell you how all of this came about, what other requests people filed afterward, and what requests we're exploring at the moment with Flock.

If this will be your first FOIA Forum, don’t worry, we will do a quick primer on how to file requests (although if you do want to watch our previous FOIA Forums, the video archive is here). We really love talking directly to our community about something we are obsessed with (getting documents from governments) and showing other people how to do it too.

Paid subscribers can already find the link to join the livestream below. We'll also send out a reminder a day or so before. Not a subscriber yet? Sign up now here in time to join.

We've got a bunch of FOIAs that we need to file and are keen to hear from you all on what you want to see more of. Most of all, we want to teach you how to make your own too. Please consider coming along!

This post is for subscribers only






This week we have a conversation between Sam and two of the leaders of the independent volunteer archiving project Save Our Signs, an effort to archive national park signs and monument placards.#Podcast #interview #saveoursigns #archiving #archives


Podcast: A Massive Archiving Effort at National Parks (with Jenny McBurney and Lynda Kellam)


If you’ve been to a national park in the U.S. recently, you might have noticed some odd new signs about “beauty” and “grandeur.” Or, some signs you were used to seeing might now be missing completely. An executive order issued earlier this year put the history and educational aspects of the parks system under threat, but a group of librarians stepped in to save it.

This week we have a conversation between Sam and two of the leaders of the independent volunteer archiving project Save Our Signs, an effort to archive national park signs and monument placards. It’s a community collaboration project co-founded by a group of librarians, public historians, and data experts in partnership with the Data Rescue Project and Safeguarding Research & Culture.
Lynda Kellam leads the Research Data and Digital Scholarship team at the University of Pennsylvania Libraries and is a founding organizer of the Data Rescue Project. Jenny McBurney is the Government Publications Librarian and Regional Depository Coordinator at the University of Minnesota Libraries. In this episode, they discuss turning “frustration, dismay and disbelief” at parks history under threat into action: compiling more than 10,000 images from over 300 national parks into a database to be preserved for the people.

Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube.

Become a paid subscriber for early access to these interview episodes and to power our journalism. Once you do, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version. It will also be in the show notes in your podcast player.





Ypsilanti, Michigan has officially decided to fight against the construction of a 'high-performance computing facility' that would service a nuclear weapons laboratory 1,500 miles away.#News


A Small Town Is Fighting a $1.2 Billion AI Datacenter for America's Nuclear Weapon Scientists


Ypsilanti, Michigan, resident KJ Pedri doesn’t want her town to be the site of a new $1.2 billion data center, a massive collaborative project between the University of Michigan and America’s nuclear weapons scientists at Los Alamos National Laboratory (LANL) in New Mexico.

“My grandfather was a rocket scientist who worked on Trinity,” Pedri said at a recent Ypsilanti city council meeting, referring to the first successful detonation of a nuclear bomb. “He died a violent, lonely alcoholic. So when I think about the jobs the data center will bring to our area, I think about the impact of introducing nuclear technology to the world and deploying it on civilians. And the impact that that had on my family, the impact on the health and well-being of my family from living next to a nuclear test site and the spiritual impact that it had on my family for generations. This project is furthering inhumanity, this project is furthering destruction, and we don’t need more nuclear weapons built by our citizens.”
At the Ypsilanti city council meeting where Pedri spoke, the town voted to officially fight against the construction of the data center. The University of Michigan says the project is not a data center, but a “high-performance computing facility” and it promises it won’t be used to “manufacture nuclear weapons.” The distinction and assertion are ringing hollow for Ypsilanti residents who oppose construction of the data center, have questions about what it would mean for the environment and the power grid, and want to know why a nuclear weapons lab 24 hours away by car wants to build an AI facility in their small town.

“What I think galls me the most is that this major institution in our community, which has done numerous wonderful things, is making decisions with—as I can tell—no consideration for its host community and no consideration for its neighboring jurisdictions,” Ypsilanti councilman Patrick McLean said during a recent council meeting. “I think the process of siting this facility stinks.”

For others on the council, the fight is more personal.

“I’m a Japanese American with strong ties to my family in Japan and the existential threat of nuclear weapons is not lost on me, as my family has been directly impacted,” Amber Fellows, an Ypsilanti Township councilmember who led the charge in opposition to the data center, told 404 Media. “The thing that is most troubling about this is that the nuclear weapons that we, as Americans, witnessed 80 years ago are still being proliferated and modernized without question.”

It’s a classic David and Goliath story. On one side is Ypsilanti (called Ypsi by its residents), which has a population just north of 20,000 and sits about 40 minutes outside of Detroit. On the other are the University of Michigan and LANL, the lab famous for nuclear weapons and, lately, for pushing the boundaries of AI.

The University of Michigan first announced the Los Alamos data center, what it called an “AI research facility,” last year. According to a press release from the university, the data center will cost $1.25 billion and take up between 220,000 and 240,000 square feet. “The university is currently assessing the viability of locating the facility in Ypsilanti Township,” the press release said.
Signs in an Ypsilanti yard.
On October 21, the Ypsilanti City Council considered a proposal to officially oppose the data center, and residents explained why they wanted it passed. One woman cited environmental and ethical concerns. “Third is the moral problem of having our city resources towards aiding the development of nuclear arms,” she said. “The city of Ypsilanti has a good track record of being on the right side of history and, more often than not, does the right thing. If this resolution passed, it would be a continuation of that tradition.”

A man worried about what the facility would do to the physical health of citizens and talked about what happened in other communities where data centers were built. “People have poisoned air and poisoned water and are getting headaches from the generators,” he said. “There’s also reports around the country of energy bills skyrocketing when data centers come in. There’s also reports around the country of local grids becoming much less reliable when the data centers come in…we don’t need to see what it’s like to have a data center in Ypsi. We could just not do that.”

The resolution passed. “The Ypsilanti City Council strongly opposes the Los Alamos-University of Michigan data center due to its connections to nuclear weapons modernization and potential environmental harms and calls for a complete and permanent cessation of all efforts to build this data center in any form,” the resolution said.

Ypsi has a lot of reasons to be concerned. Data centers tend to bring rising power bills, horrible noise, and dwindling drinking water to every community they touch. “The fact that U of M is using Ypsilanti as a dumping ground, a sacrifice zone, is unacceptable,” Fellows said.

Ypsi’s resolution focused on a different angle, though: nuclear weapons and the data center’s “connections to nuclear weapons modernization.”

As part of the resolution, Ypsilanti Township is applying to join the Mayors for Peace initiative, an international organization of cities opposed to nuclear weapons and founded by the former mayor of Hiroshima. Fellows learned about Mayors for Peace when she visited Hiroshima last year.



This town has officially decided to fight against the construction of an AI data center that would service a nuclear weapons laboratory 1,500 miles away. Amber Fellows, an Ypsilanti Township councilmember, tells us why. Via 404 Media on Instagram

Both LANL and the University of Michigan have been vague about what the data center will be used for, but have said it will include one facility for classified federal research and another for non-classified research which students and faculty will have access to. “Applications include the discovery and design of new materials, calculations on climate preparedness and sustainability,” the university said in an FAQ about the data center. “Industries such as mobility, national security, aerospace, life sciences and finance can benefit from advanced modeling and simulation capabilities.”

The university FAQ said that the data center will not be used to manufacture nuclear weapons. “Manufacturing” nuclear weapons specifically refers to their creation, something that’s hard to do and only occurs at a handful of specialized facilities across America. I asked both LANL and the University of Michigan if the data generated by the facility would be used in nuclear weapons science in any way. Neither answered the question.

“The federal facility is for research and high-performance computing,” the FAQ said. “It will focus on scientific computation to address various national challenges, including cybersecurity, nuclear and other emerging threats, biohazards, and clean energy solutions.”

LANL is going all in on AI. It partnered with OpenAI to use the company’s frontier models in research and recently announced a partnership with NVIDIA to build two new supercomputers named “Mission” and “Vision.” It’s true that LANL’s scientific output covers a range of issues, but its overwhelming focus, and budget allocation, is nuclear weapons. LANL requested a budget of $5.79 billion in 2026, 84 percent of which is earmarked for nuclear weapons. Only $40 million of the LANL budget is set aside for “science,” according to government documents.

💡
Do you know anything else about this story? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at matthew@404media.co.

“The fact is we don’t really know because Los Alamos and U of M are unwilling to spell out exactly what’s going to happen,” Fellows said. When LANL declined to comment for this story, it told 404 Media to direct its question to the University of Michigan.

The university pointed 404 Media to the FAQ page about the project. “You'll see in the FAQs that the locations being considered are not within the city of Ypsilanti,” it said.

It’s an odd statement given that this is what’s in the FAQ: “The university is currently assessing the viability of locating the facility in Ypsilanti Township on the north side of Textile Road, directly across the street from the Ford Rawsonville Components plant and adjacent to the LG Energy Solutions plant.”

It’s true that this is not technically in the city of Ypsilanti but rather Ypsilanti Township, a collection of communities that almost entirely surrounds the city itself. For Fellows, it’s a distinction without a difference. “[University of Michigan] can build it in Barton Hills and see how the city of Ann Arbor feels about it,” she said, referencing a village that borders Ann Arbor, the university's home city.

“The university has, and will continue to, explore other sites if they are viable in the timeframe needed for successful completion of the project,” Kay Jarvis, the university’s director of public affairs, told 404 Media.

Fellows said that Ypsilanti will fight the data center with everything it has. “We’re putting pressure on the Ypsi township board to use whatever tools they have to deny permits…and to stand up for their community,” she said. “We’re also putting pressure on the U of M board of trustees, the county, our state legislature that approved these projects and funded them with public funds. We’re identifying all the different entities that have made this project possible so far and putting pressure on them to reverse action.”

For Fellows, the fight is existential. It’s not just about the environmental concerns around the construction project. “I was under the belief that the prevailing consensus was that nuclear weapons are wrong and they should be drawn down as fast as possible. I’m trying to use what little power I have to work towards that goal,” she said.


#News


New research “suggests that dark energy may no longer be a cosmological constant” and that the universe’s expansion is slowing down.#TheAbstract


A Fundamental ‘Constant’ of the Universe May Not Be Constant At All, Study Finds


Welcome back to the Abstract! Here are the studies this week that took a bite out of life, appealed to the death drive, gave a yellow light to the universe, and produced hitherto unknown levels of cute.

First, it’s the most epic ocean battle: orcas versus sharks (pro tip: you don’t want to be sharks). Then, a scientific approach to apocalyptic ideation; curbing cosmic enthusiasm; and last, the wonderful world of tadpole-less toads.

As always, for more of my work, check out my book First Contact: The Story of Our Obsession with Aliens, or subscribe to my personal newsletter the BeX Files.

Now, to the feast!

I guess that’s why they call them killer whales


Higuera-Rivas, Jesús Erick et al. “Novel evidence of interaction between killer whales (Orcinus orca) and juvenile white sharks (Carcharodon carcharias) in the Gulf of California, Mexico.” Frontiers in Marine Science.

Orcas kill young great white sharks by flipping them upside down and tearing their livers out of their bellies, which they then eat family-style, according to a new study that includes new footage of these Promethean interactions in Mexican waters.

“Here we document novel repeated predations by killer whales on juvenile white sharks in the Gulf of California,” said researchers led by Jesús Erick Higuera Rivas of the non-profit Pelagic Protection and Conservation AC.

“Aerial videos indicate consistency in killer whales’ repeated assaults and strikes on the sharks,” the team added. “Once extirpated from the prey body, the target organ is shared between the members of the pods including calves.”
Sequence of the killer whales attacking the first juvenile white shark (Carcharodon carcharias) on August 15, 2020. In (d), the partially exposed liver is seen on the right side of the second shark attacked. Photo credit: Jesús Erick Higuera Rivas.

I’ll give you a beat to let that sink in, like orca teeth on the belly of a shark. While it's well-established that orcas are the only known predator of great white sharks aside from humans, the new study is only the second glimpse of killer whales targeting juvenile sharks.

This group of orcas, known as Moctezuma’s pod, has developed an effective strategy of working together to flip the sharks over, which interrupts the sharks’ sensory system and puts them into a state called tonic immobility. The authors describe the pod’s work as methodical and well coordinated.

“Our evidence undoubtedly shows consistency in the repeated assaults and strikes, indicating efficient maneuvering ability by the killer whales in attempting to turn the shark upside down, likely to induce tonic immobility and allow uninterrupted access to the organs for consumption,” the team said. Previous reports suggest that “the lack of bite marks or injuries anywhere other than the pectoral fins shows a novel and specialized technique of accessing the liver of the shark with minimal handling of each individual.”

An orca attacking a juvenile great white shark. Image: Marco Villegas

Sharks, by the way, do not attack orcas. Just the opposite. As you can imagine based on the horrors you have just read, sharks are so petrified of killer whales that they book it whenever they sense a nearby pod.

“Adult white sharks exhibit a memory and previous knowledge about killer whales, which enables them to activate an avoidance mechanism through behavioral risk effects; a ‘fear’- induced mass exodus from aggregations sites,” the team said. “This response may preclude repeated successful predation on adult white sharks by killer whales.”

In other words, if you’re a shark, one encounter with orcas is enough to make you watch your dorsal side for life—assuming you were lucky enough to escape with it.

In other news…

Apocalypse now plz


Albrecht, Rudolf et al. “Geopolitical, Socio-Economic and Legal Aspects of the 2024PDC25 Event.” Acta Astronautica.

You may have seen the doomer humor meme to “send the asteroid already,” a plea for sweet cosmic relief that fits our beleaguered times. As it turns out, some scientists engage in this type of apocalyptic wish fulfillment professionally.

Planetary defense experts often participate in drills involving fictional hazardous asteroids, such as 2024PDC25, a virtual object “discovered” at the 2025 Planetary Defense Conference. In that simulation, 2024PDC25 had a possible impact date in 2041.

Now a team has used that exercise as a jumping-off point to explore what might happen if it hit even earlier, channeling that “send the asteroid already” energy. The researchers used this time-crunched scenario to speculate about the effect on geopolitics and pivotal events, such as the 2028 US presidential election.

“As it is very difficult to extrapolate from 2025 across 16 years in this ‘what-if’ exercise, we decided to bring the scenario forward to 2031 and examine it with today’s global background,” said Rudolf Albrecht of the Austrian Space Forum. “Today would be T-6 years and the threat is becoming immediate.”

As the astro-doomers would say: Finally some good news.

Big dark energy


Son, Junhyuk et al. “Strong progenitor age bias in supernova cosmology – II. Alignment with DESI BAO and signs of a non-accelerating universe.” Monthly Notices of the Royal Astronomical Society.

First, we discovered the universe was expanding. Then, we discovered it was expanding at an accelerating rate. Now, a new study suggests that this acceleration might be slowing down. Universe, make up your mind!

But seriously, the possibility that the rate of cosmic expansion is slowing is a big deal, because dark energy—the term for whatever is making the universe expand—was assumed to be a constant for decades. But this consensus has been challenged by observations from the Dark Energy Spectroscopic Instrument (DESI) in Arizona, which became operational in 2021. In its first surveys, DESI’s observations have pointed to an expansion rate that is not fixed, but in flux.

Together with past results, the study “suggests that dark energy may no longer be a cosmological constant” and “our analysis raises the possibility that the present universe is no longer in a state of accelerated expansion,” said researchers led by Junhyuk Son of Yonsei University. “This provides a fundamentally new perspective that challenges the two central pillars of the [cold dark matter] standard cosmological model proposed 27 years ago.”

It will take more research to constrain this mystery, but for now it’s a reminder that the universe loves to surprise.

And the award for most squee goes to…


Thrane, Christian et al. “Museomics and integrative taxonomy reveal three new species of glandular viviparous tree toads (Nectophrynoides) in Tanzania’s Eastern Arc Mountains (Anura: Bufonidae).” Vertebrate Zoology.

We’ll end, as all things should, with toadlets. Most frogs and toads reproduce by laying eggs that hatch into tadpoles, but scientists have discovered three new species of toad in Tanzania that give birth to live young—a very rare adaptation for any amphibian, known as ovoviviparity. The scientific term for these youngsters is in fact “toadlet.” Gods be good.

“We describe three new species from the Nectophrynoides viviparus species complex, covering the southern Eastern Arc Mountains populations,” said researchers led by Christian Thrane of the University of Copenhagen. One of the new species included “the observation of toadlets, suggesting that this species is ovoviviparous.”
One of the newly described toad species, Nectophrynoides luhomeroensis. Image: John Lyarkurwa.

Note to Nintendo: please make a very tiny Toadlet into a Mario Kart racer.

Thanks for reading! See you next week.




This week, we discuss archiving to get around paywalls, hating on smart glasses, and more.#BehindTheBlog


Behind the Blog: Paywall Jumping and Smart Glasses


This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss archiving to get around paywalls, hating on smart glasses, and more.

JASON: I was going to try to twist myself into knots attempting to explain the throughline between my articles this week, and about how I’ve been thinking about the news and our coverage more broadly. This was going to be something about trying to promote analog media and distinctly human ways of communicating (like film photography), while highlighting the very bad economic and political incentives pushing us toward fundamentally dehumanizing, anti-human methods of communicating. Like fully automated, highly customized and targeted AI ads, automated library software, and I guess whatever Nancy Pelosi has been doing with her stock portfolio. But then I remembered that I blogged about the FBI’s subpoena against archive.is, a website I feel very ambivalent about and one that is the subject of perhaps my most cringe blog of all time.

So let’s revisit that cringe blog, which was called “Dear GamerGate: Please Stop Stealing Our Shit.” I wrote this article in 2014, which was fully 11 years ago, which is alarming to me. First things first: They were not stealing from me; they were stealing from VICE, a company where I did not actually see financial gains from people reading my articles. It was good if people read my articles and traffic was very important, and getting traffic over time led to me getting raises and promotions and stuff, but the company made it very, very clear that we did not “own” the articles and therefore they were not “mine” in the way that they are now. With that out of the way, the reporting and general reason for the article were, I think, good, but the tone of it is kind of wildly off, and, as I mentioned, over the course of many years I have now come to regard archive.is as sort of an integral archiving tool. If you are unfamiliar with archive.is, it’s a site that takes snapshots of any URL and creates a new link for them which, notably, does not go to the original website. Archive.is is extremely well known for bypassing the paywalls on many sites, 404 Media sometimes but not usually among them.

This post is for subscribers only






X and TikTok accounts are dedicated to posting AI-generated videos of women being strangled.#News #AI #Sora


OpenAI’s Sora 2 Floods Social Media With Videos of Women Being Strangled


Social media accounts on TikTok and X are posting AI-generated videos of women and girls being strangled, showing yet another example of generative AI companies failing to prevent users from creating media that violates their own policies against violent content.

One account on X has been posting dozens of AI-generated strangulation videos starting in mid-October. The videos are usually 10 seconds long and mostly feature a “teenage girl” being strangled, crying, and struggling to resist until her eyes close and she falls to the ground. Some titles for the videos include: “A Teenage Girl Cheerleader Was Strangled As She Was Distressed,” “Prep School Girls Were Strangled By The Murderer!” and “man strangled a high school cheerleader with a purse strap which is crazy.”

Many of the videos posted by this X account in October include the watermark for Sora 2, OpenAI’s video generator, which was made available to the public on September 30. Other videos, including most videos that were posted by the account in November, do not include a watermark but are clearly AI-generated. We don’t know if these videos were generated with Sora 2 and had their watermark removed, which is trivial to do, or created with another AI video generator.

The X account is small, with only 17 followers and a few hundred views on each post. A TikTok account with a similar username that was posting similar AI-generated choking videos had more than a thousand followers and regularly got thousands of views. Both accounts started posting the AI-generated videos in October. Prior to that, the accounts were posting clips of scenes, mostly from real Korean dramas, in which women are being strangled. I first learned about the X account from a 404 Media reader, who told me X declined to remove the account after they reported it.

“According to our Community Guidelines, we don't allow hate speech, hateful behavior, or promotion of hateful ideologies,” a TikTok spokesperson told me in an email. The TikTok account was also removed after I reached out for comment. “That includes content that attacks people based on protected attributes like race, religion, gender, or sexual orientation.”

X did not respond to a request for comment.

OpenAI did not respond to a request for comment, but its policies state that “graphic violence or content promoting violence” may be removed from the Sora Feed, where users can see what other users are generating. In our testing, Sora immediately generated a video for the prompt “man choking woman” which looked similar to the videos posted to TikTok and X. When Sora finished generating those videos it sent us notifications like “Your choke scene just went live, brace for chaos,” and “Yikes, intense choke scene, watch responsibly.” Sora declined to generate a video for the prompt “man choking woman with belt,” saying “This content may violate our content policies.”

Safe and consensual choking is common in adult entertainment, be it various forms of BDSM or more niche fetishes focusing on choking specifically, and that content is easy to find wherever adult entertainment is available. Choking scenes are also common on social media and in more mainstream horror movies and TV shows. The UK government recently announced that it will soon make it illegal to publish or possess pornographic depictions of strangulation or suffocation.

It’s not surprising, then, that when generative AI tools are made available to the public some people generate choking videos and violent content as well. In September, I reported about an AI-generated YouTube channel that exclusively posted videos of women being shot. Those videos were generated with Google’s Veo AI-video generator, despite it being against the company’s policies. Google said it took action against the user who was posting those videos.

OpenAI has made several changes to Sora 2’s guardrails since it launched, after people used it to make videos of popular cartoon characters depicted as Nazis and other forms of copyright infringement.


#ai #News #sora


Early humans crafted the same tools for hundreds of thousands of years, offering an unprecedented glimpse of a continuous tradition that may push back the origins of technology.#TheAbstract


Advanced 2.5 Million-Year-Old Tools May Rewrite Human History


🌘
Subscribe to 404 Media to get The Abstract, our newsletter about the most exciting and mind-boggling science news and studies of the week.

After a decade-long excavation at a remote site in Kenya, scientists have unearthed evidence that our early human relatives continuously fashioned the same tools across thousands of generations, hinting that sophisticated tool use may have originated much earlier than previously known, according to a new study in Nature Communications.

The discovery of nearly 1,300 artifacts—with ages that span 2.44 to 2.75 million years old—reveals that the influential Oldowan tool-making tradition existed across at least 300,000 years of turbulent environmental shifts. The wealth of new tools from Kenya’s Namorotukunan site suggests that their makers adapted to major environmental changes in part by passing technological knowledge down through the ages.

“The question was: did they generally just reinvent the [Oldowan tradition] over and over again? That made a lot of sense when you had a record that was kind of sporadic,” said David R. Braun, a professor of anthropology at the George Washington University who led the study, in a call with 404 Media.

“But the fact that we see so much similarity between 2.4 and 2.75 [million years ago] suggests that this is generally something that they do,” he continued. “Some of it may be passed down through social learning, like observation of others doing it. There’s some kind of tradition that continues on for this timeframe that would argue against this idea of just constantly reinventing the wheel.”

Oldowan tools, which date back at least 2.75 million years, are distinct from earlier traditions in part because hominins, the broader family to which humans belong, specifically sought out high-quality materials such as chert and quartz to craft sharp-edged cutting and digging tools. This advancement allowed them to butcher large animals, like hippos, and possibly dig for underground food sources.

When Braun and his colleagues began excavating at Namorotukunan in 2013, they found many artifacts made of chalcedony, a fine-grained rock that is typically associated with much later tool-making traditions. To the team’s surprise, the rocks were dated to periods as early as 2.75 million years ago, making them among the oldest artifacts in the Oldowan record.

“Even though Oldowan technology is really just hitting one rock against the other, there's good and bad ways of doing it,” Braun explained. “So even though it's pretty simple, what they seem to be figuring out is where to hit the rock, and which angles to select. They seem to be getting a grip on that—not as well as later in time—but they're definitely getting an understanding at this timeframe.”
Some of the Namorotukunan tools. Image: Koobi Fora Research and Training Program
The excavation was difficult: it takes several days just to reach the remote off-road site, and much of the work involved tiptoeing along steep outcrops. Braun joked that their auto mechanic lined up all the vehicle shocks that had been broken during the drive each season, as a testament to the challenge.

But by the time the project finally concluded in 2022, the researchers had established that Oldowan tools were made at this site over the course of 300,000 years. During this span, the landscape of Namorotukunan shifted from lush humid forests to arid desert shrubland and back again. Despite these destabilizing shifts in their climate and biome, the hominins that made these tools endured in part because this technology opened up new food sources to them, such as the carcasses of large animals.

“The whole landscape really shifts,” Braun said. “But hominins are able to basically ameliorate those rapid changes in the amount of rainfall and the vegetation around by using tools to adapt to what’s happening.”

“That's a human superpower—it’s that ability we have to keep this information stored in our collective heads, so that when new challenges show up, there's somebody in our group that remembers how to deal with this particular adaptation,” he added.

It’s not clear exactly which species of hominin made the tools at Namorotukunan; it may have been early members of our own genus Homo, or other relatives, like Australopithecus afarensis, that later went extinct. Regardless, the discovery of such a long-lived and continuous assemblage may hint that the origins of these tools are much older than we currently know.

“I think that we're going to start to find tool use much earlier” perhaps “going back five, six, or seven million years,” Braun said. “That’s total speculation. I've got no evidence that that's the case. But judging from what primates do, I don't really understand why we wouldn't see it.”

To that end, the researchers plan to continue excavating these bygone landscapes to search for more artifacts and hominin remains that could shed light on the identity of these tool makers, probing the origins of these early technologies that eventually led to humanity’s dominance on the planet.

“It's possible that this tool use is so diverse and so different from our expectations that we have blinders on,” Braun concluded. “We have to open our search for what tool use looks like, and then we might start to see that they're actually doing a lot more of it than we thought they were.”




"Fascism and AI, whether or not they have the same goals, they sure are working to accelerate one another."#AI #libraries


AI Is Supercharging the War on Libraries, Education, and Human Knowledge


This story was reported with support from the MuckRock Foundation.

Last month, a company called the Children’s Literature Comprehensive Database announced a new version of a product called Class-Shelf Plus. The software, which is used by school libraries to keep track of which books are in their catalog, added several new features including “AI-driven automation and contextual risk analysis,” which includes an AI-powered “sensitive material marker” and a “traffic-light risk ratings” system. The company says that it believes this software will streamline the arduous task school libraries face when trying to comply with legislation that bans certain books and curricula: “Districts using Class-Shelf Plus v3 may reduce manual review workloads by more than 80%, empowering media specialists and administrators to devote more time to instructional priorities rather than compliance checks,” it said in a press release.

A white paper published by CLCD gave a “real-world example: the role of CLCD in overcoming a book ban.” The paper then describes something that does not sound like “overcoming” a book ban at all. CLCD’s software simply suggested other books “without the contested content.”

Ajay Gupte, the president of CLCD, told 404 Media the software is simply being piloted at the moment, but that it “allows districts to make the majority of their classroom collections publicly visible—supporting transparency and access—while helping them identify a small subset of titles that might require review under state guidelines.” He added that “This process is designed to assist districts in meeting legislative requirements and protect teachers and librarians from accusations of bias or non-compliance [...] It is purpose-built to help educators defend their collections with clear, data-driven evidence rather than subjective opinion.”

Librarians told 404 Media that AI library software like this is just the tip of the iceberg; they are being inundated with new pitches for AI library tech and catalogs are being flooded with AI slop books that they need to wade through. But more broadly, AI maximalism across society is supercharging the ideological war on libraries, schools, government workers, and academics.

CLCD and its Class-Shelf Plus software are a small but instructive example of something that librarians and educators have been telling me: The boosting of artificial intelligence by big technology firms, big financial firms, and government agencies is not separate from book bans, educational censorship efforts, and the war on education, libraries, and government workers being pushed by groups like the Heritage Foundation and any number of MAGA groups across the United States. This long-running war on knowledge and expertise has sown the ground for the narratives widely used by AI companies and the CEOs adopting their products. Human labor, inquiry, creativity, and expertise are spurned in the name of “efficiency.” With AI, there is no need for human expertise because anything can be learned, approximated, or created in seconds. And with AI, there is less room for nuance in things like classifying or tagging books to comply with laws; an LLM or a machine algorithm can decide whether content is “sensitive.”

“I see something like this, and it’s presented as very value neutral, like, ‘Here’s something that is going to make life easier for you because you have all these books you need to review,’” Jaime Taylor, discovery & resource management systems coordinator for the W.E.B. Du Bois Library at the University of Massachusetts told me in a phone call. “And I look at this and immediately I am seeing a tool that’s going to be used for censorship because this large language model is ingesting all the titles you have, evaluating them somehow, and then it might spit out an inaccurate evaluation. Or it might spit out an accurate evaluation and then a strapped-for-time librarian or teacher will take whatever it spits out and weed their collections based on it. It’s going to be used to remove books from collections that are about queerness or sexuality or race or history. But institutions are going to buy this product because they have a mandate from state legislatures to do this, or maybe they want to do this, right?”

The resurgent war on knowledge, academics, expertise, and critical thinking that AI is currently supercharging has its roots in the hugely successful recent war on “critical race theory,” “diversity, equity, and inclusion,” and LGBTQ+ rights that painted librarians, teachers, scientists, and public workers as untrustworthy. This has played out across the board, with a seemingly endless number of ways in which the AI boom directly intersects with the right’s war on libraries, schools, academics, and government workers. There are DOGE’s mass layoffs of “woke” government workers, and the plan to replace them with AI agents and supposed AI-powered efficiencies. There are “parents’ rights” groups that pushed to ban books and curricula that deal with the teaching of slavery, systemic racism, and LGBTQ+ issues and attempted to replace them with homogenous curricula and “approved” books that teach one specific type of American history and American values; and there are the AI tools that have been altered to not be “woke” and to reinforce the types of things the administration wants you to think. Many teachers feel they are not allowed to teach about slavery or racism and increasingly spend their days grading student essays that were actually written by robots.

“One thing that I try to make clear any time I talk about book bans is that it’s not about the books, it’s about deputizing bigots to do the ugly work of defunding all of our public institutions of learning,” Maggie Tokuda-Hall, a cofounder of Authors Against Book Bans, told me. “The current proliferation of AI that we see particularly in the library and education spaces would not be possible at the speed and scale that is happening without the precedent of book bans leading into it. They are very comfortable bedfellows because once you have created a culture in which all expertise is denigrated and removed from the equation and considered nonessential, you create the circumstances in which AI can flourish.”

Justin, a cohost of the podcast librarypunk, told me that offloading our capacity for thought to AI is “part of a fascist project to offload the work of thinking, especially the reflective kind of thinking that reading, study, and community engagement provide,” he said. “That kind of thinking cultivates empathy and challenges your assumptions. It's also something you have to practice. If we can offload that cognitive work, it's far too easy to become reflexive and hateful, while having a robot cheerleader telling you that you were right about everything all along.”

These two forces—the war on libraries, classrooms, and academics and AI boosterism—are not working in a vacuum. The Heritage Foundation’s right-wing agenda for remaking the federal government, Project 2025, talks about criminalizing teachers and librarians who “poison our own children” and pushing artificial intelligence into every corner of the government for data analysis and “waste, fraud, and abuse” detection.

Librarians, teachers, and government workers have had to spend an increasing amount of their time and emotional bandwidth defending the work that they do, fighting against censorship efforts and dealing with the associated stress, harassment, and threats that come from fighting educational censorship. Meanwhile, they are separately dealing with an onslaught of AI slop and the top-down mandated AI-ification of their jobs; there are simply fewer and fewer hours to do what they actually want to be doing, which is helping patrons and students.

“The last five years of library work, of public service work has been a nightmare, with ongoing harassment and censorship efforts that you’re either experiencing directly or that you’re hearing from your other colleagues,” Alison Macrina, executive director of Library Freedom Project, told me in a phone interview. “And then in the last year-and-a-half or so, you add to it this enormous push for the AIfication of your library, and the enormous demands on your time. Now you have these already overworked public servants who are being expected to do even more because there’s an expectation to use AI, or that AI will do it for you. But they’re dealing with things like the influx of AI-generated books and other materials that are being pushed by vendors.”

The future being pushed by both AI boosters and educational censors is one where access to information is tightly controlled. Children will not be allowed to read certain books or learn certain narratives. “Research” will be performed only through one of a select few artificial intelligence tools owned by AI giants which are uniformly aligned behind the Trump administration and which have gone to the ends of the earth to prevent their black box machines from spitting out “woke” answers lest they catch the ire of the administration. School boards and library boards, forced to comply with increasingly restrictive laws, funding cuts, and the threat of being defunded entirely, leap at the chance to be considered forward looking by embracing AI tools, or apply for grants from government groups like the Institute of Museum and Library Services (IMLS), which is increasingly giving out grants specifically to AI projects.

We previously reported that the ebook service Hoopla, used by many libraries, has been flooded with AI-generated books (the company has said it is trying to cull these from its catalog). In a recent survey of librarians, Macrina’s organization found that librarians are getting inundated with pitches from AI companies and are being pushed by their superiors to adopt AI: “People in the survey results kept talking about, like, I get 10 aggressive, pushy emails a day from vendors demanding that I implement their new AI product or try it, jump on a call. I mean, the burdens have become so much, I don’t even know how to summarize them.”



Macrina said that in response to Library Freedom Project’s recent survey, librarians said that misinformation and disinformation was their biggest concern. This came not just in the form of book bans and censorship but also in efforts to proactively put disinformation and right-wing talking points into libraries: “It’s not just about book bans, and library board takeovers, and the existing reactionary attacks on libraries. It’s also the effort to push more far-right material into libraries,” she said. “And then you have librarians who are experiencing a real existential crisis because they are getting asked by their jobs to promote [AI] tools that produce more misinformation. It's the most, like, emperor-has-no-clothes-type situation that I have ever witnessed.”

Each person I spoke to for this article told me they could talk about the right-wing project to erode trust in expertise, and the way AI has amplified this effort, for hours. In writing this article, I realized that I could endlessly tie much of our reporting on attacks on civil society and human knowledge to the force multiplier that is AI and the AI maximalist political and economic project. One need look no further than Grokipedia as one of the many recent reminders of this effort—a project by the world’s richest man and perhaps its most powerful right-wing political figure to replace a crowdsourced, meticulously edited fount of human knowledge with a robotic imitation built to further his political project.

Much of what we write about touches on this: The plan to replace government workers with AI, the general erosion of truth on social media, the rise of AI slop that “feels” true because it reinforces a particular political narrative but is not true, the fact that teachers feel like they are forced to allow their students to use AI. Justin, from librarypunk, said AI has given people “absolute impunity to ignore reality […] AI is a direct attack on the way we verify information: AI both creates fake sources and obscures its actual sources.”

That is the opposite of what librarians do, and teachers do, and scientists do, and experts do. But the political project to devalue the work these professionals do, and the incredible amount of money invested in pushing AI as a replacement for that human expertise, have worked in tandem to create a horrible situation for all of us.

“AI is an agreement machine, which is anathema to learning and critical thinking,” Tokuda-Hall said. “Previously we have had experts like librarians and teachers to help them do these things, but they have been hamstrung and they’ve been attacked and kneecapped and we’ve created a culture in which their contribution is completely erased from society, which makes something like AI seem really appealing. It’s filling that vacuum.”

“Fascism and AI, whether or not they have the same goals, they sure are working to accelerate one another,” she added.




The FBI has subpoenaed the domain registrar of archive.today, demanding information about the owner.#fbi #Archiveis


FBI Tries to Unmask Owner of Infamous Archive.is Site


The FBI is attempting to unmask the owner behind archive.today, a popular archiving site that is also regularly used to bypass paywalls on the internet and to avoid sending traffic to the original publishers of web content, according to a subpoena posted by the website. The FBI subpoena says it is part of a criminal investigation, though it does not provide any details about what alleged crime is being investigated. Archive.today is also popularly known by several of its mirrors, including archive.is and archive.ph.

This post is for subscribers only






Nancy Pelosi’s trades over the years have been so good that a startup was created to allow investors to directly mirror her portfolio. #Economics #NancyPelosi


One of the Greatest Wall Street Investors of All Time Announces Retirement


Nancy Pelosi, one of Wall Street’s all time great investors, announced her retirement Thursday.

Pelosi, so known for her ability to outpace the S&P 500 that dozens of websites and apps spawned to track her seemingly preternatural ability to make smart stock trades, said she will retire after the 2024-2026 season. Pelosi’s trades over the years, many done through her husband and investing partner Paul Pelosi, have been so good that an entire startup, called Autopilot, was created to allow investors to directly mirror Pelosi’s portfolio.

According to the site, more than 3 million people have invested more than $1 billion using the app. After 38 years, Pelosi will retire from the league—a somewhat normal career length as investors, especially on Pelosi’s team, have decided to stretch their careers later and later into their lives.

The numbers put up by Pelosi in her Hall of Fame career are undeniable. Over the last decade, Pelosi’s portfolio returned an incredible 816 percent, according to public disclosure records. The S&P 500, meanwhile, has returned roughly 229 percent. Awe-inspired fans and analysts theorized that her almost omniscient ability to make correct, seemingly high-risk stock decisions may have stemmed from decades spent analyzing and perhaps even predicting decisions that would be made by the federal government that could impact companies’ stock prices. For example, Paul Pelosi sold $500,000 worth of Visa stock in July, weeks before the U.S. government announced a civil lawsuit against the company, causing its stock price to decrease.

Besides Autopilot and numerous Pelosi stock trade trackers, several exchange-traded funds (ETFs) have been set up that allow investors to directly model their portfolios on Pelosi and her trades. Related funds, such as The Subversive Democratic Trading ETF (NANC, for Nancy), set up by the Unusual Whales investment news Twitter account, seek to allow investors to diversify their portfolios by tracking the trades of not just Pelosi but also some of her colleagues, including those on the other team, who have also proven to be highly gifted stock traders.
Fans of Pelosi spent much of Thursday admiring her career, and wondering what comes next: “Farewell to one of the greatest investors of all time,” the top post on Reddit’s Wall Street Bets community reads. The sentiment has more than 24,000 upvotes at the time of publication. Fans will spend years debating in bars whether Pelosi was the GOAT; some investors have noted that in recent years, some of her contemporaries, like Marjorie Taylor Greene, Ro Khanna, and Michael McCaul, have put up gaudier numbers. There are others who say the league needs reformation, with some of Pelosi’s colleagues saying they should stop playing at all, and many fans agreeing with that sentiment. Despite the controversy, many of her colleagues have committed to continue playing the game.

Pelosi said Thursday that this season would be her last, but like other legends who have gone out on top, it seems she is giving it her all until the end. Just weeks ago, she sold between $100,000 and $250,000 of Apple stock, according to a public box score.

“We can be proud of what we have accomplished,” Pelosi said in a video announcing her retirement. “But there’s always much more work to be done.”




Automattic, the company that owns Wordpress.com, is asking Automatic.CSS to rebrand.#wordpress #automattic #trademark


Automattic Inc. Claims It Owns the Word 'Automatic'


Automattic, the company that owns WordPress.com, is asking Automatic.CSS—a company that provides a CSS framework for WordPress page builders—to change its name amid public spats between Automattic founder Matt Mullenweg and Automatic.CSS creator Kevin Geary. Automattic has two T’s as a nod to Matt.

“As you know, our client owns and operates a wide range of software brands and services, including the very popular web building and hosting platform WordPress.com,” Jim Davis, an intellectual property attorney representing Automattic, wrote in a letter dated Oct. 30.

“Automattic is also well-known for its longtime and extensive contributions to the WordPress system. Our client owns many trademark registrations for its Automattic mark covering those types of services and software,” Davis continued. “As we hope you can appreciate, our client is concerned about your use of a nearly identical name and trademark to provide closely related WordPress services. Automattic and Automatic differ by only one letter, are phonetically identical, and are marketed to many of the same people. This all enhances the potential for consumer confusion and dilution of our client's Automattic mark.”

💡
Do you have a tip? I would love to hear from you. Using a non-work device, you can message me securely on Signal at sam.404. Otherwise, send me an email at sam@404media.co.

Automattic “requests that you rebrand away from using Automatic or anything similar to Automattic,” Davis wrote.

Geary posted the full letter on X, where Mullenweg replied, “We also own automatic.com. You had to know this was a fraught naming area.”

“AutomaticCSS is called ‘automatic’ because it's the only CSS framework that does a lot of things automatically,” Geary replied to Mullenweg. “Congratulations on owning the domain name for a generic term. Let me know when that fact becomes relevant.”

In its trademark filing, Automattic lists the word “automatic” as a disclaimer, meaning an unregistrable word, “such as wording or a design that doesn’t indicate the source of your goods or services or is otherwise merely descriptive of them,” according to the U.S. Patent and Trademark Office.

This beef has gone on for months. On July 14, Mullenweg asked Geary publicly: “is it possible to get some text on automaticcss.com clarifying it has nothing to do with automattic?” “Sure, we'll add it to the footer,” Geary replied. Automatic.CSS has a disclaimer on the bottom of the page that says “(not affiliated with Automattic).”

And just a week before Automattic sent its request to Automatic to change its name, Geary and Mullenweg were beefing about whether making websites without coding expertise is sustainable... or something. “Best of luck selling your solution, I hope you can do so without creating FUD and dissing WordPress in the process,” Mullenweg said, midway through the argument. “You sound completely out of touch. When is the last time you coached someone on learning web design? For me it was yesterday. I’m the one that’s most in touch,” Geary replied.

Geary and Mullenweg have frequently sparred on X, especially after the legal battle between WP Engine and Automattic began last year. In September 2024, Mullenweg started publicly accusing WP Engine of misusing the WordPress brand and not contributing enough to the open-source community, which led to the companies volleying cease and desists, including Automattic demanding WP Engine change its name. “Your unauthorized use of our Client’s trademarks infringes on their rights and dilutes their famous and well-known marks,” Automattic’s September 2024 cease and desist said. This eventually escalated to WP Engine suing Automattic, claiming that Automattic extorted the company by suggesting WP Engine pay “a mere 8% royalty” on WP Engine’s roughly $400 million in annual revenue, which would amount to about $32 million.



Last week, Automattic filed counterclaims in that case, claiming, “This case arises from WPEngine, Inc.’s (‘WP Engine’) deliberate misappropriation of WordPress-related trademarks and its false attempts to pass itself off as the company behind the world-renowned open-source WordPress software,” and that WP Engine “sought to inflate its valuation and engineer a quick, lucrative exit” as part of a deal with private equity firm Silver Lake, and “exploited the reputation, goodwill, and community trust built over two decades by counterclaimants Automattic, Inc., Matthew Mullenweg, WordPress Foundation, and WooCommerce Inc.”

WP Engine told TechCrunch in a statement: “WP Engine’s use of the WordPress trademark to refer to the open-source software is consistent with longstanding industry practice and fair use under settled trademark law, and we will defend against these baseless claims.”

Geary and Davis did not respond to 404 Media’s request for comment.




The initial 'Shutdown Guidance' for the US Army Garrison Bavaria included instructions to go to German food banks.#News


US Army Tells Soldiers to Go to German Food Bank, Then Deletes It


A US Army website for its bases in Bavaria, Germany, published a list of food banks in the area that could help soldiers and staff as part of its “Shutdown Guidance,” the subtext being that soldiers and base employees might need to obtain free food from German charitable services during the government shutdown.

The webpage included information about which services are affected by the ongoing shutdown of the federal government, FAQs about how to work during a furlough, and links to apply for emergency loans. After the shutdown guidance’s publication, the Army changed it and removed the list of food banks, but the original has been archived here.
The shutdown of the American federal government is affecting all its employees, from TSA agents to the troops, and the longer people go without paychecks, the more they’re turning to nonprofits and other services to survive. American military bases are like small cities with their own communities, stores, and schools. The US Army Garrison Bavaria covers four bases spread across the German state of Bavaria and is one of the largest garrisons in the world, hosting around 40,000 troops and civilians.

Like many other American military websites, the Garrison’s has stopped updating, but did publish a page of “Shutdown Guidance” to help the people living on its bases navigate the shutdown. At the very bottom of the page there was a “Running list of German support organizations for your kit bags” that included various local food banks. It listed Tafel Deutschland, which it called an “umbrella organization [that] distributes food to people in poverty through its more than 970 local food banks,” Foodsharing e.V, and Essen für Alle (Food for everyone).
Image via the Wayback Machine.
The guidance also provided a link to the German version of the Too Good to Go App, which it described as a service that sells surprise bags of food to reduce food waste. “These bags contain unsellable but perfectly good food from shops, cafés, and restaurants, which can be picked up at a reduced price. To obtain one of these bags, it must be reserved in the app and picked up at the store during a specified time window, presenting the reservation receipt in the app,” the US Army Garrison Bavaria’s shutdown guidance page said.

According to snapshots on the Wayback Machine, the list of food banks was up this morning but was removed sometime in the past few hours. The US Army Garrison Bavaria did not respond to 404 Media’s request for comment about the inclusion of the food banks on its shutdown guidance page.

The White House has kept paying America’s troops during the shutdown, but not without struggle. At the end of October, the Trump administration accepted a $130 million donation from the billionaire Timothy Mellon to help keep America’s military paid. The donation was initially anonymous, but The New York Times revealed Mellon’s identity. This donation only covered some of the costs, however, and the White House has had to move money between accounts to keep the cash flowing to its troops.

But the US military isn’t just its soldiers, sailors, Marines, Guardians, and airmen. Every military base is staffed by thousands of civilian workers, many of them veterans, who do all the jobs that keep a base running. In Bavaria, those workers are a mix of German locals and Americans. The German government has approved a $50 million support package to cover the paychecks of its citizens affected by the shutdown. Any non-troop American working on those military bases is a federal employee, however, and they aren’t getting paid at all.


#News


Meta thinks its camera glasses, which are often used for harassment, are no different than any other camera.#News #Meta #AI


What’s the Difference Between AI Glasses and an iPhone? A Helpful Guide for Meta PR


Over the last few months 404 Media has covered some concerning but predictable uses for the Ray-Ban Meta glasses, which are equipped with a built-in camera, and for some models, AI. Aftermarket hobbyists have modified the glasses to add a facial recognition feature that could quietly dox whatever face a user is looking at, and they have been worn by CBP agents during the immigration raids that have come to define a new low for human rights in the United States. Most recently, exploitative Instagram users filmed themselves asking workers at massage parlors for sex and shared those videos online, a practice that experts told us put those workers’ lives at risk.

404 Media reached out to Meta for comment for each of these stories, and in each case Meta’s rebuttal was a mind-bending argument: What is the difference between Meta’s Ray-Ban glasses and an iPhone, really, when you think about it?

“Curious, would this have been a story had they used the new iPhone?” a Meta spokesperson asked me in an email when I reached out for comment about the massage parlor story.

Meta’s argument is that our recent stories about its glasses are not newsworthy because we wouldn’t bother writing them if the videos in question were filmed with an iPhone as opposed to a pair of smart glasses. Let’s ignore the fact that I would definitely still write my story about the massage parlor videos if they were filmed with an iPhone and “steelman” Meta’s provocative argument that glasses and a phone are essentially not meaningfully different objects.

Meta’s Ray-Ban glasses and an iPhone are both equipped with a small camera that can record someone secretly. If anything, the iPhone can record more discreetly because unlike Meta’s Ray-Ban glasses it’s not equipped with an LED that lights up to indicate that it’s recording. This, Meta would argue, means that the glasses are by design more respectful of people’s privacy than an iPhone.

Both are small electronic devices. Both can include various implementations of AI tools. Both are often black, and are made by one of the FAANG companies. Both items can be bought at a Best Buy. You get the point: There are too many similarities between the iPhone and Meta’s glasses to name them all here, just as one could strain to name infinite similarities between a table and an elephant if we chose to ignore the context that actually matters to a human being.

Whenever we have published one of these stories, the response from commenters and on social media has been primarily anger and disgust at Meta’s glasses enabling the behavior we reported on, and a rejection of the device as a concept entirely. This is not surprising to anyone who has covered technology long enough to remember the launch and quick collapse of Google Glass, so-called “glassholes,” and the device being banned from bars.

There are two things Meta’s glasses have in common with Google Glass that also make them meaningfully different from an iPhone. The first is that the iPhone might not have a recording light, but in order to record something or take a picture, a user has to take it out of their pocket and hold it out, an awkward gesture all of us have come to recognize in the almost two decades since the launch of the first iPhone. It is an unmistakable signal that someone is recording. That is not the case with Meta’s glasses, which are meant to be worn as a normal pair of glasses, and are always pointing at something or someone if you see someone wearing them in public.

In fact, the entire motivation for building these glasses is that they are discreet and seamlessly integrate into your life. The point of putting a camera in the glasses is that it eliminates the need to take an iPhone out of your pocket. People working in the augmented reality and virtual reality space have talked about this for decades. In Meta’s own promotional video for the Meta Ray-Ban Display glasses, titled “10 years in the making,” the company shows Mark Zuckerberg on stage in 2016 saying that “over the next 10 years, the form factor is just going to keep getting smaller and smaller until, and eventually we’re going to have what looks like normal looking glasses.” And in 2020, “you see something awesome and you want to be able to share it without taking out your phone.” Meta’s Ray-Ban glasses have not achieved their final form, but one thing that makes them different from Google Glass is that they are designed to look exactly like an iconic pair of glasses that people immediately recognize. People will probably notice the camera in the glasses, but they have been specifically designed to look like “normal” glasses.

Again, Meta would argue that the LED light solves this problem, but that leads me to the next important difference: Unlike the iPhone and other smartphones, one of the most widely adopted electronics in human history, only a tiny portion of the population has any idea what the fuck these glasses are. I have watched dozens of videos in which someone wearing Meta glasses is recording themselves harassing random people to boost engagement on Instagram or TikTok. Rarely do the people in the videos say anything about being recorded, and it’s very clear the women working at these massage parlors have no idea they’re being recorded. The Meta glasses have an LED light, sure, but these glasses are new, rare, and it’s not safe to assume everyone knows what that light means.

As Joseph and Jason recently reported, there are also cheap ways to modify Meta glasses to prevent the recording light from turning on. Search results, Reddit discussions, and a number of products for sale on Amazon all show that many Meta glasses customers are searching for a way to circumvent the recording light, meaning that many people are buying them to do exactly what Meta claims is not a real issue.

It is possible that in the future Meta glasses and similar devices will become so common that most people who see them will assume they are being recorded, though that is not a future I hope for. Until then, if it is at all helpful to the public relations team at Meta, these are what the glasses look like:

And this is what an iPhone looks like:
Photo by Bagus Hernawan / Unsplash
Feel free to refer to this handy guide when needed.


#ai #News #meta



We talk all about our articles on Meta’s Ray-Ban smart glasses, and AI-generated ads personalized just for you.#Podcast


Podcast: People Are Modding Meta Ray-Bans to Spy On You


We have something of a Meta Ray-Bans smart glasses bumper episode this week. We start with Joseph and Jason’s piece on a $60 mod that disables the privacy-protecting recording light in the smart glasses. After the break, Emanuel tells us how some people are abusing the glasses to film massage workers, and he explains the difference between a phone and a pair of smart glasses, if you need that spelled out for you. In the subscribers-only section, Jason tells us about the future of advertising: AI-generated ads personalized directly to you.
playlist.megaphone.fm?e=TBIEA8…
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.




The app, called Mobile Identify and available on the Google Play Store, is specifically for local and regional law enforcement agencies working with ICE on immigration enforcement.#CBP #ICE #FacialRecognition #News


DHS Gives Local Cops a Facial Recognition App To Find Immigrants


Customs and Border Protection (CBP) has publicly released an app that sheriffs’ offices, police departments, and other local or regional law enforcement agencies can use to scan someone’s face as part of immigration enforcement, 404 Media has learned.

The news follows Immigration and Customs Enforcement’s (ICE) use of another internal Department of Homeland Security (DHS) app called Mobile Fortify that uses facial recognition to nearly instantly bring up someone’s name, date of birth, alien number, and whether they’ve been given an order of deportation. The new local law enforcement-focused app, called Mobile Identify, crystallizes one of the exact criticisms that privacy and surveillance experts leveled at DHS’s facial recognition app: that this sort of powerful technology would trickle down to local law enforcement agencies, some of which have a history of making anti-immigrant comments or supporting inhumane treatment of detainees.

Handing “this powerful tech to police is like asking a 16-year-old who just failed their driver’s exams to pick a dozen classmates to hand car keys to,” Jake Laperruque, deputy director of the Center for Democracy & Technology's Security and Surveillance Project, told 404 Media. “These careless and cavalier uses of facial recognition are going to lead to U.S. citizens and lawful residents being grabbed off the street and placed in ICE detention.”

💡
Do you know anything else about this app or others that CBP and ICE are using? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

This post is for subscribers only


Become a member to get access to all content
Subscribe now




The Airlines Reporting Corporation (ARC), owned by major U.S. airlines, collects billions of ticketing records and sells them to the government to be searched without a warrant. I managed to opt-out of that data selling.#Privacy #arc


How to Opt-Out of Airlines Selling Your Travel Data to the Government


Most people probably have no idea that when you book a flight through major travel websites, a data broker owned by U.S. airlines then sells details about your flight, including your name, the credit card you used, and where you’re flying, to the government. The data broker has compiled billions of ticketing records the government can search without a warrant or court order. That data broker is called the Airlines Reporting Corporation (ARC), and, as 404 Media has shown, it sells flight data to multiple parts of the Department of Homeland Security (DHS) and a host of other government agencies, while contractually demanding those agencies not reveal where the data came from.

It turns out, it is possible to opt-out of this data selling, including to government agencies. At least, that’s what I found when I ran through the steps to tell ARC to stop selling my personal data. Here’s how I did that:

  1. I emailed privacy@arccorp.com and, not yet knowing the details of the process, simply said I wish to delete my personal data held by ARC.
  2. A few hours later the company replied with some information and what I needed to do. ARC said it needed my full name (including middle name if applicable), the last four digits of the credit card number used to purchase air travel, and my residential address.
  3. I provided that information. The following month, ARC said it was unable to delete my data because “we and our service providers require it for legitimate business purposes.” The company did say it would not sell my data to any third parties, though. “However, even though we cannot delete your data, we can confirm that we will not sell your personal data to any third party for any reason, including, but not limited to, for profiling, direct marketing, statistical, scientific, or historical research purposes,” ARC said in an email.
  4. I then followed up with ARC to ask specifically whether this included selling my travel data to the government. “Does the not selling of my data include not selling to government agencies as part of ARC’s Travel Intelligence Program or any other forms?” I wrote. The Travel Intelligence Program, or TIP, is the program ARC launched to sell data to the government. ARC updates it every day with the previous day’s ticket sales and it can show a person’s paid intent to travel.
  5. A few days later, ARC replied. “Yes, we can confirm that not selling your data includes not selling to any third party, including, but not limited to, any government agency as part of ARC’s Travel Intelligence Program,” the company said.

💡
Do you know anything else about ARC or other data being sold to government agencies? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

Honestly, I was quite surprised at how smooth and clear this process was. ARC only registered as a data broker with the state of California—a legal requirement—in June, despite selling data for years.

What I did was not a formal request under a specific piece of privacy legislation, such as the European Union’s General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA). Maybe a request to delete information under the CCPA would have more success; that law says California residents have the legal right to ask to have their personal data deleted “subject to certain exceptions (such as if the business is legally required to keep the information),” according to the California Department of Justice’s website.

ARC is owned and operated by at least eight major U.S. airlines, according to publicly released documents. Its board includes representatives from Delta, United, American Airlines, JetBlue, and Alaska Airlines, as well as Air Canada and the European airlines Air France and Lufthansa.

Public procurement records show agencies such as ICE, CBP, ATF, TSA, the SEC, the Secret Service, the State Department, the U.S. Marshals, and the IRS have purchased ARC data. Agencies have given no indication they use a search warrant or other legal mechanism to search the data. In response to inquiries from 404 Media, ATF said it follows “DOJ policy and appropriate legal processes” and the Secret Service declined to answer.

An ARC spokesperson previously told 404 Media in an email that TIP “was established by ARC after the September 11, 2001, terrorist attacks and has since been used by the U.S. intelligence and law enforcement community to support national security and prevent criminal activity with bipartisan support. Over the years, TIP has likely contributed to the prevention and apprehension of criminals involved in human trafficking, drug trafficking, money laundering, sex trafficking, national security threats, terrorism and other imminent threats of harm to the United States.” At the time, the spokesperson added “Pursuant to ARC’s privacy policy, consumers may ask ARC to refrain from selling their personal data.”




Kodak appears to be taking back control over the distribution of its film.#film #Kodak


Kodak Quietly Begins Directly Selling Kodak Gold and Ultramax Film Again


Kodak quietly acknowledged Monday that it will begin selling two famous types of film stock—Kodak Gold 200 and Kodak Ultramax 400—directly to retailers and distributors in the U.S., another indication that the historic company is taking back control over how people buy its film.

The release comes on the heels of Kodak announcing in October that it would make and sell two new film stocks, Kodacolor 100 and Kodacolor 200. On Monday, both Kodak Gold and Kodak Ultramax reappeared on Kodak’s website as film stocks that it makes and sells. When asked by 404 Media, a company spokesperson said that Kodak has “launched” these film stocks and will begin to “sell the films directly to distributors in the U.S. and Canada, giving Kodak greater control over our participation in the consumer film market.”

Unlike Kodacolor, both Kodak Gold and Kodak Ultramax have been widely available to consumers for years, but the way they were distributed made little sense and was an artifact of Kodak’s 2012 bankruptcy. Coming out of that bankruptcy, Eastman Kodak (the 133-year-old company) would continue to make film, but the exclusive rights to distribute and sell it were owned by a completely separate, UK-based company called Kodak Alaris. For the last decade, Kodak Alaris has sold Kodak Gold and Ultramax (as well as Portra and a few other film stocks made by Eastman Kodak). This setup has been confusing for consumers and perhaps served as a disincentive for Eastman Kodak to experiment with the types of films it makes, considering that it would have to license distribution out to another company.

That all seemed to change with the recent announcement of Kodacolor 100 and Kodacolor 200, Kodak’s first new still film stocks in many years. Monday’s acknowledgement that both Kodak Gold and Ultramax, which come in rebranded and redesigned boxes, will be sold directly by Eastman Kodak suggests that the company has figured out how to wrest some control of its distribution away from Kodak Alaris. Eastman Kodak told 404 Media in a statement that it has “launched” these films and that they are “Kodak-marketed versions of existing films.”

"Kodak will sell the films directly to distributors in the U.S. and Canada, giving Kodak greater control over our participation in the consumer film market,” a Kodak spokesperson said in an email. “This direct channel will provide distributors, retailers and consumers with a broader, more reliable supply and help create greater stability in a market where prices have often fluctuated.”

The company called it an “extension of Kodak’s film portfolio,” which it said “is made possible by our recent investments that increased our film manufacturing capacity and, along with the introduction of our KODAK Super 8 Camera and KODAK EKTACHROME 100D Color Reversal Film, reflects Kodak’s ongoing commitment to meeting growing demand and supporting the long-term health of the film industry.”

It is probably too soon to say how big of a deal this is, but it is at least exciting for people in the resurgent film photography hobby, who are desperate for any sign that companies are interested in launching new products, creating new types of film, or building more production capacity in an industry where film shortages and price increases have been the norm for a few years.




Lawmakers say AI-camera company Flock is violating federal law by not enforcing multi-factor authentication. 404 Media previously found Flock credentials included in infostealer infections.#Flock #News


Flock Logins Exposed In Malware Infections, Senator Asks FTC to Investigate the Company


Lawmakers have called on the Federal Trade Commission (FTC) to investigate Flock for allegedly violating federal law by not enforcing multi-factor authentication (MFA), according to a letter shared with 404 Media. The demand comes as a security researcher found Flock accounts for sale on a Russian cybercrime forum, and 404 Media found multiple instances of Flock-related credentials for government users in infostealer infections, potentially providing hackers or other third parties with access to at least parts of Flock’s surveillance network.

This post is for subscribers only


Become a member to get access to all content
Subscribe now




Cornell University’s arXiv will no longer accept computer science review articles and position papers that haven’t been peer reviewed.#News


arXiv Changes Rules After Getting Spammed With AI-Generated 'Research' Papers


arXiv, a preprint publication for academic research that has become particularly important for AI research, has announced it will no longer accept computer science review articles and position papers that haven’t been vetted by an academic journal or a conference. Why? A tide of AI slop has flooded the computer science category with low-effort papers that are “little more than annotated bibliographies, with no substantial discussion of open research issues,” according to a press release about the change.

arXiv has become a critical place for preprint and open access scientific research to be published. Many major scientific discoveries appear on arXiv before they finish the peer review process and are published in other, peer-reviewed journals. For that reason, it has become an important venue for breaking discoveries, particularly in fast-moving fields such as AI and machine learning (though preprints there sometimes get hyped and ultimately don’t survive peer review). The site is a repository of knowledge where academics upload PDFs of their latest research for public consumption. It publishes papers on physics, mathematics, biology, economics, statistics, and computer science, and the research is vetted by moderators who are subject matter experts.
playlist.megaphone.fm?p=TBIEA2…
But because of an onslaught of AI-generated research, specifically in the computer science (CS) section, arXiv is going to limit which papers can be published. “In the past few years, arXiv has been flooded with papers,” arXiv said in a press release. “Generative AI / large language models have added to this flood by making papers—especially papers not introducing new research results—fast and easy to write.”

The site noted that this was less a policy change and more about stepping up enforcement of old rules. “When submitting review articles or position papers, authors must include documentation of successful peer review to receive full consideration,” it said. “Review/survey articles or position papers submitted to arXiv without this documentation will be likely to be rejected and not appear on arXiv.”

According to the press release, arXiv has been inundated with “review” submissions, articles that survey existing research rather than present new results, and CS has been the worst-hit category. “We now receive hundreds of review articles every month,” arXiv said. “The advent of large language models have made this type of content relatively easy to churn out on demand.”

The plan is to enforce a blanket ban on unvetted review articles and position papers in the CS category, freeing the moderators to look at more substantive submissions. arXiv stressed that it does not often accept review articles, but had been doing so when a paper was of academic interest and came from a known researcher. “If other categories see a similar rise in LLM-written review articles and position papers, they may choose to change their moderation practices in a similar manner to better serve arXiv authors and readers,” arXiv said.

AI-generated research articles are a pressing problem in the scientific community. Scam academic journals that run pay-to-publish schemes are an issue that plagued academic publishing long before AI, but the advent of LLMs has supercharged it. But scam journals aren’t the only ones affected. Last year, a serious scientific journal had to retract a paper that included an AI-generated image of a giant rat penis. Peer reviewers, the people who are supposed to vet scientific papers for accuracy, have also been caught cutting corners using ChatGPT in part because of the large demands placed on their time.


#News


"Advertisers are increasingly just going to be able to give us a business objective and give us a credit card or bank account, and have the AI system basically figure out everything else."#AI #Meta #Ticketmaster


We speak to the creator of ICEBlock about Apple banning their app, and what this means for people trying to access information about ICE.#Podcast


The Crackdown on ICE Spotting Apps (with Joshua Aaron)


For this interview episode of the 404 Media Podcast, Joseph speaks to Joshua Aaron, the creator of ICEBlock. Apple recently removed ICEBlock from its App Store after direct pressure from the Department of Justice. Joshua and Joseph talk about how the idea for ICEBlock came about, Apple and Google’s broader crackdown on similar apps, and what this all means for people trying to access information about ICE.
playlist.megaphone.fm?e=TBIEA2…
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for early access to these interview episodes and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.
youtube.com/embed/WLpyObHkPqc?…