
Come learn how researchers and others learned what cops were using Flock's nationwide network of cameras for, including searches for ICE.



Our New FOIA Forum! 11/19, 1PM ET


It’s that time again! We’re planning our latest FOIA Forum, a live, interactive session of an hour or more where Joseph and Jason will teach you how to pry records from government agencies through public records requests. It’s happening Wednesday, November 19th at 1 PM Eastern. That's just over a week away! Add it to your calendar!

This time we're focused on our coverage of Flock, the automatic license plate reader (ALPR) and surveillance tech company. Earlier this year, anonymous researchers had the great idea of asking agencies for the network audit, which shows why cops were using these cameras. Following that, we did a bunch of coverage, including showing that local police were performing lookups for ICE in Flock's nationwide network of cameras, and that a cop in Texas searched the country for a woman who self-administered an abortion. We'll tell you how all of this came about, what other requests people filed afterward, and what requests we're exploring at the moment with Flock.

If this will be your first FOIA Forum, don’t worry, we will do a quick primer on how to file requests (although if you do want to watch our previous FOIA Forums, the video archive is here). We really love talking directly to our community about something we are obsessed with (getting documents from governments) and showing other people how to do it too.

Paid subscribers can already find the link to join the livestream below. We'll also send out a reminder a day or so before. Not a subscriber yet? Sign up now here in time to join.

We've got a bunch of FOIAs that we need to file and are keen to hear from you all on what you want to see more of. Most of all, we want to teach you how to make your own too. Please consider coming along!





Podcast: A Massive Archiving Effort at National Parks (with Jenny McBurney and Lynda Kellam)


If you’ve been to a national park in the U.S. recently, you might have noticed some odd new signs about “beauty” and “grandeur.” Or, some signs you were used to seeing might now be missing completely. An executive order issued earlier this year put the history and educational aspects of the parks system under threat, but a group of librarians stepped in to save it.

This week we have a conversation between Sam and two of the leaders of the independent volunteer archiving project Save Our Signs, an effort to archive national park signs and monument placards. It’s a community collaboration project co-founded by a group of librarians, public historians, and data experts in partnership with the Data Rescue Project and Safeguarding Research & Culture.
Lynda Kellam leads the Research Data and Digital Scholarship team at the University of Pennsylvania Libraries and is a founding organizer of the Data Rescue Project. Jenny McBurney is the Government Publications Librarian and Regional Depository Coordinator at the University of Minnesota Libraries. In this episode, they discuss turning “frustration, dismay and disbelief” at parks history under threat into action: compiling more than 10,000 images from over 300 national parks into a database to be preserved for the people.

Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube.

Become a paid subscriber for early access to these interview episodes and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.



Ypsilanti, Michigan has officially decided to fight against the construction of a 'high-performance computing facility' that would service a nuclear weapons laboratory 1,500 miles away.



A Small Town Is Fighting a $1.2 Billion AI Datacenter for America's Nuclear Weapon Scientists


Ypsilanti, Michigan resident KJ Pedri doesn’t want her town to be the site of a new $1.2 billion data center, a massive collaborative project between the University of Michigan and America’s nuclear weapons scientists at Los Alamos National Laboratories (LANL) in New Mexico.

“My grandfather was a rocket scientist who worked on Trinity,” Pedri said at a recent Ypsilanti city council meeting, referring to the first successful detonation of a nuclear bomb. “He died a violent, lonely, alcoholic. So when I think about the jobs the data center will bring to our area, I think about the impact of introducing nuclear technology to the world and deploying it on civilians. And the impact that that had on my family, the impact on the health and well-being of my family from living next to a nuclear test site and the spiritual impact that it had on my family for generations. This project is furthering inhumanity, this project is furthering destruction, and we don’t need more nuclear weapons built by our citizens.”
At the Ypsilanti city council meeting where Pedri spoke, the town voted to officially fight against the construction of the data center. The University of Michigan says the project is not a data center, but a “high-performance computing facility” and it promises it won’t be used to “manufacture nuclear weapons.” The distinction and assertion are ringing hollow for Ypsilanti residents who oppose construction of the data center, have questions about what it would mean for the environment and the power grid, and want to know why a nuclear weapons lab 24 hours away by car wants to build an AI facility in their small town.

“What I think galls me the most is that this major institution in our community, which has done numerous wonderful things, is making decisions with—as I can tell—no consideration for its host community and no consideration for its neighboring jurisdictions,” Ypsilanti councilman Patrick McLean said during a recent council meeting. “I think the process of siting this facility stinks.”

For others on the council, the fight is more personal.

“I’m a Japanese American with strong ties to my family in Japan and the existential threat of nuclear weapons is not lost on me, as my family has been directly impacted,” Amber Fellows, a Ypsilanti Township councilmember who led the charge in opposition to the data center, told 404 Media. “The thing that is most troubling about this is that the nuclear weapons that we, as Americans, witnessed 80 years ago are still being proliferated and modernized without question.”

It’s a classic David and Goliath story. On one side is Ypsilanti (called Ypsi by its residents), which has a population just north of 20,000 and sits about 40 minutes outside of Detroit. On the other are the University of Michigan and LANL, whose scientists are famous for nuclear weapons and, lately, for pushing the boundaries of AI.

The University of Michigan first announced the Los Alamos data center, what it called an “AI research facility,” last year. According to a press release from the university, the data center will cost $1.25 billion and take up between 220,000 and 240,000 square feet. “The university is currently assessing the viability of locating the facility in Ypsilanti Township,” the press release said.
Signs in an Ypsilanti yard.
On October 21, the Ypsilanti City Council considered a proposal to officially oppose the data center and the people of the area explained why they wanted it passed. One woman cited environmental and ethical concerns. “Third is the moral problem of having our city resources towards aiding the development of nuclear arms,” she said. “The city of Ypsilanti has a good track record of being on the right side of history and, more often than not, does the right thing. If this resolution passed, it would be a continuation of that tradition.”

A man worried about what the facility would do to the physical health of citizens and talked about what happened in other communities where data centers were built. “People have poisoned air and poisoned water and are getting headaches from the generators,” he said. “There’s also reports around the country of energy bills skyrocketing when data centers come in. There’s also reports around the country of local grids becoming much less reliable when the data centers come in…we don’t need to see what it’s like to have a data center in Ypsi. We could just not do that.”

The resolution passed.

Ypsi has a lot of reasons to be concerned. Data centers tend to bring rising power bills, horrible noise, and dwindling drinking water to every community they touch. “The fact that U of M is using Ypsilanti as a dumping ground, a sacrifice zone, is unacceptable,” Fellows said.

Ypsi’s resolution focused on a different angle though: nuclear weapons. “The Ypsilanti City Council strongly opposes the Los Alamos-University of Michigan data center due to its connections to nuclear weapons modernization and potential environmental harms and calls for a complete and permanent cessation of all efforts to build this data center in any form,” the resolution said.

As part of the resolution, Ypsilanti Township is applying to join the Mayors for Peace initiative, an international organization of cities opposed to nuclear weapons and founded by the former mayor of Hiroshima. Fellows learned about Mayors for Peace when she visited Hiroshima last year.



This town has officially decided to fight against the construction of an AI data center that would service a nuclear weapons laboratory 1,500 miles away. Amber Fellows, a Ypsilanti Township councilmember, tells us why. Via 404 Media on Instagram

Both LANL and the University of Michigan have been vague about what the data center will be used for, but have said it will include one facility for classified federal research and another for non-classified research, which students and faculty will have access to. “Applications include the discovery and design of new materials, calculations on climate preparedness and sustainability,” the university said in an FAQ about the data center. “Industries such as mobility, national security, aerospace, life sciences and finance can benefit from advanced modeling and simulation capabilities.”

The university FAQ said that the data center will not be used to manufacture nuclear weapons. “Manufacturing” nuclear weapons specifically refers to their creation, something that’s hard to do and only occurs at a handful of specialized facilities across America. I asked both LANL and the University of Michigan if the data generated by the facility would be used in nuclear weapons science in any way. Neither answered the question.

“The federal facility is for research and high-performance computing,” the FAQ said. “It will focus on scientific computation to address various national challenges, including cybersecurity, nuclear and other emerging threats, biohazards, and clean energy solutions.”

LANL is going all in on AI. It partnered with OpenAI to use the company’s frontier models in research and recently announced a partnership with NVIDIA to build two new supercomputers named “Mission” and “Vision.” It’s true that LANL’s scientific output covers a range of issues, but its overwhelming focus, and budget allocation, is nuclear weapons. LANL requested a budget of $5.79 billion for 2026; 84 percent of that is earmarked for nuclear weapons. Only $40 million of the LANL budget is set aside for “science,” according to government documents.

💡
Do you know anything else about this story? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at matthew@404media.co.

“The fact is we don’t really know because Los Alamos and U of M are unwilling to spell out exactly what’s going to happen,” Fellows said. When LANL declined to comment for this story, it told 404 Media to direct its question to the University of Michigan.

The university pointed 404 Media to the FAQ page about the project. “You'll see in the FAQs that the locations being considered are not within the city of Ypsilanti,” it said.

It’s an odd statement given that this is what’s in the FAQ: “The university is currently assessing the viability of locating the facility in Ypsilanti Township on the north side of Textile Road, directly across the street from the Ford Rawsonville Components plant and adjacent to the LG Energy Solutions plant.”

It’s true that this is not technically in the city of Ypsilanti but rather Ypsilanti Township, a collection of communities that almost entirely surrounds the city itself. For Fellows, it’s a distinction without a difference. “[University of Michigan] can build it in Barton Hills and see how the city of Ann Arbor feels about it,” she said, referencing a village that borders the university's home city of Ann Arbor.

“The university has, and will continue to, explore other sites if they are viable in the timeframe needed for successful completion of the project,” Kay Jarvis, the university’s director of public affairs, told 404 Media.

Fellows said that Ypsilanti will fight the data center with everything it has. “We’re putting pressure on the Ypsi township board to use whatever tools they have to deny permits…and to stand up for their community,” she said. “We’re also putting pressure on the U of M board of trustees, the county, our state legislature that approved these projects and funded them with public funds. We’re identifying all the different entities that have made this project possible so far and putting pressure on them to reverse action.”

For Fellows, the fight is existential. It’s not just about the environmental concerns around the construction project. “I was under the belief that the prevailing consensus was that nuclear weapons are wrong and they should be drawn down as fast as possible. I’m trying to use what little power I have to work towards that goal,” she said.




New research “suggests that dark energy may no longer be a cosmological constant” and that the universe’s expansion is slowing down.


A Fundamental ‘Constant’ of the Universe May Not Be Constant At All, Study Finds


Welcome back to the Abstract! Here are the studies this week that took a bite out of life, appealed to the death drive, gave a yellow light to the universe, and produced hitherto unknown levels of cute.

First, it’s the most epic ocean battle: orcas versus sharks (pro tip: you don’t want to be sharks). Then, a scientific approach to apocalyptic ideation; curbing cosmic enthusiasm; and last, the wonderful world of tadpole-less toads.

As always, for more of my work, check out my book First Contact: The Story of Our Obsession with Aliens, or subscribe to my personal newsletter the BeX Files.

Now, to the feast!

I guess that’s why they call them killer whales


Higuera-Rivas, Jesús Erick et al. “Novel evidence of interaction between killer whales (Orcinus orca) and juvenile white sharks (Carcharodon carcharias) in the Gulf of California, Mexico.” Frontiers in Marine Science.

Orcas kill young great white sharks by flipping them upside down and tearing their livers out of their bellies, which they then eat family-style, according to a new study that includes new footage of these Promethean interactions in Mexican waters.

“Here we document novel repeated predations by killer whales on juvenile white sharks in the Gulf of California,” said researchers led by Jesús Erick Higuera Rivas of the non-profit Pelagic Protection and Conservation AC.

“Aerial videos indicate consistency in killer whales’ repeated assaults and strikes on the sharks,” the team added. “Once extirpated from the prey body, the target organ is shared between the members of the pods including calves.”
Sequence of the killer whales attacking the first juvenile white shark (Carcharodon carcharias) on August 15, 2020. In (d), the partially exposed liver is seen on the right side of the second shark attacked. Photo credit: Jesús Erick Higuera Rivas.

I’ll give you a beat to let that sink in, like orca teeth on the belly of a shark. While it's well-established that orcas are the only known predator of great white sharks aside from humans, the new study is only the second glimpse of killer whales targeting juvenile sharks.

This group of orcas, known as Moctezuma’s pod, has developed an effective strategy of working together to flip the sharks over, which interrupts the sharks’ sensory system and puts them into a state called tonic immobility. The authors describe the pod’s work as methodical and well coordinated.

“Our evidence undoubtedly shows consistency in the repeated assaults and strikes, indicating efficient maneuvering ability by the killer whales in attempting to turn the shark upside down, likely to induce tonic immobility and allow uninterrupted access to the organs for consumption,” the team said. Previous reports suggest that “the lack of bite marks or injuries anywhere other than the pectoral fins shows a novel and specialized technique of accessing the liver of the shark with minimal handling of each individual.”

An orca attacking a juvenile great white shark. Image: Marco Villegas

Sharks, by the way, do not attack orcas. Just the opposite. As you can imagine based on the horrors you have just read, sharks are so petrified of killer whales that they book it whenever they sense a nearby pod.

“Adult white sharks exhibit a memory and previous knowledge about killer whales, which enables them to activate an avoidance mechanism through behavioral risk effects; a ‘fear’-induced mass exodus from aggregation sites,” the team said. “This response may preclude repeated successful predation on adult white sharks by killer whales.”

In other words, if you’re a shark, one encounter with orcas is enough to make you watch your dorsal side for life—assuming you were lucky enough to escape with it.

In other news…

Apocalypse now plz


Albrecht, Rudolf et al. “Geopolitical, Socio-Economic and Legal Aspects of the 2024PDC25 Event.” Acta Astronautica.

You may have seen the doomer humor meme to “send the asteroid already,” a plea for sweet cosmic relief that fits our beleaguered times. As it turns out, some scientists engage in this type of apocalyptic wish fulfillment professionally.

Planetary defense experts often participate in drills involving fictional hazardous asteroids, such as 2024PDC25, a virtual object “discovered” at the 2025 Planetary Defense Conference. In that simulation, 2024PDC25 had a possible impact date in 2041.

Now a team has used that exercise as a jumping-off point to explore what might happen if it hit even earlier, channeling that “send the asteroid already” energy. The researchers used this time-crunched scenario to speculate about the effect on geopolitics and pivotal events, such as the 2028 US presidential elections.

“As it is very difficult to extrapolate from 2025 across 16 years in this ‘what-if’ exercise, we decided to bring the scenario forward to 2031 and examine it with today’s global background,” said Rudolf Albrecht of the Austrian Space Forum. “Today would be T-6 years and the threat is becoming immediate.”

As the astro-doomers would say: Finally some good news.

Big dark energy


Son, Junhyuk et al. “Strong progenitor age bias in supernova cosmology – II. Alignment with DESI BAO and signs of a non-accelerating universe.” Monthly Notices of the Royal Astronomical Society.

First, we discovered the universe was expanding. Then, we discovered it was expanding at an accelerating rate. Now, a new study suggests that this acceleration might be slowing down. Universe, make up your mind!

But seriously, the possibility that the rate of cosmic expansion is slowing is a big deal, because dark energy—the term for whatever is making the universe expand—was assumed to be a constant for decades. But this consensus has been challenged by observations from the Dark Energy Spectroscopic Instrument (DESI) in Arizona, which became operational in 2021. In its first surveys, DESI’s observations have pointed to an expansion rate that is not fixed, but in flux.

Together with past results, the study “suggests that dark energy may no longer be a cosmological constant” and “our analysis raises the possibility that the present universe is no longer in a state of accelerated expansion,” said researchers led by Junhyuk Son of Yonsei University. “This provides a fundamentally new perspective that challenges the two central pillars of the [cold dark matter] standard cosmological model proposed 27 years ago.”

It will take more research to constrain this mystery, but for now it’s a reminder that the universe loves to surprise.

And the award for most squee goes to…


Thrane, Christian et al. “Museomics and integrative taxonomy reveal three new species of glandular viviparous tree toads (Nectophrynoides) in Tanzania’s Eastern Arc Mountains (Anura: Bufonidae).” Vertebrate Zoology.

We’ll end, as all things should, with toadlets. Most frogs and toads reproduce by laying eggs that hatch into tadpoles, but scientists have discovered three new species of toad in Tanzania that give birth to live young—a very rare adaptation for any amphibian, known as ovoviviparity. The scientific term for these youngsters is in fact “toadlet.” Gods be good.

“We describe three new species from the Nectophrynoides viviparus species complex, covering the southern Eastern Arc Mountains populations,” said researchers led by Christian Thrane of the University of Copenhagen. One of the new species included “the observation of toadlets, suggesting that this species is ovoviviparous.”
One of the newly described toad species, Nectophrynoides luhomeroensis. Image: John Lyarkurwa.

Note to Nintendo: please make a very tiny Toadlet into a Mario Kart racer.

Thanks for reading! See you next week.




Behind the Blog: Paywall Jumping and Smart Glasses


This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss archiving to get around paywalls, hating on smart glasses, and more.

JASON: I was going to try to twist myself into knots attempting to explain the throughline between my articles this week, and about how I’ve been thinking about the news and our coverage more broadly. This was going to be something about trying to promote analog media and distinctly human ways of communicating (like film photography), while highlighting the very bad economic and political incentives pushing us toward fundamentally dehumanizing, anti-human methods of communicating. Like fully automated, highly customized and targeted AI ads, automated library software, and I guess whatever Nancy Pelosi has been doing with her stock portfolio. But then I remembered that I blogged about the FBI’s subpoena against archive.is, a website I feel very ambivalent about and one that is the subject of perhaps my most cringe blog of all time.

So let’s revisit that cringe blog, which was called “Dear GamerGate: Please Stop Stealing Our Shit.” I wrote this article in 2014, which was fully 11 years ago, which is alarming to me. First things first: They were not stealing from me; they were stealing from VICE, a company at which I did not directly experience financial gains related to people reading articles. It was good if people read my articles and traffic was very important, and getting traffic over time led to me getting raises and promotions and stuff, but the company made very, very clear that we did not “own” the articles and therefore they were not “mine” in the way that they are now. With that out of the way, the reporting and general reason for the article was, I think, good, but the tone of it is kind of wildly off, and, as I mentioned, over the course of many years I have now come to regard archive.is as sort of an integral archiving tool. If you are unfamiliar with archive.is, it’s a site that takes snapshots of any URL and creates a new link for them which, notably, does not go to the original website. Archive.is is extremely well known for bypassing the paywalls on many sites, 404 Media sometimes but not usually among them.




Early humans crafted the same tools for hundreds of thousands of years, offering an unprecedented glimpse of a continuous tradition that may push back the origins of technology.


Advanced 2.5 Million-Year-Old Tools May Rewrite Human History


🌘
Subscribe to 404 Media to get The Abstract, our newsletter about the most exciting and mind-boggling science news and studies of the week.

After a decade-long excavation at a remote site in Kenya, scientists have unearthed evidence that our early human relatives continuously fashioned the same tools across thousands of generations, hinting that sophisticated tool use may have originated much earlier than previously known, according to a new study in Nature Communications.

The discovery of nearly 1,300 artifacts—with ages that span 2.44 to 2.75 million years old—reveals that the influential Oldowan tool-making tradition existed across at least 300,000 years of turbulent environmental shifts. The wealth of new tools from Kenya’s Namorotukunan site suggests that their makers adapted to major environmental changes in part by passing technological knowledge down through the ages.

“The question was: did they generally just reinvent the [Oldowan tradition] over and over again? That made a lot of sense when you had a record that was kind of sporadic,” said David R. Braun, a professor of anthropology at the George Washington University who led the study, in a call with 404 Media.

“But the fact that we see so much similarity between 2.4 and 2.75 [million years ago] suggests that this is generally something that they do,” he continued. “Some of it may be passed down through social learning, like observation of others doing it. There’s some kind of tradition that continues on for this timeframe that would argue against this idea of just constantly reinventing the wheel.”

Oldowan tools, which date back at least 2.75 million years, are distinct from earlier traditions in part because hominins, the broader family to which humans belong, specifically sought out high-quality materials such as chert and quartz to craft sharp-edged cutting and digging tools. This advancement allowed them to butcher large animals, like hippos, and possibly dig for underground food sources.

When Braun and his colleagues began excavating at Namorotukunan in 2013, they found many artifacts made of chalcedony, a fine-grained rock that is typically associated with much later tool-making traditions. To the team’s surprise, the rocks were dated to periods as early as 2.75 million years ago, making them among the oldest artifacts in the Oldowan record.

“Even though Oldowan technology is really just hitting one rock against the other, there's good and bad ways of doing it,” Braun explained. “So even though it's pretty simple, what they seem to be figuring out is where to hit the rock, and which angles to select. They seem to be getting a grip on that—not as well as later in time—but they're definitely getting an understanding at this timeframe.”
Some of the Namorotukunan tools. Image: Koobi Fora Research and Training Program
The excavation was difficult: it takes several days just to reach the remote off-road site, and much of the work involved tiptoeing along steep outcrops. Braun joked that their auto mechanic lined up all the vehicle shocks that had been broken during the drive each season, as a testament to the challenge.

But by the time the project finally concluded in 2022, the researchers had established that Oldowan tools were made at this site over the course of 300,000 years. During this span, the landscape of Namorotukunan shifted from lush humid forests to arid desert shrubland and back again. Despite these destabilizing shifts in their climate and biome, the hominins that made these tools endured in part because this technology opened up new food sources to them, such as the carcasses of large animals.

“The whole landscape really shifts,” Braun said. “But hominins are able to basically ameliorate those rapid changes in the amount of rainfall and the vegetation around by using tools to adapt to what’s happening.”

“That's a human superpower—it’s that ability we have to keep this information stored in our collective heads, so that when new challenges show up, there's somebody in our group that remembers how to deal with this particular adaptation,” he added.

It’s not clear exactly which species of hominin made the tools at Namorotukunan; it may have been early members of our own genus Homo, or other relatives, like Australopithecus afarensis, that later went extinct. Regardless, the discovery of such a long-lived and continuous assemblage may hint that the origins of these tools are much older than we currently know.

“I think that we're going to start to find tool use much earlier” perhaps “going back five, six, or seven million years,” Braun said. “That’s total speculation. I've got no evidence that that's the case. But judging from what primates do, I don't really understand why we wouldn't see it.”

To that end, the researchers plan to continue excavating these bygone landscapes to search for more artifacts and hominin remains that could shed light on the identity of these tool makers, probing the origins of these early technologies that eventually led to humanity’s dominance on the planet.

“It's possible that this tool use is so diverse and so different from our expectations that we have blinders on,” Braun concluded. “We have to open our search for what tool use looks like, and then we might start to see that they're actually doing a lot more of it than we thought they were.”


"Fascism and AI, whether or not they have the same goals, they sure are working to accelerate one another."#AI #libraries


AI Is Supercharging the War on Libraries, Education, and Human Knowledge


This story was reported with support from the MuckRock Foundation.

Last month, a company called the Children’s Literature Comprehensive Database announced a new version of a product called Class-Shelf Plus. The software, which is used by school libraries to keep track of which books are in their catalog, added several new features including “AI-driven automation and contextual risk analysis,” which includes an AI-powered “sensitive material marker” and a “traffic-light risk ratings” system. The company says that it believes this software will streamline the arduous task school libraries face when trying to comply with legislation that bans certain books and curricula: “Districts using Class-Shelf Plus v3 may reduce manual review workloads by more than 80%, empowering media specialists and administrators to devote more time to instructional priorities rather than compliance checks,” it said in a press release.

In a white paper, CLCD gave a “real-world example: the role of CLCD in overcoming a book ban.” The paper then describes something that does not sound like “overcoming” a book ban at all: CLCD’s software simply suggested other books “without the contested content.”

Ajay Gupte, the president of CLCD, told 404 Media the software is simply being piloted at the moment, but that it “allows districts to make the majority of their classroom collections publicly visible—supporting transparency and access—while helping them identify a small subset of titles that might require review under state guidelines.” He added that “This process is designed to assist districts in meeting legislative requirements and protect teachers and librarians from accusations of bias or non-compliance [...] It is purpose-built to help educators defend their collections with clear, data-driven evidence rather than subjective opinion.”

Librarians told 404 Media that AI library software like this is just the tip of the iceberg; they are being inundated with new pitches for AI library tech and catalogs are being flooded with AI slop books that they need to wade through. But more broadly, AI maximalism across society is supercharging the ideological war on libraries, schools, government workers, and academics.

CLCD’s Class-Shelf Plus is a small but instructive example of something that librarians and educators have been telling me: The boosting of artificial intelligence by big technology firms, big financial firms, and government agencies is not separate from book bans, educational censorship efforts, and the war on education, libraries, and government workers being pushed by groups like the Heritage Foundation and any number of MAGA groups across the United States. This long-running war on knowledge and expertise has sown the ground for the narratives widely used by AI companies and the CEOs adopting their products. Human labor, inquiry, creativity, and expertise are spurned in the name of “efficiency.” With AI, there is no need for human expertise because anything can be learned, approximated, or created in seconds. And with AI, there is less room for nuance in things like classifying or tagging books to comply with laws; an LLM or a machine algorithm can decide whether content is “sensitive.”

“I see something like this, and it’s presented as very value neutral, like, ‘Here’s something that is going to make life easier for you because you have all these books you need to review,’” Jaime Taylor, discovery & resource management systems coordinator for the W.E.B. Du Bois Library at the University of Massachusetts told me in a phone call. “And I look at this and immediately I am seeing a tool that’s going to be used for censorship because this large language model is ingesting all the titles you have, evaluating them somehow, and then it might spit out an inaccurate evaluation. Or it might spit out an accurate evaluation and then a strapped-for-time librarian or teacher will take whatever it spits out and weed their collections based on it. It’s going to be used to remove books from collections that are about queerness or sexuality or race or history. But institutions are going to buy this product because they have a mandate from state legislatures to do this, or maybe they want to do this, right?”

The resurgent war on knowledge, academics, expertise, and critical thinking that AI is currently supercharging has its roots in the hugely successful recent war on “critical race theory,” “diversity, equity, and inclusion,” and LGBTQ+ rights that painted librarians, teachers, scientists, and public workers as untrustworthy. This has played out across the board, with a seemingly endless number of ways in which the AI boom directly intersects with the right’s war on libraries, schools, academics, and government workers. There are DOGE’s mass layoffs of “woke” government workers, and the plan to replace them with AI agents and supposed AI-powered efficiencies. There are “parents’ rights” groups that pushed to ban books and curricula that deal with the teaching of slavery, systemic racism, and LGBTQ+ issues and attempted to replace them with homogenous curricula and “approved” books that teach one specific type of American history and American values; and there are the AI tools that have been altered to not be “woke” and to reinforce the types of things the administration wants you to think. Many teachers feel they are not allowed to teach about slavery or racism and increasingly spend their days grading student essays that were actually written by robots.

“One thing that I try to make clear any time I talk about book bans is that it’s not about the books, it’s about deputizing bigots to do the ugly work of defunding all of our public institutions of learning,” Maggie Tokuda-Hall, a cofounder of Authors Against Book Bans, told me. “The current proliferation of AI that we see particularly in the library and education spaces would not be possible at the speed and scale that is happening without the precedent of book bans leading into it. They are very comfortable bedfellows because once you have created a culture in which all expertise is denigrated and removed from the equation and considered nonessential, you create the circumstances in which AI can flourish.”

Justin, a cohost of the podcast librarypunk, told me that offloading cognitive capacity to AI is “part of a fascist project to offload the work of thinking, especially the reflective kind of thinking that reading, study, and community engagement provide.” “That kind of thinking cultivates empathy and challenges your assumptions,” Justin said. “It's also something you have to practice. If we can offload that cognitive work, it's far too easy to become reflexive and hateful, while having a robot cheerleader telling you that you were right about everything all along.”

These two forces—the war on libraries, classrooms, and academics and AI boosterism—are not working in a vacuum. The Heritage Foundation’s right-wing agenda for remaking the federal government, Project 2025, talks about criminalizing teachers and librarians who “poison our own children” and pushing artificial intelligence into every corner of the government for data analysis and “waste, fraud, and abuse” detection.

Librarians, teachers, and government workers have had to spend an increasing amount of their time and emotional bandwidth defending the work that they do, fighting against censorship efforts and dealing with the associated stress, harassment, and threats that come from fighting educational censorship. Meanwhile, they are separately dealing with an onslaught of AI slop and the top-down mandated AI-ification of their jobs; there are simply fewer and fewer hours to do what they actually want to be doing, which is helping patrons and students.

“The last five years of library work, of public service work has been a nightmare, with ongoing harassment and censorship efforts that you’re either experiencing directly or that you’re hearing from your other colleagues,” Alison Macrina, executive director of Library Freedom Project, told me in a phone interview. “And then in the last year-and-a-half or so, you add to it this enormous push for the AIfication of your library, and the enormous demands on your time. Now you have these already overworked public servants who are being expected to do even more because there’s an expectation to use AI, or that AI will do it for you. But they’re dealing with things like the influx of AI-generated books and other materials that are being pushed by vendors.”

The future being pushed by both AI boosters and educational censors is one where access to information is tightly controlled. Children will not be allowed to read certain books or learn certain narratives. “Research” will be performed only through one of a select few artificial intelligence tools owned by AI giants which are uniformly aligned behind the Trump administration and which have gone to the ends of the earth to prevent their black box machines from spitting out “woke” answers lest they catch the ire of the administration. School boards and library boards, forced to comply with increasingly restrictive laws, funding cuts, and the threat of being defunded entirely, leap at the chance to be considered forward-looking by embracing AI tools, or apply for grants from government groups like the Institute of Museum and Library Services (IMLS), which is increasingly giving out grants specifically to AI projects.

We previously reported that the ebook service Hoopla, used by many libraries, has been flooded with AI-generated books (the company has said it is trying to cull these from its catalog). In a recent survey of librarians, Macrina’s organization found that librarians are getting inundated with pitches from AI companies and are being pushed by their superiors to adopt AI: “People in the survey results kept talking about, like, I get 10 aggressive, pushy emails a day from vendors demanding that I implement their new AI product or try it, jump on a call. I mean, the burdens have become so much, I don’t even know how to summarize them.”

“Fascism and AI, whether or not they have the same goals, they sure are working to accelerate one another"


Macrina said that in response to Library Freedom Project’s recent survey, librarians said that misinformation and disinformation was their biggest concern. This came not just in the form of book bans and censorship but also in efforts to proactively put disinformation and right-wing talking points into libraries: “It’s not just about book bans, and library board takeovers, and the existing reactionary attacks on libraries. It’s also the effort to push more far-right material into libraries,” she said. “And then you have librarians who are experiencing a real existential crisis because they are getting asked by their jobs to promote [AI] tools that produce more misinformation. It's the most, like, emperor-has-no-clothes-type situation that I have ever witnessed.”

Each person I spoke to for this article told me they could talk about the right-wing project to erode trust in expertise, and the way AI has amplified this effort, for hours. In writing this article, I realized that I could endlessly tie much of our reporting on attacks on civil society and human knowledge to the force multiplier that is AI and the AI maximalist political and economic project. One need look no further than Grokipedia as one of the many recent reminders of this effort—a project by the world’s richest man and perhaps its most powerful right-wing political figure to replace a crowdsourced, meticulously edited fount of human knowledge with a robotic imitation built to further his political project.

Much of what we write about touches on this: The plan to replace government workers with AI, the general erosion of truth on social media, the rise of AI slop that “feels” true because it reinforces a particular political narrative but is not true, the fact that teachers feel like they are forced to allow their students to use AI. Justin, from librarypunk, said AI has given people “absolute impunity to ignore reality […] AI is a direct attack on the way we verify information: AI both creates fake sources and obscures its actual sources.”

That is the opposite of what librarians do, and teachers do, and scientists do, and experts do. But the political project to devalue the work these professionals do, and the incredible amount of money invested in pushing AI as a replacement for that human expertise, have worked in tandem to create a horrible situation for all of us.

“AI is an agreement machine, which is anathema to learning and critical thinking,” Tokuda-Hall said. “Previously we have had experts like librarians and teachers to help them do these things, but they have been hamstrung and they’ve been attacked and kneecapped and we’ve created a culture in which their contribution is completely erased from society, which makes something like AI seem really appealing. It’s filling that vacuum.”

“Fascism and AI, whether or not they have the same goals, they sure are working to accelerate one another,” she added.


The FBI has subpoenaed the domain registrar of archive.today, demanding information about the owner.#fbi #Archiveis


FBI Tries to Unmask Owner of Infamous Archive.is Site


The FBI is attempting to unmask the owner behind archive.today, a popular archiving site that is also regularly used to bypass paywalls on the internet and to avoid sending traffic to the original publishers of web content, according to a subpoena posted by the website. The FBI subpoena says it is part of a criminal investigation, though it does not provide any details about what alleged crime is being investigated. Archive.today is also popularly known by several of its mirrors, including archive.is and archive.ph.

This post is for subscribers only




Nancy Pelosi’s trades over the years have been so good that a startup was created to allow investors to directly mirror her portfolio. #Economics #NancyPelosi


One of the Greatest Wall Street Investors of All Time Announces Retirement


Nancy Pelosi, one of Wall Street’s all-time great investors, announced her retirement Thursday.

Pelosi, so renowned for her ability to outpace the S&P 500 that dozens of websites and apps spawned to track her seemingly preternatural ability to make smart stock trades, said she will retire after the 2024-2026 season. Pelosi’s trades over the years, many done through her husband and investing partner Paul Pelosi, have been so good that an entire startup, called Autopilot, was started to allow investors to directly mirror Pelosi’s portfolio.

According to the site, more than 3 million people have invested more than $1 billion using the app. After 38 years, Pelosi will retire from the league—a somewhat normal career length as investors, especially on Pelosi’s team, have decided to stretch their careers later and later into their lives.

The numbers put up by Pelosi in her Hall of Fame career are undeniable. Over the last decade, Pelosi’s portfolio returned an incredible 816 percent, according to public disclosure records. The S&P 500, meanwhile, has returned roughly 229 percent. Awe-inspired fans and analysts theorized that her almost omniscient ability to make correct, seemingly high-risk stock decisions may have stemmed from decades spent analyzing and perhaps even predicting decisions that would be made by the federal government that could impact companies’ stock prices. For example, Paul Pelosi sold $500,000 worth of Visa stock in July, weeks before the U.S. government announced a civil lawsuit against the company, causing its stock price to decrease.
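Those decade-long figures are easier to compare when annualized. As a back-of-the-envelope illustration (our own arithmetic, not from any disclosure filing), the cumulative returns cited above convert into compound annual growth rates like this:

```python
# Convert a cumulative return over a period into a compound annual
# growth rate (CAGR). The 816% and ~229% decade figures come from the
# text above; the annualized numbers are illustrative arithmetic.

def annualized(total_return: float, years: float) -> float:
    """total_return is fractional: 8.16 means +816%."""
    return (1 + total_return) ** (1 / years) - 1

pelosi = annualized(8.16, 10)  # 816% over ten years
sp500 = annualized(2.29, 10)   # ~229% over ten years

print(f"Pelosi portfolio: {pelosi:.1%} per year")
print(f"S&P 500:          {sp500:.1%} per year")
```

Even annualized, the gap is stark: roughly 25 percent compound growth per year against roughly 13 percent for the index.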

Besides Autopilot and numerous Pelosi stock trade trackers, there have also been several exchange-traded funds (ETFs) set up that allow investors to directly model their portfolios on Pelosi and her trades. Related funds, such as The Subversive Democratic Trading ETF (NANC, for Nancy), set up by the Unusual Whales investment news Twitter account, seek to allow investors to diversify their portfolios by tracking the trades of not just Pelosi but also some of her colleagues, including those on the other team, who have also proven to be highly gifted stock traders.
Fans of Pelosi spent much of Thursday admiring her career, and wondering what comes next: “Farewell to one of the greatest investors of all time,” the top post on Reddit’s Wall Street Bets community reads. The sentiment has more than 24,000 upvotes at the time of publication. Fans will spend years debating in bars whether Pelosi was the GOAT; some investors have noted that in recent years, some of her contemporaries, like Marjorie Taylor Greene, Ro Khanna, and Michael McCaul, have put up gaudier numbers. There are others who say the league needs reformation, with some of Pelosi’s colleagues saying they should stop playing at all, and many fans agreeing with that sentiment. Despite the controversy, many of her colleagues have committed to continue playing the game.

Pelosi said Thursday that this season would be her last, but like other legends who have gone out on top, it seems she is giving it her all until the end. Just weeks ago, she sold between $100,000 and $250,000 of Apple stock, according to a public box score.

“We can be proud of what we have accomplished,” Pelosi said in a video announcing her retirement. “But there’s always much more work to be done.”



Meta thinks its camera glasses, which are often used for harassment, are no different than any other camera.#News #Meta #AI


What’s the Difference Between AI Glasses and an iPhone? A Helpful Guide for Meta PR


Over the last few months 404 Media has covered some concerning but predictable uses for the Ray-Ban Meta glasses, which are equipped with a built-in camera, and for some models, AI. Aftermarket hobbyists have modified the glasses to add a facial recognition feature that could quietly dox whatever face a user is looking at, and they have been worn by CBP agents during the immigration raids that have come to define a new low for human rights in the United States. Most recently, exploitative Instagram users filmed themselves asking workers at massage parlors for sex and shared those videos online, a practice that experts told us put those workers’ lives at risk.

404 Media reached out to Meta for comment for each of these stories, and in each case Meta’s rebuttal was a mind-bending argument: What is the difference between Meta’s Ray-Ban glasses and an iPhone, really, when you think about it?

“Curious, would this have been a story had they used the new iPhone?” a Meta spokesperson asked me in an email when I reached out for comment about the massage parlor story.

Meta’s argument is that our recent stories about its glasses are not newsworthy because we wouldn’t bother writing them if the videos in question were filmed with an iPhone as opposed to a pair of smart glasses. Let’s ignore the fact that I would definitely still write my story about the massage parlor videos if they were filmed with an iPhone and “steelman” Meta’s provocative argument that glasses and a phone are essentially not meaningfully different objects.

Meta’s Ray-Ban glasses and an iPhone are both equipped with a small camera that can record someone secretly. If anything, the iPhone can record more discreetly because unlike Meta’s Ray-Ban glasses it’s not equipped with an LED that lights up to indicate that it’s recording. This, Meta would argue, means that the glasses are by design more respectful of people’s privacy than an iPhone.

Both are small electronic devices. Both can include various implementations of AI tools. Both are often black, and are made by one of the FAANG companies. Both items can be bought at a Best Buy. You get the point: There are too many similarities between the iPhone and Meta’s glasses to name them all here, just as one could strain to name infinite similarities between a table and an elephant if we chose to ignore the context that actually matters to a human being.

Whenever we published one of these stories, the response from commenters and on social media was primarily anger and disgust with Meta’s glasses enabling the behavior we reported on and a rejection of the device as a concept entirely. This is not surprising to anyone who has covered technology long enough to remember the launch and quick collapse of Google Glass, so-called “glassholes,” and the device being banned from bars.

There are two things Meta’s glasses have in common with Google Glass which also make them meaningfully different from an iPhone. The first is that the iPhone might not have a recording light, but in order to record something or take a picture, a user has to take it out of their pocket and hold it out, an awkward gesture all of us have come to recognize in the almost two decades since the launch of the first iPhone. It is an unmistakable signal that someone is recording. That is not the case with Meta’s glasses, which are meant to be worn as a normal pair of glasses, and are always pointing at something or someone if you see someone wearing them in public.

In fact, the entire motivation for building these glasses is that they are discreet and seamlessly integrate into your life. The point of putting a camera in the glasses is that it eliminates the need to take an iPhone out of your pocket. People working in the augmented reality and virtual reality space have talked about this for decades. In Meta’s own promotional video for the Meta Ray-Ban Display glasses, titled “10 years in the making,” the company shows Mark Zuckerberg on stage in 2016 saying that “over the next 10 years, the form factor is just going to keep getting smaller and smaller until, and eventually we’re going to have what looks like normal looking glasses.” And in 2020, “you see something awesome and you want to be able to share it without taking out your phone.” Meta's Ray-Ban glasses have not achieved their final form, but one thing that makes them different from Google Glass is that they are designed to look exactly like an iconic pair of glasses that people immediately recognize. People will probably notice the camera in the glasses, but they have been specifically designed to look like "normal” glasses.

Again, Meta would argue that the LED light solves this problem, but that leads me to the next important difference: Unlike the iPhone and other smartphones, one of the most widely adopted electronics in human history, only a tiny portion of the population has any idea what the fuck these glasses are. I have watched dozens of videos in which someone wearing Meta glasses is recording themselves harassing random people to boost engagement on Instagram or TikTok. Rarely do the people in the videos say anything about being recorded, and it’s very clear the women working at these massage parlors have no idea they’re being recorded. The Meta glasses have an LED light, sure, but these glasses are new, rare, and it’s not safe to assume everyone knows what that light means.

As Joseph and Jason recently reported, there are also cheap ways to modify Meta glasses to prevent the recording light from turning on. Search results, Reddit discussions, and a number of products for sale on Amazon all show that many Meta glasses customers are searching for a way to circumvent the recording light, meaning that many people are buying them to do exactly what Meta claims is not a real issue.

It is possible that in the future Meta glasses and similar devices will become so common that most people will assume they are being recorded whenever they see them, though that is not a future I hope for. Until then, if it is at all helpful to the public relations team at Meta, these are what the glasses look like:

And this is what an iPhone looks like:
Person holding a space gray iPhone 7. Photo by Bagus Hernawan / Unsplash
Feel free to refer to this handy guide when needed.


We talk all about our articles on Meta’s Ray-Ban smart glasses, and AI-generated ads personalized just for you.#Podcast


Podcast: People Are Modding Meta Ray-Bans to Spy On You


We have something of a Meta Ray-Ban smart glasses bumper episode this week. We start with Joseph and Jason’s piece on a $60 mod that disables the privacy-protecting recording light in the smart glasses. After the break, Emanuel tells us how some people are abusing the glasses to film massage workers, and he explains the difference between a phone and a pair of smart glasses, if you need that spelled out for you. In the subscribers-only section, Jason tells us about the future of advertising: AI-generated ads personalized directly to you.
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.


The app, called Mobile Identify and available on the Google Play Store, is specifically for local and regional law enforcement agencies working with ICE on immigration enforcement.#CBP #ICE #FacialRecognition #News


DHS Gives Local Cops a Facial Recognition App To Find Immigrants


Customs and Border Protection (CBP) has publicly released an app that sheriff’s offices, police departments, and other local or regional law enforcement agencies can use to scan someone’s face as part of immigration enforcement, 404 Media has learned.

The news follows Immigration and Customs Enforcement’s (ICE) use of another internal Department of Homeland Security (DHS) app called Mobile Fortify that uses facial recognition to nearly instantly bring up someone’s name, date of birth, alien number, and whether they’ve been given an order of deportation. The new local law enforcement-focused app, called Mobile Identify, crystallizes one of the exact criticisms of DHS’s facial recognition app from privacy and surveillance experts: that this sort of powerful technology would trickle down to local enforcement, some of which have a history of making anti-immigrant comments or supporting inhumane treatment of detainees.

Handing “this powerful tech to police is like asking a 16-year-old who just failed their driver’s exam to pick a dozen classmates to hand car keys to,” Jake Laperruque, deputy director of the Center for Democracy & Technology's Security and Surveillance Project, told 404 Media. “These careless and cavalier uses of facial recognition are going to lead to U.S. citizens and lawful residents being grabbed off the street and placed in ICE detention.”

💡
Do you know anything else about this app or others that CBP and ICE are using? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

This post is for subscribers only




The Airlines Reporting Corporation (ARC), owned by major U.S. airlines, collects billions of ticketing records and sells them to the government to be searched without a warrant. I managed to opt out of that data selling.#Privacy #arc


How to Opt-Out of Airlines Selling Your Travel Data to the Government


Most people probably have no idea that when you book a flight through major travel websites, a data broker owned by U.S. airlines then sells details about your flight, including your name, the credit card used, and where you’re flying, to the government. The data broker has compiled billions of ticketing records the government can search without a warrant or court order. The data broker is called the Airlines Reporting Corporation (ARC), and, as 404 Media has shown, it sells flight data to multiple parts of the Department of Homeland Security (DHS) and a host of other government agencies, while contractually demanding those agencies not reveal where the data came from.

It turns out, it is possible to opt out of this data selling, including to government agencies. At least, that’s what I found when I ran through the steps to tell ARC to stop selling my personal data. Here’s how I did that:

  1. I emailed privacy@arccorp.com and, not yet knowing the details of the process, simply said I wish to delete my personal data held by ARC.
  2. A few hours later the company replied with some information and what I needed to do. ARC said it needed my full name (including middle name if applicable), the last four digits of the credit card number used to purchase air travel, and my residential address.
  3. I provided that information. The following month, ARC said it was unable to delete my data because “we and our service providers require it for legitimate business purposes.” The company did say it would not sell my data to any third parties, though. “However, even though we cannot delete your data, we can confirm that we will not sell your personal data to any third party for any reason, including, but not limited to, for profiling, direct marketing, statistical, scientific, or historical research purposes,” ARC said in an email.
  4. I then followed up with ARC to ask specifically whether this included selling my travel data to the government. “Does the not selling of my data include not selling to government agencies as part of ARC’s Travel Intelligence Program or any other forms?” I wrote. The Travel Intelligence Program, or TIP, is the program ARC launched to sell data to the government. ARC updates it every day with the previous day’s ticket sales and it can show a person’s paid intent to travel.
  5. A few days later, ARC replied. “Yes, we can confirm that not selling your data includes not selling to any third party, including, but not limited to, any government agency as part of ARC’s Travel Intelligence Program,” the company said.
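The first email in the steps above is simple to reproduce. Here is a minimal sketch that drafts that kind of deletion and opt-out request; the privacy@arccorp.com address and the required details (full name, last four card digits, residential address) come from the article, while the exact wording and the personal details below are placeholders you would fill in yourself:

```python
# Draft an ARC data-deletion / opt-out request email (step 1 above).
# The recipient address is the one ARC uses per the article; the
# name, card digits, and address here are made-up placeholders.
from email.message import EmailMessage

def draft_arc_optout(full_name: str, last4: str, home_address: str) -> EmailMessage:
    msg = EmailMessage()
    msg["To"] = "privacy@arccorp.com"
    msg["Subject"] = "Request to delete my personal data and stop its sale"
    msg.set_content(
        f"Hello,\n\n"
        f"I am requesting that ARC delete my personal data and stop selling it "
        f"to any third party, including government agencies via the Travel "
        f"Intelligence Program.\n\n"
        f"Full name: {full_name}\n"
        f"Last four digits of card used to purchase air travel: {last4}\n"
        f"Residential address: {home_address}\n\n"
        f"Thank you."
    )
    return msg

# Example with placeholder details:
msg = draft_arc_optout("Jane Q. Example", "1234", "123 Example St, Springfield")
print(msg)
```

Asking up front about the Travel Intelligence Program, as in step 4, saves a round trip: ARC's eventual confirmation hinged on whether "third party" included government agencies.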

💡
Do you know anything else about ARC or other data being sold to government agencies? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

Honestly, I was quite surprised at how smooth and clear this process was. ARC only registered as a data broker with the state of California—a legal requirement—in June, despite selling data for years.

What I did was not a formal request under a specific piece of privacy legislation, such as the European Union’s General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA). Maybe a request to delete information under the CCPA would have more success; that law says California residents have the legal right to ask to have their personal data deleted “subject to certain exceptions (such as if the business is legally required to keep the information),” according to the California Department of Justice’s website.

ARC is owned and operated by at least eight major airlines, according to publicly released documents. Its board includes representatives from Delta, United, American Airlines, JetBlue, Alaska Airlines, Canada’s Air Canada, and European airlines Air France and Lufthansa.

Public procurement records show agencies such as ICE, CBP, ATF, TSA, the SEC, the Secret Service, the State Department, the U.S. Marshals, and the IRS have purchased ARC data. Agencies have given no indication they use a search warrant or other legal mechanism to search the data. In response to inquiries from 404 Media, ATF said it follows “DOJ policy and appropriate legal processes” and the Secret Service declined to answer.

An ARC spokesperson previously told 404 Media in an email that TIP “was established by ARC after the September 11, 2001, terrorist attacks and has since been used by the U.S. intelligence and law enforcement community to support national security and prevent criminal activity with bipartisan support. Over the years, TIP has likely contributed to the prevention and apprehension of criminals involved in human trafficking, drug trafficking, money laundering, sex trafficking, national security threats, terrorism and other imminent threats of harm to the United States.” At the time, the spokesperson added “Pursuant to ARC’s privacy policy, consumers may ask ARC to refrain from selling their personal data.”


Kodak appears to be taking back control over the distribution of its film.#film #Kodak


Kodak Quietly Begins Directly Selling Kodak Gold and Ultramax Film Again


Kodak quietly acknowledged Monday that it will begin selling two famous types of film stock—Kodak Gold 200 and Kodak Ultramax 400—directly to retailers and distributors in the U.S., another indication that the historic company is taking back control over how people buy its film.

The release comes on the heels of Kodak announcing that it would make and sell two new stocks of film called Kodacolor 100 and Kodacolor 200 in October. On Monday, both Kodak Gold and Kodak Ultramax showed back up on Kodak’s website as film stocks that it makes and sells. When asked by 404 Media, a company spokesperson said that it has “launched” these film stocks and will begin to “sell the films directly to distributors in the U.S. and Canada, giving Kodak greater control over our participation in the consumer film market.”

Unlike Kodacolor, both Kodak Gold and Kodak Ultramax have been widely available to consumers for years, but the way they were distributed made little sense and was an artifact of Kodak's 2012 bankruptcy. Coming out of that bankruptcy, Eastman Kodak (the 133-year-old company) would continue to make film, but the exclusive rights to distribute and sell it were owned by a completely separate, UK-based company called Kodak Alaris. For the last decade, Kodak Alaris has sold Kodak Gold and Ultramax (as well as Portra and a few other film stocks made by Eastman Kodak). This setup has been confusing for consumers and may have discouraged Eastman Kodak from experimenting with new types of film, since it would have had to license distribution out to another company.

That all seemed to change with the recent announcement of Kodacolor 100 and Kodacolor 200, Kodak’s first new still film stocks in many years. Monday’s acknowledgement that both Kodak Gold and Ultramax will be sold directly by Eastman Kodak, in rebranded and redesigned boxes, suggests that the company has figured out how to wrest some control of its distribution away from Kodak Alaris. Eastman Kodak told 404 Media in a statement that it has “launched” these films and that they are “Kodak-marketed versions of existing films.”

"Kodak will sell the films directly to distributors in the U.S. and Canada, giving Kodak greater control over our participation in the consumer film market,” a Kodak spokesperson said in an email. “This direct channel will provide distributors, retailers and consumers with a broader, more reliable supply and help create greater stability in a market where prices have often fluctuated.”

The company called it an “extension of Kodak’s film portfolio,” which it said “is made possible by our recent investments that increased our film manufacturing capacity and, along with the introduction of our KODAK Super 8 Camera and KODAK EKTACHROME 100D Color Reversal Film, reflects Kodak’s ongoing commitment to meeting growing demand and supporting the long-term health of the film industry.”

It is probably too soon to say how big of a deal this is, but it is at least exciting for people in the resurgent film photography hobby, who are desperate for any sign that companies are interested in launching new products, creating new types of film, or building more production capacity in an industry where film shortages and price increases have been the norm for a few years.


Lawmakers say AI-camera company Flock is violating federal law by not enforcing multi-factor authentication. 404 Media previously found Flock credentials included in infostealer infections.#Flock #News


Flock Logins Exposed In Malware Infections, Senator Asks FTC to Investigate the Company


Lawmakers have called on the Federal Trade Commission (FTC) to investigate Flock for allegedly violating federal law by not enforcing multi-factor authentication (MFA), according to a letter shared with 404 Media. The demand comes as a security researcher found Flock accounts for sale on a Russian cybercrime forum, and 404 Media found multiple instances of Flock-related credentials for government users in infostealer infections, potentially providing hackers or other third parties with access to at least parts of Flock’s surveillance network.

This post is for subscribers only




Cornell University’s arXiv will no longer accept Computer Science reviews and position papers.#News


arXiv Changes Rules After Getting Spammed With AI-Generated 'Research' Papers


arXiv, a preprint publication for academic research that has become particularly important for AI research, has announced it will no longer accept computer science articles and papers that haven’t been vetted by an academic journal or a conference. Why? A tide of AI slop has flooded the computer science category with low-effort papers that are “little more than annotated bibliographies, with no substantial discussion of open research issues,” according to a press release about the change.

arXiv has become a critical place for preprint and open access scientific research to be published. Many major scientific discoveries are published on arXiv before they finish the peer review process and are published in other, peer-reviewed journals. For that reason, it’s become an important place for new breaking discoveries and has become particularly important for research in fast-moving fields such as AI and machine learning (though there are also sometimes preprint, non-peer-reviewed papers there that get hyped but ultimately don’t pass peer review muster). The site is a repository of knowledge where academics upload PDFs of their latest research for public consumption. It publishes papers on physics, mathematics, biology, economics, statistics, and computer science and the research is vetted by moderators who are subject matter experts.
But because of an onslaught of AI-generated research, specifically in the computer science (CS) section, arXiv is going to limit which papers can be published. “In the past few years, arXiv has been flooded with papers,” arXiv said in a press release. “Generative AI / large language models have added to this flood by making papers—especially papers not introducing new research results—fast and easy to write.”

The site noted that this was less a policy change and more about stepping up enforcement of old rules. “When submitting review articles or position papers, authors must include documentation of successful peer review to receive full consideration,” it said. “Review/survey articles or position papers submitted to arXiv without this documentation will be likely to be rejected and not appear on arXiv.”

According to the press release, arXiv has been inundated with review article submissions—survey papers that summarize existing research rather than presenting new results—and the computer science category has been hit hardest. “We now receive hundreds of review articles every month,” arXiv said. “The advent of large language models have made this type of content relatively easy to churn out on demand.”

The plan is to enforce a blanket ban on review articles and position papers in the CS category that lack documented peer review, freeing moderators to focus on more substantive submissions. arXiv stressed that it has only rarely accepted review articles in the past, and then only when one was of clear academic interest and came from a known researcher. “If other categories see a similar rise in LLM-written review articles and position papers, they may choose to change their moderation practices in a similar manner to better serve arXiv authors and readers,” arXiv said.

AI-generated research articles are a pressing problem in the scientific community. Scam academic journals that run pay-to-publish schemes are an issue that plagued academic publishing long before AI, but the advent of LLMs has supercharged it. But scam journals aren’t the only ones affected. Last year, a serious scientific journal had to retract a paper that included an AI-generated image of a giant rat penis. Peer reviewers, the people who are supposed to vet scientific papers for accuracy, have also been caught cutting corners using ChatGPT in part because of the large demands placed on their time.



We speak to the creator of ICEBlock about Apple banning their app, and what this means for people trying to access information about ICE.#Podcast


The Crackdown on ICE Spotting Apps (with Joshua Aaron)


For this interview episode of the 404 Media Podcast, Joseph speaks to Joshua Aaron, the creator of ICEBlock. Apple recently removed ICEBlock from its App Store after direct pressure from the Department of Justice. Joshua and Joseph talk about how the idea for ICEBlock came about, Apple and Google’s broader crackdown on similar apps, and what this all means for people trying to access information about ICE.
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for early access to these interview episodes and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor with a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version, which will also be in the show notes in your podcast player.


The leaked slide focuses on Google Pixel phones and mentions those running the security-focused GrapheneOS operating system.#cellebrite #Hacking #News


Someone Snuck Into a Cellebrite Microsoft Teams Call and Leaked Phone Unlocking Details


Someone recently managed to get on a Microsoft Teams call with representatives from phone hacking company Cellebrite, and then leaked a screenshot of the company’s capabilities against many Google Pixel phones, according to a forum post about the leak and 404 Media’s review of the material.

The leak follows others obtained and verified by 404 Media over the last 18 months. Those leaks impacted both Cellebrite and its competitor Grayshift, now owned by Magnet Forensics. Both companies constantly hunt for techniques to unlock phones that law enforcement has physical access to.

This post is for subscribers only




Videos on social media show officers from ICE and CBP using facial recognition technology on people in the field. One expert described the practice as “pure dystopian creep.”#ICE #CBP #News #Privacy


ICE and CBP Agents Are Scanning Peoples’ Faces on the Street To Verify Citizenship


“You don’t got no ID?” a Border Patrol agent in a baseball cap, sunglasses, and neck gaiter asks a kid on a bike. The officer and three others had just stopped two young men on their bikes during the day in what a video documenting the incident says is Chicago. One of the boys is filming the encounter on his phone. He says in the video he was born here, meaning he would be an American citizen.

When the boy says he doesn’t have ID on him, the Border Patrol officer has an alternative. He calls over to one of the other officers, “can you do facial?” The second officer then approaches the boy, gets him to turn around to face the sun, and points his own phone camera directly at him, hovering it over the boy’s face for a couple seconds. The officer then looks at his phone’s screen and asks for the boy to verify his name. The video stops.

💡
Do you have any more videos of ICE or CBP using facial recognition? Do you work at those agencies or know more about Mobile Fortify? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

This post is for subscribers only





Grokipedia is not a 'Wikipedia competitor.' It is a fully robotic regurgitation machine designed to protect the ego of the world’s wealthiest man.#Grokipedia #Wikipedia #ElonMusk


Grokipedia Is the Antithesis of Everything That Makes Wikipedia Good, Useful, and Human


I woke up restless and kind of hungover Sunday morning at 6 am and opened Reddit. Somewhere near the top was a post called “TIL in 2002 a cave diver committed suicide by stabbing himself during a cave diving trip near Split, Croatia. Due to the nature of his death, it was initially investigated as a homicide, but it was later revealed that he had done it while lost in the underwater cave to avoid the pain of drowning.” The post linked to a Wikipedia page called “List of unusual deaths in the 21st century.” I spent the next two hours falling into a Wikipedia rabbit hole, clicking through all manner of horrifying and difficult-to-imagine ways to die.

A day later, I saw that Depths of Wikipedia, the incredible social media account run by Annie Rauwerda, had noted the entirely unsurprising fact that, behind the scenes, there had been robust conversation and debate by Wikipedia editors as to exactly what constitutes an “unusual” death, and that several previously listed “unusual” deaths had been deleted from the list for not being weird enough. For example: People who had been speared to death with beach umbrellas are “no longer an unusual or unique occurrence”; “hippos are extremely dangerous and very aggressive and there is nothing unusual about hippos killing people”; “mysterious circumstances doesn’t mean her death itself was unusual.” These are the types of edits and conversations that have collectively happened billions of times that make Wikipedia what it is, and which make it so human, so interesting, so useful.

recently discovered that wikipedia volunteers have a hilariously high bar for what constitutes "unusual death"
depths of wikipedia (@depthsofwikipedia.bsky.social) 2025-10-27T12:38:42.573Z


Wednesday, as part of his ongoing war against Wikipedia because he does not like his page, Elon Musk launched Grokipedia, a fully AI-generated “encyclopedia” that serves no one and nothing other than the ego of the world’s richest man. As others have already pointed out, Grokipedia seeks to be a right wing, anti-woke Wikipedia competitor. But to even call it a Wikipedia competitor is to give the half-assed project too much credit. It is not a Wikipedia “competitor” at all. It is a fully robotic, heartless regurgitation machine that cynically and indiscriminately sucks up the work of humanity to serve the interests, protect the ego, amplify the viewpoints, and further enrich the world’s wealthiest man. It is a totem of what Wikipedia could and would become if you were to strip all the humans out and hand it over to a robot; in that sense, Grokipedia is a useful warning because of the constant pressure and attacks by AI slop purveyors to push AI-generated content into Wikipedia. And it is only getting attention, of course, because Elon Musk does represent an actual threat to Wikipedia through his political power, wealth, and obsession with the website, as well as the fact that he owns a huge social media platform.

One needs only spend a few minutes clicking around the launch version of Grokipedia to understand that it lacks the human touch that makes Wikipedia such a valuable resource. Besides often having a conservative slant and the general hallmarks of AI writing, Grokipedia pages are overly long, poorly and confusingly organized, have no internal linking, have no photos, and are generally not written in a way that makes any sense. There is zero insight into how any of the articles were generated, how information was obtained and ordered, or what edits were made; there is no version history at all. Grokipedia is, literally, simply a single black box LLM’s version of an encyclopedia. There is a reason Wikipedia editors are called “editors,” and it’s because writing a useful encyclopedia entry does not mean “putting down random facts in no discernible order.” To use an example I noticed from simply clicking around: the list of “notable people” in the Grokipedia entry for Baltimore begins with a disordered list of recent mayors, perhaps the least interesting, lowest-hanging-fruit type of data scraping about a place that could be done.

On even the lowest of stakes Wikipedia pages, real humans with real taste and real thoughts and real perspectives discuss and debate the types of information that should be included in any given article, in what order it should be presented, and the specific language that should be used. They do this under a framework of byzantine rules that have been battle tested and debated through millions of edit wars, virtual community meetings, talk page discussions, conference meetings, inscrutable listservs which themselves have been informed by Wikimedia’s “mission statement,” the “Wikimedia values,” its “founding principles” and policies and guidelines and tons of other stated and unstated rules, norms, processes and procedures. All of this behind-the-scenes legwork is essentially invisible to the user but is very serious business to the human editors building and protecting Wikipedia and its related projects (the high cultural barrier to entry for editors is also why it is difficult to find new editors for Wikipedia, and is something that the Wikipedia community is always discussing how they can fix without ruining the project). Any given Wikipedia page has been stress tested by actual humans who are discussing, for example, whether it’s actually that unusual to get speared to death by a beach umbrella.

Grokipedia, meanwhile, looks like what you would get if you told an LLM to go make an anti-woke encyclopedia, which is essentially exactly what Elon Musk did.

As LLMs tend to do, some pages on Grokipedia leak part of their instructions. For example, a Grokipedia page on “Spanish Wikipedia” notes “Wait, no, can’t cite Wiki,” indicating that Grokipedia has been programmed not to link to Wikipedia. That entry cites Wikimedia pages anyway, but in the “sources” section those pages are not actually hyperlinked.

I have no doubt that Grokipedia will fail, like other attempts to “compete” with Wikipedia or build an “alternative” to it, the likes of which no one has heard of because they were all so laughable and so poorly participated in that they died almost immediately. Grokipedia isn’t really a competitor at all, because it is everything that Wikipedia is not: it is not an encyclopedia, it is not transparent, it is not human, it is not a nonprofit, it is not collaborative or crowdsourced; in fact, it is not really edited at all. It is true that Wikipedia is under attack from powerful political figures, the proliferation of AI, and related structural changes to discoverability and linking on the internet like AI summaries and knowledge panels. But Wikipedia has proven itself to be incredibly resilient because it is a project that specifically leans into the shared wisdom and collaboration of humanity, our shared weirdness and ways of processing information. That is something that an LLM will never be able to compete with.