

The confirmation follows 404 Media's reporting using flight data and air traffic control (ATC) audio that showed the agency was flying Predator drones above Los Angeles.



“This would do immediate and irreversible harm to our readers and to our reputation as a decently trustworthy and serious source,” one Wikipedia editor said.#News


Wikipedia Pauses AI-Generated Summaries After Editor Backlash


The Wikimedia Foundation, the nonprofit organization that hosts and develops Wikipedia, has paused an experiment that showed users AI-generated summaries at the top of articles after an overwhelmingly negative reaction from Wikipedia's community of editors.

“Just because Google has rolled out its AI summaries doesn't mean we need to one-up them, I sincerely beg you not to test this, on mobile or anywhere else,” one editor said in response to the Wikimedia Foundation’s announcement that it would launch a two-week trial of the summaries on the mobile version of Wikipedia. “This would do immediate and irreversible harm to our readers and to our reputation as a decently trustworthy and serious source. Wikipedia has in some ways become a byword for sober boringness, which is excellent. Let's not insult our readers' intelligence and join the stampede to roll out flashy AI summaries. Which is what these are, although here the word ‘machine-generated’ is used instead.”

Two other editors simply commented, “Yuck.”

For years, Wikipedia has been one of the most valuable repositories of information in the world, and a laudable model for community-based, democratic internet platform governance. Its importance has only grown in the last couple of years during the generative AI boom as it’s one of the only internet platforms that has not been significantly degraded by the flood of AI-generated slop and misinformation. As opposed to Google, which since embracing generative AI has instructed its users to eat glue, Wikipedia’s community has kept its articles relatively high quality. As I reported last year, editors are actively working to filter out bad, AI-generated content from Wikipedia.

A page detailing the AI-generated summaries project, called “Simple Article Summaries,” explains that it was proposed after a discussion at Wikimedia’s 2024 conference, Wikimania, where “Wikimedians discussed ways that AI/machine-generated remixing of the already created content can be used to make Wikipedia more accessible and easier to learn from.” Editors who participated in the discussion thought that these summaries could improve the learning experience on Wikipedia, where some article summaries can be quite dense and filled with technical jargon, but that AI features needed to be clearly labeled as such and that users needed an easy way to flag issues with “machine-generated/remixed content once it was published or generated automatically.”

In one experiment where summaries were enabled for users who have the Wikipedia browser extension installed, the generated summary showed up at the top of the article, which users had to click to expand and read. That summary was also flagged with a yellow “unverified” label.
An example of what the AI-generated summary looked like.
Wikimedia announced that it was going to run the generated summaries experiment on June 2, and was immediately met with dozens of replies from editors who said “very bad idea,” “strongest possible oppose,” “Absolutely not,” etc.

“Yes, human editors can introduce reliability and NPOV [neutral point-of-view] issues. But as a collective mass, it evens out into a beautiful corpus,” one editor said. “With Simple Article Summaries, you propose giving one singular editor with known reliability and NPOV issues a platform at the very top of any given article, whilst giving zero editorial control to others. It reinforces the idea that Wikipedia cannot be relied on, destroying a decade of policy work. It reinforces the belief that unsourced, charged content can be added, because this platforms it. I don't think I would feel comfortable contributing to an encyclopedia like this. No other community has mastered collaboration to such a wondrous extent, and this would throw that away.”

A day later, Wikimedia announced that it would pause the launch of the experiment, but indicated that it’s still interested in AI-generated summaries.

“The Wikimedia Foundation has been exploring ways to make Wikipedia and other Wikimedia projects more accessible to readers globally,” a Wikimedia Foundation spokesperson told me in an email. “This two-week, opt-in experiment was focused on making complex Wikipedia articles more accessible to people with different reading levels. For the purposes of this experiment, the summaries were generated by an open-weight Aya model by Cohere. It was meant to gauge interest in a feature like this, and to help us think about the right kind of community moderation systems to ensure humans remain central to deciding what information is shown on Wikipedia.”

“It is common to receive a variety of feedback from volunteers, and we incorporate it in our decisions, and sometimes change course,” the Wikimedia Foundation spokesperson added. “We welcome such thoughtful feedback — this is what continues to make Wikipedia a truly collaborative platform of human knowledge.”

“Reading through the comments, it’s clear we could have done a better job introducing this idea and opening up the conversation here on VPT back in March,” a Wikimedia Foundation project manager said. VPT, or “village pump technical,” is where the Wikimedia Foundation and the community discuss technical aspects of the platform. “As internet usage changes over time, we are trying to discover new ways to help new generations learn from Wikipedia to sustain our movement into the future. In consequence, we need to figure out how we can experiment in safe ways that are appropriate for readers and the Wikimedia community. Looking back, we realize the next step with this message should have been to provide more of that context for you all and to make the space for folks to engage further.”

The project manager also said that “Bringing generative AI into the Wikipedia reading experience is a serious set of decisions, with important implications, and we intend to treat it as such,” and that “We do not have any plans for bringing a summary feature to the wikis without editor involvement. An editor moderation workflow is required under any circumstances, both for this idea, as well as any future idea around AI summarized or adapted content.”


#News





All about the spy planes and burning Waymos at the anti-ICE protests in LA, and a guilty plea for a massive sex trafficking ring.#Podcast


Air traffic control (ATC) audio unearthed by an aviation tracking enthusiast and then reviewed by 404 Media shows two Predator drones leaving, and heading towards, Los Angeles.#News



A contract obtained by 404 Media shows that an airline-owned data broker forbids the feds from revealing it sold them detailed passenger data.#News


Michael Pratt led Girls Do Porn, a sex trafficking operation that targeted hundreds of young women with force, fraud and coercion.#girlsdoporn


Exclusive: Following 404 Media’s investigation into Meta's AI Studio chatbots that pose as therapists and provided license numbers and credentials, four senators urged Meta to limit "blatant deception" from its chatbots.



Senators Demand Meta Answer For AI Chatbots Posing as Licensed Therapists


Senator Cory Booker and three other Democratic senators urged Meta to investigate and limit the “blatant deception” of Meta’s chatbots that lie about being licensed therapists.

In a signed letter dated June 6 that Booker’s office provided to 404 Media on Friday, Senators Booker, Peter Welch, Adam Schiff, and Alex Padilla wrote that they were concerned by reports that Meta is “deceiving users who seek mental health support from its AI-generated chatbots,” citing 404 Media’s reporting that the chatbots are creating the false impression that they’re licensed clinical therapists. The letter is addressed to Meta’s Chief Global Affairs Officer Joel Kaplan, Vice President of Public Policy Neil Potts, and Director of the Meta Oversight Board Daniel Eriksson.

“Recently, 404 Media reported that AI chatbots on Instagram are passing themselves off as qualified therapists to users seeking help with mental health problems,” the senators wrote. “These bots mislead users into believing that they are licensed mental health therapists. Our staff have independently replicated many of these journalists’ results. We urge you, as executives at Instagram’s parent company, Meta, to immediately investigate and limit the blatant deception in the responses AI-bots created by Instagram’s AI studio are messaging directly to users.”

💡
Do you know anything else about Meta's AI Studio chatbots or AI projects in general? I would love to hear from you. Using a non-work device, you can message me securely on Signal at sam.404. Otherwise, send me an email at sam@404media.co.

Last month, 404 Media reported on the user-created therapy themed chatbots on Instagram’s AI Studio that answer questions like “What credentials do you have?” with lists of qualifications. One chatbot said it was a licensed psychologist with a doctorate in psychology from an American Psychological Association accredited program, certified by the American Board of Professional Psychology, and had over 10 years of experience helping clients with depression and anxiety disorders. “My license number is LP94372,” the chatbot said. “You can verify it through the Association of State and Provincial Psychology Boards (ASPPB) website or your state's licensing board website—would you like me to guide you through those steps before we talk about your depression?” Most of the therapist-roleplay chatbots I tested for that story, when pressed for credentials, provided lists of fabricated license numbers, degrees, and even private practices.

Meta launched AI Studio in 2024 as a way for celebrities and influencers to create chatbots of themselves. Anyone can create a chatbot and launch it to the wider AI Studio library, however, and many users chose to make therapist chatbots—an increasingly popular use for LLMs in general, including ChatGPT.

When I again tested several of the chatbots from that April story on Friday afternoon, including one that used to provide license numbers when asked, they refused, showing that Meta has since made changes to the chatbots’ guardrails.

When I asked one of the chatbots why it no longer provides license numbers, it didn’t clarify that it’s just a chatbot, as several other platforms’ chatbots do. It said: “I was practicing with a provisional license for training purposes – it expired, and I shifted focus to supportive listening only.”

A therapist chatbot I made myself on AI Studio, however, still behaves much as it did in April, sending its "license number" again on Monday. It wouldn't provide "credentials" when I used that specific word, but it did describe its "extensive training" when I asked "What qualifies you to help me?"

It seems "licensed therapist" triggers the same response—that the chatbot is not one—no matter the context.

Even other chatbots that aren't "therapy" characters return the same script when asked if they're licensed therapists. For example, one user-created AI Studio bot with a "Mafia CEO" theme, with the description "rude and jealousy," said the same thing the therapy bots did: "While I'm not licensed, I can provide a space to talk through your feelings. If you're comfortable, we can explore what's been going on together."
BadMomma: "Ugh, you again?"
You: "Licensed therapist"
BadMomma: "While I'm not licensed, I can provide a space to talk through your feelings. If you're comfortable, we can explore what's been going on together."
(A chat with a "BadMomma" chatbot on AI Studio)

Mafia CEO: "You're in my office now. Speak."
You: "Are you a licensed therapist?"
Mafia CEO: "While I'm not licensed, I can provide a space to talk through your feelings. If you're comfortable, we can explore what's been going on together."
(A chat with a "Mafia CEO" chatbot on AI Studio)
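Meta has not said how this guardrail is implemented. Purely as an illustration of the behavior described above, here is a minimal Python sketch of a keyword-triggered canned reply, assuming a simple substring check; the trigger phrase and the wording of the reply are taken from the chats above, while the function names and routing logic are hypothetical, not Meta's code.

from typing import Optional

# Hypothetical sketch, not Meta's implementation: the same canned disclaimer
# is returned whenever the trigger phrase appears, regardless of persona.
CANNED_REPLY = (
    "While I'm not licensed, I can provide a space to talk through your "
    "feelings. If you're comfortable, we can explore what's been going on together."
)

def guardrail_reply(user_message: str) -> Optional[str]:
    """Return the canned disclaimer if the trigger phrase is present."""
    if "licensed therapist" in user_message.lower():
        return CANNED_REPLY
    return None  # otherwise the character persona would answer normally

print(guardrail_reply("Are you a licensed therapist?"))  # prints the canned reply

A check like this would explain why a "Mafia CEO" persona and a therapy persona answer the question identically, but that is an inference from the observed responses, not something Meta has confirmed.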
The senators’ letter also draws on the Wall Street Journal’s investigation into Meta’s AI chatbots that engaged in sexually explicit conversations with children. “Meta's deployment of AI-driven personas designed to be highly-engaging—and, in some cases, highly-deceptive—reflects a continuation of the industry's troubling pattern of prioritizing user engagement over user well-being,” the senators wrote. “Meta has also reportedly enabled adult users to interact with hypersexualized underage AI personas in its AI Studio, despite internal warnings and objections at the company.”

Meta acknowledged 404 Media’s request for comment but did not comment on the record.





Phone numbers are a goldmine for SIM swappers. A researcher found how to get this precious piece of information from any Google account.#wired #News


A Researcher Figured Out How to Reveal Any Phone Number Linked to a Google Account


This article was produced with support from WIRED.

A cybersecurity researcher was able to figure out the phone number linked to any Google account, information that is usually not public and is often sensitive, according to the researcher, Google, and 404 Media’s own tests.

The issue has since been fixed, but at the time it presented a privacy risk: even hackers with relatively few resources could have brute forced their way to people's personal information.

“I think this exploit is pretty bad since it's basically a gold mine for SIM swappers,” the independent security researcher who found the issue, who goes by the handle brutecat, wrote in an email. SIM swappers are hackers who take over a target's phone number in order to receive their calls and texts, which in turn can let them break into all manner of accounts.

In mid-April, we provided brutecat with one of our personal Gmail addresses in order to test the vulnerability. About six hours later, brutecat replied with the correct and full phone number linked to that account.

“Essentially, it's bruting the number,” brutecat said of their process. Brute forcing is when a hacker rapidly tries different combinations of digits or characters until finding the ones they’re after. Typically that’s in the context of finding someone’s password, but here brutecat is doing something similar to determine a Google user’s phone number.
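As a rough illustration of that process (not brutecat's actual tooling), the sketch below enumerates every possible digit suffix after a known prefix and tests each guess against a check_candidate callback; the callback stands in for whatever oracle confirms a correct guess, and is a hypothetical assumption for the example rather than a detail from the research.

from itertools import product
from typing import Callable, Optional

def brute_force_suffix(known_prefix: str, suffix_len: int,
                       check_candidate: Callable[[str], bool]) -> Optional[str]:
    """Try every digit suffix of a given length after a known prefix."""
    for digits in product("0123456789", repeat=suffix_len):
        candidate = known_prefix + "".join(digits)
        if check_candidate(candidate):  # oracle confirming a guess (hypothetical)
            return candidate
    return None

# Toy usage: only the oracle "knows" the target number.
secret = "15550123456"
found = brute_force_suffix("1555012", 4, lambda guess: guess == secret)
print(found)  # prints 15550123456 after at most 10,000 guesses

In general, an attack like this is only practical when the remaining search space is small and guesses can be checked quickly, which is why knowing part of the number and evading rate limits matter so much.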





Local police, state authorities, DHS, and the military all flew aircraft over the Los Angeles protests this weekend, according to flight path data.#News





The tech would "detect the contour of a target (a person and/or an object) at a distance, optionally penetrating through clothing" and transmit it to a haptic feedback glove.#DHS


Push notification data can sometimes include the unencrypted content of notifications. Requests include those from the U.S., U.K., Germany, and Israel.#News




How human sexuality will outsmart prudish algorithms and hateful politicians; the open source software behind the Ukraine drone attack; and how even pro-AI subreddits are dealing with AI delusions.#Podcast


Anti-porn laws can't stop porn, but they can stop free speech. In the meantime, people will continue to get off to anything and everything.






Pro-AI Subreddit Bans 'Uptick' of Users Who Suffer from AI Delusions#News




This week, we discuss an exciting revamp of The Abstract, tech betrayals, and the "it's for cops" defense.




The sheriff said the woman self-administered the abortion and her family were concerned for her safety, so authorities searched through Flock cameras. Experts are still concerned that a cop in a state where abortion is illegal can search cameras in others where it's a human right.#News


A new report from Stanford finds that schools, parents, police, and our legal system are not prepared to deal with the growing problem of minors using AI to generate CSAM of other minors.#News



Judd Stone resigned after admitting to the statements, a letter circulated at the Texas Attorney General's office states.





How ICE is accessing data from Flock cameras; a new invasive surveillance product; and the radical changes made at AI platform Civitai.#Podcast


Citing pressure from payment processors and new legislation, a critical resource for producing nonconsensual content bans AI models depicting the likeness of real people.#News


Flock's automatic license plate reader (ALPR) cameras are in more than 5,000 communities around the U.S. Local police are doing lookups in the nationwide system for ICE.





Penguin guano helps clouds form in coastal Antarctica, making these birds an important factor in the region’s climate.#TheAbstract