

The lawsuit alleges XVideos, Bang Bros, XNXX, Girls Gone Wild and TrafficFactory are in violation of Florida's law that requires adult platforms to verify visitors are over 18.



Florida Sues Huge Porn Sites Including XVideos and Bang Bros Over Age Verification Law


The state of Florida is suing some of the biggest porn platforms on the internet, accusing them of not complying with the state’s law that requires adult sites to verify that visitors are over the age of 18.

The lawsuit, brought by Florida Attorney General James Uthmeier, is against the companies that own popular porn platforms including XVideos, XNXX, Bang Bros and Girls Gone Wild, and the adult advertising network TrafficFactory.com. Several of these platforms are owned by companies that are based outside of the U.S.

Uthmeier alleges that the companies are violating both HB3 and the Florida Deceptive and Unfair Trade Practices Act.

On January 1, Florida joined 19 other states that require adult websites to verify users’ ages. Twenty-nine states currently have nearly identical legislation enacted for porn sites, or have bills pending. Age verification legislation has failed in eight other states.

“Multiple porn companies are flagrantly breaking Florida’s age verification law by exposing children to harmful, explicit content. As a father of young children, and as Attorney General, this is completely unacceptable,” Uthmeier said in a press release about the lawsuit. “We are taking legal action against these online pornographers who are willfully preying on the innocence of children for their financial gain.”
The Free Speech Coalition along with several co-plaintiffs, including the sex education platform O.school, sexual wellness retailer Adam & Eve, adult fan platform JustFor.Fans, and Florida attorney Barry Chase filed a challenge to Florida’s law earlier this month. “These laws create a substantial burden on adults who want to access legal sites without fear of surveillance,” Alison Boden, Executive Director of the Free Speech Coalition, said in a press release published in December. “Despite the claims of the proponents, HB3 is not the same as showing an ID at a liquor store. It is invasive and carries significant risk to privacy. This law and others like it have effectively become state censorship, creating a massive chilling effect for those who speak about, or engage with, issues of sex or sexuality.”

Age Verification Laws Drag Us Back to the Dark Ages of the Internet
Invasive and ineffective age verification laws that require users to show government-issued ID, like a driver’s license or passport, are passing like wildfire across the U.S.
404 Media, Emanuel Maiberg


After the Supreme Court upheld Texas’ age verification legislation in June, the Free Speech Coalition dropped the lawsuit in Florida. “However, we are continuing to monitor the governmental efforts to restrict adults’ access to the internet in Florida,” Mike Stabile, the director of public policy for the Free Speech Coalition, said in a statement to the Tallahassee Democrat. “The Paxton decision does not give the government carte blanche to censor content it doesn’t like.”

Experts say, and more than a year of real-world anecdotal evidence has shown at this point, that age verification laws invade users’ privacy, chill constitutionally protected adult speech, and don’t work to keep children away from potentially harmful material.

As it has done in many other states where age verification legislation went into effect, Pornhub pulled access from Florida entirely on January 1, replacing the homepage with a video message from activist and performer Cherie DeVille: “As you may know, your elected officials in Florida are requiring us to verify your age before allowing you access to our website,” DeVille says. “While safety and compliance are at the forefront of our mission, giving your ID card every time you want to visit an adult platform is not the most effective solution for protecting our users, and in fact, will put children and your privacy at risk.”




Contracting records reviewed by 404 Media show that ICE wants to target Gen Z, including with ads on Hulu and HBO Max.#News #ICE


ICE Is About To Go on a Social Media and TV Ad Recruiting Blitz


Immigration and Customs Enforcement (ICE) is urgently looking for a company to help it “dominate” digital media channels with advertisements in an attempt to recruit 14,050 more personnel, according to U.S. government contracting records reviewed by 404 Media. The campaign, which ICE wants to span everything from social media ads to spots on popular streaming services like Hulu and HBO Max, is especially targeted at Gen Z, according to the documents.

The push for recruitment advertising is the latest sign that ICE is trying to aggressively expand after receiving a new budget allocation of tens of billions of dollars, and comes alongside the agency building a nationwide network of migrant tent camps. If the recruitment drive is successful, it would nearly double ICE’s number of personnel.

💡
Do you work at ICE? Did you used to? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

“ICE has an immediate need to begin recruitment efforts and requires specialized commercial advertising experience, established infrastructure, and qualified personnel to activate without delay,” the request for information (RFI) posted online reads. An RFI is often the first step in the government purchasing technology or services, in which it asks relevant companies to submit details on what they can offer the agency and for how much. The RFI adds: “This effort ties to a broader national launch and awareness saturation initiative aimed at dominating both digital and traditional media channels with urgent, compelling recruitment messages.”



#News #ice


“The ability to quickly generate a lot of bogus content is problematic if we don't have a way to delete it just as quickly.”



Wikipedia Editors Adopt ‘Speedy Deletion’ Policy for AI Slop Articles


Wikipedia editors just adopted a new policy to help them deal with the slew of AI-generated articles flooding the online encyclopedia. The new policy, which gives an administrator the authority to quickly delete an AI-generated article that meets certain criteria, isn’t only important to Wikipedia; it’s also an instructive example of how to deal with the growing AI slop problem, coming from a platform that has so far managed to withstand the various forms of enshittification that have plagued the rest of the internet.

Wikipedia is maintained by a global, collaborative community of volunteer contributors and editors, and part of the reason it remains a reliable source of information is that this community takes a lot of time to discuss, deliberate, and argue about everything that happens on the platform, be it changes to individual articles or the policies that govern how those changes are made. It is normal for entire Wikipedia articles to be deleted, but the main process for deletion usually requires a week-long discussion phase during which Wikipedians try to come to consensus on whether to delete the article.

However, in order to deal with common problems that clearly violate Wikipedia’s policies, Wikipedia also has a “speedy deletion” process, where one person flags an article, an administrator checks if it meets certain conditions, and then deletes the article without the discussion period.

For example, articles composed entirely of gibberish, meaningless text, or what Wikipedia calls “patent nonsense,” can be flagged for speedy deletion. The same is true for articles that are just advertisements with no encyclopedic value. If someone flags an article for deletion because it is “most likely not notable,” that is a more subjective evaluation that requires a full discussion.

At the moment, most articles that Wikipedia editors flag as being AI-generated fall into the latter category because editors can’t be absolutely certain that they were AI-generated. Ilyas Lebleu, a founding member of WikiProject AI Cleanup and an editor who contributed some critical language to the recently adopted policy on AI-generated articles and speedy deletion, told me that this is why previous proposals for regulating AI-generated articles on Wikipedia have struggled.

“While it can be easy to spot hints that something is AI-generated (wording choices, em-dashes, bullet lists with bolded headers, ...), these tells are usually not so clear-cut, and we don't want to mistakenly delete something just because it sounds like AI,” Lebleu told me in an email. “In general, the rise of easy-to-generate AI content has been described as an ‘existential threat’ to Wikipedia: as our processes are geared towards (often long) discussions and consensus-building, the ability to quickly generate a lot of bogus content is problematic if we don't have a way to delete it just as quickly. Of course, AI content is not uniquely bad, and humans are perfectly capable of writing bad content too, but certainly not at the same rate. Our tools were made for a completely different scale.”

The solution Wikipedians came up with is to allow the speedy deletion of clearly AI-generated articles that broadly meet two conditions. The first is if the article includes “communication intended for the user.” This refers to language in the article that is clearly an LLM responding to a user prompt, like “Here is your Wikipedia article on…,” “Up to my last training update…,” and “as a large language model.” This is a clear tell that the article was generated by an LLM, and a method we’ve previously used to identify AI-generated social media posts and scientific papers.
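
To make the condition concrete, a check for those boilerplate responses can be as simple as a case-insensitive phrase match. The sketch below is purely illustrative; the phrase list mirrors the tells quoted above, and the function name is made up rather than anything in Wikipedia’s actual tooling.

```python
# Illustrative sketch only, not Wikipedia's tooling: scan an article's text for
# boilerplate phrases that read like an LLM answering a prompt.
LLM_TELL_PHRASES = [
    "here is your wikipedia article on",
    "up to my last training update",
    "as a large language model",
]

def has_llm_tell(article_text: str) -> bool:
    """Return True if the text contains a phrase that sounds like an LLM
    responding to a user rather than encyclopedia prose."""
    lowered = article_text.lower()
    return any(phrase in lowered for phrase in LLM_TELL_PHRASES)

# Example: this opening would be flagged.
print(has_llm_tell("Here is your Wikipedia article on the history of beekeeping."))  # True
```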

Lebleu, who told me they’ve seen these tells “quite a few times,” said that more importantly, they indicate the user hasn’t even read the article they’re submitting.

“If the user hasn't checked for these basic things, we can safely assume that they haven't reviewed anything of what they copy-pasted, and that it is about as useful as white noise,” they said.

The other condition that would make an AI-generated article eligible for speedy deletion is if its citations are clearly wrong, another type of error LLMs are prone to. This can include external links to books, articles, or scientific papers that don’t exist and therefore don’t resolve, as well as links that lead to completely unrelated content. Wikipedia’s new policy gives the example of “a paper on a beetle species being cited for a computer science article.”
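
The first half of that condition, whether a cited link resolves at all, is the kind of thing that can be checked mechanically; deciding that a link which does resolve points at something unrelated (the beetle-paper example) still takes a human reader. Here is a minimal sketch of such a link check, assuming the Python `requests` library; it is not tooling Wikipedia actually uses.

```python
import requests

def citation_resolves(url: str, timeout: float = 10.0) -> bool:
    """Rough check that a cited external link resolves to a non-error response.
    A link that resolves but points at unrelated content still needs a human
    judgment call."""
    try:
        response = requests.head(url, allow_redirects=True, timeout=timeout)
        return response.status_code < 400
    except requests.RequestException:
        # DNS failures, timeouts, and malformed URLs all count as "does not resolve."
        return False
```

A fabricated reference, like a paper that was never published, will typically fail DNS or return a 404 and come back False here.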

Lebleu said that speedy deletion is a “band-aid” that can take care of the most obvious cases and that the AI problem will persist as they see a lot more AI-generated content that doesn’t meet these new conditions for speedy deletion. They also noted that AI can be a useful tool that could be a positive force for Wikipedia in the future.

“However, the present situation is very different, and speculation on how the technology might develop in the coming years can easily distract us from solving issues we are facing now,” they said. “A key pillar of Wikipedia is that we have no firm rules, and any decisions we take today can be revisited in a few years when the technology evolves.”

Lebleu said that ultimately the new policy leaves Wikipedia in a better position than before, but not a perfect one.

“The good news (beyond the speedy deletion thing itself) is that we have, formally, made a statement on LLM-generated articles. This has been a controversial aspect in the community before: while the vast majority of us are opposed to AI content, exactly how to deal with it has been a point of contention, and early attempts at wide-ranging policies had failed. Here, building up on the previous incremental wins on AI images, drafts, and discussion comments, we workshopped a much more specific criterion, which nonetheless clearly states that unreviewed LLM content is not compatible in spirit with Wikipedia.”


#News


A researcher has scraped a much larger dataset of indexed ChatGPT conversations, exposing contracts and intimate conversations.#News


Nearly 100,000 ChatGPT Conversations Were Searchable on Google


A researcher has scraped nearly 100,000 conversations from ChatGPT that users had set to share publicly and that Google then indexed, creating a snapshot of the sorts of things people are using OpenAI’s chatbot for, and of what they are inadvertently exposing. 404 Media’s testing found the dataset includes everything from the sensitive to the benign: alleged texts of non-disclosure agreements, discussions of confidential contracts, people trying to use ChatGPT to understand their relationship issues, and lots of people asking ChatGPT to write LinkedIn posts.



#News


Keywords and tags have never been a useful metric for distilling nuance. Pushing for regulations based on them is repeating a 30-year history of porn panic online.#Steam #itch



Protesters outside LA's Tesla Diner fear for the future of democracy in the USA






The decision highlights hurdles faced by developers as they navigate a world where credit card companies dictate what is and isn't appropriate.


#News





People failing to identify a video of adorable bunnies as AI slop has sparked worries that many more people could fall for online scams.#AISlop #TikTok


AI Bunnies on Trampoline Causing Crisis of Confidence on TikTok


A generation who thought they were immune to being fooled by AI has been tricked by this video of bunnies jumping on a trampoline:

@rachelthecatlovers Just checked the home security cam and… I think we’ve got guest performers out back! @ring #bunny #ringdoorbell #ring #bunnies #trampoline ♬ Bounce When She Walk - Ohboyprince

The video currently has 183 million views on TikTok and it is at first glance extremely adorable. The caption says “Just checked the home security cam and… I think we’ve got guest performers out back! @ring”

People were excited by this. The bunnies seem to be having a nice time. @Greg posted on X, “Never knew how much I needed to see bunnies jumping on a trampoline”

Unfortunately, the bunnies are not real.

The video is AI generated. This becomes clear when, between the fifth and sixth seconds of the video, the back bunny vanishes.



The split second where the top left bunny vanishes

People want to believe, and the fact that the video is AI-generated is causing a widespread crisis of confidence among people who thought that AI slop would only fool their parents. We are as a culture intensely attuned to the idea that animals might do cute things at night when we can’t see them, and there have been several real viral security camera videos lately of animals trepidatiously checking out trampolines.
This particular video was difficult to discern as AI in part because security camera footage is famously the blurriest type of footage. We are used to surveillance camera video being blurry and dark, which hides some of the standard signs people look for when trying to determine whether a video is AI generated. The background of the image is also static; newer AI video generators are getting pretty good at creating the foreground subject of a video, but the background often remains very surreal, and a static background sidesteps that tell. Pretending to be nighttime security footage also helps to disguise the things AI is often bad at—accurate movement, correct blur and lighting, and fine details. Tagging “@ring” was also pretty smart by the uploader, because it gives a plausible place for the video to come from.

People are responding totally normally, embodying a very relatable arc: the confidence of youth to think “that will never happen to me,” followed by the crushing realization that eventually we all become old and susceptible to scams.

This guy sings that the video of the bunnies “might manufacture the way you made me feel - how do I know that the sky’s really sunny?”

@olivesongs11
7/29/25 - day 576 of writing a song every day
♬ original sound - olivesongs

While @OliviaDaytonn says “Now I feel like I’m gonna be one of those old people that get scammed”.

@oliviadaytonn I wanted them to be real so badly #bunnies #trampoline ♬ original sound - olivia dayton

Another TikToker says the bunnies were “The first AI video I believed was real - I am doomed when I’m old”

@catenstuff #duet with @rachelthecatlovers #bunny #AAALASPARATUCURRO #bunnyjumpingontrampoline ♬ Bounce When She Walk - Ohboyprince

And @sydney_benjamin offers a public apology to her best friend for sending her the video. “Guys, I fell for AI.. I’m quite ashamed, I think of myself as like an educated person.” She says that she felt good when she busted a previous AI video trend for her friends (Grandma Does Interviews On Street).

@sydney_benjamin
This one was hard to admit
♬ original sound - Sydney Benjamin

This video breaks down the animal-on-trampoline trend and explains how to spot a fake animal-on-trampoline video.

@showtoolsai How to spot AI videos - animals on trampolines #bunnies #dog #bear #bunny #ai ♬ original sound - showtools

Of course, because the bunny video went viral, there are now copycats. This video, published on YouTube shorts one day after the first, by a different account, is also AI generated.



Copycat AI-generated bunny trampoline video on YouTube shorts

This is a theme that has a long history of being explored in song; for a more authentic trampolining-bunny musical experience, there is this video which is from a comfortably pre-AI “9 years ago”.

The uploader, @Rachelthecatlovers, only has four other videos. The account posted its first video a year ago, then waited, then posted a second one this week, which is also somewhat unusual for AI slop. Most AI slop accounts post multiple times a day, and most of the accounts are newly created. @Rachelthecatlovers has one other AI bunny video (the flap to the door disappears) and a bird cam video. It also has a video of grapes being rehydrated with a needle, tagged #bunny.


@Rachelthecatlovers' previous AI bunny video

People are freaked out by being fooled by this video and are clearly confident that they can usually spot videos that have been generated. But maybe that’s just the toupee fallacy; you only see the bad ones. Trampolining bunnies have broken that facade.






Submit to biometric face scanning or risk your account being deleted, Spotify says, following the enactment of the UK's Online Safety Act.




We talked to people living in the building whose views are being blocked by Tesla's massive four-story screen.




The massive Tea breach; how the UK's age verification law is impacting access to information; and LeBron James' AI-related cease-and-desist.




The Plaintiff claims Tea harmed her and ‘thousands of other similarly situated persons in the massive and preventable cyberattack.’#News




The Sig Sauer P320 has a reputation for firing without pulling the trigger. The manufacturer says that's impossible, but the firearms community is showing the truth is more complicated.


#News




“If visibility of r/IsraelCrimes is being restricted under the Online Safety Act, it’s only because the state fears accountability,” moderators say.#News




404 Media first contacted Tea about the security issue on Saturday. The company disabled direct messages on Monday after our report.#News




"This is more representative of the developer environment that our future employees will work in."#Meta #AI #wired


Meta Is Going to Let Job Candidates Use AI During Coding Tests


This article was produced with support from WIRED.

Meta told employees that it is going to allow some coding job candidates to use an AI assistant during the interview process, according to internal Meta communications seen by 404 Media. The company has also asked existing employees to volunteer for a “mock AI-enabled interview,” the messages say.

It’s the latest indication that Silicon Valley giants are pushing software engineers to use AI in their jobs, and it signals a broader move toward hiring employees who can vibe code.

“AI-Enabled Interviews—Call for Mock Candidates,” a post from earlier this month on an internal Meta message board reads. “Meta is developing a new type of coding interview in which candidates have access to an AI assistant. This is more representative of the developer environment that our future employees will work in, and also makes LLM-based cheating less effective.”

“We need mock candidates,” the post continues. “If you would like to experience a mock AI-enabled interview, please sign up in this sheet. The questions are still in development; data from you will help shape the future of interviewing at Meta.”

Meta CEO Mark Zuckerberg has made clear at numerous all-hands and in public podcast interviews that he is not just pushing the company’s software engineers towards using AI in their work, but that he foresees human beings managing “AI coding agents” that will write code for the company.

“I think this year, probably in 2025, we at Meta as well as the other companies that are basically working on this, are going to have an AI that can effectively be a midlevel engineer that you have at your company that can write code,” Zuckerberg told Joe Rogan in January. “Over time we’ll get to a point where a lot of the code in our apps and including the AI that we generate is actually going to be built by AI engineers instead of people engineers […] in the future people are going to be so much more creative and they’re going to be freed up to do kind of crazy things.”

In April, Zuckerberg expanded on this slightly on a podcast with Dwarkesh Patel, where he said that “sometime in the next 12 to 18 months, we’ll reach the point where most of the code that’s going towards [AI] efforts is written by AI.”

While it’s true that many tech companies have pushed software engineers to use AI in their work, they have been slower to allow new applicants to use AI during the interview process. In fact, Anthropic, which makes the AI tool Claude, has specifically told job applicants that they cannot use AI during the interview process. To circumvent that type of ban, some AI tools promise to allow applicants to secretly use AI during coding interviews. The topic, in general, has been a controversial one in Silicon Valley. Established software engineers worry that the next batch of coders will be more AI “prompters” and “vibe coders” than software engineers, and that they may not know how to troubleshoot AI-written code when something goes wrong.

“We're obviously focused on using AI to help engineers with their day-to-day work, so it should be no surprise that we're testing how to provide these tools to applicants during interviews,” a Meta spokesperson told 404 Media.




The more than one million messages obtained by 404 Media are as recent as last week, discuss incredibly sensitive topics, and make it trivial to unmask some anonymous Tea users.#News




“Without these safeguards, Mr. Barber eventually developed full-blown PTSD, which he is currently still being treated for,” the former mod's lawyer said.




This Company Wants to Bring End-to-End Encrypted Messages to Bluesky’s AT Protocol#News








An error message appears saying "The following are not allowed: no zionist, no zionists" when users try to add the phrase to their bios, but any number of other phrases about political and religious preferences are allowed.#grindr


The games were mentioned in a 2024 report and are now part of a new lawsuit in which an 11-year-old girl was allegedly groomed and sexually assaulted after meeting a stranger on Roblox.#News


LeBron James' Lawyers Send Cease-and-Desist to AI Company Making Pregnant Videos of Him

Viral Instagram accounts making LeBron ‘brainrot’ videos have also been banned.#AISlop




Google’s AI Overview, which is easy to fool into stating nonsense as fact, is stopping people from finding and supporting small businesses and credible sources.#News


The wiping commands probably wouldn't have worked, but a hacker who says they wanted to expose Amazon’s AI “security theater” was able to add code to Amazon’s popular ‘Q’ AI assistant for VS Code, which Amazon then pushed out to users.




Welcome to the era of ‘gaslight driven development.’ Soundslice added a feature the chatbot thought existed after engineers kept finding screenshots from the LLM in its error logs.#News


ChatGPT Hallucinated a Feature, Forcing Human Developers to Add It


In what might be a first, a programmer added a feature to a piece of software because ChatGPT hallucinated it and customers kept attempting to force the software to do it. The developers of Soundslice, a sheet music scanning app that lets people digitize and edit sheet music, added the functionality to their site because the LLM kept telling people it existed. Rather than fight the LLM, Soundslice indulged the hallucination.

Adrian Holovaty, one of Soundslice’s developers, noticed something strange in the site’s error logs a few months ago. Users kept uploading ASCII tablature, a basic text system for notating music for guitar, despite the fact that Soundslice wasn’t set up to process it and had never advertised that it could. The error logs included pictures of what users had uploaded, and many of them were screenshots of ChatGPT conversations where the LLM had churned out ASCII tabs and told the users to send them to Soundslice.
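
For readers who haven’t seen it, ASCII tab writes each guitar string as a line of dashes with fret numbers dropped in where notes fall, which is exactly the kind of plain text an LLM can churn out. The snippet below is a hypothetical sketch of the format and of pulling notes out of it; it is not Soundslice’s importer, which has to cope with multi-digit frets, bends, slides, and bar lines.

```python
# A toy ASCII tab: one line per guitar string, single-digit fret numbers on dashes.
SAMPLE_TAB = """\
e|-----0-----0---|
B|---1-----1-----|
G|-2-----2-------|
D|---------------|
A|---------------|
E|---------------|
"""

def parse_ascii_tab(tab: str):
    """Extract (string_name, column, fret) tuples from a simple ASCII tab block."""
    notes = []
    for line in tab.strip().splitlines():
        string_name, _, body = line.partition("|")
        for column, char in enumerate(body):
            if char.isdigit():
                notes.append((string_name, column, int(char)))
    return notes

print(parse_ascii_tab(SAMPLE_TAB))
# [('e', 5, 0), ('e', 11, 0), ('B', 3, 1), ('B', 9, 1), ('G', 1, 2), ('G', 7, 2)]
```
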
“It was around 5-10 images daily, for a period of a month or two. Definitely enough where I was like, ‘What the heck is going on here?’” Holovaty told 404 Media. Rather than fight the LLM, Soundslice decided to add the feature ChatGPT had hallucinated. Holovaty said it only took his team a few hours to write up the code, which was a major factor in adding the feature.

“The main reason we did this was to prevent disappointment,” he said. “I highly doubt many people are going to sign up for Soundslice purely to use our ASCII tab importer […] we were motivated by the, frankly, galling reality that ChatGPT was setting Soundslice users up for failure. I mean, from our perspective, here were the options:

“1. Ignore it, and endure the psychological pain of knowing people were getting frustrated by our product for reasons out of our control.

“2. Put annoying banners on our site saying: ‘On the off chance that you're using ChatGPT and it told you about a Soundslice ASCII tab feature, that doesn't exist.’ That's disproportional and lame.

“3. Just spend a few hours and develop the feature.”

There’s also no way to tell ChatGPT the feature doesn’t exist. In an ideal world, OpenAI would have a formal procedure for removing content from its model, similar to the ability to request the removal of a site from Google’s index. “Obviously with an LLM it's much harder to do this technically, but I'm sure they can figure it out, given the absurdly high salaries their researchers are earning,” Holovaty said.

He added that the situation made him realize how powerful ChatGPT has become as an influencer of consumer behavior. “It's making product recommendations—for existent and nonexistent features alike—to massive audiences, with zero transparency into why it made those particular recommendations. And zero recourse.”

This may be the first time that developers have added a feature to a piece of software because ChatGPT hallucinated it, but it won’t be the last. In a personal blog post, developer Niki Tonsky dubbed this phenomenon “gaslight-driven development” and shared a recent experience that’s similar to Holovaty’s.

One of Tonsky’s projects is a database for frontends called Instant. An update method for the app used a text document called “update” but LLMs that interacted with Instant kept calling the file “create.” Tonsky told 404 Media that, rather than fight the LLMs, his team just added the text file with the name the systems wanted. “In general I agree `create` is more obvious, it’s just weird that we arrived at this through LLM,” he said.

He told 404 Media that programmers will probably need to account for the “tastes” of LLMs in the future. “You kinda already have to. It’s not programming for AI, but AI as a tool changes how we do programming,” he said.

Holovaty doesn’t hate AI—Soundslice uses machine learning to do its magic—but is mixed on LLMs. He compared his experience with ChatGPT to dealing with an overzealous sales team selling a feature that doesn’t exist. He also doesn’t trust LLMs to write code. He experimented with it, but found it caused more problems than it solved.

“I don't trust it for my production Soundslice code,” he said. “Plus: writing code is fun! Why would I choose to deny myself fun? To appease the capitalism gods? No thanks.”


#News


Spotify is publishing AI-generated tracks of dead artists; a company is selling hacked data to debt collectors; and the Astronomer CEO episode shows the surveillance dystopia we live in.#Podcast


The Tesla Diner has two gigantic screens, a robot that serves popcorn, and owners hope it will be free from people who don't like Tesla.




An internal memo obtained by 404 Media also shows the military ordered a review hold on "questionable content" at Stars and Stripes, the military's 'editorially independent' newspaper.




From ICE's facial recognition app to its Palantir contract, we've translated a spread of our ICE articles into Spanish and made them freely available.




Internal Slack chats and company discussion forums show that the surveillance giant is actively collaborating with ICE to locate people with deportation orders.#Spanish