


Butt out



#anal


... at the BBC


Anyone know where I can find any of these? None of my public trackers have any of them... thanks!


in reply to ☆ Yσɠƚԋσʂ ☆

At the same time, the hope is that Russian society would come out en masse against Russian President Putin and depose him.


The theory that Russia will collapse relies on hoping that Russians don't understand the existential threat of NATO expansion and NATO's hatred of them, AND that all Russian power brokers will start believing NATO disinformation, AND that the risk of losing the war will motivate Russians to surrender to Ukraine/NATO.

Instead, any Russian regime change is likely to be caused by Putin's weakness and failure to nuke US military bases in Germany and the UK, or otherwise by a perception of insufficient aggression or speed of progress.

in reply to humanspiral

Exactly, the only actual criticism of Putin in Russia is that he's being too soft with the west. If Putin is somehow ousted from power, then it would almost certainly be by somebody far more hardline like Medvedev. The whole notion of regime change as a path towards defeating Russia is fundamentally rooted in a lack of basic understanding of the political situation in Russia.


Trade war truce between US and China is back on


I think I might get TACO Bell for lunch.

in reply to kinther

So wait a minute here guys, you're telling me that the man who was convicted by a unanimous jury of fraud (cheating) in the 2016 election, the same guy who called the governors of various states and asked them to 'find him some votes' in 2020, did not run a clean honest campaign in 2024???

Get the EFF out of here!!

This entry was edited (3 months ago)



Wikipedia Pauses AI-Generated Summaries After Editor Backlash


Text to avoid paywall

The Wikimedia Foundation, the nonprofit organization which hosts and develops Wikipedia, has paused an experiment that showed users AI-generated summaries at the top of articles after an overwhelmingly negative reaction from the Wikipedia editors community.

“Just because Google has rolled out its AI summaries doesn't mean we need to one-up them, I sincerely beg you not to test this, on mobile or anywhere else,” one editor said in response to Wikimedia Foundation’s announcement that it will launch a two-week trial of the summaries on the mobile version of Wikipedia. “This would do immediate and irreversible harm to our readers and to our reputation as a decently trustworthy and serious source. Wikipedia has in some ways become a byword for sober boringness, which is excellent. Let's not insult our readers' intelligence and join the stampede to roll out flashy AI summaries. Which is what these are, although here the word ‘machine-generated’ is used instead.”

Two other editors simply commented, “Yuck.”

For years, Wikipedia has been one of the most valuable repositories of information in the world, and a laudable model for community-based, democratic internet platform governance. Its importance has only grown in the last couple of years during the generative AI boom, as it's one of the only internet platforms that has not been significantly degraded by the flood of AI-generated slop and misinformation. Unlike Google, which since embracing generative AI has instructed its users to eat glue, Wikipedia's community has kept its articles relatively high quality. As I reported last year, editors are actively working to filter out bad, AI-generated content from Wikipedia.

A page detailing the AI-generated summaries project, called “Simple Article Summaries,” explains that it was proposed after a discussion at Wikimedia’s 2024 conference, Wikimania, where “Wikimedians discussed ways that AI/machine-generated remixing of the already created content can be used to make Wikipedia more accessible and easier to learn from.” Editors who participated in the discussion thought that these summaries could improve the learning experience on Wikipedia, where some article summaries can be quite dense and filled with technical jargon, but that AI features needed to be clearly labeled as such and that users needed an easy way to flag issues with “machine-generated/remixed content once it was published or generated automatically.”

In one experiment where summaries were enabled for users who have the Wikipedia browser extension installed, the generated summary showed up at the top of the article, which users had to click to expand and read. That summary was also flagged with a yellow “unverified” label.

An example of what the AI-generated summary looked like.

Wikimedia announced that it was going to run the generated summaries experiment on June 2, and was immediately met with dozens of replies from editors who said “very bad idea,” “strongest possible oppose,” “Absolutely not,” etc.

“Yes, human editors can introduce reliability and NPOV [neutral point-of-view] issues. But as a collective mass, it evens out into a beautiful corpus,” one editor said. “With Simple Article Summaries, you propose giving one singular editor with known reliability and NPOV issues a platform at the very top of any given article, whilst giving zero editorial control to others. It reinforces the idea that Wikipedia cannot be relied on, destroying a decade of policy work. It reinforces the belief that unsourced, charged content can be added, because this platforms it. I don't think I would feel comfortable contributing to an encyclopedia like this. No other community has mastered collaboration to such a wondrous extent, and this would throw that away.”

A day later, Wikimedia announced that it would pause the launch of the experiment, but indicated that it’s still interested in AI-generated summaries.

“The Wikimedia Foundation has been exploring ways to make Wikipedia and other Wikimedia projects more accessible to readers globally,” a Wikimedia Foundation spokesperson told me in an email. “This two-week, opt-in experiment was focused on making complex Wikipedia articles more accessible to people with different reading levels. For the purposes of this experiment, the summaries were generated by an open-weight Aya model by Cohere. It was meant to gauge interest in a feature like this, and to help us think about the right kind of community moderation systems to ensure humans remain central to deciding what information is shown on Wikipedia.”

“It is common to receive a variety of feedback from volunteers, and we incorporate it in our decisions, and sometimes change course,” the Wikimedia Foundation spokesperson added. “We welcome such thoughtful feedback — this is what continues to make Wikipedia a truly collaborative platform of human knowledge.”

“Reading through the comments, it’s clear we could have done a better job introducing this idea and opening up the conversation here on VPT back in March,” a Wikimedia Foundation project manager said. VPT, or “village pump technical,” is where the Wikimedia Foundation and the community discuss technical aspects of the platform. “As internet usage changes over time, we are trying to discover new ways to help new generations learn from Wikipedia to sustain our movement into the future. In consequence, we need to figure out how we can experiment in safe ways that are appropriate for readers and the Wikimedia community. Looking back, we realize the next step with this message should have been to provide more of that context for you all and to make the space for folks to engage further.”

The project manager also said that “Bringing generative AI into the Wikipedia reading experience is a serious set of decisions, with important implications, and we intend to treat it as such,” and that “We do not have any plans for bringing a summary feature to the wikis without editor involvement. An editor moderation workflow is required under any circumstances, both for this idea, as well as any future idea around AI summarized or adapted content.”




in reply to bimbimboy

Why would anyone need Wikipedia to offer the AI summaries? Literally all chat bots with access to the internet will summarize Wikipedia when it comes to knowledge based questions. Let the creators of these bots serve AI slop to the masses.
in reply to bimbimboy

Why is it so damned hard for corporate to understand that most people have no use or need for AI at all?
in reply to Sam_Bass

"It is difficult to get a man to understand something, when his salary depends on his not understanding it."


— Upton Sinclair

in reply to explodicle

Wikipedia management shouldn't be under that pressure. There's no profit motive to enshittify or replace human contributions. They're funded by donations from users, so their top priority should be giving users what they want, not attracting bubble-chasing venture capital.
in reply to Sam_Bass

One of the biggest challenges for a nonprofit like Wikipedia is to find cheap/free labor that administration trusts.

AI "solves" this problem by lowering your standard of quality and dramatically increasing your capacity for throughput.

It is a seductive trade. Especially for a techno-libertarian like Jimmy Wales.

in reply to Sam_Bass

It pains me to argue this point, but are you sure there isn't a legitimate use case just this once? The text says that this was aimed at making Wikipedia more accessible to less advanced readers, like (I assume) people whose first language is not English. Judging by the screenshot they're also being fully transparent about it. I don't know if this is actually a good idea but it seems the least objectionable use of generative AI I've seen so far.



Ghostty in review: how's the new terminal emulator?


A few months ago, a new terminal emulator was released. It's called Ghostty, and it had been a highly anticipated terminal emulator for a while, especially due to the coverage it received from ThePrimeagen, who had been using it while it was in private beta.
in reply to Pro

Honestly, I rather like the default XFCE terminal. In fact, I was using it even before I used XFCE back when I was just playing with the default GNOME in VMs before I daily-drove Linux.
in reply to Pro

I tried it out on Fedora a few months ago and I found Alacritty felt faster in nvim, so I stayed on Alacritty.




Tech Deadline 2025 - leave big tech!


Please help promote the hashtags #Deadline2025, #BigTechWalkout2025 and #Reclaim2025 to reach those still using big tech platforms.

And share this great video that a friend of mine made showing how lame the big techbros really are.

If we starve big tech of data, their power diminishes.



Israel’s War on Reproduction in Gaza


The single explosion destroyed more than 4,000 embryos and over 1,000 vials of sperm and unfertilized eggs. Dr Bahaeldeen Ghalayini, the obstetrician who established the clinic, summed up the implications of the attack in an interview with Reuters: “5,000 lives in one shell.”

The strike was an act of reprocide: the systematic targeting of a community’s reproductive health with the intention of eliminating their future. In the context of Israel’s ongoing genocidal war in Gaza, reprocide serves as a tactic. Indeed, the definition of genocide includes “imposing measures intended to prevent births” within a particular national, ethnic or religious group.

The bombing of the IVF clinic was one spectacular example, but as a Palestinian women’s rights activist from Gaza, I have lived and witnessed how Israel uses reprocide within a settler colonial framework that seeks not only territorial domination but demographic erasure—a process that began long before October 7, 2023.

When I was 15 years old, following the Israeli assault on Gaza in 2008–2009, Israeli soldiers began wearing and distributing t-shirts that depicted a pregnant woman in crosshairs above the slogan “1 Shot 2 Kills.” I recall the fear felt by the pregnant women I knew. The t-shirts prompted people around me to recount stories of pregnant women being killed or wounded during other moments of extreme violence in Palestinian history, from the start of the Nakba in 1948 to the Sabra and Shatila massacres in 1982. Underscoring the eliminationist nature of this violence, Israel remains among the world’s leaders in assisted reproduction technology, actively encouraging birth rates among Jewish citizens.

In an effort to trace the effects of reprocide amid Israel’s ongoing genocidal war, between October 2023 and October 2024, I collected ethnographic evidence—voice notes, text messages, emails and phone calls—from those enduring or witnessing reproductive violence. Analyzing their accounts alongside official reports from Gaza reveals the many ways Israel has weaponized reproduction, some more obvious than others: from the direct assaults on reproductive health and infrastructure to the conditions it forces women and men to reproduce under to sexual violence and its role in reproductive erasure.



Are Arch Linux repos in California being blocked?


The image shows the log of a system update in Arch Linux. Several US mirrors are skipped with error 404, until pacman finds a Canadian mirror to download packages.

It's not happening with every Californian mirror (I'm testing all US mirrors), but it's definitely happening with a lot of them, from Mexico. Is this collateral to what's happening in LA or is it mandated by the government? Do you know anything?
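One way to narrow this down is to probe every mirror in the mirrorlist directly with curl and record the HTTP status each returns, independently of pacman. This is a minimal sketch assuming the standard `/etc/pacman.d/mirrorlist` format; the `core.db` probe path and `x86_64` architecture are assumptions (any repo database file would do):

```shell
#!/bin/sh
# probe_mirrors FILE: print "<http_code><TAB><mirror url>" for every active
# Server entry in a pacman mirrorlist, so per-mirror 404s can be spotted.
probe_mirrors() {
    grep -E '^Server *= *' "$1" | sed 's/^Server *= *//' |
    while read -r url; do
        # Expand pacman's $repo/$arch variables to a concrete file to fetch.
        probe="$(printf '%s' "$url" | sed 's|\$repo|core|g; s|\$arch|x86_64|g')/core.db"
        # -w prints only the status code; unreachable hosts show up as 000.
        code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$probe")
        printf '%s\t%s\n' "$code" "$url"
    done
}

# Example: probe_mirrors /etc/pacman.d/mirrorlist | sort
```

Running this from both a Mexican and a US vantage point (e.g. over a VPN) would show whether the 404s are geo-dependent or the mirrors themselves are down.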













Swiss probe intelligence leaks to Russia




Man who tried to smuggle £1.2m in suitcases out of UK jailed


A man who tried to smuggle £1.2m in suitcases out of the United Kingdom to Lebanon has been jailed for 21 months, following a National Crime Agency investigation.

https://www.nationalcrimeagency.gov.uk/news/man-who-tried-to-smuggle-1-2m-in-suitcases-out-of-uk-jailed



C is one of the most energy-saving languages


cross-posted from: lemmy.world/post/31184895

cross-posted from: lemmy.world/post/31184706
C is one of the top languages in terms of speed, memory and energy

threads.com/@engineerscodex/po…





HP reveals $24,999 hardware created just for Google Beam




With a Trump-Driven Reduction of Nearly 2,000 Employees, F.D.A. Will Use A.I. in Drug Approvals to ‘Radically Increase Efficiency’


Text to avoid paywall

The Food and Drug Administration is planning to use artificial intelligence to “radically increase efficiency” in deciding whether to approve new drugs and devices, one of several top priorities laid out in an article published Tuesday in JAMA.

Another initiative involves a review of chemicals and other “concerning ingredients” that appear in U.S. food but not in the food of other developed nations. And officials want to speed up the final stages of making a drug or medical device approval decision to mere weeks, citing the success of Operation Warp Speed during the Covid pandemic when workers raced to curb a spiraling death count.

“The F.D.A. will be focused on delivering faster cures and meaningful treatments for patients, especially those with neglected and rare diseases, healthier food for children and common-sense approaches to rebuild the public trust,” Dr. Marty Makary, the agency commissioner, and Dr. Vinay Prasad, who leads the division that oversees vaccines and gene therapy, wrote in the JAMA article.

The agency plays a central role in pursuing the agenda of the U.S. health secretary, Robert F. Kennedy Jr., and it has already begun to press food makers to eliminate artificial food dyes. The new road map also underscores the Trump administration’s efforts to smooth the way for major industries with an array of efforts aimed at getting products to pharmacies and store shelves quickly.

Some aspects of the proposals outlined in JAMA were met with skepticism, particularly the idea that artificial intelligence is up to the task of shearing months or years from the painstaking work of examining applications that companies submit when seeking approval for a drug or high-risk medical device.

“I don’t want to be dismissive of speeding reviews at the F.D.A.,” said Stephen Holland, a lawyer who formerly advised the House Committee on Energy and Commerce on health care. “I think that there is great potential here, but I’m not seeing the beef yet.”

in reply to bimbimboy

Oh good, a 60% chance you’ll get an ineffective or killer drug because they’ll use AI to analyze the usage and AI to report on it.
in reply to RememberTheApollo_

If it actually ends up being an AI and not just some Trump cuck stooge masquerading as AI picking the drug by the company that gave the largest bribe to Trump, I 100% guarantee this AI is trained only on papers written by non-peer reviewed drug company paid "scientists" containing made up narratives.

Those of us prescribed the drugs will be the guinea pigs, because R&D costs money and hits the bottom line. The many deaths will be conveniently scapegoated on "the AI" the morons in charge promised is smarter and more efficient than a person.

Fuck this shit.

in reply to bimbimboy

Final stage capitalism: purging all the experts (at catching bullshit from applicants) before the agencies train the AI with newb-level inputs.





A Tennessee law that made threats of mass violence at school a felony has led to students being arrested based on rumors and for noncredible threats.


In one case, a Hamilton County deputy arrested an autistic 13-year-old in August for saying his backpack would blow up, though the teen later said he just wanted to protect the stuffed bunny inside.

In the same county almost two months later, a deputy tracked down and arrested an 11-year-old student at a family birthday party. The child later explained he had overheard one student asking if another was going to shoot up the school tomorrow, and that he answered “yes” for him. Last month, the public charter school agreed to pay the student’s family $100,000 to settle a federal lawsuit claiming school officials wrongly reported him to police. The school also agreed to implement training on how to handle these types of incidents, including reporting only “valid” threats to police.

Despite the outcry over increased arrests in Tennessee, two states followed its lead by passing laws that crack down harder on hoax threats: New Mexico and Georgia now have such laws, and more states are in the process.




Mathematicians move the needle on the Kakeya conjecture, a decades-old geometric problem


in reply to ☆ Yσɠƚԋσʂ ☆

Fixed link: phys.org/news/2025-03-mathemat…


ChatGPT Mostly Source Wikipedia; Google AI Overviews Mostly Source Reddit


A study from Profound of OpenAI's ChatGPT, Google AI Overviews and Perplexity shows that while ChatGPT mostly sources its information from Wikipedia, Google AI Overviews and Perplexity mostly source their information from Reddit.
#AI


Portland Said It Was Investing in Homeless People’s Safety. Deaths Have Skyrocketed.


But although the city spent roughly $200,000 per homeless resident throughout that time (2019–2023, five years at most), deaths of homeless people recorded in the county quadrupled, climbing from 113 in 2019 to more than 450 in 2023, according to the most recent data from the Multnomah County Health Department. The rise in deaths far outpaces the growth in the homeless population, which was recorded at 6,300 by a 2023 county census, a number most agree is an undercount. The county began including newly available state death records in its 2022 report, which added about 60 deaths to the yearly tolls.

Homeless residents of Multnomah County now die at a higher rate than in any major West Coast county with available homeless mortality data: more than twice the rate of those in Los Angeles County and the Washington state county containing Seattle and Tacoma. Almost all the homeless population in Multnomah County lives within Portland city limits.



in reply to Pro

I used ChatGPT on something and got a response sourced from Reddit. I told it I'd be more likely to believe the answer if it told me it had simply made up the answer. It then provided better references.

I don't remember what it was but it was definitely something that would be answered by an expert on Reddit, but would also be answered by idiots on Reddit and I didn't want to take chances.

in reply to Pro

I would be hesitant to use either as a primary source...



in reply to daniel_callahan

The size of the riot doesn’t matter. What matters is that LA County police, plus the city and the fire department, had the situation well in hand. Trump is using this as an excuse to use the military to take control.

Makes me sick.

in reply to daniel_callahan

It would be smaller if the police and federal government stopped shooting at the press and nonviolent protesters and making them move around.

It only gets violent when the aggressors (the cops) become violent.



BBC: China's electric cars are cheaper, but at what cost? 🤣


#News


Rep. Moulton says many Marine junior officers are opposed to LA deployment


politics reshared this.

in reply to silence7

Then refuse the order. It's literally their duty
in reply to silence7

Then disobey the unlawful order.

Is this whole nation just averse to taking any action? I'm constantly hearing people complain, then do absolutely fuck all.



When analog restoration makes the past feel too real


Original text below by @versiqcontent@moist.catsweat.com

Lately I have been reflecting on how powerful old photos can become when they are carefully brought back to life. Not because of any specific image, but because of the strange feeling they create. You scan an old photo, adjust a few things, and suddenly it feels like the person is right there, alive and present.

It makes you pause. This is not just an old picture. This is memory coming back with full force.

I found a short article that expresses this feeling really well. It talks about how youth in vintage photos can feel unexpectedly modern and how that changes the way we look at the past.

Curious if anyone else here has felt something similar while working with old family pictures or film.


in reply to ByteOnBikes

This won’t be the last time this happens, unfortunately. Spread the word and stay safe, everyone.