


John L. Young, the Guy Who Created Wikileaks Before Wikileaks, Dies at 89


The unsung whistleblower died on March 28 in New York City, where he resided with his partner, Deborah Natsios. Some called him an under-recognized hero of the digital age.
#USA

in reply to Sunshine (she/her)

Yes. We don’t need millions of users to be successful. We come on here for a reason, we enjoy it. And to me that’s all that’s needed for success.

in reply to BrikoX

See the traitors: clerk.house.gov/Votes/2025158




Democrats Are Throwing In the Towel on Rural America


Since 2016, Democrats have operationally withdrawn from rural America. No party can win nationally without rural voters, and progressive economics have plenty to offer, if only the party would embrace them.
#USA
in reply to BrikoX

Calling Kyrsten Sinema and Joe Manchin “centrists” just shows you have no idea what you are talking about. They are 100% corporate Democrats who took millions from the oil and pharmaceutical lobbies to kill bills that were unfavorable to them.


Yeah. centrists.

John Fetterman voted with Republicans more than he voted with Democrats. He’s a simple sellout to a foreign country, primarily funded by The American Israel Public Affairs Committee (AIPAC). He should be treated as a traitor.


Yeah. A centrist.


in reply to Ensign_Crab

You clearly have no understanding of what centrist means. Neither of them represents the middle of the country. Even going by party affiliation and the definition used by US corporate media, the center of two right-wing parties is still right wing.



in reply to Allah

Study co-author Maitreyee Wairagkar, a neuroscientist at the University of California, Davis, and her colleagues trained deep-learning algorithms to capture the signals in his brain every 10 milliseconds. Their system decodes, in real time, the sounds the man attempts to produce rather than his intended words or the constituent phonemes — the subunits of speech that form spoken words.


This is a really cool approach. They're not having to determine the meaning of the speech; instead they pick up signals after the person's brain has already done that part and is just trying to vocalize. I'm guessing they capture the nerve impulses that would be moving the muscles in the face, mouth, lips, and possibly larynx, then use the AI to quickly determine which sounds those impulses would produce in the few milliseconds they exist, and then have the machine produce those sounds artificially. Because they can do this so fast (every 10 milliseconds), it gets close to the body's own response time in reproducing the specific sounds.
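
To make the 10-millisecond framing concrete, here is a minimal sketch of what such a streaming decode loop could look like. Everything in it is hypothetical: the frame size, the channel count, and the SoundDecoder, read_frame and synthesize stand-ins are illustrations, not the study's actual system.

# Minimal sketch of a streaming "neural frame -> sound" loop, assuming a
# hypothetical trained model; the real system in the study is far more involved.
import time
import numpy as np

FRAME_MS = 10        # decode a fresh frame of neural data every 10 ms
N_CHANNELS = 256     # assumed electrode count, for illustration only

class SoundDecoder:
    """Stand-in for a deep-learning model mapping a short window of neural
    activity to acoustic features (sounds, not words or phonemes)."""
    def predict(self, frame: np.ndarray) -> np.ndarray:
        # A real model would run a neural network here; we return
        # placeholder acoustic features of a plausible shape.
        return np.zeros(80)  # e.g. one mel-spectrogram column

def read_frame() -> np.ndarray:
    """Stand-in for pulling 10 ms of multi-channel neural data."""
    return np.random.randn(N_CHANNELS)

def synthesize(acoustic_features: np.ndarray) -> None:
    """Stand-in for a vocoder turning acoustic features into audio output."""
    pass

def run(decoder: SoundDecoder, seconds: float = 1.0) -> None:
    """Decode and vocalize in near real time, one frame every 10 ms."""
    for _ in range(int(seconds * 1000 / FRAME_MS)):
        synthesize(decoder.predict(read_frame()))
        time.sleep(FRAME_MS / 1000)

if __name__ == "__main__":
    run(SoundDecoder())

The point of the loop structure is that each pass handles only one short frame, which is what keeps the end-to-end latency close to natural vocal response time.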

in reply to Allah

This is exciting and terrifying. I am NOT looking forward to the future anymore.



What is the catch with Epic Games' free games?


Like what the title says. There's always a catch unless it's FOSS. So, what is the catch with them giving games for free that you can keep forever? What will the developers of the games get as a thank you?
in reply to airikr

They also for sure get revenue from the hardware companies, seeing recent game releases like Doom: The Dark Ages or ILL, where you need a GPU with at least 32 GB to run it at more than 20 FPS in standard resolution, while you grill bacon on the power supply.
This entry was edited (3 months ago)
in reply to Zerush

Standard resolution for me is FHD. Heavy-duty games O.o If it is true, that is. Any source? DOOM: The Dark Ages requires a 16 GB GPU for its recommended spec, and ILL's system requirements are TBA.

When I read your comment, I could not stop thinking about those exclusive games that Epic Games have every now and then. I highly dislike that!

This entry was edited (3 months ago)
in reply to airikr

Hardware companies need money. Yes, Doom needs at least 16 GB of RAM to run the game at 1024x768, windowed as said, and ILL will surely need more when they release it. It's programmed obsolescence: while a current PC can last almost 15 years or more, they use the software to make it obsolete. And that's apart from the prices of these games; DOOM is over €100 and ILL surely won't be cheaper. OK, the graphics are stunning, but that alone doesn't make a game better than others; apart from the graphics, these games normally offer pretty linear gameplay anyway.

My favorite first-person game for almost 10 years has been The Dark Mod: nice graphics, no worse than in commercial games, intelligent gameplay, and it doesn't need a NASA computer to run; almost any cheap laptop is enough. It works on Windows, Linux and Mac and is 100% free, with 170 community-made missions and more released every few months, which you can download and add from within the game menu.

This entry was edited (3 months ago)
in reply to airikr

2 great new TDM missions released
- The Last Night on Crookshank Line
- The Lieutnant 4 - A Reciprocal Gambit
in reply to Zerush

Thanks! Will try them out sometime 😀 Last time I tried to download missions (which was maybe 2 months ago), I got a 404. Hopefully that issue is fixed now.

Edit: the error was not 404, but "Cannot connect to server". I had to execute a command to make it work. Will give one of these missions a shot now.

This entry was edited (3 months ago)
in reply to airikr

Like any company offering "exclusive deals only in the app," the catch is that you have to sign up for an account and install an app. That's one more account and one more app that you would not normally have installed but for the "deal."
This entry was edited (3 months ago)





Israel places 2 foreign activists from Gaza aid ship in solitary confinement


Israel has placed two international activists from a Gaza-bound aid ship in solitary confinement, an Israeli rights group said on Wednesday.

“Rima Hassan was placed in isolation under inhumane conditions in Neve Tirza Prison after writing “Free Palestine” on a wall in Givon Prison,” it added.

“She was moved to a small, windowless cell with extremely poor hygienic conditions and has been denied access to the prison yard.”

in reply to IndustryStandard

Can you imagine the response if China or Russia kidnapped and abused an EU politician?


Remember when corporations avoided politics on social media?


Study finds a surge on Twitter starting in 2017, most of it Democratic-leaning and from a surprising range of firms, with negative effects on stock price
#USA


Brazil’s panda bond plan illustrates yuan’s growing international appeal





Estimates


in reply to ☆ Yσɠƚԋσʂ ☆

Whenever I need to provide an estimate, I ask everyone on the team for their gut feeling, take the second-largest estimate and multiply it by 1.5. Seems to work pretty well. (If you can't tell, I don't know what I'm doing with management.)
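
For what it's worth, that heuristic is simple enough to write down. A minimal sketch in Python that just restates the rule above; the function name and the 1.5 padding factor come from the comment, not from any established estimation method.

def padded_estimate(gut_feelings: list[float], padding: float = 1.5) -> float:
    """Take the team's gut-feeling estimates, pick the second-largest,
    and pad it by a fixed factor (1.5, as described above)."""
    if len(gut_feelings) < 2:
        raise ValueError("need at least two estimates")
    second_largest = sorted(gut_feelings)[-2]
    return second_largest * padding

# Example: gut feelings of 3, 5, 8 and 13 days -> second-largest is 8 -> 12 days.
print(padded_estimate([3, 5, 8, 13]))  # 12.0

Using the second-largest value rather than the maximum discards a single outlier while still biasing toward the pessimistic end, which is roughly why the rule tends to land in a sane range.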
in reply to balsoft

I find the most relevant aspect of an estimate is how familiar the person making the estimate is with the problem. People who have the best understanding will inevitably give the best estimates.


Sidelines No More


cross-posted from: lemmy.ml/post/31534144

Alt text: Wrestler AJ Styles laughing with a label of "Trump sending in the military." Unbeknownst to him, his opponent, The Undertaker, is standing behind him menacingly with the label "Mass of protesters coming off the sidelines."

in reply to drspod

The thing is I agree with nearly every premise of superdeterminism. But the conclusions seem stretched.

I love the idea of not abiding by the strict assumptions set forth by Bell's theorem; the idea that determinism doesn't have to hide within the simple hidden-variable models Bell's theorem rules out; the idea that we are essentially always part of the experimental system; the questioning of the ideal of the objective, rational experimenter with free will.

Yet I haven’t seen any serious mechanism explaining how the required correlations between experimenter choices and particle states could have been embedded in the universe’s initial conditions in such a finely tuned manner, given that experimentally the outcomes are indistinguishable from standard quantum mechanics. I just can’t imagine how this could be the case without adding quasi-conspiratorial assumptions.

This entry was edited (3 months ago)
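
For reference, the "strict assumptions" at issue are the ones behind Bell-type inequalities: locality plus measurement-setting independence, the latter being exactly what superdeterminism drops. A short LaTeX note, offered only as a reminder of the standard textbook form, not as anything specific to the post being discussed:

% Local hidden-variable models, with settings chosen independently of the
% hidden state \lambda, obey the CHSH bound
\[
  S = \bigl| E(a,b) - E(a,b') + E(a',b) + E(a',b') \bigr| \le 2,
\]
% while quantum mechanics can reach values up to the Tsirelson bound
\[
  S_{\mathrm{QM}} \le 2\sqrt{2}.
\]
% Superdeterminism evades the theorem by letting the settings correlate with
% \lambda, i.e. by dropping the assumption \rho(\lambda \mid a,b) = \rho(\lambda).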


Protesters Urged Not To Give Trump Administration Pretext For What It Already Doing


LOS ANGELES—Responding to escalating clashes between civilian activists and militarized immigration authorities, Los Angeles Mayor Karen Bass publicly urged protesters Monday not to give the Trump administration any pretext for what they’re already doing and will keep doing no matter what.

“Angelenos, don’t engage in violence and give the administration an excuse to inflict all the damage they have been inflicting carte blanche for months on end,” said Bass, adding that Trump and his team are just looking for a reason to respond with violence, as they would have done whether or not any of this happened.

“Don’t fan the flame that has been fanned behind the scenes at the White House since day one of Trump’s term in office. You wouldn’t want them to start abducting people in broad daylight and deporting them, would you? No, so let’s not become scapegoats for the horrific violations of civil liberties that would have eventually landed at our doorstep regardless.”

At press time, Bass warned that Trump was using the actions of protesters to justify sending in the National Guard that had been pre-deployed to the conflict days before it even began.





in reply to Allah

Bots were a big reason why I didn't continue playing Naraka. A battle royale that has bots is just awful. I'd rather have long queues than a BR lobby with a handful of people.


Butt out



#anal



Trade war truce between US and China is back on


I think I might get TACO Bell for lunch.

in reply to kinther

So wait a minute here guys, you're telling me that the man who was convicted by a unanimous jury of fraud (cheating) in the 2016 election, the same guy who called the governors of various states and asked them to 'find him some votes' in 2020, did not run a clean honest campaign in 2024???

Get the EFF out of here!!

This entry was edited (3 months ago)


Wikipedia Pauses AI-Generated Summaries After Editor Backlash


Text to avoid paywall

The Wikimedia Foundation, the nonprofit organization which hosts and develops Wikipedia, has paused an experiment that showed users AI-generated summaries at the top of articles after an overwhelmingly negative reaction from the Wikipedia editors community.

“Just because Google has rolled out its AI summaries doesn't mean we need to one-up them, I sincerely beg you not to test this, on mobile or anywhere else,” one editor said in response to Wikimedia Foundation’s announcement that it will launch a two-week trial of the summaries on the mobile version of Wikipedia. “This would do immediate and irreversible harm to our readers and to our reputation as a decently trustworthy and serious source. Wikipedia has in some ways become a byword for sober boringness, which is excellent. Let's not insult our readers' intelligence and join the stampede to roll out flashy AI summaries. Which is what these are, although here the word ‘machine-generated’ is used instead.”

Two other editors simply commented, “Yuck.”

For years, Wikipedia has been one of the most valuable repositories of information in the world, and a laudable model for community-based, democratic internet platform governance. Its importance has only grown in the last couple of years during the generative AI boom as it’s one of the only internet platforms that has not been significantly degraded by the flood of AI-generated slop and misinformation. As opposed to Google, which since embracing generative AI has instructed its users to eat glue, Wikipedia’s community has kept its articles relatively high quality. As I reported last year, editors are actively working to filter out bad, AI-generated content from Wikipedia.

A page detailing the AI-generated summaries project, called “Simple Article Summaries,” explains that it was proposed after a discussion at Wikimedia’s 2024 conference, Wikimania, where “Wikimedians discussed ways that AI/machine-generated remixing of the already created content can be used to make Wikipedia more accessible and easier to learn from.” Editors who participated in the discussion thought that these summaries could improve the learning experience on Wikipedia, where some article summaries can be quite dense and filled with technical jargon, but that AI features needed to be clearly labeled as such and that users needed an easy way to flag issues with “machine-generated/remixed content once it was published or generated automatically.”

In one experiment where summaries were enabled for users who have the Wikipedia browser extension installed, the generated summary showed up at the top of the article, which users had to click to expand and read. That summary was also flagged with a yellow “unverified” label.

An example of what the AI-generated summary looked like.

Wikimedia announced that it was going to run the generated summaries experiment on June 2, and was immediately met with dozens of replies from editors who said “very bad idea,” “strongest possible oppose,” “Absolutely not,” etc.

“Yes, human editors can introduce reliability and NPOV [neutral point-of-view] issues. But as a collective mass, it evens out into a beautiful corpus,” one editor said. “With Simple Article Summaries, you propose giving one singular editor with known reliability and NPOV issues a platform at the very top of any given article, whilst giving zero editorial control to others. It reinforces the idea that Wikipedia cannot be relied on, destroying a decade of policy work. It reinforces the belief that unsourced, charged content can be added, because this platforms it. I don't think I would feel comfortable contributing to an encyclopedia like this. No other community has mastered collaboration to such a wondrous extent, and this would throw that away.”

A day later, Wikimedia announced that it would pause the launch of the experiment, but indicated that it’s still interested in AI-generated summaries.

“The Wikimedia Foundation has been exploring ways to make Wikipedia and other Wikimedia projects more accessible to readers globally,” a Wikimedia Foundation spokesperson told me in an email. “This two-week, opt-in experiment was focused on making complex Wikipedia articles more accessible to people with different reading levels. For the purposes of this experiment, the summaries were generated by an open-weight Aya model by Cohere. It was meant to gauge interest in a feature like this, and to help us think about the right kind of community moderation systems to ensure humans remain central to deciding what information is shown on Wikipedia.”

“It is common to receive a variety of feedback from volunteers, and we incorporate it in our decisions, and sometimes change course,” the Wikimedia Foundation spokesperson added. “We welcome such thoughtful feedback — this is what continues to make Wikipedia a truly collaborative platform of human knowledge.”

“Reading through the comments, it’s clear we could have done a better job introducing this idea and opening up the conversation here on VPT back in March,” a Wikimedia Foundation project manager said. VPT, or “village pump technical,” is where The Wikimedia Foundation and the community discuss technical aspects of the platform. “As internet usage changes over time, we are trying to discover new ways to help new generations learn from Wikipedia to sustain our movement into the future. In consequence, we need to figure out how we can experiment in safe ways that are appropriate for readers and the Wikimedia community. Looking back, we realize the next step with this message should have been to provide more of that context for you all and to make the space for folks to engage further.”

The project manager also said that “Bringing generative AI into the Wikipedia reading experience is a serious set of decisions, with important implications, and we intend to treat it as such,” and that “We do not have any plans for bringing a summary feature to the wikis without editor involvement. An editor moderation workflow is required under any circumstances, both for this idea, as well as any future idea around AI summarized or adapted content.”
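
As an aside on the mechanics, "an open-weight Aya model by Cohere" means the summaries came from a publicly downloadable multilingual model rather than a proprietary API. A minimal sketch of what generating such a summary could look like, assuming the Hugging Face transformers library and the Aya 101 checkpoint; the model ID, the prompt wording, and the length cap are illustrative assumptions, not details from the Foundation's actual setup.

# Hypothetical sketch: produce a simplified summary with an open-weight Aya
# model via Hugging Face transformers. Model choice and prompt are assumptions.
from transformers import pipeline

generator = pipeline("text2text-generation", model="CohereForAI/aya-101")

article_text = "..."  # the lead section of a Wikipedia article would go here

result = generator(
    "Summarize the following text in simple language: " + article_text,
    max_new_tokens=150,
)
print(result[0]["generated_text"])

In the experiment described above, output like this was shown collapsed at the top of the article with a yellow "unverified" label, rather than replacing the human-written lead.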




in reply to bimbimboy

Why would anyone need Wikipedia to offer AI summaries? Literally all chatbots with access to the internet will summarize Wikipedia when it comes to knowledge-based questions. Let the creators of those bots serve AI slop to the masses.
in reply to bimbimboy

Why is it so damned hard for corporate to understand that most people have no use or need for AI at all?
in reply to Sam_Bass

"It is difficult to get a man to understand something, when his salary depends on his not understanding it."


— Upton Sinclair

in reply to explodicle

Wikipedia management shouldn't be under that pressure. There's no profit motive to enshittify or replace human contributions. They're funded by donations from users, so their top priority should be giving users what they want, not attracting bubble-chasing venture capital.
in reply to Sam_Bass

One of the biggest challenges for a nonprofit like Wikipedia is finding cheap/free labor that the administration trusts.

AI "solves" this problem by lowering your standard of quality and dramatically increasing your capacity for throughput.

It is a seductive trade. Especially for a techno-libertarian like Jimmy Wales.

This entry was edited (3 months ago)
in reply to Sam_Bass

It pains me to argue this point, but are you sure there isn't a legitimate use case just this once? The text says that this was aimed at making Wikipedia more accessible to less advanced readers, like (I assume) people whose first language is not English. Judging by the screenshot they're also being fully transparent about it. I don't know if this is actually a good idea but it seems the least objectionable use of generative AI I've seen so far.
This entry was edited (3 months ago)



Ghostty in review: how's the new terminal emulator?


A few months ago, a new terminal emulator was released. It's called ghostty, and it has been a highly anticipated terminal emulator for a while, especially due to the coverage it received from ThePrimeagen, who had been using it while it was in private beta.
This entry was edited (3 months ago)
in reply to Pro

Honestly, I rather like the default XFCE terminal. In fact, I was using it even before I used XFCE back when I was just playing with the default GNOME in VMs before I daily-drove Linux.
in reply to Pro

I tried it out on Fedora a few months ago and found alacritty felt faster in nvim, so I stayed on alacritty.



Israel’s War on Reproduction in Gaza


The single explosion destroyed more than 4,000 embryos and over 1,000 vials of sperm and unfertilized eggs. Dr Bahaeldeen Ghalayini, the obstetrician who established the clinic, summed up the implications of the attack in an interview with Reuters: “5,000 lives in one shell.”

The strike was an act of reprocide: the systematic targeting of a community’s reproductive health with the intention of eliminating their future. In the context of Israel’s ongoing genocidal war in Gaza, reprocide serves as a tactic. Indeed, the definition of genocide includes “imposing measures intended to prevent births” within a particular national, ethnic or religious group.

The bombing of the IVF clinic was one spectacular example, but as a Palestinian women’s rights activist from Gaza, I have lived and witnessed how Israel uses reprocide within a settler colonial framework that seeks not only territorial domination but demographic erasure—a process that began long before October 7, 2023.

When I was 15 years old, following the Israeli assault on Gaza in 2008–2009, Israeli soldiers began wearing and distributing t-shirts that depicted a pregnant woman in crosshairs above the slogan “1 Shot 2 Kills.” I recall the fear felt by the pregnant women I knew. The t-shirts prompted people around me to recount stories of pregnant women being killed or wounded during other moments of extreme violence in Palestinian history, from the start of the Nakba in 1948 to the Sabra and Shatila massacres in 1982. Underscoring the eliminationist nature of this violence, Israel remains among the world’s leaders in assisted reproduction technology, actively encouraging birth rates among Jewish citizens.

In an effort to trace the effects of reprocide amid Israel’s ongoing genocidal war, between October 2023 and October 2024, I collected ethnographic evidence—voice notes, text messages, emails and phone calls—from those enduring or witnessing reproductive violence. Analyzing their accounts alongside official reports from Gaza reveals the many ways Israel has weaponized reproduction, some more obvious than others: from the direct assaults on reproductive health and infrastructure to the conditions it forces women and men to reproduce under to sexual violence and its role in reproductive erasure.







With a Trump-driven reduction of nearly 2,000 employees, F.D.A. Will Use A.I. in Drug Approvals to ‘Radically Increase Efficiency’


Text to avoid paywall

The Food and Drug Administration is planning to use artificial intelligence to “radically increase efficiency” in deciding whether to approve new drugs and devices, one of several top priorities laid out in an article published Tuesday in JAMA.

Another initiative involves a review of chemicals and other “concerning ingredients” that appear in U.S. food but not in the food of other developed nations. And officials want to speed up the final stages of making a drug or medical device approval decision to mere weeks, citing the success of Operation Warp Speed during the Covid pandemic when workers raced to curb a spiraling death count.

“The F.D.A. will be focused on delivering faster cures and meaningful treatments for patients, especially those with neglected and rare diseases, healthier food for children and common-sense approaches to rebuild the public trust,” Dr. Marty Makary, the agency commissioner, and Dr. Vinay Prasad, who leads the division that oversees vaccines and gene therapy, wrote in the JAMA article.

The agency plays a central role in pursuing the agenda of the U.S. health secretary, Robert F. Kennedy Jr., and it has already begun to press food makers to eliminate artificial food dyes. The new road map also underscores the Trump administration’s efforts to smooth the way for major industries with an array of efforts aimed at getting products to pharmacies and store shelves quickly.

Some aspects of the proposals outlined in JAMA were met with skepticism, particularly the idea that artificial intelligence is up to the task of shearing months or years from the painstaking work of examining applications that companies submit when seeking approval for a drug or high-risk medical device.

“I don’t want to be dismissive of speeding reviews at the F.D.A.,” said Stephen Holland, a lawyer who formerly advised the House Committee on Energy and Commerce on health care. “I think that there is great potential here, but I’m not seeing the beef yet.”

This entry was edited (3 months ago)
in reply to bimbimboy

Oh good, a 60% chance you’ll get an ineffective or killer drug because they’ll use AI to analyze the usage and AI to report on it.
in reply to RememberTheApollo_

If it actually ends up being an AI, and not just some Trump cuck stooge masquerading as AI picking drugs from whichever company gave the largest bribe to Trump, I 100% guarantee this AI will be trained only on papers written by non-peer-reviewed, drug-company-paid "scientists" containing made-up narratives.

Those of us prescribed the drugs will be the guinea pigs, because R&D costs money and hits the bottom line. The many deaths will be conveniently scapegoated on "the AI" that the morons in charge promised is smarter and more efficient than a person.

Fuck this shit.

in reply to bimbimboy

Final-stage capitalism: purging all the experts (at catching bullshit from applicants) before the agencies train the AI with newb-level inputs.


A Tennessee law that made threats of mass violence at school a felony has led to students being arrested based on rumors and for noncredible threats.


In one case, a Hamilton County deputy arrested an autistic 13-year-old in August for saying his backpack would blow up, though the teen later said he just wanted to protect the stuffed bunny inside.

In the same county almost two months later, a deputy tracked down and arrested an 11-year-old student at a family birthday party. The child later explained he had overheard one student asking if another was going to shoot up the school tomorrow, and that he answered “yes” for him. Last month, the public charter school agreed to pay the student’s family $100,000 to settle a federal lawsuit claiming school officials wrongly reported him to police. The school also agreed to implement training on how to handle these types of incidents, including reporting only “valid” threats to police.

Despite the outcry over increased arrests in Tennessee, two states have followed its lead by passing laws that crack down harder on hoax threats. New Mexico and Georgia now have such laws, and more states are in the process of passing them.



Portland Said It Was Investing in Homeless People’s Safety. Deaths Have Skyrocketed.


But although the city spent roughly $200,000 per homeless resident over that time (2019 to 2023, five years at most), deaths of homeless people recorded in the county quadrupled, climbing from 113 in 2019 to more than 450 in 2023, according to the most recent data from the Multnomah County Health Department. The rise in deaths far outpaces the growth in the homeless population, which was recorded at 6,300 by a 2023 county census, a number most agree is an undercount. The county began including newly available state death records in its 2022 report, which added about 60 deaths to the yearly tolls.

Homeless residents of Multnomah County now die at a higher rate than in any major West Coast county with available homeless mortality data: more than twice the rate of those in Los Angeles County and the Washington state county containing Seattle and Tacoma. Almost all the homeless population in Multnomah County lives within Portland city limits.


in reply to daniel_callahan

The size of the riot doesn’t matter. What matters is that the LA County police, plus the city and the fire department, had the situation well in hand. Trump is using this as an excuse to use the military to take control.

Makes me sick

in reply to daniel_callahan

It would be smaller if the police and federal government stopped shooting at press and nonviolent protestors and making them move around.

It only gets violent when the aggressors (cops) become violent.


in reply to ByteOnBikes

This won’t be the last time this happens, unfortunately. Spread the word and stay safe, everyone.