Chemical leak prompts entire Ohio town to be evacuated
Around 3,000 gallons of nitric acid were released in the Ohio town in a chemical leak, officials say. Rachel Dobkin (The Independent)
On the classification of the Jüdische Stimme and BDS as "gesichert extremistisch" (confirmed extremist) by the Verfassungsschutz (2025-06-12)
Hollywood studios target AI image generator in copyright lawsuit
Multiple-studio complaint cites AI image outputs as evidence of "bottomless pit of plagiarism." … Benj Edwards (Ars Technica)
DNC will redo party elections for David Hogg and Malcolm Kenyatta’s posts after procedural error
The Democratic National Committee determined it failed to follow internal rules in the February election. Separately, Hogg’s plans to back primaries against incumbent Democrats stirred controversy. Ben Kamisar (NBC News)
What happened to Blockinger? (Tetris Clone)
It's gone from the F-Droid store.
It looks like F-Droid does not have any apps matching your search string "Blockinger"
Unsolicited suggestion:
The Lemuroid emulator plus Apotris is the best (FOSS) Tetris replacement on Android.
US warns countries not to join French, Saudi UN conference on Palestine: Report
The US is lobbying foreign governments not to attend a UN conference next week sponsored by France and Saudi Arabia on a two-state solution to the Israeli-Palestinian conflict, according to a US diplomatic cable reported by Reuters.The cable, sent to countries on Tuesday, warns them against taking "anti-Israel actions" and says attending the conference would be viewed by Washington as acting against US foreign policy interests.
France, a permanent member of the UN Security Council, is a US ally in Nato. Saudi Arabia is one of the US’s closest Middle East partners.
Sunwapta Falls, Icefields Parkway, Jasper NP
Sunwapta Falls
An easy two-mile out-and-back trail located along the Icefields Parkway south of Jasper. The main falls are at the beginning of the hike, and the trail follows the river downstream, revealing several more waterfalls as you go. River access can be had at the end of the trail as it leaves the canyon. The upper area gets a ton of use, as does the second falls, which is fairly close by and has a good viewing area. The crowds thin out a little beyond that, but it's a short hike, so it stays fairly busy.
The second falls (not including the big chute that comes out from the upper falls). Drops around 15 ft before going into another chute.
The outflow from the uppermost waterfall rushing under the bridge above. Over time it has carved a curving path into the rock on the side with this viewpoint.
Looking downstream from this large waterfall just off the trail. When hiking, you will be afforded other angles of the falls as you continue the trail. The Catacombs Mountains can be seen in the distance.
Fedora. There's a video of him explaining why he uses Fedora instead of Debian.
Edit: Link to Fedora's pages and a YouTube video on why Linus does not use Debian (or Debian-based distros)
fedoraproject.org/wiki/Is_Fedo…
Also, in case you're wondering, Richard Stallman uses Trisquel GNU/Linux.
Trisquel uses a version of Ubuntu's modified kernel, with the non-free code (binary blobs) removed.
Why not just Debian without non-free, at that point?
Because Debian does not meet the strict requirements of the FSF. It includes non-free blobs in the kernel and the FSF claims Debian "steers" users with recommendations for installing non-free plugins or codecs. Some "contrib" packages, while free themselves, exist primarily to load separately distributed proprietary programs. There are also references in the Debian documentation and official channels that suggest obtaining non-free software for functionality.
edit: typos
I saw an interview where he was saying he objected to Debian adding non-free blobs, so he had them put on GNU's shit list.
Dude is cuckoo for Cocoa Puffs.
While I think it would be too hard for most people to be completely free of proprietary software, at least he is practicing what he preaches. It is a nice goal to someday get there, but I don't think it's realistic at the moment.
Keep in mind, though, he is 72 and I don't think he even codes anymore. His computer use probably consists mostly of Emacs (for all text-based work) and a web browser (I read he has a very particular method that involves something similar to wget, lynx, and konqueror). His computer use is very light (I imagine) compared to many Linux users'.
While I aspire to and appreciate what the FSF advocates, I don't see a realistic path for myself as a Linux gamer. The proprietary firmware limitations alone would keep you on 2015 hardware.
Source: kottke.org/15/05/how-richard-s…
How Richard Stallman does his computing
Richard Stallman, the free software activist and author of some of the world’s most used and useful software, probably uses… kottke.org
I’m not just talking about the free software stuff.
He’s on the record blaming victims of Epstein and chastising a developer for stepping back for the birth of his child, amongst a host of other crazy things.
Truly crazy stuff.
I knew about the Epstein thing and it is pretty offensive but unsurprising. What is surprising is what I just read about the developer in 2005 who mentioned taking time off for the birth of his daughter, essentially implying that contributing to Emacs was a more valuable contribution than having children. That is messed up.
Even worse, apparently there were also old blog posts where he discussed the legalization of sex with minors and child pornography, arguing that certain acts should be legal "as long as no one is coerced" and are only illegal due to "prejudice and narrow-mindedness."
He's not a great guy. I appreciate the work he has led on free software, but he's said some pretty screwed-up stuff.
Sources:
blog.codinghorror.com/spawning…
npr.org/2019/09/17/761718975/f…
Spawning a New Process
I don’t usually talk about my personal life here, but I have to make an exception in this case. I debated for days which geeky reference I would use as a synonym for “we’re having a baby.” The title is the best I could do. I’m truly sorry. Jeff Atwood (Coding Horror)
Yes...
Stallman is nutty, obsessive, and drunk on his own fame. 🙄 🤡 🤦♀️ 🖕 💩
I once gave Trisquel a try back in the day. It's one of those FSF-approved distros, right? My use case was more, ahem, standard rather than anything programming-related. In any case, one evening I ran into dependency hell trying to install a simple Direct Connect client, and no matter how much I tried I couldn't succeed.
I then decided to move back to Debian. Most distros have Eiskaltdcpp (as one example of a client) in their repos, except for Trisquel. This was multiple years ago. I am currently on Void.
This is the experience I imagine I would have trying it. It is probably what anyone with a modern system would experience with proprietary firmware. From what I read, Trisquel's core philosophy is to include only free software and Eiskaltdcpp most likely relies on some non-free dependencies.
I like Debian. I am currently trying Fedora and it has been good, too. Void is on my list of "distros to someday try" as it sounds super interesting using runit, XBPS, and not relying on systemd.
Yes, Trisquel can be a pain to use as a daily driver. While I admire the philosophy behind its concept, it definitely leaves a lot of end work to be done by the user.
I used Fedora for quite some time in the past. Fedora and the now-discontinued CentOS were the two RPM-based distros I have used (I think Fedora uses DNF now as well). I liked CentOS decently; it wasn't as bleeding edge as Fedora, and for a long time I dual-booted CentOS and Debian.
Void is a decent independent distro. Ironically, I don't have any anti-systemd feelings; I just gave it a try for the heck of it and stuck with it. I think there is a musl version of Void as well, but that only complicates things.
I feel the same way about Artix. I had it on my laptop for a while, and it was a regular PITA. I think I may have made it harder on myself, because while getting rid of systemd was fine, I was also trying to do without NetworkManager and on a laptop that wasn't a great idea. I never did find a good, reliable set-up that managed access point hopping as well as nm.
Really, thinking back, Artix was fine; it really was just the roaming WiFi handling that gave me grief, and I did that to myself.
(I'm replying to you twice b/c totally different topics)
Tell me more about your Void experience. I've been meaning to give it a shot, except I don't get as much enjoyment out of fussing with distros as I used to. What are the pain points? Under Artix, I used dinit which I really liked, but I tried s6 first and absolutely hated it. I didn't try runit; how is it?
What I'm most interested in is xbps, because IMO it's the package manager that makes or breaks a system. I'm quite fond of pacman and have encountered far fewer dependency hell situations than I did with either rpm or deb, and rolling release is a must. xbps looks kind of like a rolling stable release?
Void is rolling release, IIRC. The package manager is quite fast and gets the job done. The pain point is that Void has a smaller selection of packages in its repos compared to, say, Arch. Some good stuff is there (for example, I was looking for a third-party Spotify client, ncspot, back in the day, and it was packaged in Void's repos), but if someone uses niche stuff a lot, there can be issues.
Of course there is Flatpak support. And the system itself is comparatively lean and fast. I don't think my installation of Void came with many pre-installed apps.
It ships in two builds: glibc or musl. The latter is less favored because it honestly only makes life tougher. Runit support is a strong point of it, though personally I don't have any anti-systemd qualms.
The documentation is basic and okay-ish. I still often go to the Arch Wiki, since that's honestly the most detailed. Also, I just found that it's the highest-rated distro on DistroWatch. I have distro-hopped a long time, and Void is decent. I still hold Debian in higher regard, since it's slightly easier for a novice to get used to (though its repos can often hold old versions of software) and also because it was my main entry point into the Linux world.
Void is rolling release IIRC
That's what I thought, but the main website says Void focuses on stability over being cutting edge, which would imply some sort of release cycle. Or, maybe they just update packages less frequently.
I still hold Debian in higher regard since it's slightly easier for a novice to get used to
It's hard to beat Mint as a novice distro, for sure.
Mint eschews all of the Snap crap, though, doesn't it?
Jesus, please tell me it does. I've been recommending it to beginners. I thought it was sanitized.
GNU Guix transactional package manager and distribution — GNU Guix
Guix is a distribution of the GNU operating system. Guix is technology that respects the freedom of computer users. You are free to run the system for any purpose, study how it works, improve it, and share it with the whole world. guix.gnu.org
I'm not an expert, but I believe he codes on the Linux kernel on Asahi these days.
arstechnica.com/gadgets/2022/0…
Linus Torvalds uses an Arm-powered M2 MacBook Air to release latest Linux kernel
More people using Arm hardware will (eventually) lead to better Arm software. Andrew Cunningham (Ars Technica)
Can you be tracked for marketing purposes on a "dumb-phone"?
I'm aware that carrying a phone means that I can be tracked with cell towers and that's fine.
But is there some sort of tracking that can be done on modern dumb phones that makes relevant ads show up (on Spotify/YouTube) based on where the phone has been?
Thanks I'm a newb
Even with a dumb phone, they have:
- Your identity, i.e., real name and address
- Location history
- Contacts (since you're forced to use SMS)
- Message history in plain text
So I don't doubt that they're at least aggregating message history and selling data/trends about certain topics to advertisers and anyone who will buy it.
Plus if they know that your most contacted person is also texting/searching about certain things, they can safely sell that also and present ads to you based on their interests.
You could get a Punkt dumb phone that shouldn’t spy on you
From Punkt FAQ: Verizon: not supported/not supported
Sadly, that keeps a ton of people from using the phone. In many rural areas, VZW is the only coverage that they can get.
I would unironically love if there were enough people in my life that also wanted to live that way to make it viable... Also the lack of functioning payphones these days would be challenging.
The place (at least in the USA) where I've found the most functional-looking payphones was actually Hawaii... And even then, so many are decaying and non-functional. I've had a silly idea to go back and just roam around and photograph as many as I can.
Republicans Want to Defund NPR. To Survive, It Needs To Do Some Soul-Searching.
NPR's job is to produce news for every American. Its partisan lean is undermining that mission. Zaid Jilani (The American Saga)
My wife and I listen to NPR fairly regularly, she donates, I do not.
My argument is as long as they are taking money from companies like Archer Daniels Midland and the Koch Foundation, they don't need MY money.
Local stations (not NPR, but NPR affiliates) even take money from fucking Monsanto(!)
Use supervisor or desktop Linux for TV gaming PC + NAS?
To give a bit of context: I'm upgrading my whole desktop computer, so I now have a spare computer for gaming on the TV. I'm thinking of using it mainly as a gaming "console", but I might be interested in embedding a NAS as well, and possibly some Docker containers for Home Assistant, etc.
So the question: should I just install a normal distro like Arch and set up a network share and Docker containers, or should I use a proper hypervisor like Unraid and have a VM for couch gaming, etc.?
What issues could I expect with each? Is performance impacted with the hypervisor? (I don't plan on playing competitive games on the TV.) Or is troubleshooting going to be easier on a standard distro?
Has anyone done such a setup and can share some feedback?
Never properly used Linux before, but I'm a Windows power-user and am looking to transition part of my setup to Linux.
The GPU is going to be an RTX3070 if that matters
Make sure that your gaming VM uses a real hard drive/SSD instead of a virtual disk, to prevent sluggish I/O.
GPU passthrough is still a bit of a PITA, so if you're going to VM stuff you'll need a lot more tinkering. Also, many EULAs don't allow VM usage, so you need further configuration to avoid detection.
The biggest downside of a classical setup is that you can't easily limit resources. So if your game eats up all the RAM, your NAS will slow down, and vice versa.
IMHO both are good options; it's just choose your poison.
Just try both scenarios and choose what fits your workflow the best
Have you considered/tried streaming games from your primary desktop PC? Obviously very dependent on your situation's specifics, but that's one of the things I do with the Linux htpc I have set up.
And then you wouldn't have to worry about games and NAS stuff competing for system resources.
I'd personally go the hypervisor route (I'm using Proxmox, TrueNAS, and an arr stack on my NAS). It keeps things compartmentalized (especially network configurations) and usually keeps me from breaking *everything* at the same time.
Top Chinese scientists flee Boston area as Harvard, MIT fall in rankings; Silicon Valley also hit
- Thousands of Chinese researchers and scientists are leaving top jobs in leading US universities and companies, to take positions in China.
- The Cambridge area of Massachusetts is home to Harvard, MIT, and scores of leading companies, and was the number one source of returning Chinese research and engineering talent.
- In second place is the Palo Alto-Berkeley cluster, which includes Stanford, University of California, and Silicon Valley.
- The migration of top scientific and engineering talent back to China is accelerating, but began nearly a decade ago. And while the political situation between China and the United States certainly is a major motivation for many scientists to return, more important is the quality of the education systems.
- Chinese universities are now claiming the top spots across all the hard science disciplines, while American colleges are tumbling.
It's both stupid expensive and the jobs don't pay enough anymore. I can make the same salary as an engineer working a trade or any other white collar job.
I'm sure the growing distrust in science and general stupidity didn't help either.
THIS is why Oregon is a SH*THOLE STATE! I WANT to be part of IDAHO!
-Conservatives who Smoke Pot and have Access to Healthcare!
The legislation was opposed by companies such as Amazon and the statewide nonprofit Oregon Ambulatory Surgery Center Association, an industry group, where executives see private investment as vital to their business strategy.
“We universally agree that the way to protect clinics from closure and maintain the broadest patient access to outpatient care is to keep the existing, and multi-ownership models alive and well,” wrote Ryan Grimm on behalf of the association and the Portland Clinic, a private multispecialty medical group, in a March letter to lawmakers.
“In some communities, there is no hospital to swoop in to the rescue, or no hospital in a financial position to save a clinic,” he wrote.
The bill does not go into effect immediately and it contains a three-year adjustment period for clinics to comply with the restrictions. Institutions such as hospitals, tribal health facilities, behavioral health programs and crisis lines are exempted.
Mein Gott, a ray of sanity! Listen it's not everything a constituent can hope for but it's a giant step in the right direction. Congratulations, Oregon!
Musk targets June 22 launch of Tesla's long-promised robotaxi service
Tesla CEO Elon Musk said his company will start offering public rides in driverless vehicles in Austin, Texas, on June 22.
Nintendo says your bad Switch 2 battery life might be a bug
It might just be the Switch 2, though.
If you’re dealing with what appears to be poor battery life on the Nintendo Switch 2, the company has a support document with steps you can try to fix it. Jay Peters (The Verge)
Musk’s threat to sue firms that don’t buy ads on X seems to have paid off
Some advertisers return to avoid suits, but Lego and Pinterest rebuffed threats.
Wikipedia pauses AI-generated summaries pilot after editors protest
Editors almost immediately criticized the pilot, raising concerns that it could damage Wikipedia's credibility.
Apple’s updated parental controls will require kids to get permission to text new numbers
More child safety features.
Apple is introducing new child safety features, including one that will give parents more control over who their kids can communicate with. Jay Peters (The Verge)
Ahhhh the beautiful pseudoscience of psychosomatics.
It’s like astrology for medicine.
Anxiety is the most common mental health problem – here’s how tech could help manage it
Devices that deliver a mild, constant electrical current can alter our brain activity. The Conversation
World first: brain implant lets man speak with expression ― and sing
Device translates thought to speech in real time. Miryam Naddaf
This is nothing short of stunning; I had no idea anyone was even close to this sort of interface. And it's only an 8-bit input! Fuck me, I would have made a (totally ignorant) guess of at least a couple of thousand sensors.
Hoped for a video. 🙁
It's 256 electrodes, yes, but the article doesn't say whether those electrodes carry simple digital signals or whether each one resolves some analog range. Even if it's 100% binary, the thresholding (what level of neural activity is considered a 1 or a 0) could be adaptive.
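The adaptive-thresholding idea can be sketched in a few lines. This is a toy illustration only, not anything from the actual paper; the function name and parameters are invented. The threshold floats at k standard deviations above a rolling baseline, so a reading only counts as a 1 when it stands out from recent background activity, even as that background drifts.

```python
from statistics import mean, pstdev

def adaptive_binarize(samples, window=200, k=3.0, warmup=5):
    """Toy adaptive threshold: a reading is 'active' (1) when it exceeds
    the rolling-baseline mean by k standard deviations."""
    out = []
    for i, x in enumerate(samples):
        recent = samples[max(0, i - window):i]
        if len(recent) < warmup:  # not enough history to set a baseline yet
            out.append(0)
            continue
        threshold = mean(recent) + k * pstdev(recent)
        out.append(1 if x > threshold else 0)
    return out

# Quiet baseline, one burst: only the burst crosses the adaptive threshold.
readings = [0.1, 0.2, 0.1, 0.15, 0.1, 0.12, 0.11, 0.1, 5.0, 0.1]
print(adaptive_binarize(readings))  # -> [0, 0, 0, 0, 0, 0, 0, 0, 1, 0]
```

A real BCI pipeline would be far more involved, but this is the sense in which a "100% binary" electrode can still encode a floating, activity-relative signal.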
This is amazing technology. I can't imagine how it would feel to have your ability to speak and even sing back after losing it.
Study co-author Maitreyee Wairagkar, a neuroscientist at the University of California, Davis, and her colleagues trained deep-learning algorithms to capture the signals in his brain every 10 milliseconds. Their system decodes, in real time, the sounds the man attempts to produce rather than his intended words or the constituent phonemes — the subunits of speech that form spoken words.
This is a really cool approach. They're not having to determine speech meaning; instead they're picking up signals after the person's brain has already done that part and is just trying to vocalize. I'm guessing they can capture nerve impulses that would be moving muscles in the face, mouth, lips, and possibly larynx, then use the AI to quickly determine which sounds those conditions would produce in the few milliseconds they exist. Then the machine produces the sounds artificially. Because they're able to do this so fast (every 10 milliseconds), it can get close to the human body's response and reproduction of the specific sounds.
40,000 cameras expose feeds to datacenters, health clinics
Apple expands tools to help parents protect kids and teens online
Apple today shared an update on new ways to help parents protect kids and teens online when using Apple products. Apple
Remember when corporations avoided politics on social media?
Study finds Twitter surge starting in 2017, most of it Democratic-leaning by surprising range of firms, with negative effects on stock price
Christina Pazzanese (Harvard Gazette)
Researchers find the first known “zero-click” attack on an AI agent; the now-fixed flaw in Microsoft 365 Copilot would let a hacker attack a user via an email
Aim Labs | Echoleak Blogpost
The first weaponizable zero-click attack chain on an AI agent, resulting in the complete compromise of Copilot data integrity. www.aim.security
The thing is I agree with nearly every premise of superdeterminism. But the conclusions seem stretched.
I love the idea of not abiding by the strict assumptions set forth by Bell's theorem; the idea that determinism doesn't have to hide within the simple hidden-variable models Bell's theorem disproves; the idea that we are essentially always part of the experimental system; the questioning of the ideal of the objective, rational experimenter with free will.
Yet I haven't seen any serious mechanism explaining how the required correlations between experimenter choices and particle states could have been embedded in the universe's initial conditions in such a finely tuned manner, given that experimentally the outcomes are indistinguishable from standard quantum mechanics. I just can't imagine how this could be the case without adding quasi-conspiratorial assumptions.
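For concreteness, the correlations in question are usually quantified with the CHSH form of Bell's inequality: any local hidden-variable model (absent superdeterministic fine-tuning of initial conditions) must satisfy |S| ≤ 2, while quantum mechanics predicts up to 2√2. A minimal sketch using the textbook singlet-state correlation E(a, b) = −cos(a − b); the angle choices are the standard ones that maximize the violation:

```python
import math

def E(a, b):
    """Quantum correlation of spin measurements on a singlet pair
    at analyzer angles a and b (in radians): E(a, b) = -cos(a - b)."""
    return -math.cos(a - b)

# Standard CHSH angle choices that maximize the quantum violation.
a, a_prime = 0.0, math.pi / 2
b, b_prime = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)
print(abs(S))  # -> 2.828..., i.e. 2*sqrt(2), beyond the local-realist bound of 2
```

Superdeterminism keeps locality and determinism by denying that the angle settings are statistically independent of the hidden variables, which is exactly the fine-tuning of initial conditions the comment finds hard to swallow.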
'Fortnite' Lobbies Can Now Have Up to 92% Bots - Players Are Furious Over Supposed OG Season 3 Update
'Fortnite OG' lobbies may now have as few as eight real players, according to a report from a prominent Epic Games leaker. Brent Koepp (VICE)
A couple of months ago there was a data breach on Twitter that revealed only 7% of accounts were actual people (active accounts).
hackread.com/twitter-x-of-2-8-…
Twitter (X) Hit by 2.8 Billion Profile Data Leak in Alleged Insider Job
A data breach involving a whopping 2.87 billion Twitter (X) users has surfaced on the infamous hacker and cybercrime platform Breach Forums. Waqas (Hack Read)
Elmo "Pedo Guy" Musk is merging Twitter with Fortnite. So the Twitter bots will now be playing Fortine while spamming Elmo propaganda in chat.
What exactly is not clear to you?
Funny thing is, even if your skills were in the bracket for more human-weighted matches, you'd not have hit them in your first few sessions. The first few matches are always 100% bots to give you a feeling for the game without the risk of being steamrolled by humans.
There’s also the problem of matches being 100 people and not starting until it hits about that number. Imagine the fun of sitting and waiting for 10 minutes for people to hop on.
They have a combined 3 kills and we have like 30 each. There's no point playing this.
Bitcoin devs scramble to protect $2.2tn blockchain from looming quantum computer threat
Quantum computers pose a threat to Bitcoin’s security. Developers are rushing to future-proof the network. Michael Saylor is unconvinced this is a problem. Tim Craig (DL News)
Trade war truce between US and China is back on
Donald Trump says the agreement struck with Beijing covers rare earths. Financial Times (Ars Technica)
Wikipedia Pauses AI-Generated Summaries After Editor Backlash
Text to avoid paywall
The Wikimedia Foundation, the nonprofit organization which hosts and develops Wikipedia, has paused an experiment that showed users AI-generated summaries at the top of articles after an overwhelmingly negative reaction from the Wikipedia editors community.
“Just because Google has rolled out its AI summaries doesn't mean we need to one-up them, I sincerely beg you not to test this, on mobile or anywhere else,” one editor said in response to Wikimedia Foundation’s announcement that it will launch a two-week trial of the summaries on the mobile version of Wikipedia. “This would do immediate and irreversible harm to our readers and to our reputation as a decently trustworthy and serious source. Wikipedia has in some ways become a byword for sober boringness, which is excellent. Let's not insult our readers' intelligence and join the stampede to roll out flashy AI summaries. Which is what these are, although here the word ‘machine-generated’ is used instead.”
Two other editors simply commented, “Yuck.”
For years, Wikipedia has been one of the most valuable repositories of information in the world, and a laudable model for community-based, democratic internet platform governance. Its importance has only grown in the last couple of years during the generative AI boom, as it’s one of the only internet platforms that has not been significantly degraded by the flood of AI-generated slop and misinformation. As opposed to Google, which since embracing generative AI has instructed its users to eat glue, Wikipedia’s community has kept its articles relatively high quality. As I reported last year, editors are actively working to filter out bad, AI-generated content from Wikipedia.
A page detailing the AI-generated summaries project, called “Simple Article Summaries,” explains that it was proposed after a discussion at Wikimedia’s 2024 conference, Wikimania, where “Wikimedians discussed ways that AI/machine-generated remixing of the already created content can be used to make Wikipedia more accessible and easier to learn from.” Editors who participated in the discussion thought that these summaries could improve the learning experience on Wikipedia, where some article summaries can be quite dense and filled with technical jargon, but that AI features needed to be clearly labeled as such and that users needed an easy way to flag issues with “machine-generated/remixed content once it was published or generated automatically.”
In one experiment where summaries were enabled for users who have the Wikipedia browser extension installed, the generated summary showed up at the top of the article, which users had to click to expand and read. That summary was also flagged with a yellow “unverified” label.
An example of what the AI-generated summary looked like.
Wikimedia announced that it was going to run the generated summaries experiment on June 2, and was immediately met with dozens of replies from editors who said “very bad idea,” “strongest possible oppose,” “Absolutely not,” etc.
“Yes, human editors can introduce reliability and NPOV [neutral point-of-view] issues. But as a collective mass, it evens out into a beautiful corpus,” one editor said. “With Simple Article Summaries, you propose giving one singular editor with known reliability and NPOV issues a platform at the very top of any given article, whilst giving zero editorial control to others. It reinforces the idea that Wikipedia cannot be relied on, destroying a decade of policy work. It reinforces the belief that unsourced, charged content can be added, because this platforms it. I don't think I would feel comfortable contributing to an encyclopedia like this. No other community has mastered collaboration to such a wondrous extent, and this would throw that away.”
A day later, Wikimedia announced that it would pause the launch of the experiment, but indicated that it’s still interested in AI-generated summaries.
“The Wikimedia Foundation has been exploring ways to make Wikipedia and other Wikimedia projects more accessible to readers globally,” a Wikimedia Foundation spokesperson told me in an email. “This two-week, opt-in experiment was focused on making complex Wikipedia articles more accessible to people with different reading levels. For the purposes of this experiment, the summaries were generated by an open-weight Aya model by Cohere. It was meant to gauge interest in a feature like this, and to help us think about the right kind of community moderation systems to ensure humans remain central to deciding what information is shown on Wikipedia.”
“It is common to receive a variety of feedback from volunteers, and we incorporate it in our decisions, and sometimes change course,” the Wikimedia Foundation spokesperson added. “We welcome such thoughtful feedback — this is what continues to make Wikipedia a truly collaborative platform of human knowledge.”
“Reading through the comments, it’s clear we could have done a better job introducing this idea and opening up the conversation here on VPT back in March,” a Wikimedia Foundation project manager said. VPT, or “village pump technical,” is where The Wikimedia Foundation and the community discuss technical aspects of the platform. “As internet usage changes over time, we are trying to discover new ways to help new generations learn from Wikipedia to sustain our movement into the future. In consequence, we need to figure out how we can experiment in safe ways that are appropriate for readers and the Wikimedia community. Looking back, we realize the next step with this message should have been to provide more of that context for you all and to make the space for folks to engage further.”
The project manager also said that “Bringing generative AI into the Wikipedia reading experience is a serious set of decisions, with important implications, and we intend to treat it as such, and that “We do not have any plans for bringing a summary feature to the wikis without editor involvement. An editor moderation workflow is required under any circumstances, both for this idea, as well as any future idea around AI summarized or adapted content.”
Wikipedia Pauses AI-Generated Summaries After Editor Backlash
The Wikimedia Foundation, the nonprofit organization which hosts and develops Wikipedia, has paused an experiment that showed users AI-generated summaries at the top of articles after an overwhelmingly negative reaction from the Wikipedia editor community.
“Just because Google has rolled out its AI summaries doesn't mean we need to one-up them, I sincerely beg you not to test this, on mobile or anywhere else,” one editor said in response to the Wikimedia Foundation’s announcement that it would launch a two-week trial of the summaries on the mobile version of Wikipedia. “This would do immediate and irreversible harm to our readers and to our reputation as a decently trustworthy and serious source. Wikipedia has in some ways become a byword for sober boringness, which is excellent. Let's not insult our readers' intelligence and join the stampede to roll out flashy AI summaries. Which is what these are, although here the word ‘machine-generated’ is used instead.”
Two other editors simply commented, “Yuck.”
For years, Wikipedia has been one of the most valuable repositories of information in the world, and a laudable model for community-based, democratic internet platform governance. Its importance has only grown in the last couple of years during the generative AI boom as it’s one of the only internet platforms that has not been significantly degraded by the flood of AI-generated slop and misinformation. As opposed to Google, which since embracing generative AI has instructed its users to eat glue, Wikipedia’s community has kept its articles relatively high quality. As I recently reported last year, editors are actively working to filter out bad, AI-generated content from Wikipedia.
A page detailing the AI-generated summaries project, called “Simple Article Summaries,” explains that it was proposed after a discussion at Wikimedia’s 2024 conference, Wikimania, where “Wikimedians discussed ways that AI/machine-generated remixing of the already created content can be used to make Wikipedia more accessible and easier to learn from.” Editors who participated in the discussion thought that these summaries could improve the learning experience on Wikipedia, where some article summaries can be quite dense and filled with technical jargon, but that AI features needed to be clearly labeled as such and that users needed an easy way to flag issues with “machine-generated/remixed content once it was published or generated automatically.”
In one experiment where summaries were enabled for users who have the Wikipedia browser extension installed, the generated summary showed up at the top of the article, which users had to click to expand and read. That summary was also flagged with a yellow “unverified” label.
An example of what the AI-generated summary looked like.
Wikimedia announced that it was going to run the generated summaries experiment on June 2, and was immediately met with dozens of replies from editors who said “very bad idea,” “strongest possible oppose,” “Absolutely not,” etc.
“Yes, human editors can introduce reliability and NPOV [neutral point-of-view] issues. But as a collective mass, it evens out into a beautiful corpus,” one editor said. “With Simple Article Summaries, you propose giving one singular editor with known reliability and NPOV issues a platform at the very top of any given article, whilst giving zero editorial control to others. It reinforces the idea that Wikipedia cannot be relied on, destroying a decade of policy work. It reinforces the belief that unsourced, charged content can be added, because this platforms it. I don't think I would feel comfortable contributing to an encyclopedia like this. No other community has mastered collaboration to such a wondrous extent, and this would throw that away.”
A day later, Wikimedia announced that it would pause the launch of the experiment, but indicated that it’s still interested in AI-generated summaries.
“The Wikimedia Foundation has been exploring ways to make Wikipedia and other Wikimedia projects more accessible to readers globally,” a Wikimedia Foundation spokesperson told me in an email. “This two-week, opt-in experiment was focused on making complex Wikipedia articles more accessible to people with different reading levels. For the purposes of this experiment, the summaries were generated by an open-weight Aya model by Cohere. It was meant to gauge interest in a feature like this, and to help us think about the right kind of community moderation systems to ensure humans remain central to deciding what information is shown on Wikipedia.”
“It is common to receive a variety of feedback from volunteers, and we incorporate it in our decisions, and sometimes change course,” the Wikimedia Foundation spokesperson added. “We welcome such thoughtful feedback — this is what continues to make Wikipedia a truly collaborative platform of human knowledge.”
“Reading through the comments, it’s clear we could have done a better job introducing this idea and opening up the conversation here on VPT back in March,” a Wikimedia Foundation project manager said. VPT, or “village pump technical,” is where the Wikimedia Foundation and the community discuss technical aspects of the platform. “As internet usage changes over time, we are trying to discover new ways to help new generations learn from Wikipedia to sustain our movement into the future. In consequence, we need to figure out how we can experiment in safe ways that are appropriate for readers and the Wikimedia community. Looking back, we realize the next step with this message should have been to provide more of that context for you all and to make the space for folks to engage further.”
The project manager also said that “Bringing generative AI into the Wikipedia reading experience is a serious set of decisions, with important implications, and we intend to treat it as such,” and that “We do not have any plans for bringing a summary feature to the wikis without editor involvement. An editor moderation workflow is required under any circumstances, both for this idea, as well as any future idea around AI summarized or adapted content.”
The Editors Protecting Wikipedia from AI Hoaxes
WikiProject AI Cleanup is protecting Wikipedia from the same kind of misleading AI-generated information that has plagued the rest of the internet.Emanuel Maiberg (404 Media)
Fucking thank you. Yes, experienced editor to add to this: that's called the lead, and that's exactly what it exists to do. Readers are not even close to starved for summaries:
- Every single article has one of these. It is at the very beginning – at most around 600 words for very extensive, multifaceted subjects. 250 to 400 words is generally considered an excellent window to target for a well-fleshed-out article.
- Even then, the first sentence itself is almost always a definition of the subject, making it a summary unto itself.
- And even then, the first paragraph is also its own form of summary in a multi-paragraph lead.
- And even then, the infobox to the right of 99% of articles gives easily digestible data about the subject in case you only care about raw, important facts (e.g. when a politician was in office, what a country's flag is, what systems a game was released for, etc.)
- And even then, if you just want a specific subtopic, there's a table of contents, and we generally try as much as possible (without harming the "linear" reading experience) to make it so that you can intuitively jump straight from the lead to a main section (level 2 header).
- Even then, if you don't want to click on an article and just instead hover over its wikilink, we provide a summary of fewer than 40 characters so that readers get a broad idea without having to click (e.g. Shoeless Joe Jackson's is "American baseball player (1887–1951)").
What's outrageous here isn't wanting summaries; it's that summaries already exist in so many ways, written by the human writers who write the contents of the articles. Not only that, but as a free, editable encyclopedia, these summaries can be changed at any time if editors feel like they no longer do their job somehow.
This not only bypasses the hard work real, human editors put in for free in favor of some generic slop that's impossible to QA, but it also bypasses the spirit of Wikipedia that if you see something wrong, you should be able to fix it.
There are also external AI tools that do this just fine.
But imagine these tools generating summaries of summaries.
Two other editors simply commented, “Yuck.”
What insightful and meaningful discourse.
If they’re high quality editors who consistently put out a lot of edits then yeah, it is meaningful and insightful. Wikipedia exists because of them and only them. If most feel like they do and stop doing all this maintenance for free, then Wikipedia becomes a graffiti wall/ad space and not an encyclopedia.
Thinking the immediate disgust of the people doing all the work for you for free is meaningless is the best way to nose dive.
Also, you literally had to scroll past a very long and insightful comment to get to that.
Also, you literally had to scroll past a very long and insightful comment to get to that.
No I didn't. It's in the summary, appropriately enough.
AI chatbots unable to accurately summarise news, BBC finds
The BBC's head of news and current affairs says the developers of the tools are "playing with fire."Imran Rahman-Jones (BBC News)
"Pause" and not "Stop" is concerning.
Is it just me, or was the addition of AI summaries basically predetermined? The AI panel probably would only be attended by a small portion of editors (introducing selection bias) and it's unclear how much of the panel was dedicated to simply promoting the concept.
I imagine the backlash comes from a much wider selection of editors.
A page detailing the AI-generated summaries project, called “Simple Article Summaries,” explains that it was proposed after a discussion at Wikimedia’s 2024 conference, Wikimania, where “Wikimedians discussed ways that AI/machine-generated remixing of the already created content can be used to make Wikipedia more accessible and easier to learn from.” Editors who participated in the discussion thought that these summaries could improve the learning experience on Wikipedia, where some article summaries can be quite dense and filled with technical jargon, but that AI features needed to be clearly labeled as such and that users needed an easy way to flag issues with “machine-generated/remixed content once it was published or generated automatically.”
The intent was to make more uniform summaries, since some of them can still be inscrutable.
Relying on a tool notorious for making significant errors isn't the right way to do it, but it's a real issue being examined.
In thermochemistry, an exothermic reaction is a "reaction for which the overall standard enthalpy change ΔH° is negative."[1][2] Exothermic reactions usually release heat. The term is often confused with exergonic reaction, which IUPAC defines as "... a reaction for which the overall standard Gibbs energy change ΔG° is negative."[2] A strongly exothermic reaction will usually also be exergonic because ΔH° makes a major contribution to ΔG°. Most of the spectacular chemical reactions that are demonstrated in classrooms are exothermic and exergonic. The opposite is an endothermic reaction, which usually takes up heat and is driven by an entropy increase in the system.
This is a perfectly accurate summary, but it's not entirely clear and has room for improvement.
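Incidentally, the "strongly exothermic will usually also be exergonic" claim in that lead follows directly from the standard Gibbs relation, which might have been worth spelling out:

```latex
% Standard Gibbs energy change in terms of enthalpy and entropy
\Delta G^{\circ} = \Delta H^{\circ} - T\,\Delta S^{\circ}
```

A large negative ΔH° dominates the −TΔS° term at ordinary temperatures, so ΔG° usually comes out negative too; the exceptions need a strongly negative ΔS° or a high temperature.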
I'm guessing they were adding new summaries so that they could clearly label them and not remove the existing ones, not out of a desire to add even more summaries.
Wikimedians discussed ways that AI/machine-generated remixing of the already created content can be used to make Wikipedia more accessible and easier to learn from
The entire mistake right there. Look no further. They saw a solution (LLMs) and started hunting for a problem.
Had they done it the right way round there might have been some useful, though less flashy, outcome. I agree many article summaries are badly written. So why not experiment with an AI that flags those articles for review? Or even just organize a community drive to clean up article summaries?
The questions are rhetorical of course. Like every GenAI peddler they don't have an interest in the problem they purport to solve, they just want to play with or sell you this shiny toy that pretends really convincingly that it is clever.
Fundamentally, I agree with you.
Because the phrase "Wikipedians discussed ways that AI..." is ambiguous, I tracked down the page being referenced. It could mean they gathered with the intent to discuss that topic, or they discussed it as a result of considering the problem.
The page gives me the impression that it's not quite "we're gonna use AI, figure it out", but more that some people put together a presentation on how they felt AI could be used to address a broad problem, and then they workshopped more focused ways to use it towards that broad target.
It would have been better if they had started with an actual concrete problem, brainstormed solutions, and then gone with one that fit, but they were at least starting with a problem domain that they thought it was applicable to.
Personally, the problems I've run into on Wikipedia are largely low traffic topics where the content is too much like someone copied a textbook into the page, or just awkward grammar and confusing sentences.
This article quickly makes it clear that someone didn't write it in an encyclopedia style from scratch.
Even beyond that, the "complex" language they claim is confusing is the whole point of Wikipedia. Neutral, precise language that describes matters accurately for laymen. There are links to every unusual or complex related subject and even individual words in all the articles.
I find it disturbing that a major share of the userbase is supposedly unable to process the information provided in this format, and needs it dumbed down even further. Wikipedia is already the summarized and simplified version of many topics.
Oh come on, it’s not that simple. Add to that the language barrier. And in general, precise language and accuracy are not making knowledge more available to laymen. Laymen don’t have the vocabulary to start with; that’s pretty much the definition of being a layman.
There is definitely value in dumbing down knowledge, that’s the point of education.
Now using AI or pushing guidelines for editors to do it that’s entirely different discussion…
The vocabulary is part of the knowledge. The concept goes with the word, that's how human brains understand stuff mostly.
You can click on the terms you don't know to learn about them.
You can click on the terms you don't know to learn about them.
This is what makes Wikipedia special. Not the fact that it is a giant encyclopedia, but that you can quickly and logically work your way through a complex subject at your pace and level of understanding. Reading about elements but don't know what a proton is? Guess what, there's a link right fucking there!
some article summaries can be quite dense and filled with technical jargon, but that AI features needed to be clearly labeled as such and that users needed an easy way to flag issues with "machine-generated/remixed content once it was published or generated automatically."
I feel like, if they think this is an issue, they should generate the summary on the talk page and have the editors refine and approve it before publishing. Alternatively, set an expectation that article summaries be written in plain English.
some article summaries can be quite dense
Well yeah, that's the point of a summary. If I want something in long form, I'll read the article.
These summaries are useless anyways because the AI hallucinates like crazy... Even the newest models constantly make up bullshit.
It can't be relied on for anything, and it's double work reading the words it shits out and then you still gotta double check it's not made up crap.
Good! I was considering stopping my monthly donation.
Ditto. I don't want to overreact, but it's not a good look.
Same person who saw most American adults have a 6th grade reading level or lower?
Honestly that's the reason I thought it was a good idea at least. Might actually give them a place to start learning from and improve.
Those Americans with a 6th grade reading level or less are precisely the people who shouldn’t be reading AI summaries. They’ll lack the critical thinking and reading skills to catch on to garbage.
Simple Wikipedia already exists and is great.
Problem is they can't read Wikipedia articles in the first place. A lot of it, in particular anything STEM, is higher level reading.
What you're advocating for is the same as dropping off a physics textbook at an elementary school.
Thats why I mentioned Simple Wikipedia.
This is far more readable than what an AI-generated version of the article would produce.
Yeah - tbh the name sucks. I hate recommending it to students, because it feels like I’m calling them dumb.
But yes 100%. Instead of doing dumb AI shit, they should be advertising what they already have.
Wikipedia Simple has fewer articles than regular Wikipedia.
And how do you plan to convince editors to add more articles to Wikipedia Simple?
That number of articles is still pretty impressive. I’d rather have fewer, high quality articles, than millions of terrible quality AI articles.
The great thing about Wikipedia is that anyone can add articles! It also wouldn’t be too difficult to “translate” regular Wikipedia articles to simple ones. You could even use AI tools to help - there are text leveler tools that will help you recognize which words lower level readers would struggle with and can help you make those changes. But this cannot be an automated process.
I’ve done graduate level course work on modifying text for “EMLs” - “emerging multilingual learners.” (“ELL” is still okay, but lots of folks in the field prefer EML because it is prioritizing the students “assets.”) I’ve made several assignments for students with reading difficulties. When I did experiment a bit with AI tools to help me with this process, I had to do a lot of fine tuning to get an acceptable product.
Tbh, you just convinced me right now that I should start adding more articles myself.
If someone is going to Wikipedia specifically looking for information in a STEM field, then an AI summary isn't going to help them. Odds are they can also read, because they're looking up STEM topics.
Also, is Wikipedia not available around the world, or you just think only Americans can't read? Inflammatory just for the sake of being inflammatory I'm guessing. Shit troll job.
Aaaaarrgg! This is horrible they stopped AI summaries, which I was hoping would help corrupt a leading institution protecting free thought and transfer of knowledge.
Sincerely, the Devil, Satan
Lucifer is literally the angel of free thought. Satanism promotes critical thinking and the right to question authority. Wikipedia is one of the few remaining repositories of free knowledge and polluting it with LLM summaries is exactly the inscrutable, uncritiqueable bullshit that led to the Abrahamic god casting Lucifer out.
I realize your reply is facetious, but there's a reason we're dealing with christofascists and not satanic fascists. Don't do my boy dirty like that.
Didn't they just pass a site-wide decision on the use of LLMs in creating/editing otherwise "human made" text?
Why do they need to take the human element out? Why would anyone want them to?
God I hope this isn't the beginning of the end for Wikipedia. They live and die on the efforts of volunteer editors (like Reddit relied on volunteer mods and third party tool devs). The fastest way to tank themselves is by driving off their volunteers with shit like this.
And it's absurdly easier to lose the good will they have than to rebuild it.
I'm so tired of "AI". I'm tired of people who don't understand it expecting it to be magical and error free. I'm tired of grifters trying to sell it like snake oil. I'm tired of capitalist assholes drooling over the idea of firing all that pesky labor and replacing them with machines. (You can be twice as productive with AI! But you will neither get paid twice as much nor work half as many hours. I'll keep all the gains.). I'm tired of the industrial scale theft that apologists want to give a pass to while individuals who torrent can still get in trouble, and libraries are chronically under funded.
It's just all bad, and I'm so tired of feeling like so many people are just not getting it.
I hope wikipedia never adopts this stupid AI Summary project.
If I wanted an AI summary, I'd put the article into my favourite LLM and ask for one.
I'm sure LLMs can take links sometimes.
And if Wikipedia wanted to include it directly into the site...make it a button, not an insertion.
On the one hand, it’s insulting to expect people to write entries for free only to have AI just summarize the text and have users never actually read those written words.
On the other hand, the future is people copying the url into chat gpt and asking for a summary.
The future is bleak either way.
You are correct that it would not instantly become unusable. But when all editors with integrity have ceased to contribute in frustration, Wikipedia would eventually become stale, or very unreliable.
Also, there is nothing stopping a person from using an LLM to summarize an article for them. The added benefit is that the energy and resources would only be spent on the people who actually wanted a summary, not on every single page view. I would assume the energy consumption on that would be significant.
The United States is transitioning into a post-literate society. Teaching kids to read was too hard, and had the ugly side effect of encouraging critical thinking, and that led to liberalism, or worse, Marxism.
So we're using technology to eliminate reading entirely. After all, if you can ask a LLM any question and get a simple answer read to you out loud in simple vocabulary, what more do you need? Are you going to read for pleasure? To fact check? To better yourself? Sounds like ivory tower liberal elitism to me.
Too late.
With thresholds calibrated to achieve a 1% false positive rate on pre-GPT-3.5 articles, detectors flag over 5% of newly created English Wikipedia articles as AI-generated, with lower percentages for German, French, and Italian articles. Flagged Wikipedia articles are typically of lower quality and are often self-promotional or partial towards a specific viewpoint on controversial topics.
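That 1% false positive rate comes from threshold calibration: pick the cutoff on a detector's "AI-likeness" score so that only 1% of known-human text gets flagged. A minimal sketch of the idea, using synthetic scores rather than any real detector:

```python
import numpy as np

def calibrate_threshold(human_scores, target_fpr=0.01):
    """Choose the score cutoff so that only `target_fpr` of the
    known-human calibration set (here, pre-GPT-3.5 articles) is flagged."""
    return float(np.quantile(human_scores, 1.0 - target_fpr))

rng = np.random.default_rng(0)
# Synthetic "AI-likeness" scores for 10,000 known-human articles.
human_scores = rng.normal(0.2, 0.1, 10_000)
threshold = calibrate_threshold(human_scores)

# Anything scoring above the threshold gets flagged; by construction,
# roughly 1% of the human-written calibration set lands above it.
false_positive_rate = (human_scores > threshold).mean()
```

With the threshold fixed this way, the interesting number is then how many *new* articles exceed it, which is where the "over 5%" figure in the quote comes from.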
Human posting of AI-generated content is definitely a problem
It isn't clear whether this content is posted by humans or by AI fueled bot accounts. All they're sifting for is text with patterns common to AI text generation tools.
There wasn’t necessarily anything stopping people from doing the same thing pre-GPT
The big inhibiting factor was effort. ChatGPT produces long form text far faster than humans and in a form less easy to identify than prior Markov Chains.
The fear is that Wikipedia will be swamped with slop content. Humans won't be able to keep up with the work of cleaning it out.
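For comparison, the "prior Markov Chains" mentioned above really were this crude: a word-level bigram chain is a few lines of code, and its output is easy to spot. A toy sketch (the corpus is made up for illustration):

```python
import random
from collections import defaultdict

def build_chain(corpus):
    # Map each word to the list of words observed to follow it.
    words = corpus.split()
    chain = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, length=10, seed=0):
    # Walk the chain, picking a random observed successor each step.
    rng = random.Random(seed)
    out = [start]
    while len(out) < length:
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: this word was never followed by anything
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran off the mat"
text = generate(build_chain(corpus), "the")
```

The result is locally plausible but globally incoherent word salad, which is exactly why it never posed the same flood risk as LLM output.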
At least it's only an issue for new articles, which probably have the least editor involvement.
People creating self-promotion on Wikipedia has been a problem for a long time before ChatGPT.
"It is difficult to get a man to understand something, when his salary depends on his not understanding it."
— Upton Sinclair
One of the biggest challenges for a nonprofit like Wikipedia is to find cheap/free labor that administration trusts.
AI "solves" this problem by lowering your standard of quality and dramatically increasing your capacity for throughput.
It is a seductive trade. Especially for a techno-libertarian like Jimmy Wales.
Ghostty in review: how's the new terminal emulator?
A few months ago, a new terminal emulator was released. It's called ghostty, and it has been a highly anticipated terminal emulator for a while, especially due to the coverage it received from ThePrimeagen, who had been using it while it was in private beta.
This feels like a paid advertisement “review” to me. There is basically nothing negative or critical at all. No places to improve? Here is the most critical bit in the entire post:
If you use GNOME, you should definitely be giving Ghostty a try. To be completely fair, I did not dislike using it on my other KDE Plasma-based machine either, but it does not feel as “native” yet. One day it will, though…
Mmmmm 😕
In support of that, I'd point to
As you keep navigating through the hamburger menu, one thing you will notice is that, unlike on the default GNOME terminal, there is no graphical Settings menu to speak of here. The reason for that is that Ghostty is so customizable that it would have been pretty much impossible to provide a practical GUI to expose all its configuration options: you need the full expressivity of a configuration file for that.
as making a virtue out of a lack. I really don't buy that "impossible" line. It was just too much work, or work they didn't want to do.
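For context, Ghostty's configuration is a flat file of `key = value` lines, one option per line. Something like the following is the general shape; the key names here are from memory of the docs and should be treated as approximate, not authoritative:

```
# ~/.config/ghostty/config (illustrative; verify key names against the docs)
font-family = JetBrains Mono
font-size = 13
theme = catppuccin-mocha
background-opacity = 0.95
```

With hundreds of options in that flat namespace, the "no GUI settings" decision is at least understandable, even if "impossible" overstates it.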
foot, in its client-server mode, allows basically instant startup because the server is already running in the background (even on my Core 2 Duo Thinkpad).
~~I thought you were going to talk about the lack of terminal scrollback.~~
Edit: I was misremembering. There is scrollback, but you can't search it. github.com/ghostty-org/ghostty…
Search scrollback · Issue #189 · ghostty-org/ghostty
A major missing feature is the ability to search scrollback, i.e. cmd+f on Mac, ctrl+F on Linux. This issue can be implemented in multiple steps, not just one giant PR: Core search functionality in...GitHub
It is very good, and I am currently using it. I don't like its dependencies on GTK stuff, the developer is a little picky about what to support, and I dislike the +options style. Other than that, 👍.
Also great: Wezterm, Konsole, Rio. I'm excitedly following Rio's development, which has a much smaller dependency list, and hopping back and forth between it and Ghostty/Wezterm. But it's still got some things to iron out and features to develop.
I tried this one and Wezterm, but I just couldn't get past how much vram they use, when vram is still at a premium. Konsole works really well for me anyway, so I guess I don't see the appeal.
Though, I do like Wezterm's lua config.
I give it a spin every month or so to see how it’s getting on. I’m on macOS.
Every time I walk away unimpressed, despite its maker’s very deserved esteemed reputation.
I’m probably not seeing something. What I do see, however, is that I can’t search my scrollback history, nor can I select text without a mouse.
Also, pressing cmd+, on macOS opens the config inside TextEdit (yes, a separate GUI app) rather than in $EDITOR. It's a small thing, but I couldn't figure out how to change it. Coming from Kitty, this drove me mad.
I’m not sure who Ghostty is for. My feeling is it’s aiming to be an excellent, polished experience for casual terminal users. But I didn’t see anything that Kitty or just tmux anywhere can’t do.
The article says it can debug TUIs, similar to what the browser's debug panel does for web apps.
That is useful for TUI developers.
Other than that, I don't know either what Kitty is missing.
Ghostty has lots of issues ssh-ing into remote systems that aren’t on the bleeding edge.
I couldn’t get it to work reasonably well enough for me and tried a bunch of others. Currently using Alacritty on both my Linux desktop workstation and Mac Laptop.
I use Zellij anyway and it has all the tab/pane/floating window support I was looking for.
SetEnv TERM=xterm-256color
Yep - but seeing the thread about it in their GitHub repo was also a turn-off. I don't have to do it with other clients.
I also believe that has to happen on each server - and we've got a lot of servers. I'm not particularly keen on needing to change anything to get my terminal emulator to, well, work.
While I get the Ghostty team's PoV, I don't agree with it.
That's fair, I get the frustration.
I guess I've been cutting Mitchell some slack since this is a passion project for him - his goal was to build the modern terminal he always wanted, so an opinionated feature set was always expected. And, new terminals with actual new features need their own terminfo entries, it just comes with the territory. It'll sort itself out as the databases catch up.
For now, though, you don't need to address this on an individual host level. I'm in the same boat at work with thousands of servers. If you want to give Ghostty another shot, this wrapper handles the issue automatically, even for servers where AcceptEnv doesn't include TERM or where SetEnv is disabled:
ssh() {
    if [[ "$TERM" == "xterm-ghostty" ]]; then
        TERM=xterm-256color command ssh "$@"
    else
        command ssh "$@"
    fi
}
Just drop it in your .bashrc (or functions.sh if you rock a modular setup) and SSH connections will auto-switch to compatible terminfo while keeping your local session full-featured. Best of both worlds. ¯\_(ツ)_/¯
I really appreciate your response. It’s incredibly helpful and deeply thoughtful. Thank you.
What comes next is not directed at you but rather provides some other color based on a few things you touched on.
I worked for the guy. He gets no slack from me. He changed my life in many ways both wonderful and not. And while it’s unlikely I’d work with or for him again he was a net positive in my life.
I don’t see product the way he sees product which is exactly as you note: it’s for him. Some of that “for him” approach has resonated deeply with the OSS community and still does. He changed Cloud Computing in the best of ways. He’s a giant. And we’re lucky he’s around.
This small ghostty issue (and some others I can’t recall now) was emblematic of our core disagreement about how we build systems for a broader user base. That’s why I said I get their PoV but disagree with it. I think it would be fair to say using the product reminded me a lot about this particular tension. Reading the GitHub issues even more so. That’s wholly on me.
I am thankful to Ghostty for helping me explore many more options. I had been using iTerm2 on my laptop and struggling to find something I liked on my Linux workstation. Checking out the new hotness after all the hype still resulted in a net positive.
Nevertheless I am genuinely happy it’s working for you and, again, thanks for your kind and calm response.
Wow - you've certainly got a unique perspective on the situation, and I'm grateful that you took the time to share it. Thank you. It's fascinating to hear from someone who actually worked with the guy.
I can relate to both the Linux struggle and your "I get their PoV but disagree" reaction. Had the same feeling when Kitty's creator dismissed multiplexers as "a hack" - as a longtime tmux user, that stung. Great tool, but that philosophy never sat right with me.
I bounced between most of the more popular terminals for years (Wezterm rocks but has performance issues, Kitty never felt quite right) so I was eager for Ghostty to drop. So far it's delivered on what I was hoping for (despite needing a minor tweak or two out of the box).
I'm glad you found my last response so helpful. Sounds like exploring alternatives worked out well for you in the end, which is what matters. Cheers. 😀
Pssst. 😀
github.com/ghostty-org/ghostty…
Add SSH Integration Configuration Option (#7608) · ghostty-org/ghostty@5a5c9e4
Addresses #4156 and #5892, specifically by implementing @mitchellh's [request](https://github.com/ghostty-org/ghostty/discussions/5892#discussioncomment-12283628) for "opt-in shell integra…" (GitHub)
If you are happy with the default, then just use the default.
Some of us use the terminal more than any other app, so I like my terminal to be super lightweight and snappy in all situations, opening instantaneously (I doubt this one is, though, if it has big dependencies like GTK/Qt), preferably without sacrificing features: true color, sixel graphics, fallback fonts, maybe font ligatures, and being able to set the app-id so my compositor can treat special terminal windows differently.
givesomefucks
in reply to return2ozma
The rules were violated under the last chair, and Martin kicked it to committee instead of unilaterally making the call.
We don't need a DNC that favors progressives, we just need one that will let fair elections happen.
And for the first time in decades, we have that.
givesomefucks
in reply to Lasherz
You're thinking of the DNC as a single entity.
It's not.
It's about 400 people who vote for chair and vice chairs. And over time those voters as a group have become more progressive.
The chair has final say on everything, and is accountable to no one. Ken Martin could have done whatever he wanted, but he can't time travel to before he became chair and enforce the DNC's rules.
He chose the democratic path forward...
So I don't understand why people keep hanging the sins of past DNCs on his head. If you want to know what kind of leader he's gonna be, I'm not asking for blind faith: he has a track record, and all signs point to him running the DNC like he ran Minnesota for over a decade immediately before becoming DNC chair.
But imagine your ideal candidate becomes president in 2028, and I show up saying they're gonna be terrible because of all the shit Trump is doing right now. If I told you it didn't matter because they're both holding the same office, I wouldn't be surprised if people called me crazy...
Boomer Humor Doomergod
in reply to givesomefucks
He can run it any way he wants. There are 399 other people who can try to stop him.
I’m not holding my breath. Dems have failed me so damn always.
conditional_soup
in reply to EightBitBlood
E: changed my mind about snark.
I meant my remark in good humor; looking at the ratio, I can see it was taken very differently from how I meant it.
HuskerNation
in reply to return2ozma
When will Dems learn? They can't win without the progressive vote. Or perhaps they don't want to; maybe they're OK making bank while rubes are in charge.
It's time for a third party.
Archangel1313
in reply to HuskerNation
Fuck that! There's way more MAGA voters than progressives. You gotta go where the numbers are. /s