In New Orleans and across U.S., anger over ICE raids sparks a 2nd American Revolution
Everyday folks are rising up to resist immigration raids with whistles, car chases, and noisy protests. Revolution is in the air.
Billionaire Palantir Co-Founder Pushes Return of Public Hangings as Part of 'Masculine Leadership' Initiative
cross-posted from: news.abolish.capital/post/1218…
Venture capitalist Joe Lonsdale, a co-founder of data platform company Palantir, is calling for the return of public hangings as part of a broader push to restore what he describes as "masculine leadership" to the US.
In a statement posted on X Friday, Lonsdale said that he supported changing the so-called "three strikes" anti-crime law to ensure that anyone who is convicted of three violent crimes gets publicly executed, rather than simply sent to prison for life.
"If I’m in charge later, we won’t just have a three strikes law," he wrote. "We will quickly try and hang men after three violent crimes. And yes, we will do it in public to deter others."
Lonsdale then added that "our society needs balance," and said that "it's time to bring back masculine leadership to protect our most vulnerable."
Lonsdale's views on public hangings being necessary to restore "masculine leadership" drew swift criticism.
Gil Durán, a journalist who documents the increasingly authoritarian politics of Silicon Valley in his newsletter "The Nerd Reich," argued in a Saturday post that Lonsdale's call for public hangings showed that US tech elites are "entering a more dangerous and desperate phase of radicalization."
"For months, Peter Thiel guru Curtis Yarvin has been squawking about the need for more severe measures to cement Trump's authoritarian rule," Durán explained. "Peter Thiel is ranting about the Antichrist in a global tour. And now Lonsdale—a Thiel protégé—is fantasizing about a future in which he will have the power to unleash state violence at mass scale."
Taulby Edmondson, an adjunct professor of history, religion, and culture at Virginia Tech, wrote in a post on Bluesky that the rhetoric Lonsdale uses to justify the return of public hangings has even darker connotations than calls for state-backed violence.
"A point of nuance here: 'masculine leadership to protect our most vulnerable' is how lynch mobs are described, not state-sanctioned executions," he observed.
Theoretical physicist Sean Carroll argued that Lonsdale's remarks were symbolic of a kind of performative masculinity that has infected US culture.
"Immaturity masquerading as strength is the defining personal characteristic of our age," he wrote.
Tech entrepreneur Anil Dash warned Lonsdale that his call for public hangings could have unintended consequences for members of the Silicon Valley elite.
"Well, Joe, Mark Zuckerberg has sole control over Facebook, which directly enabled the Rohingya genocide," he wrote. "So let’s have the conversation."
And Columbia Journalism School professor Bill Grueskin noted that Lonsdale has been a major backer of the University of Austin, an unaccredited liberal arts college that has been pitched as an alternative to left-wing university education with the goal of preparing "thoughtful and ethical innovators, builders, leaders, public servants and citizens through open inquiry and civil discourse."
From Common Dreams via This RSS Feed.
Joe Lonsdale Calls For Public Hangings
Silicon Valley radicalization escalates. Gil Durán (The Nerd Reich)
A new storm is brewing in South-East Asia. This time it's in the halls of power
ABC News
Karishma Vyas (Australian Broadcasting Corporation)
RRF Caserta Sport. Serie C football: Cerignola Casertana, 1-1
Firefox Account
Maybe a silly question, but is it unwise to use Firefox for getting torrents, or for saving any bookmarks in Firefox? Is there any benefit to using a private window? (Doubtful, as I believe this only affects your device.)
I know we generally can trust Firefox but they could turn quickly.
ICE has arrested nearly 75,000 people with no criminal records, data shows
More than a third of the roughly 220,000 people arrested by ICE officers in the first nine months of the Trump administration had no criminal histories. Laura Strickler (NBC News)
What all would you do to set up Ubuntu as a NAS?
NAS
Depends on what your plans are: an actual NAS-only machine, or something that develops into a general-purpose server. For the NAS part you'd only need a few services like FTP, SMB, or whatever you want to run.
Those are easily configured on the command line.
I went for the simplest option
- Installed a distro (in this case Debian)
- Installed tailscale on the server, logged in
- Installed tailscale on my other devices, logged in
- Used sshfs to mount the desired directory on the server to my client
- SSH in once a week or so to run updates
Found it very simple. Avoided the tedious setup of Samba, and Samba had weird reliability issues for me when copying large files. Took a bit to learn how SSH works, but very much worth it.
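For anyone wanting to try the same approach, here is a minimal sketch of the client side. The hostname, user, and paths are placeholders rather than details from the post above; Tailscale (or any other connectivity) just has to make the server reachable.

```bash
# Install the FUSE client on the machine that will access the files
sudo apt install sshfs

# Mount the server's /srv/data at ~/nas; 'nas' is whatever name resolves
# to the server (e.g. its Tailscale hostname)
mkdir -p ~/nas
sshfs user@nas:/srv/data ~/nas

# Unmount when finished
fusermount -u ~/nas
```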
I just use Debian... I won't touch Ubuntu as a server anymore (or a desktop either, but really that stems from the server side for me).
Vanilla Debian or Proxmox is functionally all I'll use at this point, including on 3 AMD machines (two 1700X, one 5700X), though none with an AMD iGPU, mostly older dGPUs.
Edit: The point being, maybe figure out what the problem is here rather than going with Ubuntu, which has been a huge security problem in the past (Snap + Docker especially).
Install the samba package, add a user, configure your shares, and you're good to go.
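A rough sketch of that flow, assuming a Debian/Ubuntu-style system; the share name, path, and username are made-up examples:

```bash
sudo apt install samba

# Give an existing Linux user an SMB password
sudo smbpasswd -a youruser

# Add a share definition to /etc/samba/smb.conf, for example:
#   [media]
#       path = /srv/media
#       read only = no
#       valid users = youruser

# Apply the change
sudo systemctl restart smbd
```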
This is a relatively new CPU. You might struggle on Ubuntu as well. As much as I love Debian, something like Fedora might be better.
It may be possible to get Debian running, though - either run Debian Testing or install a Backports kernel and Mesa. Were you failing to boot Debian, or did you just struggle after getting it installed?
Either way, I just don't recommend Ubuntu.
Can't talk about AMD but I'm on NVIDIA and I always followed wiki.debian.org/NvidiaGraphics… and never had issues others seem to be having. I typically hear good things about AMD GPU support, on Debian and elsewhere so I'm surprised.
Now in practice, IMHO, GPU support doesn't matter much for a NAS, as you're probably going headless (no monitor, mouse or keyboard). You probably do want the GPU's instruction-set support for transcoding, though, but here again I can't advise for this brand of GPU. It should just rely on e.g. trac.ffmpeg.org/wiki/Hardware/…
Finally, I'm a Debian user and I'm quite familiar with setting it up, locally or remotely. I also made ISOs for the RPi based on Raspbian, so this post made me realize I've never (at least as far as I remember) installed Debian headlessly, by which I mean booting a computer with no OS all the way to a working SSH connection over LAN or WiFi. I relied on Imager for RPi configuration, or made my own ISO via a microSD card (using dd), but it made me curious about preseeding wiki.debian.org/DebianInstaller/Preseed so I might tinker with it via QEMU. Advice welcome.
PS: based on a few other comments, consider minidlna over more complex setups. Consider WireGuard over Tailscale (or at least Headscale for a version relying solely on your own infrastructure), with e.g. wg-easy if you want to manage everything without third parties.
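On the preseed idea above, a minimal sketch of what an unattended-answer file could look like, with placeholder hostname, user, and password; the directive names follow the Debian preseed documentation, but treat the whole thing as an untested example. The file still has to reach the installer somehow, e.g. baked into the initrd, appended to the ISO, or fetched over HTTP.

```bash
# Write a minimal preseed file (placeholder values throughout)
cat > preseed.cfg <<'EOF'
d-i debian-installer/locale string en_US.UTF-8
d-i keyboard-configuration/xkb-keymap select us
d-i netcfg/get_hostname string nas
d-i passwd/make-user boolean true
d-i passwd/username string admin
d-i passwd/user-password password changeme
d-i passwd/user-password-again password changeme
d-i pkgsel/include string openssh-server
EOF

# Typical delivery is something like
#   auto=true priority=critical url=http://<host>/preseed.cfg
# on the installer's kernel command line, which is easy to experiment
# with in QEMU before touching real hardware.
```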
What?! There's a huge difference. Ubuntu is hot garbage for server work. How? Wait until you hit a permissions issue with your share and find that Snap did some bullshit, because you have mixed apt and Snap packages. The notorious hardcoded Snap store backend? Not a fan.
Yeah, one might say Debian means old packages, but first of all it's a NAS system, not an internet-facing machine or even a main server that needs a ton of services.
Even then, third-party software is pretty recent even on Debian.
Ubuntu is the wrong choice for any server. Any.
I mean not much difference in hardware support.
Ubuntu is the wrong choice for any server.
In general, I agree. But I don't want to participate in holy wars.
Same here, sick of holy wars.
That being said, it seems the hardware difference is there; the AMD 370 is undercooked on Debian unless you use either Sid or a custom kernel.
Depends what protocols you need?
If you use SMB, install the Samba server package. If you use WebDAV, install a WebDAV server like SFTPGo, etc.
If you want a google drive like replacement there's Nextcloud, Owncloud, Seafile, and others.
For the drives themselves you can have traditional RAID with MD, or ZFS for more reliability and neat features, or go with MergerFS + SnapRAID, or just directly mount the disks and store files on some and backup to the others with Restic or something.
Lots of options!
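For the MergerFS + SnapRAID route mentioned above, a sketch with placeholder disk and mount-point names (not a recommendation over the other options, just roughly what that setup looks like):

```bash
sudo apt install mergerfs snapraid

# Pool two already-mounted data disks into a single view at /srv/storage
sudo mkdir -p /srv/storage
sudo mergerfs /mnt/disk1:/mnt/disk2 /srv/storage -o defaults,allow_other,category.create=mfs

# /etc/snapraid.conf (excerpt): one parity disk protecting the data disks
#   parity  /mnt/parity1/snapraid.parity
#   content /mnt/disk1/snapraid.content
#   content /mnt/disk2/snapraid.content
#   data d1 /mnt/disk1
#   data d2 /mnt/disk2

# Run after adding or changing files, e.g. from a nightly timer
sudo snapraid sync
```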
For the storage volume:
Bcache to make an SSD cache, then for the volume itself a BTRFS RAID 1 setup. Set up any necessary SMB and NFS shares. Set up Nextcloud via Docker. Probably rsync for any local distro mirrors. I'd also toss on a dockerized GitLab. Add any additional services via Docker.
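A sketch of just the BTRFS RAID 1 part of that (it leaves bcache out; device names are placeholders, and mkfs wipes them, so double-check before running anything like this):

```bash
# Mirror both data and metadata across two disks
sudo mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc

# Mounting either member device brings up the whole volume
sudo mkdir -p /srv/pool
sudo mount /dev/sdb /srv/pool

# Confirm the data and metadata profiles are RAID1
sudo btrfs filesystem df /srv/pool
```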
For a NAS, like, storage on the network, keep it as simple and as reliable as possible, so avoid Ubuntu and go to the core underlying OS: Debian.
Then just build up the functionality you need, i.e. SMB, NFS, etc.
Personally, I went from OMV to a home built NAS, but went with Arch as that's what I use elsewhere (btw), so am comfortable with it, but it's bleeding edge which isn't always the best if some functionality changes when you're not ready for it.
If you're going for a server running lots of containers, etc., then find whichever distro the container runtime (Docker?) works best on... I just put everything on bare metal, so I can't advise what's best for containers... probably Debian again...
But, keep it simple.
Yup! Considering that CPU, Debian will be painful, but Ubuntu as a server is asking for trouble. Look at Fedora Server, or for the love of god even an Arch server. Do your future self a favor and avoid that grilled roadkill that is Ubuntu.
Fedora Server comes with Cockpit, if that's what you want.
For the 370, Fedora works well.
Pinch of salt: I got burned too many times with Ubuntu Server. I run an extensive homelab and I'm a regular at the burn ward. I loathe Ubuntu with the energy of a Wolf-Rayet star.
Young German leaders deem recent trip to Israel a 'PR operation' run by foreign ministry
Some of the 160 delegates from Germany tell Haaretz that the five-day, all-expenses-paid trip lacked critical narratives. 'The messages of the delegation was very nationalist ... there were no perspective for diplomacy or peacebuilding,' reports one participant
In mid-November, some 160 young Germans, described as the country's "handpicked future leaders" and representing each German state, were brought to Israel by the Israeli Foreign Ministry and the Israeli Embassy in Berlin for a trip billed as a celebration of 60 years of diplomatic relations.
Their itinerary included receptions and briefings at the Knesset, the President's Residence and the Supreme Court; a visit to Rafael, the weapons manufacturer; meetings with survivors of the October 7 attack; and visits to a kibbutz near the Gaza border and the Nova site.
🔥NOW: Crowds occupy Rio de Janeiro against femicide 🔥
- YouTube
RRF Caserta. Sport. Serie B basketball: Juve Caserta 71, Imola 62
FBI Making List of American “Extremists,” Leaked Memo Reveals
Attorney General Pam Bondi is ordering the FBI to “compile a list of groups or entities engaging in acts that may constitute domestic terrorism,” according to a Justice Department memo published here exclusively.
The target is those expressing “opposition to law and immigration enforcement; extreme views in favor of mass migration and open borders; adherence to radical gender ideology,” as well as “anti-Americanism,” “anti-capitalism,” and “anti-Christianity.”
That language echoes the so-called indicators of terrorism identified by President Trump’s directive National Security Presidential Memorandum-7, or NSPM-7, which the memo says it’s intended to implement. Where NSPM-7 was a declaration of war on just about anyone who isn’t MAGA, this is the war plan for how the government will wage it on a tactical level.
Are you on Trump's naughty list? Ken Klippenstein
$27 million in divestment victories. - JVP
cross-posted from: lemmy.ml/post/39990892
from The Wire
[online publication of Jewish Voice For Peace JVP in USA]
In North Carolina, Michigan, and Minnesota, local campaigns have notched divestment wins totaling more than $27 million.

Across the country, JVP chapters and our partners are organizing to demand their state and municipal fund managers divest from Israel Bonds — essentially investments in Israeli genocide and apartheid — and invest instead in the well-being of our communities.
This organizing targets the engine enabling Israel's violence against Palestinians: material support from our own institutions in the U.S. And the momentum is growing...
Also:
* Defend anti-Zionist students.
* Plug in locally.
* Join Power Half-Hour.
The West Bank is in crisis as Israel expands its genocide and extremist settler violence reaches record highs. Sarah Burch (Jewish Voice for Peace)
How do I notify followers/subscribers to a PeerTube channel?
I created a video channel on a PeerTube server, but its service quality is poor, so I have migrated everything to another server. The first server's channel is still open, but I have a migration notice on the channel's home page.
The problem is that I have 8 followers on the old server whom I would like to notify directly. How can I do that using my PeerTube handle?
I tried to follow my PeerTube account from my Mastodon account, but it shows "Pending" in Mastodon. When I go to my PeerTube account, I don't see any place for messages, requests, etc.
Any documentation or suggestions for me to look at? Thanks.
"Yes"
Where i used to work we had a lot of Chinese students in the city, with varying degrees of skill with English. No problem, English is my main but also not my first language. This story is from about 20 years ago.
A customer comes in for help with their computer. I ask my troubleshooting questions to triage the problem.
"My computer can't connect to the Internet"
OK what happens when you try?
"Nothing"
At home, at work?
"At home"
Have you checked all the connections?
"Yes"
Restarted everything? PC, router?
"Yes"
Have you contacted your ISP?
"Yes"
And?
"No problem"
OK do you see link lights on the network socket?
"Yes"
Is it just websites? Are you having problems with email, MSN messenger or Skype or any other chat clients?
"Yes"
(We are at this for a good 10 minutes but I'll skip the unnecessary bits)
Have you tried a new network cable?
"Yes"
OK bring it in, we can test it here for you.
"Oh but it works in Starbucks"
What? You mean it's a laptop? Wireless?
"Yes"
And you connect wirelessly at home too?
"Yes"
But you said.. the cable, the link lights?
"Yes"
And then it hits me. Waves of memories wash over me. My Japanese father talking to clients, head bobbing up and down constantly nodding and bowing.
"Hai, hai, hai, haaaa, hai, hai, kashikomarimashita"
Yes. Yes. Yes. Oh yes. Yes. Yes. I understand.
Yes in Japanese doesn't necessarily mean "correct". We say it to show we're listening and being attentive, following the conversation. Yes doesn't always mean "yes, that's right"; it can mean "yes, please continue". And now I assume it's similar in Chinese too. Like in English you might say "aw yeah! Aw hell yeah!" while listening to a story.
Right. Forget everything we've just said and start from the beginning.
Can you see your SSID in the list at home?
"Yes"
(Fuck. That's on me.)
And what name is your SSID in the list when you connect at home?
"[ISP]-XYZ123"
OK good and...
The hidden list of Jewish terrorists Israel’s far right wants to free
cross-posted from: lemmy.ml/post/39990149
from +972’s Sunday Recap
972 Magazine [published in Israel]
Dec 7, 2025
In the wake of the Gaza ceasefire, far-right Israeli ministers and activists are pushing to release dozens of Jewish prisoners, including murderers, who attacked Palestinians. Sivan Tahel revealed several Jewish terrorists on this hidden list, as the state refuses to disclose their identities.

Also:
* Legislating apartheid: How Israel entrenched unequal rule during Gaza war
* Netanyahu’s veiled threat to outlaw Ra’am is a message to all Palestinian citizens
* The billionaire family poised to rewire U.S. media in Israel’s favor
* PODCAST: Uncovering the inner workings of an AI genocide
https://www.972mag.com/wp-content/themes/rgb/newsletter.php?page_id=8&section_id=188890
RRF Caserta. Culture. The historic opening night of Cherubini's Medea at the San Carlo on December 6, 2025
Palestinian prisoners face ‘hunger, overcrowding and violence’, Israeli report finds
Palestinian prisoners in Israeli detention are suffering extreme hunger, overcrowding and systematic violence by prison staff, a report by Israel’s Public Defender’s Office has revealed. MEE staff (Middle East Eye)
Israel has a modus operandi of taking and holding random Palestinians hostage to control and terrorize the population. Using them as bargaining chips (even in death) is just a bonus.
As of November 1, Israeli authorities held nearly 7,000 Palestinians from the occupied territory in detention for alleged security offenses, according to the Israeli human rights organization HaMoked. Far more Palestinians have been arrested since the October 7 attacks in Israel than have been released in the last week. Among those being held are dozens of women and scores of children.

The majority have never been convicted of a crime, including more than 2,000 of them being held in administrative detention, in which the Israeli military detains a person without charge or trial. Such detention can be renewed indefinitely based on secret information, which the detainee is not allowed to see. Administrative detainees are held on the presumption that they might commit an offense at some point in the future. Israeli authorities have held children, human rights defenders and Palestinian political activists, among others, in administrative detention, often for prolonged periods.
[...]
Under military law, Palestinians can be held for up to eight days before they must see a judge — and then, only a military judge. Yet, under Israeli law, a person has to be brought before a judge within 24 hours of being arrested, which can be extended to 96 hours when authorized in extraordinary cases.
Palestinians can be jailed for participating in a gathering of merely 10 people without a permit on any issue “that could be construed as political,” while settlers can demonstrate without a permit unless the gathering exceeds 50 people, takes place outdoors and involves “political speeches and statements.”
In short, Israeli settlers and Palestinians live in the same territory, but are tried in different courts under different laws with different due process rights and face different sentences for the same offense. The result is a large and growing number of Palestinians imprisoned without basic due process.
Discrimination also pervades the treatment of children. Israeli civil law protects children against nighttime arrests, provides the right to have a parent present during interrogations and limits the amount of time children may be detained before being able to consult a lawyer and to be presented before a justice.
Israeli authorities, however, regularly arrest Palestinian children during nighttime raids, interrogate them without a guardian present, hold them for longer periods before bringing them before a judge and hold those as young as 12 in lengthy pretrial detention. The Association for Civil Rights in Israel found in 2017 that authorities kept 72 percent of Palestinian children from the West Bank in custody until the end of proceedings, but only 17.9 percent of children in Israel.
[...]
Even those charged with a crime are routinely deprived of due process rights in military courts. Many of those convicted and serving time for “security offenses” (2,331 people as of November 1) accepted plea bargains to avoid prolonged pretrial detention and sham military trials, which have a nearly 100 percent conviction rate against Palestinians.
Beyond the lack of due process, Israeli authorities have for decades mistreated and tortured Palestinian detainees. More than 1,400 complaints of torture, including painful shackling, sleep deprivation and exposure to extreme temperatures, by Shin Bet, Israel’s internal security service, have been filed with Israel’s Justice Ministry since 2001.
These complaints have resulted in a total of three criminal investigations and no indictments, according to the Public Committee Against Torture, an Israeli rights group. The group Military Court Watch reported that, in 22 cases of detention of Palestinian children they documented in 2023, 64 percent said they were physically abused and 73 percent were strip searched by Israeli forces while in detention.
Palestinian rights groups have reported a spike in arrests and deterioration in the conditions of Palestinian prisoners prior to October 7, including violent raids, retaliatory prison transfers and isolation of prisoners, less access to running water and bread and fewer family visits. The trends have worsened since.
Source: hrw.org/news/2023/11/29/why-do…
The cruelty is the point.
Why Does Israel Have So Many Palestinians in Detention and Available to Swap?
While many have rightly hailed the release of civilians held hostage by Hamas after the killings of hundreds of Israelis and other civilians on October 7 — hostage-taking is a war crime — less attention has been focused on why exactly Israel has so m… Human Rights Watch
Israel is threatening to demolish a popular West Bank youth football pitch
Israel is threatening to demolish a refugee camp’s popular youth football ground built on land owned by the Armenian church on the outskirts of Bethlehem in the shadow of the West Bank separation wall. Lubna Masarwa (Middle East Eye)
Sam Mraiche was investigated by Elections Alberta over alleged illegal political donations
archive.is/w03hg#selection-275…
The elections regulator’s director of compliance and enforcement said in an affidavit that Mr. Mraiche was being investigated in connection with an alleged straw donor scheme – an illegal practice in which an individual circumvents donation limits by providing money through others.

“Mr. Mraiche is alleged to have given funds to other people for the purpose of having those people make contributions to a registered party,” Diane Brauer, the official, said. The alleged donations were made in the two months prior to the May, 2023, provincial election, according to her affidavit, which was filed in support of the contempt request.
Besides Mraiche joining the UCP's Smith in a hotel suite to watch provincial election results in May 2023, and the Edmonton Oilers hockey games with the notorious skybox photo, keep in mind that Mraiche has also allegedly been tied to McFee, Public Safety Minister Mike Ellis, and Dr. Jayan Nagendran.
thetyee.ca/News/2025/02/14/AHS…
thetyee.ca/News/2025/02/26/UCP…
Regulator says Mraiche was being investigated this year in connection to an alleged straw donor scheme. Tom Cardoso (The Globe and Mail)
Sunday, December 7, 2025
The Kyiv Independent [unofficial]
Over 25,000 members make our journalism possible. Can we count on you, too? Joining is easy and safe.
Olga Rudenko, editor-in-chief of the Kyiv Independent
Russia’s war against Ukraine
A woman mourns among graves of Ukrainian servicemen at the Lychakiv cemetery on the Day of the Armed Forces of Ukraine, in Lviv on Dec. 6, 2025. (Yuriy Dyachyshyn / AFP via Getty Images)
Explosions reported in Kremenchuk as Russia launches barrage of missiles, drones towards central Ukraine. Russian forces launched a large-scale attack on the central Ukrainian city of Kremenchuk overnight on Dec. 6, officials reported.
‘Happy Ukrainian Armed Forces Day’ — hackers deface website of Russian company delivering military goods, HUR source claims. The cyberattack took down over 700 computers and servers and deleted accounts of more than 1,000 Eltrans+ users, HUR claimed.
End to war ‘depends on Russia’s commitment to peace,’ Ukraine, US agree. Over the course of 2025, Ukraine has repeatedly agreed to ceasefire proposals put forward by the White House. Russia has refused to agree to a single one.
Zelensky reports ‘long and substantive call‘ with Witkoff, Kushner. “Ukraine is determined to keep working in good faith with the American side to genuinely achieve peace. We agreed on the next steps and formats for talks with the United States,” Zelensky wrote on Dec. 6.
Your contribution helps keep the Kyiv Independent going. Become a member today.
Ukraine will not accept any peace deal requiring territorial concessions, Syrskyi tells UK broadcaster. “There are no pauses, no delays in (Russia’s) operations. They keep pushing their troops forward to seize as much of our territory as possible under the cover of negotiations,” Syrskyi said.
Chornobyl protective shield ‘lost its primary safety functions’ after Russian drone strike, UN nuclear agency warns. Russia’s drone strike caused a fire that burned the outer cladding of the shelter.
G7, EU mull ban on Russian oil maritime services, but experts sceptical. If the ban goes through, Russia would likely have to expand its shadow fleet to transport crude oil instead.
Read our exclusives
Opinion: Ukraine’s lights still burn, even if not all the time
After three winters in Ukraine, I (and every Ukrainian) have become adept at dealing with the constant power cuts resulting from Russia’s relentless missile, bomb, and drone attacks.
Photo: Roman Pilipey / AFP via Getty Images
Kyiv Independent event in New York to hold live stream, photo exhibition
The Kyiv Independent on Dec. 9 will host its first live event in New York City, an evening dedicated to storytelling, investigative journalism, and frontline reporting.
Photo: The Kyiv Independent
Independent journalism is never easy, and it’s even harder in wartime
Yet we can do it without paywalls, billionaires, or compromise — because of our community.
Human costs of war
General Staff: Russia has lost 1,179,790 troops in Ukraine since Feb. 24, 2022. The number includes 1,180 casualties that Russian forces suffered over the past day.
Why Ukraine rejects Russia’s 600,000 army cap demand
International response
Zelensky to visit London amid push for peace. Zelensky will also meet with the leaders of France and Germany in the British capital on Monday.
This newsletter is open for sponsorship. Boost your brand’s visibility by reaching thousands of engaged subscribers. Click here for more details.
Today’s Ukraine Daily was brought to you by Dominic Culverwell, Chris York, Olena Goncharova, Sonya Bandouil, and Dmytro Basmat.
If you’re enjoying this newsletter, consider joining our membership program. Start supporting independent journalism today.
Abu Shabab’s death signals the inevitable failure of Israel’s plan for Gaza
Throughout the war, Abu Shabab’s name was synonymous with collaboration with Israel. He was a key partner in Gaza in securing safe passage for Israeli troops, searching for Israeli captives, killing Palestinian resistance members, and, most infamously, looting aid trucks. Before he was killed, Abu Shabab was reportedly being considered for the position of governor of Rafah to be appointed by Israel.

His death deals a massive blow to Israel’s efforts to establish a new Palestinian administration in Gaza that responds to its wishes and oppresses the Palestinians. It is yet another proof that the Palestinian people will never accept colonial rule.
The Israeli efforts to establish Palestinian rule in Gaza loyal to the occupation are doomed. Said Alsaloul (Al Jazeera)
Extremely slow boot time
Is this the right place to ask for help? Or is there another place? Anyway, feel free to delete this post if I'm in the wrong spot.
I use Pop OS on an Asus. Something has happened where I either have a 10-minute-plus boot time, or it doesn't boot at all. I have reinstalled Pop OS twice (and used recovery mode) and even took it into a computer shop to see if there was something wrong with my hardware (there isn't). When I first do a new install it will restart fine, but then the next day it will either take over 8 minutes to load, or it will be stuck on boot.
Right now it is stuck on boot. I can get into a live USB stick just fine. I have run systemd-analyze blame, and it didn't give me any helpful information. I have the same issue even if I press the space bar and boot into an old kernel.
I should note that my computer has encryption enabled.
Any help would be awesome.
All hail the other Linux noobs out there!
"systemd-analyze blame didn't give me any helpful information
And what exactly did it give you?
Could you copy-paste the output of that command (also known as "stdout")?
EDIT: It seems that you made the same post 2 times. Ideally, you should delete one of them.
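For reference, a quick sketch of the command being asked about, plus a couple of related ones that often help pinpoint boot stalls; they would need to be run on the affected machine once it does boot:

```bash
systemd-analyze                  # total boot time split across firmware/loader/kernel/userspace
systemd-analyze blame            # per-unit startup times, slowest first
systemd-analyze critical-chain   # the chain of units the boot actually waited on
journalctl -b -p warning         # warnings and errors logged during the current boot
```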
I had a similar issue years back when I was first dual-booting with Ubuntu. The issue ostensibly was a systemd service that was checking the time since the last boot to see if anything needed to be updated. I recall needing to manually set the timer for that to 30 minutes after boot. It didn't fix the whole issue, but it did cut the boot time by about half. Hope that helps point you in the right direction! Boot problems can be a nightmare to debug.
It's also worth checking if you can drop into a tty
I Went All-In on AI. The MIT Study Is Right.
Just want to clarify, this is not my Substack, I'm just sharing this because I found it insightful.
The author describes himself as a "fractional CTO" (no clue what that means, don't ask me) and advisor. His clients asked him how they could leverage AI. He decided to experience it for himself. From the author (emphasis mine):
I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me. I wanted to experience what my clients were considering—100% AI adoption. I needed to know firsthand why that 95% failure rate exists.

I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.
Now when clients ask me about AI adoption, I can tell them exactly what 100% looks like: it looks like failure. Not immediate failure—that’s the trap. Initial metrics look great. You ship faster. You feel productive. Then three months later, you realize nobody actually understands what you’ve built.
My all-in AI experiment cost me my confidence. Josh Anderson (The Leadership Lighthouse)
What's interesting is what he found out. From the article:
I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me. I wanted to experience what my clients were considering—100% AI adoption. I needed to know firsthand why that 95% failure rate exists.

I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.
> "Then three months later, you realize nobody actually understands what you’ve built."
gratz, gang, you turned everything into Perl.
well
*(dusts off `perldoc`)*
I'll be ready
to be fair, Perl and PHP both suffered from the fact that it was WAY too easy to write TERRIBLE code.
Both languages required a high level of personal discipline to write good code, but it was actually very doable.
The problem wasn't the languages. It was the humans using them.
This, to me, is the most insidious effect of AI.
Whether it's as complex as code or as simple as fact-checking a search result. People lose confidence in their judgment, and therefore their agency is eroded.
This is kind of the obvious conclusion. I didn't need to use AI to know this would be the outcome. This is why I only use it for small code snippets if at all. This is why I've taught my kids not to rely on AI to do their homework.
It may seem like the easy way but it will absolutely come back to haunt you later. If you don't do the work you don't learn anything or develop any skills.
Auditing the code it produces is basically the only effective way to use coding LLMs at this point.
You're basically playing the role of senior dev code reviewing and editing a junior dev's code, except in this case the junior dev randomly writes an amalgamation of mostly valid, extremely wonky, and/or complete bullshit code. It has no concept of best practices, or fitness for purpose, or anything you'd expect a junior dev to learn as they gain experience.
Now given the above, you might ask yourself: "Self, what if I myself don't have the skills or experience of a senior dev?" This is where vibe coding gets sketchy or downright dangerous: if you don't notice the problems in generated code, you're doomed to fail sooner or later. If you're lucky, you end up having to do a big refactoring when you realize the code is brittle. If you're unlucky, your backend is compromised and your CTO is having to decide whether to pay off the ransomware demands or just take a chance on restoring the latest backup.
If you're just trying to slap together a quick and dirty proof of concept or bang out a one-shot script to accomplish a task, it's fairly useful. If you're trying to implement anything moderately complex or that you intend to support for months/years, you're better off just writing it yourself as you'll end up with something stylistically cohesive and more easily maintainable.
Untrained dev here, but the trend I’m seeing is spec-driven development where AI generates the specs with a human, then implements the specs. Humans can modify the specs, and AI can modify the implementation.
This approach seems like it can get us to 99%, maybe.
This poster calckey.world/notes/afzolhb0xk is more articulate than my post.
The difference with this "spec-driven" approach is that the entire process is repeatable by AI once you've gotten the spec sorted. So you no longer work on the code, you just work on the spec, which can be a collection of files, files in folders, whatever — but the goal is some kind of determinism, I think.
I use it on a much smaller scale and haven't really cared much for the "spec as truth" approach myself, at this level. I also work almost exclusively on NextJS apps with the usual Tailwind + etc stack. I would certainly not trust a developer without experience with that stack to generate "correct" code from an AI, but it's sort of remarkable how I can slowly document the patterns of my own codebase and just auto-include it as context on every prompt (or however Cursor does it) so that everything the LLMs suggest gets LLM-reviewed against my human-written "specs". And doubly neat is that the resulting documentation of patterns turns out to be really helpful to developers who join or inherit the codebase.
I think the author / developer in the article might not have been experienced enough to direct the LLMs to build good stuff, but these tools like React, NextJS, Tailwind, and so on are all about patterns that make us all build better stuff. The LLMs are like "8 year olds" (someone else in this thread) except now they're more like somewhat insightful 14 year olds, and where they'll be in another 5 years… Who knows.
Anyway, just saying. They're here to stay, and they're going to get much better.
@technology@lemmy.world
I've been dealing with programming since I was 9 years old, with my professional career in DevOps starting several years later, in 2013. I've dealt with lots of other people's code, legacy code, very shitty code (especially code by my "managers" who cosplayed as programmers), and tons of technical debt.

Even though I'm quite the LLM power user (because I'm a person devoid of other humans in my daily existence), I never relied on LLMs to "create" my code. Rather, what I did a lot was tinker with different LLMs to "analyze" code that I wrote myself, both to experiment with their limits (e.g. I wrote a lot of cryptic, code-golf one-liners and fed them to the LLMs to test their ability to "connect the dots" on whatever was happening behind the cryptic syntax) and to try to use them as a pair of external eyes beyond mine (by "connect the dots" I mean their ability, as fancy Markov chains, to relate tokens to other tokens with similar semantic proximity).
I did test them (especially Claude/Sonnet) for their "ability" to output code, not intending to use the code because I'm better off writing my own thing, but you likely know the maxim, one can't criticize what they don't know. And I tried to know them so I could criticize them. To me, the code is.. pretty readable. Definitely awful code, but readable nonetheless.
So, when the person says...
The developers can’t debug code they didn’t write.
...even though they argue they have more than 25 years of experience, it feels to me like they don't.

It's one thing to say "developers find it pretty annoying to debug code they didn't write", a statement I'd totally agree with! It's awful to try to debug someone else's (human or otherwise) code, because you need to put yourself in their shoes without knowing what their shoes are like... But it's doable, especially by people who have dealt with programming logic since childhood.
Saying "developers can't debug code they didn't write", to me, seems like a layperson who doesn't belong to the field of Computer Science, doesn't like programming, and/or only pursued a "software engineer" career purely because of money/capitalistic mindset. Either way, if a developer can't debug other's code, sorry to say, but they're not developers!
Don't get me wrong: I'm not trying to be prideful or pretending to be awesome; this is beyond my person, I'm nothing, I'm no one. I abandoned my career because I hate the way the technology is growing more and more enshittified. Working as a programmer for capitalistic purposes ended up depleting the joy I used to have back when I coded on a daily basis. I'm not on the "job market" anymore, so what I'm saying is based on more than 10 years of former professional experience. And my experience says: a developer who won't at least try to understand the worst code out there can't call themselves a developer, full stop.
They’re here to stay
Eh, probably. At least for as long as there is corporate will to shove them down the rest of our throats. But right now, in terms of sheer numbers, humans still rule, and LLMs are pissing off more and more of us every day while their makers are finding it increasingly harder to forge ahead in spite of us, which they are having to do ever more frequently.
and they’re going to get much better.
They're already getting so much worse, with what is essentially the digital equivalent of kuru, that I'd be willing to bet they've already jumped the shark.
If their makers and funders had been patient, and worked the present nightmares out privately, they'd have a far better chance than they do right now, IMO.
Simply put, LLMs/"AI" were released far too soon, and with far too much "I Have a Dream!" fairy-tale promotion that the reality never came close to living up to, and then shoved with brute corporate force down too many throats.
As a result, now you have more and more people across every walk of society pushed into cleaning up the excesses of a product they never wanted in the first place, being forced to share their communities AND energy bills with datacenters, depleted water reserves, privacy violations, EXCESSIVE copyright violations and theft of creative property, having to seek non-AI operating systems just to avoid it . . . right down to the subject of this thread, the corruption of even the most basic video search.
Can LLMs figure out how to override an angry mob, or resolve a situation wherein the vast majority of the masses are against the current iteration of AI even though the makers of it need us all to be avid, ignorant consumers of AI for it to succeed? Because that's where we're going, and we're already farther down that road than the makers ever foresaw, apparently having no idea just how thin the appeal is getting on the ground for the rest of us.
So yeah, I could be wrong, and you might be right. But at this point, unless something very significant changes, I'd put money on you being mostly wrong.
Trained dev with a decade of professional experience, humans routinely fail to get me workable specs without hours of back and forth discussion. I'd say a solid 25% of my work week is spent understanding what the stakeholders are asking for and how to contort the requirements to fit into the system.
If these humans can't be explicit enough with me, a living, thinking human who understands my architecture better than any LLM, what chance does an LLM have at interpreting them?
Even more efficient: humans do the specs and the implementation. AI has nothing to contribute to specs, and is worse at implementation than an experienced human. The process you describe, with current AIs, offers no advantages.
AI can write boilerplate code and implement simple small-scale features when given very clear and specific requests, sometimes. It's basically an assistant to type out stuff you know exactly how to do and review. It can also make suggestions, which are sometimes informative and often wrong.
If the AI were a member of my team it would be that dodgy developer whose work you never trust without everyone else spending a lot of time holding their hand, to the point where you wish you had just done it yourself.
Not immediate failure—that’s the trap. Initial metrics look great. You ship faster. You feel productive.
And all they'll hear is "not failure, metrics great, ship faster, productive" and go against your advice because who cares about three months later, that's next quarter, line must go up now. I also found this bit funny:
I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me... I was proud of what I’d created.
Well you didn't create it, you said so yourself, not sure why you'd be proud, it's almost like the conclusion should've been blindingly obvious right there.
The top comment on the article points that out.
It's an example of a far older phenomenon: Once you automate something, the corresponding skill set and experience atrophy. It's a problem that predates LLMs by quite a bit. If the only experience gained is with the automated system, the skills are never acquired. I'll have to find it but there's a story about a modern fighter jet pilot not being able to handle a WWII era Lancaster bomber. They don't know how to do the stuff that modern warplanes do automatically.
It's more like the ancient phenomenon of spaghetti code. You can throw enough code at something until it works, but the moment you need to make a non-trivial change, you're doomed. You might as well throw away the entire code base and start over.
And if you want an exact parallel, I've said this from the beginning, but LLM coding at this point is the same as offshore coding was 20 years ago. You make a request, get a product that seems to work, but maintaining it, even by the same people who created it in the first place, is almost impossible.
Indeed. Throw-away code is currently where AI coding excels. And that is cool and useful - creating one-off scripts, self-contained modules automating boilerplate, etc.
You can't quite use it the same way for complex existing code bases though... Not yet, at least.
The thing about this perspective is that I think its actually overly positive about LLMs, as it frames them as just the latest in a long line of automations.
Not all automations are created equal. For example, compare using a typewriter to using a text editor. Besides a few details about the ink ribbon and movement mechanisms you really haven't lost much in the transition. This is despite the fact that the text editor can be highly automated with scripts and hot keys, allowing you to manipulate even thousands of pages of text at once in certain ways. Using a text editor certainly won't make you forget how to write like using ChatGPT will.
I think the difference lies in the relationship between the person and the machine. To paraphrase Cathode Ray Dude, people who are good at using computers deduce the internal state of the machine, mirror (a subset of) that state as a mental model, and use that to plan out their actions to get the desired result. People that aren't good at using computers generally don't do this, and might not even know how you would start trying to.
For years 'user friendly' software design has catered to that second group, as they are both the largest contingent of users and the ones that needed the most help. To do this, software vendors have generally done two things: try to move the necessary mental processes from the user's brain into the computer, and hide the computer's internal state (so that it's not implied that the user has to understand it, so that a user who doesn't know what they're doing won't do something they'll regret, etc). Unfortunately this drives that first group of people up the wall. Not only does hiding the internal state of the computer make it harder to deduce, every "smart" feature they add to try to move this mental process into the computer itself only makes the internal state more complex and harder to model.
Many people assume that if this is the way you think about software you are just an elitist gatekeeper, and you only want your group to be able to use computers. Or you might even be accused of ableism. But the real reason is what I described above, even if it's not usually articulated in that way.
Now, I am of the opinion that the 'mirroring the internal state' method of thinking is the superior way to interact with machines, and the approach to user friendliness I described has actually done a lot of harm to our relationship with computers at a societal level. (This is an opinion I suspect many people here would agree with.) And yet that does not mean that I think computers should be difficult to use. Quite the opposite, I think that modern computers are too complicated, and that in an ideal world their internal states and abstractions would be much simpler and more elegant, but no less powerful. (Elaborating on that would make this comment even longer though.) Nor do I think that computers shouldn't be accessible to people with different levels of ability. But just as a random person in a store shouldn't grab a wheelchair user's chair handles and start pushing them around, neither should Windows (for example) start changing your settings on updates without asking.
Anyway, all of this is to say that I think LLMs are basically the ultimate in that approach to 'user friendliness'. They try to move more of your thought process into the machine than ever before, their internal state is more complex than ever before, and it is also more opaque than ever before. They also reflect certain values endemic to the corporate system that produced them: that the appearance of activity is more important than the correctness or efficacy of that activity. (That is, again, a whole other comment though.) The result is that they are extremely mind numbing, in the literal sense of the phrase.
Once you automate something, the corresponding skill set and experience atrophy. It's a problem that predates LLMs by quite a bit. If the only experience gained is with the automated system, the skills are never acquired.
Well, to be fair, different skills are acquired. You've learned how to create automated systems, that's definitely a skill. In one of my IT jobs there were a lot of people who did things manually, updated computers, installed software one machine at a time. But when someone figures out how to automate that, push the update to all machines in the room simultaneously, that's valuable and not everyone in that department knew how to do it.
So yeah, I guess my point is, you can forget how to do things the old way, but that's not always bad. Like, so you don't really know how to use a scythe, that's fine if you have a tractor, and trust me, you aren't missing much.
I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me… I was proud of what I’d created.
Well you didn’t create it, you said so yourself, not sure why you’d be proud, it’s almost like the conclusion should’ve been blindingly obvious right there.
Does a director create the movie? They don't usually edit it, they don't have to act in it, nor do all directors write movies. Yet the person giving directions is seen as the author.
The idea is that vibe coding is like being a director or architect. I mean that's the idea. In reality it seems it doesn't really pan out.
You can vibe write and vibe edit a movie now too. They also turn out shit.
The issue is that an LLM isn't a person with skills and knowledge. It's a complex guessing box that gets things kinda right, but not actually right, and it absolutely can't tell what's right or not. It has no actual skills or experience or humanity that a director can expect a writer or editor to have.
Wrong, it's just outsourcing.
You're making a false-equivalence. A director is actively doing their job; they're a puppeteer and the rest is their puppet. The puppeteer is not outsourcing his job to a puppet.
And I'm pretty sure you don't know what architects do.
If I hire a coder to write an app for me, whether it's a clanker or a living being, I'm outsourcing the work; I'm a manager.
It's like tasking an artist to write a poem for you about love and flowers, and being proud about that poem.
yeah i don't get why the ai can't do the changes
don't you just feed it all the code and tell it? i thought that was the point of 100% AI
We’re about to face a crisis nobody’s talking about. In 10 years, who’s going to mentor the next generation? The developers who’ve been using AI since day one won’t have the architectural understanding to teach. The product managers who’ve always relied on AI for decisions won’t have the judgment to pass on. The leaders who’ve abdicated to algorithms won’t have the wisdom to share.
Except we are talking about that, and the tech bro response is "in 10 years we'll have AGI and it will do all these things all the time permanently." In their roadmap, there won't be a next generation of software developers, product managers, or mid-level leaders, because AGI will do all those things faster and better than humans. There will just be CEOs, the capital they control, and AI.
What's most absurd is that, if that were all true, that would lead to a crisis much larger than just a generational knowledge problem in a specific industry. It would cut regular workers entirely out of the economy, and regular workers form the foundation of the economy, so the entire economy would collapse.
"Yes, the planet got destroyed. But for a beautiful moment in time we created a lot of value for shareholders."
According to a study, the ~~lower~~ top 10% accounts for something like 68% of cash flow in the economy. Us plebs are being cut out altogether.
That being said, I think if people can't afford to eat, things might get bad. We will probably end up a kept population in these ghouls' fever dreams.
Edit: I'm an idiot.
Once Boston Dynamics-style dogs and androids can operate independently over a number of days, I'd say all bets are off that we would be kept around as pets.
I'm fairly certain your Musks and Altmans would be content with a much smaller human population existing to only maintain their little bubble and damn everything else.
Edit: I’m an idiot.
Same here. Nobody knows what the eff they are doing. Especially the people in charge. Much of life is us believing confident people who talk a good game but don't know wtf they are doing and really shouldn't be allowed to make even basic decisions outside a very narrow range of competence.
We have an illusion of broad meritocracy and accountability in life but it's mostly just not there.
I did see someone write a post about Chat Oriented Programming, to me that appeared successful, but not without cost and extra care. Original Link, Discussion Thread
Successful in that it wrote code faster and its output stuck to conventions better than the author would have. But they had to watch it like a hawk, with the discipline of a senior developer giving full attention to a junior: stop and swear at it every time it ignored the rules given at the beginning of each session, terminate the session when it starts an auto-compactification routine that wastes your money and makes Claude forget everything, and try to dump what it has completed each time. One of the costs seems to be the sanity of the developer, so I really question whether it's a sustainable way of doing things, from both the model side and the developer side. To be actually successful you need to know what you're doing, otherwise it's easy to fall into a trap like the CTO did, trusting the AI's assertions that everything is hunky-dory.
That perfectly describes what my day-to-day has become at work (not by choice).
The only way to get anywhere close to production-ready code is to do like you just described, and the process is incredibly tedious and frustrating. It also isn't really any faster than just writing the code myself (unless I'm satisfied with committing slop) and in the end, I still don't understand the code I've 'written' as well as if I'd done it without AI. When you write code yourself there's a natural self-reinforcement mechanism, the same way that taking notes in class improves your understanding/retention of the information better than when just passively listening. You don't get that when vibe coding (no matter how knowledgeable you are and how diligent you are about babysitting it), and the overall health of the app suffers a lot.
The AI tools are also worse than useless when it comes to debugging, so good fucking luck getting it to fix the bugs it inevitably introduces...
“fractional CTO” (no clue what that means, don’t ask me)
For those who were also interested to find out: Consultant and advisor in a part time role, paid to make decisions that would usually fall under the scope of a CTO, but for smaller companies who can't afford a full-time experienced CTO
That sounds awful. You get someone who doesn’t really know the company or product, they take a bunch of decisions that fundamentally affect how you work, and then they’re gone.
… actually, that sounds exactly like any other company.
I've worked with a fractional CISO. He was scattered, but was insanely useful for setting roadmaps, writing procedures/docs, working audits and correcting us when we were moving in bad cybersecurity directions.
Fractional is way better than none.
He didn't need to go "all-in on AI" because there are hundreds of thousands of people who have tried the same thing already, and every one of them could tell him that's not what AI can do.
Hundreds of thousands of internet strangers is different from lived experience.
I take the author's opinion more seriously because they went out and tried it for themselves.
@technology@lemmy.world
I've been dealing with programming since I was 9 years old, with my professional career in DevOps starting years later, in 2013. I've dealt with lots of others' code, legacy code, very shitty code (especially code done by my "managers" who cosplayed as programmers), and tons of technical debt.
Even though I'm quite of a LLM power-user (because I'm a person devoid of other humans in my daily existence), I never relied on LLMs to "create" my code: rather, what I did a lot was tinkering with different LLMs to "analyze" my own code that I wrote myself, both to experiment with their limits (e.g.: I wrote a lot of cryptic, code-golf one-liners and fed it to the LLMs in order to test their ability to "connect the dots" on whatever was happening behind the cryptic syntax) and to try and use them as a pair of external eyes beyond mine (due to their ability to "connect the dots", and by that I mean their ability, as fancy Markov chains, to relate tokens to other tokens with similar semantic proximity).
I did test them (especially Claude/Sonnet) for their "ability" to output code, not intending to use the code because I'm better off writing my own thing, but you likely know the maxim: one can't criticize what they don't know. And I tried to know them so I could criticize them. To me, the code is... pretty readable. Definitely awful code, but readable nonetheless.
So, when the person says...
The developers can’t debug code they didn’t write.
...even though they argue they have more than 25 years of experience, it feels to me like they don't.
One thing is saying "developers find it pretty annoying to debug code they didn't write", a statement I'd totally agree with! It's awful to try to debug others' (human or otherwise) code, because you need to put yourself in their shoes without knowing what their shoes are like... But it's doable, especially by people who have dealt with programming logic since their childhood.
Saying "developers can't debug code they didn't write", to me, sounds like a layperson who doesn't belong to the field of Computer Science, doesn't like programming, and/or only pursued a "software engineer" career purely because of money/a capitalistic mindset. Either way, if a developer can't debug others' code, sorry to say, but they're not a developer!
Don't get me wrong: I'm not intending to be prideful or pretending to be awesome; this is beyond my person, I'm nothing, I'm no one. I abandoned my career because I hate the way the technology is growing more and more enshittified. Working as a programmer for capitalistic purposes ended up depleting the joy I used to have back when I coded on a daily basis. I'm not on the "job market" anymore, so what I'm saying is based on more than 10 years of former professional experience. And my experience says: a developer who can't put themselves into at least trying to understand the worst code out there can't call themselves a developer, full stop.
@technology@lemmy.world
Often, those are developers who "specialized" in one or two programming languages, without specializing in computer/programming logic.
I used to repeat a personal saying across job interviews: "A good programmer knows a programming language. An excellent programmer knows programming logic". IT positions often require a dev to have a specific language/framework in their portfolio (with Rust being the Current Thing™ now) and they reject people who have vast experience across several languages/frameworks but not the one required, as if these people weren't able to learn the specific language/framework they require.
Languages and framework differ on syntax, namings, paradigms, sometimes they're extremely different from other common languages (such as (Lisp (parenthetic-hell)), or .asciz "Assembly-x86_64"), but they all talk to the same computer logic under the hood. Once a dev becomes fluent in bitwise logic (or, even better, they become so fluent in talking with computers that they can say 41 53 43 49 49 20 63 6f 64 65 without tools, as if it were English), it's just a matter of accustoming oneself to the specific syntax and naming conventions from a given language.
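(If you do want to check that hex without doing it in your head, a throwaway Python one-liner does the job; purely illustrative:)

```python
# Decode the hex bytes from the sentence above back into text.
print(bytes.fromhex("41 53 43 49 49 20 63 6f 64 65").decode("ascii"))  # prints: ASCII code
```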
Back when I was enrolled in college, I lost count of how many colleagues struggled with the entire course as soon as they were faced by Data Structure classes, binary trees, linked lists, queues, stacks... And Linear Programming, maximization and minimization, data fitness... To the majority of my colleagues, those classes were painful, especially because the teachers were somewhat rigid.
And this sentiment echoes across companies and corps. Corps (especially the wannabe-programmer managers) don't want to deal with computers, they want to deal with consumers and their sweet money, but a civil engineer and their masons can't possibly build a house without being willing to deal with a blueprint and the physics of building materials. This is part of the root of this whole problem.
@technology@lemmy.world
Given how it's very akin to dynamic and chaotic systems (e.g. a double pendulum, whose initial position, mass and length rule the movement of the pendulum, very similar to how the initial seed and input rule the output of generative AIs) due to the insurmountable number of physically intertwined factors and the possibility of generalizing the system in mathematical, differential terms, I'd say the better fit would be a physicist. Or a mathematician. lol
As always, relevant xkcd: xkcd.com/435/
where the massive decline in code quality catches up with big projects.
That's going to depend, as always, on how the projects are managed.
LLMs don't "get it right" on the first pass, ever in my experience - at least for anything of non-trivial complexity. But, their power is that they're right more than half of the time AND when they can be told they are wrong (whether by a compiler, or a syntax nanny tool, or a human tester) AND then they can try again, and again as long as necessary to get to a final state of "right," as defined by their operators.
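Roughly, the loop being described looks like this. A minimal sketch only: ask_model is a stand-in for whatever LLM call you use, and the compiler (here just a Python syntax check) is the oracle that's allowed to say "wrong":

```python
# Generate -> check -> feed the errors back -> try again, up to a limit.
import subprocess
import tempfile

def ask_model(prompt: str) -> str:
    raise NotImplementedError("stand-in for your LLM of choice")

def generate_until_it_compiles(task: str, attempts: int = 5) -> str | None:
    prompt = task
    for _ in range(attempts):
        code = ask_model(prompt)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
        check = subprocess.run(["python", "-m", "py_compile", f.name],
                               capture_output=True, text=True)
        if check.returncode == 0:
            return code  # "right", as far as this particular judge cares
        # Feed the failure back and let the model try again.
        prompt = f"{task}\n\nPrevious attempt failed:\n{check.stderr}\nFix it."
    return None
```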
The trick, as always, is getting the managers to allow the developers to keep polishing the AI (or human developer's) output until it's actually good enough to ship.
The question is: which will take longer, which will require more developer "head count" during that time to get it right - or at least good enough for business?
I feel like the answers all depend on the particular scenario - in some places, for some applications, current state-of-the-art AI can deliver that "good enough" product we have always had, with lower developer head count and/or shorter delivery cycles. Other organizations with other product types, it will certainly take longer / more budget.
However, the needle is off 0, there are some places where it really does help, a lot. The other thing I have seen over the past 12 months: it's improving rapidly.
Will that needle ever pass 90% of all software development benefitting from LLM agent application? I doubt it. In my outlook, I see that needle passing +50% in the near future - but not being there quite yet.
An LLM can generate code like an intern getting ahead of their skis. If you let it generate enough code, it will do some gnarly stuff.
Another facet is the nature of mistakes it makes. After years of reviewing human code, I have this tendency to take some things for granted, certain sorts of things a human would just obviously get right and I tend not to think about it. AI mistakes are frequently in areas my brain has learned to gloss over and take on faith that the developer probably didn't screw that part up.
AI generally generates the same sorts of code that I hate to encounter when humans write it, and debugging it is a slog. Lots of repeated code, not well factored. You would assume that if the same exact thing is needed in many places, you'd have a common function with common behavior, but no, the AI repeated itself and didn't always get consistent behavior out of identical requirements.
His statement is perhaps an over simplification, but I get it. Fixing code like that is sometimes more trouble than just doing it yourself from the onset.
Now I can see the value in generating code in digestible pieces, discarding it when the LLM gets oddly verbose for a simple function, or when it gets it wrong, or if you can tell by looking you'd hate to debug that code. But the code generation can just be a huge mess, and if you did a large project exclusively through prompting, I could see the end result being just a hopeless mess. I'm frankly surprised he could even declare an initial "success", but it was probably "tutorial ware", which would be ripe fodder for the code generators.
FYI this article is written with an LLM.
Don't believe a story just because it confirms your view!
I've tested lots and lots of different ones. GPTZero is really good.
If you read the article again, with a critical perspective, I think it will be obvious.
This!
Also, the irony: those are AI tools used by anti-AI people, who use AI to try and (roughly) determine whether a piece of content is AI, by reading the output of an AI. Even worse: as far as I know, they're paid tools (at least every tool I saw in this regard required a subscription), so anti-AI people pay for an AI in order to (supposedly) detect AI slop. Truly "AI-rony", pun intended.
GPTZero is 99% accurate.
gptzero.me/news/gptzero-accura…
The story was invented so people would subscribe to his substack, which exists to promote his company.
We're being manipulated into sharing made-up rage-bait in order to put money in his pocket.
I needed to make a small change and realized I wasn’t confident I could do it.
Wouldn't the point be to use AI to make the change, if you're trying to do it 100% with AI? Who is really saying 100% AI adoption is a good idea though? All I hear about from everyone is how it's not a good idea, just like this post.
I work at a company that is all-in on selling AI, and we are trying desperately to use this AI ourselves. We've concluded internally that AI can only be trusted with small use cases that are easily validated by humans, or for fast prototyping work: hack-day stuff to validate a possibility but not an actual high-quality, safe and scalable implementation, or for writing tests of existing code to increase test coverage. Yes, I know that's a bad idea, but QA blessed the result... so um... cool.
The use case we zeroed in on is writing well-schema'd configs in YAML or JSON. Even then, a good percentage of the time the AI will miss very significant mandatory sections, or add hallucinations that are unrelated to the task at hand. We then use AI to test the AI's work, several times, using several AIs. And to a degree it'll catch a lot of the issues, but not all. So we then code-review and lint with code we wrote that AI never touched, and send all the erroring configs to a human. It does work, but it can't be used for mission-critical applications. And nothing about the AI or the process of using it is free. It's also disturbingly not idempotent. Did it fail? Run it again a few times and it'll pass. We think it still saves money when done at scale, but not as much as we promise external AI consumers. Senior leadership know it's currently overhyped trash and pressure us to use it anyway on the expectation it'll improve in the future, so we give the mandatory crisp salute of alignment and we're off.
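A minimal sketch of what that deterministic backstop can look like, assuming PyYAML and jsonschema and a made-up schema (the real schemas and the human-review routing are obviously more involved than this):

```python
# Lint AI-generated YAML configs against a schema the AI never touched;
# anything that fails goes to a human instead of straight to production.
import sys
import yaml
from jsonschema import Draft7Validator

# Hypothetical schema: the mandatory sections the model tends to forget.
SCHEMA = {
    "type": "object",
    "required": ["service", "owner", "alerts"],
    "properties": {
        "service": {"type": "string"},
        "owner": {"type": "string"},
        "alerts": {"type": "array", "items": {"type": "object"}},
    },
}

def check_config(path: str) -> list[str]:
    """Return a list of schema violations for one generated config file."""
    with open(path) as fh:
        doc = yaml.safe_load(fh)
    validator = Draft7Validator(SCHEMA)
    return [f"{'/'.join(map(str, e.path)) or '<root>'}: {e.message}"
            for e in validator.iter_errors(doc)]

if __name__ == "__main__":
    for path in sys.argv[1:]:
        errors = check_config(path)
        if errors:
            print(f"{path}: NEEDS HUMAN REVIEW")
            for err in errors:
                print(f"  - {err}")
        else:
            print(f"{path}: ok")
```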
I will say it's great for writing yearly personnel reviews. It adds nonsense and doesn't get the whole review correct, but it writes very flowery stuff so managers don't have to. So we use it for first drafts and then remove a lot of the true BS out of it. If it gets stuff wrong, oh well, human perception is flawed.
This is our shared future. One of the biggest use cases identified for the industry is health care. Because it's hard to assign blame for errors when AI gets it wrong, and AI will do whatever the insurance middlemen tell it to do.
I think we desperately need a law saying no AI use in health care decisions, before it's too late. This half-assed tech is 100% going to kill a lot of sick people.
At work there are a lot of rituals where processes demand that people write long internal documents that no one will read. Management will at least open them up, scroll, and be happy to see such long documents with credible-looking diagrams, but never actually read them; maybe they glance at a sentence or two they don't know and nod sagely.
An LLM can generate such documents just fine.
Incidentally an email went out to salespeople. It told them they didn't need to know how to code or even have technical skills; they could just use Gemini 3 to code up whatever a client wants and then sell it to them. I can't imagine the mind that thinks that would be a viable business strategy, even if it worked that well.
eh, not if you know how it works. basic hedging and not shorting stuff limits your risk significantly.
especially in a bull market where ratfucking and general fraud is out in the open for all to see
The developers can’t debug code they didn’t write.
This is a bit of a stretch.
If you've never had to debug code. Are you really a developer?
There is zero chance you have never written a bug, so... who is fixing them?
Unless you just leave them because you work for Infosys or worse but then I ask again - are you really a developer?
I mean I was trying to solve a problem t'other day (hobbyist) - it told me to create a
function foo(bar):
    await object.foo(bar)        # hands bar off to object...

then in object

function foo(bar):
    _foo(bar)                    # ...which forwards it to a private wrapper...

function _foo(bar):
    original_object.foo(bar)     # ...which calls straight back into the original foo: an infinite loop
like literally passing a variable between three wrapper functions in two objects that did nothing except pass the variable back to the original function in an infinite loop
add some layers and complexity and it'd be very easy to get lost
The few times I've used LLMs for coding help, usually because I'm curious if they've gotten better, they let me down. Last time it was insistent that its solution would work as expected. When I gave it an example that wouldn't work, it even broke down each step of the function giving me the value of its variables at each step to demonstrate that it worked... but at the step where it had fucked up, it swapped the value in the variable to one that would make the final answer correct. It made me wonder how much water and energy it cost me to be gaslit into a bad solution.
How do people vibe code with this shit?
As a learning process it’s absolutely fine.
You make a mess, you suffer, you debug, you learn.
But you don’t call yourself a developer (at least I hope) on your CV.
Yes, this is what I intended to write but I submitted it hastily.
It's like a catch-22: they can't write code so they vibe-code, but to maintain vibed code you would necessarily need to be able to write code to understand what's actually happening
Think an interior designer having to reengineer the columns and load bearing walls of a masonry construction.
What are the proportions of cement and gravel for the mortar? What type of bricks to use? Do they comply with the PSI requirements? What caliber should the rebars be? What considerations for the pouring of concrete? Where to put the columns? What thickness? Will the building fall?
"I don't know that shit, I only design the color and texture of the walls!"
And that, my friends, is why vibe coding fails.
And it's even worse, because there are things you can more or less guess at and research. The really bad part is the things you should know about but don't even know are a thing!
Unknown unknowns: Thread synchronization, ACID transactions, resiliency patterns. That's the REALLY SCARY part. Write code? Okay, sure, let's give the AI a chance. Write stable, resilient code with fault tolerance, and EASY TO MAINTAIN? Nope. You're fucked. Now the engineers are gone and the newbies are in charge of fixing bad code built by an alien intelligence that didn't do its own homework and it's easier to rewrite everything from scratch.
Computers are too powerful and too cheap. Bring back COBOL, painfully expensive CPU time, and some sort of basic knowledge of what's actually going on.
Pain for everyone!
Yes, and that's exactly what everyone forgets about automating cognitive work. Knowledge or skill needs to be intergenerational or we lose it.
If you have no junior developers, who will turn into senior developers later on?
If you have no junior developers, who will turn into senior developers later on?
At least it isn't my problem. As long as I have CrowdStrike, Cloudflare, Windows11, AWS us-east-1 and log4j... I can just keep enjoying today's version of the Internet, unchanged.
And then there are actual good developers who could or would tell you that LLMs can be useful for coding
The only people who believe that are managers and bad developers.
There’s a difference between vibe coding and responsible use.
There's also a difference between the occasional evening getting drunk and alcoholism. That doesn't make an occasional event healthy, nor does it mean you are qualified to drive a car in that state.
People who use LLMs in production code are - by definition - not "good developers". Because:
* a good developer has a clear grasp on every single instruction in the code - and critically reviewing code generated by someone else is more effort than writing it yourself
* pushing code to production without critical review is grossly negligent and compromises data & security
This already means the net gain with use of LLMs is negative. Can you use it to quickly push out some production code & impress your manager? Possibly. Will it be efficient? It might be. Will it be bug-free and secure? You'll never know until shit hits the fan.
Also: using LLMs to generate code, a dev will likely be violating copyrights of open source left and right, effectively copy-pasting licensed code from other people without attributing authorship, i.e. they exhibit parasitic behavior & outright violate laws.
Furthermore the stuff that applies to all users of LLMs applies:
* they contribute to the hype, fucking up our planet, causing brain rot and skill loss on average, and pumping hardware prices to insane heights.
We have substantially similar opinions, actually. I agree on your points of good developers having a clear grasp over all of their code, ethical issues around AI (not least of which are licensing issues), skill loss, hardware prices, etc.
However, what I have observed in practice is different from the way you describe LLM use. I have seen irresponsible use, and I have seen what I personally consider to be responsible use. Responsible use involves taking a measured and intentional approach to incorporating LLMs into your workflow. It’s a complex topic with a lot of nuance, like all engineering, but I would be happy to share some details.
Critical review is the key sticking point. Junior developers also write crappy code that requires intense scrutiny. It’s not impossible (or irresponsible) to use code written by a junior in production, for the same reason. For a “good developer,” many of the quality problems are mitigated by putting roadblocks in place to…
- force close attention to edits as they are being written,
- facilitate handholding and constant instruction while the model is making decisions, and
- ensure thorough review at the time of design/writing/conclusion of the change.
When it comes to making safe and correct changes via LLM, specifically, I have seen plenty of “good developers” in real life, now, who have engineered their workflows to use AI cautiously like this.
Again, though, I share many of your concerns. I just think there’s nuance here and it’s not black and white/all or nothing.
While I appreciate your differentiated opinion, I strongly disagree. As long as there is no actual AI involved (and considering that humanity is dumb enough to throw hundreds of billions at a gigantic parrot, I doubt we would stand a chance to develop true AI, even if it was possible to create), the output has no reasoning behind it.
* it violates licenses and denies authorship and - if everyone was indeed equal before the law, this alone would disqualify the code output from such a model because it's simply illegal to use code in violation of license restrictions & stripped of licensing / authorship information
* there is no point. Developing code is 95-99% solving the problem in your mind, and 1-5% actual code writing. You can't have an algorithm do the writing for you and then skip on the thinking part. And if you do the thinking part anyways, you have gained nothing.
A good developer has zero need for non-deterministic tools.
As for potential use in brainstorming ideas / looking at potential solutions: that's what Usenet was good for, before those very corporations fucked it up for everyone; the same corporations are now force-feeding everyone snake oil that they pretend has any semblance of intelligence.
violates licenses
Not a problem if you believe all code should be free. Being cheeky but this has nothing to do with code quality, despite being true
do the thinking
This argument can be used equally well in favor of AI assistance, and it’s already covered by my previous reply
non-deterministic
It’s deterministic
brainstorming
This is not what a “good developer” uses it for
- you have no clue about licenses
- you have no clue what deterministic means
I can't keep you from doing what you want, but I will continue to view software developers using LLMs as script kiddies playing with fire.
You're pushing code to prod without pr's and code reviews? What kind of jank-ass cowboy shop are you running?
It doesn't matter if an llm or a human wrote it, it needs peer review, unit tests and go through QA before it gets anywhere near production.
That's exactly what I so often find myself saying when people show off some neat thing that a code bot "wrote" for them in x minutes after only y minutes of "prompt engineering". I'll say, yeah, I could also do that in y minutes of (bash scripting/vim macroing/system architecting/whatever), but the difference is that afterwards I have a reusable solution that I understand, that is automated, that is robust, and that didn't consume a ton of resources. And as a bonus I got marginally better as a developer.
It's funny that if you stuck them in an RPG and gave them an ability to "kill any level 1-x enemy instantly, but don't gain any xp for it", they'd all see it as the trap it is, but they can't see how that's what AI so often is.
I can at least kinda appreciate this guy's approach. If we assume that AI is a magic bullet, then it's not crazy to assume we, the existing programmers, would resist it just to save our own jobs. Or we'd complain because it doesn't do things our way, but we're the old way and this is the new way. So maybe we're just being whiny and can be ignored.
So he tested it to see for himself, and what he found was that he agreed with us, that it's not worth it.
Ignoring experts is annoying, but doing some of your own science and getting first-hand experience isn't always a bad idea.
Calling a turd a diamond neither makes it sparkle, nor does it get rid of the stink.
I can't just call everything snake oil without some actual measurements and tests.
Naive cynicism is just as naive as blind optimism
I can’t just call everything snake oil without some actual measurements and tests.
With all due respect, you have not understood the basic mechanic of machine learning and the consequences thereof.
Terrible take. Thanks for playing.
It’s actually impressive the level of downvotes you’ve gathered in what is generally a pretty anti-ai crowd.
I am for sure not a coder as it has never been my strong suit, but I am without a doubt an awesome developer or I would not have a top rated multiplayer VR app that is pushing the boundaries of what mobile VR can do.
The only person who will have to look at my code is me so any and all issues be it my code or AI code will be my burden and AI has really made that burden much less. In fact, I recently installed Coplay in my Unity Engine Editor and OMG it is amazing at assisting not just with code, but even finding little issues with scene setup, shaders, animations and more. I am really blown away with it. It has allowed me to spend even less time on the code and more time imagineering amazing experiences which is what fans of the app care about the most. They couldn’t care less if I wrote the code or AI did as long as it works and does not break immersion. Is that not what it is all about at the end of the day?
As long as AI helps you achieve your goals and your goals are grounded, including maintainability, I see no issues. Yeah, misdirected use of AI can lead to hard-to-maintain code down the line, but that is why you need a human developer in the loop to ensure the overall architecture and design make sense. Any code base can become hard to maintain if not thought through, be it human- or AI-written.
Look, bless your heart if you have a successful app, but success / sales is not exclusive to products of quality. Just look around at all the slop that people buy nowadays.
As long as AI helps you achieve your goals and your goals are grounded, including maintainability, I see no issues.
Two issues with that
1) what you are using has nothing whatsoever to do with AI, it's a glorified pattern repeater - an actual parrot has more intelligence
2) if the destruction of entire ecosystems for slop is not an issue that you see, you should not be allowed anywhere near technology (as by now probably billions of people)
I do not understand the point you are making about my particular situation, as I am not making slop. Plus, one person's slop is another's treasure. What exactly are you suggesting? The two issues you outlined seem like they are directed at someone else, perhaps?
- I am calling it AI as that is what it is called, but you are correct, it is a pattern predictor
- I am not creating slop but something deeply immersive and enjoyed by people. In terms of the energy used, I am on solar and run local LLMs.
I didn't say your particular application that I know nothing about is slop, I said success does not mean quality. And if you use statistical pattern generation to save time, chances are high that your software is not of good quality.
Even solar energy is not harvested waste-free (chemical energy and production of cells). Nevertheless, even if it were, you are still contributing to the spread of slop and harming other people. Both through spreading acceptance of a technology used to harm billions of people for the benefit of a few, and through energy and resource waste.
Some small companies benefit from the senior experience of these kinds of executives but don't have the money or the need to hire one full time. A fraction of the time they are C suite for various companies.
Sooo… he works multiple part-time jobs?
Weird how a forced technique of the ultra-poor is showing up here.
The thing with being cocky is, if you are wrong it makes you look like an even bigger asshole
en.wikipedia.org/wiki/AlphaFol…
The program uses a form of attention network, a deep learning technique that focuses on having the AI identify parts of a larger problem, then piece it together to obtain the overall solution.
Cool, now do an environmental impact on the data centre hosting your instance while you pollute by mindlessly talking shit on the Internet.
I'll take AI unfolding proteins over you posting any day.
Hilarious. You’re comparing a lemmy instance to AI data centers. There’s the proof I needed that you have no fucking clue what you’re talking about.
“bUt mUh fOLdeD pRoTEinS,” said the AI minion.
While this is a popular sentiment, it is not true, nor will it ever be true.
AI (LLMs & agents in the coding context, in this case) can serve as both a tool and a crutch. Those who learn to master the tools will gain benefit from them, without detracting from their own skill. Those who use them as a crutch will lose (or never gain) their own skills.
Some skills will in turn become irrelevant in day-to-day life (as is always the case with new tech), and we will adapt in turn.
That this is and will be abused is not in question. 😛
You are making a leap though.
Great article, brave and correct. Good luck, though, getting buy-in from the same leaders who blindly believe in a magical trend for this or next quarter's numbers; they don't care about things a year away, let alone 10.
I work in HR and was struck by the parallel with management jobs being gutted by major corps starting in the 80s and 90s during "downsizing", jobs they either never replaced or offshored. They had the Big 4 telling them it was the future of business. Know who is now providing consultation to them on why they have poor ops, processes, high turnover, etc.? Take $ on the way in, and on the way out. AI is just the next in a long line of smart people pretending they know your business while you abdicate knowing your business or employees.
Hope leaders can be a bit braver and wiser this go 'round so we don't get to a cliff's edge in software.
Exactly. The problem isn't moving part of production to some other facility or buying a part that you used to make in-house. It's abdicating an entire process that you need to be involved in if you're going to stay on top of the game long-term.
Claude Code is awesome but if you let it do even 30% of the things it offers to do, then it's not going to be your code in the end.
AI is really great for small apps. I've saved so many hours over weekends that would otherwise be spent coding a small thing I need a few times whereas now I can get an AI to spit it out for me.
But anything big and it's fucking stupid, it cannot track large projects at all.
Depends on how demanding you are about your application deployment and finishing.
Do you want that running on an embedded system with specific display hardware?
Do you want that output styled a certain way?
AI/LLM are getting pretty good at taking those few lines of Bash, pipes and other tools' concepts, translating them to a Rust, or C++, or Python, or what have you app and running them in very specific environments. I have been shocked at how quickly and well Claude Sonnet styled an interface for me, based on a cell phone snap shot of a screen that I gave it with the prompt "style the interface like this."
FWIW that's a good question but IMHO the better question is :
What kind of small things have you vibed out that you needed that didn't actually exist, or at least that you couldn't find after a 5-minute search on open source forges like Codeberg, GitLab, GitHub, etc?
Because making something quick that kind of works is nice... but why even do so in the first place if it's already out there, maybe maintained but at least tested?
Since you put such emphasis on "better": I'd still like to have an answer to the one I posed.
Yours would be a reasonable follow-up question if we noticed that their vibed projects are utilities already available in the ecosystem. 👍
people re-inventing the wheel because it’s “easier” than searching, without properly understanding the cost of the entire process.
A good LLM will do a web search first and copy its answer from there...
So if it can be vibe coded, it's pretty much certainly already a "thing", but with some awkwardness.
Maybe what you need is a combination of two utilities, maybe the interface is very awkward for your use case, maybe you have to make a tiny compromise because it doesn't quite match.
Maybe you want a little utility to do stuff with media. Now you could navigate your way through ffmpeg and mkvextract, which together handles what you want, with some scripting to keep you from having to remember the specific way to do things in the myriad of stuff those utilities do. An LLM could probably knock that script out for you quickly without having to delve too deeply into the documentation for the projects.
It's certainly a use case that LLM has a decent shot at.
Of course, having said that I gave it a spin with Gemini 3 and it just hallucinated a bunch of crap that doesn't exist instead of properly identifying capable libraries or frontending media tools....
But in principle and upon occasion it can take care of little convenience utilities/functions like that. I continue to have no idea though why some people seem to claim to be able to 'vibe code' up anything of significance, even as I thought I was giving it an easy hit it completely screwed it up...
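For reference, the kind of hand-rolled media wrapper being described a few comments up is only a couple of subprocess calls. A rough sketch with made-up filenames, track numbers and timestamps (check your mkvtoolnix/ffmpeg versions for exact syntax):

```python
# Pull a subtitle track with mkvextract, then cut a clip with ffmpeg.
import subprocess

def extract_subs(mkv: str, track: int, out_srt: str) -> None:
    # mkvextract <file> tracks <track-id>:<output>
    subprocess.run(["mkvextract", mkv, "tracks", f"{track}:{out_srt}"], check=True)

def cut_clip(src: str, start: str, end: str, out: str) -> None:
    # Stream copy (-c copy) keeps it fast; no re-encode.
    subprocess.run(
        ["ffmpeg", "-i", src, "-ss", start, "-to", end, "-c", "copy", out],
        check=True,
    )

if __name__ == "__main__":
    extract_subs("episode.mkv", 2, "episode.srt")
    cut_clip("episode.mkv", "00:12:30", "00:14:00", "clip.mkv")
```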
Having used both Gemini and Claude.... I use Gemini when I need to quickly find something I don't want to waste time searching for, or I need a recipe found and then modified to fit what I have on hand.
Every time I've used Gemini for coding it has ended in failure. It constantly forgets things, forgets what version of a package you're using so it tells you to do something deprecated; it was hell. I had to hold its hand the entire time and talk to it like it's a stupid child.
Claude just works. I use Claude for so many things both chat and API. I didn't care for AI until I tried Claude. There's a whole whack of novels by a Russian author I like but they stopped translating the series. Claude vibe coded an app to read the Russian ebooks, translate them by chapter in a way that prevented context bleed. I can read any book in any language for about $2.50 in API tokens.
I think it really depends on the user and how you communicate with the AI. People are different, and we communicate differently. But if you're precise and you tell it what you want, and what your expected result should be it's pretty good at filling in the blanks.
I can pull really useful code out of Claude, but ask me to think up a prompt to feed into Gemini for video creation and they look like shit.
The type of problem in my experience is the biggest source of different results
Ask for something that is consistent with very well-trodden territory, and it has a good shot. However, if you go off the beaten path to where it really can't credibly generate code, it generates anyway: making up function names, file paths, REST URLs and attributes, and whatever else sounds good and consistent with the prompt, but with no connection to real stuff.
It's usually not that it does the wrong thing because it "misunderstood"; it's usually that it produces very appropriate-looking code, consistent with the request, that has no link to reality, and there's no recognition of when it has invented something non-existent.
If it's a fairly milquetoast web UI manipulating a SQL backend, it tends to chew through that more reasonably (though in various results that I've tried, it screwed up a fundamental security principle; once I saw it suggest a weird custom certificate validation and disable default validation while transmitting sensitive data, before ever meaningfully executing the custom validation).
I tried using Gemini 3 for OpenSCAD, and it couldn't slice a solid properly to save its life, I gave up on it after about 6 attempts to put a 3:12 slope shed roof on four walls. Same job in Opus 4.5 and I've got a very nicely styled 600 square foot floor plan with radiused 3D concrete printed walls, windows, doors, shed roof with 1' overhang, and a python script that translates the .scad to a good looking .svg 2D floorplan.
I'm sure Gemini 3 is good for other things, but Opus 4.5 makes it look infantile in 3D modeling.
I'll put it this way: LLMs have been getting pretty good at translation over the past 20 years. Sure, human translators still look down their noses at "automated translations" but, in the real world, an automated translation gets the job done well enough most of the time.
LLMs are also pretty good at translating code, say from C++ to Rust. Not million line code bases, but the little concepts they can do pretty well.
On a completely different tack, I've been pretty happy with LLM generated parsers. Like: I've got 1000 log files here, and I want to know how many times these lines appear. You've got grep for that. But, write me a utility that finds all occurrences of these lines, reads the time stamps, and then searches for any occurrences of these other lines within +/- 1 minute of the first ones.... grep can't really do that, but a 5 minute vibe coded parser can.
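A sketch of that kind of parser, assuming plain-text logs with an ISO-ish timestamp at the start of each line (the patterns, paths and format are placeholders, not anyone's real logs):

```python
# Find every occurrence of PATTERN_A across a pile of log files, then report
# any PATTERN_B lines that land within +/- 1 minute of each hit.
import glob
import re
from datetime import datetime, timedelta

PATTERN_A = re.compile(r"connection reset")   # the lines you're anchoring on
PATTERN_B = re.compile(r"retry scheduled")    # the lines you want nearby
TS = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})")
WINDOW = timedelta(minutes=1)

def parse(path):
    """Yield (timestamp, line) for every line with a parseable timestamp."""
    with open(path, errors="replace") as fh:
        for line in fh:
            m = TS.match(line)
            if m:
                yield datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S"), line.rstrip()

entries = [e for path in glob.glob("logs/*.log") for e in parse(path)]
anchors = [(ts, line) for ts, line in entries if PATTERN_A.search(line)]

for ts, line in anchors:
    nearby = [l for t, l in entries
              if PATTERN_B.search(l) and abs(t - ts) <= WINDOW]
    print(f"{line}\n  {len(nearby)} related line(s) within +/- 1 minute")
    for l in nearby:
        print(f"    {l}")
```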
Open an issue to explain why it's not enough for you? If you can make a PR for it that actually implements the things you need, do it?
My point isn't to say everything is already out there and perfectly fits your need, only that a LOT is already out there. If we all re-invent the wheel in our own corner, it's basically impossible to learn from each other.
These are the principles I follow:
indieweb.org/make_what_you_nee…
indieweb.org/use_what_you_make
I don’t have time to argue with FOSS creators to get my stuff in their projects, nor do I have the energy to maintain a personal fork of someone else’s work.
It’s much faster for me to start up Claude and code a very bespoke system just for my needs.
I don’t like web UIs nor do I want to run stuff in a Docker container. I just want a scriptable CLI application.
Like I just did a subtitle translation tool in 2-3 nights that produces much better quality than any of the ready made solutions I found on GitHub. One of which was an *arr stack web monstrosity and the other was a GUI application.
Neither did what I needed in the level of quality I want, so I made my own. One I can automate like I want and have running on my own server.
Depends on the “app”.
A full ass Lemmy client? Nope.
A subtitle translator or a RSS feed hydrator or a similar single task “app”? Easily and I’ve done it many times already.
I don’t have time to argue with FOSS creators to get my stuff in their projects
So much this. Over the years I have found various issues in FOSS and "done the right thing" submitting patches formatted just so into their own peculiar tracking systems according to all their own peculiar style and traditions, only to have the patches rejected for all kinds of arbitrary reasons - to which I say: "fine, I don't really want our commercial competitors to have this anyway, I was just trying to be a good citizen in the community. I've done my part, you just go on publishing buggy junk - that's fine."
And if the maintainer doesn't agree to merge your changes, what do you do then?
You have to build your own project, where you get to decide what gets added and what doesn't.
There have been some articles published positing that AI coding tools spell the end for FOSS because everybody is just going to do stuff independently and don't need to share with each other anymore to get things done.
I think those articles are short sighted, and missing the real phenomenon that the FOSS community needs each other now more than ever in order to tame the LLMs into being able to write stories more interesting than "See Spot run." and the equivalent in software projects.
I built a MAL clone using AI, nearly 700 commits of AI. Obviously I was responsible for the quality of the output and reviewing and testing that it all works as expected, and leading it in the right direction when going down the wrong path, but it wrote all of the code for me.
There are other MAL clones out there, but none of them do everything I wanted, so that's why I built my own project. It started off as an inside joke with a friend, and eventually materialized as an actual production-ready project. It's limited more by design of the fact that it relies on database imports and delta edits rather than the fact that it was written by AI, because that's just the nature of how data for these types of things tend to work.
making something quick that kind of works is nice… but why even do so in the first place if it’s already out there, maybe maintained but at least tested?
In a sense, this is what LLMs are doing for you: regurgitating stuff that's already out there. But... they are "bright" enough to remix the various bits into custom solutions. So there might already be a NWS API access app example, and a Waveshare display example, and so on, but there's not a specific example that codes up a local weather display for the time period and parameters you want to see (like, temperature and precipitation every 15 minutes for the next 12 hours at a specific location) on the particular display you have. Oh, and would you rather build that in C++ instead of Python? Yeah, LLMs are actually pretty good at remixing little stuff like that into things you're not going to find exact examples of ready to your spec.
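A minimal sketch of that kind of remix, hitting the public api.weather.gov endpoints; the field names here are from memory and the display-hardware side is left out entirely, so treat it as illustrative only:

```python
# Grab the hourly forecast for a point from the NWS API and print the next
# 12 hours of temperature/precipitation (which a display loop could then draw).
import requests

LAT, LON = 44.98, -93.27          # made-up location
HEADERS = {"User-Agent": "hobby-weather-display (example@example.com)"}

# The points endpoint tells you which forecast URLs cover this location.
point = requests.get(
    f"https://api.weather.gov/points/{LAT},{LON}", headers=HEADERS, timeout=10
).json()
hourly_url = point["properties"]["forecastHourly"]

forecast = requests.get(hourly_url, headers=HEADERS, timeout=10).json()
for period in forecast["properties"]["periods"][:12]:
    temp = period["temperature"]
    precip = (period.get("probabilityOfPrecipitation") or {}).get("value")
    print(f'{period["startTime"]}: {temp}°{period["temperatureUnit"]}, '
          f'precip {precip if precip is not None else "?"}%')
```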
I don't really agree, I think that's kind of a problem with approaching it. I've built some pretty large projects with AI, but the thing is, you have to approach it the same way you should be approaching larger projects to begin with - you need to break it down into smaller steps/parts.
You don't tell it "build me an entire project that does X, Y, Z, and A, B, C", you have to tackle it one part at a time.
I cannot understand and debug code written by AI. But I also cannot understand and debug code written by me.
Let's just call it even.
As an experiment I asked Claude to manage my git commits: it wrote the messages, kept a log, archived excess documentation, and worked really well for about 2 weeks. Then, as the project got larger, the commit process was taking longer and longer to execute. I finally pulled the plug when the automated commit process, which had performed flawlessly for dozens of commits and archives, accidentally and irretrievably lost a batch of work: it messed up the archive process and deleted the work without archiving it first, and didn't commit it either.
AI/LLM workflows are non-deterministic. This means: they make mistakes. If you want something reliable, scalable, repeatable, have the AI write you code to do it deterministically as a tool, not as a workflow. Of course, deterministic tools can't do things like summarize the content of a commit.
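A toy example of that "tool, not workflow" idea: a deterministic commit helper the AI could have written once instead of being asked to drive git itself (the message format and behavior here are arbitrary choices, not a recommendation):

```python
# Deterministic commit helper: same repository state in, same commit message out.
# No model in the loop at commit time.
import subprocess

def run(*args: str) -> str:
    return subprocess.run(["git", *args], check=True,
                          capture_output=True, text=True).stdout

def main() -> None:
    status = run("status", "--porcelain").splitlines()
    if not status:
        print("nothing to commit")
        return
    # Build the message purely from the change list, sorted so it's repeatable.
    files = sorted(line[3:] for line in status)
    message = f"update {len(files)} file(s): " + ", ".join(files[:5])
    if len(files) > 5:
        message += ", ..."
    run("add", "-A")
    run("commit", "-m", message)
    print(message)

if __name__ == "__main__":
    main()
```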
The longer the project the more stupid Claude gets. I've seen it both in chat, and in Claude code, and Claude explains the situation quite well:
Increased cognitive load: Longer projects have more state to track - more files, more interconnected components, more conventions established earlier. Each decision I make needs to consider all of this, and the probability of overlooking something increases with complexity.
Git specifically: For git operations, the problem is even worse because git state is highly sequential - each operation depends on the exact current state of the repository. If I lose track of what branch we're on, what's been committed, or what files exist, I'll give incorrect commands.
Anything I do with Claude I will split into different chats; I won't give it access to git, but I will provide it an updated repository via Repomix. I get much better results because of that.
Yeah, context management is one big key. The "compacting conversation" hack is a good one, you can continue conversations indefinitely, but after each compact it will throw away some context that you thought was valuable.
The best explanation I have heard for the current limitations is that there is a "context sweet spot" for Opus 4.5 that's somewhere short of 200,000 tokens. As your context window gets filled above 100,000 tokens, at some point you're at "optimal understanding" of whatever is in there, then as you continue on toward 200,000 tokens the hallucinations start to increase. As a hack, they "compact the conversation" and throw out less useful tokens getting you back to the "essential core" of what you were discussing before, so you can continue to feed it new prompts and get new reactions with a lower hallucination rate, but with that lower hallucination rate also comes a lower comprehension of what you said before the compacting event(s).
Some describe an aspect of this as the "lost in the middle" phenomenon since the compacting event tends to hang on to the very beginning and very end of the context window more aggressively than the middle, so more "middle of the window" content gets dropped during a compacting event.
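As a toy illustration (and explicitly not Anthropic's actual compaction algorithm, which isn't public in this detail), a "keep the ends, summarize the middle" pass looks something like this, which is exactly why mid-conversation details are the first thing to vanish:

def compact(messages: list[str], keep_head: int = 2, keep_tail: int = 4) -> list[str]:
    # Keep the start and end of the transcript verbatim, collapse the middle
    # into one summary line. Whatever detail lived in the middle is now gone.
    if len(messages) <= keep_head + keep_tail:
        return messages
    middle = messages[keep_head:-keep_tail]
    summary = "[summary of {} earlier messages: {} ...]".format(
        len(middle), "; ".join(m[:30] for m in middle[:3]))
    return messages[:keep_head] + [summary] + messages[-keep_tail:]

history = ["msg {}: some detail only mentioned here".format(i) for i in range(20)]
print(compact(history))   # msgs 0-1 and 16-19 survive intact; msgs 2-15 become one blurry line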
I also cannot understand and debug code written by me.
So much this. I look back at stuff I wrote 10 years ago and shake my head, console myself that "we were on a really aggressive schedule." At least in my mind I can do better, in practice the stuff has got to ship eventually and what ships is almost never what I would call perfect, or even ideal.
I'm just not following the mindset of "get AI to code your whole program" and then have real people maintain it. Sounds counterproductive.
I think you need to write your code for an AI to maintain. Use static code analysers like SonarQube to ensure that the code is maintainable (low cognitive complexity) and that functions are small and well defined as you write it.
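For what "low cognitive complexity" means in practice, here's the kind of before/after that analysers such as SonarQube tend to reward - the exact scoring rules differ, but flattening nesting into early returns is the general shape:

def discount_nested(user, cart_total):
    # Three levels of nesting: a reader has to track every branch at once.
    if user is not None:
        if user.get("active"):
            if cart_total > 100:
                return cart_total * 0.9
            else:
                return cart_total
        else:
            return cart_total
    else:
        return cart_total

def discount_flat(user, cart_total):
    # Same behaviour, early returns: reads top to bottom, one condition at a time.
    if not user or not user.get("active"):
        return cart_total
    if cart_total <= 100:
        return cart_total
    return cart_total * 0.9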
I don't think we should be having the AI write the program in the first place. I think we're barreling towards a place where remotely complicated software becomes a lost technology
I don't mind if AI helps here and there, I certainly use it. But it's not good at custom fit solutions, and the world currently runs on custom fit solutions
AI is like no code solutions. Yeah, it's powerful, easier to learn and you can do a lot with it... But eventually you will hit a limit. You'll need to do something the system can't do, or something you can't make the system do because no one properly understands what you've built
At the end of the day, coding is a skill. If no one is building the required experience to work with complex systems, we're going to be swimming in an endless ocean of vibe-coded legacy apps in a decade
I just don't buy that AI will be able to take something like a set of state regulations and build a compliant outcome. Most of our base digital infrastructure is like that, or it uses obscure ancient systems that LLMs are basically allergic to working with
To me, we're risking everything on achieving AGI (and using it responsibly) before we run out of skilled workers, and we're several game changing breakthroughs from achieving that
I think we’re barreling towards a place where remotely complicated software becomes a lost technology
I think complicated software has been an art more than a science, for the past 30 years we have been developing formal processes to make it more of a procedural pursuit but the art is still very much in there.
I think if AI authored software is going to reach any level of valuable complexity, it's going to get there with the best of our current formal processes plus some more that are being (rapidly) developed specifically for LLM based tools.
But eventually you will hit a limit. You’ll need to do something...
And how do we surpass those limits? Generally: research. And for the past 20+ years where do we do most of that research? On the internet. And where were the LLMs trained, and what are they relatively good at doing quickly? Internet research.
At the end of the day, coding is a skill. If no one is building the required experience to work with complex systems
So is semiconductor design, application of transistors to implement logic gates, etc. We still have people who can do that, not very many, but enough. Not many people work in assembly language anymore, either...
So is semiconductor design, application of transistors to implement logic gates, etc. We still have people who can do that, not very many, but enough. Not many people work in assembly language anymore, either...
Yeah, that's a lost tech. We still use the same decades-old, even century-old, frameworks.
They're not perfect. But they are unchangeable. We no longer have the skills to adapt them to modern technology. Improvements are incremental; despite decades of effort you still can't reliably run a system on something like RISC-V.
I’ve made full-ass changes on existing codebases with Claude
It’s a skill you can learn, pretty close to how you’d work with actual humans
What full ass changes have you made that can't be done better with a refactoring tool?
I believe Claude will accept the task. I've been fixing edge cases in a vibe colleague's full-ass change all month. Would have taken less time to just do it right the first time.
True that LLMs will accept almost any task, whether they should or not. True that their solutions aren't 100% perfect every time. Whether it's faster to use them or not I think depends a lot on what's being done, and what alternative set of developers you're comparing them with.
What I have seen across the past year is that the number of cases where LLM based coding tools are faster than traditional developers has been increasing, rather dramatically. I called them near useless this time last year.
I just did three tasks purely with Claude - at work.
All were pretty much me pasting the Linear ticket to Claude and hitting go. One got some improvement ideas on the PR so I said “implement the comments from PR 420” and so it did.
These were all on a codebase I haven’t seen before.
The magic sauce is that I’ve been doing this for a quarter century and I’m pretty good at reading code and I know if something smells like shit code or not. I’m not just YOLOing the commits to a PR without reading first, but I save a ton of time when I don’t need to do the grunt work of passing a variable through 10 layers of enterprise code.
pretty close to how you’d work with actual humans
That has been my experience as well. It's like working with humans who have extremely fast splinter skills, things they can rip through in 10 minutes that might take you days, weeks even. But then it also takes 5-10 minutes to do some things that you might accomplish in 20 seconds. And, like people, it's not 100% reliable or accurate, so you need to use all those same processes we have developed to help people catch their mistakes.
It’s good at writing it, ideally 50-250 lines at a time
I find Claude Sonnet 4.5 to be good up to 800 lines at a chunk. If you structure your project into 800ish line chunks with well defined interfaces you can get 8 to 10 chunks working cooperatively pretty easily. Beyond about 2000 lines in a chunk, if it's not well defined, yeah - the hallucinations start to become seriously problematic.
The new Opus 4.5 may have a higher complexity limit, I haven't really worked with it enough to characterize... I do find Opus 4.5 to get much slower than Sonnet 4.5 was for similar problems.
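For anyone curious what an "800-line chunk with a well defined interface" can look like, here's a stripped-down sketch: each chunk only has to satisfy a small, explicit contract, so the model (or a human) can work on one chunk without holding the whole app in context. All the names here are made up for illustration:

from typing import Protocol

class ForecastSource(Protocol):
    def hourly(self, lat: float, lon: float) -> list[dict]: ...

class Renderer(Protocol):
    def draw(self, periods: list[dict]) -> None: ...

class ConsoleRenderer:
    # One small "chunk": it knows nothing about where the data comes from.
    def draw(self, periods: list[dict]) -> None:
        for p in periods:
            print(p.get("startTime"), p.get("temperature"))

def run(source: ForecastSource, renderer: Renderer, lat: float, lon: float) -> None:
    # The glue layer only talks to the contracts, never to chunk internals.
    renderer.draw(source.hourly(lat, lon))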
Okay, but if it's writing 800 lines at once, it's making design choices. Which is all well and good for a one off, but it will make those choices, make them a different way each time, and it will name everything in a very generic or very eccentric way
The AI can't remember how it did it, or how it does things. You can do a lot to work around that, even with techniques that haven't entered commercial products yet, like vectorized data stores that catalog key details and remind the LLM of them when appropriate
2000 lines is nothing. My main project is well over a million lines, and the original author and I have to meet up to discuss how things flow through the system before changing it to meet the latest needs
But we can and do change it to meet the needs of the customer, with high stakes, because we wrote it. These days we use AI to do grunt work, and we have junior devs who do smaller tweaks.
If an AI is writing code a thousand lines at a time, no one knows how it works. The AI sure as hell doesn't. If it's 200 lines at a time, maybe we don't know details, but the decisions and the flow were decided by a person who understands the full picture
I don't know shit about anything, but it seems to me that the AI already thought it gave you the best answer, so going back to the problem for a proper answer is probably not going to work. But I'd try it anyway, because what do you have to lose?
Unless it gets pissed off at being questioned, and destroys the world. I've seen more than a few movies about that.
You are in a way correct. If you keep sending the context of the "conversation" (in the same chat) it will reinforce its previous implementation.
The way ais remember stuff is that you just give it the entire thread of context together with your new question. It's all just text in text out.
But once you start a new conversation (meaning you don't give any previous chat history) it's essentially a "new" ai which didn't know anything about your project.
This will have a new random seed, and if you ask it to look for mistakes etc. it will happily tell you that the last implementation was all wrong and here's how to fix it.
It's like a Minecraft world: the same seed will get you the same map every time. With AIs it's roughly the same thing. Start a new conversation or ask a different model (GPT, Google, Claude, etc.) and it will do things in a new way.
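That "it's all just text in, text out" point, in code: the only reason a chat model seems to remember your project is that the client resends the entire transcript every turn. The chat() function below is a stand-in for whatever API you use (the OpenAI-style message dicts are just the common convention), not a real client:

def chat(messages: list[dict]) -> str:
    # Stand-in: send the full message list to your model API, return its reply text.
    return "(model reply based on {} messages of context)".format(len(messages))

history = [{"role": "user", "content": "Here is my project layout: ..."}]
reply = chat(history)                                    # the model "knows" the project
history += [{"role": "assistant", "content": reply},
            {"role": "user", "content": "Now review the auth module."}]
reply = chat(history)                                    # still sees everything above

fresh = [{"role": "user", "content": "Review the auth module."}]
chat(fresh)   # brand-new "AI": no idea which project you mean, happy to call it all wrong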
Maybe the solution is to keep sending the code through various AI requests, until it either gets polished up, or gains sentience, and destroys the world. 50-50 chance.
This stuff ALWAYS ends up destroying the world on TV.
Seriously, everybody is complaining about the quality of AI product, but the whole point is for this stuff to keep learning and improving. At this stage, we're expecting a kindergartener to produce the work of a Harvard professor. Obviously, we're going to be disappointed.
But give that kindergartener time to learn and get better, and they'll end up a Harvard professor, too. AI may just need time to grow up.
And frankly, that's my biggest worry. If it can eventually start producing results that are equal or better than most humans, then the Sociopathic Oligarchs won't need worker humans around, wasting money that could be in their bank accounts.
And we know what their solution to that problem will be.
This stuff ALWAYS ends up destroying the world on TV.
TV is also full of infinite free energy sources. In the real world warp drive may be possible, you just need to annihilate the mass of Jupiter with an equivalent mass of antimatter to get the energy necessary to create a warp bubble to move a small ship from the orbit of Pluto to a location a few light years away, but on TV they do it every week.
your team of AIs keeps running circles
Depending on your team of human developers (and managers), they will do the same thing. Granted, most LLMs have a rather extreme sycophancy problem, but humans often do the same.
We haven't yet gotten to AIs that will tell you that what you ask is impossible.
If it's a problem like under- or over-constrained geometry or equations, they (the better ones) will tell you. For difficult programming tasks I have definitely had the AIs bark up all the wrong trees trying to fix something until I gave them specific direction for where to look for a fix (very much like my experiences with some human developers over the years.)
I had a specific task that I was developing in one model, and it was a hard problem but I was making progress and could see the solution was near, then I switched to a different model which did come back and tell me "this is impossible, you're doing it wrong, you must give up this approach" up until I showed it the results I had achieved to-date with the other model, then that same model which told me it was impossible helped me finish the job completely and correctly. A lot like people.
AI already thought it gave you the best answer, so going back to the problem for a proper answer is probably not going to work.
There's an LLM concept/parameter called "temperature" that determines basically how random the answer is.
As deployed, LLMs like Claude Sonnet or Opus have a temperature that won't give the same answer every time, and when you combine this with feedback loops that point out failures (like compilers that tell the LLM when its code doesn't compile), the LLM can (and does) pull the old Beckett: try, fail, try again, fail again, fail better next time - and usually reach a solution that passes all the tests it is aware of.
The problem is: with a context window limit of 200,000 tokens, it's not going to be aware of all the relevant tests in more complex cases.
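The try/fail/try-again loop described above, sketched out; generate() and run_tests() are toy placeholders rather than any vendor's API, but the shape is the point: non-zero temperature makes each attempt different, and the compiler/test output becomes context for the next one:

def generate(prompt: str, feedback: str, temperature: float = 0.7) -> str:
    # Toy stand-in for a model call; a real one would sample differently each time.
    return "def solution(): ...  # attempt informed by: " + feedback

def run_tests(code: str) -> tuple[bool, str]:
    # Toy stand-in for the deterministic feedback tool (compiler, test runner).
    passed = "AssertionError" in code   # pretend it passes once feedback was incorporated
    return passed, "" if passed else "AssertionError: expected 4, got None"

def solve(prompt: str, max_attempts: int = 5):
    feedback = ""
    for _ in range(max_attempts):
        code = generate(prompt, feedback)
        passed, errors = run_tests(code)
        if passed:
            return code        # passes every test it is *aware of*, nothing more
        feedback = errors      # "fail better": the next attempt sees what broke
    return None

print(solve("write the thing"))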
To quote your quote:
I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.
I think the author just independently rediscovered "middle management". Indeed, when you delegate the gruntwork under your responsibility, those same people are who you go to when addressing bugs and new requirements. It's not on you to effect repairs: it's on your team. I am Jack's complete lack of surprise. The idea that you can rely on AI to do nuanced work like this and arrive at the exact correct answer to the problem is naive at best. I'd be sweating too.
The problem, though (with AI compared to humans): the human team learns, i.e. at some point they probably know what the mistake was and avoid doing it again.
With AI instead of humans: well, maybe the next model, or a different one, will fix it... maybe.
And what is very clear to me after trying to use these models, the larger the code-base the worse the AI gets, to the point of not helping at all or even being destructive.
Apart from dissecting small isolatable pieces of independent code (i.e. keep the context small for the AI).
Humans likely get slower with a larger code-base, but they (usually) don't arrive at a point where they can't progress any further.
Humans likely get slower with a larger code-base, but they (usually) don’t arrive at a point where they can’t progress any further.
Notable exceptions like: peimpact.com/the-denver-intern…
Lessons Learned: The Denver International Airport Automated Baggage-Handling System - PEimpact - Recognizing the impact of PEs
The Denver International Airport (DIA) is renowned for its iconic tent-like structure, but it is also infamous in engineering and project management circles for its ambitious yet flawed automated baggage-handling system. PEimpact JH (PE Impact)
Same thing would happen if they were a non-coder project manager or designer for a team of actual human programmers.
Stuff done, shipped and working.
“But I can’t understand the code 😭”, yes. You were the project manager; why should you?
I think the point is that someone should understand the code. In this case, no one does.
Big corporations have been pushing for outsourcing software development for decades; how is this any different? Can you always recall your outsourced development team for another round of maintenance? An LLM may actually be more reliable and accessible in the future.
If you outsource you could at least sue them when things go wrong. Good luck doing that with AI.
Plus you can own the code if a person does it.
If you outsource you could at least sue them when things go wrong.
Most outsourcing consultants I have worked with aren't worth the legal fees to attempt to sue.
Plus you can own the code if a person does it.
I'm not aware of any ownership issues with code I have developed using Claude, or any other agents. It's still mine, all the more so because I paid Claude to write it for me, at my direction.
Nobody is asking it to (except freaks trying to get news coverage.)
It's like compiler output - no, I didn't write that assembly code, gcc did, but it did it based on my instructions. My instructions are copyright by me, the gcc interpretation of them is a derivative work covered by my rights in the source code.
When a painter paints a canvas, they don't record the "source code" but the final work is also still theirs, not the brush maker or the canvas maker or paint maker (though some pigments get a little squirrely about that...)
My instructions are copyright by me
First, how much that is true is debatable. Second, that doesn't matter as far as the output. No one can legally own that.
First, how much that is true is debatable.
It's actually settled case law. AI does not hold copyright any more than spell-check in a word processor does. The person using the AI tool to create the work holds the copyright.
Second, that doesn’t matter as far as the output. No one can legally own that.
Idealistic notions aside, this is no different than PIXAR owning the Renderman output that is Toy Story 1 through 4.
Cloudflare, AWS, and other recent major service outages are what come to mind re: AI code. I’ve no doubt it is getting forced into critical infrastructure without proper diligence.
Humans are prone to error so imagine the errors our digital progeny are capable of!
There's no point telling it not to do x, because as soon as you mention x it goes into its context window.
It has no filter. It's as if you had no choice in your actions and just had to act on every thought that came into your head; if you were told not to do a thing, you would immediately start thinking about doing it.
I’ve noticed this too, it’s hilarious(ly bad).
Especially with image generation, which we were using to make some quick avatars for a D&D game. “Draw a picture of an elf.” Generates images of elves that all have one weird earring. “Draw a picture of an elf without an earring.” Great, now the elves have even more earrings.
There’s no point telling it not to do x, because as soon as you mention x it goes into its context window.
Reminds me of the Sonny Bono high speed downhill skiing problem: don't fixate on that tree, if you fixate on that tree you're going to hit the tree, fixate on the open space to the side of the tree.
LLMs do "understand" words like not, and don't, but they also seem to work better with positive examples than negative ones.
Even worse, the ones I’ve evaluated (like Claude) constantly fail to even compile because, for example, they mix usages of different SDK versions. When instructed to use version 3 of some package, it will add the right version as a dependency but then still code with missing or deprecated APIs from the previous version that are obviously unavailable.
More time (and money, and electricity) is wasted trying to prompt it towards correct code than simply writing it yourself and then at the end of the day you have a smoking turd that no one even understands.
LLMs are a dead end.
constantly fail to even compile because, for example, they mix usages of different SDK versions
Try an agentic tool like Claude Code - it closes the loop by testing the compilation for you, and fixing its mistakes (like human programmers do) before bothering you for another prompt. I was where you are at 6 months ago, the tools have improved dramatically since then.
From TFS > I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.
That sounds like a "fractional CTO problem" to me (IMO a fractional CTO is a guy who convinces several small companies that he's a brilliant tech genius who will help them make their important tech decisions without actually paying full-time attention to any of them. Actual tech experience: optional.)
If you have lost confidence in your ability to modify your own creation, that's not a tools problem - you are the tool, that's a you problem. It doesn't matter if you're using an LLM coding tool, or a team of human developers, or a pack of monkeys to code your applications, if you don't document and test and formally develop an "understanding" of your product that not only you but all stakeholders can grasp to the extent they need to, you're just letting the development run wild - lacking a formal software development process maturity. LLMs can do that faster than a pack of monkeys, or a bunch of kids you hired off Craigslist, but it's the exact same problem no matter how you slice it.
The LLM comparison to a team of human developers is a great example. But like outsourcing your development, LLM is less a tool and more just delegation. And yes, you can dig in deep to understand all the stuff the LLM is delegated to do the same as you can get deeply involved with a human development team to maintain an understanding. But most of the time, the sell is that you can save time - which means you aren't expecting to micro manage your development team.
It is a fractional CTO problem, but the actual issue is that developers are being pushed to become fractional CTOs through LLM use, because they are being measured against expected productivity increases that leave little time for understanding.
the sell is that you can save time
How do you know when salespeople (and lawyers) are lying? It's only when their lips are moving.
developers are being pushed to become fractional CTOs through LLM use, because they are being measured against expected productivity increases that leave little time for understanding.
That's the kind of thing that works out in the end. Like outsourcing to Asia, etc. It does work for some cases, it can bring sustainable improvements to the bottom line, but nowhere near as fast or easy or cheaply as the people selling it say.
... as long as the giant corpos paying through the nose for the data centers continue to vastly underprice their products in order to make us all dependent on them.
Just wait till everyone's using it and the prices will skyrocket.
Personally I tried using LLMs for reading error logs and summarizing what's going on. I can say that even with somewhat complex errors, they were almost always right and very helpful. So basically the general consensus of using them as assistants within a narrow scope.
Though it should also be noted that I only did this at work. While it seems to work well, I think I'd still limit such use in personal projects, since I want to keep learning more, and private projects are generally much more enjoyable to work on.
Another interesting use case I can highlight is using a chatbot as documentation when the actual documentation is horrible. However, this only works within the same ecosystem, so for instance Copilot with MS software. Microsoft definitely trained Copilot on its own stuff and it's often considerably more helpful than the docs.
A Turning Point for Cuban Soccer
[from weekly newsletter about #Cuba (with YouTube video links) from the #BellyOfTheBeast #news / #video collective][their videos can also be found at: peertube.wtf/c/cuba/_botb/_vid…]
groups.io/g/cubanews/message/4…
Cuba has long been known as a baseball powerhouse. But #soccer is on the rise, especially among young people: back in October, the island's Under-20 national men's team earned its first-ever point in a #WorldCup. In a new video, BotB sits down with players to talk about what this achievement means to them and the future of Cuban soccer.
peertube.wtf/w/qeiuMsrGW6REyB8…
Also:
- Cuba condemns #US attempt to close #Venezuela airspace
- #Trump halts immigration processing for people from 19 countries — including Cuba
- #Florida hardliners pressure Supreme Court ahead of Havana Docks case review
#EndTheBlockadeEmbargo
#LetCubaLive
#EndSanctionsAgainstCuba
#AbajoElBloqueo
#LatinAmerica #Caribbean
#politics #USpol #football #futbol #sports
@cuba
Extremely slow boot time
Is this the right place to ask for help? Or is there another place? Anyways, feel free to delete this post if I'm in the wrong spot.
I use Pop OS on an Asus. Something has happened where I either have a 10-minute-plus boot time, or it doesn't boot at all. I have reinstalled Pop OS twice (and used recovery mode) and even took it into a computer shop to see if there was something wrong with my hardware (there isn't). When I first do a new install it will restart fine, but then the next day it will either take over 8 minutes to load, or it will be stuck on boot.
Right now it is stuck on boot. I can get into a live USB stick just fine. I have done systemd-analyze blame, and it didn't give me any helpful information. I have the same issue even if I press the space bar and boot into an old kernel.
I should note that my computer has encryption enabled.
Any help would be awesome.
All hail the other linux noobs out there!
"systemd-analyze blame didn't give me any helpful information
And what exactly did it give you?
Could you copy-paste the output of that command (also known as "stdout")?
EDIT: It seems that you made the same post 2 times. Ideally, you should delete one of them.
Post journalctl -b0 and systemd-analyze blame results from after a successful boot. I have broken and fixed my own systems countless ways, so maybe I'll spot something.
Thanks, can you please give me the output of
journalctl -b0 -u systemd-modules-load
I'm curious why it's taking 30s. Maybe the other two services as well.
The dmesg you posted is very truncated, just a screenful of info. You can usually pipe command output to curl with these pastebin sites. I understand if you're concerned about sensitive info in dmesg though.
j@pop-os:~$ journalctl -b0 -u systemd-modules-load
Dec 07 12:45:50 pop-os systemd-modules-load[614]: Inserted module 'lp'
Dec 07 12:45:50 pop-os systemd-modules-load[614]: Inserted module 'ppdev'
Dec 07 12:45:50 pop-os systemd-modules-load[614]: Inserted module 'parport_pc'
Dec 07 12:45:50 pop-os systemd-modules-load[614]: Inserted module 'msr'
Dec 07 12:45:50 pop-os systemd-modules-load[614]: Inserted module 'kyber_iosched'
Dec 07 12:45:50 pop-os systemd[1]: Finished Load Kernel Modules.
Try journalctl -b0 -p4 to show only high-priority messages. That would help too.
j@pop-os:~$ journalctl -b0 -p4
Dec 07 12:45:20 pop-os kernel: #1 #3 #5 # (full output on Pastebin.com)
It's very hard to decipher. The lines are right-truncated, like you just copy-pasted from the terminal (the lines end in > which is less's sigil for "more content to the right"). You can make a pastebin from command output. To capture any command as a paste, try
journalctl -b0 -p4 | curl -s -F "content=<-" https://dpaste.com/api/v2/
The part after the | comes from the dpaste API: you can put anything before the | to capture its output to dpaste. Check it for sensitive information first!
From what I can see though, your NVMe is behaving strangely. It may be related to power-saving settings. Try these settings from the Arch wiki:
wiki.archlinux.org/title/Solid…
Do you boot from the NVMe?
journalctl -b0 -p4 | curl -s -F "content=<-" https://dpaste.com/api/v2/
That captures the output from journalctl -b0 -p4 and sends it to dpaste.com. It will print out a URL to the result. Give that a try.
Based on your systemd output, it looks like the system is taking a long while to decrypt your drive. Is it a spinning disk, or an SSD?
I'm not sure if the PC repair shop specifically checked your drive, but it might be worth swapping out for another. Or maybe run some speed tests and/or diagnostics to see if there's something funky going on.
You could also try an unencrypted install to see if the problem persists.
like this
TVA likes this.
I'm agreeing with other people; there's probably a drive issue that the shop didn't catch.
On my machine, those two services that take 30 seconds for you do not take nearly that long for me. dev-mapper-DebianVolume\x2dDebianMain.device (which is equivalent to dev-mapper-data\x2droot.device; our drives are just called different things) only takes 1.074 seconds for me, while lvm2-monitor.service only takes 357 milliseconds.
I've only ever seen Linux boots take this long when either a drive failed or I accidentally formatted a drive that's in my fstab, causing it to fail to mount and eventually landing me in a recovery shell. At that point, I'd either use the recovery shell or a USB to edit the fstab.
Next time you boot in, check to see if all your drives are showing up, check disk health, etcetera. Also, although this likely won't solve the problem, check that your drive connections are well-seated.
Check your disk usage with df -h
When my machine gets weird it's always out of disk space.
The pincer movement of authoritarianism: Europe is under pressure from Trump & Putin at a crossroads
Share
They once formed opposing poles of the political world order, but today the US and Russia speak almost the same language – especially when it comes to Europe.
The fact that the government of Donald Trump, of all people, speaks of censorship of free speech in Europe, while imposing draconian penalties on universities, firing employees who display rainbow flags, denigrating the free press as “enemies of the people,” calling female journalists who ask questions “pigsties,” and actively promoting disinformation technologies—this demonstrates the perfidy of the argument.
In a reversal of perpetrator and victim typical of modern authoritarian movements, the US is now blaming European governments for the poor relations with Russia.
Full article in German: riffreporter.de/de/internation…
English version of the full article, available for download as a PDF:
The pincer movement of authoritarianismDownload
Share
The pincer movement of authoritarianism: Europe is under pressure from Trump and Putin at a crossroads
Commentary: In its new foreign-policy strategy, the US declares war on the EU in language much like that of Putin's Russia. Yet Europe's leaders are still sleepwalking instead of acting quickly and decisively. Christian Schwägerl (RiffReporter)
New US security strategy aligns with Russia's vision, Moscow says
New US Security Strategy aligns with Russia's vision, Moscow says
The Kremlin welcomes the starkly worded document, which does not cast Russia as a threat to the US. Rachel Muller-Heyndyk (BBC News)
essell likes this.
When Musk joined Trump, countries rolled out the red carpet for Starlink
On April 7, Muhammad Yunus, chief adviser of Bangladesh, sent Trump an urgent letter. He listed all the ways that his country was trying to comply with Trump’s agenda and asked him to delay tariffs. The note included a curious addition: “We have executed the necessary steps to launch Starlink in Bangladesh.”
Since Starlink launched its first satellites in 2019, the internet provider owned by billionaire Elon Musk has attempted to expand into markets around the world, often facing regulatory red tape in doing so. But with Musk playing a high-profile role in Trump’s White House from January through May, Yunus and other leaders seemed to recognize that accommodating Starlink could be one means of appeasing the new administration.
The same day Yunus sent his letter, Starlink applied for a license with the Bangladesh Telecommunication Regulatory Commission. Three weeks after Yunus’ letter to Trump, the BTRC approved Starlink’s application. The service launched in Bangladesh the following month.
Bangladesh became the latest country around the world to expedite its regulatory approval process for satellite internet providers while Musk took part in Trump’s second administration. During the first five months of the year — as Musk assumed his lead role in the Department of Government Efficiency — Starlink announced it had become available in at least 13 countries, while its applications were approved in two more. In the six months since Musk broke ties with the administration, Starlink announced its entry into an additional 13 countries, totalling at least 26 countries in 2025.
In some cases, Starlink found quick success in countries it sought to enter for the first time. In others, Starlink’s applications had stalled for years until they were suddenly greenlit.
How Starlink benefited from Elon Musk’s Trump ties - Rest of World
The satellite internet service cut through red tape to enter new countries while Musk led DOGE. Kate Bubacz (Rest of World)
4 reasons Plex is turning into the thing it replaced
4 reasons Plex is turning into the thing it replaced - Android Authority
Plex was once the go-to media server, but growing restrictions and paywalls are pushing users away. Here's why you should consider switching. Karandeep Singh (Android Authority)
like this
Scrollone, originalucifer, IAmLamp, KaRunChiy, RaoulDuke85, deliriousdreams, ammorok, giantpaper, yessikg, hpx9140 e TheFederatedPipe like this.
reshared this
Technology reshared this.
Sure, but you also don't have the option to use those features because they don't exist in jellyfin.
In my plex instance, I have discover enabled, and enabled all the streaming services so that discover is populated with all the movies and shows available. Then I have an automation setup so I can search in discover for a movie, and add it to my watchlist, and my automation will automatically download that movie and add it to my library.
I can do it right from my couch, and it's WAF approved. Using those bloat features against them, in a way.
But it's just as easy to turn those all off if one doesn't want to utilize them. I'd be annoyed if they forced them on permanently, but that's not what Plex does; yet they sure get a lot of hate for just having those features.
That's a feature I wouldn't want in mine, for example.
I just want my stuff and only mine.
But hey: everyone's gotta choose their own. And if you're happy, who am I to judge.
Sure, so you open settings and simply disable those extras. Then you have a nice clean ui with only your libraries. It even cleans up the app when disabled so there's only home and libraries tabs. Nothing more.
I think many people aren't aware all the extras have disable options in plex. Essentially turning it back into plex from years ago.
IMO: Not the point.
Essentially the same discussion here: lemmy.dbzer0.com/comment/23054…
Sure, you could do it by turning 5 switches and 2 knobs but mine just does what I tell it to.
(And I don't have to pay for remote access or HWA)
It just works differently, in a way that requires more hands-on work
That's correct. But you chose to ignore the instructions because you are used to a different way of doing it, and then you get duplicate entries.
That's it (shrug).
Great, now how do you deal with 30 remote streamers when your IP changes? Do you have to set up extra knobs just to get remote streaming to work? Are your apps refined or still buggy?
I'd personally rather deal with 5 options to turn off in settings than deal with all those extra steps and drawbacks. You really seem to have a huge hate bone for plex, so enjoy your choice, and move on.
I needed it anyway so why not use it also there. ¯\_(ツ)_/¯
I'm going to call it like I saw it, a very long time ago.
You have a product that is basically purpose built to make data hoarding and piracy practical, yet it requires a login with a central service. I don't care what justification anyone thinks makes that worthwhile or even a good compromise. Signaling to any corporate entity that you're in possession of such a thing is a bad idea to begin with. They shouldn't even know you exist. That information, along with anything else you do with the product is compromising to you and can be sold for money if aggregated with everyone else's data.
If you find this rant out of place in our modern world, I'd like to point to the concept of shifting baselines. This didn't used to be normal and nothing short of greed continues the behavior. The technology before this ran/runs without anyone knowing. Consider VLC, or XBMC.
I already have to expose my Plex Media Server with a Tailscale funnel (for IPv4 only); for IPv6 I use my Synology NAS reverse proxy, which can be accessed globally.
I have been running this setup for so long now that I forgot whether I can access my PMS from outside without either of those solutions lol (I am CGNATed, but IPv6 works fine as stated).
The main thing here is that I don't need my users to do anything, they just open the app and access it, no need to remember IPs/URLs or install VPNs to my server... Is that possible with Jellyfin as well?
Thanks, that clears everything up for me...
Now if you could set that URL from the server itself and not the client apps... Certainly I don't think that's an impossible task.
For Tailscale, I previously used this, but you need to add the Jellyfin port after the Tailscale IP.
Tailscale. You don't even need it on the client device, you can get a gl.inet travel router that'll do the work.
Edit: i’M nOt GoInG tO aLl Of My FrIeNdS aNd FaMiLiEs HoUsEs AnD sWaPpInG oUt ThEiR rOuTeRs🤡🤡🤡
Edit 2: people who don't know wtf a travel router does or how to use it, or how nat works at all, but are more than willing to sound off about what they don't know, keep replying because you're helping me keep my feed free of dipshits. ❤️
Can you fly out to my MIL every time her router breaks and fix it for her?
Edit: holy shit, your edits are insane
Tailscale is woefully impractical, as is setting up travel routers. You're adding so much unnecessary complexity that has the chance to fail and frustrate them even more. Doubly so for anyone an appreciable distance from you (having tried this before, it's just not worth it for me - about the 3rd time their tailscale client lost my network I was done with it). And not everyone wants to buy hardware to setup a remote streaming platform for blue hairs, because that also adds to the administrative complexity of the setup.
But feel free to continue your childish tantrum about how people don't understand why your genius ideas are really super great.
Once Jellyfin can do that or something similar
Once Jellyfin does that then it'll be time to look at jumping ship to something else, because that'll be the indication that Jellyfin is going down the same road as Plex.
They changed their ToS a while ago; the only thing they have in there now is boilerplate stuff about not hosting pirated content.
cloudflare.com/en-gb/terms/
You agree not to, and not to allow third parties to use the Services to ... post, transmit, store or link to any files, materials, data, text, audio, video, images or other content that infringe on any person’s intellectual property rights or that are otherwise unlawful;
I just set up a cache rule to ignore my jellyfin subdomain and they won't ever care about me and my half dozen users.
Self-Serve Subscription Agreement
This page is for those who are interested in our Self-Serve Subscription Agreement. www.cloudflare.com
Oh weird. I would guess a transcoding issue, maybe double check those settings to make sure you have the right config for your hardware.
Theres also Infuse, its a video player that supports jellyfin, but I think some features are behind a premium purchase.
Infuse App - App Store
Download Infuse by Firecore, LLC on the App Store. See screenshots, ratings and reviews, user tips and more games like Infuse. App Store
For the love, as a Plex alternative, they don’t even have a native app on all major tv stores. It should be a P1 feature.
Are you really bitching this hard about a completely free and open source project?
It's not technology or finances that kill most FOSS projects and burn out the devs. It's this kind of shitty entitled unappreciative demanding attitude from users.
As others have pointed out, there are fully functional and good quality frontends available, such as Swiftfin.
I maintain open source projects too, and I fully understand the burnout, the pressure from supporters and such.
What I was saying is they can do better from a project management perspective. Otherwise I love their work :3
Swiftfin is buggy atm, as I said in my other comment.
I maintain open source projects too, and I fully understand the burnout, the pressure from supporters and such.
Then you should know better than most that your wording and approach matters.
Just to think of replacing the mount points in the docker container from Plex to Jellyfin (in order for it to read my Riven and Decypharr symlinks) scares me... Mostly because after I finish a docker project my mind seems to go blank lol.
At least they still kinda honour the Plex Pass lifetime users...
That’s more on you than Plex, though, right? Like do you get mad at Walmart or Home Depot because you bought a tool you never use, or don’t use as frequently as you thought you would?
Not defending Plex, I’m just curious.
EDIT: I realize your post referenced pounds as currency, but I don’t know the equivalent stores on that side of the pond. Been 20 years since I was in London! Apologies.
The writing was on the wall when they started getting American VC money.
American VC culture is anathema to truly user-focused products.
like this
HeerlijkeDrop likes this.
Playing devil's advocate, I understand one point of pressure: Plex doesn’t want to be perceived as a “piracy app.”
See: Kodi. kodi.expert/kodi-news/mpaa-war…
To be blunt, that’s a huge chunk of their userbase. And they run the risk of being legally pounded to dust once that image takes hold.
So how do they avoid that? Add a bunch of other stuff, for plausible deniability. And it seems to have worked, as the anti-piracy gods haven’t singled them out like they have past software projects.
To be clear, I'm not excusing Plex. But I can sympathize.
like this
yessikg likes this.
comparitech.com/kodi/kodi-pira…
digital-digest.com/news-64644-…
Based on our research, comparative search volume for “Kodi” has fallen around 85 percent from 2017 to 2022. Google Trends data reveals the dramatic decline started in Q2 of 2017 and has, for the most part, continued that trend up to this point. Consequently, the decline in people searching for Kodi directly relates to the appearance of the coordinated attack against piracy in the form of ACE.
And this is with Kodi furiously distancing itself from pirates at the time.
Attacks don’t have to be direct. Though they absolutely can be, too.
Kodi in steep decline after introduction of anti-piracy steps
Following several anti-piracy efforts in 2017, Kodi piracy is now seeing a sharp decrease, as is almost all search traffic related to Kodi. Sam Cook (Comparitech)
From their blog post about it:
An unauthorized third party accessed a limited subset of customer data from one of our databases. While we quickly contained the incident, information that was accessed included emails, usernames, securely hashed passwords and authentication data. Any account passwords that may have been accessed were securely hashed, in accordance with best practices, meaning they cannot be read by a third party.
The passwords were hashed and, I'm inferring from their language, salted per-user as well. Assuming a reasonable length password (complexity doesn't matter much here, what we want is entropy) it would take a conventional (i.e. not quantum) computer tens to hundreds of millions of years to crack one user's password.
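The back-of-the-envelope arithmetic behind that kind of estimate, with the assumptions spelled out (the post doesn't say which hash Plex uses, so the guess rate against a salted, deliberately slow hash is an assumption on my part):

alphabet = 70            # upper/lower/digits plus some symbols
length = 12              # assume a reasonably long random password
guesses_per_second = 10_000_000   # assumed rate against a salted, slow hash (GPU rig)

keyspace = alphabet ** length                 # ~1.4e22 possibilities
seconds_per_year = 60 * 60 * 24 * 365
expected_years = keyspace / 2 / guesses_per_second / seconds_per_year

print(f"keyspace: {keyspace:.2e}")
print(f"expected time to crack one password: {expected_years:.2e} years")
# Roughly 2e7 years under these assumptions - "tens of millions" - and salting
# means the attacker pays that cost per account, not once for everyone.
# A short or human-chosen password collapses the number dramatically.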
… my personal Jellyfin server (nor anything else on it) has been hacked…
And I’ve never been attacked by a bear while wearing my goose feather headdress.
If you have a static IP, or dynamic DNS set up, you can set up your own remote access with a reverse proxy like nginx. The nice thing is I get to use my own SSL certificate and all the actual streaming goes directly to my server, not through their proxies.
The only "hacky" part about it is that the Admin dashboard shows "Not available outside your network", even though everything works perfectly.
That serves the purpose too. It’s harder to pin Plex as an “illegal distribution service” when you have to pay for access. Either the streamer or “distributor” can’t be very anonymous, which makes large scale sharing impractical.
On the other hand, the more money they squeeze out, the more they risk appearing as if they “make money from piracy,” which is exactly how you get the MPAA’s attention.
There is that but it’s primarily that they’ve taken over 40 million dollars of venture capital. They are almost certainly under immense pressure to turn profitable asap and converting lifetime pass users into revenue streams somehow, converting new users into SaaS, etc are going to be things they pursue more aggressively.
Don’t take the devil’s money if you don’t want the devil’s stipulations
like this
giantpaper likes this.
If Jellyfin was easier to use and had the same options as Plex
Just guessing here, but I think it just might.
Individual user accounts, so multiple people can use the same device without needing to log into a new account each time. For example, User A watches a show on the TV. Then User B turns on the TV, and has to log in to be able to access their own watch history. Then User A returns, and has to log back into their account.
Braindead remote access. I use a reverse proxy so it’s not a need for me, but plenty of people don’t understand how to properly set something like that up.
Single Sign On. It flies in the face of what Jellyfin stands for, because it would require a centralized authentication server that everyone’s servers phone home to. Just like Plex. With Plex, you log into one account, and can see all of your available servers, because they’re all tied to the same account. With Jellyfin, every server requires its own authentication, because there is no central server to manage all of the “Account XYZ has access to libraries A, B, and C” stuff. If I want to watch something, I can’t easily just search all of my servers at once; I need to individually log into and search each one to see if it has the content I want to watch.
Not for me. Before Plex I was browsing folders on my TV and I actually had to organize everything, plus find and download matching subtitles. It sucked so much.
I got into self hosting because of Plex and ran it on a 2015 Shield (both the server and the player) for ~8 years. Just moved the server to another machine this year.
Still happy premium user.
like this
giantpaper likes this.
3 things stop me from using Jellyfin 100% of the time.
1) TV tuner is janky and loading a guide for local channels is garbage. I like watching the morning local news and jellyfin just does not cut it.
2) I want sub accounts. They used to have something similar but took it out for security reasons. I want to log all my TVs into one account but then have each user select their profile. So I can easily have a restricted profile for, say, kids, then another for my parents, then one for me, then one for my SO, all under the same roof. It will track each person's watch history, so when someone watches ahead it doesn't mess with someone else's.
3) On the same note, controller/HTPC remote configs feel janky. I know it's there but it's not as smooth and easy as Plex. This goes along with the above, for anyone who says just make another account. You try entering half-decent passwords with small HTPC remotes or controllers every time you go to watch TV.
If they could fix these things I would ditch Plex all the way. But as it stands I use Plex for my TV and jellyfin for my phones, tablets, PC.
TV tuner is janky and loading a guide for local channels is garbage. I like watching the morning local news and jellyfin just does not cut it.
I DVR local stuff with Plex and play it back in Jellyfin.
Jellyfin has local channels? Why don't you just watch local channels?
Does Plex have local channels? Seems like that is a use case that doesn't make any sense to me.
Stremio
At a glance, it looks like it requires signing up with their service, which means they can track everything I do. No, thanks. I'll stick with Jellyfin.
Sure, you can disable a lot of features from the home page, but even the remaining bits push you toward Plex’s ecosystem with things like recommendations. And I’ve even seen people complaining about needing to re-disable promotional content after updates. It’s simply a shady business.
Edit: It's just occurred to me that he might literally be referring to the Recommended tab on your home page - which you only have to interact with by choice.
If anyone would care to tell me where I'm being pushed towards Plex's ecosystem I'd love to understand what the flying fuck he's talkin about. The only thing I could find that could generously be called part of the Plex "ecosystem" are the social features. Does it give more "ads" if you have a free account or something? Also I've had a server for 15 years and I've never had to re-do my customization from an update.
You mean this part?
"Sure, you can disable a lot of features from the home page, but even the remaining bits push you toward Plex’s ecosystem with things like recommendations. And I’ve even seen people complaining about needing to re-disable promotional content after updates. It’s simply a shady business"
If so I've definitely experienced that where all of a sudden the damn tab is re-enabled by itself. And it's not even "disabled" it's just not the default selection anymore. I can still see it down there.
I believe I experienced what they called "re-disable promotional content after an update." Everything was reset and my media was hidden, with only their streaming options available. Similarly, when setting up a new Chromecast, it only had their streaming content and I had to hide their content and unhide mine.
I seem to remember there being some weaselly link that would re-enable their content after it was disabled too.
Generously, they're providing more content and a way to support the development of the product through ads. But all the changes and the way they're happening show me a picture of a company with changing priorities. So I tend to agree with the sentiment of the author.
I'm not sure I've ever used it, but according to Wikipedia, ad videos started in 2019, live TV in 2020, and rentals in 2024. During that time it's become more and more intrusive, now replacing your media entirely out of the box.
That means for 10 of its 16 years, software purchases and software subscriptions were its bread and butter; since then it has grown into different revenue streams. It's still software, but now it's ad-based revenue streams, adding more and more fees. You might say it's growing into the thing it was supposed to replace: corporate cable and streaming services.
I have both but Jellyfin is not good with duplicates. Having several versions of movies in different languages just puts multiple copies of the movies in Jellyfin, with no distinction between them until you click into the details. Plex does this well with "Play version".
But Plex is worse for other reasons, on my LG TV. It's painfully slow and doesn't play the correct audio track that I select.
Looking back at this thread. Jellyfin does let you select both versions and combine them into one. Then you can keep seeding to your heart's content.
I don't use that feature often, but have a couple movies that use it.
You just need to name them correctly (too lazy to link the docs. Just look up versions in media library)
That's what I mean. You have to rename them. Plex handles this automatically, with the same shared library. I wish Jellyfin was better at this.
Jellyfin goes by file name, Plex goes by identified movie/show. Much better.
Well... they state in their docs how it should be.
If you deviate from it, that's on you.
And yes, it'd be nice if they did it automagically, but we can't have everything. Honestly, I don't expect it from them, as that is really a very niche requirement considering it already works.
If you deviate from it, that's on you.
I don't understand why we need to "pin it" on someone?
It just works differently, in a way that requires more hands-on work, as opposed to no hands-on work. So it's objectively worse. That's "on me"?
It being in the docs is irrelevant in this context. It could've been there or not. But the fact that I need to do extra work as opposed to not makes Plex more comfortable in this regard, and I don't see how that's up for debate.
If Jellyfin had done its duplication check on identified movie IDs instead of filesystem names, we would be in a different situation. But they don't, and here we are.
I'm not ragging on Jellyfin, I'm just pointing out facts. Not even an opinion piece.
It just works differently, in a way that requires more hands-on work
That's correct.
But you chose to ignore the instructions because you are used to a different way of doing it, and then you get duplicate entries.
That's it (shrug).
Why are you still trying to blame this on the user, lol?
If the user has to do more work for the same result, it's a worse system. Period.
That's it. 🤷♂️
To go into more detail:
How did I choose to ignore instructions when I didn't read them in the first place? Neither system's installation instructions has this in it. You'd have to deep dive when you realize it doesn't work for one of them. Namely Jellyfin.
"Choosing" to ignore it is also a matter of definition. If I rename all my shit, I am a) duplicating lots of downloads on my system because I need to keep the original in order to seed, or b) not able to seed and lose my ability to gain more content in the first place.
Sometimes people's circumstances are different from yours, my friend.
I understand Jellyfin is better in so many other aspects, I agree with that, but do not defend one single feature which works objectively worse and pin it on the user. Don't be that person.
Neither system's installation instructions has this in it. You'd have to deep dive when you realize it doesn't work for one of them
Bruh.
I can't any more. That's the worst take I have heard yet.
Yes, if it works automagically it's great. But if it aint, you just need to follow instructions. One just can't duct tape everything together
If I rename all my shit, I am a) duplicating lots of downloads on my system because I need to keep the original in order to seed, or b) not able to seed and lose my ability to gain more content in the first place.
Are you unable to hard link?
You can say what you want about my "take". All I'm saying is, I have to do work with one system, and I don't with the other one. If all other things were identical, which one would you choose, bruh??
I know about hard links and have used them before, they are fine. But it's still work I need to do in one system where it needn't be done in the other.
Is this so hard to understand for you? Leave me alone if so.
Get this: more work takes more time; less work, less time. I have little time, and I need to keep the file names intact, period.
I get that the instructions are there. There's instructions for Plex too that I've gone and read.
That's not the point.
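For what it's worth, the hard-link approach mentioned above is a one-liner per file: the torrent client keeps seeding the original name while Jellyfin sees a second name pointing at the same bytes, so nothing is duplicated on disk. The " - version" suffix is my recollection of the Jellyfin docs' convention for grouping versions, and the paths are made up, so double-check before scripting it:

import os

original = "/data/torrents/Movie.2021.2160p.NORDIC.x265.mkv"        # what the client seeds
library = "/data/media/Movies/Movie (2021)"                          # what Jellyfin scans
linked = os.path.join(library, "Movie (2021) - 4K Nordic.mkv")       # docs-style version name

os.makedirs(library, exist_ok=True)
if not os.path.exists(linked):
    # Hard link: same inode, two names, zero extra disk space. Only works
    # within one filesystem - use a copy or symlink across mount points.
    os.link(original, linked)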
Plex does this well with “Play version”.
It does it even better with "editions" support, at least for movies.
The problem I have with "play version" is that you can't really control which version is the default. Also it's kind of hidden in the menu. And when you do select the version it just shows you the resolution (which is useless if you have two versions with the same resolution but different languages).
Unless someone is already familiar with Plex, they probably won't find your different language version. But a custom "different language" edition of the movie will show up right below the extras.
Jellyfin have that?
No idea.
Unless someone is already familiar with Plex, they probably won't find your different language version.
I don't have this issue, but I agree it could be easier to see which version you are playing. I think it's supposed to be for very different quality versions, so one would be like 4K, then 1080p, then maybe 720p. But when you have one English 4K and one Nordic 4K, it's a 50-50 guessing game. It's easy to switch once you start playing though.
Still better than Jellyfin though, in this particular regard.
Bandwidth is free; as long as it doesn't get to the point where it's tanking my performance, I don't care. If people do start to abuse it I will bother to change it, but until then there's no reason to bother. Obviously I'm not giving the URL out here, because then it would immediately get hammered.
Security through obscurity is fine when the only thing you are securing against is a bit of an inconvenience and the benefit is its easy to give friends a URL to go to. But sure, if it became a problem I would probably look into something else.
For your use case its pretty much identical.
I prefer the Plex interface slightly. But I'd rather use open source.
It may very well have changed recently, or I could be misremembering, but the reason I switched over was being unable to play certain codecs/media types (types of HDR?) over a stream while transcoding on the host... unless I had a subscription.
Utter lunacy to want me to pay to convert on my own machine. I've since swapped to jellyfin, donated, and am happier for it (and the open source part is such an added plus).
I use Plex for audiobooks and TV shows primarily.
The fact that you can't (or at least can't easily) scan library files from Plexamp is utterly insane to me. Especially after they made audio libraries completely unavailable on the regular Plex app.
I'll probably switch to Audiobookshelf or something else down the line.
I hate headlines like this. I’d love to hear the REASONS WHY Plex are doing all of this. But no, it’s just “4 ways in which Plex now sucks” which we all know already.
Before someone says “the reason is money” we need to ask: do the developers of Jellyfin not use money? Why won’t the same thing just happen to them too?
Before someone says “enshittification,” we need to ask: does this mean Jellyfin will soon have the same problems?
We all seem to love Jellyfin so I think we need to understand the actual reason why, or this will just continue happening.
I hate headlines like this. I’d love to hear the REASONS WHY Plex are doing all of this.
- Greed... do you really need 3 more?
Before someone says “the reason is money” we need to ask: do the developers of Jellyfin not use money? Why won’t the same thing just happen to them too?
Plex is a private company wanting money... Jellyfin is a volunteer-driven effort.
Before someone says “enshittification,” we need to ask: does this mean Jellyfin will soon have the same problems?
Enshittification happens to privately developed products due to greed... Jellyfin is not a private company pushing a product for profit.
We all seem to love Jellyfin so I think we need to understand the actual reason why, or this will just continue happening.
Back to "greed"
As predicted, a one-dimensional answer.
Let’s say they want more money: they do have a healthy software subscriptions business. How can they get more by becoming the world’s tiniest streaming service? And won’t that cannibalize their subscriptions business as the experience gets shittier and shittier?
Some actual “whys” within this would be things like (made up, but for example)
1) the subscriptions business is dying - less than 1% of users ever buy a pass and efforts to increase that failed for (another reason here)
2) streaming services are dumping cash into viewer acquisition because a war is on for dominance in that space, and Plex is capitalizing on that
3) Plex has high overlap with gamers and is making good money on midroll gaming ads during these streams
4) Plex has legal concerns about facilitating piracy - this is the real reason why sync is shit and they killed watch together. They are desperately trying to pivot out of their old business before they get sued - OR all this streaming nonsense gives them a kind of fig leaf over that somehow
See, issues can be complex and interesting. Just calling them greedy is neither. How is this the greedy play, even?
Nobody outside Plex's finance department is going to have what you're looking for if those examples are anything to go by.
What it comes down to is they have $130M that investors are going to want back, and all the decisions they're making now are aimed at doing that. That doesn't mean any of those decisions are good or are going to work. It doesn't even mean they won't backfire and have the opposite effect.
opencollective.com/jellyfin
Plex took a significant amount of other people's money, to the tune of over 40 million dollars. The people who gave that money were not Kickstarter backers, donors, subscribers, etc., but investors, who have an expectation that Plex will move the company in a direction that makes it profitable enough to not only repay the 40+ million investment, but to then earn profits for a lengthy period (possibly in perpetuity), as they are stakeholders. This is the same thing that happened to Reddit (though Reddit's scale and timeline were FAR more vast), OpenAI, Google, literally every company ever, basically. Plex now has an obligation not just to continue development but to continue it in a way that maximizes growth and revenue, even if that is anti-consumer.
Jellyfin, on the other hand, has language on its contributions page that almost discourages financial support. This is because the only financial support it accepts is donations, which are clearly explained as supporting the free software and conferring no ownership stake. The software does not generate profit, and a donation does not equate to any kind of investment, other than supporting continued development. Expecting any kind of return on your part (again, other than the project continuing to move forward) is foolish. Lemmy is similar, as are many other FOSS projects. Jellyfin can remain ideologically true to its goals, and because it is free, if its users feel the lead developers are straying from them, they can fork it and make a "new, ideologically pure Jellyfin" (see XBMC to Plex to Emby to Jellyfin, or Lemmy's 938 forks, many of which are tweaks and some of which exist because people got beef with the main devs).
FAQs about Plex's funding and investors
Explore Plex's funding history with round-wise details, lead investors, and complete investor list.Tracxn
Further to this, I heard Cory Doctorow talk about open source licensing being a Ulysses Pact. Basically, Ulysses wanted to hear the sirens' song. Normally, hearing it would drive you mad and you would wreck upon the rocks. Ulysses ordered his men to plug their ears with wax so they would not be affected by the sirens' song. He also ordered them to tie him to the mast.
He knew that, in the moment, he would not be strong enough to resist the sirens' song, and because he was bound to the mast, he could not jump overboard. In the same way, people who use open source licenses on their projects are binding themselves to that license, so that if a large temptation were to present itself (such as investors wanting to give them life-changing money in exchange for mistreating their customers), they are already bound by the license and cannot break that bond.
Or they'll do what Plex did. Reminder that Plex started life as a fork of XBMC/Kodi for macOS. When their fork showed some popularity, they shifted development to various names (Plex Home Theater). While this still contained a lot of GPL code, they then spent a good deal of dev time rewriting said code to be fully closed source.
This is less discussed, but it's also why Plex is one of the most insidious and disgusting pieces of unethical software one can use. The writing is on the wall and the company is led by scumbags, sure, but people don't talk as much about how they forked XBMC, built a huge product based on everything learned from it, and then closed everything off once they'd done the minimum required cover-your-ass moves.
What they did is legal, but is it ethical? If they did it to a company like Apple or Microsoft, they'd get sued, that's for damn sure. And ethically speaking, I would say it's really fucked to take all this stuff from the community (architecture, ideas, UI/UX, approaches to plugin design, data modeling, etc.) and build a whole company off of it, then basically give nothing back. They closed it off so they could get their bag, fuck the community that taught them so much and helped build their MVP.
What you describe is similar to the creation of Jellyfin from Emby, though: Emby's dev team suddenly decided to close-source the GPL server code (a violation) and add monetization. The community rejected this and forked the last version prior to the nonsense into what is now Jellyfin.
Plex has been off limits to me for a long time. Just the fact that they require auth with their central service, for something I use for reasons rights holders would love to sue me into third-world poverty over (muh Linux ISOs), is enough reason.
Them demanding that auth hook into the server makes me uneasy about what sort of metadata they currently exfiltrate, or could later on, should they want to or be compelled to.
Whole thing stinks of willingly being part of a honeypot.
Nobody talking about Emby?
Why not? I haven't used it yet but it seems great too.
One reason: it's not FOSS, and because of that, it's not protected from the capitalist profit motive that's always pushing the creators/owners towards enshittification.
The same forces act upon FOSS too, but the difference is that FOSS has structural immunity built into it. If the software enshittifies, it can be forked and maintained by a community that values software freedom.
We've seen it happen time and again. Terraform, CentOS, RHEL, The Xen Hypervisor, etc. When companies try to take freedom away from FOSS, they fail, because their users and maintainers are empowered by FOSS licenses (especially restrictive ones like the GPL) and can fight back.
With proprietary software, the users are powerless, only the owners have control.
Don't trust promises, good intentions, or corporate slogans. Trust free software and the open ecosystems they thrive in.
PS, Jellyfin is amazing ❤️
What Trade War? China’s Export Juggernaut Marches On
China’s Global Exports Continue to Grow Despite Trump Tariffs
As Trump has imposed steep tariffs on China, American importers are buying much less. But China has offset the decline from the United States with breathtaking speed.Agnes Chang (The New York Times)
A Journalist Reported From Palestine. YouTube Deleted His Account Claiming He’s an Iranian Agent.
In February 2024, without warning, YouTube deleted the account of independent British journalist Robert Inlakesh.
His YouTube page featured dozens of videos, including numerous livestreams documenting Israel’s military occupation of the West Bank. In a decade covering Palestine and Israel, he had captured video of Israeli authorities demolishing Palestinian homes, police harassing Palestinian drivers, and Israeli soldiers shooting at Palestinian civilians and journalists during protests in front of illegal Israeli settlements. In an instant, all of that footage was gone.
YouTube declined to provide evidence to support its claim that Inlakesh was acting as an Iranian agent, stating that the company doesn't discuss how it detects influence operations. Inlakesh remains unable to make new Google accounts, preventing him from sharing his video journalism on the largest English-language video platform.
Inlakesh, now a freelance journalist, acknowledged that from 2019 to 2021 he worked from the London office of the Iranian state-owned media organization Press TV, which is under U.S. sanctions. Even so, Inlakesh said that should not have led to the erasure of his entire YouTube account, the vast majority of which was his own independent content that was posted before or after his time at Press TV.
A Journalist Reported From Palestine. YouTube Deleted His Account Claiming He’s an Iranian Agent.
YouTube offered conflicting explanations for deleting the account of Robert Inlakesh, who covered Israel’s occupation of the West Bank.Jonah Valdez (The Intercept)
Your Party's moment is now or never
Your Party's moment is now or never
The new leftist party can get past its chaotic launch, but it needs rank-and-file leadersMiddle East Eye
Fun fact: MEE is based in the UK.
Also this article was probably the most comprehensive one of the mess I've read so far.

ryokimball
in reply to bridgeenjoyer:
If you do not sync your bookmarks and such, and you do not install malicious plugins, then this information is as safe as your device itself is.
Firefox claims to use E2EE, so it should be pretty safe to use their built-in sync as well.
It's open source, so if you feel like verifying all this, or compiling it on your own, you can.
Or you can use a completely separate instance or browser (perhaps Iceweasel or even the Tor Browser) just for the activity you want to keep separate.
atomicbocks
in reply to ryokimball:
GitHub - mozilla-services/syncstorage-rs: Sync Storage server in Rust (GitHub)
cecilkorik
in reply to bridgeenjoyer:
I recommend Librewolf. It's a lot more privacy-aggressive out of the box, and you can turn that down a little if you need to, but otherwise it's just a more trustworthy Firefox fork as far as I'm concerned. It supports Firefox Sync as well (which is telling, because Librewolf takes privacy very seriously and isn't going to provide too many easy opportunities for you to completely compromise it). Like the other person said, sync is E2EE and the hosting server has zero knowledge of any of your unencrypted data. If Librewolf trusts it, I trust it, and I think you can rest assured that with Librewolf it's probably never going to be sabotaged either, which, as you imply, is not necessarily true with Firefox.
I don't recall whether they use Firefox's sync server directly or if they have their own, but either way, like I said, the server has no knowledge of or access to your unencrypted data.
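To illustrate the zero-knowledge claim: with client-side encryption the key never leaves your devices, so the sync server only ever stores ciphertext. This is just a minimal sketch of that idea, not Mozilla's actual Sync protocol (which derives keys from your account credentials and is considerably more involved):

import json
from cryptography.fernet import Fernet

# Illustration only, NOT Mozilla's real Sync protocol: the key is created
# (or derived) on the client and is never uploaded anywhere.
key = Fernet.generate_key()
client = Fernet(key)

bookmarks = [{"title": "Lemmy", "url": "https://lemmy.world"}]

# This ciphertext is all the sync server ever sees or stores.
ciphertext = client.encrypt(json.dumps(bookmarks).encode())

# Only a device holding the key can get the bookmarks back.
restored = json.loads(client.decrypt(ciphertext).decode())
assert restored == bookmarks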
null_dot
in reply to bridgeenjoyer:
No, it's not unwise. Mozilla has no mechanism built into the browser with which to surveil your activities.
That said, you should avoid categorising companies as generally trustworthy or untrustworthy. Any given service will have privacy considerations - some may be important to you, others may not.