[Announcement] The Last of the Druids Livestream Twitch Drops
Watch The Last of the Druids live reveal at www.twitch.tv/pathofexile on Thursday, December 4th (PST) and you'll be able to earn the Verdant Wilds Dodge Roll Effect with Twitch Drops!
How to Participate
Simply link your Path of Exile account to Twitch (see below) and tune into GGG Live at www.twitch.tv/pathofexile, or any channel in the Path of Exile 2 Directory for 45 minutes.
Start Time: December 4th 11:00 AM PST
End Time: December 5th 11:00 AM PST
You will get the Verdant Wilds Dodge Roll Effect after 45 minutes of accumulated watch time on any channel with drops enabled streaming Path of Exile 2 during the event. This means that the drop is guaranteed for everyone who has watched any Path of Exile stream for this amount of time. This promotion is available for all accounts.
If you're planning to stream Path of Exile 2 during our livestream and want to enable Twitch Drops for your viewers, you can do it via your Twitch Creator Dashboard here.
Linking your Path of Exile Account to Twitch
Visit your Twitch Settings page while logged in. If your account isn't connected, click the "Connect" button for Twitch under "Other Connections". Complete the process on Twitch and you will be redirected back to your Twitch Settings page. If your account is already connected, this page should say "Your Path of Exile account is currently linked to your Twitch account."
After you've accumulated enough watch time to earn your Verdant Wilds Dodge Roll Effect, you must redeem it from your Twitch Inventory before the promotional period ends. Then it will be immediately available in your microtransactions list in Path of Exile 2 and will later be made available as a Quicksilver Flask effect in Path of Exile. The Verdant Wilds Dodge Roll Effect will be available for purchase from the store at a later date.
We can't wait to share the details of The Last of the Druids with you! To stay up to date in the lead-up to the livestream, be sure to check out our Twitter, Facebook and Forums. We'll see you at GGG Live at www.twitch.tv/pathofexile!
Advent Calendar 1
Zen Mischief Photographs
This year for our Advent Calendar we have a selection of my photographs from recent years. They may not be technically the best, or the most recent, but they’re ones which, for various reasons, I rather like.
Austen graves in Tenterden Churchyard
© Keith C Marshall, 2014
Click the image for a larger view
John Lee Hooker — The Healer (1989)
"The best-selling blues album of all time, by one of the greatest bluesmen still around today," read the record's advertising at the end of the nineties, shortly before his death in 2001 at eighty-four. The term "blues" is probably more abused than used on this record, which, honestly, I have listened to ad nauseam for its immediacy and its listenability, but certainly not for any markedly blues sound. With this record, the most courted guitar in rock, alongside Muddy Waters, and also the only one to come out and... silvanobottaro.it/archives/373…
Listen to the album: album.link/s/7dX5RVwG4Bdw13xrC…
Home – Identità Digitale. I'm on: Mastodon.uno - Pixelfed - Feddit
The Healer by John Lee Hooker
Listen now on your favorite streaming service. Powered by Songlink/Odesli, an on-demand, customizable smart link service to help you share songs, albums, podcasts and more. (Songlink/Odesli)
[Research] At least 80 million inconsistent facts on Wikipedia – can AI help find them?
[Opinion] AI finds errors in 90% of Wikipedia's best articles
For one month beginning on October 5, I ran an experiment: Every day, I asked ChatGPT 5 (more precisely, its "Extended Thinking" version) to find an error in "Today's featured article". In 28 of these 31 featured articles (90%), ChatGPT identified what I considered a valid error, often several. I have so far corrected 35 such errors.
I find that a greatly simplified way of judging whether a given use of an LLM is good is whether its output is treated as a finished product or not. Here the human uses it to identify possible errors and then verifies the LLM output before acting, and the use of AI isn't mentioned at all in the corrections.
The only danger I see is that errors the LLM didn't find will continue to go undiscovered, but they probably would have gone undiscovered without the LLM too.
I think the first part you wrote is a bit hard to parse but I think this is related:
I think the problematic part of most genAI use cases is validation at the end. If you're doing something that has a large amount of exploration but a small amount of validation, like this, then it's useful.
A friend was using it to learn the Linux command line; that can be framed as having a single command at the end that you copy, paste and validate. That isn't perfect, because the explanation could still be off and wouldn't be validated, but I think it's still a better use case than most.
If you're asking for the grand unifying theory of gravity then:
- validation isn't built into the task (so you're unlikely to do it with time).
- validation could be as time intensive as the task (so there is no efficiency gain if you validate).
- it's beyond your ability to validate, so if it says nice things about you then a subset of people will decide the tool is amazing.
Yeah, my morning brain was trying to say that when it's used as a tool by someone who can validate the output and act upon it, then it's often good. When it's used by someone who can't, or won't, validate the output and simply treats it as the finished product, then it usually isn't any good.
Regarding your friend learning to use the terminal, I'd still recommend validating the output before using it.
If it's asking genAI about flags for ls, then sure, no big deal, but if a genAI ends up switching around sda and sdb in your dd command, resulting in a wiped drive, you've only got yourself to blame for not checking the manual.
Or it falsely flags something as an error, and the human has so much faith in the system that it must be correct, and either wastes time hunting for a solution or bends reality to “correct” it, in a human form of hallucinating BS. Especially dangerous if claiming there is an error supports the individual’s personal beliefs.
Edit:
I’ll call it “AI-induced confirmation bias” cousin to AI-induced psychosis.
This is an interesting idea:
The "at least one" in the prompt is deliberately aggressive, and seems likely to force hallucinations in case an article is definitely error-free. So, while the sample here (running the prompt only once against a small set of articles) would still be too small for it, it might be interesting to investigate using this prompt to produce a kind of article quality metric: if it repeatedly results only in invalid error findings (i.e. ones a human reviewer Disagrees with), that should indicate that the article is less likely to contain factual errors.
So… the same as most employees but cheaper.
People here are above average and overestimate the vast majority of humanity.
Wait, you mean using a Large Language Model, which was created to parse walls of text, to parse walls of text is a legit use?
Those kids at openai would've been very upset if they could read.
If you read the post it's actually quite a good method. Having an LLM flag potential errors and then reviewing them manually as a human is actually quite productive.
I've done exactly that on a project that relies on user-submitted content; moderating submissions at even a moderate scale is hard, but having an LLM look through them for me is easy. I can then check anything it flags and moderate manually. Neither the accuracy nor the precision is perfect, but both are high enough to be useful, so it's a low-effort way to find a decent number of the things you're looking for. In my case I was looking for abusive submissions from untrusted users; in the OP author's case, they were looking for errors. I'm quite sure this method would never find all errors, and as per the article, the "errors" it flags aren't always correct either. But the reward-to-effort ratio is high on a task that would otherwise be unfeasible.
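A minimal sketch of that flag-then-review loop, for the curious. Everything here is hypothetical: `llm_flag` stands in for whatever model call you'd actually use (it's a trivial keyword check here so the example runs offline), and the key point is that the model only builds a review queue, it never acts on its own.

```python
# Flag-then-review: the model proposes, a human disposes.
def llm_flag(text: str) -> bool:
    """Stand-in for an LLM call: True if the submission looks abusive."""
    return any(word in text.lower() for word in ("scam", "abuse"))

def triage(submissions: list[str]) -> list[str]:
    # Nothing is removed automatically; this just collects items
    # for a human moderator to look at.
    return [s for s in submissions if llm_flag(s)]

queue = triage(["hello world", "obvious scam link", "nice photo"])
print(queue)  # -> ['obvious scam link']
```

A false flag just costs the reviewer a glance; a false negative is no worse than having no tool at all, which is what makes the trade-off attractive.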
I can then check through anything it flags and manually moderate.
It isn't doing anything automatically; it isn't moderating for me. It's just flagging submissions for human review: "hey, maybe have a look at this one". So if it falsely flags something it shouldn't, which is common, I simply ignore it. And as I said, the error rate is moderate; although I haven't measured it precisely, the tool is still successful enough to be quite useful.
And the featured articles are usually quite large. As an example, today's featured article is on a type of crab - the article is over 3,700 words with 129 references and 30-something books in the bibliography.
It's not particularly unreasonable or surprising to be able to find a single error in articles that complex.
The tool doesn't just check the text for errors it would know of. It can also check sources, compare articles, and find inconsistencies within the article itself.
There's a list of the problems it found that often explains where it got the correct information from.
The first edit was undoing vandalism that had persisted for 5 years. Someone had changed the number of floors a building had from 67 to 70.
A friendly reminder to only use Wikipedia as a summary/reference aggregate for serious research.
This is a cool tool for checking these sorts of things: run everything through the LLM to flag errors and go after them like a whack-a-mole game instead of a hidden-object game.
No surprise.
Wikipedia ain't the bastion of facts that lemmites make it out to be.
It's a mess of personal fiefdoms run by people with way too much time on their hands and an ego to match.
I know this is sarcasm, but in case people don't know.
Oh Jesus Christ no. At least Wikipedia has some form of oversight from multiple sources and people.
Disagree, Wikipedia is a pretty reliable bastion of facts due to its editorial demands for citations and rigorous style guides etc.
Can you point out any of these personal fiefdoms so we can see what you're referring to?
Finding inconsistencies is not so hard. Pointing them out might be a -little- useful. But resolving them based on trustworthy sources can be a -lot- harder. Most science papers require privileged access. Many news stories may have been grounded in old, mistaken histories ... if not on outright guesses, distortions or even lies. (The older the history, the worse.)
And, since LLMs are usually incapable of citing sources for their own (often batshit) claims anyway -- where will 'the right answers' come from? I've seen LLMs, when questioned again, apologize that their previous answers were wrong.
To quote ChatGPT:
"Large Language Models (LLMs) like ChatGPT cannot accurately cite sources because they do not have access to the internet and often generate fabricated references. This limitation is common across many LLMs, making them unreliable for tasks that require precise source citation."
It also mentions Claude. Without a cite, of course.
Reliable information must be provided by a source with a reputation for accuracy ... trustworthy. Else it's little more than a rumor. Of course, to reveal a source is to reveal having read that source ... which might leave the provider open to a copyright lawsuit.
The problem is a lot of this is almost impossible to actually verify. After all, if an article says a skyscraper has 70 stories, even people working in the building may not necessarily be able to verify that.
I have worked in a building where the elevator only went to every other floor, and I must have been in that building for at least 3 months before I noticed, because the ground floor obviously had access and the floor I worked on just happened to have elevator access, so it never occurred to me that there might be other floors not listed.
For a building of 63 (or whatever it actually was) stories, it's not really visually apparent from the outside either; you'd really have to put in the effort to count the windows. Plus, oftentimes the facade suggests more stories than there are, so even counting the windows doesn't necessarily give you an accurate answer, not that anyone would necessarily have the inclination to do so. So yeah, I'm not surprised that errors like that exist.
More to the point, the bigger issue is whether the AI can actually prove that it is correct. In the article there was contradictory information in official sources, so how does the AI know which one was right? Could somebody be employed to go check? Presumably even the building management don't know the article is incorrect, otherwise they would have been inclined to fix it.
Congrats. You just burned down 4 trees in the rainforest for every article you had an LLM analyze.
LLMs can be incredibly useful, but everybody forgets how much of an environmental nightmare this shit is.
Had to look up ChatGPT's energy usage because you made me curious.
Seems like OpenAI claims ChatGPT-4o uses about 0.34 Wh per "query". This is apparently consistent with third-party estimates. The average Google search is about 0.03 Wh, for reference.
Issue is, "query" isn't defined, and it's possible this figure is the energy consumption of the GPUs alone, omitting additional sources that comprise the full picture (energy conversion loss, cooling, infrastructure, etc.). It's also unclear if this figure was obtained during model training, or during normal use.
I also briefly saw that ChatGPT 5 uses between 18 and 40 Wh per query, so roughly 50-120x more than GPT-4o. The OP used GPT 5.
It sounds like the energy consumption is relatively bad no matter how it's spun, but consider that it replaces other forms of compute and reduces workload for people, and the net energy tradeoff may not be that bad. Consider the task from the OP - how much longer/how many more people would it take to accomplish the same result that GPT 5 and the lone author accomplished? I bet the net energy difference isn't that far from zero.
Here's the article I found: towardsdatascience.com/lets-an…
Let’s Analyze OpenAI’s Claims About ChatGPT Energy Use | Towards Data Science
ChatGPT uses an average of 0.34 Wh per query, according to a blog post by Sam Altman. Does that figure hold up? (Kasper Groes Albin Ludvigsen, Towards Data Science)
A setup with one monitor and a computer with a 5090 will draw about 1 kW under load. That's 7 kWh per week if the average is 1 hour a day.
So that's about:
- 233k Google searches
- 20k GPT 4o queries
- 175 GPT 5 queries
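Taking the per-query figures above at face value (they're rough estimates, and the 40 Wh number is the upper end of the quoted GPT-5 range), the arithmetic behind those three equivalences checks out:

```python
# 1 kW rig, 1 hour/day for a week = 7 kWh = 7000 Wh of gaming.
weekly_wh = 1000 * 7

google_wh = 0.03  # per search (estimate)
gpt4o_wh = 0.34   # OpenAI's claimed per-query figure
gpt5_wh = 40      # upper end of the 18-40 Wh per-query estimate

print(round(weekly_wh / google_wh))  # -> 233333 (~233k searches)
print(round(weekly_wh / gpt4o_wh))   # -> 20588 (~20k GPT-4o queries)
print(round(weekly_wh / gpt5_wh))    # -> 175 GPT-5 queries
```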
This headline is a bit misleading. The article also says that only 2/3 of the errors GPT found were verified errors (according to the author).
- Overall, ChatGPT identified 56 supposed errors in these 31 featured articles.
- I confirmed 38 of these (i.e. 68%) as valid errors in my assessment. Implemented corrections for 35 of these, and Agreed with 3 additional ones without yet implementing a correction myself. Disagreed with 13 of the alleged errors (23%).
- I rated 4 as Inconclusive (7%), and one as Not Applicable (in the sense that ChatGPT's observation appeared factually correct but would only have implied an error in case that part of the article was intended in a particular way, a possibility that the ChatGPT response had acknowledged explicitly).
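A quick sanity check, using only the raw counts quoted above, shows the percentages in that breakdown are internally consistent:

```python
total = 56  # supposed errors ChatGPT identified across the 31 articles
confirmed, disagreed, inconclusive, not_applicable = 38, 13, 4, 1

# The four categories account for every flagged error.
assert confirmed + disagreed + inconclusive + not_applicable == total

print(round(100 * confirmed / total))     # -> 68
print(round(100 * disagreed / total))     # -> 23
print(round(100 * inconclusive / total))  # -> 7
```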
RRF Caserta. African Chronicles: kidnapping of schoolgirls, coups d'état, narco-states, the economy.
[LTT] Building a Computer with the CREATOR of Linux! - Linus Torvalds Collab PC
Anthropic claims Chinese state-sponsored hackers used Claude Code to access data from and leave backdoors in over 30 companies using AI-automated cyberattacks
AI firm claims Chinese spies used its tech to automate cyber attacks
The company claimed in a blog post this was the "first reported AI-orchestrated cyber espionage campaign". (Joe Tidy, BBC News)
India orders smartphone makers to preload state-owned cyber safety app
India's telecoms ministry has privately asked smartphone makers to preload all new devices with a state-owned cyber security app that cannot be deleted, a government order showed.
"Cannot be deleted"
sure. Based on ALL the youtube vids from India of unlocking phones or bypassing FRPs, etc I imagine it won't be long before you start seeing videos from India on how to easily bypass this.
I mean some of those FRP videos are wild with the steps they go through to unlock an Android phone.
The app is mainly designed to help users block and track lost or stolen smartphones across all telecom networks, using a central registry. It also lets them identify, and disconnect, fraudulent mobile connections. With more than 5 million downloads since its launch, the app has helped block more than 3.7 million stolen or lost mobile phones, while more than 30 million fraudulent connections have also been terminated.
The government says it helps prevent cyber threats and assists tracking and blocking of lost or stolen phones, helping police to trace devices, while keeping counterfeits out of the black market.
There has to be a way to do all of this without installing something on your phone that you didn't ask for.
Cyber security
I'm sure it protects children too, by scanning for "child pornography", right?
Well I mean if you have a picture of yourself and your kids, or just of your kids, or your kids playing with their friends (other kids) that counts as highly suspicious since you have a lot of kid pics meaning you probably have child porn somewhere or maybe you are using them to train a LORA for an AI 'art' generator to make porn of those kids...
I'm going to stop there. But ANYTHING CAN BE TWISTED AGAINST YOU! I MEAN ANYTHING! I had that happen to me many times in my life.
Apple's iOS powered an estimated 4.5% of 735 million smartphones in India by mid-2025, with the rest using Android, Counterpoint Research says.
will be interesting to see if Apple tells them to pound sand.
Ah yes, the Telecom minister of the BJP is notorious for creating some draconian bills that would put China to shame.
Like the "Broadcast Bill 2.0".
Turns out, the Government is just another company now.
state-owned cyber security app that cannot be deleted
I think it's called malware.
South Korea police say 120,000 home cameras hacked for 'sexploitation' footage
The cameras were located in private homes, karaoke rooms, a Pilates studio and a gynaecologist's clinic. (Gavin Butler, BBC News)
How do streaming sites work? Not the "official" ones, the other 'free' ones
Streaming like movies and TV, or like Twitch?
And free as in the legal free ones, or illegal free ones?
blog.velocix.com/cdn-leeching-…
CDN Leeching: the hidden threat undermining streaming performance & profits
CDN leeching is draining streaming performance and profits. Explore strategies to detect, prevent, and reduce its impact. (blog.velocix.com)
Social and Organizational Talks at FOSDEM 2026
Hey, all. One thing that’s different this year about the Social Web Devroom at FOSDEM 2026 is that we’re going to include talks about the organizational and social aspects of rolling out Open Source Fediverse software for individuals and communities. Last year, we focused pretty heavily on technical talks from the principal developers of FLOSS packages. This year, we want to make sure the other aspects of Fediverse growth and improvement are covered, too.
Consequently, the guidance for last year’s event, which was focused on how to make a great technical presentation, might seem a little outdated. But on reviewing it, I’ve found that it still has good advice for social and organizational talks. Just like software developers, community builders see problems and construct solutions for them. The solutions aren’t just about writing code, though; more often they involve bringing people together, assembling off-the-shelf tools, and making processes and rules for interaction.
Talks about Open Source software to implement ActivityPub and build the social web are still welcome, of course. We’re just expanding a bit to cover the human aspects of the Fediverse as well.
I’m looking forward to the interesting discussions about bringing people together to make the Social Web. If you haven’t already, please consider submitting a talk to pretalx.fosdem.org/fosdem-2026…. Select “Social Web” from the “Track” dropdown, and include the length of your talk (8/25/50) in the submission notes. The deadline is December 1, 2025, so get them in as soon as possible!
FOSDEM 2026 – Social Web Devroom – Call For Participation
The Social Web Foundation is pleased to announce the Social Web Devroom at FOSDEM 2026, and invites participants to submit proposals for talks for the event.
FOSDEM is an exciting free and open source software event in Brussels, Belgium that brings together thousands of enthusiasts from around the world. The event spans the weekend of January 31 to February 1, 2026 and features discussion tracks (“devrooms”) for scores of different technology topics.
The Social Web Devroom will take place in the afternoon of Saturday, January 31.
Format
There will be three available talk formats:
- 50 minutes – for bigger projects, followed by 10 minutes of questions.
- 25 minutes – for bigger projects, followed by 5 minutes of questions.
- 8 minutes – micro-talks on smaller or newer projects, in groups of 3, followed by 6 minutes of combined questions for the group.
Topics
The Social Web Devroom is open to talks all about the Social Web AKA the Fediverse, including:
- Implementations of the ActivityPub protocol or ActivityPub API
- Clients for ActivityPub-enabled software like Mastodon
- Supporting services for the Fediverse, like search or onboarding
- ActivityPub-related libraries, toolkits, and frameworks
- Tools, bots, platforms, and related topics
- Advocacy, organization and social activity in deploying Open Source ActivityPub applications
Important dates
- Submission open: 1 Nov 2025
- Submission deadline: 1 Dec 2025
- Acceptance notifications: 10 Dec 2025
- Final schedule announcement: 15 Dec 2025
- Devroom: 31 Jan 2026
Submissions
Submit talk proposals to pretalx.fosdem.org/fosdem-2026…. Select “Social Web” from the “Track” dropdown, and include the length of your talk (8/25/50) in the submission notes. (Note that the “Lightning Talks” track is a separate event-wide track; if you’re proposing a Social Web micro-talk, please choose the “Social Web” track!)
Code of Conduct
All attendees and speakers must be familiar with and agree to the FOSDEM Code of Conduct.
Contact
Questions about topics, formats, or the Social Web in general should go to contact@socialwebfoundation.org.
California immunization leader blasts FDA vaccine chief’s unsupported claim of child deaths
This post uses a gift link, which requires some people to register to access it.
Not posting an archive.is link to bypass the paywall, because Hearst has lawyers who don't like that.
CA immunization leader blasts FDA official’s child-death claim
California's immunization leader blasts FDA vaccine chief’s unsupported claim of child deaths, calling it 'reckless'. (Ko Lyn Cheang, San Francisco Chronicle)
UN Ditches Google for Taking Form Submissions, Opts for an Open Source Solution Instead
The United Nations opts for an open source alternative to Google Forms. (Sourav Rudra, It's FOSS)
You Want Microservices, but Do You Need Them?
You Want Microservices—But Do You Need Them? | Docker
Before you default to microservices, weigh hidden costs and consider a modular monolith or SOA. Learn when Docker delivers consistency and scale, without sprawl. (Manish Hatwalne, Docker)
why did the U.S. invade Iraq when Venezuela is so close?
Is that what's going to happen soon?
why did the U.S. invade Iraq when Venezuela is so close?
2/3 of Venezuela's reserves were discovered/confirmed in the last 20 years.
20 years ago, Venezuela was one of the U.S.'s biggest oil suppliers. 10ish years ago the oil price crashed and sent Venezuela spiralling into chaos. Trump has been grinding them down since then (mostly through sanctions). Biden let them breathe for 4 years.
Is that what's going to happen soon?
Ummm, yeah. You think they're posturing over drugs? Trump doesn't care if the poors die of drug overdoses. He wants that sweet, sweet crude and has been gearing up to ~~pacify~~ invade Venezuela to get it. That's why he wants Ukraine to surrender. He needs Russia to cool its jets so that Europe/NATO doesn't drag the U.S. into a war over there. Troops will be on the ground in Venezuela soon, after an indiscriminate blitzkrieg bombing campaign for a few weeks - they will say they're bombing gangs/cartels. It should start any day now since he "closed the air space" over Venezuela. He's just waiting for Venezuela to launch one of its fighters (or for Ukraine to capitulate) and the game is on. This is all ramping up because the CIA has failed to eliminate Maduro and install a new leader.
I wouldn't be surprised if Trump ultimately hopes to take over the whole country and rename it.
That's how wars start nowadays.
Well, actually it's going to be something drug-related this time; the plain-terrorism pretext has just been used by Israel, so it's too fresh and repetitive. They're very good at new plots, we must admit it.
9/11 Civil planes attack
09/26 Submarines / scuba divers attack
10/7 Fortnite style attack with motorcycles and parachutes
xx/xx Must be using boats this time
Yes.
abc.net.au/news/2025-10-28/ven…
ABC News
ABC News provides the latest news and headlines in Australia and around the world. (Elissa Steedman, Australian Broadcasting Corporation)
RRF Caserta. Sports. Basketball Serie B: San Severo 75, Juve Caserta 85.
Amazon’s AI ‘Banana Fish’ Dubs Are Hilariously, Inexcusably Bad
They are also AI-dubbing a show that already has a dub:
xcancel.com/Pikagreg/status/19…
It's insulting because there are a lot of LGBTQ+ voice actors who want to do this but the powers that be won't greenlight it.
Look at Amazon's relationship with Trump, Trump's positions on LGBTQ+ people, and ask why Amazon is doing this. They aren't contractually obligated to do so. They are doing various animation projects and paying real voice actors. Though Hazbin Hotel is pretty gay, and so is Helluva Boss (same people/same universe). The only other animation project I know of at Amazon is Mighty Nein, and I wanna say those guys are LGBTQ+ friendly, but I'm not sure they're on the spectrum. So I don't wanna say Amazon is acting prejudicially here, but it smells.
I mean, yeah, Amazon's relationship with Trump is far too cozy... however, in this case, I think it's more likely that Amazon was just trying to cheap out by using AI (and possibly test/showcase their use of the tech) rather than them actively deciding to cut out LGBTQ+ voice actors.
FTA:
As giant tech corporations try to jam AI into every possible orifice in the world, we are consistently getting new examples day in and day out of that going poorly. This week, that’s Amazon Prime Video introducing dubs to the beloved anime Banana Fish, which has needed them for a long while. The problem? They’re AI. Not just AI, horrible AI.
Had this decision been based in bigotry, I would imagine that they wouldn't have bothered with the dub at all.
A company like Amazon is huge. Like, the department that cozies up to the administration is a completely different subsidiary than Prime Video anime importing. They don’t even know each other. They will cut each other’s throats if they are fighting for the same promotion.
This happens because 1) cost savings and 2) human garbage exists everywhere. They hide behind the decisions and use the current political climate to push their bigotry.
Wow. I watched some in the embedded tweet and that was uhhh... That was hot garbage...
Don't watch the tweet if you don't want spoilers; there are important story beats included, because they're scenes with a lot of emotion that the AI dub completely mangles.
Highly recommend the show. Content warning for sexual abuse and exploitation though; it's a central part of the story and the show is a very difficult watch.
It feels like Neil Breen directed this dub.
So do they not even have internal review?
OK, so it gets generated. Just like if someone sent a real recording, someone should listen to it, right? And make mention if there are parts that are noisy, or there's a random change in EQ that feels jarring, or... I don't know, because it sounds like an adult voice on a 6-year-old reading the lines?
Honestly, even a dirt-cheap language model (with sound input) would tell you it's garbage. It could itemize problematic parts of the dub.
But they didn't use that, because this isn't machine learning. It's Tech Bro AI.
Quality Control? Review? Pffftt, ain't nobody got time for that! just generate and ship, baby, generate and ship!
Think of all the yachts our glorious ~~leader~~ CEO will be able to afford with all the money we're saving 😍😍😍
- Internal review also takes time and expertise. Those things cost money, and the whole point of the exercise is to not spend money.
- No one uses generative AI because they actually care about the quality of the end product.
But even allowing for those points, it's entirely possible that they did, in fact, do quality review. Extensively. But at some point the generation costs exceeded their allowed budget, and this is what they settled on. This is the thing that lurks behind bad-quality AI art: the fact that what we see is often the best result out of many, many tries. The Coca-Cola holiday ad had to be stitched together from hours upon hours of failed attempts. Even the horrendously bad-looking end product wasn't as bad as many of the failed outputs they got.
Regarding point 1, it's a factored-in value already. Replacing multiple stages of production simultaneously is a massive risk: voice acting + editing + editor review + production review on the cut.
This part:
it's entirely possible that they did, in fact, do quality review. Extensively. But at some point the generation costs exceeded their allowed budget and this is what they settled on.
I'd call entirely likely.
It would also mean that there was almost no testing of the LLM's output prior to pushing it to production work, or basic items like intonation would have been called out.
It's also possible that the production team knew it was dogshit and pushed it out on purpose so people could see it for dogshit. Anime fans are not known for being supportive of poor adaptations, after all; maybe they hoped for backlash? I know if I were on that team I'd prefer it.
At some point I'd expect management to have recognized it for being terrible though.
I seriously doubt that any of the decision makers involved in this process actually watch anime.
Anyone in management who cared probably didn't have enough pull / authority to do a damn thing about it.
To all those working on Naruto fan dubs in the early 2000s I say:
I am sorry. I was too hard on you. It took me 20 years to realize what "hot garbage" really is in the context of an anime dub.
Posted this in another community but I’ll leave this here too:
So back when I worked at Amazon I was playing around with AWS Skillbuilder. They don’t pay for any other training materials for SDEs (well they used to have ACloudGuru but ended that).
I was like “they charge money for this so it can’t be that bad right?” Well
1/3 of the courses were actually what I’d call “watchable”
1/3 were just SEO Blogspam masquerading as information
And the remaining 1/3 clearly used a text to speech software that was dreadful. It was incomprehensible.
I say all this to say that if there was ever a list of companies I would trust to do AI dubs, Amazon would be at the bottom of that list.
They’re pretty bad outside of English-Chinese actually.
Voice-to-voice is all relatively new, and it sucks if it's not all integrated (e.g. feeding a voice model plain text so it loses the original tone, emotion, cadence and such).
And… honestly, the only models I can think of that'd be good at this are Chinese. Or Japanese finetunes of Chinese models. Amazon certainly has some stupid policy where they aren’t allowed to use them (even with zero security risk since they’re open weights).
Afghan accused of shooting 2 National Guard members was part of CIA-backed unit whose veterans have struggled in the U.S.
Rahmanullah Lakanwal was a member of a "Zero Unit," an elite squad of Afghans who have faced hardships in the U.S. due to visa and employment issues. (Dan De Luce, NBC News)
Building the PERFECT Linux PC with Linus Torvalds
You know the silly stuff at the start was actually Torvalds' idea, right?
Linus (Sebastian) spoke about how he didn't even get the reference but Linus (Torvalds) prompted him to go and watch Highlander ("there can only be one!!")
You should've kept watching it. There's some good stuff there.
I laughed so hard at the highlander reference. It might just be that you gotta know the specific meme and culture to enjoy it. Even YouTube Linus didn't know the reference.
Seeing the two Linuses pulling out katanas on each other was hilarious.
God forbid someone who's made tech their life is very excited that Torvalds has come to visit and turned out to be a really nice guy.
Torvalds will probably be the highlight guest of his entire career, and he knows it. Of course that's enormously exciting.
Hardware
- AMD Ryzen Threadripper 9960X
- GIGABYTE TRX50 AERO D Motherboard
- Samsung SSD 9100 PRO 2TB SSD
- 64GB ECC RAM
- Noctua NH-U14S TR5-SP6 Cooler
- Intel Arc B580 GPU
- Fractal Design Torrent E-ATX Case
- Seasonic PRIME TX-1600 1600W 80+ Titanium PSU
OS
- Fedora
Fatti e non...
Yes, but the world doesn't go wrong only because of everyone's small negative acts, but also because the negative acts of a few are rewarded to the point of carrying those few to the top of the power structure.
A minimum of theory can help there, since that power can only be countered by coordinated, united action from ordinary people.
Don't throw away your old PC—it makes a better NAS than anything you can buy
Doing it yourself is way more cost effective. (Nick Lewis, How-To Geek)
So I did this, using a Ryzen 3600, with some light tweaking the base system burns about 40-50W idle. The drives add a lot, 5-10W each, but they would go into any NAS system, so that's irrelevant. I had to add a GPU because the MB I had wouldn't POST without one, so that increases the power draw a little, but it's also necessary for proper Jellyfin transcoding. I recently swapped the GPU for an Intel ARC A310.
By comparison, the previous system I used for this had a low-power, fanless Intel Celeron; with a single drive and two SSDs it drew about 30W.
Ok, I'm glad I'm not the only one that wants a responsive machine for video streaming.
I ran a Pi 400 with Plex for a while. I don't care to save 20W while I wait for the machine to respond after every little scrub of the timeline. I want to have a better experience than Netflix. That's the point.
Drivers? Are you running it on Windows? On Linux I just plugged it in and it worked, Jellyfin transparently started transcoding the additional codecs.
It fixed my issue with tone mapping, before this HDR files on my not-so-old TV showed the wrong colors.
I've no desktop environment on the NAS; it was plug and play in the terminal. I did get an error about HSW/BDW HD-Audio HDMI/DP requiring binding with a gfx driver, but I've not yet even bothered to google it.
I read somewhere the Sparkle ELF I have just ramps the fan to 100% at all times with the Linux driver, with no option to edit the fan curve under Linux
(suggested fix was install a windows VM, set the curve there and the card will remember, but after rebuilding the NAS and fixing a couple of minor issues to get it all working I couldn't face installing windows, so just left it as is until I have the time lol).
The host is running Proxmox, so I guess their kernel just works with it.
It does run the fan way more than I'd like, but its noise is drowned out by the original AMD cooler on the CPU anyway, but thanks for the info, I may look into it... But I guess I'd have to set up GPU pass-through on a VM just for that.
A desktop running at low usage wouldn't consume much more than a NAS, as long as you drop the video card (which wouldn't be doing anything anyway).
Take only that extra and you probably have a few years of usage before the additional electricity costs overtake the NAS's cost. Where I live that's around 5 years for an estimated extra 10W.
as long as you drop the video card
As I wrote below, some motherboards won't POST without a GPU.
Take only that extra and you probably have a few years of usage before the additional electricity costs overtake the NAS's cost. Where I live that's around 5 years for an estimated extra 10W.
Yeah, and what's more, if one of those appliance-like NASes breaks down, how do you fix it? With a normal PC you just swap out the defective part.
Depends.
Toss the GPU/wifi, disable audio, throttle the processor a ton, and set the OS to power saving, and old PCs can be shockingly efficient.
There was a post a while back of someone trying to eke every single watt out of their computer. Disabling XMP and running the RAM at the slowest speed possible saved like 3 watts, I think. An impressive savings, but at the cost of HORRIBLE CPU performance. But you do actually need at least a little bit of grunt for a NAS.
At work we have some of those Atom-based NASes, and the combination of lack of CPU and horrendous single-channel RAM speeds makes them absolutely crawl. One HDD on its own performs the same as this RAID 10 array.
Yeah.
In general, 'big' CPUs have an advantage because they can run at much, much lower clockspeeds than atoms, yet still be way faster. There are a few exceptions, like Ryzen 3000+ (excluding APUs), which idle notoriously hot thanks to the multi-die setup.
Peripherals and IO will do that. Cores pulling 5-6W while IO die pulls 6-10W
techpowerup.com/review/amd-ryz…
AMD Ryzen 7 5700X Review - Finally an Affordable 8-Core
With the Ryzen 7 5700X, AMD is finally offering a more affordable 8-core processor. In our review, we take a close look at how this $265 CPU performs against the Ryzen 7 5800X, and also compare it to Intel's Alder Lake lineup, including the i5-12600K… (TechPowerUp)
Same with auto overclocking mobos.
My ASRock sets VSoC to a silly high voltage with EXPO. Set that back down (and fiddle with some other settings/disable the IGP if you can), and it does help a ton.
...But I think AMD's MCM chips just do idle hotter. My older 4800HS uses dramatically less, even with the IGP on.
And heat your room in the winter!
Add spring + autumn if you live up north.
Stuff designed for much higher peak usage tends to have a lot more waste.
For example, a 400W power source (which is what's probably in the original PC of your example) will waste more power than a lower-wattage one (unless it's a very expensive one), so in that example of yours it should be replaced by something much smaller.
Even beyond that, everything in there - another example, the motherboard - will have a lot more power leakage than something designed for a low power system (say, an ARM SBC).
Unless it's a notebook, that old PC will always consume more power than, say, an N100 Mini-PC, much less an ARM based one.
All true, yep.
Still, the clocking advantage is there. Stuff like the N100 also optimizes for lower costs, which means higher clocks on smaller silicon. That's even more dramatic for repurposed laptop hardware, which is much more heavily optimized for its idle state.
For example, a 400W power source (which is what's probably in the original PC of your example) will waste more power than a lower-wattage one
In my experience, power supplies are most efficient near 50% utilization. be quiet! PSUs have charts about it.
The way one designs hardware is to optimize for the most common usage scenario, with enough capacity to account for the peak-use scenario (and with some safety margin on top).
(In the case of silent power sources they would also include lower power leakage in the common usage scenario so as to reduce the need for fans, plus in the actual physical circuit design would also include things like airflow and having space for a large slower fan since those are more silent)
However specifically for power sources, if you want to handle more power you have to for example use larger capacitors and switching MOSFETs so that it can handle more current, and those have more leakage hence more baseline losses. Mind you, using more expensive components one can get higher power stuff with less leakage, but that's not going to happen outside specialist power supplies which are specifically designed for high-peak use AND low baseline power consumption, and I'm not even sure if there's a genuine use case for such a design that justifies paying the extra cost for high-power low-leakage components.
In summary, whilst theoretically one can design a high-power low-leakage power source, it's going to cost a lot more because you need better components, and that's not going to be a generic desktop PC power source.
That said, since silent PC power sources are designed to produce less heat, which means less leakage (power leakage is literally the power turning into heat), even with the design targeted at that power source's most common usage scenario (which is not going to be 15W), that would still probably mean better components and hence lower baseline leakage, so they should waste less power if that desktop is repurposed as a NAS. It still won't beat a dedicated ARM SBC (not even close), but it might end up cheap enough to be worth it if you already have that PC with a silent power source.
The GTX 480 is efficient by modern standards. If Nvidia could make a cooler that could handle 600 watts in 2010 you can bet your sweet ass that GPU would have used a lot more power.
Well that and if 1000 watt power supplies were common back then.
How about a Raspberry Pi? I've got one (Raspberry Pi 400) running my Home Automation setup with a couple USB 3.0 ports. Was thinking there's gotta be some add-ons for Home Assistant to put some external storage to good use.
Don't need anything too fancy. Just looking for some on-site backup and maybe some media storage
Yeah, I guess I should have been clear that's part of what I was thinking (although to be honest I'm mostly a schmuck who pays for a few streaming services and uses that)
What exactly would be the main choking point? Horsepower of the Pi to take that stored file and stream it to the client?
So I believe the Pi 4 was the first to have an actual Ethernet controller, rather than essentially a built-in USB-to-Ethernet adapter, so bandwidth to your HDDs/Ethernet shouldn't be a problem.
Streaming directly off of the pi should be tolerable. A bit slower than a full fat computer with tons of ram for caching and CPU power to buffer things. But fine. There's some quirks with usb connected HDDs that makes them a bit slower than they should (still in 2025 UASP isn't a given somehow) But streaming ultimately doesn't need that much bandwidth.
What's going to be unbearable is transcoding. If you're connecting some shitty-ass smart TV that only understands like H.264 and your videos are H.265, then that has to get converted, and that SUCKS. Plex by default also likes to play videos at a lower bitrate sometimes, which means transcoding.
There's also other weird quirks to look out for. Like someone else was (I think) doing exactly what you wanted to do, but no matter what the experience was unbearable. Apparently LVM was somehow too much compute for the pi to handle, and as soon as they switched to raw EXT4 they could stream perfectly fine. I don't remember why this was a problem, but it's just kind of a reminder of how weak these devices actually are compared to "full" computers.
I've got 2 RPis: a Pi 5 running Home Assistant and a Pi 4 with a USB drive caddy acting as little more than a NAS (it also does all the downloading through Radarr etc.).
I find them perfectly adequate.
My gaming rig acts as my emby server as it's basically on all the time and it has a beefy gfx card that can handle transcoding.
None of that really matters for a home media server. Even with the limited SATA ports, worst case you grab a cheap expansion card.
Power consumption is a much bigger concern, a purpose built NAS is much more efficient than a random old PC.
Even the most expensive Synology only has space for 8 drives with only one 10Gbit ethernet port.
You can build something yourself for less with much better performance.
That's not true at all. Synology will sell you 24 bay rack mounted devices and 12 bay towers, as well as expansion modules for both with more bays you can daisy chain to them.
Granted, I believe those are technically marketed as enterprise solutions, but you can buy a 12 bay unit off of Amazon for like two grand diskless, so... I mean, it's a thing.
Not saying you should, and it's definitely less cost effective (and less powerful, depending on what you have laying around) than reusing old hardware, but it does exist.
I think the self-hosting community needs to be more honest with itself about treating self-hosting and building server hardware at home as separate hobbies.
You absolutely don't need server-grade hardware for a home/family server, but I do see building a proper server as a separate activity, kinda like building a ship in a bottle.
That calculation changes a bit if you're trying to host some publicly available service at home, but even that is a bit of a separate thing unless you're running a hosting business, at which point it's not really a home server anyway, even if it happens to sit inside your house.
You absolutely don't need server-grade hardware for a home/family server
Server-grade hardware makes a lot of sense even for home use. My NAS is tucked away in a closet, having IPMI is so much more convenient when you can’t easily hook it up to a keyboard and mouse.
I'm currently running some stuff off an old laptop which I also have tucked away somewhere, and I just... remote desktop in for most of the same functionality. And even if you can't be bothered to flip it open on the rare occasion you can't get to the point where the OS will let you remote in, there are workarounds for that these days. And of course the solution to "can't hook it up to a keyboard and mouse" in that case is that the thing comes with both (and its own built-in UPS) out of the box.
Nobody is saying that server-grade solutions aren't functional or convenient. They exist for a reason. The argument is that a home/family server you don't need to run at scale can work perfectly fine without them, losing only minor quality-of-life features, and upcycling old or discarded consumer hardware is a perfectly valid way to build one.
I totally agree. And depending on your needs & budget: idle power usage on slightly older server-grade equipment is much higher compared to consumer stuff (servers didn't really know how to idle until "recently"). Also, if you don't host a ton of different things for different users (i.e. you don't need all the PCIe lanes), you get so much faster CPUs for the same monies.
The only server-grade things you need are ofc disk drives that are gonna do server stuff.
And a good PSU (but a nice Seasonic is almost server-grade anyways). When ppl talk about power usage they tend to forget PSUs (they check their PC usage with a shitty PSU that itself can't idle low & maybe doesn't even get to 90% at peak loads).
HBAs are cheap, IPMI isn't at all needed under normal use cases, and ECC is way overkill.
For most people a halfway decent PC that isn't failing is plenty.
Hardware is boring. Doing some research is boring. People don't care about boring stuff. Or their data.
"Let's put every single family photo taken between 1976 and today on this and only this one shitty drive. And let me spin up an Immich container on my trusty raspberry. I have watched a YouTube video or two in my days. I think I know what I'm doing."
Bonus points for "but ssh is all you need", "static electricity has never been a problem for me" and "what gpu do you recommend for jellyfin?".
OK. Science time. Somewhat arbitrary values used; the point is that there is an amortization calculation, and you'll need to run your own with accurate input values.
A PC drawing 100W 24/7 uses 877 kWh per year; at $0.15/kWh, that's $131.49 per year.
A NAS drawing 25W 24/7 uses 219 kWh per year; at $0.15/kWh, that's $32.87 per year.
So, in this hypothetical case you "save" about $100/year on power costs running the NAS.
Assuming a capacity-equivalent NAS might cost $1200, it takes about 12 years for the NAS to pay for itself, so you're better off using the PC you already have.
This ignores that the heat generated by the devices is desirable in winter so the higher heat output option has additional utility.
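The amortization above is easy to script. A quick sketch, using the hypothetical figures from this comment (a flat 365-day year, so the totals land pennies off the quoted ones); swap in your own wattages, tariff, and NAS price:

```python
# Break-even estimate: keep the old PC running vs. buy a dedicated NAS.
# All input values below are the illustrative ones from the comment above.

HOURS_PER_YEAR = 24 * 365  # 8760

def annual_cost(watts: float, price_per_kwh: float) -> float:
    """Electricity cost of a device running 24/7 for one year."""
    kwh = watts * HOURS_PER_YEAR / 1000
    return kwh * price_per_kwh

def breakeven_years(nas_price: float, pc_watts: float,
                    nas_watts: float, price_per_kwh: float) -> float:
    """Years until the NAS purchase price is recovered in power savings."""
    savings = annual_cost(pc_watts, price_per_kwh) - annual_cost(nas_watts, price_per_kwh)
    return nas_price / savings

print(round(annual_cost(100, 0.15), 2))                 # yearly cost of the 100W PC
print(round(breakeven_years(1200, 100, 25, 0.15), 1))   # years for a $1200 NAS to pay off
```

The break-even comes out at roughly 12 years, matching the comment's conclusion.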
... 100W? Isn't that like a really bygone era? CPUs of the past decade can idle at next to nothing (like, there isn't much difference between an idling i7/i9 and a Pentium from the same era/family).
Or are we talking about ARM? (Sry, I don't know much about them.)
All devices on the computer consume power.
The CPU being the largest in this context. Older processors usually don't have as aggressive throttling as modern ones for low power scenarios.
Similarly, the performance per watt of newer processors is incredibly high in comparison, meaning they can run the same workload at much lower power levels.
Assuming a capacity equivalent NAS might cost $1200
Either you already have drives and could use them in a new NAS or you would have to buy them regardless and shouldn’t include them in the NAS price.
8 drives could go into most computers I think. Even 6 drive NAS can be quite expensive.
https://a.co/d/jcUR3yV
I bought a two-bay Synology for $270, and a 20TB HDD for $260. I did this for multiple reasons. The HDD was on sale, so I bought it and kept buying things. Also, I couldn't be buggered to learn everything necessary to set up a homemade NAS. Also also, I didn't have an old PC; my current PC is a Ship of Theseus that I originally bought in 2006.
You're not wrong about an equivalent NAS to my current PC specs/capacity being more expensive. And yes, I did spend $500+ on my NAS. And yet I also saved several days' worth of study, research, and trial and error by not building my own.
That being said, reducing e-waste by converting old PCs into Jellyfin/Plex streaming machines, NAS devices, or personal servers is a really good idea
In the UK the calculus is quite different, as it's £0.25/kWh or over double the cost.
Also, an empty Synology 4-bay NAS can be gotten for like £200 second hand. Good enough if you only need file hosting. Mine draws about 10W compared to an old Optiplex that draws around 60W.
With that math, using the NAS saves you 1.25 pence per hour, so the NAS pays for itself in about 2 years.
This ignores that the heat generated by the devices is desirable in winter so the higher heat output option has additional utility.
But the heat is a negative in the summer. So local climate might tip the scales one way or the other.
In the fall/winter in northern areas it's free! (Money that would already be spent on heating).
Summer is a negative though, as air conditioning needs to keep up. But the additional cost is ~1/3rd the heat output for most ACs (100w of heat require < 30w of refrigeration losses to move)
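That ~1/3 figure is just the reciprocal of the air conditioner's coefficient of performance (COP). A one-line sketch, assuming a COP of 3.3 (a plausible ballpark for household ACs; real units vary):

```python
# Extra electricity an AC needs to pump a given amount of waste heat outside.
# COP ~3.3 is an assumed typical value, not a measured one.
def cooling_overhead_watts(heat_watts: float, cop: float = 3.3) -> float:
    """Watts of AC input power needed to move `heat_watts` of heat outdoors."""
    return heat_watts / cop

print(round(cooling_overhead_watts(100), 1))  # roughly 30W to remove 100W of heat
```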
Ok. R5 3600, RTX 3070, and 4 spinning drives. Idles at 62W, 80W under normal load (2 concurrent streams). This is a hilariously over-specced NAS. It's all 2nd- or 3rd-life PC parts (outside of the spinning rust), so financially speaking I'm happy with the result.
The long term goal is to use it as a homelab separate from anything I need to work all the time. I want to try running some LLMs locally and use it to control some home automation stuff. That'll stress it.
Edit: so yeah, it's double yours.
The Xeon 2224G workstation with 32GB of ECC ram I got on eBay pulls 15 watts from the wall streaming 4k video on Plex.
It didn't have 6 bays but if I needed it I could move the guts to a bigger case
I mean... my old PC burns through 50-100W, even at idle and even without a bunch of spinning hard drives. My actual NAS barely breaks that under load with all bays full.
I could scrounge up enough SATA inputs on it to make for a decent NAS if I didn't care about that, and I could still run a few other services with the spare cycles, but... maybe not the best use of power.
I am genuinely considering turning it into a backup box I turn on under automation to run a backup and then turn off after completion. That's feasible and would do quite well, as opposed to paying for a dedicated backup unit.
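For anyone wanting to try the same automation: the power-on half can be done with a Wake-on-LAN magic packet, which is simple enough to build by hand. A minimal sketch; the MAC address is a placeholder, and the backup/shutdown steps are only hinted at in comments since they depend entirely on your setup:

```python
# Wake a sleeping backup box over the LAN, assuming its BIOS/NIC has
# Wake-on-LAN enabled. The MAC below is a made-up example.
import socket

def magic_packet(mac: str) -> bytes:
    """Build a WoL magic packet: 6 bytes of 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send the magic packet as a UDP broadcast on the usual WoL port."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# wake("aa:bb:cc:dd:ee:ff")
# ...then kick off the backup job (e.g. over SSH) and have the box
# power itself off when the job finishes.
```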
I see what you mean, and I have that (old PC with a bunch of 2.5" HDDs formatted as ZFS).
For me power consumption is more important than performance, so I'm looking for a lower power solution for photo sharing, music collection and backups.
Proxmox VE Helper-Scripts
The official website for the Proxmox VE Helper-Scripts (Community) repository, featuring over 400 scripts to help you manage your Proxmox Virtual Environment. (Proxmox VE Helper-Scripts)
And as usual everyone is saying NAS, but talking about servers with a built in NAS.
I'm not saying you can't run your services on the same machine as your NAS, I'm just confused why every time there's a conversation about NASs it's always about what software it can run.
The way I see it, a box of drives still needs something to connect it to your network.
And that something that can only do a basic connection costs only a little less than something that can run a bunch of other stuff too.
You can see why it all gets bundled together.
I somehow doubt that.
My last desktop PC has been retasked as an HTPC. The CPU in it requires a graphics card for the system to POST, it's currently mounted in a SFF case with barely room for two 2.5" drives, so it would either make for a shitty, difficult to service, bulky for what it does, power inefficient NAS, or I'd have to buy a new case and CPU.
My current machine is in an mATX mini-tower, there's room for hard disks and the 7700X has integrated graphics so I could haul the GPU out, but it's still kind of bulky for what you'd get.
So I'm gonna keep my Synology in service for a little while longer, then build a NAS from scratch selecting components that would be good for that purpose.
I used to have a 5700G system that I had to switch out for a 14600K system due to QuickSync passthrough.
I got my 14600K down to 55W from 75W with everything else being equal. Insane how efficient some setups can be.
My 16TB Pi sips 13W max, or 8W idle. But there's no encoding, nor enough storage, for normal work. So it's warm storage.
I want to reduce wasteful power consumption.
But I also desire ECC for stability and data corruption avoidance, and hardware redundancy for failures (Which have actually happened!!)
Begrudgingly I'm using dell rack mount servers. For the most part they work really well, stupid easy to service, unified remote management, lotssss of room for memory, thick PCIe lane counts, stupid cheap 2nd hand RAM, and stable.
They waste ~100 watts of power per device though... That stuff adds up, even if we have incredibly cheap power.
PCIe SATA card, 12/16/20 ports, PCIe SATA 3.0 6Gb, PCIe-to-SATA controller expansion card, supports SATA 3.0 devices - AliExpress
ebay.ca/itm/155132091204
INSPUR 9211-8i 6Gbps SAS LSI 2008 HBA IT Mode ZFS FreeNAS unRAID+2*SFF-8087 SATA | eBay
So, it's better if I get a normal PCIe-to-SATA card and connect them individually.
Then just RAID them through software.
Also, what are your thoughts on second-hand drives, and just monitoring them and replacing them as needed? (I'm currently saving up for good new 4TB x 6 drives lol)
With TrueNAS yes, a sata card connected to a bare drive is the preferred way. I have done it differently with enterprise hardware and virtualization but it’s not really supposed to be done that way. And ZFS is not technically “RAID” in the classic sense, but it does implement its own RAID‑like redundancy (RAIDZ and mirrors) as part of an integrated filesystem and volume manager. There are also things you can do with faster NVME drives like SLOG, L2ARC, and SPECIAL vdevs to store pool metadata. But some of these can fail and wipe out all your data if you aren’t careful. So read a lot.
Second hand drives are fine in my opinion as long as SMART is not reporting any immediate errors. Just assume you will have failures and have spares built into the zfs volume.
I’m not an expert by any stretch but I have been doing this for 10 plus years so I have some experience.
Interesting, thanks.
If you're no expert, then I'm a newbie lol.
I will try a sata card and raiding them through software.
What would you recommend for 6x4tb ?
I know there are RAIDs and mirrors; I was thinking like RAID 5 but am still unsure. I also have an IcyDock for 2.5" drives that I can RAID separately with SSDs when I have the funds.
What is your experience with raids and safest bet on old hardware, if running 24/7 with important data?
I would think that right now the sweet spot for good used drives is between 4-8TB. Check out Backblaze's drive stats for some good info about failure rates for older drives.
backblaze.com/blog/category/cl…
Yeah, RAID 5 is fine (in ZFS terms it's just called raidz or raidz1). You could also do something like raidz2 (which is essentially RAID 6, with two parity drives). There is some newer stuff in TrueNAS called dRAID which does some interesting things with the spares. It's kinda like the old RAID 5EE stuff, if you're familiar with that. Just google it and read up on it.
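For a rough sense of the trade-off between those layouts with the 6x4TB drives in question: usable capacity is just (drives - parity) x drive size. A back-of-the-envelope sketch that ignores ZFS metadata overhead and TB-vs-TiB differences, so treat the numbers as ballpark only:

```python
# Rough usable capacity for a pool of identical drives, by redundancy scheme.
# Ignores ZFS metadata/slop space and decimal-vs-binary terabytes.

def usable_tb(drives: int, size_tb: float, redundancy_drives: int) -> float:
    """raidz1 -> 1 redundancy drive, raidz2 -> 2; three 2-way mirrors -> 3."""
    return (drives - redundancy_drives) * size_tb

print(usable_tb(6, 4, 1))  # raidz1: survives one drive failure
print(usable_tb(6, 4, 2))  # raidz2: survives two drive failures
print(usable_tb(6, 4, 3))  # three mirrored pairs: one failure per pair
```

So with 6x4TB, raidz1 gives about 20TB usable, raidz2 about 16TB, and striped mirrors about 12TB.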
Safest bet on old hardware… in my opinion find some old enterprise level stuff somebody is upgrading out of. I get lots of hand-me-downs that way. This stuff is meant to run 24/7, keep running forever, and is usually upgraded before it’s really not useful to anyone. Word of warning, this stuff is generally not power efficient, or quiet for that matter. So I wouldn't be running this in my bedroom. Well unless you're cold 'cause your heater is broken and love lots of white noise 😀
As a hardware guy going on 20+ years, let me offer some basic advice. If this data is important, which you mentioned it was: RAID is NOT backup. Have separate backups. Yes, I know it's expensive, but hardware can and does fail, sometimes irrecoverably. ZFS does a good job helping with this with snapshots and the ability to sync easily. For me, I just follow the 3-2-1 rule. Yeah, it's kinda outdated, but I'm old.
The 3-2-1 rule is basically:
- 3 copies
- Primary data (on its own pool).
- Local backup (on a separate ZFS pool, ideally on different hardware). This is where ZFS replication is useful. This built into TrueNAS.
- Off‑site/cloud backup (replicated ZFS dataset or traditional backup tool like restic/Borg to cloud).
- 2 different media
- e.g., Primary on SSDs, backup on HDDs; or primary on local NAS, backup in cloud.
- 1 off‑site
- Replicate ZFS snapshots to a remote location (another site or cloud).
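The local-backup and off-site legs above map onto ZFS snapshot + send/recv. A sketch that only *builds* the command lines rather than running them; the dataset names and the `backuphost` SSH target are made up, and on a real box you'd feed these strings to a shell or `subprocess.run`:

```python
# Build the zfs commands for one replication run of the 3-2-1 scheme.
# Dataset/pool names and "backuphost" are hypothetical placeholders.
from datetime import datetime, timezone

def replication_commands(dataset, prev_snap, backup_target):
    """Return shell commands for one (optionally incremental) ZFS replication."""
    snap = f"{dataset}@auto-{datetime.now(timezone.utc):%Y%m%d%H%M}"
    cmds = [f"zfs snapshot {snap}"]
    if prev_snap:
        # Incremental send: only blocks changed since the previous snapshot.
        cmds.append(f"zfs send -i {prev_snap} {snap} | ssh backuphost zfs recv -u {backup_target}")
    else:
        # First run: full send of the snapshot.
        cmds.append(f"zfs send {snap} | ssh backuphost zfs recv -u {backup_target}")
    return cmds

for cmd in replication_commands("tank/photos", None, "backup/photos"):
    print(cmd)
```

TrueNAS wraps exactly this workflow in its built-in replication tasks, so you'd normally configure it in the UI rather than script it by hand.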
Oh and one other thing. If you are using TrueNAS be mindful there are two flavors now, TrueNAS Core and TrueNAS Scale. The interfaces are slightly different but the main differences are:
- TrueNAS Core is based on FreeBSD and is the older, more mature “classic NAS” platform, optimized for rock‑solid file serving with jails and VMs.
- TrueNAS Scale is based on Debian Linux and is designed for “scale‑out” and hyperconverged use: clustering, containers, and modern virtualization on newer hardware.
Hope this is useful….
Hard Drive Stats Archives
Backblaze regularly publishes statistics and insights based on our hard drives. Look back through all the blog posts going over the Hard Drive Stats. (Backblaze Blog)
Very informative thanks!
Seems like seagate 4-8tb is the sweet spot.
Is there any difference in the models of the segate drives? Or just the iron wolf NAS are the better choice?
Also, currently can fund all ssds for primary and I'm not that interested in read speeds. I'm more interested in a safe space for files to get stored in without fear of loss.
I have an old Dell server tower running TrueNAS Scale. Once I get a PCIe SATA card I will set it up with RAID5.
And ZFS is just a backup of the RAID, like a sync?
And then I think my move would be to get 6 Seagate drives lol
When I looked into this I found that, for TrueNAS, using ZFS with raw disks (direct access, no hardware RAID in between) is generally preferable.
I wound up flashing custom firmware to my hardware RAID card so that it would be effectively "transparent" and yield direct hardware access to the disks.
Big shout out to Windows 11 and their TPM bullshit.
Was thinking that my wee "Raspberry PI home server" was starting to feel the load a bit too much, and wanted a bit of an upgrade. Local business was throwing out some cute little mini PCs since they couldn't run Win11. Slap in a spare 16 GB memory module and a much better SSD that I had lying about, and it runs Arch (btw) like an absolute beast. Runs Forgejo, Postgres, DHCP, torrent and file server, active mobile phone backup etc. while sipping 4W of power. Perfect; much better fit than an old desktop keeping the house warm.
Have to think that if you've been given a work desktop machine with a ten-year old laptop CPU and 4GB of RAM to run Win10 on, then you're probably not the most valued person at the company. Ran Ubuntu / GNOME just fine when I checked it at its original specs, tho. Shocking, the amount of e-waste that Microsoft is creating.
Question, what's the benefit of running a separate DHCP server?
I run openwrt, and the built in server seems fine? Why add complexity?
I'm sure there's a good reason I'm just curious.
So on mine, I haven't bothered to change from the ISP-provided router, which is mostly adequate for my needs, except I need to do some DNS shenanigans, so I take over DHCP to specify my own DNS server, which is beyond the customization the ISP router provides.
Frankly, I've been thinking of an upgrade because they don't do NAT loopback, and while I currently work around that with different DNS results for local queries, it's a bit wonky. I'm also starting to get WiFi 7 devices and could use an excuse to upgrade to something more in my control.
The router provided with our internet contract doesn't allow you to run your own firmware, so we don't have anything so flexible as what OpenWRT would provide.
Short answer; in order to Pi-hole all of the advertising servers that we'd be connecting to otherwise. Our mobile phones don't normally allow us to choose a DNS server, but they will use the network-provided one, so it sorts things out for the whole house in one go.
Long, UK answer: because our internet is being messed with by the government at the moment, and I'd prefer to be confident that the DNS look-ups we receive haven't been altered. That doesn't fix everything - it's a VPN job - but little steps.
The DHCP server provided with the router is so very slow in comparison to running our own locally, as well. Websites we use often are cached, but connecting to something new takes several seconds. Nothing as infuriating as slow internet.
Oh you mean DNS server, yes ok that makes sense. Yeah I totally understand running your own.
If I understand correctly, DHCP servers just assign local IPs on initial connection, and configure other stuff like pointing devices to the right DNS server, gateway, etc
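That division of labor shows up directly in a DHCP server's configuration. A minimal dnsmasq sketch of the setup described in this thread (the interface name, address ranges, and the 192.168.1.2 Pi-hole address are all placeholder assumptions):

```
# /etc/dnsmasq.conf -- minimal DHCP sketch; all addresses are examples
interface=eth0

# Hand out local IPs from a pool, with a 12-hour lease
dhcp-range=192.168.1.100,192.168.1.200,12h

# Tell clients which gateway to use (DHCP option 3)
dhcp-option=option:router,192.168.1.1

# Point clients at your own DNS server, e.g. a Pi-hole (DHCP option 6)
dhcp-option=option:dns-server,192.168.1.2
```

This is how one box can "take over DHCP" from an ISP router: disable DHCP on the router, and every device on the LAN then learns its IP, gateway, and DNS server from your server instead.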
Gotcha! No worries. Networking gets more and more like sorcery the deeper you go.
Networking and printers are my two least favorite computer things.
True for notebooks.
(For years my home NAS was an old Asus EEE PC)
Desktops, on the other hand, tend to consume a lot more power (how bad depends on the generation). They're simply not designed to be a quiet device sitting in a corner continuously running a low-demand task. Hardware designed for much more demanding workloads has things like much bigger power supplies, which are less efficient at low load: when something is designed to put out 400W, wasting 5 or 10W is no big deal; when it's designed to put out 15W, wasting 5 or 10W would make it horribly inefficient.
Meanwhile, the typical NAS out there runs an ARM processor (known for low power consumption) or, at worst, a low-power Intel processor such as the N100.
Mind you, the idea of running your own NAS software is great (you can do way more with it than with a proprietary NAS, since it's far more flexible) as long as you put it on the right hardware for the job.
When I had my setup with an ASUS EEE PC I had mobile external HDDs plugged to it via USB.
Since my use case was long-term storage and feeding video files to a media TV box, the bandwidth limits of USB 2.0 and of HDDs rather than SSDs were fine. Back then I also had 100Mbps Ethernet, which limited bandwidth too.
Even in my current setup, where I use a Mini-PC to do the same, I still keep the storage on external mobile HDDs, and now the bandwidth limits are 1Gbps Ethernet and USB 3.0, which is still fine for my use case.
Because my use case now is long-term storage, home file sharing and torrenting, my home network follows the same principles as distributed systems and modern microprocessor architectures: smaller, faster data stores with frequently used data close to where it's used (for example, fast, smaller SSDs holding the OS and game executables inside my gaming machine, plus a torrent server inside that same Mini-PC using its internal SSD), then layered outwards with decreasing speed and increasing size (that same desktop machine has an internal "storage" HDD filled with rarely used files, and one network hop from it there's the Mini-PC NAS sharing its external HDDs containing longer-term storage).
The whole thing tries to balance storage costs with usage needs.
I suppose I could improve performance a bit more by setting up some of the space on the internal SSD in the Mini-PC as a read/write cache for the external HDDs, but so far I haven't had the patience to do it.
I used to design high-performance distributed computing systems, and funnily enough my home setup follows the same design principles (which I hadn't noticed until thinking about it just now as I wrote this).
Yeah, different hardware is designed for different use cases and generally won't work as well outside them, which is also why desktops seldom make great NAS servers (their fans will also fail from constant use, plus their design spec is for much higher power usage, so they waste a lot more power even when throttled down).
That said, my ASUS EEE PC lasted a few years on top of a cabinet in my kitchen (which is where the Internet came into my house, so the router was also there) with a couple of external HDDs plugged in, and that's a bit of a hostile environment (some of the particulates from cooking, including fat, don't get pulled out and end up accumulating there).
At the moment I just have a Mini-PC in my living room with a couple of external HDDs plugged in that works as a NAS, TV media box and home server (including a WireGuard VPN on top of a 1Gbps connection, which at peak is somewhat processor-intensive). It's an N100 and the whole thing has a TDP of 15W, so the fan seldom activates. So far that seems to be the best long-term solution, plus it's multi-purpose, unlike a proprietary NAS. It's some of the best €140 (not including the HDDs) I've ever spent.
Laptops are better, because they have an integrated uninterruptible power supply, but worse because most can't fit two hard drives internally. Less of a problem, now that most have USB3. Just run external RAID if you have to.
Arguably, a serious home server will need a UPS anyway to keep the modem and router online, but a UPS for just the NAS is still better than no UPS at all. Also, only a small UPS is needed for the modem and router. A full desktop UPS is much larger.
They make m.2 to SATA adapters that have like 10 SATA ports. A laptop motherboard in a case with one of those would be very interesting. I have plans for one but I need to buy some parts (keyboard and laptop fan).
Edit: the adapters run hot and are kind of fragile. I'd recommend having a thermal pad under it thermally coupling it to the motherboard and giving it some support.
I have an old machine I've been using as an Unraid server for years. It's an i7-3770 paired with 32GB of RAM and 4x2TB drives.
Finally upgrading it because it's just not going to keep meeting my needs, and frankly it's wicked old (might keep it as a GitLab runner or something). Finally "upgrading" by taking some old hardware (and buying some new) to build a full compute + storage setup: Proxmox (Ryzen 9 5900XT + 128GB RAM) for all the compute, and TrueNAS (Ryzen 7 3700X + 64GB RAM + 8x16TB drives [LSI SAS9211-8I] [raidz2 / 82.62 TiB usable]) for storage, with a private 10G direct link between the two (Intel X550T2BLK).
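For anyone wondering how 8x16TB lands at roughly 82.6 TiB usable in raidz2: two drives' worth of space goes to parity, the TB-to-TiB conversion eats another ~9%, and ZFS metadata and slop reservation take a few more TiB. A back-of-the-envelope check using the drive counts from the post above (the overhead accounting is approximate, not an exact ZFS sizing):

```python
# Back-of-the-envelope raidz2 capacity check for 8x16TB drives.
drives = 8
parity = 2          # raidz2 survives any two drive failures
drive_tb = 16       # marketed terabytes (10^12 bytes each)

# Data capacity before filesystem overhead: 6 drives' worth of space
data_bytes = (drives - parity) * drive_tb * 10**12
tib = data_bytes / 2**40           # convert bytes to TiB (2^40 bytes)

print(f"raw data capacity: {tib:.2f} TiB")
# ZFS then reserves space for metadata and "slop", so real usable
# capacity lands a few TiB lower -- in line with ~82.62 TiB reported.
```

The gap between the marketed "128TB of drives" and the low-80s TiB you actually see is normal, not a sign of misconfiguration.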
I'd use an old PC as a NAS, but turn it on only on demand, when needed. Which does hurt its convenience factor a little.
Note: talking about desktops.
Why would I throw it away, when I can give it to someone who needs it more, or sell it?
Because selling is always a hassle, dealing with choosy beggars and scammers, and it may not be worth much anymore for general use.
For example, my old PC is an i7 4770K... it can't run Windows 11 or play remotely recent games. I don't know anyone who could use this thing, so to save a few watts I took out the GPU, put it in eco mode, and have been using it as my Linux server.
My NUC uses 6-7W idle.
I have played around with some mini PCs (Minisforum and Beelink brands). They're neat, but they turned out to be not very reliable: two have already died prematurely, and unfortunately they are not end-user serviceable. Lack of storage expansion is an issue as well, unless you want to stack a bunch of external USB drives on top of each other.
The main concern with old hardware is probably power draw/efficiency; depending on how old your PC is, it might not be the best choice. But remember: companies get rid of old hardware fairly quickly, so it can be a good choice and might be available dirt cheap or even free.
I recently replaced my old Synology NAS from 2011 with an old Dell Optiplex 3050 workstation that companies threw away.
The system draws almost twice the power (25W) compared to my old Synology NAS (which only drew 13W, both with 2 spinning drives), but the increase in processing power and flexibility using TrueNAS is very noticeable. It also allowed me to replace an old Raspberry Pi (6W) that only ran Pi-hole.
So overall, my new home-server is close in power draw to the two devices it replaced, but with an immense increase in performance.
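The "close in power draw" claim is easy to sanity-check: 25W continuous versus the 13W + 6W pair it replaced, over a year. A quick sketch (the €0.30/kWh electricity price is a made-up example, not from the post):

```python
# Annual energy for always-on devices: watts * hours / 1000 = kWh.
HOURS_PER_YEAR = 24 * 365

def annual_kwh(watts):
    return watts * HOURS_PER_YEAR / 1000

old = annual_kwh(13) + annual_kwh(6)   # Synology NAS + Raspberry Pi
new = annual_kwh(25)                   # Optiplex running TrueNAS + Pi-hole

print(f"old setup: {old:.0f} kWh/yr, new setup: {new:.0f} kWh/yr")
# Example cost of the difference at an assumed €0.30/kWh price:
print(f"difference: {new - old:.2f} kWh/yr, ~€{(new - old) * 0.30:.2f}/yr")
```

A ~6W gap works out to around 50 kWh per year, i.e. a modest cost for the jump in capability.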
I've made a decent NAS out of a Raspberry Pi 4. It used USB to SATA converters and old hard drives.
My setup has one 3TB drive and two 1.5TB drives. The 1.5TB drives form a 3TB volume using RAID 0, which is then mirrored with the 3TB drive for redundant storage.
Yes it's inefficient AF but it's good enough for full HD streaming so good enough for me.
I'm too stingy to buy better drives.
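The layout described above (stripe the two 1.5TB drives, then mirror the stripe against the 3TB drive) is a nested RAID you can express with mdadm. A dry-run sketch that only prints the commands; the device names (/dev/sda etc.) are placeholders:

```shell
#!/bin/sh
# Dry-run sketch of the nested RAID described above.
# Device names are placeholders; 'run' only prints, it does not execute.
run() { echo "+ $*"; }

# 1. Stripe the two 1.5TB drives into one ~3TB device (RAID 0)
run mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb

# 2. Mirror the stripe against the real 3TB drive (RAID 1)
run mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/md0 /dev/sdc
```

Worth noting: losing either 1.5TB drive kills the whole stripe, so the redundancy here is really just the 3TB mirror leg.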
The moment the Windows installer detected it, a blue screen ended the installation.
But a Linux installation worked and afterwards it was even possible to disable the damaged hardware permanently.
The laptop still runs without further problems.
Don't throw away your old PC
Literally first-world problems, right? There's absolutely no need to tell that to someone who doesn't live in a rich country. Old gear always finds some use or gets sold/donated away.
DRAM prices are spiking, but I don't trust the industry's reasons why
There are a lot of reasons to be skeptical. (Adam Conway, XDA)
cartel that has previously done cartel things continues to do more cartel things
more at 11
The Memory Cartel: we can give you that feeling of childhood wonder, or, erase those embarrassing things keeping you awake at night. Or... we can make your enemies remember things that will haunt them forever... for a price.
OR:
The Ram Cartel: leather, bears, tops, chains and spikes, their safe word is 'disestablishmentarianism'
Just built a rig to give me enough raw power to move however I need to when this all blows up. Went with a Ryzen 5000-series CPU, DDR4 RAM, a godawful motherboard, and an Intel B580 GPU. It's cheap, but I now have more options.
Too bad I couldn't get the OPNsense VM working properly, so I'm stuck keeping the Firewalla running. But that may not matter, as the Nazis want to kill the internet anyway. We may be forced to rely on wonky mixnets like Reticulum.
For example, OpenAI's new "Stargate" project reportedly signed deals with Samsung and SK Hynix for up to 900,000 wafers of DRAM per month to feed its AI clusters, which is an amount close to 40% of total global DRAM output if it's ever met. That's an absurd amount of DRAM.
Will these even be useful on the second hand market, or are these chips gonna be on specialized PCBs for these machines?
Will these ever be useful on the second hand market
Nope, not ever. Even if it's standard form factor gear.
They will be disposed of ("recycled"), since that grants the largest asset depreciation tax break, and is the easiest economically. The grand majority of all data center gear gets trashed instead of reused or repurposed through the second hand market.
Source: I used to work at a hardware recycling facility, where much of the perfectly good hardware was required to be shredded, down to the components, because of these stipulations. It's such a waste.
Dumping buckets of tens of TB worth of modern RAM into a shredder is... infuriating.
I think when the economics of destroying a thing is better than reusing a thing, we should maybe have some sort of incentives toward reuse.
I get that the logistics of setting up what's basically a secondary supply chain is difficult, but I've got to believe it would be for the better.
I get that the logistics of setting up what’s basically a secondary supply chain is difficult, but I’ve got to believe it would be for the better.
hear me out: an org that guaranteed destruction of any residual data and ensured that no component or resource was wasted, was responsible nationwide for the collection of all e-waste into resource streams OR repair for reuse.
I'm just saying, techpriests might make me reevaluate my views on organized religion.
The amount of labor that would go into it really isn't that high.
This is what distribution is for.
The company that owns the hardware is not the company that recycles it. The recycler could make a profit by reselling these components, but they're not allowed to.
Many of these components still have to be pulled out so that labor cost is already a wash. The additional labor cost of testing, selling, packaging, and shipping is baked into the price in the secondary market.
Not everything is worth being resold, but many things are and those things are often not allowed to be resold due to destruction contracts.
The NAND market is an effective monopoly that has been caught price fixing in the past. They desperately want to keep prices as high as they can so they tightly control supply to prevent having any excess product. This screws everyone over as soon as there's a spike in demand that they failed to account for.
Instead of just keeping a consistent supply and allowing prices to drop from competition, we end up with a price rollercoaster that peaks every few years then crashes back down again. The severity is just higher than usual due to the higher demand from data centers.
The market desperately needs a new player that just consistently creates supply instead of playing stupid games, but the barrier to entry is too high.
Fifteen Years Together and Her Tone Still Hits Like a Chalkboard
So yesterday I’m just trying to run a simple errand at the local store. This place is pet friendly, which is the only reason I tolerate it, so I had my dog with me. He’s the friendly one in the family, obviously.
And who’s standing there at her job like a plot twist I didn’t ask for?
My ex-wife. Fifteen-plus years of history wrapped in one human speedbump.
She spots my dog and suddenly she’s all sunshine, petting him like we didn’t survive a whole era together. My dog loves it, because he’s a dog and he’s smart enough not to get emotionally involved. Meanwhile, I’m standing there doing my usual routine: stay pleasant, stay tolerable, don’t let the annoyance leak out of my face.
My current wife talked to her more than I did, which is probably for the best. I kept it tight. Didn’t say much. Didn’t need to. I was just trying to get through the moment without my eye twitching.
But here’s the part that hit me like a bad flashback:
After all those years, her tone still grates on me. It’s unreal. It’s that chalkboard-scrape sound that makes your molars hurt. It’s that dial-up internet scream from the 90s, the one that made the whole house vibrate before you could connect for five minutes of slow loading misery. Somehow her voice still has that frequency that goes straight to the spine.
It wasn’t emotional. It wasn’t dramatic. It wasn’t even awkward.
It was just… noisy. Not loud, just that same old tone that reminds me exactly why life is better now.
We walked out. My wife and I joked about it. My dog? He just wanted more scratches. Must be nice.
Anyway, that’s how my quiet shopping trip turned into an unexpected reunion with the soundtrack of my past. Life really does throw curveballs, even the annoying ones.
Tech-tinkering geocacher who questions everything and dodges people on a purpose. Introverted agnostic, punk at heart, and a self-taught dev who learned things the hard way because nothing else ever sticks.
Datacenters in space are a terrible, horrible, no good idea.
There is a rush for AI companies to team up with space launch/satellite companies to build datacenters in space. TL;DR: It's not going to work. (Taranis)
Reticulum: Unstoppable Networks for The People - markqvist's talk at 38C3
Reticulum is a cryptography-based networking stack for building local and wide-area networks with readily available hardware. (media.ccc.de)
Sunday, November 30, 2025
The Kyiv Independent [unofficial]
We are looking for 500 supporters of the truth and independent press. Can we count you in?
Olga Rudenko, editor-in-chief
at the Kyiv Independent
Russia’s war against Ukraine
High-rise residential building on fire in Vyshhorod, Kyiv Oblast, following a Russian drone attack on Nov. 30 (DSNS Poltava / Facebook)
1 killed, 11 injured in Russian drone attack on Kyiv Oblast. One person was killed and 11 people were injured in Vyshhorod district as Russia launched a drone attack on Kyiv Oblast overnight on Nov. 30.
Americans showing ‘constructive approach’ in peace talks, Zelensky says as Ukrainian delegates arrive in US. Ukrainian officials will meet with Marco Rubio, Steve Witkoff and Jared Kushner in Florida on Nov. 30. Zelensky said a final agreement could be ready “in the coming days.”
Zelensky’s ex-chief of staff Yermak says he’s ‘going to the front’ after resigning amid corruption probe. Former Presidential Office head Andriy Yermak said he intends to go to the front line after resigning from his post amid a major corruption investigation, the New York Post reported on Nov. 28, citing a letter he sent the outlet.
‘Successful’ Ukrainian naval drone strike disables 2 Russian shadow fleet tankers, source says. The operation targeted ships that, according to the source, could have transported nearly $70 million worth of oil and helped Moscow bypass international sanctions.
Ukraine attacks one of southern Russia’s largest oil refineries, sparks fire.
Ukraine’s military targeted the Afipsky Oil Refinery in Krasnodar Krai — one of southern Russia’s largest refineries — overnight on Nov. 29, the General Staff of the Ukrainian Armed Forces reported.
Your contribution helps keep the Kyiv Independent going. Become a member today.
‘Half of Kyiv without electricity’ — 2 killed, 38 injured in ‘serious’ Russian attack on capital. Russia launched a mass missile and drone attack against Kyiv overnight on Nov. 29, killing two people and injuring 38 others, including a child, Ukraine’s State Emergency Service reported.
Russian drone violated Moldovan airspace during 10-hour attack on Kyiv, Chisinau says. Russian drones violated Moldova’s airspace during Moscow’s mass overnight attack against Kyiv, Moldovan President Maia Sandu said on Nov. 29.
‘Time to update’ Ukraine’s defense documents, Zelensky says after meeting top military, intelligence officials. President Volodymyr Zelensky met with Defense Minister Denys Shmyhal and military intelligence chief Kyrylo Budanov on Nov. 29 and ordered a revision of Ukraine’s core defense documents.
Drone attack forces oil terminal in Russia’s Novorossiysk to halt all loading operations. Naval drones struck the Caspian Pipeline Consortium’s marine terminal in the Russian port city of Novorossiysk on Nov. 29, forcing the facility to suspend oil shipments, the company said.
Five ways to keep Ukraine in your news feed
The world increasingly turns its attention to Russia’s war against Ukraine only when a new round of peace negotiations begins. Here on the ground, however, the war doesn’t slow down between those waves of talks.
Photo: Lisa Litvinenko/The Kyiv Independent
Independent journalism is never easy, and it’s even harder in wartime
Yet we can do it without paywalls, billionaires, or compromise — because of our community. Help us reach 25,000 members by the end of 2025.
International response
Russian victory would cost Europe twice as much as supporting Ukraine, study finds. A Russian military victory in Ukraine would cost Europe twice as much as a Ukrainian victory, according to a new study by Corisk and the Norwegian Institute of International Affairs published on Nov. 25.
Zelensky, Macron to hold talks on ‘durable peace’ in Paris Dec. 1.
The leaders will discuss “the conditions of a just and durable peace” in Ukraine, according to French President Emmanuel Macron’s office.
In other news
Russia declares Human Rights Watch an ‘undesirable organization’. Russia’s Ministry of Justice designated Human Rights Watch an “undesirable organization” on Nov. 28, effectively banning the group from operating in the country.
Daughter of former South African president resigns from parliament amid investigation into Russian military recruitment scheme. Duduzile Zuma-Sambudla, the daughter of former South African President Jacob Zuma, resigned from parliament after being accused of helping lure 17 South African men to fight for the Russian military in Ukraine, her party announced on Nov. 29.
This newsletter is open for sponsorship. Boost your brand’s visibility by reaching thousands of engaged subscribers. Click here for more details.
Today’s Ukraine Daily was brought to you by Lucy Pakhnyuk, Dmytro Basmat, Yuliia Taradiuk, Tim Zadorozhnyy, Sonya Bandouil, and Abbey Fenbert.
If you’re enjoying this newsletter, consider joining our membership program. Start supporting independent journalism today.
From niche topic to technology policy: #cnetz adopts a new sound – Jarzombek sketches the "Deutschland-Stack"
At the Konrad-Adenauer-Haus in Berlin this weekend, cnetz is making a course change – programmatic as well as stylistic. Opening the annual general meeting, spokesman Prof. Jörg Müller-Lietzkow announced that the network wants to become "louder again," get more involved, and establish its own sound in digital policy. Network policy, so his message went, is yesterday's debate. From now on it is about digital policy as technology policy – and about how Germany reorganizes its digital infrastructure, its sovereignty, and its capacity for innovation.
cnetz wants back into the engine room of politics
Müller-Lietzkow left no doubt that the network intends to reposition itself after a phase of relative radio silence. Many people, he recounted, had asked in recent years why so little was heard from cnetz. The answer: the time of quiet background work is over; the network should once again be an audible voice – even when that does not please everyone in Berlin.
The ambition is high: away from detail disputes over upload filters (warm greetings in the direction of Axel Voss) or individual social media rules, toward the big lines of technology policy – technological sovereignty, digital infrastructure, AI use in government, a European platform economy. Müller-Lietzkow almost demonstratively declares the term "network policy" a historical category. Anyone talking about digitalization today, he argues, must think in systems: stack, data spaces, cloud, AI agents, approval processes.
"Without cnetz, this ministry would not have existed"
The second focus of the day: cnetz's role in building the new Federal Ministry for Digital Affairs and State Modernization. Thomas Jarzombek, Parliamentary State Secretary in Kersten Wildberger's ministry, traced that line explicitly.
He praised the network as a "thorn in the side" of the CDU/CSU: without the continuous pressure and substantive input from cnetz, Jarzombek said, the standalone digital ministry would not have existed in this form. That was more than a polite formula – Jarzombek pointed to the long shared history, from the early "digital policy association" to strategy debates in the Merkel era.
The new ministry, he said, marks a paradigm shift: away from scattered responsibilities and blocking departmental egoism, toward a body that can set standards, bundle IT projects, and define priorities.
The Deutschland-Stack as basic digital infrastructure
Jarzombek devoted the substantive core of his speech to the "Deutschland-Stack" – a basic digital infrastructure for government and business. The goal, he said, is an end-to-end architecture in which central principles such as "APIs first," reuse of components, uniform data formats, and portability are designed in from the start.
He highlighted three elements in particular:
- E-wallet for digital identities: The current eID on the national ID card is to be replaced by an everyday-usable wallet that citizens can use for administrative processes and companies can use for authentication and signatures. Usage should become more accessible – but technically robust enough for government and business to build on.
- Register modernization: Instead of citizens and companies supplying the same data over and over, registers should talk to each other. Applications – from BAföG student aid to specialized administrative procedures – should in future be checked automatically to see whether the requirements are met.
- AI-supported management of large projects: As a showcase, Jarzombek named the planned approval platform, which will use AI agents to accompany large projects such as bridges, rail lines, power lines, or hydrogen infrastructure. Today, planning approval procedures take five to eight years – the first version of the platform is intended to eventually cut that time in half.
AI agents against Germany's approval backlog
Jarzombek got especially concrete about the approval platform. The federal government is providing over 100 million euros to develop, through competition, solutions that map entire approval processes end-to-end digitally.
Application files of up to 20 binders are first to be checked automatically for completeness and contradictions. The platform flags for caseworkers where expert reports and application components do not match up – this step alone could shave several months off a procedure.
The potential becomes even clearer in public participation: thousands or even hundreds of thousands of objections, until now carried into the authorities as a flood of paper, could be digitally captured, clustered, and structured by argumentation pattern within a few hours. The system delivers not only an overview of which arguments are raised how often, but also templates for the legal assessment and automatic generation of the planning approval decision.
The promise is ambitious: Germany's approval backlog is to be fought no longer with more personnel but with more algorithm – without surrendering political responsibility.
Special paths under pressure
Jarzombek announced an approval requirement: in future, all ministries will have to register and coordinate their large IT projects with the digital ministry.
The logic behind it: instead of differently built specialized procedures, portals, and platforms, reusable building blocks should emerge that work nationwide.
What hardly anyone says openly came through in the discussion: whoever opts out of the stack risks being left behind technologically and organizationally.
Digital sovereignty: more than symbolic politics
Another common thread of the conference at CDU headquarters was the question of digital sovereignty. Jarzombek described how dependent Germany and Europe are on US cloud providers and platforms – and how hard it is for European challengers to even be perceived as a serious option in public tenders.
The motto: no isolation, no crude protectionism, but a deliberate strengthening of European providers and architectures. The economic argument is obvious: if value creation will increasingly come from software and services rather than hardware, the platform question decides future prosperity.
That also means shaping regulation so that it enables rather than prevents innovation – for example via the "Digital Omnibus," intended to make data protection and AI rules more manageable for small and medium-sized enterprises.
A network reports back
In the end there is a double signal: cnetz is back as a political actor – with the ambition not merely to comment on technology policy debates but to actively shape them. And the digital ministry is setting an agenda with the Deutschland-Stack, the approval platform, and the e-wallet that goes well beyond symbolic politics.
For companies, administrations, and the federal states this means: the comfort zone of "pilot projects" is over. Those who do not start fitting into this architecture now – technically, organizationally, and mentally – will find themselves in a few years in a parallel world in which old special paths mean a very real competitive disadvantage.
The new sound that cnetz claims for itself is thus also a stress test for the digital republic: whether the fine-sounding announcements become reliable infrastructure will be decided not in talk shows but in procurement offices, specialized procedures, and approval authorities – exactly where the Deutschland-Stack is meant to take hold.
Windows drive letters are not limited to A-Z
If you want €:\, you can have it, sort of. (ryanliptak.com)
The Linux approach did take some getting used to, of course, but mounting drives to folders just makes too much sense. The only qualm I've had with it is when the drive doesn't get mounted and stuff gets written to that folder anyway, which, AFAIK, isn't possible in Windows.
Also, tbf (and balanced), windows also supports mounting drives to folders iirc, it's just a weird way to do it.
like this
rash likes this.
One (contrived) example would be to have a drive that doesn't have any installed file system filters on it. Filters being the hooks that Windows, antivirus software, etc. have that intercept file writes and such. Could make it much faster on Windows for that use case. I can see custom software using that drive.
Contrived? Definitely. But potentially useful. I can see it working similarly to something MS has in testing: a file system that is super fast but limited in features – can't seem to find it atm…
Edit: Found it. Dev drive via ReFS.
So? Who cares? Drive letters were always a dumb idea.
Also, obligatory "get your butt off of windows, switch to Linux."
Drive letters are also used for removable media (floppy disks, CD/DVD drives, magneto-optical drives, etc.) and network drives, not just fixed disks (hard drives).
It's just an easy way to specify one disk from another.
This behavior is actually in line with what I'd expect, as Unicode support in Windows predates UTF-16, so Windows generally does not handle surrogate pairs and instead operates almost exclusively on WTF-16 code units directly.
So it's just straight UCS-2, and the software does enforce that, pretty much the opposite of "WTF-16".
Edit: Pretty sure "modern" (XP+ I think) Windows actually does enforce UTF-16 validity in the system, but there's always legacy stuff from the NT4/2K era that might turn up.
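The UCS-2 vs. WTF-16 disagreement comes down to lone surrogates: WTF-16 tolerates a surrogate code unit with no partner, while strict UTF-16/UTF-8 reject it. A minimal sketch of the distinction, using Python only for illustration (its `str` type tolerates lone surrogates much as Windows filenames do):

```python
# A lone high surrogate: a valid WTF-16 code unit,
# but not a valid Unicode scalar value.
lone = "\ud800"

# Strict UTF-8 (like strict UTF-16) rejects it...
try:
    lone.encode("utf-8")
except UnicodeEncodeError:
    print("strict UTF-8 rejects the lone surrogate")

# ...while the 'surrogatepass' handler encodes it anyway,
# producing the WTF-8 byte form of U+D800.
wtf8 = lone.encode("utf-8", "surrogatepass")
print(wtf8)  # b'\xed\xa0\x80'

# A proper surrogate *pair* is fine either way: it denotes
# a real code point outside the BMP.
pair = "\U0001F600"  # emoji, encoded as D83D DE00 in UTF-16
print(pair.encode("utf-16-le"))  # b'=\xd8\x00\xde'
```

A system that validates (rejecting the lone surrogate) is behaving as strict UTF-16; one that passes it through unchanged is operating on raw WTF-16 code units.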
Remember these soldiers filmed 3 days ago murdering two surrendered palestinians ? Ben-Gvir just promoted their officer
From this source, the soldiers were interrogated for 5 hours, and were then released without conditions. Their weapons were not confiscated, and they returned to their unit.
And if Israel doesn't want "terrorists", then it simply has to accept the Oslo accords; it could have been at peace even before the '90s if it weren't so selfish and greedy (it doesn't care about al-Aqsa, for example). Sadistic abusers playing the victims.
mecaforpeace.org/one-palestini…
Ben-Gvir promotes officer whose soldiers shot dead surrendered Palestinians
A day after Border Police officers shot two Palestinians dead after they had raised their hands, National Security Minister Ben-Gvir visited their unit's base to 'strengthen and hug heroic fighters' and announce the promotion of their commander. Josh Breiner (Haaretz)
like this
NoYouLogOff [he/him, they/them], rainpizza, davel, Ayache Benbraham ☭🪬, ToxicDivinity [comrade/them], REEEEvolution, atomkarinca, ☆ Yσɠƚԋσʂ ☆, DeadWorld, Jin008, Bart, Malkhodr, KrasnaiaZvezda, IsThisLoss [comrade/them], Cowbee [he/they], microphone900, LVL, GreatSquare, Ashes2ashes, ExtimateCookie [he/him], Gil Wanderley, ?Geektragedy, thefluffiest, Commiejones, Limitless_screaming, zeb, Maeve, TheTux, Saklas, Apollonian, RedWizard [he/him, comrade/them], Lenin's Dumbbell, stink, RedCat, mufasio, senseamidmadness, mathemachristian [he/him], CascadeOfLight [he/him], HarrierDuBard, ComradZoid, anarchoilluminati [comrade/them], MusclesMarinara, Verenand, ikilledtheradiostar [comrade/them, love/loves], cwtshycwtsh and Nimux2 like this.
like this
Ayache Benbraham ☭🪬, stink, mufasio and mathemachristian [he/him] like this.
Meta Platforms has banned political advertising on its social media, such as Facebook, Instagram and Threads. Alphabet has banned it across all of its channels, such as Google and YouTube. Information from the Swedish Election Authority (Valmyndigheten) is also covered by the ban on Meta's social media. This makes it harder for the Election Authority to inform the public about next year's election.
Rob Moerbeek, a living institution, has passed away
Rob Moerbeek started working at the Central Office of UEA in Rotterdam in 1969. Everyone who ever visited the office surely knows and remembers his unassuming kindness. His last working day there was 7 November 2025. Three weeks later he passed away at the age of 89.
Datacenters in space are a terrible, horrible, no good idea.
Datacenters in space are a terrible, horrible, no good idea.
There is a rush for AI companies to team up with space launch/satellite companies to build datacenters in space. TL;DR: It's not going to work. Taranis
like this
Maeve likes this.
Technology reshared this.
the octo's present at the end of '25, amid torments and wholly cosmic reflections
Last night I truly set a brand-new negative record, going to bed at half past two and… falling asleep past four, or something like that; because last night, like the other night, I was damnably tormented, and lately there's just no reasoning with me about it… Caught in the trap of involuntary thinking instead of managing to fall asleep, though, […]
octospacc.altervista.org/2025/…
the octo's present at the end of '25, amid torments and wholly cosmic reflections - fritto misto di octospacc
Last night I truly set a brand-new negative record, going to bed at half past two and... falling asleep past four, or something like that; because sminioctt (fritto misto di octospacc)
The human cost of renewables: Why Australia should build solar here
cross-posted from: lemmy.sdf.org/post/46467998
With the renewable energy transition underway in Australia, the higher than expected uptake of solar panels has human rights groups concerned about links to Uyghur forced labour in the supply chain. As Australia looks into developing its own solar panel industry, rights groups say government and industry should work to ensure the clean energy transition isn't at the cost of freedom. [...]
Without a domestic supply chain, though, Australia is importing around 90 per cent of its solar panels from China.
Ramila Chanisheff, President of the Australian Uyghur Tangritagh Women's Association, says her people are being forced to make them.
“We know that the biggest industry that is complicit in Uyghur forced labour is the solar industry or the wind turbine industry or the EV vehicles.”
Since 2016, the Chinese government has reportedly kidnapped and detained millions of Uyghur people in the Xinjiang province, known to its indigenous Uyghur population as East Turkistan.
In what was officially described as an effort to combat extremism, around one million members of the majority Muslim Uyghur minority were sent to so-called re-education centres between 2017 and 2019.
Evidence and testimony from ex-detainees reveals torture and political indoctrination, forced sterilisation and drugging, as well as food deprivation to punish those who showed resistance.
An official Chinese government report published in November 2020 documents the “placement” of 2.6 million minority citizens in farms and factories within the Uyghur Region and across the country through state-sponsored initiatives.
[...]
“We do have credible evidence, and Uyghurs who have spoken about their family members who've been taken into the concentration camps, which, research has come out to show, are turned into labour camps. All those Uyghurs are being put into forced labour within East Turkistan, or Xinjiang, and/or being trafficked to mainland China to do the work.”
[...]
Australia has poured billions into solar power and green manufacturing and the Australian Renewable Energy Agency is currently funding feasibility studies for new domestic polysilicon production facilities.
But for now, with a few small exceptions, Australia still imports most of its solar panels from China.
Fuzz Kitto is the co-founder of Be Slavery Free, which works to raise awareness and end modern slavery.
“The conflict between climate and human rights commitment has led investors to feel that they've got no choice but to invest in companies sourcing, or connected to, the Xinjiang region despite the human rights abuses that are there. And even though the experts say that there's enough outside of that region to supply the United States, Europe and leading countries in their needs for solar produced electricity, it is certainly not being transparent about where these are coming from. In fact, quite opaque sometimes and a lot of greenwashing.”
[...]
To make solar panels you need solar-grade polysilicon, which is made from silica sand produced from quartz.
China manufactures around 95 per cent of the global supply of polysilicon, much of it made in factories with links to forced Uyghur labour.
According to the Australian Mining Review, Australia is the largest silica sand exporter in the Asia-Pacific region, with most of our exports going to Chinese markets.
[...]
Fuzz Kitto says we should be making it here.
“I think one of the great difficulties is that people think that there are no alternatives and now there are a growing amount of that. The thing is that in Xinjiang there are the sands that produce the polysilicon. So to produce poly silicons, basically you need cheap electricity and you need sands of that quality. We do have sands of that quality in Australia, not quite of the standard of Xinjiang. In fact, we export sand to China for the making of polysilicons, which is just incredible. Why we are not producing an industry in Australia of making them is beyond us.”
[...]
UEA did not manage to sell its building
The planned sale of UEA's Central Office has so far not gone through, because the prospective buyer failed to pay the deposit. UEA nevertheless hopes that the sales contract can be signed after the turn of the year. A plan for the future of the book service is still lacking.
Companies swamped by 3,000 hours of paperwork to tap EU climate funds
Companies swamped by 3,000 hours of paperwork to tap EU climate funds
Of the €7.1bn awarded from the bloc's flagship innovation programme for clean tech, only 5% has been paid out. Alice Hancock (Financial Times)
thisisbutaname likes this.
Reticulum: Unstoppable Networks for The People - markqvist's talk at 38C3
Reticulum: Unstoppable Networks for The People
Reticulum is a cryptography-based networking stack for building local and wide-area networks with readily available hardware. Reticulum c... (media.ccc.de)
Technology reshared this.
Power surge: law changes could soon bring balcony solar to millions across US | Tweaks to state laws mean many Americans will be able to benefit from small, simple plug-in solar panels
Balcony solar panels are now widespread in countries such as Germany – where more than 1m homes have them – but have until now been stymied in the US by state regulations. This is set to change, with lawmakers in New York and Pennsylvania filing bills to join Utah in adopting permission for the panels, with Vermont, Maryland and New Hampshire set to follow suit soon.
Power surge: law changes could soon bring balcony solar to millions across US
Tweaks to state laws mean many Americans will be able to benefit from small, simple plug-in solar panels. Oliver Milman (The Guardian)
Many Fighting Climate Change Worry They Are Losing the Information War
Shifting politics, intensive lobbying and surging disinformation online have undermined international efforts to respond to the threat.
like this
adhocfungus likes this.
You basically see two pro-fossil fuel campaigns on social media:
- In right-leaning spaces, they say “climate change is a hoax, wind turbines kill birds”
- In left-leaning spaces, they say “we’re all doomed, it’s already over, just give up”
It’s an extremely dangerous two-pronged assault, because any information at all can be catalyzed into inaction:
- “Things are getting worse” can be treated as confirmation of propaganda efforts or confirmation that we’re screwed
- “Things are getting better” can be treated as admission of overreacting or dismissed as “too little, too late”
You're right
I'm very much in the left camp and I do fear it's too late. I fear that in the next five decades millions, if not billions, will die and that we might actually face a societal collapse.
Governments don't give a shit about climate, they're too dumb
Companies don't give a shit about climate because their owners direct the actions and they only care about money
So how exactly should I be positive and have a "let's go fix this!" attitude?
I'm tired, really really tired
So how exactly should I be positive and have a "let's go fix this!" attitude?
Fix what ? The civilization hell bent on destroying the biosphere and collapsing civilization ?
Climate change is the answer; just try not to be at the front of the queue, try not to make it worse with your actions and how you vote, and enjoy life as best you can. That's about all a sane person can do.
nationalobserver.com/2024/06/1…
Rees bluntly states, “the human enterprise is effectively subsuming the ecosphere” and “wide-spread societal collapse cannot be averted — collapse is not a problem to be solved, but rather the final stage of a cycle to be endured.”
Sri Lanka’s capital hit by floods as cyclone death toll nears 200
The climate crisis has affected storm patterns, including the duration and intensity of the season, leading to heavier rainfall, flash flooding and stronger wind gusts.
Sri Lanka’s capital hit by floods as cyclone death toll nears 200
Hundreds of people still missing after heavy rain and mudslides in country's deadliest natural disaster for years. Guardian staff reporter (The Guardian)
like this
thisisbutaname likes this.
Glitch leaves most public toilets at Tokyo's Haneda Airport unable to flush
Turkey condemns Ukrainian strikes on Russian oil tankers off Black Sea coast
The Turkish Foreign Ministry spokesman has condemned Ukrainian drone attacks on two Russian “shadow fleet” oil tankers in the Black Sea
Archived version: archive.is/newest/independent.…
Disclaimer: The article linked is from a single source with a single perspective. Make sure to cross-check information against multiple sources to get a comprehensive view on the situation.
Tropical storm deaths top 600 in South-east Asia; over 4 million people affected
Relief efforts for tens of thousands of displaced people continued over the weekend.
Archived version: archive.is/newest/straitstimes…
Disclaimer: The article linked is from a single source with a single perspective. Make sure to cross-check information against multiple sources to get a comprehensive view on the situation.
Tropical storm deaths top 600 in South-east Asia; over 4 million people affected
Relief efforts for tens of thousands of displaced people continued over the weekend. Read more at straitstimes.com. (ST)
FriendBesto
in reply to King • • • I watch a YT channel that covers and researches the history of Wales, and on that somewhat narrow topic alone he has found some ridiculous mistakes on Wikipedia. There are tons, but few people are aware, as they may lack sufficient knowledge or background to realize how wrong they are. AI will surely make that problem worse. I have caught ChatGPT being wrong numerous times on topics within my wheelhouse. When I tell it it's wrong, it "apologizes," corrects itself and just adds what I told it. Well, if it had found the data before, then why does it have to wait until it is corrected? If kids use this for school, they are so fucked.
Who wants to put glue on their pizza?
kalkulat
in reply to King • • • Finding inconsistencies is not so hard. Pointing them out might be a -little- useful. But resolving them based on trustworthy sources can be a -lot- harder. Most science papers require privileged access. Many news stories may have been grounded in old, mistaken histories ... if not on outright guesses, distortions or even lies. (The older the history, the worse.)
And since LLMs are usually incapable of citing sources for their own (often batshit) claims at all -- where will 'the right answers' come from? I've seen LLMs, when questioned again, apologize that their previous answers were wrong.