India orders smartphone makers to preload state-owned cyber safety app
like this
ammorok likes this.
Technology reshared this.
DeepSeek-V3.2-Speciale, built for agentic work, just released
DeepSeek has released V3.2, replacing the experimental version. There are two main models, open as always, which can be downloaded from Hugging Face:
- V3.2: General-purpose, balanced performance (GPT‑5 level)
- V3.2‑Speciale: Specialized for complex reasoning (Gemini‑3.0‑Pro level)
V3.2 can now "think" while using tools (like searching the web, running code, or calling APIs). This makes AI assistants more transparent and better at multi‑step tasks. You can choose thinking mode (slower but more thorough) or non‑thinking mode (faster for simple tasks).
Key improvements are better reasoning transparency with the model explaining the steps when using tools, and stronger performance on benchmarks.
deepseek-ai/DeepSeek-V3.2 · Hugging Face
We’re on a journey to advance and democratize artificial intelligence through open source and open science.huggingface.co
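A minimal sketch of grabbing the open weights with the Hugging Face CLI, assuming the repo id from the card above (the full checkpoint is very large, so treat this as a pointer rather than a recipe):
pip install -U "huggingface_hub[cli]"
huggingface-cli download deepseek-ai/DeepSeek-V3.2 --local-dir ./DeepSeek-V3.2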
Technology reshared this.
The Consumer Safety Technology Act – what could this mean for the private sector?
Technology reshared this.
Can't seem to find the actual article, so I'll just engage with this small paragraph here.
Capitalism needs to be regulated (or better yet, replaced). Given that the US is currently experiencing the effects of unfettered capitalism (fascism, bribery, oligarchy, price gouging, monopolization, market collusion, just to name a few), I'm for more oversight.
However, the current administration and current Congress are both generally disinterested in actual regulation and, in my opinion, unqualified to implement something like AI-powered guardrails. It's just the whole "blockchain everywhere" debacle all over again.
Furthermore, who would develop and maintain such a system? There would almost certainly be bids from the usual suspects (i.e. billionaires) who would "definitely develop it in good faith, trust me bro." They definitely wouldn't use that kind of access to hamstring the bot that's supposed to be regulating them. /s
Rather than just putting a bot in charge, how about we just make the wealthy pay their fair share? How about strong legislation that prevents fraudulent transactions and mergers? How about meaningful punishments that deter bad actors, rather than slaps on the wrist that are just "the cost of doing business?"
We don't need robots and software, we need sensible legislation.
MKBHD's Panels wallpaper app is shutting down
Panels is shutting down
Panels will shut down on December 31, 2025. Thank you for being part of the journey.panels.art
adhocfungus likes this.
‘The new price of eggs.’ The political shocks of data centers and electric bills
PressReader.com - Digital Newspaper & Magazine Subscriptions
Digital newsstand featuring 7000+ of the world’s most popular newspapers & magazines. Enjoy unlimited reading on up to 5 devices with 7-day free trial.www.pressreader.com
like this
toothpaste_sandwich likes this.
Technology reshared this.
Can't really help you. I got through to the article, but the page doesn't archive well and copying text on mobile is a pain.
Just try again?
bostonglobe.com/2025/11/30/bus…
The political shocks of data centers and electric bills
Across the country, Democrats have seized on rising anxiety over electricity costs and data centers in what could be a template for the 2026 midterm elections.The Boston Globe
Netflix kills casting from phones
Casting support is still available on older Chromecast devices or TVs that support Google Cast natively, according to Netflix’s support page
Netflix kills casting from phones
Netflix has removed the ability to cast shows and movies from phones to TVs, unless subscribers are using older casting devices.Jess Weatherbed (The Verge)
like this
Nobilmantis, aramis87 e Drusas like this.
Technology reshared this.
the next VC driven company…
With you so far
that focuses more on growth than profit
Ah - there’s your problem. VC companies simply don’t do that.
like this
Drusas likes this.
like this
Azathoth, missingno e onewithoutaname like this.
Ah - there’s your problem. VC companies simply don’t do that.
They most certainly do, and then either cash in by selling to the next, more risk-averse VC, or sell at a loss if they believe the company failed to disrupt the market.
Ah - there’s your problem. VC companies simply don’t do that.
That is exactly what VC companies do. That's why they need the VC money. First you conquer the market at a loss. Only when users have no other options to escape to, you start squeezing them.
like this
onewithoutaname likes this.
To streaming?
Never.
Streaming is a finite market that is already covered. The moment old money (aka existing media companies) jumped on it, it was done for.
like this
missingno likes this.
like this
fonix232 e onewithoutaname like this.
Lol why?
Genuinely seems pretty arbitrary given you need to use their app to start the cast anyway
Because fuck you, that’s why. I’m sure they will re-introduce the feature behind a paywall soon.
¯\_(ツ)_/¯
Once you tell a company that you are willing to pay for something more than once, prepare to get fucked, because that’s all you’re gonna get. And not the fun kind.
like this
RandomStickman e dcpDarkMatter like this.
Casting support is still available on older Chromecast devices or TVs that support Google Cast natively, according to Netflix’s support page, but only for subscribers on pricier ad-free plans, which start from $17.99 per month. Netflix users with an ad-supported subscription ($7.99 per month) will be unable to cast from their phones even if they own legacy Chromecast devices.
Paywall already there. Excerpt taken from the linked article.
Yikes. You know, I just bought into the "pay for ad free" sales pitch when renewing my Amazon subscription. Shit bags.
Edit: I’m only subscribed to Amazon because I live in a food desert, and it’s the only way I can get groceries
But it seems you can still log into an account by scanning a QR Code, so... ehh????
My understanding is they already made it more annoying. Your devices have to connect to the home account's "home" wifi within a certain amount of time or that device gets locked out.
I quit Netflix when they announced the pilot program for removing account sharing in South America, and I stand by that decision to this day.
I bought a Lifetime PlexPass 17 years ago for 35 bucks.
I’m good
I bought a one-time purchase to unlock streaming to my Android device years ago.
Let's hope they don't alter your deal too.
I have an iPhone. I am more than happy to pay for quality.
I switched to Linux when it was finally ready. I will switch to Jellyfin when it is ready.
If you paid the one-time purchase for a lifetime Plexpass, then Plex did not drop any support for you.
If you subscribed to a monthly pass, then you are beholden to the terms of that contract.
Do you think you’ve been mistreated by a choice you made willingly?
I'm not talking about the lifetime Plex pass or monthly pass.
If you paid the one-time purchase to unlock streaming to your device, Plex has stopped honouring that deal. You now either need to pay for a Remote Streaming Pass, or a Plex Pass.
... Because you bought a Plex Pass, which I specifically said is not what I'm talking about.
Are you dyslexic?
You’re the one claiming things that aren’t true
Maybe take a break from your hate session and touch some grass
Sorry man but it's you who is misunderstanding.
Plex used to let you buy a "license" for a one-time fee which allowed you to stream to phones. You did not need a Plex Pass, lifetime or otherwise.
Recently, Plex announced that you no longer need to pay this one-time fee. However, you can no longer stream videos to your phone without Plex Pass. If you have Plex Pass, you are obviously unaffected
This basically invalidated the paid licenses of anyone not paying for Plex Pass.
That is what they're talking about.
Plex used to let you buy a "license" for a one-time fee which allowed you to stream to phones. You did not need a Plex Pass, lifetime or otherwise.
Correct. Both myself and several friends are enjoying our lifetime Plex pass nearly 2 decades later.
As for the rest, none of that changes anything for me. It changes things for people who did not buy a lifetime one time payment Plex pass.
So are you done lying or was it not really a lie? It was just that you were too stupid to figure out subscription terms?
If you pay once for a Plex subscription, the lifetime pass, you only pay once. I’m sorry to hear that this condition was too difficult for you to figure out.
Did you....read what I wrote? Even the part you quoted? I'm guessing not, because you didn't notice I'm not the person you've been arguing with. I'm just a bystander who saw someone being confidently incorrect.
The one-time license we're referring to is different than Plex Pass. What part aren't you understanding? I'd break it down for you again with links but I'm doubting you'd even read it lol
You’re either lying or hallucinating again. There was only ever one one time Plex pass sold. And it was the Plex pass. For a lifetime.
Once again, I’m very sorry that you seem to have some extremely difficult mental issues that prevent you from understanding the terms that Plex offers. Millions of others seemed to manage just fine, but perhaps, you should have someone helping you out.
Really, I don’t know who told you this shit, but they were wrong and so are you
Downvote me all you like, but it doesn’t change the facts
There was only ever one one time Plex pass sold. And it was the Plex pass. For a lifetime.
Correct. Which is why both of us keep saying we're not talking about the Plex Pass. We're talking about the one-time purchase to unlock remote streaming on Android/iOS.
I’m talking about Plex pass. If you’re talking about something else, go to another post or thread.
You're right that downvotes don't change facts. And the fact is that Plex used to sell a license for phone streaming
Here's an official post referencing its removal since you're too pigheaded to accept that you're wrong
Our Android and iOS mobile apps previously required a one-time activation fee or Plex Pass to remove the one-minute playback limitation when streaming content from a Plex Media Server.
Important 2025 Plex Updates | Plex
Update 4/30/25Since publishing this post we have changed how remote streaming works for personal media libraries, and it is no...jerry (Plex)
So the fuck what?
Sounds like you’re stuck in the past and trying to use something that happened decades ago as the one and only argument you have.
Thing is: it doesn’t fucking matter. Nobody fucking cares. You’re crying into the wind.
So what you're saying is that you were completely wrong and so smug and confident about it?
What a shocker.
You said that. And you have decided that such a falsehood should define your reality.
You have my pity
Hilarious. From "you're lying" to "it doesn't matter anyway"! Are you by chance employed by Plex?
It's relevant to the conversation at hand. There has been literally no crying, just acknowledging that Plex changes their terms even if you paid in full.
You are the only one with elevated emotions. Perhaps it's time for some chamomile tea and a nice little nap.
Keep crying over the myths and legends you’ve been told.
I treat my parents the same way when they get into that bullshit “god” crap, lol
Oh damn didn't know I was arguing with a literal child, my bad.
You're really back to "you're lying," though? Guess I was right about your inability (or decision not to) read.
I hope you think about this thread later. Do some self reflection, bud. Ask yourself why you lashed out. What did you feel when presented with the evidence that you were wrong? Why did you keep digging your heels in when you realized that they did indeed used to offer a paid license that was separate from Plex Pass? Why insult the intelligence of the people teaching you?
like this
fonix232 e onewithoutaname like this.
like this
dcpDarkMatter likes this.
like this
dcpDarkMatter likes this.
I have heard of that; I need to check it out.
(also, while not required to watch, I consider Club TWiT and Daily Tech News Show to be very good too)
There are people running old Raspberry Pis with USB hard drives.
It'll run on just about anything.
Though, you'll only be able to stream original quality, no on the fly quality changes for low speed connections and such.
I could do something simple like a NUC or equivalent and a tiny NAS
Literally me rn. A tiny second-hand Dell Optiplex with an 8th-gen i3 and a 2TB SSD.
Maybe not the most future-proof solution since it can only fit 2.5" HDDs and NVMe drives, which are both more expensive than conventional big fat HDDs, but hey, it works great.
If you can find one for cheap locally and get a decent deal on a compatible drive you're set. You could stretch a Terabyte or two for a while as long as you're not trying to host Jellyfin for too many people (and are OK deleting watched Shows/Movies when you start running low)
NUCs (specifically Intel 8th?-gen or later) are pretty much ideal for serving Jellyfin because the Intel integrated graphics can do video transcoding and the software is actually not very demanding otherwise, so the low-power CPUs are fine.
If you were buying hardware specifically for Jellyfin (i.e. didn't want to cobble together something used), I'd suggest an N100 or N150-based NAS mini-PC like this: bee-link.com/products/beelink-…
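If you go the Docker route on one of those boxes, a rough sketch of running Jellyfin with the iGPU passed through for QuickSync transcoding looks like this (the paths are placeholders, and hardware acceleration still has to be enabled in Jellyfin's playback settings):
docker run -d \
  --name jellyfin \
  --device /dev/dri:/dev/dri \
  -v /srv/jellyfin/config:/config \
  -v /srv/jellyfin/cache:/cache \
  -v /srv/media:/media:ro \
  -p 8096:8096 \
  --restart unless-stopped \
  jellyfin/jellyfin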
Mine runs on a mini PC (NUC) hooked up to a hard drive array for the storage. So it's basically a tiny PC and another box full of hard drives (not required, but you'll need space somehow...). The PC was around $250.
Very easy and you don't need to set up an actual "server rack". Hell, you can use an old laptop.
Also, keep in mind you can hook the mini PC up to your TV or another PC's monitor (assuming you have extra plugs). You don't need a dedicated monitor for this. Mine uses the same monitor as my gaming PC on a different input. It basically lives on my keyboard tray tucked away running.
Don't port forward Jellyfin. That's terribly insecure. Just install tailscale or similar and invite the people you wish to allow access.
I don’t disagree with you. My earlier comment that mentions port forwarding and infrastructure comes from guides that direct admins to set up a tunnel through Cloudflare, expose JFs port at the router, and point the tunnel at it. Not only is it insecure and likely to offer poor performance, it’s probably a violation of CF ToS (tunneling video data). Going the Plex or pivpn routes will require a port being forwarded, Plex more a beginner option, pivpn only slightly more complicated, but both still expose an attack surface. Tailscale looks appealing from a security perspective, no port forwarding required, plus I find full mesh networks really neat. I just don’t want to rely on tailscale’s coordinated servers to stand between my network devices since I rely on WireGuard for more than media streaming. Tailscale is definitely a great solution for users with CGNAT-based ISPs though.
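For reference, a minimal sketch of the Tailscale route, using the standard commands from their install flow (8096 is Jellyfin's default port; adjust to taste):
# on the Jellyfin server
curl -fsSL https://tailscale.com/install.sh | sh   # official install script
sudo tailscale up                                   # authenticate this node into your tailnet
tailscale ip -4                                     # note the 100.x.y.z address it was given
# on each invited client: install Tailscale, log in, then point the Jellyfin app
# or a browser at http://<that 100.x.y.z address>:8096 -- no router ports opened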
My buddy had success running it off a pi.
But he had to encode everything ahead of time as h264.
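Pre-encoding like that can be done with a small ffmpeg loop; a sketch of the idea (the CRF and preset values are just reasonable defaults, not necessarily what he used):
for f in ~/media/incoming/*.mkv; do
  # re-encode to H.264 video + AAC audio so clients can direct-play without transcoding
  ffmpeg -i "$f" -c:v libx264 -crf 20 -preset medium -c:a aac -b:a 192k \
    ~/media/library/"$(basename "${f%.mkv}")".mp4
done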
And requiring a login, too.
Make no mistake, this was intentional ahead of the holidays: families visiting relatives can't just cast Netflix from their phone to watch something; it will require someone to log in and use one of their authorized devices... or coerce them to upgrade if they already have too many authorized devices.
The only good thing about Netflix is their diverse global library.
As in, a Netflix subscription plus a VPN, gives you access to a large library of global content.
Taking that into account, it's probably still the best streaming service, which means they're the shiniest turd in the toilet.
But still, Jellyfin FTW.
like this
TVA likes this.
They're made to “Enhance the ~~user experience~~ profit”
“Enhance the user experience” is just what the dev or documentation team writes when management dictates that they drop a feature. The only reasons they would have dropped it:
- Dev work vs actual customer usage (e.g. it wasn't getting a lot of users but devs had to maintain it with each update)
- People were using it to intercept the stream and capture movies to pirate.
Every decision is about increasing profits first, and UX almost always takes a back seat to that
UX almost always takes a back seat to that
If it even makes it on the bus in the first place
like this
TVA likes this.
Just go Piracy.
go Servarr or go bust, despite it requiring some amount of tinkering with docker to run smoothly
Seems like if you've got the ad-free plan and an older chromecast you're still ok... For now. This is indicative though that the service may not be viable for long for you.
like this
TVA likes this.
Oh yeah. My friend group starts having issues with more than 8 people watching. ^^;
Unfortunately the plugin seems to be abandoned.
I'm not sure what features you feel are lacking. And I haven't used Plex at all.
For me, all I want is being able to connect to it from any device, and sync play.
I’m not sure what features you feel are lacking
As I said, it was about a year ago. I vaguely remember not liking the layout when displayed on my TV, sorting shows/episodes not working correctly, watched list not being properly updated... that and assorted little bugs that finally tired me and I went back to plex
Haven't tried JF in over a year, but last attempt was full of errors. I'll give it another shot.
Only reason I'm still on Plex is I have a lifetime pass, and it's working. But it's sure enshittifying every day... Remote play with Plex Pass is super easy, and Plexamp was promising, but I replaced it with Navidrome and am so much happier. I'm ready to ditch Plex if JF is better now; I'll install it next time I have time to mess with my setup.
I have had a lifetime pass for years since maybe 2014 or so? They added photo sync, that was awesome, then they took it away. That sucked.
I simply run both at the same time on my server, they point to the same library. The compose file is stupid simple (as is plex's) so why not.
Better is relative, but I like Jellyfin better. Plex's choices for my library layout suck. Jellyfin gets to the point, and fast.
Either way, doesn't cost anything to run both, and set up is about 10 minutes if you already are using docker.
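For the curious, the docker run equivalent of that side-by-side setup is roughly the following (official images; /srv/media stands in for wherever your library lives, and each app keeps its own config volume while sharing the media mount read-only):
docker run -d --name plex \
  -v /srv/plex/config:/config \
  -v /srv/media:/media:ro \
  --network host \
  plexinc/pms-docker

docker run -d --name jellyfin \
  -v /srv/jellyfin/config:/config \
  -v /srv/media:/media:ro \
  -p 8096:8096 \
  jellyfin/jellyfin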
Plex is slow and the menus dont get right to the point
That is true. I also hate having to change each episode separately to change the subtitle/audio track. Apropos, Jellyfin didn't have subtitle download integrated in its menus; has that changed since last year? Not having that means manually downloading and synchronizing subtitles, and life's too short for that kind of drudgery.
Nope. Either they can't view it (it's encrypted) or they don't care. The ISPs that actually do something will send you multiple warnings, like 7 plus.
Then you just get a VPN and carry on as usual. In 25 years or so I have gotten 2 warnings.
1936: any universal turing machine can mimic another
2025: unfortunately your turing machine has a shape we don't like, so we will block you from using it productively despite the fact that it has the exact same hardware inside as other machines
But yea, complain about ads, price and casting as you keep paying them.
MKBHD's Panels wallpaper app is shutting down
TL;DR
MKBHD is shutting down the Panels app at the end of this month, citing issues with finding the right development team fit.
You can no longer buy collections, and you must download existing wallpapers before the app is removed.
Users will receive automatic pro-rated refunds for active subscriptions, and the app code is promised to be open-sourced after shutdown.
MKBHD's Panels wallpaper app is shutting down, here's what's next
MKBHD is shutting down the Panels app this week. Users will get automatic refunds, and there are plans to open-source the code. Read on!Aamir Siddiqui (Android Authority)
like this
Lasslinthar likes this.
Technology reshared this.
Joke's on you, I wrote my own shader that I use in a simple script that shows it in the background as a wallpaper. The future is now, old man!
$50/yr for wallpapers?!?! That's some asshat seeing the enshittification train and thinking, "Man I gotta get on THAT!"
$50/yr?!?!
For WALLPAPERS?!?!
like this
dcpDarkMatter e qupada like this.
like this
dcpDarkMatter, fif-t e qupada like this.
But hey, as a Finnish saying goes, "It isn't the one who asks who is stupid, but the one who pays."
It was used exclusively by his viewers who had formed parasocial relationships.
Literally, no one else.
I totally forgot about this app because, well, it seemed pretty forgettable.
I'm also not in the target demo since I currently have my wallpaper set to whatever my phone wants to rotate in and out every hour from my Photos app.
I totally forgot about this app because, well, it seemed pretty forgettable.
It's too trivial to remember.
How fast was that fucker driving again? In a residential area, with "watch out for the kids" signs if I remember correctly.
Edit: 96 in a 35 mph zone
In a sponsored shill video that also got him roasted. He must have been in a 96 mph hurry to destroy his YouTube career.
nbcnews.com/tech/internet/marq…
YouTube reviewer Marques Brownlee apologizes for speeding controversy
Marques Brownlee, who runs one of YouTube's biggest tech review channels, apologized after viewers caught him speeding almost three times over the speed limit.Kat Tenbarge (NBC News)
like this
qupada likes this.
Yeah I'm not aware of any way to do that
Wow, I totally did not know about this.
I don't watch him anyways so it'll just stay that way.
I spent $50 a year on wallpapers once as a monthly subscription. It was to support an independent artist that hand painted video game scenes I enjoyed, produced 1-3 new paintings per month, granted access to his back catalog, and the deliverable was high quality scans I could download and use at my leisure.
Since he is an artist, and you could feel that each painting took weeks to complete, I felt justified.
Never had a desire to pay for an app with generic wallpapers though, or to financially support someone who generates my annual salary per sponsorship deal.
Sure - patreon.com/orioto
He might be doing some digital now (I’m no longer subscribed and I see some free digital sketches in there), but I know when I subscribed the final work was painted on canvas.
My favorite one of his is titled “Breath of Adventure”, and I’m fortunate he shared a 4k scan that looks amazing on my monitor.
Wallpaper App? Like WTF? Isn't a wallpaper just whatever you put there and that is pretty much freely available?
PAY!!! I don't use wallpaper at all, but if I did, I couldn't imagine paying!
Users will receive automatic pro-rated refunds for active subscriptions, and the app code is promised to be open-sourced after shutdown.
That's the correct way to wind down a cloud based subscription app.
Not that I'm in favor of the entire business model of cloud based subscription apps, but at least Marques is ending this one the right way.
‘Refusers brought us a ceasefire, now we are in new critical phase’
Didi here. As a refuser and the head of Refuser Solidarity Network, I'm writing to you at a fragile moment. News shifts by the hour: one headline declares a "ceasefire," the next warns of the "reoccupation" of Gaza. Amid the confusion, one truth remains clear: the only force that has ever stopped Israel's wars of annihilation is the people who refuse to fight them.
This ceasefire was not granted by the government or diplomacy. It was forced into being by resistance: by global outrage, by organizing, and by soldiers who said no. Refusers slowed mobilization, broke ranks, and disrupted the machinery of war.
Now those same refusers face a new challenge as Israel prepares to reoccupy Gaza under the guise of peace. That is why we are turning to you today. RSN is launching its end-of-year campaign to grow this movement with the momentum of the ceasefire. Netanyahu is betting on silence, on the world's attention fading so he can deepen control. But we are still here. The struggle did not end but changed phase.
That is why we need you today. We plan to continue training organizers, support refusers, and build the movement that can stop this. Help us today to reach our end-of-year goal of $50,000 so that we can continue to support refuser groups, build up their infrastructure, and expand our movement.
In recent weeks, Israeli officials have begun sketching plans for a long-term reoccupation of Gaza. New military zones have been mapped across the Strip, separating the west from the east. Armed checkpoints and "buffer areas" are expanding even though they are supposed to be temporary, effectively carving Gaza into controlled enclaves. Displaced Palestinians remain barred from returning home. The Israeli government doesn't even pretend this is temporary while it entrenches itself in the Gaza Strip. This is a new phase of domination that is poised to expand. We need a strong resistance movement to stop it.
At RSN, we know refusal works. It worked during the Intifada, it worked during this war, and it will work now. Our mission is to make refusal widespread, organized, and impossible to ignore. We support the networks that make it happen: from reservist groups to grassroots activists.
Together, we are building the most powerful resistance Israel has ever known, a movement that grows stronger each time someone says enough. But refusal demands resources. It needs us. Because the truth is, the ceasefire is fragile, the fire has not ceased, and the occupation is not over. Yet we have an opportunity to end the Israeli occupation, but it can happen only with real resistance.
But resistance will continue to grow, despite the ceasefire, and in spite of Israel's plans. Support the movement that makes it possible and contribute to our campaign today. Your donations will provide crucial support in this critical phase. It will allow us to expand our work by building refuser groups' organizational capacity, mentoring, media consultation and mental health support.
In solidarity,
Didi Remez
Executive Director
Refuser Solidarity Network
(Taken from an email sent to me by the Refuser Solidarity Network. Emphasis original.)
Refuser Solidarity Network's End-of-Year Campaign
For over 20 years we at Refuser Solidarity Network, have supported Israeli refusers and war resisters. We launched this campaign in response to the increasing needs of activists on the ground.app.moonclerk.com
like this
ComradZoid, durduramayacaklar, allende2001, rainpizza, Cowbee [he/they], TheTux, bettyschwing, Malkhodr, Maeve, PeeOnYou [he/him], Ayache Benbraham ☭🪬, ☭ Comrade Pup Ivy 🇨🇺 e Philo_and_sophy like this.
It was forced into being by resistance: by global outrage, by organizing, and by soldiers who said no.
I'm not sure how to put this politely, given all the effort by the above groups and what they risk in doing so, but... no.
It was forced by Hamas, the resistance fighters and the palestinians under siege of torture and death.
like this
IsThisLoss [comrade/them], NotMushroomForDebate, Ashes2ashes, Malkhodr, Nocturne Dragonite, Che's Motorcycle e Tovarish Tomato like this.
Israel offers disgusting excuse for murdering children
Israel violated the ceasefire again, this time killing an 8-year-old and an 11-year-old — claiming it's their fault we've killed them
Archived version: archive.is/newest/thecanary.co…
Disclaimer: The article linked is from a single source with a single perspective. Make sure to cross-check information against multiple sources to get a comprehensive view on the situation.
adhocfungus likes this.
Police are feeding passport photo data into facial recognition databases
Big Brother Watch have warned that this facial recognition drive "affects everyone with a passport" — threatening rights and civil liberties
adhocfungus likes this.
Switzerland no longer wants American cloud in the public sector
The Swiss privacy regulator Privatim has taken steps to ban Microsoft, Amazon, and Google’s American cloud services for government agencies. Data storage within Switzerland offers no protection against American laws, Privatim argues.
Switzerland no longer wants American cloud in the public sector - Techzine Global
Zurich bans American cloud services for government. Data storage in Switzerland offers no protection against the CLOUD Act.Berry Zwets (Techzine)
like this
adhocfungus e BrikoX like this.
Exclusive: jury in anti-genocide activist 'terrorism' trial 'told to ignore international law'
Former MP Chris Williamson spoke exclusively to Skwawkbox and the Canary, revealing the instructions given to the jury
like this
adhocfungus e essell like this.
International law guarantees an unequivocal right of resistance, including armed resistance, to people under illegal occupation. Legal experts say that UK terrorism legislation breaches international law by blocking this right.
Youth Equality Coalition: fuck yer flags, fund our future
Youth Equality Coalition are a breath of fucking fresh air, and they're showing squabbling leftists exactly how it's done
adhocfungus likes this.
Kyodo News protests unauthorized use of articles by Perplexity
Kyodo News on Monday sent a letter of protest to Perplexity AI Inc., accusing the U.S. startup of using its articles without permission to provide online responses generated by artificial intelligence for its web search engine and infringing the Japanese news agency's copyright.
Kyodo said in the letter that Perplexity must immediately stop using its articles published on the website 47 News, which features articles created by Kyodo and its member newspapers, and compensate for damages resulting from the unauthorized use of Kyodo's distributed articles, among other demands.
Kyodo News protests unauthorized use of articles by U.S. startup
Kyodo News on Monday sent a letter of protest to Perplexity AI Inc., accusing the U.S. startup of using its articles without permission to provide online responses generated by artificial intelligence for its web search engine and infringing the Japa…KYODO NEWS (Japan Wire by KYODO NEWS)
RRF Caserta. Press review 01 12 25. Pietrangeli has died. USA and Russia on peace for Ukraine. The Pope visits Erdogan. Sport
My totally local, DIY alternative to Pocket and Instapaper
TL;DR
I set up a local workflow that allows me to turn a webpage into an epub on my Android phone and send it to my Kobo
Introduction
Since Mozilla killed Pocket, I have been looking for an alternative that doesn't depend on decisions from any tech company, but only on myself.
I used the Pocket feature quite a lot, and, even if I appreciated the effort from Kobo to replace it with Instapaper, I didn't want to depend on someone else for something as simple as reading an article later on my eink device.
I considered Wallabag and Readeck, but for both I had to depend on someone else's server, or I had to self-host, and I didn't want to deal with the complexity.
I wanted an approach where I was in control, so all the steps needed to be based on FOSS software that I could at least understand.
The basic idea
I thought that what I needed was a 2-step approach, and I could solve both steps:
- Turn a webpage into an epub
- Send the epub to my kobo
The explanation below is long, but following step 1-a and step 2-a in particular is fairly easy and doesn't involve any modification or coding.
Step 1: Turn a webpage into an epub
In the long search to do this I ended up finding 2 approaches, one available "off the shelf" and one that involved much more coding.
Step 1-a: einkbro
I found out that there is a fantastic FOSS browser, EinkBro, that is designed for eink screen devices but works very well on any Android device. It is slick, fast, configurable and well designed. It implements the Readability library from Mozilla, which is great, and, more than anything else, it can directly export webpages as epub files. You can configure the toolbar so that the "export to epub" icon is directly visible. The exported epub is nice and looks like the "readability" version of the webpage (probably because it is...).
So, when I want to save an article I share it from my browser to EinkBro and, from there, I export it to epub.
Step 1-b: Termux + readability-scrape + pandoc
For this one I went all-in down the rabbit hole of total control... or maybe I could have done worse.
Anyway, here are the components:
- Termux: a terminal emulator for Android that allows you to do almost whatever you can do in a terminal emulator on a full-blown Linux machine
- readability-scrape: a command-line tool that scrapes a URL and returns a simplified version of it, using the Readability library from Mozilla (as in reader mode in Firefox)
- Pandoc: a command-line tool that can convert documents from one format to another, like, in our case, HTML to epub
I won't go into the details of how to install each piece. If needed, just ask.
I set up Termux so that, if I share a webpage to Termux via the Android share menu, it triggers the following script, ~/bin/termux-url-opener (see this webpage to understand how Termux handles shared URLs):
termux-toast "termux received $1" # toast message to warn that the URL was received
termux-chroot ~/scripts/webpage_to_epub.sh "$1"
Note: for some reason pandoc works as intended only if executed in chroot, which is why the script below is launched through termux-chroot in the snippet above.
webpage_to_epub.sh:
#!/bin/bash
# final destination of the epub file (a quoted "~" would not expand, so use $HOME)
FINAL_DIR="$HOME/storage/shared/Documents/epub_articles"
# Check if the URL argument is provided
if [ "$#" -ne 1 ]; then
echo "Usage: $0 <URL>"
exit 1
fi
URL="$1"
JSON_OUTPUT=$(readability-scrape --json "$URL")
# Check if the readability command was successful
if [ $? -ne 0 ]; then
echo "Error: Failed to scrape URL."
exit 1
else
echo "readibility scrape: SUCCESS!!"
fi
# Extract title and content using jq
TITLE=$(echo "$JSON_OUTPUT" | jq -r '.title')
CONTENT=$(echo "$JSON_OUTPUT" | jq -r '.content')
AUTHOR=$(echo "$JSON_OUTPUT" | jq -r '.byline')
CONTENT_LENGTH=$(echo "$JSON_OUTPUT" | jq -r '.length') # Length in characters
# Calculate reading times based on character length
# Convert characters to words (approximate)
WORDS=$(($CONTENT_LENGTH / 5))
# Calculate reading times based on two speeds (200 and 300 words per minute)
READING_TIME_LOW=$(($WORDS / 300)) # For 300 wpm
READING_TIME_HIGH=$(($WORDS / 200)) # For 200 wpm
# Format the output for reading time
if [ "$READING_TIME_LOW" -eq "$READING_TIME_HIGH" ]; then
READING_TIME="${READING_TIME_LOW} minutes"
else
READING_TIME="${READING_TIME_LOW} - ${READING_TIME_HIGH} minutes"
fi
# Output the estimated reading time
echo "Estimated reading time: $READING_TIME"
# Format the current date in ISO format (YYYY-MM-DD)
CURRENT_DATE=$(date +"%Y-%m-%d")
# Remove accent characters and sanitize the title to create a valid filename
SANITIZED_TITLE=$(echo "$TITLE" | iconv -f UTF-8 -t ASCII//TRANSLIT | tr -cd '[:alnum:]_ ') # Convert to ASCII and keep alphanumeric characters
SANITIZED_TITLE="${SANITIZED_TITLE// /_}" # Replace spaces with underscores
# Create the final filename with date prefix
EPUB_FILE="${CURRENT_DATE}_${SANITIZED_TITLE}.epub"
# Create a temporary HTML file
HTML_FILE=$(mktemp /tmp/readability_output.XXXXXX.html)
# Write the complete HTML output
cat <<EOT > "$HTML_FILE"
<html>
<head>
<title>$TITLE</title>
</head>
<body>
<h1>$TITLE</h1>
<div>
$READING_TIME | <a href="$URL">original link</a>
</div>
<hr />
$CONTENT
</body>
</html>
EOT
# Create a temporary title file for metadata
TITLE_FILE=$(mktemp /tmp/title.XXXXXXXXX.txt)
# Write the Pandoc YAML metadata block
cat <<EOT > "$TITLE_FILE"
---
title: "$TITLE"
author: "$AUTHOR"
---
EOT
# Convert the HTML file to EPUB including the metadata
#pandoc "$TITLE_FILE" "$HTML_FILE" -o "$EPUB_FILE"
pandoc "$HTML_FILE" -o "$EPUB_FILE"
# Check if pandoc command was successful
if [ $? -eq 0 ]; then
echo "EPUB generated: $EPUB_FILE"
mv "$EPUB_FILE" ~/storage/shared/Documents/epub_articles
else
echo "Error: Failed to generate EPUB."
fi
# Clean up temporary files
rm "$HTML_FILE" "$TITLE_FILE"
read -p "Press [Enter] key to continue..."I spent time to craft the script to produce an output that I like, but, honestly, it's not better than the one produced by einkbro in Step1-a. The advantage with the termux script is that it is a one click process. I share the link to termux, and the script generates the epub and saves to a folder that is setup in the next step to do the uplaod automatically
Step 2: send the epub to my kobo
Again, for step 2 I also found 2 alternatives, one more "manual" and direct, and the second more automatic.
Step 2-a: share to http
For this I use a simple app, Share via HTTP: I share the epub file via the Android share menu to this app. The app spins up a mini web server at my local IP address (on the wifi, which can also be the one from the Android hotspot). I then point the Kobo browser to that local address. The browser asks if you want to download the file. Once downloaded, the file is added to the Kobo library.
Using NickelMenu I added a shortcut to the Kobo menu to start the browser, to make things faster.
This is the simplest solution: everything works locally, no third party involved.
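(If you'd rather not install a dedicated app, roughly the same thing can be done straight from Termux with Python's built-in web server; this is just a sketch and assumes you've run pkg install python:)
cd ~/storage/shared/Documents/epub_articles
python -m http.server 8080
# then open http://<phone-ip>:8080 in the Kobo browser and tap the epub to download it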
Step 2-b
As an alternative I set up a Nextcloud sync.
- On Android I set up the folder where I save epubs as "auto upload", so epub files are uploaded to a folder on my Nextcloud as soon as I save them
- On the Kobo I set up Nextcloud synchronization. There is more than one alternative; I used this one. Whenever I connect my Kobo to wifi, the new epubs are downloaded to my Kobo and added to the library.
The only downside is that to delete an article I have to delete it from the Nextcloud folder; if I delete it from my Kobo, it gets re-added as soon as I connect to wifi.
Conclusions
Maybe this looks too complex, but I learned a lot of stuff and had fun in the process. I find that pandoc is probably a bit more than what is needed here; in the end the epub content is a bundle of HTML and images, and there is probably a better and slicker way to package them. Any suggestions to improve the workflow are welcome 😀
What do you use these days?
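(A side note on the packaging point above: an epub is just a zip container holding the XHTML, images and metadata, so you can inspect what pandoc actually produced; the filename below is only an example:)
unzip -l 2025-12-01_Some_Article.epub              # list the bundled XHTML, images, CSS and .opf metadata
unzip -o 2025-12-01_Some_Article.epub -d /tmp/epub # unpack it to poke around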
GitHub - plateaukao/einkbro: A small, fast web browser based on Android WebView. It's tailored for E-Ink devices but also works great on normal android devices.
A small, fast web browser based on Android WebView. It's tailored for E-Ink devices but also works great on normal android devices. - plateaukao/einkbroGitHub
adhocfungus likes this.
reshared this
Feddit Un'istanza italiana Lemmy e lgsp reshared this.
lgsp@feddit.it likes this.
send.djazz.se/ might come in handy.
Interesting, also for the possibility of generating a kepub, which is slightly better on Kobo. Thanks.
I need to read this carefully (I have a Kindle from before the war, not a Kobo, but that changes little).
If you have questions or suggestions, just let me know. The only doubt I have is whether pandoc can export to mobi... at first glance it seems not; you'd need Calibre, but I don't think you can get that running in Termux, at least not easily...
lgsp@feddit.it likes this.
webpagetoepub.github.io/
Convert webpage to EPUB
Easily convert any webpage to an EPUB file with our free online tool. Just paste the URL and download your EPUB for offline reading on any device.webpagetoepub.github.io
lgsp@feddit.it likes this.
like this
Rozaŭtuno, adhocfungus, copymyjalopy, pewpew e essell like this.
Kilgore Trout doesn't like this.
del works on directories? I'm going by very old memories here
Uh... kinda? Powershell has many POSIX aliases to cmdlets (equivalent to shell built-ins) of allegedly the same functionality. rmdir and rm are both aliases of Remove-Item, ls is Get-ChildItem, cd is Set-Location, cat is Get-Content, and so on.
Of particular note is curl. Windows supplies the real CURL executable (System32/curl.exe), but in a Powershell 5 session, which is still the default on Windows 11 25H2, the curl alias shadows it. curl is an alias of the Invoke-WebRequest cmdlet, which is functionally a headless front-end for Internet Explorer unless the -UseBasicParsing switch is specified. But since IE is dead, if -UseBasicParsing is not specified, the cmdlet will always throw an error. Fucking genius, Microsoft.
SmartTube has been compromised
Release Important Announcement · yuliskov/SmartTube
Important Announcement Friends, it seems that my digital signature has been exposed. This signature protects the app from fake and malicious updates, so there is a risk that someone may try to rele...GitHub
BrikoX doesn't like this.
The title doesn't match the facts in the announcement.
Only the developer's signature was leaked, which could have led to unsafe releases, but the point is moot as the developer is changing the signing key moving forward.
The app id is being changed, so there is no way to push new updates with that signature anymore. Hence the need to re-install the app.
Also it looks like the developer is adding VirusTotal scan workflow for all new releases moving forward.
That said, I'm not familiar with the developer or the situation enough to comfortably say it's safe.
fix backups on api 30+ and app id change · yuliskov/SmartTube@a78a301
Browse media content with your own rules on Android TV - fix backups on api 30+ and app id change · yuliskov/SmartTube@a78a301GitHub
Advent Calendar 1
Advent Calendar
Zen Mischief Photographs
This year for our Advent Calendar we have a selection of my photographs from recent years. They may not be technically the best, or the most recent, but they’re ones which, for various reasons, I rather like.
Austen graves in Tenterden Churchyard
© Keith C Marshall, 2014
Click the image for a larger view
John Lee Hooker — The Healer (1989)
"The best-selling blues album of all time, by one of the greatest bluesmen still around today," ran the record's advertising at the end of the nineties, shortly before his death in 2001 at eighty-four... Read and listen...
John Lee Hooker — The Healer (1989)
"The best-selling blues album of all time, by one of the greatest bluesmen still around today," ran the record's advertising at the end of the nineties, shortly before his death in 2001 at eighty-four. The term "blues" is probably more abused than used on this record, which, honestly, I have listened to to the point of exhaustion, for its immediacy and its listenability, but certainly not for any markedly blues sound. With this record, the most courted guitar in rock together with Muddy Waters, and also the only one to come out and... silvanobottaro.it/archives/373…
Listen to the album: album.link/s/7dX5RVwG4Bdw13xrC…
Home – Identità Digitale. I'm on: Mastodon.uno - Pixelfed - Feddit
The Healer by John Lee Hooker
Listen now on your favorite streaming service. Powered by Songlink/Odesli, an on-demand, customizable smart link service to help you share songs, albums, podcasts and more.Songlink/Odesli
[Research] At least 80 million inconsistent facts on Wikipedia – can AI help find them?
Technology reshared this.
I watch a YT channel that talks about and researches the history of Wales, and on that somewhat narrow topic alone, he has found some ridiculous mistakes on Wikipedia. There are tons, but few people are aware, as they may lack the knowledge or background to realize how wrong they are. AI will surely make that problem worse. I have caught ChatGPT being wrong numerous times on topics within my wheelhouse. When I tell it it is wrong, it "apologizes," corrects itself and just adds what I told it. Well, if it could find the data before, then why does it have to wait until it is corrected? If kids use this for school, they are so fucked.
Who wants to put glue on their pizza?
Finding inconsistencies is not so hard. Pointing them out might be a -little- useful. But resolving them based on trustworthy sources can be a -lot- harder. Most science papers require privileged access. Many news stories may have been grounded in old, mistaken histories ... if not on outright guesses, distortions or even lies. (The older the history, the worse.)
And, since LLMs are usually incapable of citing sources for their own (often batshit) claims anyway -- where will 'the right answers' come from? I've seen LLMs, when questioned again, apologize that their previous answers were wrong.
RRF Caserta. African Chronicles. Kidnapping of schoolgirls. Coups d'état. Narco-states. Economy
[LTT] Building a Computer with the CREATOR of Linux! - Linus Torvalds Collab PC
Technology reshared this.
Anthropic claims Chinese state-sponsored hackers used Claude Code to access data from and leave backdoors in over 30 companies using AI-automated cyberattacks
AI firm claims Chinese spies used its tech to automate cyber attacks
The company claimed in a blog post this was the "first reported AI-orchestrated cyber espionage campaign".Joe Tidy (BBC News)
South Korea police say 120,000 home cameras hacked for 'sexploitation' footage
South Korea police say 120,000 home cameras hacked for 'sexploitation' footage
The cameras were located in private homes, karaoke rooms, a Pilates studio and a gynaecologist's clinic.Gavin Butler (BBC News)
like this
thisisbutaname e adhocfungus like this.
Social and Organizational Talks at FOSDEM 2026
Hey, all. One thing that’s different this year about the Social Web Devroom at FOSDEM 2026 is that we’re going to include talks about the organizational and social aspects of rolling out Open Source Fediverse software for individuals and communities. Last year, we focused pretty heavily on technical talks from the principal developers of FLOSS packages. This year, we want to make sure the other aspects of Fediverse growth and improvement are covered, too.
Consequently, the guidance for last year’s event, which was focused on how to make a great technical presentation, might seem a little outdated. But on reviewing it, I’ve found that it still has good advice for social and organizational talks. Just like software developers, community builders see problems and construct solutions for them. The solutions aren’t just about writing code, though; more often they involve bringing people together, assembling off-the-shelf tools, and making processes and rules for interaction.
Talks about Open Source software to implement ActivityPub and build the social web are still welcome, of course. We’re just expanding a bit to cover the human aspects of the Fediverse as well.
I’m looking forward to the interesting discussions about bringing people together to make the Social Web. If you haven’t already, please consider submitting a talk to pretalx.fosdem.org/fosdem-2026…. Select “Social Web” from the “Track” dropdown, and include the length of your talk (8/25/50) in the submission notes. The deadline is December 1, 2025, so get them in as soon as possible!
FOSDEM 2026 – Social Web Devroom – Call For Participation
The Social Web Foundation is pleased to announce the Social Web Devroom at FOSDEM 2026, and invite participants to submit proposals for talks for the event.
FOSDEM is an exciting free and open source software event in Brussels, Belgium that brings together thousands of enthusiasts from around the world. The event spans the weekend of January 31 to February 1, 2026 and features discussion tracks (“devrooms”) for scores of different technology topics.
The Social Web Devroom will take place in the afternoon of Saturday, January 31.
Format
There will be three available talk formats:
- 50 minutes – for bigger projects, followed by 10 minutes of questions.
- 25 minutes – for bigger projects, followed by 5 minutes of questions.
- 8 minutes – micro-talks on smaller or newer projects, in groups of 3, followed by 6 minutes of combined questions for the group.
Topics
The Social Web Devroom is open to talks all about the Social Web AKA the Fediverse, including:
- Implementations of the ActivityPub protocol or ActivityPub API
- Clients for ActivityPub-enabled software like Mastodon
- Supporting services for the Fediverse, like search or onboarding
- ActivityPub-related libraries, toolkits, and frameworks
- Tools, bots, platforms, and related topics
- Advocacy, organization and social activity in deploying Open Source ActivityPub applications
Important dates
- Submission open: 1 Nov 2025
- Submission deadline: 1 Dec 2025
- Acceptance notifications: 10 Dec 2025
- Final schedule announcement: 15 Dec 2025
- Devroom: 31 Jan 2026
Submissions
Submit talk proposals to pretalx.fosdem.org/fosdem-2026…. Select “Social Web” from the “Track” dropdown, and include the length of your talk (8/25/50) in the submission notes. (Note that the “Lightning Talks” track is a separate event-wide track; if you’re proposing a Social Web micro-talk, please choose the “Social Web” track!)
Code of Conduct
All attendees and speakers must be familiar with and agree to the FOSDEM Code of Conduct.
Contact
Questions about topics, formats, or the Social Web in general should go to contact@socialwebfoundation.org.
California immunization leader blasts FDA vaccine chief’s unsupported claim of child deaths
This post uses a gift link which requires some people to register to access it.
Not posting an archive.is link to bypass the paywall because Hearst has lawyers which don't like that.
CA immunization leader blasts FDA official’s child-death claim
California's immunization leader blasts FDA vaccine chief’s unsupported claim of child deaths, calling it 'reckless'Ko Lyn Cheang (San Francisco Chronicle)
UN Ditches Google for Taking Form Submissions, Opts for an Open Source Solution Instead
UN Ditches Google for Taking Form Submissions, Opts for an Open Source Solution Instead
The United Nations opts for an open source alternative to Google Forms.Sourav Rudra (It's FOSS)
like this
adhocfungus, Maeve e mPony like this.
Technology reshared this.
You Want Microservices, but Do You Need Them?
You Want Microservices—But Do You Need Them? | Docker
Before you default to microservices, weigh hidden costs and consider a modular monolith or SOA. Learn when Docker delivers consistency and scale—without sprawl.Manish Hatwalne (Docker)
like this
adhocfungus e essell like this.
why did the U.S. invade Iraq when Venezuela is so close?
Is that what's going to happen soon?
why did the U.S. invade Iraq when Venezuela is so close?
2/3 of Venezuela's reserves were discovered/confirmed in the last 20 years.
20 years ago, Venezuela was one of the U.S.'s biggest oil suppliers. 10ish years ago the oil price crashed and sent Venezuela spiralling into chaos. Trump has been grinding them down since then (mostly through sanctions). Biden let them breathe for 4 years.
Is that what's going to happen soon?
Ummm, yeah. You think they're posturing over drugs? Trump doesn't care if the poors die of drug overdoses. He wants that sweet, sweet crude and has been gearing up to ~~pacify~~ invade Venezuela to get it. That's why he wants Ukraine to surrender. He needs Russia to cool its jets so that Europe/NATO doesn't drag the U.S. into a war over there. Troops will be on the ground in Venezuela soon, after an indiscriminate blitzkrieg bombing campaign for a few weeks - they will say they're bombing gangs/cartels. It should start any day now since he "closed the air space" over Venezuela. He's just waiting for Venezuela to launch one of its fighters (or for Ukraine to capitulate) and the game is on. This is all ramping up because the CIA has failed to eliminate Maduro and install a new leader.
I wouldn't be surprised if Trump ultimately hopes to take over the whole country and rename it.
That's how wars start nowadays.
Well, actually it's going to be something related to drugs this time; the plain terrorism angle has just been used by Israel, so it's too fresh and repetitive. They're very good at coming up with new plots, we must admit it.
9/11 Civil planes attack
09/26 Submarines / scuba divers attack
10/7 Fortnite style attack with motorcycles and parachutes
xx/xx Must be using boats this time
Yes.
abc.net.au/news/2025-10-28/ven…
ABC News
ABC News provides the latest news and headlines in Australia and around the world.Elissa Steedman (Australian Broadcasting Corporation)
Amazon’s AI ‘Banana Fish’ Dubs Are Hilariously, Inexcusably Bad
They are also AI-dubbing shows that already have a dub:
xcancel.com/Pikagreg/status/19…
like this
Kilgore Trout, yessikg, Lasslinthar, frustrated_phagocytosis e IAmLamp like this.
Technology reshared this.
It's insulting because there are a lot of LGBTQ+ voice actors who want to do this but the powers that be won't greenlight it.
Look at Amazon's relationship with Trump, Trump's positions on LGBTQ+ people, and ask why Amazon is doing this. They aren't contractually obligated to do so. They are doing various animation projects and paying real voice actors. Though Hazbin Hotel is pretty gay, and so is Helluva Boss (same people/same universe). The only other animation project I know at Amazon is Mighty Nein, and I wanna say those guys are LGBTQ+ friendly but I'm not sure they're on the spectrum. So I don't wanna say Amazon is acting prejudicial here, but it smells.
I mean, yeah, Amazon's relationship with Trump is far too cozy... however, in this case, I think it's more likely that Amazon was just trying to cheap out by using AI (and possibly test/showcase their use of the tech) rather than them actively deciding to cut out LGBTQ+ voice actors.
FTA:
As giant tech corporations try to jam AI into every possible orifice in the world, we are consistently getting new examples day in and day out of that going poorly. This week, that’s Amazon Prime Video introducing dubs to the beloved anime Banana Fish, which has needed them for a long while. The problem? They’re AI. Not just AI, horrible AI.
Had this decision been based in bigotry, I would imagine that they wouldn't have bothered with the dub at all.
like this
yessikg likes this.
A company like Amazon is huge. Like, the department that cozies up to the administration is a completely different subsidiary from Prime Video anime importing. They don’t even know each other. They will cut each other’s throats if they are fighting for the same promotion.
This happens because 1) cost savings and 2) human garbage exist everywhere. They hide behind the decisions and use the current political climate to push their bigotry.
like this
yessikg likes this.
Wow. I watched some of the embedded tweet and that was uhhh... That was hot garbage...
Don't watch the tweet if you don't want spoilers; there are important story beats included because they're scenes with a lot of emotion that the AI dub completely mangles.
Highly recommend the show. Content warning for sexual abuse and exploitation though; it's a central part of the story and the show is a very difficult watch.
It feels like Neil Breen directed this dub.
So do they not even have internal review?
OK, so it gets generated. Just like if someone sent a real recording, someone should listen to it, right? And make mention if there are parts that are noisy, or there's a random change in EQ that feels jarring, or... I don't know, because it sounds like an adult voice on a 6yr old reading the lines?
Honestly, even a dirt-cheap language model (with sound input) would tell you it’s garbage. It could itemize the problematic parts of the dub.
But they didn’t use that, because this isn’t machine learning. It’s Tech Bro AI.
Quality Control? Review? Pffftt, ain't nobody got time for that! just generate and ship, baby, generate and ship!
Think of all the yachts our glorious ~~leader~~ CEO will be able to afford with all the money we're saving 😍😍😍
- Internal review also takes time and expertise. Those things cost money, and the whole point of the exercise is to not spend money.
- No one uses generative AI because they actually care about the quality of the end product.
But even allowing for those points, it's entirely possible that they did, in fact, do quality review. Extensively. But at some point the generation costs exceeded their allowed budget and this is what they settled on. This is the thing that lurks behind bad quality AI art; the fact that what we see is often the best result out of many, many tries. The Coca Cola holiday ad had to be stitched together from hours upon hours of failed attempts. Even the horrendously bad looking end product wasn't as bad as many of the failed outputs they got.
Regarding point 1, that cost is already factored in. Replacing multiple stages of production simultaneously is a massive risk: voice acting + editing + editor review + production review on the cut.
This part:
it's entirely possible that they did, in fact, do quality review. Extensively. But at some point the generation costs exceeded their allowed budget and this is what they settled on.
I'd call entirely likely.
It would also mean that there was almost no testing of the LLM's output prior to pushing it to production work, or basic things like intonation would have been called out.
It's also possible that the production team knew it was dogshit and pushed it out on purpose so people could see it for the dogshit it is. Anime fans are not known for being supportive of poor adaptations, after all; maybe they hoped for backlash? I know if I were on that team, I'd prefer that.
At some point I'd expect management to have recognized it for being terrible though.
I seriously doubt that any of the decision makers involved in this process actually watch anime.
Anyone in management who cared probably didn't have enough pull / authority to do a damn thing about it.
To all those working on Naruto fan dubs in the early 2000s, I say:
I am sorry. I was too hard on you. It took me 20 years to realize what 'hot garbage' really is in the context of an anime dub.
Posted this in another community but I’ll leave this here too:
So back when I worked at Amazon I was playing around with AWS Skillbuilder. They don’t pay for any other training materials for SDEs (well they used to have ACloudGuru but ended that).
I was like “they charge money for this so it can’t be that bad right?” Well
1/3 of the courses were actually what I’d call “watchable”
1/3 were just SEO Blogspam masquerading as information
And the remaining 1/3 clearly used text-to-speech software that was dreadful. It was incomprehensible.
I say all this to say that if there were ever a list of companies I would trust to do AI dubs, Amazon would be at the bottom of that list.
They’re pretty bad outside of English-Chinese actually.
Voice-to-voice is all relatively new, and it sucks if it’s not all integrated (eg feeding a voice model plain text so it loses the original tone, emotion, cadence and such).
And… honestly, the only models I can think of that'd be good at this are Chinese. Or Japanese finetunes of Chinese models. Amazon certainly has some stupid policy where they aren’t allowed to use them (even with zero security risk since they’re open weights).
Building the PERFECT Linux PC with Linus Torvalds
- YouTube
Enjoy the videos and music you love, upload original content, and share it all with friends, family, and the world on YouTube.www.youtube.com
like this
toothpaste_sandwich e chookity like this.
Technology reshared this.
like this
toothpaste_sandwich likes this.
like this
DaGeek247 likes this.
like this
SaltySalamander likes this.
You know the silly stuff at the start was actually Torvalds' idea, right?
Linus (Sebastian) spoke about how he didn't even get the reference but Linus (Torvalds) prompted him to go and watch Highlander ("there can only be one!!")
You should've kept watching it. There's some good stuff there.
I laughed so hard at the highlander reference. It might just be that you gotta know the specific meme and culture to enjoy it. Even YouTube Linus didn't know the reference.
Seeing the two Linuses pulling katanas on each other was hilarious.
God forbid someone who's made tech their life is very excited that Torvalds has come to visit them and turned out to be a really nice guy.
Torvalds will probably be the highlight guest of his entire career, and he knows it. Of course that's enormously exciting.
Hardware
- AMD Ryzen Threadripper 9960X
- GIGABYTE TRX50 AERO D Motherboard
- Samsung SSD 9100 PRO 2TB SSD
- 64GB ECC RAM
- Noctua NH-U14S TR5-SP6 Cooler
- Intel Arc B580 GPU
- Fractal Design Torrent E-ATX Case
- Seasonic PRIME TX-1600 1600W 80+ Titanium PSU
OS
- Fedora
Don't throw away your old PC—it makes a better NAS than anything you can buy
Don't throw away your old PC—it makes a better NAS than anything you can buy
Doing it yourself is way more cost effective.Nick Lewis (How-To Geek)
like this
riot, Lasslinthar, SuiXi3D e mPony like this.
Technology reshared this.
like this
Azathoth, qupada, DaGeek247 e onewithoutaname like this.
Technology reshared this.
So I did this, using a Ryzen 3600; with some light tweaking the base system burns about 40-50W idle. The drives add a lot, 5-10W each, but they would go into any NAS system, so that's irrelevant. I had to add a GPU because the motherboard I had wouldn't POST without one, so that increases the power draw a little, but it's also necessary for proper Jellyfin transcoding. I recently swapped the GPU for an Intel Arc A310.
By comparison, the previous system I used for this had a low-power, fanless Intel Celeron; with a single drive and two SSDs it drew about 30W.
like this
DaGeek247 likes this.
OK, I'm glad I'm not the only one that wants a responsive machine for video streaming.
I ran a Pi 400 with Plex for a while. I don't care to save 20W while I wait for the machine to respond after every little scrub of the timeline. I want a better experience than Netflix. That's the point.
Technology reshared this.
Drivers? Are you running it on Windows? On Linux I just plugged it in and it worked, Jellyfin transparently started transcoding the additional codecs.
It fixed my issue with tone mapping; before this, HDR files on my not-so-old TV showed the wrong colors.
I have no desktop environment on the NAS; it was plug and play in the terminal. I did get an error about HSW/BDW HD-Audio HDMI/DP requiring binding with a gfx driver, but I've not yet even bothered to google it.
I read somewhere that the Sparkle ELF I have just ramps the fan to 100% at all times with the Linux driver and has no option to edit the fan curve under Linux
(the suggested fix was to install a Windows VM, set the curve there and the card will remember it, but after rebuilding the NAS and fixing a couple of minor issues to get it all working I couldn't face installing Windows, so I just left it as is until I have the time lol).
Technology reshared this.
The host is running Proxmox, so I guess their kernel just works with it.
It does run the fan way more than I'd like, but its noise is drowned out by the stock AMD cooler on the CPU anyway. Thanks for the info, I may look into it... but I guess I'd have to set up GPU passthrough on a VM just for that.
A desktop running at low usage wouldn't consume much more than a NAS, as long as you drop the video card (which wouldn't be doing anything anyway).
Take only that extra and you probably have a few years of usage before the additional electricity costs overrun the NAS cost. Where I live that's around 5 years for an estimated extra 10W.
Technology reshared this.
as long as you drop the video card
As I wrote below, some motherboards won't POST without a GPU.
Take only that extra and you probably have a few years of usage before the additional electricity costs overrun the NAS cost. Where I live that's around 5 years for an estimated extra 10W.
Yeah, and what's more, if one of those appliance-like NASes breaks down, how do you fix it? With a normal PC you just swap out the defective part.
like this
onewithoutaname likes this.
Depends.
Toss the GPU/wifi, disable audio, throttle the processor a ton, and set the OS to power saving, and old PCs can be shockingly efficient.
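On the software side, the usual knobs on a Linux box look roughly like this (a sketch, assuming the standard cpupower and powertop tools are installed; the hardware steps are obviously manual):

```python
# Sketch of software power tuning on Linux (needs root; cpupower and powertop
# are standard distro packages, but this is illustrative, not a recipe).
import subprocess

def run(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

# Pin the CPU frequency governor to powersave.
run("cpupower", "frequency-set", "-g", "powersave")

# Let powertop apply its recommended runtime power-management tweaks.
run("powertop", "--auto-tune")
```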
like this
DaGeek247 likes this.
There was a post a while back of someone trying to eke every single watt out of their computer. Disabling XMP and running the RAM at the slowest speed possible saved like 3 watts, I think. An impressive savings, but at the cost of HORRIBLE CPU performance. And you do actually need at least a little bit of grunt for a NAS.
At work we have some of those Atom-based NASes, and the combination of weak CPUs and horrendous single-channel RAM speeds makes them absolutely crawl. One HDD on its own performs the same as their whole RAID 10 array.
like this
onewithoutaname likes this.
Yeah.
In general, 'big' CPUs have an advantage because they can run at much, much lower clock speeds than Atoms, yet still be way faster. There are a few exceptions, like Ryzen 3000+ (excluding APUs), which idle notoriously hot thanks to the multi-die setup.
Peripherals and IO will do that. The cores pull 5-6W while the IO die pulls 6-10W.
techpowerup.com/review/amd-ryz…
AMD Ryzen 7 5700X Review - Finally an Affordable 8-Core
With the Ryzen 7 5700X, AMD is finally offering a more affordable 8-core processor. In our review, we take a close look at how this $265 CPU performs against the Ryzen 7 5800X, and also compare it to Intel's Alder Lake lineup, including the i5-12600K…TechPowerUp
Same with auto overclocking mobos.
My ASRock sets VSoC to a silly high voltage with EXPO. Set that back down (and fiddle with some other settings/disable the IGP if you can), and it does help a ton.
...But I think AMD's MCM chips just do idle hotter. My older 4800HS uses dramatically less, even with the IGP on.
And heat your room in the winter!
Add spring + autumn if you live up north.
Stuff designed for much higher peak usage tends to have a lot more waste.
For example, a 400W power source (which is what's probably in the original PC of your example) will waste more power than a lower wattage one (unless it's a very expensive one), so in that example of yours it should be replaced by something much smaller.
Even beyond that, everything in there - another example, the motherboard - will have a lot more power leakage than something designed for a low power system (say, an ARM SBC).
Unless it's a notebook, that old PC will always consume more power than, say, an N100 Mini-PC, much less an ARM based one.
All true, yep.
Still, the clocking advantage is there. Stuff like the N100 also optimizes for lower costs, which means higher clocks on smaller silicon. That's even more dramatic for repurposed laptop hardware, which is much more heavily optimized for its idle state.
For example, a 400W power source (which is what's probably in the original PC of your example) will waste more power than a lower wattage one
In my experience power supplies are most efficient near 50% utilization. be quiet! PSUs have charts about it.
The way one designs hardware is to optimize for the most common usage scenario, with enough capacity to account for the peak-use scenario (and with some safety margin on top).
(In the case of silent power sources, the design would also target lower power leakage in the common usage scenario so as to reduce the need for fans, and the physical circuit design would account for things like airflow and space for a large, slower fan, since those are quieter.)
However, specifically for power sources, if you want to handle more power you have to, for example, use larger capacitors and switching MOSFETs so they can handle more current, and those have more leakage, hence more baseline losses. Mind you, using more expensive components one can get higher-power stuff with less leakage, but that's not going to happen outside specialist power supplies which are specifically designed for high-peak use AND low baseline power consumption, and I'm not even sure there's a genuine use case for such a design that justifies paying the extra cost for high-power, low-leakage components.
In summary, whilst theoretically one can design a high-power, low-leakage power source, it's going to cost a lot more because you need better components, and that's not going to be a generic desktop PC power source.
That said, since silent PC power sources are designed to produce less heat, which means less leakage (power leakage is literally power turning into heat), even though the design is targeted at that power source's most common usage scenario (which is not going to be 15W), it probably still means better components and hence lower baseline leakage, so they should waste less power if that desktop is repurposed as a NAS. Still won't beat a dedicated ARM SBC (not even close), but it might end up cheap enough to be worth it if you already have that PC with a silent power source.
The GTX 480 is efficient by modern standards. If Nvidia could make a cooler that could handle 600 watts in 2010 you can bet your sweet ass that GPU would have used a lot more power.
Well that and if 1000 watt power supplies were common back then.
How about a Raspberry Pi? I've got one (Raspberry Pi 400) running my Home Automation setup with a couple USB 3.0 ports. Was thinking there's gotta be some add-ons for Home Assistant to put some external storage to good use.
Don't need anything too fancy. Just looking for some on-site backup and maybe some media storage
Technology reshared this.
Yeah, I guess I should have been clear that's part of what I was thinking (although to be honest I'm mostly a schmuck who pays for a few streaming services and uses that)
What exactly would be the main choking point? Horsepower of the Pi to take that stored file and stream it to the client?
So I believe the Pi 4 was the first to have an actual Ethernet controller rather than essentially a built-in USB-to-Ethernet adapter, so bandwidth to your HDDs/Ethernet shouldn't be a problem.
Streaming directly off of the Pi should be tolerable. A bit slower than a full-fat computer with tons of RAM for caching and CPU power to buffer things, but fine. There are some quirks with USB-connected HDDs that make them a bit slower than they should be (still, in 2025, UASP isn't a given somehow), but streaming ultimately doesn't need that much bandwidth.
What's going to be unbearable is transcoding. If you're connecting some shitty-ass smart TV that only understands, like, H.264 and your videos are H.265, then that has to get converted, and that SUCKS. Plex by default also likes to play videos at a lower bitrate sometimes, which means transcoding.
There are also other weird quirks to look out for. Like, someone else was (I think) doing exactly what you wanted to do, but no matter what, the experience was unbearable. Apparently LVM was somehow too much compute for the Pi to handle, and as soon as they switched to raw ext4 they could stream perfectly fine. I don't remember why this was a problem, but it's just kind of a reminder of how weak these devices actually are compared to "full" computers.
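Just to make the transcoding point concrete (my illustration, not from the post): a software H.265-to-H.264 transcode is essentially the job below, and doing it in real time is what flattens a Pi's CPU. Filenames are placeholders.

```python
# Minimal illustration of a software transcode (the CPU-heavy path a Pi would take).
# Requires ffmpeg on PATH; the file names are placeholders.
import subprocess

def transcode_h265_to_h264(src: str, dst: str) -> None:
    subprocess.run(
        [
            "ffmpeg",
            "-i", src,           # H.265/HEVC source
            "-c:v", "libx264",   # re-encode video to H.264 in software (expensive)
            "-preset", "veryfast",
            "-c:a", "copy",      # pass the audio through untouched
            dst,
        ],
        check=True,
    )

transcode_h265_to_h264("episode_hevc.mkv", "episode_h264.mkv")
```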
I've got two RPis: a Pi 5 running Home Assistant, and a Pi 4 with a USB drive caddy acting as little more than a NAS (it also does all the downloading through Radarr etc.).
I find them perfectly adequate.
My gaming rig acts as my emby server as it's basically on all the time and it has a beefy gfx card that can handle transcoding.
Technology reshared this.
None of that really matters for a home media server. Even the limited SATA ports, worst case you have to grab a cheap expansion card.
Power consumption is a much bigger concern, a purpose built NAS is much more efficient than a random old PC.
like this
DaGeek247 e onewithoutaname like this.
Technology reshared this.
Even the most expensive Synology only has space for 8 drives with only one 10Gbit ethernet port.
You can build something yourself for less with much better performance.
Technology reshared this.
That's not true at all. Synology will sell you 24 bay rack mounted devices and 12 bay towers, as well as expansion modules for both with more bays you can daisy chain to them.
Granted, I believe those are technically marketed as enterprise solutions, but you can buy a 12 bay unit off of Amazon for like two grand diskless, so... I mean, it's a thing.
Not saying you should, and it's definitely less cost effective (and less powerful, depending on what you have laying around) than reusing old hardware, but it does exist.
I think the self-hosting community needs to be more honest with itself about treating self-hosting and building server hardware at home as separate hobbies.
You absolutely don't need server-grade hardware for a home/family server, but I do see building a proper server as a separate activity, kinda like building a ship in a bottle.
That calculation changes a bit if you're trying to host some publicly available service at home, but even that is a bit of a separate thing unless you're running a hosting business, at which point it's not really a home server anyway, even if it happens to sit inside your house.
like this
DaGeek247 likes this.
You absolutely don't need server-grade hardware for a home/family server
Server-grade hardware makes a lot of sense even for home use. My NAS is tucked away in a closet, having IPMI is so much more convenient when you can’t easily hook it up to a keyboard and mouse.
I'm currently running some stuff off an old laptop, which I also have tucked away somewhere, and just... remote desktop in for most of the same functionality. And even if you can't be bothered to flip it open on the rare occasion you can't get to the point where the OS will let you remote in, there are workarounds for that these days. And of course the solution to "can't hook it up to a keyboard and mouse" in that case is that the thing comes with both (and its own built-in UPS) out of the box.
Nobody is saying that server-grade solutions aren't functional or convenient. They exist for a reason. The argument is that a home/family server you don't need to run at scale can work perfectly fine without them, only losing minor quality-of-life features, and is a perfectly valid way to upcycle old or discarded consumer hardware.
I totally agree - and depending on your needs & budget: slightly older server-grade equipment has much higher idle power usage compared to consumer stuff (servers didn't really know how to idle until "recently"). And if you don't host a ton of different things for different users (i.e. you don't need all the PCIe lanes), you get much faster CPUs for the same money.
The only server-grade things you need are ofc disk drives that are gonna do server stuff.
And a good PSU (but a nice Seasonic is almost server-grade anyways). When ppl talk about power usage they tend to forget PSUs (they check their PC usage with a shitty PSU that itself can't idle low & maybe doesn't even get to 90% at peak loads).
HBAs are cheap, IPMI isn't at all needed under normal use cases, and ECC is way overkill.
For most people a halfway decent PC that isn't failing is plenty.
like this
onewithoutaname likes this.
Hardware is boring. Doing some research is boring. People don't care about boring stuff. Or their data.
"Let's put every single family photo taken between 1976 and today on this and only this one shitty drive. And let me spin up an Immich container on my trusty raspberry. I have watched a YouTube video or two in my days. I think I know what I'm doing."
Bonus points for "but ssh is all you need", "static electricity has never been a problem for me" and "what gpu do you recommend for jellyfin?".
OK. Science time. Somewhat arbitrary values used; the point is that there's an amortization calculation, and you'll need to run your own with accurate input values.
A PC drawing 100W 24/7 uses ~877 kWh per year; at $0.15/kWh that's $131.49 per year.
A NAS drawing 25W 24/7 uses ~219 kWh per year; at $0.15/kWh that's $32.87 per year.
So, in this hypothetical case you "save" about $100/year on power costs running the NAS.
Assuming a capacity-equivalent NAS might cost $1200, you're better off using the PC you have rather than buying a NAS for about 12 years.
This ignores that the heat generated by the devices is desirable in winter so the higher heat output option has additional utility.
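If it helps, here's that back-of-the-envelope amortization as a small Python sketch; the 100W/25W draws, $0.15/kWh rate, and $1200 NAS price are just the assumed inputs from the post above, so swap in your own:

```python
# Rough amortization sketch (assumed inputs; swap in your own numbers).
HOURS_PER_YEAR = 24 * 365  # ~8760 h

def annual_cost(watts: float, price_per_kwh: float) -> float:
    """Yearly electricity cost for a device running 24/7."""
    return watts / 1000 * HOURS_PER_YEAR * price_per_kwh

pc = annual_cost(100, 0.15)    # old PC: ~$131/year
nas = annual_cost(25, 0.15)    # dedicated NAS: ~$33/year
savings = pc - nas             # ~$99/year saved by running the NAS

nas_price = 1200               # hypothetical capacity-equivalent NAS
print(f"Payback: {nas_price / savings:.1f} years")  # ~12 years
```

Swapping in the UK numbers quoted further down the thread (£0.25/kWh, 60W vs 10W, a £200 second-hand NAS) gives the roughly two-year payback mentioned there.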
like this
subignition likes this.
Technology reshared this.
... 100W? Isn't that like a really bygone era? CPUs of the past decade can idle at next to nothing (like, there isn't much difference between an idling i7/i9 and a Pentium from the same era/family).
Or are we talking about ARM? (Sorry, I don't know much about them.)
Technology reshared this.
All devices on the computer consume power.
The CPU being the largest in this context. Older processors usually don't have as aggressive throttling as modern ones for low power scenarios.
Similarly, the "power per watt" of newer processors is incredibly high in comparison, meaning they can operate at much lower power levels while running the same workload.
Assuming a capacity equivalent NAS might cost $1200
Either you already have drives and could use them in a new NAS or you would have to buy them regardless and shouldn’t include them in the NAS price.
8 drives could go into most computers I think. Even 6 drive NAS can be quite expensive.
https://a.co/d/jcUR3yV
I bought a two-bay Synology for $270, and a 20TB HDD for $260. I did this for multiple reasons. The HDD was on sale, so I bought it and kept buying things. Also, I couldn't be buggered to learn everything necessary to set up a homemade NAS. Also also, I didn't have an old PC. My current PC is a Ship of Theseus that I originally bought in 2006.
You're not wrong about a NAS equivalent to my current PC's specs/capacity being more expensive. And yes, I did spend $500+ on my NAS. And yet I also saved several days' worth of study, research, and trial and error by not building my own.
That being said, reducing e-waste by converting old PCs into Jellyfin/Plex streaming machines, NAS devices, or personal servers is a really good idea
In the UK the calculus is quite different, as it's £0.25/kWh or over double the cost.
Also, an empty Synology 4-bay NAS can be gotten for like £200 second hand. Good enough if you only need file hosting. Mine draws about 10W compared to an old Optiplex that draws around 60W.
With that math, using the NAS saves you 1.25 pence per hour, so the NAS pays for itself in about two years.
This ignores that the heat generated by the devices is desirable in winter so the higher heat output option has additional utility.
But the heat is a negative in the summer. So local climate might tip the scales one way or the other.
In the fall/winter in northern areas it's free! (Money that would already be spent on heating).
Summer is a negative though, as the air conditioning needs to keep up. But the additional cost is only ~1/3 of the heat output for most ACs (100W of heat requires <30W of extra electrical input to pump out).
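That ~1/3 rule of thumb falls out of a typical air conditioner's coefficient of performance (COP); a rough sketch, assuming a COP of about 3.5:

```latex
P_{\text{AC}} \approx \frac{P_{\text{heat}}}{\mathrm{COP}} \approx \frac{100\,\mathrm{W}}{3.5} \approx 29\,\mathrm{W}
```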
like this
DaGeek247 likes this.
OK. R5 3600, RTX 3070, and 4 spinning drives. Idles at 62W. 80W under normal load (2 concurrent streams). This is a hilariously over-specced NAS. It's all 2nd- or 3rd-life PC parts (outside of the spinning rust), so financially speaking I'm happy with the result.
The long term goal is to use it as a homelab separate from anything I need to work all the time. I want to try running some LLMs locally and use it to control some home automation stuff. That'll stress it.
Edit: so yeah, it's double yours.
The Xeon 2224G workstation with 32GB of ECC ram I got on eBay pulls 15 watts from the wall streaming 4k video on Plex.
It didn't have 6 bays but if I needed it I could move the guts to a bigger case
I mean... my old PC burns through 50-100W, even at idle and even without a bunch of spinning hard drives. My actual NAS barely breaks that under load with all bays full.
I could scrounge up enough SATA inputs on it to make for a decent NAS if I didn't care about that, and I could still run a few other services with the spare cycles, but... maybe not the best use of power.
I am genuinely considering turning it into a backup box I turn on under automation to run a backup and then turn off after completion. That's feasible and would do quite well, as opposed to paying for a dedicated backup unit.
I see what you mean, and I have that (old PC with a bunch of 2.5" HDDs formatted as ZFS).
For me power consumption is more important than performance, so I'm looking for a lower power solution for photo sharing, music collection and backups.
Proxmox VE Helper-Scripts
The official website for the Proxmox VE Helper-Scripts (Community) repository. Featuring over 400+ scripts to help you manage your Proxmox Virtual Environment.Proxmox VE Helper-Scripts
And as usual everyone is saying NAS, but talking about servers with a built in NAS.
I'm not saying you can't run your services on the same machine as your NAS, I'm just confused why every time there's a conversation about NASs it's always about what software it can run.
The way I see it, a box of drives still needs something to connect it to your network.
And that something that can only do a basic connection costs only a little less than something that can run a bunch of other stuff too.
You can see why it all gets bundled together.
I somehow doubt that.
My last desktop PC has been retasked as an HTPC. The CPU in it requires a graphics card for the system to POST, and it's currently mounted in an SFF case with barely room for two 2.5" drives, so it would either make for a shitty, difficult-to-service, bulky-for-what-it-does, power-inefficient NAS, or I'd have to buy a new case and CPU.
My current machine is in an mATX mini-tower, there's room for hard disks and the 7700X has integrated graphics so I could haul the GPU out, but it's still kind of bulky for what you'd get.
So I'm gonna keep my Synology in service for a little while longer, then build a NAS from scratch selecting components that would be good for that purpose.
like this
mPony likes this.
I used to have a 5700G system that I had to switch out for a 14600K system for QuickSync passthrough.
I got my 14600K down to 55W from 75W with everything else being equal. Insane how efficient some setups can be.
My 16TB Pi sips 13W max, or 8W idle. But it has no encoding power or enough storage for normal work, so it's warm storage.
like this
mPony likes this.
I want to reduce wasteful power consumption.
But I also desire ECC for stability and data-corruption avoidance, and hardware redundancy for failures (which have actually happened!!)
Begrudgingly I'm using Dell rack-mount servers. For the most part they work really well: stupid easy to service, unified remote management, lotssss of room for memory, thick PCIe lane counts, stupid cheap 2nd-hand RAM, and stable.
But they waste ~100 watts of power per device... That stuff adds up, even if we have incredibly cheap power.
like this
mPony likes this.
PCIe SATA card with 12/16/20 ports, PCIe to SATA 3.0 (6 Gb/s) controller expansion card, supports SATA 3.0 devices - AliExpress
Smarter Shopping, Better Living! Aliexpress.com
ebay.ca/itm/155132091204
INSPUR 9211-8i 6Gbps SAS LSI 2008 HBA IT Mode ZFS FreeNAS unRAID+2*SFF-8087 SATA | eBay
Find many great new & used options and get the best deals for INSPUR 9211-8i 6Gbps SAS LSI 2008 HBA IT Mode ZFS FreeNAS unRAID+2*SFF-8087 SATA at the best online prices at eBay! Free shipping for many products!eBay
So, it's better if I get a normal PCIe-to-SATA card and connect them individually.
Then just RAID them in software.
Also, what are your thoughts on second-hand drives, and just monitoring them and replacing them as needed? (I'm currently saving up for six good new 4TB drives lol)
With TrueNAS, yes, a SATA card connected to bare drives is the preferred way. I have done it differently with enterprise hardware and virtualization, but it's not really supposed to be done that way. And ZFS is not technically "RAID" in the classic sense, but it does implement its own RAID-like redundancy (RAIDZ and mirrors) as part of an integrated filesystem and volume manager. There are also things you can do with faster NVMe drives like SLOG, L2ARC, and SPECIAL vdevs to store pool metadata. But some of these can fail and wipe out all your data if you aren't careful. So read a lot.
Second hand drives are fine in my opinion as long as SMART is not reporting any immediate errors. Just assume you will have failures and have spares built into the zfs volume.
I’m not an expert by any stretch but I have been doing this for 10 plus years so I have some experience.
Interesting, thanks.
If you're no expert, then I'm a newbie lol.
I will try a SATA card and RAID them in software.
What would you recommend for 6x4TB?
I know there are RAID levels and mirrors; I was thinking RAID 5 but I'm still unsure. I also have an IcyDock for 2.5" drives that I can RAID separately with SSDs when I have the funds.
What is your experience with RAID, and what's the safest bet on old hardware, if running 24/7 with important data?
I would think that right now the sweet spot for good used drives is between 4-8TB. Check out Backblaze's drive stats for some good info about failure rates for older drives.
backblaze.com/blog/category/cl…
Yeah, RAID 5 is fine (in ZFS terms it's just called raidz or raidz1). You could also do something like raidz2 (which is essentially RAID6, with two parity drives). There is some newer stuff in TrueNAS called dRAID which does some interesting things with the spares. It's kinda like the old RAID 5EE stuff if you're familiar with that. Just google it and read up on it.
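Not from the comment above, just a rough sketch of what those layouts leave usable with the 6x4TB drives in question (ignoring ZFS overhead and the TB-vs-TiB difference):

```python
# Very rough usable-capacity estimates for a 6 x 4TB pool (ignores ZFS overhead).
drives, size_tb = 6, 4

layouts = {
    "raidz1 (RAID5-like, 1 parity drive)":  (drives - 1) * size_tb,  # survives 1 failure
    "raidz2 (RAID6-like, 2 parity drives)": (drives - 2) * size_tb,  # survives 2 failures
    "3 striped mirror pairs":               (drives // 2) * size_tb, # fast rebuilds
}

for name, usable in layouts.items():
    print(f"{name}: ~{usable} TB usable")   # ~20, ~16, ~12 TB respectively
```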
Safest bet on old hardware… in my opinion find some old enterprise level stuff somebody is upgrading out of. I get lots of hand-me-downs that way. This stuff is meant to run 24/7, keep running forever, and is usually upgraded before it’s really not useful to anyone. Word of warning, this stuff is generally not power efficient, or quiet for that matter. So I wouldn't be running this in my bedroom. Well unless you're cold 'cause your heater is broken and love lots of white noise 😀
As a hardware guy going on 20+ years, let me offer some basic advice. If this data is important, which you mentioned it was, RAID is NOT backup. Have separate backups. Yes, I know it's expensive, but hardware can and does fail, sometimes irrecoverably. ZFS does a good job helping with this with snapshots and the ability to sync easily. For me, I just follow the 3-2-1 rule. Yeah, it's kinda outdated, but I'm old.
The 3-2-1 rule is basically:
- 3 copies
- Primary data (on its own pool).
- Local backup (on a separate ZFS pool, ideally on different hardware). This is where ZFS replication is useful; it's built into TrueNAS (see the sketch after this list).
- Off‑site/cloud backup (replicated ZFS dataset or traditional backup tool like restic/Borg to cloud).
- 2 different media
- e.g., Primary on SSDs, backup on HDDs; or primary on local NAS, backup in cloud.
- 1 off‑site
- Replicate ZFS snapshots to a remote location (another site or cloud).
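And purely as an illustration of that ZFS replication step (TrueNAS normally drives this through its Replication Tasks UI; the pool and dataset names below are made up):

```python
# Sketch of snapshot + replication to a second local pool (hypothetical names).
import subprocess

snap = "tank/data@nightly-2025-12-01"

# 1. Snapshot the primary dataset.
subprocess.run(["zfs", "snapshot", snap], check=True)

# 2. Send it to the backup pool (full send shown; incrementals use `zfs send -i`).
send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
subprocess.run(["zfs", "receive", "-F", "backup/data"], stdin=send.stdout, check=True)
send.stdout.close()
send.wait()
```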
Oh and one other thing. If you are using TrueNAS be mindful there are two flavors now, TrueNAS Core and TrueNAS Scale. The interfaces are slightly different but the main differences are:
- TrueNAS Core is based on FreeBSD and is the older, more mature “classic NAS” platform, optimized for rock‑solid file serving with jails and VMs.
- TrueNAS Scale is based on Debian Linux and is designed for “scale‑out” and hyperconverged use: clustering, containers, and modern virtualization on newer hardware.
Hope this is useful….
Hard Drive Stats Archives
Backblaze regularly publishes statistics and insights based on our hard drives. Look back through all the blog posts going over the Hard Drive Stats.Backblaze Blog | Cloud Storage & Cloud Backup
Very informative, thanks!
Seems like Seagate 4-8TB is the sweet spot.
Is there any difference between the models of the Seagate drives? Or are the IronWolf NAS drives just the better choice?
Also, currently I can fund all SSDs for primary, and I'm not that interested in read speeds. I'm more interested in a safe space for files to be stored in without fear of loss.
I have an old Dell server tower running TrueNAS Scale. Once I get a PCIe SATA card I will set it up with RAID5.
And ZFS is just a backup of the RAID, like a sync?
And then I think my move would be to get 6 Seagate drives lol
When I looked into this I found that, for TrueNAS, using ZFS with RAW disks is generally preferable.
I wound up writing custom firmware to my hardware RAID card so that it would be effectively “transparent” and yield direct hardware access to the disks.
Big shout out to Windows 11 and their TPM bullshit.
Was thinking that my wee "Raspberry PI home server" was starting to feel the load a bit too much, and wanted a bit of an upgrade. Local business was throwing out some cute little mini PCs since they couldn't run Win11. Slap in a spare 16 GB memory module and a much better SSD that I had lying about, and it runs Arch (btw) like an absolute beast. Runs Forgejo, Postgres, DHCP, torrent and file server, active mobile phone backup etc. while sipping 4W of power. Perfect; much better fit than an old desktop keeping the house warm.
Have to think that if you've been given a work desktop machine with a ten-year old laptop CPU and 4GB of RAM to run Win10 on, then you're probably not the most valued person at the company. Ran Ubuntu / GNOME just fine when I checked it at its original specs, tho. Shocking, the amount of e-waste that Microsoft is creating.
Question, what's the benefit of running a separate DHCP server?
I run openwrt, and the built in server seems fine? Why add complexity?
I'm sure there's a good reason I'm just curious.
So on mine, I haven't bothered to change from the ISP-provided router, which is mostly adequate for my needs, except that I need to do some DNS shenanigans, so I take over DHCP to specify my DNS server, which is beyond the customization the ISP router provides.
Frankly, I've been thinking of an upgrade because they don't do NAT loopback, and while I currently work around that with different DNS results for local queries, it's a bit wonky, and I'm starting to get WiFi 7 devices and could use an excuse to upgrade to something more in my control.
The router provided with our internet contract doesn't allow you to run your own firmware, so we don't have anything so flexible as what OpenWRT would provide.
Short answer; in order to Pi-hole all of the advertising servers that we'd be connecting to otherwise. Our mobile phones don't normally allow us to choose a DNS server, but they will use the network-provided one, so it sorts things out for the whole house in one go.
Long, UK answer: because our internet is being messed with by the government at the moment, and I'd prefer to be confident that the DNS look-ups we receive haven't been altered. That doesn't fix everything - it's a VPN job - but little steps.
The DHCP server provided with the router is so very slow in comparison to running our own locally, as well. Websites we use often are cached, but connecting to something new takes several seconds. Nothing as infuriating as slow internet.
Oh you mean DNS server, yes ok that makes sense. Yeah I totally understand running your own.
If I understand correctly, DHCP servers just assign local IPs on initial connection, and configure other stuff like pointing devices to the right DNS server, gateway, etc
Gotcha! No worries. Networking gets more and more like sorcery the deeper you go.
Networking and printers are my two least favorite computer things.
True for notebooks.
(For years my home NAS was an old Asus EEE PC)
Desktops, on the other hand, tend to consume a lot more power (how bad it is depends on the generation) - they're simply not designed to be a quiet device sitting in a corner continuously running a task with low CPU demands: stuff designed for far more demanding tasks will have things like much bigger power sources, which are less efficient at low power demand (when something is designed to put out 400W, wasting 5 or 10W is no big deal; when it's designed to put out 15W, wasting 5 or 10W would make it horribly inefficient).
Meanwhile the typical NAS out there is running an ARM processor (known for low power consumption) or, at worst, a low-powered Intel processor such as the N100.
Mind you, the idea of running your own NAS software is great (one can do way more with that than with a proprietary NAS, since it's far more flexible) as long as you put it on the right hardware for the job.
When I had my setup with an ASUS EEE PC, I had mobile external HDDs plugged into it via USB.
Since my use case was long-term storage and feeding video files to a media TV box, the bandwidth limit of USB 2.0 and using HDDs rather than SSDs was fine. Also, back then I had 100Mbps Ethernet, so that too limited bandwidth.
Even in my current setup, where I use a Mini-PC to do the same, I still keep the storage on external mobile HDDs, and now the bandwidth limits are 1Gbps Ethernet and USB 3.0, which is still fine for my use case.
Because my use case now is long-term storage, home file sharing and torrenting, my home network uses the same principles as distributed systems and modern microprocessor architectures: smaller, faster data stores with often-used data close to where it's used (for example, fast smaller SSDs with the OS and game executables inside my gaming machine, plus a torrent server inside that same Mini-PC using its internal SSD), layered outwards with decreasing speed and increasing size (that same desktop machine has an internal "storage" HDD filled with low-use files, and one network hop from it there's the Mini-PC NAS sharing its external HDDs containing longer-term storage files).
The whole thing tries to balance storage costs with usage needs.
I suppose I could improve performance a bit more by setting up some of the space on the internal SSD in the Mini-PC as a read/write cache for the external HDDs, but so far I haven't had the patience to do it.
I used to design high-performance distributed computing systems, and funnily enough my home setup follows the same design principles (which I had not noticed until thinking about it just now as I wrote this).
Yeah, different hardware is designed for different use cases and generally won't work as well for other use cases, which is also why desktops seldom make for great NAS servers (their fans will also fail from constant use, plus their design spec is for much higher power usage, so they have a lot more power waste even if throttled down).
That said, my ASUS EEE PC lasted a few years on top of a cabinet in my kitchen (which is where the Internet came into my house, so the router was also there) with a couple of external HDDs plugged in, and that's a bit of a hostile environment (because some of the particulates from cooking, including fat, don't get pulled out and end up accumulating there).
At the moment I just have a Mini-PC in my living room with a couple of external HDDs plugged in that works as a NAS, TV media box and home server (including a WireGuard VPN on top of a 1Gbps connection, which at peak is somewhat processor intensive). It's an N100 and the whole thing has a TDP of 15W, so the fan seldom activates. So far that seems to be the best long-term solution, plus it's multi-purpose unlike a proprietary NAS. It's some of the best €140 (not including the HDDs) I've ever spent.
Laptops are better, because they have an integrated uninterruptible power supply, but worse because most can't fit two hard drives internally. Less of a problem, now that most have USB3. Just run external RAID if you have to.
Arguably, a serious home server will need a UPS anyway to keep the modem and router online, but a UPS for just the NAS is still better than no UPS at all. Also, only a small UPS is needed for the modem and router. A full desktop UPS is much larger.
They make m.2 to SATA adapters that have like 10 SATA ports. A laptop motherboard in a case with one of those would be very interesting. I have plans for one but I need to buy some parts (keyboard and laptop fan).
Edit: the adapters run hot and are kind of fragile. I'd recommend having a thermal pad under it thermally coupling it to the motherboard and giving it some support.
I have an old machine I've been using as an Unraid server for years. It's an i7-3770 paired with 32GB of RAM and like 4x2TB drives.
Finally upgrading it because it's just not going to keep meeting my needs, and frankly it's wicked old (might keep it as a GitLab runner server or something). Finally "upgrading" by taking some old hardware (and buying some new) to have a full compute + storage setup. Proxmox (Ryzen 9 5900XT + 128GB RAM) with all the compute, and TrueNAS (Ryzen 7 3700X + 64GB RAM + 8x16TB drives [LSI Logic SAS9211-8I] [raidz2 / 82.62 TiB usable]) for storage, with a private 10G direct link between the two (Intel X550T2BLK).
I'd use an old PC as a NAS, but turn it on only on demand, when it's needed. Which does hurt its convenience factor a little.
Note: talking about desktops.
Why would I throw it away, when I can give it to someone who needs it more, or sell it?
Because selling is always a hassle, dealing with choosing beggars and scammers, and it may not be worth much anymore for general use.
For example, my old PC is an i7 4770K... it can't run Windows 11 or play remotely recent games. I don't know anyone who could use this thing, so to save a few watts I took out the GPU, put it in eco mode and have been using it as my Linux server.
My NUC uses 6-7W idle.
I have played around with some mini PCs (Minisforum and Beelink brands). They're neat, but they turned out to be not very reliable - two have already died prematurely - and unfortunately they are not end-user serviceable. Lack of storage expansion options is an issue as well, if you don't just want to stack a bunch of external USB drives on top of each other.
The main concern with old hardware is probably power draw/efficiency; depending on how old your PC is, it might not be the best choice. But remember: companies get rid of old hardware fairly quickly, so it can be a good choice and might be available dirt cheap or even free.
I recently replaced my old Synology NAS from 2011 with an old Dell Optiplex 3050 workstation that companies threw away.
The system draws almost twice the power (25W) compared to my old Synology NAS (which only drew 13W, both with 2 spinning drives), but the increase in processing power and flexibility using TrueNAS is very noticeable; it also allowed me to replace an old Raspberry Pi (6W) that only ran Pi-hole.
So overall, my new home-server is close in power draw to the two devices it replaced, but with an immense increase in performance.
I've made a decent NAS out of a Raspberry Pi 4. It used USB to SATA converters and old hard drives.
My setup has one 3TB drive and two 1.5TB drives. The 1.5TB drives form a 3TB volume using RAID, which then combines with the 3TB drive to make redundant storage.
Yes it's inefficient AF but it's good enough for full HD streaming so good enough for me.
I'm too stingy to buy better drives.
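For what it's worth, one way a layout like that could be built with mdadm; this is a guess at the setup, not what the poster actually ran, and the device names are placeholders:

```python
# Hypothetical reconstruction of the layout above: stripe the two 1.5TB drives,
# then mirror the stripe against the 3TB drive. Device names are placeholders.
import subprocess

def run(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

# RAID0 the two 1.5TB drives into one ~3TB device.
run("mdadm", "--create", "/dev/md0", "--level=0", "--raid-devices=2",
    "/dev/sdb", "/dev/sdc")

# RAID1 mirror of that stripe and the real 3TB drive for redundancy.
run("mdadm", "--create", "/dev/md1", "--level=1", "--raid-devices=2",
    "/dev/md0", "/dev/sdd")
```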
The moment the Windows installer detected it, a blue screen ended the installation.
But a Linux installation worked and afterwards it was even possible to disable the damaged hardware permanently.
The laptop still runs without further problems.
Don't throw away your old PC
Literally first-world problems, right? There's absolutely no need to tell that to someone who doesn't live in a rich country. Old gear always finds some use or gets sold/donated away.
DRAM prices are spiking, but I don't trust the industry's reasons why
DRAM prices are spiking, but I don't trust the industry's reasons why
There are a lot of reasons to be skeptical.Adam Conway (XDA)
Technology reshared this.
cartel that has previously done cartel things continues to do more cartel things
more at 11
like this
DaGeek247, toothpaste_sandwich e onewithoutaname like this.
The Memory Cartel: we can give you that feeling of childhood wonder, or, erase those embarrassing things keeping you awake at night. Or... we can make your enemies remember things that will haunt them forever... for a price.
OR:
The Ram Cartel: leather, bears, tops, chains and spikes, their safe word is 'disestablishmentarianism'
Just built a rig to give me enough raw power to move however I need to when this all blows up. Went with a Ryzen 5000 series CPU, DDR4 RAM, a godawful motherboard and an Intel B580 GPU. It's cheap, but I now have more options.
Too bad I couldn't get the OPNsense VM working properly, so I'm stuck with keeping the Firewalla running. But that may not matter, as the Nazis want to kill the internet anyway. We may be forced to rely on wonky mixnets like Reticulum.
For example, OpenAI's new "Stargate" project reportedly signed deals with Samsung and SK Hynix for up to 900,000 wafers of DRAM per month to feed its AI clusters, which is an amount close to 40% of total global DRAM output if it's ever met. That's an absurd amount of DRAM.
Will these even be useful on the second hand market, or are these chips gonna be on specialized PCBs for these machines?
like this
SuiXi3D likes this.
Will these ever be useful on the second hand market
Nope, not ever. Even if it's standard form factor gear.
They will be disposed of ("recycled"), since that grants the largest asset depreciation tax break, and is the easiest economically. The grand majority of all data center gear gets trashed instead of reused or repurposed through the second hand market.
Source: I used to work at a hardware recycling facility, where much of the perfectly good hardware was required to be shredded, down to the components, because of these stipulations. It's such a waste.
Dumping buckets of tens of TB worth of modern RAM into a shredder is... infuriating.
like this
fistac0rpse e SuiXi3D like this.
like this
SuiXi3D likes this.
like this
onewithoutaname likes this.
I think when the economics of destroying a thing is better than reusing a thing, we should maybe have some sort of incentives toward reuse.
I get that the logistics of setting up what's basically a secondary supply chain is difficult, but I've got to believe it would be for the better.
I get that the logistics of setting up what’s basically a secondary supply chain is difficult, but I’ve got to believe it would be for the better.
hear me out: an org that guaranteed destruction of any residual data and ensured that no component or resource was wasted, was responsible nationwide for the collection of all e-waste into resource streams OR repair for reuse.
I'm just saying, techpriests might make me reevaluate my views on organized religion.
The amount of labor that would go into it really isn't that high.
This is what distribution is for.
The company that owns the hardware is not the company that recycles it. The recycler could make a profit by reselling these components, but they're not allowed to.
Many of these components still have to be pulled out so that labor cost is already a wash. The additional labor cost of testing, selling, packaging, and shipping is baked into the price in the secondary market.
Not everything is worth being resold, but many things are and those things are often not allowed to be resold due to destruction contracts.
The NAND market is an effective monopoly that has been caught price fixing in the past. They desperately want to keep prices as high as they can so they tightly control supply to prevent having any excess product. This screws everyone over as soon as there's a spike in demand that they failed to account for.
Instead of just keeping a consistent supply and allowing prices to drop from competition, we end up with a price rollercoaster that peaks every few years then crashes back down again. The severity is just higher than usual due to the higher demand from data centers.
The market desperately needs a new player that just consistently creates supply instead of playing stupid games, but the barrier to entry is too high.
like this
onewithoutaname likes this.
like this
onewithoutaname likes this.
Fifteen Years Together and Her Tone Still Hits Like a Chalkboard
So yesterday I’m just trying to run a simple errand at the local store. This place is pet friendly, which is the only reason I tolerate it, so I had my dog with me. He’s the friendly one in the family, obviously.
And who’s standing there at her job like a plot twist I didn’t ask for?
My ex-wife. Fifteen-plus years of history wrapped in one human speedbump.
She spots my dog and suddenly she’s all sunshine, petting him like we didn’t survive a whole era together. My dog loves it, because he’s a dog and he’s smart enough not to get emotionally involved. Meanwhile, I’m standing there doing my usual routine: stay pleasant, stay tolerable, don’t let the annoyance leak out of my face.
My current wife talked to her more than I did, which is probably for the best. I kept it tight. Didn’t say much. Didn’t need to. I was just trying to get through the moment without my eye twitching.
But here’s the part that hit me like a bad flashback:
After all those years, her tone still grates on me. It’s unreal. It’s that chalkboard-scrape sound that makes your molars hurt. It’s that dial-up internet scream from the 90s, the one that made the whole house vibrate before you could connect for five minutes of slow loading misery. Somehow her voice still has that frequency that goes straight to the spine.
It wasn’t emotional. It wasn’t dramatic. It wasn’t even awkward.
It was just… noisy. Not loud, just that same old tone that reminds me exactly why life is better now.
We walked out. My wife and I joked about it. My dog? He just wanted more scratches. Must be nice.
Anyway, that’s how my quiet shopping trip turned into an unexpected reunion with the soundtrack of my past. Life really does throw curveballs, even the annoying ones.
Tech-tinkering geocacher who questions everything and dodges people on purpose. Introverted agnostic, punk at heart, and a self-taught dev who learned things the hard way because nothing else ever sticks.
Eric Foltin
Geocacher / Pessimist / Agnostic / Introvert / Archivist / Punker / Self-Taught DevEric Foltin
optissima
in reply to sonofearth: Controlling the presentation of "facts" is essential to fascism, but yeah it hurts to see