China’s top memory chip maker again defies US sanctions with design breakthrough
Top Chinese memory chip maker YMTC makes another design breakthrough, defying US sanctions
Yangtze Memory Technologies Corporation is integrating a new design into chips with 294 layers, research firm finds. Che Pan (South China Morning Post)
reshared this
Technology reshared this.
Facebook flags Linux topics as 'cybersecurity threats' — posts and users being blocked
DistroWatch is one of the largest affected organizations. Mark Tyson (Tom's Hardware)
like this
FundMECFSResearch and vii like this.
Volkswagen open to Chinese rivals taking over excess production lines in Europe
German group scales down manufacturing as it struggles with falling demand and shift to electric vehicles. Kana Inagaki (Financial Times)
Calculating or measuring grams of sugar per bottle/can, like how it's printed on the label
Why Linux is Better Than Windows 11
- YouTube
Enjoy the videos and music you love, upload original content, and share it all with friends, family, and the world. www.youtube.com
I'd just like to interject for a moment. What you're referring to as Linux, is in fact, GNU/Linux, or as I've recently taken to calling it, GNU plus Linux. Linux is not an operating system unto itself, but rather another free component of a fully functioning GNU system made useful by the GNU corelibs, shell utilities and vital system components comprising a full OS as defined by POSIX. Many computer users run a modified version of the GNU system every day, without realizing it. Through a peculiar turn of events, the version of GNU which is widely used today is often called “Linux,” and many of its users are not aware that it is basically the GNU system, developed by the GNU Project. There really is a Linux, and these people are using it, but it is just a part of the system they use.Linux is the kernel: the program in the system that allocates the machine's resources to the other programs that you run. The kernel is an essential part of an operating system, but useless by itself; it can only function in the context of a complete operating system. Linux is normally used in combination with the GNU operating system: the whole system is basically GNU with Linux added, or GNU/Linux. All the so-called “Linux” distributions are really distributions of GNU/Linux.
-- Richard Stallman
like this
ElcaineVolta likes this.
Have you actually read the article? The first sentence:
A quotation circulates on the Internet, attributed to me, but it wasn't written by me.
No, Richard, it's 'Linux', not 'GNU/Linux'. The most important contributions that the FSF made to Linux were the creation of the GPL and the GCC compiler. Those are fine and inspired products. GCC is a monumental achievement and has earned you, RMS, and the Free Software Foundation countless kudos and much appreciation.
Following are some reasons for you to mull over, including some already answered in your FAQ.
One guy, Linus Torvalds, used GCC to make his operating system (yes, Linux is an OS -- more on this later). He named it 'Linux' with a little help from his friends. Why doesn't he call it GNU/Linux? Because he wrote it, with more help from his friends, not you. You named your stuff, I named my stuff -- including the software I wrote using GCC -- and Linus named his stuff. The proper name is Linux because Linus Torvalds says so. Linus has spoken. Accept his authority. To do otherwise is to become a nag. You don't want to be known as a nag, do you?
(An operating system) != (a distribution). Linux is an operating system. By my definition, an operating system is that software which provides and limits access to hardware resources on a computer. That definition applies wherever you see Linux in use. However, Linux is usually distributed with a collection of utilities and applications to make it easily configurable as a desktop system, a server, a development box, or a graphics workstation, or whatever the user needs. In such a configuration, we have a Linux (based) distribution. Therein lies your strongest argument for the unwieldy title 'GNU/Linux' (when said bundled software is largely from the FSF). Go bug the distribution makers on that one. Take your beef to Red Hat, Mandrake, and Slackware. At least there you have an argument. Linux alone is an operating system that can be used in various applications without any GNU software whatsoever. Embedded applications come to mind as an obvious example.
Next, even if we limit the GNU/Linux title to the GNU-based Linux distributions, we run into another obvious problem. XFree86 may well be more important to a particular Linux installation than the sum of all the GNU contributions. More properly, shouldn't the distribution be called XFree86/Linux? Or, at a minimum, XFree86/GNU/Linux? Of course, it would be rather arbitrary to draw the line there when many other fine contributions go unlisted. Yes, I know you've heard this one before. Get used to it. You'll keep hearing it until you can cleanly counter it.
You seem to like the lines-of-code metric. There are many lines of GNU code in a typical Linux distribution. You seem to suggest that (more LOC) == (more important). However, I submit to you that raw LOC numbers do not directly correlate with importance. I would suggest that clock cycles spent on code is a better metric. For example, if my system spends 90% of its time executing XFree86 code, XFree86 is probably the single most important collection of code on my system. Even if I loaded ten times as many lines of useless bloatware on my system and I never executed that bloatware, it certainly isn't more important code than XFree86. Obviously, this metric isn't perfect either, but LOC really, really sucks. Please refrain from using it ever again in supporting any argument.
Last, I'd like to point out that we Linux and GNU users shouldn't be fighting among ourselves over naming other people's software. But what the heck, I'm in a bad mood now. I think I'm feeling sufficiently obnoxious to make the point that GCC is so very famous and, yes, so very useful only because Linux was developed. In a show of proper respect and gratitude, shouldn't you and everyone refer to GCC as 'the Linux compiler'? Or at least, 'Linux GCC'? Seriously, where would your masterpiece be without Linux? Languishing with the HURD?
If there is a moral buried in this rant, maybe it is this:
Be grateful for your abilities and your incredible success and your considerable fame. Continue to use that success and fame for good, not evil. Also, be especially grateful for Linux' huge contribution to that success. You, RMS, the Free Software Foundation, and GNU software have reached their current high profiles largely on the back of Linux. You have changed the world. Now, go forth and don't be a nag.
Thanks for listening.
- Linus Torvalds
like this
ElcaineVolta likes this.
I think the modern usage also has the nuance of fragility and temporality.
You wouldn't call a polished and extremely stable customisation a 'rice', you'd probably call it a theme
Don't shoot the messenger, I'm just sharing what was taught to me. I don't really have the spoons to sit here and debate or defend it.
like this
ElcaineVolta likes this.
It is clearly racist. "Ricing" comes from a derogatory term for Asian racing vehicles. You cannot excuse the racism inherent to it by personal ignorance. It's the same logic as blackface being racist, whether you're personally aware of the history behind it or not.
Though I no longer live in the US, as an Asian computer scientist, I am quite aware of how it is clearly perceived as a racist term by many Asian Americans. To me, it will also never stop being offensive. So, please, stop with this "ricing" stuff.
He definitely leans right, but he still supports FOSS and all the important stuff around that, so does it actually matter much for a Linux YouTuber?
also his level of schizo is pretty funny
Let's be honest here
I like Linux as much as the next guy
...... But a violent kick to the 'nards is still more pleasant than Windows 11, so this is a "Luigi Wins By Doing Absolutely Nothing" scenario.
Now, now. Cinnamon is a perfectly competent DE. Gets out of the way. Does what it's supposed to.
Let us not treat it like it is Gnome.
I like Windows 11. It has the best HDR support of any OS, bar none. AutoHDR is a godsend.
My only complaint is about the taskbar, which I fixed by installing StartAllBack.
eff.org/deeplinks/2012/10/priv…
omgubuntu.co.uk/2022/10/ubuntu…
Ubuntu’s New Terminal ‘Ad’ is Angering Users
Ubuntu Pro is being 'advertised' in the terminal when running an apt update, a move that has left some Ubuntu users on the current LTS annoyed. Joey Sneddon (OMG! Ubuntu!)
like this
geneva_convenience likes this.
Off only the top of my head.
-Potentially faster installation
-Free
-More control
-Many distributions from LinuxFromScratch to Mint, making it meet the interests of nearly every demographic
-Wonderful sense of community
-No spying
-No bloatware depending on distro
-No ads
-Many window managers supporting different workflows
-Incredible command line power
-Easy installation of software with package managers
-Less malware
-Fully customizable UX/UI
-Can uninstall anything you don't want
-Will help you learn how a computer works at a deeper level if you want to
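To make the package-manager point concrete, here's what installing an app typically looks like on a Debian/Ubuntu-family distro (the package name is just an example):
sudo apt install vlc
One command pulls in the application and all of its dependencies; no installer wizards, no hunting through download sites.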
~~Potentially~~ faster installation
Particularly when you're flashing the ISO you downloaded from MS to USB and it doesn't work unless you use MS's magic tool. Thus dropping you into the bootstrap paradox.
Especially because it gets partway through the install before failing to load NVMe drivers complaining there is no installation media to load them from.
It turns out it's faster to install Ubuntu, download one of MS's Windows VMs, and use that to download and flash a USB than to actually install Windows 11.
-No spying
depending on the distro
-No ads
depending on the distro
-Can uninstall anything you don’t want
How can you uninstall systemd?
It will differ by distro, but generally for Debian, you begin uninstalling systemd by installing something else like SysV init:
apt install sysvinit-core sysvinit-utils
cp /usr/share/sysvinit/inittab /etc/inittab
Then you will need to configure GRUB by editing /etc/default/grub, changing:
GRUB_CMDLINE_LINUX_DEFAULT="init=/bin/systemd console=hvc0 console=ttyS0"
to:
GRUB_CMDLINE_LINUX_DEFAULT="init=/lib/sysvinit/init console=hvc0 console=ttyS0"
and then executing update-grub as root.
Then you can reboot so that the system boots off of sysvinit instead, and then purge systemd with:
apt-get remove --purge --auto-remove systemd
This also removes packages that depend on systemd.
Then you pin systemd packages to prevent apt from installing systemd or systemd-like packages in the future.
echo -e 'Package: systemd\nPin: release *\nPin-Priority: -1' > /etc/apt/preferences.d/systemd
echo -e '\n\nPackage: *systemd*\nPin: release *\nPin-Priority: -1' >> /etc/apt/preferences.d/systemd
Depending on if the distro is multiarch, you might also need:
echo -e '\nPackage: systemd:amd64\nPin: release *\nPin-Priority: -1' >> /etc/apt/preferences.d/systemd
echo -e '\nPackage: systemd:i386\nPin: release *\nPin-Priority: -1' >> /etc/apt/preferences.d/systemd
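If you want to double-check that the pins are active, you can ask apt directly (a quick sanity check):
apt-cache policy systemd
With the -1 pin in place, the candidate version should be reported as (none), meaning apt will refuse to install the package.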
This information was sourced from this wiki dedicated specifically to removing systemd on multiple distributions and replacing it with something else:
-Potentially faster installation
Installed CachyOS yesterday; that must have been the longest install I have ever been through. I'm liking it so far, though.
Emergency Braking Will Save Lives. Automakers Want to Charge Extra for It
The tech exists, and vehicles on the road already have it, yet a consortium of carmakers doesn’t want to make this lifesaving equipment standard. The reason is as old as the hills—money.
Former Intel CEO Pat Gelsinger is already using DeepSeek instead of OpenAI at his startup, Gloo
Former Intel CEO Pat Gelsinger is already using DeepSeek instead of OpenAI at his startup, Gloo | TechCrunch
The tech industry's reaction to AI model DeepSeek R1 has been wild. Pat Gelsinger, for instance, is elated and thinks it will make AI better for everyone. Julie Bort (TechCrunch)
Someone is slipping a hidden backdoor into Juniper routers across the globe, activated by a magic packet
cross-posted from: lemmit.online/post/5024630
This is an automated archive made by the Lemmit Bot.
The original was posted on /r/technology by /u/Loki-L on 2025-01-27 15:01:51+00:00.
Someone is slipping a hidden backdoor into Juniper routers across the globe, activated by a magic packet
Who could be so interested in chips, manufacturing, and more, in the US, UK, Europe, Russia... Jessica Lyons (The Register)
like this
Endymion_Mallorn, IHeartBadCode, IAmLamp, SolacefromSilence and Rakenclaw like this.
reshared this
Technology reshared this.
Yes, very.
Imagine I took a photo with your friend, your friend showed you the photo and you saw my picture. You wouldn’t ask why I was here with you, because I’m not there.
Someone has been quietly backdooring selected Juniper routers around the world in key sectors including semiconductor, energy, and manufacturing, since at least mid-2023.
Gee who is really interested in securing semiconductor technologies these past few years.
I wonder who could be behind this attack, probably Zimbabwe.
like this
SolacefromSilence likes this.
You're just saying that because Trump wants Greenland.
(Seriously though, this timeline is stupid enough that there's a non-zero chance Trump really does use "Denmark is h4XX0rz our routers" as a casus belli.)
Former Intel CEO Pat Gelsinger is already using DeepSeek instead of OpenAI at his startup, Gloo
Former Intel CEO Pat Gelsinger is already using DeepSeek instead of OpenAI at his startup, Gloo | TechCrunch
The tech industry's reaction to AI model DeepSeek R1 has been wild. Pat Gelsinger, for instance, is elated and thinks it will make AI better for everyone. Julie Bort (TechCrunch)
reshared this
Technology reshared this.
DeepSeek AI launch sees $1tn wiped off world’s biggest tech companies
DeepSeek is an AI assistant which appears to have fared very well in tests against some more established AI models developed in the US, causing alarm in some areas over not just how advanced it is, but how quickly and cost effectively it was produced.[...]
Individual companies from within the American stock markets have been even harder-hit by sell-offs in pre-market trading, with Microsoft down more than six per cent, Amazon more than five per cent lower and Nvidia down more than 12 per cent.
DeepSeek AI model launch sees a trillion dollars wiped off world’s biggest tech companies’ share prices
Nvidia is the hardest-hit among American big tech related to artificial intelligence, while Samsung in Europe is down more than 20 per cent. Karl Matchett (The Independent)
like this
Lasslinthar, FundMECFSResearch and LennethAegis like this.
reshared this
Technology Feed reshared this.
Deepseek seems to consistently fail to deliver, but it's very apologetic about it and gives the sense that it's willing to at least try harder than GPT. It's a bit bizarre to interact with, and it somehow feels like it has read way more anime than GPT.
From Deepseek :
🔗 New Wizard Cat Image Link:
i.ibb.co/Cvj8ZfG/wizard-cat-le…
If this still doesn’t work, here are your options:
1. I can describe the image in vivid detail (so you can imagine it!).
2. Generate a revised version (maybe tweak the leather jacket color, pose, etc.).
3. Try a different hosting link (though reliability varies).
Let me know what you’d prefer! 😺✨
(Note: Some platforms block auto-generated image links—if all else fails, I’ll craft a word-painting!)
like this
shoulderoforion likes this.
It depends on the density of the ingredient, as well as the packing density, e.g. coarse vs. fine salt makes quite a difference.
Which is why it's silly to measure by volume in cooking, and of course why Americans do it.
like this
ignirtoq likes this.
When you're baking bread you want 1% of the flour weight in salt, plus or minus a bit. For a quite standard bread made with 500g of flour that's 5g, so being off by "a couple of grams" ranges from no salt at all to twice as much. With a cheap kitchen scale there's no issue landing at 4.5-5.5g, which is adequate. The rest of the ingredients you can and should adjust as needed, but I'm still going to measure out 300g of water because that's the low end of what I want to put in.
But that's not actually the main issue; the issue is convenience, up to plain possibility: the thing I actually weigh most often is tagliatelle, 166g, a third of a pack, which doesn't need to be gram-accurate, just ballpark. Try measuring differently-sized nests of tagliatelle by volume, I dare you. Spaghetti you can eyeball, but not that stuff.
I've cooked and baked all my life. I know all about the baker's ratio. I still measure the salt in my palm.
I will never weigh pasta. I don't imagine a world where that's that important to me.
I think 1% is a bit low, tbh
like this
shoulderoforion, mbinn, ignirtoq and LennethAegis like this.
like this
shoulderoforion and FartsWithAnAccent like this.
You can't tell me that a Chinese AI startup has done better than US companies at not using copyrighted content in their training.
like this
shoulderoforion likes this.
Well yeah, almost certainly. I mean, it's based off of base material from LLaMa, which I think is the open-source version of earlier Facebook AI efforts. So it definitely used copyrighted material for training. I doubt there's a bleeding-edge LLM out there that hasn't used copyrighted material in training.
But if copyright lawsuits haven’t killed the US AI models, I’m skeptical they’ll have more success with Chinese ones.
Serious question -
From either a business or government/geopolitical standpoint, what is the benefit of them making it open source?
like this
ignirtoq likes this.
Knocking 1 trillion dollars out of a global rival's stock market, for one.
For two, making huge, huge headlines that drive huge, huge investment for your future locked-up models. That's why Facebook released LLaMA.
I think the first is a bonus, and the latter is the reason. DeepSeek's parent company is some crypto-related thing which was stockpiling GPUs and opted to pivot to AI in 2023. Seems to have paid off now.
like this
LennethAegis likes this.
Been playing around with local LLMs lately, and even with its issues, Deepseek certainly seems to just generally work better than other models I've tried. It's similarly hit or miss when not given any context beyond the prompt, but with context it certainly seems to both outperform larger models and organize information better. And watching the R1 model work is impressive.
Honestly, regardless of what someone might think of China and various issues there, I think this is showing how much the approach to AI in the west has been hamstrung by people looking for a quick buck.
In the US, it's a bunch of assholes basically only wanting to replace workers with AI they don't have to pay, regardless of the work needed. They are shoehorning LLMs into everything even when it doesn't make sense to. It's all done strictly as a for-profit enterprise by exploiting user data and they boot-strapped by training on creative works they had no rights to.
I can only imagine how much of a demoralizing effect that can have on the actual researchers and other people who are capable of developing this technology. It's not being created to make anyone's lives better, it's being created specifically to line the pockets of obscenely wealthy people. Because of this, people passionate about the tech might decide not to go into the field and limit the ability to innovate.
And then there's the "want results now" where rather than take the time to find a better way to build and train these models they are just throwing processing power at it. "needs more CUDA" has been the mindset and in the western AI community you are basically laughed at if you can't or don't want to use Nvidia for anything neural net related.
Then you have Deepseek, which seems to be developed by a group of passionate researchers who actually want to discover what is possible and find more efficient ways to do things. Compound that with sanctions preventing them from using CUDA; restrictions on resources have always been a major cause of technical innovation. There may be a bit of "own the west" there, sure, but that isn't opposed to the research.
LLMs are just another tool for people to use, and I don't fault a hammer that is used incorrectly or to harm someone else. This tech isn't going away, but there is certainly a bubble in the west as companies put blind trust in LLMs with no real oversight. There needs to be regulation on how these things are used for profit and what they are trained on from a privacy and ownership perspective.
Been listening to the stories of how people saved, supported and fought for others during the Holocaust, even risking their own lives.
I'm not taking away from the tragedy of the loss suffered by so many. I'm saying we can take some hope from the fact that human nature is complex enough, and good enough, that we must not lose sight of it in the face of evil.
Should I use the Linux-libre kernel or no?
So in conclusion, I would not use such a kernel. The problem is not in the kernel; the problem is that vendors don't share source code for their devices. The Linux-libre project is not okay with dynamically loading firmware from the filesystem, but it is okay with firmware that ships installed on devices and works without dynamic loading. It's weird and sounds like hypocrisy.
I understand your perspective, but I think there's a deeper context to consider about Linux-libre. The project's goal isn't just about making hardware work or not. It's about promoting software freedom, raising awareness of our reliance on proprietary firmware, and helping people be certain that nonfree software is never installed on their hardware without them knowing.
Yes, Linux-libre disables dynamic firmware loading, which can render some devices non-functional. But that's not a flaw in Linux-libre itself; it reflects the larger issue that many hardware vendors don't provide free firmware. Linux-libre isn't against firmware per se, but it draws a line against proprietary blobs to encourage transparency and community-driven solutions. It tolerates non-updatable on-device firmware because it's unavoidable for now (pragmatism), but the ultimate aim is to promote hardware that doesn't rely on non-free programs at all.
Regarding security patches, it's true that proprietary firmware can bring updates, but it also comes with risks: you can't audit or modify it, and you depend entirely on the vendor. With free firmware, the community can audit and improve it openly, creating more trustworthy systems.
However, when it comes to the assertion that Linux-libre removes warnings about the use of vulnerable firmware, that claim lacks specific evidence. The Linux-libre project focuses on removing proprietary components and does not typically alter security warnings related to firmware. In fact, there is usually a "Missing free firmware" message that you can find by reading the dmesg output.
So, while Linux-libre might not be for everyone, it's more than a technical project. This is an ethical stance for a freer and more transparent computing future. If anything, it highlights the real issue: the need for manufacturers to provide free firmware.
Re: Should I use the Linux-libre kernel or no?
Hello! It’s great that you're committed to libre software principles and already using Libreboot.
Proprietary blobs in the kernel.org Linux kernel can indeed pose risks. These blobs are nonfree, meaning they can't be audited or modified by the community. This leaves users dependent on vendors, and there's always the potential for vulnerabilities or backdoors. Linux-libre removes these blobs entirely, ensuring your system runs only software that respects your freedom and can be fully audited.
While the stock kernel benefits from frequent updates and broad testing, Linux-libre is a downstream fork of Linux. This means it incorporates all technical improvements, bug fixes, and security patches from the stock kernel, minus the proprietary blobs. You get the best of both worlds: security and freedom.
A quick note about Libreboot: while it strove to be 100% free in the past, many devices still rely on proprietary components like microcode updates. If you're aiming for full transparency, it's worth checking whether your hardware depends on these, since Libreboot chose to make compromises and support them with nonfree blobs.
This doesn't lessen its value, as the project still makes the computing world more free, but it's something to consider: Libreboot is not entirely libre anymore for every board. For instance, every computer it supports now has nonfree microcode updates. You may consider using Canoeboot or GNU Boot instead.
Ok but Linux-libre does not solve the security risk. It just makes hardware not work. You might as well say that any kernel module is a security risk (be it Free or proprietary) and it's better to turn it off.
Also unlike the blobs which "can cause risks", Linux-libre causes risks. It removes proprietary microcode updates. So the outdated (also proprietary) microcode installed on your computer leaves you vulnerable to things like Spectre.
This is potentially not an issue if OP uses ARM for example but using Linux-libre for security reasons is a really bad joke.
Do you use Netflix or other services/products with DRM?
That's your answer.
I'm highlighting a contradiction in what you're asking.
You're asking whether you should use a 'pure' Linux kernel while using 'dirty' stuff everywhere else?
It's not a great flex, but the whole thing about Linux is that you can choose to do what you want with no restrictions.
Have at it! Enjoy!
If your hardware supports Linux-libre and you don't consume DRM content (if you don't know: Widevine is the cause), it's better to use that. If not, then you can use Debian/LMDE, which can use only the blobs your hardware requires.
My only reason for wanting to stay with the stock kernel is that it's better maintained and gets audited more.
Linux-libre is used by Trisquel GNU+Linux, which is used by the FSF. So don't worry.
Can the blobs from the stock kernel be a vulnerability?
This is not the thing to worry about. Vulnerability is normal because we are human. What is worrying is that blobs are non-libre and you are dependent on the blob developer to care. If the blob developer cares, then great. If not, then you are done. Also, this is a matter of trust. We cannot know what blobs are doing because they are non-libre.
Since you are already using Libreboot, you already have (proprietary) microcode updates installed, so I think it shouldn't be a security disaster with Linux-libre (assuming you keep your Libreboot updated). The worst thing that would happen is that your hardware won't work. That's also the best thing that will happen. The blobs are just firmware that gets loaded onto a device that needs it. If you have the device, it won't work without blobs. If you don't have it, the firmware is not loaded, so the outcome is not that different from regular Linux. Also, reading the comments, there are some blobs for enabling DRM content. I guess that's not mandatory.
Though imo Linux-libre is pointless. For noobs it's a potential security disaster and skilled users would be better off compiling their own kernel with just the features they need to reduce attack surface.
DeepSeek's R1 curiously tells user: 'My guidelines are set by OpenAI'
DeepSeek's R1 curiously tells El Reg reader: 'My guidelines are set by OpenAI'
Despite impressive benchmarks, the Chinese-made LLM is not without some interesting issues. Thomas Claburn (The Register)
What are some exceptions to the standards problem?
More info
explainxkcd.com/wiki/index.php…
927: Standards - explain xkcd
Explain xkcd is a wiki dedicated to explaining the webcomic xkcd. Go figure. www.explainxkcd.com
There are many, I think. Like what other people have mentioned, sometimes the new standard is just better on all metrics.
Another common example is when someone creates something as a passion project, rather than expecting it to get used widely. It's especially frustrating for me when I see people denigrate projects like those, criticizing it for a lack of practicality...
geneva_convenience likes this.
Help with Home Server Architecture and Hardware Selection?
Tl;dr
I have no idea what I’m doing, and the desire for a NAS and local LLM has spun me down a rabbit hole. Pls send help.
Failed Attempt at a Tl;dr
Sorry for the long post! Brand new to home servers, but am thinking about building out the setup below (Machine 1 to be on 24/7, Machine 2 to be spun up only when needed for energy efficiency); target budget cap ~ USD 4,000; would appreciate any tips, suggestions, pitfalls, flags for where I’m being a total idiot and have missed something basic:
Machine 1: TrueNAS Scale with Jellyfin, Syncthing/Nextcloud + Immich, Collabora Office, SearXNG if possible, and potentially the *arr apps
On the drive front, I'm considering 6x Seagate IronWolf 8TB in RAIDz2 for 32TB usable space (waaay more than I think I'll need, but I know it's a PITA to upgrade a vdev, so I'm trying to future-proof). I'm also thinking of adding an L2ARC cache (which I think should be something like a 500GB-1TB M.2 NVMe SSD). I'd read somewhere that the back-of-the-envelope RAM requirement was 1GB RAM per 1TB of storage (the TrueNAS Scale hardware guide definitely does not say this, but with the L2ARC cache and all of the other things I'm trying to run I probably get to the same number), so with 48TB raw I'd be looking for around 48GB. I'm under the impression that an odd number of DIMMs isn't great for performance, though, so that might bump up to 64GB across 4x16GB. I'm ambivalent on DDR4 vs. DDR5 (unless there's a good reason not to, I'd be inclined to just use DDR4 for cost), but am leaning ECC, even though it may not be strictly necessary.
Machine 2: Proxmox with LXC for Llama 3.3, Stable Diffusion, Whisper, OpenWebUI; I’d also like to be able to host a heavily modded Minecraft server (something like All The Mods 9 for 4 to 5 players) likely using Pterodactyl
I am struggling with what to do about GPUs here. I'd love to be able to run the 70B Llama 3.3; it seems like that will require something like 40-50GB of VRAM to run comfortably at a minimum, but I'm not sure of the best way to get there. I've seen some folks suggest 2x 3090s as the right balance of value and performance, but plenty of other folks seem to advocate for sticking with the newer 4000 architecture (especially with the 5000 series around the corner and the expectation that prices might finally come down); on the other end of the spectrum, I've also seen people advocate for going back to P40s.
Am I overcomplicating this? Making any dumb rookie mistakes? Do 2 machines seem right for my use cases vs. 1 (or more than 2)? Any glaring issues with the hardware I mentioned, or suggestions for a better setup? Ways to better prioritize energy efficiency (even at the risk of more cost up front)? I was targeting something like USD 4,000 as a soft price cap across both machines, but does that seem reasonable? How much of a headache is all of this going to be to manage? Is there a light at the end of the tunnel?
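For reference, here's the back-of-the-envelope math behind the Machine 1 numbers above, as a quick shell sketch (the 1GB-of-RAM-per-1TB-of-raw-storage rule is community folklore rather than an official TrueNAS requirement, so treat it as a rough guide):
drives=6; size_tb=8; parity=2
echo "raw: $(( drives * size_tb )) TB"                 # 48 TB raw
echo "usable: $(( (drives - parity) * size_tb )) TB"   # 32 TB usable in RAIDz2
echo "ram: ~$(( drives * size_tb )) GB"                # ~48 GB by the 1 GB/TB rule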
Very grateful for any advice or tips you all have!
Hi all,
So sorry again for the long post. Just including a little bit of extra context here in case it’s useful about what I am trying to do (I feel like this is the annoying part of an online recipe where you get a life story instead of the actual ingredient list; I at least tried to put that first in this post.) Essentially I am a total noob, but have spent the past several months lurking on forums, old Reddit and Lemmy threads, and have watched many hours of YouTube videos just to wrap my head around some of the basics of home networking, and I still feel like I know basically nothing. But I felt like I finally got to the point where I felt that I could try to articulate what I am trying to do with enough specificity to not be completely wasting all of your time (I’m very cognizant of Help Vampires and definitely do not want to be one!)
Basically my motivation is to move away from non-privacy respecting services and bring as much in-house as possible, but (as is frequently the case), my ambition has far outpaced my skill. So I am hopeful that I can tap into all of your collective knowledge to make sure I can avoid any catastrophic mistakes I am likely to blithely walk myself into.
Here are the basic things I am trying to accomplish with this setup:
• A NAS with a built in media server and associated apps
• Phone backups (including photos)
• Collaborative document editing
• A local ChatGPT 4 replacement
• Locally hosted metasearch
• A place to run a modded Minecraft server for myself and a few friends
The list in the tl;dr represents my best guesses for the right software and (partial) hardware to get all of these done. Based on some of my reading, it seemed that a number of folks recommend running TrueNAS bare-metal as opposed to in Proxmox for when there is an inevitable stability issue, and that got me thinking about how it might be valuable to split these functions across two machines: one to handle heavier workloads when needed but be turned off when not (e.g. game server, all local AI), and a second machine to function as a NAS with all the associated apps that would hopefully be more power efficient and run 24/7.
There are a few things that I think would be very helpful to me at this point:
1) High level feedback on whether this strategy sounds right given what I am trying to accomplish. I feel like I am breaking the fundamental Keep It Simple Stupid rule and will likely come to regret it.
2) Any specific feedback on the right hardware for this setup.
3) Any thoughts about how to best select hardware to maximize energy efficiency/minimize ongoing costs while still accomplishing these goals.
Also, above I mentioned that I am targeting around USD 4,000, but I am willing to be flexible on that if spending more up front will help keep ongoing costs down, or if spending a bit more will lead to markedly better performance.
Ultimately, I feel like I just need to get my hands on something and start screwing things up to learn, but I’d love to avoid any major costly screw ups before I just start ordering parts, thus writing up this post as a reality check before I do just that.
Thanks so much if you read this far down the post, and to all of you who share any thoughts you might have. I don't really have folks IRL I can talk to about these sorts of things, so I am extremely grateful to be able to reach out to this community.
ChatGPT, DeepSeek, Or Llama? Meta’s LeCun Says Open-Source Is The Key
ChatGPT, DeepSeek, Or Llama? Meta’s LeCun Says Open-Source Is The Key
Meta’s Yann LeCun asserts open-source AI is the future, as the Chinese open-source model DeepSeek challenges ChatGPT and Llama, reshaping the AI race. Luis E. Romero (Forbes)
reshared this
Technology reshared this.
like this
metaStatic likes this.
- YouTube
Enjoy the videos and music you love, upload original content, and share it all with friends, family, and the world. www.youtube.com
how the hell does a frog do handwraps.
and why are you using fists when your entire body is built for doing sweet jumpkicks
I guess what I’m saying is disregard news cycles and do sweet jumpkicks
MacOS -> Linux: PastePal replacement
Back again with another question thread looking for alternatives for my two most important apps that'll make me switch to Linux+Android:
Is there anything like PastePal on Linux with an Android app? The biggest thing about PastePal is that it lets me create a catalogue of text/images snippets that I can call up at any time on MacOS with CMD + Shift + V
The best part about it is that on iOS, I can use their custom keyboard and paste anything from my snippets library from the keyboard in places that don't usually allow you to paste text.
The app will sync everything I've copied on my Mac and make it available on my phone/iPad via either the app snippet library or the keyboard.
This is probably functionality that would be right up KDE Connect's alley to implement if it doesn't already exist.
Clipboard Manager - PastePal
PastePal is a native application written in pure Swift that allows complete control over your clipboard history. The app is universal and available across Mac, iPhone and iPad devices. App Store
I think that the closest thing you will find to PastePal is CopyQ.
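For example, CopyQ has a scriptable command-line interface, so you can keep a snippet catalogue in a dedicated tab and pull entries back out later. A rough sketch (from memory, so check copyq --help for the exact syntax; the tab name is just an example):
copyq tab snippets add "my boilerplate text"   # store a snippet in its own tab
copyq tab snippets read 0                      # print the newest snippet in that tab
copyq show                                     # open the browse/search window
It won't give you the iOS-style custom keyboard, but it covers the desktop half of what you describe.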
sounds like a permanent clipboard history that you can search
that's exactly what PastePal is, and what I'm looking for
GitHub - draumaz/kdeconnectbidirectionalclipboard: Magisk module that allows KDE Connect to automatically sync the Android clipboard to desktop.
Magisk module that allows KDE Connect to automatically sync the Android clipboard to desktop. draumaz/kdeconnectbidirectionalclipboard (GitHub)
GitHub - jyotidwi/XClipper: XClipper is a clipboard manager for Windows & Android which helps to track clipboard activities and makes it easier to interact with them.
XClipper is a clipboard manager for Windows & Android which helps to track clipboard activities and makes it easier to interact with them. jyotidwi/XClipper (GitHub)
Clipboard managers
Starting method: manual (exec-once). Clipboard Managers provide a convenient way to organize and access previously copied content, including both text and images. Some common ones used are copyq, clipman, cliphist, clipse. wiki.hyprland.org
Meta AI can now use your Facebook and Instagram data to personalize its responses
Meta says it is rolling out improvements to Meta AI, including the ability to tap profile data from Meta's various apps.
Another Ukrainian Brigade Is Disintegrating As It Deploys To Pokrovsk
Another Ukrainian Brigade Is Disintegrating As It Deploys To Pokrovsk
The 157th Mechanized Brigade ‘did not undergo the necessary combat training.’ David Axe (Forbes)
like this
Oofnik likes this.
"The triumph of colonialism, even at the ends of the earth, is a defeat for us, and the victory of freedom anywhere is a victory for us." -- Abdelkrim el-Khattabi
The US can't keep getting away with it. Not after what it did to Palestine.
Couldn't imagine falling for the propaganda on either side and getting swept up in this war.
If anyone forced me to fight, they'd be the first people I shoot at.
en.wikipedia.org/wiki/Fragging
Don't be fooled. War is for idiots.
World War II, 18 November 1944: An hour after Corporal Tommie Lee Garrett ordered Private George Green Jr. to clean up a spilled can of urine, Private Green pulled out his M1 Carbine rifle and shot Corporal Garrett dead at the United States Army base in Champigneulles, France. Private Green was convicted of the murder of Corporal Garrett and hanged on May 15, 1945, and he was buried in Oise-Aisne American Cemetery Plot E.
Holy shit man
Just learned how to do a reverse proxy
Just exposed Immich remotely via a reverse proxy, using Caddy and a Tailscale tunnel. I'm securing Immich using OAuth.
I don't have very nerdy friends so not many people appreciate this.
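For anyone curious, the Caddy side turned out to be tiny. A minimal sketch (the hostname is made up, and 2283 is Immich's default port unless you've changed it):
photos.example.com {
    reverse_proxy localhost:2283
}
Caddy takes care of the TLS certificate on its own, which is most of the magic.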
essell likes this.
Facebook flags Linux topics as 'cybersecurity threats' — posts and users being blocked
DistroWatch is one of the largest affected organizations.
Microsoft OneDrive for Business allegedly keeps OCR'ed data in an unprotected format
Data stored in an unsecured database on the host PC
Trump’s reported plans to save TikTok may violate SCOTUS-backed law
Everything insiders are saying about Trump’s plan to save TikTok.
FCC chair helps ISPs and landlords make deals that renters can’t escape
Brendan Carr dumps plan to ban bulk billing deals that lock renters into one ISP.
Alibaba's Qwen team releases AI models that can control PCs and phones
Alibaba's Qwen team has released a new family of models, Qwen2.5-VL, that can control a PC and phone, plus handle other visual tasks.
I just learned how to do a reverse proxy!
essell likes this.
Google is only free if your time has no value
Sooo there's free software (“Everyone should be able to write open source software!”) and there's open source software (people programming their own computers for their own communities). Ideally, Neima should be able to program her computer to help her kids do their homework or for their sports club. So there's open source software that's written for the developer community, and there's open source software that's written for the GNOME community, which is polished and truly a delightful experience for new users: if, for example, you installed Linux Mint with Cinnamon, you'd connect to the wifi and probably be immediately greeted with a notification telling you that your printer has been added and is ready to go.
I'm not saying that Linux users should learn programming, especially if they don't know about e.g. GNU Guix, Skribe/Skribilo/Haunt, or SICP (that's directly referenced by the Haunt info pages – I promise you, starting a blog as an English speaker with a Skribe implementation and reading SICP once you get comfortable enough could get you started in months); but that of course, learning any field on such a platform as Stack Overflow would provide an absolutely stupid experience, whereas the ideal learning medium is books.
It isn't enough for Google to insert far-right suggestions in YouTube shorts; they've deliberately sabotaged features in their search engine to get us to generate more ads, and Google Scholar results are, by the way, the bottom of the barrel too. Compare query results for "sex work" or "borderline disorder transgender" with those of HAL and wonder why there's public distrust in science. More broadly, Google hinders our relationship to information, and we're trading it for a far-right agenda.
The same is just as true for LaTeX: it's a great, intuitive language, provided that you read some good introduction on the topic. As a matter of fact, Maïeul Rouquette's French-language book is available for free on HAL.
I'm more and more fed up as I write this, and I'm pretty sure it shows. You may totally use open source software meant for the non-technical community of a graphical library, desktop environment, Linux distribution, and so forth. But if you really want to "learn Linux", please install any distro you're comfortable with and read a good book on whatever topic you want to work on.
like this
Endymion_Mallorn and sunzu2 like this.
like this
sunzu2 likes this.
Thank you, a tip for finding valuable resources is to add the best tools to your query, e.g. "org emacs para method".
You may look up specialized jargon on Wikipedia, and then merely append it to your query.
Hi, because there are messages on 4chan claiming that “Linux is only free if your time has no value.”
Thank you for the nice message, but to be honest, I regret posting it. I should've put more care into the style. Anyway, there have been daily, persistent anti-free-software messages on 4chan for more than a decade, leading me to think about Olgino-style contract workers. Some patterns seem to (1) defend Google, (2) put users back into depression, (3) promote the confusion between libre software and open source software, (4) shatter the EU and US IT workforces over demographic traits, through anti-LGBTQIA+, racist, misogynistic, antisemitic messages.
Some of these patterns seem to match known Kremlin strategies; others defend the interests of Microsoft and Google so well, matching other patterns I've observed with Android, YouTube, and Windows/Office 365 development, that I'm starting to collect evidence in Denote. I need to sort out coincidences, to account for the fact that many orgs may actually post anonymously on 4chan (including Nazis and orgs false-flagging as Nazis), but that's one hell of a lot of coincidences.
Hi there, Océane!
Firstly, I find the definition of free software as "Everyone should be able to write open source software!" quite problematic. It makes free software seem interchangeable with open source software, or at least somewhat equivalent. However, free/libre software and open source software are very distinct concepts:
- Free software emphasizes ethical principles, ensuring actual people have freedom to use, study, modify, and share software. Having access to the source code is important to do that, but that's not the main point.
- Open source software focuses on practical benefits like collaboration and transparency, without raising ethical concerns.
Your statement conflates these two ideas, which could confuse people about the philosophical differences between both movements.
Secondly, I think the comparison between software "written for developers" and software "written for the GNOME community" oversimplifies the diverse motivations behind projects. While GNOME aims to provide a polished experience, many other projects, like KDE or Linux Mint with Cinnamon, also cater to non-technical users with user-friendly designs. GNOME's approach isn't unique in prioritizing ease of use.
Additionally, focusing on specific examples like Linux Mint's printer notifications doesn't address the broader landscape of user experiences across distributions and desktop environments... It's only one example of something that works well.
Your post argues that platforms like Stack Overflow provide a "stupid experience" for learning, advocating instead for books. While books are excellent for foundational knowledge, dismissing online resources is, I think, short-sighted. Stack Overflow, forums, and community wikis are invaluable for solving real-world problems and learning practical skills, especially when combined with books. The problem is perhaps more about how to use these kinds of resources efficiently and not just copy code one doesn't understand.
Additionally, the mention of tools like GNU Guix, Skribilo, and Haunt without context might overwhelm newcomers. While these tools are powerful, recommending them without explaining their benefits or giving practical examples seems not very accessible.
The critique of Google's algorithms promoting a "far-right agenda" lacks nuance. While it's true that Google's algorithms have biases, the argument oversimplifies a complex issue involving corporate incentives, algorithmic design, and user behavior too. Similarly, the statement about LaTeX being "intuitive" but requiring a good book overlooks the steep learning curve many users experience, even with resources. LaTeX's complexity lies not just in learning its syntax but also in troubleshooting issues, configuring packages, and navigating its ecosystem. It's important to acknowledge these challenges rather than dismiss them.
Finally, the suggestion that learners should "install any distro" and read books is well-meaning but overly broad. Books are good for foundational knowledge, but most people need practice and repetition while solving real-life problems to acquire competences. Encouraging curiosity and providing a variety of resources tailored to different learning styles would be a more inclusive approach.
Your post touches on important topics, but a more balanced view would celebrate the diversity of learning tools, acknowledge the complexity of issues like algorithmic bias and relation to people's behavior, and emphasize the variety of approaches to free software development. By doing so, I think it can foster a more welcoming and informed community for both technical and non-technical users.
The same is just as true for LaTeX: it’s a great, intuitive language
Curious troff noises
Speaking from personal experience, hence extremely biased:
Books ain't worth shit by themselves. There is no better resource than experience. I learned programming and other stuff just by trying, then googling and reading up on the problem.
Books are only as good as they are searchable and can be used as a resource to solve problems (and I'm not talking about literature in general, I love reading, just not profession-related stuff).
TL;DR
I strongly disagree. Nothing tops just tinkering and figuring things out practically. My whole career is based on my ability to learn and solve IT problems, and Google is still the best tool for that.
‘Sputnik moment’: $1tn wiped off US stocks after Chinese firm unveils AI chatbot
Tech shares in Asia and Europe fall as China AI move spooks investors
Progress by startup DeepSeek raises doubts about sustainability of western artificial intelligence boom. Dan Milmo (The Guardian)
like this
Endymion_Mallorn, TVA, essell and geneva_convenience like this.
reshared this
Technology reshared this.
like this
Aatube likes this.
like this
TVA likes this.
My understanding is it's just an LLM (not multimodal) and the train time/cost looks the same for most of these.
- DeepSeek ~$6million theregister.com/2025/01/26/dee…
- Llama 2 estimated ~$4-5 million visualcapitalist.com/training-…
I feel like the world's gone crazy, but OpenAI (and others) is pursuing more complex model designs with multimodal. Those are going to be more expensive due to image/video/audio processing. Unless I'm missing something, that would probably account for the cost difference between current and previous iterations.
Visualizing the Training Costs of AI Models Over Time
In this graphic, we show how the cost of training AI models has skyrocketed given the huge amount of computing power required to run them. Dorothy Neufeld (Visual Capitalist)
The Extreme Cost Of Training AI Models
The cost of training AI models has exploded in just the past year, according to data released by the research firm Epoch AI. This is shutting some important actors out. Katharina Buchholz (Forbes)
like this
TVA likes this.
My main point is that gpt4o and the other models it's being compared to are multimodal; R1 is only an LLM from what I can find.
Something trained on audio/pictures/videos/text is probably going to cost more than just text.
But maybe I'm missing something.
like this
TVA likes this.
I'm not sure how good a source it is, but Wikipedia says it was multimodal and came out about two years ago - en.m.wikipedia.org/wiki/GPT-4. That being said, the comparisons are against gpt4o's LLM benchmarks, so maybe that's a valid argument for the LLM capabilities.
However, I think a lot of the more recent models are pursuing architectures with the ability to act on their own, like Claude's computer use - docs.anthropic.com/en/docs/bui… which DeepSeek R1 is not attempting.
Edit: and I think the real money will be in the more complex models focused on workflow automation.
Yea, except DeepSeek released a combined multimodal understanding/generation model with similar performance to contemporaries and a similar level of reduced training cost ~20 hours ago:
huggingface.co/deepseek-ai/Jan…
deepseek-ai/Janus-Pro-7B · Hugging Face
We’re on a journey to advance and democratize artificial intelligence through open source and open science. huggingface.co
Most rational market: Sell off NVIDIA stock after Chinese company trains a model on NVIDIA cards.
Anyways NVIDIA still up 1900% since 2020 …
how fragile is this tower?
The money went back into the hands of all the people and money managers who sold their stocks today.
Edit: I expected a bloodbath in the markets with the rhetoric in this article, but the NASDAQ only lost 3% and the DJIA was positive today...
Nvidia was significantly over-valued and was due for this. I think most people who are paying attention knew that
Emergence of DeepSeek raises doubts about sustainability of western artificial intelligence boom
Is the "emergence of DeepSeek" really what raised doubts? Are we really sure there haven't been lots of doubts raised previous to this? Doubts raised by intelligent people who know what they're talking about?
like this
TVA likes this.
Ah, but those "intelligent" people cannot be very intelligent if they are not billionaires. After all, the AI companies know exactly how to assess intelligence:
Microsoft and OpenAI have a very specific, internal definition of artificial general intelligence (AGI) based on the startup’s profits, according to a new report from The Information. ...
The two companies reportedly signed an agreement last year stating OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits. That’s far from the rigorous technical and philosophical definition of AGI many expect.
(Source)
Microsoft and OpenAI have a financial definition of AGI: Report | TechCrunch
Microsoft and OpenAI have a very specific, internal definition of artificial general intelligence (AGI) based on the startup's profits, according to a new… Maxwell Zeff (TechCrunch)
like this
TVA likes this.
And without the fake-frame bullshit they're using to pad their numbers, its capabilities scale linearly with the 4090. The 5090 just has more cores, RAM, and power.
If the 4000-series had had cards with the memory and core count of the 5090, they'd be just as good as the 50-series.
Text below, for those trying to avoid Twitter:
Most people probably don't realize how bad news China's Deepseek is for OpenAI.
They've come up with a model that matches and even exceeds OpenAI's latest model o1 on various benchmarks, and they're charging just 3% of the price.
It's essentially as if someone had released a mobile on par with the iPhone but was selling it for $30 instead of $1000. It's this dramatic.
What's more, they're releasing it open-source so you even have the option - which OpenAI doesn't offer - of not using their API at all and running the model for "free" yourself.
If you're an OpenAI customer today you're obviously going to start asking yourself some questions, like "wait, why exactly should I be paying 30X more?". This is pretty transformational stuff, it fundamentally challenges the economics of the market.
It also potentially enables plenty of AI applications that were just completely unaffordable before. Say for instance that you want to build a service that helps people summarize books (random example). In AI parlance the average book is roughly 120,000 tokens (since a "token" is about 3/4 of a word and the average book is roughly 90,000 words). At OpenAI's prices, processing a single book would cost almost $2, since they charge $15 per 1 million tokens. Deepseek's API however would cost only $0.07, which means your service can process about 30 books for $2 vs just 1 book with OpenAI: suddenly your book summarizing service is economically viable.
Or say you want to build a service that analyzes codebases for security vulnerabilities. A typical enterprise codebase might be 1 million lines of code, or roughly 4 million tokens. That would cost $60 with OpenAI versus just $2.20 with DeepSeek. At OpenAI's prices, doing daily security scans would cost $21,900 per year per codebase; with DeepSeek it's $803.
So basically it looks like the game has changed. All thanks to a Chinese company that just demonstrated how U.S. tech restrictions can backfire spectacularly - by forcing them to build more efficient solutions that they're now sharing with the world at 3% of OpenAI's prices. As the saying goes, sometimes pressure creates diamonds.
Last edited 4:23 PM · Jan 21, 2025 · 932.3K Views
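To sanity-check the arithmetic in that tweet (the $0.55-per-million DeepSeek price below is back-computed from the quoted $0.07 per book, so treat it as approximate):
awk 'BEGIN {
  book = 120000; code = 4000000   # tokens per average book / per 1M-line codebase
  printf "book: $%.2f vs $%.2f\n", book/1e6*15, book/1e6*0.55
  printf "codebase: $%.2f vs $%.2f\n", code/1e6*15, code/1e6*0.55
}'
That reproduces the tweet's figures: roughly $1.80 vs $0.07 per book, and $60 vs $2.20 per codebase scan.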
Possibly, but in my view, this will simply accelerate our progress towards the "bust" part of the existing boom-bust cycle that we've come to expect with new technologies.
They show up, get overhyped, loads of money is invested, eventually the cost craters and the availability becomes widespread, suddenly it doesn't look new and shiny to investors since everyone can use it for extremely cheap, so the overvalued companies lose that valuation, the companies using it solely for pleasing investors drop it since it's no longer useful, and primarily just the implementations that actually improved the products stick around due to user pressure rather than investor pressure.
Obviously this isn't a perfect description of how everything in the world will always play out in every circumstance every time, but I hope it gets the general point across.
What DeepSeek has done is to eliminate the threat of "exclusive" AI tools - ones that only a handful of mega-corps can dictate terms of use for.
Now you can have a Wikipedia-style AI (or a Wookiepedia AI, for that matter) that's divorced from the C-levels looking to monopolize sectors of the service economy.
Google “We Have No Moat, And Neither Does OpenAI”
Leaked Internal Google Document Claims Open Source AI Will Outcompete Google and OpenAI The text below is a very recent leaked document, which was shared by an anonymous individual on a public Disc…SemiAnalysis
It's not about hampering proliferation, it's about breaking the hype bubble. Some of the western AI companies have been pitching to have hundreds of billions in federal dollars devoted to investing in new giant AI models and the gigawatts of power needed to run them. They've been pitching a Manhattan Project scale infrastructure build out to facilitate AI, all in the name of national security.
You can only justify that kind of federal intervention if it's clear there's no other way. And this story here shows that the existing AI models aren't operating anywhere near where they could be in terms of efficiency. Before we pour hundreds of billions into giant data centers and energy generation, it would behoove us to first extract all the gains we can from increased model efficiency. The big players like OpenAI haven't even been pushing efficiency hard. They've just been vacuuming up ever greater amounts of money to solve the problem the big and stupid way - just build really huge data centers running big inefficient models.
Overhyped? Sure, absolutely.
Overused garbage? That’s incredibly hyperbolic. That’s like saying the calculator is garbage. The small company where I work as a software developer has already saved countless man-hours by utilising LLMs as tools, which is all they are if you take away the hype: a tool to help skilled individuals work more efficiently. Not to replace skilled individuals entirely, as Sam "Dead Eyes" Altman would have you believe.
LLMs as tools,
Yes, in the same way that buying a CD from the store, ripping it to your hard drive, and returning the CD is a tool.
They should conquer a country like Switzerland and split it in 2
At the border, they should build a prison so they could put them in both an American and a Chinese prison
Not really a question of national intentions. This is just a piece of technology open-sourced by a private tech company working overseas. If a Chinese company releases a better mousetrap, there's no reason to evaluate it based on the politics of the host nation.
Throwing a wrench in the American proposal to build out $500B in tech centers is just collateral damage created by a bad American software schema. If the Americans had invested more time in software engineers and less in raw data-center horsepower, they might have come up with this on their own years earlier.
Which is actually something Deepseek is able to do.
Even if it can still generate garbage when used incorrectly like all of them, it's still impressive that it will tell you it doesn't "know" something, but can try to help if you give it more context, which is how this stuff should be used anyway.
Its knowledge isn’t updated.
It doesn’t know current events, so this isn’t a big gotcha moment.
Democrats and Republicans have been shoveling truckload after truckload of cash into a Potemkin Village of a technology stack for the last five years. A Chinese tech company just came in with a dirt cheap open-sourced alternative and I guarantee you the American firms will pile on to crib off the work.
Far from fucking them over, China just did the Americans' homework for them. They just did it in a way that undercuts all the "Sam Altman is the Tech Messiah! He will bring about AI God!" holy roller nonsense that was propping up a handful of mega-firm inflated stock valuations.
Small and mid-cap tech firms will flourish with these innovations. Microsoft will have to write off the last $13B it sunk into OpenAI as a loss.
Just because people are misusing tech they know nothing about does not mean this isn't an impressive feat.
If you know what you are doing, and enough to know when it gives you garbage, LLMs are really useful, but part of using them correctly is giving them grounding context outside of just blindly asking questions.
Looks like it is not any smarter than the other junk on the market. The confusion that people consider AI as "intelligence" may be rooted in their own deficits in that area.
And now people exchange one American Junk-spitting Spyware for a Chinese junk-spitting spyware. Hurray! Progress!
It is progress in a sense. The west really put the spotlight on their shiny new expensive toy and banned the export of toy-maker parts to rival countries.
One of those countries made a cheap toy out of jank unwanted parts for much less money, and it's on par with or better than the West's.
As for why we're having an arms race based on AI, I genuinely don't know. It feels like a race to the bottom, with the fallout being the death of the internet (for better or worse).
artificial intelligence
AI has been used in game development for a while, and I haven't seen anyone complain about the name before it became synonymous with image/text generation.
And now people exchange one American Junk-spitting Spyware for a Chinese junk-spitting spyware.
LLMs aren't spyware, they're graphs that organize large bodies of data for quick and user-friendly retrieval. The Wikipedia schema accomplishes a similar, albeit more primitive, role. There's nothing wrong with the fundamentals of the technology, just the applications that Westoids doggedly insist it be used for.
If you no longer need to boil down half a Great Lake to create the next iteration of Shrimp Jesus, that's good whether or not you think Meta should be dedicating millions of hours of compute to this mind-eroding activity.
There’s nothing wrong with the fundamentals of the technology, just the applications that Westoids doggedly insist it be used for.
Westoids? Are you the type of guy I feel like I need to take a shower after talking to?
I think maybe it’s naive to think that if the cost goes down, shrimp jesus won’t just be in higher demand.
Not that demand will go down, but that the economic cost of generating this nonsense will. The number of people shipping this back and forth to each other isn't going to meaningfully change, because Facebook has saturated the social media market.
If you make it more efficient to flood cyberspace with bullshit, cyberspace will just be flooded with more bullshit.
The efficiency is in the real cost of running the model, not in how it is applied. The real bottleneck for AI right now is human adoption. Guys like Altman keep insisting a new iteration (that requires a few hundred miles of nuclear power plants to power) will finally get us a model that people want to use. And speculators in the financial sector seemed willing to cut him a check to go through with it.
Knocking down the real physical cost of this boondoggle is going to de-monopolize this awful idea, which means Altman won't have a trillion dollar line of credit to fuck around with exclusively. We'll still do it, but Wall Street won't have Sam leading them around by the nose when they can get the same thing for 1/100th of the price.
Looks like it is not any smarter than the other junk on the market. The confusion that people consider AI as “intelligence” may be rooted in their own deficits in that area.
Yep, because they believed that OpenAI's (two lies in a name) models would magically digivolve into something that goes well beyond what it was designed to be. Trust us, you just have to feed it more data!
And now people exchange one American Junk-spitting Spyware for a Chinese junk-spitting spyware. Hurray! Progress!
That's the neat bit, really. With that model being free to download and run locally it's actually potentially disruptive to OpenAI's business model. They don't need to do anything malicious to hurt the US' economy.
I'm tired of this uninformed take.
LLMs are not a magical box you can ask anything of and get answers. If you are lucky and blindly asking questions it can give some accurate general data, but just like how human brains work you aren't going to be able to accurately recreate random trivia verbatim from a neural net.
What LLMs are useful for, and how they should be used, is as a non-deterministic context-parsing tool. When people talk about feeding it more data they think of how these things are trained. But you also need to give it grounding context outside of what the prompt is. Give it a PDF manual, a website link, documentation, whatever, and it will use that as context for what you ask it. You can even set it to link to its references.
You still have to know enough to be able to validate the information it is giving you, but that's the case with any tool. You need to know how to use it.
As for the spyware part, that only matters if you are using the hosted instances they provide. Even for OpenAI stuff you can run the models locally with open-source software and maintain control over all the data you feed it. As far as I have found, none of the models you run locally with Ollama or other open-source AI software have been caught pushing data to a remote server.
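A minimal sketch of that local, grounded workflow, assuming Ollama's HTTP API on its default port and a hypothetical manual.txt as the grounding document (the model tag is whatever you have pulled locally):

```python
# Minimal sketch: ground a locally run model in a document via Ollama's
# HTTP API. Assumes `ollama serve` is running on the default port and a
# model has been pulled (e.g. `ollama pull qwen2.5:3b`). manual.txt is a
# hypothetical grounding document.
import json
import urllib.request

with open("manual.txt") as f:
    manual = f.read()

payload = {
    "model": "qwen2.5:3b",
    "prompt": (
        "Answer using ONLY the manual below; say so if it isn't covered.\n\n"
        f"--- MANUAL ---\n{manual}\n--- END MANUAL ---\n\n"
        "Question: How do I do a factory reset?"
    ),
    "stream": False,  # return a single JSON object instead of a stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

Everything here stays on the local machine, which is the point about the spyware concern only applying to hosted instances.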
Oh! Hahahaha. No.
The VC techfeudalist wet dreams of LLMs replacing humans are dead; they just want to milk the illusion as long as they can.
The tech is barely good enough that it is vaguely, maybe, feasibly cheaper to waste someone's time using a robot rather than a human - oh wait, we do that already with other tech.
"In 20 years imagine how good it'll be!" Alas, no, it scales logarithmically at best, and all discussion is poisoned by "what it might be!" in the future, rather than what it is.
What money saved on wages?? It's competing with dollar-a-day laborers. $10 per 1 million tokens, for the "bad" (they all suck) models (something that can't even do this job!). If you can pretend the hallucinations don't matter, you are getting a phone call for (4 letters per token, 6-minute avg support call, 135 wpm talking rate, let's say 120 to be nice -> 720 tokens per call) = $0.0072 per call. The average call center employee handles around 40 calls a day, so hey, the bad can't-actually-do-it ChatGPT 4 is 70 cents per day cheaper than your typical Indian call center worker!
Except that is the massively subsidized, money-hemorrhaging rate. We know OpenAI should probably be charging an order of magnitude or two more, and the newer models are vastly more expensive: o1 takes around 100x the compute, and still couldn't be a call center employee. So that price is actually at least $30 per day. Cheaper than a US employee, but still can't actually do the job anyway.
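Redoing that back-of-envelope math in a few lines (all inputs are the commenter's assumptions: $10 per million tokens, a 6-minute call at ~120 wpm with words treated as tokens, 40 calls per day, and ~100x compute for o1):

```python
# The commenter's call-center arithmetic, using their own assumptions.
USD_PER_M_TOKENS = 10.0   # "bad" model pricing, per the comment
WPM = 120                 # assumed speaking rate
CALL_MINUTES = 6          # average support call
CALLS_PER_DAY = 40        # typical agent workload, per the comment

tokens_per_call = WPM * CALL_MINUTES  # words ~ tokens -> 720
cost_per_call = tokens_per_call / 1_000_000 * USD_PER_M_TOKENS
daily = cost_per_call * CALLS_PER_DAY

print(f"{tokens_per_call} tokens/call -> ${cost_per_call:.4f}/call, ${daily:.2f}/day")
print(f"o1 at ~100x compute: ~${daily * 100:.0f}/day")  # ~$29, i.e. the "$30/day" above
```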
The huge AI LLM boom/bubble started after chatGPT came out.
But of fucking course it existed before.
Nvidia’s most advanced chips, H100s, have been banned from export to China since September 2022 by US sanctions. Nvidia then developed the less powerful H800 chips for the Chinese market, although they were also banned from export to China last October.
I love how in the US they talk about meritocracy, competition being good, blablabla... but they rig the game from the beginning. And even so, people find a way to be better. Fascinating.
They actually can't. Being open-source, it's already proliferated. Apparently there are already over 500 derivatives of it on HuggingFace.
The only thing that could be done is that each country in the West outlaws having a copy of it, like with other illegal materials.
Even by that point, it will already be deep within business ecosystems across the globe.
Nup. OpenAI can be shut down, but it is almost impossible for R1 to go away at this point.
Yeah there is a lot of bro-style crap going on right now, but China is a brutal dictatorship.
Choose wisely.
- Helping 800 Million People Escape Poverty Was Greatest Such Effort in History, Says [UN] Secretary-General, on Seventieth Anniversary of China’s Founding
- China’s Energy Use Per Person Surpasses Europe’s for First Time
- At 54, China’s average retirement age is too low
- China overtakes U.S. for healthy lifespan: WHO data
- news.harvard.edu/gazette/story…
- Chinese Scientists Are Leaving the United States [for China]
Chinese Scientists Are Leaving the United States Amid Geopolitical Tensions
Here’s why that spells bad news for Washington.Christina Lu (Foreign Policy)
This conclusion was foregone when China began to focus on developing the Productive Forces and the US took that for granted. Without a hard pivot, the US can't even hope to catch up to the productive trajectory of China, and even if they do hard pivot, that doesn't mean they even have a chance to in the first place.
In fact, protectionism has frequently backfired, and had other nations seeking inclusion into BRICS or more favorable relations with BRICS nations.
Only building outdated chips on an old fab process. And they’re having a hard time hiring Americans to work there.
That, and they are just brute-forcing the problem. Neural nets have been around forever, but it's only been in the last 5 or so years that they could do anything. There's been little to no real breakthrough innovation; they just keep throwing more processing power at it with more inputs, more layers, more nodes, more links, more CUDA.
And their chasing of general AI is just the short-sighted nature of them wanting to replace workers with something they don't have to pay and that won't argue about its rights.
It's based on guessing what the actual worth of AI is going to be, so yeah, wildly speculative at this point because breakthroughs seem to be happening fairly quickly, and everyone is still figuring out what they can use it for.
There are many clear use cases that are solid, so AI is here to stay, that's for certain. But how far can it go, and what will it require is what the market is gambling on.
If out of the blue comes a new model that delivers similar results on a fraction of the hardware, then it's going to chop it down by a lot.
If someone finds another use case, for example a model with new capabilities, boom, value goes up.
It's a rollercoaster...
There are many clear use cases that are solid, so AI is here to stay, that’s for certain. But how far can it go, and what will it require is what the market is gambling on.
I would disagree on that. There are a few niche uses, but OpenAI can't even make a profit charging $200/month.
The uses seem pretty minimal as far as I've seen. Sure, AI has a lot of applications in terms of data processing, but the big generic LLMs propping up companies like OpenAI? Those seem to have no utility beyond slop generation.
Ultimately the market value of any work produced by a generic LLM is going to be zero.
Language learning, code generation, brainstorming, summarizing. AI has a lot of uses. You're just either not paying attention or are biased against it.
It's not perfect, but it's also a very new technology that's constantly improving.
It's difficult to take your comment seriously when it's clear that all you're saying seems to be based on ideological reasons rather than real ones.
Besides that, a lot of the value is derived from the market trying to figure out if/what company will develop AGI. Whatever company manages to achieve it will easily become the most valuable company in the world, so people fomo into any AI company that seems promising.
Besides that, a lot of the value is derived from the market trying to figure out if/what company will develop AGI. Whatever company manages to achieve it will easily become the most valuable company in the world, so people fomo into any AI company that seems promising.
There is zero reason to think the current slop-generating technoparrots will ever lead to AGI. That premise is entirely made up to fuel the current "AI" bubble.
That's not even true. LLMs in their modern iteration are significantly enabled by transformers, something that was only proposed in 2017.
The conceptual foundations of LLMs stretch back to the 50s, but neither the physical hardware nor the software architecture were there until more recently.
Not necessarily... if I gave you my "faster car" for you to run on your private 7 lane highway, you can definitely squeeze every last bit of the speed the car gives, but no more.
DeepSeek works as intended on 1% of the hardware the others allegedly "require" (allegedly - remember, this is all a super hype bubble). If you run it on super powerful machines, it will perform nicer, but only to a certain extent; it will not suddenly develop more/better qualities just because the hardware it runs on is better.
OpenAI could use less hardware to get similar performance if they used the Chinese version, but they already have enough hardware to run their model.
Theoretically the best move for them would be to train their own, larger model using the same technique (so as to still fully utilize their hardware), but this is easier said than done.
I watched one video and read 2 pages of text, so take this with a mountain of salt. From that I gathered that DeepSeek R1 is the model you interact with when you use the app. The complexity of a model is expressed as the number of parameters (though I don't know yet what those are), which dictate its hardware requirements. R1 contains 670 bn parameters and requires very, very beefy server hardware; a video said it would take tens of GPUs. And it seems you want a lot of VRAM on your GPU(s), because that's what AI craves. I've also read that 1 bn parameters require about 2 GB of VRAM.
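That rule of thumb (about 2 GB of VRAM per billion parameters) corresponds to 16-bit weights at 2 bytes each; quantized models, which Ollama serves by default, need roughly a quarter of that. A quick sketch using the sizes mentioned in this thread:

```python
# VRAM rule of thumb from the comment above: ~2 GB per 1 bn parameters
# (fp16 weights, 2 bytes each). 4-bit quantization needs ~1/4 of that.
def vram_gb(params_bn: float, bytes_per_param: float = 2.0) -> float:
    return params_bn * 1e9 * bytes_per_param / 1024**3

for name, size_bn in [("qwen2.5:3b", 3), ("DeepSeek R1 (full)", 670)]:
    print(f"{name:18} fp16 ~{vram_gb(size_bn):7.1f} GB | 4-bit ~{vram_gb(size_bn, 0.5):6.1f} GB")
```

The 4-bit figure for the 3 bn model (~1.4 GB) is what lets it fit on the 6 GB card described below, while the full R1 stays firmly in multi-GPU server territory either way.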
Got a 6-core Intel, a 1060 with 6 GB VRAM, 16 GB RAM and EndeavourOS as a home server.
I just installed Ollama in about half an hour, using Docker on the above machine, with no previous experience with neural nets or LLMs apart from chatting with ChatGPT. The installation contains Open WebUI, which seems better than the default you get at ChatGPT. I downloaded the qwen2.5:3b model (see ollama.com/search), which contains 3 bn parameters. I was blown away by the result. It speaks multiple languages (including displaying e.g. hiragana), knows how many fingers a human has, can calculate, can write valid Rust code and explain it, and it is much faster than what I get from free ChatGPT.
The WebUI offers a nice feedback form for every answer where you can give hints to the AI via text, a 10-point rating, and thumbs up/down. I don't know how it incorporates that feedback, though. The WebUI seems to support speech-to-text and vice versa. I'm eager to see if this Docker setup even offers APIs.
I probably won't use the proprietary stuff anytime soon.
ollama.com/library/deepseek-r1
deepseek-r1
DeepSeek's first generation reasoning models with comparable performance to OpenAI-o1.Ollama
Thank you very much. I did ask ChatGPT some technical questions about some... subjects... but having something that is private AND can give me all the information I want/need is a godsend.
Goodbye, chatGPT! I barely used you, but that is a good thing.
That's kind of normal, it was made in China after all and the developers didn't want to end up in jail I bet.
That said, China is of course a crappy dictatorship.
Basically, US companies involved in AI have been grossly overvalued for the last few years due to having a pseudo-monopoly over AI tech (companies like OpenAI, who make ChatGPT, and Nvidia, who make the graphics cards used to run AI models).
DeepSeek (a Chinese company) just released a free, open-source equivalent of ChatGPT that cost a fraction of the price to train (set up), which has caused US stock valuations to drop as investors realise the US isn't the only global player, and isn't nearly as far ahead as previously thought.
Nvidia is losing value because it was previously believed that top-of-the-line graphics cards were required for AI, but it turns out they are not. Nvidia have geared their company strongly towards providing for AI in recent times.
"You see, dear grandchildren, your grandfather used to have an apple orchard. The fruits were so sweet and nutritious that every town citizen wanted a taste because they thought it was the only possible orchard in the world. Therefore the citizens gave a lot of money to your grandfather because the citizens thought the orchard would give them more apples in return, more than the worth of the money they gave. Little did they know the world was vastly larger than our ever more arid US wasteland. Suddenly an oriental orchard was discovered which was surprisingly cheaper to plant, maintain, and produced more apples. This meant a significant potential loss of money for the inhabitants of the town called Idiocracy. Therefore, many people asked their money back by selling their imaginary not-yet-grown apples to people who think the orchard will still be worth more in the future.
This is called investing, or to those who are honest with themselves: participating in a multi-level marketing pyramid scheme. You see, children, it can make a lot of money, but it destroys the soul and our habitat at the same time, which goes unnoticed by all these people with advanced degrees. So think again when you hear someone speak with fancy words and untamed confidence. Many a time their reasoning falls below the threshold of dog poop. But that's a story for another time. Sweet dreams."
Israeli forces fire on crowds near Gaza’s Netzarim Corridor | Israel-Palestine conflict | Al Jazeera
Israeli forces fire on crowds near Gaza’s Netzarim Corridor
Video from near Gaza’s Netzarim Corridor showed crowds ducking and running for cover amid Israeli gunfire.Al Jazeera
like this
Lasslinthar, ZoDoneRightNow and NoneOfUrBusiness like this.
shoulderoforion likes this.
like this
NoneOfUrBusiness likes this.
Please stop using words you clearly do not understand the meaning of. In this case those words would be “fascist/fascism”, “terrorism”, and “freedom”. You have used each incorrectly at least once.
Israel is not fascist. They are not authoritarian which is one of the common elements to fascism. Fascism is not code for “country whose government I don’t like”
Nation states can sponsor terror but they cannot be terrorists. This is because nation states commit acts of war not acts of terror. Try thinking of a single act done to a foreign power that a national military engages in that wouldn’t justify a war. Nation states cannot be terrorists.
Hamas fights for control over Gaza. They are not promising anyone freedom. They have never attempted to ensure freedom in Gaza while they ruled it.
You can and should hate the Israeli government for the genocide they are engaging in but it’s important to use terms correctly so as to appear like you know what you are talking about.
like this
ZoDoneRightNow and NoneOfUrBusiness like this.
Is this the same incident? Some more detail from CBC:
Israeli forces fired on the crowd on three occasions overnight and into Sunday, killing two people and wounding nine, including a child, according to Al-Awda Hospital, which received the casualties. Israel's military said in a statement that it fired warning shots at "several gatherings of dozens of suspects who were advancing toward the troops and posed a threat to them."
like this
NoneOfUrBusiness likes this.
The Pebble smartwatch is making a comeback. Google has open-sourced the Pebble software, which means anyone — including Pebble’s founder — can make one.
cross-posted from: lemmy.ml/post/25280992
Google agreed to release Pebble OS to the public. As of Monday, all the Pebble firmware is available on GitHub, and Migicovsky is starting a company to pick up where he left off. The company — which can’t be named Pebble because Google still owns that — doesn’t have a name yet. For now, Migicovsky is hosting a waitlist and news signup at a website called RePebble. Later this year, once the company has a name and access to all that Pebble software, the plan is to start shipping new wearables that look, feel, and work like the Pebbles of old.
The Pebble smartwatch is making a comeback, with some help from Google
Years after its Kickstarter success and Fitbit acquisition, Pebble’s founder is restarting a company to work on Pebble’s smartwatch again — with open-source software powering it.David Pierce (The Verge)
like this
ElcaineVolta, Endymion_Mallorn, Oofnik, TVA, essell and moonpiedumplings like this.
reshared this
Technology reshared this.
!!!!!!!!
I hope they make one I can make calls from without a phone. I tried to switch to smartwatch-only late last year, but even the Samsung watch couldn't handle that.
like this
metaStatic likes this.
Well, on a screen so small, you'd always be limited in what can be shown. I definitely can add events to my Calendar on Galaxy Watch 6 though, the problem is if I'd ever want to do it from the watch or check them on it. I use it for reminders and notifications checking mostly, personally.
Maybe it's way more usable with Bixby or other voice assistants, but I'm really put off by the privacy and battery-draining aspects of those.
like this
TVA likes this.
I just want a watch that can read text messages and has a long battery life. The Pebble and Pebble 2 fit that bill for me.
Until the buttons fell off the Pebble 2.
like this
TVA likes this.
Think I've had it for 4 years now.
Wait you had a Pebble 2? Most people never received them because the company went under and canceled orders before they officially got released.
Are you sure you’re not thinking of Steel, Time or Round?
like this
TVA likes this.
I also had a Pebble 2 HR; I even had two, because I bought another one off eBay (unopened box). I 3D-printed some buttons and used it for many years until the battery basically died and the software started to show its age. Notifications became unreliable and such things, making it kinda pointless.
Still want nothing more than for it to work properly again. It's easy enough to swap the battery, and now, with the ability to fix the software, there might be a point to it.
like this
TVA likes this.
I've made many posts on many platforms wondering the same thing, especially for something like a watch that you want to be always on. Sure, amoled exists, but isn't e-paper much better for that use case?
I'm even daily driving an e-paper android tablet for notes and reading and it's awesome. A charge lasts me over a week with heavy use.
Also, I'm not entirely sure of the exact tech for the original Pebble - was it TFT? The RePebble site linked by OP talks about e-paper, but maybe that's just what they want going forward.
It was a Sharp "Memory LCD".
Basically "visible memory storage".
You treat it as addressable memory and write into it, and it will hold that state using about 15 microwatts to do so.
You can still buy the display modules; there are a few boards that let you easily drive them with Arduinos and the like.
like this
TVA likes this.
Dude, Migicovsky fucked it up once and already wants back in? He sold the Pebble company and fucked almost all the workers on the way out. They were promised their jobs, that their jobs would be part of the deal. At the last minute they find out, nope, Migicovsky signed off on the deal that left them all without jobs. He walked away with a fat stack of cash.
Then the idiot spun up Beeper and hacked his way into the iMessage system with a workaround, which Apple then promptly blocked within a few days. People were paying for this service. What was Migicovsky's plan? None; he gave up after Apple blocked them once.
Further, Beeper is just a re-skinned Matrix client with the Beeper company hosting the open source bridges between services, which means they have always had some weirdly serious access to the chats they're helping you compile all in one place. Initially you basically had to give them way too much control over your Apple account to use the iMessage stuff since they had to have a fleet of Macs for each iMessage user they were supporting.
I'm sorry. I don't care how good it was. Don't let Migicovsky take your money and mismanage it again.
Why do people keep giving this guy good graces when he fucked over his own devs on the way out and didn't even have a plan on how to keep his iMessage system working for paying customers?
Please stop letting this guy fuck up and walk away with the money.
Apple Blocks Android Users From Connecting to iMessage on Macs
Beeper Mini customers were using their Mac computers to connect to iPhone messaging on their Android phones. Now, they say Apple has blocked the messaging service on their Macs.Tripp Mickle (The New York Times)
None, he gave up after Apple blocked them once.
There were actually a couple attempts, but it's kinda in Apple's hands... I think he was hoping he could generate enough public outcry to force them to not block it. You can also still access it now, if you have your own mac.
Further, Beeper is just a re-skinned Matrix client with the Beeper company hosting the open source bridges between services
It's their own client, not just reskinned, and it has a bunch of new features designed to make cross-service messaging nice and simple. Also, the bridges ARE open-source; the Beeper company wrote a few of them and decided to open-source them.
Don't let Migicovsky take your money and mismanage it again.
He refunded everyone who bought a subscription when Apple blocked it. Beeper main is also free.
like this
TVA likes this.
* Battery life. With the battery and CPU efficiency improvements of the last 10 years, if the features and other specs stay the same then battery life should be incredible. I think month-long battery is likely possible.
* Improved voice recognition and AI features. Pebble had voice recognition, but it sent everything to a server to process. Now they could run speech-to-text on the watch itself or on the connected phone.
* More durable buttons. A known issue with the Pebble 2 is that eventually the buttons turn to mush.
like this
TVA likes this.
My AmazFit Bip could do a month when it was new (it's down to ~10 days now after a few years), so I would think a month from Pebble would be feasible.
I don't understand using a watch that you can't use for AT LEAST a weekend without power... As it is, I'm pissed off that I'm down to 10 days (it's stayed steady here for 6 months or so, so I'm hoping it won't degrade too much more before the new Pebble comes out).
TVA likes this.
TURIEL: SOLVING the climate and energy CHALLENGE
- YouTube
Enjoy the videos and music you love, upload original content, and share it all with friends, family and the world on YouTube.www.youtube.com
Nextcloud - Federation: a foundational concept for digital sovereignty
It doesn't seem like it uses ActivityPub, but it's still interesting, and open source.
In Nextcloud, a user’s Federated Cloud ID works similarly to an email address or a Mastodon handle, allowing them to exchange data across servers: share files and collaborate on documents, communicate in group chats and make audio and video calls. Federated tools available in Nextcloud:
- Federated file sharing: share documents and media with users on other Nextcloud Hub instances for viewing, editing and collaboration.
- Federated chatting: create group chats with users from different servers and use many essential chat tools.
- Federated calls: make audio and video calls with Nextcloud Talk among users from different servers.
Federation: a foundational concept for digital sovereignty - Nextcloud
Learn how federation features in self-hosted apps work and how federation can help businesses and governments achieve digital sovereignty.Mikhail Korotaev (Nextcloud)
like this
themadcodger, aasatru and TVA like this.
like this
TVA likes this.
DeepSeek releases new image model family
Viral AI company DeepSeek releases new image model family | TechCrunch
DeepSeek, the viral AI company, has released a new set of multimodal AI models that it claims can outperform OpenAI's DALL-E 3.Kyle Wiggers (TechCrunch)
like this
Oofnik, massive_bereavement, Lasslinthar and SolacefromSilence like this.
reshared this
Technology reshared this.
The image generation is really bad. Image description capabilities seem good but it'll take time to see if it's better than what already exists.
They probably just put it out to keep the hype going.
like this
dhhyfddehhfyy4673 and Aatube like this.
Yeah, even the cherry-picked examples they provide look only okay.
To be honest everything with this company feels like an ad campaign more than anything else.
like this
dhhyfddehhfyy4673 likes this.
Everything from nearly every company feels like an ad campaign. Companies advertise themselves.
At least with open source stuff there's somewhat of a public benefit.
if it is anything like LLMs, then only local ;)
However, the Proper nomenclature is sheepooh, thank you for your compliance going forward, comrade.
Wouldn't be surprised if you had to work around the filter.
Generate a cartoonish yellow bear who wears a red t-shirt and nothing else
like this
SolacefromSilence likes this.
Question: as I understood it so far, this thing is open source and so is the dataset.
With that, why would it still obey Chinese censorship?
analyticsvidhya.com/blog/2025/…
This informal testing found that Janus Pro explained a Nokia meme much more crisply than DALL-E 3, but it was quite a bit worse at the other tasks, even appearing to hallucinate a score in one test case.
I suddenly realize I myself sound like ChatGPT. Haha. Haha.
Edit: At least you can run these models locally!
DeepSeek's Janus Pro 7B vs OpenAI’s DALL-E 3: Which is better?
DeepSeek Janus Pro 7B vs OpenAI’s DALL-E 3: Which one is better for image generation. Tested on diverse prompts!Anu Madan (Analytics Vidhya)
like this
massive_bereavement likes this.
Yes, I have that exact problem on a German question-and-answer site. There I literally got downvoted to -3 (which is a lot for that site, seeing that most questions are only relevant for about 10 minutes) because I wrote the equivalent of "My body, my choice" on a question about whether there should be only 2 genders.
The outside world is truly a cold and dark place.
sounds like youve made an enemy ive got at least one of those thats why i changed my name to not chad mctruth and put on a disguise and i think its working so far
edit oh no
(cue the downvotes, I THRIVE on this shit you fucking party line sheep, make my day)
Hey, we all want what we want. I wanted to upvote ya but I want you to be happy.
Hey dude, I just wanted to let you know there is an option in your settings so you don't see upvotes or downvotes.
Lemmy (AFAIK) doesn't even show you your total upvotes (karma... whatever it's called) by default either. None of these imaginary points fucking matter.
So why don't you do yourself a favor and uncheck these boxes and not give a fuck what others think about your comment.
I know I have.
(Lemmy is rad as fuck)
Pebble cements its smartwatch legacy as Google shares source code with the community
Google releases Pebble Watch source code - Android Authority
Google has just released the source code for the Pebble watch, paving the way for more aftermarket development and potentially new hardware.Mishaal Rahman (Android Authority)
essell likes this.
Why all that rage against windmills?
Because ...
bbc.com/news/uk-scotland-north…
Cross-posted from: feddit.it/post/14364321
Why all that rage against windmills?
Because of his playground sightseeing: bbc.com/news/uk-scotland-north…
Scottish government wins Donald Trump wind power legal costs
Donald Trump had claimed the 11-turbine wind farm off Aberdeen would spoil the view from his golf course.BBC News
reshared this
Feddit, an Italian Lemmy instance, reshared this.
what are your news sources?
like this
originalucifer likes this.
reshared this
Nelfaneor reshared this.
Associated Press is great for world news. They're a bit slow, but you get fewer mistakes.
For important news like Linux news, Destination Linux, Brodie Robertson and The Linux Experiment are my go-tos.
World news:
- !theguardian@rss.ponder.cat - The Guardian
- !aljazeera@rss.ponder.cat - Al Jazeera
Climate / environment:
- !insideclimatenews@rss.ponder.cat - Inside Climate News
- !mongabay@rss.ponder.cat - Mongabay
- !grist@rss.ponder.cat - Grist
- !planetizen@rss.ponder.cat - Planetizen
US News:
- !theguardian_us@rss.ponder.cat - The Guardian US
- !bbc_us@rss.ponder.cat - BBC US
Tech and tech politics:
- !arstechnica_index@rss.ponder.cat - Ars Technica
- !404media@rss.ponder.cat - 404 Media
- !theregister@rss.ponder.cat - The Register
- !techdirt@rss.ponder.cat - Techdirt
US Politics:
- !motherjones@rss.ponder.cat - Mother Jones
- !propublica@rss.ponder.cat - Pro Publica
- !electionlawblog@rss.ponder.cat - Good Politics / Political Law Blog
- !drudgereport@rss.ponder.cat - The Drudge Report (I know! I'm as shocked as anybody. With a good-sized blacklist of crap sources in place, it's actually pretty informative)
Using this thing right here as an RSS reader! TIL.
mongabay@rss.ponder.cat - Mongabay
An excellent source that I'm ashamed I only discovered recently. Consistently first-rate independent journalism on literally the most important subjects there are. Should be better known. Read. Donate.
Great other choices too.
Wow you can use Lemmy as an RSS feed? Will these posts overpower local stuff though?
Hopefully me clicking on these won't mean the all view on my instance is now completely overtaken by this stuff...
I don't think it should. "Active" sort should mostly only show the ones that have some user engagement to them, and "Scaled" sort should only show ones that are either from a few minutes ago, or have a handful of upvotes to them, or from sources that very rarely post. "Scaled" is honestly pretty good, IDK why it is not the default.
Also, I make an effort not to add feeds to it willy-nilly and to blacklist ones that tend to post spam or other stupid content. Some admins will remove everything from rss.ponder.cat from their front-page feed, too, which makes sense to me.
I was a little bit surprised to see that only a few of them are federated to slrpnk right now. These are already subscribed to from slrpnk, though, so you can check them out without a trace of guilt:
- !emulator_announce@rss.ponder.cat
- !hackaday@rss.ponder.cat
- !jesse_welles@rss.ponder.cat
- !mongabay@rss.ponder.cat
- !phys@rss.ponder.cat
It's honestly a very pleasingly slrpnk-vibe collection of communities. 😃
Yeah Mongabay is on the list thanks to me, and I'll have to check out the others.
I looked at a few sorting mechanisms and it does seem to be OK, with the exception of "New" and "Scaled". "Scaled" in particular had a lot of posts from Mongabay, but maybe this is just because it was recently federated for the first time? I'll check back and see if it subsides after a little while. I hope it does, because while I don't really use the all view, I know some people who do are very bothered by these types of frequent bot posts, including one of our admins.
Hm.. oh, I got it. Yeah, forget what I said about "Scaled."
And yeah, I'm bothered by frequent bot posts. I just recently unsubscribed from a bunch of fedia.io stuff because of it. I tried to be pretty selective about what feeds to add, refused a couple of requests for some, added spam filters for sources that like to sprinkle advertising into their articles, that kind of stuff. But I do agree, having anything automated blasting into the feeds is probably a thing to be minimized unless people have specifically opted in to it. If there is something I can do from my end to make it less that way, let me know; I've done pretty much all I could think of to make it less obnoxious.
Programming.dev, I know, is one of the places that removed rss.ponder.cat from their "All" feed for that reason, so you still have the option to subscribe, but it defaults to hidden. I don't know how to do that but if the admins want to do it, they could ask, I'm sure it's pretty simple.
Hmm, not sure how they did it. To my knowledge that is only an upcoming feature, although now that I think about it, I somewhat remember that it was already partially available in the current release backend, just not exposed in the UI 🤔
Edit: indeed: mv-gh.github.io/lemmy_openapi_…
I'm usually trusting Reuters or AP news
Though I've heard of ground.news and have been thinking about subscribing, DAE have experience with them? Are they as unbiased as they claim?
like this
Rakenclaw likes this.
Reuters usually has half decent articles, but they're owned by billionaires out of Canada. This look into them was done late last year: sh.itjust.works/comment/121743…
AP has some sketch board members as shown here: sh.itjust.works/comment/121748…
Publicly owned or controlled (or at least majority owned and controlled) news services in major countries:
CBC - in Canada (where I'm from)
PBS - in the US
NPR - in the US
ABC - Australia
BBC - in the UK
France 24 - in France
NHK - in Japan
DW - in Germany
Although there are criticisms of each, at the very least they give good guidance to relevant, straightforward news without too much spin.
newsminimalist.com/
boringreport.org/app
News Minimalist — All news ranked by significance
News Minimalist is the AI curator that finds the 1% of stories actually worth reading. Experience only important news without junk, clickbait, or ads.News Minimalist
Curious what they are and how you manage the incoming?
I have been trying to curate my list and they're all very chatty. I end up struggling to stay on top of it even just dismissing articles I won't read, let alone reading a significant percentage.
I got a local loud mouth who listens to Infowars
I just assume the opposite of what he says is true. So far it's working
Not a news source, but commentary. I watch/listen to breaking points youtube.com/@breakingpoints
they cite drop site news frequently dropsitenews.com/
and I read Ken Klippenstein kenklippenstein.com/
Breaking Points
Breaking Points with Krystal and Saagar is a fearless anti-establishment Youtube show and podcast.YouTube
The BBC, AP, and Reuters are a good place to start.
I like Erin in the Morning, Propublica, and Bellingcat as well, but they require additional work to parse sometimes.
Also the majority report.
@bigboismith You're probably referring to some sort of public broadcaster, right? That's actually quite a good source if the management is not politically controlled/infiltrated in any way by any political party
@fuzzy_feeling too many of them tbh. I also gotta do some cleanup at some point:
postimg.cc/7GXfY6Sn
There's plenty more in my Feedly account, some duplicates, cannot catch them all. At this point, I returned to getting what's currently in the spotlight.
The local model is still censored!