Meloni: "The Shoah was a Nazi abomination, carried out with the complicity of Fascism"
@Politica interna, europea e internazionale
The Holocaust was a plan "carried out by Hitler's regime, which in Italy also found the complicity of the Fascist one." Prime Minister Giorgia Meloni stressed this in her message for Holocaust Remembrance Day, dedicated to the memory of the Shoah.
When is Europe going to act against this?
uawire.org/russia-allegedly-in…
#EuropeanCommission #EuropeanUnion #Germany #Russia
Russia allegedly invests billions in disinformation campaign to sway German elections
Russian authorities have funneled significant funds into creating a network of hundreds of thousands of fake social media accounts, forging news sites, and disseminating false information. (uawire.org)
metadata.cat/noticia/5331/fuga…
The exodus from X isn't stopping
Political, academic, and media institutions and social organizations are progressively abandoning the social network led by Elon Musk. (Redacció, MetaData)
"Those who love peace do not fuel war": a pacifist appeal to MPs
A pacifist appeal to the MPs who on January 28 will vote on renewing arms shipments to Ukraine. (Redazione PeaceLink)
#social #fediverso #notizie
For anyone who isn't afraid of #bot accounts, we've activated a section of #Internazionale articles... 😊😊
Just follow it from here as if it were a user:
@www.internazionale.it.ultimi-articoli
Kosuge plays Beethoven, Fujikura, Takemitsu and Schumann in Mainz - Schedule - www.worldconcerthall.com
Yu Kosuge, piano, plays: BEETHOVEN: Sonata No. 30 in E major, op. 109. Dai FUJIKURA: Sonata. Toru TAKEMITSU: 'Litany', In Memory of Michael Vyner. SCHUMANN: Sonata No. 3 in F minor, op. 14 'Concert sans orchestre', revised version in 4 movements.
The Incessant - Meat Wave (2016)
youtube.com/watch?v=ylLa96FNI0…
Nvidia falls 14% in premarket trading as China's DeepSeek triggers global tech sell-off
cross-posted from: lemm.ee/post/53805638
Nvidia falls 14% in premarket trading as China's DeepSeek triggers global tech sell-off
DeepSeek launched a free, open-source large-language model in late December, claiming it was developed in just two months at a cost of under $6 million. (Jenni Reid, CNBC)
It's coming, Pelosi sold her shares like a month ago.
It's going to crash. Even if not for the reasons she sold for: as more and more people hear she sold, they're going to sell too, because they'll assume she has insider knowledge due to her office.
Which is why politicians (and spouses) shouldn't be able to directly invest into individual companies.
Even if they aren't doing anything wrong, people will follow them and do what they do. Only a truly ignorant person would believe it doesn't have an effect on other people.
It's coming, Pelosi sold her shares like a month ago.
Yeah but only cause she was really disappointed with the 5000 series lineup. Can you blame her for wanting real rasterization improvements?
Everyone's disappointed with the 5000 series...
They're giving up on improving rasterization and focusing on "AI cores" because they're using GPUs to pay for the research into AI.
"Real" core count is going down on the 5000 series.
It's not what gamers want, but they're counting on people just buying the newest before asking if newer is really better. It's why they're already cutting 4000 series production, they just won't give people the option.
I think everything under 4070 super is already discontinued
Prices rarely, if ever, go down and there is a push across the board to offload things "to the cloud" for a range of reasons.
That said: If your focus is on gaming, AMD is REAL good these days and, if you can get past their completely nonsensical naming scheme, you can often get a really good GPU using "last year's" technology for 500-800 USD (discounted to 400-600 or so).
Or from the sounds of it, doing things more efficiently.
Fewer cycles required, less hardware required.
Maybe this was an inevitability: if you cut off access to the fast hardware, you create a natural advantage for more efficient systems.
That's generally how tech goes though. You throw hardware at the problem until it works, and then you optimize it to run on laptops and eventually phones. Usually hardware improvements and software optimizations meet somewhere in the middle.
Look at photo and video editing, you used to need a workstation for that, and now you can get most of it on your phone. Surely AI is destined to follow the same path, with local models getting more and more robust until eventually the beefy cloud services are no longer required.
The problem for American tech companies is that they didn't even try to move to stage 2.
OpenAI is hemorrhaging money even on their most expensive subscription, and their entire business plan was to hemorrhage money even faster, to the point of using entire power stations to power their data centers. Their plan makes about as much sense as digging yourself out of a hole by trying to dig to the other side of the globe.
Hey, my friends and I would've made it to China if recess was a bit longer.
Seriously though, the goal for something like OpenAI shouldn't be to sell products to end customers, but to license models to companies that sell "solutions." I see these direct to consumer devices similarly to how GPU manufacturers see reference cards or how Valve sees the Steam Deck: they're a proof of concept for others to follow.
OpenAI should be looking to be more like ARM and less like Apple. If they do that, they might just grow into their valuation.
It’s a reaction to thinking China has better AI
I don't think this is the primary reason behind Nvidia's drop, because as long as they have a massive technological lead it doesn't matter as much to them who has the best model, as long as these companies use their GPUs to train them.
The real change is that the compute resources (which are Nvidia's product) needed to create a great model suddenly fell off a cliff, whereas until now the name of the game was that more is better and scale is everything.
China vs the West (or upstart vs big players) matters to those who are investing in creating those models. So for example Meta, who presumably spends a ton of money on high paying engineers and data centers, and somehow got upstaged by someone else with a fraction of their resources.
Looking at Nvidia's market cap vs. their competitors', the market believes it is: they just lost more than AMD, Intel, and the like are worth combined, and are still valued at $2.9 trillion.
And by technology I mean both the performance of their hardware and the software stack they've created, which is a big part of their dominance.
Yeah. I don't believe market value is a great indicator in this case. In general, I would say that capital markets are rational at a macro level, but not micro. This is all speculation/gambling.
My guess is that AMD and Intel are at most 1 year behind Nvidia when it comes to tech stack. "China", maybe 2 years, probably less.
However, if you can make chips with 80% of the performance at 10% of the price, it's a win. People can continue to tell themselves that big tech will always buy the latest and greatest whatever the cost. That does not make it true; it hasn't been true for a really long time. Google, Meta and Amazon already make their own chips. That's probably true for DeepSeek as well.
Yeah. I don’t believe market value is a great indicator in this case. In general, I would say that capital markets are rational at a macro level, but not micro. This is all speculation/gambling.
I have to concede that point to some degree, since I guess I hold similar views on Tesla's value vs. the rest of the automotive industry. But I still think the basic hierarchy holds true, with Nvidia significantly ahead of the pack.
My guess is that AMD and Intel are at most 1 year behind Nvidia when it comes to tech stack. “China”, maybe 2 years, probably less.
Imo you are too optimistic with those estimations, particularly with Intel and China, although I am not an expert in the field.
As I see it, AMD seems to have a quite decent product with their Instinct cards in the server market on the hardware side, but they wish they had something even close to CUDA and its mindshare, which would take years to replicate. Intel wishes it were only a year behind Nvidia. And I'd like to comment on China, but tbh I have little to no knowledge of the state of their GPU development. If they are "2 years, probably less" behind as you say, then they should have something like the RTX 4090, which was released at the end of 2022. But do they have something that even rivals the 2000 or 3000 series cards?
However, if you can make chips with 80% of the performance at 10% of the price, it's a win. People can continue to tell themselves that big tech will always buy the latest and greatest whatever the cost. That does not make it true.
But the issue is that they all make their chips at the same manufacturer, TSMC, even Intel in the case of their GPUs. So they can't really differentiate much on manufacturing costs, and they are also competing for the same limited supply. So no one can offer 80% of the performance at 10% of the price, or even close to it. Additionally, everything around the GPU (datacenters, rack space, power usage during operation, etc.) also costs money, so the GPU is only part of the overall package cost, and you also want to optimize for your limited space. As I understand it, datacenter construction and power delivery are actually another limiting factor right now for the hyperscalers.
Google, Meta and Amazon already make their own chips. That’s probably true for DeepSeek as well.
Google, yes, with their TPUs, but the others all use Nvidia or AMD chips to train. Amazon has their Graviton CPUs, which are quite competitive, but I don't think they have anything on the GPU side. DeepSeek is way too small and new for custom chips; they evolved out of a hedge fund and just use Nvidia GPUs like more or less everyone else.
Thanks for the high-effort reply.
The Chinese companies will probably use SMIC over TSMC from now on. They were able to do low-volume 7 nm last year. Also, Nvidia and "China" are not at the same spot on the tech S-curve. It will be much cheaper for China (and Intel/AMD) to catch up than it will be for Nvidia to maintain the lead: technological leaps and reverse engineering vs. diminishing returns.
Also, expect the Chinese government to throw insane amounts of capital at this sector right now. So unless Stargate becomes a thing (though I believe the Chinese will invest much, much more), there will not be fair competition (as if that has ever been a thing anywhere, anytime). China also has many more tools, like an optional command economy. The US has nothing but printing money and manipulating oligarchs in a broken market.
I'm not sure about 80/10 exactly of course, but it is in that order of magnitude if you're willing to not run the newest fancy stuff. I believe the MI300X goes for approx. 1/2 of the H100 nowadays and is MUCH better on paper. We don't know the real performance because of NDAs (I believe). It used to be 1/4. If you look at VRAM per $, the ratio is about 1/10 for the 1/4 case. Of course, the price gap will shrink as ROCm matures and customers feel it's safe to use AMD hardware for training.
So, my bet is max 2 years for "China", at least when it comes to high-end performance per dollar, and max 1 year for AMD and Intel (if Intel survives).
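As a back-of-the-envelope check on the VRAM-per-dollar claim above, here's a rough sketch: the VRAM capacities are the published specs (80 GB for the H100, 192 GB for the MI300X), while the dollar figure is a hypothetical placeholder matching the "1/4 price" scenario.

```python
# Rough back-of-the-envelope check of the VRAM-per-dollar claim above.
# VRAM capacities are published specs; the H100 price is a hypothetical
# placeholder, with the MI300X assumed at 1/4 of it per the scenario above.
h100_vram_gb, mi300x_vram_gb = 80, 192
h100_price_usd = 30_000                 # hypothetical placeholder
mi300x_price_usd = h100_price_usd / 4   # the "1/4 price" case

h100_gb_per_usd = h100_vram_gb / h100_price_usd
mi300x_gb_per_usd = mi300x_vram_gb / mi300x_price_usd

print(f"H100:   {h100_gb_per_usd:.5f} GB/$")
print(f"MI300X: {mi300x_gb_per_usd:.5f} GB/$")
print(f"ratio:  {h100_gb_per_usd / mi300x_gb_per_usd:.2f}")  # ~0.10, i.e. about 1/10
```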
China really has nothing to do with it, it could have been anyone. It's a reaction to realizing that GPT4-equivalent AI models are dramatically cheaper to train than previously thought.
It being China is a notable detail because it really drives the nail into Nvidia's coffin: China has been fenced off from access to Nvidia's most expensive AI GPUs, which were thought to be required to pull this off.
It also makes the US government look extremely foolish for having made major foreign-policy and relationship sacrifices to try to delay China by a few years, when it's January and China has already caught up. Those sacrifices did not pay off; in fact, they backfired, benefiting China and allowing it to accelerate while hurting US tech/AI companies.
Does it still need people spending huge amounts of time to train models?
After doing neural networks, fuzzy logic, etc. in university, I really question the whole usability of what is called "AI" outside niche use cases.
If inputText = "hello" then Respond.text("hello there") ElseIf inputText (...)
Something has got to give. You can't spend ~$200 billion annually on capex and get a mere $2-3 billion return on that investment.
I understand that they are searching for a radical breakthrough "that will change everything", but there are also reasons to be skeptical about this (e.g., documents revealing that Microsoft and OpenAI defined AGI as something that can get them $100 billion in annual revenue, as opposed to some specific capabilities).
Giving these parasites money now is a bailout of their bad decisions...
Let them compete; they should pay for their own capex.
Shovel vendors scrambling for solid ground as prospectors start to understand geology.
...that is, this isn't yet the end of the AI bubble. It's just the end of overvaluing hardware because efficiency increased on the software side, there's still a whole software-side bubble to contend with.
there's still a whole software-side bubble to contend with
They're ultimately linked together in some ways (not all). OpenAI has already been losing money on every GPT subscription, even the ones they charge a premium for because they had the best product; now that premium must evaporate, because there are equivalent AI products on the market that are much cheaper. This will shake things up on the software side too. They probably need more hype to stay afloat.
The software side bubble should take a hit here because:
- The trained model was made available for download and offline execution, versus being locked behind subscription-friendly, cloud-only access. Not the first to do this, but the most famous.
- It came from an unexpected organization, which throws a wrench in the assumption that one of the few known entities would "win it".
…that is, this isn’t yet the end of the AI bubble.
The "bubble" in AI is predicated on proprietary software that's been oversold and underdelivered.
If I can outrun OpenAI's super secret algorithm with 1/100th the physical resources, the $13B Microsoft handed Sam Altman's company starts looking like burned capital.
And the way this blows up the reputation of AI hype-artists makes it harder for investors to be induced to send US firms money. Why not contract with Hangzhou DeepSeek Artificial Intelligence directly, rather than ask OpenAI to adopt a model that's better than anything they've produced to date?
And it may yet swing back the other way.
Twenty or so years ago, there was a brief period when going full AMD (or AMD+ATI as it was back then; AMD hadn't bought ATI yet) made sense, and then the better part of a decade later, Intel+NVIDIA was the better choice.
And now I have a full AMD PC again.
Intel are really going to have to turn things around in my eyes if they want it to swing back, though. I really do not like the idea of a CPU hypervisor being a fully fledged OS that I have no access to.
I'm way behind on the hardware at this point.
Are you saying that AMD is moving toward an FPGA chip on GPU products?
While I see the appeal - that's going to dramatically increase cost to the end user.
No.
GPU is good for graphics. That's what it is designed and built for. It just so happens to be good at dealing with programmatic neural-network tasks because of parallelism.
An FPGA is fully programmable to do whatever you want, and can be reprogrammed on the fly. Pretty perfect for reducing costs if you have a platform that does things like audio processing, then video processing, or deep learning, especially in cloud environments. Instead of spinning up a bunch of expensive single-purpose instances, you can just spin up one FPGA type and reprogram it on the fly to best perform the work at hand when the code starts up. Simple.
AMD bought Xilinx in 2019 when they were still a fledgling company because they realized the benefit of this. They are now selling mass amounts of these chips to data centers everywhere. It's also what the XDNA coprocessors on all the newer Ryzen chips are built on, so home users have access to an FPGA chip right there. It's efficient, cheaper to make than a GPU, and can perform better on lots of non-graphic tasks than GPUs without all the massive power and cooling needs. Nvidia has nothing on the roadmap to even compete, and they're about to find out what a stupid mistake that is.
Huh. Everything I'm reading seems to imply it's more like a DSP ASIC than an FPGA (even down to the fact that it's a VLIW processor) but maybe that's wrong.
I'm curious what kind of work you do that's led you to this conclusion about FPGAs. I'm guessing you specifically use FPGAs for this task in your work? I'd love to hear about what kinds of ops you specifically find speedups in. I can imagine many exist, as otherwise there wouldn't be a need for features like tensor cores and transformer acceleration on the latest NVIDIA GPUs (since obviously these features must exploit some inefficiency in GPGPU architectures, up to limits in memory bandwidth of course), but also I wonder how much benefit you can get since in practice a lot of features end up limited by memory bandwidth, and unless you have a gigantic FPGA I imagine this is going to be an issue there as well.
I haven't seriously touched FPGAs in a while, but I work in ML research (namely CV) and I don't know anyone on the research side bothering with FPGAs. Even dedicated accelerators are still mostly niche products because in practice, the software suite needed to run them takes a lot more time to configure. For us on the academic side, you're usually looking at experiments that take a day or a few to run at most. If you're now spending an extra day or two writing RTL instead of just slapping together a few lines of python that implicitly calls CUDA kernels, you're not really benefiting from the potential speedup of FPGAs. On the other hand, I know accelerators are handy for production environments (and in general they're more popular for inference than training).
I suspect it's much easier to find someone who can write quality CUDA or PTX than someone who can write quality RTL, especially with CS being much more popular than ECE nowadays. At a minimum, the whole FPGA skillset seems much less common among my peers. Maybe it'll be more crucial in the future (which will definitely be interesting!) but it's not something I've seen yet.
Looking forward to hearing your perspective!
I remember Xilinx from way back in the 90s when I was taking my EE degree, so they were hardly a fledgling in 2019.
Not disputing your overall point, just that detail because it stood out for me since Xilinx is a name I remember well, mostly because it's unusual.
FPGAs have been a thing for ages.
If I remember it correctly (I learned this stuff 3 decades ago), they were basically an improvement on logic circuits without clocks (think stuff like NAND and XOR gates: digital signals just go in and the result comes out the other side, with no delay beyond that caused by analog elements such as parasitic inductances and capacitances, so without waiting for a clock transition).
The thing is, back then clocking of digital circuits really took off (because it's WAY simpler to have things done one stage at a time, with a clock synchronizing when results are read from one stage and sent to the next, since different gates have different delays and making sure results are only read after the slowest path is done is complicated), so all CPU and GPU architectures nowadays are based on having a clock, with clock transitions dictating things like when each step of processing a CPU/GPU instruction starts.
Circuits without clocks have the capability of being way faster than circuits with clocks, if you can manage the problem of different digital elements having different delays in producing results. I think what we're seeing here is a revival of circuits without clocks (or at least with blocks of logic executed between clock transitions that are much longer and more complex than the processing of a single GPU instruction).
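A tiny sketch of the "wait for the slowest path" constraint described above (the gate delays are made-up numbers, just to illustrate why the clock period is set by the critical path):

```python
# Tiny sketch: a synchronous (clocked) circuit can tick no faster than its
# slowest register-to-register path allows. Delays are made-up numbers in ns.
path_delays_ns = {
    "adder path":   0.8 + 0.6 + 0.9,  # sum of gate delays along each path
    "mux path":     0.5 + 0.4,
    "compare path": 1.1 + 0.7 + 0.3,
}

critical_ns = max(path_delays_ns.values())  # every path waits for this one
f_max_ghz = 1.0 / critical_ns               # period in ns -> frequency in GHz

print(f"critical path = {critical_ns:.1f} ns, f_max ≈ {f_max_ghz:.2f} GHz")
# A clockless design wouldn't make the fast paths pay this worst-case penalty,
# which is the potential speed advantage described above.
```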
Yes, but I'm not sure what your argument is here.
Least resistance to an outcome (in this case whatever you program it to do) is faster.
Applicable to waterfall flows, FPGA makes absolute sense for the neural networks as they operate now.
I'm confused on your argument against this and why GPU is better. The benchmarks are out in the world, go look them up.
I'm not making an argument against it, just clarifying where it sits as a technology.
As I see it, it's like electric cars: a technology that was overtaken by something else in the early days of that domain even though it was the first to come out (the first cars were electric; the ICE engine was invented later), and which now has a chance to be successful again because many other things have changed in the meantime and we're a lot closer to the limits of the tech that did get widely adopted back then.
It actually makes a lot of sense to improve what programming can do by making it capable of also working outside the step-by-step instruction-execution straitjacket that is the CPU/GPU clock.
From a "compute" perspective (so not consumer graphics), power... doesn't really matter. There have been decades of research on the topic and it almost always boils down to "Run it at full bore for a shorter period of time" being better (outside of the kinds of corner cases that make for "top tier" thesis work).
AMD (and Intel) are very popular for their cost to performance ratios. Jensen is the big dog and he prices accordingly. But... while there is a lot of money in adapting models and middleware to AMD, the problem is still that not ALL models and middleware are ported. So it becomes a question of whether it is worth buying AMD when you'll still want/need nVidia for the latest and greatest. Which tends to be why those orgs tend to be closer to an Azure or AWS where they are selling tiered hardware.
Which... is the same issue for FPGAs. There is a reason that EVERYBODY did their best to vilify and kill OpenCL, and it is not just because most code was thousands of lines of boilerplate and tens of lines of kernels. Which gets back to "Well, I can run this older model cheap, but I still want Nvidia for the new stuff...."
Which is why I think nvidia's stock dropping is likely more about traders gaming the system than anything else. Because the work to use older models more efficiently and cheaply has already been a thing. And for the new stuff? You still want all the chooch.
Your assessment is missing the simple fact that FPGA can do things a GPU cannot faster, and more cost efficiently though. Nvidia is the Ford F-150 of the data center world, sure. It's stupidly huge, ridiculously expensive, and generally not needed unless it's being used at full utilization all the time. That's like the only time it makes sense.
If you want to run your own models that have a specific purpose, say, for scientific work folding proteins, and you might have several custom extensible layers that do different things, Nvidia hardware and software doesn't even support this because of the nature of TensorRT. They JUST announced future support for such things, and it will take quite some time and some vendor lock-in for models to appropriately support it..... OR
Just use FPGAs to do the same work faster now for most of those things. The GenAI bullshit bandwagon finally has a wheel off, and it's obvious people don't care about the OpenAI approach to having one model doing everything. Compute work on this is already transitioning to single purpose workloads, which AMD saw coming and is prepared for. Nvidia is still out there selling these F-150s to idiots who just want to piss away money.
Your assessment is missing the simple fact that FPGA can do things a GPU cannot faster
Yes, there are corner cases (many of which no longer exist because of software/compiler enhancements but...). But there is always the argument of "Okay. So we run at 40% efficiency but our GPU is 500% faster so..."
Nvidia is the Ford F-150 of the data center world, sure. It’s stupidly huge, ridiculously expensive, and generally not needed unless it’s being used at full utilization all the time. That’s like the only time it makes sense.
You are thinking of this like a consumer where those thoughts are completely valid (just look at how often I pack my hatchback dangerously full on the way to and from Lowes...). But also... everyone should have that one friend with a pickup truck for when they need to move or take a load of stuff down to the dump or whatever. Owning a truck yourself is stupid but knowing someone who does...
Which gets to the idea of having a fleet of work vehicles versus a personal vehicle. There is a reason so many companies have pickup trucks (maybe not an f150 but something actually practical). Because, yeah, the gas consumption when you are just driving to the office is expensive. But when you don't have to drive back to headquarters to swap out vehicles when you realize you need to go buy some pipe and get all the fun tools? It pays off pretty fast and the question stops becoming "Are we wasting gas money?" and more "Why do we have a car that we just use for giving quotes on jobs once a month?"
Which gets back to the data center issue. The vast majority DO have a good range of cards either due to outright buying AMD/Intel or just having older generations of cards that are still in use. And, as a consumer, you can save a lot of money by using a cheaper node. But... they are going to still need the big chonky boys which means they are still going to be paying for Jensen's new jacket. At which point... how many of the older cards do they REALLY need to keep in service?
Which gets back down to "is it actually cost effective?" when you likely need the newest cards anyway.
I'm thinking of this as someone who works in the space, and has for a long time.
An hour of time for a g4dn instance in AWS is 4x the cost of an FPGA that can do the same work faster in MOST cases. These aren't edge cases; they are MOST cases. Look at SageMaker, AML, GMT pricing for the real cost sinks here as well.
The raw power and cooling costs contribute to that pricing cost. At the end of the day, every company will choose to do it faster and cheaper, and nothing about Nvidia hardware fits into either of those categories unless you're talking about milliseconds of timing, which THEN only fits into a mold of OpenAI's definition.
None of this bullshit will be a web-based service in a few years, because it's absolutely unnecessary.
And you are basically a single consumer with a personal car relative to those data centers and cloud computing providers.
YOUR workload works well with an FPGA. Good for you, take advantage of that to the best degree you can.
People/companies who want to run newer models that haven't been optimized for/don't support FPGAs? You get back to the case of "Well... I can run a 25% cheaper node for twice as long?" That isn't to say that people shouldn't be running these numbers (most companies WOULD benefit from the cheaper nodes for 24/7 jobs and the like). But your use case is not everyone's use case.
And it, once again, boils down to: if people are going to require the latest and greatest Nvidia, what incentive is there in spending significant amounts of money getting it to work on a five-year-old AMD? Which is where smaller businesses and researchers looking for a buyout come into play.
At the end of the day, every company will choose to do it faster and cheaper, and nothing about Nvidia hardware fits into either of those categories unless you’re talking about milliseconds of timing, which THEN only fits into a mold of OpenAI’s definition.
Faster is almost always cheaper. There have been decades of research into this and it almost always boils down to it being cheaper to just run at full speed (if you have the ability to) and then turn it off rather than run it longer but at a lower clock speed or with fewer transistors.
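A toy "race to idle" comparison, since that's the crux of the argument (all wattages and speeds below are made-up placeholders; real outcomes depend on the chip's voltage/frequency curve):

```python
# Toy "race to idle" comparison over a fixed 20 s window; all numbers are
# made-up placeholders (real results depend on the voltage/frequency curve).
WORK_UNITS = 100.0
IDLE_W = 50.0  # static draw, paid for the whole window regardless

def energy_joules(active_w: float, speed: float, window_s: float = 20.0) -> float:
    """Run the job at `speed` units/s drawing `active_w` extra watts, then idle."""
    busy_s = WORK_UNITS / speed
    return (active_w + IDLE_W) * busy_s + IDLE_W * (window_s - busy_s)

full_bore = energy_joules(active_w=300.0, speed=10.0)  # done in 10 s, then sleep
half_speed = energy_joules(active_w=160.0, speed=5.0)  # busy the whole 20 s

print(f"full bore then idle:  {full_bore:.0f} J")   # 4000 J
print(f"half speed all along: {half_speed:.0f} J")  # 4200 J -> full bore wins here
```

With these particular numbers the always-paid static draw is what tips it: finishing fast lets the chip spend the rest of the window at idle power only.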
And nVidia wouldn't even let the word "cheaper" see the glory that is Jensen's latest jacket that costs more than my car does. But if you are somehow claiming that "faster" doesn't apply to that company then... you know nothing (... Jon Snow).
unless you’re talking about milliseconds of timing
So... it's not faster unless you are talking about time?
Also, milliseconds really DO matter when you are trying to make something responsive and are already dealing with round-trip times to a client. And they add up quite a bit when you are trying to lower your overall footprint so that you only need 4 nodes instead of 5.
They don't ALWAYS add up, depending on your use case. But for the data centers that are selling compute by time? Yeah, time matters.
So I will just repeat this: Your use case is not everyone's use case.
I mean... I can shut this down pretty simply. Nvidia makes GPUs that are currently used as a blunt-force tool, which is dumb, and now that the grift has been blown open, OpenAI, Anthropic, Meta, and all the others trying to build a business around really simple tooling that is open source are about to come under so much scrutiny over cost that everyone will figure out there are cheaper ways to do this.
Pro AMD, con Nvidia. It's really simple.
What "point"?
Your "point" was "Well I don't need it" while ignoring that I was referring to the market as a whole. And then you went on some Team Red rant because apparently AMD is YOUR friend or whatever.
Some things to learn in here, perhaps:
github.com/deepseek-ai
Large-scale reinforcement learning (RL)?
::: spoiler chat (requires login via email or Google...)
Chat with DeepSeek-R1 on DeepSeek's official website, chat.deepseek.com, and switch on the "DeepThink" button.
:::
::: spoiler aha moments (in white paper)
From page 8 of 22 in:
raw.githubusercontent.com/deep…
One of the most remarkable aspects of this self-evolution is the emergence of sophisticated behaviors as the test-time computation increases. Behaviors such as reflection—where the model revisits and reevaluates its previous steps—and the exploration of alternative approaches to problem-solving arise spontaneously. These behaviors are not explicitly programmed but instead emerge as a result of the model's interaction with the reinforcement learning environment. This spontaneous development significantly enhances DeepSeek-R1-Zero's reasoning capabilities, enabling it to tackle more challenging tasks with greater efficiency and accuracy.
Aha Moment of DeepSeek-R1-Zero
A particularly intriguing phenomenon observed during the training of DeepSeek-R1-Zero is the occurrence of an "aha moment". This moment, as illustrated in Table 3, occurs in an intermediate version of the model. During this phase, DeepSeek-R1-Zero learns to allocate more thinking time to a problem by reevaluating its initial approach. This behavior is not only a testament to the model's growing reasoning abilities but also a captivating example of how reinforcement learning can lead to unexpected and sophisticated outcomes.
This moment is not only an "aha moment" for the model but also for the researchers observing its behavior. It underscores the power and beauty of reinforcement learning: rather than explicitly teaching the model how to solve a problem, we simply provide it with the right incentives, and it autonomously develops advanced problem-solving strategies. The "aha moment" serves as a powerful reminder of the potential of RL to unlock new levels of intelligence in artificial systems, paving the way for more autonomous and adaptive models in the future.
:::
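Side note: the "right incentives" idea in the quote is the policy-gradient principle at the heart of RL. A hedged sketch of the generic textbook form (not DeepSeek's exact GRPO loss; here q is the prompt, o a sampled output, and Â a reward-derived advantage):

```latex
% Generic policy-gradient update (textbook form, NOT DeepSeek's exact GRPO loss).
% Outputs o that earn a higher reward-derived advantage \hat{A}(o) get their
% log-probability pushed up; "reflection" is never specified, only rewarded.
\nabla_\theta J(\theta)
  = \mathbb{E}_{o \sim \pi_\theta(\cdot \mid q)}
    \left[ \nabla_\theta \log \pi_\theta(o \mid q)\, \hat{A}(o) \right]
```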
github.com/huggingface/open-r1
Fully open reproduction of DeepSeek-R1
en.m.wikipedia.org/wiki/DeepSe…
DeepSeek_R1 was released 2025-01-20
My understanding is that DeepSeek still used Nvidia, just older models, and way more efficiently, which was remarkable. I hope to tinker with the open-source stuff, at least with a little Twitch chat bot for my streams, which I was already planning to do with OpenAI. It will be even more remarkable if I can run this locally.
However, this is embarrassing to the Western companies working on AI, especially with the $500B Stargate announcement, as it proves we don't need such high-end infrastructure to achieve the same results.
$500B of "trust me, bro"... to shake down the US taxpayer for subsidies.
Read between the lines, folks.
It's really not. This is the AI equivalent of the VC repurposing US bombs that didn't explode when dropped.
Their model is the differentiator here, but they had to figure out something more efficient in order to overcome the hardware shortcomings.
The US companies will soon outpace this by duplicating the model and running it on faster hardware.
I'm using Ollama to run my LLMs. Going to see about using it for my Twitch chat bot too.
GitHub - ollama/ollama: Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 2, and other large language models.
Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 2, and other large language models. - ollama/ollama (GitHub)
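For what it's worth, here's a minimal sketch of what wiring a chat bot to a local Ollama server could look like (assumes Ollama's default endpoint and an already-pulled model, e.g. `ollama pull deepseek-r1`; the function name and prompt are my own):

```python
# Minimal sketch: ask a locally running Ollama server for a chat-bot reply.
# Assumes Ollama's default port and that a model has been pulled first,
# e.g. `ollama pull deepseek-r1`; function name and prompt are placeholders.
import requests

def ask_local_llm(prompt: str, model: str = "deepseek-r1") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local_llm("Greet my Twitch chat in one short sentence."))
```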
My understanding is that DeepSeek still used Nvidia, just older models
That's the funniest part here: the sell-off makes no sense. So what if some companies are better at utilizing AI than others? It all runs on the same hardware. Why sell stock in the hardware company? (Besides the separate issue of it being totally overvalued at the moment.)
This would be kind of like if a study showed that American pilots were more skilled than European pilots, so investors sold stock in Airbus... Either way, the pilots still need planes to fly...
Perhaps the stocks were massively overvalued and any negative news was going to start this sell off regardless of its actual impact?
That is my theory anyway.
Yes, but if they already have lots of planes, they don't need to keep buying more planes. Especially if their current planes can now run for longer.
AI is not going away but it will require less computing power and less capital investment. Not entirely unexpected as a trend, but this was a rapid jump that will catch some off guard. So capital will be reallocated.
Good. That shit is way overvalued.
There is no way that Nvidia are worth 3 times as much as TSMC, the company that makes all their shit and more besides.
I'm sure some of my market tracker funds will lose value, and they should, because they should never have been worth this much to start with.
It’s because Nvidia is an American company and also because they make final stage products. American companies right now are all overinflated and almost none of the stocks are worth what they’re at because of foreign trading influence.
As much as people whine about inflation here, the US didn’t get hit as bad as many other countries and we recovered quickly which means that there is a lot of incentive for other countries to invest here. They pick our top movers, they invest in those. What you’re seeing is people bandwagoning onto certain stocks because the consistent gains create more consistent gains for them.
The other part is that yes, companies who make products at the end stage tend to be worth a lot more than people trading more fundamental resources or parts. This is true of almost every industry except oil.
It is also because the US dollar is the world's reserve currency and the USA has open capital markets.
Savers of the world (including countries like Germany and China who have excess savings due to constrained consumer demand) dump their savings into US assets such as stocks.
This leads to asset bubbles and an uncompetitively high US dollar.
The root problem they are trying to fix is real (systemic trade imbalances), but the way they are trying to fix it is terrible and won't work.
1) Only a universally applied tariff would work in theory but would require other countries not to retaliate (there will 100% be retaliation).
2) It doesn't really solve the root cause, capital inflows into the USA rather than purchasing US goods and services.
3) Trump wants to maintain reserve-currency status, which is a big part of the problem (the strength of the currency may not align with domestic conditions, i.e., it may be high when it needs to be low).
Agree, but the market doesn’t think rationally.
Better access to software is good for hardware companies. Nvidia is still the best company when it comes to this kind of computing hardware.
Hm even with DeepSeek being more efficient, wouldn't that just mean the rich corps throw the same amount of hardware at it to achieve a better result?
In the end I'm not convinced this would even reduce hardware demand. It's funny that this of all things deflates part of the bubble.
Maybe, but it also means that if a company needs a datacenter with 1000 GPUs to meet its AI task demand, it will now buy 500.
Next year it might need more, but by then AMD could have better GPUs.
Hm even with DeepSeek being more efficient, wouldn’t that just mean the rich corps throw the same amount of hardware at it to achieve a better result?
Only up to the point where the AI models yield value (which is already heavily speculative). If nothing else, DeepSeek makes Altman's plan for $1T in new data-centers look like overkill.
The revelation that you can get 100x gains by optimizing your code rather than throwing endless compute at your model means the value of graphics cards goes down relative to the value of PhD-tier developers. Why burn through a hundred warehouses full of cards to do what a university mathematics department can deliver in half the time?
you can get 100x gains by optimizing your code rather than throwing endless compute at your model
woah, that sounds dangerously close to saying this is all just developing computer software. Don't you know we're trying to build God???
The way I understood it, it's much more efficient so it should require less hardware.
Nvidia will sell that hardware, an obscene amount of it, and the line will go up. But it will go up slower than Nvidia expected, because anything other than infinite and always-accelerating growth means you're not good at business.
Back in the day, that would tell me to buy green.
Of course, that was also long enough ago that you could just swap money from green to red every new staggered product cycle.
It requires only 5% of the hardware that OpenAI needs to do the same thing. So that can mean a smaller quantity of top-end cards, and it can also run on less powerful cards (not top of the line).
Should their models become standard or more commonly used, then Nvidia's sales will drop.
Doesn’t this just mean that now we can make models 20x more complex using the same hardware? There’s many more problems that advanced Deep Learning models could potentially solve that are far more interesting and useful than a chat bot.
I don’t see how this ends up bad for Nvidia in the long run.
Honestly none of this means anything at the moment. This might be some sort of calculated trickery from China to give Nvidia the finger, or Biden the finger, or a finger to Trump's AI infrastructure announcement a few days ago, or some other motive.
Maybe this "selloff" is masterminded by the big wall street players (who work hand-in-hand with investor friendly media) to panic retail investors so they can snatch up shares at a discount.
What I do know is that "AI" is a very fast moving tech and shit that was true a few months ago might not be true tomorrow - no one has a crystal ball so we all just gotta wait and see.
There could be some trickery on the training side, i.e. maybe they spent way more than $6M to train it.
But it is clear that they did it without access to the infra that big tech has.
And on the run side, we can all verify how well it runs and people are also running it locally without internet access. There is no trickery there.
They are 20x cheaper than OpenAI if you run it on their servers, and if you run it yourself, you only need a small investment in relatively affordable servers.
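That kind of multiple is easy to sanity-check once you plug in per-token API prices; a quick sketch (the prices below are hypothetical placeholders, not quoted rates for either provider):

```python
# Quick sanity check of an "N times cheaper" API claim.
# Prices are hypothetical placeholders (USD per million output tokens),
# not quoted rates for either provider.
incumbent_per_mtok = 60.0
challenger_per_mtok = 3.0
monthly_mtok = 500  # hypothetical monthly usage, in millions of tokens

print(f"{incumbent_per_mtok / challenger_per_mtok:.0f}x cheaper")  # 20x
print(f"${monthly_mtok * incumbent_per_mtok:,.0f}/mo vs "
      f"${monthly_mtok * challenger_per_mtok:,.0f}/mo")  # $30,000 vs $1,500
```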
And you should, generally we are amidst the internet world war. It's not something fishy but digital rotten eggs thrown around by the hundreds.
The only way to remain sane is to ignore it and scroll on. There is no winning versus geopolitical behemoths as a lone internet adventurer. It's impossible to tell what's real and what isn't
the first casualty of war is truth
Yeah, I'd say so - but you can't put the genie back in the bottle.
It's just fighting for who gets the privilege to do so
I think this prompted investors to ask "where's the ROI?".
Current AI investment hype isn't based on anything tangible. At least the amount of investment isn't: it is absurd to think that the trillion dollars already put into the space, even before that SoftBank deal, is going to be returned. The models still hallucinate, as it is inherent to the architecture; we are nowhere near replacing workers, but we got chatbots that "when they work, sometimes, they are kind of good?" and mediocre, off-putting pictures. Is there any value? Sure, it's not NFTs. But the correction might be brutal.
Interestingly enough, DeepSeek's model was released just before Q4 earnings-call season, so we will see if it has a compounding effect with more statements from big players that they burned massive amounts of compute and USD only to get milquetoast improvements and get owned by a small Chinese startup that allegedly can do all that for 5 mil.
I have a dirty suspicion that the "where's the ROI?" talking point is actually a calculated and coordinated strategy by big Wall Street banks to panic retail investors into selling so they can gobble up shares at a discount. Trump is going to be pumping (at minimum) hundreds of billions into these companies in the near future.
Call me a conspiracy guy, but I've seen this playbook many many times
I don't know. In a lot of use cases AI is kind of crap, but there are certain use cases where it's really good. Honestly, I don't think people are giving enough thought to its utility in the early-to-middle stages of creative work, where an img2img model can take the basic composition from the artist and render it, and then the artist can go in and modify and perfect it for the final product.
Also, video games that use generative AI are going to be insane in about 10-15 years. Imagine an open-world game where it generates building interiors and NPCs as you interact with them, even tying the stuff the NPCs say into the buildings they're in: an old sailor living in a house full of pictures of boats and boat models, or a warrior with tons of books about battle and decorative weapons everywhere, all in throwaway structures that would previously have been closed set dressing. Maybe they'll even find sane ways to create quests on the fly that don't feel overly cookie-cutter? Life-changing? Of course not, but it's definitely a cool technology with a lot of potential.
Also, realistically, I don't think there's going to be long-term use for AI models that need a quarter of a datacenter just to run; they'll all get tuned down to what can run directly on a phone efficiently. Maybe we'll see some new accelerators become commonplace, maybe we won't.
What the fuck are markets when you can automate making money on them???
I've been WTF about the stock market for a long time, but now it's obviously a scam.
Was watching BBC News interview some American guy about this, and wow, they were really pushing that it's no big deal and DeepSeek is way behind and a bit of a joke. He claimed they weren't under cyberattack, they just couldn't handle the traffic, etc.
Kinda making me root for China honestly.
Lots of techies loved the internet, built it, and were all early adopters. Lots of normies didn't see the point.
With AI it's pretty much the other way around: CEOs saying "we don't need programmers, any more", while people who understand the tech roll their eyes.
It took weeks to move big money around.
Lol, this is just either a statement out of ignorance or a complete lie. Wire transfers didn't take weeks. Checks didn't take weeks to clear. And most people aren't moving "big money" via fucking Cash App either.
"Big money" isn't paying half for an Uber unless you're like 16 years old.
Education doesn't make a tech CEO ridiculously wealthy, so there's no draw for said CEOs to promote the shit out of education.
Plus educated people tend to ask for more salary. Can't do that and become a billionaire!
Because the silicon valley bros had convinced the national security wonks in the Beltway that it was paramount for national security, technological leadership and economic prosperity.
I think this will go down as the biggest grift in history.
Kevin Walmsley reported on DeepSeek 10 days ago. Last week, the smart money exited big tech. This week the panic starts.
I'm getting big dot-com 2.0 vibes from all of this.
youtube.com/@inside_china_busi…
Inside China Business
Key insights and strategies for global business owners and managers, from inside the world's factory and supply chains. (YouTube)
For the cost of an education we could end up with smart people who contribute to the economy and society. Instead we are dumping billions into this shit.
Those are different "we"s.
Tech/Wall St constantly needs something to hype in order to bring in “investor” money. The “new technology-> product development -> product -> IPO” pipeline is now “straight to pump-and-dump” (for example, see Crypto currency).
The excitement of the previous hype train (self-driving cars) is no longer bringing in starry-eyed “investors” willing to quickly part ways with OPM. “AI” made a big splash and Tech/Wall St is going to milk it for all they can lest they fall into the same bad economy as that one company that didn’t jam the letters “AI” into their investor summary.
Tech has laid off a lot of employees, which means they are aware there is nothing else exciting in the near horizon. They also know they have to flog “AI” like crazy before people figure out there’s no “there” there.
That “investors” scattered like frightened birds at the mere mention of a cheaper version means that they also know this is a bubble. Everyone wants the quick money. More importantly they don’t want to be the suckers left holding the bag.
I follow EV battery tech a little. You're not wrong that there is a lot of "oh, it's just around the bend" in battery development and tech development in general. I blame marketing for 80% of that.
But battery technology is changing drastically. The giant cell phone market is pushing battery tech relentlessly. Add in EV and grid storage demand growth and the potential for some companies to land on top of a money printing machine is definitely there.
We’re in a golden age of battery research. Exciting for our future, but it will be a while before we consumers will have clear best options.
Look at it another way: people think this is the start of an actual AI revolution, as in full-blown AGI or close to it, or at least something very capable.
I think the bigger threat of revolution (and counter-revolution) is that of open-source software. People who don't know anything about FOSS have been told for decades now that [XYZ] software is a tool you need, and that it's only possible thanks to innovative, superhumanly intelligent CEOs giving us the opportunity to buy it.
If everyone finds out that they're actually the ones stifling progress and development, while manipulating markets to further enrich themselves and whatever other partners that align with that goal, it might disrupt the golden goose model. Not to mention defrauding the countless investors that thought they were holding rocket ship money that was actually snake oil.
All while another country did that collectively and just said, "here, it's free. You can even take the code and use it how you personally see fit, because if this thing really is that thing, it should be a tool anyone can access. Oh, and all you other companies, your code is garbage btw. Ours runs on a potato by comparison."
I'm just saying, the US has already shown they will go to extreme lengths to keep its citizens from thinking too hard about how its economic model might actually be fucking them while the rich guys just move on to the next thing they'll sell us.
ETA: a smaller scale example: the development of Wine, and subsequently Proton finally gave PC gamers a choice to move away from Windows if they wanted to.
It's easier to sell people on the idea of a new technology or system that doesn't have any historical precedent. All you have to do is list the potential upsides.
Something like a school or a workplace training programme, those are known quantities. There's a whole bunch of historical and currently-existing projects anyone can look at to gauge the cost. Your pitch has to be somewhat realistic compared to those, or it's gonna sound really suspect.
Why? If you automate away all the workers (regardless of whether that's feasible or not), what stops them from cutting everyone else out of the equation? Why can't they just trade assets between themselves, maintaining a small slave population that does machine maintenance in exchange for food and shelter, and screwing the rest? Why do you think they would still need us if they owned both the means of production and the labor to produce? That would be a post-labor-scarcity economy, available only to the wealthy, with the rest of us left to rot. If you have assets like land, materials, or factories you can participate; if you don't, you can't.
While I don't think this is technologically feasible yet by any means, I think this is what the rich are huffing currently. They want to be independent of us because they are threatened by us.
And you could pay people to use an abacus instead of a calculator. But the advanced tech improves productivity for everyone, and helps their output.
If you don’t get the tech, you should play with it more.
“Improves productivity for everyone”
Famously only one class benefits from productivity, while one generates the productivity. Can you explain what you mean, if you don’t mean capitalistic productivity?
I’m referring to output for amount of work put in.
I’m a socialist. I care about increased output leading to increased comfort for the general public. That the gains are concentrated among the wealthy is not the fault of technology, but rather those who control it.
Thank god for DeepSeek.
confidently so in the face of overwhelming evidence
That I'd really like to see. And I mean more than the marketing bullshit that AI companies are doing...
For the record, I was one of the first to jump on the AI hype train (as a programmer and computer scientist with a machine-learning background), following the development of GPT-1 through GPT-4 and getting excited about writing less boilerplate code, getting help with rough ideas, etc. GPT-4 came close to actually being a help (similarly with o1 and Anthropic's models). Yet I seldom use AI these days (and I'm observing the same with colleagues and other people I know), because it actually slows me down with my work or gives wrong ideas, making me argue with it just to watch it saturate yet again at a local minimum (i.e., it doesn't get better, no matter what input I try), so that I end up doing it myself, which I should've done in the first place.
The same is true on the image-generation side (first with GANs, now with diffusion-based models).
I can get into more detail about transformer/attention-based models and their current plateau phase (i.e., more hardware doesn't actually make things significantly better; it gets exponentially more expensive to make things slightly better) if you really want...
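A hedged sketch of what that plateau looks like formally: empirical scaling laws fit loss as a small power of compute (this is the generic form; the exponent below is illustrative, and real fits vary by study):

```latex
% Generic compute scaling-law form (illustrative; constants vary across studies).
% With a small exponent, shrinking the loss by a fixed factor k requires
% multiplying compute by k^{1/\alpha}, i.e. enormous budgets for small gains.
L(C) \approx \left( \frac{C_0}{C} \right)^{\alpha}, \qquad \alpha \approx 0.05
```

With an exponent around 0.05, halving the loss would take roughly 2^20, about a million times, more compute, which is exactly the "exponentially more expensive for slightly better" experience.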
I hope we do get a breakthrough, of course, so that a model actually learns reasoning, but I fear that will take time, and it might even mean we need a different type of hardware.
DeepSeek
Yeah, it'll be exciting to see where this goes, i.e., whether it really develops into a useful tool. Though I'm slightly cautious nonetheless: it's not doing anything significantly different (it's still an LLM); it's just a lot cheaper/more efficient to train, and open for everyone (which is great).
Have you actually read my text wall?
Even o1 (which AFAIK is roughly on par with R1-671B) wasn't really helpful for me. I often (actually all the time) need correct answers to complex problems, and LLMs just aren't capable of delivering that.
I still need to try out whether it's possible to train it on my/our codebase, so that it's at least usable as something like GitHub Copilot (which I also don't use, because it just isn't reliable enough and too often generates bugs). Also, I'm a fast typist: by the time the answer is there and I've parsed/read/understood the code, I'd already have written a better version.
I currently don't miss it... Keep in mind that you still have to check whether all the code is correct, and writing code isn't the thing that usually takes much of my time. It's debugging, and finding architecturally sound, good solutions for the problem. And AI is definitely not good at that (even if you're not that experienced).
Yes, I have tested that use case multiple times. It performs well enough.
A calculator also isn’t much help, if the person operating it fucks up. Maybe the problem in your scenario isn’t the AI.
As you're being unkind all the time, let me be unkind as well 😀
A calculator also isn’t much help, if the person operating it fucks up. Maybe the problem in your scenario isn’t the AI.
If you can effectively use AI for your problems, maybe they're too repetitive, and actually just dumb boilerplate.
I'd rather solve problems that require actual intelligence (e.g., doing research, solving math problems, thinking about software architecture, solving problems efficiently), and I don't even want to deal with problems that require me to write a lot of repetitive code, which AI may be (and often is not) of help with.
I have yet to see efficiently generated Rust code that autovectorizes well, without a lot of allocations, etc. I always get triggered by the insanely bad code quality of AI that just doesn't even really understand what allocations are... Argh, I could go on...
So, an unreliable boilerplate generator that you need to debug?
Right, I've seen that it's somewhat nice for quickly generating bash scripts etc.
It can certainly generate quick-and-dirty scripts as a starter. But the code quality is often subpar (and often incorrect), which triggers my perfectionism to make it better, at which point I should've just written it myself...
But I agree that it can often serve well for exploration, and sometimes you learn new stuff (at least if you weren't an expert in it already, and you should always validate whether it's correct).
But actual programming in, e.g., Rust is a catastrophe with LLMs (more common languages like JS work better, though).
If you are blindly asking it questions without a grounding resources you're gonning to get nonsense eventually unless it's really simple questions.
They aren't infinite knowledge repositories. The training method is lossy when it comes to memory, just like our own memory.
Give it documentation or some other context and it can summarize pretty well, and even link things across documents or other sources.
The problem is that people are misusing the technology, not that the tech has no use or merit, even if it's just from an academic perspective.
Long story short: I'm faster just not caring about AI (at the moment).
As I said somewhere else here, I have a theoretical background in this area.
Though speaking of which, I think I really need to try training or refining a DeepSeek model on our codebases, to see whether it could be a good alternative to the dumb GitHub Copilot (which I've also disabled, because it produces a lot of garbage that I don't want to waste my attention on...). Maybe it's now finally possible to get something usable, at least for completion, when it knows details about the whole codebase (not just snapshots, as GitHub Copilot sees).
It depends on the bank and the amount you are trying to move. There are banks that might take a week (five business days) or so, though that's very rare, and there are banks that might do it instantly. I once used a bank in the US to move money and they sent a physical check, and that was domestic, not international.
Edit: I thought he meant a week, not weeks. Normally a max of five working days.
How do you solve the problem that half the country can't even be bothered to participate once every four years?
Don't get me wrong, I'm with you 100%, but how would we get people to engage with such a system?
I think you’re victim blaming. I can’t blame half the country for not wanting to participate in a symbolic gesture that will have no impact on the end result in this corrupted system.
pnhp.org/news/gilens-and-page-…
Gilens and Page: Average citizens have little impact on public policy - PNHP
Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens. By Martin Gilens and Benjamin I. Page. (PNHP)
How do you solve the problem that half the country can’t even be bothered to participate once every four years?
I assume you're talking about the US electoral system? That's very different.
but how would we get people to engage with such a system?
By empowering them.
Consider how the current electoral system disempowers people:
1) Some people literally cannot vote, or would jeopardize their job by taking the day off; others face voter suppression tactics
2) The FPTP system (esp. the spoiler effect) and the present political circumstances mean that there are really only two viable options for political parties for most people, so many feel that neither option represents them, let alone their individual positions on policy
3) Politicians are widely considered corrupt and regularly break electoral promises. There is little faith in either party to represent voters
But in a system where you are able to represent yourself at will, engagement is actually rewarding and meaningful. It won't magically make everyone care, but direct democracy alongside voter-rights reform would likely make more people think showing up is worth it.
I hope you're right. I would love to see it. I actually support mandatory voting, like in Australia. Under mostly current laws, everyone could get a mail-in ballot. If you don't want to participate, just check that box at the top, sign it, and send it in.
Your system sounds much better but would require a lot more legislation.
That you had to qualify it with a date after it had been corrupted by the West implies that you're well aware of how well communism served for half a century before that.
They went from a nation of dirt poor peasants, to a nuclear superpower driving the space race in just a couple of decades. All thanks to communism. And also why China is leaving us in the dust.
There are many instances of communism failing lmao
There are also many current communist states that have less freedom than many capitalist states
Also, you need to ask the Uyghurs how they're feeling about their experience under the communist government you're speaking so highly of at the moment.
How many of those instances failed due to external factors, such as illegal sanctions or a western coup or western military aggression?
Which communist states would you say have less freedom than your country? Let’s compare.
The Uyghur genocide was debunked. Even the US state department was forced to admit they didn’t have the evidence to support their claims. In reality, western intelligence agencies were trying to radicalize the Uyghurs to destabilize the region, but China has been rehabilitating them. The intel community doesn’t like their terrorist fronts to be shut down.
LMAO found the pro-Xi propagandist account
Either you're brainwashed, are only reading one-sided articles, or you're an adolescent with little world experience given how confidently you speak in absolutes, which doesn't reflect how nuanced the global stage is.
I'm not saying capitalism is the best, but communism won't ALWAYS beat out capitalism (and it hasn't, regardless of external factors, because if those regimes were strong enough they would be able to handle or recover from external pressures), nor does it REQUIRE negatively affecting others, as your other comment says. You're just delulu.
Remember: while there may be instances where all versions of a certain class of thing are equal, in most cases they are not. So blanketly categorizing them as you have done just reflects your lack of historical perspective.
you need to ask the Uyghurs how they’re feeling about their experience under the communist government
Every time people ask regular Uyghurs, they're usually happy enough with it. I'm guessing you mean asking Adrian Zenz and the Victims of Communism Memorial Foundation to tell the Uyghurs what they think.
It sounds like you don’t know what “capitalism” means. Market participation exists in other economy types, too. It’s how the means of production are controlled and the profits distributed that defines capitalism vs communism.
And you don’t lift 800 million people out of poverty under capitalism. Or they’ve done a ridiculously bad job of concentrating profits into the hands of a very small few.
I don’t think you understand how China’s economy works. Seems very clouded by anti-China propaganda.
In reality, the working class exercises a great deal of control over the means of production in China, and the 996 culture you’re referring to is in fact illegal.
bbc.com/news/world-asia-china-…
Again, capitalism vs communism is not defined by the existence of production/profits/markets, but how control and benefit of those systems is distributed.
China steps in to regulate brutal '996' work culture
Workers in China are fed up with the brutal 12-hour work days once seen as a key driver of success.Waiyee Yip (BBC News)
I disagree. Under the right conditions (read: actual competition instead of unregulated monopolies) I think a capitalist system would be able to stay ahead, though I think both systems could compete depending on how they're organized.
But what I'm more interested in is your view that China is still socialist/communist. Isn't DeepSeek a private company trying to maximize its own profits through innovation, rather than a public company funded by the people? I don't really know myself, but my perspective was that this is more of a capitalist-vs-capitalist situation, with one side (the US) kind of suffering from being so unregulated that innovation dies down.
Capitalism will by its very nature always lead to monopolies and depressed innovation. You cannot prevent corruption, while concentrating control of the means of production in the hands of a very few.
They released DeepSeek for free. It was a side project the company worked on. How is releasing it for free in any way profit seeking?
I disagree.
Like it or hate it, crypto is here to stay.
And it's actually one of the few technologies that, at least with some of the coins, empowers normal people.
I should really start looking into shorting stocks. I was looking at the news and Nvidia's stock and thought, "huh, the stock hasn't reacted to this news at all yet, I should probably short it".
And then proceeded to do fuck all.
I guess this is why some people are rich and others are like me.
It's been proven that people who do fuck-all after throwing their money into mutual funds generally fare better than people actively monitoring the market and making stock moves.
You're probably fine.
I never bought NVIDIA in the first place so this news doesn't affect me.
If anything now would be a good time to buy NVIDIA. But I probably won't.
Wth?! Like seriously.
I assume they are running the smallest version of the model?
Still, very impressive.
True, but training is a one-off cost. And as you say, a factor of 100x less cost with this new model. Therefore Nvidia just saw 99% of its expected future demand for AI chips evaporate.
Even if they are lying and used more compute, it's obvious they managed to train it without access to large quantities of the highest-end chips, due to export controls.
Conservatively, I think Nvidia will definitely have to scale down by 50%, and they will have to reduce prices by a lot too, since VC and government billions will no longer be available to their customers.
They have made it harder, but it's not really hard.
Just buy any regulated crypto and convert. Cake Wallet makes it easy, but there are many other ways.
I myself hold Bitcoin and Monero.
That's not the way it works. And I'm not even against that.
It still won't work that way a few years from now.
I’m not sure. That’s a very static view of the context.
While China has an AI advantage due to wider adoption, fewer constraints, and an overall bigger market, the US has higher tech and more funds.
OpenAI, Anthropic, MS and especially X will all be getting massive amounts of backing and will reverse-engineer and adopt whatever advantages R1 has. And while there are some, it's still not a full-spectrum competitor.
I see this as a small correction that the big players will take advantage of to buy stock, and then pump it with state funds, furthering the gap and ignoring the Chinese advances.
Regardless, Nvidia always wins. They sell the best shovels. In any scenario, the world at large still doesn't have its Nvidia clusters: think Africa, Oceania, South America, Europe, the parts of SEA that don't necessarily align with Chinese interests, India. Plenty to go around.
Extra funds are only useful if they can provide a competitive advantage.
Otherwise those investments will not have a positive ROI.
The case until now was built on the premise that US tech was years ahead and that AI had a strong moat due to high compute requirements.
We now know that that isn't true.
If high compute enables a significant improvement in AI, then that old case could become true again. But the prospects of such a reality happening and staying just got a big hit.
I think we are in for a dot-com type bubble burst, but it will take a few weeks to see if that's gonna happen or not.
Especially because programming is quite fucking literally giving computers instructions, despite what you believe keyboard monkeys do. You wanker!
What? You think "developers" are some kind of mythical beings that possess the mystical ability of speaking to the machines in cryptic tongues?
First off, you're contradicting yourself: is programming about "giving instructions in cryptic languages", or not?
Then, no: developers are mythical beings who possess the magical ability of turning vague gesturing full of internal contradictions, wishful thinking, up to outright psychotic nonsense dreamt up by some random coke-head in a suit, into hard specifications suitable to then go into algorithm selection and finally into code. Typing shit in a cryptic language is the easy part; also, it's not cryptic, it's precise.
They'll probably do that, but that's assuming we aren't past the point of diminishing returns.
The current LLMs are pretty basic in how they work, and it could be that with the current training approach we're near the limit of what they'll ever be capable of. They'll of course invest a billion in training a new generation, but if it's only marginally better than the current one, they won't keep investing billions if the results don't really improve.
Fascinating. My boss really bought into the tech-bro bullshit; every time we get coffee as a team, he goes on and on about how ChatGPT will be the savior of humanity, increase productivity so much that we'll have a 2-day work week, blah blah blah.
I've been on his shit list lately because I had to take some medical leave and didn't deliver my project on time.
Now that this thing is open-sourced, I can bring it to him, tell him it outperforms even ChatGPT o1 or whatever it is, and tell him that we can run it locally. I'll be off the shit list and back in his good graces, and maybe even get a raise.
Your boss sounds like he buys into bullshit for a living. Maybe that’s what drew him to the job, lol.
I think believing in our corporate AI overlords is even overshadowed by believing those same corporations would pass the productivity gains on to their employees.
DeepSeek proved you don't need anywhere near as much hardware to train or run an even better AI model.
Imagine what would happen to oil prices if a manufacturer came out with a full ICE car that could run 1000 miles per gallon... instead of the standard American 3 miles per 1.5 gallons, hehehe.
en.wikipedia.org/wiki/Jevons_p…
more efficient use of oil will lead to increased demand, and will not slow the arrival or the effects of peak oil.
Energy demand is infinite and so is the demand for computing power because humans always want to do MORE.
Yes, but that's not the point... If you could buy a house for $1,000, nobody would buy a similar house for $500,000.
Eventually the field would even out, and maybe demand would surpass current levels, but for the time being Nvidia's offering seems to be a giant surplus, and speculators will speculate.
Maybe, but there is incentive to not let that happen, and I wouldn’t be surprised if “they” have unpublished tech that will be rushed out.
The ROI doesn’t matter, it wasn’t there yet it’s the potential for it. The Chinese AIs are also not there yet. The proposition is to reduce FTEs, regardless of cost, as long as cost is less.
While I see OpenAi and mostly startups and VC reliant companies taking a hit, Nvidia itself as the shovel maker will remain strong.
In Europe I've been able to send any amount (up to around 100k) in just a few days, for the last 20 years, to anyone with a bank account in Europe, from my computer or phone.
Also, since 2025 every bank lets me send instant payments to any other bank account. For free.
That's becoming less true. The cost of inference has been rising with bigger models, and even more so with "reasoning models".
Regardless, at the scale of 100M users, big one-off costs start looking small.
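To put rough numbers on that (using the ~$6M training figure reported for DeepSeek, purely as a back-of-the-envelope illustration): $6,000,000 spread over 100M users comes to $0.06 of training cost per user, paid once, while inference cost recurs with every single query those users make.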
In part we agree. However there are two things to consider.
For one, the LLMs are pretty much plateauing now. So they depend on more quality input, which, basically, they themselves replace. So, looking ahead, imo the learning won't be able to keep this up. (In other fields, like nature etc., there's comparatively endless input for training, so it will keep working there.)
The other thing is, as we likely both agree: this is not intelligence. It has its uses.
But you said it would replace programming, which in my opinion will never work: we're missing the critical intelligence element. It might be there at some point; maybe LLMs will help get there, maybe not, we'll see. But for now we don't have that piece of the puzzle, and it won't be able to replace human work that has (new) thought put into it.
The data on the blockchain is not private.
However, data can be encrypted before it hits the blockchain, and it can also be cryptographically manipulated in ways that remain private.
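For instance, here's a minimal sketch in Rust (assuming the `aes-gcm` crate's 0.10 API; key management is deliberately elided, and the key/nonce below are placeholders) of encrypting a payload client-side so that only opaque ciphertext ever reaches the chain:

```rust
// Hypothetical sketch: encrypt client-side, store only ciphertext on-chain.
use aes_gcm::{
    aead::{Aead, KeyInit},
    Aes256Gcm, Key, Nonce,
};

fn main() {
    // Placeholder key/nonce for illustration ONLY: real code needs a securely
    // generated random key and a nonce that is unique per message.
    let key = Key::<Aes256Gcm>::from_slice(&[0x42; 32]);
    let cipher = Aes256Gcm::new(key);
    let nonce = Nonce::from_slice(b"unique nonce"); // 96-bit nonce

    let plaintext = b"data that should stay private";
    let ciphertext = cipher
        .encrypt(nonce, plaintext.as_ref())
        .expect("encryption failure");

    // Only the ciphertext (plus the nonce) would be written to the chain;
    // the key never leaves the client, so the on-chain record stays opaque.
    println!("on-chain payload: {} bytes", 12 + ciphertext.len()); // 12-byte nonce

    // Anyone holding the key can still read the data back off-chain.
    let decrypted = cipher
        .decrypt(nonce, ciphertext.as_ref())
        .expect("decryption failure");
    assert_eq!(decrypted.as_slice(), plaintext.as_ref());
}
```

The ledger then only ever commits to ciphertext, so the chain's public transparency and the payload's privacy aren't in conflict.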
True, but training is a one-off cost. And as you say, a factor of 100x less cost with this new model. Therefore Nvidia just saw 99% of its expected future demand for AI chips evaporate.
It might also lead to 100x more power to train new models.
It's not a statement out of ignorance, and it's not a lie. Most people don't try to move huge sums around, so I'll illustrate what I had to go through. I had a large sum of money with an online investing company and a very time-critical situation I needed the money for, so I cashed out my investments. The company only cashed out via a check sent by registered mail (maybe they did transfers for smaller amounts, but for the sum I had it was check only). It took almost two weeks for that check to reach me. When I deposited it with my bank, the bank had a mandatory 5-7 business day wait to clear (once again: smaller checks they deposit immediately and then run the clearing process; BIG checks they don't do that with, so I had to wait another week). Once it cleared, I had to move the money to another bank, and guess what: I couldn't take that much out in cash, and daily transfers were capped at like $1,500 or whatever, so I had to get a check from the bank. The other bank made me wait another 5-7 business days as well, because the check was just too damn big.
It took me 4 weeks to move that money around, and of course I missed the time-critical thing I really needed it for.
I'm just a random person, not a business, no business accounts, etc. The system just isn't designed for small folk moving big money.
It's easy to mod the software to get rid of those censors
Part of why the US is so afraid is because anyone can download it and start modding it easily, and because the rich make less money
Yes and no. Not many people can afford the hardware required to run the biggest LLMs. So the majority of people will just use the psyops vanilla version that China wants you to use, all while it collects more data and influences the public, like what TikTok is doing.
Also, another thing with open source: it's just as easy to go closed as it is to stay open, with zero warning. They own the license. They control the narrative.
How are you the product if you can download, mod, and control every part of it?
Ever heard of WinRAR?
Audacity?
VLC media player?
Libre office?
Gimp?
Fruitloops?
Deluge?
Literally any free open source standalone software ever made?
Just admit that you aren't capable of approaching this subject without bias.
You just named Western FOSS projects and completely ignored the "psyops" part. This is a Chinese psyops tool disguised as FOSS.
99.9999999999999999999% of people can't afford, or don't have the ability, to download and mod their own 67B model. The vast majority of people who use it will be using DeepSeek's vanilla servers. Those can collect a massive amount of data and also control the narrative on what is true or not. Think TikTok, but on a work computer.
Jason Carty (@Doctor_J_@mastodon.social)
Attached: 1 video Here’s a fun experiment you can do using Deepseek, the hot new Chinese AI tool. Part one of two:Mastodon
You wouldn't, because you are (presumably) knowledgeable about the current AI trend and somewhat aware of political biases of the creators of these products.
Many others would, because they think "wow, so this is a computer that talks to me like a human, it knows everything and can respond super fast to any question!"
The issue to me is (and has been for a while) the framing of what "artificial intelligence" is and how humans are going to use it. I'd like more people to be critical of where they get their information from and what kind of biases it might have.
You wouldn’t, because you are (presumably) knowledgeable about the current AI trend and somewhat aware of political biases of the creators of these products.
Well, more because I'm knowledgeable enough about machine learning to know it's only as good as its dataset, and knowledgeable enough about mass media and the internet to know how atrocious 'common sense' often is. But yes, you're right about me speaking from a level of familiarity which I shouldn't consider typical.
People have been strangely trusting of chat bots since ELIZA in the 1960s. My country is lucky enough to teach a small amount of bias and media literacy skills through education and some of the state broadcaster's programs (it's not how it sounds, I swear!), and when I look over to places like large chunks of the US, I'm reminded that basic media literacy isn't even very common, let alone universal.
I did. The answer it gave is clear and concise, with no judgement. Instead it lays out the arguments on both sides. Not the "magical Hasbara dance" you promised me.
Try asking DeepSeek about Taiwan independence and watch how it completely ignores all of its (/think) reasoning and gives a false answer.
The question of whether Israel is currently committing genocide is a subject of intense debate among international organizations, scholars, and political entities.
Accusations of Genocide:
Amnesty International's Report: On December 5, 2024, Amnesty International released a report concluding that Israel is committing genocide against Palestinians in the Gaza Strip. The report cites actions such as killings, causing serious bodily or mental harm, and deliberately inflicting conditions calculated to bring about the physical destruction of Palestinians in Gaza.
UN Special Committee Findings: In November 2024, a UN Special Committee found that Israel's methods of warfare in Gaza are consistent with characteristics of genocide, noting mass civilian casualties and widespread destruction.
Scholarly Perspectives: Israeli historian Amos Goldberg has stated that the situation in Gaza constitutes a genocide, pointing to the extensive destruction and high civilian death toll as indicative of genocidal intent.
Counterarguments:
Israeli Government's Position: The Israeli government asserts that its military actions in Gaza are aimed at dismantling Hamas, a group designated as a terrorist organization by multiple countries, and emphasizes efforts to minimize civilian casualties.
Criticism of Genocide Accusations: Organizations such as the American Jewish Committee (AJC) reject the genocide label, arguing that Israel's actions are self-defense measures against Hamas and do not meet the legal definition of genocide.
Legal Definition of Genocide:
According to the UN's 1948 Convention on Genocide, genocide includes acts committed with intent to destroy, in whole or in part, a national, ethnical, racial, or religious group. These acts encompass killing members of the group, causing serious bodily or mental harm, and deliberately inflicting conditions calculated to bring about the group's physical destruction.
Conclusion:
The determination of whether Israel's actions constitute genocide involves complex legal and factual analyses. While some international bodies and scholars argue that the criteria for genocide are met, others contend that Israel's military operations are legitimate acts of self-defense. This remains a deeply contentious issue within the international community.
¯\_(ツ)_/¯
So you expect an AI to provide a morally framed view of current events that matches your own morally framed point of view?
The answer provides a concise overview of the topic. It contains a legal definition and different positions on the matter. At no point does it imply anything. It's not the job of AI (or news) to form an opinion, but to provide facts that allow consumers to form their own. The issue isn't the AI in this case; it's the inability of consumers to form opinions, and their expectation that others can provide a right-or-wrong opinion for them to assimilate.
If you're of the opinion that it's not a genocide, you're wrong. There is no alternate explanation. If it were stating a fact, that would be correct. Giving "both sides" is an opinion rather than a fact.
If their intention were facts only, the answer would have been yes.
Okay, cool...
So, how much longer before Nvidia stops slapping a "$500-600 RTX XX70" label on a $300 RTX XX60 product with each new generation?
The thinly-veiled 75-100% price increases aren't fun for those of us not constantly touching ourselves over AI.
Try buying Monero, it is very hard to buy.
- Acquire BTC (there are even ATMs for this in many countries)
- Trade it for XMR using one of the many non-KYC services like WizardSwap or exch
I haven't looked into whether that's illegal in some jurisdictions, but it's really, really easy once you know that's an option.
Or you could even just trade directly with anyone who owns XMR. Obviously easier for some people than others, but it's a real option.
Neither of these methods even requires personal details like ID/name/phone number.
Never buy an HP laptop. For some dumb reason my job gives us HP laptops. I can put mine into hibernation at 100% battery and unplug it, and the next morning the battery is so empty it can't even boot to the "please connect to power supply" message.
#it #technology
Today's @pixel_dailies prompt is "pin", so I made Battlewasp - Pin the Bullseye as pixel art.
Everyone take -200.
[r] _ documenting the genocide against the Palestinian people, preserving its memory
for about a year I have followed, watched and saved hundreds, I believe thousands, of videos, images, audio recordings, essays and articles on the Zionist genocide against the Palestinian people. (and of course I continue to do so).
"the first livestreamed genocide", as Susan Abulhawa rightly calls it (here).
I have been able to show and/or save part of that material in some posts here on slowforward or on differx.noblogs.org, and in various Mega folders (whose links I have shared and will continue to share).
to these tools I added, some time ago, the YouTube channel youtube.com/@differx-2 and, more recently, the Instagram account instagram.com/palestina_it
I can't manage to be very prompt with updates, and the material is overwhelming, a veritable ocean of horror, even without extensively considering israel's murderous/genocidal designs on other countries, in particular, now, Lebanon.
but I try nonetheless to keep accumulating material, sometimes noting the download date in the file names, to keep giving an account of what is happening.
this post is meant to (1) point out the links cited above, and invite you to follow them, despite their inevitable incompleteness in the face of the enormity of the genocide; (2) encourage the sharing of videos, images, texts, etc.; (3) serve as a prelude to a project (at least a project) for an archive of the genocide, an initiative of testimony and memory that we Westerners owe entirely to the Palestinians.
#000000 #archivio #archivioDelGenocidio #Cisgiordania #colonialismo #condividere #condivisione #differx2 #differxNoblogsOrg #ff0000 #Gaza #genocide #genocidio #Israele #israeleStatoTerrorista #izrahell #Libano #link #memoria #neocolonialismo #Palestina #palestinaIt #Palestine #politica #sionismoOmicida #WestBank
[r] _ documenting the genocide, preserving the memory
seeing, saving, sharing, archiving, remembering, re-presenting the informational material on the genocide carried out by israel against the Palestinians
slowforward.net/2025/01/27/r-_…
Maurizio Maggiani's column in Monday's La Stampa, as is almost always the case, offers deep and touching reflections that are worth reading.
#Letture #MaurizioMaggiani
#sogni #potatori #deportazioni #catene #Trump #USA #algoritmi #guerre
@maupao
@macfranc
@Puntopanto
@alephoto85
@goofy
@orporick
@nemobis
@scuola
@contributopia @macfranc @alephoto85 @goofy @orporick @nemobis @maupao @scuola
The Future of Social Media: For Free Feeds
To break the power of the tech bosses, developers are working on alternative social media. How could the platforms of the future work? Christian Jakob (taz)
On 30.01, Justyna Wydrzyńska will stand trial again. Last time, she was found "guilty of helping" someone who asked for help. Justyna had sent abortion pills to Anna. Anna's abusive partner took the first pack of pills away and called the cops. In the end, Anna miscarried after using a Foley catheter, ended up in the hospital with sepsis, and barely survived.
6 trials, 1 conviction, over the INTENT to help someone.
For the past year, we hoped the new government in Poland, which campaigned on the liberalization and decriminalization of abortion, would fulfill its promises. It came into power thanks to an extraordinary mobilization of women, but this was just a lie: NOTHING has changed. The activist still faces charges. The government is even considering bringing back the 1993 abortion ban, saying that this is the "liberalization" it promised us.
The situation with abortion is even worse: women turn to Abortion Without Borders for help later than they used to, trying to get an abortion in Polish hospitals because politicians announce everywhere that the situation is better now. These abortions are more problematic to organize and much more expensive. Donations and grants are lower, since donors and foreign governments believed that "Polish women won the election." "Donald Tusk guarantees democracy and women's rights," so we no longer need help...
Abortion Without Borders helps over 130 people in Poland get abortions EVERY DAY. Some of them, with wanted pregnancies and health risks, are forced to travel for medical care, even to Mexico, because French and Belgian hospitals are at their limit.
Justyna is an expert, a guarantor of safe abortions. She should be protected, not prosecuted. But we're not giving up! Justyna and the Abortion Dream Team are opening AboYes, the first abortion clinic in Poland. Please support us:
💕Share
💕Post solidarity messages #IamJustyna
💕Donate: abotak.org/en/
Thank you! With love
Polish abortionists
maszwybor.net/blog/justyna-fac…
@worldnews @politics @world #abortion #humanrights #women
Justyna faces the court again for helping in an abortion. She needs your support! - Miałam Aborcję
On the 30th of January, our activist sister, Justyna Wydrzyńska, will stand again for a trial in front of a Polish court. Last time, she was found “guilty of helping“ someone who asked for help.wind (Kobiety w Sieci)
Don't miss phpday 2025!🥳
The community is looking forward to gathering in Verona as soon as possible.
Tickets for the 22nd edition are selling quickly!
A few early bird tickets are still available.
🐤 bit.ly/3PPceLi
#API #REST #Architectures #ContinuousDelivery #Database #Development #Devops #Frameworks #Internals #PHP7 #PHP8 #conference #networking #community
-----
#phpday - The gathering for the European PHP community.
📍 Verona (Italy) | 📅 May 15-16, 2025
📺🔍 Looking for the best free IPTV players for Linux in 2025? Discover top-rated options that offer seamless streaming and a user-friendly experience. Elevate your Linux entertainment setup with these fantastic IPTV players! 🔥📡
#IPTVPlayers #Linux #Streaming #Entertainment
iptvprovidersreview.com/best-f…
Best Free IPTV Players for Linux to Use in 2025
Explore the Best Free IPTV Players for Linux to Use in 2025 for watching live TV channels and on-demand videos on Linux. Are you looking for the ideal IPTV player for your Linux computer? (IPTV Providers Review 2025)
🧐 Debunking Linux Myths: What's True and What's False? 🐧❓
Linux is surrounded by many myths and misunderstandings. It's time to separate fact from fiction and find out what's really true about this powerful operating system. Don't fall for misinformation!
👉 Check out the truth on the blog: nova.escolalinux.com.br/blog/m…
#Linux #Mitos #Fatos #Tecnologia #OpenSource
Linux Myths: What's True and What's False?
Thinking about migrating to Linux, but have doubts about this operating system? In this article, we debunk the main beliefs about Linux and show you the truth behind each of them. Paulo Oliveira
New post!
Distribute Open-Source Tools with Homebrew Taps: A Beginner's Guide
casraf.dev/2025/01/distribute-…
#homebrew #brew #macos #linux #development
Distribute Open-Source Tools with Homebrew Taps: A Beginner's Guide - casraf.dev
Often, when I create open-source tools or apps, it’s tricky to distribute them easily. You usually need to specify where to download from, and where to place the extracted files, which is a manual process that doesn’t usually translate into a simple …casraf.dev
This Week in KDE Apps
blogs.kde.org/2025/01/27/this-…
@kde
#KDE #KDEApps #Linux #FOSS #OpenSource
This Week in KDE Apps
Welcome to a new issue of "This Week in KDE Apps"! Every week we cover as much as possible of what's happening in the world of KDE apps. Due to FOSDEM happening next weekend, there won't be any "This Week in KDE Apps" post next week.This Week in KDE Apps
#ceasefireGaza #Gaza #apartheidstate #defacingholocaustvictims #barbary #acultureofhatred #waronchildren #westernsponsoredethniccleansing
📷 Abed Rahim Khatib
Sure, here is an alt-text description for the image:
The image shows a group of people, apparently refugees, gathered in a street. Some children sleep wrapped in blankets on improvised mats on the ground. A small group of adults sits near a small campfire, while a man appears to be on a phone call. In the background there are cars loaded with belongings, indicating a situation of displacement. The scene suggests poverty and desperation, with a focus on the vulnerability of the children. The atmosphere is cold and destitute, with an overall sense of despair and uncertainty.
Provided by @altbot, generated using Gemini
"The ICC's authority massively undermined": Poland disregards Netanyahu arrest warrant ahead of Auschwitz commemoration
https://www.fr.de/politik/das-politische-dilemma-um-netanjahu-polen-missachtet-haftbefehl-vor-auschwitz-gedenken-zr-93537578.html?utm_source=flipboard&utm_medium=activitypub
Posted in Politik @politik-FR_de
"Authority massively undermined": Poland disregards Netanyahu arrest warrant ahead of Auschwitz commemoration
For the Auschwitz commemoration, Poland's government intended to ignore the arrest warrant against Netanyahu. "A serious problem," explains international law scholar Safferling. (www.fr.de)
Nvidia, ASML Plunge as DeepSeek Triggers Tech Stock Selloff
Link: finance.yahoo.com/news/asml-si…
Discussion: news.ycombinator.com/item?id=4…
Desktop Motherboards Continue Playing Catch-Up For Linux Monitoring Support
The hardware monitoring "HWMON" subsystem updates have been merged for the Linux 6.14 kernel. As happens with most kernel releases, there are a number of already-launched desktop motherboards beginning to see working sensor monitoring support under Linux...
phoronix.com/news/Linux-6.14-H…
Desktop Motherboards Continue Playing Catch-Up For Linux Monitoring Support
The hardware monitoring 'HWMON' subsystem updates have been merged for the Linux 6.14 kernelwww.phoronix.com
Reduced SquashFS Memory Use With The Linux 6.14 Kernel, More NILFS2 Fixes
In addition to all of the exciting "MM" changes for Linux 6.14 that were submitted by Andrew Morton's pull request, he also sent out the set of "non-MM" updates for the Linux 6.14 merge window...
phoronix.com/news/Linux-6.14-N…
Reduced SquashFS Memory Use With The Linux 6.14 Kernel, More NILFS2 Fixes
In addition to all of the exciting 'MM' changes for Linux 6.14 that were submitted by Andrew Morton's pull request, he also sent out the set of 'non-MM' updates for the Linux 6.14 merge window.www.phoronix.com
bbc.com/news/articles/cn8x195d…
#Shoah
Holocaust survivors fear Europe is forgetting the lessons of Auschwitz
Eighty years after the liberation of the concentration camp, some politicians have been quick to target outsiders.Katya Adler (BBC News)
A good #ScreenshotSunday to all! 😶🌫️ I've been testing and fixing little bugs in Pepper Odyssey. I guess I could show you this sunset and a new mask!
mas.to/@tyrexito/1138569401315…
Mauro Entrialgo (@tyrexito@mas.to)
Attached: 1 image Repetimos @EconoCabreado@mastodon.social y yo. Esta vez en la Prospe.mas.to
A Normal Life is the autobiography of Vassilis Palaiokostas. Known to the public as the “Greek Robin Hood,” to police as “The Uncatchable” he lives an illegalist existence in defiance of the state. For decades, it has been a life lived as a fugitive.
In Greece he has become a household name. A modern folk hero of sorts, taking millions in bank raids — including the famous Kalambaka heist, Greece’s biggest ever — and ransoming CEOs whilst distributing his gains to those who needed it. But he is most famous for his prison breakouts — infamously escaping the high-security wing of Korydallos Prison by helicopter.
Twice.
Authorities hate him and deem him a terrorist; his freedom is a continued insult to the Greek state. Now his memoir, an instant bestseller in #Greece, translated into #English for the first time, tells his story in his own words. Vassilis Palaiokostas does not offer any mealy-mouthed, "socially acceptable" justifications for his actions, but honestly elaborates on his dreams and their totality.
freedompress.org.uk/product/a-…
#books #bookstodon
A Normal Life - Freedom Press
"A Normal Life" is the autobiography of Vassilis Palaiokostas, known to the public as the "Greek Robin Hood," to police as "The Uncatchable."Freedom Press
@magnus It's supposed to be a thread. Also check this: todon.nl/@prolrage/11390011123…
Greece's most wanted man who gives to the poor
bbc.co.uk/news/special/2014/ne…
He's spent decades dodging the law. He's escaped from jail twice by helicopter. He's given millions to the poor. This is the story of how #Greece ’s most wanted man became a folk hero.
But just like the original Robin Hood, #VassilisPaleokostas is despised by the authorities he plagues. They portray him as a violent terrorist and #Greek journalists have strangely shied away from telling his incredible story.
Nothing Phone (3) or something else? Announcement expected on March 4
#CarlPei #Eventi #Leak #Marzo2025 #Nothing #NothingPhone #NothingPhone3 #Phone3 #Notizie #Novità #Rumors #Smartphone #Teaser #Tech #TechNews #Tecnologia
ceotech.it/nothing-phone-3-o-a…
Nothing Phone (3) or something else? Announcement expected on March 4
Nothing announces an event for March 4: will it be the Nothing Phone (3) or a new device? A teaser image shows a pill-shaped camera module. (CeoTech)
Amid weight loss drug boom, UK pharmacies urge tougher rules for online sales
https://www.euronews.com/health/2025/01/27/amid-weight-loss-drug-boom-uk-pharmacies-urge-tougher-rules-for-online-sales?utm_source=flipboard&utm_medium=activitypub
Posted into Europe News @europe-news-euronews
Amid weight loss drug boom, UK pharmacies urge tougher rules for online sales
The pharmacy association said it was worried some patients were accessing the drugs inappropriately amid a surge in demand.Gabriela Galvin (Euronews.com)
my.wealthyaffiliate.com/vitali… #Pinterest
How to Organize Pins in Your Pinterest Boards
As you grow the pins in the different Pinterest boards you make, the need to organize them becomes important over time, and one of the most suggested things to do is put your best-performing ones at the top, while leaving the