The AI bubble is the only thing keeping the US economy together, Deutsche Bank warns: When the bubble bursts, reality will hit far harder than anyone expects techspot.com/news/109626-ai-bu…
#ai
Jeff Atwood
in reply to James H McLaren
@JamesHMcLaren that's just completely absolutely untrue, I'm sorry, but.. infosec.exchange/@codinghorror…
George Mitchell
in reply to Jeff Atwood
@codinghorror @mikemccaffrey @JamesHMcLaren It's the overhyping that's the problem; when the hype bubble bursts, it's not just going to take down the no-value "solutions", it's also going to crater the actually-valuable propositions, because those products aren't profitable without the hype money coming in.
(My wife was a pre-IPO employee of Amazon; I watched the dot-com collapse in extreme close-up.)
Jeremy Kahn
in reply to doragasu
@doragasu
It's not the data centers. It's the overvaluation of "AI" stocks that pushes bubble money into pension funds
When that bubble valuation goes away, the idled data centers will be there like the feet of Ozymandias, and the pension funds and 401(k)s will be the barren desert all around
@nixCraft
@cstross
Karsten Johansson
in reply to nixCraft 🐧
in reply to nixCraft π§ • • •Isn't that another way of saying the US will do anything to keep propping it up, no matter the cost?
This is a race that is China's for the taking. Will the USA let that happen?
Huge sacrifices are about to be made to keep up the illusion as long as possible.
Alex Lohr (RIP Natenom)
in reply to nixCraft 🐧
in reply to nixCraft π§ • • •It is only AI in PowerPoint. In Python, it's large models. In truth, it is creative marketing around statistical models tuned to predict the answer.
And the answer is: don't waste so many resources predicting answers with models when human brains are far better at giving them.
The attempt to replace human brains was doomed to begin with for any task that required one in the first place.
SpaceLifeForm
in reply to nixCraft 🐧
When Deutsche Bank speaks, people listen.
They know it is a clusterfuck. Now they are wondering what they are going to get in return for their investments.
Maybe they are thinking of popping the bubble before they lose more money.
#AI #Insanity #Money
David Chisnall (*Now with 50% more sarcasm!*)
in reply to Jeff Atwood
@codinghorror
Some of it is useful, but note the point in the article: it is not adding anything to the GDP in terms of productivity. I've seen people use LLMs as slightly better interfaces to documentation. Yes, it's useful, but it isn't a killer app. People use them for summarisation, but they're really bad at it and they often miss the key point. Using that in decision making lets you move faster, but in the wrong direction. It takes a while for that to appear, and then it's a problem. People are "vibe coding" business apps, but the ones that work are the kind that you could build in less time with MS PowerApps and no programming ability beyond Excel formulae. Producing moderately bad subtitles is better than having no subtitles. Low-stakes translation was already pretty good pre-LLM and is now better (but still makes ludicrous mistakes and so can't be trusted in situations where being misunderstood costs money or lives).
And all of these use cases are massively subsidised. How many people are willing to pay ten times OpenAI's current price for LLMs? Inference costs dropping relies on the ML-tailored GPUs having large economies of scale, which relies on large numbers of consumers. Increase the price and that cycle ends. Training costs are increasing and must be amortised over more users; raise prices and usage shrinks, so you must raise prices even further.
I've seen estimates that, without VC money being set on fire, a ChatGPT subscription would cost at least $1,000/month. For a thing that doubles employee productivity, that's an easy sell. For a modest increase, it's an easy "no".
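The arithmetic behind such estimates is simple to sketch. In the toy calculation below, every number (training spend, amortisation window, user count, serving cost) is an invented assumption for illustration, not a reported figure from OpenAI or anyone else:

```python
# Illustrative break-even arithmetic for an unsubsidised LLM subscription.
# All figures are assumptions made up for this sketch.

training_cost = 5e9           # assumed training + R&D spend to recover, USD
amortisation_months = 24      # assumed useful life before the model is stale
paying_users = 10e6           # assumed paying subscribers
inference_cost_per_user = 40  # assumed GPU/serving cost per user per month, USD

monthly_training_share = training_cost / amortisation_months / paying_users
break_even_price = monthly_training_share + inference_cost_per_user

print(f"training share per user/month: ${monthly_training_share:,.2f}")
print(f"break-even subscription price: ${break_even_price:,.2f}/month")

# The feedback loop from the post: if the price rises and users leave,
# the fixed training cost is spread over fewer people, pushing the
# break-even price up again.
for users in (10e6, 5e6, 1e6):
    price = training_cost / amortisation_months / users + inference_cost_per_user
    print(f"{users / 1e6:>4.0f}M users -> ${price:,.2f}/month")
```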
And LLMs become obsolete fast. A coding assistant that doesn't know about new language features and which always uses deprecated or removed APIs is a productivity drain, not a help. A search assistant that has no knowledge of current events past 2025 will rapidly become useless. For translation, this is slower, but language evolves, both in terms of new words and new meanings for words, so these need retraining. For some use cases, RAG helps patch over the limitations of a stale model, but it also drives up the inference costs.
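The RAG cost point is mechanical: retrieval stuffs documents into the prompt, so every query pays for far more input tokens than the question alone. A minimal sketch, with a made-up corpus, a toy word-overlap retriever, and an assumed per-token price:

```python
# Minimal sketch of why RAG inflates inference cost: models are billed per
# token, and RAG prepends retrieved context to every query.
# The corpus, retriever, and price below are illustrative assumptions.

corpus = {
    "release-notes-2026.txt": "Version 3.0 removes the legacy foo() API entirely.",
    "style-guide.txt": "All new code must use the bar() interface instead.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    # Toy retriever: rank documents by word overlap with the query.
    q = set(query.lower().split())
    ranked = sorted(corpus.values(),
                    key=lambda doc: len(q & set(doc.lower().split())),
                    reverse=True)
    return ranked[:k]

def estimate_tokens(text: str) -> int:
    return len(text.split())  # crude stand-in for a real tokenizer

query = "Which API should new code use?"
context = "\n".join(retrieve(query))
prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"

price_per_1k_tokens = 0.01  # assumed, USD
for label, text in (("bare query", query), ("RAG prompt", prompt)):
    toks = estimate_tokens(text)
    print(f"{label}: {toks} tokens, ~${toks / 1000 * price_per_1k_tokens:.4f}")
```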
And all of that is assuming that the large-scale plagiarism for training these models is legal. At the moment, governments are believing the "this will add massive amounts to your GDP" hype from big tech, and so are willing to throw creative industries under a bus in aid of better AI models. When they realise that the hype is built on lies, this will change.
David Chisnall (*Now with 50% more sarcasm!*)
in reply to Jeff Atwood
in reply to Jeff Atwood • • •There are benchmarks in research papers evaluating this, but letβs assume that you are correct and, in your use case, it works well. If OpenAI charged 10x their current amount, would it still be worthwhile? Are you seeing enough revenue from customers that paying that amount would be worth it? If other people decide it isnβt and youβre now paying a larger fraction and the cost is 15x, is the same true? If so, great, you have a use case for LLMs that is sustainable after the bubble bursts.
David Chisnall (*Now with 50% more sarcasm!*)
in reply to Jeff Atwood
in reply to Jeff Atwood • • •Substitute someone else for OpenAI if you want. The compute requirements for teaming an LLM are huge. So huge that the economies of scale mean that whoever has the most paying customers can charge the least (once theyβve finished burning investor money). Thatβs a textbook example of a natural monopoly.
David Chisnall (*Now with 50% more sarcasm!*)
in reply to Jeff Atwood
@codinghorror
Gemini is produced by a three-trillion-dollar company that is "all in" on AI and is burning their cash reserves (and money raised by pumping their stock price as part of the bubble and issuing more stock) to fund development.
There are a handful of big tech companies doing the same. There are a handful of companies with tens of billions of investor money doing the same.
I'd strongly dispute "tons". Can you list even ten companies producing LLMs that are in a similar class to OpenAI? Can you list one that is reporting an operating profit from their LLM offering?
David Chisnall (*Now with 50% more sarcasm!*)
in reply to Ben Aveling
@BenAveling @codinghorror
Summarisation is a staggeringly hard problem. English schools used to teach précis, because it's a skill that's difficult. Identifying which parts of a document are the key ideas and then expressing those in fewer words is something that requires understanding not just the text of the document but also the purpose of the summary. Summarising a technical document for someone making a business decision requires removing most of the technical detail and providing the information that leads to a cost-benefit analysis, whereas summarising the same document for someone evaluating experimental rigour requires pulling out the methodology and discarding much of the rest.
There are a bunch of classical NLP approaches to summarisation that rely on part-of-speech tagging and removing words (and sometimes entire sentences) that have a low probability of contributing to the end result. These often work well, but they have awful failure modes. For example, they will usually strip superlatives or modifiers on adjectives, but sometimes these are the critical information in the sentence.
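For concreteness, here is a minimal sketch of that classical extractive flavour, pared down to word-frequency sentence scoring rather than full part-of-speech tagging (the sample text and stopword list are made up for illustration):

```python
# Naive extractive summariser in the classical NLP style: score sentences
# by the frequency of their content words and keep the top scorers.
# Real systems use part-of-speech tagging; this sketch only counts words.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "in", "it"}

def content_words(text: str) -> list[str]:
    return [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]

def summarise(text: str, keep: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(content_words(text))
    def score(sentence: str) -> float:
        toks = content_words(sentence)
        return sum(freq[w] for w in toks) / (len(toks) or 1)
    top = sorted(sentences, key=score, reverse=True)[:keep]
    return " ".join(s for s in sentences if s in top)  # keep original order

text = ("The reactor test ran for ten hours. The coolant was almost, "
        "but not quite, within tolerance. Results were logged hourly. "
        "The team will repeat the test next week.")
print(summarise(text))
# The failure mode from the post: the rare words in "almost, but not
# quite, within tolerance" score low, so the one critical sentence is
# exactly the one this scorer drops.
```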
LLMs are not summarising, because they do not have any of this context. They are doing a text-to-text transform to make the text shorter, in a way that mirrors what other summaries look like in their training data. If you have formulaic boilerplate, LLMs are great at removing that. For a lot of corporate documents, this kind of structure repeats, and so simply removing it is easy as a statistical transform, though this doesn't usually save much time, because the boilerplate usually exists to make it easy for people to learn to navigate a class of documents quickly and find the points that are relevant for their particular use. If you have text that was expanded from a handful of bullet points by another LLM, they are pretty good at reversing that transform.
Albert Cardona
in reply to Jeff Atwood
@codinghorror @david_chisnall @BenAveling
A few anecdotal cases:
When I ask an LLM to create an abstract for a grant proposal I have just written, it's laughably off the mark, highlighting irrelevant points.
When I ask an LLM to summarize a scientific manuscript, the result is concerning in that some of the points aren't made in the manuscript at all.
When I ask an LLM to pick key points from a long document, be it a book or a white paper, the choices are its own, not the ones I would have chosen. Resisting its choices is hard, setting up a concerning anchoring effect that I'd rather avoid.
I last tried this past August.
For all of the above, I don't use LLMs. For my domain of knowledge, LLMs fall very short.
CDCastillo reshared this.
Albert Cardona
in reply to Jeff Atwood
in reply to Jeff Atwood • • •Asking an LLM to fix a poorly written scientific manuscript is also a failure: makes the language smoother, even grandiloquent, but the content isn't clearer, primarily because it doesn't know β it can't know β what's missing. Otherwise the work wouldn't be at the horizon of knowledge, i.e., out of distribution to use the CS lingo.
CDCastillo reshared this.
Albert Cardona
in reply to Jeff Atwood
in reply to Jeff Atwood • • •Indeed and that highlights another major point: LLMs arenβt generic know-it-all tools, rarher, like all tools, thereβs a learning process towards their proper, sensible use. Including, and particularly, being aware of its limitations as a tool and the domains were they apply at all.
The major issue for me is that unless one is an expert in the knowledge domain of the text being generated, there's no way to assess it for correctness or completeness, yet the neat, eloquent language misleads.
These limitations go against current business models of growth and more growth, so it isn't surprising that they aren't more widely known.
Mikalai
in reply to Jeff Atwood
Having taught physics to different levels and ages, may I suggest the following.
When you see a mess from GenAI, it's like what physics teachers see from students: the same depth (or lack) of grasp of meaning, with a full "I know it" aura.
A student's goal is to get a grade.
GenAI is possessed by the prompt, and just completes it.
For a human, it is a humbling sight.
ggdupont
in reply to Jeff Atwood
@david_chisnall Summarization is a well-defined task, and it covers multiple layers depending on the source material to sum up. Strong value on conversation (or QA discourse) does not mean it will be good on other content. LLMs have been proven (with solid evaluation protocols) not to be great at summarizing complex written documents.
It might appear I'm saying you are both right... which I am. This confirms LLMs are not the generic problem solvers providers want us to believe they are.
ggdupont
in reply to Jeff Atwood
in reply to Jeff Atwood • • •@codinghorror@infosec.exchang
@david_chisnall @nixCraft
I share that view, especially since we are barely starting to learn what the best uses for this new set of tools are.
Test it, try it, proceed with caution.
(Also don't ask me to review the code of your coding assistant, that's your job)
ggdupont
in reply to Jeff Atwood
@codinghorror @joe_vinegar @david_chisnall
> And it will depend on the domain / content.
1000%
> ultimately the market will decide
There is a strong bias here, given the incentive to make everyone use it. It might take time for the dust to settle into realistic usage (i.e. not marketing-driven).
Joe Vinegar reshared this.
ggdupont
in reply to Jeff Atwood
in reply to Jeff Atwood • • •I wouldn't place any bet.
Time will tell...
Mikalai
in reply to Jeff Atwood
May I suggest: while the gravy flows, if some of you have it, or may influence its distribution, please direct it to cimc.ai.
Those guys have a point.
mkroehnert
in reply to Jeff Atwood
@codinghorror
@david_chisnall @nixCraft
This is a good text going over the summarizing argument.
ea.rna.nl/2024/05/27/when-chat…
When ChatGPT summarises, it actually does nothing of the kind.
R&A IT Strategy & Architecture
Mike Torr
in reply to nixCraft 🐧
I can't say I'm surprised.
I've never seen "parabolic" used in economic discussions before; how interesting! I know what it means, though in computer science we'd probably say "polynomial". But "parabolic" is more specific...
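For the record, "parabolic" pins the growth curve down to the quadratic case, while "polynomial" leaves the degree open:

```latex
% Parabolic growth is one specific polynomial: the degree-2 case.
\[
  \text{parabolic: } f(t) = a t^{2} + b t + c
  \qquad
  \text{polynomial: } f(t) = \Theta\!\left(t^{k}\right),\ k \in \mathbb{N}
\]
```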