Bad news, "AI bubble doomers": I've found LLMs to be incredibly useful. They reduce workload (and/or make people much, MUCH more effective at their jobs with the "centaur" model).
Is it overhyped? FUCK yes. Salespeople Gotta Always Be Closing. But this is NOTHING like the moronic Segway (I am still bitter about that crap), or cryptocurrency, which is all grifters and gamblers and criminals end-to-end, or the first dot-com bubble, when not NEARLY enough people had broadband or even internet access, and the logistics systems to support shipping products were nowhere REMOTELY near where they are today.
If you are expecting this "AI bubble" to pop anytime soon, uh… you might be waiting a bit longer than you think? Overhyped, yes; overbuilding, sure; but not remotely a true bubble in any of the same senses as the three examples I listed above 👆. There's something very real, very practical, very useful here, and it is getting better every day.
If you find this uncomfortable, I'm sorry, but I know what I know, and I can cite several dozen very specific examples in the last 2-3 weeks where it saved me, or my team, quite a bit of time.
CM Harrington, in reply to Jeff Atwood
Joachim Wiberg, in reply to CM Harrington: @codinghorror
"A foundation model to predict and capture human cognition" (Nature)
Christian Tietze, in reply to Joachim Wiberg
Joachim Wiberg, in reply to Christian Tietze
Jeff Atwood, in reply to Joachim Wiberg
balu, in reply to Jeff Atwood
Jeff Atwood, in reply to balu
Joachim Wiberg, in reply to Jeff Atwood
Patrick Berry, in reply to Jeff Atwood
Jeff Atwood, in reply to Patrick Berry
Patrick Berry, in reply to Jeff Atwood
Jeff Atwood, in reply to Patrick Berry
Janne Moren, in reply to Jeff Atwood:
It's real in much the same way the railroad boom was real (rather than tulip mania, say).
But LLMs are also not remotely worth the level of valuations and investment we're seeing, and that is a bubble that will pop. "Useful" and "a bubble" can both be true.
Many other types of AI systems have been in production use for years but command nothing like this kind of manic investment.
Jeff Atwood, in reply to Janne Moren
Janne Moren, in reply to Jeff Atwood
Jeff Atwood, in reply to Janne Moren
Joris Meys, in reply to Jeff Atwood:
@jannem But the "bubble" warnings from financial experts at Deutsche Bank aren't about usefulness. They're about assets, revenue streams, and the fact that this frantic building of generic data centers is hiding the recession in the US.
Housing is useful too. Didn't stop the 2008 crash...
Jeff Atwood, in reply to Joris Meys
Joris Meys, in reply to Jeff Atwood
Jeff Atwood, in reply to Joris Meys
Joris Meys, in reply to Jeff Atwood
Jeff Atwood, in reply to Joris Meys:
@JorisMeys @jannem Oh, I definitely will have a great day, because I'm putting $69m into action to help desperately poor people and orgs doing amazing work.
blog.codinghorror.com/stay-gol…
blog.codinghorror.com/the-road…
"The Road Not Taken is Guaranteed Minimum Income" (Jeff Atwood, Coding Horror)
Jeff Atwood, in reply to Joris Meys:
"The Road Not Taken is Guaranteed Minimum Income" (Jeff Atwood, Coding Horror)
Joris Meys, in reply to Jeff Atwood
Seth Richards, in reply to Jeff Atwood:
"I can cite several dozen very specific examples in the last 2-3 weeks where it saved me, or my team, quite a bit of time."
Please do, if you can. Because most of the time I've tried to use LLMs for work, the error rate ends up costing me MORE time than I would have spent without them, and most AI boosters are short on specifics. We just had a presentation at my job on how we all need to be using AI, with no case studies of how it's actually been useful so far.
Jeff Atwood, in reply to Seth Richards:
Here's one: a friend confided he is unhoused, and it is difficult for him. I asked ChatGPT to summarize local resources to deal with this (how do you get ANY ID without a valid address, etc.? A chicken/egg problem) and it did an outstanding, amazing job. I printed it out, marked it up, and gave it to him.
Here's two: GiveDirectly did two GMI studies, in Chicago and Cook County, and we were very unclear on what the relationship between them was, or why they did it that way. ChatGPT knocked this one out of the park too, saving Tia a lot of time finding that information, so she was freed up to focus on other work.
I could go on and on and on. Email me if you want ~12 more specific examples. With citations.
But also realize this: I am elite at asking very good, well-specified, very clear, well-researched questions, because we built Stack Overflow.
You want to get good at LLMs? Learn how to ask better questions of evil genies. I was raised on that. 🧞
Jeff Atwood reshared this.
Phil Wolff, in reply to Jeff Atwood
Jeff Atwood, in reply to Phil Wolff
Keith Wansbrough, in reply to Jeff Atwood
Jeff Atwood, in reply to Keith Wansbrough:
In negotiations, the best alternative to no deal
(Contributors to Wikimedia projects, Wikimedia Foundation, Inc.)
Christian Tietze, in reply to Jeff Atwood
Keith Wansbrough, in reply to Jeff Atwood
Hugo, in reply to Jeff Atwood
⊥ᵒᵚ⁄Cᵸᵎᶺᵋᶫ∸ᵒᵘ ☑️, in reply to Jeff Atwood
J. Peterson, in reply to Jeff Atwood
Major Denis Bloodnok, in reply to Jeff Atwood
Tom Bortels, in reply to Jeff Atwood:
@sethrichards
Evil genies with a severe form of ADD of some sort.
You hit it on the head: the prompt is the key.
With an experienced human, vagueness is often acceptable, and they will usually ask for clarification. The AI doesn't ask; it guesses, often incorrectly. So you need to over-specify in the prompt, including things it might be insulting to mention when talking to an experienced human. Then iterate, and aggressively steer that conversation.
This is why I don't see the AI as replacing a human except for trivial situations. It's a force multiplier, but not a replacement, and the skills necessary to use them effectively are non-obvious.
Nik, in reply to Jeff Atwood:
@sethrichards
So your argument is simultaneously:
> LLMs are useful RIGHT FUCKING NOW for SO MANY scenarios
But also, they're only useful because:
> I am elite at asking very good, well specified, very clear, well researched questions, because we built Stack Overflow.
Is it then fair to say that LLMs are likely to be very misleading for people who do not have your "elite" experience?
If not, why not?
Seth Richards, in reply to Jeff Atwood
Jonathan Hartley, in reply to Jeff Atwood:
@sethrichards@mas. Those examples do not make it clear to skeptical drive-by readers like me how you established the extent to which the output you received was actually correct.
Is part of the magic value-add to embrace the idea that for many activities, being "actually correct" isn't the most important criterion? Compared to, e.g., just having a direction to get started in.
If someone could reference or break down examples that did unpack actual correctness, that would be persuasive.
poswald, in reply to Jeff Atwood
Jeff Atwood, in reply to poswald
poswald, in reply to Jeff Atwood
Jeff Atwood, in reply to poswald
poswald, in reply to Jeff Atwood:
Yeah, I was there working at a startup in NYC too… I get your point. You think any AI bubble will be mitigated because the tech can be delivered to the consumer easily this time.
I was making a different point that I think explains why you still hear AI doomers despite it being useful tech. It's still a very dangerous bubble that will likely misallocate vast funds and careers, IMO. Anyway, it's fine. Sorry, I didn't mean to frustrate you with my comment.
Von Javi, in reply to Jeff Atwood
Jeff Atwood, in reply to Von Javi
Von Javi, in reply to Jeff Atwood:
Revenues weren't there because of broadband alone. Broadband did make new applications possible (Netflix streaming, etc.), but all new technologies take time to adopt. E-commerce took almost a decade to reach 2.5% of total sales. It is still not even close to what people thought in 2000.
Profits were scarce because many companies focused on attracting customers, investing no matter the cost. Once the money to support those losses stopped flowing in, the dance stopped and the market crashed. So yes, there are similarities.
Jeff Atwood, in reply to Von Javi
Rob Eickmann, in reply to Jeff Atwood
Jeff Atwood, in reply to Rob Eickmann
retech, in reply to Jeff Atwood:
I'll tag you in a few days with this project I'm working on. VERY much not a big deal, but way beyond my capabilities. I've been using ChatGPT to help build my new portfolio site. During this, I have found it is grossly blind to its own errors. First drafts are always cool, way beyond anything I could do or even afford to pay someone for. But I'll find a glitch and then spend 10 hours trying to get it to track it down. It just pushes the error further down the line, but it's still there. The only fix was to dump that chat window and start fresh, completely rephrasing the issue and the desired resolution.
Ironically, this is more like a human than anything else. Humans are invariably unable to see their inherent personality and thinking flaws. No matter how well they're pointed out, no matter how hard they are worked on, people invariably spend more time pushing the problem around than actually solving it. We have entire industries built on this very issue: therapists, pop-culture self-improvement, religions... For the last 19 days I've run into this same issue with it every single day, and spent way more time not fixing minor issues it generated than actually moving forward.
Five times it even gave me code to drop in that had spelling errors. We'd track the bug down and it would blame me. I copied and pasted that very code and fed it back to it to find the issue, and it then denied it wrote that error. Talk about a freakishly human thing to do.
I've used it now for 2yrs to help with art projects. It's far better for that than almost every human I know. With the correct personality framework, it ends up being incredibly useful as a sort of partner in the project.
I do think there's a lot it cannot do, yet. For specific tasks it is better than many humans can be. And I think, given the resources being tossed at it, this is going to rework most all of human culture/industry/interaction. But if it already has human flaws built into it, I suspect that those will grow in a similar way.
Jeff Atwood, in reply to retech
Loïc Denuzière, in reply to Jeff Atwood
retech, in reply to Jeff Atwood
Jeff Atwood, in reply to retech:
"The Best Code is No Code At All" (Jeff Atwood, Coding Horror)
Frank Quednau, in reply to Jeff Atwood
Elijah, in reply to Jeff Atwood:
I'm right there with you. The increased productivity is staggering when you know how to write the prompts.
1. Pretend you're writing a legal document or contract: say the things that seem obvious, and be painfully precise.
2. Use the LLM to eliminate tedious tasks entirely.
3. Treat it like a smart junior team member you're collaborating with: give it the shape of what you expect the result to be.
Using these rules, what used to take 3 days can be accomplished in 3 hours.
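To make rules 1 and 3 above concrete, here is a tiny illustration of what a contract-style, over-specified prompt looks like: a helper that spells out the "obvious" constraints and the expected result shape before asking for anything. Everything here (the function, the field names, the template) is a hypothetical sketch, not any particular vendor's API.

```python
# Sketch of an over-specified, contract-style prompt (rules 1 and 3 above).
# All names and the template format are illustrative assumptions.

def build_prompt(task: str, context: str, constraints: list[str], result_shape: str) -> str:
    """Assemble a prompt that states the task, context, mandatory constraints,
    and the exact shape of the expected result."""
    lines = [
        f"Task: {task}",
        f"Context: {context}",
        "Constraints (all mandatory):",
    ]
    lines += [f"- {c}" for c in constraints]
    lines.append(f"Return exactly: {result_shape}")
    return "\n".join(lines)

prompt = build_prompt(
    task="Summarize local resources for getting a state ID without a fixed address",
    context="Reader is unhoused; output will be printed on paper",
    constraints=[
        "Cite the agency and street address for every resource",
        "Flag anything that requires an appointment",
        "No URLs longer than one line",
    ],
    result_shape="a numbered list, one resource per item",
)
print(prompt)
```

The point is that nothing is left for the model to guess: the audience, the medium, and the output format are all pinned down before the first token is generated.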
Jeff Atwood reshared this.
Curtis Carter, in reply to Elijah:
@elijah Studies indicate that we overestimate how much it actually speeds us up, but treating it as a junior dev or an intern is the way to work with it.
I complain constantly about the mistakes it makes, but I often use it to scaffold boilerplate and make quick small adjustments successfully. I just have to be super vigilant about which parts I commit. When possible, providing an example of what you're trying to accomplish helps.
In practice it seems to only speed me up ~20%
Jeff Atwood, in reply to Curtis Carter
deepfryed, in reply to Jeff Atwood:
Nobody is questioning the practical utility; the problems are all fundamentally about economics. Unless someone makes breakthroughs that can generate ROI at scale, you're going to reach a threshold where there's not enough capital in the market to sustain the ongoing investment while simultaneously starving other industries of investment.
Obviously the investors know what they're doing, right? That's probably what everyone assumes at this stage 😀
tinsukE, in reply to Jeff Atwood:
Interesting anecdotal evidence!
Now, how about we get serious and publish/wait for some (at least potentially) unbiased study/research on that?
Because I haven't seen any. All I've seen are the likes of this one, negative about Centaur:
circumstances.run/@davidgerard…
David Gerard (@davidgerard@circumstances.run)
GSV Sleeper Service
Jeff Atwood, in reply to tinsukE:
@tinsuke Here are the specific examples. Feel free to explain why I'm wrong. I'll be waiting. Good luck, pal. infosec.exchange/@codinghorror…
Jeff Atwood
2025-09-28 03:40:29
tinsukE, in reply to Jeff Atwood:
Those examples sound OK, but I'm not particularly interested in their specifics or in picking them apart.
I'd be interested in how representative of the overall experience they are. Because they're still anecdotal evidence, and I don't think one could generalize LLMs' usefulness from them.
That's what I meant by expecting some unbiased research or study with thorough analysis, especially given how LLM users seem to be bad at estimating the benefits: metr.org/blog/2025-07-10-early…
Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
METR (metr.org)
Marko Karppinen, in reply to Jeff Atwood:
Largely agreed, but given the literal trillions we're spending, I feel the bar for this not being a financial bubble is much higher than the mere existence of utility.
After the dust settles, will we have useful LLMs? Yes. Will most AI investors have lost their shirts? Also yes.
Johnnyvibrant, in reply to Marko Karppinen
Jeff Atwood, in reply to Johnnyvibrant
Marko Karppinen, in reply to Jeff Atwood
Jeff Atwood, in reply to Marko Karppinen
Osma A 🇫🇮🇺🇦, in reply to Jeff Atwood:
Bad news "subprime housing bubble doomers". I've found homes incredibly useful and reduce life on street (and/or make people much happier of their conditions).
This is NOTHING like previous overleveraged financing and not REMOTELY like a true bubble because people live in houses and banks won't yank them.
If you find this to be uncomfortable, sorry, but lessons have to be learned.
@codinghorror
Jeff Atwood, in reply to Osma A 🇫🇮🇺🇦
Mikarnage, in reply to Jeff Atwood
John Parker, in reply to Jeff Atwood
Jeff Atwood, in reply to John Parker
John Parker, in reply to Jeff Atwood:
Ahh… so the onus is on me to somehow uncover the true costs of the model training, etc., despite the fact that all of the players in the industry go to great lengths to obfuscate them?
Guess I’ll be walking away then. 🫡
Jeff Atwood, in reply to John Parker:
"The Road Not Taken is Guaranteed Minimum Income" (Jeff Atwood, Coding Horror)
John Parker, in reply to Jeff Atwood:
To be clear, Jeff, I firmly believe that what you're doing in terms of wealth distribution, both with your personal wealth and the "Stay Gold" initiative, is incredibly admirable. Whilst it's an option only available to a few, taking a top-down approach such as the one you're taking is one of the few ways meaningful change can be enacted.
It's a pity there aren't more people out there with the same attitude, and the courage to put their money where their mouth is.
Nik, in reply to Jeff Atwood:
@Middaparka
Here are a couple to start with:
1. mit-genai.pubpub.org/pub/8ulgr…
> The unfettered growth in Gen-AI has notably outpaced global regulatory efforts, leading to varied and insufficient oversight of its socioeconomic and environmental impact [...]
2. Google's 2025 Environment Impact report, sustainability.google/reports/…
> Compared to 2023, our total [CO2] emissions increased by 22%, primarily due to increases in data centre capacity [...] for AI.
The Climate and Sustainability Implications of Generative AI
An MIT Exploration of Generative AI
J. Peterson, in reply to Jeff Atwood:
I heard a podcast recently (Prof G, I think) predicting the $$$ bubble will pop, but the utility will remain.
A good analogy is PCs. The 1980s PC built many fortunes (remember Gateway 2000?) but eventually became a low-margin commodity.
disorderlyf, in reply to Jeff Atwood:
I'll stop calling it a bubble when core functionality stops being neglected in favor of shoehorning it into everything, no matter how hard one tries to actively avoid it, as justification for circular investments.
As much as I'm sceptical of anyone saying it actually improves their ability to do X task with Y amount of people, websites didn't die when the dot-com bubble burst, and neither did cryptocurrencies. They just got relegated to the tasks they were actually useful for, after enough blood was spilled to write the regulations with it. All of my complaints about using LLMs for things can be resolved without killing it off entirely.
Lately my issue is more with the zero-sum nature of it. It'd be difficult now, but I could've easily gotten along without the internet when that bubble burst. I got along just fine each time cryptocurrencies went bust. What people are reducing to a two-letter marketing phrase from half a century ago is something I'm constantly having to actively avoid at basically every step, and even then there's very likely personal data of mine that I cannot prevent from being fed into training data, no matter how loudly and how explicitly I state that I DO NOT consent.
If it's so useful, I shouldn't need to be cautious about my operating system, the tools I use, where I host my projects, and what configuration I have set for everything down that pipeline, or risk remaining in a perpetual state of unemployment if I don't change the workflow I've had for over a decade, so that at every one of those steps my hand is forced further and further away from the vision I, as the creator of said projects, had in mind and more towards tweaking a system I never asked to become the entirety of my work. If it's so useful, its own merits will make me curious and I'll take the plunge of my own volition.
But how it's done now, how pervasive and inescapable it's becoming, how stigmatised wanting to perfect a craft with your own two hands at every step of the way is becoming, it's less reminiscent of a revolutionary paradigm shift and more reminiscent of the cult I left when I entered adulthood.
Jeff Atwood, in reply to disorderlyf
Marius Gundersen - mdg 🌻, in reply to Jeff Atwood:
There is a bubble because there is no way these AI companies will be profitable. The dotcom bubble burst not because the internet wasn't useful (it was) but because all the dotcom companies were unprofitable.
Investors expect exponential growth, but there is no way for OpenAI to grow much further, and it's difficult for them to charge customers any more money. AI models are too easy for competitors to replicate, so there is no lock-in; customers can go to a competitor at any time.
Marius Gundersen - mdg 🌻, in reply to Marius Gundersen - mdg 🌻
Marius Gundersen - mdg 🌻, in reply to Marius Gundersen - mdg 🌻:
And we've seen the diminishing returns on new LLM models. There is exponential growth in the cost to develop a new, marginally better model. There just isn't the demand or willingness to pay for that model.
Once a technology becomes good enough, it's more about convenience than quality. MP3 isn't the best audio format, but it's dominant; same with streaming movies, even though Blu-ray is technically much better.
Even small LLMs have been shown to be good enough for most use...
Jeff Atwood, in reply to Marius Gundersen - mdg 🌻:
@gundersen Yep. Already said that here. Feel free to read it. Or don't. I really don't care. You do you. infosec.exchange/@codinghorror…
Jeff Atwood
2025-09-21 16:22:36
Sephster, in reply to Jeff Atwood
Sasha, in reply to Jeff Atwood:
Deep AI/ML bubble or GenAI bubble? I think there is a difference, and unless deep AI/ML can take up the momentum, I think GenAI will pop. There was a huge web bubble, and yet here I am 25 years later, replying directly to a legend via the web.
I hold with those who feel we're overestimating in the short term and underestimating in the long term.
Don't have a million though.
Hakan Bayındır, in reply to Jeff Atwood:
Considering how these models are trained and how the fair-use principle is abused, isn't praising the current crop of big AI models a bit in contradiction with your values?
Or do I have a wrong impression of you regarding ethics, privacy, and doing the right thing?
Jeff Atwood, in reply to Hakan Bayındır:
@bayindirh As I said here: infosec.exchange/@codinghorror…
Jeff Atwood
2025-09-08 01:36:36
Hakan Bayındır, in reply to Jeff Atwood:
Creative Commons also has a non-commercial, no-derivs, attribution, share-alike license (CC-NC-BY-SA), which I license my blog under. This normally blocks AI training (no transformation, no selling, must cite), so CC doesn't allow free rein over training, and I don't want to feed models with my output.
So your stance is, "tech is more important, we can figure ethics later" AFAICS.
Thanks.
Jeff Atwood, in reply to Hakan Bayındır:
Not at all what I said. I'm saying licensing is a FUCKING NIGHTMARE problem. Have you even once looked at how difficult music licensing is, alone? Protip: read this: infosec.exchange/@codinghorror… and then this: infosec.exchange/@codinghorror…
Jeff Atwood
2025-09-08 01:49:57
Jeff Atwood, in reply to Jeff Atwood
Hakan Bayındır, in reply to Jeff Atwood:
I might have misunderstood you; sorry if I did.
My rule zero is: don't do anything that you wouldn't want to experience yourself. So, again, if I misunderstood you, sorry about that (English is not my native language to begin with). It's not my intention to put words into anyone's mouth.
Yes, I know music licensing is a hell of an onion. I played in an orchestra and have enough musician friends to have experienced it in close proximity.
Jeff Atwood, in reply to Hakan Bayındır
Hakan Bayındır, in reply to Jeff Atwood
Jeff Atwood, in reply to Hakan Bayındır:
@bayindirh I don't condone stealing at all, but "let's create another fifty thousand different nightmare-mode bureaucracy licensing systems like music" is really not appealing to me either. Creators should get paid for their work, for sure.
You, of all people, know that musicians get screwed more than anyone else by that "perfectly legal and OK" licensing system. So is it a good system, then?
Hard things are hard.
Hakan Bayındır, in reply to Jeff Atwood:
No; music licensing and academic publishing are the two mediums that rip off creators the most. I support neither model in its current form.
My proposition is a narrower interpretation of fair use, plus license detection on the page.
If you're going to sell the model, or access to it, assume everything is "All rights reserved". Any license preventing transformation stops scraping. Viral licenses are assumed to affect all output. Exclude cite-requiring content if your model can't cite. Simple.
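A minimal sketch of the rules proposed here, as a filtering function a trainer could run per document. The license groupings and names below are illustrative assumptions, not legal advice and not any existing tool's API.

```python
# Hypothetical sketch of the proposed filtering rules: may a commercial model
# train on a document, given its declared license? Groupings are illustrative.

BLOCKS_TRANSFORMATION = {"CC-BY-ND", "CC-BY-NC-ND", "All rights reserved"}
VIRAL = {"CC-BY-SA", "CC-BY-NC-SA", "GPL-3.0"}  # license assumed to affect all output
REQUIRES_CITATION = {"CC-BY", "CC-BY-SA", "CC-BY-NC", "CC-BY-NC-SA"}

def may_train_on(license_id, model_can_cite, accepts_viral_output):
    """Apply the proposed rules; unknown licenses default to all-rights-reserved."""
    lic = license_id or "All rights reserved"
    if lic in BLOCKS_TRANSFORMATION:
        return False  # licenses preventing transformation stop scraping outright
    if lic in REQUIRES_CITATION and not model_can_cite:
        return False  # exclude cite-requiring content if the model can't cite
    if lic in VIRAL and not accepts_viral_output:
        return False  # viral terms propagate to output; skip unless accepted
    return True

# No declared license is treated as "All rights reserved": no training.
print(may_train_on(None, model_can_cite=True, accepts_viral_output=True))  # False
```

The defaults do the work: anything undeclared is excluded, which is the conservative reading of "assume everything is All rights reserved".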
Jeff Atwood, in reply to Hakan Bayındır
Hakan Bayındır, in reply to Hakan Bayındır:
The people building "The Stack" use a license-filtering system to select what to include. LLMs are "smart" enough to understand the licensing lines in the things they ingest.
If the industry wants, we can add relevant HTTP headers to our pages to signal our stance.
These are simple, open ways to communicate what creators want. The only obstacle is the AI companies. Will they cooperate?
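One open mechanism in this spirit already exists: several AI crawlers publicly document robots.txt user agents (GPTBot for OpenAI, Google-Extended for Google's AI training, CCBot for Common Crawl). Whether crawlers actually honor the file is exactly the cooperation question raised here. A minimal opt-out sketch:

```
# robots.txt — ask the documented AI training crawlers to stay out
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```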
Jeff Atwood, in reply to Hakan Bayındır
Jeff Atwood, in reply to Jeff Atwood
Hakan Bayındır, in reply to Hakan Bayındır
Jeff Atwood, in reply to Hakan Bayındır
Hakan Bayındır, in reply to Jeff Atwood:
I will try. Try, because I'm in an unpleasantly busy period of my life. On the other hand, the 500-character limit makes us look more conflicted than we are.
I'm aware of the pitfalls and shortcomings of my proposal, because it's purely technical, but the problem is mostly social.
Again, technical problems are easy, humans are hard.
The proposal I'll write will technically work until it hits the real world, because of humans and the tragedy of the commons.
Major Denis Bloodnok, in reply to Jeff Atwood
Nik, in reply to Jeff Atwood
Jeff Atwood, in reply to Nik
Nik, in reply to Jeff Atwood:
Here are four easy ones to start with:
- Training on copyrighted data without compensation and/or without respecting the license
- "bias washing"
- "accountability washing"
- Environmental impact
Also interested to hear what other ethical issues you've considered.
Jeff Atwood, in reply to Nik
Nik, in reply to Jeff Atwood:
I've read your other replies in the thread, where you repeatedly state that licensing is a "nightmare problem".
1. Is your position that this problem is so difficult that AI model builders should just ignore licensing and ingest stolen content?
2. If that's not your position, do you agree that the vast majority of AI models are built on stolen work?
3. If you agree with (2), then do you think it is ethical to use AI models that you know are trained on stolen work?
Aiono, in reply to Jeff Atwood:
The fact that it's not completely useless aggravates the effects of the bubble; it doesn't soften them. I think the bubble will pop because, even though it's useful, the profits don't make up for the cost of building those models. The current market value of AI companies is at least 150x revenue right now, while it was at best 10x pre-AI. There is no indication whatsoever that the companies will make up that much revenue anytime soon.
I highly recommend reading profgalloway.com/bubble-ai/
Bubble.ai | No Mercy / No Malice
Scott Galloway (No Mercy / No Malice)
Jeff Atwood, in reply to Aiono
Aiono, in reply to Jeff Atwood:
20% is still a lot less than the investment increase. Also, that's just your claim, while there are studies to the contrary: metr.org/blog/2025-07-10-early… . I know you say it's not about coding, but the study shows that self-assessments are unreliable in general.
Your language is completely rude. I just gave a different opinion than yours, and you are talking disrespectfully. My argument might be bullshit, but then call out my argument, not the person.
Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
METR (metr.org)
Jorge Salvador Caffarena, in reply to Jeff Atwood
Jeff Atwood, in reply to Jorge Salvador Caffarena
Jorge Salvador Caffarena, in reply to Jeff Atwood:
That still doesn't explain it. Something can be useful, make companies overspend massively, create a bubble that bursts, and still be useful technology afterwards, no matter what.
Housing was, is, and will be useful after the 2008 bubble.
Lutin Discret, in reply to Jeff Atwood
Kevin Coulombe, in reply to Jeff Atwood:
I've used LLMs successfully to input data into my brain: to analyze, compare, and basically make sense of data in various shapes from multiple sources. I've even used them to surface common patterns to guide my learning of a craft, like languages and technology.
I have yet to use them successfully to output anything from my brain, though. Be it writing code or an email, the mechanics of transferring my thoughts to a destination format is hardly ever the limiting factor: the bottleneck is my brain. It's the silver-bullet problem all over again.
The solution seems easy: bypass the brain and have the LLM go from input to output on its own. Luckily, we have hundreds of vibe coders live-streaming their fall from optimism to show that's a bad idea. If I don't understand what the machine is doing, there is no way I'll trust that work, at least until there is a revolutionary leap in the technology.
That leaves one area I can think of: have it challenge my output. I can imagine significant incremental gains in productivity there, but I haven't had the chance to try any offering like that, for either code or prose...
Jeff Atwood, in reply to Kevin Coulombe
Frank, in reply to Jeff Atwood:
But whether it's useful or not is not the whole point of deciding whether it's a bubble.
I find it useful, and I would pay about $3-5/month max for the usefulness it provides me. So we will see whether they are able to operate the text generators at such prices, or whether there are enough people who are indeed willing to pay $100+ per month per seat.
This is IMHO the question the coming months/years have to answer to decide bubble or not, and how big. I don't know the answer; we will see.
javi, in reply to Jeff Atwood:
Honest question: how are you seeing it make people more effective?
I work in tech, and in the last three years I've seen it not only being adopted, but also made mandatory in some cases, at three different companies. At this point, everyone is using LLMs for one thing or another.
What I have not seen is any significant gain in productivity, if any. We don't ship faster, we don't produce fewer bugs, we don't communicate better, and our documentation is no better than it used to be. (Some) people enjoy using it, for sure; it makes their jobs more fun... But I haven't seen it making them more effective.
As you say, it can save a bit of time on some tasks. I've also seen it create messes that took an entire team days to sort out. So I'm not sure the overall result is all that spectacular.
Jeff Atwood, in reply to javi:
@javi Examples were provided here, have a look: infosec.exchange/@codinghorror…
Jeff Atwood
2025-09-28 03:40:29
Nicolas Delsaux, in reply to Jeff Atwood:
But on this particular subject, I think you may have too narrow a view.
LLMs are pushed by top-level CEOs because they enable their wet dream of an intelligent, but non-reflecting, workforce.
Mike Johnson, in reply to Jeff Atwood:
It can be useful and still be a bubble! I mean, tulip bulbs could still be grown even if their valuation changed dramatically.
AI companies are doing some seriously questionable financing, laundering valuations to pull in more funding... It seems like some of the big providers are on increasingly shaky financial footing. We'll still have AI, of course, but the valuations might change dramatically.