Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task [edited post to change title and URL]
Note: this lemmy post was originally titled "MIT Study Finds AI Use Reprograms the Brain, Leading to Cognitive Decline" and linked to this article, which I cross-posted from this post in !fuck_ai@lemmy.world.
Someone pointed out that the "Science, Public Health Policy and the Law" website which published this click-bait summary of the MIT study is not a reputable publication deserving of traffic, so, 16 hours after posting it I am editing this post (as well as the two other cross-posts I made of it) to link to MIT's page about the study instead.
The actual paper is here and was previously posted on !fuck_ai@lemmy.world and other lemmy communities here.
Note that the study with its original title got far fewer upvotes than the click-bait summary did 🤡
Imgonnatrythis
in reply to Arthur Besse • • •
suddenlyme
in reply to Arthur Besse • • •
hisao
in reply to suddenlyme • • •
xthexder
in reply to hisao • • •Personally, I don't use AI because I see all the subtle ways it's wrong when programming. The more I pay attention to things like AI search results, the more it seems there's almost always something misrepresented or subtly incorrect in the output, and for any topics I'm not already fluent in, I likely won't notice these things until they're already causing issues.
hisao
in reply to xthexder • • •
TubularTittyFrog
in reply to suddenlyme • • •It's not any different than eating fast/processed food vs eating healthy.
It warps your expectations.
Ganbat
in reply to Arthur Besse • • •
svc
in reply to Ganbat • • •
FauxLiving
in reply to svc • • •
masterofn001
in reply to Ganbat • • •
Wojwo
in reply to Arthur Besse • • •
socphoenix
in reply to Wojwo • • •
sqgl
in reply to Wojwo • • •
ALoafOfBread
in reply to Wojwo • • •
sheogorath
in reply to ALoafOfBread • • •Fuck, this is why I've been feeling dumber myself since getting promoted to more senior positions, where I only work at the architectural level and on stuff that the more junior staff can't work on.
With LLMs, basically my job is still the same.
rebelsimile
in reply to ALoafOfBread • • •
vacuumflower
in reply to Wojwo • • •My dad around 1993 designed a cipher better than RC4 (I know that's not a high mark now, but it kinda was then), which passed an audit by a relevant service.
My dad around 2003 was still intelligent; he'd explain interesting mathematical problems to me and my sister, and notice similarities to them and other interesting things in real life.
My dad around 2005 was promoted to a management position and was already becoming kinda dumber.
My dad around 2010 was a fucking idiot; you'd think he was mentally impaired.
My dad around 2015 apparently went to a fortuneteller to "heal me from autism".
So yeah, I think it's a bit similar to what happens to elderly people when they retire. Everything needs to be trained, and real tasks also give you a feeling of life; giving orders and going to endless could-have-been-an-email meetings makes you both dumb and depressed.
TubularTittyFrog
in reply to Wojwo • • •that's the Peter principle.
people only get promoted up to the point where their inadequacies/incompetence show, and then their job becomes covering for it.
hence why so many middle managers' primary job is managing the appearance of their own competence first and foremost, and they lose touch with the actual work being done... which is a key part of how you actually manage it.
Wojwo
in reply to TubularTittyFrog • • •Yeah, that's part of it. But there's something more fundamental: it's not just rising up the ranks but also time spent in management. It feels like someone can get promoted to middle management and be good at the job initially, but as the job becomes more about telling others what to do and filtering data up the corporate structure, a certain amount of brain rot sets in.
I had just attributed it to age, but this could also be a factor. I'm not sure it's enough to warrant studies, but it's interesting to me that just the act of managing work done by others could contribute to mental decline.
Tracaine
in reply to Arthur Besse • • •I don't refute the findings but I would like to mention: without AI, I wasn't going to be writing anything at all. I'd have let it go and dealt with the consequences. This way at least I'm doing something rather than nothing.
I'm not advocating for academic dishonesty of course, I'm only saying it doesn't look like they bothered to look at the issue from the angle of:
"What if the subject was planning on doing nothing at all and the AI enabled the them to expend the bare minimum of effort they otherwise would have avoided?"
acosmichippo
in reply to Tracaine • • •sad that people knee-jerk downvote you, but i agree. i think there is definitely a productive use case for AI if it helps you get started learning new things.
It helped me a ton this summer learn gardening basics and pick out local plants which are now feeding local pollinators. That is something i never had the motivation to tackle from scratch even though i knew i should.
frongt
in reply to acosmichippo • • •Given the track record of some models, I'd question the accuracy of the information it gave you. I would have recommended consulting traditional sources.
acosmichippo
in reply to frongt • • •jfc you people are so eager to shit on anything even remotely positive of AI.
Firstly, the entire point of this comment chain is that if "consulting traditional sources" was the only option, I wouldn't have done anything. My back yard would still be a barren mulch pit. AI lowered the effort-barrier of entry, which really helps me as someone with ADHD and severe motivation deficit.
Secondly, what makes you think i didn't? Just because I didn't explicitly say so? yes, i know not to take an LLM's word as gospel. i verified everything and bought the plants from a local nursery that only sells native plants. There was one suggestion out of 8 or so that was not native (which I caught before even going shopping). Even with that overhead of verifying information, it still eliminated a lot of busywork searching and collating.
Hominine
in reply to acosmichippo • • •
Pycorax
in reply to Tracaine • • •
hisao
in reply to Pycorax • • •This is a general statement, right? Try to forget about the context, then, and read it again 😅
I actually think the moments when AI goes wrong are the moments that stimulate you and make you realize better what you're doing and what you want to achieve. When you write follow-up prompts to fix the issue, you're essentially doing problem solving: figuring out what to ask to make it do the exact thing you want. And it's never going to be always right, simply because most cases of it being wrong come down to you not providing enough details about what you actually want. So step-by-step AI usage, with clarifications and fixes, is always going to be a brain-stimulating problem-solving process.
dai
in reply to hisao • • •So vibe coding?
I've tried using LLMs for a couple of tasks before I gave up on the jargon outputs and nonsense loops they kept feeding me.
I'm no coder/programmer, but for the simple tasks/things I needed, I took inspo from others, understood how the scripts worked, and added comments to my own scripts showing my understanding and explaining what they're doing.
I've written honestly so much, just throwing spaghetti at the wall and seeing what sticks (works). I have fleshed out a method for using base16 colour schemes to modify other GTK* themes so everything in my OS matches. I have declarative containers, IP addresses, secrets, and so much more. Thanks to the folks who created nix-colors; I should really contribute to that repo.
I still feel like a noob when it comes to Linux, but seeing my progress in ~1y, it's massive.
I managed to get a working Google Coral after everyone else's scripts (that I could find on GitHub) had quit working (NixOS). I've since ditched that module, as the upkeep required isn't worth a few ms in detection speed.
I don't believe any of my configs would be where they are if I'd asked an LLM to slap them together for me. I'd have none of the understanding of how things work.
hisao
in reply to dai • • •My wg.conf file getting the wrong SELinux context and the wg-quick daemon refusing to work because of that: I never knew such a thing even existed, and the LLM just casually explained it and provided a fix.
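For context, that class of fix is tiny. A minimal sketch of diagnosing and repairing a mislabeled WireGuard config on an SELinux system; the file path is an assumption, not hisao's actual setup:

```bash
# Show the SELinux label on the config file; an unexpected type here
# is what makes wg-quick refuse to read it. (Example path.)
ls -Z /etc/wireguard/wg0.conf

# Reset the file to the default context defined by the loaded policy.
sudo restorecon -v /etc/wireguard/wg0.conf
```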
FauxLiving
in reply to hisao • • •LLMs are good as a guide to point you in the right direction. They're about the same kind of tool as a search engine, but more flexible in answering questions.
Much like search engines, you need to be aware of the risks and limitations of the tools. Google will give you links that are crawling with browser-exploiting malware, and LLMs will give you answers that are wrong or directions that are destructive to follow (like incorrect terminal commands).
We’re a bit off from the ability to have models which can tackle large projects like coding complete applications, but they’re good at some tasks.
I think the issue is when people try to use them to replace having to learn instead of as a tool to help you learn.
hisao
in reply to FauxLiving • • •I believe they (Copilot and similar) are good for coding large projects if you use them in small steps and micromanage everything. I think in this mode of use they save a huge amount of time and, more importantly, they keep you from wasting your energy on the grindy/stupid/repetitive parts and let you save it for the actually interesting/challenging parts.
Pycorax
in reply to hisao • • •Well, that's why I was asking for an example of sorts. The problem is that if you're just starting out, you don't know what you don't know, and more importantly, you won't be able to tell if something is wrong. It doesn't help that LLMs are notoriously good at being confidently incorrect and prone to hallucinations.
When I tried it for programming, more often than not it hallucinated functions and APIs that did not exist. And I know they don't, because I've been working at this for more than half of my life, so I have the intuition to detect bullshit when it appears. Learners, however, are unlikely to be able to differentiate that.
hisao
in reply to Pycorax • • •When you run it, test it, and it doesn't work as expected (or doesn't work at all), that most likely means something is wrong. Not all fields of work require programs to be 100% correct on the first try; pretty often you can run and test your code an infinite number of times before shipping/deploying.
hisao
in reply to Tracaine • • •
WhyJiffie
in reply to hisao • • •a custom VPN without security-minded planning and knowledge? that sounds like a disaster.
surely you could do other things that have more impact for yourself, still with computers. use wireguard and spend the time setting up your services and network security.
and port forwarding... I don't know where you're running that, but Linux iptables can do that too, in the kernel, with better performance.
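For reference, the kind of in-kernel forwarding meant here looks roughly like this; the addresses and ports are made-up examples:

```bash
# Let the kernel route packets between interfaces.
sudo sysctl -w net.ipv4.ip_forward=1

# Rewrite incoming TCP on port 8080 to an internal host.
sudo iptables -t nat -A PREROUTING -p tcp --dport 8080 \
    -j DNAT --to-destination 192.168.1.10:80

# Allow the rewritten packets through the FORWARD chain.
sudo iptables -A FORWARD -p tcp -d 192.168.1.10 --dport 80 -j ACCEPT
```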
hisao
in reply to WhyJiffie • • •Oops, I meant self-hosting a wireguard server, not actually doing an alternative to wireguard or openvpn themselves...
With my previous paid VPN, I had to use natpmpc to ask their server to forward/bind ports for me, and I had to do that every 45 seconds. It's nice to have a bash script running as a systemd daemon that does that in a loop, parses the output, and saves the remote ports the server gave us this time to a file, in case we need them (like for setting up a tor relay). I also got another script and daemon for the tor relay that monitors forwarded-port changes (from the file), updates torrc, and restarts the tor container. All this by Copilot, without knowing bash at all. Without having to write complex regexes to parse that output, or regexes to overwrite the tor config, etc. It wasn't a single prompt; it required some troubleshooting and clarifications, and ultimately I got to know some of the low-level details of this myself. Which is also great.
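The renewal loop being described is small. A minimal sketch based on Proton's documented natpmpc invocation; the gateway address comes from their guide, while the port file path and the output parsing are assumptions about what such a script could look like:

```bash
#!/usr/bin/env bash
# Renew NAT-PMP port mappings every 45 s and record the public port we get.
set -euo pipefail

GATEWAY=10.2.0.1                            # Proton VPN's NAT-PMP gateway
PORTFILE="$HOME/.cache/vpn-forwarded-port"  # where other services read the port

while true; do
    # Request/renew 60-second UDP and TCP mappings; keep the output for parsing.
    out=$(natpmpc -a 1 0 udp 60 -g "$GATEWAY" && natpmpc -a 1 0 tcp 60 -g "$GATEWAY") \
        || { echo "natpmpc failed" >&2; exit 1; }
    # Pull the mapped public port out of the output and save it for other services.
    echo "$out" | grep -oP 'Mapped public port \K[0-9]+' | tail -n 1 > "$PORTFILE"
    sleep 45
done
```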
WhyJiffie
in reply to hisao • • •oh, that's fine then, recommended even.
oh, so this is management automation that requests an outside system to open ports, and updates services to use the ports you got. that's interesting! what VPN service was that?
be sure to run shellcheck on your scripts though, it can point out issues. aim for it to have no output; that means all seems ok.
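Concretely, that looks like this (the script name is hypothetical):

```bash
# Lint the script; shellcheck prints nothing and exits 0 when it finds no issues.
shellcheck port-forward.sh

# -x additionally follows files the script sources.
shellcheck -x port-forward.sh
```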
hisao
in reply to WhyJiffie • • •Proton VPN.
It does some logging though, and I read what it logs via systemctl --user status. Anyway, those scripts/services so far are of a simple kind: if they don't work, I notice immediately, because my torrents stop seeding or my tor/i2p proxy ports stop working in the browser. When an error can only be discovered conditionally somewhere during a long runtime, it needs more complicated and careful testing.
How to manually set up port forwarding | Proton VPN
Serinus
in reply to hisao • • •
hisao
in reply to Serinus • • •
Feathercrown
in reply to Tracaine • • •You haven't done anything, though. If you're getting to the point where you're doing actual work instead of letting the AI do it for you, then congratulations, you've learned some writing skills. It would probably be more effective to use some non-AI methods to learn as well, though.
If you're doing this solely to produce output, then sure, go ahead. But if you want good output, or output that actually reflects your perspective, or the skills to do it yourself, you've gotta do it the hard way.
QuadDamage
in reply to Arthur Besse • • •Microsoft reported the same findings earlier this year, spooky to see a more academic institution report the same results.
microsoft.com/en-us/research/w…
Abstract for those too lazy to click:
sqgl
in reply to QuadDamage • • •
felsiq
in reply to sqgl • • •
sqgl
in reply to felsiq • • •
mushroommunk
in reply to sqgl • • •
Hackworth
in reply to Arthur Besse • • •Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task – MIT Media Lab
veebee
in reply to Arthur Besse • • •
sudo_shinespark
in reply to Arthur Besse • • •
morto
in reply to sudo_shinespark • • •
TubularTittyFrog
in reply to morto • • •bingo.
it's like a health supplement company telling you eating healthy is stupid when they have this powder/pill you should take.
tech evangelism is very cultish, and one of its values is worshipping 'youth' and 'novelty' to an absurd degree, as if youth were automatically superior to experience and age.
Korkki
in reply to Arthur Besse • • •One of those papers that are basically "water is wet, researchers discover".
canadaduane
in reply to Arthur Besse • • •
radix
in reply to canadaduane • • •
Jesus
in reply to Arthur Besse • • •
theneverfox
in reply to Arthur Besse • • •
lechekaflan
in reply to Arthur Besse • • •Another reason for refusing those so-called tools... it could turn one into another tool.
drspawndisaster
in reply to lechekaflan • • •
surph_ninja
in reply to lechekaflan • • •
mika_mika
in reply to surph_ninja • • •
unpossum
in reply to Arthur Besse • • •
FreedomAdvocate
in reply to Arthur Besse • • •What a ridiculous study. People who got AI to write their essay can’t remember quotes from their AI written essay? You don’t say?! Those same people also didn’t feel much pride over their essay that they didn’t write? Hold the phone!!! Groundbreaking!!!
Academics are a joke these days.
FauxLiving
in reply to FreedomAdvocate • • •
manefraim
in reply to FauxLiving • • •
FreedomAdvocate
in reply to manefraim • • •
DownToClown
in reply to Arthur Besse • • •The obvious AI-generated image and the generic name of the journal made me think there was something off about this website/article, and sure enough, the writer of this article is on X claiming that COVID-19 vaccines are not fit for humans and that there's a clear link between vaccines and autism.
Neat.
Tad Lispy
in reply to DownToClown • • •Thanks for the warning. Here's the link to the original study, so we don't have to drive traffic to that guy's website.
arxiv.org/abs/2506.08872
I haven't got time to read it and now I wonder if it was represented accurately in the article.
Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task
codemankey
in reply to Tad Lispy • • •
Tad Lispy
in reply to codemankey • • •
SocialMediaRefugee
in reply to DownToClown • • •
Arthur Besse
in reply to DownToClown • • •Thanks for pointing this out. Looking closer, I see that that "journal" is definitely not something I want to be sending traffic to, for a whole bunch of reasons: besides being anti-vax, they're also anti-trans, and they're gold bugs... and they're asking tough questions like "do viruses exist" 🤡
I edited the post to link to MIT instead, and added a note in the post body explaining why.
Do Viruses Exist? - Science, Public Health Policy and the Law
trashgarbage78
in reply to Arthur Besse • • •what should we do then? just abandon LLM use entirely or use it in moderation? i find it useful to ask trivial questions and sort of as a replacement for wikipedia. also what should we do to the people who are developing this 'rat poison' and feeding it to young people's brains?
edit:
i also personally wouldn't use AI at all if I didn't have to compete with all these prompt engineers and their brainless speedy deployments
GlenRambo
in reply to trashgarbage78 • • •
trashgarbage78
in reply to GlenRambo • • •
Shanmugha
in reply to trashgarbage78 • • •
trashgarbage78
in reply to Shanmugha • • •
mp_complete
in reply to trashgarbage78 • • •
Shanmugha
in reply to trashgarbage78 • • •
surph_ninja
in reply to GlenRambo • • •
orrk
in reply to trashgarbage78 • • •
UnderpantsWeevil
in reply to trashgarbage78 • • •Gotta argue that your more methodical and rigorous deployment strategy is more cost efficient than guys cranking out big ridden releases.
If your boss refuses to see it, you either go with the flow or look for a new job (or unionize).
paequ2
in reply to UnderpantsWeevil • • •I'm not really worried about competing with the vibe coders. At least on my team, those guys tend to ship more bugs, which causes the fire alarm to go off later.
I'd rather build a reputation of being a little slower, but more stable and higher quality. I want people to think, "Ah, nice. Paequ2 just merged his code. We're saved." instead of, "Shit. Paequ2 just merged. Please nothing break..."
Also, those guys don't really seem to be closing tickets faster than me. Typing words is just one small part of being a programmer.
TubularTittyFrog
in reply to trashgarbage78 • • •you should stop using it and use wikipedia.
being able to pull relevant information out of a larger body of it is an incredibly valuable life skill. you should not be replacing that skill with an AI chatbot.
trashgarbage78
in reply to TubularTittyFrog • • •
surph_ninja
in reply to Arthur Besse • • •
rumba
in reply to surph_ninja • • •Yeah, I went over there expecting it to be grandiose and not peer-reviewed. Turns out it's just a cherry-picked title.
If you use an AI assistant to write a paper, you don't learn any more from the process than you do from reading someone else's paper. You don't think about it deeply and come up with your own points and principles. It's pretty straightforward.
But just like calculators, once you understand the underlying math, unless math is your thing, you don't generally go back and do it all by hand because it's a waste of time.
At some point, we'll need to stop using long-form papers to gauge someone's acumen in a particular subject. I suspect you'll be given questions in real time and need to respond to them on video with your best guesses to prove you're not just reading it from a prompt.
UnderpantsWeevil
in reply to surph_ninja • • •Seems like you've made the point succinctly.
Don't lean on a calculator if you want to develop your math skills. Don't lean on an AI if you want to develop general cognition.
5C5C5C
in reply to UnderpantsWeevil • • •I don't think this is a fair comparison because arithmetic is a very small and almost inconsequential skill to develop within the framework of mathematics. Any human that doesn't have severe learning disabilities will be able to develop a sufficient baseline of arithmetic skills.
The really useful aspects of math are things like how to think quantitatively. How to formulate a problem mathematically. How to manipulate mathematical expressions in order to reach a solution. For the most part these are not things that calculators do for you. In some cases reaching for a calculator may actually be a distraction from making real progress on the problem. In other cases calculators can be a useful tool for learning and building your intuition - graphing calculators are especially useful for this.
The difference with LLMs is that we are being led to believe that LLMs are sufficient to solve your problems for you, from start to finish. In the past, students who developed a reflex to reach for a calculator when they didn't know how to solve a problem were thwarted by the fact that the calculator wouldn't actually solve it for them. Nowadays students develop that reflex and reach for an LLM instead, and they can walk away with the belief that the LLM is really solving their problems, which creates both a dependency and a misunderstanding of what LLMs are really suited to do for them.
I'd be a lot less bothered if LLMs were made to provide guidance to students, a la the Socratic method: posing leading questions to the students and helping them to think along the right tracks. That might also help mitigate the fact that LLMs don't reliably know the answers: if the user is presented with a leading question instead of an answer then they're still left with the responsibility of investigating and validating.
But that doesn't leave users with a sense of immediate gratification which makes it less marketable and therefore less opportunity to profit...
UnderpantsWeevil
in reply to 5C5C5C • • •I'd consider it foundational. And hardly small or inconsequential given the time young people spend mastering it.
With time and training, sure. But simply handing out calculators and cutting math teaching budgets undoes that.
This is the real nut of the comparison. Telling kids "you don't need to know math if you have a calculator" is intended to reduce the need for public education.
But the economic vision for these tools is to replace workers, not to enhance them. So the developers don't want to do that. They want tools that facilitate redundancy and downsizing.
It leads them to dig their own graves, certainly.
BananaIsABerry
in reply to UnderpantsWeevil • • •Sorry, the study only examined the ability to respond to SAT writing prompts, not general cognitive abilities. Further, it showed that the ones who used an AI just went back to "normal" levels of ability when they had to write on their own.
UnderpantsWeevil
in reply to BananaIsABerry • • •An ability that changes with practice
Randomgal
in reply to surph_ninja • • •
surph_ninja
in reply to Randomgal • • •
petrol_sniff_king
in reply to surph_ninja • • •
surph_ninja
in reply to petrol_sniff_king • • •
petrol_sniff_king
in reply to surph_ninja • • •
surph_ninja
in reply to petrol_sniff_king • • •
ayyy
in reply to surph_ninja • • •
surph_ninja
in reply to ayyy • • •
petrol_sniff_king
in reply to surph_ninja • • •I don't always use the calculator.
Do you bench press 100 lbs and then give up on lifting altogether?
surph_ninja
in reply to petrol_sniff_king • • •
petrol_sniff_king
in reply to surph_ninja • • •
surph_ninja
in reply to petrol_sniff_king • • •Well, what do you mean by the lifting metaphor?
Many people who use AI are doing it to supplement their workflow, not replace it entirely, though you wouldn't know that from all these ragebait articles.
petrol_sniff_king
in reply to surph_ninja • • •
salty_chief
in reply to Arthur Besse • • •
Reygle
in reply to Arthur Besse • • •
SCmSTR
in reply to Reygle • • •
SCmSTR
in reply to SCmSTR • • •
reptar
in reply to SCmSTR • • •
BussyGyatt
in reply to Arthur Besse • • •Better late than never. Good catch.
Blackmist
in reply to Arthur Besse • • •Anyone who doubts this should ask their parents how many phone numbers they used to remember.
In a few years there'll be people who've forgotten how to have a conversation.
TubularTittyFrog
in reply to Blackmist • • •I've already seen a massive decline, personally and observationally (watching other people), in conversation skills.
Most people now talk to each other like they're exchanging internet comments. They don't ask questions, they don't really engage... they just exchange declaratory sentences. Heck, most of the dates I went on in the past few years had zero real conversation, just vague exchanges of opinion and commentary. A couple of them went full-on streamer, just ranting at me and randomly stopping to ask me nonsense questions.
Most of our new employees the past year or two really struggle with any verbal communication, and if you approach them in person to talk about something they emailed about, they look massively uncomfortable and don't really know how to think on their feet.
Before the pandemic I used to actually converse with people and learn from them. Now everyone I meet feels like interacting with a highlight reel. What I don't understand is why people are choosing this and then complaining about it.
interdimensionalmeme
in reply to Blackmist • • •
Phoenixz
in reply to Blackmist • • •That doesn't require a few years; there are loads of people out there already who have forgotten how to have a conversation.
Especially moderators, who are typically the polar opposite of the word. You disagree with my factually incorrect statement? Ban. Problem solved. You disagree with my opinion? Ban.
Similarly, I've seen loads of users on Lemmy (and before that on Reddit) who just ban anyone who asks questions or disagrees.
It's so nice and easy, living in an echo chamber, but it does break your brain.
zqps
in reply to Blackmist • • •I don't see how that's any indicator of cognitive decline.
Also people had notebooks for ages. The reason they remembered phone numbers wasn't necessity, but that you had to manually dial them every time.
NateNate60
in reply to zqps • • •—a story told by Socrates, according to his student Plato
starman2112
in reply to Blackmist • • •The other day I saw someone ask ChatGPT how long it would take to perform 1.5 million instances of a given task, if each instance took one minute. Mfs cannot even divide 1.5 million minutes by 60 to get 25,000 hours, then by 24 to get 1,041 days. Pretty soon these people will be incapable of writing a full sentence without ChatGPT's input.
Edit to add: divide by 365.25 to get 2.85 years. Anyone who can tell me how many months that is without asking an LLM gets a free cookie emoji
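For the record, the whole chain of divisions fits in one shell line, no LLM required:

```bash
# 1.5 million minutes → hours → days → years; prints 2.85.
echo "scale=2; 1500000 / 60 / 24 / 365.25" | bc
```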
pirat
in reply to starman2112 • • •You forgot doing the years, which is a bit trickier if we take the leap years into account.
According to the Gregorian calendar, every fourth year is a leap year unless it's divisible by 100 – except those divisible by 400, which are leap years anyway. Hence, the average length of one year (over 400 years) must be:
(400 × 365 + 97) / 400 = 365.2425 days
So, 1041 / 365.2425 ≈ 2.85 years
Or 2 years and 310.515 days:
1041 days is just about 2y 310d 12h 21m 36s
Wtf, how did we go from 1041 whole days to fractions of a day? Damn leap years!
Had we not been accounting for them, we would have had 2 years and 310.98 days (1041 / 365 ≈ 2.852).
Or simply 2y 311d if we just ignore that tiny rounding error or use fewer decimals.
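A quick way to double-check that breakdown with bc:

```bash
# Years, using the Gregorian mean year; prints ≈ 2.850161.
echo "scale=6; 1041 / 365.2425" | bc

# Days past two full mean years; prints 310.5150 (≈ 310d 12h 21m 36s).
echo "scale=6; 1041 - 2 * 365.2425" | bc
```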
tehn00bi
in reply to pirat • • •Engineers be like…
1041 / 365 = 2.852
0.852 × 365 = 310.98
Thus 2y 311d. Or really, fuck it, 3y.
Edit: #til
The lemmy app on my phone does basic calculator functions.
pirat
in reply to tehn00bi • • •Seems about right! But really, it often seems pretty useful to me, since it removes a lot of unnecessary information throughout a content feed or thread, though I usually still want to be able to see the exact date and time when tapping or hovering over the value for further context.
Edit: However, the lemmy client I use, Eternity, shows the entire date and time for each comment instead of the age of it, and I'm fine with that too, but unsure what I actually prefer...
Which client and how?
pirat
in reply to starman2112 • • •I want a free cookie emoji!
I didn't ask an LLM, no, I asked Wikipedia: the average Gregorian month is 30.436875 days.
Edit: but since I already knew a year is 365.2425 days, I could, of course, have divided that by the 12 months of a year to get that number.
So, 1041 / 30.436875 ≈ 34.2 months, i.e.
34 months + 6d 3h 30m 35s 999ms 999999ns (or we could call it 36s...)
Edit: 34 months is better known as 2 years and 10 months.
starman2112
in reply to pirat • • •🍪
You got as far as nanoseconds so here's a cupcake for extra credit too 🧁
pirat
in reply to starman2112 • • •
olympicyes
in reply to starman2112 • • •
lennivelkant
in reply to starman2112 • • •Rough estimate using 30 days as average month would be ~35 months (1050 = 35×30). The average month is a tad longer than 30 days, but I don't know exactly how much. Without a calculator, I'd guess the total result is closer to 34.5. Just using my own brain, this is as far as I get.
Now, adding a calculator to my toolset, the average month is 365.2425 d / 12 m = 30.4377 d/m. The total result comes out to about 34.2, so I overestimated a little.
Also, the total time is 1041.66..., which would more correctly be rounded to 1042, but that has a negligible impact on the result.
Edit: I saw someone else went even harder on this, but for early morning performance, I'm satisfied with my work
starman2112
in reply to lennivelkant • • •🍪
Pirat gave me an egg emoji, so I baked some more cupcake emojis. Have one for getting it so close without even using a calculator 🧁
lennivelkant
in reply to starman2112 • • •
billwashere
in reply to Blackmist • • •I still remember all my family's phone numbers from when I was a kid growing up in WV in the '70s.
I currently have my wife’s number memorized and that’s it. Not my mom, my kids, friends, anybody. I just don’t have to. It’s all in my phone.
But I’m also of the opinion that NOT having this info in my head has freed it up for more important things. Like memes and cat videos 🤣
But seriously, I don't think this tool, and AI is just a tool, is dumbing me down. Yes, I think about certain things less, but it allows me to ask different or better questions and just learn differently. I don't necessarily trust everything it spits out, I double-check all code it produces, etc. It's very good at explaining things or providing other examples. Since I'm older, I've heard similar arguments about TV and/or the Internet. LLMs are a very interesting tool with good and bad uses. They are not intelligent, at least not yet, and are not the solution to everything technical. They are very resource-intensive and should be used much more judiciously than they currently are.
Ultimately it boils down to this: if you're lazy, this allows you to be more lazy. If you don't want to continue learning and just rely on it, you're gonna have a bad time. Be skeptical, ask questions, employ critical thinking, take in information from lots of sources, and in the end you will be fine. That is, unless it becomes sentient and wipes us all out.
MourningDove
in reply to Blackmist • • •
Psythik
in reply to Blackmist • • •
UntitledQuitting
in reply to Psythik • • •
MourningDove
in reply to Arthur Besse • • •relying on AI makes people stupid?
Who knew?
Yoshi
in reply to Arthur Besse • • •
LeoshenkuoDaSimpli
in reply to Arthur Besse • • •
eletes
in reply to Arthur Besse • • •Been vibe coding hard on a new project this past week. It's been working really well, but I feel like I've been watching a bunch of TV. It's passive enough that it's like flipping through channels, paying a little attention and then moving on to the next.
Whereas coding it myself would engage my brain, and it might feel more like reading.
It's bizarre because I've never had this experience before.