

Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task [edited post to change title and URL]


Note: this lemmy post was originally titled MIT Study Finds AI Use Reprograms the Brain, Leading to Cognitive Decline and linked to this article, which I cross-posted from this post in !fuck_ai@lemmy.world.

Someone pointed out that the "Science, Public Health Policy and the Law" website which published this click-bait summary of the MIT study is not a reputable publication deserving of traffic, so, 16 hours after posting it I am editing this post (as well as the two other cross-posts I made of it) to link to MIT's page about the study instead.

The actual paper is here and was previously posted on !fuck_ai@lemmy.world and other lemmy communities here.

Note that the study with its original title got far fewer upvotes than the click-bait summary did 🤡




This entry was edited (3 days ago)


in reply to Arthur Besse

It's so disturbing. Especially the bit about your brain activity not returning to normal afterwards. They are teaching kids to use it in elementary schools.
in reply to suddenlyme

I think they meant it doesn't return to non-AI-user levels when you do the same task on your own immediately afterwards. But if you keep doing the task on your own for some time, I'd expect it to return to those levels rather fast.
This entry was edited (3 days ago)
in reply to hisao

That's probably true, but it sure can be hard to motivate yourself to do things yourself when that AI dice roll is right there to give you an immediate dopamine hit. I'm starting to see things like vibecoding being as addictive as gambling.
Personally I don't use AI because I see all the subtle ways it's wrong when programming. The more I pay attention to things like AI search results, the more it seems there's almost always something misrepresented or subtly incorrect in the output, and for any topic I'm not already fluent in, I likely won't notice these things until they're already causing issues.
This entry was edited (3 days ago)
in reply to xthexder

This "dopamine hit" isn't a permanent source of happiness; just repeatedly clicking the "randomize" button isn't going to make you feel constantly high. After 3, maybe 5 hits you'll start noticing a common pattern that gets old really fast. To make it better you need to come up with ways to declare different structures, to establish rulesets and checklists, to make some unique pieces at certain checkpoints yourself while letting the LLM fill in all the boilerplate around them, etc. Which is more effort, but also produces more rewarding results. I like to think about it this way: the LLM produces the best, most generic thing possible for the prompt. Then I look at it, consider which parts I want to be less generic, and reprompt. In programming or scripting, I'm okay with the "best generic thing" that solves the problem I have. If I were writing novels, maybe it's usable for some kind of top-down writing where you start with a high-level structure, then clarify it step by step down to the lowest level. You can use AI to write around this structure, and if something is too boring/generic it's again simply a matter of refining this structure more and expanding something into multiple more detailed things.
in reply to suddenlyme

It's no different than eating fast/processed food vs eating healthy.

It warps your expectations.

in reply to Arthur Besse

But does it cause this when used exclusively for RP gooning sessions?
This entry was edited (3 days ago)
in reply to svc

I think we can get federal funding, let me run it by Director Big Balls
in reply to Ganbat

To date, after having gooned once (ongoing since September 2023), my core executive functions, my cognitive abilities and my behaviors have not suffered in the least. In fact, potato.
in reply to Arthur Besse

Does this also explain what happens with middle and upper management? As people have moved up the ranks during the course of their careers, I swear they get dumber.
in reply to Wojwo

I’d expect similar at least. When one doesn’t keep up to date on new information and lets their brain coast, it atrophies like any other muscle would from disuse.
in reply to Wojwo

That was my first reaction. Using LLMs is a lot like being a manager. You have to describe goals/tasks and delegate them, while usually not doing any of the tasks yourself.
in reply to ALoafOfBread

Fuck, this is why I've been feeling dumber myself after getting promoted to more senior positions, where I only have to work at the architectural level and on the stuff the more junior staff can't handle.

With LLMs, basically my job is still the same.

in reply to ALoafOfBread

Having stepped away from being a direct practitioner, I will say all my direct reports are “faster” in the programs we use at work than I am, but I’m still waaaaaaaaaay more efficient than all of them (their inefficiencies drive me crazy, actually). But I’ve also taken up a lot of development to keep my mind sharp. If I only had my team to manage and not my own personal projects, I could really see regressing a lot.
in reply to Wojwo

My dad around 1993 designed a cipher better than RC4 (I know that's not a high mark now, but it kinda was then), which passed an audit by a relevant service.

My dad around 2003 was still intelligent enough that he'd explain interesting mathematical problems to me and my sister, and point out similarities to them, and other interesting things, in real life.

My dad around 2005 was promoted to a management position and was already becoming kinda dumber.

My dad around 2010 was a fucking idiot, you'd think he's mentally impaired.

My dad around 2015 apparently went to a fortuneteller to "heal me from autism".

So yeah. I think it's a bit similar to what happens to elderly people when they retire. Everything should be trained, and real tasks give you a feeling of life; giving orders and going to endless could-be-an-email meetings makes you both dumb and depressed.

in reply to Wojwo

that's the Peter principle.

people get promoted until their inadequacies/incompetence show, and then their job becomes covering for it.

hence why so many middle managers' primary job is managing the appearance of their own competence first and foremost, and they lose touch with the actual work being done... which is a key part of how you actually manage it.

This entry was edited (2 days ago)
in reply to TubularTittyFrog

Yeah, that's part of it. But there is something more fundamental; it's not just rising up the ranks but also time spent in management. It feels like someone can get promoted to middle management and be good at the job initially, but then, as the job becomes more about telling others what to do and filtering data up the corporate structure, a certain amount of brain rot sets in.

I had just attributed it to age, but this could also be a factor. I'm not sure it's enough to warrant studies, but it's interesting to me that just the act of managing work done by others could contribute to mental decline.

in reply to Arthur Besse

I don't refute the findings but I would like to mention: without AI, I wasn't going to be writing anything at all. I'd have let it go and dealt with the consequences. This way at least I'm doing something rather than nothing.

I'm not advocating for academic dishonesty of course, I'm only saying it doesn't look like they bothered to look at the issue from the angle of:

"What if the subject was planning on doing nothing at all and the AI enabled the them to expend the bare minimum of effort they otherwise would have avoided?"

in reply to Tracaine

sad that people knee-jerk downvote you, but i agree. i think there is definitely a productive use case for AI if it helps you get started learning new things.

It helped me a ton this summer learn gardening basics and pick out local plants which are now feeding local pollinators. That is something i never had the motivation to tackle from scratch even though i knew i should.

in reply to acosmichippo

It helped me a ton this summer learn gardening basics and pick out local plants which are now feeding local pollinators. That is something i never had the motivation to tackle from scratch even though i knew i should.


Given the track record of some models, I'd question the accuracy of the information it gave you. I would have recommended consulting traditional sources.

in reply to frongt

I would have recommended consulting traditional sources.


jfc you people are so eager to shit on anything even remotely positive of AI.

Firstly, the entire point of this comment chain is that if "consulting traditional sources" was the only option, I wouldn't have done anything. My back yard would still be a barren mulch pit. AI lowered the effort-barrier of entry, which really helps me as someone with ADHD and severe motivation deficit.

Secondly, what makes you think i didn't? Just because I didn't explicitly say so? yes, i know not to take an LLM's word as gospel. i verified everything and bought the plants from a local nursery that only sells native plants. There was one suggestion out of 8 or so that was not native (which I caught before even going shopping). Even with that overhead of verifying information, it still eliminated a lot of busywork searching and collating.

This entry was edited (3 days ago)
in reply to acosmichippo

Saw you down-voted and wanted to advise that I am glad you went on to learn some things you had been meaning to, that alone makes the experiment worthwhile as discipline is a rare enough beast. To be clear I myself have a Claude subscription that is about to lapse, and find the article unfortunately spot on. I feel fortunate to have moved away from LLMs naturally.
This entry was edited (3 days ago)
in reply to Tracaine

Could you expand with an example? What you said is too vague to really extract any point from. I'd argue that if it gives you wrong information, doing something wrong is worse than doing nothing.
in reply to Pycorax

doing something wrong is worse than doing nothing.


This is a general statement, right? Then try to forget about the context and read it again 😅

I actually think the moments when AI goes wrong are the moments that stimulate you and make you better realize what you're doing and what you want to achieve. And when you do subsequent prompts to fix the issue, you're essentially doing problem solving, figuring out what to ask to make it do the exact thing you want. And it's never going to be always right, simply because in most cases it being wrong comes down to you not providing enough details about what you actually want. So step-by-step AI usage with clarifications and fixes is always going to be a brain-stimulating problem-solving process.

This entry was edited (3 days ago)
in reply to hisao

So vibe coding?

I've tried using LLMs for a couple of tasks before I gave up on the jargon outputs and nonsense loops they kept feeding me.

I'm no coder / programmer, but for the simple tasks / things I needed, I took inspo from others, understood how the scripts worked, and added comments to my own scripts showing my understanding and explaining what they're doing.

I've written honestly so much, just throwing spaghetti at the wall and seeing what sticks (works). I have fleshed out a method for using base16 colour schemes to modify other GTK* themes so everything in my OS matches. I have declarative containers, IP addresses, secrets, and so much more. Thanks to the folks who created nix-colors, I should really contribute to that repo.

I still feel like a noob when it comes to Linux, but seeing my progress over ~1y, it's massive.

I managed to get a working google coral after everyone else's scripts (that I could find on Github) had quit working (NixOS). I've since ditched that module as the upkeep required isn't worth a few ms in detection speeds.

I don't believe any of my configs would be where they are if I'd asked a llm to slap it together for me. I'd have none of the understanding of how things work.

in reply to dai

I'm happy for your successes and your enthusiasm! I'm in a different position: I'm kinda very lazy and have little enthusiasm for coding/devops stuff specifically, but I enjoy backsitting the Copilot. I also think you definitely learn more by doing everything yourself, but it's not really true that you learn nothing by only backsitting an LLM, because it doesn't just produce a working solution from a single prompt; you have to reprompt and refine things again and again until you get what you want and it's working as expected. I feel a bit overpowered this way because it lets me get things done extraordinarily fast. For example, at 00:00 I was only choosing a VPS to buy, and by 04:00 I already had a wireguard server with port forwarding up and running and all my clientside stuff configured and updated accordingly. And I had some exotic issues during setup which I also troubleshot using the LLM, like my clientside wg.conf file getting the wrong SELinux context and the wg-quick daemon refusing to work because of that:
unconfined_u:object_r:user_home_t:s0

I never knew such a thing even existed, and the LLM just casually explained it and provided a fix:
sudo semanage fcontext -a -t etc_t "/etc/wireguard(/.*)?"
sudo restorecon -Rv /etc/wireguard
in reply to hisao

LLMs are good as a guide to point you in the right direction. They’re about the same kind of tool as a search engine. They can help point you in the right direction and are more flexible in answering questions.

Much like search engines, you need to be aware of the risks and limitations of the tools. Google will give you links that are crawling with browser-exploiting malware, and LLMs will give you answers that are wrong or directions that are destructive to follow (like incorrect terminal commands).

We’re a bit off from the ability to have models which can tackle large projects like coding complete applications, but they’re good at some tasks.

I think the issue is when people try to use them to replace having to learn instead of as a tool to help you learn.

in reply to FauxLiving

We’re a bit off from the ability to have models which can tackle large projects like coding complete applications, but they’re good at some tasks.


I believe they're (Copilot and similar) good for coding large projects if you use them in small steps and micromanage everything. I think in this mode of use they save a huge amount of time and, more importantly, they keep you from wasting your energy on the grindy/stupid/repetitive parts so you can save it for the actually interesting/challenging parts.

in reply to hisao

Well that's why I was asking for an example of sorts. The problem is that if you're just starting out, you don't know what you don't know, and more importantly, you won't be able to tell if something is wrong. It doesn't help that LLMs are notoriously good at being confidently incorrect and prone to hallucinations.

When I tried it for programming, more often than not, it has hallucinated functions and APIs that did not exist. And I know that they don't because I've been working at this for more than half of my life so I have the intuition to detect bullshit when it appears. However, for learners they are unlikely to be able to differentiate that.

in reply to Pycorax

you won’t be able to tell if something is wrong


When you run it and test it, and it doesn't work as expected (or doesn't work at all), that most likely means something is wrong. Not all fields of work require programs to be 100% correct on the first try; pretty often you can run and test your code an infinite number of times before shipping/deploying.

in reply to Tracaine

I'm in the same boat with many things I'm using AI for. I would never write natpmpc port-forwarding daemons, I would never create my own DIY VPN, etc., if I had to do it all by myself. Not because I can't, but because I don't enjoy spending my time diving into tons of manuals for various utilities, protocols, OS-level stuff, networking, etc. I would simply give up and use some premade solutions. But with AI, I was able to get it all done while also quickly picking up some surface-level knowledge about all of this stuff myself.
in reply to hisao

a custom VPN without security-minded planning and knowledge? that sounds like a disaster.

surely you could do other things that have more impact for yourself, still with computers. use wireguard and spend the time setting up your services and network security.
and, port forwarding... I don't know where you're running that, but linux iptables can do that too, in the kernel, with better performance.

This entry was edited (2 days ago)
in reply to WhyJiffie

Oops, I meant self-hosting a wireguard server, not actually doing an alternative to wireguard or openvpn themselves...

and, port forwarding... I don't know where you're running that, but linux iptables can do that too, in the kernel, with better performance.


With my previous paid VPN I had to use natpmpc to ask their server to forward/bind ports for me, and I also had to do that every 45 seconds. It's nice to have a bash script running as a systemd daemon that does that in a loop, parses the output, and saves the remote ports the server gave us this time to a file in case we need them (like for setting up a tor relay). I also have another script and daemon for the tor relay that monitors forwarded-port changes (from the file), updates torrc, and restarts the tor container. All this by Copilot, without knowing bash at all. Without having to write complex regexes to parse that output or to overwrite the tor config, etc. It's not a single prompt, it requires some troubleshooting and clarifications, and ultimately I got to know some of the low-level details myself. Which is also great.
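Roughly what such a renewal loop might look like, as a minimal sketch: the natpmpc flags, the 10.2.0.1 gateway, the file path, and the exact output line format are all assumptions based on commonly documented Proton VPN setups, so check your provider's docs and your natpmpc version before using anything like this.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a natpmpc port-forwarding renewal loop.
# Assumes natpmpc prints a line like:
#   "Mapped public port 61234 protocol UDP to local port 1234 lifetime 60"

PORT_FILE="${PORT_FILE:-$HOME/.cache/vpn-forwarded-port}"

# Pull the public port number out of natpmpc's output (4th field of the
# "Mapped public port ..." line).
parse_mapped_port() {
    awk '/Mapped public port/ { print $4; exit }'
}

renew_loop() {
    while true; do
        # Ask the VPN gateway to (re)map a port with a 60s lifetime,
        # then renew every 45s so the mapping never expires.
        out="$(natpmpc -a 1 0 udp 60 -g 10.2.0.1 2>/dev/null)"
        port="$(printf '%s\n' "$out" | parse_mapped_port)"
        [ -n "$port" ] && printf '%s\n' "$port" > "$PORT_FILE"
        sleep 45
    done
}

# renew_loop   # uncomment when running as an actual service
```

In a real script you'd call renew_loop at the end and run the whole thing under a systemd (user) service, which matches the setup described above.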

in reply to hisao

Oops, I meant self-hosting a wireguard server, not actually doing an alternative to wireguard or openvpn themselves...


oh, that's fine then, recommended even.

With my previous paid VPN I had to use natpmpc to ask their server to forward/bind ports for me, and I also had to do that every 45 seconds. It's nice to have a bash script running as a systemd daemon that does that in a loop, parses the output, and saves the remote ports the server gave us this time to a file in case we need them (like for setting up a tor relay).


oh so this is a management automation that requests an outside system to open ports, and updates services to use the ports you got. that's interesting! what VPN service was that?

All this by Copilot, without knowing bash at all.


be sure to run shellcheck on your scripts though; it can point out issues. aim for it to produce no output, which means all seems ok.

in reply to WhyJiffie

what VPN service was that?


Proton VPN

be sure to run shellcheck for your scripts though, it can point out issues. aim for it to have no output, that means all seems ok.


It does some logging though, and I read what it logs via systemctl --user status. Anyway, those scripts/services are so far of a simple kind: if they don't work, I notice immediately, because my torrents stop seeding or my tor/i2p proxy ports stop working in the browser. In cases where an error can only be discovered conditionally somewhere during a long runtime, it would need more complicated and careful testing.
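For reference, a user-level unit wrapping such a script might look roughly like this (the unit and script names are made up for illustration; adapt the paths to your own setup):

```ini
# ~/.config/systemd/user/port-renew.service  (hypothetical name)
[Unit]
Description=Renew VPN port forwarding via natpmpc

[Service]
# %h expands to the user's home directory
ExecStart=%h/bin/port-renew.sh
Restart=on-failure
RestartSec=5

[Install]
WantedBy=default.target
```

Enable it with systemctl --user enable --now port-renew.service; its output then shows up in systemctl --user status and journalctl --user -u port-renew.service.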

in reply to Serinus

I agree, I should have clarified: I actually meant setting up a wireguard server on a VPS, not developing an alternative to wireguard or openvpn.
in reply to Tracaine

You haven't done anything, though. If you're getting to the point where you are doing actual work instead of letting the AI do it for you, then congratulations, you've learned some writing skills. It would probably be more effective to use some non-ai methods to learn as well though.

If you're doing this solely to produce output, then sure, go ahead. But if you want good output, or output that actually reflects your perspective, or the skills to do it yourself, you've gotta do it the hard way.

This entry was edited (3 days ago)
in reply to Arthur Besse

Microsoft reported the same findings earlier this year, spooky to see a more academic institution report the same results.
microsoft.com/en-us/research/w…
Abstract for those too lazy to click:

The rise of Generative AI (GenAI) in knowledge workflows raises questions about its impact on critical thinking skills and practices. We survey 319 knowledge workers to investigate 1) when and how they perceive the enaction of critical thinking when using GenAI, and 2) when and why GenAI affects their effort to do so. Participants shared 936 first-hand examples of using GenAI in work tasks. Quantitatively, when considering both task- and user-specific factors, a user's task-specific self-confidence and confidence in GenAI are predictive of whether critical thinking is enacted and the effort of doing so in GenAI-assisted tasks. Specifically, higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking. Qualitatively, GenAI shifts the nature of critical thinking toward information verification, response integration, and task stewardship. Our insights reveal new design challenges and opportunities for developing GenAI tools for knowledge work.
in reply to QuadDamage

~~Why is it referring to GenAI?~~ ~~It doesn't exist.~~
This entry was edited (3 days ago)
in reply to felsiq

Thanks. It is there in the first line. D'oh! My distaste for Microsoft clouds my thinking.
This entry was edited (3 days ago)
in reply to Arthur Besse

Heyyy, now I get to enjoy some copium for being such a dinosaur and resisting using it as much as I can
in reply to sudo_shinespark

You're not a dinosaur. Making people feel old and out of step with the trend is exactly one of the strategies big tech uses to shove their stuff onto people.
in reply to morto

bingo.

it's like a health supplement company telling you eating healthy is stupid when they have this powder/pill you should take.

tech evangelism is very cultish, and one of its values is worshipping 'youth' and 'novelty' to an absurd degree, as if youth were automatically superior to experience and age.

This entry was edited (2 days ago)
in reply to Arthur Besse

You write your essay with AI, your learning suffers.


One of those papers that are basically "water is wet, researchers discover".

in reply to Arthur Besse

cognitive decline.


Another reason for refusing those so-called tools... it could turn one into another tool.

in reply to lechekaflan

More like it would cause you to need the tool in order to be the tool that you are already mandated to be.
in reply to lechekaflan

It’s a clickbait title. Using AI doesn’t actually cause cognitive decline. They’re saying using AI isn’t as engaging for your brain as the manual work, and then broadly linking that to the widely understood concept that you need to engage your brain to stay sharp. Not exactly groundbreaking.
in reply to surph_ninja

Sir this is Lemmy & I'm afraid I have to downvote you for defending AI which is always bad. /s
in reply to Arthur Besse

So if someone else writes your essays for you, you don’t learn anything?
in reply to Arthur Besse

What a ridiculous study. People who got AI to write their essay can’t remember quotes from their AI written essay? You don’t say?! Those same people also didn’t feel much pride over their essay that they didn’t write? Hold the phone!!! Groundbreaking!!!

Academics are a joke these days.

in reply to FreedomAdvocate

I see you skipped that part of academia where they taught that, in science, there are steps between hypothesis and conclusion even if you already think you know the answer.
in reply to FauxLiving

Or one could entirely skip the part where they read the study beyond the headline.
in reply to Arthur Besse

The obvious AI-generated image and the generic name of the journal made me think that there was something off about this website/article and sure enough the writer of this article is on X claiming that covid 19 vaccines are not fit for humans and that there's a clear link between vaccines and autism.

Neat.

in reply to DownToClown

Thanks for the warning. Here's the link to the original study, so we don't have to drive traffic to that guy's website.

arxiv.org/abs/2506.08872

I haven't got time to read it and now I wonder if it was represented accurately in the article.

This entry was edited (3 days ago)
in reply to DownToClown

Thanks for pointing this out. Looking closer I see that that "journal" was definitely not something I want to be sending traffic to, for a whole bunch of reasons - besides anti-vax they're also anti-trans, and they're gold bugs... and they're asking tough questions like "do viruses exist" 🤡

I edited the post to link to MIT instead, and added a note in the post body explaining why.

This entry was edited (2 days ago)
in reply to Arthur Besse

what should we do then? just abandon LLM use entirely, or use it in moderation? i find it useful for asking trivial questions and sort of as a replacement for wikipedia. also, what should we do about the people who are developing this 'rat poison' and feeding it to young people's brains?

edit:
i also personally wouldn't use AI at all if I didn't have to compete with all these prompt engineers and their brainless speedy deployments

This entry was edited (3 days ago)
in reply to trashgarbage78

The abstract seems to suggest that in the long run you'll outperform those prompt engineers.
in reply to GlenRambo

in the long run, won't it just become superior to what it is now and outperform us? the future doesn't look bright tbh for comp sci; the only good paths i see are if you're studying AI/ML or Security
in reply to Shanmugha

so avoid LLMs entirely when programming, and also studying AI/ML isn't a good idea?
in reply to trashgarbage78

Probably studying AI/ML or security is a fine choice if that's what you want to do. And if you want to go into CS more broadly, that's probably not a bad choice either. IMO it's much less likely that AI will completely replace all or even many engineers (or people in other industries).
in reply to trashgarbage78

I do not see how it can be a good or bad idea. Do whatever you want to do, however is best for you
in reply to trashgarbage78

Thing is, that "trivial question asking" is part of what causes this phenomenon
in reply to trashgarbage78

what should we do then?

i also personally wouldn’t use AI at all if I didn’t have to compete with all these prompt engineers and their brainless speedy deployments


Gotta argue that your more methodical and rigorous deployment strategy is more cost-efficient than guys cranking out bug-ridden releases.

If your boss refuses to see it, you either go with the flow or look for a new job (or unionize).

This entry was edited (3 days ago)
in reply to UnderpantsWeevil

I'm not really worried about competing with the vibe coders. At least on my team, those guys tend to ship more bugs, which causes the fire alarm to go off later.

I'd rather build a reputation of being a little slower, but more stable and higher quality. I want people to think, "Ah, nice. Paequ2 just merged his code. We're saved." instead of, "Shit. Paequ2 just merged. Please nothing break..."

Also, those guys don't really seem to be closing tickets faster than me. Typing words is just one small part of being a programmer.

in reply to trashgarbage78

you should stop using it and use wikipedia.

being able to pull relevant information out of a larger body of it is an incredibly valuable life skill. you should not be replacing that skill with an AI chatbot.

This entry was edited (2 days ago)
in reply to Arthur Besse

And using a calculator isn’t as engaging for your brain as manually working the problem. What’s your point?
in reply to surph_ninja

Yeah, I went over there with the idea that it was grandiose and not peer-reviewed. Turns out it's just a cherry-picked title.

If you use an AI assistant to write a paper, you don't learn any more from the process than you do from reading someone else's paper. You don't think about it deeply and come up with your own points and principles. It's pretty straightforward.

But just like calculators, once you understand the underlying math, unless math is your thing, you don't generally go back and do it all by hand because it's a waste of time.

At some point, we'll need to stop using long-form papers to gauge someone's acumen in a particular subject. I suspect you'll be given questions in real time and need to respond to them on video with your best guesses to prove you're not just reading it from a prompt.

in reply to surph_ninja

Seems like you've made the point succinctly.

Don't lean on a calculator if you want to develop your math skills. Don't lean on an AI if you want to develop general cognition.

in reply to UnderpantsWeevil

I don't think this is a fair comparison because arithmetic is a very small and almost inconsequential skill to develop within the framework of mathematics. Any human that doesn't have severe learning disabilities will be able to develop a sufficient baseline of arithmetic skills.

The really useful aspects of math are things like how to think quantitatively. How to formulate a problem mathematically. How to manipulate mathematical expressions in order to reach a solution. For the most part these are not things that calculators do for you. In some cases reaching for a calculator may actually be a distraction from making real progress on the problem. In other cases calculators can be a useful tool for learning and building your intuition - graphing calculators are especially useful for this.

The difference with LLMs is that we are being led to believe that LLMs are sufficient to solve your problems for you, from start to finish. In the past students who develop a reflex to reach for a calculator when they don't know how to solve a problem were thwarted by the fact that the calculator won't actually solve it for them. Nowadays students develop that reflex and reach for an LLM instead, and now they can walk away with the belief that the LLM is really solving their problems, which creates both a dependency and a misunderstanding of what LLMs are really suited to do for them.

I'd be a lot less bothered if LLMs were made to provide guidance to students, a la the Socratic method: posing leading questions to the students and helping them to think along the right tracks. That might also help mitigate the fact that LLMs don't reliably know the answers: if the user is presented with a leading question instead of an answer then they're still left with the responsibility of investigating and validating.

But that doesn't leave users with a sense of immediate gratification which makes it less marketable and therefore less opportunity to profit...

in reply to 5C5C5C

arithmetic is a very small and almost inconsequential skill to develop within the framework of mathematics.


I'd consider it foundational. And hardly small or inconsequential given the time young people spend mastering it.

Any human that doesn’t have severe learning disabilities will be able to develop a sufficient baseline of arithmetic skills.


With time and training, sure. But simply handing out calculators and cutting math teaching budgets undoes that.

This is the real crux of the comparison. Telling kids "you don't need to know math if you have a calculator" is intended to reduce the need for public education.

I’d be a lot less bothered if LLMs were made to provide guidance to students, a la the Socratic method: posing leading questions to the students and helping them to think along the right tracks.


But the economic vision for these tools is to replace workers, not to enhance them. So the developers don't want to do that. They want tools that facilitate redundancy and downsizing.

But that doesn’t leave users with a sense of immediate gratification


It leads them to dig their own graves, certainly.

in reply to UnderpantsWeevil

Don't lean on an AI if you want to develop general ~~cognition~~ essay writing skills.


Sorry, the study only examined the ability to respond to SAT writing prompts, not general cognitive abilities. Further, they showed that the ones who used an AI just went back to "normal" levels of ability when they had to write it on their own.

in reply to BananaIsABerry

the ones who used an AI just went back to “normal” levels of ability when they had to write it on their own


An ability that changes with practice

in reply to surph_ninja

You better not read audiobooks or learn from videos either. That's pure brainrot. Too easy.
in reply to Randomgal

Look at this lazy fucker learning trig from someone else, instead of creating it from scratch!
in reply to petrol_sniff_king

LoL. These damn kids! No one wants to re-invent the wheel anymore! Well, if you’re not duplicating the works of Hipparchus of Nicaea, you’re a lazy good for nothing!
in reply to petrol_sniff_king

Oh, thank god you made sure to clarify you didn’t. Someone may have gotten confused!
in reply to surph_ninja

It’s important to know these things as fact instead of vibes and hunches.
in reply to ayyy

Sure, and it’s important to know how to perform math functions without a calculator. But once you learn it, and move on to something more advanced or day-to-day work, you use the calculator.
in reply to surph_ninja

I don't always use the calculator.

Do you bench press 100 lbs and then give up on lifting altogether?

in reply to petrol_sniff_king

Do you believe that using AI locks you out of doing something any other way again?
in reply to petrol_sniff_king

Well what do you mean with the lifting metaphor?

Many people who use AI are doing it to supplement their workflow. Not replace it entirely, though you wouldn’t know that with all these ragebait articles.

in reply to Arthur Besse

I just asked ChatGPT if this is true. It told me no and to increase my usage of AI. So HA!
in reply to Arthur Besse

16 hours after posting it I am editing this post (as well as the two other cross-posts I made of it) to link to MIT’s page about the study instead.


Better late than never. Good catch.

in reply to Arthur Besse

Anyone who doubts this should ask their parents how many phone numbers they used to remember.

In a few years there'll be people who've forgotten how to have a conversation.

in reply to Blackmist

I already have seen a massive decline personally and observationally (watching other people) in conversation skills.

Most people now talk to each other like they're exchanging internet comments. They don't ask questions, they don't really engage... they just exchange declaratory sentences. Heck, most of the dates I went on the past few years had zero real conversation, just vague exchanges of opinion and commentary. A couple of them went full-on streamer, just ranting at me and randomly stopping to ask me nonsense questions.

Most of our new employees the past year or two really struggle with any verbal communication and if you approach them physically to converse about something they emailed about they look massively uncomfortable and don't really know how to think on their feet.

Before the pandemic I used to actually converse with people and learn from them. Now everyone I meet feels like interacting with a highlight reel. What I don't understand is why people are choosing this and then complaining about it.

in reply to Blackmist

I could remember so many phone numbers. Nowadays I just click their names on my rectangle. The future sucks and is weakening us!
in reply to Blackmist

That doesn't require a few years; there are loads of people out there already who have forgotten how to have a conversation.

Especially moderators, who typically are the polar opposite of the word. You disagree with my factually incorrect statement? Ban. Problem solved. You disagree with my opinion? Ban.

Similarly, I've seen loads of users on Lemmy (and on reddit before that) who just ban anyone who asks questions or who disagrees.

It's so nice and easy, living in an echo chamber, but it does break your brain

in reply to Blackmist

I don't see how that's any indicator of cognitive decline.

Also people had notebooks for ages. The reason they remembered phone numbers wasn't necessity, but that you had to manually dial them every time.

in reply to zqps

And now, since you are the father of writing, your affection for it has made you describe its effects as the opposite of what they really are. In fact, [writing] will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing, which is external and depends on signs that belong to others, instead of trying to remember from the inside, completely on their own. You have not discovered a potion for remembering, but for reminding; you provide your students with the appearance of wisdom, not with its reality. Your invention will enable them to hear many things without being properly taught, and they will imagine that they have come to know much while for the most part they will know nothing. And they will be difficult to get along with, since they will merely appear to be wise instead of really being so.


—a story told by Socrates, according to his student Plato

in reply to Blackmist

The other day I saw someone ask ChatGPT how long it would take to perform 1.5 million instances of a given task, if each instance took one minute. Mfs cannot even divide 1.5 million minutes by 60 to get 25,000 hours, then by 24 to get 1,041 days. Pretty soon these people will be incapable of writing a full sentence without ChatGPT's input

Edit to add: divide by 365.25 to get 2.85 years. Anyone who can tell me how many months that is without asking an LLM gets a free cookie emoji
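For anyone who'd rather check the chain with a script than a calculator, here's a minimal Python sketch of the same conversion (using the 365.25-day Julian-average year from the comment above):

```python
# Convert 1.5 million minutes (one minute per task instance)
# into hours, days, and years, mirroring the arithmetic above.
MINUTES = 1_500_000

hours = MINUTES / 60        # 25,000 hours
days = hours / 24           # ~1,041.7 days
years = days / 365.25       # ~2.85 years

print(f"{hours:,.0f} h / {days:,.0f} d / {years:.2f} y")
```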

in reply to starman2112

You forgot to do the years, which is a bit trickier once we take leap years into account.

According to the Gregorian calendar, every fourth year is a leap year unless it's divisible by 100 – except those divisible by 400 which are leap years anyway. Hence, the average length of one year (over 400 years) must be:

365 + 1⁄4 − 1⁄100 + 1⁄400 = 365.2425 days


So,

1041 / 365.2425 ≈ 2.85 years


Or 2 years and...

0.850161194275 × 365.2425 ≈ 310 days and...

0.514999999987 × 24 ≈ 12 hours and...

0.359999999688 × 60 ≈ 21 minutes and...

0.59999998128 × 60 ≈ 36 seconds


1041 days is just about 2y 310d 12h 21m 36s

Wtf, how did we go from 1041 whole days to fractions of a day? Damn leap years!

Had we not been accounting for them, we would have had 2 years and...

0.852054794521 × 365 = 311.000000000165 days


Or simply 2y 311d if we just ignore that tiny rounding error or use fewer decimals.
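To double-check the breakdown without chasing floating-point remainders by hand, here's a small Python sketch using repeated `divmod` with the same mean Gregorian year as above:

```python
# Break 1041 days into mean Gregorian years, then days/hours/minutes/seconds,
# reproducing the hand calculation above.
MEAN_YEAR = 365 + 1/4 - 1/100 + 1/400   # 365.2425 days

years, rem = divmod(1041, MEAN_YEAR)
days, frac = divmod(rem, 1)
hours, frac = divmod(frac * 24, 1)
minutes, frac = divmod(frac * 60, 1)
seconds = round(frac * 60)

print(f"{years:.0f}y {days:.0f}d {hours:.0f}h {minutes:.0f}m {seconds}s")
```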

in reply to pirat

Engineers be like…

1041 / 365 = 2.852

0.852 × 365 = 310.980

Thus 2 y 311 d. Or really, fuck it 3 y

Edit. #til

The lemmy app on my phone does basic calculator functions.

in reply to tehn00bi

Or really, fuck it 3 y


Seems about right! But really, it often seems pretty useful to me, since it removes a lot of unnecessary information throughout a content feed or thread, though I usually still want to be able to see the exact date and time when tapping or hovering over the value for further context.

Edit: However, the lemmy client I use, Eternity, shows the entire date and time for each comment instead of the age of it, and I'm fine with that too, but unsure what I actually prefer...

The lemmy app on my phone does basic calculator functions.


Which client and how?

in reply to starman2112

I want a free cookie emoji!

I didn't ask an LLM, no, I asked Wikipedia:

The mean month-length in the Gregorian calendar is 30.436875 days.


Edit: but since I already knew a year is 365.2425 I could, of course, have divided that by the 12 months of a year to get that number.

So,

1041 ÷ 30.436875 ≈ 34 months and...

0.2019343313 × 30.436875 ≈ 6 days and...

0.146249999987 × 24 ≈ 3 hours and...

0.509999999688 × 60 ≈ 30 minutes and...

0.59999998128 × 60 ≈ 35 seconds and...

0.9999988768 × 1000 ≈ 999 milliseconds and

0.9999988768 × 1000000 ≈ 999999 nanoseconds


34 months + 6d 3h 30m 35s 999ms 999999 ns (or we could call it 36s...)

Edit: 34 months is better known as 2 years and 10 months.
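The same repeated-`divmod` trick verifies the month version too; note that the 35 s + 999 ms + 999999 ns tail above is just 36 s seen through floating-point round-off. A minimal Python sketch:

```python
# Break 1041 days into mean Gregorian months (30.436875 d), then
# days/hours/minutes/seconds, mirroring the calculation above.
MEAN_MONTH = 365.2425 / 12   # 30.436875 days

months, rem = divmod(1041, MEAN_MONTH)
days, frac = divmod(rem, 1)
hours, frac = divmod(frac * 24, 1)
minutes, frac = divmod(frac * 60, 1)
seconds = round(frac * 60)

print(f"{months:.0f}mo {days:.0f}d {hours:.0f}h {minutes:.0f}m {seconds}s")
```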

in reply to pirat

🍪

You got as far as nanoseconds so here's a cupcake for extra credit too 🧁

in reply to starman2112

Thank you, you really didn't have to. That cupcake is truly the icing and it's almost too much! I'll give you this giant egg of unknown origin: 🥚 in return, as long as you promise to use it for baking and making some more of those cupcakes for whoever else needs or deserves one within the next few days, hours, minutes, seconds, milliseconds and 999999 bananoseconds 🍌
in reply to starman2112

I swear the companies hard code solutions for weird edge cases so their investors are fooled into believing that their LLMs are getting smarter.
in reply to starman2112

Rough estimate using 30 days as average month would be ~35 months (1050 = 35×30). The average month is a tad longer than 30 days, but I don't know exactly how much. Without a calculator, I'd guess the total result is closer to 34.5. Just using my own brain, this is as far as I get.

Now, adding a calculator to my toolset, the average month is 365.2425 d / 12 m = 30.4377 d/m. The total result comes out to about 34.2, so I overestimated a little.

Also, the total time is 1041.66... days, which would be more correctly rounded to 1042, but that has negligible impact on the result.

Edit: I saw someone else went even harder on this, but for early morning performance, I'm satisfied with my work

in reply to lennivelkant

🍪

Pirat gave me an egg emoji, so I baked some more cupcake emojis. Have one for getting it so close without even using a calculator 🧁

in reply to Blackmist

I still remember all my family's phone numbers from when I was a kid growing up in WV in the 70s

I currently have my wife’s number memorized and that’s it. Not my mom, my kids, friends, anybody. I just don’t have to. It’s all in my phone.

But I’m also of the opinion that NOT having this info in my head has freed it up for more important things. Like memes and cat videos 🤣

But seriously, I don't think this tool, and AI is just a tool, is dumbing me down. Yes, I think about certain things less, but it allows me to ask different or better questions, and just learn differently. I don't necessarily trust everything it spits out, I double-check all code it produces, etc. It's very good at explaining things or providing other examples. Since I'm older, I've heard similar arguments about TV and/or the Internet. LLMs are a very interesting tool that have good and bad uses. They are not intelligent, at least not yet, and are not the solution to everything technical. They are very resource intensive and should be used much more judiciously than they currently are.

Ultimately it boils down to this: if you're lazy, this allows you to be lazier. If you don't want to continue learning and just rely on it, you are gonna have a bad time. Be skeptical, ask questions, employ critical thinking, take in information from lots of sources, and in the end you will be fine. That is, unless it becomes sentient and wipes us all out.

in reply to Blackmist

People don't memorize phone numbers anymore? Why not? Dialing is so much quicker than searching your contacts for the right person.
in reply to Psythik

This is the furthest thing from my experience lol I can type 2 letters in my phone, see the right name and press call. I haven’t memorised a phone number since before the year 2000* (*hyperbole)
in reply to Arthur Besse

Been vibe coding hard for a new project this past week. It's been working really well, but I feel like I've just watched a bunch of TV. It's passive enough that it's like flipping through channels: paying a little attention, then going to the next.

Whereas coding it myself would engage my brain, and it might feel more like reading.

It's bizarre because I've never had this experience before.