Human-level AI is not inevitable. We have the power to change course
Technology happens because people make it happen. We can choose otherwise.
Guardian staff reporter (The Guardian)
The AI boom is more overhyped than the 1990s dot-com bubble, says top economist
Torsten Slok, chief economist at Apollo Global Management, recently argued that the stock market currently overvalues a handful of tech giants – including Nvidia and Microsoft –...
Daniel Sims (TechSpot)
Apple sues YouTuber for alleged iOS 26 trade-secret theft
YouTuber leaked iOS secrets via friend spying on dev's phone, Apple lawsuit claims
Jon Prosser and alleged accomplice accused of stealing trade secrets from development device.
Brandon Vigliarolo (The Register)
According to the suit, Ramacciotti was in need of money, and had a friend named Ethan Lipnik who worked at Apple as a software engineer on the Photos team – two facts that Prosser was aware of when he allegedly offered to pay Ramacciotti to break into Lipnik's development iPhone and show Prosser what the version of iOS running on the device looked like.

Ramacciotti, who frequently stayed at Lipnik's home, allegedly used location-tracking software to determine when Lipnik was far enough from home to be gone for an extended period. During such windows, he allegedly used the opportunity to obtain the passcode and access the device.
Apple isn’t a very pro-WFH or remote-work company, from what I learned when I was job hunting. I’m honestly surprised they let a dev iPhone leave their campus.
I remember that one, but honestly: testing a device exclusively in laboratory settings and never in real-life situations isn't worth much.
It is a risk, but I think not one you can or should avoid, at least if you want your mobile device to perform.
You can read it two ways:
1) gee they’re so WFH friendly
2) they drive their people hard and they work nights and weekends
yt-dlp command on Debian to download the highest available video and audio, provided the resolution is no higher than 1920 x 1080
Debian 12.11, yt-dlp stable@2025.06.30.
I used this argument: "-f bv*[ext=mp4]+ba[ext=m4a]/b[ext=mp4]"
and it works: it downloads the best available video and audio, and ffmpeg merges both into a single file, automatically.
Except that the maximum resolution I need is 1920 x 1080. The best available video is oftentimes 4096 x 2160, too much for the target hardware.
Using -F to check the different resolutions and then selecting one (like -f 299 or -f 148) is tiresome.
How do I do that? Ideally for whole playlists involving between 25 and 50 videos.
The following numeric meta fields can be used with comparisons <, <=, >, >=, = (equals), != (not equals):
filesize: The number of bytes, if known in advance
filesize_approx: An estimate for the number of bytes
width: Width of the video, if known
height: Height of the video, if known
aspect_ratio: Aspect ratio of the video, if known
So a [height<=1080] filter should do it.
GitHub - yt-dlp/yt-dlp: A feature-rich command-line audio/video downloader
A feature-rich command-line audio/video downloader - yt-dlp/yt-dlp (GitHub)
Others have given good examples of the format selection you were aiming for.
For bulk download, simply create a list.txt file in your target directory and add all the URLs, one per line. Then:
yt-dlp -a list.txt {your options here}
(-a is short for --batch-file, which reads the URLs from a file.)
It is noteworthy that, instead of listing URLs manually, you can also grab entire playlists from the relevant platforms if that’s what you’re after, including preserving the playlist names as directory names. The same even goes for entire channels.
Just combining our answers into a more complete one:
Download from a premade text file:
yt-dlp -a list.txt -f "bv*[height<=1080][ext=mp4]+ba[ext=m4a]/b[ext=mp4]"
Download a playlist:
yt-dlp -f "bv*[height<=1080][ext=mp4]+ba[ext=m4a]/b[ext=mp4]" --yes-playlist {playlist URL}
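If you'd rather script the whole batch, yt-dlp can also be driven from Python via its documented embedding API. Here is a minimal sketch under the same assumptions as the commands above (list.txt holds one URL per line):

from yt_dlp import YoutubeDL

# Same selector as above: best <=1080p MP4 video plus M4A audio,
# falling back to the best single MP4 file; ffmpeg merges the pair.
opts = {"format": "bv*[height<=1080][ext=mp4]+ba[ext=m4a]/b[ext=mp4]"}

with open("list.txt") as f:          # one URL per line, blank lines skipped
    urls = [line.strip() for line in f if line.strip()]

with YoutubeDL(opts) as ydl:         # downloads every URL in the batch
    ydl.download(urls)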
Ukrainian Ex-Official Found Dead Near Russian Defector Pilot’s Killing Site In Spain
A high-ranking former Ukrainian Interior Ministry official, Igor Grushevsky, was found dead under suspicious circumstances in the same Spanish residen...
Anonymous103 (South Front)
Polish border guards claim that Ukrainians are increasingly using fake driver's licenses to avoid being conscripted into the war, - Bild
The Polish Border Guard is increasingly detecting fake Ukrainian C-category driver’s licenses for trucks among those entering Poland from Ukraine.
newsmaker newsmaker (English News front)
Instacart’s former CEO is taking the reins of a big chunk of OpenAI
Incoming OpenAI executive Fidji Simo, who will start Aug. 18 as its “CEO of Applications” and report directly to CEO Sam Altman, sent a memo to employees Monday.
Hayden Field (The Verge)
Scientists Are Now 43 Seconds Closer to Producing Limitless Energy
The Wendelstein 7-X stellarator in Germany set a record with 43 seconds of plasma, marking a major step toward clean, sustainable nuclear fusion energy.
Elizabeth Rayne (Popular Mechanics)
This is a very good point since tritium is a very limited resource.
The hope is that it will be generated by the fusion reactor itself using tritium breeder blankets: iter.org/machine/supporting-sy…
Whether that will work remains to be seen.
Tritium breeding
In the deuterium-tritium (D-T) fusion reaction, high energy neutrons are released along with helium atoms.
ITER - the way to new energy
The Hater's Guide To The AI Bubble
Hey! Before we go any further — if you want to support my work, please sign up for the premium version of Where’s Your Ed At, it’s a $7-a-month (or $70-a-year) paid product where every week you get a premium newsletter, all while supporting my free w…
Edward Zitron (Ed Zitron's Where's Your Ed At)
Palestinians Are Collapsing in Gaza's Streets From Israeli-Imposed Starvation Campaign
cross-posted from: lemmy.ml/post/33477630
[can't believe that it keeps getting worse, but it does]
A frontline report on a people forced to face death from starvation or being shot in a perilous quest to obtain meager rations.
Abdel Qader Sabbah and Sharif Abdel Kouddous (Drop Site News)
Jul 21, 2025
Over the past five days alone, more than 550 Palestinians have been killed in Gaza, according to ministry of health figures. The confirmed death toll since the beginning of the war crossed 59,000 on Monday in what is widely acknowledged to be a vast undercount. Over the past two months, more than 1,000 Palestinians have been killed as they are forced to seek aid in militarized zones in a system mostly overseen by the Gaza Humanitarian Foundation (GHF), a shadowy U.S.- and Israeli-backed group.

One of the deadliest days for aid seekers came on Sunday, when over 70 people were killed, at least 67 of them in northern Gaza where Israeli troops opened fire on crowds trying to get food from a World Food Program convoy entering through the Zikim crossing.
“The tank came, surrounded us, and started shooting at us and we kept raising our hands,” Ibrahim Hamada, who was wounded in the leg, told Drop Site as he lay on a hospital gurney wincing in pain. “There were many martyrs, no one was able to retrieve them. I crawled on my stomach just to reach a car to take me to the hospital,” he said. “I went there to eat, because there was no food at home.”
i think lots of people do care. i do. but i cannot do anything
it is not as if anything i do or say makes even a small difference in all this. israel is going mad, like a rabid dog, and does not listen to reason or anything else anymore. there is nothing that i can do, and i guess the same goes for most other agencies.
it would still be nice to see big countries like germany say publicly that israel is stupid and should stop though
[can’t believe that it keeps getting worse, but it does]
I was assured by many "uncommitted" lemmykins that this wasn't possible.
There was certainly no course change, which Uncommitted tried to promote as an option. It was an attempt to make a public appeal that genocide should be an issue worth making political decisions around.
The lesson the victors over Uncommitted demand is that genocide is not a political issue worth acting or voting on.
The process as explained in this article has nothing to do with privacy. The problem with privacy is not that I send Google a query; it's that Google is scanning my machine, gathering cookie data, recording every move I make, mixing and matching my data with data from other sites, data from data brokers, also using third-party cookies, etc etc etc...
Encrypting the query I make with Google isn't going to change much of that.
Dating Apps Need to Learn How Consent Works
Staying safe whilst dating online should not be the responsibility of users—dating apps should be prioritizing our privacy by default, and laws should require companies to prioritize user privacy over their profit.
Electronic Frontier Foundation
Writing is thinking - On the value of human-generated scientific writing in the age of large-language models.
Writing is thinking - Nature Reviews Bioengineering
On the value of human-generated scientific writing in the age of large-language models.
Nature
UK, France and 23 other countries say the war in Gaza ‘must end now’
cross-posted from: lemmy.ml/post/33476377
[now let's see if they do something]
By SYLVIA HUI and JILL LAWLESS
Updated 1:05 PM EDT, July 21, 2025
LONDON (AP) — Twenty-five countries including #Britain, #France and a host of #European nations issued a joint statement on Monday that puts more pressure on #Israel, saying the war in #Gaza “must end now” and Israel must comply with international law.

The foreign ministers of countries including #Australia, #Canada and #Japan said “the suffering of civilians in Gaza has reached new depths.” They condemned “the drip feeding of aid and the inhumane killing of civilians, including children, seeking to meet their most basic needs of water and food.”
The statement described as “horrifying” the deaths of over 800 #Palestinians who were seeking aid...
https://apnews.com/article/europe-israel-hamas-war-gaza-e4062cffa9585790061105236a93d8e5/
What are the 50 European states?
Anything less than sanctions and total economic boycott is a stalling tactic aimed at allowing the genocide to finish.
This is no different from the police showing up at a drug house while one of the residents stands in the doorway opining about how horrible it is that drugs were discovered at their home and how morally bankrupt the illicit drug trade is, all so their buddy can finish flushing the stash. Except it's even less subtle when countries do it.
VS Achuthanandan, politician who pushed for Linux adoption in India, passed away today
India has one of the highest rates of (desktop) Linux usage in the world - hovering around 10%, according to StatCounter. Why is this? One reason is concern over software controlled by foreign countries - particularly the US and China. But another is cost.
The first major boost for Linux and other free software in India came in 2006, when VS Achuthanandan - who passed away today - was elected Chief Minister of the state of Kerala. His government came up with a policy to shift all government computers to free software, starting with schools and colleges.
When the financial benefits became apparent, other states and the Union government followed suit.
Microsoft Windows to be replaced by Maya OS amid rising cyber threats
Indian government agencies reportedly developed Ubuntu-based Maya OS for more than six months.
Vinay Patel (International Business Times UK)
Reverse engineering the mysterious Up-Data Link Test Set from Apollo
cross-posted from: lemmy.bestiver.se/post/507866
Comments
Reverse engineering the mysterious Up-Data Link Test Set from Apollo
Back in 2021, a collector friend of ours was visiting a dusty warehouse in search of Apollo-era communications equipment. A box with NASA-st...
www.righto.com
"Wait and respect" — Peskov's response to Russophobia in Azerbaijan
"Wait and respect" — Peskov's response to Russophobia in Azerbaijan: EADaily
EADaily, July 21st, 2025. It is important for Russia that Russians are respected in Azerbaijan, Russian Presidential spokesman Dmitry Peskov said, answering a question from journalists about the prospects for relations with Azerbaijan amid the confro…
EADaily
The Russian army is moving to the Dnipropetrovsk region, there is panic in Kiev — summary: EADaily
EADaily, July 21st, 2025. The Russian Armed Forces are reaching the state border in the Dnipropetrovsk direction with a broad front.
EADaily
EADaily is among the laziest, most obvious Russian propaganda channels. Full stop. Whatever you're getting paid to stink up this thread with your propaganda, please, for your sake, insist on hard foreign currency payments.
Your continuous stream of "uS emPiRe FaiLliNg, rUzZiA $tRoNk!!" variants is admirable for its dedication, but could use a little balance and subtlety if you expect actual humans to be influenced by it.
Instant aggressive whataboutism response from a two-month-old burner account that posts pro-Russia slop several times a day! You're a moderately well programmed bot.
Wailing about the 'empire's propaganda machine' while literally trumpeting Russian talking points with shameless aplomb. <chef's kiss>. Spectacular stuff.
That's okay. Your stream is a constant barrage of propaganda jumping across Kremlin-aligned and China-friendly talking points: "AOC IS A FRAUD," Israel is committing genocide, Zelensky is a fascist, generalized complaints about Western propaganda, complaints about NATO expansion, celebrations of Chinese achievements in high-speed rail. On a quick scan of your history, this one jumps out re: the US sending weapons to Ukraine: "I don’t care. Whatever ends up weakening the US. I’m tired of it turning my region into a shooting range."
Let's dissect that one for a second - the mental gymnastics needed to get around it are truly Olympic caliber: complaining about external forces turning your region into a shooting gallery, given that Russia and Iran have funded Hezbollah, Hamas, the PLO, Assad's Syria, and general terrorist activity, with frequent direct military intervention, for the last 75 years, and yet implying all blame rests with the U.S. and none with the corrosive, mischievous death cult of Russia and Iran. Spectacular stuff.
So - think whatever you want. It's cool - this is social media, and you have picked your side, loud and clear. West Bad, Ukraine Bad. Got it. Yawn. Although if you are actually a human being, rather than a paid spammer or bot, it raises the question of why you would waste your time picking fights on a Ukraine-sympathetic thread.
LGBT movement tagged as extremist in Russia — Supreme Court
As the lawsuit pointed out, various extremism-related signs and manifestations, including incitement of social and religious discord, have been identified in the movement's activities in Russia.
TASS
Russian Forces Press Multi-Front Offensive: Ukraine’s Defenses Stretched to Breaking Point
Russian troops have intensified offensive operations along several key axes in Ukraine, making tactical advances while stretching Ukrainian defenses. ...
Anonymous103 (South Front)
HDR Video Playback Lands in Chromium on Wayland
Chromium adds Wayland color-management-v1 support, allowing HDR rendering on supported platforms with a default-enabled feature flag.
Bobby Borisov (Linuxiac)
Etterra
in reply to Davriellelouna
AngryRobot
in reply to Etterra
terrific
in reply to Davriellelouna
We're not even remotely close. The promise of AGI is part of the AI hype machine and taking it seriously is playing into their hands.
Irrelevant at best, harmful at worst 🤷
qt0x40490FDB
in reply to terrific
terrific
in reply to qt0x40490FDB
I hold a PhD in probabilistic machine learning and advise businesses on how to use AI effectively for a living, so yes.
IMHO, there is simply nothing indicating that it's close. Sure, LLMs can do some incredibly clever-sounding word extrapolation, but the current "reasoning models" still don't actually reason. They are just LLMs with some extra steps.
There is lots of information out there on the topic so I'm not going to write a long justification here. Gary Marcus has some good points if you want to learn more about what the skeptics say.
qt0x40490FDB
in reply to terrific
terrific
in reply to qt0x40490FDB
I definitely think that's remarkable. But I don't think scoring high on an external measure like a test is enough to prove the ability to reason. For reasoning, the process matters, IMO.
Reasoning models work by Chain-of-Thought, which has been shown to provide some false reassurance about their process: arxiv.org/abs/2305.04388
Maybe passing some math test is enough evidence for you but I think it matters what's inside the box. For me it's only proved that tests are a poor measure of the ability to reason.
Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting
arXiv.org
qt0x40490FDB
in reply to terrific
I’m sorry, but this reads to me like “I am certain I am right, so evidence that implies I’m wrong must be wrong.” And while sometimes that really is the right approach to take, more often than not you really should update the confidence in your hypothesis rather than discarding contradictory data.
But there must be SOMETHING which is a good measure of the ability to reason, yes? If reasoning is an actual thing that actually exists, then it must be detectable, and there must be a way to detect it. What benchmark do you propose?
You don’t have to seriously answer, but I hope you see where I’m coming from. I assume you’ve read Searle, and I cannot express to you the contempt in which I hold him. I think, if we are to be scientists and not philosophers (and good philosophers should be scientists too) we have to look to the external world to test our theories.
For me, what goes on inside does matter, but what goes on inside everyone everywhere is just math, and I haven’t formed an opinion about what math is really most efficient at instantiating reasoning, or thinking, or whatever you want to talk about.
To be honest, the other day I was convinced it was actually derivatives and integrals, and, because of this, that analog computers would make much better AIs than digital computers. (But Hava Siegelmann’s book is expensive, and, while I had briefly lifted my book buying moratorium, I think I have to impose it again).
Hell, maybe Penrose is right and we need quantum effects (I really really really doubt it, but, to the extent that it is possible for me, I try to keep an open mind).
🤷♂️
terrific
in reply to qt0x40490FDB
I'm not sure I can give a satisfying answer. There are a lot of moving parts here, and a big issue is definitions, which you also touch upon with your reference to Searle.
I agree with the sentiment that there must be some objective measure of reasoning ability. To me, reasoning is more than following logical rules. It's also about interpreting the intent of the task. The reasoning models are very sensitive to initial conditions and tend to drift when the question is not super precise or if they don't have sufficient context.
The AI models are in a sense very fragile to the input. Organic intelligence on the other hand is resilient and also heuristic. I don't have any specific idea for the test, but it should test the ability to solve a very ill-posed problem.
cmhe
in reply to qt0x40490FDB
I think we should also set some energy limits for those tests. Before, it was assumed that the tests were done by humans, who can do them after eating some crackers and a bit of water.
Now we are comparing that to massive data centers that need nuclear reactors to have enough power to work through these problems...
qt0x40490FDB
in reply to terrific
Gary Marcus is certainly good. It’s not as if I think, say, LeCun, or any of the many people who think that LLMs aren’t the way, are morons. I don’t think anyone thinks all the problems are currently solved. And I think long timelines are still plausible, but I think dismissing short timelines out of hand is thoughtless.
My main gripe is how certain people are about things they know virtually nothing about. And how slapdash their reasoning is. It seems to me most people’s reasoning goes something like “there is no little man in the box, it’s just math, and math can’t think.” Of course, they say it with a lot fancier words, like “it’s just gradient descent,” as if human brains couldn’t have gradient descent baked in anywhere.
But, out of interest what is your take on the Stochastic Parrot? I find the arguments deeply implausible.
terrific
in reply to qt0x40490FDB
I'm not saying that we can't ever build a machine that can think. You can do some remarkable things with math. I personally don't think our brains have baked-in gradient descent, and I don't think neural networks are a lot like brains at all.
The stochastic parrot is a useful vehicle for criticism, and I think there is some truth to it. But I also think LLMs display some super impressive emergent features. But I still think they are really far from AGI.
AnarchoEngineer
in reply to qt0x40490FDB
Engineer here with a CS minor in case you care about ethos: We are not remotely close to AGI.
I loathe Python irrationally (and I guess I’m a masochist who likes to reinvent the wheel, programming-wise, lol), so I’ve written my own neural nets from scratch a few times.
Most common models are trained by gradient descent, but this only works when you have a specific response in mind for certain inputs. You use the difference between the desired outcome and actual outcome to calculate a change in weights that would minimize that error.
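To make that loop concrete, here is a toy sketch in Python (a deliberately minimal stand-in with made-up data, not any real training code):

import random

# Toy 1-D model y = w * x, trained by gradient descent on squared error.
# The gap between desired and actual output drives each weight update.
w = random.random()
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs with desired outputs (y = 2x)
lr = 0.05  # learning rate

for epoch in range(200):
    for x, target in data:
        actual = w * x
        error = actual - target    # actual vs. desired outcome
        grad = 2 * error * x       # derivative of (w*x - target)^2 w.r.t. w
        w -= lr * grad             # nudge the weight against the gradient

print(w)  # converges toward 2.0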
This has two major preventative issues for AGI: input size limits, and determinism.
The weight matrices are set for a certain number of inputs. Unfortunately you can’t just add a new unit of input and assume the weights will be nearly the same. Instead you have to retrain the entire network. (This problem is called transfer learning if you want to learn more)
This input constraint is preventative of AGI because it means a network trained like this cannot have an input larger than a certain size. That's problematic, since the illusion of memory that LLMs like ChatGPT have comes from the fact that they run the entire conversation through the net. It's also problematic from a size and training-time perspective, as increasing the input size rapidly increases basically everything else (for transformer-style attention, the cost grows quadratically with context length).
Point is, current models are only able to simulate memory by literally holding onto all the information and processing all of it for each new word which means there is a limit to its memory unless you retrain the entire net to know the answers you want. (And it’s slow af) Doesn’t sound like a mind to me…
Now determinism is the real problem for AGI from a cognitive standpoint. The neural nets you’ve probably used are not thinking… at all. They literally are just a complicated predictive algorithm like linear regression. I’m dead serious. It’s basically regression just in a very high dimensional vector space.
ChatGPT does not think about its answer. It doesn’t have any sort of object identification or thought delineation because it doesn’t have thoughts. You train it on a bunch of text and have it attempt to predict the next word. If it’s off, you do some math to figure out what weight modifications would have led it to a better answer.
All these models do is what they were trained to do. Now, they were trained to be able to predict human responses, so yeah, it sounds pretty human. They were trained to reproduce answers from Stack Overflow and Reddit, etc., so they can answer those questions relatively well. And hey, it is kind of cool that they can even answer some questions they weren’t trained on, because those are similar enough to the questions they were trained on… but it’s not thinking. It isn’t doing anything. The program is just multiplying numbers that were previously set by an input to find the most likely next word.
This is why LLMs can’t do math. Because they don’t actually see the numbers, they don’t know what numbers are. They don’t know anything at all because they’re incapable of thought. Instead there are simply patterns in which certain numbers show up and the model gets trained on some of them but you can get it to make incredibly simple math mistakes by phrasing the math slightly differently or just by surrounding it with different words because the model was never trained for that scenario.
Models can only “know” as much as what was fed into them and hey sometimes those patterns extend, but a lot of the time they don’t. And you can’t just say “you were wrong” because the model isn’t transient (capable of changing from inputs alone). You have to train it with the correct response in mind to get it to “learn” which again takes time and really isn’t learning or intelligence at all.
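As a concrete picture of “just multiplying numbers to find the most likely next word,” here is a toy sketch (the three-word vocabulary and all the weights are invented for illustration):

import math

# One next-token step: a context vector times a weight matrix yields a
# score (logit) per vocabulary word; softmax turns scores into probabilities.
vocab = ["cat", "sat", "mat"]
context = [0.2, -1.0, 0.5]        # hidden state summarizing the prior words
W = [[0.1, 0.4, -0.2],            # one row of weights per vocabulary word
     [0.7, -0.3, 0.9],
     [-0.5, 0.2, 0.1]]

logits = [sum(w * c for w, c in zip(row, context)) for row in W]
exps = [math.exp(l) for l in logits]
probs = [e / sum(exps) for e in exps]

# A deterministic arg-max pick of the most likely next word; no thought involved.
print(vocab[max(range(len(vocab)), key=lambda i: probs[i])])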
Now there are some more exotic neural networks architectures that could surpass these limitations.
Currently I’m experimenting with Spiking Neural Nets which are much more capable of transfer learning and more closely model biological neurons along with other cool features like being good with temporal changes in input.
However, there are significant obstacles with these networks, and not as much research, because they only run well on specialized hardware (since they are meant to mimic biological neurons, which run simultaneously) and you kind of have to train them slowly.
You can do some tricks to use gradient descent but doing so brings back the problems of typical ANNs (though this is still possibly useful for speeding up ANNs by converting them to SNNs and then building the neuromorphic hardware for them).
SNNs with time based learning rules (typically some form of STDP which mimics Hebbian learning as per biological neurons) are basically the only kinds of neural nets that are even remotely capable of having thoughts and learning (changing weights) in real time. Capable as in “this could have discrete time dependent waves of continuous self modifying spike patterns which could theoretically be thoughts” not as in “we can make something that thinks.”
Like these neural nets are good with sensory input and that’s about as far as we’ve gotten (hyperbole but not by that much). But these networks are still fascinating, and they do help us test theories about how the human brain works so eventually maybe we’ll make a real intelligent being with them, but that day isn’t even on the horizon currently
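For a flavor of the time-based learning rules mentioned above, here is a toy pair-based STDP update (a sketch under common textbook simplifications, with invented constants, not any particular SNN library):

import math

# Pair-based STDP: a presynaptic spike shortly before a postsynaptic spike
# strengthens the synapse; the reverse order weakens it. dt = t_post - t_pre.
A_PLUS, A_MINUS = 0.01, 0.012   # learning amplitudes (made-up values)
TAU = 20.0                      # plasticity time constant, in ms

def stdp_dw(t_pre, t_post):
    dt = t_post - t_pre
    if dt > 0:                               # pre before post: potentiate
        return A_PLUS * math.exp(-dt / TAU)
    return -A_MINUS * math.exp(dt / TAU)     # post before pre: depress

w = 0.5
w += stdp_dw(t_pre=10.0, t_post=15.0)   # causal pairing strengthens w
w += stdp_dw(t_pre=30.0, t_post=22.0)   # acausal pairing weakens w
print(w)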
In conclusion, we are not remotely close to AGI. Current models that seem to think are verifiably not thinking and are incapable of it from a structural standpoint. You cannot make an actual thinking machine using the current mainstream model architectures.
The closest alternative that might be able to do this (as far as I’m aware) is relatively untested and difficult to prototype (trust me, I’m trying). Furthermore, the requirements of learning and thinking largely prohibit the use of gradient descent or similar algorithms, meaning training must be done on a much more rigorous and time-consuming basis that is not economically favorable. Ergo, we’re not even all that motivated to move toward AGI territory.
Lying to say we are close to AGI when we aren’t at all close, however, is economically favorable which is why you get headlines like this.
Badabinski
in reply to AnarchoEngineer
slaacaa
in reply to AnarchoEngineer
Wow, what an insightful answer.
I have been trying to separate the truth from the hype and learn more about how LLMs work, and this explanation has been one of the best I’ve read on the topic. You strike a very good balance by going deep enough while still keeping it understandable.
A question: I remember using Wolfram Alpha a lot back in university, 15+ years ago. From a user perspective, it seems very similar to LLMs, but it was very accurate with math. From this, I take it that modern LLMs are not the evolution of that model, yet WA still appeared to be ahead of its time. What is/was the difference?
AnarchoEngineer
in reply to slaacaa
Thanks, I almost didn’t post because it was an essay of a comment lol, glad you found it insightful.
As for Wolfram Alpha, I’m definitely not an expert, but I’d guess the reason it was good at math was that it would simply translate your problem from natural language into commands that could be sent to a math engine to do the actual calculation.
So it basically acts like a language translator, but from typed-out math to a programming language for some advanced calculation program (like Wolfram Mathematica).
Again, this is just speculation because I’m a bit too tired to look into it rn, but it seems plausible since we had basic language translators online back then (I think…) and I’d imagine parsing written math is probably easier than natural language translation
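A minimal sketch of that “translate, then hand off to a math engine” idea, with SymPy standing in as the engine (the naive equation parsing here is invented purely for illustration):

from sympy import Eq, solve, symbols, sympify

x = symbols("x")

# Crude stand-in for the "translation" step: turn a typed-out equation into
# symbolic objects, then let the math engine do the exact solving.
def answer(query: str):
    lhs, rhs = query.split("=")
    return solve(Eq(sympify(lhs), sympify(rhs)), x)

print(answer("2*x + 3 = 7"))   # [2] - exact, unlike next-word prediction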
Veidenbaums
in reply to AnarchoEngineer
Perspectivist
in reply to terrific
That’s just the other side of the same coin whose flip side claims AGI is right around the corner. The truth is, you couldn’t possibly know either way.
ExLisper
in reply to Perspectivist
I think the argument is that we're not remotely close when considering the specific techniques used by the current generation of AI tools. Of course, people could make a new discovery any day and achieve AGI, but that's a different discussion.
terrific
in reply to Perspectivist
That's true in a somewhat abstract way, but I just don't see any evidence of the claim that it is just around the corner. I don't see what currently existing technology can facilitate it. Faster-than-light travel could also theoretically be just around the corner, but it would surprise me if it was, because we just don't have the technology.
On the other hand, the people who push the claim that AGI is just around the corner usually have huge vested interests.
cyd
in reply to terrific
terrific
in reply to cyd
I think that's a very generous use of the word "superintelligent". They aren't anything like what I associate with that word anyhow.
I also don't really think they are knowledge retrieval engines. I use them extensively in my daily work, for example to write emails and generate ideas. But when it comes to facts they are flaky at best. It's more of a free association game than knowledge retrieval IMO.
ZILtoid1991
in reply to terrific
SpicyLizards
in reply to Davriellelouna
Asafum
in reply to Davriellelouna
Ummm, no? If moneyed interests want it, then it happens. We have absolutely no control over whether it happens. Did we stop Recall from being forced down our throats with Windows 11? Did we stop Gemini from being forced down our throats?
If capital wants it capital gets it. 🙁
drapeaunoir
in reply to Asafum
masterofn001
in reply to drapeaunoir
drapeaunoir
in reply to masterofn001
scarabic
in reply to Asafum
Perspectivist
in reply to Davriellelouna
The path to AGI seems inevitable - not because it’s around the corner, but because of the nature of technological progress itself. Unless one of two things stops us, we’ll get there eventually:
1) we destroy ourselves before we get there, or
2) there’s something fundamentally mysterious about the biological computer that is the human brain that we simply can’t replicate any other way.
Barring those, the outcome is just a matter of time. This argument makes no claim about timelines - only trajectory. Even if we stopped AI research for a thousand years, it’s hard to imagine a future where we wouldn’t eventually resume it. That's what humans do: improve our technology.
The article points to cloning as a counterexample but that’s not a technological dead end, that’s a moral boundary. If one thinks we’ll hold that line forever, I’d call that naïve. When it comes to AGI, there’s no moral firewall strong enough to hold back the drive toward it. Not permanently.
rottingleaf
in reply to Perspectivist
As if silicon were the only technology we have to build computers.
Perspectivist
in reply to rottingleaf
rottingleaf
in reply to Perspectivist
Perspectivist
in reply to rottingleaf
I haven’t claimed that it is. The point is, the only two plausible scenarios I can think of where we don’t eventually reach AGI are: either we destroy ourselves before we get there, or there’s something fundamentally mysterious about the biological computer that is the human brain - something that allows it to process information in a way we simply can’t replicate any other way.
I don’t think that’s the case, since both the brain and computers are made of matter, and matter obeys the laws of physics. But it’s at least conceivable that there could be more to it.
rottingleaf
in reply to Perspectivist
I personally think that the additional component (suppose it's energy) that modern approaches miss is the sheer amount of entropy a human brain gets - plenty of many-times-duplicated sensory signals with pseudo-random fluctuations. I don't know how one can use lots of entropy to replace lots of computation (OK, I know what the Monte Carlo method is, just not how it applies to AI), but superficially this seems to be the way that will be taken at some point.
On your point - I agree.
I'd say we might reach AGI soon enough, but it will be impractical to use compared to a human.
Matching a human's efficiency is something very far away, because the human brain has undergone, so to speak, an optimization/compression driven by the energy of evolution since the beginning of life on Earth.
Deathgl0be
in reply to Davriellelouna
Perspectivist
in reply to Deathgl0be
SparrowHawk
in reply to Davriellelouna
gandalf_der_12te
in reply to Davriellelouna
AI will not threaten humans due to sadism or boredom, but because it takes jobs and makes people jobless.
When there is lower demand for human labor, according to the rule of supply and demand, prices (aka. wages) for human labor go down.
The real crisis is one of sinking wages, a lack of social safety nets, and a lack of future prospects for workers. That's what should actually be discussed.
economic model of price determination in microeconomics
Contributors to Wikimedia projects (Wikimedia Foundation, Inc.)
Vinstaal0
in reply to gandalf_der_12te
Not sure we will even really notice that in our lifetime; it is taking decades to automate things like invoice processing. Heck, in the US they can't even get proper bank connections set up.
Also, tractors replaced a lot of workers on the land, and computers both eliminated a lot of office jobs and created a lot of new ones at the same time.
Jobs will change, that's for sure, and I think most of the heavy-labour jobs will become more expensive, since they are harder to replace.
Codpiece
in reply to Davriellelouna
Outwit1294
in reply to Codpiece
markovs_gun
in reply to Davriellelouna