"each individual kid is now hooked into a Nonsense Machine"
Edit: I got those screenshots from imgur. They might be from Xitter, with the account deleted, or maybe from Threads, with the account not visible without login? 🤷
2nd Edit: @edgeofeurope found this: threadreaderapp.com/thread/180…
#school #AI #KI #meme #misinformation #desinformation
Thread by @stilloranged on Thread Reader App
@stilloranged: weird interaction with a student this week. they kept coming up with weird "facts" ("greek is actually a combination of four other languages") that left me baffled. i said let's look this stuff up tog…
threadreaderapp.com
Bolt
in reply to b-rain

This also ties into how the way we design things influences how people perceive them.
Before ChatGPT, there was "OpenAI Playground," a paragraph-sized box where you would type words, and the GPT-2 model would respond or *continue* the prompt, highlighted in green.
Then ChatGPT came along, but it was formatted as a chat. Less an authoritative source, more a conversational tool.
Now the ChatGPT home page is formatted like a search engine. A tagline, search bar, and suggested prompts.
joël
in reply to Bolt
Bolt
in reply to joël

@jollysea Technically the models can vary a bit in how they handle that (they could be using an XML format with <user> and <llm> tags, for example), but yeah, that's the structure essentially all conversational LLMs have to follow.

In the end, LLMs are just word-prediction machines. They predict the most likely next word based on the prior context, and that's it. If nothing delineated the original prompt from the LLM's response, the model would naturally just continue the prompt.
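To make that delineation concrete, here's a minimal sketch of how a chat front end might flatten a conversation into the single string a next-word predictor actually sees. The <user>/<llm> tags are just the hypothetical format mentioned above, not any real model's tokens; every model uses its own markers, but the principle is the same: without some boundary, nothing stops the model from simply extending the user's text.

```python
# Minimal sketch: how a chat UI might turn a conversation into one flat
# string for a next-word predictor. The <user>/<llm> tags are the
# hypothetical format from the comment above, not a real model's tokens.

def build_prompt(turns: list[tuple[str, str]]) -> str:
    """Flatten (role, text) turns into a single prompt string."""
    parts = []
    for role, text in turns:
        tag = "user" if role == "user" else "llm"
        parts.append(f"<{tag}>{text}</{tag}>")
    # Open an <llm> tag at the end so the most likely continuation is
    # "the assistant's next turn" rather than more of the user's text.
    parts.append("<llm>")
    return "\n".join(parts)

conversation = [
    ("user", "Is Greek really a combination of four other languages?"),
    ("llm", "No. Greek is its own branch of the Indo-European family."),
    ("user", "Where did that claim come from, then?"),
]
print(build_prompt(conversation))
```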
Bolt
in reply to Bolt

@jollysea That was actually one of the most fun parts of the original interface. If you wanted it to continue some code, you could just paste in your code and it would add on to it. Have a random idea for a poem? Write the first line, and it would write a poem that continues from that starting line in a fairly cohesive manner.

Now any time you ask an LLM to do something, it won't just do the thing you wanted, it'll throw in a few paragraphs of extra text/pleasantries/reiteration you didn't ask for.
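For anyone who never used that interface, here's a rough sketch of the same continuation-style workflow using the openly available gpt2 checkpoint through the Hugging Face transformers pipeline. The library and model are assumptions for the example (the original Playground ran OpenAI's own hosted models); the point is just that a base model does nothing but continue whatever text you hand it.

```python
# Sketch of completion-style prompting with a base (non-chat) model.
# Uses the public gpt2 checkpoint via Hugging Face transformers; this is
# an illustrative stand-in, not the model behind the OpenAI Playground.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Hand it the first line of a poem; the model simply keeps writing.
opening_line = "The classroom hummed with half-remembered facts,"
result = generator(opening_line, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```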
Bolt
in reply to Lord Caramac the Clueless, KSC

@LordCaramac I'd assume that has something to do with how GPT-2 was a lot more loosely fine-tuned than GPT-3 and subsequent models.

GPT-2 was more of an attempt at simply mimicking text, rather than mimicking text *in an explicitly conversational, upbeat, helpful tone designed to produce mass-market-acceptable language*.

Like how GPT-2 would usually just do the thing you asked for, whereas GPT-3 and its successors now all start with "Certainly! Here's a..." or something similar.
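As a rough illustration of what changed between those two styles, here's a sketch of the formatting an instruction-tuned chat model wraps around a request before generating. It assumes the Hugging Face transformers library and TinyLlama/TinyLlama-1.1B-Chat-v1.0, named only as a small, public example of a chat-tuned checkpoint; a base model like GPT-2 would see the request text completely bare.

```python
# Sketch: what a chat-tuned model sees versus a bare continuation prompt.
# TinyLlama/TinyLlama-1.1B-Chat-v1.0 is only an example of a small public
# chat model; any instruction-tuned checkpoint with a chat template works.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

messages = [{"role": "user", "content": "Write a haiku about chalk dust."}]

# A base model (GPT-2 style) would receive the request text as-is.
# A chat model receives it wrapped in role markers plus a cue that the
# assistant speaks next; the upbeat "Certainly!" register is trained on
# top of exactly this kind of scaffolding.
print(tokenizer.apply_chat_template(messages, tokenize=False,
                                    add_generation_prompt=True))
```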