"each individual kid is now hooked into a Nonsense Machine"
Edit: I got those screenshots from imgur. They might be from Xitter with the account deleted, or maybe from Threads with the account not visible without login? 🤷
2nd Edit: @edgeofeurope found this threadreaderapp.com/thread/180…
#school #AI #KI #meme #misinformation #desinformation
in reply to b-rain

What's more, LLM companies can tweak the answers to deliberately mislead. They could make their models lie about any person or group they want to promote or attack.
in reply to b-rain

when did school stop teaching kids how to cite primary sources and write a bibliography ??
in reply to feld

@feld I remember in my schools they pushed citing sources on us strongly. Now I wonder if that's the cause of my current resistance to using LLMs, because I simply can't tolerate unverified answers.
in reply to Mad A. Argon

some of them do cite their sources in their responses, which is nice
in reply to b-rain

This is our greatest doom: an angry generation of annoyed know-it-alls who know absolutely nothing.
in reply to b-rain

This also ties into how the way we design things influences how people perceive them.

Before ChatGPT, there was the "OpenAI Playground," a paragraph-sized box where you would type words and the GPT-2 model would respond to or *continue* the prompt, with its additions highlighted in green.

Then ChatGPT came along, but it was formatted as a chat. Less an authoritative source, more a conversational tool.

Now the ChatGPT home page is formatted like a search engine: a tagline, a search bar, and suggested prompts.

in reply to Bolt

@boltx You probably know this, but it wasn't until a few weeks ago that I learned that, under the hood, the interface has to write "user:" before your prompt and "agent:" after it before handing it to the LLM; otherwise the model would just continue writing your prompt.
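
Roughly, as a toy sketch (real services each use their own vendor-specific chat template; the "user:"/"agent:" labels here are just the ones from this post):

```python
# Toy sketch of the wrapping a chat interface does before the
# model sees anything. Real services use their own vendor-specific
# templates; "user:"/"agent:" are just the labels from this post.
def build_prompt(user_text: str) -> str:
    return f"user: {user_text}\nagent:"

print(build_prompt("Why is the sky blue?"))
# user: Why is the sky blue?
# agent:
```

The model then continues from after "agent:", which is what makes the output read as a reply rather than as more of your prompt.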

in reply to joël

@jollysea Technically the models can vary a bit in how they handle that (e.g. some use an XML-style format with <user> and <llm> tags), but yeah, that's the structure essentially all conversational LLMs have to follow.

In the end, LLMs are just word prediction machines. They predict the most likely next word based on the prior context, and that's it. If nothing delineated the original prompt from the LLM's response, the model would naturally just keep continuing the prompt.
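
To make "predict the most likely next word" concrete, here's a minimal sketch of a single prediction step, assuming the Hugging Face transformers library and the small public gpt2 checkpoint (chosen here purely for illustration):

```python
# One step of next-word prediction, assuming the Hugging Face
# transformers library and the public gpt2 checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "user: Why is the sky blue?\nagent:"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one score per vocabulary token

next_id = logits[0, -1].argmax().item()  # the single most likely next token
print(tokenizer.decode([next_id]))
```

A full response is just this step repeated in a loop, feeding each predicted token back in as context.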

in reply to Bolt

@jollysea That was actually one of the most fun parts of the original interface. If you wanted it to continue some code, you'd just paste the code in and it would add on to it. Have a random idea for a poem? Write the first line, and it would write a poem that continues cohesively from that starting line.

Now any time you ask an LLM to do something, it won't just do the thing you wanted; it'll throw in a few paragraphs of extra text/pleasantries/reiteration you didn't ask for.
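
For comparison, that old completion-style workflow is still easy to reproduce with a base model; a sketch, again assuming transformers and the gpt2 checkpoint:

```python
# Sketch of the old completion-style workflow: no chat wrapper,
# the model just extends whatever text you paste in. Again using
# the public gpt2 checkpoint via transformers, as an illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

first_line = "The moon hangs low above the silent town,\n"
inputs = tokenizer(first_line, return_tensors="pt")

out = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(out[0]))  # your line, plus its continuation
```

No "Certainly! Here's a..." preamble; the output simply picks up where your text left off.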

in reply to Bolt

@boltx @jollysea @LordCaramac But also, it was hard to project another mind into that interface, so they had to change it for marketing reasons 🤷
in reply to Lord Caramac the Clueless, KSC

@LordCaramac I'd assume that has something to do with how GPT-2 was a lot more loosely fine-tuned than GPT-3 and subsequent models.

GPT-2 was more of an attempt at simply mimicking text, rather than mimicking text *in an explicitly conversational, upbeat, helpful tone designed to produce mass-market-acceptable language*.

Like how GPT-2 would usually just do the thing you asked for, whereas GPT-3 and its successors all start with "Certainly! Here's a..." or something similar.

in reply to Bolt

@boltx GPT-2 was often quite unhinged, producing surreal text that read like a weird hallucination or the ramblings of a madman. I liked it.