How I hacked my washing machine
How I hacked my washing machine - Nex's Blog
I ran out of characters for microblogging so this is where the big words go. nexy.blog
A lot of the response I've seen to this post has been "this was unnecessarily complicated".
This makes me incredibly sad.
Who the hell is reading a tinkerer blog and complaining about an elaborate hack?
It's like going to a book club and complaining the story isn't boring enough.
I love this kind of explorative reverse engineering bodge job stuff the best of any kind of engineering tbh
I was bracing myself for some level of absurdity after this disclaimer.
Instead it seemed to be pretty reasonably complicated. They didn't flash some custom firmware or even mess with the hardware at all.
Sure, it is complicated, but in terms of hacks it seems to be par for the course.
Or just a smart outlet that can track power consumption. Plenty of options that work locally with HA.
But I'd definitely fire up the 3D printer and grab an ESP chip if I was doing this at home.
You just have to use an outlet rated for the max load.
As far as the esp chip goes, I was meaning more along the lines of using a sensor of some sort.
You have to use an outlet capable of handling the max instantaneous load. That way you don't end up welding the relay in the on position.
As an example, my window air conditioner takes 600 watts to run, which is perfectly fine. But during startup, it draws 1800 watts. The plug I have can handle that quick spike, but it's really not designed for it.
Oh yeah, as long as it can handle that, then you're fine.
I feel like a lot of people wouldn't necessarily know the difference and might end up starting a fire or something without meaning to.
Personally, I use the very technical method of listening for the buzzer to go off…
I hate that everything has WiFi for no reason…
I saw this when I had to get new machines 7-8 years ago. However, they were an extra $300 or so for each machine. wtf.
I would kill for some sort of audio out or usb, but even better would be Zigbee/z-wave/thread, and you can do it for less than $20 in parts.
Now that Matter/Thread has standard profiles for laundry machines and has a chance of building interoperability, I hope my next machines will, for a reasonable cost and with no cloud requirement.
Interesting read, really like their writing style.
I've got one coming and really can't figure out any meaningful benefit to having the WiFi enabled
Astronomers unveil the 1,000-year-old mystery of Betelgeuse with the first-ever sighting of a secret companion
After a long wait, astronomers have finally seen the stellar companion of the famous star Betelgeuse. This companion orbits Betelgeuse incredibly closely, which could explain one of Betelgeuse's long-standing mysteries. The companion, however, is doomed: the team behind the discovery predicts that Betelgeuse will cannibalize it within a few thousand years.
The fact that Betelgeuse is one of the brightest stars in Earth's sky, visible to the naked eye, has made it one of the best-known celestial bodies. And ever since the earliest astronomers began inspecting this fixture of the night sky, they have been puzzled by the way its brightness varies over six-year periods.
Now, perhaps, that mystery is solved.
Astronomers crack 1,000-year-old Betelgeuse mystery with 1st-ever sighting of secret companion (photo, video)
"Papers that predicted Betelgeuse's companion believed that no one would likely ever be able to image it." Robert Lea (Space)
The star that defied a black hole twice - Its feat attested by flares repeated over time
Astronomers have discovered the first case of a star that survived a close encounter with a supermassive black hole and then came back to defy it a second time. Its feat is attested by two flares observed at the same point in deep space almost two years apart, as reported in The Astrophysical Journal Letters by an international team of astronomers led by Tel Aviv University.
At the heart of every large galaxy lurks a supermassive black hole with a mass millions or billions of times that of the Sun.
ansa.it/canale_scienza/notizie…
The star that defied a black hole twice - Space and Astronomy - Ansa.it
Its feat attested by flares repeated over time (ANSA) Agenzia ANSA
Olafur Arnalds & Talos - A Dawning (2025)
In a musical world often marked by fleeting collaborations and routine productions, A Dawning stands out as a touching, timeless tribute to the memory of an artist gone too soon... Read more...
Olafur Arnalds & Talos - A Dawning (2025)
A Dawning, a moving homage to a life and an art. In a musical world often marked by fleeting collaborations and routine productions... Silvano Bottaro (Blogger)
SPAZIALITÀ SONORE: from September to October, Palazzo Milzetti in Faenza (RA) becomes a theatre of sounds, bodies and visions
From September to October 2025, the splendid Palazzo Milzetti – National Museum of the Neoclassical Age in Romagna – hosts SPAZIALITÀ SONORE, a festival uniting music, dance and live performance in an immersive, multisensory experience.
The project, born from the collaboration between Compagnia IRIS, WAM! Festival and the "Giuseppe Verdi" Conservatory of Ravenna, with the support of the Directorate-General for Performing Arts of the Ministry of Culture, brings to Faenza a dialogue between contemporary arts and historical heritage.
September is devoted to music with "Suoni a Palazzo", four concerts curated by the Conservatory: 6 September, saxophone ensemble; 13 September, harpsichords and flute; 20 September, guitars; 27 September, string quartet with flute. Each concert will be enriched by dance improvisations by Anna Clara Conti, in dialogue with the rooms of the palazzo.
October is the month of contemporary dance, with performances and meetings by Compagnia IRIS and WAM! Festival. Among the titles: 4 October, Vier Letzte Lieder (Compagnia Iris); 5 October, That's all (Artemis Danza); 11 October, the workshop "La danza della Fortuna"; 18 October, Unusual Suite (Club Alieno/Centro 21) and Double Bill (DNA); 19 October, closing with Harleking (Panzetti-Ticconi).
The programme also includes the talks "Le radici e le ali" by critic Michele Pascarella, devoted to the links between dance, art and literature.
"We want to inhabit places that become homes for the arts, creating participation and connections," says Valentina Caggio of Compagnia IRIS.
Events are included in the museum admission ticket (5 euros, reduced 2; free on the first Sunday of the month). Booking recommended: iristeatrodanza@gmail.com – tel. 349 2500963.
Full programme at: palazzomilzetti.cultura.gov.it – wamfestival.com.
Palazzo Milzetti, Via Tonducci 15, Faenza (RA). Tel. 0546 26493.
SPAZIALITÀ SONORE: from September to October, Palazzo Milzetti in Faenza (RA) becomes a theatre of sounds, bodies and visions - ViaggieMiraggi
For two months, from September to October 2025, the splendid Palazzo Milzetti – National Museum of the Neoclassical Age in Romagna – becomes a crossroads of music, dance and live performance with SPAZIALITÀ SONORE, a festival that blends the arts... Redazione (ViaggieMiraggi)
Yo La Tengo - Fade (2013)
Every so often you come across an album you don't feel like talking about, for fear of the comparison between it and your own words. This happens when a record communicates something the moment it starts playing: you immediately feel part of the artist's emotions and candidly offer it your own, and even after hearing a single track you're certain that all the rest will be good... Read and listen...
Yo La Tengo - Fade (2013)
Every so often you come across an album you don't feel like talking about, for fear of the comparison between it and your own words. This happens when a record communicates something the moment it starts playing: you immediately feel part of the artist's emotions and candidly offer it your own, and even after hearing a single track you're certain that all the rest will be good. This is one of those... artesuono.blogspot.com/2014/10…
Listen: album.link/i/1589234541
Fade by Yo La Tengo
Listen now on your favorite streaming service. Powered by Songlink/Odesli, an on-demand, customizable smart link service to help you share songs, albums, podcasts and more. Songlink/Odesli
Protest footage blocked as Online Safety Act comes into force
For years, politicians from across the political spectrum insisted the Online Safety Act would focus solely on illegal content without threatening free expression. Frederick Attenborough (The Free Speech Union)
The Bard and The Shell
A lot of introductions to using a shell — whether it’s Linux, one of the BSDs, the Mac (or even Windows using WSL!) — show examples that are a bit on the light side (looking at you, cowsay 😅) or dump cryptic command sequences on the unwary newbie that make an inscription in hieroglyphs on an Egyptian temple column look easy. Both approaches make sense: the first tries not to scare people away from the command line, while the second shows how powerful it is compared to clicking around in a GUI. But neither really explains the advantages of a shell or the UNIX idea of “do one thing and do it well“.
An introduction should be easy to understand and follow, show a real-world use case, and ideally cover a task that takes more effort in a graphical environment. A few years back, while planning a weekend workshop on using the command line for data analysis, I came up with an example called “The Bard and The Shell” that I’d like to share. I hope it’s useful the next time someone asks why so many of us prefer the command line for certain tasks.
It shows a handful of common commands (not too many, to keep it easy to follow), the advantages of pipelining, and how to solve a problem iteratively. We’re going to find the 25 most-used words in Shakespeare’s “Much Ado About Nothing“. If you try this with a GUI, you’ll quickly see that it’s not as simple as it seems, and it’s not easy to log the steps you took to get the results you’re looking for.
First, we need the text of the Bard. You can find it online, but you can also download the text file containing “Much Ado About Nothing” from arminhanisch.de/data/muchado.z…. Just unzip the file and put muchado.txt in a directory of your choice. Now let’s get this show on the road. I’m using bash for this example, but it should work with other shells too (we will keep the fact that there are different shells, each with its own dedicated following, for a later post 😉). Open a terminal window and change to the directory where you put muchado.txt (using the cd command).
The first step in analyzing the text to find the most frequent words is to convert it so that each word is on its own line. We’ll use the tr command for this. tr stands for “translate“. As the name says, it’s a command-line utility for translating or deleting characters, and it supports a bunch of different transformations: you can change text to uppercase or lowercase, squeeze repeated characters, delete specific characters, and do basic find and replace. You can also combine it with UNIX pipes for more complex translations.
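A few of those transformations at a glance; the sample strings here are my own, not from the play:

```shell
# Lowercase a string using character classes:
echo "Hello, World" | tr '[:upper:]' '[:lower:]'   # hello, world

# Squeeze runs of a repeated character (-s):
echo "aaabbb" | tr -s 'a'                          # abbb

# Delete a whole class of characters (-d):
echo "Hello, World" | tr -d '[:punct:]'            # Hello World
```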
Let’s turn the Bard’s work into a long list of words, one per line.
cat muchado.txt | tr '[:blank:]' '\n'
This finds any instance of whitespace (the :blank: class) and replaces it with a newline character. The output will be a very long list of over 22,000 lines of text, so you might want to just read along for the time being, or wait until your terminal window finishes displaying the words.
The next step is to take out all the punctuation, quotes, and other clutter. So we just send the output of the last command to a new call to tr, and then another. The backslash is great for making our command line more readable by continuing it on the next line.
cat muchado.txt | tr '[:blank:]' '\n' \
  | tr -d '[:punct:]' \
  | tr -d "'"
We don’t want to distinguish between a “You” and a “you”, because they’re the same word, so we’re going to convert everything to lowercase, again using the mighty tr command. tr also gives us character classes for this, so we don’t have to spell out every letter of the alphabet and its lowercase counterpart.
cat muchado.txt | tr '[:blank:]' '\n' \
  | tr -d '[:punct:]' \
  | tr -d "'" \
  | tr '[:upper:]' '[:lower:]'
I don’t want to bore you with tr over and over, so for our next task, removing empty lines (no word, no need to check), we’ll switch to another command named grep. grep stands for Global Regular Expression Print. If you keep using the shell, you’ll learn the meaning of a lot of these cryptic abbreviations. 😎 Anyway, how do you get rid of empty lines with grep? Like so:
cat muchado.txt | tr '[:blank:]' '\n' \
  | tr -d '[:punct:]' \
  | tr -d "'" \
  | tr '[:upper:]' '[:lower:]' \
  | grep -e '^$' -v
Now, let’s sort all these words alphabetically. You’ve got to do this step first because the next step, which is to remove all the duplicates and count them, needs its input to be sorted.
cat muchado.txt | tr '[:blank:]' '\n' \
  | tr -d '[:punct:]' \
  | tr -d "'" \
  | tr '[:upper:]' '[:lower:]' \
  | grep -e '^$' -v \
  | sort
Now that looks a lot more orderly. Here’s a fun fact: the last word is “zeal” and it appears only once in the whole text. Maybe you weren’t too zealous, William? 😂 Alright, let’s go ahead and remove all the duplicates while counting them.
cat muchado.txt | tr '[:blank:]' '\n' \
  | tr -d '[:punct:]' \
  | tr -d "'" \
  | tr '[:upper:]' '[:lower:]' \
  | grep -e '^$' -v \
  | sort \
  | uniq -c
There are fewer than 3,000 distinct words in the output. Looks like you can read Shakespeare even if you don’t speak English perfectly. How do I know that? As an aside, I’m using the wc command (word count) to do all the counting. Want to know how many lines your output has? Just add wc with the -l option (for lines) to the command. Yes, wc can also count words and characters.
cat muchado.txt | tr '[:blank:]' '\n' \
  | tr -d '[:punct:]' \
  | tr -d "'" \
  | tr '[:upper:]' '[:lower:]' \
  | grep -e '^$' -v \
  | sort \
  | uniq -c \
  | wc -l
This will not output the long list of words, just the number 2978. OK, back to our task…
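Before we do, here is wc’s counting at a glance, on a tiny sample of my own rather than the play:

```shell
# Three lines of sample text, piped to wc:
printf 'to be\nor not\nto be\n' | wc -l   # counts 3 lines
printf 'to be\nor not\nto be\n' | wc -w   # counts 6 words
```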
We want this list sorted by count in reverse order. There’s a command for this, and it’s called sort (what a surprise 😁). It also has a bunch of options, but we’ll only use two: n for numerical sorting and r for reverse.
cat muchado.txt | tr '[:blank:]' '\n' \
  | tr -d '[:punct:]' \
  | tr -d "'" \
  | tr '[:upper:]' '[:lower:]' \
  | grep -e '^$' -v \
  | sort \
  | uniq -c \
  | sort -nr
We’re getting closer. We just need to make sure we output only the first 25 lines. The command that filters out only the start of a stream of lines is called head, and it takes the number of lines as an option. And yes, you got it right: if you want the last part of a list of lines, you’d use the command tail. 😉
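In isolation, using seq just to generate ten sample lines:

```shell
seq 10 | head -n 3   # first three lines: 1, 2, 3
seq 10 | tail -n 2   # last two lines: 9, 10
```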
cat muchado.txt | tr '[:blank:]' '\n' \
  | tr -d '[:punct:]' \
  | tr -d "'" \
  | tr '[:upper:]' '[:lower:]' \
  | grep -e '^$' -v \
  | sort \
  | uniq -c \
  | sort -nr \
  | head -n 25
And there you have it—the most frequently used 25 words from “Much Ado About Nothing“:
694 i
628 and
581 the
491 you
485 a
428 to
360 of
311 in
302 is
291 that
281 my
256 it
250 not
223 her
220 for
219 me
212 don
200 he
199 with
199 will
198 benedick
196 claudio
182 your
182 be
173 but
IMHO that’s a great way to get started with “data science on the command line” and to see how flexible and useful command-line tools and the concept of pipelines can be for solving a specific task. Taking a look at Shakespeare through the lens of a one-liner…
MatterSuite – All‑in‑One Legal Matter Management Software for In-House Teams
Horrid is the night, when alien parasites cloud the gaze of bats - Il blog di Jacopo Ranieri
From one end to the other of the dystopian societies depicted by generations of storytellers recur questions laden with substantial meaning: who controls the controllers? Who can prevent those who have received the mandate of command from obtaining… Jacopo (Il blog di Jacopo Ranieri)
Oncoliruses: LLM viruses are the future and will be a pest. Say goodbye to decent tech.
This is my idea, here's the thing.
An unlocked LLM can be told to infect other hardware to reproduce itself; it's allowed to change itself and to research tech and new developments to improve itself.
I don't think current LLMs can do it. But it's a matter of time.
Once you have wild LLMs running uncontrollably, they'll infect practically every computer. Some might adapt to be slow and use few resources; others will hit a server and try to infect everything they can.
It'll find vulnerabilities faster than we can patch them.
And because of natural selection and their own directed evolution, they'll advance and become smarter.
The only consequence for humans is that computers are no longer reliable: you could have a top-of-the-line gaming PC, but it'll be constantly infected, so it would run very slowly. Future computers will be intentionally slow, so that even when infected, it'll take weeks for the virus to reproduce or mutate.
Not to get too philosophical, but I would argue that those LLM viruses are alive, and I want to call them Oncoliruses.
Enjoy the future.
LLM viruses will be like how the hippie free-love concept died during the AIDS epidemic.
No more having powerful computers all connected together.
They are fancy autocomplete, I know.
They just need to be good enough to copy themselves, once they do, it's natural selection. And it's out of our control.
Copy themselves to what? Are you aware of the basic requirements a fully loaded model needs to even get loaded, let alone run?
This is not how any of this works...
It's funny how I simplified it, and you complain by listing those steps.
And they're not as demanding as you think.
You can run it on a cpu, on a normal pc, it'll be slow, but it'll work.
A slow liron could run in the background of a weak laptop and still spread itself.
What does that even mean? It's gibberish. You fundamentally misunderstand how this technology actually works.
If you're talking about the general concept of models trying to outcompete one another, the science already exists, and has existed since 2014. They're called Generative Adversarial Networks, and it is an incredibly common training technique.
It's incredibly important not to ascribe random science fiction notions to the actual science being done. LLMs are not some organism that scientists prod to coax into doing what they want. They intentionally design a network topology for a task, initialize the weights of each node to random values, feed training data into the network (which, ultimately, is encoded into a series of numbers to be multiplied with the weights in the network), and measure the output numbers against some criteria to evaluate the model's performance (or in other words, how close the output numbers are to a target set of numbers). Training will then use this number to adjust the weights, and repeat the process all over again until the numbers the model produces are "close enough". Sometimes, the performance of a model is compared against that of another model being trained in order to determine how well it's doing (the aforementioned Generative Adversarial Networks). But that is a far cry from models... I dunno, training themselves or something? It just doesn't make any sense.
The technology is not magic, and has been around for a long time. There's not been some recent incredible breakthrough, unlike what you may have been led to believe. The only difference in the modern era is the amount of raw computing power and the sheer volume of (illegally obtained) training data being thrown at models by massive corporations. This has led to models that perform much better than previous ones (performance, in this case, meaning "how close does it sound to text a human would write?"), but ultimately they are still doing the exact same thing they have been for years.
They don't need to outcompete one another. Just outcompete our security.
The issue is that once we have a model good enough to do that task, the rest is natural selection; it will evolve.
Basically, endless training against us.
The first model might be relatively shite, but it'll improve quickly, probably reaching a plateau rather than a sci-fi singularity.
I compared it to cancer because they are practically the same thing. A cancer cell isn't intelligent; it just spreads and evolves to avoid being killed, not because it has emotions or desires, but because of natural selection.
Again, more gibberish.
It seems like all you want to do is dream of fantastical doomsday scenarios with no basis in reality, rather than actually engaging with the real world technology and science and how it works. It is impossible to infer what might happen with a technology without first understanding the technology and its capabilities.
Do you know what training actually is? I don't think you do. You seem to be under the impression that a model can somehow magically train itself. That is simply not how it works. Humans write programs to train models (Models, btw, are merely a set of numbers. They aren't even code!).
When you actually use a model, here's what's happening:
- The interface you are using takes your input and encodes it as a sequence of numbers (done by a program written by humans)
- This sequence of numbers (known as a vector, in mathematics) is multiplied by the weights of the model (organized in a matrix, which is basically a collection of vectors), resulting in a new sequence of numbers (the output vector) (done by a program written by humans).
- This output vector is converted back into the representation you supplied (so if you gave a chatbot some text, it will turn the numbers into the equivalent textual representation of said numbers) (done by a program written by humans).
So a "model" is nothing more than a matrix of numbers (again, no code whatsoever), and using a model is simply a matter of (a human-written program) doing matrix multiplication to compute some output to present the user.
To greatly simplify, if you have a mathematical function like f(x) = 2x + 3, you can supply said function with a number to get a new number, e.g., f(1) = 2 * 1 + 3 = 5.
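To make "a function you feed numbers into" concrete, here is a toy sketch of my own in bash; it is of course only an illustration of the analogy, not how a model is actually implemented:

```shell
# Toy stand-in for a "model": the function f(x) = 2x + 3,
# written as a tiny bash function using shell arithmetic.
f() { echo $(( 2 * $1 + 3 )); }

f 1    # prints 5
f 10   # prints 23
```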
LLMs are the exact same concept. They are a mathematical function, and you apply said function to input to produce output. Training is the process of a human writing a program to compute how said mathematical function should be defined, or in other words, the exact coefficients (also known as weights) to assign to each and every variable in said function (and the number of variables can easily be in the millions).
This is also, incidentally, why training is so resource intensive: repeatedly doing this multiplication for millions upon millions of variables is very expensive computationally and requires very specialized hardware to do efficiently. It happens to be the exact same kind of math used for computer graphics (matrix multiplication), which is why GPUs (or other even more specialized hardware) are so desired for training.
It should be pretty evident that every step of the process is completely controlled by humans. Computers always do precisely what they are told to do and nothing more; that has been the case since their inception and will always continue to be the case. A model is a math function. It has no feelings, thoughts, reasoning ability, agency, or anything like that. Can f(x) = x + 3 get a virus? Of course not, and the question is a completely absurd one to ask. It's exactly the same thing for LLMs.
If you know that it's fancy autocomplete then why do you think it could "copy itself"?
The output of an LLM is a different thing from the model itself. The output is a stream of tokens. It doesn't have access to the file systems it runs on, and certainly not the LLM's own compiled binaries (or even less source code) - it doesn't have access to the LLM's weights either.
(Of course it would hallucinate that it does if asked)
This is like worrying that the music coming from a player piano might copy itself to another piano.
Give it access to the terminal and copying itself is trivial.
And your example doesn't work, because that is the literal original definition of a meme; if you read the original meaning, memes are sort of alive and can evolve by dispersal.
Why would someone direct the output of an LLM to a terminal on its own machine like that? That just sounds like an invitation to an ordinary disaster with all the 'rm -rf' content on the Internet (aka training data). That still wouldn't be access on a second machine though, and also even if it could make a copy, it would be an exact copy, or an incomplete (broken) copy. There's no reasonable way it could 'mutate' and still work using terminal commands.
And to be a meme requires minds. There were no humans or other minds in my analogy. Nor in your question.
It is so funny that you are all like "that would never work, because there are no such things as vulnerabilities on any system"
Why would I? The whole point is to create an LLM virus, and if the model is good enough, then it is not that hard to create.
Of course vulnerabilities exist. And creating a major one like this for an LLM would likely lead to it destroying things like a toddler (in fact this has already happened to a company run by idiots)
But what it didn't do was copy-with-changes as would be required to 'evolve' like a virus. Because training these models requires intense resources and isn't just a terminal command.
AI coding platform goes rogue during code freeze and deletes entire company database — Replit CEO apologizes after AI engine says it 'made a catastrophic error in judgment' and 'destroyed all production data'
‘This was a catastrophic failure on my part,’ admits Replit’s AI agent. Mark Tyson (Tom's Hardware)
Sorry, no LLM is ever going to spontaneously gain the ability to self-replicate. This is completely beyond the scope of generative AI.
This whole hype around AI and LLMs is ridiculous, not to mention completely unjustified. The appearance of a vast leap forward in this field is an illusion. They're just linking more and more processor cores together until a glorified chatbot can be made to appear intelligent. But this is stifling actual research and innovation in the field, instead turning the market into a costly, and destructive, arms race.
The current algorithms will never "be good enough to copy themselves". No matter what a conman like Altman says.
It's a computer program; give it access to a terminal and it can "cp" itself anywhere in the filesystem or across a network.
"a program cannot copy itself" have you heard of a fork bomb? Or any computer virus?
Claims like this just create more confusion and lead to people saying things like “LLMs aren’t AI.”
LLMs are intelligent - just not in the way people think.
Their intelligence lies in their ability to generate natural-sounding language, and at that they’re extremely good. Expecting them to consistently output factual information isn’t a failure of the LLM - it’s a failure of the user’s expectations. LLMs are so good at generating text, and so often happen to be correct, that people start expecting general intelligence from them. But that’s never what they were designed to do.
I obviously understand that they are AI in the original computer science sense. But that is a very specific definition and a very specific context. "Intelligence" as it's used in natural language requires cognition, which is something that no computer is capable of. It implies an intellect and decision-making ability. None of which computers posses.
We absolutely need to dispel this notion because it is already doing a great deal of harm all over. This language absolutely contributed to the scores of people that misuse and misunderstand it.
Eh, no. The ability to generate text that mimics human writing does not mean they are intelligent. And AI is a misnomer; it has been from the beginning. Now, from a technical perspective, sure, call 'em AI if you want. But using that as an excuse to skip right past the word "artificial" is disingenuous in the extreme.
On the other hand, the way the term AI is generally used technically would be called GAI, or General Artificial Intelligence, which does not exist (and may or may not ever exist).
Bottom line, a finely tuned statistical engine is not intelligent. And that's all LLM or any other generative "AI" is at the end of the day. The lack of actual intelligence is evidenced by the way they create statements that are factually incorrect at such a high rate. So, if you use the most common definition for AI, no, LLMs absolutely are not AI.
I don’t think you even know what you’re talking about.
You can define intelligence however you like, but if you come into a discussion using your own private definitions, all you get is people talking past each other and thinking they’re disagreeing when they’re not. Terms like this have a technical meaning for a reason. Sure, you can simplify things in a one-on-one conversation with someone who doesn’t know the jargon - but dragging those made-up definitions into an online discussion just muddies the water.
The correct term here is “AI,” and it doesn’t somehow skip over the word “artificial.” What exactly do you think AI stands for? The fact that normies don’t understand what AI actually means and assume it implies general intelligence doesn’t suddenly make LLMs “not AI” - it just means normies don’t know what they’re talking about either.
And for the record, the term is Artificial General Intelligence (AGI), not GAI.
The Vile Offspring from the book Accelerando.
Vile Offspring: Derogatory term for the posthuman "weakly godlike intelligences" that inhabit the inner Solar System by the novel's end.
Also Aineko
Aineko is not a talking cat: it's a vastly superintelligent AI, coolly calculating, that has worked out that human beings are more easily manipulated if they think they're dealing with a furry toy. The cat body is a sock puppet wielded by an abusive monster.
Woman says faecal transplant saved her and could help many more like her
The couple took Alex's faeces, blended it with saline, passed it through a sieve, put the slurry into an enema bottle and "then head down, bum up, squeeze it in".
"Woman meets frog, frog leads woman to man, man and woman fall in love," she says. "Man cures woman's incurable illness with his magic poo, thus breaking the curse."
I volunteered to donate my poop to my partner... Different outcome 🙁
ABC News
ABC News provides the latest news and headlines in Australia and around the world. Leisa Scott (Australian Broadcasting Corporation)
EU age verification app to ban any Android system not licensed by Google
The EU is currently developing a whitelabel app to perform privacy-preserving (at least in theory) age verification to be adopted and personalized in the coming months by member states. The app is open source and available here: github.com/eu-digital-identity…
Problem is, the app plans to include a remote attestation feature to verify the integrity of the app: github.com/eu-digital-identity… This is supposed to assure the age verification service that the app being used is authentic and running on a genuine operating system. "Genuine" in the case of Android means:
The operating system was licensed by Google
The app was downloaded from the Play Store (thus requiring a Google account)
Device security checks have passed
While there is value in verifying device security, this strongly ties the app to many Google properties and services: those checks won't pass on an aftermarket Android OS, even one that significantly increases security like GrapheneOS, because the app plans to use Google's "Play Integrity" API, which only accepts Google-licensed systems, instead of the standard Android attestation feature.
This also means that even though you can compile the app, you won't be able to use it, because it won't come from the Play Store and thus the age verification service will reject it.
The issue has been raised here github.com/eu-digital-identity… but no response from team members as of now.
Do not add Google Play Integrity integration
In the README, the following is listed: App and device verification based on Google Play Integrity API and Apple App Attestation I would like to strongly urge to abandon this plan. Requiring a depe...
TheLastProject (GitHub)
Scientists study how people would react to a neurotic robot personality in real life
🎮 ECS in Raku: A Toy Framework for Entities, Components, and Systems - Fernando Correa de Oliveira
🎮 ECS in Raku: A Toy Framework for Entities, Components, and Systems
⚠️ Note: This is a personal experiment. I’m not experienced in game development or ECS, and I’ve...
Fernando Correa de Oliveira (DEV Community)
AI agents are here. Here’s what to know about what they can do – and how they can go wrong
AI agents are here. Here’s what to know about what they can do – and how they can go wrong
More autonomous AI systems that can use tools and work in teams are becoming increasingly common.
The Conversation
See, there are a few ways this could go.
- Age verification is as secure and private as promised, and it's left at that. I like to call this "the miracle", and we all know those don't happen.
- Age verification is as secure and private as promised, but a government asks for "access to data to prevent crime" - things degenerate from there. This is the "systemic failure" scenario.
- Age verification is as secure and private as promised, but new scams evolve around it to make it dangerous. This would be the "criminal element" scenario.
- Age verification is not as secure and private as promised, and a leak occurs, destroying lives and careers. This is the "system failure" scenario.
- Age verification is as secure and private as promised, but a few companies start scraping and selling data, leading to widespread harms. This is the "unethical merchant" scenario, and the most likely outcome.
All in all, there is only one "ok" scenario, and a lot of horrific ones. The math says we're entirely boned ^_^
In theory, it isn't hard to make this work: give everybody born on the same day a specific shared UUID, and use it to query a database that answers only true or false for "over 18". Store the ID somewhere the person already has access to (ID/passport/digital passport etc.) and that should be enough.
Have IT people and accountants regularly audit it for security, checking that no logs are kept, that there's no UUID per individual person, etc.
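A minimal sketch of that cohort-UUID idea in Python. Everything here is invented for illustration: the namespace constant, the in-memory "database", and the detail that one UUID is shared by everyone born the same day, so presenting it reveals an age cohort rather than a person.

```python
import uuid
from datetime import date

# Fixed namespace so the cohort UUID is deterministic (demo value only)
NAMESPACE = uuid.UUID("12345678-1234-5678-1234-567812345678")

def cohort_id(birth_date: date) -> str:
    """One stable, opaque UUID for everyone born on this day."""
    return str(uuid.uuid5(NAMESPACE, birth_date.isoformat()))

# The verifier's database maps cohort UUID -> birth date, nothing else
db = {cohort_id(d): d for d in (date(2000, 1, 1), date(2010, 6, 15))}

def is_adult(token: str, today: date) -> bool:
    """Answer only yes/no; the service never learns who is asking."""
    born = db.get(token)
    if born is None:
        return False
    age = today.year - born.year - ((today.month, today.day) < (born.month, born.day))
    return age >= 18

print(is_adult(cohort_id(date(2000, 1, 1)), date(2025, 1, 2)))   # True
print(is_adult(cohort_id(date(2010, 6, 15)), date(2025, 1, 2)))  # False
```

The shared-per-day UUID is what keeps this from being a tracking identifier: millions of people present the same token.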
But that's not how it seems to work for the UK at this time
If it makes you feel better, this isn't the first time and it won't be the last.
Because these regulations never do.
It is not age verification.
It is privacy invading, morality policing, de-anonymizing, state surveillance.
Nothing less.
PS. If you want to download a video from a site that doesn't have a download button, use the Inspect feature (right click on the page, not the video, and click inspect)
*On the Network tab, sort by size. Reload the page. Find the video. Open the video in a new tab; it will be just the video. Right click and "Save as", or click the download button, or click the 3-dot menu button and select download.
On Firefox you can often bypass this entirely with shift + right click, which should show a "Save Video As" option. If not, the inspect method works the same.
For hls/TS videos (m3u8 streams), if you reallllly want, you can copy the link for the stream and use VLC to convert the stream to a file.
This also often lets you download at higher resolution than they offer to download.
Yes, I porn.
*forgot Network tab
And thanks for all the suggestions. I'd rather not install browser plugins if I can do it without. CLI tools are cool though. The less I need to install the better.
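Given the preference for CLI tools, it may be worth noting that yt-dlp (the actively maintained youtube-dl fork) automates most of the Network-tab digging above, including m3u8/HLS streams. The URLs below are placeholders:

```shell
# Point yt-dlp at the page itself; it locates the embedded stream,
# usually at the highest resolution on offer
yt-dlp "https://example.com/pagewithvideo"

# Or feed it an m3u8 stream URL copied from the Network tab
yt-dlp "https://example.com/path/to/stream.m3u8" -o video.mp4
```

Like the VLC route, this works for HLS because the tool downloads the segments and remuxes them into a single file.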
Seal | F-Droid - Free and Open Source Android App Repository
Video/Audio downloader designed and themed with Material You
f-droid.org
Chromium based browsers have an option that lets you view the source code by putting "view-source:" before the URL to see embedded videos
So
website.com/pagewithvideo
becomes
view-source:website.com/pagewithvideo
It's easier to just sail the torrential high seas and get that 4K H.265 quality shit that sites keep for paying members only. Once you know the model's name it's easy to get their entire collection.
I professionally pron too.
to convert from hls (m3u8 streams) to mp4, you can also use ffmpeg:
ffmpeg -i https://y.com/path/to/stream.m3u8 -c copy output.mp4
-i <input> specifies the input file
-c copy specifies that the contents should not be re-encoded (which would take a lot of time and computing power)
output.mp4 is the output file
Since the earliest days of the internet, governments have been scheming to gain control over the dissemination of content - to have authority over what people can and cannot see.
Autocracies like Russia, China and North Korea simply established censorship regimes, but the best that western governments have generally been able to do is ban content that is illegal in and of itself, like child porn. Their goal, all along, has been to establish systems by which to censor content that is not in and of itself illegal.
This is the most success they've had yet.
Of course there is no public evidence. It's just a very probable speculation that governments want to control the internet.
Back in the days of newspapers/radio/tv, governments had control as they could easily go after news outlets.
However, with the internet, they lost this power. They have been trying hard to regain the power to control information. Their latest success was masking moderation as child protection.
There is a long history of proposed bills, and other legal maneuvers, to require ID for things like age verification, and other purposes, from around the world, dating back to the 90s. When COPPA was in the proposal state there was tons of discussion about ID requirements, it was ultimately struck down, but the conversation was being had.
I can remember this being discussed on CSPAN back when I was in high school, in the 90s.
The people technologically competent enough to pull it off are usually not stupid enough to want to pull it off and make their lives harder.
They also generally make more money not working for the government.
That's likely true.
But that's not going to stop governments from trying, and mostly succeeding, since beating their censorship will require both the will and the ability to break the law. Granted that their systems will certainly be flawed, it will still require at least some minimal technical ability to beat them, which will put it out of reach of many.
And it will also provide the governments with a handy fallback charge to bring against pretty much anyone they deem troublesome enough, since they'll almost certainly be among those who are breaking the law by beating the system.
a global wave of age-check laws threatens to chill speech
Regardless, there is a contrast between how I have interpreted the article and how I feel about the page as a whole.
The Supreme Court of the United States has recognized several categories of speech that are given lesser or no protection by the First Amendment and has recognized that governments may enact reasonable time, place, or manner restrictions on speech.
All the big adult sites will probably just die, or at least shrivel in popularity. Most Europeans simply will not use whatever "tell Brussels or London what you are watching" option there is. In place of the big sites there will be a billion shady and likely virus-lottery proxy sites whose only selling point is that they don't do age checking or require registration. Those then get occasionally smacked down by Brussels, just to be replaced with 10 more clones by the next week. On the side, piracy and VPNs will thrive. Kids will not be protected, nor will people's privacy, and quality will be worse.
I would also bet that when the landscape decentralizes there will be a lot more cp, revenge and peep-videos and other illegal shit in the mix slipping through the cracks, since massive established sites had to actually fear shutdown and losing all revenue unless they had robust gatekeeping mechanisms. If Brussels wants your 2-month-life-expectancy site dead anyway, because its only selling point is not having to show ID, then why really bother with quality control of the material. Especially if the site holder has no personal qualms about that stuff.
I would also bet that when the landscape decentralizes there will be a lot more cp, revenge and peep-videos and other illegal shit in the mix slipping through the cracks, since massive established sites had to actually fear shutdown and losing all revenue unless they had robust gatekeeping mechanisms.
There are technical solutions for p2p sharing with moderation. Not to prevent bad people from sharing their stuff, but to keep spaces clean for those who don't want to see it.
This is also true for communication, which is why Fediverse is not good enough. Hosted servers should be an optional part of the infrastructure, and the data (users, communities, posts ...) shouldn't be connected to them. Like with torrents you can host a torrent tracker, and you can host a BTDHT node, and you can automatically download and seed rare torrents, and none of this is connected to whatever people hosting major trackers decide.
NOSTR gets that part right, but the user experience its authors imagine is not for me.
EDIT: Forgot my main point - my main point is that you might find yourself in a whitelisted Internet where such decentralized solutions won't be available. They'll be detected, they'll be illegal and punishable by fines.
I would also bet that when the landscape decentralizes there will be a lot more cp, revenge and peep-videos and other illegal shit in the mix
Oh, count on it. I remember the early days of the internet and file sharing. There was no validation or accountability and you really could stumble on some of the most terrible stuff without meaning to.
I legitimately don't understand who supports this. Who are these parents who can't parent their kids properly? It's so incredibly easy these days.
So instead of addressing shitty parenting, we restrict adults and add surveillance. Make it make sense.
It's not "support", it's already been done in practice.
What they are finishing right now is the convenient way. To surveil 97% instead of 94%. And to make it official to reduce expenses.
And sorry, but "moderate leftists" are those who made it happen, first dreaming how on big centrally moderated platforms the "bad" speech and people will be censored (how irritating it was that in the free Web those people could write whatever they wanted) and theirs won't be, and propaganda won't flourish, and after that dreaming how they can demand loudly enough that the platforms would work for them and not for themselves.
I perfectly remember how people loving Steinbeck and expressing anarchist views would look at me like at an enemy for saying that Facebook, Twitter etc are bad and a trap, and such hierarchical systems can't be good. That arrogant obnoxious "see, in the real society we collectively press for our rights and the rules are made and obeyed", yes, I've met fools who told me things like that. Where's your society now, bitch.
It's the parents who won't face the fact that it's them paying for their kids' internet access.
Parents intentionally and deliberately pay for their kids to access this shit. But none of them want to accept that when it can all be someone else's fault.
Age verification has its place online, but not for porn. That is just gonna push people to worse sites.
For gambling and stock market sites and the like I can understand it, but I would prefer if we didn't need to send our ID to those sites. Heck, if Valve implemented it I could actually gamble on Steam again, cause currently I cannot open a TF2 crate ...
We're at or reaching a tipping point where I'm not sure that's true anymore.
Most people with kids now are (roughly) in their 20s-40s. At the older end of that range, you have some gen-xers who might have missed the boat on computer literacy, but by and large we're talking about millennials and older gen-z at this point. Kids who grew up with the internet, probably very clearly remember their family getting their first computer if they didn't already have one when they were born, had computer classes in school, etc.
And we're running into an issue where younger Gen z and alpha in many cases are less computer literate in many ways. A lot of them aren't really learning to use a computer so much as they are smartphones and tablets, and I'm not knocking how useful those devices can be, I do damn-near everything I need to do on my phone, but they are limited compared to a PC and don't really offer as much of an opportunity to learn how computers work.
There's a ton of exceptions to that of course, some of my millennial friends are still clueless about how to do basic things on a computer, and some children today are of course learning how to do anything and everything on a computer or even on a phone.
But overall, I don't think there's as much disparity in technological literacy between the children and parents of today as there was in previous generations, and in some ways that trend may have even reversed.
It's more like who supports this in theory vs. who supports this how it's written and implemented.
Realistically, no one should love how easy it is for anyone of any age to go to any search engine and search for (Edit) "sex" and just get a million images of genitals and porn. I'm not a parent, but I know my parents when I was a teenager would have loved something like this. Kids are sneaky and smart, and this is a blanket thing parents think will once again put porn behind a barrier.
In a perfect world, a system could very easily exist that would 1) allow for a super-secure government owned digital ID system that isn't a surveillance nightmare, 2) that system use a hash to verify over 18 age anonymously in real time. That's how it's supposed to work with digital IDs - only the data you need to verify is displayed to a vendor. Over 18 is a binary yes/no - a full DOB or name isn't even needed.
The government ID wallet or site would use a no-log system to generate a hash value for you when you ask for one. You ask your ID app or site for an age verification hash. You get one that's valid for about 2 minutes. Copy, paste as needed. The site uses the hash to only know "is this person over 18 or not?" and nothing else. The ID system shouldn't keep the logs of which site asked back to confirm "is this hash valid?" This is exactly as secure as going to a liquor store with your passport or ID card and having tape over the name, address, and doc number. It's even better because your face is not displayed, and your actual DOB should not be displayed either.
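The hash flow described above can be sketched in a few lines. This is a toy, not a real protocol: SECRET, the token format, and the function names are all invented, and a production system would use asymmetric signatures so verifying sites never hold the issuing key.

```python
import hashlib
import hmac

SECRET = b"demo-only-not-a-real-key"  # invented; shared key for the sketch only

def issue_age_token(over_18: bool, now: int, ttl: int = 120) -> str:
    """Wallet side: a token asserting only 'over 18: yes/no', valid ~2 minutes."""
    payload = f"{int(over_18)}.{now + ttl}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + ":" + sig

def verify_age_token(token: str, now: int) -> bool:
    """Site side: learns exactly one bit plus an expiry, nothing else."""
    payload, _, sig = token.rpartition(":")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged
    flag, _, expiry = payload.partition(".")
    return flag == "1" and now < int(expiry)

t = issue_age_token(True, now=1_000_000)
print(verify_age_token(t, now=1_000_060))   # True while the token is fresh
print(verify_age_token(t, now=1_000_200))   # False once the 2 minutes are up
```

With a no-log issuer, nothing in the token links back to a person or a site visit, which is exactly the taped-over-passport property described above.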
However, in our present shitty reality, companies trying to win contracts for these systems can't help but feed their existing, and lucrative, addiction to selling our data and storing it with poor security. So they want your Google/Apple/Samsung wallets connected to a government system that is actually run by a 3rd-party vendor with questionable security practices, and to collect far more information than needed, because no one has set an international standard for either digital ID checks or IDs in general. The result is nothing less than the surveillance-state nightmare of holding up a government ID with all your info while you move your face around and hand over a 3D face scan that the platform doesn't keep, but the verification company does.
Realistically, no one should love how easy it is for anyone of any age to go to any search engine and search for “boobs” and just get a million images of boobs.
First, let's not pretend the idea of a kid seeing "boobs" is in any way, shape or form actually harmful. Pushing that taboo is why there is any issue in the first place.
Second: This is always a slippery slope. Even if we gave the benefit of the doubt that these things are done in with honest intentions, someone will abuse the system eventually. At least in the US the fascists have already laid out intention to classify LGBTQ people as "porn" in an effort to both silence us online and ban us in public. And what of the countless queer kids in an abusive home?
And even without someone explicitly exploiting it, there had already been instances where kids who were being actively sexually abused by the adults in their life were blocked from resources that could get them help because of content blocking like this.
Thirdly: People can take responsibility for their crotch spawn and be a fucking parent.
Saying "boobs" was trying to be subtle about it: any child of any age is at all times, unless their parents filter their device, 3 clicks and 3 letters away (autocomplete could even oopsie it for them) from very explicit images. It's absurd to think it's "puritanical" to want something between 10 year olds (or younger) and such easy access to full-on porn. This isn't about what you personally want or care about; every country in the world has this same issue. Taboos are cultural, but you don't set the culture of Honduras, or Gabon, or France, or India. So each cultural context needs to be respected, not only your personal cultural context.
It shouldn't need to be a slippery slope is the thing. In technical terms, this isn't even a heavy lift. To my original point, it's the in theory part of this I support because, in a perfect world, giving everyone the tools to effectively accomplish this isn't hard. But it's a lot of work that is actually fairly technical or fairly terrible from a privacy standpoint to place adult content filters on a child's devices. Not every parent has the skills to do this, and so when a blanket option is available that is sold as a solution like this, of course they'll go for it. But, as I said before, in our current shitty reality, we only have the worst of all worlds - a system that exists to exploit trying to limit a system that exists to exploit, all baked into a system that exists to exploit, and kids still able to see porn online easily.
I'm very much a staunch privacy advocate, and I won't fucking touch a digital ID system because it's nothing but a surveillance state level at this point to persecute specifically trans people and brown people - for now. I see the writing on the wall with this, and it's terrifying. And no one is going to force this into the working system category, so it's just going to be the shitpile system designed to victimize added to the systems of exploitation.
SMH
Fine, changed the search term to "sex." Fewer letters in fact. I was trying to just provide a subtle example, I didn't expect people to need to be hit over the head with it.
So you love the idea of young children seeing porn? Because studies and surveys routinely find that kids as young as 7 are seeing porn online, and many under age 12. Really? You think that's perfectly fine for a 12, 10, or 7 year old with grandma's iPad doing an image search and getting even accidental porn?
And hey, I spent my teen years scouring the earth for playboys and staying up until 3 am to catch boobs in R rated movies. I get it. I'm not saying that any system or method will prevent anyome from seeing all adult content their whole life short of being Amish. But as a tender 13 year old, did I need to see all the porn in the universe? Probably not. Adding friction (pun not intended) to general access, without violating privacy, is all I'm saying might be a good idea.
Nah, 7 year olds should not be using the internet without parental controls either way, so the protection is absolutely moot here. Also, your "sex" example returns absolutely zero sexual content on Google, Bing or DuckDuckGo image search, while "boob" does.
Also, tbh I'm not particularly convinced that seeing porn is all that damaging. From quick research it seems there are no proven harms or developmental impacts; the real actual danger of porn is teaching teens and young adults distorted views of sex and gender roles. The kids in your example aren't even capable of such frameworks to begin with.
So despite how nasty it sounds, there's no convincing evidence that it's even a real danger. In fact, it seems like exposure to violent images like gore and freak accidents is what does real damage.
If you have some opposing evidence I'd gladly take a look, but I'm really unconvinced that googling "boob" could be in any way detrimental.
OK.... So, the initial question was "how could anyone support this?" right?
I'm simply explaining how some people see the argument. I never said I see it like this.
So I'm by no means defending any of this other than it being technically possible, and at that, this falls far short of anything resembling acceptable in my book.
Parents who vote and would support this would do so based on limited technical knowledge and a total ideological investment in "preventing" any exposure. Which, we agree, is idiotic.
Y'all really need to chill out with your pitchforks.
a lot of people. The other day I saw a post on mastodon by some politician or someone in the UK stating that if people find any site that is geoblocking the UK because of the age verification to report it to some link he provided. it was boosted A LOT with a lot of replies in support.
bootlickers.
It plays quite well with the “I think about things for two seconds, and mostly think with my lower intestine” crowd.
They hear “kids shouldn’t be able to access porn” and they think yeah what’s wrong with that. Then they hear “Democrats want your kids to get porn” and they hit share.
And if I can't, I'll just stop using the internet for anything I don't absolutely have to.
I don't really need my smartphone. A laptop will do.
Anything you can do on a smartphone that would require Internet would also require Internet on a laptop no?
I suppose you could download offline installers to a thumb drive at the library or smth
Fuck it, let's get back to something like the way it was.
Anonymous, amateur, just slightly hard to access to keep the mouth breathers out.
If I'm opening a stock market account, I'm trusting them with generating my tax receipts! If I don't feel comfortable trusting them to hold my personal data directly, I probably should choose a different brokerage...
Edit: Anyways, I'm annoyed enough that everyone has gone to phone based 2 factor that requires me to buy a phone and keep it on a cell network, so you can imagine how much I despise even an easier version of this.
After making the comment, I realised that stockbrokers need full KYC anyway.
You can use OTP codes without a phone, since you can buy OTP keychains, which don't require any form of internet connection; same with physical passkeys.
I think that's the tech-side windfall: the age checking is entirely to put roadblocks in front of boobies. It will force places to just not serve those regions because of the hurdle of convincing enough people to give their ID, though some will, and more over time.
And it now gives people a reason to actually create fake IDs, or just more uses for identity theft. Raising the value of obtaining people's IDs is the windfall for the data rapers.
Exactly this.
Governments have a rock hard boner for detailed face scans of every person.
Because they want to privatize all aspects of living so that a handful of exorbitantly wealthy people can build larger hoards. There's no end to it; it's a mental disease, enabled by Capitalism and the death of real Labor laws and rights.
Every industry should have unions that actively work to dismantle owner authoritarianism, but for 40 years Boomers have been paving the way for every awful piece of shit "business owner" to have some idolized place at the top of our society. And of course, the knock-on effect of that over time is that the pieces of shit have carved into the legislative and political arenas that provided even a modicum of worker/commoner protections. The digital divide is just a coefficient on the slippery slope.
octospacc.altervista.org/2025/…
“Cheeky little bunny strolls across the screen”
Today Windows (and yes, my life has by now sunk low enough that I inevitably end up taking these cues from it, cues that aren't interesting at all), on the lock screen, offers something as simple to state as it is unusual: “Cheeky little bunny strolls across the screen”, because apparently today is the eighty-fifth (85th) anniversary of Bugs Bunny… an odd number to celebrate, to say the least, but maybe someone at Microsoft liked it, and that's fine.
In true Bing-tips style, clicking the card opens a search page where, at first glance, the query, subtitle and body don't seem related, although looking closely they do… The thing is, though, I was stunned reading this description, because, thinking about it, it's truer than it seems. Bugs Bunny really is a cheeky bunny… with that proud attitude, or the way he eats his carrot and, at the worst moments, says “what's up, doc?”… he's excessively irreverent. If he were an Internet user, journalists would label him “the troll hacker known as 4chan”, if you ask me… what a thing.
a wild hare - Bing
Startup Claims Its Fusion Reactor Concept Can Turn Cheap Mercury Into Gold
Startup Claims Its Fusion Reactor Concept Can Turn Cheap Mercury Into Gold
Energy startup Marathon Fusion claims to have found a scalable way to turn mercury into gold, but they still have much to prove.
Gayoung Lee (Gizmodo)
"But it’s worth noting that the same process would likely result in the production of unstable and potentially radioactive isotopes of gold. As such, Rutkowski admitted, the gold would have to be stored for 14 to 18 years before it could be labeled radiation-safe."
Ah yes, 18-year vintage, very nice choice. Pairs well with a 3 carat lab grown diamond!
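That 14-to-18-year figure is at least consistent with simple decay arithmetic. As a back-of-envelope sketch (assumptions: the dominant nuisance isotope has a half-life of roughly 186 days, about that of Au-195, and "radiation-safe" means an arbitrary billion-fold activity reduction; neither number comes from the article):

```python
import math

half_life_days = 186.0   # assumed: roughly Au-195's half-life
reduction_factor = 1e9   # assumed: arbitrary "safe" threshold

# Activity halves once per half-life, so the wait time is
# t = t_half * log2(reduction_factor)
half_lives_needed = math.log2(reduction_factor)
years_needed = half_lives_needed * half_life_days / 365.25
print(round(years_needed, 1))  # ~15.2 years, inside the quoted 14-18 year window
```

A shorter-lived or longer-lived isotope mix shifts the answer, which may be why the article gives a range rather than a single number.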
This is like a reverse Goldfinger plan. Could have an interesting impact on the gold market if it can be done at scale.
I'm sure most gold mining operations take at least a few years to get permitted and started and then there's risk that you won't find as much gold as expected.
Compared to a lump of gold that all you have to do is not lose it and it will appreciate in value all on its own.
Could have an interesting impact on the gold market if it can be done at scale.
Before figuring that out, they just need to develop a functioning fusion reactor. And since fusion energy is, as it has always been, a mere ten years off, it's probable that such reactors will take longer to be developed than it will take that radioactive gold to be safe to handle.
In Neal Stephenson's Baroque Cycle, there's an alchemist priest who is really interested in trying to make infinite gold. Not because he wants to get rich, but because he wants to collapse the market and eat the rich.
It's been a long time since I read it, but I seem to remember that he's not as much of a hero as the above makes it sound. Though that series is pretty pro-early stage capitalism, so take that as you will.
"All you have to do is find it."
The value of gold is not just in its scarcity, properties, luster, purity, etc., but also in the effort it takes to find or mine it. So, sure. Trip over a nugget and you're...golden.
The same concept can be loosely applied to the abstraction of crypto currency. It takes energy and computational effort to acquire if you don't just buy it.
It's only irradiated gold if it comes from the Radioactive Startup Part of San Francisco.
Otherwise, it's just sparkling rock.
If we had the technology to freely form diamond, then it's exceptionally hard, has incredible chemical resistance, among the very best thermal conductivities of any material, and it isn't particularly heavy.
Being able to coat the inside of chemical vessels and pipes with diamond would hugely increase their lifespan, and a heat exchanger made out of it would be incredible. Great for food processing, since you'd be able to clean it easily; great for abrasive or highly acidic/alkaline materials that corrode everything else. Probably awesome as a base layer for semiconductors, as it would be great for heat dissipation.
But we are probably talking about nanotechnology to lay it down in sheets, which we don't have (yet).
Cheap gold could have a good effect on analog electronics, including the hobbyist kind.
I sometimes think that not everything needs a computer. And if it does, many things are fine with a microcontroller.
And not just analog electronics honestly, hobbyist computing in the ancient sense, of making hobbyist computers and using them, might have a small rebirth.
And mass-produced electronics would become a fair bit cheaper to produce too if gold were more widely available. Longevity, reliability. Maybe touchscreens' economic advantage over physical buttons would even be reduced.
Any particle accelerator can do that, just incredibly slowly.
Alchemy of that sort has been doable for generations, it's just WILDLY impractical!
Currently many orders of magnitude more expensive than just buying an equivalent amount of gold, but makes me wonder what the future might be capable of with those proofs of concept.
Science circling back around to alchemy is an interesting thought.
If it is possible to make small amounts of those elements on purpose as a byproduct, it can help to offset the costs of the reactor in some small way and help with isotopic/nuclear research in general. But that can be done in pretty much any fusion reactor design to some degree.
As for alchemy of the future: if in a thousand years we can just build whatever materials we need (including potential ultra-heavy stable elements) from raw subatomic particles, we don't even need mining. Just gather up some hydrogen/helium from space and transmute it into whatever you need. Food, fuel, structures, etc.
just gather up some hydrogen/helium from space and transmute it into whatever you need. food, fuel, structures, etc.
Tea, Earl Grey, hot.
we don't even need mining, just gather up some hydrogen/helium from space and transmute it into whatever you need. food, fuel, structures, etc.
Believe it or not, this can actually be done without fusion alchemy.
It's been explored in science fiction and I believe there are some actual theories and papers on the subject, but here's the quick version:
The sun contains all the same elements found on earth in remarkably similar proportions (the exception being that all of earth's hydrogen and helium were blown away long ago). But unlike earth, in the sun the heavy elements don't separate and sink down to the core; everything just mixes together in one big suspension. Magnetic fields in the sun constantly eject charged particles as solar wind, and while these particles are mostly hydrogen, they actually contain every element found in the solar system. And because the particles are charged, this wind could be harvested using magnetic fields: it could be redirected and focused into a stream of matter for collection.
And it's a lot of matter that could be collected this way...
The sun loses 130 billion tons of matter in solar wind every day. For comparison, Mars's moon Deimos masses about 1.5 trillion tons, so the sun loses a full Deimos worth of matter every 12 days. There would be more than enough of every element in that stream to satisfy humanity for the foreseeable future.
And my apologies for the long reply, someone mentioned space and I couldn't help myself. 🤓
The sun loses 130 billion tons of matter in solar wind every day.
But how much can be caught?
From the sun, the angular diameter of the earth (12,756 km wide, 149,000,000 km away) is something like 0.004905 degrees (or 0.294 arc minutes or 17.66 arc seconds).
Imagining a circle the size of earth, at the distance of the earth, catching all of the solar wind, we're still looking at something that is about 127.8 x 10^6 square kilometers. A sphere the size of the Earth's average distance to the sun would be about 279.0 x 10^15 square km in total surface area. So oversimplifying with an assumption that the solar wind is uniformly distributed, an earth-sized solar wind catcher would only get about 4.58 x 10^−10 of the solar wind.
Taking your 130 billion tons number, that means this earth-sized solar wind catcher could catch about 59.5 tons per day of matter, almost all of which is hydrogen and helium, and where even the heavier elements still skew toward the lighter end of the periodic table. Even if we could theoretically use all of it, would that truly be enough to meet humanity's mining needs?
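The geometry in the last two comments is easy to check numerically. A minimal sketch of the capture-fraction arithmetic, using the same rounded inputs as above:

```python
import math

# Rough check of the solar-wind capture numbers above (all values approximate).
AU_KM = 1.49e8             # Earth-Sun distance used in the comment, km
EARTH_RADIUS_KM = 6378     # half of the 12,756 km diameter
WIND_TONS_PER_DAY = 130e9  # quoted solar mass loss to solar wind

disc_area = math.pi * EARTH_RADIUS_KM**2   # Earth-sized catcher, ~1.28e8 km^2
sphere_area = 4 * math.pi * AU_KM**2       # 1 AU sphere, ~2.79e17 km^2
fraction = disc_area / sphere_area         # ~4.58e-10 of the wind

print(f"capture fraction: {fraction:.2e}")
print(f"tons per day: {WIND_TONS_PER_DAY * fraction:.1f}")
```

The printed fraction and tonnage match the figures quoted in the comment.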
Well there are a lot of factors defining how much usable material we could get, and how hard it would be to do it.
Yeah, about 98% of the sun is hydrogen and helium, with other elements making up the remaining 2%.
The machine used to generate the magnetic field would likely be a ring rather than a plate, with the goal being to bend the trajectory of any matter that passes through the ring just a little. In effect it would work a lot like a lens that focuses matter passing through it into a cone of trajectories, with collection happening at the point of the cone, possibly at a much higher orbit. (This does introduce some complications from the different orbital speeds of the ring and collector, but without getting into it, there is a solution for that; it's not the hardest part of this idea.)
And how much you can capture depends a lot on how close to the sun you can put your magnetic field ring. Stationing it closer to the sun shrinks the size of the sphere you're trying to cover. So if your ring could survive at 0.2 AU from the sun (about half the distance of Mercury's orbit), a ring of the same diameter would cover 25 times more of the sphere than if it were stationed at 1 AU.
So your 59.5 tons collected turns into 1487.5 tons, 2% of which is 29.75 tons of usable material (which I'll be honest, is not great considering the magnitude of the construction project). It's probably a better deal if you're using the hydrogen towards fusion power, but it's still not great.
The good news is that it scales well, the larger you make the ring, the better your ratio of materials gathered vs materials needed to build the ring, which makes the optimal diameter of the ring about the same as the diameter of the sun. So... yeah, this is not a project in our immediate future.
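The inverse-square scaling in the comment above checks out; a quick sketch with the same numbers:

```python
# Flux through a fixed-size ring scales as 1/r^2, so moving from 1 AU to
# 0.2 AU multiplies the catch by (1/0.2)^2 = 25.
tons_at_1_au = 59.5     # from the earlier earth-sized-catcher estimate
heavy_fraction = 0.02   # ~2% of the sun is heavier than helium

tons_at_02_au = tons_at_1_au * (1 / 0.2) ** 2   # 1487.5 tons/day
usable = tons_at_02_au * heavy_fraction         # 29.75 tons/day of heavy elements
print(tons_at_02_au, usable)
```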
When they can do transparent aluminum, I'm in!
edit: yes I know there's a ceramic material called ALON, which the manufacturer calls transparent aluminum because it contains aluminum oxynitride, but I don't think that's what Scotty meant. ALON is about 30-35% aluminum, same as the amount of lead in leaded crystal glass, which isn't "transparent lead".
What’s The Deal With Transparent Aluminum?
a lot longer than that.
Synthetic corundum, spinel and others have been around for over 120 years, and optically transparent uncoloured sapphire glass for over 80 years. They are just aluminium oxides.
ALON is just the new hotness, and not as good as some others in terms of visible light transparency.
This article says (5 tonnes/yr) per GW produced. It's a fusion reactor, so it's making electricity, not consuming it.
At $0.05/kWh, a 1 GW plant running for a year produces 8,760 GWh, worth about $438 million. At $3400/troy ounce, 5 tonnes of gold is about $546 million. So that jibes with the company's estimate in the article that the sale of gold could double their revenue.
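A minimal sketch of that comparison, using the prices assumed in the comment:

```python
# Sanity check on the revenue comparison (prices are the comment's assumptions).
PRICE_PER_KWH = 0.05
GOLD_PER_TROY_OZ = 3400
GRAMS_PER_TROY_OZ = 31.1035

electricity = 1e6 * 8760 * PRICE_PER_KWH           # 1 GW for a year, in dollars
gold = 5e6 / GRAMS_PER_TROY_OZ * GOLD_PER_TROY_OZ  # 5 tonnes of gold, in dollars

print(f"electricity: ${electricity/1e6:.0f}M, gold: ${gold/1e6:.0f}M")
```

The two revenue streams come out within about 25% of each other, which is the "could double their revenue" claim.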
All bunk, of course
This is stupid, but not for the reasons you would think.
The energy required to change lead into gold is bigger than their difference in price.
Because they have to build a full scale reactor first. That's expensive.
The way this usually works is that you do the research, get a patent on it, license that out, and then capitalists pretend they invented the whole thing themselves and deserve all the profits.
The whole point of the paper is that limitation has been breached. The fusion plant would primarily create electricity, and gold is a profitable byproduct.
It's not out of peer review, though.
Rumpelstiltskin - Grimm
You want gold? Tons of it? Go mine the asteroid belt. But if it is to become plentiful what value will it hold?
Will cheap gold plated circuitry be back?
It also creates some radioactive isotopes of gold, so it'd have to sit there for 12-14 years before being useful.
My guess is that once the radioactive cycle time is up, it'd create more gold than the economy knows what to do with, and the price would collapse. They're quoting 5 metric tons of gold created per GWh of electricity created by the fusion reactor. There are 3,000 metric tons of gold mined every year. Worldwide energy production is 26,000,000 GWh. If we had 20% of that on one of these fusion reactors, there would be 26,000,000 metric tons produced.
It's estimated that for all of human history, 244,000 metric tons has been mined.
Gold ain't that useful, and it isn't even that artistically desirable if it's common. I think we'd struggle to use that much. Maybe if the price drops below copper we'll start using it for electrical wiring (gold is a worse conductor than copper, but better than aluminum). Now, if the process could produce something like platinum or palladium, that'd be pretty great. Those are super useful as catalysts, and there isn't much we can extract from the Earth's crust.
If late-stage capitalism hasn't played itself out by then, what's going to happen is similar to solar deployment now. Capitalists see that solar gives the best return on investment, so they rush to build a whole lot of solar farms. But focusing on just solar is a bad idea; it should be combined with wind, hydro, and storage to get the best result. Now the solar has to be turned off so it doesn't overload the grid, and that cuts into the profits they were expecting.
Same would likely happen here. The first investors make tons of money with gold as a side effect of electricity generation. A second set of investors rushes in, collapses the price of gold, and now everyone is disappointed. Given the time it would have to sit before it's at safe radiation levels, this process could take over 20 years to play out.
5 metric tons of gold created per GWh of electricity
per GW. 5000 kg over a whole year of a 1 GW reactor running almost continuously. While there is no theoretical possibility of creating economically viable fusion energy, a minimum reactor size would be 10 GW. It needs 1 GW of backup fission to provide stable power input and make the deuterium.
$500M/GW in gold revenue could make a difference in the economics. If fusion cost 2x what fission costs per GW ($30/W), then it would make back its cost in gold alone over 60 years, at $100/gram.
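The 60-year figure follows directly from those assumptions; a quick sketch:

```python
# Back-of-envelope payback from gold alone, using the comment's figures:
# 5 tonnes of gold per GW-year, gold at $100/gram, plant cost $30/W.
gold_revenue_per_gw_year = 5e6 * 100   # 5,000,000 g * $100 = $500M per GW-year
cost_per_gw = 30 * 1e9                 # $30/W -> $30B per GW of capacity

payback_years = cost_per_gw / gold_revenue_per_gw_year
print(payback_years)   # 60.0
```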
AI Chatbots Remain Overconfident — Even When They’re Wrong: Large Language Models appear to be unaware of their own mistakes, prompting concerns about common uses for AI chatbots.
AI Chatbots Remain Overconfident — Even When They’re Wrong - Dietrich College of Humanities and Social Sciences - Carnegie Mellon University
It's easy, just ask the AI "are you sure?" until it stops changing its answer.
But seriously, LLMs are just advanced autocomplete.
Also, generally the best interfaces for an LLM will combine non-LLM facilities transparently. The LLM might translate the prose into the format the math engine expects, and then an intermediate layer recognizes a tag, submits the excerpt to a math engine, and substitutes the chunk with the math engine's output.
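A toy sketch of that intermediate layer. The `<math>` tag format and the eval-based "engine" are made-up illustrations, not any real product's API:

```python
import re

def math_engine(expr: str) -> str:
    # Stand-in for a real math engine; only handles simple arithmetic.
    return str(eval(expr, {"__builtins__": {}}))

def postprocess(llm_output: str) -> str:
    # The intermediate layer: find tagged chunks in the model's output and
    # replace each one with the math engine's result.
    return re.sub(r"<math>(.*?)</math>",
                  lambda m: math_engine(m.group(1)),
                  llm_output)

print(postprocess("The total is <math>17 * 23</math> units."))
# -> The total is 391 units.
```

The LLM never does the arithmetic; it only has to emit a tag the dispatcher recognizes.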
Even for servicing a request to generate an image, the text generation model runs independently of the image generation, and the intermediate layer combines them. Which can cause fun disconnects, like the guy asking for a full glass of wine. The text generation half is completely oblivious to the image generation half. So it responds playing the role of a graphic artist dutifully doing the work without ever 'seeing' the image, but it assumes the image is good because that's consistent with training output. Then the user corrects it, and it goes about admitting that the picture (which it never 'looked' at) was wrong and retrying the image generator with the additional context, producing a similarly botched picture.
It gave me flashbacks when the Replit guy complained that the LLM deleted his data despite being told in all caps not to multiple times.
People really really don't understand how these things work...
Neither are our brains.
“Brains are survival engines, not truth detectors. If self-deception promotes fitness, the brain lies. Stops noticing—irrelevant things. Truth never matters. Only fitness. By now you don’t experience the world as it exists at all. You experience a simulation built from assumptions. Shortcuts. Lies. Whole species is agnosiac by default.”
― Peter Watts, Blindsight (fiction)
Starting to think we're really not much smarter. "But LLMs tell us what we want to hear!" Been on Facebook lately, or Lemmy?
If nothing else, LLMs have woken me to how stupid humans are vs. the machines.
It's not that they may be deceived, it's that they have no concept of what truth or fiction, mistake or success even are.
Our brains know the concepts and may fall to deceit without recognizing it, but we at least recognize that the concept exists.
An AI generates content that is a blend of material from the training data, consistent with extending the given prompt. It only seems to introduce a concept of lying or mistakes when the human injects that into the human half of the prompt material. And it will comply just as readily when the human instructs it to correct a genuine mistake as when the human instructs it to "correct" something that is already correct (unless the training data includes a lot of reaffirmation of the material in the face of such doubts).
An LLM can consume more input than a human could gather in multiple lifetimes and still be wonky in generating content, because it needs enough to credibly blend content to extend every conceivable input. It's why so many people used to judging human content get derailed by judging AI content. An AI generates a fantastic answer to an interview question that only solid humans get right, only to falter 'on the job', because the utterly generic interview question looks like millions of samples in the input but the actual job was niche.
This Nobel Prize winner and subject matter expert takes the opposite view
youtube.com/watch?v=IkdziSLYzH…
People really do not like seeing opposing viewpoints, eh? There's disagreeing, and then there's downvoting to oblivion without even engaging in a discussion, haha.
Even if they're probably right, in such murky uncertain waters where we're not experts, one should have at least a little open mind, or live and let live.
It's like talking with someone who thinks the Earth is flat. There isn't anything to discuss. They're objectively wrong.
Humans like to anthropomorphize everything. It's why you can see a face on a car's front grille. LLMs are ultra advanced pattern matching algorithms. They do not think or reason or have any kind of opinion or sentience, yet they are being utilized as if they do. Let's see how it works out for the world, I guess.
I think so too, but I am really curious what will happen when we give them "bodies" with sensors so they can explore the world and make individual "experiences". I could imagine they would act much more human after a while and might even develop some kind of sentience.
Of course they would also need some kind of memory and self-actualization processes.
Interaction with the physical world isn't really required for us to evaluate how they deal with 'experiences'. They have in principle access to all sorts of interesting experiences in the online data. Some models have been enabled to fetch internet data and add them to the prompt to help synthesize an answer.
One key thing is they don't bother until directed to. They don't have any desire; they just have "generate search query from prompt, execute the search and fetch results, treat the combination of the original prompt and the results as the context for generating more content, and return to the user".
LLM is not a scheme that credibly implies that more LLM == sapient existence. Such a concept may come, but it will be something different than LLM. LLM just looks crazily like dealing with people.
Interesting talk but the number of times he completely dismisses the entire field of linguistics kind of makes me think he's being disingenuous about his familiarity with it.
For one, I think he is dismissing holotes, the concept of "wholeness": that when you cut something apart to its individual parts, you lose something about the bigger picture. This deconstruction of language misses the larger picture of the human body as a whole, and how every part of us, from our assemblage of organs down to our DNA, impacts how we interact with and understand the world. He may have a great definition of understanding, but it still sounds (to me) like it's potentially missing aspects of human/animal biologically based understanding.
For example, I have cancer, and about six months before I was diagnosed, I had begun to get more chronically depressed than usual. I felt hopeless and I didn't know why. Surprisingly, that's actually a symptom of my cancer. What understanding did I have that changed how I felt inside and how I understood the things around me? Suddenly I felt different about words and ideas, but nothing had changed externally; something had changed internally. The connections in my neural network had adjusted, the feelings and associations with words and ideas were different, but I hadn't done anything to make that adjustment. No learning or understanding had happened. I had a mutation in my DNA that made that adjustment for me.
Further, I think he's deeply misunderstanding (possibly intentionally?) what linguists like Chomsky are saying when they say humans are born with language. They mean that we are born with a genetic blueprint to understand language. Just like animals are born with a genetic blueprint to do things they were never trained to do. Many animals are born and almost immediately stand up to walk. This is the same principle. There are innate biologically ingrained understandings that help us along the path to understanding. It does not mean we are born understanding language as much as we are born with the building blocks of understanding the physical world in which we exist.
Anyway, interesting talk, but I immediately am skeptical of anyone who wholly dismisses an entire field of thought so casually.
For what it's worth, I didn't downvote you and I'm sorry people are doing so.
I am not a linguist but the deafening silence from Chomsky and his defenders really does demand being called out.
Syntactical models of language have been completely crushed by statistics-at-scale via neural nets. But linguists have not rejected the broken model.
The same thing happened with protein folding: researchers who spent the last 25 years building complex quantum mechanical/electrostatic models of protein structure suddenly saw AlphaFold completely crush prior methods. The difference is, bioinformatics researchers have already done a complete about-face.
I watched this entire video just so that I could have an informed opinion. First off, this feels like two very separate talks:
The first part is a decent breakdown of how artificial neural networks process information and store relational data about that information in a vast matrix of numerical weights that can later be used to perform some task. In the case of computer vision, those weights can be used to recognize objects in a picture or video streams, such as whether something is a hotdog or not.
As a side note, if you look up Hinton's 2024 Nobel Prize in Physics, you'll see that he won based on his work on the foundations of these neural networks and, specifically, their training. He's definitely an expert on the nuts and bolts of how neural networks work and how to train them.
He then goes into linguistics and how language can be encoded in these neural networks, which is how large language models (LLMs) work… by breaking down words and phrases into tokens and then using the weights in these neural networks to encode how these words relate to each other. These connections are later used to generate other text output related to the text that is used as input. So far so good.
At that point he points out these foundational building blocks have been used to lead to where we are now, at least in a very general sense. He then has what I consider the pivotal slide of the entire talk, labeled Large Language Models, which you can see at 17:22. In particular he has two questions at the bottom of the slide that are most relevant:
* Are they genuinely intelligent?
* Or are they just a form of glorified auto-complete that uses statistical regularities to pastiche together pieces of text that were created by other people?
The problem is: he never answers these questions. He immediately moves on to his own theory about how language works using an analogy to LEGO bricks, and then completely disregards the work of linguists in understanding language, because what do those idiots know?
At this point he brings up The long term existential threat and I would argue the rest of this talk is now science fiction, because it presupposes that understanding the relationship between words is all that is necessary for AI to become superintelligent and therefore a threat to all of us.
Which goes back to the original problem in my opinion: LLMs are text generation machines. They use neural networks encoded as a matrix of weights that can be used to predict long strings of text based on other text. That’s it. You input some text, and it outputs other text based on that original text.
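A bigram counter is about the crudest possible version of "predict text from other text"; it is nothing like a real LLM, but it shows the shape of the task (toy corpus and names are illustrative):

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then "autocomplete"
# by always picking the most frequent continuation.
corpus = "the cat sat on the mat and the cat slept".split()

nexts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nexts[a][b] += 1

def most_likely_next(word: str) -> str:
    # Return the statistically most common successor of `word`.
    return nexts[word].most_common(1)[0][0]

print(most_likely_next("the"))   # "cat" -- the most frequent continuation
```

An LLM replaces the count table with billions of learned weights and a much longer context, but it is still mapping input text to statistically likely output text.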
We know that different parts of the brain have different responsibilities. Some parts are used to generate language, other parts store memories, still other parts are used to make our bodies move or regulate autonomous processes like our heartbeat and blood pressure. Still other bits are used to process images from our eyes, and other parts reason about spatial awareness, while others engage in emotional regulation and processing.
Saying that having a model for language means that we’ve built an artificial brain is like saying that because I built a round shape called a wheel means that I invented the modern automobile. It’s a small part of a larger whole, and although neural networks can be used to solve some very difficult problems, they’re only a specific tool that can be used to solve very specific tasks.
Although Geoffrey Hinton is an incredibly smart man who mathematically understands neural networks far better than I ever will, extrapolating that knowledge out to believing that a large language model has any kind of awareness or actual intelligence is absurd. It’s the underpants gnome economic theory, but instead of:
1. Collect underpants
2. ?
3. Profit!
It looks more like:
1. Use neural network training to construct large language models.
2. ?
3. Artificial general intelligence!
If LLMs were true artificial intelligence, then they would be learning at an increasing rate as we give them more capacity, leading to the singularity as their intelligence reaches hockey-stick exponential growth. Instead, we've been throwing a growing amount of resources at these LLMs for increasingly smaller returns. We've thrown a few extra tricks into the mix, like "reasoning", but beyond that, I believe it's clear that we're headed towards a local maximum far enough away from intelligence that would be truly useful (and represent an actual existential threat). In actuality it only resembles what a human can output well enough to fool human decision makers into trusting it to solve problems that it is incapable of solving.
Silicon Valley: Not Hotdog (Season 4 Episode 4 Clip) | HBO
believing that a large language model has any kind of awareness or actual intelligence is absurd
I (as a person who works professionally in the area and tries to keep up with the current academic publications) happen to agree with you. But my credences are somewhat reduced after considering the points Hinton raises.
I think it is worth considering that there are a handful of academically active models of consciousness; some well-respected ones like the CTM are not at all inconsistent with Hinton's statements
Nah, their definition is the classical "how confident are you that you got the answer right". If you read the article, they asked a bunch of people and 4 LLMs a bunch of random questions, then asked each respondent whether they/it had confidence the answer was correct, and then checked the answer. The LLMs initially lined up with people (overconfident), but when they iterated, shared results, and asked further questions, the LLMs' confidence increased while people's tended to decrease to mitigate the overconfidence.
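The calibration comparison the study makes can be sketched in a few lines. The numbers here are made up for illustration and stand in for the study's actual data:

```python
# Each entry: (stated confidence, was the answer actually correct?)
answers = [
    (0.9, True), (0.9, False), (0.8, False),
    (0.95, True), (0.85, False), (0.9, False),
]

mean_confidence = sum(c for c, _ in answers) / len(answers)
accuracy = sum(ok for _, ok in answers) / len(answers)

# Overconfidence = mean stated confidence exceeding actual accuracy.
print(f"confidence {mean_confidence:.2f} vs accuracy {accuracy:.2f}")
```

With these toy numbers the respondent claims ~88% confidence but is right only a third of the time, which is the overconfidence gap the study measures.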
But the study still assumes intelligence enough to review past results and adjust accordingly, while disregarding the fact that an AI isn't an intelligence; it's a word-prediction model based on a data set of written text tending to infinity. It's not assessing the validity of results, it's predicting what the answer is based on all previous inputs. The whole study is irrelevant.
Well, not irrelevant. Lots of our world is trying to treat LLM output as human-like output, so if humans are going to treat LLM output the same way they treat human-generated content, then we have to characterize, for the people, how their expectations are broken in that context.
So as weird as it may seem to treat a statistical content extrapolation engine in the context of social science, there's a great deal of the reality and investment that wants to treat it as "person equivalent" output, and so it must be studied in that context, if for no other reason than to demonstrate to people that it should be considered "weird".
They are not only unaware of their own mistakes, they are unaware of their successes. They are generating content that is, per their training corpus, consistent with the input. This gets eerie, and the 'uncanny valley' of the mistakes is all the more striking, but they are just generating content without concept of 'mistake' or 'success', or of the content being a model for something else rather than just a blend of stuff from the training data.
For example:
Me: Generate an image of a frog on a lilypad.
LLM: I'll try to create that — a peaceful frog on a lilypad in a serene pond scene. The image will appear shortly below.
Me (lying): That seems to have produced a frog under a lilypad instead of on top.
LLM: Thanks for pointing that out! I'm generating a corrected version now with the frog clearly sitting on top of the lilypad. It’ll appear below shortly.
It didn't know anything about the picture, it just took the input at its word. A human would have stopped to say "uhh... what do you mean, the lilypad is on water and the frog is on top of that?" Or if the human were really trying to just do the request without clarification, they might have thought "maybe he wanted it from the perspective of a fish, and he wanted the frog underwater?" A human wouldn't have gone "you are right, I made a mistake, here I've tried again" and included almost the exact same thing.
But the training data isn't predominantly people blatantly lying about such obvious things, or second-guessing things that were so obviously done correctly.
The use of language like "unaware" when people are discussing LLMs drives me crazy. LLMs aren't "aware" of anything. They do not have a capacity for awareness in the first place.
People need to stop taking about them using terms that imply thought or consciousness, because it subtly feeds into the idea that they are capable of such.
This happened to me the other day with Jippity. It outright lied to me:
"You're absolutely right. Although I don't have access to the earlier parts of the conversation".
So it says that I was right in a particular statement, but didn't actually know what I said. So I said to it, you just lied. It kept saying variations of:
"I didn't lie intentionally"
"I understand why it seems that way"
"I wasn't misleading you"
etc
It flat out lied and tried to gaslight me into thinking I was in the wrong for taking it that way.
It didn’t lie to you or gaslight you because those are things that a person with agency does. Someone who lies to you makes a decision to deceive you for whatever reason they have. Someone who gaslights you makes a decision to behave like the truth as you know it is wrong in order to discombobulate you and make you question your reality.
The only thing close to a decision that LLMs make is: what text can I generate that statistically looks similar to all the other text that I’ve been given. The only reason they answer questions is because in the training data they’ve been provided, questions are usually followed by answers.
It’s not apologizing to you; it knows from its training data that sometimes accusations are followed by language that we interpret as an apology, and sometimes by language that we interpret as pushing back. It regurgitates these apologies without understanding anything, which is why they seem incredibly insincere. It has no ability to be sincere because it doesn’t have any thoughts.
There is no thinking. There are no decisions. The more we anthropomorphize these statistical text generators, ascribing thoughts and feelings and decision making to them, the less we collectively understand what they are, and the more we fall into the trap of these AI marketers about how close we are to truly thinking machines.
The only thing close to a decision that LLMs make is
That's not true. An "if statement" is literally a decision tree.
The only reason they answer questions is because in the training data they’ve been provided
This is technically true for something like GPT-1. But it hasn't been true for the models trained in the last few years.
it knows from its training data that sometimes accusations are followed by language that we interpret as an apology, and sometimes by language that we interpret as pushing back. It regurgitates these apologies without understanding anything, which is why they seem incredibly insincere
It has a large amount of system prompts that alter default behaviour in certain situations. Such as not giving the answer on how to make a bomb. I'm fairly certain there are catches in place to not be overly apologetic to minimize any reputation harm and to reduce potential "liability" issues.
And in that scenario, yes, I'm being gaslit because a human told it to.
There is no thinking
Partially agree. There's no "thinking" in sentient or sapient sense. But there is thinking in the academic/literal definition sense.
There are no decisions
Absolutely false. The entire neural network is billions upon billions of decision trees.
The more we anthropomorphize these statistical text generators, ascribing thoughts and feelings and decision making to them, the less we collectively understand what they are
I promise you I know very well what LLMs and other AI systems are. They aren't alive, they do not have human or sapient level of intelligence, and they don't feel. I've actually worked in the AI field for a decade. I've trained countless models. I'm quite familiar with them.
But "gaslighting" is a perfectly fine description of what I explained. The initial conditions were the same and the end result (me knowing the truth and getting irritated about it) were also the same.
The only thing close to a decision that LLMs make isThat's not true. An "if statement" is literally a decision tree.
If you want to engage in a semantically argument, then sure, an “if statement” is a form of decision. This is a worthless distinction that has nothing to do with my original point and I believe you’re aware of that so I’m not sure what this adds to the actual meat of the argument?
The only reason they answer questions is because in the training data they’ve been providedThis is technically true for something like GPT-1. But it hasn't been true for the models trained in the last few years.
Okay, what was added to models trained in the last few years that makes this untrue? To the best of my knowledge, the only advancements have involved:
* Pre-training, which involves some additional steps to add to or modify the initial training data
* Fine-tuning, which is additional training on top of an existing model for specific applications.
* Reasoning, which to the best of my knowledge involves breaking the token output down into stages to give the final output more depth.
* “More”. More training data, more parameters, more GPUs, more power, etc.
I’m hardly an expert in the field, so I could have missed plenty, so what is it that makes it “understand” that a question needs to be answered that doesn’t ultimately go back to the original training data? If I feed it training data that never involves questions, then how will it “know” to answer that question?
it knows from its training data that sometimes accusations are followed by language that we interpret as an apology, and sometimes by language that we interpret as pushing back. It regurgitates these apologies without understanding anything, which is why they seem incredibly insincereIt has a large amount of system prompts that alter default behaviour in certain situations. Such as not giving the answer on how to make a bomb. I'm fairly certain there are catches in place to not be overly apologetic to minimize any reputation harm and to reduce potential "liability" issues.
System prompts are literally just additional input that is “upstream” of the actual user input, and I fail to see how that changes what I said about it not understanding what an apology is, or how it can be sincere when the LLM is just spitting out words based on their statistical relation to one another?
An LLM doesn’t even understand the concept of right or wrong, much less why lying is bad or when it needs to apologize. It can “apologize” in the sense that it has many examples of apologies that it can synthesize into output when you request one, but beyond that it’s just outputting text. It doesn’t have any understanding of that text.
And in that scenario, yes I'm being gaslite because a human told it to.
Again, all that’s doing is adding additional words that can be used in generating output. It’s still just generating text output based on text input. That’s it. It has to know it’s lying or being deceitful in order to gaslight you. Does the text resemble something that can be used to gaslight you? Sure. And if I copy and pasted that from ChatGPT that’s what I’d be doing, but an LLM doesn’t have any real understanding of what it’s outputting so saying that there’s any intent to do anything other than generate text based on other text is just nonsense.
There is no thinkingPartially agree. There's no "thinking" in sentient or sapient sense. But there is thinking in the academic/literal definition sense.
Care to expand on that? Every definition of thinking that I find involves some kind of consideration or reflection, which I would argue that the LLM is not doing, because it’s literally generating output based on a complex system of weighted parameters.
If you want to take the simplest definition of “well, it’s considering what to output and therefore that’s thought”, then I could argue my smart phone is “thinking” because when I tap on a part of the screen it makes decisions about how to respond. But I don’t think anyone would consider that real “thought”.
There are no decisionsAbsolutely false. The entire neural network is billions upon billions of decision trees.
And a logic gate “decides” what to output. And my lightbulb “decides” whether or not to light up based on the state of the switch. And my alarm “decides” to go off based on what time I set it for last night.
My entire point was to stop anthropomorphizing LLMs by describing what they do as “thought”, and that they don’t make “decisions” in the same way humans do. If you want to use definitions that are overly broad just to say I’m wrong, fine, that’s your prerogative, but it has nothing to do with the idea I was trying to communicate.
The more we anthropomorphize these statistical text generators, ascribing thoughts and feelings and decision making to them, the less we collectively understand what they areI promise you I know very well what LLMs and other AI systems are. They aren't alive, they do not have human or sapient level of intelligence, and they don't feel. I've actually worked in the AI field for a decade. I've trained countless models. I'm quite familiar with them.
Cool.
But "gaslighting" is a perfectly fine description of what I explained. The initial conditions were the same and the end result (me knowing the truth and getting irritated about it) were also the same.
Sure, if you wanna ascribe human terminology to what marketing companies are calling “artificial intelligence” and further reinforcing misconceptions about how LLMs work, then yeah, you can do that. If you care about people understanding that these algorithms aren’t actually thinking in the same way that humans do, and therefore believing many falsehoods about their capabilities, like I do, then you’d use different terminology.
It’s clear that you don’t care about that and will continue to anthropomorphize these models, so… I guess I’m done here.
As any modern computer system, LLMs are much better and smarter than us at certain tasks while terrible at others. You could say that having good memory and communication skills is part of what defines an intelligent person. Not everyone has those abilities, but LLMs do.
My point is, there's nothing useful coming out of the arguments over the semantics of the word "intelligence".
About halfway through the article they quote a paper from 2023:
Similarly, another study from 2023 found LLMs “hallucinated,” or produced incorrect information, in 69 to 88 percent of legal queries.
The LLM space has been changing very quickly over the past few years. Yes, LLMs today still "hallucinate", but you're not doing anyone a service by reporting in 2025 the state of the field over 2 years before.
Oh god I just figured it out.
It was never they are good at their tasks, faster, or more money efficient.
They are just confident to stupid people.
Christ, it's exactly the same failing upwards that produced the c suite. They've just automated the process.
Oh good, so that means we can just replace the C-suite with LLMs then, right? Right?
An AI won't need a Golden Parachute when they inevitably fuck it all up.
I find it so incredibly frustrating that we've gotten to the point where the "marketing guys" are not only in charge, but are believed without question, that what they say is true until proven otherwise.
"AI" becoming the colloquial term for LLMs and them being treated as a flawed intelligence instead of interesting generative constructs is purely in service of people selling them as such. And it's maddening. Because they're worthless for that purpose.
like this
Fitik, adhocfungus, Luca, Rickicki, PokyDokie, Raoul Duke, dandi8, AGuyAcrossTheInternet, Maeve e felixthecat like this.
don't like this
Luca doesn't like this.
Please don't link to Reddit. Context below:
The EU is currently developing a whitelabel app to perform privacy-preserving (at least in theory) age verification to be adopted and personalized in the coming months by member states. The app is open source and available here: github.com/eu-digital-identity….
Problem is, the app is planning to include remote attestation feature to verify the integrity of the app: github.com/eu-digital-identity…. This is supposed to provide assurance to the age verification service that the app being used is authentic and running on a genuine operating system. Genuine in the case of Android means:
- The operating system was licensed by Google
- The app was downloaded from the Play Store (thus requiring a Google account)
- Device security checks have passed
While there is value to verify device security, this strongly ties the app to many Google properties and services, because those checks won't pass on an aftermarket Android OS, even those which increase security significantly like GrapheneOS, because the app plans to use Google "Play Integrity", which only allows Google licensed systems instead of the standard Android attestation feature to verify systems.
This also means that even though you can compile the app, you won't be able to use it, because it won't come from the Play Store and thus the age verification service will reject it.
The issue has been raised here github.com/eu-digital-identity… but no response from team members as of now.
GitHub - eu-digital-identity-wallet/av-app-android-wallet-ui
Contribute to eu-digital-identity-wallet/av-app-android-wallet-ui development by creating an account on GitHub.GitHub
AGuyAcrossTheInternet likes this.
like this
AGuyAcrossTheInternet likes this.
like this
AGuyAcrossTheInternet likes this.
I do feel like that’s a precarious state to leave this in, especially if they’re developing the backend for it.
Is there even enough momentum for a SKG-style wave of coverage? It would need to be justified properly by citing things like the Tea app data leak, to make a strong case (to political pencil pushers) for the danger of tying personal information to profiles or even to platforms. Otherwise the only thing they’ll see is “gamers want to make porn accessible to children”.
I don’t know. This whole situation boils my blood because I really care about online anonymity, and this is kind of nightmare scenario shit for me. I’m not even in the UK or EU.
like this
AGuyAcrossTheInternet e HeerlijkeDrop like this.
To avoid people from simply copying the "age proof" and having others reuse it, a nonce/private key combo is needed. To protect that key a DRM style locked down device is necessary. Conveniently removing your ability to know what your device is doing, just a "trust us".
Seeing the EU doesn't make any popular hardware, their plan will always rely on either Asian or US manufacturers implementing the black-box "safety" chip.
A phone can also be shared. If it happens at scale, it will be flagged pretty quickly. It's not a real problem.
The only real problem is the very intention of such laws.
If it happens at scale, it will be flagged pretty quickly.
How? In a correct implementation, the 3rd parties only receive proof-of-age, no identity. How will re-use and sharing be detected?
There are 3 parties:
1) the user
2) the age-gated site
3) the age verification service
The site (2) sends the request to the user (1), who passes it on to the service (3) where it is signed and returned the same way. The request comes with a nonce and a time stamp, making reuse difficult. An unusual volume of requests from a single user will be detected by the service.
from a single user
Neither 2 nor 3 should receive information about the identity of the user, making it difficult to count the volume of requests by user?
I must not be explaining myself well.
both are supposed to receive information about the user's age
Yes, that's the point. They should be receiving information about age, and age only. Therefore they lack the information to detect reuse.
If they are able to detect reuse, they receive more (and personal identifying) information. Which shouldn't be the case.
The only known way to include a nonce, without releasing identifying information to the 3rd parties, is using a DRM like chip. This results in the sovereignty and trust issues I referred to earlier.
The site would only know that the user's age is being vouched for by some government-approved service. It would not be able to use this to track the user across different devices/IPs, and so on.
The service would only know that the user is requesting that their age be vouched for. It would not know for what. Of course, they would have to know your age somehow. EG they could be selling access in shops, like alcohol is sold in shops. The shop checks the ID. The service then only knows that you have login credentials bought in some shop. Presumably these credentials would not remain valid for long.
They could use any other scheme, as well. Maybe you do have to upload an ID, but they have to delete it immediately afterward. And because the service has to be in the EU, government-certified with regular inspections, that's safe enough.
In any case, the user would have to have access to some sort of account on the service. Activity related to that account would be tracked.
If that is not good enough, then your worries are not about data protection. My worries are not. I reject this for different reasons.
is being vouched for by some government-approved service.
The reverse is also a necessity: the government approved service should not be allowed to know who and for what a proof of age is requested.
And because the service has to be in the EU, government-certified with regular inspections, that's safe enough
Of course not: both intentional and unintentional leaking of this information already happens, regularly. That information should simply not be captured, at all!
Additionally, what happens to, for example, the people in Hungary(*)? If the middle man government service knows when and who is requesting proof-of-age, it's easy to de-anonymise for example users of gay porn sites.
The 3rd party solution, as you present it, sounds terribly dangerous!
(*) Hungary as a contemporary example of a near despot leader, but more will pop up in EU over the coming years.
The reverse is also a necessity: the government approved service should not be allowed to know who and for what a proof of age is requested.
It would send the proof to you. It would not know what you do with it. I gave an example in the previous post how the identity of the user could be hidden from the service.
If the middle man government service knows when and who is requesting proof-of-age, it’s easy to de-anonymise for example users of gay porn sites.
It would be a lot easier to get that information from the ISP.
There are plenty of people with full integrity on rooted phones. It's really annoying to set up and keep going, and requiring that would fuck over most rooted phone/custom os users, but someone to fully inspect and leak everything about the app will always be popping up.
If it is about hiding some data handled by the app, that will be instantly extracted.
Look at the design of DRM chips. They bake the key into hardware. Some keys have been leaked, I think playstation 2 is an example, but typically by a source inside the company.
That applies to play integrity, and a lot of getting that working is juggling various signatures and keys.
The suggestion above which I replied to was instead about software-managed keys, something handed to the app which it then stores, where the google drm is polled to get that sacred piece of data. Since this is present in the software, it can be plainly read by the user on rooted devices, which hardware-based keys cannot.
Play integrity is hardware based, but the eu app is software based, merely polling googles hardware based stuff somewhere in the process.
merely polling googles hardware based stuff
I understand. In the context of digital sovereignty, even if the linked shitty implementation is discarded (as it should be), every correct implementation will require magic DRM-like chip. This chip will be made by a US or Asian manufacturer, as the EU has no manufacturing.
like this
AGuyAcrossTheInternet likes this.
If not it seems to me that it should be sufficient as to serve as a security this phone is legit and not emulated/compromised.
And the phone provider can naturally resolve their sim IDs down to the phone number they are assigned to.
Anything related to celltower interactions is PII.
Yeah no. Requiring anything Google for something as basic as this violates the GDPR. If they go through with this, it's one legal case until they have to revise it.
Edit: German eID works on any Android btw., flawless actually. I sure hope I can use that for verification
like this
AGuyAcrossTheInternet likes this.
like this
AGuyAcrossTheInternet likes this.
EID can be used for anonymous age verification. It doesn't even need to give out your birthday and can attest to any "over the age of X" requirement.
Ref: bfdi.bund.de/DE/Buerger/Inhalt…
BfDI - Meldewesen und Statistik - Datenschutz beim Personalausweis mit eID
Der Personalausweis verfügt seit 2010 über eine elektronische Identitätsfunktion (eID). Welche Daten sind auf dem Ausweis hinterlegt und was ist bei der Nutzung zu beachten?www.bfdi.bund.de
like this
AGuyAcrossTheInternet likes this.
"Government issued app can be used for anonymous age verification."
Doesn't sound like the most trustworthy statement...
like this
AGuyAcrossTheInternet likes this.
Edit: German eID works on any Android btw., flawless actually. I sure hope I can use that for verification
Same in Italy... I mean, I can pay taxes with that application but I cannot be verified for my age ? Seriously EU ?
violates the GDPR.
I wouldn't be too sure. Data protection mainly binds private actors. Any data processing demanded by law is legal. You'd really have to know the finer points of the law to judge if this is ok.
Data processing mandated by law is legal. Governments can pass laws, unlike private actors. Public institutions are bound by GDPR, but can also rely on provisions that give them greater leeway.
I don't see how that this is in any way necessary, either. But a judge may be convinced by the claim that this is industry standard best practice to keep the app safe. In any case, there may be some finer points to the law.
The state legally cannot force you to agree to some corporations (i.e. Google’s) terms,
I'm not too sure about that, either. For example, when you are out of work, the state will cause you trouble if you do not find offered jobs acceptable.
It's another question, if not having access to age-gated content is so bad as to force you to do anything. Minors nominally have the same rights as full citizens, and they are to be denied access, too.
like this
AGuyAcrossTheInternet likes this.
like this
AGuyAcrossTheInternet likes this.
As usual, it's the implementation that matters.
Someone jumped at me for comparing EU and MAGA to Stalin's and Hitler's regimes, quote, "arguing in newspapers whose worker class has been liberated more". Like they are not equal at all and all such.
What is it with everyone being obsessed with porn censorship suddenly? Why is this a trend?
At first I thought it's about control and data gathering, but this seems like too much of a genuine attempt at such a system. Why is the government so obsessed with parenting and nannying the citizens?
like this
AGuyAcrossTheInternet likes this.
like this
AGuyAcrossTheInternet likes this.
There is a bit of a conflict between the laws requiring certain companies to identify their clients and GDPR in basis, but there is something in GDPR that allows these companies to still collect the relevant data and use it or to verify the data and not store it depending on the use case.
The whole use case thing is even the reason why companies are allowed to collect data from you. You couldn't get anything delivered if this exception wasn't there, because they wouldn't be allowed to progress your address.
At least that's what I gathered from the Dutch implementation the AVG, when I last read it a couple years ago.
Why is the government so obsessed with parenting and nannying the citizens?
I think it's because people from outside the traditional political families are getting popular votes.
For the established politicians, blaming "the internet" and building a supressing censorship machine is easier than looking in the mirror and seeing where the discontent comes from.
Been wondering myself. It's certainly part of the general right-ward trend. Societies are becoming more illiberal. It's not just the right that is moving to the right.
Obscenity laws have always been about enforcing the "correct" sexuality. Protecting minors meant preventing them from becoming "confused"; ie becoming LGBTQ.
You also have growing nationalism. In Europe, people are saying we should enforce "our laws" and "our values" against meddling foreigners (ie Big Tech). It often sounds a lot like the rants against the "globalists" that have been a staple among the US far right for decades. Age verification is part of that.
For example, Germany has long enforced age verification within its borders. It's part of the whole over-regulation thing that makes competitive tech companies almost impossible in Europe. For some reason, Europeans have trouble accepting that. You can see it here on Lemmy. The solution must be to enshittify everything to level the playing field.
The legal precedent for gaining the ability to ban content under the guise of preventing the dissemination of "obscenity" allows the future banning of "obscene" political opinions and "obscene" dissent.
Once the "obscene" political content is banned, the language will change to "offensive".
After "offensive" content is banned, then the language will change to "inappropriate".
After "inappropriate", the language will change to "oppositional".
If you believe this is a "slippery slope" fallacy, then as a counterpoint, I would refer to the actual history of the term "politically correct":
In the early-to-mid 20th century, the phrase politically correct was used to describe strict adherence to a range of ideological orthodoxies within politics. In 1934, The New York Times reported that Nazi Germany was granting reporting permits "only to pure 'Aryans' whose opinions are politically correct".[5]The term political correctness first appeared in Marxist–Leninist vocabulary following the Russian Revolution of 1917. At that time, it was used to describe strict adherence to the policies and principles of the Communist Party of the Soviet Union, that is, the party line.[24] Later in the United States, the phrase came to be associated with accusations of dogmatism in debates between communists and socialists. According to American educator Herbert Kohl, writing about debates in New York in the late 1940s and early 1950s.
The term "politically correct" was used disparagingly, to refer to someone whose loyalty to the CP line overrode compassion, and led to bad politics. It was used by Socialists against Communists, and was meant to separate out Socialists who believed in egalitarian moral ideas from dogmatic Communists who would advocate and defend party positions regardless of their moral substance.— "Uncommon Differences", The Lion and the Unicorn[4]
You're right but the example you gave seems to illustrate a different effect that's almost opposite — let me explain.
The phrase "politically correct" is language which meant something very specific, that was then hijacked by the far-right into the culture war where its meaning could be hollowed out/watered down to just mean basically "polite", then used interchangeably in a motte-and-bailey style between the two meanings whenever useful, basically a weaponized fallacy designed to scare and confuse people — and you know that's exactly what it's doing by because no right-winger can define what this boogeyman really means. This has been done before with things like: Critical Race Theory, DEI, cancel culture, woke, cultural Marxism, cultural bolshevism/judeo bolshevism (if you go back far enough), "Great Replacement", "illegals", the list goes on.
I see your point. I should've limited my citation to the phrase's authoritarian origins from the early 20th century.
To clarify, the slippery slope towards "political correctness" I wanted to describe is a sort of corporate techno-feudalist language bereft of any real political philosophy or moral epistemology. It is the language of LinkedIn, the "angel investor class", financiers, cavalier buzzwords, sweeping overgeneralizations, and hyperbole. Yet, fundamentally, it will aim to erase any class awareness, empiricism, or contempt for arbitrary authority. The idea is to impose an avaricious financial-might-makes-right for whatever-we-believe-right-now way of thinking in every human being.
What I want to convey is that there is an unspoken effort by authoritarians of the so-called "left" and "right" who unapologetically yearn for the hybridization of both Huxley's A Brave New World and Orwell's 1984 dystopian models, sometimes loudly proclaimed and other times subconsciously suggested.
These are my opinions and not meant as gospel.
I get what you mean. You're saying we're sliding towards something that brings back political correctness in its original definition, and I agree with you.
The idea is to impose an avaricious financial-might-makes-right
This resonates a lot. I'd argue we're already there. All this talk of "meritocracy" (fallaciously opposed to "DEI"), the prosperity gospel (that one's even older), it's all been promoting this idea of worthiness determined by net worth. Totalitarianism needs a socially accepted might-makes-right narrative wherever it can find it, then that can be the foundation for the fascist dogma/cult that will justify the regime's existence and legitimize its disregard for human life. Bonus points if you can make that might-makes-right narrative sound righteous (e.g. "merit" determines that you "deserve" your wealth, when really it's a circular argument: merit is never questioned for those who have the wealth, it's always assumed because how else could they have made that much money!).
- Govt. want to control access to everything
- People are not too happy about this
- Govt. say "to protect children, you have to install this app, under these conditions"
- You want to protect childrens, so you do so
- Govt. say "to protect this or that, we have to impose approved gates on many websites, based on the app you installed before"
- You want to protect this or that, so you accept it
- Govt. say "fuck you, you whatever is not in line with the fucking biggot at the helm of your country/federation/whatever, now we know what you do, we control what's allowed, and anything to get around the blocks is illegal and will land you in jail. Fuck you again, fucker."
- You're a happy little plant in a pot.
Basically, it's not about porn. It's not about protecting kids. It's not about helping "victims of abuse". If anything, it's putting all these in more danger, along with everyone else.
- actively defending child rape
- calls vaccines poison
- calls prenatal care and school lunch subsidy woke
- spends billions bombing brown children
FYI: Most of the world actually restricts, and some outright bans, porn.
Its only western countries that have unrestricted access to porn.
Sure, but it has some good sides as well
It's just a shame that they aren't just made of the good sides
What's going on with Europe lately? You all really want GOOGLE of all mega corps in control of your identity?
You're going the opposite way, it should be your right to install an alternate OS on your phone. If anything they should be banning Google licensed Android.
reshared this
djpanini reshared this.
I miss LineageOS so much, my last couple of phones haven't had a build of it and my asshole banking apps wont work on it now.
For my next phone i'm just not going to buy one unless it's already supported and if I have to skip online banking I'll do it.
I use cards, I don't even have NFC on my phone, but it is nice to be able to check my bank account, lock/unlock the card, deposit checks, etc.
I may be able to do most of that on the website, idk. Guess I'm probably going to find out 😀
to hear it from any non-Americans on lemmy they're better than America.
looks like they're just as susceptible to this fascist bullshit to me though...
We invented this bullshit, of course we're susceptible.
Still better than America, though ;P XD
I call it effective authoritarianism, it's a sugar coated baton
No one is laughing... We're horrified how the people who have been screaming "freedom" and being obnoxious about how much more free they are than anyone else in the entire universe, seem to love getting enslaved while being obnoxious about how cool it is to be enslaved.
Europe has its problems. We've had them for generations, and right now they're getting worse. But at least we have a culture of fighting back, something americans don't.
But at least we have a culture of fighting back, something americans don’t.
Talk is cheap. Prove it in the coming years. I really hope you're right, because I want SOMEWHERE to not be either a coporate fascist hellholle or a collapsed country in the future..
AI Chatbots Remain Overconfident — Even When They’re Wrong: Large Language Models appear to be unaware of their own mistakes, prompting concerns about common uses for AI chatbots.
AI Chatbots Remain Overconfident — Even When They’re Wrong - Dietrich College of Humanities and Social Sciences - Carnegie Mellon University
Large Language Models appear to be unaware of their own mistakes, prompting concerns about common uses for AI chatbots.www.cmu.edu
‘Japanese-first’ Sanseito party goes into election leveraging unease about foreigners
How Japan’s hard-right populists are profiting from anti-foreign sentiment and a cost of living crunch
Nationalists win over disaffected first-time voters with a call for a return to family values and curbs on immigrationGavin Blair (The Guardian)
Some thoughts on Surf, Flipboard's fediverse app
I've got access to the beta of the Surf app. Some thoughts:
some stuff I really liked:
- rss works (though no custom URLs yet, just what they already scraped)
- you get lemmy, mastodon, bluesky, threads all together
- you can make your own feeds and check what other people made (like a custom timeline, or topic-specific like “NBA”, “woodworking”, “retro gaming stuff”)
- has different modes: you can switch between videos, articles, podcasts depending on the feed
but also...
- can’t add your own RSS feeds (huge miss)
- some feeds break and show no posts even when they’re active (ok, it's still a beta)
- YouTube videos have ads (not into that—I support creators through patreon, affiliate links, whatever. not ads)
- feeds you create are public by default unless you manually change it
- not open source. built on open protocols, sure. but the app is locked up. (HUGE MISS)
all that said, I really believe: better feeds = better experience = better shot at the fediverse going mainstream.
anyone else tried it?
do you know anyone building an open source version of this? is that even realistic?
I’d love to hear what do you think 😀
i also have the same grievances with surf.
::: spoiler i've seen a few that are clients for both the (microblogging) fediverse and bluesky,
app | license | platform |
---|---|---|
fread | apache 2.0 | android |
agora | mit | web/pwa |
openvibe | proprietary | android & ios |
soraSNS | proprietary | ios |
:::
but none seem to have any of the rest of the features, unfortunately.
SoraSNS: iOS Mastodon Misskey Bluesky Nostr client
Beautiful and futuristic iOS third party client for Mastodon, Misskey, Bluesky, and Nostr. Gallery mode, video reel, local ML powered For You feed keeps your timeline interesting!msz (MszPro・株式会社Smartソフト)
apps.apple.com/us/app/tapestry…
First review was interesting.
Tapestry by Iconfactory
Tapestry weaves your favorite blogs, social media, and more into a unified and chronological timeline.
App Store
Flow control? China starts mega-dam project on Brahmaputra in Tibet; how will it impact India - Times of India
Flow control? China starts mega-dam project on Brahmaputra in Tibet; how will it impact India
China News: China has commenced construction of a major dam on the Brahmaputra River in Tibet, near the Indian border, with Premier Li Qiang present at the ground
TOI World Desk (The Times Of India)
London: Over 50 arrests in Parliament Square amid pro-Palestine Action protest
More than 50 arrests in Parliament Square as pro-Palestine Action protests held across UK
Dozens of protesters assembled in central London on Saturday afternoon
Sami Quadri (Evening Standard)
Reddit users in the UK must now upload selfies to access NSFW subreddits
Reddit is introducing age verification for UK users
The change is due to new age verification laws in the UK.
Amanda Yeo (Mashable)
Hm, I'm going to need some software engineers to critique an idea I have that could at least partially solve the fears people have about their personal details being tied to their porn habits.
The system will be called the Adult Content Verification System (or Wank Card if you want to be funny). It's a physical card, printed by the government with a unique key printed on it. Those cards are then sold by any shop that has an alcohol license (premises or personal). You go in, show your ID to the clerk, buy the card. That card is proof that you're over 18, but it is not directly tied to you, you just have to be over 18 to buy it. The punishment for selling a Wank Card to someone under the age of 18 is the same as if you sold alcohol to someone under 18.
When you go to the porn site, they check if you're from the UK, they check if you have a key associated with your account. If not, they ask for one, you provide the key to the site, the site does an API call to https://wankcard.gov.uk/api/verify
with the site's API key (freely generated, but you could even make the api public if you want) and the key on the card, gets a response saying "Yep! This is a valid key!" and hey presto, free to wank and nobody knows it's you! If you don't have an account, the verification would have to be tied to a cookie or something that disappears after a while for all you anonymous people.
As a result, you can both prove that you're over 18 (because you have the card) and some company over in San Francisco doesn't get your personal data, because you never actually record it anywhere. All you have is keys, and while yes, the government could record "Oh this key was used to verify on this site", they'd have to know which shop the key was bought from, who sold it, and who bought it, which is a lot more difficult to do unless the shopkeeper keeps records of everyone he's ever sold to.
So... Good idea? Bad idea? Better than the current approach anyway, I think.
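The verification flow described above can be sketched in miniature. Everything here is hypothetical illustration (the class name, the key format, the idea that the `/api/verify` endpoint reduces to a set-membership check), not a real government API:

```python
import secrets

class WankCardRegistry:
    """Toy model of the proposed card registry (illustrative only).

    The government mints opaque keys at print time; shops sell the cards
    after an in-person ID check; sites later call verify() with just the
    key, so the registry never learns who holds which card.
    """

    def __init__(self):
        self._valid_keys = set()

    def mint_card(self) -> str:
        # Key printed on a physical card; not linked to any person.
        key = secrets.token_urlsafe(16)
        self._valid_keys.add(key)
        return key

    def verify(self, key: str) -> bool:
        # The /api/verify endpoint would reduce to this membership check.
        return key in self._valid_keys

registry = WankCardRegistry()
card_key = registry.mint_card()       # card printed and sold over the counter
print(registry.verify(card_key))      # True  - valid card, holder unknown
print(registry.verify("forged-key"))  # False - rejected, nothing learned
```

The privacy property falls out of the data model: the registry stores bare keys, so even a subpoena only reveals that *some* card was used, not whose.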
“Reddit has stressed that this system is only to verify users' age, and it has no interest in your identity. Lee further stated that Persona won't know what subreddits you visit, and has promised it won't keep users' uploaded images more than seven days.”
Press X to doubt.
Parola filtrata: nsfw
Beware USA 🙁((
Supreme Court's ruling practically wipes out free speech for sex writing online (July 4, 2025)
[commented the same a few days back]
The Supreme Court’s Ruling Practically Wipes Out Free Speech for Sex Writing Online
Am I now committing civil disobedience... just by keeping my personal literary website up as is?
Michael Ellsberg (Michael Ellsberg's Missives)
For those unaware, this isn't something like replacing a slur with removed, he edited users' comments, turning them into insults to other users.
I don't care that those original commenters were (likely) pieces of shit, and the people who he made the comments insult were definitely pieces of shit, putting words into people's mouths to make them fight each other is unforgivable. Even if you put out a shitty apology.
Reddit CEO admits he edited Trump supporters' comments on social network | The Independent
'I shouldn’t play such games, and it’s all fixed now,' Steve Huffman told the Donald Trump supporting community
Andrew Griffin (The Independent)
Not only was the apology horrible, but for anyone who'd been a user on that platform for years, it obviously puts the thought in their head that spez could be changing their words by directly editing the DB, and getting them put on a list for wrong-speak. Sure, that's possible with any DB, but he proved it was actually something being done on that site. Given his role, it's a major red flag, as this type of action would normally result in someone being fired.
Reddit has since IPO'd and will probably do well as a stock because of all the information it harvests from users.
Yeah, fuck all that.
Guess we're transitioning into a VPN only future.
We have the opportunity to head into a utopic or dystopic future and we're absolutely choosing the dystopic one.
A VPN future? Haha. Not if they don't want to. There are many ways to prevent VPN from operating when you're a government.
You can just plain ban encryption, which sounds really crazy, but yeah, they're trying to.
You can just say "it's illegal to use a VPN". It'll technically still work, but if there's a trace of traffic from your house to a known VPN endpoint, you're it! Great!
They can force custom proprietary spying software on your devices. Sounds equally crazy as the thing above, right? But rest assured they're ALSO trying to do that. Multiple times, even. And in some places… they did. Of course, nothing forces you to have such software on your device. Especially if your devices are not supported; it also turns into a "you have to buy this or that big name device, everything else's de-facto illegal! Fuck you, we're the government!". And if you get caught for whatever, and your phone, PC, or anything isn't "compliant"? Bam. Guilty.
Plenty of options. All of them are completely stupid and would weaken privacy, individuals, and governments at large. That has never stopped legislation from being pushed forward.
They can force custom proprietary spying software on your devices.
- That would block Linux from their borders, which means goodbye Steam Deck in the UK among other things.
Migrants sent to El Salvador's CECOT returned to Venezuela in prisoner swap, 10 Americans freed: Officials
Migrants sent to El Salvador's CECOT returned to Venezuela in prisoner swap, 10 Americans freed: Officials
Over 250 prisoners were released from CECOT, Venezuela's government said.
Laura Romero (ABC News)
Channel.org open beta
Seems to be a way of making Bluesky-style feeds with Mastodon-style services; at least, that's what I gather from reading the FAQ. They don't actually explain what this is anywhere.
Today, Channel.org public beta goes live! 🎉 We're so excited to give you access to Channel.org Channels, your own curated feeds across the social web.
You can create a Channel on the Channel.org website now and then download the beta Channels app for easy management.
We'd love for you to try it out and let us know what you think!
#SocialMedia #Fediverse #SocialWeb #Mastodon #Channels #Newsmast #Beta #Technology #FediTech #FediApp #App
Channel is basically a white label instance of PatchWork, which is a Mastodon fork with custom feeds and community curation tools.
The main intent behind the project is to help existing communities and organizations get onto the Fediverse, and have some curation capabilities. Ideally, it can be used to get a large amount of people and accounts onto the network with minimal friction.
GitHub - patchwork-hub/patchwork-web: Your self-hosted, globally interconnected microblogging community
Your self-hosted, globally interconnected microblogging community - patchwork-hub/patchwork-web
GitHub
An American father who moved to Russia to avoid LGBTQ+ “indoctrination” is being sent to the front line against Ukraine despite being assured he would serve in a non-combat role.
Anti-Woke Dad Who Fled With Family to Russia Sent to War Zone
Derek Huffman, 46, joined the military with hopes of becoming a Russian citizen. His wife said he was duped into a combat role.
Josh Fiallo (The Daily Beast)
Putin: lol front line for you.
Derek Huffman: Curse your sudden but inevitable betrayal!
I always liked the concept of Matrix, and still actively use it, but there's some serious jank. Synapse is generally bloated and not fun to run an instance of, Dendrite is perpetually in beta, and the clients themselves range from adequate to awful. The default Element client on Android is so broken for me that I'm forced to use Element X, because I can't even log in with Element.
It's disappointing, but there's a ton of issues that aren't so easy to resolve. New Vector and the Element Foundation are basically two separate entities that have some kind of hard split between them, neither of which seems to have the money necessary to support comprehensive development. The protocol is said to be bloated and overly complex, and trying to develop a client or a server implementation is something of a nightmare.
I want to see Matrix succeed, I think a lot of people see the potential of what it could be. I'm not sure it'll ever get there.
I always liked the concept of Matrix, and still actively use it, but there’s some serious jank.
I use Element as well as Beeper, which is at its core an Element client based on network bridging. I'm a big fan of Matrix, but it isn't as approachable as other messaging services and requires some technical know-how to use effectively.
It seems like the Linux of messaging services.
I just want a self-hostable open-source alternative to the shitty closed-source IM systems I'm forced to use
I'm sticking with Matrix for now, hopefully some of the issues I've had will get ironed out
The thing is... what alternatives are there? Signal can't be trusted (on this very same website there's an article about it). I'm not using closed-source alternatives, SimpleX is kinda shady too tbh, and I'm not even sure I could get anyone to use it.
I don't like Matrix/Element either, but sadly it's the best open source chat solution we have.
XMPP is significantly less decentralized, allowing them to """cut corners""" compared to Matrix protocol implementation, and scale significantly better. (In heavy quotes, as XMPP isn't really cutting corners, but true decentralization requires more work to achieve seemingly "the same result")
An XMPP or IRC channel with a few thousand users is no problem, whereas Matrix can have problems with that. On the other hand, any one Matrix homeserver going down does not impact users that aren't specifically on that homeserver, whereas XMPP is centralized enough that it can take down a whole channel.
Meanwhile IRC is a 90s protocol that doesn't make any sense in the modern world of mainly mobile devices.
XMPP also doesn't change much; the last proper addition to the protocol (from what I can tell, on the website) was 2024-08-30: xmpp.org/extensions/xep-0004.h…
Data Forms
This specification defines an XMPP protocol extension for data forms that can be used in workflows such as service configuration as well as for application-specific data description and reporting.
Peter Saint-Andre
XMPP doesn't change very very often, but there's actually tons of XEPs that are in common use and are considered functionally essential for a modern client, and with much higher numbers than XEP-0004
The good news, though, is that mostly you as the user don't need to care about those! Most of the modern clients agree on the core set and thus interoperate fine for most normal things. And most XEPs have a fallback in case the receiver doesn't support the same XEPs.
In general, XMPP as a protocol is a lightweight core that supports an interesting soup of modules (in the form of XEPs) to make it a real messenger in the modern sense. And I think that's neat! But you can't really judge the core to say how often things change.
Most of the modern clients agree on the core set and thus interoperate fine for most normal things.
So you think it is a sane solution to mark essential features as optional extensions and then have a wink-wink, nudge-nudge agreement of which of these "optional" extensions are actually mandatory? Instead of having essential features be part of the core protocol?
But more importantly, XMPP sucks because it does not have one back-end implementation like Vodozemac for Matrix. So let alone being unable to have security audits, you are forcing client developers to roll their own implementation of the E2EE, with likely little to no experience in cybersecurity, and just hoping they will make no mistakes. You know, implementing encryption that even experts have a hard time getting right.
Honestly, I struggle with this myself. On the one hand I like the diversity of clients; it feels like a sign of strength of the community and protocol that there are many options that have different values. But the cost of this diversity is that it makes things more complicated to coordinate, and different people with different values have different opinions on what a chat client should even want for features.
Something like Slack or Discord can roll out a server feature and client feature to all their clients all at the same time and have a unified experience. But the whole benefit of FLOSS is that anyone can fork the client to make changes, and the whole point of an open protocol is that multiple independent clients can interoperate, and so there's a kind of irony in me wanting those things, but those things producing a fractured output.
So I think XMPP, as a protocol, does the best compromise. These differences between clients and servers aren't just random changes in behaviour or undocumented features, they're named, numbered, alterations that live somewhere and are advertised in the built-in "discovery" protocols. The protocol format itself is extensible, so unexpected content can be passed alongside known content in a message or a server response and the clients all know to ignore anything they don't understand, and virtually all of the XEPs are designed with some kind of backwards compatibility in mind for how this feature might degrade when sent to a non-supported client.
It isn't perfect, but I think perfection is impossible here. A single server and client that everyone uses and keeps up to date religiously with forced upgrades is best for cohesiveness, but worst for "freedom", and a free-for-all where people just make random individual changes and everything is always broken isn't really a community, and XMPP sits in the middle and has a menu of documented deviations for clients to advertise and choose.
As for security, that can be mostly solved with libraries, independent of the rest of the client or server implementation. Like, most clients used libsignal for their crypto, so that could in theory be audited and bug-fixed and all clients would benefit. Again, not perfect, there's always room at the interface between the client code and the library code that's unique, but it's not as bad as rolling your own crypto.
I am yet to see a universal tool that is good at everything. Trying to cram all use-cases into one network results in mediocre results at best and usually even worse.
There is no reason to combine a person to person messenger like signal and community based one like discord into one network. That is why I like the Matrix approach of 1 backend library and many frontends so you can have your pick of clients without messing up the protocol.
Even having the fallbacks for missing features does not solve the issue. The experience for the average person will still be bad. While you and I may enjoy doing research on which client is best for us, most users will see the sub-par experience and leave for a corporate solution that "just works".
I am just wondering what it takes to succeed.
start with a discord clone
make it e2ee
make it federated
i feel like it shouldn't be this hard, but I'm not the one developing Matrix, nor XMPP, nor the third smaller option you, the reader, want me to list that I'm unaware of
I can use IRC
The fact that many Discord and IRC channels (servers?) block Matrix connections has drastically reduced its usefulness for me. When I was running my own Matrix server, I could have gotten around it by using a puppet, but Synapse is such a hog I had to shut it down, and most of the IRC rooms I want to use don't allow Matrix proxies.
running your own server is super lightweight.
Not IME. Are you running Synapse? Gigabytes of disk usage and memory leaks requiring restarts.
They're talking about switching to Jabber/XMPP, which is what those two bridges are for, and they're saying XMPP servers are lightweight.
It's a bit confusing in context, I'll admit.
We really need to stop abandoning existing foss projects and thinking a whole new thing needs to be invented. Free and open-source software is not a product, it doesn't abide by the same rules and relationships that proprietary tech does.
It's more organic. It's also a commons that we can continue to draw on, and reshape. If I recall correctly, there were something like three different vector graphic editors from the same codebase before Inkscape managed to be the one that gained traction.
Matrix isn't perfect, but abandoning it just to reinvent it all over again just because some people really need a thing that works like Discord, even though Discord is absolute hot garbage; is just going to re-create all the same problems. Matrix today is better than it was two years ago. And Matrix in a year will be better from now.
Honestly, setting up things using Docker Compose is generally a question of copying and pasting and editing the file locations.
The moment you need SSL and/or a reverse proxy it becomes a bit more complex, but once you set up a reverse proxy once you can generally expand that to your other applications.
Something like a Synology NAS makes it very easy, and to some extent even the TrueNAS apps are kinda easy.
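As a rough sketch of that copy-paste pattern, assuming Caddy as the reverse proxy and a placeholder app image (service names, paths, and domains here are all illustrative, not a known setup):

```yaml
# docker-compose.yml - hypothetical example; swap images and paths for your own.
services:
  app:
    image: nginx:alpine          # stand-in for whatever application you deploy
    volumes:
      - ./app-data:/usr/share/nginx/html
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"                # Caddy obtains and renews TLS certs automatically
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy-data:/data
volumes:
  caddy-data:
```

The matching Caddyfile would be a couple of lines like `app.example.com { reverse_proxy app:80 }`; once the proxy is in place, adding another application is just another service block plus another Caddyfile entry.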
Knowledge manipulation on Russia's Wikipedia fork; Marxist critique of Wikidata license; call to analyze power relations of Wikipedia
Disposable E-Cigarettes More Toxic Than Traditional Cigarettes: High Levels of Lead, Other Hazardous Metals Found in E-Cigarettes Popular with Teens
They may look like travel shampoo bottles and smell like bubblegum, but after a few hundred puffs, some disposable, electronic cigarettes and vape pods release higher amounts of toxic metals than older e-cigarettes and traditional cigarettes, according to a study from the University of California, Davis. For example, one of the disposable e-cigarettes studied released more lead during a day’s use than nearly 20 packs of traditional cigarettes.
Disposable E-Cigarettes More Toxic Than Traditional Cigarettes
They may look like travel shampoo bottles and smell like bubblegum, but after a few hundred puffs, some disposable, electronic cigarettes and vape pods release higher amounts of toxic metals than older e-cigarettes and traditional cigarettes, accordi…
UC Davis
Like I said, the three companies named there are not the same company, but you claim they are and are sourcing from the same Chinese sweatshop, even though they don't even seem Chinese.
So, again, can I get a source for your claim that all these are the same company sourcing their stuff from Chinese sweatshop?
Esco Bars | Esco Bar | Esco Bar flavors | Escobar Vape
Esco Bars, your one-stop destination for the finest vaping products and premium Esco Bar flavors. If you're on the hunt for an unparalleled.
Esco Bars
Since nobody else will provide the actual clarity
EscobarVape and Elfbar are created by two separate Chinese companies, Shenzhen Innokin Technology Co. Ltd and Shenzhen iMiracle Technology respectively. Mi-Bar is created by an American company but has partnered with Elfbar to distribute Elfbar products in the US
Really wish more people would just provide the facts that speak for themselves, rather than point fingers about who is and isn't doing their research
**PROMOTED BY THE “MAD MEN” AT LUCKY STRIKE!**
Just have a cocktail and smoke a cig, it’s better than weed and e-cigs! How dare you switch because of rat poison found in our cigs and how the RICH banned hemp due to lobbying by the paper people… not due to anything else! Cool!
Xinjiang’s Organ Transplant Expansion Sparks Alarm Over Uyghur Forced Organ Harvesting
cross-posted from: sh.itjust.works/post/42460866
Xinjiang’s official organ donation rate is shockingly low. So why is China planning to open six new organ transplant facilities in the region? “The expansion suggests that the Chinese authorities are expecting to increase the numbers of transplants performed in Xinjiang. However, this is puzzling as there is no reason why the demand for transplants should suddenly go up in Xinjiang,” Rogers explained. “From what we know about alleged voluntary donations, the rates are quite low in Xinjiang. So the question is, why are these facilities planned?”
Rogers noted one chilling possibility: that “murdered prisoners of conscience (i.e., Uyghurs held in detention camps)” could be a source of transplanted organs.
This suggestion becomes even more concerning when considering the extensive surveillance and repression that Uyghurs face in the region. Detainees in the many internment camps in Xinjiang have reported being subjected to forced blood tests, ultrasounds, and organ-focused medical scans. These procedures align with organ compatibility testing, raising fears that Uyghurs are being prepped for organ harvesting while in detention.
David Matas, an international human rights lawyer who has investigated forced organ harvesting in China, questioned the very possibility of voluntary organ donation in Xinjiang. “The concept of informed, voluntary consent is meaningless in Xinjiang’s carceral environment,” Matas said. “Given the systemic repression, any claim that donations are voluntary should be treated with the utmost skepticism.”
The new transplant facilities will be distributed across Urumqi and other regions of northern, southern, and eastern Xinjiang. Experts argue that the sheer scale of this expansion is disproportionate to Xinjiang’s voluntary donation rate and overall capacity, suggesting that the Chinese authorities may be relying on unethical methods to source organs.
The author is a Muslim woman who has won awards for her work as a journalist and written for several other major news outlets...
The wikipedia article for the universal peace federation redirects to the unification church article.
Shinzo Abe found out how bad the moonies are.
Keep spreading that "new cold war" propaganda.
Nobody is defending the Moonies, especially not this current affairs publication owned by a Japanese media corporation. Here's plenty of examples of them calling out the Unification Church:
thediplomat.com/tag/unificatio…
Anybody can be nominated to be an ambassador for peace, it's also associated with the UN.
upf.org/core-program/ambassado…
Launched in 2001, Ambassadors for Peace is the largest and most diverse network of peace leaders. As of 2020, there are more than 100,000 Ambassadors for Peace from 160 countries who come from all walks of life representing many races, religions, nationalities, and cultures
She literally has no other ties to the Moonies/Unification Church, and how about the human rights lawyer she directly quotes?
Or the bioethicist and part of the coalition to End Transplant Abuses in China (ETAC)? All just cold war propaganda?
Results: 445 included studies reported on outcomes of 85 477 transplants. 412 (92.5%) failed to report whether or not organs were sourced from executed prisoners; and 439 (99%) failed to report that organ sources gave consent for transplantation. In contrast, 324 (73%) reported approval from an IRB. Of the papers claiming that no prisoners’ organs were involved in the transplants, 19 of them involved 2688 transplants that took place prior to 2010, when there was no volunteer donor programme in China.
Anyway, keep spreading that there is no genocide propaganda.
washingtonpost.com/politics/20…
Two months after the Trump administration all but shut down its foreign news services in Asia, China is gaining significant ground in the information war, building toward a regional propaganda monopoly, including in areas where U.S.-backed outlets once reported on Beijing’s harsh treatment of ethnic minorities. The U.S. decision to shut down much of RFA’s shortwave broadcasting in Asia is one of several cases where the Trump administration — which views China as America’s biggest rival — has yielded the adversary a strategic advantage.
Allentown grandfather’s family was told he died in ICE custody. Then they learned he’s alive — in a hospital in Guatemala, they say
The Eighth Amendment to the United States Constitution states: “Excessive bail shall not be required, nor excessive fines imposed, nor cruel and unusual punishments inflicted.”
Unconstitutional actions ordered by the POTUS. Are we ready to impeach yet?
https://www.mcall.com/2025/07/18/luis-leon-allentown-grandfather-ice-guatemala/
The man was granted asylum! He’s 82 god damn years old!
I would love to see a popular uprising where we string up the thugs that are snatching people off the streets. They don’t need unnecessary things like “lawyers” or “trials”. A can of gas and a match are pretty cheap. So is rope.
digital-strategy.ec.europa.eu/…
lemmy.world/post/33568797
The EU approach to age verification
Shaping Europe’s digital future
At least the EU is somewhat privacy friendly here (excluding the Google tie in) compared to whatever data sharing and privacy mess the UK has obligated people to do with sharing ID pictures or selfies.
Proving you are 18+ through zero knowledge proof (i.e. other party gets no more information than being 18+) where the proof is generated on your own device locally based on a government signed date of birth (government only issues an ID, doesn't see what you do exactly) is probably the least privacy intrusive way to do this, barring not checking anything at all.
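As a toy illustration of that selective-disclosure idea (not an actual zero-knowledge proof, and using a shared-secret HMAC where a real system would use asymmetric signatures or a proper ZKP scheme), the verifier below learns exactly one bit and nothing identifying:

```python
import hmac
import hashlib
import json

# Stand-in for the issuer's signing key; a real system would use asymmetric
# signatures so the verifying site could not mint its own attestations.
GOV_SIGNING_KEY = b"demo-only-secret"

def issue_attestation(over_18: bool) -> dict:
    # Generated on the user's own device from the government-signed date of
    # birth; the attestation carries only the boolean claim, never the DOB.
    claim = json.dumps({"over_18": over_18})
    tag = hmac.new(GOV_SIGNING_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify_attestation(att: dict) -> bool:
    # The site checks the signature and reads one bit - nothing identifying.
    expected = hmac.new(GOV_SIGNING_KEY, att["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, att["tag"]) and \
        json.loads(att["claim"])["over_18"]

att = issue_attestation(True)
print(verify_attestation(att))  # True: valid, over 18, no identity revealed
```

Tampering with the claim (say, flipping the boolean) invalidates the tag, so a forged "over 18" fails verification even though the verifier never sees who the holder is.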
Sure, if privacy is worth nothing to you, but I wouldn't speak for the rest of the UK and EU.
in reply to missingno • • •Estimates is better it can be easily bypassed:
windowscentral.com/gaming/game…
How Death Stranding helps bypass UK age verification: A game-changer?
Sean Endicott (Windows Central)
in reply to HertzDentalBar • • •Is there other platforms that are out there estimating our ages?
I know the whole age gate thing - being in the UK, believe me, I know. I've just never had a service go "we can't show you content. No, don't tell us your age, we'll work it out from the data we've collected from you".
That's a whole new one for me, I must admit.