As rising sea levels swallow Bangladesh’s land, its climate refugees are forced to adapt
Few countries in the world are considered more vulnerable to the impact of rising sea levels and climate change than Bangladesh, a nation of 175 million people squeezed into a landmass the size of Iowa.
Fred de Sam Lazaro (PBS News)
How Quantum Computers are gonna screw us
Technology reshared this.
Is AI Slop Killing the Internet?
Clear likes this.
The worst possible antitrust outcome | Google's only "punishment" for its illegal search monopoly is to have to share all the data it gathered on US with every company that wants it
Cory Doctorow is rightfully enraged:
This is all downside. If Google complies with the order, it will constitute a privacy breach on a scale never before seen. If they don't comply with the order, it will starve competitors of the one tiny drop of hope that Judge Mehta squeezed out of his pen. It's a catastrophe. An utter, total catastrophe. It has zero redeeming qualities. Hope you like enshittification, folks, because Judge Mehta just handed Google an eternal licence to enshittify the entire fucking internet.
copymyjalopy, adhocfungus, Rozaŭtuno and Raoul Duke like this.
G.O.P. Thwarts Epstein Disclosure Bill as Accusers Plead for Files
Jeffrey Epstein’s accusers went to the Capitol to ask Congress to get behind their calls for more disclosures, but momentum for a bill demanding it appeared to stall.
copymyjalopy, adhocfungus and Raoul Duke like this.
2025 DCI All-Age Championship Finals – Hawthorne Caballeros Photos
The Hawthorne Caballeros performing “On The Edge” during the 2025 DCI All-Age Championship Finals at Lucas Oil Stadium in Indianapolis, Indiana.
All of these photos are available under a Creative Commons license, free for you to use as long as you give me photography credit.
You can find all of the edited photos from this and other events on my Flickr site.
You can find all of my photos on my Smugmug site.
Hawthorne Caballeros, 2025 DCI All-Age Championship Finals
Photo Credit: Kevin Gamin
Mexican man dies in immigration detention in Arizona
cross-posted from: tucson.social/post/2212969
A 32-year-old Mexican man died of unknown causes on Sunday after being detained by U.S. Immigration and Customs Enforcement at a private prison in Arizona, authorities confirmed.
adhocfungus, copymyjalopy and Raoul Duke like this.
David Seymour doesn't want more houses built near his place
Deputy PM David Seymour says parts of Auckland plan ‘not necessary’. He plans to lobby council and Housing Minister Chris Bishop for changes
David Seymour says parts of the Auckland Council's new plan are not necessary.
Thomas Coughlan (The New Zealand Herald)
An AI Social Coach Is Teaching Empathy to People with Autism
An AI Social Coach Is Teaching Empathy to People with Autism | Stanford HAI
A specialized chatbot named Noora is helping individuals with autism spectrum disorder practice their social skills on demand.
hai.stanford.edu
adhocfungus and Rozaŭtuno like this.
2d love and object-orientific things, but what happens is no fun
octospacc.altervista.org/2025/…
2d love and object-orientific things, but what happens is no fun (OOP performance tests and evaluations on Love2D)
Given the unexpected problems with HaxeFlixel that I haven't yet had time to write up here, I was (re)considering the based Love2D which, I've (re)noticed lately, runs on so many platforms that it's pointless to even list examples here. The annoying thing about it, though, is that it's not exactly a game engine so much as a multimedia framework... and so, unlike Flixel and other little things, it doesn't have all the various utilities you want in order to build something without starting from absolute zero... so the idea would be to create a sort of little engine for it to handle common things like sprites, physics, I dunno, those (not so) nice things. 🤥 Of course, the inevitable problem popped up right after an initial day of work (done cautiously, luckily, because deep down I was ready to see things go wrong): with a dose of OOP that honestly wasn't even that big by common standards, performance collapsed so hard that, for a simple little Breakout game (the HaxeFlixel demo, which I adapted along the way for testing), on PC the CPU usage hovered around 10-15% (which is like five times what the same demo does in HaxeFlixel)... while on crazy platforms like the 3DS it was literally unplayable, running at 5 seconds per frame (and to think I have the New one, which is faster). 💀
I did a bit of research and, although it was already obvious to me that object-oriented programming based mainly on inheritance makes a program slower (electronic rocks are built to execute sequential instructions and to work with memory that is as contiguous as possible, which is the opposite of what happens with all those classes extending classes extending the market my father bought), I didn't imagine that on Lua the performance drop would be not just noticeable, but downright off the scale in some cases. So now the problems are enormous. 😤
This time too I've collected a lot of links about this umpteenth cause of suffering for me, and honestly I still haven't fully understood the issue, but a big problem seems to be caused by accesses to nested tables, and by function calls made more often than necessary even for otherwise fast operations... and in my case a good chunk of that overhead surely comes from not writing native Lua but using Haxe (or alternatively TypescriptToLua) to transpile to Lua, yet I felt the problem couldn't just be the bloated code generated by those things... 🧨
And indeed, writing a small benchmark in pure Lua (christened on the fly just to give the memo a title: "Love2D fucking rectangles"; it spawns countless rectangles and moves them around while computing collisions), first in a classic style and then with a minimum of OOP, and running it well beyond reasonable limits, I saw the ugly truth: the OOP version really is slower. Not that much slower, and it depends on the options you run it with anyway, but only because it's still very simple... unlike the little engine I'd so like to build to replicate the HaxeFlixel API in Love2D as far as possible (evidently, not very possible). 😭
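For illustration, here is a minimal sketch of the two styles being compared; this is not the actual benchmark code, and the field names and nesting are assumptions, but it shows where the OOP version pays extra (method dispatch plus nested table lookups on every object, every frame).

```lua
-- Plain-table style: flat fields, one free function, no dispatch.
local function newRectPlain(x, y, w, h, vx, vy)
  return { x = x, y = y, w = w, h = h, vx = vx, vy = vy }
end

local function updatePlain(r, dt)
  r.x = r.x + r.vx * dt
  r.y = r.y + r.vy * dt
end

-- OOP style: metatable "class" with nested sub-tables, the way a
-- Flixel-like sprite hierarchy tends to end up.
local Rect = {}
Rect.__index = Rect

function Rect.new(x, y, w, h, vx, vy)
  return setmetatable({
    position = { x = x, y = y },
    size     = { w = w, h = h },
    velocity = { x = vx, y = vy },
  }, Rect)
end

function Rect:update(dt)
  -- every field access here is a nested table lookup, and the method call
  -- itself is resolved through the metatable: cheap once, costly when it
  -- happens for hundreds of thousands of objects per frame
  self.position.x = self.position.x + self.velocity.x * dt
  self.position.y = self.position.y + self.velocity.y * dt
end
```

One common mitigation is caching nested fields in locals inside hot loops (e.g. local p = self.position), since locals live in registers while every table field access is a lookup.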
After a good 4 (four) images I'm not sure I have the energy to elaborate further... But, in short: in one mode, the program spawns only X (200 thousand) rectangles at startup, while in the other it spawns X (200) per frame, forever, always computing collisions... so the first rules out slowness caused by continuous object instantiation, while the second shows how one program slows down over time compared to the other (spawning fewer objects in the same amount of time). 💥 In the first mode the load is mostly in the draw call, so I couldn't limit the number of squares actually visible, and therefore I could only run it on PC, where on average the OOP version (which accesses several nested properties to do the drawing) is about twice as slow... In the second test the load was mostly everything else, so I decided to cap the rectangles visible on screen each frame to the last X (500), which let me run the program on the 3DS too without it crashing (I think there are VRAM limits there), but both on PC and on 3DS you can see the non-OOP version manages to spawn on average 1.2 more sprites per time delta, a difference that over the course of minutes becomes thousands of sprites. 😵
Incredible, hilarious, magical, but... now what??? No idea! I'll have to work hard to build a little engine generalized enough to be used as a convenient library for many Love2D games, yet efficient at the same time... and that's where it all falls apart, because to implement concepts like a sprite, which besides the usual data like X and Y position has a "drawable" object that can be an image or a geometric shape, and therefore requires completely different Love2D API calls behind the scenes, I see no non-messy alternative to OOP; and it won't be enough to just favor composition over inheritance: to beat the overhead it will take internal design moves so awkward that I'm genuinely scared even to think about writing them... 😱
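Just to sketch the kind of composition I mean (illustrative names, not an existing engine API; it only assumes the stock love.graphics calls): the sprite stores a plain "drawable" record whose draw function is chosen once at creation, instead of being resolved through an inheritance chain on every frame.

```lua
-- Hypothetical composition-based sprite: which Love2D call to make is
-- decided when the drawable is built, not via a class hierarchy at draw time.
local function imageDrawable(image)
  return {
    draw = function(self, x, y)
      love.graphics.draw(image, x, y)
    end,
  }
end

local function rectangleDrawable(w, h)
  return {
    draw = function(self, x, y)
      love.graphics.rectangle("fill", x, y, w, h)
    end,
  }
end

local function newSprite(x, y, drawable)
  return { x = x, y = y, drawable = drawable }
end

local function drawSprite(s)
  s.drawable:draw(s.x, s.y)
end
```

Whether something like this actually beats the metatable version on the 3DS would of course need measuring with the same rectangles benchmark.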
#benchmark #development #LOVE2D #Lua #test #testing
Memo by ██▓▒░⡷⠂𝚘𝚌𝚝𝚝 𝚒𝚗𝚜𝚒𝚍𝚎 𝚞𝚛 𝚠𝚊𝚕𝚕𝚜⠐⢾░▒▓██
TypeScriptToLua, A generic TypeScript to Lua transpiler: + https://typescripttolua.github.io + https://github.com/TypeScriptToLua/TypeScriptToLua Memos
Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task [edited post to change title and URL]
Note: this lemmy post was originally titled MIT Study Finds AI Use Reprograms the Brain, Leading to Cognitive Decline and linked to this article, which I cross-posted from this post in !fuck_ai@lemmy.world.
Someone pointed out that the "Science, Public Health Policy and the Law" website which published this click-bait summary of the MIT study is not a reputable publication deserving of traffic, so, 16 hours after posting it I am editing this post (as well as the two other cross-posts I made of it) to link to MIT's page about the study instead.
The actual paper is here and was previously posted on !fuck_ai@lemmy.world and other lemmy communities here.
Note that the study with its original title got far fewer upvotes than the click-bait summary did 🤡
MIT Study Finds Artificial Intelligence Use Reprograms the Brain, Leading to Cognitive Decline - Science,
By Nicolas Hulscher, MPH
Science, Public Health Policy and the Law
The obvious AI-generated image and the generic name of the journal made me think that there was something off about this website/article and sure enough the writer of this article is on X claiming that covid 19 vaccines are not fit for humans and that there's a clear link between vaccines and autism.
Neat.
adhocfungus, Rozaŭtuno, Pro, dflemstr and ə-Li 🐝💨💨🍯 like this.
Technology and ə-Li 🐝💨💨🍯 reshared this.
Personally I don't use AI because I see all the subtle ways it's wrong when programming. The more I pay attention to things like AI search results, the more it seems there's almost always something misrepresented or subtly incorrect in the output, and for any topic I'm not already fluent in, I likely won't notice these things until they're already causing issues.
it's not any different than eating fast/processed food vs eating healthy.
it warps your expectations
themadcodger likes this.
Fuck, this is why I'm feeling dumber myself after getting promoted to more senior positions, where I only have to work at the architectural level and on stuff that the more junior staff can't work on.
With LLMs basically my job is still the same.
My dad around 1993 designed a cipher better than RC4 at the time (I know that's not a high mark now, but it kinda was then), which passed an audit by a relevant service.
My dad around 2003 was still intelligent enough; he'd explain interesting mathematical problems to me and my sister, and notice similarities to them and other interesting things in real life.
My dad around 2005 was promoted to a management position and was already becoming kinda dumber.
My dad around 2010 was a fucking idiot; you'd think he was mentally impaired.
My dad around 2015 apparently went to a fortuneteller to "heal me from autism".
So yeah. I think it's a bit similar to what happens to elderly people when they retire. Everything needs to be exercised, and real tasks give you a feeling of life; giving orders and going to endless could-be-an-email meetings makes you both dumb and depressed.
that's the Peter principle.
people only get promoted until their inadequacies/incompetence show, and then their job becomes covering for it.
hence why so many middle managers' primary job is managing the appearance of their own competence first and foremost, and they lose touch with the actual work being done... which is a key part of how you actually manage it.
Yeah, that's part of it. But there is something more fundamental, it's not just rising up the ranks but also time spent in management. It feels like someone can get promoted to middle management and be good at the job initially, but then as the job is more about telling others what to do and filtering data up the corporate structure there's a certain amount of brain rot that sets in.
I had just attributed it to age, but this could also be a factor. I'm not sure it's enough to warrant studies, but it's interesting to me that just the act of managing work done by others could contribute to mental decline.
I don't refute the findings but I would like to mention: without AI, I wasn't going to be writing anything at all. I'd have let it go and dealt with the consequences. This way at least I'm doing something rather than nothing.
I'm not advocating for academic dishonesty of course, I'm only saying it doesn't look like they bothered to look at the issue from the angle of:
"What if the subject was planning on doing nothing at all and the AI enabled the them to expend the bare minimum of effort they otherwise would have avoided?"
sad that people knee jerk downvote you, but i agree. i think there is definitely a productive use case for AI if it helps you get started learning new things.
It helped me a ton this summer learn gardening basics and pick out local plants which are now feeding local pollinators. That is something i never had the motivation to tackle from scratch even though i knew i should.
It helped me a ton this summer learn gardening basics and pick out local plants which are now feeding local pollinators. That is something i never had the motivation to tackle from scratch even though i knew i should.
Given the track record of some models, I'd question the accuracy of the information it gave you. I would have recommended consulting traditional sources.
I would have recommended consulting traditional sources.
jfc you people are so eager to shit on anything even remotely positive of AI.
Firstly, the entire point of this comment chain is that if "consulting traditional sources" was the only option, I wouldn't have done anything. My back yard would still be a barren mulch pit. AI lowered the effort-barrier of entry, which really helps me as someone with ADHD and severe motivation deficit.
Secondly, what makes you think i didn't? Just because I didn't explicitly say so? yes, i know not to take an LLM's word as gospel. i verified everything and bought the plants from a local nursery that only sells native plants. There was one suggestion out of 8 or so that was not native (which I caught before even going shopping). Even with that overhead of verifying information, it still eliminated a lot of busywork searching and collating.
doing something wrong is worse than doing nothing.
This is a general statement, right? Try to forget about the context then and read that again 😅
I actually think the moments when AI goes wrong are the moments that stimulate you and make you better realize what you're doing and what you want to achieve. And when you do subsequent prompts to fix the issue, you're essentially doing problem solving, figuring out what to ask to make it do the exact thing you want. And it's never going to be always right, simply because most cases of it being wrong are you not providing enough details about what you actually want. So step-by-step AI usage with clarifications and fixes is always going to be a brain-stimulating problem-solving process.
So vibe coding?
I've tried using LLMs for a couple of tasks before I gave up on the jargon outputs and nonsense loops that they kept feeding me.
I'm no coder / programmer but for the simple tasks / things I needed I took inspo from others, understood how the scripts worked, added comments to my own scripts showing my understanding and explaining what it's doing.
I've written honestly so much, just throwing spaghetti at the wall and seeing what sticks (works). I have fleshed out a method for using base16 colour schemes to modify other GTK* themes so everything in my OS matches. I have declarative containers, IP addresses, secrets, and so much more. Thanks to the folks who created nix-colors, I should really contribute to that repo.
I still feel like a noob when it comes to Linux; however, seeing my progress in ~1 year is massive.
I managed to get a working Google Coral after everyone else's scripts (that I could find on GitHub) had quit working (NixOS). I've since ditched that module as the upkeep required isn't worth a few ms in detection speeds.
I don't believe any of my configs would be where they are if I'd asked a llm to slap it together for me. I'd have none of the understanding of how things work.
My wg.conf file was getting the wrong SELinux context and the wg-quick daemon refused to work because of that: unconfined_u:object_r:user_home_t:s0
I never knew such a thing even existed, and the LLM just casually explained it and provided a fix:
sudo semanage fcontext -a -t etc_t "/etc/wireguard(/.*)?"
sudo restorecon -Rv /etc/wireguard
LLMs are good as a guide to point you in the right direction. They’re about the same kind of tool as a search engine. They can help point you in the right direction and are more flexible in answering questions.
Much like search engines, you need to be aware of the risks and limitations of the tools. Google will give you links that are crawling with browser-exploiting malware, and LLMs will give you answers that are wrong or directions that are destructive to follow (like incorrect terminal commands).
We’re a bit off from the ability to have models which can tackle large projects like coding complete applications, but they’re good at some tasks.
I think the issue is when people try to use them to replace having to learn instead of as a tool to help you learn.
We’re a bit off from the ability to have models which can tackle large projects like coding complete applications, but they’re good at some tasks.
I believe they're (Copilot and similar) good for coding large projects if you use them in small steps and micromanage everything. I think in this mode of use they save a huge amount of time, and more importantly, they prevent you wasting your energy doing grindy/stupid/repetitive parts and allow you to save it for actually interesting/challenging parts.
Well, that's why I was asking for an example of sorts. The problem is that if you're just starting out, you don't know what you don't know and, more importantly, you won't be able to tell if something is wrong. It doesn't help that LLMs are notoriously good at being confidently incorrect and prone to hallucinations.
When I tried it for programming, more often than not it hallucinated functions and APIs that did not exist. And I know that they don't because I've been working at this for more than half of my life, so I have the intuition to detect bullshit when it appears. However, learners are unlikely to be able to differentiate that.
you won’t be able to tell if something is wrong
When you run it, test it, and it doesn't work as expected (or doesn't work at all), that most likely means something is wrong. Not all fields of work require programs to be 100% correct on the first try; pretty often you can run and test your code an infinite number of times before shipping/deploying.
a custom VPN without security minded planning and knowledge? that sounds like a disaster.
surely you could do other things that have more impact for yourself, still with computers. use wireguard and spend the time with setting up your services and network security.
and, port forwarding.. I don't know where you are running that, but linux iptables can do that too, in the kernel, with better performance.
Oops, I meant self-hosting a wireguard server, not actually doing an alternative to wireguard or openvpn themselves...
and, port forwarding… I don't know where you are running that, but linux iptables can do that too, in the kernel, with better performance.
With my previous paid VPN I had to use natpmpc to ask their server to forward/bind ports for me, and I also had to do that every 45 seconds. It's nice to have a bash script running in a systemd daemon that does that in a loop, and also parses the output and saves the remote ports the server gave us this time to a file in case we need them (like, for setting up a tor relay). Also, I got another script and daemon for the tor relay that monitors forwarded port changes (from the file), updates torrc and restarts the tor container. All this by Copilot, without knowing bash at all. Without having to write complex regexes to parse that output or regexes to overwrite the tor config, etc. It's not a single prompt, it requires some troubleshooting and clarifications, and ultimately I got to know some of the low-level details myself. Which is also great.
Oops, I meant self-hosting a wireguard server, not actually doing an alternative to wireguard or openvpn themselves...
oh, that's fine then, recommended even.
With my previous paid VPN I had to use natpmpc to ask their server to forward/bind ports for me, and I also had to do that every 45 seconds. It's nice to have a bash script running in a systemd daemon that does that in a loop, and also parses the output and saves the remote ports the server gave us this time to a file in case we need them (like, for setting up a tor relay).
oh so this is a management automation that requests an outside system to open ports, and updates services to use the ports you got. that's interesting! what VPN service was that?
All this by Copilot, without knowing bash at all.
be sure to run shellcheck for your scripts though, it can point out issues. aim for it to have no output, that means all seems ok.
what VPN service was that?
be sure to run shellcheck for your scripts though, it can point out issues. aim for it to have no output, that means all seems ok.
It does some logging though, and I read what it logs via systemctl --user status. Anyway, those scripts/services so far are of a simple kind: if they don't work, I notice immediately, because my torrents stop seeding or my tor/i2p proxy ports stop working in the browser. In cases where an error can only be discovered conditionally somewhere during a long runtime, it needs more complicated and careful testing.
How to manually set up port forwarding | Proton VPN
A guide to manually configuring port forwarding for Proton VPN using the NAT-PMP protocol on macOS and Linux
Proton VPN
You haven't done anything, though. If you're getting to the point where you are doing actual work instead of letting the AI do it for you, then congratulations, you've learned some writing skills. It would probably be more effective to use some non-ai methods to learn as well though.
If you're doing this solely to produce output, then sure, go ahead. But if you want good output, or output that actually reflects your perspective, or the skills to do it yourself, you've gotta do it the hard way.
Microsoft reported the same findings earlier this year, spooky to see a more academic institution report the same results.
microsoft.com/en-us/research/w…
Abstract for those too lazy to click:
The rise of Generative AI (GenAI) in knowledge workflows raises questions about its impact on critical thinking skills and practices. We survey 319 knowledge workers to investigate 1) when and how they perceive the enaction of critical thinking when using GenAI, and 2) when and why GenAI affects their effort to do so. Participants shared 936 first-hand examples of using GenAI in work tasks. Quantitatively, when considering both task- and user-specific factors, a user's task-specific self-confidence and confidence in GenAI are predictive of whether critical thinking is enacted and the effort of doing so in GenAI-assisted tasks. Specifically, higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking. Qualitatively, GenAI shifts the nature of critical thinking toward information verification, response integration, and task stewardship. Our insights reveal new design challenges and opportunities for developing GenAI tools for knowledge work.
Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task – MIT Media Lab
This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine…
MIT Media Lab
bingo.
it's like a health supplement company telling you eating healthy is stupid when they have this powder/pill you should take.
tech evangelism is very cultish and one of its values is worshipping 'youth' and 'novelty' to an absurd degree, as if youth is automatically superior to experience and age.
You write an essay with AI, your learning suffers.
One of these papers that are basically "water is wet, researchers discover".
cognitive decline.
Another reason for refusing those so-called tools... it could turn one into another tool.
What a ridiculous study. People who got AI to write their essay can’t remember quotes from their AI written essay? You don’t say?! Those same people also didn’t feel much pride over their essay that they didn’t write? Hold the phone!!! Groundbreaking!!!
Academics are a joke these days.
The obvious AI-generated image and the generic name of the journal made me think that there was something off about this website/article and sure enough the writer of this article is on X claiming that covid 19 vaccines are not fit for humans and that there's a clear link between vaccines and autism.
Neat.
Thanks for the warning. Here's the link to the original study, so we don't have to drive traffic to that guy's website.
I haven't got time to read it and now I wonder if it was represented accurately in the article.
Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task
This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition.
arXiv.org
Thanks for pointing this out. Looking closer I see that that "journal" was definitely not something I want to be sending traffic to, for a whole bunch of reasons - besides anti-vax they're also anti-trans, and they're gold bugs... and they're asking tough questions like "do viruses exist" 🤡
I edited the post to link to MIT instead, and added a note in the post body explaining why.
Do Viruses Exist? - Science, Public Health Policy and the Law
Sanitation, nutrition, and hygiene reduced disease mortality long before vaccines, while reliance on PCR testing distorted the reality of COVID-19. In…
Science, Public Health Policy and the Law
what should we do then? just abandon LLM use entirely or use it in moderation? i find it useful to ask trivial questions and sort of as a replacement for wikipedia. also what should we do to the people who are developing this 'rat poison' and feeding it to young people's brains?
edit:
i also personally wouldn't use AI at all if I didn't have to compete with all these prompt engineers and their brainless speedy deployments
what should we do then?
i also personally wouldn't use AI at all if I didn't have to compete with all these prompt engineers and their brainless speedy deployments
Gotta argue that your more methodical and rigorous deployment strategy is more cost efficient than guys cranking out bug-ridden releases.
If your boss refuses to see it, you either go with the flow or look for a new job (or unionize).
I'm not really worried about competing with the vibe coders. At least on my team, those guys tend to ship more bugs, which causes the fire alarm to go off later.
I'd rather build a reputation of being a little slower, but more stable and higher quality. I want people to think, "Ah, nice. Paequ2 just merged his code. We're saved." instead of, "Shit. Paequ2 just merged. Please nothing break..."
Also, those guys don't really seem to be closing tickets faster than me. Typing words is just one small part of being a programmer.
you should stop using it and use wikipedia.
being able to pull relevant information out of a larger body of it is an incredibly valuable life skill. you should not be replacing that skill with an AI chatbot
Yeah, I went over there with ideas that it was grandiose and not peer-reviewed. Turns out it's just a cherry-picked title.
If you use an AI assistant to write a paper, you don't learn any more from the process than you do from reading someone else's paper. You don't think about it deeply and come up with your own points and principles. It's pretty straightforward.
But just like calculators, once you understand the underlying math, unless math is your thing, you don't generally go back and do it all by hand because it's a waste of time.
At some point, we'll need to stop using long-form papers to gauge someone's acumen in a particular subject. I suspect you'll be given questions in real time and need to respond to them on video with your best guesses to prove you're not just reading it from a prompt.
Seems like you've made the point succinctly.
Don't lean on a calculator if you want to develop your math skills. Don't lean on an AI if you want to develop general cognition.
I don't think this is a fair comparison because arithmetic is a very small and almost inconsequential skill to develop within the framework of mathematics. Any human that doesn't have severe learning disabilities will be able to develop a sufficient baseline of arithmetic skills.
The really useful aspects of math are things like how to think quantitatively. How to formulate a problem mathematically. How to manipulate mathematical expressions in order to reach a solution. For the most part these are not things that calculators do for you. In some cases reaching for a calculator may actually be a distraction from making real progress on the problem. In other cases calculators can be a useful tool for learning and building your intuition - graphing calculators are especially useful for this.
The difference with LLMs is that we are being led to believe that LLMs are sufficient to solve your problems for you, from start to finish. In the past students who develop a reflex to reach for a calculator when they don't know how to solve a problem were thwarted by the fact that the calculator won't actually solve it for them. Nowadays students develop that reflex and reach for an LLM instead, and now they can walk away with the belief that the LLM is really solving their problems, which creates both a dependency and a misunderstanding of what LLMs are really suited to do for them.
I'd be a lot less bothered if LLMs were made to provide guidance to students, a la the Socratic method: posing leading questions to the students and helping them to think along the right tracks. That might also help mitigate the fact that LLMs don't reliably know the answers: if the user is presented with a leading question instead of an answer then they're still left with the responsibility of investigating and validating.
But that doesn't leave users with a sense of immediate gratification which makes it less marketable and therefore less opportunity to profit...
arithmetic is a very small and almost inconsequential skill to develop within the framework of mathematics.
I'd consider it foundational. And hardly small or inconsequential given the time young people spend mastering it.
Any human that doesn’t have severe learning disabilities will be able to develop a sufficient baseline of arithmetic skills.
With time and training, sure. But simply handing out calculators and cutting math teaching budgets undoes that.
This is the real nut of comparison. Telling kids "you don't need to know math if you have a calculator" is intended to reduce the need for public education.
I’d be a lot less bothered if LLMs were made to provide guidance to students, a la the Socratic method: posing leading questions to the students and helping them to think along the right tracks.
But the economic vision for these tools is to replace workers, not to enhance them. So the developers don't want to do that. They want tools that facilitate redundancy and downsizing.
But that doesn’t leave users with a sense of immediate gratification
It leads them to dig their own graves, certainly.
Don't lean on an AI if you want to develop general ~~cognition~~ essay writing skills.
Sorry the study only examined the ability to respond to SAT writing prompts, not general cognitive abilities. Further, they showed that the ones who used an AI just went back to "normal" levels of ability when they had to write it on their own.
the ones who used an AI just went back to “normal” levels of ability when they had to write it on their own
An ability that changes with practice
I don't always use the calculator.
Do you bench press 100 lbs and then give up on lifting altogether?
Well what do you mean with the lifting metaphor?
Many people who use AI are doing it to supplement their workflow. Not replace it entirely, though you wouldn’t know that with all these ragebait articles.
16 hours after posting it I am editing this post (as well as the two other cross-posts I made of it) to link to MIT’s page about the study instead.
Better late than never. Good catch.
Anyone who doubts this should ask their parents how many phone numbers they used to remember.
In a few years there'll be people who've forgotten how to have a conversation.
I already have seen a massive decline personally and observationally (watching other people) in conversation skills.
Most people now talk to each other like they are exchanging internet comments. They don't ask questions, they don't really engage... they just exchange declaratory sentences. Heck, most of the dates I went on the past few years... zero real conversation, just vague exchanges of opinion and commentary. A couple of them went full-on streamer, just ranting at me and randomly stopping to ask me nonsense questions.
Most of our new employees the past year or two really struggle with any verbal communication and if you approach them physically to converse about something they emailed about they look massively uncomfortable and don't really know how to think on their feet.
Before the pandemic I used to actually converse with people and learn from them. Now everyone I meet feels like interacting with a highlight reel. What I don't understand is why people are choosing this and then complaining about it.
That doesn't require a few years, there are loads of people out there already who have forgotten how to have a conversation
Especially moderators, who typically are the polar opposite of the word. You disagree with my factually incorrect statement? Ban. Problem solved. You disagree with my opinion? Ban.
Similarly I've seen loads of users on Lemmy (and before that on Reddit) that just ban anyone who asks questions or who disagrees.
It's so nice and easy, living in an echo chamber, but it does break your brain
I don't see how that's any indicator of cognitive decline.
Also people had notebooks for ages. The reason they remembered phone numbers wasn't necessity, but that you had to manually dial them every time.
And now, since you are the father of writing, your affection for it has made you describe its effects as the opposite of what they really are. In fact, [writing] will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing, which is external and depends on signs that belong to others, instead of trying to remember from the inside, completely on their own. You have not discovered a potion for remembering, but for reminding; you provide your students with the appearance of wisdom, not with its reality. Your invention will enable them to hear many things without being properly taught, and they will imagine that they have come to know much while for the most part they will know nothing. And they will be difficult to get along with, since they will merely appear to be wise instead of really being so.
—a story told by Socrates, according to his student Plato
The other day I saw someone ask ChatGPT how long it would take to perform 1.5 million instances of a given task, if each instance took one minute. Mfs cannot even divide 1.5 million minutes by 60 to get 25,000 hours, then by 24 to get 1,041 days. Pretty soon these people will be incapable of writing a full sentence without ChatGPT's input
Edit to add: divide by 365.25 to get 2.85 years. Anyone who can tell me how many months that is without asking an LLM gets a free cookie emoji
You forgot doing the years, which is a bit trickier if we take into account the leap years.
According to the Gregorian calendar, every fourth year is a leap year unless it's divisible by 100 – except those divisible by 400 which are leap years anyway. Hence, the average length of one year (over 400 years) must be:
365 + 1⁄4 − 1⁄100 + 1⁄400 = 365.2425 days
So,
1041 / 365.2425 ≈ 2.85 years
Or 2 years and...
0.850161194275 × 365.2425 ≈ 310 days and...
0.514999999987 × 24 ≈ 12 hours and...
0.359999999688 × 60 ≈ 21 minutes and...
0.59999998128 × 60 ≈ 36 seconds
1041 days is just about 2y 310d 12h 21m 36s
Wtf, how did we go from 1041 whole days to fractions of a day? Damn leap years!
Had we not been accounting for them, we would have had 2 years and...
0.852054794521 × 365 = 311.000000000165 days
Or simply 2y 311d if we just ignore that tiny rounding error or use fewer decimals.
Engineers be like…
1041/365 = 2.852
0.852 × 365 = 310.980
Thus 2 y 311 d. Or really, fuck it 3 y
Edit. #til
The lemmy app on my phone does basic calculator functions.
Or really, fuck it 3 y
Seems about right! But really, it often seems pretty useful to me, since it removes a lot of unnecessary information throughout a content feed or thread, though I usually still want to be able to see the exact date and time when tapping or hovering over the value for further context.
Edit: However, the lemmy client I use, Eternity, shows the entire date and time for each comment instead of the age of it, and I'm fine with that too, but unsure what I actually prefer...
The lemmy app on my phone does basic calculator functions.
Which client and how?
I want a free cookie emoji!
I didn't ask an LLM, no, I asked Wikipedia:
The mean month-length in the Gregorian calendar is 30.436875 days.
Edit: but since I already knew a year is 365.2425 I could, of course, have divided that by the 12 months of a year to get that number.
So,
1041 ÷ 30.436875 ≈ 34 months and...
0.2019343313 × 30.436875 ≈ 6 days and...
0.146249999987 × 24 ≈ 3 hours and...
0.509999999688 × 60 ≈ 30 minutes and...
0.59999998128 × 60 ≈ 35 seconds and...
0.9999988768 × 1000 ≈ 999 milliseconds and
0.9999988768 × 1000000 ≈ 999999 nanoseconds
34 months + 6d 3h 30m 35s 999ms 999999 ns (or we could call it 36s...)
Edit: 34 months is better known as 2 years and 10 months.
Rough estimate using 30 days as average month would be ~35 months (1050 = 35×30). The average month is a tad longer than 30 days, but I don't know exactly how much. Without a calculator, I'd guess the total result is closer to 34.5. Just using my own brain, this is as far as I get.
Now, adding a calculator to my toolset, the average month is 365.2425 d / 12 m = 30.4377 d/m. The total result comes out to about 34.2, so I overestimated a little.
Also, the total time is 1041.66... which would be more correctly rounded to 1042, but that has negligible impact on the result.
Edit: I saw someone else went even harder on this, but for early morning performance, I'm satisfied with my work
🍪
Pirat gave me an egg emoji, so I baked some more cupcake emojis. Have one for getting it so close without even using a calculator 🧁
I still remember all my family’s phone numbers from when I was a kid growing up In WV in the 70s
I currently have my wife’s number memorized and that’s it. Not my mom, my kids, friends, anybody. I just don’t have to. It’s all in my phone.
But I’m also of the opinion that NOT having this info in my head has freed it up for more important things. Like memes and cat videos 🤣
But seriously, I don’t think this tool, and AI is just a tool, is dumbing me down. Yes I think about certain things less, but it allows me to ask different or better questions, and just learn differently. I don’t necessarily trust everything it spits out, I double check all code it produces, etc. It’s very good at explaining things or providing other examples. Since I’m older, I’ve heard similar arguments about TV and/or the Internet. LLMs are a very interesting tool that have good and bad uses. They are not intelligent, at least not yet, and are not the solution to everything technical. They are very resource intensive and should be used much more judiciously than they currently are.
Ultimately it boils down to this: if you’re lazy, this allows you to be more lazy. If you don’t want to continue learning and just rely on it, you are gonna have a bad time. Be skeptical, questioning, employ critical thinking, take in information from lots of sources, and in the end you will be fine. That is, unless it becomes sentient and wipes us all out.
Been vibe coding hard on a new project this past week. It's been working really well, but I feel like I just watched a bunch of TV. It's passive enough that it's like I'm flipping through channels, paying a little attention and then going to the next.
Whereas coding it myself would engage my brain and might feel like reading.
It's bizarre because I've never had this experience before.
Democrats foil justice department lawsuit by negotiating to keep 98,000 North Carolina voters
Proposed consent order and agreement to allow voters to provide information while voting with provisional ballot
George Chidi (The Guardian)
adhocfungus, copymyjalopy, Raoul Duke and dflemstr like this.
Instagram is finally launching an iPad app
Instagram is finally launching an iPad app | TechCrunch
Until now, Instagram on iPad was just a blown-up iOS app, which wasn't pleasant to look at. The company said that it has now revamped the experience to suit the big screen.
Ivan Mehta (TechCrunch)
Oregon, Washington, California form health care alliance to protect vaccine access
The states' Democratic governors offered few specifics Wednesday as to how they hope the Western Health Alliance could influence which vaccines will be available in their states.
Amelia Templeton | Michelle Wiley (OPB)
copymyjalopy, adhocfungus and Raoul Duke like this.
Can US offshore wind survive the Trump administration? Regardless of what happens with Revolution Wind, the government’s pause may set the industry back decades.
Can offshore wind survive the Trump administration?
Regardless of what happens with Revolution Wind, the government’s pause may set the industry back decades.
Rebecca Egan McCarthy (Grist)
Replacing Music Streaming Services with a Self-hosted Stack
::: spoiler Comments
- Lemmy at Self-Hosted Community;
- Reddit.
:::
Replacing TV and movie streaming services is pretty trivial, and typically one of the first projects for any new self-hoster, but music streaming services are a whole different beast. There's a growing need to replace the likes of Spotify, but there's no one-size-fits-all solution, and maintaining an on-disk music library will always be a lot of manual work. That being said, I've put together a stack that I'm happy with for now, and there was some interest in the full details, so I'll try to slap together a tutorial here.
copymyjalopy and Raoul Duke like this.
Technology Channel reshared this.
The looming power crunch: Solutions for data center expansion in an energy-constrained world
The looming power crunch
Is a power crunch looming? Explore the surging energy demands of data centers and discover solutions for sustainable growth in an energy-constrained world.
By Patrick Donovan (Schneider Electric)
Carbon storage is becoming a more mainstream climate solution. A new study says that we won’t have enough room to bury all our CO2
Access options:
* gift link — registration required
* archive.today
The paper is here
A prudent planetary limit for geologic carbon storage - Nature
A risk-based, spatially explicit analysis of carbon storage in sedimentary basins establishes a prudent planetary limit of around 1,460 Gt of geological carbon storage, which requires making explicit decisions on priorities for storage use.
Nature
Oofnik and adhocfungus like this.
The dirty dozen: meet America’s top climate villains
Few are household names, yet these 12 enablers and profiteers have an unimaginable sway over the fate of humanity.
Amy Westervelt (The Guardian)
It should, but will not. What it will do is increase profits marginally milking fed dollars for test projects that go over budget and over timeframe without working, and without them actually trying.
Since Bush at least they have been funding this stuff. Technohopium to justify biz as usual.
The ultimate solution is fewer humans. I've seen Earth's population more than double in my lifetime.
I think population is also a driver of immigration and immigrant hate.
LOL, not at the rate we're at now! C'mon, you don't think the population going from 3.7 billion in my childhood to 8+ billion today is a major factor?
People think of their personal habits, mostly driving, when they think of global CO2. Factor in all those people eating. That's a shitload of farm CO2, and other waste. And look how fat we are in the first world!
Concrete is a major driver of CO2 emissions, something like 7-8%. Guess what all of us need to build our homes and infrastructure.
On top of that, worldwide poverty has nosedived in that time, and that's a great thing, but people that weren't burning fuel and needing plastics are doing so now. Even as population has exploded, poverty is still riding hard on the down slope. That lift out of poverty requires energy, and shitloads of it.
Depopulation is going to cause worldwide economic depression. But whether by individual choice, government decree or climate change, it's gonna happen. I don't know of any economic system that can weather this.
When we burn fossil fuels, CO~2~ concentrations stay elevated basically forever in human terms. Half the burning since the industrial revolution has happened in the last ~30 years. The human population is young, so to make the kind of difference you're thinking of, it would mean a campaign of mass murder.
I'm not in it for that.
You can get a very modest difference in future emissions by encouraging the use of contraceptives, and educating girls, but it isn't going to get you out of the need for a rapid shift off fossil fuels.
Behold, the eco fascist in the wild, promoting mass murder as a solution to something that can only be solved with cooperation.
Edit: There are extremely specific people who would benefit the world by ceasing to exist, and those people are billionaires. There are ways to make billionaires not exist other than killing them.
Florida Democrats RaShon Young, LaVon Bracy Davis Win Special Elections
Photo: Florida House/Rashon Young for Florida House Florida Democrats scored decisive victories in two special elections on Tuesday (September 2), signaling growing opposition to Republican leadership. According to the Orlando Sentinel, RaShon Young and LaVon Bracy Davis both won their races for the Florida House and Senate, respectively. Young, a legislative staffer and former NASA … Continued
Jaguar Land Rover Cyberattack 2025: What Happened and Its Impact
Jaguar Land Rover Cyberattack 2025: A Wake-Up Call for Automotive Cybersecurity
James Scott (Wealthari)
New Rules Going into Effect
Hello!
As you might already know or have seen if you browse the local feed of our instance, we are going to be putting into effect some tighter rules around what sort of communities will be allowed on this instance. Mostly just saying this is a literature-focused instance, so we want literature-focused communities on here. I've reached out to all of the moderators of the communities that will be disallowed going forward and they have graciously agreed to start their migration. I do want to say we wholeheartedly appreciate how understanding everyone has been with this change. This is going to be a rolling change; I don't expect immediate compliance from all who are affected. I have updated the rules in the sidebar, but we will work with a rolling schedule to allow for migrations.
- Please keep instance-hosted communities related to literature and literature topics.
This is the new rule. This only affects communities, you can of course use your accounts on here to interact with other communities in the fediverse. I don't think I needed to say that, but I guess better safe than sorry? Please feel free to reach out with any suggestions!
Thank you!
Google’s $45 Million Contract With Netanyahu's Office to Spread Israeli Propaganda
Publicly available government contracts show that Israel’s advertising bureau, which reports to the prime minister’s office, has since embarked on a mass advertising and public messaging effort to conceal the hunger crisis. The push includes the use of American influencers widely reported on last month. It also includes a high-dollar spending spree on paid advertising, yielding tens of millions for Google, YouTube, X, Meta, and other tech platforms.
“There is food in Gaza. Any other claim is a lie,” asserted a propaganda video published by Israel’s foreign ministry to Google’s YouTube video sharing platform in late August and viewed more than 6 million times. Much of the video’s reach results from an ad placed during an ongoing and previously unreported $45 million (NIS 150 million) advertising campaign initiated between Google and Netanyahu’s office in late June.
The contract—which is with both YouTube and Google's advertising campaign management platform, Display & Video 360—explicitly characterizes the ad campaign as hasbara, a Hebrew word whose meaning is somewhere between public relations and propaganda.
Google’s $45 Million Contract With Netanyahu's Office to Spread Israeli Propaganda
Google is in the middle of a six-month, $45 million contract to amplify propaganda with Netanyahu’s office. The contract describes Google as a “key entity” supporting the prime minister’s messaging.
Jack Poulson (Drop Site News)
Endymion_Mallorn likes this.
Technology reshared this.
I saw an ad on YouTube for what a good job ICE is doing not that long ago. Kristi Noem was in it.
More disturbingly, I've noticed a little scattering of those "police bodycam raw video" channels starting to play up when the criminal involved is an immigrant, what their status was, how ICE was involved, and so on. There's clearly something at work that's a little more subtle and sinister than just paid advertising.
viewed more than 6 million times
Fucking savage reference
Come on now, Google is just a business trying to make ends meet. All the starving, dying Palestinians in an open air concentration camp have to do is spend $45 million on a counter advertising campaign. Like... Duh.
/s
The 2024 CBO report shows that combat-capability rates of F-35Bs and F-35Cs older than four years plummet to less than 10%.
Availability, Use, and Operating and Support Costs of F-35 Fighter Aircraft
At a Glance: In this report, the Congressional Budget Office analyzes the recent availability, use, and operating and support costs of stealthy F-35 fighter aircraft. Programwide operating and support costs exceeded $5 billion in 2023.
Congressional Budget Office
pancake and adhocfungus like this.
The Los Angeles Schoolteacher Leading the Fight Against ICE
The Trump administration’s war on immigrants is expanding. Homeland Security Secretary Kristi Noem on Sunday confirmed its deportation operations would ramp up in Chicago and other major U.S. cities in the coming weeks. When the new fiscal year kicks in October 1, Immigration Customs and Enforcement can begin tapping billions in new funds from President Trump’s Big Beautiful Bill. With the agency seeking to hire 10,000 new agents, Americans can expect more violent raids snatching their neighbors off the streets.
The epicenter of America’s anti-immigrant campaign has been Los Angeles and its surrounding cities, where thousands have been arrested since June. Almost every day this summer, federal agents from ICE and U.S. Border Patrol have stalked Home Depot parking lots, car washes, and immigrant communities across Southern California, detaining people based on ethnicity or language.
“If they break LA, they can break any community in this country.”
But as the Trump administration’s war on immigrants expands, so does the resistance against it.
“It’s important that they break LA,” said Ron Gochez, a high school history teacher and leading member of the LA-based grassroots group Unión Del Barrio. “If they break LA, they can break any community in this country.”
Gochez and Unión Del Barrio are a part of the Community Self-Defense Coalition, a network of dozens of grassroots groups. The network conducts daily street patrols to warn their neighbors of possible ICE activity.
Filmmaker Brandon Tauszik embedded with Gochez and other members of Unión Del Barrio throughout the summer for The Intercept. In the documentary film “A City Fights Back: How LA Defends Itself Against ICE,” activists show a multifaceted strategy of opposition. They drive the streets in search of federal agents, monitor highway off-ramps to flag suspicious cars entering their communities, organize protests, and recruit and train new members willing to combat ICE.
For Gochez, a high school teacher and a father, the stakes are increasingly personal.
Ron Gochez at a rally outside a Home Depot in Los Angeles. Photo: Brandon Tauszik/The Intercept
On August 8, federal agents snatched up high school student Benjamin Marcelo Guerrero-Cruz, 18, while he was walking his dog in Van Nuys, days before he was set to begin his senior year at Reseda Charter High School. He remains in ICE detention at a privately owned facility 80 miles away in Adelanto, California. Days later, agents detained at gunpoint Nathan Mejia, 15, outside of Arleta High School before releasing him later that day.
Both Mejia and Guerrero-Cruz are students in the Los Angeles Unified School District, where Gochez teaches. In the film, he reflects on how his fight is intertwined with that of the next generation.
“It’s a constant reminder why we struggle and why we do what we do,” he says, while playing with his son. “One day when we’re no longer here and he’ll be here, and maybe his children, they’ll have a better life than what we had and what our parents had — so we’re fighting for the next seven generations, and he’s next up.”
This project was supported by the Economic Hardship Reporting Project with funding made possible by The Puffin Foundation.
The post The Los Angeles Schoolteacher Leading the Fight Against ICE appeared first on The Intercept.
Community Defense Groups Take the Last Stand Against ICE in LA
Community defense organizers argue that LA’s sanctuary laws aren’t enough to keep their immigrant neighbors safe.Claudia Villalona (The Intercept)
Shein Used Luigi Mangione’s AI-Generated Face to Sell a Shirt
cross-posted from: programming.dev/post/36815160
Pop Crave on X/Twitter
::: spoiler Comments
- Reddit.
:::
The image in question was provided by a third-party vendor and was removed immediately upon discovery. We have stringent standards for all listings on our platform. We are conducting a thorough investigation, strengthening our monitoring processes, and will take appropriate action against the vendor in line with our policies.
Shein Responds After 'Luigi Mangione' Model Advert Goes Viral
A product listing for a shirt, sold by the fast-fashion retailer and modeled by a person who bears a striking resemblance to Mangione, has taken off online.Marni Rose McFall (Newsweek)
like this
adhocfungus, copymyjalopy e Raoul Duke like this.
Pornhub Parent Company (Aylo) Will Pay $5 Million Over Allegations of Hosting Child Sexual Abuse Material
In its complaint, the FTC alleged:
- Aylo allowed the dissemination of CSAM and NCM content on its Tube sites by: allowing anyone, until December 2020, to upload pornographic videos and photos; urging its content partners to contribute content involving “young girl,” “schoolgirl” and similar topics; licensing and owning CSAM and NCM content with titles such as “Brunette Girl was Raped;” and promoting to users playlists of CSAM and NCM content with such titles as “less than 18,” and “the best collection of young boys.”
- Aylo did not maintain, even though it promised to, paperwork required by federal law to verify the age and identity of individuals featured in some of the content posted on its sites.
- Aylo only decided to conduct audits of CSAM and NCM on its sites in 2020 when credit card processors threatened to impose fines or cut off access to their services and media started reporting on the issue. These audits revealed tens of thousands of CSAM and NCM videos. Even then, Aylo routinely ignored or overruled efforts by its compliance team to remove such content. For example, when a credit card processor threatened to fine Aylo for a content partner’s channel titled “PunishTeens” that included “Rape/Brutality,” the company removed the channel from Pornhub and Pornhub Premium but allowed the same content to remain on its other websites.
- Despite promising to quickly review and, if necessary, remove violative content flagged by users, Aylo did not even review content flagged as CSAM and NCM until it received at least 16 flags. It also claimed it would utilize fingerprinting technology to block users from re-uploading CSAM that had been removed, but the technology failed to effectively prevent such content from being re-uploaded to the site.
- Aylo also failed to block individuals who uploaded CSAM despite promising to ban such users. Even when it began taking action against uploaders of CSAM in October 2022, it only prohibited the user from making a new account under the same username or email address but did not prevent them from creating a new account using an alternate email address and username.
The complaint also alleged that Aylo deceived consumers by failing to protect the privacy and security of data—such as their dates of birth, Social Security numbers and government-issued IDs—uploaded by people enrolled in its model program, which included those who appear in their videos.
In December 2020, Aylo announced it would use a third-party vendor to verify the identities of people seeking to participate in its model program and collect, review and secure their ID documents. Aylo, however, failed to disclose that it obtains the data from the vendor and retains it indefinitely. Aylo also told its models that they could “trust that their personal data remains secure” yet failed to use standard security measures to protect the data. For example, Aylo did not encrypt the personal data it stored, failed to limit access to the data, and did not store the data behind a firewall.
The proposed order settling the FTC and Utah allegations imposes a $15 million penalty against Aylo, which will be suspended after payment of $5 million to Utah, and permanently prohibits Aylo from misrepresenting its practices related to preventing the posting and proliferation of CSAM and NCM on its websites. Aylo also will be required to take multiple actions to address the deceptive and unfair conduct outlined in the complaint, including:
- Implement a program to prevent the publication or dissemination of CSAM and NCM content, which must include policies, procedures and technical measures to ensure that such content is not published on its websites and a process to respond to reports about CSAM and NCM content on its websites;
- Implement a system to verify that people who appear in videos or photos on its websites are adults and have provided consent to the sexual conduct as well as its production and publication;
- Remove content uploaded prior to the implementation of the CSAM and NCM prevention program until Aylo verifies that the individuals participating in those videos were at least 18 at the time the content was created and consented to the sexual conduct and its production and publication;
- Post a notice on its website informing users about the FTC’s and Utah’s allegations and the requirements of the proposed order; and
- Implement a comprehensive privacy and information security program to address the privacy and security issues detailed in the complaint.
Technology Channel reshared this.
I’m getting redpilled on the “Trump had a stroke” theory
- YouTube
Enjoy the videos and music you love, upload original content, and share it with your friends, family, and the whole world.www.youtube.com
White House Orders Agencies to Escalate Fight Against Offshore Wind
The effort involves several agencies that typically have little to do with wind power, including the Health and Human Services Department.
like this
adhocfungus e Raoul Duke like this.
This is the critical detail that could unravel the AI trade: Nobody is paying for it.
::: spoiler Comments
- Reddit.
:::
This is the critical detail that could unravel the AI trade: Nobody is paying for it. - r/technology
View on Redlib, an alternative private front-end to Reddit.farside.link
Technology Channel reshared this.
https://lemmy.saik0.com/u/Saik0Shinigami
in reply to abbiistabbii • • •
So much fearmongering and incorrect statements... and I'm only 3 minutes in. I can't...
Nearly everything encrypted on the modern internet today is, in practice, out of reach of quantum attack for the foreseeable future. Breaking RSA-2048 would require millions of physical qubits running error correction. I believe the biggest systems right now are at around 500 qubits at most.
The NIST Post-Quantum Cryptography project has finalized new quantum-resistant algorithms like CRYSTALS-Kyber and Dilithium. These will replace RSA and ECC long before practical quantum attacks exist. Migration has already started.
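To make "migration has already started" a bit more concrete: the deployments happening now are mostly hybrid, i.e. the classical exchange (e.g. X25519) and a PQC KEM (e.g. ML-KEM/Kyber) both run, and the session key is derived from both shared secrets, so an attacker would have to break both. Here's a minimal sketch of that derivation step using only the Python standard library; the two "shared secrets" are random stand-ins, not real handshake output, and the algorithm labels just show what would feed in during a real handshake.

```python
# Minimal sketch of the "hybrid" key-derivation idea used during the
# post-quantum migration: the session key depends on BOTH a classical
# shared secret and a PQC KEM shared secret. The secrets below are
# random stand-ins, not output of a real protocol.
import hashlib
import hmac
import os

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # RFC 5869 extract step: PRK = HMAC-SHA256(salt, input keying material)
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    # RFC 5869 expand step: chain HMAC blocks until enough output is produced
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Stand-ins for the two shared secrets a hybrid handshake would produce.
classical_secret = os.urandom(32)   # e.g. from X25519
pq_secret = os.urandom(32)          # e.g. from ML-KEM-768 encapsulation

# Concatenate-then-derive: the session key is only recoverable if an
# attacker learns BOTH inputs, which is the point of hybrid deployments.
prk = hkdf_extract(salt=b"hybrid-handshake", ikm=classical_secret + pq_secret)
session_key = hkdf_expand(prk, info=b"tls-like session key", length=32)
print(session_key.hex())
```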
Symmetric cryptography is mostly safe. Algorithms like AES, SHA-2, SHA-3, and similar remain secure against quantum attacks. Grover's algorithm can halve their effective key strength. Example: AES-256 becomes as secure as AES-128 against a quantum attacker. To brute-force an AES-128 key with current efficiency you need ~88 TW of power... Even if we make it 10 or 100x more efficient over time... it's too expensive. We don't have the resources to power anything big enough to crack AES-128... The biggest nuclear reactor (Taishan) only puts out a mere 1,660 MWe...
It's not happening in our lifetimes, and probably not at all until we start harvesting stars.
Edit: Several typos.
Edit 2: For the AES-256 example that gets reduced to AES-128: it would take implementing efficiencies that reduce power usage by 1000x (there are a few methods that might get worked out in our lifetimes... let's just take them as functional right now). Then you'd need 55 of the biggest nuclear reactors we have on the planet... Then you wait a year for the computer to finish the compute. That decrypts one key.
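If anyone wants to sanity-check that, the arithmetic fits in a few lines. The joules-per-key-test figure below is an assumption I back-solved so the output lands near the ~88 TW and ~55-reactor figures above; treat it as illustrative, not as a measured benchmark.

```python
# Back-of-envelope check of the numbers above. JOULES_PER_GUESS is an
# ASSUMPTION chosen so the result lands near ~88 TW / ~55 reactors;
# it is not a measured value.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
GUESSES = 2 ** 128              # AES-128 classically, or AES-256 under Grover
JOULES_PER_GUESS = 8e-18        # assumed "current efficiency" per key test
TAISHAN_WATTS = 1.66e9          # ~1,660 MWe, largest single reactor unit

def sustained_watts(joules_per_guess: float) -> float:
    """Average power needed to test every key within one year."""
    return GUESSES * joules_per_guess / SECONDS_PER_YEAR

print(f"Grover effective strength: AES-256 -> {256 // 2}-bit, AES-128 -> {128 // 2}-bit")
print(f"Current efficiency:   ~{sustained_watts(JOULES_PER_GUESS) / 1e12:.0f} TW sustained for a year")
print(f"1000x more efficient: ~{sustained_watts(JOULES_PER_GUESS / 1000) / TAISHAN_WATTS:.0f} Taishan-class reactors for a year")
```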
Weaker keys might be a problem. Sure. But by the time we're there... it won't matter. For things like Signal, Matrix, or anything else that's actively developed... someone might store the conversation in some massive datacenter out there... and might decrypt it 200 years from now. That's your "risk"... long after everyone reading this message is dead.
Edit 3: Because I hadn't looked at it in a few months... I decided to check in on Let's Encrypt's (LE) "answer" to it, since that's what most people here are probably interested in and using. First... remember that Let's Encrypt rotates keys every 90 days. So for your domain, there are four keys a year to crack at a minimum. Except that ACME clients like to renew near the halfway point... so more realistically eight keys a year to decrypt a year's worth of data. But it turns out that browsers already ship post-quantum key exchange... and many TLS providers already support it as well. OpenSSL also supports it from 3.5.0+...
community.letsencrypt.org/t/ro…
developers.cloudflare.com/ssl/…
Apparently LE is even moving to MUCH shorter certs... letsencrypt.org/2025/02/20/fir… 6 days... So a new key every half-week (remember, ACME clients want to renew about halfway through the cycle)... or ~100 keys a year to break. Even TODAY, you're not going to need to worry about "weak" encryption for decades. It will take time for the quantum resources to become available... it will take time to go through the backlog of keys they're interested in decrypting, EVEN IF they're storing 100% of data somewhere. You WILL be long dead before they even have the opportunity to care about you and your data... The "200 years from now" reference above assumes that humans can literally harvest suns for power and break really, really big problems in the quantum field. It's really going to be on the order of millennia, if not longer, before your message to your mom from last year gets decrypted. LE doesn't have PQC on the roadmap quite yet... probably because they understand there's still some time before it even matters, and they want to wait a bit until the cryptography around the new mechanisms is more fully worked out.
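For the curious, the keys-per-year figures above are just division; here's the same arithmetic as a tiny script, using the renew-at-the-halfway-point assumption:

```python
# Keys an adversary would need to break to read one year of recorded traffic,
# assuming the ACME client renews around the halfway point of each cert's lifetime.
def keys_per_year(cert_lifetime_days: float, renew_fraction: float = 0.5) -> float:
    return 365 / (cert_lifetime_days * renew_fraction)

for lifetime in (90, 6):  # today's 90-day certs vs Let's Encrypt's short-lived 6-day certs
    print(f"{lifetime:>2}-day certs -> ~{keys_per_year(lifetime):.0f} keys per year of traffic")
# 90-day certs -> ~8 keys per year of traffic
#  6-day certs -> ~122 keys per year of traffic (the "~100 a year" ballpark above)
```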
Edit4: At this point I feel that this post needs a TL;DR...
If you're scared... rotate keys regularly; the more you rotate, the more keys will have to be broken to get the whole picture... ACME services (Let's Encrypt) already do this. You'll be fine with current-day technology long after (probably millennia after) you're dead. No secret you're hiding will matter 1000 years from now.
Edit5: Fuck... I need to stop thinking about this... but I just want to point out one more thing... It's actually likely that in the next 100 years (let alone thousands), a few bits will rot in whatever copy of your data they're storing on their cluster. So even IF they manage to store it... and manage to get a cluster big enough, one that either takes so little power that they can finally run it... or has a power source that can rival literal suns... a few bits flipped here and there will still happen. Your messages and data will start to scramble over time just by the very nature of... well... nature... Every solar flare, every gravitational anomaly, every transmission from space or stray gamma particle... has a chance to OOPS a 0 into a 1 or vice versa. Think of every case you've heard of where Amazon or Facebook accidentally broke BGP for their whole service and was down for hours... Over the course of 100 years... your data will likely just die, get lost, be forgotten, get broken, etc... The longer it takes for them to figure this out (and science is NOT on their side on this matter), the less likely they even have a chance to recover anything, let alone decrypt it in a timely manner to resolve anything in our lifetimes.
Roadmap Request: Post Quantum Cryptography
Let's Encrypt Community Support
Melvin_Ferd
in reply to • • •
Atherel
in reply to Melvin_Ferd • • •
It's also the only post of this account...
Edit: sorry, I only checked posts; there are multiple comments.
Spuddlesv2
in reply to Melvin_Ferd • • •
MonkderVierte
in reply to • • •
Why this is an issue: add one more qubit to the entangled chain and the whole register becomes that much more likely to collapse; the error probability compounds with every qubit you add.
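A toy way to see the compounding (assuming independent per-qubit error, which real hardware only roughly approximates): the probability that an entangled register survives an operation window with no error shrinks geometrically as qubits are added, which is part of why the error-correction overhead for attacking RSA-sized keys is so enormous.

```python
# Toy model: probability an n-qubit register gets through one operation window
# with no error, assuming each qubit independently fails with probability p.
# The 4,000-qubit row is roughly the logical-qubit scale often cited for
# running Shor's algorithm against RSA-2048 (an assumption for illustration).
def survival_probability(n_qubits: int, per_qubit_error: float = 0.01) -> float:
    return (1 - per_qubit_error) ** n_qubits

for n in (10, 100, 1000, 4000):
    print(f"{n:>5} qubits -> {survival_probability(n):.6%} chance of no error")
```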