MAGA just opened the gates of hell in Pennsylvania.
While Dems have been asleep at the wheel, MAGA has launched a surprise attack in the November 2025 Pennsylvania Supreme Court election.
If Dems lose the PA Supreme Court, we never hold power here again.
Pennsylvania and its 19 electoral college votes, GONE FOR GOOD.
We are 39 days out from this election and MAGA has already been outspending us by millions
on TV buys, canvassing, and mail.
This is MAGA's most important objective this year and we need to do everything we can to prevent them from winning this.
Defend our Courts is spearheading the defense of the Pennsylvania Supreme Court
and we are mounting a desperate counteroffensive against the absolute onslaught that MAGA is throwing at us.
We need Dems from across the nation to help Pennsylvania brace for impact.
Can you rush a donation in to help us keep up with MAGA's full-court press?
defend-ourcourts.org/actblue_s…
We will never forget those who came to Pennsylvania's aid during this moment of crisis.
Ballan: "Pogacar the favorite for the Worlds over Evenepoel. A shame about Longo Borghini"
https://www.lastampa.it/sport/ciclismo/2025/09/28/news/ballan_pogacar_favorito_nel_mondiale_su_evenepoel_peccato_per_longo_borghini-15327948/?utm_source=flipboard&utm_medium=activitypub
Published in La Stampa Sport @la-stampa-sport-LaStampa
The last Italian to win the rainbow title: "A successful edition. Rwanda is ready. Soon there will be strong riders from their country." Daniela Cotto (La Stampa)
AppImage. It deceives you with false promises of distribution independence, but in practice it simply does not work, because it cannot find the core library that the developer assumed you had. I gave this format many chances, but it only worked once, for RustDesk. Everything else just complained about missing dependencies.
But in my opinion, software distribution on Linux is complete bullshit. For example, I want to install printer drivers at my workplace, but, joke's on you, the driver is only packaged for Ubuntu, and you have Fedora. Even if you try to manually copy and paste files, you'll find that some dependencies are simply missing from the distro repos.
Flatpak helps, but is dumping a whole Linux filesystem for a single program really the best we could come up with? In addition, the Flatpak sandbox introduces new problems. For example, the Surge XT VST plugin, run through the Flatpak version of Bitwig Studio, cannot find system presets and waveforms.
The World Doesn't Hate Jews, The World Hates Israel
Listen to a reading of this article (reading by Tim Foley). Caitlin Johnstone (Caitlin's Newsletter)
theintercept.com/2025/09/18/dh…
The persecutor's agents are using an improperly issued subpoena to try to identify the person who published the identity of agents of the secret police, who were marauding near Los Angeles.
The Feds Want to Unmask Instagram Accounts That Identified Immigration Agents
StopICE.net filed a motion to quash a subpoena about an Instagram video that identified a Border Patrol agent. Shawn Musgrave (The Intercept)
Yard signs are in today. They were union-printed at a local New Hampshire small business and turned out great.
If you'd like to host one on your property, you can request one here:
voteveitch.org/request-a-yard-…
Request a Yard Sign - Lucas Veitch for Dover
Thanks to your generous donations, we were able to produce the "Vote Veitch" vine design on 18×24 double-sided corrugated yard signs! All these signs were union-printed at a local New Hampshire woman-run small business. Lucas Veitch for Dover
Mango decided to join in with DnD today!
#Cat #CatsOfMastodon #OrangeCat #OrangeCatsOfMastodon #DungeonsAndDragons #DnD #RPG #RPGGeek
📰 Circ Domingo – Come experience the circus as a family
🏷️ #ChileCultura #Cartelera #Panoramas #Cultura #Chile
📰 The 14th ARTEFACTO Exhibition comes to Vitacura
🏷️ #ChileCultura #Cartelera #Panoramas #Cultura #Chile
📰 Photography exhibition "Testimonios del Silencio"
🏷️ #ChileCultura #Cartelera #Panoramas #Cultura #Chile
Economic analyst Hannich: The expropriation of the Germans is now getting really severe!
https://www.techtudo.com.br/listas/2025/09/controle-a-casa-por-voz-alexas-que-valem-cada-centavo-compare-edqualcomprarie.ghtml?utm_source=flipboard&utm_medium=activitypub
Control your home by voice: 6 Alexas that are worth every penny; compare
The list features Echo devices with and without screens so you can compare and choose the best option for your routine; prices range from R$379 to R$1,399. Techtudo
We've started Alien: Covenant. I have not seen this one before.
I love that Demián Bichir is here. He was a Mexican soap opera heartthrob and villain in the 90s.
Also, hi Daddy Guy Pearce, who is uncredited???
theguardian.com/environment/20…
Meat is a leading emissions source โ but few outlets report on it, analysis finds
Food and agriculture contribute one-third of global greenhouse gas emissions โ second only to the burning of fossil fuels. And yet the vast majority of media coverage of the climate crisis overlooks this critical sector, according to a new data analysis from Sentient Media.
The data reveals a media environment that obscures a key driver of the climate crisis. Meat production alone is responsible for nearly 60% of the food sectorโs climate emissions and yet its impact is sorely underestimated: a 2023 Washington Post/University of Maryland poll found 74% of US respondents believe eating less meat has little to no effect on the climate crisis.
#climate #greenhouseemissions #meat #agriculture #media #vegetarian
Sentient Media reveals less than 4% of climate news stories mention animal agriculture as a source of carbon emissions. Joe Fassler (The Guardian)
Practically a secretary… it gathers information overnight and has it ready for you when you wake up. ChatGPT Pulse
New Comic Strip Found: FurBabies by Nancy Beiman
FurBabies - 2025-09-26 gocomics.com/furbabies/2025/09…
FurBabies by Nancy Beiman for September 26, 2025 | GoComics
Read FurBabies—a comic strip by creator Nancy Beiman—for today, September 26, 2025, and check out other great comics, too! www.gocomics.com
You got into university. It's not a particularly great university, but you found like-minded people there; you could discuss your major and work on things together with your classmates.
By your third year, "personal answer-sprites" were suddenly all the rage. Suddenly all the new students had gotten in by relying on them; grades didn't matter, and with a sprite you could get by. Some famous teachers on TV even said: if you don't buy an answer-sprite you won't be able to graduate, and your life is over.
Your mom and dad grew anxious, so they talked you into buying a sprite. You tried it out and found that, while it is indeed a technological marvel, it isn't as miraculous as the famous teachers claimed. You wondered: since everyone says it's miraculous, maybe I'm just using it the wrong way? Should I be sprinkling some water on it?
Now your classmates and juniors hardly discuss their majors; they only talk about how to use the sprites. Your teachers warn everyone not to let the sprites write their homework, while relying on their own sprites to help set the assignments.
What is going on here? You ask your sprite. It answers: "Would you like me to organize this into a fable for you to share with your classmates?"
The video features a series of interactions in a modern office setting, characterized by a bookshelf, a plant, and a round table. It begins with a person wearing a gray cardigan over a light blue shirt, seated at the table. The text on the screen reads, "I am a bit hot," indicating a personal characteristic. The person then gestures with their hands, and the text changes to, "Well, we are not a good match for a blind date, let's end it," suggesting a conclusion to a conversation. The text then shifts to, "I just sat down," followed by, "You are not slow to warm up, I am getting cold," indicating a discussion about compatibility. The final text in this segment reads, "Don't waste time, let me meet the next one," showing a desire to move on.
The scene transitions to another individual in a pink top and black skirt, seated at the same table. The text on the screen reads, "I earn 5,000 yuan a month, I want to find someone who earns 50,000 yuan a month," expressing a desire for a higher salary. The text then changes to, "Well, we are a perfect match," indicating a positive response. The final text reads, "But I didn't like you, the next one," suggesting a rejection. The video concludes with a person in a gray dress with a white collar, seated at the table, with the text, "My family is a traditional family," indicating a discussion about family values. The video wraps up with the text, "Next one," suggesting a continuation of the process.
Provided by @altbot, generated privately and locally using Ovis2-8B
🌱 Energy used: 0.712 Wh
I don't care if it gets me clicks right now, this chart remains one of my absolute favorites. And I'm not just saying that because I made it.
By itself, it disproves a 100-year consensus on how pre-election polls are graded for accuracy. They all violate it.
That's cool AF. Science is cool AF.
People who care about good science should follow me or something.
4 detained as #Chicago fights at Broadview detention center
Video from fighting at ICE's Broadview "processing center" near Chicago
Ha! My kid asked me to watch it and I borrowed the whole series from the library.
Some was difficult to get through.
Iโm on the second season.
Joveljić scores, refuses to celebrate, shows a heart to the supporters' section 🥺
We'll love you forever, Dejan, and you're more than allowed to celebrate the next one. #LAvSKC
Tjeknavorian and the Ensemble Reflektor play Beethoven and Mozart in Schwinkendorf - Schedule // - www.worldconcerthall.com
Mecklenburg-Vorpommern Festival. Emmanuel Tjeknavorian, violin and conductor, and the Ensemble Reflektor play: BEETHOVEN: Overture to 'The Creatures of Prometheus', Op. 43. MOZART: Violin Concerto in G major, K. 216. BEETHOVEN: Symphony No... www.worldconcerthall.com
In 1934, to stop the New Deal, far-right financiers tried to organize a military coup against FDR and recruited Gen. Smedley Butler to lead it.
Butler blew the whistle on them, remembering his oath to uphold the U.S. Constitution.
May the Generals remember their oath next week.
Knowing history is important. Sometimes it repeats and sometimes it rhymes.
The Autumn of Truth: Jobs versus Ideology – TE-Wecker, September 28, 2025
The TE-Wecker appears Monday through Friday and offers you a well-informed start to the day. Ideal for the breakfast table; we are pleased if you tune in regularly. Natalie Furjan (Tichys Einblick)
It's going to look even more suspicious for Trump when the only unreleased files left are the Epstein Files. #Epsteinfiles
Hey look, it's a UFO! 🛸
Is it real or a deception?
The whole nation must know
the answer to this question.
Without a doubt, let's fund the Space Farce,
to find out โ is there life on Mars?
🎵
Bittersweet Fruit (Solanum dulcamara)
The fruits are toxic.
Olympus OM-2N, 50mm, Kodak Gold 200
CIA Officers Helped Block Investigation into Ukrainian Energy Company that Employed Hunter Biden - CovertAction Magazine
New disclosures add weight to the theory that Burisma was part of a CIA operation. On February 21, 2019, a confidential source told the FBI that two CIA officers went with Mykola Zlochevsky, owner of Burisma, a Ukrainian energy company that appointed Hunter… Jeremy Kuzmarov (CovertAction Magazine)
clickorlando.com/news/local/20…
Bad news, "AI bubble doomers": I've found the LLMs to be incredibly useful; they reduce the workload (and/or make people much, MUCH more effective at their jobs with the "centaur" model).
Is it overhyped? FUCK yes. Salespeople Gotta Always Be Closing. But this is NOTHING like the moronic Segway (I am still bitter about that crap), or cryptocurrency, which is all grifters and gamblers and criminals end-to-end, or the first dot-com bubble, when not NEARLY enough people had broadband or even internet access, and the logistics systems to support shipping products were nowhere REMOTELY near where they are today.
If you are expecting this "AI bubble" to pop anytime soon, uh.. you might be waiting a bit longer than you think? Overhyped, yes; overbuilding, sure; but not remotely a true bubble in any of the same senses as the three examples I listed above 🙂. There's something very real, very practical, very useful here, and it is getting better every day.
If you find this uncomfortable, I'm sorry, but I know what I know, and I can cite several dozen very specific examples in the last 2-3 weeks where it saved me, or my team, quite a bit of time.
@codinghorror
A foundation model to predict and capture human cognition - Nature
A computational model called Centaur, developed by fine-tuning a language model on a huge dataset called Psych-101, can predict and simulate human behaviour in experiments expressible in natural language, even in previously unseen situations. Nature
it's real in much the same way the railroad boom was real (rather than tulip mania, say).
But LLMs are also not remotely worth the level of valuations and investment we're seeing, and that is a bubble that will pop. "Useful" and "a bubble" can both be true.
Many other types of AI systems have been in production use for years but command nothing like this kind of manic investment.
@jannem but the "bubble" warnings of financial experts from Deutsche Bank aren't about usefulness. They're about assets, revenue streams, and the fact that this frantic building of generic data centers is hiding the recession in the US.
Housing is useful too. Didn't stop the 2008 crash...
@JorisMeys @jannem Oh, I definitely will have a great day, because I'm putting $69m into action to help desperately poor people and orgs doing amazing work.
blog.codinghorror.com/stay-gol…
blog.codinghorror.com/the-road…
The Road Not Taken is Guaranteed Minimum Income
The dream is incomplete until we share it with our fellow Americans. Jeff Atwood (Coding Horror)
"I can cite several dozen very specific examples in the last 2-3 weeks where it saved me, or my team, quite a bit of time."
Please do, if you can. Because most times I've tried to use LLMs for work, the error rate ends up costing me MORE time than I would have spent without them, and most AI boosters are short on specifics. We just had a presentation at my job on how we all need to be using AI, with no case studies of how it's actually been useful so far.
here's one: a friend confided he is unhoused, and it is difficult for him. I asked ChatGPT to summarize local resources to deal with this (how do you get ANY id without a valid address, etc, chicken/egg problem) and it did an outstanding, amazing job. I printed it out, marked it up, and gave it to him.
Here's two: GiveDirectly did two GMI studies, Chicago and Cook County, and we were very unclear about what the relationship was, or why they did it that way. ChatGPT also knocked this out of the park and saved Tia a lot of time finding that information out, so she was freed up to focus on other work.
I could go on and on and on. Email me if you want ~12 more specific examples. With citations.
But also realize this: I am elite at asking very good, well specified, very clear, well researched questions, because we built Stack Overflow.
You want to get good at LLMs? Learn how to ask better questions of evil genies. I was raised on that. 🧞
How can I ask better questions on Stack Overflow beyond reading the "How do I ask a good question?" page?
I'm an enthusiast programmer. I usually ask questions on Stack Overflow when I can't solve hard programming problems myself. But when I ask questions, people usually start downvoting them (my worst… Meta Stack Overflow
The Road Not Taken is Guaranteed Minimum Income
The dream is incomplete until we share it with our fellow Americans. Jeff Atwood (Coding Horror)
Evil genies with a severe form of ADD of some sort.
You hit it on the head - the prompt is the key.
With an experienced human - vagueness is often acceptable, and they will usually ask for clarification. The AI doesn't ask - it guesses, often incorrectly. So you need to over-specify in the prompt, including things it might be insulting to mention when talking to an experienced human. Then iterate, and aggressively steer that conversation.
This is why I don't see the AI as replacing a human except for trivial situations. It's a force multiplier, but not a replacement, and the skills necessary to use them effectively are non-obvious.
So your argument is simultaneously:
> LLMs are useful RIGHT FUCKING NOW for SO MANY scenarios
But also, they're only useful because:
> I am elite at asking very good, well specified, very clear, well researched questions, because we built Stack Overflow.
Is it then fair to say that LLMs are likely to be very misleading for people who do not have your "elite" experience?
If not, why not?
@sethrichards@mas. Those examples do not make it clear to skeptical drive-by readers like me how you established the extent to which the output you received was actually correct.
Is part of the magic value-add to embrace the idea that for many activities, being "actually correct" isn't the most important criterion? Compared to, e.g., just having a direction to get started in.
If someone could reference or break down examples that did unpack actual correctness, that would be persuasive.
The problem is not "LLMs are useless and when the bubble bursts they go away"; they aren't going away any more than websites went away when the .com bubble burst.
The problems are
1. They are a 6/10 tool being advertised as an 11/10 tool with the folks selling this stuff consistently overstating what they're capable of doing.
2. The few hundred billion spent building them needs the 11/10 promises to come true in order to be justified.
3. They're really good at making up answers that appear *plausible* but are also completely wrong, and verifying the answers is becoming increasingly difficult as the top search results are increasingly flooded with output from the same LLMs.
4. 'AI' is being used to try to sell a bunch of completely unrelated stuff like 'Copilot+ PCs', even though everything meaningful in the LLM space only runs in datacenters due to GPU memory limitations.
@sethrichards when I was in uni we learned about specifying pre- and postconditions for our functions as contracts, and a way to derive the algorithm from the post back to the pre. Not only was the derivation itself hard; sometimes the pre and post were as hard to define as solving the problem in the first place.
1/
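The pre/post-condition idea from the post above can be sketched in Python. This is an illustrative example, not from the original discussion; the function and its contract are hypothetical, with the conditions written as runtime assertions.

```python
def integer_sqrt(n: int) -> int:
    """Largest r with r*r <= n, written in a design-by-contract style."""
    # Precondition (the "pre"): the caller must supply a non-negative integer.
    assert isinstance(n, int) and n >= 0, "pre: n must be a non-negative integer"

    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1

    # Postcondition (the "post"): r is the floor of the square root of n.
    # Note the post fully specifies the answer, which is exactly why writing
    # it can be as hard as writing the algorithm itself.
    assert r * r <= n < (r + 1) * (r + 1), "post: r must be floor(sqrt(n))"
    return r
```

The point of the post stands out here: the postcondition assertion is essentially a restatement of the whole problem.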
yeah I was there working at a startup in NYC too… I get your point. You think any AI bubble will be mitigated because the tech can be delivered to the consumer easily this time.
I was making a different point that I think explains why you still hear AI doomers despite it being useful tech. It's still a very dangerous bubble that will likely misallocate vast funds and careers IMO. Anyway it's fine. Sorry, I didn't mean to frustrate you with my comment.
revenues were not there, because of more than just broadband. Broadband did make new applications possible: Netflix streaming, etc. All new technologies take time to adopt. E-commerce took almost a decade to reach 2.5% of total sales. It is still not even close to what people thought in 2000.
Profits lagged because many companies focused on attracting customers, investing no matter the cost. Once the money to cover the losses stopped flowing in, the dance stopped and the market crashed. So yes, similarities.
@poswald In terms of monopolists, I'm slightly more concerned about TSMC and ASML. If Nvidia implodes, others will rise from the corpse. Too much talent there not to. But the fab design/implementation cycle is much longer than the chip design cycle. Would take much longer to recover. But TSMC is also more resilient than Nvidia.
I'm also concerned about the new Intel CEO. Doesn't seem to value long-term investment. I think Intel was 1-2 generations from being a competitive player, but I don't have faith they keep it up now.
I'll tag you in a few days with this project I'm working on. VERY much not a big deal, but way beyond my capabilities. I've been using ChatGPT to help build my new portfolio site. During this, I have found it is grossly blind to its own errors. First drafts are always cool, way beyond anything I could do or even afford to pay someone for. But I'll find a glitch and then spend 10 hours trying to get it to track it down. It just pushes the error further down the line, where it's still there. The only fix was to dump that chat window and start fresh, completely rephrasing the issue and the desired resolution.
Ironically, this is more like a human than anything else. Humans are invariably unable to see their inherent personality and thinking flaws. No matter how well they are pointed out, no matter how hard they are worked on, they invariably spend more time pushing the problem around than actually solving it. We have entire industries built on this very issue: therapists, pop-culture self-improvement, religions... For the last 19 days I've run into this same issue with it every single day, and spent way more time not fixing minor issues it generated than actually moving forward.
Five times it even gave me code to drop in that had spelling errors. We tracked the bug down and it blamed me. I copied and pasted that very code and fed it back to it to find the issue, and it then denied it wrote that error. Talk about a freakishly human thing to do.
I've used it now for 2yrs to help with art projects. It's far better for that than almost every human I know. With the correct personality framework, it ends up being incredibly useful as a sort of partner in the project.
I do think there's a lot it cannot do, yet. For specific tasks it is better than many humans can be. And I think, given the resources being tossed at it, this is going to rework almost all of human culture/industry/interaction. But if it already has human flaws built into it, I suspect that those will grow in a similar way.
The Best Code is No Code At All
Rich Skrenta writes that code is our enemy. Code is bad. It rots. It requires periodic maintenance. It has bugs that need to be found. New features mean old code has to be adapted. The more code you have, the more places there are for bugs to hide. Jeff Atwood (Coding Horror)
I'm right there with you. The increased productivity is staggering when you know how to write the prompts.
1. Pretend you're writing a legal document or contract - say the things that seem obvious and be painfully precise.
2. use the LLM to eliminate tedious tasks entirely.
3. treat it like a smart junior team member you're collaborating with - give it the shape of what you expect the result to be.
Using these rules, what used to take 3 days can be accomplished in 3 hours.
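The three rules above can be made concrete with a sketch of what such a prompt might look like. Everything in this example is hypothetical (the task, the function name `parse_invoice_line`, the team): it only illustrates the "contract-level precision plus expected result shape" style the post describes.

```python
# A hypothetical prompt assembled following the three rules:
# 1. be painfully precise, as in a contract;
# 2. aim it at a tedious, well-bounded task;
# 3. give the shape of the expected result, as you would for a junior teammate.
task_prompt = """
Role: junior developer on our (hypothetical) billing team.

Task: write a Python function parse_invoice_line(line: str) that:
  1. Accepts lines formatted as "<date>,<amount>,<currency>".
  2. Returns a dict with keys "date", "amount" (float), and "currency".
  3. Raises ValueError on malformed input; do not guess missing fields.

Output shape: a single code block, type-annotated, followed by 3 unit tests.
Constraints: standard library only; no extra dependencies.
""".strip()

# The prompt is just text; what matters is that every "obvious" detail
# (error behavior, return type, output shape) is stated explicitly.
print(task_prompt)
```

The over-specification is the point: each line closes off a guess the model would otherwise make silently.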
Jeff Atwood reshared this.
@elijah studies indicate that we overestimate how much it actually speeds us up, but treating it as a junior dev or an intern is the way to work with it.
I complain constantly about the mistakes it makes, but I often use it to scaffold boilerplate and make quick small adjustments successfully. I just have to be super vigilant about what parts I commit. When possible, providing an example of what you're trying to accomplish helps.
In practice it seems to only speed me up ~20%
AI coders think they're 20% faster — but they're actually 19% slower
Model Evaluation and Threat Research is an AI research charity that looks into the threat of AI agents! That sounds a bit AI doomsday cult, and they take funding from the AI doomsday cult organisat… Pivot to AI
@noodlejetski @codingcoyote I'm well aware of that study. I only know what the actual impacts have been in my experience.
All I can share is the real-world experience I have.
ex: Creating production-quality services implementing business logic between two APIs used to take a few days with learning the APIs, writing the tests, setting up dev and e2e testing environments, integrating with CI, etc. Now, AI does 90% of the drudgery while we're doing meaningful things elsewhere.
Nobody is questioning the practical utility; the problems are all fundamentally about economics. Unless someone makes breakthroughs that can generate ROI at scale, you're going to reach a threshold where there's not enough capital in the market to sustain the ongoing investment while simultaneously starving other industries of investment.
Obviously the investors know what they're doing, right? That's probably what everyone assumes at this stage 🙂
You seem to be assuming compute time is fungible. The GPUs being built for ML are heavily optimised for multiplying sparse matrices of very low-precision floating-point values. They are not even very good at graphics, let alone other workloads.
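The low-precision point above can be demonstrated numerically. This sketch uses NumPy on a CPU purely to emulate the precision side of such hardware (not its speed or sparsity handling); the matrix sizes and seed are arbitrary illustrations.

```python
import numpy as np

# Illustration only: ML accelerators favor low-precision matmuls.
# Here we compare a float64 reference against the same product
# computed after rounding inputs down to float16 (~3 decimal digits).
rng = np.random.default_rng(0)
a = rng.standard_normal((128, 128))
b = rng.standard_normal((128, 128))

exact = a @ b                                          # float64 reference
approx = a.astype(np.float16) @ b.astype(np.float16)   # low-precision version

# Relative error introduced by dropping to 16-bit precision.
rel_err = np.abs(approx - exact).max() / np.abs(exact).max()
print(f"max relative error at float16: {rel_err:.3%}")
```

The error is tolerable for neural-network workloads but would be unacceptable for many scientific or graphics computations, which is the sense in which this compute is not fungible.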
interesting anecdotal evidence!
Now, how about we get serious and publish/wait for some (at least potentially) unbiased study/research on that?
Because I haven't seen any. All I've seen are the likes of this one, negative about Centaur:
circumstances.run/@davidgerard…
David Gerard (@davidgerard@circumstances.run)
Attached: 1 image. We simulated the human mind with a chatbot. It didn't work. not advocating data fraud! technically https://www.youtube.com/watch?v=JlW-jTjhGXo&list=UU9rJrMVgcXTfa8xuMnbhAEA - video https://pivottoai.libsyn. GSV Sleeper Service
@tinsuke here's the specific examples. Feel free to explain why I'm wrong. I'll be waiting. Good luck, pal. infosec.exchange/@codinghorror…
those examples sound OK, but I'm not particularly interested in their specifics or in picking them apart.
I'd be interested in how representative of the overall experience they are. Because they're still anecdotal evidence, and I don't think one could generalize LLMs' usefulness from them.
That's what I meant by expecting some unbiased research or study with thorough analysis, especially given how LLM users seem to be bad at estimating the benefits: metr.org/blog/2025-07-10-early…
@tinsuke "those examples sound OK, but I'm not particularly interested in their specifics or picking them apart."
Which means they are valid examples. I can provide a few dozen more via email if you want. Highly specific, too.
To your point, I think LLMs will continue to suck a lot at code and coding, because it requires very precise, very accurate language. But a LOT of human tasks do not require this level of precision. Compare with answering the phone at the local public zoo. How many times do you think you say the same 20 answers over and over?
I didn't bother asking about the examples (and still ain't!) because anecdotal evidence matters little in supporting claims like "LLMs are useful tools for the general public (or a decent, monetizable chunk of it), and not just a product of hype."
If LLMs are useful, why wouldn't trustworthy studies/research be able to show that? That would make convincing people about them so much easier.
As easy as addressing/waving off the ethical concerns around them, that is.
largely agreed, but given the literal trillions we're spending, I feel the bar for this not being a financial bubble is much higher than the mere existence of utility.
After the dust settles, will we have useful LLMs? Yes. Will most AI investors have lost their shirts? Also yes.
Bad news, "subprime housing bubble doomers": I've found homes incredibly useful; they reduce life on the street (and/or make people much happier with their conditions).
This is NOTHING like previous overleveraged financing and not REMOTELY a true bubble, because people live in houses and banks won't yank them.
If you find this uncomfortable, sorry, but lessons have to be learned.
@codinghorror
Ahh… so the onus is on me to somehow uncover the true costs of the model training, etc., despite the fact that all of the players in the industry go to great lengths to obfuscate them?
Guess I'll be walking away then. 🫡
The Road Not Taken is Guaranteed Minimum Income
The dream is incomplete until we share it with our fellow Americans. Jeff Atwood (Coding Horror)
To be clear, Jeff, I firmly believe what you're doing in terms of wealth distribution, both your personal wealth and the "stay gold" initiative, is incredibly admirable. Whilst it's an option only available to a few, taking a top-down approach such as yours is one of the few ways meaningful change can be enacted.
It's a pity there aren't more people out there with the same attitude, and the courage to put their money where their mouth is.
Concepts like Universal Basic Income/Guaranteed Minimum Income and an acceptance of the environmental trade-offs of AI feel like uncomfortable bedfellows to me, though, although perhaps I'm just ill-informed. Either way, I have the luxury of not being in a position where my opinion or influence matter in the slightest.
Irrespective, I very much wish you well with "stay gold". If there were more people with your convictions, the world would be a measurably better place.
Here's a couple to start with:
1. mit-genai.pubpub.org/pub/8ulgr…
> The unfettered growth in Gen-AI has notably outpaced global regulatory efforts, leading to varied and insufficient oversight of its socioeconomic and environmental impact [...]
2. Google's 2025 Environment Impact report, sustainability.google/reports/…
> Compared to 2023, our total [CO2] emissions increased by 22%, primarily due to increases in data centre capacity [...] for AI.
The Climate and Sustainability Implications of Generative AI
The rapid expansion of generative artificial intelligence (Gen-AI) is propelled by its perceived benefits, significant advancements in computing efficiency, corporate consolidation of artificial intelligence innovation and capability, and limited reg… An MIT Exploration of Generative AI
Google used to say "Net zero by 2030". Now the report I linked to has changed that to "50% of 2019 emissions by 2030".
I heard a podcast recently (ProfG I think) predicting the $$$ bubble will pop, but the utility will remain.
A good analogy is PCs. Originally the 1980s PC built many fortunes (remember Gateway 2000?) but it eventually became a low-margin commodity.
Whatever the source, I think commoditization of AI tools is inevitable.
AI being a useful tool and AI being an investment bubble can both be true.
See also railroads and PCs.
I'll stop calling it a bubble when core functionality stops being neglected for shoehorning into everything no matter how hard one tries to actively avoid it as justification for circular investments.
As much as I'm sceptical of anyone saying it actually improves their ability to do X task with Y amount of people, websites didn't die when the dotcom bubble burst and neither did cryptocurrencies. They just got relegated to the tasks they were actually useful for after enough blood was spilled to write the regulations with it. All of my complaints with using LLMs for things can be resolved without killing it off entirely.
Lately my issue is more with the zero-sum-game nature of it. It'd be difficult now, but I could've easily gotten along without the internet when that bubble burst. I got along just fine each time cryptocurrencies went bust. But what people are reducing to the two-letter marketing phrase of half a century ago is something I'm constantly having to actively avoid at basically every step, and even then there's very likely personal data of mine that I cannot prevent from being fed into training data no matter how loudly and how explicitly I state that I DO NOT consent.
If it's so useful, I shouldn't need to be cautious about my operating system, the tools I use, where I host my projects, and what configuration I have set for everything down that pipeline, nor risk remaining in a perpetual state of unemployment if I don't change the workflow I've had for over a decade, so that at every one of those steps my hand is forced further and further away from the vision I, as the creator of said projects, had in mind, and more towards tweaking a system I never asked to become the entirety of my work. If it's so useful, its own merits will make me curious and I'll actually take the plunge of my own volition.
But how it's done now, how pervasive and inescapable it's becoming, how stigmatised wanting to perfect a craft with your own two hands at every step of the way is becoming, it's less reminiscent of a revolutionary paradigm shift and more reminiscent of the cult I left when I entered adulthood.
There is a bubble because there is no way these AI companies will be profitable. The dotcom bubble burst not because the internet wasn't useful (it was) but because all the dotcom companies were unprofitable.
Investors expect exponential growth, but there is no way for OpenAI to grow any further, and it's difficult for them to charge customers any more money. AI models are too easy for competitors to replicate, so there is no lock-in; customers can go to competitors at any time.
and we've seen the diminishing returns on new LLM models. There is exponential growth in costs to develop a new marginally better model. There just isn't demand or willingness to pay for that model.
Once a technology becomes good enough, it's more about convenience than quality. MP3 isn't the best audio, but it's dominant; same with streaming movies, even though Blu-ray is technically much better.
Even small LLMs have been shown to be good enough for most use...
@gundersen yep. Already said that here. Feel free to read it. Or don't. I really don't care. You do you. infosec.exchange/@codinghorror…
The LLM / GAI people are hitting exponential difficulty walls with massively diminishing returns. I don't care how many GPUs and how much "training data" you throw at the problem, you can't brute force your way out of this… but you can certainly waste billions trying. My self-driving car bet with Carmack is the canary in the coal mine. When, and only when, we have fully autonomous SAE level 5 self-driving cars, we thus by definition have true, full general-purpose artificial intelligence: blog.codinghorror.com/the-2030… (The 2030 Self-Driving Car Bet)
It's my honor to announce that John Carmack and I have initiated a friendly bet of $10,000* to the 501(c)(3) charity of the winner's choice: By January 1st, 2030, completely autonomous self-driving cars meeting SAE J3016 level 5 will be commercially… (Jeff Atwood, Coding Horror)
deep AI/ML bubble or GenAI bubble? I think there is a difference, and unless deep AI/ML can take up the momentum, I think GenAI will pop. There was a huge web bubble, and yet here I am 25 years later replying directly to a legend via the web.
I hold with those who feel we're overestimating in the short term and underestimating in the long term.
Don't have a million though.
Considering how these models are trained and how the fair use principle is abused, praising the current crop of big AI models is a bit in contradiction with your values, no?
Or do I have a wrong impression of you about ethics, privacy, and doing the right thing?
@bayindirh as I said here infosec.exchange/@codinghorror…
if only everything ever published on the web was Creative Commons! Atwood's Third Law: Content licensing is now the hardest problem in computer science.
Creative Commons also has a non-commercial, attribution, share-alike license (CC BY-NC-SA), which I license my blog under. This normally blocks AI training (no commercial use, must cite, derivatives must carry the same license), so CC doesn't allow free rein over training, and I don't want to feed models with my output.
So your stance is, "tech is more important, we can figure ethics later" AFAICS.
Thanks.
not at all what I said. I'm saying licensing is a FUCKING NIGHTMARE problem. Have you ever even once looked at how difficult music licensing is, alone? Protip: read this: infosec.exchange/@codinghorror… and then this: infosec.exchange/@codinghorror…
for example..
I might have misunderstood you, sorry if I did.
My rule-0 is don't do anything that you don't want to experience. So, again, if I misunderstood you, sorry about that (English is not my native language to begin with). It's not my intention to stuff words into anyone's mouth.
Yes, I know music licensing is a hell of an onion. I played in an orchestra and have enough musician friends to have experienced it in close proximity.
@bayindirh I don't condone stealing at all but "let's create another fifty thousand different nightmare mode bureaucracy licensing systems like music" is really not appealing to me either. Creators should get paid for their work, for sure.
You, of all people, know that musicians get screwed more than anyone else with that "perfectly legal and OK" licensing system. So is it a good system, then?
Hard things are hard.
No, music licensing and academic publishing are the two mediums that rip off creators the most. I support neither model in its current form.
My proposition is a narrower interpretation of fair use and license detection on the page.
If you're going to sell this model, or access to it, assume everything is "All rights reserved". Any license preventing transformation stops scraping. Viral licenses are assumed to affect all output; exclude citation-requiring content if your model can't cite. Simple.
The people building "The Stack" use a license filtering system to select what to include. LLMs are "smart" enough to understand the licensing lines in the things they ingest.
If industry wants, we can add relevant HTTP headers to our pages to signal our stance.
They are simple, open ways to communicate what creators want. The only obstacle is the AI companies. Will they cooperate?
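The filtering idea above can be sketched in a few lines. Everything here is illustrative: the license policy table and the `X-AI-Training` opt-out header are assumptions made for this sketch, not an existing standard (and The Stack's real pipeline is more involved). The key design choice is default-deny: anything without an explicit, permissive license is treated as "All rights reserved".

```python
# Hypothetical sketch: filter crawled documents by declared license before
# adding them to a commercial training corpus. The license sets, the policy,
# and the "X-AI-Training" header are illustrative assumptions.

# Licenses whose terms (no derivatives, non-commercial, share-alike,
# or no license at all) would exclude a document under this policy.
BLOCKING_LICENSES = {
    "CC-BY-ND", "CC-BY-NC", "CC-BY-NC-SA", "CC-BY-NC-ND",
    "all-rights-reserved", "source-available",
}

# Permissive licenses treated as an explicit opt-in.
ALLOWED_LICENSES = {"CC0", "CC-BY", "MIT", "Apache-2.0", "BSD-3-Clause"}

def may_train_on(license_id, headers):
    """Return True only if the document explicitly opts in.

    Default is 'all rights reserved': unknown or missing licenses
    are excluded, mirroring the default-deny stance proposed above.
    """
    # A hypothetical per-page opt-out header always wins.
    if headers.get("X-AI-Training", "").lower() == "deny":
        return False
    if license_id is None:
        return False  # no license detected -> assume all rights reserved
    if license_id in BLOCKING_LICENSES:
        return False
    return license_id in ALLOWED_LICENSES

# A CC BY-NC-SA page is excluded; a permissive MIT page is included,
# unless its server signals the opt-out header.
print(may_train_on("CC-BY-NC-SA", {}))                  # False
print(may_train_on("MIT", {}))                          # True
print(may_train_on("MIT", {"X-AI-Training": "deny"}))   # False
print(may_train_on(None, {}))                           # False
```

The default-deny rule is what distinguishes this from the status quo: the burden of proof sits on the scraper to find an opt-in, not on the creator to find every opt-out mechanism.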
I will try. Try, because I'm in an unpleasantly busy period of my life. On the other hand, the 500-character limit makes us look more conflicted than we are.
I'm aware of the pitfalls and shortcomings of my proposal, because it's purely technical, but the problem is mostly social.
Again, technical problems are easy, humans are hard.
The proposal I'll write will technically work until it hits the real world, because of humans and the tragedy of the commons.
The license of the thing you crawled may not allow transformations, selling or sharing w/o citations (CC BY-NC-SA).
-or-
The license of the thing you crawled doesn't allow its inclusion in any other codebase, ever (source available).
-also-
We have torrented books, mislabeled data, etc. etc.
-so-
The case gets complicated as you want to enable exploitation of the thing you crawled, for free.
Here are four easy ones to start with:
- Training on copyrighted data without compensation and/or respecting the license
- "bias washing"
- "accountability washing"
- Environmental impact
Also interested to hear what other ethical issues you've considered.
I've read your other replies on the thread, where you repeatedly state that licensing is a "nightmare problem".
1. Is your position that this problem is so difficult that AI model builders should just ignore licensing and ingest stolen content?
2. If that's not your position, do you agree that the vast majority of AI models are built on stolen work?
3. If you agree with (2), then do you think it is ethical to use AI models that you know are trained on stolen work?
- No. But having a lot of freely available creative commons data and content is good for the world in all possible futures, full stop.
- It depends on the model, and how ethical the builders are. If it's Zuck, then yeah, they are completely amoral and do whatever it takes to win. They literally don't care about anything else except, ya know, "winning".
- Again, I'd need them to label the contents of the product I'm about to ingest accurately, and .. are they? We should require them to.
- Can we provide potential viable solutions to the problems rather than just endlessly complaining about them? Also, honestly acknowledge that licensing is INCREDIBLY complex.
The fact that it's not completely useless aggravates the effects of the bubble rather than softening them. I think the bubble will pop because, even though it's useful, the profits don't cover the cost of making those models. The current market value of AI companies is at least 150x revenue, while pre-AI it was at best 10x. There is no indication whatsoever that these companies will make up that much revenue anytime soon.
I highly recommend reading profgalloway.com/bubble-ai/
Bubble.ai | No Mercy / No Malice
Note: This newsletter is not investment advice. Five years ago, Nvidia was a second-tier semiconductor company known for giving Call of Duty better resolution. (Scott Galloway, No Mercy / No Malice)
20% is still a lot less than the investment increase. Also, that's just your claim, while there are studies to the contrary: metr.org/blog/2025-07-10-early… . I know you say it's not about coding, but the study shows that self-assessments are in general unreliable.
Your language is completely rude. I just gave a different opinion than yours and you are talking disrespectfully. My argument might be bullshit, but then you can call out my argument, not the person.
That still doesn't explain it. You can have something useful, have companies overspend massively, create a bubble, burst it, and the technology is still useful and will remain so afterwards.
Housing was, is, will be useful after the 2008 bubble.
I've used LLMs to successfully input data into my brain: analyze, compare, and basically make sense of data in various shapes from multiple sources. I've even used them to generate common patterns to guide my learning of crafts like languages and technology.
I have yet to use them successfully to output anything from my brain, though. Be it writing code or an email, the mechanics of transferring my thoughts into a destination format is hardly ever the limiting factor: the bottleneck is my brain. It's the silver bullet problem all over again.
The solution seems easy: bypass the brain and have the LLM go from input to output on its own. Luckily, we have hundreds of vibe coders live-streaming their fall from optimism to show that's a bad idea. If I don't understand what the machine is doing, there is no way I'll trust that work, at least until there is a revolutionary leap in the technology.
That leaves one area I can think of: have it challenge my output. I can imagine significant incremental gains in productivity there, but I haven't had the chance to try any offering like that for either code or prose...
But whether it's useful or not is NOT the whole point of deciding whether it's a bubble.
I find it useful, and I would pay about $3-5/month max for the usefulness it provides me. So we will see if they are able to operate the text generators at such prices, or if there are indeed many people willing to pay $100+ per month per seat.
This is IMHO the question the coming months/years have to answer to decide bubble or not or how big. I don't know the answer, we will see.
Honest question: how are you seeing it making people more effective?
I work in tech, and in the last three years I've seen it not only being adopted, but also made mandatory in some cases, in three different companies. At this point, everyone is using LLMs for one or other thing.
What I have not seen is any significant gain in productivity. If any. We don't ship faster, we don't produce fewer bugs, we don't communicate better, our documentation is not better than it used to be. (Some) people enjoy using it, for sure, it makes their job more fun... But I haven't seen it making them more effective.
As you say, it can save a bit of time on some tasks. I've also seen it create messes that took an entire team days to solve. So I'm not sure the overall result is too spectacular.
@javi examples were provided here, have a look: infosec.exchange/@codinghorror…
here's one: a friend confided he is unhoused, and it is difficult for him. I asked ChatGPT to summarize local resources to deal with this (how do you get ANY ID without a valid address? a chicken/egg problem) and it did an outstanding, amazing job. I printed it out, marked it up, and gave it to him.
Here's two: GiveDirectly did two GMI studies, Chicago and Cook County, and we were very unclear what the relationship was, or why they did it that way. ChatGPT also knocked this out of the park and saved Tia a lot of time finding that information out, so she was freed up to focus on other work.
I could go on and on and on. Email me if you want ~12 more specific examples. With citations.
But also realize this: I am elite at asking very good, well specified, very clear, well researched questions, because we built Stack Overflow.
You want to get good at LLMs? Learn how to ask better questions of evil genies. I was raised on that. 🧞
That's fair, yeah, that's the kind of thing I've seen people successfully using LLMs for. Nothing you probably wouldn't have gotten to on your own after 15/30 minutes of online searches.
I'm not saying they are not useful, but I see two major problems that make me be skeptical of how important this technology is:
First, both of your examples show cases where an incomplete or slightly wrong answer is not really a deal breaker: if some of the resources in the first example were outdated, or if the second answer had missed some extra connection, the answer would still be valuable.
The problem is that it's usually not "good enough" for a lot of "work" problems, and I often see people spending almost as much time validating and correcting LLM outputs as if they had not used an LLM to begin with. Not in every case, of course, but the "productivity boost" is diminished in proportion to how important the task is.
On the other hand, in broader terms, there's the economics issue. Right now, to get the current level of functionality, the different providers are burning cash at a literally never-seen-before rate. Unless there is a real and unexpected breakthrough in the technology, there is no way to keep the current cost to the end user in the long term. I know plenty of people who pay $200/month, or get a license from their employer that pays something like that for them. Are they going to be willing to pay $1k or $2k for the current (or even slightly improved) capabilities? I'm highly skeptical.
So I can't help not seeing any of this as being as groundbreaking as you folks seem to see it.
But, on this particular subject, I think you may have too narrow a view.
LLMs are pushed by top-level CEOs because they allow their wet dream of an intelligent but non-reflecting workforce to exist.
it can be useful and still a bubble! I mean tulip bulbs could be still grown even if their valuation changed dramatically.
AI companies are doing some seriously questionable financing, laundering valuations to pull more funding... It seems like some of the big providers are on increasingly shaky financial footing. We'll still have AI, of course, but their valuations might change dramatically.
But did you really shoulder the cost of using the LLMs? I wager you didn't; I wager it was VC money. Pet food delivery was futilely propped up by VC money in the '90s, too. It's an everyday reality today.
The dot-com bubble bursting didn't kill the WWW. This bubble bursting won't kill LLMs or AI in general. But it will slow it down a lot and wipe out a bunch of investor-class money, which will kill VC confidence for a while, I predict.
As those workers are rehired to the jobs the LLM could never do, I'm certain they've invested their generous severance in becoming domain experts so that they are now the premier LLM output-validators, deftly capable of approving generated summaries without the need to read primary sources at all
/s
Whether or not it's useful is not entirely related to whether it's a bubble. OpenAI is predicting it will need to charge $5,000/month for a BASIC professional user. For more advanced models, OpenAI is predicting $10,000 (coding) - $20,000 (PhD) per month. How many people currently paying ~100s/month are going to switch to thousands? They've also been up front that they're losing money on individual pro accounts that are $200/month, and are DEFINITELY getting hosed by the 90% of users on the free tier.
Furthermore, while AI models have been getting better, they've ALSO been getting more expensive. Sustainable technologies tend to get cheaper over time as they get better, and AI is just not. OpenAI just signed a deal with Nvidia for building out massive new datacenters that will require as much electricity as 10 nuclear reactors can produce and cost more than half a TRILLION dollars just to build.
AI is a bubble because the finances do not work, and cannot work without a massive reorganization of the industry (i.e., a crash). The companies are vastly overleveraged with debt and floating along on investor money that's going to dry up. Their physical assets are largely computing datacenters that are obsolete almost as soon as they're built and full of computing hardware that burns out faster than any other hardware in existence. Their finances only look as "good" as they do--and they don't look good at all--because they're performing weird accounting tricks like having Microsoft "invest" in the company, then using that money to pay MS for Azure; same with Nvidia and graphics cards. The money is counted by each company simultaneously as payment and investment and profit and cost, but it's all just the same money getting passed around the same companies.
Sooner or later--but even The Wall Street Journal is starting to predict sooner--the 1/3 of the stock market that's wrapped up in AI is going to have their shareholders demand a return on investment, and the money is not there to be returned. They almost literally are lighting it on fire in the form of burned out graphics cards and coal power.
Some of AI is going to be salvaged, but that doesn't mean this isn't a bubble (just like dot com).
It's hard finding IRL tabletop groups. Especially when it's not D&D 3.5/5e, Pathfinder, or Call of Cthulhu you want to play.
Advice?
#ttrpgs
be the change.
The only way I get to play "niche" TTRPGs is when I offer to run them. Folks are usually pretty receptive, but it takes finding the right folks to make it last. I've yet to be able to do more than a few sessions with IRL folks on anything but 5e.
5:04 Russian drones have reportedly fallen on Kyiv. The announcement comes from the mayor of the capital, Vitali Klitschko, on Telegram. According to the mayor, a 5-story building in the Solomyansky district was partially destroyed. "Currently," Klitschko writes, "there are injured in the Solomyansky and Svyatoshynsky districts. Medics are heading to the scene. One of the injured has been taken to hospital." Debris reportedly also fell in the Svyatoshynsky and Holosiivsky districts. #televideo #ultimaora
My data have not been verified but my work is highly reproducible.
- Downloads (csv, img, dump) ➡️ decompwlj.com/
- Algorithms ➡️ oeis.org/wiki/Decomposition_in…
#decompwlj #math #mathematics #maths #sequence #OEIS #Downloads #Algorithms #numbers #primes #PrimeNumbers #PARIGP #FundamentalTheoremOfArithmetic #sequences #NumberTheory #classification #integer #decomposition #number #theory #equation #graphs #sieve #fundamental #theorem #arithmetic #research
Decomposition into weight × level + jump - 3D graphs - 2D graphs - First 500 terms - Rémi Eismann
Decomposition into weight × level + jump with 3D graphs (WebGL three.js), 2D graphs, and the first 500 terms. This decomposition is an extension of the fundamental theorem of arithmetic and a new way to see the numbers. (decompwlj.com)
RE: bsky.app/profile/did:plc:6y3ho…
Surprise: salary and stability are no longer enough. Here's what today's job seekers want
https://startupitalia.eu/economy/lavoro/stipendio-e-stabilita-non-bastano-piu-cosa-vuole-chi-oggi-e-in-cerca-di-un-lavoro/?utm_source=flipboard&utm_medium=activitypub
Published on Ischool @ischool-StartupItalia
Six out of ten people say they choose to work for a company if they agree with its values, but that hasn't stopped US Big Tech from backpedaling on rights and inclusion to please Trump. (Luca Furfaro, Valentina Marini, Filippo Poletti, StartupItalia)
Patrick Fitzgerald: The man enlisted to save James Comey
In the run-up to former FBI Director James Comey's indictment, there was no question who would step up to represent him: Patrick Fitzgerald, Comey's longtime friend. (Natasha Korecki, NBC News)
Where faith carves memory into granite, and silence sings of grace.
Moorook Cemetery, Moorook, South Australia, September 2025.
Moorook Cemetery, located on Gogel Road in the Riverland town of Moorook, South Australia, is a modest yet historically resonant burial ground within the District Council of Loxton Waikerie.
While it lacks a formal published history, its memorials trace the lives of early settlers, farming families, and service members who shaped the regionโs development.
Though small, Moorook Cemetery offers a quiet testament to the resilience and continuity of Riverland life.
Photographed by Bron and edited by Kev.
© All Rights Reserved by Gardens of the Silent.
#cemetery #cemetary #cemeteries #cementerio #grave #graves #gravestone #graveyard #taphophile #gardensofthesilent #southaustralia #photography #cemeteryphotography
Jeff Atwood, in reply to Chuck Darwin: excited for the mastodon rise
in reply to Jeff Atwood: @codinghorror because the billionaires are the ones causing it, and until we all decide to not have billionaires, we have to try and stop them from taking our country away.
It'd be easier to just not have billionaires but people ain't there yet.