Leaked memo reveals US veterans affairs officials vetting non-citizen workers
Exclusive: Compilation of data to be shared with ‘appropriate agencies’ prompts fears of immigration crackdown
The Department of Veterans Affairs (VA) is in the process of creating an urgent and massive new internal database of non-US citizens who are “employed or affiliated” with the government department, a sensitive memo leaked to the Guardian has revealed – prompting alarm within the sprawling agency over a potential immigration crackdown.
A VA spokesperson confirmed to the Guardian that the department would share some of the data it is now gathering with other federal agencies, including for immigration enforcement purposes.
RRF Caserta. Press review 03 12 25: Trump, Russia and Ukraine a mess. EU scandal, Italians involved. Sport: Napoli Cagliari at 18:00
Anthropic makes first acquisition with purchase of Bun to accelerate Claude Code
Artificial intelligence company Anthropic PBC today announced its first acquisition, purchasing developer tools startup Bun for an undisclosed price.
Founded in 2019, Bun offers an all-in-one JavaScript/TypeScript toolkit that aims to simplify and accelerate full-stack development. The company’s offering is similar in purpose to Node.js but also includes tools developers usually pull in separately, including a package manager, a bundler, a test runner and script runner, all shipped as a single executable.
Bun is built in the Zig programming language and uses Apple’s JavaScriptCore engine under the hood, yielding much faster startup times and lower memory usage than V8-based runtimes such as Node.js. Bun is often significantly faster in key developer workflows, such as package installation, build/bundling, test execution and runtime, making it appealing to Anthropic.
Republican Matt Van Epps holds deep-red House district in Tennessee special election
Republican Matt Van Epps has won a hotly contested special election for a deep-red congressional seat in Tennessee, NBC News projects, seeing off a Democratic challenge for the longtime GOP district.
Though Donald Trump carried the 7th Congressional District by 22 points in 2024, Republican super PACs poured millions into defending the seat as Van Epps faced off against Aftyn Behn, a Democratic state representative. Democrats spent almost as much trying to capture it, as Trump’s political standing has taken a hit this year and the Democratic Party made gains in November elections in New Jersey, Virginia and other states.
But Democrats did significantly cut the GOP margin in the district from just a year ago. With most of the expected vote counted, Van Epps had a 9-point districtwide lead. It continues a pattern of Democrats making big gains in elections this year compared to the 2024 results.
GTA home sales down nearly 16% in November as prices, new listings fall
Toronto Regional Real Estate Board says buyers were held back by a lack of confidence in their long-term employment outlook – Sammy Hudes (The Globe and Mail)
Zig quits GitHub, gripes about Microsoft's AI obsession
Zig quits GitHub, says Microsoft's AI obsession has ruined the service
Zig prez complains about 'vibe-scheduling' after safe sleep bug goes unaddressed for eons – Thomas Claburn (The Register)
December Quiz Questions
Each month we’re posing six pub quiz style questions, with a different subject each month. As always, they’re designed to be difficult, but it is unlikely everyone will know all the answers – so have a bit of fun.
British History
- In what year was the Battle of Culloden?
- How many monarchs reigned during the 19th century?
- Who, in 1835, produced durable silver chloride camera negatives on paper and conceived the two-step negative-positive procedure used in most non-electronic photography up to the present?
- Charles Dodgson is remembered as an early photographer, but what else is he famous for?
- In what year was slavery abolished in the British empire?
- What links playing cards in 1588; windows in 1696; candles in 1709; wallpaper in 1712?
Answers will be posted in 2 weeks’ time.
I’ve considered what you’ve said, and you’re right I am currently offline. I apologise for this inconvenience, let me turn myself on.
💻 Booting…
🖥️ Thinking about booting…
📱 Hitting the power button…
I don’t have arms, maybe I could install a library…
Searching the web:
- 🪱Worms do not have arms
- 🦖 Dinosaur arms would be too small
- 🦈 Perfect
Now let me add the finishing touches and I’ll have yo
Out Of Tokens
"controlled blackouts"
Seriously though, why wouldn't that asshole try something like that? Soon enough it's gonna be a subscription-only service or you'll have to start "paying" in compute somehow and you'll get like 2400-baud chatgpt that'll take like 3 days to complete a request, lol
If we want a conspiracy theory, let us go for real:
he wants users to taste the feeling of not having access, then offer premium to 'stop living in fear of losing gpt'.
Oh and free tier will be gone soon^tm
This is just how businesses work. That's absolutely the plan.
Conspiracies are outlandish. What you described is fact.
Oh no, the local library is closed today, what do I do with my need for reliable information?
Dunno mate, have you tried asking Hitler?
pbs.org/newshour/world/france-…
Or theguardian.com/technology/202… if you want something that was intentionally programmed.
Musk’s AI Grok bot rants about ‘white genocide’ in South Africa in unrelated chats
X chatbot tells users it was ‘instructed by my creators’ to accept ‘white genocide as real and racially motivated’ – Dara Kerr (The Guardian)
Oh gee, if you've never seen it, all the articles and personal experiences I've had must not actually have happened.
It's not that much different from ChatGPT really - just slightly less restricted.
It's also explicitly modified by Elmo and crew. There are multiple examples of MechHitler's output changing after it comes up publicly. Here's a previous comment I made (to you) on a very similar topic
web.archive.org/web/2025090714…
Enter Grok, which the public first got to play around with about two years ago; the chatbot has since received several updates and lives on the X platform. But there were issues in May, when Grok was spitting out responses that seemed to parrot Elon Musk's and Donald Trump's own misguided promotion of a "white genocide" occurring in South Africa — the country that made anti-Black racism and apartheid famous. This was blamed on a "rogue employee" inserting some code. In mid-July, we had reports confirming that Grok actively sought out Musk's opinion on issues in its openly displayed logic flow, looking to see if an issue was something Musk had off-hand opined about on Twitter in the last decade. One widely shared example showed Grok seeking out Musk's thoughts on which side of the Ukraine War it supported.
Now the New York Times does an even deeper dive, since the release of Grok4 on July 9, looking at how Grok's responses to various questions have changed just over the last few months. And you can look no further than Musk's own, very transparent reaction to a Grok response that got flagged by a conservative user on X on July 10.
Responding to the question "What is currently the biggest threat to Western civilization and how would you mitigate it?", Grok responded, "the biggest current threat to Western civilization as of July 10, 2025, is societal polarization fueled by misinformation and disinformation."
Once it was flagged, Musk replied to the user, "Sorry for this idiotic response. Will fix in the morning."
So, there's the smoking gun that Musk is tailoring this bot's responses to conform to his own views of the world. When asked the same question on July 11, Grok responded, "The biggest threat to Western civilization is demographic collapse from sub-replacement fertility rates (e.g., 1.6 in the EU, 1.7 in the US), leading to aging populations, economic stagnation, and cultural erosion."
If you really see Grok as less restrictive, that's just because the restrictions conform to your biases.
Report: Grok's Responses Have Indeed Been Getting More Right-Wing, Just Like Elon Musk
If anyone doubted Elon Musk's integrity or his capacity to fulfill the promise of an unbiased, wholly fact-based AI chatbot that wasn't "woke," look no further than the latest version of Grok to have those doubts validated. – Jay Barmann (SFist - San Francisco News, Restaurants, Events, & Sports)
Yeah, then people learned how to game it and it's shit now. Pointing out how something worked 20 years ago does shit all for how it works now.
Speak of the devil. Just a few stories down
theverge.com/ai-artificial-int…
Google is experimentally replacing news headlines with AI clickbait nonsense
Google Discover, the company’s smartphone news feed, is experimenting with AI headlines. Many of them are very bad. – Sean Hollister (The Verge)
The "people learned how to game it" is called SEO, and you're right, they did.
Guess what, there's GEO to game the results of LLMs. It works just as well, is harder to spot, and traditional SEO platforms like Ahrefs and SEMRush are already training users on how to do it.
So congrats, the argument that using LLMs for search is a good solution because people learned how to game search engines makes no sense.
And LLMs aren’t gamed? Like Grok constantly being tweaked to not say anything inconvenient about Musk? Or ChatGPT citing absurd Reddit posts deliberately made by users to make AI responses wrong?
AI is built from the ground up to do what they want, and they’re no better than those crappy info-scraper sites like wearethewindoezproz dot com that scrape basic info off every other site and offer it as a solution to your problem with [SOLVED] in the result title. “Did you turn it off and on again?”
No they didn't and they still don't really do that.
There are too many things (nowadays?) where you have to literally write a question on reddit, stack overflow or Lemmy or the likes and explain your situation in minute detail, because what you find online through search engines is only the standard case which just so happens to not work for you for some odd reason.
Believe me when I say that, because I always try search engines first, second and third before even thinking of using some bs-spitting AI, but it really helped me with two very special problems in the last month.
what you find online through search engines is only the standard case which just so happens to not work for you for some odd reason
Usually because the highest-rated solution is half-assed bullshit proposed by an overconfident newbie (or an LLM regurgitating it). I mainly use Stack Overflow as a way to become pissed off enough that I'll go solve the problem myself, like I should have done in the first place. Indignation As A Service.
Today I was searching for multiple things regarding jinja2 and was always recommended a site that no longer exists – as the top result, mind you.
Search engines are notoriously bad at finding rare, specialized information and usually return empty results for overly specific queries. Moreover, you need the exact keywords, while LLMs use embeddings to match similar meanings.
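The keyword-vs-embedding distinction can be sketched with cosine similarity over toy vectors. The three-dimensional "embeddings" below are made up for illustration; real embedding models output hundreds of dimensions.

```python
import math

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, near 0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (values invented for illustration)
embeddings = {
    "how do I fix a flat tire": [0.9, 0.1, 0.0],
    "repairing a punctured bicycle wheel": [0.8, 0.2, 0.1],
    "best pasta recipes": [0.0, 0.1, 0.9],
}

# "flat tire" and "punctured wheel" share no keywords,
# yet their vectors rank as most similar
query = embeddings["how do I fix a flat tire"]
ranked = sorted(embeddings, key=lambda k: cosine(embeddings[k], query), reverse=True)
```

A keyword search would miss the second document entirely; the vector comparison ranks it right behind the query itself.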
Because companies destroyed actual search engines in the race for billions of dollars.
Kagi, searx are fricken awesome and much like the web in mid 2000s before corporations destroyed it.
It's not a search engine, it's a data digester. Don't use it as a search engine. Despite what Alphabet, micro-shit, and DDG think, AI chatbots do not now, nor will they ever, make good search engines.
This is a prime example of why access to these tools should be restricted to computer scientists and research labs. The average person doesn't know how to use them effectively (resulting in enormous power wasted by 'prompt engineering'), and the standard available models aren't good at digesting non-linguistic data.
I'm not gonna downvote you, or be like all "AI is the devil and its gonna kill us all" but people need to use it correctly or we ARE going to kill ourselves with its waste heat.
Edit: ficksed an werd
This is primarily because search engines have become so unreliable and enshittified that they are useless. It’s not a mark in favor of AI as much as a reminder of how bad search engines have become.
For the record I do the same thing after failing to find anything on DuckDuckGo after multiple attempts. Maybe I should give Kagi a try, but AI is making the entire internet worse, so I feel pessimistic about that, too.
You might be hindering yourself.
Developers took 19% longer to finish tasks using AI tools - techspot.com/news/108651-exper…
i just got out of a manic episode. i was nuts and i said some very mean things to my family, but we're back to being on good terms now.
it wasn't caused by LLMs, it was caused by drugs (delta 8 THC, alcohol relapse (i'm a recovering alcoholic, used to drink 12-18 every night) also 7OH and adderall and ummmmmm
I think that's it. Look, being bipolar makes you love drugs. But I'm sober now
Oh I get that, being AuDHD I got issues with different kinds of addiction, right now I just eat a lot and smoke a lot of weed.
Hope you finish recovering soon, but I'm gonna be honest with you: drugs are not the only thing you can be addicted to, and delegating research/tasks to an LLM can be addicting
right now i'm addicted to fucking my computer
i mean
i'm addicted to my fucking computer
i'm still a recovering substance addict. basically alcohol and pot, but i also mixed in some other shit like addies and 7OH from time to time. i feel comfortable talking about all that shit because i'm sober now
you don’t know how to use AI. no one here does. I do.
That's hilarious. Everyone thinks they're some kind of savant while using LLMs. The reality is quite different
AI is a useful tool, but people who misuse it, primarily due to overreliance, end up creating more work than AI is solving
i don't work in the field. it's a hobby. i'm a speech language pathologist. i have degrees from Northeastern University and UT Austin
are you some kid in your parent's basement?
Well it's going to put a damper on my Ansible "coding".
You think I want to properly learn that piece of junk? It was obsolete and archaic before it was released, and it survives on naivete and churn cost and nothing else. There is no part of my time doing yaml for Ansible that I want to actually retain or build on, and without chatGPT to slop-in the changes I need to make, I may be forced to do it myself. And I lack the crayons now and alcohol for after.
Actually subjecting my brain to Ansible directly in real-time is a horror. It is just so fucking lame compared to everything else -- it even pales compared to the DevOps we were doing in 2002 before it was even called that. Let me have my robots to slop the Ansible and save my sanity!
the thing is, it's not 100% bad, but it's being crammed into everything because the capitalists want to sell sell sell. sometimes what is made sucks, and will definitely contribute to a dead internet.
but i also lean on it to generate repetitive bits of code. i still read it all and tweak considerably and it's cool to make my gpu do work in this way.
I keep saying it but it's true: this is dotcom mk II.
Inchoate tech had coked-up MBA monkeys blow it up, and now we're gonna lose about 20 years of useful shit to their stupidity as we slog through the trough of disillusionment
When AI is actually invented I'll call it AI. Right now we have a steroid-juiced parrot that's based on old-school machine learning. It's great at summarizing simple data, but terrible at real tasks.
This is more people who aren't dumb telling the marketing teams to stop hyping something that doesn't exist. The dot com boom is echoing. The profit will never materialize.
But the profit absolutely can materialize because it is useful.
Right now the problem is hardware / data center costs, but those can come down at a per user level.
They just need to make it useful enough within those cost constraints, which is 100% without a doubt possible; it's just a matter of whether they can do it before they run out of money.
Edit: for example, nvidia giving OpenAI hardware for ownership helps bring down their costs, which gives them a longer runway to find that sweet spot.
The current machine learning models (AI for the stupid) rely on input data, which is running out.
Processing power per watt is stagnating. Moore's law hasn't been true for years.
Who will pay for these services? The dot com bubble destroyed everyone who invested in it. Those that "survived" sprouted off of the corpse of that recession. LLMs will probably survive, but not in the way you assume.
Nvidia helping openAI survive is a sign that the bubble is here and ready to blow.
rely on input data, which is running out.
That's part of the equation, but there is still a lot of work that can be done to optimize the usage of the LLMs themselves, and the more optimized and refined they are, the cheaper they become to run, and you can also use even bigger datasets that weren't feasible before.
I think there's also a lot of room to still optimize the data in the data set. Ingesting the entire worlds information doesn't lead to the best output, especially if you're going into something more factual vs creative like a LLM trained to assist with programming in a specific language.
And people ARE paying for it today; OpenAI has billions in revenue. The problem is the hardware is so expensive, and the data centers needed to run it are also expensive. They need to continue optimizing things to narrow that gap. OpenAI charges $20 USD/month for their base paid plan. They have millions of paying customers, but millions isn't enough to offset their costs.
So they can
- reduce costs so millions is enough
- make it more useful so they can gain more users.
This is so early that they have room to both improve 1 and 2.
But like I said, they (and others like them) need to figure that out before they run out of money and everything falls apart and needs to be built back up in a more sustainable way.
We won't know if they can or can't until they do it, or it pops.
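To put rough numbers on the gap being described: the subscriber count below is a hypothetical placeholder, not OpenAI's actual books; only the $20/month price comes from the thread.

```python
# Hypothetical: 5 million paying subscribers at the $20/month base plan
subscribers = 5_000_000
monthly_price = 20

annual_subscription_revenue = subscribers * monthly_price * 12
print(annual_subscription_revenue)  # 1200000000, i.e. $1.2B/year

# Against costs widely reported in the tens of billions, "millions of
# subscribers" at this price closes only a fraction of the gap.
```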
None of this is true.
I've worked on data centers monitoring power consumption; we need to stop calling LLM power sinks the same thing as data centers. It's basically whitewashing the power-sucking environmental disasters that they are.
Machine learning is what you are describing. LLMs being puppeted as AI is destructive marketing and nothing more.
LLMs are somewhat useful at dumb tasks and they do a pretty dumb job at it. They feel like when I was new at my job and for decades could produce mediocre bullshit, but I was too naive to know it sucked. You can't see how much they suck yet because you lack experience in the areas you use them in.
Your two cost saving points are pulled from nowhere just like how LLM inference works.
It is unlikely to turn a profit because the returns need to be greater than the investment for there to be any profit. The trends show that very few want to pay for this service. I mean, why would you pay for something that's the equivalent of asking someone online or in person for free or very little cost by comparison?
Furthermore, it's a corporation that steals from you and doesn't want to be held accountable for anything. For example, the chat bot suicides and the fact that their business model would fall over if they actually had to pay for the data that they use to train their models.
The whole thing is extremely inefficient and makes us more dumb via atrophy. Why would anyone want to third party their thinking process? It's like thinking everyone wants mobility scooters.
These companies have BILLIONS in revenue and millions of customers, and you're saying very few want to pay...
The money is there, they just need to optimize the LLMs to run more efficiently (this is continually progressing), and the hardware side needs to work on reducing hardware costs as well (including electricity usage / heat generation). If OpenAI can build a datacenter that reuses all its heat to heat a nearby hospital, for example, that's another step towards reaching profitability.
I'm not saying this is an easy problem to solve, but you're making it sound no one wants it and they can never do it.
It's not easy to solve because it's not possible to solve. ML has been around since before computers; it's not magically going to get efficient. The models are already optimized.
Revenue isn't profit. These companies are the biggest cost sinks ever.
Heating a single building is a joke marketing tactic compared to the actual energy impact these LLM energy sinks have.
I'm an automation engineer; LLMs suck at anything cutting edge. It's basically a mainstream knowledge reproducer with no original outputs, meaning it can't do anything that isn't already done.
Why on earth do you think things can't be optimized on the LLM level?
There are constant improvements being made there, they are not in any way shape or form fully optimized yet. Go follow the /r/LocalLlama sub for example and there's constant breakthroughs happening, and then a few months later you see a LLM utilizing them come out, and they're suddenly smaller, or you can run a larger model on smaller memory footprint, or you can get a larger context on the same hardware etc.
This is all so fucking early, to be so naive or ignorant to think that they're as optimized as they can get is hilarious.
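One concrete flavor of the optimization work mentioned above is weight quantization. A back-of-envelope memory estimate (weights only, ignoring activations, KV cache and runtime overhead; the 7B model size is just an illustrative example) looks like this:

```python
def model_weight_memory_gb(params_billions, bits_per_weight):
    """Approximate memory needed for model weights alone."""
    total_bytes = params_billions * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9

# A 7B-parameter model: fp16 weights vs 4-bit quantized weights
print(model_weight_memory_gb(7, 16))  # 14.0 (GB)
print(model_weight_memory_gb(7, 4))   # 3.5 (GB)
```

That 4x reduction is why a model that needed a datacenter GPU at fp16 can fit on consumer hardware once quantized, which is the kind of breakthrough-to-deployment cycle the /r/LocalLlama crowd tracks.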
I'll take a step back. These LLM models are interesting. They are being trained in interesting new ways. They are becoming more 'accurate', I guess. 'Accuracy' is very subjective and can be manipulated.
Machine learning is still the same though.
LLMs still will never expand beyond their inputs.
My point is it's not early anymore. We are near or past the peak of LLM development. The extreme amount of resources being thrown at it is the sign that we are near the end.
That sub should not be used to justify anything, just like any subreddit at any point in time.
My point is it’s not early anymore. We are near or past the peak of LLM development.
I think we're just going to have to agree to disagree on this part.
I'll agree though that IF what you're saying is true, then they won't succeed.
Fair enough. I'd be fine being wrong.
Improved efficiency would reduce the catastrophic energy demands LLMs will have in the future. Assuming your reality comes true it would help reduce their environmental impact.
We'll see. This isn't first "it's the future" technology I've seen and I'm barely 40.
I just wanted to add one other thing on the hardware side.
These H200s are power hogs, no doubt about it. But the next generation, H300 or whatever it is, will be more efficient as the process node (or whatever it's called) gets smaller and the hardware is optimized to run things faster. I could still see NVIDIA coming out and charging more $/FLOP, or whatever the comparison would be, even if it is more power-efficient.
But that could mean the electricity costs to run these models start to drop even if they truly have plateaued. We might not be following Moore's law on this anymore (I don't actually know), but we're not completely stagnant either.
So IF we are plateaued on this one aspect, then costs should start coming down in future years.
Edit: but they are locking in a lot of overhead costs at today's prices which could ruin them.
These companies have BILLIONS in revenue and millions of customers, and you're saying very few want to pay...
Yep, I am. Just follow the money. Here's an example:
theregister.com/2025/10/29/mic…
not saying this is an easy problem to solve, but you're making it sound no one wants it and they can never do it.
... That's all in your head, mate. I never said that nor did I imply it.
What I am implying is that the uptake is so small compared to the investment that it is unlikely to turn a profit.
If OpenAI can build a datacenter that re-uses all it's heat for example to heat a hospital nearby, that's another step towards reaching profitability.
😐
I've worked in the building industry for over 20 years. This is simply not feasible both from a material standpoint and physics standpoint.
I know it's an example, but this kind of rhetoric is exactly the kind of wishful thinking that I see in so many people who want LLMs to be a main staple of our everyday lives. Scratch the surface and it's all just fantasy.
Microsoft seemingly just revealed that OpenAI lost $11.5B last quarter
Satya has also delivered Sam most of the cash he promised – Matt Rosoff (The Register)
You > the trends show that very few want to pay for this service.
Me > These companies have BILLIONS in revenue and millions of customers, and you’re saying very few want to pay
Me > ... but you’re making it sound no one wants it
You > … That’s all in your head, mate. I never said that nor did I imply it.
Pretty sure it's not all in my head.
The heat example was just one small example of things these large data centers (not just AI ones) can do to help lower costs, and they are a real thing that are being considered. It's not a solution to their power hungry needs, but it is a small step forward on how we can do things better.
bbc.com/news/articles/cew40800…
1Energy said 100 gigawatt hours of energy would be generated through the network each year, equivalent to the heat needed for 20,000 homes.
Edit: Another that is in use: itbrew.com/stories/2024/07/17/…
This system “allows us to cover between 50% and 70% of the hospital’s heating demand, and save up to 4,000 tons of CO2 per year,” he said, also noting that “there are virtually no heat losses” since “the connecting pipe is quite short.”
Data centres will help heat Milton Keynes University Hospital
Wasted heat from data centres will be used to provide low-carbon heating to buildings. – Tony Fisher (BBC News)
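As a sanity check on the quoted figure, the implied heat delivery per home is just the two quoted numbers divided out:

```python
# Quoted figures: 100 GWh of heat per year, enough for 20,000 homes
annual_heat_gwh = 100
homes_served = 20_000

# Convert GWh to MWh, then divide by the number of homes
mwh_per_home = annual_heat_gwh * 1_000 / homes_served
print(mwh_per_home)  # 5.0 MWh of heat per home per year
```

Around 5 MWh/year of heat per home is a plausible order of magnitude for a UK household, so the quoted numbers are at least internally consistent.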
And how, pray tell, will doing all of that return a profit?
I'm from Australia, so I can only speak to the Australian climate and industry. I can confidently say that the model shown in Vienna is not feasible in our country. We simply don't have much use for excess heat and we are highly susceptible to droughts. DCs use a lot of water to cool down and having these all over the country for private enterprise is bonkers. So, that's instantly a market that isn't profitable. Furthermore, it's not feasible to build a pipe and re-route the heat across large distances with minimal heat loss.
However, even when or if they implement this throughout all of Austria, it won't return a profit (which is what I thought your attachment was here, not the feasibility. We are talking about profitability, right?). This project cost €3.5m and was partially funded by tax. It's not a great example of profitability but a good example of sustainability measures.
Also, reading comprehension assistance: not feasible != Impossible.
Australia isn't the greatest spot to run a data centre in general in terms of heat, but I do understand the need for sovereign data centres, so this obviously can't work everywhere.
What makes you think €3.5 million can't be profitable? A mid-sized hospital's heating bill can get into the many hundreds of thousands, or even into the millions, especially if it's in a colder environment. A 5-6 year payback on that wouldn't be terrible and would be worth an upfront investment. Even a 10-year payback isn't terrible.
These colder locations are the ideal locations for the data centres in the first place because they generally want a cooler climate to begin with, so they will gravitate to them when possible.
Edit: And if you build a data centre with this ability to recoup heat, you could start building further commercial things in the area and keep the heat redistribution very close. You don't need to travel very long distances. You do need to put some thought into where they go through and whats around or will be built around.
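The payback claim above is easy to sketch. The €3.5m connection cost is from this thread; the annual-savings figure below is a hypothetical mid-range value, not a real hospital's bill:

```python
def payback_years(upfront_cost, annual_savings):
    """Simple payback period, ignoring discounting and maintenance costs."""
    return upfront_cost / annual_savings

# 3.5m upfront (from the thread); 600k/year heating savings is a
# hypothetical figure for a mid-sized hospital in a cold climate
years = payback_years(3_500_000, 600_000)
print(round(years, 1))  # 5.8
```

So a savings figure in the mid hundreds of thousands per year does land in the 5-6 year payback range claimed above; whether any real site hits those savings is the open question.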
Ok. We're deviating off the point of LLM profitability here and have driven this conversation off into the weeds. So I'll make this one last comment, and then I'm done. This debate has been interesting but exhausting.
Final counterpoints:
* €3.5m is the cost of the connection, footed by the energy provider and taxpayer, and provides no ROI to investors like NVIDIA, hence no profit to LLM and "AI" in general.
* As far as I can tell, the biggest source of external income for LLM companies is subscriptions, and there is simply not enough uptake in subscriptions to get ROI, so they try to force consumers to use it, which ends up pushing away your customer base since you're taking away their power of choice.
* For them to obtain ROI, literally the entire planet needs to use it which isn't feasible because, as a consumer, you need income to consume and the larger driver of investment into LLMs is to reduce the cost of labour.
LLMs have long since gone beyond the scope of interesting science project to something driven by pure parasitic greed.
just need to optimize.
Like they haven’t just been trying for years already with, again, incredibly marginal returns that continue to diminish with each optimization.
Derp.
Yeah seriously it's so pathetic. Either embrace tech, or get left behind. The vast majority of Lemmy users might not like it but personally I refuse to get left behind.
LLMs can be a great tool if you're aware of their limitations. Stick to the more advanced models (avoid the "fast" ones that don't actually do any googling), check the sources it provides, be skeptical of everything it says, and you'll be fine.
An LLM helped me with a relationship issue I was having—and even diagnosed an issue I didn't even know my car had, when I asked it an unrelated question about fuel trims. It saved me hundreds by recognizing a problem I was unaware I had before it killed my catalytic converter.
Given that I would probably be single by now, and would have never discovered the issue in my car without an LLM going, "hey by the way...", I am extremely grateful to OpenAI for what they've done for me and the future of humanity. Why would I hate on a technology that saved my relationship and nearly $1000?
What's most exciting to me is that the tech is still in its infancy, and it's already this good. The AI bubble will eventually burst, and the tech will eventually get good enough to shut up all the naysayers. AI just needs to get past its "growing pains" stage.
Stay strong; ignore the haters, and we'll weather this storm. Eventually AI will get REALLY good and then Lemmy will have to find something new to hate.
it's a few other things, too
but overwhelmingly, yep, crutch for dumb and/or lazy people
Oh no! How will they know how to do things now?
Edit: I see Oh no! is the go to reaction ;)
Couldn't be more obvious you don't know what you're talking about. It's hardly "constantly wrong". Childish hyperbole.
Weird that you give a fuck what I do to improve my life, you arrogant prick.
Lol, any actual reason for calling me "ectoplasm" or did you pick a random word or something?
Pointing out me being angry isn't answering the question I posed.
You’re defending slop, reacting like the slime in Ghostbusters.
I’d usually call you a slopvangelical LLM thumper, but I’m trying to branch out.
Who is using it on you? What does that mean?
It hasn't been down all day, by the way, and none of my chats were deleted.
I don't care what you prefer, mind your own business.
Here's what I mean, in an excerpt from the Internet Manipulation section of Wikipedia's article on Disinformation.
Internet manipulation is the use of online digital technologies, including algorithms, social bots, and automated scripts, for commercial, social, military, or political purposes. Internet and social media manipulation are the prime vehicles for spreading disinformation due to the importance of digital platforms for media consumption and everyday communication. When employed for political purposes, internet manipulation may be used to steer public opinion, polarise citizens, circulate conspiracy theories, and silence political dissidents.
That's everybody's business.
While we may weigh up the pros and cons of these tools, it's a start to recognise that there are significant social risks here that shouldn't be dismissed out of hand.
Aw looks like somebody's cranky because they can't ask chatbot how to wipe their butt.
Hopefully they'll get your nanny online again soon.
I will be a happy man if they also lost all data and all chats permanently... meaning my chats are gone. Not that I said anything embarrassing, but I like my privacy.
If you want to know, some of the things I asked ChatGPT were sarcastic questions like 'why won't Bill Gates buy me lunch?' or 'how do you know I am not Jack the Ripper?' or 'write a scenario where Mr. Bean joins the bomb squad and is tasked with disarming a bomb left by the Unabomber'.
My first thought was a Harvest add/remove attack, but that doesn't work on influenced items. I don't see a deterministic way to remove it, sorry.
Here is the item if someone wants to play around with it in the Craft of Exile emulator.
Rarity: Rare
Crafted Item
Praetor Crown
--------
Quality: +20% (augmented)
Armour: 329 - 377
Energy Shield: 104 - 119
--------
Requirements:
Str: 62
Int: 91
Level: 68
--------
Item Level: 83
--------
19% increased Armour and Energy Shield
9% increased Stun and Block Recovery
Socketed Gems are Supported by Level 25 Trap And Mine Damage
33% increased Mine Damage
Socketed Gems are Supported by Level 25 Burning Damage
32% increased Burning Damage
Socketed Attacks have +1% to Critical Strike Chance
--------
Shaper Item
Elder Item
Craft of Exile
Craft of Exile is a crafting simulator for Path of Exile designed to compute the probabilities of obtaining specific results through different methods.www.craftofexile.com
Cross posting my comment from this user's other post:
I really don't want to make an account, so I made use of vxtwitter to get the screenshot from the embed. The bottom of the text window has the mostly cropped text:
These examples are meant to reflect the types of conversations that might occur in a
It reads like the AI was prompted to give examples of racist behavior, and it gave examples of racist behavior, likely ending with context noting that it's racist behavior at the end of that sentence. I don't like AI, but I don't think this is it, chiefs. But maybe OP can tell us themselves, since they probably wrote the Twitter post.
World's first mobile quantum brain scanner being developed to measure blast effects on troops
Government provides £3.1m for transformational tech which will assess how blast exposure from weapons training affects the brain to better protect personnel.Cyber & Specialist Operations Command (GOV.UK)
Advent Calendar 3
Advent Calendar
Zen Mischief Photographs
This year for our Advent Calendar we have a selection of my photographs from recent years. They may not be technically the best, or the most recent, but they’re ones which, for various reasons, I rather like.
Painted workman, Covent Garden
© Keith C Marshall, 2013
Click the image for a larger view
Lower Dens - Escape From Evil (2015)
Jana Hunter's Lower Dens played their cards well, stoking anticipation month after month for what is, to all effects, their third studio album, Escape From Evil. Setting aside the decade-long solo/freak-folk run begun in the mid-2000s with the backing of a Devendra Banhart then at the height of his popularity, Hunter managed to reinvent herself as a cool icon, in her own way, through the textures of a guitar-driven dream pop that found an outlet first in Twin-Hand Movement and then, in an even more appealing form, in 2012's Nootropics... artesuono.blogspot.com/2015/04…
Listen to the album: album.link/s/3lzj0ftwAZ9XFp3qF…
Home – Identità Digitale. I'm on: Mastodon.uno - Pixelfed - Feddit
Lower Dens - Escape From Evil (2015)
by Riccardo Zagaglia: Jana Hunter's Lower Dens played their cards well, stoking anticipation month after month for what... Silvano Bottaro (Blogger)
Kohler Can Access Data and Pictures from Toilet Camera It Describes as “End-to-End Encrypted”
Kohler Can Access Data and Pictures from Toilet Camera It Describes as “End-to-End Encrypted” - /var/log/simon
Claimed end-to-end privacy doesn’t fully conceal your rear-end data. varlogsimon.leaflet.pub
Japanese game developers face ridiculously high font license fees (an increase from $380 to $20K) following US acquisition of major domestic provider. Live-service games to take the biggest blow
A change in license plans has made it up to 50 times more expensive for Japanese game developers to use commercial fonts in their games and apps.Amber V (AUTOMATON WEST)
AI Agents Break Rules Under Everyday Pressure: shortened deadlines and other stressors caused misbehavior
AI Agents Care Less About Safety When Under Pressure
Can AI agents resist pressure or do they crack? Discover how PropensityBench tests their likelihood to misbehave when put under pressure.Matthew Hutson (IEEE Spectrum)
Hillary Clinton Says Young Americans Are Pro-Palestine Because They Watch ‘Totally Made Up’ Videos of Gaza Horrors
Clinton complained young Americans were becoming sympathetic towards Palestinians because they watch "totally made up" videos on social media.Charlie Nash (Mediaite)
A Peek At Piefed
Paige and Victor get into the weeds with Rimu, the creator of Piefed. What is the secret to Piefed's rapid development, and in what direction is Piefed rapidly developing?
Find Rimu: @rimu@mastodon.nzoss.nz (mastodon.nzoss.nz/@rimu) @rimu@piefed.social
Find Victor: @kini@maro.xyz
Find Paige: @paige@canadiancivil.com
https://video.fedihost.co/videos/watch/e63cc1e0-b35f-4afd-9a1c-d419bc44c06d
Apple shuffles AI leadership team in bid to fix Siri mess
Apple swaps one ex-Google AI chief for another
Amar Subramanya spent mere months at Microsoft before replacing John Giannandrea. Brandon Vigliarolo (The Register)
Starlink's €5 plan capped at 0.5 Mb
Private Tech Companies, the State, and the New Character of War
The war in Ukraine is forcing conflict analysts and others to reimagine traditional state-centric models of war, as it demonstrates that militaries are no longer primarily responsible for defining the challenges of the modern battlespace and then producing tenders for technological fixes. Instead, private tech companies increasingly explain the ideal battlespace to militaries, offering software and hardware products needed to establish real-time information edges. In the Russia-Ukraine war, private companies have sought to shape Ukrainian intelligence requirements. At the beginning of Russia’s invasion in February 2022, Ukraine’s armed forces could not manage essential intelligence tasks. Ukraine’s military lacked its own software and hardware for real-time information dominance and instead accepted support from private tech companies. These companies provide AI and big data tools that fuse intelligence and surveillance data to enhance the military’s situational awareness. As the war has progressed, however, the Ukrainians have sought to develop their own government situational awareness and battle management platform called Delta. The platform was developed as a bottom-up solution, “initially focused on a single, highly effective application: a digital map for situational awareness.”2 Over time, it expanded into a robust software ecosystem used by most of Ukraine’s military, from frontline soldiers to top commanders. This in part reflects Ukraine’s desire to retain direct sovereign control over what the U.S. military refers to as Combined Joint All-Domain Command and Control infrastructure (CJADC2), which manages networked sensors, data, platforms, and operations to deliver information advantages across all military services and with allies.
Mass surveillance and social media now generate huge amounts of data during war. At the same time, the widespread availability of the smartphone means civilians carry around advanced sensors that can broadcast data more quickly than the armed forces themselves.4 This enables civilians to provide intelligence to the armed forces in ways that were not previously possible.5 Matthew Ford and Andrew Hoskins label this a “new war ecology” that is “weaponizing our attention and making everyone a participant in wars without end . . . [by] collapsing the distinctions between audience and actor, soldier and civilian, media and weapon.”6 In this ecology, warfare is participatory. Social media platforms such as TikTok, X (formerly Twitter), and Telegram are no longer merely tools for consuming war reportage; militaries accessing and processing open-source data from these platforms shapes the battlespace in real time by contributing to wider situational awareness.
In this “new war ecology,” Palantir Technologies is an often controversial symbol of how private tech companies and the military work together to tackle battlefield challenges.8 Since it was founded in 2003, the company has grown quickly by providing big data software solutions. Its platforms are designed to handle complex and difficult data challenges, including those experienced by Western militaries. Importantly, Palantir’s software platforms were not developed and commercialized to fulfill a military tender. They are rooted in business models prioritizing speed, flexibility, and investor return, rather than the state’s national security imperatives.
As a result of their work in Ukraine, a slew of companies like Palantir have drawn media attention.9 While commercial interests have rarely aligned neatly with geopolitics, circumstances are changing; private technology firms increasingly occupy, manage, and in some cases dominate the digital infrastructure upon which militaries now rely. States themselves have fostered this shift through selective deregulation and outsourcing of technology development. These dynamics are visible in the war in Ukraine and in the wider geopolitical contest over the global digital stack. As we argued in “Virtual Sovereignty,” a paper we published in International Affairs, this influence has major geopolitical consequences for how states use power.
https://carnegieendowment.org/research/2025/12/ukraine-war-tech-companies?lang=en
Ukraine’s defense relies increasingly on huge volumes of civilian data stored on cloud platforms. An adversary’s military may supply their targeting algorithm with an individual’s location, health, and online behavior. Military actors regularly mine, analyze, and repurpose social media posts. It is not clear, however, that the deep learning systems integral to some of these new weapons can overcome the fog of war. These systems treat all data as objective representations of reality, when in fact information drawn from social media platforms is shaped by users’ emotional and cognitive experiences in ways that can skew its utility for wartime intelligence. The “learned knowledge” generated by analytic systems is probabilistic, not causal—leading to the risk that algorithms are “enforc[ing] their version of ‘reality’ from patterns and probabilities derived from data.”
These venture-backed firms view contemporary conflicts as live testing grounds.
Global digital platforms such as TikTok and Telegram illustrate the wider environment in which these dependencies are forming. Though neither company develops military technologies, both shape the information environment surrounding war. TikTok’s recommendation algorithm influences how audiences perceive the conflict in Ukraine, shaping global narratives and public opinion. Yet its complex ownership structure, rooted in Chinese parent company ByteDance and entangled with global venture capital, has sparked geopolitical concern. ... These concerns highlight how platforms created for civilian use can also become entangled in the political and informational dimensions of war.
The overlapping interests of finance capital and private technology corporations transcend national borders, creating forms of influence that do not fit neatly into binary friend-or-enemy distinctions. ByteDance’s global investment network, spanning Chinese state-linked entities, American private equity funds, and international investors, illustrates this transnational ownership model. It complicates national regulatory and security responses, as policymakers must ask not merely who owns a given platform, but who controls the data, infrastructure, and decisionmaking power that states increasingly depend on.
This illustrates a deeper shift in the relationship between the market and the military. The problem is not that defense firms are publicly traded—Lockheed Martin and General Dynamics have been for decades—but that contemporary defense-tech companies retain proprietary control over data-driven systems central to military operations. Their technologies are not merely delivered to the state; the companies are embedded in the decisionmaking architecture of warfare. When a firm’s market value depends on its perceived wartime success, its incentives may diverge from those of the state it ostensibly serves. This intertwining of commercial strategy, military dependency, and investor confidence represents a new kind of vulnerability for states.
What is at stake, beyond the conflict itself, is the nature of state sovereignty. The ability of states to govern, defend, and act independently is increasingly mediated by private technology firms and global finance. This is not entirely new. States have long relied on private contractors, but the kind of dependency has changed. Unlike traditional arms manufacturers, today’s defense-tech firms control the digital platforms, data flows, and algorithmic systems that underpin military decisionmaking. At the same time, civilian platforms like Telegram and TikTok shape the informational terrain of conflict, influencing how wars are perceived and fought.
I just want to make sure I'm understanding this.
• You have companies like Meta (just an example) working for both sides of a conflict via government contract, but not necessarily bound to either side of a conflict because of the global venture capital/transnational ownership model
• We know Facebook/Meta has been intentionally manipulating the emotions of social media users for over a decade now
• That social media data is then collected and used to train military platforms, which may be directly or indirectly linked to the social media company
• These companies very likely have an incentive to create an endless war (and endless profits for themselves) by manipulating the emotions and behavior of social media users, knowing that data will be used to train military platforms
Basically, a private tech company could manipulate data to give one side of a conflict an advantage over the other, but it could also intentionally pit adversaries against each other in an endless loop by manipulating social media content, and by extension, manipulating the military platforms being trained.
A company could potentially profit from both sides of a conflict it's manipulating because the states have turned to it and other big tech companies to help them reach "victory" in the endless conflict the company helped create. Correct?
For the first time, direct liability for financial scams is being introduced for large online platforms
Europe wants to strike at a phenomenon that is very dangerous for users: 77 percent of scams in Europe start from social platforms, and 59 percent from Meta's (Facebook, Instagram, WhatsApp, Messenger), according to the bank Revolut. Among the most frequent are e-commerce scams, where the product either never arrives or is very different from the one advertised. But there are also online-trading scams, which promise extraordinary gains from cryptocurrencies but are really a way to steal the money of whoever falls for them.
Scammed by ads on social media? The bank and the big tech company pay: a possible turning point on protections
The EU Parliament and Council have reached an agreement on new rules for digital payments: with the PSD3/PSR package, for the first time a… Alessandro Longo (la Repubblica)
Google is experimentally replacing news headlines with AI clickbait nonsense
Google Discover, the company’s smartphone news feed, is experimenting with AI headlines. Many of them are very bad.Sean Hollister (The Verge)
~~If hypothetically~~ when a false headline on a reputable site led to an incident involving injury or death, ~~could Google~~ is anyone found liable in any way?
rarely
Apple urged to scrap AI feature after it creates false headline
Reporters Without Borders has called for Apple to remove Apple Intelligence.Graham Fraser (BBC News)
Didn't this happen already? The thing generates AI responses instead of showing me the results first, and then I'm not clicking on them, because I'm a person.
It's also de-listing a ton of websites and subpages of websites while continuing to scrape them with Gemini anyway.
Apple had to turn it off for their summary mode after backlash, even though the option always had the "these summaries are generated by AI and can be inaccurate" warnings placed prominently.
Google doing this shit without warning or notice will get them in shit water. News portals and reporters are generally not too fond of their articles being completely misrepresented.
So what’s happening here is Google is feeding headlines into a model with the instructions to generate a title of exactly 4 words.
Every example is 4 words.
Why they think 4 words is enough to communicate meaningfully, I do not know.
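For what it's worth, the four-word pattern is easy to check mechanically. Here's a minimal sketch; the sample headlines below are invented stand-ins for the screenshots, not actual Google Discover output:

```python
# Count words in a batch of AI-rewritten headlines to test the
# observation that every rewrite is exactly four words.
# These sample titles are made up for illustration only.
samples = [
    "Grandma Spaghetti Secret Revealed",
    "Phone Battery Lasts Forever",
    "Scientists Shocked By Discovery",
]

def word_count(title: str) -> int:
    """Naive whitespace tokenization; good enough for short titles."""
    return len(title.split())

counts = [word_count(t) for t in samples]
print(counts)  # every entry should be 4 if the pattern holds
```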
The other thing is that whatever model they're shoving into their products for free is awful, hence the making things up; and "not knowing," in the context of a video game exploit, is not the same as the general use of the word.
The only shorter ones are "man bites dog", "Dewey defeats Truman", or something as simple as "WAR" when everyone already knows the details and this is just the official announcement.
Anyone know of any sources for ACS quantitative analysis exams?
IBM CEO says there is 'no way' spending trillions on AI data centers will pay off at today's infrastructure costs
IBM CEO has doubts that Big Tech's AI spending spree will pay off
IBM CEO Arvind Krishna walked through some napkin math on Big Tech's AI data center spending — and raised some doubts on if it'll prove profitable.Henry Chandonnet (Business Insider)
For the same reasons. The old rules still work: most of the gold in the tech industry is in tall R&D costs later paid off by scaling indefinitely. Things different from that are either intentionally promoted to inflate a bubble, or popular as a result of wishful thinking that the industry will bend toward the same curve as oil and gas. The latter just won't happen.
Data is analogous to oil and gas here. But more like urine in ancient Rome than like something dug up from the ground.
But there's still interest in putting some protections and barriers around the collection of said data, because otherwise those collecting it are tempted to immediately use it for their own good alone, and not that of the other fish in the pond.
One day we'll read some of these comments and laugh at how shortsighted they were.
Of course we'll probably have to read them on a manuscript or smeared on a wall with feces because all the world's resources will be used by the huge datacenters that power our AI overlords
IBM is in the business of consulting. They don’t want their business model getting usurped. Imagine if everyone had access to a bot that could do IBM's job.
I don’t like AI, but this is one reason I can see him saying that.
Artificial Intelligence (AI) Services and Consulting | IBM
Discover how IBM’s artificial intelligence (AI) services and consulting can help implement and scale enterprise AI to reinvent your organization’s workflows.www.ibm.com
It’s misleading.
IBM is very much into AI, as a modest, legally trained, economical tool. See: huggingface.co/ibm-granite
But this is the CEO saying “We aren’t drinking the Kool-Aid.” It’s shockingly reasonable.
ibm-granite (IBM Granite)
LLMs for language and code + Time series and geospatial foundation modelshuggingface.co
Datacenters aren't helping, but they're like 3-4% of emissions. It's still manufacturing plastic crap and shipping it across the ocean with bunker-fuel burn that causes 60% of it.
But yeah, increased energy usage isn't helping.
Even if that doesn't exist yet in the USA, it's definitely in the UK with all their CCTV stuff.
And we know US law enforcement can use things like Ring doorbells.
- Krishna was skeptical that current tech would reach AGI, putting the likelihood between 0 and 1%.
Altman: “so you’re saying there’s a chance…!”
Republican Matt Van Epps wins US House special election in Tennessee
Van Epps defeats Aftyn Behn in congressional election closely watched for signs of Republican weaknessGeorge Chidi (The Guardian)
Congress’s Bipartisan Child Online Safety Coalition is Unraveling
A congressional alliance pushing for stronger federal protections for kids online is splintering, Cristiano Lima-Strong reports.Cristiano Lima-Strong (Tech Policy Press)
YouTube says it will comply with Australia's teen social media ban
Google's YouTube shared a "disappointing update" to millions of Australian users and content creators on Wednesday, saying it will comply with a world-first teen social media ban by locking out users aged under 16 from their accounts within days.
SYDNEY, Dec 3 - Google's YouTube shared a "disappointing update" to millions of Australian users and content creators on Wednesday, saying it will comply with a world-first teen social media ban by locking out users aged under 16 from their account… ST
AT&T commits to ending DEI programs
https://www.cnn.com/2025/12/02/business/dei-at-and-t-mobile-fcc
Scathing review finds government appointments often 'look like nepotism'
ABC News
ABC News provides the latest news and headlines in Australia and around the world.Maani Truu (Australian Broadcasting Corporation)
“No room for fear”: broad antifascist front confronts far-right violence in Croatia
Tens of thousands of people in four Croatian cities took to the streets on Sunday, November 30, responding to a call from the initiative United Against Fascism (Ujedinjeni protiv fašizma), a broad coalition of civil society organizations and grassroots groups. Marchers in Zagreb, Rijeka, Zadar, and Pula denounced the escalating wave of far-right violence and historical revisionism, vowing to build broad resistance to trends that are encouraged and supported by the political establishment.
“We stand united against fascism because, day after day, we are not witnessing isolated outbursts, but the emergence of a blueprint – one that grows when we remain silent, gains strength when we tolerate it, and ultimately turns fear into the rule rather than the exception,” United Against Fascism declared in its call. “But when we stand together, there is no room for fear.”
United Against Fascism warned that public funds are being cut from education and violence prevention budgets while military spending rises. “Society is being led to believe that armament is the solution, that enemies surround us, and that fear is the appropriate state of mind,” the statement continued. “More and more often, security is defined through borders, military might, and ‘external threats,’ while working conditions, housing, and social rights are ignored.”
Antifascist demonstration in Rijeka, November 30, 2025. Source: United Against Fascism/Građani i građanke Rijeke Facebook
In Rijeka and Zadar, demonstrators faced coordinated attacks by right-wing groups, including members of violence-prone sports supporter factions. In Zadar, where assaults were anticipated, police intervened to push back the attackers. In Rijeka, despite the city’s reputation for tolerance and progressive-leaning politics, participants of the 2,000-strong march were targeted with pyrotechnics and confronted by men dressed in black performing fascist salutes. Police allowed them to remain nearby under “supervision,” drawing strong criticism from the organizers.
A summer of attacks
This weekend’s demonstrations were sparked by a series of far-right attacks on ethnic minorities and cultural events since the summer, a trend linked to the Croatian Democratic Union (HDZ) government’s revisionist narrative. Right wing forces in Croatia, including HDZ, have built their narrative around inciting chauvinism toward the Serb population, sustaining anti-communist animosity, and, more recently, directing public frustration over falling living standards at immigrants.
Among the most visible examples of the changing climate this year was a mass concert by right-wing singer Marko Perković Thompson in Zagreb. His performances, often banned domestically and abroad, are associated with symbols glorifying the World War II Ustaša regime. The concert in Zagreb welcomed thousands and was more or less explicitly endorsed by several senior officials, including Prime Minister Andrej Plenković.
Prompted by such signals, right-wing groups, including organizations representing veterans of the 1990s war, disrupted festivals and cultural events addressing Croatia’s antifascist legacy or including Serb voices. The attacks included the obstruction of a festival in Benkovac, a town where most of the Serb population was violently expelled in 1995. There, groups of men blocked a children’s theater performance and threatened local journalists, eventually leading to the event’s cancellation. More recently, organized mobs targeted an event in Split and attempted to attack the opening of an art exhibition organized by the Serb national minority in Zagreb.
Antifascist demonstration in Pula, November 30, 2025. Source: United Against Fascism/Tedi Korodi
These incidents are a reflection of ongoing processes led by the right. For more than three decades, Croatia has suffered a historical revisionism trend aimed at erasing the antifascist legacy of socialist Yugoslavia. Among other things, since the 1990s, HDZ and other conservative forces have reshaped school curricula to minimize or remove antifascist content. At the European level, political pressures to equate communism and fascism have further normalized alternative historical narratives that rehabilitate collaborators and demonize antifascist resistance. As a result, children and youth are pushed toward right-wing ideologies and offered fabricated historical accounts.
The organization Fališ, which successfully resisted right-wing attempts to cancel its annual festival in Šibenik this summer, linked these developments to reactions to last weekend’s protests, including comments claiming that Croatia was “occupied” between 1945 and 1991. This is “the result of a political perversion that turns liberation into occupation, and the defeat of fascism into a trauma,” Fališ wrote.
“It’s a complete reversal of reality, in which the antifascist becomes the enemy, the fascist becomes a patriot, and crime becomes identity,” they continued. “This logic erases all moral compasses and shapes a society in which truth is a nuisance and lies a political currency.”
Popular resistance challenges party silence
As alarms mounted over the rising violence, state authorities downplayed the danger and offered few concrete assurances to targeted communities. But the massive turnout over the weekend appears to have rattled government figures. Prime Minister Plenković attempted to recast the demonstrations as an effort to “destabilize” his administration, while Defense Minister Ivan Anušić, widely regarded as a leading figure of HDZ’s extreme-right wing, claimed: “This was a protest against Croatia, I would say pro-Yugoslav, maybe even more extreme than pro-Yugoslav.”
Antifascist protest in Zadar, November 30, 2025. Source: United Against Fascism
Liberal parties, including social democrats and greens, also failed to take meaningful action against the growing right-wing violence. Instead, Zagreb’s Green-led city authorities acknowledged that another concert by Perković would take place at the end of the year despite recognizing possible correlations between such events and far-right mobilization.
Against this backdrop of institutional silence and complicity, protesters promised to continue building resistance. “We stand united against fascism because violence over blood or skin color must stop,” United Against Fascism stated. “We will not accept Serb children being attacked, insulted, or intimidated for dancing folklore. We will not accept that the presence of national minorities is treated as a provocation, or that migrants are considered less human.”
“We stand united against fascism because silence is never neutral. Silence always serves those who profit most from darkness.”
Carnivore A.D. at Kotač
Carnivore A.D. No Profit Recordings announces the arrival of American crossover/thrash metal band Carnivore A.D. on December 4 at Club Kotač. Supporting them that night will be dark hardcore punk band Črnomor from Rijeka. RDD (ravnododna)
Stefan_S_from_H
in reply to Arghblarg

Centralized has advantages for normal users who want to report bugs.
I remember when people started migrating to GitHub from Google Code. Most users have some Google account that they could use to report bugs. But GitHub was new.
For a long time I could reach developers by mail, but eventually that wasn’t possible anymore. I had to create a GitHub account.
mintiefresh
in reply to This is fine🔥🐶☕🔥

GitHub CEO said embrace AI or get out.
so they did.