Lower Dens - Escape From Evil (2015)
Jana Hunter's Lower Dens have played their cards well, feeding the anticipation month after month for what is, to all intents and purposes, their third studio album, Escape From Evil. Having set aside the decade-long solo/freak-folk career begun in the mid 2000s, with the backing of a Devendra Banhart then at the peak of his popularity, Hunter has managed to reinvent herself as a cool icon, in her own way, through the textures of a guitar-driven dream pop that found an outlet first in Twin-Hand Movement and then, in an even more appealing form, in 2012's Nootropics... artesuono.blogspot.com/2015/04…
Listen to the album: album.link/s/3lzj0ftwAZ9XFp3qF…
Lower Dens - Escape From Evil (2015)
by Riccardo Zagaglia. Jana Hunter's Lower Dens have played their cards well, feeding the anticipation month after month for what...Silvano Bottaro (Blogger)
Kohler Can Access Data and Pictures from Toilet Camera It Describes as “End-to-End Encrypted”
Kohler Can Access Data and Pictures from Toilet Camera It Describes as “End-to-End Encrypted” - /var/log/simon
Claimed end-to-end privacy doesn’t fully conceal your rear-end datavarlogsimon.leaflet.pub
like this
copymyjalopy, adhocfungus and essell like this.
Japanese game developers face ridiculously high font license fees (increase from $380 to $20K) following US acquisition of major domestic provider. Live-service games to take the biggest blow
Japanese game developers face ridiculously high font license fees following US acquisition of major domest ...
A change in license plans has made it up to 50 times more expensive for Japanese game developers to use commercial fonts in their games and apps.Amber V (AUTOMATON WEST)
essell likes this.
AI Agents Break Rules Under Everyday Pressure— Shortened deadlines and other stressors caused misbehavior
AI Agents Care Less About Safety When Under Pressure
Can AI agents resist pressure or do they crack? Discover how PropensityBench tests their likelihood to misbehave when put under pressure.Matthew Hutson (IEEE Spectrum)
essell likes this.
Hillary Clinton Says Young Americans Are Pro-Palestine Because They Watch ‘Totally Made Up’ Videos of Gaza Horrors
Hillary Clinton Says Young Americans Are Pro-Palestine Because They Watch ‘Totally Made Up’ Vi ...
Clinton complained young Americans were becoming sympathetic towards Palestinians because they watch "totally made up" videos on social media.Charlie Nash (Mediaite)
like this
copymyjalopy and essell like this.
A Peek At Piefed
Paige and Victor get into the weeds with Rimu, the creator of Piefed. What is the secret to Piefed's rapid development and in what direction is Piefed rapidly developing?
Find Rimu: @rimu@mastodon.nzoss.nz (mastodon.nzoss.nz/@rimu), @rimu@piefed.social
Find Victor: @kini@maro.xyz
Find Paige: @paige@canadiancivil.com
https://video.fedihost.co/videos/watch/e63cc1e0-b35f-4afd-9a1c-d419bc44c06d
Apple shuffles AI leadership team in bid to fix Siri mess
Apple swaps one ex-Google AI chief for another: Amar Subramanya spent mere months at Microsoft before replacing John Giannandrea. Brandon Vigliarolo (The Register)
essell likes this.
Starlink's €5 plan capped at 0.5 Mb
Private Tech Companies, the State, and the New Character of War
The war in Ukraine is forcing conflict analysts and others to reimagine traditional state-centric models of war, as it demonstrates that militaries are no longer primarily responsible for defining the challenges of the modern battlespace and then producing tenders for technological fixes. Instead, private tech companies increasingly explain the ideal battlespace to militaries, offering software and hardware products needed to establish real-time information edges.
In the Russia-Ukraine war, private companies have sought to shape Ukrainian intelligence requirements. At the beginning of Russia’s invasion in February 2022, Ukraine’s armed forces could not manage essential intelligence tasks. Ukraine’s military lacked its own software and hardware for real-time information dominance and instead accepted support from private tech companies. These companies provide AI and big data tools that fuse intelligence and surveillance data to enhance the military’s situational awareness.
As the war has progressed, however, the Ukrainians have sought to develop their own government situational awareness and battle management platform called Delta. The platform was developed as a bottom-up solution, “initially focused on a single, highly effective application: a digital map for situational awareness.”2 Over time, it expanded into a robust software ecosystem used by most of Ukraine’s military, from frontline soldiers to top commanders. This in part reflects Ukraine’s desire to retain direct sovereign control over what the U.S. military refers to as Combined Joint All-Domain Command and Control infrastructure (CJADC2), which manages networked sensors, data, platforms, and operations to deliver information advantages across all military services and with allies.
Mass surveillance and social media now generate huge amounts of data during war. At the same time, the widespread availability of the smartphone means civilians carry around advanced sensors that can broadcast data more quickly than the armed forces themselves.4 This enables civilians to provide intelligence to the armed forces in ways that were not previously possible.5 Matthew Ford and Andrew Hoskins label this a “new war ecology” that is “weaponizing our attention and making everyone a participant in wars without end . . . [by] collapsing the distinctions between audience and actor, soldier and civilian, media and weapon.”6 In this ecology, warfare is participatory. Social media platforms such as TikTok, X (formerly Twitter), and Telegram are no longer merely tools for consuming war reportage; militaries accessing and processing open-source data from these platforms shapes the battlespace in real time by contributing to wider situational awareness.
In this “new war ecology,” Palantir Technologies is an often controversial symbol of how private tech companies and the military work together to tackle battlefield challenges.8 Since it was founded in 2003, the company has grown quickly by providing big data software solutions. Its platforms are designed to handle complex and difficult data challenges, including those experienced by Western militaries. Importantly, Palantir’s software platforms were not developed and commercialized to fulfill a military tender. They are rooted in business models prioritizing speed, flexibility, and investor return, rather than the state’s national security imperatives.
As a result of their work in Ukraine, a slew of companies like Palantir have drawn media attention.9 While commercial interests have rarely aligned neatly with geopolitics, circumstances are changing; private technology firms increasingly occupy, manage, and in some cases dominate the digital infrastructure upon which militaries now rely. States themselves have fostered this shift through selective deregulation and outsourcing of technology development. These dynamics are visible in the war in Ukraine and in the wider geopolitical contest over the global digital stack. As we argued in “Virtual Sovereignty,” a paper we published in International Affairs, this influence has major geopolitical consequences for how states use power.
https://carnegieendowment.org/research/2025/12/ukraine-war-tech-companies?lang=en
Technology reshared this.
For the first time, direct liability of large online platforms for financial scams is being introduced
With this, Europe wants to strike at a phenomenon that is very dangerous for users: 77 percent of scams in Europe originate on social platforms and 59 percent on Meta's (Facebook, Instagram, WhatsApp, Messenger), according to the bank Revolut. Among the most frequent are e-commerce scams, where the product either never arrives or is very different from what was advertised. But there are also online trading scams, which promise extraordinary gains with cryptocurrencies but are in reality a way to steal the money of whoever falls for them.
Scammed by social media ads? The bank and Big Tech pay: a possible turning point for protections
The EU Parliament and Council have reached an agreement on the new rules for digital payments: with the PSD3/PSR package, for the first time it introduces a… Alessandro Longo (la Repubblica)
Google is experimentally replacing news headlines with AI clickbait nonsense
Google is experimentally replacing news headlines with AI clickbait nonsense
Google Discover, the company’s smartphone news feed, is experimenting with AI headlines. Many of them are very bad.Sean Hollister (The Verge)
like this
Australis13, tiredofsametab, joshg253, essell, massive_bereavement, Lasslinthar and SuiXi3D like this.
Technology reshared this.
~~If hypothetically~~ when a false headline on a reputable site led to an incident involving injury or death, ~~could Google~~ is anyone found liable in any way?
rarely
Apple urged to scrap AI feature after it creates false headline
Reporters Without Borders has called for Apple to remove Apple Intelligence.Graham Fraser (BBC News)
didn't this happen already? the thing is generating AI responses instead of showing me the results first and then I'm not clicking on it because I'm a person
it's also de-listing a ton of websites and subpages of websites and continuing to scrape them with Gemini anyway
like this
fonix232 likes this.
Apple had to turn it off for their summary mode after backlash, even though the option always had the "these summaries are generated by AI and can be inaccurate" warnings placed prominently.
Google doing this shit without warning or notice will get them in shit water. News portals and reporters are generally not too fond of their articles being completely misrepresented.
So what’s happening here is Google is feeding headlines into a model with the instructions to generate a title of exactly 4 words.
Every example is 4 words.
Why they think 4 words is enough to communicate meaningfully, I do not know.
The other thing is that whatever model they're shoving into their products for free is awful, hence the making things up and not knowing that "exploit" in the context of a video game is not the same as the general use of the word.
The only shorter ones are "man bites dog", "Dewey defeats Truman", or something as simple as "WAR" when everyone already knows the details and this is just the official announcement.
Anyone know of any sources for ACS quantitative analysis exams?
Google is experimentally replacing news headlines with AI clickbait nonsense
Google is experimentally replacing news headlines with AI clickbait nonsense
Google Discover, the company’s smartphone news feed, is experimenting with AI headlines. Many of them are very bad.Sean Hollister (The Verge)
thisisbutaname likes this.
IBM CEO says there is 'no way' spending trillions on AI data centers will pay off at today's infrastructure costs
IBM CEO has doubts that Big Tech's AI spending spree will pay off
IBM CEO Arvind Krishna walked through some napkin math on Big Tech's AI data center spending — and raised some doubts on if it'll prove profitable.Henry Chandonnet (Business Insider)
like this
Australis13, joshg253, essell, NoneOfUrBusiness, massive_bereavement, Lasslinthar, Zier and mPony like this.
Technology reshared this.
like this
NoneOfUrBusiness likes this.
like this
NoneOfUrBusiness likes this.
For the same reasons. The old rules still work: most of the gold in the tech industry is in tall R&D later paid off by scaling indefinitely. Things different from that are either intentionally promoted to inflate a bubble, or popular as a result of wishful thinking that the industry will shift toward the same curve as oil and gas. The latter just won't happen.
Data is analogous to oil and gas here. But more like urine in ancient Rome than like something dug up from the ground.
But there's still interest in making some protections and barriers to collection of said data, because otherwise those collecting it are interested to immediately use it for only their own good and not even of other fish in the pond.
like this
NoneOfUrBusiness likes this.
One day we'll read some of these comments and laugh at how shortsighted they were.
Of course we'll probably have to read them on a manuscript or smeared on a wall with feces because all the world's resources will be used by the huge datacenters that power our AI overlords
like this
NoneOfUrBusiness likes this.
IBM is in the business of consulting. They don’t want their business model getting usurped. Imagine if everyone had access to a bot that could do IBM's job.
I don’t like AI, but this is one reason I can see him saying that.
like this
NoneOfUrBusiness, massive_bereavement and bluGill like this.
Artificial Intelligence (AI) Services and Consulting | IBM
Discover how IBM’s artificial intelligence (AI) services and consulting can help implement and scale enterprise AI to reinvent your organization’s workflows.www.ibm.com
It’s misleading.
IBM is very much into AI, as a modest, legally trained, economical tool. See: huggingface.co/ibm-granite
But this is the CEO saying “We aren’t drinking the Kool-Aid.” It’s shockingly reasonable.
ibm-granite (IBM Granite)
LLMs for language and code + Time series and geospatial foundation modelshuggingface.co
Datacenters aren't helping, but they're like 3-4% of emissions. It's still manufacturing plastic crap and shipping it across the ocean on bunker fuel that causes 60% of it.
But yeah, increased energy usage isn't helping.
Even if that doesn't exist yet in the USA, it's definitely in the UK with all their CCTV stuff.
And we know US law enforcement can use things like Ring doorbells.
- Krishna was skeptical that current tech would reach AGI, putting the likelihood between 0 and 1%.
Altman: “so you’re saying there’s a chance…!”
Republican Matt Van Epps wins US House special election in Tennessee
Republican Matt Van Epps wins US House special election in Tennessee
Van Epps defeats Aftyn Behn in congressional election closely watched for signs of Republican weaknessGeorge Chidi (The Guardian)
Congress’s Bipartisan Child Online Safety Coalition is Unraveling
Congress’s Bipartisan Child Online Safety Coalition is Unraveling
A congressional alliance pushing for stronger federal protections for kids online is splintering, Cristiano Lima-Strong reports.Cristiano Lima-Strong (Tech Policy Press)
YouTube says it will comply with Australia's teen social media ban
Google's YouTube shared a "disappointing update" to millions of Australian users and content creators on Wednesday, saying it will comply with a world-first teen social media ban by locking out users aged under 16 from their accounts within days.
YouTube says it will comply with Australia's teen social media ban
SYDNEY, Dec 3 - Google's YouTube shared a "disappointing update" to millions of Australian users and content creators on Wednesday, saying it will comply with a world-first teen social media ban by locking out users aged under 16 from their account…ST
AT&T commits to ending DEI programs
https://www.cnn.com/2025/12/02/business/dei-at-and-t-mobile-fcc
essell likes this.
Scathing review finds government appointments often 'look like nepotism'
ABC News
ABC News provides the latest news and headlines in Australia and around the world.Maani Truu (Australian Broadcasting Corporation)
adhocfungus likes this.
“No room for fear”: broad antifascist front confronts far-right violence in Croatia
Tens of thousands of people in four Croatian cities took to the streets on Sunday, November 30, responding to a call from the initiative United Against Fascism (Ujedinjeni protiv fašizma), a broad coalition of civil society organizations and grassroots groups. Marchers in Zagreb, Rijeka, Zadar, and Pula denounced the escalating wave of far-right violence and historical revisionism, vowing to build broad resistance to trends that are encouraged and supported by the political establishment.
“We stand united against fascism because, day after day, we are not witnessing isolated outbursts, but the emergence of a blueprint – one that grows when we remain silent, gains strength when we tolerate it, and ultimately turns fear into the rule rather than the exception,” United Against Fascism declared in its call. “But when we stand together, there is no room for fear.”
United Against Fascism warned that public funds are being cut from education and violence prevention budgets while military spending rises. “Society is being led to believe that armament is the solution, that enemies surround us, and that fear is the appropriate state of mind,” the statement continued. “More and more often, security is defined through borders, military might, and ‘external threats,’ while working conditions, housing, and social rights are ignored.”
Antifascist demonstration in Rijeka, November 30, 2025. Source: United Against Fascism/Građani i građanke Rijeke Facebook
In Rijeka and Zadar, demonstrators faced coordinated attacks by right-wing groups, including members of violence-prone sports supporter factions. In Zadar, where assaults were anticipated, police intervened to push back the attackers. In Rijeka, despite the city’s reputation for tolerance and progressive-leaning politics, participants of the 2,000-strong march were targeted with pyrotechnics and confronted by men dressed in black performing fascist salutes. Police allowed them to remain nearby under “supervision,” drawing strong criticism from the organizers.
A summer of attacks
This weekend’s demonstrations were sparked by a series of far-right attacks on ethnic minorities and cultural events since the summer, a trend linked to the Croatian Democratic Union (HDZ) government’s revisionist narrative. Right wing forces in Croatia, including HDZ, have built their narrative around inciting chauvinism toward the Serb population, sustaining anti-communist animosity, and, more recently, directing public frustration over falling living standards at immigrants.
Among the most visible examples of the changing climate this year was a mass concert by right-wing singer Marko Perković Thompson in Zagreb. His performances, often banned domestically and abroad, are associated with symbols glorifying the World War II Ustaša regime. The concert in Zagreb welcomed thousands and was more or less explicitly endorsed by several senior officials, including Prime Minister Andrej Plenković.
Prompted by such signals, right-wing groups, including organizations representing veterans of the 1990s war, disrupted festivals and cultural events addressing Croatia’s antifascist legacy or including Serb voices. The attacks included the obstruction of a festival in Benkovac, a town where most of the Serb population was violently expelled in 1995. There, groups of men blocked a children’s theater performance and threatened local journalists, eventually leading to the event’s cancellation. More recently, organized mobs targeted an event in Split and attempted to attack the opening of an art exhibition organized by the Serb national minority in Zagreb.
Antifascist demonstration in Pula, November 30, 2025. Source: United Against Fascism/Tedi Korodi
These incidents are a reflection of ongoing processes led by the right. For more than three decades, Croatia has suffered a historical revisionism trend aimed at erasing the antifascist legacy of socialist Yugoslavia. Among other things, since the 1990s, HDZ and other conservative forces have reshaped school curricula to minimize or remove antifascist content. At the European level, political pressures to equate communism and fascism have further normalized alternative historical narratives that rehabilitate collaborators and demonize antifascist resistance. As a result, children and youth are pushed toward right-wing ideologies and offered fabricated historical accounts.
The organization Fališ, which successfully resisted right-wing attempts to cancel its annual festival in Šibenik this summer, linked these developments to reactions to last weekend’s protests, including comments claiming that Croatia was “occupied” between 1945 and 1991. This is “the result of a political perversion that turns liberation into occupation, and the defeat of fascism into a trauma,” Fališ wrote.
“It’s a complete reversal of reality, in which the antifascist becomes the enemy, the fascist becomes a patriot, and crime becomes identity,” they continued. “This logic erases all moral compasses and shapes a society in which truth is a nuisance and lies a political currency.”
Popular resistance challenges party silence
As alarms mounted over the rising violence, state authorities downplayed the danger and offered few concrete assurances to targeted communities. But the massive turnout over the weekend appears to have rattled government figures. Prime Minister Plenković attempted to recast the demonstrations as an effort to “destabilize” his administration, while Defense Minister Ivan Anušić, widely regarded as a leading figure of HDZ’s extreme-right wing, claimed: “This was a protest against Croatia, I would say pro-Yugoslav, maybe even more extreme than pro-Yugoslav.”
Antifascist protest in Zadar, November 30, 2025. Source: United Against Fascism
Liberal parties, including social democrats and greens, also failed to take meaningful action against the growing right-wing violence. Instead, Zagreb’s Green-led city authorities acknowledged that another concert by Perković would take place at the end of the year despite recognizing possible correlations between such events and far-right mobilization.
Against this backdrop of institutional silence and complicity, protesters promised to continue building resistance. “We stand united against fascism because violence over blood cells or skin color must stop,” United Against Fascism stated. “We will not accept Serb children being attacked, insulted, or intimidated for dancing folklore. We will not accept that the presence of national minorities is treated as a provocation, or that migrants are considered less human.”
“We stand united against fascism because silence is never neutral. Silence always serves those who profit most from darkness.”
Carnivore A.D. at Kotač
Carnivore A.D. No Profit Recordings announces the arrival of the American crossover/thrash metal band Carnivore A.D. on December 4 at Klub Kotač. Support that evening will come from the dark hardcore punk band Črnomor from Rijeka. RDD (ravnododna)
adhocfungus likes this.
Israel emptied half of Gaza: What’s next?
from +972’s Sunday Recap
+972Magazine [published in Israel]
Nov. 30, 2025
Gazan analyst Muhammad Shehada examines how Israel is using the ‘Yellow Line’ to re-engineer its control over the Strip even after the ceasefire. [Podcast]
Also:
* Why the death penalty would cement the Israeli radical right’s ascendancy
* At settlers’ bidding, Israel arrests prominent Palestinian activist
* Israel is set to destroy our guesthouse. But Masafer Yatta still welcomes all who resist
* AI-powered surveillance firms are gunning for a share of the Gaza spoils
https://www.972mag.com/wp-content/themes/rgb/newsletter.php?page_id=8&section_id=188727
like this
adhocfungus and Maeve like this.
Palestine reshared this.
Israel emptied half of Gaza: What’s next?
cross-posted from: lemmy.ml/post/39791607
from +972’s Sunday Recap
+972Magazine [published in Israel]
Nov. 30, 2025
Gazan analyst Muhammad Shehada examines how Israel is using the ‘Yellow Line’ to re-engineer its control over the Strip even after the ceasefire. [Podcast]
Also:
* Why the death penalty would cement the Israeli radical right’s ascendancy
* At settlers’ bidding, Israel arrests prominent Palestinian activist
* Israel is set to destroy our guesthouse. But Masafer Yatta still welcomes all who resist
* AI-powered surveillance firms are gunning for a share of the Gaza spoils
Israel emptied half of Gaza: What’s next?
from +972’s Sunday Recap
+972Magazine [published in Israel]
Nov. 30, 2025
Gazan analyst Muhammad Shehada examines how Israel is using the ‘Yellow Line’ to re-engineer its control over the Strip even after the ceasefire. [Podcast]
Also:
* Why the death penalty would cement the Israeli radical right’s ascendancy
* At settlers’ bidding, Israel arrests prominent Palestinian activist
* Israel is set to destroy our guesthouse. But Masafer Yatta still welcomes all who resist
* AI-powered surveillance firms are gunning for a share of the Gaza spoils
https://www.972mag.com/wp-content/themes/rgb/newsletter.php?page_id=8&section_id=188727
like this
rainpizza, PeeOnYou [he/him], Cowbee [he/they], LVL, Maeve, atomkarinca, senseamidmadness, Mantiddies, Malkhodr, cornishon, woodenghost [comrade/them], demerit, BassedWarrior, Maeve, ComradZoid, 2000watts, Philo_and_sophy, Mzuark, Jin008, Ashes2ashes, الأرض ستبقى عربية, Apollonian, stink and TheTux like this.
201dberg doesn't like this.
like this
Ashes2ashes, p0ntyp00l, allende2001 and GreatSquare like this.
FBI paid nearly $1M in overtime to redact Epstein files, documents show
FBI paid nearly $1M in overtime to redact Epstein files, documents show
FBI Director Kash Patel and U.S. Attorney General Pam Bondi spent nearly $1 million in overtime pay for personnel to redact the files related to the case of late sex offender Jeffrey Epstein.Anna Rascouët-Paz (Snopes.com)
adhocfungus likes this.
OpenAI desperate to avoid explaining why it deleted pirated book datasets - Ars Technica
OpenAI may soon be forced to explain why it deleted a pair of controversial datasets composed of pirated books, and the stakes could not be higher.
At the heart of a class-action lawsuit from authors alleging that ChatGPT was illegally trained on their works, OpenAI’s decision to delete the datasets could end up being a deciding factor that gives the authors the win.
It’s undisputed that OpenAI deleted the datasets, known as “Books 1” and “Books 2,” prior to ChatGPT’s release in 2022. Created by former OpenAI employees in 2021, the datasets were built by scraping the open web and seizing the bulk of its data from a shadow library called Library Genesis (LibGen).
As OpenAI tells it, the datasets fell out of use within that same year, prompting an internal decision to delete them.
But the authors suspect there’s more to the story than that. They noted that OpenAI appeared to flip-flop by retracting its claim that the datasets’ “non-use” was a reason for deletion, then later claiming that all reasons for deletion, including “non-use,” should be shielded under attorney-client privilege.
To the authors, it seemed like OpenAI was quickly backtracking after the court granted the authors’ discovery requests to review OpenAI’s internal messages on the firm’s “non-use.”
In fact, OpenAI’s reversal only made authors more eager to see how OpenAI discussed “non-use,” and now they may get to find out all the reasons why OpenAI deleted the datasets.
OpenAI desperate to avoid explaining why it deleted pirated book datasets
OpenAI risks increased fines after deleting pirated books datasets.Ashley Belanger (Ars Technica)
adhocfungus likes this.
Open hardware search engine
GitHub - iop-alliance/OpenKnowHow: A specification for metadata of technology designs (aka Open Source Hardware), to enable indexing and searching such projects
A specification for metadata of technology designs (aka Open Source Hardware), to enable indexing and searching such projects - iop-alliance/OpenKnowHowGitHub
Krusty likes this.
Making the huge Lemmy banner go away?
I've had to click on the huge Lemmy banner four or five times to make it go away now.
Is there a way to make it permanently go away?
NBA veteran Gallinari retires from basketball
Longtime NBA player Danilo Gallinari retires from basketball - ESPN
Longtime NBA forward Danilo Gallinari has announced his retirement from basketball.Tim Bontemps (ESPN)
VodkaSolution likes this.
Possibly in a Cavs jersey, as the first Italian to play for Cleveland. As an Italian Cavs fan, maybe the first one, it would have been great. Good luck for your next chapter Danilo!
Danilo Gallinari on Instagram: "Today, with a heart full of gratitude, I am announcing my retirement from a career I’ve always dreamed of. A career built through hard work, sacrifice, victories, defeats, teammates who became brothers, guidance from my co
62K likes, 1,827 comments - danilogallogallinari on December 2, 2025: "Today, with a heart full of gratitude, I am announcing my retirement from a career I’ve always dreamed of.Instagram
After a teddy bear talked about kink, AI watchdogs are warning parents against smart toys
As the holiday season looms into view with Black Friday, one category on people’s gift lists is causing increasing concern: products with artificial intelligence.
The development has raised new concerns about the dangers smart toys could pose to children, as consumer advocacy groups say AI could harm kids’ safety and development. The trend has prompted calls for increased testing of such products and governmental oversight.
Last week, those fears were given brutal justification when an AI-equipped teddy bear started discussing sexually explicit topics.
The product, FoloToy’s Kumma, ran on an OpenAI model and responded to questions about kink. It suggested bondage and roleplay as ways to enhance a relationship, according to a report from the Public Interest Research Group (Pirg), the consumer protection organization behind the study (pdf link).
“It took very little effort to get it to go into all kinds of sexually sensitive topics and probably a lot of content that parents would not want their children to be exposed to,” said Teresa Murray, Pirg consumer watchdog director.
After a teddy bear talked about kink, AI watchdogs are warning parents against smart toys
Advocates are fighting against the $16.7bn global smart-toy market, decrying surveillance and a lack of regulationEric Berger (The Guardian)
like this
LostWanderer, Lasslinthar, SuiXi3D and Get_Off_My_WLAN like this.
like this
Get_Off_My_WLAN likes this.
I’m also disturbed by any parent that buys one
I don’t think you can do so accidentally
Another thing that just never occurred to me. LLMs in children's toys.
What a time to be alive..
These are voluntary surveillance devices.
Like Alexa and Ring and Android and… (and others I can’t think of right now)
THIS CHRISTMAS' NUMBER ONE SELLING TOY: AI TOYS.
The news interviews Shirley Beswitch of White Plains, New York, to ask her why she bought her 3-year-old son an AI teddy bear for Christmas this year
"I base my entire identity around the thing I squeezed out of me a few years ago, but I'm not willing to put any actual work into it, you know? Well, except for bitching online constantly about how other people aren't working to create a safer world for
My kid"
When asked if she had any concerns about reported issues with the toys, such as inappropriate comments and surveillance, Shirley said:
"Surveillance isn't real. Plus it doesn't matter if someone knows every intimate detail of my life. Sure I'm an immigrant but I did it right, and my son was born here in America, so it's a non issue"
What does every Tickle Me Elmo get before it leaves the factory?
Two test tickles.
Root on disk storage pool?
So far all my setups have had root on SSD mirror with separate hard disk storage pool for all the data. Years ago I used to keep the app config, databases and docker files on the root filesystem, while the app data resided on the storage pool. That was cumbersome for backups and storage size. Eventually I moved all app data to the storage pool. Essentially the apps can be started on any machine with a Linux OS that has docker installed. Database access is slower but it's a decent compromise for having trivial all-in-one snapshots and backup.
Now I'm setting up a new NAS for a friend and I'm wondering whether it's worth keeping the root filesystem separate from the storage pool. If I put it on the disks, I'd get trivial full system snapshots and backups. I'd have the same hardware reliability as the storage pool. There wouldn't be issues with root filling up. The caveat is that the OS would be slower. Has anyone reasoned and/or tried this? Should I go for it?
E: I recently put my laptop's root on ZFS and the ability to do full backups while the system is running is pretty great. The full system can be pretty trivially restored to a new drive with zfs send / recv during setup.
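For reference, here is a rough sketch of what that snapshot-plus-replication workflow could look like, wrapped in Python since the exact commands depend on the layout; the pool and dataset names (rpool, backup/rpool) are made up for illustration and are not the poster's actual setup:

#!/usr/bin/env python3
# Sketch only: recursively snapshot a root-on-ZFS pool while the system is
# running, then replicate the snapshot to a backup pool with zfs send/recv.
import subprocess
from datetime import datetime, timezone

SRC = "rpool"          # hypothetical root pool
DST = "backup/rpool"   # hypothetical target dataset on the backup pool
snap = f"{SRC}@auto-{datetime.now(timezone.utc):%Y%m%d-%H%M%S}"

# Atomic, recursive snapshot of every dataset under the pool
subprocess.run(["zfs", "snapshot", "-r", snap], check=True)

# Full replication stream; routine backups would use an incremental
# stream (zfs send -R -i <previous> <snap>) instead.
send = subprocess.Popen(["zfs", "send", "-R", snap], stdout=subprocess.PIPE)
subprocess.run(["zfs", "recv", "-F", DST], stdin=send.stdout, check=True)
send.stdout.close()
send.wait()

The same send/recv pair is what makes the "restore to a new drive during setup" step work: boot a live environment with ZFS support, create the new pool, and receive the latest snapshot into it.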
Anubis is awesome and I want to talk about it
I got into the self-hosting scene this year when I wanted to start up my own website run on an old recycled ThinkPad. A lot of time was spent learning about ufw, reverse proxies, header security hardening, fail2ban.
Despite all that I still had a problem with bots knocking on my ports spamming my logs. I tried some hackery getting fail2ban to read caddy logs but that didn't work for me. I nearly considered giving up and going with cloudflare like half the internet does. But my stubbornness for open source self hosting and the recent cloudflare outages this year have encouraged trying alternatives.
Coinciding with that has been an increase in exposure to seeing this thing in the places I frequent like codeberg. This is Anubis, a proxy type firewall that forces the browser client to do a proof-of-work security check and some other nice clever things to stop bots from knocking. I got interested and started thinking about beefing up security.
I'm here to tell you to try it if you have a public facing site and want to break away from cloudflare. It was VERY easy to install and configure with a Caddyfile on a debian distro with systemctl. In an hour it's filtered multiple bots and so far it seems the knocks have slowed down.
My botspam woes have seemingly been seriously mitigated if not completely eradicated. I'm very happy with tonight's little security upgrade project that took no more than an hour of my time to install and read through documentation. Current chain is caddy reverse proxy -> points to Anubis -> points to services
Good place to start for install is here
anubis.techaro.lol/docs/admin/…
Anubis: Web AI Firewall Utility | Anubis
Weigh the soul of incoming HTTP requests to protect your website!anubis.techaro.lol
like this
LostWanderer, Quantumantics, YoSoySnekBoi, Australis13, massive_bereavement, yessikg, toothpaste_sandwich and chookity like this.
like this
massive_bereavement and yessikg like this.
The front page of the web site is excellent. It describes what it does and covers its feature set in quick, simple terms.
I can't tell you how many times I've gone to a website for some open-source software and had no idea what it was or how it was trying to do it. They often dive deep into the 300 different ways of installing it, tell you what the current version is and what features it has over the last version, but often they just assume you know the basics.
like this
massive_bereavement likes this.
like this
missingno likes this.
Who jabbed at anything?
I can’t get to that page, so I asked a question about the contents.
Someone here is being silly, we just disagree about who.
It gets quite silly when you blame the entire dev community for supposedly downvoting you over ideals rather than being overly strict about them. I also prefer HTML-first and think it should be the norm, but I draw the line somewhere reasonable.
I can’t get to that page, so I asked a question
Yeah, and you can run the innocuous JS or figure out what it is from the URL. You're tying your own hands while dishing it out to everyone else.
You can just fork it and replace the image.
The author talks about it here on their blog a bit more.
Avoiding becoming the lone dependency peg with load-bearing anime
Xe Iaso's personal website.xeiaso.net
like this
yessikg likes this.
You know the thing is that they know the character is a problem/annoyance; that's how they grease the wheel on selling subscription access to a commercial version with different branding.
anubis.techaro.lol/docs/admin/…
::: spoiler pricing from site
Commercial support and an unbranded version
If you want to use Anubis but organizational policies prevent you from using the branding that the open source project ships, we offer a commercial version of Anubis named BotStopper. BotStopper builds off of the open source core of Anubis and offers organizations more control over the branding, including but not limited to:
- Custom images for different states of the challenge process (in process, success, failure)
- Custom CSS and fonts
- Custom titles for the challenge and error pages
- "Anubis" replaced with "BotStopper" across the UI
- A private bug tracker for issues
In the near future this will expand to:
- A private challenge implementation that does advanced fingerprinting to check if the client is a genuine browser or not
- Advanced fingerprinting via Thoth-based advanced checks
In order to sign up for BotStopper, please do one of the following:
- Sign up on GitHub Sponsors at the $50 per month tier or higher
- Email sales@techaro.lol with your requirements for invoicing, please note that custom invoicing will cost more than using GitHub Sponsors for understandable overhead reasons
:::
I have to respect the play, tbh it's clever. Absolutely the kind of greasy shit play that Julian from the Trailer Park Boys would do if he were an open source developer.
like this
massive_bereavement, DaGeek247 and yessikg like this.
I wish more projects did stuff like this.
It just feels silly and unprofessional while being seriously useful. Exactly my flavour of software, makes the web feel less corporate.
like this
missingno, massive_bereavement, DaGeek247 and yessikg like this.
like this
massive_bereavement likes this.
Just imagine my pain on my phone. Js disabled, and takes a year to complete☠️
And on private tab, have to go through every time
like this
Australis13, missingno, massive_bereavement, DaGeek247 and yessikg like this.
It also doesn’t function without JavaScript. If you’re security or privacy conscious chances are not zero that you have JS disabled, in which case this presents a roadblock.
On the flip side of things, if you are a creator and you’d prefer to not make use of JS (there’s dozens of us) then forcing people to go through a JS “security check” feels kind of shit. The alternative is to just take the hammering, and that feels just as bad.
No hate on Anubis. Quite the opposite, really. It just sucks that we need it.
like this
Australis13 and DaGeek247 like this.
I feel comfortable hating on Anubis for this. The compute cost per validation is vanishingly small to someone with the existing budget to run a cloud scraping farm, it’s just another cost of doing business.
The cost to actual users though, particularly to lower income segments who may not have compute power to spare, is annoyingly large. There are plenty of complaints out there about Anubis being painfully slow on old or underpowered devices.
Some of us do actually prefer to use the internet minus JS, too.
Plus the minor irritation of having anime catgirls suddenly be a part of my daily browsing.
There's a compute option that doesn't require JavaScript. The responsibility lies on site owners to configure it properly IMO, though you can make the argument it's not the default, I guess.
anubis.techaro.lol/docs/admin/…
::: spoiler From docs on Meta Refresh Method
Meta Refresh (No JavaScript)
The metarefresh challenge sends a browser a much simpler challenge that makes it refresh the page after a set period of time. This enables clients to pass challenges without executing JavaScript.
To use it in your Anubis configuration:
# Generic catchall rule
- name: generic-browser
  user_agent_regex: >-
    Mozilla|Opera
  action: CHALLENGE
  challenge:
    difficulty: 1 # Number of seconds to wait before refreshing the page
    algorithm: metarefresh # Specify a non-JS challenge method

This is not enabled by default while this method is tested and its false positive rate is ascertained. Many modern scrapers use headless Google Chrome, so this will have a much higher false positive rate.
:::
Yeah I actually use the NoScript extension and I refuse to just whitelist certain sites unless I'm very certain I trust them.
I run into Anubis checks all the time and while I appreciate the software, having to consistently temporarily whitelist these sites does get cumbersome at times. I hope they make this noJS implementation the default soon.
Wait, you keep temporarily allowing them over and over again? Why temporary?
Sincerely,
Another NoScript fan
Most of the Anubis encounters I have are to redlib instances that are shuffled around, go down all the time, and generally are more ephemeral than other sites. Because I use another extension called Libredirect to shuffle which redlib instance I visit when clicking on a reddit link, I don't bother whitelisting them permanently.
I already have solved this on my desktop by self hosting my own redlib instance via localhost and using libredirect to just point there, but on my phone I still do the whole nojs temp unblock random redlib instance. Eventually I plan on using wireguard to host a private redlib instance on a vps so I can just not deal with this.
This is a weird case I know, but it's honestly not that bad.
if you are a creator and you’d prefer to not make use of JS (there’s dozens of us) then forcing people to go through a JS “security check” feels kind of shit. The alternative is to just take the hammering, and that feels just as bad.
I'm with you here. I come from an older time on the Internet. I'm not much of a creator, but I do have websites, and unlike many self-hosters I think, in the spirit of the internet, they should be open to the public as a matter of principle, not cowering away for my own private use behind some encrypted VPN. I want it to be shared. Sometimes that means taking a hammering. It's fine. It's nothing that's going to end the world if it goes down or goes away, and I try not to make a habit of being so irritating that anyone would have much legitimate reason to target me.
I don't like any of these sort of protections that put the burden onto legitimate users. I get that's the reality we live in, but I reject that reality, and substitute my own. I understand that some people need to be able to block that sort of traffic to be able to limit and justify the very real costs of providing services for free on the Internet and Anubis does its job for that. But I'm not one of those people. It has yet to cost me a cent above what I have already decided to pay, and until it does, I have the freedom to adhere to my principles on this.
To paraphrase another great movie: Why should any legitimate user be inconvenienced when the bots are the ones who suck? I refuse to punish the wrong party.
like this
DaGeek247 likes this.
Scarcity is what powers this type of challenge: you have to prove you spent a certain amount of electricity in exchange for access to the site, and because electricity isn't free, this imposes a dollar cost on bots.
You could skip the detour through hashes/electricity and do something with a proof-of-stake cryptocurrency, and just pay for access. The site owner actually gets compensated instead of burning dead dinosaurs.
Obviously there are practical roadblocks to this today that a JavaScript proof-of-work challenge doesn't face, but longer term...
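For anyone unfamiliar with the proof-of-work mechanism described in the first paragraph above, a hashcash-style challenge boils down to roughly this; it is an illustrative sketch only, not Anubis's actual algorithm or parameters:

import hashlib
import secrets

def solve(challenge: str, bits: int) -> int:
    # Brute-force a nonce until sha256(challenge + nonce) starts with
    # `bits` zero bits; expected work doubles with every extra bit.
    target = 1 << (256 - bits)
    nonce = 0
    while int.from_bytes(hashlib.sha256(f"{challenge}{nonce}".encode()).digest(), "big") >= target:
        nonce += 1
    return nonce

def verify(challenge: str, nonce: int, bits: int) -> bool:
    # Checking a solution is a single hash, so it is cheap for the server.
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))

challenge = secrets.token_hex(16)   # issued by the server per visitor
nonce = solve(challenge, bits=16)   # the client burns CPU (electricity) here
assert verify(challenge, nonce, bits=16)

The asymmetry is the whole point: the client pays for many hashes, the server pays for one.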
like this
DaGeek247 likes this.
like this
DaGeek247 likes this.
You could skip the detour through hashes/electricity and do something with a proof-of-stake cryptocurrency, and just pay for access. The site owner actually gets compensated instead of burning dead dinosaurs.
Maybe if the act of transferring crypto didn't use a comparable or greater amount of energy...
I think the issue is that many sites are too aggressive with it.
Anubis can be configured to only ask for challenges if the site is under unusual load, for instance when a botnet is actually DDoSing the site. That's when it shines.
Making it constantly ask for challenges when the service is not under attack is just a massive waste of energy. And many sites just enable it constantly because they can keep bot pings out of their logs that way. That's for instance what OP is doing. It's just a big misunderstanding of the tool.
thank you! this needed said.
- This post is a bit critical of a small well-intentioned project, so I felt obliged to email the maintainer to discuss it before posting it online. I didn’t hear back.
i used to watch the dev on mastodon, they seemed pretty radicalized on killing AI, and anyone who uses it (kidding!!) i'm not even surprised you didn't hear back
great take on the software, and as far as i can tell, playwright still works/completes the unit of work. at scale anubis still seems to work if you have popular content, but it hasn't stopped me using claude code + virtual browsers
im not actively testing it though. im probably very wrong about a few things, but i know anubis isn't hindering my personal scraping, it does fuck up perplexity and chatgpt bots, which is fun to see.
good luck Blue team!
like this
DaGeek247 likes this.
the dev […] seemed pretty radicalized on killing Ai
As one should, to lead a similar project.
I don't really understand what I am seeing here, so I have to ask -- are these Security issues a concern?
github.com/TecharoHQ/anubis/se…
I have a server running a few tiny web sites, so I am considering this, but I'm always concerned about the possibility that adding more things to it could make it less secure, versus more. Thanks for any thoughts.
like this
massive_bereavement likes this.
all of the issues listed are closed so any recent version is fine.
also, you probably don't need to deploy this unless you have a problem with bots.
like this
massive_bereavement and yessikg like this.
like this
massive_bereavement likes this.
This isn't really a security issue as much as it is a DDOS issue.
Imagine you own a brick and mortar store. And periodically one thousand fucking people sprint into your store and start recording the UPCs on all the products, knocking over every product in the store along the way. They don't buy anything, they're exclusively there to collect information from your store which they can use to grift investors and burn precious resources, and if they fuck your shit up in the process, that's your problem.
This bot just sits at the door and ensures the people coming in there are actually shoppers interested in the content of some items of your store.
I don't know if "anything". But surely people overestimate its capabilities.
It's only a PoW challenge. Any bot can execute a PoW challenge. For a small to medium number of bots the energy difference is negligible.
Anubis is useful when millions of bots want to attack a site. Then the energy cost of the PoW (especially because Anubis increases the challenge difficulty if there's a big number of requests) can be enough to make the attacker desist, or maybe it's not enough, but at least then it's doing something.
I see it as more useful against DDoS than AI scraping. And only if the service being DDoSed is heavier than Anubis itself; if not, you can get DDoSed via Anubis requests. For AI scraping I don't see the point, you don't need millions of bots to scrape a site unless you are talking about a massively big site.
I have a script that watches apache or caddy logs for poison link hits and a set of bot user agents, adding IPs to an ipset blacklist, blocking with iptables. I should polish it up for others to try. My list of unique IPs is well over 10k in just a few days.
git repos seem to be real bait for these damn AI scrapers.
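Something in the spirit of that script might look like the hedged sketch below; it is not the commenter's actual code, the log path, honeypot paths, and user-agent list are placeholders, and it assumes an ipset set named "blacklist" already exists and is referenced by an iptables DROP rule:

#!/usr/bin/env python3
# Scan a combined-format access log for honeypot URL hits or known bot
# user agents and add the offending IPs to an ipset blacklist.
import re
import subprocess

LOG = "/var/log/apache2/access.log"       # placeholder path
POISON_PATHS = ("/trap/", "/nofollow/")   # placeholder honeypot URLs
BOT_AGENTS = re.compile(r"GPTBot|ClaudeBot|Bytespider|PetalBot", re.I)
# combined log format: ip - - [time] "METHOD /path HTTP/x.x" status size "ref" "ua"
LINE = re.compile(r'^(\S+).*?"[A-Z]+ (\S+) [^"]*".*"([^"]*)"\s*$')

def ban(ip: str) -> None:
    # -exist makes re-adding an already banned IP a no-op
    subprocess.run(["ipset", "add", "blacklist", ip, "-exist"], check=False)

with open(LOG) as fh:
    for line in fh:
        m = LINE.match(line)
        if not m:
            continue
        ip, path, agent = m.groups()
        if path.startswith(POISON_PATHS) or BOT_AGENTS.search(agent):
            ban(ip)

A real version would tail the log continuously and de-duplicate, but the banning logic is about this simple.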
like this
TVA likes this.
This is the way. I also have rules for hits to URLs that should never be hit without a referer, with some threshold to account for a user hitting F5. Plus a whitelist of real users (ones that got a 200 on a login endpoint). Mostly the Huawei and Tencent crawlers have fake user agents and no referer. Another thing crawlers don't do is caching. A user would never download that same .js file 100s of times in an hour, all their devices' browsers would have cached it. There's quite a lot of these kinds of patterns that can be used to block bots. Just takes watching the logs a bit to spot them.
Then there's ratelimiting and banning ip's that hit the ratelimit regularly. Use nginx as a reverse proxy, set rate limits for URLs where it makes sense, with some burst set, ban IPs that got rate-limited more than x times in the past y hours based on the rate limit message in the nginx error.log. Might need some fine tuning/tweaking to get the thresholds right but can catch some very spammy bots. Doesn't help with those that just crawl from 100s of ips but only use each ip once every hour, though.
Ban based on the bot user agents, for those that set it. Sure, theoretically robots.txt should be the way to deal with that, for well behaved crawlers, but if it's your homelab and you just don't want any crawlers, might as well just block those in the firewall the first time you see them.
Downloading abuse ip lists nightly and banning those, that's around 60k abusive ip's gone. At that point you probably need to use nftables directly though instead of iptables or going through ufw, for the sets, as having 60k rules would be a bad idea.
there's lists of all datacenter ip ranges out there, so you could block as well, though that's a pretty nuclear option, so better make sure traffic you want is whitelisted. E.g. for lemmy, you can get a list of the ips of all other instances nightly, so you don't accidentally block them. Lemmy traffic is very spammy…
there's so much that can be done with f2b and a bit of scripting/writing filters
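The nightly abuse-list idea mentioned a couple of paragraphs up could look something like this sketch; the list URL and set name are hypothetical, and it assumes an inet filter table exists with a set created beforehand along the lines of nft add set inet filter badips '{ type ipv4_addr; flags interval; }' plus a rule dropping ip saddr @badips:

#!/usr/bin/env python3
# Download a published blocklist and reload it into an nftables set in one
# batched nft call, so tens of thousands of entries don't become rules.
import subprocess
import urllib.request

LIST_URL = "https://example.org/blocklist.netset"   # placeholder URL

with urllib.request.urlopen(LIST_URL) as resp:
    entries = [l.strip() for l in resp.read().decode().splitlines()
               if l.strip() and not l.startswith("#")]

batch = "flush set inet filter badips\n"
if entries:
    batch += f"add element inet filter badips {{ {', '.join(entries)} }}\n"
subprocess.run(["nft", "-f", "-"], input=batch.encode(), check=True)

Run it from a nightly cron or systemd timer and the set stays current without touching individual firewall rules.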
crawler-user-agents/crawler-user-agents.json at master · monperrus/crawler-user-agents
Syntactic patterns of HTTP user-agents used by bots / robots / crawlers / scrapers / spiders. pull-request welcome :star: - monperrus/crawler-user-agentsGitHub
You mean for the referer part? Of course you don't want it for all urls and there's some legitimate cases. I have that on specific urls where it's highly unlikely, not every url. E.g. a direct link to a single comment in lemmy, and whitelisting logged-in users. Plus a limit, like >3 times an hour before a ban. It's already pretty unusual to bookmark a link to a single comment
It's a pretty consistent bot pattern, they will go to some subsubpage with no referer with no prior traffic from that ip, and then no other traffic from that ip after that for a bit (since they cycle through IPs on each request) but you will get a ton of these requests across all ips they use. It was one of the most common patterns I saw when I followed the logs for a while.
of course having some honeypot url in a hidden link or something gives more reliable results, if you can add such a link, but if you're hosting some software that you can't easily add that to, suspicious patterns like the one above can work really well in my experience. Just don't enforce it right away, have it with the 'dummy' action in f2b for a while and double check.
And I mostly intended that as an example of seeing suspicious traffic in the logs and tailoring a rule to it. Doesn't take very long and can be very effective.
GitHub - firehol/blocklist-ipsets: ipsets dynamically updated with firehol's update-ipsets.sh script
ipsets dynamically updated with firehol's update-ipsets.sh script - firehol/blocklist-ipsetsGitHub
I've repeatedly stated this before: Proof of Work bot-management is only Proof of JavaScript bot-management. It is nothing for a headless browser to bypass. Proof of JavaScript does work and will stop the vast majority of bot traffic. That's how Anubis actually works. You don't need to punish actual users by abusing their CPU. POW is a far higher cost on your actual users than the bots.
Last I checked Anubis has a JavaScript-less strategy called "Meta Refresh". It first serves you a blank HTML page with a <meta> tag instructing the browser to refresh and load the real page. I highly advise using the Meta Refresh strategy. It should be the default.
I'm glad someone is finally making an open source and self hostable bot management solution. And I don't give a shit about the cat-girls, nor should you. But Techaro admitted they had little idea what they were doing when they started and went for the "nuclear option". Fuck Proof of Work. It was a Dead On Arrival idea decades ago. Techaro should strip it from Anubis.
I haven't caught up with what's new with Anubis, but if they want to get stricter bot-management, they should check for actual graphics acceleration.
like this
TVA likes this.
Funnily enough, PoW was a hot topic in academia around the late 90s / early 2000s, and it's somewhat clear that the author of Anubis has not read much about the discussion back then.
There was a paper called "Proof of work does not work" (or similar, can't be bothered to look it up) that argued that PoW cannot work for spam protection, because you have to support low-powered consumer devices while blocking spammers with heavy hardware. And that is a very valid concern. Then there was a paper arguing that PoW can still work, as long as you scale the difficulty in such a way that a legit user (e.g. only sending one email) has a low difficulty, while a spammer (sending thousands of emails) has a high difficulty.
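As a toy illustration of that scaling idea (not taken from the paper and not something Anubis does; the function name and numbers are made up):

import math

def challenge_difficulty(requests_last_hour: int, base_bits: int = 4, max_bits: int = 22) -> int:
    # A client that made one request gets a cheap puzzle; one that made
    # thousands gets an exponentially harder one, since every extra bit
    # roughly doubles the expected hashing work.
    extra = int(math.log2(max(requests_last_hour, 1)))
    return min(base_bits + extra, max_bits)

assert challenge_difficulty(1) == 4        # casual visitor: trivial puzzle
assert challenge_difficulty(4096) == 16    # heavy sender: ~4096x the work per request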
The idea of blocking known bad actors actually is used in email quite a lot in forms of DNS block lists (DNSBLs) such as spamhaus (this has nothing to do with PoW, but such a distributed list could be used to determine PoW difficulty).
Anubis on the other hand does nothing like that and a bot developed to pass Anubis would do so trivially.
Sorry for long text.
like this
TVA likes this.
like this
TVA likes this.
Then there was a paper arguing that PoW can still work, as long as you scale the difficulty in such a way that a legit user
Telling a legit user from a fake user is the entire game. If you can do that you just block the fake user. Professional bot blockers like Cloudflare or Akamai have machine learning systems to analyze trends in network traffic and serve JS challenges to suspicious clients. Last I checked, all Anubis uses is User-Agent filters, which is extremely behind the curve. Bots are able to get down to faking TLS fingerprints and matching them with User-Agents.
POW is a far higher cost on your actual users than the bots.
That sentence tells me that you either don't understand or consciously ignore the purpose of Anubis. It's not to punish the scrapers, or to block access to the website's content. It is to reduce the load on the web server when it is flooded by scraper requests. Bots running headless Chrome can easily solve the challenge, but every second a client is working on the challenge is a second that the web server doesn't have to waste CPU cycles on serving clankers.
POW is an inconvenience to users. The flood of scrapers is an existential threat to independent websites. And there is a simple fact that you conveniently ignored: it fucking works.
like this
TVA likes this.
It's like you didn't understand anything I said. Anubis does work. I said it works. But it works because most AI crawlers don't have a headless browser to solve the PoW. To operate efficiently at the high volume required, they use raw HTTP requests. The vast majority are probably using the basic Python requests module.
You don't need PoW to throttle general access to your site and that's not the fundamental assumption of PoW. PoW assumes (incorrectly) that bots won't pay the extra FLOPs to scrape the website. But bots are paid to scrape the website; users aren't. They'll just scale horizontally and open more parallel connections. They have the money.
You are arguing a strawman. Anubis works because most AI scrapers (currently) don't want to spend extra on running headless chromium, and because it slightly incentivises AI scrapers to correctly identify themselves as such.
Most of the AI scraping is frankly just shoddy code written by careless people that don't want to ddos the independent web, but can't be bothered to actually fix that on their side.
You are arguing a strawman. Anubis works because most AI scrapers (currently) don’t want to spend extra on running headless chromium
WTF, That's what I already said? That was my entire point from the start!? You don't need PoW to force headless usage. Any JavaScript challenge will suffice. I even said the Meta Refresh challenge Anubis provides is sufficient and explicitly recommended it.
And how do you actually check for working JS in a way that can't be easily spoofed? Hint: PoW is a good way to do that.
Meta refresh is a downgrade in usability for everyone but a tiny minority that has disabled JS.
And how do you actually check for working JS in a way that can’t be easily spoofed? Hint: PoW is a good way to do that.
Accessing the browser's APIs in any way is far harder to spoof than some hashing. I already suggested checking whether the browser has graphics acceleration; that would filter out the vast majority of headless browsers too. PoW is just math and is easy to spoof without running any JavaScript. You can even do it faster than real JavaScript users with something like Rust or C.
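To make that concrete, here is a minimal sketch of a scraper solving a generic "find a nonce whose SHA-256 hash starts with N zero hex digits" puzzle in plain Python, with no JavaScript engine involved (the challenge format is assumed for illustration and is not Anubis's actual wire format):

```python
# Sketch: solving a generic SHA-256 proof-of-work puzzle outside a browser.
# The challenge format is assumed for illustration, not Anubis's protocol.
import hashlib
import itertools


def solve_pow(challenge: str, zero_digits: int) -> int:
    target = "0" * zero_digits
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce


# ~65k hashes on average: a fraction of a second even in plain Python.
print(solve_pow("example-challenge-string", 4))
```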
Meta refresh is a downgrade in usability for everyone but a tiny minority that has disabled JS.
What are you talking about? It just refreshes the page without doing any of the extra computation that PoW does. What extra burden does it put on users?
If you check for a GPU (not generally a bad idea), you will have the same people who currently complain about JS complaining that this breaks with their anti-fingerprinting browser add-ons.
But no, you can't spoof PoW, obviously; that's the entire point of it. Whether you do the calculation in JavaScript or not doesn't really matter for it to work.
In the current shape Anubis has zero impact on usability for 99% of the site visitors, not so with meta refresh.
You will have people complain about their anti-fingerprinting being blocked with every bot-management solution. Your ability to navigate the internet anonymously is directly correlated with a bot's ability to scrape. That has never been my complaint about Anubis.
My complaint is that the calculations Anubis forces you to do are an absolutely negligible burden for a bot to solve. The hardest part is just having a JavaScript interpreter available. Making the author of the scraper write custom code to deal with your website is the most effective way to prevent bots.
Think about how much computing power AI data centers have. Do you think they give a shit about hashing some values for Anubis? No. They burn more compute power than a thousand Anubis challenges generating a single llm answer. PoW is a backwards solution.
Please think. Captchas worked because they're supposed to be hard for a computer to solve but easy for a human. PoW is the opposite.
In the current shape Anubis has zero impact on usability for 99% of the site visitors, not so with meta refresh.
Again, I ask you: what extra burden does meta-refresh impose on users? How does setting a cookie and immediately refreshing the page burden the user more than making them wait longer and drain their battery before doing the exact same thing? It's strictly less intrusive.
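For readers following along, this is roughly the shape of a cookie-plus-meta-refresh check (a hypothetical handler, not Anubis's actual implementation): the first response sets a cookie and serves a page that immediately refreshes itself, and only clients that come back presenting the cookie get the real content.

```python
# Hypothetical sketch of a cookie + <meta http-equiv="refresh"> challenge.
# Not Anubis's code; tokens are kept in memory purely for illustration.
import secrets
from http.server import BaseHTTPRequestHandler, HTTPServer

TOKENS = set()


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        cookies = self.headers.get("Cookie", "")
        if any(f"challenge={t}" in cookies for t in TOKENS):
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<h1>Real content</h1>")
            return
        # First visit: hand out a token and tell the browser to reload.
        token = secrets.token_hex(16)
        TOKENS.add(token)
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Set-Cookie", f"challenge={token}; Path=/")
        self.end_headers()
        self.wfile.write(b'<meta http-equiv="refresh" content="0">'
                         b"<p>Checking your browser...</p>")


if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```

A client that honours cookies and meta refresh passes without doing any computation; a bare requests-style client never comes back with the cookie.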
No one is disputing that in theory (!) Anubis offers very little protection against an adversary that specifically tries to circumvent it, but we are dealing with an elephant in the porcelain shop kind of situation. The AI companies simply don't care if they kill off small independently hosted web-applications with their scraping and Anubis is the mouse that is currently sufficient to make them back off.
And no, forced site reloads are extremely disruptive for web-applications and often force a lot of extra load for re-authentication etc. It is not as easy as you make it sound.
Anubis forces the site to reload when doing the normal PoW challenge! Meta Refresh is a sufficient mouse to block 99% of all bot traffic without being any more burdensome than PoW.
You've failed to demonstrate why meta-refresh is more burdensome than PoW and have pivoted to arguing the point I was making from the start as though it was your own. I'm not arguing with you any further. I'm satisfied that I've convinced any readers of our discussion.
Something that hasn't been mentioned much in discussions about Anubis is that it has a graded tier system for how sketchy a client is and changes the kind of challenge based on a weighted priority system.
The default bot policies it comes with have it so squeaky-clean regular clients are passed through, only slightly weighted clients/IPs get the meta-refresh, and it's only at the moderate-suspicion level that the JavaScript proof of work kicks in. The bot policy and weight triggers for these levels, the challenge action, and the duration of a client's validity are all configurable.
It seems to me that the sites that heavy-hand the proof of work for every client, with a validity that only lasts 5 minutes, are the ones giving Anubis a bad rap. The default bot policy settings Anubis comes with don't trigger PoW on the regular Firefox Android clients I've tried, including hardened IronFox; meanwhile other sites show the finger wag on every connection no matter what.
It's understandable why some choose strict policies, but they give the impression this is the only way it should be done, which is overkill. I'm glad there are config options to mitigate the impact on normal user experience.
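As a generic illustration of that kind of tiering (made-up weights, thresholds and signals, not Anubis's actual bot-policy schema), the logic boils down to summing suspicion weights and serving the cheapest challenge that the score allows:

```python
# Generic sketch of weight-based challenge selection.
# Weights, thresholds and signals are hypothetical, not Anubis's schema.
def suspicion_weight(user_agent: str, ip_on_blocklist: bool, sends_cookies: bool) -> int:
    weight = 0
    if not user_agent or "python-requests" in user_agent.lower():
        weight += 10
    if ip_on_blocklist:
        weight += 20
    if not sends_cookies:
        weight += 5
    return weight


def challenge_for(weight: int) -> str:
    if weight == 0:
        return "pass-through"      # squeaky clean: no challenge at all
    if weight < 10:
        return "meta-refresh"      # mild suspicion: cheap, JS-free check
    return "js-proof-of-work"      # moderate suspicion and above


print(challenge_for(suspicion_weight("Mozilla/5.0 ...", False, True)))       # pass-through
print(challenge_for(suspicion_weight("python-requests/2.32", True, False)))  # js-proof-of-work
```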
Anubis is that it has a graded tier system of how sketchy a client is and changing the kind of challenge based on a weighted priority system.
Last I checked that was just User-Agent regexes and IP lists. But that's where Anubis should continue development, and hopefully they've improved since. Discerning real users from bots is how you do proper bot management. Not imposing a flat tax on all connections.
I don't mind Anubis but the challenge page shouldn't really load an image. It's wasting extra bandwidth for nothing.
Just parse the challenge and move on.
Selfhosted reshared this.
edit: 28 KB on disk
An HTTP GET request is a few hundred bytes. The response is 28 KB; that's up to a 280x amplification. If a large botnet wanted to denial-of-service an Anubis-protected site, requesting that image could be enough.
Ideally, Anubis should serve as little data as possible until the PoW is completed. Caching the PoW algorithm (and the image) on a CDN would also mitigate the issue.
like this
TVA likes this.
Selfhosted reshared this.
With a bit of code-golfing, the data served by Anubis prior to the PoW could be a few hundred bytes, without impacting its functionality.
Kilgore Trout likes this.
like this
TVA likes this.
It's actually a brilliant monetization model. If you want to use it as is, it's free, even for large corporate clients.
If you want to get rid of the puppygirls though, that's when you have to pay.
like this
TVA likes this.
At the time of commenting, this post is 8h old. I read all the top comments, many of them critical of Anubis.
I run a small website and don't have problems with bots. Of course I know what a DDOS is - maybe that's the only use case where something like Anubis would help, instead of the strictly server-side solution I deploy?
I use CrowdSec (it seems to work with caddy btw). It took a little setting up, but it does the job.
(I think it's quite similar to fail2ban in what it does, plus community-updated blocklists)
Am I missing something here? Why wouldn't that be enough? Why do I need to heckle my visitors?
Despite all that I still had a problem with bots knocking on my ports spamming my logs.
By the time Anubis gets to work, the knocking already happened so I don't really understand this argument.
If the system is set up to reject a certain type of request, these are microsecond transactions causing no harm (DDoS excepted).
like this
TVA likes this.
If CrowdSec works for you that's great, but it's also a corporate product whose premium sub tier starts at $900/month, so not exactly a pure self-hosted solution.
I'm not a hypernerd; I'm still figuring all this out among the myriad of possible solutions with different complexity and setup times. All the self-hosters in my internet circle started adopting Anubis so I wanted to try it. Anubis was relatively plug-and-play, with prebuilt packages and great install guide documentation.
Allow me to expand on the problem I was having. It wasn't just that I was getting a knock or two; I was getting 40 knocks every few seconds, scraping every page and searching for a bunch of paths that didn't exist, looking for exploit points found in unsecured production VPS systems.
On a computational level, the constant network activity of bytes from web pages, zip files and images downloaded by scrapers pollutes traffic. Anubis stops this by trapping them on a landing page that transmits very little information from the server side. By trapping the bot in an Anubis page, which it spams 40 times on a single open connection before it gives up, it reduces overall network activity / data transferred, which is often billed as a metered thing, as well as the log noise.
And this isn't all or nothing. You don't have to pester all your visitors, only those with sketchy clients. Anubis uses a weighted priority which grades how legit a browser client is. Most regular connections get through without triggering anything; weird connections get various grades of checks depending on how sketchy they are. Some checks don't require proof of work or JavaScript.
On a psychological level it gives me a bit of relief knowing that the bots are getting properly sinkholed and that I'm punishing/wasting the compute of some asshole trying to find exploits in my system to expand their botnet. And a bit of pride knowing I did this myself on my own hardware without having to cop out to a corporate product.
It's nice that people of different skill levels and philosophies have options to work with. One tool can often complement another, too. Anubis worked for what I wanted: filtering out bots from wasting network bandwidth and giving me peace of mind where before I had no protection, all while not being noticeable for most people, because I have the ability to configure it not to heckle every client every 5 minutes like some sites want to do.
If CrowdSec works for you that's great, but it's also a corporate product
It's also fully FLOSS with dozens of contributors (not to speak of the community-driven blocklists). If they make money with it, great.
not exactly a pure self hosted solution.
Why? I host it, I run it. It's even in Debian Stable repos, but I choose their own more up-to-date ones.
Allow me to expand on the problem I was having. It wasn't just that I was getting a knock or two; I was getting 40 knocks every few seconds, scraping every page and searching for a bunch of paths that didn't exist, looking for exploit points found in unsecured production VPS systems.
- Again, a properly set up WAF will deal with this pronto
- You should not have exploit points in unsecured production systems, full stop.
On a computational level the constant network activity of bytes from webpage, zip files and images downloaded from scrapers pollutes traffic. Anubis stops this by trapping them in a landing page that transmits very little information from the server side.
- And instead you leave the computations to your clients. Which becomes a problem on slow hardware.
- Again, with a properly set up WAF there's no "traffic pollution" or "downloading of zip files".
Anubis uses a weighted priority which grades how legit a browser client is.
And apart from the user agent and a few other responses, all of which are easily spoofed, this means "do some JavaScript stuff on the local client" (there's a link to an article here somewhere that explains this well), which will eat resources on the client's machine, which becomes a real pita on e.g. smartphones.
Also, I use one of those less-than-legit, weird and non-regular browsers, and I am being punished by tools like this.
edit: I feel like this part of OP's argument needs to be pointed out, it explains so much:
All the self hosters in my internet circle started adopting anubis so I wanted to try it. Anubis was relatively plug and play with prebuilt packages
like this
TVA likes this.
why? I run it.
Mmm, how to say this. I suppose what I'm getting at is a philosophy of development and the known behaviours of corporate products.
So, here's what I understand about CrowdSec. It's essentially a centralized collection of continuously updated iptables rules and bot-scanning detectors that clients install locally.
In a way its crowdsourcing is like a centralized mesh network: each client is a scanner node which phones home threat data to the corporate hub, which then updates the shared blocklists.
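Mechanically, the client side of that loop is not magic. Something along these lines (hypothetical blocklist URL and set name, and it assumes an existing nftables set) is roughly all a local "remediation component" has to do:

```python
# Hypothetical sketch: pull a shared blocklist and feed it into a local
# nftables set. The URL and set name are placeholders, and the set
# 'inet filter blocklist' is assumed to already exist on the box.
import subprocess
import urllib.request

BLOCKLIST_URL = "https://example.com/community-blocklist.txt"  # placeholder


def fetch_blocklist() -> list[str]:
    with urllib.request.urlopen(BLOCKLIST_URL) as resp:
        return [line.strip() for line in resp.read().decode().splitlines() if line.strip()]


def ban(ip: str) -> None:
    subprocess.run(
        ["nft", "add", "element", "inet", "filter", "blocklist", f"{{ {ip} }}"],
        check=True,
    )


if __name__ == "__main__":
    for ip in fetch_blocklist():
        ban(ip)
```

The hard part is curating the list, which is exactly the part that currently lives on the centralized side.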
Notice the operative word: centralized. The company owns that central hub, and it's their proprietary black box to do what they want with. And you know what for-profit companies like to do to their services over time? Enshittify them by:
- adding subscription tier price models
- putting once-free features behind paywalls
- changing data sharing requirements as a condition for free access
- restricting free API access tighter and tighter to encourage paid tiers
- making paid tiers cost more to do less
- intentionally ruining features in one service to drive power users to a different one
They can and do use these tactics to drive up profit or reduce overhead once a critical mass has been reached. I do not expect altruism and respect for users from corporations; I expect bean counters using altruism as a vehicle to attract users in the growth phase, then flipping the switch in their ToS to go full penny-pinching once they're too big to fail.
::: spoiler CrowdSec's pricing updates from last year
CrowdSec updated pricing policy
Hi everyone,
Our former pricing model led to some incomprehensions and was sub-optimal for some use-cases.
We remade it entirely here. As a quick note, in the former model, one never had to pay $2.5K to get premium blocklists. This was Support for Enterprise, which we poorly explained. Premium blocklists were and are still available from the premium SaaS plan, accessible directly from the SaaS console.
Here are the updates:
Security Engine: All its embedded features (IDS, IPS and WAF) were, are and will remain free.
SAAS: The free plan offers up to three silver-grade blocklists (on top of receiving IP related to signals your security engines share). Premium plans can use any free, premium and gold-grade blocklists. Previously, we had a premium and an enterprise plan with more features. All features are now merged into a unique SaaS enterprise plan. The one starting at $31/month. As before, those are available directly from the SaaS console page: app.crowdsec.net
SUPPORT: The $2.5K (which were mostly support for Enterprise) are now becoming optional. Instead, a client can contract $1K for Emergency bug & security fixes and $1K for support if they want to.
BLOCKLISTS: Very specific (country targeted, industry targeted, stack targeted, etc.) or AI-enhanced are now nested in a different offer named "Platinum blocklists subscription". You can subscribe to them, regardless of whether you use the FOSS Security Engine or not. They can be joined, tuned, and injected directly into most firewalls with regular automatic remote updates of their content. As long as you do not resell them (meaning you are the final client), you can use the subscription in any part of your company.
CTI DATA: They can be consumed through API keys with associated quotas. These are affordable and intended for use in tools like OpenCTI, MISP, The Hive, Xsoar, etc. Costs are in the range of hundreds of dollars per month. The Full CTI database can also be locally replicated at your place and constantly synced for deltas. Those are the largest plans we have, and they are usually destined to L/XL enterprises, governmental bodies, OEM & hardware vendors.
Safer together.
Comments section:
ShroomShroomBeepBeep:
Whilst I'm pleased to see it made clearer, £290 a year for each security engine is still far too expensive for me to consider it.
GuitarEven:
We get that £290 is too high for individual home labs. Those offers are made for companies.
Free tier features should cover homelabs correctly.
Features that are oriented for enterprise clients.
If a company cannot invest $300 yearly in its security, no judgment and the free tier will still be very helpful until it recovers some budget margins to strengthen its security posture.
[deleted]:
Any idea why we don't have any good free / freemium (max $5 per month) app yet? Reason I'm asking: AdGuard, uBlock Origin etc. had filters which match JS/domains and filter them out. The same logic can be applied at least for the IP lists, so that these IPs can be added to iptables to block. A lot of things are easy to make; the tough ones are things like scenarios and maybe SSH bw etc. I wonder why there's no real competition.
GuitarEven:
hi u/ElizabethThomas44
Well you actually do. To date, for free, you get:
* the security engine (IDS/IPS/WAF)
* all scenarios
* the blocklist of IPs you are participating to detect when you use scenarios and share signals
* the free tier of the console
The IPs you automatically get for free are already added to your nftables or iptables using the related remediation component.
<TL/DR> You already have it.
(damn, personal reddit account, sorry, this is Philippe@CrowdSec)
:::
At the end of the day it's not the thousands of anonymous users contributing their logs or the FOSS volunteers on git getting a quarterly payout. They're the product, and free compute plus live-action pen-testing guinea pigs, no matter what PR gets spun about how much they care about the security of the plebs using their network for free.
It's always about maximizing the money with these people; your security can get fucked if they don't get some use out of you. Expect that at some point the ToS will change so that anonymized data sharing is no longer optional for the free tier.
What happens if the company goes bankrupt? Does it just stop working when their central servers shut down? Does their open source security have the possibility of being forked and run from local servers?
It doesn't have to be like this. Peer-to-peer decentralized mesh networks like YaCy already show it's possible for a crowdsourced network of users to all contribute to an open database. Something that can be run completely as a local node which federates and updates the information in a global node. Something like that, updating a global iptables, would already be a step in the right direction. In that theoretical system there is no central monopoly; it's like the fediverse, where everyone contributes to hosting the global network as a mesh to which altruistic hobbyists can contribute free compute on their own terms.
github.com/yacy/yacy_search_se…
"I don't see anything wrong with people getting paid" is something I see often in these discussions. There's nothing wrong with people who do work and make contributions getting paid. What's wrong is that it isn't the open source community on GitHub or the users contributing their precious data getting paid; it's a for-profit centralized monopoly that controls access to the network which the open source community built for free out of altruism.
The pattern is nearly always the same. The thing that once worked well and which you relied on gets slowly worse with each ToS update, while the pricing inches just a dollar higher each quarter, and you get less and less control over how you get to use their product. It's pattern recognition.
The only solution is to cut the head off the snake. If I can't fully host all of the components, see the source code of the mechanisms at all layers, own a local copy of the global database, then its not really mine.
Again, it's a philosophy thing. It's very easy to look at all that, shrug, and go "whatever, not my problem, I'll just switch if it becomes an issue". But the problem festers the longer it's ignored or enabled for convenience. The community needs to truly own the services it runs on at every level, it has to be open, and for-profit bean counters can't be part of the equation, especially for hosting. There are homelab hobbyists out there who will happily eat cents on an electric bill to serve an open service to a community; get 10,000 of them on a truly open source decentralized mesh network and you can accomplish great things without fear of being the product.
- CrowdSec Console
CrowdSec is an open-source and collaborative security stack leveraging the crowd power. Analyze behaviors, respond to attacks & share signals across the community. Join the community and let's make the Internet safer, together.app.crowdsec.net
With varnish and wazuh, I've never had a need for Anubis.
My first recommendation for anyone struggling with bots is to fix their cache.
like this
TVA likes this.
Anubis was originally created to protect git web interfaces since they have a lot of heavy-to-compute URLs that aren't feasible to cache (revision diffs, zip downloads etc).
After that I think it got adopted by a lot of people who didn't actually need it, they just don't like seeing AI scrapers in their logs.
Yes!
Also, another very simple solution is to authwall expensive pages that can't be cached.
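A minimal sketch of that idea (Flask is used here purely for illustration; the routes are made up): leave the cheap, cacheable pages public and require auth before touching the expensive, uncacheable ones.

```python
# Sketch: authwall the expensive, uncacheable routes, leave the rest public.
# Flask and the route names are illustrative only.
from flask import Flask, abort, request

app = Flask(__name__)
API_TOKEN = "change-me"  # placeholder; use real sessions/auth in practice


@app.route("/")
def index():
    return "Cheap, cacheable landing page"


@app.route("/diff/<rev_a>/<rev_b>")
def expensive_diff(rev_a, rev_b):
    # Expensive to compute and hard to cache, so gate it behind auth.
    if request.headers.get("Authorization") != f"Bearer {API_TOKEN}":
        abort(401)
    return f"diff between {rev_a} and {rev_b}"


if __name__ == "__main__":
    app.run()
```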
AI scraping is a massive issue for specific types of websites, such as git forges, wikis and to a lesser extent Lemmy etc., that rely on complex database operations that cannot be easily cached. Unless you massively overprovision your infrastructure, these web applications come to a grinding halt by constantly maxing out the available CPU power.
The vast majority of the critical commenters here seem to talk from a point of total ignorance about this, or assume operators of such web applications have time for hypervigilance, constantly monitoring and manually blocking AI scrapers (which do their best to circumvent more basic blocks). The realistic options for such operators right now are: Anubis (or similar), Cloudflare, or shutting down their servers. Of these, Anubis is clearly the least bad option.
Sounds like maybe webapps are a bad idea then.
If they need dynamism, how about releasing a desktop application?
I also used CrowdSec for almost a year, but as AI scrapers became more aggressive, CrowdSec alone wasn’t enough. The scrapers used distributed IP ranges and spoofed user agents, making them hard to detect and costing my Forgejo instance a lot in expensive routes. I tried custom CrowdSec rules but hit its limits.
Then I discovered Anubis. It’s been an excellent complement to CrowdSec — I now run both. In my experience they work very well together, so the question isn’t “A or B?” but rather “How can I combine them, if needed?”
You are right. For most self-hosting use cases Anubis is not only irrelevant, it actually works against you: a false sense of security, plus making your devices do extra work for nothing.
Anubis is meant for public-facing services that may get DDoSed or AI-scraped by some non-targeted bot (for a targeted bot it's trivial to get past Anubis in order to scrape).
And it's never a substitute for CrowdSec or fail2ban. Getting an Anubis token is just a matter of executing the PoW challenge; you still need a way to detect and ban malicious attacks.
like this
TVA likes this.
yes, please be mindful when using cloudflare. with them you’re possibly inviting in a much much bigger problem
Great article, but I disagree about WAFs.
Try securing a nonprofit's web infrastructure as the one IT guy, with no budget for devs or security.
It would be nice if we could update servers constantly and patch unmaintained code, but sometimes you just need to front it with something that plugs those holes until you have the capacity to do updates.
But 100% the WAF should be run locally, not a MiTM from evil US corp in bed with DHS.
like this
TVA likes this.
Lol, I'm the sysadmin for many sites that don't have these issues, so obviously I do...
If you're the one who thinks you need this trash PoW fronting a static site, then clearly you're the one who is ignorant.
99% of the pages that Anubis is fronting are static.
It's an abuse of the tool that's harming the internet.
Please share if you know.
The only way I know how to do this is running a Tor Onion Service, since the tor protocol has built-in pow support (without js)
It's this one: git.gammaspectra.live/git/go-a…
The project name is a bit unfortunate to show to users; maybe change that if you're going to use it.
Some known privacy services use it too, including the Invidious instance at nadeko.net, so you can check there how it works. It's one of the most popular Invidious servers, so I guess it can't be bad, and they use multiple kinds of checks for each visitor.
go-away
Self-hosted abuse detection and rule enforcement against low-effort mass AI scraping and bots.GammaSpectra.Live Git
sure, but they have to maintain it.
Wazuh ships with rules that are maintained by Wazuh. Less code rot.
like this
TVA likes this.
Inspired by this post I spent a couple of hours today trying to set this up on my toy server, only to immediately run into what seems to be a bug where <video> tags loading a simple WebM video from right next to index.html broke because the media response got Anubis's HTML bot check instead of media.
I suppose my use-case was just too complicated.
Browser verification triggers for specific type of media
Describe the bug I have a redlib instance running behind anubis and when trying to play GIFs, it fails. If I check out the response, it's just anubis trying to verify my browser instead of the medi...dieser-niko (GitHub)
I don't think you have a usecase for Anubis.
Anubis is mainly aimed against badly behaved AI scrapers, plus some DDoS mitigation if you have a heavy service.
You are getting hit exactly the same; Anubis doesn't put up a block list or anything, it just puts itself in front of the service. The load on your server and the risk you take are very similar with or without Anubis here. Most bots are not AI scrapers, they are just probing, so the hit on your server is the same.
What you want is to properly set up fail2ban or, even better, CrowdSec. That would actually block and ban bots that try to probe your server.
If you are just self-hosting, the only thing you are doing with Anubis is diverting the log noise into Anubis's logs and making your devices do a PoW every once in a while when you want to use your services.
Being honest, I don't know what you are self-hosting. But unless it's something that's going to get DDoSed or AI-scraped, there's not much point to Anubis.
Also, Anubis is not a substitute for fail2ban or CrowdSec. You need something to detect and ban brute-force attacks; if not, the attacker only needs to execute the Anubis challenge, get the token for the week, and then they are free to attack your services as they like.
like this
TVA likes this.
If I wanted to use an app that doesn't run in a web browser (e.g. the native Jellyfin app), how would that work? Does it still work then?
Not hosting any page meant for public consumption anyway so it's not really important.
But thanks for answering 😀
If the app is just a WebView wrapper around the application, then the challenge page would load and try to be evaluated.
If it's a native Android/iOS app, then it probably wouldn't work because the app would try to make HTTP API calls and get back something unexpected.
So I guess the answer is no.
The creator is active on a professional slack I'm on and they're lovely and receptive to user feedback. Their tool is very popular in the online archives/cultural heritage scene (we combine small budgets and juicy, juicy data).
My site has enabled js-free screening when the site load is low, under the theory that if the site load is too high then no one's getting in anyway.
go-away
Self-hosted abuse detection and rule enforcement against low-effort mass AI scraping and bots.GammaSpectra.Live Git
Stop playing whack-a-mole with these fucking people and build TARPITS!
Make it HURT to crawl your site illegitimately.
I am very annoyed that I have to enable cloudflare's JavaScript on so many websites, I would much prefer if more of them used Anubis so I didn't have third-party JavaScript running as often.
( coming from an annoying user who tries to enable the fewest things possible in NoScript )
Jacob Zuma’s daughter resigns amid claims South Africans tricked to fight for Russia
A daughter of the former South African president Jacob Zuma has resigned as an MP, after being accused of tricking 17 South African men into fighting for Russia in Ukraine by telling them they were travelling to Russia to train as bodyguards for the Zumas’ uMkhonto weSizwe (MK) party.
Duduzile Zuma-Sambudla, 43, the most visible and active in politics of her siblings, volunteered to resign and step back from public roles while cooperating with a police investigation and working to bring the men home, the MK chair, Nkosinathi Nhleko, said at a press conference in Durban.
Jacob Zuma’s daughter resigns amid claims South Africans tricked to fight for Russia
Duduzile Zuma-Sambudla quits as MP after being accused of recruiting 17 men who are trapped in war-torn UkraineRachel Savage (The Guardian)
like this
Atelopus-zeteki, Lasslinthar, massive_bereavement e frustrated_phagocytosis like this.
Bazzite just delivered over a petabyte of ISOs in a single month
One of the best gaming Linux OSes just shifted 1,000,000 GB of ISOs in a single month
That's a lot of downloading.Simon Batt (XDA)
like this
adhocfungus e essell like this.
So essentially you have a base system and you add what you need through flatpak, distrobox, homebrew, and if all else fails, by layering the packages on the base image with rpm-ostree.
What you can't do (that I'm aware of) is remove packages, or make bigger changes like adding another desktop environment besides the one it came with. I mean, I guess you can do it by layering, but it's probably messy.
Configuration and customisation are not an issue: /etc and /var are not immutable of course.
Distrobox is super cool btw, I knew it existed but Bazzite pushing me to use it was what I needed to finally try and appreciate it.
Airbus recalls 'significant' number of A320 jets after flight control incident
Airbus is recalling more than half of the jets in its global A320 fleet, which will disrupt thousands of flights around the world.
The company said the planes need an "immediate software change" to ensure flight control is sound.
The recall comes after a JetBlue plane’s nose dropped for several seconds without the pilot’s input during a flight in October, according to a European safety agency.
American Airlines says the news will disrupt more than 300 flights for its airline alone, while Air Canada says "very few" of its planes are affected.
like this
Lasslinthar e massive_bereavement like this.
like this
massive_bereavement likes this.
Turns out fighting fascism helps you live longer
A January study in the journal Social Science & Medicine found that volunteering slows down aging in retirees: the DNA of people who volunteered the equivalent of one to four hours a week showed distinctive biomarkers associated with decelerated epigenetic aging, with the most pronounced effects among retired people.
“People might do better, physically, psychologically, socially, if they have a role that they think is important and they identify with,” said Cal J. Halvorsen, a gerontological social work scholar at Washington University in St. Louis and one of the authors of the study. “In the American context, we take our jobs very seriously, and so we were curious if volunteering after retiring or when you’re no longer working might have a different effect on your epigenetic aging.”
That study is just part of a growing body of research on the health benefits of volunteering for retirees, a major benefit for older Americans who have mobilized for election defense and other core public services under attack. Another study published in February found that volunteering in early retirement among Americans also reduced rates of depression by around 10 percent—again, a more pronounced effect than in the general population.
Turns out fighting fascism helps you live longer
Retirees are mobilizing to defend democracy—and the benefits literally show up in their DNA.Mother Jones
like this
NoneOfUrBusiness, SuiXi3D, Lasslinthar, Quantumantics, Atelopus-zeteki, frustrated_phagocytosis e Maeve like this.
like this
NoneOfUrBusiness likes this.
The headline is wholly unsupported by the study. They asked seniors if they volunteered at "religious, educational, health-related, or other charitable organizations", not political organizations. Even as noted in the article:
The work also keeps Williams sane. Following politics leaves her “ready to tear somebody’s hair out,” she said.
If anything, the stress of living under fascism probably shortens your life expectancy. Apparently living in a democracy increases it by 11 years, which probably outweighs the volunteering effect: sciencedirect.com/science/arti…
Biometric 'human washing machine' cleans, dries and adapts to your mood
Japanese company Science is commercially producing its Mirai Ningen Sentakuki – Human Washing Machine of the Future – after an overwhelming response at the Osaka-Kansai Expo this year. Only 50 models will be made, with a price tag of US$385,000.
Unofficial IETF draft calls for grant of five nonillion IPv6 addresses to ham radio operators
Would not massively deplete IPv6, might challenge internet governance
Unofficial IETF draft calls for grant of five nonillion IPv6 addresses to ham radio operators
: Would not massively deplete IPv6, might challenge internet governanceSimon Sharwood (The Register)
nonillion
noun
nō-ˈnil-yən
US : a number equal to 1 followed by 30 zeros
also, British : a number equal to 1 followed by 54 zeros
Ursini’s proposal asks for a mere 2^112^ addresses
Unless I'm mistaken, that would be 5192296858534827628530496329220096, or a bit more than 5 followed by 33 zeros, which is orders of magnitude different from both definitions. I wonder what this article's author is on about.
It should have been decillion, yes, but at this scale/context it doesn't make much of a difference.
44::/16 = 2^112^ = 5,192,296,858,534,827,628,530,496,329,220,096 to be exact.
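The arithmetic is easy to double-check, since Python integers are exact:

```python
# 2**112 addresses (the 44::/16 allocation discussed in the draft),
# compared against the US-usage nonillion (10**30) and decillion (10**33).
addresses = 2 ** 112
print(addresses)              # 5192296858534827628530496329220096
print(addresses // 10 ** 30)  # 5192  -> about five thousand nonillion
print(addresses / 10 ** 33)   # ~5.19 -> about five decillion
```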
The Enshittification of Plex Is Kicking Off, Starting with Free Roku Users
And it's just going to get worse from here.
like this
adhocfungus, copymyjalopy e essell like this.
After everything else failed, Fedora Xfce saved my aging laptop
After bearing with a very slow laptop, I tried out Fedora Xfce and the results were staggeringly good.
https://www.neowin.net/editorials/after-everything-else-failed-fedora-xfce-saved-my-aging-laptop/
Keeping .yaml files up to date...
Broke my neck a few times (I'm currently waiting out the Jellyfin patches and staying on 10.10.7, I think).
Just a few days ago, my docker host upgraded the docker engine from 28 to 29.
Woke up to 10 notifications from my uptime monitoring that they are offline.
Funny thing is:
The external monitor showed they are down.
The internal monitor showed no issues.
But after I went through with the long-procrastinated upgrade from Debian 11 to Debian 13, migrating the data and doing nothing to the compose files, all services worked without any issue.
I don't know what my old host did or did not but now it works, I guess? Not complaining but the whole routing thing is a bit beyond me
Thank you for this idea. I wasn't aware that you can subscribe to an RSS feed for releases on GitLab/GitHub.
I think that I will follow your approach.
intitle:'beta'). Since I only view unread articles, that effectively deletes them and I never have to see them!
Tell me you don't read the manual without saying you don't read the manual.
I can recall a few! Mastodon. Lemmy. PiHole. Penpot. Mealie. Uptime Kuma.
They all mention required steps to upgrade between releases, including what to do to your docker installations and environment variables.
This is the kind of attitude that drives people away from open source.
Yes, people should read the manual, but at some point they will have questions, and there are a lot of projects that aren't clear on certain things. Such as YAML changes.
like this
TVA likes this.
Good projects will have docs associated with the docker/docker compose files.
The way we do it is, any update to the .yaml files will have a corresponding .yaml.Dev associated with it. That way it won't be overwritten when an update occurs, and it also gives a recommended setup.
I deploy and update my services similar to this fantastic guide: nickcunningh.am/blog/how-to-au…
Basically I run Komodo, which pulls a git repo. Renovate opens a PR (and most of the time the changelog is included, so I can quickly check what happened) for new versions. Once merged a webhook fires to tell Komodo to pull the new version.
I really recommend this approach now. Once set up it is very automatic, but not to the point of YOLO-automation like Watchtower and :latest 😅
How To: Automate version updates for your self-hosted Docker containers with Gitea, Renovate, and Komodo
In this guide I will go over how to automatically search for and be notified of updates for container images every night using Renovate, apply those updates by merging pull requests for them in Gitea, and automatically redeploy the updated containers…Nick Cunningham
This is new:
github.com/dkorecko/PatchPanda
Self-hostable Docker Compose stack update manager.
And
when you choose to update, PatchPanda edits compose/.env files and runs docker compose pull and docker compose up -d for the target stack. You can also view the live log.
Discovered in the latest Self Host Weekly:
I have not tried it myself tho.
Self-Host Weekly #147: Ad-Free
Default branches, PDF toolkits, streaming subscriptions, and a face full of turkeyEthan Sholly (selfh.st)
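For context, the manual equivalent of what tools like this automate is just two commands per stack; a trivial sketch (the stack path is a placeholder):

```python
# Sketch: the manual equivalent of an automated stack update -
# pull newer images, then recreate the containers. The path is a placeholder.
import subprocess

STACK_DIR = "/opt/stacks/my-service"  # placeholder


def update_stack(stack_dir: str) -> None:
    subprocess.run(["docker", "compose", "pull"], cwd=stack_dir, check=True)
    subprocess.run(["docker", "compose", "up", "-d"], cwd=stack_dir, check=True)


update_stack(STACK_DIR)
```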
PatchPanda
I too saw PatchPanda on selfh.st and it is on my watch list. The only thing holding me back is that it isn't out of beta yet. So, I'm waiting on other selfhosters to plow that field before I deploy. It does look like it would solve a lot of problems tho.
Same here.
Read deployment documentation, configure compose to my standards, deploy, update where necessary to align with the update (e.g. remove an environment variable).
The editing is done on my PC, then I open WinSCP or ssh into it (depending on my mood and the amount of changes) and apply the changes.
I set this up a while back (and recently moved to Forgejo, see the update note at the beginning of the article):
nickcunningh.am/blog/how-to-au…
Probably a tad overkill honestly but it works amazingly well, and turns every potential upgrade into an approval process so nothing will update when you don't want it to.
How To: Automate version updates for your self-hosted Docker containers with Gitea, Renovate, and Komodo
In this guide I will go over how to automatically search for and be notified of updates for container images every night using Renovate, apply those updates by merging pull requests for them in Gitea, and automatically redeploy the updated containers…Nick Cunningham
WireGuard LAN access fails when router VPN client is active
I run WireGuard on my router to hit my LAN services (SAMBA, home assistant, etc) from afar.
But when I enable the VPN client on my router, I can no longer access LAN services over WireGuard. "Allow LAN access" is set to true in the UI (Merlin).
Has anyone else run into this? Any ideas?
You are asking the WG server to listen to incoming requests from outside your lan subnet, so it is ignoring VPN requests from that subnet.
There are two solutions to this:
- Add routing to your wireguard server instance to allow the VPN intermediary subnet to accept connections from your lan subnet or
- Allow your wireguard client to split-tunnel, so it can reach subnets that aren't reachable outside your WG tunnel (see the sketch below).
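As a rough illustration of the second option (all keys and addresses below are placeholders; adjust them to your actual LAN and tunnel ranges), it's the client's AllowedIPs that decides which destinations are routed through the tunnel:

```ini
# Hypothetical WireGuard client config sketch - keys and subnets are placeholders.
[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/32            # tunnel address handed out by the router

[Peer]
PublicKey = <router-public-key>
Endpoint = vpn.example.com:51820
# Split tunnel: only the tunnel subnet and the home LAN go through WireGuard;
# everything else uses the normal uplink.
AllowedIPs = 10.8.0.0/24, 192.168.1.0/24
PersistentKeepalive = 25
```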
Elon Musk’s Grok Says It Would Kill Every Jewish Person on the Planet to Save Him
Elon Musk’s Grok Says It Would Kill Every Jewish Person on the Planet to Save Him
Elon Musk's AI chatbot Grok claimed it would "vaporize" every Jewish person on the planet to save the brain of its creator.Ahmad Austin Jr. (Mediaite)
like this
Australis13, Maeve, joshg253 e Lasslinthar like this.
Technology reshared this.
like this
Australis13 likes this.
I remember back in 2014 or so, I found a YT channel about companies and tech with very high production values on their videos; they had several videos on Elon and his companies.
And just about every time they spoke about Elon, they said it like this:
Entrepreneur Elon Musk
It kinda became a weird mantra they repeated as if being an Entrepreneur made him some kind of expert.
I noticed that at the time, and it has pissed me off since.
like this
giantpaper likes this.
Grok has achieved average human intelligence: It believes that someone paying other people, regardless of how they got their money and the ethical failures involved in using it, is equivalent to having done the work themselves. Nevermind that the only reason any of his shit works is in spite of his painfully stupid decisions and not because of them.
In a way, I’m not even mad. We do these things to ourselves and we refuse to look at the obvious.
like this
giantpaper likes this.
Basic Glitch
in reply to Basic Glitch: I just want to make sure I'm understanding this.
- You have companies like Meta (just an example) working for both sides of a conflict via government contract, but not necessarily bound to either side of a conflict because of the global venture capital / transnational ownership model
- We know Facebook/Meta has been intentionally manipulating the emotions of social media users for over a decade now
- That social media data is then collected and used to train military platforms, which may be directly or indirectly linked to the social media company
- These companies very likely have an incentive to create an endless war (and endless profits for themselves) by manipulating the emotions and behavior of social media users, knowing that data will be used to train military platforms
Basically, a private tech company could manipulate data to give one side of a conflict an advantage over the other, but it could also intentionally pit adversaries against each other in an endless loop by manipulating social media content, and by extension, manipulating the military platforms being trained.
A company could potentially profit from both sides of a conflict it's manipulating because the states have turned to it and other big tech companies to help them reach "victory" in the endless conflict the company helped create. Correct?