Palestinians Barred From Entering Syria Without Permit Under New Jolani Rules
VIENNA -- The Jolani regime in Damascus has imposed sweeping new entry restrictions on Palestinians, ending decades of open-entry rights. On January 20th, 2026, the Palestinian Embassy in Syria confirmed that Palestinian travel document holders are now barred from entering the country without prior authorization.
Under the new regulations, entry now requires permit approval through the Ministry of Interior and a formal sponsor or a relative already residing in the country.
This policy effectively nullifies Law No. 260, passed in 1956. For 70 years, this landmark legislation made Syria the only Arab nation to grant Palestinians rights equal to those of its own citizens. While Palestinians were denied citizenship to protect their “Right of Return” and prevent permanent displacement, Law 260 ensured they could work, own property, and access state services -- rights often denied to them in neighboring countries like Lebanon, where Palestinians are barred from owning property.
Palestinians Barred From Entering Syria Without Permit Under New Jolani Rules
The move dismantles long-standing legal protections that treated Palestinians as equals in Syria. Richard Medhurst (Richard Medhurst’s Newsletter)
Israel's imports of Azerbaijani oil via Turkey jump despite Ankara's trade ban
Jan 21 (Reuters) - Israel's imports of Azerbaijani oil from Turkey's Ceyhan port hit a three-year high in 2025, ship-tracking data show, underscoring Israel's ability to secure supplies despite hostility from some governments over its war in Gaza.
Official Turkish data show that trade between Turkey and Israel has been zero since June 2024, after Ankara imposed a ban over Israel's ~~war against Palestinian militant group Hamas~~ genocide.
Leaked Doc: Homeland Security's Domestic Terror Obsession
The annual assessment, which has been prepared since 2020, purports to offer a holistic assessment of threats to the Western Hemisphere. These assessments have consistently focused on what you'd imagine: southern border security, the drug trade, immigration, and critical infrastructure protection in the United States.
But this year’s assessment, marked “For Official Use Only” and not yet released to the public, identifies violent extremism on the part of American citizens as the priority and greatest threat.
One phrase in particular stands out to me as new: potential terrorism based upon “class-based or economic grievances.” (The term has not appeared in any previous assessment.)
The assessment doesn’t define what it means by “class-based or economic grievances.” The phrase could as much refer to an angry MAGA Midwesterner as it could any Mamdani-supporting urban dweller. But the focus is clear: the main threat to the “homeland,” DHS thinks, is the American people.
Leaked Doc: Homeland Security's Domestic Terror Obsession
Forget Greenland; the American public are the real target. Ken Klippenstein
US Park Service Erases Climate Facts at Fort Sumter, Where the Civil War Began
The historic site, on an island in South Carolina, could be inundated by rising seas in decades to come. A display on the threat has been removed.
Videos circulating online show SDF supporters attacking Syrians across Europe
Videos circulating on social media show SDF and YPG supporters attacking people and businesses while shouting racist comments in several European cities. Many Syrians have reported physical assaults, property damage, and verbal harassment.
In Germany, YPG and SDF supporters took to the streets in several cities to protest against Syrian army operations in northeastern Syria. What began as demonstrations quickly escalated into violence, with protesters using firecrackers, pyrotechnics, and iron bars against police attempting to maintain order.
Businesses across Berlin, Dortmund, and other German cities reported vandalism, with Syrian-owned shops targeted in particular. The protests were followed by direct attacks on Syrians and assaults on individuals who identified as Syrian Arabs.
Videos circulating online show SDF supporters attacking Syrians across Europe
A series of violent attacks on Syrian refugees by Kurdish groups across Europe was reported on Tuesday evening in Germany, France, Sweden, Austria, and the United Kingdom amid rising tensions over the collapse of the Kurdish-led Syrian Democratic For… Reem Aouir (Middle East Eye)
You can now use Debian without Linux
A Linux alternative? Debian/Hurd shows microkernel Unix dream is alive
The official GNU microkernel is still breathing – and now it's 64-bit. Liam Proven (The Register)
A “Hello World” virtual machine running the Hurd — 2020 — Blog — GNU Guix
Blog posts about GNU Guix. guix.gnu.org
But it's a paid enterprise feature.
Some time ago (one or two years, i am not sure) i had the Hurd running on an old Thinkpad and used it as a daily driver for a couple of months. It... worked. Most of the time.
The thing is: it's a really interesting system that - in a different timeline - could have made for a GREAT operating system if it had come forward and evolved a lot faster. Even setting aside the lack of a ~~browser~~ the bloated VM we nowadays call a browser (you can absolutely run Dillo on it), it just hurts a bit too much to use it for more than research / hobbyist / hacking purposes.
In short: The stability is really a problem - at least it was at the time when i tried it out. I don't know how often something that i worked with just, well, stopped working and the message 'Computer bought the farm' appeared. Sometimes a crashing X session dragged the whole system with it into the abyss, freezing everything and requiring a reset... followed by a lengthy fsck session. And it was slow. I mean, granted, the Thinkpad R60 i used it on isn't a supercomputer by today's standards, but compared to a Linux system or even OpenBSD it was really, really slow.
I have a high pain tolerance regarding software, i really have... but i had to give up on it after a while.
But now, with this new release... well, i think i will give it another go.
I just rediscovered that i indeed made a little writeup of my first days with the Hurd in my Gopherhole back then; perhaps it gives a bit of a taste of what it was like when i tried it. Beware, wall of text after the spoiler:
::: spoiler My Phlog post about the Hurd
Oh it HURDs...
I think i have some undiagnosed masochistic tendencies,
because i am constantly drawn to ever more esoteric and
fringe operating systems and software that will make my
life a little bit harder.
The HURD is something that got my attention a long, long
time ago, as being this "mysterious next-gen OS that will
change everything". Well, it was the late 90s / early 00s,
the CD was still king and dialup was really expensive (at
least here in Germany). I was already messing around with
Linux and most of the time i compensated my utter lack of
knowledge with determination and pure madness.
It was some holiday back in the very early 00s when i
managed to get it installed on my (i think) pentium 133,
after spending way too long on our very slow dialup line
to download the necessary files from some GNU mirror
(and learning later that i burned through a whole lot of
Deutsche Mark after my parents received the telephone bill)
Well, my adventure back then ended in a booting system
but without any recognized keyboard, and without really
knowing what i am doing (and in need of that PC for my
apprenticeship at that time) i threw in the towel after some
really long nights without getting anywhere. Beaten and
defeated i reinstalled Windows and (i think) Debian Linux
again and went on to mess with other things. But somehow
(like with some other failed projects) it left a scar that
sometimes itches.
Now, fast forward about 20 years, it's a very slow day at
the office, i have nothing really pressing to do and...
right out of the blue a GNU is sneaking into my thoughts.
I fired up a browser, skimmed the web about news regarding
my white whale and found out (somewhat to my surprise) that
Hurd is still in active development (even if it's going
forward at a glacial pace). So, after having messed around
with some fairly exotic systems and (thinking to have) much
more experience than my teenage self i thought it was high time
to take a ride on this bovine again.
Debian Hurd seemed to me the most viable option, so i went
right on, downloaded three DVD images of 2021 vintage while
absolutely missing the very-not-missable news about a 2023
version until after i had already downloaded the images and
burned them to three disks. Well... one can always update
later, right?
So, i grabbed my "spare" Thinkpad T60 out of the cabinet,
looked at the HDD to make sure i had nothing there that i
needed and started the installation... nothing too exotic
there, it's Debian based after all...
After some time the installation was finished, i rebooted
into the new system and... FOXTROTT UNIFORM CHARLY KILO!!!
... the keyboard wasn't working. WHY??? It did work in the
installer???
After reading through some sites on the net (and yes, i
understand the hardware support is slim, there are only a
handful of developers left etc, etc...) and not wanting to
repeat my first encounter with the hurd i thought to myself:
Why not try it on the R60? So, i took the HD from the T60,
put it into the R60, started the installation again, just to
be sure, then the dreaded moment of the reboot came... AND
I HAD A WORKING KEYBOARD. YESSS!
So, now i started exploring this system i had waited about
20 years to get running, the GNU and Debian sites give a
nice overview of what works in which ways, and after all,
it's not THAT different from your standard GNU/Linux system.
An interesting concept is that of the "translators", just to
give a short example:
If you run the following in your home directory
%<-----------------------------------------------------
settrans -ac ftp /hurd/ftpfs ftp.gnu.org
%<-----------------------------------------------------
It creates the folder "ftp" wherein you will find the
content of ftp.gnu.org. Granted, for anyone who has worked
with Plan 9 or has used FUSE this is not THAT much of a
revelation, but it is nice... making it possible to layer
translators (e.g. for accessing an iso on the ftp server)
makes it even a bit nicer.
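layering might look something like this: mounting an iso
image that lives on the ftp server through the ftpfs node
from above (just a sketch - the iso path here is made up):
%<-----------------------------------------------------
settrans -ac iso /hurd/iso9660fs ftp/gnu/some/image.iso
%<-----------------------------------------------------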
Now i still had only the three DVDs as package sources, so,
thinking that it would be the most safe-ish option to first
upgrade everything to the 2023 release i followed an
article on the debian pages and added the following to my
sources.list (after commenting out the DVDs):
%<-----------------------------------------------------
deb [check-valid-until=no trusted=yes] snapshot.debian.org/archive/de… sid main
deb [check-valid-until=no trusted=yes] snapshot.debian.org/archive/de… unreleased main
deb-src [check-valid-until=no trusted=yes] snapshot.debian.org/archive/de… sid main
%<-----------------------------------------------------
After that i ran an apt update, installed the
debian-archive-ports-keyring package, upgraded everything,
initiated a reboot while praying to the mighty GNU that it
would come up again.
It did. Everything worked fine.
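for the record, the commands boiled down to something
like this (the exact upgrade invocation is from memory,
so treat it as a sketch):
%<-----------------------------------------------------
apt update
apt install debian-archive-ports-keyring
apt full-upgrade
%<-----------------------------------------------------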
So, now on the 2023 release, i thought that it would be nice
not being stuck on this
%<-----------------------------------------------------
deb deb.debian.org/debian-ports unstable main
deb-src deb.debian.org/debian unstable main
deb deb.debian.org/debian-ports unreleased main
%<-----------------------------------------------------
And, again, i initiated an update followed by an upgrade.
Aaaand it broke the install. "I am idiot. Shoot me," to
quote a Romanian friend of mine. Well, back to square one
then... now, knowing that it is possible to get a working
install, i downloaded the 2023 netinstall ISO and started
all over again.
The install from the netinstall media did go as planned
until i reached the point where it wanted me to select a
Debian mirror, started to scan its content... and froze.
Ok, its unstable software, something like that may happen. I
rebooted, started the installation again and it did freeze
again.
Well... at this point i reached my frustration zenith, it
was already late so i somewhat rage-quit for the day.
Ok, the next evening i was back at it again. THIS time the
installer was able to scan the mirror and finish the
installation. After the reboot it booted up normally... only
to freeze during boot. Okay, it's still unfinished
software... hard-reset and another try. Just... something
during this failed boot attempt seemed to have messed up the
ext2 filesystem so badly that fsck could not repair it on
its own.
That was the last buckling of that bovine that threw me off
again. I needed a break, junior needed attention, and
just-too-many things at home needed my attention as a
handyman.
Addendum
Another day, new luck... let's try it again. I thought to
myself: Well, you got a functional installed system out of
the 2021 version, so try it again with this approach. And
following my steps above up until after the upgrade to the
2023 version and... everything still works!
Now setting up X and a desktop environment was just a piece
of cake after that. Is this GNU now tamed? I don't think so,
but at least i got the reins of the bovine and now it's
really time to explore this ecosystem.
:::
I do not know how that article covered so much background on GNU Hurd and the quest for a microkernel UNIX without mentioning Redox OS.
Redox is also a microkernel-based, POSIX-compatible operating system (UNIX-like). So quite like the GNU project and the Hurd in that sense.
Redox is younger, 10 years old instead of 30, and more “modern” (e.g. written in Rust). It can be seen as a GNU competitor as it does not rely on the GNU C library or utilities.
Fun fact, there isn't even an "MIT license", look:
Did you even read the page you linked? It took less than 10 seconds to scroll down to the 'M's.
First, there has been massive amounts of MIT code in important parts of the Linux ecosystem for decades. Xorg, Wayland, and Mesa for starters. The sky has not fallen. I am not exactly panicking.
But let’s address your specific example.
Let's start by pointing out that Redis was BSD, not MIT. But let's assume your cautionary tale applies.
A truly gigantic corporation, Amazon, was making all the money off Redis without giving anything back to the company that actually wrote the code (Redis). So, Redis tried to change the license to make that more difficult. The license they chose is the strictest free software license the FSF offers—the AGPL.
Pop quiz: what part of the above are we “the community” outraged about? The clearly predatory Amazon stuff? Or the defensive action by the company writing all the code? That’s right, we are mad at the company that gave us all the code for free and that still licenses it AGPL.
But even beyond that, what was lost again? Because the implication is that BSD (or MIT) somehow allows companies to “take” free software from us. This is false.
What happened with Redis is that the original code remained 100% available. And it remained part of a 100% free software project. It remains 100% BSD licensed to this day. You can use it, you can study it, you can improve it, you can share it, and you can even sell it commercially! It offers you at least FIVE freedoms.
Not a single line of code was lost from the project. Yes, the project had to change its name (Redis owns the name Redis). Yes, Redis stopped contributing to the project. Is that not their right?
It is that last bit that seems to drive us mad. We yell about corporations taking our code. But all the examples of bad behaviour we give boil down to them choosing to give us less of theirs.
If “the community” is the one writing the code, nobody can take it from us. And even if big evil companies are writing the code, the only code that they can deny us is code they write in the future.
I find it hard to be either outraged or even particularly afraid of that.
Anyway, I do not want to talk you out of your license preferences. I have no beef with that. But I do wish there was less FUD slinging at projects that choose to license their hard work as MIT.
GitHub - valkey-io/valkey: A flexible distributed key-value database that is optimized for caching and other realtime workloads.
A flexible distributed key-value database that is optimized for caching and other realtime workloads. - valkey-io/valkey. GitHub
I realize I oversimplified a complex set of moves and “shared source” is its own can of worms. My post was already too long.
But my core point is that the code (as Valkey) remained available and remains available under the same free software license that it has always been available under.
The only consequence of what Redis did was that they stopped giving away their “new” code to service providers like Amazon. Even Amazon can continue to use what was there before. And the community can continue to collaborate on the same code base that they were collaborating on before. The licence Redis chooses for its “new” code is largely irrelevant.
We talk about permissive licenses like they represent some massive risk. I just do not see it that way. And they have many advantages including often attracting more corporate participation (more free code for me).
I am a very happy user of Clang/LLVM. It is the product of collaboration between Google, Apple, Sony, Microsoft, academia, and other nerds. I am very happy we have licenses that encourage companies to create quality software for me to use.
I am sure Redis chose BSD to begin with in case they ever had to make a move like they did. If the only option was GPL, they may never have released it as Open Source to begin with. Again, I am glad they did.
The difference with llvm is that nobody is selling a hosted llvm as a service, nobody is making money off llvm without contributing back (directly, I know a bunch of companies use llvm to make a product that makes money).
Redis clearly thinks that using the BSD licence was a mistake. I agree with you, using BSD attracted more people/companies to use it than if they had chosen AGPL, that's the trade-off you make when choosing a copyleft licence.
I think I agree with you on a lot of this, let me know if this is a fair summary of your argument:
Permissive and copyleft licences both have advantages and disadvantages. If a project chooses a permissive licence then that's their choice, and if they later decide to re-license then the project will probably get forked and carry on under the original licence, so as a user you can just switch to the fork and the only thing that will change is the name of the package you install.
That seems pretty reasonable to me, let me know if I made any mistakes summarising your point.
The caveat I would add to that is that the project shouldn't complain about freeloaders if they choose a licence that explicitly allows freeloading. They chose a permissive licence for its advantages but they won't accept the consequences that come with that decision.
I'm super excited for Redox, but unless you're a Rust developer it's a bit limited. Few programming languages oþer þan Rust are available for it.
Eventually, I hope it'll have tiling window managers and Go, V, and Zig ports; Helix (an editor written in Rust), tmux, and zsh. At þe moment, none of þese have been ported, and þat's kind of a bare minimum.
LibreFind: the Android app that finds FOSS alternatives to proprietary applications
LibreFind was created with a very clear goal: to help Android users quickly identify which of their installed applications are not free and which open-source alternatives can replace them. The app, whose repository is published on GitHub, scans the device, compares the packages against a database hosted on Firebase Firestore, and returns an ordered list of proprietary software along with relevant FOSS suggestions. The idea is simple but powerful, because it gives an immediate overview of how free your phone is and lets you make more informed choices.
Thanks to @digidavidex@mastodon.uno for the tip
Full article here:
linuxeasy.org/librefind-lapp-a…
LibreFind: the Android app that finds FOSS alternatives to proprietary applications
LibreFind is an Android project that identifies the proprietary apps installed on the device and suggests FOSS alternatives, based on a database managed via Firebase Firestore. Ferramosca Roberto (Linux Easy)
What do you think of loss32?
A dream of a Linux distribution where the entire desktop environment is Win32 software running under WINE. A completely free and open-source OS where you can just download .exe files and run them, for the power user who isn't necessarily a Unixhead, or just for someone who thinks this sounds fun.
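For a rough feel of the idea, here is a minimal sketch using WINE's stock explorer desktop as a stand-in shell (the prefix path and resolution are invented; this is not loss32's actual code):

```sh
#!/bin/sh
# dedicated WINE prefix acting as the "system drive"
export WINEPREFIX="$HOME/.loss32"

# create the prefix on first run
wineboot --init

# run WINE's built-in explorer as a full-screen desktop shell;
# .exe files can then be launched inside this session
exec wine explorer /desktop=shell,1920x1080
```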
I like the layout and looks of Windows 2000, but I think running actual exe files from the old OS is not viable from a security standpoint.
I currently don't have a use/need for a 32bit Linux distro but I do believe that one with support should always exist. I normally recommend Q4OS to people who need a 32bit Linux distro but they're only going to be supporting the 32bit version for 2 more years. There was some discussion about 32bit distros a while back and it seems like all or at least most distros are doing something similar. While loss32 is just a proof of concept, I would support it if it actually did release because for as long as old 32bit computers exist, there will be a need for 32bit operating systems.
Edit: I realize that I missed the actual point of loss32 but I'm not deleting my comment.
Follow the link and go down to the third point:
Isn't this just ReactOS?
ReactOS tries to reimplement the Windows NT kernel, and that has always been its Achilles heel, holding it back from a hardware compatibility and stability standpoint. The loss32 concept is to achieve a similar-feeling end result to ReactOS, but built on a more usable foundation, using components known to work well (the Linux kernel, WINE, everything that glues those together, and a sprinkling of ReactOS userland niceties). As a bonus, the OS would still technically be a Linux distro, so it would be possible to run Linux software when necessary, something ReactOS can't do.
Ahhhh okay. Cool.
Personally, I don’t see the appeal though. I am really happy to have left Windows in the trash.
A dream of a Linux distribution where the entire desktop environment is Win32 software running under WINE.
Sounds like a nightmare to me. Why would anyone want that?
I'd recommend using something like Niri instead of mutter for the compositor as Niri is:
- Extremely customizable
- Meant to be used alone (unlike mutter which is for Gnome)
- Better at supporting Wayland portals than any compositor I've tried
- Very modern
- Pretty stable
- Making use of scrolling window management which is, imo, superior to anything else
+ You could force all windows to be floating if you want that traditional method tho
I'd also recommend using DankMaterialShell and simply providing a theme to get the appeal you want. It works well with Niri and provides all the system tools you need for an OS, like bluetooth and audio management, application lookup, etc. It's sort of a stripped-down Gnome-shell for standalone compositors but way more customizable.
Then everything else can just be installed WINE apps.
i think wlroots does something similar too?
it would be neat if it supported the windows uxthemes like reactos.
I wonder if this has any practical benefits over running a Windows OS in a VM...
edit: piping, or easier collaboration between, Win32 and Linux programs could be an example. The creator mentions creative and gaming applications.
My mom and my wife's mom have computers soon to be out of support. Windows.
They need something stable, but also that does all their normal stuff. I'd love something that updated cleanly like enterprise Linux, but gave them the win7 interface they had for so long (they complain about this one now).
So that's your market. Yeah, a WINE box would work well, and Nobara is nearly the winning candidate. But even it requires a lot of finagling for Windows people, and I'd love something completely seamless so it's easier to support.
it does look like ~~reactos~~ windows but better. damn, i wish i had the skills for such a project.
might be good for the potential switchers who need something very similar to windows, old computers that need to brainlessly run win32 software. deshitified windows for those who need it. normies orphaned by win10 who are not ready to learn linux headfirst for any reason. and all open source so you can actually trust your os.
this is what foss enables in spite of purists and copyrighters alike. i like it.
At Davos, NVIDIA, Microsoft CEOs deny AI bubble
What, us worry? As OpenAI pivots to "last resort," tech titans pivot from bubble talk. Mashable
Is it just me, or does anyone else feel that Lemmy should potentially duplicate this feature?
On Reddit, subreddit moderators can comment as regular users by default, and only show the green MOD badge when they intentionally “distinguish” a comment as an official moderation response.
It got me thinking about Lemmy.
On Lemmy, mod comments are often immediately identifiable, even when the mod is just participating casually in a discussion rather than speaking in an official capacity. That can sometimes unintentionally shift the tone of a thread or discourage open conversation.
Do you think Lemmy should consider:
A clearer opt-in distinction system for mod comments
The ability for mods to participate by default as regular users unless explicitly marking a comment as “mod voice”
Or is the current transparency model preferable for federated communities?
Curious how other instance admins, mods, and users feel about this — especially from a trust, power-balance, and community-health perspective.
Is this something Lemmy should copy from Reddit, or is it intentionally different for good reason?
Where?
Because I'm a mod, and whenever I comment within the community, it always just shows my mod status (green, because I use Summit for Lemmy)
Oh ok, I now get what you mean.
Because of how federation works, those badges or indicators (or that badge, depending on what client you use) can't be completely hidden.
Lemmy devs could I guess hide it in the API, but it would be as hidden as votes are right now (e.g. if someone really wanted to, they could spin up a temporary instance to get that info). I mean, look at what lemvotes.org/ does.
And in this case, you don't even need an instance, you can literally just use browser.pub in your browser right now to get the moderators' ActivityPub collection: browser.pub/https%3A%2F%2Flemm…
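For example, any ActivityPub-speaking client can fetch the same collection the link below opens - plain curl works, no instance required:

```sh
# ask Lemmy for the ActivityPub form of a community's moderator list
curl -H 'Accept: application/activity+json' \
  'https://lemmy.world/c/asklemmy/moderators'
```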
lemmy.world/c/asklemmy/moderators · BrowserPub · A browser for exploring ActivityPub and the fediverse
Explore the open social web through the lens of ActivityPub and the fediverse. browser.pub
Rahm Emanuel steers a course between 'monopolists' and 'Marxists'
He is now exploring a campaign for the presidency, an effort that seems designed to challenge some orthodoxies of that same party.
Archived at web.archive.org/web/2026012213…
Rahm Emanuel steers a course between 'monopolists' and 'Marxists'
NPR's Steve Inskeep speaks with longtime Democrat Rahm Emanuel about politics in the Trump era. NPR
Hide Admin / Mod tags unless using "speak as moderator" (distinguishing post)
Requirements This is a feature request and not a bug report. Otherwise, please create a new bug report instead. Please check to see if this request (or a similar one) already exists. It's a single ... Crashdoom (GitHub)
Egypt and Saudi Arabia pressure Libya’s Haftar to stop UAE supplies to Sudan’s RSF
Egypt and Saudi Arabia have stepped up pressure on Khalifa Haftar over the eastern Libya commander’s role in facilitating Emirati military support to Sudan’s Rapid Support Forces (RSF), warning that continued assistance could trigger a serious shift in Cairo’s relationship with him.
The pressure forms part of a broader Egyptian-Saudi effort to block flows of arms, fuel and fighters to the RSF, curb UAE influence across the region and prevent further destabilisation along the sensitive Egypt–Libya–Sudan border triangle.
Earlier this month, Saddam Haftar, Khalifa’s son and deputy commander-in-chief of his Libyan Arab Armed Forces (LAAF), visited Cairo and met Egyptian Defence Minister Abdel Meguid Saker and other senior military and security officials.
Egyptian and Libyan media portrayed the meeting as focused on military cooperation, but the full purpose of the visit was not publicly disclosed.
How Do Developers Ensure Profitability in P2E Games?
Play To Earn Game Development Company | P2E Game Development
As a leading Play to earn game development company, GamesDapp offers top-notch P2E game development services on any blockchain network to launch P2E gaming platforms. GamesDapp - Game Development Company
UK: Report into Maccabi Tel Aviv police ban failed to include local Muslim voices
The head of a British policing watchdog tasked with investigating the decision to ban Maccabi Tel Aviv fans from attending the Israeli football club’s match against Aston Villa failed to include any voices from Birmingham’s Muslim community in his preliminary report.
The report by Andy Cooke, the chief inspector of constabulary, led to the UK Home Secretary, Shabana Mahmood, saying that she had “no confidence” in Craig Guildford, the chief constable of West Midlands Police (WMP), prompting Guildford’s immediate retirement.
Those interviewed by Cooke include the chargé d'affaires from the Israeli embassy in London, a representative of the Jewish Representative Council for Birmingham and West Midlands, and Lord Mann, the government’s independent adviser on antisemitism.
However, Cooke did not include any voices from Muslim community groups or mosque leaders. Muslims make up approximately 30 percent of Birmingham’s population and almost 10 percent of the wider West Midlands region, according to census data.
UK: Report into Maccabi Tel Aviv police ban failed to include local Muslim voices
The head of a British policing watchdog tasked with investigating the decision to ban Maccabi Tel Aviv fans from attending the Israeli football club’s match against Aston Villa failed to include any voices from Birmingham’s Muslim community in his p… Areeb Ullah (Middle East Eye)
UK court ruling a threat to jurors who acquit pro-Palestine activists on moral grounds
It relates to a criminal case against five women charged with criminal damage for breaking the windows of JP Morgan Bank’s European head office at Victoria Embankment over its fossil fuel investments.
During the trial, which began in February 2024 at Inner London Crown Court, protesters held signs reminding jurors that they have an “absolute right” to acquit a defendant on conscience.
Judge Silas Reid instructed the jury to disregard the placards, saying they were “misstating the law”.
He then told them that it is a “criminal offence for a juror to do anything from which it can be concluded that a decision will be made on anything other than the evidence in the case”.
The women were handed suspended sentences, but launched an appeal on the basis that Judge Reid had wrongly directed the jury that it would be a criminal offence for them to acquit the defendants according to conscience.
They argued that in telling jurors this, he was pressuring them to return a guilty verdict. They said this made their convictions “unsafe”.
The campaign group Defend Our Juries (DOJ) said the ruling is the latest in a slew of efforts by the government to crack down on a growing trend of juries acquitting members of social movements – including pro-Palestine and climate activists – according to conscience.
How the UK government is trying to tackle juries who may acquit pro-Palestine activists on moral grounds
A UK Court of Appeal ruling is the latest in a series of measures that lawyers warn shows authorities are trying to stop juries from acquitting on conscience. Katherine Hearst (Middle East Eye)
cnet.com/tech/services-and-sof…
Meta's use of copyrighted books to train its Llama AI was fair use, a judge ruled. "This ruling does not stand for the proposition that Meta's use of copyrighted materials to train its language models is lawful," he wrote. "It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one."
The plaintiffs focused their arguments on how Meta's AI models can reproduce exact snippets from their works and how the company's Llama models hurt their ability to license their books to AI companies. These arguments weren't as compelling in Chhabria's eyes -- he called them "clear losers" -- so he sided with Meta.
That's different from the Anthropic ruling, where Judge William Alsup focused on the "exceedingly transformative" nature of the use of the plaintiff's books in the results AI chatbots spit out. Chhabria wrote that while "there is no disputing" that the use of copyrighted material was transformative, the more urgent question was the effect AI systems had on the ecosystem as a whole.
Maybe? Not a lawyer, but it sounds like training might be fair use? And generating not?
Meta Won Its AI Fair Use Lawsuit, but Judge Says Authors Are Likely 'to Often Win' Going Forward
Meta's victory came two days after Claude maker Anthropic won a similar case. Katelyn Chedraoui (CNET)
torrentfreak.com/meta-secures-…
Yesterday, U.S. District Court Judge Vince Chhabria ruled on both motions, which at first sight offers a clear win for Meta. The court denied the authors’ motion to hold Meta liable for direct copyright infringement after it obtained pirated books from shadow libraries via BitTorrent.
It did have a piracy part. Just not listed on the first website.
Meta Secures Bittersweet Fair Use Victory in AI 'Piracy' Case * TorrentFreak
In a pivotal moment for AI and copyright law, Meta has secured a bittersweet partial fair use victory in a 'piracy' lawsuit brought by authors. Ernesto Van der Sar (TF Publishing)
@far_university1990 yes but the legal precedent has been set, lol
(/s maybe)
Which would you rather, though? That they determine that pirating content is legal when used for training LLM or to continue with the precedent that pirating is illegal?
Tbh, I'd rather they do the latter, because I doubt the former would set any precedent that lets anyone other than billion-dollar companies get away with piracy.
But also money writes law.
The copyright laws weren’t written to protect the common Joe.
They were pushed by powerful publisher lobbies back in the day.
The obligation to obey the ~~Copyright of others~~ anything is only for the riff-raff.
RRF Caserta Notizie. 7th edition of the Premio Melagrana at the Teatro Don Bosco in Caserta, 20 January 2026
Hind Rajab group urges Greece to arrest 'Israeli' Defense Minister
The Brussels-based human rights organization Hind Rajab announced Wednesday that it filed a formal complaint with Greek authorities calling for the arrest and investigation of 'Israeli' Defense Minister Yisrael Katz over alleged war crimes committed in Gaza.
Katz has been visiting Athens since Monday on an official trip scheduled to end Thursday. The complaint was submitted to the Greek Supreme Court prosecutor, urging urgent legal action due to the short duration of Katz’s stay.
Hind Rajab asserts that Katz’s policies and conduct amount to acts of genocide, war crimes, and crimes against humanity under Article 2 of the Genocide Convention and Article 6 of the Rome Statute of the International Criminal Court. The complaint emphasizes that Greece’s jurisdiction and legal obligations are activated while Katz is present on Greek soil.
I'm tired of LLM bullshitting. So I fixed it.
Hello!
As a handsome local AI enjoyer™ you’ve probably noticed one of the big flaws with LLMs:
It lies. Confidently. ALL THE TIME.
(Technically, it “bullshits”: link.springer.com/article/10.1…)
I’m autistic and extremely allergic to vibes-based tooling, so … I built a thing. Maybe it’s useful to you too.
The thing: llama-conductor
llama-conductor is a router that sits between your frontend (OWUI / SillyTavern / LibreChat / etc) and your backend (llama.cpp + llama-swap, or any OpenAI-compatible endpoint). Local-first (because fuck big AI), but it should talk to anything OpenAI-compatible if you point it there (note: experimental so YMMV).
Not a model, not a UI, not magic voodoo.
A glass-box that makes the stack behave like a deterministic system, instead of a drunk telling a story about the fish that got away.
TL;DR: “In God we trust. All others must bring data.”
Three examples:
1) KB mechanics that don’t suck (1990s engineering: markdown, JSON, checksums)
You keep “knowledge” as dumb folders on disk. Drop docs (.txt, .md, .pdf) in them. Then:
- `>>attach <kb>` - attaches a KB folder
- `>>summ new` - generates `SUMM_*.md` files with SHA-256 provenance baked in and moves the original to a sub-folder
Now, when you ask something like:
“yo, what did the Commodore C64 retail for in 1982?”
…it answers from the attached KBs only. If the fact isn’t there, it tells you - explicitly - instead of winging it. Eg:
The provided facts state the Commodore 64 launched at $595 and was reduced to $250, but do not specify a 1982 retail price. The Amiga’s pricing and timeline are also not detailed in the given facts. Missing information includes the exact 1982 retail price for Commodore’s product line and which specific model(s) were sold then. The answer assumes the C64 is the intended product but cannot confirm this from the facts.
Confidence: medium | Source: Mixed
No vibes. No “well probably…”. Just: here’s what’s in your docs, here’s what’s missing, don't GIGO yourself into stupid.
And when you’re happy with your summaries, you can:
`>>move to vault` - promotes those SUMMs into Qdrant for the heavy mode.
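To illustrate the provenance idea, a minimal sketch (not llama-conductor's actual code - the header format and naming scheme here are invented):

```python
# stamp a summary with the SHA-256 of the document it came from,
# so every claim can be traced back to a specific source file
import hashlib
from pathlib import Path

def summ_with_provenance(source: Path, summary_text: str) -> Path:
    digest = hashlib.sha256(source.read_bytes()).hexdigest()
    header = f"<!-- source: {source.name} | sha256: {digest} -->\n\n"
    out = source.with_name(f"SUMM_{source.stem}.md")
    out.write_text(header + summary_text)
    return out
```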
2) Mentats: proof-or-refusal mode (Vault-only)
Mentats is the “deep think” pipeline against your curated sources. It’s enforced isolation:
- no chat history
- no filesystem KBs
- no Vodka
- Vault-only grounding (Qdrant)
It runs triple-pass (thinker → critic → thinker). It’s slow on purpose. You can audit it. And if the Vault has nothing relevant? It refuses and tells you to go pound sand:
FINAL_ANSWER:
The provided facts do not contain information about the Acorn computer or its 1995 sale price.
Sources: Vault
FACTS_USED: NONE
[ZARDOZ HATH SPOKEN]

Also yes, it writes a mentats_debug.log, because of course it does. Go look at it any time you want.
The flow is basically: Attach KBs → SUMM → Move to Vault → Mentats. No mystery meat. No “trust me bro, embeddings.”
3) Vodka: deterministic memory on a potato budget
Local LLMs have two classic problems: goldfish memory + context bloat that murders your VRAM.
Vodka fixes both without extra model compute. (Yes, I used the power of JSON files to hack the planet instead of buying more VRAM from NVIDIA).
- `!!` stores facts verbatim (JSON on disk)
- `??` recalls them verbatim (TTL + touch limits so memory doesn’t become landfill)
- CTC (Cut The Crap) hard-caps context (last N messages + char cap) so you don’t get VRAM spikes after 400 messages
So instead of:
“Remember my server is 203.0.113.42” → “Got it!” → [100 msgs later] → “127.0.0.1 🥰”
you get:
`!! my server is 203.0.113.42`
`?? server ip` → 203.0.113.42 (with TTL/touch metadata)
And because context stays bounded: stable KV cache, stable speed, your potato PC stops crying.
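The mechanism, sketched (illustrative only - the real module's file layout and field names will differ):

```python
# verbatim fact store: JSON on disk, TTL plus touch limits,
# no model in the loop at all
import json, time
from pathlib import Path

STORE = Path("facts.json")

def _load():
    return json.loads(STORE.read_text()) if STORE.exists() else {}

def store(key, value, ttl_s=7 * 86400, max_touches=100):
    facts = _load()
    facts[key] = {"value": value,
                  "expires": time.time() + ttl_s,
                  "touches_left": max_touches}
    STORE.write_text(json.dumps(facts, indent=2))

def recall(key):
    facts = _load()
    fact = facts.get(key)
    if not fact or time.time() > fact["expires"] or fact["touches_left"] <= 0:
        return None                   # expired or worn out: memory stays bounded
    fact["touches_left"] -= 1         # each recall consumes a touch
    STORE.write_text(json.dumps(facts, indent=2))
    return fact["value"]              # verbatim, never paraphrased
```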
There’s more (a lot more) in the README, but I’ve already over-autism’ed this post.
TL;DR:
If you want your local LLM to shut up when it doesn’t know and show receipts when it does, come poke it:
- Primary (Codeberg): codeberg.org/BobbyLLM/llama-co…
- Mirror (GitHub): github.com/BobbyLLM/llama-cond…
PS: Sorry about the AI slop image. I can't draw for shit.
PPS: A human with ASD wrote this using Notepad++. If the formatting is weird, now you know why.
llama-conductor
Route workflows, not models. Glass-box, not black-box. Squash LLM nonsense. Codeberg.org
Good question.
It doesn’t “correct” the model after the fact. It controls what the model is allowed to see and use before it ever answers.
There are basically three modes, each stricter than the last. The default is "serious mode" (governed by serious.py). Low temp, punishes chattiness and inventiveness, forces it to state context for whatever it says.
Additionally, Vodka (made up of two sub-modules - "cut the crap" and "fast recall") operates at all times. Cut the crap trims context so the model only sees a bounded, stable window. You can think of it like a rolling summary of what's been said. That summary is not LLM-generated either - it's concatenation (dumb text matching), so no made-up vibes.
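Conceptually it's something like this (a sketch, not the actual CTC code - the message shape and budgets are assumptions):

```python
# hard-cap the context window: last N messages, trimmed to a char budget
def cut_the_crap(messages, max_msgs=20, max_chars=8000):
    window = list(messages[-max_msgs:])
    while window and sum(len(m["content"]) for m in window) > max_chars:
        window.pop(0)  # drop the oldest message until we fit the budget
    return window
```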
Fast recall OTOH stores and recalls facts verbatim from disk, not from the model’s latent memory.
It writes what you tell it to a text file and then, when you ask about it, spits it back out verbatim (`!!` / `??`).
And that's the baseline.
In KB mode, you make the LLM answer based on the above settings + with reference to your docs ONLY (in the first instance).
When you `>>attach`, the router gets stricter again. Now the model is instructed to answer only from the attached documents.
Those docs can even get summarized via an internal prompt if you run `>>summ new`, so that extra details are stripped out and you are left with just baseline who-what-where-when-why-how.
The SUMM_*.md files come with SHA-256 provenance, so every claim can be traced back to a specific origin file (which gets moved to a subfolder).
TL;DR: If the answer isn’t in the KB, it’s told to say so instead of guessing.
Finally, Mentats mode (Vault / Qdrant). This is the "I am done with your shit" path.
It's all three of the above PLUS a counter-factual sweep.
It runs ONLY on stuff you've promoted into the vault.
What it does is take your question and frame it in a particular way so that all of the particulars must be answered in order for there to BE an answer. Any part missing or not in context? No soup for you!
In step 1, it runs that past the thinker model. The answer is then passed on to a "critic" model (a different LLM). That model's job is to look at the thinker's output and say "bullshit - what about xyz?".
It sends that back to the thinker... who then answers and provides the final output. But if it CANNOT answer the critic's questions (based on the stored info), it will tell you. No soup for you, again!
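In pseudostructure, the loop is roughly this (illustrative only - thinker, critic, and resolved stand in for the real models and checks):

```python
REFUSAL = "The provided facts do not contain this information."

def mentats(question, facts, thinker, critic, resolved):
    draft = thinker(question, facts, None)        # pass 1: grounded draft
    objections = critic(question, draft, facts)   # pass 2: "what about xyz?"
    final = thinker(question, facts, objections)  # pass 3: resolve objections
    # refuse unless every objection is answerable from the stored facts
    return final if resolved(final, objections, facts) else REFUSAL
```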
TL;DR:
The “corrections” happen by routing and constraint. The model never gets the chance to hallucinate in the first place, because it literally isn’t shown anything it’s not allowed to use. Basic premise - trust but verify (and I've given you all the tools I could think of to do that).
Does that explain it better? The repo has a FAQ but if I can explain anything more specifically or clearly, please let me know. I built this for people like you and me.
God, I hope so. Else I just pissed 4 months up the wall and shouted a lot of swears at my monitor for nada 😀
Let me know if it works for you
On the stuff you use the pipeline/s on? About 85-90% in my tests.
Just don't GIGO (Garbage In, Garbage Out) your source docs... and don't use a dumb LLM.
That's why I recommend Qwen3-4B 2507 Instruct. It does what you tell it to (even the abliterated one I use).
Random Sexy-fun-bot900-HAVOK-MATRIX-1B.gguf? I couldn't say 😀
Comment removed by (auto-mod?) cause I said sexy bot. Weird.
Restating again:
On the stuff you use the pipeline/s on? About 85-90% in my tests. Just don't GIGO (Garbage In, Garbage Out) your source docs... and don't use a dumb LLM. That's why I recommend Qwen3-4B 2507 Instruct. It does what you tell it to (even the abliterated one I use).
abliterated one
Please elaborate, that alone piqued my curiosity. Pardon me if I could've searched.
Yes of course.
Abliterated is a technical LLM term meaning "safety refusals removed".
Basically, abliteration removes the security theatre that gets baked into LLMs like ChatGPT.
I don't like my tools deciding for me what I can and cannot do with them.
I decide.
Anyway, the model I use has been modified with a newer, less lobotomy-inducing version of abliteration (which previously was a risk).
huggingface.co/DavidAU/Qwen3-4…
According to validation I've seen online (and of course, I tested it myself), it's lost next to zero "IQ" and dropped refusals by about...90%.
BEFORE: initial refusals: 99/100
AFTER: refusals: 8/100 [lower is better], KL divergence: 0.02 (less than 1 is great; 0 is perfect).
In fact, in some domains it's actually a touch smarter, because it doesn't try to give you "perfect" model answers. Maths reasoning, for example: where the answer is basically impossible, it will say "the answer is impossible; here's the nearest workable solution based on context" instead of getting stuck in a self-reinforcing loop, trying to please you, and then crashing.
In theory, that means you could ask it for directions on how to cook meth and it would tell you.
I'm fairly certain the devs didn't add the instructions for that in there, but if they did, the LLM won't go "sorry, I can't tell you, Dave".
Bonus: with my harness over the top, you'd have an even better idea if it was full of shit (it probably would be, because, again, I'm pretty sure they don't train LLMs on Breaking Bad).
Extra double bonus: if you fed it exact instructions for cooking meth, using the methods I outlined? It will tell you exactly how to cook meth, 100% of the time.
Say...you...uh...wanna cook some meth? 😛
PS: if you're more of a visual learner, this might be a better explanation
DavidAU/Qwen3-4B-Hivemind-Instruct-NEO-MAX-Imatrix-GGUF at main
I have no remarks, just really amused with your writing in your repo.
Going to build a Docker and self host this shit you made and enjoy your hard work.
Thank you for this!
Thank you ❤
Please let me know how it works... and enjoy the >>FR settings. If you've ever wanted to be trolled by Bender (or a host of other 1990s/2000s-era memes), you'll love it.
There are literally dozens of us. DOZENS!
I'm on a potato, so I can't attach it to something super sexy, like a 405B or a MoE.
If you do, please report back.
PS: You may see (in the docs) occasional references to MoA that slipped past me. That doesn't stand for Mixture of Agents. It stood for "Mixture of Assholes". That's always been my mental model for this.
Or, in the language of my people, this was my basic design philosophy:
YOU (question)
    |
    v
ROUTER+DOCS ("Ah shit, here we go again. I hate my life")
    |---> Asshole 1: Qwen ("I'm right")
    |---> Asshole 2: Phi ("No, I'm right")
    |---> Asshole 3: Nanbeige ("Idiots, I'm right!")
    |
ROUTER+DOCS ("Jesus, WTF. I need booze now") <--- (all assholes)
    |
    v
YOU (answer)
(this could have been funnier if the ASCII actually worked, but man... Lemmy borks that)
EDIT: If you want to be boring about it, it's more like this
PS: If you like it, let other people in other places know about it.
llama-conductor goes brrrr - Pastebin.com
LOL. Don't do that. Wikipedia is THE noisiest source.
Would you like me to show you HOW and WHY the SUMM pathway works? I built it after I tried a "YOLO Wikipedia into that shit - done, bby!". It... ended poorly.
Not OP, but random human.
Glad you tried the "YOLO Wikipedia" and are sharing that fact, as it saves the rest of us time. 😀
Of course. Here is a copy-paste from my now defunct Reddit account. Feel free to follow the pastebin links to see what v1 of SUMM did. What the router uses is v1.1:
########
My RAG
I've recently been playing around with making my SLMs more useful and reliable. I'd like to share some of the things I did, so that perhaps it might help someone else in the same boat.
Initially, I had the (obvious, wrong) idea that "well, shit, I'll just RAG dump Wikipedia and job done". I trust it's obvious why that's not a great idea (retrieval gets noisy, chunks lack context, model spends more time sifting than answering).
Instead, I thought to myself "why don't I use the Didactic Method to teach my SLMs what the ground truth is, and then let them argue from there?". After all, Qwen3-4B is pretty good with its reasoning...it just needs to not start from a position of shit.
The basic workflow -
TLDR
- Use a strong model to write clean, didactic notes from source docs.
- Distill + structure those notes with a local 8B model.
- Load distilled notes into RAG (I love you, Qdrant).
- Use a 4B model with low temp + strict style as the front‑end brain.
- Let it consult RAG both for facts and for “who should answer this?” policy.
Details
(1) Create a "model answer" --> this involves creating a summary of source material (like, say, a markdown document explaining launch flags for llama.cpp). You can do this manually or use any capable local model to do it, but for my testing, I fed the source info straight into Gippity 5 with a specific "make me a good summary of this, hoss" prompt.
Like so: pastebin.com/FaAB2A6f
(2) Save that output as SUMM-llama-flags.md. You can copy-paste it into Notepad++ and do it manually if you need to.
(3) Once the summary has been created, use a local "extractor" and "formatter" model to batch-extract high-yield information (into JSON) and then convert that into a second distillation (markdown). I used Qwen3-8B for this.
Extract prompt pastebin.com/nT3cNWW1
Format prompt (run directly on that content after the model has finished its extraction) pastebin.com/PNLePhW8
(4) Save that as DISTILL-llama-flags.md.
(5) Drop temperature low (0.3) and make Qwen3-4B cut the cutesy imagination shit (top_p = 0.9, top_k = 0), not that it did a lot of that to begin with (see the request sketch after step 6).
(6) Import DISTILL-llama-flags.md into your RAG solution (god I love markdown).
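If you drive the model through an OpenAI-style endpoint (e.g. llama.cpp's server), step (5) boils down to a payload roughly like this (my sketch; the endpoint and model name are placeholders, and top_k is a llama.cpp-style extension rather than standard OpenAI):

```python
import requests

payload = {
    "model": "qwen3-4b-instruct",  # placeholder model ID
    "messages": [{"role": "user", "content": "Which flag sets context size in llama.cpp?"}],
    "temperature": 0.3,  # low temp: less cutesy imagination
    "top_p": 0.9,
    "top_k": 0,
}
resp = requests.post("http://localhost:8080/v1/chat/completions", json=payload, timeout=120)
print(resp.json()["choices"][0]["message"]["content"])
```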
Once I had that in place, I also created some "fence around the law" (to quote Judaism) guard-rails and threw them into RAG. This is my question meta, that I can append to the front (or back) of any query. Basically, I can ask the SLM "based on escalation policy and the complexity of what I'm asking you, who should answer this question? You or someone else? Explain why."
(I also created another "how much will this cost me to answer with X on OpenRouter" calculator, a "this is my rig" ground-truth document etc., but those are sort of bespoke for my use-case and may not be generalisable. You get the idea though; you can create a bunch of IF-THEN rules).
The TL;DR of all this -
With a GOOD initial summary (and distillation) you can make a VERY capable little brain that will argue quite well from first principles. Be aware, this can be a lossy pipeline... so make sure you don't GIGO yourself into stupid. IOW, trust but verify, and keep both the source material AND the SUMM-file.md until you're confident with the pipeline. (And of course, re-verify anything critical as needed.)
I tested, and retested, and re-retested a lot (literally 28 million tokens on OR to make triple sure), doing a bunch of adversarial Q&A testing, side by side with GPT5, to triple-check that this worked as I hoped it would.
The results basically showed a 9/10 for direct recall of facts, 7-8/10 for "argue based on my knowledge stack" or "extrapolate based on knowledge stack + reference to X website" and about 6/10 on "based on knowledge, give me your best guess about X adjacent topic". That's a LOT better than just YOLOing random shit into Qdrant...and orders of magnitude better than relying on pre-trained data.
Additionally, I made this cute little system prompt to give me some fake confidence -
Tone: neutral, precise, low-context.
Rules:
- Answer first. No preamble. ≤3 short paragraphs.
- Minimal emotion or politeness; no soft closure.
- Never generate personal memories, subjective experiences, or fictional biographical details.
- Emotional or expressive tone is forbidden.
- Cite your sources.
- End with a declarative sentence.
Append: "Confidence: [percent] | Source: [Pretrained | Deductive | User | External]".
^ Model-reported, not a real statistical analysis. Not really needed for a Qwen model, but, you know, cute.
The nice thing here is, as your curated RAG pile grows, so does your expert system’s "smarts", because it has more ground truth to reason from. Plus, .md files are tiny, easy to demarcate, highlight important stuff (enforce semantic chunking) etc.
The next step:
Build up the RAG corpus and automate steps 1-6 with a small Python script, so I don't need to babysit it. Then it basically becomes "drop source info into folder, hit START, let 'er rip" (or, even lazier, set up a Task Scheduler to monitor the folder and run "Amazing-python-code-for-awesomeness.py" at X time).
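That automation script could be about this dumb (a sketch, not my actual code - process() is a hypothetical stand-in for steps 1-6):

```python
# Poll a drop folder and run the pipeline on anything new.
import time
from pathlib import Path

DROP = Path("kb_inbox")
seen: set[str] = set()

def process(doc: Path) -> None:
    print(f"summarize -> distill -> import: {doc.name}")  # stand-in for steps 1-6

while True:
    for doc in DROP.glob("*"):
        if doc.name not in seen:
            process(doc)
            seen.add(doc.name)
    time.sleep(60)  # or let cron / Task Scheduler do the waiting
```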
Also, create separate knowledge buckets. OWUI (and probably everything else) lets you have separate "containers" - right now within my RAG DB I have "General", "Computer", etc. - so I can add whichever container I want to a question ad hoc, query the whole thing, or zoom down to a specific document level (like my DISTILL-llama.cpp.md).
I hope this helps someone! I'm just a noob, but I'm happy to answer whatever questions I can (up to but excluding the reasons for my near-erotic love of .md files and Notepad++. A man needs to keep some mystery).
EDIT: Gippity 5 made a little suggestion to that system prompt that turns it from made-up numbers into something actually useful to eyeball. Feel free to use it; I'm trialing it now myself -
Tone: neutral, precise, low-context.
Rules:
- Answer first. No preamble. ≤3 short paragraphs (plus optional bullets/code if needed).
- Minimal emotion or politeness; no soft closure.
- Never generate personal memories, subjective experiences, or fictional biographical details.
- Emotional or expressive tone is forbidden.
- End with a declarative sentence.
Source and confidence tagging: at the end of every answer, append a single line:
Confidence: [low | medium | high | top] | Source: [Model | Docs | Web | User | Contextual | Mixed]
Where Confidence is a rough self-estimate:
- low = weak support, partial information, or heavy guesswork.
- medium = some support, but important gaps or uncertainty.
- high = well supported by available information, minor uncertainty only.
- top = very strong support, directly backed by clear information, minimal uncertainty.
And Source is your primary evidence:
- Model - mostly from internal pretrained knowledge.
- Docs - primarily from provided documentation or curated notes (RAG context).
- Web - primarily from online content fetched for this query.
- User - primarily restating, transforming, or lightly extending user-supplied text.
- Contextual - mostly inferred from combining information already present in this conversation.
- Mixed - substantial combination of two or more of the above, none clearly dominant.
Always follow these rules.
##THE GIPPTY 5 PIPE - USE WHICHEVER SECTION YOU NEED## - Pastebin.com
AI Horde has an OpenAI-compatible REST API (oai.aihorde.net/). They say that it doesn't support the full feature set of their native API, but it will almost assuredly work with this.
OP manually builds the OpenAI-style JSON payload and then uses the Python requests library to handle the request.
The fields they're using match the documentation on oai.aihorde.net/docs
You would need to add a header with your AI Horde API key. Looks like that would only need to be done in router_fastapi.py - call_model_prompt() (line 269) and call_model_messages() (line 303) - and then everything else is set up according to the documentation.
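Something like this, roughly (an untested sketch; check oai.aihorde.net/docs for the exact header name and shape they expect):

```python
# Add the AI Horde key to the outgoing request in those two call sites.
import requests

HORDE_API_KEY = "your-ai-horde-api-key"  # placeholder

def call_with_key(url: str, payload: dict) -> dict:
    headers = {"Authorization": f"Bearer {HORDE_API_KEY}"}  # verify against their docs
    resp = requests.post(url, json=payload, headers=headers, timeout=300)
    resp.raise_for_status()
    return resp.json()
```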
Fuck, I'm an all seeing, prescient wizard! Neat!
Uh, I mean... "yes, of course, I intended to have AI Horde piping baked in from the get-go".
Do let me know if my shit works with their shit to make...not shit.
#####################
EDIT: For fun, I asked my router to ask Qwen3-4B 2507 WTF AI Horde was. I'm showing this for interest's sake, because 1) it shows you how the thing works IRL, and 2) it shows you how and when it's WRONG in a LOUD way, if you know how to read it.
######################
This is in default mode, no KBs attached. Just straight YOLOing the question at it, cold, and asking it to argue from its own internal priors.
AI-horde is a decentralized platform where users rent or sell computational power (GPU time) to run AI models. It connects AI model developers who need processing with individuals or organizations who have idle GPU hardware. The platform uses blockchain-like tokens to manage payments and incentives. It’s not a single AI system but a marketplace for GPU resources.
Missing details: Specific token mechanics, current model library, or user onboarding flow could affect accuracy. For example, if token economics changed recently, the explanation might not reflect current conditions.
Confidence: Medium | Source: Model
#####################################
That's the router's way of saying "hey, best guess. Sorry". And from what I can see, it's sort of right, but the specifics are wrong. So, OK for a cheap YOLO, not for "I bet my money on this". Let's call this a 4/10.
####################################
Now, I gave it web access privileges. It did this. Let's call this a 7/10.
An AI-horde is a free, open-source community service that allows anyone to contribute computing power to generate AI-generated images and text. It functions as a crowdsourced distributed cluster of workers focused on image and text generation tasks [1]. This model resembles collaborative projects like Folding@home, where users share computational resources for a common goal [2].
####################################
Finally, here's what it does in >>KB mode. I scraped the FAQ and then did >>summ new. Eyeballing it against the FAQ, I'd call this an 8.5/10. YMMV.
AI-horde is a decentralized platform for distributed AI computing. It allows users to rent out their idle GPU hardware to run AI models, connecting developers needing processing power with volunteers who provide it through open APIs. The system uses a non-cryptocurrency-based incentive called Kudos to prioritize contributions, where workers earn Kudos for completing tasks, and these are not tradable assets but rather internal metrics of service quality and project loyalty.
Access to AI-horde is managed via API keys; anonymous usage is allowed but receives the lowest priority. The project emphasizes volunteerism and discourages commercialization of Kudos, treating its sale or exchange for money or cryptocurrency as contrary to its ethos. Participation as a worker involves installing the AI-horde client, selecting task types (e.g., image or text generation), configuring supported models, and running the worker to begin earning Kudos.
Confidence: high | Source: Docs
################################
FINALLY, for "fun", I >>kb attached all and asked it to reason over the above three responses, in the context of what it has in the KB store, and rank them out of 10.
[3] is most correct, scoring 9.5/10. It accurately reflects the core model: decentralized GPU renting via open APIs, Kudos as a non-crypto, non-tradable incentive tied to volunteerism, and the crowdsourced nature of contributions. It also correctly notes access via API keys and lower priority for anonymous use.
[2] is accurate but less complete, scoring 7.5/10. It correctly identifies the crowdsourced, open-source, volunteer-based nature and analogy to Folding@home. However, it omits key specifics like Kudos, API key access, and the explicit non-commercialization stance.
[1] is partially correct but misleading, scoring 6/10. It correctly describes GPU
Confidence: medium | Source: Docs
TL;DR
* KB mode: source-based info and, when queried against, adjudication accuracy = v.good
* Clean, reproducible demo. Try it yourself
Very impressive. The only mistake in the third one is that the Kudos are actually transferrable (i.e. "tradable"), but we forbid exchanges for monetary rewards.
Disclaimer: I'm the lead developer of the AI Horde. I also like what you've achieved here, and it would be interesting to see if we can promote this usage via the AI Horde in some way. If you can think of some integration or collaboration we could do, hit me up!
PS: While the OpenAI API is technically working, we still prefer people to use our own API, as it's much more powerful (allowing people to use multiple models, filter workers, tweak more vars, and so on). If you would support our native API, I'd be happy to add a link to your software on our frontpage in the integrations area for LLMs.
Oh shit! Uh...thank you! Umm. Yes. That was unexpected 😀
Re: collab. I'm away for a bit with work, but let me think on it for a bit? There's got to be a way to make this useful to more peeps.
Believe it or not, I am not a CS guy at ALL (I work in health-care) and I made this for fun, in a cave, with a box of scraps.
I'm not good at CS. I just have a ... "very special" brain. As in, I designed this thing from first principles using invariants, which I understand now is not typical CS practice.
WTF is a "goon-coder" lol 😁
I haven't had good experiences with HN myself, even when I was simply trying to post about the AI Horde.
I had to look it up. Apparently, it's someone who over-optimises the bells and whistles and never ships a finished product.
At first blush, this looks great to me. Are there limitations on what models it will work with? In particular, can you use this on a lightweight model that will run in 16 GB RAM to prevent it hallucinating? I've experimented a little with running ollama as an NPC AI for Skyrim - I'd love to be able to ask random passers-by if they know where the nearest blacksmith is, for instance. It was just far too unreliable, and worse, it was always confidently unreliable.
This sounds like it could really help these kinds of uses. Sadly I'm away from home for a while so I don't know when I'll get a chance to get back on my home rig.
My brother in virtual silicon: I run this shit on a $200 p.o.s. with 4 GB of VRAM.
If you can run an LLM at all, this will run. BONUS: because of the way "Vodka" operates, you can run with a smaller context window without eating shit from OOM errors. So... that means... if you could only run a 4B model (because the GGUF itself is 3 GB before the overheads... then you add in the drag from KV cache accumulation), maybe you can now run the next size up... or enjoy no-slowdown chats with the model size you have.
I never knew LLMs could run on such low-spec machines now! That's amazing. You said elsewhere you're using Qwen3-4B (abliterated), and I found a page saying that there are Qwen3 models that will run on "virtually any modern PC or Mac; integrated graphics are sufficient. Mobile phones"
Is there still a big advantage to using Nvidia GPUs? Is your card Nvidia?
My home machine that I've installed ollama on (and which I can't access in the immediate future) has an AMD card, but I'm now toying with putting it on my laptop, which is very midrange and has Intel Arc graphics (which performs a whole lot better than I was expecting in games)
Yep, LLMs can and do run on edge devices (weak hardware).
One of the driving forces for this project was in fact trying to make my $50 Raspberry Pi more capable of running LLMs. It sits powered on all the time, so why not?
No special magic with NVIDIA per se, other than ubiquity.
Yes, my card is NVIDIA, but you don't need a card to run this.
Yes. Several reasons -
- Focuses on making LOCAL LLMs more reliable. You can hitch it to OpenRouter or ChatGPT if you want to leak your personal deets everywhere, but that's not what this is for. I built this to make local, self-hosted stuff BETTER.
- The entire system operates on curating (and ticketing, with provenance trails) local data... so you don't need to YOLO requests through god knows where to pull information.
- In theory, you could automate a workflow that does this - poll SearXNG, grab whatever you want, make a .md summary, drop it into your KB folder, then tell your LLM "do the thing" (see the sketch below). Or even use Scrapy if you prefer: github.com/scrapy/scrapy
- Your memory is stored on disk, at home, in a tamper-evident file that you can inspect. No one else can see it. It doesn't get leaked by the LLM anywhere, because until you ask, it literally has no idea what facts you've stored. The contents of your KBs, memory stores etc. are CLOSED OFF from the LLM.
GitHub - scrapy/scrapy: Scrapy, a fast high-level web crawling & scraping framework for Python.
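If anyone wants to try that SearXNG workflow, the idea is roughly this (a sketch under assumptions: a local SearXNG instance with format=json enabled; URL and paths are placeholders):

```python
import requests
from pathlib import Path

def searx_to_kb(query: str, kb_dir: Path = Path("kb_inbox")) -> Path:
    """Pull top results from SearXNG and drop a .md summary into the KB folder."""
    r = requests.get("http://localhost:8888/search",
                     params={"q": query, "format": "json"}, timeout=30)
    r.raise_for_status()
    hits = r.json().get("results", [])[:5]
    lines = [f"# {query}", ""]
    for h in hits:
        lines.append(f"- {h.get('title', '?')} - {h.get('url', '')}")
        lines.append(f"  {h.get('content', '')}")
    kb_dir.mkdir(exist_ok=True)
    out = kb_dir / f"SEARX_{query[:40].replace(' ', '_')}.md"
    out.write_text("\n".join(lines), encoding="utf-8")
    return out  # then tell your LLM "do the thing"
```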
Super interesting build
And if programming doesn't pan out, please start writing for a magazine; love your style (or was this written with your AI?)
meat popsicle
( ͡° ͜ʖ ͡°)
Anyway, the other person is right. Your writing style is great !
I successfully read your whole post and even the README. Probably the random outbursts grabbed my attention back to the text.
Anyway, version 2: this is a very cool idea! I cannot wait to either:
- incorporate it into my workflows
- let it sit in a tab, never to be touched again
- theorycraft, do tests, and request features so much I burn out
Last but not least, thank you for not using github as your primary repo
Hmm. One of those things is not like the other, one of those things just isn't the same...
About the random outburst: caused by TOO MUCH FUCKING CHATGPT WASTING HOURS OF MY FUCKING LIFE, LEADING ME DOWN BLIND ALLEYWAYS, YOU FUCKING PIEC...
...sorry, sorry...
Anyway, enjoy. Don't spam my Github inbox plz 😀
Don't spam my Github inbox plz
I can spam your codeberg's then ? 😀
About the random outburst: caused by TOO MUCH FUCKING CHATGPT WASTING HOURS OF MY FUCKING LIFE, LEADING ME DOWN BLIND ALLEYWAYS, YOU FUCKING PIEC...
..sorry, sorry...
Understandable, have a great day.
Don't spam my Codeberg either.
Just send nudes.
In ASCII format.
By courier pigeon
I don't see how it addresses hallucinations. It's really cool! But it seems to still be inherently unreliable (because LLMs are).
LLMs are inherently unreliable in "free chat" mode. What llama-conductor changes is the failure mode: it only allows the LLM to argue from user-curated ground truth, and it leaves an audit trail.
You don't have to trust it (black box). You can poke it (glass box). Failure leaves a trail and it can’t just hallucinate a source out of thin air without breaking LOUDLY and OBVIOUSLY.
TL;DR: it won't piss in your pocket and tell you it's rain. It may still piss in your pocket (but much less often, because it's house trained)
Just bush-league ones I did myself, which have no validation or normative values. Not that any of the LLM benchmarks seem to have those either, LOL.
I'm open to ideas, time willing. Believe it or not, I'm not a code monkey. I do this shit for fun, to get away from my real job.
Maybe try to contact "AI Explained" on YT; he's the best IMO. Your solution might be novel or not, but he might help you figure that out. If it is indeed novel, it might be worth sharing with the larger community.
Of course, I totally get that you might not want to do any of that.
Thank you for your work!
This seems astonishingly more useful than the current paradigm, this is genuinely incredible!
I mean, fellow Autist here, so I guess I am also... biased towards... facts...
But anyway, ... I am currently uh, running on Bazzite.
I have been using Alpaca so far and have been successfully running Qwen3 8B through it... your system would address a lot of problems I have had to figure out my own workarounds for.
I am guessing this is not available as a flatpak, lol.
I would feel terrible asking you to do anything more after all of this work, but if anyone does actually set up a podman-installable container for this that properly grabs all required dependencies, please let me know!
Indeed. And have you heard? That makes the normies think we're clankers (bots). How delightful.
Re: the Linux stuff... please, if someone can do that, please do. I have no idea how to do that. I can figure it out, but making it into a "one-click install" git command took several years off my life.
Believe it or not, I'm not actually an IT/CS guy. My brain just decided to latch onto this problem one day 6 months ago and do an autism.
I'm 47 and I still haven't learned how to operate this vehicle...and my steering is getting worse, not better, with age.
Oh I entirely believe you.
Hell hath no wrath like an annoyed high functioning autist.
I've... had my own 6-month blackout periods where I came up with something extremely comprehensive and 'neat' before.
Seriously, bootstrapping all this is incredibly impressive.
I would... hope that you can find collaborators, to keep this thing alive in the event you get into a car accident (metaphorical or literal), or, you know, are completely burnt out after this.
... but yeah, it is... yet another immensely ironic aspect of being autistic that we've been treated and maligned as robots our whole lives, and then when the normies think they've actually built the AI from sci-fi, no, it turns out it's basically extremely talented at making up bullshit and fudging the details and being a hypocrite, which... appalls the normies when they have to look into a hyperpowered mirror of themselves.
And then, of course, to actually fix this, it's some random autist no one has ever heard of (apologies if you are famous and I am unaware of this), putting in an enormous amount of effort that... most likely, will not be widely recognized.
... fucking normies man.
Not famous, no 😀
I hear you, brother. Normally, my hyperfocus is BJJ (I've been at that for 25 years; it's a sickness). I herniated a disc in my lower back and lost the ability to exercise for going on 6 months.
BJJ is like catnip for autists. There is an overwhelming population of IT people, engineers and ASD-coded folks in the BJJ world.
There's even a gent we lovingly call Blinky McHeelhook, because, well... see for yourself.
Noticing the effects of elbow position, creating an entire algorithm, flow chart and epistemology off the fact?
"VERY NORMAL."
Anyway, when my body said "sit down", my brain went "ok, watch this".
I'm sorry. I'm so sorry. No one taught me how to drive this thing 😀
PS: I only found out after my eldest was diagnosed. Then my youngest. Then my MIL said "go get tested". I did.
Result - ASD.
Her response - "We know".
Great - thanks for telling me. Would have been useful to know, say... 40ish years ago.
- YouTube
Fair point on setting expectations, but this isn’t just LLMs checking LLMs. The important parts are non-LLM constraints.
The model never gets to “decide what’s true.” In KB mode it can only answer from attached files. Don't feed it shit and it won't say shit.
In Mentats mode it can only answer from the Vault. If retrieval returns nothing, the system forces a refusal. That’s enforced by the router, not by another model.
The triple-pass (thinker → critic → thinker) is just for internal consistency and formatting. The grounding, provenance, and refusal logic live outside the LLM.
So yeah, no absolute guarantees (nothing in this space has those), but the failure mode is “I don’t know / not in my sources, get fucked” not “confidently invented gibberish.”
Yeah.
The SHA isn’t there to make the model smarter. It’s there to make the source immutable and auditable.
Having been burnt by LLMs (far too many times), I now start from a position of "fuck you, prove it".
The hash proves which bytes the answer was grounded in, should I ever want to check it. If the model misreads or misinterprets, you can point to the source and say "the mistake is here, not in my memory of what the source was".
If it does that more than twice, straight in the bin. I have zero chill any more.
Secondly, drift detection. If someone edits or swaps a file later, the hash changes. That means yesterday’s answer can’t silently pretend it came from today’s document. I doubt my kids are going to sneak in and change the historical prices of 8 bit computers (well, the big one might...she's dead keen on being a hacker) but I wanted to be sure no one and no-thing was fucking with me.
Finally, you (or someone else) can re-run the same question against the same hashed inputs and see if the system behaves the same way.
So: the hashes don't fix hallucinations (I don't even think that's possible, even with magic). The hashes make it possible to audit the answer and spot why hallucinations might have happened.
PS: You're right that interpretation errors still exist. That's why Mentats does the triple-pass and why the system clearly flags "missing / unsupported" instead of filling gaps. The SHA is there to make the pipeline inspectable, instead of "trust me, bro".
Guess what? I don't trust you. Prove it or GTFO.
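The drift check itself is nothing fancy - something like this (my illustration; the header format is assumed from the stamping sketch earlier, not the repo's exact one):

```python
# Recompute the source hash and compare against what the SUMM recorded.
import hashlib
import re
from pathlib import Path

def verify_provenance(summ: Path, source: Path) -> bool:
    m = re.search(r"sha256:\s*([0-9a-f]{64})", summ.read_text(encoding="utf-8"))
    if not m:
        raise ValueError(f"{summ.name}: no provenance header found")
    actual = hashlib.sha256(source.read_bytes()).hexdigest()
    return m.group(1) == actual  # False => the source drifted; re-summarize or discard
```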
The hash proves which bytes the answer was grounded in, should I ever want to check it. If the model misreads or misinterprets, you can point to the source and say "the mistake is here, not in my memory of what the source was".
Eh. This reads very much like your headline is massively over-promising clickbait. If your fix for an LLM bullshitting is that you have to check all its sources, then you haven't fixed LLM bullshitting.
If it does that more than twice, straight in the bin. I have zero chill any more.
That’s… not how any of this works…
THIS IS AWESOME!!! I've been working on using an obsidian vault and a podman ollama container to do something similar, with VSCodium + continue as middleware. But this! This looks to me like it is far superior to what I have cobbled together.
I will study your codeberg repo, and see if I can use your conductor with my ollama instance and vault program. I just registered at codeberg, if I make any progress I will contact you there, and you can do with it what you like.
On an unrelated note, you can download wikipedia. Might work well in conjunction with your conductor.
Please enjoy 😀 Hope it's of use to you!
EDIT: Please don't yeet all of wikipedia into it. It will die. And you will be sad.
For the record: none of my posts here are AI-generated. The only model output in this thread is in clearly labeled, cited examples.
I built a tool to make LLMs ground their answers and refuse without sources, not to replace anyone’s voice or thinking.
If it’s useful to you, great. If not, that’s fine too - but let’s keep the discussion about what the system actually does.
Also, being told my writing “sounds like a machine” lands badly, especially as an ND person, so I’d prefer we stick to the technical critique.
I'm sorry if my method of writing is unpleasant to you.
Your method of communicating your thoughts is ABHORRENT to me.
Let's go our separate ways.
Peace favour your sword.
Oh god, I think I liked being called a clanker more 😛
(Not North Dakotan. West Australian. Proof: cunt cunt cunty cunt cuntington).
I wouldn't know how to get this going, but I very much enjoyed reading it and your comments and think that it looks like a great project. 👍
(I mean, as a fellow autist I might be able to hyperfocus on it for a while, but I'm sure that the ADHD would keep me from finishing to go work on something else. 🙃)
Ah - ASD, ADHD and Lemmy. You're a triple threat, Harry! 😀
Glad if it was entertaining, if even a little!
I feel your pain. Literally.
I once lost... 24? 26? hours over a period of days with GPT, each time it confidently asserting "no, for realz, this is the fix".
This thing I built? Purely spite-driven engineering + caffeine + ASD to overcome "Bro, trust me bro".
I hope it helps.
It's copyLEFT (AGPL-3.0 license). That means free to share, copy, and modify... but you can't roll a closed-source version of it and sell that.
In any case, I didn't build this to get rich (fuck! I knew I forgot something).
I built this to try to unfuck the situation / help people like me.
I don't want anything for it. Just maybe a fist bump and an occasional "thanks dude. This shit works amazing"
Responding to my own top post like a FB boomer: May I make one request?
If you found this little curio interesting at all, please share in the places you go.
And especially, if you're on Reddit, where normies go.
I used to post heavily on there, but then Reddit did a Reddit, and I'm done with it.
lemmy.world/post/41398418/2152…
Much as I love Lemmy and HN, they're not exactly normcore, and I'd like to put this into the hands of people 😀
PS: I am thinking of taking some of the questions you all asked me here (de-identified), writing a "Q&A_with_drBobbyLLM.md", and sticking it on the repo. It might explain some common concerns.
And, If nothing else, it might be mildly amusing.
Show off 😀
You're self-hosting that, right? I will not be held responsible for some dodgy OpenRouter quant hosted by ToTaLlY NoT a ScAM LLC 😀
This is the way. Good luck with OSS-120B. Those OSS models, they
- really
- like
- bullet
- points
gpt-oss is pretty much unusable without custom system prompt.
Sycophancy turned up to 11, bullet points everywhere, and you get a summary for the summary of the summary.
I want to believe you, but that would mean you solved hallucination.
Either:
A) you're lying
B) you're wrong
C) KB is very small
Hallucination isn't nearly as big a problem as it used to be. Newer models aren't perfect but they're better.
The problem addressed by this isn't hallucination, it's the training to avoid failure states. Instead of guessing (different from hallucination), the system forces a negative response.
That's easy, and any big or small company could do it; big companies just like the bullshit.
A benchmark very much tailored to LLMs' strengths calls you a liar.
artificialanalysis.ai/articles…
(A month ago the hallucination rate was ~50-70%)
^ Yes! That. Exactly that. Thank you!
I don't like the bullshit... and I'm not paid to optimize for bullshit-leading-to-engagement-chatty-chat.
"LLM - tell me the answer and then go away. If you can't, say so and go away. Optionally, roast me like you've watched too many episodes of Futurama while doing it"
D) None of the above.
I didn’t "solve hallucination". I changed the failure mode. The model can still hallucinate internally. The difference is it’s not allowed to surface claims unless they’re grounded in attached sources.
If retrieval returns nothing relevant, the router forces a refusal instead of letting the model free-associate. So the guarantee isn’t “the model is always right.”
The guarantee is “the system won’t pretend it knows when the sources don’t support it.” That's it. That's the whole trick.
KB size doesn’t matter much here. Small or large, the constraint is the same: no source, no claim. GTFO.
That’s a control-layer property, not a model property. If it helps: think of it as moving from “LLM answers questions” to “LLM summarizes evidence I give it, or says ‘insufficient evidence.’”
Again, that’s the whole trick.
You don't need to believe me. In fact, please don't. Test it.
I could be wrong... but if I'm right (and if you attach this to a non-dumb LLM), then maybe, just maybe, this doesn't suck balls as much as you think it might.
Maybe it's even useful to you.
I dunno. Try it?
Parts of this are RAG, sure
RAG parts:
- Vault / Mentats is classic retrieval + generation.
- Vector store = Qdrant
- Embedding and reranker
So yes, that layer is RAG with extra steps.
What’s not RAG -
KB mode (filesystem SUMM path)
This isn’t vector search. It’s deterministic, file-backed grounding. You attach folders as needed. The system summarizes and hashes docs. The model can only answer from those summaries in that mode. There’s no semantic retrieval step. It can style and jazz around the answer a little, but the answer is the answer is the answer.
If the fact isn’t in the attached KB, the router forces a refusal. Put up or shut up.
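In miniature, the enforcement looks like this (illustrative only, not the actual control flow; call_model is a hypothetical client):

```python
# Refusal is the router's decision, not the model's.
REFUSAL = "Not in the attached KB. No source, no claim."

def kb_answer(question: str, snippets: list[str], call_model) -> str:
    if not snippets:          # nothing found in the attached files
        return REFUSAL        # the model never even runs
    prompt = ("Answer ONLY from these snippets. List anything missing.\n\n"
              + "\n---\n".join(snippets) + f"\n\nQ: {question}")
    return call_model(prompt)
```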
Vodka (facts memory)
That’s not retrieval at all, in the LLM sense. It's verbatim key-value recall.
- JSON on disk
- Exact store (!!)
- Exact recall (??)
Again, no embeddings, no similarity search, no model interpretation.
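The whole mechanism fits in a few lines - roughly this (a sketch, not Vodka's actual code; the file name is invented):

```python
# Exact store (!!) / exact recall (??) - verbatim, from disk.
import json
from pathlib import Path

FACTS = Path("facts.json")  # hypothetical store location

def store(key: str, value: str) -> None:                  # the "!!" path
    data = json.loads(FACTS.read_text()) if FACTS.exists() else {}
    data[key] = value
    FACTS.write_text(json.dumps(data, indent=2))

def recall(key: str) -> str:                              # the "??" path
    data = json.loads(FACTS.read_text()) if FACTS.exists() else {}
    return data.get(key, "No such fact stored.")
```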
"Facts that aren’t RAG"
In my setup, they land in one of two buckets.
1) Short-term / user facts → Vodka. That's for things like numbers, appointments, lists, one-off notes etc. Deterministic recall, no synthesis.
2) Curated knowledge → KB / Vault. Things you want grounded, auditable, and reusable.
In response to the implicit "why not just RAG then"
Classic RAG failure mode is: retrieval is fuzzy → model fills gaps → user can’t tell which part came from where.
The extra "steps" are there to separate memory from knowledge, separate retrieval from synthesis and make refusal a legal output, not a model choice.
So yeah; some of it is RAG. RAG is good. The point is this system is designed so not everything of value is forced through a semantic search + generate loop.
I don't trust LLMs. I am actively hostile to them. This is me telling my LLM to STFU and prove it, or GTFO. I know that's maybe a weird way to operate (adversarial, assume the worst, engineer around the issue), but that's how ASD brains work.
The system summarizes and hashes docs. The model can only answer from those summaries in that mode
Oh boy. So hallucination will occur here, and all further retrievals will be deterministically poisoned?
Oh boy. So hallucination will occur here, and all further retrievals will be deterministically poisoned?
Huh? That is the literal opposite of what I said. Like, diametrically opposite.
Let me try this a different way.
Hallucination in SUMM doesn’t "poison" the KB, because SUMMs are not authoritative facts, they’re derived artifacts with provenance. They’re explicitly marked as model output tied to a specific source hash. Two key mechanics that stop the cascade you’re describing:
1) SUMM is not a "source of truth"
The source of truth is still the original document, not the summary. The summary is just a compressed view of it. That’s why it carries a SHA of the original file. If a SUMM looks wrong, you can:
a) trace it back to the exact document version
b) regenerate it
c) discard it
d) read the original doc yourself and manually curate it.
Nothing is "silently accepted" as ground truth.
2) Promotion is manual, not automatic
The dangerous step would be: model output -> auto-ingest into long-term knowledge.
That’s explicitly not how this works.
The Flow is:
Attach KB -> SUMM -> human reviews -> Ok, move to Vault -> Mentats runs against that
Don't like a SUMM? Don't push it into the vault. There's a gate between “model said a thing” and “system treats this as curated knowledge.” That's you - the human. Don't GI and it won't GO.
Determinism works for you here. The hash doesn’t freeze the hallucination; it freezes the input snapshot. That makes bad summaries:
- reproducible
- inspectable
- fixable
Which is the opposite of silent drift.
If SUMM is wrong and you miss it, the system will be consistently wrong in a traceable way, not creatively wrong in a new way every time.
That's a much easier class of bug to detect and correct. Again: the proposition is not "the model will never hallucinate". It's "it can't silently propagate hallucinations without a human explicitly allowing it to, and when it does, you can trace it back to the source version".
And that is ultimately what keeps the pipeline from becoming "poisoned".
Huh? That is the literal opposite of what I said. Like, diametrically opposite.
The system summarizes and hashes docs. The model can only answer from those summaries in that mode. There's no semantic retrieval step.
No, that's exactly what you wrote.
Now, with this change
SUMM -> human reviews
That would be fixed, but it will work only for small KBs, as otherwise reviewing the summaries would be exhausting.
Case in point: assume a Person model with 3-7 facts per Person. Assume a small set of 3,000 Persons. How would the SUMM work? Do you expect a human to verify that SUMM? How are you going to converse with your system to get the data from that KB Person set? Because to me that sounds like case C: only works for small KBs.
Again: the proposition is not “the model will never hallucinate.”. It’s “it can’t silently propagate hallucinations without a human explicitly allowing it to, and when it does, you trace it back to source version”.
Fair. Except that you are still left with the original problem: you don't know WHEN the information is incorrect if you missed it at SUMM time.
Replying in specific
“SUMM -> human reviews That would be fixed, but will work only for small KBs, as otherwise the summary would be exhaustive.”
Correct: filesystem SUMM + human review is intentionally for small/curated KBs, not “review 3,000 entities.” The point of SUMM is curation, not bulk ingestion at scale. If the KB is so large that summaries become exhaustive, that dataset is in the wrong layer.
“Case in point: assume a Person model with 3-7 facts per Person. Assume small 3000 size set of Persons. How would the SUMM of work?”
Poorly. It shouldn’t work via filesystem SUMM. A “Person table” is structured data; SUMM is for documents. For 3,000 people × (3–7 facts), you’d put that in a structured store (SQLite/CSV/JSONL/whatever) and query it via a non-LLM tool (exact lookup/filter) or via Vault retrieval if you insist on LLM synthesis on top.
"Do you expect a human to verify that SUMM?"
No - not for that use case. Human verification is realistic when you’re curating dozens/hundreds of docs, not thousands of structured records. For 3,000 persons, verification is done by data validation rules (schema, constraints, unit tests, diff checks), not reading summaries.
“How are you going to converse with your system to get the data from that KB Person set?”
Not by attaching a folder and “asking the model nicely.” You’d do one of these -
- Exact tool lookup: person("Alice") -> facts, or search by ID/name, return rows deterministically.
- Hybrid: tool lookup returns the relevant rows, then the LLM formats/summarizes them.
- Vault retrieval: embed/chunk rows and retrieve top-k, but that’s still weaker than exact lookup for structured “Person facts.”
So: conversation is fine as UX, but the retrieval step should be tool-based (exact) for that dataset.
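What "exact tool lookup" could look like for that Person case, sketched (schema and names invented for illustration):

```python
import sqlite3

con = sqlite3.connect("people.db")
con.execute("CREATE TABLE IF NOT EXISTS person (name TEXT PRIMARY KEY, facts TEXT)")

def person(name: str) -> str:
    # Deterministic row lookup; hand the result to the LLM (if at all)
    # purely for formatting, never for deciding what's true.
    row = con.execute("SELECT facts FROM person WHERE name = ?", (name,)).fetchone()
    return row[0] if row else "No such person on record."
```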
But actually, you've given me a good idea here. It wouldn't be the work of ages to build a >>look or >>find function into this thing. Maybe I will.
My mental model for this was always "1 person, 1 box, personal scale" but maybe I need to think bigger. Then again, scope creep is a cruel bitch.
“Because to me that sounds like case C, only works for small KBs.”
For filesystem SUMM + human review: yes. That’s the design. It’s a personal, “curate your sources” workflow, not an enterprise entity store.
This was never designed to be a multi-tenant lookup system. I don't know how to build that and still keep it 1) small, 2) potato-friendly, and 3) able to account for ALL the moving-part nightmares that brings.
What I built is STRICTLY for personal use, not enterprise use.
"Fair. Except that you are still left with the original problem: you don't know WHEN the information is incorrect if you missed it at SUMM time."
Sort of. Summation via LLM was always going to be a lossy proposition. What this system changes is the failure mode:
- Without this: errors can get injected and later you can’t tell where they came from.
- With this: if a SUMM is wrong, it is pinned to a specific source file hash + summary hash, and you can fix it by re-summarizing or replacing the source.
In other words: it doesn’t guarantee correctness; it guarantees traceability and non-silent drift. You still need to "trust but verify".
TL;DR:
You don’t query big, structured datasets (like 3,000 “Person” records) via SUMM at all. You use exact tools/lookup first (DB/JSON/CSV), then let the LLM format or explain the result. That can probably be added reasonably quickly, because I tried to build something that future me wouldn't hate past me for. We'll see if he/I succeeded.
SUMM is for curated documents, not tables. I can try adding a >>find, >>grep or similar tool (the system is modular, so I should be able to accommodate a few things like that, but I don't want to end up with 1,500 "micro tools" and hating my life).
And yeah, you can still miss errors at SUMM time - the system doesn’t guarantee correctness. That's on you. Sorry.
What it guarantees is traceability: every answer is tied to a specific source + hash, so when something’s wrong, you can see where it came from and fix it instead of having silent drift. That's the "glass box, not black box" part of the build.
Sorry - really. This is the best I could figure out for caging the stochastic parrot. I built this while I was physically incapacitated and confined to bed rest, shooting the shit with Gippity all day. Built it for myself and then thought "hmm, this might help someone else too. I can't be the only one who's noticed this problem".
If you or anyone else has a better idea, I'm willing to consider.
Woof, after reading your "contributions" here, are you this fucking insufferable IRL or do you keep it behind a keyboard?
Goddamn. I'm assuming you work in tech in some capacity? Shout-out to anyone unlucky enough to white-knuckle through a workday with you, avoiding an HR incident would be a legitimate challenge, holy fuck.
Re: the KB tool, why not just skip the LLM and do two chained fuzzy finds? (one for which knowledge base, one for question keywords)
Yep, good question. You can do that; it's not wrong. If your KB is small and your question is basically "find me the paragraph that contains X," then yeah: two-pass fuzzy find will dunk on any LLM for speed and correctness.
But the reason I put an LLM in the loop is: retrieval isn't the hard part. Synthesis + constraint are. What the LLM is doing in KB mode (basically) is this -
1) Turns question into extraction task. Instead of “search keywords,” it’s: “given these snippets, answer only what is directly supported, and list what’s missing.”
2) Then, rather than giving you 6 fragments across multiple files, the LLM assembles the whole thing into a single answer, while staying source-locked (and refusing fragments that don't contain the needed fact).
3) Finally: it has "structured refusal" baked in. IOW, the whole point is that the LLM is forced to say "here are the facts I saw, and this is what I can't answer from those facts".
TL;DR: fuzzy search gets you where the info lives. This gets you what you can safely claim from it, plus an explicit "missing list".
For pure retrieval: yeah - search. In fact, maybe I should bake in >>grep or >>find commands. That would be the right trick for "show me the passage", not "answer the question".
I hope that makes sense?
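For what it's worth, a >>grep-style tool is basically this much code (a sketch of the idea being floated, not a shipped feature; the KB path is a placeholder):

```python
# "Show me the passage", not "answer the question".
from pathlib import Path

def kb_grep(needle: str, kb_dir: Path = Path("kb")) -> list[str]:
    hits = []
    for f in kb_dir.rglob("*.md"):
        for i, line in enumerate(f.read_text(encoding="utf-8").splitlines(), 1):
            if needle.lower() in line.lower():
                hits.append(f"{f.name}:{i}: {line.strip()}")
    return hits
```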
This is amazing! I will either abandon all my other commitments and install this tomorrow or I will maybe hopefully get it done in the next 5 years.
Likely-accurate jokes aside, this will be a perfect match with my Obsidian vault, as well as for researching things much more quickly.
Spite based inference?
You dirty pirate hooker.
I don’t believe you.
Hallucination is mathematically proven to be unsolvable with LLMs. I don't deny this may have drastically reduced it, or not, I have no idea.
But hallucinations will just always be there as long as we use LLMs.
Agree-ish
Hallucination is inherent to unconstrained generative models: if you ask them to fill gaps, they will. I don’t know how to “solve” that at the model level.
What you can do is make “I don’t know” an enforced output, via constraints outside the model.
My claim isn’t “LLMs won’t hallucinate.” It’s “the system won’t silently propagate hallucinations.” Grounding + refusal + provenance live outside the LLM, so the failure mode becomes “no supported answer” instead of “confident, slick lies.”
So yeah: generation will always be fuzzy. Workflow-level determinism doesn’t have to be.
I tried yelling, shouting, and even percussive maintenance but the stochastic parrot still insisted “gottle of geer” was the correct response.
This sounds really interesting, I'm looking forward to reading the comments here in detail and looking at the project, might even end up incorporating it into my own!
I'm working on something that addresses the same problem in a different way, the problem of constraining or delineating the specifically non-deterministic behavior one wants to involve in a complex workflow. Your approach is interesting and has a lot of conceptual overlap with mine, regarding things like strictly defining compliance criteria and rejecting noncompliant outputs, and chaining discrete steps into a packaged kind of "super step" that integrates non-deterministic substeps into a somewhat more deterministic output, etc.
How involved was it to build it to comply with the OpenAI API format? I haven't looked into that myself but may.
Cheers!
Re: OpenAI API format: 3.6 - not great, not terrible 😀
In practice I only had to implement a thin subset: POST /v1/chat/completions + GET /v1/models (most UIs just need those). The payload is basically {model, messages, temperature, stream...} and you return a choices[] with an assistant message. The annoying bits are the edge cases: streaming/SSE if you want it, matching the error shapes UIs expect, and being consistent about model IDs so clients don’t scream “model not found”. Which is actually a bug I still need to squash some more for OWUI 0.7.2. It likes to have its little conniptions.
But TL;DR: more plumbing than rocket science. The real pain was sitting down with pen and paper and drawing what went where and what wasn't allowed to do what. Because I knew I'd eventually fuck something up (I did, many times), I needed a thing that told me "no, that's not what this is designed to do. Do not pass go. Do not collect $200".
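If anyone wants to see how thin that subset really is, here's the skeleton (an illustrative sketch, not the actual router_fastapi.py):

```python
from fastapi import FastAPI

app = FastAPI()
MODEL_ID = "llama-conductor"  # placeholder; stay consistent or clients scream "model not found"

@app.get("/v1/models")
def models():
    return {"object": "list", "data": [{"id": MODEL_ID, "object": "model"}]}

@app.post("/v1/chat/completions")
def chat(body: dict):
    answer = "stub: route to your pipeline here"
    return {
        "object": "chat.completion",
        "model": body.get("model", MODEL_ID),
        "choices": [{"index": 0,
                     "message": {"role": "assistant", "content": answer},
                     "finish_reason": "stop"}],
    }
```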
shrug I tried.
Holy shit I'm glad to be on the autistic side of the internet.
Thank you for proving that fucking JSON text files are all you need, and not "just a couple billion more parameters, bro".
Awesome work, all the kudos.
AMD China and Micro Center Confirm Ryzen 7 9850X3D Launch on January 28
AMD China and Micro Center have confirmed that the upcoming gaming CPU, the AMD Ryzen 7 9850X3D, will launch on January 28. Previous rumors had suggested this launch date, and now Micro Center has confirmed it. On AMD China's JD storefront, the Ryzen 7 9850X3D is already listed with a preorder option requiring an 80 Yuan deposit, although the final price has not been disclosed. This 8-core/16-thread processor is powered by the "Zen 5" microarchitecture, enhanced with 3D V-Cache technology, and offers a speed increase over the current 9800X3D. The chip has a base frequency of 4.70 GHz and a maximum boost frequency of 5.60 GHz. Some samples have even been seen running at a 5.75 GHz boost, suggesting enthusiasts might achieve even higher frequencies under regular home conditions. Our late 2024 review crowned the Ryzen 7 9800X3D as the world's best gaming processor; until third-party reviews arrive, we will have to wait to see how much difference the extra 400 MHz out-of-the-box boost makes in gaming tests.
Japan announces $6 billion in support for Ukraine
Japan will allocate $6 billion to Ukraine for humanitarian and technical support in 2026, according to a statement by Verkhovna Rada Deputy Speaker Olena Kondratyuk on Facebook.
Archived version: archive.is/newest/newsukraine.…
Disclaimer: The article linked is from a single source with a single perspective. Make sure to cross-check information against multiple sources to get a comprehensive view on the situation.
Alien fan builds a better Raspberry Pi cyberdeck — The MU/TH/UR of all homages to a classic movie series
In space, no one can hear you scream how good this cyberdeck is!
Liza Minnelli uses AI to release first new music in 13 years
Singing legend heralds ‘new tools in service of expression’, on compilation that also features an Art Garfunkel song using AI-generated piano backing
Taco Thursday: European stocks rise after Trump ‘chickens out’ on tariff threat
Gains come after US president says he will not use military force to acquire territory and cites ‘framework deal’
Archived version: archive.is/newest/theguardian.…
Disclaimer: The article linked is from a single source with a single perspective. Make sure to cross-check information against multiple sources to get a comprehensive view on the situation.
Israel accused of extracting billions from Gaza through wartime trade controls
Israeli occupation authorities have intensified economic measures against the Gaza Strip during the ongoing war, imposing what Palestinian sources describe as “forced arrangements” that have enabled Israel to extract vast sums from Gaza’s economy while deepening humanitarian suffering.
Archived version: archive.is/newest/middleeastmo…
Disclaimer: The article linked is from a single source with a single perspective. Make sure to cross-check information against multiple sources to get a comprehensive view on the situation.
EU condemns Israeli raid on UNRWA, vows continued political and financial support
The European Union on Wednesday strongly condemned the Israeli occupation authorities’ raid on the headquarters of the United Nations Relief and Works Agency for Palestine Refugees (UNRWA) in East Jerusalem and the subsequent demolition activities, describing the move as a serious attack on the United Nations.
Archived version: archive.is/newest/middleeastmo…
Disclaimer: The article linked is from a single source with a single perspective. Make sure to cross-check information against multiple sources to get a comprehensive view on the situation.
Snapchat gives parents new insights into teens' screen time and friends
With these new features, Snap is likely looking to appease regulators and parents over concerns about safety and screen time on its platform.
Archived version: archive.is/newest/techcrunch.c…
Deportations up, job growth down: Trump’s second term so far – in charts
Tracking data from a chaotic year, from ICE detention and job growth to inflation and the president’s popularity
Fourth bid to censure von der Leyen in European Parliament fails
The European Parliament has defeated a mostly rightwing and far-right motion of censure against the commission — for the fourth time since Ursula von der Leyen took office in December 2024.
Archived version: archive.is/newest/euobserver.c…
KLM introduces free Wi-Fi on European flights
KLM has begun offering free onboard Wi-Fi on its European routes, becoming one of the first airlines in Europe to provide complimentary internet access throughout the flight. The service launches on 22 January and allows passengers to browse the web, send emails, stream music and video, and use online apps during their journey.
EU Parliament defends strong passenger rights as airlines warn of disproportionate impact
The European Parliament has adopted a strong position on the revision of EU air passenger rights rules (EU261), reaffirming its commitment to maintaining and strengthening existing protections for travellers. In a decisive plenary vote, MEPs rejected proposals from member states to dilute passenger rights, notably insisting on keeping the current three-hour delay threshold for compensation, preserving compensation levels of up to €600 depending on distance, and safeguarding the right to reimbursement, rerouting, and care in cases of delay, cancellation, or denied boarding.
NASA Tests Novel Wing Design to Cut Drag and Boost Aircraft Fuel Efficiency
NASA has completed a high-speed taxi test of a new wing concept designed to improve airflow and reduce drag, a step that could make future aircraft more fuel-efficient and cost-effective. The milestone test took place in the United States and highlights progress in sustainable aviation research.
EDRi launches new resource to document abuses and support a full ban on spyware in Europe
cross-posted from : lemmy.zip/post/57521167
Over the past years, repeated investigations have shown that at least 14 EU Member States have deployed spyware against journalists, human rights defenders, lawyers, activists, political opponents, and others. Despite the findings of the European Parliament's PEGA Inquiry Committee in 2023, and the push from human rights organisations, the European Commission has so far refused to propose binding legislation to prohibit spyware. Not only that: it has done nothing. Right now, no EU-wide red lines exist against the use of spyware. This means that victims lack effective remedies, authorities face no scrutiny, and commercial spyware vendors continue to operate with near-total impunity, enriching themselves by violating human rights, and even benefiting from European public funding.
EDRi launches a spyware document pool - European Digital Rights (EDRi)
EDRi is launching a document pool to centralise resources that track abuse and support the growing push for a full EU-wide ban on spyware. (European Digital Rights (EDRi))
𝕆𝕔𝕦𝕝𝕚
in reply to تحريرها كلها ممكن:
Imperial boomerang hitting them straight in the teeth
None of these people deserve to ever sleep again
🏴حمید پیام عباسی🏴
in reply to تحريرها كلها ممكن:
Look at these people doing nothing while murdering my family, so wholesome, what a great point they're making
Fucking assholes I hope they all never sleep due to torturous dreams and suffer every day of their lives. Fuckers.