Hind Rajab group urges Greece to arrest 'Israeli' Defense Minister
The Brussels-based human rights organization the Hind Rajab Foundation announced Wednesday that it has filed a formal complaint with Greek authorities calling for the arrest and investigation of 'Israeli' Defense Minister Israel Katz over alleged war crimes committed in Gaza.
Katz has been visiting Athens since Monday on an official trip scheduled to end Thursday. The complaint was submitted to the Greek Supreme Court prosecutor, urging urgent legal action due to the short duration of Katz’s stay.
Hind Rajab asserts that Katz’s policies and conduct amount to acts of genocide, war crimes, and crimes against humanity under Article 2 of the Genocide Convention and Article 6 of the Rome Statute of the International Criminal Court. The complaint emphasizes that Greece’s jurisdiction and legal obligations are activated while Katz is present on Greek soil.
I'm tired of LLM bullshitting. So I fixed it.
Hello!
As a handsome local AI enjoyer™, you’ve probably noticed one of the big flaws with LLMs:
They lie. Confidently. ALL THE TIME.
(Technically, they “bullshit” - link.springer.com/article/10.1…)
I’m autistic and extremely allergic to vibes-based tooling, so … I built a thing. Maybe it’s useful to you too.
The thing: llama-conductor
llama-conductor is a router that sits between your frontend (OWUI / SillyTavern / LibreChat / etc) and your backend (llama.cpp + llama-swap, or any OpenAI-compatible endpoint). Local-first (because fuck big AI), but it should talk to anything OpenAI-compatible if you point it there (note: experimental so YMMV).
Not a model, not a UI, not magic voodoo.
A glass-box that makes the stack behave like a deterministic system, instead of a drunk telling a story about the fish that got away.
TL;DR: “In God we trust. All others must bring data.”
Three examples:
1) KB mechanics that don’t suck (1990s engineering: markdown, JSON, checksums)
You keep “knowledge” as dumb folders on disk. Drop docs (.txt, .md, .pdf) in them. Then:
- `>>attach <kb>` — attaches a KB folder
- `>>summ new` — generates `SUMM_*.md` files with SHA-256 provenance baked in, and moves the original to a sub-folder
Now, when you ask something like:
“yo, what did the Commodore C64 retail for in 1982?”
…it answers from the attached KBs only. If the fact isn’t there, it tells you - explicitly - instead of winging it. E.g.:
The provided facts state the Commodore 64 launched at $595 and was reduced to $250, but do not specify a 1982 retail price. The Amiga’s pricing and timeline are also not detailed in the given facts. Missing information includes the exact 1982 retail price for Commodore’s product line and which specific model(s) were sold then. The answer assumes the C64 is the intended product but cannot confirm this from the facts.
Confidence: medium | Source: Mixed
No vibes. No “well probably…”. Just: here’s what’s in your docs, here’s what’s missing, don't GIGO yourself into stupid.
And when you’re happy with your summaries, you can:
`>>move to vault` — promote those SUMMs into Qdrant for the heavy mode.
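To make the provenance bit concrete, here's a toy sketch of the idea in Python (illustration only - not llama-conductor's actual code, and the header layout here is invented):

```python
# Toy sketch: stamp a summary with the SHA-256 of the source it came from.
# Not the router's real code; the header format is invented for illustration.
import hashlib
from pathlib import Path

def stamp_summary(source: Path, summary_text: str, out_dir: Path) -> Path:
    digest = hashlib.sha256(source.read_bytes()).hexdigest()
    header = f"<!-- source: {source.name} -->\n<!-- sha256: {digest} -->\n\n"
    out = out_dir / f"SUMM_{source.stem}.md"
    out.write_text(header + summary_text, encoding="utf-8")
    # Any claim in the SUMM can later be traced to the exact bytes that
    # produced it by re-hashing the archived original and comparing.
    return out
```

Checksums and markdown: 1990s engineering, as promised.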
2) Mentats: proof-or-refusal mode (Vault-only)
Mentats is the “deep think” pipeline against your curated sources. It’s enforced isolation:
- no chat history
- no filesystem KBs
- no Vodka
- Vault-only grounding (Qdrant)
It runs triple-pass (thinker → critic → thinker). It’s slow on purpose. You can audit it. And if the Vault has nothing relevant? It refuses and tells you to go pound sand:
FINAL_ANSWER:
The provided facts do not contain information about the Acorn computer or its 1995 sale price.
Sources: Vault
FACTS_USED: NONE
[ZARDOZ HATH SPOKEN]

Also yes, it writes a mentats_debug.log, because of course it does. Go look at it any time you want.
The flow is basically: Attach KBs → SUMM → Move to Vault → Mentats. No mystery meat. No “trust me bro, embeddings.”
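If you want the shape of that triple-pass in your head, here's a toy sketch (invented function names and prompts; the real pipeline is Vault-grounded and more involved):

```python
# Toy sketch of the thinker -> critic -> thinker loop. `ask` is any callable
# that sends a prompt to an LLM endpoint; the prompts here are invented.
def mentats(question: str, facts: list, ask) -> str:
    if not facts:  # nothing relevant promoted to the Vault? refuse.
        return "FINAL_ANSWER:\nNo relevant facts in Vault.\nFACTS_USED: NONE"
    ctx = "\n".join(facts)
    draft = ask(f"Answer ONLY from these facts:\n{ctx}\n\nQ: {question}")
    critique = ask(f"Attack this answer. List every unsupported claim:\n{draft}")
    return ask(
        f"Facts:\n{ctx}\n\nDraft:\n{draft}\n\nCritique:\n{critique}\n"
        "Rewrite the answer; drop anything the facts don't support."
    )
```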
3) Vodka: deterministic memory on a potato budget
Local LLMs have two classic problems: goldfish memory + context bloat that murders your VRAM.
Vodka fixes both without extra model compute. (Yes, I used the power of JSON files to hack the planet instead of buying more VRAM from NVIDIA).
- `!!` stores facts verbatim (JSON on disk)
- `??` recalls them verbatim (TTL + touch limits so memory doesn’t become landfill)
- CTC (Cut The Crap) hard-caps context (last N messages + char cap) so you don’t get VRAM spikes after 400 messages
So instead of:
“Remember my server is 203.0.113.42” → “Got it!” → [100 msgs later] → “127.0.0.1 🥰”
you get:
!! my server is 203.0.113.42
`?? server ip` → 203.0.113.42 (with TTL/touch metadata)
And because context stays bounded: stable KV cache, stable speed, your potato PC stops crying.
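Mechanically, both halves are almost embarrassingly simple. A minimal sketch (field names are invented; the real Vodka does more TTL/touch bookkeeping and smarter matching):

```python
# Toy sketch of Vodka: verbatim fact store (!! / ??) plus CTC context capping.
import json, time
from pathlib import Path

STORE = Path("facts.json")

def remember(key: str, value: str, ttl_s: int = 30 * 86400) -> None:
    facts = json.loads(STORE.read_text()) if STORE.exists() else {}
    facts[key] = {"value": value, "expires": time.time() + ttl_s, "touches": 0}
    STORE.write_text(json.dumps(facts))           # !! store verbatim on disk

def recall(key: str):
    facts = json.loads(STORE.read_text()) if STORE.exists() else {}
    hit = facts.get(key)
    if not hit or hit["expires"] < time.time():
        return None                               # expired memory is landfill
    hit["touches"] += 1
    STORE.write_text(json.dumps(facts))
    return hit["value"]                           # ?? recall verbatim

def cut_the_crap(messages: list, max_msgs: int = 20, max_chars: int = 8000):
    tail = messages[-max_msgs:]                   # last-N message window
    while len(tail) > 1 and sum(len(m["content"]) for m in tail) > max_chars:
        tail.pop(0)                               # hard character cap
    return tail
```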
There’s more (a lot more) in the README, but I’ve already over-autism’ed this post.
TL;DR:
If you want your local LLM to shut up when it doesn’t know and show receipts when it does, come poke it:
- Primary (Codeberg): codeberg.org/BobbyLLM/llama-co…
- Mirror (GitHub): github.com/BobbyLLM/llama-cond…
PS: Sorry about the AI slop image. I can't draw for shit.
PPS: A human with ASD wrote this using Notepad++. If the formatting is weird, now you know why.
llama-conductor
Route workflows, not models. Glass-box, not black-box. Squash LLM nonsense.
Codeberg.org
AMD China and Micro Center Confirm Ryzen 7 9850X3D Launch on January 28
AMD China and Micro Center have confirmed that the upcoming gaming CPU, the AMD Ryzen 7 9850X3D, will launch on January 28. Previous rumors had suggested this launch date, and Micro Center has now confirmed it. On AMD China's JD storefront, the Ryzen 7 9850X3D is already listed with a preorder option requiring an 80 Yuan deposit, although the final price has not been disclosed. This 8-core/16-thread processor is powered by the "Zen 5" microarchitecture, enhanced with 3D V-Cache technology, and offers a speed increase over the current 9800X3D. The chip has a base frequency of 4.70 GHz and a maximum boost frequency of 5.60 GHz. Some samples have even been seen running at a 5.75 GHz boost, suggesting that enthusiasts may be able to push frequencies even higher under ordinary home conditions. Our late 2024 review crowned the Ryzen 7 9800X3D as the world's best gaming processor, so we will need gaming tests to determine how much of a difference the extra 400 MHz out-of-the-box clock speed makes before drawing further conclusions. Until third-party reviews arrive, we will have to wait.
Japan announces $6 billion in support for Ukraine
Japan will allocate $6 billion to Ukraine for humanitarian and technical support in 2026, according to a statement by Verkhovna Rada Deputy Speaker Olena Kondratyuk on Facebook.
Archived version: archive.is/newest/newsukraine.…
Disclaimer: The article linked is from a single source with a single perspective. Make sure to cross-check information against multiple sources to get a comprehensive view on the situation.
Alien fan builds a better Raspberry Pi cyberdeck — The MU/TH/UR of all homages to a classic movie series
In space, no one can hear you scream how good this cyberdeck is!
Liza Minnelli uses AI to release first new music in 13 years
Singing legend heralds ‘new tools in service of expression’, on compilation that also features an Art Garfunkel song using AI-generated piano backing
The birth of self-hosting
Chronicles of an admin in the Fediverse, 2021 → today
I've been in the Fediverse since 2021.
At the beginning everything was strange and new, and I was disoriented, like an explorer without a compass in a galaxy full of avatars and federation.
I landed on the biggest Italian instance, and there I truly paid my dues: I observed, I studied, I learned how things work. It helped me enormously; it was a crash course in "federated life".
Then, after a series of health problems, I dropped everything, only to come back on two other instances and find different managements, different styles, different rules. In short: same universe, completely different planets.
All well and good, but at a certain point I made the inevitable choice: build my own instance, with my own rules, while still talking to everyone, because the point isn't to close yourself off, it's to federate well.
In 2024 snowfan.masto.host was born. Some friends followed me, and there too I paid my dues as an admin: moderation, maintenance, order in the chaos.
Over time, though, one detail started to feel confining: the technical side was handled by masto.host - updates, backups, changes. Don't get me wrong, they're excellent, but it's like renting a car: it works, you drive it, it's comfortable, but it isn't yours.
And then comes the leap: November 2025, snowfan.it is born.
This time it's all mine, with the technical management in my hands, not on a simple VPS but on a dedicated machine, a VDS. No longer a passenger: I'm the mechanic, the driver, and the guy keeping the fire extinguisher close.
It wasn't easy at first, but I had a lot of free time and, by necessity, I learned a lot.
I now manage 3 servers, 2 VPS and 1 VDS, running Mastodon, Pixelfed, Matrix, SNAC2 and searXNG.
I have no interest in growing bigger. On the contrary, for me the real Fediverse is decentralization as fine-grained as possible: a myriad of small and medium instances that no single actor can dominate.
That said, here I am. Thanks to all the inhabitants of the Fediverse.
Mastodon
Italian instance open to Friends who request it. -WARNING- NO-Threads/BlueSky instance. Mastodon hosted on snowfan.it
The credit-based licence for businesses and self-employed workers
Since 1 October 2024, a points-based (credit) licence has been mandatory for businesses and self-employed workers operating on temporary or mobile construction sites. Exemptions are provided for certain categories.
Who must hold the licence:
* All businesses and self-employed workers carrying out work on temporary or mobile construction sites.
In my view, yet another levy at the expense of self-employed workers and small businesses. Indeed:
Who is exempt:
* Those who only supply materials or services.
* Those providing services of an intellectual nature (e.g. design, consultancy, engineers, surveyors...).
* Businesses holding an SOA qualification certificate of class III or higher (i.e. businesses of a certain size, with a certain turnover).
And this last one is a good one. In my view, blatant discrimination. Why? Can't a materials supplier deliver substandard or defective materials that cause a structural failure because quality control was neglected?
Can't an engineer get a structural calculation wrong and cause a building to collapse? Who is he to be exempt?
Small businesses and artisans are ever more harassed by politics. The others are not. What a disgrace! Laws like these, in my view, reveal the full decay of the politics and of the legislators we entrust with running the country.
RRF News of 22 01 26. Trump, Greenland, tariffs. The UN for a fee. Sicily in pieces after the typhoon. Sport
Sick of posts telling you enshittification happened to another thing? !deshittification@thebrainbin.org
Stopped poking my head into communities like !technology@lemmy.world because the frequent bad-news posts, followed by rightfully upset comments, were depressing and stressful. And it just felt worse that there was so much outrage, but usually not a comment telling you alternative options or small actions you could take to try to resist the enshittification—rage with no productive outlet besides complaints online. So I'm popping my head back in here, because I feel it would be useful to post somewhere that tries to give you those things and has a rule against posting bad news with no action we can take to maybe claw back value for ourselves: !deshittification@thebrainbin.org
If you must talk about enshittification, please include how someone could reverse it; otherwise, post it elsewhere
Overall sidebar:
State of affairs for technology seems bad? But what is being done to change it? And what could be done by you so it changes, at least for yourself?
A community to talk about the reversal of enshittification, be it news, actions that could be taken, etc.
A person that sees no solution has no reason to keep going.
Hope others here are heartened by it as I was.
Full disclosure: I don't mod this community or otherwise have power in it. I just post sometimes.
Prigionieri del nostro destino: solitude in the post-pandemic era, by Lorenzo Zucchi
Title: Prigionieri del nostro destino
Author: Lorenzo Zucchi
Publication date: 25 May 2025
Publisher: Edizioni Underground
Pages: 206
Mauro lives an ordinary life in Sesto San Giovanni: an apparently united family, a job as a household-appliance technician, and an obsession with crime stories and late-night social media. When lockdown stops the world, Mauro keeps moving between houses and courtyards, but his mind derails. The boundary between reality and fantasy thins, between repressed desires and ambiguous encounters with three young women: Emily, Flora and Christelle, his "Three Graces". In the unreal silence of a switched-off city, Mauro loses touch with everything, even with himself. The return of the "chronicler of the invisible", with a dark, psychological novel that blends irony, melancholy and suspense to portray urban loneliness, desire that consumes, and the thin line between what we are and what we might become.
Prigionieri del nostro destino by Lorenzo Zucchi is a novel that explores the scars left by the pandemic, immersing us in a silent, abandoned Milan where the air of fear, unease and loneliness becomes palpable. Zucchi shows us how the pandemic years changed not only the face of our cities but also family dynamics and interpersonal relationships. An emotional journey that leads us to reflect on the true meaning of loneliness and isolation, themes as universal as they are devastating.
Loneliness as the Central Theme
In the novel, the protagonists are Antonella, Mauro, their children and Costantin, a friend of Mauro's who acts as a support through the difficult period of the pandemic. Loneliness is the thread that binds their stories together. Mauro loses himself in his monotonous job, which becomes a refuge from the chaos outside but also from his family life. He lets himself be captured by youthful dreams and illusions while his family drifts further and further away. Antonella, for her part, seeks an escape, but her search for a new dream cannot fill the void she feels inside.
The pandemic amplified this loneliness, but the truth is that, for many, it was already there. It is not only the walls of our homes that separate people, but invisible walls too, often built by the frantic pace of daily life. In this context, loneliness is no longer just a personal experience but a collective emotion, shared by everyone, even the youngest.
The Pandemic's Wound on the Youngest
A particularly strong theme in the book concerns the devastating effects of the pandemic on children. The deprivation of social contact, the closure of schools and the separation from friends and family had a direct impact on their psychological and relational development. Zucchi explores this aspect with delicacy, showing how the pandemic undermined young people's growth, not only educationally but also emotionally. Loneliness, in a period when social interaction was drastically limited, inflicted scars that may not heal easily.
A Return to Nature
The author told me that Prigionieri del nostro destino was born precisely during those years of forced isolation. What the pandemic made evident was the need to slow down and rediscover what really matters, such as contact with nature. Perhaps one positive aspect of that sad parenthesis was the return to simplicity and to the beauty of small everyday gestures, like walking barefoot on the grass. Zucchi, who often walks barefoot, recounts how this simple act has been a source of inspiration for him, a way of reconnecting with a wilder, more primitive part of himself. A powerful message: we must have the courage to discover our wild side and return to the roots of what we are.
A Novel of Psychological Depth
Prigionieri del nostro destino is not just a novel about the pandemic, but a deep reflection on loneliness, on the importance of human bonds, and on the impact isolation can have on our psyche. Zucchi's writing, dense with emotion and reflection, does not stop at first impressions but goes further, pushing the reader to confront their own experiences and fears. It is a book that invites us to reflect on our existential condition and makes us understand how essential it is not merely to live beside others, but to live with them, building authentic connections.
If there is one thing we can learn from this novel, it is that loneliness, though amplified by Covid, is a condition many of us carry inside. The pandemic did not invent loneliness; it made it more visible, more tangible. Prigionieri del nostro destino invites us not to remain prisoners of that loneliness, but to keep seeking contact, dialogue, love. Because only then can we truly be free.
Loneliness in the novel Prigionieri del nostro destino
Discover how the pandemic affected the lives of Antonella, Mauro and Costantin in a profound story of Milan during the pandemic.
Gloria Donati (Magozine.it)
Explainer: Why gas plays a minimal role in China’s climate strategy
Explainer: Why gas plays a minimal role in China’s climate strategy - Carbon Brief
While gas could play a role in decarbonising some aspects of China’s energy demand, multiple factors would need to change
Carbon Brief Staff (Carbon Brief)
X is also launching Bluesky-like starter packs
X is also launching Bluesky-like starter packs
X is rolling out a new feature called “Starterpacks” to all users in the coming weeks, the company’s head of product has announced.
Mariella Moon (Engadget)
Dell i7 Laptop Price Guide – Refurbished Dell i7 Laptops from Eazypc
Anthropic releases new AI Constitution for Claude
Claude's new constitution
A new approach to a foundational document that expresses and shapes who Claude is
www.anthropic.com
Ripping Blu-Rays
Ahoy everybody.
Lately I was thinking about buying Blu-Rays to actually own the media that I enjoy watching. I need some recommendations for player/ripper hardware so I can both make a backup and share what I get with you guys.
Let me know of something good and preferably cheap that I can get in the EU. Thanks, guys 😀
MakeMKV for ripping it. You can either buy it or use it free by going to the forums and getting the monthly reg key.
If you then want to reencode, Handbrake. I've given up on reencoding my personal collection and bought a couple of large hard drives. But to make sharing easier, you might want to reencode to x265 or AV1 to manage the file size.
You might also want to consider a LibreDrive. It's a Blu-ray/UHD drive with custom firmware that region-unlocks it and helps the ripping process. You can find reputable sellers in the MakeMKV forums.
Recommendations for federated CMS alternatives to Wordpress?
Crossposted from kbin.earth/m/fediverse@lemmy.z…
Crossposting to get more input (on the actual issues)
Fedi folks, I turn to you for advice with a bit of a problem. I co-admin an ActivityPub-enabled Wordpress site with 15+ years' worth of blog posts and a couple of long podcast series. When WP announced their "vision" to become a CMS for "AI", the collective admin reaction was to get the hell off that boat before it turned to algorithmic shit.

[NB — I realise this isn't an anti-"AI" community, but that part is only the starting premise of our situation here. I'm not getting into discussions with slop herders in the comments]
We're a loose network of nerds discussing the speculative genres, including sci-fi. We've seen this movie, we know how it's going to play out. Trouble is, we're not coders. We can assemble the proverbial IKEA flat-pack kit and give it a lick of CSS paint, but we can't be trusted to build furniture we'd want to sit in ourselves.
The crunch points for alternatives are
- the ability to migrate an old, multi-user WP site without breaking too many canonical URLs and feeds,
- needing a somewhat familiar backend for most of the non-techie contributors to even post stuff, and
- the federation bit, which is why I post this to !fediverse first. I am aware that setting up an essentially new fedi instance at the same address as a previous one is discouraged. I'll be glad to hear how or if this can be avoided while preserving profile and post URLs...
So last month I mined the Mastodon hive mind for existing alternatives to WP with fediverse capabilities, and got a selection of qualified responses. I nixed WriteFreely and Plume early on, because while they are perfectly good, federated blog software, my impression is they lean toward a text-focused, minimalist layout that would be hard to deviate from, where our current site has a bit more pizzazz.
Going through the alternatives listed below, maybe that's a superficial reason to throw some good options out with the bath water. Either way, I'm presenting you with the most frequent, feasible, and/or interesting offers. I've done some surface research and weighed pros and cons for our use case, but I hope there are people out there who can add their experience to the eventual decision:
ClassicPress
This should be a shoo-in, right? It's basically Wordpress with some newer parts torn out (specifically the Gutenberg block editor), but most of the core architecture remains. Including many, many plug-ins. Plus, they're said to have sworn off any "AI" nonsense. Migration would be relatively easy, and with a little bit of luck nobody would even know the difference.

Except apparently compatibility with the WP-ActivityPub plugin broke. So that's out of the window.
Ghost
A lot of recommendations for Ghost! I believe it was originally another Wordpress fork, but was completely rewritten early on? Either way, a few things turn me off Ghost as a potential alternative:
- The insistent "we help you monetize your content" vibe on the project website. That's a personal quibble; our site is just entirely non-commercial for the sake of everybody's well-being. I'm told all of that stuff can be turned off in individual installs, though.
- Ghost's ActivityPub implementation is reportedly not making great progress despite enthusiastic early announcements? If that's not a deal breaker,
- the fact that the Ghost devs are relying on agentic LLMs to code the application is. Just nope.
Backdrop CMS/Drupal
From what I'm told, Drupal is a step up the CMS learning curve from Wordpress, but since they're projects that have coexisted for a long time, there are established and tried migration methods from one to the other.

I'm not exactly on top of Drupal's ActivityPub implementation, though. But even if that's in a workable shape, Drupal is trying to pitch itself as "the best AI-powered Open Source CMS in the world". Which, to me, is like saying you only put the sharpest razor blades available in kids' Hallowe'en candy.
One user involved in the Backdrop CMS fork from Drupal 7 made convincing arguments for that over later Drupal versions, so here's hoping they drank the right (i.e., federated, not algorithmic) Kool-Aid.
Hubzilla
Now, this may be the most exciting but also most challenging alternative. Hubzilla is a fairly advanced, and in some ways mold-breaking, Fediverse application. From the same developer who made Friendica and (streams), and, if I understand correctly, based on the same core principles.

In contrast to Wordpress and Drupal, Hubzilla declares itself "a CMS which doesn't use LLM / AI". Can't say I don't appreciate that signalling! And of course the whole package revolves around federation. But wait.
The CMS part may be technically correct, but as far as I can tell making Hubzilla present as a plain blog or website requires some advanced stylesheet finagling — and the application only comes with one official, microblog-esque theme. I haven't found any open projects trying to bridge that visual gap, but will appreciate your tips about them if they exist.
For Hubzilla to be a feasible alternative here, we will also need to be able to migrate existing posts, media, users and comments from Wordpress. Preferably in a way that doesn't mess up permalinks too badly. A quick glance at Hubzilla URLs indicates that the entire architecture is very different. I assume concepts like "channels" substitute for "authors"(?), but I don't know where we are with WP terms like taxonomies.
So there's a challenge, and I'm hoping others have tried (and hopefully succeeded in) that particular migration... or at least have advice to offer.
Bonus: Bonfire
I'm putting this on the table because I expect somebody is going to suggest it in the comments. Like Hubzilla, Bonfire looks really interesting as a Swiss army knife for the Fediverse: You want to make a blog? Take these modules. A community forum? Try these other ones. It's federated first, and it seems to make good headway toward its goals.

But the official CMS flavour is still in development; we have no idea about migration possibilities, and honestly? If the more mature Hubzilla will be a challenge, I'm fairly certain this is a step further out of our comfort zone. This is totally an "us" problem, not a Bonfire one.
So, thoughts? Specifically practical advice on Hubzilla and/or/versus Backdrop, which I think are the most realistic avenues right now. But there may be alternatives I just didn't see even though they're right in front of me.
I'm ready to have my mind changed on WriteFreely, or to hear about something completely new to me. Mostly though, I'm hoping for replies that consider the massive history of posts and comments that we look to import into the next generation of our site.
Thanks in advance!
Retail stores still selling the same overpriced junk since at least 2019 and even pretending it's on sale
(Price is in € EUR)
For context, six months ago I bought a renewed Thinkpad X395 for exactly this price, and I got: an actually decent CPU and not something as powerful as a Wii, 16 GB of RAM, 256 GB of actual M.2 SSD, a really nice 1080p touchscreen, really nice build quality with metal, and a nice backlit keyboard.
Heck, even the cheap laptop I bought in May 2020 was much better than this, and it was even brand new for the same price.
I know this CPU very well; for this price you are getting something that has trouble playing a YouTube video in 1080p at 60 FPS and can't even run the latest version of Minecraft above 10 FPS.
Now imagine this combined with Windows 11 and only 4 GB of RAM...
No, this is not because of the current hardware crisis, this is pure greed.
But hey, 1 year of Microslop 365 is included!
Yeah, Intel Celeron + 4 GB RAM is a total slogfest on Windows; Linux makes it a bit more bearable.
If they are made to be as cheap as possible, why don't they just drop the Windows license and preload them with Linux Mint or something?
It wasn't that bad on Linux - maybe that was the best-case scenario - but still, doing anything took forever.
So what happened with Venezuela?
cross-posted from: lemmygrad.ml/post/10455030
Is it still sovereign?

What about the Maduros?
Who is in charge there and are they kowtowing to the United States, doing a bit of both (their own thing and pacifying the United States), or are they resolutely against USA influence?
Are there still air strikes?
Apps helping boycott US goods gain popularity in Denmark
UdenUSA is currently the fourth most downloaded app in Denmark on the App Store, the American ChatGPT is in fifth place
alternativeto.net/lists/42568/…
European Alternatives
We help you find European alternatives for digital services and products, like cloud services and SaaS products.
European Alternatives
Apps helping boycott US goods gain popularity in Denmark
UdenUSA is currently the fourth most downloaded app in Denmark on the App Store, the American ChatGPT is in fifth place
TASS
LibreFind: the Android app that finds FOSS alternatives to proprietary applications
LibreFind was created with a very clear goal: to help Android users quickly identify which of their installed applications are not free software, and which open-source alternatives could replace them.
The app analyses the device, compares the installed packages against a database hosted on Firebase Firestore, and returns an ordered list of proprietary software together with relevant FOSS suggestions. The idea is simple but powerful, because it gives you an immediate overview of how free your phone actually is and lets you intervene with more informed choices.
...
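The matching step itself is conceptually tiny. A rough sketch for illustration (the real app is Android/Kotlin against a Firestore database; the package names and mapping below are made-up examples):

```python
# Illustrative sketch of LibreFind's core idea: flag installed proprietary
# packages and suggest FOSS replacements from a lookup table.
ALTERNATIVES = {
    "com.google.android.youtube": ["NewPipe"],
    "com.whatsapp": ["Conversations", "Element"],
}

def scan(installed_packages: list) -> list:
    return [(pkg, ALTERNATIVES[pkg])
            for pkg in installed_packages if pkg in ALTERNATIVES]

print(scan(["com.whatsapp", "org.fdroid.fdroid"]))
# -> [('com.whatsapp', ['Conversations', 'Element'])]
```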
GitHub - jksalcedo/librefind: Find FOSS alternatives to Proprietary Android Apps
Find FOSS alternatives to Proprietary Android Apps - jksalcedo/librefind
GitHub
What's your go to simple desktop photo editor (a la snapseed)?
Yes, I know snapseed is a mobile app, but that's the kind of simplicity I'm looking for. Pre-made filters, an auto-fix button, adjustment sliders, etc.
I have Image Toolbox on mobile and even that's a bit over the top with options (and I still haven't found sliders for brightness, contrast, saturation, shadows etc. in that maze).
Linux or Windows programs are fine, I run both.
Trump-Greenland Deal Reportedly Includes U.S. ‘Sovereignty Over Small Pockets’ of Territory
This is mine, that is yours, that's yours too but only if there's nothing valuable there. If there is, that's mine too.
You mean America owns it?
NO ME!!!
Trump-Greenland Deal Reportedly Includes U.S. ‘Sovereignty Over Small Pockets’ of Territory
The NATO statement came amid new reporting from the New York Times that the deal may include the U.S. being given “sovereignty over small pockets” of land in Greenland.
Alex Griffing (Mediaite)
Millions of people imperiled through sign-in links sent by SMS
Millions of people imperiled through sign-in links sent by SMS
Even well-known services with millions of users are exposing sensitive data.
Dan Goodin (Ars Technica)
ICE agents drew guns on off-duty officer in Minnesota, chief says
ICE agents drew guns on off-duty officer in Minnesota, chief says
Multiple police chiefs in Minnesota said their officers were among those being targeted by immigration agents operating in the state.
USA TODAY
RRF Caserta PC Made Easy. Improving the readability of on-screen text with ClearType
Energy from the sky with infrared beams: all about the (successful) test
Energy from the sky with infrared beams: all about the (successful) test
A Cessna in the Pennsylvania wind transmitted energy to the ground using infrared beams. A first concrete step toward space-based solar power.
Gianluca Riccio (FuturoProssimo)
Apps for boycotting American products surge to the top of the Danish App Store
European consumers are fighting back against the U.S. following Trump’s threats to take control of Greenland, a Danish territory. As a result, two mobile apps that offer a way to determine if products are made in America, then suggest local alternatives, have surged to the top of the Danish App Store in recent days.
The boost in downloads comes as Danish consumers have been organizing a grassroots boycott of American-made products, which also included canceling their U.S. vacations and ditching their subscriptions to U.S.-based streaming services, like Netflix.
Across both iOS and Android, two apps, NonUSA and Made O’Meter, have entered the top 10 this month, according to new data from market intelligence provider Appfigures.
Apps for boycotting American products surge to the top of the Danish App Store | TechCrunch
Two origin ID apps, NonUSA and Made O'Meter, are seeing downloads surge as Europeans boycott US-made goods.
Sarah Perez (TechCrunch)
Minnesota rising
Friday could be a seminal moment in this new civil rights movement.

Minnesota unions, religious groups, and ordinary citizens are planning a massive statewide strike and economic boycott.
The Ice Out of Minnesota website declares:
It is time to suspend the normal order of business to demand immediate cessation of ICE actions in MN, accountability for federal agents who have caused loss of life and abuse to Minnesota residents and call for Congress to immediately intervene.

Friday, January 23rd will be a statewide day of non-violent moral action, reflection: no work, no school, no shopping — only community, conscience, and collective action.
There will be a unified, statewide pause in daily economic activity. Instead, Minnesotans will spend time with family, neighbors, and their community to show Minnesota’s moral heart and collective economic power. This means:
- No work (except emergency services)
- No school
- No shopping or consumer spending

There will be a peaceful march and rally in downtown Minneapolis at 2:00pm.
The weather forecast is brutal, with below-zero temperatures expected all day Friday along with windchill temperatures descending into the 30s below zero.
Minnesota rising
A massive statewide strike and economic boycott is set for Friday
Dan Froomkin (Heads Up News)
CEO of Palantir Says AI Means You'll Have to Work With Your Hands Like a Peasant
CEO of Palantir Says AI Means You’ll Have to Work With Your Hands Like a Peasant
Speaking at the World Economic Forum, Alex Karp said the majority of humanity will be working in manufacturing and vocational jobs.
Joe Wilkins (Futurism)
Does shooting billionaires count as manual labor?
Does providing for those who shoot billionaires count?
NZ Treasury: "The likely effect would therefore be to increase house prices"
I stumbled across a 2020 OIA request to NZ Treasury where someone asked:
what analysis Treasury has done on the KiwiSaver First Home scheme affecting house prices, and how much taxpayer money gets transferred into the housing stock
Treasury released a few internal docs and they basically say that increasing caps would lead to higher house prices and that subsidies for renters/buyers tend to be captured by landlords/sellers instead of improving affordability long-term.
The advice was apparently ignored.
Cuban Detainee in El Paso ICE Facility Died by Homicide, Autopsy Shows
The report from the county medical examiner said the detainee, Geraldo Lunas Campos, was asphyxiated and restrained by law enforcement. Federal officials described his death as a suicide.
A Cuban immigrant’s death in an El Paso detention center this month was ruled a homicide, according to an autopsy report released Wednesday by the county medical examiner’s office.
The detainee, Geraldo Lunas Campos, 55, became unresponsive while he was physically restrained by law enforcement on Jan. 3 at the Immigration and Customs Enforcement facility called Camp East Montana, the report said. Emergency medical workers tried to resuscitate him, but he was pronounced dead at the scene.
The autopsy listed the cause of death as “asphyxia due to neck and torso compression.” The report also described injuries Mr. Lunas Campos had sustained to his head and neck, including burst blood vessels in the front and side of the neck, as well as on his eyelids.
New York Times - Bias and Credibility - Media Bias/Fact Check
LEFT-CENTER BIAS These media sources have a slight to moderate liberal bias. They often publish factual information that utilizes loaded words (wording that attempts to influence an audience by appeals to emotion or stereotypes) to favor liberal cau…
Media Bias Fact Check
The New World Situation: The decline of U.S. imperialism and the centrality of the class struggle
cross-posted from: news.abolish.capital/post/2174…
The writer is the First Secretary of Workers World Party. In assessing the new world situation, we should start here in the U.S. At this moment, the epicenter of the struggle is Minneapolis. What’s happening there poses a fundamental question that is germane to the changing world situation, the global . . .
From Workers World via This RSS Feed.
The New World Situation: The decline of U.S. imperialism and the centrality of the class struggle
The writer is the First Secretary of Workers World Party. In assessing the new world situation, we should start here in the U.S. At this moment, the epicenter of the struggle is Minneapolis.Workers World
SuspciousCarrot78
in reply to itkovian

Good question.
It doesn’t “correct” the model after the fact. It controls what the model is allowed to see and use before it ever answers.
There are basically three modes, each stricter than the last. The default is "serious mode" (governed by serious.py). Low temp, punishes chattiness and inventiveness, forces it to state context for whatever it says.
Additionally, Vodka (made up of two sub-modules - "cut the crap" and "fast recall") operates at all times. Cut the crap trims context so the model only sees a bounded, stable window. You can think of it like a rolling summary of what's been said. That summary isn't LLM-generated either - it's concatenation (dumb text matching), so no made-up vibes.
Fast recall OTOH stores and recalls facts verbatim from disk, not from the model’s latent memory.
It writes what you tell it to a text file and then, when you ask about it, spits it back out verbatim (!! / ??).
And that's the baseline.
In KB mode, you make the LLM answer based on the above settings + with reference to your docs ONLY (in the first instance).
When you >>attach <kb>, the router gets stricter again. Now the model is instructed to answer only from the attached documents.
Those docs can even get summarized via an internal prompt if you run >>summ new, so that extra details are stripped out and you are left with just baseline who-what-where-when-why-how.
The SUMM_*.md files come with SHA-256 provenance, so every claim can be traced back to a specific origin file (which gets moved to a subfolder).
TL;DR: If the answer isn’t in the KB, it’s told to say so instead of guessing.
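In toy-Python terms, the baseline-vs-KB difference comes down to what prompt the router builds (invented strings here, not the real router's prompts):

```python
# Toy sketch of "correction by routing": the model only sees what the mode
# allows, and KB mode bakes the refusal instruction into the prompt itself.
def build_prompt(question: str, kb_chunks=None) -> str:
    if kb_chunks is None:                          # default / serious mode
        return f"Answer tersely. State the basis for each claim.\n\nQ: {question}"
    ctx = "\n---\n".join(kb_chunks)                # KB mode: docs ONLY
    return (
        "Answer ONLY from the facts below. If the answer is not in them, "
        "say exactly what is missing. Do not guess.\n\n"
        f"FACTS:\n{ctx}\n\nQ: {question}"
    )
```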
Finally, Mentats mode (Vault / Qdrant). This is the "I am done with your shit" path.
It's all three of the above PLUS a counter-factual sweep.
It runs ONLY on stuff you've promoted into the vault.
What it does is take your question and form it in a particular way so that all of the particulars must be answered in order for there to BE an answer. Any part missing or not in context? No soup for you!
In step 1, it runs that past the thinker model. The answer is then passed on to a "critic" model (a different LLM). That model's job is to look at the thinker's output and say "bullshit - what about xyz?".
It sends that back to the thinker...who then answers and provides the final output. But if it CANNOT answer the critic's questions (based on the stored info), it will tell you. No soup for you, again!
TL;DR:
The “corrections” happen by routing and constraint. The model never gets the chance to hallucinate in the first place, because it literally isn’t shown anything it’s not allowed to use. Basic premise - trust but verify (and I've given you all the tools I could think of to do that).
Does that explain it better? The repo has a FAQ but if I can explain anything more specifically or clearly, please let me know. I built this for people like you and me.
SuspciousCarrot78
in reply to itkovian

God, I hope so. Else I just pissed 4 months up the wall and shouted a lot of swears at my monitor for nada 😀
Let me know if it works for you
SuspciousCarrot78
in reply to FrankLaskey

On the stuff you use the pipeline/s on? About 85-90% in my tests.
Just don't GIGO (Garbage in, Garbage Out) your source docs...and don't use a retarded LLM.
That's why I recommend Qwen3-4B 2507 Instruct. It does what you tell it to (even the abliterated one I use).
Random Sexy-fun-bot900-HAVOK-MATRIX-1B.gguf? I couldn't say 😀
SuspciousCarrot78
in reply to FrankLaskey

Comment removed by (auto-mod?) cause I said sexy bot. Weird.
Restating again:
On the stuff you use the pipeline/s on? About 85-90% in my tests. Just don't GIGO (Garbage In, Garbage Out) your source docs...and don't use a dumb LLM. That's why I recommend Qwen3-4B 2507 Instruct. It does what you tell it to (even the abliterated one I use).
7toed
in reply to SuspciousCarrot78

Please elaborate, that alone piqued my curiosity. Pardon me if I could've searched.
SuspciousCarrot78
in reply to 7toed

Yes, of course.
Abliterated is a technical LLM term meaning "safety refusals removed".
Basically, abliteration removes the security theatre that gets baked into LLM like chatGPT.
I don't like my tools deciding for me what I can and cannot do with them.
I decide.
Anyway, the model I use has been modified with a newer, less lobotomy inducing version of abliteration (which previously was a risk).
huggingface.co/DavidAU/Qwen3-4…
According to validation I've seen online (and of course, I tested it myself), it's lost next to zero "IQ" and dropped refusals by about...90%.
In fact, in some domains it's actually a touch smarter, because it doesn't try to give you "perfect" model answers. In maths reasoning, for example, where the answer is basically impossible, it will say "the answer is impossible. Here's the nearest workable solution based on context" instead of getting stuck in a self-reinforcing loop, trying to please you, and then crashing.
In theory, that means you could ask it for directions on how to cook Meth and it would tell you.
I'm fairly certain the devs didn't add the instructions for that in there, but if they did, the LLM won't go "sorry, I can't tell you, Dave".
Bonus: with my harness over the top, you'd have an even better idea if it was full of shit (it probably would be, because, again, I'm pretty sure they don't train LLM on Breaking Bad).
Extra double bonus: If you fed it exact instructions for cooking meth, using the methods I outlined? It will tell you exactly how to cook Meth, 100% of the time.
Say...you...uh...wanna cook some meth? 😛
PS: if you're more of a visual learner, this might be a better explanation
DavidAU/Qwen3-4B-Hivemind-Instruct-NEO-MAX-Imatrix-GGUF at main
huggingface.co

BaroqueInMind
in reply to SuspciousCarrot78

I have no remarks, just really amused by your writing in your repo.
Going to build a Docker and self host this shit you made and enjoy your hard work.
Thank you for this!
SuspciousCarrot78
in reply to BaroqueInMind

Thank you ❤
Please let me know how it works...and enjoy the >>FR settings. If you've ever wanted to be trolled by Bender (or a host of other 1990s / 2000s era memes), you'll love it.
SuspciousCarrot78
in reply to Diurnambule

There are literally dozens of us. DOZENS!
I'm on a potato, so I can't attach it to something super sexy, like a 405B or a MoE.
If you do, please report back.
PS: You may see (in the docs) occasional references to MoA that slipped past me. That doesn't stand for Mixture of Agents. That stood for "Mixture of Assholes". That's always been my mental model for this.
Or, in the language of my people, this was my basic design philosophy:
YOU (question)-> ROUTER+DOCS (Ah shit, here we go again. I hate my life)
|
ROUTER+DOCS -> Asshole 1: Qwen ("I'm right")
|
ROUTER+DOCS -> Asshole 2: Phi ("No, I'm right")
|
ROUTER+DOCS -> Asshole 3: Nanbeige ("Idiots, I'm right!")
|
ROUTER+DOCS (Jesus, WTF. I need booze now) <- (all assholes)
|
--> YOU (answer)
(this could have been funnier if the ASCII actually worked but man...Lemmy borks that)
EDIT: If you want to be boring about it, it's more like this
pastebin.com/gNe7bkwa
PS: If you like it, let other people in other places know about it.
llama-conductor goes brrrr - Pastebin.com
Pastebin

SuspciousCarrot78
in reply to SpaceNoodle

LOL. Don't do that. Wikipedia is THE noisiest source.
Would you like me to show you HOW and WHY the SUMM pathway works? I built it after I tried a "YOLO Wikipedia in that shit - done, bby!". It...ended poorly
MNByChoice
in reply to SuspciousCarrot78

Not OP, but random human.
Glad you tried the "YOLO Wikipedia", and are sharing that fact as it saves the rest of us time. 😀
SuspciousCarrot78
in reply to SpaceNoodle

Of course. Here is a copy-paste from my now-defunct reddit account. Feel free to follow the pastebin links to see what v1 of SUMM did. What the router uses is v1.1:
########
My RAG
I've recently been playing around with making my SLMs more useful and reliable. I'd like to share some of the things I did, so that perhaps it might help someone else in the same boat.
Initially, I had the (obvious, wrong) idea that "well, shit, I'll just RAG dump Wikipedia and job done". I trust it's obvious why that's not a great idea (retrieval gets noisy, chunks lack context, model spends more time sifting than answering).
Instead, I thought to myself "why don't I use the Didactic Method to teach my SLMs what the ground truth is, and then let them argue from there?". After all, Qwen3-4B is pretty good with its reasoning...it just needs to not start from a position of shit.
The basic work flow -
TLDR
Details
(1) Create a "model answer" --> this involves creating a summary of source material (like, say, a markdown document explaining launch flags for llama.cpp). You can do this manually or use any capable local model to do it, but for my testing, I fed the source info straight into Gippity 5 with a specific "make me a good summary of this, hoss" prompt
Like so: pastebin.com/FaAB2A6f
(2) Save that output as SUMM-llama-flags.md. You can copy-paste it into Notepad++ and do it manually if you need to.
(3) Once the summary has been created, use a local "extractor" and "formatter" model to batch-extract high-yield information (into JSON) and then convert that into a second distillation (markdown). I used Qwen3-8B for this.
Extract prompt pastebin.com/nT3cNWW1
Format prompt (run directly on that content after the model has finished its extraction) pastebin.com/PNLePhW8
(4) Save that as DISTILL-llama-flags.md.
(5) Drop the temperature low (0.3) and make Qwen3-4B cut the cutesy imagination shit (top_p = 0.9, top_k = 0), not that it did a lot of that to begin with.
(6) Import DISTILL-llama-flags.md into your RAG solution (god I love markdown).
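(Aside: step 5 is just request parameters if you're serving the model with llama.cpp's llama-server. A minimal example - the port and model name are placeholders, and top_k rides along as a llama.cpp extension to the OpenAI schema:)

```python
# Minimal call against a local llama-server with the step-5 sampler settings.
import requests

resp = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",   # default llama-server port
    json={
        "model": "qwen3-4b",                        # placeholder name
        "messages": [{"role": "user", "content": "Which flag sets context size?"}],
        "temperature": 0.3,                         # low temp, less imagination
        "top_p": 0.9,
        "top_k": 0,                                 # disable top-k filtering
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```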
Once I had that in place, I also created some "fence around the law" (to quote Judaism) guard-rails and threw them into RAG. This is my question meta, which I can append to the front (or back) of any query. Basically, I can ask the SLM "based on escalation policy and the complexity of what I'm asking you, who should answer this question? You or someone else? Explain why."
pastebin.com/rDj15gkR
(I also created another "how much will this cost me to answer with X on Open Router" calculator, a "this is my rig" ground truth document etc but those are sort of bespoke for my use-case and may not be generalisable. You get the idea though; you can create a bunch of IF-THEN rules).
The TL;DR of all this -
With a GOOD initial summary (and distillation) you can make a VERY capable little brain, that will argue quite well from first principles. Be aware, this can be a lossy pipeline...so make sure you don't GIGO yourself into stupid. IOW, trust but verify and keep both the source material AND SUMM-file.md until you're confident with the pipeline. (And of course, re-verify anything critical as needed).
I tested, and retested, and re-retested a lot (literally 28 million tokens on OR to make triple sure), doing a bunch of adversarial Q&A testing, side by side with GPT5, to triple-check that this worked as I hoped it would.
The results basically showed a 9/10 for direct recall of facts, 7-8/10 for "argue based on my knowledge stack" or "extrapolate based on knowledge stack + reference to X website" and about 6/10 on "based on knowledge, give me your best guess about X adjacent topic". That's a LOT better than just YOLOing random shit into Qdrant...and orders of magnitude better than relying on pre-trained data.
Additionally, I made this this cute little system prompt to give me some fake confidence -
Tone: neutral, precise, low-context.
Rules:
- Answer first. No preamble. ≤3 short paragraphs.
- Minimal emotion or politeness; no soft closure.
- Never generate personal memories, subjective experiences, or fictional biographical details.
- Emotional or expressive tone is forbidden.
- Cite your sources.
- End with a declarative sentence.
- Append: "Confidence: [percent] | Source: [Pretrained | Deductive | User | External]".

^ model-reported, not a real statistical analysis. Not really needed for the Qwen model, but you know, cute.
The nice thing here is, as your curated RAG pile grows, so does your expert system’s "smarts", because it has more ground truth to reason from. Plus, .md files are tiny, easy to demarcate, highlight important stuff (enforce semantic chunking) etc.
The next step:
Build up the RAG corpus and automate steps 1-6 with a small python script, so I don't need to baby sit it. Then it basically becomes "drop source info into folder, hit START, let'er rip" (or even lazier, set up a Task Scheduler to monitor the folder and then run "Amazing-python-code-for-awesomeness.py" at X time).
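A minimal sketch of that watch-folder loop (hypothetical names; it assumes steps 1-6 are wrapped up in one summarise_and_distill() function):

```python
# Toy watch-folder automation: drop a source doc in, let 'er rip.
import time
from pathlib import Path

INBOX = Path("kb_inbox")
DONE = INBOX / "processed"

def summarise_and_distill(doc: Path) -> None:
    ...  # steps 1-6 from above go here

def watch(poll_s: int = 60) -> None:
    DONE.mkdir(parents=True, exist_ok=True)        # also creates INBOX
    while True:
        for doc in INBOX.glob("*.md"):
            summarise_and_distill(doc)
            doc.rename(DONE / doc.name)            # archive the original
        time.sleep(poll_s)
```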
Also, create separate knowledge buckets. OWUI (probably everything else too) lets you have separate "containers" - right now within my RAG DB I have "General", "Computer" etc - so I can add whichever container I want to a question, ad hoc, query the whole thing, or zoom down to a specific document level (like my DISTILL-llama.cpp.md)
I hope this helps someone! I'm just a noob, but I'm happy to answer whatever questions I can (up to but excluding the reasons for my near-erotic love of .md files and Notepad++. A man needs to keep some mystery).
EDIT: Gippity 5 made a little suggestion to that system prompt that turns it from made up numbers to something actually useful to eyeball. Feel free to use; I'm trialing it now myself
##THE GIPPTY 5 PIPE - USE WHICHEVER SECTION YOU NEED## You are a conversation - Pastebin.com
Pastebin

db0
in reply to SuspciousCarrot78

AI Horde
aihorde.net

FauxLiving
in reply to db0

AI Horde has an OpenAI-compatible REST API (oai.aihorde.net/). They say that it doesn't support the full feature set of their native API, but it will almost assuredly work with this.
OP manually builds the oapi JSON payload and then uses the Python requests library to handle the request.
The fields they're using match the documentation on oai.aihorde.net/docs
You would need to add a header with your AI Horde API key. It looks like that would only need to be done in router_fastapi.py - call_model_prompt() (line 269) and call_model_messages() (line 303) - and then everything else is set up according to the documentation.
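Roughly, the patch might look like this (an untested sketch, not a verified diff against the repo; AI Horde's native API reads an "apikey" header and "0000000000" is its anonymous key, but check oai.aihorde.net/docs for what the OpenAI shim actually expects):

```python
# Sketch: add the AI Horde key to the outgoing request in router_fastapi.py.
import os
import requests

HORDE_KEY = os.environ.get("AI_HORDE_API_KEY", "0000000000")  # anon key

def call_model_messages(payload: dict) -> dict:
    resp = requests.post(
        "https://oai.aihorde.net/v1/chat/completions",
        json=payload,
        headers={"apikey": HORDE_KEY},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()
```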
FastAPI - Swagger UI
oai.aihorde.net

SuspciousCarrot78
in reply to FauxLiving

Fuck, I'm an all-seeing, prescient wizard! Neat!
Uh, I mean... "yes of course, I intended to have Ai Horde piping baked in from the get go".
Do let me know if my shit works with their shit to make...not shit.
#####################
EDIT: For fun, I asked my router to ask Qwen3-4B 2507 WTF AI Horde was. I'm showing this for interest's sake, because 1) it shows you how the thing works IRL and 2) it shows you how and when it's WRONG in a LOUD way, if you know how to read it.
######################
This is in default mode, no KBs attached. Just straight YOLOing the question at it, cold, and asking it to argue from its own internal priors.
AI-horde is a decentralized platform where users rent or sell computational power (GPU time) to run AI models. It connects AI model developers who need processing with individuals or organizations who have idle GPU hardware. The platform uses blockchain-like tokens to manage payments and incentives. It’s not a single AI system but a marketplace for GPU resources.
Missing details: Specific token mechanics, current model library, or user onboarding flow could affect accuracy. For example, if token economics changed recently, the explanation might not reflect current conditions.
Confidence: Medium | Source: Model
#####################################
That's the router's way of saying "hey, best guess. Sorry". And from what I can see, it's sort of right, but the specifics are wrong. So: OK for a cheap YOLO, not for "I bet my money on this". Let's call this a 4/10
####################################
Now, I gave it web access privileges. It did this. Let's call this 7/10
An AI-horde is a free, open-source community service that allows anyone to contribute computing power to generate AI-generated images and text. It functions as a crowdsourced distributed cluster of workers focused on image and text generation tasks [1]. This model resembles collaborative projects like Folding@home, where users share computational resources for a common goal [2].
####################################
Finally, here's what it does in >>KB mode. I scraped the FAQ and then did >>summ new. Eyeballing it against the FAQ, let's call this 8.5/10. YMMV
AI-horde is a decentralized platform for distributed AI computing. It allows users to rent out their idle GPU hardware to run AI models, connecting developers needing processing power with volunteers who provide it through open APIs. The system uses a non-cryptocurrency-based incentive called Kudos to prioritize contributions, where workers earn Kudos for completing tasks, and these are not tradable assets but rather internal metrics of service quality and project loyalty.
Access to AI-horde is managed via API keys; anonymous usage is allowed but receives the lowest priority. The project emphasizes volunteerism and discourages commercialization of Kudos, treating its sale or exchange for money or cryptocurrency as contrary to its ethos. Participation as a worker involves installing the AI-horde client, selecting task types (e.g., image or text generation), configuring supported models, and running the worker to begin earning Kudos.
Confidence: high | Source: Docs
################################
FINALLY, for "fun", I >>kb attached all and asked it to reason over the above three responses, in the context of what it has in the KB store, and rank them out of 10.
[3] is most correct, scoring 9.5/10. It accurately reflects the core model: decentralized GPU renting via open APIs, Kudos as a non-crypto, non-tradable incentive tied to volunteerism, and the crowdsourced nature of contributions. It also correctly notes access via API keys and lower priority for anonymous use.
[2] is accurate but less complete, scoring 7.5/10. It correctly identifies the crowdsourced, open-source, volunteer-based nature and analogy to Folding@home. However, it omits key specifics like Kudos, API key access, and the explicit non-commercialization stance.
[1] is partially correct but misleading, scoring 6/10. It correctly describes GPU
Confidence: medium | Source: Docs
TL;DR
* KB mode: source-based info and when queried against, adjudication accuracy = v.good
* Clean, reproducible demo. Try it yourself
db0
in reply to SuspciousCarrot78

Very impressive. The only mistake in the third one is that the kudos are actually transferable (i.e. "tradable"), but we forbid exchanges for monetary rewards.
Disclaimer: I'm the lead developer for the AI Horde. I also like you've achieved here and would be interesting if we can promote this usage via the AI Horde in some way. If you can think of some integration or collaboration we could do, hit me up!
PS: While the OpenAI API is technically working, we still prefer people to use our own API as it's much more powerful (allowing people to use multiple models, filter workers, tweak more vars) and so on. If you would support our native API, I'd be happy to add a link to your software in our frontpage in the integrations area for LLMs.
SuspciousCarrot78
in reply to db0 • • •Oh shit! Uh...thank you! Umm. Yes. That was unexpected 😀
Re: collab. I'm away for a bit with work, but let me think on it for a bit? There's got to be a way to make this useful to more peeps.
Believe it or not, I am not a CS guy at ALL (I work in health-care) and I made this for fun, in a cave, with a box of scraps.
I'm not good at CS. I just have a ... "very special" brain. As in, I designed this thing from first principles using invariants, which I understand now is not typical CS practice.
db0
in reply to SuspciousCarrot78 • • •SuspciousCarrot78
in reply to db0 • • •db0
in reply to SuspciousCarrot78 • • •WTF is a "goon-coder" lol 😁
I haven't had good experiences with HN myself, even when I was simply trying to post about the AI Horde.
SuspciousCarrot78
in reply to db0 • • •I had to look it up. Apparently, it's someone who over-optimises the bells and whistles and never ships a finished product.
gooncode.dev/
GoonCode | Beyond vibe coding
rollin
in reply to SuspciousCarrot78 • • •At first blush, this looks great to me. Are there limitations with what models it will work with? In particular, can you use this on a lightweight model that will run in 16 Gb RAM to prevent it hallucinating? I've experimented a little with running ollama as an NPC AI for Skyrim - I'd love to be able to ask random passers-by if they know where the nearest blacksmith is for instance. It was just far too unreliable, and worse it was always confidently unreliable.
This sounds like it could really help these kinds of uses. Sadly I'm away from home for a while so I don't know when I'll get a chance to get back on my home rig.
SuspciousCarrot78
in reply to rollin • • •My brother in virtual silicon: I run this shit on a $200 p.o.s with 4gb of VRAM.
If you can run an LLM at all, this will run. BONUS: because of the way "Vodka" operates, you can run with a smaller context window without eating shit of OOM errors. So...that means.. if you could only run a 4B model (because the GGUF itself is 3GBs without the over-heads...then you add in the drag from the KV cache accumulation).. maybe you can now run next sized up model...or enjoy no slow down chats with the model size you have.
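Back-of-envelope for the curious, since "the KV cache eats your VRAM" can sound hand-wavy. The dims below are my assumed numbers for a generic 4B-class model with grouped-query attention, not any particular GGUF:

```python
# Rough KV-cache cost: 2 tensors (K and V) per layer, per KV head,
# per head-dim element, per token, at 2 bytes each for an fp16 cache.
n_layers, n_kv_heads, head_dim = 36, 8, 128   # assumed 4B-class dims
bytes_per_elem = 2                            # fp16

def kv_cache_gib(ctx_len: int) -> float:
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem / 1024**3

print(kv_cache_gib(8192))   # ~1.1 GiB on top of the weights
print(kv_cache_gib(2048))   # ~0.3 GiB - a smaller window buys real headroom
```

Halve the window, halve the cache. That's where the headroom for the next model size comes from.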
rollin
in reply to SuspciousCarrot78 • • •I never knew LLMs can run on such low-spec machines now! That's amazing. You said elsewhere you're using Qwen3-4B (abliterated), and I found a page saying that there are Qwen3 models that will run on "Virtually any modern PC or Mac; integrated graphics are sufficient. Mobile phones"
Is there still a big advantage to using Nvidia GPUs? Is your card Nvidia?
My home machine that I've installed ollama on (and which I can't access in the immediate future) has an AMD card, but I'm now toying with putting it on my laptop, which is very midrange and has Intel Arc graphics (which performs a whole lot better than I was expecting in games)
SuspciousCarrot78
in reply to rollin • • •Yep, LLMs can and do run on edge devices (weak hardware).
One of the driving forces for this project was in fact trying to make my $50 Raspberry Pi more capable of running an LLM. It sits powered on all the time, so why not?
No special magic with NVIDIA per se, other than ubiquity.
Yes, my card is NVIDIA, but you don't need a card to run this.
null
in reply to SuspciousCarrot78 • • •als
in reply to SuspciousCarrot78 • • •SuspciousCarrot78
in reply to als • • •Yes. Several reasons -
GitHub - scrapy/scrapy: Scrapy, a fast high-level web crawling & scraping framework for Python.
FrankLaskey
in reply to als • • •SuspciousCarrot78
in reply to FrankLaskey • • •Angel Mountain
in reply to SuspciousCarrot78 • • •Super interesting build
And if programming doesn't pan out please start writing for a magazine, love your style (or was this written with your AI?)
SuspciousCarrot78
in reply to Angel Mountain • • •Karkitoo
in reply to SuspciousCarrot78 • • •( ͡° ͜ʖ ͡°)
Anyway, the other person is right. Your writing style is great!
I successfully read your whole post and even the README. Probably the random outbursts grabbed my attention back to the text.
Anyway version 2: this is a very cool idea! I cannot wait to either:
- incorporate it into my workflows
- let it sit in a tab to never be touched ever again
- theorycraft, do tests and request features so much as to burn out
Last but not least, thank you for not using github as your primary repo
SuspciousCarrot78
in reply to Karkitoo • • •Hmm. One of those things is not like the other, one of those things just isn't the same...
About the random outburst: caused by TOO MUCH FUCKING CHATGPT WASTING HOURS OF MY FUCKING LIFE, LEADING ME DOWN BLIND ALLEYWAYS, YOU FUCKING PIEC...
...sorry, sorry...
Anyway, enjoy. Don't spam my Github inbox plz 😀
Karkitoo
in reply to SuspciousCarrot78 • • •I can spam your codeberg's then ? 😀
Understandable, have a great day.
SuspciousCarrot78
in reply to Karkitoo • • •Don't spam my Codeberg either.
Just send nudes.
In ASCII format.
By courier pigeon
CIA_chatbot
in reply to SuspciousCarrot78 • • •SuspciousCarrot78
in reply to CIA_chatbot • • •CIA_chatbot
in reply to SuspciousCarrot78 • • •SuspciousCarrot78
in reply to CIA_chatbot • • •Alvaro
in reply to SuspciousCarrot78 • • •Anarki_
in reply to Alvaro • • •SuspciousCarrot78
in reply to Alvaro • • •LLMs are inherently unreliable in “free chat” mode. What llama-conductor changes is the failure mode: it only allows the LLM to argue from user curated ground truth and leaves an audit trail.
You don't have to trust it (black box). You can poke it (glass box). Failure leaves a trail and it can’t just hallucinate a source out of thin air without breaking LOUDLY and OBVIOUSLY.
TL;DR: it won't piss in your pocket and tell you it's rain. It may still piss in your pocket (but much less often, because it's house trained)
bilouba
in reply to SuspciousCarrot78 • • •SuspciousCarrot78
in reply to bilouba • • •Just bush-league ones I did myself, that have no validation or normative values. Not that any of the LLM benchmarks seem to have those either LOL
I'm open to ideas, time willing. Believe it or not, I'm not a code monkey. I do this shit for fun to get away from my real job
bilouba
in reply to SuspciousCarrot78 • • •Maybe try to contact "AI Explained" on YT, he's the best IMO. Your solution might be novel or not but he might help you figuring that. If it is indeed novel, it might be worth it to share it with the larger community.
Of course, I totally get that you might not want to do any of that.
Thank you for your work!
wolfrasin
in reply to SuspciousCarrot78 • • •Hey Human,
Thank you!
SuspciousCarrot78
in reply to wolfrasin • • •sp3ctr4l
in reply to SuspciousCarrot78 • • •This seems astonishingly more useful than the current paradigm, this is genuinely incredible!
I mean, fellow Autist here, so I guess I am also... biased towards... facts...
But anyway, ... I am currently uh, running on Bazzite.
I have been using Alpaca so far, and have been successfully running Qwen3 8B through it... your system would address a lot of problems I have had to figure out my own workarounds for.
I am guessing this is not available as a flatpak, lol.
I would feel terrible to ask you to do anything more after all of this work, but if anyone does actually set up a podman installable container for this that actually properly grabs all required dependencies, please let me know!
SuspciousCarrot78
in reply to sp3ctr4l • • •Indeed. And have you heard? That makes the normies think were clankers (bots). How delightful.
Re: the Linux stuff...please, if someone can do that, please do. I have no idea how to do that. I can figure it out but making it into a "one click install" git command took several years off my life.
Believe it or not, I'm not actually an IT / CS guy. My brain just decided to latch onto this problem one day 6 months ago and do an autism.
I'm 47 and I still haven't learned how to operate this vehicle...and my steering is getting worse, not better, with age.
sp3ctr4l
in reply to SuspciousCarrot78 • • •Oh I entirely believe you.
Hell hath no wrath like an annoyed high functioning autist.
I've ... had my own 6 month black out periods where I came up with something extremely comprehensive and 'neat' before.
Seriously, bootstrapping all this is incredibly impressive.
I would... hope that you can find collaborators, to keep this thing alive in the event you get into a car accident (metaphorical or literal), or, you know, are completely burnt out after this.
... but yeah, it is... yet another immensely ironic aspect of being autistic that we've been treated and maligned as robots our whole lives, and then when the normies think they've actually built the AI from sci-fi, no, turns out it's basically extremely talented at making up bullshit and fudging the details and being a hypocrite, which... appalls the normies when they have to look into a hyperpowered mirror of themselves.
And then, of course, to actually fix this, it's some random autist no one has ever heard of (apologies if you are famous and I am unaware of this) who is putting in an enormous amount of effort that... most likely, will not be widely recognized.
... fucking normies man.
SuspciousCarrot78
in reply to sp3ctr4l • • •Not famous, no 😀
I hear you, brother. Normally, my hyperfocus is BJJ (I've been at that for 25 years; it's a sickness). I herniated a disc in my low back and lost the ability to exercise for going on 6 months.
BJJ is like catnip for autists. There is an overwhelming population of IT, engineers and ASD coded people in BJJ world.
There's even a gent we lovingly call Blinky McHeelhook, because, well... see for yourself
Noticing the effects of elbow position, creating an entire algorithm, flow chart and epistemology off the fact?
"VERY NORMAL."
Anyway, when my body said "sit down", my brain went "ok, watch this".
I'm sorry. I'm so sorry. No one taught me how to drive this thing 😀
PS: I only found out after my eldest was diagnosed. Then my youngest. Then my MIL said "go get tested". I did.
Result - ASD.
Her response - "We know".
Great - thanks for telling me. Would have been useful to know, say... 40ish years ago.
WolfLink
in reply to SuspciousCarrot78 • • •SuspciousCarrot78
in reply to WolfLink • • •Fair point on setting expectations, but this isn’t just LLMs checking LLMs. The important parts are non-LLM constraints.
The model never gets to “decide what’s true.” In KB mode it can only answer from attached files. Don't feed it shit and it won't say shit.
In Mentats mode it can only answer from the Vault. If retrieval returns nothing, the system forces a refusal. That’s enforced by the router, not by another model.
The triple-pass (thinker → critic → thinker) is just for internal consistency and formatting. The grounding, provenance, and refusal logic live outside the LLM.
So yeah, no absolute guarantees (nothing in this space has those), but the failure mode is “I don’t know / not in my sources, get fucked” not “confidently invented gibberish.”
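To make "enforced by the router" concrete: a minimal sketch of that gate, with hypothetical names (this is not llama-conductor's actual internals):

```python
# The refusal decision lives in plain code, outside the model.
def answer(question: str, retrieve, llm) -> str:
    snippets = retrieve(question)      # Vault/KB lookup happens before the LLM
    if not snippets:
        # Router-side refusal: the model never even runs.
        return "FINAL_ANSWER:\nThe provided facts do not contain this information."
    facts = "\n".join(snippets)
    prompt = ("Answer ONLY from the facts below, and list what's missing.\n"
              f"FACTS:\n{facts}\n\nQUESTION: {question}")
    return llm(prompt)
```

No amount of model confidence can talk its way past that `if not snippets` branch.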
skisnow
in reply to WolfLink • • •SuspciousCarrot78
in reply to skisnow • • •skisnow
in reply to SuspciousCarrot78 • • •SuspciousCarrot78
in reply to skisnow • • •Yeah.
The SHA isn’t there to make the model smarter. It’s there to make the source immutable and auditable.
Having been burnt by LLMs (far too many times), I now start from a position of "fuck you, prove it".
The hash proves which bytes the answer was grounded in, should I ever want to check it. If the model misreads or misinterprets, you can point to the source and say “the mistake is here, not in my memory of what the source was”.
If it does that more than twice, straight in the bin. I have zero chill any more.
Secondly, drift detection. If someone edits or swaps a file later, the hash changes. That means yesterday’s answer can’t silently pretend it came from today’s document. I doubt my kids are going to sneak in and change the historical prices of 8 bit computers (well, the big one might...she's dead keen on being a hacker) but I wanted to be sure no one and no-thing was fucking with me.
Finally, you (or someone else) can re-run the same question against the same hashed inputs and see if the system behaves the same way.
So: the hashes don't fix hallucinations (I don't even think that's possible, even with magic). The hashes make it possible to audit the answer and spot why hallucinations might have happened.
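The whole provenance-plus-drift idea fits in a few lines. A sketch, assuming SHA-256 over the original file bytes (function names are mine, not the project's):

```python
import hashlib
import pathlib

def file_sha256(path: str) -> str:
    # Hash the exact bytes the SUMM was generated from.
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def drifted(path: str, recorded_sha: str) -> bool:
    # True if the doc on disk no longer matches what the answer was grounded in.
    return file_sha256(path) != recorded_sha
```

Cheap, boring, 1990s engineering. Which is the point.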
PS: You’re right that interpretation errors still exist. That's why Mentats does the triple-pass, and why the system clearly flags “missing / unsupported” instead of filling gaps. The SHA is there to make the pipeline inspectable, instead of “trust me, bro”.
Guess what? I don't trust you. Prove it or GTFO.
skisnow
in reply to SuspciousCarrot78 • • •Eh. This reads very much like your headline is massively over-promising clickbait. If your fix for an LLM bullshitting is that you have to check all its sources then you haven’t fixed LLM bullshitting
That’s… not how any of this works…
Disillusionist
in reply to SuspciousCarrot78 • • •UNY0N
in reply to SuspciousCarrot78 • • •THIS IS AWESOME!!! I've been working on using an obsidian vault and a podman ollama container to do something similar, with VSCodium + continue as middleware. But this! This looks to me like it is far superior to what I have cobbled together.
I will study your codeberg repo, and see if I can use your conductor with my ollama instance and vault program. I just registered at codeberg, if I make any progress I will contact you there, and you can do with it what you like.
On an unrelated note, you can download Wikipedia. Might work well in conjunction with your conductor.
en.wikipedia.org/wiki/Wikipedi…
Wikipedia:Database download - Wikipedia
SuspciousCarrot78
in reply to UNY0N • • •Please enjoy 😀 Hope it's of use to you!
EDIT: Please don't yeet all of wikipedia into it. It will die. And you will be sad.
brettvitaz
in reply to SuspciousCarrot78 • • •SuspciousCarrot78
in reply to brettvitaz • • •For the record: none of my posts here are AI-generated. The only model output in this thread is in clearly labeled, cited examples.
I built a tool to make LLMs ground their answers and refuse without sources, not to replace anyone’s voice or thinking.
If it’s useful to you, great. If not, that’s fine too - but let’s keep the discussion about what the system actually does.
Also, being told my writing “sounds like a machine” lands badly, especially as an ND person, so I’d prefer we stick to the technical critique.
brettvitaz
in reply to SuspciousCarrot78 • • •SuspciousCarrot78
in reply to brettvitaz • • •I'm sorry if my method of writing is unpleasant to you.
Your method of communicating your thoughts is ABHORRENT to me.
Let's go our separate ways.
Peace favour your sword.
btsax
in reply to SuspciousCarrot78 • • •SuspciousCarrot78
in reply to btsax • • •Oh god, I think liked being called a clanker more 😛
(Not North Dakotan. West Australian. Proof: cunt cunt cunty cunt cuntington).
Murdoc
in reply to SuspciousCarrot78 • • •I wouldn't know how to get this going, but I very much enjoyed reading it and your comments and think that it looks like a great project. 👍
(I mean, as a fellow autist I might be able to hyperfocus on it for a while, but I'm sure that the ADHD would keep me from finishing to go work on something else. 🙃)
SuspciousCarrot78
in reply to Murdoc • • •Ah - ASD, ADHD and Lemmy. You're a triple threat, Harry! 😀
Glad if it was entertaining, if even a little!
7toed
in reply to SuspciousCarrot78 • • •SuspciousCarrot78
in reply to 7toed • • •I feel your pain. Literally.
I once lost... 24? 26? hours over a period of days with GPT, it each time confidently asserting "no, for realz, this is the fix".
This thing I built? Purely spite-driven engineering + caffeine + ASD, to overcome "Bro, trust me bro".
I hope it helps.
7toed
in reply to SuspciousCarrot78 • • •SuspciousCarrot78
in reply to 7toed • • •It's copyLEFT (AGPL-3.0 license). That means, free to share, copy, modify...but you can't roll a closed source version of it and sell it for profit.
In any case, I didn't build this to get rich (fuck! I knew I forgot something).
I built this to try to unfuck the situation / help people like me.
I don't want anything for it. Just maybe a fist bump and an occasional "thanks dude. This shit works amazing"
SuspciousCarrot78
in reply to SuspciousCarrot78 • • •Responding to my own top post like a FB boomer: May I make one request?
If you found this little curio interesting at all, please share in the places you go.
And especially, if you're on Reddit, where normies go.
I used to post heavily on there, but then Reddit did a Reddit and I'm done with it.
lemmy.world/post/41398418/2152…
Much as I love Lemmy and HN, they're not exactly normcore, and I'd like to put this into the hands of people 😀
PS: I am thinking of taking some of the questions you all asked me here (de-identified) and writing a "Q&A_with_drBobbyLLM.md" and sticking it on the repo. It might explain some common concerns.
And, if nothing else, it might be mildly amusing.
Domi
in reply to SuspciousCarrot78 • • •SuspciousCarrot78
in reply to Domi • • •Show off 😀
You're self hosting that, right? I will not be held responsible for some dodgy OpenRouter quant hosted by ToTaLlY NoT a ScAM LLC 😀
Domi
in reply to SuspciousCarrot78 • • •SuspciousCarrot78
in reply to Domi • • •This is the way. Good luck with OSS-120B. Those OSS models, they
ThirdConsul
in reply to SuspciousCarrot78 • • •I want to believe you, but that would mean you solved hallucination.
Either:
A) you're lying
B) you're wrong
C) KB is very small
Kobuster
in reply to ThirdConsul • • •Hallucination isn't nearly as big a problem as it used to be. Newer models aren't perfect but they're better.
The problem addressed by this isn't hallucination, it's the training to avoid failure states. Instead of guessing (different from hallucination), the system forces a negative response.
That's easy, and any company big or small could do it; big companies just like the bullshit.
Squizzy
in reply to Kobuster • • •ThirdConsul
in reply to Kobuster • • •A very tailored to llms strengths benchmark calls you a liar.
artificialanalysis.ai/articles…
(A month ago the hallucination rate was ~50-70%)
SuspciousCarrot78
in reply to Kobuster • • •^ Yes! That. Exactly that. Thank you!
I don't like the bullshit... and I'm not paid to optimize for bullshit-leading-to-engagement-chatty-chat.
"LLM - tell me the answer and then go away. If you can't, say so and go away. Optionally, roast me like you've watched too many episodes of Futurama while doing it"
SuspciousCarrot78
in reply to ThirdConsul • • •D) None of the above.
I didn’t "solve hallucination". I changed the failure mode. The model can still hallucinate internally. The difference is it’s not allowed to surface claims unless they’re grounded in attached sources.
If retrieval returns nothing relevant, the router forces a refusal instead of letting the model free-associate. So the guarantee isn’t “the model is always right.”
The guarantee is “the system won’t pretend it knows when the sources don’t support it.” That's it. That's the whole trick.
KB size doesn’t matter much here. Small or large, the constraint is the same: no source, no claim. GTFO.
That’s a control-layer property, not a model property. If it helps: think of it as moving from “LLM answers questions” to “LLM summarizes evidence I give it, or says ‘insufficient evidence.’”
Again, that’s the whole trick.
You don't need to believe me. In fact, please don't. Test it.
I could be wrong...but if I'm right (and if you attach this to a non-retarded LLM), then maybe, just maybe, this doesn't suck balls as much as you think it might.
Maybe it's even useful to you.
I dunno. Try it?
ThirdConsul
in reply to SuspciousCarrot78 • • •SuspciousCarrot78
in reply to ThirdConsul • • •Parts of this are RAG, sure
RAG parts: the Vault path. Mentats pulls chunks out of Qdrant and synthesizes from whatever comes back.
So yes, that layer is RAG with extra steps.
What’s not RAG -
KB mode (filesystem SUMM path)
This isn’t vector search. It’s deterministic, file-backed grounding. You attach folders as needed. The system summarizes and hashes docs. The model can only answer from those summaries in that mode. There’s no semantic retrieval step. It can style and jazz around the answer a little, but the answer is the answer is the answer.
If the fact isn’t in the attached KB, the router forces a refusal. Put up or shut up.
Vodka (facts memory)
That’s not retrieval at all, in the LLM sense. It's verbatim key-value recall.
Again, no embeddings, no similarity search, no model interpretation.
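If that sounds too dumb to be useful, the dumbness is the feature. A sketch of the idea (hypothetical names, not Vodka's actual code):

```python
# Verbatim key-value recall: either the exact stored bytes come back, or nothing.
facts: dict[str, str] = {}

def remember(key: str, value: str) -> None:
    facts[key] = value                 # stored verbatim, no interpretation

def recall(key: str) -> str | None:
    return facts.get(key)              # exact lookup: no fuzzy match, no guess

remember("dentist", "Tuesday 14:30")
assert recall("dentist") == "Tuesday 14:30"
assert recall("doctor") is None        # unknown key -> nothing, not a hallucination
```

There is no model in that loop to get creative.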
"Facts that aren’t RAG"
In my setup, they land in one of two buckets.
1) Short-term / user facts → Vodka. That's for things like numbers, appointments, lists, one-off notes etc. Deterministic recall, no synthesis.
2) Curated knowledge → KB / Vault. Things you want grounded, auditable, and reusable.
In response to the implicit "why not just RAG then"
Classic RAG failure mode is: retrieval is fuzzy → model fills gaps → user can’t tell which part came from where.
The extra "steps" are there to separate memory from knowledge, separate retrieval from synthesis and make refusal a legal output, not a model choice.
So yeah; some of it is RAG. RAG is good. The point is this system is designed so not everything of value is forced through a semantic search + generate loop.
I don't trust LLMs. I am actively hostile to them. This is me telling my LLM to STFU and prove it, or GTFO. I know that's maybe a weird way to operate (adversarial, assume the worst, engineer around the issue), but that's how ASD brains work.
ThirdConsul
in reply to SuspciousCarrot78 • • •Oh boy. So hallucination will occur here, and all further retrievals will be deterministically poisoned?
SuspciousCarrot78
in reply to ThirdConsul • • •Huh? That is the literal opposite of what I said. Like, diametrically opposite.
Let me try this a different way.
Hallucination in SUMM doesn’t "poison" the KB, because SUMMs are not authoritative facts, they’re derived artifacts with provenance. They’re explicitly marked as model output tied to a specific source hash. Two key mechanics that stop the cascade you’re describing:
1) SUMM is not a "source of truth"
The source of truth is still the original document, not the summary. The summary is just a compressed view of it. That’s why it carries a SHA of the original file. If a SUMM looks wrong, you can:
a) trace it back to the exact document version
b) regenerate it
c) discard it
d) read the original doc yourself and manually curate it.
Nothing is "silently accepted" as ground truth.
2) Promotion is manual, not automatic
The dangerous step would be: model output -> auto-ingest into long-term knowledge.
That’s explicitly not how this works.
The Flow is:
Attach KB -> SUMM -> human reviews -> Ok, move to Vault -> Mentats runs against that
Don't like a SUMM? Don't push it into the vault. There's a gate between “model said a thing” and “system treats this as curated knowledge.” That's you - the human. Don't GI and it won't GO.
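A sketch of that gate, with made-up names (the real commands are >>summ new and >>move to vault; everything else here is illustrative):

```python
import hashlib
import pathlib

def promote(summ: pathlib.Path, source: pathlib.Path, vault: pathlib.Path) -> bool:
    sha = hashlib.sha256(source.read_bytes()).hexdigest()
    print(summ.read_text())                          # human eyeballs the SUMM first
    if input(f"Promote? (source sha256={sha[:12]}) [y/N] ").lower() != "y":
        return False                                 # bad SUMM dies here, as a draft
    # Only an explicit "y" turns model output into curated knowledge,
    # and it carries the source hash with it.
    (vault / summ.name).write_text(summ.read_text() + f"\nSOURCE_SHA256: {sha}\n")
    return True
```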
Determinism works for you here. The hash doesn’t freeze the hallucination; it freezes the input snapshot. That makes bad summaries reproducible, traceable to an exact source version, and trivial to regenerate or discard. Which is the opposite of silent drift.
If SUMM is wrong and you miss it, the system will be consistently wrong in a traceable way, not creatively wrong in a new way every time.
That’s a much easier class of bug to detect and correct. Again: the proposition is not "the model will never hallucinate". It's "it can't silently propagate hallucinations without a human explicitly allowing it to, and when it does, you can trace it back to the source version".
And that is ultimately what keeps the pipeline from becoming "poisoned".
Pudutr0n
in reply to SuspciousCarrot78 • • •SuspciousCarrot78
in reply to Pudutr0n • • •Yep, good question. You can do that, it's not wrong. If your KB is small + your question is basically “find me the paragraph that contains X,” then yeah: two-pass fuzzy find will dunk on any LLM for speed and correctness.
But the reason I put an LLM in the loop is: retrieval isn’t the hard part. Synthesis + constraint are. What the LLM is doing in KB mode is (basically) this -
1) It turns the question into an extraction task. Instead of “search keywords”, it’s: “given these snippets, answer only what is directly supported, and list what’s missing.”
2) Then, rather than giving you 6 fragments across multiple files, the LLM assembles the whole thing into a single answer, while staying source-locked (and refusing fragments that don't contain the needed fact).
3) Finally: it has "structured refusal" baked in. IOW, the whole point is that the LLM is forced to say "here are the facts I saw, and this is what I can't answer from those facts".
TL;DR: fuzzy search gets you where the info lives. This gets you what you can safely claim from it, plus an explicit "missing list".
For pure retrieval: yeah - search. In fact, maybe I should bake in >>grep or >>find commands. That would be the right trick for "show me the passage", not "answer the question".
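To pin down what "structured refusal" means, here's the output contract in sketch form (my field names, not the project's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class GroundedAnswer:
    supported: list[str] = field(default_factory=list)  # claims backed by a snippet
    missing: list[str] = field(default_factory=list)    # what the facts don't cover
    confidence: str = "medium"                          # low / medium / high
    source: str = "Docs"                                # Docs / Mixed / Model

    def render(self) -> str:
        if not self.supported:
            # Refusal is a first-class output, not an error.
            return ("Not answerable from the attached facts.\n"
                    f"Missing: {'; '.join(self.missing) or 'everything asked'}")
        return ("\n".join(self.supported)
                + f"\nMissing: {'; '.join(self.missing) or 'none'}"
                + f"\nConfidence: {self.confidence} | Source: {self.source}")
```

Same shape as the "Confidence: medium | Source: Mixed" lines in the examples up top.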
I hope that makes sense?
Pudutr0n
in reply to SuspciousCarrot78 • • •pineapple
in reply to SuspciousCarrot78 • • •This is amazing! I will either abandon all my other commitments and install this tomorrow or I will maybe hopefully get it done in the next 5 years.
Likely-accurate jokes aside, this will be a perfect match for my Obsidian vault, as well as for researching things much more quickly.
SuspciousCarrot78
in reply to pineapple • • •Zexks
in reply to SuspciousCarrot78 • • •SuspciousCarrot78
in reply to Zexks • • •Arthur Besse
in reply to SuspciousCarrot78 • • •