Is there someplace I can get a VPS that I can seed from without issues?
They are slightly different; I think people are asking whether you're specifically looking for a VPS vs. a seedbox. Some people do want a VPS, so it's a fair question.
A VPS implies that you'll be renting a server and installing and setting up all the software yourself, so you'll probably have some sort of SSH + root access to install things.
A seedbox is more like a pre-configured shared VPS: it will already have torrent clients pre-installed, along with other software commonly used with them. Depending on the vendor and type of seedbox, you often won't have root and/or SSH access; the vendor usually won't want you installing software system-wide that might disrupt other users on that seedbox server.
PS - !seedboxes@lemmy.dbzer0.com also exists, a bit quieter there but it's specific to the topic.
AI Coding Is Massively Overhyped, Report Finds
"No Duh," say senior developers everywhere.
The article explains that vibe code often is close, but not quite, functional, requiring developers to go in and find where the problems are - resulting in a net slowdown of development rather than productivity gains.
AI Coding Is Massively Overhyped, Report Finds
The AI industry's claims about AI coding assistants boosting productivity significantly appear to be massively overblown, per a new report. Victor Tangermann (Futurism)
Yeah, if I use it and it generates more than 5 lines of code, I now just immediately cancel it out because I know it's not worth even reading. It's so bad at repeating itself and failing to reasonably break things down into logical pieces.
With that, I only have to read some of its suggestions; I still throw out probably 80% entirely, fix up another 15%, and actually use the remaining 5% without modification.
There are tricks to getting better output from it, especially if you're using Copilot in VS Code and your employer is paying for access to models, but it's still asking for trouble if you're not extremely careful, extremely detailed, and extremely precise with your prompts.
And even then it absolutely will fuck up. If it actually succeeds at building something that technically works, you'll spend considerable time afterwards going through its output and removing unnecessary crap it added, fixing duplications, securing insecure garbage, removing mocks (God... So many fucking mocks), and so on.
I think about what my employer is spending on it a lot. It can't possibly be worth it.
AI companies and investors are absolutely overhyping its capabilities, but if you haven't tried it before I'd strongly recommend doing so. For simple bash scripts and Python it almost always gets something workable first try, genuinely saving time.
AI LLMs are pretty terrible for nearly every other task I've tried. I suspect it's because the same amount of quality training data just doesn't exist for other fields.
Oh god, please don't use it for Bash. LLM-generated Bash is such a fucking pot of horse shit bad practices. Regular people have a hard enough time writing good Bash, and something trained on all the fucking crap on StackOverflow and GitHub is inevitably going to be so bad...
Signed, a senior dev who is the "Bash guy" for a very large team.
The problem isn't the tool, it's the user: they don't know if they're getting good code or not, and therefore they can't craft the prompt to improve it.
In my view the problems occur when using AI to do something you don't already know how to do.
I've found it's pretty good at refactoring existing code to use a different but well-supported and well documented library. It's absolutely terrible for a new and poorly documented library.
I recently tried using Copilot with Claude to implement something in a fairly young library, and did get the basics working, including a long repetitive string of "that doesn't work, I'm getting error msg [error]". Seven times of that, and suddenly it worked! I was quite amazed, though it failed me in many other ways with that library (imagining functions and options that don't exist). But then redoing the same thing in the older, better supported library, it got it right on the first try.
But maybe the biggest advantage of AI coding is that it allows me to code when my brain isn't fully engaged. Of course the risk there is that my brain might not fully engage because of the AI.
I'd be inclined to try using it if it were smart enough to write my unit tests properly, but it's great at inserting the same mock twice and producing 0 working unit tests.
I might try using it to generate some Javadoc though... then when my org inevitably starts polling how much AI I use, I won't be in the gutter lol
I personally think unit tests are the worst application of AI. Tests are there to ensure the code is correct, so ideally the dev would write the tests to verify that the AI-generated code is correct.
I personally don't use AI to write code, since writing code is the easiest and quickest part of my job. I instead use it to generate examples of using a new library, give me comparisons of different options, etc, and then I write the code after that. Basically, I use it as a replacement for a search engine/blog posts.
Ideally, there are requirements before anything, and some TDD types argue that the tests should come before the code as well.
Ideally, the customer is well represented during requirements development - ideally, not by the code developer.
Ideally, the code developer is not the same person that develops the unit tests.
Ideally, someone other than the test developer reviews the tests to assure that the tests do in-fact provide requirements coverage.
Ideally, the modules that come together to make the system function have similarly tight requirements and unit-tests and reviews, and the whole thing runs CI/CD to notify developers of any regressions/bugs within minutes of code check in.
In reality, some portion of that process (often, most of it) is short-cut for one or many reasons. Replacing the missing bits with AI is better than not having them at all.
Ideally, the code developer is not the same person that develops the unit tests.
Why? The developer is exactly the person I want writing the tests.
There should also be integration tests written by a separate QA, but unit tests should 100% be the responsibility of the dev making the change.
Replacing the missing bits with AI is better than not having them at all.
I disagree. A bad test is worse than no test, because it gives you a false sense of security. I can identify missing tests with coverage reports, I can't easily identify bad tests. If I'm working in a codebase with poor coverage, I'll be extra careful to check for any downstream impacts of my change because I know the test suite won't help me. If I'm working in a codebase with poor tests but high coverage, I may assume a test pass indicates that I didn't break anything else.
If a company is going to rely heavily on AI for codegen, I'd expect tests to be manually written and have very high test coverage.
but unit tests should 100% be the responsibility of the dev making the change.
True enough
A bad test is worse than no test
Also agree, if your org has trimmed to the point that you're just making tests to say you have tests, with no review as to their efficacy, they will be getting what they deserve soon enough.
If a company is going to rely heavily on AI for anything I'd expect a significant traditional human employee backstop to the AI until it has a track record. Not "buckle up, we're gonna try somethin'" track record, more like two or three full business cycles before starting to divest of the human capital that built the business to where it is today. Though, if your business is on the ropes and likely to tank anyway.... why not try something new?
There was a story about IBM letting thousands of workers go and replacing them with AI... then hiring even more workers in other areas with the money saved from the AI retooling. Apparently they let a bunch of HR and other admin staff go and beefed up on sales and product development. There are some jobs where you want more predictable algorithms rather than potentially biased people, and HR seems like an area that could have a lot of that.
Why? The developer is exactly the person I want writing the tests.
It's better if it's a different developer: they don't know the nuances of your implementation, so they test the functionality only, which avoids some mistakes. You're correct on all the other points.
independence of review is a very important aspect of “harnessing the power of the team.”
Yep, that's basically my rationale
I really disagree here. If someone else is writing your unit tests, that means one of the following is true:
- the tests are written after the code is merged - there will be gaps, and the second dev will be lazy in writing those tests
- the tests are written before the code is worked on (TDD) - everything would take twice as long because each dev essentially needs to write the code again, and there's no way you're going to consistently cover everything the first time
Devs should write their tests, and reviewers should ensure the tests do a good job covering the logic. At the end of the day, the dev is responsible for the correctness of their code, so this makes the most sense to me.
the tests are written after the code is merged - there will be gaps, and the second dev will be lazy in writing those tests
I don't really see how this follows. Why does the second one necessarily have to be lazy, and what stops the first one from being lazy as well?
The reason I like it to be different people is so there are two sets of eyes looking at the same problem without the need for doing a job twice. If you miss something while implementing, it's easier for you to miss it during test writing. It's very hard to switch to testing the concept and not the specific implementation, but if you weren't the one implementing it, you're not "married" to the code and it's easier for you to spot the gaps.
Devs are more invested in code they wrote themselves. When I'm writing tests for something I didn't write, I'm less personally invested in it. Looking at PRs by other devs when we do pushes for improving coverage, I'm not alone here. That's just human psychology, you care more about things you built than things you didn't.
I think testing should be an integral part of the dev process. I don't think any code should be merged until there are tests proving its correctness. Having someone else write the tests encourages handing tests to jr devs since they're "lower priority."
Replacing the missing bits with AI is better than not having them at all.
Nah, bullshit tests that pretend to be tests but are essentially "if true == true then pass" is significantly worse than no test at all.
bullshit tests that pretend to be tests but are essentially “if true == true then pass” is significantly worse than no test at all.
Sure. But unsupervised developers who write the code, write their own tests, and change companies every 18 months are even more likely to pull BS like that than AI is.
You can actually get some test validity oversight out of AI review of the requirements and tests, not perfect, but better than self-supervised new hires.
You can actually get some test validity oversight out of AI review
You also will get some bullshit out of it. If you're in a situation where you can't trust your developers because they're changing companies every 18 months, and you can't even supervise your untrustworthy developers, then you sure as shit can't trust whatever an LLM will generate for you. At least your flock of developers will bullshit you predictably to save time and energy; with an LLM you have zero idea where the lies will come from, and those will be inventive lies.
I work in a "tight" industry where we check ALL our code. By contrast, a lot of places I have visited - including some you would think are fairly important like medical office management and gas pump card reader software makers - are not tight, not tight at all. It's a matter of moving the needle, improving a bad situation. You'll never achieve "perfect" on any dynamic non-trivial system, but if you can move closer to it for little or no cost?
Of course, when I interviewed with that office management software company, they turned me down - probably because they like their culture the way it is and they were afraid I'd change things with my history of working places for at least 2.5 years, sometimes up to 12, and making sure the code is right before it ships instead of giving their sales reps that "hands on, oooh I see why you don't like that, I'll have our people fix that right away - just for you" support culture.
To preface: I don't actually use AI for anything at my job, which might make me a bad metric, but my workflow is 10x slower if I even try using it.
That said, I want AI to be able to do unit tests in the sense that I can write some starting ones, then it be able to infer what branches aren't covered and help me fill the rest.
Obviously it's not smart enough, and honestly I highly doubt it ever will be, because that's the nature of LLMs. But my peeve with unit tests is that testing branches usually entails just copying the exact same test but changing one field to an invalid value, or making a dependency throw. It's not hard, just tedious. Branch coverage is already enforced, so you should know when you forgot to test a case.
Edit: my vision would be an interactive version rather than my company's current, where it just generates whatever it wants instantly. I'd want something to prompt me saying this branch is not covered, and then tell me how it will try to cover it. It eliminates the tedious work but still lets the dev know what they're doing.
I also think you should treat ai code as a pull request and actually review what it writes. My coworkers that do use it don't really proofread, so it ends up having some bad practices and code smells.
testing branches usually entail just copying the exact same test but changing one field to be an invalid value, or a dependency to throw
That's what parameterization is for. In unit tests, most dependencies should be mocked, so expecting a dependency to throw shouldn't really be a thing much of the time.
I’d want something to prompt me saying this branch is not covered, and then tell me how it will try to cover it
You can get the first half with coverage tools. The second half should be fairly straightforward, assuming you wrote the code. If a branch is hard to hit (i.e. it happens if an OS or library function fails), either mock that part or don't bother with the test. I ask my team to hit 70-80% code coverage because that last 20-30% tends to be extreme corner cases that are hard to hit.
My coworkers that do use it don’t really proofread, so it ends up having some bad practices and code smells.
And this is the problem. Reviewers only know so much about the overall context and often do a surface level review unless you're touching something super important.
We can make conventions all we want, but people will be lazy and submit crap, especially when deadlines are close.
The issue with my org is that the push for CI/CD means 90% line and branch coverage, which means you spend just as much time writing tests as actually developing the feature, on an already accelerated schedule because my org makes promises that become ridiculous deadlines, like a 2-month project turning into a 1-month deadline.
Mocking is easy; almost everything in my team's codebase is designed to be mockable. The only stuff I can think of that isn't mocked is usually just clocks, which you could mock, but I actually like using fixed clocks for unit testing most of the time (see the sketch below). But mocking is also tedious. Lots of mocks end up being:
1. Change the expected test constant, which usually ends up being almost the same input with one changed field.
2. Change the response returned from the mock.
3. Given the response, expect the result to be x or some exception y.
Chances are, if you wrote it you should already know what branches are there. It's just translating that to actual unit tests that's a pain. Branching logic should be easy to read as well. If I read a nested if statement chances are there's something that can be redesigned better.
I also think that 90% of actual testing should be done through integ tests. Unit tests to me helps to validate what you expect to happen, but expectations don't necessarily equate to real dependencies and inputs. But that's a preference, mostly because our design philosophy revolves around dependency injection.
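On the fixed-clock point above, a minimal sketch of what that looks like in Python (names are hypothetical):

from datetime import datetime, timezone

FIXED_NOW = datetime(2025, 1, 1, tzinfo=timezone.utc)

def make_receipt(amount: int, now=None) -> str:
    # Production callers omit `now`; tests pass a fixed timestamp
    # so time-dependent assertions stay deterministic.
    now = now or datetime.now(timezone.utc)
    return f"{amount} @ {now.isoformat()}"

def test_make_receipt():
    assert make_receipt(5, now=FIXED_NOW) == "5 @ 2025-01-01T00:00:00+00:00"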
I also think that 90% of actual testing should be done through integ tests
I think both are essential, and they test different things. Unit tests verify that individual pieces do what you expect, whereas integration tests verify that those pieces are connected properly. Unit tests should be written by the devs and help them prove their solution works as intended, and integration tests should be written by QA to prove that user flows work as expected.
Integration test coverage should be measured in terms of features/capabilities, whereas unit tests are measured in terms of branches and lines. My target is 90% for features/capabilities (mostly miss the admin bits that end customers don't use), and 70-80% for branches and lines (skip unlikely errors, simple data passing code like controllers, etc). Getting the last bit of testing for each is nice, but incredibly difficult and low value.
Lots of mocks end up being
I use Python, which allows runtime mocking of existing objects, so most of our mocks are like this:
@patch.object(Object, "method", return_value=value)

Most tests have one or two lines of this above the test function. It's pretty simple and not very repetitive at all. If we need more complex mocks, that's usually a sign we need to refactor the code.
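Expanded into a self-contained, runnable shape (module and names here are hypothetical, just to show the pattern):

from unittest.mock import patch

class Gateway:
    def charge(self, amount: int) -> bool:
        raise RuntimeError("would hit the network")

def checkout(gateway: Gateway, amount: int) -> bool:
    return gateway.charge(amount)

@patch.object(Gateway, "charge", return_value=True)
def test_checkout(mock_charge):
    # The real method is swapped out only for the duration of this test.
    assert checkout(Gateway(), 100) is True
    mock_charge.assert_called_once_with(100)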
dependency injection
I absolutely hate dependency injection, most of the time. 99% of the time, there are only two implementations of a dependency, the standard one and a mock.
If there's a way to patch things at runtime (e.g. Python's unittest.mock lib), dependency injection becomes a massive waste of time with all the boilerplate.
If there isn't a way to patch things at runtime, I prefer a more functional approach that works off interfaces where dependencies are merely passed as needed as data. That way you avoid the boilerplate and still get the benefits of DI.
That said, dependency injection has its place if a dependency has several implementations. I find that's pretty rare, but maybe it's more common in your domain.
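A sketch of that interface-and-data style, assuming nothing beyond the stdlib: the dependency is just a callable argument, so tests pass a stub and production passes the real thing, with no wiring boilerplate.

from typing import Callable

def load_report(fetch: Callable[[str], bytes], url: str) -> str:
    # `fetch` is the injected dependency; any callable with this shape works.
    return fetch(url).decode().upper()

def test_load_report():
    assert load_report(lambda _url: b"ok", "https://example.com") == "OK"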
A software tester walks into a bar, he orders a beer.
He orders -1 beers.
He orders 0 beers.
He orders 843909245824 beers.
He orders duck beers.
AI can be trained to do that, but if you are in a not-well-trodden space, you'll want to be defining your own edge cases in addition to whatever AI comes up with.
Way I heard this joke, it continues with:
A real customer enters.
He asks where the toilets are.
The bar explodes.
better than nothing
I disagree. I'd much rather have a lower coverage with high quality tests than high coverage with dubious tests.
If your tests are repetitive, you're probably writing your tests wrong, or at least focusing on the wrong logic to test. Unit tests should prove the correctness of business logic and calculations. If there's no significant business logic, there's little priority for writing a test.
The actual risk of those tests being wrong is low because you're checking them.
If your tests aren't repetitive, they've got no setup or mocking in them, so they don't test very much.
@parameterized.expand([
    (key, value, ExpectedException),
    (other_key, other_value, OtherExpectedException),
])
def test_exceptions(self, key, value, exception_class):
    obj = setup()
    setattr(obj, key, value)
    with self.assertRaises(exception_class):
        func_to_test(obj)

Mocks are similarly simple:

@unittest.mock.patch.object(Class, "method", return_value=...)
dynamic_mock = MagicMock(Class)
dynamic_mock...

How this looks will vary in practice, but the idea is to design code such that usage is simple. If you're writing complex mocks frequently, there's probably room for a refactor.
I know how to use parametrised tests, but thanks.
Tests are still much more repetitive than application code. If you're testing a wrapper around some API, each test may need you to mock a different underlying API call. (Mocking all of them at once would hide things). Each mock is different, so you can't just extract it somewhere; but it is still repetitive.
If you need three tests, each of which requires a (real or mock) user, a certain directory structure to be present somewhere, and input data to be got from somewhere, that's three things that, even if you streamline them, need to be done in each test. I have been involved in a project where we originally followed the principle of "if you need a user object in more than one test, put it in setUp or in a shared fixture," and the result is rapidly unwieldy shared setup between tests - and if you ever want to change one of those tests, you'd better hope you only need to add to it, not change what's already there, otherwise you break all the other tests.
For this reason, zealous application of DRY is not a good idea with tests, and so they are a bit repetitive. That is an acceptable trade-off, but also a place where an LLM can save you some time.
If you’re writing complex mocks frequently, there’s probably room for a refactor.
Ah, the end of all coding discussions, "if this is a problem for you, your code sucks." I mean, you're not wrong, because all code sucks.
LLMs are like the junior dev. You have to review their output because they might have screwed up in some stupid way, but that doesn't mean they're not worth having.
zealous application of DRY is not a good idea with tests
I absolutely agree. My point is that if you need complex setup, there's a good chance you can reuse it and replace only the data that's relevant for your test instead of constructing it every time.
But yes, there's a limit here. We currently have a veritable mess because we populate the database with fixture data so we have enough data to not need setup logic for each test. Changing that fixture data causes a dozen tests to fail across suites. Since I started at this org, I've been pushing against that and introduced the repository pattern so we can easily mock db calls.
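For anyone unfamiliar, the repository pattern mentioned here in minimal form (names hypothetical): the code under test depends on the interface, so tests swap in the in-memory version instead of touching the database.

from typing import Protocol

class UserRepo(Protocol):
    def get(self, user_id: int) -> dict: ...

class DbUserRepo:
    # Production implementation; real queries live here.
    def get(self, user_id: int) -> dict:
        raise NotImplementedError("real SQL goes here")

class InMemoryUserRepo:
    # Test double; no database or shared fixture data needed.
    def __init__(self, users: dict):
        self.users = users

    def get(self, user_id: int) -> dict:
        return self.users[user_id]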
IMO, reused logic/structures should be limited to one test suite. But even then, rules are meant to be broken, just make sure you justify it.
also a place where an LLM can save you some time.
I'm still not convinced that's the case though. A typical mock takes a minute or two to write, most of the time is spent thinking about which cases to hit or refactoring code to make testing easier. Working with the LLM takes at least that long, esp if you count reviewing the generated code and whatnot.
LLMs are like the junior dev
Right, and I don't want a junior dev writing my tests. Junior devs are there to be trained with the expectation that they'll learn from mistakes. LLMs don't learn, they're perennially junior.
That's why I don't use them for code gen and instead use them for research. Writing code is the easy part of my job, knowing what to write is what takes time, so I outsource as much of the latter as I can.
Then write comments in the tests that say they haven't been checked.
That is indeed the absolute worst case though, and most of the tests produced this way will provide value, because checking a test is easier than checking the code (this is kind of the point of tests), and so most will be correct.
The risk of regressions covered by the good tests is higher than someone writing code to the rare bad test that you've marked as suspicious because you (for whatever reason) are not confident in your ability to check it.
One of the guys at my old job submitted a PR with tests that basically just mocked everything, tested nothing. Like,
with patch("something.whatever", return_value=True):
    assert whatever(0) is True
    assert whatever(1) is True

Except it went on for a few dozen lines of that, with names that made it look like they were doing something useful.
He used AI to generate them, of course. Pretty useless.
Some do, some don't, but more importantly: most just don't care.
I had a tester wander into a set of edge cases which weren't 100% properly handled and their first reaction was "gee, maybe I didn't see that, it sounds like I'm going to have a lot more work because I did."
Fair, I've used it recently to translate a translations.ts file to Spanish.
But for repetitive code, I feel like it is kind of a slow down sometimes. I should have refactored instead.
I use it for writing code to call APIs, and it is a huge boon.
Yeah, you have to check the results, but it’s way faster than me.
This is a thing people miss. "Oh it can generate repetitive code."
OK, now who's going to maintain those thousands of lines of repetitive unit tests, let alone check them for correctness? Certainly not the developer who was too lazy to write their own tests and to think about how to refactor or abstract things to avoid the repetition.
If someone's response to a repetitive task is copy-pasting poorly-written code over and over we call them a bad engineer. If they use an AI to do the copy-paste for them that's supposed to be better somehow?
At absolute best.
My experience is that it's like the bottom Stack Overflow answers: making up bullshit and nonexistent commands, etc.
If you know what you want, its automatic code completion can save you some typing in those cases where it gets it right (for repetitive or trivial code that doesn't require much thought). It's useful if you use it sparingly and can see through its bullshit.
For junior coders, though, it could be absolute poison.
it's slowing you down. The solution to that is to use it in even more places!
Wtf was up with that conclusion?
I have been vibe coding a whole game in JavaScript to try it out. So far I have gotten a pretty ok game out of it. It's just a simple match three bubble pop type of thing so nothing crazy but I made a design and I am trying to implement it using mostly vibe coding.
That being said the code is awful. So many bad choices and spaghetti code. It also took longer than if I had written it myself.
So now I have a game that's kind of hard to modify haha. I may try to setup some unit tests and have it refactor using those.
Might be there someday, but right now it’s basically a substitute for me googling some shit.
If I let it go ham, and code everything, it mutates into insanity in a very short period of time.
A crazy number of devs weren't even using EXISTING code assistant tooling.
Enterprise grade IDEs already had tons of tooling to generate classes and perform refactoring in a sane and algorithmic way. In a way that was deterministic.
So many use cases people have tried to sell me on (boilerplate handling), and I'm like "you have that now and don't even use it!"
I think there is probably a way to use llms to try and extract intention and then call real dependable tools to actually perform the actions. This cult of purity where the llm must actually be generating the tokens themselves... why?
I'm all for coding tools. I love them. They have to actually work though. Paradigm is completely wrong right now. I don't need it to "appear" good, i need it to BE good.
Exactly. We're already bootstrapping, re-tooling, and improving the entire process of development to the best of our collective ability. Constantly. All through good, old fashioned, classical system design.
Like you said, a lot of people don't even put that to use, and they remain very effective. Yet a tiny speck of AI tech and its marketing is convincing people we're about to either become gods or be usurped.
It's like we took decades of technical knowledge and abstraction from our Computing Canon and said "What if we didn't use that anymore?"
I use AI as an entryway to learning or for finding the name or technique that I'm thinking of but can't remember or know it's name so then i can look elsewhere for proper documentation. I would never have it just blindly writing code.
Sadly, search engines getting shittier has sort of made me use it to replace them.
Then it's also good to quickly parse an error for anything obviously wrong.
LLMs work great to ask about tons of documentation and learn more about high-level concepts. It's a good search engine.
The code they produce has basically always disappointed me.
I sometimes get up to five lines of viable code. Then, on occasion, what should have been a one-liner vomits all over my codebase. The best feature of an AI-enabled IDE is the button to decline the mess that was just inflicted.
In the past week I had two cases I thought would be "vibe coding" fodder, blazingly obvious just tedious. One time it just totally screwed up and I had to scrap it all. The other one generated about 4 functions in one go and was salvageable, though still off in weird ways. One of those was functional, just nonsensical. It had a function to check whether a certain condition was present or not, but instead of returning a boolean, it passed a pointer to a string and set the string to "" to indicate false... So damn bizarre, hard to follow and needlessly more lines of code, which is another theme of LLM generated code.
Those happen so often. I've stopped calling them hallucinations (that's anthropomorphizing and over-selling what LLMs do, imho). They are statistical prediction machines; either they hit the practical limits of predicting useful output, or we just call it broken.
I think the next 10 years are going to be all about learning what LLMs are actually good for, and what they are fundamentally limited at no matter how much GPU ram we throw at it.
Based on my experience with claude sonnet and gpt4/5... It's a little useful but generally annoying and fails more often than works.
I do think moderate use still comes out ahead, as it saves a bunch of typing when it does work, but I still get annoyed at the blatantly stupid suggestions I keep having to decline.
I thought that as well and got some code from someone that left the company and asked it to comment it.
It did the obvious "x= 5 // assign 5 to x" crap comments and then it got to the actually confusing part and just skipped that mess entirely....
I'm not super surprised, but AI has been really useful when it comes to learning, or giving me a direction to look into something more directly.
I'm not really an advocate for AI, but there are some really nice things it can do. And I like to test the code quality of the models I have access to.
I always ask for an FTP server and a DNS server to check what it can do, and they work surprisingly well most of the time.
I code with LLMs every day as a senior developer, but agents are mostly a big lie. LLMs are great as an information index and for rubber-duck chats, which is already an incredible feature of the century, but agents are fundamentally bad. Even for Python they are intern-level bad. I was just trying the new Claude, and instead of using Python's pathlib.Path it reinvented its own file-system path utils; pathlib is not even some new Python feature, it has been the de facto way to manage paths for at least 3 years now.
That being said, when prompted in great detail with exact instructions, agents can be useful, but that's not what's being sold here.
After so many iterations, it seems a fundamental breakthrough in AI tech is still needed for agents, as diminishing returns are hitting hard now.
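For contrast, the stdlib route the agent kept avoiding is only a few lines (a hypothetical snippet, not from that session):

from pathlib import Path

config = Path.home() / ".config" / "myapp" / "settings.toml"
config.parent.mkdir(parents=True, exist_ok=True)
text = config.read_text() if config.exists() else ""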
Oh yes. The Great pathlib. The Blessed pathlib. Hallowed be it and all it does.
I'm a Ruby girl. A couple of years ago I was super worried about my decision to finally start learning Python seriously. But once I ran into pathlib, I knew for sure that everything will be fine. Take an everyday headache problem. Solve it forever. Boom. This is how standard libraries should be designed.
Pathlib is very nice indeed, but I can understand why a lot of languages don't do similar things. There are major challenges implementing something like that. Cross-platform functionality is a big one, for example. File permissions between Unix systems and Windows do not map perfectly from one system to another which can be a maintenance burden.
But I do agree. As a user, it feels great to have. And yes, also in general, the things Python does with its standard library are definitely the way things should be done, from a user's point of view at least.
They are so much better than using a search engine to parse web forums and Stack Overflow.
The hallucinations (more accurately, bullshitting), plus the fact that they need fresh training data while discouraging people from engaging in the very structures that produce it, make this highly debatable.
I will concur with the whole 'llm keeps suggesting to reinvent the wheel'
And poorly. Not only did it not use a pretty basic standard library to do something, its implementation was generally crap. For example, it offered up a solution that was hard-coded to IPv4 when the context was very IPv6-heavy.
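For reference, the family-agnostic pattern (roughly what the Python socket docs recommend) avoids hard-coding AF_INET entirely; a sketch:

import socket

def connect(host: str, port: int) -> socket.socket:
    # getaddrinfo yields IPv6 and IPv4 candidates; try each in order.
    last_err = None
    for family, type_, proto, _, addr in socket.getaddrinfo(host, port, type=socket.SOCK_STREAM):
        try:
            sock = socket.socket(family, type_, proto)
            sock.connect(addr)
            return sock
        except OSError as err:
            last_err = err
    raise last_err or OSError(f"no address for {host}:{port}")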
I'd wager that the votes are irrelevant. Stack Overflow is generously <50% good code and is mostly people saying "this code doesn't work -- why?", and that is the corpus these models were trained on.
I've yet to see something like a vibe coding livestream where something got done. I can only find a lot of 'tutorials' that tell how to set up tools. Anyone want to provide one?
I could... possibly... imagine a place where someone took quality code from a variety of sources and generated a model specific to a single language, and that model was able to generate good code, but I don't think we have that.
Vibe coders: Even if your code works and seems to be a success, do you know why it works, how it works? Does it handle edge cases you didn't include in your prompt? Does it expose the database to someone smarter than the LLM? Does it grant an attacker access to the computer it's running on, if they are smarter than the LLM? Have you asked your LLM how many 'r's are in strawberry?
At the very least, we will have a cyber-security crisis due to vibe coding; especially since there seems to be a high likelihood of HR and Finance vibe coders who think they can do the traditional IT/Dev work without understanding what they are doing and how to do it safely.
And the band plays on
- YouTube
I would say absolutely, in the general sense most people, and the salesmen, frame them in.
When I was invited to assist with the GDC development, I got a chance to partner with a few AI developers and see the development process firsthand, try my hand at it myself, and get my hands on a few low parameter models for my own personal use. It's really interesting just how capable some models are in their specific use-cases. However, even high param. models easily become useless at the drop of a hat.
I found the best case, one that's rarely done mind you, is integrate the model into a program that has the ability to call a known database. With a properly trained model to format output in both natural language and use a given database for context calls, and concrete information, the qualitative performance leaps ahead by bounds. Problem is, that requires so much customization it pretty much ends up being something a capable hobbyist would do, it's just not economically sound for a business to adopt.
According to Deutsche Bank the AI bubble is ~~a~~ the pillar of our economy now.
So when it pops, I guess that's kinda apocalyptic.
Edit - strikethrough
“No Duh,” say senior developers everywhere.
I'm so glad this was your first line in the post
Thing is both statements can be true.
Used appropriately and in the right context, LLMs can accelerate some select work.
But the hype level is 'human replacement is here (or imminent, depending on if the company thinks the audience is willing to believe yet or not)'. Recently Anthropic suggested someone could just type 'make a slack clone' and it'll all be done and perfect.
This. Like with any tool you have to learn how and when to use it. I've started to get the hang of what tasks it improves but I don't think I've regained the hours I've spent learning it yet.
But as the tool and my understanding of it improves it'll probably happen some day.
Heh. That's a fun chart. If that's programming aptitude, I scored 80 on that part of the broad spectrum aptitude test I got a sneak-peek chance to do several parts of. Well now I know why I'm so easily in agreement with "senior coders", if it is programming aptitude quotient. If it's just iq, ... pulls hood up to block the glare.
Daunting that there may be a middling bias getting apparent advantages. Evolution may not serve us well like that.
That's kinda wrong though. I've seen LLMs write pretty good code, in some cases even doing something clever I hadn't thought of.
You should treat it as any junior though, and read the code changes and give feedback if needed.
And many between "senior developers everywhere" and "a layman who never wrote code in his life".
Like me, I'm saying it too. A big ol "No duh".
Disbelieve the hype.
I have never seen an AI generated code which is correct. Not once. I've certainly seen it broadly correct and used it for the gist of something. But normally it fucks something up - imports, dependencies, logic, API calls, or a combination of all them.
I sure as hell wouldn't trust to use it without reviewing it thoroughly. And anyone stupid enough to use it blindly through "vibe" programming deserves everything they get. And most likely that will be a massive bill and code which is horribly broken in some serious and subtle way.
How is it not correct if the code successfully does the very thing that was prompted?
F.ex. in my company we don't have any real programmers, but we have built a handful of useful tools (approx. 400-1600 LOC, mainly Python) to do some data analysis, regex stuff to clean up some output files, index some files and analyze/check their contents for certain mistakes, dashboards to display certain data, etc.
Of course the apps may not have been perfect after the very first prompt, or even compiled, but after iterating an error or two, and explaining an edge case or two, they’ve started to perform flawlessly, saving tons of work hours per week. So how is this not useful? If the code creates results that are correct, doesn’t that make the app itself technically ”correct” too, albeit likely not nearly as optimized as equivalent human code would be.
If the code doesn't compile, or is badly mangled, or uses the wrong APIs / imports or forgets something really important then it's broken. I can use AI to inform my opinion and sometimes makes use of what it outputs but critically I know how to program and I know how to spot good and bad code.
I can't speak for how you use it, but if you don't have any real programmers and you're iterating until something works then you could be producing junk and not know it. Maybe it doesn't matter in your case if its a bunch for throwaway scripts and helpers but if you have actual code in production where money, lives, reputation, safety or security are at risk then it absolutely does.
I disagree on the junk part: I see it so that if the outputs of the program are correct, the logic must be sound (just maybe not optimized for efficiency). Of course, in our case the inputs are highly structured and it is easy for humans to spot errors in the output files, so this "iterate until outputs are perfect" approach has worked great and yielded huge savings in work hours. In our case none of the tools are exposed outside, so in the very worst case a user may just crash the app.
But yeah I agree building any public frontend or anything business critical is likely the way to doom.
I've seen and used AI for snippets of code and it's pretty decent at that.
With my colleagues I always compare it to a battery powered drill. It's very powerful and can make shit a lot easier. But you'd not try to build furniture from scratch with only a battery powered drill.
You need the knowledge to use it - and also saws, screws, the proper bits for those screws and so on and so forth.
What bothers me the most is the amount of tech debt it adds by using outdated approaches.
For example, recently I used AI to create some Python scripts that use polars and altair to parse some data and draw charts. It kept insisting on bringing in pandas so it could convert the polars dataframes to pandas dataframes just for passing them to altair. When I told it that altair can use polars dataframes directly, that helped, but two or three prompts later it would try to solve problems by adding the conversion again.
This makes sense too, because the training material, on average, is probably older than the change that enabled altair to use polars dataframes directly. And a lot of code out there just only uses pandas in the first place.
The result is that in all these cases, someone who doesn’t know this would probably be impressed that the scripts worked, and just not notice the extra tech debt from that unnecessary dependency on pandas.
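If I recall the current altair behavior correctly, the direct path is just this (a sketch, untested against any particular version):

import altair as alt
import polars as pl

df = pl.DataFrame({"x": [1, 2, 3], "y": [2, 4, 8]})
# Recent altair accepts polars dataframes directly; no .to_pandas() round-trip.
chart = alt.Chart(df).mark_line().encode(x="x", y="y")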
It sounds like it’s not a big deal, but these things add up and eventually, our AI enhanced code bases will be full of additional dependencies, deprecated APIs, unnecessarily verbose or complicated code, etc.
I feel like this is one aspect that gets overlooked a bit when we talk about productivity gains. We don’t necessarily immediately realize how much of that extra LoC/time goes into outdated code and old fashioned verbosity. But it will eventually come back to bite us.
I've used Claude code to fix some bugs and add some new features to some of my old, small programs and websites. Not things I can't do myself, but things I can't be arsed to sit down and actually do.
It's actually gone really well, with clean and solid code. easily readable, correct, with error handling and even comments explaining things. It even took a gui stream processing program I had and wrote a server / webapp with the same functionality, and was able to extend it with a few new features I've been thinking to add.
These are not complex things, but a few of them were 20+ files big, and it managed to not only navigate the code, but understand it well enough to add features with changes touching multiple files (model, logic, and view layers, for example), or refactor a too-big class and update all references to use the new classes.
So it's absolutely useful and capable of writing good code.
AI works well for mindless tasks: data formatting, rough drafts, etc.
Once a task requires context and abstract thinking, AI can't handle it.
I work adjacent to software developers, and I have been hearing a lot of the same sentiments. What I don't understand, though, is the magnitude of this bubble then.
Typically, bubbles seem to form around some new market phenomenon or technology that threatens to upset the old paradigm and usher in a new boom. Those market phenomena then eventually take their place in the world based on their real value, which is nowhere near the level of the hype, but still substantial.
In this case, I am struggling to find examples of the real benefits of a lot of these AI assistant technologies. I know that there are a lot of successes in the AI realm, but not a single one I know of involves an LLM.
So, I guess my question is, "What specific LLM tools are generating profits or productivity at a substantial level well exceeding their operating costs?" If there really are none, or if the gains are only incremental, then my question becomes an incredulous, "Is this biggest in history tech bubble really composed entirely of unfounded hype?"
From what I've seen and heard, there are a few factors to this.
One is that the tech industry right now is built on venture capital. In order to survive, they need to act like they're at the forefront of the Next Big Thing in order to keep bringing investment money in.
Another is that LLMs are uniquely suited to extending the honeymoon period.
The initial impression you get from an LLM chatbot is significant. This is a chatbot that actually talks like a person. A VC mogul sitting down to have a conversation with ChatGPT, when it was new, was a mind-blowing experience. This is a computer program that, at first blush, appears to be able to do most things humans can do, as long as those things primarily consist of reading things and typing things out - which a VC, and mid/upper management, does a lot of. This gives the impression that AI is capable of automating a lot of things that previously needed a live, thinking person - which means a lot of savings for companies who can shed expensive knowledge workers.
The problem is that the limits of LLMs are STILL poorly understood by most people. Despite constructing huge data centers and gobbling up vast amounts of electricity, LLMs still are bad at actually being reliable. This makes LLMs worse at practically any knowledge work than the lowest, greenest intern - because at least the intern can be taught to say they don't know something instead of feeding you BS.
It was also assumed that bigger, hungrier LLMs would provide better results. Although they do, the gains are getting harder and harder to reach. There needs to be an efficiency breakthrough (and a training breakthrough) before the wonderful world of AI can actually come to pass because as it stands, prompts are still getting more expensive to run for higher-quality results. It took a while to make that discovery, so the hype train was able to continue to build steam for the last couple years.
Now, tech companies are doing their level best to hide these shortcomings from their customers (and possibly even themselves). The longer they keep the wool over everyone's eyes, the more money continues to roll in. So, the bubble keeps building.
I think right now companies are competing until there are only 1 or 2 that clearly own the majority of the market.
Afterwards they will devolve back into the same thing search engines are now. A cesspool of sponsored ads and links to useless SEO blogs.
They'll just become gate keepers of information again and the only ones that will be heard are the ones who pay a fee or game the system.
Maybe not though, I'm usually pretty cynical when it comes to what the incentives of businesses are.
It remains to be seen whether the advent of "agentic AIs," designed to autonomously execute a series of tasks, will change the situation.
"Agentic AI is already reshaping the enterprise, and only those that move decisively — redesigning their architecture, teams, and ways of working — will unlock its full value," the report reads.
"Devs are slower with and don't trust LLM based tools. Surely, letting these tools off the leash will somehow manifest their value instead of exacerbating their problems."
Absolute madness.
Industry? Yes, industry hires people who know how to do things needed by industry and who do nothing besides those things.
Programmers outside "industry" more often find themselves writing code with libraries they're seeing for the first time and in languages they never thought they'd use. AI helps a lot here.
Except LLMs are absolutely terrible at working with a new, poorly documented library. Commonly-used, well-defined libraries? Sure! Working in an obscure language or an obscure framework? Good luck.
LLMs can surface information. It's perhaps the one place they're actually useful. They cannot reason in the same way a human programmer can, and all the big tech companies are trying to sell them on that basis.
Well, don't use it with new, poorly documented libraries. That is a common sense rule: use the tool where it is useful.
Somehow many LLM criticizers just claim that LLMs are shit because they can't autonomously write code. Yes, they can't. But they can do many other useful things.
Wait...
I think maybe a good comparison is to written papers/ assignments. It can generate those just like it can generate code.
But it is not about the words themself, but about the content.
This article sums up a Stanford study of AI and developer productivity. TL;DR - net productivity boost is a modest 15-20%, or as low as negative to 10% in complex, brownfield codebases. This tracks with my own experience as a dev.
linkedin.com/pulse/does-ai-act…
When Mark Zuckerberg announced at the beginning of this year that he would replace all mid-level engineers at Meta with AI by the end of the year, he kicked up a storm in the tech world. Ömer Faruk Çelebi (www.linkedin.com)
These types of articles always fail to mention how well trained the developers were on techniques and tools. In my experience that makes a big difference.
My employer mandates we use AI and provides us with any model, IDE, service we ask for. But where it falls short is providing training or direction on ways to use it. Most developers seem to go for results prompting and get a terrible experience.
I on the other hand provide a lot of context through documents and various mcp tooling, I talk about the existing patterns in the codebase and provide sources to other repositories as examples, then we come up with an implementation plan and execute on it with a task log to stay on track. I spend very little time fixing bad code because I spent the setup time nailing down context.
So if a developer is just prompting "Do XYZ". It's no wonder they're spending more time untangling a random mess.
Another aspect is that everyone seems to always be working under the gun and they just don't have the time to figure out all the best practices and techniques on their own.
I think this should be considered when we hear things like this.
I have 3 questions, and I'm coming from a heavily AI-skeptic position, but am open:
1) Do you believe that providing all that context, describing the existing patterns, creating an implementation plan, etc, allows the AI to both write better code and faster than if you just did it yourself? To me, this just seems like you have to re-write your technical documentation in prose each time you want to do something. You are saying this is better than 'Do XYZ', but how much twiddling of your existing codebase do you need to do before an AI can understand the business context of it? I don't currently do development on an existing codebase, but every time I try to get these tools to do something fairly simple from scratch, they just flail. Maybe I'm just not spending the hours to build my AI-parsable functional spec. Every time I've tried this, asking something as simple as (and paraphrased for brevity) "write an Asteroids clone using JavaScript and HTML 5 Canvas" results in a full failure, even with multiple retries chasing errors. I wrote something like that a few years ago to learn Javascript and it took me a day-ish to get something that mostly worked.
2) Speaking of that context. Are you running your models locally, or do you have some cloud service? If you give your entire codebase to a 3rd party as context, how much of your company's secret sauce have you disclosed? I'd imagine most sane companies are doing something to make their models local, but we see regular news articles about how ChatGPT is training on user input and leaking sensitive data if you ask it nicely and I can't imagine all the pro-AI CEOs are aware of the risks here.
3) How much pen-testing time are you spending on this code, error handling, edge cases, race conditions, data sanitation? An experienced dev understands these things innately, having fixed these kinds of issues in the past and knows the anti-patterns and how to avoid them. In all seriousness, I think this is going to be the thing that actually kills AI vibe coding, but it won't be fast enough. There will be tons of new exploits in what used to be solidly safe places. Your new web front-end? It has a really simple SQL injection attack. Your phone app? You can tell it your username is admin'joe@google.com and it'll let you order stuff for free since you're an admin.
I see a place for AI-generated code, for instant functions that do something blending simple and complex. "Hey claude, write a function to take a string and split it at the end of every sentence containing an uppercase A". I had to write weird functions like that constantly as a sysadmin, and transforming data seems like a thing an AI could help me accelerate. I just don't see that working on a larger scale, though, or trusting an AI enough to allow it to integrate a new function like that into an existing codebase.
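For what it's worth, that exact ask is small enough to hand-roll; a sketch assuming a naive ".?!" model of sentence boundaries:

import re

def split_after_sentences_with_A(text: str) -> list:
    # Split AFTER each sentence that contains an uppercase 'A'.
    parts, buf = [], ""
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        buf = f"{buf} {sentence}".strip()
        if "A" in sentence:
            parts.append(buf)
            buf = ""
    if buf:
        parts.append(buf)
    return parts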
Thank you for reading my comment. I'm on the train headed to work and I'll try to answer completely. I love talking about this stuff.
Do you believe that providing all that context, describing the existing patterns, creating an implementation plan, etc, allows the AI to both write better code and faster than if you just did it yourself?
For my work, absolutely. My work is a lot of tickets that were setup from multiple stories and multiple epics. It would be like asking me if I am really framing a house faster with a nail gun and compressor. If I were just hanging up a picture or a few pictures in the hallway, it's probably faster to use a hammer than to set up the compressor and nail gun, plus cleanup.
However, a lot of that documentation already exists by the time it gets to me. All of the Design Documents and Product Requirement Documents have already been formed, discussed, and approved by our architecture team and team leads. Imagine if you already had this documentation for the Asteroids game; how much better do you think your LLM would do? Maybe this is the benefit of using LLMs for development at an established company. Btw, a lot of those documents were also created with the assistance of AI by the Product Team, Architects, and Principal/Staff/Leads anyway.
how much twiddling of your existing codebase do you need to do before an AI can understand the business context of it?
With the help of our existing documents and codebase(s), I feel I don't have any issues with the model knowing what we're doing. I do have to set up my own context for how I want it done. To me this is like explaining to a Junior Engineer what I need them to help me with. If you're familiar with "Know when to Direct, when to Delegate, or when to Develop," I would say it lands between Direct and Delegate. I have markdown files with my rules and guidelines and provide those as context. I use Augment Code, which is pretty good with codebase context.
write an Asteroids clone using JavaScript and HTML 5 Canvas
I would try "Let's plan out the steps needed to write an Asteroids game using JavaScript and HTML 5. Identify and explain each step of the development plan. The game must build with no errors, be playable, and pass all tests. Do not write code at this time until our plan is approved." Then once it comes back with the initial steps, I would guide it further if needed. Finally I would approve the plan and tell it to execute while tracking its steps (Augment Code uses a task log).
Are you running your models locally, or do you have some cloud service? If you give your entire codebase to a 3rd party as context, how much of your company's secret sauce have you disclosed?
We are required to use the frontier models that my employer has contracts with and are forbidden from using local models. In our enterprise contracts we have negotiated for no training on our data. I imagine we pay for that. I'm not involved in that level of interaction on the accounts.
How much pen-testing time are you spending on this code, error handling, edge cases, race conditions, data sanitation? An experienced dev understands these things innately, having fixed these kinds of issues in the past and knows the anti-patterns and how to avoid them. In all seriousness, I think this is going to be the thing that actually kills AI vibe coding, but it won't be fast enough. There will be tons of new exploits in what used to be solidly safe places.
We have other teams that handle a lot of these tasks. Those teams are also using AI tools to get the job done. In addition, we have static analysis tools on our repo, like CodeRabbit and another one I can't remember the name of that looks specifically for security concerns. They comment on the PR directly, and the merge is blocked until the findings are handled. Code coverage for testing must be at 85% or it blocks the merge, and we have a full QA department of Analysts and SDETs to QA. In addition to that we still require human approvals (2 devs + Sr+). All of these people are still using AI tools to help them at each step.
I hope that answers your questions and gives you some insight into how I've found success in my experience with it. I will say that on my personal projects I don't go this far with process and I don't experience the same AI output that I do at work.
Thanks for your reply, and I can still see how it might work.
I'm curious if you have any resources with some end-to-end examples. This is where I struggle. If I need an atomic piece of code, I can maybe get it started with an LLM and finish it by hand, but anything larger seems to always fail. So far the best video I found for a start-to-finish demo was this:
He spends plenty of time describing the tools and how to use them, but when we get to the actual work, we spend 20 minutes telling the LLM that it's doing stuff wrong. There's eventually a prototype, but to get there he had to alternate between 'I still can't jump' and 'here's the new error.' He eventually modified code himself, so even getting a 'Mario clone' running requires an actual developer, and the final result was underwhelming at best.
For me, a 'game' is this tiny product that could be a viable unit. It doesn't need to talk to other services, it just needs to react to user input. I want to see a speed-run of someone using LLMs to make a game that is playable. It doesn't need to be "fun", but the video above only got to the 'player can jump and gets game over if hitting enemy' stage. How much extra effort would it take to make the background not flat blue? Is there a win condition? How to refactor this so that the level is not hard-coded? Multiple enemy types? Shoot a fireball that bounces? Power Ups? And does doing any of those break jump functionality again? How much time do I have to spend telling the LLM that the fireball still goes through the floor and doesn't kill an enemy when it hits them?
I could imagine that if the LLM was handed a well described design document and technical spec that it could do better, but I have yet to see that demonstrated. Given what it produces for people publishing tutorials online, I would never let it handle anything business critical.
The video is an hour long, and spends about 20 minutes in the middle actually working on the project. I probably couldn't do better, but I've mostly forgotten my javascript and HTML canvas. If kaboom.js was my focus, though, I imagine I could knock out what he did in well under 20 minutes and have a better architected design that handled the above questions.
Luckily, I've not yet been mandated to embed AI into my pseudo-developer role, but they are asking.
(embedded YouTube video)
Yea, I use it for Home Assistant. It's amazingly powerful... and so incredibly dumb.
It will take my if/and statements and shrink them to a third of the length, making them twice as robust... while missing that one of the arguments is entirely in the wrong place.
It regurgitates old code, it cannot come up with new stuff.
The trick is, most of what you write is basically old code in new wrapping. In most projects, I'd say the new and novel part is maybe 10% of the code. The rest is things like setting up db models, connecting them to base logic, setting up views and api endpoints, decoding the message on the UI side, displaying it to the user, handling input back, threading things so the UI doesn't hang, error handling, input data verification, basic unit tests, setting up settings and supporting reading them from a file or env vars, making the UI look not horrible, adding translatable text, and so on and on and on. All of that has been written in some variation a million times before. All of it can be written (and verified) by a half-asleep competent coder.
The actual new interesting part is gonna be a small small percentage of the total code.
You mean relying blindly on a statistical prediction engine to attempt to produce sophisticated software without any understanding of the underlying principles or concepts doesn't magically replace years of actual study and real-world experience?
But trust me, bro, the singularity is imminent, LLMs are the future of human evolution, true AGI is nigh!
I can't wait for this idiotic "AI" bubble to burst.
About that "net slowdown". I think it's true, but only in specific cases. If the user already knows well how to write code, an LLM might be only marginally useful or even useless.
However, there are ways to make it useful, given the right circumstances. For example, if you can't be bothered to write a simple loop, you can have an LLM do it. Give the boring routine to an LLM, and you can focus on naming the variables in a fitting way or adjusting the finer details to your liking.
Can't be bothered to look up the exact syntax for a function you use only twice a year? Let an LLM handle that, and tweak the details. Now you didn't spend 15 minutes reading Stack Overflow posts that don't answer the exact question you had in mind. Instead, you spent 5 minutes on the whole thing, and that includes the tweaking and troubleshooting.
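Python's strftime format codes are a classic case of that twice-a-year syntax (my own example):

from datetime import datetime

# The format-code zoo is exactly the kind of thing an LLM recalls
# instantly and most humans re-look-up every single time.
stamp = datetime(2025, 9, 30, 14, 5).strftime("%a %d %b %Y, %H:%M")
print(stamp)  # Tue 30 Sep 2025, 14:05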
If you have zero programming experience, you can use an LLM to write some code for you, but prepare to spend the whole day troubleshooting something that is essentially a black box to you. Alternatively, you could ask a human to write the same thing in 5-15 minutes depending on the method they choose.
This is a sane way to use an LLM. Also, pick your poison: some bots are better than others for a specific task. It's kinda fascinating to see how other people solve coding problems, and that is essentially on tap with a bot; it will churn out as many examples as you want. It's a really useful tool for learning the syntax and libraries of unfamiliar languages.
On one extreme of the LLM debate there is insane hype, and on the other a great pessimism, but in the middle is a nice labour-saving educational tool.
Writing new code is easier than editing someone else's code, but editing a portion is still better than writing the entire program again from start to end.
Then there are LLMs, which force you to edit the entire thing from start to end.
Two excerpts from Friendly Feudalism: The Tibet Myth:
Drepung monastery was one of the biggest landowners in the world, with its 185 manors, 25,000 serfs, 300 great pastures, and 16,000 herdsmen. The wealth of the monasteries rested in the hands of small numbers of high-ranking lamas. Most ordinary monks lived modestly and had no direct access to great wealth. The Dalai Lama himself “lived richly in the 1000-room, 14-story Potala Palace.” [12] Secular leaders also did well. A notable example was the commander-in-chief of the Tibetan army, a member of the Dalai Lama’s lay Cabinet, who owned 4,000 square kilometers of land and 3,500 serfs. [13] Old Tibet has been misrepresented by some Western admirers as “a nation that required no police force because its people voluntarily observed the laws of karma.” [14] In fact it had a professional army, albeit a small one, that served mainly as a gendarmerie for the landlords to keep order, protect their property, and hunt down runaway serfs.
Young Tibetan boys were regularly taken from their peasant families and brought into the monasteries to be trained as monks. Once there, they were bonded for life. Tashì-Tsering, a monk, reports that it was common for peasant children to be sexually mistreated in the monasteries. He himself was a victim of repeated sexual abuse, beginning at age nine. [15] The monastic estates also conscripted children for lifelong servitude as domestics, dance performers, and soldiers.
In old Tibet there were small numbers of farmers who subsisted as a kind of free peasantry, and perhaps an additional 10,000 people who composed the “middle-class” families of merchants, shopkeepers, and small traders. Thousands of others were beggars. There also were slaves, usually domestic servants, who owned nothing. Their offspring were born into slavery. [16] The majority of the rural population were serfs. Treated little better than slaves, the serfs went without schooling or medical care. They were under a lifetime bond to work the lord’s land — or the monastery’s land — without pay, to repair the lord’s houses, transport his crops, and collect his firewood. They were also expected to provide carrying animals and transportation on demand. [17] Their masters told them what crops to grow and what animals to raise. They could not get married without the consent of their lord or lama. And they might easily be separated from their families should their owners lease them out to work in a distant location.
[18] As in a free labor system and unlike slavery, the overlords had no responsibility for the serf’s maintenance and no direct interest in his or her survival as an expensive piece of property. The serfs had to support themselves. Yet as in a slave system, they were bound to their masters, guaranteeing a fixed and permanent workforce that could neither organize nor strike nor freely depart as might laborers in a market context. The overlords had the best of both worlds.
One 22-year old woman, herself a runaway serf, reports: “Pretty serf girls were usually taken by the owner as house servants and used as he wished”; they “were just slaves without rights.” [19] Serfs needed permission to go anywhere. Landowners had legal authority to capture those who tried to flee. One 24-year old runaway welcomed the Chinese intervention as a “liberation.” He testified that under serfdom he was subjected to incessant toil, hunger, and cold. After his third failed escape, he was mercilessly beaten by the landlord’s men until blood poured from his nose and mouth. They then poured alcohol and caustic soda on his wounds to increase the pain, he claimed.
[20] The serfs were taxed upon getting married, taxed for the birth of each child and for every death in the family. They were taxed for planting a tree in their yard and for keeping animals. They were taxed for religious festivals and for public dancing and drumming, for being sent to prison and upon being released. Those who could not find work were taxed for being unemployed, and if they traveled to another village in search of work, they paid a passage tax. When people could not pay, the monasteries lent them money at 20 to 50 percent interest. Some debts were handed down from father to son to grandson. Debtors who could not meet their obligations risked being cast into slavery.
[21] The theocracy’s religious teachings buttressed its class order. The poor and afflicted were taught that they had brought their troubles upon themselves because of their wicked ways in previous lives. Hence they had to accept the misery of their present existence as a karmic atonement and in anticipation that their lot would improve in their next lifetime. The rich and powerful treated their good fortune as a reward for, and tangible evidence of, virtue in past and present lives.
Selection two, shorter: (CW sexual violence and mutilation)
The Tibetan serfs were something more than superstitious victims, blind to their own oppression. As we have seen, some ran away; others openly resisted, sometimes suffering dire consequences. In feudal Tibet, torture and mutilation — including eye gouging, the pulling out of tongues, hamstringing, and amputation — were favored punishments inflicted upon thieves, and runaway or resistant serfs. [22] Journeying through Tibet in the 1960s, Stuart and Roma Gelder interviewed a former serf, Tsereh Wang Tuei, who had stolen two sheep belonging to a monastery. For this he had both his eyes gouged out and his hand mutilated beyond use. He explains that he no longer is a Buddhist: “When a holy lama told them to blind me I thought there was no good in religion.” [23] Since it was against Buddhist teachings to take human life, some offenders were severely lashed and then “left to God” in the freezing night to die. “The parallels between Tibet and medieval Europe are striking,” concludes Tom Grunfeld in his book on Tibet.
[24] In 1959, Anna Louise Strong visited an exhibition of torture equipment that had been used by the Tibetan overlords. There were handcuffs of all sizes, including small ones for children, and instruments for cutting off noses and ears, gouging out eyes, breaking off hands, and hamstringing legs. There were hot brands, whips, and special implements for disemboweling. The exhibition presented photographs and testimonies of victims who had been blinded or crippled or suffered amputations for thievery. There was the shepherd whose master owed him a reimbursement in yuan and wheat but refused to pay. So he took one of the master’s cows; for this he had his hands severed. Another herdsman, who opposed having his wife taken from him by his lord, had his hands broken off. There were pictures of Communist activists with noses and upper lips cut off, and a woman who was raped and then had her nose sliced away.
[25] Earlier visitors to Tibet commented on the theocratic despotism. In 1895, an Englishman, Dr. A. L. Waddell, wrote that the populace was under the “intolerable tyranny of monks” and the devil superstitions they had fashioned to terrorize the people. In 1904 Perceval Landon described the Dalai Lama’s rule as “an engine of oppression.” At about that time, another English traveler, Captain W. F. T. O’Connor, observed that “the great landowners and the priests… exercise each in their own dominion a despotic power from which there is no appeal,” while the people are “oppressed by the most monstrous growth of monasticism and priest-craft.” Tibetan rulers “invented degrading legends and stimulated a spirit of superstition” among the common people. In 1937, another visitor, Spencer Chapman, wrote, “The Lamaist monk does not spend his time in ministering to the people or educating them. […] The beggar beside the road is nothing to the monk. Knowledge is the jealously guarded prerogative of the monasteries and is used to increase their influence and wealth.” [26] As much as we might wish otherwise, feudal theocratic Tibet was a far cry from the romanticized Shangri-La so enthusiastically nurtured by Buddhism’s western proselytes.
Friendly Feudalism: The Tibet Myth
Along with the blood drenched landscape of religious conflict there is the experience of inner peace and solace that every religion promises, none more so than Buddhism.redsails.org
Tibet, China, and the violent reaction of a wealthy elite
Too many westerners supplant their fantasy instead of dealing with reality in TibetEsha (Historic.ly)
tiktok has a bunch of easier-to-digest & ultra-short videos that i would share if they didn't make urls traceable to your user account.
at least for now, until the israeli gov't is done with tiktok.
Explosives-laden Ukrainian drone found off Turkish coast – media (VIDEO)
The weapon has been identified as a Magura V5 kamikaze unmanned sea vessel, according to one outletRT
Lavrov does not consider issue of possible Tomahawk missile supplies to Kiev settled
"If the US believe that Ukraine is a responsible nation that will use Tomahawks responsibly, that would be surprising to me," the foreign minister statedTASS
Italy’s Navy to Quit Gaza Flotilla as Risk of Israeli Attack Looms
Sailing boats, part of the Global Sumud Flotilla aiming to reach Gaza and break Israel’s naval blockade, sail off Koufonisi …Algemeiner.com
Bravely ran away, away
When danger reared its ugly head
He bravely turned his tail and fled
Wave Function Collapse Algorithm in ClojureScript
The last time I touched ClojureScript was almost two years ago. It was a really fun experience, and actually, it was a bit special to me personally.andreyor.st
You want CCS, but cheaper and less controversial? Try Biomethane
Carbon capture technology is often associated with ideas of capturing CO₂ from power plants or industrial facilities like cement plants.Hanno Böck (industrydecarbonization.com)
Better yet, reverse coal mining.
Dig massive holes, turn CO₂ into pure carbon blocks, and bury them underground.
Greening doesn't help if nobody can afford to live there!
My mother shared a Le Devoir article with me that takes stock of Mayor Valérie Plante's record on greening and bike paths in Montreal: "Is Montreal greener than it was eight years ago?" In short, it celebrates the increase in trees planted and in bike paths.
It's true that that's a real improvement. But the article also disappointed me. So I replied to my mother, but also to the journalists. And I'm going to write it here too, because I think it's important that we stay critical of what Montreal does, even when a person has done plenty of positive things.
Greening is great
Greening is great. Really. I love it. And it's great to have more bike paths. I didn't ride a bike in Montreal before the REV arrived around 2021. Now I feel it's safe, so I use my bike for almost all my trips.
Except...
The problem is that greening or improving mobility (bike or public transit) without building social housing only contributes to the gentrification of cities. It makes more people want to live here, which drives up rents, which pushes poor people out of their neighbourhoods. The solution to that is social housing, but the city refuses to increase the budgets for it, preferring to wait for private developers to take care of it.
Note: social housing includes off-market housing, non-profit housing organizations, and housing co-ops. They're not ghettos for the poor!
Let's remember that it's the same Projet Montréal party that has driven out the homeless and repeatedly dismantled encampments for years, without providing a lasting solution and without implementing the solutions proposed by the organizations fighting homelessness. Every winter when the deep cold arrives, Montreal prefers to spend millions on ineffective temporary measures rather than on lasting solutions. It's also Projet Montréal that doubled the police budget to drive out the homeless, which just kicked the problem down the road and has proven completely ineffective. It would have been much more cost-effective to put that money into the organizations that help fight poverty, into social housing, mental health support, or coexistence with people experiencing homelessness. It takes lasting solutions, but the city has always preferred temporary fixes and violent repression.
Where to find the money?
The city says it has no money, but these are choices it makes. We could choose to tax luxury buildings, mansions, and multi-million-dollar condos more heavily, and then reinvest that in social housing. Other cities around the world have done it. We just have to stop listening only to the developers and businessmen who do nothing but promote the interests of the rich who want to get infinitely richer.
Yes, I like the new bike paths and the new trees, but if only the financially well-off can benefit from them, it's deeply unjust. And I say that as a financially comfortable white man who owns a condo. I know I was part of the problem by buying in my neighbourhood more than ten years ago. But now that I've learned about the problem, I volunteer every week with several organizations to give back to the community, notably the Bellechasse citizens' group, which campaigns for social housing near the Rosemont metro.
In about a month, we go to vote in the municipal elections. It's time to go see what the candidates are proposing. Me, I'll vote for those who won't just say they "like social housing" but who will also be ready to go get the money to build it from the pockets of the ultra-rich by taxing their luxury homes. The parties that say they have to wait for provincial or federal money will be waiting far too long, with Legault and Carney in power! We don't have time to wait! There's a crisis! People are sleeping in the street, and we waste more than a billion dollars every year trying to manage it, without success!
And you, are there bold candidates in your city? Or are the candidates asleep at the switch?
Header photo by Manh Cuong Le on Pexels
Is the city of Montreal greener since Valérie Plante's election? | Interactive | Le Devoir
Valérie Plante promised in 2017 to give Montrealers better access to nature. Did she keep her promises? Le Devoir
Senators rail against Tesla AI's apparent inability to detect train crossings
Full Self-Driving mode could be on-track to cause serious accidentsDan Robinson (The Register)
Trump Makes It Very Clear They’re Going To Turn TikTok Into A Right Wing Propaganda Machine
After years of hyperventilation about TikTok’s impact on privacy, propaganda, and national security, TikTok is likely being sold to a bunch of Trump’s billionaire technofascist buddies …Techdirt
AMD Fluid Motion Frames 3 spotted in the upcoming AMD Adrenalin driver branch — could lean on AI model used in FSR 4
AFMF 3 is likely a component that will be part of FSR Redstone when it releases
Imgur blocks UK users after regulator threatens fine over child data use
Imgur is one of the world's largest image-sharing communities, originally created in 2009 by Alan Schaaf as a gift to Reddit users. The service grew into a massive platform, boasting over 60 billion memes, GIFs, and images viewed by its 150 million monthly users. Now, it has pulled out of the UK following a warning of potential fines from the Information Commissioner's Office (ICO). Users in the region trying to access the site are met with the error message: Content not available in your region.
Follow up to: lemmy.zip/post/49898832
Imgur is now geoblocking the UK
This includes the ability to see images embedded into other sites.
Imgur was my daily time-waste app. It has way more content than Lemmy and the memes are fresher (sorry).
I have a self-hosted VPN, but its IP range is heavily throttled/blocked by many places, making it of little practical use. Also, it is in a country which has itself implemented fairly draconian age-check laws.
It seems to me that this age-related stuff could always have been implemented as a layer alongside HTTP(S) which declares whether the user is 18+. The legal aspect would be to force sites to honor that declaration and block mature content for users who don't declare it. Locked-down devices for children would not be able to declare the user is 18+, but adults' devices would. (Of course it would be bypassable, but what isn't?)
IDK if there's a sane way to enforce this at the router, so that the subscriber can set an 18+ password, hand it out to the adults who use the connection, and then not have to worry about "locked down devices". But presumably that requires something that happens before the TLS handshake, which sounds spooky...
The remaining issue is catching sex ed in the 18+ net. I don't think that can be technologically separated from porn, though, and it does seem likely that extremely easy access to porn (and to content promoting suicide or violence or anorexia or...) is a bad thing for children.
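As a rough sketch of the declaration idea (the header name and scheme are invented here; nothing like this is standardized), the serving side would be trivial: refuse mature content unless the declaration is present. In Python:

from http.server import BaseHTTPRequestHandler, HTTPServer

ADULT_HEADER = "X-Age-Declared"  # hypothetical header, set by an adult's device or router

class GatedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.headers.get(ADULT_HEADER) == "18+":
            body, code = b"mature content here", 200
        else:
            # 451 Unavailable For Legal Reasons fits the use case
            body, code = b"blocked: no adult declaration sent", 451
        self.send_response(code)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), GatedHandler).serve_forever()

The hard part, as noted above, is making the declaration trustworthy and enforceable, which is exactly the before-the-TLS-handshake problem.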
Privacy issues could be mitigated, and for the specific "issue" of children and teenagers accessing adult content, basic parenting and conversation would have a bigger impact than trying to forbid it. How has that worked out historically with alcohol or smoking?
By the UK's definitions in the OSA, shows once considered family entertainment, like Dancing with the Stars, and other productions could be banned. Sexualized content is everywhere in real life; the internet just mirrors it, it doesn't create it.
The fundamental issue with age verification is censorship. Once the framework is created, it can be applied to any other content someone deems you shouldn't access. What is legal today can be illegal tomorrow.
EXCLUSIVE: Hamas Leaders VERY NEGATIVE On Trump Gaza Deal
(embedded YouTube video)
OpenAI is launching the Sora app, its own TikTok competitor, alongside the Sora 2 model
The social app Sora will let users generate videos of themselves and their friends, which they can share in a TikTok-like feed.
Russia Loses Vote To Reclaim ICAO Governing Council Seat
The US and EU strongly oppose Russia's re-inclusion on ICAO's council.
US | New Report Uncovers Nashville’s Alarming Safety Risks Due To Ongoing ATC Shortage
The airport is in the news for a concerning reason.
Boeing Is Reportedly In Beginning Stages Of Developing 737 MAX Replacement
A new chapter unfolds as Boeing whispers about a potential 737 MAX successor.
United States Secretly Expands Exit Controls For International Air Travelers
cross-posted from: lemmy.zip/post/49951317
The United States is greatly expanding the use of a "biometric exit" program, whereby travelers have their photo taken on departure.
WestJet says some passengers’ personal info stolen in June security breach
WestJet says some passengers’ personal information was obtained in a cyberattack in June, however it believes the breach did not involve “sensitive” data in most cases.
WestJet says some passengers' personal info stolen in June security breach - Wings Magazine
WestJet says some passengers' personal information was obtained in a cyberattack in June, however it believes the breach did not involve "sensitive" data inWings Staff (Wings Magazine)
Brussels Airport to cancel all departing flights on 14 October due to national industrial action
Brussels Airport has announced that all departing passenger flights scheduled for Tuesday, 14 October, will be cancelled due to national industrial action. The disruption comes after staff from the airport’s security service provider confirmed their participation in the strike, which is expected to impact airport operations severely.
Persistent Pratt & Whitney GTF Engine Complications Prompt Air Austral To Retire Airbus A220-300 Fleet
The airline reportedly plans to operate the A220 only until the summer of 2026.
2025 DuckDuckGo Charitable Donations: $1.1M to privacy and digital competition non-profits around the world
DuckDuckGo is donating $1.1 million in 2025 to support organizations that share our vision of raising the standard of trust online.
2025 marks DuckDuckGo's 15th year of donations—our annual program to support organizations that share our vision of raising the standard of trust online.Dax (Spread Privacy)
Trump’s Argentina bailout enriches one well-connected US billionaire
A $20 billion US rescue package is a gift for a hedge fund manager with ties to Treasury Secretary Scott Bessent.Mother Jones
Defending Anonymity
Nicholas: Once the system is in place you cannot go back. The ID card is an object that identifies you. You have to have it with you at all times. It makes police control much easier. If you can’t establish identity then they can take you to the police station without any other reason. Once they have the ID card in place then they can add other things- like biometric identification e.g. fingerprints. The base is the card and then they add things. The ID card is the beginning of a general file on everyone that regroups all other information they have to identify someone. They can have your whole life in this one file- your health, civil status etc.
Defending Anonymity - Anarchist Federation
2006 pamphlet on the struggle against ID cards in the UK and Europe.libcom.org
Nicholas: Once the system is in place you cannot go back
100%.
Same goes for the Digital Euro, btw, no matter what the EU says about making it optional. It will be optional, sure, to begin with, but they will also start pushing even more laws to help get rid of cash (in France, we're already forbidden to carry more than a thousand euros in cash; I think it's 500 in Greece, though I'm not sure about that one). And when cash is gone, so is our ability to not be tracked when buying stuff. They will monitor every single one of our transactions, and penalize whatever they decide is not good for us/the country/the planet/their businesses, be it purchasing too much gas, too much food (because one needs to be fit, unless they agree to forgo health insurance, maybe), too many clothes, or whatever (to list just a few legal things one can buy nowadays). They will also quickly use their (monopolistic) control over that digital euro (and over all our bank accounts) to punish anyone seriously opposing their rules/laws, by making said opponents unable to access or, say, use their money to buy stuff that would help them organize and contest. "Sorry, Libb, your purchase of Orwell's '1984' and Huxley's 'Brave New World' can't be finalized. Instead, you can always scroll some more on social media. Have a nice day."
What a bright future.
FIDO Alliance, a NWO/New World Order org, has been working for years to push a mandatory biometrics/digital ID login each and every time a person uses the internet on a phone or PC. There have already been authorities wanting to shut down or control crypto.
I agree, though, that metal coins will be used to trade: gold, silver, copper, maybe nickel, and I'm not sure what else. They'll probably be made illegal too, but will still be used.
I think we all need to find ways to work at least 5 to 10 hours off the books - a trade that is easier to hide. And we need to start slowly creating black markets, so that it's not only the bad guys running black markets. Parallel societies are a great idea - dealing mainly with people you know well... kind of like the Amish communities.
Bill Toughening Penalties for Leaders of Criminal Organizations Is Postponed
The bill providing harsher penalties for leaders of armed criminal organizations (PL 839/2024) will have its vote postponed in the Com...testnamd (Blogger)
MilleMilaBici Milano, this Sunday
Re: MilleMilaBici Milano, this Sunday
Senators rail against Tesla AI's apparent inability to detect train crossings
Full Self-Driving mode could be on-track to cause serious accidentsDan Robinson (The Register)
Kroah-Hartman explains Cyber Resilience Act for open source
As long as a project is not organized as a legal or commercial entity, the CRA requires only a basic "readme" with a security contact. There is no legal risk for individual contributors simply sharing code online or in publications, even when they receive payment for writing an article, as long as the software itself is not monetized or organized. [...] the CRA's focus is on commercial manufacturers and distributors. That means businesses that integrate open source code into EU products must fully comply with documentation, incident response, and lifecycle management requirements. This includes publishing Software Bills Of Materials (SBOMs), patching vulnerabilities within regulated timeframes, and responding proactively to security incident reports.
[...] manufacturers must act on vulnerabilities, even if the upstream maintainer does not fix the issue. Manufacturers selecting open source code for their products must understand the code, support it, and respond to regulatory reporting requirements. This may, Kroah-Hartman observed, increase pressure on companies to use actively supported open source projects or stick closer to mainstream, well-resourced communities."
[...] it's coming soon for companies. Manufacturers are going to care in September of next year. They're going to start panicking in the summer of next year, and things are going to start hitting the fan."
They'll want developers to shoulder the burden the CRA will place on them. But you don't have to do that. It's their problem, not yours as a programmer.
The overworked maintainers of libxml2 and ImageMagick, or contributors to such industrially important things as the real-time kernel patches, might enjoy reading this.
The important thing is: change licenses to copyleft ones, such as GPLv3 or AGPL. That way, industrial manufacturers are not only obliged to patch their stuff (via the EU CRA), but also, if they sell the result in a product, to contribute patches back. Win-win!
Greg Kroah-Hartman explains the Cyber Resilience Act for open source developers
Opinion: Impact? Nope, don't worry, be happy, says Linux veteranSteven J. Vaughan-Nichols (The Register)
The good direction of this regulation was made possible by the hard work of activists and experts like Bert Hubert:
berthub.eu/articles/posts/eu-c…
EU CRA: What does it mean for open source? - Bert Hubert's writings
The final compromise text of the EU Cyber Resilience Act is now officially available, and various open source voices are currently opining on it.Bert Hubert's writings
I'm two days old in the piracy (torrent) world
Also, can anyone explain what leeches and trackers are, in simple words? And what is a magnet?
Seeds are users that have 100% of a torrent. More Seeds usually means more speed. Leechers are users that don't have 100% of a torrent yet.
You can grab torrents either with a file (.torrent) or a magnet link. The latter is just a link that tells the client how to get the torrent.
A tracker is a server that manages the torrent. You don't really have to deal with that.
- Always use a VPN. edit: Proton still supports port forwarding.
- Make sure to map the network adapter of qBittorrent (or whatever client you use) to the VPN device so you don't leak your IP when you have a disconnect.
- Set up something like Sonarr (Shows), Radarr (Movies), Jellyfin (or Plex) to streamline everything.
- You can also include Prowlarr in the setup to automatically sync indexers to Sonarr and Radarr. That's also where you find other indexers like 1337.to and whatnot.
- If you want more convenience and speed, get a Seedbox like ultra.cc
- Seed, seed, seed!
There's also 1337x.
- use a VPN, preferably mullvad
Leeches are peers requesting data from other peers who already have the data they need.
In order for a leech to request data, it must first know which IPs it can request it from. A tracker simply tracks how many users are on the torrent and their IPs.
Also, a magnet link is essentially a torrent file with the file removed and all the necessary data packed into one long link.
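For illustration, a magnet link looks something like this (the hash below is made up for the example; xt, dn, and tr are real magnet-URI fields):

magnet:?xt=urn:btih:0123456789abcdef0123456789abcdef01234567&dn=ubuntu-24.04-desktop-amd64.iso&tr=udp://tracker.example.org:1337/announce

xt carries the torrent's infohash (what the data is), dn is just a display name, and tr points at a tracker the client can ask for peers. With DHT enabled, the xt part alone is usually enough.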
* the Megathread
* the other Megathread
* Fmhy Beginners Guide
📜 ➜ Megathread
➜ Not so fast sailor! Do this first ✔️ Recommended: Use Firefox + uBlock Origin. Firefox is the top non-Chromium browser, offering nice security and privacy features.rentry.co
ext.to
Also what is a magnet?
(embedded YouTube video)
qBittorrent has an inbuilt torrent search function that can search multiple sites from inside the client. you should honestly never need to go to a website, download torrent files, or open magnet links, ever.
if you can host Jackett it really broadens your search options but isn't necessary.
if you decide to host Jackett, think about also hosting qBittorrent at the same time. since you're already setting up self-hosting stuff, it's not any more difficult, and the webUI is super convenient.
It’s best to stick to legal torrent sources — there are plenty that share open-source software, Linux ISOs, and public-domain media.
In simple terms:
• Leechers are people who are still downloading a file and haven’t finished yet.
• Seeders are the ones sharing the full file.
• Trackers help coordinate connections between seeders and leechers.
• A magnet link is just a shortcut that tells your torrent app where to find the file data through the peer network.
Owen Jones kicked out of Labour conference over 'safeguarding' issue
Journalist Owen Jones has been booted out of Labour conference with the party citing concerns about safeguarding.
The Jeremy Corbyn supporter accused Labour of "Trumpian behaviour", and said he'd never had his pass revoked before.
Writing on X, he said: "Labour has cancelled my Conference Pass. Absolutely pathetic, Trumpian behaviour. They are here suggesting that attempts to question Cabinet members and MPs about Britain facilitating Israel's genocide is a 'safeguarding issue'.
"This is clearly insane. I've been filming videos at Labour and Tory Conference for a decade now. This involves trying to get ministers to answer questions which - unfortunately! - most media outlets refuse to ask. After countless videos, this is the first time my pass has been revoked.
The Jeremy Corbyn supporter accused Labour of "Trumpian behaviour", and said he'd never had his pass revoked before, with the party citing concerns over safeguardingThe Mirror
Starmer’s team are cancelling the passes of left-wing journalists mid-conference
It seems that the Labour Party under Keir Starmer has been taking lessons from Donald Trump on how to deal with the media. That is: if they don't stenograph the message you want, then ban them from your events. Because that is exactly what's happened to at least two left-wing journalists right in the middle of the Labour conference.
Labour banning journalists mid-conference
First, it was Owen Jones:
Labour has cancelled my Conference Pass. Absolutely pathetic, Trumpian behaviour.
They are here suggesting that attempts to question Cabinet members and MPs about Britain facilitating Israel's genocide is a "safeguarding issue".
This is clearly insane. pic.twitter.com/2mDa8ORtuk
— Owen Jones (@owenjonesjourno) September 30, 2025
Then, it was Novara Media’s Rivkah Brown:
Weird, same here. At the same time as Owen, I received a similar email rescinding my media pass, due to an unspecified "breach of the event code of conduct".
Is Labour purging journalists it doesn't like? t.co/FqVBgkrc8D pic.twitter.com/uudOLAaEQo
— Rivkah Brown (@rivkahbrown) September 30, 2025
Now, the Canary isn’t one to cast aspersions. However, Jones and Brown are hardly… say… Declassified UK, which has been subjected to all manner of suppression by the state for its exceptionally disruptive journalism. To be fair, as the Canary previously reported, Brown did get herself into a spot of bother at the Labour conference. Or rather, Zionists targeted her with false claims of antisemitism.
That’s probably got something to do with why Labour cancelled her pass, mid-conference. For Jones, the reasons also appear to be Israel-related.
But hey – it could be worse, guys. You could be the Canary who, after being an established media outlet for 10 years, didn't even get a response from Labour to our application for a press pass. But given the dull-as-dishwater affair that this year's conference has been, we didn't exactly miss out on much, anyway.
Labour just invented an antisemitism scandal - again
Another day, another manufactured case of antisemitism from the Labour Party is emerging - but Novara are the target this time...Robert Freeman (The Canary)
Trump’s NSPM-7 Alarms Law Firms While Congress Is Silent
Washington’s biggest law firms are issuing memoranda on the implications of NSPM-7, Trump’s new national security directive, yet virtually no one in Congress has bothered to say a thing. What little the mainstream media have said about NSPM-7 has so far been wrong, often downplaying it.
Sources tell me that NSPM-7 will likely cause the FBI’s domestic terrorism watchlist, currently at about 5,000 U.S. citizens, to double in the coming months.
Last Thursday, President Donald Trump issued National Security Presidential Memorandum-7 (NSPM-7), titled “Countering Domestic Terrorism and Organized Political Violence.” It creates a national strategy to investigate, prosecute, and dismantle organized political violence and domestic terrorism, identifying the expression of “anti-Christian,” “anti-capitalism,” or “anti-American” views as indicators of a potential domestic terrorist. NSPM-7 directs the federal government to disrupt groups “before” their activities result in violent political acts. In other words, pre-crime.
Domestic terror watchlist to double, sources sayKen Klippenstein