





How 'Israel's' own "evidence" proved its lie


Following the murder, 'Israel' shared a document it claimed was from Hamas' Al-Qassam Brigades, intended to prove Al-Sharif's membership. This document, whose authenticity is vehemently denied by his family, Al Jazeera and international organizations, was meant to be the final word, a definitive justification for targeting a member of the press.

Even if one were to entertain the authenticity of this document for the sake of argument, its contents incriminate the 'Israeli' military completely. The document alleges that in early 2023, months before the October 7 attacks and the subsequent 'Israeli' assault on Gaza, Anas Al-Sharif was wounded in a training explosion. It goes on to detail the consequences of this incident, stating he was left with severe, debilitating injuries: "Severe hearing loss in the left ear + vision impairment in the left eye + dizziness and headaches."

The document’s own conclusion is unambiguous: as a result of these injuries, Anas Al-Sharif was deemed incapacitated and unfit for military service. This is not a footnote; it is the central point. By the logic of the very document 'Israel' presented to the world as justification, Anas Al-Sharif held zero military capacity or role during the entire period of the war in which he was killed.


in reply to Dessalines

CSI Miami!


Jan v1: 4B open model for web search with 91% SimpleQA, slightly outperforms Perplexity Pro


Jan v1 delivers 91% SimpleQA accuracy, slightly outperforming Perplexity Pro while running fully locally. It's built on the new version of Qwen's Qwen3-4B-Thinking (up to 256k context length), fine-tuned for reasoning and tool use in Jan.

The model runs in llama.cpp and vLLM, and uses serper-mcp to access the web: github.com/marcopesani/mcp-ser…

Model links:
* Jan-v1-4B: huggingface.co/janhq/Jan-v1-4B
* Jan-v1-4B-GGUF: huggingface.co/janhq/Jan-v1-4B…

Recommended parameters:

    temperature: 0.6
    top_p: 0.95
    top_k: 20
    min_p: 0.0
    max_tokens: 2048
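For anyone serving the model locally, these parameters map directly onto an OpenAI-compatible chat request (a sketch only: the port and model name are assumptions, and `top_k`/`min_p` are extensions accepted by llama.cpp's server rather than standard OpenAI fields):

```python
# Sketch: querying a locally served Jan-v1-4B through an OpenAI-compatible
# endpoint (llama.cpp's llama-server and vLLM both expose one).
import json
import urllib.request

payload = {
    "model": "Jan-v1-4B",  # assumed model name for illustration
    "messages": [{"role": "user", "content": "Summarize today's top story."}],
    # Recommended sampling parameters from the release notes:
    "temperature": 0.6,
    "top_p": 0.95,
    "top_k": 20,
    "min_p": 0.0,
    "max_tokens": 2048,
}

def ask(base_url: str = "http://localhost:8080") -> str:
    """POST the payload and return the assistant's reply text."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```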


What Happened When I Tried to Replace Myself with ChatGPT in My English Classroom


Like many teachers at every level of education, I have spent the past two years trying to wrap my head around the question of generative AI in my English classroom. To my thinking, this is a question that ought to concern all people who like to read and write, not just teachers and their students. Today’s English students are tomorrow’s writers and readers of literature. If you enjoy thoughtful, consequential, human-generated writing—or hope for your own human writing to be read by a wide human audience—you should want young people to learn to read and write. College is not the only place where this can happen, of course, but large public universities like UVA, where I teach, are institutions that reliably turn tax dollars into new readers and writers, among other public services. I see it happen all the time.

There are valid reasons why college students in particular might prefer that AI do their writing for them: most students are overcommitted; college is expensive, so they need good grades for a good return on their investment; and AI is everywhere, including the post-college workforce. There are also reasons I consider less valid (detailed in a despairing essay that went viral recently), which amount to opportunistic laziness: if you can get away with using AI, why not?

It was this line of thinking that led me to conduct an experiment in my English classroom. I attempted the experiment in four sections of my class during the 2024-2025 academic year, with a total of 72 student writers. Rather than taking an “abstinence-only” approach to AI, I decided to put the central, existential question to them directly: was it still necessary or valuable to learn to write? The choice would be theirs. We would look at the evidence, and at the end of the semester, they would decide by vote whether AI could replace me.

What could go wrong?


In the weeks that followed, I had my students complete a series of writing assignments with and without AI, so that we could compare the results.

My students liked to hate on AI, and tended toward food-based metaphors in their critiques: AI prose was generally “flavorless” or “bland” compared to human writing. They began to notice its tendency to hallucinate quotes and sources, as well as its telltale signs, such as the weird prevalence of em-dashes, which my students never use, and sentences that always include exactly three examples. These tics quickly became running jokes, which made class fun: flexing their powers of discernment proved to be a form of entertainment. Without realizing it, my students had become close readers.

During these conversations, my students expressed views that reaffirmed their initial survey choices, finding that AI wasn’t great for first drafts, but potentially useful in the pre- or post-writing stages of brainstorming and editing. I don’t want to overplay the significance of an experiment with only 72 subjects, but my sense of the current AI discourse is that my students’ views reflect broader assumptions about when AI is and isn’t ethical or effective.

It’s increasingly uncontroversial to use AI to brainstorm, and to affirm that you are doing so: just last week, the hosts of the New York Times’s tech podcast spoke enthusiastically about using AI to brainstorm for the podcast itself, including coming up with interview questions and summarizing and analyzing long documents, though of course you have to double-check AI’s work. One host compares AI chatbots to “a very smart assistant who has a dozen Ph.D.s but is also high on ketamine like 30 percent of the time.”




Wplace Is Exploding Online Amid a New Era of Youth Protest


Wplace is a desktop app that takes its cue from Reddit’s r/place, a sporadic experiment where users placed pixels on a small blank canvas every few minutes. On Wplace, anyone can sign up to add coloured pixels to a world map, each user able to place one every 30 seconds. By internet standards one pixel every 30 seconds is glacial, and that is part of what makes it so powerful. In the few weeks since its launch, tens, if not hundreds, of thousands of drawings have appeared.

Scrolling to my corner of Scotland, I found portraits of beloved pets, anime favourites, pride flags, football crests. In Kyiv, a giant Hatsune Miku dominates the sprawl alongside a remembrance garden where a user asked others to leave hand drawn flowers. Some pixels started movements. At one point there was just a single wooden ship flying a Brazilian flag off Portugal. Soon, a fleet appeared, a tongue-in-cheek invasion.

Across the diversity and chaos of the Wplace world map, nothing else feels like Gaza. In most cities, the art is made by those who live there. Palestinians do not have this opportunity: physical infrastructure is destroyed while people are murdered. Their voices, culture, and experiences are erased in real time. So, others show up for them, transforming the space on the map into a living mosaic of grief and care.

No algorithm, no leaders, but on Wplace, collective actions emerge organically. A movement stays visible only because people choose to maintain it, adding pixels, repairing any damage caused by others drawing over it. In that sense it works like any protest camp or memorial in the physical world: it survives only if people tend it. And here, those people are scattered across continents, bound not by geography but by a shared refusal to let what they care about disappear from view.



Grok Claims It Was Briefly Suspended From X After Accusing Israel of Genocide


in reply to return2ozma

It’s important to note that Grok is not a reliable source of information about why it was taken offline.


"but we're going to report it anyway" --rolling stone

in reply to frongt

Think about the number of articles they can write considering the infinity of random shit an LLM can output.
This entry was edited (1 month ago)


Is Meta Scraping the Fediverse for AI?






A new report from Dropsite News alleges that Meta is scraping a large number of independent sites for content to train its AI. What’s worse is that this scraping operation appears to completely disregard robots.txt, the file sites use to tell crawlers, search engines, and bots which parts of a site may be accessed and which should be avoided. It’s worth mentioning that such a file is only effective if the consuming software chooses to honor it, and not every piece of software does.
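For illustration, a minimal robots.txt asking AI crawlers to stay away might look like the following. The user-agent strings are examples these companies have publicly documented (Meta's AI crawler is reportedly named meta-externalagent), and, as noted, nothing forces a crawler to obey:

```
# robots.txt — asks compliant crawlers not to fetch anything.
# Example user-agent strings; crawlers are free to ignore this file.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: meta-externalagent
Disallow: /
```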

Meta Denies All Wrongdoing


Andy Stone, a communications representative for Meta, has gone on record claiming that the list is bogus and the story is incorrect. Unfortunately, the spread of Dropsite’s story has been relatively small, and there haven’t been any other public statements about the list at this time. This makes it difficult to adequately critique the initial story, but the concept is nevertheless a wake-up call.

However, it’s worth acknowledging Meta’s ongoing efforts to scrape data from many different sources. This includes user data, vast amounts of published books, and independent websites not part of Meta’s sprawling online infrastructure. Given that the Fediverse is very much a public network, it’s not surprising to see instances getting caught in Meta’s net.

Purportedly Affected Instances


The FediPact account has dug into the leaked PDF, and a considerable number of Fediverse instances appear on the list. The document itself is 1,659 pages of URLs, so we filtered down a number of matches based on keywords. Please keep in mind that these only account for sites that use a platform’s name in the domain:

  • Mastodon: 46 matches
  • Lemmy: 6 matches
  • PeerTube: 46 matches

There are likely considerably more unique domain matches in the list for a variety of platforms. Admins are advised to review whether their own instances are documented there. Even if your instance’s domain isn’t on the list, consider whether your instance is federating with something on the list. Due to the way federation works, cached copies of posts from other parts of the network can still show up on an instance that’s been crawled.

Access the Leaked List


We are mirroring this document for posterity, in case the original article is taken offline.

Download (PDF)

Protective Measures to Take


Regardless of the accuracy of the Dropsite News article, there’s an open question as to what admins can do to protect their instances from being scraped. Due to the nature of the situation, there is likely no singular silver bullet to solve these problems, but there are a few different measures that admins can take:

  • Establish Community Terms of Service – Establish a Terms of Service for your instance that explicitly forbids scraping for the purposes of data collection and LLM training. While it may have little to no effect on Meta’s own scraping efforts, it at least establishes a precedent and a paper trail for your server community’s expectations and consent.
  • Request Data Removal – Meta has a form buried within the Facebook Privacy Center that can be used to submit a formal complaint about instance data and posts being part of its AI training data. Whether Meta does anything with it is a matter of debate, but it’s nevertheless an option.
  • (EU-Only) Send a GDPR Form – Similar to the above step, but aims to get the request in front of Meta’s GDPR representatives, who have to deal with compliance.
  • Establish Blocking Measures Anyway – Even though private companies can choose to disregard robots.txt and HTTP headers such as X-Robots-Tag: noindex, you can still reduce your site’s attack surface with respect to AI agents that do honor those signals.
  • Set Up a Firewall – One popular software package seeing a lot of recent adoption for blocking AI traffic is Anubis, which has configurable policies you can adjust as needed to handle different kinds of traffic.
  • Use Zip Bombs – When all else fails, take matters into your own hands. On the server side, use an Nginx or Apache configuration to detect User-Agent strings associated with AI crawlers, and serve them ever-expanding compressed archives to slow them down.
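As a rough sketch of the User-Agent approach (a fragment for the http context of nginx.conf; the agent names are illustrative, so adjust the pattern to the crawlers you actually see in your logs), an Nginx server can simply refuse such requests outright instead of serving archives:

```nginx
# Match known AI crawler User-Agent strings (example pattern, not a complete list).
map $http_user_agent $is_ai_crawler {
    default                                 0;
    ~*(GPTBot|ClaudeBot|meta-externalagent) 1;
}

server {
    listen 80;
    server_name example.instance;

    # Close the connection without sending any response.
    if ($is_ai_crawler) {
        return 444;
    }
}
```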

In reality, fighting AI scraping is still a relatively new problem, complicated by a lack of clear regulation and by companies deciding to do whatever they want. The best we can do for our communities is to adopt protective measures and stay informed of new developments in the space.





Perplexity offers to buy Google Chrome for $34.5 billion


The unsolicited offer is higher than Perplexity’s valuation.
in reply to LCP

I’ll offer 35 billion, considering the entire economy at this point is made-up, inflated bullshit.



Open Lemmy comment threads in Mastodon?


Since both Lemmy and Mastodon use the Fediverse, is it possible to view the comment threads under Lemmy posts in Mastodon? How do I find a link that works in both? Is it related to the post’s ID?

Would these work with #hashtags ?

This entry was edited (1 month ago)
in reply to scratsearcher 🔍🔮📊🎲

For example here is a Lemmy thread: discuss.tchncs.de/post/4196495…

Here is the same thread on Mastodon: floss.social/@kde/114960515064…

So it is possible if it has been federated to both. There are different reasons why that might happen, in this case it is because that thread's OP posted it on Mastodon but mentioned a Lemmy community.

Another reason why it might happen is that a Mastodon user is following a Lemmy community or user.


"This Week in Plasma" brings the news that Plasma 6.5 will have automatic day/night theme switching, that you can choose which Global Themes to show on the Quick Settings page, and that you can set dynamic wallpaper coloration to be based on the background color scheme or the time of day, or always light, or always dark.

blogs.kde.org/2025/08/02/this-…

@kde@lemmy.kde.social

#Plasma6 #OpenSource #FreeSoftware #desktop


in reply to scratsearcher 🔍🔮📊🎲

I saw this post on Akkoma via #Fediverse and answered it. Another person from dot social on Mastodon also commented on it. It’s weird that those comments can’t be read here in the post. I’ve tried to comment from there before and it seems to work. So I’m not sure what happens when you interact from outside of Lemmy.

Links to comments fe.disroot.org/notice/Ax6QMkVf…
mastodon.social/@ambuj/1150218…

This entry was edited (1 month ago)



UK Asks People to Delete Emails In Order to Save Water During Drought






It’s a brutally hot August across the world, but especially in Europe, where high temperatures have caused wildfires and droughts. In the UK, the water shortage is so bad that the government is urging citizens to help save water by deleting old emails. It really helps lighten the load on water-hungry datacenters, you see.

The suggestion came in a press release posted on the British government’s website Tuesday after a meeting of its National Drought Group. The release gave an update on the status of the drought, which is bad. The Wye and Ely Ouse rivers are at their lowest recorded levels, and “five areas are officially in drought, with six more experiencing prolonged dry weather following the driest six months to July since 1976,” according to the release. It also listed a few tips to help people save on water.
The tips included installing a rain butt to collect rainwater for gardening, fixing leaks the moment they happen, taking shorter showers, and getting rid of old data. “Delete old emails and pictures as data centres require vast amounts of water to cool their systems,” the press release suggested.

Datacenters suck up an incredible amount of water to keep their delicate equipment cool. The hotter it is, the more water they use, and a heatwave spikes the cost of doing business. But old emails lingering in cloud servers are a drop in the bucket for a data center compared to processing generative AI requests.

A U.S. Government Accountability Office report from earlier this year estimated that 60 queries of an AI system consume about a liter of water, or roughly 1.67 Olympic-sized swimming pools for the 250,000,000 queries generated in the U.S. every day. The World Economic Forum has estimated that AI datacenters will consume up to 1.7 trillion gallons of water every year by 2027. OpenAI CEO Sam Altman has disputed these estimates, saying that an average ChatGPT query uses “roughly one fifteenth of a teaspoon” of water.
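The arithmetic behind those figures is easy to check, assuming a 2,500 m³ (2.5-million-liter) Olympic pool:

```python
# Rough check of the GAO estimate quoted above.
QUERIES_PER_DAY = 250_000_000  # US daily AI queries, per the report
QUERIES_PER_LITER = 60         # ~60 queries consume about a liter
POOL_LITERS = 2_500_000        # an Olympic pool holds ~2,500 cubic meters

liters_per_day = QUERIES_PER_DAY / QUERIES_PER_LITER
pools_per_day = liters_per_day / POOL_LITERS

print(round(liters_per_day))    # ≈ 4,166,667 liters per day
print(round(pools_per_day, 2))  # ≈ 1.67 pools per day
```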

Downing Street announced plans in January to “turbocharge AI” in the U.K. The plan includes billions of pounds earmarked for the construction of massive water-hungry datacenters, including a series of centers in Wales that will cost about $16 billion. The announcement about the AI push said it will create tens of thousands of jobs. It doesn’t say anything about where the water will come from.

In America, people are learning that living next to these massive AI data centers is a nightmare that can destroy their air and water quality. People who live next to massive Meta-owned datacenters in Georgia have complained of a lack of water pressure and diminished quality since the data centers moved in. In Colorado, local government and activists are fighting tech companies attempting to build massive data centers in a state that struggled with drought before the water-hungry machines moved in.

Like so many other systemic issues linked to climate change and how people live in the 21st century, small-scale personal solutions like “delete your old emails” won’t solve the problem. The individual water bill for a person’s old photos is nothing compared to the gallons of water required by large corporate clients running massive computers.

“We are grateful to the public for following the restrictions, where in place, to conserve water in these dry conditions,” Helen Wakeham, the UK Environment Agency’s Director of Water, said in the press release. “Simple, everyday choices—such as turning off a tap or deleting old emails—also really helps the collective effort to reduce demand and help preserve the health of our rivers and wildlife.”

Representatives from the UK Government did not immediately return 404 Media’s request for comment.




Is Astute Graphics plugin 40MB or 678MB?


Edit: It seems that it may be 40MB, and that the other 629MB is from the Texturino plugin that generally gets bundled with it. I believe they are just two separate direct downloads. Not sure why there would be inconsistencies in the file size, though (669MB vs 678MB).

Note: I am not requesting a link or a source; I just want to know if I am direct-downloading the correct file. Specifically, is the bundle supposed to be 40MB or 678MB?

I found that torrented versions are 678MB, but direct-downloaded versions are only 40MB. motka (dot) net (from the megathread) had one at 678MB, but sadly the download is a 404.

Also, is the latest version 3.9.1? I see direct-download ones showing up as 4.1.0 and 4.2.0 (which doesn't seem right to me).

Thank you.

This entry was edited (1 month ago)
in reply to Yourname942

40MB can't be it. Check rsload. I gave some details in your other post.
This entry was edited (1 month ago)


Your CV is not fit for the 21st century


The job market is queasy, and since you're reading this, you need to upgrade your CV. It's going to require some work to game the poorly trained AIs now doing so much of the heavy lifting. I know you don't want to, but it's best to think of this as dealing with a buggy lump of undocumented code, because frankly that's what stands between you and your next job.

A big reason for the bias in so many AIs is that they are trained on the way things are, not the way we'd like them to be. And because they are just expensively trained statistics, your new CV needs to give them the words most commonly associated with the job you want, not merely the correct ones.

That's going to take some research and a rewrite to get your CV looking like those the AI was trained to match. You need to add synonyms and dependencies, because the AIs lack any model of how we actually do IT; they only see correlations between words. One would hope a network engineer knows how to configure routers, but if you just say Cisco, the AI won't give it as much weight as when you say both. Nor can you assume it will work out that you actually did anything to the router, database, or code, so you need to say explicitly what you did.

Fortunately your CV does not have to be easy to read out loud, so there is mileage in including the longer names of the more relevant tools you've mastered. Awful phrases like "configured Fortinet FortiGate firewall" are helpful if you say them once, as is using all three F-words elsewhere. This works well against the old-fashioned simple buzzword matching still widely used.
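To see why spelling everything out pays off, here is a toy version of that buzzword matching (the keyword list and weights are invented for illustration; real screeners are more elaborate, but the incentive is the same):

```python
# Toy CV screener: score a CV by counting weighted keyword hits.
# Every synonym you omit is a match the screener never sees.
KEYWORDS = {"cisco": 2, "router": 2, "fortinet": 1, "fortigate": 1, "firewall": 1}

def score(cv_text: str) -> int:
    """Sum the weights of keywords that appear in the CV text."""
    words = cv_text.lower().split()
    return sum(weight for kw, weight in KEYWORDS.items() if kw in words)

terse = "Managed Cisco estate"
explicit = "Configured Cisco router ACLs and a Fortinet FortiGate firewall"

print(score(terse))     # 2
print(score(explicit))  # 7
```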


This is all so fucked.


in reply to cyborganism

I spent about a decade as a KDE developer.

KDE has this mindset where if someone wants to implement something they think is cool, and the code is clean and mostly bug free, well -- have at it! Ever wonder why there's 300 options for everything?

Usually (because there's a bunch of people trying to optimize the core for speed and load times and such) this also means that unused code paths are required not to contribute negatively to things like load times. So a plugin like this doesn't get loaded by default unless enabled, and thus doesn't harm everyone else's performance. It also means that if it stops working in the future and starts to bitrot, it can be dropped without affecting the core code.

reshared this



Protest footage blocked as Online Safety Act comes into force




Intel collapsing?


Starting to see a lot of worried people as Intel declines rapidly. The way this is going reminds me of Nokia...

https://www.youtube.com/watch?v=cXVQVbAFh6I

in reply to 3dcadmin

Of course Intel will collapse within the next 10 years.

They have focused exclusively on high-end, very expensive processors in the past. Now that Moore's law no longer holds, that doesn't work anymore, because ARM chips are catching up in performance at 1/10 of the price.

in reply to gandalf_der_12te

Whilst true, AMD are doing just fine by being fabless. I can't really see x86 going away as soon as you say, for many reasons.


What are your thoughts about Eprivo email app and their privacy services?


This is not to promote the product. I merely came across it and couldn't find any reviews except those on Google Play. I use Android, and as much as I hate iOS, its email app is very consistent regardless of whether you use their .mac email or Gmail. On Android, it is very difficult to find a decent email app. I'd been on FairEmail for quite a while until recently, when I had sync problems.

So I dug around and found "EPRIVO - Encrypted email and chat". It was a surprise, because I am constantly on the lookout for a good email app (and browser!) on Android. Usually on Google Play you will see Gmail, Thunderbird, Proton, Outlook, Edison, FairEmail, etc. I had never seen Eprivo before.

Anyway, I tested it out on a Gmail account. The app works quite well; here is what I learned:

1) You are forced to create a blanket Eprivo account. This takes like 10 seconds. This Eprivo account is then used to get you access to the email app. You can use any email account within it: Gmail, Yahoo. I use Gmail and it works well.

2) The privacy features are interesting. You can do a lot of stuff, like prevent forwarding, set a timer so an email can only be read once, password-protect emails, etc. I also used Proton in the past, and there these features are exclusive to a .proton account. In this app, I can use some of them, such as setting the timer on an email. To get the full set of private features, you need to create an Eprivo email address (very easy to create within the app). So you will have something like abc@eprivovip.com.

3) Prices are surprisingly cheap: 5 bucks / year.

4) They advertise themselves not as an email service but, to my understanding, as a "privatized email service". So it is like a privacy layer on top of your existing email.

Any thoughts?

in reply to mazzilius_marsti

1) most of that is bullshit and the rest is horseshit.
2) sending email involves metadata that can and will be scraped (from, to, subject, etc.).
3) if you want the contents of an email secured, use age or gnupg to create an encrypted message using your recipient’s public key, and post that in your email to them.
4) If you want secured emails from other people, then you need to securely give them a copy of your public key in a manner that resists man-in-the-middle attacks.
5) once sent, you lose all control: you can’t unsend, delete, or limit what the recipient does with it.
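A minimal sketch of point 3 with GnuPG (using a throwaway keyring so the example is self-contained; in practice the recipient generates the key on their own machine and sends you only the public part):

```shell
# Demo in a throwaway GnuPG home so it won't touch your real keyring.
export GNUPGHOME="$(mktemp -d)"

# The recipient would normally do this on their own machine:
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Alice <alice@example.com>' default default never 2>/dev/null

# Sender: encrypt to the recipient's public key, then paste message.asc into the email body.
echo 'meet at noon' > message.txt
gpg --batch --quiet --armor --trust-model always \
    --encrypt -r alice@example.com -o message.asc message.txt

# Recipient: decrypt with their private key.
gpg --batch --quiet --decrypt message.asc 2>/dev/null
```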


Getting blocked accessing a site by default


So I don't live in the UK, I don't have a VPN, and there is no child-safety stuff like this in my country, but today I saw this while accessing this site. Is there any way to bypass it without a VPN? I use my Android phone's hotspot for my laptop's internet, which heats the phone up and drains its battery fast.
in reply to omniman


Nice, they list all the cool sites for free movies in the lawsuit 🤣🤣

You can download the full document from here (I think, because they said it was a one-time link).

Supporting document for court order

This entry was edited (1 month ago)


CORS error when calling /api/v3/users with Authorization header in local setup



Hi NodeBB team,

I have NodeBB running locally on my machine:

NodeBB version: v3.12.7

Environment: Local development

Frontend: React (Vite) running on http://localhost:5173

Backend (NodeBB) running on http://localhost:4567

I’m trying to create a user via the API:

async function registerUser() {
  try {
    // POST to the NodeBB Write API (api/v3/users) with a bearer token from the ACP.
    const res = await fetch(`${import.meta.env.VITE_API_URL}v3/users`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // This header is what triggers the CORS preflight (OPTIONS) request.
        "Authorization": `Bearer ${import.meta.env.VITE_TOKEN}`
      },
      body: JSON.stringify(formData),
    });

    if (!res.ok) {
      throw new Error(`HTTP error! Status: ${res.status}`);
    }

    const data = await res.json();
    console.log("User registered successfully:", data);
  } catch (error) {
    console.error("Error registering user:", error);
  }
}

Question:
How can I correctly configure NodeBB in development so that it allows the Authorization header in API requests?
Even after setting Access-Control-Allow-Headers in the ACP, the browser still fails at the preflight request.
Do I need a plugin or middleware to handle CORS for API v3 routes?
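One workaround I'm considering in the meantime is to sidestep the preflight entirely during development by routing API calls through Vite's dev-server proxy, so the browser only ever talks to localhost:5173 (a sketch; the /api prefix is an assumption to match my VITE_API_URL):

```javascript
// vite.config.js — dev-only proxy so API requests are same-origin.
// Requests to http://localhost:5173/api/* are forwarded to NodeBB on :4567.
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    proxy: {
      "/api": {
        target: "http://localhost:4567",
        changeOrigin: true,
      },
    },
  },
});
```

With this in place, VITE_API_URL can point at /api/ and no cross-origin request (or preflight) occurs, but I'd still like to know the correct CORS configuration for production.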
in reply to balu

Re: CORS error when calling /api/v3/users with Authorization header in local setup


balu can you confirm that the response you receive in the Vite app indeed contains the restrictive ACAO headers irrespective of what is set in the ACP?



The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con


Found this on Bluesky and thought it was a fascinating read. Intentionally or not, LLMs appear to be mimicking the techniques used by con men, leading users to think they are 'intelligent'.


How Big Cloud becomes Bigger: Scrutinizing Google, Microsoft, and Amazon's investments


Full Report.

In an AI gold rush, those selling the proverbial pickaxes are surest to win: cloud companies provide scalable managed computational resources as a subscription service now used by most businesses to store their data, and as a primary ingredient to build and use AI. Just three companies—Amazon, Microsoft, and Google—control two thirds of global cloud compute market share, collectively comprising “Big Cloud.” This highly concentrated market raises concerns regarding digital sovereignty, slowed innovation, and a concentration of corporate power.

In this report, we explore an underrecognized manner in which AI ecosystems increasingly depend on Big Cloud: Big Cloud’s investment in other companies. We show how Big Cloud companies are prolific investors widely deploying hundreds of billions of dollars over thousands of deals, often in smaller, lesser-known startups. We find that:
1. While some regulators have begun to scrutinize the largest of these deals—such as Microsoft’s investment in OpenAI or Google and Amazon in Anthropic—the ecosystem-wide scale of this investment is hard to overstate: Big Cloud invests as frequently and at similar amounts to the largest venture capital firms and startup accelerators. Further, Big Cloud invests about ten times as often as other Big Tech companies, and ten to a hundred times more in total dollar amounts.
2. Via accelerator programs, Big Cloud companies lock startups into their cloud infrastructure. Big Cloud ensnares young startups in its cloud ecosystem via cloud credits, while requiring that startups use the cloud company’s other tech and incentivizing strategies with particularly heavy cloud needs, such as generative AI.
3. More so than when other Big Tech companies or VC firms invest, startups funded by Big Cloud are more likely to rely on Big Cloud as their lead or sole investor. These relationships allow Big Cloud to exercise significant influence over startups and bend them to their interests.
4. Amid concerns that vertical integration may give one firm too much control over AI supply chains—such as chips, cloud, or data—our work shows that Big Cloud is investing in a way that brings many of the same risks as conventional forms of vertical integration: when Big Cloud invests in an AI supply chain company—such as a Data, X-as-a-Service, or Internet infrastructure company—that company is often more likely to be dependent on that Big Cloud company as their sole or lead investor, compared with other investors.
5. Intensifying concerns about threats to global digital sovereignty, we find that American Big Cloud companies make global investments at a far greater pace than other investors we compare against. Just over half of all Big Cloud investments are made internationally, about twice the frequency of large VCs, top accelerators and other Big Tech companies. Big Cloud also invests through accelerators abroad much more often than at home, highlighting the need for global regulatory scrutiny of startup accelerator programs.

While these practices merit creative regulatory and policy responses, we emphasize that such interventions should proceed in light of the following overarching implications:
— Dependence on Big Cloud is not just technical or contractual. It is also financial, as a source of investment. This compounds the need for structural separation: Amazon, Google, and Microsoft must be compelled to split their cloud businesses from their other businesses that run on the cloud, per past calls, so that they do not both provide infrastructure and compete with the customers and investees relying on that infrastructure.

— Big Cloud companies are huge investors, which sets them apart from all other large tech companies. Any one of these investments may be small and insignificant, but they cumulatively shape the startup and developer ecosystem in Big Cloud companies’ interest. Thus, in addition to “deal by deal” scrutiny, in which only the largest deals receive attention, regulators and researchers should monitor and scrutinize these investments and their effects in an ecosystem-wide, cumulative, and ongoing manner.



First 3D printed titanium rocket fuel tank can handle 330 bar pressure under -196°C | by Korea Institute of Industrial Technology


South Korean researchers have achieved a major milestone in space manufacturing: the world's first 3D-printed titanium fuel tank has passed extreme cryogenic pressure testing, a breakthrough that could transform how spacecraft components are produced.

The 640 mm diameter tank, manufactured from Ti64 titanium alloy via Directed Energy Deposition (DED) 3D printing, withstood pressures of 330 bar while cooled to -196°C with liquid nitrogen during testing at the Korea Aerospace Research Institute (KARI). The test exposed the tank to pressures roughly 165 times greater than standard tire pressure, demonstrating its reliability under the extreme conditions of space missions.
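As a quick sanity check of the article's comparison, a minimal Python sketch; the ~2 bar figure for typical passenger-car tire pressure is my assumption, not stated in the article:

```python
# Sanity-check the "165 times standard tire pressure" claim.
# Assumption: "standard tire pressure" ≈ 2 bar (~29 psi), a common
# passenger-car value; this figure does not appear in the article.
BAR_TO_PSI = 14.5038

tank_pressure_bar = 330.0   # test pressure reported by KARI
tire_pressure_bar = 2.0     # assumed typical tire pressure

ratio = tank_pressure_bar / tire_pressure_bar
print(f"{tank_pressure_bar:.0f} bar ≈ {tank_pressure_bar * BAR_TO_PSI:.0f} psi")
print(f"ratio vs. tire pressure: {ratio:.0f}x")  # matches the article's 165x
```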




Let's Stop Chat Control






in reply to FauxLiving

So we must censor everything to protect us from misinformation, which allows the censors to determine what is available and what is not.

Sounds an awful lot like China.

Geez Brits. One shit decision after another. Just like your western children.

US: Father, why did you vote for Brexit?

UK: Son, who are you to talk? You voted for Trump twice. Now shut up before your mother chimes in...

France: No wonder I took the house in the divorce and left you with your father.

US: Well at least I didn't abandon my affair baby Haiti.

France:...

UK: Did you really have to go there son?

in reply to Devolution

So we must censor everything to protect us from misinformation, which allows the censors to determine what is available and what is not.


Yeah, I think this is a terrible way to address the problem and very likely a way for elites to re-assert their control over information sources using this emergency.

It's certainly not about 'protecting children' in the way that they're presenting it.



Cybersecurity ‘red teams’ to UK government: AI is rubbish


pivottoai.libsyn.com/20250811-… - podcast
- video
in reply to BlueMonday1984

The problem is that to start breaking encryption you need quantum computing with a bunch of qubits as originally defined and not "our lawyer signed off on the claim that we have 1000 qubits".
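For a sense of scale, a hedged back-of-the-envelope Python sketch: one well-known circuit construction (Beauregard, 2003) needs about 2n+3 logical qubits to run Shor's algorithm against an n-bit RSA modulus, and marketing "qubit counts" rarely refer to logical qubits at all:

```python
# Logical-qubit estimate for Shor's algorithm using the 2n+3 circuit
# (Beauregard, 2003). These are *logical* qubits; fault-tolerant hardware
# would need orders of magnitude more physical qubits for error correction.
def shor_logical_qubits(modulus_bits: int) -> int:
    return 2 * modulus_bits + 3

for bits in (1024, 2048, 4096):
    print(f"RSA-{bits}: ~{shor_logical_qubits(bits)} logical qubits")
```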



Androidic disappearance of the spider icons (a glitch where web shortcuts vanish from the launcher)


Every time I think, presume, believe I tremendously hate Android… I invariably discover that my hatred is still lower than it really should be for this literally bug-ridden, problem-infested, shit-caked operating system, because holy fuck… THE FUCKING SHORTCUTS DISAPPEAR FROM THE HOME SCREEN!!! Something this pestilent never happened even on Windows, or on any other system that notoriously never works well… Maybe this, the disappearing icons, is what fanboys secretly mean when they say that "on Android anything is possible"…??? 😫

More precisely, a few weeks ago I was complaining (again) about this problem whereby sometimes, at random, whenever it feels like it, strangely not always (in short, it's not even a bug that's 100% reproducible), after a system reboot all (totalitarian bastard…) my shortcuts to websites, both normal ones and PWAs, disappear from the home screen… but never any shortcut to a native app. At that point, though, I had taken note of the crap, immediately pinned all the blame on the MIUI launcher (since I don't remember this happening on other devices or other ROMs), and so I installed Lawnchair (using the terminal to bypass the launcher-switching block), thereby losing 20 social credits… 💔

Too bad it happened again this morning, after I had to reboot the phone because, out of nowhere while I was using it, it started glitching, with the app I had in the foreground crashed and the home screen showing black instead of loading the launcher… so, deep down, Xiaomi wasn't to blame after all, and so once again the deduction of my social credits proves justified. Indeed, searching a bit harder than last time, I found reports of this exact glitch on the web… and it really doesn't seem tied to Xiaomi at all. 🙀

I'd be tempted to think I never noticed this nuisance in the distant past only because I didn't use so many webapps… but on the tablet I currently have roughly the same number of shortcuts as on the phone, and yet I have still never (never!!!) seen any forced removals there (yet…). In any case, one fact remains: this is a fucking problem, because I NEED webapp shortcuts! The name itself says why: they're short-cuts, they exist to cut things short, which is especially necessary on a device that's already laggy… and besides, webapps have to be installed on the home screen to work as PWAs in the first place, i.e. to get their own card in the multitasking screen and not have the browser UI at the top stealing space. 😵

The strangest thing is that, at this point, I can't figure out what is actually causing the problem. Ruling out launcher and ROM, I'd suspect the browser itself (or part of it, in the case of a fork), even though an Android app by itself has no way to make its own shortcuts disappear… but, although most of the time the icons that vanish are Mulch's (Chromium), I'm fairly sure I've also seen the few Firefox ones I still had around get shipped off to the realm of shadows, one (1) time during that week I spent cringing with launchers, when my default had inexplicably reset from Lawnchair back to the MIUI one after a reboot. The Intent string for Chromium shortcuts is perfectly identical to any deep link for any native app, except that the internal name is a UUID instead of the Activity name repeated, so why the fuck do they disappear??? 🌋

A phone with the apple stamped on the back, obviously, I can't afford, even though (at least, I think; whether my aura is capable of making bugs pop up in everything is another story) over there the icons don't fucking vanish from the home screen from one moment to the next… so, what the fuck do I do??? I don't know. Apart from maybe trying Edge (sigh) on the phone too, not just the tablet, since they've changed so much stuff in that thing that maybe the bug isn't there… I could only dust off the idea of browserocto, that minimal little browser made mainly for webapps that was never finished… But anyway, WHAT A PAIN, heaven forbid things ever just work! 😭

#Android #launcher #scorciatoie #shortcuts #webapp #webapps





📊 Only 6% of plastic is used for clothing. So why does fashion pollute so much?


A UNDP study reveals that only 6% of global plastic ends up in clothing, versus 31% for packaging and 16% for construction. Yet fashion is often considered one of the most polluting industries in the world. How come?

🔍 What doesn't that 6% tell us?

The UNDP data measures only plastic as a raw material, but fashion's impact goes well beyond that:

• 🌊 Water pollution: 20% of industrial water pollution comes from textile dyeing (World Bank). Synthetic fabrics (e.g. polyester) shed microplastics, responsible for 35% of microplastic pollution in the oceans (IUCN).

• ☁️ Emissions: Fashion produces 4-10% of global CO₂ emissions (more than aviation and shipping combined, UNEP).

• 🗑️ Waste: Every second, a truckload of clothing is landfilled or burned (Ellen MacArthur Foundation). Less than 1% gets recycled.

• 💧 Resources: A cotton T-shirt requires 2,700 litres of water (WWF).

🏆 Is fashion really the 2nd most polluting industry?

It depends on the study:

• 1st place: Oil and gas.

• 2nd place: Some rank fashion here for its combined damage (water, CO₂, waste). Others place it after agriculture or livestock farming.

Conclusion

That 6% is just the tip of the iceberg. Fashion's pollution comes from the entire cycle: production, use, disposal. We need systemic change, not just a replacement for polyester.

What can we do?

• Support circular fashion.

• Buy less, wear longer.

• Demand transparency from brands.

📌 Source: UNDP | Ellen MacArthur Foundation

If you're interested, I explored this topic in more depth in an article on my blog: Feedback welcome!

🔗🇮🇹 Solo il 6% della produzione di plastica è destinato all’abbigliamento: perché allora la moda è tra i maggiori inquinatori?

🔗🇬🇧 Only 6% of plastic production goes to clothing—so why is fashion a top polluter?




Colombian presidential candidate Miguel Uribe dies two months after being shot at campaign event


The right-wing presidential candidate was shot at a campaign event in June


Archived version: archive.is/newest/peoplesdispa…


Disclaimer: The article linked is from a single source with a single perspective. Make sure to cross-check information against multiple sources to get a comprehensive view on the situation.



good ideas about how to give more user agency in a scrolling list of content


the video is a bit obnoxious in its presentation but the ideas contained within seem like they would actually be super nice

https://www.youtube.com/watch?v=HqYIUV7Eqjk




Coordinated network amplifies child sex abuse on X, researchers warn