


Is Meta Scraping the Fediverse for AI?






A new report from Dropsite News claims that Meta has been scraping a large number of independent sites for content to train its AI. What’s worse is that this scraping operation appears to completely disregard robots.txt, the control file used to tell crawlers, search engines, and bots which parts of a site may be accessed and which should be avoided. It’s worth mentioning that the efficacy of such a file depends on the consuming software honoring it, and not every piece of software does.
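For reference, a robots.txt file lives at the site root and pairs a User-agent with Allow/Disallow rules. A minimal sketch that opts out of AI crawling might look like the following (the crawler names are illustrative examples; check each vendor's documentation for its current user-agent strings):

```text
# /robots.txt — ask known AI crawlers to stay away site-wide
User-agent: meta-externalagent
Disallow: /

User-agent: GPTBot
Disallow: /

# Everyone else may crawl normally
User-agent: *
Allow: /
```

Again, nothing enforces this: a rule here is a request, not a barrier, which is exactly the problem the report describes.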

Meta Denies All Wrongdoing


Andy Stone, a communications representative for Meta, has gone on record claiming that the list is bogus and the story is incorrect. Unfortunately, Dropsite’s story has had relatively little reach, and there have been no other public statements about the list at this time. This makes it difficult to adequately critique the initial story, but the concept is nevertheless a wake-up call.

However, it’s worth acknowledging Meta’s ongoing efforts to scrape data from many different sources. This includes user data, vast amounts of published books, and independent websites not part of Meta’s sprawling online infrastructure. Given that the Fediverse is very much a public network, it’s not surprising to see instances getting caught in Meta’s net.

Purportedly Affected Instances


The FediPact account has dug into the leaked PDF, and a considerable number of Fediverse instances appear on the list. The document itself is 1,659 pages of URLs, so we filtered it down to a number of matches based on keywords. Please keep in mind that these only account for sites that use a platform’s name in the domain:

  • Mastodon: 46 matches
  • Lemmy: 6 matches
  • PeerTube: 46 matches
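The keyword filtering described above can be sketched in a few lines (the keywords and sample URLs below are illustrative; the real input is the 1,659-page PDF):

```python
# Count unique hostnames in a URL list that contain a platform's name.
from urllib.parse import urlparse

KEYWORDS = ("mastodon", "lemmy", "peertube")

def count_matches(urls):
    hosts = {k: set() for k in KEYWORDS}
    for url in urls:
        host = urlparse(url).hostname or ""
        for k in KEYWORDS:
            if k in host:
                hosts[k].add(host)
    # Report the number of distinct matching domains per platform
    return {k: len(v) for k, v in hosts.items()}

sample = ["https://mastodon.social/a", "https://lemmy.world/b",
          "https://peertube.tv/c", "https://example.com/"]
print(count_matches(sample))  # {'mastodon': 1, 'lemmy': 1, 'peertube': 1}
```

Matching on the hostname rather than the whole URL avoids counting the same instance once per listed page.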

There are likely considerably more unique domain matches in the list for a variety of platforms. Admins are advised to review whether their own instances are documented there. Even if your instance’s domain isn’t on the list, consider whether your instance is federating with something on the list. Due to the way federation works, cached copies of posts from other parts of the network can still show up on an instance that’s been crawled.

Access the Leaked List


We are mirroring this document for posterity, in case the original article is taken offline.

Download (PDF)

Protective Measures to Take


Regardless of the accuracy of the Dropsite News article, there’s an open question as to what admins can do to protect their instances from being scraped. Due to the nature of the situation, there is likely no singular silver bullet to solve these problems, but there are a few different measures that admins can take:

  • Establish Community Terms of Service – Write a Terms of Service for your instance that explicitly calls out scraping for the purposes of data collection and LLM training. While it may have little to no effect on Meta’s own scraping efforts, it at least establishes a precedent and a paper trail for your server community’s expectations and consent.
  • Request Data Removal – Meta has a form buried within the Facebook Privacy Center that can be used to submit a formal complaint about instance data and posts being part of its AI training data. Whether Meta does anything with it is a matter of debate, but it’s nevertheless an option.
  • (EU-Only) Send a GDPR Form – Similar to the above, but aimed at getting the request in front of Meta’s GDPR representatives, who have to deal with compliance.
  • Establish Blocking Measures Anyway – Even though companies can choose to disregard things like robots.txt and HTTP headers such as X-Robots-Tag: noindex, you can still reduce your site’s exposure to the AI agents that do honor them.
  • Set Up a Firewall – One popular software package seeing a lot of recent adoption for blocking AI traffic is Anubis, which has configurable policies you can adjust as needed to handle different kinds of traffic.
  • Use Zip Bombs – When all else fails, take matters into your own hands. On the server side, use an Nginx or Apache configuration to detect User-Agent strings associated with AI crawlers, and serve them ever-expanding compressed archives to slow them down.
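As a rough sketch of the last idea, an Nginx configuration along these lines matches suspect User-Agent strings and routes them to a pre-built compressed archive (the bot names and file path are assumptions, and you would generate bomb.gz yourself, e.g. by gzipping a very large file of zeros):

```nginx
# Flag common AI crawler User-Agents (illustrative list)
map $http_user_agent $ai_bot {
    default                 0;
    ~*meta-externalagent    1;
    ~*GPTBot                1;
    ~*CCBot                 1;
}

server {
    listen 80;

    location / {
        # Send flagged bots to the decompression trap
        if ($ai_bot) { rewrite ^ /trap last; }
        # ... normal site config ...
    }

    location = /trap {
        # Serve a pre-built, highly compressed file as if it were
        # ordinary gzipped HTML; the client expands it in memory.
        default_type text/html;
        add_header Content-Encoding gzip;
        alias /var/www/bomb.gz;
    }
}
```

Keep the trap in its own location block so legitimate traffic never touches it, and be aware that user-agent strings are trivially spoofed.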

In reality, fighting AI scraping is still a relatively new problem, complicated by a lack of clear regulation and by companies deciding to do whatever they want. The best we can do for our communities is to adopt protective measures and stay informed of new developments in the space.





Perplexity offers to buy Google Chrome for $34.5 billion


The unsolicited offer is higher than Perplexity’s valuation.
in reply to LCP

I'll offer 35 billion, considering the entire economy at this point is made up of inflated bullshit.



Open Lemmy comment threads in Mastodon?


Since both Lemmy and Mastodon use the Fediverse, is it possible to view comment threads under Lemmy posts from Mastodon? How do I find a link that works in both? Is it related to the post's ID?

Would these work with #hashtags ?

in reply to scratsearcher 🔍🔮📊🎲

For example here is a Lemmy thread: discuss.tchncs.de/post/4196495…

Here is the same thread on Mastodon: floss.social/@kde/114960515064…

So it is possible if it has been federated to both. There are different reasons why that might happen, in this case it is because that thread's OP posted it on Mastodon but mentioned a Lemmy community.

Another reason why it might happen is that a Mastodon user is following a Lemmy community or user.


"This Week in Plasma" brings the news that Plasma 6.5 will have automatic day/night theme switching, that you can choose which Global Themes to show on the Quick Settings page, and that you can set dynamic wallpaper coloration to be based on the background color scheme or the time of day, or always light, or always dark.

blogs.kde.org/2025/08/02/this-…

@kde@lemmy.kde.social

#Plasma6 #OpenSource #FreeSoftware #desktop


in reply to scratsearcher 🔍🔮📊🎲

I saw this post on Akkoma via #Fediverse and answered it. Another person, from dot social on Mastodon, also commented on it. It's weird that those comments can't be read here in the post. I've tried commenting from there before and it seems to work, so I'm not sure what happens when you interact from outside Lemmy.

Links to comments fe.disroot.org/notice/Ax6QMkVf…
mastodon.social/@ambuj/1150218…



AI Is a Total Grift







in reply to chobeat

They steal intellectual Property and labor and pass it off as their own. AI is grift.





UK Asks People to Delete Emails In Order to Save Water During Drought






It’s a brutally hot August across the world, especially in Europe, where high temperatures have caused wildfires and droughts. In the UK, the water shortage is so bad that the government is urging citizens to help save water by deleting old emails. It really helps lighten the load on water-hungry data centers, you see.

The suggestion came in a press release posted on the British government’s website Tuesday after a meeting of its National Drought Group. The release gave an update on the status of the drought, which is bad. The Wye and Ely Ouse rivers are at their lowest ever recorded height and “five areas are officially in drought, with six more experiencing prolonged dry weather following the driest six months to July since 1976,” according to the release. It also listed a few tips to help people save on water.
The tips included installing a rain butt to collect rainwater for gardening, fixing leaks the moment they happen, taking shorter showers, and getting rid of old data. “Delete old emails and pictures as data centres require vast amounts of water to cool their systems,” the press release suggested.

Data centers suck up an incredible amount of water to keep their delicate equipment cool. The hotter it is, the more water they use, and a heatwave spikes the cost of doing business. But old emails lingering in cloud servers are a drop in the bucket for a data center compared to processing generative AI requests.

A U.S. Government Accountability Office report from earlier this year estimated that 60 queries of an AI system consume about a liter of water, or roughly 1.67 Olympic-sized swimming pools for the 250,000,000 queries generated in the U.S. every day. The World Economic Forum has estimated that AI datacenters will consume up to 1.7 trillion gallons of water every year by 2027. OpenAI CEO Sam Altman has disputed these estimates, saying that an average ChatGPT query uses “roughly one fifteenth of a teaspoon” of water.
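The swimming-pool figure checks out against the quoted numbers (assuming the standard 2,500,000-liter volume for an Olympic-sized pool):

```python
# Sanity check of the GAO-derived figures quoted above
QUERIES_PER_LITER = 60          # "60 queries ... about a liter of water"
QUERIES_PER_DAY = 250_000_000   # daily U.S. queries
OLYMPIC_POOL_LITERS = 2_500_000 # assumed pool volume

liters_per_day = QUERIES_PER_DAY / QUERIES_PER_LITER
pools_per_day = liters_per_day / OLYMPIC_POOL_LITERS
print(f"{liters_per_day:,.0f} liters/day ≈ {pools_per_day:.2f} Olympic pools")
```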

Downing Street announced plans in January to “turbocharge AI” in the U.K. The plan includes billions of pounds earmarked for the construction of massive water-hungry datacenters, including a series of centers in Wales that will cost about $16 billion. The announcement about the AI push said it will create tens of thousands of jobs. It doesn’t say anything about where the water will come from.

In America, people are learning that living next to these massive AI data centers is a nightmare that can destroy their air and water quality. People who live next to massive Meta-owned datacenters in Georgia have complained of a lack of water pressure and diminished quality since the data centers moved in. In Colorado, local government and activists are fighting tech companies attempting to build massive data centers in a state that struggled with drought before the water-hungry machines moved in.

Like so many other systemic issues linked to climate change and how people live in the 21st century, small-scale personal solutions like “delete your old emails” won’t solve the problem. The individual water bill for a person’s old photos is nothing compared to the gallons of water required by large corporate clients running massive computers.

“We are grateful to the public for following the restrictions, where in place, to conserve water in these dry conditions,” Helen Wakeham, the UK Environment Agency’s Director of Water, said in the press release. “Simple, everyday choices—such as turning off a tap or deleting old emails—also really helps the collective effort to reduce demand and help preserve the health of our rivers and wildlife.”

Representatives from the UK Government did not immediately return 404 Media’s request for comment.




Is Astute Graphics plugin 40MB or 678MB?


Edit: It seems that it may be 40MB and that the other 629 MB is from the Texturino plugin that generally gets bundled with it; I believe they are just two separate direct downloads. Not sure why there would be inconsistencies in the file size, though (669MB vs 678MB).

Note: I am not requesting a link or a source; I just want to know if I am downloading the correct file. Specifically, is the bundle supposed to be 40MB or 678MB?

I found torrented versions are 678MB, but direct downloaded versions are only 40MB. motka (dot) net (from the megathread) had one for 678MB, but the download is a 404 sadly.

Also, is the latest version 3.9.1? I see direct download ones showing up as 4.1.0, and 4.2.0 (which doesn't seem right to me)

Thank you.

in reply to Yourname942

40MB can't be it. Check rsload. I gave some details in your other post.


Your CV is not fit for the 21st century


The job market is queasy, and since you're reading this, you need to upgrade your CV. It's going to require some work to game the poorly trained AIs now doing so much of the heavy lifting. I know you don't want to, but it's best to think of this as dealing with a buggy lump of undocumented code, because frankly that's what is standing between you and your next job.

A big reason for the bias in so many AIs is that they are trained on the way things are, not the way we'd like them to be. Being just expensively trained statistics, your new CV needs to give them the words most commonly associated with the job you want, not merely the correct ones.

That's going to take some research and a rewrite to get it looking like the CVs the model was trained to match. You need to add synonyms and dependencies, because the AIs lack any model of how we actually do IT; they only see correlations between words. One would hope a network engineer knows how to configure routers, but if you just say Cisco, the AI won't give it as much weight as when you say both. Nor can you assume it will work out that you actually did anything to the router, database or code, so you need to say explicitly what you did.

Fortunately your CV does not have to be easy to read out loud, so there is mileage in including the longer names of the more relevant tools you've mastered. Awful phrases like "configured Fortinet FortiGate firewall" are helpful if said once, as is using all three F-words separately elsewhere. This works well for the old-fashioned simple buzzword matching still widely used.
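That buzzword matching can be illustrated with a toy scorer (a deliberately naive sketch; real screening systems are more elaborate, and the keyword list is invented):

```python
# Naive keyword screener: score a CV by how many job keywords appear verbatim.
JOB_KEYWORDS = {"cisco", "router", "firewall", "fortinet", "fortigate", "configured"}

def score(cv_text: str) -> int:
    # Exact word overlap only — no synonyms, no understanding
    return len(JOB_KEYWORDS & set(cv_text.lower().split()))

terse = "Cisco experience"
explicit = "Configured Cisco routers and Fortinet FortiGate firewall policies"
print(score(terse), score(explicit))  # 1 5
```

Note that "routers" scores nothing against "router": exact-match systems don't even inflect, which is why spelling out every variant pays off.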


This is all so fucked.




Syncthing 2.0 Launches With Major Database Overhaul


Release Note
in reply to Karna

Can I ask about the change to not keeping records of deleted files after six months by default? Does that mean that if I sync two directories constantly, so that when Syncthing sees a file deleted in one it deletes it in the other too, and I then copy that same file back into the synced folder after six months have passed, Syncthing would sync that file again? Or what else does it mean?

Currently I just use this for easy transfers between two computers; I keep moving transferred files out of both folders, so I would think this has no effect on how I use it?

in reply to ook

I don't think your use will be affected. I believe the only change is that your database will be less bloated with deleted items that previously were never removed.

If you add a file back after it's removed from the database, it should sync as usual.

(This is my interpretation of the change notes, i'm no experto, maybe a real experto can confirm this is true or not).


in reply to cyborganism

I spent about a decade as a KDE developer.

KDE has this mindset where if someone wants to implement something they think is cool, and the code is clean and mostly bug free, well -- have at it! Ever wonder why there's 300 options for everything?

Usually (because there's a bunch of people trying to optimize the core for speed and load times and such) this also means that unused code paths must not contribute negatively to things like load times. So a plugin like this doesn't get loaded unless enabled, and thus doesn't harm everyone else's performance. It also means that if it stops working in the future and starts to bitrot, it can be dropped without affecting the core code.





Intel CPU Microcode Updates Released For Six High Severity Vulnerabilities


cross-posted from: lemmy.ml/post/34564216

Impacted CPUs:
  1. Arrow Lake
  2. Core Gen 13 Raptor Lake
  3. Core Ultra 200V Lunar Lake
  4. Xeon Scalable Gen3 and newer through Xeon 6 Sierra Forest / Granite Rapids
  5. Xeon D-17xx / Xeon D-27xx
in reply to Karna

What is Intel Microcode anyway? Been bugging me for ages seeing it reported for violating vrms.
in reply to oeuf

Simply put, modern processors don't convert instructions directly into fixed transistor logic; they run internal code that controls how instructions are carried out. That's the microcode.
in reply to Karna

If you want to scan for vulnerable systems online, here is a list of operating systems that will not be applying these “privilege escalation” fixes.

gnu.org/distros/free-distros.e…


in reply to jackeroni

Lmao I hope you get paid in a stronger currency than russian dollars

in reply to ☆ Yσɠƚԋσʂ ☆

This is clearly AI and Chinese propaganda. Get back to me when this actually rolls out and is shown to be effective. The endless stream of AI hype and bullshit is sickening.




in reply to Zerush

Also vanilla, but artificial vanillin is more or less the same chemical as natural vanillin.

in reply to Troy00

Skip to the section "defectors." Essentially, information on the DPRK is hard to verify, and 70% of defectors are unemployed, so many turn to selling sensationalized stories that are more fantasy than reality in order to make a living. See Yeonmi Park for perhaps the most famous "celebrity defector."

The authenticity of her claims about life in North Korea – many of which have contradicted her earlier stories and those of both her mother and fellow defectors from North Korea – have been the subject of widespread skepticism. Political commentators, journalists and professors of Korean studies have criticized Park's accounts of life in North Korea for inconsistencies,[8][9][10] contradictory claims, and exaggerations.[11][12][1] Other North Korean defectors, including those from the same city as Park, have expressed concern that the tendency for "celebrity defectors" to exaggerate about life in North Korea will produce skepticism about their stories.[13][14] In 2014, The Diplomat published an investigation by journalist Mary Ann Jolley, who had previously worked with Park, documenting numerous inconsistencies in Park's memories and descriptions of life in Korea.[13] In July 2023, a Washington Post investigation found there was little truth to Park's claims about life in North Korea.[3] Park attributed the discrepancies to her imperfect memory and language skills,[3][13] and her autobiography's coauthor, Maryanne Vollers, said Park was the victim of a North Korean smear campaign.[15]


These are both just Wikipedia, you can find way more elsewhere why defectors aren't a good source of information on the DPRK. is a good documentary on the horrible treatment of defectors in the Republic of Korea and why the celebrity defector industry exists.



Protest footage blocked as online safety act comes into force




Intel collapsing?


Starting to see a lot of worried people as Intel descends rapidly. This is reminding me of how it went with Nokia...

https://www.youtube.com/watch?v=cXVQVbAFh6I

in reply to 3dcadmin

Of course Intel will collapse within the next 10 years.

They have focused exclusively on high-end, very expensive processors in the past. Now that Moore's law no longer holds, that doesn't work anymore, because ARM chips are catching up in performance at a tenth of the price.

in reply to gandalf_der_12te

Whilst true, AMD are doing just fine by being fabless. I can't really see x86 going as soon as you say for many reasons



What are your thoughts about Eprivo email app and their privacy services?


This is not to promote the product; I merely came across it and couldn't find any reviews except those on Google Play. I use Android, and as much as I hate iOS, its Mail app is very consistent regardless of whether you use a .mac address or Gmail. On Android, it is very difficult to find a decent email app. I'd been on Fairmail for quite a while until recently, when I had sync problems.

So I dug around and found "EPRIVO - Encrypted email and chat". It was a surprise, because I am constantly on the lookout for a good email app (and browser!) on Android. Usually on Google Play you see Gmail, Thunderbird, Proton, Outlook, Edison, Fairmail, etc. I had never seen Eprivo before.

Anyway, I tested it out on a Gmail account. The app works quite well; here is what I learned:

1) You are forced to create a blanket Eprivo account. This takes about 10 seconds. The Eprivo account is then used to get you access to the email app. You can use any email account within it (Gmail, Yahoo, etc.); I use Gmail and it works well.

2) The privacy features are interesting. You can do a lot of stuff, like preventing forwarding, setting a timer so an email can only be read once, password protection, etc. I used Proton in the past, and there these features are exclusive to a Proton address. In this app, I can use some of them with my Gmail, such as setting a timer on an email. To get the full set of privacy features, you need to create an Eprivo address (very easy to do within the app), so you will have something like abc@eprivovip.com.

3) Prices are surprisingly cheap: 5 bucks a year.

4) They advertise themselves not as an email service but, to my understanding, as a "privatized email service": a private layer on top of your existing email.

Any thoughts?

in reply to mazzilius_marsti

1) Most of that is bullshit and the rest is horseshit.
2) Sending email involves metadata that can and will be scraped (from, to, subject, etc.).
3) If you want the contents of an email secured, use age or gnupg to create an encrypted message using your recipient’s public key, and post that in your email to them.
4) If you want secured emails from other people, you need to securely give them a copy of your public key in a manner that resists man-in-the-middle attacks.
5) Once sent, you lose all control over what they do with it; you can’t unsend, delete, or limit what they can do with it.


Rogov said Zelenskyy was afraid of the upcoming meeting between Putin and Trump




Big Tech or Big Threat? Google’s Ukraine Military Ties Exposed




U.S. Becomes First Country To Recognize Mega-Israel


WASHINGTON—Calling the ongoing violence in the region “disgusting” while pledging America’s unwavering support, President Trump announced Monday that the United States would be the first country to recognize the state of Mega-Israel.

“We recognize the right of Mega-Israel to exist as an ever-expanding sovereign nation,” said Trump, who added that he believed the West had turned a blind eye to Mega-Israel for too long, and that Mega-Israel had the right to defend whatever they claimed their borders to be.

“Today, I called Giga-Prime Minister Benjamin Netanyahu, and I told him that the U.S. stands behind Mega-Israel, its Mega-land, and its Mega-army. As such, we will continue to provide them with military support as they face attacks from the Micro-Middle East.” At press time, Trump announced plans for the United States to officially back a one-Mega-Israel solution.



"Live long enough to become the villain"


Or more like, live long enough to let your true villain colors show, ha!
in reply to bubblybubbles

Cool bug fact: many US companies continued to do business in Nazi Germany throughout WW2, to the point that Allied bombers were briefed specifically to avoid hitting their factories
in reply to bubblybubbles

Add Russia instead of NATO and it is correct.

Fuck Russia for promoting fascism!



Getting blocked accessing a site by default


So I don't live in the UK, nor do I have a VPN, and there's no child-safety law like this in my country, but today I saw this while accessing this site. Is there any way to bypass it without a VPN? I use my Android phone's hotspot for my laptop's internet, and running a VPN would heat up and drain the phone's battery fast.
in reply to omniman


Nice, they provide all the cool sites for free movies in the lawsuit 🤣🤣

You can download the full document from here (I think; they said it was a one-time link).

Supporting document for court order



CORS error when calling /api/v3/users with Authorization header in local setup



Hi NodeBB team,

I have NodeBB running locally on my machine:

NodeBB version: v3.12.7

Environment: Local development

Frontend: React (Vite) running on http://localhost:5173

Backend (NodeBB) running on http://localhost:4567

I’m trying to create a user via the API:

async function registerUser() {
  try {
    // The JSON Content-Type and Authorization headers make this a
    // "non-simple" request, so the browser sends an OPTIONS preflight first
    const res = await fetch(`${import.meta.env.VITE_API_URL}v3/users`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${import.meta.env.VITE_TOKEN}`
      },
      body: JSON.stringify(formData),
    });

    if (!res.ok) {
      throw new Error(`HTTP error! Status: ${res.status}`);
    }

    const data = await res.json();
    console.log("User registered successfully:", data);
  } catch (error) {
    console.error("Error registering user:", error);
  }
}

Question:
How can I correctly configure NodeBB in development so that it allows the Authorization header in API requests?
Even after setting Access-Control-Allow-Headers in the ACP, the browser still fails at the preflight request.
Do I need a plugin or middleware to handle CORS for API v3 routes?
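One common development workaround is to avoid cross-origin requests entirely by proxying API calls through Vite's dev server, so the browser only ever talks to one origin and no preflight is needed. A sketch of vite.config.ts, assuming the ports from the setup above:

```typescript
// vite.config.ts — forward /api/* to the local NodeBB instance so
// requests from the React app are same-origin
import { defineConfig } from 'vite'

export default defineConfig({
  server: {
    proxy: {
      '/api': {
        target: 'http://localhost:4567',
        changeOrigin: true,
      },
    },
  },
})
```

With this in place, VITE_API_URL can point at a relative /api/ path and the Authorization header is forwarded to NodeBB unchanged.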
in reply to balu

Re: CORS error when calling /api/v3/users with Authorization header in local setup


balu can you confirm that the response you receive in the Vite app indeed contains the restrictive ACAO headers irrespective of what is set in the ACP?



The BBC helped kill Anas al-Sharif. Its reporting will kill more journalists


How is it possible for a BBC reporter to have made the following obscene observation in his segment on Israel’s murder at the weekend of Al-Jazeera journalist Anas al-Sharif: "There's the question of proportionality. Is it justified to kill five journalists when you were only targeting one?"

Unpacking the depraved journalistic assumptions behind this short “question” is no small task.

Imagine that Israel finally allows western journalists into Gaza after blocking their entry for nearly two years. A team of five familiar BBC faces covering the region set up shop in Gaza and work out of an improvised studio inside the enclave.

Then news breaks that their studio has been hit by an Israeli strike, and all five killed: Jeremy Bowen, Lyse Doucet, Yollande Knell, Lucy Williamson and Jon Donnison.

Israel doesn’t claim the strike was a mistake, but celebrates the killings. It says it has secret evidence that one of them – let’s say Jon Donnison, who made the observation above – was secretly recruited by Hamas’ military wing while in the enclave.

Can we imagine the BBC or any other western news organisation framing the segment in the following terms: "There's the question of proportionality. Is it justified to kill five journalists when you were only targeting one?"




in reply to geneva_convenience

As an American, I always saw the BBC as honest and more trustworthy. I guess I learned my lesson about them. Who would work for them in the field, knowing they would be sold out rather than defended?
in reply to MehBlah

Most Western media are very reputable until it's time to whitewash some war crimes. It's magical how fast Western media can uniformly spread complete lies and all repeat one specific false narrative handed to them by higher-ups.


The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con


Found this on Bluesky and thought it was a fascinating read. Intentionally or not, LLMs appear to be mimicking the techniques used by con men, leading users to think they are 'intelligent'.


How Big Cloud becomes Bigger: Scrutinizing Google, Microsoft, and Amazon's investments


Full Report.

In an AI gold rush, those selling the proverbial pickaxes are surest to win: cloud companies provide scalable managed computational resources as a subscription service now used by most businesses to store their data, and as a primary ingredient to build and use AI. Just three companies—Amazon, Microsoft, and Google—control two thirds of global cloud compute market share, collectively comprising “Big Cloud.” This highly concentrated market raises concerns regarding digital sovereignty, slowed innovation, and a concentration of corporate power.

In this report, we explore an underrecognized manner in which AI ecosystems increasingly depend on Big Cloud: Big Cloud’s investment in other companies. We show how Big Cloud companies are prolific investors widely deploying hundreds of billions of dollars over thousands of deals, often in smaller, lesser-known startups. We find that:
1. While some regulators have begun to scrutinize the largest of these deals—such as Microsoft’s investment in OpenAI or Google and Amazon in Anthropic—the ecosystem-wide scale of this investment is hard to overstate: Big Cloud invests as frequently and at similar amounts to the largest venture capital firms and startup accelerators. Further, Big Cloud invests about ten times as often as other Big Tech companies, and ten to a hundred times more in total dollar amounts.
2. Via accelerator programs, Big Cloud companies lock startups into their cloud infrastructure. Big Cloud ensnares young startups in its ecosystem via cloud credits, while requiring that startups use the cloud company's other tech and incentivizing strategies with particularly heavy cloud needs, such as generative AI.
3. More so than when other Big Tech companies or VC firms invest, startups funded by Big Cloud are more likely to rely on Big Cloud as their lead or sole investor. These relationships allow Big Cloud to exercise significant influence over startups and bend them to their interests.
4. Amid concerns that vertical integration may give one firm too much control over AI supply chains—such as chips, cloud, or data—our work shows that Big Cloud is investing in a way that brings many of the same risks as conventional forms of vertical integration: when Big Cloud invests in an AI supply chain company—such as a Data, X-as-a-Service, or Internet infrastructure company—that company is often more likely to be dependent on that Big Cloud company as their sole or lead investor, compared with other investors.
5. Intensifying concerns about threats to global digital sovereignty, we find that American Big Cloud companies make global investments at a far greater pace than other investors we compare against. Just over half of all Big Cloud investments are made internationally, about twice the frequency of large VCs, top accelerators and other Big Tech companies. Big Cloud also invests through accelerators abroad much more often than at home, highlighting the need for global regulatory scrutiny of startup accelerator programs.

While these practices merit creative regulatory and policy responses, we emphasize that such interventions should proceed in light of the following overarching implications:
— Dependence on Big Cloud is not just technical or contractual. It is also financial, as a source of investment. This compounds the need for structural separation: Amazon, Google, and Microsoft must be compelled to split their cloud business from their other businesses that run on the cloud, per past calls, so that they do not both provide infrastructure and compete with the customers and investees relying on that infrastructure.

— Big Cloud companies are huge investors, which sets them apart from all other large tech companies. Any one of these investments may be small and insignificant, but they cumulatively shape the startup and developer ecosystem in Big Cloud companies’ interest. Thus, in addition to “deal by deal” scrutiny, in which only the largest deals receive attention, regulators and researchers should monitor and scrutinize these investments and their effects in an ecosystem-wide, cumulative, and ongoing manner.



First 3D printed titanium rocket fuel tank can handle 330 bar pressure under -196°C | by Korea Institute of Industrial Technology


South Korean researchers have achieved a major milestone in space manufacturing by successfully testing the world's first 3D-printed titanium fuel tank to pass extreme cryogenic pressure conditions, marking a breakthrough that could transform how spacecraft components are produced.

The 640mm diameter tank, manufactured using Ti64 titanium alloy through Directed Energy Deposition (DED) 3D printing, withstood pressures of 330 bar while cooled to -196°C with liquid nitrogen during testing at the Korea Aerospace Research Institute (KARI). The pressure test exposed the tank to forces 165 times greater than standard tire pressure, demonstrating its reliability under the extreme conditions of space missions.
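The "165 times" comparison matches the quoted figures (assuming a typical car tire pressure of about 2 bar):

```python
# The quoted pressure comparison, with an assumed 2.0 bar tire pressure
TANK_BAR = 330
TIRE_BAR = 2.0
print(TANK_BAR / TIRE_BAR)  # 165.0
```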







Let's Stop Chat Control