

[2018]DUDES: We made a website where you can look up a charity and see what % of donations it spends on admin overhead

ME: Hey that rules

DUDES: It's called effective altruism

[2025]OTHER DUDES: So those EA dudes want to pave all farmland on earth for the benefit of hypothetical robots 10,000,000,000,000 years in the future

ME: Wha

[DOES THIS STORY HAVE A MORAL? I CAN'T TELL.]


in reply to mcc

What's funny (correction: not funny at all) is that MacAskill has always been like that, but he did a better job of hiding it in 2018 when it was more profitable to pretend to have any grounding in reality.
in reply to mcc

i think its "never take anything to its logical extreme"


in reply to margot

or, i guess in this case, illogical, bc im not sure logic applies to that particular philosophical leap
in reply to Irenes (many)

@ireneista @emaytch It's the exact same fallacy as Pascal's Wager. Once you assign infinite cost or reward to a decision, any "empirical" or "rational" framework applied to that loss function absolutely shits the bed. The resolution, of course, is "why in the hell do you think that's a possible outcome? what is your basis for that belief?"
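The Pascal's Wager failure mode above can be made concrete: a minimal sketch (my own illustration, with made-up probabilities and payoffs) of how any nonzero probability of an infinite payoff swamps every well-evidenced finite good under naive expected value.

```python
# Sketch: once any outcome is assigned infinite utility, expected-value
# comparisons stop discriminating between options.

def expected_value(prob: float, payoff: float) -> float:
    """Naive expected value: probability times payoff."""
    return prob * payoff

# A vanishingly small probability of an infinite payoff still yields infinite EV...
ev_wild_speculation = expected_value(1e-15, float("inf"))

# ...which "beats" any concrete, well-evidenced finite good, no matter how likely.
ev_proven_program = expected_value(0.95, 1_000_000)

print(ev_wild_speculation > ev_proven_program)  # True, regardless of how small prob gets
```

This is exactly why the only real exit is to question whether the infinite outcome belongs in the decision table at all.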


in reply to JP

the singularity is the rapture for people who find computers easier to believe in than old men
in reply to JP

I suddenly see a connection to one of Zeno's Paradoxes, as well, where when you build a seemingly logical construct of reality (to reach a location, you must first reach halfway between there and your starting point) and take it to the limit (infinitely many halved distances between you and the destination) you end up with conclusions that seem sound (you can never complete infinitely many steps!) and yet are easily disproven just by looking at reality.

The scientific method applied here would suggest, "oh, your model is bad". But it's clear there was no applying that here.
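The Zeno example can even be checked numerically: the halved distances form a geometric series whose partial sums converge to the full distance, so the "infinitely many steps" add up to something finite. A minimal sketch:

```python
# Sketch: partial sums of Zeno's halved distances (1/2 + 1/4 + 1/8 + ...)
# toward a destination 1 unit away converge to 1.

def zeno_partial_sum(n_steps: int) -> float:
    """Sum the first n halved distances."""
    return sum(0.5 ** k for k in range(1, n_steps + 1))

print(zeno_partial_sum(10))   # 0.9990234375
print(zeno_partial_sum(50))   # ~1.0: the limit is exactly the full distance
```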

in reply to Asta [AMP]

@aud @jplebreton @emaytch @ireneista There was a great article I read a while ago about how you can understand science as the transition from rationalism to empiricism. That is, that science is the idea that you need to actually check your logic against the real world. There are many logically consistent worlds which are not ours, so it doesn't matter what you derive in your own brain if you don't have a connection out to empirical observation.

Techbros could stand to take note.

in reply to Cassandra is only carbon now

and it's worth remembering that this radical position, that observation of reality is irrelevant and unimportant, is the thing that today's rationalist movement is actually named for. it's not actually about the practice of science, though we think many of its adherents don't fully appreciate that.
in reply to Irenes (many)

@ireneista @aud @jplebreton @emaytch Yep. It's radical and also *deeply* reactionary. Notably, it's also a huge break from even the reactionary forms of New Atheism. At one point, Sam Harris of all people was arguing that empiricism was necessary to ethics, not just philosophy. Now the whole reactionary movement seems to have left even that behind.
in reply to Irenes (many)

(the thread is about effective altruism but in practice there's heavy overlap in those communities)
in reply to Irenes (many)

@ireneista @aud @jplebreton @emaytch Yeah, absolutely. I disagree about some of the specifics, but "TESCREAL" as a term to point to the intersection of all of those related but distinct philosophical schools is useful nonetheless.
in reply to Cassandra is only carbon now

@xgranade @ireneista @emaytch ...but then you try to apply it to something like intercepting killer asteroids and they suddenly get all "well, let's be reasonable and practical." It's all just different forms of confirmation bias.
in reply to Tom Forsyth

@TomF @ireneista @emaytch I love how in e/acc and xrisk language, climate change is not "existential," only these pretend billion-year-future things.
in reply to Cassandra is only carbon now

@xgranade @TomF @ireneista @emaytch There's this idea from indigenous philosophy (the Haudenosaunee/Iroquois, I think? i found out about it from uh, a cleaning products company) that you should do all things while considering their impact on people seven generations from now. I think that's a really good rule actually. That's a really reasonable timespan. I don't know if people exist in 10^10 years. I'm certain people exist in 7 generations, or at least, what we do now decides whether they exist.


in reply to mcc

@xgranade @TomF @ireneista @emaytch Notably, seven generations turns out to be right in the sweet spot for "work to hold back greenhouse emissions now results in benefits then".


in reply to mcc

personal anecdote, necrogendering

Sensitive content

in reply to Cassandra is only carbon now

personal anecdote, necrogendering

Sensitive content

in reply to mcc

@xgranade @TomF @ireneista @emaytch

Oh, is that why they're called "7th generation"? I read that expecting it to be Dr. Bronner's until I got to that point

in reply to Cassandra is only carbon now

as i like to say, i wouldn't say there's such a thing as reading too much science fiction…but there is such a thing as not reading enough stuff that isn't science fiction
in reply to Neville Park

@nev @ireneista @emaytch Or at least, forgetting the "fiction" part of "science fiction." Fiction is a wonderful thing, but you still need critical reading skills!
in reply to Cassandra is only carbon now

@xgranade @ireneista @emaytch That reminds me of something I was taught in physics, which has absolutely saved my bacon multiple times in terms of not getting drawn into weird ideologies:

If your model of the world predicts that something will be infinite, that doesn't mean that whatever it corresponds to in the real world is actually, literally infinite. Instead, it means that your model is missing something. You have extrapolated beyond the domain of applicability of your model, and something else you haven't accounted for will happen before you get there.

in reply to Rachel Barker

@rachelplusplus @xgranade @ireneista @emaytch *thinking* I'd say something about Effective Moral Theory, but that sounds like a buzzword someone would probably run with and use to justify horrors
in reply to Irenes (many)

@ireneista @emaytch slight disagree: it's a multiply by infinity error (because the future is infinite)

more specifically: it's not the first place they went wrong, but the place that they really removed all guardrails that would mitigate any wrongness, was in giving value in the future a 0% discount rate.

while people in the future are not any less morally legitimate than people are today, not only is opportunity cost real, but the future is uncertain (and your plans for the future even more so! you're not building the far future, you'll be dead)

once you've made that error, basically any imaginary future is justified by throwing excessively big numbers at it, and you're just Immanentizing The Eschaton again (usually a big sign you've gone wrong).

sure, we could argue for years about Repugnant Conclusions and whether various measures of utility in utilitarianism make any sense, but giving even the most modest discount rate to The Abstract Far Future would render 99.9% of these questions moot

in reply to madison taylor

we do also believe that the fundamental error of utilitarian ethics is the idea that good and bad cancel out. they don't, they coexist - so arithmetic on them is unhelpful.
in reply to Irenes (many)

@ireneista @tomoyo @emaytch maybe the problem is trying to represent things that aren't numbers using numbers
in reply to mcc

@ireneista @tomoyo @emaytch like science models things with numbers all the time but (1) there's an understanding these are models, and that the models become more or less effective depending on how number-like the underlying reality is (2) the scientists have to spend quite a lot of time arguing their mathematical models match reality before they're actually accepted
in reply to mcc

@ireneista @tomoyo @emaytch in like 2005 I was introduced to the idea of "metrics" by an MS employee and one of the first things they said was that it's important to remember (not their words, can't remember their exact words) that metrics are a proxy measurement and they're not the underlying reality, they're just an attempt to get an indirect handle on the underlying reality by finding something related which can be measured and measuring it
in reply to mcc

we've tried that line of argument and the immediate response is "but you still have a value function for them"

so we put on our mathematician hat and thought about why they aren't numbers, and this is the result

we agree with you, of course, but the goal with this line of argumentation is to reach people who are deeply mired in this belief system

in reply to Irenes (many)

@ireneista @tomoyo @emaytch "but you still have a value function for them"

I don't agree with that at all, but I assume you're not the person here I need to convince

in reply to madison taylor

for those who don't spend a lot of time thinking about kinda-abstract economics (an eminently rational form of ignorance - in the economics sense not the rationalist sense - and quite defensible)

a normal discount rate in economic decision-making might be something like 8%-25% for a company (depending on the stability of the firm and its future prospects)

or, more abstractly, in terms of economic value, maybe something like 4% annually (in real terms)

but hey, i'm not picky. take, like, just a 1% discount rate, just to account for the risks to your own plans going badly awry. heck, be full of hubris and take a 0.1% discount rate: suddenly it's numerically clear that the infinities of your Machine God are all total nonsense, and you can join the rest of us worrying about $BILLIONAIRE having all the power
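The point about even tiny discount rates can be checked directly. A minimal sketch (my own numbers: a 0.1% rate, roughly 25 years per generation), computing discounting in log space so cosmic timescales don't overflow:

```python
import math

# Sketch: present value of one unit of future value delivered t years from
# now, discounted at annual rate r. Computed as FV * exp(-t * ln(1+r)) to
# stay numerically stable at huge t.

def present_value(future_value: float, rate: float, years: float) -> float:
    """Standard exponential discounting, evaluated in log space."""
    return future_value * math.exp(-years * math.log1p(rate))

# Even the "full of hubris" 0.1% rate behaves sensibly:
print(present_value(1.0, 0.001, 7 * 25))   # ~0.84 -- seven generations out still matters
print(present_value(1.0, 0.001, 10_000))   # ~4.6e-5 -- ten millennia out, nearly nothing
print(present_value(1.0, 0.001, 10**10))   # 0.0 -- underflows to zero long before 10^10 years
```

With any nonzero rate, the far-future infinities contribute essentially nothing, while the seven-generation horizon keeps almost its full weight.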

in reply to madison taylor

@tomoyo I'm shook cause I didn't realize they didn't do some sort of discount rate. Like even entry level Expected Value models account for probabilities, and everything compounds.

I never bothered to look at their calculations because I know it's so obviously wrong, but wow the amount of hubris to model like that and assume you are getting anything that "proves" anything.

in reply to margot

@emaytch at the risk of almost assuredly being That Guy, actually maybe the hints were there in the original website …something something… disrespect for glue work …rambling segue to my own REAL pet peeve… how this timeline could have been so much better if we'd tracked % of donations sent back in the mail asking for more donations instead U+220E
in reply to Nathan Vander Wilt

@natevw @emaytch so there's a "soft problematic" version of EA where they get really really focused on dollars that go directly to services and this winds up over-funding things that accidentally game that number and de-funding important community work which due to the structural nature of its work means a slightly higher percentage gets spent on facilities or outreach

your city gets a lot of mosquito nets but no arts funding, in blunt terms

in reply to mcc

@emaytch type of guy who dissolves mosquito nets in petrol to use as waterproof coating on his art projects (why is not everybody knowing this secret trick?)


in reply to mcc

@natevw i also remember seeing a lot of people apply EA logic to *personal donations* (ie, to ppl posting gofundmes for hospital bills) which always felt particularly wrongheaded to me
in reply to mcc

@natevw @emaytch the worst possible combination of Goodhart's law and the one I forgot the name of, about choosing a metric only because it's easy to measure.
in reply to mcc

If You Give A Mouse A Cookie, but it's "If You Give A Tech Bro A Philosophy Book."
in reply to mcc

Robert Anton Wilson, 1979: Any false premise, sufficiently extended, provides a reasonable approximation of insanity

2025: Any false premise, sufficiently extended, turns out to be an already-existing thread on something called "lesswrong dot com" and it turns out a cofounder of Paypal has already given it 10 million dollars

mastodon.social/@emaytch/11546…


i think its "never take anything to its logical extreme"

in reply to Kevin Boyd (he/him) 🇨🇦

@kboyd
Oh boy, you're at the entrance of a glorious rabbit hole. It's got a basilisk that's going to torture us all for all eternity, several murders, Harry Potter fan fiction and disturbing ties to some of the most powerful people in the world.

It's a doozy.
@mcc @emaytch

in reply to Lu-Tze

@Lu_Tze @kboyd @emaytch honestly, i've seen better rabbit holes (or whichever type of hole it is that's full of bat shit)
in reply to mcc

It's my kind of trash, I guess. Among the coalition of factions trying to fuck over the vast majority of people, LessWrong is one of the goofier nodes.

They're the modern occultist strain of Nazis, like in Indiana Jones.

in reply to Lu-Tze

@Lu_Tze no intent to belittle your taste in Wow Check This Guy Out! I just have high standards… I'm from Texas… the level of Wow Check This Guy Out I've been exposed to is incredible
in reply to mcc

Hey, it's good to have my tastes challenged from time to time. I'm aware that sometimes I can get lost in fascination for the grotesque, and while that can be both fun and part of a useful critique of fascist movements, it's also easy to lose sight of the ways such movements hurt real people.

On the other hand, you've piqued my interest about Texan weirdos. Could you maybe suggest a good point of entry?

in reply to mcc

A rat/rabbit hole that is full of batshit has to be a "bat hole", right?

That's for sure what I'm calling it from now on anyway!

"Sorry, I'm a bit tired today. Last night I learned about Effective Altruism and went way down a bat hole reading about it online instead of going to sleep."

in reply to mcc

Quotes from Jeff Miller @jmeowmeow in a followers-only discussion I wanted to foreground:

"Runaway inflation in the philosophical flattery economy."

"As wealth and power is narrowly concentrated, the reward of flattery as a practice increases. If there's competition for flattery work as verbally charming people lose other opportunities for subsistence, there's motivation to go bigger and bigger."


in reply to mcc

Roko’s Obsequiosity. We can imagine in the far future a vast, gigantic ass, an ass so incomprehensibly big that we must start kissing the idea of it immediately in preparation.
in reply to mcc

I once saw an EA person do a presentation and their shiny formula for the expected value of the future of humanity included terms for the star density both in the milky way as well as in the local virgo supercluster as a whole.

Absolute clown show of a movement.


in reply to λTotoro

@lambdatotoro the fun thing is that they aren't even doing THAT right. As an example (and shameless plug), some time ago I did some back-of-the-napkin calculations to check what energy requirements would be like if we didn't stop pushing for “growth at all costs”. Billions of years? Turns out that at current growth rates we'd exhaust the entire Milky Way in a couple thousand years, even if we could convert it to energy at 100% efficiency.

wok.oblomov.eu/tecnologia/nucl…
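A rough rerun of that back-of-the-napkin check, with my own assumed round numbers (roughly current world energy use, the long-run ~2.3% annual growth rate, and a generous galaxy mass including the dark matter halo) rather than the exact figures from the linked post:

```python
import math

# Assumed inputs (illustrative, not from the linked post):
WORLD_ENERGY_PER_YEAR = 6e20     # J/yr, roughly current global consumption
GROWTH_RATE = 0.023              # ~2.3% annual growth
MILKY_WAY_MASS = 1.5e12 * 2e30   # kg, incl. dark matter halo (very generous)
C = 3e8                          # m/s

# Total energy if the whole galaxy were converted at 100% efficiency (E = mc^2):
galaxy_energy = MILKY_WAY_MASS * C ** 2

# Cumulative consumption under exponential growth is E0 * (e^(rt) - 1) / r;
# solve for the t at which it equals the galaxy's entire mass-energy:
years = math.log1p(GROWTH_RATE * galaxy_energy / WORLD_ENERGY_PER_YEAR) / GROWTH_RATE
print(round(years))  # a few thousand years, not billions
```

The exact answer shifts a bit with the assumptions, but the exponential makes it remarkably insensitive to them: even wildly generous inputs only buy a few more centuries.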

in reply to mcc

what’s the 2018 website? Didn’t charity navigator do this for the past decades?
in reply to Jason Petersen (he)

@jason Charity navigator is generally considered to be an "effective altruist" project and it is the website I was thinking of. I pulled 2018 out of a hat because I was like "when did i first encounter that? 2015?".

It doesn't believe in the future semi-infinite AI people thing, but it embeds some of the other bad assumptions of EA. I would say that it awards its "stars" based on over-valuing metrics that may not actually be the best way to provide well-rounded community benefit.

in reply to mcc

yeah I think I had already soured on it well before then, but had used it in the aughts, was all. I morbidly wondered whether there was a more blatant EA-themed site.
in reply to mcc

kind of funny, when you first mentioned the website I at first thought it was ProPublica's Nonprofit Explorer :anenw35:
in reply to mcc

my only contact with this whole ecosphere is scott adams, and while a lot of his posts were very interesting, i do feel he kind of ignored how inequality and real life suffering are a bit more immediate threats than possible future raptures (or "the economy").
in reply to mcc

what i don't understand is how the fuck is roko's basilisk anything more than dumb bullshit that you talk about when you're extremely drunk or high

but supposedly intelligent men believe it's a real thing

in reply to mcc

inventing very special terminology like that might be a red flag
in reply to mcc

Even the very initial premise is deeply flawed. "Admin overhead" is the health and safety of an NGO's staff and the security, efficiency, and reliability of its infrastructure. Underfunding it is toxic.
in reply to Eleanor Saitta

@dymaxion I think there's a threshold % where, if admin+fundraising exceeds it, it indicates a real problem.
in reply to mcc

Definitely! Also, that threshold is a lot higher than folks outside the sector think and varies a lot by org structure and space. What indicates a real problem at one org in the same space that provides minimal staff support can be an absolute survival budget for another org that isn't a burnout and abuse factory.