Bullshit Longtermist “Altruism”

November 15, 2022

When I first heard about effective altruism, it seemed, from a certain point of view, completely sensible: given dollars to donate, how can you make those dollars have the greatest effect?  And the guy I heard it from very much walked the talk; he donated an admirably substantial portion of his non-trivial tech-industry compensation to charity.  I might hedge a little on “how sure are you about the numbers guiding your donations?”, but basically super-admirable.  Effective, even.

Recently, however, some weird, crazy “longtermist” bullshit has apparently taken hold in EA, and someone got the brilliant idea of claiming that their goal was to save the maximum number of lives in the future, where by “future” they mean “hypothetical future space civilization of unimaginable size” and/or “a giant AI into which we have uploaded zillions of human minds”.  “Altruism” is helpfully redefined as “whatever maximizes the number and/or welfare of these completely imaginary people”.  I.e., it’s bullshit, intended to let its proponents claim that whatever silly-ass thing they want to do is “altruism”, and furthermore that it is the best possible altruism.

So, why is it bullshit?  First, it’s very, very unlikely that we’ll expand out of our solar system, and if somehow we do, we have no idea when that will be possible, because we currently lack the physics and the biological knowledge necessary to make it happen.  Interstellar travel takes too much energy, propulsion systems are too energy-hungry, and the time scales far exceed our ability to keep humans alive without external support.  All these problems need solving first, assuming they even have solutions.

Second, where would we go?  We haven’t identified any actually-habitable planets anywhere else yet.  That’s something we might manage in the not-too-distant future, with another turn of the giant-space-telescope crank, but as yet zero other planets are ready to support human life out of the box, and we have no guarantee that any habitable planet we do find will be at anything like a feasible distance.  10 light-years is unimaginably far; what if it’s 1000?  Or suppose we compromise, and try terraforming?  We haven’t even done that in our own solar system, and the best possible day on Venus or Mars is still more lethal than the worst day anywhere on Earth outside of an actual natural disaster.  Anything we could possibly do there, we could more easily do here to fix problems with our own climate.  (A good start might be “stop doing stupid shit, and stop other people from doing stupid shit”.  If we can’t even manage that here….)

Third, even if we accept the airy-fairy bullshit that we’ll be able to leave the solar system, find a planet to adapt, change it, bootstrap a civilization there, and then repeat this process exponentially, we have no clue what the timescale for doing all of this will be.  We haven’t solved any of these problems yet, and we’ve got no particular reason to believe we’ll solve them in this century or the next.  Implicit in the bullshit longtermist hubris is the assumption that *we* will be the ones to solve them, and that *we* know the steps toward that solution, so obviously whatever is good for *us* is good for those future hypothetical humans.  Or perhaps we expect an answer from that general-purpose, human-exceeding AI (which, like self-driving cars, is coming real soon now) that we have already decided will find solutions to these problems, instead of telling us, “actually, no, you are stuck here on Earth.  Period.  Here’s the proof”.  Assuming, of course, that such an AI is even possible, and that some other scaling law doesn’t crap out first.

I would propose that it is more prudent, and more effective, to pay attention to the overwhelmingly likely scenario: almost all humans live on Earth, and will continue to do so for hundreds of years.  We should worry about avoiding civilization-endangering disasters here, and about maximizing human capital (i.e., health, happiness, longevity, intelligence, education, productivity) here.  We should assume that, whatever we do in the distant future, we are stuck here for a long, long time, and that if we don’t make plans for that long, long time here, there won’t be a future beyond it.  That would mean taking climate change more seriously than the US currently does, and looking at obvious inefficiencies (the US, fat and happy, has quite a few of these) and replacing them with better systems.  It would mean taking all the “not-first-world” countries seriously, taking their health, social, political, and economic needs seriously, and not just exploiting them for a quick buck.  We should think about political systems that are resistant to fascist, racist, and nativist impulses, and adopt those systems.

And in the unlikely event that we do start a galactic civilization, these efforts here would not be wasted.  If we can’t maintain a habitable atmosphere and climate on a favorable planet, it’s hubris to think we’d manage it elsewhere, so we’d better start practicing until we get good at it.  The same externalized-cost problems of capitalism that make planet-scale pollution hard to control here will surely travel with us wherever we go; running away from one’s own intrinsic problems is (ahem) a well-known waste of time.

Postscript: today I discovered that their bullshit extends to climate science.  These guys are truly full of shit.
