Thanks to the crash of FTX and Scott Aaronson’s subsequent post about SBF, I read a very interesting deep dive into Effective Altruism in the New Yorker. A lot of important characters and ideas I’ve encountered before show up: Eliezer Yudkowsky, earn-to-give, 80,000 Hours, etc. It’s really fascinating to see it all come together in one narrative, so I can understand a little better the inspiration for these ideas and the way the movement has interacted with the world up until now. Here are my notes and critiques of the ethical ideas presented in the article.
I think the single biggest problem with EA-flavored utilitarianism (at least as presented in the article) is that, since individual quality of life is the goal, it ignores the possibility that improving the health of a community may be at least as effective as intervening directly in individual outcomes. The article cites critics who point out that EA has a status quo (capitalist) bias, and I think that’s part of the problem.
The article quotes Bernard Williams’ critique of utilitarianism:
Someone who seeks justification for the impulse to save the life of a spouse instead of that of a stranger, Williams famously wrote, has had “one thought too many.”
To me, the answer to this critique is that choosing to save the life of your spouse instead of that of a stranger is the execution of a responsibility that forms the fabric of a healthy community, and having that healthy community is an essential part of improving the lives of individuals.
From this perspective, the longtermism that Effective Altruism has moved toward doesn’t really make sense to me. Longtermism taken to the extreme says:
Something like a .0001-per-cent reduction in over-all existential risk might be worth more than the effort to save a billion people today.
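The arithmetic behind a claim like that is a simple expected-value comparison. Spelling it out with illustrative numbers of my own choosing (the article doesn’t give these figures; longtermists often assume something on the order of $10^{18}$ potential future people):

$$
\underbrace{\Delta p \times N_{\text{future}}}_{\text{expected future lives}} \;=\; 10^{-6} \times 10^{18} \;=\; 10^{12} \;\gg\; 10^{9} \;\text{(lives saved today)}
$$

So if you grant moral weight to every potential future person, even a 0.0001% reduction in extinction risk swamps a billion present lives, and that is exactly the move I want to push back on.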
To me, this tradeoff between existential risk and current human life doesn’t actually make sense. If humanity ended tomorrow, the event (whatever it was) would be an awful tragedy for those alive. But it would not be a tragedy for the unborn people who would have existed had humanity not ended, because those people would be counterfactual. I don’t assign any moral value to the number of individuals that exist in different possible futures. I’ll have to read MacAskill’s book What We Owe the Future to see what he argues, but I think any longtermism beyond “catastrophe is bad for the people involved” eventually leads you to things like this quote from Nick Beckstead:
Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards—at least by ordinary enlightened humanitarian standards—saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country.
I cannot overstate how vehemently I oppose this sentiment. It’s already the central capitalist dogma that those with wealth are the ones who should decide where labor goes. Now Beckstead would like it to be the central criterion for deciding who receives “altruism”. It merges altruism into techno-utopianism until it’s indistinguishable from the enlightened, libertarian, tech-bro milieu that surrounds the Effective Altruism movement.
In summary, I think the New Yorker article really hit the nail on the head with its coverage. I feel like I got a glimpse of the movement from all sides. Feel free to reach out if you’d like to dialogue with me on this topic.