On Reboot's Ineffective Altruism
I've seen "Ineffective Altruism" used a couple of times to poke fun at EAs. The first time I saw the phrasing I landed somewhere between amused and confused. "Ineffective Altruism" sounds jocular (who would oppose being effective!), so what must be going on is a reaction to the EA aesthetic or to specific definitions of "effectiveness". That in turn leads us to ask what the alternative notion of effectiveness might be. Or is it a reaction not so much against EA as against the specific causes EAs seem to favor? I think it's also a reaction against something that has nothing to do with EA or even utilitarianism: "Ineffective Altruism" seems at its core to be, at least in substantial part, an embrace of the concept of Knightian (or "radical") uncertainty, and relatedly a skepticism of formal methods for decision-making. It is also, secondly, a positive moral evaluation of that fact.
It is similar to what I have said elsewhere about tacit knowledge: it is one thing to say that some knowledge is really hard to get, that maybe one has to go to the one master who knows a specific craft and apprentice for a year. It is another thing to say that this is good (perhaps because it allows us to go on quests seeking tacit knowledge), as opposed to lamenting, as I do here, the fact that this type of knowledge limits human flourishing.
A good riff on these themes is Michael Nielsen's Notes on Effective Altruism. I link to the tweet because the replies contain some good discussion. A funky related reading is this set of essays from David Chapman, where you could take the essay below to be Stage 3 making a misguided critique of something like Stage 4.5 while at the same time making valid points that could be made from Stage 5 (where you go get your post-rational sage license).
A recent essay from RebootHQ is copied below and commented on in the sidenotes. You can find other commentary on this same article by Nick Whitaker here. I decided to comment line by line and developed a few new additions to the Nintil.com stack: I have colored lines in red, yellow, and green to denote how I feel about each line. Red means I'm annoyed, yellow that I have some quibbles, and green that I appreciate the line. I deliberately focused mostly on the negatives, so you won't see much green. Hover over each sentence to focus the relevant sidenote. Some sidenotes are hidden for lack of space, but they will become visible and pop over the others when hovered.
Towards Ineffective Altruism (All text below is extracted from their Substack!)
This tweet from Timnit Gebru has been living in my head rent-free for the past month. Recently, she and other critics of big tech (as well as former longtermist Phil Torres) have been loudly sounding the alarm about effective altruism and longtermism on Twitter and in various publications.
These ideologies scare me, and I want to engage with them seriously — not because I believe in them, but because they are seemingly rational, relying on the language of science, moral philosophy, and statistics. They are increasingly influential among policymakers, intellectuals, well-funded institutions, and the richest men in the world. Their ubiquity makes them pernicious and hard to combat. To take them on, we must critique their philosophical foundations, their rhetoric, and their material impacts simultaneously.
At its most basic, the effective altruism movement makes a generally utilitarian argument about how the world’s privileged people should spend their time and money if they want to maximize their positive impact on the world.
Effective altruism was born mostly at Oxford in the late 1990s and early 2000s, at around the same time that the internet industry in Silicon Valley was experiencing its first cycle of boom and bust. The dominant ideologies of both come from the business culture of the time, and the two have become closer together since. As Nadia Asparouhova writes in her recent piece on “Idea Machines”: “Effective altruism is often associated with tech, but it’s genetically more similar to McKinsey.”
Nowadays, effective altruism’s epistemology and tools often parallel those of the tech industry. At its heart, it is driven by the principle of maximization and informed by statistical analysis. With these methods, effective altruists make arguments such as maximizing disability-adjusted life-years by allocating time and money towards initiatives that provide a mosquito net for a child in a poor country (rather than providing direct donations to the child’s family).
At first glance, this all seems straightforward and uncontroversial, even if it speaks of “doing good” in the terms of a business investment. If we want to make the world a better place by giving money away, of course we should maximize the good that each dollar does, you might say. And besides, how bad can an ideology be if its principal goal is to give billions of *maximally effective* dollars away to charity each year?
These are fair points, and I don’t entirely disagree with them. Billions of dollars per year from the wealthy tech elite used to convince people to go vegan or to give to non-religious health NGOs or to end factory farming is not, on its face, a bad thing.
But effective altruism is just the tip of the utilitarian iceberg. Beneath the visible argument that giving must be optimized in order to be “good,” there are an array of ideologies in close contact with effective altruism that are far stranger, more ethically dubious, and highly influential. Foremost among them is the ideology of longtermism — an ideology that Phil Torres (a former longtermist himself) has described as “one of the most influential ideologies that few people outside of elite universities and Silicon Valley have ever heard about.”
Longtermism originates from the same spaces and places as effective altruism, including the rationalist community and online blogs like LessWrong. If we can survive the next few hundred years, colonize other worlds, and learn to simulate conscious beings with computers, longtermists say, there could be a lot of people that exist in the future. The high end of the range is 10^58 (10 billion trillion trillion trillion trillion), but most say there could be at least quad- or quintillions. If all beings are equally important, regardless of when or where they exist, then doing something right now that has a tiny probability (say, a one in one quintillion chance) of affecting a tiny fraction of the future people (0.00000000000000000000000000001% — that’s 28 zeros), could still potentially change the lives of more than 10 billion people, more than the nearly 8 billion people existing on the planet today.
From this time-and-space-agnostic view, the current state of the world and the humans in it begins to seem minuscule, a grain of sand on the beach of a future that may span galaxies and trillions of years.
With this perspective, new priorities emerge. Instead of focusing on the material inequities of our world, longtermists think that the way to do the most good in the long-term is to focus on the things that could prevent this unthinkably large set of futures from coming to pass. Thus, we ought to focus on studying and reducing existential risks — potential developments that could wipe humanity out completely or permanently constrain humanity before it achieves its full potential. Existential risks include global totalitarian governments, deadly pandemics, asteroids, nuclear wars, misaligned hyper-intelligent AI systems that destroy human civilization, and other unspecified horrors.
“Strong longtermism” is a variant of longtermism advanced by Hilary Greaves and William MacAskill that argues that, “for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years, focusing primarily on the further-future effects.” An extended quotation from their paper is illustrative of the impacts that these ideologies can have. Let’s say Shivani, a philanthropic donor, wants to donate $10,000 to the cause that will do the most good:
Suppose, for instance, Shivani thinks there’s a 1% probability of a transition to a world government in the next century, and that $1 billion of well-targeted grants… would increase the well-being in an average future life, under the world government, by 0.1% with a 0.1% chance of that effect lasting until the end of civilisation, and that the impact of grants in this area is approximately linear with respect to the amount of spending. Then, using [a] figure of one quadrillion lives to come, the expected good done by Shivani contributing $10,000 to this goal would… be 100 lives. In contrast, funding for the Against Malaria Foundation, often regarded as the most cost-effective intervention in the area of short-term global health improvements, on average saves one life per $3500.
In simple terms, Shivani can save 35 expected future lives for each current life she can save. In this instance, the premier example of donating based on effective altruist principles is utterly ineffective compared to the logic of longtermism and existential risk. The idea that studying existential risk and reducing it by a fraction of a percent could improve the lives of untold future millions is a powerful one. Of course, longtermism is not the principal motivation for most effective altruists, and there are gradations of how far one can subscribe to this argument. Regardless, in recent years, it has increasingly begun to drive giving, set priorities, and define the movement as a whole, prompting some to ask if effective altruism is just longtermism now.
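The expected-value arithmetic in the Shivani passage can be checked directly. A minimal sketch in Python, using only the figures quoted from the Greaves and MacAskill paper (variable names are mine, for illustration):

```python
# Expected-value arithmetic from the quoted "Shivani" example.

donation = 10_000            # Shivani's donation, in dollars
grant_pool = 1_000_000_000   # $1 billion of well-targeted grants
p_world_govt = 0.01          # 1% probability of a transition to world government
wellbeing_gain = 0.001       # 0.1% increase in average future well-being
p_lasting = 0.001            # 0.1% chance the effect lasts until civilisation's end
future_lives = 1e15          # one quadrillion lives to come

# Impact is assumed linear in spending, so Shivani funds
# donation/grant_pool of the full effect.
expected_lives = (p_world_govt * wellbeing_gain * p_lasting
                  * (donation / grant_pool) * future_lives)
print(expected_lives)        # ≈ 100 expected future lives

# Against Malaria Foundation benchmark: one life saved per $3,500.
amf_lives = donation / 3_500
print(round(amf_lives, 2))   # ≈ 2.86 current lives

print(round(expected_lives / amf_lives))  # ≈ 35, the ratio the essay cites
```

The probabilities simply multiply through, which is exactly the feature the rest of the essay objects to: arbitrarily small probabilities are rescued by arbitrarily large populations.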
Why understanding effective altruism and longtermism is important
Effective altruism and longtermism are ideologies that are increasingly influential among the richest men and the most prestigious institutions in the world, shaping policy and capital allocation. The movement has shifted to pushing young adherents towards careers in government, with a focus on reducing existential risks through policy. These recent developments take our discussion away from questions of rhetoric and morals (for the moment) and squarely into material considerations.
Longtermism and existential risk are particularly influential ideologies among those who made fortunes in technology and in elite institutions. Elon Musk has cited the work of Nick Bostrom (who coined the term existential risk in 2002) and has donated millions to the Future of Humanity Institute and Future of Life Institute, sister organizations based out of Oxford. Jaan Tallinn, a founder of Skype worth an estimated $900 million in 2019, also cofounded the Center for the Study of Existential Risk at Cambridge, and has donated more than a million dollars to the Machine Intelligence Research Institute (MIRI). Vitalik Buterin, a cofounder of the Ethereum cryptocurrency, has donated extensively to MIRI as well. Peter Thiel, the radical libertarian donor, early Trump supporter, and funder of JD Vance’s Ohio Senate campaign, delivered the keynote address at the 2013 Effective Altruism summit.
Longtermism is also increasingly popular among rank-and-file effective altruists, to the point where many consider them to be synonymous. According to data from the Open Philanthropy Grants database, in 2021 effective altruists donated $92 million to AI risk research, $21 million to biosecurity and pandemic preparedness, and $10.5 million to global catastrophic risk research. Altogether, this $125 million towards longtermist existential risk research represents a larger slice of donations than any other individual cause. And the allure of AGI (Artificial General Intelligence) — a major focus/fear of effective altruism and longtermism — is especially clear in industry, where multiple startups and big tech companies pour billions of dollars into research and development.
These bureaucrats, donors, research institutes, and companies are by no means an ideological monolith, nor do they necessarily represent the beliefs of the average effective altruist. However, this web of entities has one key feature — intellectual, institutional, and financial capital. A relatively small cadre of longtermist academics housed within and legitimized by influential institutions can advance ideas that guide how governments and venture capitalists think about and shape the future.
Towards ineffective altruism
So far, in the spirit of critique, I’ve laid out the philosophical underpinnings of the effective altruism and longtermism movements and the material superstructures that have arisen from those foundations over the past two decades.
It seems to me that the seemingly limitless bounds of longtermism are ultimately a moral carte blanche on anything we do (except make the species extinct). It’s easy to see how this position can ultimately lead to reprehensible outcomes. Just this week, 80,000 Hours released a piece that argues for effective altruists to not focus their careers on climate change — a process which will uproot hundreds of millions of mostly non-white poor people and cause billions to experience chronic water scarcity — because it has a low chance of becoming uncontrollable and turning Earth into Venus. Other longtermists worry that their ideology would provide rationalizations for genocide if political leaders took it literally. Mathematical statistician Olle Häggström, usually a proponent of longtermism, imagines
a situation where the head of the CIA explains to the US president that they have credible evidence that somewhere in Germany, there is a lunatic who is working on a doomsday weapon and intends to use it to wipe out humanity, and that this lunatic has a one-in-a-million chance of succeeding. They have no further information on the identity or whereabouts of this lunatic. If the president has taken Bostrom’s argument to heart, and if he knows how to do the arithmetic, he may conclude that it is worthwhile conducting a full-scale nuclear assault on Germany to kill every single person within its borders.
Besides the moral hazards of advocating these positions, these ideologies provide an overly simplistic formula for doing good: 1) define “good” as a measurable metric, 2) find the most effective means of impacting that metric, and 3) pour capital into scaling those means up.
But following the formula of effective altruism is clearly not all that being good requires. There are boundless ways of doing good that are fundamentally immeasurable or, if they are measurable, may not be optimized. Nevertheless, this universe of actions demands our consideration. To follow in the footsteps of Timnit Gebru (and to be purposefully contrarian), let’s call the philosophy of seriously considering the merits of doing good immeasurably or suboptimally ineffective altruism.
Ineffective altruism might look like giving $10 to a houseless person who asks for it. It might look like organizing to ensure that as many people as possible have access to basic material needs like food, housing, and healthcare. It might look like the ephemeral work of knitting a social and political community together. After all, how can one quantify the resiliency of a particular neighborhood? None of these actions would be particularly “effective,” and yet they might also have more of a tangible impact than unknowably reducing an existential risk by some fraction of a percentage point. They also show an understanding of one’s responsibilities to their community, how strengthening community is also important for our shared future, even if it isn’t measurable.
Ineffective altruism eschews metrics, because “What does doing good look like?” should be a continuously-posed question rather than an optimization problem. As an ideology of allocating resources, it is recognized as explicitly political, rather than cloaking itself in the discourse of science and rationality. It allows us to get outside of the concept of altruism entirely — a concept that feels limiting in its focus on the actions of the individual — and instead consider a paradigm of collective, democratic mutual aid. Most importantly, ineffective altruism allows us to ask harder questions than effective altruism does: questions about who and what we value.
What might “moral good” look like outside of market-derived values (like the maximization principle)? How can we collectively decide to allocate resources? How can we build societies based on principles that cannot be measured, like mutual respect and solidarity? How can we eliminate material misery from the world? What might we do to ensure the flourishing of future generations, rather than just their survival? How can we depart from a society where those who have the privilege to choose to care about others can, and move towards a society where everyone has the power to care about others and must?
People all over the world have been attempting to answer these questions for generations. After massive street protests in 2019 in Chile, 80% of the population voted to redraft the nation’s constitution — an effort that is currently in progress and will be finalized this September. In Taiwan, Digital Minister Audrey Tang is building effective tools for building consensus and making decisions online. Tang helped enable a highly effective set of COVID-19 policies that kept the disease largely outside Taiwan for more than two years, influenced what digital democracy looks like on the island, and inspired other online civil processes around the world. And in the United States, the last few years have seen rising interest in small-d democratic institutions like labor unions and mutual aid organizations. These efforts may be inefficient or messy or unpredictable, but are good in part because of those facts, not in spite of them.
Philosopher Karl Popper wrote about the dangers of an exclusive focus on the utopian ideal of the far future over the material concerns of the present day:
We must not argue that a certain social situation is a mere means to an end on the grounds that it is merely a transient historical situation. For all situations are transient. Similarly we must not argue that the misery of one generation may be considered as a mere means to the end of securing the lasting happiness of some later generation or generations; and this argument is improved neither by a high degree of promised happiness nor by a large number of generations profiting by it. All generations are transient. All have an equal right to be considered, but our immediate duties are undoubtedly to the present generation and to the next.
When we critically examine effective altruism and longtermism, we can see them as falsely utopian ideologies cloaked in the opaque vocabulary of science and math. Let’s instead strive for a world where altruism doesn’t have to be maximally effective for it to be worthy, where doing good doesn’t have to be optimized, where morals aren’t a function of the market.