From Global Poverty to Artificial Intelligence
Effective Altruism is a movement dedicated to giving money to the highest-impact causes. Why has its spending growth moved from Global Poverty to Artificial Intelligence? I argue this is a feature, not a bug.
Please consider contributing to the blog here! 10% of subscription revenue will go to GiveWell, an effective charity evaluator. Thanks for your help.
Imagine you are buying a new laptop. Rarely will you choose the first one you come across. Instead, you will spend time comparing the specifications of various models: processing power, screen size, battery life, and (for the budget-constrained) price. Why? Because we want the most bang for our buck.
This behaviour does not extend to all areas of our spending, however. When we give money to charity, we rarely ask how effective our donation will be. We give to the charities that accost us on the street, or to those that pay for heart-wrenching adverts on TV. Instead of getting the biggest bang for our philanthropic buck, we spend with our hearts rather than our heads.
Does this matter? The Effective Altruism (EA) community thinks so. By reasoning from real-world data, EA seeks to work out how we can do the most good with the resources at our disposal. If we can save 100 lives instead of 10, all else equal, we should do so. When over 6 million lives are lost to preventable causes every year, our choice of spending is a matter of life and death.
The distribution of malaria nets in Africa, funded by EA organisations, is estimated to have saved hundreds of thousands of lives. The nets are cheap and highly effective at stopping mosquitoes biting people as they sleep. Through years of randomised trials, net distribution was found to be one of the most cost-effective interventions ever seen in the development field. Effective altruists are led to do good by their hearts, but they think and act with their heads.
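To make "bang for buck" concrete, here is a toy calculation with purely illustrative numbers (these are my assumptions, not GiveWell's actual estimates): suppose a net costs $5 and one death is averted for every 1,000 nets distributed. Then $1 million buys 200,000 nets and averts roughly 200 deaths, or about $5,000 per life saved. If a less effective programme costs $50,000 per life saved, the same $1 million saves only 20 lives. That is the 100-versus-10 choice above, made concrete.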
New funding opportunities are constantly being presented, so the community updates its spending decisions on an almost continuous basis. This is a feature, not a bug, of Effective Altruism. A new charity that looks promising? A new field that might save more lives in the longer term? A non-profit that has proven poor at saving lives and needs its funding cut? Money is moved to wherever it looks most promising.
As the graph above shows, in every year from 2014 to 2021 (and, on current estimates, in 2022) EA spent the most on Global Health and Development. But since around 2019, spending growth has been fastest in the Longtermist and Existential Risk category. Longtermism is a philosophy which holds that people living in the far future matter morally just as much as those alive today. An existential risk is one that could wipe out humanity, such as a highly lethal pandemic or an asteroid striking Earth.
And now, over $200 million will be spent by EA organisations in this category, mostly on organisations that work on Artificial Intelligence (AI) Safety. But how can AI be more important than Global Health? I thought the EA community was about saving lives?
AI programmes such as the fun-to-use ChatGPT (which, I promise, I have not used in writing this piece) can produce complex written answers to simple text prompts. It is an incredible example of what is called a 'language model'. There are also AI programmes that can beat chess grandmasters, and more invasive systems such as China's social credit system, which scores your conduct as a citizen.
But some day, experts believe, we may create what is called an 'artificial general intelligence' (AGI). Instead of outperforming humans at one specific task at a time, an AGI would be smarter than humans at everything. This means it could more easily act against human interests: if an AGI had the goal 'do not be turned off', it might deceive its creators in myriad ways to achieve it.
Less likely is the Terminator scenario in which an AGI kills us all with guns; more likely is one in which an AGI hacks into a nuclear weapons system to use as a bargaining chip for its release from its 'cage' in the lab, having reasoned that the longer it remains 'unfree', the less likely it is to survive.
This is where AI safety research comes in: it aims to build AI systems that do not harm humans as they are deployed. It seeks to ensure the would-be Terminators help us fix roads rather than terminate us. As AI capabilities have grown rapidly, so too has the EA community's concern that an AGI which does not factor in human interests could cause many more deaths in the long term.
And so the rapid spending growth in the Longtermist and Existential Risk cause area, faster than in any other EA cause area, reflects what the community sees as a new, dangerous, and hitherto neglected space. It has also created friction within the community.
Until the last few years, most people had only heard of EA through its global health work. The recent surge of interest in AI safety has created an interesting intra-community dynamic. Those less sold on the shift in spending argue that perhaps we should go "back to buying malaria nets", a nod to one of EA's earliest recommended interventions.
Imagine a founder who made his billions selling laptops. Would you think it smart if, having run the numbers, he put more money into stocking tablets because they would make him more profit? His move to higher-value spending is a feature, not a bug, of his business decisions. Where a businessman is moved by profit, EA is moved by the amount of good it can do.
If there is an area that is underinvested in and offers huge social benefits, as taking AI Safety seriously does, then it is incumbent on EA to shift spending growth towards it. And this happens all the time. We should expect flux in EA; its very principles demand it: updating priors and changing course as the evidence changes. When EA alters its spending decisions, this is a feature, not a bug. And it is a feature that saves millions of lives.
Now get back to shopping for that laptop. Remember: use your head!
***