In an age of accelerating progress in artificial intelligence, everyone is debating AI’s implications for the labor market or national security. There is far less discussion of what AI could or should mean for philanthropy.
Many (not all) insiders now say AGI — artificial general intelligence — stands a good chance of happening in the next few years. AGI is a generative AI model that could, on intellectually oriented tests, outperform human experts on 90% of questions. That doesn’t mean AI will be able to dribble a basketball, make GDP grow by 40% a year or, for that matter, destroy us. Still, AGI would be an impressive accomplishment — and over time, however slowly, it will change our world.
For purposes of objectivity, I will put aside universities, where I work, and consider other areas in which philanthropic returns will become higher or lower.
One big change is that AI will enable individuals, or very small groups, to run large projects. By directing AIs, they will be able to create entire think tanks, research centers or businesses. The productivity of small groups of people who are very good at directing AIs will go up by an order of magnitude.
Philanthropists ought to consider giving more support to such people. That is difficult, of course, because right now there are no simple or obvious ways to measure those skills. But that is precisely why philanthropy might play a useful role. More commercially oriented businesses may shy away from making such investments, both because of the risk and because the returns are uncertain. Philanthropists do not face such financial constraints.
Another possible new avenue for philanthropy in a world of AI, as odd as it may sound: intellectual branding. As quality content becomes cheaper to produce, how it is presented and curated (with the help of AI, naturally) will become more important. Some media properties and social influencers already have reputations for trustworthiness, and they will want to protect and maintain them. But if someone wanted to create a new brand name for trustworthiness, and had a sufficiently good plan to do so, they should receive serious philanthropic consideration.
Then there is the matter of AI systems themselves. Philanthropy could buy good AI systems, or better ones, for people, schools and other institutions in very poor countries. A decent AI in a school or municipal office in, say, Kenya, can serve as translator, question-answerer, lawyer and sometimes medical diagnostician. It is not yet clear exactly what those services might cost, but in most very poor countries there will be significant lags in adoption, due in part to affordability.
A good rule of thumb might be that countries that cannot always afford clean water will also have trouble affording advanced AI systems. One difference is that the near ubiquity of smartphones might make AI easier to provide.
Strong AI capabilities also mean that the world might be much better over some very long time horizon, say 40 years hence. Perhaps there will be amazing new medicines that otherwise would not have come to pass, and as a result people might live 10 years longer. That increases the return — today — to fixing childhood maladies that are hard to reverse. One example would be lead poisoning in children, which can lead to permanent intellectual deficits. Another would be malnutrition. Addressing those problems was already a very good investment, but the brighter the world’s future looks, and the better the prospects for our health, the higher those returns.
The flip side is that reversible problems should probably decline in importance. If we can fix a particular problem today for $10 billion, maybe in 10 years’ time — due to AI — we will be able to fix it for a mere $5 billion. So it will become more important to figure out which problems are truly irreversible. Philanthropists ought to be focused on long time horizons anyway, so they need not be too concerned about how long it will take AI to make our world a fundamentally different place.
For what it’s worth, I did ask an AI for the best answer to the question of how it should change the focus of philanthropy. It suggested (among other ideas) more support for mental health, more work on environmental sustainability and improvements to democratic processes. Sooner rather than later, we may find ourselves taking its advice.
Bloomberg News provided this article. For more articles like this, please visit bloomberg.com.
Read more articles by Tyler Cowen