According to Gary Gensler, chair of the US Securities and Exchange Commission, a market crash caused by artificial intelligence is “nearly unavoidable.” Like many other regulators, he has called for new rules on AI to prevent such dire scenarios.
Such fears are considerably exaggerated. It is true that AI might cause a market crash, just as many other events, some of them quite arbitrary or unexpected, have led to market downturns. On net, though, AI probably lowers the chances of a crash.
One fear is that a small number of AI base models could lead investors into herd behavior, with many of them selling (or buying) at the same time because their models told them to. But the number of base models is likely to rise over time, not fall. AI is in a period of rapid innovation, with many startups being founded and many new trading and investing techniques being developed. Diversity, not uniformity, will reign.
A trading firm has no incentive to use the same model as everyone else, since that could lead it to sell into panicked, falling markets or buy into temporarily rising prices, which is precisely what it should not do. Instead, a top trading firm will try to develop better models than its competitors. If a firm discovers that competitors are using a common model in a predictable way, it can identify that model’s weaknesses and trade against those firms.
Insofar as regulators try to exercise more control over the market, they raise compliance costs and impose legal burdens on firms. That favors larger incumbents, whether in trading or in the provision of AI services. In other words, regulation tends to decrease rather than increase the number and diversity of techniques and programs in the market. That is one reason regulation is ill-suited to addressing potential over-centralization.
When it comes to Wall Street, AI and, more generally, quantitative techniques are nothing new. It is not obvious that recent advances in large language models will fundamentally change the basic situation in securities markets.
For all the quant techniques on Wall Street, share price volatility in recent years has been low. And what volatility there has been probably owed more to the pandemic and its aftermath than to trading techniques or quantitative analysis.
Quant techniques probably did cause the “flash crash” of 2010. Yet that episode also shows the self-limiting nature of purely “technical” market crashes. The Dow fell almost 1,000 points, but the entire episode lasted only 36 minutes, as other traders stepped in to buy at temporarily low prices. In addition, the initiating factor behind the crash was probably the “spoofing” techniques of a single trader, who tried to trick the market into overreacting in a particular direction. That tactic is illegal under current law, as it should be.
It is always possible that some future development in AI will lead to an entirely new calculus in markets and cause some flash crashes. Yet the more general point stands: Market participants will use quantitative techniques to try to identify which price movements are temporary or unjustified. That doesn’t mean AI will always operate for the better, but it has some fundamental stabilizing properties in public markets.
One piece of good news is that AI is likely to boost productivity and therefore be good for stock prices. Bull markets tend to have less volatility than bear markets, and even if there is some volatility, investors may find it easier to endure because they have made money.
AI, and software more generally, does reflect some problems with the current model of regulation. The US system is basically designed around regulating well-identified intermediaries: The SEC regulates brokerage houses, the Federal Reserve regulates banks, the Food and Drug Administration regulates pharmaceutical companies, and so on.
As software comes to play a more independent, active role in market outcomes, regulation becomes more difficult. Software is not readily transparent to outsiders, or sometimes even to insiders. It is hard to assess whether a particular piece of software will do what it is supposed to do. If that is the concern, a better response would be to increase capital requirements, so that market players have more protection if something goes wrong.
Regulators are like most people: They cannot be expected to know where AI is heading. Nor can they be expected to arrive in advance with rules that make everything come out just right. Far better to focus on general remedies that protect the solvency of intermediaries.