AI’s Slowdown Is Everyone Else’s Opportunity

The multi-trillion-dollar artificial intelligence boom was built on the certainty that generative models would keep getting exponentially better. Spoiler alert: they aren’t.

In simple terms, “scaling laws” said that if you threw more data and computing power at an AI model, its capabilities would continuously grow. But a recent flurry of press reports suggests that’s no longer the case, and AI’s leading developers are finding their models aren’t improving as dramatically as they used to.
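To see why that matters, it helps to know how precise the original claim was. In the studies that popularized the idea, a model’s test loss falls as a smooth power law in its parameter count and training data, roughly of this shape (one widely cited formulation; the symbols and constants here are illustrative rather than drawn from any single paper):

L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}

where N is the number of parameters, D the number of training tokens, and E, A, B, alpha and beta are constants fitted to experiments. Diminishing returns are baked into that curve: each doubling of model size or data buys a smaller drop in loss than the last. The worry now is that real-world gains are flattening even faster than the curve predicts.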

OpenAI’s Orion isn’t that much better at coding than the company’s last flagship model, GPT-4, according to Bloomberg News, while Alphabet Inc.’s Google is seeing only incremental improvements to its Gemini software. Anthropic, a major rival to both companies, is behind schedule on the release of its long-awaited Claude model.

Executives at OpenAI, Anthropic and Google all told me without hesitation in recent months that AI development was not plateauing. But they would say that. The truth is that long-held fears of diminishing returns for generative AI, predicted even by Bill Gates, are becoming real. Ilya Sutskever, an AI icon who popularized the bigger-is-better approach to building large language models, recently told Reuters that scaling had leveled off. “The 2010s were the age of scaling,” he said. “Now we’re back in the age of wonder and discovery once again.”

“Wonder and discovery” puts quite the positive spin on “we have no idea what to do next.” It could also, understandably, spark anxiety among the investors and businesses expected to spend $1 trillion on the infrastructure needed to deliver on AI’s promise to transform everything. Wall Street banks, hedge funds and private equity firms are spending billions to fund the buildout of vast data centers, according to a recent Bloomberg News investigation.

Does this all add up to a terrible gamble? Not exactly.