Can AI Hallucinations Be Tamed for Investing Models?

Generative artificial intelligence has a reliability problem. Here’s how investors can gain confidence in portfolios that deploy the technology.

As generative artificial intelligence (GAI) gains popularity, the technology’s tendency to fabricate responses remains a big flaw. We believe specialist models can be designed to reduce hallucinations and improve AI’s accuracy and efficacy for use in investing applications.

If you’ve played with ChatGPT or GAI-driven applications over the last year, you’ve probably been both amazed and skeptical. The technology has dazzled us with its ability to write smart summaries, compose poetry, tell jokes and answer questions on a range of topics in remarkably well-written prose. Yet it also tends to fabricate information, between 3% and 27% of the time depending on the model, according to one study by AI start-up Vectara. While this defect may be tolerable in entertainment applications, GAI’s hallucinations must be tamed for investors to gain a high level of confidence in its output for portfolios.

Why Does GAI Hallucinate?

The magic of GAI happens in large language models (LLMs). LLMs are algorithms, based on deep learning technology, that can recognize, summarize, translate, predict and generate text and other forms of content. The knowledge that drives these models is based on massive datasets and the statistical probabilities of words and word sequences occurring in a particular context.
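
To make that statistical idea concrete, here is a minimal, illustrative sketch in Python: a toy bigram model that estimates the probability of the next word purely from counts of adjacent word pairs in a tiny, invented corpus. The corpus and figures are made up for illustration; production LLMs use deep neural networks over far longer contexts, but the core principle of predicting likely continuations is the same.

```python
from collections import Counter, defaultdict

# Toy corpus, invented for illustration only.
corpus = "the fund outperformed the benchmark and the fund grew".split()

# Count how often each word follows each preceding word (bigrams).
follow_counts: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1  # tally each observed word pair

def next_word_probs(word: str) -> dict[str, float]:
    """Return P(next word | word) estimated from the toy corpus."""
    counts = follow_counts[word]
    if not counts:
        return {}  # unseen word: no evidence (see the next sketch)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))
# {'fund': 0.667, 'benchmark': 0.333}: 'fund' follows 'the' twice as often
```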

But building such broad models comes at a cost. LLMs are generalists: they are trained on generic data scraped from across the internet, with no fact-checking of sources. These models may also fail when faced with unfamiliar data that were not included in training. And depending on how the user prompts the model, it may come up with answers that are simply not true, as the sketch below illustrates.
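
The toy bigram model makes the unfamiliar-data problem easy to see. In this hedged sketch, repeating the same invented corpus so it runs on its own, a query word that never appeared in training leaves the model with no grounded basis for an answer; an honest system would abstain, while a generative one, by design, produces a plausible-sounding continuation anyway.

```python
from collections import Counter, defaultdict

# Same toy bigram setup as above, repeated so this sketch runs on its own.
corpus = "the fund outperformed the benchmark and the fund grew".split()
follow_counts: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def most_likely_next(word: str) -> str | None:
    """Return the most likely next word, or None when there is no evidence."""
    counts = follow_counts[word]
    if not counts:
        # The word never appeared in training: there is no grounded answer.
        # An honest model abstains here; a generative one answers anyway.
        return None
    return counts.most_common(1)[0][0]

print(most_likely_next("fund"))         # 'outperformed': seen in training
print(most_likely_next("derivatives"))  # None: no training evidence at all
```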

Fixing hallucinations is a major focus for GAI providers seeking to boost confidence in, and commercialization of, the technology. For investment applications, we believe the key to solving the problem is to create specialist models that can improve the output. These smaller models, known as knowledge graphs (KGs), are built on narrower, clearly defined datasets. KGs use graph-based technology, a type of machine learning that improves a model’s ability to capture reliable relationships and patterns.
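
As a rough illustration of the grounding idea, and not of any particular vendor’s implementation, a knowledge graph can be pictured as a set of curated (subject, relation, object) facts that a system queries directly, abstaining whenever no matching fact exists. The company and facts below are entirely hypothetical.

```python
# Hypothetical knowledge-graph facts as (subject, relation, object) triples.
# AcmeCorp and every fact below are invented for illustration only.
facts = {
    ("AcmeCorp", "sector", "semiconductors"),
    ("AcmeCorp", "headquarters", "Austin"),
    ("AcmeCorp", "ceo", "J. Doe"),
}

def query(subject: str, relation: str) -> str | None:
    """Answer only from curated facts; abstain when the graph is silent."""
    for s, r, o in facts:
        if s == subject and r == relation:
            return o
    return None  # no matching fact: abstain rather than fabricate

print(query("AcmeCorp", "ceo"))      # 'J. Doe': grounded in a stored fact
print(query("AcmeCorp", "revenue"))  # None: the graph holds no such fact
```

Because every answer must correspond to a stored, verified fact, the failure mode shifts from fabrication to abstention, a far easier problem to manage in an investment process.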