Apple Is Our Hope for Making AI More Private

There’s a price to pay for all the generative AI tools that professionals are using to make themselves more efficient. It’s not just a subscription fee to OpenAI, Microsoft Corp. or some other AI company — it’s their privacy too.

Every interaction with tools like ChatGPT requires an internet connection, and every query is processed on a server (essentially a much more powerful computer than the one you have at home) in a vast data center. Your conversation history, along with any personal information in it, often gets fed back to the AI model to train it further, a practice that has rankled some employers worried about data security.
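To see what that means in practice, here is a minimal sketch in Python of a cloud chat query, using OpenAI’s public chat-completions API. The key, model name and prompt are placeholders for illustration; the point is simply that the full text of your prompt travels over the internet to the provider’s servers.

```python
# Minimal sketch of a cloud AI query: the prompt text is sent over
# the internet to the provider's servers, where a large model running
# in a data center generates the reply.
import requests

API_KEY = "sk-..."  # placeholder; a real key identifies you to the provider

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            # Everything here, including any personal details you type,
            # is transmitted to and processed on remote servers.
            {"role": "user", "content": "Summarize this contract: ..."}
        ],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```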

But if the “brain” of an AI tool lived within your own computer instead of routing back to someone else’s, the lack of privacy might not be as much of a problem. The great hope for making this happen is Apple Inc.
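To make that concrete, here is a hedged sketch of local inference using the open-source llama-cpp-python library, assuming a quantized model file has already been downloaded (the file path is a placeholder). The entire computation runs on your own machine, with no network connection involved.

```python
# Sketch of on-device inference: the model weights live in a local file
# and all computation happens on your own hardware, so the prompt never
# leaves the device. Requires: pip install llama-cpp-python
from llama_cpp import Llama

# Placeholder path to a quantized model file downloaded in advance.
llm = Llama(model_path="./models/llama-7b-q4.gguf")

# The prompt is processed locally; nothing is sent over the internet.
output = llm("Summarize this contract: ...", max_tokens=256)
print(output["choices"][0]["text"])
```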

Apple famously isn’t first to the party when it comes to new technology. It was slow to enter the smartphone race, and it wasn’t the first to bring out a smartwatch, yet it now dominates both markets because it took the time to engineer its way toward the most user-friendly devices. It’s the same story with AI. Building the “smartest” generative AI is all about powerful chips, and while Nvidia Corp. dominates that market for cloud-based servers, Apple is well positioned to be the first to run generative AI successfully on smaller devices, even if it was caught off guard by the generative AI boom. Many companies are trying to figure out how to process AI on phones and laptops without having to connect to a server somewhere, but they’re hitting technical limitations that may be difficult to surmount without the resources of the $3 trillion hardware company in Cupertino.

For one thing, Apple has already been designing more powerful chips for its phones and laptops, including the M series chips in its Macs, which pair fast general-purpose processing with a dedicated Neural Engine for machine-learning tasks. The chips that Nvidia builds for processing AI on servers are still far more powerful, but that might be OK if AI models themselves become smaller and more efficient.

That points to another phenomenon driving the on-device trend: While leading AI builders like OpenAI and Alphabet Inc.’s Google are focused on making their models as big as possible, many others are trying to make them smaller and more energy-efficient. Over the past few years, OpenAI found that it could make its large language models sound much more fluent and humanlike simply by increasing the number of parameters (the numerical settings a model learns during training) and the amount of computing power they used, which requires huge cloud servers with powerful chips.
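The arithmetic behind that trade-off is simple: the memory a model needs scales with its parameter count and the precision at which each parameter is stored. Here is a back-of-the-envelope sketch (GPT-3’s widely reported 175 billion parameters come from the public record; the small-model figures are illustrative assumptions):

```python
# Back-of-the-envelope memory math for running a language model:
# memory needed is roughly parameter count times bytes per parameter.
def model_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    return params_billions * 1e9 * bytes_per_param / 1e9  # gigabytes

# A GPT-3-class model (175 billion parameters) stored at 16-bit
# precision (2 bytes each) needs data-center hardware:
print(model_memory_gb(175, 2))   # 350.0 GB, far beyond any laptop

# A small 7-billion-parameter model, compressed ("quantized") to
# 4 bits (half a byte) per parameter, fits in a laptop's memory:
print(model_memory_gb(7, 0.5))   # 3.5 GB
```

That kind of shrinking is what could put capable models within reach of the chips Apple already ships.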