OpenAI’s power brokers seem to have decided that the quickest fix for last week’s dysfunction is to borrow a page from corporate America’s playbook by adding some establishment figures to its board.
The company’s initial slate of new directors includes some of the archetypes that make its boardroom look much more like everyone else’s: a well-regarded technology executive in Bret Taylor, Salesforce Inc.’s former co-chief executive officer, and a bigwig economist in Larry Summers, the former Treasury secretary. The two join Quora CEO Adam D’Angelo, the one holdover from the old group of directors that briefly ousted co-founder and CEO Sam Altman.
The board’s remaking in the traditional corporate mold is being framed by some as the beginning of adult supervision at OpenAI. (So far, at least, candidates who might add some diversity to the all-male roster apparently don’t fall into that category.) But it’s not yet clear that this board composition, or any board structure for that matter, can oversee Altman and his highly paid and devoted employees as they chase something with the potential to destroy humanity.
More important, the question of what oversight should look like at OpenAI has implications that stretch beyond the company and the artificial intelligence community. OpenAI was set up as a “humanity scale endeavor pursuing broad benefit for humankind.” Not every company aims for such lofty stakes, but it’s no longer out of the ordinary for founders and CEOs to try to build a money-making endeavor alongside a social mission: an attempt to tackle issues of public good that government simply cannot or will not address. The OpenAI debacle is a clear warning sign that the governance of these kinds of complex enterprises needs to be sorted out. “The key question is how do we do this,” said Emilie Aguirre, a professor at Duke Law School who researches companies that pursue both social purpose and profit. “No one has figured out a great or reliable way.”
Altman’s attempt to solve this problem at OpenAI was to structure his project as a nonprofit. But tech talent, especially in a hot field like AI, is expensive. When the money ran out, OpenAI started a for-profit arm, overseen by the nonprofit’s board and legally bound to pursue the nonprofit’s original goal, an arrangement that shoehorned the money-seeking piece of the enterprise into the old governance structure. It was clearly not the most graceful solution, but it worked just fine until the money and the mission came into conflict.