The Paradox at the Heart of Elon Musk’s OpenAI Lawsuit

It would be easy to dismiss Elon Musk’s lawsuit against OpenAI as a case of sour grapes.

Mr. Musk sued OpenAI this week, accusing the company of breaching the terms of its founding agreement and violating its founding principles. In his telling, OpenAI was established as a nonprofit that would build powerful A.I. systems for the good of humanity and give its research away freely to the public. But Mr. Musk argues that OpenAI broke that promise by starting a for-profit subsidiary that took on billions of dollars in investments from Microsoft.

An OpenAI spokeswoman declined to comment on the suit. In a memo sent to employees on Friday, Jason Kwon, the company’s chief strategy officer, denied Mr. Musk’s claims and said, “We believe the claims in this suit may stem from Elon’s regrets about not being involved with the company today,” according to a copy of the memo I viewed.

On one level, the lawsuit reeks of personal beef. Mr. Musk, who founded OpenAI in 2015 with a group of other tech heavyweights and provided much of its initial funding, left the company in 2018 over disputes with its leadership, and he resents being sidelined in the conversation about A.I. His own A.I. projects haven't gotten nearly as much traction as ChatGPT, OpenAI's flagship chatbot. And Mr. Musk's falling-out with Sam Altman, OpenAI's chief executive, has been well documented.

But amid all of the animus, there’s a point that is worth drawing out, because it illustrates a paradox that is at the heart of much of today’s A.I. conversation — and a place where OpenAI really has been talking out of both sides of its mouth, insisting both that its A.I. systems are incredibly powerful and that they are nowhere near matching human intelligence.

The claim centers on a term known as A.G.I., or “artificial general intelligence.” Defining what constitutes A.G.I. is notoriously tricky, although most people would agree that it means an A.I. system that can do most or all things that the human brain can do. Mr. Altman has defined A.G.I. as “the equivalent of a median human that you could hire as a co-worker,” while OpenAI itself defines A.G.I. as “a highly autonomous system that outperforms humans at most economically valuable work.”

Most leaders of A.I. companies claim not only that A.G.I. is possible to build, but also that it is imminent. Demis Hassabis, the chief executive of Google DeepMind, told me in a recent podcast interview that he thought A.G.I. could arrive as soon as 2030. Mr. Altman has said that A.G.I. may be only four or five years away.

Building A.G.I. is OpenAI’s explicit goal, and it has lots of reasons to want to get there before anyone else. A true A.G.I. would be an incredibly valuable resource, capable of automating huge swaths of human labor and making gobs of money for its creators. It’s also the kind of shiny, audacious goal that investors love to fund, and that helps A.I. labs recruit top engineers and researchers.

But A.G.I. could also be dangerous if it’s able to outsmart humans, or if it becomes deceptive or misaligned with human values. The people who started OpenAI, including Mr. Musk, worried that an A.G.I. would be too powerful to be owned by a single entity, and that if they ever got close to building one, they’d need to change the control structure around it, to prevent it from doing harm or concentrating too much wealth and power in a single company’s hands.

Which is why, when OpenAI entered into a partnership with Microsoft, it specifically gave the tech giant a license that applied only to “pre-A.G.I.” technologies. (The New York Times has sued Microsoft and OpenAI over use of copyrighted work.)

According to the terms of the deal, if OpenAI ever built something that met the definition of A.G.I. — as determined by OpenAI’s nonprofit board — Microsoft’s license would no longer apply, and OpenAI’s board could decide to do whatever it wanted to ensure that OpenAI’s A.G.I. benefited all of humanity. That could mean many things, including open-sourcing the technology or shutting it off entirely.

Most A.I. commentators believe that today’s cutting-edge A.I. models do not qualify as A.G.I., because they lack sophisticated reasoning skills and frequently make bone-headed errors.

But in his legal filing, Mr. Musk makes an unusual argument. He argues that OpenAI has already achieved A.G.I. with its GPT-4 language model, which was released last year, and that future technology from the company will even more clearly qualify as A.G.I.

“On information and belief, GPT-4 is an A.G.I. algorithm, and hence expressly outside the scope of Microsoft’s September 2020 exclusive license with OpenAI,” the complaint reads.

What Mr. Musk is arguing here is a little complicated. Basically, he’s saying that because it has achieved A.G.I. with GPT-4, OpenAI is no longer allowed to license it to Microsoft, and that its board is required to make the technology and research more freely available.

His complaint cites the now-infamous “Sparks of A.G.I.” paper by a Microsoft research team last year, which argued that GPT-4 demonstrated early hints of general intelligence, among them signs of human-level reasoning.

But the complaint also notes that OpenAI’s board is unlikely to decide that its A.I. systems actually qualify as A.G.I., because as soon as it does, it has to make big changes to the way it deploys and profits from the technology.

Moreover, he notes that Microsoft — which now holds a nonvoting observer seat on OpenAI's board, after an upheaval last year that resulted in the temporary firing of Mr. Altman — has a strong incentive to deny that OpenAI's technology qualifies as A.G.I. Such a finding would end Microsoft's license to use that technology in its products, and jeopardize potentially huge profits.

“Given Microsoft’s enormous financial interest in keeping the gate closed to the public, OpenAI, Inc.’s new captured, conflicted and compliant board will have every reason to delay ever making a finding that OpenAI has attained A.G.I.,” the complaint reads. “To the contrary, OpenAI’s attainment of A.G.I., like ‘Tomorrow’ in ‘Annie,’ will always be a day away.”

Given Mr. Musk's track record of questionable litigation, it's easy to question his motives here. And since he runs a competing A.I. start-up, it's not surprising that he'd want to tie OpenAI up in court. But his lawsuit points to a real conundrum for OpenAI.

Like its competitors, OpenAI badly wants to be seen as a leader in the race to build A.G.I., and it has a vested interest in convincing investors, business partners and the public that its systems are improving at breakneck pace.

But because of the terms of its deal with Microsoft, OpenAI’s investors and executives may not want to admit that its technology actually qualifies as A.G.I., if and when it actually does.

That has put Mr. Musk in the strange position of asking a jury to rule on what constitutes A.G.I. and to decide whether OpenAI's technology has met that threshold.

The suit has also placed OpenAI in the odd position of downplaying its own systems’ abilities, while continuing to fuel anticipation that a big A.G.I. breakthrough is right around the corner.

“GPT-4 is not an A.G.I.,” Mr. Kwon of OpenAI wrote in the memo to employees on Friday. “It is capable of solving small tasks in many jobs, but the ratio of work done by a human to the work done by GPT-4 in the economy remains staggeringly high.”

The personal feud fueling Mr. Musk’s complaint has led some people to view it as a frivolous suit — one commenter compared it to “suing your ex because she remodeled the house after your divorce” — that will quickly be dismissed.

But even if it gets thrown out, Mr. Musk’s lawsuit points toward important questions: Who gets to decide when something qualifies as A.G.I.? Are tech companies exaggerating or sandbagging (or both), when it comes to describing how capable their systems are? And what incentives lie behind various claims about how close to or far from A.G.I. we might be?

A lawsuit from a grudge-holding billionaire probably isn’t the right way to resolve those questions. But they’re good ones to ask, especially as A.I. progress continues to speed ahead.

Liam Garrison