Call it the “Burning Man” theory of technology. Every so often, the hopes and dreams of a technology visionary are nearly snuffed out by those around them. In 1985 Steve Jobs was fired from Apple, the company he fathered, and did not return for 11 years. In 2000, Elon Musk’s co-founders removed him as CEO of X.com, the company that became PayPal, a digital-payment platform. In 2008, Jack Dorsey’s co-founders at Twitter ended his short reign as chief executive. On November 17, Sam Altman looked set to become the next burned icon of the Bay Area, fired from OpenAI, the artificial-intelligence (AI) company he co-founded in 2015, by a board that accused him of lacking candor. But on November 21, after four days in which he, his employees and OpenAI’s investors, such as Microsoft, argued feverishly for his reinstatement, he was back in control of the company. “Oh, Jesus even took three days,” one wag tweeted amid the drama. Instead of Mr. Altman, three of the four board members who gave him the boot are toast.
It is not the first time in his 38 years on Earth that Mr. Altman has been at the center of such an imbroglio. He is a man of such supreme self-confidence that people tend to treat him as either a genius or an opportunist – the latter usually in private. Like Jobs, he has a messianic ability to inspire people, even if he doesn’t have the iPhone creator’s divine eye for design. Like Mr. Musk, he has unbridled faith in his vision for the future, even if he lacks the legendary engineering skills of the Tesla boss. Like Mr. Dorsey, he shipped a product, ChatGPT, that became a global topic of conversation—and consternation.
Along the way, however, he made people angry. This started at Y Combinator (YC), a greenhouse for entrepreneurs, which he led from 2014 until he was pushed out in 2019 for expanding it too quickly and being distracted by side hustles like OpenAI. At OpenAI, he fell out with Mr. Musk, another co-founder, and with some influential AI researchers, who left in frustration. The latest evidence comes from the four board members who clumsily tried to fire him. The specific reasons for their decision remain unclear. But it would be no surprise if Mr. Altman’s unbridled ambition came into play.
If there is one constant in Mr. Altman’s life, it is a missionary zeal that, even by Silicon Valley standards, is striking. Some entrepreneurs are motivated by fame and fortune. His goal seems to be techno-omnipotence. Paul Graham, a co-founder of YC, said of Mr. Altman, then still in his early 20s: “You could parachute him onto an island full of cannibals and come back in five years and he’d be king.”
Forget the island. The world is now his domain. In 2021 he wrote a utopian manifesto called “Moore’s Law for Everything,” predicting that the AI revolution (which he would lead) would shower riches on the world—creating phenomenal wealth, changing the nature of work, reducing poverty. He is an ardent proponent of nuclear fusion, arguing that, coupled with ChatGPT-like “generative” AI, the falling costs of knowledge and energy will create a “beautiful exponential curve”. It is heady stuff, all the more so given the need to strike a careful balance between speed and safety in developing such world-changing technologies. Where Mr. Altman sits on that spectrum is difficult to gauge.
Mr. Altman is a man of contradictions. In 2016, when he was still running YC, Peter Thiel, a billionaire venture capitalist, described him to the New Yorker as “not particularly religious but… culturally very Jewish—an optimist yet a survivalist”. (At the time Mr. Altman kept a bolthole in Big Sur, stocked with guns and gold, in preparation for rogue AIs, pandemics and other disasters.) As for his continued optimism, it came through clearly in an interview he recorded just two days before the OpenAI board’s coup, which he did not see coming. “What differentiates me from most of the AI companies is that I think AI is good,” he told “Hard Fork,” a podcast. “I don’t secretly hate what I do all day. I think it’s going to be cool.”
He sought to have it both ways when it came to OpenAI’s governance, too. Mr. Altman devised the convoluted corporate structure at the heart of the latest drama. OpenAI was founded as a non-profit to push the boundaries of AI to the point where computers can outthink humans, yet without sacrificing human control. But it also needed money. For this it established a for-profit subsidiary that offered investors capped returns but no say in the running of the company. Mr. Altman, who owns no stock in OpenAI, defended the model. In March he told one interviewer that putting such technologies in the hands of a company that sought to create unlimited value left him “a little scared.”
And yet he also seems to resent its limitations. As he did at YC, he dabbled in side projects, including seeking investors to make generative-AI devices and semiconductors that could prove very profitable. The old board is being replaced by a new one that may be less tied to OpenAI’s safety-first charter. The incoming chairman, Bret Taylor, used to run Salesforce, a software giant. On his watch the startup might come to look like a more conventional, fast-scaling tech company. Mr. Altman will probably be happy about that, too.
Mercury rising
If that happens, OpenAI might become an even hotter ticket. With the latest version of its AI model, GPT-5, and other products on the way, it is ahead of the pack. Mr. Altman has a rare ability to raise money and recruit talented people, and his job would be all the easier with a more normal corporate structure. But his ambiguities, especially about where to strike the balance between speed and safety, are a cautionary lesson. Although Mr. Altman has been welcomed into the world’s corridors of power to provide guidance on AI regulation, his own convictions are not yet set in stone. That is all the more reason for governments, not mercurial tech visionaries, to set the tone on AI safety.
© 2023, The Economist Newspaper Limited. All rights reserved.
From The Economist, published under license. The original content can be found at www.economist.com