New York: It’s been quite a week for ChatGPT-creator OpenAI – and co-founder Sam Altman.
Altman, who helped start OpenAI as a nonprofit research lab in 2015, was suddenly and mostly inexplicably removed as CEO on Friday, shocking the industry. And while his chief executive position was swiftly reinstated just days later, a lot of questions are still up in the air.
If you’re just catching up on the OpenAI saga and what’s at stake for the artificial intelligence space as a whole, you have come to the right place. Here’s a summary of what you need to know.
Who is Sam Altman and how did he rise to fame?
Altman is a co-founder of OpenAI, the San Francisco-based company behind ChatGPT (yes, the chatbot that seems to be everywhere these days – from schools to health care).
The explosion of ChatGPT since its arrival a year ago put Altman in the spotlight of the rapid commercialization of generative AI – which can produce new imagery, excerpts of text and other media. And as he became Silicon Valley’s most sought-after voice on the promise and potential dangers of this technology, Altman helped transform OpenAI into a world-renowned startup.
But there were some ups and downs in his position at OpenAI last week. Altman was ousted as CEO on Friday — and a few days later, he was back on the job with a new board of directors.
Within that time, Microsoft, which has invested billions of dollars in OpenAI and holds rights to its existing technology, facilitated Altman’s return – first by hiring him, along with another OpenAI co-founder and former president, Greg Brockman, who had resigned in protest of the CEO’s removal. Meanwhile, hundreds of OpenAI employees threatened to resign.
Both Altman and Brockman celebrated their return to the company by posting on X, formerly known as Twitter, early Wednesday.
Why does his removal – and reinstatement – matter?
Much is unknown about Altman’s initial removal. Friday’s announcement said he “has not been consistently clear in his communications” with the then-current board of directors, which declined to provide more specific details.
Regardless, the news shocked the AI world – and, because OpenAI and Altman are such leading players in the field, it may raise trust concerns about the fast-growing technology. Many questions remain.
“The OpenAI episode shows how fragile the AI ecosystem is right now, including addressing AI’s risks,” said Johann Laux, an expert at the Oxford Internet Institute who focuses on human oversight of artificial intelligence.
The turmoil has also highlighted differences between Altman and the company’s previous board members, who have expressed differing views on the safety risks posed by AI as the technology advances.
Many experts say the drama highlights how governments — not big tech companies — should make decisions on AI regulation, especially for fast-growing technologies like generative AI.
“The events of the past few days have not only jeopardized OpenAI’s effort to introduce more ethical corporate governance into the management of its company, but they also show that corporate governance alone, even if well-intentioned, can be easily circumvented by other corporate dynamics and interests,” said Enza Iannopollo, principal analyst at Forrester.
The lesson, Iannopollo said, is that companies alone cannot provide the level of safety and trust in AI that society needs. “Rules and guardrails designed with companies and rigorously enforced by regulators are vital if we are to benefit from AI,” she said.
What is Generative AI? How is it being regulated?
Unlike traditional AI, which processes data and completes tasks using predetermined rules, generative AI (including chatbots like ChatGPT) can create something new.
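To make the contrast concrete, here is a toy sketch in Python. It is purely illustrative and not how ChatGPT actually works: the rule-based function follows fixed, predetermined rules, while the tiny Markov-chain generator stands in for generative AI, producing new text by sampling from patterns it picked up from example data. (All names and the example corpus here are invented for illustration.)

```python
import random

# "Traditional" AI in miniature: the output is fully determined by
# predefined rules written by a human.
def rule_based_sentiment(text):
    negative_words = {"bad", "terrible", "awful"}
    words = text.lower().split()
    return "negative" if any(w in words for w in negative_words) else "positive"

# Toy stand-in for generative AI: learn which word tends to follow which,
# then *create* a new sequence by sampling. Real systems like ChatGPT use
# large neural networks, but the key difference is the same: the output is
# generated, not looked up in a fixed rule table.
def train_markov(corpus):
    model = {}
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, start, length=8, seed=0):
    random.seed(seed)  # fixed seed so the toy example is repeatable
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

corpus = "the model writes new text the model learns patterns the text flows"
model = train_markov(corpus)
print(rule_based_sentiment("this movie was terrible"))  # negative
print(generate(model, "the"))  # a newly sampled sentence, not a canned reply
```

The rule-based function can only ever return one of two fixed labels; the generator can emit word sequences that never appeared verbatim in its training data – a crude analogue of the "create something new" property described above.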
Tech companies are still leading the way when it comes to regulating AI and its risks, while governments around the world are working to catch up.
In the European Union, negotiators are finalizing the world’s first comprehensive AI rules. But they are reportedly at loggerheads over whether and how to incorporate the most controversial and revolutionary AI products: the commercialized large language models that underpin generative AI systems, including ChatGPT.
When Brussels presented its initial draft legislation in 2021, chatbots were barely mentioned; the focus was on AI systems with specific uses. Officials are still figuring out how to incorporate these general-purpose systems, also known as foundation models, into the final version.
Meanwhile, in the US, President Joe Biden signed an ambitious executive order last month seeking to balance the needs of cutting-edge technology companies with national security and consumer rights.
The order – which will likely need to be augmented by congressional action – is an initial step aimed at ensuring that AI is trustworthy and helpful, rather than misleading and destructive. Its purpose is to guide how AI is developed so that companies can make profits without endangering public safety.