The Unfolding Crisis Within OpenAI: The Future of Artificial Intelligence at Stake

Around midday on November 17th, Sam Altman, the CEO of OpenAI, joined a video call from a luxury hotel in Las Vegas. He was in town for the city's first-ever Formula 1 race, which drew 315,000 visitors, including Rihanna and Kylie Minogue. Mr. Altman, whose fame had spread well beyond the tech world thanks to the success of OpenAI's ChatGPT chatbot, had a meeting scheduled that day with Ilya Sutskever, the start-up's chief scientist. But when the call began, Mr. Altman saw that Dr. Sutskever was not alone: he was virtually flanked by OpenAI's three independent board members. Instantly, Mr. Altman sensed that something was wrong.

Unbeknownst to Mr. Altman, Dr. Sutskever and the three board members had been plotting behind his back for months. They believed he had been dishonest and should no longer lead a company that was driving the A.I. race. On a secret 15-minute video call the previous day, the board members had voted, one by one, to oust Mr. Altman from OpenAI. Now they were delivering the news. Stunned, Mr. Altman asked, "How can I help?" The board members urged him to support an interim CEO, and he agreed. Within hours, however, he changed his mind and decided to fight back against OpenAI's board.

His firing was the culmination of years of simmering tensions at OpenAI that pitted those alarmed by A.I.'s power against those who saw the technology as a once-in-a-lifetime opportunity for profit and prestige. As the rifts deepened, the organization's leaders turned on one another in a boardroom confrontation that ultimately revealed who holds the upper hand in A.I.'s future development: Silicon Valley's tech elite and well-financed corporate interests.

The controversy engulfed Microsoft, which had invested $13 billion in OpenAI and intervened to protect its stake. Numerous top Silicon Valley executives and investors also rallied behind Mr. Altman. Some advocated on his behalf from his $27 million mansion in San Francisco's Russian Hill neighborhood, using social media and private text threads to voice their opposition to the board's decision.

At the center of the upheaval was Mr. Altman, a 38-year-old multimillionaire whom one longtime mentor described as hungrier for power than for money. Some privately worried that he disregarded the potential dangers of the technology even as he became A.I.'s public face. The chaos at OpenAI has raised a larger question about the individuals and companies spearheading the A.I. revolution: if the world's premier A.I. start-up can descend into turmoil over internal conflicts and murky accusations of impropriety, can it be relied upon to advance a technology that may have immeasurable effects on billions of people?

From its inception in 2015, OpenAI was on course for conflict. The San Francisco lab was founded by Elon Musk, Mr. Altman, Dr. Sutskever, and others, with the goal of developing A.I. systems for the benefit of humanity. Unlike most tech start-ups, it was structured as a non-profit, with a board tasked with ensuring it stayed true to that mission.

The board comprised individuals with differing, sometimes opposing, philosophies about A.I. On one side were those worried about A.I.'s perils, among them Mr. Musk, who left OpenAI in frustration in 2018. On the other were Mr. Altman and others focused on the technology's potential benefits. In 2019, Mr. Altman, who had run the start-up incubator Y Combinator, became OpenAI's CEO. Despite holding only a minor ownership stake in the start-up, he steered it in a new direction, establishing a for-profit subsidiary and raising $1 billion from Microsoft, moves that prompted questions about how they squared with the board's mission of building safe A.I.

A series of departures earlier this year reduced OpenAI's board from nine members to six. Three of them, Mr. Altman, Dr. Sutskever, and Greg Brockman, the start-up's president, were founders; the other three were independent. They held divergent views on A.I. but were united by a shared concern that the technology could one day surpass human intelligence.

Tensions soared after OpenAI released ChatGPT last year. As millions of people used the chatbot for all manner of tasks, Mr. Altman basked in the limelight. But some board members worried that ChatGPT's runaway popularity ran counter to the goal of creating safe A.I. Those worries deepened when they clashed with Mr. Altman over the appointment of three new board members.

Concerns escalated when Mr. Altman met with investors in the Middle East to discuss an A.I. chip project without fully informing the board. Dr. Sutskever, who has long feared that A.I. could spell humanity's doom, had his own reservations about Mr. Altman, who he believed was undermining the board in conversations with OpenAI's executives. Tensions worsened when Mr. Altman promoted another OpenAI researcher to a senior role equivalent to Dr. Sutskever's, which Dr. Sutskever perceived as a slight. He threatened to leave, forcing the board to choose between him and Mr. Altman.

Additional friction arose when Helen Toner, a board member, published a paper praising a competitor for postponing a product release and contrasting that approach with the "frantic corner-cutting that the release of ChatGPT appeared to spur." Mr. Altman was upset, especially because the Federal Trade Commission had opened an inquiry into OpenAI's data collection practices. That episode and others convinced board members that Mr. Altman was attempting to play them against one another. So on November 16th they took action, voting to oust Mr. Altman. OpenAI's outside counsel advised the board to restrict …

