The staff complained that the o1 model had been released too early

Wearing a gray T-shirt and jeans, Murati demonstrated the software’s capacity for simultaneous translation, arithmetic, and coding, as well as its ability to interact through voice and image cues. The live demo was impressive, and it was strategically timed to overshadow OpenAI’s competitor Google, which was scheduled to reveal new features for Gemini, its AI chatbot, later that week.

Sources acquainted with the company’s internal operations, however, claim that things were anything but smooth behind the scenes. The research and safety teams at OpenAI were struggling to keep up with the constant pressure to launch products like GPT-4o and a newer model called o1, which debuted last month. Teams working to ensure that OpenAI’s products did not pose unnecessary risks, such as the potential to aid in the production of biological weapons, clashed with the commercial teams tasked with bringing new products to market and turning a profit.

Although many employees of OpenAI believed that o1 was not yet ready for release, Altman insisted on its launch in order to enhance OpenAI’s standing as a pioneer in the field of artificial intelligence. This is the first time that specifics of the o1 controversy have been made public.

Murati frequently found herself in the middle of disputes between the commercial teams, who were anxious to get products out, and the research and technical teams she oversaw. She also had to cope with grievances over the management approach of OpenAI president and cofounder Greg Brockman, as well as residual animosity over her role in Altman’s short-lived removal in a November 2023 boardroom takeover.

These demands appear to have taken a toll on Murati, who shocked the tech world on Wednesday by announcing her departure from OpenAI after more than six years at the company. She had played such a critical role in OpenAI’s meteoric rise that many speculated about how much damage her exit might do to the company’s future. An aide to Murati disputed this account, saying that her decision to quit was unrelated to burnout or problems with her management.

Two other senior staff members who had joined Murati in the GPT-4o launch webcast, Chief Research Officer Bob McGrew and Vice President of Research Barret Zoph, announced their resignations on the same day she did. From the outside, a company that had been trying to sell investors on the idea that it was maturing into a more stable, albeit still rapidly expanding, business once again appeared to be in danger of coming apart.

The three had decided to depart “independently and amicably,” according to a memo that Altman wrote to employees and then shared on the social media site X in an attempt to calm the ruckus. He explained that the company had chosen to announce the departures on the same day in order to facilitate “a smooth handover to the next generation of leadership.” He said, “We are not a normal company,” but acknowledged that such changes are not usually “so abrupt.”

The three senior resignations add to a string of long-serving researchers and staff members who have left OpenAI in the last six months. A few of them have publicly voiced their concerns about the company’s shifting priorities and culture as it rapidly transforms from a small nonprofit lab that aims to ensure superpowerful AI is developed for “the benefit of all humanity” into a fast-growing for-profit business.

OpenAI has risen to the forefront of the AI revolution in just nine years since its founding, unsettling big tech companies like Google, Facebook’s parent company Meta, and Amazon as they vie for dominance in the newest technologies. However, internal rivalry and turmoil among its executives have also rocked OpenAI, threatening the company’s momentum at a critical moment in its race with competitors.

The latest turmoil comes as OpenAI strives to hit three highly significant business milestones. The company, which collects billions of dollars annually from the paid versions of its AI models, is trying to reduce the billions of dollars it is also reportedly losing to high staff costs and the computing power needed to train and run its AI products.

OpenAI is attempting to close a fresh round of venture capital at the same time. If successful, this round could raise up to $7 billion, placing the company at a $150 billion valuation and making it one of the most valuable tech startups in history. Investors may be put off by the internal drama, which could result in a lower valuation.

Altman has informed employees that the company intends to restructure its current complex corporate structure, which has a nonprofit foundation in charge of its for-profit arm, in part to attract investors. The nonprofit would no longer have control over the for-profit under the new agreement.

Tensions among the staff reached a public peak in November 2023, when Altman was unexpectedly fired by OpenAI’s nonprofit board, with the explanation that he had “not been completely candid” with its members. There followed a five-day spectacle that culminated in Altman’s rehiring and the resignation of multiple board members. An outside law firm later hired by a newly constituted board to look into the firing found that while there had been a fundamental breakdown in trust between the board and Altman, nothing he had done required his removal. OpenAI employees now refer to the incident internally as “the Blip,” as if to signal that the drama is behind them.

The resignations of McGrew, Zoph, and Murati, however, suggest that the tense situations that have rocked OpenAI may not have completely subsided.

This article’s reporting is based on interviews with individuals who have knowledge of OpenAI’s operations as well as current and former employees. Concerned about breaking nondisclosure agreements in their employment or separation contracts, they all asked to remain anonymous.

Though the company disagreed with many of the characterizations in this article, an OpenAI spokesperson acknowledged that the company had to grow and adapt in order to go from an unknown research lab to a global company that provides advanced AI research to hundreds of millions of people in just two years.

Hurried through safety testing and development.

The testing and release of OpenAI’s most recent and most powerful AI models were rushed, owing to fierce competition from other AI startups and pressure to demonstrate the company’s technological leadership to prospective investors, according to people familiar with the planning behind both launches.

“Everyone hated the process,” one source said, describing the GPT-4o launch as “unusually chaotic even by OpenAI standards.” According to a second source familiar with the rollout, teams were given just nine days to complete safety assessments of the model, which required them to put in more than 20 hours of work in a single day to make the launch date. “It was definitely rushed,” this source said.

According to the source, safety teams begged Murati for more time. In the past, when safety teams said they needed more time for testing, Murati had occasionally stepped in to delay model debuts over the objections of OpenAI’s commercial teams. In this instance, however, she said the timing could not change.

That deadline was effectively set by a different company: Google. According to the sources, OpenAI knew that Google intended to unveil a number of new AI features at its May 14 Google I/O developer conference, including a sneak peek at a powerful new AI model that could rival GPT-4o. OpenAI wanted to steal attention from those announcements and undercut any notion that Google was pulling ahead in the race to build ever-more-powerful AI models. So GPT-4o’s live premiere was set for May 13.

Although OpenAI’s safety testing was not fully completed or verified at the time of launch, the safety team’s initial evaluation indicated that GPT-4o could be launched safely. The source said a safety researcher later found that GPT-4o appeared to surpass OpenAI’s own safety threshold for “persuasion,” the model’s ability to persuade people to change their opinions or carry out a task. This is crucial because manipulative models could be misused, for example to persuade someone to vote a certain way or believe in a conspiracy theory.

No model that poses a “high” or “critical” risk according to any safety metric will be released, the company has stated. In this instance, OpenAI had first declared that GPT-4o posed a “medium” risk. The source claims that the researcher has since informed others within the organization that GPT-4o may truly pose a “high” risk in terms of the persuasion metric.

OpenAI disputes this description, saying that only its internal safety and readiness procedures determine when it releases new software. According to an OpenAI representative, the model “was determined safe to deploy” and received a “medium” persuasion rating, and the company “followed a deliberate and empirical safety process” for the release of GPT-4o. The company said the higher ratings discovered after the model’s release stemmed from methodological flaws in the tests and did not accurately reflect the risk associated with GPT-4o. Hundreds of millions of people have safely used the model since its launch in May, the company said, adding to its “confidence in its risk assessment.”

Parts of this chronology were previously reported by the Wall Street Journal; Fortune independently verified the details with several sources.

Fortune has also learned that similar problems accompanied the release of OpenAI’s most recent model, o1, which was revealed on September 12 and created a lot of buzz. Compared to earlier AI models, it is more proficient at tasks requiring math, reasoning, and logic, which some see as a major advance toward artificial general intelligence, or AGI. Long regarded as the Holy Grail of computer science, AGI denotes an AI system with cognitive abilities comparable to a human’s. OpenAI has stated that building AGI is its goal.

But the model’s introduction did not sit well with some senior executives at the company, according to a person familiar with the situation. The model is currently accessible to the public in only two versions: a “mini” version designed for math and coding, and a less functional “preview” version. According to the source, some OpenAI employees believed the model was not safe enough to be released and that its performance was too erratic.

OpenAI researcher and cofounder Wojciech Zaremba also alluded to conflicts among employees earlier in o1’s development, in a post on X responding to the departures of Murati, McGrew, and Zoph. He said that he and Zoph had had “a fierce conflict about compute for what later became o1,” but he did not go into detail about the nature of the disagreement.

After o1 was trained, Altman pushed for its near-immediate release as a product. According to the source who spoke with Fortune, the CEO insisted on a quick rollout in part because he was eager to show prospective investors in the company’s latest funding round that OpenAI remains at the forefront of AI development and that its models outperform those of its rivals.

Many of the teams that reported to Murati felt that o1 had not yet been properly shaped into a product and was not ready for release. But, according to the source, their objections were overruled.

According to the source, a group dubbed “post-training” was still working to improve o1. Post-training involves taking a trained model and finding ways to improve its responses so that they are safer and more helpful. Usually, this process is finished prior to a model’s debut. Zoph, who announced he was leaving on the same day as Murati, had been in charge of the post-training team.

Zico Kolter, a computer scientist at Carnegie Mellon University, and Paul Nakasone, a former director of the U.S. National Security Agency, sit on the OpenAI board committee responsible for safety and security oversight. According to Kolter and Nakasone, the release of the o1 model was handled safely and “demonstrated rigorous evaluations and safety mitigations the company implements at every step of model development.”

A representative for Murati disputed claims that management headaches or stress from the rush to get models released were factors in her resignation. “She felt it was the right time to step away after big wins and worked closely with her teams,” the spokesperson said.

Persistent doubts regarding Murati’s allegiance.

There might have been more going on between Altman and Murati than disagreements over the release of AI models. According to people familiar with the company, there were rumors among OpenAI staff members that Altman considered her disloyal because of her role in his short-lived dismissal during “the Blip.”

Concerns among OpenAI staff about Murati’s loyalty arose almost immediately after Altman was fired in November, according to one source. To start with, the previous board named Murati interim CEO. Then, at a staff meeting on the day of Altman’s firing, astonished workers asked senior staff when they had been informed of the board’s decision. All of the executives said they had learned of it only after it was made public, with the exception of Murati, who said the board had notified her twelve hours before the decision was announced. Some Altman supporters believed she ought to have forewarned him so he would not have been blindsided by his dismissal.

During that meeting, Murati assured staff that she would work with the company’s other executives to advocate for Altman’s return. A few days later, the board named seasoned tech executive Emmett Shear interim CEO in her place. When Altman returned as CEO five days after his firing, Murati resumed her position as CTO.

Despite her denials, Murati may have played a larger role in Altman’s brief dismissal than she has acknowledged. According to a March New York Times story, Murati had written a memo to Altman voicing her disapproval of his leadership style and had subsequently raised those concerns with the OpenAI board. The story also said her discussions with the board influenced its decision to fire Altman.

Through a lawyer, Murati denied to the newspaper that she had approached OpenAI’s board with the intention of having Altman fired. However, she later told staff that she had given Altman “feedback” when certain board members contacted her, adding that Altman was already aware of her opinions because “I have not been shy about sharing feedback with him directly.” “We had a strong and productive partnership,” she added at the time.

According to sources, despite Murati’s assurances, some at OpenAI questioned whether her relationship with Altman had been permanently damaged. Some conjectured that OpenAI’s billionaire investors might be put off by the prospect of a top executive whose allegiance to Altman was in question.

Murati had to clean up after Brockman ruffled feathers.

Over the past year, Murati also had to deal with disagreements between Brockman, OpenAI’s president and cofounder, and some of the teams that reported to her. Sources said that Brockman, a well-known workaholic, lacked a specific portfolio. Instead, he had a habit of jumping into projects at the last minute, uninvited. In some cases he messaged team members at all hours of the day or night and pushed them to work at his own punishing pace.

Brockman’s behavior agitated staff on those teams, and sources said Murati was frequently called in to try to calm him down or mend hurt feelings.

One source claimed that Brockman was eventually “voluntold” to take a sabbatical. According to a Wall Street Journal report, Altman met with Brockman and advised him to take a break. When Brockman announced on social media on August 6 that he was “taking a sabbatical,” he described it as his “first time to relax since cofounding OpenAI 9 years ago.”

A representative for OpenAI affirmed that Brockman’s decision was fully his own and that he would rejoin the company “by the end of the year.”

Brockman may not have been the only personnel-related concern for Murati and her deputies. In his X post, Zaremba also said he had been chastised by McGrew “for doing a jacuzzi with a coworker,” which sparked speculation on social media about an open-minded corporate culture. Zaremba did not provide further details.

According to a representative for Murati, handling staff conflicts is a normal part of a senior executive’s job, and stress or burnout had no bearing on Murati’s decision to step down. “After a successful transition, she is leaving to devote all of her attention and energies to her exploration and whatever comes next,” the spokesperson said.

A changing culture.

OpenAI has undergone a significant transformation since Altman’s brief dismissal ten months ago, one that includes Brockman’s sabbatical and Murati’s departure. Beyond the senior executives who announced their exits last week, OpenAI’s former chief scientist and cofounder Ilya Sutskever departed the company in May.

As an OpenAI board member, Sutskever cast a vote to dismiss Altman. Sutskever, who had headed the company’s research into controlling potentially superintelligent AI systems that could outsmart all of humanity put together, never returned to work at the company after Altman was rehired. He has since founded his own AI startup, Safe Superintelligence.

Following Sutskever’s exit, Jan Leike, a senior AI researcher who had co-led the so-called “superalignment” team with him, also announced he was leaving to join OpenAI competitor Anthropic. In a post on X, Leike took a parting shot at OpenAI, saying he believed the company was increasingly prioritizing “shiny products” over AI safety.

Another OpenAI cofounder, John Schulman, departed in August to work for Anthropic, citing a desire to concentrate on “hands on technical work” and AI safety research. The resignation of almost half of the company’s safety researchers over the last six months has raised concerns about OpenAI’s dedication to safety.

OpenAI, meanwhile, has been hiring rapidly. During Altman’s brief removal in November, OpenAI employees posted the statement “OpenAI is nothing without its people” on X as a way to show support for his comeback. But increasingly, OpenAI’s people are not the same ones who were there when the company was founded.

Since “the Blip,” the number of employees at OpenAI has more than doubled, from fewer than 800 to nearly 1,800. Many of those hires have come from large technology companies or conventional fast-growing startups, rather than from the specialized areas of AI research from which OpenAI has historically drawn much of its staff.

The company now employs far more people in commercial functions such as product management, sales, risk, and developer relations. These employees may be motivated more by the prospect of big paydays and the chance to build products people use today than by the quest to develop AGI safely.

According to one source, the influx of new employees has changed OpenAI’s culture. “Conversations about product or deployment into society are more prevalent than conversations about research,” the source said.

At least a few former employees appear upset about Altman’s plans to reorganize the company’s structure and remove the nonprofit OpenAI foundation’s control over the for-profit division. Under the changes, the cap on how much OpenAI’s investors can earn would also be lifted.

On September 29, Gretchen Krueger, a former OpenAI policy researcher who left the organization in May, expressed her concerns about the proposed changes to OpenAI’s organizational structure in a post on X.

“Removing the profit cap and releasing the for-profit from nonprofit oversight feels like a step in the wrong direction, when what we need is multiple steps in the right direction,” Krueger wrote, adding that “OpenAI’s nonprofit governance and profit cap are part of the reason I joined in 2019.”

Some current employees, however, say they are not unhappy about the changes. They hope internal strife will become less common as colleagues they consider too academic or overly worried about AI safety, and therefore reluctant to ever ship products, gradually opt to leave.
