OpenAI Policy Expert Miles Brundage Resigns Amid New Models Launch

As a researcher who has been closely following developments at OpenAI, I see Miles Brundage’s departure as a significant shift in the landscape of AI policy and ethics. Having followed Brundage’s contributions during his tenure at OpenAI, it is clear that he played a pivotal role in shaping the company’s safety culture and providing ethical guidance to executives.

After spending six years with OpenAI as their Senior Advisor for AGI Readiness, Miles Brundage has chosen to step down from the company. In his farewell note, he mentioned that he’s moving on to concentrate on AI policy research outside of the tech sector.

Notably, his departure comes at a time when OpenAI, under Sam Altman’s leadership, is navigating internal challenges and transitions. These changes include the launch of new products such as consistency models, which are designed to speed up sample generation in the company’s AI systems.

OpenAI Policy Expert Miles Brundage Resigns

Brundage joined OpenAI back in 2018 and has since been actively involved in policy work and in ensuring the safety of advanced AI technologies. A significant part of his contributions centered on the responsible use and management of cutting-edge AI systems such as ChatGPT.

For several years, Brundage played a significant role in shaping OpenAI’s red teaming initiative and was instrumental in producing the “system card” reports that document the capabilities and risks of OpenAI’s AI models.

In his role as part of the AGI (Artificial General Intelligence) readiness team, Brundage offered ethical advice to executives, including CEO Sam Altman, on matters related to AI. He played a crucial role in shaping OpenAI’s safety culture during a significant phase of the company’s growth and evolution.

In a post on social media, Brundage described working at OpenAI as an exceptionally high-impact opportunity and said the decision to resign was a difficult one. He praised the company’s mission but stressed the need for more independent researchers in AI policy debates.

Transition Amid Leadership Changes

Miles Brundage’s departure from OpenAI is significant, given that it coincides with the recent exits of CTO Mira Murati and research VP Barret Zoph. Sam Altman, who supports Brundage’s decision, believes that the policy research work Brundage intends to pursue outside of OpenAI will ultimately benefit the company.

The economic research division, formerly part of the AGI readiness team, is now managed by OpenAI’s new Chief Economist, Ronnie Chatterji. Joshua Achiam, the head of mission alignment, will take over some of the projects Brundage previously handled.

Going forward, Brundage will focus on AI regulation, the technology’s economic impact, and the safety of future AI development. He views these areas as crucial to addressing the challenges of deploying AI across industries, including applications of new techniques such as consistency models.

Consistency Models and New AI Developments

At the same time, Sam Altman’s OpenAI unveiled consistency models, a new approach for speeding up AI sample generation. Unlike conventional diffusion models, which build an output through many sequential denoising steps, consistency models are designed to produce high-quality samples in a single step or a small number of steps, a substantial speed-up for generative AI.
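The difference can be illustrated with a minimal sketch, assuming a conceptual PyTorch-style setup rather than OpenAI’s actual implementation; the names diffusion_sample, consistency_sample, denoiser, and consistency_fn below are hypothetical placeholders.

```python
import torch

def diffusion_sample(denoiser, shape, num_steps=50):
    """Conventional diffusion sampling: many sequential denoising steps."""
    x = torch.randn(shape)                      # start from pure noise
    for t in reversed(range(1, num_steps + 1)):
        x = denoiser(x, t)                      # each step refines the sample
    return x

def consistency_sample(consistency_fn, shape, t_max=80.0):
    """Consistency-model sampling: a single forward pass maps noise to data."""
    x = torch.randn(shape) * t_max              # noise at the largest timestep
    return consistency_fn(x, t_max)             # one evaluation -> sample
```

The key idea, per the published consistency-models research, is that the model is trained so points on the same noising trajectory all map to the same clean sample, which is what makes one-step generation possible.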

Introducing these new models is a key part of the company’s strategy to expand its overall capacity and address operational challenges, particularly following its recent $6.6 billion funding round.

The emergence of consistency models coincides with increased criticism of the company’s practices, specifically accusations of copyright infringement in the training of its AI models. Former OpenAI employees, including Suchir Balaji, have voiced concerns about the company’s methods, fueling the ongoing debate over how AI technology should be regulated.

2024-10-23 23:10