After Mira Murati resigned from OpenAI last month, Miles Brundage has now left the company in another high-profile exit. Brundage served as Head of Policy Research at OpenAI and was recently made Senior Advisor for the AGI Readiness team. His work was instrumental in safely deploying AI models on ChatGPT, and he led red-teaming efforts at OpenAI. The “System Card” we now see for OpenAI models is thanks to his vision.
In his X post, Brundage says, “I think I’ll have more impact as a policy researcher/advocate in the non-profit sector, where I’ll have more of an ability to publish freely and more independence.”
As I have noted in my piece on OpenAI’s internal conflicts, the company’s shift toward profit-driven products over AI research and safety is pushing many longtime researchers to leave. OpenAI is also working to make its non-profit board toothless and become a for-profit corporation. This radical shift in OpenAI’s culture is forcing many to quit.
He further mentions, “OpenAI has a lot of difficult decisions ahead, and won’t make the right decisions if we succumb to groupthink,” urging OpenAI employees to raise concerns inside the company.
More crucially, with Miles Brundage’s departure, OpenAI is disbanding the AGI Readiness team. Its members will be absorbed into other teams, and some of its projects will move to the Mission Alignment team. Brundage has shared his thoughts in more detail on Substack.
Apart from that, The New York Times interviewed a former OpenAI researcher, Suchir Balaji, who says the company broke copyright law for AI training. Balaji quit the company in August because “he no longer wanted to contribute to technologies that he believed would bring society more harm than benefit.”
With such high-profile exits, what do you think about OpenAI’s new direction? Is it going to take AI safety seriously or focus more on shipping commercial products? Let us know in the comments below.