Twelve former OpenAI employees have entered a legal dispute between Elon Musk and OpenAI’s current leadership, filing an amicus brief in support of Musk’s lawsuit. At the heart of the matter is a growing concern that OpenAI’s for-profit division could override the nonprofit’s original purpose, a mission designed to ensure artificial intelligence serves the public good.
The brief, submitted in early April, is not an endorsement of Musk himself, the former employees clarified. Instead, their support hinges on preserving the foundational structure OpenAI was built upon.
“We worked at OpenAI; we know the promises it was founded on and we’re worried that in the conversion those promises will be broken,” wrote Todor Markov, one of the signatories and a former OpenAI researcher. “The nonprofit needs to retain control of the for-profit. This has nothing to do with Elon Musk and everything to do with the public interest,” he added.
OpenAI was founded in 2015 with a commitment to safe and open AI development, and its nonprofit charter originally aimed to prevent the concentration of artificial general intelligence (AGI) power in the hands of a few.
Over the years, however, the organization adopted a hybrid model with a for-profit subsidiary to attract the capital needed for advanced research and development. That structure is now at the center of a legal clash.
The core issue is whether the nonprofit arm will retain oversight of the for-profit business, or whether the restructuring will shift ultimate control to the for-profit board. While OpenAI maintains that its nonprofit entity will still benefit and hold influence, the former employees argue this control is at risk.
Markov questioned OpenAI’s recent statements, saying, “OpenAI claims ‘the nonprofit isn’t going anywhere’ but has yet to address the critical question: Will the nonprofit retain control over the for-profit? This distinction matters.”
The group’s legal argument rests on the nonprofit’s legally binding responsibility to uphold its founding mission: to ensure AGI benefits all of humanity. “The nonprofit directors have a fiduciary duty to the nonprofit purpose,” Markov said, referencing the corporation’s original certificate of incorporation.
This duty, he noted, is enforceable by the attorneys general of both Delaware and California, states with jurisdiction over OpenAI’s corporate registration and operations.
According to the group, these state officials, who are elected by the public, serve as a mechanism of accountability. If the nonprofit’s directors deviate from their mission, the attorneys general have legal standing to act.
“If at any point in time in the future you believe that the organisation is acting contrary to its mission, you can write to the AG requesting they take action,” Markov added, reinforcing the idea that public oversight must remain intact.
Their brief contrasts this structure with what could happen if ultimate authority passed to a for-profit public benefit corporation (PBC). “Directors of the PBC would have no such fiduciary duty,” Markov noted. “They would be allowed to balance [the mission] against shareholder interests, but not required to do so.”
OpenAI has defended its restructuring plans as a practical move. In statements to the press, the company said the shift in control is necessary to raise $40 billion in new investments, which would accelerate the development and deployment of its AI models.
As it seeks more funding to remain competitive with rivals like Google DeepMind and Anthropic, the company insists the mission will not be compromised. The legal battle is still unfolding.
A jury trial in the Musk v. Altman case has been scheduled for March 2026.
Meanwhile, OpenAI has filed a countersuit against Musk, accusing him of using the courts to gain leverage over its fast-developing technologies.
Beyond the legal filings, the public conversation around this case has raised deeper philosophical and ethical questions about corporate structure and the role of former insiders. On X (formerly Twitter), a critic questioned the relevance of former employees weighing in, suggesting they had forfeited their right to influence decisions after leaving the organization.
“Why do you feel you should have a say in what an independent entity prefers? You are no longer in the employment of OpenAI,” the user wrote. “OpenAI will have to maximize shareholders’ value… Why shouldn’t OpenAI?”
In response, Markov addressed the misconception about OpenAI’s obligations. “OpenAI is legally bound to ‘ensure that artificial general intelligence benefits all of humanity,’” he replied. Markov explained that this is not a mere guiding principle but a legal clause embedded in the nonprofit’s founding documents.
He further cited a signed agreement by OpenAI’s shareholders reinforcing this mission-first framework: “The Company’s duty to this mission and the principles advanced in the OpenAI Inc Charter take precedence over any obligation to generate a profit. The Company may never make a profit, and the Company is under no obligation to do so.”
This distinction, he argued, is critical to preserving AI development that serves broad societal needs instead of corporate or private gain.
The debate around OpenAI’s governance has larger implications beyond this single organization. As AI becomes more powerful and widely adopted, the structures controlling its development and deployment will influence how its benefits and risks are distributed.
The outcome of this case could set a precedent for how future AI companies balance public accountability with private investment. It may also signal whether nonprofit oversight can realistically coexist with the immense capital needs required to build cutting-edge AI technologies.
For now, former employees like Markov are urging the courts, and the public, not to lose sight of what made OpenAI unique. “Keeping such a lever is in the public interest,” he wrote, “and hence, so is keeping nonprofit control.”