Corporate Governance in AI Companies: Insights from the OpenAI Crisis

The corporate crisis that unfolded at OpenAI in November 2023 prompted me to reflect on corporate governance in AI companies. The crisis, which concluded with Sam Altman’s reinstatement and a restructured board, offers lessons worth examining.

While many analysts drew parallels to Steve Jobs’ ouster from Apple, the circumstances differ significantly. Jobs’ departure occurred amid product failures, whereas Altman’s dismissal came at OpenAI’s peak: a mere eleven days after his compelling presentation at OpenAI’s first developer conference, when the company’s valuation was approaching $90 billion and it commanded global attention.

Altman’s sudden termination raises intriguing questions about OpenAI’s governance structure. Does this carefully crafted framework have fundamental flaws? And if it were so deeply flawed, how did OpenAI rapidly emerge as a leader of the AI industry?

I believe we should examine this through the lens of fundamental corporate governance principles. Corporate systems, while imperfect, have evolved through practical refinement into modern governance frameworks, primarily addressing two stakeholder groups (shareholders and managers) and two key aspects (power and interests). However, the AI era presents novel challenges to these established frameworks.

OpenAI’s approach to corporate governance represents an innovative experiment. Its financially independent founders, wary of monopolization by capital, committed not to the interests of shareholders, employees, or customers, but to “acting in humanity’s best interests.” This led them initially to adopt a non-profit model, with all revenues reinvested in development.

However, this idealistic approach soon encountered practical challenges: training large models required substantial capital that donations alone couldn’t sustain, and attracting top talent necessitated competitive compensation. In 2019, OpenAI devised an innovative hybrid structure to address these constraints.

Specifically, they established a limited partnership in which OpenAI’s non-profit entity retained control as the General Partner (GP), while other investors participated as Limited Partners (LPs), receiving capped returns with no role in management and no board representation. The structure set differentiated return caps and exit mechanisms for investors at different stages. For instance, Microsoft’s $13 billion investment came with a $92 billion return cap and a graduated decrease in its profit-sharing ratio, leading to an eventual exit. This structure enabled OpenAI to maintain its idealistic mission while securing the necessary funding.
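
To make these capped-return mechanics concrete, here is a minimal sketch of such a profit waterfall in Python. All figures, tranche thresholds, and profit-sharing ratios below are hypothetical placeholders; the actual schedule for OpenAI LP’s investors has not been published. The sketch only illustrates the general idea: the LP takes a declining share of each profit distribution until its cumulative proceeds reach a fixed cap, after which every further dollar flows to the non-profit GP.

    # A toy capped-return waterfall (hypothetical numbers; OpenAI LP's real
    # tranche schedule is not public). The LP takes a declining share of
    # each profit distribution until its cumulative proceeds reach the
    # return cap; every dollar past the cap goes to the non-profit GP.

    CAP = 92.0        # LP's total return cap, e.g. $92B on a $13B stake
    TRANCHES = [      # (LP cumulative-proceeds ceiling, LP profit share)
        (20.0, 0.75), # first tranche: LP takes 75% of profits
        (50.0, 0.49), # then its share steps down to 49%
        (92.0, 0.20), # final tranche: 20% until the cap is hit
    ]

    def split(profit: float, lp_so_far: float) -> tuple[float, float]:
        """Split one profit distribution between the LP and the GP."""
        lp_take = 0.0
        remaining = profit
        for ceiling, share in TRANCHES:
            if remaining <= 0 or lp_so_far + lp_take >= CAP:
                break
            headroom = min(ceiling, CAP) - (lp_so_far + lp_take)
            if headroom <= 0:
                continue  # this tranche is already exhausted
            take = min(remaining * share, headroom)
            lp_take += take
            remaining -= take / share  # gross profit consumed by this slice
        return lp_take, profit - lp_take

    lp_total = 0.0
    for year_profit in [10.0, 40.0, 120.0, 300.0]:  # hypothetical profits
        lp, gp = split(year_profit, lp_total)
        lp_total += lp
        print(f"profit {year_profit:6.1f} -> LP {lp:6.2f}, GP {gp:6.2f}")
    print(f"LP cumulative: {lp_total:.2f} (cap {CAP})")

Once the cap binds, later profits bypass the investor entirely, which is how such a structure can offer venture-scale returns while preserving the non-profit’s claim on the long-run upside.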

However, the crisis exposed several governance challenges, primarily in executive management oversight, including board-management trust issues and partner conflicts – all common themes in corporate governance.

At the root of executive governance lies the principal-agent problem, characterized by misaligned interests and information asymmetry. The traditional remedies are shareholder oversight through the board, and executive equity ownership to align interests.

OpenAI’s approach was notably unconventional: it restricted shareholders’ powers, preventing them from appointing board representatives. More unusually, to insulate board decisions from the influence of capital, voting board members were prohibited from holding equity, a rule that applied to external and internal directors alike, including Altman himself.

While the lack of equity might matter little to Altman personally, given his financial independence and ideological motivation, the blanket prohibition on board members holding equity merits scrutiny. The board’s composition (three independent external directors and three internal directors holding no equity) eliminated the traditional mechanisms for aligning directors’ interests with the company’s.

The independent director system, originally designed to mitigate insider control issues, introduced its own concerns. The question arose: would directors without financial stakes adequately balance stakeholder interests in major decisions? The board’s apparent willingness to “sacrifice the company to preserve its mission” exemplifies this dilemma.

Another critical aspect was the management of partner conflicts. Disagreements among partners are virtually inevitable and manifest along three dimensions: strategy, power, and money. Interestingly, in companies like OpenAI, strategic differences often outweigh financial disputes. The Altman-Sutskever rift, while not fully transparent, appears rooted in divergent views on how to achieve their shared mission, a strategic conflict that escalated into a power struggle.

Traditional safeguards against partner conflicts include predetermined governance rules, exit mechanisms, or structural arrangements that secure decision-making authority. OpenAI’s failure to adopt such measures drew criticism. The situation was more nuanced, however: in 2019, when Altman became CEO, the board may not yet have been prepared to grant him special powers such as veto rights.

In my analysis, this crisis reveals a unique challenge in AI company governance: human limitations become particularly evident when dealing with transformative technology. The apparently impulsive decision-making by Sutskever and other directors could have far-reaching implications for humanity’s future. While their commitment to ideals is admirable, from a governance perspective, excessive idealism might undermine the very organizational foundation needed to achieve those ideals.

Capital dependency presents another concern. OpenAI’s reliance on Microsoft, particularly the June 2023 creation of a special-purpose holding company for Microsoft, suggests deeper operational involvement. This raises concerns about OpenAI potentially becoming an instrument of large corporate control.

This situation exemplifies a fundamental governance paradox: balancing entrepreneurial empowerment with power constraints. For a company like OpenAI, established recently yet capable of influencing humanity’s future, this challenge is particularly acute. Its rapid technological advancement and potential impact far exceed typical corporate parameters, complicating the power constraint equation.

After careful consideration, I suggest that given OpenAI’s current influence, implementing moderate constraints on Altman’s authority might better serve both corporate and human interests in the long term.

In conclusion, these challenges, while significant, provide valuable insights into evolving governance models. The AI era demands more transparent, open, balanced, and efficient governance structures. This crisis might serve as a crucial learning experience in developing such frameworks.

Sources:

https://en.wikipedia.org/wiki/Removal_of_Sam_Altman_from_OpenAI

https://www.wsj.com/tech/ai/openai-seeks-new-valuation-of-up-to-90-billion-in-sale-of-existing-shares-ed6229e0

https://www.lexology.com/library/detail.aspx?g=3854b224-bae8-4b29-b244-863e6b48982d

https://fortune.com/longform/chatgpt-openai-sam-altman-microsoft/

https://www.newyorker.com/magazine/2023/12/11/the-inside-story-of-microsofts-partnership-with-openai  
