OpenAI in Court in California: Witnesses Testify That Product Killed Safety


OpenAI finds itself in court in California, not as plaintiff but as defendant in one of the biggest cases in the artificial intelligence industry. Elon Musk, a co-founder who later left the company, is suing on the grounds that OpenAI betrayed its original mission, AI safety and benefit to humanity, in favour of commercial interests. And the witnesses who have given statements are not helping the company.

Rosie Campbell, who joined the AGI preparedness team in 2021 and left in 2024 after her team was broken up, described a company that transformed in front of her eyes. "When I joined, everything was very research-focused and we often talked about AGI and questions of safety," she testified. "Over time it became more of a product-focused organisation." Translation: the researchers were replaced by the marketers. The Superalignment team, one of the key safety departments, was shut down in the same period.

Campbell pointed to a concrete incident: Microsoft launched GPT-4 in India through Bing before the model had even passed evaluation by the deployment safety board. Even if the risk from the model itself was minimal, the point is different: if you don't honour the process for today's foundation models, what happens when the technology becomes truly dangerous?

Tasha McCauley, a former board member of OpenAI's nonprofit arm, went further. She testified that Sam Altman, the CEO, misled board members, did not inform them about the public launch of ChatGPT, and concealed potential conflicts of interest. "We are a nonprofit board and our mandate was to oversee the for-profit arm beneath us," she said. "We did not have a high degree of trust that the information they were giving us allowed us to make decisions in an informed way."

When the board tried to remove Altman in 2023, employees rallied around him, Microsoft intervened, and the decision was reversed. That is now a documented fact in court. "If it all comes down to one chief executive making such decisions, and at stake is the public good, that is very suboptimal," McCauley added. In other words: a company holding technology with global implications in its hands should not depend on the whim of one man.

The expert witness for Musk's team, David Schizer, a former dean of Columbia Law School, sharpened the question: "OpenAI emphasised that a key part of their mission is safety and that they would put safety before profit. Part of that is taking safety rules seriously. If something needs to be the subject of a safety evaluation, it must happen. It's about the process itself."

OpenAI declined to comment on its current approach to AGI alignment, but noted that it publishes model evaluations publicly and that in February it hired a new head of preparedness, Dylan Scandinaro, previously of Anthropic. That is an institutional response to a crisis of trust, but not to the structural question. What the witnesses are asking is not whether OpenAI has processes, but whether it has the will to respect them when they collide with deadlines, investors, or political pressure.

For the Balkans this might sound far away, but it is anything but. The same models reach our phones, our schools, and our businesses. When the global AI industry cannot respect its own rules, regulators from Brussels to Skopje should ask themselves one question: if the people creating the technology cannot trust their own company, why should we trust it?