OpenAI is cutting corners, and fast. Once known for its rigorous AI safety checks, the $300 billion juggernaut is now giving testers just days, not months, to evaluate its latest large language models. That’s a major shift from GPT-4, which underwent six months of scrutiny before launch. The push? A full-speed sprint to stay ahead of Google (GOOG, Financial), Meta (META, Financial), and Elon Musk’s xAI. Internally, researchers are sounding the alarm, calling the shortened review process “reckless” and “a recipe for disaster,” especially as these new models grow more powerful and capable of tasks that could be misused in dangerous ways.
What’s driving the rush? Competitive pressure, plain and simple. OpenAI wants its new o3 model out the door as early as next week. And with no binding safety laws in the U.S. or U.K., only voluntary commitments, there’s little to stop it. Europe’s upcoming AI Act will impose tougher standards, but for now the industry’s most powerful models are being tested on tight deadlines, often using incomplete or pre-release versions. Former staff say this isn’t just a time-saving tweak; it’s a shift away from the company’s original commitment to prioritize safety above all else.
OpenAI insists its evaluations are still thorough, thanks to automation and streamlined processes. But insiders claim that safety tests aren’t being run on the final models and that dangerous capabilities may still be slipping through the cracks. If something goes wrong, whether a misuse incident or an international backlash, the ripple effects could hit the entire AI ecosystem. Translation: the speed-over-safety strategy might juice short-term innovation, but the long-term risks are getting harder to ignore.