OpenAI has announced a delay in the release of its upcoming open-weights AI model, which was originally expected to launch in early summer.
Highlights
- Delayed Timeline: OpenAI has pushed back the release of its upcoming open-weights model to late summer 2025, following a major research breakthrough.
- o3 Progress Drives Delay: Advancements in the o3 series—including a 75.7% ARC-AGI score and autonomous detection of a Linux kernel vulnerability—are influencing the model’s refinement.
- OpenAI’s First Open Release in Years: The model is expected to rival open-weight models like Meta’s LLaMA and Mistral’s Magistral, offering high-level reasoning capabilities.
- o3 Pro Launched in the Meantime: OpenAI released o3 Pro via ChatGPT and API access, showcasing the reliability and performance benchmarks likely to inform the open-weights model.
- Open-Weights ≠ Open-Source: The model will share trained parameters, but retain proprietary control over datasets and architecture—allowing for local deployment and fine-tuning.
- Responding to Industry Trends: Influences from Meta, DeepSeek, and Qwen have shaped OpenAI’s middle-ground approach—balancing accessibility with control.
- Rebuilding Developer Trust: OpenAI aims to reengage the dev community after criticism of its closed-source pivot, acknowledging past missteps and reasserting its relevance in open AI progress.
The company now aims to release the model later this year, citing an unexpected research breakthrough. CEO Sam Altman shared the update on X, noting that the delay stems from a significant and promising development by OpenAI’s research team.
OpenAI Reenters Open-Weights Race
The new model is set to be OpenAI’s first open release in years and is expected to compete with other open-weight reasoning models, such as DeepSeek’s R1 and Meta’s LLaMA series.
Positioned to approach the reasoning performance of OpenAI’s proprietary o-series models, the release was anticipated to offer high-level reasoning capabilities and broader accessibility to the developer community.
The delay comes amid growing momentum in the open AI space. In recent weeks, Mistral released its Magistral models focused on structured reasoning, and Qwen introduced hybrid models that can toggle between rapid response and multi-step reasoning.
These developments have intensified expectations for OpenAI’s contribution to the open model ecosystem.
Delay Linked to Progress in o3 Research
OpenAI’s announcement suggests that the delay is related to advancements in its o3 model series. Recent reports indicate that o3 has achieved a 75.7% score on the ARC-AGI-Pub benchmark, a key test of general reasoning.
Notably, the model was also able to autonomously detect a zero-day vulnerability in the Linux kernel (CVE-2025-37899), highlighting its advanced capabilities in real-world scenarios.
These findings suggest that OpenAI may be integrating recent research progress into the upcoming open-weights model, aiming to enhance its performance and utility.
o3 Pro Rolls Out During Interim
Alongside the delay, OpenAI has introduced o3 Pro, an enhanced version of its reasoning model, now available through ChatGPT Pro and the OpenAI API.
According to Altman, the model has received strong evaluations across disciplines such as education, coding, and scientific research, and consistently outperforms previous models. It also achieved a full reliability score in internal assessments.
This release may offer a preview of the technology and performance level expected in the forthcoming open-weights model.
Clarifying the Term “Open-Weights”
While sometimes confused with fully open-source models, OpenAI’s forthcoming release will be an “open-weights” model. This means the company will share the trained parameters but will retain proprietary control over the training datasets, source code, and architecture.
This approach allows developers to:
- Run the model locally
- Fine-tune it for custom use cases
- Deploy it in various environments
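To make the distinction concrete, here is a toy sketch of the open-weights idea: the vendor publishes only the trained parameters as a file, which anyone can load, run, and fine-tune locally, while the training data and pipeline stay private. The tiny linear model and file format here are purely illustrative, not OpenAI's actual release format (real models ship weights as large binary tensor files).

```python
import json
import os
import tempfile

# Hypothetical illustration: "open weights" means the trained parameters are
# published as a file, even though the training code and data remain private.

# 1. The vendor publishes trained parameters (here, a toy linear model y = w*x + b).
published_weights = {"w": 2.0, "b": 0.5}

path = os.path.join(tempfile.gettempdir(), "open_weights.json")
with open(path, "w") as f:
    json.dump(published_weights, f)

# 2. A developer downloads the weights and runs the model locally.
with open(path) as f:
    weights = json.load(f)

def predict(x, w):
    """Run the 'model' using only the published parameters."""
    return w["w"] * x + w["b"]

print(predict(3.0, weights))  # 6.5

# 3. The developer can also fine-tune the parameters on their own data,
#    without access to the original training pipeline.
weights["b"] += 0.1  # one toy "fine-tuning" update
print(predict(3.0, weights))  # 6.6
```

The point of the sketch is the division of control: the parameters are fully in the developer's hands, but how they were produced is not.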
Learning from Industry Peers
OpenAI’s shift toward releasing an open-weights model is partly influenced by the growing adoption of similar frameworks by other major players.
Meta’s LLaMA models, which offer broad accessibility under specific licenses, and DeepSeek’s transparent reasoning workflows on affordable infrastructure have both helped define a middle ground between openness and proprietary control.
These examples appear to have shaped OpenAI’s strategy in designing its upcoming release to be both developer-friendly and aligned with the company’s internal research standards.
The open-weights model release also holds symbolic value. OpenAI has faced criticism for moving away from its open-source roots, a shift that Altman acknowledged publicly when he admitted the company had landed on the “wrong side of history” in the open-source debate.