OpenAI has announced that its GPT-4 model will be fully phased out of the ChatGPT product by April 30, 2025, with GPT-4o becoming the new default model for ChatGPT Plus users.
Highlights
While GPT-4 will remain accessible via OpenAI’s API, it will no longer power the ChatGPT interface for consumers.
The decision was shared through OpenAI’s official changelog, where the company noted that GPT-4o offers improved performance across a broad range of tasks.
These include writing, coding, STEM problem-solving, and conversation handling. According to OpenAI, GPT-4o demonstrates better instruction-following behavior and enhanced reasoning capabilities, making it a suitable replacement for GPT-4.
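Because the retirement applies only to the ChatGPT interface, developers can continue to request GPT-4 explicitly through the API while GPT-4o takes over the consumer product. The snippet below is a minimal sketch using the official openai Python SDK; the prompt is illustrative and not drawn from OpenAI's announcement, and it assumes an API key is configured in the environment.

```python
# Minimal sketch: GPT-4 stays callable through the API even after its removal
# from ChatGPT. Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

for model in ("gpt-4", "gpt-4o"):  # both remain selectable for API users
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Summarize the GPT-4 retirement in one sentence."}],
    )
    print(model, "->", response.choices[0].message.content)
```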
A Successor to a Foundational Model
Launched in March 2023, GPT-4 marked a major upgrade over GPT-3.5, introducing initial multimodal capabilities and serving as the first widely deployed model to support both text and image inputs.
It was integrated across various platforms, including Microsoft’s Copilot suite, expanding its reach into productivity software.
Even as the model saw widespread use, OpenAI introduced GPT-4 Turbo in late 2023 as a faster, more efficient alternative. GPT-4o is the next evolution in this sequence, now set to become the primary engine for ChatGPT.
Efficiency and Cost Optimization
The development of GPT-4 required a reported investment exceeding $100 million and involved hundreds of engineers.
OpenAI has since indicated that, based on progress achieved with GPT-4.5, the company could now reconstruct GPT-4 using a significantly smaller team of just five to ten engineers. These efficiency gains highlight the rapid advancement in model training and deployment techniques.
GPT-4o brings additional cost and speed benefits (a rough cost comparison is sketched after the list):
- Speed: Generates responses up to twice as fast as GPT-4.
- Cost: Input tokens are priced at $2.50 per million, and output tokens at $10 per million—substantially lower than GPT-4’s $30 and $60 rates, respectively.
- Context Window: Supports up to 128,000 tokens, a significant leap from GPT-4’s 8,192-token limit, allowing it to process longer and more complex prompts effectively.
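To put those rates in perspective, the sketch below estimates the per-request cost of a hypothetical workload under both pricing tiers. The token counts are made-up assumptions; the rates are the per-million-token figures quoted above.

```python
# Back-of-the-envelope cost comparison using the per-million-token rates
# cited above. The request sizes are hypothetical examples.
PRICES = {                       # (input $, output $) per 1M tokens
    "gpt-4":  (30.00, 60.00),
    "gpt-4o": (2.50, 10.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in dollars for a single request."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens / 1_000_000) * in_rate + (output_tokens / 1_000_000) * out_rate

# Example: a 5,000-token prompt with a 1,000-token reply.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 5_000, 1_000):.4f}")
# gpt-4 comes to about $0.21 per request versus roughly $0.0225 for gpt-4o,
# around a 9x reduction for this particular input/output mix.
```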
Enhanced Capabilities Across Modalities and Languages
GPT-4o is built with advanced multimodal functionality, enabling it to process inputs across text, image, audio, and video.
Unlike GPT-4, which had limited multimodal integration, GPT-4o was trained end-to-end across these input types, improving the consistency and coherence of responses in interactive settings.
In addition to modality support, GPT-4o offers stronger multilingual capabilities. OpenAI has optimized tokenization for non-Latin script languages such as Hindi, Chinese, and Korean. These improvements allow for more accurate and efficient responses in a wider range of languages.
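As an illustration, GPT-4o's multimodal input can be exercised through the same chat completions endpoint by mixing text and image parts in a single message. The sketch below uses the openai Python SDK; the image URL is a placeholder, the prompt is illustrative, and audio or video handling is not shown.

```python
# Sketch of a mixed text-and-image request to GPT-4o via the chat completions
# API. The image URL below is a placeholder, not a real asset.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this chart in Hindi and in Korean."},
            {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```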
Ongoing Legal Context
GPT-4 has also been part of several legal challenges. Notably, it has been cited in lawsuits from major publishers, including The New York Times, alleging that copyrighted content was used without authorization during model training.
OpenAI maintains that its practices fall under the fair use doctrine, though the legal debate remains unresolved.
Future Model Variants
While GPT-4’s tenure in ChatGPT concludes, OpenAI appears to be preparing for additional releases.
A recent leak by reverse engineer Tibor Blaho points to potential future models under development, including GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano, along with reasoning-oriented variants codenamed o3 and o4-mini.
These models may reflect a more modular approach, tailored to specific deployment needs—ranging from mobile compatibility to complex reasoning use cases.