    OpenAI Releases First Open-Weight Models in Years: gpt-oss-120b and gpt-oss-20b

    By EchoCraft AI | August 6, 2025

    OpenAI has released two new open-weight large language models — gpt-oss-120b and gpt-oss-20b — marking its first major open-source release since GPT-2 over five years ago.

    Highlights

    • First open-weight release in 5+ years: OpenAI publishes gpt-oss-120b and gpt-oss-20b under Apache 2.0, allowing full commercial use.
    • Scalable across hardware tiers: 120b is optimized for a single H100 GPU, while 20b runs on laptops with 16GB RAM — democratizing access.
    • Mixture-of-Experts (MoE) architecture: Both models activate a subset of parameters per token for efficient inference and reasoning.
    • Competitive performance: Outperforms DeepSeek’s R1 and Qwen in some benchmarks, though still behind OpenAI’s own o-series models.
    • High hallucination rates: The 20b and 120b models exhibit 49–53% hallucination in PersonQA — a tradeoff for openness and accessibility.
    • Agentic capabilities: Supports tool use, chain-of-thought prompting, structured outputs, and adjustable reasoning — ideal for building autonomous AI agents.
    • Safety-first release: Evaluated under OpenAI’s Preparedness Framework; results showed no risk severe enough to restrict open-weight deployment.
    • Available via major platforms: Deployable through Hugging Face, AWS Bedrock, Azure AI Foundry, and SageMaker.
    • Strategic shift: Signals OpenAI’s response to global competition and a pivot back toward open-source collaboration.
    • Altman’s new stance: CEO Sam Altman admits OpenAI was “on the wrong side of history” regarding openness — now aiming to empower developers globally.

    Both models are now available for download via Hugging Face under the permissive Apache 2.0 license, allowing full commercial use.

    Differences and Capabilities

    The two models differ in scale, hardware requirements, and intended use.

    • gpt-oss-120b is a large-scale model designed to run efficiently on a single NVIDIA H100 GPU. It uses a Mixture-of-Experts (MoE) architecture with 5.1B active parameters per token, enabling strong performance on reasoning tasks with efficient inference.
    • gpt-oss-20b, a smaller variant, is optimized to run on consumer-grade hardware — such as laptops with 16GB RAM — making advanced reasoning AI more accessible to individual developers and smaller teams.
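    To make the hardware claim concrete, here is a minimal local-inference sketch using the Hugging Face transformers library. The repository name "openai/gpt-oss-20b" and the chat-template usage are assumptions to be checked against the actual model card.

```python
# A minimal local-inference sketch using Hugging Face transformers.
# The repository id "openai/gpt-oss-20b" is an assumption; verify it against the
# actual Hugging Face listing. device_map="auto" requires the accelerate package.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # spread layers across available GPU/CPU memory
)

messages = [{"role": "user", "content": "Explain mixture-of-experts in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```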

    Benchmark Performance and Limitations

    OpenAI reports that both models perform competitively with existing open-weight models. For example:

    • On Codeforces with tools, the 120b and 20b scored 2622 and 2516, respectively — outperforming DeepSeek’s R1 model.
    • In Humanity’s Last Exam, both models outperformed Qwen and DeepSeek but remained below the performance of OpenAI’s o-series models like o3 and o4-mini.

    The models also demonstrate higher hallucination rates compared to OpenAI’s proprietary systems.

    On PersonQA, gpt-oss-120b and gpt-oss-20b showed hallucination rates of 49% and 53%, respectively — significantly above the o1 model’s 15% and o4-mini’s 36%.

    Architecture and Use in Agentic Workflows

    Both models employ Mixture-of-Experts (MoE) design, which activates only a subset of parameters per token to reduce compute overhead.

    • gpt-oss-120b: ~5.1B active parameters per token
    • gpt-oss-20b: ~3.6B active parameters per token
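    For readers unfamiliar with MoE, the sketch below illustrates the general idea of top-k expert routing, where each token is sent to only a few expert networks. The expert count, layer sizes, and k value are illustrative placeholders, not the actual gpt-oss internals.

```python
# A minimal sketch of top-k mixture-of-experts routing in PyTorch.
# Sizes and expert count are illustrative only, not gpt-oss internals.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts)   # router scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                            # x: (tokens, d_model)
        scores = self.gate(x)                        # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)   # keep only the k best experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e             # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

moe = TopKMoE()
tokens = torch.randn(4, 512)
print(moe(tokens).shape)  # torch.Size([4, 512]); only k of 8 experts ran per token
```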

    The models support advanced reasoning workflows, including:

    • Chain-of-thought prompting
    • Tool use, such as code execution and web browsing
    • Structured output generation
    • Adjustable reasoning effort
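    As an illustration of these capabilities, the sketch below combines a structured-output instruction with an adjustable reasoning hint. The "Reasoning: high" control line and the repository name are assumptions; consult the model card for the supported syntax.

```python
# A hedged sketch combining structured output with an adjustable reasoning hint.
# The "Reasoning: high" system line is an assumed control mechanism, and the
# repository id "openai/gpt-oss-20b" is assumed as before.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"  # assumed Hugging Face repository name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {
        "role": "system",
        "content": (
            "Reasoning: high\n"  # assumed syntax for adjustable reasoning effort
            'Respond only with JSON of the form {"answer": string, "steps": [string]}.'
        ),
    },
    {"role": "user", "content": "How many weekdays fall between 2025-08-06 and 2025-09-01?"},
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```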

    Safety Measures and Responsible Release

    Prior to release, OpenAI conducted extensive internal and third-party evaluations under its Preparedness Framework, testing for misuse in high-risk domains such as cybersecurity and biotechnology.

    The results indicated that the models do not meet the criteria for “high capability” in dangerous applications, enabling OpenAI to proceed with open-weight deployment while maintaining its safety commitments.

    Accessibility and Deployment Options

    Beyond open access on Hugging Face, the gpt-oss models are also being integrated into major cloud platforms:

    • AWS Bedrock
    • Azure AI Foundry
    • SageMaker
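    As one example of the cloud route, the sketch below calls a gpt-oss model through AWS Bedrock’s Converse API. The model identifier shown is hypothetical; replace it with the ID listed in the Bedrock model catalog for your region.

```python
# A hedged sketch of invoking a gpt-oss model via AWS Bedrock's Converse API.
# The modelId "openai.gpt-oss-120b-1:0" is a hypothetical placeholder; look up
# the real identifier in the Bedrock catalog before running this.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-west-2")

response = client.converse(
    modelId="openai.gpt-oss-120b-1:0",  # hypothetical Bedrock model ID
    messages=[
        {"role": "user", "content": [{"text": "Summarize the Apache 2.0 license in two sentences."}]}
    ],
    inferenceConfig={"maxTokens": 300, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```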

    OpenAI claims that gpt-oss-120b is up to 3× more cost-efficient than competitors like Gemini or DeepSeek’s R1 on AWS infrastructure — a potentially significant advantage for enterprises seeking scalable, transparent AI solutions.

    The release comes at a time of increasing pressure from global AI labs — particularly in China — that are rapidly advancing open-weight models. DeepSeek’s R1, Alibaba’s Qwen, and Moonshot AI have all made substantial progress, prompting OpenAI to revisit its approach to openness.

    CEO Sam Altman acknowledged this shift in direction, stating earlier this year that the company had been “on the wrong side of history” regarding open-source AI.

    He emphasized OpenAI’s renewed commitment to its mission of ensuring that AGI benefits all of humanity by enabling broader developer access.

    “We are excited for the world to be building on an open AI stack created in the United States, based on democratic values,” Altman said in a recent statement.
