Mistral has released its newest large language model, Mistral Medium 3, designed to deliver strong benchmark performance at a comparatively low operational cost.
Highlights
Positioned as a cost-effective alternative to top-tier models like Anthropic's Claude Sonnet 3.7, Medium 3 is priced at $0.40 per million input tokens and $2.00 per million output tokens, making it a competitive option for businesses with high-volume AI workloads.
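At those rates, estimating spend for a given workload is straightforward arithmetic. The sketch below works through an example in Python; the token volumes are illustrative assumptions, not figures from Mistral.

```python
# Rough cost estimate at the quoted Mistral Medium 3 rates.
INPUT_PRICE_PER_M = 0.40   # USD per million input tokens
OUTPUT_PRICE_PER_M = 2.00  # USD per million output tokens

def estimated_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a given token volume."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Hypothetical monthly workload: 500M input tokens, 100M output tokens.
print(f"${estimated_cost(500_000_000, 100_000_000):,.2f}")  # $400.00
```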
According to Mistral, Medium 3 performs at or above 90% of Claude Sonnet 3.7’s benchmark results while remaining significantly more affordable. It outperforms several open-weight models, including Meta’s Llama 4 Maverick and Cohere’s Command R, across standard AI benchmarks.
The model is available via Mistral’s API and is compatible with self-hosted environments using four GPUs or more, as well as all major cloud platforms.
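For teams taking the hosted route, calling the model looks like a standard chat-completions request. The sketch below assumes Mistral's public API endpoint and that Medium 3 is exposed under the alias "mistral-medium-latest"; check Mistral's documentation for the exact model name available to your account.

```python
# Minimal sketch of a chat-completions request to Mistral's hosted API.
import os
import requests

response = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "mistral-medium-latest",  # assumed alias for Medium 3
        "messages": [
            {"role": "user", "content": "Summarize this quarterly report in three bullet points."},
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```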
Built for Enterprise Use Cases
Mistral Medium 3 is engineered to handle diverse enterprise applications—ranging from software development and STEM-related content generation to complex multimodal tasks.
During its beta phase, the model was tested by companies in financial services, healthcare, and energy, with use cases including automated customer service, data processing, and workflow optimization.
The model’s infrastructure flexibility makes it suitable for organizations balancing cloud costs and on-premise compute performance. Mistral reports that Medium 3 delivers favorable results even when compared to cost-optimized models like DeepSeek v3, both in hosted and local environments.
Integration with Major Cloud Platforms
To support seamless deployment, Mistral Medium 3 is now integrated with Amazon SageMaker, with availability on Microsoft Azure AI Foundry and Google Cloud Vertex AI expected soon.
These integrations allow enterprises to adopt the model without restructuring existing AI pipelines or infrastructure.
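On AWS, for example, deployment would typically go through SageMaker JumpStart with the SageMaker Python SDK. The sketch below is a rough illustration only: the model_id is a placeholder rather than a confirmed identifier, and the instance type and request format depend on how Mistral's listing is packaged.

```python
# Hypothetical SageMaker JumpStart deployment; the model_id is a placeholder,
# not a confirmed identifier for Mistral Medium 3.
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="mistral-medium-3")  # placeholder ID
predictor = model.deploy(
    instance_type="ml.g5.12xlarge",  # assumed multi-GPU instance
    accept_eula=True,
)

# Request format varies by container; this follows the common
# text-generation style used by many JumpStart LLM listings.
print(predictor.predict({"inputs": "Draft a summary of Q1 results."}))
```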
Expanding AI Tools for Business
Alongside the model launch, Mistral has introduced Le Chat Enterprise, a productivity-focused AI assistant tailored for corporate environments.
Following a successful private preview, the platform is now publicly available and includes features such as AI agent builders and integration with Gmail, Google Drive, and SharePoint.
The company also confirmed that Le Chat Enterprise will support the Model Context Protocol (MCP)—a standard developed by Anthropic and now supported by OpenAI and Google—for improved interoperability with enterprise data systems.
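For context, MCP works by having a server expose tools and data that an assistant can call over a standard interface. Below is a minimal sketch of such a server using the official Python SDK (the mcp package); the ticket-lookup tool is purely illustrative, and the announcement does not detail how Le Chat Enterprise will connect to such servers.

```python
# Minimal MCP tool server sketch using the official Python SDK.
# The tool below is an illustrative placeholder, not a Mistral or Anthropic example.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-docs")

@mcp.tool()
def lookup_ticket(ticket_id: str) -> str:
    """Return a short summary of an internal support ticket (stubbed)."""
    # In a real deployment this would query an enterprise system of record.
    return f"Ticket {ticket_id}: status=open, priority=high"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default; MCP clients connect to it
```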
Mistral’s Rapid Ascent in the AI Sector
Founded in April 2023, Mistral has quickly established itself as a notable player in the AI landscape, having raised more than €1.1 billion (roughly $1.2 billion) to date from investors including General Catalyst.
Its growing list of enterprise clients includes BNP Paribas, AXA, and Mirakl, signaling strong early adoption, including in regulated sectors that require secure and robust AI systems.
Mistral is also recognized for its open-source-first approach. Previous releases like Mistral 7B and Mixtral 8x7B were published under the Apache 2.0 license, fostering transparency and collaboration within the developer and research communities.
Model Portfolio
The company’s offerings go beyond Medium 3, with a growing suite of models for targeted applications:
- Codestral 22B: Specializes in code generation across 80+ programming languages and performs strongly on the HumanEval benchmark.
- Mathstral 7B: Designed for mathematics and STEM tasks, demonstrating reliable performance on MATH and MMLU evaluations.