
Mistral AI

Freemium · 🇫🇷 · Decacorn · Scaling Losses

Europe's frontier AI β€” open, efficient, yours to own

85

Overall score

44

Heat score

Pricing

Le Chat Free: $0/month
Le Chat Pro: $14.99/month
Le Chat Pro (Student): $7.04/month
Le Chat Team: $24.99/user/month
La Plateforme API (Free Tier): $0 (experimentation)
La Plateforme API (Pay-as-you-go): Usage-based from $0.02/M tokens
Enterprise: Custom
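The pay-as-you-go tier bills per million tokens. A minimal sketch of the billing arithmetic, assuming the $0.02/M input rate from the table above and a hypothetical $0.06/M output rate (actual rates vary by model, so check the current price list):

```python
def api_cost_usd(input_tokens: int, output_tokens: int,
                 in_rate_per_m: float, out_rate_per_m: float) -> float:
    """Pay-as-you-go API cost: each token class is billed at a
    per-million-token rate."""
    return (input_tokens / 1e6) * in_rate_per_m + (output_tokens / 1e6) * out_rate_per_m

# e.g. 50M input + 10M output tokens at $0.02/M in, $0.06/M out
cost = api_cost_usd(50_000_000, 10_000_000, 0.02, 0.06)  # -> $1.60
```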

Technical Specs

Inputs

Text Prompt, Document Upload, Image, URL, Audio File, Code, PDF, CSV, Spreadsheet

Outputs

Generated Text, Code Snippet, AI Image, Summary, Translation, Embedding, JSON, Structured Data, Research Report

AI Type

LLM

Model Architecture

MoE Transformer

Daily Prompts

N/A

Context Length

256K

Output Quality

Accuracy

89%

Content

90%

Reasoning

84%

Company Profile

Company

Mistral AI SAS

Founded

2023

HQ

Paris, Île-de-France, France

Employees

350

Total Raised / Total Funding

$3.05B

Revenue

$60M

Valuation

$13.8B

ARR

$400M

CEO

Arthur Mensch

Overview

Estimated Paid Users

250K

Current estimate

Total Earnings to Date

$95M

+13.64% from last month

Market Share

3.2%

Current share

Average Session

18

Per active user

Hallucination Rate

12%

Model quality signal

Growth Rate

+7.14%

Monthly active users

Burn Rate

$280M

Total expenses / years active

Paid User Gain

+28.00%

Monthly paid user trend

Profit Analysis

-$400M

Total Loss

$600M

Total Profit

$0

Performance Metrics

Accuracy

89%

Context

90%

Reasoning

84%

Safety

88%

Benchmarks

MMLU

88.7%

HumanEval

92%

MMLU-Pro

73.1%

AGIEval

74%

GPQA Diamond

43.9%

MATH-500

91%

Chatbot Arena Elo

1418
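Arena Elo ratings map to head-to-head win probabilities via the standard Elo expected-score formula. A small sketch using the 1418 rating reported in the change log and a hypothetical 1318-rated opponent:

```python
def elo_win_prob(r_a: float, r_b: float) -> float:
    """Expected probability that a model rated r_a beats one rated r_b
    under the standard Elo model (400-point logistic scale)."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

p = elo_win_prob(1418, 1318)  # 100-point gap -> ~64% expected win rate
```

A 100-point Elo gap therefore corresponds to winning roughly two matchups in three, which is why small rating differences near the top of the leaderboard still matter.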

Mistral AI Models

Mistral 7B

Type: Text

Description: First Mistral model, 7B parameters, Apache 2.0. Outperformed LLaMA 2 13B on all benchmarks and LLaMA 34B on many, despite having fewer parameters. Context: 32K tokens.

Context Length: 32K tokens

Architecture: Dense Transformer

Mixtral 8x7B

Type: Text

Description: Sparse mixture-of-experts model with 8 experts, 2 active per token. Outperformed LLaMA 2 70B and GPT-3.5 on most benchmarks. 46.7B total parameters, 12.9B active. Apache 2.0.

Context Length: 32K tokens

Architecture: MoE Transformer
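The top-2 routing described above can be sketched in a few lines: a router scores all 8 experts for each token, keeps the two highest-scoring ones, and renormalizes their gate weights with a softmax over just that pair. This is an illustrative toy, not Mistral's implementation, and the gate logits below are made up:

```python
import math

def top2_route(gate_logits: list[float]) -> dict[int, float]:
    """Select the 2 highest-scoring experts and softmax-normalize
    their weights over just those two (Mixtral-style sparse MoE:
    8 experts per layer, 2 active per token)."""
    top2 = sorted(range(len(gate_logits)),
                  key=lambda i: gate_logits[i], reverse=True)[:2]
    exps = [math.exp(gate_logits[i]) for i in top2]
    z = sum(exps)
    return {i: e / z for i, e in zip(top2, exps)}

# hypothetical router scores for 8 experts
weights = top2_route([0.1, 2.0, -1.0, 0.5, 1.5, 0.0, -0.3, 0.2])
```

Because only two experts run per token, compute scales with the ~12.9B active parameters rather than the 46.7B total, which is the source of the efficiency claim.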

Mistral Large (v1)

Type: Text

Description: First proprietary frontier model. Natively fluent in English, French, Spanish, German, Italian. 32K context. MMLU 81.2%, positioned between GPT-3.5 and GPT-4 in performance.

Context Length: 32K tokens

Architecture: Dense Transformer

Mistral Large 2

Type: Text

Description: Major upgrade. 84% MMLU, 92% HumanEval. New point on performance/cost Pareto frontier for open models. 128K context window. Strong multilingual reasoning, reduced hallucinations. Available on La Plateforme and Hugging Face (research weights).

Context Length: 128K tokens

Architecture: Dense Transformer

Mixtral 8x22B

Type: Text

Description: Larger MoE model with 8 experts and 22B parameters each. 141B total parameters, ~39B active. 64K context window. Strong coding and multilingual performance. Apache 2.0.

Context Length: 64K tokens

Architecture: MoE Transformer

Funding Rounds & Investors

Total Funding

$3.05B

Rounds

5

Series C

$1.9B

Sep 2025

Led by ASML, which invested ~€1.3B for an ~11% stake; total round €1.7B; post-money valuation €11.7B ($13.7B); source: Mistral AI official blog, Orrick advisory announcement, Bloomberg

Series B

$645M

Jun 2024

Led by General Catalyst; €600M ($645M) round comprising ~$503M equity + $142M debt; valuation €5.8B ($6.2B); source: Wikipedia, Orrick, and company blog

Strategic

$16M

Feb 2024

€15M convertible note as part of Azure partnership; Microsoft received right to convert to equity; source: Wikipedia and TechCrunch

Founders/Team


Arthur Mensch

Co-founder & CEO


Guillaume Lample

Co-founder & Chief Scientist

Timothée Lacroix

Co-founder & CTO

Direct competitors

No direct competitors listed.

Change Log / Major Updates

2025 · Sep 8

Closed €1.7B Series C led by ASML, which took an 11% stake valued at €1.3B. Co-investors include DST Global, Andreessen Horowitz, Bpifrance, General Catalyst, Index Ventures, Lightspeed, and NVIDIA. Three co-founders became France's first AI billionaires.

2025 · Dec 2

Released Mistral Large 3, a 675B-parameter MoE model with 41B active parameters and 256K context window. Scored 88.7% MMLU, 92% HumanEval, and 1418 Elo on LMSYS Arena. Also released three Ministral 3 dense edge models (3B, 8B, 14B). All Apache 2.0 open-weight.

2026 · Feb 17

Acquired Paris-based serverless cloud startup Koyeb (13 employees, 3 co-founders) for undisclosed amount to accelerate Mistral Compute. Announced $400M ARR milestone. Plans for $1.4B Swedish data center investment revealed alongside 2026 revenue target of $1B.

Compliance, Integrations & Support

Industry: Not specified

Compliances: Not specified

Integrations: Microsoft Azure, Amazon Bedrock, Google Vertex AI, Hugging Face, LangChain, LlamaIndex, vLLM, Ollama, NVIDIA TensorRT, Google Drive, SharePoint, Agence France-Presse (AFP), Black Forest Labs Flux Ultra, Koyeb, Salesforce, IBM, Databricks, Accenture, BNP Paribas, GitHub, Mistral Vibe CLI, La Plateforme API, Mistral AI Studio, MCP Protocol

Support: Email, help center, community forum, enterprise support, chat support

Target audience: Developers, Enterprise Teams, AI Researchers, Data Scientists, Product Managers, European Businesses, Government Agencies, Finance Teams, Healthcare Organizations, Startups

Supported languages: English, French, Spanish, German, Italian, Portuguese, Dutch, Polish, Russian, Japanese, Chinese, Arabic, Korean, Hindi

Mistral AI Acquisitions


Koyeb

February 17, 2026

N/A


Reviews & Rating

0 reviews

No reviews yet

Be the first to share how Mistral AI performs for your workflow.

0.0

Accuracy

0.0

Ease of Use

0.0

Output Quality

0.0

Security

0.0

More About Mistral AI

In April 2023, three former AI researchers — Arthur Mensch from Google DeepMind and Guillaume Lample and Timothée Lacroix from Meta — walked away from some of the highest-paying jobs in technology to build what they believed Silicon Valley could not: a world-class AI laboratory rooted in Europe, committed to open science, and designed to compete without surrendering sovereignty. Eighteen months later, they had done exactly that. Mistral AI closed a €600 million Series B at a €5.8 billion valuation, became the fastest European startup to reach unicorn status, and released models — including the landmark Mixtral 8x7B — that redefined what efficiency meant in a field obsessed with scale. Their thesis was simple and disruptive: bigger is not always better, and a model that runs on a single GPU while matching the performance of systems ten times its size is worth more to the world than one that requires a data center to breathe.

Mistral's approach rests on three pillars: open weights, efficiency, and European identity. The company publishes its foundational models under the Apache 2.0 license β€” anyone can download, fine-tune, and deploy them commercially without paying a royalty. This has won it a developer community that now spans continents and contributed to Mistral's API platform processing billions of tokens daily. For enterprises, particularly in Europe's regulated industries, the open-source foundation answers a question that ChatGPT cannot: what happens to your data? French President Emmanuel Macron has publicly recommended Le Chat over ChatGPT to French citizens, and BNP Paribas, AXA, CMA CGM, and HSBC have all signed commercial agreements, with CMA CGM committing €100 million over five years. In September 2025, semiconductor giant ASML led a €1.7 billion Series C that valued Mistral at €11.7 billion and formalized a strategic partnership to apply AI to chip manufacturing β€” arguably the most consequential production bottleneck in the global technology supply chain.

As of early 2026, Mistral surpassed $400 million in annual recurring revenue, made its first acquisition by buying serverless cloud provider Koyeb to accelerate Mistral Compute, and announced $1.4 billion in Swedish data center investment. The Mistral 3 family β€” anchored by Mistral Large 3, a 675-billion-parameter mixture-of-experts model β€” scores 88.7% on MMLU, 92% on HumanEval, and ranks second among open-source non-reasoning models on the LMSYS Chatbot Arena. With 350 employees, founders who are now billionaires, and a roadmap stretching to sovereign European GPU clouds powered by nuclear energy, Mistral has become more than a startup. It is the institutional argument that AI does not have to be American to be frontier.

Mistral AI FAQs

What is the difference between Le Chat and La Plateforme?

Le Chat is Mistral's consumer-facing AI assistant β€” similar to ChatGPT β€” available on web, iOS, and Android with free and Pro tiers. La Plateforme is the developer API where you pay per token to integrate Mistral's models into your own applications. The two are completely separate: a Le Chat Pro subscription does not include API credits.
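For La Plateforme, requests go to Mistral's REST chat-completions endpoint. A minimal sketch of assembling such a request; the model identifier used here is an assumption, so check the current API docs for valid names:

```python
def build_chat_request(api_key: str, model: str, prompt: str):
    """Assemble a chat-completions request for La Plateforme.
    Endpoint and payload shape follow Mistral's public REST API;
    "mistral-small-latest" below is an assumed model name."""
    url = "https://api.mistral.ai/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, payload

url, headers, payload = build_chat_request("MY_KEY", "mistral-small-latest", "Bonjour!")
```

Send it with `requests.post(url, headers=headers, json=payload)`; the per-token billing applies here, not to a Le Chat subscription.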

Are Mistral's models really open source?

Many of Mistral's models (Mistral 7B, Mixtral 8x7B, Mistral Small, Mistral NeMo, Pixtral 12B, and the Mistral 3 family including Mistral Large 3) are released under the Apache 2.0 license, meaning you can download, fine-tune, and deploy them commercially for free. Some frontier models, such as Mistral Large 2, are proprietary and only accessible via API or Le Chat.

How does Mistral AI handle data privacy and GDPR?

Mistral AI is headquartered in Paris, France and operates under EU law by default. Enterprise and on-premises deployments offer full data residency in Europe, zero-retention options on La Plateforme, and GDPR-compliant processing. Le Chat Pro includes a No Telemetry Mode to prevent conversation data from being used for model training.

What models does Mistral AI offer in 2025-2026?

Mistral's lineup includes Mistral Large 3 (675B MoE, flagship), Mistral Medium 3 (balanced multimodal), Mistral Small 3.2 (24B efficient), Ministral 3 (3B/8B/14B edge variants), Magistral (reasoning models), Codestral (code specialist), Pixtral (vision), Mistral Embed (embeddings), and Voxtral (speech). Most smaller models are Apache 2.0 open-weight; frontier models are proprietary API-only.

How does Mistral compare to OpenAI and Anthropic in price?

Mistral is typically 70-80% cheaper than equivalent OpenAI or Anthropic offerings. Mistral Medium 3 at $0.40 per million input tokens delivers comparable performance to models costing $3-8 per million tokens from competitors. Le Chat Pro at $14.99/month is also cheaper than ChatGPT Plus ($20) and Claude Pro ($17-20).
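The arithmetic behind that comparison, using the per-million-token rates quoted above (the exact percentage depends on which competitor model and token mix you compare against):

```python
def relative_savings(mistral_rate: float, competitor_rate: float) -> float:
    """Fractional cost reduction relative to a competitor's
    per-million-token price."""
    return 1.0 - mistral_rate / competitor_rate

# $0.40/M (Mistral Medium 3) vs the quoted $3-8/M competitor range
low = relative_savings(0.40, 3.0)   # vs a $3/M model
high = relative_savings(0.40, 8.0)  # vs an $8/M model
```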