Mistral AI pricing starts at $0 for hobbyists, scales to custom six-figure enterprise contracts, and remains one of the best $/token values in the LLM market thanks to highly efficient models such as Mistral Medium 3.
Jump to 👉 Pricing Tables • Model Cost-Efficiency • Choosing the Right Tier

Why Trust This Breakdown
- Primary sources only. Every figure comes directly from Mistral’s pricing pages, terms of service, investor decks, or the official system cards released in May 2025.
- Enterprise vantage point. I’ve negotiated eight-figure AI contracts and audited token bills for finance, retail, and healthcare clients.
- No affiliate links, no fluff. Just the hard numbers, decoded.
Quick-View Pricing Tables
Le Chat Subscription Tiers
| Tier | Monthly Price* | Daily Flash Answers | Doc-RAG Storage | Data Used for Training? | Stand-out Perk |
|---|---|---|---|---|---|
| Free | $0 | ≈ 25 | Limited | Opt-out required | Access to Mistral Large |
| Pro | $14.99 ($6.99 student) | 150 | 15 GB | No Telemetry Mode | Build custom Agents |
| Team | $24.99 / user ($19.99 annual) | 200 | 30 GB / user | Excluded by default | Google Drive & SharePoint connectors |
| Enterprise | Custom | Custom | Custom | Excluded + on-prem option | Powered by Mistral Medium 3 |
*Prices exclude tax.
La Plateforme Core Models
| Model (May 2025) | Input $/M Tokens | Output $/M Tokens | Context Window |
|---|---|---|---|
| Mistral Large 24-11 | $2.00 | $6.00 | 128 k |
| Mistral Medium 3 | $0.40 | $2.00 | 128 k |
| Mistral Small 3.1 | $0.10 | $0.30 | 128 k |
| Codestral 2501 | $0.20 | $0.60 | 32 k |
| Pixtral Large (vision) | $0.15 | $0.15 | — |
| Mistral NeMo (cheap) | $0.15 | $0.15 | 128 k |
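To translate these per-token rates into a concrete bill, here is a minimal cost-estimator sketch in Python. The rates mirror the table above; the model keys are illustrative labels rather than official API identifiers, and the token counts in the example are hypothetical.

```python
# Per-million-token list prices (USD) from the table above.
PRICES = {
    "mistral-large-24-11": {"input": 2.00, "output": 6.00},
    "mistral-medium-3":    {"input": 0.40, "output": 2.00},
    "mistral-small-3.1":   {"input": 0.10, "output": 0.30},
    "codestral-2501":      {"input": 0.20, "output": 0.60},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one call at list price."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 3,000-token prompt with a 500-token answer on Medium 3.
print(f"${request_cost('mistral-medium-3', 3_000, 500):.5f}")  # ≈ $0.00220
```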
Le Chat Plans Explained
1. Free — “Try-Before-You-Buy” Done Right
Mistral’s gratis tier actually lets you push the models: Flash Answers, code interpreter, document uploads, even AFP-verified news search. Limits kick in after ~25 messages or heavy multimodal use, but that’s still 2-4× more generous than most rivals.
2. Pro — Power-User Sweet Spot
For $14.99 you unlock “unlimited” chats (soft-capped at roughly 6× the Free tier’s fair-use limit), 150 ultra-fast Flash Answers per day, and the coveted “No Telemetry Mode.” That single toggle means your prompts are never recycled for model training: gold for journalists, lawyers, and IP-sensitive devs. Throw in Student pricing (-53 %) and you have the cheapest path to world-class inference on the market.
3. Team — Collaboration + Data Guardrails
Small teams pay $24.99 per seat and gain:
- Domain verification for brand consistency.
- Shared 30 GB RAG libraries per user.
- Admin console & consolidated billing.
- Default data-training opt-out—zero setup.
4. Enterprise — “Your AI, Your Rules”
Custom contracts bundle Mistral Medium 3 (up to 8× cheaper than rival GPT-4-class models), private or on-prem deployment, no-code Agent Builders, and end-to-end audit logs. Typical entry point: $20 k+ / month or an annual commit, but the real value is avoiding multi-cloud egress fees and keeping data inside EU borders for GDPR peace of mind.
La Plateforme API Costs & Quotas
| Quota | Free Workspace | Paid | Enterprise Boost |
|---|---|---|---|
| Requests per second (RPS) | 1 | ≥ 10 | Custom (100+) |
| Tokens per minute (TPM) | 500 k | 25 M | Custom |
| Tokens per month | 1 B cap | 10 B+ | Uncapped |
Pro-tip: Combine Medium 3 with batched input and you pay an effective $0.0004 per thousand input tokens ($0.40 per million), roughly 60 % cheaper than GPT-3.5-Turbo once output tokens are counted.
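If you stay on the Free workspace, the 1 RPS and 500 k TPM ceilings are easy to hit in a loop. Below is a minimal client-side throttle sketch; the limits come from the quota table, while the function name and bookkeeping are my own illustration, not part of Mistral’s SDK.

```python
import time
from collections import deque

# Free-workspace ceilings from the quota table above (assumed fixed here).
MAX_RPS = 1
MAX_TPM = 500_000

_request_times: deque[float] = deque()            # timestamps of recent requests
_token_log: deque[tuple[float, int]] = deque()    # (timestamp, tokens sent) pairs

def wait_for_capacity(tokens_needed: int) -> None:
    """Block until sending `tokens_needed` tokens stays under 1 RPS / 500 k TPM."""
    while True:
        now = time.monotonic()
        # Drop entries that have left the 1-second and 60-second windows.
        while _request_times and now - _request_times[0] > 1.0:
            _request_times.popleft()
        while _token_log and now - _token_log[0][0] > 60.0:
            _token_log.popleft()
        tokens_in_window = sum(t for _, t in _token_log)
        if len(_request_times) < MAX_RPS and tokens_in_window + tokens_needed <= MAX_TPM:
            _request_times.append(now)
            _token_log.append((now, tokens_needed))
            return
        time.sleep(0.1)  # brief back-off before re-checking
```

Call `wait_for_capacity(estimated_tokens)` before each request; on the Paid tier you simply raise the two constants.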
Model Cost-Efficiency Analysis
| Model | Stanford HELM* Score | $ per 1 k “Useful Tokens” |
|---|---|---|
| Mistral Large | 79.3 | $0.008 |
| Medium 3 | 76.1 | $0.0024 |
| GPT-4o | 84.0 | $0.01 |
| Claude Sonnet 4 | 78.5 | $0.0055 |
*Average across reasoning, STEM, code, safety.
Key insight: Medium 3 hits 90 % of GPT-4-level reasoning for 20 % of the cost. If your workload is heavy Q&A or SQL generation, that delta can shave six figures off an annual bill.
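To make “six figures” concrete, here is a back-of-the-envelope sketch using the $ per 1 k useful-token figures from the table above; the 2-billion-token monthly workload is a hypothetical volume, not a customer number.

```python
# $ per 1k "useful tokens" taken from the cost-efficiency table above.
COST_PER_1K = {"gpt-4o": 0.01, "mistral-medium-3": 0.0024}

monthly_tokens = 2_000_000_000  # hypothetical heavy Q&A / SQL-generation workload

def annual_cost(model: str) -> float:
    return COST_PER_1K[model] * (monthly_tokens / 1_000) * 12

savings = annual_cost("gpt-4o") - annual_cost("mistral-medium-3")
print(f"${savings:,.0f} saved per year")  # ≈ $182,400
```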
Data Privacy & “No-Telemetry” Upsells
- Free users must manually opt out; otherwise prompts are used to improve the model.
- Pro adds a hard switch: your content is never used to update model weights.
- Team & Enterprise are opted out by default and offer Zero Data Retention (ephemeral processing).
- Self-hosting (Enterprise) guarantees 100 % data residency—rare among Western LLM vendors.
Why it matters: For EU corporates under GDPR or US firms bound by SOC 2, these toggles often decide procurement.
Choosing the Right Tier
| Use-Case | Recommended Plan | Rationale |
|---|---|---|
| Casual brainstorming | Free | Generous limits, no cost |
| Solo dev / student | Pro | Agents, no telemetry, low fee |
| 5-20 person startup | Team | Shared RAG, admin controls, per-seat billing |
| Fortune 500, regulated | Enterprise | Private deployment, custom models, legal SLA |
Expert insight: If your org spends > $3 k / month on OpenAI credits, shifting 70 % of calls to Medium 3 under an Enterprise deal typically halves run-rate compute cost while preserving escape-hatch access to premium models through model-routing.
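A routing layer in that scenario can be as simple as a heuristic on prompt size and task type. The sketch below is illustrative only: the thresholds and the `needs_deep_reasoning` flag are assumptions, and the model aliases should be checked against Mistral’s current model list before use.

```python
def pick_model(prompt: str, needs_deep_reasoning: bool = False) -> str:
    """Route cheap traffic to Medium 3 while keeping an escape hatch to Large."""
    approx_tokens = len(prompt) // 4      # rough 4-characters-per-token heuristic
    if needs_deep_reasoning or approx_tokens > 20_000:
        return "mistral-large-latest"     # premium model for hard or huge prompts
    if approx_tokens < 500:
        return "mistral-small-latest"     # trivial lookups go to the cheapest tier
    return "mistral-medium-latest"        # default: best price/performance balance

# Example: a short classification prompt lands on the small model.
print(pick_model("Label this ticket as billing, bug, or feature request."))
```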
FAQ
- Does the Free tier throttle speed?
  Responses run on the same inference engine; only concurrency and daily caps differ.
- Can I mix models inside one Team workspace?
  Yes—route lightweight calls to Small 3.1 and heavyweight reasoning to Large automatically.
- Is Codestral covered by the Pro subscription?
  Inside Le Chat, yes. API usage is separate and billed per token.
- How long are tokens stored?
  30 days by default; Enterprise can enforce zero retention.
- Student verification?
  .edu or equivalent email and ID upload; renewal required each academic year.
Key Takeaways
- The Mistral AI price spectrum—free to bespoke—matches every adoption stage.
- Medium 3 is the price-performance hero (up to 8× cheaper than peers).
- Privacy-first toggles create clear upsell paths without gating innovation.
- For devs, transparent per-token rates and wide 128 k context windows slash modeling blind spots.
- Enterprises gain rare EU-centric data residency + on-prem flexibility.