Mistral Launches Devstral 2 and Devstral Small 2 for Code Automation

Mistral unveils Devstral 2 (123B) and Small 2 (24B), open coding models with 256K context windows for multi-file reasoning and on-prem use. Mistral Vibe CLI adds terminal-native, project-aware automation for code agents.

TL;DR

  • Devstral 2 family: Devstral 2 (123B) and Devstral Small 2 (24B) with a 256K context window; Devstral 2 under modified MIT, Small under Apache 2.0.
  • Benchmarks and efficiency: Devstral 2 scores 72.2% on SWE-bench Verified and Devstral Small 2 scores 68.0%, with a claimed cost-efficiency advantage of up to 7x over Claude Sonnet on real-world tasks.
  • Model capabilities: dense transformer focused on multi-file reasoning and dependency-aware code changes; fine-tuning supported; Small model adds image inputs for multimodal agents.
  • Deployment requirements: Devstral 2 targets data-center GPUs (minimum 4 H100-class GPUs); Devstral Small 2 supports single-GPU, GeForce RTX, DGX Spark, and CPU-only setups; NVIDIA NIM support forthcoming.
  • Mistral Vibe CLI: open-source Apache 2.0 terminal-first agent with project-aware context, @ file autocomplete, ! shell execution, multi-file orchestration, persistent history; Zed extension and README: https://zed.dev/extensions and https://github.com/mistralai/mistral-vibe/blob/main/README.md.
  • Availability and pricing: free via API now (https://console.mistral.ai/); post-free pricing Devstral 2 $0.40/$2.00 per million tokens (input/output), Devstral Small 2 $0.10/$0.30 per million tokens (input/output); recommended sampling temperature 0.2.

Devstral 2 and Devstral Small 2 arrive with a focus on compact, production-ready coding models

Mistral has released Devstral 2, a family of open coding models intended for code agents and automation workflows. The family comprises Devstral 2 (123B) and Devstral Small 2 (24B), both with a 256K context window and permissive licensing intended to ease widespread use and on-prem deployment.

Key highlights

  • Model sizes and licenses: Devstral 2 (123B) is released under a modified MIT license; Devstral Small 2 (24B) uses Apache 2.0.
  • Benchmarks: Devstral 2 reaches 72.2% on SWE-bench Verified; Devstral Small 2 scores 68.0%.
  • Efficiency: Reported as up to 7x more cost-efficient than Claude Sonnet on real-world tasks.
  • Local deployment: Devstral Small 2 is designed to run on consumer hardware and supports CPU-only configurations; Devstral 2 targets data center GPUs.
  • Tooling: Introduction of Mistral Vibe, an open-source CLI agent built around Devstral for terminal-first code automation.

Technical profile and intended use

Devstral 2 is a dense transformer with 123B parameters and a 256K context window, focused on multi-file reasoning, dependency awareness, and end-to-end codebase changes. The models are presented as compact relative to some competitors: the 123B and 24B sizes are cited as being 5x and 28x smaller than DeepSeek V3.2, and 8x and 41x smaller than Kimi K2, respectively. The smaller footprint aims to reduce deployment friction while preserving architecture-level context for tasks such as bug fixing and modernization.

Fine-tuning is supported to prioritize particular languages or enterprise codebase characteristics. The Small model also supports image inputs for multimodal agents.
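
For orientation, here is a minimal sketch of driving Devstral 2 through Mistral's standard chat-completions REST endpoint for a dependency-aware, multi-file edit. The model identifier, file contents, and prompt are illustrative assumptions, not values from the announcement; check the console or documentation for the exact model ID published for Devstral 2.

```python
# Minimal sketch (not from the announcement): call Devstral 2 over Mistral's
# chat-completions endpoint with several files packed into the prompt.
# "devstral-2" is a placeholder model ID; confirm the real one in the console.
import os
import requests

API_URL = "https://api.mistral.ai/v1/chat/completions"
API_KEY = os.environ["MISTRAL_API_KEY"]

# Illustrative multi-file context; a real agent would pack the relevant
# sources (up to the 256K window) gathered from the repository.
files = {
    "app/db.py": "class Session:\n    ...\n",
    "app/models.py": "from app.db import Session\n",
}
context = "\n\n".join(f"### {path}\n{src}" for path, src in files.items())

payload = {
    "model": "devstral-2",   # placeholder identifier
    "temperature": 0.2,      # recommended sampling temperature
    "messages": [
        {"role": "system",
         "content": "You are a coding agent. Propose minimal, dependency-aware edits."},
        {"role": "user",
         "content": context + "\n\nRename Session to DbSession and update every dependent import."},
    ],
}

resp = requests.post(API_URL, json=payload, timeout=120,
                     headers={"Authorization": f"Bearer {API_KEY}"})
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```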

Mistral Vibe CLI: terminal-native agent

Mistral Vibe is an open-source, Apache 2.0–licensed command-line assistant that integrates Devstral capabilities into terminal workflows and IDEs via the Agent Communication Protocol. Notable features include:

  • Project-aware context: automatic scanning of file structure and Git status.
  • Smart references and execution: file referencing with @ autocomplete, shell execution with !, and slash commands for configuration.
  • Multi-file orchestration: reasoning across the whole codebase to coordinate changes spanning multiple files.
  • Persistent history, autocompletion, theme customization, and programmatic control for scripting and tool-permission configuration.
  • Availability: Vibe ships as a Zed extension, and the CLI README and best-practices guide are hosted on GitHub: Mistral Vibe README.

Deployment recommendations and pricing

  • Devstral 2 is optimized for data center GPUs and requires a minimum of 4 H100-class GPUs for deployment. An evaluation option is listed at build.nvidia.com.
  • Devstral Small 2 targets single-GPU operation and runs across NVIDIA systems including DGX Spark and GeForce RTX; CPU-only setups are supported (see the local-inference sketch after this list). NVIDIA NIM support is noted as forthcoming.
  • Recommended sampling temperature for best results is 0.2, per the Vibe CLI best practices.
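
For the single-GPU or workstation path, one plausible way to run Devstral Small 2 locally is a general-purpose inference engine such as vLLM. This is a sketch under assumptions: vLLM is not named in the announcement, and the Hugging Face repository ID below is a placeholder for whatever weights name Mistral publishes.

```python
# Assumed local-inference sketch using vLLM (not part of the announcement).
# The repository ID is a placeholder for the published Devstral Small 2 weights.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Devstral-Small-2")      # placeholder HF repo ID
params = SamplingParams(temperature=0.2,           # recommended temperature
                        max_tokens=512)

prompt = "Write a Python function that parses requirements.txt into (name, version) tuples."
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```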

Availability and pricing details:

  • Devstral 2 is currently free via the API: console.mistral.ai. After the free period, pricing will be $0.40/$2.00 per million tokens (input/output) for Devstral 2.
  • Devstral Small 2 pricing after the free period will be $0.10/$0.30 per million tokens (input/output).
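
To put those rates in concrete terms (hypothetical request sizes): a single call with 100,000 input tokens and 5,000 output tokens would cost about 0.1 × $0.40 + 0.005 × $2.00 = $0.05 on Devstral 2, and about 0.1 × $0.10 + 0.005 × $0.30 ≈ $0.012 on Devstral Small 2.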

Partnerships and next steps

Mistral highlights integrations with agent tooling partners Kilo Code and Cline to bring Devstral into existing agent ecosystems. Community touchpoints and repositories are reachable via Mistral’s social and code channels.

Open positions and contributions are listed at Mistral Careers and the project repositories on GitHub.

Original source: https://mistral.ai/news/devstral-2-vibe-cli
