AI Studio
Mistral AI Studio is a platform where you can access our models and manage usage, APIs, organizations, workspaces, and a variety of other features.
We offer flexible access to our models through a range of options, services, and customizable solutions, including playgrounds, fine-tuning, and more, to meet your specific needs.
Mistral AI currently provides three general types of access to Large Language Models and other services:
- AI Studio (previously "La Plateforme"): AI Studio provides API endpoints with pay-as-you-go access to our latest models, along with workspace and usage management and a variety of other features.
- Third Party Cloud: You can access Mistral AI models via your preferred cloud platforms.
- Self-Deployment: You can self-deploy our open-weights models on your own on-premises infrastructure. They are released under the Apache 2.0 License and are available on Hugging Face or through our external partners.
- Self-Deploy with Enterprise Support: You can also self-deploy our models, both open and frontier, with enterprise support. Reach out to us here if you’re interested!
API Access with AI Studio
You will need to activate payments on your account to enable your API keys in AI Studio. Check out the Quickstart guide to get started with your first Mistral API request.
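As a minimal sketch of what such a request looks like, the snippet below assembles the headers and JSON body for the chat completions endpoint without sending anything. The endpoint path, model name, and field names follow the public API reference, but treat them as assumptions to verify against the current docs.

```python
import json
import os

# Hypothetical example values; substitute your own key and preferred model.
API_URL = "https://api.mistral.ai/v1/chat/completions"
api_key = os.environ.get("MISTRAL_API_KEY", "<your-api-key>")

headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}

# A minimal chat-completion request body: a model name plus a list of messages.
payload = {
    "model": "mistral-small-latest",
    "messages": [
        {"role": "user", "content": "What is the best French cheese?"}
    ],
}

body = json.dumps(payload)
print(body)

# To actually send it (requires an activated account and the `requests` package):
# response = requests.post(API_URL, headers=headers, data=body, timeout=30)
# print(response.json()["choices"][0]["message"]["content"])
```

The same body works from curl or any HTTP client; only the `Authorization` header and valid JSON are required.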
Explore diverse capabilities of our models:
- Completion
- Embeddings
- Function calling
- JSON mode
- Guardrailing
- Much more...
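To illustrate one item from the list above, JSON mode is enabled by adding a `response_format` field to an otherwise ordinary chat-completion request. This is a hedged sketch; the field names follow the public API reference, so double-check them against the current docs for your model.

```python
import json

# Same chat-completions body as a plain request, plus `response_format`
# asking the model to emit valid JSON (JSON mode).
payload = {
    "model": "mistral-small-latest",
    "messages": [
        {
            "role": "user",
            "content": "List three French cheeses as a JSON array under the key 'cheeses'.",
        }
    ],
    "response_format": {"type": "json_object"},
}

print(json.dumps(payload, indent=2))
```

When JSON mode is on, it also helps to state the desired schema in the prompt itself, as the example message does.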
Cloud-based deployments
For a comprehensive list of options to deploy and consume Mistral AI models on the cloud, head over to the cloud deployment section.
Raw model weights
Raw model weights can be used in several ways:
- For self-deployment, in the cloud or on-premises, using either TensorRT-LLM or vLLM, head over to Deployment.
- For research, head over to our reference implementation repository.
- For local deployment on consumer-grade hardware, check out the llama.cpp project or Ollama.
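As an illustration of the local route, a Mistral model served by Ollama is queried through its local HTTP API. The sketch below only builds the request body; the port and the `/api/generate` route follow Ollama's documented API, but verify them against your installed version.

```python
import json

# Ollama serves a local HTTP API, by default on port 11434.
OLLAMA_URL = "http://localhost:11434/api/generate"

# Request body for a one-shot, non-streaming generation with a locally
# pulled Mistral model (e.g. after running `ollama pull mistral`).
payload = {
    "model": "mistral",
    "prompt": "Why is the sky blue?",
    "stream": False,
}

print(json.dumps(payload))

# To send it (requires a running Ollama server and the `requests` package):
# response = requests.post(OLLAMA_URL, json=payload, timeout=120)
# print(response.json()["response"])
```

Because inference runs entirely on your machine, no API key or payment activation is involved for this path.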