Introduction

What are AI agents?
AI agents are autonomous systems powered by large language models (LLMs) that, given high-level instructions, can plan, use tools, carry out processing steps, and take actions to achieve specific goals. These agents leverage advanced natural language processing capabilities to understand and execute complex tasks efficiently and can even collaborate with each other to achieve more sophisticated outcomes.
Our Agents API allows developers to build such agents, providing features such as:
- Multiple multimodal models available, including text and vision models.
- Persistent state across conversations.
- Ability to have conversations with base models, a single agent, and multiple agents.
- Built-in connector tools for code execution, web search, image generation, and document library access.
- Handoff capability to use different agents as part of a workflow, allowing agents to call other agents.
- Support for the features of our chat completions endpoint, such as:
  - Structured Outputs
  - Document Understanding
  - Tool Usage
  - Citations
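To make the features above concrete, here is a minimal sketch of creating an agent with the built-in web-search connector tool over the REST API. The endpoint path, field names (`model`, `name`, `instructions`, `tools`), and tool type are assumptions to be checked against the Agents API reference.

```python
import json
import os
import urllib.request

# Assumed endpoint path for creating agents (verify in the API reference).
API_URL = "https://api.mistral.ai/v1/agents"


def build_agent_payload(name, instructions, model="mistral-medium-latest"):
    """Assemble the JSON body for an agent with the built-in web-search
    connector tool enabled (field names assumed from the docs)."""
    return {
        "model": model,
        "name": name,
        "instructions": instructions,
        "tools": [{"type": "web_search"}],
    }


def create_agent(payload, api_key):
    """POST the payload to the (assumed) create-agent endpoint."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    payload = build_agent_payload(
        "web-helper", "Answer questions, using web search when needed."
    )
    key = os.environ.get("MISTRAL_API_KEY")
    if key:  # only hit the API when a key is configured
        print(create_agent(payload, key))
```

Once created, the agent keeps its persistent state across conversations, so follow-up requests reference the agent id rather than resending instructions.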
More Information
- Agents Basics: Basic explanations and code snippets around our Agents API.
- Connectors: Make your tools directly accessible to any agent.
- Websearch: In-depth explanation of our web search built-in connector tool.
- Code Interpreter: In-depth explanation of our built-in code interpreter connector tool for code execution.
- Image Generation: In-depth explanation of our image generation built-in connector tool.
- Document Library (Beta): A RAG built-in connector enabling Agents to access a library of documents.
- MCP: How to use MCP (Model Context Protocol) servers with Agents.
- Function Calling: How to use Function calling with Agents.
- Handoffs: Relay tasks and use other agents as tools in agentic workflows.
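As a concrete illustration of the handoff entry above, the sketch below builds the JSON bodies for two agents, one of which delegates to the other. It constructs payloads only; the `handoffs` field name and the placeholder agent id are assumptions to be verified against the Agents API reference.

```python
def build_agent(name, instructions, model="mistral-medium-latest",
                tools=None, handoffs=None):
    """Assemble a create-agent JSON body; `handoffs` lists the ids of
    other agents this agent may delegate to (field name assumed)."""
    body = {"model": model, "name": name, "instructions": instructions}
    if tools:
        body["tools"] = tools
    if handoffs:
        body["handoffs"] = handoffs
    return body


# A search agent with the built-in web-search connector tool.
search_agent = build_agent(
    "searcher",
    "Search the web for up-to-date facts.",
    tools=[{"type": "web_search"}],
)

# An analyst agent that hands lookups off to the search agent. In practice
# the id comes back from the create-agent response; this one is a placeholder.
analyst_agent = build_agent(
    "analyst",
    "Answer analytical questions, delegating lookups to the searcher.",
    handoffs=["ag_searcher_placeholder"],
)
```

Declaring handoffs on the agent rather than in each request lets the workflow topology live server-side, so any conversation with the analyst agent can transparently relay work to the searcher.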
Cookbooks
For more information and guides on how to use our Agents, we have the following cookbooks:
FAQ
- Which models are supported?
  Currently, only mistral-medium-latest and mistral-large-latest are supported, but we will soon extend support to more models.