Build an agent with tools

Give a Mistral model access to external functions it can call mid-conversation.

  • Define a tool schema that describes your function
  • Send a message that triggers a tool call
  • Execute the function locally and feed the result back to the model

The pattern works for any data source: APIs, databases, or internal services.

Time to complete: ~10 minutes

Prerequisites

  • A Mistral API key (see Get your API key if you don't have one yet)
  • Python 3.9+ or Node.js 18+ installed
  • The Mistral SDK installed (see Install the SDK if you haven't yet)

Step 1: Define a tool

Describe the function the model can call using a JSON schema. This example creates a get_weather tool that accepts a city name.

import json
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Define the tool schema
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a given city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "The city name, e.g. 'Paris'."
                    }
                },
                "required": ["city"]
            }
        }
    }
]
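A single schema is enough here, but agents usually expose several functions. One common pattern (a sketch, not part of the SDK) is to keep a registry mapping each tool name to the Python callable that implements it, so the dispatch in Step 3 works by name:

```python
# Hypothetical registry pattern: map each name in `tools` to a local function.
def get_weather(city: str) -> dict:
    # Stub implementation; replace with a real API call.
    return {"city": city, "temperature": "18°C", "condition": "Partly cloudy"}

TOOL_REGISTRY = {
    "get_weather": get_weather,
}

# A tool call can then be dispatched generically:
# result = TOOL_REGISTRY[tool_call.function.name](**args)
```

Adding a new tool then means adding one schema entry and one registry entry, with no changes to the dispatch code.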

Step 2: Send a request with the tool

Ask the model a question that requires the tool. Instead of a text answer, the model returns a tool_calls response containing the function name and JSON-encoded arguments.

messages = [
    {"role": "user", "content": "What's the weather in Paris today?"}
]

response = client.chat.complete(
    model="mistral-medium-latest",
    messages=messages,
    tools=tools,
)

tool_call = response.choices[0].message.tool_calls[0]
print(f"Model wants to call: {tool_call.function.name}")
print(f"With arguments: {tool_call.function.arguments}")
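Note that tool_call.function.arguments arrives as a JSON string, not a dict, so it must be decoded before use. A minimal defensive sketch (parse_tool_arguments is a hypothetical helper, assuming the get_weather schema above):

```python
import json

def parse_tool_arguments(raw_arguments: str) -> dict:
    """Decode the JSON string the model returns as tool arguments."""
    args = json.loads(raw_arguments)
    if "city" not in args:  # the schema marks "city" as required
        raise ValueError("model omitted required argument 'city'")
    return args

# With the kind of string the model returns:
print(parse_tool_arguments('{"city": "Paris"}'))  # {'city': 'Paris'}
```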

Step 3: Execute the function and return the result

Run your function with the model's arguments, then send the result back so the model can generate a natural-language answer.

# Simulate the function (replace with a real API call)
def get_weather(city: str) -> dict:
    return {"city": city, "temperature": "18°C", "condition": "Partly cloudy"}

# Execute the tool call
args = json.loads(tool_call.function.arguments)
result = get_weather(**args)

# Send the result back to the model
messages.append(response.choices[0].message)
messages.append({
    "role": "tool",
    "name": tool_call.function.name,
    "content": json.dumps(result),
    "tool_call_id": tool_call.id,
})

final_response = client.chat.complete(
    model="mistral-medium-latest",
    messages=messages,
    tools=tools,
)

print(final_response.choices[0].message.content)
# "The weather in Paris is 18°C and partly cloudy."

Verify

A successful run prints a natural-language response that includes the tool's return value. The model:

  • Detected the request needed external data
  • Generated a structured tool_calls request
  • Incorporated your function's result into a conversational answer

Set tool_choice: "any" to force the model to always call a tool, or use tool_choice: "auto" (default) to let the model decide.
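For example, a request that forces a tool call could be built like this (a sketch: the schema repeats Step 1 in abbreviated form, and the final call is commented out because it needs a live client and API key):

```python
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a given city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

request_kwargs = {
    "model": "mistral-medium-latest",
    "messages": [{"role": "user", "content": "What's the weather in Lyon?"}],
    "tools": tools,
    "tool_choice": "any",   # force a tool call; "auto" lets the model decide
}
# response = client.chat.complete(**request_kwargs)
```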

What's next