Custom Structured Outputs
Custom Structured Outputs let you ensure the model provides an answer in a very specific JSON format by supplying a clear JSON schema, so the model consistently delivers responses with the correct types and keys.
Here is an example of how to achieve this using the Mistral AI client and Pydantic:
Define the Data Model
First, define the structure of the output using a Pydantic model:
from pydantic import BaseModel

class Book(BaseModel):
    name: str
    authors: list[str]
Start the completion
Next, use the Mistral AI Python client to make a request and ensure the response adheres to the defined structure by setting response_format to the corresponding Pydantic model:
import os
from mistralai import Mistral

api_key = os.environ["MISTRAL_API_KEY"]
model = "ministral-8b-latest"

client = Mistral(api_key=api_key)

chat_response = client.chat.parse(
    model=model,
    messages=[
        {
            "role": "system",
            "content": "Extract the books information."
        },
        {
            "role": "user",
            "content": "I recently read 'To Kill a Mockingbird' by Harper Lee."
        },
    ],
    response_format=Book,
    max_tokens=256,
    temperature=0
)
In this example, the Book class defines the structure of the output, ensuring that the model's response adheres to the specified format.
There are two types of possible outputs that are easily accessible via our SDK (a short access sketch follows this list):
- The raw JSON output, accessed with chat_response.choices[0].message.content:
  {
    "authors": ["Harper Lee"],
    "name": "To Kill a Mockingbird"
  }
- The parsed output, converted into a Pydantic object with chat_response.choices[0].message.parsed. In this case, it is a Book instance:
  name='To Kill a Mockingbird' authors=['Harper Lee']
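Both access paths can be used together. Here is a minimal sketch based on the request above; the None check is a defensive assumption, since the parsed attribute may be empty if the model's answer cannot be validated against the schema:

import json

# Raw JSON string returned by the model
raw = chat_response.choices[0].message.content
data = json.loads(raw)  # plain dict, e.g. {"authors": ["Harper Lee"], "name": "To Kill a Mockingbird"}

# Parsed output as a Book instance (assumed to be None when validation fails)
book = chat_response.choices[0].message.parsed
if book is not None:
    print(book.name)     # 'To Kill a Mockingbird'
    print(book.authors)  # ['Harper Lee']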
Here is an example of how to achieve this using the Mistral AI client and Zod:
Define the Data Model
First, define the structure of the output using Zod:
import { z } from "zod";

const Book = z.object({
  name: z.string(),
  authors: z.array(z.string()),
});
Start the completion
Next, use the Mistral AI TypeScript client to make a request and ensure the response adheres to the defined structure by setting responseFormat to the corresponding Zod schema:
import { Mistral } from "@mistralai/mistralai";

const apiKey = process.env.MISTRAL_API_KEY;

const client = new Mistral({apiKey: apiKey});

const chatResponse = await client.chat.parse({
  model: "ministral-8b-latest",
  messages: [
    {
      role: "system",
      content: "Extract the books information.",
    },
    {
      role: "user",
      content: "I recently read 'To Kill a Mockingbird' by Harper Lee.",
    },
  ],
  responseFormat: Book,
  maxTokens: 256,
  temperature: 0,
});
In this example, the Book schema defines the structure of the output, ensuring that the model's response adheres to the specified format.
There are two types of possible outputs that are easily accessible via our SDK:
- The raw JSON output, accessed with chatResponse.choices[0].message.content:
  {
    "authors": ["Harper Lee"],
    "name": "To Kill a Mockingbird"
  }
- The parsed output, converted into a TypeScript object with chatResponse.choices[0].message.parsed. In this case, it is a Book object:
  { name: 'To Kill a Mockingbird', authors: [ 'Harper Lee' ] }
The request is structured to ensure that the response adheres to the specified custom JSON schema. The schema defines the structure of a Book object with name and authors properties.
curl --location "https://api.mistral.ai/v1/chat/completions" \
  --header 'Content-Type: application/json' \
  --header 'Accept: application/json' \
  --header "Authorization: Bearer $MISTRAL_API_KEY" \
  --data '{
    "model": "ministral-8b-latest",
    "messages": [
      {
        "role": "system",
        "content": "Extract the books information."
      },
      {
        "role": "user",
        "content": "I recently read To Kill a Mockingbird by Harper Lee."
      }
    ],
    "response_format": {
      "type": "json_schema",
      "json_schema": {
        "schema": {
          "properties": {
            "name": {
              "title": "Name",
              "type": "string"
            },
            "authors": {
              "items": {
                "type": "string"
              },
              "title": "Authors",
              "type": "array"
            }
          },
          "required": ["name", "authors"],
          "title": "Book",
          "type": "object",
          "additionalProperties": false
        },
        "name": "book",
        "strict": true
      }
    },
    "max_tokens": 256,
    "temperature": 0
  }'
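If you build the raw payload from code rather than by hand, the schema in the request body above is essentially what Pydantic v2 generates for the Book model. Here is a minimal sketch; the additionalProperties entry is added manually, on the assumption that strict mode expects it, since model_json_schema() does not emit it by default:

from pydantic import BaseModel

class Book(BaseModel):
    name: str
    authors: list[str]

# Produces {"title": "Book", "type": "object", "properties": {...}, "required": [...]}
schema = Book.model_json_schema()
schema["additionalProperties"] = False  # assumed requirement for "strict": true

response_format = {
    "type": "json_schema",
    "json_schema": {
        "schema": schema,
        "name": "book",
        "strict": True,
    },
}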
FAQ
Q: Which models support custom Structured Outputs?
A: All currently available models except for codestral-mamba are supported.