Quickstart
Looking for La Plateforme? Head to console.mistral.ai
Getting started with Mistral AI API
The Mistral AI API provides a seamless way for developers to integrate Mistral's state-of-the-art
models into their applications and production workflows with just a few lines of code.
The API is currently available through La Plateforme.
You need to activate payments on your account to enable your API keys.
A few moments after activation, you will be able to use our chat
endpoint:
- python
- typescript
- curl
```python
import os
from mistralai import Mistral

api_key = os.environ["MISTRAL_API_KEY"]
model = "mistral-large-latest"

client = Mistral(api_key=api_key)

chat_response = client.chat.complete(
    model=model,
    messages=[
        {
            "role": "user",
            "content": "What is the best French cheese?",
        },
    ],
)

print(chat_response.choices[0].message.content)
```
```typescript
import { Mistral } from '@mistralai/mistralai';

const apiKey = process.env.MISTRAL_API_KEY;

const client = new Mistral({apiKey: apiKey});

const chatResponse = await client.chat.complete({
  model: 'mistral-large-latest',
  messages: [{role: 'user', content: 'What is the best French cheese?'}],
});

console.log('Chat:', chatResponse.choices[0].message.content);
```
```bash
curl --location "https://api.mistral.ai/v1/chat/completions" \
     --header 'Content-Type: application/json' \
     --header 'Accept: application/json' \
     --header "Authorization: Bearer $MISTRAL_API_KEY" \
     --data '{
    "model": "mistral-large-latest",
    "messages": [{"role": "user", "content": "Who is the most renowned French painter?"}]
  }'
```
To generate text embeddings using Mistral AI's embeddings API, we can make a request to the API
endpoint, specify the embedding model mistral-embed, and provide a list of input texts.
The API will then return the corresponding embeddings as numerical vectors, which can be used for
further analysis or processing in NLP applications.
- python
- typescript
- curl
```python
import os
from mistralai import Mistral

api_key = os.environ["MISTRAL_API_KEY"]
model = "mistral-embed"

client = Mistral(api_key=api_key)

embeddings_response = client.embeddings.create(
    model=model,
    inputs=["Embed this sentence.", "As well as this one."],
)

print(embeddings_response)
```
```typescript
import { Mistral } from '@mistralai/mistralai';

const apiKey = process.env.MISTRAL_API_KEY;

const client = new Mistral({apiKey: apiKey});

const embeddingsResponse = await client.embeddings.create({
  model: 'mistral-embed',
  inputs: ["Embed this sentence.", "As well as this one."],
});

console.log(embeddingsResponse);
```
```bash
curl --location "https://api.mistral.ai/v1/embeddings" \
     --header 'Content-Type: application/json' \
     --header 'Accept: application/json' \
     --header "Authorization: Bearer $MISTRAL_API_KEY" \
     --data '{
    "model": "mistral-embed",
    "input": ["Embed this sentence.", "As well as this one."]
  }'
```
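A common first use of these vectors is measuring how semantically close two texts are. As a minimal sketch (assuming you have already pulled each embedding out of the response, e.g. as a plain list of floats), cosine similarity can be computed with the standard library alone:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product of the two vectors divided by the
    # product of their Euclidean norms. Returns a value in [-1, 1], where
    # values near 1 mean the texts point in a similar semantic direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

For example, comparing the embeddings of the two input sentences from the request above would be `cosine_similarity(vec1, vec2)`, where `vec1` and `vec2` are the float lists returned for each input.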
For a full description of the models offered on the API, head over to the model documentation.
Account setup
- To get started, create a Mistral account or sign in at console.mistral.ai.
- Then, navigate to "Workspace" and "Billing" to add your payment information and activate payments on your account.
- After that, go to the "API keys" page and create a new API key by clicking "Create new key". Make sure to copy the API key, store it securely, and do not share it with anyone.
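The code samples in this guide all read the key from the MISTRAL_API_KEY environment variable (set it in your shell, e.g. `export MISTRAL_API_KEY="your-key"`) rather than hard-coding it in source. A small hypothetical helper like the following, not part of the SDK, can give a clearer error when the variable is missing:

```python
import os

def load_api_key(var_name: str = "MISTRAL_API_KEY") -> str:
    # Read the API key from the environment instead of embedding it in
    # code, so keys stay out of version control.
    api_key = os.environ.get(var_name)
    if not api_key:
        raise RuntimeError(
            f"{var_name} is not set; create a key at console.mistral.ai "
            "and export it before running the examples."
        )
    return api_key
```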