# Outscale

## Introduction
Mistral AI models are available on the Outscale platform as managed deployments. Through the Outscale marketplace, you can subscribe to a Mistral service that will, on your behalf, provision a virtual machine with a GPU and then deploy the model on it.
As of today, the following models are available:
- Mistral Small (2409)
- Codestral
For more details, visit the models page.
## Getting started

The following sections outline the steps to query a Mistral model on the Outscale platform.

### Deploying the model

Follow the steps described in the Outscale documentation to deploy a service with the model of your choice.

### Querying the model (chat completion)
Deployed models expose a REST API that you can query using Mistral's SDK or plain HTTP calls. To run the examples below, you will need to set the following environment variables:

- `OUTSCALE_SERVER_URL`: the URL of the VM hosting your Mistral model
- `OUTSCALE_MODEL_NAME`: the name of the model to query (e.g. `small-2409`, `codestral-2405`)
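Both variables must be visible to whatever process runs the examples. As a quick sanity check, here is a minimal Python sketch (our own, not part of the SDK) that fails early with a readable message if either variable is missing:

```python
import os

# Fail fast if the deployment variables are missing; the export shown in the
# error message is a hypothetical placeholder, not a real endpoint.
for name in ("OUTSCALE_SERVER_URL", "OUTSCALE_MODEL_NAME"):
    if not os.environ.get(name):
        raise RuntimeError(f'{name} is not set; run: export {name}="..."')
```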
The following examples show the same chat completion request in cURL, Python, and TypeScript.
```bash
curl --location $OUTSCALE_SERVER_URL/v1/chat/completions \
  --header "Content-Type: application/json" \
  --header "Accept: application/json" \
  --data '{
    "model": "'"$OUTSCALE_MODEL_NAME"'",
    "temperature": 0,
    "messages": [
      {"role": "user", "content": "Who is the best French painter? Answer in one short sentence."}
    ],
    "stream": false
  }'
```
```python
import os
from mistralai import Mistral

# Point the SDK at your Outscale deployment instead of Mistral's public API.
client = Mistral(server_url=os.environ["OUTSCALE_SERVER_URL"])

resp = client.chat.complete(
    model=os.environ["OUTSCALE_MODEL_NAME"],
    messages=[
        {
            "role": "user",
            "content": "Who is the best French painter? Answer in one short sentence.",
        }
    ],
    temperature=0,
)
print(resp.choices[0].message.content)
```
```typescript
import { Mistral } from "@mistralai/mistralai";

const client = new Mistral({
  serverURL: process.env.OUTSCALE_SERVER_URL || "",
});
const modelName = process.env.OUTSCALE_MODEL_NAME || "";

async function chatCompletion(user_msg: string) {
  const resp = await client.chat.complete({
    model: modelName,
    messages: [
      {
        content: user_msg,
        role: "user",
      },
    ],
  });
  if (resp.choices && resp.choices.length > 0) {
    console.log(resp.choices[0]);
  }
}

chatCompletion("Who is the best French painter? Answer in one short sentence.");
```
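The requests above set `"stream": false` and wait for the full answer. If you would rather consume tokens as they are generated, the Python SDK also exposes a streaming variant; the sketch below assumes the `chat.stream` method of the current `mistralai` package and a deployment that supports streaming:

```python
import os
from mistralai import Mistral

client = Mistral(server_url=os.environ["OUTSCALE_SERVER_URL"])

# Request a streamed response and print each incremental delta as it arrives.
stream = client.chat.stream(
    model=os.environ["OUTSCALE_MODEL_NAME"],
    messages=[
        {
            "role": "user",
            "content": "Who is the best French painter? Answer in one short sentence.",
        }
    ],
    temperature=0,
)
for chunk in stream:
    # Each event carries a partial piece of the assistant message.
    print(chunk.data.choices[0].delta.content or "", end="", flush=True)
print()
```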
### Querying the model (FIM completion)

Codestral can be queried using an additional completion mode called fill-in-the-middle (FIM). For more information, see the code generation section.
As before, the same request is shown in cURL, Python, and TypeScript.
```bash
curl --location $OUTSCALE_SERVER_URL/v1/fim/completions \
  --header "Content-Type: application/json" \
  --header "Accept: application/json" \
  --data '{
    "model": "'"$OUTSCALE_MODEL_NAME"'",
    "prompt": "def count_words_in_file(file_path: str) -> int:",
    "suffix": "return n_words",
    "stream": false
  }'
```
```python
import os
from mistralai import Mistral

client = Mistral(server_url=os.environ["OUTSCALE_SERVER_URL"])

resp = client.fim.complete(
    model=os.environ["OUTSCALE_MODEL_NAME"],
    prompt="def count_words_in_file(file_path: str) -> int:",
    suffix="return n_words",
)
print(resp.choices[0].message.content)
```
```typescript
import { Mistral } from "@mistralai/mistralai";

const client = new Mistral({
  serverURL: process.env.OUTSCALE_SERVER_URL || "",
});
const modelName = process.env.OUTSCALE_MODEL_NAME || "codestral-2405";

async function fimCompletion(prompt: string, suffix: string) {
  const resp = await client.fim.complete({
    model: modelName,
    prompt: prompt,
    suffix: suffix,
  });
  if (resp.choices && resp.choices.length > 0) {
    console.log(resp.choices[0]);
  }
}

fimCompletion(
  "def count_words_in_file(file_path: str) -> int:",
  "return n_words"
);
```
## Going further

For more information and examples, you can check:
- The Outscale documentation explaining how to subscribe to a Mistral service and deploy it.