Outscale

Introduction

Mistral AI models are available on the Outscale platform as managed deployments. Through the Outscale marketplace, you can subscribe to a Mistral service that will, on your behalf, provision a virtual machine with a GPU and deploy the model on it.

As of today, the following models are available:

  • Mistral Small (2409)
  • Codestral

For more details, visit the models page.

Getting started

The following sections outline the steps to query a Mistral model on the Outscale platform.

Deploying the model

Follow the steps described in the Outscale documentation to deploy a service with the model of your choice.

Querying the model (chat completion)

Deployed models expose a REST API that you can query using Mistral's SDK or plain HTTP calls. To run the examples below, you will need to set the following environment variables:

  • OUTSCALE_SERVER_URL: the URL of the VM hosting your Mistral model
  • OUTSCALE_MODEL_NAME: the name of the model to query (e.g. small-2409, codestral-2405)

You can check that both variables are set correctly:

echo $OUTSCALE_SERVER_URL/v1/chat/completions
echo $OUTSCALE_MODEL_NAME
curl --location $OUTSCALE_SERVER_URL/v1/chat/completions \
  --header "Content-Type: application/json" \
  --header "Accept: application/json" \
  --data '{
    "model": "'"$OUTSCALE_MODEL_NAME"'",
    "temperature": 0,
    "messages": [
      {"role": "user", "content": "Who is the best French painter? Answer in one short sentence."}
    ],
    "stream": false
  }'
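
If you prefer the SDK route, the following is a minimal sketch using Mistral's Python SDK (assuming the mistralai v1 package is installed; the server_url parameter points the client at your VM rather than Mistral's hosted API):

import os
from mistralai import Mistral

# Point the client at the VM hosting your model instead of Mistral's cloud API.
client = Mistral(server_url=os.environ["OUTSCALE_SERVER_URL"])

response = client.chat.complete(
    model=os.environ["OUTSCALE_MODEL_NAME"],
    temperature=0,
    messages=[
        {"role": "user", "content": "Who is the best French painter? Answer in one short sentence."}
    ],
)

print(response.choices[0].message.content)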

Querying the model (FIM completion)

Codestral can be queried using an additional completion mode called fill-in-the-middle (FIM). For more information, see the code generation section.

curl --location $OUTSCALE_SERVER_URL/v1/fim/completions \
  --header "Content-Type: application/json" \
  --header "Accept: application/json" \
  --data '{
    "model": "'"$OUTSCALE_MODEL_NAME"'",
    "prompt": "def count_words_in_file(file_path: str) -> int:",
    "suffix": "return n_words",
    "stream": false
  }'
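
The same request through the Python SDK would look like this (again a sketch assuming the mistralai v1 package, which exposes FIM completion via client.fim.complete):

import os
from mistralai import Mistral

client = Mistral(server_url=os.environ["OUTSCALE_SERVER_URL"])

# The model generates the code that belongs between the prompt and the suffix.
response = client.fim.complete(
    model=os.environ["OUTSCALE_MODEL_NAME"],
    prompt="def count_words_in_file(file_path: str) -> int:",
    suffix="return n_words",
)

print(response.choices[0].message.content)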

Going further

For more information and examples, you can check the Outscale documentation.