Outscale
Mistral AI models are available on the Outscale platform as managed deployments. Through the Outscale marketplace, you can subscribe to a Mistral service that will, on your behalf, provision a virtual machine and a GPU, then deploy the model on it. As of today, the following models are available:
- Mistral Small (24.09)
- Codestral (24.05)
- Ministral 8B (24.10)
For more details, visit the models page.
Getting Started
The following sections outline the steps to query a Mistral model on the Outscale platform.
Deploying the Model
Deployment is handled through the Outscale marketplace: subscribe to the Mistral service for the model you want, and Outscale provisions a virtual machine with a GPU and deploys the model on it for you. Step-by-step instructions are available in the Outscale documentation linked in the Going Further section below.
Querying the Model (Chat Completion)
Set the following environment variables before running the examples:
- `OUTSCALE_SERVER_URL`: the URL of the VM hosting your Mistral model.
- `OUTSCALE_MODEL_NAME`: the name of the model to query (e.g., `small-2409`, `codestral-2405`).
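Both examples below read these variables through `os.environ`. A quick sanity check before running them might look like this (the variable names are simply the ones listed above):

```python
import os

# Fail fast with a readable error if the deployment variables are missing.
for var in ("OUTSCALE_SERVER_URL", "OUTSCALE_MODEL_NAME"):
    if var not in os.environ:
        raise RuntimeError(f"Please set the {var} environment variable first.")
```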
```python
import os

from mistralai import Mistral

# Point the client at the VM hosting your deployed model.
client = Mistral(server_url=os.environ["OUTSCALE_SERVER_URL"])

resp = client.chat.complete(
    model=os.environ["OUTSCALE_MODEL_NAME"],
    messages=[
        {
            "role": "user",
            "content": "Who is the best French painter? Answer in one short sentence.",
        }
    ],
    temperature=0,
)

print(resp.choices[0].message.content)
```
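If you would rather receive tokens as they are generated, the SDK also exposes a streaming variant of chat completion. A minimal sketch, assuming `client.chat.stream` behaves against your Outscale deployment the same way it does against Mistral's hosted API:

```python
import os

from mistralai import Mistral

client = Mistral(server_url=os.environ["OUTSCALE_SERVER_URL"])

# Stream the answer chunk by chunk instead of waiting for the full completion.
stream = client.chat.stream(
    model=os.environ["OUTSCALE_MODEL_NAME"],
    messages=[
        {
            "role": "user",
            "content": "Who is the best French painter? Answer in one short sentence.",
        }
    ],
    temperature=0,
)

for chunk in stream:
    # Each event carries a delta holding the newly generated tokens.
    content = chunk.data.choices[0].delta.content
    if content is not None:
        print(content, end="")
```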
Querying the Model (FIM Completion)
```python
import os

from mistralai import Mistral

client = Mistral(server_url=os.environ["OUTSCALE_SERVER_URL"])

# Fill-in-the-middle: the model generates the code that belongs
# between `prompt` (the prefix) and `suffix`.
resp = client.fim.complete(
    model=os.environ["OUTSCALE_MODEL_NAME"],
    prompt="def count_words_in_file(file_path: str) -> int:",
    suffix="return n_words",
)

print(resp.choices[0].message.content)
```
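Note that fill-in-the-middle is designed for code models such as Codestral, so make sure the deployed model supports it. The response contains only the generated middle section; to rebuild the full function, concatenate prefix, completion, and suffix yourself. A minimal sketch (`max_tokens` is assumed to be honored by your deployment, as it is by the hosted API):

```python
import os

from mistralai import Mistral

client = Mistral(server_url=os.environ["OUTSCALE_SERVER_URL"])

prefix = "def count_words_in_file(file_path: str) -> int:"
suffix = "return n_words"

# Cap the size of the generated middle section with max_tokens.
resp = client.fim.complete(
    model=os.environ["OUTSCALE_MODEL_NAME"],
    prompt=prefix,
    suffix=suffix,
    max_tokens=64,
)

# Reassemble the complete function: prefix + generated middle + suffix.
print(prefix + resp.choices[0].message.content + suffix)
```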
Going Further
For more information and examples, you can check:
- The Outscale documentation explaining how to subscribe to a Mistral service and deploy it.