
Code generation

Codestral is a cutting-edge generative model that has been specifically designed and optimized for code generation tasks, including fill-in-the-middle and code completion. Codestral was trained on 80+ programming languages, enabling it to perform well on both common and less common languages.

important

We currently offer two domains for Codestral endpoints, both providing FIM and instruct routes:

codestral.mistral.ai
  • Monthly subscription based, free until August 1st
  • Rate limit of 30 requests per minute and a high daily limit of 2,000 requests
  • Requires a new key, for which a phone number is needed

api.mistral.ai
  • Lets you use your existing API key, with paid access to Codestral
  • Ideal for business use
  • Provides higher rate limits of 200 requests per second per workspace

Wondering which endpoint to use?

  • If you're a user who wants to query Codestral from an IDE plugin, codestral.mistral.ai is recommended.
  • If you're building a plugin, or anything that exposes these endpoints directly to users who bring their own API keys, you should also target codestral.mistral.ai.
  • For all other use cases, api.mistral.ai is better suited.

This guide uses api.mistral.ai for demonstration.
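Both domains expose the same routes, so switching between them is only a change of base URL and key. As a minimal sketch of what the SDK calls below do under the hood, here is a raw-HTTP request to the FIM route; the /v1/fim/completions path and the response shape mirror the Python examples later in this guide, so double-check them against the current API reference:

import os
import requests

# Swap in https://codestral.mistral.ai (with a Codestral-specific key) to target
# the subscription endpoint; the route itself stays the same.
BASE_URL = "https://api.mistral.ai"
api_key = os.environ["MISTRAL_API_KEY"]

response = requests.post(
    f"{BASE_URL}/v1/fim/completions",
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "model": "codestral-latest",
        "prompt": "def fibonacci(n: int):",
        "suffix": "n = int(input('Enter a number: '))\nprint(fibonacci(n))",
    },
)
print(response.json()["choices"][0]["message"]["content"])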

This guide walks you through Codestral's fill-in-the-middle endpoint, its instruct endpoint, the open-weight Codestral model, and several community integrations:

  • Fill-in-the-middle endpoint
  • Instruct endpoint
  • Open-weight Codestral
  • Integrations

Fill-in-the-middle endpoint

With this feature, users can define the starting point of the code using a prompt, and the ending point of the code using an optional suffix and an optional stop. The Codestral model will then generate the code that fits in between, making it ideal for tasks that require a specific piece of code to be generated. Below are three examples:

Example 1: Fill in the middle

import os
from mistralai.client import MistralClient

api_key = os.environ["MISTRAL_API_KEY"]

client = MistralClient(api_key=api_key)

model = "codestral-latest"
prompt = "def fibonacci(n: int):"
suffix = "n = int(input('Enter a number: '))\nprint(fibonacci(n))"

response = client.completion(
    model=model,
    prompt=prompt,
    suffix=suffix,
)

print(
    f"""
{prompt}
{response.choices[0].message.content}
{suffix}
"""
)

Example 2: Completion

import os
from mistralai.client import MistralClient

api_key = os.environ["MISTRAL_API_KEY"]

client = MistralClient(api_key=api_key)

model = "codestral-latest"
prompt = "def is_odd(n): \n return n % 2 == 1 \ndef test_is_odd():"

response = client.completion(
    model=model,
    prompt=prompt,
)

print(
    f"""
{prompt}
{response.choices[0].message.content}
"""
)

Example 3: Stop tokens

tip

We recommend adding stop tokens for IDE autocomplete integrations to prevent the model from being too verbose.

import os
from mistralai.client import MistralClient

api_key = os.environ["MISTRAL_API_KEY"]

client = MistralClient(api_key=api_key)

model = "codestral-latest"
prompt = "def is_odd(n): \n return n % 2 == 1 \ndef test_is_odd():"
suffix = "n = int(input('Enter a number: '))\nprint(is_odd(n))"

response = client.completion(
    model=model,
    prompt=prompt,
    suffix=suffix,
    stop=["\n\n"],
)

print(
    f"""
{prompt}
{response.choices[0].message.content}
"""
)

Instruct endpoint

We also provide an instruct endpoint for Codestral with the same model, codestral-latest. The only difference is the endpoint used:

import os
from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage

api_key = os.environ["MISTRAL_API_KEY"]

client = MistralClient(api_key=api_key)

model = "codestral-latest"

messages = [
    ChatMessage(role="user", content="Write a function for fibonacci")
]
chat_response = client.chat(
    model=model,
    messages=messages,
)
print(chat_response.choices[0].message.content)

Open-weight Codestral

Codestral is also available open-weight under the Mistral AI Non-Production License (MNPL).

Check out the mistral-inference README to learn how to run Codestral with it.
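As a rough sketch of what the README describes (module paths follow a recent mistral-inference release, and the local weights path and tokenizer filename are assumptions; adjust them to your download):

import os

from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_inference.generate import generate
from mistral_inference.transformer import Transformer

model_path = os.path.expanduser("~/codestral-22B")  # hypothetical weights folder

# Load the tokenizer and model from the downloaded weights.
tokenizer = MistralTokenizer.from_file(f"{model_path}/tokenizer.model.v3")
model = Transformer.from_folder(model_path)

# Encode an instruct-style request, generate, and decode.
request = ChatCompletionRequest(
    messages=[UserMessage(content="Write a function for fibonacci")]
)
tokens = tokenizer.encode_chat_completion(request).tokens
out_tokens, _ = generate(
    [tokens],
    model,
    max_tokens=256,
    temperature=0.0,
    eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id,
)
print(tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0]))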

Integration with continue.dev

Continue.dev supports both Codestral base for code generation and Codestral Instruct for chat.

How to set up Codestral with Continue

Here is a step-by-step guide on how to set up Codestral with Continue using the Mistral AI API:

  1. Install the Continue VS Code or JetBrains extension following the instructions here. Make sure you install a Continue version newer than v0.8.33.

  2. Automatic setup:

  • Click the Continue extension icon in the left menu. Select Mistral API as the provider and Codestral as the model.
  • Click "Get API Key" to get a Codestral API key.
  • Click "Add model", which will automatically populate config.json.
  3. (Alternative) Manually edit config.json:
  • Click the gear icon in the bottom right corner of the Continue window to open ~/.continue/config.json (macOS) / %userprofile%\.continue\config.json (Windows).
  • Log in and request a Codestral API key on Mistral AI's La Plateforme here.
  • To use Codestral as your model for both autocomplete and chat, replace [API_KEY] with your Mistral API key below and add it to your config.json file:
~/.continue/config.json
{
  "models": [
    {
      "title": "Codestral",
      "provider": "mistral",
      "model": "codestral-latest",
      "apiKey": "[API_KEY]"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Codestral",
    "provider": "mistral",
    "model": "codestral-latest",
    "apiKey": "[API_KEY]"
  }
}

If you run into any issues or have any questions, please join our Discord and post in the #help channel here.

Integration with Tabnine

Tabnine supports Codestral Instruct for chat.

How to set up Codestral with Tabnine

What is Tabnine Chat?

Tabnine Chat is a code-centric chat application that runs in the IDE and allows developers to interact with Tabnine’s AI models in a flexible, free-form way, using natural language. Tabnine Chat also supports dedicated quick actions that use predefined prompts optimized for specific use cases.

Getting started

To start using Tabnine Chat, first launch it in your IDE (VS Code, JetBrains, or Eclipse). Then interact with it in natural language, for example by asking questions or giving instructions. Once you receive a response, you can read, review, and apply it within your code.

Selecting Codestral as Tabnine Chat App model

In the Tabnine Chat App, use the model selector to choose Codestral.

Integration with LangChain

LangChain provides support for Codestral Instruct. Here is how you can use it in LangChain:

# make sure to install `langchain` and `langchain-mistralai` in your Python environment

import os
from langchain_mistralai import ChatMistralAI

api_key = os.environ["MISTRAL_API_KEY"]
mistral_model = "codestral-latest"
llm = ChatMistralAI(model=mistral_model, temperature=0, api_key=api_key)
llm.invoke([("user", "Write a function for fibonacci")])

For a more advanced use case, self-corrective code generation with Codestral Instruct and tool use, check out this notebook and this video.
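To give a flavor of the pattern the notebook builds on, here is a hedged sketch of tool-style structured output with ChatMistralAI; the CodeSolution schema is an illustration, not the notebook's actual code:

import os

from langchain_mistralai import ChatMistralAI
from pydantic import BaseModel, Field

class CodeSolution(BaseModel):
    """A code answer split into prose, imports, and body."""
    explanation: str = Field(description="Description of the approach")
    imports: str = Field(description="Import statements")
    code: str = Field(description="Function definition, excluding imports")

llm = ChatMistralAI(
    model="codestral-latest",
    temperature=0,
    api_key=os.environ["MISTRAL_API_KEY"],
)

# Bind the schema as a tool so the model returns a parsed CodeSolution,
# which downstream checks (imports, execution) can then validate.
structured_llm = llm.with_structured_output(CodeSolution)
solution = structured_llm.invoke("Write a function for fibonacci")
print(solution.imports)
print(solution.code)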

Integration with LlamaIndex

LlamaIndex provides support for Codestral Instruct and Fill In Middle (FIM) endpoints. Here is how you can use it in LlamaIndex:

# make sure to install `llama-index` and `llama-index-llms-mistralai` in your Python environment

import os
from llama_index.core.llms import ChatMessage
from llama_index.llms.mistralai import MistralAI

api_key = os.environ["MISTRAL_API_KEY"]
mistral_model = "codestral-latest"
messages = [
    ChatMessage(role="user", content="Write a function for fibonacci"),
]
response = MistralAI(api_key=api_key, model=mistral_model).chat(messages)
print(response)

Check out more details on using Instruct and Fill In Middle (FIM) with LlamaIndex in this notebook.
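For FIM specifically, here is a hedged sketch; the fill_in_middle helper is assumed from that notebook, so verify the exact method name and signature against your installed llama-index-llms-mistralai version:

import os

from llama_index.llms.mistralai import MistralAI

llm = MistralAI(api_key=os.environ["MISTRAL_API_KEY"], model="codestral-latest")

# Ask the model for the code between prompt and suffix, as in the earlier
# fill-in-the-middle examples.
response = llm.fill_in_middle(
    prompt="def fibonacci(n: int):",
    suffix="n = int(input('Enter a number: '))\nprint(fibonacci(n))",
)
print(response.text)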