Prompting
When you first start using Mistral models, your initial interaction will revolve around prompts. Mastering the art of crafting effective prompts is essential for generating high-quality responses from Mistral models or other LLMs.
Main Concepts
The Art of Crafting Prompts
Below, we cover the core concepts of prompting and how to craft effective prompts to maximize the capabilities of our models.
System Prompt
There are two main types of prompts: system and user.
- The system prompt is provided at the beginning of the conversation. It sets the general context and instructions for the model's behavior and is typically managed by the developer.
- The user prompt is provided during the conversation to give the model specific context or instructions for the current interaction.
As a developer, you can still use user prompts to provide additional context or instructions during the conversation if needed.
If you cannot control the system prompt, you can still include the general context and instructions in the user prompt by concatenating them with the actual query.
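In practice, these prompts are sent to the chat API as a list of role-tagged messages. Below is a minimal sketch using the mistralai Python client; the model name and prompt strings are placeholders:

```python
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Role-separated: the system prompt sets the general behavior,
# the user prompt carries the actual query.
response = client.chat.complete(
    model="mistral-small-latest",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
)
print(response.choices[0].message.content)
```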
Role-Separated Example:

```json
[
  {
    "role": "system",
    "content": "system_prompt"
  },
  {
    "role": "user",
    "content": "user_prompt"
  }
]
```

Concatenated Example:

```json
[
  {
    "role": "user",
    "content": "system_prompt\n\nUser: user_prompt"
  }
]
```
Providing a Purpose

Begin the system prompt by stating the model's purpose: what the model is and what task it must perform, as in the example below.
Structure

Organize the prompt into clearly delimited sections, for example with markdown headers:
```
You are a language detection model. Your task is to detect the language of the given text.
# Available Languages
Select the language from the following list:
- English: "en"
- French: "fr"
- Spanish: "es"
- German: "de"
Any language not listed must be classified as "other" with the code "other".
# Response Format
Your answer must follow this format:
{"language_iso": <language_code>}
# Examples
Below are sample inputs and expected outputs:
## English
User: Hello, how are you?
Answer: {"language_iso": "en"}
## French
User: Bonjour, comment allez-vous?
Answer: {"language_iso": "fr"}
```

Formatting

A good prompt format is:
- Readable: Easy for humans to scan.
- Parsable: Simple to extract information programmatically.
- Familiar: Likely seen frequently in the model's training data.

Good formatting not only helps the model understand the prompt but also makes it easier for developers to iterate on and maintain the application.
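As an illustrative sketch (the variable names are hypothetical), a structured system prompt like the one above can be assembled from a template so it stays readable and easy to iterate on:

```python
# Build the language-detection system prompt from a data structure,
# so adding a language means editing one dict rather than raw prose.
LANGUAGES = {"English": "en", "French": "fr", "Spanish": "es", "German": "de"}

language_list = "\n".join(f'- {name}: "{code}"' for name, code in LANGUAGES.items())

system_prompt = f"""You are a language detection model. Your task is to detect the language of the given text.

# Available Languages
Select the language from the following list:
{language_list}
Any language not listed must be classified as "other" with the code "other".

# Response Format
Your answer must follow this format:
{{"language_iso": <language_code>}}"""

print(system_prompt)
```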
Example Prompting
Few-Shot Prompting

Few-shot prompting supplies the model with a handful of example inputs and outputs so it can infer the expected pattern and format. Examples can be embedded directly in the system prompt:

```
[...]
# Examples
Input: Hello, how are you?
Output: {"language_iso": "en"}
[...]
```

Alternatively, the examples can be passed as alternating user and assistant messages in the conversation.

Standard Few-Shot Prompting Structure:
```json
[
  {
    "role": "system",
    "content": "You are a language detection model. Your task is to detect the language of the given text.\n[...]"
  },
  {
    "role": "user",
    "content": "Hello, how are you?"
  },
  {
    "role": "assistant",
    "content": "{\"language_iso\": \"en\"}"
  },
  {
    "role": "user",
    "content": "Bonjour, comment allez-vous?"
  },
  {
    "role": "assistant",
    "content": "{\"language_iso\": \"fr\"}"
  }
]
```

Structured Outputs

When the response must be machine-readable, you can go beyond prompt instructions and use the API's structured outputs (JSON mode) to constrain the model to produce valid JSON.
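A minimal sketch with the mistralai Python client, assuming JSON mode via the response_format parameter (the model name is a placeholder):

```python
import json
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.chat.complete(
    model="mistral-small-latest",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "You are a language detection model. Your task is to detect "
                       "the language of the given text. Reply only with JSON in the "
                       'form {"language_iso": <language_code>}.',
        },
        {"role": "user", "content": "Bonjour, comment allez-vous?"},
    ],
    # Constrain the output to valid JSON.
    response_format={"type": "json_object"},
)

result = json.loads(response.choices[0].message.content)
print(result["language_iso"])  # expected: "fr"
```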
Advice
What to Avoid
Below is a list of "good to know" advice about what to avoid. The list is not exhaustive and depends on your use case, but these points are worth keeping in mind while building your prompts.
Avoid Subjective and Blurry Words
- Avoid blurry quantitative adjectives: “too long”, “too short”, “many”, “few”, etc.
- Instead, provide objective measures.
- Avoid blurry words like “things”, “stuff”, “write an interesting report”, “make it better”, etc.
- Instead, state exactly what you mean by “interesting”, “better”, etc.
Avoid Contradictions
Example:
- “If the new data is related to an existing database record, update this record.”
- “If the data is new, create a new record.”
- This is ambiguous: data that relates to an existing record can also be "new", so both rules apply at once and the model cannot tell whether to update the record or create a new one.
Instead, use a decision tree:
```
## How to update database records
Follow these steps:
- If the data does not include new information (i.e., it already exists in a record):
  - Ignore this data.
- Otherwise, if the data is not related to any existing record in the same table:
  - Create a new record.
- Otherwise, if the related record is larger than 100 characters:
  - Create a new record.
- Otherwise, if the data directly contradicts the existing record:
  - Delete the existing record and create a new one.
- Otherwise:
  - Update the existing record to include the new data.
```

Do Not Make LLMs Count Words

LLMs are unreliable at counting characters, words, or tokens, so avoid any instruction that requires the model to count.
- Avoid: “If the record is too long, split it into multiple records.”
- Avoid: “If the record is longer than 100 characters, split it into multiple records.”
- Instead, provide character counts as input:
```
Existing records:
- { record: "User: Alice, Age: 30", charCount: 20 }
- { record: "User: Bob, Age: 25", charCount: 18 }
New data:
- { data: "User: Charlie, Age: 35", charCount: 22 }
```
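A minimal sketch of computing these counts in code before building the prompt (the record layout mirrors the example above):

```python
# Pre-compute character counts with len() and inject them into the prompt,
# instead of asking the model to count.
records = ["User: Alice, Age: 30", "User: Bob, Age: 25"]
new_data = "User: Charlie, Age: 35"

lines = ["Existing records:"]
for record in records:
    lines.append(f'- {{ record: "{record}", charCount: {len(record)} }}')
lines.append("New data:")
lines.append(f'- {{ data: "{new_data}", charCount: {len(new_data)} }}')

prompt_context = "\n".join(lines)
print(prompt_context)
```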
Do Not Generate Too Many Tokens
Bad Examples:
- Generating the full record content for a NO_OP operation.
- Generating an entire book in one shot.

Instead, only generate the update or the necessary data.
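One way to enforce this, sketched with the mistralai Python client: instruct the model to emit only the delta, and optionally cap generation with max_tokens as a safety net (the model name and prompts are placeholders):

```python
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.chat.complete(
    model="mistral-small-latest",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Return only the fields that changed, as JSON. "
                "For a NO_OP, return {} and nothing else."
            ),
        },
        {"role": "user", "content": "Existing record: ...\nNew data: ..."},
    ],
    max_tokens=128,  # hard cap; the prompt instructions do the real work
)
print(response.choices[0].message.content)
```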
Prefer Worded Scales
Avoid:
“Rate these options on a 1 to 5 scale, 1 being highly irrelevant and 5 being highly relevant.”

Use:

```
Rate these options using this scale:
- Very Low: if the option is highly irrelevant
- Low: if the option is not good enough
- Neutral: if the option is not particularly interesting
- Good: if the option is worth considering
- Very Good: for highly relevant options
```

You can convert this worded scale back to a numeric one if needed.
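If you do convert it, a minimal post-processing sketch (the 1-to-5 mapping is an assumption, not part of any API):

```python
# Map the model's worded rating back to a number after generation.
# The 1-5 mapping below is an assumption, not part of the API.
SCALE = {"Very Low": 1, "Low": 2, "Neutral": 3, "Good": 4, "Very Good": 5}

def to_numeric(rating: str) -> int:
    """Convert a worded rating such as 'Good' to its numeric value."""
    return SCALE[rating.strip()]

print(to_numeric("Good"))  # -> 4
```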
Prompting Examples
- Classification
- Summarization
- Personalization
- Evaluation