Test a model in the API playground
Use the Studio ↗ playground to send prompts to Mistral models, adjust generation parameters, and compare outputs, all without writing code.
- Interactive testing: type a prompt and get a response in seconds
- Model comparison: switch between models to see how outputs differ
- Parameter tuning: adjust temperature, max tokens, and other settings in real time
By the end you will know how to evaluate models and parameters before integrating them into your application.
Time to complete: ~5 minutes
Prerequisites
- An active Studio account with the Experiment or Scale plan. If you have not set one up yet, complete the Activate Studio and generate an API key ↗ quickstart first.
Step 1: Open the playground
- Go to Studio ↗.
- Click Playground in the left sidebar.
The playground opens with a chat interface where you can interact with any available model.
Step 2: Select a model and send a prompt
- Open the Model dropdown at the top of the playground.
- Select a model (for example, `mistral-small-latest` for fast responses or `mistral-large-latest` for higher quality).
- Type a prompt in the input field:
Explain the difference between supervised and unsupervised learning in three sentences.
- Click Send (or press Enter).
The response appears in the chat panel. Review the output for quality, accuracy, and style.
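When you are ready to move beyond the playground, the same request can be reproduced programmatically. A minimal sketch, assuming the standard `POST /v1/chat/completions` chat endpoint and an API key in a `MISTRAL_API_KEY` environment variable (the payload shape shown here is an assumption based on the common chat-completions format):

```python
import json
import os

# Build the same chat request the playground sends on your behalf.
payload = {
    "model": "mistral-small-latest",
    "messages": [
        {
            "role": "user",
            "content": (
                "Explain the difference between supervised and "
                "unsupervised learning in three sentences."
            ),
        }
    ],
}

headers = {
    "Authorization": f"Bearer {os.environ.get('MISTRAL_API_KEY', '')}",
    "Content-Type": "application/json",
}

print(json.dumps(payload, indent=2))

# To actually send it (requires the third-party `requests` package
# and a valid key):
# import requests
# r = requests.post("https://api.mistral.ai/v1/chat/completions",
#                   headers=headers, json=payload, timeout=30)
# print(r.json()["choices"][0]["message"]["content"])
```

The network call is left commented out so the sketch runs without credentials; it only constructs and prints the request body.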
Step 3: Adjust parameters and compare outputs
The right sidebar exposes generation parameters. Change them to see how they affect the output.
- Temperature: controls randomness. Lower values (for example, 0.1) produce more deterministic outputs; higher values (for example, 0.9) increase creativity.
- Max tokens: limits response length. Set this to control cost and verbosity.
- Top P: an alternative to temperature for controlling diversity.
Try this experiment:
- Set Temperature to `0.1` and send the same prompt.
- Set Temperature to `0.9` and send it again.
- Compare the two responses. The low-temperature response is more focused and consistent; the high-temperature response shows more variation.
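The temperature experiment above maps directly onto a request parameter. A hedged sketch that prepares two otherwise-identical requests differing only in `temperature` (parameter names assume a standard chat-completions payload; nothing is sent over the network):

```python
import copy

# Base request shared by both runs of the experiment.
base_request = {
    "model": "mistral-small-latest",
    "messages": [
        {
            "role": "user",
            "content": (
                "Explain the difference between supervised and "
                "unsupervised learning in three sentences."
            ),
        }
    ],
    "max_tokens": 200,  # cap length to keep the comparison cheap
}

# One low-temperature and one high-temperature variant of the same request,
# so any difference in the responses is attributable to temperature alone.
requests_to_compare = []
for temp in (0.1, 0.9):
    req = copy.deepcopy(base_request)
    req["temperature"] = temp
    requests_to_compare.append(req)

for req in requests_to_compare:
    print(f"temperature={req['temperature']} model={req['model']}")
```

Holding every other field constant is the same discipline the playground encourages: change one parameter at a time so you can attribute the difference you observe.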
Switch to a different model from the dropdown and send the same prompt to compare how models handle the same task.
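Model comparison can be scripted the same way: prepare one request per model, identical except for the `model` field. A sketch under the same assumed payload shape, using the model names from the step above:

```python
prompt = (
    "Explain the difference between supervised and "
    "unsupervised learning in three sentences."
)

# One request per model, identical except for `model`, so any
# difference in output comes from the model itself.
models = ["mistral-small-latest", "mistral-large-latest"]
comparison_requests = [
    {
        "model": name,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.1,  # fix temperature so the comparison is fair
    }
    for name in models
]

for req in comparison_requests:
    print(req["model"])
```

Fixing the temperature while varying the model isolates model choice as the only variable, mirroring the manual comparison described above.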
Verify
You have used the playground to:
- Send a prompt to a Mistral model.
- Adjust temperature and observe the effect on output.
- Compare responses across different models.
You now understand how model selection and parameters affect generation quality, which helps you make informed choices when building your application.