# MistralAI

## Docs

- [Agents & Conversations](https://docs.mistral.ai/docs/agents/agents_and_conversations.md): Agents & Conversations API: create and manage agents with tools, and handle interactive conversations with persistent history
- [Agents Function Calling](https://docs.mistral.ai/docs/agents/agents_function_calling.md): Agents use tools and function calling to perform tasks, with built-in and customizable options
- [Agents Introduction](https://docs.mistral.ai/docs/agents/agents_introduction.md): AI agents autonomously execute tasks using LLMs, with tools, state persistence, and multi-agent collaboration via the Agents API
- [Code Interpreter](https://docs.mistral.ai/docs/agents/connectors/code_interpreter.md): Code Interpreter enables safe, on-demand code execution for data analysis, graphing, and more in isolated containers
- [Connectors Overview](https://docs.mistral.ai/docs/agents/connectors/connectors_overview.md): Connectors enable agents and users to access tools like websearch, code interpreter, image generation, and document library on demand
- [Document Library](https://docs.mistral.ai/docs/agents/connectors/document_library.md): Document Library enhances agents with uploaded documents via Mistral Cloud's built-in RAG tool
- [Image Generation](https://docs.mistral.ai/docs/agents/connectors/image_generation.md): Built-in tool for agents to generate images on demand, with detailed output handling and download options
- [Websearch](https://docs.mistral.ai/docs/agents/connectors/websearch.md): Websearch enables models to browse the web for real-time, up-to-date information and access specific websites
- [Agents Handoffs](https://docs.mistral.ai/docs/agents/handoffs.md): Agents Handoffs enable seamless task delegation and workflow automation between multiple agents with diverse tools and capabilities
- [MCP](https://docs.mistral.ai/docs/agents/mcp.md): MCP is an open standard protocol for seamless AI model integration with data sources and tools
- [Audio & Transcription](https://docs.mistral.ai/docs/capabilities/audio_and_transcription.md): Voxtral models enable chat and transcription via audio input with various file-passing methods
- [Batch Inference](https://docs.mistral.ai/docs/capabilities/batch_inference.md): Process multiple API requests in batches with customizable models, endpoints, and metadata
- [Citations and References](https://docs.mistral.ai/docs/capabilities/citations_and_references.md): Citations and references enable models to ground responses with sources, ideal for RAG and agentic applications
- [Coding](https://docs.mistral.ai/docs/capabilities/coding.md): Mistral AI offers Codestral for code generation and FIM, and Devstral for agentic tool use in software development, with integrations for IDEs and frameworks
- [Annotations](https://docs.mistral.ai/docs/capabilities/document_ai/annotations.md): Mistral Document AI API extracts structured data from documents using custom JSON annotations for bboxes and full documents
- [Basic OCR](https://docs.mistral.ai/docs/capabilities/document_ai/basic_ocr.md): Extract text and structured content from PDFs and images with Mistral's Document AI OCR processor
- [Document AI](https://docs.mistral.ai/docs/capabilities/document_ai/document_ai_overview.md): Mistral Document AI offers enterprise-grade OCR, structured data extraction, and multilingual support for fast, accurate document processing
- [Document QnA](https://docs.mistral.ai/docs/capabilities/document_ai/document_qna.md): Document QnA combines OCR and AI to enable natural language queries on document content for insights and extraction
- [Code Embeddings](https://docs.mistral.ai/docs/capabilities/embeddings/code_embeddings.md): Code embeddings enable retrieval, clustering, and analytics for code databases and coding assistants using Mistral AI's API
- [Embeddings Overview](https://docs.mistral.ai/docs/capabilities/embeddings/embeddings_overview.md): Mistral AI's Embeddings API provides advanced vector representations for text and code, enabling NLP tasks like retrieval, clustering, and classification
- [Text Embeddings](https://docs.mistral.ai/docs/capabilities/embeddings/text_embeddings.md): Generate and use text embeddings with Mistral AI's API for NLP tasks like similarity, classification, and retrieval
- [Classifier Factory](https://docs.mistral.ai/docs/capabilities/finetuning/classifier-factory.md): Create and fine-tune custom classification models for intent detection, moderation, sentiment analysis, and more using Mistral's Classifier Factory
- [Fine-tuning Overview](https://docs.mistral.ai/docs/capabilities/finetuning): Learn about fine-tuning AI models, its benefits, use cases, and available services for customization
- [Text & Vision Fine-tuning](https://docs.mistral.ai/docs/capabilities/finetuning/text-vision-finetuning.md): Fine-tune Mistral's text and vision models with custom datasets in JSONL format for domain-specific or conversational improvements
- [Function calling](https://docs.mistral.ai/docs/capabilities/function-calling.md): Mistral models enable function calling to integrate external tools for dynamic, data-driven responses
- [Moderation](https://docs.mistral.ai/docs/capabilities/moderation.md): Mistral's moderation API detects harmful content across multiple categories using AI-powered classification for text and conversations
- [Predicted outputs](https://docs.mistral.ai/docs/capabilities/predicted-outputs.md): Optimize response time by predefining predictable content for faster, more efficient AI outputs
- [Reasoning](https://docs.mistral.ai/docs/capabilities/reasoning.md): Reasoning models generate logical chains of thought to solve problems, improving accuracy with extra compute time
- [Custom Structured Output](https://docs.mistral.ai/docs/capabilities/structured-output/custom.md): Define and enforce JSON output formats using Pydantic or Zod schemas with Mistral AI
- [JSON mode](https://docs.mistral.ai/docs/capabilities/structured-output/json-mode.md): Enable JSON mode by setting `response_format` to `{"type": "json_object"}` in API requests
- [Structured Output](https://docs.mistral.ai/docs/capabilities/structured-output/overview.md): Learn to generate structured outputs like JSON for LLM agents and pipelines, with custom and flexible formatting options
- [Text and Chat Completions](https://docs.mistral.ai/docs/capabilities/text_and_chat_completions.md): Mistral models enable chat and text completions with customizable prompts, roles, and streaming options
- [Vision](https://docs.mistral.ai/docs/capabilities/vision.md): Multimodal AI models analyze images and text for insights, supporting use cases like OCR, chart understanding, and receipt transcription
- [AWS Bedrock](https://docs.mistral.ai/docs/deployment/cloud/aws.md): Deploy and query Mistral AI models on AWS Bedrock with fully managed, serverless endpoints
- [Azure AI](https://docs.mistral.ai/docs/deployment/cloud/azure.md): Deploy and query Mistral AI models on Azure AI via serverless MaaS or GPU-based endpoints
- [IBM watsonx.ai](https://docs.mistral.ai/docs/deployment/cloud/ibm-watsonx.md): Mistral AI's Large model on IBM watsonx.ai: SaaS and on-premise deployment with setup, API access, and usage guides
- [Outscale](https://docs.mistral.ai/docs/deployment/cloud/outscale.md): Deploy and query Mistral AI models on Outscale via managed VMs and REST APIs
- [Cloud](https://docs.mistral.ai/docs/deployment/cloud/overview.md): Access Mistral AI models via Azure, AWS, Google Cloud, Snowflake, IBM, and Outscale using cloud credits
- [Snowflake Cortex](https://docs.mistral.ai/docs/deployment/cloud/sfcortex.md): Access Mistral AI models on Snowflake Cortex as serverless, fully managed endpoints for SQL and Python
- [Vertex AI](https://docs.mistral.ai/docs/deployment/cloud/vertex.md): Deploy and query Mistral AI models on Google Cloud Vertex AI as serverless endpoints
- [Workspaces](https://docs.mistral.ai/docs/deployment/laplateforme/organization.md): La Plateforme workspaces enable team collaboration, access control, and shared fine-tuned models
- [La Plateforme](https://docs.mistral.ai/docs/deployment/laplateforme/overview.md): Mistral AI's La Plateforme offers pay-as-you-go API access to its latest models with flexible deployment options
- [Pricing](https://docs.mistral.ai/docs/deployment/laplateforme/pricing.md): Check the pricing page for detailed API cost information
- [Rate limit and usage tiers](https://docs.mistral.ai/docs/deployment/laplateforme/tier.md): Learn about Mistral's API rate limits, usage tiers, and how to upgrade for higher capacity
- [Deploy with Cerebrium](https://docs.mistral.ai/docs/deployment/self-deployment/cerebrium.md): Deploy AI apps effortlessly with Cerebrium's serverless GPU infrastructure and auto-scaling
- [Deploy with Cloudflare Workers AI](https://docs.mistral.ai/docs/deployment/self-deployment/cloudflare.md): Deploy AI models on Cloudflare's global network with Workers AI for serverless GPU-powered LLMs
- [Self-deployment](https://docs.mistral.ai/docs/deployment/self-deployment/overview.md): Deploy Mistral AI models on your infrastructure using vLLM, TensorRT-LLM, TGI, or tools like SkyPilot and Cerebrium
- [Deploy with SkyPilot](https://docs.mistral.ai/docs/deployment/self-deployment/skypilot.md): Deploy AI models on any cloud with SkyPilot for cost savings, high GPU availability, and managed execution
- [Text Generation Inference](https://docs.mistral.ai/docs/deployment/self-deployment/tgi.md): TGI is a toolkit for deploying and serving LLMs with high-performance text generation features like quantization and OpenAI-like API support
- [TensorRT](https://docs.mistral.ai/docs/deployment/self-deployment/trt.md): Guide to building and deploying TensorRT-LLM engines with the Triton inference server
- [vLLM](https://docs.mistral.ai/docs/deployment/self-deployment/vllm.md): vLLM is an open-source LLM inference engine optimized for deploying Mistral models on-premise
- [SDK Clients](https://docs.mistral.ai/docs/getting-started/clients.md): Official Python & TypeScript SDKs and community clients for Mistral AI
- [Mistral AI Documentation](https://docs.mistral.ai/docs/getting-started/docs_introduction.md): Mistral AI offers open-source and commercial LLMs, APIs, and tools for developers and enterprises to build AI-powered applications
- [Glossary](https://docs.mistral.ai/docs/getting-started/glossary.md): Glossary of key AI and LLM terms, including LLMs, text generation, tokens, MoE, RAG, fine-tuning, function calling, embeddings, and temperature
- [Model customization](https://docs.mistral.ai/docs/getting-started/model_customization.md): Learn how to customize LLMs for your application with system prompts, fine-tuning, and moderation layers
- [Models Benchmarks](https://docs.mistral.ai/docs/getting-started/models/benchmark.md): Mistral's benchmarked models excel in reasoning, multilingual tasks, coding, and multimodal capabilities, outperforming competitors in key benchmarks
- [Model selection](https://docs.mistral.ai/docs/getting-started/models/model_selection.md): Guide to selecting Mistral models based on performance, cost, and use case complexity
- [Models Overview](https://docs.mistral.ai/docs/getting-started/models/overview.md): Mistral offers open and premier models for various tasks, including text, code, audio, and multimodal processing
- [Model weights](https://docs.mistral.ai/docs/getting-started/models/weights.md): Open-source pre-trained and instruction-tuned models with various licenses, download links, and usage guidelines
- [Quickstart](https://docs.mistral.ai/docs/getting-started/quickstart.md): Quickstart guide for setting up a Mistral AI account, configuring billing, and using the API for models and embeddings
- [Basic RAG](https://docs.mistral.ai/docs/guides/basic-RAG.md): Learn how to build a basic RAG system by combining retrieval and generation for AI-powered knowledge-based responses
- [Ambassador](https://docs.mistral.ai/docs/guides/contribute/ambassador.md): Join Mistral AI's Ambassador Program to advocate, create content, and gain exclusive benefits for AI enthusiasts
- [Contribute](https://docs.mistral.ai/docs/guides/contribute/overview.md): Learn how to contribute to Mistral AI through docs, code, community, and the Ambassador Program
- [Evaluation](https://docs.mistral.ai/docs/guides/evaluation.md): Guide to evaluating LLMs for specific tasks with metric-based, human, and LLM-based methods
- [Fine-tuning](https://docs.mistral.ai/docs/guides/finetuning.md): Fine-tuning models incurs a $2 monthly storage fee per model; see pricing for details
- [01 Intro Basics](https://docs.mistral.ai/docs/guides/finetuning_sections/_01_intro_basics.md): Learn the basics of fine-tuning LLMs with Mistral AI's API and open-source tools for optimized performance
- [02 Prepare Dataset](https://docs.mistral.ai/docs/guides/finetuning_sections/_02_prepare_dataset.md): Learn how to prepare datasets for fine-tuning models across various use cases, from tone to coding and RAG
- [03 E2E Examples](https://docs.mistral.ai/docs/guides/finetuning_sections/_03_e2e_examples.md): Download the reformat_data.py script to validate and reformat datasets for Mistral API fine-tuning
- [04 FAQ](https://docs.mistral.ai/docs/guides/finetuning_sections/_04_faq.md): FAQ on data validation, size limits, job creation, and fine-tuning details for Mistral API and mistral-finetune
- [Observability](https://docs.mistral.ai/docs/guides/observability.md): Observability for LLMs ensures visibility, debugging, and performance optimization across prototyping, testing, and production
- [Other resources](https://docs.mistral.ai/docs/guides/other-resources.md): Explore the Mistral AI Cookbook for code examples, community contributions, and third-party tool integrations
- [Prefix](https://docs.mistral.ai/docs/guides/prefix.md): Prefixes enhance model responses by improving language adherence, saving tokens, enabling roleplay, and strengthening safeguards
- [Prompting capabilities](https://docs.mistral.ai/docs/guides/prompting-capabilities.md): Learn effective prompting techniques for classification, summarization, personalization, and evaluation with Mistral models
- [Sampling](https://docs.mistral.ai/docs/guides/sampling.md): Learn how to adjust LLM sampling parameters like Temperature, Top P, and penalties for better output control
- [Tokenization](https://docs.mistral.ai/docs/guides/tokenization.md): Learn about Mistral AI's tokenization process, including subword tokenization, control tokens, and Python implementation for LLMs
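The JSON mode entry above notes that structured JSON output is enabled by setting `response_format` to `{"type": "json_object"}` on a chat-completion request. A minimal sketch of such a request against the `https://api.mistral.ai/v1/chat/completions` endpoint, using only the Python standard library; the model name `mistral-small-latest` and the prompt are illustrative assumptions:

```python
import json
import os
import urllib.request

API_URL = "https://api.mistral.ai/v1/chat/completions"


def build_json_mode_payload(prompt: str, model: str = "mistral-small-latest") -> dict:
    """Build a chat-completion request body with JSON mode enabled."""
    return {
        "model": model,  # illustrative model choice; see the Models Overview page
        "messages": [{"role": "user", "content": prompt}],
        # Per the JSON mode docs: constrains the model to emit valid JSON.
        "response_format": {"type": "json_object"},
    }


def call_mistral(payload: dict, api_key: str) -> dict:
    """POST the payload with bearer auth and return the parsed response body."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    payload = build_json_mode_payload(
        "List three EU capitals as a JSON object with a 'capitals' array."
    )
    api_key = os.environ.get("MISTRAL_API_KEY")
    if api_key:
        reply = call_mistral(payload, api_key)
        print(reply["choices"][0]["message"]["content"])
    else:
        # No key set: just show the request body that would be sent.
        print(json.dumps(payload, indent=2))
```

The official Python SDK (see SDK Clients above) wraps this same endpoint; the raw-HTTP form is shown here only to keep the example dependency-free.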