
Mistral AI API (0.0.2)


The specification for our Chat Completion and Embeddings APIs. Create your account on La Plateforme to get access, and read the docs to learn how to use them.

Chat

Chat Completion API.

Chat Completion

Authorizations:
ApiKey
Request Body schema: application/json
required
required
Model (string) or Model (null) (Model)

ID of the model to use. You can use the List Available Models API to see all of your available models, or see our Model overview for model descriptions.

Temperature (number) or Temperature (null) (Temperature)

What sampling temperature to use; we recommend a value between 0.0 and 0.7. Higher values like 0.7 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p, but not both. The default value varies depending on the model you are targeting; call the /models endpoint to retrieve the appropriate value.

top_p
number (Top P) [ 0 .. 1 ]
Default: 1

Nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.

Max Tokens (integer) or Max Tokens (null) (Max Tokens)

The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length.

stream
boolean (Stream)
Default: false

Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. Otherwise, the server will hold the request open until the timeout or until completion, with the response containing the full result as JSON.

Stop (string) or Array of Stop (strings) (Stop)

Stop generation if this token is detected, or if one of these tokens is detected when an array is provided.

Random Seed (integer) or Random Seed (null) (Random Seed)

The seed to use for random sampling. If set, different calls will generate deterministic results.

required
Array of any (Messages)

The prompt(s) to generate completions for, encoded as a list of dicts with role and content.

object (ResponseFormat)
Array of Tools (objects) or Tools (null) (Tools)
ToolChoice (object) or ToolChoiceEnum (string) (Tool Choice)
Default: "auto"
presence_penalty
number (Presence Penalty) [ -2 .. 2 ]
Default: 0

presence_penalty determines how much the model penalizes the repetition of words or phrases. A higher presence penalty encourages the model to use a wider variety of words and phrases, making the output more diverse and creative.

frequency_penalty
number (Frequency Penalty) [ -2 .. 2 ]
Default: 0

frequency_penalty penalizes the repetition of words based on their frequency in the generated text. A higher frequency penalty discourages the model from repeating words that have already appeared frequently in the output, promoting diversity and reducing repetition.

N (integer) or N (null) (N)

Number of completions to return for each request; input tokens are only billed once.

safe_prompt
boolean
Default: false

Whether to inject a safety prompt before all conversations.

Responses

Request samples

Content type
application/json
{
  "model": "mistral-small-latest",
  "temperature": 1.5,
  "top_p": 1,
  "max_tokens": 0,
  "stream": false,
  "stop": "string",
  "random_seed": 0,
  "messages": [],
  "response_format": {},
  "tools": [],
  "tool_choice": "auto",
  "presence_penalty": 0,
  "frequency_penalty": 0,
  "n": 1,
  "safe_prompt": false
}

Response samples

Content type
application/json
{
  "id": "cmpl-e5cc70bb28c444948073e77776eb30ef",
  "object": "chat.completion",
  "model": "mistral-small-latest",
  "usage": {},
  "created": 1702256327,
  "choices": []
}
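As a concrete illustration, the request body above can be assembled and sent from Python. This is a minimal sketch using only the standard library; the base URL https://api.mistral.ai/v1 and the MISTRAL_API_KEY environment variable are assumptions, not part of the schema above.

```python
import json
import os
import urllib.request

API_BASE = "https://api.mistral.ai/v1"  # assumed public base URL

def build_chat_request(messages, model="mistral-small-latest", **params):
    """Build an urllib Request for POST /v1/chat/completions.

    Extra keyword arguments (temperature, top_p, max_tokens, ...) are merged
    into the JSON body, matching the request-body schema above.
    """
    body = {"model": model, "messages": messages, **params}
    return urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=json.dumps(body).encode(),
        headers={
            "Content-Type": "application/json",
            # ApiKey authorization, read from the environment
            "Authorization": f"Bearer {os.environ.get('MISTRAL_API_KEY', '')}",
        },
        method="POST",
    )

req = build_chat_request(
    [{"role": "user", "content": "Say hello."}],
    temperature=0.2,
    max_tokens=64,
)
# To actually send it: resp = urllib.request.urlopen(req); result = json.load(resp)
```

Any HTTP client works equally well; the only requirements are the JSON body, the Content-Type header, and the Bearer token.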

FIM

Fill-in-the-middle API.

Fim Completion

FIM completion.

Authorizations:
ApiKey
Request Body schema: application/json
required
required
Model (string) or Model (null) (Model)
Default: "codestral-2405"

ID of the model to use. Currently compatible only with:

  • codestral-2405
  • codestral-latest
Temperature (number) or Temperature (null) (Temperature)

What sampling temperature to use; we recommend a value between 0.0 and 0.7. Higher values like 0.7 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p, but not both. The default value varies depending on the model you are targeting; call the /models endpoint to retrieve the appropriate value.

top_p
number (Top P) [ 0 .. 1 ]
Default: 1

Nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.

Max Tokens (integer) or Max Tokens (null) (Max Tokens)

The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length.

stream
boolean (Stream)
Default: false

Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. Otherwise, the server will hold the request open until the timeout or until completion, with the response containing the full result as JSON.

Stop (string) or Array of Stop (strings) (Stop)

Stop generation if this token is detected, or if one of these tokens is detected when an array is provided.

Random Seed (integer) or Random Seed (null) (Random Seed)

The seed to use for random sampling. If set, different calls will generate deterministic results.

prompt
required
string (Prompt)

The text/code to complete.

Suffix (string) or Suffix (null) (Suffix)
Default: ""

Optional text/code that adds more context for the model. When given a prompt and a suffix, the model will fill in what lies between them. When no suffix is provided, the model will simply complete the text starting from the prompt.

Min Tokens (integer) or Min Tokens (null) (Min Tokens)

The minimum number of tokens to generate in the completion.

Responses

Request samples

Content type
application/json
{
  "model": "codestral-2405",
  "temperature": 1.5,
  "top_p": 1,
  "max_tokens": 0,
  "stream": false,
  "stop": "string",
  "random_seed": 0,
  "prompt": "def",
  "suffix": "return a+b",
  "min_tokens": 0
}

Response samples

Content type
application/json
{
  "id": "cmpl-e5cc70bb28c444948073e77776eb30ef",
  "object": "chat.completion",
  "model": "codestral-latest",
  "usage": {},
  "created": 1702256327,
  "choices": []
}
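To make the prompt/suffix semantics concrete, here is a hedged sketch of a FIM request body: the model generates the code that belongs between `prompt` and `suffix`. The code snippet contents and the stop sequence are illustrative, not mandated by the schema.

```python
import json

# Fill-in-the-middle: the model is asked to produce the body of `add`,
# i.e. the text between `prompt` and `suffix`. Field names follow the
# request-body schema above.
fim_body = {
    "model": "codestral-2405",
    "prompt": "def add(a, b):\n    ",
    "suffix": "\n    return result",
    "temperature": 0.2,
    "max_tokens": 64,
    "stop": ["\ndef "],  # illustrative: stop if a new function definition begins
}

print(json.dumps(fim_body, indent=2))
```

This body is POSTed to the FIM completion endpoint with the same ApiKey Bearer header as the chat endpoint.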

Agents

Agents API.

Agents Completion

Authorizations:
ApiKey
Request Body schema: application/json
required
Max Tokens (integer) or Max Tokens (null) (Max Tokens)

The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length.

stream
boolean (Stream)
Default: false

Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. Otherwise, the server will hold the request open until the timeout or until completion, with the response containing the full result as JSON.

Stop (string) or Array of Stop (strings) (Stop)

Stop generation if this token is detected, or if one of these tokens is detected when an array is provided.

Random Seed (integer) or Random Seed (null) (Random Seed)

The seed to use for random sampling. If set, different calls will generate deterministic results.

required
Array of any (Messages)

The prompt(s) to generate completions for, encoded as a list of dicts with role and content.

object (ResponseFormat)
Array of Tools (objects) or Tools (null) (Tools)
ToolChoice (object) or ToolChoiceEnum (string) (Tool Choice)
Default: "auto"
presence_penalty
number (Presence Penalty) [ -2 .. 2 ]
Default: 0

presence_penalty determines how much the model penalizes the repetition of words or phrases. A higher presence penalty encourages the model to use a wider variety of words and phrases, making the output more diverse and creative.

frequency_penalty
number (Frequency Penalty) [ -2 .. 2 ]
Default: 0

frequency_penalty penalizes the repetition of words based on their frequency in the generated text. A higher frequency penalty discourages the model from repeating words that have already appeared frequently in the output, promoting diversity and reducing repetition.

N (integer) or N (null) (N)

Number of completions to return for each request; input tokens are only billed once.

agent_id
required
string

The ID of the agent to use for this completion.

Responses

Request samples

Content type
application/json
{
  "max_tokens": 0,
  "stream": false,
  "stop": "string",
  "random_seed": 0,
  "messages": [],
  "response_format": {},
  "tools": [],
  "tool_choice": "auto",
  "presence_penalty": 0,
  "frequency_penalty": 0,
  "n": 1,
  "agent_id": "string"
}

Response samples

Content type
application/json
{
  "id": "cmpl-e5cc70bb28c444948073e77776eb30ef",
  "object": "chat.completion",
  "model": "mistral-small-latest",
  "usage": {},
  "created": 1702256327,
  "choices": []
}
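The key difference from Chat Completion is visible in a minimal body sketch: an Agents Completion sends no `model` field, because the agent referenced by `agent_id` fixes the model and instructions server-side. The agent ID below is a placeholder, not a real identifier.

```python
import json

# Sketch of an Agents Completion body. Unlike Chat Completion, there is no
# `model` key: the agent (identified by `agent_id`) determines the model.
agent_body = {
    "agent_id": "ag-00000000-0000-0000-0000-000000000000",  # hypothetical ID
    "messages": [{"role": "user", "content": "Summarize this support ticket."}],
    "max_tokens": 256,
}

print(json.dumps(agent_body, indent=2))
```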

Embeddings

Embeddings API.

Embeddings

Authorizations:
ApiKey
Request Body schema: application/json
required
required
Input (string) or Array of Input (strings) (Input)

Text to embed.

model
required
string (Model)
Default: "mistral-embed"

ID of the model to use.

Encoding Format (string) or Encoding Format (null) (Encoding Format)
Default: "float"

The format to return the embeddings in.

Responses

Request samples

Content type
application/json
{
  "input": [],
  "model": "mistral-embed",
  "encoding_format": "float"
}

Response samples

Content type
application/json
{
  "id": "cmpl-e5cc70bb28c444948073e77776eb30ef",
  "object": "chat.completion",
  "model": "mistral-small-latest",
  "usage": {},
  "data": []
}
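A filled-in request body makes the input shape concrete: `input` accepts either a single string or a list of strings, and each list entry yields one embedding vector in the response `data` array. The sentences below are placeholders.

```python
import json

# Sketch of an Embeddings request body. Two input strings produce two
# vectors in the response's `data` array, in the same order.
embed_body = {
    "model": "mistral-embed",
    "input": [
        "First sentence to embed.",
        "Second sentence to embed.",
    ],
    "encoding_format": "float",
}

print(json.dumps(embed_body, indent=2))
```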

Classifiers

Classifiers API.

Moderations

Authorizations:
ApiKey
Request Body schema: application/json
required
required
Input (string) or Array of Input (strings) (Input)

Text to classify.

Model (string) or Model (null) (Model)

Responses

Request samples

Content type
application/json
{
  "input": "string",
  "model": "string"
}

Response samples

Content type
application/json
{
  "id": "mod-e5cc70bb28c444948073e77776eb30ef",
  "model": "string",
  "results": []
}
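A minimal body sketch for this endpoint: `input` may be a single string or a list of strings, with one entry in the response `results` per input. The model name used here is an assumption for illustration; consult the model list for the actual moderation model ID.

```python
import json

# Sketch of a Moderations request body. The model name is assumed,
# not taken from the schema above.
moderation_body = {
    "input": ["Text to check against the moderation categories."],
    "model": "mistral-moderation-latest",  # assumed model ID
}

print(json.dumps(moderation_body, indent=2))
```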

Moderations Chat

Authorizations:
ApiKey
Request Body schema: application/json
required
required
Array of Input (any) or Array of Input (any) (Input)

Chat to classify

required
Model (string) or Model (null) (Model)

Responses

Request samples

Content type
application/json
{
  "input": [],
  "model": "string"
}

Response samples

Content type
application/json
{
  "id": "mod-e5cc70bb28c444948073e77776eb30ef",
  "model": "string",
  "results": []
}

Files

Files API

Upload File

Upload a file that can be used across various endpoints.

The size of individual files can be a maximum of 512 MB. The Fine-tuning API only supports .jsonl files.

Please contact us if you need to increase these storage limits.

Authorizations:
ApiKey
Request Body schema: multipart/form-data
required
purpose
string (FilePurpose)
Default: "fine-tune"
Enum: "fine-tune" "batch"
file
required
string <binary> (File)

The File object (not file name) to be uploaded. To upload a file and specify a custom file name you should format your request as such:

file=@path/to/your/file.jsonl;filename=custom_name.jsonl

Otherwise, you can just keep the original file name:

file=@path/to/your/file.jsonl

Responses

Response samples

Content type
application/json
{
  "id": "497f6eca-6276-4993-bfeb-53cbbbba6f09",
  "object": "file",
  "bytes": 13000,
  "created_at": 1716963433,
  "filename": "files_upload.jsonl",
  "purpose": "fine-tune",
  "sample_type": "pretrain",
  "num_lines": 0,
  "source": "upload"
}
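Since this endpoint takes multipart/form-data rather than JSON, it can help to see how such a body is assembled. This is a from-scratch sketch using only the standard library; in practice any HTTP client with multipart support does this for you.

```python
import uuid

def build_multipart(purpose: str, filename: str, file_bytes: bytes):
    """Assemble a multipart/form-data body with `purpose` and `file` parts.

    Returns (content_type, body): the Content-Type header value (carrying
    the boundary) and the raw request body.
    """
    boundary = uuid.uuid4().hex
    head = "\r\n".join([
        f"--{boundary}",
        'Content-Disposition: form-data; name="purpose"',
        "",
        purpose,
        f"--{boundary}",
        # filename= here is what sets the custom file name, as described above
        f'Content-Disposition: form-data; name="file"; filename="{filename}"',
        "Content-Type: application/octet-stream",
        "",
        "",
    ]).encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    return f"multipart/form-data; boundary={boundary}", head + file_bytes + tail

content_type, body = build_multipart(
    "fine-tune",
    "custom_name.jsonl",
    b'{"messages": [{"role": "user", "content": "hi"}]}\n',
)
```

The resulting `body` is POSTed to the upload endpoint with `Content-Type: <content_type>` and the usual Bearer header.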

List Files

Returns a list of files that belong to the user's organization.

Authorizations:
ApiKey
query Parameters
page
integer (Page)
Default: 0
page_size
integer (Page Size)
Default: 100
Array of Sample Type (strings) or Sample Type (null) (Sample Type)
Array of Source (strings) or Source (null) (Source)
Search (string) or Search (null) (Search)
FilePurpose (string) or null

Responses

Response samples

Content type
application/json
{
  "data": [],
  "object": "string",
  "total": 0
}

Retrieve File

Returns information about a specific file.

Authorizations:
ApiKey
path Parameters
file_id
required
string (File Id)

Responses

Response samples

Content type
application/json
{
  "id": "497f6eca-6276-4993-bfeb-53cbbbba6f09",
  "object": "file",
  "bytes": 13000,
  "created_at": 1716963433,
  "filename": "files_upload.jsonl",
  "purpose": "fine-tune",
  "sample_type": "pretrain",
  "num_lines": 0,
  "source": "upload",
  "deleted": true
}

Delete File

Delete a file.

Authorizations:
ApiKey
path Parameters
file_id
required
string (File Id)

Responses

Response samples

Content type
application/json
{
  "id": "497f6eca-6276-4993-bfeb-53cbbbba6f09",
  "object": "file",
  "deleted": false
}

Download File

Download a file

Authorizations:
ApiKey
path Parameters
file_id
required
string (File Id)

Responses

Fine Tuning

Fine-tuning API

Get Fine Tuning Jobs

Get a list of fine-tuning jobs for your organization and user.

Authorizations:
ApiKey
query Parameters
page
integer (Page)
Default: 0

The page number of the results to be returned.

page_size
integer (Page Size)
Default: 100

The number of items to return per page.

Model (string) or Model (null) (Model)

The model name used for fine-tuning to filter on. When set, the other results are not displayed.

Created After (string) or Created After (null) (Created After)

The date/time to filter on. When set, the results for previous creation times are not displayed.

created_by_me
boolean (Created By Me)
Default: false

When set, only return results for jobs created by the API caller. Other results are not displayed.

Status (string) or Status (null) (Status)

The current job state to filter on. When set, the other results are not displayed.

Wandb Project (string) or Wandb Project (null) (Wandb Project)

The Weights and Biases project to filter on. When set, the other results are not displayed.

Wandb Name (string) or Wandb Name (null) (Wandb Name)

The Weights and Biases run name to filter on. When set, the other results are not displayed.

Suffix (string) or Suffix (null) (Suffix)

The model suffix to filter on. When set, the other results are not displayed.

Responses

Response samples

Content type
application/json
{
  "data": [],
  "object": "list",
  "total": 0
}
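The filters above are ordinary query parameters, so a request URL can be assembled like this. The path /v1/fine_tuning/jobs and the status value are assumptions for illustration; only the parameter names come from the spec above.

```python
import urllib.parse

# Sketch: build the query string for listing fine-tuning jobs.
# Parameter names match the query-parameter list above; the path
# and status value are assumed.
params = {
    "page": 0,
    "page_size": 10,
    "status": "RUNNING",       # assumed status value
    "created_by_me": "true",
}
url = (
    "https://api.mistral.ai/v1/fine_tuning/jobs?"  # assumed path
    + urllib.parse.urlencode(params)
)
print(url)
```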

Create Fine Tuning Job

Create a new fine-tuning job; it will be queued for processing.

Authorizations:
ApiKey
query Parameters
Dry Run (boolean) or Dry Run (null) (Dry Run)
  • If true, the job is not spawned; instead, the query returns a handful of useful metadata for the user to perform sanity checks (see the LegacyJobMetadataOut response).
  • Otherwise, the job is started and the query returns the job ID along with some of the input parameters (see the JobOut response).
Request Body schema: application/json
required
model
required
string (FineTuneableModel)
Enum: "open-mistral-7b" "mistral-small-latest" "codestral-latest" "mistral-large-latest" "open-mistral-nemo"

The name of the model to fine-tune.

Array of objects (Training Files)
Default: []
Array of Validation Files (strings) or Validation Files (null) (Validation Files)

A list containing the IDs of uploaded files that contain validation data. If you provide these files, the data is used to generate validation metrics periodically during fine-tuning. These metrics can be viewed in checkpoints when getting the status of a running fine-tuning job. The same data should not be present in both train and validation files.

required
object (TrainingParametersIn)

The fine-tuning hyperparameter settings used in a fine-tune job.

Suffix (string) or Suffix (null) (Suffix)

A string that will be added to your fine-tuning model name. For example, a suffix of "my-great-model" would produce a model name like ft:open-mistral-7b:my-great-model:xxx...

Array of Integrations (any) or Integrations (null) (Integrations)

A list of integrations to enable for your fine-tuning job.

Array of any (Repositories)
Default: []
auto_start
boolean (Auto Start)

This field will be required in a future release.

Responses

Request samples

Content type
application/json
{
  "model": "open-mistral-7b",
  "training_files": [],
  "validation_files": [],
  "hyperparameters": {},
  "suffix": "string",
  "integrations": [],
  "repositories": [],
  "auto_start": true
}

Response samples

Content type
application/json
Example
{
  "id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
  "auto_start": true,
  "hyperparameters": {},
  "model": "open-mistral-7b",
  "status": "QUEUED",
  "job_type": "string",
  "created_at": 0,
  "modified_at": 0,
  "training_files": [],
  "validation_files": [],
  "object": "job",
  "fine_tuned_model": "string",
  "suffix": "string",
  "integrations": [],
  "trained_tokens": 0,
  "repositories": [],
  "metadata": {}
}
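A filled-in body sketch can make the shape of a job submission clearer. The file ID is a placeholder, and the hyperparameter field names (`training_steps`, `learning_rate`) and the `training_files` entry shape are assumptions; consult the TrainingParametersIn and Training Files schemas for the exact fields.

```python
import json

# Sketch of a Create Fine Tuning Job body. File IDs are placeholders,
# and the hyperparameter names are illustrative assumptions.
job_body = {
    "model": "open-mistral-7b",
    "training_files": [
        {"file_id": "497f6eca-6276-4993-bfeb-53cbbbba6f09"}  # assumed entry shape
    ],
    "validation_files": [],
    "hyperparameters": {
        "training_steps": 100,   # assumed field name
        "learning_rate": 1e-4,   # assumed field name
    },
    "suffix": "my-great-model",
    "auto_start": False,  # validate first, then call the start endpoint
}

print(json.dumps(job_body, indent=2))
```

With `dry_run=true` in the query string, the same body returns sanity-check metadata instead of spawning the job.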

Get Fine Tuning Job

Get a fine-tuning job's details by its UUID.

Authorizations:
ApiKey
path Parameters
job_id
required
string <uuid> (Job Id)

The ID of the job to analyse.

Responses

Response samples

Content type
application/json
{
  "id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
  "auto_start": true,
  "hyperparameters": {},
  "model": "open-mistral-7b",
  "status": "QUEUED",
  "job_type": "string",
  "created_at": 0,
  "modified_at": 0,
  "training_files": [],
  "validation_files": [],
  "object": "job",
  "fine_tuned_model": "string",
  "suffix": "string",
  "integrations": [],
  "trained_tokens": 0,
  "repositories": [],
  "metadata": {},
  "events": [],
  "checkpoints": []
}

Cancel Fine Tuning Job

Request the cancellation of a fine-tuning job.

Authorizations:
ApiKey
path Parameters
job_id
required
string <uuid> (Job Id)

The ID of the job to cancel.

Responses

Response samples

Content type
application/json
{
  "id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
  "auto_start": true,
  "hyperparameters": {},
  "model": "open-mistral-7b",
  "status": "QUEUED",
  "job_type": "string",
  "created_at": 0,
  "modified_at": 0,
  "training_files": [],
  "validation_files": [],
  "object": "job",
  "fine_tuned_model": "string",
  "suffix": "string",
  "integrations": [],
  "trained_tokens": 0,
  "repositories": [],
  "metadata": {},
  "events": [],
  "checkpoints": []
}

Start Fine Tuning Job

Request the start of a validated fine-tuning job.

Authorizations:
ApiKey
path Parameters
job_id
required
string <uuid> (Job Id)

Responses

Response samples

Content type
application/json
{
  "id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
  "auto_start": true,
  "hyperparameters": {},
  "model": "open-mistral-7b",
  "status": "QUEUED",
  "job_type": "string",
  "created_at": 0,
  "modified_at": 0,
  "training_files": [],
  "validation_files": [],
  "object": "job",
  "fine_tuned_model": "string",
  "suffix": "string",
  "integrations": [],
  "trained_tokens": 0,
  "repositories": [],
  "metadata": {},
  "events": [],
  "checkpoints": []
}

Models

Model Management API

List Models

List all models available to the user.

Authorizations:
ApiKey

Responses

Response samples

Content type
application/json
{
  "object": "list",
  "data": []
}

Retrieve Model

Retrieve information about a model.

Authorizations:
ApiKey
path Parameters
model_id
required
string (Model Id)
Example: ft:open-mistral-7b:587a6b29:20240514:7e773925

The ID of the model to retrieve.

Responses

Response samples

Content type
application/json
Example
{
  "id": "string",
  "object": "model",
  "created": 0,
  "owned_by": "mistralai",
  "capabilities": {},
  "name": "string",
  "description": "string",
  "max_context_length": 32768,
  "aliases": [],
  "deprecation": "2019-08-24T14:15:22Z",
  "default_model_temperature": 0,
  "type": "base"
}

Delete Model

Delete a fine-tuned model.

Authorizations:
ApiKey
path Parameters
model_id
required
string (Model Id)
Example: ft:open-mistral-7b:587a6b29:20240514:7e773925

The ID of the model to delete.

Responses

Response samples

Content type
application/json
{
  "id": "ft:open-mistral-7b:587a6b29:20240514:7e773925",
  "object": "model",
  "deleted": true
}

Update Fine Tuned Model

Update a model name or description.

Authorizations:
ApiKey
path Parameters
model_id
required
string (Model Id)
Example: ft:open-mistral-7b:587a6b29:20240514:7e773925

The ID of the model to update.

Request Body schema: application/json
required
Name (string) or Name (null) (Name)
Description (string) or Description (null) (Description)

Responses

Request samples

Content type
application/json
{
  • "name": "string",
  • "description": "string"
}

Response samples

Content type
application/json
{
  "id": "string",
  "object": "model",
  "created": 0,
  "owned_by": "string",
  "root": "string",
  "archived": true,
  "name": "string",
  "description": "string",
  "capabilities": {},
  "max_context_length": 32768,
  "aliases": [],
  "job": "4bbaedb0-902b-4b27-8218-8f40d3470a54"
}

Archive Fine Tuned Model

Archive a fine-tuned model.

Authorizations:
ApiKey
path Parameters
model_id
required
string (Model Id)
Example: ft:open-mistral-7b:587a6b29:20240514:7e773925

The ID of the model to archive.

Responses

Response samples

Content type
application/json
{
  "id": "string",
  "object": "model",
  "archived": true
}

Unarchive Fine Tuned Model

Unarchive a fine-tuned model.

Authorizations:
ApiKey
path Parameters
model_id
required
string (Model Id)
Example: ft:open-mistral-7b:587a6b29:20240514:7e773925

The ID of the model to unarchive.

Responses

Response samples

Content type
application/json
{
  "id": "string",
  "object": "model",
  "archived": false
}

Batch

Batch API

Get Batch Jobs

Get a list of batch jobs for your organization and user.

Authorizations:
ApiKey
query Parameters
page
integer (Page)
Default: 0
page_size
integer (Page Size)
Default: 100
Model (string) or Model (null) (Model)
Metadata (object) or Metadata (null) (Metadata)
Created After (string) or Created After (null) (Created After)
created_by_me
boolean (Created By Me)
Default: false
BatchJobStatus (string) or null

Responses

Response samples

Content type
application/json
{
  "data": [],
  "object": "list",
  "total": 0
}

Create Batch Job

Create a new batch job; it will be queued for processing.

Authorizations:
ApiKey
Request Body schema: application/json
required
input_files
required
Array of strings <uuid> (Input Files) [ items <uuid > ]
endpoint
required
string (ApiEndpoint)
Enum: "/v1/chat/completions" "/v1/embeddings" "/v1/fim/completions" "/v1/moderations"
model
required
string (Model)
Metadata (object) or Metadata (null) (Metadata)
timeout_hours
integer (Timeout Hours)
Default: 24

Responses

Request samples

Content type
application/json
{
  "input_files": [],
  "endpoint": "/v1/chat/completions",
  "model": "string",
  "metadata": {},
  "timeout_hours": 24
}

Response samples

Content type
application/json
{
  "id": "string",
  "object": "batch",
  "input_files": [],
  "metadata": {},
  "endpoint": "string",
  "model": "string",
  "output_file": "c7c9cb17-f818-4ee3-85de-0d2f8954882c",
  "error_file": "6b79e6a4-c3aa-4da1-8fb4-9e2520d26bfa",
  "errors": [],
  "status": "QUEUED",
  "created_at": 0,
  "total_requests": 0,
  "completed_requests": 0,
  "succeeded_requests": 0,
  "failed_requests": 0,
  "started_at": 0,
  "completed_at": 0
}
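The batch workflow ties together the Files and Batch endpoints: each line of the uploaded .jsonl input file is one request against the chosen `endpoint`. A sketch of both pieces follows; the per-line wrapper shown (`custom_id` plus a request `body`) is an assumption about the input-file format, so check the batch documentation before relying on it.

```python
import json

# One line of the .jsonl input file: a hypothetical wrapper pairing a
# caller-chosen ID with a chat-completion request body.
input_line = json.dumps({
    "custom_id": "req-001",  # assumed wrapper field
    "body": {                # assumed wrapper field
        "messages": [{"role": "user", "content": "Hello"}],
        "max_tokens": 32,
    },
})

# The Create Batch Job body. The file ID is a placeholder for a file
# previously uploaded with purpose "batch".
batch_body = {
    "input_files": ["497f6eca-6276-4993-bfeb-53cbbbba6f09"],
    "endpoint": "/v1/chat/completions",
    "model": "mistral-small-latest",
    "timeout_hours": 24,
}

print(input_line)
print(json.dumps(batch_body, indent=2))
```

Once the job completes, `output_file` and `error_file` in the response are file IDs that can be fetched via the Download File endpoint.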

Get Batch Job

Get a batch job's details by its UUID.

Authorizations:
ApiKey
path Parameters
job_id
required
string <uuid> (Job Id)

Responses

Response samples

Content type
application/json
{
  "id": "string",
  "object": "batch",
  "input_files": [],
  "metadata": {},
  "endpoint": "string",
  "model": "string",
  "output_file": "c7c9cb17-f818-4ee3-85de-0d2f8954882c",
  "error_file": "6b79e6a4-c3aa-4da1-8fb4-9e2520d26bfa",
  "errors": [],
  "status": "QUEUED",
  "created_at": 0,
  "total_requests": 0,
  "completed_requests": 0,
  "succeeded_requests": 0,
  "failed_requests": 0,
  "started_at": 0,
  "completed_at": 0
}

Cancel Batch Job

Request the cancellation of a batch job.

Authorizations:
ApiKey
path Parameters
job_id
required
string <uuid> (Job Id)

Responses

Response samples

Content type
application/json
{
  "id": "string",
  "object": "batch",
  "input_files": [],
  "metadata": {},
  "endpoint": "string",
  "model": "string",
  "output_file": "c7c9cb17-f818-4ee3-85de-0d2f8954882c",
  "error_file": "6b79e6a4-c3aa-4da1-8fb4-9e2520d26bfa",
  "errors": [],
  "status": "QUEUED",
  "created_at": 0,
  "total_requests": 0,
  "completed_requests": 0,
  "succeeded_requests": 0,
  "failed_requests": 0,
  "started_at": 0,
  "completed_at": 0
}