73 changes: 46 additions & 27 deletions static/llms-full.txt
@@ -223,6 +223,16 @@ Source: https://docs.mistral.ai/api/#tag/chat_classifications_v1_chat_classifica

post /v1/chat/classifications

# Create Transcription
Source: https://docs.mistral.ai/api/#tag/audio_api_v1_transcriptions_post

post /v1/audio/transcriptions

# Create streaming transcription (SSE)
Source: https://docs.mistral.ai/api/#tag/audio_api_v1_transcriptions_post_stream

post /v1/audio/transcriptions#stream

# List all libraries you have access to.
Source: https://docs.mistral.ai/api/#tag/libraries_list_v1

@@ -5305,7 +5315,7 @@ console.log(transcriptionResponse);
curl --location 'https://api.mistral.ai/v1/audio/transcriptions' \
--header "x-api-key: $MISTRAL_API_KEY" \
--form 'file=@"/path/to/file/audio.mp3"' \
--form 'model="voxtral-mini-2507"' \
--form 'model="voxtral-mini-2507"'
```
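For reference, the same multipart call can be assembled in Python. This is a minimal sketch mirroring the curl command above; the `build_transcription_request` helper and the dummy key are illustrative, not part of the official SDK:

```python
# Assemble the same multipart request the curl command above sends.
# The helper name and the dummy API key are illustrative only.

def build_transcription_request(audio_bytes: bytes, filename: str,
                                model: str, api_key: str):
    url = "https://api.mistral.ai/v1/audio/transcriptions"
    headers = {"x-api-key": api_key}          # same header as the curl call
    files = {"file": (filename, audio_bytes, "audio/mpeg")}
    data = {"model": model}
    return url, headers, files, data

url, headers, files, data = build_transcription_request(
    b"", "audio.mp3", "voxtral-mini-2507", "dummy-key"
)
# then e.g.: requests.post(url, headers=headers, files=files, data=data)
```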

**With Language defined**
@@ -5571,7 +5581,7 @@ client = Mistral(api_key=api_key)
transcription_response = client.audio.transcriptions.complete(
model=model,
file_url="https://docs.mistral.ai/audio/obama.mp3",
timestamp_granularities="segment"
timestamp_granularities=["segment"]
)

# Print the contents
@@ -5593,7 +5603,7 @@ const client = new Mistral({ apiKey: apiKey });
const transcriptionResponse = await client.audio.transcriptions.complete({
model: "voxtral-mini-latest",
fileUrl: "https://docs.mistral.ai/audio/obama.mp3",
timestamp_granularities: "segment"
timestamp_granularities: ["segment"]
});

// Log the contents
@@ -5607,7 +5617,7 @@ console.log(transcriptionResponse);
curl --location 'https://api.mistral.ai/v1/audio/transcriptions' \
--header "x-api-key: $MISTRAL_API_KEY" \
--form 'file_url="https://docs.mistral.ai/audio/obama.mp3"' \
--form 'model="voxtral-mini-latest"'
--form 'model="voxtral-mini-latest"' \
--form 'timestamp_granularities="segment"'
```
</TabItem>
@@ -13088,7 +13098,7 @@ Source: https://docs.mistral.ai/docs/capabilities/vision

Vision capabilities enable models to analyze images and provide insights based on visual content in addition to text. This multimodal approach opens up new possibilities for applications that require both textual and visual understanding.

For more specific use cases regarding document parsing and data extraction we recommend taking a look at our Document AI stack [here](../OCR/document_ai_overview).
For more specific use cases regarding document parsing and data extraction we recommend taking a look at our Document AI stack [here](../document_ai/document_ai_overview).

## Models with Vision Capabilities:
- Pixtral 12B (`pixtral-12b-latest`)
@@ -13739,7 +13749,10 @@ in two ways:
This page focuses on the MaaS offering, where the following models are available:

- Mistral Large (24.11, 24.07)
- Mistral Small (24.09)
- Mistral Medium (25.05)
- Mistral Small (25.03)
- Mistral Document AI (25.05)
- Mistral OCR (25.05)
- Ministral 3B (24.10)
- Mistral Nemo

@@ -13843,9 +13856,11 @@ To run the examples below, set the following environment variables:
## Going further

For more details and examples, refer to the following resources:
- [Release blog post for Mistral Document AI](https://techcommunity.microsoft.com/blog/aiplatformblog/deepening-our-partnership-with-mistral-ai-on-azure-ai-foundry/4434656)
- [Release blog post for Mistral Large 2 and Mistral NeMo](https://techcommunity.microsoft.com/t5/ai-machine-learning-blog/ai-innovation-continues-introducing-mistral-large-2-and-mistral/ba-p/4200181).
- [Azure documentation for MaaS deployment of Mistral models](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral).
- [Azure ML examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/sdk/python/foundation-models/mistral) with several Mistral-based samples.
- [Azure AI Foundry GitHub repository](https://github.com/azure-ai-foundry/foundry-samples/tree/main/samples/mistral)


[IBM watsonx.ai]
@@ -14089,7 +14104,7 @@ To run the examples below you will need to set the following environment variables

Codestral can be queried using an additional completion mode called fill-in-the-middle (FIM).
For more information, see the
[code generation section](../../../capabilities/code_generation/#fill-in-the-middle-endpoint).
[code generation section](../../../capabilities/code_generation).


<Tabs>
@@ -14390,7 +14405,7 @@ for more details.

Codestral can be queried using an additional completion mode called fill-in-the-middle (FIM).
For more information, see the
[code generation section](../../../capabilities/code_generation/#fill-in-the-middle-endpoint).
[code generation section](../../../capabilities/code_generation).


<Tabs>
@@ -15693,7 +15708,7 @@ The [Mistral AI APIs](https://console.mistral.ai/) empower LLM applications via:

- [Text generation](/capabilities/completion), enables streaming and provides the ability to display partial model results in real-time
- [Vision](/capabilities/vision), enables the analysis of images and provides insights based on visual content in addition to text.
- [OCR](/capabilities/OCR/basic_ocr), allows the extraction of interleaved text and images from documents.
- [OCR](/capabilities/document_ai/basic_ocr), allows the extraction of interleaved text and images from documents.
- [Code generation](/capabilities/code_generation), empowers code generation tasks, including fill-in-the-middle and code completion.
- [Embeddings](/capabilities/embeddings/overview), useful for RAG where it represents the meaning of text as a list of numbers.
- [Function calling](/capabilities/function_calling), enables Mistral models to connect to external tools.
@@ -16198,7 +16213,7 @@ Mistral provides two types of models: open models and premier models.

| Model | Weight availability|Available via API| Description | Max Tokens| API Endpoints|Version|
|--------------------|:--------------------:|:--------------------:|:--------------------:|:--------------------:|:--------------------:|:--------------------:|
| Mistral Medium 3 | | :heavy_check_mark: | Our frontier-class multimodal model released May 2025. Learn more in our [blog post](https://mistral.ai/news/mistral-medium-3/) | 128k | `mistral-medium-2505` | 25.05|
| Mistral Medium 3.1 | | :heavy_check_mark: | Our frontier-class multimodal model released August 2025, with improved tone and performance. Read more about Medium 3 in our [blog post](https://mistral.ai/news/mistral-medium-3/) | 128k | `mistral-medium-2508` | 25.08|
| Magistral Medium 1.1 | | :heavy_check_mark: | Our frontier-class reasoning model released July 2025. | 40k | `magistral-medium-2507` | 25.07|
| Codestral 2508 | | :heavy_check_mark: | Our cutting-edge language model for coding released end of July 2025, Codestral specializes in low-latency, high-frequency tasks such as fill-in-the-middle (FIM), code correction and test generation. Learn more in our [blog post](https://mistral.ai/news/codestral-25-08/) | 256k | `codestral-2508` | 25.08|
| Voxtral Mini Transcribe | | :heavy_check_mark: | An efficient audio input model, fine-tuned and optimized for transcription purposes only. | | `voxtral-mini-2507` via `audio/transcriptions` | 25.07|
@@ -16207,6 +16222,7 @@ Mistral provides two types of models: open models and premier models.
| Magistral Medium 1 | | :heavy_check_mark: | Our first frontier-class reasoning model released June 2025. Learn more in our [blog post](https://mistral.ai/news/magistral/) | 40k | `magistral-medium-2506` | 25.06|
| Ministral 3B | | :heavy_check_mark: | World’s best edge model. Learn more in our [blog post](https://mistral.ai/news/ministraux/) | 128k | `ministral-3b-2410` | 24.10|
| Ministral 8B | :heavy_check_mark: <br/> [Mistral Research License](https://mistral.ai/licenses/MRL-0.1.md)| :heavy_check_mark: |Powerful edge model with extremely high performance/price ratio. Learn more in our [blog post](https://mistral.ai/news/ministraux/) | 128k | `ministral-8b-2410` | 24.10|
| Mistral Medium 3 | | :heavy_check_mark: | Our frontier-class multimodal model released May 2025. Learn more in our [blog post](https://mistral.ai/news/mistral-medium-3/) | 128k | `mistral-medium-2505` | 25.05|
| Codestral 2501 | | :heavy_check_mark: | Our cutting-edge language model for coding with the second version released January 2025, Codestral specializes in low-latency, high-frequency tasks such as fill-in-the-middle (FIM), code correction and test generation. Learn more in our [blog post](https://mistral.ai/news/codestral-2501/) | 256k | `codestral-2501` | 25.01|
| Mistral Large 2.1 |:heavy_check_mark: <br/> [Mistral Research License](https://mistral.ai/licenses/MRL-0.1.md)| :heavy_check_mark: | Our top-tier large model for high-complexity tasks with the latest version released November 2024. Learn more in our [blog post](https://mistral.ai/news/pixtral-large/) | 128k | `mistral-large-2411` | 24.11|
| Pixtral Large |:heavy_check_mark: <br/> [Mistral Research License](https://mistral.ai/licenses/MRL-0.1.md)| :heavy_check_mark: | Our first frontier-class multimodal model released November 2024. Learn more in our [blog post](https://mistral.ai/news/pixtral-large/) | 128k | `pixtral-large-2411` | 24.11|
@@ -16241,8 +16257,8 @@ Additionally, be prepared for the deprecation of certain endpoints in the coming
Here are the details of the available versions:
- `magistral-medium-latest`: currently points to `magistral-medium-2507`.
- `magistral-small-latest`: currently points to `magistral-small-2507`.
- `mistral-medium-latest`: currently points to `mistral-medium-2505`.
- `mistral-large-latest`: currently points to `mistral-large-2411`.
- `mistral-medium-latest`: currently points to `mistral-medium-2508`.
- `mistral-large-latest`: currently points to `mistral-medium-2508`, previously `mistral-large-2411`.
- `pixtral-large-latest`: currently points to `pixtral-large-2411`.
- `mistral-moderation-latest`: currently points to `mistral-moderation-2411`.
- `ministral-3b-latest`: currently points to `ministral-3b-2410`.
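As a sketch, the alias list above can be restated as a lookup table for resolving a `-latest` name to its pinned version before a call. Note this snapshot goes stale as aliases are re-pointed, and the `resolve_model` helper is illustrative, not part of the SDK:

```python
# Snapshot of the -latest alias -> pinned-version mapping listed above.
# This table will drift as aliases move; treat it as illustrative only.
LATEST_ALIASES = {
    "magistral-medium-latest": "magistral-medium-2507",
    "magistral-small-latest": "magistral-small-2507",
    "mistral-medium-latest": "mistral-medium-2508",
    "mistral-large-latest": "mistral-medium-2508",  # previously mistral-large-2411
    "pixtral-large-latest": "pixtral-large-2411",
    "mistral-moderation-latest": "mistral-moderation-2411",
    "ministral-3b-latest": "ministral-3b-2410",
}

def resolve_model(name: str) -> str:
    """Return the pinned version for a -latest alias, or the name unchanged."""
    return LATEST_ALIASES.get(name, name)
```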
@@ -18984,6 +19000,24 @@ Here is an [example notebook](https://github.com/mistralai/cookbook/blob/main/th

<img src="/img/guides/obs_mlflow.png" alt="drawing" width="700"/>

### Integration with Maxim

Maxim AI provides comprehensive observability for your Mistral-based AI applications. With Maxim's one-line integration, you can easily trace and analyse LLM calls, metrics, and more.

**Pros:**

* Performance Analytics: Track latency, tokens consumed, and costs
* Advanced Visualisation: Understand agent trajectories through intuitive dashboards

**Mistral integration example:**

* Learn how to integrate Maxim observability with the Mistral SDK in a single line of code: [Colab Notebook](https://github.com/mistralai/cookbook/blob/main/third_party/Maxim/cookbook_maxim_mistral_integration.ipynb)

Maxim documentation on using Mistral as an LLM provider with Maxim as the logger: [Docs Link](https://www.getmaxim.ai/docs/sdk/python/integrations/mistral/mistral)


![Gif](https://raw.githubusercontent.com/akmadan/platform-docs-public/docs/observability-maxim-provider/static/img/guides/maxim_traces.gif)


[Other resources]
Source: https://docs.mistral.ai/docs/guides/other-resources
@@ -20736,18 +20770,3 @@ Mistral AI's LLM API endpoints charge based on the number of tokens in the input

To help you estimate your costs, our tokenization API makes it easy to count the number of tokens in your text. Simply run `len(tokens)` as shown in the example above to get the total number of tokens in the text, which you can then use to estimate your cost based on our pricing information.
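That estimate can be sketched as a one-line calculation. The per-million-token price below is a placeholder, not an actual Mistral price; check the pricing page for real rates:

```python
# Estimate request cost from a token count.
# PRICE_PER_MILLION is a placeholder value, not an actual Mistral price.
PRICE_PER_MILLION = 2.00  # USD per 1M input tokens (hypothetical rate)

def estimate_cost(num_tokens: int,
                  price_per_million: float = PRICE_PER_MILLION) -> float:
    """Cost in USD for `num_tokens` tokens at the given per-million rate."""
    return num_tokens / 1_000_000 * price_per_million

# e.g. a 12,500-token prompt at the placeholder rate
cost = estimate_cost(12_500)
```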


[Mistral AI Crawlers]
Source: https://docs.mistral.ai/docs/robots

## Mistral AI Crawlers

Mistral AI employs web crawlers ("robots") and user agents to execute tasks for its products, either automatically or at a user's request. To help webmasters manage how their sites and content interact with AI, Mistral AI uses specific robots.txt tags.

### MistralAI-User

MistralAI-User is for user actions in LeChat. When users ask LeChat a question, it may visit a web page to help answer and include a link to the source in its response. MistralAI-User governs which sites these user requests can be made to. It is not used for crawling the web in any automatic fashion, nor to crawl content for generative AI training.

Full user-agent string: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; MistralAI-User/1.0; +https://docs.mistral.ai/robots)

Published IP addresses: https://mistral.ai/mistralai-user-ips.json
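Since MistralAI-User is governed per site, a webmaster who wants to opt out entirely can use an ordinary robots.txt rule. This is standard Robots Exclusion Protocol syntax, sketched here rather than taken from Mistral's documentation:

```text
# robots.txt — block MistralAI-User from fetching any page on this site
User-agent: MistralAI-User
Disallow: /
```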