From 062343652fa3c7e8f58e96890e9b66d989be6eac Mon Sep 17 00:00:00 2001 From: Ravi Theja Desetty Date: Mon, 1 Sep 2025 21:58:43 +0530 Subject: [PATCH 1/3] Update llms llms-full.txt files --- static/llms-full.txt | 223 +++++++++++++++++++++++-------------------- static/llms.txt | 150 ++++++++++++++--------------- 2 files changed, 196 insertions(+), 177 deletions(-) diff --git a/static/llms-full.txt b/static/llms-full.txt index 49b25d4c..40318011 100644 --- a/static/llms-full.txt +++ b/static/llms-full.txt @@ -223,6 +223,16 @@ Source: https://docs.mistral.ai/api/#tag/chat_classifications_v1_chat_classifica post /v1/chat/classifications +# Create Transcription +Source: https://docs.mistral.ai/api/#tag/audio_api_v1_transcriptions_post + +post /v1/audio/transcriptions + +# Create streaming transcription (SSE) +Source: https://docs.mistral.ai/api/#tag/audio_api_v1_transcriptions_post_stream + +post /v1/audio/transcriptions#stream + # List all libraries you have access to. Source: https://docs.mistral.ai/api/#tag/libraries_list_v1 @@ -314,7 +324,7 @@ Source: https://docs.mistral.ai/api/#tag/libraries_documents_reprocess_v1 post /v1/libraries/{library_id}/documents/{document_id}/reprocess [Agents & Conversations] -Source: https://docs.mistral.ai/docs/agents/agents_and_conversations +Source: https://docs.mistral.ai/docs/.docs/agents/agents_and_conversations ### Objects @@ -1440,7 +1450,7 @@ data: {"type":"conversation.response.done","usage":{"prompt_tokens":18709,"total [Agents Function Calling] -Source: https://docs.mistral.ai/docs/agents/agents_function_calling +Source: https://docs.mistral.ai/docs/.docs/agents/agents_function_calling The core of an agent relies on its tool usage capabilities, enabling it to use and call tools and workflows depending on the task it must accomplish. 
@@ -1883,7 +1893,7 @@ curl --location "https://api.mistral.ai/v1/conversations/" \ [Agents Introduction] -Source: https://docs.mistral.ai/docs/agents/agents_introduction +Source: https://docs.mistral.ai/docs/.docs/agents/agents_introduction ## What are AI agents? @@ -1937,7 +1947,7 @@ For more information and guides on how to use our Agents, we have the following [Code Interpreter] -Source: https://docs.mistral.ai/docs/agents/connectors/code_interpreter +Source: https://docs.mistral.ai/docs/.docs/agents/connectors/code_interpreter Code Interpreter adds the capability to safely execute code in an isolated container, this built-in [connector](../connectors) tool allows Agents to run code at any point on demand, practical to draw graphs, data analysis, mathematical operations, code validation, and much more. @@ -2169,7 +2179,7 @@ There are 3 main entries in the `outputs` of our request: [Connectors Overview] -Source: https://docs.mistral.ai/docs/agents/connectors/connectors_overview +Source: https://docs.mistral.ai/docs/.docs/agents/connectors/connectors_overview Connectors are tools that Agents can call at any given point. They are deployed and ready for the agents to leverage to answer questions on demand. @@ -2278,7 +2288,7 @@ Currently, our API has 4 built-in Connector tools, here you can find how to use [Document Library] -Source: https://docs.mistral.ai/docs/agents/connectors/document_library +Source: https://docs.mistral.ai/docs/.docs/agents/connectors/document_library Document Library is a built-in [connector](../connectors) tool that enables agents to access documents from Mistral Cloud. @@ -3209,7 +3219,7 @@ For more information regarding the use of citations, you can find more [here](.. 
[Image Generation] -Source: https://docs.mistral.ai/docs/agents/connectors/image_generation +Source: https://docs.mistral.ai/docs/.docs/agents/connectors/image_generation Image Generation is a built-in [connector](../connectors) tool that enables agents to generate images of all kinds and forms. @@ -3576,7 +3586,7 @@ curl --location "https://api.mistral.ai/v1/files//content" \ [Websearch] -Source: https://docs.mistral.ai/docs/agents/connectors/websearch +Source: https://docs.mistral.ai/docs/.docs/agents/connectors/websearch Websearch is the capability to browse the web in search of information, this tool does not only fix the limitations of models of not being up to date due to their training data, but also allows them to actually retrieve recent information or access specific websites. @@ -3837,7 +3847,7 @@ For more information regarding the use of citations, you can find more [here](.. [Agents Handoffs] -Source: https://docs.mistral.ai/docs/agents/handoffs +Source: https://docs.mistral.ai/docs/.docs/agents/handoffs When creating and using Agents, often with access to specific tools, there are moments where it is desired to call other Agents mid-action. To elaborate and engineer workflows for diverse tasks that you may want automated, this ability to give Agents tasks or hand over a conversation to other agents is called **Handoffs**. @@ -4341,7 +4351,7 @@ content=[ToolFileChunk(tool='code_interpreter', file_id='40420c94-5f99-477f-8891 [MCP] -Source: https://docs.mistral.ai/docs/agents/mcp +Source: https://docs.mistral.ai/docs/.docs/agents/mcp The Model Context Protocol (MCP) is an open standard designed to streamline the integration of AI models with various data sources and tools. By providing a standardized interface, MCP enables seamless and secure connections, allowing AI systems to access and utilize contextual information efficiently. It simplifies the development process, making it easier to build robust and interconnected AI applications. 
@@ -4753,7 +4763,7 @@ Here is a brief example of how to stream conversations: [Audio & Transcription] -Source: https://docs.mistral.ai/docs/capabilities/audio_and_transcription +Source: https://docs.mistral.ai/docs/.docs/capabilities/audio_and_transcription Audio input capabilities enable models to chat and understand audio directly, this can be used for both chat use cases via audio or for optimal transcription purposes. @@ -5305,7 +5315,7 @@ console.log(transcriptionResponse); curl --location 'https://api.mistral.ai/v1/audio/transcriptions' \ --header "x-api-key: $MISTRAL_API_KEY" \ --form 'file=@"/path/to/file/audio.mp3"' \ - --form 'model="voxtral-mini-2507"' \ + --form 'model="voxtral-mini-2507"' ``` **With Language defined** @@ -5571,7 +5581,7 @@ client = Mistral(api_key=api_key) transcription_response = client.audio.transcriptions.complete( model=model, file_url="https://docs.mistral.ai/audio/obama.mp3", - timestamp_granularities="segment" + timestamp_granularities=["segment"] ) # Print the contents @@ -5593,7 +5603,7 @@ const client = new Mistral({ apiKey: apiKey }); const transcriptionResponse = await client.audio.transcriptions.complete({ model: "voxtral-mini-latest", fileUrl: "https://docs.mistral.ai/audio/obama.mp3", - timestamp_granularities: "segment" + timestamp_granularities: ["segment"] }); // Log the contents @@ -5607,7 +5617,7 @@ console.log(transcriptionResponse); curl --location 'https://api.mistral.ai/v1/audio/transcriptions' \ --header "x-api-key: $MISTRAL_API_KEY" \ --form 'file_url="https://docs.mistral.ai/audio/obama.mp3"' \ ---form 'model="voxtral-mini-latest"' +--form 'model="voxtral-mini-latest"' \ --form 'timestamp_granularities="segment"' ``` @@ -5849,7 +5859,7 @@ Here are some tips if you need to handle longer audio files: [Batch Inference] -Source: https://docs.mistral.ai/docs/capabilities/batch_inference +Source: https://docs.mistral.ai/docs/.docs/capabilities/batch_inference ## Prepare and upload your batch @@ -6308,7 +6318,7 @@ 
Yes, due to high throughput and concurrent processing, batches may slightly exce [Citations and References] -Source: https://docs.mistral.ai/docs/capabilities/citations_and_references +Source: https://docs.mistral.ai/docs/.docs/capabilities/citations_and_references Citations enable models to ground their responses and provide references, making them a powerful feature for Retrieval-Augmented Generation (RAG) and agentic applications. This feature allows the model to provide the source of the information extracted from a document or chunk of data from a tool call. @@ -6500,7 +6510,7 @@ This template will help get started with web search and document grounding with [Coding] -Source: https://docs.mistral.ai/docs/capabilities/coding +Source: https://docs.mistral.ai/docs/.docs/capabilities/coding LLMs are powerful tools for text generation, and they also show great performance across coding tasks: code completion, code generation, and agentic tool use for semi-automated software development. @@ -7074,7 +7084,7 @@ For more information visit the [Cline github repo](https://github.com/cline/clin [Annotations] -Source: https://docs.mistral.ai/docs/capabilities/document_ai/annotations +Source: https://docs.mistral.ai/docs/.docs/capabilities/document_ai/annotations # Annotations @@ -7850,7 +7860,7 @@ A: When using Document Annotations, the file cannot have more than 8 pages. BBox [Basic OCR] -Source: https://docs.mistral.ai/docs/capabilities/document_ai/basic_ocr +Source: https://docs.mistral.ai/docs/.docs/capabilities/document_ai/basic_ocr ## Document AI OCR processor @@ -8761,7 +8771,7 @@ A: Yes, there are certain limitations for the OCR API.
Uploaded document files m [Document AI] -Source: https://docs.mistral.ai/docs/capabilities/document_ai/document_ai_overview +Source: https://docs.mistral.ai/docs/.docs/capabilities/document_ai/document_ai_overview # Mistral Document AI @@ -8786,7 +8796,7 @@ Using `client.ocr.process` as the entry point, you can access the following serv [Document QnA] -Source: https://docs.mistral.ai/docs/capabilities/document_ai/document_qna +Source: https://docs.mistral.ai/docs/.docs/capabilities/document_ai/document_qna # Document AI QnA @@ -8986,7 +8996,7 @@ A: Yes, there are certain limitations for the Document QnA API. Uploaded documen [Code Embeddings] -Source: https://docs.mistral.ai/docs/capabilities/embeddings/code_embeddings +Source: https://docs.mistral.ai/docs/.docs/capabilities/embeddings/code_embeddings Embeddings are at the core of multiple enterprise use cases, such as **retrieval systems**, **clustering**, **code analytics**, **classification**, and a variety of search applications. With code embeddings, you can embed **code databases** and **repositories**, and power **coding assistants** with state-of-the-art retrieval capabilities. @@ -9254,7 +9264,7 @@ For more information and guides on how to make use of our embedding sdk, we have [Embeddings Overview] -Source: https://docs.mistral.ai/docs/capabilities/embeddings/embeddings_overview +Source: https://docs.mistral.ai/docs/.docs/capabilities/embeddings/embeddings_overview
Open In Colab @@ -9607,7 +9617,7 @@ Our embedding model excels in retrieval tasks, as it is trained with retrieval i [Classifier Factory] -Source: https://docs.mistral.ai/docs/capabilities/finetuning/classifier-factory +Source: https://docs.mistral.ai/docs/.docs/capabilities/finetuning/classifier-factory In various domains and enterprises, classification models play a crucial role in enhancing efficiency, improving user experience, and ensuring compliance. These models serve diverse purposes, including but not limited to: - **Moderation**: Classification models are essential for moderating services and classifying unwanted content. For instance, our [moderation service](../../guardrailing/#moderation-api) helps in identifying and filtering inappropriate or harmful content in real-time, ensuring a safe and respectful environment for users. @@ -10101,7 +10111,7 @@ Explore our guides and [cookbooks](https://github.com/mistralai/cookbook) levera [Fine-tuning Overview] -Source: https://docs.mistral.ai/docs/capabilities/finetuning/finetuning_overview +Source: https://docs.mistral.ai/docs/.docs/capabilities/finetuning/finetuning_overview :::warning[ ] Every fine-tuning job comes with a minimum fee of $4, and there's a monthly storage fee of $2 for each model. For more detailed pricing information, please visit our [pricing page](https://mistral.ai/technology/#pricing). @@ -10142,7 +10152,7 @@ Fine-tuning has a wide range of use cases, some of which include: [Text & Vision Fine-tuning] -Source: https://docs.mistral.ai/docs/capabilities/finetuning/text-vision-finetuning +Source: https://docs.mistral.ai/docs/.docs/capabilities/finetuning/text-vision-finetuning Fine-tuning allows you to tailor a pre-trained language model to your specific needs by training it on your dataset. This guide explains how to fine-tune text and vision models, from preparing your data to training, whether you aim to improve domain-specific understanding or adapt to a unique conversational style. 
@@ -10687,7 +10697,7 @@ curl --location --request DELETE 'https://api.mistral.ai/v1/models/ft:open-mistr [Function calling] -Source: https://docs.mistral.ai/docs/capabilities/function-calling +Source: https://docs.mistral.ai/docs/.docs/capabilities/function-calling Open In Colab @@ -11263,7 +11273,7 @@ The status of your transaction with ID T1001 is "Paid". Is there anything else I [Moderation] -Source: https://docs.mistral.ai/docs/capabilities/moderation +Source: https://docs.mistral.ai/docs/.docs/capabilities/moderation ## Moderation API @@ -11539,7 +11549,7 @@ Please adjust the self-reflection prompt according to your own use cases. [Predicted outputs] -Source: https://docs.mistral.ai/docs/capabilities/predicted-outputs +Source: https://docs.mistral.ai/docs/.docs/capabilities/predicted-outputs Predicted Outputs optimizes response time by leveraging known or predictable content. This approach minimizes latency while maintaining high output quality. In tasks such as editing large texts, modifying code, or generating template-based responses, significant portions of the output are often predetermined. By predefining these expected parts with Predicted Outputs, models can allocate more computational resources to the unpredictable elements, improving overall efficiency. @@ -11679,7 +11689,7 @@ No, the placement of sentences or words in your prediction does not affect its e [Reasoning] -Source: https://docs.mistral.ai/docs/capabilities/reasoning +Source: https://docs.mistral.ai/docs/.docs/capabilities/reasoning **Reasoning** is the next step of CoT (Chain of Thought), naturally used to describe the **logical steps generated by the model** before reaching a conclusion. Reasoning strengthens this characteristic by going through **training steps that encourage the model to generate chains of thought freely before producing the final answer**. 
This allows models to **explore the problem more profoundly and ultimately reach a better solution** to the best of their ability by using extra compute time to generate more tokens and improve the answer—also described as **Test Time Computation**. @@ -12526,7 +12536,7 @@ Therefore, John is 22 years old. [Custom Structured Output] -Source: https://docs.mistral.ai/docs/capabilities/structured-output/custom +Source: https://docs.mistral.ai/docs/.docs/capabilities/structured-output/custom # Custom Structured Outputs @@ -12732,7 +12742,7 @@ However, it is recommended to add more explanations and iterate on your system p [JSON mode] -Source: https://docs.mistral.ai/docs/capabilities/structured-output/json-mode +Source: https://docs.mistral.ai/docs/.docs/capabilities/structured-output/json-mode Users have the option to set `response_format` to `{"type": "json_object"}` to enable JSON mode. Currently, JSON mode is available for all of our models through the API. @@ -12813,7 +12823,7 @@ curl --location "https://api.mistral.ai/v1/chat/completions" \ [Structured Output] -Source: https://docs.mistral.ai/docs/capabilities/structured-output/overview +Source: https://docs.mistral.ai/docs/.docs/capabilities/structured-output/overview # Structured Output When utilizing LLMs as agents or steps within a lengthy process, chain, or pipeline, it is often necessary for the outputs to adhere to a specific structured format. JSON is the most commonly used format for this purpose. @@ -12835,7 +12845,7 @@ Use JSON mode when more flexibility in the output is required while maintaining [Text and Chat Completions] -Source: https://docs.mistral.ai/docs/capabilities/text_and_chat_completions +Source: https://docs.mistral.ai/docs/.docs/capabilities/text_and_chat_completions The Mistral models allow you to chat with a model that has been fine-tuned to follow instructions and respond to natural language prompts.
@@ -13084,11 +13094,11 @@ Chat messages (`messages`) are a collection of prompts or messages, with each me [Vision] -Source: https://docs.mistral.ai/docs/capabilities/vision +Source: https://docs.mistral.ai/docs/.docs/capabilities/vision Vision capabilities enable models to analyze images and provide insights based on visual content in addition to text. This multimodal approach opens up new possibilities for applications that require both textual and visual understanding. -For more specific use cases regarding document parsing and data extraction we recommend taking a look at our Document AI stack [here](../OCR/document_ai_overview). +For more specific use cases regarding document parsing and data extraction we recommend taking a look at our Document AI stack [here](../document_ai/document_ai_overview). ## Models with Vision Capabilities: - Pixtral 12B (`pixtral-12b-latest`) @@ -13617,7 +13627,7 @@ Model output: [AWS Bedrock] -Source: https://docs.mistral.ai/docs/deployment/cloud/aws +Source: https://docs.mistral.ai/docs/.docs/deployment/cloud/aws ## Introduction @@ -13722,7 +13732,7 @@ For more details and examples, refer to the following resources: [Azure AI] -Source: https://docs.mistral.ai/docs/deployment/cloud/azure +Source: https://docs.mistral.ai/docs/.docs/deployment/cloud/azure ## Introduction @@ -13739,7 +13749,10 @@ in two ways: This page focuses on the MaaS offering, where the following models are available: - Mistral Large (24.11, 24.07) -- Mistral Small (24.09) +- Mistral Medium (25.05) +- Mistral Small (25.03) +- Mistral Document AI (25.05) +- Mistral OCR (25.05) - Ministral 3B (24.10) - Mistral Nemo @@ -13843,13 +13856,15 @@ To run the examples below, set the following environment variables: ## Going further For more details and examples, refer to the following resources: +- [Release blog post for Mistral Document AI](https://techcommunity.microsoft.com/blog/aiplatformblog/deepening-our-partnership-with-mistral-ai-on-azure-ai-foundry/4434656) - 
[Release blog post for Mistral Large 2 and Mistral NeMo](https://techcommunity.microsoft.com/t5/ai-machine-learning-blog/ai-innovation-continues-introducing-mistral-large-2-and-mistral/ba-p/4200181). - [Azure documentation for MaaS deployment of Mistral models](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral). - [Azure ML examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/sdk/python/foundation-models/mistral) with several Mistral-based samples. +- [Azure AI Foundry GitHub repository](https://github.com/azure-ai-foundry/foundry-samples/tree/main/samples/mistral) [IBM watsonx.ai] -Source: https://docs.mistral.ai/docs/deployment/cloud/ibm-watsonx +Source: https://docs.mistral.ai/docs/.docs/deployment/cloud/ibm-watsonx ## Introduction @@ -13980,7 +13995,7 @@ For more information and examples, you can check: [Outscale] -Source: https://docs.mistral.ai/docs/deployment/cloud/outscale +Source: https://docs.mistral.ai/docs/.docs/deployment/cloud/outscale ## Introduction @@ -14089,7 +14104,7 @@ To run the examples below you will need to set the following environment variabl Codestral can be queried using an additional completion mode called fill-in-the-middle (FIM). For more information, see the -[code generation section](../../../capabilities/code_generation/#fill-in-the-middle-endpoint). +[code generation section](../../../capabilities/code_generation). @@ -14158,7 +14173,7 @@ For more information and examples, you can check: [Cloud] -Source: https://docs.mistral.ai/docs/deployment/cloud/overview +Source: https://docs.mistral.ai/docs/.docs/deployment/cloud/overview You can access Mistral AI models via your preferred cloud provider and use your cloud credits. 
In particular, Mistral's optimized commercial models are available on: @@ -14172,7 +14187,7 @@ In particular, Mistral's optimized commercial models are available on: [Snowflake Cortex] -Source: https://docs.mistral.ai/docs/deployment/cloud/sfcortex +Source: https://docs.mistral.ai/docs/.docs/deployment/cloud/sfcortex ## Introduction @@ -14251,7 +14266,7 @@ For more information and examples, you can check the Snowflake documentation for [Vertex AI] -Source: https://docs.mistral.ai/docs/deployment/cloud/vertex +Source: https://docs.mistral.ai/docs/.docs/deployment/cloud/vertex ## Introduction @@ -14390,7 +14405,7 @@ for more details. Codestral can be queried using an additional completion mode called fill-in-the-middle (FIM). For more information, see the -[code generation section](../../../capabilities/code_generation/#fill-in-the-middle-endpoint). +[code generation section](../../../capabilities/code_generation). @@ -14482,7 +14497,7 @@ For more information and examples, you can check: [Workspaces] -Source: https://docs.mistral.ai/docs/deployment/laplateforme/organization +Source: https://docs.mistral.ai/docs/.docs/deployment/laplateforme/organization A La Plateforme workspace is a collective of accounts, each with a designated set of rights and permissions. Creating a workspace for your team enables you to: - Manage access and costs @@ -14522,7 +14537,7 @@ and click "Invite a new member". 
[La Plateforme] -Source: https://docs.mistral.ai/docs/deployment/laplateforme/overview +Source: https://docs.mistral.ai/docs/.docs/deployment/laplateforme/overview [platform_url]: https://console.mistral.ai/ [deployment_img]: /img/deployment.png @@ -14561,7 +14576,7 @@ Raw model weights can be used in several ways: [Pricing] -Source: https://docs.mistral.ai/docs/deployment/laplateforme/pricing +Source: https://docs.mistral.ai/docs/.docs/deployment/laplateforme/pricing :::note[ ] Please refer to the [pricing page](https://mistral.ai/pricing#api-pricing) for detailed information on costs. @@ -14569,7 +14584,7 @@ Please refer to the [pricing page](https://mistral.ai/pricing#api-pricing) for d [Rate limit and usage tiers] -Source: https://docs.mistral.ai/docs/deployment/laplateforme/tier +Source: https://docs.mistral.ai/docs/.docs/deployment/laplateforme/tier :::note[ ] Please visit https://admin.mistral.ai/plateforme/limits for detailed information on the current rate limit and usage tiers for your workspace. @@ -14598,7 +14613,7 @@ We offer various tiers on the platform, including a **free API tier** with restr [Deploy with Cerebrium] -Source: https://docs.mistral.ai/docs/deployment/self-deployment/cerebrium +Source: https://docs.mistral.ai/docs/.docs/deployment/self-deployment/cerebrium [Cerebrium](https://www.cerebrium.ai/) is a serverless AI infrastructure platform that makes it easier for companies to build and deploy AI-based applications. They offer serverless GPUs with low cold-start times across over 12 varieties of GPU chips that auto-scale, and you only pay for the compute you use.
@@ -14756,7 +14771,7 @@ You should then get a message looking like this: [Deploy with Cloudflare Workers AI] -Source: https://docs.mistral.ai/docs/deployment/self-deployment/cloudflare +Source: https://docs.mistral.ai/docs/.docs/deployment/self-deployment/cloudflare [Cloudflare](https://www.cloudflare.com/en-gb/) is a web performance and security company that provides content delivery network (CDN), DDoS protection, Internet security, and distributed domain name server services. Cloudflare launched Workers AI, which allows developers to run LLMs models powered by serverless GPUs on Cloudflare’s global network. @@ -14835,7 +14850,7 @@ Here is the output you should receive [Self-deployment] -Source: https://docs.mistral.ai/docs/deployment/self-deployment/overview +Source: https://docs.mistral.ai/docs/.docs/deployment/self-deployment/overview Mistral AI models can be self-deployed on your own infrastructure through various inference engines. We recommend using [vLLM](https://vllm.readthedocs.io/), a @@ -14851,7 +14866,7 @@ You can also leverage specific tools to facilitate infrastructure management, su [Deploy with SkyPilot] -Source: https://docs.mistral.ai/docs/deployment/self-deployment/skypilot +Source: https://docs.mistral.ai/docs/.docs/deployment/self-deployment/skypilot [SkyPilot](https://skypilot.readthedocs.io/en/latest/) is a framework for running LLMs, AI, and batch jobs on any cloud, offering maximum cost savings, highest GPU availability, and managed execution. @@ -14955,7 +14970,7 @@ Many cloud providers require you to explicitly request access to powerful GPU in [Text Generation Inference] -Source: https://docs.mistral.ai/docs/deployment/self-deployment/tgi +Source: https://docs.mistral.ai/docs/.docs/deployment/self-deployment/tgi Text Generation Inference (TGI) is a toolkit for deploying and serving Large Language Models (LLMs). TGI enables high-performance text generation for the most popular open-access LLMs. 
Among other features, it has quantization, tensor parallelism, token streaming, continuous batching, flash attention, guidance, and more. @@ -15119,7 +15134,7 @@ curl 127.0.0.1:8080/generate \ [TensorRT] -Source: https://docs.mistral.ai/docs/deployment/self-deployment/trt +Source: https://docs.mistral.ai/docs/.docs/deployment/self-deployment/trt ## Building the engine @@ -15136,7 +15151,7 @@ Follow the [official documentation](https://github.com/triton-inference-server/t [vLLM] -Source: https://docs.mistral.ai/docs/deployment/self-deployment/vllm +Source: https://docs.mistral.ai/docs/.docs/deployment/self-deployment/vllm [vLLM](https://github.com/vllm-project/vllm) is an open-source LLM inference and serving engine. It is particularly appropriate as a target platform for self-deploying Mistral @@ -15552,7 +15567,7 @@ using the same code as in a standalone deployment. [SDK Clients] -Source: https://docs.mistral.ai/docs/getting-started/clients +Source: https://docs.mistral.ai/docs/.docs/getting-started/clients We provide client codes in both Python and Typescript. @@ -15656,7 +15671,7 @@ We recommend reaching out to the respective maintainers for any assistance or in [Bienvenue to Mistral AI Documentation] -Source: https://docs.mistral.ai/docs/getting-started/docs_introduction +Source: https://docs.mistral.ai/docs/.docs/getting-started/docs_introduction Mistral AI is a research lab building the best open source models in the world. La Plateforme enables developers and enterprises to build new products and applications, powered by Mistral’s open source and commercial LLMs. @@ -15693,7 +15708,7 @@ The [Mistral AI APIs](https://console.mistral.ai/) empower LLM applications via: - [Text generation](/capabilities/completion), enables streaming and provides the ability to display partial model results in real-time - [Vision](/capabilities/vision), enables the analysis of images and provides insights based on visual content in addition to text. 
-- [OCR](/capabilities/OCR/basic_ocr), allows the extraction of interleaved text and images from documents. +- [OCR](/capabilities/document_ai/basic_ocr), allows the extraction of interleaved text and images from documents. - [Code generation](/capabilities/code_generation), empowers code generation tasks, including fill-in-the-middle and code completion. - [Embeddings](/capabilities/embeddings/overview), useful for RAG where it represents the meaning of text as a list of numbers. - [Function calling](/capabilities/function_calling), enables Mistral models to connect to external tools. @@ -15704,7 +15719,7 @@ [Glossary] -Source: https://docs.mistral.ai/docs/getting-started/glossary +Source: https://docs.mistral.ai/docs/.docs/getting-started/glossary ## LLM @@ -15793,7 +15808,7 @@ Temperature is a fundamental sampling parameter in LLMs that controls the random [Model customization] -Source: https://docs.mistral.ai/docs/getting-started/model_customization +Source: https://docs.mistral.ai/docs/.docs/getting-started/model_customization ### Otherwise known as "How to Build an Application with a Custom Model" @@ -15907,7 +15922,7 @@ Congrats! You’ve deployed your custom model into your application. [Models Benchmarks] -Source: https://docs.mistral.ai/docs/getting-started/models/benchmark +Source: https://docs.mistral.ai/docs/.docs/getting-started/models/benchmark LLM (Large Language Model) benchmarks are standardized tests or datasets used to evaluate the performance of large language models. These benchmarks help researchers and developers understand the strengths and weaknesses of their models and compare them with other models in a systematic way.
@@ -15975,7 +15990,7 @@ We've gathered a lot of valuable insights from platforms like Reddit and Twitter [Model selection] -Source: https://docs.mistral.ai/docs/getting-started/models/model_selection +Source: https://docs.mistral.ai/docs/.docs/getting-started/models/model_selection This guide will explore the performance and cost trade-offs, and discuss how to select the appropriate model for different use cases. We will delve into various factors to consider, offering guidance on choosing the right model for your specific needs. @@ -16185,7 +16200,7 @@ Donc, un kilogramme de plumes est plus lourd qu'une livre de fer, car il corresp [Models Overview] -Source: https://docs.mistral.ai/docs/getting-started/models/overview +Source: https://docs.mistral.ai/docs/.docs/getting-started/models/overview Mistral provides two types of models: open models and premier models. @@ -16198,7 +16213,7 @@ Mistral provides two types of models: open models and premier models. | Model | Weight availability|Available via API| Description | Max Tokens| API Endpoints|Version| |--------------------|:--------------------:|:--------------------:|:--------------------:|:--------------------:|:--------------------:|:--------------------:| -| Mistral Medium 3 | | :heavy_check_mark: | Our frontier-class multimodal model released May 2025. Learn more in our [blog post](https://mistral.ai/news/mistral-medium-3/) | 128k | `mistral-medium-2505` | 25.05| +| Mistral Medium 3.1 | | :heavy_check_mark: | Our frontier-class multimodal model released August 2025. It improves tone and performance. Read more about Medium 3 in our [blog post](https://mistral.ai/news/mistral-medium-3/) | 128k | `mistral-medium-2508` | 25.08| | Magistral Medium 1.1 | | :heavy_check_mark: | Our frontier-class reasoning model released July 2025. &#13;
| 40k | `magistral-medium-2507` | 25.07| | Codestral 2508 | | :heavy_check_mark: | Our cutting-edge language model for coding released end of July 2025, Codestral specializes in low-latency, high-frequency tasks such as fill-in-the-middle (FIM), code correction and test generation. Learn more in our [blog post](https://mistral.ai/news/codestral-25-08/) | 256k | `codestral-2508` | 25.08| | Voxtral Mini Transcribe | | :heavy_check_mark: | An efficient audio input model, fine-tuned and optimized for transcription purposes only. | | `voxtral-mini-2507` via `audio/transcriptions` | 25.07| @@ -16207,6 +16222,7 @@ Mistral provides two types of models: open models and premier models. | Magistral Medium 1 | | :heavy_check_mark: | Our first frontier-class reasoning model released June 2025. Learn more in our [blog post](https://mistral.ai/news/magistral/) | 40k | `magistral-medium-2506` | 25.06| | Ministral 3B | | :heavy_check_mark: | World’s best edge model. Learn more in our [blog post](https://mistral.ai/news/ministraux/) | 128k | `ministral-3b-2410` | 24.10| | Ministral 8B | :heavy_check_mark:
[Mistral Research License](https://mistral.ai/licenses/MRL-0.1.md)| :heavy_check_mark: |Powerful edge model with extremely high performance/price ratio. Learn more in our [blog post](https://mistral.ai/news/ministraux/) | 128k | `ministral-8b-2410` | 24.10| +| Mistral Medium 3 | | :heavy_check_mark: | Our frontier-class multimodal model released May 2025. Learn more in our [blog post](https://mistral.ai/news/mistral-medium-3/) | 128k | `mistral-medium-2505` | 25.05| | Codestral 2501 | | :heavy_check_mark: | Our cutting-edge language model for coding with the second version released January 2025, Codestral specializes in low-latency, high-frequency tasks such as fill-in-the-middle (FIM), code correction and test generation. Learn more in our [blog post](https://mistral.ai/news/codestral-2501/) | 256k | `codestral-2501` | 25.01| | Mistral Large 2.1 |:heavy_check_mark:
[Mistral Research License](https://mistral.ai/licenses/MRL-0.1.md)| :heavy_check_mark: | Our top-tier large model for high-complexity tasks with the latest version released November 2024. Learn more in our [blog post](https://mistral.ai/news/pixtral-large/) | 128k | `mistral-large-2411` | 24.11| | Pixtral Large |:heavy_check_mark:
[Mistral Research License](https://mistral.ai/licenses/MRL-0.1.md)| :heavy_check_mark: | Our first frontier-class multimodal model released November 2024. Learn more in our [blog post](https://mistral.ai/news/pixtral-large/) | 128k | `pixtral-large-2411` | 24.11| @@ -16241,8 +16257,8 @@ Additionally, be prepared for the deprecation of certain endpoints in the coming Here are the details of the available versions: - `magistral-medium-latest`: currently points to `magistral-medium-2507`. - `magistral-small-latest`: currently points to `magistral-small-2507`. -- `mistral-medium-latest`: currently points to `mistral-medium-2505`. -- `mistral-large-latest`: currently points to `mistral-large-2411`. +- `mistral-medium-latest`: currently points to `mistral-medium-2508`. +- `mistral-large-latest`: currently points to `mistral-medium-2508`, previously `mistral-large-2411`. - `pixtral-large-latest`: currently points to `pixtral-large-2411`. - `mistral-moderation-latest`: currently points to `mistral-moderation-2411`. - `ministral-3b-latest`: currently points to `ministral-3b-2410`. @@ -16287,7 +16303,7 @@ To prepare for model retirements and version upgrades, we recommend that custome [Model weights] -Source: https://docs.mistral.ai/docs/getting-started/models/weights +Source: https://docs.mistral.ai/docs/.docs/getting-started/models/weights We open-source both pre-trained models and instruction-tuned models. These models are not tuned for safety as we want to empower users to test and refine moderation based on their use cases. For safer models, follow our [guardrailing tutorial](/capabilities/guardrailing). 
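The `-latest` alias list above is effectively a lookup table from alias to pinned, dated model ID. The sketch below is illustrative only, not part of the Mistral SDK; `resolve_model` and the `LATEST_ALIASES` dict are assumptions that mirror the mappings stated in this section.

```python
# Illustrative sketch (not SDK code): how the "-latest" aliases in this
# section resolve to pinned model versions.
LATEST_ALIASES = {
    "magistral-medium-latest": "magistral-medium-2507",
    "magistral-small-latest": "magistral-small-2507",
    "mistral-medium-latest": "mistral-medium-2508",
    # Note: mistral-large-latest was repointed from mistral-large-2411.
    "mistral-large-latest": "mistral-medium-2508",
    "pixtral-large-latest": "pixtral-large-2411",
    "mistral-moderation-latest": "mistral-moderation-2411",
    "ministral-3b-latest": "ministral-3b-2410",
}

def resolve_model(model: str) -> str:
    """Return the pinned version an alias currently points to.

    Already-pinned model IDs pass through unchanged.
    """
    return LATEST_ALIASES.get(model, model)

print(resolve_model("mistral-large-latest"))  # mistral-medium-2508
print(resolve_model("mistral-medium-2505"))   # mistral-medium-2505
```

Because an alias can be repointed between releases, pinning the dated version (for example `mistral-medium-2508` instead of `mistral-medium-latest`) avoids surprise model upgrades in production.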
@@ -16378,7 +16394,7 @@ To learn more about how to use mistral-inference, take a look at the [README](ht [Quickstart] -Source: https://docs.mistral.ai/docs/getting-started/quickstart +Source: https://docs.mistral.ai/docs/.docs/getting-started/quickstart [platform_url]: https://console.mistral.ai/ @@ -16526,7 +16542,7 @@ For a full description of the models offered on the API, head on to the **[model [Basic RAG] -Source: https://docs.mistral.ai/docs/guides/basic-RAG +Source: https://docs.mistral.ai/docs/.docs/guides/basic-RAG # Basic RAG Retrieval-augmented generation (RAG) is an AI framework that synergizes the capabilities of LLMs and information retrieval systems. It's useful to answer questions or generate content leveraging external knowledge. There are two main steps in RAG: @@ -16701,7 +16717,7 @@ Among them you can find how to perform... [Ambassador] -Source: https://docs.mistral.ai/docs/guides/contribute/ambassador +Source: https://docs.mistral.ai/docs/.docs/guides/contribute/ambassador # Welcome to the Mistral AI Ambassador Program! @@ -16982,7 +16998,7 @@ Thank you to each and every one of you, including those who prefer not to be nam [Contribute] -Source: https://docs.mistral.ai/docs/guides/contribute/overview +Source: https://docs.mistral.ai/docs/.docs/guides/contribute/overview # How to contribute @@ -17034,7 +17050,7 @@ A valuable way to support Mistral AI is by engaging in active communication in t [Evaluation] -Source: https://docs.mistral.ai/docs/guides/evaluation +Source: https://docs.mistral.ai/docs/.docs/guides/evaluation
Open In Colab @@ -17501,7 +17517,7 @@ When implementing crowdsourcing for human evaluation, you can opt for a simple a [Fine-tuning] -Source: https://docs.mistral.ai/docs/guides/finetuning +Source: https://docs.mistral.ai/docs/.docs/guides/finetuning :::warning[ ] There's a monthly storage fee of $2 for each model. For more detailed pricing information, please visit our [pricing page](https://mistral.ai/pricing#api-pricing). @@ -17514,7 +17530,7 @@ There's a monthly storage fee of $2 for each model. For more detailed pricing in [ 01 Intro Basics] -Source: https://docs.mistral.ai/docs/guides/finetuning_sections/_01_intro_basics +Source: https://docs.mistral.ai/docs/.docs/guides/finetuning_sections/_01_intro_basics ## Introduction @@ -17529,7 +17545,7 @@ In this guide, we will cover the following topics: [ 02 Prepare Dataset] -Source: https://docs.mistral.ai/docs/guides/finetuning_sections/_02_prepare_dataset +Source: https://docs.mistral.ai/docs/.docs/guides/finetuning_sections/_02_prepare_dataset ## Prepare the dataset @@ -18124,7 +18140,7 @@ Here are six specific use cases that you might find helpful: [download the validation and reformat script] -Source: https://docs.mistral.ai/docs/guides/finetuning_sections/_03_e2e_examples +Source: https://docs.mistral.ai/docs/.docs/guides/finetuning_sections/_03_e2e_examples ## End-to-end example with Mistral API @@ -18618,7 +18634,7 @@ To see an end-to-end example of how to install mistral-finetune, prepare and val [get data from hugging face] -Source: https://docs.mistral.ai/docs/guides/finetuning_sections/_04_faq +Source: https://docs.mistral.ai/docs/.docs/guides/finetuning_sections/_04_faq ## FAQ @@ -18732,7 +18748,7 @@ for file in output_file_objects: [Observability] -Source: https://docs.mistral.ai/docs/guides/observability +Source: https://docs.mistral.ai/docs/.docs/guides/observability ## Why observability? 
@@ -18984,9 +19000,27 @@ Here is an [example notebook](https://github.com/mistralai/cookbook/blob/main/th drawing +### Integration with Maxim + +Maxim AI provides comprehensive observability for your Mistral-based AI applications. With Maxim's one-line integration, you can easily trace and analyze LLM calls, metrics, and more. + +**Pros:** + +* Performance Analytics: Track latency, tokens consumed, and costs +* Advanced Visualization: Understand agent trajectories through intuitive dashboards + +**Mistral integration example:** + +* Learn how to integrate Maxim observability with the Mistral SDK in just one line of code - [Colab Notebook](https://github.com/mistralai/cookbook/blob/main/third_party/Maxim/cookbook_maxim_mistral_integration.ipynb) + +See the Maxim documentation for using Mistral as an LLM provider with Maxim as the logger - [Docs Link](https://www.getmaxim.ai/docs/sdk/python/integrations/mistral/mistral) + + +![Gif](https://raw.githubusercontent.com/akmadan/platform-docs-public/docs/observability-maxim-provider/static/img/guides/maxim_traces.gif) + [Other resources] -Source: https://docs.mistral.ai/docs/guides/other-resources +Source: https://docs.mistral.ai/docs/.docs/guides/other-resources Visit the [Mistral AI Cookbook](https://github.com/mistralai/cookbook) for additional inspiration, where you'll find example code, community contributions, and demonstrations of integrations with third-party tools, including: @@ -18995,7 +19029,7 @@ where you'll find example code, community contributions, and demonstrations of i [Prefix] -Source: https://docs.mistral.ai/docs/guides/prefix +Source: https://docs.mistral.ai/docs/.docs/guides/prefix # Prefix: Use Cases @@ -19607,7 +19641,7 @@ to different needs and use cases.* [Prompting capabilities] -Source: https://docs.mistral.ai/docs/guides/prompting-capabilities +Source: https://docs.mistral.ai/docs/.docs/guides/prompting-capabilities # Prompting Capabilities @@ -19936,7 +19970,7 @@ Explain your choice by pointing out specific 
reasons such as clarity, completene [Sampling] -Source: https://docs.mistral.ai/docs/guides/sampling +Source: https://docs.mistral.ai/docs/.docs/guides/sampling # Sampling: Overview on our sampling settings @@ -20396,7 +20430,7 @@ print(chat_response.choices[0].message.content) [Tokenization] -Source: https://docs.mistral.ai/docs/guides/tokenization +Source: https://docs.mistral.ai/docs/.docs/guides/tokenization Open In Colab @@ -20736,18 +20770,3 @@ Mistral AI's LLM API endpoints charge based on the number of tokens in the input To help you estimate your costs, our tokenization API makes it easy to count the number of tokens in your text. Simply run `len(tokens)` as shown in the example above to get the total number of tokens in the text, which you can then use to estimate your cost based on our pricing information. - -[Mistral AI Crawlers] -Source: https://docs.mistral.ai/docs/robots - -## Mistral AI Crawlers - -Mistral AI employs web crawlers ("robots") and user agents to execute tasks for its products, either automatically or upon user request. To facilitate webmasters in managing how their sites and content interact with AI, Mistral AI utilizes specific robots.txt tags. - -### MistralAI-User - -MistralAI-User is for user actions in LeChat. When users ask LeChat a question, it may visit a web page to help answer and include a link to the source in its response. MistralAI-User governs which sites these user requests can be made to. It is not used for crawling the web in any automatic fashion, nor to crawl content for generative AI training. 
- -Full user-agent string: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; MistralAI-User/1.0; +https://docs.mistral.ai/robots) - -Published IP addresses: https://mistral.ai/mistralai-user-ips.json \ No newline at end of file diff --git a/static/llms.txt b/static/llms.txt index 51eb70bf..13a06208 100644 --- a/static/llms.txt +++ b/static/llms.txt @@ -2,78 +2,78 @@ ## Docs -[Agents & Conversations](https://docs.mistral.ai/docs/agents/agents_and_conversations.md): Agents & Conversations API: Create, manage agents with tools, and handle interactive conversations with persistent history -[Agents Function Calling](https://docs.mistral.ai/docs/agents/agents_function_calling.md): Agents use tools and function calling to perform tasks, with built-in and customizable options -[Agents Introduction](https://docs.mistral.ai/docs/agents/agents_introduction.md): AI agents autonomously execute tasks using LLMs, with tools, state persistence, and multi-agent collaboration via the Agents API -[Code Interpreter](https://docs.mistral.ai/docs/agents/connectors/code_interpreter.md): Code Interpreter enables safe, on-demand code execution for data analysis, graphing, and more in isolated containers -[Connectors Overview](https://docs.mistral.ai/docs/agents/connectors/connectors_overview.md): Connectors enable Agents and users to access tools like websearch, code interpreter, image generation, and document library on demand -[Document Library](https://docs.mistral.ai/docs/agents/connectors/document_library.md): Document Library enhances agents with uploaded documents via Mistral Cloud's built-in RAG tool -[Image Generation](https://docs.mistral.ai/docs/agents/connectors/image_generation.md): Built-in tool for agents to generate images on demand with detailed output handling and download options -[Websearch](https://docs.mistral.ai/docs/agents/connectors/websearch.md): Websearch enables models to browse the web for real-time, up-to-date information and access specific 
websites -[Agents Handoffs](https://docs.mistral.ai/docs/agents/handoffs.md): Agents Handoffs enable seamless task delegation and workflow automation between multiple agents with diverse tools and capabilities -[MCP](https://docs.mistral.ai/docs/agents/mcp.md): MCP is an open standard protocol for seamless AI model integration with data sources and tools -[Audio & Transcription](https://docs.mistral.ai/docs/capabilities/audio_and_transcription.md): Audio & Transcription: Voxtral models enable chat and transcription via audio input with various file-passing methods -[Batch Inference](https://docs.mistral.ai/docs/capabilities/batch_inference.md): Process multiple API requests in batches with customizable models, endpoints, and metadata -[Citations and References](https://docs.mistral.ai/docs/capabilities/citations_and_references.md): Citations and references enable models to ground responses with sources, ideal for RAG and agentic applications -[Coding](https://docs.mistral.ai/docs/capabilities/coding.md): Mistral AI offers Codestral for code generation & FIM, and Devstral for agentic tool use in software development, with integrations for IDEs and frameworks -[Annotations](https://docs.mistral.ai/docs/capabilities/document_ai/annotations.md): Mistral Document AI API extracts structured data from documents using custom JSON annotations for bboxes and full documents -[Basic OCR](https://docs.mistral.ai/docs/capabilities/document_ai/basic_ocr.md): Extract text and structured content from PDFs and images with Mistral's Document AI OCR processor -[Document AI](https://docs.mistral.ai/docs/capabilities/document_ai/document_ai_overview.md): Mistral Document AI offers enterprise-grade OCR, structured data extraction, and multilingual support for fast, accurate document processing -[Document QnA](https://docs.mistral.ai/docs/capabilities/document_ai/document_qna.md): Document QnA combines OCR and AI to enable natural language queries on document content for insights and 
extraction -[Code Embeddings](https://docs.mistral.ai/docs/capabilities/embeddings/code_embeddings.md): Code embeddings enable retrieval, clustering, and analytics for code databases and coding assistants using Mistral AI's API -[Embeddings Overview](https://docs.mistral.ai/docs/capabilities/embeddings/embeddings_overview.md): Mistral AI's Embeddings API provides advanced vector representations for text and code, enabling NLP tasks like retrieval, clustering, and classification -[Text Embeddings](https://docs.mistral.ai/docs/capabilities/embeddings/text_embeddings.md): Generate and use text embeddings with Mistral AI's API for NLP tasks like similarity, classification, and retrieval -[Classifier Factory](https://docs.mistral.ai/docs/capabilities/finetuning/classifier-factory.md): Create and fine-tune custom classification models for intent detection, moderation, sentiment analysis, and more using Mistral's Classifier Factory -[Fine-tuning Overview](https://docs.mistral.ai/docs/capabilities/finetuning/finetuning_overview.md): Learn about fine-tuning AI models, its benefits, use cases, and available services for customization." (99 characters) -[Text & Vision Fine-tuning](https://docs.mistral.ai/docs/capabilities/finetuning/text-vision-finetuning.md): Fine-tune Mistral's text and vision models with custom datasets in JSONL format for domain-specific or conversational improvements -[Function calling](https://docs.mistral.ai/docs/capabilities/function-calling.md): Mistral models enable function calling to integrate external tools for dynamic, data-driven responses -[Moderation](https://docs.mistral.ai/docs/capabilities/moderation.md): Mistral's moderation API detects harmful content across multiple categories using AI-powered classification for text and conversations -[Predicted outputs](https://docs.mistral.ai/docs/capabilities/predicted-outputs.md): Optimize response time by predefining predictable content for faster, efficient AI outputs." 
(99 characters) -[Reasoning](https://docs.mistral.ai/docs/capabilities/reasoning.md): Reasoning models generate logical chains of thought to solve problems, improving accuracy with extra compute time." (99 characters) -[Custom Structured Output](https://docs.mistral.ai/docs/capabilities/structured-output/custom.md): Define and enforce JSON output formats using Pydantic or Zod schemas with Mistral AI -[JSON mode](https://docs.mistral.ai/docs/capabilities/structured-output/json-mode.md): Enable JSON mode by setting `response_format` to `{\"type\": \"json_object\"}` in API requests -[Structured Output](https://docs.mistral.ai/docs/capabilities/structured-output/overview.md): Learn to generate structured outputs like JSON for LLM agents and pipelines, with custom and flexible formatting options -[Text and Chat Completions](https://docs.mistral.ai/docs/capabilities/text_and_chat_completions.md): Mistral models enable chat and text completions with customizable prompts, roles, and streaming options -[Vision](https://docs.mistral.ai/docs/capabilities/vision.md): Multimodal AI models analyze images and text for insights, supporting use cases like OCR, chart understanding, and receipt transcription -[AWS Bedrock](https://docs.mistral.ai/docs/deployment/cloud/aws.md): Deploy and query Mistral AI models on AWS Bedrock with fully managed, serverless endpoints -[Azure AI](https://docs.mistral.ai/docs/deployment/cloud/azure.md): Deploy and query Mistral AI models on Azure AI via serverless MaaS or GPU-based endpoints -[IBM watsonx.ai](https://docs.mistral.ai/docs/deployment/cloud/ibm-watsonx.md): Mistral AI's Large model on IBM watsonx.ai: SaaS & on-premise deployment with setup, API access, and usage guides -[Outscale](https://docs.mistral.ai/docs/deployment/cloud/outscale.md): Deploy and query Mistral AI models on Outscale via managed VMs and REST APIs -[Cloud](https://docs.mistral.ai/docs/deployment/cloud/overview.md): Access Mistral AI models via Azure, AWS, Google Cloud, 
Snowflake, IBM, and Outscale using cloud credits -[Snowflake Cortex](https://docs.mistral.ai/docs/deployment/cloud/sfcortex.md): Access Mistral AI models on Snowflake Cortex as serverless, fully managed endpoints for SQL & Python -[Vertex AI](https://docs.mistral.ai/docs/deployment/cloud/vertex.md): Deploy and query Mistral AI models on Google Cloud Vertex AI as serverless endpoints -[Workspaces](https://docs.mistral.ai/docs/deployment/laplateforme/organization.md): La Plateforme workspaces enable team collaboration, access control, and shared fine-tuned models." (99 characters) -[La Plateforme](https://docs.mistral.ai/docs/deployment/laplateforme/overview.md): Mistral AI's La Plateforme offers pay-as-you-go API access to its latest models with flexible deployment options -[Pricing](https://docs.mistral.ai/docs/deployment/laplateforme/pricing.md): Check the pricing page for detailed API cost information -[Rate limit and usage tiers](https://docs.mistral.ai/docs/deployment/laplateforme/tier.md): Learn about Mistral's API rate limits, usage tiers, and how to upgrade for higher capacity." (99 characters) -[Deploy with Cerebrium](https://docs.mistral.ai/docs/deployment/self-deployment/cerebrium.md): Deploy AI apps effortlessly with Cerebrium's serverless GPU infrastructure and auto-scaling." 
(99 characters) -[Deploy with Cloudflare Workers AI](https://docs.mistral.ai/docs/deployment/self-deployment/cloudflare.md): Deploy AI models on Cloudflare's global network with Workers AI for serverless GPU-powered LLMs -[Self-deployment](https://docs.mistral.ai/docs/deployment/self-deployment/overview.md): Deploy Mistral AI models on your infrastructure using vLLM, TensorRT-LLM, TGI, or tools like SkyPilot and Cerebrium -[Deploy with SkyPilot](https://docs.mistral.ai/docs/deployment/self-deployment/skypilot.md): Deploy AI models on any cloud with SkyPilot for cost savings, high GPU availability, and managed execution -[Text Generation Inference](https://docs.mistral.ai/docs/deployment/self-deployment/tgi.md): TGI is a toolkit for deploying and serving LLMs with high-performance text generation features like quantization and OpenAI-like API support -[TensorRT](https://docs.mistral.ai/docs/deployment/self-deployment/trt.md): Guide to building and deploying TensorRT-LLM engines with Triton inference server -[vLLM](https://docs.mistral.ai/docs/deployment/self-deployment/vllm.md): vLLM is an open-source LLM inference engine optimized for deploying Mistral models on-premise -[SDK Clients](https://docs.mistral.ai/docs/getting-started/clients.md): Official Python & TypeScript SDKs and community clients for Mistral AI -[Bienvenue to Mistral AI Documentation](https://docs.mistral.ai/docs/getting-started/docs_introduction.md): Mistral AI offers open-source and commercial LLMs, APIs, and tools for developers and enterprises to build AI-powered applications -[Glossary](https://docs.mistral.ai/docs/getting-started/glossary.md): Glossary of key AI and LLM terms, including LLMs, text generation, tokens, MoE, RAG, fine-tuning, function calling, embeddings, and temperature -[Model customization](https://docs.mistral.ai/docs/getting-started/model_customization.md): Learn how to customize LLMs for your application with system prompts, fine-tuning, and moderation layers -[Models 
Benchmarks](https://docs.mistral.ai/docs/getting-started/models/benchmark.md): Mistral's benchmarked models excel in reasoning, multilingual tasks, coding, and multimodal capabilities, outperforming competitors in key benchmarks -[Model selection](https://docs.mistral.ai/docs/getting-started/models/model_selection.md): Guide to selecting Mistral models based on performance, cost, and use case complexity." (99 characters) -[Models Overview](https://docs.mistral.ai/docs/getting-started/models/overview.md): Mistral offers open and premier models for various tasks, including text, code, audio, and multimodal processing -[Model weights](https://docs.mistral.ai/docs/getting-started/models/weights.md): Open-source pre-trained and instruction-tuned models with various licenses, download links, and usage guidelines -[Quickstart](https://docs.mistral.ai/docs/getting-started/quickstart.md): Quickstart guide for setting up a Mistral AI account, configuring billing, and using the API for models and embeddings -[Basic RAG](https://docs.mistral.ai/docs/guides/basic-RAG.md): Learn how to build a basic RAG system by combining retrieval and generation for AI-powered knowledge-based responses -[Ambassador](https://docs.mistral.ai/docs/guides/contribute/ambassador.md): Join Mistral AI's Ambassador Program to advocate, create content, and gain exclusive benefits for AI enthusiasts -[Contribute](https://docs.mistral.ai/docs/guides/contribute/overview.md): Learn how to contribute to Mistral AI through docs, code, community, and the Ambassador Program -[Evaluation](https://docs.mistral.ai/docs/guides/evaluation.md): Guide to evaluating LLMs for specific tasks with metrics, human, and LLM-based methods -[Fine-tuning](https://docs.mistral.ai/docs/guides/finetuning.md): Fine-tuning models incurs a $2 monthly storage fee per model; see pricing for details -[ 01 Intro Basics](https://docs.mistral.ai/docs/guides/finetuning_sections/_01_intro_basics.md): Learn the basics of fine-tuning LLMs with 
Mistral AI's API and open-source tools for optimized performance -[ 02 Prepare Dataset](https://docs.mistral.ai/docs/guides/finetuning_sections/_02_prepare_dataset.md): Learn how to prepare datasets for fine-tuning models across various use cases, from tone to coding and RAG -[download the validation and reformat script](https://docs.mistral.ai/docs/guides/finetuning_sections/_03_e2e_examples.md): Download the reformat_data.py script to validate and reformat datasets for Mistral API fine-tuning -[get data from hugging face](https://docs.mistral.ai/docs/guides/finetuning_sections/_04_faq.md): FAQ on data validation, size limits, job creation, and fine-tuning details for Mistral API and mistral-finetune -[Observability](https://docs.mistral.ai/docs/guides/observability.md): Observability for LLMs ensures visibility, debugging, and performance optimization across prototyping, testing, and production -[Other resources](https://docs.mistral.ai/docs/guides/other-resources.md): Explore Mistral AI Cookbook for code examples, community contributions, and third-party tool integrations -[Prefix](https://docs.mistral.ai/docs/guides/prefix.md): Prefixes enhance model responses by improving language adherence, saving tokens, enabling roleplay, and strengthening safeguards -[Prompting capabilities](https://docs.mistral.ai/docs/guides/prompting-capabilities.md): Learn effective prompting techniques for classification, summarization, personalization, and evaluation with Mistral models -[Sampling](https://docs.mistral.ai/docs/guides/sampling.md): Learn how to adjust LLM sampling parameters like Temperature, Top P, and penalties for better output control -[Tokenization](https://docs.mistral.ai/docs/guides/tokenization.md): Learn about Mistral AI's tokenization process, including subword tokenization, control tokens, and Python implementation for LLMs \ No newline at end of file +[Agents & Conversations](https://docs.mistral.ai/docs/.docs/agents/agents_and_conversations.md): Agents, 
Conversations, and Entries enhance API interactions with tools, history, and flexible event representation +[Agents Function Calling](https://docs.mistral.ai/docs/.docs/agents/agents_function_calling.md): Agents use function calling to execute tools and workflows, with built-in connectors and custom JSON schema support +[Agents Introduction](https://docs.mistral.ai/docs/.docs/agents/agents_introduction.md): AI agents are autonomous systems powered by LLMs that plan, use tools, and execute tasks to achieve goals, with APIs for multimodal models, persistent state, and collaboration +[Code Interpreter](https://docs.mistral.ai/docs/.docs/agents/connectors/code_interpreter.md): Code Interpreter enables secure, on-demand code execution in isolated containers for data analysis, graphing, and more +[Connectors Overview](https://docs.mistral.ai/docs/.docs/agents/connectors/connectors_overview.md): Connectors enable Agents and users to access tools like websearch, code interpreter, and more for on-demand answers +[Document Library](https://docs.mistral.ai/docs/.docs/agents/connectors/document_library.md): Document Library is a built-in RAG tool for agents to access and manage uploaded documents in Mistral Cloud +[Image Generation](https://docs.mistral.ai/docs/.docs/agents/connectors/image_generation.md): Image Generation tool enables agents to create images on demand
+[Websearch](https://docs.mistral.ai/docs/.docs/agents/connectors/websearch.md): Websearch enables models to browse the web for real-time info, bypassing training data limitations with search and URL access +[Agents Handoffs](https://docs.mistral.ai/docs/.docs/agents/handoffs.md): Agents Handoffs enable seamless task delegation and conversation transfers between multiple agents in automated workflows +[MCP](https://docs.mistral.ai/docs/.docs/agents/mcp.md): MCP standardizes AI model integration with data sources for seamless, secure, and efficient contextual access +[Audio & Transcription](https://docs.mistral.ai/docs/.docs/capabilities/audio_and_transcription.md): Audio & Transcription: Models for chat and transcription with audio input support +[Batch Inference](https://docs.mistral.ai/docs/.docs/capabilities/batch_inference.md): Prepare and upload batch requests, then create a job to process them with specified models and endpoints +[Citations and References](https://docs.mistral.ai/docs/.docs/capabilities/citations_and_references.md): Citations and references enable models to ground responses with sources, enhancing RAG and agentic applications +[Coding](https://docs.mistral.ai/docs/.docs/capabilities/coding.md): LLMs for coding: Codestral for code generation, Devstral for agentic tool use, with FIM and chat endpoints +[Annotations](https://docs.mistral.ai/docs/.docs/capabilities/document_ai/annotations.md): Mistral Document AI API adds structured JSON annotations for OCR, including bbox and document annotations for efficient data extraction +[Basic OCR](https://docs.mistral.ai/docs/.docs/capabilities/document_ai/basic_ocr.md): Extract text and structured content from PDFs with Mistral's OCR API, preserving formatting and supporting multiple formats +[Document AI](https://docs.mistral.ai/docs/.docs/capabilities/document_ai/document_ai_overview.md): Mistral Document AI offers enterprise-level OCR, structured data extraction, and multilingual 
support for fast, accurate document processing +[Document QnA](https://docs.mistral.ai/docs/.docs/capabilities/document_ai/document_qna.md): Document AI QnA enables natural language queries on documents using OCR and large language models for insights and answers +[Code Embeddings](https://docs.mistral.ai/docs/.docs/capabilities/embeddings/code_embeddings.md): Code embeddings power retrieval, clustering, and analytics for code databases and coding assistants +[Embeddings Overview](https://docs.mistral.ai/docs/.docs/capabilities/embeddings/embeddings_overview.md): Mistral AI's Embeddings API provides state-of-the-art vector representations for text and code, enabling NLP tasks like retrieval, clustering, and search +[Text Embeddings](https://docs.mistral.ai/docs/.docs/capabilities/embeddings/text_embeddings.md): Generate 1024-dimension text embeddings using Mistral AI's embeddings API for NLP applications +[Classifier Factory](https://docs.mistral.ai/docs/.docs/capabilities/finetuning/classifier-factory.md): Classifier Factory: Tools for moderation, intent detection, and sentiment analysis to enhance efficiency and user experience +[Fine-tuning Overview](https://docs.mistral.ai/docs/.docs/capabilities/finetuning/finetuning_overview.md): Learn about fine-tuning costs, storage fees, and when to choose it over prompt engineering for AI models +[Text & Vision Fine-tuning](https://docs.mistral.ai/docs/.docs/capabilities/finetuning/text-vision-finetuning.md): Fine-tune text and vision models for domain-specific tasks or conversational styles using JSONL datasets +[Function calling](https://docs.mistral.ai/docs/.docs/capabilities/function-calling.md): Mistral models enable function calling to integrate external tools for custom applications and problem-solving +[Moderation](https://docs.mistral.ai/docs/.docs/capabilities/moderation.md): New moderation API using Mistral model to detect harmful text in raw and conversational content +[Predicted 
outputs](https://docs.mistral.ai/docs/.docs/capabilities/predicted-outputs.md): Optimizes response time by predefining predictable content to improve efficiency in tasks like code editing +[Reasoning](https://docs.mistral.ai/docs/.docs/capabilities/reasoning.md): Reasoning enhances CoT by generating logical steps before conclusions, improving problem-solving with deeper exploration +[Custom Structured Output](https://docs.mistral.ai/docs/.docs/capabilities/structured-output/custom.md): Define and enforce JSON output structure using Pydantic models with Mistral AI +[JSON mode](https://docs.mistral.ai/docs/.docs/capabilities/structured-output/json-mode.md): Enable JSON mode by setting `response_format` to `{\"type\": \"json_object\"}` in API requests +[Structured Output](https://docs.mistral.ai/docs/.docs/capabilities/structured-output/overview.md): Learn to generate structured JSON or custom outputs from LLMs for reliable agent workflows +[Text and Chat Completions](https://docs.mistral.ai/docs/.docs/capabilities/text_and_chat_completions.md): Mistral models enable chat and text completions via natural language prompts, with flexible API options for streaming and async responses +[Vision](https://docs.mistral.ai/docs/.docs/capabilities/vision.md): Vision models analyze images and text for multimodal insights, supporting applications like document parsing and data extraction +[AWS Bedrock](https://docs.mistral.ai/docs/.docs/deployment/cloud/aws.md): Deploy Mistral AI models on AWS Bedrock as fully managed, serverless endpoints +[Azure AI](https://docs.mistral.ai/docs/.docs/deployment/cloud/azure.md): Deploy Mistral AI models on Azure AI with pay-as-you-go or real-time GPU-based endpoints +[IBM watsonx.ai](https://docs.mistral.ai/docs/.docs/deployment/cloud/ibm-watsonx.md): Mistral AI's Large model on IBM watsonx.ai for managed & on-premise deployments with API access setup +[Outscale](https://docs.mistral.ai/docs/.docs/deployment/cloud/outscale.md): Deploy and query 
Mistral AI models on Outscale via managed VMs and GPUs +[Cloud](https://docs.mistral.ai/docs/.docs/deployment/cloud/overview.md): Access Mistral AI models via Azure, AWS, Google Cloud, Snowflake, IBM, and Outscale using cloud credits +[Snowflake Cortex](https://docs.mistral.ai/docs/.docs/deployment/cloud/sfcortex.md): Access Mistral AI models on Snowflake Cortex as serverless, fully managed endpoints +[Vertex AI](https://docs.mistral.ai/docs/.docs/deployment/cloud/vertex.md): Deploy Mistral AI models on Google Cloud Vertex AI as serverless endpoints +[Workspaces](https://docs.mistral.ai/docs/.docs/deployment/laplateforme/organization.md): La Plateforme workspaces enable team collaboration, access management, and shared fine-tuned models +[La Plateforme](https://docs.mistral.ai/docs/.docs/deployment/laplateforme/overview.md): Mistral AI's pay-as-you-go API platform for accessing latest large language models +[Pricing](https://docs.mistral.ai/docs/.docs/deployment/laplateforme/pricing.md): Check the pricing page for detailed API cost information 
+[Rate limit and usage tiers](https://docs.mistral.ai/docs/.docs/deployment/laplateforme/tier.md): Learn about Mistral's API rate limits, usage tiers, and how to check or upgrade your workspace limits +[Deploy with Cerebrium](https://docs.mistral.ai/docs/.docs/deployment/self-deployment/cerebrium.md): Deploy AI apps effortlessly with Cerebrium's serverless GPU infrastructure, auto-scaling and pay-per-use +[Deploy with Cloudflare Workers AI](https://docs.mistral.ai/docs/.docs/deployment/self-deployment/cloudflare.md): Deploy AI models on Cloudflare's global network with serverless GPUs via Workers AI +[Self-deployment](https://docs.mistral.ai/docs/.docs/deployment/self-deployment/overview.md): Deploy Mistral AI models on your infrastructure using vLLM, TensorRT-LLM, TGI, or tools like SkyPilot and Cerebrium +[Deploy with SkyPilot](https://docs.mistral.ai/docs/.docs/deployment/self-deployment/skypilot.md): Deploy AI models on any cloud with SkyPilot for cost savings, high GPU availability, and managed execution +[Text Generation Inference](https://docs.mistral.ai/docs/.docs/deployment/self-deployment/tgi.md): TGI is a high-performance toolkit for deploying and serving open-access LLMs with features like quantization and streaming +[TensorRT](https://docs.mistral.ai/docs/.docs/deployment/self-deployment/trt.md): Guide to building and deploying TensorRT-LLM engines for Mistral-7B and Mixtral-8X7B models +[vLLM](https://docs.mistral.ai/docs/.docs/deployment/self-deployment/vllm.md): vLLM is an open-source LLM inference engine optimized for deploying Mistral models on-premise +[SDK Clients](https://docs.mistral.ai/docs/.docs/getting-started/clients.md): Python & Typescript SDK clients for Mistral AI, with community third-party options +[Bienvenue to Mistral AI Documentation](https://docs.mistral.ai/docs/.docs/getting-started/docs_introduction.md): Mistral AI offers open-source and commercial LLMs for developers, with premier models like Mistral Medium and 
Codestral +[Glossary](https://docs.mistral.ai/docs/.docs/getting-started/glossary.md): Glossary of key terms related to large language models (LLMs) and text generation +[Model customization](https://docs.mistral.ai/docs/.docs/getting-started/model_customization.md): Guide to building applications with custom LLMs for iterative, user-driven AI development +[Models Benchmarks](https://docs.mistral.ai/docs/.docs/getting-started/models/benchmark.md): Standardized benchmarks evaluate LLM performance, comparing strengths in reasoning, multilingual tasks, math, and code generation +[Model selection](https://docs.mistral.ai/docs/.docs/getting-started/models/model_selection.md): Guide to selecting Mistral models based on performance, cost, and use-case complexity +[Models Overview](https://docs.mistral.ai/docs/.docs/getting-started/models/overview.md): Mistral offers open and premier models, including multimodal and reasoning options with API access and commercial licensing +[Model weights](https://docs.mistral.ai/docs/.docs/getting-started/models/weights.md): Open-source pre-trained and instruction-tuned models with varying licenses; commercial options available +[Quickstart](https://docs.mistral.ai/docs/.docs/getting-started/quickstart.md): Set up your Mistral account, configure billing, and generate API keys to start using Mistral AI +[Basic RAG](https://docs.mistral.ai/docs/.docs/guides/basic-RAG.md): RAG combines LLMs with retrieval systems to generate answers using external knowledge +[Ambassador](https://docs.mistral.ai/docs/.docs/guides/contribute/ambassador.md): Join Mistral AI's Ambassador Program to advocate for AI, share expertise, and support the community. 
Apply by July 1, 2025 +[Contribute](https://docs.mistral.ai/docs/.docs/guides/contribute/overview.md): Learn how to contribute to Mistral AI through docs, code, or the Ambassador Program +[Evaluation](https://docs.mistral.ai/docs/.docs/guides/evaluation.md): Guide to evaluating LLMs for specific use cases with metrics, LLM, and human-based methods +[Fine-tuning](https://docs.mistral.ai/docs/.docs/guides/finetuning.md): Fine-tuning models incurs a $2 monthly storage fee; see pricing for details +[ 01 Intro Basics](https://docs.mistral.ai/docs/.docs/guides/finetuning_sections/_01_intro_basics.md): Learn the basics of fine-tuning LLMs to optimize performance for specific tasks using Mistral AI's tools +[ 02 Prepare Dataset](https://docs.mistral.ai/docs/.docs/guides/finetuning_sections/_02_prepare_dataset.md): Prepare training data for fine-tuning models with specific use cases and examples +[download the validation and reformat script](https://docs.mistral.ai/docs/.docs/guides/finetuning_sections/_03_e2e_examples.md): Download the reformat_data.py script to validate and reformat Mistral API fine-tuning datasets +[get data from hugging face](https://docs.mistral.ai/docs/.docs/guides/finetuning_sections/_04_faq.md): Learn how to fetch, validate, and format data from Hugging Face for Mistral models +[Observability](https://docs.mistral.ai/docs/.docs/guides/observability.md): Observability ensures visibility, debugging, and continuous improvement for LLM systems in production 
+[Other resources](https://docs.mistral.ai/docs/.docs/guides/other-resources.md): Explore Mistral AI Cookbook for code examples, community contributions, and third-party tool integrations +[Prefix](https://docs.mistral.ai/docs/.docs/guides/prefix.md): Prefixes enhance instruction adherence and response control for models in various use cases +[Prompting capabilities](https://docs.mistral.ai/docs/.docs/guides/prompting-capabilities.md): Learn how to craft effective prompts for classification, summarization, personalization, and evaluation with Mistral models +[Sampling](https://docs.mistral.ai/docs/.docs/guides/sampling.md): Learn how to adjust LLM sampling parameters like Temperature, Top P, and penalties for better output control +[Tokenization](https://docs.mistral.ai/docs/.docs/guides/tokenization.md): Tokenization breaks text into subword units for LLM processing, with Mistral AI's open-source tools for Python \ No newline at end of file From 5edbcddf2e83c335c55c33fa6eb6bf2109291198 Mon Sep 17 00:00:00 2001 From: Ravi Theja Desetty Date: Mon, 1 Sep 2025 22:02:26 +0530 Subject: [PATCH 2/3] update --- static/llms.txt | 150 ++++++++++++++++++++++++------------------------ 1 file changed, 75 insertions(+), 75 deletions(-) diff --git a/static/llms.txt b/static/llms.txt index 13a06208..2ebd1702 100644 --- a/static/llms.txt +++ b/static/llms.txt @@ -2,78 +2,78 @@ ## Docs -[Agents & Conversations](https://docs.mistral.ai/docs/.docs/agents/agents_and_conversations.md): Agents, Conversations, and Entries enhance API interactions with tools, history, and flexible event representation -[Agents Function Calling](https://docs.mistral.ai/docs/.docs/agents/agents_function_calling.md): Agents use function calling to execute tools and workflows, with built-in connectors and custom JSON schema support -[Agents Introduction](https://docs.mistral.ai/docs/.docs/agents/agents_introduction.md): AI agents are autonomous systems powered by LLMs that plan, use tools, and 
execute tasks to achieve goals, with APIs for multimodal models, persistent state, and collaboration -[Code Interpreter](https://docs.mistral.ai/docs/.docs/agents/connectors/code_interpreter.md): Code Interpreter enables secure, on-demand code execution in isolated containers for data analysis, graphing, and more -[Connectors Overview](https://docs.mistral.ai/docs/.docs/agents/connectors/connectors_overview.md): Connectors enable Agents and users to access tools like websearch, code interpreter, and more for on-demand answers -[Document Library](https://docs.mistral.ai/docs/.docs/agents/connectors/document_library.md): Document Library is a built-in RAG tool for agents to access and manage uploaded documents in Mistral Cloud -[Image Generation](https://docs.mistral.ai/docs/.docs/agents/connectors/image_generation.md): Image Generation tool enables agents to create images on demand." (99 characters) -[Websearch](https://docs.mistral.ai/docs/.docs/agents/connectors/websearch.md): Websearch enables models to browse the web for real-time info, bypassing training data limitations with search and URL access -[Agents Handoffs](https://docs.mistral.ai/docs/.docs/agents/handoffs.md): Agents Handoffs enable seamless task delegation and conversation transfers between multiple agents in automated workflows -[MCP](https://docs.mistral.ai/docs/.docs/agents/mcp.md): MCP standardizes AI model integration with data sources for seamless, secure, and efficient contextual access -[Audio & Transcription](https://docs.mistral.ai/docs/.docs/capabilities/audio_and_transcription.md): Audio & Transcription: Models for chat and transcription with audio input support -[Batch Inference](https://docs.mistral.ai/docs/.docs/capabilities/batch_inference.md): Prepare and upload batch requests, then create a job to process them with specified models and endpoints -[Citations and References](https://docs.mistral.ai/docs/.docs/capabilities/citations_and_references.md): Citations and references enable 
models to ground responses with sources, enhancing RAG and agentic applications -[Coding](https://docs.mistral.ai/docs/.docs/capabilities/coding.md): LLMs for coding: Codestral for code generation, Devstral for agentic tool use, with FIM and chat endpoints -[Annotations](https://docs.mistral.ai/docs/.docs/capabilities/document_ai/annotations.md): Mistral Document AI API adds structured JSON annotations for OCR, including bbox and document annotations for efficient data extraction -[Basic OCR](https://docs.mistral.ai/docs/.docs/capabilities/document_ai/basic_ocr.md): Extract text and structured content from PDFs with Mistral's OCR API, preserving formatting and supporting multiple formats -[Document AI](https://docs.mistral.ai/docs/.docs/capabilities/document_ai/document_ai_overview.md): Mistral Document AI offers enterprise-level OCR, structured data extraction, and multilingual support for fast, accurate document processing -[Document QnA](https://docs.mistral.ai/docs/.docs/capabilities/document_ai/document_qna.md): Document AI QnA enables natural language queries on documents using OCR and large language models for insights and answers -[Code Embeddings](https://docs.mistral.ai/docs/.docs/capabilities/embeddings/code_embeddings.md): Code embeddings power retrieval, clustering, and analytics for code databases and coding assistants -[Embeddings Overview](https://docs.mistral.ai/docs/.docs/capabilities/embeddings/embeddings_overview.md): Mistral AI's Embeddings API provides state-of-the-art vector representations for text and code, enabling NLP tasks like retrieval, clustering, and search -[Text Embeddings](https://docs.mistral.ai/docs/.docs/capabilities/embeddings/text_embeddings.md): Generate 1024-dimension text embeddings using Mistral AI's embeddings API for NLP applications -[Classifier Factory](https://docs.mistral.ai/docs/.docs/capabilities/finetuning/classifier-factory.md): Classifier Factory: Tools for moderation, intent detection, and sentiment analysis 
to enhance efficiency and user experience -[Fine-tuning Overview](https://docs.mistral.ai/docs/.docs/capabilities/finetuning/finetuning_overview.md): Learn about fine-tuning costs, storage fees, and when to choose it over prompt engineering for AI models -[Text & Vision Fine-tuning](https://docs.mistral.ai/docs/.docs/capabilities/finetuning/text-vision-finetuning.md): Fine-tune text and vision models for domain-specific tasks or conversational styles using JSONL datasets -[Function calling](https://docs.mistral.ai/docs/.docs/capabilities/function-calling.md): Mistral models enable function calling to integrate external tools for custom applications and problem-solving -[Moderation](https://docs.mistral.ai/docs/.docs/capabilities/moderation.md): New moderation API using Mistral model to detect harmful text in raw and conversational content -[Predicted outputs](https://docs.mistral.ai/docs/.docs/capabilities/predicted-outputs.md): Optimizes response time by predefining predictable content to improve efficiency in tasks like code editing -[Reasoning](https://docs.mistral.ai/docs/.docs/capabilities/reasoning.md): Reasoning enhances CoT by generating logical steps before conclusions, improving problem-solving with deeper exploration -[Custom Structured Output](https://docs.mistral.ai/docs/.docs/capabilities/structured-output/custom.md): Define and enforce JSON output structure using Pydantic models with Mistral AI -[JSON mode](https://docs.mistral.ai/docs/.docs/capabilities/structured-output/json-mode.md): Enable JSON mode by setting `response_format` to `{\"type\": \"json_object\"}` in API requests -[Structured Output](https://docs.mistral.ai/docs/.docs/capabilities/structured-output/overview.md): Learn to generate structured JSON or custom outputs from LLMs for reliable agent workflows -[Text and Chat Completions](https://docs.mistral.ai/docs/.docs/capabilities/text_and_chat_completions.md): Mistral models enable chat and text completions via natural language prompts, 
with flexible API options for streaming and async responses -[Vision](https://docs.mistral.ai/docs/.docs/capabilities/vision.md): Vision models analyze images and text for multimodal insights, supporting applications like document parsing and data extraction -[AWS Bedrock](https://docs.mistral.ai/docs/.docs/deployment/cloud/aws.md): Deploy Mistral AI models on AWS Bedrock as fully managed, serverless endpoints -[Azure AI](https://docs.mistral.ai/docs/.docs/deployment/cloud/azure.md): Deploy Mistral AI models on Azure AI with pay-as-you-go or real-time GPU-based endpoints -[IBM watsonx.ai](https://docs.mistral.ai/docs/.docs/deployment/cloud/ibm-watsonx.md): Mistral AI's Large model on IBM watsonx.ai for managed & on-premise deployments with API access setup -[Outscale](https://docs.mistral.ai/docs/.docs/deployment/cloud/outscale.md): Deploy and query Mistral AI models on Outscale via managed VMs and GPUs -[Cloud](https://docs.mistral.ai/docs/.docs/deployment/cloud/overview.md): Access Mistral AI models via Azure, AWS, Google Cloud, Snowflake, IBM, and Outscale using cloud credits -[Snowflake Cortex](https://docs.mistral.ai/docs/.docs/deployment/cloud/sfcortex.md): Access Mistral AI models on Snowflake Cortex as serverless, fully managed endpoints -[Vertex AI](https://docs.mistral.ai/docs/.docs/deployment/cloud/vertex.md): Deploy Mistral AI models on Google Cloud Vertex AI as serverless endpoints -[Workspaces](https://docs.mistral.ai/docs/.docs/deployment/laplateforme/organization.md): La Plateforme workspaces enable team collaboration, access management, and shared fine-tuned models." (99 characters) -[La Plateforme](https://docs.mistral.ai/docs/.docs/deployment/laplateforme/overview.md): Mistral AI's pay-as-you-go API platform for accessing latest large language models -[Pricing](https://docs.mistral.ai/docs/.docs/deployment/laplateforme/pricing.md): Check the pricing page for detailed API cost information." 
(99 characters) -[Rate limit and usage tiers](https://docs.mistral.ai/docs/.docs/deployment/laplateforme/tier.md): Learn about Mistral's API rate limits, usage tiers, and how to check or upgrade your workspace limits -[Deploy with Cerebrium](https://docs.mistral.ai/docs/.docs/deployment/self-deployment/cerebrium.md): Deploy AI apps effortlessly with Cerebrium's serverless GPU infrastructure, auto-scaling and pay-per-use -[Deploy with Cloudflare Workers AI](https://docs.mistral.ai/docs/.docs/deployment/self-deployment/cloudflare.md): Deploy AI models on Cloudflare's global network with serverless GPUs via Workers AI -[Self-deployment](https://docs.mistral.ai/docs/.docs/deployment/self-deployment/overview.md): Deploy Mistral AI models on your infrastructure using vLLM, TensorRT-LLM, TGI, or tools like SkyPilot and Cerebrium -[Deploy with SkyPilot](https://docs.mistral.ai/docs/.docs/deployment/self-deployment/skypilot.md): Deploy AI models on any cloud with SkyPilot for cost savings, high GPU availability, and managed execution -[Text Generation Inference](https://docs.mistral.ai/docs/.docs/deployment/self-deployment/tgi.md): TGI is a high-performance toolkit for deploying and serving open-access LLMs with features like quantization and streaming -[TensorRT](https://docs.mistral.ai/docs/.docs/deployment/self-deployment/trt.md): Guide to building and deploying TensorRT-LLM engines for Mistral-7B and Mixtral-8X7B models -[vLLM](https://docs.mistral.ai/docs/.docs/deployment/self-deployment/vllm.md): vLLM is an open-source LLM inference engine optimized for deploying Mistral models on-premise -[SDK Clients](https://docs.mistral.ai/docs/.docs/getting-started/clients.md): Python & Typescript SDK clients for Mistral AI, with community third-party options -[Bienvenue to Mistral AI Documentation](https://docs.mistral.ai/docs/.docs/getting-started/docs_introduction.md): Mistral AI offers open-source and commercial LLMs for developers, with premier models like Mistral Medium and 
Codestral -[Glossary](https://docs.mistral.ai/docs/.docs/getting-started/glossary.md): Glossary of key terms related to large language models (LLMs) and text generation -[Model customization](https://docs.mistral.ai/docs/.docs/getting-started/model_customization.md): Guide to building applications with custom LLMs for iterative, user-driven AI development -[Models Benchmarks](https://docs.mistral.ai/docs/.docs/getting-started/models/benchmark.md): Standardized benchmarks evaluate LLM performance, comparing strengths in reasoning, multilingual tasks, math, and code generation -[Model selection](https://docs.mistral.ai/docs/.docs/getting-started/models/model_selection.md): Guide to selecting Mistral models based on performance, cost, and use-case complexity -[Models Overview](https://docs.mistral.ai/docs/.docs/getting-started/models/overview.md): Mistral offers open and premier models, including multimodal and reasoning options with API access and commercial licensing -[Model weights](https://docs.mistral.ai/docs/.docs/getting-started/models/weights.md): Open-source pre-trained and instruction-tuned models with varying licenses; commercial options available -[Quickstart](https://docs.mistral.ai/docs/.docs/getting-started/quickstart.md): Set up your Mistral account, configure billing, and generate API keys to start using Mistral AI -[Basic RAG](https://docs.mistral.ai/docs/.docs/guides/basic-RAG.md): RAG combines LLMs with retrieval systems to generate answers using external knowledge." (99 characters) -[Ambassador](https://docs.mistral.ai/docs/.docs/guides/contribute/ambassador.md): Join Mistral AI's Ambassador Program to advocate for AI, share expertise, and support the community. 
Apply by July 1, 2025 -[Contribute](https://docs.mistral.ai/docs/.docs/guides/contribute/overview.md): Learn how to contribute to Mistral AI through docs, code, or the Ambassador Program -[Evaluation](https://docs.mistral.ai/docs/.docs/guides/evaluation.md): Guide to evaluating LLMs for specific use cases with metrics, LLM, and human-based methods -[Fine-tuning](https://docs.mistral.ai/docs/.docs/guides/finetuning.md): Fine-tuning models incurs a $2 monthly storage fee; see pricing for details -[ 01 Intro Basics](https://docs.mistral.ai/docs/.docs/guides/finetuning_sections/_01_intro_basics.md): Learn the basics of fine-tuning LLMs to optimize performance for specific tasks using Mistral AI's tools -[ 02 Prepare Dataset](https://docs.mistral.ai/docs/.docs/guides/finetuning_sections/_02_prepare_dataset.md): Prepare training data for fine-tuning models with specific use cases and examples -[download the validation and reformat script](https://docs.mistral.ai/docs/.docs/guides/finetuning_sections/_03_e2e_examples.md): Download the reformat_data.py script to validate and reformat Mistral API fine-tuning datasets -[get data from hugging face](https://docs.mistral.ai/docs/.docs/guides/finetuning_sections/_04_faq.md): Learn how to fetch, validate, and format data from Hugging Face for Mistral models -[Observability](https://docs.mistral.ai/docs/.docs/guides/observability.md): Observability ensures visibility, debugging, and continuous improvement for LLM systems in production." 
(99 characters) -[Other resources](https://docs.mistral.ai/docs/.docs/guides/other-resources.md): Explore Mistral AI Cookbook for code examples, community contributions, and third-party tool integrations -[Prefix](https://docs.mistral.ai/docs/.docs/guides/prefix.md): Prefixes enhance instruction adherence and response control for models in various use cases -[Prompting capabilities](https://docs.mistral.ai/docs/.docs/guides/prompting-capabilities.md): Learn how to craft effective prompts for classification, summarization, personalization, and evaluation with Mistral models -[Sampling](https://docs.mistral.ai/docs/.docs/guides/sampling.md): Learn how to adjust LLM sampling parameters like Temperature, Top P, and penalties for better output control -[Tokenization](https://docs.mistral.ai/docs/.docs/guides/tokenization.md): Tokenization breaks text into subword units for LLM processing, with Mistral AI's open-source tools for Python \ No newline at end of file +[Agents & Conversations](https://docs.mistral.ai/docs/agents/agents_and_conversations.md): Agents, Conversations, and Entries enhance API interactions with tools, history, and flexible event representation +[Agents Function Calling](https://docs.mistral.ai/docs/agents/agents_function_calling.md): Agents use function calling to execute tools and workflows, with built-in connectors and custom JSON schema support +[Agents Introduction](https://docs.mistral.ai/docs/agents/agents_introduction.md): AI agents are autonomous systems powered by LLMs that plan, use tools, and execute tasks to achieve goals, with APIs for multimodal models, persistent state, and collaboration +[Code Interpreter](https://docs.mistral.ai/docs/agents/connectors/code_interpreter.md): Code Interpreter enables secure, on-demand code execution in isolated containers for data analysis, graphing, and more +[Connectors Overview](https://docs.mistral.ai/docs/agents/connectors/connectors_overview.md): Connectors enable Agents and users to access 
tools like websearch, code interpreter, and more for on-demand answers +[Document Library](https://docs.mistral.ai/docs/agents/connectors/document_library.md): Document Library is a built-in RAG tool for agents to access and manage uploaded documents in Mistral Cloud +[Image Generation](https://docs.mistral.ai/docs/agents/connectors/image_generation.md): Image Generation tool enables agents to create images on demand +[Websearch](https://docs.mistral.ai/docs/agents/connectors/websearch.md): Websearch enables models to browse the web for real-time info, bypassing training data limitations with search and URL access +[Agents Handoffs](https://docs.mistral.ai/docs/agents/handoffs.md): Agents Handoffs enable seamless task delegation and conversation transfers between multiple agents in automated workflows +[MCP](https://docs.mistral.ai/docs/agents/mcp.md): MCP standardizes AI model integration with data sources for seamless, secure, and efficient contextual access +[Audio & Transcription](https://docs.mistral.ai/docs/capabilities/audio_and_transcription.md): Audio & Transcription: Models for chat and transcription with audio input support +[Batch Inference](https://docs.mistral.ai/docs/capabilities/batch_inference.md): Prepare and upload batch requests, then create a job to process them with specified models and endpoints +[Citations and References](https://docs.mistral.ai/docs/capabilities/citations_and_references.md): Citations and references enable models to ground responses with sources, enhancing RAG and agentic applications +[Coding](https://docs.mistral.ai/docs/capabilities/coding.md): LLMs for coding: Codestral for code generation, Devstral for agentic tool use, with FIM and chat endpoints +[Annotations](https://docs.mistral.ai/docs/capabilities/document_ai/annotations.md): Mistral Document AI API adds structured JSON annotations for OCR, including bbox and document annotations for efficient data extraction +[Basic 
OCR](https://docs.mistral.ai/docs/capabilities/document_ai/basic_ocr.md): Extract text and structured content from PDFs with Mistral's OCR API, preserving formatting and supporting multiple formats +[Document AI](https://docs.mistral.ai/docs/capabilities/document_ai/document_ai_overview.md): Mistral Document AI offers enterprise-level OCR, structured data extraction, and multilingual support for fast, accurate document processing +[Document QnA](https://docs.mistral.ai/docs/capabilities/document_ai/document_qna.md): Document AI QnA enables natural language queries on documents using OCR and large language models for insights and answers +[Code Embeddings](https://docs.mistral.ai/docs/capabilities/embeddings/code_embeddings.md): Code embeddings power retrieval, clustering, and analytics for code databases and coding assistants +[Embeddings Overview](https://docs.mistral.ai/docs/capabilities/embeddings/embeddings_overview.md): Mistral AI's Embeddings API provides state-of-the-art vector representations for text and code, enabling NLP tasks like retrieval, clustering, and search +[Text Embeddings](https://docs.mistral.ai/docs/capabilities/embeddings/text_embeddings.md): Generate 1024-dimension text embeddings using Mistral AI's embeddings API for NLP applications +[Classifier Factory](https://docs.mistral.ai/docs/capabilities/finetuning/classifier-factory.md): Classifier Factory: Tools for moderation, intent detection, and sentiment analysis to enhance efficiency and user experience +[Fine-tuning Overview](https://docs.mistral.ai/docs/capabilities/finetuning/finetuning_overview.md): Learn about fine-tuning costs, storage fees, and when to choose it over prompt engineering for AI models +[Text & Vision Fine-tuning](https://docs.mistral.ai/docs/capabilities/finetuning/text-vision-finetuning.md): Fine-tune text and vision models for domain-specific tasks or conversational styles using JSONL datasets +[Function 
calling](https://docs.mistral.ai/docs/capabilities/function-calling.md): Mistral models enable function calling to integrate external tools for custom applications and problem-solving +[Moderation](https://docs.mistral.ai/docs/capabilities/moderation.md): New moderation API using Mistral model to detect harmful text in raw and conversational content +[Predicted outputs](https://docs.mistral.ai/docs/capabilities/predicted-outputs.md): Optimizes response time by predefining predictable content to improve efficiency in tasks like code editing +[Reasoning](https://docs.mistral.ai/docs/capabilities/reasoning.md): Reasoning enhances CoT by generating logical steps before conclusions, improving problem-solving with deeper exploration +[Custom Structured Output](https://docs.mistral.ai/docs/capabilities/structured-output/custom.md): Define and enforce JSON output structure using Pydantic models with Mistral AI +[JSON mode](https://docs.mistral.ai/docs/capabilities/structured-output/json-mode.md): Enable JSON mode by setting `response_format` to `{\"type\": \"json_object\"}` in API requests +[Structured Output](https://docs.mistral.ai/docs/capabilities/structured-output/overview.md): Learn to generate structured JSON or custom outputs from LLMs for reliable agent workflows +[Text and Chat Completions](https://docs.mistral.ai/docs/capabilities/text_and_chat_completions.md): Mistral models enable chat and text completions via natural language prompts, with flexible API options for streaming and async responses +[Vision](https://docs.mistral.ai/docs/capabilities/vision.md): Vision models analyze images and text for multimodal insights, supporting applications like document parsing and data extraction +[AWS Bedrock](https://docs.mistral.ai/docs/deployment/cloud/aws.md): Deploy Mistral AI models on AWS Bedrock as fully managed, serverless endpoints +[Azure AI](https://docs.mistral.ai/docs/deployment/cloud/azure.md): Deploy Mistral AI models on Azure AI with pay-as-you-go or 
real-time GPU-based endpoints +[IBM watsonx.ai](https://docs.mistral.ai/docs/deployment/cloud/ibm-watsonx.md): Mistral AI's Large model on IBM watsonx.ai for managed & on-premise deployments with API access setup +[Outscale](https://docs.mistral.ai/docs/deployment/cloud/outscale.md): Deploy and query Mistral AI models on Outscale via managed VMs and GPUs +[Cloud](https://docs.mistral.ai/docs/deployment/cloud/overview.md): Access Mistral AI models via Azure, AWS, Google Cloud, Snowflake, IBM, and Outscale using cloud credits +[Snowflake Cortex](https://docs.mistral.ai/docs/deployment/cloud/sfcortex.md): Access Mistral AI models on Snowflake Cortex as serverless, fully managed endpoints +[Vertex AI](https://docs.mistral.ai/docs/deployment/cloud/vertex.md): Deploy Mistral AI models on Google Cloud Vertex AI as serverless endpoints +[Workspaces](https://docs.mistral.ai/docs/deployment/laplateforme/organization.md): La Plateforme workspaces enable team collaboration, access management, and shared fine-tuned models +[La Plateforme](https://docs.mistral.ai/docs/deployment/laplateforme/overview.md): Mistral AI's pay-as-you-go API platform for accessing latest large language models +[Pricing](https://docs.mistral.ai/docs/deployment/laplateforme/pricing.md): Check the pricing page for detailed API cost information 
+[Rate limit and usage tiers](https://docs.mistral.ai/docs/deployment/laplateforme/tier.md): Learn about Mistral's API rate limits, usage tiers, and how to check or upgrade your workspace limits +[Deploy with Cerebrium](https://docs.mistral.ai/docs/deployment/self-deployment/cerebrium.md): Deploy AI apps effortlessly with Cerebrium's serverless GPU infrastructure, auto-scaling and pay-per-use +[Deploy with Cloudflare Workers AI](https://docs.mistral.ai/docs/deployment/self-deployment/cloudflare.md): Deploy AI models on Cloudflare's global network with serverless GPUs via Workers AI +[Self-deployment](https://docs.mistral.ai/docs/deployment/self-deployment/overview.md): Deploy Mistral AI models on your infrastructure using vLLM, TensorRT-LLM, TGI, or tools like SkyPilot and Cerebrium +[Deploy with SkyPilot](https://docs.mistral.ai/docs/deployment/self-deployment/skypilot.md): Deploy AI models on any cloud with SkyPilot for cost savings, high GPU availability, and managed execution +[Text Generation Inference](https://docs.mistral.ai/docs/deployment/self-deployment/tgi.md): TGI is a high-performance toolkit for deploying and serving open-access LLMs with features like quantization and streaming +[TensorRT](https://docs.mistral.ai/docs/deployment/self-deployment/trt.md): Guide to building and deploying TensorRT-LLM engines for Mistral-7B and Mixtral-8X7B models +[vLLM](https://docs.mistral.ai/docs/deployment/self-deployment/vllm.md): vLLM is an open-source LLM inference engine optimized for deploying Mistral models on-premise +[SDK Clients](https://docs.mistral.ai/docs/getting-started/clients.md): Python & Typescript SDK clients for Mistral AI, with community third-party options +[Bienvenue to Mistral AI Documentation](https://docs.mistral.ai/docs/getting-started/docs_introduction.md): Mistral AI offers open-source and commercial LLMs for developers, with premier models like Mistral Medium and Codestral 
+[Glossary](https://docs.mistral.ai/docs/getting-started/glossary.md): Glossary of key terms related to large language models (LLMs) and text generation +[Model customization](https://docs.mistral.ai/docs/getting-started/model_customization.md): Guide to building applications with custom LLMs for iterative, user-driven AI development +[Models Benchmarks](https://docs.mistral.ai/docs/getting-started/models/benchmark.md): Standardized benchmarks evaluate LLM performance, comparing strengths in reasoning, multilingual tasks, math, and code generation +[Model selection](https://docs.mistral.ai/docs/getting-started/models/model_selection.md): Guide to selecting Mistral models based on performance, cost, and use-case complexity +[Models Overview](https://docs.mistral.ai/docs/getting-started/models/overview.md): Mistral offers open and premier models, including multimodal and reasoning options with API access and commercial licensing +[Model weights](https://docs.mistral.ai/docs/getting-started/models/weights.md): Open-source pre-trained and instruction-tuned models with varying licenses; commercial options available +[Quickstart](https://docs.mistral.ai/docs/getting-started/quickstart.md): Set up your Mistral account, configure billing, and generate API keys to start using Mistral AI +[Basic RAG](https://docs.mistral.ai/docs/guides/basic-RAG.md): RAG combines LLMs with retrieval systems to generate answers using external knowledge +[Ambassador](https://docs.mistral.ai/docs/guides/contribute/ambassador.md): Join Mistral AI's Ambassador Program to advocate for AI, share expertise, and support the community. 
Apply by July 1, 2025 +[Contribute](https://docs.mistral.ai/docs/guides/contribute/overview.md): Learn how to contribute to Mistral AI through docs, code, or the Ambassador Program +[Evaluation](https://docs.mistral.ai/docs/guides/evaluation.md): Guide to evaluating LLMs for specific use cases with metrics, LLM, and human-based methods +[Fine-tuning](https://docs.mistral.ai/docs/guides/finetuning.md): Fine-tuning models incurs a $2 monthly storage fee; see pricing for details +[ 01 Intro Basics](https://docs.mistral.ai/docs/guides/finetuning_sections/_01_intro_basics.md): Learn the basics of fine-tuning LLMs to optimize performance for specific tasks using Mistral AI's tools +[ 02 Prepare Dataset](https://docs.mistral.ai/docs/guides/finetuning_sections/_02_prepare_dataset.md): Prepare training data for fine-tuning models with specific use cases and examples +[download the validation and reformat script](https://docs.mistral.ai/docs/guides/finetuning_sections/_03_e2e_examples.md): Download the reformat_data.py script to validate and reformat Mistral API fine-tuning datasets +[get data from hugging face](https://docs.mistral.ai/docs/guides/finetuning_sections/_04_faq.md): Learn how to fetch, validate, and format data from Hugging Face for Mistral models +[Observability](https://docs.mistral.ai/docs/guides/observability.md): Observability ensures visibility, debugging, and continuous improvement for LLM systems in production." 
+[Other resources](https://docs.mistral.ai/docs/guides/other-resources.md): Explore Mistral AI Cookbook for code examples, community contributions, and third-party tool integrations
+[Prefix](https://docs.mistral.ai/docs/guides/prefix.md): Prefixes enhance instruction adherence and response control for models in various use cases
+[Prompting capabilities](https://docs.mistral.ai/docs/guides/prompting-capabilities.md): Learn how to craft effective prompts for classification, summarization, personalization, and evaluation with Mistral models
+[Sampling](https://docs.mistral.ai/docs/guides/sampling.md): Learn how to adjust LLM sampling parameters like Temperature, Top P, and penalties for better output control
+[Tokenization](https://docs.mistral.ai/docs/guides/tokenization.md): Tokenization breaks text into subword units for LLM processing, with Mistral AI's open-source tools for Python
\ No newline at end of file

From a8de15c22b41096b177e2cddcd00e4b8ef1d54f6 Mon Sep 17 00:00:00 2001
From: Ravi Theja Desetty
Date: Mon, 1 Sep 2025 22:03:31 +0530
Subject: [PATCH 3/3] update

---
 static/llms-full.txt | 150 +++++++++++++++++++++----------------------
 1 file changed, 75 insertions(+), 75 deletions(-)

diff --git a/static/llms-full.txt b/static/llms-full.txt
index 40318011..5b9f435a 100644
--- a/static/llms-full.txt
+++ b/static/llms-full.txt
@@ -324,7 +324,7 @@ Source: https://docs.mistral.ai/api/#tag/libraries_documents_reprocess_v1
post /v1/libraries/{library_id}/documents/{document_id}/reprocess

[Agents & Conversations]
-Source: https://docs.mistral.ai/docs/.docs/agents/agents_and_conversations
+Source: https://docs.mistral.ai/docs/agents/agents_and_conversations

### Objects
@@ -1450,7 +1450,7 @@ data: {"type":"conversation.response.done","usage":{"prompt_tokens":18709,"total

[Agents Function Calling]
-Source: https://docs.mistral.ai/docs/.docs/agents/agents_function_calling
+Source: https://docs.mistral.ai/docs/agents/agents_function_calling

The core of an agent relies on its tool usage capabilities, enabling it to use and call tools and workflows depending on the task it must accomplish.
@@ -1893,7 +1893,7 @@ curl --location "https://api.mistral.ai/v1/conversations/" \

[Agents Introduction]
-Source: https://docs.mistral.ai/docs/.docs/agents/agents_introduction
+Source: https://docs.mistral.ai/docs/agents/agents_introduction

## What are AI agents?
@@ -1947,7 +1947,7 @@ For more information and guides on how to use our Agents, we have the following

[Code Interpreter]
-Source: https://docs.mistral.ai/docs/.docs/agents/connectors/code_interpreter
+Source: https://docs.mistral.ai/docs/agents/connectors/code_interpreter

Code Interpreter adds the capability to safely execute code in an isolated container. This built-in [connector](../connectors) tool allows Agents to run code at any point on demand, practical for drawing graphs, data analysis, mathematical operations, code validation, and much more.
@@ -2179,7 +2179,7 @@ There are 3 main entries in the `outputs` of our request:

[Connectors Overview]
-Source: https://docs.mistral.ai/docs/.docs/agents/connectors/connectors_overview
+Source: https://docs.mistral.ai/docs/agents/connectors/connectors_overview

Connectors are tools that Agents can call at any given point. They are deployed and ready for the agents to leverage to answer questions on demand.
@@ -2288,7 +2288,7 @@ Currently, our API has 4 built-in Connector tools, here you can find how to use

[Document Library]
-Source: https://docs.mistral.ai/docs/.docs/agents/connectors/document_library
+Source: https://docs.mistral.ai/docs/agents/connectors/document_library

Document Library is a built-in [connector](../connectors) tool that enables agents to access documents from Mistral Cloud.
@@ -3219,7 +3219,7 @@ For more information regarding the use of citations, you can find more [here](..
[Image Generation]
-Source: https://docs.mistral.ai/docs/.docs/agents/connectors/image_generation
+Source: https://docs.mistral.ai/docs/agents/connectors/image_generation

Image Generation is a built-in [connector](../connectors) tool that enables agents to generate images of all kinds and forms.
@@ -3586,7 +3586,7 @@ curl --location "https://api.mistral.ai/v1/files//content" \

[Websearch]
-Source: https://docs.mistral.ai/docs/.docs/agents/connectors/websearch
+Source: https://docs.mistral.ai/docs/agents/connectors/websearch

Websearch is the capability to browse the web in search of information. This tool not only fixes models' limitation of being out of date due to their training data, but also allows them to retrieve recent information or access specific websites.
@@ -3847,7 +3847,7 @@ For more information regarding the use of citations, you can find more [here](..

[Agents Handoffs]
-Source: https://docs.mistral.ai/docs/.docs/agents/handoffs
+Source: https://docs.mistral.ai/docs/agents/handoffs

When creating and using Agents, often with access to specific tools, there are moments when you may want to call other Agents mid-action. To elaborate and engineer workflows for diverse tasks that you may want automated, this ability to give Agents tasks or hand over a conversation to other agents is called **Handoffs**.
@@ -4351,7 +4351,7 @@ content=[ToolFileChunk(tool='code_interpreter', file_id='40420c94-5f99-477f-8891

[MCP]
-Source: https://docs.mistral.ai/docs/.docs/agents/mcp
+Source: https://docs.mistral.ai/docs/agents/mcp

The Model Context Protocol (MCP) is an open standard designed to streamline the integration of AI models with various data sources and tools. By providing a standardized interface, MCP enables seamless and secure connections, allowing AI systems to access and utilize contextual information efficiently. It simplifies the development process, making it easier to build robust and interconnected AI applications.
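Aside (not part of the patch): the function-calling and handoff hunks above revolve around agents invoking tools, which the chat API declares as JSON-Schema `function` entries. A minimal sketch of such a declaration; the `get_weather` tool and the model name are hypothetical examples, not from the docs being patched:

```python
import json

# Hypothetical tool declaration in the JSON-Schema "function" shape
# used by OpenAI-compatible chat APIs such as Mistral's.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# The request body carries a list of such tools alongside the messages;
# the model may then answer with a tool call instead of plain text.
request_body = {
    "model": "mistral-medium-latest",  # assumed model name
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [get_weather_tool],
    "tool_choice": "auto",
}

print(json.dumps(request_body, indent=2))
```

Sending this body to the conversations or chat-completions endpoint requires an API key, so only the payload is constructed here.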
@@ -4763,7 +4763,7 @@ Here is a brief example of how to stream conversations:

[Audio & Transcription]
-Source: https://docs.mistral.ai/docs/.docs/capabilities/audio_and_transcription
+Source: https://docs.mistral.ai/docs/capabilities/audio_and_transcription

Audio input capabilities enable models to chat and understand audio directly. This can be used both for chat use cases via audio and for optimal transcription purposes.
@@ -5859,7 +5859,7 @@ Here are some tips if you need to handle longer audio files:

[Batch Inference]
-Source: https://docs.mistral.ai/docs/.docs/capabilities/batch_inference
+Source: https://docs.mistral.ai/docs/capabilities/batch_inference

## Prepare and upload your batch
@@ -6318,7 +6318,7 @@ Yes, due to high throughput and concurrent processing, batches may slightly exce

[Citations and References]
-Source: https://docs.mistral.ai/docs/.docs/capabilities/citations_and_references
+Source: https://docs.mistral.ai/docs/capabilities/citations_and_references

Citations enable models to ground their responses and provide references, making them a powerful feature for Retrieval-Augmented Generation (RAG) and agentic applications. This feature allows the model to provide the source of the information extracted from a document or chunk of data from a tool call.
@@ -6510,7 +6510,7 @@ This template will help get started with web search and document grounding with

[Coding]
-Source: https://docs.mistral.ai/docs/.docs/capabilities/coding
+Source: https://docs.mistral.ai/docs/capabilities/coding

LLMs are powerful tools for text generation, and they also show great performance in code generation across multiple tasks: code completion, code generation, and agentic tool use for semi-automated software development.
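Aside (not part of the patch): the batch-inference section referenced above ("Prepare and upload your batch") centers on a JSONL file where each line pairs an id with a request body. A rough sketch of building such a file; the `custom_id`/`body` field names follow common batch-API conventions and are assumptions here, not the confirmed schema:

```python
import json
import pathlib
import tempfile

prompts = ["Summarize RAG in one line.", "What is function calling?"]

# One JSON object per line: an id plus a chat-completion body
# (field names are assumptions modeled on common batch-API formats).
lines = [
    json.dumps({
        "custom_id": str(i),
        "body": {
            "model": "mistral-small-latest",  # assumed model name
            "messages": [{"role": "user", "content": p}],
            "max_tokens": 128,
        },
    })
    for i, p in enumerate(prompts)
]

batch_path = pathlib.Path(tempfile.gettempdir()) / "batch_input.jsonl"
batch_path.write_text("\n".join(lines) + "\n")

# Sanity-check: every line must round-trip as valid JSON.
parsed = [json.loads(l) for l in batch_path.read_text().splitlines()]
print(len(parsed))
```

The resulting file would then be uploaded via the files endpoint before creating the batch job.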
@@ -7084,7 +7084,7 @@ For more information visit the [Cline github repo](https://github.com/cline/clin

[Annotations]
-Source: https://docs.mistral.ai/docs/.docs/capabilities/document_ai/annotations
+Source: https://docs.mistral.ai/docs/capabilities/document_ai/annotations

# Annotations
@@ -7860,7 +7860,7 @@ A: When using Document Annotations, the file cannot have more than 8 pages. BBox

[Basic OCR]
-Source: https://docs.mistral.ai/docs/.docs/capabilities/document_ai/basic_ocr
+Source: https://docs.mistral.ai/docs/capabilities/document_ai/basic_ocr

## Document AI OCR processor
@@ -8771,7 +8771,7 @@ A: Yes, there are certain limitations for the OCR API. Uploaded document files m

[Document AI]
-Source: https://docs.mistral.ai/docs/.docs/capabilities/document_ai/document_ai_overview
+Source: https://docs.mistral.ai/docs/capabilities/document_ai/document_ai_overview

# Mistral Document AI
@@ -8796,7 +8796,7 @@ Using `client.ocr.process` as the entry point, you can access the following serv

[Document QnA]
-Source: https://docs.mistral.ai/docs/.docs/capabilities/document_ai/document_qna
+Source: https://docs.mistral.ai/docs/capabilities/document_ai/document_qna

# Document AI QnA
@@ -8996,7 +8996,7 @@ A: Yes, there are certain limitations for the Document QnA API. Uploaded documen

[Code Embeddings]
-Source: https://docs.mistral.ai/docs/.docs/capabilities/embeddings/code_embeddings
+Source: https://docs.mistral.ai/docs/capabilities/embeddings/code_embeddings

Embeddings are at the core of multiple enterprise use cases, such as **retrieval systems**, **clustering**, **code analytics**, **classification**, and a variety of search applications. With code embeddings, you can embed **code databases** and **repositories**, and power **coding assistants** with state-of-the-art retrieval capabilities.
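Aside (not part of the patch): the code-embeddings passage above describes retrieval over embedded repositories; the ranking step itself is plain cosine similarity over the returned vectors. A self-contained sketch where made-up 4-dimensional vectors stand in for real embedding-API output:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy stand-ins for embeddings of two code snippets and a query.
corpus = {
    "parse_json":  [0.9, 0.1, 0.0, 0.2],
    "http_client": [0.1, 0.8, 0.3, 0.0],
}
query = [0.85, 0.15, 0.05, 0.1]  # e.g. an embedding of "how do I decode JSON?"

# Retrieval = pick the corpus entry whose vector is closest to the query.
best = max(corpus, key=lambda name: cosine(query, corpus[name]))
print(best)
```

In a real system the vectors would come from the embeddings endpoint and the corpus would live in a vector store; the ranking logic is unchanged.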
@@ -9264,7 +9264,7 @@ For more information and guides on how to make use of our embedding sdk, we have

[Embeddings Overview]
-Source: https://docs.mistral.ai/docs/.docs/capabilities/embeddings/embeddings_overview
+Source: https://docs.mistral.ai/docs/capabilities/embeddings/embeddings_overview
Open In Colab
@@ -9617,7 +9617,7 @@ Our embedding model excels in retrieval tasks, as it is trained with retrieval i

[Classifier Factory]
-Source: https://docs.mistral.ai/docs/.docs/capabilities/finetuning/classifier-factory
+Source: https://docs.mistral.ai/docs/capabilities/finetuning/classifier-factory

In various domains and enterprises, classification models play a crucial role in enhancing efficiency, improving user experience, and ensuring compliance. These models serve diverse purposes, including but not limited to:
- **Moderation**: Classification models are essential for moderating services and classifying unwanted content. For instance, our [moderation service](../../guardrailing/#moderation-api) helps in identifying and filtering inappropriate or harmful content in real-time, ensuring a safe and respectful environment for users.
@@ -10111,7 +10111,7 @@ Explore our guides and [cookbooks](https://github.com/mistralai/cookbook) levera

[Fine-tuning Overview]
-Source: https://docs.mistral.ai/docs/.docs/capabilities/finetuning/finetuning_overview
+Source: https://docs.mistral.ai/docs/capabilities/finetuning/finetuning_overview

:::warning[ ]
Every fine-tuning job comes with a minimum fee of $4, and there's a monthly storage fee of $2 for each model. For more detailed pricing information, please visit our [pricing page](https://mistral.ai/technology/#pricing).
@@ -10152,7 +10152,7 @@ Fine-tuning has a wide range of use cases, some of which include:

[Text & Vision Fine-tuning]
-Source: https://docs.mistral.ai/docs/.docs/capabilities/finetuning/text-vision-finetuning
+Source: https://docs.mistral.ai/docs/capabilities/finetuning/text-vision-finetuning

Fine-tuning allows you to tailor a pre-trained language model to your specific needs by training it on your dataset. This guide explains how to fine-tune text and vision models, from preparing your data to training, whether you aim to improve domain-specific understanding or adapt to a unique conversational style.
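Aside (not part of the patch): the fine-tuning hunks above mention preparing your data; for instruction tuning the widely used shape is one JSON object per line holding a `messages` conversation. A minimal validation sketch; the exact accepted schema should be checked against the fine-tuning docs, this only mirrors the common format:

```python
import json

# One hypothetical training record in the common "messages" shape.
record = {
    "messages": [
        {"role": "user", "content": "Translate 'bonjour' to English."},
        {"role": "assistant", "content": "Hello."},
    ]
}

def is_valid(rec):
    # A record must carry a non-empty messages list whose turns
    # each have a known role and string content.
    msgs = rec.get("messages")
    if not isinstance(msgs, list) or not msgs:
        return False
    return all(
        m.get("role") in {"system", "user", "assistant"}
        and isinstance(m.get("content"), str)
        for m in msgs
    )

line = json.dumps(record)          # one line of the training JSONL
print(is_valid(json.loads(line)))  # → True
```

Checks like this are what the `reformat_data.py` script referenced in the guides section automates for real datasets.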
@@ -10697,7 +10697,7 @@ curl --location --request DELETE 'https://api.mistral.ai/v1/models/ft:open-mistr

[Function calling]
-Source: https://docs.mistral.ai/docs/.docs/capabilities/function-calling
+Source: https://docs.mistral.ai/docs/capabilities/function-calling

Open In Colab
@@ -11273,7 +11273,7 @@ The status of your transaction with ID T1001 is "Paid". Is there anything else I

[Moderation]
-Source: https://docs.mistral.ai/docs/.docs/capabilities/moderation
+Source: https://docs.mistral.ai/docs/capabilities/moderation

## Moderation API
@@ -11549,7 +11549,7 @@ Please adjust the self-reflection prompt according to your own use cases.

[Predicted outputs]
-Source: https://docs.mistral.ai/docs/.docs/capabilities/predicted-outputs
+Source: https://docs.mistral.ai/docs/capabilities/predicted-outputs

Predicted Outputs optimizes response time by leveraging known or predictable content. This approach minimizes latency while maintaining high output quality. In tasks such as editing large texts, modifying code, or generating template-based responses, significant portions of the output are often predetermined. By predefining these expected parts with Predicted Outputs, models can allocate more computational resources to the unpredictable elements, improving overall efficiency.
@@ -11689,7 +11689,7 @@ No, the placement of sentences or words in your prediction does not affect its e

[Reasoning]
-Source: https://docs.mistral.ai/docs/.docs/capabilities/reasoning
+Source: https://docs.mistral.ai/docs/capabilities/reasoning

**Reasoning** is the next step of CoT (Chain of Thought), naturally used to describe the **logical steps generated by the model** before reaching a conclusion. Reasoning strengthens this characteristic by going through **training steps that encourage the model to generate chains of thought freely before producing the final answer**.
This allows models to **explore the problem more profoundly and ultimately reach a better solution** to the best of their ability by using extra compute time to generate more tokens and improve the answer—also described as **Test Time Computation**.
@@ -12536,7 +12536,7 @@ Therefore, John is 22 years old.

[Custom Structured Output]
-Source: https://docs.mistral.ai/docs/.docs/capabilities/structured-output/custom
+Source: https://docs.mistral.ai/docs/capabilities/structured-output/custom

# Custom Structured Outputs
@@ -12742,7 +12742,7 @@ However, it is recommended to add more explanations and iterate on your system p

[JSON mode]
-Source: https://docs.mistral.ai/docs/.docs/capabilities/structured-output/json-mode
+Source: https://docs.mistral.ai/docs/capabilities/structured-output/json-mode

Users have the option to set `response_format` to `{"type": "json_object"}` to enable JSON mode. Currently, JSON mode is available for all of our models through API.
@@ -12823,7 +12823,7 @@ curl --location "https://api.mistral.ai/v1/chat/completions" \

[Structured Output]
-Source: https://docs.mistral.ai/docs/.docs/capabilities/structured-output/overview
+Source: https://docs.mistral.ai/docs/capabilities/structured-output/overview

# Structured Output
When utilizing LLMs as agents or steps within a lengthy process, chain, or pipeline, it is often necessary for the outputs to adhere to a specific structured format. JSON is the most commonly used format for this purpose.
@@ -12845,7 +12845,7 @@ Use JSON mode when more flexibility in the output is required while maintaining

[Text and Chat Completions]
-Source: https://docs.mistral.ai/docs/.docs/capabilities/text_and_chat_completions
+Source: https://docs.mistral.ai/docs/capabilities/text_and_chat_completions

The Mistral models allow you to chat with a model that has been fine-tuned to follow instructions and respond to natural language prompts.
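Aside (not part of the patch): the JSON-mode hunk above notes that setting `response_format` to `{"type": "json_object"}` switches the model to JSON output. A sketch of the request body that flag lands in; the model name is an assumption, and since sending it needs an API key only the payload is built here:

```python
import json

payload = {
    "model": "mistral-small-latest",  # assumed model name
    "messages": [
        {"role": "user", "content": "List three EU capitals as JSON."}
    ],
    # The documented switch for JSON mode:
    "response_format": {"type": "json_object"},
}

# The body must serialize cleanly before being POSTed
# to /v1/chat/completions.
body = json.dumps(payload)
print("response_format" in json.loads(body))
```

With this flag set, the returned message content is guaranteed to be parseable JSON, which is what makes it useful inside pipelines.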
@@ -13094,7 +13094,7 @@ Chat messages (`messages`) are a collection of prompts or messages, with each me

[Vision]
-Source: https://docs.mistral.ai/docs/.docs/capabilities/vision
+Source: https://docs.mistral.ai/docs/capabilities/vision

Vision capabilities enable models to analyze images and provide insights based on visual content in addition to text. This multimodal approach opens up new possibilities for applications that require both textual and visual understanding.
@@ -13627,7 +13627,7 @@ Model output:

[AWS Bedrock]
-Source: https://docs.mistral.ai/docs/.docs/deployment/cloud/aws
+Source: https://docs.mistral.ai/docs/deployment/cloud/aws

## Introduction
@@ -13732,7 +13732,7 @@ For more details and examples, refer to the following resources:

[Azure AI]
-Source: https://docs.mistral.ai/docs/.docs/deployment/cloud/azure
+Source: https://docs.mistral.ai/docs/deployment/cloud/azure

## Introduction
@@ -13864,7 +13864,7 @@ For more details and examples, refer to the following resources:

[IBM watsonx.ai]
-Source: https://docs.mistral.ai/docs/.docs/deployment/cloud/ibm-watsonx
+Source: https://docs.mistral.ai/docs/deployment/cloud/ibm-watsonx

## Introduction
@@ -13995,7 +13995,7 @@ For more information and examples, you can check:

[Outscale]
-Source: https://docs.mistral.ai/docs/.docs/deployment/cloud/outscale
+Source: https://docs.mistral.ai/docs/deployment/cloud/outscale

## Introduction
@@ -14173,7 +14173,7 @@ For more information and examples, you can check:

[Cloud]
-Source: https://docs.mistral.ai/docs/.docs/deployment/cloud/overview
+Source: https://docs.mistral.ai/docs/deployment/cloud/overview

You can access Mistral AI models via your preferred cloud provider and use your cloud credits.
In particular, Mistral's optimized commercial models are available on:
@@ -14187,7 +14187,7 @@ In particular, Mistral's optimized commercial models are available on:

[Snowflake Cortex]
-Source: https://docs.mistral.ai/docs/.docs/deployment/cloud/sfcortex
+Source: https://docs.mistral.ai/docs/deployment/cloud/sfcortex

## Introduction
@@ -14266,7 +14266,7 @@ For more information and examples, you can check the Snowflake documentation for

[Vertex AI]
-Source: https://docs.mistral.ai/docs/.docs/deployment/cloud/vertex
+Source: https://docs.mistral.ai/docs/deployment/cloud/vertex

## Introduction
@@ -14497,7 +14497,7 @@ For more information and examples, you can check:

[Workspaces]
-Source: https://docs.mistral.ai/docs/.docs/deployment/laplateforme/organization
+Source: https://docs.mistral.ai/docs/deployment/laplateforme/organization

A La Plateforme workspace is a collective of accounts, each with a designated set of rights and permissions. Creating a workspace for your team enables you to:
- Manage access and costs
@@ -14537,7 +14537,7 @@ and click "Invite a new member".

[La Plateforme]
-Source: https://docs.mistral.ai/docs/.docs/deployment/laplateforme/overview
+Source: https://docs.mistral.ai/docs/deployment/laplateforme/overview

[platform_url]: https://console.mistral.ai/
[deployment_img]: /img/deployment.png
@@ -14576,7 +14576,7 @@ Raw model weights can be used in several ways:

[Pricing]
-Source: https://docs.mistral.ai/docs/.docs/deployment/laplateforme/pricing
+Source: https://docs.mistral.ai/docs/deployment/laplateforme/pricing

:::note[ ]
Please refer to the [pricing page](https://mistral.ai/pricing#api-pricing) for detailed information on costs.
@@ -14584,7 +14584,7 @@ Please refer to the [pricing page](https://mistral.ai/pricing#api-pricing) for d

[Rate limit and usage tiers]
-Source: https://docs.mistral.ai/docs/.docs/deployment/laplateforme/tier
+Source: https://docs.mistral.ai/docs/deployment/laplateforme/tier

:::note[ ]
Please visit https://admin.mistral.ai/plateforme/limits for detailed information on the current rate limit and usage tiers for your workspace.
@@ -14613,7 +14613,7 @@ We offer various tiers on the platform, including a **free API tier** with restr

[Deploy with Cerebrium]
-Source: https://docs.mistral.ai/docs/.docs/deployment/self-deployment/cerebrium
+Source: https://docs.mistral.ai/docs/deployment/self-deployment/cerebrium

[Cerebrium](https://www.cerebrium.ai/) is a serverless AI infrastructure platform that makes it easier for companies to build and deploy AI-based applications. They offer serverless GPUs with low cold-start times across more than 12 varieties of GPU chips that auto-scale, and you only pay for the compute you use.
@@ -14771,7 +14771,7 @@ You should then get a message looking like this:

[Deploy with Cloudflare Workers AI]
-Source: https://docs.mistral.ai/docs/.docs/deployment/self-deployment/cloudflare
+Source: https://docs.mistral.ai/docs/deployment/self-deployment/cloudflare

[Cloudflare](https://www.cloudflare.com/en-gb/) is a web performance and security company that provides content delivery network (CDN), DDoS protection, Internet security, and distributed domain name server services. Cloudflare launched Workers AI, which allows developers to run LLM models powered by serverless GPUs on Cloudflare’s global network.
@@ -14850,7 +14850,7 @@ Here is the output you should receive

[Self-deployment]
-Source: https://docs.mistral.ai/docs/.docs/deployment/self-deployment/overview
+Source: https://docs.mistral.ai/docs/deployment/self-deployment/overview

Mistral AI models can be self-deployed on your own infrastructure through various inference engines.
We recommend using [vLLM](https://vllm.readthedocs.io/), a
@@ -14866,7 +14866,7 @@ You can also leverage specific tools to facilitate infrastructure management, su

[Deploy with SkyPilot]
-Source: https://docs.mistral.ai/docs/.docs/deployment/self-deployment/skypilot
+Source: https://docs.mistral.ai/docs/deployment/self-deployment/skypilot

[SkyPilot](https://skypilot.readthedocs.io/en/latest/) is a framework for running LLMs, AI, and batch jobs on any cloud, offering maximum cost savings, highest GPU availability, and managed execution.
@@ -14970,7 +14970,7 @@ Many cloud providers require you to explicitly request access to powerful GPU in

[Text Generation Inference]
-Source: https://docs.mistral.ai/docs/.docs/deployment/self-deployment/tgi
+Source: https://docs.mistral.ai/docs/deployment/self-deployment/tgi

Text Generation Inference (TGI) is a toolkit for deploying and serving Large Language Models (LLMs). TGI enables high-performance text generation for the most popular open-access LLMs. Among other features, it has quantization, tensor parallelism, token streaming, continuous batching, flash attention, guidance, and more.
@@ -15134,7 +15134,7 @@ curl 127.0.0.1:8080/generate \

[TensorRT]
-Source: https://docs.mistral.ai/docs/.docs/deployment/self-deployment/trt
+Source: https://docs.mistral.ai/docs/deployment/self-deployment/trt

## Building the engine
@@ -15151,7 +15151,7 @@ Follow the [official documentation](https://github.com/triton-inference-server/t

[vLLM]
-Source: https://docs.mistral.ai/docs/.docs/deployment/self-deployment/vllm
+Source: https://docs.mistral.ai/docs/deployment/self-deployment/vllm

[vLLM](https://github.com/vllm-project/vllm) is an open-source LLM inference and serving engine.
It is particularly appropriate as a target platform for self-deploying Mistral
@@ -15567,7 +15567,7 @@ using the same code as in a standalone deployment.
[SDK Clients]
-Source: https://docs.mistral.ai/docs/.docs/getting-started/clients
+Source: https://docs.mistral.ai/docs/getting-started/clients

We provide client codes in both Python and Typescript.
@@ -15671,7 +15671,7 @@ We recommend reaching out to the respective maintainers for any assistance or in

[Bienvenue to Mistral AI Documentation]
-Source: https://docs.mistral.ai/docs/.docs/getting-started/docs_introduction
+Source: https://docs.mistral.ai/docs/getting-started/docs_introduction

Mistral AI is a research lab building the best open source models in the world. La Plateforme enables developers and enterprises to build new products and applications, powered by Mistral’s open source and commercial LLMs.
@@ -15719,7 +15719,7 @@ The [Mistral AI APIs](https://console.mistral.ai/) empower LLM applications via:

[Glossary]
-Source: https://docs.mistral.ai/docs/.docs/getting-started/glossary
+Source: https://docs.mistral.ai/docs/getting-started/glossary

## LLM
@@ -15808,7 +15808,7 @@ Temperature is a fundamental sampling parameter in LLMs that controls the random

[Model customization]
-Source: https://docs.mistral.ai/docs/.docs/getting-started/model_customization
+Source: https://docs.mistral.ai/docs/getting-started/model_customization

### Otherwise known as "How to Build an Application with a Custom Model"
@@ -15922,7 +15922,7 @@ Congrats! You’ve deployed your custom model into your application.

[Models Benchmarks]
-Source: https://docs.mistral.ai/docs/.docs/getting-started/models/benchmark
+Source: https://docs.mistral.ai/docs/getting-started/models/benchmark

LLM (Large Language Model) benchmarks are standardized tests or datasets used to evaluate the performance of large language models. These benchmarks help researchers and developers understand the strengths and weaknesses of their models and compare them with other models in a systematic way.
@@ -15990,7 +15990,7 @@ We've gathered a lot of valuable insights from platforms like Reddit and Twitter

[Model selection]
-Source: https://docs.mistral.ai/docs/.docs/getting-started/models/model_selection
+Source: https://docs.mistral.ai/docs/getting-started/models/model_selection

This guide will explore the performance and cost trade-offs, and discuss how to select the appropriate model for different use cases. We will delve into various factors to consider, offering guidance on choosing the right model for your specific needs.
@@ -16200,7 +16200,7 @@ Donc, un kilogramme de plumes est plus lourd qu'une livre de fer, car il corresp

[Models Overview]
-Source: https://docs.mistral.ai/docs/.docs/getting-started/models/overview
+Source: https://docs.mistral.ai/docs/getting-started/models/overview

Mistral provides two types of models: open models and premier models.
@@ -16303,7 +16303,7 @@ To prepare for model retirements and version upgrades, we recommend that custome

[Model weights]
-Source: https://docs.mistral.ai/docs/.docs/getting-started/models/weights
+Source: https://docs.mistral.ai/docs/getting-started/models/weights

We open-source both pre-trained models and instruction-tuned models. These models are not tuned for safety as we want to empower users to test and refine moderation based on their use cases. For safer models, follow our [guardrailing tutorial](/capabilities/guardrailing).
@@ -16394,7 +16394,7 @@ To learn more about how to use mistral-inference, take a look at the [README](ht

[Quickstart]
-Source: https://docs.mistral.ai/docs/.docs/getting-started/quickstart
+Source: https://docs.mistral.ai/docs/getting-started/quickstart

[platform_url]: https://console.mistral.ai/
@@ -16542,7 +16542,7 @@ For a full description of the models offered on the API, head on to the **[model

[Basic RAG]
-Source: https://docs.mistral.ai/docs/.docs/guides/basic-RAG
+Source: https://docs.mistral.ai/docs/guides/basic-RAG

# Basic RAG
Retrieval-augmented generation (RAG) is an AI framework that synergizes the capabilities of LLMs and information retrieval systems. It's useful to answer questions or generate content leveraging external knowledge. There are two main steps in RAG:
@@ -16717,7 +16717,7 @@ Among them you can find how to perform...

[Ambassador]
-Source: https://docs.mistral.ai/docs/.docs/guides/contribute/ambassador
+Source: https://docs.mistral.ai/docs/guides/contribute/ambassador

# Welcome to the Mistral AI Ambassador Program!
@@ -16998,7 +16998,7 @@ Thank you to each and every one of you, including those who prefer not to be nam

[Contribute]
-Source: https://docs.mistral.ai/docs/.docs/guides/contribute/overview
+Source: https://docs.mistral.ai/docs/guides/contribute/overview

# How to contribute
@@ -17050,7 +17050,7 @@ A valuable way to support Mistral AI is by engaging in active communication in t

[Evaluation]
-Source: https://docs.mistral.ai/docs/.docs/guides/evaluation
+Source: https://docs.mistral.ai/docs/guides/evaluation

Open In Colab
@@ -17517,7 +17517,7 @@ When implementing crowdsourcing for human evaluation, you can opt for a simple a

[Fine-tuning]
-Source: https://docs.mistral.ai/docs/.docs/guides/finetuning
+Source: https://docs.mistral.ai/docs/guides/finetuning

:::warning[ ]
There's a monthly storage fee of $2 for each model. For more detailed pricing information, please visit our [pricing page](https://mistral.ai/pricing#api-pricing).
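Aside (not part of the patch): the Basic RAG hunk above names the two steps, retrieve relevant context, then generate with it. A toy end-to-end sketch using keyword overlap as a stand-in for real embedding retrieval, to show how the retrieved chunk is spliced into the prompt; the chunks and question are made up:

```python
# Toy knowledge base: in real RAG these chunks would be embedded
# and retrieved by vector similarity instead of word overlap.
chunks = [
    "Mistral AI was founded in 2023 in Paris.",
    "RAG combines retrieval with generation.",
]

def retrieve(question, docs):
    # Score each chunk by how many lowercase words it shares
    # with the question, and keep the best one.
    q = set(question.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

question = "When was Mistral AI founded?"
context = retrieve(question, chunks)

# Step 2 would send this assembled prompt to the chat endpoint.
prompt = f"Context: {context}\n\nQuestion: {question}\nAnswer:"
print(context)
```

Swapping the overlap scorer for embedding cosine similarity, and the final `print` for a chat-completion call, turns this into the pipeline the guide describes.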
@@ -17530,7 +17530,7 @@ There's a monthly storage fee of $2 for each model. For more detailed pricing in

[ 01 Intro Basics]
-Source: https://docs.mistral.ai/docs/.docs/guides/finetuning_sections/_01_intro_basics
+Source: https://docs.mistral.ai/docs/guides/finetuning_sections/_01_intro_basics

## Introduction
@@ -17545,7 +17545,7 @@ In this guide, we will cover the following topics:

[ 02 Prepare Dataset]
-Source: https://docs.mistral.ai/docs/.docs/guides/finetuning_sections/_02_prepare_dataset
+Source: https://docs.mistral.ai/docs/guides/finetuning_sections/_02_prepare_dataset

## Prepare the dataset
@@ -18140,7 +18140,7 @@ Here are six specific use cases that you might find helpful:

[download the validation and reformat script]
-Source: https://docs.mistral.ai/docs/.docs/guides/finetuning_sections/_03_e2e_examples
+Source: https://docs.mistral.ai/docs/guides/finetuning_sections/_03_e2e_examples

## End-to-end example with Mistral API
@@ -18634,7 +18634,7 @@ To see an end-to-end example of how to install mistral-finetune, prepare and val

[get data from hugging face]
-Source: https://docs.mistral.ai/docs/.docs/guides/finetuning_sections/_04_faq
+Source: https://docs.mistral.ai/docs/guides/finetuning_sections/_04_faq

## FAQ
@@ -18748,7 +18748,7 @@ for file in output_file_objects:

[Observability]
-Source: https://docs.mistral.ai/docs/.docs/guides/observability
+Source: https://docs.mistral.ai/docs/guides/observability

## Why observability?
@@ -19020,7 +19020,7 @@ Maxim Documentation to use Mistral as an LLM Provider and Maxim as Logger - [Doc

[Other resources]
-Source: https://docs.mistral.ai/docs/.docs/guides/other-resources
+Source: https://docs.mistral.ai/docs/guides/other-resources

Visit the [Mistral AI Cookbook](https://github.com/mistralai/cookbook) for additional inspiration,
where you'll find example code, community contributions, and demonstrations of integrations with third-party tools, including:
@@ -19029,7 +19029,7 @@ where you'll find example code, community contributions, and demonstrations of i

[Prefix]
-Source: https://docs.mistral.ai/docs/.docs/guides/prefix
+Source: https://docs.mistral.ai/docs/guides/prefix

# Prefix: Use Cases
@@ -19641,7 +19641,7 @@ to different needs and use cases.*

[Prompting capabilities]
-Source: https://docs.mistral.ai/docs/.docs/guides/prompting-capabilities
+Source: https://docs.mistral.ai/docs/guides/prompting-capabilities

# Prompting Capabilities
@@ -19970,7 +19970,7 @@ Explain your choice by pointing out specific reasons such as clarity, completene

[Sampling]
-Source: https://docs.mistral.ai/docs/.docs/guides/sampling
+Source: https://docs.mistral.ai/docs/guides/sampling

# Sampling: Overview on our sampling settings
@@ -20430,7 +20430,7 @@ print(chat_response.choices[0].message.content)

[Tokenization]
-Source: https://docs.mistral.ai/docs/.docs/guides/tokenization
+Source: https://docs.mistral.ai/docs/guides/tokenization

Open In Colab