
Mistral

Configure and use Mistral AI models in your Tambo project

Mistral AI provides a range of powerful language models designed for professional use cases and complex reasoning tasks. This page covers the Mistral models available in Tambo, their capabilities, and how to configure them.

Known Rendering Issues

Mistral models (Large 2.1 and Medium 3) may inconsistently follow rendering instructions, similar to Gemini models. Try clarifying prompt structure if you encounter formatting issues. See Labels for more details.

Available Models

Tambo supports three Mistral models, ranging from frontier-class reasoning to high-performance production models.

Magistral Medium 1

Status: Tested
API Name: magistral-medium-2506
Context Window: 40,000 tokens

A frontier-class reasoning model released in June 2025, designed for advanced problem-solving and analytical tasks.

Best for:

  • Complex reasoning and multi-step problem solving
  • Code generation and debugging
  • Professional knowledge work requiring deep analysis
  • Tasks benefiting from extended thinking

Provider Documentation: Mistral AI - Magistral
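
If you want to sanity-check the model name and your credentials outside of Tambo, you can call Mistral's chat completions API directly. A minimal TypeScript sketch, assuming Node 18+ (built-in fetch) and a MISTRAL_API_KEY environment variable; the endpoint and response shape follow Mistral's OpenAI-compatible API:

```ts
// Sketch: call Magistral Medium 1 directly against Mistral's API (bypasses Tambo).
const response = await fetch("https://api.mistral.ai/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.MISTRAL_API_KEY}`, // assumed env var
  },
  body: JSON.stringify({
    model: "magistral-medium-2506",
    messages: [
      { role: "user", content: "In a few steps, explain how to compute 17 * 24." },
    ],
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content); // OpenAI-style response shape
```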


Mistral Medium 3

Status: Known Issues
API Name: mistral-medium-2505
Context Window: 128,000 tokens

Designed to deliver frontier-class performance, with a particular focus on professional use cases. It balances power and versatility for production workloads.

Best for:

  • Professional applications requiring reliable performance
  • Long-form content generation and analysis
  • Multi-document reasoning with large context windows
  • Production deployments where consistency matters

Notes: May occasionally resist rendering as requested, and outputs can have formatting quirks where structure matters. Try clarifying your instructions (e.g., "Return a bulleted list only"); see the example below.

Provider Documentation: Mistral AI - Mistral Medium 3
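
One practical mitigation for the note above is to pin the desired format explicitly in the instructions you send with your requests. A hypothetical example of such an instruction; where you attach it depends on how your project configures system instructions:

```ts
// Illustrative only: an explicit, format-pinning instruction for mistral-medium-2505.
const formatInstruction = [
  "Return a bulleted list only.",
  "Use '-' as the bullet character, one item per line.",
  "Do not add headings, preamble, or closing remarks.",
].join(" ");
```

Being this literal about the output shape tends to reduce, though not eliminate, the formatting drift described above.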


Mistral Large 2.1

Status: Known Issues
API Name: mistral-large-latest
Context Window: 128,000 tokens

Mistral's top-tier large model for high-complexity tasks, with the latest version released in November 2024. This model represents Mistral's most capable offering for demanding workloads.

Best for:

  • High-complexity reasoning and analysis
  • Advanced code generation and review
  • Multi-turn conversations requiring context retention
  • Tasks demanding maximum model capability

Notes: Similar to Medium 3, may inconsistently follow rendering instructions. Validate outputs where structure is critical.

Provider Documentation: Mistral AI - Pixtral Large

Configuration

Setting Up Mistral in Your Project

  1. Navigate to your project in the Tambo dashboard
  2. Go to Settings → LLM Providers
  3. Add or configure your Mistral API credentials
  4. Select your preferred Mistral model
  5. Adjust token limits and parameters as needed
  6. Click Save to apply your configuration

Custom Parameters

Mistral models support standard LLM parameters like temperature, max tokens, and more. Configure these in the dashboard under Custom LLM Parameters.

For detailed information on available parameters, see Custom LLM Parameters.
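
For illustration, a parameter set for a Mistral model might look like the sketch below. The key names follow common OpenAI-style conventions and are assumptions here; confirm the exact keys on the Custom LLM Parameters page.

```ts
// Illustrative values only; check the Custom LLM Parameters docs for the exact key names.
const customLlmParameters = {
  temperature: 0.3, // lower values favor consistent, repeatable output
  max_tokens: 2048, // cap response length to keep token usage predictable
  top_p: 0.9,       // nucleus sampling cutoff
};
```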

Model Comparison

| Model | Context Window | Status | Best Use Case |
| --- | --- | --- | --- |
| Magistral Medium 1 | 40K tokens | Tested | Reasoning & problem solving |
| Mistral Medium 3 | 128K tokens | Known Issues | Professional applications |
| Mistral Large 2.1 | 128K tokens | Known Issues | High-complexity tasks |

Best Practices

Choosing the Right Model

  • Start with Magistral Medium 1 for reasoning-heavy tasks where the smaller context window is sufficient
  • Use Mistral Medium 3 when you need larger context windows for professional applications
  • Reserve Mistral Large 2.1 for the most demanding tasks requiring maximum capability
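
Taken together, these guidelines can be summarized in a small helper. This is a sketch only; the strings are the API names listed earlier on this page, and the task flags are placeholders for whatever signals your application actually has:

```ts
// Sketch: choose a Mistral API model name from a simple task profile.
type TaskProfile = {
  needsLargeContext: boolean; // inputs approaching or exceeding ~40K tokens
  maxCapability: boolean;     // the most demanding, high-complexity work
};

function pickMistralModel({ needsLargeContext, maxCapability }: TaskProfile): string {
  if (maxCapability) return "mistral-large-latest";    // Mistral Large 2.1
  if (needsLargeContext) return "mistral-medium-2505"; // Mistral Medium 3
  return "magistral-medium-2506";                      // Magistral Medium 1
}
```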

Handling Rendering Issues

If you encounter formatting inconsistencies with Medium 3 or Large 2.1:

  1. Clarify instructions - Be explicit about desired output format
  2. Use structured prompts - Provide clear examples of expected structure
  3. Validate outputs - Add checks for critical formatting requirements (see the sketch after this list)
  4. Test thoroughly - Run a prompt suite to verify behavior in your workload
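
For step 3, a lightweight check on the response is often enough. A minimal sketch, assuming you asked for a bulleted list; how you re-prompt on failure is up to your application:

```ts
// Sketch: verify that a model response is a plain bulleted list before rendering it.
function isBulletedList(output: string): boolean {
  const lines = output.trim().split("\n").map((line) => line.trim()).filter(Boolean);
  return lines.length > 0 && lines.every((line) => line.startsWith("-") || line.startsWith("•"));
}

function handleModelOutput(output: string): string {
  if (isBulletedList(output)) return output;
  // Hypothetical fallback: re-send the request with a stricter format instruction.
  throw new Error("Unexpected format: expected a bulleted list");
}
```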

For production-critical formatting, consider using Tested models and validating outputs. See Labels for more guidance.

Troubleshooting

Model not appearing in dashboard?

  • Verify your Mistral API key is configured correctly (see the check below)
  • Check that your Tambo Cloud instance is up to date
  • Ensure you have proper permissions for your project
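
To rule out a credentials problem, you can query Mistral's model list with the same key you added in the dashboard. A sketch, assuming Mistral's OpenAI-compatible /v1/models endpoint and a MISTRAL_API_KEY environment variable:

```ts
// Sketch: list the models your Mistral API key can access.
const res = await fetch("https://api.mistral.ai/v1/models", {
  headers: { Authorization: `Bearer ${process.env.MISTRAL_API_KEY}` }, // assumed env var
});

if (!res.ok) {
  console.error(`Key check failed: ${res.status} ${res.statusText}`);
} else {
  const { data } = await res.json();
  console.log(data.map((m: { id: string }) => m.id)); // the model you selected should appear here
}
```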

Inconsistent formatting in responses?

  • See Handling Rendering Issues above: clarify your format instructions, validate outputs, and consider a Tested model for production-critical formatting

High token usage?

  • Review your token limits and other settings under Custom LLM Parameters in the dashboard
  • Trim prompts and any retrieved context where possible; the 128K-context models will accept very long inputs

See Also