Model Providers
Connect and configure AI model providers for your Tambo application.
Tambo supports multiple AI model providers, allowing you to choose the best model for your application's needs. Each provider offers different models with varying capabilities, pricing, and performance characteristics.
Available Providers
Tambo integrates with five major AI providers:
| Provider | Description | Best For |
|---|---|---|
| OpenAI | Industry-leading models including GPT-4.1, GPT-5, GPT-5.1, and o3 reasoning models | General-purpose tasks, reasoning, and state-of-the-art performance |
| Anthropic | Claude models with strong safety and reasoning capabilities | Complex reasoning, analysis, and safety-critical applications |
| Cerebras | Ultra-fast inference (2,000+ tokens/sec) powered by Wafer-Scale Engine hardware | Real-time applications, high-throughput processing |
| Google | Gemini models with multimodal support and extended thinking capabilities | Multimodal tasks, vision-based applications, and advanced reasoning |
| Mistral | Fast, efficient open-source models with strong performance | Cost-effective alternatives with reliable performance |
Configuring Providers
All model providers are configured through your Tambo Cloud dashboard:
1. Navigate to Dashboard → Project → Settings → LLM Providers
2. Select your desired provider
3. Choose a model from the available options
4. (Optional) Add custom parameters for fine-tuned behavior
Multiple Providers
You can configure multiple providers in a single project and switch between them as needed. This is useful for testing different models or optimizing for different use cases.
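As a sketch of how you might take advantage of this in your own application code (the routing helper below is hypothetical, not part of the Tambo API, and the provider/model names are illustrative), you could map each use case to one of the providers configured in your dashboard:

```typescript
// Hypothetical routing table: map each use case in your app to one of the
// providers configured in the Tambo dashboard. All names are illustrative.
type UseCase = "chat" | "analysis" | "realtime";

interface ProviderChoice {
  provider: string;
  model: string;
}

const providerForUseCase: Record<UseCase, ProviderChoice> = {
  chat: { provider: "openai", model: "gpt-5" },             // general-purpose tasks
  analysis: { provider: "anthropic", model: "claude" },     // complex reasoning
  realtime: { provider: "cerebras", model: "fast-llama" },  // low-latency inference
};

function pickProvider(useCase: UseCase): ProviderChoice {
  return providerForUseCase[useCase];
}
```

This keeps the "which provider for which job" decision in one place, so switching a use case to a different configured provider is a one-line change.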
Model Status Labels
Each model carries a status label indicating how thoroughly it has been tested with Tambo:
- Tested - Validated on common Tambo tasks; recommended for production
- Untested - Available but not yet validated; use with caution and test in your context
- Known Issues - Usable but with observed behaviors worth noting
For detailed information about each label and specific model behaviors, see Labels.
Streaming Considerations
Streaming may behave inconsistently with models from providers other than OpenAI. We're aware of the issue and are actively working on a fix. Use streaming with caution on non-OpenAI models and verify behavior in your own application.
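Until a fix lands, one defensive pattern (a hypothetical helper, not part of the Tambo SDK) is to enable streaming only for providers you have validated yourself:

```typescript
// Hypothetical guard: only stream responses from providers you have validated
// in your own testing; fall back to non-streaming for the rest.
// Provider names are illustrative.
const STREAMING_VALIDATED = new Set(["openai"]);

function shouldStream(provider: string): boolean {
  return STREAMING_VALIDATED.has(provider.toLowerCase());
}
```

As you validate streaming with additional providers in your context, add them to the set.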
Advanced Configuration
Custom LLM Parameters
Fine-tune model behavior with custom parameters like temperature, max tokens, and provider-specific settings. This allows you to optimize models for your specific use case—whether you need deterministic responses for analysis or creative outputs for generation.
Common parameters across all providers:
- `temperature` - Control randomness (0.0-1.0)
- `maxOutputTokens` - Limit response length
- `topP` - Nucleus sampling threshold
- `presencePenalty` - Penalize tokens that have already appeared, encouraging new topics
- `frequencyPenalty` - Reduce repetition
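The parameters above can be sketched as a typed configuration object. This is illustrative only; the exact shape and field names Tambo accepts are defined in the Custom LLM Parameters docs and may differ per provider:

```typescript
// Illustrative shape for common custom LLM parameters; the exact fields
// your provider accepts are configured in the dashboard.
interface CustomLlmParams {
  temperature?: number;      // 0.0 (deterministic) to 1.0 (creative)
  maxOutputTokens?: number;  // cap on response length
  topP?: number;             // nucleus sampling threshold
  presencePenalty?: number;  // positive values push toward new topics
  frequencyPenalty?: number; // positive values reduce repetition
}

// Example: low-temperature settings for deterministic, analysis-style output.
const analysisParams: CustomLlmParams = {
  temperature: 0.1,
  maxOutputTokens: 1024,
  topP: 0.9,
  frequencyPenalty: 0.3,
};
```

For creative generation you would raise `temperature` (and often `topP`) instead.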
Learn more in Custom LLM Parameters.
Reasoning Models
Advanced reasoning models from OpenAI (GPT-5, GPT-5.1, o3) and Google (Gemini 3.0 Pro, Gemini 3.0 Deep Think) expose their internal thinking process. These models excel at complex problem-solving by spending additional compute time analyzing problems before generating responses.
Configure reasoning capabilities through your project's LLM provider settings to enable:
- Multi-step problem decomposition
- Solution exploration and verification
- Detailed reasoning token access
- Adaptive thinking time (for supported models)
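As an illustrative sketch of the settings involved (the field names here are hypothetical; the actual options are exposed in your LLM provider settings and documented in Reasoning Models), reasoning configuration typically comes down to an effort level and a visibility toggle:

```typescript
// Hypothetical reasoning settings; real option names vary by provider.
interface ReasoningSettings {
  effort: "low" | "medium" | "high"; // how much thinking time the model spends
  exposeReasoningTokens: boolean;    // surface the model's internal reasoning
}

// Example: maximum thinking time with reasoning tokens made visible.
const deepAnalysis: ReasoningSettings = {
  effort: "high",
  exposeReasoningTokens: true,
};
```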
See Reasoning Models for detailed configuration guides.
Quick Links
- Model Labels & Status - Understand model testing status and known behaviors
- Custom Parameters - Fine-tune model behavior with temperature, tokens, and more
- Reasoning Models - Configure advanced reasoning capabilities for complex tasks
Next Steps
- Getting Started: Choose a provider and configure it in your project settings
- Optimize Performance: Use custom parameters to fine-tune responses for your use case
- Explore Reasoning: Enable reasoning on supported models for complex tasks
- Monitor Usage: Track model performance and costs in your dashboard
For comprehensive API and integration guidance, explore the API Reference and Concepts sections.