Configure LLM Provider
Step-by-step guide to selecting and configuring LLM providers and models
This guide walks you through configuring LLM providers and models for your Tambo project, including provider-specific settings and parameter tuning patterns.
Step 1: Select a Provider
Tambo supports multiple LLM providers:
- OpenAI - GPT-5.2, GPT-5.1, GPT-4.1 models
- Anthropic - Claude Opus 4.5, Claude Sonnet 4.5, Claude Haiku 4.5
- Google - Gemini 2.5 Pro, Gemini 2.5 Flash
- Groq - Fast inference for Llama 4 models
- Mistral - Mistral Large, Magistral Medium
- OpenAI-Compatible - Any provider with OpenAI API compatibility
Navigate to the LLM Configuration section in your project settings on the Tambo dashboard and select your provider from the dropdown.
For detailed provider capabilities and comparison, see the Provider Reference.
Step 2: Add API Keys
Generate an API key from your provider's dashboard:
- OpenAI: platform.openai.com/api-keys
- Anthropic: console.anthropic.com/settings/keys
- Google AI Studio: makersuite.google.com/app/apikey
- Groq: console.groq.com/keys
- Mistral: console.mistral.ai/api-keys
In your project's LLM Configuration:
- Find the API Key field
- Paste your API key exactly as provided (you can optionally verify it first, as sketched below)
- Click Save to store it securely
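If you want to confirm a key works before saving it, you can call the provider's API directly. Below is a minimal sketch for OpenAI using its model-listing endpoint; other providers expose similar endpoints, so adjust the URL and header format to match your provider's documentation. The `OPENAI_API_KEY` environment variable is an assumption for this example.

```ts
// Quick sanity check for an OpenAI key before pasting it into the dashboard.
// Assumes the key is exported as OPENAI_API_KEY in your shell (Node 18+ for global fetch).
const res = await fetch("https://api.openai.com/v1/models", {
  headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
});

console.log(res.ok ? "Key accepted" : `Key rejected (HTTP ${res.status})`);
```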
Step 3: Select a Model
Choose a model based on your requirements:
Selection Criteria
- Capability - More capable models handle complex tasks better
- Cost - Larger models cost more per token
- Speed - Smaller models respond faster
- Context Window - Some models support larger inputs
- Skills Support - Only certain models support skills (see the Skills Support section below)
Step 4: Configure Custom Parameters
Parameters control how your model behaves. Configure these in the Custom Parameters section of your project settings.
The specific parameters available depend on the selected provider and model. For example, OpenAI GPT-5 models have a reasoningEffort parameter that you can edit.
For details on how each parameter affects model output, refer to your provider's documentation.
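As an illustration only, a Custom Parameters entry for an OpenAI GPT-5 model might look like the sketch below. The `reasoningEffort` name comes from this guide; `maxOutputTokens` is a hypothetical extra setting added for illustration. Confirm the exact names and accepted values for your provider and model before saving.

```ts
// Illustrative sketch of a Custom Parameters entry -- names and values are
// assumptions; verify what your provider and model actually accept.
const customParameters = {
  reasoningEffort: "high", // reasoning-effort setting mentioned in this guide for GPT-5 models
  maxOutputTokens: 4096,   // hypothetical provider-specific cap, for illustration only
};

// Shape of what you would enter in the Custom Parameters field.
console.log(JSON.stringify(customParameters, null, 2));
```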
Skills Support
Skills are reusable instruction sets that run inside your LLM provider's sandbox, and not all models support them. If you plan to use skills, make sure your model is on the supported list below:
- OpenAI: GPT-5.2, GPT-5.2 Pro, GPT-5.3 Chat Latest, GPT-5.4, GPT-5.4 Pro
- Anthropic: Claude Haiku 4.5, Claude Sonnet 4.5, Claude Sonnet 4.6, Claude Opus 4.5, Claude Opus 4.6
Models outside this list (GPT-4o, GPT-4.1, GPT-5, GPT-5.1, Claude Sonnet 4, Claude Opus 4, Claude Opus 4.1) and other providers (Google, Mistral, Groq, Cerebras) do not support skills. Tambo stores your skills regardless of the model, so you can switch models later without losing them.
If your project has skills enabled but uses an unsupported model, the Skills section in your project settings shows a warning. To start using skills, see the Add a Skill to Your Agent guide.
Advanced: Custom OpenAI-Compatible Endpoints
If your provider is not listed in the LLM Providers dropdown, you can still use it as long as its API is OpenAI-compatible:
- Select OpenAI Compatible as your provider
- Enter the Custom Model Name (e.g., meta-llama/Llama-3-70b-chat-hf)
- Enter the Custom Base URL without "chat/completions"; Tambo appends it automatically (e.g., https://api.myai.xyz/v1 becomes https://api.myai.xyz/v1/chat/completions)
- Add your API key if required
- Click Save
Your endpoint must implement the OpenAI API format and support the /chat/completions route. You can test this before saving your configuration, as sketched below.
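One way to confirm an endpoint is OpenAI-compatible is to call it with the official OpenAI Node SDK pointed at a custom base URL. The base URL, model name, and environment variable below are placeholders taken from the example above, not values Tambo requires.

```ts
import OpenAI from "openai";

// Sanity check that a custom endpoint speaks the OpenAI chat completions API.
// baseURL is the value you would enter as the Custom Base URL (no /chat/completions).
const client = new OpenAI({
  baseURL: "https://api.myai.xyz/v1",           // placeholder endpoint
  apiKey: process.env.CUSTOM_LLM_API_KEY ?? "", // placeholder env var
});

const completion = await client.chat.completions.create({
  model: "meta-llama/Llama-3-70b-chat-hf",      // the Custom Model Name
  messages: [{ role: "user", content: "Reply with OK if you can read this." }],
});

console.log(completion.choices[0].message.content);
```

If the call returns a normal chat completion, the endpoint should work with Tambo's OpenAI Compatible provider option.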
Next Steps
- Configure Agent Behavior - Practical configuration patterns
- Add a Skill to Your Agent - Extend your agent's capabilities with skills
- Provider Reference - Detailed provider capabilities
- Agent Configuration Concepts - Understanding the system