Labels
What the Tested, Untested, and Known Issues labels mean, plus the behaviors we've observed for certain models.
Potential Streaming Issues
Streaming may behave inconsistently with models from providers other than OpenAI. We're aware of the issue and are actively working on a fix. Proceed with caution when using streaming with non-OpenAI models.
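If you do need streaming on a non-OpenAI model in the meantime, one defensive pattern is to fall back to a plain request when the stream fails. Below is a minimal sketch; `stream` and `complete` are placeholders for however you call your provider (SDK, fetch, etc.), not tambo APIs:

```ts
// Hedged sketch: `stream` and `complete` are placeholders for your own
// provider calls -- they are not part of tambo's API.
type StreamFn = (prompt: string) => AsyncIterable<string>;
type CompleteFn = (prompt: string) => Promise<string>;

async function completeWithFallback(
  prompt: string,
  stream: StreamFn,
  complete: CompleteFn,
): Promise<string> {
  let text = "";
  try {
    // Accumulate streamed chunks as they arrive.
    for await (const chunk of stream(prompt)) {
      text += chunk;
    }
    return text;
  } catch (err) {
    // If the stream errors partway through, retry once without streaming.
    console.warn("Streaming failed; retrying without streaming:", err);
    return complete(prompt);
  }
}
```

This trades streamed rendering for a completed response, which is usually an acceptable stopgap.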
Models in tambo carry a status label, shown when you select a model in the LLM settings
(Dashboard → Project → Settings → LLM Providers).
Why Use Labels?
- Set expectations: Understand tambo’s confidence level for each model.
- Guide selection: Prefer `tested` models for production; approach others with care.
- Highlight caveats: `known-issues` labels call out specific behaviors we've observed.
Label Definitions
| Label | Meaning |
|---|---|
| `tested` | Validated on common tambo tasks. Recommended for most workflows. |
| `untested` | Available but not yet validated. Usable; test it in your context first. |
| `known-issues` | Usable, but we've observed behaviors worth noting (see below). |
Provider-Specific Notes
For detailed information about each model, including status, capabilities, and observed behaviors, see the provider-specific pages:
- OpenAI Models - Notes on GPT-5.1, GPT-5.1 Chat Latest, GPT-4.1 Nano, and other untested models
- Anthropic Models - Known issues with Claude 3.5 Haiku component rendering
- Google Models - Known issues with Gemini rendering consistency and untested Gemini 3.0 models
- Groq Models - Notes on untested Llama 4 Scout and Maverick models
- Mistral Models - Known issues with Mistral Large 2.1 and Medium 3 rendering
Each provider page includes complete model information, configuration guidance, and specific notes about observed behaviors during testing.
Production Guidance
For production-critical formatting, use `tested` models and validate outputs. When using `untested` or `known-issues` models, run a small prompt suite to check behavior in your specific workload.
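As a rough illustration, a prompt suite can be as small as a handful of prompt/check pairs run against the model before you ship. The sketch below is generic; `generate` stands in for however you call the model in your stack (it is not a tambo API), and the cases are placeholders for your own workload:

```ts
// Each case pairs a prompt with a predicate the output should satisfy.
type Case = { prompt: string; check: (output: string) => boolean };

const cases: Case[] = [
  {
    prompt: 'Return exactly the JSON {"ok": true} and nothing else.',
    check: (o) => {
      try { return JSON.parse(o).ok === true; } catch { return false; }
    },
  },
  {
    prompt: "List three colors, one per line.",
    check: (o) => o.trim().split("\n").length === 3,
  },
];

// `generate` is a placeholder for your model call (provider SDK, fetch, etc.).
async function runSuite(generate: (prompt: string) => Promise<string>) {
  let failures = 0;
  for (const { prompt, check } of cases) {
    const output = await generate(prompt);
    const pass = check(output);
    if (!pass) failures++;
    console.log(`${pass ? "PASS" : "FAIL"}: ${prompt}`);
  }
  return failures;
}
```

Re-run the same suite whenever you switch to an `untested` or `known-issues` model so regressions surface before production traffic does.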
Usage Patterns
- Prefer `tested` models for reliability. If you use others, test them with your use case.
- Use the inline notes in the picker to spot caveats quickly.
Integration
You can change providers and models at the project level under LLM Provider Settings. tambo will apply your token limits and defaults accordingly.
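Conceptually, the project-level settings pair a provider and model with the limits tambo applies to requests. The shape below is illustrative only, to show what changes when you switch models; the real settings live in the dashboard, not in your code:

```ts
// Illustrative shape only -- these values are configured in the dashboard
// (Dashboard → Project → Settings → LLM Providers), not in code.
interface LlmProviderSettings {
  provider: "openai" | "anthropic" | "google" | "groq" | "mistral";
  model: string;            // e.g. "gpt-4.1-nano"
  maxInputTokens?: number;  // token limits tambo applies to requests
  maxOutputTokens?: number;
}

const example: LlmProviderSettings = {
  provider: "openai",
  model: "gpt-4.1-nano",
  maxOutputTokens: 1024,
};
```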