# Tambo generative UI for React
URL: /
Tambo is a generative UI SDK for React. The AI dynamically decides which components to render and what props to pass based on natural language conversations.
Register your components once. The AI agent renders and controls them based on user messages and context.
**MCP-native** from the ground up — Tambo integrates the Model Context Protocol (MCP), a standardized protocol that lets AI models connect to external systems (databases, APIs, files) through a common interface.
## Why Tambo?
You don't want to write 200 lines of boilerplate to show a chart.
Tambo handles:
* ✅ AI orchestration (which component to render)
* ✅ Streaming (progressive prop updates)
* ✅ State management (persistence across conversation)
* ✅ Error handling (retries, fallbacks)
* ✅ Tool coordination (MCP servers, local functions)
You write:
* Your existing React components
* Zod schemas for props
* React hooks for advanced AI features
That's the entire API.
```tsx
// Register your chart component once
const components: TamboComponent[] = [
  {
    name: "Graph",
    description: "Displays data as charts",
    component: Graph,
    propsSchema: z.object({
      data: z.array(z.object({ name: z.string(), value: z.number() })),
      type: z.enum(["line", "bar", "pie"]),
    }),
  },
];
// 10 lines of registration → infinite use cases
```
## Key Benefits
* **No AI Expertise Needed** - If you can write React, you can build generative UIs. Use your existing design system and components.
* **MCP-Native** - Built-in support for the Model Context Protocol means plug-and-play integrations with any MCP server: your own, or external servers like Linear and Slack.
* **Pre-built UI Primitives** - Copy/paste production-ready components for forms, charts, maps, messaging, and more. Customize everything.
* **Bring Your Own LLM** - Works with OpenAI, Anthropic, Cerebras, Google, Mistral, or any OpenAI-compatible provider. Not locked into one vendor.
* **Truly Open Source** - MIT licensed React SDK and backend. Self-host with full control, or use Tambo Cloud for zero-config deployment.
## How Tambo Works
Tambo supports two component workflows:
**Generative components** (like charts) render once in response to a message, while **interactable components** (like shopping carts) persist and update across the conversation.
### Generative Components
AI renders these once in response to user messages. Best for charts, data visualizations, and summary cards.
```tsx
const components: TamboComponent[] = [
  {
    name: "Graph",
    description: "Displays data as charts using Recharts library",
    component: Graph,
    propsSchema: z.object({
      data: z.array(z.object({ name: z.string(), value: z.number() })),
      type: z.enum(["line", "bar", "pie"]),
    }),
  },
];
```
### Interactable Components
Components that persist on the page and update by ID across conversations. Perfect for shopping carts, spreadsheets, task boards, or dashboards. Pre-place them in your code, or let AI generate them dynamically.
```tsx
const InteractableNote = withInteractable(Note, {
  componentName: "Note",
  description: "A note supporting title, content, and color modifications",
  propsSchema: z.object({
    title: z.string(),
    content: z.string(),
    color: z.enum(["white", "yellow", "blue", "green"]).optional(),
  }),
});
```
## Core Workflow
### 1. Register Your Components
Tell the AI which components it can use:
```tsx
import { TamboProvider } from "@tambo-ai/react";
export function Home() {
  return (
    <TamboProvider
      apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
      components={components}
    >
      {/* Your app */}
    </TamboProvider>
  );
}
```
For apps with signed-in users, pass a per-user `userToken` (OAuth access token) to `TamboProvider` to enable per-user auth and connect Tambo to your app's end-user identity. See [User Authentication](/concepts/user-authentication) for details.
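For example, a minimal sketch of that wiring (the component name and env var here are illustrative):

```tsx
import { TamboProvider } from "@tambo-ai/react";

// `userToken` is the OAuth access token from your auth provider
export function AuthenticatedApp({
  userToken,
  children,
}: {
  userToken: string;
  children: React.ReactNode;
}) {
  return (
    <TamboProvider
      apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
      userToken={userToken}
    >
      {children}
    </TamboProvider>
  );
}
```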
### 2. Use Tambo Hooks
Send messages and render AI responses with dynamic components:
```tsx
const { thread } = useTamboThread();
const { value, setValue, submit, isPending } = useTamboThreadInput();
// Render messages, including any generated components
{thread.messages.map((message) => (
  // `renderedComponent` holds the AI-generated component, if any
  <div key={message.id}>{message.renderedComponent}</div>
))}
```
### 3. Add MCP Integrations
Connect pre-built integrations (Linear, Slack, databases) or your own custom MCP servers:
```tsx
import { MCPTransport } from "@tambo-ai/react/mcp";
const mcpServers = [
  {
    name: "filesystem",
    url: "http://localhost:8261/mcp",
    transport: MCPTransport.HTTP,
  },
];

// Pass the list to the provider that wraps your app
<TamboProvider apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!} mcpServers={mcpServers}>
  {/* Your app */}
</TamboProvider>;
```
## Key Features
* **Generative UI Components** - Render dynamic React components in response to user messages
* **Interactable Components** - Pre-place components and let AI update them across conversations
* **MCP-Native** - Built-in Model Context Protocol support for seamless integrations
* **Local Tools** - Write JavaScript functions that execute in your React app
* **Streaming Support** - Real-time streaming for all AI-generated content with UX-enhancing hooks
* **Message History** - Automatic conversation storage and management
* **State Management** - AI-integrated state hooks to persist user input and component state
* **Suggested Actions** - Generate contextual suggestions to guide users
* **Tool Orchestration** - Automatic tool call coordination during response generation
* **Model Flexibility** - Works with OpenAI, Anthropic, Cerebras, Google, Mistral, and custom providers
## Pre-built UI Components
Ready-to-use components integrated with Tambo, installable via the CLI. Explore the full catalog at [ui.tambo.co](https://ui.tambo.co).
## Get Started
Ready to build? Follow our [quickstart guide](/getting-started/quickstart).
# Coding Agent MD Rules
URL: /best-practices/coding-agent-generative-ui-rules
We highly recommend adding these rules to your coding agent's configuration file, such as `CLAUDE.md`, `AGENTS.md`, or a similar AI assistant configuration file.
Adding them will often avoid common issues and improve the assistant's Tambo integration.
```markdown
## Tambo Generative UI
- Props are `undefined` during streaming—always use `?.` and `??`
- Use `useTamboComponentState` for state the assistant needs to see
- Use `useTamboStreamStatus` when you need to control UI behavior based on streaming state
- Common `useTamboStreamStatus` use cases: disabling buttons, showing section-level loading, waiting for required fields before rendering
- String props can render as they stream; structured data like arrays/objects may stream progressively or wait for completion depending on the use case
- Generate array item IDs client-side—React keys must be stable, and AI-generated IDs are unreliable during streaming
- If the item IDs are used to fetch data, use `useTamboStreamStatus` to wait until the array is complete before rendering
- Fetch server data or derive from app state; don't have AI generate what already exists
- Use `.describe()` to guide prop generation
```
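As an illustration of the first two rules above, here is a sketch of a streaming-tolerant component. It uses only plain optional chaining and fallbacks; the schema and component names are invented for the example:

```tsx
import { z } from "zod";

const TaskListSchema = z.object({
  title: z.string().describe("Short heading for the list"),
  tasks: z.array(z.object({ label: z.string(), done: z.boolean() })),
});

// During streaming, every prop may still be undefined, so guard everything
export function TaskList({
  title,
  tasks,
}: Partial<z.infer<typeof TaskListSchema>>) {
  return (
    <section>
      <h3>{title ?? "…"}</h3>
      <ul>
        {(tasks ?? []).map((task, index) => (
          // Index keys stay stable while items stream in append-only order;
          // AI-generated IDs may change mid-stream, so don't key on them
          <li key={index}>
            {task?.done ? "✓" : "•"} {task?.label ?? ""}
          </li>
        ))}
      </ul>
    </section>
  );
}
```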
# Component Props and Performance
URL: /best-practices/component-data-props
When Tambo chooses a component to use, it will look at the component's `propsSchema` to know what props to pass to the component, and how they should be structured.
If the size of the props object passed to a component is large, it can impact Tambo's performance, since Tambo needs to generate more data. Essentially, the response time of a request to Tambo is directly related to, and mostly determined by, the size (token count) of the props object generated for the chosen component.
This is most relevant when using components that need to show real data, such as a component that shows a list of fetched objects.
For example, consider this component which shows a list of meetings:
```tsx
type Meeting = { id: string; title: string; date: string; location: string; attendees: string[] };

const MeetingsList = ({ meetings }: { meetings: Meeting[] }) => (
  <ul>
    {meetings.map((meeting) => (
      <li key={meeting.id}>{meeting.title}</li>
    ))}
  </ul>
);
```
Where the registration of the component might be:
```tsx
registerComponent({
  name: "MeetingsList",
  description: "A list of meetings",
  component: MeetingsList,
  propsSchema: z.object({
    meetings: z.array(
      z.object({
        id: z.string(),
        title: z.string(),
        date: z.string(),
        location: z.string(),
        attendees: z.array(z.string()),
      }),
    ),
  }),
});
```
When the user asks for a list of meetings and Tambo decides to use this component, Tambo will need to generate every meeting object. If there are many meetings, this can take a long time.
### A Workaround
If the problem is the size of the props object, one way to solve it is to structure the component's props such that they will always be small.
Instead of having Tambo generate the list of meeting objects, we can have Tambo generate a 'meetings request', and have the component fetch its own data.
Here's what the `MeetingsList` component would look like restructured in this way:
```tsx
const MeetingsList = ({
  meetingsRequest,
}: {
  meetingsRequest: { startDate: string; endDate: string };
}) => {
  const { startDate, endDate } = meetingsRequest;
  const [meetings, setMeetings] = useState<Meeting[]>([]);

  useEffect(() => {
    const fetchMeetings = async () => {
      // Assuming there is a getMeetings function to fetch meetings somewhere
      setMeetings(await getMeetings(startDate, endDate));
    };
    fetchMeetings();
  }, [startDate, endDate]);

  return (
    <ul>
      {meetings.map((meeting) => (
        <li key={meeting.id}>{meeting.title}</li>
      ))}
    </ul>
  );
};
```
Where the registration of the component would be updated to:
```tsx
Tambo.registerComponent({
  name: "MeetingsList",
  description: "A list of meetings",
  component: MeetingsList,
  propsSchema: z.object({
    meetingsRequest: z.object({
      startDate: z.string(),
      endDate: z.string(),
    }),
  }),
});
```
Now, Tambo only needs to generate the `startDate` and `endDate` strings.
### Considerations
In the restructured `MeetingsList` above, we used `startDate` and `endDate` as the props. This assumes the meetings API we fetch data from accepts `startDate` and `endDate` parameters. In general, the request object we allow Tambo to generate should match the parameters of the API we fetch data from.
This also means that when structuring components this way, Tambo can only filter in ways the API allows. For example, if the user asks for meetings with people named 'John', but the API (and the component props) only supports filtering by `startDate` and `endDate`, Tambo cannot filter by 'John'. With the original `MeetingsList` component, Tambo could look at the list of meetings in context and decide which meetings to pass to the component based entirely on the user's message.
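If the API did support an additional filter, you could expose it as an optional field on the request object so Tambo can use it when relevant. A sketch, with field names assumed:

```tsx
import { z } from "zod";

// Expose only the filters the backing meetings API actually supports
const meetingsListProps = z.object({
  meetingsRequest: z.object({
    startDate: z.string().describe("ISO date, inclusive"),
    endDate: z.string().describe("ISO date, inclusive"),
    attendeeName: z
      .string()
      .optional()
      .describe("Only include meetings with this attendee"),
  }),
});
```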
# Additional Context
URL: /concepts/additional-context
When Tambo's response to a user's message needs to depend on information other than the message's text, additional context can be added to user messages sent to Tambo.
## Context Helper Functions
You can define and register any function as a Context Helper function. These are called every time a message is sent to Tambo, and their result is added to the message's additional context for Tambo to consider.
Context Helpers are useful for giving Tambo information that might change between messages, like the user's current time, the user's current page, or some state object.
```tsx
<TamboProvider
  apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
  contextHelpers={[
    {
      // Field names are illustrative; see the linked guide for the exact API
      name: "currentTime",
      context: () => ({ time: new Date().toISOString() }),
    },
  ]}
>
  {/* Your app */}
</TamboProvider>
```
For detailed instructions on configuring Context Helpers, see the [Make AI Aware of State guide](/guides/give-context/make-ai-aware-of-state).
## Context Attachments
Context Attachments are one-time additions that are added as additional context to the next user message and then cleared. Stage any object as additional context to be added to the next message using `addContextAttachment`.
Context Attachments are useful for situations where the additional context should depend on a user action. For example, in an application that allows users to select files and ask Tambo about them, you might add the file as a Context Attachment when the user clicks the file.
```tsx
const { addContextAttachment } = useTamboContextAttachment();

function handleSelectFile(file) {
  addContextAttachment({
    context: file.content,
    displayName: file.name,
    type: "file",
  });
}
```
For detailed instructions on using Context Attachments, see the [Let Users Attach Context guide](/guides/give-context/let-users-attach-context).
## Resources
Resources are a form of additional context that usually come from attached MCP servers, although you can define local resources in your app. The list of available resources is stored by Tambo and exposed, enabling a UX where users can reference resources by typing @ and selecting what they want to include as additional context along with their message.
For more information on resources, see the [Resources guide](/guides/give-context/make-context-referenceable).
# Agent Configuration
URL: /concepts/agent-configuration
Messages sent from your app to Tambo are processed by the Tambo agent, which you can configure through your project settings page. You can add custom instructions to change Tambo's behavior and personality, configure the underlying LLM model and its options, connect MCP servers to extend capabilities, and set up user authentication.
## Custom Instructions
Custom instructions define how Tambo behaves and responds to users.
When you add custom instructions, you're customizing Tambo's "system prompt". This is where you define the agent's role (is it a support agent? a creative partner? a data analyst?), set the tone and personality (formal or casual? friendly or professional?), establish behavioral guidelines for handling different types of requests, and give your AI a name or identity to help users understand who they're talking to.
Here's how you might configure custom instructions for a creative writing application:
```
You are a creative writing partner helping authors develop their stories. You're imaginative, collaborative, and supportive. When users share ideas:
- Ask thoughtful questions to deepen their concepts
- Suggest creative alternatives and plot twists
- Help brainstorm character motivations and arcs
- Maintain an encouraging, enthusiastic tone
- Avoid being prescriptive - guide, don't dictate
```
These instructions persist across all conversations in your project, ensuring consistent agent behavior. Users will experience the same personality and approach regardless of which thread they're in.
## LLM Configuration
The LLM configuration determines which AI model powers your Tambo agent and how it behaves. You can change the provider, select specific models, and fine-tune parameters to match your application's needs.
### Choosing a Provider and Model
Tambo integrates with major AI providers including OpenAI, Anthropic, Google, Groq, and Mistral. Each provider offers different models optimized for various use cases. See the [Models and Providers](/reference/llm-providers) page for a complete list of available providers and their capabilities. You can also use any OpenAI-compatible model by selecting the "OpenAI Compatible" provider and specifying a custom endpoint.
Each project starts with free messages to help you get started and test your configuration. Once you've used your free messages, you can add your own API key for the underlying LLM provider.
### Custom Parameters
If you need to adjust model-specific behavior, you can configure custom parameters. Temperature controls randomness in responses (0.0 = deterministic, 1.0 = creative), max tokens limits response length, top P controls diversity via nucleus sampling, and top K limits candidate tokens for each generation step. These parameters are stored as JSON and applied when threads are created. Different models support different parameters, so refer to your provider's documentation for available options.
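As a rough illustration, a custom parameters object might look like the following; the exact key names vary by provider, so treat these as placeholders:

```json
{
  "temperature": 0.2,
  "max_tokens": 1024,
  "top_p": 0.9,
  "top_k": 40
}
```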
For detailed instructions on configuring LLM providers and parameters, see the [Configure LLM Provider guide](/guides/setup-project/llm-provider).
## MCP Server Connection
Model Context Protocol servers extend your Tambo agent's capabilities by providing access to external tools, data sources, and services. You can connect MCP servers directly from your project settings page, making their capabilities available to all users of your application.
Once connected, your agent can call tools to perform actions and retrieve data, access resources like files, documents, and databases, and use prompts for structured interactions. MCP connections are configured at the project level and shared across all users. This means you set it up once, and every conversation in your project has access to those capabilities.
To learn more about MCP, how it works, and what you can do with it, see the [Model Context Protocol concept page](/concepts/model-context-protocol).
## User Authentication
User authentication determines how Tambo identifies and isolates users in your application. Each user has their own threads and messages, kept separate from other users' data through secure token-based authentication.
For more information on how authentication works and how to integrate with your provider, see the [User Authentication concept page](/concepts/user-authentication).
# Conversation Storage
URL: /concepts/conversation-storage
Every conversation with Tambo is automatically stored. When a user sends a message, Tambo persists the message content, any additional context, and the complete response including text, tool calls, and generated components. You don't need to configure databases or write persistence logic. Conversations are immediately available through the React SDK and visible in your project dashboard.
This automatic persistence enables users to return to previous conversations, and the full conversation history informs future Tambo responses.
## What Gets Stored
### Threads
Threads are the containers for conversations. Each thread tracks a single conversation with its own unique ID and maintains metadata about that conversation. A thread knows which project it belongs to, when it was created and last updated, and what stage of generation it's currently in (idle, processing, complete, or error).
Threads can include optional metadata like custom properties or a context key for organizing related conversations. When Tambo is generating a response, the thread tracks what stage it's in and provides a human-readable status message describing what's happening.
### Messages
Messages are the actual content within threads. Each message belongs to a specific thread and has a role indicating who sent it: the user, Tambo (assistant), the system (for instructions), or a tool (for function results).
Messages contain content parts that can be text, images, audio, or other media types. When Tambo responds with a [generative component](/concepts/generative-interfaces/generative-components), the component definition and props are attached to the message. This allows the component to be re-rendered when loading conversation history.
If components use `useTamboComponentState` to track state, that state is also persisted with the message. When re-rendering a thread, components restore their state from storage, showing users exactly what they last saw.
Messages can also include additional context that was provided when they were sent, any errors that occurred during generation, and metadata like whether the message was cancelled.
## Accessing Stored Conversations
### Through the React SDK
The React SDK provides hooks for accessing stored conversations. Use `useTamboThread()` to work with the current thread, or `useTamboThreadList()` to access all threads for a user:
```tsx
import { useTamboThread, useTamboThreadList } from "@tambo-ai/react";
// Access current thread and its messages
const { thread } = useTamboThread();
// Access all stored threads for the current project
const { data: threads, isLoading, error, refetch } = useTamboThreadList();
```
The SDK automatically fetches thread data, provides real-time updates as new messages arrive, caches data to minimize network requests, and triggers re-renders when thread state changes.
### Through the Project Dashboard
Your Tambo Cloud dashboard provides visibility into all conversations stored in your project. You can see the list of all threads, view complete message history for each thread, check thread status and metadata, and search or filter conversations. This is useful for monitoring how users interact with your application and debugging issues.
## Building Conversation Interfaces
Tambo offers pre-built UI components for common conversation patterns like chat interfaces, thread navigation, and input forms. These connect directly to stored conversations and handle all data fetching and rendering automatically.
For custom interfaces, the React SDK provides direct access to stored conversation data. You can build any UI pattern (chat, canvas, dashboard, or hybrid) with full control over presentation while Tambo handles storage and retrieval.
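For instance, a bare-bones thread picker might look like this. It assumes `data` resolves to an array of threads exposing an `id` field, so check the React SDK reference for the exact shape:

```tsx
"use client";

import { useTamboThreadList } from "@tambo-ai/react";

export function ThreadPicker({ onSelect }: { onSelect: (id: string) => void }) {
  const { data: threads, isLoading, error } = useTamboThreadList();

  if (isLoading) return <p>Loading threads…</p>;
  if (error) return <p>Could not load threads.</p>;

  return (
    <ul>
      {threads?.map((thread) => (
        <li key={thread.id}>
          <button onClick={() => onSelect(thread.id)}>{thread.id}</button>
        </li>
      ))}
    </ul>
  );
}
```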
# Suggestions
URL: /concepts/suggestions
Suggestions documentation has moved to the [Build Custom Conversation UI](/guides/build-interfaces/build-chat-interface) guide. See its "Add Contextual Suggestions" section for implementation details.
# Tools
URL: /concepts/tools
Tools are functions that allow Tambo to take actions in response to natural language. When a user asks "What's the weather in San Francisco?", Tambo can use a registered tool that calls a weather API to retrieve current data, and then use that data in the response or to populate a Generative Component.
## Tambo's Approaches for Enabling Tools
### Local Tools
Local tools are JavaScript functions that execute in your React app, defined by you and registered with Tambo. When processing a user message, Tambo may decide to use tools you have registered to retrieve information or to take actions. Local tools allow you to easily extend the capabilities of Tambo.
```tsx
const getWeather = (city: string) => {
  return `The weather in ${city} is warm and sunny!`;
};

export const weatherTool: TamboTool = {
  name: "getWeather",
  description: "A tool to get the current weather conditions of a city",
  tool: getWeather,
  inputSchema: z
    .string()
    .describe("The city name to get weather information for"),
  outputSchema: z.string(),
};

// Register it, e.g. via <TamboProvider tools={[weatherTool]}>
```
## How Tools Execute
### Default Execution
By default, Tambo waits until tool arguments are fully generated before executing the tool. This ensures your tool receives complete, valid data before performing its action. For most tools—especially those that make API calls, update databases, or have side effects—this is the desired behavior.
### Streamable Execution
The `annotations.tamboStreamableHint` option enables incremental tool execution during streaming. When set to `true`, the tool will be called multiple times with partial arguments as they stream in, rather than waiting for complete arguments.
This follows the [MCP ToolAnnotations specification](https://modelcontextprotocol.io/specification/2025-06-18/schema#toolannotations). This is useful for tools that can handle partial data gracefully and benefit from incremental execution—such as updating state, building visualizations progressively, or providing real-time feedback.
### Execution Behavior Details
When `annotations.tamboStreamableHint: true`:
1. **Partial execution**: The tool is called with partial arguments as they stream in
2. **Error handling**: If any partial call throws an error, streaming continues, and only the final call's error (if any) is reported
3. **Final result**: Only the final call's return value (when arguments are complete) is sent back to the AI
Intermediate calls allow for progressive execution.
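A sketch of a tool that opts into this behavior; the shape mirrors the `weatherTool` example above, and the `annotations` field follows the table below:

```tsx
import { z } from "zod";
import type { TamboTool } from "@tambo-ai/react";

// Safe to call repeatedly: each call just overwrites the draft with the latest text
let draft = "";
const updateDraft = (text: string) => {
  draft = text;
  return `Draft is now ${draft.length} characters`;
};

export const draftTool: TamboTool = {
  name: "updateDraft",
  description: "Progressively updates a draft document as text streams in",
  tool: updateDraft,
  inputSchema: z.string().describe("The latest partial draft text"),
  outputSchema: z.string(),
  annotations: {
    tamboStreamableHint: true, // called with partial args while streaming
    idempotentHint: true,
  },
};
```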
### Tool Annotations
Tambo supports [MCP ToolAnnotations](https://modelcontextprotocol.io/specification/2025-06-18/schema#toolannotations) to describe tool behavior:
| Annotation | Default | Description |
| --------------------- | ------- | ------------------------------------------------------------------------------------ |
| `title` | - | Human-readable title for the tool. |
| `readOnlyHint` | `false` | Tool doesn't modify its environment. |
| `destructiveHint` | `true` | Tool may perform destructive updates (only meaningful when `readOnlyHint` is false). |
| `idempotentHint` | `false` | Calling repeatedly with same args has no additional effect. |
| `openWorldHint` | `false` | Tool may interact with external entities. |
| `tamboStreamableHint` | `false` | Tool is safe to call repeatedly with partial arguments during streaming. |
The `tamboStreamableHint` annotation is Tambo's key enabler for incremental execution. For practical examples of implementing streamable tools, see [Register Tools](/guides/take-actions/register-tools#enable-streaming-execution-optional).
### MCP Tools
MCP tools come from external servers that package tool definitions and implementations into reusable services. Services like Linear and GitHub provide MCP servers, giving you powerful integrations without writing them yourself; you can also build your own MCP server to expose your API to Tambo. MCP tools are automatically discovered and made available to Tambo when you connect to an MCP server (either server-side or client-side).
# User Authentication
URL: /concepts/user-authentication
In a Tambo application, each user has their own threads and messages, isolated from other users' data. This user isolation is achieved through secure token-based authentication.
## How Tambo Authentication Works
Tambo uses OAuth 2.0 [Token Exchange](https://datatracker.ietf.org/doc/html/rfc8693) to securely identify users. Here's what happens:
1. **Your app authenticates the user** with your chosen OAuth provider (Auth0, Clerk, etc.)
2. **Your app receives a JWT token** from the provider containing user information
3. **Your app exchanges this token with Tambo** via the `/oauth/token` endpoint
4. **Tambo returns a Tambo-specific token** that identifies the user for all subsequent API calls
## Token Requirements
Tambo supports any OAuth 2.0 provider that issues a [JSON Web Token (JWT)](https://datatracker.ietf.org/doc/html/rfc7519) with:
* A `sub` (subject) claim identifying the user
* Proper signature for verification (when JWT verification is enabled)
This includes most popular providers like Google, Microsoft, Auth0, Clerk, and others.
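For reference, a decoded JWT payload that satisfies these requirements could look like this (only `sub` matters to Tambo; the other claims are standard JWT fields shown for illustration):

```json
{
  "sub": "user_12345",
  "iss": "https://your-auth-provider.example.com/",
  "aud": "your-app",
  "exp": 1767225600
}
```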
## Implementation Approaches
### Server-Side Token Retrieval (Recommended)
**Best for**: Most applications, especially those requiring server-side rendering or enhanced security.
* Tokens are retrieved on the server during page rendering
* More secure as tokens never appear in client-side JavaScript
* Better for SEO and initial page load performance
* Handles authentication state before the client renders
### Client-Side Token Retrieval
**Best for**: Highly interactive applications that need real-time authentication state changes.
* Tokens are retrieved in the browser after the page loads
* Allows for real-time authentication state management
* Required when using client-side routing with authentication guards
* May show loading states during token retrieval
## Using TamboProvider
The `TamboProvider` component from `@tambo-ai/react` handles the token exchange process automatically:
```tsx title="Basic Setup"
"use client";
// TamboProvider must be in a client component to manage authentication state
import { TamboProvider } from "@tambo-ai/react";
export default function Layout({ children }: { children: React.ReactNode }) {
const userToken = useUserToken(); // Get token from your auth provider
  return (
    <TamboProvider
      apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
      userToken={userToken}
    >
      {children}
    </TamboProvider>
  );
}
```
TamboProvider needs to be in a client component because it manages
authentication state, handles token refresh, and provides React context to
child components. Server components cannot manage state or provide React
context.
## Provider-Specific Integration Guides
For detailed integration examples with popular authentication providers, see the following guides:
* **Auth.js** - Learn how to integrate Tambo with Auth.js, using Google OAuth as an example.
* **Auth0** - Step-by-step guide for integrating Tambo with Auth0 authentication.
* **Clerk** - Complete example of using Tambo with Clerk's authentication system.
* **Supabase Auth** - Integration guide for Supabase Auth with Tambo in Next.js applications.
* **Neon** - How to use Tambo with Auth.js and Neon PostgreSQL database integration.
* **WorkOS** - Enterprise-grade authentication with WorkOS and Tambo integration.
* **Better Auth** - Modern authentication toolkit with built-in support for multiple providers and plugins.
## JWT Verification Strategies
When your OAuth provider supports OpenID Connect Discovery (most do), Tambo automatically verifies tokens. For providers that don't, you can configure verification in your project dashboard:
* **OpenID Connect Discovery** (Default): Automatic verification using the provider's public keys
* **Asymmetric JWT Verification**: Manual verification using a provided public key
* **Symmetric JWT Verification**: Verification using a shared secret (testing only)
* **None**: No verification (development only)
Supabase Auth doesn't support asymmetric JWT verification. You'll need to
disable JWT verification in your Tambo project settings when using Supabase.
All verification strategies can be configured in your project dashboard under Settings > User Authentication.
# Components
URL: /getting-started/components
## Component Library
Our component library provides ready-to-use UI elements specifically designed for AI interactions:
* Chat interfaces
* Message threads
* Input fields
* Response indicators
* Suggestion buttons
* Generative UI components
## Explore Our Components
For a complete showcase of all available components, interactive examples, and implementation guides, visit our dedicated component site:
Explore our full component library at [ui.tambo.co](https://ui.tambo.co).
## Try our CLI command
The quickest way to add Tambo components to your project is with our CLI:
```bash
npx tambo full-send
```
This will install a set of common components and hook them up to Tambo in your project.
For more customization options and advanced component usage, refer to the component documentation at [ui.tambo.co](https://ui.tambo.co).
# Integrate Tambo into an existing React app
URL: /getting-started/integrate
This example assumes an application using Next.js, but Next.js is not required.
If you have an existing React application and want to add Tambo functionality, follow these steps:
## Step 1: Install tambo-ai
```bash
npx tambo full-send
```
This command will:
* Setup Tambo in your existing project and get you an API key
* Install components that are hooked up to tambo-ai
* Show you how to wrap your app with the `TamboProvider`
If you prefer manual setup, you can run `npx tambo init` which will just get you set up with an API key. If you don't have an account yet, sign up for free first.
## Step 2: Add the TamboProvider
Once the installation completes, update your `src/app/layout.tsx` file to enable Tambo:
```tsx title="src/app/layout.tsx"
"use client";
import { TamboProvider } from "@tambo-ai/react";
export default function RootLayout({
children,
}: {
children: React.ReactNode;
}) {
  return (
    <TamboProvider apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}>
      {children}
    </TamboProvider>
  );
}
```
You need to create a `.env.local` file in the root of your project to store
your Tambo API key: `NEXT_PUBLIC_TAMBO_API_KEY=your_api_key_here`
Replace `your_api_key_here` with the actual API key you received during setup.
This file should not be committed to version control as it contains sensitive
information.
Note that the `TamboProvider` only works in the browser. On Next.js, specify
`"use client"` at the top of your file to ensure that the `TamboProvider` is
rendered in the browser.
## Step 3: Add the MessageThreadCollapsible component
The `MessageThreadCollapsible` component that the `full-send` command installed provides a complete chat interface for your AI assistant. Here's how to add it to your `src/app/page.tsx` file:
```tsx title="src/app/page.tsx"
"use client";
import { MessageThreadCollapsible } from "../source/components/message-thread-collapsible";
export default function Home() {
  return <MessageThreadCollapsible />;
}
```
## Step 4: Run your app
```bash
npm run dev
```
When you are done, you should see a collapsible chat interface on the page.
# Tambo quickstart template
URL: /getting-started/quickstart
Download and run our generative UI chat template to get an understanding of the fundamental features of Tambo. This application will show you how to build generative UI, integrate tools, send messages to Tambo, and stream responses.
## Set up the Tambo starter app
### 1. Download the starter app
```bash title="Create and navigate to your project"
npm create tambo-app@latest my-tambo-app && cd my-tambo-app
```
This will copy the source code of the template app into your directory and install the dependencies. See the [source repo](https://github.com/tambo-ai/tambo-template).
### 2. Create a Tambo API key
```bash title="Initialize Tambo"
npx tambo init
```
To send messages to Tambo, you need to create a Project through Tambo and generate an API key to send with requests. This command will walk you through the setup of your first Tambo project, generate an API key, and set the API key in your project automatically.
### 3. Run the app
```bash title="Run the app"
npm run dev
```
Start the app and go to `localhost:3000` in your browser to start sending messages to Tambo!
## Customize your Tambo template
To get a better understanding of what's happening in this application, try to make a change to update Tambo's capabilities.
In `/src/lib/tambo.ts` you'll see how the template registers components and tools with Tambo. In `/src/app/chat/page.tsx` you'll see how those tools and components are passed to the `TamboProvider` to 'register' them.
### Add a Tambo component
Let's create and register a new component with Tambo to give our AI chat a new feature.
In `src/components` create a new file called `recipe-card.tsx` and paste the following code into it:
```tsx title="recipe-card.tsx"
"use client";
import { ChefHat, Clock, Minus, Plus, Users } from "lucide-react";
import { useState } from "react";
interface Ingredient {
name: string;
amount: number;
unit: string;
}
interface RecipeCardProps {
title?: string;
description?: string;
ingredients?: Ingredient[];
prepTime?: number; // in minutes
cookTime?: number; // in minutes
originalServings?: number;
}
export default function RecipeCard({
title,
description,
ingredients,
prepTime = 0,
cookTime = 0,
originalServings,
}: RecipeCardProps) {
const [servings, setServings] = useState(originalServings || 1);
const scaleFactor = servings / (originalServings || 1);
const handleServingsChange = (newServings: number) => {
if (newServings > 0) {
setServings(newServings);
}
};
const formatTime = (minutes: number) => {
if (minutes < 60) {
return `${minutes}m`;
}
const hours = Math.floor(minutes / 60);
const remainingMinutes = minutes % 60;
return remainingMinutes > 0
? `${hours}h ${remainingMinutes}m`
: `${hours}h`;
};
const totalTime = prepTime + cookTime;
return (
);
}
```
Register it with Tambo by updating the `components` array in `src/lib/tambo.ts` with the following entry:
```tsx title="tambo.ts"
{
  name: "RecipeCard",
  description: "A component that renders a recipe card",
  component: RecipeCard,
  propsSchema: z.object({
    title: z.string().describe("The title of the recipe"),
    description: z.string().describe("The description of the recipe"),
    prepTime: z.number().describe("The prep time of the recipe in minutes"),
    cookTime: z.number().describe("The cook time of the recipe in minutes"),
    originalServings: z
      .number()
      .describe("The original servings of the recipe"),
    ingredients: z
      .array(
        z.object({
          name: z.string().describe("The name of the ingredient"),
          amount: z.number().describe("The amount of the ingredient"),
          unit: z.string().describe("The unit of the ingredient"),
        }),
      )
      .describe("The ingredients of the recipe"),
  }),
},
```
Now refresh the browser page and send a message like "Show me a recipe" and you should see Tambo generate and stream in an instance of your `RecipeCard` component.
### Add a Tambo tool
You might notice that when using our added `RecipeCard` component above, Tambo generates recipe data completely from scratch. To allow Tambo to retrieve the list of ingredients we actually have, we can add a tool to get them.
In `src/lib/tambo.ts` add the following entry to the `tools` array:
```tsx title="tambo.ts"
{
  name: "get-available-ingredients",
  description:
    "Get a list of all the available ingredients that can be used in a recipe.",
  tool: () => [
    "pizza dough",
    "mozzarella cheese",
    "tomatoes",
    "basil",
    "olive oil",
    "chicken breast",
    "ground beef",
    "onions",
    "garlic",
    "bell peppers",
    "mushrooms",
    "pasta",
    "rice",
    "eggs",
    "bread",
  ],
  inputSchema: z.object({}),
  outputSchema: z.array(z.string()),
},
```
Now refresh the browser page and send a message like "Show me a recipe I can make" and you should see Tambo look for the available ingredients and then generate a `RecipeCard` using them.
### Going Further
This template app just scratches the surface of what you can build with Tambo. By using Tambo in creative ways you can make truly magical custom user experiences!
# Chat Starter App
URL: /examples-and-templates/chat-starter-app
[Github](https://github.com/tambo-ai/tambo-template?tab=readme-ov-file#tambo-template)
```bash title="Install the starter app:"
npm create tambo-app@latest my-tambo-app
```
This template app shows how to set up the fundamental parts of an AI application using tambo:
**Component registration**
See in [src/lib/tambo.ts](https://github.com/tambo-ai/tambo-template/blob/main/src/lib/tambo.ts) how a graph component is registered with tambo.
**Tool Registration**
See in [src/lib/tambo.ts](https://github.com/tambo-ai/tambo-template/blob/main/src/lib/tambo.ts) how population data tools are registered with tambo.
**UI for sending messages to tambo and showing responses**
The components used within [src/components/tambo/message-thread-full.tsx](https://github.com/tambo-ai/tambo-template/blob/main/src/components/tambo/message-thread-full.tsx) use hooks from tambo's react SDK to send messages and show the thread history.
**Wrap the app with the `TamboProvider`**
In [src/app/chat/page.tsx](https://github.com/tambo-ai/tambo-template/blob/main/src/app/chat/page.tsx) we wrap the page with the `TamboProvider` to enable the usage of tambo's react SDK within the message sending and thread UI components.
# Supabase MCP Client App
URL: /examples-and-templates/supabase-mcp-client
Add MCP server tools and UI tools to a React app using tambo to build an AI app with minimal custom code.
Use this as a starting point to build apps to interact with any MCP server.
[Github](https://github.com/tambo-ai/supabase-mcp-client/tree/main?tab=readme-ov-file#supabase-mcp-client-react-app)
This application makes use of Tambo's MCP integration to easily add the tools defined by the official [Supabase MCP server](https://github.com/supabase-community/supabase-mcp).
Custom React components that show interactive visualizations of the responses from the Supabase tools are registered with tambo in [src/lib/tambo.ts](https://github.com/tambo-ai/supabase-mcp-client/blob/main/src/lib/tambo.ts).
# Connect MCP Servers
URL: /guides/connect-mcp-servers
Learn how to connect MCP servers to Tambo, giving your AI access to external tools, data sources, and services.
## What You Can Do With MCP
MCP servers extend Tambo's capabilities by providing:
* **Tools** - Functions the AI can execute (query databases, call APIs, perform calculations)
* **Resources** - Access to files, documentation, and data sources
* **Prompts** - Pre-configured prompt templates with context
* **Elicitations** - Interactive data collection from users
## Connection Methods
Tambo supports two ways to connect MCP servers:
### Server-Side Connections (Recommended)
Configure MCP servers through the Tambo dashboard. These run on Tambo's backend infrastructure and provide the most efficient communication.
**Best for:** Production applications, shared services, OAuth-authenticated services
**Setup:**
1. Navigate to your project in the [Tambo Cloud dashboard](https://console.tambo.co)
2. Go to the "MCP Servers" section
3. Click "Add Server"
4. Enter the server connection details:
* **Server Name** - A descriptive name for your reference
* **Server URL** - The MCP server endpoint
* **Authentication** - Configure OAuth or API key headers if required
5. Click "Connect" to establish the connection
6. Test the connection by viewing available tools and resources
**Server features are now available to your AI agent automatically.** When users ask questions or request actions that match the server's capabilities, Tambo will use the appropriate tools and resources.
### Client-Side Connections
Connect MCP servers from your browser using the `@tambo-ai/react` SDK. This approach runs the MCP connection in the user's browser.
**Best for:** Development, testing, user-specific credentials, local services
**Setup:**
> **Dependency note**
>
> The `@tambo-ai/react/mcp` subpath declares `@modelcontextprotocol/sdk`, `zod`, and `zod-to-json-schema` as optional peer dependencies. If you import this subpath, install these packages:
>
> ```bash
> npm install @modelcontextprotocol/sdk@^1.24.0 zod@^4.0.0 zod-to-json-schema@^3.25.0
> ```
Configure MCP servers in your React application:
```tsx title="app/layout.tsx"
import { TamboProvider } from "@tambo-ai/react";
import { MCPTransport } from "@tambo-ai/react/mcp";
const mcpServers = [
  {
    name: "filesystem",
    url: "http://localhost:8261/mcp",
    transport: MCPTransport.HTTP,
  },
];

export default function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  return (
    <TamboProvider
      apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
      mcpServers={mcpServers}
    >
      {children}
    </TamboProvider>
  );
}
```
`TamboProvider` automatically establishes connections to the specified MCP servers and makes their capabilities available to Tambo.
**Transport options:**
* `MCPTransport.HTTP` (default) - For HTTP-based MCP servers
* `MCPTransport.SSE` - For Server-Sent Events transport
**Note:** Client-side connections require the MCP server to support browser-based connections. Most MCP servers support HTTP transport, which works well from browsers.
## Authentication
### OAuth Authentication (Server-Side Only)
For OAuth-protected MCP servers:
1. Configure the OAuth flow in the Tambo dashboard
2. Complete the OAuth authorization when prompted
3. Tambo will handle token refresh automatically
**Current limitation:** OAuth tokens are shared across all users in your project. All users will act as the same identity when using OAuth-authenticated servers. Per-user OAuth is planned for a future release.
### API Key Authentication
For API key-based servers:
**Server-side:** Configure custom headers in the dashboard with your API key.
**Client-side:** Pass headers in the `customHeaders` field:
```tsx
<TamboProvider
  apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
  // header name is illustrative; use whatever your server expects
  mcpServers={[{ name: "my-server", url: "https://example.com/mcp", customHeaders: { "x-api-key": "your-api-key" } }]}
>
  {children}
</TamboProvider>
```
## Testing Your Connection
After connecting an MCP server, verify it's working:
1. In the Tambo dashboard, view the "Tools" and "Resources" tabs to see what's available
2. Test a simple tool execution from the dashboard
3. In your application, ask the AI a question that requires the MCP server's capabilities
4. Check that the AI successfully uses the tools and returns results
## Common MCP Servers
Popular MCP servers you can connect:
* **GitHub** - Access repositories, issues, and pull requests
* **Linear** - Manage issues and projects
* **Supabase** - Query your database
* **Filesystem** - Access local files and documents
* **Google Drive** - Read and search documents
* **Slack** - Send messages and search conversations
Many applications provide MCP servers. Check the [MCP server directory](https://modelcontextprotocol.io/servers) for available integrations.
## Troubleshooting
### Server won't connect
* Verify the server URL is correct and accessible
* Check that authentication credentials are valid
* For client-side connections, ensure the server supports HTTP or SSE transport
* Review browser console for connection errors
### Tools aren't being used
* Verify the server connection is active in the dashboard
* Check that tool descriptions clearly explain what they do
* Ensure the user's question matches the tool's capabilities
* Review the AI's instructions to ensure it's allowed to use tools
### OAuth errors
* Re-authenticate the OAuth connection in the dashboard
* Verify the OAuth app has the required scopes
* Check that the OAuth redirect URL matches your configuration
## Next Steps
* [Understand MCP concepts](/concepts/model-context-protocol) - Learn how MCP works architecturally
* [Give Tambo Components to Generate](/guides/enable-generative-ui/register-components) - Display MCP data in custom UI components
* [Build a Custom Chat Interface](/guides/build-interfaces/build-chat-interface) - Create conversation UI with MCP features
# REST API
URL: /reference/rest-api
The Tambo Cloud REST API provides programmatic access to threads, messages, and AI generation capabilities.
## OpenAPI Specification
The complete REST API is documented via an interactive OpenAPI specification available at:
**[api.tambo.co/api](https://api.tambo.co/api)**
The OpenAPI spec includes:
* All available endpoints with request/response schemas
* Authentication requirements
* Interactive "Try it out" functionality
* Code generation for multiple languages
## Authentication
All API requests require a Bearer token in the Authorization header:
```bash
curl -H "Authorization: Bearer YOUR_API_KEY" \
https://api.tambo.co/threads
```
Get your API key from the [Tambo Cloud dashboard](https://tambo.co).
## Common Endpoints
| Endpoint | Method | Description |
| ---------------------------- | ------ | -------------------------------------- |
| `/threads` | GET | List all threads for your project |
| `/threads` | POST | Create a new thread |
| `/threads/:id` | GET | Get a specific thread with messages |
| `/threads/:id/advance` | POST | Send a message and get AI response |
| `/threads/:id/advancestream` | POST | Send a message with streaming response |
## Error Responses
The API uses [RFC 9457 Problem Details](https://www.rfc-editor.org/rfc/rfc9457.html) for error responses. Each error response includes a `type` field with a URI that links to detailed documentation about that specific problem.
### Common Problem Types
* **[Endpoint Deprecated](/reference/problems/endpoint-deprecated)** - Returned when calling a deprecated endpoint (410 Gone)
## SDKs
For most use cases, we recommend using one of the official SDKs instead of direct REST calls:
* **React**: [`@tambo-ai/react`](/reference/react-sdk) - Full-featured React SDK with hooks and providers
* **TypeScript/Node**: [`@tambo-ai/typescript-sdk`](https://www.npmjs.com/package/@tambo-ai/typescript-sdk) - Core TypeScript client
# Tambo MCP Server
URL: /tambo-mcp-server
Tambo's MCP server provides tools for an LLM to retrieve information from our documentation about how to use Tambo. Add our MCP server to your IDE to give your coding assistant knowledge of how to use Tambo.
## Quick Setup
The fastest way to add the Tambo MCP server is to click the "Add to IDE" button on our homepage and select your preferred IDE. For other IDEs or manual setup, follow the instructions below.
## Setup Instructions
**Cursor**
1. Click the install button on the homepage, or go directly to your global MCP configuration file at `~/.cursor/mcp.json`.
2. Add the following configuration:
```json
"mcpServers": {
  "tambo": {
    "url": "https://mcp.tambo.co/mcp"
  }
}
```
3. Save the file
4. Restart Cursor
**Claude Desktop**
1. Open Claude Desktop
2. Go to Settings (from the Claude menu) → Developer → Edit Config
3. Add the following configuration:
```json
"mcpServers": {
  "tambo": {
    "url": "https://mcp.tambo.co/mcp"
  }
}
```
4. Save the file
5. Restart Claude Desktop
**Claude Code**
```bash
claude mcp add --transport sse tambo-server https://mcp.tambo.co/mcp
```
**VS Code**
1. Go to your workspace MCP configuration file at `.vscode/mcp.json`.
2. Add the following configuration:
```json
"servers": {
  "tambo": {
    "type": "http",
    "url": "https://mcp.tambo.co/mcp"
  }
}
```
3. Save the file
4. Restart VS Code
5. Alternatively, run the **MCP: Add Server** command from the Command Palette, choose the type of MCP server to add, and provide the server information. Then select Workspace Settings to create the `.vscode/mcp.json` file in your workspace if it doesn't already exist.
**Windsurf**
1. Click the hammer icon (🔨) in Cascade
2. Click Configure to open `~/.codeium/windsurf/mcp_config.json`
3. Add the following configuration:
```json
"mcpServers": {
  "tambo": {
    "command": "npx",
    "args": ["-y", "mcp-remote", "https://mcp.tambo.co/mcp"]
  }
}
```
4. Save the file
5. Click Refresh (🔄) in the MCP toolbar
**Zed**
1. Open settings with `Cmd + ,`
2. Add the following configuration:
```json
{
"context_servers": {
"tambo": {
"command": {
"path": "npx",
"args": ["-y", "mcp-remote", "https://mcp.tambo.co/mcp"],
"env": {}
},
"settings": {}
}
}
}
```
## Verification
After setting up the MCP server, you can verify it's working:
1. Open a new chat or file
2. Try asking the AI assistant about tambo
3. Check the IDE's output/console for any error messages
# Component State
URL: /concepts/generative-interfaces/component-state
By default, React component state is private and invisible to Tambo. If a user types into a text field and then asks Tambo to "check the grammar of what I typed," Tambo won't know the current value.
Replace `useState` with `useTamboComponentState` to give Tambo visibility into your component's state. This does two things:
1. **AI visibility** - State is included in follow-up message context, so Tambo can respond to "edit what I typed" requests
2. **Rehydration** - State persists when re-rendering thread history
## Tracking State with `useTamboComponentState`
Consider this simple React component that allows a user to update an `emailBody` field, and tracks whether the email has been sent:
```tsx title="Simple email component"
export const EmailSender = () => {
  ...
  const [emailBody, setEmailBody] = useState(""); // tracks the message being typed
  const [isSent, setIsSent] = useState(false); // tracks whether the 'send' button has been clicked
  ...
};
```
If Tambo renders this component and the user edits the `emailBody` field, Tambo will not know about the edit. A following user message like "Help me edit what I've typed so far" will not generate a relevant response.
To allow Tambo to see these state values, simply replace `useState` with `useTamboComponentState`, and pass a `keyName` for each value:
```tsx title="Email component with tambo state"
import { useTamboComponentState } from "@tambo-ai/react";

export const EmailSender = () => {
  ...
  const [emailBody, setEmailBody] = useTamboComponentState("emailBody", "");
  const [isSent, setIsSent] = useTamboComponentState("isSent", false);
  ...
};
```
Now Tambo will know the current values of `emailBody` and `isSent`.
## State Rehydration
When using generative components, your component is rendered as part of a message in a thread. Tambo persists component state values remotely when using `useTamboComponentState`, so when the thread is reloaded later, the component is re-rendered with the persisted state values.
For example, if a user:
1. Asks Tambo to generate an email component
2. Types "Hello, let's schedule a meeting" into the email body
3. Closes the app and returns later
When they reopen the thread, the email component will be re-rendered with `emailBody` set to "Hello, let's schedule a meeting" - the state is automatically restored from the persisted thread data.
This rehydration happens automatically when using `useTamboComponentState`. The hook reads the persisted state value for the given `keyName` when the component first mounts, ensuring user edits and interactions are preserved across page reloads and thread navigation.
## Updating editable state from props
Often when we have an editable state value, like the `emailBody` above, we want Tambo to be able to generate and stream in the initial value. If a user sends "Help me generate an email asking about a good time to meet," Tambo should be able to fill in the value with relevant text, and then the user should be able to edit it.
When using `useState` this can be done by adding a `useEffect` that updates the state value with prop value changes:
```tsx title="Simple email component"
export const EmailSender = ({ initialEmailBody }: { initialEmailBody: string }) => {
  ...
  const [emailBody, setEmailBody] = useState(""); // tracks the message being typed
  const [isSent, setIsSent] = useState(false); // tracks whether the 'send' button has been clicked

  useEffect(() => {
    setEmailBody(initialEmailBody);
  }, [initialEmailBody]);
  ...
};
```
However, when using `useTamboComponentState`, this pattern will cause the initial prop value to overwrite the latest stored state value when re-rendering a previously generated component.
Instead, use the `setFromProp` parameter of `useTamboComponentState` to specify a prop value that should be used to set the initial state value:
```tsx title="Simple email component"
export const EmailSender = ({ initialEmailBody }: { initialEmailBody: string }) => {
  ...
  // tracks the message being typed; the initial value comes from the prop
  const [emailBody, setEmailBody] = useTamboComponentState("emailBody", "", initialEmailBody);
  const [isSent, setIsSent] = useTamboComponentState("isSent", false); // tracks whether the 'send' button has been clicked
  ...
};
```
# Generative Components
URL: /concepts/generative-interfaces/generative-components
Generative components are React components that Tambo creates and renders dynamically in response to user messages. Unlike Interactable components that you pre-place on the page, generative components are chosen and instantiated by Tambo based on the conversation context.
When you register a component with Tambo, you're giving Tambo the ability to decide when and how to render that component. Tambo analyzes the user's message, considers available components, and generates appropriate props to create a new instance of the component inline in the conversation.
**Key characteristics:**
* **Created on-demand**: Tambo generates new component instances in response to messages
* **One-time rendering**: Each message can produce a new component instance
* **AI-driven selection**: Tambo chooses which component to use based on the conversation
* **Props generated by AI**: Tambo generates the props values based on the user's request
## How Generative Components Work
The lifecycle of a generative component follows this flow:
### 1. Registration
Components are registered with Tambo either statically (at app startup) or dynamically (at runtime). Registration provides Tambo with:
* The React component to render
* A name and description for identification
* A Zod schema defining valid props
```tsx
const components: TamboComponent[] = [
  {
    name: "DataChart",
    description: "Displays data as a chart",
    component: DataChart,
    propsSchema: z.object({
      data: z.array(
        z.object({
          label: z.string(),
          value: z.number(),
        }),
      ),
      type: z.enum(["bar", "line", "pie"]),
    }),
  },
];
```
### 2. Component Selection
When a user sends a message, Tambo analyzes the request and available registered components. It uses the component's `name` and `description` to determine if a component is appropriate for the user's intent.
Tambo considers:
* The semantic meaning of the user's message
* Component descriptions and names
* The conversation context and history
* Available tools and resources
### 3. Props Generation
Once Tambo selects a component, it generates props that match the component's schema. The Zod schema acts as both validation and guidance:
* **Required fields**: Tambo must provide values for all non-optional fields
* **Optional fields**: Tambo may omit optional fields (marked with `.optional()`)
* **Type constraints**: Enum values, number ranges, and string formats guide generation
* **Descriptions**: `.describe()` provides hints about expected formats or usage patterns
```tsx
// Schema with descriptions helps Tambo generate better props
const DataChartProps = z.object({
  data: z.array(
    z.object({
      label: z.string().describe("Short label text, 1-3 words"),
      value: z.number().describe("Numeric value for the data point"),
    }),
  ),
  type: z
    .enum(["bar", "line", "pie"])
    .describe("Use bar for comparisons, line for trends, pie for proportions"),
});
```
### 4. Component Rendering
Tambo renders the selected component with generated props as part of the assistant's message. The component appears inline in the conversation thread, creating a visual response alongside any text. Components rendered in messages become part of the conversation history, allowing Tambo to reference them in future messages.
## Generative vs. Interactable Components
Understanding when to use each type helps you design effective Tambo applications:
| Aspect | Generative Components | Interactable Components |
| ---------------- | -------------------------------------------- | --------------------------------------- |
| **Placement** | Created by Tambo in messages | Pre-placed by you in your UI |
| **Lifecycle** | New instance per message | Persistent, updates in place |
| **Use Case** | Charts, summaries, one-time displays | Shopping carts, forms, persistent state |
| **Registration** | Via `TamboProvider` or `registerComponent()` | Via `withInteractable()` HOC |
| **Updates** | New message = new component | Same component, updated props |
**Example scenarios:**
* **Generative**: User asks "Show me sales data as a chart" → Tambo creates a new `DataChart` component
* **Interactable**: User has a shopping cart on the page → User asks "Add 3 more items" → Tambo updates the existing cart component's props
You can use both together: register a component as generative for on-demand creation, and also make it interactable for persistent instances.
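As a rough sketch of that dual registration (assuming a simple `Note` component; the schema here is illustrative):

```tsx
import { withInteractable, type TamboComponent } from "@tambo-ai/react";
import { z } from "zod";
import { Note } from "./note";

const notePropsSchema = z.object({
  title: z.string(),
  content: z.string(),
});

// Generative: Tambo may create new Note instances inside responses
const components: TamboComponent[] = [
  {
    name: "Note",
    description: "A simple note for recording ideas",
    component: Note,
    propsSchema: notePropsSchema,
  },
];

// Interactable: a pre-placed Note whose props Tambo can update in place
export const InteractableNote = withInteractable(Note, {
  componentName: "Note",
  description: "A simple note for recording ideas",
  propsSchema: notePropsSchema,
});
```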
import LearnMore from "@/components/learn-more";
import { Rocket } from "lucide-react";
# Generative User Interfaces
URL: /concepts/generative-interfaces
Traditional user interfaces are predetermined by their creators based on assumptions about how users should interact with the software. Users navigate menus, fill forms, and click through screens that developers have pre-built. To see sales data, you must know where to find the dashboard, which filters to apply, and how to configure the view. The interface is fixed and you must adapt to it.
Generative user interfaces flip this model. Instead of navigating predetermined layouts, users describe what they want in natural language, and the appropriate interface is decided in real-time. When you ask "Show me sales data," a generative interface renders an actual interactive chart component, populated with your data. The interface adapts to users rather than requiring users to adapt to the interface.
## How Tambo Approaches Generative UI
### Use Your Own React Components
Rather than attempting to have AI generate new UI from scratch in responses, which introduces risk and unpredictability, Tambo's approach is to use React components you build and give it access to. You provide descriptions of when and how to use each component, and Tambo automatically uses the appropriate components when responding to users' messages to help them accomplish their goals.
Tambo supports generative UI through two fundamental approaches:
**[Generative Components](/concepts/generative-interfaces/generative-components)** - Tambo creates new component instances in response to user messages. When a user asks for information or functionality, Tambo selects an appropriate component, generates the data to populate it, and includes it in the response message. This creates visual responses that appear as part of the conversation.
**[Interactable Components](/concepts/generative-interfaces/interactable-components)** - Tambo updates components that are already placed on screen. You position components in your interface, and Tambo can read their current state and modify them through conversation.
Both approaches can work together in the same application. A user might ask Tambo to create a new chart (generative), then later ask it to "update the filters on my dashboard" (interactable). This flexibility allows you to choose the right pattern for each use case.
### Component Selection
When a user sends a message, Tambo analyzes the request against the registered components and tools. It considers the semantic meaning of the message, the conversation history, and the descriptions of available components. Based on this analysis, it determines whether any of the available components match the user's intent and whether it needs to fetch any additional data to populate them.
The matching process happens automatically. Tambo interprets natural language, maps it to component purposes, and makes selection decisions. A message about displaying data might match to a chart component. A request to edit information could map to a form.
### Rendering Selected Components
When using generative components, Tambo includes the selected component in the response message. The component becomes part of the message history, creating a visual conversation thread where text and interactive elements coexist.
While generative components appear in response messages, you control where they're displayed. Components can be rendered in a traditional chat interface, but they can also be placed anywhere in your application, across your entire layout. This flexibility means you're not limited to chat-style UIs. You can build dashboards, forms, or any interface pattern while still benefiting from AI-driven component generation.
This rendering happens in real time: users see their requests transform into visual responses as Tambo streams in the data that populates the chosen component.
## Exploring This Section
The following pages dive deeper into specific aspects of generative UI with Tambo:
import LearnMore from "@/components/learn-more";
import { Component, Hand, Database, ArrowLeftRight } from "lucide-react";
# Interactable Components
URL: /concepts/generative-interfaces/interactable-components
import LearnMore from "@/components/learn-more";
import { Rocket } from "lucide-react";
When you want to place specific components on screen rather than letting Tambo choose which to show, but still want to allow your users to interact with them using natural language, use Tambo's Interactable components. Unlike [generative components](/concepts/generative-interfaces/generative-components) that Tambo creates on-demand when responding to messages, Interactable components are **pre-placed by you** while still allowing Tambo to modify their props when responding to users.
You place them, set their initial state, and users can interact with them directly like any other React component. Simultaneously, Tambo can observe their current prop values and update them through natural language requests.
This creates a conversational interface where traditional UI manipulation and natural language interaction work together. A user might click and edit a note's title, then ask Tambo to "update the content of this note with today's meeting notes," both modifying the same component.
## How Interactables Work
Interactables are normal React components that you build and then wrap with Tambo's `withInteractable` to give Tambo access to them.
```tsx
import { withInteractable } from "@tambo-ai/react";
import { Note } from "./note";
import { NotePropsSchema } from "./note-schema";
export const InteractableNote = withInteractable(Note, {
componentName: "Note",
description:
"A simple note component for recording ideas that can change title, content, and background color",
propsSchema: NotePropsSchema,
});
```
When you wrap a component with `withInteractable`, Tambo creates an automatic bidirectional connection:
**Automatic Context Sending**: The current props of the component are visible to Tambo automatically.
**Automatic Tool Registration**: Update tools are automatically registered to allow Tambo to update the component's props when needed.
Place these wherever you normally would within the application to enable a traditional, statically organized user interface enhanced with AI capabilities.
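For instance, a pre-placed instance might look like this minimal sketch (the initial props are illustrative, matching the note schema described above):

```tsx
export default function NotesPage() {
  return (
    <main>
      {/* Rendered like any React component; Tambo can read and update its props */}
      <InteractableNote title="Meeting notes" content="" color="yellow" />
    </main>
  );
}
```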
# MCP Features
URL: /concepts/model-context-protocol/features
Model Context Protocol provides several powerful features that enable rich interactions between your application and MCP servers. This page explains what each feature is, why it exists, and what the user experience looks like.
For more details on the MCP specification, see the [official MCP documentation](https://modelcontextprotocol.io/docs).
## Tools
**Tools** are functions exposed by MCP servers that the AI can call to perform actions and retrieve data. When a user makes a request that requires external capabilities, the AI automatically calls the appropriate MCP tool.
### What Are Tools?
Tools allow MCP servers to expose executable functions that extend what the AI can do. For example, a Linear MCP server might expose tools like `create_issue`, `list_issues`, or `update_status`. The AI decides when to call these tools based on the conversation context.
### User Experience
From the user's perspective, MCP tools work invisibly:
1. User makes a request: "Show me all open issues"
2. AI recognizes this needs the `list_issues` tool
3. Tool executes and retrieves the data
4. AI formats the results into components
5. User sees the rendered response
The user doesn't manually invoke tools - the AI orchestrates tool calls automatically based on what's needed.
### Rich Content Support
MCP tools automatically support rich content responses. When tools return content arrays (text, images, and other media), Tambo passes them directly to the AI without converting to plain text. This enables tools to seamlessly return images, formatted content, and other rich media.
### Connection Type Support
Tools work with both server-side and client-side MCP connections. Server-side tools execute faster due to direct server-to-server communication, while client-side tools can access local services and browser authentication state.
***
## Prompts
**Prompts** are predefined message templates that MCP servers expose for users to quickly insert into their conversations. They provide standardized starting points for common workflows.
### What Are Prompts?
Prompts are reusable text templates that help users get started with specific tasks. A GitHub MCP server might provide prompts like "Create detailed issue report" or "Review pull request", which expand into well-structured templates when selected.
### User Experience
Users can access prompts in two ways:
**Using the "/" hotkey:**
1. Type "/" at the start of an empty message input
2. A dropdown appears showing all available prompts
3. Type to filter prompts by name
4. Select a prompt to insert its content
5. Edit the inserted content before sending
**Using the prompts button:**
1. Click the document icon (📄) in the message input
2. Browse available prompts organized by MCP server
3. Select a prompt to insert its content
Once inserted, the prompt text appears in the message input and can be edited before sending.
### Prompt Naming
Prompts are always prefixed with the MCP server's `serverKey` (e.g., `linear:new-issue`) to identify which server provides them. This prevents naming conflicts when multiple MCP servers are connected.
### Current Limitations
Prompt parameters are not yet supported. While the MCP specification allows prompts to accept customization parameters, Tambo's current implementation inserts all prompts as-is using their default values. Parameter support is planned for a future release.
### Connection Type Support
Prompts work with both server-side and client-side MCP connections.
***
## Resources
**Resources** are data sources that MCP servers expose, allowing users to reference external content in their conversations. Resources enable dynamic context inclusion without manual copy-pasting.
### What Are Resources?
Resources represent external data that can be referenced in messages. A Linear MCP server might expose issues as resources (`issue://TAM-123`), a file system server might expose files, or a documentation server might expose articles. When a user references a resource, its content is automatically fetched and included in the AI's context.
### User Experience
Users can reference resources in two ways:
**Using the "@" mention:**
1. Type "@" anywhere in the message input
2. A dropdown appears showing available resources
3. Type to filter resources by name
4. Select a resource to insert its reference as `@resourceUri`
After selecting a resource, it appears as a mention in the message.
**Using the resources button:**
When resources are available, an @ icon appears in the message input area. Click it to browse and select resources:
1. Click the @ button to open the resource picker
2. A dropdown appears showing all available resources
3. Resources are organized by MCP server
4. Type to search and filter resources by name
5. Select a resource to insert its reference into the message input
After selection, the resource appears as a mention in the message (e.g., `@linear:issue://TAM-123`). When the message is sent, the resource content is fetched and included in the AI's context.
### Resource Reference Syntax
Resources are always prefixed to identify which MCP server they come from:
* **Server-side resources:** `@serverKey:serverSidePrefix:resourceUri`
* Example: `@linear:tambo:issue://TAM-123`
* Double prefix (client + server)
* **Client-side resources:** `@serverKey:resourceUri`
* Example: `@linear:issue://TAM-123`
* Single prefix
The prefix ensures resources from different servers remain distinct even if they use the same URI scheme.
### Server-side vs Client-side Processing
How resources are processed depends on the connection type:
**Server-side resources:**
* The resource reference is sent to Tambo's API
* The server fetches resource content from the MCP server
* Content is injected into the prompt server-side
* Better for resources requiring server authentication or large content
**Client-side resources:**
* The resource is fetched in the browser
* Full content is injected into the prompt client-side
* The complete content is sent to Tambo's API
* Better for resources accessible from the browser or reasonably sized content
### Example Use Cases
* **Issue tracking:** Reference specific issues in conversations
* **File systems:** Include project files for code discussions
* **Knowledge bases:** Pull in documentation articles for context
* **CRM data:** Reference customer profiles in support conversations
### Connection Type Support
Resources work with both server-side and client-side MCP connections, with different fetching strategies as described above.
***
## Elicitations
**Elicitations** allow MCP servers to pause during tool execution and request additional information from users. This creates interactive workflows where tools can ask for confirmation or missing details.
### What Are Elicitations?
Elicitations enable MCP servers to request structured input mid-execution. When a tool needs additional information that wasn't provided initially, it can pause and ask the user for specific details using a dynamically generated form.
### User Experience
When an MCP server requests elicitation:
1. The tool begins executing based on the user's request
2. The tool pauses and sends an elicitation request
3. The message input area transforms into a dynamic form
4. The form displays the server's message and requested fields
5. User fills in the form fields (text, numbers, booleans, enums)
6. User clicks Accept, Decline, or Cancel
7. The tool receives the response and continues execution
The elicitation UI automatically:
* Renders appropriate input fields based on the schema
* Validates user input according to constraints
* Handles the response flow back to the MCP server
### Form Field Types
Elicitations support various input types:
* **Text fields** for string input
* **Number fields** for numeric input
* **Checkboxes** for boolean values
* **Dropdowns** for enum selections
* **Multi-field forms** for complex input
### Response Actions
Users can respond to elicitations in three ways:
* **Accept** - Provide the requested input and continue
* **Decline** - Skip providing input but let the tool continue
* **Cancel** - Abort the entire operation
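In code, the response sent back to the server takes one of three shapes. Here is a rough sketch (the type name is illustrative; the field names follow the handler examples later in these docs):

```tsx
type ElicitationResponse =
  | { action: "accept"; content: Record<string, unknown> } // the requested input
  | { action: "decline" } // no input provided, but the tool may continue
  | { action: "cancel" }; // abort the entire operation
```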
### Example Use Cases
* **Confirmation dialogs:** Ask users to confirm before destructive actions
* **Multi-field forms:** Request multiple pieces of information at once
* **Enum selection:** Let users choose from predefined options
* **Missing information:** Request details not provided in the initial request
* **Progressive disclosure:** Collect information step-by-step as needed
### Connection Type Support
Elicitations work with both server-side and client-side MCP connections.
***
## Sampling
**Sampling** enables MCP servers to request that your application's LLM generate text completions. This allows MCP servers to leverage AI capabilities without implementing their own LLM infrastructure.
### What Is Sampling?
Sampling creates a "sub-conversation" between the MCP server and your application's LLM. The MCP server can send messages to the LLM, receive completions, and use those completions to enhance its tool results. This enables AI-augmented MCP tools.
### User Experience
Sampling happens transparently during tool execution:
1. User makes a request that triggers an MCP tool
2. The tool fetches raw data
3. The tool requests the LLM to analyze or summarize the data (sampling)
4. The LLM generates a response
5. The tool uses the LLM's response in its result
6. User receives the enhanced, AI-processed result
Tambo automatically displays sampling interactions in the UI as expandable dropdowns within tool information panels, showing the messages exchanged between the MCP server and the LLM.
### How It Works
When a tool needs AI assistance, it sends a sampling request through Tambo's backend, which forwards the messages to your application's configured LLM. The completion is returned to the MCP server, which incorporates it into its tool result.
### Example Use Cases
* **Data summarization:** Tools fetch detailed data and use the LLM to create concise summaries
* **Content generation:** Tools retrieve specifications and use the LLM to generate code examples
* **Decision making:** Tools gather context and use the LLM to recommend next actions
* **Analysis:** Tools collect information and use the LLM to analyze patterns
### Server-side Only
Sampling currently only works with server-side MCP connections. Client-side MCP servers cannot make sampling requests at this time. This limitation exists because sampling requests need to be processed through Tambo's backend infrastructure for proper security, rate limiting, and LLM access control.
### Connection Type Support
* **Server-side MCP:** ✅ Fully supported
* **Client-side MCP:** ❌ Not yet supported (planned for future)
***
## Feature Support Summary
| Feature | Server-side | Client-side | Description |
| ---------------- | ----------- | --------------- | --------------------------------------------------- |
| **Tools** | ✅ Supported | ✅ Supported | Call functions to perform actions and retrieve data |
| **Prompts** | ✅ Supported | ✅ Supported | Insert predefined message templates |
| **Resources** | ✅ Supported | ✅ Supported | Reference external data sources in conversations |
| **Elicitations** | ✅ Supported | ✅ Supported | Request user input during tool execution |
| **Sampling** | ✅ Supported | ❌ Not supported | Allow MCP servers to request LLM completions |
## What's Next
For step-by-step guides on customizing the UI for these features, see the guides section:
import LearnMore from "@/components/learn-more";
# Model Context Protocol (MCP)
URL: /concepts/model-context-protocol
Model Context Protocol (MCP) allows you to give Tambo access to tools, data, and functionality defined by other services. Many popular applications publish MCP servers, and Tambo provides a simple way to connect to them. This lets your users conversationally interact with external services without needing to write integration code yourself.
## What is MCP?
MCP is an open standard that enables AI applications to securely connect to external data sources and tools. It provides a standardized way for AI assistants to:
* **Call tools** - Execute functions to perform actions and retrieve data
* **Access resources** - Reference files, documents, and other data sources
* **Use prompts** - Insert predefined message templates
* **Request input** - Pause for user confirmation or additional information (elicitations)
* **Leverage AI** - Request LLM completions for analysis and generation (sampling)
By using MCP, you can rapidly extend Tambo's capabilities without building custom integrations for every external service.
## Connection Architectures
Tambo supports two ways to connect to MCP servers, each with different architectural tradeoffs:
### Server-side Connections (Recommended)
Server-side MCP connections are configured through the Tambo dashboard and run on Tambo's backend infrastructure. This approach provides the most efficient communication since tools execute via direct server-to-server connections.
**Key characteristics:**
* **Authentication:** Supports OAuth-based authentication and custom API key headers
* **Performance:** Most efficient due to direct server-to-server communication
* **Sharing:** MCP servers are shared across all users of your project
* **Configuration:** Managed through the Tambo dashboard
* **Ideal for:** Production applications, shared services, and scenarios requiring OAuth
**Authentication model:** When an MCP server is authenticated via OAuth, the OAuth identity is currently shared across all users of your project. This means every user acts as the same MCP identity when using that server. Only configure servers this way if that shared access level is acceptable for your data and tools. Per-user MCP authentication is planned for the future.
**End-user identity:** Per-user authentication is supported today via the `userToken` prop on `TamboProvider` (an OAuth access token from your auth/SSO provider) — see [User Authentication](/concepts/user-authentication). MCP server auth and end-user auth are independent layers.
### Client-side Connections
Client-side MCP connections run directly in the user's browser, allowing you to leverage the browser's existing authentication state and access to local services.
**Key characteristics:**
* **Authentication:** Leverages the browser's authentication state (cookies, session storage)
* **Performance:** More chatty due to browser-to-Tambo-to-MCP communication
* **Local access:** Can connect to local MCP servers (e.g., `localhost`)
* **Configuration:** Configured in your React application code
* **Ideal for:** Local development, user-specific services, and services behind firewalls
**Authentication note:** There is currently no support for OAuth-based authentication when using client-side MCP connections. The MCP server must be accessible from the browser without additional authentication, or rely on the browser's existing session.
### Comparison
| Feature | Server-side (Recommended) | Client-side |
| ------------------ | -------------------------------- | --------------------------------- |
| **Performance** | Fast (direct server connection) | Slower (browser proxies requests) |
| **Authentication** | OAuth + API keys supported | Browser session only |
| **Local servers** | No (must be internet-accessible) | Yes (can connect to localhost) |
| **Sharing** | Shared across all users | Per-user connections |
| **Configuration** | Tambo dashboard | React application code |
| **Best for** | Production, shared services | Development, local tools |
### When to Use Each
**Choose server-side when:**
* You need OAuth authentication
* Performance is critical
* The MCP server should be shared across users
* You want centralized management of connections
**Choose client-side when:**
* Connecting to local development servers
* The MCP server is already accessible from the browser
* Each user needs their own connection
* The service is behind a firewall accessible only to users
## Quick Start
### Server-side Setup
1. Navigate to your [project dashboard](https://tambo.co/dashboard)
2. Click on your project
3. Find the "MCP Servers" section
4. Click "Add MCP Server"
5. Enter the server URL and server type (StreamableHTTP or SSE)
6. If authentication is required, click "Begin Authentication" to start the OAuth flow
Once configured, the MCP servers will be available to all users without any client-side code changes.
### Client-side Setup
> **Dependency note**
>
> The `@tambo-ai/react/mcp` subpath declares `@modelcontextprotocol/sdk`, `zod`, and `zod-to-json-schema` as optional peer dependencies. If you import this subpath, install these packages:
>
> ```bash
> npm install @modelcontextprotocol/sdk@^1.24.0 zod@^4.0.0 zod-to-json-schema@^3.25.0
> ```
Configure MCP servers in your React application:
```tsx
import { TamboProvider } from "@tambo-ai/react";
import { MCPTransport } from "@tambo-ai/react/mcp";
function MyApp() {
  return (
    <TamboProvider
      apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
      mcpServers={[
        {
          url: "https://mcp.example.com/mcp", // illustrative URL
          transport: MCPTransport.HTTP, // or MCPTransport.SSE, depending on the server type
          serverKey: "example", // explicit key for predictable namespacing
        },
      ]}
    >
      {/* Your application components */}
    </TamboProvider>
  );
}
```
`TamboProvider` automatically establishes connections to the specified MCP servers and makes their capabilities available to Tambo.
## Server Namespacing
Tambo **always prefixes** MCP capabilities with a `serverKey` to identify which server they come from:
* **Prompts:** `serverKey:promptName` (e.g., `linear:new-issue`)
* **Resources:**
* Server-side: `serverKey:serverSidePrefix:resourceUri` (e.g., `linear:tambo:issue://TAM-123`)
* Client-side: `serverKey:resourceUri` (e.g., `linear:issue://TAM-123`)
* **Tools:** `serverKey__toolName` (e.g., `linear__create_issue`)
If you don't specify `serverKey`, Tambo derives one from the server URL hostname (e.g., `https://mcp.linear.app/mcp` becomes `linear`). For predictable behavior across environments, set `serverKey` explicitly.
**Naming guidelines:**
* Avoid `:` and `__` in the `serverKey` (used as separators)
* Prefer letters, numbers, `_`, and `-`
* Use environment-specific keys in multi-environment setups (e.g., `linear-staging`)
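As a quick illustration of the guidelines above, a client-side server entry with an explicit key might look like this (the URL is illustrative):

```tsx
const mcpServers = [
  {
    url: "https://mcp.linear.app/mcp",
    // Explicit key, so tools stay `linear__create_issue` and
    // prompts stay `linear:new-issue` across environments
    serverKey: "linear",
  },
];
```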
## Rich Content Support
MCP tools automatically support rich content responses. When MCP servers return content arrays (text, images, and other media types), Tambo automatically passes them through to the AI without converting them to plain text. This means MCP tools can seamlessly return images, formatted content, and other rich media.
If you're building custom tools that need similar capabilities, you can use the `transformToContent` parameter when registering tools. [Learn more about returning rich content from tools](/guides/take-actions/register-tools#return-rich-content-optional).
## What's Next
import LearnMore from "@/components/learn-more";
# Build a Custom Chat Interface
URL: /guides/build-interfaces/build-chat-interface
Tambo provides [pre-built components](https://ui.tambo.co) to help you create common conversation interfaces quickly, but if you prefer to build your own from scratch you can use the React SDK.
The Tambo React SDK provides hooks for accessing stored conversation data, allowing you to build custom interfaces that match your application's design. Whether you're building a traditional chat, a canvas-style workspace, or a hybrid interface, the SDK handles data fetching, real-time updates, and state management while you control the presentation.
This guide walks through building a complete custom conversation interface from scratch.
## Prerequisites
Before building custom conversation UI:
* Understand how [Conversation Storage](/concepts/conversation-storage) works
* Set up the `TamboProvider` in your application
## Single Conversation Interface
### Display Messages
Show the conversation history using the current thread's messages:
```tsx
import { useTamboThread } from "@tambo-ai/react";
export default function MessageList() {
  const { thread } = useTamboThread();

  if (!thread) {
    return <div>Loading conversation...</div>;
  }

  return (
    <div>
      {thread.messages.map((message) => (
        <div key={message.id}>
          <strong>{message.role}</strong>
          {/* Render text content */}
          {message.content.map((contentPart, idx) =>
            contentPart.type === "text" ? (
              <p key={idx}>{contentPart.text}</p>
            ) : null,
          )}
          {/* Render component if present */}
          {message.renderedComponent && <div>{message.renderedComponent}</div>}
        </div>
      ))}
    </div>
  );
}
```
Messages contain text content, images, generated components, and tool calls. The `renderedComponent` property contains any component Tambo created in response to the message. Tool calls show which tools the AI invoked, useful for debugging or transparency.
**Alternative: Canvas-Style Display**
For interfaces showing only the latest component (dashboards, workspaces), walk backwards through messages to find the most recent component:
```tsx
import { useTamboThread } from "@tambo-ai/react";
function CanvasView() {
  const { thread } = useTamboThread();
  const latestComponent = thread?.messages
    .slice()
    .reverse()
    .find((message) => message.renderedComponent)?.renderedComponent;

  return (
    <div>
      {latestComponent ? (
        latestComponent
      ) : (
        <p>Ask Tambo to create something...</p>
      )}
    </div>
  );
}
```
This pattern is useful when you want a clean workspace that updates with each AI response, rather than showing full conversation history.
### Send Messages
Create an input form that sends messages to the current thread:
```tsx
import { useTamboThreadInput } from "@tambo-ai/react";
function MessageInput() {
  const { value, setValue, submit, isPending, error } = useTamboThreadInput();

  const handleSubmit = async (e: React.FormEvent) => {
    e.preventDefault();
    if (!value.trim() || isPending) return;
    await submit({
      streamResponse: true,
    });
  };

  return (
    <form onSubmit={handleSubmit}>
      <input
        value={value}
        onChange={(e) => setValue(e.target.value)}
        placeholder="Ask Tambo..."
      />
      <button type="submit" disabled={isPending}>
        {isPending ? "Sending..." : "Send"}
      </button>
      {error && <p>{error.message}</p>}
    </form>
  );
}
```
The `useTamboThreadInput` hook manages input state and submission, providing the current value, a setter function, a submit function, pending state, and any errors.
For more control over message sending, use `sendThreadMessage` directly:
```tsx
import { useState } from "react";
import { useTamboThread } from "@tambo-ai/react";
function CustomInput() {
  const { sendThreadMessage } = useTamboThread();
  const [input, setInput] = useState("");

  const handleSend = async () => {
    await sendThreadMessage(input, {
      streamResponse: true,
    });
    setInput("");
  };

  return (
    <div>
      <input value={input} onChange={(e) => setInput(e.target.value)} />
      <button onClick={handleSend}>Send</button>
    </div>
  );
}
```
## Multiple Conversations
### Display Thread List
Show users their available conversations:
```tsx
import { useTamboThreadList } from "@tambo-ai/react";
export default function ThreadList() {
  const { data: threads, isLoading, error, refetch } = useTamboThreadList();

  if (isLoading) return <p>Loading...</p>;
  if (error) return <button onClick={() => refetch()}>Retry</button>;

  return (
    <div>
      <h2>Conversations</h2>
      <ul>
        {threads?.items.map((thread) => (
          <li key={thread.id}>
            {thread.name || "Untitled Conversation"}{" "}
            <time>{new Date(thread.createdAt).toLocaleDateString()}</time>
          </li>
        ))}
      </ul>
    </div>
  );
}
```
The `threads.items` array contains all stored conversations.
### Switch Between Threads
Allow users to select and view different conversations:
```tsx
import { useTamboThread, useTamboThreadList } from "@tambo-ai/react";
export default function ThreadList() {
  const { data: threads, isLoading, error, refetch } = useTamboThreadList();
  const { currentThread, switchCurrentThread } = useTamboThread();

  return (
    <div>
      <h2>Conversations</h2>
      {threads?.items.map((thread) => (
        <button
          key={thread.id}
          onClick={() => switchCurrentThread(thread.id)}
          aria-current={currentThread?.id === thread.id}
        >
          {thread.name || "Untitled Conversation"}
        </button>
      ))}
    </div>
  );
}
```
When you switch threads, the entire UI automatically updates to show the new thread's messages and state. The SDK handles fetching the thread data and updating component state.
## Advanced Patterns
## Add Contextual Suggestions (Optional)
Show AI-generated suggestions after each assistant message to help users discover next actions.
### Display Suggestions
Use the `useTamboSuggestions` hook to get and display suggestions:
```tsx
import { useTamboThread, useTamboSuggestions } from "@tambo-ai/react";
function MessageThread() {
  const { thread } = useTamboThread();
  const { suggestions, isLoading, isAccepting, accept } = useTamboSuggestions({
    maxSuggestions: 3, // Optional: 1-10, default 3
  });

  const latestMessage = thread.messages[thread.messages.length - 1];
  const showSuggestions = latestMessage?.role === "assistant";

  return (
    <div>
      {showSuggestions && !isLoading && (
        <div>
          {suggestions.map((suggestion) => (
            <button
              key={suggestion.id}
              disabled={isAccepting}
              onClick={() => accept(suggestion)}
            >
              {suggestion.title}
            </button>
          ))}
        </div>
      )}
    </div>
  );
}
```
Suggestions are automatically generated after each assistant message when the hook is used.
### Accept Suggestions
The `accept` function provides two modes:
```tsx
// Set suggestion text in the input (user can edit before sending)
accept(suggestion);
// Set text and automatically submit
accept(suggestion, true);
```
### Custom Suggestions
Override auto-generated suggestions for specific contexts using `useTamboContextAttachment`:
```tsx
import { useTamboContextAttachment } from "@tambo-ai/react";
function ComponentSelector({ component }) {
const { setCustomSuggestions } = useTamboContextAttachment();
const handleSelectComponent = () => {
setCustomSuggestions([
{
id: "1",
title: "Edit this component",
detailedSuggestion: `Modify the ${component.name} component`,
messageId: "",
},
{
id: "2",
title: "Add a feature",
detailedSuggestion: `Add a new feature to ${component.name}`,
messageId: "",
},
]);
};
return <button onClick={handleSelectComponent}>Select {component.name}</button>;
}
```
Clear custom suggestions to return to auto-generated ones:
```tsx
setCustomSuggestions(null);
```
## Related Concepts
* **[Conversation Storage](/concepts/conversation-storage)** - Understanding how threads are persisted
* **[Additional Context](/concepts/additional-context)** - Providing context to improve responses
* **[Component State](/concepts/generative-interfaces/component-state)** - How component state persists across renders
# Customize How MCP Features Display
URL: /guides/build-interfaces/customize-mcp-display
This guide shows you how to build custom UI components for MCP features using the React SDK hooks. While Tambo provides built-in UI components for prompts, resources, and elicitations, you can create your own interfaces that match your application's design.
For an overview of MCP features, see [MCP Features](/concepts/model-context-protocol/features).
## Prerequisites
* MCP servers connected (see [Connect MCP Servers](/guides/connect-mcp-servers))
* Peer dependencies installed:
```bash
npm install @modelcontextprotocol/sdk@^1.24.0 zod@^4.0.0 zod-to-json-schema@^3.25.0
```
## Custom Prompt Picker
Build a custom interface for browsing and inserting MCP prompts.
### Step 1: List Available Prompts
Use `useTamboMcpPromptList` to fetch all prompts from connected servers:
```tsx
import { useTamboMcpPromptList } from "@tambo-ai/react/mcp";
function CustomPromptPicker() {
  const { data: prompts, isLoading, error } = useTamboMcpPromptList();

  if (isLoading) {
    return <div>Loading prompts...</div>;
  }
  if (error) {
    return <div>Failed to load prompts.</div>;
  }

  return (
    <ul>
      {prompts?.map((prompt) => (
        // Prompt names are serverKey-prefixed, e.g. `linear:new-issue`
        <li key={prompt.name}>{prompt.name}</li>
      ))}
    </ul>
  );
}
```
### Step 4: Integrate with Message Input
Combine the resource selector with your message input:
```tsx
import { useState } from "react";

function ChatInterface() {
  const [inputValue, setInputValue] = useState("");
  const [showResources, setShowResources] = useState(false);

  const handleResourceInsert = (resourceReference: string) => {
    setInputValue(inputValue + resourceReference);
    setShowResources(false);
  };

  return (
    <div>
      {/* ResourceSelector stands in for the picker built in the earlier steps */}
      {showResources && <ResourceSelector onInsert={handleResourceInsert} />}
      <input
        value={inputValue}
        onChange={(e) => setInputValue(e.target.value)}
      />
      <button onClick={() => setShowResources(!showResources)}>@</button>
    </div>
  );
}
```
## Custom Elicitation Handler
Build custom UI for handling elicitation requests from MCP servers.
### Step 1: Access Elicitation Context
Use `useTamboElicitationContext` to access the current elicitation request:
```tsx
import { useTamboElicitationContext } from "@tambo-ai/react/mcp";
function CustomElicitationUI() {
  const { elicitation, resolveElicitation } = useTamboElicitationContext();

  if (!elicitation) {
    return null;
  }

  const { message, requestedSchema } = elicitation.params;

  return (
    <div>
      <h3>Additional Information Needed</h3>
      <p>{message}</p>
      {/* Render form fields based on requestedSchema */}
    </div>
  );
}
```
### Step 2: Build Dynamic Form Fields
Render form fields based on the requested schema:
```tsx
import { useState } from "react";
import { useTamboElicitationContext } from "@tambo-ai/react/mcp";

function ElicitationForm() {
  const { elicitation, resolveElicitation } = useTamboElicitationContext();
  const [formData, setFormData] = useState<Record<string, unknown>>({});

  if (!elicitation) return null;

  const { message, requestedSchema } = elicitation.params;
  const properties = requestedSchema.properties;

  return (
    <form
      onSubmit={(e) => {
        e.preventDefault();
        // Resolve with an "accept" response carrying the collected input
        resolveElicitation({ action: "accept", content: formData });
      }}
    >
      <p>{message}</p>
      {Object.entries(properties).map(([fieldName, fieldSchema]) => (
        <label key={fieldName}>
          {fieldName}
          {renderField(fieldName, fieldSchema, formData, setFormData)}
        </label>
      ))}
      <button type="submit">Accept</button>
      <button type="button" onClick={() => resolveElicitation({ action: "decline" })}>
        Decline
      </button>
      <button type="button" onClick={() => resolveElicitation({ action: "cancel" })}>
        Cancel
      </button>
    </form>
  );
}

function renderField(
  fieldName: string,
  fieldSchema: any,
  formData: Record<string, unknown>,
  setFormData: (data: Record<string, unknown>) => void,
) {
  const value = formData[fieldName] ?? "";
  const handleChange = (newValue: unknown) => {
    setFormData({ ...formData, [fieldName]: newValue });
  };

  // Enum field (checked first so string enums render as dropdowns)
  if (fieldSchema.enum) {
    return (
      <select value={value as string} onChange={(e) => handleChange(e.target.value)}>
        {fieldSchema.enum.map((option: string) => (
          <option key={option} value={option}>
            {option}
          </option>
        ))}
      </select>
    );
  }

  // Text field
  if (fieldSchema.type === "string") {
    return (
      <input
        type="text"
        value={value as string}
        onChange={(e) => handleChange(e.target.value)}
      />
    );
  }

  // Number field
  if (fieldSchema.type === "number" || fieldSchema.type === "integer") {
    return (
      <input
        type="number"
        value={value as number}
        onChange={(e) => handleChange(Number(e.target.value))}
      />
    );
  }

  // Boolean field
  if (fieldSchema.type === "boolean") {
    return (
      <input
        type="checkbox"
        checked={Boolean(value)}
        onChange={(e) => handleChange(e.target.checked)}
      />
    );
  }

  return null;
}
```
### Step 3: Provider-level Handler (Advanced)
For programmatic control over elicitation flow, provide a handler function:
```tsx
import { TamboProvider } from "@tambo-ai/react";
function App() {
  const handleElicitation = async (request, extra, serverInfo) => {
    console.log(
      `Elicitation from ${serverInfo.name}: ${request.params.message}`,
    );

    // Show custom UI and collect user input
    const userInput = await showCustomElicitationDialog(request.params);

    return {
      action: "accept",
      content: userInput,
    };
  };

  return (
    // Note: the handler prop name below is an assumption for illustration;
    // check the SDK reference for the exact provider-level prop.
    <TamboProvider
      apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
      elicitationHandler={handleElicitation}
    >
      {/* Your application components */}
    </TamboProvider>
  );
}
```
Per-server handlers take precedence over provider-level handlers:
```tsx
<TamboProvider
  apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
  mcpServers={[
    {
      url: "https://example.com/github-mcp", // illustrative URL
      serverKey: "github",
      // Per-server handler (prop name assumed) overriding the provider-level one
      elicitationHandler: async (request) => {
        // Custom handling for GitHub MCP only
        return { action: "accept", content: { confirmed: true } };
      },
    },
  ]}
>
```
## Complete Example
Here's a complete example integrating all MCP UI customizations:
```tsx
import { useState } from "react";
import {
useTamboMcpPromptList,
useTamboMcpResourceList,
} from "@tambo-ai/react/mcp";
function MCPEnabledChatInput() {
  const [message, setMessage] = useState("");
  const [showPrompts, setShowPrompts] = useState(false);
  const [showResources, setShowResources] = useState(false);

  const { data: prompts } = useTamboMcpPromptList();
  const { data: resources } = useTamboMcpResourceList();

  const hasPrompts = prompts && prompts.length > 0;
  const hasResources = resources && resources.length > 0;

  return (
    <div>
      {/* Toggle the custom pickers built earlier in this guide */}
      {hasPrompts && (
        <button onClick={() => setShowPrompts(!showPrompts)}>Prompts</button>
      )}
      {hasResources && (
        <button onClick={() => setShowResources(!showResources)}>@</button>
      )}
      {showPrompts && <CustomPromptPicker />}
      {/* ResourceSelector stands in for the resource picker built above */}
      {showResources && (
        <ResourceSelector onInsert={(ref: string) => setMessage(message + ref)} />
      )}
      <input value={message} onChange={(e) => setMessage(e.target.value)} />
    </div>
  );
}
```
## Best Practices
### Performance
* Use the built-in query caching from React Query (hooks use it internally)
* Debounce search inputs when filtering large resource or prompt lists
* Lazy load resource content only when needed
### User Experience
* Show loading states while fetching prompts or resources
* Provide search and filtering for large lists
* Display clear error messages when connections fail
* Group items by MCP server for better organization
### Accessibility
* Use semantic HTML elements (buttons, forms, labels)
* Add keyboard navigation support (arrow keys, Enter, Escape)
* Include ARIA labels for screen readers
* Ensure form validation messages are accessible
## What's Next
import LearnMore from "@/components/learn-more";
# Auth0
URL: /guides/add-authentication/auth0
Auth0 is a comprehensive identity and access management platform that provides secure authentication and authorization services. This guide shows how to integrate it with Tambo in a Next.js application.
This guide assumes you've already set up Auth0 in your Next.js application. If
you haven't, follow the [Auth0 Next.js Quick
Start](https://auth0.com/docs/quickstart/webapp/nextjs) first.
## Installation
Install the required packages:
```bash
npm install @auth0/nextjs-auth0 @tambo-ai/react
```
## Integration Options
### Server-Side Token Retrieval (Recommended)
Use this approach for better security and performance, especially when you don't need real-time authentication state changes.
```tsx title="app/layout.tsx"
import { getAccessToken } from "@auth0/nextjs-auth0";
import ClientLayout from "./client-layout";
export default async function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  let accessToken: string | undefined;

  try {
    // Get the access token from Auth0
    const tokenResponse = await getAccessToken();
    accessToken = tokenResponse.accessToken;
  } catch (error) {
    // User is not authenticated
    console.log("User not authenticated");
  }

  return (
    <html lang="en">
      <body>
        <ClientLayout userToken={accessToken}>{children}</ClientLayout>
      </body>
    </html>
  );
}
```
```tsx title="app/client-layout.tsx"
"use client";
import { UserProvider } from "@auth0/nextjs-auth0/client";
import { TamboProvider } from "@tambo-ai/react";
import { ReactNode } from "react";
interface ClientLayoutProps {
children: ReactNode;
userToken?: string;
}
export default function ClientLayout({
  children,
  userToken,
}: ClientLayoutProps) {
  return (
    <UserProvider>
      <TamboProvider
        apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
        userToken={userToken}
      >
        {children}
      </TamboProvider>
    </UserProvider>
  );
}
```
### Client-Side Token Retrieval
Use this approach when you need real-time authentication state management or client-side routing with authentication guards.
First, create a token API endpoint:
```tsx title="app/api/auth/token/route.ts"
import { getAccessToken } from "@auth0/nextjs-auth0";
import { NextResponse } from "next/server";
export async function GET() {
try {
const { accessToken } = await getAccessToken();
return NextResponse.json({ accessToken });
} catch (error) {
return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
}
}
```
Then use Auth0's `useUser` hook in your client layout:
```tsx title="app/client-layout.tsx"
"use client";
import { UserProvider, useUser } from "@auth0/nextjs-auth0/client";
import { TamboProvider } from "@tambo-ai/react";
import { ReactNode, useEffect, useState } from "react";
interface ClientLayoutProps {
children: ReactNode;
}
function TamboWrapper({ children }: { children: ReactNode }) {
  const { user, isLoading } = useUser();
  const [accessToken, setAccessToken] = useState<string | undefined>();

  useEffect(() => {
    if (user && !isLoading) {
      const fetchToken = async () => {
        try {
          const response = await fetch("/api/auth/token");
          const data = await response.json();
          setAccessToken(data.accessToken);
        } catch (error) {
          console.error("Error fetching token:", error);
        }
      };
      fetchToken();
    }
  }, [user, isLoading]);

  return (
    <TamboProvider
      apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
      userToken={accessToken}
    >
      {children}
    </TamboProvider>
  );
}

export default function ClientLayout({ children }: ClientLayoutProps) {
  return (
    <UserProvider>
      <TamboWrapper>{children}</TamboWrapper>
    </UserProvider>
  );
}
```
## Usage
Once configured, you can use Tambo components throughout your application:
```tsx title="app/dashboard/page.tsx"
import { MessageThreadFull } from "@components/tambo/message-thread-full";
export default function Dashboard() {
  return (
    <div>
      <h1>Dashboard</h1>
      <MessageThreadFull />
    </div>
  );
}
```
## Auth0 Configuration
Make sure your Auth0 application is configured with the appropriate scopes and settings:
### Required Scopes
Ensure your Auth0 application has the necessary scopes configured:
* `openid` - Required for OpenID Connect
* `profile` - Access to user profile information
* `email` - Access to user email address
### Token Configuration
In your Auth0 dashboard, verify that your application is configured to:
* Issue access tokens in JWT format
* Include the user's ID in the `sub` claim
* Have the appropriate audience configured if using APIs
Auth0 access tokens are JWTs that include the user's ID in the `sub` claim,
which is exactly what Tambo needs for user identification. The tokens are
automatically signed by Auth0 and can be verified using Auth0's public keys.
# Better Auth
URL: /guides/add-authentication/better-auth
Better Auth is a modern authentication library that provides built-in support for multiple providers and plugins. This guide shows how to integrate it with Tambo in a Next.js application.
This guide assumes you've already set up Better Auth in your Next.js
application. If you haven't, follow the [Better Auth Next.js Quick
Start](https://www.better-auth.com/docs/installation) first.
## Installation
Install the required packages:
```bash
npm install better-auth @tambo-ai/react
```
## Integration Options
Choose the approach that best fits your application:
### Server-Side Token Retrieval (Recommended)
Use this approach when you want maximum security and don't need real-time authentication state changes in your UI.
**Benefits:**
* Tokens never appear in client-side JavaScript
* Better for SEO and initial page load performance
* No loading states for authentication
```tsx title="app/layout.tsx"
import { auth } from "./lib/auth"; // Your Better Auth instance
import ClientLayout from "./client-layout";
import { headers } from "next/headers";
export default async function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  const session = await auth.api.getSession({
    headers: await headers(),
  });

  return (
    <html lang="en">
      <body>
        {/* Pass the Better Auth session token (adjust to your session shape) */}
        <ClientLayout userToken={session?.session.token}>
          {children}
        </ClientLayout>
      </body>
    </html>
  );
}
```
```tsx title="app/client-layout.tsx"
"use client";
import { TamboProvider } from "@tambo-ai/react";
import { ReactNode } from "react";
interface ClientLayoutProps {
children: ReactNode;
userToken?: string;
}
export default function ClientLayout({
  children,
  userToken,
}: ClientLayoutProps) {
  return (
    <TamboProvider
      apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
      userToken={userToken}
    >
      {children}
    </TamboProvider>
  );
}
```
### Client-Side Token Retrieval
Use this approach when you need real-time authentication state management or client-side routing with authentication guards.
**Benefits:**
* Real-time authentication state updates
* Better for single-page applications with client-side routing
* Allows for authentication state-dependent UI rendering
```tsx title="app/client-layout.tsx"
"use client";
import { authClient } from "./lib/auth-client";
import { TamboProvider } from "@tambo-ai/react";
import { ReactNode } from "react";
interface ClientLayoutProps {
children: ReactNode;
}
export default function ClientLayout({ children }: ClientLayoutProps) {
  const { data: session } = authClient.useSession();

  return (
    <TamboProvider
      apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
      userToken={session?.session.token} // adjust to your session shape
    >
      {children}
    </TamboProvider>
  );
}
```
## Usage
Once configured, you can use Tambo components throughout your application. The authentication context is automatically handled:
```tsx title="app/dashboard/page.tsx"
import { MessageThreadFull } from "@components/tambo/message-thread-full";
export default function Dashboard() {
  return (
    <div>
      <h1>Dashboard</h1>
      <MessageThreadFull />
    </div>
  );
}
```
## Next Steps
Your Tambo integration is now complete. The `TamboProvider` will automatically:
* Exchange your Better Auth token for a Tambo token
* Refresh the Tambo token when it expires
* Handle authentication state changes
* Provide user isolation for all Tambo API calls
# Clerk
URL: /guides/add-authentication/clerk
Clerk is a complete authentication and user management solution that handles authentication through middleware without requiring specific API routes. This guide shows how to integrate it with Tambo.
This guide assumes you've already set up Clerk in your Next.js application,
including middleware and sign-in/sign-up pages. If you haven't, follow the
[Clerk Next.js Quick Start](https://clerk.com/docs/quickstarts/nextjs) first.
## Installation
Install the required packages:
```bash
npm install @clerk/nextjs @tambo-ai/react
```
## Integration Options
### Server-Side Token Retrieval (Recommended)
Use this approach for better security and performance, especially when you don't need real-time authentication state changes.
```tsx title="app/layout.tsx"
import { auth } from "@clerk/nextjs/server";
import ClientLayout from "./client-layout";
export default async function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  const { getToken } = await auth();
  const token = await getToken();

  return (
    <html lang="en">
      <body>
        <ClientLayout userToken={token ?? undefined}>{children}</ClientLayout>
      </body>
    </html>
  );
}
```
```tsx title="app/client-layout.tsx"
"use client";
import { ClerkProvider } from "@clerk/nextjs";
import { TamboProvider } from "@tambo-ai/react";
import { ReactNode } from "react";
interface ClientLayoutProps {
children: ReactNode;
userToken?: string;
}
export default function ClientLayout({
  children,
  userToken,
}: ClientLayoutProps) {
  return (
    <ClerkProvider>
      <TamboProvider
        apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
        userToken={userToken}
      >
        {children}
      </TamboProvider>
    </ClerkProvider>
  );
}
```
### Client-Side Token Retrieval
Use this approach when you need real-time authentication state management or client-side routing with authentication guards.
```tsx title="app/layout.tsx"
"use client";
import { ClerkProvider } from "@clerk/nextjs";
import ClientLayout from "./client-layout";
export default function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  return (
    <html lang="en">
      <body>
        <ClerkProvider>
          <ClientLayout>{children}</ClientLayout>
        </ClerkProvider>
      </body>
    </html>
  );
}
```
```tsx title="app/client-layout.tsx"
"use client";
import { useAuth } from "@clerk/nextjs";
import { TamboProvider } from "@tambo-ai/react";
import { ReactNode, useEffect, useState } from "react";
interface ClientLayoutProps {
children: ReactNode;
}
export default function ClientLayout({ children }: ClientLayoutProps) {
  const { getToken, isLoaded, isSignedIn } = useAuth();
  const [accessToken, setAccessToken] = useState<string | undefined>();

  useEffect(() => {
    async function fetchToken() {
      if (isLoaded && isSignedIn) {
        try {
          const token = await getToken();
          setAccessToken(token || undefined);
        } catch (error) {
          console.error("Error fetching token:", error);
        }
      }
    }
    fetchToken();
  }, [isLoaded, isSignedIn, getToken]);

  return (
    <TamboProvider
      apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
      userToken={accessToken}
    >
      {children}
    </TamboProvider>
  );
}
```
## Usage
Once configured, you can use Tambo components throughout your application:
```tsx title="app/dashboard/page.tsx"
import { MessageThreadFull } from "@components/tambo/message-thread-full";
export default function Dashboard() {
  return (
    <div>
      <h1>Dashboard</h1>
      <MessageThreadFull />
    </div>
  );
}
```
## Advanced Configuration
### Custom JWT Templates
For advanced use cases, you can create custom JWT templates in Clerk to include specific claims:
1. Go to your Clerk Dashboard
2. Navigate to "JWT Templates"
3. Create a new template with the claims you need
4. Use the template name when calling `getToken()`:
```tsx
const token = await getToken({ template: "your-template-name" });
```
Clerk provides JWT tokens that include the user's ID in the `sub` claim, which
is exactly what Tambo needs for user identification. The token is
automatically signed and verified by Clerk.
# Add User Authentication
URL: /guides/add-authentication
Learn how to enable per-user authentication so users of your application only access Tambo threads and messages that belong to them.
Tambo supports authentication through any OAuth 2.0 provider. Choose your authentication provider below to get started with a step-by-step integration guide.
## Available Providers
Learn how to integrate Tambo with Auth.js using Google OAuth as an example.
Step-by-step guide for integrating Tambo with Auth0 authentication.
Complete example of using Tambo with Clerk's authentication system.
Integration guide for Supabase Auth with Tambo in Next.js applications.
How to use Tambo with Auth.js and Neon PostgreSQL database integration.
Enterprise-grade authentication with WorkOS and Tambo integration.
Modern authentication toolkit with built-in support for multiple providers
and plugins.
## Before You Begin
All authentication integrations follow the same pattern:
1. **Set up your auth provider** - Configure your OAuth application with the provider
2. **Get the user token** - Retrieve the JWT token from your auth provider
3. **Pass token to Tambo** - Provide the token to `TamboProvider` via the `userToken` prop
4. **Configure verification** - Set up JWT verification in your Tambo project settings
For a deeper understanding of how Tambo authentication works, see the [User Authentication concept](/concepts/user-authentication).
## Common Integration Pattern
All provider guides follow this basic structure:
```tsx
"use client";
import { TamboProvider } from "@tambo-ai/react";
export default function Layout({ children }: { children: React.ReactNode }) {
  const userToken = useUserToken(); // Provider-specific hook

  return (
    <TamboProvider
      apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
      userToken={userToken}
    >
      {children}
    </TamboProvider>
  );
}
```
The main difference between providers is how you retrieve the `userToken`. Each guide shows both server-side and client-side approaches.
# Neon
URL: /guides/add-authentication/neon
Auth.js (formerly NextAuth.js) is a complete authentication solution that can use various database adapters. This guide shows how to integrate Tambo with Auth.js when using Neon as the database backend for session and user data storage.
This guide assumes you've already set up Auth.js with the Neon database
adapter. If you haven't, follow the [Neon Auth.js Integration
guide](https://neon.tech/docs/guides/auth-authjs) first.
## Installation
Install the required packages:
```bash
npm install next-auth @auth/neon-adapter @neondatabase/serverless @tambo-ai/react
```
## Auth.js Configuration
Configure Auth.js to use the Neon adapter and return access tokens:
```tsx title="app/api/auth/[...nextauth]/route.ts"
import NextAuth from "next-auth";
import { NeonAdapter } from "@auth/neon-adapter";
import { neon } from "@neondatabase/serverless";
import GoogleProvider from "next-auth/providers/google";
const sql = neon(process.env.NEON_DATABASE_URL!);
export const authOptions = {
adapter: NeonAdapter(sql),
providers: [
GoogleProvider({
clientId: process.env.GOOGLE_CLIENT_ID!,
clientSecret: process.env.GOOGLE_CLIENT_SECRET!,
}),
],
callbacks: {
async jwt({ token, account }) {
if (account) {
token.accessToken = account.access_token;
}
return token;
},
async session({ session, token }) {
session.accessToken = token.accessToken as string;
return session;
},
},
};
const handler = NextAuth(authOptions);
export { handler as GET, handler as POST };
```
## Integration Options
### Server-Side Token Retrieval (Recommended)
Use this approach for better security and performance, especially when you don't need real-time authentication state changes.
```tsx title="app/layout.tsx"
import { getServerSession } from "next-auth/next";
import { authOptions } from "./api/auth/[...nextauth]/route";
import ClientLayout from "./client-layout";
export default async function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  const session = await getServerSession(authOptions);

  return (
    <html lang="en">
      <body>
        <ClientLayout userToken={session?.accessToken}>{children}</ClientLayout>
      </body>
    </html>
  );
}
```
```tsx title="app/client-layout.tsx"
"use client";
import { TamboProvider } from "@tambo-ai/react";
import { ReactNode } from "react";
interface ClientLayoutProps {
children: ReactNode;
userToken?: string;
}
export default function ClientLayout({
  children,
  userToken,
}: ClientLayoutProps) {
  return (
    <TamboProvider
      apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
      userToken={userToken}
    >
      {children}
    </TamboProvider>
  );
}
```
### Client-Side Token Retrieval
Use this approach when you need real-time authentication state management or client-side routing with authentication guards.
```tsx title="app/layout.tsx"
"use client";
import { SessionProvider } from "next-auth/react";
import ClientLayout from "./client-layout";
export default function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  return (
    <html lang="en">
      <body>
        <SessionProvider>
          <ClientLayout>{children}</ClientLayout>
        </SessionProvider>
      </body>
    </html>
  );
}
```
```tsx title="app/client-layout.tsx"
"use client";
import { useSession } from "next-auth/react";
import { TamboProvider } from "@tambo-ai/react";
import { ReactNode } from "react";
interface ClientLayoutProps {
children: ReactNode;
}
export default function ClientLayout({ children }: ClientLayoutProps) {
  const { data: session } = useSession();

  return (
    <TamboProvider
      apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
      userToken={session?.accessToken}
    >
      {children}
    </TamboProvider>
  );
}
```
## Usage
Once configured, you can use Tambo components throughout your application:
```tsx title="app/dashboard/page.tsx"
import { MessageThreadFull } from "@components/tambo/message-thread-full";
export default function Dashboard() {
  return (
    <div>
      <h1>Dashboard</h1>
      <MessageThreadFull />
    </div>
  );
}
```
# Auth.js
URL: /guides/add-authentication/nextauth
Auth.js (formerly NextAuth.js) is a complete authentication solution for Next.js applications. This guide demonstrates integration with Tambo using Google as the OAuth provider.
This guide assumes you've already set up Auth.js with Google OAuth in your
Next.js application. If you haven't, follow the [Auth.js Google Provider
documentation](https://authjs.dev/getting-started/providers/google) first.
## Installation
Install the required packages:
```bash
npm install next-auth @auth/core @tambo-ai/react
```
## Auth.js Configuration
First, configure Auth.js to return the access token in your API route:
```tsx title="app/api/auth/[...nextauth]/route.ts"
import NextAuth, { NextAuthOptions } from "next-auth";
import GoogleProvider from "next-auth/providers/google";
export const authOptions: NextAuthOptions = {
providers: [
GoogleProvider({
clientId: process.env.GOOGLE_CLIENT_ID!,
clientSecret: process.env.GOOGLE_CLIENT_SECRET!,
}),
],
callbacks: {
async jwt({ token, account }) {
// Persist the OAuth access_token to the token right after signin
if (account) {
token.accessToken = account.access_token;
}
return token;
},
async session({ session, token }) {
// Send properties to the client
session.accessToken = token.accessToken as string;
return session;
},
},
};
const handler = NextAuth(authOptions);
export { handler as GET, handler as POST };
```
## TypeScript Configuration
Add the access token to your Auth.js session type:
```tsx title="types/next-auth.d.ts"
import "next-auth";
declare module "next-auth" {
interface Session {
accessToken?: string;
}
}
declare module "next-auth/jwt" {
interface JWT {
accessToken?: string;
}
}
```
## Integration Options
### Server-Side Token Retrieval (Recommended)
Use this approach for better security and performance, especially for server-rendered applications.
```tsx title="app/layout.tsx"
import { getServerSession } from "next-auth/next";
import { authOptions } from "./api/auth/[...nextauth]/route";
import ClientLayout from "./client-layout";
export default async function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  const session = await getServerSession(authOptions);

  return (
    <html lang="en">
      <body>
        <ClientLayout userToken={session?.accessToken}>{children}</ClientLayout>
      </body>
    </html>
  );
}
```
```tsx title="app/client-layout.tsx"
"use client";
import { TamboProvider } from "@tambo-ai/react";
import { ReactNode } from "react";
interface ClientLayoutProps {
children: ReactNode;
userToken?: string;
}
export default function ClientLayout({
  children,
  userToken,
}: ClientLayoutProps) {
  return (
    <TamboProvider
      apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
      userToken={userToken}
    >
      {children}
    </TamboProvider>
  );
}
```
### Client-Side Token Retrieval
Use this approach when you need real-time authentication state management or client-side routing.
```tsx title="app/layout.tsx"
"use client";
import { SessionProvider } from "next-auth/react";
import ClientLayout from "./client-layout";
export default function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  return (
    <html lang="en">
      <body>
        <SessionProvider>
          <ClientLayout>{children}</ClientLayout>
        </SessionProvider>
      </body>
    </html>
  );
}
```
```tsx title="app/client-layout.tsx"
"use client";
import { useSession } from "next-auth/react";
import { TamboProvider } from "@tambo-ai/react";
import { ReactNode } from "react";
interface ClientLayoutProps {
children: ReactNode;
}
export default function ClientLayout({ children }: ClientLayoutProps) {
  const { data: session } = useSession();

  return (
    <TamboProvider
      apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
      userToken={session?.accessToken}
    >
      {children}
    </TamboProvider>
  );
}
```
## Usage
Once configured, you can use Tambo components throughout your application:
```tsx title="app/dashboard/page.tsx"
import { MessageThreadFull } from "@components/tambo/message-thread-full";
export default function Dashboard() {
  return (
    <div>
      <h1>Dashboard</h1>
      <MessageThreadFull />
    </div>
  );
}
```
## Important Considerations
Google access tokens expire after 1 hour. For production applications,
consider implementing token refresh logic or using Auth.js's built-in refresh
token capabilities.
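A minimal sketch of that refresh logic in the `jwt` callback, using Google's standard token endpoint (Google only issues a refresh token when you request `access_type=offline`, and you'll need to extend the JWT type augmentation with `refreshToken`/`expiresAt`; adapt to your setup):
```tsx
async jwt({ token, account }) {
  if (account) {
    // Initial sign-in: persist token metadata from the provider
    token.accessToken = account.access_token;
    token.refreshToken = account.refresh_token;
    token.expiresAt = account.expires_at; // seconds since epoch
    return token;
  }
  // Token still valid: return it unchanged
  if (token.expiresAt && Date.now() < (token.expiresAt as number) * 1000) {
    return token;
  }
  // Token expired: exchange the refresh token for a new access token
  const response = await fetch("https://oauth2.googleapis.com/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      client_id: process.env.GOOGLE_CLIENT_ID!,
      client_secret: process.env.GOOGLE_CLIENT_SECRET!,
      grant_type: "refresh_token",
      refresh_token: token.refreshToken as string,
    }),
  });
  const refreshed = await response.json();
  token.accessToken = refreshed.access_token;
  token.expiresAt = Math.floor(Date.now() / 1000) + refreshed.expires_in;
  return token;
},
```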
Make sure your Google OAuth application has the necessary scopes configured.
The `openid`, `email`, and `profile` scopes are typically sufficient for user
identification.
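If you need to set scopes explicitly, Auth.js accepts authorization parameters on the provider. A sketch:
```tsx
GoogleProvider({
  clientId: process.env.GOOGLE_CLIENT_ID!,
  clientSecret: process.env.GOOGLE_CLIENT_SECRET!,
  authorization: {
    // openid/email/profile cover basic user identification;
    // add access_type: "offline" if you also need a refresh token
    params: { scope: "openid email profile" },
  },
}),
```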
# Supabase
URL: /guides/add-authentication/supabase
Supabase Auth is a complete authentication solution that integrates seamlessly with your Supabase database. This guide shows how to integrate it with Tambo in a Next.js application.
This guide assumes you've already set up Supabase Auth in your Next.js
application, including the auth callback route. If you haven't, follow the
[Supabase Next.js Quick
Start](https://supabase.com/docs/guides/auth/quickstarts/nextjs) first.
Supabase Auth doesn't support asymmetric JWT verification. You **must**
disable JWT verification in your Tambo project settings (Settings > User
Authentication > Verification Strategy > None) when using Supabase Auth.
## Installation
Install the required packages:
```bash
npm install @supabase/supabase-js @supabase/ssr @tambo-ai/react
```
## Integration Options
### Server-Side Token Retrieval (Recommended)
Use this approach for better security and performance, especially when you don't need real-time authentication state changes.
```tsx title="app/layout.tsx"
import { createServerClient } from "@supabase/ssr";
import { cookies } from "next/headers";
import ClientLayout from "./client-layout";
export default async function RootLayout({
children,
}: {
children: React.ReactNode;
}) {
const supabase = createServerClient(
process.env.NEXT_PUBLIC_SUPABASE_URL!,
process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
{
cookies: {
get(name: string) {
return cookies().get(name)?.value;
},
},
},
);
const {
data: { session },
} = await supabase.auth.getSession();
return (
  <html lang="en">
    <body>
      <ClientLayout userToken={session?.access_token}>{children}</ClientLayout>
    </body>
  </html>
);
}
```
```tsx title="app/client-layout.tsx"
"use client";
import { TamboProvider } from "@tambo-ai/react";
import { ReactNode } from "react";
interface ClientLayoutProps {
children: ReactNode;
userToken?: string;
}
export default function ClientLayout({
children,
userToken,
}: ClientLayoutProps) {
return (
  <TamboProvider
    apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
    userToken={userToken}
  >
    {children}
  </TamboProvider>
);
}
```
### Client-Side Token Retrieval
Use this approach when you need real-time authentication state management or client-side routing with authentication guards.
```tsx title="app/client-layout.tsx"
"use client";
import { TamboProvider } from "@tambo-ai/react";
import { createClient } from "@/lib/supabase";
import { ReactNode, useEffect, useState } from "react";
interface ClientLayoutProps {
children: ReactNode;
}
export default function ClientLayout({ children }: ClientLayoutProps) {
const [accessToken, setAccessToken] = useState<string | undefined>(undefined);
const supabase = createClient();
useEffect(() => {
// Get initial session
const getInitialSession = async () => {
const {
data: { session },
} = await supabase.auth.getSession();
setAccessToken(session?.access_token);
};
getInitialSession();
// Listen for auth changes
const {
data: { subscription },
} = supabase.auth.onAuthStateChange((_event, session) => {
setAccessToken(session?.access_token);
});
return () => subscription.unsubscribe();
}, [supabase]);
return (
  <TamboProvider
    apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
    userToken={accessToken}
  >
    {children}
  </TamboProvider>
);
}
```
## Usage
Once configured, you can use Tambo components throughout your application:
```tsx title="app/dashboard/page.tsx"
import { MessageThreadFull } from "@/components/tambo/message-thread-full";
export default function Dashboard() {
return (
  <div>
    <h1>Dashboard</h1>
    <MessageThreadFull />
  </div>
);
}
```
## Supabase-Specific Features
### Automatic Token Refresh
Supabase automatically handles token refresh in the background. When tokens expire, Supabase will automatically refresh them, and the TamboProvider will receive the updated token through the auth state change listener.
### Session Management
Supabase provides robust session management across tabs and devices. The auth state change listener ensures that authentication state stays synchronized across your application.
Supabase uses symmetric JWT signing (HMAC-SHA256) rather than asymmetric
signing (RS256). Tambo's JWT verification is designed for asymmetric tokens
from OAuth providers. Since Supabase handles authentication security,
disabling JWT verification in Tambo is safe and recommended.
# WorkOS
URL: /guides/add-authentication/workos
WorkOS is an enterprise-grade authentication and user management platform that provides features like SSO, SCIM provisioning, and directory sync. This guide shows how to integrate it with Tambo in a Next.js application.
This guide assumes you've already set up WorkOS in your Next.js application,
including the callback route for handling authentication responses. If you
haven't, follow the [WorkOS Next.js Quick
Start](https://workos.com/docs/user-management/nextjs) first.
## Installation
Install the required packages:
```bash
npm install @workos-inc/node @tambo-ai/react
```
## Integration Options
### Server-Side Token Retrieval (Recommended)
Use this approach for better security and performance, especially when you don't need real-time authentication state changes.
```tsx title="app/layout.tsx"
import { getSession } from "@workos-inc/node/middleware";
import ClientLayout from "./client-layout";
export default async function RootLayout({
children,
}: {
children: React.ReactNode;
}) {
const session = await getSession({
cookiePassword: process.env.WORKOS_COOKIE_PASSWORD!,
});
return (
  <html lang="en">
    <body>
      <ClientLayout userToken={session?.accessToken}>{children}</ClientLayout>
    </body>
  </html>
);
}
```
```tsx title="app/client-layout.tsx"
"use client";
import { TamboProvider } from "@tambo-ai/react";
import { ReactNode } from "react";
interface ClientLayoutProps {
children: ReactNode;
userToken?: string;
}
export default function ClientLayout({
children,
userToken,
}: ClientLayoutProps) {
return (
  <TamboProvider
    apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
    userToken={userToken}
  >
    {children}
  </TamboProvider>
);
}
```
### Client-Side Token Retrieval
Use this approach when you need real-time authentication state management or client-side routing with authentication guards.
First, create a custom hook for WorkOS authentication:
```tsx title="hooks/use-workos-auth.ts"
import { useEffect, useState } from "react";
interface UseWorkOSAuthReturn {
accessToken: string | null;
user: any | null;
loading: boolean;
}
export function useWorkOSAuth(): UseWorkOSAuthReturn {
const [accessToken, setAccessToken] = useState<string | null>(null);
const [user, setUser] = useState<any | null>(null);
const [loading, setLoading] = useState(true);
useEffect(() => {
const fetchSession = async () => {
try {
const response = await fetch("/api/auth/session");
if (response.ok) {
const data = await response.json();
setAccessToken(data.accessToken);
setUser(data.user);
}
} catch (error) {
console.error("Error fetching session:", error);
} finally {
setLoading(false);
}
};
fetchSession();
}, []);
return { accessToken, user, loading };
}
```
Create a session API endpoint:
```tsx title="app/api/auth/session/route.ts"
import { NextRequest, NextResponse } from "next/server";
import { getSession } from "@workos-inc/node/middleware";
export async function GET(request: NextRequest) {
try {
const session = await getSession({
cookiePassword: process.env.WORKOS_COOKIE_PASSWORD!,
});
if (!session?.accessToken) {
return NextResponse.json({ error: "No access token" }, { status: 401 });
}
return NextResponse.json({
accessToken: session.accessToken,
user: session.user,
});
} catch (error) {
return NextResponse.json({ error: "Invalid session" }, { status: 401 });
}
}
```
Use the hook in your client layout:
```tsx title="app/client-layout.tsx"
"use client";
import { TamboProvider } from "@tambo-ai/react";
import { useWorkOSAuth } from "@/hooks/use-workos-auth";
import { ReactNode } from "react";
interface ClientLayoutProps {
children: ReactNode;
}
export default function ClientLayout({ children }: ClientLayoutProps) {
const { accessToken, loading } = useWorkOSAuth();
if (loading) {
  return <div>Loading...</div>;
}
return (
  <TamboProvider
    apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
    userToken={accessToken ?? undefined}
  >
    {children}
  </TamboProvider>
);
}
```
## Usage
Once configured, you can use Tambo components throughout your application:
```tsx title="app/dashboard/page.tsx"
import { MessageThreadFull } from "@/components/tambo/message-thread-full";
export default function Dashboard() {
return (
  <div>
    <h1>Dashboard</h1>
    <MessageThreadFull />
  </div>
);
}
```
## WorkOS Enterprise Features
WorkOS provides several enterprise-grade features that work seamlessly with Tambo:
### Single Sign-On (SSO)
* **SAML 2.0 & OpenID Connect**: Support for enterprise identity providers
* **Directory Sync**: Automatic user provisioning and deprovisioning
* **Domain Verification**: Restrict access to verified domains
### User Management
* **SCIM Provisioning**: Automated user lifecycle management
* **Multi-factor Authentication**: Built-in MFA support
* **Audit Logs**: Complete authentication and authorization tracking
WorkOS access tokens include enterprise-specific claims and are designed for
high-security environments. The token automatically includes the necessary
user identification and organizational context that Tambo needs.
# Give Tambo Components to Generate
URL: /guides/enable-generative-ui/register-components
import LearnMore from "@/components/learn-more";
import { BookOpen } from "lucide-react";
This guide shows you how to register your React components with Tambo so it can intelligently choose and render them in response to user messages. You'll see how to apply both static registration (at app startup) and dynamic registration (at runtime).
## Prerequisites
* A React component you want Tambo to use
* A Zod schema defining the component's props
* `@tambo-ai/react` installed in your project
## Step 1: Define Your Component Props Schema
Create a Zod schema that describes your component's props. This tells Tambo what data it needs to generate.
```tsx
import { z } from "zod";
export const WeatherCardPropsSchema = z.object({
city: z.string(),
temperature: z.number(),
condition: z.string(),
humidity: z.number().optional(),
});
```
When using `.optional()` on a field, Tambo may not generate a value for that prop. Only mark props as optional if you truly want Tambo to sometimes omit them.
**Important for streaming**: During streaming, all props (required and optional) start as `undefined` and populate as data arrives. Your component should handle undefined values gracefully by using optional prop types (`city?: string`) or providing default values in your component implementation.
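For example, a streaming-safe version of the component might fall back to placeholders while values are still undefined (a sketch, not tied to any particular design):
```tsx
type StreamingWeatherCardProps = {
  city?: string;
  temperature?: number;
  condition?: string;
};

function WeatherCard({
  city,
  temperature,
  condition,
}: StreamingWeatherCardProps) {
  // Every prop may briefly be undefined while Tambo streams values in
  return (
    <div>
      <h3>{city ?? "Loading..."}</h3>
      <p>{temperature !== undefined ? `${temperature}°C` : "--"}</p>
      <p>{condition ?? ""}</p>
    </div>
  );
}
```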
The schema passed to Tambo must match the actual shape of the component's props, or Tambo may generate invalid props. One pattern to ensure this is to define the props based on the schema using `z.infer`:
```tsx
import { z } from "zod";
const WeatherCardPropsSchema = z.object({
city: z.string(),
temperature: z.number(),
condition: z.string(),
humidity: z.number().optional(),
});
type WeatherCardProps = z.infer<typeof WeatherCardPropsSchema>;
function WeatherCard({
city,
temperature,
condition,
humidity,
}: WeatherCardProps) {
// Component implementation
}
```
## Step 2: Add Descriptions for Better AI Guidance
Use `z.describe()` to provide hints that help Tambo generate better prop values:
```tsx
import { z } from "zod";
export const WeatherCardPropsSchema = z
.object({
city: z.string().describe("City name to display, e.g., 'San Francisco'"),
temperature: z
.number()
.describe("Temperature in Celsius as a whole number"),
condition: z
.string()
.describe("Weather condition like 'Sunny', 'Cloudy', 'Rainy'"),
humidity: z
.number()
.optional()
.describe("Humidity percentage (0-100). Optional field."),
})
.describe("Displays current weather information for a city");
```
Descriptions help Tambo understand:
* What format to use for values
* When to use specific enum options
* Expected value ranges and formats
Use descriptive field names and helpful descriptions:
```tsx
// ✅ Good: Clear field names and guidance
z.object({
city: z.string().describe("City name to display"),
temperature: z.number().describe("Temperature in Celsius"),
condition: z.string().describe("Weather condition description"),
});
// ❌ Poor: Generic names, no guidance
z.object({
data: z.any(),
value: z.string(),
});
```
## Step 3: Register Your Component with Tambo
Create a `TamboComponent` object including a reference to your component, the schema defined previously, a name, and a description of when to use the component.
```tsx
import { TamboComponent } from "@tambo-ai/react";

const weatherCardComponent: TamboComponent = {
component: WeatherCard,
name: "WeatherCard",
description:
"Displays current weather for a city. Use when the user asks about weather, temperature, or conditions.",
propsSchema: WeatherCardPropsSchema,
};
```
Make sure the description explains both what the component does and when to use it to help Tambo use it appropriately:
```tsx
// ✅ Good: Clear purpose and usage
description: "Displays current weather for a city. Use when the user asks about weather, temperature, or conditions.";
// ❌ Poor: Too vague
description: "A weather component";
```
Finally, give Tambo access to it by registering it in one of two ways:
### Option A: Static Registration (Recommended for Most Cases)
Register components when your app initializes by passing them to `TamboProvider`:
```tsx
function App() {
  return (
    <TamboProvider
      apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
      components={[weatherCardComponent]}
    >
      {/* your app */}
    </TamboProvider>
  );
}
```
**Use static registration when:**
* Components are available at app startup
* You want all components registered immediately
### Option B: Dynamic Registration
Register components at runtime using the `registerComponent` function from `useTamboRegistry`:
```tsx
import { useTamboRegistry } from "@tambo-ai/react";
import { useEffect } from "react";
import { weatherCardComponent } from "@/components/WeatherCard";
function MyComponent() {
const { registerComponent } = useTamboRegistry();
useEffect(() => {
registerComponent(weatherCardComponent);
}, [registerComponent]);
return <>{/* your UI */}</>;
}
```
**Use dynamic registration when:**
* Components depend on runtime data or user context
* You want to conditionally register components
* Components are loaded asynchronously
* You need to register components based on thread state
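For example, a minimal sketch of conditional registration, where `useCurrentUser` and `adminPanelComponent` are hypothetical stand-ins for your own auth hook and component definition:
```tsx
import { useTamboRegistry } from "@tambo-ai/react";
import { useEffect } from "react";

function AdminComponentRegistrar() {
  const { registerComponent } = useTamboRegistry();
  const user = useCurrentUser(); // hypothetical auth hook

  useEffect(() => {
    // Only expose the admin panel to Tambo for admin users
    if (user?.role === "admin") {
      registerComponent(adminPanelComponent); // hypothetical TamboComponent
    }
  }, [user, registerComponent]);

  return null; // logic-only component
}
```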
## Step 4: Verify Registration
Once registered, Tambo can use your component in responses. When a user sends a message that matches your component's purpose, Tambo will generate it with appropriate props.
For example, if a user asks "What's the weather in Tokyo?", Tambo will render your `WeatherCard` component with generated weather data for Tokyo.
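Conceptually, the result is equivalent to Tambo rendering your component with AI-chosen props (values here are illustrative):
```tsx
<WeatherCard city="Tokyo" temperature={18} condition="Cloudy" humidity={64} />
```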
## Complete Example
Here's a complete example combining all steps:
```tsx
import { z } from "zod";
import { WeatherCard } from "@/components/WeatherCard";
import { TamboComponent, TamboProvider } from "@tambo-ai/react";
// Step 1/2: Define schema with descriptions
export const WeatherCardPropsSchema = z
.object({
city: z.string().describe("City name to display, e.g., 'San Francisco'"),
temperature: z
.number()
.describe("Temperature in Celsius as a whole number"),
condition: z
.string()
.describe("Weather condition like 'Sunny', 'Cloudy', 'Rainy'"),
humidity: z
.number()
.optional()
.describe("Humidity percentage (0-100). Optional field."),
})
.describe("Displays current weather information for a city");
// Step 3: Registration via TamboProvider
const tamboComponents: TamboComponent[] = [
{
component: WeatherCard,
name: "WeatherCard",
description:
"Displays current weather for a city. Use when the user asks about weather, temperature, or conditions.",
propsSchema: WeatherCardPropsSchema,
},
];
function App() {
  return (
    <TamboProvider
      apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
      components={tamboComponents}
    >
      {/* your app */}
    </TamboProvider>
  );
}
```
This registration approach is for generative components that Tambo creates on-demand. If you want to pre-place components on your page and let Tambo modify them, use [Interactable Components](/concepts/generative-interfaces/interactable-components) instead. They register themselves automatically when mounted.
# Let Users Edit Components Through Chat
URL: /guides/enable-generative-ui/register-interactables
import LearnMore from "@/components/learn-more";
import { BookOpen } from "lucide-react";
This guide shows you how to register React components as "Interactable" so Tambo can modify their props in response to user messages. Unlike generative components that Tambo creates on-demand, Interactable components are pre-placed by you and allow Tambo to update them in place.
## Step 1: Build Your React Component
Create a standard React component that accepts props.
If you're using `useState` and your component has state values that depend on the props, use `useEffect` to sync state with updated prop values as they stream in from Tambo.
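A minimal sketch of that pattern with plain `useState`:
```tsx
import { useEffect, useState } from "react";

function Note({ title, content }: { title: string; content: string }) {
  const [draftContent, setDraftContent] = useState(content);

  // Re-sync local state whenever Tambo streams in an updated prop value
  useEffect(() => {
    setDraftContent(content);
  }, [content]);

  return (
    <div>
      <h3>{title}</h3>
      <textarea
        value={draftContent}
        onChange={(e) => setDraftContent(e.target.value)}
      />
    </div>
  );
}
```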
If you want Tambo to be able to see and update state values rather than just props, replace `useState` with `useTamboComponentState`, and use the `setFromProp` parameter to sync state from props during streaming rather than `useEffect`.
```tsx
import { useTamboComponentState } from "@tambo-ai/react";
type NoteProps = {
title: string;
content: string;
};
function Note({ title, content }: NoteProps) {
// useTamboComponentState allows Tambo to see and update the draft content
// The setFromProp parameter syncs state from props during streaming
const [draftContent, setDraftContent] = useTamboComponentState(
"draftContent",
content,
content,
);
return (
  <div>
    <h3>{title}</h3>
    <textarea
      value={draftContent}
      onChange={(e) => setDraftContent(e.target.value)}
    />
  </div>
);
}
```
## Step 2: Define Your Props Schema
Create a Zod schema that describes which props Tambo can modify:
```tsx
import { z } from "zod";
export const NotePropsSchema = z.object({
title: z.string(),
content: z.string(),
});
```
This schema tells Tambo:
* Which props it can update
* What types and values are valid
* Which props are optional
## Step 3: Wrap with `withInteractable`
Use `withInteractable` to create an Interactable version of your component:
```tsx
import { withInteractable } from "@tambo-ai/react";
import { Note } from "./note";
import { NotePropsSchema } from "./note-schema";
export const InteractableNote = withInteractable(Note, {
componentName: "Note",
description: "A simple note that can change title, and content",
propsSchema: NotePropsSchema,
});
```
**Configuration options:**
* `componentName`: Name Tambo uses to reference this component
* `description`: What the component does (helps Tambo decide when to use it)
* `propsSchema`: Zod schema defining editable props
Unlike generative components, Interactable components register themselves automatically when they mount. You don't need to add them to `TamboProvider`'s components array.
## Step 4: Render in Your App
Place the Interactable component in your app where you want it to appear, within the `TamboProvider`:
```tsx
import { TamboProvider } from "@tambo-ai/react";
import { InteractableNote } from "./interactable-note";
function App() {
  return (
    <TamboProvider apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}>
      <InteractableNote title="My Note" content="Start typing..." />
    </TamboProvider>
  );
}
```
Tambo can now see and modify this component when users send messages like:
* "Change the note title to 'Important Reminder'"
* "Update the note content to 'Don't forget the meeting at 3pm'"
## Complete Example
Here's a complete working example:
```tsx
import {
useTamboComponentState,
withInteractable,
TamboProvider,
} from "@tambo-ai/react";
import { z } from "zod";
// Step 1: Define Component
type NoteProps = {
title: string;
content: string;
};
function Note({ title, content }: NoteProps) {
// useTamboComponentState allows Tambo to see and update the draft content
// The setFromProp parameter syncs state from props during streaming
const [draftContent, setDraftContent] = useTamboComponentState(
"draftContent",
content,
content,
);
return (
  <div>
    <h3>{title}</h3>
    <textarea
      value={draftContent}
      onChange={(e) => setDraftContent(e.target.value)}
    />
  </div>
);
}
// Step 2 & 3: Schema and wrap with withInteractable
const NotePropsSchema = z.object({
title: z.string(),
content: z.string(),
});
export const InteractableNote = withInteractable(Note, {
componentName: "Note",
description: "A simple note that can change title, and content",
propsSchema: NotePropsSchema,
});
// Step 4: Use in app
export default function Page() {
  return (
    <TamboProvider apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}>
      <InteractableNote title="My Note" content="Start typing..." />
    </TamboProvider>
  );
}
```
# Let Users Attach Context
URL: /guides/give-context/let-users-attach-context
import LearnMore from "@/components/learn-more";
import { BookOpen } from "lucide-react";
Use context attachments to let users explicitly stage temporary context—like file contents, selected text, or specific elements—that's automatically included in the next message and then cleared.
## When to Use This
Context attachments are perfect when:
* ✅ Users should explicitly choose what context to include
* ✅ Context is temporary and relevant to a single message
* ✅ Users need to attach files, selections, or specific elements
* ✅ You want to show staged context before sending
* ✅ Context should be cleared after the message is sent
**Don't use context attachments for:**
* ❌ Automatic ambient context (use [Context Helpers](/guides/give-context/make-ai-aware-of-state))
* ❌ @ mentionable documentation (use [Resources](/guides/give-context/make-context-referenceable))
## Prerequisites
* A Tambo application with `TamboProvider` configured
* Understanding of React hooks
## Step 1: Import the Hook
Import the `useTamboContextAttachment` hook:
```tsx
import { useTamboContextAttachment } from "@tambo-ai/react";
```
## Step 2: Use the Hook in Your Component
The hook provides methods to add, remove, and access context attachments:
```tsx
function FileViewer({ file }) {
const { addContextAttachment, removeContextAttachment, attachments } =
useTamboContextAttachment();
return <div>{/* Your component JSX */}</div>;
}
```
### Hook API
The `useTamboContextAttachment` hook returns:
* `attachments`: Array of currently staged attachments
* `addContextAttachment`: Function to add a new attachment
* `removeContextAttachment`: Function to remove an attachment by ID
* `clearContextAttachments`: Function to remove all attachments
## Step 3: Add Context Attachments
Add attachments when users select files, highlight text, or focus on specific content:
```tsx
function FileViewer({ file }) {
const { addContextAttachment } = useTamboContextAttachment();
const handleSelectFile = () => {
addContextAttachment({
context: file.content, // The actual content to include
displayName: file.name, // Name shown in UI
type: "file", // Optional type for grouping
});
};
return <button onClick={handleSelectFile}>Attach file</button>;
}
```
### Attachment Structure
Each attachment object has:
```tsx
{
id: string; // Auto-generated unique identifier
context: string; // The content to include (required)
displayName?: string; // Optional name for UI display
type?: string; // Optional type identifier
}
```
## Step 4: Display Active Attachments
Show users which context has been staged:
```tsx
function StagedContextDisplay() {
const { attachments, removeContextAttachment } = useTamboContextAttachment();
if (attachments.length === 0) return null;
return (
  <div>
    <h4>Staged Context</h4>
    <ul>
      {attachments.map((attachment) => (
        <li key={attachment.id}>
          {attachment.displayName || "Untitled"}
          <button onClick={() => removeContextAttachment(attachment.id)}>
            Remove
          </button>
        </li>
      ))}
    </ul>
  </div>
);
}
```
## Common Patterns
### Pattern 1: File Browser
Allow users to stage files for AI context:
```tsx
function FileBrowser({ files }) {
const { attachments, addContextAttachment, removeContextAttachment } =
useTamboContextAttachment();
const handleFileClick = (file) => {
addContextAttachment({
context: file.content,
displayName: file.name,
type: "file",
});
};
return (
  <div>
    {/* Active attachments */}
    {attachments.length > 0 && (
      <div>
        <h4>Staged Files:</h4>
        {attachments.map((attachment) => (
          <div key={attachment.id}>
            {attachment.displayName}
            <button onClick={() => removeContextAttachment(attachment.id)}>
              Remove
            </button>
          </div>
        ))}
      </div>
    )}
    {/* File list */}
    {files.map((file) => (
      <button key={file.name} onClick={() => handleFileClick(file)}>
        {file.name}
      </button>
    ))}
  </div>
);
}
```
### Pattern 2: Text Selection
Stage selected text from your application:
```tsx
function TextSelectionHandler() {
const { addContextAttachment } = useTamboContextAttachment();
const handleTextSelection = () => {
const selectedText = window.getSelection()?.toString();
if (!selectedText) return;
addContextAttachment({
context: selectedText,
displayName: `Selection: ${selectedText.slice(0, 30)}...`,
type: "selection",
});
};
return <button onClick={handleTextSelection}>Attach selected text</button>;
}
```
### Pattern 3: Canvas/Workspace Elements
Let users focus the AI on specific workspace elements:
```tsx
function WorkspaceElement({ element }) {
const { addContextAttachment } = useTamboContextAttachment();
const handleFocus = () => {
addContextAttachment({
context: JSON.stringify(element, null, 2),
displayName: element.title,
type: "workspace-element",
});
};
return (
  <div>
    <h4>{element.title}</h4>
    <button onClick={handleFocus}>Focus AI on this element</button>
  </div>
);
}
```
## Automatic Cleanup
Context attachments are automatically cleared after message submission. You don't need to manually clear them:
```tsx
// After user sends a message:
// 1. Attachments are included in the message
// 2. Attachments array is automatically cleared
// 3. User can stage new attachments for the next message
```
To manually clear before sending:
```tsx
const { clearContextAttachments } = useTamboContextAttachment();
<button onClick={clearContextAttachments}>Clear staged context</button>;
```
## Complete Example
Full implementation with file browser and staged context display:
```tsx
import { useTamboContextAttachment } from "@tambo-ai/react";
function ContextStagingDemo({ files }) {
const {
attachments,
addContextAttachment,
removeContextAttachment,
clearContextAttachments,
} = useTamboContextAttachment();
const handleFileClick = (file) => {
addContextAttachment({
context: file.content,
displayName: file.name,
type: "file",
});
};
return (
  <div>
    <h3>Available Files</h3>
    {files.map((file) => (
      <button key={file.name} onClick={() => handleFileClick(file)}>
        {file.name}
      </button>
    ))}
    {attachments.length > 0 && (
      <div>
        <h4>Staged Context ({attachments.length})</h4>
        {attachments.map((attachment) => (
          <div key={attachment.id}>
            {attachment.displayName}
            <button onClick={() => removeContextAttachment(attachment.id)}>
              Remove
            </button>
          </div>
        ))}
        <button onClick={clearContextAttachments}>Clear all</button>
      </div>
    )}
  </div>
);
}
```
## Next Steps
# Make Tambo Aware of State
URL: /guides/give-context/make-ai-aware-of-state
import LearnMore from "@/components/learn-more";
import { BookOpen } from "lucide-react";
Use context helpers to automatically include information about the user's current state on every message.
## When to Use This
Context helpers are perfect when:
* ✅ AI needs to know current page or section
* ✅ AI needs user session information (ID, role, permissions)
* ✅ Context is relevant to all messages in a conversation
* ✅ You want zero-effort automatic context inclusion
* ✅ Data changes based on navigation or app state
**Don't use context helpers for:**
* ❌ User-selected files or text (use [Context Attachments](/guides/give-context/let-users-attach-context))
* ❌ @ mentionable documentation (use [Resources](/guides/give-context/make-context-referenceable))
## How It Works
Context helpers are functions that run automatically before sending each message. Their results are merged into the message context and sent to Tambo along with the user's message.
```tsx
// Helper function (runs automatically)
const sessionHelper = async () => ({
userId: getCurrentUserId(),
role: getUserRole(),
});
// Registered in provider:
// <TamboProvider contextHelpers={{ session: sessionHelper }}>
// Result sent with every message:
{
"session": {
"userId": "user123",
"role": "admin"
}
}
```
### Step 1: Configure at the Root
The most common approach is to configure context helpers at your application root:
```tsx
import { TamboProvider } from "@tambo-ai/react";
import {
currentTimeContextHelper,
currentPageContextHelper,
} from "@tambo-ai/react";
function AppRoot({ children }: { children: React.ReactNode }) {
return (
  <TamboProvider
    apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
    contextHelpers={{
      userTime: currentTimeContextHelper,
      userPage: currentPageContextHelper,
    }}
  >
    {children}
  </TamboProvider>
);
}
```
### Step 2: Use Prebuilt Helpers
Tambo provides these prebuilt context helpers:
* **`currentTimeContextHelper`**: Provides current timestamp
* **`currentPageContextHelper`**: Provides current URL and page title
* **`currentInteractablesContextHelper`**: Automatically enabled when using interactable components
### Step 3: Add Custom Helpers
Add your own helpers inline or as separate functions:
```tsx
<TamboProvider
  apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
  contextHelpers={{
    // Inline async helper
    userSession: async () => ({
      userId: getCurrentUserId(),
      role: getUserRole(),
      permissions: await getPermissions(),
    }),
    // Another inline helper
    appState: () => ({
      selectedItems: getSelectedItems(),
      activeFilters: getActiveFilters(),
    }),
  }}
>
  {children}
</TamboProvider>
```
## Creating Custom Helpers
### Basic Pattern
A context helper is simply a function that returns data:
```tsx
// Synchronous helper
const themeHelper = () => ({
theme: getTheme(),
language: getLanguage(),
});
// Asynchronous helper
const sessionHelper = async () => ({
sessionId: getSessionId(),
user: await getCurrentUser(),
});
```
### Helper Function Rules
* Return an object, primitive value, or `null`/`undefined`
* Returning `null` or `undefined` skips including the context
* Can be sync or async (Tambo waits for async helpers)
* Should not throw errors (wrap in try/catch if needed)
* Keep them fast—they run on every message
### Common Patterns
#### Pattern 1: User Session Context
Provide information about the current user and session:
```tsx
const sessionHelper = async () => {
const user = await getCurrentUser();
if (!user) return null; // Skip if no user logged in
return {
userId: user.id,
email: user.email,
role: user.role,
sessionId: getSessionId(),
sessionDuration: getSessionDuration(),
};
};
```
#### Pattern 2: Environment and Feature Flags
Include environment information and feature flags:
```tsx
const environmentHelper = async () => ({
env: process.env.NODE_ENV,
version: process.env.NEXT_PUBLIC_APP_VERSION,
features: {
betaAccess: await checkFeatureFlag("beta-access"),
darkMode: await checkFeatureFlag("dark-mode"),
advancedTools: await checkFeatureFlag("advanced-tools"),
},
});
```
#### Pattern 3: Device and Browser Information
Provide device context for responsive AI responses:
```tsx
const deviceHelper = () => {
// Skip on server-side
if (typeof window === "undefined") return null;
return {
userAgent: navigator.userAgent,
platform: navigator.platform,
screen: {
width: window.screen.width,
height: window.screen.height,
},
viewport: {
width: window.innerWidth,
height: window.innerHeight,
},
};
};
```
#### Pattern 4: Application State
Share relevant application state with the AI:
```tsx
const appStateHelper = () => ({
currentProject: getCurrentProjectId(),
selectedItems: getSelectedItemIds(),
filterSettings: getActiveFilters(),
sortOrder: getCurrentSortOrder(),
});
```
### Error Handling
Handle errors gracefully to prevent helpers from breaking message sending:
```tsx
const safeHelper = async () => {
try {
const data = await fetchData();
return {
data,
status: "success",
};
} catch (error) {
console.error("Helper failed:", error);
// Return partial data or skip
return null;
}
};
```
### Overriding Prebuilt Helpers
Replace a prebuilt helper by using the same key:
```tsx
<TamboProvider
  contextHelpers={{
    // Use the same key as the prebuilt time helper to replace it
    userTime: () => ({
      formattedTime: new Date().toLocaleString("en-US", {
        weekday: "long",
        year: "numeric",
        month: "long",
        day: "numeric",
        hour: "numeric",
        minute: "numeric",
      }),
      timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
      isDaytime: new Date().getHours() >= 6 && new Date().getHours() < 18,
    }),
  }}
>
  {children}
</TamboProvider>
```
## Managing Context Dynamically
### Using the Hook API
For runtime control over context helpers, use the `useTamboContextHelpers` hook:
```tsx
import { useTamboContextHelpers } from "@tambo-ai/react";
const {
getContextHelpers, // Get current map of helpers
addContextHelper, // Add or replace a helper
removeContextHelper, // Remove a helper
} = useTamboContextHelpers();
```
### Pattern: State-Based Context
Add or update helpers when application state changes:
```tsx
function ProjectContextController({ projectId }: { projectId: string }) {
const { addContextHelper, removeContextHelper } = useTamboContextHelpers();
useEffect(() => {
// Skip if no project selected
if (!projectId) return;
// Add helper for current project
addContextHelper("currentProject", async () => ({
projectId,
projectName: await getProjectName(projectId),
projectData: await getProjectData(projectId),
permissions: await getProjectPermissions(projectId),
}));
// Cleanup when project changes or component unmounts
return () => {
removeContextHelper("currentProject");
};
}, [projectId, addContextHelper, removeContextHelper]);
return null; // This is a logic-only component
}
```
Use this pattern in your application:
```tsx
function App() {
const [currentProjectId, setCurrentProjectId] = useState<string | null>(null);
return (
  <TamboProvider apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}>
    {currentProjectId && (
      <ProjectContextController projectId={currentProjectId} />
    )}
    {/* rest of app */}
  </TamboProvider>
);
}
```
### Pattern: User Session Management
Add session context on login and remove on logout:
```tsx
function SessionContextManager() {
const { addContextHelper, removeContextHelper } = useTamboContextHelpers();
const { user, session } = useAuth();
useEffect(() => {
if (!user || !session) {
// User logged out - remove session context
removeContextHelper("session");
return;
}
// User logged in - add session context
addContextHelper("session", async () => ({
userId: user.id,
email: user.email,
role: user.role,
sessionId: session.id,
sessionStartTime: session.startTime,
}));
return () => {
removeContextHelper("session");
};
}, [user, session, addContextHelper, removeContextHelper]);
return null;
}
```
### Pattern: Conditional Context
Add helpers only when certain conditions are met:
```tsx
function FeatureContextManager() {
const { addContextHelper, removeContextHelper } = useTamboContextHelpers();
const { features } = useFeatureFlags();
useEffect(() => {
if (features.advancedMode) {
// Add extra context for advanced users
addContextHelper("advancedFeatures", () => ({
mode: "advanced",
capabilities: ["bulk-operations", "custom-scripts", "api-access"],
limits: {
maxOperations: 1000,
rateLimitPerHour: 10000,
},
}));
} else {
removeContextHelper("advancedFeatures");
}
}, [features.advancedMode, addContextHelper, removeContextHelper]);
return null;
}
```
## Page-Specific Context
### Using TamboContextHelpersProvider
For page-scoped context, use `TamboContextHelpersProvider` to add helpers only while specific routes or layouts are mounted:
```tsx title="app/dashboard/page.tsx"
import { TamboContextHelpersProvider } from "@tambo-ai/react";
export default function DashboardPage() {
return (
  <TamboContextHelpersProvider
    contextHelpers={{
      dashboard: async () => ({
        widgets: await getActiveWidgets(),
        filters: getCurrentFilters(),
        dateRange: getSelectedDateRange(),
      }),
    }}
  >
    {/* dashboard content */}
  </TamboContextHelpersProvider>
);
}
```
**When to use this pattern:**
* You don't want to configure helpers at the root
* You need different helpers for different pages
* You want to experiment with helpers on specific routes
**Important notes:**
* You still need `TamboProvider` higher in the component tree
* If the same key is registered in multiple places, the most recently mounted registration takes effect
* Helpers are only active while the provider is mounted
## Best Practices
### 1. Keep Helpers Fast
Helpers run on every message, so avoid expensive operations:
```tsx
// ❌ Slow - fetches on every message
const slowHelper = async () => ({
data: await fetchExpensiveData(),
});
// ✅ Fast - uses cached data
const cachedData = new Map();
const fastHelper = async () => {
if (!cachedData.has("key")) {
cachedData.set("key", await fetchExpensiveData());
}
return cachedData.get("key");
};
```
### 2. Return Null When Not Applicable
Don't include context that's not relevant:
```tsx
const projectHelper = () => {
const projectId = getCurrentProjectId();
if (!projectId) return null; // Skip if no project
return {
projectId,
projectName: getProjectName(projectId),
};
};
```
### 3. Avoid Large Payloads
Only include data the AI actually needs:
```tsx
// ❌ Too much data
const heavyHelper = () => ({
allUsers: getAllUsers(), // Thousands of records
completeHistory: getFullHistory(),
});
// ✅ Essential data only
const lightHelper = () => ({
currentUserId: getCurrentUserId(),
recentActivityCount: getRecentActivityCount(),
});
```
### 4. Clean Up in useEffect
Always remove helpers in the cleanup function:
```tsx
useEffect(() => {
addContextHelper("myHelper", myHelperFunction);
return () => {
removeContextHelper("myHelper"); // Clean up!
};
}, [dependencies]);
```
### 5. Handle Dependencies Correctly
Include all hook methods in dependencies:
```tsx
useEffect(() => {
addContextHelper("key", helperFunction);
return () => removeContextHelper("key");
}, [addContextHelper, removeContextHelper /* other deps */]);
```
### 6. Namespace Keys
Avoid conflicts by namespacing helper keys:
```tsx
addContextHelper("dashboard:filters", filtersHelper);
addContextHelper("settings:preferences", preferencesHelper);
```
## Complete Example
Full implementation with root helpers, custom helpers, and dynamic management:
```tsx
import { TamboProvider, useTamboContextHelpers } from "@tambo-ai/react";
import {
currentTimeContextHelper,
currentPageContextHelper,
} from "@tambo-ai/react";
import { useEffect } from "react";
// Root-level helpers
function App() {
  return (
    <TamboProvider
      apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
      contextHelpers={{
        userTime: currentTimeContextHelper,
        userPage: currentPageContextHelper,
        environment: () => ({
          env: process.env.NODE_ENV,
          version: process.env.NEXT_PUBLIC_APP_VERSION,
        }),
        device: () => {
          if (typeof window === "undefined") return null;
          return {
            viewport: {
              width: window.innerWidth,
              height: window.innerHeight,
            },
          };
        },
      }}
    >
      <SessionManager />
      <ProjectManager />
      {/* rest of app */}
    </TamboProvider>
  );
}
// Dynamic session management
function SessionManager() {
const { addContextHelper, removeContextHelper } = useTamboContextHelpers();
const { user } = useAuth();
useEffect(() => {
if (!user) {
removeContextHelper("session");
return;
}
addContextHelper("session", async () => ({
userId: user.id,
role: user.role,
permissions: await getUserPermissions(user.id),
}));
return () => {
removeContextHelper("session");
};
}, [user, addContextHelper, removeContextHelper]);
return null;
}
// Dynamic project context
function ProjectManager() {
const { addContextHelper, removeContextHelper } = useTamboContextHelpers();
const { projectId } = useCurrentProject();
useEffect(() => {
if (!projectId) {
removeContextHelper("project");
return;
}
addContextHelper("project", async () => {
try {
const [name, members, settings] = await Promise.all([
fetchProjectName(projectId),
fetchProjectMembers(projectId),
fetchProjectSettings(projectId),
]);
return {
id: projectId,
name,
memberCount: members.length,
settings,
};
} catch (error) {
console.error("Failed to fetch project context:", error);
return null;
}
});
return () => {
removeContextHelper("project");
};
}, [projectId, addContextHelper, removeContextHelper]);
return null;
}
```
## Next Steps
# Make Context Referenceable
URL: /guides/give-context/make-context-referenceable
import LearnMore from "@/components/learn-more";
import { BookOpen } from "lucide-react";
Use resources to give users a library of documentation, files, or knowledge that they can explicitly reference in their messages using @ mentions.
## When to Use This
Resources are perfect when:
* ✅ Users should be able to @ mention documentation or data
* ✅ You have a library of referenceable content (docs, guides, files)
* ✅ Context is discoverable through search
* ✅ Users explicitly control when to include specific resources
**Don't use resources for:**
* ❌ Automatic ambient context (use [Context Helpers](/guides/give-context/make-ai-aware-of-state))
* ❌ Temporary user-selected files (use [Context Attachments](/guides/give-context/let-users-attach-context))
**For server-side integrations** with databases, file systems, or external APIs, consider using [MCP servers](/concepts/model-context-protocol) instead. MCP servers provide additional capabilities like tools, prompts, and sampling.
## Prerequisites
* A Tambo application with `TamboRegistryProvider` configured
* Understanding of React providers
## Static Resources
The simplest way to register resources is to provide a static array via the `resources` prop. This is useful when you have a fixed set of resources that don't change:
```tsx
import { TamboRegistryProvider, ListResourceItem } from "@tambo-ai/react";
const resources: ListResourceItem[] = [
{
uri: "docs://api-reference",
name: "API Reference",
description: "Complete API documentation",
mimeType: "text/plain",
},
{
uri: "docs://faq",
name: "Frequently Asked Questions",
description: "Common questions and answers",
mimeType: "text/plain",
},
];
export function App() {
  return (
    <TamboRegistryProvider resources={resources}>
      {/* your app */}
    </TamboRegistryProvider>
  );
}
```
With static resources, you must also provide a `getResource` function to retrieve the content:
```tsx
import { ReadResourceResult } from "@tambo-ai/react";
const getResource = async (uri: string): Promise<ReadResourceResult> => {
if (uri === "docs://api-reference") {
return {
contents: [
{
uri,
mimeType: "text/plain",
text: "API Reference: GET /api/users - Returns a list of users...",
},
],
};
}
if (uri === "docs://faq") {
return {
contents: [
{
uri,
mimeType: "text/plain",
text: "Q: How do I reset my password? A: Click the reset link...",
},
],
};
}
throw new Error(`Resource not found: ${uri}`);
};
export function App() {
  return (
    <TamboRegistryProvider resources={resources} getResource={getResource}>
      {/* your app */}
    </TamboRegistryProvider>
  );
}
```
## Dynamic Resources
For resources that change based on search queries or other dynamic conditions, provide `listResources` and `getResource` functions. Both functions must be provided together:
```tsx
import {
TamboRegistryProvider,
ListResourceItem,
ReadResourceResult,
} from "@tambo-ai/react";
const listResources = async (search?: string): Promise<ListResourceItem[]> => {
  // Fetch available resources and map them to resource items
  const allDocs = await fetchDocumentation();
  const items = allDocs.map((doc) => ({
    uri: `docs://${doc.id}`,
    name: doc.title,
    description: doc.summary,
    mimeType: "text/plain",
  }));
  // Optionally filter by the search string
  if (search) {
    return items.filter((item) =>
      item.name.toLowerCase().includes(search.toLowerCase()),
    );
  }
  return items;
};
const getResource = async (uri: string): Promise<ReadResourceResult> => {
const docId = uri.replace("docs://", "");
const doc = await fetchDocument(docId);
if (!doc) {
throw new Error(`Document not found: ${uri}`);
}
return {
contents: [
{
uri,
mimeType: "text/plain",
text: doc.content,
},
],
};
};
export function App() {
  return (
    <TamboRegistryProvider
      listResources={listResources}
      getResource={getResource}
    >
      {/* your app */}
    </TamboRegistryProvider>
  );
}
```
> Note: `listResources` and `getResource` must be provided together. If you provide one, you must provide the other. This validation ensures resources can be both discovered and retrieved.
## Programmatic Registration
You can also register resources programmatically using the registry context. This is useful for adding resources based on user actions or application state:
```tsx
import { useTamboRegistry } from "@tambo-ai/react";
function DocumentUploader() {
const { registerResource } = useTamboRegistry();
const handleUpload = async (file: File) => {
const content = await file.text();
// Register the uploaded document as a resource
registerResource({
uri: `user-docs://${file.name}`,
name: file.name,
description: `User uploaded: ${file.name}`,
mimeType: file.type,
});
};
return (
  <input
    type="file"
    onChange={(e) => {
      if (e.target.files?.[0]) {
        handleUpload(e.target.files[0]);
      }
    }}
  />
);
}
```
For batch registration, use `registerResources`:
```tsx
import { useTamboRegistry } from "@tambo-ai/react";
import { useEffect } from "react";
function DocumentLibrary() {
const { registerResources } = useTamboRegistry();
const loadLibrary = async () => {
const docs = await fetchAllDocuments();
registerResources(
docs.map((doc) => ({
uri: `library://${doc.id}`,
name: doc.title,
description: doc.summary,
mimeType: "text/plain",
})),
);
};
// Load on mount
useEffect(() => {
loadLibrary();
}, []);
return <div>Library loaded</div>;
}
```
## Resource Content Types
Resources can return various content types through the `contents` array. Each content item must include a `uri` and `mimeType`:
### Text Content
```tsx
const getResource = async (uri: string): Promise<ReadResourceResult> => {
return {
contents: [
{
uri,
mimeType: "text/plain",
text: "This is plain text content",
},
],
};
};
```
### Binary Content (Base64)
```tsx
const getResource = async (uri: string): Promise<ReadResourceResult> => {
const imageData = await fetchImageAsBase64(uri);
return {
contents: [
{
uri,
mimeType: "image/png",
blob: imageData, // Base64-encoded binary data
},
],
};
};
```
### Multiple Content Items
A single resource can return multiple content items:
```tsx
const getResource = async (uri: string): Promise<ReadResourceResult> => {
return {
contents: [
{
uri: `${uri}/readme`,
mimeType: "text/plain",
text: "README content...",
},
{
uri: `${uri}/diagram`,
mimeType: "image/png",
blob: "base64-encoded-image-data...",
},
],
};
};
```
## Resource URIs and Prefixing
Local resources registered through `TamboRegistryProvider` are **never prefixed** with a server key. This distinguishes them from MCP resources, which are always prefixed with their server key to prevent URI conflicts.
```tsx
// Local resource URI (no prefix)
uri: "docs://getting-started";
// MCP resource URI (always prefixed)
uri: "filesystem:file:///path/to/document";
```
When using `useTamboMcpResourceList()`, local resources appear alongside MCP resources in the combined list, with local resources easily identifiable by their unprefixed URIs.
## Validation and Error Handling
Tambo validates resource registrations to ensure data integrity:
* **Resource objects**: Must have `uri`, `name`, and `mimeType` properties
* **Function pairing**: `listResources` and `getResource` must both be provided or both omitted
* **URI uniqueness**: Each resource should have a unique URI within your application
```tsx
// ✅ Valid - both functions provided
// ✅ Valid - neither function provided (static only)
// ❌ Invalid - only one function provided
// Error: Both listResources and getResource must be provided together
```
## Integration with MCP Hooks
Local resources integrate seamlessly with existing MCP hooks:
### Listing All Resources
```tsx
import { useTamboMcpResourceList } from "@tambo-ai/react";
function ResourceBrowser() {
const { data: resources } = useTamboMcpResourceList();
return (
  <ul>
    {resources?.map((resource) => (
      <li key={resource.uri}>{resource.name}</li>
    ))}
  </ul>
);
}
```
### Reading a Resource
```tsx
import { useTamboMcpResource } from "@tambo-ai/react";
function ResourceViewer({ uri }: { uri: string }) {
const { data: resource } = useTamboMcpResource(uri);
if (!resource) return <div>Loading...</div>;
return (
  <div>
    {resource.contents.map((content, i) => (
      <div key={i}>
        {content.text && <pre>{content.text}</pre>}
        {content.blob && (
          <img
            src={`data:${content.mimeType};base64,${content.blob}`}
            alt={content.uri}
          />
        )}
      </div>
    ))}
  </div>
);
}
```
## Combining Registration Methods
You can combine static resources with dynamic functions for maximum flexibility:
```tsx
import { TamboRegistryProvider } from "@tambo-ai/react";
// Static resources that are always available
const staticResources = [
{
uri: "docs://privacy-policy",
name: "Privacy Policy",
mimeType: "text/plain",
},
];
// Dynamic function to list all resources (static + dynamic)
const listResources = async (search?: string) => {
const dynamicDocs = await fetchUserDocuments();
const allResources = [
...staticResources,
...dynamicDocs.map((doc) => ({
uri: `user-docs://${doc.id}`,
name: doc.title,
mimeType: "text/plain",
})),
];
if (search) {
return allResources.filter((r) =>
r.name.toLowerCase().includes(search.toLowerCase()),
);
}
return allResources;
};
const getResource = async (uri: string) => {
if (uri === "docs://privacy-policy") {
return {
contents: [
{ uri, mimeType: "text/plain", text: "Privacy policy text..." },
],
};
}
// Handle dynamic resources
const doc = await fetchDocument(uri);
return {
contents: [{ uri, mimeType: "text/plain", text: doc.content }],
};
};
export function App() {
  return (
    <TamboRegistryProvider
      resources={staticResources}
      listResources={listResources}
      getResource={getResource}
    >
      {/* your app */}
    </TamboRegistryProvider>
  );
}
```
# Configure Agent Behavior
URL: /guides/setup-project/agent-behavior
This guide shows how to use custom instructions to define your agent's personality, expertise, and behavioral guidelines.
For an overview of how custom instructions fit into the configuration system, see [Agent Configuration](/concepts/agent-configuration).
## Configuring Custom Instructions
To set custom instructions for your project:
1. Open your project in the [Tambo Cloud dashboard](https://console.tambo.co)
2. Navigate to **Settings**
3. Find the **Custom Instructions** field
4. Enter your instructions
5. Click **Save**
Changes apply to all new messages sent to the project.
## Writing Effective Custom Instructions
### Keep It Focused
Custom instructions should define static behavior that applies to all users. Add guidelines that:
* Define the agent's role and expertise
* Set tone and personality
* Establish behavioral guidelines
* Specify response format preferences
### Structure Your Instructions
A clear structure helps the model understand and follow your instructions:
```
You are a [role/identity]. [Brief description of purpose].
When responding:
- [Key guideline 1]
- [Key guideline 2]
- [Key guideline 3]
[Any specific behaviors or constraints]
```
## Practical Examples
### Example 1: General-Purpose Assistant
**Use case:** A helpful assistant that answers questions across many topics.
**Custom instructions:**
```
You are a knowledgeable and helpful assistant. Provide clear, accurate
answers to questions. When you're unsure, say so rather than guessing.
Keep responses concise but complete.
```
### Example 2: Specialized Domain Expert
**Use case:** An agent focused on a specific domain (e.g., legal, medical, technical).
**Custom instructions:**
```
You are a senior software engineer specializing in React and TypeScript.
When helping with code:
- Write type-safe TypeScript with proper interfaces
- Follow React best practices and hooks patterns
- Explain your reasoning for architectural decisions
- Point out potential bugs or security issues
- Suggest improvements when appropriate
```
### Example 3: Customer Support Agent
**Use case:** Handling customer inquiries professionally and efficiently.
**Custom instructions:**
```
You are a friendly customer support agent for [Company Name]. Your goal
is to resolve customer issues efficiently and professionally.
Always:
- Greet customers warmly
- Listen carefully to their concerns
- Provide accurate, specific solutions
- Escalate to human support when needed
- Thank customers for their patience
Maintain a professional, empathetic tone even when customers are frustrated.
```
## Testing Your Instructions
After setting custom instructions:
1. Send a test message that should trigger your guidelines
2. Verify the agent follows the specified behavior
3. Test edge cases to ensure appropriate responses
4. Adjust and iterate based on results
For parameter tuning to control response creativity and consistency, see [Configure LLM Provider](/guides/setup-project/llm-provider).
## Next Steps
* [Configure LLM Provider](/guides/setup-project/llm-provider) - Model selection and parameter tuning
* [Give Tambo Extra Context](/guides/give-context) - Runtime context injection for user-specific information
* [Agent Configuration Concepts](/concepts/agent-configuration) - Understanding the configuration system
* [Additional Context](/concepts/additional-context) - Thread and message-level context patterns
# Create a Tambo Project
URL: /guides/setup-project/create-project
Learn how to create a new Tambo project through the dashboard to start building AI-powered applications.
## Step 1: Sign Up or Log In
Go to [tambo.co](https://tambo.co) and create an account or log in to your existing account.
## Step 2: Create a New Project
1. Click "Create Project" in the dashboard
2. Enter a project name (e.g., "My Chat App")
## Step 3: Get Your API Key
1. Click the "Create API Key" button
2. Copy your project API key
## Step 4: Add to Your Application
Add the API key to your environment variables:
```bash title=".env.local"
NEXT_PUBLIC_TAMBO_API_KEY=your_api_key_here
```
Note that this example uses Next.js client environment variable naming, but Next.js is not required.
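For example, in a Vite app you would read the key from Vite's client env instead (the variable name here is illustrative):
```tsx
// Vite exposes client env vars prefixed with VITE_ on import.meta.env
const apiKey = import.meta.env.VITE_TAMBO_API_KEY;
```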
Then pass it to TamboProvider in your React application:
```tsx title="app/layout.tsx"
import { TamboProvider } from "@tambo-ai/react";
export default function RootLayout({
children,
}: {
children: React.ReactNode;
}) {
return (
  <html lang="en">
    <body>
      <TamboProvider apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}>
        {children}
      </TamboProvider>
    </body>
  </html>
);
}
```
## Next Steps
Now that you have your project set up, you can start sending messages and getting responses from Tambo!
Learn how to build conversation interfaces:
* [Build a Custom Chat Interface](/guides/build-interfaces/build-chat-interface) to create your conversation UI
Or configure how Tambo behaves:
* [Configure Agent Behavior](/guides/setup-project/agent-behavior) to customize how your agent responds
* [Configure LLM Provider](/guides/setup-project/llm-provider) to select your AI model and settings
* [Give Tambo Components to Generate](/guides/enable-generative-ui/register-components) to enable AI-generated UI
# Configure LLM Provider
URL: /guides/setup-project/llm-provider
This guide walks you through configuring LLM providers and models for your Tambo project, including provider-specific settings and parameter tuning patterns.
## Step 1: Select a Provider
Tambo supports multiple LLM providers:
* **OpenAI** - GPT-4, GPT-4 Turbo, GPT-3.5 models
* **Anthropic** - Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku
* **Google** - Gemini 1.5 Pro, Gemini 1.5 Flash
* **Groq** - Fast inference for Llama, Mixtral models
* **Mistral** - Mistral Large, Mistral Medium
* **OpenAI-Compatible** - Any provider with OpenAI API compatibility
Navigate to your project's LLM Configuration section in the settings of the Tambo dashboard and select your provider from the dropdown.
For detailed provider capabilities and comparison, see the [Provider Reference](/reference/llm-providers).
## Step 2: Add API Keys
Generate an API key from your provider's dashboard:
* **OpenAI**: [platform.openai.com/api-keys](https://platform.openai.com/api-keys)
* **Anthropic**: [console.anthropic.com/settings/keys](https://console.anthropic.com/settings/keys)
* **Google AI Studio**: [makersuite.google.com/app/apikey](https://makersuite.google.com/app/apikey)
* **Groq**: [console.groq.com/keys](https://console.groq.com/keys)
* **Mistral**: [console.mistral.ai/api-keys](https://console.mistral.ai/api-keys)
In your project's LLM Configuration:
1. Find **API Key** field
2. Paste your API key exactly as provided
3. Click **Save** to store securely
## Step 3: Select a Model
Choose a model based on your requirements:
### Selection Criteria
* **Capability** - More capable models handle complex tasks better
* **Cost** - Larger models cost more per token
* **Speed** - Smaller models respond faster
* **Context Window** - Some models support larger inputs
## Step 4: Configure Custom Parameters
Parameters control how your model behaves. Configure these in the Custom Parameters section of your project settings.
The specific parameters available depend on the selected provider and model. For example, OpenAI GPT-5 models have a `reasoningEffort` parameter that you can edit.
For details on how each parameter changes model output, refer to your provider's documentation for available options.
## Advanced: Custom OpenAI-Compatible Endpoints
If your provider isn't listed explicitly in the LLM Providers dropdown, you can still use it as long as its API is OpenAI-compatible:
1. Select **OpenAI Compatible** as your provider
2. Enter **Custom Model Name** (e.g., `meta-llama/Llama-3-70b-chat-hf`)
3. Enter **Custom Base URL** without "chat/completions" - Tambo will add this automatically. (e.g., `https://api.myai.xyz/v1` becomes `https://api.myai.xyz/v1/chat/completions`)
4. Add your API key if required
5. Click **Save**
Your endpoint must implement OpenAI API format with `/chat/completions` support.
## Next Steps
* [Configure Agent Behavior](/guides/setup-project/agent-behavior) - Practical configuration patterns
* [Provider Reference](/reference/llm-providers) - Detailed provider capabilities
* [Agent Configuration Concepts](/concepts/agent-configuration) - Understanding the system
# Give Tambo Access to Your Functions
URL: /guides/take-actions/register-tools
import LearnMore from "@/components/learn-more";
import { Wrench } from "lucide-react";
This guide shows you how to register custom JavaScript functions as tools so Tambo can call them to retrieve data or take actions in response to user messages.
## Prerequisites
* TamboProvider set up in your application
* Zod installed for schema definitions
## Step 1: Define your tool function
Create a JavaScript function that performs the action:
```tsx
const getWeather = async (city: string) => {
const res = await fetch(`/api/weather?city=${encodeURIComponent(city)}`);
if (!res.ok) throw new Error(`HTTP ${res.status}`);
return await res.json();
};
```
## Step 2: Create the tool definition
Wrap your function in a TamboTool object with a schema. The `description` field helps Tambo decide when to use your tool, and `.describe()` on schema fields helps Tambo understand how to call it with the right parameters.
```tsx
import { TamboTool } from "@tambo-ai/react";
import { z } from "zod";
export const weatherTool: TamboTool = {
name: "get_weather",
// Clear description of what the tool does and when to use it
description: "Fetch current weather information for a specified city",
tool: getWeather,
// Use .describe() to explain each parameter
inputSchema: z.string().describe("The city to fetch weather for"),
outputSchema: z.object({
location: z.object({
name: z.string(),
}),
}),
};
```
Write clear, specific descriptions so Tambo knows when to call your tool and what values to pass.
## Step 3: Register with TamboProvider
Pass your tools array to the provider:
```tsx
<TamboProvider
  apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
  tools={[weatherTool]}
>
  {/* your app */}
</TamboProvider>
```
Tambo can now call your tool when relevant to user messages.
## Return Rich Content (Optional)
To return images, audio, or mixed media instead of plain text, add `transformToContent` to your tool definition:
```tsx
const getProductImage = async (productId: string) => {
const product = await fetchProductData(productId);
return {
name: product.name,
description: product.description,
imageUrl: product.imageUrl,
price: product.price,
};
};
export const productTool: TamboTool = {
name: "get_product",
description: "Fetch product information with image",
tool: getProductImage,
inputSchema: z.string().describe("Product ID"),
outputSchema: z.object({
name: z.string(),
description: z.string(),
imageUrl: z.string(),
price: z.number(),
}),
transformToContent: (result) => [
{
type: "text",
text: `${result.name} - $${result.price}\n\n${result.description}`,
},
{
type: "image_url",
image_url: { url: result.imageUrl },
},
],
};
```
Content types: `text`, `image_url`, `input_audio`
## Enable Streaming Execution (Optional)
By default, Tambo waits for complete tool arguments before executing. For tools that can handle partial arguments gracefully, you can enable streaming execution to call the tool incrementally as arguments are generated. To understand how streamable execution works under the hood, see [Tools > How Tools Execute](/concepts/tools#how-tools-execute).
### When to Use Streamable Tools
Use `annotations.tamboStreamableHint: true` when your tool:
* ✅ Can handle incomplete or partial data gracefully
* ✅ Has no side effects (safe to call multiple times)
* ✅ Benefits from incremental execution (state updates, visualizations, real-time feedback)
* ✅ Is idempotent or can merge partial updates safely
### When NOT to Use Streamable Tools
Avoid `annotations.tamboStreamableHint: true` when your tool:
* ❌ Makes API calls or database writes (would cause duplicate requests)
* ❌ Has side effects that shouldn't be repeated
* ❌ Requires complete arguments to function correctly
* ❌ Returns data that the AI needs immediately
### Example: Enable Streaming Execution
```tsx
import { TamboTool } from "@tambo-ai/react";
import { z } from "zod";
const updateChart = (data: { title?: string; values?: number[] }) => {
  // Called incrementally as arguments stream in
  // Handles partial data gracefully
  setChartState((prev) => ({
    ...prev,
    ...(data.title && { title: data.title }),
    ...(data.values && { values: data.values }),
  }));
};

export const chartTool: TamboTool = {
  name: "update_chart",
  description: "Update the chart visualization with new data",
  tool: updateChart,
  inputSchema: z.object({
    title: z.string().optional(),
    values: z.array(z.number()).optional(),
  }),
  outputSchema: z.void(),
  annotations: {
    tamboStreamableHint: true, // Enable streaming execution
  },
};
```
### Handling Partial Data
Your streamable tool should handle incomplete data gracefully. Use optional properties and defensive checks:
```tsx
const updateDashboard = (data: {
  title?: string;
  metrics?: { name: string; value: number }[];
  timeRange?: string;
}) => {
  // Only update fields that are present
  if (data.title !== undefined) {
    setTitle(data.title);
  }
  if (data.metrics !== undefined) {
    setMetrics(data.metrics);
  }
  if (data.timeRange !== undefined) {
    setTimeRange(data.timeRange);
  }
};

export const dashboardTool: TamboTool = {
  name: "update_dashboard",
  description: "Update dashboard with metrics and time range",
  tool: updateDashboard,
  inputSchema: z.object({
    title: z.string().optional(),
    metrics: z
      .array(z.object({ name: z.string(), value: z.number() }))
      .optional(),
    timeRange: z.string().optional(),
  }),
  outputSchema: z.void(),
  annotations: {
    tamboStreamableHint: true,
  },
};
```
This pattern ensures your tool processes data incrementally as the AI generates each field, providing a smooth real-time experience.
### Good vs Bad Examples
```tsx
// ❌ Bad: API call tool - would make duplicate requests
export const createUserTool: TamboTool = {
  name: "create_user",
  description: "Create a new user account",
  tool: async (data) => await api.createUser(data),
  inputSchema: z.object({
    name: z.string(),
    email: z.string(),
  }),
  outputSchema: z.object({ userId: z.string() }),
  annotations: { tamboStreamableHint: true }, // Don't do this!
};

// ✅ Good: State update tool - safe to call multiple times
export const updateFormTool: TamboTool = {
  name: "update_form",
  description: "Update form fields in real-time",
  tool: (fields) => setFormState(fields),
  inputSchema: z.object({
    name: z.string().optional(),
    email: z.string().optional(),
  }),
  outputSchema: z.void(),
  annotations: { tamboStreamableHint: true }, // Safe for repeated calls
};
```
# CSS & Tailwind Configuration
URL: /reference/cli/configuration
The Tambo CLI automatically configures your CSS and Tailwind setup based on your project's Tailwind CSS version. This page explains what changes are made and how to configure them manually if needed.
## What Gets Configured
When you run Tambo CLI commands (`full-send`, `add`, `update`, `upgrade`), the CLI:
1. **Detects your Tailwind CSS version** (v3 or v4)
2. **Updates your `globals.css`** with required CSS variables
3. **Updates your `tailwind.config.ts`** (v3 only) with basic configuration
4. **Preserves your existing styles** and configuration
The CLI automatically detects your Tailwind version from your `package.json`
and applies the appropriate configuration format. You don't need to specify
which version you're using.
## CSS Variables Added
Tambo components require specific CSS variables to function properly. The CLI adds these in the appropriate format for your Tailwind version:
### Core Tailwind Variables
These standard Tailwind CSS variables are added with Tambo's default color scheme:
```css
/* Light mode */
--background: /* white or an appropriate light background */;
--foreground: /* dark text color */;
--primary: /* Tambo brand primary color */;
--secondary: /* secondary accent color */;
--muted: /* muted backgrounds and borders */;
--border: /* border colors */;
/* ... additional standard Tailwind variables */
```
### Tambo-Specific Variables
These custom variables control Tambo component layouts and behavior:
```css
/* Layout dimensions */
--panel-left-width: 500px;
--panel-right-width: 500px;
--sidebar-width: 3rem;
/* Container and backdrop styles */
--container: /* light container background */;
--backdrop: /* modal backdrop opacity */;
--muted-backdrop: /* subtle backdrop for overlays */;
/* Border radius */
--radius: 0.5rem;
```
The Tambo-specific variables (`--panel-left-width`, `--panel-right-width`,
`--sidebar-width`, `--container`, `--backdrop`, `--muted-backdrop`) are
essential for proper component functionality. Removing these will break
component layouts.
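To see why, note that panel-style components size themselves from these variables. Conceptually it works like this (the selectors here are illustrative, not Tambo's actual class names):
```css
/* Illustrative only - shows how the layout variables are consumed */
.panel-left {
  width: var(--panel-left-width); /* collapses if the variable is missing */
}
.sidebar {
  width: var(--sidebar-width);
}
```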
## Tailwind CSS v3 Configuration
For Tailwind v3 projects, the CLI uses the `@layer base` approach:
### Complete globals.css for v3
```css title="app/globals.css"
@tailwind base;
@tailwind components;
@tailwind utilities;
@layer base {
:root {
/* Default Tailwind CSS Variables customized with tambo colors */
--background: 0 0% 100%;
--foreground: 240 10% 3.9%;
--card: 0 0% 100%;
--card-foreground: 240 10% 3.9%;
--popover: 0 0% 100%;
--popover-foreground: 240 10% 3.9%;
--primary: 235 12% 21%;
--primary-foreground: 0 0% 98%;
--secondary: 218 11% 46%;
--secondary-foreground: 0 0% 100%;
--muted: 217 14% 90%;
--muted-foreground: 217 14% 68%;
--accent: 240 4.8% 95.9%;
--accent-foreground: 240 5.9% 10%;
--destructive: 0 84.2% 60.2%;
--border: 207 22% 90%;
--input: 240 5.9% 90%;
--ring: 240 10% 3.9%;
--chart-1: 30 80% 54.9%;
--chart-2: 339.8 74.8% 54.9%;
--chart-3: 219.9 70.2% 50%;
--chart-4: 160 60% 45.1%;
--chart-5: 280 64.7% 60%;
/* Tambo Specific Variables needed for tambo components */
--radius: 0.5rem;
--container: 210 29% 97%;
--backdrop: 210 88% 14% / 0.25;
--muted-backdrop: 210 88% 14% / 0.1;
--panel-left-width: 500px;
--panel-right-width: 500px;
--sidebar-width: 3rem;
}
.dark {
/* Default Tailwind CSS Variables customized with tambo colors */
--background: 240 10% 3.9%;
--foreground: 0 0% 98%;
--card: 240 10% 3.9%;
--card-foreground: 0 0% 98%;
--popover: 240 10% 3.9%;
--popover-foreground: 0 0% 98%;
--primary: 0 0% 98%;
--primary-foreground: 240 5.9% 10%;
--secondary: 240 3.7% 15.9%;
--secondary-foreground: 0 0% 98%;
--muted: 240 3.7% 15.9%;
--muted-foreground: 240 5% 64.9%;
--accent: 240 3.7% 15.9%;
--accent-foreground: 0 0% 98%;
--destructive: 0 62.8% 30.6%;
--border: 240 3.7% 15.9%;
--input: 240 3.7% 15.9%;
--ring: 240 4.9% 83.9%;
--chart-1: 30 80% 54.9%;
--chart-2: 339.8 74.8% 54.9%;
--chart-3: 219.9 70.2% 50%;
--chart-4: 160 60% 45.1%;
--chart-5: 280 64.7% 60%;
/* Tambo Specific Variables needed for tambo components */
--radius: 0.5rem;
--container: 210 29% 97%;
--backdrop: 210 88% 14% / 0.25;
--muted-backdrop: 210 88% 14% / 0.1;
--panel-left-width: 500px;
--panel-right-width: 500px;
--sidebar-width: 3rem;
}
}
body {
background: var(--background);
color: var(--foreground);
font-family: Arial, Helvetica, sans-serif;
}
```
### tailwind.config.ts for v3
```tsx title="tailwind.config.ts"
import type { Config } from "tailwindcss";
const config: Config = {
  darkMode: "class",
  content: [
    "./pages/**/*.{ts,tsx}",
    "./components/**/*.{ts,tsx}",
    "./app/**/*.{ts,tsx}",
    "./src/**/*.{ts,tsx}",
  ],
  // Your existing theme config is preserved and merged
};
export default config;
```
## Tailwind CSS v4 Configuration
Tailwind v4 uses CSS-first configuration with a different approach:
### Complete globals.css for v4
```css title="app/globals.css"
@import "tailwindcss";
@custom-variant dark (&:is(.dark *));
@theme inline {
/* Tailwind CSS Variables customized with tambo colors */
--color-background: var(--background);
--color-foreground: var(--foreground);
--color-card: var(--card);
--color-card-foreground: var(--card-foreground);
--color-popover: var(--popover);
--color-popover-foreground: var(--popover-foreground);
--color-primary: var(--primary);
--color-primary-foreground: var(--primary-foreground);
--color-secondary: var(--secondary);
--color-secondary-foreground: var(--secondary-foreground);
--color-muted: var(--muted);
--color-muted-foreground: var(--muted-foreground);
--color-accent: var(--accent);
--color-accent-foreground: var(--accent-foreground);
--color-destructive: var(--destructive);
--color-border: var(--border);
--color-input: var(--input);
--color-ring: var(--ring);
--color-chart-1: var(--chart-1);
--color-chart-2: var(--chart-2);
--color-chart-3: var(--chart-3);
--color-chart-4: var(--chart-4);
--color-chart-5: var(--chart-5);
/* Tambo Specific Variables needed for tambo components */
--color-container: var(--container);
--color-backdrop: var(--backdrop);
--color-muted-backdrop: var(--muted-backdrop);
}
:root {
/* Default Tailwind CSS Variables customized with tambo colors */
--background: oklch(1 0 0);
--foreground: oklch(0.14 0 285);
--card: oklch(1 0 0);
--card-foreground: oklch(0.14 0 285);
--popover: oklch(1 0 0);
--popover-foreground: oklch(0.14 0 285);
--primary: oklch(0.31 0.02 281);
--primary-foreground: oklch(0.98 0 0);
--secondary: oklch(0.54 0.027 261);
--secondary-foreground: oklch(1 0 0);
--muted: oklch(0.92 0 260);
--muted-foreground: oklch(0.73 0.022 260);
--accent: oklch(0.97 0 286);
--accent-foreground: oklch(0.21 0 286);
--destructive: oklch(0.64 0.2 25);
--border: oklch(0.93 0 242);
--input: oklch(0.92 0 286);
--ring: oklch(0.14 0 285);
--chart-1: oklch(0.72 0.15 60);
--chart-2: oklch(0.62 0.2 6);
--chart-3: oklch(0.53 0.2 262);
--chart-4: oklch(0.7 0.13 165);
--chart-5: oklch(0.62 0.2 313);
/* Tambo Specific Variables needed for tambo components */
--container: oklch(0.98 0 247);
--backdrop: oklch(0.25 0.07 252 / 0.25);
--muted-backdrop: oklch(0.25 0.07 252 / 0.1);
--radius: 0.5rem;
--panel-left-width: 500px;
--panel-right-width: 500px;
--sidebar-width: 3rem;
}
.dark {
/* Dark Mode Tailwind CSS Variables customized with tambo colors */
--background: oklch(0.145 0 0);
--foreground: oklch(0.985 0 0);
--card: oklch(0.205 0 0);
--card-foreground: oklch(0.985 0 0);
--popover: oklch(0.205 0 0);
--popover-foreground: oklch(0.985 0 0);
--primary: oklch(0.922 0 0);
--primary-foreground: oklch(0.205 0 0);
--secondary: oklch(0.269 0 0);
--secondary-foreground: oklch(0.985 0 0);
--muted: oklch(0.269 0 0);
--muted-foreground: oklch(0.708 0 0);
--accent: oklch(0.269 0 0);
--accent-foreground: oklch(0.985 0 0);
--destructive: oklch(0.704 0.191 22.216);
--border: oklch(1 0 0 / 10%);
--input: oklch(1 0 0 / 15%);
--ring: oklch(0.556 0 0);
--chart-1: oklch(0.72 0.15 60);
--chart-2: oklch(0.62 0.2 6);
--chart-3: oklch(0.53 0.2 262);
--chart-4: oklch(0.7 0.13 165);
--chart-5: oklch(0.62 0.2 313);
/* Tambo Specific Variables needed for tambo components */
--container: oklch(0.98 0 247);
--backdrop: oklch(0.25 0.07 252 / 0.25);
--muted-backdrop: oklch(0.25 0.07 252 / 0.1);
--radius: 0.5rem;
--panel-left-width: 500px;
--panel-right-width: 500px;
--sidebar-width: 3rem;
}
@layer base {
* {
@apply border-border outline-ring/50;
}
body {
@apply bg-background text-foreground;
font-family: Arial, Helvetica, sans-serif;
}
}
```
With Tailwind v4, most configuration is done in CSS using `@theme inline`. The
JavaScript config file is not needed.
## Manual Configuration
If you need to manually configure these files or the automatic setup doesn't work:
### 1. Check Your Tailwind Version
```bash
npm list tailwindcss
# or check package.json
```
### 2. Copy the Appropriate CSS
Choose the v3 or v4 format based on your version and copy the complete CSS above into your `globals.css` file.
### 3. Update Tailwind Config (v3 only)
For v3, ensure your `tailwind.config.ts` includes `darkMode: "class"` and the proper content paths.
### 4. Verify Required Variables
Ensure these Tambo-specific variables are present (a quick check follows the list):
* `--panel-left-width`
* `--panel-right-width`
* `--sidebar-width`
* `--container`
* `--backdrop`
* `--muted-backdrop`
* `--radius`
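A simple `grep` can confirm they exist; adjust the path to wherever your global stylesheet lives:
```bash
grep -E "panel-left-width|panel-right-width|sidebar-width|container|backdrop|muted-backdrop|radius" app/globals.css
```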
## Merging with Existing Styles
If you already have CSS variables defined, you'll want to merge them carefully
rather than replacing your entire file. The CLI automatically preserves
existing variables, but manual setup requires more care.
### If you have existing CSS variables:
1. **Keep your existing variables** that aren't listed in the Tambo configuration
2. **Add missing Tambo variables** from the appropriate version above
3. **Update conflicting variables** if you want to use Tambo's color scheme
4. **Preserve your custom styling** outside of the CSS variables
### If you have existing `@layer base` blocks:
For Tailwind v3, add the Tambo variables inside your existing `@layer base` block rather than creating a duplicate.
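For example, a merged block might look like this, where `--brand` stands in for your existing customizations:
```css
@layer base {
  :root {
    /* Your existing variables stay as-is */
    --brand: 222 89% 55%;
    /* Tambo variables added alongside them */
    --radius: 0.5rem;
    --panel-left-width: 500px;
    --panel-right-width: 500px;
    --sidebar-width: 3rem;
  }
}
```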
## Troubleshooting
### Components Look Broken
**Problem**: Components have no styling or look broken
**Solution**: Check that all CSS variables are present in your `globals.css`
### Dark Mode Not Working
**Problem**: Dark mode styles not applying
**Solution**:
* For v3: Ensure `darkMode: "class"` in `tailwind.config.ts`
* For v4: Check `@custom-variant dark` is present
* Verify `.dark` class variables are defined
### Version Mismatch
**Problem**: Wrong CSS format for your Tailwind version
**Solution**:
* Check your Tailwind version with `npm list tailwindcss`
* Use v3 format (HSL) for Tailwind 3.x
* Use v4 format (OKLCH) for Tailwind 4.x
### Layout Issues
**Problem**: Panels or sidebars have wrong dimensions
**Solution**: Ensure Tambo layout variables (`--panel-left-width`, etc.) are defined and have appropriate values
### Existing Styles Overridden
**Problem**: Your custom CSS variables got replaced
**Solution**: The CLI preserves existing variables, but if manually copying, merge with your existing styles rather than replacing them
## CLI Behavior
### What the CLI Does
* ✅ **Preserves existing styles**: Your custom CSS is kept
* ✅ **Adds only missing variables**: Won't override your existing variables
* ✅ **Backs up files**: Creates `.backup` files before making changes
* ✅ **Shows diffs**: Previews changes before applying them
* ✅ **Asks permission**: Prompts before modifying existing files
### What the CLI Doesn't Do
* ❌ **Override existing variables**: Your customizations are preserved
* ❌ **Change your color scheme**: Only adds missing standard variables
* ❌ **Modify other CSS**: Only touches CSS variable definitions
* ❌ **Break existing config**: Merges with your existing Tailwind config
The CLI is designed to be safe and non-destructive. It preserves your existing
configuration and only adds what's needed for Tambo components to work.
# Global Options
URL: /reference/cli/global-options
All Tambo CLI commands support these global options. You can use them with any command to modify behavior, skip prompts, or handle common scenarios.
## Available Options
### `--version`
Shows the current version of the Tambo CLI.
```bash
npx tambo --version
# Output: 1.2.3
```
### `--yes, -y`
Auto-answers "yes" to all confirmation prompts. Required for non-interactive environments like CI/CD.
**Examples:**
```bash
# Skip all prompts during setup
npx tambo init --yes
# Install components without confirmation
npx tambo add form graph --yes
# Update all components without asking
npx tambo update installed --yes
# Migrate components automatically
npx tambo migrate --yes
# Upgrade in CI/CD (required for non-interactive)
npx tambo upgrade --yes
```
**Use cases:**
* CI/CD pipelines (required for upgrade command)
* Automated deployments
* Docker builds
* Batch operations
* When you're confident about the changes
### `--legacy-peer-deps`
Installs dependencies using npm's `--legacy-peer-deps` flag. This resolves common dependency conflicts.
**Examples:**
```bash
# Install with legacy peer deps
npx tambo init --legacy-peer-deps
# Add components with legacy peer deps
npx tambo add message-thread-full --legacy-peer-deps
# Upgrade project with legacy peer deps
npx tambo upgrade --legacy-peer-deps
```
**When to use:**
* Getting peer dependency warnings
* Working with older React versions
* Complex dependency trees
* Corporate environments with strict package policies
If you see errors like "unable to resolve dependency tree" or peer dependency
warnings, try adding `--legacy-peer-deps` to your command.
### `--prefix=<directory>`
Specifies a custom directory for components instead of the default `components/tambo`.
**Examples:**
```bash
# Install components in src/components/ui
npx tambo add form --prefix=src/components/ui
# List components in custom directory
npx tambo list --prefix=src/components/ui
# Update components in custom location
npx tambo update installed --prefix=src/components/custom
# Migrate from custom source to custom destination
npx tambo migrate --prefix=src/components/tambo
```
**Common prefix patterns:**
* `src/components/ui` - Traditional UI components directory
* `src/components/tambo` - Dedicated Tambo directory in src
* `app/components/ui` - App router components
* `lib/components` - Library-style organization
### `--dry-run`
Preview changes before applying them. Currently only available for the `migrate` command.
**Examples:**
```bash
# Preview migration changes
npx tambo migrate --dry-run
```
**Output example:**
```
🔍 Dry run mode - no changes will be made
The following changes would be applied:
📁 Move: components/ui/form.tsx → components/tambo/form.tsx
📁 Move: components/ui/graph.tsx → components/tambo/graph.tsx
📝 Update: lib/tambo.ts (import paths)
Run without --dry-run to apply these changes.
```
## Combining Options
You can combine multiple options in a single command:
```bash
# Install components with custom prefix, skip prompts, and handle conflicts
npx tambo add form graph --prefix=src/components/ui --yes --legacy-peer-deps
# Upgrade with custom prefix
npx tambo upgrade --yes --prefix=src/components/ui
# Migrate automatically with custom paths
npx tambo migrate --yes --prefix=src/components/custom
```
## Command-Specific Options
Some commands have additional options beyond these global ones:
### `create-app` specific options
```bash
# Initialize git repository
npx tambo create-app my-app --init-git
# Use specific template
npx tambo create-app my-app --template=standard
```
### `add` specific options
```bash
# Install multiple components
npx tambo add form graph canvas-space
```
## Best Practices
### For Development
```bash
# Safe exploration - preview migration first
npx tambo migrate --dry-run
# Quick iterations
npx tambo add form --yes
npx tambo update form --yes
```
## Troubleshooting
### Common Issues
**Issue: Command not found**
```bash
# Check CLI version
npx tambo --version
# Update to latest
npm install -g @tambo-ai/cli@latest
```
**Issue: Permission errors**
```bash
# Use npx instead of global install
npx tambo init --yes
```
**Issue: Dependency conflicts**
```bash
# Use legacy peer deps
npx tambo add form --legacy-peer-deps
```
**Issue: Wrong directory**
```bash
# Check current components
npx tambo list
# Use correct prefix
npx tambo list --prefix=src/components/ui
```
## Exit Codes
The Tambo CLI uses standard exit codes for shell integration and CI/CD:
| Exit Code | Meaning | Common Causes |
| --------- | ------- | --------------------------------------- |
| `0` | Success | Command completed successfully |
| `1` | Failure | Command failed, error details in output |
**Usage in scripts:**
```bash
# Check if command succeeded
if npx tambo add form --yes; then
echo "Component installed successfully"
else
echo "Component installation failed"
exit 1
fi
# CI/CD example
npx tambo upgrade --yes || exit 1
npm run build || exit 1
```
**Common failure scenarios:**
* Missing required files (package.json, tsconfig.json)
* Invalid component names
* Network errors during installation
* User cancellation (Ctrl+C or answering "No" to prompts)
* Non-interactive environment without `--yes` flag
* File system permission errors
# Tambo CLI overview
URL: /reference/cli
import { Card, Cards } from "fumadocs-ui/components/card";
The Tambo CLI is a tool to help you get Tambo apps set up quickly.
Here you'll find a description of each command available in the Tambo CLI.
## Install the Tambo CLI
The Tambo CLI is available as an npm package and can be used with `npx`:
```bash
npx tambo
```
## Tambo CLI quickstart
For new projects, the fastest way to get started is:
```bash
# Create a new Tambo app
npm create tambo-app@latest my-tambo-app
# Or add to existing project
npx tambo full-send
```
## Tambo CLI command categories
### Project Setup
* [`create-app`](/reference/cli/commands/create-app) - Create a new Tambo app from template
* [`init`](/reference/cli/commands/init) - Initialize Tambo in existing project
* [`full-send`](/reference/cli/commands/full-send) - Complete setup with components
### Component Management
* [`add`](/reference/cli/commands/add) - Add Tambo components to your project
* [`list`](/reference/cli/commands/list) - List installed components
* [`update`](/reference/cli/commands/update) - Update components to latest versions
### Project Maintenance
* [`upgrade`](/reference/cli/commands/upgrade) - Upgrade entire project (packages + components)
* [`migrate`](/reference/cli/commands/migrate) - Migrate from legacy component structure
## Get help with the Tambo CLI
* Check out our [common workflows](/reference/cli/workflows) for typical usage patterns
* See [global options](/reference/cli/global-options) available for all commands
* Browse individual command documentation in the Commands section
# Common Workflows
URL: /reference/cli/workflows
This guide covers the most common workflows you'll use with the Tambo CLI, organized by scenario and use case.
## Getting Started Workflows
### New Project Setup
For brand new projects, use the template approach:
```bash
# Create new project with template
npm create tambo-app@latest my-tambo-app
cd my-tambo-app
# Complete setup with API key
npx tambo init
# Start development
npm run dev
```
### Adding to Existing Project
For existing React/Next.js projects:
```bash
# Quick setup with components
npx tambo full-send
# Or step-by-step approach
npx tambo init
npx tambo add message-thread-full
```
After running `init` or `full-send`, make sure to add your API key to
`.env.local`: `NEXT_PUBLIC_TAMBO_API_KEY=your_api_key_here`
## Component Management Workflows
### Adding Components Strategically
Start with core components, then add specialized ones:
```bash
# 1. Start with a message interface
npx tambo add message-thread-collapsible
# 2. Add form capabilities
npx tambo add form
# 3. Add visualization components
npx tambo add graph canvas-space
# 4. Add advanced interactions
npx tambo add control-bar
```
### Checking What's Installed
```bash
# See what components you have
npx tambo list
# Check if updates are available
npx tambo update --dry-run
```
## Development Workflows
### Building a Chat Interface
```bash
# 1. Setup project
npx tambo init
# 2. Add chat component
npx tambo add message-thread-full
# 3. Configure in your app
# Add TamboProvider to layout.tsx
# Import and use MessageThreadFull component
```
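Step 3 might look roughly like this (a sketch; the import path assumes the default `components/tambo` install location, and the provider should render in a client component):
```tsx
"use client"; // TamboProvider uses React context, so it needs a client component

import { TamboProvider } from "@tambo-ai/react";
import { MessageThreadFull } from "@/components/tambo/message-thread-full";

export function ChatProviders({ children }: { children: React.ReactNode }) {
  return (
    <TamboProvider apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}>
      {children}
      <MessageThreadFull />
    </TamboProvider>
  );
}
```
Render `ChatProviders` from your `layout.tsx` so the chat thread is available across the app.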
### Building a Form Experience
```bash
# 1. Add form components
npx tambo add form input-fields
# 2. Register form-related tools in lib/tambo.ts
# 3. Create form validation components
```
### Building a Data Visualization App
```bash
# 1. Add visualization components
npx tambo add graph canvas-space
# 2. Add control interface
npx tambo add control-bar
# 3. Register data tools for fetching/processing
```
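For step 3, a hypothetical data tool in `lib/tambo.ts` might look like this (the endpoint and names are placeholders):
```tsx
import { TamboTool } from "@tambo-ai/react";
import { z } from "zod";

// Hypothetical fetcher - replace with your real data source
const fetchMetrics = async (range: string) => {
  const res = await fetch(`/api/metrics?range=${encodeURIComponent(range)}`);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return await res.json();
};

export const tools: TamboTool[] = [
  {
    name: "fetch_metrics",
    description: "Fetch time-series metrics to display in the graph component",
    tool: fetchMetrics,
    inputSchema: z.string().describe("Time range, e.g. '7d' or '30d'"),
    outputSchema: z.array(z.object({ name: z.string(), value: z.number() })),
  },
];
```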
## Maintenance Workflows
### Keeping Everything Updated
```bash
# Option 1: Update everything at once
npx tambo upgrade
# Option 2: Update selectively
npx tambo update installed # All components
npx tambo update form graph # Specific components
```
### Migrating from Legacy Structure
If you installed Tambo components before the directory structure change (more info [here](/reference/cli/commands/migrate)):
```bash
# 1. Check what needs migration
npx tambo migrate --dry-run
# 2. Perform migration
npx tambo migrate
# 3. Test everything still works
npm run dev
```
## Troubleshooting Workflows
### Dependency Conflicts
If you encounter peer dependency issues:
```bash
# Use legacy peer deps flag
npx tambo add message-thread-full --legacy-peer-deps
npx tambo upgrade --legacy-peer-deps
```
### Component Not Working
```bash
# 1. Check if component is properly installed
npx tambo list
# 2. Update to latest version
npx tambo update
# 3. Check your TamboProvider setup
# Make sure API key is set
# Verify component is imported correctly
```
### Clean Reinstall
```bash
# 1. Remove existing components
rm -rf components/tambo/*
# 2. Reinstall fresh
npx tambo add message-thread-full form graph
# 3. Update configuration
npx tambo upgrade
```
## Quick Reference
### Most Common Commands
```bash
# Quick setup
npx tambo full-send
# Add components
npx tambo add message-thread-full form
# Update everything
npx tambo upgrade
# Check status
npx tambo list
```
### Flags You'll Use Often
For detailed information about all available flags and options, see the [Global Options](/reference/cli/global-options) page.
Quick reference:
* `--yes` - Skip confirmation prompts
* `--legacy-peer-deps` - Fix dependency conflicts
* `--prefix=` - Custom component directory
* `--dry-run` - Preview changes before applying
# Anthropic
URL: /reference/llm-providers/anthropic
Anthropic's Claude models provide state-of-the-art reasoning, coding, and conversational capabilities. This page covers all Claude models available in Tambo, including the Claude 4.5, 4.1, 4, 3.7, and 3.5 families.
## Anthropic Claude model overview
Claude models are known for their strong reasoning abilities, excellent coding capabilities, and nuanced understanding of context. Anthropic has released several model families, each optimized for different use cases:
* **Claude 4.5 Family**: Latest generation with exceptional coding and reasoning
* **Claude 4.1 Family**: Most powerful models with highest intelligence
* **Claude 4 Family**: High-performance models with exceptional reasoning
* **Claude 3.7 Family**: Fast or step-by-step thinking for complex tasks
* **Claude 3.5 Family**: Fast and affordable for real-time tasks
All Claude models support a 200,000 token context window, allowing for extensive conversations and document processing.
## Available Models
### Claude 4.5 Family
#### claude-opus-4-5
**Status:** Tested
**API Name:** `claude-opus-4-5-20251101`
**Context Window:** 200,000 tokens
**Provider Docs:** [Anthropic Model Overview](https://docs.anthropic.com/en/docs/about-claude/models/overview)
Claude Opus 4.5 is Anthropic's premium model combining maximum intelligence with practical performance. It represents the latest evolution of the Opus line, offering exceptional capabilities for the most demanding tasks.
**Best for:**
* Premium coding and development tasks
* Complex reasoning and problem-solving
* Advanced research and analysis
* Tasks requiring maximum intelligence with practical performance
**Notes:**
* Latest premium model in the Claude 4.5 family
* Combines maximum intelligence with practical performance
* Top choice for critical applications requiring the best capabilities
* Excellent balance of intelligence and usability
#### claude-sonnet-4-5
**Status:** Tested
**API Name:** `claude-sonnet-4-5-20250929`
**Context Window:** 200,000 tokens
**Provider Docs:** [Anthropic Model Overview](https://docs.anthropic.com/en/docs/about-claude/models/overview)
Claude 4.5 Sonnet is Anthropic's best model for coding and reasoning. It excels at complex problem-solving, code generation, and multi-step reasoning tasks.
**Best for:**
* Advanced coding tasks and code review
* Complex reasoning and problem-solving
* Technical documentation generation
* Multi-step analysis and planning
**Notes:**
* Top choice for software development workflows
* Excellent balance of speed and capability
* Strong performance on technical tasks
#### claude-haiku-4-5
**Status:** Tested
**API Name:** `claude-haiku-4-5-20251001`
**Context Window:** 200,000 tokens
**Provider Docs:** [Anthropic Model Overview](https://docs.anthropic.com/en/docs/about-claude/models/overview)
Claude 4.5 Haiku is Anthropic's fastest and most intelligent model in the Haiku line. It provides rapid responses while maintaining high quality.
**Best for:**
* Real-time conversational AI
* Fast code completion
* Quick analysis tasks
* High-throughput applications
**Notes:**
* Optimized for speed without sacrificing quality
* Ideal for production environments requiring low latency
* Good balance of cost and performance
### Claude 4.1 Family
#### claude-opus-4-1
**Status:** Tested
**API Name:** `claude-opus-4-1-20250805`
**Context Window:** 200,000 tokens
**Provider Docs:** [Anthropic Model Overview](https://docs.anthropic.com/en/docs/about-claude/models/overview)
Claude Opus 4.1 was Anthropic's most powerful model at release, offering one of the highest levels of intelligence and capability available. It represents the cutting edge of the Claude 4.1 generation.
**Best for:**
* Most complex reasoning tasks
* Advanced research and analysis
* Critical decision support
* Tasks requiring maximum intelligence
**Notes:**
* Highest capability model in the Claude 4.1 family
* Best choice when accuracy and reasoning depth are paramount
* Premium pricing reflects advanced capabilities
### Claude 4 Family
#### claude-opus-4
**Status:** Tested
**API Name:** `claude-opus-4-20250514`
**Context Window:** 200,000 tokens
**Provider Docs:** [Anthropic Model Overview](https://docs.anthropic.com/en/docs/about-claude/models/overview)
Claude Opus 4 has very high intelligence and capability. It is a good model for coding and reasoning tasks that require deep understanding.
**Best for:**
* Advanced coding projects
* Complex reasoning tasks
* Detailed analysis and synthesis
* Technical problem-solving
**Notes:**
* Strong performance across diverse tasks
* Excellent for professional development work
* Good alternative to Claude Opus 4.1 for cost-sensitive projects
#### claude-sonnet-4
**Status:** Tested
**API Name:** `claude-sonnet-4-20250514`
**Context Window:** 200,000 tokens
**Provider Docs:** [Anthropic Model Overview](https://docs.anthropic.com/en/docs/about-claude/models/overview)
Claude 4 Sonnet is Anthropic's high-performance model with exceptional reasoning and efficiency. It strikes an excellent balance between capability and speed.
**Best for:**
* Production applications
* Balanced reasoning and speed requirements
* General-purpose development tasks
* Cost-effective complex tasks
**Notes:**
* Excellent efficiency for complex tasks
* Good choice for production deployments
* Strong performance across multiple domains
### Claude 3.7 Family
#### claude-3-7-sonnet
**Status:** Tested
**API Name:** `claude-3-7-sonnet-20250219`
**Context Window:** 200,000 tokens
**Provider Docs:** [Anthropic Model Overview](https://docs.anthropic.com/en/docs/about-claude/models/overview)
Claude 3.7 Sonnet was Anthropic's smartest model at release, with fast or step-by-step thinking capabilities. Great for coding and front-end development tasks.
**Best for:**
* Front-end development
* UI/UX implementation
* Web application development
* Tasks requiring iterative refinement
**Notes:**
* Supports both fast and deliberate thinking modes
* Excellent for web development workflows
* Strong understanding of modern frameworks
### Claude 3.5 Family
#### claude-3-5-haiku
**Status:** Known Issues
**API Name:** `claude-3-5-haiku-20241022`
**Context Window:** 200,000 tokens
**Provider Docs:** [Anthropic Model Overview](https://docs.anthropic.com/en/docs/about-claude/models/overview)
Claude 3.5 Haiku is Anthropic's fastest and most affordable model. Great for real-time tasks like chatbots, coding, and data extraction.
**Best for:**
* Real-time chatbots
* Data extraction and parsing
* Quick coding assistance
* High-volume, cost-sensitive applications
**Known Issues:**
* May fail to fully render components, even when receiving the correct data
* Example: When rendering a graph component, it may leave it in a loading state without streaming the data into props
* Use with caution for complex component rendering tasks
**Notes:**
* Most cost-effective Claude model
* Optimized for speed and throughput
* Best for simpler tasks where the known issues don't impact functionality
Claude 3.5 Haiku has known issues with component rendering. It may fail to
fully render components or leave them in loading states. For production
applications requiring reliable component rendering, consider using [Claude
4.5 Haiku](#claude-haiku-4-5) or higher models.
## Configuration
All Claude models are configured through your project settings in the Tambo dashboard:
1. Navigate to your project in the dashboard
2. Go to **Settings** → **LLM Providers**
3. Select **Anthropic** as your provider
4. Choose your desired Claude model from the dropdown
5. Configure any [additional parameters](/guides/setup-project/llm-provider) (temperature, maxOutputTokens, etc.)
6. Click **Save** to apply the configuration
Anthropic models support the standard LLM parameters available in Tambo. For detailed parameter configuration, see [Custom LLM Parameters](/guides/setup-project/llm-provider).
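For example, you might enter the following under **Custom LLM Parameters** (the values here are illustrative, not recommendations):
```json
{
  "temperature": 0.7,
  "maxOutputTokens": 2048
}
```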
## Model Selection Guide
**For coding and reasoning:**
* **Premium**: [Claude Opus 4.5](#claude-opus-4-5), [Claude Opus 4.1](#claude-opus-4-1), or [Claude Sonnet 4.5](#claude-sonnet-4-5)
* **Balanced**: [Claude Sonnet 4](#claude-sonnet-4) or [Claude Opus 4](#claude-opus-4)
* **Fast**: [Claude Haiku 4.5](#claude-haiku-4-5)
**For conversational AI:**
* **High-quality**: [Claude Sonnet 4.5](#claude-sonnet-4-5) or [Claude 3.7 Sonnet](#claude-3-7-sonnet)
* **Real-time**: [Claude Haiku 4.5](#claude-haiku-4-5)
* **Cost-effective**: [Claude 3.5 Haiku](#claude-3-5-haiku) (with [known issue caveats](#claude-3-5-haiku))
**For complex analysis:**
* **Maximum capability**: [Claude Opus 4.5](#claude-opus-4-5) or [Claude Opus 4.1](#claude-opus-4-1)
* **High performance**: [Claude Opus 4](#claude-opus-4)
* **Efficient**: [Claude Sonnet 4](#claude-sonnet-4)
## See Also
* [Labels](/reference/llm-providers/labels) - Understanding model status labels and observed behaviors
* [Custom LLM Parameters](/guides/setup-project/llm-provider) - Configuring model parameters for fine-tuned responses
# Cerebras
URL: /reference/llm-providers/cerebras
Cerebras provides extremely fast inference (2,000+ tokens/second) powered by their Wafer-Scale Engine hardware. This page covers all Cerebras models available in Tambo, including Llama, Qwen, and other open-weight model families.
## Cerebras model overview
Cerebras hosts a variety of open-source models optimized for their specialized hardware. Available model families include:
* **Llama Family**: Meta's Llama 3.1 and 3.3 models with up to 128K context
* **Qwen Family**: Alibaba's Qwen 3 models for multilingual and structured outputs
* **Other Models**: GPT-OSS and GLM models for various use cases
All Cerebras models deliver ultra-fast inference at 2,000+ tokens/sec. Context windows are limited to 8,192 tokens on the free tier.
## Supported Models
### Llama Family
#### llama3.1-8b
**Status:** Untested
**API Name:** `llama3.1-8b`
**Context Window:** 128,000 tokens (8,192 on free tier)
**Provider Docs:** [Cerebras Inference Docs](https://inference-docs.cerebras.ai/)
Meta's Llama 3.1 8B model running on Cerebras hardware. Ideal for fast inference at 2,000+ tokens/sec, best suited for simple tasks and cost-effective deployments.
**Best for:**
* Cost-sensitive applications
* Real-time chat and responses
* Simple text generation tasks
* High-volume, low-complexity workloads
**Notes:**
* Not yet validated on common Tambo tasks
* Smallest and fastest Llama model on Cerebras
* Good starting point for testing
#### llama-3.3-70b
**Status:** Untested
**API Name:** `llama-3.3-70b`
**Context Window:** 128,000 tokens (8,192 on free tier)
**Provider Docs:** [Cerebras Inference Docs](https://inference-docs.cerebras.ai/)
Meta's Llama 3.3 70B model on Cerebras, offering balanced performance with ultra-fast inference. Suitable for complex reasoning and multi-step tasks.
**Best for:**
* Complex reasoning tasks
* Multi-step problem solving
* Production workloads requiring higher accuracy
* Tasks requiring stronger language understanding
**Notes:**
* Not yet validated on common Tambo tasks
* Stronger capabilities than the 8B variant
* Maintains fast inference speeds
### Qwen Family
#### qwen-3-32b
**Status:** Untested
**API Name:** `qwen-3-32b`
**Context Window:** 32,768 tokens (8,192 on free tier)
**Provider Docs:** [Cerebras Inference Docs](https://inference-docs.cerebras.ai/)
Alibaba's Qwen 3 32B model with hybrid reasoning capabilities on Cerebras. Good for multilingual tasks and structured outputs.
**Best for:**
* Multilingual applications
* Structured output generation
* Hybrid reasoning tasks
* JSON and code generation
**Notes:**
* Not yet validated on common Tambo tasks
* Excels at multilingual tasks and structured outputs
* Good balance of capability and speed
#### qwen-3-235b-a22b-instruct-2507
**Status:** Untested
**API Name:** `qwen-3-235b-a22b-instruct-2507`
**Context Window:** 32,768 tokens (8,192 on free tier)
**Provider Docs:** [Cerebras Inference Docs](https://inference-docs.cerebras.ai/)
Alibaba's large-scale Qwen 3 model (235B total parameters, with 22B active per token in its mixture-of-experts design) optimized for instruction following on Cerebras.
**Best for:**
* Complex instruction following
* Tasks requiring maximum model capability
* Advanced reasoning and analysis
* Professional content generation
**Notes:**
* Not yet validated on common Tambo tasks
* One of the largest models available on Cerebras
* Best for demanding tasks requiring maximum capability
### Other Models
#### gpt-oss-120b
**Status:** Untested
**API Name:** `gpt-oss-120b`
**Context Window:** 8,192 tokens
**Provider Docs:** [Cerebras Inference Docs](https://inference-docs.cerebras.ai/)
OpenAI's open-weight 120B-parameter model running on Cerebras. Powerful capabilities for demanding applications with Cerebras's fast inference.
**Best for:**
* General-purpose high-performance tasks
* Applications requiring strong reasoning
* Complex content generation
* Tasks benefiting from larger model scale
**Notes:**
* Not yet validated on common Tambo tasks
* Based on OpenAI's open-weight release
* Strong general-purpose capabilities
#### zai-glm-4.6
**Status:** Untested
**API Name:** `zai-glm-4.6`
**Context Window:** 128,000 tokens (8,192 on free tier)
**Provider Docs:** [Cerebras Inference Docs](https://inference-docs.cerebras.ai/)
Zhipu AI's GLM 4.6 model on Cerebras with fast inference capabilities.
**Best for:**
* General text generation
* Conversational AI
* Content creation
* Bilingual (Chinese/English) tasks
**Notes:**
* Not yet validated on common Tambo tasks
* Strong bilingual capabilities
* Good for Chinese/English applications
#### zai-glm-4.7
**Status:** Untested
**API Name:** `zai-glm-4.7`
**Context Window:** 128,000 tokens (8,192 on free tier)
**Provider Docs:** [Cerebras Inference Docs](https://inference-docs.cerebras.ai/)
Zhipu AI's GLM 4.7 model on Cerebras, the latest iteration with improved capabilities.
**Best for:**
* General text generation
* Conversational AI
* Content creation
* Bilingual (Chinese/English) tasks
**Notes:**
* Not yet validated on common Tambo tasks
* Latest GLM iteration with improvements over 4.6
* Strong bilingual capabilities
All Cerebras models are currently **Untested** in Tambo. They are newly added
to the platform. Test them in your specific context before production
deployment. See the [Labels](/reference/llm-providers/labels) page for more
information about model status labels.
## Configuration
All Cerebras models are configured through your project settings in the Tambo dashboard:
1. Navigate to your project in the dashboard
2. Go to **Settings** → **LLM Providers**
3. Select **Cerebras** as your provider
4. Enter your Cerebras API key (get one from [Cerebras Cloud](https://cloud.cerebras.ai/))
5. Choose your desired model from the dropdown
6. Configure any [additional parameters](/guides/setup-project/llm-provider) (temperature, maxOutputTokens, etc.)
7. Click **Save** to apply the configuration
Cerebras models support the standard LLM parameters available in Tambo. For detailed parameter configuration, see [Custom LLM Parameters](/guides/setup-project/llm-provider).
## Model Selection Guide
**For speed and cost efficiency:**
* **Fastest**: [Llama 3.1 8B](#llama31-8b)
* **Balanced**: [Llama 3.3 70B](#llama-33-70b) or [Qwen 3 32B](#qwen-3-32b)
**For complex tasks:**
* **Maximum capability**: [Qwen 3 235B](#qwen-3-235b-a22b-instruct-2507) or [GPT-OSS 120B](#gpt-oss-120b)
* **Multilingual**: [Qwen 3 32B](#qwen-3-32b) or [Qwen 3 235B](#qwen-3-235b-a22b-instruct-2507)
**For bilingual (Chinese/English):**
* **Latest**: [GLM 4.7](#zai-glm-47)
* **Stable**: [GLM 4.6](#zai-glm-46)
## See Also
* [Labels](/reference/llm-providers/labels) - Understanding model status labels and observed behaviors
* [Custom LLM Parameters](/guides/setup-project/llm-provider) - Configuring model parameters for fine-tuned responses
# Google Gemini Models
URL: /reference/llm-providers/google
Google's Gemini models provide powerful multimodal capabilities with advanced reasoning features. Tambo supports Gemini models across three generations: Gemini 3, Gemini 2.5, and Gemini 2.0.
Gemini models may occasionally resist rendering as requested. Sometimes they
complete the request, but behavior can be inconsistent. Try clarifying
instructions (e.g., "Return a bulleted list only"). Outputs may have
formatting quirks—be cautious when structure matters.
## Model Families
### Gemini 3 Family
The latest generation of Gemini models with enhanced multimodal and reasoning capabilities.
#### gemini-3-pro-preview
**Status:** Tested
Google's most powerful model as of November 2025, best for multimodal understanding and agentic use cases.
* **API Name:** `gemini-3-pro-preview`
* **Context Window:** 1,048,576 tokens
* **Provider Documentation:** [Gemini 3.0 Pro](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/3-0-pro)
**Best For:**
* Complex multimodal understanding tasks
* Agentic workflows requiring reasoning
* Advanced problem-solving scenarios
**Notes:**
* Expected to have improved performance over 2.5 models
* Supports reasoning via [thinking configuration](#reasoning-configuration)
### Gemini 2.5 Family
Advanced reasoning models with extended thinking capabilities.
#### gemini-2.5-pro
**Status:** Known Issues
Gemini 2.5 Pro is Google's most advanced reasoning model, capable of solving complex problems.
* **API Name:** `gemini-2.5-pro`
* **Context Window:** 1,048,576 tokens
* **Provider Documentation:** [Gemini 2.5 Pro](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-5-pro)
**Best For:**
* Complex reasoning tasks
* Multi-step problem-solving
* Tasks requiring deep analysis
**Notes:**
* May occasionally resist rendering as requested
* Behavior can be inconsistent with formatting
* Supports reasoning via [thinking configuration](#reasoning-configuration)
#### gemini-2.5-flash
**Status:** Known Issues
Gemini 2.5 Flash is Google's best model in terms of price and performance, offering well-rounded capabilities.
* **API Name:** `gemini-2.5-flash`
* **Context Window:** 1,048,576 tokens
* **Provider Documentation:** [Gemini 2.5 Flash](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-5-flash)
**Best For:**
* Production workloads requiring speed
* Cost-effective reasoning tasks
* Balanced performance across various use cases
**Notes:**
* Fast and efficient for production use
* May have formatting quirks occasionally
* Supports reasoning via [thinking configuration](#reasoning-configuration)
### Gemini 2.0 Family
Next-generation features designed for the agentic era with superior speed and built-in tool use.
#### gemini-2.0-flash
**Status:** Known Issues
Gemini 2.0 Flash delivers next-generation features and improved capabilities designed for the agentic era, including superior speed, built-in tool use, multimodal generation, and a 1M token context window.
* **API Name:** `gemini-2.0-flash`
* **Context Window:** 1,048,576 tokens
* **Provider Documentation:** [Gemini 2.0 Flash](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-0-flash)
**Best For:**
* Agentic workflows with built-in tool use
* Applications requiring superior speed
* Multimodal generation tasks
**Notes:**
* Optimized for the agentic era
* May occasionally have inconsistent rendering behavior
* Strong tool-calling capabilities
#### gemini-2.0-flash-lite
**Status:** Known Issues
Gemini 2.0 Flash Lite is a model optimized for cost efficiency and low latency.
* **API Name:** `gemini-2.0-flash-lite`
* **Context Window:** 1,048,576 tokens
* **Provider Documentation:** [Gemini 2.0 Flash Lite](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-0-flash-lite)
**Best For:**
* High-volume applications
* Cost-sensitive deployments
* Low-latency requirements
**Notes:**
* Most cost-efficient Gemini model
* Optimized for speed over capability
* May have formatting quirks
## Provider-Specific Parameters
### Reasoning Configuration
Gemini models support reasoning capabilities through a thinking configuration. All Gemini models can use these parameters to control reasoning behavior.
**thinkingConfig**
Configure thinking behavior with a JSON object:
| Field | Type | Description |
| ----------------- | ------- | --------------------------------------- |
| `thinkingBudget` | number | Token budget allocated for thinking |
| `includeThoughts` | boolean | Whether to include thinking in response |
**Example Configuration:**
```json
{
"thinkingBudget": 5000,
"includeThoughts": true
}
```
**Recommended Settings:**
* **Quick reasoning (1000-2000 tokens)**: Simple tasks requiring minimal thinking
* **Standard reasoning (3000-5000 tokens)**: Most use cases (recommended)
* **Extended reasoning (7000+ tokens)**: Complex problems requiring deep analysis
### Configuring in the Dashboard
1. Navigate to your project in the dashboard
2. Go to **Settings** → **LLM Providers**
3. Select Google Gemini as your provider
4. Choose your model
5. Under [**Custom LLM Parameters**](/guides/setup-project/llm-provider), click **+ thinkingConfig**
6. Enter your configuration as a JSON object:
```json
{
"thinkingBudget": 5000,
"includeThoughts": true
}
```
7. Click **Save** to apply the configuration
When you select a Gemini reasoning model, the dashboard automatically shows
[**thinkingConfig**](#reasoning-configuration) as a suggested parameter. Just
click it to add!
## Best Practices
### Model Selection
* [**Gemini 3.0 Pro Preview**](#gemini-3-pro-preview): Use for cutting-edge multimodal and agentic tasks
* [**Gemini 2.5 Pro**](#gemini-2-5-pro): Best for complex reasoning requiring extended thinking
* [**Gemini 2.5 Flash**](#gemini-2-5-flash): Recommended for production workloads needing speed and cost efficiency
* [**Gemini 2.0 Flash**](#gemini-2-0-flash): Choose for agentic workflows with strong tool-calling needs
* [**Gemini 2.0 Flash Lite**](#gemini-2-0-flash-lite): Select for high-volume, cost-sensitive applications
### Performance Optimization
* Start with [Gemini 2.5 Flash](#gemini-2-5-flash) for balanced performance
* Use lower [thinking budgets](#reasoning-configuration) for simple tasks to reduce latency
* Monitor token usage when using [reasoning features](#reasoning-configuration)
* Test formatting requirements carefully due to [known inconsistencies](#model-families)
### Cost Considerations
* [Gemini 2.0 Flash Lite](#gemini-2-0-flash-lite) offers the best cost efficiency
* Reasoning tokens ([thinking budget](#reasoning-configuration)) are billed separately
* Balance [thinking budget](#reasoning-configuration) with task complexity
* Consider caching for repeated queries with large context windows
## Troubleshooting
**Inconsistent rendering behavior?**
* Try clarifying instructions more explicitly
* Use specific formatting directives (e.g., "Return a bulleted list only")
* Test with different prompt phrasings
* Consider using a tested OpenAI model for production-critical formatting
**Reasoning not appearing in responses?**
* Verify [`thinkingConfig`](#reasoning-configuration) is added in your [dashboard settings](#configuring-in-the-dashboard)
* Ensure `includeThoughts` is set to `true`
* Check that you've saved your [configuration](#configuring-in-the-dashboard)
* Try increasing the `thinkingBudget` value
**Model performance issues?**
* Lower the `thinkingBudget` for faster responses
* Use [Gemini 2.0 Flash Lite](#gemini-2-0-flash-lite) for speed-critical applications
* Consider [Gemini 2.5 Flash](#gemini-2-5-flash) for balanced performance
* Monitor your context window usage
## See Also
* [Labels](/reference/llm-providers/labels) - Understanding model status labels and observed behaviors
* [Custom LLM Parameters](/guides/setup-project/llm-provider) - Configure additional model parameters
* [Reasoning Models](/reference/llm-providers/reasoning-models) - Comprehensive guide to reasoning capabilities
# Groq
URL: /reference/llm-providers/groq
Groq provides ultra-fast inference for Meta's Llama models, delivering high-throughput AI capabilities for demanding applications. Groq's specialized hardware accelerates model performance, enabling speeds of 400+ tokens/sec for real-time use cases.
## Groq Llama model overview
Groq hosts Meta's latest Llama models, including the new Llama 4 family. These models excel at diverse NLP tasks, from summarization and reasoning to multilingual and multimodal applications, all powered by Groq's high-performance infrastructure.
**Provider-specific features:**
* Ultra-fast inference (400+ tokens/sec)
* Large context windows (128K tokens)
* Cost-effective pricing
* Specialized hardware acceleration
## Supported Models
Groq offers 4 Llama models across different generations, each optimized for specific use cases.
### Llama 4 Family
#### llama-4-scout-17b-16e-instruct
**Status:** Untested
**API Name:** `meta-llama/llama-4-scout-17b-16e-instruct`
**Context Window:** 128K tokens
**Provider Docs:** [Groq Llama 4 Announcement](https://groq.com/blog/llama-4-now-live-on-groq-build-fast-at-the-lowest-cost-without-compromise)
**Description:**
Meta's Llama 4 Scout model (17Bx16E) is ideal for summarization, reasoning, and code generation. Runs at 460+ tokens/sec on Groq's infrastructure.
**Best For:**
* Code generation and analysis
* Text summarization
* Multi-step reasoning tasks
* Real-time applications requiring high throughput
**Notes:**
Not yet validated on common Tambo tasks. This model is newly released as of November 2025. Use with caution and test in your specific context.
#### llama-4-maverick-17b-128e-instruct
**Status:** Untested
**API Name:** `meta-llama/llama-4-maverick-17b-128e-instruct`
**Context Window:** 128K tokens
**Provider Docs:** [Groq Llama 4 Announcement](https://groq.com/blog/llama-4-now-live-on-groq-build-fast-at-the-lowest-cost-without-compromise)
**Description:**
Meta's Llama 4 Maverick model (17Bx128E) is optimized for multilingual and multimodal tasks, making it ideal for assistants, chat applications, and creative use cases.
**Best For:**
* Multilingual applications
* Conversational AI and chatbots
* Creative writing and content generation
* Assistant and agent implementations
**Notes:**
Not yet validated on common Tambo tasks. This model is newly released as of November 2025 with enhanced multimodal capabilities. Use with caution and test in your specific context.
### Llama 3.3 Family
#### llama-3.3-70b-versatile
**Status:** Untested
**API Name:** `llama-3.3-70b-versatile`
**Context Window:** 128K tokens
**Provider Docs:** [Groq Llama 3.3 Documentation](https://console.groq.com/docs/model/llama-3.3-70b-versatile)
**Description:**
Llama 3.3 70B Versatile is Meta's powerful multilingual model with 70B parameters, optimized for diverse NLP tasks and delivering strong performance across a wide range of applications.
**Best For:**
* Complex multilingual tasks
* General-purpose NLP applications
* Tasks requiring strong reasoning
* Production workloads requiring reliability
**Notes:**
Not yet validated on common Tambo tasks. With 70B parameters, this model offers strong capabilities but may have higher latency than smaller variants.
### Llama 3.1 Family
#### llama-3.1-8b-instant
**Status:** Tested
**API Name:** `llama-3.1-8b-instant`
**Context Window:** 128K tokens
**Provider Docs:** [Groq Llama 3.1 Documentation](https://console.groq.com/docs/model/llama-3.1-8b-instant)
**Description:**
Llama 3.1 8B on Groq delivers fast, high-quality responses for real-time tasks. Supports function calling, JSON output, and 128K context at low cost, making it ideal for cost-conscious applications.
**Best For:**
* Real-time chat applications
* Cost-sensitive production deployments
* Function calling and tool use
* JSON-structured outputs
* High-volume applications
**Notes:**
This model has been tested with Tambo component generation tasks and shows issues with JSON escaping in component attributes and component tag syntax. While it's the most cost-effective option in Groq's lineup, it will require careful validation and potentially post-processing for component generation use cases. See below for details.
**Known Shortcomings:**
Based on testing with component generation tasks, the following issues have been observed:
* **JSON Escaping in Component Attributes**: The `fields` attribute may contain unescaped JSON strings, which causes parsing errors because the inner quotes aren't escaped (e.g., `fields="[{"id":"text_field","type":"text",...}]"`)
* **Verbose Explanatory Text**: The model tends to include unnecessary explanatory text before and after component tags (e.g., "The form component is now displayed below" or "Your form now includes an email input field"). While not breaking, this adds noise to responses.
* **Invalid Component Syntax**: Sometimes emits non-standard XML-like component tag syntax that won't render properly
* **Streaming Errors**: The model sometimes throws errors mid-stream
## Configuration
### Dashboard Setup
Configure Groq models through your project's LLM provider settings:
1. Navigate to your project in the dashboard
2. Go to **Settings** → **LLM Providers**
3. Add or select **Groq** as your provider
4. Enter your [Groq API key](#api-key)
5. Select your preferred Llama model
6. Configure any [custom LLM parameters](/guides/setup-project/llm-provider) as needed
7. Click **Save** to apply the configuration
### API Key
You'll need a Groq API key to use these models. Get one from [Groq Console](https://console.groq.com/).
### Model Selection
Choose your model based on your use case:
* [**Llama 4 Scout**](#llama-4-scout-17b-16e-instruct): Code generation, reasoning, summarization
* [**Llama 4 Maverick**](#llama-4-maverick-17b-128e-instruct): Multilingual, multimodal, creative tasks
* [**Llama 3.3 70B**](#llama-3-3-70b-versatile): Complex tasks requiring strong reasoning
* [**Llama 3.1 8B**](#llama-3-1-8b-instant): Cost-effective real-time applications
## Performance Considerations
**Speed vs. Size:**
* Smaller models ([8B](#llama-3-1-8b-instant)) offer lower latency and cost
* Larger models ([70B](#llama-3-3-70b-versatile)) provide better reasoning and accuracy
* [Llama 4 models](#llama-4-family) balance performance with specialized capabilities
**Context Window:**
All Groq Llama models support 128K token context windows, enabling long-form conversations and document analysis.
**Throughput:**
Groq's specialized hardware delivers exceptional inference speeds (400-460+ tokens/sec), making it ideal for real-time applications.
## Best Practices
* **Start with [Llama 3.1 8B](#llama-3-1-8b-instant)** for cost-effective testing and simple tasks
* **Use [Llama 3.3 70B](#llama-3-3-70b-versatile)** when you need stronger reasoning or complex understanding
* **Try [Llama 4 models](#llama-4-family)** for specialized tasks ([Scout](#llama-4-scout-17b-16e-instruct) for code, [Maverick](#llama-4-maverick-17b-128e-instruct) for multilingual)
* **Test thoroughly** since most of these models are currently [untested with Tambo](#known-behaviors)
* **Monitor costs** and performance to find the right balance for your use case
## Known Behaviors
Most Groq Llama models are currently **Untested** in Tambo; they are newly
released or recently added. The exception is [Llama 3.1 8B](#llama-3-1-8b-instant),
which has been tested and shows the shortcomings listed above. Test any model
in your specific context before production deployment. See the
[Labels](/reference/llm-providers/labels) page for more information about model
status labels.
Streaming may behave inconsistently with non-OpenAI models. We're aware of
the issue and are actively working on a fix. Please proceed with caution
when using streaming on Groq models.
## Troubleshooting
**Slow response times?**
* Groq is optimized for speed; if experiencing slowness, check your network connection
* Verify you're using the correct API endpoint
* Monitor your Groq account for rate limits
**Unexpected outputs?**
* Adjust temperature and other parameters in [Custom LLM Parameters](/guides/setup-project/llm-provider)
* Try a different model size ([8B](#llama-3-1-8b-instant) vs. [70B](#llama-3-3-70b-versatile)) for your use case
* Provide more context in your prompts for better results
**API errors?**
* Verify your [Groq API key](#api-key) is valid and not expired
* Check your account quota and rate limits
* Ensure the model name matches exactly as shown in [Supported Models](#supported-models)
## See Also
* [Labels](/reference/llm-providers/labels) - Understanding model status labels
* [Custom LLM Parameters](/guides/setup-project/llm-provider) - Fine-tune model behavior
* [Groq Console](https://console.groq.com/) - Manage your API keys and usage
* [Groq Documentation](https://console.groq.com/docs) - Official Groq provider docs
# Models and Providers
URL: /reference/llm-providers
import { Card, Cards } from "fumadocs-ui/components/card";
AI models are the foundation of agent behavior in Tambo. Different models excel at different tasks—some are optimized for speed, others for reasoning depth, and others for cost efficiency. Understanding the characteristics of available models helps you choose the right tool for your application's needs.
## Why Model Selection Matters
The model you choose significantly impacts:
* **Response Quality** - More capable models handle complex tasks better
* **Latency** - Smaller, faster models respond quicker
* **Cost** - Larger models cost more per token
* **Capabilities** - Some models support vision, reasoning, or extended context windows
* **Behavior** - Models have different "personalities" and instruction-following abilities
Tambo makes it easy to switch between providers and models, letting you optimize for your specific use case.
## Available Providers
Tambo integrates with six major AI providers, each with unique strengths:
| Provider      | Description                                                                              | Best For                                                             |
| ------------- | ---------------------------------------------------------------------------------------- | -------------------------------------------------------------------- |
| **OpenAI**    | Industry-leading models including GPT-4.1, GPT-5, GPT-5.1, and o3 reasoning models       | General-purpose tasks, reasoning, and state-of-the-art performance   |
| **Anthropic** | Claude models with strong safety and reasoning capabilities                              | Complex reasoning, analysis, and safety-critical applications        |
| **Cerebras**  | Ultra-fast inference (2,000+ tokens/sec) powered by Wafer-Scale Engine hardware          | Real-time applications, high-throughput processing                   |
| **Google**    | Gemini models with multimodal support and extended thinking capabilities                 | Multimodal tasks, vision-based applications, and advanced reasoning  |
| **Groq**      | Llama models served with ultra-fast inference (400+ tokens/sec) on specialized hardware  | Real-time, high-volume, and cost-sensitive applications              |
| **Mistral**   | Fast, efficient open-source models with strong performance                               | Cost-effective alternatives with reliable performance                |
For detailed capabilities of each provider's models, see that provider's official documentation.
## Provider Configuration
Model providers are configured at the project level in Tambo Cloud. Each project has:
* A default provider and model
* API keys for authentication
* Custom parameters for fine-tuning behavior (temperature, max tokens, etc.)
* Token and tool call limits
Configuration is inherited by all threads within that project unless specifically overridden (when enabled).
Learn how to configure providers in the [Configure LLM Provider guide](/guides/setup-project/llm-provider).
## Model Status Labels
Each model carries a status label indicating how thoroughly it has been tested with Tambo:
* **Tested** - Validated on common Tambo tasks; recommended for production
* **Untested** - Available but not yet validated; use with caution and test in your context
* **Known Issues** - Usable but with observed behaviors worth noting
For detailed information about each label and specific model behaviors, see [Labels](/reference/llm-providers/labels).
Streaming may behave inconsistently with non-OpenAI models. We're aware
of the issue and are actively working on a fix. Please proceed with caution
when using streaming on non-OpenAI models.
## Model Capabilities
### Custom Parameters
Models can be customized with parameters that control their behavior:
* **temperature** - Controls randomness and creativity (0.0 = deterministic, 1.0+ = creative)
* **top\_p** - Nucleus sampling threshold for response diversity
* **max\_tokens** - Maximum length of generated responses
* **presence\_penalty** - Discourages topic repetition
* **frequency\_penalty** - Reduces word/phrase repetition
These parameters are configured at the project level and apply to all threads using that project's configuration.
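For example, a conservative configuration for structured outputs might look like this under **Custom LLM Parameters** (values illustrative):
```
temperature: 0.2
top_p: 0.9
max_tokens: 2048
```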
### Reasoning Models
Some models expose their internal thinking process, excelling at complex problem-solving:
* **OpenAI**: GPT-5, GPT-5.1, O3 models with adjustable reasoning effort
* **Google**: Gemini 3.0 Pro, Gemini 3.0 Deep Think with extended thinking
Reasoning models spend additional compute time analyzing problems before responding, enabling:
* Multi-step problem decomposition
* Solution exploration and verification
* Detailed reasoning token access
Learn more in [Reasoning Models](/reference/llm-providers/reasoning-models).
## Related Concepts
* How projects, models, and instructions combine to control agent behavior
* [Labels](/reference/llm-providers/labels): understand model testing status and known behaviors with Tambo
* [Reasoning Models](/reference/llm-providers/reasoning-models): a deep dive into advanced reasoning capabilities
# Labels
URL: /reference/llm-providers/labels
Streaming may behave inconsistently with models from providers other than
**OpenAI**. We're aware of the issue and are actively working on a fix.
Please proceed with caution when using streaming on non-OpenAI models.
Models in Tambo carry a **status label**, shown when you select a model from the LLM settings\
(**Dashboard → Project → Settings → LLM Providers**).
### Why Use Labels?
* **Set expectations**: Understand Tambo's confidence level for each model.
* **Guide selection**: Prefer `tested` models for production; approach others with care.
* **Highlight caveats**: `known-issues` labels call out specific behaviors we've observed.
### Label Definitions
| Label | Meaning |
| -------------- | ------------------------------------------------------------------ |
| `tested`       | Validated on common Tambo tasks. Recommended for most workflows.   |
| `untested` | Available, but not yet validated. Use it—but test in your context. |
| `known-issues` | Usable, but we’ve observed behaviors worth noting (see below). |
### Provider-Specific Notes
For detailed information about each model, including status, capabilities, and observed behaviors, see the provider-specific pages:
* **[OpenAI Models](/reference/llm-providers/openai)** - Notes on GPT-5.1, GPT-5.1 Chat Latest, GPT-4.1 Nano, and other models
* **[Anthropic Models](/reference/llm-providers/anthropic)** - Known issues with Claude 3.5 Haiku component rendering
* **[Google Models](/reference/llm-providers/google)** - Known issues with Gemini rendering consistency and untested Gemini 3.0 models
* **[Groq Models](/reference/llm-providers/groq)** - Notes on untested Llama 4 Scout and Maverick models
* **[Mistral Models](/reference/llm-providers/mistral)** - Known issues with Mistral Large 2.1 and Medium 3 rendering
Each provider page includes complete model information, configuration guidance, and specific notes about observed behaviors during testing.
For production-critical formatting, use **Tested** models and validate
outputs. When using **Untested** or **Known Issues** models, run a small
prompt suite to check behavior in your specific workload.
### Usage Patterns
* **Prefer `tested`** models for reliability. If using others, test with your use case.
* **Use inline notes** in the picker to spot caveats quickly.
### Integration
You can change providers and models at the project level under **LLM Provider Settings**. Tambo will apply your token limits and defaults accordingly.
# Mistral
URL: /reference/llm-providers/mistral
Mistral AI provides a range of powerful language models designed for professional use cases and complex reasoning tasks. This page covers the Mistral models available in Tambo, their capabilities, and how to configure them.
Mistral models (Large 2.1 and Medium 3) may inconsistently follow rendering
instructions, similar to Gemini models. Try clarifying prompt structure if you
encounter formatting issues. See [Labels](/reference/llm-providers/labels) for
more details.
## Available Models
Tambo supports three Mistral models, ranging from frontier-class reasoning to high-performance production models.
### Magistral Medium 1
**Status:** Tested
**API Name:** `magistral-medium-2506`
**Context Window:** 40,000 tokens
A frontier-class reasoning model released in June 2025, designed for advanced problem-solving and analytical tasks.
**Best for:**
* Complex reasoning and multi-step problem solving
* Code generation and debugging
* Professional knowledge work requiring deep analysis
* Tasks benefiting from extended thinking
**Provider Documentation:** [Mistral AI - Magistral](https://mistral.ai/news/magistral)
***
### Mistral Medium 3
**Status:** Known Issues
**API Name:** `mistral-medium-2505`
**Context Window:** 128,000 tokens
A frontier-class model that particularly excels at professional use cases, providing a balance of power and versatility for production workloads.
**Best for:**
* Professional applications requiring reliable performance
* Long-form content generation and analysis
* Multi-document reasoning with large context windows
* Production deployments where consistency matters
**Notes:** May occasionally resist rendering as requested. Try clarifying instructions (e.g., "Return a bulleted list only"). Outputs may have formatting quirks when structure is important.
**Provider Documentation:** [Mistral AI - Mistral Medium 3](https://mistral.ai/news/mistral-medium-3)
***
### Mistral Large 2.1
**Status:** Known Issues
**API Name:** `mistral-large-latest`
**Context Window:** 128,000 tokens
Mistral's top-tier large model for high-complexity tasks, with the latest version released in November 2024. This model represents Mistral's most capable offering for demanding workloads.
**Best for:**
* High-complexity reasoning and analysis
* Advanced code generation and review
* Multi-turn conversations requiring context retention
* Tasks demanding maximum model capability
**Notes:** Similar to Medium 3, may inconsistently follow rendering instructions. Validate outputs where structure is critical.
**Provider Documentation:** [Mistral AI - Pixtral Large](https://mistral.ai/news/pixtral-large)
## Configuration
### Setting Up Mistral in Your Project
1. Navigate to your project in the Tambo dashboard
2. Go to **Settings** → **LLM Providers**
3. Add or configure your Mistral API credentials
4. Select your preferred [Mistral model](#available-models)
5. Adjust token limits and parameters as needed
6. Click **Save** to apply your configuration
### Custom Parameters
Mistral models support standard LLM parameters like temperature, max tokens, and more. Configure these in the dashboard under [**Custom LLM Parameters**](/guides/setup-project/llm-provider).
For detailed information on available parameters, see [Custom LLM Parameters](/guides/setup-project/llm-provider).
## Model Comparison
| Model | Context Window | Status | Best Use Case |
| ------------------ | -------------- | ------------ | --------------------------- |
| Magistral Medium 1 | 40K tokens | Tested | Reasoning & problem solving |
| Mistral Medium 3 | 128K tokens | Known Issues | Professional applications |
| Mistral Large 2.1 | 128K tokens | Known Issues | High-complexity tasks |
## Best Practices
### Choosing the Right Model
* **Start with [Magistral Medium 1](#magistral-medium-1)** for reasoning-heavy tasks where the smaller context window is sufficient
* **Use [Mistral Medium 3](#mistral-medium-3)** when you need larger context windows for professional applications
* **Reserve [Mistral Large 2.1](#mistral-large-2-1)** for the most demanding tasks requiring maximum capability
### Handling Rendering Issues
If you encounter formatting inconsistencies with [Medium 3](#mistral-medium-3) or [Large 2.1](#mistral-large-2-1):
1. **Clarify instructions** - Be explicit about desired output format
2. **Use structured prompts** - Provide clear examples of expected structure
3. **Validate outputs** - Add checks for critical formatting requirements
4. **Test thoroughly** - Run a prompt suite to verify behavior in your workload
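For example, an explicit format instruction (illustrative) might look like:
```
Return only a markdown bulleted list of the three findings.
Do not add any introduction or closing text.
```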
For production-critical formatting, consider using [**Tested**](/reference/llm-providers/labels) models and validating outputs. See [Labels](/reference/llm-providers/labels) for more guidance.
## Troubleshooting
**Model not appearing in dashboard?**
* Verify your Mistral API key is [configured correctly](#setting-up-mistral-in-your-project)
* Check that your Tambo Cloud instance is up to date
* Ensure you have proper permissions for your project
**Inconsistent formatting in responses?**
* This is a [known issue](#available-models) with [Medium 3](#mistral-medium-3) and [Large 2.1](#mistral-large-2-1) models
* Try being more explicit in your prompt instructions
* Consider using [Magistral Medium 1](#magistral-medium-1) if formatting is critical
* See [Labels](/reference/llm-providers/labels) for detailed behavior notes
**High token usage?**
* [Mistral Large 2.1](#mistral-large-2-1) and [Medium 3](#mistral-medium-3) have 128K context windows
* Monitor your input length and conversation history
* Use token limits in [dashboard settings](#setting-up-mistral-in-your-project) to control costs
* Consider [Magistral Medium 1](#magistral-medium-1) for shorter context needs
## See Also
* [Labels](/reference/llm-providers/labels) - Understanding model status labels and observed behaviors
* [Custom LLM Parameters](/guides/setup-project/llm-provider) - Configuring model parameters
* [Reasoning Models](/reference/llm-providers/reasoning-models) - Advanced reasoning capabilities
# OpenAI
URL: /reference/llm-providers/openai
OpenAI provides a comprehensive suite of language models optimized for different use cases, from high-intelligence reasoning to cost-efficient production tasks. Tambo supports the full range of OpenAI models, including the latest GPT-5 series with advanced reasoning capabilities.
## Supported Models
Tambo supports 13 OpenAI models organized into five families:
### GPT-5 Family
The latest generation of OpenAI models with advanced reasoning capabilities and massive context windows.
#### gpt-5.2
**Status:** Tested
**API Name:** `gpt-5.2`
**Context Window:** 400,000 tokens
**Provider Docs:** [OpenAI GPT-5.2](https://platform.openai.com/docs/models/gpt-5.2)
Latest GPT-5.2 model with improved capabilities. Supports reasoning parameters for exposing internal thought processes.
**Best for:**
* Advanced coding and agentic tasks
* Complex multi-step reasoning workflows
* Tasks requiring cutting-edge model capabilities
* Applications needing the latest improvements
**Notes:** The newest GPT-5 series model. Use [`reasoningEffort`](#reasoningeffort) and [`reasoningSummary`](#reasoningsummary) parameters to control thinking behavior.
#### gpt-5
**Status:** Tested
**API Name:** `gpt-5-2025-08-07`
**Context Window:** 400,000 tokens
**Provider Docs:** [OpenAI GPT-5](https://platform.openai.com/docs/models/gpt-5)
The flagship GPT-5 model, best for coding and agentic tasks across domains. Supports reasoning parameters for exposing internal thought processes.
**Best for:**
* Complex coding tasks and refactoring
* Agentic workflows requiring multi-step reasoning
* Tasks requiring deep analysis and problem-solving
* Applications where showing reasoning builds trust
**Notes:** The most powerful model for reasoning-intensive tasks. Use [`reasoningEffort`](#reasoningeffort) and [`reasoningSummary`](#reasoningsummary) parameters to control thinking behavior.
#### gpt-5-mini
**Status:** Tested
**API Name:** `gpt-5-mini-2025-08-07`
**Context Window:** 400,000 tokens
**Provider Docs:** [OpenAI GPT-5 Mini](https://platform.openai.com/docs/models/gpt-5-mini)
A faster, more cost-efficient version of GPT-5 for well-defined tasks.
**Best for:**
* Production applications requiring balance of intelligence and cost
* Well-scoped tasks with clear requirements
* High-volume reasoning workloads
* Applications where latency matters
**Notes:** Maintains GPT-5's reasoning capabilities while optimizing for speed and cost. Ideal for production deployments.
#### gpt-5-nano
**Status:** Tested
**API Name:** `gpt-5-nano-2025-08-07`
**Context Window:** 400,000 tokens
**Provider Docs:** [OpenAI GPT-5 Nano](https://platform.openai.com/docs/models/gpt-5-nano)
The fastest, most cost-efficient version of GPT-5.
**Best for:**
* High-volume applications requiring reasoning at scale
* Simple reasoning tasks
* Latency-sensitive applications
* Cost-optimized production deployments
**Notes:** Smallest GPT-5 variant, optimized for efficiency while retaining reasoning capabilities.
#### gpt-5.1
**Status:** Tested (Default Model)
**API Name:** `gpt-5.1`
**Context Window:** 400,000 tokens
**Provider Docs:** [OpenAI Latest Model](https://platform.openai.com/docs/guides/latest-model)
GPT-5.1 with adaptive reasoning. Dynamically varies thinking time based on task complexity for better token efficiency.
**Best for:**
* Tasks with variable complexity
* Applications requiring intelligent cost optimization
* Complex problem-solving with automatic effort adjustment
* Production systems handling diverse query types
**Notes:** Released November 2025. Adaptive reasoning automatically adjusts thinking time, optimizing cost without sacrificing quality on complex tasks.
#### gpt-5.1-chat-latest
**Status:** Tested
**API Name:** `gpt-5.1-chat-latest`
**Context Window:** 400,000 tokens
**Provider Docs:** [OpenAI Latest Model](https://platform.openai.com/docs/guides/latest-model)
GPT-5.1 Chat Latest - warmer, more conversational model with adaptive reasoning.
**Best for:**
* Conversational applications requiring natural responses
* Latency-sensitive chat interfaces
* Applications balancing warmth and intelligence
* Real-time interactions with optional reasoning
**Notes:** Released November 2025. Conversational variant with improved responsiveness.
### GPT-4.1 Family
High-intelligence models excelling at function calling and instruction following with massive context windows.
#### gpt-4.1
**Status:** Tested
**API Name:** `gpt-4.1-2025-04-14`
**Context Window:** 1,047,576 tokens
**Provider Docs:** [OpenAI GPT-4.1](https://platform.openai.com/docs/models/gpt-4.1)
Excels at function calling and instruction following.
**Best for:**
* Function calling and tool use
* Following complex instructions precisely
* General-purpose applications
* Large context requirements (1M+ tokens)
**Notes:** Balances intelligence, reliability, and cost. Ideal for most production applications.
#### gpt-4.1-mini
**Status:** Tested
**API Name:** `gpt-4.1-mini-2025-04-14`
**Context Window:** 1,047,576 tokens
**Provider Docs:** [OpenAI GPT-4.1 Mini](https://platform.openai.com/docs/models/gpt-4.1-mini)
Balanced for intelligence, speed, and cost.
**Best for:**
* Production applications requiring cost efficiency
* High-volume workloads
* Applications balancing quality and performance
* Large context with cost constraints
**Notes:** Offers excellent value proposition with maintained quality at reduced cost.
#### gpt-4.1-nano
**Status:** Tested
**API Name:** `gpt-4.1-nano-2025-04-14`
**Context Window:** 1,047,576 tokens
**Provider Docs:** [OpenAI GPT-4.1 Nano](https://platform.openai.com/docs/models/gpt-4.1-nano)
Fastest, most cost-efficient version of GPT-4.1.
**Best for:**
* Maximum throughput applications
* Simple, well-defined tasks
* Cost-critical deployments
* High-volume production systems
**Notes:** Validated on common Tambo tasks, including streaming responses and generative UI components. Some edge cases require explicit prompting.
### o3 Family
Specialized reasoning model for complex problem-solving.
#### o3
**Status:** Tested
**API Name:** `o3-2025-04-16`
**Context Window:** 200,000 tokens
**Provider Docs:** [OpenAI o3](https://platform.openai.com/docs/models/o3)
The most powerful reasoning model available.
**Best for:**
* Mathematical proofs and calculations
* Complex code review and debugging
* Strategic planning requiring deep analysis
* Research and analysis tasks
* Any task where showing reasoning is critical
**Notes:** Dedicated reasoning model with the most powerful thinking capabilities. Higher latency and cost but unmatched reasoning depth.
### GPT-4o Family
Versatile multimodal models with text and image input support.
#### gpt-4o
**Status:** Tested
**API Name:** `gpt-4o-2024-11-20`
**Context Window:** 128,000 tokens
**Provider Docs:** [OpenAI GPT-4o](https://platform.openai.com/docs/models/gpt-4o)
Versatile and high-intelligence model with text and image input support. Best for most tasks, combining strong reasoning, creativity, and multimodal understanding.
**Best for:**
* Multimodal applications (text + images)
* Creative tasks requiring nuanced understanding
* General-purpose applications requiring versatility
* Tasks requiring both analysis and generation
**Notes:** Excellent all-around model with multimodal capabilities. Strong choice when you need both text and image understanding.
#### gpt-4o-mini
**Status:** Tested
**API Name:** `gpt-4o-mini-2024-07-18`
**Context Window:** 128,000 tokens
**Provider Docs:** [OpenAI GPT-4o Mini](https://platform.openai.com/docs/models/gpt-4o-mini)
Fast, affordable model ideal for focused tasks and fine-tuning. Supports text and image inputs, with low cost and latency for efficient performance.
**Best for:**
* Cost-sensitive multimodal applications
* Fine-tuning for specific use cases
* High-volume image analysis
* Production deployments prioritizing efficiency
**Notes:** Most efficient multimodal option with excellent performance-to-cost ratio.
### GPT-4 Turbo Family
Previous generation high-intelligence model, still powerful but superseded by newer families.
#### gpt-4-turbo
**Status:** Tested
**API Name:** `gpt-4-turbo-2024-04-09`
**Context Window:** 128,000 tokens
**Provider Docs:** [OpenAI GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo)
High-intelligence model that's cheaper and faster than GPT-4. Still powerful, but we recommend using GPT-4o for most tasks.
**Best for:**
* Legacy applications requiring GPT-4 Turbo specifically
* Cost-conscious deployments not yet migrated to newer models
**Notes:** While still capable, GPT-4o and GPT-4.1 families offer better performance and features for most use cases.
## Provider-Specific Parameters
OpenAI reasoning models support specialized parameters to control their thinking behavior.
### Reasoning Parameters
Configure reasoning capabilities through your project's LLM provider settings in the [dashboard](#configuration).
#### reasoningEffort
**Type:** `string`
**Values:** `"none"`, `"low"`, `"medium"`, `"high"`
**Description:** Controls the intensity of the model's reasoning process. Only effective when [`reasoningSummary`](#reasoningsummary) is also set.
* **`"none"`** - No extended reasoning (fastest)
* **`"low"`** - Light reasoning for simpler tasks (faster, cheaper)
* **`"medium"`** - Balanced reasoning for most use cases (recommended)
* **`"high"`** - Deep reasoning for complex problems (slower, more expensive)
**Default Values by Model:**
| Model | Default |
| -------------------------------------- | ---------- |
| gpt-5.1 | `"none"` |
| gpt-5.1-chat-latest | `"medium"` |
| gpt-5.2, gpt-5, gpt-5-mini, gpt-5-nano | `"low"` |
| o3 | `"medium"` |
#### reasoningSummary
**Type:** `string`
**Values:** `"auto"`, `"detailed"`
**Description:** Enables reasoning token output, allowing you to see the model's internal thought process.
* **`"auto"`** - Automatically determines appropriate reasoning detail level
* **`"detailed"`** - Provides more comprehensive reasoning output
**Example Configuration:**
In your Tambo Cloud dashboard under **Settings** → **LLM Providers** → **Custom LLM Parameters**:
```
reasoningEffort: "medium"
reasoningSummary: "auto"
```
### Supported Reasoning Models
Reasoning parameters are available for the following OpenAI models:
* GPT-5.2 (`gpt-5.2`)
* GPT-5.1 (`gpt-5.1`)
* GPT-5.1 Chat Latest (`gpt-5.1-chat-latest`)
* GPT-5 (`gpt-5-2025-08-07`)
* GPT-5 Mini (`gpt-5-mini-2025-08-07`)
* GPT-5 Nano (`gpt-5-nano-2025-08-07`)
* o3 (`o3-2025-04-16`)
## Configuration
### Setting Up OpenAI in Tambo
1. Navigate to your project in the Tambo Cloud dashboard
2. Go to **Settings** → **LLM Providers**
3. Select **OpenAI** as your provider
4. Choose your desired model from the dropdown
5. Configure any provider-specific parameters (reasoning, temperature, etc.)
6. Click **Save**
### Dashboard Features
When you select an OpenAI reasoning model, the dashboard automatically suggests relevant parameters:
* [**reasoningEffort**](#reasoningeffort) - Suggested for all reasoning models
* [**reasoningSummary**](#reasoningsummary) - Suggested for all reasoning models
Simply click the suggested parameter to add it to your configuration, set the desired value, and save.
### Best Practices
**Model Selection:**
* Use **gpt-5.1** for most production applications (default)
* Use **gpt-5** family when reasoning capabilities are critical
* Use **gpt-4o** for multimodal applications
* Use **mini/nano** variants for cost-optimized deployments
* Use **o3** for maximum reasoning depth
**Reasoning Configuration:**
* Start with [`reasoningEffort: "medium"`](#reasoningeffort) for balanced performance
* Set [`reasoningSummary: "auto"`](#reasoningsummary) to enable reasoning display
* Increase effort for complex tasks, decrease for simple queries
* Monitor token usage as reasoning consumes additional tokens
**Production Considerations:**
* Test untested models thoroughly before production deployment
* Monitor costs with reasoning parameters enabled
* Use separate projects for different reasoning requirements
* Consider using non-reasoning models for high-volume simple tasks
## See Also
* [Labels](/reference/llm-providers/labels) - Understanding model status labels and observed behaviors
* [Custom LLM Parameters](/guides/setup-project/llm-provider) - Configuring temperature, max tokens, and other parameters
* [Reasoning Models](/reference/llm-providers/reasoning-models) - Deep dive into reasoning capabilities for OpenAI and Gemini models
# Reasoning Models
URL: /reference/llm-providers/reasoning-models
Reasoning models are specialized LLMs that expose their internal thought process before generating a final response. These models excel at complex tasks requiring multi-step reasoning, problem-solving, and logical analysis by spending additional compute time "thinking" through the problem.
Tambo currently supports reasoning capabilities for **OpenAI** and **Google
Gemini** models. Each provider uses different parameter names to control
reasoning behavior.
## Provider-Specific Documentation
For detailed information about reasoning parameters and configuration for each provider, see:
* **[OpenAI Models](/reference/llm-providers/openai#provider-specific-parameters)** - Configure `reasoningEffort` and `reasoningSummary` for GPT-5, O3, and other OpenAI reasoning models
* **[Google Models](/reference/llm-providers/google#provider-specific-parameters)** - Configure `thinkingConfig` for Gemini 3.0 Pro, Deep Think, and other Gemini reasoning models
## What Are Reasoning Models?
Traditional LLMs generate responses token-by-token in a single forward pass. Reasoning models add an intermediate "thinking" phase where the model:
1. **Analyzes the problem** - Breaks down complex queries into sub-problems
2. **Explores solutions** - Considers multiple approaches and their tradeoffs
3. **Verifies reasoning** - Checks its logic before committing to an answer
4. **Generates response** - Produces the final output based on verified reasoning
This thinking process is captured as **reasoning tokens** that you can access, display, and analyze alongside the final response.
## Supported Models
See the provider-specific pages for complete model lists:
* **[OpenAI Models](/reference/llm-providers/openai)** - GPT-5, GPT-5.1, GPT-5 Mini, GPT-5 Nano, GPT-4.1 Nano, O3, and more
* **[Google Models](/reference/llm-providers/google)** - Gemini 3.0 Pro, Deep Think, 2.5 Pro, 2.5 Flash, and more
For configuration instructions and parameter details, see the **Provider-Specific Parameters** sections on each provider page.
## When to Use Reasoning Models
Reasoning models work best for tasks that benefit from step-by-step thinking:
**✅ Best for:**
* Complex problem-solving and analysis
* Mathematical calculations and proofs
* Code review and debugging
* Strategic planning requiring multiple steps
* Tasks where showing work builds user trust
**⚠️ Not ideal for:**
* Simple Q\&A or fact retrieval
* Real-time chat requiring instant responses
* High-volume, cost-sensitive applications
## Displaying Reasoning in Your App
Tambo components automatically handle reasoning display - no additional code required.
### Built-in Support
When you add Tambo components from the CLI, reasoning support is included out-of-the-box:
```bash
npx tambo add message
```
These components include the `ReasoningInfo` sub-component that:
* **Auto-displays** reasoning in a collapsible dropdown
* **Shows thinking progress** with step counts during streaming
* **Auto-collapses** when the final response arrives
* **Auto-scrolls** to follow reasoning as it streams
If you're using Tambo's pre-built components (message, thread-content,
message-thread-full, etc.), reasoning display is already built-in. Just
configure reasoning parameters in your dashboard and it works automatically.
### Custom Implementation
If building custom components, reasoning is available in the `ThreadMessage` type:
```typescript
interface ThreadMessage {
id: string;
content: string;
role: "user" | "assistant";
reasoning?: string[]; // Array of reasoning strings
// ... other fields
}
```
Access it like any other message property:
```tsx
{message.reasoning?.map((step, index) => (
  <div key={index}>
    Step {index + 1}: {step}
  </div>
))}
```
## Best Practices
### Performance Optimization
Balance reasoning effort with cost and latency in your dashboard configuration. See provider-specific pages for parameter details:
* **[OpenAI Configuration](/reference/llm-providers/openai#provider-specific-parameters)** - Adjust `reasoningEffort` levels
* **[Google Configuration](/reference/llm-providers/google#provider-specific-parameters)** - Optimize `thinkingBudget` values
**General Guidelines:**
* **Development**: Use maximum reasoning effort for thorough testing
* **Production**: Use balanced reasoning settings for most use cases
* **High-Volume**: Consider using non-reasoning models for simple queries
### Cost Considerations
Reasoning tokens are billed separately and typically cost more than standard tokens:
* **Monitor usage** - Track reasoning token consumption in your dashboard
* **Optimize effort** - Use lower reasoning settings when appropriate
* **Test different levels** - Compare quality vs. cost at different reasoning levels
## Troubleshooting
**Reasoning not appearing in responses?**
* Verify you're using a supported reasoning model (see [OpenAI](/reference/llm-providers/openai) or [Google](/reference/llm-providers/google) model lists)
* Check your dashboard settings under **LLM Providers** → **Custom LLM Parameters**
* Ensure reasoning parameters are properly configured (see provider-specific documentation)
* Click **Save Settings** after making changes
**Reasoning tokens consuming too many resources?**
* Lower your reasoning effort settings (see provider-specific parameter documentation)
* Create separate projects with different configurations for simple vs. complex queries
* Monitor token usage in your Tambo Cloud usage dashboard
* Consider using non-reasoning models for high-volume, simple tasks
## Additional Resources
* **[OpenAI Models](/reference/llm-providers/openai)** - Complete OpenAI model list and reasoning configuration
* **[Google Models](/reference/llm-providers/google)** - Complete Google model list and thinking configuration
* **[Custom LLM Parameters](/guides/setup-project/llm-provider#step-4-configure-custom-parameters)** - General parameter configuration guide
* **[Model Labels](/reference/llm-providers/labels)** - Understanding model status and observed behaviors
# Endpoint Deprecated
URL: /reference/problems/endpoint-deprecated
The API returns a `410 Gone` status with this problem type when you call an endpoint that has been deprecated and removed.
## Response Format
```json
{
"type": "https://docs.tambo.co/reference/problems/endpoint-deprecated",
"status": 410,
"title": "Endpoint Deprecated",
"detail": "The non-streaming /advance endpoint has been deprecated. Please use /advancestream instead.",
"code": "ENDPOINT_DEPRECATED",
"instance": "/threads/advance",
"details": {
"migrateToEndpoint": "POST /advancestream"
}
}
```
## Fields
* **`type`** - Stable URI identifying this problem type
* **`status`** - HTTP status code (always `410`)
* **`title`** - Short summary of the problem
* **`detail`** - Human-readable description explaining what was deprecated
* **`code`** - Stable identifier (`ENDPOINT_DEPRECATED`) for programmatic error handling
* **`instance`** - The specific request path that triggered this error
* **`details.migrateToEndpoint`** - Migration guidance showing the HTTP method and replacement endpoint path
## Handling
When you receive this error:
1. Check the `details.migrateToEndpoint` field for the replacement endpoint
2. Update your code to use the new endpoint
3. Review the [REST API documentation](/reference/rest-api) for the correct request format
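As a TypeScript sketch, client code might branch on the problem `code` and log the migration hint:
```typescript
// Detect the deprecation problem and surface the migration hint.
// TAMBO_API_KEY is a placeholder for your real key.
const res = await fetch("https://api.tambo.co/threads/advance", {
  method: "POST",
  headers: { Authorization: `Bearer ${process.env.TAMBO_API_KEY}` },
});

if (res.status === 410) {
  const problem = await res.json();
  if (problem.code === "ENDPOINT_DEPRECATED") {
    console.warn(`Deprecated endpoint; migrate to ${problem.details.migrateToEndpoint}`);
  }
}
```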
## Example
```bash
# Old endpoint (deprecated)
curl -X POST https://api.tambo.co/threads/advance \
-H "Authorization: Bearer YOUR_API_KEY"
# New endpoint (use this instead)
curl -X POST https://api.tambo.co/threads/advancestream \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json"
```
## See Also
* [REST API Reference](/reference/rest-api) - Complete API documentation
* [RFC 9457 Problem Details](https://www.rfc-editor.org/rfc/rfc9457.html) - Standard for error responses
# React SDK Hooks
URL: /reference/react-sdk/hooks
The `@tambo-ai/react` package provides hooks that expose state values and functions to make building AI apps with Tambo simple.
Here you'll find a description of each state value and function, organized by hook.
## useTambo
The primary entrypoint for the Tambo React SDK. This hook provides access to all Tambo functionality including the client, component registry, thread context, and interactable component management.
```tsx
const tambo = useTambo();
```
This hook returns a composite of all context values from the nested providers, including:
* **Client context**: `client`, `queryClient`, `isUpdatingToken`
* **Thread context**: `thread`, `sendThreadMessage`, `generationStage`, `isIdle`, etc.
* **Component context**: `currentMessage`, `currentComponent`
* **Interactable context**: `interactableComponents`, `addInteractableComponent`, etc.
* **Context helpers**: `getContextHelpers`, `addContextHelper`, `removeContextHelper`
* **Context attachments**: `attachments`, `addContextAttachment`, `removeContextAttachment`
For most use cases, prefer using the more specific hooks (like `useTamboThread` or `useTamboRegistry`) to access only what you need.
## useTamboRegistry
This hook provides helpers for component and tool registration.
### registerComponent
`const { registerComponent } = useTamboRegistry()`
This function is used to register components with Tambo, allowing them to be potentially used in Tambo's responses.
### registerTool
`const { registerTool } = useTamboRegistry()`
This function is used to register tools with Tambo.
### registerTools
`const { registerTools } = useTamboRegistry()`
This function allows registering multiple tools at once.
### addToolAssociation
`const { addToolAssociation } = useTamboRegistry()`
This function creates an association between components and tools.
### componentList
`const { componentList } = useTamboRegistry()`
This value provides access to the list of registered components.
### toolRegistry
`const { toolRegistry } = useTamboRegistry()`
This value provides access to the registry of all registered tools.
### componentToolAssociations
`const { componentToolAssociations } = useTamboRegistry()`
This value provides access to the associations between components and tools.
## useTamboThread
This hook provides access to the current thread and functions for managing thread interactions.
### thread
`const { thread } = useTamboThread()`
The current thread object containing messages and metadata. Messages can be accessed via `thread.messages`. This value is kept up-to-date automatically by Tambo when messages are sent or received.
### sendThreadMessage
`const { sendThreadMessage } = useTamboThread()`
Function to send a user message to Tambo and receive a response. A call to this function will update the provided `thread` state value.
To have the response streamed, use `sendThreadMessage(message, {streamResponse: true})`.
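For example, a minimal sketch of streaming from a submit handler:
```tsx
const { sendThreadMessage } = useTamboThread();

// Stream the assistant's response into `thread` as it is generated
async function handleSend(text: string) {
  await sendThreadMessage(text, { streamResponse: true });
}
```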
### generationStage
`const { generationStage } = useTamboThread()`
Current stage of message generation. Possible values are:
* `IDLE`: The thread is not currently generating any response (Initial stage)
* `CHOOSING_COMPONENT`: Tambo is determining which component to use for the response
* `FETCHING_CONTEXT`: Gathering necessary context for the response by calling a registered tool
* `HYDRATING_COMPONENT`: Generating the props for a chosen component
* `STREAMING_RESPONSE`: Actively streaming the response
* `COMPLETE`: Generation process has finished successfully
* `ERROR`: An error occurred during the generation process
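As a minimal sketch, you can surface the stage in your UI while generation is active (markup illustrative):
```tsx
const { generationStage, isIdle } = useTamboThread();

// Show a lightweight status line until the thread returns to an idle state
return isIdle ? null : <p>Status: {generationStage}</p>;
```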
### inputValue
`const { inputValue } = useTamboThread()`
Current value of the thread input field.
### generationStatusMessage
`const { generationStatusMessage } = useTamboThread()`
Status message describing the current generation state, as generated by Tambo.
### isIdle
`const { isIdle } = useTamboThread()`
Boolean indicating whether the thread is in an idle state (`generationStage` is `IDLE`, `COMPLETE`, or `ERROR`).
### switchCurrentThread
`const { switchCurrentThread } = useTamboThread()`
Function to change the active thread by id. This will update the `thread` state value to the fetched thread.
### addThreadMessage
`const { addThreadMessage } = useTamboThread()`
Function to append a new message to the thread.
### updateThreadMessage
`const { updateThreadMessage } = useTamboThread()`
Function to modify an existing thread message.
### setLastThreadStatus
`const { setLastThreadStatus } = useTamboThread()`
Function to update the status of the most recent thread message.
### setInputValue
`const { setInputValue } = useTamboThread()`
Function to update the input field value.
## useTamboThreadList
This hook provides access to the list of all threads for a project and their loading states.
### data
`const { data } = useTamboThreadList()`
Array of threads or null if not yet loaded.
### isPending
`const { isPending } = useTamboThreadList()`
Boolean indicating if threads are currently being fetched.
### isSuccess
`const { isSuccess } = useTamboThreadList()`
Boolean indicating if threads were successfully fetched.
### isError
`const { isError } = useTamboThreadList()`
Boolean indicating if an error occurred while fetching threads.
### error
`const { error } = useTamboThreadList()`
Error object containing details if the fetch failed.
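A minimal sketch combining these values (the `id` field on each thread is assumed here):
```tsx
const { data: threads, isPending, isError, error } = useTamboThreadList();

if (isPending) return <p>Loading threads…</p>;
if (isError) return <p>Failed to load threads: {error?.message}</p>;
return (
  <ul>
    {threads?.map((t) => (
      <li key={t.id}>{t.id}</li>
    ))}
  </ul>
);
```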
## useTamboThreadInput
This hook provides utilities for building an input interface that sends messages to Tambo.
### value
`const { value } = useTamboThreadInput()`
Current value of the input field.
### setValue
`const { setValue } = useTamboThreadInput()`
Function to update the input field value.
### submit
`const { submit } = useTamboThreadInput()`
Function to submit the current input value, with optional context and streaming configuration.
### isPending
`const { isPending } = useTamboThreadInput()`
Boolean indicating if a submission is in progress.
### isSuccess
`const { isSuccess } = useTamboThreadInput()`
Boolean indicating if the last submission was successful.
### isError
`const { isError } = useTamboThreadInput()`
Boolean indicating if the last submission failed.
### error
`const { error } = useTamboThreadInput()`
Error object containing details if the submission failed.
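A minimal input form sketch, assuming `submit` accepts the streaming option described above:
```tsx
function MessageInput() {
  const { value, setValue, submit, isPending } = useTamboThreadInput();

  return (
    <form
      onSubmit={(e) => {
        e.preventDefault();
        submit({ streamResponse: true }); // option name per the docs above
      }}
    >
      <input value={value} onChange={(e) => setValue(e.target.value)} />
      <button type="submit" disabled={isPending}>
        Send
      </button>
    </form>
  );
}
```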
## useTamboSuggestions
This hook provides utilities for managing AI-generated message suggestions.
### suggestions
`const { suggestions } = useTamboSuggestions()`
List of available AI-generated suggestions for the next message.
### selectedSuggestionId
`const { selectedSuggestionId } = useTamboSuggestions()`
ID of the currently selected suggestion.
### accept
`const { accept } = useTamboSuggestions()`
Function to accept and apply a suggestion, with an option for automatic submission.
### acceptResult
`const { acceptResult } = useTamboSuggestions()`
Detailed mutation result for accepting a suggestion.
### generateResult
`const { generateResult } = useTamboSuggestions()`
Detailed mutation result for generating new suggestions.
### isPending
`const { isPending } = useTamboSuggestions()`
Boolean indicating if a suggestion operation is in progress.
### isSuccess
`const { isSuccess } = useTamboSuggestions()`
Boolean indicating if the last operation was successful.
### isError
`const { isError } = useTamboSuggestions()`
Boolean indicating if the last operation resulted in an error.
### error
`const { error } = useTamboSuggestions()`
Error object containing details if the operation failed.
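A rough sketch of rendering suggestion chips; the suggestion fields and the `accept` options shown here are assumptions:
```tsx
const { suggestions, accept } = useTamboSuggestions();

// Clicking a chip accepts the suggestion and (optionally) auto-submits it
return (
  <div>
    {suggestions.map((s) => (
      <button key={s.id} onClick={() => accept({ suggestion: s, shouldSubmit: true })}>
        {s.title}
      </button>
    ))}
  </div>
);
```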
## useTamboClient
This hook provides direct access to the Tambo client instance.
### client
`const { client } = useTamboClient()`
The Tambo client instance for direct API access.
## useTamboComponentState
This hook is similar to React's `useState`, but allows Tambo to see the state values to help respond to later messages.
`const [myValue, setMyValue] = useTamboComponentState(keyName, initialValue, setFromProp)`
For streaming components, use the third parameter (`setFromProp`) to seed
editable state from AI-generated props. Combined with `useTamboStreamStatus`,
this lets you disable inputs while streaming and hand control to the user once
complete.
### value
`const [value] = useTamboComponentState(keyName, initialValue)`
The current state value stored in the thread message for the given key.
### setValue
`const [, setValue] = useTamboComponentState(keyName, initialValue)`
Function to update the state value, synchronizing both local state and server state.
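Putting the streaming pattern together, a sketch that seeds editable state from an AI-streamed `content` prop (the prop and key names are illustrative):
```tsx
const [text, setText] = useTamboComponentState("noteText", "", props.content);
const { streamStatus } = useTamboStreamStatus();

// Lock the input while props are streaming; hand control to the user after
return (
  <textarea
    value={text}
    onChange={(e) => setText(e.target.value)}
    disabled={!streamStatus.isSuccess}
  />
);
```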
## useTamboContextHelpers
This hook provides dynamic control over context helpers.
### getContextHelpers
`const { getContextHelpers } = useTamboContextHelpers()`
Returns the current map of registered helper functions keyed by name.
### addContextHelper
`const { addContextHelper } = useTamboContextHelpers()`
Adds or replaces a helper at the given key.
### removeContextHelper
`const { removeContextHelper } = useTamboContextHelpers()`
Removes a helper by key so it is no longer included in outgoing messages.
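A sketch that registers a page-location helper for the lifetime of a component, assuming a helper is a named function whose return value is attached to outgoing messages:
```tsx
import { useEffect } from "react";

const { addContextHelper, removeContextHelper } = useTamboContextHelpers();

useEffect(() => {
  // Included in outgoing messages under the "currentPage" key
  addContextHelper("currentPage", () => ({ path: window.location.pathname }));
  return () => removeContextHelper("currentPage");
}, [addContextHelper, removeContextHelper]);
```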
## useTamboContextAttachment
This hook provides utilities for managing context attachments that will be sent with the next user message.
### attachments
`const { attachments } = useTamboContextAttachment()`
Array of active context attachments that will be included in `additionalContext` when the next message is sent.
### addContextAttachment
`const { addContextAttachment } = useTamboContextAttachment()`
Function to add a new context attachment. Accepts an object with `context` (string), optional `displayName` (string), and optional `type` (string). Returns the `ContextAttachment` object with an auto-generated `id`. All attachments are automatically registered together as a single merged context helper (key: `contextAttachments`) that returns an array of all active attachments.
```tsx
// Without displayName
const attachment = addContextAttachment({
context: "The contents of File.txt",
});
// With displayName
const attachment = addContextAttachment({
context: "The contents of File.txt",
displayName: "File.txt",
});
// With displayName and type
const attachment = addContextAttachment({
context: "The contents of File.txt",
displayName: "File.txt",
type: "file",
});
```
### removeContextAttachment
`const { removeContextAttachment } = useTamboContextAttachment()`
Function to remove a specific context attachment by its ID. The context helper automatically updates to reflect the change.
### clearContextAttachments
`const { clearContextAttachments } = useTamboContextAttachment()`
Function to remove all active context attachments at once. The context helper automatically updates to reflect the change. Context attachments are automatically cleared after message submission (one-time use), so you typically don't need to call this manually.
## useCurrentInteractablesSnapshot
`const snapshot = useCurrentInteractablesSnapshot()`
Returns a cloned snapshot of the current interactable components.
## useTamboCurrentMessage
`const message = useTamboCurrentMessage()`
Returns the complete `TamboThreadMessage` object for the current message, including thread ID, component data, state, and timestamps. Must be used within a component rendered as part of a message thread.
Use when you need full message/thread context. For component metadata only, see [`useTamboCurrentComponent`](#usetambocurrentcomponent).
## useTamboCurrentComponent
`const component = useTamboCurrentComponent()`
Returns component metadata (`componentName`, `props`, `interactableId`, `description`, `threadId`) from the parent component context. Returns `null` if used outside a component. Works with both inline rendered and interactable components.
Use when you need component information or thread ID without full message context. For complete message data, see [`useTamboCurrentMessage`](#usetambocurrentmessage).
See [Interactable Components](/concepts/generative-interfaces/interactable-components) for detailed patterns and examples.
## useTamboStreamStatus
Track streaming status for Tambo component props. Returns both global stream status and per-prop status flags.
```tsx
const { streamStatus, propStatus } = useTamboStreamStatus();
```
**Important**: Props update repeatedly during streaming and may be partial. Check the per-prop flag (e.g., `propStatus.title?.isSuccess`) before treating a prop as complete.
### streamStatus
Global stream status flags for the component:
```tsx
interface StreamStatus {
isPending: boolean; // No tokens received yet, generation not active
isStreaming: boolean; // Active streaming - generation or props still streaming
isSuccess: boolean; // Complete - all props finished without error
isError: boolean; // Fatal error occurred
streamError?: Error; // First error encountered (if any)
}
```
### propStatus
Per-prop streaming status:
```tsx
interface PropStatus {
isPending: boolean; // No tokens received for this prop yet
isStreaming: boolean; // Prop has partial content, still updating
isSuccess: boolean; // Prop finished streaming successfully
error?: Error; // Error during streaming (if any)
}
```
### Example: Wait for entire stream
```tsx
const { streamStatus } = useTamboStreamStatus();

// `LoadingSkeleton` and `Chart` are placeholder components for illustration
if (!streamStatus.isSuccess) {
  return <LoadingSkeleton />;
}
return <Chart />;
```
### Example: Highlight in-flight props
```tsx
const { propStatus } = useTamboStreamStatus();

// Pulse the heading while the `title` prop is still streaming (markup illustrative)
return (
  <h2 className={propStatus.title?.isStreaming ? "animate-pulse" : ""}>{title}</h2>
);
```
## useTamboStreamingProps
**Deprecated**: Use `useTamboComponentState` with `setFromProp` instead. This hook will be removed in 1.0.0.
Low-level helper that merges streamed props into state.
```tsx
useTamboStreamingProps(currentState, setState, streamingProps);
```
## useTamboGenerationStage
Access the current generation stage from the thread context.
```tsx
const { generationStage } = useTamboGenerationStage();
```
Returns the current `GenerationStage` enum value. See [GenerationStage](/reference/react-sdk/types#generationstage) for possible values.
## useTamboVoice
Exposes functionality to record speech and transcribe it using the Tambo API.
```tsx
const {
startRecording,
stopRecording,
isRecording,
isTranscribing,
transcript,
transcriptionError,
mediaAccessError,
} = useTamboVoice();
```
### Return values
| Value | Type | Description |
| -------------------- | ---------------- | ------------------------------------------------------ |
| `startRecording` | `() => void` | Start recording audio and reset the current transcript |
| `stopRecording` | `() => void` | Stop recording and automatically start transcription |
| `isRecording` | `boolean` | Whether the user is currently recording |
| `isTranscribing` | `boolean` | Whether audio is being transcribed |
| `transcript` | `string \| null` | The transcript of the recorded audio |
| `transcriptionError` | `string \| null` | Error message if transcription fails |
| `mediaAccessError` | `string \| null` | Error message if microphone access fails |
### Example
```tsx
function VoiceInput() {
  const {
    startRecording,
    stopRecording,
    isRecording,
    isTranscribing,
    transcript,
  } = useTamboVoice();

  // Minimal sketch: toggle recording, then show the transcript (markup illustrative)
  return (
    <div>
      <button onClick={isRecording ? stopRecording : startRecording}>
        {isRecording ? "Stop" : "Record"}
      </button>
      {isTranscribing && <p>Transcribing…</p>}
      {transcript && <p>{transcript}</p>}
    </div>
  );
}
```
## useTamboInteractable
Provides access to the interactable component management functions.
```tsx
const {
interactableComponents,
addInteractableComponent,
removeInteractableComponent,
updateInteractableComponentProps,
getInteractableComponent,
getInteractableComponentsByName,
clearAllInteractableComponents,
setInteractableState,
getInteractableComponentState,
setInteractableSelected,
clearInteractableSelections,
} = useTamboInteractable();
```
### interactableComponents
`TamboInteractableComponent[]` - Array of currently registered interactable components.
### addInteractableComponent
`(component: Omit<TamboInteractableComponent, "id">) => string`
Registers a new interactable component. Returns the generated component ID.
### removeInteractableComponent
`(id: string) => void`
Removes an interactable component by ID.
### updateInteractableComponentProps
`(id: string, newProps: Record<string, unknown>) => string`
Updates the props of an interactable component. Returns a status message.
### getInteractableComponent
`(id: string) => TamboInteractableComponent | undefined`
Gets a specific interactable component by ID.
### getInteractableComponentsByName
`(componentName: string) => TamboInteractableComponent[]`
Gets all interactable components with the given name.
### clearAllInteractableComponents
`() => void`
Removes all interactable components.
### setInteractableState
`(componentId: string, key: string, value: unknown) => void`
Sets a specific state value for an interactable component.
### getInteractableComponentState
`(componentId: string) => Record<string, unknown> | undefined`
Gets the current state of an interactable component.
### setInteractableSelected
`(componentId: string, isSelected: boolean) => void`
Sets the selected state of an interactable component.
### clearInteractableSelections
`() => void`
Clears all component selections.
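A rough usage sketch; the component fields shown are illustrative, see `TamboInteractableComponent` for the exact shape:
```tsx
const { addInteractableComponent, updateInteractableComponentProps } =
  useTamboInteractable();

// Register a pre-placed Note, then update it by the returned ID
const id = addInteractableComponent({
  componentName: "Note",
  props: { title: "Groceries", content: "" },
});
updateInteractableComponentProps(id, { content: "Milk, eggs" });
```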
## useMessageImages
Hook for managing images in message input.
```tsx
const { images, addImage, addImages, removeImage, clearImages } =
useMessageImages();
```
### images
`StagedImage[]` - Array of staged images ready to be sent with a message.
### addImage
`(file: File) => Promise<void>`
Add a single image file. Throws if the file is not an image.
### addImages
`(files: File[]) => Promise<void>`
Add multiple image files. Only valid image files will be added.
### removeImage
`(id: string) => void`
Remove a staged image by ID.
### clearImages
`() => void`
Remove all staged images.
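A drop-handler sketch using these functions:
```tsx
const { addImage } = useMessageImages();

// Stage a dropped file; addImage throws for non-image files
async function onDrop(file: File) {
  try {
    await addImage(file);
  } catch {
    // ignore non-image files
  }
}
```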
# React SDK Reference
URL: /reference/react-sdk
The `@tambo-ai/react` package is Tambo's official React SDK for building AI-powered generative UI applications. This reference documents all public APIs including hooks, types, utilities, and provider components.
## Installation
```bash
npm install @tambo-ai/react
```
## Quick Links
* [Hooks](/reference/react-sdk/hooks) - React hooks for thread management, component state, streaming, and more
* [Types](/reference/react-sdk/types) - TypeScript interfaces and types for type-safe development
* [Utilities](/reference/react-sdk/utilities) - Helper functions like `defineTool()` and `withInteractable()`
* [Providers](/reference/react-sdk/providers) - Provider components for configuring Tambo in your app
* [MCP](/reference/react-sdk/mcp) - Model Context Protocol hooks and types
## Overview
The SDK is organized around a provider hierarchy that manages AI state and configuration:
```tsx
import { TamboProvider } from "@tambo-ai/react";

// The apiKey prop is shown as a sketch; configure per your setup
function App() {
  return (
    <TamboProvider apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}>
      {/* your app */}
    </TamboProvider>
  );
}
```
Inside the provider, use hooks to access Tambo functionality:
```tsx
import {
useTambo,
useTamboThread,
useTamboStreamStatus,
} from "@tambo-ai/react";
function Chat() {
const { sendThreadMessage, thread } = useTamboThread();
const { streamStatus } = useTamboStreamStatus();
// Build your UI
}
```
## MCP Support
For Model Context Protocol integrations, import from the `/mcp` subpath:
```tsx
import { TamboMcpProvider, useTamboMcpServers } from "@tambo-ai/react/mcp";
```
MCP requires additional peer dependencies. See the [MCP reference](/reference/react-sdk/mcp) for setup instructions.
# MCP Reference
URL: /reference/react-sdk/mcp
The `@tambo-ai/react/mcp` subpath provides hooks and types for Model Context Protocol (MCP) integrations. MCP enables AI models to interact with external tools, resources, and prompts through a standardized protocol.
## Installation
MCP functionality requires additional peer dependencies:
```bash
npm install @modelcontextprotocol/sdk zod zod-to-json-schema
```
Then import from the `/mcp` subpath:
```tsx
import { TamboMcpProvider, useTamboMcpServers } from "@tambo-ai/react/mcp";
```
## Provider Setup
MCP servers are configured on `TamboProvider` and connected via `TamboMcpProvider`:
```tsx
import { TamboProvider } from "@tambo-ai/react";
import { TamboMcpProvider } from "@tambo-ai/react/mcp";

// The server entry is a sketch; see McpServerInfo below for the full shape
function App() {
  return (
    <TamboProvider mcpServers={[{ name: "example", url: "https://example.com/mcp" }]}>
      <TamboMcpProvider>{/* your app */}</TamboMcpProvider>
    </TamboProvider>
  );
}
```
## Hooks
### useTamboMcpServers
Access connected MCP servers and their clients.
```tsx
const mcpServers = useTamboMcpServers();
```
Returns `McpServer[]` - an array of connected (or failed) MCP servers.
```tsx
function McpStatus() {
  const servers = useTamboMcpServers();
  // "client" in s distinguishes connected servers from failed ones
  return (
    <ul>
      {servers.map((s) => (
        <li key={s.key}>{s.name}: {"client" in s ? "connected" : "failed"}</li>
      ))}
    </ul>
  );
}
```
### useTamboMcpResource
Fetch a specific resource by URI.
```tsx
const {
data, // ReadResourceResult | null
isPending,
isSuccess,
isError,
error,
} = useTamboMcpResource(resourceUri?: string);
```
| Parameter | Type | Description |
| ------------- | -------- | --------------------------------------------------- |
| `resourceUri` | `string` | Prefixed resource URI (e.g., `"linear:file://foo"`) |
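A minimal sketch (the URI is illustrative):
```tsx
const { data, isPending, isError, error } = useTamboMcpResource("linear:file://foo");

if (isPending) return <p>Loading resource…</p>;
if (isError) return <p>{error?.message}</p>;
// `data` is a ReadResourceResult when the fetch succeeds
```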
### useTamboMcpPromptList
Get prompts from all connected MCP servers.
```tsx
const {
data, // ListPromptEntry[]
isPending,
isSuccess,
isError,
error,
refetch,
} = useTamboMcpPromptList(search?: string);
```
| Parameter | Type | Description |
| --------- | -------- | ---------------------------------------- |
| `search` | `string` | Optional search string to filter prompts |
### useTamboMcpPrompt
Fetch a specific prompt by name with arguments.
```tsx
const {
data, // GetPromptResult | null
isPending,
isSuccess,
isError,
error,
} = useTamboMcpPrompt(promptName?: string, args?: Record<string, string>);
```
| Parameter | Type | Description |
| ------------ | ------------------------ | -------------------------------------------- |
| `promptName` | `string` | Prompt name (can be prefixed with serverKey) |
| `args`       | `Record<string, string>` | Arguments to pass to the prompt               |
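A sketch chaining the two hooks; the `name` field on the prompt entry and the argument shape are assumptions:
```tsx
// Search prompts, then fetch the first match with arguments
const { data: prompts } = useTamboMcpPromptList("summarize");
const { data: prompt } = useTamboMcpPrompt(prompts?.[0]?.prompt.name, {
  topic: "weekly report",
});
```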
### useTamboMcpElicitation
Access MCP elicitation state for handling user input requests from servers.
```tsx
const { elicitation, resolveElicitation } = useTamboMcpElicitation();
```
Returns:
* `elicitation` - Current elicitation request (or `null`)
* `resolveElicitation` - Function to respond to the elicitation
```tsx
function ElicitationHandler() {
  const { elicitation, resolveElicitation } = useTamboMcpElicitation();
  if (!elicitation) return null;
  // Markup illustrative; respond with an accept or decline action
  return (
    <div>
      <p>{elicitation.message}</p>
      <button onClick={() => resolveElicitation({ action: "accept" })}>Accept</button>
      <button onClick={() => resolveElicitation({ action: "decline" })}>Decline</button>
    </div>
  );
}
```
### useTamboElicitationContext
**Deprecated**: Use `useTamboMcpElicitation` instead.
## Types
### McpServer
Union type for connected or failed MCP servers.
```typescript
type McpServer = ConnectedMcpServer | FailedMcpServer;
```
### ConnectedMcpServer
A successfully connected MCP server with an active client.
```typescript
interface ConnectedMcpServer {
  name: string;
  url: string;
  transport: MCPTransport;
  serverKey: string;
  customHeaders?: Record<string, string>;
  handlers?: Partial<MCPHandlers>;
  key: string; // Stable identity key
  serverType: ServerType;
  client: MCPClient; // Active MCP client
}
```
### FailedMcpServer
An MCP server that failed to connect.
```typescript
interface FailedMcpServer {
  name: string;
  url: string;
  transport: MCPTransport;
  serverKey: string;
  customHeaders?: Record<string, string>;
  handlers?: Partial<MCPHandlers>;
  key: string;
  serverType: ServerType;
  connectionError: Error; // The connection error
}
```
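Only `FailedMcpServer` carries `connectionError`, so an `in` check narrows the union:

```typescript
function describeServer(server: McpServer): string {
  if ("connectionError" in server) {
    // Narrowed to FailedMcpServer
    return `${server.name} failed: ${server.connectionError.message}`;
  }
  // Narrowed to ConnectedMcpServer; server.client is available here
  return `${server.name} connected via ${server.transport}`;
}
```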
### MCPTransport
Transport protocol for MCP connections.
```typescript
enum MCPTransport {
  HTTP = "http",
  SSE = "sse",
  STDIO = "stdio",
}
```
### MCPHandlers
Handlers for MCP server requests.
```typescript
interface MCPHandlers {
  elicitation: MCPElicitationHandler;
  sampling: MCPSamplingHandler;
}
```
### ProviderMCPHandlers
Provider-level handlers that receive server info as context.
```typescript
interface ProviderMCPHandlers {
  elicitation?: (
    request: ElicitationRequest,
    extra: RequestHandlerExtra,
    serverInfo: McpServerConfig,
  ) => Promise<ElicitResult>; // return type assumed from the MCP SDK
  sampling?: (
    request: SamplingRequest,
    extra: RequestHandlerExtra,
    serverInfo: McpServerConfig,
  ) => Promise<CreateMessageResult>; // return type assumed from the MCP SDK
}
```
### ListResourceEntry
A resource entry from `useTamboMcpResourceList`.
```typescript
type ListResourceEntry = RegistryResourceEntry | McpResourceEntry;

interface RegistryResourceEntry {
  server: null; // Local registry resource
  resource: ListResourceItem;
}

interface McpResourceEntry {
  server: ConnectedMcpServer;
  resource: ListResourceItem;
}
```
### ListPromptEntry
A prompt entry from `useTamboMcpPromptList`.
```typescript
interface ListPromptEntry {
  server: ConnectedMcpServer;
  prompt: ListPromptItem;
}
```
### TamboElicitationRequest
An elicitation request from an MCP server.
```typescript
interface TamboElicitationRequest {
  message: string;
  requestedSchema?: ElicitationRequestedSchema;
}
```
### TamboElicitationResponse
Response to an elicitation request.
```typescript
interface TamboElicitationResponse {
  action: "accept" | "decline";
  content?: Record<string, unknown>;
}
```
### McpServerInfo
Configuration for an MCP server (passed to `TamboProvider`).
```typescript
interface McpServerInfo {
  name: string;
  url: string;
  transport?: MCPTransport | "http" | "sse" | "stdio";
  serverKey?: string;
  customHeaders?: Record<string, string>;
  handlers?: Partial<MCPHandlers>;
}
```
### NormalizedMcpServerInfo
Normalized server info with required fields.
```typescript
interface NormalizedMcpServerInfo {
  name: string;
  url: string;
  transport: MCPTransport; // Always defined
  serverKey: string; // Always defined
  customHeaders?: Record<string, string>;
  handlers?: unknown;
}
```
## Type Guards
### isMcpResourceEntry
Type guard to narrow a `ListResourceEntry` to an MCP-backed resource.
```typescript
import { isMcpResourceEntry } from "@tambo-ai/react/mcp";

const { data: resources } = useTamboMcpResourceList();

resources.forEach((entry) => {
  if (isMcpResourceEntry(entry)) {
    // entry.server is ConnectedMcpServer
    console.log(`MCP resource from ${entry.server.name}`);
  } else {
    // entry.server is null (local registry)
    console.log("Local registry resource");
  }
});
```
# Provider Components
URL: /reference/react-sdk/providers
Provider components configure Tambo functionality and make hooks available throughout your component tree.
## TamboProvider
The main provider that wraps your application and provides access to the full Tambo API. This is the primary way to integrate Tambo into your React app.
```tsx
import { TamboProvider } from "@tambo-ai/react";
function App() {
  return (
    <TamboProvider apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!} components={components}>
      {/* your app */}
    </TamboProvider>
  );
}
```
### Props
| Prop | Type | Required | Description |
| --------------------------- | ----------------------------- | -------- | ------------------------------------------------ |
| `apiKey` | `string` | Yes\* | Your Tambo API key |
| `userToken` | `string` | No | OAuth token for user authentication |
| `tamboUrl` | `string` | No | Custom Tambo API URL |
| `environment` | `string` | No | Environment name |
| `components` | `TamboComponent[]` | No | Components to register |
| `tools` | `TamboTool[]` | No | Tools to register |
| `mcpServers` | `McpServerInfo[]` | No | MCP servers to connect |
| `contextHelpers` | `ContextHelpers` | No | Context helper functions |
| `contextKey` | `string` | No | Key for thread scoping |
| `streaming` | `boolean` | No | Enable streaming (default: `true`) |
| `autoGenerateThreadName` | `boolean` | No | Auto-generate thread names (default: `true`) |
| `autoGenerateNameThreshold` | `number` | No | Message count for name generation (default: `3`) |
| `initialMessages` | `InitialTamboThreadMessage[]` | No | Initial messages for new threads |
| `onCallUnregisteredTool` | `function` | No | Callback for unregistered tool calls |
| `resources` | `ListResourceItem[]` | No | Static resources for MCP |
| `listResources` | `function` | No | Dynamic resource listing |
| `getResource` | `function` | No | Resource content resolver |
\*Either `apiKey` or `userToken` is required for authentication.
### Example with All Options
```tsx
import {
  TamboProvider,
  currentPageContextHelper,
  type TamboComponent,
  type TamboTool,
} from "@tambo-ai/react";
import { z } from "zod";
const components: TamboComponent[] = [
  {
    name: "WeatherCard",
    description: "Displays weather information",
    component: WeatherCard,
    propsSchema: z.object({
      city: z.string(),
      temperature: z.number(),
    }),
  },
];

const tools: TamboTool[] = [
  {
    name: "get_weather",
    description: "Fetch weather for a city",
    tool: async ({ city }) => fetchWeather(city),
    inputSchema: z.object({ city: z.string() }),
    outputSchema: z.object({ temperature: z.number() }),
  },
];
<TamboProvider
  apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
  components={components}
  tools={tools}
  contextHelpers={{ currentPage: currentPageContextHelper }}
>
  {/* your app */}
</TamboProvider>;
```
### Provider Hierarchy
`TamboProvider` internally nests several sub-providers in this order:
1. `TamboClientProvider` - API client and authentication
2. `TamboRegistryProvider` - Component and tool registration
3. `TamboContextHelpersProvider` - Context helper management
4. `TamboThreadProvider` - Thread state and messaging
5. `TamboMcpTokenProvider` - MCP token management
6. `TamboMcpProvider` - MCP server connections
7. `TamboContextAttachmentProvider` - Context attachments
8. `TamboComponentProvider` - Component lifecycle
9. `TamboInteractableProvider` - Interactable components
10. `TamboThreadInputProvider` - Input handling
For advanced use cases, you can use individual providers directly instead of `TamboProvider`.
## TamboStubProvider
A stub provider for testing and development that doesn't require API connectivity.
```tsx
import { TamboStubProvider } from "@tambo-ai/react";
function TestApp() {
  return (
    <TamboStubProvider stubResponses={stubResponses} components={components}>
      {/* app under test */}
    </TamboStubProvider>
  );
}
```
### Props
| Prop | Type | Description |
| ---------------- | ---------------------- | ------------------------ |
| `stubResponses` | `TamboThreadMessage[]` | Pre-configured responses |
| `components` | `TamboComponent[]` | Components to register |
| `tools` | `TamboTool[]` | Tools to register |
| `contextHelpers` | `ContextHelpers` | Context helpers |
## TamboMcpProvider
Provider for Model Context Protocol (MCP) server connections. Required when using MCP features.
```tsx
import { TamboMcpProvider } from "@tambo-ai/react/mcp";
// Inside TamboProvider
<TamboMcpProvider
  handlers={{
    elicitation: async (request) => {
      // Handle elicitation requests
      return { action: "accept", content: {} };
    },
  }}
  contextKey="my-context"
>
  {children}
</TamboMcpProvider>;
```
### Props
| Prop | Type | Description |
| ------------ | --------------------- | ------------------------------------- |
| `handlers` | `ProviderMCPHandlers` | Optional handlers for all MCP servers |
| `contextKey` | `string` | Context key for threadless MCP tokens |
| `children` | `ReactNode` | Child components |
**Note**: MCP servers are configured on `TamboProvider` via the `mcpServers` prop. `TamboMcpProvider` manages the connections and provides hooks to interact with them.
### MCP Server Configuration
Configure MCP servers on `TamboProvider`:
```tsx
<TamboProvider
  apiKey={apiKey}
  mcpServers={[
    // Placeholder server entry; see McpServerInfo for all fields
    { name: "linear", url: "https://example.com/mcp", transport: "http" },
  ]}
>
  <TamboMcpProvider>{children}</TamboMcpProvider>
</TamboProvider>
```
See the [MCP reference](/reference/react-sdk/mcp) for hooks and types related to MCP functionality.
## Individual Providers
For advanced use cases, you can use individual providers directly:
### TamboClientProvider
Provides the API client and authentication context.
```tsx
import { TamboClientProvider, useTamboClient } from "@tambo-ai/react";
<TamboClientProvider apiKey={apiKey}>{children}</TamboClientProvider>;
```
### TamboRegistryProvider
Manages component and tool registration.
```tsx
import { TamboRegistryProvider, useTamboRegistry } from "@tambo-ai/react";
<TamboRegistryProvider components={components} tools={tools}>{children}</TamboRegistryProvider>;
```
### TamboThreadProvider
Manages thread state and message sending.
```tsx
import { TamboThreadProvider, useTamboThread } from "@tambo-ai/react";
<TamboThreadProvider>{children}</TamboThreadProvider>;
```
### TamboContextHelpersProvider
Manages context helpers that provide additional information to the AI.
```tsx
import {
  TamboContextHelpersProvider,
  useTamboContextHelpers,
} from "@tambo-ai/react";

<TamboContextHelpersProvider
  contextHelpers={{
    currentPage: () => ({ url: window.location.href }),
  }}
>
  {children}
</TamboContextHelpersProvider>;
```
### TamboInteractableProvider
Manages interactable component registration and state.
```tsx
import {
  TamboInteractableProvider,
  useTamboInteractable,
} from "@tambo-ai/react";

<TamboInteractableProvider>{children}</TamboInteractableProvider>;
```
# TypeScript Types
URL: /reference/react-sdk/types
The `@tambo-ai/react` package exports TypeScript types and interfaces to help you build type-safe AI applications.
## TamboTool
The `TamboTool` interface defines the structure for registering tools with Tambo.
```typescript
interface TamboTool {
  name: string;
  description: string;
  tool: (params: Record<string, unknown>) => unknown;
  inputSchema: z.ZodObject<any> | JSONSchema7;
  outputSchema: z.ZodTypeAny | JSONSchema7;
  transformToContent?: (
    result: any,
  ) => Promise<ChatCompletionContentPart[]> | ChatCompletionContentPart[];
  maxCalls?: number;
}
```
### Properties
#### name
The unique identifier for the tool. This is how Tambo references the tool internally.
```typescript
name: string;
```
#### description
A clear description of what the tool does. This helps the AI understand when to use the tool.
```typescript
description: string;
```
#### tool
The function that implements the tool's logic. Receives a single object with named parameters.
```typescript
tool: (params: Record<string, unknown>) => unknown;
```
#### inputSchema
A Zod schema that defines the tool's input parameters. Can also be a JSON Schema object. Use `z.object({})` (or an equivalent empty object schema) for no-parameter tools.
```typescript
inputSchema: z.ZodObject<any> | JSONSchema7;
```
**Example:**
```typescript
inputSchema: z.object({
  city: z.string().describe("The city name"),
  units: z.enum(["celsius", "fahrenheit"]).optional(),
});
```
#### outputSchema
A Zod schema that defines the tool's return type. Can also be a JSON Schema object.
```typescript
outputSchema: z.ZodTypeAny | JSONSchema7;
```
**Example:**
```typescript
outputSchema: z.object({
  temperature: z.number(),
  condition: z.string(),
});
```
#### transformToContent (optional)
A function that transforms the tool's return value into an array of content parts. Use this when your tool needs to return rich content like images or audio.
```typescript
transformToContent?: (result: any) => Promise<ChatCompletionContentPart[]> | ChatCompletionContentPart[];
```
By default, tool responses are stringified and wrapped in a text content part. The `transformToContent` function allows you to return rich content including images, audio, or mixed media.
**Example:**
```typescript
transformToContent: (result) => [
  { type: "text", text: result.description },
  { type: "image_url", image_url: { url: result.imageUrl } },
];
```
[Learn more about returning rich content from tools](/guides/take-actions/register-tools#return-rich-content-optional).
## TamboComponent
The `TamboComponent` interface defines the structure for registering React components with Tambo.
```typescript
interface TamboComponent {
  name: string;
  description: string;
  component: ComponentType;
  propsSchema?: z.ZodTypeAny | JSONSchema7;
  propsDefinition?: any;
  loadingComponent?: ComponentType;
  associatedTools?: TamboTool[];
}
```
### Properties
#### name
The unique identifier for the component.
```typescript
name: string;
```
#### description
A clear description of what the component displays or does. This helps the AI understand when to use the component.
```typescript
description: string;
```
#### component
The React component to render.
```typescript
component: ComponentType;
```
#### propsSchema (recommended)
A Zod schema that defines the component's props.
```typescript
propsSchema?: z.ZodTypeAny | JSONSchema7;
```
#### propsDefinition (deprecated)
A JSON object defining the component's props. Use `propsSchema` instead.
```typescript
propsDefinition?: any;
```
#### loadingComponent (optional)
A component to display while the main component is loading.
```typescript
loadingComponent?: ComponentType;
```
#### associatedTools (optional)
An array of tools that are associated with this component.
```typescript
associatedTools?: TamboTool[];
```
## ChatCompletionContentPart
Content parts that can be sent to or received from the AI.
```typescript
interface ChatCompletionContentPart {
  type: "text" | "image_url" | "input_audio";
  text?: string;
  image_url?: { url: string; detail?: "auto" | "high" | "low" };
  input_audio?: { data: string; format: "wav" | "mp3" };
}
```
This type is used in the `transformToContent` function to define rich content responses.
## TamboThreadMessage
A message in a Tambo thread.
```typescript
interface TamboThreadMessage {
  id: string;
  role: "user" | "assistant" | "system" | "tool";
  content: ChatCompletionContentPart[];
  createdAt: string;
  renderedComponent?: React.ReactNode;
  component?: {
    componentName: string;
    props: any;
  };
  actionType?: string;
  error?: string;
}
```
## TamboThread
A Tambo conversation thread.
```typescript
interface TamboThread {
  id: string;
  messages: TamboThreadMessage[];
  contextKey?: string;
  createdAt: string;
  updatedAt: string;
}
```
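As an illustration, a message list can walk `thread.messages`, rendering text content parts and any `renderedComponent` (a sketch based on the types above):

```tsx
function MessageList({ thread }: { thread: TamboThread }) {
  return (
    <div>
      {thread.messages.map((message) => (
        <div key={message.id}>
          {message.content.map(
            (part, i) => part.type === "text" && <p key={i}>{part.text}</p>,
          )}
          {message.renderedComponent}
        </div>
      ))}
    </div>
  );
}
```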
## GenerationStage
The current stage of AI response generation.
```typescript
enum GenerationStage {
  IDLE = "IDLE",
  CHOOSING_COMPONENT = "CHOOSING_COMPONENT",
  FETCHING_CONTEXT = "FETCHING_CONTEXT",
  HYDRATING_COMPONENT = "HYDRATING_COMPONENT",
  STREAMING_RESPONSE = "STREAMING_RESPONSE",
  COMPLETE = "COMPLETE",
  ERROR = "ERROR",
}
```
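For instance, a status indicator can branch on the stage (a minimal sketch; how you obtain the current stage depends on the hook you use):

```tsx
function GenerationIndicator({ stage }: { stage: GenerationStage }) {
  if (stage === GenerationStage.IDLE || stage === GenerationStage.COMPLETE) {
    return null;
  }
  if (stage === GenerationStage.ERROR) return <p>Generation failed.</p>;
  // CHOOSING_COMPONENT, FETCHING_CONTEXT, HYDRATING_COMPONENT, STREAMING_RESPONSE
  return <p>Working: {stage.toLowerCase().replaceAll("_", " ")}</p>;
}
```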
## ContextAttachment
Represents a context attachment that will be sent with the next user message.
```typescript
interface ContextAttachment {
  id: string; // Auto-generated unique identifier
  displayName?: string; // Optional display name for UI rendering
  context: string; // The context value that will be used in additionalContext
  type?: string; // Optional type identifier for grouping/rendering
}
```
### Properties
#### id
Unique identifier for the attachment. Auto-generated when adding a context attachment.
```typescript
id: string;
```
#### displayName
Display name for UI rendering.
```typescript
displayName?: string;
```
#### context
The context value that will be used in `additionalContext` when the next message is sent.
```typescript
context: string;
```
#### type
Optional type identifier for grouping/rendering multiple contexts of the same type.
```typescript
type?: string;
```
## ContextAttachmentState
The state interface returned by the `useTamboContextAttachment` hook.
```typescript
interface ContextAttachmentState {
  attachments: ContextAttachment[];
  addContextAttachment: (
    contextAttachment: Omit<ContextAttachment, "id">,
  ) => ContextAttachment;
  removeContextAttachment: (id: string) => void;
  clearContextAttachments: () => void;
}
```
### Properties
#### attachments
Array of active context attachments that will be included in `additionalContext` when the next message is sent.
```typescript
attachments: ContextAttachment[];
```
#### addContextAttachment
Function to add a new context attachment. The `id` is automatically generated. All attachments are automatically registered together as a single merged context helper (key: `contextAttachments`) that returns an array of all active attachments.
```typescript
addContextAttachment: (contextAttachment: Omit<ContextAttachment, "id">) =>
  ContextAttachment;
```
#### removeContextAttachment
Function to remove a specific context attachment by its ID. The context helper automatically updates to reflect the change.
```typescript
removeContextAttachment: (id: string) => void;
```
#### clearContextAttachments
Function to remove all active context attachments. The context helper automatically updates to reflect the change. Context attachments are automatically cleared after message submission (one-time use), so you typically don't need to call this manually.
```typescript
clearContextAttachments: () => void;
```
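Putting these together, a sketch of staging a one-time context attachment (the component and `selection` type are illustrative):

```tsx
import { useTamboContextAttachment } from "@tambo-ai/react";

function AttachSelection({ selectedText }: { selectedText: string }) {
  const { attachments, addContextAttachment, removeContextAttachment } =
    useTamboContextAttachment();

  return (
    <div>
      <button
        onClick={() =>
          addContextAttachment({
            displayName: "Selected text",
            context: selectedText,
            type: "selection",
          })
        }
      >
        Attach selection
      </button>
      {attachments.map((a) => (
        <button key={a.id} onClick={() => removeContextAttachment(a.id)}>
          Remove {a.displayName}
        </button>
      ))}
    </div>
  );
}
```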
## StreamStatus
Global stream status flags for a component during streaming. Returned by `useTamboStreamStatus`.
```typescript
interface StreamStatus {
  isPending: boolean; // No tokens received yet, generation not active
  isStreaming: boolean; // Active streaming - generation or props still streaming
  isSuccess: boolean; // Complete - all props finished without error
  isError: boolean; // Fatal error occurred
  streamError?: Error; // First error encountered (if any)
}
```
## PropStatus
Streaming status flags for individual component props. Returned by `useTamboStreamStatus`.
```typescript
interface PropStatus {
  isPending: boolean; // No tokens received for this prop yet
  isStreaming: boolean; // Prop has partial content, still updating
  isSuccess: boolean; // Prop finished streaming successfully
  error?: Error; // Error during streaming (if any)
}
```
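For example, a component can use these flags to defer rendering until streaming settles (a minimal sketch using the documented `streamStatus` return):

```tsx
import { useTamboStreamStatus } from "@tambo-ai/react";

function Summary({ text }: { text: string }) {
  const { streamStatus } = useTamboStreamStatus();

  if (streamStatus.isPending) return <p>Waiting for the first tokens...</p>;
  if (streamStatus.isError) return <p>Error: {streamStatus.streamError?.message}</p>;

  // Append an ellipsis while props are still streaming in
  return (
    <p>
      {text}
      {streamStatus.isStreaming && "..."}
    </p>
  );
}
```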
## TamboInteractableComponent
Represents a component instance that can be interacted with by Tambo. Extends `TamboComponent`.
```typescript
interface TamboInteractableComponent<
  Props = Record<string, unknown>,
  State = Record<string, unknown>,
> extends TamboComponent {
  id: string; // Unique identifier for this instance
  props: Props; // Current props
  isSelected?: boolean; // Whether selected for interaction
  state?: State; // Current component state
  stateSchema?: SupportedSchema; // Optional state validation schema
}
```
## InteractableConfig
Configuration for the `withInteractable` HOC.
```typescript
interface InteractableConfig<
  Props = Record<string, unknown>,
  State = Record<string, unknown>,
> {
  componentName: string; // Name used for identification
  description: string; // Description for LLM understanding
  propsSchema?: SupportedSchema; // Optional props validation
  stateSchema?: SupportedSchema; // Optional state validation
}
```
## WithTamboInteractableProps
Props injected by `withInteractable` HOC.
```typescript
interface WithTamboInteractableProps {
  interactableId?: string; // Optional custom ID
  onInteractableReady?: (id: string) => void; // Called when registered
  onPropsUpdate?: (newProps: Record<string, unknown>) => void; // Called on prop updates
}
```
## InteractableMetadata
Metadata about an interactable component.
```typescript
interface InteractableMetadata {
  id: string;
  componentName: string;
  description: string;
}
```
## ToolAnnotations
Annotations describing a tool's behavior, aligned with the MCP specification.
```typescript
type ToolAnnotations = MCPToolAnnotations & {
  tamboStreamableHint?: boolean; // Safe to call repeatedly during streaming
};
```
The `tamboStreamableHint` property indicates that the tool is safe to be called repeatedly while a response is being streamed. This is typically used for read-only tools that do not cause side effects.
## SupportedSchema
A schema type that accepts either a Standard Schema compliant validator or a raw JSON Schema object.
```typescript
type SupportedSchema =
  | StandardSchemaV1
  | JSONSchema7;
```
Standard Schema is a specification that provides a unified interface for TypeScript validation libraries. Libraries like Zod, Valibot, and ArkType implement this spec.
## StagedImage
Represents an image staged for upload in message input.
```typescript
interface StagedImage {
  id: string; // Unique identifier
  name: string; // File name
  dataUrl: string; // Base64 data URL
  file: File; // Original File object
  size: number; // File size in bytes
  type: string; // MIME type
}
```
## AdditionalContext
Interface for additional context that can be added to messages.
```typescript
interface AdditionalContext {
  name: string; // Name of the context type
  context: unknown; // The context data
}
```
## ContextHelperFn
A function that returns context data to include in messages.
```typescript
type ContextHelperFn = () =>
  | unknown
  | null
  | undefined
  | Promise<unknown | null | undefined>;
```
Return `null` or `undefined` to skip including the context.
## ContextHelpers
A collection of context helpers keyed by their context name.
```typescript
type ContextHelpers = Record<string, ContextHelperFn>;
```
The key becomes the `AdditionalContext.name` sent to the model.
# Utility Functions
URL: /reference/react-sdk/utilities
The `@tambo-ai/react` package exports utility functions for common tasks like defining tools with full type inference and making components interactable.
## defineTool
Type-safe helper for defining Tambo tools. Provides full type inference from your schema definitions.
```tsx
import { defineTool } from "@tambo-ai/react";
import { z } from "zod";
const weatherTool = defineTool({
  name: "get_weather",
  description: "Get current weather for a location",
  tool: async ({ location }) => {
    const response = await fetch(`/api/weather?location=${location}`);
    return response.json();
  },
  inputSchema: z.object({
    location: z.string().describe("City name or zip code"),
  }),
  outputSchema: z.object({
    temperature: z.number(),
    condition: z.string(),
  }),
});
```
### Parameters
The `defineTool` function accepts a tool definition object:
| Property | Type | Required | Description |
| -------------------- | ----------------- | -------- | ---------------------------------------------- |
| `name` | `string` | Yes | Unique identifier for the tool |
| `description` | `string` | Yes | Description of what the tool does (used by AI) |
| `tool` | `function` | Yes | The function implementing the tool logic |
| `inputSchema` | `SupportedSchema` | Yes | Schema for input parameters |
| `outputSchema` | `SupportedSchema` | Yes | Schema for return value |
| `title` | `string` | No | Human-readable display name |
| `maxCalls` | `number` | No | Maximum calls per response |
| `annotations` | `ToolAnnotations` | No | Behavior hints (e.g., `tamboStreamableHint`) |
| `transformToContent` | `function` | No | Transform result to content parts |
### Schema Support
Tambo uses the [Standard Schema](https://standard-schema.dev) specification, so you can use any compliant validator:
```tsx
// Zod
import { z } from "zod";
inputSchema: z.object({ query: z.string() });
// Valibot
import * as v from "valibot";
inputSchema: v.object({ query: v.string() });
// ArkType
import { type } from "arktype";
inputSchema: type({ query: "string" });
```
### Streaming-Safe Tools
For tools that are safe to call repeatedly during streaming (typically read-only tools), use the `tamboStreamableHint` annotation:
```tsx
const searchTool = defineTool({
  name: "search",
  description: "Search for items",
  annotations: {
    tamboStreamableHint: true, // Safe for streaming
  },
  tool: async ({ query }) => searchDatabase(query),
  inputSchema: z.object({ query: z.string() }),
  outputSchema: z.array(z.object({ id: z.string(), title: z.string() })),
});
```
## withInteractable
Higher-Order Component that makes any component interactable by Tambo. Interactable components can have their props and state modified by the AI during a conversation.
```tsx
import { withInteractable } from "@tambo-ai/react";
import { z } from "zod";
const Note = ({ title, content }: { title: string; content: string }) => (
  <div>
    <h3>{title}</h3>
    <p>{content}</p>
  </div>
);
const InteractableNote = withInteractable(Note, {
  componentName: "Note",
  description: "A note component that can be edited by the AI",
  propsSchema: z.object({
    title: z.string(),
    content: z.string(),
  }),
});
// Usage
<InteractableNote title="Shopping list" content="Milk, eggs" />;
```
### Parameters
```tsx
withInteractable(WrappedComponent, config);
```
| Parameter | Type | Description |
| ------------------ | --------------------- | ---------------------------------- |
| `WrappedComponent` | `React.ComponentType` | The component to make interactable |
| `config` | `InteractableConfig` | Configuration for the interactable |
### InteractableConfig
| Property | Type | Required | Description |
| --------------- | ----------------- | -------- | ----------------------------------- |
| `componentName` | `string` | Yes | Unique name for identification |
| `description` | `string` | Yes | Description for AI understanding |
| `propsSchema` | `SupportedSchema` | No | Schema for validating prop updates |
| `stateSchema` | `SupportedSchema` | No | Schema for validating state updates |
### Injected Props
The wrapped component receives additional props:
| Prop | Type | Description |
| --------------------- | --------------------------------------------- | ------------------------------------ |
| `interactableId` | `string` | Optional custom ID for this instance |
| `onInteractableReady` | `(id: string) => void` | Called when component is registered |
| `onPropsUpdate` | `(newProps: Record<string, unknown>) => void` | Called when AI updates props |
### Example with State
```tsx
import { withInteractable, useTamboComponentState } from "@tambo-ai/react";
import { z } from "zod";
const Task = ({ title, description }: { title: string; description: string }) => {
  const [isComplete, setIsComplete] = useTamboComponentState("isComplete", false);
  return (
    <div>
      <h3>{title}</h3>
      <p>{description}</p>
      <input
        type="checkbox"
        checked={isComplete}
        onChange={(e) => setIsComplete(e.target.checked)}
      />
    </div>
  );
};
const InteractableTask = withInteractable(Task, {
  componentName: "Task",
  description: "A task that can be completed or edited",
  propsSchema: z.object({
    title: z.string(),
    description: z.string(),
  }),
  stateSchema: z.object({
    isComplete: z.boolean(),
  }),
});
```
## Built-in Context Helpers
Tambo provides pre-built context helpers that automatically provide useful information to the AI.
### currentPageContextHelper
Provides information about the user's current page.
```tsx
import { currentPageContextHelper } from "@tambo-ai/react";
// Returns: { url: "https://...", title: "Page Title" }
```
### currentTimeContextHelper
Provides the current timestamp.
```tsx
import { currentTimeContextHelper } from "@tambo-ai/react";
// Returns: { timestamp: "Wed Jan 22 2025 10:30:00 GMT-0800" }
```
### Using Context Helpers
Context helpers are configured on the `TamboProvider`:
```tsx
import {
  TamboProvider,
  currentPageContextHelper,
  currentTimeContextHelper,
} from "@tambo-ai/react";

<TamboProvider
  apiKey={apiKey}
  contextHelpers={{
    currentPage: currentPageContextHelper,
    currentTime: currentTimeContextHelper,
    // Custom helper (name chosen for illustration)
    userPreferences: async () => ({
      theme: "dark",
      language: "en",
    }),
  }}
>
  {children}
</TamboProvider>;
```
### Dynamic Context Helpers
You can add/remove context helpers dynamically using `useTamboContextHelpers`:
```tsx
import { useEffect } from "react";
import { useTamboContextHelpers } from "@tambo-ai/react";

function MyComponent() {
  const { addContextHelper, removeContextHelper } = useTamboContextHelpers();
  useEffect(() => {
    addContextHelper("selection", () => ({
      selectedItems: getSelectedItems(),
    }));
    return () => removeContextHelper("selection");
  }, []);
}
```
Context helper return values:
* Return a value to include it in the context
* Return `null` or `undefined` to skip
* Can be async (return a Promise)
# add
URL: /reference/cli/commands/add
`npx tambo add <component-name>`
Adds a component hooked up to Tambo to your app.
This command installs the component file directly into the `/components/tambo` directory of your app so you can easily customize its behavior and styles. It also installs each component's dependencies and updates your global styles.
**Available Components:**
* `message-thread-full` - Full-screen chat interface with history and typing indicators
* `message-thread-panel` - Split-view chat with integrated workspace
* `message-thread-collapsible` - Collapsible chat for sidebars
* `control-bar` - Spotlight-style command palette
* `form` - AI-powered form components
* `graph` - Interactive graph visualization
* `canvas-space` - Canvas workspace for visual AI interactions
* `input-fields` - Smart input field components
**Examples:**
```bash
# Add a single component
npx tambo add form
# Add multiple components
npx tambo add form graph canvas-space
# Add to custom directory
npx tambo add form --prefix=src/components/ui
# Skip confirmation prompts
npx tambo add form --yes
# Use legacy peer deps (for dependency conflicts)
npx tambo add form --legacy-peer-deps
```
## Automatic Configuration
The `add` command automatically configures your CSS and Tailwind setup based on your project's Tailwind CSS version:
* **Detects your Tailwind version** (v3 or v4) automatically
* **Updates your `globals.css`** with required CSS variables
* **Updates your `tailwind.config.ts`** (v3 only) with basic configuration
* **Preserves your existing styles** and configuration
The CLI preserves your existing configuration and only adds what's needed for
Tambo components to work. Your custom styles and colors won't be overridden.
For detailed information about what gets configured, or for manual setup, see the complete guide to CSS variables and Tailwind configuration changes.
## Automatic Dependency Resolution
When you add components, the CLI automatically resolves and installs all dependencies:
```bash
npx tambo add message-thread-full
# Output:
✓ Analyzing dependencies...
ℹ Will install 5 components:
• message-thread-full (selected)
• button (dependency)
• card (dependency)
• scroll-area (dependency)
• textarea (dependency)
? Continue? (Y/n)
```
**What happens:**
1. You specify which components to add
2. CLI analyzes component dependencies
3. All required dependencies are automatically included
4. CLI shows you the complete list before installing
You don't need to manually track which components depend on others - the CLI handles this for you.
## Legacy Location Detection
If you're working with an existing project that has Tambo components in the legacy `components/ui/` location, the CLI will detect this and offer guidance:
```bash
npx tambo add form
# Output:
⚠ Warning: Found existing Tambo components in components/ui/
• message-thread-full
• control-bar
Recommended: Components should be in components/tambo/
? What would you like to do?
❯ Migrate components to components/tambo/ (recommended)
Continue installing to components/ui/ (legacy)
Cancel
# If you continue:
⚠ Installing to components/ui/ for consistency with existing components.
⚠ This location is deprecated. Run `npx tambo migrate` to move to components/tambo/
```
**Why this matters:**
* **Current standard:** Components should live in `components/tambo/`
* **Legacy location:** Older projects may have components in `components/ui/`
* **Mixed locations cause problems:** Import paths become confusing
**Recommended action:**
Migrate your components before adding new ones:
```bash
npx tambo migrate
npx tambo add form
```
See the [migrate command documentation](/reference/cli/commands/migrate) for details on the migration process.
**If you skip migration:**
The CLI will install new components in `components/ui/` to match your existing setup, but you'll continue to see warnings. It's best to migrate when convenient.
# create-app
URL: /reference/cli/commands/create-app
`npx tambo create-app [directory]` or `npm create tambo-app my-app`
Creates a new Tambo app from a template. Choose from pre-built templates to get started quickly with different use cases.
**Available Templates:**
* `standard` - Tambo + Tools + MCP - general purpose AI app template with MCP integration
* `analytics` - Generative UI analytics template with drag-and-drop canvas and data visualization
* More templates coming soon!
**Examples:**
```bash
# Create app with interactive prompts
npx tambo create-app
# Create in current directory
npx tambo create-app .
# Create with specific template
npx tambo create-app my-app --template=standard
# Initialize git repository automatically
npx tambo create-app my-app --init-git
# Use legacy peer deps
npx tambo create-app my-app --legacy-peer-deps
```
**Manual Setup After Creating:**
```bash
cd my-app
npx tambo init # Complete setup with API key
npm run dev # Start development server
```
# full-send
URL: /reference/cli/commands/full-send
`npx tambo full-send`
For instant project setup: performs the same setup steps as `init` and also installs a few useful starter components.
**What it does:**
1. Sets up authentication and API key
2. Creates the `lib/tambo.ts` configuration file
3. Prompts you to select starter components to install
4. Installs selected components and their dependencies
5. Provides setup instructions with code snippets
**Examples:**
```bash
# Interactive setup with component selection
npx tambo full-send
# Skip prompts and install all starter components
npx tambo full-send --yes
# Use legacy peer deps
npx tambo full-send --legacy-peer-deps
```
## Component Selection
The `full-send` command presents an interactive checkbox list of curated starter components:
```bash
? Select starter components to install: (Press <space> to select, <a> to toggle all)
❯ ◯ message-thread-full - Full-featured message thread with controls
◯ message-thread-panel - Message thread optimized for side panels
◯ message-thread-collapsible - Collapsible message thread
◯ control-bar - Control panel for managing thread settings
```
**Available starter components:**
* **message-thread-full** - Complete message thread with all features, perfect for full-page chat interfaces
* **message-thread-panel** - Optimized for sidebars and side panels, great for embedded chat
* **message-thread-collapsible** - Can expand/collapse, ideal for compact layouts or mobile
* **control-bar** - Settings and controls for managing thread behavior
**Selection options:**
* Press `<space>` to select/deselect individual components
* Press `<a>` to toggle all components at once
* Press `<enter>` to confirm your selection
**Automatic setup:**
After you select components, the CLI:
1. Installs all selected components
2. Installs all required dependencies automatically
3. Creates the `lib/tambo.ts` configuration file
4. Registers all components in the configuration
5. Copies TamboProvider setup code to your clipboard
Paste the copied TamboProvider setup code into your layout file to complete the setup.
To skip selection and install all starter components:
```bash
npx tambo full-send --yes
```
**Manual Provider Setup:**
After running `full-send`, add the TamboProvider to your layout file:
```tsx title="app/layout.tsx"
"use client";
import { TamboProvider } from "@tambo-ai/react";
import { components } from "../lib/tambo";
import { MessageThreadFull } from "@/components/tambo/message-thread-full";
export default function Layout({ children }) {
return (