# LLM Chat Node

## Overview
LLM Chat is the recommended chat node for new Rivet 2 graphs. It is vendor-agnostic, built on the Vercel AI SDK, and supports OpenAI, Anthropic, Google, and custom OpenAI-compatible providers from a single node.
Use LLM Chat when you want to:
- switch providers without rewiring the graph
- use OpenAI reasoning models, Anthropic thinking, or Google thinking controls
- provide the API key from settings or from an input port
- call custom OpenAI-compatible providers by setting Provider base URL
- expose response status/error outputs for provider debugging
- retry non-200 provider responses and inspect per-attempt status/error arrays
The older Chat Node remains available for existing graphs, but new work should prefer LLM Chat.
## Inputs
| Title | Data Type | Description |
|---|---|---|
| System Prompt | string | Optional system prompt prepended to the main prompt. |
| Prompt | chat-message, chat-message[], string, or string[] | Main prompt sent to the model. Strings are converted to user messages. |
| Tools | gpt-function or gpt-function[] | Available when Tool use is enabled. Defines functions/tools the model may call. |
| API Key | string | Available when API key source is set to Input port. |
| Provider base URL | string | Available for Custom provider when Provider base URL uses an input. Accepts an OpenAI-compatible base URL or full /chat/completions URL. |
| Base URL | string | Available for built-in providers when Base URL uses an input. |
| Headers | object | Available when Headers uses an input. Adds provider request headers. |
| Extra Provider Options | string or object | Available when Extra provider options uses an input. Power-user Vercel provider options. |
| Response Schema | object or gpt-function | Available when Response format is JSON schema. |
## Outputs
| Title | Data Type | Description |
|---|---|---|
| Response | string, number, boolean, object, any, or arrays of those values | Final model response. Default/Text formats output text. JSON and JSON schema formats output the parsed structured value when parsing succeeds, or the raw string when parsing fails. It may stream visually in the editor, but downstream nodes receive the final value. |
| Messages Sent | chat-message[] | Messages sent to the provider. |
| All Messages | chat-message[] | Conversation messages including the response. |
| Response Tokens | number | Response token count when available. |
| Function Calls | object[] | Available when tool use or provider built-in tools are enabled. |
| Usage | object | Available when Output usage details is enabled. Includes provider usage metadata when available. |
| Reasoning | string or string[] | Available when Output reasoning is enabled and the provider exposes reasoning/thinking output. |
| Response Status | number or number[] | Available when Output request status is enabled. When Retry on non-200 is enabled, this is a per-attempt array. |
| Response Error | string or string[] | Available when Output request status is enabled. When Retry on non-200 is enabled, this is a per-attempt array. |
## Editor Settings

### Model
| Setting | Description |
|---|---|
| Provider | OpenAI, Anthropic, Google, or Custom provider. |
| Provider base URL | Custom provider only. This is separate from Provider Advanced > Base URL. |
| Model | Provider model. For Custom provider, enter the model ID expected by that provider. |
| API key source | Use the configured provider key or expose an API Key input port. |
| API key env var name | Custom provider only when using a configured key. Defaults to CUSTOM_PROVIDER_API_KEY. |
### Parameters
LLM Chat supports common generation parameters such as Temperature, Max output tokens, Top P, Top K, presence penalty, frequency penalty, stop sequences, and seed. Provider support varies; unsupported settings may be ignored by the provider or model.
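As an illustration of how these parameters typically map onto an OpenAI-compatible request, here is a hedged sketch. The function and field names follow the OpenAI REST API convention; this is not Rivet's internal code, and other providers may silently ignore fields they do not support, as noted above.

```typescript
// Sketch: mapping common generation parameters onto an OpenAI-compatible
// /chat/completions request body. Field names follow the OpenAI REST API;
// providers that do not support a field may ignore it.
interface GenerationParams {
  temperature?: number;
  maxOutputTokens?: number;
  topP?: number;
  presencePenalty?: number;
  frequencyPenalty?: number;
  stopSequences?: string[];
  seed?: number;
}

function toChatCompletionsBody(model: string, prompt: string, p: GenerationParams) {
  return {
    model,
    // plain string prompts are wrapped as user messages
    messages: [{ role: "user", content: prompt }],
    temperature: p.temperature,
    max_tokens: p.maxOutputTokens,
    top_p: p.topP,
    presence_penalty: p.presencePenalty,
    frequency_penalty: p.frequencyPenalty,
    stop: p.stopSequences,
    seed: p.seed,
  };
}
```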
### Reasoning
Reasoning settings are provider-specific:
- OpenAI: Reasoning effort and optional reasoning summary.
- Anthropic: Thinking mode, effort, thinking budget, and cache breakpoint TTL.
- Google: Thinking level, thinking budget, and Include thoughts.
The node body shows the selected reasoning effort for built-in providers.
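For orientation, these settings roughly correspond to the following raw provider request fields. This is an assumption about the underlying provider APIs, not a description of how Rivet wires them through the Vercel AI SDK:

```typescript
// Illustrative provider request shapes (assumed, not Rivet internals).
// OpenAI reasoning models accept a reasoning_effort field:
const openaiReasoning = {
  reasoning_effort: "medium" as const,
};

// Anthropic extended thinking takes a mode plus a token budget:
const anthropicThinking = {
  thinking: { type: "enabled" as const, budget_tokens: 8192 },
};

// Gemini thinking controls live under thinkingConfig:
const googleThinking = {
  thinkingConfig: { thinkingBudget: 4096, includeThoughts: true },
};
```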
### Response Format
Response format supports Default, Text, JSON, and JSON schema. With JSON and JSON schema, the Response output emits the parsed structured value instead of a JSON string when parsing succeeds. If the provider returns text that cannot be parsed into the requested structure, Rivet falls back to outputting the raw string rather than failing the node. JSON schema adds a Response Schema input port and optional schema name/description settings. For Custom provider, Rivet sends the schema as a plain JSON-compatible response_format field in the OpenAI-compatible request, so compatible endpoints receive the full schema rather than JSON mode alone.
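A minimal sketch of the OpenAI-compatible response_format shape such an endpoint would receive. The helper name is hypothetical; the field layout follows the OpenAI structured-outputs convention:

```typescript
// Hypothetical helper: package a response schema as an OpenAI-compatible
// response_format request field (the layout OpenAI uses for json_schema mode).
function buildResponseFormat(name: string, schema: object, description?: string) {
  return {
    response_format: {
      type: "json_schema" as const,
      json_schema: { name, description, schema },
    },
  };
}
```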
### Tools
Tool use enables the Tools input and optional tool-choice controls. Auto-continue mode lets Rivet execute tool calls, send tool results back to the model, and repeat until a normal answer is produced or the max tool-round limit is reached.
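The auto-continue control flow can be sketched as below. This is an assumed illustration of the loop just described, not Rivet's implementation; the types and callbacks are hypothetical:

```typescript
// Sketch of an auto-continue loop: execute tool calls, feed results back,
// repeat until the model answers normally or the round limit is reached.
type Message = { role: string; content: string };
type ModelTurn =
  | { kind: "answer"; text: string }
  | { kind: "toolCalls"; calls: { name: string; args: string }[] };

async function autoContinue(
  callModel: (history: Message[]) => Promise<ModelTurn>,
  runTool: (name: string, args: string) => Promise<string>,
  history: Message[],
  maxRounds: number,
): Promise<string> {
  for (let round = 0; round < maxRounds; round++) {
    const turn = await callModel(history);
    if (turn.kind === "answer") return turn.text; // normal answer: stop looping
    for (const call of turn.calls) {
      const result = await runTool(call.name, call.args);
      history.push({ role: "tool", content: result }); // tool result goes back to the model
    }
  }
  throw new Error("max tool-round limit reached without a final answer");
}
```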
### Provider Advanced
Provider Advanced contains optional Base URL, Headers, and Extra provider options. For Custom provider, use the Model section's Provider base URL instead of Provider Advanced > Base URL.
### Technical Details
| Setting | Description |
|---|---|
| Retry on non-200 | Retries provider requests when the Vercel AI SDK reports a non-200 HTTP status. |
| Repeat times | Number of additional retry attempts made after the initial request. |
| Cooldown, ms | Delay between retry attempts. |
| Output request status | Adds Response Status and Response Error outputs. Retry mode changes both outputs to per-attempt arrays. |
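The retry semantics described in this table can be sketched as follows. This is an assumed model of the behavior (function and field names are illustrative), showing how per-attempt status/error arrays accumulate:

```typescript
// Sketch of retry-on-non-200 with per-attempt status/error arrays.
// One initial request plus `repeatTimes` retries, with a cooldown between attempts.
async function retryNon200(
  request: () => Promise<{ status: number; error?: string; body?: string }>,
  repeatTimes: number,
  cooldownMs: number,
): Promise<{ statuses: number[]; errors: (string | undefined)[]; body?: string }> {
  const statuses: number[] = [];
  const errors: (string | undefined)[] = [];
  for (let attempt = 0; attempt <= repeatTimes; attempt++) {
    const res = await request();
    statuses.push(res.status); // record every attempt, success or failure
    errors.push(res.error);
    if (res.status === 200) return { statuses, errors, body: res.body };
    if (attempt < repeatTimes) {
      await new Promise((resolve) => setTimeout(resolve, cooldownMs));
    }
  }
  return { statuses, errors }; // all attempts failed; arrays show each one
}
```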
## Custom Provider
Choose Custom provider for OpenAI-compatible APIs such as local model servers or other hosted providers.
For Custom provider, the expected node body order is:
```
Custom provider
Provider base URL: <url>
<model name>
```
The custom provider API key can come from either:
- the environment variable named by API key env var name
- the API Key input port
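Since the Provider base URL input accepts either a bare OpenAI-compatible base URL or a full /chat/completions URL, the resolution could look like the hypothetical helper below. This is an assumption about the normalization, not Rivet's exact logic:

```typescript
// Hypothetical normalizer: accept either a bare base URL (e.g. ".../v1")
// or a full /chat/completions URL, and resolve to the request endpoint.
function resolveChatCompletionsUrl(baseUrl: string): string {
  const trimmed = baseUrl.replace(/\/+$/, ""); // drop trailing slashes
  return trimmed.endsWith("/chat/completions")
    ? trimmed // already a full endpoint URL: use as-is
    : `${trimmed}/chat/completions`;
}
```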
## Error Handling
LLM Chat normalizes common provider failures into readable errors. For example, an invalid API key should produce a provider request error that includes the HTTP status, provider, model, endpoint, and a hint to check the API key source.
Enable Output request status when you want machine-readable response status/error outputs. With Retry on non-200 enabled, Response Status and Response Error become arrays so you can see what happened on each attempt.
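A normalized error of the kind described above might be assembled like this. The function and input fields are illustrative, not Rivet's internal types:

```typescript
// Sketch of a provider-error normalizer: surface HTTP status, provider,
// model, and endpoint, plus an API-key hint for auth failures.
function formatProviderError(e: {
  status: number;
  provider: string;
  model: string;
  endpoint: string;
}): string {
  const hint =
    e.status === 401 || e.status === 403
      ? " Check the API key source for this node."
      : "";
  return `Provider request failed (HTTP ${e.status}) for ${e.provider}/${e.model} at ${e.endpoint}.${hint}`;
}
```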