LLM Chat Node

Overview

LLM Chat is the recommended chat node for new Rivet 2 graphs. It is vendor-agnostic, built on the Vercel AI SDK, and supports OpenAI, Anthropic, Google, and custom OpenAI-compatible providers from a single node.

Use LLM Chat when you want to:

  • switch providers without rewiring the graph
  • use OpenAI reasoning models, Anthropic thinking, or Google thinking controls
  • provide the API key from settings or from an input port
  • call custom OpenAI-compatible providers by setting Provider base URL
  • expose response status/error outputs for provider debugging
  • retry non-200 provider responses and inspect per-attempt status/error arrays

The older Chat Node remains available for existing graphs, but new work should prefer LLM Chat.
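
To see what "vendor-agnostic via the Vercel AI SDK" means in practice, here is a minimal TypeScript sketch of the kind of call the node makes under the hood, assuming the ai, @ai-sdk/openai, and @ai-sdk/anthropic packages (model names are placeholders). In a graph you get the same switch from the node's provider and model settings instead of code.

```typescript
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';

// Switching providers is just a different `model` value; the prompt and the
// downstream handling stay the same. LLM Chat exposes this as provider/model
// settings on the node rather than code.
const model = process.env.USE_ANTHROPIC
  ? anthropic('claude-3-5-sonnet-latest')
  : openai('gpt-4o-mini');

const { text } = await generateText({
  model,
  system: 'You are a helpful assistant.',        // maps to the System Prompt input
  prompt: 'Summarize the latest release notes.', // maps to the Prompt input
});

console.log(text);
```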

Inputs

| Title | Data Type | Description |
| --- | --- | --- |
| System Prompt | string | Optional system prompt prepended to the main prompt. |
| Prompt | chat-message, chat-message[], string, or string[] | Main prompt sent to the model. Strings are converted to user messages. |
| Tools | gpt-function or gpt-function[] | Available when Tool use is enabled. Defines functions/tools the model may call. |
| API Key | string | Available when API key source is set to Input port. |
| Provider base URL | string | Available for Custom provider when Provider base URL uses an input. Accepts an OpenAI-compatible base URL or a full /chat/completions URL. |
| Base URL | string | Available for built-in providers when Base URL uses an input. |
| Headers | object | Available when Headers uses an input. Adds provider request headers. |
| Extra Provider Options | string or object | Available when Extra provider options uses an input. Power-user Vercel provider options. |
| Response Schema | object or gpt-function | Available when Response format is JSON schema. |
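
As a rough illustration of the Prompt row above, the sketch below shows how string and chat-message inputs could normalize to the same message shape. The ChatMessage type and toMessages helper here are hypothetical and only illustrate the idea; they are not Rivet's actual internals.

```typescript
// Hypothetical sketch: plain strings become user messages, chat-messages pass through.
type ChatMessage = { type: 'system' | 'user' | 'assistant'; message: string };

function toMessages(
  prompt: string | string[] | ChatMessage | ChatMessage[],
): ChatMessage[] {
  const items: (string | ChatMessage)[] = Array.isArray(prompt) ? prompt : [prompt];
  return items.map((item) =>
    typeof item === 'string'
      ? { type: 'user' as const, message: item } // string -> user message
      : item,
  );
}

// toMessages('Summarize this ticket')
// => [{ type: 'user', message: 'Summarize this ticket' }]
```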

Custom Provider

Choose Custom provider for OpenAI-compatible APIs such as local model servers or other hosted providers.

For Custom provider, the expected node body order is:

  Custom provider
  Provider base URL: <url>
  <model name>

The custom provider API key can come from either of these sources (see the sketch after this list):

  • the environment variable named by API key env var name
  • the API Key input port
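
For reference, the snippet below sketches the equivalent direct call with the Vercel AI SDK's createOpenAI, assuming a local OpenAI-compatible server. The base URL, model name, and MY_PROVIDER_API_KEY env var name are placeholders; the node performs this wiring for you.

```typescript
import { generateText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';

// Placeholder base URL, env var name, and model name; substitute your own.
const customProvider = createOpenAI({
  baseURL: 'http://localhost:11434/v1',          // OpenAI-compatible base URL
  apiKey: process.env.MY_PROVIDER_API_KEY ?? '', // or wire it in from the API Key port
});

const { text } = await generateText({
  model: customProvider('llama-3.1-8b-instruct'), // the <model name> from the node body
  prompt: 'Say hello.',
});

console.log(text);
```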

Error Handling

LLM Chat normalizes common provider failures into readable errors. For example, an invalid API key produces a provider request error that includes the HTTP status, provider, model, endpoint, and a hint to check the API key source.

Enable Output request status when you want machine-readable response status/error outputs. With Retry on non-200 enabled, Response Status and Response Error become arrays so you can see what happened on each attempt.
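
For comparison, this is roughly how such a failure surfaces when calling the underlying SDK directly, using the AI SDK's APICallError. The exact wording of LLM Chat's normalized message may differ.

```typescript
import { generateText, APICallError } from 'ai';
import { openai } from '@ai-sdk/openai';

try {
  await generateText({ model: openai('gpt-4o-mini'), prompt: 'Hello' });
} catch (error) {
  if (APICallError.isInstance(error)) {
    // Roughly the information LLM Chat folds into its normalized message and,
    // with Output request status enabled, into Response Status / Response Error.
    console.error(
      `Provider request failed with status ${error.statusCode} at ${error.url}. ` +
        'If the status is 401 or 403, check the API key source.',
    );
  } else {
    throw error;
  }
}
```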

See Also

  • Chat Node