# Providers

Configure OpenAI and Anthropic LLM providers.
Stevora ships with built-in support for OpenAI and Anthropic as LLM providers. The provider registry detects which providers are available from environment variables and automatically routes each LLM step to the correct provider based on the model name.
## Supported models
Stevora resolves providers using the `MODEL_PROVIDER_MAP`, which matches model name prefixes to their provider:
| Prefix | Provider | Example models |
|---|---|---|
| `gpt-` | OpenAI | `gpt-4o`, `gpt-4o-mini`, `gpt-4-turbo` |
| `o1-` | OpenAI | `o1-mini`, `o1-preview` |
| `o3-` | OpenAI | `o3-mini` |
| `claude-` | Anthropic | `claude-sonnet-4-20250514`, `claude-haiku-4-5-20251001`, `claude-opus-4-20250514` |
When you specify a model in a step definition, Stevora checks these prefixes in order and selects the first matching provider. If no prefix matches, the step fails with a provider resolution error.
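To make the lookup concrete, here is a simplified, standalone sketch of the prefix match. It returns provider names rather than provider instances and is not Stevora's actual code; the interesting part is the last case, where no prefix matches and the step would fail with a provider resolution error:

```typescript
// Simplified sketch: map of model-name prefixes to provider names,
// mirroring the table above.
const MODEL_PROVIDER_MAP: Record<string, string> = {
  'gpt-': 'openai',
  'o1-': 'openai',
  'o3-': 'openai',
  'claude-': 'anthropic',
};

// Returns the provider name for a model, or undefined when no prefix
// matches -- the case that surfaces as a provider resolution error.
function resolveProviderName(model: string): string | undefined {
  for (const [prefix, providerName] of Object.entries(MODEL_PROVIDER_MAP)) {
    if (model.startsWith(prefix)) return providerName;
  }
  return undefined;
}

console.log(resolveProviderName('gpt-4o'));                   // 'openai'
console.log(resolveProviderName('claude-sonnet-4-20250514')); // 'anthropic'
console.log(resolveProviderName('mistral-large'));            // undefined
```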
## Environment variables
Set one or both API keys to register providers at startup:
```bash
# .env
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```

On boot, `initProviders()` reads these keys and registers each provider that has a valid key. If neither key is set, Stevora logs a warning but continues to start; non-LLM step types will still work.
You only need to set the key for providers you plan to use. For example, if your workflows only use Claude models, you can omit `OPENAI_API_KEY` entirely.
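The key-based registration can be sketched roughly as follows. This is an illustrative model, not Stevora's actual implementation: the registry `Map` and the placeholder provider objects are assumptions for the sake of the example.

```typescript
// Illustrative sketch of key-based provider registration at startup.
// The registry map and provider objects are hypothetical stand-ins.
interface LlmProvider {
  name: string;
}

const providers = new Map<string, LlmProvider>();

// Accepts the environment as a plain record so the sketch is testable
// without touching process.env.
function initProviders(env: Record<string, string | undefined>): void {
  if (env.OPENAI_API_KEY) {
    providers.set('openai', { name: 'openai' });
  }
  if (env.ANTHROPIC_API_KEY) {
    providers.set('anthropic', { name: 'anthropic' });
  }
  if (providers.size === 0) {
    // Startup continues; only LLM steps would fail without a provider.
    console.warn('No LLM provider API keys found; LLM steps will be unavailable.');
  }
}
```

Because registration is per-key, setting only `ANTHROPIC_API_KEY` registers only the Anthropic provider, matching the behavior described above.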
## Provider selection
Provider selection is automatic. When an LLM step runs, Stevora calls `resolveProvider(model)`, which walks the `MODEL_PROVIDER_MAP` and returns the matching provider instance:
```typescript
// From provider-registry.ts
const MODEL_PROVIDER_MAP: Record<string, string> = {
  'gpt-': 'openai',
  'o1-': 'openai',
  'o3-': 'openai',
  'claude-': 'anthropic',
};

export function resolveProvider(model: string): LlmProvider | undefined {
  for (const [prefix, providerName] of Object.entries(MODEL_PROVIDER_MAP)) {
    if (model.startsWith(prefix)) {
      return providers.get(providerName);
    }
  }
  return undefined;
}
```

This means you never configure a provider directly in a step definition. You set the `model` field, and Stevora figures out the rest:
```json
{
  "type": "llm",
  "name": "generate-summary",
  "model": "gpt-4o",
  "messages": [
    { "role": "user", "content": "Summarize: {{state.document}}" }
  ]
}
```

## Using both providers
You can use OpenAI and Anthropic models across different steps in the same workflow, or combine them with model fallback so that a step tries one provider and fails over to the other:
```json
{
  "type": "llm",
  "name": "classify-intent",
  "model": "claude-sonnet-4-20250514",
  "fallbackModels": ["gpt-4o"],
  "messages": [
    { "role": "user", "content": "Classify this support ticket: {{state.ticket}}" }
  ],
  "responseFormat": "json"
}
```

In this example, Stevora first tries Claude Sonnet 4 via Anthropic. If that call fails (network error, rate limit, provider outage), it automatically retries with GPT-4o via OpenAI.
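The fallback behavior can be sketched as a simple loop over the candidate models. The `callModel` helper and the error handling here are illustrative assumptions, not Stevora's actual executor code:

```typescript
// Illustrative fallback loop: try the primary model, then each
// fallbackModels entry in order. callModel is a hypothetical stand-in
// for the real provider call.
type CallModel = (model: string) => Promise<string>;

async function runWithFallback(
  model: string,
  fallbackModels: string[],
  callModel: CallModel,
): Promise<string> {
  const candidates = [model, ...fallbackModels];
  let lastError: unknown;
  for (const candidate of candidates) {
    try {
      return await callModel(candidate);
    } catch (err) {
      lastError = err; // e.g. network error, rate limit, provider outage
    }
  }
  throw lastError; // every candidate failed
}
```

Note that because each model name resolves to its own provider, a fallback from a `claude-` model to a `gpt-` model is also a failover from Anthropic to OpenAI.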
## Checking available providers
The registry exposes `getAvailableProviders()`, which returns the list of provider names that were successfully registered at startup. The execution engine uses this internally, and it is also available through the health check endpoint to confirm your deployment is correctly configured.
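A minimal sketch, assuming the registry keeps a `Map` of registered providers as in the snippets above (the `Map` shape is an assumption, not confirmed implementation detail):

```typescript
// Sketch: listing registered provider names from an assumed registry map.
const providers = new Map<string, { name: string }>();
providers.set('openai', { name: 'openai' });
providers.set('anthropic', { name: 'anthropic' });

// Map iteration preserves insertion order, so the names come back in
// registration order.
function getAvailableProviders(): string[] {
  return [...providers.keys()];
}

console.log(getAvailableProviders()); // ['openai', 'anthropic']
```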