Define multi-step AI workflows. We handle the retries, the waits, the crashes, and the cost tracking. All while you sleep.
Step Types
task, wait, condition, event, LLM, approval
LLM Providers
OpenAI + Anthropic with auto-fallback
Lost Executions
PostgreSQL-backed durable state
Workflow Length
hours, days, weeks — no timeout
Typed steps in code. LLM calls, timed waits, conditional branches, human approval gates. No visual builder, no YAML.
POST your input. We persist the state, enqueue execution, and your workflow begins immediately. Idempotency keys prevent duplicates.
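Idempotency-key deduplication works roughly like this (a minimal in-memory sketch for illustration; Stevora persists keys in PostgreSQL, and the shapes here are assumptions, not the real internals):

```typescript
// Launching with a key that was already used returns the existing run
// instead of starting a duplicate. In-memory map for illustration only.
const runsByKey = new Map<string, { runId: string }>();
let nextId = 0;

function launch(idempotencyKey: string): { runId: string; deduped: boolean } {
  const existing = runsByKey.get(idempotencyKey);
  if (existing) return { runId: existing.runId, deduped: true };
  const run = { runId: `run_${++nextId}` };
  runsByKey.set(idempotencyKey, run);
  return { ...run, deduped: false };
}
```

Retrying the same POST with the same key is therefore safe: the second call is a no-op that returns the original run.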
Steps run in sequence. Failures retry with exponential backoff. Wait steps resume after hours or days. Server crashes lose nothing.
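The retry schedule can be sketched as a simple delay function (base delay and cap are illustrative values, not Stevora's documented defaults):

```typescript
// Exponential backoff with a cap: 1s, 2s, 4s, ... up to 60s.
// Illustrative defaults only.
function backoffDelayMs(attempt: number, baseMs = 1000, capMs = 60_000): number {
  return Math.min(capMs, baseMs * 2 ** (attempt - 1));
}
```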
Every LLM call logged with tokens and cost. Every tool invocation traced. Every state transition recorded.
Server restarts mid-workflow? Execution resumes from the last completed step. Zero data loss.
BullMQ delivers every step at least once; durable state checks make processing effectively exactly-once. Redis is a queue, not the source of truth.
Every state transition, LLM call, tool invocation, and approval decision is recorded as an event.
Every AI workflow is a sequence of these six step types. Combine them to build anything from a simple chatbot to a multi-day sales pipeline.
Execute custom business logic with full access to workflow state. Register typed handlers that receive context and return structured results.
{
  "type": "task",
  "name": "enrich-lead",
  "handler": "enrichCompanyData",
  "retryPolicy": { "maxRetries": 3 }
}

Unified interface for OpenAI and Anthropic. Model fallback chains. Every call traced.
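A typed handler like `enrichCompanyData` might look like this (the context and result types are hypothetical shapes for illustration; the real SDK types may differ):

```typescript
// Hypothetical context/result types for a typed task handler.
interface StepContext<I> { input: I; workflowId: string }
interface StepResult<O> { output: O }

interface Lead { company: string }
interface EnrichedLead extends Lead { domain: string }

// Pure handler: receives workflow context, returns a structured result.
function enrichCompanyData(ctx: StepContext<Lead>): StepResult<EnrichedLead> {
  // Toy enrichment: guess a domain from the company name.
  const domain = ctx.input.company.toLowerCase().replace(/\s+/g, '') + '.com';
  return { output: { ...ctx.input, domain } };
}
```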
Per-call, per-workflow, per-workspace. Know exactly what your agents spend.
PostgreSQL is the source of truth. Redis crashes? We rebuild. Server restarts? Resume.
Exponential backoff per step. Configurable max attempts. Dead-letter after exhaustion.
Approval steps pause the workflow. Approve, reject, or edit AI outputs before they ship.
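An approval step could be declared in the same style as the task example above (everything beyond `type` and `name` — the approvers list, timeout, and timeout behavior — is an assumed field, not the documented schema):

```typescript
// Hypothetical approval-step definition, illustrative fields only.
const approvalStep = {
  type: 'approval',
  name: 'review-email-draft',
  approvers: ['sales-lead@example.com'], // assumed option
  onTimeout: 'reject',                   // assumed option
  timeoutMs: 24 * 60 * 60 * 1000,        // pause for up to one day
} as const;
```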
Schema validation on LLM outputs. Content safety checks. Auto-retry with feedback.
Every team building with LLMs hits the same wall: production reliability. Here's what Stevora makes possible.
Research prospects with LLM, draft personalized emails, pause for human approval, send, and wait for replies. Multi-day workflows that run autonomously.
Generate articles with AI, validate tone and factual accuracy through guardrails, queue for editorial review, then publish to your CMS.
Classify incoming tickets, attempt auto-resolution with tool-calling LLMs, escalate to human agents when confidence is low, track resolution cost.
Ingest a batch of records, call external APIs to enrich each one, validate with schema guardrails, retry failures, write results back.
One import. Type-safe from definition to execution. The SDK handles polling, error recovery, and response parsing.
Full type safety from definitions to results
waitForCompletion() with configurable timeout
AgentRuntimeError with code, status, details
One API key, one import, ship immediately
import { AgentRuntime } from '@stevora/sdk'

const stevora = new AgentRuntime({
  apiKey: process.env.STEVORA_KEY
})

// Define → launch → observe. That’s it.
const run = await stevora.workflows.create({
  definitionId: 'ai-sdr-outreach',
  input: { prospect: 'Sarah Chen' },
  idempotencyKey: 'sarah-q2-2026'
})

// LLM calls, retries, waits, approvals
// — all handled durably
const result = await stevora.workflows.waitForCompletion(
  run.id, { timeoutMs: 300_000 }
)

// Every token accounted for
const { totalCostDollars } = await stevora.workflows.getCost(run.id)

Stevora runs on your infrastructure. No data leaves your network. Deploy with a single docker compose up.
Docker image, docker-compose, or deploy to any VPS. You own the infrastructure.
Standard PostgreSQL + Redis. Swap providers, fork the code, extend anything.
Read every line. Audit the security. Contribute back. MIT licensed.
$ git clone https://github.com/abhi-apple/stevora
$ cd stevora
$ cp .env.example .env
$ docker compose up -d
✓ PostgreSQL ........... healthy
✓ Redis ................ healthy
✓ Stevora API .......... :3000
✓ Stevora Worker ....... ready
Ready at http://localhost:3000

Start free. Scale when you need to. Self-host for free forever — or let us handle the infrastructure.
For side projects and experimentation.
For startups shipping AI-powered products.
For teams with production workloads.
Custom volume, SLA, dedicated support, on-prem deployment, HIPAA/SOC2.
Your AI agents should be shipping value, not failing silently at 3am.
No credit card required