

build ai workflows. deploy as one api.
design multi-step ai visually, deploy to a versioned endpoint, and iterate without touching your app code. 9 providers, one integration.
works with openai, anthropic, gemini, and more
You design a workflow in our visual builder — chaining ai steps, tools, agents, and logic. We host it and give you one API endpoint. Every change you publish is a versioned deployment behind the same URL.
No orchestration code in your repo. No provider sdk sprawl. Your app makes one POST — we run every step and return the result.
how it works
graph → deploy → endpoint
design it once. it runs in production here.
optiml studio — prompt-to-image v2

INPUT: text  →  AI STEP: enhance prompt (text → text)  →  IMAGE GEN: generate asset (text → image)  →  OUTPUT: response

Execution Trace — Run #4821

step             type   latency    cost     detail
input            —      2ms        —        user prompt
enhance prompt   AI     620ms      $0.003   gpt-4o-mini
generate image   IMG    4,200ms    $0.040   dall-e-3
output           —      1ms        —        image + caption
total                   4,823ms    $0.043
workflow lifecycle
from idea to production in minutes
Build
design your workflow visually with ai steps, tools, agents, and logic blocks. pick models from 9 providers.
Deploy
publish to a stable url. every change is a new version — roll back without touching your app.
Test
a/b test versions in production. gate deploys with evals. auto-rollback on errors or latency spikes.
Monitor
per-step traces, cost, and latency for every request. see exactly what ran and why.
capabilities
everything you need to ship ai features
one platform to build, deploy, and operate — so you can focus on your product.
visual workflow builder
drag-and-drop ai steps, tools, agents, conditions, and loops. 11+ block types for any workflow shape.
knowledge base
upload your docs, pdfs, and web pages. your ai calls get business context automatically — no copy-pasting into prompts.
versioned deployments
every publish is a numbered snapshot behind the same url. roll back in one click. your app integration never changes.
9 providers, one api
openai, anthropic, gemini, mistral, cohere, groq, together, deepseek, fireworks — swap models without code changes.
use cases
what you can build with optiml
document analysis
upload playbooks or policies, review documents with ai, extract insights automatically
content generation
turn briefs into production copy across channels — email, landing pages, social
knowledge assistants
connect your docs and internal knowledge, build assistants that answer questions instantly
why optiml
what makes this different
01
not a proxy
we execute the full workflow — multi-step chains, tools, agents, streaming — not a single forwarded api call.
02
not just observability
dashboards don’t deploy software. optiml versions, tests, and rolls back the actual workflow serving traffic.
03
not just a builder
a canvas alone doesn’t ship anything. we give you a production endpoint with eval gates and rollback built in.
04
context built in
upload your business docs to the knowledge base. every ai call gets the right context without your app passing it.
integration
one POST — we run the workflow
your backend stays thin; complexity lives in the hosted graph.
// basic request
const res = await fetch(
  "https://api.optiml.ai/api/public/acme/support-bot",
  {
    method: "POST",
    headers: {
      "Authorization": "Bearer YOUR_API_KEY",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      variables: { user_message: "How do I reset my password?" },
    }),
  }
);
const data = await res.json();
console.log(data.final_output);
// streaming
const res = await fetch(
  "https://api.optiml.ai/api/public/acme/support-bot",
  {
    method: "POST",
    headers: {
      "Authorization": "Bearer YOUR_API_KEY",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      variables: { user_message: "How do I reset my password?" },
      stream: true,
    }),
  }
);

const reader = res.body.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  const text = decoder.decode(value);
  // each line: data: {"type":"token","content":"..."}
  process.stdout.write(text);
}
// multi-turn conversation
const res = await fetch(
  "https://api.optiml.ai/api/public/acme/support-bot",
  {
    method: "POST",
    headers: {
      "Authorization": "Bearer YOUR_API_KEY",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      variables: { user_message: "What about two-factor auth?" },
      conversation_id: "conv_abc123",
    }),
  }
);
// multimodal (image input)
const fs = require("fs");
const image = fs.readFileSync("photo.jpg", { encoding: "base64" });

const res = await fetch(
  "https://api.optiml.ai/api/public/acme/vision-analyzer",
  {
    method: "POST",
    headers: {
      "Authorization": "Bearer YOUR_API_KEY",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      variables: {
        question: "What product is shown in this image?",
        image: `data:image/jpeg;base64,${image}`,
      },
    }),
  }
);
const data = await res.json();
console.log(data.final_output);
Streaming, chat, and multimodal — same endpoint, same versioning story.
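The streaming example writes raw SSE lines straight to stdout. Using the line format it documents (`data: {"type":"token","content":"..."}`), a small helper can assemble just the token text — a sketch only; event types other than "token" are an assumption, and a production parser would also buffer lines split across chunks.

```javascript
// Sketch: extract token text from optiml's streamed SSE lines.
// The "data: {...}" line shape comes from the streaming example above;
// assumes each chunk contains whole lines (no partial-line buffering).
function extractTokens(chunk) {
  const tokens = [];
  for (const line of chunk.split("\n")) {
    if (!line.startsWith("data: ")) continue; // skip blanks / keep-alives
    const event = JSON.parse(line.slice("data: ".length));
    if (event.type === "token") tokens.push(event.content);
  }
  return tokens;
}

// usage inside the read loop from the streaming example:
//   const text = decoder.decode(value);
//   answer += extractTokens(text).join("");
```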
pricing
start free. add rigor when traction hits.
free
$0
validate the idea before you scale.
2 projects, 5 workflows
1,000 requests/mo
all providers
streaming + conversations
community support
startup
$49/mo
founding teams shipping real users.
10 projects, 25 workflows
50,000 requests/mo
5 team members
a/b experiments
auto-graded evals
rollback + version history
team
$149/mo
growing engineering teams.
unlimited projects & workflows
500,000 requests/mo
20 team members
custom routing policies
model comparison
budget controls
enterprise
custom
governance, sso, and slas.
everything in team
unlimited requests
sso / saml
audit logs
custom sla
dedicated support
ship the feature, not the plumbing
free to start. same platform scales to versioned deploys, experiments, and rollback when you need them.