AI at the Command Line

One command. Multiple iterations. The best answer — not the first one.

ora is a single binary that asks an AI model your question, critiques the answer, refines it, and repeats — until the answer is actually good. You set the budget. It does the work.

Give your AI agents full CLI power — any model, any strategy, structured output, background execution, and cost control. All from one command.

Download ora

One binary. Drop it anywhere on your PATH. No runtime, no containers, no setup.

🍎 macOS (Apple Silicon) · Download
🍎 macOS (Intel) · Download
🐧 Linux (x86_64) · Download
🐧 Linux (ARM64) · Download
🖥 Windows (x64) · Download

or install via script:

curl -sSL https://oracommand.com/install.sh | sh

Up and running in 60 seconds

From download to first answer — nothing else to configure.

1. Install

One command. One binary. No dependencies, no Docker, no package manager.

$ curl -sSL https://oracommand.com/install.sh | sh

2. Ask anything

ora drafts, critiques, refines — and returns the best answer it can find within your budget.

$ ora -q "explain quantum tunneling"
   Answer v3 (confidence: 0.92)

Built for the terminal. Built for agents.

Everything you need to get better answers from AI — from a single binary that works everywhere.

🌐 Multi-Provider

Claude, GPT, Gemini, Grok, Ollama, or any OpenAI-compatible API. One interface for every model.

$ ora --model claude-opus-4-5
$ ora --model gpt-4o --fallback claude-sonnet-4-20250514

🔄 Autonomous Reasoning Loops

Not one answer — the best answer. ora iterates: answer, critique, refine, repeat. Three strategies: critique, debate, research.

$ ora -q "best DB for time series?" --strategy research
   v4 confidence: 0.91 — stopped

💰 Budget & Cost Control

Set a budget per run. ora tracks every token, warns at 80%, and stops at your limit. Never overspend.

$ ora -q "analyze codebase" --budget 0.50
   $0.41 / $0.50 (82%) — warning

🏃 Background Jobs

Start a deep run, close your terminal, come back later. Attach, pause, resume, kill — full process control.

$ ora -q "audit the codebase" --bg
  [ora-3] started in background
$ ora attach ora-3

✏️ Prompt Crafting

Describe what you want in English. ora builds the optimal prompt, recommends flags, and generates ready-to-run commands.

$ ora --craft "weekly EV market report"
   Crafted prompt + 3 command variants

🧠 Memory & Continuations

Chain runs together. Feed previous answers as context. Pick up where you left off with --continue.

$ ora -q "go deeper on point 3" --continue last
   Inherited context from ora-7

📊 Terminal Dashboard

Live TUI showing all processes, cost breakdown, and scheduled jobs. No browser needed — everything in your terminal.

$ ora dashboard
  Processes | Cost | Schedule

🤖 Agent-Ready JSON Output

Stable JSON schema for pipelines, scripts, and AI agents. Pipe into jq, feed into other tools, automate everything.

$ ora -q "analyze" --output json --quiet | jq .answer

For every workflow

Whether you're exploring an idea or automating a pipeline — ora meets you where you are.

Ask questions in plain English

No prompt engineering needed. Just ask your question and ora handles the rest. Use --guide for interactive workflows.

# Simple question — ora iterates automatically
$ ora -q "explain what a reverse proxy does"
# Interactive guide builds the command for you
$ ora --guide "how do database indexes work?"
# Craft an optimized prompt from vague intent
$ ora --craft "I need to understand Kubernetes"

Works everywhere you work

ora is a CLI binary — it plugs into anything that runs a command.

Terminal

Bash, Zsh, Fish, PowerShell. Any terminal on any OS. ora is a single binary — just run it.

Shell Scripts

Pipe data in, get JSON out. Use ora in bash scripts, Makefiles, and CI/CD pipelines.
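
A minimal sketch of the pattern. The JSON schema with an answer field is taken from the jq example elsewhere on this page; a canned response stands in for a live ora call so the parsing step is visible on its own:

```shell
#!/bin/sh
# In a real script this line would be:
#   resp=$(ora -q "suggest a fix for the slow query" --output json --quiet)
# The canned JSON below stands in for that call (schema assumed from the
# `jq .answer` example on this page).
resp='{"answer":"Add a covering index.","confidence":0.92}'

# Extract just the answer text with jq's raw-output mode.
echo "$resp" | jq -r .answer
```

The same one-liner slots into a Makefile target or a CI step unchanged.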

Anthropic Claude

Claude Opus, Sonnet, Haiku — all supported out of the box. Auto-detected from model name.

OpenAI

GPT-4o, o3, and any OpenAI model. Same interface, same flags, same output format.

OpenClaw

Connect ora to OpenClaw workflows. Use ora as the reasoning engine behind your agents.

🦜 Ollama

Run local models with zero API keys. ora auto-detects Ollama on localhost. Fully offline.
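
A hypothetical local run. The --model flag is documented above; the model name is illustrative, not a verified value — substitute whatever ollama list shows on your machine:

```shell
# Assumes Ollama is already serving on localhost (ora auto-detects it).
# "llama3" is a placeholder model name.
ora -q "explain mutexes vs semaphores" --model llama3
```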

Google Gemini

Gemini Pro, Flash — native integration. One flag to switch: --model gemini-2.0-pro

🔗 Any OpenAI-Compatible API

Groq, Together, Fireworks, self-hosted — anything with an OpenAI-compatible endpoint works with --endpoint.
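
A sketch of a self-hosted setup using the --endpoint flag described above. The URL and model name are placeholders for your own deployment:

```shell
# Point ora at any OpenAI-compatible endpoint.
# Both the endpoint URL and the model name below are illustrative.
ora -q "summarize the changelog" \
  --endpoint https://llm.internal.example/v1 \
  --model my-fine-tune
```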

Free vs Pro (coming soon)

ora is free to download and use — no limits on runs, models, or strategies. Pro adds power features for teams and automation.

Feature | Free | Pro
Reasoning loops (critique, debate, research) | ✓ | ✓
All providers (Claude, GPT, Gemini, Ollama...) | ✓ | ✓
Background jobs & process management | ✓ | ✓
Prompt crafting & guided workflows | ✓ | ✓
Memory & continuations | ✓ | ✓
Cost tracking & budgets | ✓ | ✓
Terminal dashboard | ✓ | ✓
JSON output for agents & pipelines | ✓ | ✓
Scheduled runs (cron) | ✗ | ✓
Batch processing | ✗ | ✓
PDF export | ✗ | ✓
Cloud sync & sharing | ✗ | ✓
Web dashboard | ✗ | ✓
Priority support | ✗ | ✓

About Us

Built by an ex-Bloomberg team

We spent years building real-time data systems at Bloomberg — where every millisecond matters and every answer has to be right. We saw how teams use AI today: paste a question into a chat window, get a mediocre first draft, manually rephrase, try again. Repeat until you give up or get lucky.

That's not how quality works. At Bloomberg, you don't ship the first draft. You iterate, critique, validate, and refine until the answer is actually correct. We built ora to bring that same discipline to AI.

The problem we're solving

One-shot answers aren't good enough

Chat UIs give you one response. Quality depends entirely on how well you prompt. Most people accept a mediocre first draft.

Manual iteration is slow and expensive

Rephrasing, re-prompting, copy-pasting context back in — you're doing the work the machine should be doing.

No cost visibility or control

API calls add up. Most tools have no budget controls — you find out what you spent when the invoice arrives.

Locked into one provider

Every AI tool picks a model for you. Switch providers? Learn a new tool. ora works with any model from any provider.

ora is the answer

One static binary that runs an autonomous reasoning loop against any model. You define the intent, the budget, and the iteration depth. ora does the rest — asks, critiques, refines, and returns the best possible answer within your constraints.
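
Putting it together — a sketch that combines flags shown elsewhere on this page (strategy, budget, and JSON output); the question itself is illustrative:

```shell
# Deep run with an explicit strategy and a hard spending cap,
# emitting machine-readable output for downstream tooling.
ora -q "compare Postgres and ClickHouse for analytics workloads" \
  --strategy debate \
  --budget 0.75 \
  --output json --quiet | jq .answer
```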

It also helps you build the right workflow in the first place. --craft turns vague intent into optimized prompts. --guide builds production-ready commands from plain English. A beginner and a power user get the same quality output.