ora Documentation
ora is a command-line tool. You give it a question and a set of flags that control how it reasons — model, strategy, budget, iterations. It does the rest.
Installation
One command. No dependencies, no runtime, no package manager.
curl -sSL https://oracommand.com/install.sh | sh
Or download the binary directly from the download page.
Verify it works:
ora --version
First Queries
ora is a CLI. You pass your question with -q and control behavior with flags. Here are real queries you can run right now:
Basic question
Just ask something. ora runs a reasoning loop with defaults (critique strategy, up to 10 iterations, $1 budget).
ora -q "what is the best database for time series data?"
Pick a model
Use --model to choose any model from any provider.
ora -q "explain quantum tunneling simply" --model gpt-4o
Control depth
Set how many iterations ora runs. More iterations allow more rounds of refinement, which usually improves the answer (and costs more).
ora -q "design a rate limiter for a REST API" --min-iter 3 --max-iter 15
Set a budget
Cap spending per run. ora stops gracefully and returns the best answer found.
ora -q "full competitive analysis of EV market" --budget 0.50
Choose a strategy
Three strategies: critique (default), debate (two perspectives), research (breaks into sub-questions).
ora -q "postgres vs mongodb for IoT?" --strategy debate
Save the output
ora -q "summarize Q1 metrics" --save report.md
Quiet mode — just the answer
ora -q "what is a monad?" --quiet
Dry run — see what would happen
Estimate cost and time without making any API calls.
ora -q "analyze this codebase" --model claude-opus-4-5 --max-iter 10 --dry-run
Combine flags
Flags compose freely. This runs a deep research query with Claude, capped at $0.50, saving to a file:
ora -q "comprehensive analysis of AI regulation in the EU" \
--model claude-opus-4-5 \
--strategy research \
--min-iter 5 --max-iter 20 \
--budget 0.50 \
--save eu-ai-regulation.md
Provider Setup
ora works with any AI provider. Set your API key:
# Anthropic (Claude)
ora config set anthropic-key sk-ant-...
# OpenAI (GPT)
ora config set openai-key sk-...
# Google (Gemini)
ora config set google-key AIza-...
# Or use environment variables
export ORA_ANTHROPIC_KEY=sk-ant-...
ora auto-detects the provider from the model name: claude-* → Anthropic, gpt-* → OpenAI, gemini-* → Google.
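The auto-detection described above amounts to a prefix match on the model name. A minimal Python sketch of that rule (illustrative only, not ora's actual implementation):

```python
def detect_provider(model: str) -> str:
    """Map a model name to its provider by prefix, per the documented rule."""
    prefixes = {
        "claude-": "anthropic",
        "gpt-": "openai",
        "gemini-": "google",
    }
    for prefix, provider in prefixes.items():
        if model.startswith(prefix):
            return provider
    # Anything else (e.g. a local Ollama model or a custom endpoint)
    # has no hosted-provider key associated with it.
    return "unknown"
```
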
For Ollama (local models), no API key needed:
ora -q "prompt" --model llama3.2
For any OpenAI-compatible endpoint:
ora -q "prompt" --model mixtral-8x7b --endpoint https://api.together.xyz/v1
How ora Works
Instead of a single API call, ora runs an iterative loop:
iter 1: answer(prompt) → answer_v1
iter 2: critique(answer_v1) → refine(critique) → answer_v2
iter 3: critique(answer_v2) → refine(critique) → answer_v3
...
stop: confidence >= threshold OR budget reached OR max iterations
Each iteration, the model scores its own confidence (0.00–1.00). ora returns the highest-confidence answer across all iterations — not necessarily the last one.
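The loop above can be sketched in Python. This is an illustrative rendering of the documented behavior, not ora's source: `answer`, `critique`, `refine`, and `score` stand in for model calls, and the budget and convergence checks are omitted for brevity.

```python
def reasoning_loop(prompt, answer, critique, refine, score,
                   min_iter=2, max_iter=10, threshold=0.85):
    """Run the answer -> critique -> refine loop, keeping the
    highest-confidence answer seen across all iterations."""
    current = answer(prompt)
    best = (score(current), current)
    for i in range(2, max_iter + 1):
        current = refine(critique(current))
        conf = score(current)
        if conf > best[0]:
            best = (conf, current)
        # stop once past min_iter and confident enough
        if i >= min_iter and conf >= threshold:
            break
    return best  # (confidence, answer) — not necessarily the last one
```
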
Strategies
Critique (default)
Answer → critique → refine → repeat. Best for most queries.
ora -q "prompt" --strategy critique
Debate
Two perspectives (advocate + skeptic) debate, then synthesize. Best for nuanced topics.
ora -q "postgres vs mongodb for IoT?" --strategy debate
Research
Decomposes into sub-questions, answers each, synthesizes. Best for complex research.
ora -q "full EV market analysis" --strategy research
Confidence & Stopping
ora stops when any of these conditions are met:
- Confidence reaches the threshold (default: 0.85)
- Budget is exhausted
- Max iterations reached
- Convergence detected (confidence barely changing for 3 iterations)
ora always runs at least --min-iter iterations regardless of confidence.
ora -q "prompt" --confidence 0.95 # higher bar
ora -q "prompt" --min-iter 5 # at least 5 iterations
Budget Control
Set a per-run spending cap in USD:
ora -q "prompt" --budget 0.50
# Check your spending
ora cost
ora cost --by-model
ora warns at 80% of budget and stops gracefully at the limit, returning the best answer found so far.
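The warn-at-80%, stop-at-the-limit behavior boils down to a simple threshold check. A sketch (hypothetical helper, not ora's code):

```python
def check_budget(spent: float, budget: float) -> str:
    """Classify per-run spending against the budget cap."""
    if spent >= budget:
        return "stop"   # stop gracefully; return best answer found so far
    if spent >= 0.8 * budget:
        return "warn"   # emit the 80% warning, keep iterating
    return "ok"
```
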
Background Jobs
Run long queries in the background:
ora -q "deep analysis" --bg
# → [ora-3] started in background
ora list # see all processes
ora attach ora-3 # stream output
ora status ora-3 # check progress
ora pause ora-3 # pause mid-run
ora resume ora-3 # continue
ora kill ora-3 # terminate (saves best answer)
Memory & Continue
Chain runs together using previous answers as context:
# Inject a previous run's output
ora -q "go deeper on point 3" --memory ora-1
# Inject last 3 runs
ora -q "what patterns do you see?" --memory last:3
# Continue — inherits model, system prompt, strategy
ora -q "expand on the risk section" --continue last
Prompt Crafting
Let ora optimize your prompt before running:
# Craft from intent
ora --craft "analyze the EV market every Monday"
# Improve a weak prompt
ora --craft --improve "tell me about stocks"
# Interactive workflow builder
ora --guide
--craft generates an optimized prompt plus ready-to-run commands. --guide walks you through building a workflow step by step.
Context Files & URLs
ora -q "summarize this" --context report.pdf
ora -q "critique this" --context https://example.com/article
ora -q "compare these" --context doc1.pdf --context doc2.pdf
ora strips HTML, enforces size limits, and warns about potential prompt injection.
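The HTML stripping and size cap can be sketched with Python's standard library. Illustrative only — ora's actual limits and sanitization rules are not specified here, and the 100,000-character cap is an assumption:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect text content only, dropping tags plus script/style bodies."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = False
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True
    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False
    def handle_data(self, data):
        if not self._skip:
            self.parts.append(data)

def prepare_context(html: str, max_chars: int = 100_000) -> str:
    """Strip markup, normalize whitespace, and enforce a size limit."""
    parser = TextExtractor()
    parser.feed(html)
    text = " ".join(" ".join(parser.parts).split())
    if len(text) > max_chars:
        raise ValueError("context exceeds size limit")
    return text
```
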
Dashboard
Live terminal UI showing all processes, costs, and scheduled jobs:
ora dashboard
Navigate with arrow keys. Enter to attach, K to kill, P to pause, Q to quit.
CLI Reference
# Query
ora -q "prompt" # basic query
ora -q "prompt" --model claude-opus-4-5 # specific model
ora -q "prompt" --budget 0.50 # cost cap
ora -q "prompt" --strategy research # strategy
ora -q "prompt" --bg # background
ora -q "prompt" --quiet # final answer only
ora -q "prompt" --output json # JSON output
ora -q "prompt" --save report.md # save to file
ora -q "prompt" --dry-run # estimate only
# Process management
ora list [--all] [--running]
ora attach <id>
ora status <id>
ora kill <id>
ora pause <id>
ora resume <id>
# Tools
ora cost [--by-model]
ora history [--search "keyword"]
ora prompts [--save "name=content"]
ora dashboard
ora config show
ora config set <key> <value>
# Utilities
ora --craft "intent"
ora --guide
ora test [--dry-run]
ora export <id> --format md|json
ora --version
JSON Output
Use --output json --quiet for structured output in pipelines:
{
"schema_version": 1,
"id": "ora-3",
"status": "done",
"stopped_by": "confidence_threshold",
"answer": "...",
"confidence": 0.91,
"iterations": { "completed": 4, "min": 2, "max": 10 },
"cost": { "total_usd": 0.18, "tokens_in": 8200, "tokens_out": 4100 }
}
# Extract answer
ora -q "prompt" --output json --quiet | jq -r '.answer'
# Check confidence
ora -q "prompt" --output json --quiet | jq '.confidence > 0.85'
Configuration
Config lives at ~/.ora/config.toml:
ora config show # view all settings
ora config set model claude-opus-4-5 # default model
ora config set budget 1.00 # default budget
ora config set min-iter 2 # default min iterations
ora config set max-iter 10 # default max iterations
ora config set strategy critique # default strategy
ora config set confidence 0.85 # default threshold
Priority: explicit flags > environment variables > config file > defaults.
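That precedence chain is easy to express directly. A Python sketch, assuming an `ORA_<KEY>` naming scheme for environment variables (the function and its signature are hypothetical, not part of ora):

```python
import os

def resolve(key, flags, config, default):
    """Resolve a setting: flags > environment variables > config file > defaults."""
    if key in flags:                      # explicit CLI flag wins
        return flags[key]
    env = os.environ.get("ORA_" + key.upper().replace("-", "_"))
    if env is not None:                   # then the environment
        return env
    if key in config:                     # then ~/.ora/config.toml
        return config[key]
    return default                        # finally the built-in default
```
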
Cost Tracking
ora tracks the cost of every run automatically, using the per-token pricing published by each AI provider. You pay your provider directly; ora just tracks the spend for you.
# See your spending
ora cost # today, week, month, lifetime
ora cost --by-model # breakdown by model
# Estimate before running
ora -q "prompt" --dry-run
# Set a budget cap
ora -q "prompt" --budget 0.50
Local models (Ollama) are free. For cloud models, check your provider's pricing page for current rates. ora will never spend more than your --budget.
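A dry-run estimate is essentially token counts times per-token prices. A sketch using the token counts from the JSON example above and hypothetical prices (real rates vary by model; check your provider's pricing page):

```python
def estimate_cost(tokens_in, tokens_out, price_in_per_mtok, price_out_per_mtok):
    """Estimate run cost in USD from token counts and per-million-token prices."""
    return (tokens_in * price_in_per_mtok
            + tokens_out * price_out_per_mtok) / 1_000_000

# Hypothetical prices: $3 per million input tokens, $15 per million output tokens.
cost = estimate_cost(8200, 4100, 3.0, 15.0)  # ≈ $0.086
```
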