Most AI tools give you a confident answer. MIRA is designed to give you a correct answer — and show you exactly how it got there.

The core problem with single-shot AI

When you ask a standard language model a complex question, it:
  1. Generates a statistically likely continuation of your prompt
  2. Returns it as a single response
  3. Gives you no visibility into its reasoning
  4. Has no way to verify whether the answer is actually right
For professional work — financial analysis, research synthesis, code review, security assessment — this is a serious limitation. A plausible-sounding wrong answer can be worse than no answer at all.

MIRA’s approach

MIRA uses multi-step reasoning rather than single-shot generation. Every query is decomposed into sub-tasks, each reasoned through explicitly, with the results composed into a final answer. What this means in practice:
  • Complex questions are broken into investigable sub-questions
  • Each sub-task is handled with the appropriate reasoning strategy
  • Results are synthesised from actual intermediate outputs, not assembled from statistical plausibility
  • If you are using the RLM Engine, every step is visible to you in the REPL Console
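The decompose → reason → compose pattern described above can be sketched in a few lines. This is an illustrative toy, not MIRA's implementation; every function name here is hypothetical, and in a real system the decompose and reason steps would be performed by a language model rather than string templates.

```python
# Hypothetical sketch of multi-step reasoning: break a query into
# sub-questions, handle each one, then compose the final answer
# from actual intermediate outputs. All names are illustrative.

def decompose(query: str) -> list[str]:
    # A real system would have an LLM propose these sub-questions.
    return [f"{query} (part {i})" for i in (1, 2)]

def reason(sub_task: str) -> str:
    # Each sub-task would be routed to an appropriate reasoning strategy.
    return f"result for {sub_task}"

def compose(results: list[str]) -> str:
    # The answer is assembled from intermediate results, not guessed.
    return "; ".join(results)

def answer(query: str) -> str:
    sub_tasks = decompose(query)
    results = [reason(t) for t in sub_tasks]
    return compose(results)

print(answer("compare Q3 revenue"))
```

The point of the structure is that each intermediate result exists as a concrete value before the final answer is assembled, which is what makes the reasoning inspectable.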

Two engines, one interface

MIRA ships with two fundamentally different reasoning engines. You choose which to use based on what the problem requires.

Native Agent Engine

Direct LLM API with built-in memory management and multi-agent orchestration. Best for open-ended research, long sessions, and multi-domain tasks.

RLM Engine

Write Python → execute → observe → refine. Best for data analysis, calculations, and anything where you need a verifiably correct answer.

Shared capabilities

Both engines support:
  • Skills (reasoning personas)
  • Workflows (multi-step pipelines)
  • Document context (PDF, DOCX, CSV…)
  • MCP tool integration
  • Persistent session history
  • Streaming output
REPL Console visibility is available in the RLM Engine only.
The difference is how they reason under the hood, not what they’re connected to.

Choosing between engines

Use Native Agent Engine when:
  • The question is open-ended or multi-domain
  • You need long conversational continuity
  • The problem benefits from parallel sub-task execution
  • You want the fastest time-to-answer
Use RLM Engine when:
  • The problem involves quantitative data or calculations
  • You need to verify answers — not just trust them
  • You’re working with specific datasets and want exact computed results
  • Seeing every code iteration is useful for your work
Switch in Settings → Engine (⌘,). The active engine is always visible in the status bar.