MIRA ships with two reasoning engines. The active engine is shown in the status bar at the bottom of the window. Switch between them in Settings → Engine (⌘,).

Engine comparison

| Dimension | Native Agent Engine (NAE) | RLM Engine |
| --- | --- | --- |
| Default | ✅ Yes | No |
| LLM access | Direct API calls | Via Python bridge |
| Context management | Built-in — token budget, compaction, summarisation, episodic memory | Delegated to RLM layer |
| Code execution | No | Yes — local Python REPL (full OS access) |
| Multi-agent | Single / Multi / Auto modes | Not applicable |
| Providers | OpenAI, Anthropic, Ollama, Bedrock | Bedrock, Anthropic, OpenAI, Ollama |
| Best for | Open-ended research, long sessions, multi-domain tasks | Data analysis, calculations, tasks where provably correct answers matter |
| Streaming | Token-by-token chunks | Iteration-by-iteration updates |
| Skills | ✅ Supported | ✅ Supported |
| Workflows | ✅ Supported | ✅ Supported |
| MCP tools | ✅ Supported | ✅ Supported |
| Document context | ✅ Supported | ✅ Supported |
Switching engines resets the active session — context from one engine is not transferred to the other.

Engine status

The active reasoning engine is always visible in the status bar at the bottom of the MIRA window.
| Status | Meaning |
| --- | --- |
| ● Running | Engine is ready and accepting queries |
| ◐ Initializing | Engine is starting up (normal on first launch, ~60 s) |
| ↻ Restarting | Engine crashed and is being restarted with exponential backoff |
| ✕ Failed | Engine could not start — check Settings → Engine and your credentials |
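The Restarting state uses exponential backoff between restart attempts. As a rough sketch of what that means (the base delay, multiplier, and cap below are illustrative assumptions, not MIRA's actual values):

```python
def backoff_delays(base=1.0, factor=2.0, cap=60.0, attempts=5):
    """Return the wait (in seconds) before each restart attempt.

    Illustrative only: each failed restart doubles the wait,
    up to a maximum, so a crash-looping engine does not spin hot.
    """
    delays = []
    delay = base
    for _ in range(attempts):
        delays.append(min(delay, cap))
        delay *= factor
    return delays

# e.g. backoff_delays() -> [1.0, 2.0, 4.0, 8.0, 16.0]
```

If the engine keeps failing after repeated attempts, the status moves to Failed and you should check Settings → Engine and your credentials.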
Switch engines any time in Settings → Engine (⌘,). Switching resets the active session.

Switching engines

Switching engines resets the active session — context, history, and document context from one engine are not carried over to the other. This is intentional: the two engines maintain fundamentally different internal state. To switch:
  1. Open Settings (⌘,)
  2. Go to the Engine tab
  3. Click the engine you want to activate
  4. A confirmation dialog appears if there is an active session
  5. The status bar updates immediately
Your engine choice is persisted across restarts.

Configuration

Each engine has its own configuration panel in Settings → Engine.

Deep dives

Native Agent Engine

How NAE manages token budgets, compacts context, spawns sub-agents, and maintains episodic memory

RLM Engine

How RLM writes code, executes it, interprets output, and iterates to a verified answer