## Engine comparison
| Dimension | Native Agent Engine (NAE) | RLM Engine |
|---|---|---|
| Default | ✅ Yes | No |
| LLM access | Direct API calls | Via Python bridge |
| Context management | Built-in — token budget, compaction, summarisation, episodic memory | Delegated to RLM layer |
| Code execution | No | Yes — local Python REPL (full OS access) |
| Multi-agent | Single / Multi / Auto modes | Not applicable |
| Providers | OpenAI, Anthropic, Ollama, Bedrock | Bedrock, Anthropic, OpenAI, Ollama |
| Best for | Open-ended research, long sessions, multi-domain tasks | Data analysis, calculations, tasks where provably correct answers matter |
| Streaming | Token-by-token chunks | Iteration-by-iteration updates |
| Skills | ✅ Supported | ✅ Supported |
| Workflows | ✅ Supported | ✅ Supported |
| MCP tools | ✅ Supported | ✅ Supported |
| Document context | ✅ Supported | ✅ Supported |
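The two streaming shapes in the table can be sketched as follows; the function names and update fields are illustrative assumptions, not MIRA's actual streaming API.

```python
def consume_nae_stream(chunks):
    """NAE-style stream: token-by-token text chunks, joined into one reply."""
    return "".join(chunks)

def consume_rlm_stream(updates):
    """RLM-style stream: one structured update per reasoning iteration."""
    return [f"iteration {u['iteration']}: {u['status']}" for u in updates]
```

The practical difference: an NAE consumer can render partial text as it arrives, while an RLM consumer receives a complete status per iteration and renders progress step by step.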
## Engine status

The active reasoning engine is always visible in the status bar at the bottom of the MIRA window.

| Status | Meaning |
|---|---|
| ● Running | Engine is ready and accepting queries |
| ◐ Initializing | Engine is starting up (normal on first launch, ~60 s) |
| ↻ Restarting | Engine crashed and is being restarted with exponential backoff |
| ✕ Failed | Engine could not start — check Settings → Engine and your credentials |
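The ↻ Restarting state can be pictured as a retry loop with exponential backoff. This is a minimal sketch, assuming a `start_engine` callable that raises on failure; the delay values and attempt limit are illustrative, not MIRA's actual settings.

```python
import time

def restart_with_backoff(start_engine, max_attempts=5, base_delay=1.0):
    """Retry engine startup, doubling the wait after each failed attempt."""
    for attempt in range(max_attempts):
        try:
            return start_engine()                  # status: ● Running
        except RuntimeError:
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ... (↻ Restarting)
    raise RuntimeError("engine could not start")   # status: ✕ Failed
```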
## Switching engines

Switching engines resets the active session: context, history, and document context from one engine are not carried over to the other. This is intentional, because the two engines maintain fundamentally different internal state. To switch:

- Open Settings (⌘,)
- Go to the Engine tab
- Click the engine you want to activate
- A confirmation dialog appears if there is an active session
- The status bar updates immediately
## Configuration

Each engine has its own configuration panel in Settings → Engine:

- NAE Settings Reference — provider, model, context budget, multi-agent mode, MCP timeout
- RLM Settings Reference — provider, model, temperature, iteration limits, chunking strategy
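As an illustration only, the two panels might map to settings like these; every key and value below is an assumption about the schema, not MIRA's actual configuration format.

```python
# Hypothetical settings shapes; all keys, values, and model ids are illustrative.
NAE_SETTINGS = {
    "provider": "anthropic",        # one of: openai, anthropic, ollama, bedrock
    "model": "claude-sonnet-4",     # hypothetical model id
    "context_budget_tokens": 120_000,
    "multi_agent_mode": "auto",     # single / multi / auto
    "mcp_timeout_seconds": 30,
}

RLM_SETTINGS = {
    "provider": "bedrock",          # one of: bedrock, anthropic, openai, ollama
    "model": "claude-sonnet-4",     # hypothetical model id
    "temperature": 0.2,
    "max_iterations": 10,
    "chunking_strategy": "by-section",  # hypothetical strategy name
}
```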
## Deep dives

### Native Agent Engine

How NAE manages token budgets, compacts context, spawns sub-agents, and maintains episodic memory.

### RLM Engine

How RLM writes code, executes it, interprets output, and iterates to a verified answer.
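The RLM loop described above can be sketched as follows; `llm` and `run_python` are hypothetical stand-ins for the model bridge and the local REPL, not MIRA's actual interfaces.

```python
def rlm_answer(question, llm, run_python, max_iterations=10):
    """Sketch of the RLM loop: the model writes code, the local REPL runs it,
    and the output feeds the next iteration until the answer verifies."""
    transcript = [question]
    for _ in range(max_iterations):
        code = llm.write_code(transcript)    # hypothetical: model emits code
        output = run_python(code)            # local REPL executes it
        transcript.append(output)
        if llm.is_verified(transcript):      # hypothetical: model checks output
            return llm.final_answer(transcript)
    return None  # iteration limit reached without a verified answer
```

Because the answer comes out of executed code rather than free-form generation, this loop is what makes RLM suit tasks where provably correct answers matter.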