A

Activity Center
A panel in the MIRA UI that shows real-time engine steps, tool calls, document events, and engine lifecycle events for the current session.

A/B Comparison
An Eval Framework feature that puts two eval runs side by side to show which cases improved, regressed, or stayed the same.
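As a rough sketch, the side-by-side comparison can be reduced to classifying each case by its score delta between two runs. The run shape and function name below are hypothetical, not MIRA's actual API:

```typescript
// Sketch: classify eval cases between two runs. Each run is assumed to
// be a map of case ID to numeric score (a hypothetical representation).
type EvalRun = Record<string, number>;

function compareRuns(baseline: EvalRun, candidate: EvalRun) {
  const improved: string[] = [];
  const regressed: string[] = [];
  const unchanged: string[] = [];
  for (const caseId of Object.keys(baseline)) {
    if (!(caseId in candidate)) continue; // case present in only one run
    const delta = candidate[caseId] - baseline[caseId];
    if (delta > 0) improved.push(caseId);
    else if (delta < 0) regressed.push(caseId);
    else unchanged.push(caseId);
  }
  return { improved, regressed, unchanged };
}
```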

C

Chunking
The process of splitting a large document into smaller pieces (chunks) before injecting them into the engine’s context. MIRA uses fixed-size chunking with a configurable chunk size and overlap.

Context bridge
Electron’s contextBridge API — the mechanism by which MIRA’s preload script safely exposes window.api to renderer code without giving it raw Node.js access.

Context window
The maximum number of tokens an LLM can process in a single call. MIRA manages the context window automatically, summarising older messages when usage exceeds the threshold.
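Fixed-size chunking with overlap can be sketched as follows. A character-based split is assumed here for simplicity; the real implementation may count tokens instead:

```typescript
// Sketch of fixed-size chunking: consecutive chunks of chunkSize
// characters, each sharing `overlap` characters with the previous one.
function chunkText(text: string, chunkSize: number, overlap: number): string[] {
  if (overlap >= chunkSize) throw new Error("overlap must be smaller than chunkSize");
  const chunks: string[] = [];
  const step = chunkSize - overlap; // how far the window advances each time
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```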

E

Eval case
A single test scenario in the Eval Framework — an input, expected output criteria, and a scorer type.

Eval profile
A named collection of eval cases that is run together as a test suite.

I

IPC
Inter-Process Communication — how MIRA’s renderer (UI) and main (Node.js) processes exchange messages. All IPC channels are typed and exposed through window.api.

Iteration
One pass of the RLM reasoning loop: the engine reads the current state, generates a response or code block, executes it (if applicable), observes the result, and decides whether to continue or stop.
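One way to picture the iteration loop is the sketch below, with hypothetical generate and execute callbacks standing in for the engine's real internals:

```typescript
// Sketch of an RLM-style loop. `generate` and `execute` are hypothetical
// stand-ins for the model call and the code runner.
interface Step {
  code?: string;        // code block to execute this iteration, if any
  finalAnswer?: string; // set when the engine decides to stop
}

function runLoop(
  generate: (observations: string[]) => Step,
  execute: (code: string) => string,
  maxIterations = 10,
): string {
  const observations: string[] = [];
  for (let i = 0; i < maxIterations; i++) {
    const step = generate(observations);                 // read state, generate
    if (step.finalAnswer !== undefined) return step.finalAnswer; // decide to stop
    if (step.code) observations.push(execute(step.code)); // execute, observe
  }
  return "max iterations reached";
}
```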

K

Keychain
The operating system’s secure credential store. MIRA uses Electron’s safeStorage API, which encrypts credentials using the OS-native mechanism: Keychain on macOS, DPAPI on Windows, and libsecret/KWallet on Linux. Encrypted data is stored in credentials.json in the MIRA data folder.

M

MCP (Model Context Protocol)
An open standard for AI agent tool integration. MCP servers expose tools via JSON-RPC; MIRA connects to them as an MCP client and makes the tools available to both reasoning engines.

N

NAE (Native Agent Engine)
MIRA’s default reasoning engine. A custom multi-agent orchestration system built in Python that manages a token ledger, spawns parallel sub-agents, and compacts long-running context automatically.

P

Provider
An AI API service that supplies language model capabilities to MIRA. Supported providers: OpenAI, Anthropic, AWS Bedrock, Ollama.

R

RLM (Recursive Loop Machine)
MIRA’s second reasoning engine. It runs an iterative loop where each iteration can generate and execute Python code before deciding whether to continue or emit a final answer.

REPL Console
A read-only output panel in MIRA that streams every step of the RLM engine in real time — code written, code executed, output observed, and reasoning decisions. Used for transparency and auditability. NAE does not output to the REPL Console.

Routing rule
A condition on a workflow step that controls what happens after the step runs: continue to the next step, retry the current step, or skip the next step.
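Routing rules of this shape can be sketched as a first-match lookup over the three documented outcomes. The rule shape and names below are assumptions for illustration, not MIRA's configuration format:

```typescript
// Sketch: evaluate routing rules against a step's output and return
// one of the three documented actions. Shapes here are hypothetical.
type Action = "continue" | "retry" | "skip_next";

interface RoutingRule {
  when: (output: string) => boolean; // condition on the step's output
  action: Action;                    // what to do if the condition holds
}

function route(output: string, rules: RoutingRule[]): Action {
  for (const rule of rules) {
    if (rule.when(output)) return rule.action; // first matching rule wins
  }
  return "continue"; // default: proceed to the next step
}
```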

S

Scope
Determines whether a skill, MCP server, or other configuration is active only in the current session (session scope) or in all sessions (global scope).

Skill
A named configuration that changes the engine’s behaviour — it includes instructions, model overrides, and an optional allowlist of MCP tools. Skills can be session-scoped or global.

Summarisation
The process by which the NAE engine condenses older messages into a compact summary when the session’s context usage approaches the threshold, freeing space for new messages.
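The decision of when summarisation fires can be sketched as a simple threshold check. The 0.8 default below is an assumed value, not MIRA's actual threshold:

```typescript
// Sketch: summarisation triggers when context usage exceeds a fraction
// of the context window. The default fraction here is an assumption.
function shouldSummarise(
  usedTokens: number,
  contextWindow: number,
  threshold = 0.8,
): boolean {
  return usedTokens / contextWindow > threshold;
}
```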

T

Template variable
A placeholder in a workflow step instruction that is replaced with dynamic content at run time. Available variables: {{input}}, {{prev_output}}, {{document}}.

Token ledger
The NAE engine’s internal accounting of token usage across the context window. It tracks tokens used by the system prompt, conversation history, documents, and skill instructions.
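Substitution of the template variables listed above can be sketched as a regex replace that leaves unknown placeholders intact (the function name is hypothetical):

```typescript
// Sketch: replace {{name}} placeholders with values from a map,
// preserving any placeholder that has no corresponding value.
function renderTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match: string, name: string) =>
    name in vars ? vars[name] : match,
  );
}
```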

W

Workflow
A multi-step pipeline of prompt, transform, and human_review steps. Workflows chain engine calls together with routing rules to produce complex, structured outputs.