The Recursive Loop Machine (RLM) settings control the LLM provider, model, generation parameters, and concurrency behaviour.
Opening RLM settings
Press ⌘, → Engine tab. If RLM is your active engine (or you want to configure it), its settings appear in the lower panel.
Provider & Model
The RLM supports four providers: AWS Bedrock (default), Anthropic, OpenAI, and Ollama. Choose one with the provider buttons in the Provider & Model section, then enter the Model ID.
You can use any model from a supported provider — MIRA shows suggestion chips as a starting point, but the Model ID field accepts any valid model ID your account can access.
The RLM Engine is designed and tested to work reliably with Claude and GPT series models.
Other models may produce inconsistent results or fail to follow the reasoning loop correctly.
| Provider | Default credential |
|---|---|
| AWS Bedrock | AWS credentials (Bedrock / AWS tab) |
| Anthropic | ANTHROPIC_API_KEY (API Tokens tab) |
| OpenAI | OPENAI_API_KEY (API Tokens tab) |
| Ollama | None (local) |
For Ollama, a Base URL field appears (default: http://localhost:11434). For Bedrock, an AWS Region dropdown appears.
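As a rough sketch of the credential rules in the table above, the following maps each provider to the environment variable it needs. This is illustrative only: the dictionary and `missing_credential` helper are hypothetical, not part of MIRA, though the variable names match the documented ones.

```python
import os

# Illustrative mapping of provider -> required API-key env var.
# None means no key is needed (Bedrock uses AWS credentials from the
# Bedrock / AWS tab; Ollama runs locally).
PROVIDER_CREDENTIALS = {
    "bedrock": None,
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "ollama": None,
}

def missing_credential(provider: str) -> bool:
    """Return True if the provider needs an API key that is not set."""
    var = PROVIDER_CREDENTIALS[provider]
    return var is not None and not os.environ.get(var)
```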
Generation parameters
| Setting | Range | Default | Description |
|---|---|---|---|
| Max iterations | 1–50 | 20 | Hard cap on reasoning loop iterations per query. |
| Temperature | 0–1 | 1.0 | 0 = deterministic · 1 = default model creativity. |
| Max output tokens | 512–32 768 | 8 096 | Maximum tokens per LLM response in the reasoning loop. |
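To make the table concrete, here is a minimal sketch of how these parameters bound a reasoning loop. The `call_llm` callable and the loop shape are assumptions for illustration; MIRA's actual loop is internal to the RLM bridge.

```python
def reasoning_loop(query, call_llm, max_iterations=20, temperature=1.0,
                   max_output_tokens=8096):
    """Run the LLM up to `max_iterations` times, stopping early when done.

    `call_llm` is a placeholder returning (answer, done); temperature and
    max_output_tokens are forwarded to every call, mirroring the settings.
    """
    answer = None
    for _ in range(max_iterations):  # hard cap on iterations per query
        answer, done = call_llm(query, temperature=temperature,
                                max_tokens=max_output_tokens)
        if done:  # the model signalled it has finished reasoning
            break
    return answer
```

Lowering Max iterations trades thoroughness for cost: the loop simply stops sooner, whether or not the model has finished.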
Concurrency & Behaviour
| Setting | Default | Description |
|---|---|---|
| Max concurrent sessions | 3 | How many sessions can run queries at the same time (1–5). |
| Background completion notifications | On | Show a toast when a session finishes while you are viewing a different one. |
| Auto-focus on completion | Off | Automatically switch to the session that just finished. |
| Verbose mode | Off | Stream all RLM iteration output to the console for debugging. |
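The session cap behaves like a counting semaphore: queries beyond the limit wait until a slot frees up. The sketch below shows that pattern with `asyncio`; the function names are hypothetical and MIRA's real scheduler may differ.

```python
import asyncio

MAX_CONCURRENT_SESSIONS = 3  # the documented default (range 1-5)

async def run_query(session_id, semaphore, work):
    # At most MAX_CONCURRENT_SESSIONS coroutines hold the semaphore at once;
    # the rest queue here until a running session completes.
    async with semaphore:
        return await work(session_id)

async def run_all(session_ids, work):
    sem = asyncio.Semaphore(MAX_CONCURRENT_SESSIONS)
    return await asyncio.gather(
        *(run_query(s, sem, work) for s in session_ids)
    )
```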
Runtime
| Setting | Default | Description |
|---|---|---|
| Python executable | python3 | Path to the Python 3.11+ binary used to run the RLM bridge. |
MIRA ships with a bundled Python 3.11 runtime that is used automatically. You only need to change this if you are running a custom Python environment.
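If you do point the setting at a custom environment, you can check the 3.11+ requirement before saving. This helper is an assumption for illustration, not something MIRA provides:

```python
import subprocess

def python_version_ok(executable: str = "python3") -> bool:
    """Return True if `executable` is Python 3.11 or newer."""
    out = subprocess.run(
        [executable, "-c",
         "import sys; print(sys.version_info[0], sys.version_info[1])"],
        capture_output=True, text=True, check=True,
    )
    major, minor = map(int, out.stdout.split())
    return (major, minor) >= (3, 11)
```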
Saving changes
Click Save & restart bridge. The RLM Python bridge restarts automatically with the new settings.