The Recursive Loop Machine (RLM) settings control the LLM provider, model, generation parameters, and concurrency behaviour.

Opening RLM settings

Press ⌘, to open Settings, then select the Engine tab. If RLM is your active engine (or the one you want to configure), its settings appear in the lower panel.

Provider & Model

The RLM supports four providers: AWS Bedrock (default), Anthropic, OpenAI, and Ollama. Choose one with the provider buttons in the Provider & Model section, then enter the Model ID. You can use any model from a supported provider: MIRA shows suggestion chips as a starting point, but the Model ID field accepts any valid model ID your account can access.
The RLM Engine is designed and tested to work reliably with Claude and GPT series models. Other models may produce inconsistent results or fail to follow the reasoning loop correctly.
| Provider | Default credential |
| --- | --- |
| AWS Bedrock | AWS credentials (Bedrock / AWS tab) |
| Anthropic | ANTHROPIC_API_KEY (API Tokens tab) |
| OpenAI | OPENAI_API_KEY (API Tokens tab) |
| Ollama | None (local) |
For Ollama, a Base URL field appears (default: http://localhost:11434). For Bedrock, an AWS Region dropdown appears.
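Before starting a session, it can help to confirm that the credential a provider expects is actually set in your environment. The sketch below mirrors the credential table above; the function name and mapping are illustrative, not part of MIRA's API (Bedrock resolves credentials through the AWS credential chain and Ollama needs none, so both are treated as "nothing to check here").

```python
import os

# Environment variable each hosted provider reads its API key from
# (matches the credential table above). Bedrock uses the AWS credential
# chain and Ollama is local, so neither has a single env var to check.
PROVIDER_ENV_VARS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
}

def credential_present(provider: str) -> bool:
    """Return True if the provider's API key is set, or none is needed."""
    var = PROVIDER_ENV_VARS.get(provider.lower())
    if var is None:  # bedrock / ollama
        return True
    return bool(os.environ.get(var))
```

A check like this only tells you the key exists, not that it is valid; the provider rejects an invalid key on the first request.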

Generation parameters

| Setting | Range | Default | Description |
| --- | --- | --- | --- |
| Max iterations | 1–50 | 20 | Hard cap on reasoning loop iterations per query. |
| Temperature | 0–1 | 1.0 | 0 = deterministic · 1 = default model creativity. |
| Max output tokens | 512–32 768 | 8 096 | Maximum tokens per LLM response in the reasoning loop. |
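The ranges above can be enforced with a small validation helper. This is an illustrative sketch of the bounds, not MIRA's actual validation code:

```python
def validate_rlm_params(max_iterations: int, temperature: float,
                        max_output_tokens: int) -> None:
    """Raise ValueError if any generation parameter is out of range."""
    if not 1 <= max_iterations <= 50:
        raise ValueError("max_iterations must be in 1-50")
    if not 0 <= temperature <= 1:
        raise ValueError("temperature must be in 0-1")
    if not 512 <= max_output_tokens <= 32768:
        raise ValueError("max_output_tokens must be in 512-32768")

# The defaults from the table pass unchanged:
validate_rlm_params(20, 1.0, 8096)
```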

Concurrency & Behaviour

| Setting | Default | Description |
| --- | --- | --- |
| Max concurrent sessions | 3 | How many sessions can run queries at the same time (1–5). |
| Background completion notifications | On | Show a toast when a session finishes while you are viewing a different one. |
| Auto-focus on completion | Off | Automatically switch to the session that just finished. |
| Verbose mode | Off | Stream all RLM iteration output to the console for debugging. |
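A cap like Max concurrent sessions is typically implemented with a semaphore: queries beyond the limit queue until a slot frees up. The asyncio sketch below demonstrates the behaviour only; it is not MIRA's implementation.

```python
import asyncio

MAX_CONCURRENT_SESSIONS = 3  # mirrors the default above

async def run_query(sem, active, peak):
    async with sem:  # blocks while 3 queries are already running
        active[0] += 1
        peak[0] = max(peak[0], active[0])
        await asyncio.sleep(0.01)  # stand-in for the reasoning loop
        active[0] -= 1

async def main():
    sem = asyncio.Semaphore(MAX_CONCURRENT_SESSIONS)
    active, peak = [0], [0]
    # 8 sessions submit queries, but at most 3 run concurrently
    await asyncio.gather(*(run_query(sem, active, peak) for _ in range(8)))
    return peak[0]

peak = asyncio.run(main())
print(peak)  # peak concurrent queries observed: never exceeds 3
```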

Runtime

| Setting | Default | Description |
| --- | --- | --- |
| Python executable | python3 | Path to the Python 3.11+ binary used to run the RLM bridge. |
MIRA ships with a bundled Python 3.11 runtime that is used automatically. You only need to change this if you are running a custom Python environment.
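If you do point the Python executable at a custom environment, the binary must be 3.11 or newer. A quick way to check a candidate executable before saving (the helper names here are illustrative, not part of MIRA):

```python
import re
import subprocess

def python_version(executable: str) -> tuple[int, int]:
    """Return the (major, minor) version reported by `<executable> --version`."""
    res = subprocess.run([executable, "--version"],
                         capture_output=True, text=True)
    # Old interpreters printed the version to stderr, so check both streams.
    out = res.stdout + res.stderr
    m = re.search(r"Python (\d+)\.(\d+)", out)
    if not m:
        raise RuntimeError(f"could not parse a version from {executable!r}")
    return int(m.group(1)), int(m.group(2))

def meets_rlm_requirement(executable: str) -> bool:
    return python_version(executable) >= (3, 11)
```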

Saving changes

Click Save & restart bridge. The RLM Python bridge restarts automatically with the new settings.