The REPL Console is an output panel that shows the code the RLM reasoning engine writes and executes during its reasoning loop. It is a read-only log — the engine generates and runs code automatically when using RLM mode; you cannot type or run code in it manually.

Opening the REPL Console

Press ⌘` or click the Terminal icon in the console toolbar. The console slides up from the bottom of the chat window.

What it shows

When the RLM engine is active and processing a query, each iteration of its reasoning loop appears in the console:
| Stream prefix | Colour | Meaning |
| --- | --- | --- |
| `stdout` | Normal | Code output printed to stdout |
| `stderr` | Red | Errors and stack traces |
| `system` | Blue (mira) | Engine status messages (start, stop, etc.) |
| `stdin` | Blue | Code passed to the interpreter |
Each line shows a timestamp, a stream prefix, and the content. You can scroll to review earlier output or click Latest to jump to the bottom.
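As an illustration of the line format described above, the following sketch composes log lines the same way the console does: a timestamp, a stream prefix, then the content. The `format_line` helper is hypothetical, not part of the product; it only shows the structure of each entry.

```python
from datetime import datetime

def format_line(stream: str, content: str) -> str:
    # Hypothetical helper: timestamp, left-padded stream prefix, then content,
    # mirroring the console's per-line layout.
    ts = datetime.now().strftime("%H:%M:%S")
    return f"[{ts}] {stream:<6} {content}"

print(format_line("stdin", "print(2 + 2)"))
print(format_line("stdout", "4"))
```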

How the RLM engine uses the console

When RLM is the active engine, it autonomously writes Python code, executes it, reads the output, and decides whether to refine its approach — all within a single query. Every execution step is logged to the console so you can follow the engine’s reasoning in real time.
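The loop described above can be sketched conceptually. This is not the actual engine: `generate_code` and `should_stop` are hypothetical stand-ins for the model's code generation and stopping decision, and the iteration cap plays the role of the `maxIterations` setting mentioned under Security.

```python
import io
import contextlib

def run_step(code: str, namespace: dict) -> str:
    # Execute one iteration's code locally and capture its stdout,
    # the way each step's output is logged to the console.
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, namespace)
    return buf.getvalue()

def reasoning_loop(generate_code, should_stop, max_iterations=10):
    # Conceptual sketch of an RLM-style loop: write code, run it,
    # read the output, and decide whether to refine or stop.
    namespace = {}
    output = ""
    for _ in range(max_iterations):
        code = generate_code(output)        # engine writes code from prior output
        output = run_step(code, namespace)  # state persists across iterations
        if should_stop(output):
            break
    return output
```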
The NAE engine does not execute code. The REPL Console will be empty when NAE is the active engine.

Console toolbar

| Button | Action |
| --- | --- |
| Copy all | Copies the full console log (with timestamps and prefixes) to the clipboard |
| Clear (🗑) | Clears all log lines — does not affect the running Python session |
| Close | Closes the console panel (shortcut: ⌘`) |

Session state

The Python interpreter state persists for the lifetime of the chat session — variables and imports from one iteration are available in subsequent iterations within the same session. Starting a new chat session resets the Python state.
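The persistence behaviour can be demonstrated with a minimal sketch: running each step's code in the same shared namespace means a variable defined in one iteration is still there in the next. This is an illustration of the concept, not the engine's actual implementation.

```python
# Each exec call stands in for one iteration of the reasoning loop;
# sharing the namespace dict is what makes state persist between them.
namespace = {}
exec("total = 10", namespace)   # iteration 1 defines a variable
exec("total += 5", namespace)   # iteration 2 reuses it
print(namespace["total"])       # → 15
```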

Security

  • Python code runs in a subprocess on your machine — it has access to your filesystem and network.
  • Code is executed locally — nothing is sent to any AI provider.
  • The maximum number of iterations per query is controlled by the maxIterations setting on the active skill or the global RLM default.
You are responsible for the queries you send. If you ask the engine to write and execute code that modifies files or makes network calls, it will do exactly that. There is no sandboxing beyond regular OS user permissions.
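To make the first point concrete: code launched in a local subprocess inherits your OS user's permissions, including filesystem access. This sketch (my example, not the engine's implementation) shows a subprocess reading the current working directory exactly as generated code could.

```python
import subprocess
import sys

# Run a snippet in a local subprocess, as the docs describe; it sees the
# same filesystem your user account can access.
result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.getcwd())"],
    capture_output=True,
    text=True,
)
print(result.stdout, end="")
```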