Real-time streaming
MIRA renders responses as they arrive — token by token for NAE, iteration by iteration for RLM. You see the answer being constructed rather than waiting for it to complete.

NAE streaming: Each token from the LLM API arrives via the `nae:chunk` IPC event and is appended to the message bubble in real time. Rendering is throttled to ~60 fps using a `requestAnimationFrame` flush loop, so fast model responses don't block the UI.
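The chunk-batching pattern described above can be sketched as follows. `ChunkBuffer` and the injected `schedule` callback are illustrative names, not MIRA's actual implementation; in the renderer you would wire `push` to the `nae:chunk` IPC listener and pass `requestAnimationFrame` as the scheduler.

```typescript
// Sketch: batch incoming stream chunks and flush at most once per frame.
// `schedule` stands in for requestAnimationFrame so the same logic can
// run (and be tested) outside a browser.
type Schedule = (cb: () => void) => void;

class ChunkBuffer {
  private pending: string[] = [];
  private flushQueued = false;

  constructor(
    private render: (text: string) => void,
    private schedule: Schedule,
  ) {}

  // Called for every incoming chunk (e.g. from an IPC event handler).
  push(chunk: string): void {
    this.pending.push(chunk);
    if (!this.flushQueued) {
      this.flushQueued = true;           // queue one flush per frame
      this.schedule(() => this.flush());
    }
  }

  private flush(): void {
    this.flushQueued = false;
    if (this.pending.length > 0) {
      this.render(this.pending.join("")); // one DOM update per frame
      this.pending = [];
    }
  }
}
```

Because the flush is queued at most once per frame, a burst of hundreds of tokens between frames costs a single render call rather than hundreds.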
RLM streaming: Each reasoning iteration (code written, code executed, output observed) arrives as an `rlm:iteration` event. The REPL Console shows the full iteration stream. The final answer arrives as an `rlm:final` event and is rendered as a complete Markdown block.
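A minimal sketch of handling the two RLM event kinds, assuming payload shapes invented for illustration (`RlmIteration`, `RlmFinal`) — MIRA's actual IPC payloads may differ:

```typescript
// Assumed payload shapes, not MIRA's real types.
interface RlmIteration {
  index: number;
  code: string;   // code the model wrote this iteration
  output: string; // what executing it produced
}

interface RlmFinal {
  answer: string; // complete Markdown answer
}

type RlmEvent =
  | { type: "rlm:iteration"; payload: RlmIteration }
  | { type: "rlm:final"; payload: RlmFinal };

// Accumulates the iteration stream (for a REPL-console-style view)
// and records the final answer when it arrives.
class RlmStreamState {
  readonly iterations: RlmIteration[] = [];
  finalAnswer: string | null = null;

  handle(event: RlmEvent): void {
    if (event.type === "rlm:iteration") {
      this.iterations.push(event.payload);
    } else {
      this.finalAnswer = event.payload.answer;
    }
  }
}
```

The key difference from token streaming is granularity: iterations append as discrete units as they arrive, while the final answer replaces nothing and is rendered once, as a whole Markdown block.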
Markdown rendering
All assistant responses are rendered as Markdown with these elements supported:

| Element | Example |
|---|---|
| Headings | ## Section Title |
| Bold / italic | **bold**, *italic* |
| Inline code | `variable` |
| Code blocks | Fenced with ``` and optional language tag |
| Tables | GFM pipe tables |
| Ordered lists | 1. item |
| Unordered lists | - item |
| Blockquotes | > quote |
| Links | [text](url) |
| Horizontal rule | --- |