Opening NAE settings
Press ⌘, → Engine tab. If NAE is your active engine (or you want to configure it), its settings appear in the lower panel.

Provider & Model
The NAE supports four providers: OpenAI, Anthropic, Ollama (local), and AWS Bedrock. Select a provider with the provider buttons in the Provider & Model section, then type or confirm the Model ID. You can use any model from a supported provider — MIRA shows suggestion chips as a starting point, but the Model ID field accepts any valid model ID your account can access.

| Provider | Credential needed |
|---|---|
| OpenAI | OPENAI_API_KEY in API Tokens tab |
| Anthropic | ANTHROPIC_API_KEY in API Tokens tab |
| Ollama | None (local) |
| AWS Bedrock | AWS credentials in Bedrock / AWS tab |
For Ollama, a Base URL field appears (default http://localhost:11434). For Bedrock, an AWS Region selector appears.
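The credential table above can be summarised as a simple lookup: each provider either requires one credential or none. A minimal sketch of that mapping (the dictionary and function names are illustrative, not MIRA internals):

```python
# Illustrative provider-to-credential mapping, mirroring the table above.
# Keys and the helper name are hypothetical, not MIRA's actual code.
REQUIRED_CREDENTIAL = {
    "openai": "OPENAI_API_KEY",       # set in the API Tokens tab
    "anthropic": "ANTHROPIC_API_KEY", # set in the API Tokens tab
    "ollama": None,                   # local engine, no credential
    "bedrock": "AWS credentials",     # set in the Bedrock / AWS tab
}

def credential_needed(provider: str):
    """Return the credential a provider requires, or None for local."""
    return REQUIRED_CREDENTIAL[provider]

print(credential_needed("ollama"))  # None
print(credential_needed("openai"))  # OPENAI_API_KEY
```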
Context management
| Setting | Default | Description |
|---|---|---|
| Context Budget Ceiling | 120 000 tokens | Maximum token budget per session. Compaction triggers at the warn threshold. |
| Max Output Tokens | 8 000 tokens | Tokens reserved for the model’s response per turn. |
| Compaction Warn Threshold | 75% | When context usage reaches this fraction, older messages are compacted to free space. |
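The three settings above interact: compaction triggers once context usage reaches the warn threshold of the budget ceiling, while the output reservation stays untouched. A sketch of that arithmetic with the default values (function and constant names are illustrative, not MIRA's API):

```python
# Default NAE context settings, as listed in the table above.
CONTEXT_BUDGET_CEILING = 120_000   # max tokens per session
MAX_OUTPUT_TOKENS = 8_000          # reserved for the model's reply each turn
COMPACTION_WARN_THRESHOLD = 0.75   # fraction of ceiling that triggers compaction

def should_compact(used_tokens: int) -> bool:
    """Return True once context usage crosses the warn threshold."""
    return used_tokens >= CONTEXT_BUDGET_CEILING * COMPACTION_WARN_THRESHOLD

# With the defaults, compaction triggers at 75% of 120,000 = 90,000 tokens.
print(should_compact(89_999))  # False
print(should_compact(90_000))  # True
```

With the defaults, older messages start being compacted at 90,000 tokens of usage, well before the ceiling, so the 8,000-token output reservation is never squeezed.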
Sub-agent behaviour
| Setting | Default | Description |
|---|---|---|
| Max Sub-Agents | 4 | Maximum sub-agents the engine can spawn per task (1–8). |
| Sub-Agent Parallelism | 2 | How many sub-agents may run concurrently (1–4). |
| MCP Tool Timeout | 10 s | Maximum time to wait for an MCP tool call before marking it as failed (5–60 s). |
Saving changes
Click Save NAE Settings. The message “Changes apply immediately — no restart needed” confirms settings were applied.