The Native Agent Engine (NAE) settings control the LLM provider, model selection, context budget, and sub-agent behaviour.

Opening NAE settings

Press ⌘, then open the Engine tab. If NAE is your active engine (or the one you want to configure), its settings appear in the lower panel.

Provider & Model

The NAE supports four providers: OpenAI, Anthropic, Ollama (local), and AWS Bedrock. Select a provider with the provider buttons in the Provider & Model section, then type or confirm the Model ID. You can use any model from a supported provider: MIRA shows suggestion chips as a starting point, but the Model ID field accepts any valid model ID your account can access.
| Provider | Credential needed |
| --- | --- |
| OpenAI | OPENAI_API_KEY in API Tokens tab |
| Anthropic | ANTHROPIC_API_KEY in API Tokens tab |
| Ollama | None (local) |
| AWS Bedrock | AWS credentials in Bedrock / AWS tab |
For Ollama, a Base URL field appears (default: http://localhost:11434). For Bedrock, an AWS Region selector appears.

Context management

| Setting | Default | Description |
| --- | --- | --- |
| Context Budget Ceiling | 120 000 tokens | Maximum token budget per session. Compaction triggers at the warn threshold. |
| Max Output Tokens | 8 000 tokens | Tokens reserved for the model’s response per turn. |
| Compaction Warn Threshold | 75% | When context usage reaches this fraction, older messages are compacted to free space. |

Sub-agent behaviour

| Setting | Default | Description |
| --- | --- | --- |
| Max Sub-Agents | 4 | Maximum sub-agents the engine can spawn per task (1–8). |
| Sub-Agent Parallelism | 2 | How many sub-agents may run concurrently (1–4). |
| MCP Tool Timeout | 10 s | Maximum time to wait for an MCP tool call before marking it as failed (5–60 s). |
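One way to picture how these three limits interact is a bounded worker pool: at most Max Sub-Agents are spawned per task, at most Sub-Agent Parallelism run at once, and a call that exceeds the timeout is marked failed. This is a sketch of the scheduling pattern, not MIRA's actual implementation:

```python
import concurrent.futures

MAX_SUB_AGENTS = 4   # Max Sub-Agents per task
PARALLELISM = 2      # Sub-Agent Parallelism (concurrent workers)
TOOL_TIMEOUT_S = 10  # MCP Tool Timeout (seconds)

def run_sub_agents(tasks):
    """Run up to MAX_SUB_AGENTS tasks, PARALLELISM at a time, with a timeout each."""
    tasks = tasks[:MAX_SUB_AGENTS]  # excess tasks are never spawned
    results = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=PARALLELISM) as pool:
        futures = [pool.submit(task) for task in tasks]
        for future in futures:
            try:
                results.append(future.result(timeout=TOOL_TIMEOUT_S))
            except concurrent.futures.TimeoutError:
                results.append("failed")  # marked failed after the timeout
    return results
```

With the defaults, a task that requests six sub-agents gets four, running two at a time.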

Saving changes

Click Save NAE Settings. The message “Changes apply immediately — no restart needed” confirms the settings were applied.