Providers
| Provider | Settings page |
|---|---|
| AWS Bedrock | AWS Bedrock |
| OpenAI | OpenAI |
| Anthropic | Anthropic (Claude) |
| Ollama | Ollama (Local Models) |
Engine
| Sub-section | Settings page |
|---|---|
| Native Agent Engine | NAE Settings |
| RLM Engine | RLM Settings |
MCP Tools
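MCP servers connect over one of two transports: stdio (a child process exchanging JSON-RPC messages over its stdin/stdout) or SSE (HTTP with server-sent events). As an illustration of the stdio transport only, the sketch below spawns a stand-in echo server and exchanges one newline-delimited JSON-RPC message with it; the child process and the `tools/list` method name are illustrative stand-ins, not MIRA's actual server or API.

```python
import json
import subprocess
import sys

# Stand-in "server": echoes a JSON-RPC result for any request it reads.
# A real stdio MCP server would be the command configured in settings.
CHILD = (
    "import json,sys\n"
    "for line in sys.stdin:\n"
    "    req = json.loads(line)\n"
    "    resp = {'jsonrpc': '2.0', 'id': req['id'], 'result': {'ok': True}}\n"
    "    print(json.dumps(resp), flush=True)\n"
)

proc = subprocess.Popen(
    [sys.executable, "-c", CHILD],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

# Send one JSON-RPC request over the child's stdin (the stdio transport),
# then read the newline-delimited response from its stdout.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.flush()

response = json.loads(proc.stdout.readline())
print(response["result"])  # → {'ok': True}

proc.stdin.close()
proc.wait()
```

An SSE server differs only in transport: the client holds an HTTP connection open and receives the same JSON-RPC payloads as server-sent events instead of stdout lines.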
Manage stdio and SSE MCP servers. See Adding a stdio Server and Adding an SSE Server.
Eval Framework
| Setting | Default | Description |
|---|---|---|
| Embedding model | (none) | Provider + model used for semantic similarity evals |
| Default LLM judge model | (none) | Provider + model used as default for LLM judge evals |
| Run concurrency | 1 | Number of eval cases to run in parallel (1–5) |
| Default pass threshold | 3 (human review), 0.80 (semantic), — (exact) | Per-type default pass thresholds |
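The per-type defaults in the table can be summarized as a pass/fail rule: exact-match evals have no threshold (the output either matches or it does not), semantic evals pass at similarity ≥ 0.80, and human-review evals pass at a rating ≥ 3. The sketch below is a minimal illustration of that rule; the function and dictionary names are hypothetical, not MIRA's actual API.

```python
# Defaults taken from the table above; names are illustrative only.
DEFAULT_THRESHOLDS = {"human_review": 3, "semantic": 0.80}

def passes(eval_type: str, score, threshold=None) -> bool:
    """Decide pass/fail for one eval case.

    - "exact": score is a bool (output matched exactly); no threshold applies.
    - "semantic": score is a similarity in [0, 1]; passes at >= threshold.
    - "human_review": score is a reviewer rating; passes at >= threshold.
    """
    if eval_type == "exact":
        return bool(score)
    if threshold is None:
        threshold = DEFAULT_THRESHOLDS[eval_type]
    return score >= threshold

print(passes("semantic", 0.83))   # True: above the 0.80 default
print(passes("human_review", 2))  # False: below the default of 3
print(passes("exact", True))      # True: exact match, no threshold
```

Overriding the per-type default (e.g. `passes("semantic", 0.83, threshold=0.90)`) corresponds to changing the "Default pass threshold" setting for that eval type.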
Data Management
See Reset & Data Management.
Preferences
See Preferences.
Security
See API Tokens & Security.
About
Displays the installed MIRA version, open source licences, and a link to the changelog.