This guide gets MIRA running as fast as possible. If you want platform-specific detail, see the Installation pages.

Step 1 — Download

Go to GitHub Releases and download the installer for your platform:
  • macOS (Apple Silicon): MIRA-x.x.x-arm64.dmg
  • Windows: MIRA-Setup-x.x.x-x64.exe
  • Linux: MIRA-x.x.x-x64.AppImage, mira_x.x.x_amd64.deb, or mira-x.x.x-x86_64.rpm

Step 2 — Install and launch

On macOS:
  1. Open the downloaded .dmg
  2. Drag MIRA to your Applications folder
  3. Launch MIRA from Applications or Spotlight
  4. If macOS shows “cannot be opened”, go to System Settings → Privacy & Security → Open Anyway
On Windows, run the downloaded .exe installer. On Linux, make the .AppImage executable and run it, or install the .deb or .rpm with your package manager.

Step 3 — First launch setup

On first launch, MIRA automatically:
  1. Sets up a bundled Python 3.11 virtual environment
  2. Installs all required Python dependencies
A splash screen is shown during this one-time setup, which takes roughly 60 seconds and runs only once. Watch the status bar — setup is complete when it shows “NAE ready”.
Python is bundled. MIRA ships a self-contained Python 3.11 runtime. You do not need Python on your PATH and never need to run pip install manually.

Step 4 — Connect your LLM provider

Open Settings (⌘,) and configure a provider. For AWS Bedrock, go to Settings → Bedrock / AWS and enter:
  • AWS Access Key ID
  • AWS Secret Access Key
  • AWS Region (e.g. eu-west-1)
Click Test Connection — MIRA calls sts:GetCallerIdentity and displays your Account ID on success.

Minimum IAM policy required:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
      "Resource": "arn:aws:bedrock:*::foundation-model/anthropic.claude-*"
    }
  ]
}
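If you keep this policy in a file, you can sanity-check it before attaching it in IAM. The sketch below uses only the Python standard library and embeds the same statement wrapped in the full policy-document shape IAM expects (a "Version" field and a "Statement" list); the check itself is illustrative, not part of MIRA.

```python
import json

# The minimal Bedrock policy from above, as a full IAM policy document.
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
      "Resource": "arn:aws:bedrock:*::foundation-model/anthropic.claude-*"
    }
  ]
}
""")

# Collect every action granted by an Allow statement.
required = {"bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"}
granted = {
    action
    for stmt in policy["Statement"]
    if stmt.get("Effect") == "Allow"
    for action in stmt.get("Action", [])
}
missing = required - granted
print("policy OK" if not missing else f"missing actions: {missing}")
```

If the policy attached to your IAM user or role grants less than this, Test Connection may succeed (it only calls STS) while model calls still fail.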

Step 5 — Send your first query

  1. Click New Chat (or ⌘N)
  2. Type a question — something specific to your work
  3. Press Enter
Watch MIRA reason through the answer in real time. Streaming output appears token by token.
Try a question that requires multiple steps — MIRA’s strengths are multi-hop reasoning and verified answers. A simple factual question works too, but won’t show the full capability.

What’s next?

  • Understand the two engines: NAE vs RLM — when to use each
  • Upload a document: add PDFs, CSVs, and code files to your session
  • Apply a Skill: give MIRA a specialised reasoning persona
  • Run a Workflow: chain reasoning into a repeatable pipeline