Getting Started

Install local-ai.run on macOS, Linux, or Windows. Pick the path that fits your environment: a single command for the fastest path, manual Docker Compose for full control, or an offline bundle for air-gapped machines.

Prerequisites

Before you install, make sure your machine meets these requirements:

  • Docker: Docker Desktop (macOS / Windows) or Docker Engine + Compose v2 (Linux); latest stable recommended.
  • RAM: 8 GB minimum; 16 GB+ recommended for larger models.
  • Free disk: ~15 GB for images + base models; 50 GB+ if you plan to install several models.
  • OS: macOS 12+, Ubuntu 22.04+, Debian 12+, or Windows 10/11 (WSL2).
  • Free host ports: 80 (Caddy), 5433 (Postgres), 11434 (Ollama), 8501 (RAG); the installer auto-detects port 80 conflicts.
Docker is the only hard requirement. You do not need Python, Node, or any model runtime on the host — everything ships inside containers.
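
A quick way to confirm Docker and Compose v2 are available before you start:

docker --version
docker compose version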

Quick install (recommended)

One command — the installer detects Docker, generates secrets, brings up the stack, and runs database migrations.

# macOS / Linux
curl -fsSL https://get.local-ai.run/install.sh | bash

# Windows (run inside WSL2)
wsl -d Ubuntu
curl -fsSL https://get.local-ai.run/install.sh | bash

That's it. When the script finishes, open http://local-ai.localhost in your browser.

What the installer does

  1. Checks for Docker, Docker Compose v2, and openssl.
  2. Warns if you have less than ~1 GB of free disk space.
  3. Creates ~/local-ai/ with a generated .env, Caddyfile, and docker-compose.yml.
  4. Generates random DJANGO_SECRET_KEY, RAG_API_KEY, and WHISPER_API_KEY.
  5. Adds local-ai.localhost to /etc/hosts if your OS does not resolve .localhost automatically.
  6. Runs docker compose up -d and waits for Django to be healthy.
  7. Runs python manage.py migrate against the freshly-started Postgres.

Want to inspect the script before running it? Download it, read it, then run:

curl -fsSL https://get.local-ai.run/install.sh -o install.sh
cat install.sh
bash install.sh
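
Either way, once the installer finishes you can confirm the stack from the directory it created:

cd ~/local-ai           # directory created by the installer
docker compose ps       # each service should report "running" or "healthy"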

Manual install (Docker Compose)

Prefer to clone the repo and inspect each step? Use Docker Compose directly.

1. Clone the repository

git clone https://github.com/360solutions-dev/local-ai.git
cd local-ai

To pin to a specific release: git checkout v1.0.3.
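
To see which release tags exist after cloning:

git tag -l 'v*'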

2. Configure environment

cp .env.example .env

Open .env and set at minimum:

  • DJANGO_SECRET_KEY — a long random value, e.g. openssl rand -hex 32
  • POSTGRES_PASSWORD — strong password; keep DATABASE_URL in sync
  • RAG_API_KEY — shared secret used by Django and Next.js to call the RAG service
  • WHISPER_API_KEY — shared secret for the Whisper service
  • CORS_ALLOWED_ORIGINS — comma-separated origins allowed to call the Django API

See Configuration for the full reference. Never commit .env.
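
One way to generate all three secrets at once, then paste the output into .env (a sketch; assumes plain hex strings are valid values for each key):

# print one KEY=value line per secret, ready to paste into .env
for key in DJANGO_SECRET_KEY RAG_API_KEY WHISPER_API_KEY; do
  echo "$key=$(openssl rand -hex 32)"
done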

3. Start the stack

docker compose up --build -d

First build pulls base images and compiles the Next.js frontend. Allow several minutes.
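
You can watch startup progress and confirm all services came up:

docker compose logs -f   # Ctrl-C to stop following
docker compose ps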

4. Run database migrations

docker compose exec django python manage.py migrate
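
To confirm the migrations applied, showmigrations (a standard Django management command) lists each app's migrations with an [X] next to those that have run:

docker compose exec django python manage.py showmigrations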

5. Pull Ollama models

docker compose exec ollama ollama pull llama3.1:8b
docker compose exec ollama ollama pull nomic-embed-text
docker compose exec ollama ollama list

Pick smaller models (llama3.2:3b, phi3:mini) on low-RAM machines.

6. Add the hosts entry

Caddy routes by hostname, so add the following line to your hosts file.

# macOS / Linux
echo "127.0.0.1 local-ai.localhost api.local-ai.localhost" | sudo tee -a /etc/hosts

# Windows: run Notepad as Administrator and edit
#   C:\Windows\System32\drivers\etc\hosts
# then add this line:
127.0.0.1 local-ai.localhost api.local-ai.localhost
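
To verify the names resolve to the loopback address (works on macOS and Linux):

ping -c 1 local-ai.localhost   # should report 127.0.0.1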

7. Open the app

You can now visit:

URL                              What it is
http://local-ai.localhost        Main web app (Next.js, via Caddy)
http://api.local-ai.localhost    Django API (via Caddy)
http://localhost:8501            RAG document chat (Streamlit, optional)
http://localhost:11434           Ollama API (advanced users)

On first open, complete onboarding to create your admin account.
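
If the page doesn't load, curl the main URL from a terminal to check whether Caddy is answering:

curl -I http://local-ai.localhost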

Offline / air-gapped install

For machines without internet access, prepare a bundle on a connected machine and transfer it.

1. Prepare on a connected machine

Build and pull the images, then save them as a single tarball:

docker compose build
docker compose pull

docker save -o local-ai-all-images.tar \
  caddy:2-alpine postgres:16-alpine ollama/ollama:latest \
  local-ai-backend:latest local-ai-frontend:latest \
  local-ai-rag:latest local-ai-whisper:latest

Back up the Ollama models volume so the offline machine has them ready:

docker run --rm \
  -v local-ai_ollama_data:/from \
  -v "$(pwd):/backup" \
  alpine tar czf /backup/ollama_data.tgz -C /from .
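
The offline installer restores this volume for you; if you ever need to do it by hand, the inverse of the backup command is a sketch like:

# recreate the named volume's contents from the backup archive
docker run --rm \
  -v local-ai_ollama_data:/to \
  -v "$(pwd):/backup" \
  alpine tar xzf /backup/ollama_data.tgz -C /to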

2. Transfer to the offline machine

Copy these files (USB, internal share, etc.):

  • local-ai-all-images.tar
  • ollama_data.tgz
  • install.sh (used with --offline in the next step)

3. Run the offline installer

chmod +x install.sh
./install.sh --offline

The script loads the tarballs, restores the Ollama volume, runs docker compose up -d, and applies migrations.

After install, verify with docker compose ps and docker compose exec ollama ollama list.

Updating to a new version

Update from inside the app: Settings → Advanced → Check for Updates → Install Update.

Or manually:

docker compose -f docker-compose.release.yml pull
docker compose -f docker-compose.release.yml up -d
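
Depending on the release, new database migrations may also need to run; assuming the Django service keeps its name in the release compose file, the manual equivalent is:

docker compose -f docker-compose.release.yml exec django python manage.py migrate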

Next steps