# Getting Started
Install local-ai.run on macOS, Linux, or Windows. Pick the path that fits your environment: a single command for the fastest path, manual Docker Compose for full control, or an offline bundle for air-gapped machines.
## Prerequisites
Before you install, make sure your machine meets these requirements:
| Requirement | Minimum | Recommended |
|---|---|---|
| Docker | Docker Desktop (macOS / Windows) or Docker Engine + Compose v2 (Linux) | Latest stable |
| RAM | 8 GB | 16 GB+ for larger models |
| Free disk | ~15 GB for images + base models | 50 GB+ if you plan to install several models |
| OS | macOS 12+, Ubuntu 22.04+, Debian 12+, Windows 10/11 (WSL2) | — |
| Free host ports | 80 (Caddy), 5433 (Postgres), 11434 (Ollama), 8501 (RAG) | — (the installer auto-detects port 80 conflicts) |
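You can sanity-check most of these requirements yourself before installing. A minimal pre-flight sketch (not part of the project; the tool names and the 15 GB figure come from the table above):

```shell
#!/bin/sh
# Pre-flight check sketch: verifies required tools and free disk space.
missing=""
for tool in docker openssl; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
# Compose v2 ships as a docker subcommand, so probe it separately.
docker compose version >/dev/null 2>&1 || missing="$missing docker-compose-v2"

# Free space (in GB) on the filesystem holding the current directory.
free_kb=$(df -Pk . | awk 'NR==2 {print $4}')
free_gb=$((free_kb / 1024 / 1024))

[ -z "$missing" ] && echo "tools: OK" || echo "tools missing:$missing"
echo "free disk: ${free_gb} GB (want >= 15)"
```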
## Quick install (recommended)

One command — the installer detects Docker, generates secrets, brings up the stack, and runs database migrations.

macOS / Linux:

```shell
curl -fsSL https://get.local-ai.run/install.sh | bash
```

Windows (from PowerShell; the command runs inside your WSL2 distribution so the pipe is handled there, not by the Windows shell):

```shell
wsl -d Ubuntu -- bash -c "curl -fsSL https://get.local-ai.run/install.sh | bash"
```
That's it. When the script finishes, open http://local-ai.localhost in your browser.
### What the installer does

- Checks for Docker, Docker Compose v2, and openssl.
- Warns if you have less than ~1 GB of free disk space.
- Creates `~/local-ai/` with a generated `.env`, `Caddyfile`, and `docker-compose.yml`.
- Generates random `DJANGO_SECRET_KEY`, `RAG_API_KEY`, and `WHISPER_API_KEY`.
- Adds `local-ai.localhost` to `/etc/hosts` if your OS does not resolve `.localhost` automatically.
- Runs `docker compose up -d` and waits for Django to be healthy.
- Runs `python manage.py migrate` against the freshly started Postgres.

Prefer to review the script before executing it? Download it first:

```shell
curl -fsSL https://get.local-ai.run/install.sh -o install.sh
cat install.sh
bash install.sh
```
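The installer's secret generation can be reproduced by hand. A sketch, assuming `openssl` is available (with a `/dev/urandom` fallback); the variable names match the list above, and the scratch file stands in for the real `~/local-ai/.env`:

```shell
# Generate one 64-character hex secret; fall back to /dev/urandom if openssl is absent.
gen_secret() {
  openssl rand -hex 32 2>/dev/null || head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n'
}

DJANGO_SECRET_KEY=$(gen_secret)
RAG_API_KEY=$(gen_secret)
WHISPER_API_KEY=$(gen_secret)

# Write to a scratch file here; use your real ~/local-ai/.env in practice.
envfile=$(mktemp)
{
  echo "DJANGO_SECRET_KEY=$DJANGO_SECRET_KEY"
  echo "RAG_API_KEY=$RAG_API_KEY"
  echo "WHISPER_API_KEY=$WHISPER_API_KEY"
} >> "$envfile"
```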
## Manual install (Docker Compose)
Prefer to clone the repo and inspect each step? Use Docker Compose directly.
### Clone the repository

```shell
git clone https://github.com/360solutions-dev/local-ai.git
cd local-ai
```

To pin to a specific release: `git checkout v1.0.3`.
### Configure environment

```shell
cp .env.example .env
```

Open `.env` and set at minimum:

- `DJANGO_SECRET_KEY`: a long random value, e.g. `openssl rand -hex 32`
- `POSTGRES_PASSWORD`: a strong password; keep `DATABASE_URL` in sync
- `RAG_API_KEY`: shared secret used by Django and Next.js to call the RAG service
- `WHISPER_API_KEY`: shared secret for the Whisper service
- `CORS_ALLOWED_ORIGINS`: comma-separated origins allowed to call the Django API

See Configuration for the full reference. Never commit `.env`.
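To confirm nothing was missed, a quick check can grep for each required key. This is a hypothetical helper, not part of the project; the demo runs against a throwaway file rather than your real `.env`:

```shell
# Report any required keys absent from an env file.
check_env() {
  for var in DJANGO_SECRET_KEY POSTGRES_PASSWORD RAG_API_KEY WHISPER_API_KEY CORS_ALLOWED_ORIGINS; do
    grep -q "^${var}=" "$1" || echo "missing: $var"
  done
}

# Demo on a deliberately incomplete file.
demo=$(mktemp)
printf 'DJANGO_SECRET_KEY=abc\nPOSTGRES_PASSWORD=pw\n' > "$demo"
check_env "$demo"    # reports the three keys that are not set
```

In your checkout, run `check_env .env` before starting the stack.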
### Start the stack

```shell
docker compose up --build -d
```
First build pulls base images and compiles the Next.js frontend. Allow several minutes.
### Run database migrations

```shell
docker compose exec django python manage.py migrate
```
### Pull Ollama models

```shell
docker compose exec ollama ollama pull llama3.1:8b
docker compose exec ollama ollama pull nomic-embed-text
docker compose exec ollama ollama list
```

Pick smaller models (`llama3.2:3b`, `phi3:mini`) on low-RAM machines.
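To script several pulls at once, one approach is a dry-run helper that prints the commands for review (pipe the output to `sh` to actually execute them; the model list is just an example):

```shell
# Print one pull command per model; review the output, then pipe to sh to run.
models="llama3.1:8b nomic-embed-text"
for m in $models; do
  echo "docker compose exec ollama ollama pull $m"
done
```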
### Add the hosts entry

Caddy routes by hostname, so add the following line to your hosts file.

macOS / Linux:

```shell
echo "127.0.0.1 local-ai.localhost api.local-ai.localhost" | sudo tee -a /etc/hosts
```

Windows:

```shell
# Run Notepad as Administrator, then edit: C:\Windows\System32\drivers\etc\hosts
# Add this line:
# 127.0.0.1 local-ai.localhost api.local-ai.localhost
```
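Note that re-running the `echo … | sudo tee -a` line appends a duplicate entry each time. An idempotent variant checks first, sketched here against a scratch file (swap in `/etc/hosts` plus `sudo tee -a` for real use):

```shell
hosts=$(mktemp)   # stand-in for /etc/hosts
entry="127.0.0.1 local-ai.localhost api.local-ai.localhost"

# Only append if the hostname is not already present.
add_entry() {
  grep -qF "local-ai.localhost" "$1" || printf '%s\n' "$entry" >> "$1"
}

add_entry "$hosts"
add_entry "$hosts"   # second call is a no-op
grep -cF "local-ai.localhost" "$hosts"   # → 1
```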
### Open the app

You can now visit:

| URL | What it is |
|---|---|
| http://local-ai.localhost | Main web app (Next.js, via Caddy) |
| http://api.local-ai.localhost | Django API (via Caddy) |
| http://localhost:8501 | RAG document chat (Streamlit, optional) |
| http://localhost:11434 | Ollama API (advanced users) |
On first open, complete onboarding to create your admin account.
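A small smoke test can confirm each endpoint answers. A sketch assuming `curl` is installed (the URLs are from the table above; any HTTP status other than `000` means the service is reachable):

```shell
# Probe each endpoint and report its HTTP status; 000 means unreachable.
for url in http://local-ai.localhost http://api.local-ai.localhost \
           http://localhost:8501 http://localhost:11434; do
  code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$url") || code=000
  echo "$url -> $code"
done
```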
## Offline / air-gapped install
For machines without internet access, prepare a bundle on a connected machine and transfer it.
### 1. Prepare on a connected machine

Build images, then save them as a single tarball:

```shell
docker compose build
docker compose pull
docker save -o local-ai-all-images.tar \
  caddy:2-alpine postgres:16-alpine ollama/ollama:latest \
  local-ai-backend:latest local-ai-frontend:latest \
  local-ai-rag:latest local-ai-whisper:latest
```
Back up the Ollama models volume so the offline machine has them ready:

```shell
docker run --rm \
  -v local-ai_ollama_data:/from \
  -v "$(pwd):/backup" \
  alpine tar czf /backup/ollama_data.tgz -C /from .
```
### 2. Transfer to the offline machine

Copy these files (USB, internal share, etc.):

- The full project source tree (or a release zip).
- `local-ai-all-images.tar`: place in `docker-images/`.
- `ollama_data.tgz`: place at the project root.
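USB transfers occasionally corrupt large files, so it can be worth recording checksums before the copy and verifying after. A sketch demonstrated on a scratch file (on macOS, substitute `shasum -a 256` for `sha256sum`):

```shell
tmp=$(mktemp -d) && cd "$tmp"
echo "demo payload" > local-ai-all-images.tar   # stand-in for the real bundle

# On the connected machine, before copying:
sha256sum local-ai-all-images.tar > bundle.sha256

# On the offline machine, after copying both files:
sha256sum -c bundle.sha256 && echo "bundle intact"
```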
### 3. Run the offline installer

```shell
chmod +x install.sh
./install.sh --offline
```

The script loads the image tarball, restores the Ollama volume, runs `docker compose up -d`, and applies migrations. Verify the result with `docker compose ps` and `docker compose exec ollama ollama list`.
## Updating to a new version
Update from inside the app: Settings → Advanced → Check for Updates → Install Update.
Or manually:
```shell
docker compose -f docker-compose.release.yml pull
docker compose -f docker-compose.release.yml up -d
```
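Before either update path, it is prudent to snapshot your configuration first. A sketch demonstrated in a scratch directory (point it at your real install directory, e.g. `~/local-ai/`, in practice):

```shell
tmp=$(mktemp -d) && cd "$tmp"
touch .env docker-compose.yml          # stand-ins for your real files

# Copy the config into a timestamped backup directory.
stamp=$(date +%Y%m%d-%H%M%S)
mkdir -p "backup-$stamp"
cp .env docker-compose.yml "backup-$stamp"/
ls -A "backup-$stamp"
```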