# Install Guide
Step-by-step reference for deploying GeoLens from scratch using Docker Compose.
## Prerequisites

| Requirement | Minimum | Notes |
|---|---|---|
| Docker Engine | 24.0+ | Install Docker |
| Docker Compose | v2.20+ | Included with Docker Desktop; verify with `docker compose version` |
| RAM | 4 GB | PostgreSQL + PostGIS + API + Worker + Frontend |
| Disk | 10 GB | Base images + data volumes; scale with dataset size |
| Ports | 3 ports | Default: 8080 (Frontend), 5434 (PostgreSQL), 8001 (API) |
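Before starting the stack, you can check whether the default ports are free. A rough pre-flight sketch (bash-specific, since `/dev/tcp` is a bash feature; adjust the port list if you changed the defaults in `.env`):

```shell
# Report which of the default GeoLens ports are already in use.
# Note: /dev/tcp is a bash feature, not a real device file.
status=""
for port in 8080 5434 8001; do
  if (exec 3<>"/dev/tcp/127.0.0.1/${port}") 2>/dev/null; then
    status="${status}${port}:in-use "
  else
    status="${status}${port}:free "
  fi
done
echo "$status"
```

Any port reported `in-use` must be changed in `.env` before `docker compose up -d` (see "Port conflicts" below).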
## Optional services (Docker Compose profiles)

GeoLens ships with several optional services gated behind Docker Compose profiles. They are not started by `docker compose up -d` unless you opt in.
| Profile | Services | When to use |
|---|---|---|
| `backup` | backup | Automated nightly database backups (cron-driven, optional S3 upload) |
| `cloud-dev` | minio, minio-setup, valkey | Local S3-compatible object storage and Valkey cache for testing cloud-equivalent setups without provisioning real infrastructure |
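Under the hood this relies on the standard Compose `profiles` key. A hypothetical fragment showing the shape (service and image names are illustrative, not GeoLens's actual compose file):

```yaml
services:
  backup:
    image: example/backup:latest   # illustrative image name
    profiles: ["backup"]           # omitted from a plain `docker compose up -d`
  minio:
    image: minio/minio
    profiles: ["cloud-dev"]        # started only with --profile cloud-dev
```

Services without a `profiles` key always start; services with one start only when their profile is requested.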
### Enable automated backups

```shell
docker compose --profile backup up -d
```

The backup service runs `pg_dump` on the schedule defined by `BACKUP_SCHEDULE` (default: daily at 02:00 UTC) and stores dumps in the `backup_data` volume. Set `BACKUP_S3_ENABLED=true` and configure the standard `S3_*` variables to also push backups to off-site storage. See Configuration Reference — Backup for the full variable list.
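In `.env` terms, the backup settings look like this (`BACKUP_SCHEDULE` and `BACKUP_S3_ENABLED` are named in this guide; the exact `S3_*` variable names are listed in the Configuration Reference, not reproduced here):

```shell
# .env — backup settings
BACKUP_SCHEDULE="0 2 * * *"   # cron expression; default is daily at 02:00 UTC
BACKUP_S3_ENABLED=true        # also push dumps to off-site S3 storage
# ...plus the standard S3_* variables from Configuration Reference — Backup
```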
### Run a local cloud-equivalent stack

```shell
docker compose --profile cloud-dev up -d
```

This starts MinIO (S3-compatible storage on http://localhost:9001) and Valkey (Redis-compatible cache). To switch GeoLens to use them, uncomment the MinIO block in `.env.example` (see "Local S3 Testing") and set `REDIS_URL=redis://valkey:6379/0`. MinIO admin console credentials default to `minioadmin` / `minioadmin`. Use this profile to develop or debug code paths that depend on S3 or a shared cache without provisioning real cloud resources.
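As an `.env` change, the switch amounts to the following (the `REDIS_URL` value is from this guide; the MinIO settings themselves live in the commented block of `.env.example`):

```shell
# .env — point GeoLens at the cloud-dev services
REDIS_URL=redis://valkey:6379/0   # Valkey speaks the Redis protocol
# Uncomment the MinIO block from .env.example here (see "Local S3 Testing")
```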
### Profiles can stack

```shell
docker compose --profile backup --profile cloud-dev up -d
```

## Optional overlays (compose files)

In addition to profiles, a compose overlay file extends the base stack:
| Overlay | Purpose | Command |
|---|---|---|
| `docker-compose.demo.yml` | Pre-populated demo with sample Natural Earth data, public visibility, and 24-hour reset | `cp .env.demo .env && docker compose -f docker-compose.yml -f docker-compose.demo.yml up -d` |
The demo overlay is the easiest way to evaluate GeoLens with realistic data.
## Stopping and Starting

### Stop all services (preserves data)

```shell
docker compose down
```

### Start services

```shell
docker compose up -d
```

### Stop and remove all data

```shell
docker compose down -v
```

The `-v` flag removes the stack's Docker volumes (including `pgdata` and `upload_staging`), deleting all database data and uploaded files.
### View logs

```shell
# All services
docker compose logs -f

# Specific service
docker compose logs -f api
docker compose logs -f db
```

## Upgrading
For version upgrade procedures, rollback steps, and version-specific notes, see the Upgrade Guide.
## Data Persistence

Data is stored in the following Docker volumes:
| Volume | Purpose | Path inside container | Profile |
|---|---|---|---|
| `pgdata` | PostgreSQL data directory | /var/lib/postgresql/data | default |
| `upload_staging` | Uploaded files awaiting ingestion | /app/staging | default |
| `backup_data` | Automated database dumps from the backup service | /backups | backup |
| `minio_data` | Local S3 object store for the cloud-dev profile | /data | cloud-dev |
| `valkey_data` | Local Valkey cache state for the cloud-dev profile | /data | cloud-dev |
These volumes persist across `docker compose down` (without `-v`). See the Admin Guide for backup procedures.
## Troubleshooting

### Missing .env file

If you see a ValidationError with "Field required" errors on startup:

```text
pydantic_core._pydantic_core.ValidationError: 3 validation errors for Settings
```

You forgot to create the `.env` file. Copy the template:

```shell
cp .env.example .env
docker compose up -d
```
### Migration warnings on startup

You may see log messages like:

```text
INFO: Migrations already applied by migrate service (or database not ready yet)
```

This is expected and harmless. The dedicated migrate service runs Alembic migrations before the API starts. The API and worker entrypoints also attempt migrations as a safety net; when the migrate service has already applied them, the entrypoint logs this informational message and proceeds normally.
### Port conflicts

If a port is already in use, change it in `.env`:

```shell
FRONTEND_PORT=8081
API_PORT=8002
DB_PORT=5435
```

Then restart:

```shell
docker compose down && docker compose up -d
```

### Out of memory

If the database or API crashes with OOM errors:
- Increase Docker memory allocation (Docker Desktop: Settings > Resources > Memory)
- Minimum recommended: 4 GB for all services
- For large datasets: 8 GB+
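Beyond raising the global Docker allocation, you can cap individual services so one container cannot starve the rest. A hypothetical override fragment (the limits and file are illustrative; GeoLens does not ship this file):

```yaml
# docker-compose.override.yml (hypothetical)
services:
  db:
    deploy:
      resources:
        limits:
          memory: 2g   # cap PostgreSQL; tune to your dataset size
  api:
    deploy:
      resources:
        limits:
          memory: 1g
```

Compose v2 applies `deploy.resources.limits` from the Compose Specification even outside Swarm mode.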
### Services not starting

Check the startup order and health:

```shell
# View startup logs
docker compose logs --tail=50 db
docker compose logs --tail=50 api

# Check health status
docker compose ps
```

Common issues:

- db not healthy: Check that `POSTGRES_USER` and `POSTGRES_PASSWORD` match in `.env`
- api not starting: Verify the database is healthy first; check for migration errors in the API logs
- frontend 502 errors: Upstream API not ready yet; wait 30 seconds and retry
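The "wait and retry" advice can be scripted. A generic retry helper, with the health-check command left up to you (the `/health` URL in the comment is an assumption, not a documented GeoLens endpoint):

```shell
# retry <attempts> <delay-seconds> <command...>: rerun a command until it
# succeeds or the attempt budget is exhausted.
retry() {
  attempts="$1"; delay="$2"; shift 2
  i=1
  while ! "$@"; do
    [ "$i" -ge "$attempts" ] && return 1
    i=$((i + 1))
    sleep "$delay"
  done
}

# Example usage (URL is an assumption; point it at your API):
# retry 10 3 curl -fsS http://localhost:8001/health
```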
### Export 500 errors (staging permission denied)

If /api/datasets/{id}/export returns HTTP 500, verify staging writability inside the API container:

```shell
docker compose exec api sh -lc '\
  dir=${UPLOAD_STAGING_DIR:-/app/staging}; \
  mkdir -p "$dir/exports" && \
  touch "$dir/.geolens-write-test" "$dir/exports/.geolens-write-test" && \
  rm -f "$dir/.geolens-write-test" "$dir/exports/.geolens-write-test"'
```

Run the runtime export verification spec after permissions are corrected:

```shell
npm run e2e -- e2e/export-runtime.spec.ts
```

Expected success signals:

- Exports pass for `gpkg`, `geojson`, `shp`, and `csv` with attachment payload integrity (SQLite header, FeatureCollection JSON, zip members, CSV header row).
- `target_crs=EPSG:3857` export shows projected coordinate semantics (not only HTTP 200).
- `bbox` and `where` exports are true subsets (feature/property assertions pass).
- Audit logs include `dataset.export` entries with export parameters.

If the writability check fails:

- Fix ownership/permissions on the mounted staging path so uid:gid `1001:1001` can write.
- Or set `UPLOAD_STAGING_DIR` in `.env` to a writable directory and restart services with `docker compose up -d --build`.
- Re-run `npm run e2e -- e2e/export-runtime.spec.ts` to confirm full runtime behavior.
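To see what the permission fix amounts to, here is a self-contained sketch on a scratch directory: it simulates an unwritable staging path, then restores write access (in production the equivalent is `chown -R 1001:1001` on the host side of the mount):

```shell
# Simulate the staging writability problem and its fix in a scratch dir.
dir="$(mktemp -d)/staging"
mkdir -p "$dir/exports"
chmod a-w "$dir" "$dir/exports"          # simulate an unwritable mount

if ! touch "$dir/.geolens-write-test" 2>/dev/null; then
  echo "staging not writable — restoring write permission"
  chmod u+w "$dir" "$dir/exports"        # in production: chown -R 1001:1001 <host path>
fi

touch "$dir/.geolens-write-test" "$dir/exports/.geolens-write-test"
echo "staging writable"
```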
### GDAL/OGR errors during ingestion

The API container includes GDAL. If file ingestion fails:

```shell
# Check API logs for OGR errors
docker compose logs api | grep -i "ogr\|gdal\|error"
```

Supported upload formats: `.zip` (Shapefile), `.gpkg` (GeoPackage), `.geojson`, `.json`, `.csv`
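Before digging into container logs, a quick local sanity check of the upload can save time. A rough sketch for GeoJSON only (it merely confirms the file is valid JSON and mentions a `type` member; it is not a substitute for GDAL's own parsing). The sample file is created inline so the snippet is self-contained; substitute your real file:

```shell
# Create a minimal sample; use your own file in practice.
cat > sample.geojson <<'EOF'
{"type": "FeatureCollection", "features": []}
EOF

# Valid JSON plus a "type" member is the bare minimum a GeoJSON reader expects.
if python3 -m json.tool sample.geojson >/dev/null 2>&1 \
   && grep -q '"type"' sample.geojson; then
  echo "sample.geojson: plausible GeoJSON"
else
  echo "sample.geojson: malformed; ingestion will fail"
fi
```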
### Database connection errors

Verify the database is accessible:

```shell
docker compose exec db pg_isready -U geolens -d geolens
```

If the database is unreachable from the API, ensure `POSTGRES_HOST` is set to `db` (the Docker service name).
### Reset to clean state

To completely reset the installation:

```shell
docker compose down -v
docker compose up -d
```

This removes all data and starts fresh with the default admin account.