
# Install Guide

Step-by-step reference for deploying GeoLens from scratch using Docker Compose.

| Requirement | Minimum | Notes |
| --- | --- | --- |
| Docker Engine | 24.0+ | Install Docker |
| Docker Compose | v2.20+ | Included with Docker Desktop; verify a standalone install with `docker compose version` |
| RAM | 4 GB | PostgreSQL + PostGIS + API + Worker + Frontend |
| Disk | 10 GB | Base images + data volumes; scale with dataset size |
| Ports | 3 ports | Defaults: 8080 (Frontend), 5434 (PostgreSQL), 8001 (API) |
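To confirm the host meets the version minimums before proceeding, a small version comparison can help. This is a sketch: `meets_min` is a hypothetical helper, and the commented `docker version` / `docker compose version` calls assume Docker is already installed and running.

```shell
# meets_min <installed> <minimum> — succeeds if installed >= minimum,
# using sort -V for version-aware ordering.
meets_min() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Against a live daemon you might run:
#   meets_min "$(docker version --format '{{.Server.Version}}')" 24.0
#   meets_min "$(docker compose version --short)" 2.20
meets_min 24.0.7 24.0 && echo "24.0.7 satisfies the 24.0 minimum"
```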

## Optional services (Docker Compose profiles)

GeoLens ships with several optional services gated behind Docker Compose profiles. They are not started by `docker compose up -d` unless you opt in.

| Profile | Services | When to use |
| --- | --- | --- |
| `backup` | `backup` | Automated nightly database backups (cron-driven, optional S3 upload) |
| `cloud-dev` | `minio`, `minio-setup`, `valkey` | Local S3-compatible object storage and Valkey cache for testing cloud-equivalent setups without provisioning real infrastructure |
```shell
docker compose --profile backup up -d
```

The backup service runs `pg_dump` on the schedule defined by `BACKUP_SCHEDULE` (default: daily at 02:00 UTC) and stores dumps in the `backup_data` volume. Set `BACKUP_S3_ENABLED=true` and configure the standard `S3_*` variables to also push backups to off-site storage. See Configuration Reference — Backup for the full variable list.
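For reference, the backup settings named above can be placed in `.env` like this. The values are illustrative; the full variable list, including the `S3_*` settings, is in Configuration Reference — Backup.

```shell
# .env — backup service settings (illustrative values)
BACKUP_SCHEDULE="0 2 * * *"   # cron syntax; the default is daily at 02:00 UTC
BACKUP_S3_ENABLED=true        # also push dumps to S3-compatible storage
```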

```shell
docker compose --profile cloud-dev up -d
```

This starts MinIO (S3-compatible storage on http://localhost:9001) and Valkey (Redis-compatible cache). To switch GeoLens to use them, uncomment the MinIO block in `.env.example` (see “Local S3 Testing”) and set `REDIS_URL=redis://valkey:6379/0`. MinIO admin console credentials default to `minioadmin` / `minioadmin`. Use this profile to develop or debug code paths that depend on S3 or a shared cache without provisioning real cloud resources.
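Putting the switch-over together, the `.env` change looks roughly like this. The `REDIS_URL` value comes from this guide; the MinIO endpoint and credential lines live in the commented MinIO block of `.env.example`, so uncomment them there rather than retyping.

```shell
# .env — point GeoLens at the cloud-dev services
REDIS_URL=redis://valkey:6379/0
# Plus: uncomment the MinIO block in .env.example (see "Local S3 Testing")
```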

```shell
# Both profiles can be enabled at once
docker compose --profile backup --profile cloud-dev up -d
```

In addition to profiles, a compose overlay file extends the base stack:

| Overlay | Purpose | Command |
| --- | --- | --- |
| `docker-compose.demo.yml` | Pre-populated demo with sample Natural Earth data, public visibility, and 24-hour reset | `cp .env.demo .env && docker compose -f docker-compose.yml -f docker-compose.demo.yml up -d` |

The demo overlay is the easiest way to evaluate GeoLens with realistic data.

```shell
# Stop the stack (data volumes are preserved)
docker compose down

# Start the stack again
docker compose up -d

# Stop the stack and delete data volumes (destructive)
docker compose down -v
```

The `-v` flag removes Docker volumes (`pgdata` and `upload_staging`), deleting all database data and uploaded files.

```shell
# All services
docker compose logs -f

# Specific service
docker compose logs -f api
docker compose logs -f db
```

For version upgrade procedures, rollback steps, and version-specific notes, see the Upgrade Guide.

Data is stored in Docker volumes: two in the default stack, plus additional volumes created by the optional profiles:

| Volume | Purpose | Path inside container | Profile |
| --- | --- | --- | --- |
| `pgdata` | PostgreSQL data directory | `/var/lib/postgresql/data` | default |
| `upload_staging` | Uploaded files awaiting ingestion | `/app/staging` | default |
| `backup_data` | Automated database dumps from the backup service | `/backups` | `backup` |
| `minio_data` | Local S3 object store for the cloud-dev profile | `/data` | `cloud-dev` |
| `valkey_data` | Local Valkey cache state for the cloud-dev profile | `/data` | `cloud-dev` |

These volumes persist across `docker compose down` (without `-v`). See the Admin Guide for backup procedures.

If you see a `ValidationError` with “Field required” errors on startup:

```
pydantic_core._pydantic_core.ValidationError: 3 validation errors for Settings
```

The `.env` file is missing. Copy the template:

```shell
cp .env.example .env
docker compose up -d
```

You may see log messages like:

```
INFO: Migrations already applied by migrate service (or database not ready yet)
```

This is expected and harmless. The dedicated `migrate` service runs Alembic migrations before the API starts. The API and worker entrypoints also attempt migrations as a safety net; when the `migrate` service has already applied them, the entrypoint logs this informational message and proceeds normally.

If a port is already in use, change it in `.env`:

```shell
FRONTEND_PORT=8081
API_PORT=8002
DB_PORT=5435
```
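To find a free replacement port before editing `.env`, a quick check along these lines can help. This is a sketch: `port_free` is a hypothetical helper and assumes `ss` (from iproute2) is available; on hosts without it, the check silently reports the port as free.

```shell
# port_free <port> — succeeds if no local TCP listener is bound to <port>.
port_free() {
  ! ss -ltn 2>/dev/null | awk '{print $4}' | grep -q ":$1\$"
}

port_free 8081 && echo "8081 looks free; safe to use for FRONTEND_PORT"
```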

Then restart:

```shell
docker compose down && docker compose up -d
```

If the database or API crashes with OOM errors:

  • Increase Docker memory allocation (Docker Desktop: Settings > Resources > Memory)
  • Minimum recommended: 4 GB for all services
  • For large datasets: 8 GB+

Check the startup order and health:

```shell
# View startup logs
docker compose logs --tail=50 db
docker compose logs --tail=50 api

# Check health status
docker compose ps
```

Common issues:

  • `db` not healthy: check that `POSTGRES_USER` and `POSTGRES_PASSWORD` match in `.env`
  • `api` not starting: verify the database is healthy first; check for migration errors in the API logs
  • frontend 502 errors: upstream API not ready yet; wait 30 seconds and retry

## Export 500 errors (staging permission denied)

If `/api/datasets/{id}/export` returns HTTP 500, verify staging writability inside the API container:

```shell
docker compose exec api sh -lc '\
dir=${UPLOAD_STAGING_DIR:-/app/staging}; \
mkdir -p "$dir/exports" && \
touch "$dir/.geolens-write-test" "$dir/exports/.geolens-write-test" && \
rm -f "$dir/.geolens-write-test" "$dir/exports/.geolens-write-test"'
```

Run the runtime export verification spec after permissions are corrected:

```shell
npm run e2e -- e2e/export-runtime.spec.ts
```

Expected success signals:

  • Exports pass for `gpkg`, `geojson`, `shp`, and `csv` with attachment payload integrity (SQLite header, FeatureCollection JSON, zip members, CSV header row).
  • A `target_crs=EPSG:3857` export shows projected coordinate semantics (not only HTTP 200).
  • `bbox` and `where` exports are true subsets (feature/property assertions pass).
  • Audit logs include `dataset.export` entries with export parameters.

If the writability check fails:

  • Fix ownership/permissions on the mounted staging path so uid:gid `1001:1001` can write.
  • Or set `UPLOAD_STAGING_DIR` in `.env` to a writable directory and restart services with `docker compose up -d --build`.
  • Re-run `npm run e2e -- e2e/export-runtime.spec.ts` to confirm full runtime behavior.

The API container includes GDAL. If file ingestion fails:

```shell
# Check API logs for OGR errors
docker compose logs api | grep -i "ogr\|gdal\|error"
```

Supported upload formats: `.zip` (Shapefile), `.gpkg` (GeoPackage), `.geojson`, `.json`, `.csv`
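Uploads can be pre-screened against this format list before they ever reach the API. A sketch: `is_supported` is a hypothetical helper that only checks the file extension, not the file contents.

```shell
# is_supported <filename> — succeeds if the extension is one this guide
# lists as uploadable (.zip, .gpkg, .geojson, .json, .csv).
is_supported() {
  case "${1##*.}" in
    zip|gpkg|geojson|json|csv) return 0 ;;
    *) return 1 ;;
  esac
}

is_supported roads.gpkg && echo "roads.gpkg: supported"
is_supported dem.tif || echo "dem.tif: not supported (convert first)"
```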

Verify the database is accessible:

```shell
docker compose exec db pg_isready -U geolens -d geolens
```

If the database is unreachable from the API, ensure `POSTGRES_HOST` is set to `db` (the Docker service name).

To completely reset the installation:

```shell
docker compose down -v
docker compose up -d
```

This removes all data and starts fresh with the default admin account.