# Backups & Restore
GeoLens ships an automated backup container that runs on a configurable cron schedule, retains daily and weekly snapshots locally, and optionally replicates to S3-compatible storage. There is no /admin/backups UI page — backups are operated entirely from the Docker Compose CLI and via environment variables.
Replace https://geolens.example.com with your GeoLens instance’s URL in every example below.
## Enabling automated backups

The backup service runs as a Docker Compose profile. Enable it once on the host:
```sh
docker compose --profile backup up -d
```

The container runs `pg_dump` on the schedule defined by `BACKUP_SCHEDULE` (default: daily at 02:00 UTC), keeps 7 daily snapshots and 4 weekly (Sunday) snapshots, and optionally uploads to S3. The service uses the same database credentials as the API, so no additional configuration is required for the local-only path.
To verify the backup container is running:
```sh
docker compose ps backup
```

The backup container restarts automatically; if it exits non-zero (e.g., disk full, connectivity issue), Docker restarts it and the next scheduled run resumes normally.
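Because a failed run simply restarts and waits for the next schedule, a stale backup directory can go unnoticed. A small host-side freshness check is one way to catch that; the following is a sketch, not part of GeoLens, and assumes GNU `stat` and access to the backup volume path:

```sh
# Warn when the newest dump in a directory is older than a threshold.
# Usage: check_backup_freshness <dir> <max_age_seconds>
check_backup_freshness() {
  dir=$1
  max_age=$2
  # Newest first; dump filenames contain no whitespace, so ls parsing is safe
  newest=$(ls -1t "$dir"/*.dump 2>/dev/null | head -n 1)
  if [ -z "$newest" ]; then
    echo "CRITICAL: no backups found in $dir"
    return 2
  fi
  age=$(( $(date +%s) - $(stat -c %Y "$newest") ))   # GNU stat; BSD uses -f %m
  if [ "$age" -gt "$max_age" ]; then
    echo "WARNING: newest backup $newest is ${age}s old"
    return 1
  fi
  echo "OK: $newest (${age}s old)"
}

# Example: alert if the last daily run is more than 25 hours old
# check_backup_freshness /backups/daily $((25 * 3600))
```

Run it from cron or your monitoring agent against the `backup_data` volume mount point on the host.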
## Configuration

| Variable | Default | Purpose |
|---|---|---|
| `BACKUP_SCHEDULE` | `0 2 * * *` | Cron expression for backup execution |
| `BACKUP_RETENTION_DAILY` | `7` | Number of daily backups to keep locally |
| `BACKUP_RETENTION_WEEKLY` | `4` | Number of weekly (Sunday) backups to keep locally |
| `BACKUP_S3_ENABLED` | `false` | Upload backups to S3 in addition to local storage |
| `S3_ENDPOINT`, `S3_BUCKET`, `S3_ACCESS_KEY_ID`, `S3_SECRET_ACCESS_KEY`, `S3_REGION` | — | S3 credentials (shared with the storage provider) |
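For reference, `BACKUP_SCHEDULE` takes a standard five-field cron expression, annotated here on the default value:

```sh
# ┌───────── minute (0-59)
# │ ┌─────── hour (0-23, UTC)
# │ │ ┌───── day of month (1-31)
# │ │ │ ┌─── month (1-12)
# │ │ │ │ ┌─ day of week (0-6, Sunday = 0)
# │ │ │ │ │
BACKUP_SCHEDULE="0 2 * * *"    # default: every day at 02:00 UTC
```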
Set these in .env before bringing up the backup profile. Changes require a container restart:
```sh
docker compose --profile backup restart backup
```

The backup S3 credentials default to the same values as the storage provider configuration. To use a separate bucket or different credentials for backups, set the dedicated `BACKUP_S3_*` overrides — see Configuration Reference → Backup for the override variables.
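As an example, a `.env` tuned for longer retention and off-site replication might look like this (the values are illustrative; the variables are the ones documented in the table above):

```sh
# .env — backup tuning (restart the backup container after editing)
BACKUP_SCHEDULE="30 3 * * *"     # run at 03:30 UTC instead of the default
BACKUP_RETENTION_DAILY=14        # two weeks of daily snapshots
BACKUP_RETENTION_WEEKLY=8        # two months of Sunday snapshots
BACKUP_S3_ENABLED=true           # replicate to the configured S3 bucket
```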
## Backup destinations

Backups are written to:

- `/backups/daily/<db>_<timestamp>.dump` — every scheduled run
- `/backups/weekly/<db>_<timestamp>.dump` — Sundays only
- `s3://<bucket>/backups/{daily,weekly}/<db>_<timestamp>.dump` — when `BACKUP_S3_ENABLED=true`
The `<db>` segment is the database name (default `geolens`). The `<timestamp>` segment is `YYYYMMDD_HHMMSS` in UTC. Files are PostgreSQL custom-format dumps (`pg_dump -Fc`) — restore with `pg_restore` rather than `psql`.
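For scripting against those filenames, the timestamp can be recovered with plain shell string operations; a minimal sketch, assuming GNU `date`:

```sh
# Parse <db>_<YYYYMMDD>_<HHMMSS>.dump into an ISO timestamp and a Unix epoch.
f="geolens_20260101_020000.dump"

ts=${f#*_}       # drop the "<db>_" prefix -> 20260101_020000.dump
ts=${ts%.dump}   # drop the extension      -> 20260101_020000
d=${ts%_*}       # date part               -> 20260101
t=${ts#*_}       # time part               -> 020000
hh=$(printf %s "$t" | cut -c1-2)
mm=$(printf %s "$t" | cut -c3-4)
ss=$(printf %s "$t" | cut -c5-6)

iso=$(date -u -d "$d $hh:$mm:$ss" +"%Y-%m-%dT%H:%M:%SZ")   # GNU date -d
epoch=$(date -u -d "$d $hh:$mm:$ss" +%s)                   # handy for age math
echo "$iso ($epoch)"
```

The epoch form makes age comparisons trivial, e.g. `[ $(( $(date +%s) - epoch )) -gt 86400 ]` to flag dumps older than a day.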
Local backups live inside the backup_data Docker volume. Inspect them by execing into the backup container:
```sh
docker compose exec backup ls -lh /backups/daily/
docker compose exec backup ls -lh /backups/weekly/
```

To copy a specific backup off the host:

```sh
docker compose cp backup:/backups/daily/geolens_20260101_020000.dump ./
```

## Retention policy
The backup container enforces retention on every run. After producing a new backup:
- Daily backups older than `BACKUP_RETENTION_DAILY` runs are deleted from `/backups/daily/`.
- Weekly backups older than `BACKUP_RETENTION_WEEKLY` runs are deleted from `/backups/weekly/`.
- S3 retention is not enforced by GeoLens — configure S3 lifecycle rules on the bucket itself for off-site retention. A 30-day Glacier transition plus 365-day expiry is a common starting point for compliance-driven deployments.
Manual deletion is safe — the container scans the directory on the next run and updates retention based on what’s present. Out-of-band file deletion does not corrupt the backup state.
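For illustration, the retention pass described above is roughly equivalent to the following sketch. This is an assumption about the container's behavior, not its actual script, and the `ls` parsing is safe only because dump filenames contain no whitespace:

```sh
# Keep the N newest *.dump files in a directory, delete the rest.
# Usage: prune_backups <dir> <keep_count>
prune_backups() {
  dir=$1
  keep=$2
  # Newest first; everything past line N is beyond retention
  ls -1t "$dir"/*.dump 2>/dev/null | tail -n +"$((keep + 1))" | while read -r old; do
    rm -f -- "$old"
  done
}

# Example: enforce BACKUP_RETENTION_DAILY=7 on the daily directory
# prune_backups /backups/daily 7
```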
## Manual pg_dump (alternate path)

For one-off backups outside the scheduled cron, run `pg_dump` directly against the database container:
```sh
# Create a custom-format dump (preferred for pg_restore)
docker compose exec db pg_dump -U geolens -d geolens -Fc -f /tmp/geolens_backup.dump
docker compose cp db:/tmp/geolens_backup.dump ./geolens_backup.dump
```

For a plain SQL backup (useful for grep/inspection):
```sh
docker compose exec db pg_dump -U geolens -d geolens > geolens_backup.sql
```
```sh
# Stop services first to ensure a consistent snapshot
docker compose down

# Back up the pgdata volume
docker run --rm -v geolens_pgdata:/data -v $(pwd):/backup alpine \
  tar czf /backup/pgdata_backup.tar.gz -C /data .

# Restart
docker compose up -d
```

Volume snapshots are faster to restore than `pg_restore` for full-instance recovery, but they cannot be restored to a different PostgreSQL version. Use the pg_dump format for long-term archival and cross-version portability.
## Restoring from a backup

To restore from a pg_dump-format backup:
```sh
# Stop the API to prevent writes
docker compose stop api

# Copy the backup into the database container
docker compose cp ./backup.dump db:/tmp/backup.dump

# Restore (--clean drops existing objects first)
docker compose exec db pg_restore -U geolens -d geolens --clean /tmp/backup.dump

# Restart
docker compose start api
```

The `--clean` flag drops existing objects before restoring. For partial restores (single table), use `pg_restore --table <name>`:
```sh
docker compose exec db pg_restore -U geolens -d geolens \
  --clean --table catalog.datasets /tmp/backup.dump
```

To restore a volume-level snapshot:
```sh
docker compose down
docker run --rm -v geolens_pgdata:/data -v $(pwd):/backup alpine \
  sh -c "rm -rf /data/* && tar xzf /backup/pgdata_backup.tar.gz -C /data"
docker compose up -d
```

## Restore validation
Restore validation is currently operator-driven — there is no automated post-restore checksum or integrity verification. Recommended practice:
1. Restore the most recent daily backup into a staging instance monthly.

2. Run `curl https://staging.geolens.example.com/health` and confirm `database.status: ok` (see Infrastructure & Monitoring for the health probe).

3. Spot-check 2–3 dataset rows from the catalog API:

   ```sh
   curl https://staging.geolens.example.com/api/datasets/ \
     -H "Authorization: Bearer $TOKEN" | jq '.items[:3]'
   ```

4. Open a representative dataset in the staging UI and confirm map preview, exports, and metadata render correctly.
If staging restore reveals corruption, the most recent daily backup may be unrecoverable — promote the most recent weekly backup to production-restore status and investigate the failed daily as a separate incident.
## See also

- Settings reference — Storage tab covers the S3 credentials shared with `BACKUP_S3_*`
- Cloud deployment notes — backups + S3 off-site replication in cloud context
- Configuration reference — full `BACKUP_*` env-var details