Backup & Recovery
How QuoteNode handles database and file backups — online backups, offline restoration, encryption, and remote storage.
QuoteNode includes a built-in backup system that protects your commercial data against hardware failures, accidental deletions, and operational incidents.
What Is Backed Up
Each backup captures three components:
- Database dump — a full PostgreSQL dump in custom format (`pg_dump --format=custom`), including all tables, sequences, and constraints.
- File storage — all uploaded files (product images, company logos) and generated PDFs, archived as `files.tar.gz`.
- Integrity manifest — a `checksums.sha256` file containing SHA-256 hashes of both the database dump and the file archive, enabling verification that backups have not been corrupted or tampered with.
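To make the manifest concrete, here is a minimal sketch of creating and verifying a `checksums.sha256` file with coreutils. The file contents are stand-ins; in a real backup the two components are the `pg_dump` output and `files.tar.gz`:

```shell
set -eu
workdir=$(mktemp -d)
cd "$workdir"

# Stand-ins for the two backup components
printf 'database dump bytes' > db.dump
printf 'file archive bytes' > files.tar.gz

# Write the integrity manifest: one SHA-256 line per component
sha256sum db.dump files.tar.gz > checksums.sha256

# Verification fails loudly if either file was corrupted or tampered with
sha256sum -c checksums.sha256
```

`sha256sum -c` exits non-zero on any mismatch, so it is safe to use as a gate in a restore script.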
Online Backup (Default)
Online backups run while the application is serving requests. There is no downtime.
How it works:
- `pg_dump` in custom format creates a transactionally consistent snapshot of the database without locking tables or blocking queries.
- File storage is archived in parallel — since uploaded files and PDFs are immutable (never modified after creation), the archive is always consistent.
- Both outputs are checksummed and optionally encrypted before storage.
Limitations:
- If a transaction is in progress at the exact moment of the dump, its uncommitted data is excluded (this is correct behavior — the dump represents a consistent point-in-time state).
- For most organizations, this is sufficient. Online backups are the recommended approach.
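The flow above can be sketched as a shell function. This is an illustration only: flags beyond `--format=custom`, the `/app/data` layout, and the `DATABASE_URL` variable are assumptions, not QuoteNode's actual backup script.

```shell
online_backup() {
  out=$1
  # Consistent snapshot without table locks; live queries are unaffected
  pg_dump --format=custom --file="$out/db.dump" "$DATABASE_URL" &
  dump_pid=$!
  # Uploaded files and PDFs are immutable, so archiving in parallel is safe
  tar czf "$out/files.tar.gz" -C /app/data uploads &
  tar_pid=$!
  wait "$dump_pid" "$tar_pid"
  # Manifest over both components
  (cd "$out" && sha256sum db.dump files.tar.gz > checksums.sha256)
}
```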
Offline Backup
For maximum consistency guarantees (e.g., before major upgrades or migrations):
- Stop the application containers (`docker compose stop backend backup-worker`).
- Run `pg_dump` directly against the running PostgreSQL container.
- Archive the file storage volumes.
- Restart the application.
This approach guarantees zero in-flight transactions but requires a brief maintenance window (typically 1-5 minutes depending on database size).
Scheduling
Backups are controlled by the backup-worker container — a dedicated instance of the backend running in backup-only mode:
- Default schedule: 2:00 AM daily (`0 0 2 * * *`, configurable via `BACKUP_CRON`)
- Enable/disable: `BACKUP_ENABLED=true|false`
- Manual trigger: Administrators can trigger an immediate backup from the admin panel or via API (`POST /api/v1/admin/backup/trigger`)
- Concurrency protection: The SKIP LOCKED pattern ensures only one backup runs at a time, even if multiple worker instances exist.
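Collected into a `.env` fragment, the scheduling settings above look like this (values shown are the documented defaults):

```
BACKUP_ENABLED=true
# Run the scheduled backup at 02:00:00 every day
# (six-field cron, seconds first; override via BACKUP_CRON)
BACKUP_CRON=0 0 2 * * *
```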
Storage Options
Local storage (default)
Backups are stored in the BACKUP_LOCAL_DIR directory (default: /app/data/backups), which is mapped to a Docker volume. This is suitable for development and small deployments.
For production, local backups should be supplemented with remote copies — a backup stored on the same disk as the database is not protection against disk failure.
Remote storage via rclone
If BACKUP_RCLONE_REMOTE is configured, backups are automatically uploaded to a remote destination after local creation. rclone supports 70+ cloud storage providers, including:
- S3-compatible — AWS S3, MinIO, DigitalOcean Spaces, Backblaze B2
- Google Cloud Storage
- Azure Blob Storage
- SFTP — any server with SSH access
- WebDAV — Nextcloud, ownCloud, etc.
If the remote upload fails, the local backup is preserved and the failure is logged.
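As an illustration, an S3-compatible remote could be wired up like this. The remote name, endpoint, and credentials are placeholders; the `remote:path` value follows rclone's standard syntax:

```
# ~/.config/rclone/rclone.conf
[qn-backups]
type = s3
provider = Other
access_key_id = YOUR_KEY
secret_access_key = YOUR_SECRET
endpoint = https://s3.example.com

# .env
BACKUP_RCLONE_REMOTE=qn-backups:quotenode/backups
```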
Encryption
Backups can be encrypted at rest using GPG:
- Set `BACKUP_GPG_RECIPIENT` to the GPG key ID or email address.
- The backup script encrypts both the database dump and the file archive before storage.
- Encrypted files have a `.gpg` extension.
- Decryption requires the corresponding private key — without it, the backup is unreadable.
This is critical for organizations that store backups on third-party cloud storage, where the storage provider should not have access to the backup contents.
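A hypothetical helper mirroring the behavior described above (the real script's internals are not documented here). It assumes `gpg` is installed and the recipient's public key has been imported into the keyring:

```shell
encrypt_artifacts() {
  for f in db.dump files.tar.gz; do
    # Writes $f.gpg; only the holder of the private key can decrypt it
    gpg --encrypt --recipient "$BACKUP_GPG_RECIPIENT" --output "$f.gpg" "$f"
  done
}
```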
Metadata & Audit Trail
Every backup (successful or failed) is tracked in the backup_logs table:
| Field | Description |
|---|---|
| Status | PENDING, RUNNING, COMPLETED, FAILED |
| Initiated by | SCHEDULER (automatic) or ADMIN (manual trigger) |
| Start time | When the backup process started |
| Completion time | When the backup process finished |
| Size (bytes) | Total size of the backup archive |
| Destination | Local path or remote URL |
| SHA-256 checksum | Integrity hash of the backup archive |
| Error message | Failure reason (if applicable) |
The 50 most recent backup logs are accessible via the admin panel or API.
Restoration
Full restore procedure
- Stop all application containers: `docker compose down`
- Restore the database: `pg_restore --clean --if-exists -d quotenode backup.dump`
- Extract file storage: `tar xzf files.tar.gz -C /app/data/`
- Start all containers: `docker compose up -d`
- Verify the integrity checksum matches the backup manifest.
Partial restore
Individual tables or data subsets can be restored from a custom-format dump using pg_restore with table-specific flags. This is useful for recovering accidentally deleted records without a full restore.
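A hedged sketch of a single-table restore; the database name and dump path are placeholders:

```shell
restore_table() {
  # --data-only reloads rows into the existing table, which is what you want
  # when recovering accidentally deleted records
  pg_restore --data-only --table="$1" -d quotenode backup.dump
}
# e.g. restore_table offers
```

List the archive's contents first with `pg_restore --list backup.dump` to find the exact table entries.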
Encrypted backup restore
For GPG-encrypted backups:
gpg --decrypt backup.dump.gpg > backup.dump
gpg --decrypt files.tar.gz.gpg > files.tar.gz
Then proceed with the standard restore procedure.
Backup Timeout
Each backup operation has a 30-minute timeout. If a backup does not complete within this window, it is terminated and recorded as FAILED. For very large databases (10+ GB), the timeout can be extended via configuration.
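The timeout semantics match what coreutils' `timeout(1)` provides: a command that outlives its window is killed and the run is recorded as failed. A self-contained demonstration, with a 1-second window standing in for the 30-minute one:

```shell
# 'sleep 3' exceeds the 1-second window, so timeout kills it (exit code 124)
if timeout 1 sleep 3; then
  status=COMPLETED
else
  status=FAILED
fi
echo "$status"   # prints FAILED
```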
Downloading Backups
Administrators can download backup archives directly from the admin panel:
- Navigate to Settings > Backup.
- Find the backup you want to download (status must be Completed).
- Click the Download button on the backup entry.
The backup file is a .tar.gz archive containing the database dump, file storage, and an integrity checksum manifest. Store it securely — it contains all your business data.
Downloaded backups can be used for disaster recovery on a new server (see below) or for migrating to a different hosting environment.
Note: Backups stored on remote storage (S3, SFTP, etc.) cannot be downloaded through the admin panel. Download them directly from your storage provider.
Backup Retention
Old backup files are automatically cleaned up after each successful backup. The retention policy is controlled by BACKUP_RETENTION_DAILY (default: 7). This keeps the most recent N successful backups and deletes older local files.
Remote backups (uploaded via rclone) are not deleted by QuoteNode — configure lifecycle policies on your storage provider instead.
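The effect of the policy can be sketched with plain shell: keep the newest `BACKUP_RETENTION_DAILY` files and delete the rest. The filenames below are made up; timestamped names sort chronologically, so a reverse sort puts the newest first.

```shell
KEEP=7
dir=$(mktemp -d)
# Ten fake daily backups
for day in 01 02 03 04 05 06 07 08 09 10; do
  touch "$dir/quotenode-backup-202603$day.tar.gz"
done

# Sort newest first, skip the first $KEEP survivors, delete everything older
ls -1 "$dir" | sort -r | tail -n +$((KEEP + 1)) | while read -r f; do
  rm -- "$dir/$f"
done

ls -1 "$dir" | wc -l   # 7 files remain
```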
Disaster Recovery — Rebuilding from Scratch
Your server is gone. You have a backup file (.tar.gz downloaded from the admin panel or stored on your remote storage). You have a new server with Docker installed. Here is how to restore everything.
Step 1 — Prepare the new server
Create a project directory and set up configuration files exactly as described in the Installation Guide:
mkdir quotenode && cd quotenode
You need two files:
- `docker-compose.yml` — copy from the installation guide
- `.env` — restore from your backup copy

Critical: Use the same `DB_ENCRYPTION_KEY` as your original installation. Without it, encrypted data (MFA codes, SMTP credentials, and PII if `ENCRYPT_PII=true`) will be permanently unreadable.
Step 2 — Copy the backup file to the server
Transfer your backup file using scp, rsync, or any file transfer tool:
scp quotenode-backup-20260320.tar.gz user@new-server:~/quotenode/
Step 3 — Download the restore script
curl -O https://raw.githubusercontent.com/quotenode/quotenode/main/scripts/restore.sh
chmod +x restore.sh
Step 4 — Run the restore
./restore.sh --fresh-install ~/quotenode/quotenode-backup-20260320.tar.gz
The script will:
- Start the database container
- Wait for it to be ready
- Restore your database from the backup dump
- Start the full application stack
- Copy your uploaded files and generated PDFs
- Run a health check to verify everything works
Step 5 — Verify
Open your domain in a browser and log in with your existing credentials. All your customers, offers, products, and settings should be exactly as they were at the time of the backup.
Dry run (optional)
To verify a backup without actually modifying anything:
RESTORE_DRY_RUN=true ./restore.sh --fresh-install backup-file.tar.gz
This extracts and validates the backup archive, verifies checksums, and reports the restore plan — without touching the database or file storage.
Recommended Backup Strategy
For production deployments:
- Enable daily automatic backups with remote storage (S3, SFTP, or similar).
- Enable GPG encryption for backups stored on third-party infrastructure.
- Test restore procedures periodically — a backup that has never been tested is not a backup.
- Download a backup to your local machine at least monthly — remote storage is not a substitute for an offline copy.
- Monitor backup logs in the admin panel for failures. The admin panel shows a warning if no successful backup has been recorded in the last 48 hours.
- Keep at least 7 days of backup history (`BACKUP_RETENTION_DAILY=7`, the default).