Linux Backup & Disaster Recovery: The Complete Guide for 2026


Data loss is not a matter of if, but when. Hard drives fail, ransomware strikes, human error happens, and natural disasters are unpredictable. The only thing standing between you and catastrophic data loss is a solid backup strategy.

This guide covers everything you need to build a reliable Linux backup and disaster recovery system — from basic file copying to enterprise-grade automated solutions.

The 3-2-1 Backup Rule

Before diving into tools, understand the golden rule of backups:

  • 3 copies of your data (the original + 2 backups)
  • 2 different storage types (local disk + external/NAS/cloud)
  • 1 offsite copy (cloud storage or a physically separate location)

This simple rule protects you against hardware failure, accidental deletion, ransomware, theft, and even natural disasters affecting your primary location.
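As a minimal illustration, the rule maps directly to ordinary commands. The paths below are local stand-ins: in a real setup the second copy would live on an external disk or NAS, and the third would go offsite (for example via `rclone sync`, covered later).

```shell
#!/bin/bash
# 3-2-1 in miniature. All paths are illustrative stand-ins:
#   /tmp/321/local   -> would be a second local disk or NAS
#   /tmp/321/offsite -> would be cloud storage (rclone sync, etc.)
set -e
SRC="/tmp/321/data"
mkdir -p "$SRC" /tmp/321/local /tmp/321/offsite
echo "important data" > "$SRC/file.txt"

cp -a "$SRC/." /tmp/321/local/     # copy 2: second storage type
cp -a "$SRC/." /tmp/321/offsite/   # copy 3: offsite location

# Verify all three copies exist and match
cmp "$SRC/file.txt" /tmp/321/local/file.txt
cmp "$SRC/file.txt" /tmp/321/offsite/file.txt
echo "3-2-1 copies verified"
```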

rsync — The Foundation of Linux Backups

rsync is the workhorse of Linux backup systems. It efficiently synchronizes files between locations, transferring only the differences (delta transfer), which makes subsequent backups incredibly fast.

Basic rsync Usage

# Local backup (-a preserves permissions and timestamps;
# -z compression only pays off over a network link)
rsync -av --progress /home/user/ /backup/home/

# Remote backup over SSH
rsync -avz -e "ssh -p 22" /var/www/ user@backup-server:/backups/www/

# Exclude patterns
rsync -avz --exclude='*.log' --exclude='.cache/' /data/ /backup/data/

# Delete files on destination that no longer exist on source
rsync -avz --delete /source/ /destination/

Incremental Backup with rsync + Hardlinks

This technique creates daily snapshots that look like full backups but only use disk space for changed files:

#!/bin/bash
set -euo pipefail

DATE=$(date +%Y-%m-%d)
LATEST="/backup/latest"
DEST="/backup/$DATE"
SOURCE="/var/www"

# --link-dest hardlinks unchanged files against the previous snapshot
rsync -av --delete --link-dest="$LATEST" "$SOURCE/" "$DEST/"

# Atomically repoint the "latest" symlink at today's snapshot
ln -sfn "$DEST" "$LATEST"

echo "Backup completed: $DEST"

Each daily folder appears to contain the complete backup, but unchanged files are hardlinked to previous versions, saving enormous amounts of disk space.
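You can see the hardlink mechanism with coreutils alone: `cp -al` creates a hardlinked copy of a tree, which is the same trick `--link-dest` performs. A quick sketch, using illustrative /tmp paths:

```shell
# Demonstrate hardlinked snapshots with cp -al (illustrative paths)
rm -rf /tmp/hl-demo
mkdir -p /tmp/hl-demo/day1
echo "unchanged report" > /tmp/hl-demo/day1/file.txt

# Hardlink-copy the tree: new directory entries, same data blocks
cp -al /tmp/hl-demo/day1 /tmp/hl-demo/day2

# Identical inode numbers and a link count of 2 show the file
# is stored on disk only once despite appearing in both snapshots
stat -c '%i %h' /tmp/hl-demo/day1/file.txt
stat -c '%i %h' /tmp/hl-demo/day2/file.txt
```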

Borg Backup — Deduplication and Encryption

Borg Backup is the modern standard for Linux backups. It provides block-level deduplication, optional encryption, and compression — all with excellent performance.

Setting Up Borg

# Install Borg
sudo apt install borgbackup    # Debian/Ubuntu
sudo dnf install borgbackup    # RHEL/AlmaLinux

# Initialize a repository (with encryption)
borg init --encryption=repokey /backup/borg-repo

# Create a backup
borg create /backup/borg-repo::daily-{now:%Y-%m-%d} \
    /home /var/www /etc \
    --exclude '*.cache' \
    --exclude '*.tmp' \
    --compression zstd,3

# List backups
borg list /backup/borg-repo

# Restore specific files
borg extract /backup/borg-repo::daily-2026-02-15 home/user/documents/

Automated Borg with Pruning

#!/bin/bash
set -e

REPO="/backup/borg-repo"
# Better: keep the passphrase in a root-only file and read it here,
# rather than hardcoding it in the script
export BORG_PASSPHRASE="your-secure-passphrase"

# Create backup
borg create $REPO::auto-{now:%Y-%m-%d_%H:%M} \
    /home /var/www /etc /var/lib/postgresql \
    --exclude-caches \
    --compression zstd,3

# Prune old backups - keep:
# 7 daily, 4 weekly, 6 monthly, 2 yearly
borg prune $REPO \
    --keep-daily=7 \
    --keep-weekly=4 \
    --keep-monthly=6 \
    --keep-yearly=2

# Compact repository to free disk space
borg compact $REPO

echo "Borg backup and pruning completed"


LVM Snapshots — Instant Point-in-Time Backups

If your server uses LVM (Logical Volume Manager), you can create instant snapshots of entire filesystems. This is perfect for database backups where consistency matters.

# Create a snapshot of the data volume
lvcreate -L 10G -s -n data-snap /dev/vg0/data

# Mount the snapshot read-only
mount -o ro /dev/vg0/data-snap /mnt/snapshot

# Backup from the snapshot (consistent state)
rsync -avz /mnt/snapshot/ /backup/data-snapshot/

# Cleanup
umount /mnt/snapshot
lvremove /dev/vg0/data-snap

The snapshot captures the exact state of the filesystem at creation time, even while files are being written. Two caveats: the snapshot consumes its copy-on-write space (the -L size above) as the origin volume changes, so size it generously and remove it as soon as the backup finishes; and for a running database the result is crash-consistent — on restore the database replays its journal as it would after a power loss. For guaranteed consistency, pair snapshots with the database's own dump tools below.

Database Backup Strategies

Databases require special attention because simply copying files while the database is running can result in corrupted backups.

PostgreSQL

# Logical backup (portable, smaller)
pg_dump -U postgres mydb > /backup/mydb_$(date +%Y%m%d).sql

# Compressed backup
pg_dump -U postgres -Fc mydb > /backup/mydb_$(date +%Y%m%d).dump

# All databases
pg_dumpall -U postgres > /backup/all_databases_$(date +%Y%m%d).sql

# Restore
pg_restore -U postgres -d mydb /backup/mydb_20260215.dump

MySQL / MariaDB

# Single database (--single-transaction gives a consistent
# InnoDB dump without locking tables)
mysqldump -u root -p --single-transaction mydb > /backup/mydb_$(date +%Y%m%d).sql

# All databases with routines and triggers
mysqldump -u root -p --all-databases --single-transaction --routines --triggers > /backup/all_$(date +%Y%m%d).sql

Cloud Backup with rclone

rclone is the "rsync for cloud storage." It supports over 40 cloud providers including AWS S3, Google Cloud, Backblaze B2, and more.

# Install rclone
curl https://rclone.org/install.sh | sudo bash

# Configure (interactive)
rclone config

# Sync local backups to cloud
rclone sync /backup/borg-repo remote:my-backups/borg/

# Encrypted sync (assumes a crypt remote named "remote-crypt"
# was set up in rclone config)
rclone sync /backup/ remote-crypt:backups/ --transfers=4

Automated Backup Schedule

Here is a production-ready cron setup combining all the tools:

# /etc/cron.d/backup-schedule

# Database backup every 6 hours
0 */6 * * * root pg_dump -U postgres mydb | gzip > /backup/db/mydb_$(date +\%Y\%m\%d_\%H).sql.gz

# Borg backup daily at 2 AM
0 2 * * * root /opt/scripts/borg-backup.sh >> /var/log/backup.log 2>&1

# Sync to cloud daily at 4 AM
0 4 * * * root rclone sync /backup/borg-repo remote:backups/ >> /var/log/cloud-sync.log 2>&1

# Weekly full verification
0 6 * * 0 root borg check /backup/borg-repo >> /var/log/backup-verify.log 2>&1

# Monthly test restore
0 8 1 * * root /opt/scripts/test-restore.sh >> /var/log/restore-test.log 2>&1

Disaster Recovery Plan

Having backups is only half the battle. You also need a tested recovery plan:

  1. Document everything: Write step-by-step recovery procedures
  2. Test monthly: Actually restore from backup to a test server
  3. Measure RTO/RPO: Recovery Time Objective (how fast?) and Recovery Point Objective (how much data loss is acceptable?)
  4. Automate recovery: Script the rebuild process so it works under pressure
  5. Monitor backups: A backup that fails silently is worse than no backup
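For the monitoring point, a minimal freshness check is a good start. This is a sketch: the directory and threshold are illustrative, and the ALERT branch is where you would hook in mail or a chat webhook.

```shell
#!/bin/bash
# Alert if the newest file under BACKUP_DIR is older than MAX_AGE_HOURS.
# Path and threshold are illustrative stand-ins.
BACKUP_DIR="/tmp/backup-demo"
MAX_AGE_HOURS=24

mkdir -p "$BACKUP_DIR"
touch "$BACKUP_DIR/demo.sql.gz"   # stand-in for a real backup file

# Newest modification time (epoch seconds) among all backup files
newest=$(find "$BACKUP_DIR" -type f -printf '%T@\n' | sort -n | tail -1)
age_hours=$(( ( $(date +%s) - ${newest%.*} ) / 3600 ))

if [ "$age_hours" -ge "$MAX_AGE_HOURS" ]; then
    echo "ALERT: newest backup is ${age_hours}h old"   # send mail/webhook here
else
    echo "OK: newest backup is ${age_hours}h old"
fi
```

Run it from cron alongside the backup jobs so a silently failing backup surfaces within a day.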


Conclusion

A proper backup strategy is non-negotiable for any Linux administrator. Start with rsync for simple file backups, graduate to Borg for deduplication and encryption, and always keep an offsite copy in the cloud.

Remember: backups that are not tested are not backups. Schedule regular test restores and document your recovery procedures. When disaster strikes, you will be glad you invested the time.
