15 Bash Scripts That Will Automate Your Linux Server Management in 2026

Every Linux administrator has tasks they repeat daily: checking disk space, rotating logs, creating backups, monitoring services. If you're still doing these manually, you're wasting hours every week that could be spent on more valuable work.

In this guide, we share 15 production-ready Bash scripts that you can deploy on your servers today. Each script includes error handling and logging, and each can be scheduled via cron for full automation.
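As a quick orientation, here is what the cron schedule for the first few scripts might look like. The install paths are placeholders — use wherever you actually deploy the scripts:

```shell
# Example crontab entries (install with `crontab -e`); paths are hypothetical
*/15 * * * *  /usr/local/sbin/server-health-check.sh
0    2 * * *  /usr/local/sbin/smart-backup.sh
0    6 * * *  /usr/local/sbin/log-analyzer.sh
```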

1. Automated Server Health Check

This script checks CPU, memory, disk usage, and running services, then sends an alert if anything is critical:

#!/bin/bash
# server-health-check.sh — Automated server health monitoring
# Schedule: */15 * * * * (every 15 minutes)

LOG="/var/log/health-check.log"
ALERT_EMAIL="admin@yourdomain.com"
HOSTNAME=$(hostname)
DATE=$(date "+%Y-%m-%d %H:%M:%S")
ALERTS=""

# Check CPU usage (alert if > 90%) — computed as 100 minus the idle column,
# which is field 8 in most versions of top (the raw $2 would be user CPU only)
CPU_USAGE=$(top -bn1 | awk '/Cpu\(s\)/ {print int(100 - $8)}')
if [ "$CPU_USAGE" -gt 90 ]; then
    ALERTS+="[CRITICAL] CPU usage: ${CPU_USAGE}%\n"
fi

# Check memory usage (alert if > 85%)
MEM_USAGE=$(free | awk '/Mem:/ {printf "%d", $3/$2 * 100}')
if [ "$MEM_USAGE" -gt 85 ]; then
    ALERTS+="[WARNING] Memory usage: ${MEM_USAGE}%\n"
fi

# Check disk usage (alert if any partition > 80%)
while read -r line; do
    USAGE=$(echo "$line" | awk '{print int($5)}')
    MOUNT=$(echo "$line" | awk '{print $6}')
    if [ "$USAGE" -gt 80 ]; then
        ALERTS+="[WARNING] Disk ${MOUNT}: ${USAGE}% used\n"
    fi
done < <(df -P | grep '^/dev/')  # -P keeps long device names on a single line

# Check critical services
for SERVICE in nginx postgresql sshd; do
    if ! systemctl is-active --quiet "$SERVICE" 2>/dev/null; then
        ALERTS+="[CRITICAL] Service ${SERVICE} is DOWN\n"
    fi
done

# Log and alert
echo "[$DATE] Health check completed" >> "$LOG"
if [ -n "$ALERTS" ]; then
    echo -e "Server: $HOSTNAME\nDate: $DATE\n\n$ALERTS" | \
        mail -s "[ALERT] Server Health Issues on $HOSTNAME" "$ALERT_EMAIL"
    echo "[$DATE] ALERTS SENT: $(printf '%b' "$ALERTS" | wc -l) issues found" >> "$LOG"
else
    echo "[$DATE] All systems healthy" >> "$LOG"
fi

2. Intelligent Backup Script with Rotation

Create daily backups with automatic rotation to prevent disk space issues:

#!/bin/bash
# smart-backup.sh — Database + files backup with rotation
# Schedule: 0 2 * * * (daily at 2:00 AM)

BACKUP_DIR="/backup"
DB_NAME="myapp"
DB_USER="dbuser"
WEB_DIR="/var/www/myapp"
RETENTION_DAYS=30
DATE=$(date "+%Y%m%d_%H%M%S")
LOG="/var/log/backup.log"

# Without pipefail, a failed pg_dump piped through gzip would still exit 0
set -o pipefail

log() { echo "[$(date "+%Y-%m-%d %H:%M:%S")] $1" >> "$LOG"; }
log "Starting backup..."

# Create backup directory
mkdir -p "${BACKUP_DIR}/daily"

# Database backup
DBFILE="${BACKUP_DIR}/daily/db_${DB_NAME}_${DATE}.sql.gz"
if pg_dump -U "$DB_USER" "$DB_NAME" | gzip > "$DBFILE"; then
    log "Database backup OK: $(du -h "$DBFILE" | cut -f1)"
else
    log "ERROR: Database backup FAILED"
fi

# File backup (excluding logs and cache)
FILEARCHIVE="${BACKUP_DIR}/daily/files_${DATE}.tar.gz"
if tar -czf "$FILEARCHIVE" \
    --exclude="*.log" \
    --exclude="cache/*" \
    --exclude="node_modules/*" \
    "$WEB_DIR" 2>/dev/null; then
    log "File backup OK: $(du -h "$FILEARCHIVE" | cut -f1)"
else
    log "ERROR: File backup FAILED"
fi

# Rotation — delete backups older than retention period
DELETED=$(find "${BACKUP_DIR}/daily" -name "*.gz" -mtime +${RETENTION_DAYS} -delete -print | wc -l)
log "Rotation: deleted $DELETED old backups"

# Calculate total backup size
TOTAL_SIZE=$(du -sh "${BACKUP_DIR}" | cut -f1)
log "Backup completed. Total backup storage: $TOTAL_SIZE"
Go deeper: Our BASH Fundamentals eBook covers variables, loops, functions, error handling, and 60+ practical examples for system administration.

3. Automated Log Analyzer

Parse NGINX or Apache logs and generate a daily summary:

#!/bin/bash
# log-analyzer.sh — Daily web server log analysis
# Schedule: 0 6 * * * (daily at 6:00 AM)

LOGFILE="/var/log/nginx/access.log"
REPORT="/tmp/daily-log-report.txt"
DATE=$(date -d "yesterday" "+%d/%b/%Y")

echo "=== Web Server Log Report ===" > "$REPORT"
echo "Date: $(date -d 'yesterday' '+%Y-%m-%d')" >> "$REPORT"
echo "==============================" >> "$REPORT"

# Total requests
TOTAL=$(grep "$DATE" "$LOGFILE" | wc -l)
echo "Total Requests: $TOTAL" >> "$REPORT"

# Unique visitors
UNIQUE=$(grep "$DATE" "$LOGFILE" | awk '{print $1}' | sort -u | wc -l)
echo "Unique Visitors: $UNIQUE" >> "$REPORT"

# Top 10 most visited pages
echo -e "\n--- Top 10 Pages ---" >> "$REPORT"
grep "$DATE" "$LOGFILE" | awk '{print $7}' | \
    sort | uniq -c | sort -rn | head -10 >> "$REPORT"

# HTTP status code distribution
echo -e "\n--- Status Codes ---" >> "$REPORT"
grep "$DATE" "$LOGFILE" | awk '{print $9}' | \
    sort | uniq -c | sort -rn >> "$REPORT"

# Top error pages (4xx and 5xx)
echo -e "\n--- Error URLs (4xx/5xx) ---" >> "$REPORT"
grep "$DATE" "$LOGFILE" | awk '$9 >= 400 {print $9, $7}' | \
    sort | uniq -c | sort -rn | head -10 >> "$REPORT"

# Bandwidth usage
echo -e "\n--- Bandwidth ---" >> "$REPORT"
BYTES=$(grep "$DATE" "$LOGFILE" | awk '{sum += $10} END {print sum}')
echo "Total: $(echo "scale=2; ${BYTES:-0}/1048576" | bc) MB" >> "$REPORT"

cat "$REPORT"

4. User Account Manager

#!/bin/bash
# user-manager.sh — Batch user creation with SSH key setup

INPUT_FILE="$1"  # CSV: username,email,pubkey

if [ ! -f "$INPUT_FILE" ]; then
    echo "Usage: $0 users.csv"
    exit 1
fi

# useradd and passwd require root privileges
if [ "$(id -u)" -ne 0 ]; then
    echo "This script must be run as root" >&2
    exit 1
fi

while IFS=, read -r USERNAME EMAIL PUBKEY; do
    # Skip header
    [ "$USERNAME" = "username" ] && continue

    if id "$USERNAME" &>/dev/null; then
        echo "SKIP: User $USERNAME already exists"
        continue
    fi

    # Create user with home directory
    useradd -m -s /bin/bash -c "$EMAIL" "$USERNAME"

    # Set up SSH key
    SSH_DIR="/home/${USERNAME}/.ssh"
    mkdir -p "$SSH_DIR"
    echo "$PUBKEY" > "${SSH_DIR}/authorized_keys"
    chmod 700 "$SSH_DIR"
    chmod 600 "${SSH_DIR}/authorized_keys"
    chown -R "${USERNAME}:${USERNAME}" "$SSH_DIR"

    # Set password expiry (force change on first login)
    passwd -e "$USERNAME"

    echo "CREATED: $USERNAME ($EMAIL)"
done < "$INPUT_FILE"
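A hypothetical input file for the script above might look like this — the header row is included because the script skips it, and the usernames and keys are made up:

```shell
# Create a sample users.csv (all values here are placeholders)
cat > users.csv <<'EOF'
username,email,pubkey
alice,alice@example.com,ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAexample alice@laptop
bob,bob@example.com,ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAexample bob@desktop
EOF
```

Then run it as root: `sudo ./user-manager.sh users.csv`.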

5. SSL Certificate Monitor

#!/bin/bash
# ssl-monitor.sh — Check SSL expiry for multiple domains
# Schedule: 0 9 * * 1 (weekly on Monday 9:00 AM)

DOMAINS=("yourdomain.com" "api.yourdomain.com" "admin.yourdomain.com")
ALERT_DAYS=14
ALERT_EMAIL="admin@yourdomain.com"
EXPIRING=""

for DOMAIN in "${DOMAINS[@]}"; do
    EXPIRY=$(echo | openssl s_client -servername "$DOMAIN" -connect "${DOMAIN}:443" 2>/dev/null | \
             openssl x509 -noout -enddate 2>/dev/null | cut -d= -f2)

    if [ -z "$EXPIRY" ]; then
        EXPIRING+="[ERROR] Cannot check SSL for $DOMAIN\n"
        continue
    fi

    EXPIRY_EPOCH=$(date -d "$EXPIRY" +%s)
    NOW_EPOCH=$(date +%s)
    DAYS_LEFT=$(( (EXPIRY_EPOCH - NOW_EPOCH) / 86400 ))

    if [ "$DAYS_LEFT" -lt "$ALERT_DAYS" ]; then
        EXPIRING+="[WARNING] $DOMAIN — SSL expires in $DAYS_LEFT days ($EXPIRY)\n"
    else
        echo "[OK] $DOMAIN — SSL valid for $DAYS_LEFT more days"
    fi
done

if [ -n "$EXPIRING" ]; then
    echo -e "$EXPIRING" | mail -s "[SSL ALERT] Certificates expiring soon" "$ALERT_EMAIL"
fi

6. Service Auto-Restart with Notification

#!/bin/bash
# service-watchdog.sh — Monitor and auto-restart critical services
# Schedule: */5 * * * * (every 5 minutes)

SERVICES=("nginx" "postgresql" "php-fpm")
LOG="/var/log/service-watchdog.log"

for SERVICE in "${SERVICES[@]}"; do
    if ! systemctl is-active --quiet "$SERVICE"; then
        echo "[$(date)] $SERVICE is DOWN — attempting restart" >> "$LOG"
        systemctl restart "$SERVICE"
        sleep 3

        if systemctl is-active --quiet "$SERVICE"; then
            echo "[$(date)] $SERVICE restarted successfully" >> "$LOG"
        else
            echo "[$(date)] CRITICAL: $SERVICE restart FAILED" >> "$LOG"
            echo "CRITICAL: $SERVICE failed to restart on $(hostname)" | \
                mail -s "[CRITICAL] Service failure on $(hostname)" admin@yourdomain.com
        fi
    fi
done

7. Disk Space Cleaner

#!/bin/bash
# disk-cleaner.sh — Intelligent disk space cleanup
# Schedule: 0 3 * * 0 (weekly Sunday 3:00 AM)

LOG="/var/log/disk-cleaner.log"
BEFORE=$(df -h / | tail -1 | awk '{print $5}')

log() { echo "[$(date "+%Y-%m-%d %H:%M:%S")] $1" >> "$LOG"; }
log "Starting cleanup. Disk usage: $BEFORE"

# Clean package cache
apt-get autoremove -y > /dev/null 2>&1
apt-get clean > /dev/null 2>&1
log "Package cache cleaned"

# Remove old logs (keep 7 days)
find /var/log -name "*.gz" -mtime +7 -delete 2>/dev/null
find /var/log -name "*.old" -mtime +7 -delete 2>/dev/null
log "Old log files removed"

# Clean temp files
find /tmp -type f -atime +3 -delete 2>/dev/null
log "Temp files cleaned"

# Remove old Docker images (if Docker is installed)
if command -v docker &>/dev/null; then
    docker system prune -f > /dev/null 2>&1
    log "Docker cleanup done"
fi

AFTER=$(df -h / | tail -1 | awk '{print $5}')
log "Cleanup complete. Disk: $BEFORE -> $AFTER"
Master shell scripting: Our Introduction to Linux Shell Scripting eBook teaches you to write professional scripts from scratch, including error handling, debugging, and best practices.

8. Firewall Log Analyzer

#!/bin/bash
# fw-analyzer.sh — Analyze blocked connections and detect patterns

LOGFILE="/var/log/ufw.log"
DATE=$(date "+%b %e")  # %e space-pads single-digit days, matching traditional syslog timestamps

echo "=== Firewall Analysis for $DATE ==="

# Top blocked source IPs
echo -e "\n--- Top 10 Blocked IPs ---"
grep "$DATE" "$LOGFILE" | grep "BLOCK" | \
    grep -oP 'SRC=\K[^ ]+' | sort | uniq -c | sort -rn | head -10

# Most targeted ports
echo -e "\n--- Most Targeted Ports ---"
grep "$DATE" "$LOGFILE" | grep "BLOCK" | \
    grep -oP 'DPT=\K[0-9]+' | sort | uniq -c | sort -rn | head -10

# Blocked connections by hour
echo -e "\n--- Blocks Per Hour ---"
grep "$DATE" "$LOGFILE" | grep "BLOCK" | \
    awk '{print substr($3,1,2)":00"}' | sort | uniq -c

9. Database Performance Monitor

#!/bin/bash
# db-monitor.sh — PostgreSQL performance monitoring
# Schedule: */30 * * * * (every 30 minutes)

DB_NAME="myapp"
LOG="/var/log/db-monitor.log"
DATE=$(date "+%Y-%m-%d %H:%M:%S")

# Connection count
CONNECTIONS=$(psql -U postgres -t -c "SELECT count(*) FROM pg_stat_activity WHERE datname='$DB_NAME';" 2>/dev/null)

# Database size
DB_SIZE=$(psql -U postgres -t -c "SELECT pg_size_pretty(pg_database_size('$DB_NAME'));" 2>/dev/null)

# Slow queries (> 5 seconds)
SLOW_QUERIES=$(psql -U postgres -t -c "
    SELECT count(*)
    FROM pg_stat_activity
    WHERE datname='$DB_NAME'
    AND state='active'
    AND now() - query_start > interval '5 seconds';" 2>/dev/null)

# Cache hit ratio
CACHE_HIT=$(psql -U postgres -t -c "
    SELECT round(100.0 * sum(blks_hit) / nullif(sum(blks_hit) + sum(blks_read), 0), 2)
    FROM pg_stat_database WHERE datname='$DB_NAME';" 2>/dev/null)

echo "[$DATE] Connections: $CONNECTIONS | Size: $DB_SIZE | Slow: $SLOW_QUERIES | Cache Hit: ${CACHE_HIT}%" >> "$LOG"

For PostgreSQL-specific administration, check out our PostgreSQL category with dedicated eBooks on performance tuning, backup strategies, and query optimization.

10-15: More Essential Scripts

10. Security Audit Scanner

#!/bin/bash
# Check for common security issues
echo "=== Security Quick Audit ==="
echo "1. World-writable files:"
find / -xdev -type f -perm -0002 2>/dev/null | head -20
echo "2. SUID/SGID files:"
find / -xdev \( -perm -4000 -o -perm -2000 \) -type f 2>/dev/null
echo "3. Users with empty passwords:"
awk -F: '($2 == "") {print $1}' /etc/shadow 2>/dev/null
echo "4. Failed login attempts (last 24h):"
journalctl -u sshd --since "24 hours ago" 2>/dev/null | grep -c "Failed"  # unit is "ssh" on Debian/Ubuntu

11. Automated Git Deployment

#!/bin/bash
# deploy.sh — Pull latest code and restart services
DEPLOY_DIR="/var/www/myapp"
BRANCH="main"
cd "$DEPLOY_DIR" || exit 1
git fetch origin
LOCAL=$(git rev-parse HEAD)
REMOTE=$(git rev-parse "origin/$BRANCH")
if [ "$LOCAL" != "$REMOTE" ]; then
    git pull origin "$BRANCH"
    composer install --no-dev --optimize-autoloader
    systemctl restart php-fpm
    echo "[$(date)] Deployed: $(git log --oneline -1)"
else
    echo "[$(date)] No updates available"
fi

12. Network Bandwidth Monitor

#!/bin/bash
# Track network usage per interface
INTERFACE="eth0"  # adjust for your system (e.g. ens3, enp1s0)
RX_BEFORE=$(cat /sys/class/net/$INTERFACE/statistics/rx_bytes)
TX_BEFORE=$(cat /sys/class/net/$INTERFACE/statistics/tx_bytes)
sleep 60
RX_AFTER=$(cat /sys/class/net/$INTERFACE/statistics/rx_bytes)
TX_AFTER=$(cat /sys/class/net/$INTERFACE/statistics/tx_bytes)
RX_MB=$(echo "scale=2; ($RX_AFTER - $RX_BEFORE) / 1048576" | bc)
TX_MB=$(echo "scale=2; ($TX_AFTER - $TX_BEFORE) / 1048576" | bc)
echo "[$(date)] Download: ${RX_MB}MB/min | Upload: ${TX_MB}MB/min"

13. Process Memory Reporter

#!/bin/bash
# Top memory-consuming processes
echo "=== Top 10 Memory Consumers ==="
ps aux --sort=-%mem | head -11 | \
    awk 'NR > 1 {printf "%-20s %s MB\n", $11, int($6/1024)}'  # NR > 1 skips the ps header row

14. Crontab Backup

#!/bin/bash
# Backup all user crontabs
BACKUP_DIR="/backup/crontabs"
mkdir -p "$BACKUP_DIR"
DATE=$(date "+%Y%m%d")
for USER in $(cut -f1 -d: /etc/passwd); do
    CRON=$(crontab -l -u "$USER" 2>/dev/null)
    if [ -n "$CRON" ]; then
        echo "$CRON" > "${BACKUP_DIR}/${USER}_${DATE}.cron"
    fi
done

15. Uptime and Response Time Monitor

#!/bin/bash
# website-monitor.sh — Check website availability and response time
URLS=("https://yourdomain.com" "https://api.yourdomain.com/health")
LOG="/var/log/uptime-monitor.log"
for URL in "${URLS[@]}"; do
    RESPONSE=$(curl -s -o /dev/null --max-time 10 -w "%{http_code} %{time_total}" "$URL")
    CODE=$(echo "$RESPONSE" | awk '{print $1}')
    TIME=$(echo "$RESPONSE" | awk '{print $2}')
    if [ "$CODE" != "200" ]; then
        echo "[$(date)] DOWN: $URL (HTTP $CODE)" >> "$LOG"
    else
        echo "[$(date)] OK: $URL (${TIME}s)" >> "$LOG"
    fi
done

Best Practices for Production Scripts

Practice                          Why it matters
Always use set -euo pipefail      Stops the script on errors and catches undefined variables
Log everything                    You need audit trails for debugging
Use absolute paths                Cron runs with a minimal PATH
Add lock files                    Prevents duplicate concurrent runs
Test with bash -x script.sh       Shows every command as it runs
Send alerts on failure            Silent failures are the worst kind
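The practices above can be combined into a reusable skeleton. This is a minimal sketch — the lock and log paths are example choices, and the mkdir-based lock is a portable alternative to flock:

```shell
#!/usr/bin/env bash
# template.sh — skeleton applying the best practices above (paths are examples)
set -euo pipefail              # stop on errors, unset variables, and pipe failures

LOCKDIR="/tmp/myscript.lock"   # mkdir is atomic, so a directory doubles as a lock
LOG="${LOG:-/tmp/myscript.log}"

log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" >> "$LOG"; }

# Refuse to start if another instance already holds the lock
if ! mkdir "$LOCKDIR" 2>/dev/null; then
    echo "Another instance is running; exiting." >&2
    exit 1
fi
trap 'rmdir "$LOCKDIR"' EXIT              # always release the lock on exit
trap 'log "ERROR on line $LINENO"' ERR    # record where a failure happened

log "Job started"
# ... real work here, using absolute paths because cron's PATH is minimal ...
log "Job finished"
```

Run it twice in parallel and the second invocation exits immediately instead of duplicating work.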

Ready to Automate Everything?

These 15 scripts are just the beginning. With solid Bash skills, you can automate virtually any repetitive task on your Linux servers.

Explore our full collection of Linux administration and scripting resources.

Become a Bash Automation Expert

Get our scripting eBooks and start automating your infrastructure today.

Browse Scripting Books