10 Bash Scripts That Will Automate Your Linux Server Management

As a Linux administrator, you probably spend hours every week on repetitive tasks: checking disk space, reviewing logs, verifying services are running, managing backups. Bash scripting automates these tasks so you can focus on work that actually requires human judgment.

Here are 10 practical Bash scripts you can start using today. Each includes basic error handling and logging and is designed to be dropped into a cron job; the final section covers the hardening steps worth adding before you rely on them in production.

1. Disk Space Monitor with Email Alerts

This script checks all mounted filesystems and sends an alert when any partition exceeds a threshold.

#!/bin/bash
# disk-monitor.sh - Alert when disk usage exceeds threshold

THRESHOLD=85
ADMIN_EMAIL="admin@example.com"
HOSTNAME=$(hostname)
ALERT=false
MESSAGE=""

while read -r line; do
    USAGE=$(echo "$line" | awk '{print $5}' | tr -d '%')
    MOUNT=$(echo "$line" | awk '{print $6}')

    if [ "$USAGE" -ge "$THRESHOLD" ]; then
        ALERT=true
        MESSAGE+="WARNING: $MOUNT is ${USAGE}% full\n"
    fi
done < <(df -h | grep -E '^/dev' | grep -v tmpfs)

if [ "$ALERT" = true ]; then
    echo -e "Disk Alert on $HOSTNAME\n\n$MESSAGE\n\n$(df -h)" | \
        mail -s "DISK ALERT: $HOSTNAME" "$ADMIN_EMAIL"
    logger "disk-monitor: Alert sent - partitions above ${THRESHOLD}%"
fi
# Add to cron - check every 30 minutes
*/30 * * * * /opt/scripts/disk-monitor.sh
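
Every alerting script in this article assumes the mail command is available and that the host can actually deliver mail (through a local MTA or a smarthost; how that is configured is outside the scope of these scripts). A quick way to verify delivery before trusting the alerts:

# Confirm outbound mail works before relying on it for alerts
echo "Test alert from $(hostname) at $(date)" | \
    mail -s "Mail delivery test" admin@example.com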

2. Automated Log Rotation and Cleanup

Keeps log directories under control by compressing old logs and removing expired ones.

#!/bin/bash
# log-cleanup.sh - Compress old logs, remove expired ones

LOG_DIRS=("/var/log/app" "/var/log/nginx" "/home/*/logs")
COMPRESS_AFTER=7    # Days before compression
DELETE_AFTER=90     # Days before deletion
LOGFILE="/var/log/log-cleanup.log"

log() { echo "$(date '+%Y-%m-%d %H:%M:%S') $1" >> "$LOGFILE"; }

log "Starting log cleanup"

for DIR_PATTERN in "${LOG_DIRS[@]}"; do
    for DIR in $DIR_PATTERN; do
        [ -d "$DIR" ] || continue

        # Count, then compress, plain logs older than COMPRESS_AFTER days
        COMPRESSED=$(find "$DIR" -name "*.log" -mtime +$COMPRESS_AFTER | wc -l)
        find "$DIR" -name "*.log" -mtime +$COMPRESS_AFTER \
            -exec gzip -9 {} \; 2>/dev/null

        # Count, then delete, compressed logs older than DELETE_AFTER days
        DELETED=$(find "$DIR" -name "*.gz" -mtime +$DELETE_AFTER | wc -l)
        find "$DIR" -name "*.gz" -mtime +$DELETE_AFTER -delete 2>/dev/null

        log "$DIR: compressed=$COMPRESSED, deleted=$DELETED"
    done
done

log "Log cleanup completed"

3. Service Health Checker

Monitors critical services and automatically restarts them if they crash.

#!/bin/bash
# service-health.sh - Monitor and restart critical services

SERVICES=("nginx" "php-fpm" "postgresql")
ADMIN_EMAIL="admin@example.com"
HOSTNAME=$(hostname)
MAX_RESTARTS=3

for SERVICE in "${SERVICES[@]}"; do
    if ! systemctl is-active --quiet "$SERVICE"; then
        RESTART_COUNT=$(journalctl -u "$SERVICE" --since "1 hour ago" | \
            grep -c "Started\|Restarted")

        if [ "$RESTART_COUNT" -lt "$MAX_RESTARTS" ]; then
            systemctl restart "$SERVICE"
            STATUS=$?

            if [ $STATUS -eq 0 ]; then
                MSG="$SERVICE was down and has been restarted successfully"
                logger "service-health: $MSG"
            else
                MSG="$SERVICE is down and FAILED to restart (exit code: $STATUS)"
                logger "service-health: CRITICAL - $MSG"
            fi

            echo "$MSG on $HOSTNAME at $(date)" | \
                mail -s "SERVICE ALERT: $SERVICE on $HOSTNAME" "$ADMIN_EMAIL"
        else
            echo "$SERVICE has restarted $RESTART_COUNT times in the last hour. Manual intervention required." | \
                mail -s "CRITICAL: $SERVICE on $HOSTNAME" "$ADMIN_EMAIL"
        fi
    fi
done

4. SSL Certificate Expiry Checker

Checks SSL certificates for your domains and alerts you before they expire.

#!/bin/bash
# ssl-check.sh - Check SSL certificate expiry dates

DOMAINS=("example.com" "shop.example.com" "api.example.com")
WARN_DAYS=30
ADMIN_EMAIL="admin@example.com"

for DOMAIN in "${DOMAINS[@]}"; do
    EXPIRY=$(echo | openssl s_client -connect "$DOMAIN:443" -servername "$DOMAIN" 2>/dev/null | \
        openssl x509 -noout -enddate 2>/dev/null | cut -d= -f2)

    if [ -z "$EXPIRY" ]; then
        echo "Cannot check SSL for $DOMAIN" | \
            mail -s "SSL CHECK FAILED: $DOMAIN" "$ADMIN_EMAIL"
        continue
    fi

    EXPIRY_EPOCH=$(date -d "$EXPIRY" +%s)
    NOW_EPOCH=$(date +%s)
    DAYS_LEFT=$(( (EXPIRY_EPOCH - NOW_EPOCH) / 86400 ))

    if [ "$DAYS_LEFT" -le "$WARN_DAYS" ]; then
        echo "SSL certificate for $DOMAIN expires in $DAYS_LEFT days ($EXPIRY)" | \
            mail -s "SSL EXPIRY WARNING: $DOMAIN ($DAYS_LEFT days)" "$ADMIN_EMAIL"
    fi

    echo "$DOMAIN: $DAYS_LEFT days remaining"
done

5. User Account Audit

Generates a security report of user accounts, highlighting potential issues.

#!/bin/bash
# user-audit.sh - Audit system user accounts

REPORT="/tmp/user-audit-$(date +%Y%m%d).txt"

{
    echo "=== User Account Audit Report ==="
    echo "Generated: $(date)"
    echo "Hostname: $(hostname)"
    echo ""

    echo "--- Users with UID 0 (root privileges) ---"
    awk -F: '$3 == 0 {print $1}' /etc/passwd
    echo ""

    echo "--- Users with login shell ---"
    grep -v '/nologin\|/false' /etc/passwd | awk -F: '{print $1, $6, $7}'
    echo ""

    echo "--- Users with empty passwords ---"
    sudo awk -F: '$2 == "" {print $1}' /etc/shadow 2>/dev/null
    echo ""

    echo "--- Last password changes ---"
    for USER in $(awk -F: '$3 >= 1000 {print $1}' /etc/passwd); do
        CHANGE=$(sudo chage -l "$USER" 2>/dev/null | grep "Last password change")
        echo "$USER: $CHANGE"
    done
    echo ""

    echo "--- Failed login attempts (last 24h) ---"
    journalctl _COMM=sshd --since "24 hours ago" 2>/dev/null | \
        grep "Failed password" | \
        sed -E 's/.*Failed password for (invalid user )?([^ ]+) from ([^ ]+).*/\2 \3/' | \
        sort | uniq -c | sort -rn | head -20
    echo ""

    echo "--- Currently logged in users ---"
    who
} > "$REPORT"

echo "Audit report saved to $REPORT"

6. Database Backup with Verification

Creates PostgreSQL backups, verifies that the resulting archive is intact, and prunes old copies.

#!/bin/bash
# db-backup.sh - Backup PostgreSQL with verification

DB_NAME="myapp"
DB_USER="postgres"
BACKUP_DIR="/backup/database"
DATE=$(date +%Y%m%d_%H%M)
RETENTION=14  # Days to keep backups

mkdir -p "$BACKUP_DIR"

# Create backup; pipefail ensures a pg_dump failure is not masked by gzip's exit code
BACKUP_FILE="$BACKUP_DIR/${DB_NAME}_${DATE}.sql.gz"
set -o pipefail
pg_dump -U "$DB_USER" "$DB_NAME" | gzip > "$BACKUP_FILE"
STATUS=$?
set +o pipefail

if [ "$STATUS" -eq 0 ]; then
    SIZE=$(du -sh "$BACKUP_FILE" | cut -f1)
    logger "db-backup: Success - $BACKUP_FILE ($SIZE)"

    # Verify backup is not empty and is valid gzip
    if gzip -t "$BACKUP_FILE" 2>/dev/null && [ -s "$BACKUP_FILE" ]; then
        logger "db-backup: Verification passed"
    else
        logger "db-backup: WARNING - Verification failed!"
        echo "Database backup verification failed for $DB_NAME" | \
            mail -s "BACKUP ALERT: Verification Failed" admin@example.com
    fi
else
    logger "db-backup: FAILED for $DB_NAME"
    echo "Database backup FAILED for $DB_NAME on $(hostname)" | \
        mail -s "BACKUP ALERT: $DB_NAME Failed" admin@example.com
fi

# Remove old backups
find "$BACKUP_DIR" -name "${DB_NAME}_*.sql.gz" -mtime +$RETENTION -delete
logger "db-backup: Cleaned up backups older than $RETENTION days"

7. System Information Report

#!/bin/bash
# sysinfo.sh - Generate comprehensive system report

echo "========================================"
echo "  System Report - $(date)"
echo "  Hostname: $(hostname)"
echo "========================================"
echo ""
echo "--- Uptime & Load ---"
uptime
echo ""
echo "--- Memory Usage ---"
free -h
echo ""
echo "--- Disk Usage ---"
df -h | grep -v tmpfs
echo ""
echo "--- Top 5 CPU Processes ---"
ps aux --sort=-%cpu | head -6
echo ""
echo "--- Top 5 Memory Processes ---"
ps aux --sort=-%mem | head -6
echo ""
echo "--- Network Connections ---"
ss -s
echo ""
echo "--- Recent Errors ---"
journalctl -p err --since "24 hours ago" --no-pager | tail -20

8. Automated Security Updates

#!/bin/bash
# auto-update.sh - Install security updates only

LOGFILE="/var/log/auto-update.log"

log() { echo "$(date '+%Y-%m-%d %H:%M:%S') $1" >> "$LOGFILE"; }

log "Starting security update check"

if command -v apt &>/dev/null; then
    apt update -qq 2>/dev/null
    UPDATES=$(apt list --upgradable 2>/dev/null | grep -i security | wc -l)

    if [ "$UPDATES" -gt 0 ]; then
        log "Installing $UPDATES security updates"
        # Upgrade only the packages whose pending update comes from a security
        # pocket, keeping existing config files on conflict
        SEC_PKGS=$(apt list --upgradable 2>/dev/null | grep -i security | cut -d/ -f1)
        DEBIAN_FRONTEND=noninteractive apt-get install --only-upgrade -y \
            -o Dpkg::Options::="--force-confold" $SEC_PKGS >> "$LOGFILE" 2>&1
        log "Security updates installed"

        # Check if reboot is needed
        if [ -f /var/run/reboot-required ]; then
            log "REBOOT REQUIRED after security updates"
            echo "Reboot required on $(hostname) after security updates" | \
                mail -s "REBOOT REQUIRED: $(hostname)" admin@example.com
        fi
    else
        log "No security updates available"
    fi
fi
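
On Debian and Ubuntu, the distribution's own unattended-upgrades package is the more battle-tested way to get this behaviour; the script above is useful when you want the logic visible and under your own control. Enabling the stock tool (assuming an apt-based system) takes two commands:

# Debian/Ubuntu: enable the stock unattended security upgrades instead
sudo apt install -y unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades   # writes /etc/apt/apt.conf.d/20auto-upgrades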

9. Website Uptime Monitor

#!/bin/bash
# uptime-check.sh - Monitor website availability

URLS=(
    "https://example.com|200"
    "https://api.example.com/health|200"
    "https://shop.example.com|200"
)
ADMIN_EMAIL="admin@example.com"

for ENTRY in "${URLS[@]}"; do
    URL=$(echo "$ENTRY" | cut -d'|' -f1)
    EXPECTED=$(echo "$ENTRY" | cut -d'|' -f2)

    HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" \
        --max-time 10 --connect-timeout 5 "$URL")

    if [ "$HTTP_CODE" != "$EXPECTED" ]; then
        MSG="$URL returned HTTP $HTTP_CODE (expected $EXPECTED)"
        logger "uptime-check: ALERT - $MSG"
        echo "$MSG at $(date)" | \
            mail -s "DOWNTIME ALERT: $URL" "$ADMIN_EMAIL"
    fi
done

10. Log Analyzer — Find Attack Patterns

#!/bin/bash
# log-analyzer.sh - Analyze auth and web logs for suspicious activity

echo "=== Security Log Analysis ==="
echo "Period: Last 24 hours"
echo "Date: $(date)"
echo ""

echo "--- Top 10 Failed SSH IPs ---"
journalctl _COMM=sshd --since "24 hours ago" 2>/dev/null | \
    grep "Failed password" | grep -oE 'from [0-9a-fA-F.:]+' | awk '{print $2}' | \
    sort | uniq -c | sort -rn | head -10
echo ""

echo "--- Brute Force Attempts (>10 failures) ---"
journalctl _COMM=sshd --since "24 hours ago" 2>/dev/null | \
    grep "Failed password" | grep -oE 'from [0-9a-fA-F.:]+' | awk '{print $2}' | \
    sort | uniq -c | sort -rn | awk '$1 > 10'
echo ""

echo "--- Suspicious Nginx Requests ---"
if [ -f /var/log/nginx/access.log ]; then
    grep -E "(wp-login|phpmyadmin|\.env|/admin|xmlrpc)" \
        /var/log/nginx/access.log | \
        awk '{print $1}' | sort | uniq -c | sort -rn | head -10
fi
echo ""

echo "--- Failed sudo Attempts ---"
journalctl _COMM=sudo --since "24 hours ago" 2>/dev/null | \
    grep -iE "incorrect password|authentication failure" | tail -10

Making Scripts Production-Ready

Before deploying any script to production, add these essential elements:

  1. Error handling: Use set -euo pipefail at the top
  2. Logging: Write output to both a log file and syslog via logger
  3. Lock files: Prevent multiple instances from running simultaneously
  4. Timeouts: Add timeout commands for operations that might hang
  5. Permissions: Set chmod 700 and appropriate ownership

A minimal skeleton covering error handling and a simple lock file (items 1 and 3); a second sketch follows it for logging, race-free locking, and timeouts:
#!/bin/bash
set -euo pipefail

LOCKFILE="/tmp/myscript.lock"

if [ -f "$LOCKFILE" ]; then
    echo "Script already running, exiting"
    exit 1
fi
touch "$LOCKFILE"
# Register cleanup only after acquiring the lock, so an early exit
# does not delete a lock file owned by another instance
trap 'rm -f "$LOCKFILE"' EXIT

# Your script logic here...
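
For items 2, 3, and 4, the skeleton can be extended with logger for syslog output, flock(1) for race-free locking, and timeout for commands that might hang. This second sketch is illustrative; the script name, lock path, log path, and health-check URL are placeholders:

#!/bin/bash
set -euo pipefail

LOCKFILE="/var/lock/myscript.lock"

# Hold an exclusive lock on fd 9 for the lifetime of the script;
# -n makes flock fail immediately if another instance is already running
exec 9>"$LOCKFILE"
flock -n 9 || { echo "Script already running, exiting"; exit 1; }

# Write to the script's own log file and to syslog in one call
# (assumes the script runs with permission to write both)
log() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') $1" >> /var/log/myscript.log
    logger -t myscript "$1"
}

log "Starting run"

# Wrap anything that can hang (network calls, slow mounts) in a timeout
if ! timeout 30 curl -sf --max-time 25 https://example.com/health > /dev/null; then
    log "Health check failed or timed out"
fi

log "Run completed"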

Conclusion

These 10 scripts cover the most common server management tasks. Start by implementing the ones most relevant to your environment, customize the thresholds and email addresses, and schedule them via cron. Over time, you will build a comprehensive automation library that saves hours of manual work every week.
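
For reference, a consolidated crontab might look like the following; the paths under /opt/scripts and the schedules are illustrative, so adjust them to your environment:

# m   h   dom mon dow  command
*/30 * * * *  /opt/scripts/disk-monitor.sh
*/5  * * * *  /opt/scripts/service-health.sh
*/5  * * * *  /opt/scripts/uptime-check.sh
0 2  * * *    /opt/scripts/db-backup.sh
0 3  * * *    /opt/scripts/log-cleanup.sh
0 4  * * *    /opt/scripts/auto-update.sh
0 6  * * *    /opt/scripts/ssl-check.sh
30 6 * * 1    /opt/scripts/user-audit.sh
0 7  * * *    /opt/scripts/log-analyzer.sh | mail -s "Daily security report" admin@example.com
0 8  * * *    /opt/scripts/sysinfo.sh | mail -s "Daily system report" admin@example.com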

Remember: the best script is one you write once and forget about — it runs reliably in the background, keeping your servers healthy while you focus on more important work.
