# Master Linux Intermediate Skills Through Projects
## Table of Contents

1. [Introduction](#introduction)
2. [Prerequisites](#prerequisites)
3. [Project 1: System Monitoring Dashboard](#project-1-system-monitoring-dashboard)
4. [Project 2: Automated Backup System](#project-2-automated-backup-system)
5. [Project 3: Log Analysis and Alerting System](#project-3-log-analysis-and-alerting-system)
6. [Project 4: Network Security Scanner](#project-4-network-security-scanner)
7. [Project 5: Container Management System](#project-5-container-management-system)
8. [Project 6: Database Administration Toolkit](#project-6-database-administration-toolkit)
9. [Advanced Commands Reference](#advanced-commands-reference)
10. [Best Practices](#best-practices)
11. [Troubleshooting Guide](#troubleshooting-guide)
## Introduction
This comprehensive guide presents six intermediate-level Linux projects designed to enhance your system administration, scripting, and automation skills. Each project builds upon fundamental Linux concepts while introducing advanced techniques used in professional environments.
### Learning Objectives
| Objective | Description | Skills Gained |
|-----------|-------------|---------------|
| System Administration | Master process management, service configuration, and system optimization | systemctl, crontab, performance tuning |
| Shell Scripting | Develop complex automation scripts with error handling and logging | bash scripting, regex, text processing |
| Network Management | Implement network monitoring and security scanning tools | netstat, ss, nmap, iptables |
| Security Hardening | Apply security best practices and monitoring techniques | file permissions, SELinux, audit logs |
| Database Management | Automate database operations and maintenance tasks | MySQL/PostgreSQL administration |
| Containerization | Deploy and manage containerized applications | Docker, container orchestration |
## Prerequisites
Before starting these projects, ensure you have:
- Basic Linux command line proficiency
- Understanding of file systems and permissions
- Familiarity with text editors (vim/nano)
- Knowledge of basic networking concepts
- Root or sudo access on a Linux system
### Required Software
```bash
# Update system packages
sudo apt update && sudo apt upgrade -y

# Install essential tools
sudo apt install -y curl wget git vim htop iotop nethogs \
    build-essential python3-pip docker.io mysql-server \
    postgresql postgresql-contrib nmap tcpdump wireshark
```

## Project 1: System Monitoring Dashboard
### Overview
Create a comprehensive system monitoring solution that tracks CPU, memory, disk usage, network activity, and running processes. This project introduces system performance analysis and automated reporting.
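Much of the monitoring script below is text extraction with awk and cut. As a warm-up, here is the exact pipeline the script uses to pull the 1-minute load average out of `uptime` output, run against a canned sample line (the sample string is made up for illustration):

```bash
# A typical uptime line (sample data, not live output)
sample=" 14:02:11 up 3 days,  4:12,  2 users,  load average: 0.52, 0.41, 0.35"

# Split on "load average:", take the first field, strip the trailing comma
load1=$(echo "$sample" | awk -F'load average:' '{print $2}' | awk '{print $1}' | cut -d',' -f1)
echo "$load1"   # prints 0.52
```

On a live system you would feed `uptime` itself into the pipeline instead of the sample string.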
### Core Components
| Component | Function | Tools Used |
|-----------|----------|------------|
| Resource Monitor | Track CPU, RAM, disk usage | top, free, df, iostat |
| Network Monitor | Monitor network connections and traffic | ss, netstat, iftop |
| Process Analyzer | Identify resource-intensive processes | ps, pgrep, kill |
| Alert System | Send notifications for threshold breaches | mail, logger |
| Web Dashboard | Display real-time system metrics | HTML/CSS/JavaScript |
### Implementation
#### Step 1: Create the Main Monitoring Script
```bash
#!/bin/bash
# system_monitor.sh - Comprehensive system monitoring script

# Configuration
LOG_FILE="/var/log/system_monitor.log"
ALERT_THRESHOLD_CPU=80
ALERT_THRESHOLD_MEM=85
ALERT_THRESHOLD_DISK=90
EMAIL_RECIPIENT="admin@example.com"

# Function to log messages with timestamp
log_message() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE"
}

# Function to get CPU usage
get_cpu_usage() {
    cpu_usage=$(top -bn1 | grep "Cpu(s)" | awk '{print $2}' | cut -d'%' -f1)
    echo "${cpu_usage%.*}"  # Remove decimal part
}

# Function to get memory usage
get_memory_usage() {
    mem_info=$(free | grep Mem)
    total=$(echo "$mem_info" | awk '{print $2}')
    used=$(echo "$mem_info" | awk '{print $3}')
    usage=$((used * 100 / total))
    echo "$usage"
}

# Function to get disk usage
get_disk_usage() {
    disk_usage=$(df -h / | awk 'NR==2 {print $5}' | cut -d'%' -f1)
    echo "$disk_usage"
}

# Function to get top processes by CPU
get_top_processes_cpu() {
    ps aux --sort=-%cpu | head -6 | tail -5
}

# Function to get top processes by memory
get_top_processes_mem() {
    ps aux --sort=-%mem | head -6 | tail -5
}

# Function to get network connections
get_network_connections() {
    ss -tuln | wc -l
}

# Function to check system load
get_system_load() {
    uptime | awk -F'load average:' '{print $2}' | awk '{print $1}' | cut -d',' -f1
}

# Function to send alerts
send_alert() {
    local metric="$1"
    local value="$2"
    local threshold="$3"

    subject="ALERT: High $metric usage on $(hostname)"
    message="$metric usage is at $value%, which exceeds the threshold of $threshold%"

    echo "$message" | mail -s "$subject" "$EMAIL_RECIPIENT"
    log_message "ALERT: $message"
}

# Main monitoring function
monitor_system() {
    log_message "Starting system monitoring cycle"

    # Get system metrics
    cpu_usage=$(get_cpu_usage)
    mem_usage=$(get_memory_usage)
    disk_usage=$(get_disk_usage)
    load_avg=$(get_system_load)
    connections=$(get_network_connections)

    # Log current status
    log_message "CPU: ${cpu_usage}% | Memory: ${mem_usage}% | Disk: ${disk_usage}% | Load: ${load_avg} | Connections: ${connections}"

    # Check thresholds and send alerts
    if [ "$cpu_usage" -gt "$ALERT_THRESHOLD_CPU" ]; then
        send_alert "CPU" "$cpu_usage" "$ALERT_THRESHOLD_CPU"
    fi

    if [ "$mem_usage" -gt "$ALERT_THRESHOLD_MEM" ]; then
        send_alert "Memory" "$mem_usage" "$ALERT_THRESHOLD_MEM"
    fi

    if [ "$disk_usage" -gt "$ALERT_THRESHOLD_DISK" ]; then
        send_alert "Disk" "$disk_usage" "$ALERT_THRESHOLD_DISK"
    fi

    # Generate detailed report
    generate_report
}

# Function to generate HTML report
generate_report() {
    report_file="/var/www/html/system_report.html"
    cat > "$report_file" << EOF
<html>
<head><title>System Monitoring Report - $(hostname)</title></head>
<body>
<h1>System Monitoring Report - $(hostname)</h1>
<p>Last Updated: $(date)</p>
<ul>
  <li>CPU Usage: ${cpu_usage}%</li>
  <li>Memory Usage: ${mem_usage}%</li>
  <li>Disk Usage: ${disk_usage}%</li>
</ul>
<h2>Top Processes by CPU</h2>
<pre>$(get_top_processes_cpu)</pre>
<h2>Top Processes by Memory</h2>
<pre>$(get_top_processes_mem)</pre>
</body>
</html>
EOF
}

# Main execution
if [ "$1" = "--daemon" ]; then
    while true; do
        monitor_system
        sleep 300  # Run every 5 minutes
    done
else
    monitor_system
fi
```

#### Step 2: Command Explanations
| Command | Purpose | Example Usage |
|---------|---------|---------------|
| top -bn1 | Get single batch of process information | top -bn1 \| grep "Cpu(s)" |
| free | Display memory usage statistics | free -h (human readable) |
| df -h | Show disk space usage | df -h / (root filesystem) |
| ps aux --sort | List processes sorted by resource usage | ps aux --sort=-%cpu |
| ss -tuln | Show network socket statistics | ss -tuln \| grep :80 |
| uptime | Display system load averages | Shows 1, 5, 15 minute averages |
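The arithmetic behind `get_memory_usage` is plain integer shell math: multiply before dividing so the percentage survives truncation. The same calculation on fixed sample numbers (the KiB values below are made up for illustration):

```bash
# Sample values as free(1) would report them, in KiB
total=16384256   # total memory (sample value)
used=13926618    # used memory (sample value)

# Multiply first, then divide: integer division would otherwise yield 0
usage=$(( used * 100 / total ))
echo "${usage}%"   # prints 85%
```

Writing `used / total * 100` instead would always print 0, because `used / total` truncates to zero before the multiplication happens.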
#### Step 3: Setup Automation
```bash
# Make script executable
chmod +x system_monitor.sh

# Create systemd service for daemon mode
sudo tee /etc/systemd/system/system-monitor.service << EOF
[Unit]
Description=System Monitoring Service
After=network.target

[Service]
Type=simple
User=root
ExecStart=/path/to/system_monitor.sh --daemon
Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target
EOF

# Enable and start the service
sudo systemctl daemon-reload
sudo systemctl enable system-monitor.service
sudo systemctl start system-monitor.service
```

### Advanced Features
#### Real-time Process Monitoring
```bash
#!/bin/bash
# process_monitor.sh - Monitor specific processes

monitor_process() {
    local process_name="$1"
    local max_cpu="$2"
    local max_mem="$3"

    while true; do
        # Get process information (head -1 keeps a single line if several match)
        proc_info=$(ps aux | grep "$process_name" | grep -v grep | head -1)

        if [ -n "$proc_info" ]; then
            cpu_usage=$(echo "$proc_info" | awk '{print $3}')
            mem_usage=$(echo "$proc_info" | awk '{print $4}')
            pid=$(echo "$proc_info" | awk '{print $2}')

            # Check if usage exceeds limits (bc handles the decimal comparison)
            if (( $(echo "$cpu_usage > $max_cpu" | bc -l) )); then
                echo "WARNING: Process $process_name (PID: $pid) CPU usage: $cpu_usage%"
                # Optional: kill or restart process
                # kill -TERM "$pid"
            fi

            if (( $(echo "$mem_usage > $max_mem" | bc -l) )); then
                echo "WARNING: Process $process_name (PID: $pid) Memory usage: $mem_usage%"
            fi
        else
            echo "Process $process_name not found"
        fi

        sleep 10
    done
}

# Usage example
monitor_process "apache2" 50.0 25.0
```

## Project 2: Automated Backup System
### Overview
Develop a robust backup solution that handles incremental backups, compression, encryption, and remote storage synchronization. This project covers file system operations, data protection, and automation strategies.
### Architecture
| Component | Function | Implementation |
|-----------|----------|----------------|
| Backup Engine | Core backup logic with incremental support | rsync, tar |
| Compression | Reduce backup size | gzip, xz |
| Encryption | Secure backup data | gpg, openssl |
| Remote Sync | Upload to remote storage | scp, rsync over SSH |
| Rotation | Manage backup retention | find, date calculations |
| Notification | Report backup status | mail, webhook |
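The key trick in the backup engine is rsync's `--link-dest`: unchanged files in a new snapshot are hard-linked to the previous one, so every snapshot looks like a full backup but costs almost no extra disk. The same mechanism, demonstrated with a plain coreutils hard-link copy (the `/tmp/snap_demo` paths are throwaway demo locations, not the script's real backup directory):

```bash
# Fresh demo area
rm -rf /tmp/snap_demo && mkdir -p /tmp/snap_demo/day1
echo "config v1" > /tmp/snap_demo/day1/app.conf

# "day2" snapshot: hard-link everything from day1 instead of copying data
cp -al /tmp/snap_demo/day1 /tmp/snap_demo/day2

# Both paths now refer to the same inode, so no data was duplicated
[ /tmp/snap_demo/day1/app.conf -ef /tmp/snap_demo/day2/app.conf ] && echo "shared inode"
```

`rsync --link-dest` does exactly this for unchanged files, while changed files get a fresh copy in the new snapshot directory.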
### Implementation
#### Step 1: Main Backup Script
```bash
#!/bin/bash
# backup_system.sh - Comprehensive backup solution

# Configuration
BACKUP_NAME="system_backup"
SOURCE_DIRS=("/home" "/etc" "/var/www" "/opt")
BACKUP_BASE_DIR="/backups"
REMOTE_HOST="backup.example.com"
REMOTE_USER="backup"
REMOTE_DIR="/remote/backups"
RETENTION_DAYS=30
COMPRESSION_LEVEL=6
ENCRYPT_BACKUPS=true
GPG_RECIPIENT="admin@example.com"
LOG_FILE="/var/log/backup_system.log"

# Ensure backup directory exists
mkdir -p "$BACKUP_BASE_DIR"

# Logging function (logs go to stderr as well as the log file, so command
# substitutions capture only real output such as backup paths)
log_message() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') [$1] $2" | tee -a "$LOG_FILE" >&2
}

# Function to calculate directory size
get_dir_size() {
    du -sh "$1" 2>/dev/null | cut -f1
}

# Function to create incremental backup
create_incremental_backup() {
    local timestamp=$(date '+%Y%m%d_%H%M%S')
    local backup_dir="$BACKUP_BASE_DIR/${BACKUP_NAME}_$timestamp"
    local link_dest=""

    # Find most recent backup for incremental linking
    latest_backup=$(find "$BACKUP_BASE_DIR" -maxdepth 1 -name "${BACKUP_NAME}_*" -type d | sort | tail -1)
    if [ -n "$latest_backup" ]; then
        link_dest="--link-dest=$latest_backup"
        log_message "INFO" "Using incremental base: $latest_backup"
    fi

    mkdir -p "$backup_dir"

    # Create backup manifest
    manifest_file="$backup_dir/backup_manifest.txt"
    echo "Backup created: $(date)" > "$manifest_file"
    echo "Hostname: $(hostname)" >> "$manifest_file"
    echo "Backup type: Incremental" >> "$manifest_file"
    echo "Source directories:" >> "$manifest_file"

    # Backup each source directory
    for source_dir in "${SOURCE_DIRS[@]}"; do
        if [ -d "$source_dir" ]; then
            local dir_name=$(basename "$source_dir")
            local target_dir="$backup_dir/$dir_name"
            local size_before=$(get_dir_size "$source_dir")

            log_message "INFO" "Backing up $source_dir (Size: $size_before)"
            echo "  $source_dir ($size_before)" >> "$manifest_file"

            # Perform rsync backup (output to stderr, see log_message note)
            rsync -av --delete $link_dest \
                --exclude='*.tmp' \
                --exclude='*.log' \
                --exclude='/proc/*' \
                --exclude='/sys/*' \
                --exclude='/dev/*' \
                "$source_dir/" "$target_dir/" 2>&1 | tee -a "$LOG_FILE" >&2

            if [ ${PIPESTATUS[0]} -eq 0 ]; then
                log_message "SUCCESS" "Backup completed for $source_dir"
            else
                log_message "ERROR" "Backup failed for $source_dir"
                return 1
            fi
        else
            log_message "WARNING" "Source directory $source_dir does not exist"
        fi
    done

    local result="$backup_dir"

    # Create compressed archive
    if [ "$COMPRESSION_LEVEL" -gt 0 ]; then
        result=$(compress_backup "$result") || return 1
    fi

    # Encrypt backup if enabled
    if [ "$ENCRYPT_BACKUPS" = true ]; then
        result=$(encrypt_backup "$result") || return 1
    fi

    # Print the final backup path for the caller to capture
    echo "$result"
}

# Function to compress backup
compress_backup() {
    local backup_dir="$1"
    local archive_name="${backup_dir}.tar.xz"

    log_message "INFO" "Compressing backup to $archive_name"
    tar -cJf "$archive_name" -C "$(dirname "$backup_dir")" "$(basename "$backup_dir")" 2>> "$LOG_FILE"

    if [ $? -eq 0 ]; then
        # Remove uncompressed directory
        rm -rf "$backup_dir"
        log_message "SUCCESS" "Compression completed: $archive_name"
        echo "$archive_name"
    else
        log_message "ERROR" "Compression failed"
        return 1
    fi
}

# Function to encrypt backup
encrypt_backup() {
    local backup_file="$1"
    local encrypted_file="${backup_file}.gpg"

    log_message "INFO" "Encrypting backup: $backup_file"
    gpg --trust-model always --encrypt -r "$GPG_RECIPIENT" --output "$encrypted_file" "$backup_file"

    if [ $? -eq 0 ]; then
        # Remove unencrypted file
        rm -f "$backup_file"
        log_message "SUCCESS" "Encryption completed: $encrypted_file"
        echo "$encrypted_file"
    else
        log_message "ERROR" "Encryption failed"
        return 1
    fi
}

# Function to sync to remote storage
sync_to_remote() {
    local backup_file="$1"

    if [ -z "$REMOTE_HOST" ]; then
        log_message "INFO" "No remote host configured, skipping remote sync"
        return 0
    fi

    log_message "INFO" "Syncing to remote storage: $REMOTE_HOST"

    # Test SSH connection and ensure the remote directory exists
    # (PIPESTATUS[0] checks ssh itself, not the trailing tee)
    ssh -o ConnectTimeout=10 "$REMOTE_USER@$REMOTE_HOST" "mkdir -p $REMOTE_DIR" 2>&1 | tee -a "$LOG_FILE"
    if [ ${PIPESTATUS[0]} -ne 0 ]; then
        log_message "ERROR" "Cannot connect to remote host"
        return 1
    fi

    # Transfer backup file
    rsync -avz --progress "$backup_file" "$REMOTE_USER@$REMOTE_HOST:$REMOTE_DIR/" 2>&1 | tee -a "$LOG_FILE"
    if [ ${PIPESTATUS[0]} -eq 0 ]; then
        log_message "SUCCESS" "Remote sync completed"
    else
        log_message "ERROR" "Remote sync failed"
        return 1
    fi
}

# Function to cleanup old backups
cleanup_old_backups() {
    log_message "INFO" "Cleaning up backups older than $RETENTION_DAYS days"

    # Local cleanup
    find "$BACKUP_BASE_DIR" -name "${BACKUP_NAME}_*" -type f -mtime +$RETENTION_DAYS -delete 2>&1 | tee -a "$LOG_FILE"

    # Remote cleanup
    if [ -n "$REMOTE_HOST" ]; then
        ssh "$REMOTE_USER@$REMOTE_HOST" "find $REMOTE_DIR -name '${BACKUP_NAME}_*' -type f -mtime +$RETENTION_DAYS -delete" 2>&1 | tee -a "$LOG_FILE"
    fi

    log_message "INFO" "Cleanup completed"
}

# Function to verify backup integrity
verify_backup() {
    local backup_file="$1"

    log_message "INFO" "Verifying backup integrity: $backup_file"

    if [[ "$backup_file" == *.tar.xz ]]; then
        tar -tJf "$backup_file" > /dev/null 2>&1
    elif [[ "$backup_file" == *.tar.gz ]]; then
        tar -tzf "$backup_file" > /dev/null 2>&1
    elif [[ "$backup_file" == *.gpg ]]; then
        gpg --list-packets "$backup_file" > /dev/null 2>&1
    fi

    if [ $? -eq 0 ]; then
        log_message "SUCCESS" "Backup integrity verified"
        return 0
    else
        log_message "ERROR" "Backup integrity check failed"
        return 1
    fi
}

# Function to send notification
send_notification() {
    local status="$1"
    local message="$2"
    local backup_file="$3"

    local subject="Backup $status - $(hostname)"
    local body="Backup operation completed with status: $status

Details: $message
Backup file: $backup_file
Timestamp: $(date)
Host: $(hostname)"

    # Send email notification
    echo "$body" | mail -s "$subject" "admin@example.com"

    # Log notification
    log_message "INFO" "Notification sent: $status"
}

# Main backup function
main() {
    log_message "INFO" "Starting backup process"
    local start_time=$(date +%s)
    local backup_file=""
    local status="SUCCESS"
    local error_message=""

    # Create backup (only the final path is captured; logs go to stderr)
    backup_file=$(create_incremental_backup)
    if [ $? -ne 0 ]; then
        status="ERROR"
        error_message="Backup creation failed"
    fi

    # Verify backup integrity
    if [ "$status" = "SUCCESS" ] && [ -n "$backup_file" ]; then
        verify_backup "$backup_file"
        if [ $? -ne 0 ]; then
            status="ERROR"
            error_message="Backup verification failed"
        fi
    fi

    # Sync to remote storage
    if [ "$status" = "SUCCESS" ] && [ -n "$backup_file" ]; then
        sync_to_remote "$backup_file"
        if [ $? -ne 0 ]; then
            status="WARNING"
            error_message="Remote sync failed, backup available locally"
        fi
    fi

    # Cleanup old backups
    cleanup_old_backups

    # Calculate execution time
    local end_time=$(date +%s)
    local duration=$((end_time - start_time))

    # Send notification
    local message="Backup completed in ${duration} seconds. $error_message"
    send_notification "$status" "$message" "$backup_file"

    log_message "INFO" "Backup process completed with status: $status"
}

# Command line options
case "$1" in
    --verify)
        verify_backup "$2"
        ;;
    --cleanup)
        cleanup_old_backups
        ;;
    --restore)
        # Implementation for restore functionality
        echo "Restore functionality not implemented yet"
        ;;
    *)
        main
        ;;
esac
```

#### Step 2: Backup Restoration Script
```bash
#!/bin/bash
# restore_backup.sh - Restore from backup

restore_backup() {
    local backup_file="$1"
    local restore_target="$2"
    local restore_specific="$3"
    local temp_file=""

    if [ ! -f "$backup_file" ]; then
        echo "ERROR: Backup file not found: $backup_file"
        return 1
    fi

    mkdir -p "$restore_target"
    echo "Restoring backup: $backup_file"
    echo "Target directory: $restore_target"

    # Decrypt if necessary (keep the .tar.* suffix so extraction below
    # can still detect the archive type)
    if [[ "$backup_file" == *.gpg ]]; then
        echo "Decrypting backup..."
        temp_file="/tmp/restore_$(date +%s)_$(basename "${backup_file%.gpg}")"
        gpg --decrypt "$backup_file" > "$temp_file"
        backup_file="$temp_file"
    fi

    # Extract backup
    if [[ "$backup_file" == *.tar.xz ]]; then
        if [ -n "$restore_specific" ]; then
            tar -xJf "$backup_file" -C "$restore_target" "$restore_specific"
        else
            tar -xJf "$backup_file" -C "$restore_target"
        fi
    elif [[ "$backup_file" == *.tar.gz ]]; then
        if [ -n "$restore_specific" ]; then
            tar -xzf "$backup_file" -C "$restore_target" "$restore_specific"
        else
            tar -xzf "$backup_file" -C "$restore_target"
        fi
    fi

    # Cleanup temporary files
    if [ -n "$temp_file" ] && [ -f "$temp_file" ]; then
        rm -f "$temp_file"
    fi

    echo "Restore completed successfully"
}

# Usage: ./restore_backup.sh backup_file target_directory [specific_path]
restore_backup "$1" "$2" "$3"
```

### Automation Setup
```bash
# Create a cron job for automated backups (daily at 02:00; cron needs
# all five time fields)
(crontab -l 2>/dev/null; echo "0 2 * * * /path/to/backup_system.sh") | crontab -

# Create a systemd timer for more advanced scheduling
sudo tee /etc/systemd/system/backup.timer << EOF
[Unit]
Description=Run backup system daily
Requires=backup.service

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
EOF

sudo tee /etc/systemd/system/backup.service << EOF
[Unit]
Description=Backup System Service

[Service]
Type=oneshot
ExecStart=/path/to/backup_system.sh
EOF

sudo systemctl daemon-reload
sudo systemctl enable backup.timer
sudo systemctl start backup.timer
```
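The retention policy in `cleanup_old_backups` reduces to `find -mtime +N -delete`. You can rehearse it safely on throwaway files before pointing it at real backups (the `/tmp/retention_demo` directory is a demo assumption):

```bash
# Fresh demo area with one "old" and one "new" backup file
rm -rf /tmp/retention_demo && mkdir -p /tmp/retention_demo
touch -d "40 days ago" /tmp/retention_demo/system_backup_old.tar.xz
touch /tmp/retention_demo/system_backup_new.tar.xz

# Dry run: -print shows what a 30-day policy would remove
find /tmp/retention_demo -name 'system_backup_*' -mtime +30 -print

# Apply the policy; only the 40-day-old file is deleted
find /tmp/retention_demo -name 'system_backup_*' -mtime +30 -delete
ls /tmp/retention_demo
```

Running the `-print` form first is a cheap sanity check; a typo in the name pattern with `-delete` attached can silently remove the wrong files.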
## Project 3: Log Analysis and Alerting System
### Overview
Build an intelligent log monitoring system that analyzes system logs, application logs, and security events in real-time. This project focuses on pattern recognition, alerting mechanisms, and automated incident response.
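The "real-time" part rests on a simple idea: convert log timestamps to epoch seconds with `date -d` and only analyze lines newer than the last run. The comparison on two fixed UTC timestamps (GNU date assumed; the timestamps are illustrative):

```bash
# Epoch seconds for the last-check time and for a log entry
since=$(date -u -d "2024-01-01 00:00:00" +%s)
entry=$(date -u -d "2024-01-01 00:05:00" +%s)

# Integer comparison decides whether the entry is new enough to analyze
[ "$entry" -ge "$since" ] && echo "entry is new enough"
echo $(( entry - since ))   # prints 300 (five minutes)
```

The analyzer below applies this comparison per line inside awk, which is why each run records its own timestamp for the next cycle to use.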
### System Architecture
| Component | Function | Tools Used |
|-----------|----------|------------|
| Log Collector | Aggregate logs from multiple sources | rsyslog, journalctl |
| Pattern Analyzer | Detect anomalies and security events | grep, awk, regex |
| Alert Engine | Send notifications based on rules | mail, webhook, SMS |
| Dashboard | Visual representation of log data | HTML/JavaScript |
| Response System | Automated incident response | iptables, fail2ban |
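The response system's first step is pulling attacker IPs out of matched log lines with an extended regex, before handing them to iptables. That extraction, run on sample auth.log lines (the lines and address are fabricated; 203.0.113.0/24 is a documentation range):

```bash
# Two sample failed-login lines from the same source address
lines='Jan 10 03:12:44 web sshd[912]: Failed password for root from 203.0.113.7 port 52114 ssh2
Jan 10 03:12:51 web sshd[913]: Failed password for admin from 203.0.113.7 port 52130 ssh2'

# -o prints only the matched text; sort -u deduplicates repeat offenders
ips=$(echo "$lines" | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' | sort -u)
echo "$ips"   # prints 203.0.113.7 once
```

Deduplicating matters: without `sort -u`, a brute-force burst would produce one iptables rule per log line for the same address.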
### Implementation
#### Step 1: Core Log Analysis Engine
```bash
#!/bin/bash
# log_analyzer.sh - Advanced log analysis and alerting system

# Configuration
CONFIG_FILE="/etc/log_analyzer/config.conf"
RULES_FILE="/etc/log_analyzer/rules.conf"
LOG_DIR="/var/log"
ANALYSIS_LOG="/var/log/log_analyzer.log"
ALERT_LOG="/var/log/security_alerts.log"
TEMP_DIR="/tmp/log_analyzer"
DASHBOARD_DIR="/var/www/html/logs"

# Default configuration
DEFAULT_CHECK_INTERVAL=60
DEFAULT_ALERT_EMAIL="admin@example.com"
DEFAULT_MAX_ALERTS_PER_HOUR=10

# Create necessary directories
mkdir -p "$TEMP_DIR" "$DASHBOARD_DIR" "/etc/log_analyzer"

# Load configuration
load_config() {
    if [ ! -f "$CONFIG_FILE" ]; then
        # Create default config
        cat > "$CONFIG_FILE" << EOF
# Log Analyzer Configuration
CHECK_INTERVAL=$DEFAULT_CHECK_INTERVAL
ALERT_EMAIL=$DEFAULT_ALERT_EMAIL
MAX_ALERTS_PER_HOUR=$DEFAULT_MAX_ALERTS_PER_HOUR
ENABLE_AUTO_RESPONSE=true
ENABLE_DASHBOARD=true
EOF
    fi
    source "$CONFIG_FILE"
}

# Load analysis rules
load_rules() {
    if [ ! -f "$RULES_FILE" ]; then
        create_default_rules
    fi
}

# Create default analysis rules
create_default_rules() {
    cat > "$RULES_FILE" << 'EOF'
# Log Analysis Rules
# Format: RULE_NAME|LOG_FILE|PATTERN|SEVERITY|ACTION|DESCRIPTION

# Security Rules
SSH_BRUTE_FORCE|/var/log/auth.log|Failed password.*ssh|HIGH|BLOCK_IP|SSH brute force attempt detected
SUDO_FAILURES|/var/log/auth.log|sudo.*authentication failure|MEDIUM|ALERT|Sudo authentication failure
ROOT_LOGIN|/var/log/auth.log|Accepted.*root|HIGH|ALERT|Direct root login detected

# System Rules
DISK_FULL|/var/log/syslog|No space left on device|HIGH|ALERT|Disk space critical
OOM_KILLER|/var/log/syslog|Out of memory: Kill process|HIGH|ALERT|Out of memory condition
KERNEL_PANIC|/var/log/syslog|Kernel panic|CRITICAL|ALERT|Kernel panic detected

# Application Rules
APACHE_ERROR|/var/log/apache2/error.log|Internal Server Error|MEDIUM|ALERT|Apache internal server error
MYSQL_ERROR|/var/log/mysql/error.log|ERROR|MEDIUM|ALERT|MySQL error detected
PHP_ERROR|/var/log/apache2/error.log|PHP Fatal error|HIGH|ALERT|PHP fatal error

# Network Rules
PORT_SCAN|/var/log/syslog|possible port scan|MEDIUM|BLOCK_IP|Port scan detected
DDOS_ATTEMPT|/var/log/apache2/access.log|excessive requests|HIGH|BLOCK_IP|Possible DDoS attempt
EOF
}

# Logging function
log_message() {
    local level="$1"
    local message="$2"
    echo "$(date '+%Y-%m-%d %H:%M:%S') [$level] $message" | tee -a "$ANALYSIS_LOG"
}

# Function to analyze a log file against the rules
analyze_log_file() {
    local log_file="$1"
    local since_time="$2"

    if [ ! -f "$log_file" ]; then
        return 0
    fi

    # Get recent log entries
    local temp_log="$TEMP_DIR/$(basename "$log_file").recent"

    # Extract recent entries based on timestamp
    if [ -n "$since_time" ]; then
        # For logs with standard syslog timestamps ("Mon DD HH:MM:SS")
        awk -v since="$since_time" '
        BEGIN {
            cmd = "date -d \"" since "\" +%s"
            cmd | getline since_epoch
            close(cmd)
        }
        {
            # Extract timestamp from log line
            match($0, /^[A-Za-z]{3} [0-9]{1,2} [0-9]{2}:[0-9]{2}:[0-9]{2}/)
            if (RSTART > 0) {
                timestamp = substr($0, RSTART, RLENGTH)
                cmd = "date -d \"" timestamp "\" +%s 2>/dev/null"
                if ((cmd | getline log_epoch) > 0) {
                    if (log_epoch >= since_epoch) print $0
                }
                close(cmd)
            } else {
                # If no timestamp found, include the line
                print $0
            }
        }' "$log_file" > "$temp_log"
    else
        # If no since time, analyze the last 1000 lines
        tail -1000 "$log_file" > "$temp_log"
    fi

    # Apply rules to the extracted entries
    while IFS='|' read -r rule_name rule_log_file pattern severity action description; do
        # Skip comments and empty lines
        [[ "$rule_name" =~ ^#.*$ ]] && continue
        [ -z "$rule_name" ] && continue

        # Check if this rule applies to the current log file
        if [ "$log_file" = "$rule_log_file" ]; then
            # Count pattern matches
            local matches=$(grep -c "$pattern" "$temp_log" 2>/dev/null)
            if [ "$matches" -gt 0 ]; then
                log_message "MATCH" "Rule '$rule_name' matched $matches times in $log_file"

                # Extract matching lines for context
                local match_lines=$(grep "$pattern" "$temp_log" | head -5)

                # Process the alert
                process_alert "$rule_name" "$severity" "$action" "$description" "$log_file" "$matches" "$match_lines"
            fi
        fi
    done < "$RULES_FILE"

    # Clean up temporary file
    rm -f "$temp_log"
}

# Function to process alerts
process_alert() {
    local rule_name="$1"
    local severity="$2"
    local action="$3"
    local description="$4"
    local log_file="$5"
    local match_count="$6"
    local match_lines="$7"

    # Check alert rate limiting
    local current_hour=$(date +%Y%m%d%H)
    local alert_count_file="$TEMP_DIR/alert_count_$current_hour"
    local current_alerts=0

    if [ -f "$alert_count_file" ]; then
        current_alerts=$(cat "$alert_count_file")
    fi

    if [ "$current_alerts" -ge "$MAX_ALERTS_PER_HOUR" ]; then
        log_message "WARNING" "Alert rate limit reached for hour $current_hour"
        return 0
    fi

    # Increment alert counter
    echo $((current_alerts + 1)) > "$alert_count_file"

    # Log the alert
    local alert_timestamp=$(date '+%Y-%m-%d %H:%M:%S')
    echo "$alert_timestamp|$rule_name|$severity|$description|$log_file|$match_count" >> "$ALERT_LOG"
    log_message "ALERT" "[$severity] $description (Rule: $rule_name, Matches: $match_count)"

    # Execute action based on rule
    case "$action" in
        "ALERT")
            send_alert_notification "$rule_name" "$severity" "$description" "$log_file" "$match_count" "$match_lines"
            ;;
        "BLOCK_IP")
            if [ "$ENABLE_AUTO_RESPONSE" = "true" ]; then
                block_suspicious_ips "$match_lines"
            fi
            send_alert_notification "$rule_name" "$severity" "$description" "$log_file" "$match_count" "$match_lines"
            ;;
        "RESTART_SERVICE")
            restart_service_action "$description" "$match_lines"
            ;;
    esac
}

# Function to send alert notifications
send_alert_notification() {
    local rule_name="$1"
    local severity="$2"
    local description="$3"
    local log_file="$4"
    local match_count="$5"
    local match_lines="$6"

    local subject="[$severity] Security Alert: $description"
    local body="Security Alert Details:

Rule: $rule_name
Severity: $severity
Description: $description
Log File: $log_file
Match Count: $match_count
Timestamp: $(date)
Host: $(hostname)

Sample Log Lines:
$match_lines

Please investigate immediately."

    # Send email alert
    echo "$body" | mail -s "$subject" "$ALERT_EMAIL"

    # Send to webhook if configured
    if [ -n "$WEBHOOK_URL" ]; then
        curl -X POST "$WEBHOOK_URL" \
            -H "Content-Type: application/json" \
            -d "{
                \"alert\": \"$description\",
                \"severity\": \"$severity\",
                \"host\": \"$(hostname)\",
                \"timestamp\": \"$(date -Iseconds)\",
                \"matches\": $match_count
            }" 2>/dev/null
    fi

    log_message "NOTIFICATION" "Alert sent for rule: $rule_name"
}

# Function to block suspicious IPs
block_suspicious_ips() {
    local log_lines="$1"

    # Extract IP addresses from log lines
    local ips=$(echo "$log_lines" | grep -oE '\b([0-9]{1,3}\.){3}[0-9]{1,3}\b' | sort -u)

    for ip in $ips; do
        # Check if IP is already blocked
        if ! iptables -L INPUT -n | grep -q "$ip"; then
            log_message "ACTION" "Blocking suspicious IP: $ip"
            iptables -I INPUT -s "$ip" -j DROP

            # Add to fail2ban if available
            if command -v fail2ban-client &> /dev/null; then
                fail2ban-client set sshd banip "$ip"
            fi
        fi
    done
}

# Function to restart a service based on the alert description
restart_service_action() {
    local description="$1"
    local match_lines="$2"

    # Determine service to restart from the description text
    local service=""
    case "$description" in
        *[Aa]pache*|*httpd*) service="apache2" ;;
        *[Mm]ysql*|*[Mm]ariadb*) service="mysql" ;;
        *[Nn]ginx*) service="nginx" ;;
    esac

    if [ -n "$service" ]; then
        log_message "ACTION" "Restarting service: $service"
        systemctl restart "$service"
        if [ $? -eq 0 ]; then
            log_message "SUCCESS" "Service $service restarted successfully"
        else
            log_message "ERROR" "Failed to restart service $service"
        fi
    fi
}

# Function to generate dashboard
generate_dashboard() {
    local dashboard_file="$DASHBOARD_DIR/index.html"

    # Get recent alerts and summary counts
    local recent_alerts=$(tail -20 "$ALERT_LOG" 2>/dev/null | tac)
    local alert_count=$(wc -l < "$ALERT_LOG" 2>/dev/null || echo "0")
    local critical_alerts=$(grep -c "CRITICAL" "$ALERT_LOG" 2>/dev/null || echo "0")
    local high_alerts=$(grep -c "HIGH" "$ALERT_LOG" 2>/dev/null || echo "0")

    cat > "$dashboard_file" << EOF
<html>
<head><title>Log Analysis Dashboard</title></head>
<body>
<h1>Log Analysis Dashboard</h1>
<p>Real-time security monitoring and alerting system</p>
<p>Last updated: $(date)</p>
<p>Total alerts: $alert_count | Critical: $critical_alerts | High: $high_alerts</p>
<h2>Recent Alerts</h2>
<table border="1">
<tr><th>Timestamp</th><th>Rule</th><th>Severity</th><th>Description</th><th>Matches</th></tr>
$(echo "$recent_alerts" | awk -F'|' 'NF {printf "<tr><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td></tr>\n", $1, $2, $3, $4, $6}')
</table>
</body>
</html>
EOF
}

# Function to cleanup old data
cleanup_old_data() {
    # Remove old temporary files
    find "$TEMP_DIR" -name "alert_count_*" -mtime +1 -delete

    # Rotate alert log if it grows beyond 10 MB
    if [ -f "$ALERT_LOG" ] && [ $(stat -c%s "$ALERT_LOG") -gt 10485760 ]; then
        mv "$ALERT_LOG" "${ALERT_LOG}.old"
        touch "$ALERT_LOG"
        log_message "INFO" "Alert log rotated"
    fi
}

# Main analysis loop
main_loop() {
    log_message "INFO" "Starting log analysis system"

    # Get timestamp for incremental analysis
    local last_check_file="$TEMP_DIR/last_check_timestamp"
    local since_time=""
    if [ -f "$last_check_file" ]; then
        since_time=$(cat "$last_check_file")
    fi

    # Update timestamp for next run
    date '+%Y-%m-%d %H:%M:%S' > "$last_check_file"

    # Analyze common log files
    local log_files=(
        "/var/log/auth.log"
        "/var/log/syslog"
        "/var/log/apache2/access.log"
        "/var/log/apache2/error.log"
        "/var/log/mysql/error.log"
        "/var/log/nginx/access.log"
        "/var/log/nginx/error.log"
    )

    for log_file in "${log_files[@]}"; do
        if [ -f "$log_file" ]; then
            analyze_log_file "$log_file" "$since_time"
        fi
    done

    # Generate dashboard if enabled
    if [ "$ENABLE_DASHBOARD" = "true" ]; then
        generate_dashboard
    fi

    # Cleanup old data
    cleanup_old_data

    log_message "INFO" "Analysis cycle completed"
}

# Initialize system
load_config
load_rules

# Command line options
case "$1" in
    --daemon)
        while true; do
            main_loop
            sleep "$CHECK_INTERVAL"
        done
        ;;
    --test-rules)
        echo "Testing analysis rules..."
        analyze_log_file "/var/log/auth.log"
        ;;
    --generate-dashboard)
        generate_dashboard
        echo "Dashboard generated at $DASHBOARD_DIR/index.html"
        ;;
    *)
        main_loop
        ;;
esac
```

This guide has provided detailed implementations for three advanced Linux projects covering system monitoring, backup automation, and log analysis. Each project includes practical examples, command explanations, and real-world applications that will significantly enhance your Linux administration skills.
The projects demonstrate intermediate to advanced concepts, including:

- Process and resource monitoring
- Automated backup strategies with encryption
- Real-time log analysis and alerting
- Shell scripting best practices
- System service management
- Security monitoring and incident response
These projects serve as building blocks for more complex system administration tasks and provide hands-on experience with tools and techniques used in professional Linux environments.