Log files are the black box of your Linux servers. When something goes wrong — a service crashes, an unauthorized login attempt occurs, or performance degrades — the answer is almost always in the logs. But with thousands of lines generated every hour, manually reading through log files is impractical.
dargslan-log-parser automates log analysis by parsing common Linux log formats, filtering by severity, extracting statistics, and highlighting security events — all from the command line or Python scripts.
## Installing dargslan-log-parser

```shell
pip install dargslan-log-parser

# Or install the complete 15-tool toolkit
pip install dargslan-toolkit
```
## CLI Usage

```shell
# Parse syslog with full analysis
dargslan-logparse /var/log/syslog

# Filter by severity level
dargslan-logparse /var/log/syslog --level error
dargslan-logparse /var/log/syslog --level warning

# Parse the auth log for security events
dargslan-logparse /var/log/auth.log

# Parse an nginx access log
dargslan-logparse /var/log/nginx/access.log

# JSON output
dargslan-logparse /var/log/syslog --json

# Last N lines only
dargslan-logparse /var/log/syslog -n 1000
```
## Python API

```python
from dargslan_log_parser import LogParser

lp = LogParser()

# Parse any log file
entries = lp.parse("/var/log/syslog")
print(f"Total entries: {len(entries)}")

# Filter by level
errors = lp.parse("/var/log/syslog", level="error")
warnings = lp.parse("/var/log/syslog", level="warning")
print(f"Errors: {len(errors)}, Warnings: {len(warnings)}")

# Statistics: message frequency, top sources
stats = lp.statistics("/var/log/syslog")
print(f"Total lines: {stats['total_lines']}")
print(f"Error count: {stats['error_count']}")
for source, count in stats['top_sources'][:10]:
    print(f"  {source}: {count}")

# Auth log: failed login detection
failed = lp.failed_logins("/var/log/auth.log")
for attempt in failed:
    print(f"  Failed: {attempt['user']}@{attempt['ip']} at {attempt['timestamp']}")

# Nginx: top IPs, status codes, URLs
top_ips = lp.top_ips("/var/log/nginx/access.log", limit=20)
for ip, count in top_ips:
    print(f"  {ip}: {count} requests")

status_codes = lp.status_code_summary("/var/log/nginx/access.log")
for code, count in status_codes:
    print(f"  HTTP {code}: {count}")
```
## Understanding Linux Log Formats

### Syslog Format (RFC 3164)

```
# Format: TIMESTAMP HOSTNAME PROCESS[PID]: MESSAGE
Apr 12 10:15:23 web01 sshd[1234]: Accepted publickey for admin from 10.0.0.5
Apr 12 10:15:24 web01 kernel: [12345.678] TCP: request_sock_TCP: Possible SYN flooding
```
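If you want to see how fields like these can be pulled apart, a single regular expression with named groups is enough. This is a minimal, generic sketch of RFC 3164 parsing, not the parser dargslan-log-parser uses internally:

```python
import re

# RFC 3164 style: "Apr 12 10:15:23 web01 sshd[1234]: message..."
SYSLOG_RE = re.compile(
    r"^(?P<timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "  # Apr 12 10:15:23
    r"(?P<host>\S+) "                                       # web01
    r"(?P<process>[^\[\s:]+)(?:\[(?P<pid>\d+)\])?: "        # sshd[1234]: (PID optional)
    r"(?P<message>.*)$"
)

line = "Apr 12 10:15:23 web01 sshd[1234]: Accepted publickey for admin from 10.0.0.5"
m = SYSLOG_RE.match(line)
if m:
    print(m.group("host"), m.group("process"), m.group("pid"))  # web01 sshd 1234
```

The PID group is optional because some sources (e.g. `kernel:`) log without one.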
### Auth Log

```
# /var/log/auth.log (Debian/Ubuntu) or /var/log/secure (RHEL/CentOS)
Apr 12 10:20:05 web01 sshd[5678]: Failed password for root from 203.0.113.5 port 45678 ssh2
Apr 12 10:20:05 web01 sshd[5678]: pam_unix(sshd:auth): authentication failure; logname= uid=0
```
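Failed-login lines like the first one follow a stable sshd phrasing, so the user and source IP can be extracted with a pattern along these lines. This is an illustrative sketch of the kind of matching a `failed_logins()` helper performs, not the library's actual implementation:

```python
import re

# sshd logs "Failed password for USER from IP port PORT",
# or "Failed password for invalid user USER ..." for unknown accounts.
FAILED_RE = re.compile(
    r"Failed password for (?:invalid user )?(?P<user>\S+) "
    r"from (?P<ip>[\d.]+) port (?P<port>\d+)"
)

line = ("Apr 12 10:20:05 web01 sshd[5678]: "
        "Failed password for root from 203.0.113.5 port 45678 ssh2")
m = FAILED_RE.search(line)
if m:
    print(f"{m['user']}@{m['ip']} (port {m['port']})")  # root@203.0.113.5 (port 45678)
```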
### Nginx Access Log

```
# Combined format (default)
203.0.113.5 - - [12/Apr/2026:10:25:00 +0000] "GET /api/users HTTP/1.1" 200 1234 "https://dargslan.com" "Mozilla/5.0..."
```
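The combined format is regular enough to split with one expression. A minimal sketch (assuming the default combined format shown above; quoted request paths containing spaces would need a more careful pattern):

```python
import re

# Combined format: IP ident user [time] "METHOD PATH PROTO" status bytes "referer" "agent"
COMBINED_RE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]+" '
    r'(?P<status>\d{3}) (?P<bytes>\d+|-)'
)

line = ('203.0.113.5 - - [12/Apr/2026:10:25:00 +0000] '
        '"GET /api/users HTTP/1.1" 200 1234 "https://dargslan.com" "Mozilla/5.0"')
m = COMBINED_RE.match(line)
if m:
    print(m["method"], m["path"], m["status"])  # GET /api/users 200
```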
## Essential Log Commands

```shell
# Follow logs in real time
tail -f /var/log/syslog
tail -f /var/log/nginx/access.log

# Search for errors
grep -i "error\|fail\|critical" /var/log/syslog

# Search compressed rotated logs
zgrep "error" /var/log/syslog.*.gz

# Count occurrences by service
awk '{print $5}' /var/log/syslog | cut -d: -f1 | sort | uniq -c | sort -rn | head -20

# Find top client IPs in nginx
awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -20

# Status code distribution
awk '{print $9}' /var/log/nginx/access.log | sort | uniq -c | sort -rn

# Time-range filtering with journalctl
journalctl --since "2026-04-12 00:00" --until "2026-04-12 12:00"
journalctl -p err --since today
```
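The `sort | uniq -c | sort -rn` pipelines above are frequency counts, which `collections.Counter` reproduces in a few lines of Python. A generic sketch (the function name `count_top_ips` is ours, not part of the library; it assumes the client IP is the first space-separated field, as in the combined format):

```python
from collections import Counter

def count_top_ips(path, limit=20):
    """Count requests per client IP: the first field of each combined-format line."""
    with open(path) as f:
        counts = Counter(line.split(" ", 1)[0] for line in f if line.strip())
    return counts.most_common(limit)

# for ip, count in count_top_ips("/var/log/nginx/access.log"):
#     print(f"{ip}: {count} requests")
```

`most_common()` handles the descending sort that the shell pipeline does with `sort -rn`.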
## Log File Locations Reference
| Log File | Contents |
|---|---|
| /var/log/syslog | General system log (Debian/Ubuntu) |
| /var/log/messages | General system log (RHEL/CentOS) |
| /var/log/auth.log | Authentication events |
| /var/log/kern.log | Kernel messages |
| /var/log/dmesg | Boot and hardware messages |
| /var/log/nginx/ | Nginx access and error logs |
| /var/log/apache2/ | Apache access and error logs |
| /var/log/mysql/ | MySQL/MariaDB logs |
| /var/log/postgresql/ | PostgreSQL logs |
| /var/log/cron | Cron job execution logs |
## Automated Log Analysis Script

```python
#!/usr/bin/env python3
# /opt/scripts/log-analysis.py
from dargslan_log_parser import LogParser

lp = LogParser()

# Check for errors in syslog
errors = lp.parse("/var/log/syslog", level="error")
if len(errors) > 50:
    print(f"ALERT: {len(errors)} errors in syslog!")
    for e in errors[-10:]:
        print(f"  {e['timestamp']}: {e['message'][:100]}")

# Check for failed logins
failed = lp.failed_logins("/var/log/auth.log")
if len(failed) > 20:
    print(f"SECURITY: {len(failed)} failed login attempts!")
    # Group by IP
    ip_counts = {}
    for f in failed:
        ip_counts[f['ip']] = ip_counts.get(f['ip'], 0) + 1
    for ip, count in sorted(ip_counts.items(), key=lambda x: -x[1])[:5]:
        print(f"  {ip}: {count} attempts")
```
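The manual dict-and-sort grouping in that script can also be written with `collections.Counter`, which is the idiomatic form. A self-contained sketch with sample data standing in for the parser's output:

```python
from collections import Counter

# Sample of what failed_logins() returns: one dict per failed attempt
failed = [
    {"ip": "203.0.113.5"},
    {"ip": "203.0.113.5"},
    {"ip": "198.51.100.7"},
]

# Counter replaces the manual dict; most_common replaces sorted(...)[:5]
ip_counts = Counter(f["ip"] for f in failed)
for ip, count in ip_counts.most_common(5):
    print(f"  {ip}: {count} attempts")
```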
📋 Master Linux Log Management
Our Linux administration eBooks cover centralized logging, ELK stack, log analysis, journald configuration, and log-based alerting for production servers.
Browse Linux Books →

Log analysis is how you understand what's happening on your servers. dargslan-log-parser makes it practical to analyze syslog, auth.log, and web server logs without complex pipelines or external tools.
Install now: pip install dargslan-log-parser — or get all 15 tools: pip install dargslan-toolkit
Download our free Linux Log Parser Cheat Sheet for quick reference.