Bash scripts are the backbone of Linux system administration. From automated backups to deployment pipelines, from log rotation to system monitoring, Bash scripts handle the repetitive tasks that keep servers running. But there is an enormous quality gap between a quick hack that "works on my machine" and a production-ready script that handles errors gracefully, runs predictably, and can be maintained by your team.
Drawn from reviews of thousands of Bash scripts across enterprise environments, these are the ten best practices that separate professional, maintainable scripts from brittle, dangerous ones. Each practice includes concrete examples and an explanation of why it matters for production systems.
1. Always Start with Strict Mode
The single most important practice in Bash scripting is enabling strict mode at the top of every script. Without it, Bash silently ignores errors, continues after failures, and uses undefined variables as empty strings — all behaviors that cause catastrophic bugs in production.
Add these lines at the top of every script:
- `#!/usr/bin/env bash` — Use `env` to find bash (more portable than hardcoding `/bin/bash`)
- `set -euo pipefail` — The "strict mode" trifecta:
  - `-e`: Exit immediately if any command fails (non-zero exit code)
  - `-u`: Treat unset variables as errors (prevents typos from silently becoming empty strings)
  - `-o pipefail`: A pipeline fails if ANY command in it fails (not just the last one)
- `IFS=$'\n\t'` — Set the Internal Field Separator to newline and tab only (prevents word splitting on spaces in filenames)
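Put together, a minimal strict-mode header looks like the sketch below (the final `echo` is just a placeholder for real work):

```shell
#!/usr/bin/env bash
# Strict mode: fail fast, reject unset variables, surface pipeline errors.
set -euo pipefail

# Split only on newlines and tabs, not on spaces inside filenames.
IFS=$'\n\t'

echo "strict mode active"
```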
Why This Matters
Without `set -e`, a script that fails to mount a filesystem will happily continue to the next step — deleting files from what it thinks is the mounted directory but is actually the root filesystem. Without `set -u`, a typo like `rm -rf ${TEMP_DDIR}/` (note the extra D) expands to `rm -rf /` because the undefined variable becomes empty. These are real-world disasters that strict mode prevents.
2. Use Functions for Organization and Reusability
Scripts longer than 20-30 lines should be organized into functions. Functions make scripts readable, testable, and maintainable. Each function should do one thing well.
- Name functions descriptively: `validate_input()`, `create_backup()`, `send_notification()`
- Keep functions short — ideally under 25 lines
- Use `local` for variables inside functions to avoid polluting the global scope
- Create a `main()` function that orchestrates the script flow
- Call `main "$@"` at the bottom of the script to pass all command-line arguments
This structure makes it immediately clear what the script does when someone reads it for the first time. The main() function reads like a table of contents, and each function can be understood independently.
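A minimal sketch of this layout; the function names and the backup task itself are hypothetical placeholders:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Each function does one thing; main() reads like a table of contents.
validate_input() {
    local target_dir="$1"
    if [[ ! -d "${target_dir}" ]]; then
        echo "ERROR: not a directory: ${target_dir}" >&2
        return 1
    fi
}

create_backup() {
    local target_dir="$1"
    # Real work (tar, rsync, pg_dump, ...) would go here.
    echo "Backing up ${target_dir}"
}

main() {
    local target_dir="${1:-/tmp}"   # default only for this demonstration
    validate_input "${target_dir}"
    create_backup "${target_dir}"
}

main "$@"
```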
3. Implement Proper Error Handling
Even with strict mode, you need explicit error handling for graceful failures. Production scripts should never leave a system in an unknown state.
- Use `trap` to catch errors and clean up: `trap cleanup EXIT` runs the cleanup function whenever the script exits, whether normally or due to an error
- Also trap specific signals: `trap 'echo "Interrupted"; exit 1' INT TERM`
- Create a cleanup function that removes temporary files, releases locks, and restores system state
- Use meaningful exit codes: 0 for success, 1 for general errors, 2 for usage errors, specific codes for specific failures
- Log errors to stderr: `echo "ERROR: Database connection failed" >&2`
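A sketch of trap-based cleanup, using a throwaway work directory as the example resource:

```shell
#!/usr/bin/env bash
set -euo pipefail

WORK_DIR=$(mktemp -d)
readonly WORK_DIR

cleanup() {
    # Runs on normal exit, on error (via set -e), and after trapped signals.
    rm -rf "${WORK_DIR}"
}
trap cleanup EXIT
trap 'echo "Interrupted" >&2; exit 1' INT TERM

echo "working in ${WORK_DIR}"
```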
Lock Files for Preventing Concurrent Execution
If a script should not run concurrently (e.g., a backup script), use lock files:
- Create a lock file at script start: check if the lock exists, exit if it does, create it if it does not
- Remove the lock file in the cleanup trap — this ensures it is removed even if the script crashes
- Include the PID in the lock file so you can detect stale locks from crashed previous runs
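A minimal sketch of the PID-based locking described above. The lock path is illustrative, and note that this check-then-create sequence has a small race window; `flock(1)` is the more robust tool where available:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Illustrative lock location; a real script would pick a fixed, documented path.
readonly LOCK_FILE="${TMPDIR:-/tmp}/mybackup.lock"

acquire_lock() {
    if [[ -e "${LOCK_FILE}" ]]; then
        local old_pid
        old_pid=$(<"${LOCK_FILE}")
        # Stale-lock detection: is the recorded PID still alive?
        if kill -0 "${old_pid}" 2>/dev/null; then
            echo "Already running as PID ${old_pid}" >&2
            exit 1
        fi
        echo "Removing stale lock left by PID ${old_pid}" >&2
    fi
    echo "$$" > "${LOCK_FILE}"     # record our PID in the lock file
}

release_lock() {
    rm -f "${LOCK_FILE}"
}

trap release_lock EXIT             # lock is released even on a crash
acquire_lock
```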
4. Quote All Variables — No Exceptions
Unquoted variables are the #1 source of bugs in Bash scripts. Always wrap variables in double quotes: `"${variable}"` instead of `$variable`.
What Happens Without Quotes
- Filenames with spaces break: `rm $FILE` where `FILE="my important file.txt"` runs `rm my important file.txt` — attempting to delete three separate files named "my", "important", and "file.txt" instead of the intended one
- Empty variables cause syntax errors: `if [ $VAR = "test" ]` becomes `if [ = "test" ]` when VAR is empty
- Glob patterns expand unexpectedly: `echo $VAR` where `VAR="*"` lists all files in the current directory
The Rules
- Always: `"${variable}"` (braces + quotes)
- Always: `"$@"` to pass arguments (preserves argument boundaries)
- Use `"$(command)"` for command substitution (not backticks)
- Use `[[ ]]` for tests instead of `[ ]` — double brackets handle unquoted variables more safely and support regex matching
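The word-splitting behavior is easy to demonstrate with a small helper that just counts its arguments (`count_args` is a hypothetical name):

```shell
#!/usr/bin/env bash
set -euo pipefail

count_args() { echo "$#"; }

# Throwaway demo: a path that contains spaces.
demo_dir=$(mktemp -d)
touch "${demo_dir}/my important file.txt"
file="${demo_dir}/my important file.txt"

count_args ${file}      # unquoted: word-splits into 3 arguments, prints 3
count_args "${file}"    # quoted: stays one argument, prints 1

rm -rf "${demo_dir}"
```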
5. Validate All Inputs
Never trust input — whether from command-line arguments, files, environment variables, or user input. Validate everything before using it.
- Check that required arguments are provided: display usage information if they are not
- Validate that file paths exist and are accessible: `if [[ ! -f "${config_file}" ]]; then echo "Config not found: ${config_file}" >&2; exit 1; fi`
- Verify required tools are installed: `command -v jq >/dev/null 2>&1 || { echo "jq is required but not installed" >&2; exit 1; }`
- Sanitize inputs that will be used in commands: never directly interpolate user input into commands that could be injection vectors
- Set defaults for optional parameters: `${VAR:-default_value}`
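These checks can be gathered into a hypothetical `validate` helper; the config-file argument and the `grep` dependency are illustrative, not part of any real script:

```shell
#!/usr/bin/env bash
set -euo pipefail

usage() {
    echo "Usage: ${0##*/} <config_file>" >&2
    exit 2                          # 2 = usage error, per the exit-code convention
}

validate() {
    local config_file="${1:-}"

    [[ -n "${config_file}" ]] || usage              # required argument present?

    if [[ ! -r "${config_file}" ]]; then            # file exists and is readable?
        echo "ERROR: config not found or unreadable: ${config_file}" >&2
        return 1
    fi

    # Required tool installed?
    command -v grep >/dev/null 2>&1 \
        || { echo "ERROR: grep is required but not installed" >&2; return 1; }
}

# Default for an optional parameter -- safe even under set -u:
retries="${MAX_RETRIES:-3}"
```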
6. Use Meaningful Variable Names and Constants
Readable code is maintainable code. Use descriptive variable names and declare constants for values that should not change.
- Use `UPPER_SNAKE_CASE` for constants and environment variables: `readonly MAX_RETRIES=3`, `readonly BACKUP_DIR="/var/backups"`
- Use `lower_snake_case` for local variables: `local current_date`, `local file_count`
- Avoid single-letter variables except in short loops: `for i in {1..10}` is acceptable
- Use `readonly` for variables that should never be modified after assignment
- Group related constants at the top of the script for easy configuration
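A short sketch of these conventions; the constants and the `count_files` helper are illustrative:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Constants grouped up top and marked readonly; paths are placeholders.
readonly MAX_RETRIES=3
readonly BACKUP_DIR="/var/backups"

count_files() {
    local dir="$1"
    local file_count               # local: invisible outside this function
    file_count=$(find "${dir}" -maxdepth 1 -type f | wc -l)
    echo "${file_count}"
}
```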
7. Implement Proper Logging
Production scripts need proper logging — not just echo statements. Good logging helps you debug problems, audit operations, and monitor script health.
- Create logging functions: `log_info()`, `log_warn()`, `log_error()`
- Include timestamps in every log message: `$(date +'%Y-%m-%d %H:%M:%S')`
- Log to both a file and stderr: use `tee` or duplicate output
- Include the script name and PID in log messages for multi-script environments
- Use log levels to control verbosity: a `--verbose` flag that enables debug-level logging
- Rotate log files to prevent disk space issues: check file size and rotate when needed
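One way to sketch such logging helpers. The log path, message format, and `VERBOSE` flag handling here are assumptions, not a standard:

```shell
#!/usr/bin/env bash
set -euo pipefail

readonly LOG_FILE="${TMPDIR:-/tmp}/myscript.log"   # illustrative location
readonly SCRIPT_NAME="${0##*/}"
VERBOSE="${VERBOSE:-0}"

_log() {
    local level="$1"; shift
    # Timestamp + script name + PID + level, to both the log file and stderr.
    printf '%s [%s:%s] %s: %s\n' \
        "$(date +'%Y-%m-%d %H:%M:%S')" "${SCRIPT_NAME}" "$$" "${level}" "$*" \
        | tee -a "${LOG_FILE}" >&2
}

log_info()  { _log INFO  "$@"; }
log_warn()  { _log WARN  "$@"; }
log_error() { _log ERROR "$@"; }
log_debug() { [[ "${VERBOSE}" = "1" ]] && _log DEBUG "$@" || true; }
```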
8. Handle Temporary Files Safely
Temporary files are a common source of security vulnerabilities and cleanup failures.
- Use `mktemp` to create temporary files with secure, unique names: `local tmp_file; tmp_file=$(mktemp)`
- Use `mktemp -d` for temporary directories
- Always clean up temp files in a trap handler — never rely on manual cleanup at the end of the script
- Never use predictable temp file names like `/tmp/mybackup.tmp` — this enables symlink attacks where an attacker creates a symlink at that path pointing to a critical system file
- Set appropriate permissions on temp files immediately: `chmod 600 "${tmp_file}"`
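A minimal sketch combining `mktemp`, an immediate `chmod`, and trap-based cleanup:

```shell
#!/usr/bin/env bash
set -euo pipefail

tmp_file=""                     # defined up front so the trap can always run
cleanup() {
    [[ -n "${tmp_file}" ]] && rm -f "${tmp_file}" || true
}
trap cleanup EXIT

tmp_file=$(mktemp)              # secure, unique name under $TMPDIR
chmod 600 "${tmp_file}"         # owner-only access, set immediately

echo "scratch data" > "${tmp_file}"
```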
9. Write Idempotent Scripts
An idempotent script produces the same result whether you run it once or ten times. This is critical for automation — cron jobs, CI/CD pipelines, and Ansible playbooks all rely on idempotent operations.
- Check before creating: `mkdir -p` instead of `mkdir` (does not fail if the directory exists)
- Check before modifying: verify the current state before making changes
- Use `--no-clobber` or `-n` flags where available to prevent overwriting
- When inserting into databases, use UPSERT or check for existing records
- When adding lines to config files, check if the line already exists first
- Design every operation to be safely repeatable
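A sketch of two idempotent operations, creating a directory and ensuring a config line exists (`ensure_line` and the `EDITOR` line are hypothetical examples):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Append a line to a file only if that exact line is not already present.
ensure_line() {
    local line="$1" file="$2"
    grep -qxF "${line}" "${file}" 2>/dev/null || echo "${line}" >> "${file}"
}

# Illustrative setup task: safe to run any number of times.
setup() {
    local dir="$1"
    mkdir -p "${dir}"                               # no error if it exists
    ensure_line "export EDITOR=vim" "${dir}/profile"
}
```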
10. Document and Test Your Scripts
Documentation and testing are what separate scripts from software. A script you write today will be maintained by someone else (possibly future-you with no memory of writing it) in six months.
Documentation
- Include a header comment block: script purpose, usage examples, required environment variables, dependencies
- Add a `usage()` function that displays help when the script is called with `--help`, `-h`, or incorrect arguments
- Comment on WHY, not WHAT — the code shows what it does; comments should explain why it does it that way
- Document any non-obvious behavior, workarounds, or known limitations
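A hypothetical header and `usage()` function illustrating these conventions; the script name, options, environment variable, and dependencies are made up:

```shell
#!/usr/bin/env bash
#
# backup_logs.sh - archive a directory of logs (illustrative example).
#
# Usage:   backup_logs.sh [-h] <source_dir>
# Env:     BACKUP_DEST   destination directory (default: /var/backups)
# Depends: tar, gzip
#
set -euo pipefail

usage() {
    cat <<'EOF'
Usage: backup_logs.sh [-h] <source_dir>
Archive the given directory to $BACKUP_DEST.
EOF
}

case "${1:-}" in
    -h|--help) usage; exit 0 ;;
esac
```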
Testing
- Use `shellcheck` — the essential static analysis tool for Bash: `shellcheck myscript.sh`. It catches common bugs, quoting issues, and portability problems automatically
- Test with different inputs, empty inputs, and malformed inputs
- Test error paths — what happens when a file is missing, a server is down, or the disk is full
- Run scripts with `bash -x script.sh` to see each command as it executes (debug trace)
Quick Reference: Bash Best Practices Checklist
| # | Practice | One-Line Summary |
|---|---|---|
| 1 | Strict Mode | `set -euo pipefail` at the top of every script |
| 2 | Functions | Organize code into small, single-purpose functions |
| 3 | Error Handling | Use `trap` for cleanup; meaningful exit codes |
| 4 | Quote Variables | Always `"${var}"` — no unquoted variables ever |
| 5 | Validate Inputs | Check arguments, files, and tools before using them |
| 6 | Naming | `UPPER_CASE` constants, `lower_case` locals, descriptive names |
| 7 | Logging | Timestamped log functions with levels |
| 8 | Temp Files | `mktemp` + trap cleanup; never predictable names |
| 9 | Idempotency | Scripts should be safely repeatable |
| 10 | Test & Document | ShellCheck, `usage()`, header comments |
Frequently Asked Questions
Should I use Bash or Python for scripting?
Use Bash for system-level tasks under 100 lines: file operations, service management, command orchestration. Use Python for anything involving complex data processing, API calls, JSON/YAML parsing, or scripts over 100-200 lines. The threshold is not exact — use whichever makes the task simpler and more maintainable.
What is ShellCheck and should I use it?
ShellCheck is a static analysis tool that finds bugs and style issues in Bash scripts automatically. Yes, you should use it for every script. Install with `apt install shellcheck` or `dnf install ShellCheck`, and integrate it into your CI pipeline. It catches quoting issues, unused variables, and common pitfalls that cause production failures.
How do I debug a Bash script?
Run the script with `bash -x script.sh` to see each command printed before execution (trace mode). For more targeted debugging, add `set -x` before the section you want to trace and `set +x` after. Also check exit codes with `echo $?` after individual commands.
What is the difference between $@ and $*?
`"$@"` preserves argument boundaries (each argument stays separate). `"$*"` joins all arguments into a single string. Always use `"$@"` when passing arguments to other commands or functions — it handles arguments with spaces correctly.
How do I make a Bash script into a cron job?
Edit your crontab with `crontab -e` and add a schedule line. Important: always use full paths in cron scripts (cron has a minimal `PATH`), redirect output to a log file for debugging, and use lock files to prevent overlapping runs.
Related Resources
- BASH Fundamentals — Complete Bash scripting eBook
- Bash vs PowerShell: Cross-Platform Scripting — Compare scripting across platforms
- Linux Command Line Mastery — Advanced CLI techniques
- Browse all 205+ free IT cheat sheets