Nginx Configuration Mastery: From Installation to Production-Ready Server

Nginx (pronounced "engine-x") serves over 34% of all websites worldwide, making it the most popular web server on the internet. It excels at handling high-concurrency workloads, serving static files efficiently, and acting as a reverse proxy for application servers.

This guide takes you from a fresh installation to a production-ready Nginx configuration, covering every essential topic with practical examples you can apply immediately.

Installing Nginx

# Ubuntu/Debian
sudo apt update
sudo apt install nginx

# RHEL/AlmaLinux
sudo dnf install nginx

# Start and enable
sudo systemctl start nginx
sudo systemctl enable nginx

# Verify it is running
sudo systemctl status nginx
curl -I http://localhost

After installation, open http://your-server-ip in a browser. You should see the Nginx welcome page.

Understanding Nginx File Structure

/etc/nginx/
├── nginx.conf              # Main configuration file
├── conf.d/                 # Additional configuration files
│   └── default.conf        # Default server block
├── sites-available/        # Available site configurations (Debian)
├── sites-enabled/          # Active site symlinks (Debian)
├── mime.types              # MIME type mappings
└── modules-enabled/        # Loaded modules

On Debian/Ubuntu, site configurations go in sites-available/ and are activated by creating symlinks in sites-enabled/. On RHEL/AlmaLinux, use the conf.d/ directory.
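
For example, on AlmaLinux there is no separate "enable" step; a quick sketch, assuming a site file named example.com.conf (the filename is arbitrary):

# RHEL/AlmaLinux: any *.conf file in conf.d/ is loaded automatically
sudo vim /etc/nginx/conf.d/example.com.conf

# Validate and apply
sudo nginx -t && sudo systemctl reload nginx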

Server Blocks (Virtual Hosts)

Server blocks allow you to host multiple websites on a single server. Each block defines how Nginx handles requests for a specific domain.

# /etc/nginx/sites-available/example.com
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;

    root /var/www/example.com/public;
    index index.html index.php;

    # Logging
    access_log /var/log/nginx/example.com.access.log;
    error_log /var/log/nginx/example.com.error.log;

    # Main location
    location / {
        try_files $uri $uri/ =404;
    }

    # Static file caching
    location ~* \.(jpg|jpeg|png|webp|gif|ico|css|js|woff2)$ {
        expires 30d;
        add_header Cache-Control "public, immutable";
    }

    # Deny access to hidden files
    location ~ /\. {
        deny all;
    }
}
# Enable the site
sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/

# Test configuration syntax
sudo nginx -t

# Reload Nginx (graceful - no downtime)
sudo systemctl reload nginx

SSL/TLS with Let's Encrypt

Every production website needs HTTPS. Let's Encrypt provides free SSL certificates, and Certbot automates the entire process.

# Install Certbot
sudo apt install certbot python3-certbot-nginx

# Obtain and install certificate (automatic Nginx configuration)
sudo certbot --nginx -d example.com -d www.example.com

# Certbot automatically:
# 1. Obtains the certificate
# 2. Modifies your Nginx config to use it
# 3. Sets up HTTP → HTTPS redirect
# 4. Configures auto-renewal

# Test auto-renewal
sudo certbot renew --dry-run

# Certificates auto-renew via systemd timer
sudo systemctl status certbot.timer

After Certbot runs, your server block is automatically updated with SSL configuration. The result looks like this:

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com www.example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Modern SSL configuration
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
    ssl_prefer_server_ciphers off;

    # HSTS (1 year)
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    root /var/www/example.com/public;
    # ... rest of configuration
}

# Redirect HTTP to HTTPS
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}
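
Once the certificate is installed, a quick sanity check from any machine (using example.com as a stand-in for your domain) confirms the redirect, the certificate, and the HSTS header:

# Expect a 301 pointing at https://
curl -I http://example.com

# Expect a 200 response that includes the Strict-Transport-Security header
curl -I https://example.com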

Reverse Proxy Configuration

One of Nginx's most powerful features is acting as a reverse proxy — sitting in front of application servers (Node.js, Python, PHP-FPM, Java) and handling client connections, SSL, caching, and load balancing.

# Reverse proxy for a Node.js application on port 3000
server {
    listen 443 ssl http2;
    server_name app.example.com;

    ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }
}

PHP-FPM Configuration

Nginx does not run PHP itself; it hands .php requests to a PHP-FPM pool over FastCGI. Add this location inside your server block (the socket path depends on your installed PHP version):

# PHP application with PHP-FPM
location ~ \.php$ {
    try_files $uri =404;    # Never pass a nonexistent path to PHP-FPM
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/php8.3-fpm.sock;
    fastcgi_buffering on;
    fastcgi_buffer_size 16k;
    fastcgi_buffers 16 16k;
}

Load Balancing

Nginx can distribute traffic across multiple backend servers for high availability and performance:

# Define upstream server group
upstream app_servers {
    # Round-robin (default)
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
    server 10.0.0.3:3000;

    # Or use least connections
    # least_conn;

    # Or use IP hash (sticky sessions)
    # ip_hash;

    # Backup and maintenance flags
    server 10.0.0.4:3000 backup;    # Only used if the others fail
    server 10.0.0.5:3000 down;      # Temporarily removed from rotation
}

server {
    listen 443 ssl http2;
    server_name app.example.com;

    location / {
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
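
Open-source Nginx relies on passive failure detection rather than active health probes. A minimal sketch of an upstream using per-server weights and failure thresholds (values are illustrative):

# Weighted round-robin with passive failure detection
upstream app_servers_weighted {
    server 10.0.0.1:3000 weight=3 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:3000 weight=1 max_fails=3 fail_timeout=30s;
}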

Caching for Performance

# Define a shared proxy cache (this directive goes in the http context)
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m
    max_size=1g inactive=60m use_temp_path=off;

server {
    location / {
        proxy_pass http://app_servers;
        proxy_cache my_cache;
        proxy_cache_valid 200 10m;
        proxy_cache_valid 404 1m;
        proxy_cache_use_stale error timeout updating http_500 http_502;
        add_header X-Cache-Status $upstream_cache_status;
    }

    # Static files - serve directly, bypass proxy
    location /static/ {
        alias /var/www/app/static/;
        expires 30d;
        add_header Cache-Control "public, immutable";
        access_log off;
    }
}

# Gzip compression
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_types text/plain text/css application/json application/javascript
    text/xml application/xml text/javascript image/svg+xml;
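
To confirm the cache and compression are working, inspect the response headers (example.com stands in for your domain; the first request should report a cache MISS, a repeat should report HIT):

# Proxy cache status
curl -s -o /dev/null -D - https://example.com/ | grep -i x-cache-status

# Gzip is only applied when the client advertises support for it
curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' https://example.com/ | grep -i content-encoding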

Security Hardening

# Security headers
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;

# Hide Nginx version
server_tokens off;

# Limit request body size
client_max_body_size 10m;

# Rate limiting
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=login:10m rate=3r/m;

server {
    location /api/ {
        limit_req zone=api burst=20 nodelay;
        proxy_pass http://app_servers;
    }

    location /login {
        limit_req zone=login burst=5 nodelay;
        proxy_pass http://app_servers;
    }

    # Block common attack patterns
    location ~* (wp-admin|wp-login|xmlrpc|phpmyadmin) {
        return 444;
    }
}
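
After reloading, you can check that the headers and rate limits actually take effect (example.com is a placeholder):

# Security headers should appear on every response
curl -s -o /dev/null -D - https://example.com/ | grep -iE 'x-frame-options|x-content-type-options|strict-transport-security'

# Rough rate-limit smoke test: once the burst is exhausted, expect 503 responses
for i in $(seq 1 40); do curl -s -o /dev/null -w '%{http_code}\n' https://example.com/api/; done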

Performance Optimization Checklist

Optimization        | Configuration                    | Impact
Worker processes    | worker_processes auto;           | Match CPU cores
Worker connections  | worker_connections 2048;         | Handle more concurrent users
Gzip compression    | gzip on;                         | 60-80% smaller responses
HTTP/2              | listen 443 ssl http2;            | Multiplexed connections
Static file caching | expires 30d;                     | Reduce server load
Keep-alive          | keepalive_timeout 65;            | Reuse connections
Sendfile            | sendfile on;                     | Kernel-level file serving
TCP optimization    | tcp_nopush on; tcp_nodelay on;   | Reduce latency
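
Most of these directives live in the top-level and http contexts. A rough sketch of where they go (values are illustrative, not tuned for any specific workload):

# /etc/nginx/nginx.conf (excerpt)
worker_processes auto;

events {
    worker_connections 2048;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;

    gzip on;
    gzip_min_length 1024;

    # server blocks and include directives follow
}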

Useful Nginx Commands

# Test configuration syntax
sudo nginx -t

# Reload (graceful - no downtime)
sudo systemctl reload nginx

# Restart (brief downtime)
sudo systemctl restart nginx

# View error log in real time
sudo tail -f /var/log/nginx/error.log

# View access log with live updates
sudo tail -f /var/log/nginx/access.log

# Check which processes are running
ps aux | grep nginx

# Show compiled modules
nginx -V 2>&1 | tr ' ' '\n' | grep module

Conclusion

Nginx is an incredibly versatile tool that serves as a web server, reverse proxy, load balancer, and caching layer. By mastering its configuration, you gain the ability to deploy fast, secure, and scalable web applications on Linux. Start with a simple server block, add SSL, and progressively implement caching and security hardening as your traffic grows.
