Quick summary: The ss command (socket statistics, from the iproute2 package) is the modern replacement for netstat: faster, more featureful, and reading directly from kernel sockets via netlink rather than scraping /proc. Every Linux sysadmin should know about ten patterns by heart: list listening sockets with their owning processes, filter by state, inspect TCP send/receive queues, watch in real time, and dump per-socket statistics. This guide walks through each one with the actual command, the expected output, and the production scenario where you reach for it.
Why ss and not netstat?
netstat reads /proc/net/* on every invocation. On a modest server with a few thousand sockets this is fine; on a busy load balancer with hundreds of thousands of connections, netstat takes seconds to return and adds kernel-side load of its own. ss talks to the kernel via netlink (the SOCK_DIAG interface, historically NETLINK_INET_DIAG), which is dramatically faster and exposes data netstat cannot reach (TCP internals, congestion-control state, retransmit counts).
Practically every modern distribution ships ss in the iproute2 package; netstat (in net-tools) is now considered legacy and is sometimes not installed by default. Train yourself on ss; teach junior engineers ss; remove the netstat-shaped reflexes from your debugging muscle memory.
1. List Listening Sockets (the most common command)
ss -tlnp
Flags broken down:
- -t: TCP sockets only
- -l: listening sockets only
- -n: numeric output (do not resolve hostnames or service names)
- -p: show the owning process
Example output:
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=1247,fd=3))
LISTEN 0 4096 0.0.0.0:80 0.0.0.0:* users:(("nginx",pid=2103,fd=6))
LISTEN 0 4096 *:443 *:* users:(("nginx",pid=2103,fd=8))
LISTEN 0 128 127.0.0.1:5432 0.0.0.0:* users:(("postgres",pid=987,fd=7))
This is the answer to "what is listening on port X" and "what process owns this port." Add -u if you also want UDP (ss -tulnp), which is the universal "show me everything that is listening" pattern.
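If you need that answer in a script rather than by eye, the listing can back a small helper. A minimal sketch, assuming a POSIX shell with ss on the PATH; port_in_use is a made-up name here, not part of iproute2:

```shell
# port_in_use PORT: exit 0 if something is listening on the given TCP port.
# (Hypothetical helper built on ss; the name is an assumption, not an ss feature.)
port_in_use() {
  ss -tln "sport = :$1" | awk 'NR>1 {found=1} END {exit !found}'
}

# Example: refuse to start if the port is taken.
# port_in_use 8080 && { echo "port 8080 already in use" >&2; exit 1; }
```

The awk step works because ss always prints a header line; any line after it means the filter matched something.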
2. Find What Owns a Specific Port
ss -tlnp sport = :443
# or, equivalently
ss -tlnp '( sport = :443 )'
The filter syntax in ss is its most powerful feature, and the most underused. sport = source port (i.e., the local listening port). dport = destination port. Combine with and / or:
ss -tn '( dport = :443 or dport = :80 )'
This shows all outbound HTTP/HTTPS connections from the current host, useful when debugging "is this server actually talking to the API endpoint we think it is?"
3. Show Established Connections With Process Info
ss -tnp state established
Lists all currently-active TCP connections with the process that owns each one. Critical when you are trying to identify what is hammering an external service (the answer might be "you have 200 idle connections to RDS that nobody is closing").
Common sub-pattern: count established connections by remote address:
ss -tn state established | awk 'NR>1 {print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head
This single line answers "which remote endpoints are we talking to most." Save it as a shell function; you will use it.
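As a shell function it might look like the sketch below. The names count_peers and top_peers are assumptions; the parsing half is factored out so it can be exercised on canned output without a live socket table:

```shell
# count_peers: the parsing half of the pipeline; reads `ss -tn` output on
# stdin and prints "count remote-ip" pairs, busiest first.
count_peers() {
  awk 'NR>1 {print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn
}

# top_peers [N]: run against live data (top 10 by default; assumes ss installed).
top_peers() {
  ss -tn state established | count_peers | head -n "${1:-10}"
}
```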
4. Filter by Connection State
ss -tn state time-wait | wc -l
Counts sockets stuck in TIME-WAIT. If this number is in the tens of thousands and growing, you have either a connection-pool leak or a tuning problem. The fix is usually application-side (use connection pools, do not create-and-tear-down for every request); the kernel-side knob is net.ipv4.tcp_tw_reuse=1 with caveats.
All possible states ss can filter on: established, syn-sent, syn-recv, fin-wait-1, fin-wait-2, time-wait, closed, close-wait, last-ack, listening, closing. The pseudo-states all, connected (everything except listening and closed), and synchronized (the connected states minus syn-sent) save typing.
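Rather than querying one state at a time, the whole distribution can be grouped in a single pass with awk. A sketch, assuming ss -tan output in the format shown earlier; the counting step reads stdin, so it works on canned data too:

```shell
# state_histogram: count TCP sockets per state from `ss -tan` output on stdin.
state_histogram() {
  awk 'NR>1 {c[$1]++} END {for (s in c) printf "%7d %s\n", c[s], s}' | sort -rn
}

# Live usage (guarded so the sketch is harmless where ss is absent):
if command -v ss >/dev/null 2>&1; then
  ss -tan | state_histogram
fi
```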
5. Show TCP Internals (Send/Receive Queues, RTT, Congestion)
ss -tin state established
The -i flag adds internal TCP info to the output. You will see a second line per connection with values like:
cubic wscale:7,7 rto:204 rtt:0.456/0.123 ato:40 mss:1448 pmtu:1500 rcvmss:1448 advmss:1448 cwnd:10 bytes_sent:142398 bytes_acked:142398 bytes_received:8472 segs_out:198 segs_in:142 data_segs_out:185 data_segs_in:78 send 254Mbps lastsnd:12 lastrcv:12 lastack:12 pacing_rate 508Mbps delivery_rate 254Mbps delivered:186 app_limited busy:84ms rcv_rtt:1.234 rcv_space:14600 rcv_ssthresh:64076 minrtt:0.234
This is gold for performance debugging. The values you care about most:
- rtt: measured round-trip time. Sudden jumps mean network problems.
- cwnd: congestion window. A small cwnd that does not grow means you are saturating the link or hitting packet loss.
- retrans (when present): retransmitted segments. Non-zero is concerning.
- send / delivery_rate: actual throughput.
- rcv_space: receive buffer. If the application is reading too slowly, this fills up.
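When you need just one of these values across many connections, awk's match() can pull it out of the info line. A sketch that extracts the smoothed RTT (the first number in rtt:srtt/rttvar); the leading space in the pattern keeps it from matching rcv_rtt or minrtt:

```shell
# rtt_of: print the smoothed RTT (ms) from each `ss -ti` info line on stdin.
rtt_of() {
  awk 'match($0, / rtt:[0-9.]+/) {
    print substr($0, RSTART + 5, RLENGTH - 5)   # strip the leading " rtt:"
  }'
}
```

Piped as `ss -tin state established | rtt_of | sort -rn | head`, it surfaces the slowest paths first.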
One real incident: a customer's API was reporting slow response times. ss -tin showed cwnd stuck at 1 MSS for established connections, a classic packet-loss signature. The fault was a flaky NIC; replacing the cable fixed it. No application code changes needed.
6. Watch Sockets in Real Time
watch -n 1 'ss -tan state established | wc -l'
Track the connection count once per second. Useful during a load test or while debugging a connection leak. For more granular monitoring, ss is fast enough to scrape into Prometheus via a custom exporter β many node_exporter alternatives expose ss-derived metrics natively.
7. Memory Usage Per Socket
ss -tnme state established
The -m flag adds memory info per socket. You will see lines like:
skmem:(r0,rb87380,t0,tb16384,f0,w0,o0,bl0,d0)
Where r is bytes in the receive queue, rb is the receive buffer size, t is bytes in the send queue, and tb is the send buffer size. Critical when you are hunting "why is the kernel reporting socket-buffer pressure"; usually the answer is "an application is buffering, not reading, and the receive queue is full".
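A quick way to spot the "application not reading" case is to flag sockets whose r field is non-zero. A sketch (nonempty_rcvq is an assumed name); it greps the skmem blob shown above, so it can be fed piped ss -tnm output:

```shell
# nonempty_rcvq: print the receive-queue byte count for any socket whose
# skmem r field is non-zero (reads `ss -tnm` output on stdin).
nonempty_rcvq() {
  grep -oE 'skmem:\(r[0-9]+' | grep -v 'r0$'
}
```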
8. Show Unix Domain Sockets
ss -xlp
Lists Unix domain sockets that are listening, with the owning process. Useful when debugging "why is my application unable to connect to /var/run/docker.sock" or "what is listening on /tmp/wat.sock". The output includes the socket path, which makes it trivially greppable.
9. Summary Statistics
ss -s
Prints a one-page summary:
Total: 412
TCP: 384 (estab 156, closed 12, orphaned 0, timewait 78)
Transport Total IP IPv6
RAW 0 0 0
UDP 18 12 6
TCP 372 298 74
INET 390 310 80
FRAG 0 0 0
The single fastest way to get a high-level view of socket health on a host. Pair it with watch -n 5 for a poor man's monitoring dashboard.
10. Combine It All: The Incident Debug Pattern
When a server "feels slow" and you do not yet know why, the muscle-memory ss sequence:
# Quick health check
ss -s
# Listening sockets and what owns them
ss -tlnp
# All established connections, who they go to
ss -tn state established | awk 'NR>1 {print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head -20
# Stuck connections (CLOSE-WAIT means the remote closed but our app did not)
ss -tn state close-wait
# TIME-WAIT count (high = connection-pool issue)
ss -tn state time-wait | wc -l
# Per-connection internals for the busiest endpoints
ss -tin state established '( dport = :443 )'
This six-command sequence catches the vast majority of network-related incidents. Save it as a debug script and run it as a single block when you are paged for a slow-service issue.
The Filter Language Deep Dive
ss's filter language is more powerful than most users realize. Some patterns worth memorizing:
# All connections to or from port 443
ss -tn '( dport = :443 or sport = :443 )'
# Connections from a specific IP
ss -tn dst 10.0.0.5
# Connections to a specific subnet
ss -tn dst 10.0.0.0/24
# All sockets with retransmits > 0 (the retrans counter is in the -i info line)
ss -tin | grep -B1 'retrans:' | grep -v 'retrans:0/'
# Long-lived sockets: ss does not print socket age directly; -e adds the uid
# and inode, which post-processing can correlate against /proc
ss -tnel state established
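The grep approach to retransmits is fragile: retrans:current/total lines with a current count of 0 get dropped even when the lifetime total is not zero. If your iproute2 supports -O/--oneline, awk can inspect the counter directly. A sketch; the parsing step reads one-line-per-socket output on stdin:

```shell
# with_retrans: print sockets whose lifetime retransmit total is non-zero.
# Expects `ss -tinO` style output (one line per socket) on stdin.
with_retrans() {
  awk 'match($0, /retrans:[0-9]+\/[0-9]+/) {
    split(substr($0, RSTART + 8, RLENGTH - 8), n, "/")  # n[1]=current, n[2]=total
    if (n[2] > 0) print
  }'
}
```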
Output Formatting
ss has limited built-in formatting, but pairs perfectly with awk and column. Two patterns worth keeping:
# Tabular output, easier to read
ss -tnp | column -t
# Process-grouped view
ss -tnp state established | awk -F'"' '/users:/ {print $2}' | sort | uniq -c | sort -rn
The second pattern shows "how many established connections does each process own", useful for catching process-level connection leaks (an old version of a popular ORM was famous for this).
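The same extraction works as a reusable function; factoring the parsing out lets you test it on canned output. A sketch (conns_per_process is an assumed name); note it reports only the first process name when a socket is shared by several:

```shell
# conns_per_process: read `ss -tnp` output on stdin and print
# "count process-name" pairs, busiest process first.
conns_per_process() {
  awk -F'"' '/users:/ {print $2}' | sort | uniq -c | sort -rn
}
```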
The Kernel Internals Connection
One reason ss is so fast: the kernel exposes socket info via the SOCK_DIAG netlink interface, which ss queries directly. The same netlink mechanism powers much of iproute2, the conntrack-tools, and modern monitoring stacks. Understanding the underlying interface helps when you need to write custom tooling: the libnl library and the linux/inet_diag.h header are your starting points.
For ad-hoc one-shot queries, ss is the right tool 95% of the time. For production monitoring at scale, the same data flows into purpose-built collectors (node_exporter does some of this; ebpf_exporter does more) that scrape continuously without spawning processes.
Performance: Why ss Beats netstat at Scale
To make the speed difference concrete: on a load balancer with 250,000 active connections, netstat -an takes roughly 8 seconds to return. ss -an on the same machine returns in under 200 milliseconds. The reason is architectural: netstat opens /proc/net/tcp, /proc/net/tcp6, /proc/net/udp, /proc/net/udp6, etc., and reads each as a text file. Each read is a kernel-side serialize-everything operation. ss talks to the kernel via netlink with a structured request that filters server-side, so the kernel only sends back the sockets that match.
This matters in production for two reasons. First, when you are paged at 3 AM and need data fast, an 8-second netstat invocation feels like an eternity. Second, naive monitoring scripts that loop "every 30 seconds run netstat" can become a nontrivial CPU load on busy servers; we have seen cases where the monitoring itself was 5-10% of total host CPU. Replacing the netstat calls with ss reduced that to under 0.1%.
The same architectural advantage applies when comparing ss to fuser, lsof, and similar tools that scrape /proc. For socket-specific questions, ss is always the right answer in modern Linux.
Three Bonus Patterns Worth Knowing
Connections by remote port, useful for "what external services is this host talking to?":
ss -tn state established | awk 'NR>1 {split($5,a,":"); print a[2]}' | sort | uniq -c | sort -rn | head
Detect the slow-accept pattern: a high SYN-RECV count means the kernel queue is filling up because the application is not accepting fast enough:
watch -n 1 'ss -tan state syn-recv | wc -l'
List all sockets owned by a specific process, handy when investigating a specific PID's network behavior:
ss -tnp | grep 'pid=12345'
None of these are exotic; together with the ten core patterns they cover essentially every real-world ss use case you will encounter on a Linux server. Build them into your debug toolkit and you will rarely need to reach for tcpdump or strace for socket-related questions.
Frequently Asked Questions
Why does ss -p sometimes show no process info?
You need root privileges to see process info for sockets owned by other users. sudo ss -tlnp is the right pattern in production debugging.
What about ss -K to kill connections?
The -K flag (kill sockets matching the filter) requires kernel support (CONFIG_INET_DIAG_DESTROY) and root privileges. It is genuinely useful for surgically dropping leaked connections without restarting the application, but use it carefully, especially in production.
How do I get ss output suitable for parsing?
Use -H to suppress the header line and -O to keep all info on one line per socket (instead of wrapping). This makes awk/grep pipelines reliable.
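Those two flags make field positions predictable enough for awk. A small sketch, assuming a recent iproute2 that supports both -H and -O; the awk step reads header-less ss -tn output on stdin:

```shell
# conn_pairs: print "local -> peer" for each socket; expects the output of
# `ss -tnHO state established` (no header, one line per socket) on stdin.
conn_pairs() {
  awk '{print $4, "->", $5}'
}

# Live usage (guarded so the sketch is harmless where ss is absent):
if command -v ss >/dev/null 2>&1; then
  ss -tnHO state established | conn_pairs
fi
```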
Is ss available on all distributions?
Yes: it ships in the iproute2 package, which is in every modern Linux distribution. On minimal containers it might not be installed by default; apk add iproute2 on Alpine, yum install iproute on RHEL-family, apt install iproute2 on Debian-family.
What about IPv6?
ss handles IPv6 natively. -6 restricts to IPv6 only; -4 restricts to IPv4. Without either flag, both are shown.
Can ss show traffic counters?
Per-socket byte counters yes (visible with -i); per-interface counters no β that is what ip -s link and nstat are for. The right tool for the right question.
One More Real Incident
A platform team paged on "intermittent 503s from a backend service." The application logs showed nothing useful; requests just stopped arriving. ss -tlnp showed the listening socket was healthy. ss -tan state syn-recv | wc -l showed thousands of half-open connections. The kernel SYN backlog was overflowing because the application was too slow to accept(). Fix: bump net.core.somaxconn, bump the application's listen backlog, and fix the slow accept loop. ss made the diagnosis trivial; without it, the team would have been digging through tcpdump for hours.
Further Reading from the Dargslan Library
- Linux Tutorials category β networking tools, kernel internals, and practical sysadmin guides.
- Networking category β TCP tuning, packet capture, and observability.
- Free cheat sheet library β printable references for ss, ip, tcpdump, and other Linux network tools.
- Dargslan eBook library β comprehensive Linux administration and networking courses.
The Bottom Line
The ten patterns above cover 95% of what production Linux administrators need from ss. Commit them to muscle memory, build the debug-sequence script, and stop reaching for netstat. The combination of speed, filter language, and TCP-internals visibility makes ss the single most-used networking tool on a modern Linux box, and for very good reason.