I’ll never forget the morning I discovered an unauthorized service listening on port 4444 on one of our production servers. My coffee went cold as I stared at the terminal, realizing we’d been running with an unnecessary attack surface for who knows how long. That incident taught me a critical lesson: knowing how to check listening ports in Linux isn’t just a nice skill to have—it’s essential for security.
A listening port is a network port where a service actively waits for incoming connections. Unlike ports that are simply “open” in your firewall, listening ports have actual processes bound to them, ready to accept traffic. If you don’t know what’s listening on your system, you’re flying blind from a security perspective.
Why Checking Listening Ports Matters for Security
Every listening port is a potential entry point. When I audit a new server, checking listening ports is one of the first things I do—and I’ve caught everything from forgotten development servers to actual malware this way.
The CIS Security Benchmarks explicitly recommend regularly auditing network services and ensuring only necessary ports are listening. Here’s why this matters:
- Attack Surface Reduction: Every listening service is code that can potentially be exploited. Less is more.
- Compliance Requirements: Many security frameworks require documentation of all listening services.
- Performance Impact: Unnecessary services consume resources and can slow down your system.
- Incident Detection: Unexpected listening ports often indicate compromise or misconfiguration.
I once found a cryptocurrency miner that had established a listening port for command and control. Without regular port checks, it would have kept draining resources indefinitely.

The Modern Way: Using ss Command
The ss command is the modern replacement for netstat, and it’s what I reach for first on any current Linux system. It’s faster, more efficient, and pulls data directly from the kernel via Netlink rather than parsing /proc files.
Installing ss
On most modern distributions, ss comes pre-installed as part of the iproute2 package. If you somehow don’t have it:
# Debian/Ubuntu
sudo apt install iproute2
# RHEL/CentOS/Fedora
sudo dnf install iproute
# Arch
sudo pacman -S iproute2

Basic Usage to Check All Listening Ports
Here’s the command I use almost daily:
sudo ss -tulpn

Let me break down those flags:
- -t: Show TCP sockets
- -u: Show UDP sockets
- -l: Display only listening sockets (this is the key flag)
- -p: Show process information (requires root)
- -n: Show numerical addresses instead of resolving hostnames (faster)
The output looks something like this:
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=1234,fd=3))
LISTEN 0 128 0.0.0.0:80 0.0.0.0:* users:(("nginx",pid=5678,fd=6))
LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=1234,fd=4))

This tells me SSH is listening on port 22 (expected), nginx is on port 80 (also expected), and crucially, I can see the process IDs. If I saw something unexpected here, I could immediately investigate or kill that process.
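If you want just the port numbers (say, to keep a baseline file), the Local Address:Port column can be parsed with awk. Here's a small sketch; extract_ports is a helper name I made up, and the sample data is a stand-in for real ss output, keyed to the column layout shown above:

```shell
#!/bin/sh
# Hypothetical helper: pull the port out of the fourth column
# (Local Address:Port) of ss-style output, then de-duplicate.
extract_ports() {
    awk 'NR > 1 { n = split($4, a, ":"); print a[n] }' | sort -un
}

# Stand-in for `sudo ss -tulpn` output (same columns as the sample above):
sample='State Recv-Q Send-Q Local-Address:Port Peer-Address:Port Process
LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=1234,fd=3))
LISTEN 0 128 0.0.0.0:80 0.0.0.0:* users:(("nginx",pid=5678,fd=6))
LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=1234,fd=4))'

printf '%s\n' "$sample" | extract_ports   # prints 22, then 80
```

Taking the last colon-separated field (rather than the second) matters because IPv6 addresses like [::]:22 contain colons of their own.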
Filtering for Specific Ports
When I’m troubleshooting a specific service, I filter the output:
# Check if anything is listening on port 443
sudo ss -tulpn | grep :443
# Check only TCP listening ports
sudo ss -tlpn
# Check only UDP listening ports
sudo ss -ulpn

The official ss command manual page has the complete reference if you want to dive deeper into advanced filtering.
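One caveat with piping to grep: a bare pattern like :443 also matches ports such as :4430, so I anchor the port with a trailing space or end-of-line. A quick self-contained demonstration (the two sample lines are fabricated):

```shell
#!/bin/sh
# Two fake ss lines: one listener on port 443, one on 4430.
sample='LISTEN 0 128 0.0.0.0:443 0.0.0.0:*
LISTEN 0 128 0.0.0.0:4430 0.0.0.0:*'

printf '%s\n' "$sample" | grep -c ':443'          # naive pattern: matches both lines
printf '%s\n' "$sample" | grep -cE ':443( |$)'    # anchored: matches only port 443
```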
The Legacy Method: Using netstat
I still encounter older systems where netstat is the primary tool, so it’s worth knowing. Just be aware that netstat is officially deprecated and slower than ss on systems with many connections.
Installing netstat
# Debian/Ubuntu
sudo apt install net-tools
# RHEL/CentOS/Fedora
sudo dnf install net-tools
# Arch
sudo pacman -S net-tools

Checking Listening Ports with netstat
The equivalent command to what I showed with ss:
sudo netstat -tulpn

The flags are the same, which makes the transition between tools easier. However, on a busy server with thousands of connections, I’ve seen netstat take noticeably longer to return results compared to ss.
Using lsof for Detailed Process Information
When I need to really understand what’s happening with a specific port, I turn to lsof. The name stands for “list open files,” but in Linux, everything is a file—including network sockets.
Installing lsof
# Debian/Ubuntu
sudo apt install lsof
# RHEL/CentOS/Fedora
sudo dnf install lsof
# Arch
sudo pacman -S lsof

Checking TCP Listening Ports
This command shows all TCP ports in the LISTEN state:
sudo lsof -iTCP -sTCP:LISTEN -nP

Breaking it down:
- -iTCP: Select network files, restricted to the TCP protocol
- -sTCP:LISTEN: Show only sockets in LISTEN state
- -n: Don’t resolve hostnames
- -P: Don’t resolve port names
The output is more verbose than ss:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sshd 1234 root 3u IPv4 12345 0t0 TCP *:22 (LISTEN)
nginx 5678 www-data 6u IPv4 67890 0t0 TCP *:80 (LISTEN)

What I love about lsof is the FD (file descriptor) column. When debugging file descriptor leaks or connection issues, this information is gold. The Linode lsof guide covers some advanced use cases I’ve found helpful over the years.
Checking a Specific Port
sudo lsof -i :22

This shows everything using port 22—both listening and established connections.
Understanding Socket States
When you check listening ports, you’ll see different socket states. Understanding these helps you interpret what you’re seeing:
- LISTEN: The socket is waiting for incoming connections. This is what we’re primarily looking for when checking listening ports.
- ESTABLISHED: An active connection exists. You’ll see this when checking all ports, not just listening ones.
- TIME_WAIT: The connection is closed but waiting to ensure all packets are processed. This is normal TCP behavior.
- CLOSE_WAIT: The remote end has closed the connection, but the local application hasn’t closed its socket yet.
I once spent hours troubleshooting why a web application was running out of connections, only to discover hundreds of sockets stuck in CLOSE_WAIT due to a bug in the application code. Understanding these states saved my bacon. The technical details are in this excellent breakdown of TCP TIME_WAIT.
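When chasing that kind of problem, a quick tally of socket states tells you at a glance whether something is piling up. count_states here is my own little wrapper around awk; feed it ss -tan style output (the sample below is fabricated):

```shell
#!/bin/sh
# Count sockets per TCP state from ss-style output (column 1 = State).
count_states() {
    awk 'NR > 1 { print $1 }' | sort | uniq -c | sort -rn
}

# Fabricated `ss -tan` output with a suspicious pile of CLOSE-WAIT sockets:
sample='State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 0 10.0.0.5:443 203.0.113.7:52114
CLOSE-WAIT 1 0 10.0.0.5:8080 203.0.113.9:40222
CLOSE-WAIT 1 0 10.0.0.5:8080 203.0.113.9:40223
TIME-WAIT 0 0 10.0.0.5:443 203.0.113.7:52110'

printf '%s\n' "$sample" | count_states   # CLOSE-WAIT tops the list with count 2
```

A steadily growing CLOSE-WAIT count is exactly the application-isn't-closing-its-sockets signature described above.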
Real-World Security Scenario
Let me share a practical example from last year. After setting up a new web server, I ran my standard audit:
sudo ss -tulpn

The output showed:
LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=789))
LISTEN 0 128 0.0.0.0:80 0.0.0.0:* users:(("nginx",pid=1234))
LISTEN 0 128 0.0.0.0:443 0.0.0.0:* users:(("nginx",pid=1234))
LISTEN 0 128 0.0.0.0:3306 0.0.0.0:* users:(("mysqld",pid=5678))

Wait. MySQL listening on 0.0.0.0:3306? That means it was accepting connections from any interface, not just localhost. That’s a security issue—MySQL should only be accessible locally unless you specifically need remote database connections.
I immediately edited /etc/mysql/mysql.conf.d/mysqld.cnf:
bind-address = 127.0.0.1

After restarting MySQL and checking again:
LISTEN 0 128 127.0.0.1:3306 0.0.0.0:* users:(("mysqld",pid=5678))

Much better. Now it’s only listening on localhost. I also made sure our UFW firewall had proper rules configured as a second layer of defense.
Common Ports and What Should Be Listening
After years of server administration, here are the ports I typically expect to see listening on various server types:
| Port | Service | Notes |
|---|---|---|
| 22 | SSH | Almost always present for remote administration |
| 25 | SMTP | Mail server only |
| 80 | HTTP | Web server |
| 443 | HTTPS | Web server with SSL/TLS |
| 3306 | MySQL | Should bind to 127.0.0.1 unless remote access needed |
| 5432 | PostgreSQL | Should bind to 127.0.0.1 unless remote access needed |
| 6379 | Redis | Should bind to 127.0.0.1 unless clustering is configured |
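To mechanically enforce the "bind to 127.0.0.1" rule from the table, I use a variant of the check below. flag_exposed is a hypothetical helper of my own: it reads ss-style lines and flags sensitive ports whose local address is anything other than loopback:

```shell
#!/bin/sh
# Flag sensitive ports (3306, 5432, 6379) not bound to a loopback address.
flag_exposed() {
    awk 'NR > 1 {
        n = split($4, a, ":"); port = a[n]
        if (port == "3306" || port == "5432" || port == "6379")
            if ($4 !~ /^127\./ && $4 !~ /^\[::1\]/)
                print "EXPOSED: " $4
    }'
}

# Fabricated `ss -tlpn` output: Redis is bound safely, MySQL is wide open.
sample='State Recv-Q Send-Q Local-Address:Port Peer-Address:Port
LISTEN 0 128 127.0.0.1:6379 0.0.0.0:*
LISTEN 0 128 0.0.0.0:3306 0.0.0.0:*'

printf '%s\n' "$sample" | flag_exposed   # prints: EXPOSED: 0.0.0.0:3306
```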
If you see something unexpected, investigate immediately. Use systemctl to check what services are running, and review your system logs with journalctl to understand when that service started.
Automating Port Checks with Scripts
I don’t manually check ports every day—I’ve automated it. Here’s a simple bash script I use that alerts me if unexpected ports start listening:
#!/bin/bash
# Check for listening ports and compare against baseline
EXPECTED_PORTS=(22 80 443)
CURRENT_PORTS=$(sudo ss -tulpn | awk '{print $5}' | grep -oP '(?<=:)\d+$' | sort -u)
for port in $CURRENT_PORTS; do
    if [[ ! " ${EXPECTED_PORTS[@]} " =~ " ${port} " ]]; then
        echo "WARNING: Unexpected port listening: $port"
        sudo ss -tulpn | grep ":$port"
    fi
done

I run this via a cron job daily and send the output to email. It's caught issues before they became problems more times than I can count.
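For reference, the cron wiring is a one-liner in root's crontab (crontab -e). The script path and e-mail address below are placeholders, and this assumes a working mail command on the host:

```
# m h dom mon dow  command: run the port audit at 06:00 every day
0 6 * * * /usr/local/bin/port-audit.sh 2>&1 | mail -s "Port audit: $(hostname)" admin@example.com
```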
What to Do When You Find Unexpected Listening Ports
If you discover a port you don't recognize:
- Identify the process: Use sudo ss -tulpn or sudo lsof -i :PORT to see what process owns it.
- Check if it's legitimate: Research the process name. Is it a service you installed? Part of your application stack?
- Verify it should be listening: Even legitimate services might be misconfigured to listen on all interfaces when they should only bind to localhost.
- Check when it started: Use ps -p PID -o lstart to see when the process started running.
- Review recent changes: Check your package manager logs, deployment history, or system logs for recent installations.
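For the identification and start-time steps, here's how I pull the details together once I have a PID. The snippet uses the current shell's own PID ($$) as a harmless stand-in for the suspect process:

```shell
#!/bin/sh
pid=$$   # stand-in; in a real investigation this comes from ss/lsof output

# When did it start, and what command line is it running?
ps -p "$pid" -o lstart=,args=

# Which binary is actually behind it? Malware sometimes fakes its process
# name; on Linux, /proc/PID/exe is a symlink to the real executable on disk.
readlink "/proc/$pid/exe"
```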
If you can't identify it and it seems suspicious, isolate the system from the network immediately and investigate further. I've seen malware masquerade as system processes, so trust but verify.
Comparing Your Methods: ss vs netstat vs lsof
After using all three tools for years, here's my honest take:
Use ss for regular daily checking. It's fast, modern, and will be around for the long haul. I default to sudo ss -tulpn about 90% of the time.
Use netstat only if you're on an older system that doesn't have ss, or if you're following documentation written before ss became standard.
Use lsof when you need detailed process information or when debugging complex socket issues. The extra detail it provides is invaluable when something's really wrong.
Integration with Security Hardening
Checking listening ports should be part of your overall security hardening process. After I lock down a new server, my checklist includes:
- Review listening ports and disable unnecessary services
- Configure firewall rules to only allow required traffic
- Ensure sensitive services (databases, Redis, etc.) bind only to 127.0.0.1
- Set up SSH key authentication and disable password auth
- Set up automated monitoring for unexpected port changes
This layered approach has kept my servers secure for years. Defense in depth isn't just a buzzword—it's how you actually protect systems in production.
Final Thoughts
Learning how to check listening ports in Linux transformed how I approach server security. What once seemed like arcane knowledge is now second nature—and it's caught real security issues more times than I can count.
Start simple: run sudo ss -tulpn on your systems right now. Make a note of what you see. Do you recognize every service? Is everything configured the way you expect? That baseline knowledge is the foundation of effective security monitoring.
The servers you don't check are the ones that bite you. Trust me on this one.