How to Check Open Ports in Linux: The Complete Guide

Robert Johnson

I still remember the first time I got a call at 2 AM about a service that “just stopped working.” Spent two hours chasing ghosts before realizing the firewall was blocking the port. The service was running fine – I just couldn’t see which port it was actually listening on because I didn’t know how to check properly.

That night taught me something crucial: knowing how to check open ports in Linux isn’t just about running a command. It’s about understanding what you’re actually looking at and why it matters. After fifteen years of managing Linux servers, I’ve seen every port-related issue imaginable, and I’m going to show you exactly how to diagnose them.

Why Checking Open Ports Actually Matters

Every time I see someone skip port checking during troubleshooting, I want to shake them. Here’s the reality: open ports are how services communicate with the outside world. If you don’t know what’s listening and where, you’re flying blind.

Three situations where this knowledge has saved my bacon:

  • Security audits: Finding services you didn’t know were running (and shouldn’t be)
  • Troubleshooting connectivity: Verifying a service is actually listening before you blame DNS, firewalls, or the network team
  • Conflict resolution: Discovering why your new application won’t start because something else claimed port 8080

The difference between a good sysadmin and a great one? The great one checks ports first, troubleshoots second.

The Modern Way: Using the ss Command

Let me be blunt: if you’re still using netstat by default in 2025, you’re doing it wrong. The ss command is faster, more powerful, and actively maintained. It pulls socket information straight from the kernel over the netlink interface instead of parsing /proc/net, which makes it significantly faster on systems with thousands of connections.

Here’s the command I run probably fifty times a day:

sudo ss -tulpn

Let me break down what those flags actually do:

  • -t: Show TCP sockets
  • -u: Show UDP sockets
  • -l: Show only listening sockets (the ones accepting connections)
  • -p: Show the process using the socket
  • -n: Don’t resolve service names (show port numbers instead)

The output looks something like this:

Netid State   Recv-Q Send-Q Local Address:Port  Peer Address:Port Process
tcp   LISTEN  0      128    0.0.0.0:22         0.0.0.0:*     users:(("sshd",pid=1234,fd=3))
tcp   LISTEN  0      128    0.0.0.0:80         0.0.0.0:*     users:(("nginx",pid=5678,fd=6))
tcp   LISTEN  0      128    127.0.0.1:3306     0.0.0.0:*     users:(("mysqld",pid=9012,fd=25))

Notice that MySQL is bound to 127.0.0.1? That means it’s only accessible locally, not from the network. This is critical for security – you want your database listening only on localhost unless you explicitly need remote access.

Pro tip: The 0.0.0.0 address means the service is listening on all network interfaces. If you see 127.0.0.1, it’s localhost only. If you see a specific IP, it’s bound to that interface.

Filtering ss Output for Specific Ports

When you’re hunting for a specific port, you don’t want to scroll through hundreds of lines. I use grep to filter the output constantly:

sudo ss -tulpn | grep :80

This shows me everything listening on port 80. Simple, fast, effective.
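
ss also has a built-in filter language, so you can match a port without piping to grep. This is a minimal sketch of the syntax as it appears in the ss man page; the quotes keep the shell from interpreting the parentheses:

sudo ss -tlnp '( sport = :80 )'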

Finding What’s Using a Specific Port

Someone keeps asking “what’s running on port 8080?” Here’s how I answer in two seconds:

sudo ss -tulpn | grep :8080

If something’s there, you’ll see the process ID and name. If not, the port’s available.

The Old Guard: netstat Command

Look, netstat is officially deprecated. The man page literally says it’s obsolete. But I still run into systems where it’s the only tool available, so you need to know it.

The netstat equivalent of the ss command above:

sudo netstat -tulpn

The flags work the same way. The output format is slightly different, but it shows the same information. On older distributions like CentOS 7 or Ubuntu 16.04, you might need to install the net-tools package first:

sudo apt install net-tools  # Debian/Ubuntu
sudo yum install net-tools  # RHEL/CentOS

Here’s why ss is better: on a server handling 10,000 concurrent connections, netstat takes 2-3 seconds to gather the data. ss does it in under half a second. When you’re troubleshooting a production issue at 3 AM, those seconds matter.

Using lsof to Check Ports (The Process Detective)

The lsof command stands for “list open files,” and since everything in Linux is a file (including network sockets), it’s incredibly useful for port checking. I reach for lsof when I need to see exactly what files and network connections a specific process is using.

To see all network connections:

sudo lsof -i

To check a specific port:

sudo lsof -i :8080

To see what ports a specific process is using (the -a flag ANDs the two selections together; without it, lsof lists all network files plus everything the process has open):

sudo lsof -a -i -p 1234

The real power of lsof shows up when you’re debugging permission issues or tracking down resource leaks. I once spent hours trying to figure out why a web application couldn’t write to its log file. Running lsof -p [pid] showed me it had thousands of file handles open and had hit the system limit. Problem solved.
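
If you suspect that kind of file-descriptor leak, a rough check is to count what lsof reports for the process and compare it against the per-process limit. The PID below is a placeholder:

# Count open file descriptors for one process (1234 is a placeholder PID)
sudo lsof -p 1234 | wc -l

# Compare against the per-process limit
grep 'open files' /proc/1234/limits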

Finding Which Process Is Hogging a Port

This scenario happens constantly: you try to start a service and get “address already in use.” Here’s how I find the culprit:

sudo lsof -i :3000

Output shows you the command, PID, and user. Then you can decide whether to kill the process or reconfigure your service to use a different port.
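
If killing it is the right call, lsof’s -t flag prints just the PIDs, which feeds neatly into kill. Look at the full lsof output first so you know exactly what you’re terminating:

# -t prints only PIDs; review the full output before doing this
sudo kill $(sudo lsof -t -i :3000)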

Scanning Remote Systems with nmap

All the commands above work great for checking your local system. But what about scanning a remote server? That’s where nmap comes in – it’s the network scanning tool every sysadmin needs to know.

Basic port scan of a remote host:

nmap 192.168.1.100

By default, nmap only scans the 1,000 most common ports. For a comprehensive scan:

nmap -p- 192.168.1.100

The -p- flag tells nmap to scan all 65,535 ports. Fair warning: this takes a while.

Practical nmap Examples I Actually Use

Scan specific ports:

nmap -p 22,80,443 192.168.1.100

Fast scan with service detection:

nmap -F -sV 192.168.1.100

Check if a host is up without port scanning (useful for quick connectivity checks):

nmap -sn 192.168.1.100

Security note: Never run nmap against systems you don’t own or have explicit permission to scan. Unauthorized port scanning can be illegal and is definitely unethical. I only scan my own infrastructure or during authorized penetration tests.

Understanding Port States and What They Mean

When you’re checking ports, you’ll see different states. Understanding these is crucial for troubleshooting.

  • LISTEN: The port is open and waiting for incoming connections. This is what you want to see for services that should be accessible.
  • ESTABLISHED: An active connection exists. You’ll see this for ongoing sessions like SSH connections or database queries.
  • TIME_WAIT: The connection has closed but the system is waiting to ensure all packets have been received. This is normal after closing connections.
  • CLOSE_WAIT: The remote end has closed the connection, but the local application hasn’t closed its socket yet. If you see tons of these, you likely have an application that’s not properly closing connections.

I once debugged a Node.js application that was leaking database connections. Running ss -tan | grep CLOSE-WAIT | wc -l (note that ss spells the state with a hyphen) showed thousands of connections in CLOSE_WAIT. The application was creating connections but never properly closing them. Fixed the code, problem disappeared.
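
A quick way to spot that kind of problem is to summarize connections by state rather than eyeballing raw output. A small sketch; the state names (ESTAB, TIME-WAIT, CLOSE-WAIT) come from ss itself:

# Count TCP connections grouped by state
ss -tan | awk 'NR > 1 {print $1}' | sort | uniq -c | sort -rn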

Common Port Checking Scenarios and Solutions

Scenario 1: Service Won’t Start (Port Already in Use)

Your application refuses to start with “address already in use.” Here’s my debugging workflow:

sudo ss -tulpn | grep :PORT_NUMBER

This shows what’s using the port. Then decide: kill the conflicting process or reconfigure your service to use a different port. For services managed by systemd, you might need to restart the service properly using systemctl.
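
As a concrete sketch, suppose the output shows a leftover process holding the port. The PID and unit name below are made up for illustration:

# ss reported pid=4321 for the process holding the port (hypothetical PID)
sudo systemctl stop old-app.service   # hypothetical unit name, if systemd manages it
sudo kill 4321                        # otherwise, stop the process directly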

Scenario 2: Can’t Connect Remotely But Service Is Running

Check if the service is listening on the right interface:

sudo ss -tulpn | grep SERVICE_PORT

If you see 127.0.0.1:PORT, the service is only listening locally. You need to reconfigure it to listen on 0.0.0.0 or a specific interface IP.
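
How you do that depends on the service. As one example, MySQL’s bind-address directive controls which interface it listens on; the config path varies by distribution, so treat this snippet as illustrative:

# e.g. /etc/mysql/mysql.conf.d/mysqld.cnf or /etc/my.cnf (path varies by distro)
[mysqld]
bind-address = 0.0.0.0

After changing it, restart the service and re-run ss to confirm the new listening address.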

Scenario 3: Finding Unexpected Open Ports (Security Audit)

Run a comprehensive scan and compare against what you expect:

sudo ss -tulpn > current_ports.txt

Review the output line by line. Any port you don’t recognize needs investigation. I found a cryptocurrency miner on a web server this way – it was listening on port 3333 for mining pool connections. If I hadn’t been checking ports regularly, it would have kept running for months.
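
To make that review less error-prone, I keep a known-good snapshot and diff against it on each audit. A minimal sketch (PIDs change between runs, so expect some noise in the diff):

# Save a baseline once, when you know the server is clean
sudo ss -tulpn > ports_baseline.txt

# On each audit, capture the current state and compare
sudo ss -tulpn > current_ports.txt
diff ports_baseline.txt current_ports.txt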

TCP vs UDP: Why You Need to Check Both

Most people only check TCP ports and completely forget about UDP. This is a mistake. While TCP is used for most services (web servers, databases, SSH), UDP handles DNS, DHCP, VPN traffic, and streaming applications.

To see only UDP ports:

sudo ss -ulpn

UDP is connectionless, so you won’t see states like ESTABLISHED. Listening UDP sockets simply show up as UNCONN (unconnected).

I spent three hours once debugging why DNS wasn’t working on a server. Everything looked fine until I checked UDP ports and realized systemd-resolved wasn’t actually listening on port 53. The service was running, but it wasn’t binding to the port. Restarting it fixed the issue immediately.
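
A check like the one below is all it takes; it’s the same ss pattern, restricted to UDP and grepped for port 53:

sudo ss -ulpn | grep ':53 '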

Firewall Considerations: Listening Doesn’t Mean Accessible

Just because a service is listening on a port doesn’t mean external systems can reach it. You need to verify your firewall rules allow the traffic through.

Check firewall status on systems using firewalld:

sudo firewall-cmd --list-all

Check firewall rules with iptables:

sudo iptables -L -n

If you’re checking whether a service is accessible from the network, you need to test from an external system. From another machine, use telnet or nc to test connectivity:

telnet 192.168.1.100 80

Or with netcat:

nc -zv 192.168.1.100 80

If the connection succeeds, the port is accessible through the firewall. If it times out or is refused, you’ve got a firewall or routing issue to investigate.

Automation: Scripting Port Checks

I have a simple script that checks critical services every five minutes and alerts me if something stops listening:

#!/bin/bash

PORTS=(22 80 443 3306)

for port in "${PORTS[@]}"; do
    if ! sudo ss -tulpn | grep -q ":$port "; then
        echo "WARNING: Nothing listening on port $port"
        # Send alert (email, Slack, whatever)
    fi
done

This saved me multiple times when services crashed unexpectedly. Rather than waiting for users to complain, I get an alert immediately.
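
I run it from root’s crontab every five minutes (ss needs the privileges, and cron can’t answer a sudo prompt). The script path is just an example:

# crontab -e as root; adjust the path to wherever you keep the script
*/5 * * * * /usr/local/bin/check_ports.sh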

The Security Implications of Open Ports

Every open port is a potential attack surface. This is why the principle of least privilege applies to network services too: only run what you need, only listen where you need to.

Regular port audits should be part of your security routine:

  1. List all listening ports: sudo ss -tulpn
  2. Identify each service and verify it’s necessary
  3. Ensure each service is bound to the minimum required interfaces (localhost when possible)
  4. Verify firewall rules are properly restricting access
  5. Check for default credentials or known vulnerabilities in listening services

I make it a habit to review open ports on every server I manage at least monthly. You’d be amazed what accumulates over time – test services left running, forgotten applications, and occasionally, actual compromises.

Troubleshooting Tips from Real Production Issues

High Number of TIME_WAIT Connections

If you see thousands of connections in TIME_WAIT state, it’s usually not a problem – it’s normal TCP behavior. But if you’re running a high-traffic web application, you might hit limits. You can check the count with:

ss -tan | grep TIME-WAIT | wc -l

If it’s excessive, you might need to tune kernel parameters like net.ipv4.tcp_tw_reuse and net.ipv4.tcp_fin_timeout.
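
If you do go down that road, sysctl lets you test a change live before persisting it. The values here are common starting points, not recommendations; measure before and after:

# Apply immediately (lost on reboot); values are illustrative
sudo sysctl -w net.ipv4.tcp_tw_reuse=1
sudo sysctl -w net.ipv4.tcp_fin_timeout=30

# Persist once you're happy with the result (file name is an example)
echo 'net.ipv4.tcp_tw_reuse = 1' | sudo tee -a /etc/sysctl.d/99-tcp-tuning.conf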

Port Showing as Listening But Application Not Working

Check if it’s listening on IPv4 vs IPv6. Some applications bind to IPv6 by default, which shows up as [::]:PORT in the ss output. If your system or application isn’t properly configured for IPv6, this can cause connection issues even though the port appears to be listening.
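
ss can separate the two address families, which makes this easy to spot at a glance:

sudo ss -4 -tlnp   # IPv4 listeners only
sudo ss -6 -tlnp   # IPv6 listeners only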

Process Dies But Port Stays in Use

Sometimes a process crashes but the port remains in use temporarily. Linux keeps the socket around for a short period (usually 60 seconds) to handle any stray packets. If you need to reclaim it immediately, have your application set the SO_REUSEADDR socket option before binding, but usually just waiting a minute works fine.

Tools Comparison: When to Use Which

After years of using all these tools, here’s my practical decision tree:

  • Use ss: For quick checks on the local system, daily operational work, and scripting. It’s fast, modern, and should be your default.
  • Use lsof: When you need to see all resources a process is using (files, network sockets, everything). Great for deep debugging.
  • Use nmap: For scanning remote systems, security audits, and comprehensive port discovery. Don’t use it on your local system – it’s overkill.
  • Use netstat: Only when ss isn’t available (ancient systems) or when you need that one specific output format that ss doesn’t quite replicate.

What About IPv6 Ports?

All the commands above work for IPv6 too. In ss output, IPv6 addresses show up with brackets and colons:

tcp   LISTEN  0      128    [::]:80    [::]:*

The :: is IPv6’s equivalent of 0.0.0.0 – it means listening on all IPv6 interfaces. As IPv6 adoption grows, don’t forget to check these ports too. I’ve seen production issues caused by services that were properly configured for IPv4 but completely exposed on IPv6 because no one thought to check.

Beyond Basic Port Checking: Understanding Network Statistics

Once you know which ports are open, the next level is understanding connection statistics. The ss command with the -s flag gives you summary statistics:

ss -s

This shows total sockets, TCP connections in various states, UDP sockets, and more. When you’re troubleshooting performance issues or capacity planning, these numbers tell you how your services are actually being used.

For example, if you see thousands of ESTABLISHED connections to your database port but your application is slow, you might have a connection pooling problem. The database isn’t the bottleneck – you’re just creating too many connections.
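
To check for that, count the established connections hitting the database port. The filter expression follows the ss man page syntax, with 3306 standing in for whatever port your database uses:

# Count established TCP connections involving port 3306 (header line skipped)
ss -tn state established '( dport = :3306 or sport = :3306 )' | tail -n +2 | wc -l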

The Reality Check: Practice This Before You Need It

The worst time to learn port checking is during a production outage. Spin up a virtual machine, install some services, and practice these commands until they’re muscle memory. Try these exercises:

  1. Install nginx and verify it’s listening on port 80
  2. Configure it to listen on port 8080 instead and verify the change
  3. Set up MySQL and confirm it’s only listening on localhost
  4. Start a second service on a port that’s already in use, then practice diagnosing the conflict
  5. Practice filtering output with grep to find specific ports quickly

I guarantee you’ll use these skills weekly, if not daily. Port checking is fundamental to Linux system administration, right up there with finding files and managing permissions.

The Bottom Line: Check Ports First, Troubleshoot Second

Every time you’re debugging a networking issue, connectivity problem, or mysterious service failure, checking ports should be one of your first steps. Is the service actually listening? On which interface? What port exactly?

These simple questions, answered with a quick ss -tulpn, have saved me countless hours of barking up the wrong tree. Don’t assume anything. Verify everything. The service that was supposed to be listening on port 8080 might be listening on 8081. The database you think is accessible remotely might be bound to localhost only. The application that keeps failing might be conflicting with a port you didn’t even know was in use.

Master these commands, understand what the output means, and you’ll solve problems faster than 90% of sysadmins out there. Trust me on this one.

Still debugging networking issues? Check out my guide on managing user permissions since port binding often involves permission issues, or learn about troubleshooting Linux boot problems for more diagnostic techniques.