I still remember the first time lsof saved my career. It was 2 AM, the production server was at 100% disk capacity, and I’d already deleted what felt like half the filesystem. Yet df -h showed zero change. My manager was breathing down my neck, users couldn’t log in, and I was about five minutes from updating my resume.
Then a senior admin mentioned lsof +L1. Thirty seconds later, I found a 47GB log file that had been deleted but was still held open by a runaway process. One process restart later, disk space flooded back, the crisis ended, and I learned the most important lesson of my sysadmin career: lsof is not optional.
The lsof command—short for “list open files”—is one of those tools that separates junior admins from senior ones. While everyone learns ls and grep, lsof sits quietly in the shadows, waiting for the moment when nothing else will do. It’s your Swiss Army knife for troubleshooting processes, network connections, file locks, and those maddeningly vague issues that make you question your career choices.
What is the lsof Command and Why You Need It
On Linux and Unix systems, everything is a file. Regular files, directories, network sockets, pipes, devices—all files. The lsof command lists all open files and the processes using them. This simple concept unlocks extraordinary troubleshooting power.
Here’s what makes lsof invaluable:

- Process investigation: See exactly what files and network connections any process has open
- Network debugging: Find which process is using a specific port instantly
- Disk space mysteries: Locate deleted files still consuming space
- Security auditing: Detect unauthorized connections or suspicious activity
- File lock troubleshooting: Identify what’s preventing you from unmounting a filesystem
Unlike tools that focus on one aspect of the system, lsof gives you the complete picture of how processes interact with files and the network. When checking open ports or investigating stuck processes, lsof shows you the connections other tools miss.
Installing lsof on Linux
Not every Linux distribution includes lsof by default, especially minimal or container images. Installation is straightforward:
Debian/Ubuntu:
sudo apt update
sudo apt install lsof
RHEL/CentOS/Fedora:
sudo dnf install lsof
Arch Linux:
sudo pacman -S lsof
Verify installation:
lsof -v
You should see version information. Now you’re ready to start investigating.
Basic lsof Command Syntax and Output
The basic syntax is simple:
lsof [options] [names]
Run lsof without arguments (as root or with sudo), and you’ll see thousands of lines—every open file on the system. The output looks like this:
COMMAND  PID USER  FD  TYPE DEVICE SIZE/OFF    NODE NAME
systemd    1 root cwd   DIR  253,0     4096       2 /
systemd    1 root rtd   DIR  253,0     4096       2 /
systemd    1 root txt   REG  253,0  1624520 8409757 /usr/lib/systemd/systemd
sshd    1234 root   3u IPv4   12345      0t0     TCP *:22 (LISTEN)
Key columns explained:
- COMMAND: Process name
- PID: Process ID
- USER: User running the process
- FD: File descriptor (cwd=current working directory, txt=executable, number+mode like 3u=file descriptor 3 open for read/write)
- TYPE: Type of file (REG=regular file, DIR=directory, IPv4/IPv6=network socket)
- NODE: Inode number
- NAME: File path or network connection details
The raw output is overwhelming, which is why you’ll rarely use lsof alone. You’ll combine it with grep, filter by process, or target specific resources.
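To see where the FD column comes from, you can open a descriptor yourself and inspect it the same way lsof does — a quick sketch, assuming a Linux system with /proc mounted:

```shell
#!/bin/sh
# Open file descriptor 3 for read/write -- lsof would report this as "3u".
tmp=$(mktemp)
exec 3<>"$tmp"
# /proc exposes the same mapping lsof reads: fd number -> file path.
target=$(readlink /proc/$$/fd/3)
echo "fd 3 -> $target"
exec 3>&-      # close descriptor 3 again
rm -f "$tmp"
```

Running lsof -p $$ in the same shell would show that identical descriptor, with 3u in the FD column.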
Finding Which Process is Using a Specific Port
This is the use case that makes lsof a daily driver. Someone says “port 8080 is already in use” or “I can’t start the web server.” Here’s how you find the culprit:
sudo lsof -i :8080
Output might look like:
COMMAND  PID USER  FD TYPE DEVICE SIZE/OFF NODE NAME
java    5432 alex 45u IPv6  87654      0t0  TCP *:8080 (LISTEN)
Boom. Process 5432 (a Java application run by user alex) is listening on port 8080. Now you can decide whether to kill the process or reconfigure your application to use a different port.
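Once you know how to find the PID, the lookup can be scripted. A hedged sketch (port 8080 is just the example from above, and the kill line is deliberately commented out — deciding to stop a process should stay a human call):

```shell
#!/bin/sh
# Look up whoever is listening on the port; do nothing destructive by default.
port=8080
pid=$(lsof -t -i ":$port" -s TCP:LISTEN 2>/dev/null || true)
if [ -n "$pid" ]; then
    status="busy"
    echo "port $port is held by PID $pid ($(ps -p "$pid" -o comm= 2>/dev/null))"
    # kill "$pid"    # only once you are sure stopping it is the right call
else
    status="free"
    echo "port $port appears free (or lsof is not installed)"
fi
```

The -t flag keeps the output to bare PIDs, which is exactly what you want inside command substitution.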
To see all listening TCP ports:
sudo lsof -i TCP -s TCP:LISTEN
To see all network connections (TCP and UDP):
sudo lsof -i
For more detailed network debugging, combine lsof with netstat or ss. Each tool has strengths: ss is faster for bulk socket data, but lsof excels at connecting sockets to specific processes and files.
Investigating a Specific Process with lsof
When a process misbehaves—consuming resources, hanging, or acting suspiciously—you need to see what it’s doing. Use lsof -p followed by the process ID:
sudo lsof -p 1234
This shows every file, library, network socket, and pipe that process has open. I use this constantly when diagnosing issues with systemd services that won’t start or applications that hang during execution.
Real-world example: A web application was occasionally freezing. Running lsof -p on the frozen process revealed it had opened thousands of connections to the database but never closed them. Connection pooling was misconfigured. Without lsof, we’d have spent days reviewing code.
To see all files opened by a specific user:
sudo lsof -u username
To see all files opened by all users except root:
sudo lsof -u ^root
The caret (^) means “not”—useful for filtering out noise.
Solving the Deleted Files Disk Space Mystery
Here’s a scenario that’s bitten every sysadmin: you delete gigabytes of log files, run df -h, and… nothing. Disk usage hasn’t changed. What gives?
On Linux, deleting a file only removes its directory entry. If a process still has the file open, the file continues to exist and consume disk space until that process closes it or terminates. This is by design—it prevents crashes when log files are rotated—but it’s confusing as hell the first time you encounter it.
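You can reproduce the behavior safely in a scratch directory. A minimal sketch, assuming a Linux /proc and using tail as a stand-in for the runaway process:

```shell
#!/bin/sh
# A deleted file survives as long as some process holds it open.
tmpfile=$(mktemp)
echo "still consuming disk space" > "$tmpfile"
tail -f "$tmpfile" > /dev/null 2>&1 &    # background process holds the file open
holder=$!
sleep 1                                  # give tail a moment to open it
rm "$tmpfile"                            # directory entry gone; data blocks are not
# The kernel still shows the open descriptor, marked "(deleted)":
fd_listing=$(ls -l /proc/$holder/fd 2>/dev/null | grep deleted || true)
echo "$fd_listing"
kill "$holder"    # closing the last descriptor is what frees the space
```

That "(deleted)" marker is the same thing lsof keys on when it reports files with a link count of zero.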
Find deleted files still holding disk space:
sudo lsof +L1
The +L1 option shows files with fewer than 1 link—deleted files. Output looks like:
COMMAND  PID USER FD TYPE DEVICE    SIZE/OFF NLINK    NODE NAME
java    8421 alex  5w  REG  253,0 50331648000     0 1234567 /var/log/app.log (deleted)
There it is: a 47GB deleted log file still held open by a Java process. The solution? Restart the process (or kill it if appropriate):
sudo systemctl restart application.service
As soon as the process closes the file, disk space returns. If you can't restart the process, there are advanced techniques, such as truncating the file through /proc, though they're risky in production.
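When a restart is off the table, the last-resort trick is to truncate the deleted file through the holder's /proc entry. Here's a sketch of the mechanics against a throwaway process (the descriptor number and file come from the demo setup, not from any real service — against a production process this carries exactly the risk noted above):

```shell
#!/bin/sh
# Set up a throwaway process holding a deleted 1 MiB file open.
tmpfile=$(mktemp)
head -c 1048576 /dev/zero > "$tmpfile"
tail -f "$tmpfile" > /dev/null 2>&1 &
holder=$!
sleep 1
rm "$tmpfile"
# Find which descriptor number still points at the deleted file.
fd=$(ls -l /proc/$holder/fd | awk '/deleted/ {print $9; exit}')
before=$(stat -Lc %s /proc/$holder/fd/"$fd")
: > /proc/$holder/fd/"$fd"     # opening with O_TRUNC zeroes the file in place
after=$(stat -Lc %s /proc/$holder/fd/"$fd")
echo "size before: $before bytes, after: $after bytes"
kill "$holder"
```

The process keeps its descriptor and keeps running; only the data blocks are released. If the process later writes at a high offset, the file can balloon again, which is why restarting remains the cleaner fix.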
I now check for deleted files any time disk usage doesn’t match expectations. It’s saved me countless hours of frustration.
Finding Files Open in a Specific Directory
Ever tried to unmount a filesystem and gotten “device is busy”? Some process has a file open. lsof tells you which one:
sudo lsof +D /mnt/data
The +D option recursively searches the directory and shows all open files within it. This is slower than +d (lowercase, non-recursive), but it catches everything.
Example output:
COMMAND  PID USER FD TYPE DEVICE SIZE/OFF   NODE NAME
vim     9876 alex  3u  REG  253,1    45678 123456 /mnt/data/config.txt
Someone left a vim session open. Close it, and you can unmount cleanly.
I use this constantly when managing external drives, NFS mounts, or preparing systems for maintenance.
Monitoring Network Connections in Real-Time
The -r option makes lsof repeat its output at intervals, creating a real-time monitor:
sudo lsof -i -r 3
This refreshes every 3 seconds, showing all network connections, with each pass separated by a ======= marker line. It's similar to watch lsof -i, but built into the tool itself.
When investigating intermittent network issues or watching for suspicious connections, this is invaluable. I’ve used it to catch malware establishing outbound connections during off-hours and to debug applications that leak TCP connections over time.
Combine with grep for focused monitoring:
sudo lsof -i -r 2 | grep ESTABLISHED
Now you’re watching only established connections, updated every 2 seconds.
Using lsof for Security and Incident Response
When investigating potential security incidents, lsof is essential. Malware often hides by deleting its executable after running (so it doesn’t appear in the filesystem), but the process still holds the deleted file open.
Find processes running deleted executables:
sudo lsof +L1 | grep -E '\.so|bin|exe'
If you see a process running from a deleted binary, that’s a massive red flag. Legitimate processes don’t do this under normal circumstances.
Check what network connections a suspicious process has:
sudo lsof -i -a -p 1234
The -a flag means “AND”—show network connections (-i) for process 1234. This reveals where malware might be calling home.
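To get a feel for the -a semantics, you can point the same combination at a PID you own — here the current shell, which usually holds no sockets at all, with a graceful skip if lsof isn't installed (PID 1234 above is just a placeholder):

```shell
#!/bin/sh
# -a turns the option list into an intersection: network files AND this PID.
suspect=$$
if command -v lsof >/dev/null 2>&1; then
    conns=$(lsof -a -i -p "$suspect" 2>/dev/null || true)
    if [ -n "$conns" ]; then
        echo "$conns"
    else
        echo "PID $suspect holds no network descriptors"
    fi
else
    echo "lsof not installed; skipping"
fi
```

Without -a, lsof would OR the conditions and show every network file on the system plus every file of the PID — rarely what you want during an investigation.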
Incident responders rely on lsof as part of their standard toolkit, alongside ps, netstat, and forensic tools. It’s non-invasive, fast, and reveals attacker behavior patterns.
Combining lsof with Other Commands
lsof’s real power emerges when you combine it with grep, awk, and pipes. Here are patterns I use constantly:
Find all files opened by every process matching a name (here, Java):
ps aux | grep '[j]ava' | awk '{print $2}' | xargs -I {} sudo lsof -p {}
This finds Java processes (the [j]ava bracket trick keeps grep from matching its own command line), extracts their PIDs, and shows their open files. Useful when investigating memory or file descriptor leaks.
Count how many files a user has open:
sudo lsof -u username | wc -l
If a user is hitting file descriptor limits, this shows you the scope.
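On boxes where lsof isn't installed (or is too slow to run system-wide), a per-process count can be read straight from /proc — a Linux-only sketch, demonstrated on the current shell:

```shell
#!/bin/sh
# Count open descriptors for one process directly from /proc.
# Every normal process has at least stdin, stdout, and stderr (fds 0-2).
count=$(ls /proc/$$/fd | wc -l)
echo "shell $$ has $count file descriptors open"
```

Compare the count against ulimit -n to see how close a process is sitting to its per-process descriptor limit.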
Find all processes listening on privileged ports (1-1024):
sudo lsof -i TCP:1-1024 -s TCP:LISTEN
Great for security audits—only trusted services should bind to privileged ports.
Common lsof Options and Flags Reference
These options cover 90% of real-world use cases:
- -i – Show network connections (optionally filter: -i :80, -i TCP, -i @192.168.1.1)
- -p PID – Show files opened by a specific process ID
- -u username – Show files opened by a specific user
- +D /path – Show all open files in a directory (recursive)
- +L1 – Show deleted files (link count less than 1)
- -r seconds – Repeat output every N seconds
- -a – AND conditions together (e.g., -a -i -u alex = network files for user alex)
- -c command – Show files opened by processes matching a command name
- -t – Terse output (PIDs only, useful for scripting)
Troubleshooting Performance Issues with lsof
Running lsof on a busy production system with thousands of processes can be slow. A few tricks help:
1. Be specific: Instead of lsof, use lsof -i or lsof -p 1234. Narrowing scope dramatically improves speed.
2. Avoid DNS lookups: Add -n to skip hostname resolution and -P to skip port name lookups:
sudo lsof -i -nP
This can cut runtime from 10+ seconds to under 1 second on busy systems.
3. Use -t for scripting: If you only need PIDs, -t outputs just PIDs, no formatting overhead:
sudo lsof -t -i :80
When to Use lsof vs netstat vs ss
New admins often ask: when should I use lsof instead of netstat or ss? Here’s my rule of thumb:
- Use lsof when: You need to connect network activity to specific processes and files, investigate file locks, find deleted files, or perform security audits
- Use ss when: You need fast, detailed socket statistics and advanced filtering on high-traffic systems
- Use netstat when: You’re working on very old systems or need cross-platform compatibility (though netstat is deprecated on modern Linux)
I reach for lsof first in troubleshooting scenarios because it connects all the dots: processes, files, and network together. For quick checks or bulk socket analysis, ss is faster. For comprehensive investigations, lsof is king.
Real-World lsof War Stories
The best way to internalize lsof is through war stories. Here are three that taught me hard lessons:
The Phantom Disk Eater: Application logs filled a 500GB partition. We configured log rotation and deletion, but disk usage stayed at 98%. Turned out the main application process never reopened log files after rotation—it kept writing to the deleted files. lsof +L1 revealed 400GB of deleted logs still open. A process restart fixed it instantly, and we updated the log rotation config to signal the app properly.
The Port Conflict: A new microservice wouldn’t start, claiming port 9000 was in use. netstat showed nothing. ss showed nothing. But lsof -i :9000 revealed a zombie process from a previous failed deployment, still holding the port. The process was in a weird state that netstat missed. Kill -9, problem solved.
The Unmountable Drive: Needed to unmount a backup drive for maintenance. umount failed with “target is busy.” After 20 minutes of frustration, ran lsof +D /mnt/backup and found a single bash shell with pwd set inside the mount. Literally someone’s terminal was cd’d into the directory. Close the terminal, unmount successful. Five seconds with lsof saved me from a reboot.
Final Thoughts: Make lsof Your Default Diagnostic Tool
If there’s one takeaway from this guide, it’s this: when you’re troubleshooting and you don’t know where to start, run lsof. It won’t always solve your problem directly, but it will almost always point you in the right direction.
I’ve trained myself to check lsof early in any investigation. Network issue? lsof -i. Process acting weird? lsof -p PID. Disk space doesn’t add up? lsof +L1. Can’t unmount? lsof +D /mount. It’s become muscle memory, and it’s saved me—and my teams—hundreds of hours of debugging time.
The lsof command is proof that the best Linux tools are often the simplest in concept but deepest in capability. Master it, and you’ll troubleshoot faster, understand your systems better, and sleep more soundly knowing you can diagnose almost anything that goes wrong.
Now get out there and start using lsof. Your future self—the one who’s calm during the next production incident—will thank you.