Help! My Linux Disk is Full! A Step-by-Step Guide to Finding and Freeing Up Space
It’s a feeling many Linux users, sysadmins, and developers know all too well: you’re working along, deploying a new application, compiling code, or just letting your server run, when suddenly… alerts start firing. Performance degrades. Applications crash. The dreaded “No space left on device” error message appears. Your Linux system’s disk is full, or dangerously close to it.
Panic might set in. What’s eating all that space? Is it safe to delete? Where do I even start looking? Fear not! Running out of disk space is a common issue, and thankfully, Linux provides excellent tools to diagnose and resolve the problem. I recently simulated this exact scenario on one of my test servers to document a clear process, and today, I’ll walk you through the steps I take to hunt down space hogs and safely reclaim precious gigabytes.
Whether you’re running a home server, a development VM, or managing production infrastructure, knowing how to troubleshoot disk space issues is a fundamental Linux skill. Let’s dive in!
Heads up! Need a reliable and affordable VPS for your Linux projects? Check out RackNerd for some fantastic deals on KVM VPS hosting! I’ve used them for various projects and appreciate their performance and pricing.
Step 1: The Big Picture – Checking Filesystem Usage with `df`
Before you start digging deep, you need a high-level overview. Which partition or filesystem is actually full? Sometimes, it might not be your main root (`/`) partition but rather `/home`, `/var`, or a separate data drive. The standard tool for this job is `df` (disk free).
To get a human-readable summary of all mounted filesystems, their size, used space, available space, usage percentage, and mount point, open your terminal and run:
df -h
The `-h` flag stands for “human-readable,” displaying sizes in K (Kilobytes), M (Megabytes), G (Gigabytes), etc., which is much easier to understand than raw block counts.
You’ll see output similar to this (the exact filesystems will vary based on your setup):
Filesystem Size Used Avail Use% Mounted on
udev 3.9G 0 3.9G 0% /dev
tmpfs 798M 1.7M 796M 1% /run
/dev/sda1 98G 85G 8.0G 92% /
tmpfs 3.9G 45M 3.9G 2% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sdb1 492G 350G 117G 75% /data
/dev/sda15 105M 5.2M 100M 5% /boot/efi
tmpfs 798M 60K 798M 1% /run/user/1000
Interpreting the Output:
- Filesystem: The source device for the filesystem (e.g., `/dev/sda1`).
- Size: Total size of the filesystem.
- Used: Amount of space currently used.
- Avail: Amount of space still available. Note: Some space (usually 5%) is often reserved for the root user to prevent the system from becoming completely unusable if a non-root process fills the disk. This means ‘Avail’ + ‘Used’ might not equal ‘Size’ (see the example after this list for how to check that reservation).
- Use%: The percentage of the disk space used (this is the critical column!).
- Mounted on: The directory where this filesystem is accessible (e.g., `/`, `/home`, `/var`, `/data`).
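On ext2/3/4 filesystems, you can inspect (and, if you really need to, shrink) that root reservation with `tune2fs`. A minimal sketch, assuming the root filesystem is `/dev/sda1` as in the output above:
# Show the reserved block count for an ext2/3/4 filesystem
sudo tune2fs -l /dev/sda1 | grep -i "reserved block count"
# Optionally lower the reservation to 1% (think carefully before doing this on /)
sudo tune2fs -m 1 /dev/sda1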
In the example above, the root filesystem (`/dev/sda1`, mounted on `/`) is at 92% capacity. This is likely our problem area! The `/data` partition is also quite full at 75%, but 92% is more critical. We now know we need to focus our investigation within the `/` filesystem.
Ignore `tmpfs` entries for now; these are temporary filesystems residing in RAM, not on your physical disk, and typically don’t cause persistent disk space issues (though excessive use could indicate a runaway process).
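If the tmpfs lines clutter the output, you can tell `df` to hide them:
# Show real filesystems only, excluding tmpfs and devtmpfs entries
df -h -x tmpfs -x devtmpfs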
Step 2: Drilling Down – Finding Large Directories with `du`
Okay, we know the `/` filesystem is nearly full. But *what* inside `/` is taking up 85GB? The next tool in our arsenal is `du` (disk usage). It estimates file and directory space usage.
Unlike `df`, which looks at the filesystem as a whole, `du` recursively scans directories to calculate the space consumed by their contents. This can take time, especially on large directories or slow disks.
A good starting point is to check the sizes of the main directories directly under the root (`/`). We can use the following command:
sudo du -h --max-depth=1 /
Let’s break this down:
- `sudo`: We often need root privileges to read the sizes of all directories, especially system directories like `/var/log` or `/root`.
- `du`: The command itself.
- `-h`: Human-readable sizes (like with `df`).
- `--max-depth=1`: Only report sizes for directories directly under `/` (depth 1). Without it, `du` would list every single subdirectory it visits; with `-s` (summarize) instead, it would only show the total size of `/`. (GNU `du` refuses to combine `-s` with `--max-depth=1`, which is why the command above uses `-h --max-depth=1`.) Alternatively, you could use `sudo du -sh /*`, but `--max-depth=1` is often cleaner as it avoids shell expansion issues with too many files/folders in root.
- `/`: The directory we want to start scanning from.
The output might look something like this (expect errors for directories you don’t have permission to read if you forget sudo):
16K /lost+found
5.2M /boot
1.7M /run
0 /dev
12G /usr
4.5G /opt
60G /var
8.0K /media
798M /tmp
4.0K /srv
1.2G /home
4.0K /mnt
5.8G /root
... (other system directories, usually small) ...
85G /
Analysis: Aha! We can immediately see some large directories:
- `/var`: 60GB! This is often a prime suspect due to logs, caches, and application data.
- `/usr`: 12GB. This contains most system software and libraries. It’s usually large but relatively static unless you install lots of software.
- `/root`: 5.8GB. The root user’s home directory. Worth checking if large files were downloaded or generated here.
- `/opt`: 4.5GB. Often used for manually installed third-party software.
- `/home`: 1.2GB. Contains regular user home directories.
Our biggest target is clearly `/var`. We can repeat the process, drilling down further:
sudo du -sh /var/*
(Letting the shell expand `/var/*` and using `-s` to summarize each entry is another way to list the size of items directly inside `/var`.)
4.0K /var/mail
55G /var/log
2.0G /var/lib
8.0K /var/local
4.0K /var/opt
1.5G /var/cache
...
Bingo! `/var/log` is consuming a massive 55GB. This is highly suspicious. Log files should generally be rotated and compressed, not allowed to grow indefinitely.
You can continue this process, running `du -sh` on suspect directories until you pinpoint the specific large files or subdirectories responsible.
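A quick way to surface the biggest offenders in a directory without eyeballing the list is to pipe `du` through `sort`. For example, to see the ten largest items under `/var/log`:
# List the largest entries in /var/log, biggest first
sudo du -h --max-depth=1 /var/log | sort -rh | head -n 10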
Step 3: The Interactive Investigator – Using `ncdu`
While `du` is powerful, manually drilling down can be tedious. Enter `ncdu` (NCurses Disk Usage), a fantastic interactive command-line utility that makes exploring disk usage much easier.
If you don’t have it installed, you can usually install it via your package manager:
# Debian/Ubuntu
sudo apt update && sudo apt install ncdu
# Fedora/CentOS/RHEL
sudo dnf install ncdu
# Arch Linux
sudo pacman -S ncdu
Once installed, run it on the directory you want to analyze (again, using `sudo` is recommended for scanning system areas):
sudo ncdu /
`ncdu` will scan the specified directory (this might take a while for `/`) and then present you with an interactive, ncurses-based interface:
ncdu 1.18 ~ Use the arrow keys to navigate, press ? for help
--- / --------------------------------------------------------------------------
60.0 GiB [##########] /var
12.0 GiB [# ] /usr
5.8 GiB [ ] /root
4.5 GiB [ ] /opt
1.2 GiB [ ] /home
798.0 MiB [ ] /tmp
...
Total usage: 85.0 GiB Apparent size: 83.5 GiB Items: 1,234,567
Key features of `ncdu`:
- Sorted List: Directories and files are listed sorted by size (largest first), making it easy to spot offenders.
- Navigation: Use the arrow keys (up/down) to move through the list. Press Enter or Right Arrow to navigate *into* a selected directory. Press Left Arrow to go back up.
- Information: Shows file/directory sizes, percentage of parent directory usage, and item counts.
- Deletion: You can select a file or directory and press `d` to prompt for deletion (use with extreme caution!). Press `r` to refresh (recalculate) the current directory.
- Help: Press `?` for a help screen with keybindings.
Using `ncdu`, you can quickly navigate into `/var`, then into `/var/log`, and see exactly which log files or subdirectories within are the largest. It’s much faster than repeatedly running `du`.
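On a busy production box you may not want to keep an interactive session open while the scan runs. `ncdu` can export a scan to a file and browse it later; a minimal sketch, with `/tmp/rootscan.ncdu` as an arbitrary output path:
# Scan / without crossing into other mounted filesystems, saving the result to a file
sudo ncdu -x -o /tmp/rootscan.ncdu /
# Browse the saved scan later (even on another machine) without re-scanning
ncdu -f /tmp/rootscan.ncdu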
Step 4: Identifying Common Culprits and Safe Cleanup
Now that we have the tools (`df`, `du`, `ncdu`) to find *where* the space is being used, let’s discuss *what* commonly fills up disks and how to safely clean it.
A. Log Files (`/var/log`)
As we saw in our simulated example, log files are frequent offenders. Applications and system services constantly write logs. If `logrotate` (the standard utility for managing log files) isn’t configured correctly or if a service is excessively verbose, log directories can swell.
- Investigation: Use `ncdu /var/log` or `sudo du -sh /var/log/*` to find the largest files or subdirectories (e.g., `/var/log/syslog`, `/var/log/nginx`, `/var/log/journal`).
- Safe Cleanup:
  - Check `logrotate` configuration: Look in `/etc/logrotate.conf` and files within `/etc/logrotate.d/`. Ensure logs are being rotated (split into older files), compressed (often `.gz`), and eventually deleted (`rotate N` directive). You might need to adjust settings for problematic logs (e.g., rotate more frequently, keep fewer old logs). You can force logrotate to run with `sudo logrotate -f /etc/logrotate.conf`.
  - Manual Deletion (Use Carefully!): If you have massive, old, uncompressed log files (e.g., `syslog.1`, `nginx.access.log.5`), you might be able to delete the *older* rotated files. Do not delete the current, active log file (e.g., `syslog`, `nginx.access.log`), as the service might still have it open. Deleting older, rotated files (those ending in `.1`, `.2.gz`, etc.) is generally safer. For example: `sudo rm /var/log/some_huge_log.log.5.gz`.
  - Truncating Active Logs (Advanced): If the *current* log file is enormous and you can’t restart the service immediately, you can truncate it without deleting it: `sudo truncate -s 0 /var/log/huge_active.log`. This empties the file while keeping the file handle intact for the running service. The service might still need a restart later to behave perfectly.
  - Journald Logs: If `/var/log/journal` is large, systemd’s journal is the culprit. You can clean it (see the example after this list):
    - Clean logs older than a certain time: `sudo journalctl --vacuum-time=2weeks`
    - Limit logs to a certain size: `sudo journalctl --vacuum-size=500M`
    - Configure persistent limits in `/etc/systemd/journald.conf` (e.g., `SystemMaxUse=500M`).
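To see how much space the journal is actually using before you vacuum it, `journalctl --disk-usage` reports the total. A minimal sketch of the check-then-trim sequence (the 500M limit is just an example value):
# Show current journal disk usage
sudo journalctl --disk-usage
# Trim the journal down to roughly 500 MB of the most recent entries
sudo journalctl --vacuum-size=500M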
B. Package Manager Caches (`/var/cache/apt`, `/var/cache/dnf`, etc.)
Package managers download package files (`.deb`, `.rpm`) when you install or update software. They often keep these downloaded files in a cache in case you need to reinstall them later. This cache can grow significantly over time.
- Investigation: Check the size of `/var/cache/apt/archives` (Debian/Ubuntu) or `/var/cache/dnf` / `/var/cache/yum` (Fedora/CentOS/RHEL).
- Safe Cleanup: These caches can almost always be safely cleared.
  - Debian/Ubuntu:
    - `sudo apt clean`: Removes all downloaded package files (`.deb`) from the cache (`/var/cache/apt/archives`). This is usually very safe and can free up significant space.
    - `sudo apt autoclean`: Removes older downloaded package files that are no longer installable. Less aggressive than `apt clean`.
    - `sudo apt autoremove`: Removes packages that were installed as dependencies but are no longer needed by any installed package. Also safe and recommended.
  - Fedora/CentOS/RHEL:
    - `sudo dnf clean all` (or `sudo yum clean all`): Removes cached package files, metadata, and other temporary files. Generally safe.
    - `sudo dnf autoremove` (or `sudo yum autoremove`): Removes orphaned dependencies.
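If you want to see how much you stand to gain before cleaning, check the cache size first. A minimal sketch for a Debian/Ubuntu system (the equivalent path on dnf/yum-based distros is under `/var/cache/dnf` or `/var/cache/yum`):
# How big is the apt package cache right now?
sudo du -sh /var/cache/apt/archives
# Clear it, then confirm the space came back
sudo apt clean
df -h /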
C. Temporary Files (`/tmp`, `/var/tmp`)
These directories are meant for temporary storage by applications. Ideally, programs clean up after themselves, and `/tmp` is often cleared on reboot (depending on configuration). However, poorly written scripts or crashed applications might leave large files behind.
- Investigation: Check the size with `sudo du -sh /tmp /var/tmp`.
- Safe Cleanup: Files in `/tmp` are generally safe to delete *after a reboot*, as the system expects `/tmp` to be volatile. Deleting files from `/tmp` while the system is running *can* cause issues if a program is actively using them. Files in `/var/tmp` are expected to persist between reboots, so be more cautious here. It’s often best to delete files older than a few days: `sudo find /tmp -type f -mtime +7 -delete` (deletes files older than 7 days). Exercise more caution in `/var/tmp`.
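Before running that `find ... -delete`, it’s worth doing a dry run so you can see exactly what would be removed; a minimal sketch:
# Dry run: list files in /tmp older than 7 days without deleting anything
sudo find /tmp -type f -mtime +7 -print
# If the list looks safe, repeat with -delete
sudo find /tmp -type f -mtime +7 -delete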
D. User Home Directories (`/home/username`)
Don’t forget about user directories! Downloads, old project files, virtual machine images, large datasets, or even application caches within hidden directories (like `~/.cache`) can accumulate.
- Investigation: Use `ncdu /home/username` or `du -sh /home/username/* /home/username/.[!.]*` (the second glob picks up hidden files/dirs; plain `.*` would also match `..` and pull in the parent directory) to find large items. Pay attention to `Downloads`, `Videos`, `Documents`, and hidden directories like `.cache`, `.local/share`.
- Safe Cleanup: This depends entirely on the user’s needs. Identify large files/directories and decide if they are still needed. Maybe archive old projects to external storage? Clean application caches if they seem excessive (e.g., `rm -rf ~/.cache/some_application`).
E. Containerization (Docker/Podman)
If you use containers, they can consume significant disk space with images, volumes, and build caches.
- Investigation: Docker’s data usually lives in `/var/lib/docker`. Check its size. Use Docker commands:
  - `docker system df`: Shows Docker disk usage summary (images, containers, volumes, build cache).
  - `docker image ls`: List images.
  - `docker volume ls`: List volumes.
  - `docker ps -a`: List all containers (including stopped ones).
- Safe Cleanup: Docker provides a prune command (see the sketch after this list):
  - `docker system prune`: Removes stopped containers, dangling images, and unused networks.
  - `docker system prune -a`: More aggressive; removes all unused images (not just dangling ones) as well as stopped containers.
  - `docker system prune --volumes`: Also removes unused volumes (use with caution and make sure none of them hold data you still need).
  - You can also manually remove specific images (`docker rmi IMAGE_ID`), containers (`docker rm CONTAINER_ID`), or volumes (`docker volume rm VOLUME_NAME`).
Similar commands exist for Podman.
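A typical inspect-then-prune session might look like the sketch below; the `until=168h` filter (only prune objects older than a week) is just an example and can be dropped:
# Summarise what Docker is using on disk
docker system df
# Remove stopped containers, unused networks and dangling images older than a week
docker system prune --filter "until=168h"
# Re-check the numbers afterwards
docker system df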
F. Other Potential Areas
- `/opt`: Check for old versions of manually installed software.
- `/root`: The root user’s home directory. Sometimes large files are downloaded or generated here accidentally.
- Databases (`/var/lib/mysql`, `/var/lib/postgresql/data`, etc.): Database data can grow large. This requires database-specific maintenance (archiving old data, optimizing tables), not just deleting files.
- Backups: Check where your backups are stored. Ensure old backups are being rotated or deleted correctly.
Step 5: Advanced Techniques and Finding Specific Large Files
Sometimes, large files hide in unexpected places. The `find` command is excellent for locating files based on criteria like size.
To find all files larger than, say, 500MB within the `/` filesystem:
sudo find / -xdev -type f -size +500M -exec ls -lh {} \;
- `sudo find /`: Start searching from the root directory.
- `-xdev`: Don’t cross filesystem boundaries. This prevents searching `/proc`, `/sys`, or other mounted drives if you only want to search the `/` filesystem identified by `df`.
- `-type f`: Search only for regular files (not directories).
- `-size +500M`: Find files larger than 500 Megabytes (`+` means larger than; you can use `G` for Gigabytes, `k` for Kilobytes).
- `-exec ls -lh {} \;`: For each file found (`{}`), execute the `ls -lh` command to show its details in a human-readable format. The `\;` terminates the `-exec` command.
This can take a long time but is very effective at uncovering individual large files regardless of their location.
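If you’d rather see the results ranked by size, GNU `find`’s `-printf` combined with `sort` works well:
# List files over 500MB on the root filesystem, largest first (size in bytes, then path)
sudo find / -xdev -type f -size +500M -printf '%s\t%p\n' | sort -rn | head -n 20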
Another useful tool is `lsof` (List Open Files). Sometimes, a process might delete a large file, but if another process still has it open, the space isn’t freed until that process closes the file handle. You can look for large deleted files still held open:
sudo lsof +L1
Or specifically search the filesystem in question:
sudo lsof / | grep deleted
This will show processes holding open file handles to files marked as deleted. Restarting the listed process (if safe) will usually free the space.
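If you can’t restart the offending process right away, one widely used trick is to truncate the deleted file through the process’s file-descriptor entry in `/proc`. The PID (1234) and FD number (7) below are placeholders you would read from the `lsof` output:
# Truncate a deleted-but-still-open file in place to release its space
# (the process keeps a valid, now-empty file handle)
sudo truncate -s 0 /proc/1234/fd/7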
Step 6: Prevention is Better Than Cure
Once you’ve cleaned up the disk space, take steps to prevent the problem from recurring:
- Monitoring: Set up monitoring tools (like Nagios, Zabbix, Prometheus with node_exporter, or even simple cron scripts running `df`) to alert you *before* disk space becomes critically low (e.g., at 80% or 85% usage). A minimal cron-friendly example is sketched after this list.
- Logrotate Tuning: Review and adjust `/etc/logrotate.conf` and files in `/etc/logrotate.d/`. Ensure logs are compressed, rotated frequently enough, and old ones are deleted.
- Scheduled Cleanup: Set up cron jobs to run cleanup commands regularly (e.g., `apt clean`, `dnf clean all`, cleaning old files in `/tmp`).
- User Quotas: If multiple users share a system, consider implementing disk quotas to limit how much space each user can consume.
- Filesystem Choice: Filesystems like Btrfs offer features like snapshots and compression that can help manage space, though they come with their own complexities.
- Disk/Partition Sizing: When setting up a new system, try to anticipate space needs. Using Logical Volume Management (LVM) during installation makes it much easier to resize partitions later if needed.
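As an example of the “simple cron script” approach to monitoring, the sketch below checks usage of `/` and logs a warning via syslog when it crosses a threshold; the 85% threshold and the `disk-alert` tag are arbitrary choices you would adapt:
#!/bin/sh
# Warn via syslog when the root filesystem exceeds a usage threshold
THRESHOLD=85
USAGE=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')
if [ "$USAGE" -ge "$THRESHOLD" ]; then
    logger -t disk-alert "Root filesystem is at ${USAGE}% (threshold ${THRESHOLD}%)"
fi
Drop it into `/etc/cron.hourly/` (or call it from a crontab entry) and hook it up to whatever notification channel you already use.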
Conclusion: Taking Control of Your Disk Space
Running out of disk space on a Linux system can be stressful, but it’s usually a manageable problem. By systematically using tools like `df` for an overview, `du` or the more interactive `ncdu` to pinpoint large directories and files, and understanding common culprits like logs and caches, you can effectively diagnose and resolve the issue.
Remember to always be cautious when deleting files, especially outside your home directory. Double-check what a file is before removing it, and prioritize cleaning up caches, old rotated logs, and unnecessary files you recognize. When in doubt, investigate further or back up before deleting.
By combining these troubleshooting techniques with preventative measures like monitoring and proper log rotation, you can keep your Linux system running smoothly with plenty of free space for your important data and applications.
Looking for a solid platform to practice your Linux skills or host your next project? Give RackNerd a try! Their affordable KVM VPS plans are great for experimenting and hosting applications without breaking the bank.