How to Check Memory Usage in Linux: The Complete Guide

Robert Johnson

You know that moment when your Linux box starts crawling, applications hang, and you’re frantically trying to figure out what’s eating all your RAM? I’ve been there more times than I’d like to admit. Memory issues are one of those problems that can turn a quiet Tuesday afternoon into an all-hands-on-deck situation.

Here’s the thing about Linux memory management – it’s actually pretty sophisticated, but the way it reports memory usage can be downright confusing if you don’t know what you’re looking at. I’ve watched junior admins panic over “high” memory usage that was actually just normal caching behavior. So let’s clear this up once and for all.

In this guide, I’ll show you exactly how to check memory usage in Linux using the tools I rely on every day. You’ll learn what those numbers actually mean, when to worry (and when not to), and how to troubleshoot real memory problems before they take down your system.

Understanding Linux Memory Management First

Before we dive into commands, you need to understand one critical concept: Linux will use almost all available RAM, and that’s completely normal. This throws people off constantly.

Linux uses free memory for disk caching and buffers to speed up file operations. When applications need that memory, the kernel instantly releases it. So seeing 90% memory usage doesn’t mean you’re about to run out – it usually means Linux is doing its job efficiently.

The real question isn’t “how much memory is used” but “how much memory is available for applications when they need it.” Keep that distinction in mind as we go through these tools.

The Free Command: Your First Line of Defense

The free command is where I always start. It’s fast, it’s simple, and it’s installed everywhere. Here’s the basic usage:

free -h

The -h flag gives you human-readable output (MiB and GiB instead of raw kibibytes). You’ll see something like this:

              total        used        free      shared  buff/cache   available
Mem:           15Gi       4.2Gi       8.1Gi       124Mi       3.2Gi        11Gi
Swap:         2.0Gi          0B       2.0Gi

Now, here’s what actually matters in that output:

The Available Column Is What Counts

The “available” column is the single most important number. This shows how much memory is genuinely available to start new applications without swapping. On modern Linux systems (RHEL 7+, Ubuntu 16.04+, anything with kernel 3.14+), ignore the “free” column and focus on “available.”

According to Red Hat’s analysis of the free command, the available metric estimates how much memory can be reclaimed from caches when needed – which is exactly what you want to know.
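
If you just want that one number, for a quick check or to feed into a script, you can pull the available column straight out of free. A minimal one-liner, assuming the default procps output where "available" is the seventh field of the Mem line:

free -h | awk '/^Mem:/ {print "Available:", $7}'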

Understanding Buff/Cache

The buff/cache column shows memory the kernel is using to cache file data and speed up disk operations. This memory gets freed automatically when applications need it. Don’t panic when you see high numbers here – that’s Linux being smart about resource utilization.
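
If you want to prove this to yourself on a test box, ask the kernel to drop its clean caches and watch buff/cache shrink while available barely moves. Treat this strictly as a demonstration - it throws away useful cache and temporarily slows disk access, so it's not something to run on a busy production server:

free -h                                           # note the buff/cache figure
sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'    # drop clean page, dentry, and inode caches
free -h                                           # buff/cache drops, "available" stays roughly the same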

When to Actually Worry

You should be concerned when:

  • Available memory drops below 10% of total memory
  • Swap usage starts climbing or fluctuating significantly
  • Applications become noticeably slower despite low CPU usage
  • You see OOM killer messages in system logs

If you’re seeing multiple gigabytes available, you’re fine. If you’re down to a few hundred MB on a system with 16GB RAM, it’s time to investigate.

Using Top for Real-Time Memory Monitoring

When I need to watch memory usage actively and see which processes are the culprits, I reach for top. It’s been around forever and it’s installed on every Linux system by default.

top

The top few lines show overall system stats, including memory. But here’s the trick most people don’t know: press Shift+M while top is running to sort processes by memory usage. Suddenly, your memory hogs rise to the top.

Reading Memory Columns in Top

In the process list, you’ll see columns like RES, SHR, and VIRT. Here’s what matters:

  • RES (Resident Memory): Physical RAM the process is actually using right now
  • SHR (Shared Memory): Memory shared with other processes (like libraries)
  • VIRT (Virtual Memory): Total virtual memory allocated (includes swap and shared libs)

Focus on RES – that’s real memory consumption. I’ve seen processes with massive VIRT values that barely touch physical RAM. VIRT can be misleading.
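
If you want that memory-sorted view non-interactively, say to capture a snapshot into a log file, recent procps-ng builds of top can sort from the command line. A quick sketch (older top versions may not support the -o flag):

top -b -n 1 -o %MEM | head -n 20    # one batch-mode snapshot, biggest memory consumers first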

Need more details on managing processes? Check out my guide on how to kill a process in Linux for troubleshooting runaway applications.

Htop: Top’s Cooler, More Visual Cousin

If I’m going to be monitoring memory for more than a quick check, I use htop instead of top. It’s not installed by default on most systems, but it’s worth adding:

sudo apt install htop    # Debian/Ubuntu
sudo yum install htop    # RHEL/CentOS (you may need the EPEL repository enabled first)

Then just run:

htop

What makes htop better? Color-coded output, visual CPU and memory bars, mouse support, and a much more intuitive interface. You can click on column headers to sort, use F6 to change sort order, and F9 to kill processes directly from the interface.
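
You can also launch htop already sorted by memory so you don't have to touch the function keys at all, assuming a reasonably recent build that supports the --sort-key option:

htop --sort-key=PERCENT_MEM    # start with processes ordered by memory usage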

The memory bars at the top show usage visually – green for used memory, blue for buffers, and yellow/orange for cache. According to Vultr’s comparison, htop calculates “free” memory differently than top by including cache and buffers, which gives you a more accurate picture of truly available memory.

Htop vs Top: Which Should You Use?

I keep both in my toolkit. Use top when:

  • You’re on a minimal system where htop isn’t installed
  • You need a quick check and don’t want to install anything
  • You’re SSH’d into a server with limited packages

Use htop when:

  • You’re actively troubleshooting memory issues
  • You want to monitor trends over time
  • You appreciate not having to remember arcane keyboard shortcuts

Checking Memory with /proc/meminfo

When I need the raw details – and I mean all the details – I go straight to /proc/meminfo. This virtual file contains every memory statistic the kernel tracks:

cat /proc/meminfo

You’ll get dozens of lines showing everything from total memory to slab allocations. It’s overwhelming at first, but incredibly useful for deep diagnostics.

I usually pipe it through grep to find specific stats:

grep -E 'MemTotal|MemFree|MemAvailable|Buffers|Cached' /proc/meminfo

This shows the key memory metrics without all the noise. The values are reported in kB (really kibibytes), so divide by 1,024 for MiB or 1,048,576 for GiB.
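
Or let awk do the conversion for you. A small sketch that prints the available figure in GiB (MemAvailable needs kernel 3.14 or newer):

awk '/^MemAvailable/ {printf "%.2f GiB available\n", $2/1048576}' /proc/meminfo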

Vmstat for Memory Statistics Over Time

The vmstat command shows virtual memory statistics, and it’s particularly useful for watching trends:

vmstat 2 10

This runs vmstat every 2 seconds for 10 iterations. The columns that matter for memory are:

  • swpd: Virtual memory used (swap)
  • free: Idle memory
  • buff: Memory used as buffers
  • cache: Memory used as cache
  • si/so: Swap in/out (memory swapped from/to disk)

If you see sustained high numbers in the si and so columns, your system is thrashing – swapping memory in and out constantly. That’s a red flag that you either need more RAM or need to reduce memory usage.
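
A rough way to watch for thrashing is to let awk flag any sample with meaningful swap traffic. In the default vmstat layout si is field 7 and so is field 8, and the threshold of 100 here is just an arbitrary starting point - tune it to your workload (if the output seems to lag, prefix vmstat with stdbuf -oL):

# NR > 3 skips the two header lines plus the since-boot summary row
vmstat 2 | awk 'NR > 3 && ($7 + $8) > 100 {print "swap activity: si=" $7, "so=" $8}'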

For comprehensive system monitoring beyond just memory, take a look at my guides on checking CPU usage and checking disk usage in Linux.

Understanding the OOM Killer

Here’s a scenario I’ve dealt with too many times: you’re checking logs and you see processes mysteriously dying. No crash dumps, no errors – just gone. Welcome to the OOM (Out of Memory) Killer.

When Linux truly runs out of memory and can’t free up cache or swap space, the kernel activates the OOM killer to sacrifice processes and free up memory. It’s a last resort mechanism to keep the system alive.

Detecting OOM Killer Activity

Check your system logs for OOM events:

grep -i "out of memory\|oom" /var/log/syslog
# Or on RHEL/CentOS:
grep -i "out of memory\|oom" /var/log/messages

You can also check dmesg:

dmesg | grep -i "oom\|killed process"
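
On systemd-based distributions you can also pull kernel messages from the journal, which helps when the event has already rotated out of the dmesg ring buffer:

journalctl -k | grep -i -E "out of memory|oom-kill"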

If you find OOM killer entries, that’s your smoking gun – your system genuinely ran out of memory. Oracle’s guide to the OOM killer explains how the kernel scores processes and decides what to kill based on memory usage and importance.

Preventing OOM Situations

Once you’ve identified memory pressure leading to OOM kills, you have a few options:

  • Add more RAM if your workload legitimately needs it
  • Optimize memory-hungry applications (check their config for memory limits)
  • Add swap space as a temporary buffer (though this will slow things down)
  • Implement memory limits using cgroups to prevent runaway processes (see the sketch after this list)
  • Scale horizontally by distributing load across multiple servers
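
For the cgroups option, you don't need to hand-edit cgroup files on a systemd distribution - systemd-run can wrap a command in a transient scope with a hard memory ceiling. A minimal sketch (the application name and the 512M limit are placeholders, and MemoryMax applies to cgroup v2 systems; older cgroup v1 setups use MemoryLimit instead):

sudo systemd-run --scope -p MemoryMax=512M ./memory-hungry-app    # placeholder command; OOM-killed within its own cgroup if it exceeds 512M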

Checking Memory for Specific Processes

Sometimes you don’t care about system-wide memory – you need to know exactly what one specific process is doing. Here’s my go-to command:

ps aux | grep process_name

The RSS column (in kibibytes) shows the process’s resident memory - just ignore the line where grep matches itself. For a cleaner view of only the memory columns:

ps -eo pid,comm,%mem,rss | grep process_name

Or if you know the process ID:

grep -E 'VmRSS|VmSize' /proc/PID/status

VmRSS is physical memory, VmSize is virtual memory. These are the same concepts as RES and VIRT from top.
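
One caveat: if the process shares a lot of memory with siblings (think forked web server workers), RSS over-counts because shared pages get charged to every process. On kernels 4.14 and newer, /proc/PID/smaps_rollup exposes Pss, which splits shared pages proportionally and is usually the fairer number:

grep -E '^(Rss|Pss):' /proc/PID/smaps_rollup    # Pss divides shared pages among the processes using them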

Pro Tip: If you’re debugging application performance, check memory usage alongside CPU and network stats. Memory issues often manifest as slow response times rather than obvious crashes. Cross-reference with network interface statistics and open ports to get the full picture.

Memory Usage Scripts and Automation

After you’ve manually checked memory enough times, you’ll want to automate monitoring. I keep a simple script that alerts me when available memory drops below a threshold:

#!/bin/bash
# Alert when available memory falls below a percentage threshold.
THRESHOLD=10  # Alert if available memory < 10%

# On the "Mem:" line of free, column 2 is total and column 7 is available (both in KiB).
AVAILABLE=$(free | awk '/^Mem:/ {print $7}')
TOTAL=$(free | awk '/^Mem:/ {print $2}')
PERCENT=$((100 * AVAILABLE / TOTAL))

if [ "$PERCENT" -lt "$THRESHOLD" ]; then
    echo "WARNING: Only ${PERCENT}% memory available"
    # Add your alert mechanism here (email, Slack, etc.)
fi

Run this via cron every few minutes. Speaking of which, if you're not familiar with automating tasks in Linux, my guide on scheduling cron jobs will get you up to speed.
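
As a concrete example, a crontab entry like this runs the check every five minutes (the script path is just a placeholder - point it at wherever you saved yours):

# hypothetical path - adjust to your script's location
*/5 * * * * /usr/local/bin/check-available-mem.sh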

Common Memory Usage Mistakes to Avoid

After a decade of managing Linux systems, I've seen these mistakes repeatedly:

Panicking over high "used" memory: Remember, Linux uses free RAM for caching. High memory usage is normal and expected. Check "available" instead.

Ignoring swap usage trends: A little swap usage isn't bad, but if it's constantly growing or if you see high swap in/out rates, you have a problem.

Not investigating root causes: Finding a memory hog process is step one. Step two is figuring out why it's using so much memory - memory leak? Misconfiguration? Legitimate workload growth?

Forgetting about memory limits: If you're running containers or applications with their own memory constraints, check those limits too. The system might have plenty of RAM, but the application is hitting its configured ceiling.
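
A quick way to see the ceiling a containerized process is actually running under is to read its cgroup limit directly from inside the container. The exact path depends on whether the host uses cgroup v1 or v2, so treat this as a sketch:

cat /sys/fs/cgroup/memory.max                      # cgroup v2: "max" means no limit
cat /sys/fs/cgroup/memory/memory.limit_in_bytes    # cgroup v1 equivalent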

Wrapping Up

Checking memory usage in Linux isn't complicated once you know which tools to use and which numbers actually matter. Start with free -h for a quick overview, use htop or top to identify memory-hungry processes, and dig into /proc/meminfo when you need the full story.

The key insight is this: don't let the "used" memory number scare you. Linux is designed to use available RAM efficiently. Focus on "available" memory, watch for swap activity, and keep an eye on your logs for OOM killer events. Those are your real indicators of memory problems.

And remember - memory issues rarely exist in isolation. When you're troubleshooting performance problems, check memory alongside CPU, disk I/O, and network activity. The real bottleneck might surprise you.

Now you've got the tools and knowledge to confidently answer the question "how's our memory looking?" the next time someone asks. And more importantly, you'll know when to worry and when to just let Linux do its thing.