How to Use Journalctl in Linux: The Log Command That Saved My Career

Robert Johnson

At 2:47 AM on a Tuesday, our main application server went dark. No response. Just… silence. I SSH’d in, heart racing, coffee already brewing (because at that hour, you know it’s going to be a long one). The first command I typed? journalctl -xe. Within thirty seconds, I had the smoking gun: a memory leak that had been building for hours, hidden in a service that had been silently failing.

That’s the thing about journalctl – it’s not just another command. It’s your system’s flight recorder, your debugging partner, and the difference between fixing an issue in minutes versus hours of wild guessing. Since systemd took over as the default init system on most major Linux distributions, journalctl has become the single most important tool in a sysadmin’s troubleshooting arsenal.

In this guide, I’ll show you exactly how to use journalctl to troubleshoot problems, monitor services, and understand what your system is actually doing. No fluff, no theory for theory’s sake – just the commands and techniques I use every single day.

What is Journalctl and Why You’ll Use It Every Day

Journalctl is the command-line utility for querying and displaying logs from systemd’s journal. Think of it as your interface to systemd-journald, the logging service that replaced the traditional syslog daemon on most modern Linux systems.

Here’s why it matters: instead of logs scattered across multiple text files in /var/log, the systemd journal stores everything in a structured, indexed binary format. This means you can filter by service, time range, priority level, or basically any field you can imagine – and it’s fast. Really fast.


The journal captures everything: kernel messages, service output, authentication attempts, boot information, and every error message your system has seen. It’s all there, timestamped and categorized, waiting for you to ask the right question.

Unlike traditional log files that require you to know which file to check (was it /var/log/messages? /var/log/syslog? /var/log/daemon.log?), journalctl gives you one interface to rule them all. When you’re troubleshooting at 3 AM, that simplicity is a lifesaver.

Basic Journalctl Commands to Get Started

Let’s start with the fundamentals. The most basic command is simply:

journalctl

This dumps the entire journal, from oldest to newest, piped through a pager (less by default). You can navigate with the arrow keys, Page Up/Down, and the spacebar. Press q to quit.

But here’s the thing – on a system that’s been running for a while, you might have gigabytes of logs. Scrolling through everything is like trying to find a specific grain of sand on a beach. You need filters.

Viewing Recent Logs

Want to see what just happened? Add the -r flag to reverse the order:

journalctl -r

Now the newest entries are at the top. This is my default when I’m investigating something that just broke.

Or limit it to the last 50 lines:

journalctl -n 50

You can substitute any number. I typically use -n 100 for a quick overview or -n 20 when I know exactly what I’m looking for.

Following Logs in Real-Time

The -f flag works just like tail -f on traditional log files:

journalctl -f

The journal stays open, and new entries appear as they happen. This is invaluable when you’re actively debugging an issue or watching a deployment roll out. I keep a terminal with journalctl -f running in the background constantly on production systems.

You can combine flags too. To follow the last 20 lines and continue watching:

journalctl -n 20 -f

Filtering by Time: The Skill That Saves Hours

Time-based filtering is where journalctl really starts to shine. Instead of scrolling through millions of lines to find logs from when an issue occurred, you can jump straight there.

Relative Time Filters

The --since option accepts human-readable time descriptions:

journalctl --since "10 minutes ago"
journalctl --since "1 hour ago"
journalctl --since "yesterday"
journalctl --since "today"

These are perfect for quick investigations. User reports the site was down at 2 PM? journalctl --since "14:00" gets you there instantly.

The --until option works the same way, but sets an end boundary:

journalctl --since "yesterday" --until "today"

This shows you everything from the previous day, but nothing from today. Great for reviewing yesterday’s issues during your morning coffee.

Absolute Time Ranges

For more precision, use timestamps in the format YYYY-MM-DD HH:MM:SS:

journalctl --since "2025-09-15 14:30:00" --until "2025-09-15 14:45:00"

This gives you a 15-minute window. When you’re investigating an incident and you know exactly when it happened (because monitoring alerted you, or users reported it), this precision is essential.

I’ve used this countless times to correlate application errors with infrastructure events. The web server started returning 502s at 14:32? Let me check what the database was doing between 14:30 and 14:35.
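
When monitoring hands you an incident timestamp, I often script the window math instead of doing it in my head. Here's a minimal Python sketch of that idea (journal_window is a hypothetical helper, not part of journalctl) that builds the --since/--until arguments around an incident time:

```python
from datetime import datetime, timedelta

def journal_window(incident: str, before_min: int = 2, after_min: int = 3):
    """Build --since/--until arguments around an incident timestamp.

    `incident` uses "YYYY-MM-DD HH:MM:SS", the same format journalctl accepts.
    """
    fmt = "%Y-%m-%d %H:%M:%S"
    t = datetime.strptime(incident, fmt)
    since = (t - timedelta(minutes=before_min)).strftime(fmt)
    until = (t + timedelta(minutes=after_min)).strftime(fmt)
    return ["--since", since, "--until", until]

# Example: the 502s started at 14:32, so inspect 14:30 through 14:35
args = journal_window("2025-09-15 14:32:00")
# → ['--since', '2025-09-15 14:30:00', '--until', '2025-09-15 14:35:00']
```

You'd pass that list straight to subprocess alongside ["journalctl", "-u", "mysql.service"].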

Filtering by Service and Unit

Here’s where journalctl becomes your best friend. You can filter logs to show only entries from a specific systemd service:

journalctl -u nginx.service

Replace nginx.service with any service name. Common ones I check daily: sshd.service, mysql.service, postgresql.service, docker.service.

Want to see if your web server is throwing errors? This command is your starting point. Combine it with time filters for laser-focused debugging:

journalctl -u nginx.service --since "1 hour ago"

You can even watch multiple services simultaneously. This is incredibly useful when debugging issues that span multiple components:

journalctl -u nginx.service -u php-fpm.service --since "today"

The logs from both services are merged chronologically. I use this all the time for web stack debugging – you can see the exact sequence of events as requests flow from nginx to PHP.

For a complete guide on checking systemd service status before diving into logs, check out my article on how to check systemd service status in Linux.

Filtering by Priority Level

Not all log messages are created equal. The journal uses standard syslog priority levels, from 0 (emergency) to 7 (debug). You can filter by priority with the -p flag:

journalctl -p err

This shows only error-level messages and above (err, crit, alert, emerg). When a system is misbehaving but you don’t know where to start, this command cuts through the noise.

Here’s the full priority hierarchy:

  • emerg (0) – System is unusable
  • alert (1) – Action must be taken immediately
  • crit (2) – Critical conditions
  • err (3) – Error conditions
  • warning (4) – Warning conditions
  • notice (5) – Normal but significant condition
  • info (6) – Informational messages
  • debug (7) – Debug-level messages

You can use either the name or the number:

journalctl -p 3

I typically filter for -p warning when doing general health checks, and -p err when actively troubleshooting a known issue.

Combine with service filtering to zero in on problematic services:

journalctl -u mysql.service -p err --since "today"

This command has saved me so many hours. Instead of reading through thousands of informational messages, you get straight to the problems.

Advanced Filtering Techniques

Combining Multiple Filters

The real power of journalctl comes from stacking filters. You can combine service, time, and priority filters to create incredibly specific queries:

journalctl -u nginx.service -p err --since "2025-09-15 14:00" --until "2025-09-15 15:00"

This shows nginx errors from a specific one-hour window. Perfect for incident post-mortems.

Want to see kernel messages from the current boot?

journalctl -k -b

The -k flag shows kernel messages (equivalent to dmesg), and -b limits it to the current boot. You can also specify a specific boot: -b -1 shows the previous boot, -b -2 the boot before that.

This is invaluable when a server crashed and you need to figure out why. After the reboot, check the previous boot’s kernel messages:

journalctl -k -b -1

Look for out-of-memory errors, kernel panics, or hardware failures. I’ve diagnosed so many random reboots with this one command.

JSON Output for Monitoring Tools

If you’re integrating journalctl with monitoring tools, alerting systems, or log aggregation platforms, JSON output is your friend:

journalctl -u nginx.service -o json

Each log entry becomes a JSON object, perfect for piping into tools like jq, sending to Elasticsearch, or processing with custom scripts:

journalctl -u nginx.service -o json | jq '.MESSAGE'

For pretty-printed JSON that’s easier to read manually:

journalctl -u nginx.service -o json-pretty

I use JSON output primarily for automation – parsing logs in Python scripts, feeding monitoring dashboards, or exporting to centralized logging systems. The Loggly guide to journalctl has excellent examples of integrating journal logs with external tools.
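
As a sketch of that kind of automation, here's how a Python script might count a service's entries by priority from journalctl's JSON output (count_priorities and service_priority_report are illustrative names I'm inventing for this example, not an established API):

```python
import json
import subprocess
from collections import Counter

def count_priorities(json_lines):
    """Count journal entries per syslog priority (0=emerg .. 7=debug).

    Takes an iterable of `journalctl -o json` output lines, one JSON
    object per line. The journal exports PRIORITY as a string ("3"),
    so it's converted to int; entries without the field are skipped.
    """
    counts = Counter()
    for line in json_lines:
        if not line.strip():
            continue
        entry = json.loads(line)
        prio = entry.get("PRIORITY")
        if prio is not None:
            counts[int(prio)] += 1
    return counts

def service_priority_report(unit: str):
    # Pull today's entries for one unit as JSON (requires journalctl).
    out = subprocess.run(
        ["journalctl", "-u", unit, "--since", "today", "-o", "json"],
        capture_output=True, text=True, check=True,
    )
    return count_priorities(out.stdout.splitlines())
```

A sudden jump in the priority-3 bucket is exactly the kind of signal you'd wire into a dashboard or alert.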

Managing Journal Disk Space

Here’s something that catches a lot of people off guard: the systemd journal can grow enormous. I’ve seen production servers with 10+ GB of journal data. On a small VPS with limited disk space, that’s a real problem.

Checking Journal Size

First, see how much space your journal is using:

journalctl --disk-usage

This shows you the total space consumed by journal files. If it’s eating significant disk space, you have options.

Manual Cleanup

You can manually trim the journal using vacuum commands. To remove archived journal files until the total size falls below a certain threshold:

sudo journalctl --vacuum-size=500M

This keeps only the most recent logs, discarding old entries until you’re under 500 MB.

Alternatively, remove all journal entries older than a specific timeframe:

sudo journalctl --vacuum-time=2weeks

This deletes anything older than two weeks. I typically use this on production systems where I know a couple of weeks of history is all I'll ever need for troubleshooting.

Want to ensure cleanup happens? Combine rotation with vacuum:

sudo journalctl --rotate --vacuum-time=1week

The --rotate flag forces active journal files to close, making them eligible for vacuum. Without it, the current journal file won’t be trimmed even if it contains old entries.

If you’re troubleshooting disk space issues more broadly, my guide on how to check disk usage in Linux covers the full picture.

Configuring Automatic Rotation

Instead of manually cleaning logs, configure automatic rotation limits in /etc/systemd/journald.conf. These settings live under the [Journal] section:

[Journal]
SystemMaxUse=500M
SystemKeepFree=1G
MaxRetentionSec=1month
MaxFileSec=1week

Here’s what each does:

  • SystemMaxUse – Maximum disk space the journal can consume
  • SystemKeepFree – Minimum free space to leave on the filesystem
  • MaxRetentionSec – Maximum time to store entries before deletion
  • MaxFileSec – Maximum time before rotating to a new journal file

After editing the config, restart journald:

sudo systemctl restart systemd-journald

These are sensible defaults for most production systems. Adjust based on your storage capacity and retention requirements. The official journald.conf documentation covers all available options in detail.

On smaller VPS instances, I usually set SystemMaxUse=200M and MaxRetentionSec=2weeks. On larger production systems with plenty of disk, I’ll go with SystemMaxUse=2G and MaxRetentionSec=3months. It all depends on how much history you need for troubleshooting and compliance.
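
For reference, that small-VPS profile might look like this as a journald.conf fragment (the Storage=persistent line is my own addition; it forces on-disk storage regardless of whether /var/log/journal already exists):

```ini
# /etc/systemd/journald.conf — small-VPS profile
[Journal]
Storage=persistent
SystemMaxUse=200M
MaxRetentionSec=2weeks
```

Remember to restart systemd-journald after editing, as shown above.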

Common Mistakes and How to Avoid Them

I’ve made every journalctl mistake in the book. Here are the ones that cost me the most time:

Mistake #1: Not checking if logs persist across reboots

By default on some systems, the journal is stored in /run/log/journal, which is a tmpfs that clears on reboot. If you reboot and all your logs vanish, that’s why.

Solution: Create /var/log/journal to enable persistent storage:

sudo mkdir -p /var/log/journal
sudo systemctl restart systemd-journald

Verify it worked by checking that journal files now exist on disk: ls /var/log/journal. (Note that journalctl --verify is a different check — it validates the internal consistency of journal files, not where they're stored.)

Mistake #2: Forgetting the service name format – It’s nginx.service, not just nginx. Technically journalctl will often find it without the .service suffix, but being explicit prevents confusion.

Mistake #3: Not using -xe when a service fails – When systemd tells you a service failed, your first command should be journalctl -xe. The -x adds helpful explanations, and -e jumps to the end. This combo shows you exactly what went wrong.

Mistake #4: Ignoring priority filters – Scrolling through thousands of info and debug messages trying to find errors is a waste of time. Use -p err and save yourself the headache.

Mistake #5: Letting the journal fill your disk – I’ve seen this kill production systems. Set SystemMaxUse in journald.conf and monitor your disk space. If you frequently run into resource issues, also check out my guides on checking memory usage and monitoring CPU usage.

Mistake #6: Not knowing you can grep the output – Journalctl supports piping to grep for additional filtering:

journalctl -u nginx.service | grep "error"

But honestly, if you’re doing complex filtering, the built-in options are usually better. Use -p for priority filtering instead of grepping for “error”. Recent systemd versions also include a built-in pattern match, journalctl -g "pattern" (available when systemd is built with PCRE2 support).

Journalctl in Real-World Troubleshooting Workflows

Let me walk you through how I actually use journalctl when things break.

Scenario 1: A service won’t start

  1. systemctl status servicename.service – Get basic status
  2. journalctl -u servicename.service -n 50 – See recent logs
  3. journalctl -xe – Get the full error with explanations
  4. Fix the issue (usually a config error or dependency problem)
  5. journalctl -u servicename.service -f – Watch it start successfully

Scenario 2: Investigating a mystery reboot

  1. journalctl -b -1 -p err – Errors from the previous boot
  2. journalctl -k -b -1 – Kernel messages from previous boot
  3. Look for OOM (out of memory) errors, kernel panics, or hardware issues

Scenario 3: Web application returning errors

  1. journalctl -u nginx.service -u php-fpm.service --since "30 minutes ago"
  2. Look for 50x errors, timeouts, or upstream connection failures
  3. Check database logs: journalctl -u mysql.service --since "30 minutes ago" -p warning
  4. Correlate timestamps to find the root cause
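
Step 4 can be scripted too: feed the merged JSON output (for example, journalctl -u nginx.service -u php-fpm.service --since "30 minutes ago" -o json) into a small Python filter that pulls out the earliest error-severity entry. This is a hedged sketch — first_error is a hypothetical helper, not a journalctl feature:

```python
import json

def first_error(json_lines, max_priority=3):
    """Return the earliest entry at or above error severity, or None.

    Journal priorities are inverted (0 is most severe), so "err and
    above" means PRIORITY <= 3. In journalctl JSON output,
    __REALTIME_TIMESTAMP is microseconds since the epoch, as a string.
    """
    errors = []
    for line in json_lines:
        if not line.strip():
            continue
        entry = json.loads(line)
        if "PRIORITY" in entry and int(entry["PRIORITY"]) <= max_priority:
            errors.append(entry)
    if not errors:
        return None
    return min(errors, key=lambda e: int(e["__REALTIME_TIMESTAMP"]))
```

Because the merged stream covers every service involved, whichever entry this returns is a strong candidate for the root cause.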

When you’re deep in troubleshooting and need to manage misbehaving processes, my complete guide to killing processes in Linux is a natural next step.

The Commands I Use Most Often

If I’m being honest, 90% of my journalctl usage boils down to these commands:

journalctl -xe                           # Quick error check
journalctl -u servicename -f             # Watch a service
journalctl -u servicename --since "1 hour ago" -p err   # Recent service errors
journalctl -b -p err                     # Boot errors
journalctl --disk-usage                  # Check journal size
sudo journalctl --vacuum-size=500M       # Clean up space

Master these six, and you’ll handle 95% of common scenarios. The rest is just refinement.

Why Journalctl Beats Traditional Log Files

I started my career when /var/log and text files were the only game in town. I don’t miss it.

With traditional logging, finding an error meant knowing which log file to check, using tail and grep, correlating timestamps across multiple files manually, and praying the logs hadn’t been rotated away yet.

Journalctl gives you structured logging, instant filtering by service and priority, merged chronological views across all log sources, fast indexing (even with gigabytes of logs), and comprehensive metadata for every entry.

The binary format might seem weird at first – you can’t just cat /var/log/messages anymore. But once you get comfortable with journalctl’s filtering capabilities, you’ll never want to go back. The DigitalOcean guide to journalctl has additional filtering examples if you want to dive deeper.

Final Thoughts: Make Journalctl Your First Move

When something breaks on a Linux system, your first instinct should be journalctl. Not random guessing, not Stack Overflow searching, not restarting services blindly. Read the logs first.

The journal knows what happened. It knows when services failed, what errors occurred, which resources were exhausted, and exactly what sequence of events led to the problem. Your job is just to ask it the right questions.

Start simple – journalctl -xe when something fails, journalctl -u servicename -f to watch a service, journalctl -p err --since "today" for a health check. Build from there.

Over time, you’ll develop an intuition for which filters to use in which situations. You’ll learn your system’s patterns – what normal looks like versus what indicates trouble. And you’ll fix issues faster than you ever thought possible.

That 2:47 AM wake-up call I mentioned at the start? Journalctl showed me the memory leak in under a minute. I had the service restarted and monitoring in place by 3:15. Back in bed by 3:30.

That’s the power of knowing your logs. That’s why you need to master journalctl.

Now go read some logs. Your system is trying to tell you something.