I learned about nohup the hard way. It was 2 AM, and I’d just kicked off a massive database migration over SSH. Twenty minutes in, my connection dropped. My heart sank as I reconnected—only to find my migration dead halfway through. That’s when my mentor introduced me to the Linux nohup command.
If you’ve ever run a long process over SSH only to have it vanish when your connection dies, you need nohup. It’s one of those commands that seems simple until you realize how much pain it prevents.
What is nohup?
The nohup command stands for “no hang up.” It’s a POSIX command that runs another command immune to the SIGHUP signal. In practical terms? Your process keeps running even when you log out or your SSH session drops.
Most Linux tutorials explain what nohup does, but they skip why it matters. Let me fill that gap with some context you won’t find in the nohup manual page.
Understanding SIGHUP Signal
When you close a terminal session, Linux sends a SIGHUP signal (signal 1) to all processes associated with that terminal. By default, this terminates them. It’s a cleanup mechanism—but it’s terrible for long-running tasks.

Here’s what happens without nohup:
- You SSH into a server
- Start a 3-hour data processing job
- Your laptop goes to sleep, or your network hiccups
- SSH disconnects
- The terminal session closes and sends SIGHUP to your process
- Your process dies, data potentially corrupted
I’ve seen junior admins lose hours of work this way. Some never knew their processes died until they checked back later.
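If you want to see the signal at work yourself, here’s a minimal experiment you can run in any bash shell, using sleep as a stand-in for a long job:
sleep 600 &          # plain background job
kill -HUP $!         # simulate the terminal hanging up
jobs                 # the job is reported as terminated (Hangup)
nohup sleep 600 &    # same job started under nohup
kill -HUP $!         # this time SIGHUP is ignored
jobs                 # still running
The plain job dies the moment SIGHUP arrives; the nohup one shrugs it off.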
Why nohup Matters for System Administrators
For sysadmins, nohup solves specific real-world problems:
- Remote deployments: Run build scripts over SSH without babysitting the connection
- Database operations: Long migrations, exports, and imports need persistence
- Backup processes: Network glitches shouldn’t kill your backup job
- Data processing: ETL jobs that take hours to complete
If you don't redirect output yourself, nohup appends output to a file called nohup.out in your current directory. This means you get logs even after you disconnect. Smart design from the Unix era.
nohup Syntax and Basic Usage
Let’s cut through the theory and look at how you actually use nohup. The syntax is dead simple.
Basic Command Structure
The basic pattern:
nohup command [arguments]
That’s it. But there’s a catch most tutorials miss: this runs in the foreground. Your terminal is still blocked. Not useful if you want to close your session.
Here’s a real example:
nohup python3 data_analysis.py
This protects against SIGHUP, but you’re still stuck watching it run. Not ideal.
Using & to Run in Background
To actually free up your terminal, combine nohup with the & operator:
nohup python3 data_analysis.py &
Now you get two benefits:
- The process runs in the background (thanks to &)
- It’s immune to SIGHUP (thanks to nohup)
When you run this, you’ll see output like:
[1] 12345
nohup: ignoring input and appending output to 'nohup.out'
That 12345 is your process ID (PID). Write it down—you’ll need it to check on your process later or terminate it if something goes wrong. I keep a notes file with PIDs for critical background jobs.
Understanding nohup vs &
This confused me early on, so let me clarify:
- & alone: Runs in background, but dies on logout
- nohup alone: Protected from SIGHUP, but blocks your terminal
- nohup ... &: Best of both—background execution and logout protection
Always use both together unless you have a specific reason not to.
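A quick side-by-side makes the distinction concrete (job.sh here is a stand-in for any long-running script):
./job.sh &           # background only: the shell can still deliver SIGHUP to it on logout
nohup ./job.sh &     # background plus SIGHUP immunity: survives logout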
Practical Examples Every Sysadmin Needs
Theory is fine, but let’s look at examples I actually use in production. These are the patterns I rely on weekly.
Running a Long-Running Script
I run a nightly backup script that takes 2-3 hours. Here’s the pattern:
nohup ./backup_script.sh > backup_$(date +%Y%m%d).log 2>&1 &
Breaking this down:
- nohup ./backup_script.sh: Run the script with SIGHUP immunity
- > backup_$(date +%Y%m%d).log: Redirect stdout to a dated log file
- 2>&1: Redirect stderr to the same file (captures errors)
- &: Run in background
The $(date +%Y%m%d) trick gives me logs like backup_20250903.log. Makes troubleshooting much easier than digging through generic nohup.out files.
Building Applications in Background
Large codebases take forever to compile. When I’m building over SSH, I use:
nohup make -j4 > build.log 2>&1 &
echo $! > build.pid
That second line is key: $! is the PID of the last background process. I save it to build.pid so I can check on it later:
tail -f build.log # Monitor progress
ps -p $(cat build.pid) # Check if still running
This workflow has saved me during dozens of deployments.
Data Processing and ETL Jobs
ETL jobs are perfect nohup candidates. They’re long-running, resource-intensive, and failure-sensitive:
nohup python3 etl_pipeline.py --config prod.yaml > etl_$(date +%Y%m%d_%H%M).log 2>&1 &
I include both date and time in the log filename because I sometimes run multiple ETL jobs in a day. Timestamps prevent log file collisions.
Monitoring and Logging Tasks
Sometimes I need quick-and-dirty monitoring during incidents:
nohup watch -n 5 'df -h && free -h' > system_monitor.log 2>&1 &
This logs disk space and memory every 5 seconds. Not sophisticated, but when you’re debugging a mystery resource leak at 3 AM, it works. For production monitoring, you’d want something more robust like viewing logs with journalctl.
Output Redirection with nohup
Output handling with nohup trips people up constantly. Let’s demystify it.
Default nohup.out Behavior
When you don’t specify output redirection, nohup writes to nohup.out in your current directory:
nohup ./script.sh &
This creates or appends to nohup.out. The problem? If you run multiple nohup commands in the same directory, they all dump into the same file. That’s a mess when you’re trying to debug.
I’ve learned to always explicitly redirect output. It’s cleaner and more professional.
Redirecting to Custom Files
Use standard shell redirection:
nohup ./script.sh > custom_output.log &
This overwrites custom_output.log each time. If you want to preserve previous runs, use append mode:
nohup ./script.sh >> custom_output.log &
The double >> appends instead of overwriting. I use this for periodic tasks where I want a historical record.
Separating stdout and stderr
Sometimes you want errors in a separate file for easier debugging:
nohup ./script.sh > output.log 2> errors.log &
Now standard output goes to output.log and errors go to errors.log. This is useful for noisy scripts that produce tons of normal output but occasional critical errors.
The 2> redirects file descriptor 2 (stderr). File descriptor 1 is stdout, which is what > redirects by default.
Appending to Existing Log Files
For recurring jobs, append mode preserves history:
nohup ./daily_report.sh >> reports.log 2>&1 &
The 2>&1 redirects stderr to wherever stdout is going (the append file). Order matters—always put 2>&1 after your output redirection.
Pro tip: Add timestamps inside your scripts with echo "Started: $(date)". Makes log archaeology much easier.
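In practice that looks something like this (a minimal sketch, not a full script):
#!/bin/bash
echo "Started: $(date)"
# ... long-running work here ...
echo "Finished: $(date)"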
nohup vs screen vs tmux vs disown
This is where new sysadmins get confused. There are several tools for keeping processes alive, each with different trade-offs. Let me break down when I use each one.
When to Use nohup
I reach for nohup when:
- I have a single command to run and forget
- I don’t need to interact with the process after starting it
- I want the simplest solution with no dependencies
- I’m running a one-off task, not a permanent service
Think of nohup as fire-and-forget. You launch something and walk away. Perfect for backups, builds, and data processing.
screen: Multiple Sessions and Reattachment
screen creates virtual terminal sessions you can detach and reattach to. I use screen when:
- I need to interact with a long-running process periodically
- I want to run multiple programs in separate “windows” within one SSH session
- I’m starting an interactive task (like a database console) that I’ll come back to later
Start a screen session:
screen -S backup_session
./backup.sh
# Press Ctrl+A, then D to detach
Reattach later:
screen -r backup_session
screen is heavier than nohup but provides interactivity. I use it for multi-hour maintenance windows where I need to check in periodically.
tmux: Modern Terminal Multiplexing
tmux is screen’s modern cousin. It’s more actively maintained, has better defaults, and allows pane splitting. I prefer tmux over screen these days:
tmux new -s deploy
./deploy_application.sh
# Press Ctrl+B, then D to detach
Reattach with:
tmux attach -t deploy
tmux shines when you need split panes to monitor multiple things simultaneously. During deployments, I’ll have logs in one pane, system metrics in another, and a command prompt in a third.
disown: Shell-Based Alternative
disown removes jobs from the shell’s job table, making them immune to SIGHUP. The catch? The process must already be running. This is useful when you forgot to use nohup:
# Oops, started without nohup
./long_process.sh
# Press Ctrl+Z to suspend
bg # Resume in background
disown # Remove from job control
I use disown as a rescue tool more than a primary strategy. It’s saved me a few times when I realized mid-process that I needed logout protection. For more process management techniques, check out how to kill a process in Linux.
Here’s a comparison table I reference when choosing:
| Tool | Survives Logout | Reattachable | Complexity | Best For |
|---|---|---|---|---|
| nohup | Yes | No | Very Low | Simple fire-and-forget tasks |
| screen | Yes | Yes | Medium | Interactive sessions, multi-window work |
| tmux | Yes | Yes | Medium | Modern session management, pane splitting |
| disown | Yes | No | Low | Rescue for already-running processes |
For a deeper dive into alternatives, Baeldung has a solid article on Linux job control and disown comparison.
Managing nohup Processes
Starting a nohup process is easy. Managing it—finding it, monitoring it, killing it when needed—requires a bit more knowledge.
Finding Your nohup Process PID
If you saved the PID when you started (using echo $!), great. If not, you’ll need to search:
ps aux | grep script_name
This shows all processes matching “script_name”. Look for your command in the output. The second column is the PID.
For more precision, use pgrep:
pgrep -af "python3 data_analysis.py"
The -a shows full command lines, and -f matches against the entire command, not just the process name. This helps when you have multiple Python scripts running.
Monitoring Running Processes
Once you have the PID, check if it’s still running:
ps -p 12345
If it returns nothing, the process has finished or died. Check your log file to see what happened.
To monitor output in real-time:
tail -f nohup.out
Or for custom log files:
tail -f my_process.log
I keep a terminal tab dedicated to tailing logs during critical operations. It’s my early-warning system for problems.
Terminating nohup Processes
If you need to stop a nohup process, use kill:
kill 12345
This sends SIGTERM (signal 15), which allows graceful shutdown. The process can catch this signal and clean up.
If the process won’t die, escalate to SIGKILL:
kill -9 12345
SIGKILL is unblockable. The process dies immediately without cleanup. Use it as a last resort—you risk leaving corrupt files or incomplete transactions.
Checking Process Status and Logs
Combine several commands for full visibility:
# Check if running
ps -p $(cat process.pid) > /dev/null && echo "Running" || echo "Stopped"
# Check resource usage
top -p 12345
# View recent log output
tail -n 50 nohup.out
I’ve scripted these checks into a simple monitoring function I source in my .bashrc. Saves time during troubleshooting.
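A minimal sketch of that kind of helper, assuming the PID was saved to a file as shown earlier (the function name check_job is my own):
check_job() {
    local pid_file="$1" log_file="$2"
    if ps -p "$(cat "$pid_file")" > /dev/null 2>&1; then
        echo "Running (PID $(cat "$pid_file"))"
    else
        echo "Stopped - check the log"
    fi
    tail -n 20 "$log_file"
}
# Usage: check_job build.pid build.log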
Common Pitfalls and Troubleshooting
I’ve hit every nohup gotcha at least once. Let me save you some frustration.
Process Hanging on SSH Disconnect
Sometimes processes still hang even with nohup. Common causes:
- stdin not closed: Add < /dev/null to close stdin explicitly
- Terminal control: The process is trying to read from the terminal
- Output buffering: Heavily buffered output can make a job look hung because nothing reaches the log until the buffer flushes
The bulletproof pattern:
nohup ./script.sh < /dev/null > output.log 2>&1 &
The < /dev/null redirects stdin to nothing, preventing the process from trying to read terminal input.
Output File Permission Issues
If you see nohup: cannot create 'nohup.out': Permission denied, check two things:
- Do you have write permission in the current directory?
- Is nohup.out owned by another user with restrictive permissions?
Fix by specifying a writable location:
nohup ./script.sh > ~/logs/output.log 2>&1 &
I've had this bite me when running commands in shared directories. Always redirect to a path you control. For more on permissions, see managing Linux users and permissions.
Signal Handling Problems
nohup only protects against SIGHUP. Other signals still work:
- SIGTERM (kill): Still terminates the process
- SIGKILL (kill -9): Forcefully kills the process
- SIGINT (Ctrl+C): Only reaches the foreground process group, so a backgrounded nohup job normally won't receive it
If your process dies mysteriously, check /var/log/syslog or dmesg for OOM (out-of-memory) kills. The kernel will terminate processes consuming too much memory regardless of nohup.
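A quick way to check for OOM kills (the exact wording varies by kernel version, so treat the grep patterns as approximate):
dmesg | grep -i "out of memory"
journalctl -k | grep -i "killed process"   # on systemd-based systems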
When nohup Doesn't Work as Expected
Some environments override standard behavior:
- Restricted shells: Some corporate SSH configs disable nohup
- systemd user sessions: systemd may kill all user processes on logout unless configured otherwise
- Container environments: Docker/Kubernetes manage process lifecycles differently
In modern systems, especially those running systemd, the KillUserProcesses setting in /etc/systemd/logind.conf can override nohup. Check with:
grep KillUserProcesses /etc/systemd/logind.conf
If set to yes, even nohup processes die on logout. You'll need systemd services instead.
nohup vs systemd Services: Which Should You Use?
This is a question I get constantly from junior admins. Let me clarify the distinction.
When nohup is Sufficient
Use nohup for:
- One-off tasks: A database export, a code build, a data migration
- Ad-hoc operations: Quick fixes during an incident
- Temporary workflows: Testing something before productionizing it
- Personal tasks: Your own scripts that don't need formal management
nohup is the duct tape of process management. It's quick, it works, and it requires zero setup.
When systemd Services are Better
Use systemd services for:
- Production services: Anything that needs to run 24/7
- Automatic restarts: When failures should trigger automatic recovery
- Boot-time start: Services that should start automatically on system boot
- Resource control: When you need to limit CPU, memory, or I/O
- Dependency management: Services that depend on other services
If your process is important enough that you'd be upset if it died silently, make it a systemd service. For more context on modern service management, check the systemd service documentation.
Creating a Simple systemd Service
Converting a nohup command to a systemd service takes about 5 minutes. Here's a template:
# /etc/systemd/system/myapp.service
[Unit]
Description=My Application
After=network.target
[Service]
Type=simple
User=myuser
WorkingDirectory=/opt/myapp
ExecStart=/opt/myapp/script.sh
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Enable and start it:
sudo systemctl enable myapp
sudo systemctl start myapp
sudo systemctl status myapp
Now your application restarts on failure, starts on boot, and integrates with system logging. Much more robust than nohup for production use.
Integration Considerations
I follow this progression:
- Development: Run directly in terminal
- Testing: Use nohup for longer tests
- Staging: Create systemd service
- Production: Deploy systemd service with monitoring
nohup is perfect for the middle stages. It bridges the gap between interactive testing and production deployment. You can read more approaches in DigitalOcean's nohup tutorial.
Real-World Sysadmin Scenarios
Let me share some scenarios I've actually encountered where nohup was the right tool.
Database Backup Over SSH
Monthly, I export our production database for compliance archival. The export takes 90 minutes:
nohup sh -c 'pg_dump -h db.example.com production_db | gzip > backup_$(date +%Y%m%d).sql.gz' > backup_errors.log 2>&1 &
One subtlety: nohup only covers the first command in a pipeline, so I wrap the whole pipeline in sh -c to keep the gzip stage alive too. Without nohup, if my VPN disconnects mid-export, I'd have to start over. With nohup, the export completes, and I find the file waiting when I reconnect.
Similar logic applies to large file transfers—though these days I prefer how to use rsync for file transfers with its resume capability.
Deploying Code During Maintenance
During scheduled maintenance windows, I deploy application updates. The build and deployment can take 30+ minutes:
nohup ./deploy.sh production v2.4.1 > deploy_$(date +%Y%m%d_%H%M).log 2>&1 &
This lets me start the deployment and monitor progress in another terminal via tail -f. If anything goes wrong with my connection, the deployment continues.
For truly critical deployments, though, I use tmux. The ability to reattach and see exactly where things are is worth the extra complexity.
Running Long-Running Analytics
Our data science team occasionally runs multi-hour analytics jobs. They SSH in, start the job with nohup, and disconnect:
nohup python3 customer_segmentation.py --input customers.csv --output segments.json > analysis.log 2>&1 &
They check results the next morning. This is exactly the use case nohup was designed for. For regularly scheduled analytics, I've taught them to use scheduling cron jobs for automated tasks instead.
SSH Session Persistence
I often work from coffee shops with unreliable WiFi. When debugging production issues, I can't afford to have diagnostic commands die when the connection wobbles:
nohup tcpdump -i eth0 -w network_capture_$(date +%H%M).pcap 2> tcpdump_errors.log &
This captures network traffic even if my connection drops. I can analyze the capture file later without gaps.
You can find more practical command patterns in various nohup command and shell script examples online.
Tips and Best Practices
After years of using nohup, I've settled on these practices that prevent most problems.
Always Redirect Output
Never rely on default nohup.out. Always specify explicit output files:
nohup ./script.sh > logs/$(date +%Y%m%d_%H%M%S)_script.log 2>&1 &
The timestamp prevents collisions and makes troubleshooting easier. Six months from now, you'll thank yourself when you need to track down what went wrong.
Use Meaningful Log File Names
Include enough context in filenames:
- backup_production_20250903.log (good)
- output.log (bad)
- backup.log (slightly better but vague)
I include the task name, environment, and timestamp. This makes log management much easier, especially when combined with log rotation strategies.
Implement Proper Error Handling
Your scripts should exit with meaningful codes:
#!/bin/bash
# Note: avoid set -e here—a non-zero return from wait would abort
# the script before we can inspect the exit code.
nohup ./script.sh > script.log 2>&1 &
PID=$!

# Wait for the background job and capture its exit status
wait $PID
EXIT_CODE=$?

if [ $EXIT_CODE -ne 0 ]; then
    echo "Process $PID failed with code $EXIT_CODE"
    # Send alert, write to monitoring system, etc.
fi
This pattern lets you catch failures even from background processes. I use variations of this in all my automation.
Document Your Background Processes
Keep a simple inventory of long-running nohup processes:
# ~/nohup_processes.txt
2025-09-03 14:30 - PID 12345 - Database backup - backup.log
2025-09-03 15:00 - PID 12389 - Code deployment - deploy.log
When something goes wrong at 2 AM, this documentation is invaluable. I script updates to this file as part of my nohup wrapper functions.
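Here is roughly what one of those wrapper functions looks like; the function name nohup_run and the inventory path are my own choices, so adapt them to taste:
nohup_run() {
    local log="$1"; shift
    nohup "$@" > "$log" 2>&1 &
    local pid=$!
    echo "$(date '+%Y-%m-%d %H:%M') - PID $pid - $* - $log" >> ~/nohup_processes.txt
    echo "Started PID $pid, logging to $log"
}
# Usage: nohup_run backup_$(date +%Y%m%d).log ./backup_script.sh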
Better yet, integrate with your monitoring system so forgotten nohup processes don't become zombie processes consuming resources six months later.
Wrapping Up: nohup in Your Toolkit
The Linux nohup command isn't flashy, and it won't impress anyone at a tech conference. But it's one of those tools that quietly prevents disasters. Every sysadmin should know how to use it properly.
Remember the core pattern:
nohup command args > output.log 2>&1 &
This single line has saved me countless hours of re-running failed jobs. It's simple, reliable, and requires no setup beyond what's already on every Linux system.
That said, don't use nohup for everything. For production services, use systemd. For interactive work, use tmux or screen. For scheduled tasks, use cron. Each tool has its place.
nohup excels at one thing: running a command in the background that survives logout. Master that use case, and you'll be a more effective sysadmin.
Next time you're about to kick off a long-running task over SSH, remember to wrap it in nohup. Your future self—the one who doesn't have to restart a 3-hour job because of a network blip—will thank you.
Related reading: If you found this useful, you might also benefit from learning about other process and system management tools. Check out guides on background job management, terminal multiplexers, and systemd service configuration to round out your Linux administration skills.






