Linux Command Line Essentials: The Developer's Survival Guide

A comprehensive, practical reference to the Linux commands that matter most in your daily development workflow — from navigating filesystems and processing text to managing processes, networking, and writing powerful one-liners.


Table of Contents

  1. Introduction — Why Every Developer Needs the Terminal
  2. Navigating the File System
  3. File Operations
  4. Text Processing Power Tools
  5. Pipes and Redirection
  6. Finding Things
  7. Process Management
  8. Permissions and Ownership
  9. Networking Commands
  10. Disk and System Info
  11. Shell Productivity
  12. Essential One-Liners
  13. Conclusion

Introduction — Why Every Developer Needs the Terminal

Graphical user interfaces are convenient. They lower the barrier to entry, they make discovery intuitive, and they work well for tasks you perform once in a while. But if you are a professional developer who spends hours every day interacting with servers, deploying applications, debugging production incidents, parsing log files, and automating repetitive tasks, the command line is not just an alternative interface — it is the single most powerful tool in your arsenal. The Linux terminal gives you direct, unmediated access to the operating system. There is no abstraction layer hiding what is really happening, no graphical wrapper silently making decisions on your behalf. When you type a command and press Enter, you know exactly what will happen, and you can compose those commands together in ways that GUI applications could never match.

The reason the command line endures, despite decades of advancement in graphical interfaces, is composability. Every command-line tool is designed to do one thing well and to communicate with other tools through simple text streams. This means you can chain together five, ten, or twenty small tools into a pipeline that accomplishes something no single tool was designed to do. You can search through a million lines of log data, filter for specific error patterns, extract timestamps, sort them, remove duplicates, and count occurrences — all in a single line that executes in under a second. Try doing that by clicking through a GUI log viewer.

This guide is written from the perspective of a working developer who has spent years on the command line and wants to share the commands, patterns, and habits that actually matter. We are not going to waste time on obscure options that nobody uses. Instead, every command, flag, and example in this guide is something you will reach for regularly in real development work. Whether you are SSH-ing into a production server to diagnose a memory leak, writing a deployment script, processing CSV data for an analytics report, or simply navigating your project directory, the commands covered here form the core vocabulary you need.

Who is this guide for? This guide targets developers who work with Linux or macOS and want to become significantly more productive at the command line. Whether you are a junior developer still getting comfortable with the terminal or a mid-level engineer looking to fill gaps in your knowledge, every section provides practical, immediately applicable skills. The examples use Bash syntax, but almost everything applies equally to Zsh and other POSIX-compatible shells.

Navigating the File System

Everything in Linux is a file, and understanding how to move through the filesystem efficiently is the most fundamental terminal skill. The Linux filesystem follows a hierarchical tree structure rooted at /, the root directory. Unlike Windows, which uses drive letters like C:\ and D:\, Linux mounts everything under a single unified tree. Your home directory lives at /home/username (or /Users/username on macOS), system binaries are in /usr/bin, configuration files are in /etc, and temporary files go to /tmp. Knowing this hierarchy by heart means you always know where to look for something, even on a server you have never accessed before.

The four commands you will use most for navigation are pwd, ls, cd, and tree. These are so fundamental that you will type them hundreds of times per day without even thinking about it. The key to being fast is not memorizing every flag, but understanding the most useful flags deeply enough that they become muscle memory.

# Print the current working directory
pwd

# List files in the current directory
ls

# List with details: permissions, owner, size, modification date
ls -la

# List sorted by modification time (newest first)
ls -lt

# List sorted by file size (largest first)
ls -lS

# List only directories
ls -d */

# List with human-readable file sizes (KB, MB, GB)
ls -lh

# Change to your home directory
cd ~
# Or simply:
cd

# Change to the previous directory (toggle between two directories)
cd -

# Go up one directory
cd ..

# Go up two directories
cd ../..

# Navigate using an absolute path
cd /var/log/nginx

# Navigate using a relative path
cd src/components

The tree command is enormously useful for understanding project structure at a glance. It recursively displays the directory tree in a visual format that is far more informative than repeatedly running ls in different directories. On many distributions tree is not installed by default, but it is available in every standard package manager and is well worth installing.

# Display the directory tree (current directory)
tree

# Limit depth to 2 levels
tree -L 2

# Show only directories (no files)
tree -d

# Show hidden files too
tree -a

# Exclude node_modules and .git directories
tree -I 'node_modules|.git'

# Show file sizes
tree -sh

# Practical: see your project structure excluding noise
tree -L 3 -I 'node_modules|.git|dist|build|__pycache__'
Tip: Use pushd and popd instead of cd when you need to jump to a directory temporarily and then return. pushd /var/log changes to that directory and pushes the current directory onto a stack. When you are done, popd returns you to where you were. This is invaluable in shell scripts that need to change directories and reliably return.
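
A minimal sketch of that workflow (the directories are just examples):

```shell
# Jump to a directory, remembering where you came from
pushd /var/log

# ...inspect logs here...

# Return to exactly where you were
popd

# In scripts, silence the directory-stack output:
pushd /tmp > /dev/null
popd > /dev/null
```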

File Operations

After navigating to the right place, you need to create, read, copy, move, and delete files and directories. These file operations are the bread and butter of command-line work. What makes them powerful compared to their GUI equivalents is that they can operate on hundreds or thousands of files at once using glob patterns, they can be scripted and automated, and they provide fine-grained control over behavior through flags. A single cp command with the right flags can recursively copy an entire directory tree while preserving permissions, timestamps, and symlinks — something that would require careful clicking in a file manager.

# Display the entire contents of a file
cat config.json

# Display file contents with line numbers
cat -n server.js

# Concatenate multiple files together
cat header.html body.html footer.html > page.html

# View a large file with pagination (scroll with Space, quit with q)
less /var/log/syslog

# Show the first 20 lines of a file
head -n 20 README.md

# Show the last 50 lines of a file
tail -n 50 application.log

# Follow a log file in real-time (essential for debugging)
tail -f /var/log/nginx/access.log

# Follow and retry if the file does not exist yet
tail -F /var/log/app/debug.log

Creating, copying, moving, and removing files and directories all follow consistent patterns. The -r (recursive) flag is critical for operations on directories, and the -i (interactive) flag prompts for confirmation before overwriting, which is a safety net worth using until you are confident in your commands.

# Create an empty file (or update its timestamp if it exists)
touch new-file.txt

# Create multiple files at once
touch index.html style.css script.js

# Create a directory
mkdir my-project

# Create nested directories in one command
mkdir -p src/components/ui/buttons

# Copy a file
cp original.txt backup.txt

# Copy a directory recursively
cp -r src/ src-backup/

# Copy preserving all attributes (permissions, timestamps, symlinks)
cp -a /var/www/html/ /backup/html/

# Move (rename) a file
mv old-name.js new-name.js

# Move a file to a different directory
mv config.json /etc/myapp/

# Move multiple files into a directory
mv *.log /var/log/archived/

# Remove a file
rm unwanted-file.txt

# Remove a file without prompting for confirmation
rm -f locked-file.txt

# Remove a directory and all its contents recursively
rm -r old-project/

# Remove with verbose output (shows what is being deleted)
rm -rv build/

# Create a symbolic link (like a shortcut)
ln -s /usr/local/bin/node18 /usr/local/bin/node

# Create a symbolic link to a directory
ln -s /mnt/data/uploads /var/www/uploads
Warning: The command rm -rf / will attempt to delete everything on your system. Modern Linux distributions include safeguards against running this against the root directory, but variations like rm -rf /* or rm -rf $UNDEFINED_VAR/ (where the variable is empty, expanding to rm -rf /) have caused catastrophic data loss in real production environments. Always double-check your rm -rf commands before pressing Enter, especially in scripts. Consider using rm -ri for interactive confirmation on critical directories.
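
One defensive idiom worth adding to scripts (a sketch; BUILD_DIR is a hypothetical variable name): the ${var:?message} expansion makes the shell abort with an error when the variable is unset or empty, so the command can never silently expand to rm -rf /.

```shell
BUILD_DIR=$(mktemp -d)                        # stand-in for a real build dir
rm -rf "${BUILD_DIR:?BUILD_DIR is not set}"/  # safe: the variable is set

unset BUILD_DIR
# Run in a subshell so this demo script survives the abort:
( rm -rf "${BUILD_DIR:?BUILD_DIR is not set}"/ ) 2>/dev/null \
  || echo "refused to run: BUILD_DIR is empty"
```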

Text Processing Power Tools

Text processing is where the Linux command line truly shines and where GUI tools cannot even begin to compete. The Unix philosophy of "everything is a text stream" means that log files, configuration files, CSV data, JSON output, and program output are all just text that can be sliced, filtered, transformed, and analyzed with the same set of tools. The big three — grep, sed, and awk — together form a text processing toolkit so powerful that many developers have built entire data pipelines around them. Understanding these tools deeply will save you hours of writing one-off scripts in Python or JavaScript for tasks that can be accomplished in a single line on the terminal.

grep — Search and Filter

The grep command searches for patterns in text. It is the tool you reach for first when you need to find something in your codebase, filter log files, or extract matching lines from any text stream. Modern development has introduced tools like ripgrep (rg) that are faster and have better defaults, but grep is universally available on every Unix system and understanding it is non-negotiable.

# Search for a string in a file
grep "ERROR" application.log

# Search recursively in all files under a directory
grep -r "TODO" src/

# Case-insensitive search
grep -i "warning" server.log

# Show line numbers with matches
grep -n "function" app.js

# Show 3 lines of context around each match
grep -C 3 "segfault" /var/log/kern.log

# Invert match: show lines that do NOT contain the pattern
grep -v "DEBUG" application.log

# Count the number of matches
grep -c "404" access.log

# Search for a regex pattern (extended regex with -E)
grep -E "error|warning|critical" syslog

# List only filenames containing the pattern
grep -rl "deprecated" src/

# Search for whole words only (not substrings)
grep -w "port" config.yaml

sed — Stream Editor

The sed command performs text transformations on streams. Its most common use is find-and-replace, but it can also delete lines, insert text, and perform complex multi-line transformations. While sed has a reputation for cryptic syntax, the patterns you actually need in daily work are straightforward.

# Replace the first occurrence on each line
sed 's/old/new/' file.txt

# Replace ALL occurrences on each line (global flag)
sed 's/old/new/g' file.txt

# Replace in-place (modify the file directly)
sed -i 's/localhost/0.0.0.0/g' config.yaml

# In-place with backup (creates config.yaml.bak)
sed -i.bak 's/localhost/0.0.0.0/g' config.yaml

# Delete lines matching a pattern
sed '/^#/d' config.conf       # Remove all comment lines
sed '/^$/d' file.txt          # Remove all empty lines

# Print only lines 10 through 20
sed -n '10,20p' largefile.txt

# Replace only on lines matching a pattern
sed '/production/s/debug/info/g' logging.conf

# Multiple replacements in one command
sed -e 's/foo/bar/g' -e 's/baz/qux/g' file.txt

awk — Column-Oriented Processing

The awk command excels at processing structured, column-oriented data. When you have text where each line consists of fields separated by spaces, tabs, or a custom delimiter, awk makes it trivial to extract, filter, and transform specific columns. It is a full programming language with variables, conditionals, and loops, but even its most basic usage is remarkably powerful.

# Print the second column of each line (space-delimited)
awk '{print $2}' data.txt

# Print the first and third columns
awk '{print $1, $3}' data.txt

# Use a custom field separator (e.g., colon for /etc/passwd)
awk -F: '{print $1, $7}' /etc/passwd

# Use comma as separator (CSV processing)
awk -F, '{print $1, $3}' data.csv

# Filter rows where the third column exceeds a value
awk '$3 > 100 {print $0}' sales.txt

# Sum a column of numbers
awk '{sum += $2} END {print "Total:", sum}' expenses.txt

# Print lines longer than 80 characters
awk 'length > 80' code.js

# Print the last column of each line (useful for variable-width data)
awk '{print $NF}' access.log

Supporting Cast: sort, uniq, wc, cut, tr

# Sort lines alphabetically
sort names.txt

# Sort numerically
sort -n numbers.txt

# Sort by the third column numerically (tab-delimited)
sort -t$'\t' -k3 -n data.tsv

# Reverse sort
sort -r file.txt

# Remove duplicate lines (uniq only drops adjacent duplicates, hence the sort)
sort file.txt | uniq

# Count occurrences of each unique line
sort file.txt | uniq -c | sort -rn

# Count lines, words, and characters
wc file.txt

# Count only lines
wc -l access.log

# Extract specific character positions
cut -c1-10 file.txt

# Extract specific columns with a delimiter
cut -d',' -f1,3 data.csv

# Translate characters (e.g., lowercase to uppercase)
tr 'a-z' 'A-Z' < file.txt

# Delete specific characters
tr -d '\r' < windows-file.txt > unix-file.txt

# Squeeze repeated characters
tr -s ' ' < messy.txt
Pattern to remember: A typical text processing pipeline follows the flow select (grep), transform (sed/awk), sort (sort), deduplicate (uniq), count (wc). Once you internalize this flow, you can decompose almost any text processing problem into a pipeline of these operations.
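
Here is that flow worked end to end on inline sample data (printf stands in for a real log file):

```shell
# transform -> sort -> deduplicate+count -> rank
printf 'GET /a 200\nGET /b 404\nGET /a 200\nGET /c 404\nGET /b 404\n' |
  awk '{print $3}' |   # transform: keep only the status code column
  sort |               # sort so duplicates become adjacent
  uniq -c |            # deduplicate, counting each occurrence
  sort -rn             # rank by frequency: 404 (3 hits) before 200 (2 hits)
```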

Pipes and Redirection

Pipes and redirection are the glue that holds the entire Unix command-line philosophy together. Without them, each command would be an isolated tool that can only read from files and write to the screen. With them, every command becomes a composable building block that can send its output to another command, receive input from another command, or redirect output to files. Understanding pipes and redirection transforms you from someone who types individual commands into someone who builds powerful processing pipelines on the fly. This is the difference between using the command line as a clumsy alternative to a GUI and using it as a genuinely superior tool that multiplies your productivity.

The Pipe Operator ( | )

The pipe operator takes the standard output (stdout) of the command on its left and feeds it as standard input (stdin) to the command on its right. You can chain as many pipes together as you need, creating arbitrarily long processing pipelines where data flows from left to right through each transformation stage.

# Count how many JavaScript files are in the project
find . -name "*.js" | wc -l

# Find the 10 largest files in the current directory tree
du -ah . | sort -rh | head -n 10

# Show which IP addresses have the most requests in an access log
awk '{print $1}' access.log | sort | uniq -c | sort -rn | head -n 20

# Find all unique HTTP status codes in a log
awk '{print $9}' access.log | sort -u

# Count how many times each error type appears
grep "ERROR" app.log | awk -F'ERROR' '{print $2}' | sort | uniq -c | sort -rn

Output Redirection ( >, >> )

# Redirect stdout to a file (overwrites the file)
ls -la > directory-listing.txt

# Append stdout to a file (does not overwrite)
echo "Deploy completed at $(date)" >> deploy.log

# Redirect stderr to a file
npm run build 2> build-errors.log

# Redirect both stdout and stderr to the same file
npm run build > build.log 2>&1

# Modern Bash syntax for redirecting both streams
npm run build &> build.log

# Discard all output (send to the void)
command_that_is_noisy > /dev/null 2>&1

Input Redirection and Here Documents

# Redirect a file as stdin to a command
sort < unsorted-list.txt

# Here document: pass multi-line input to a command
cat << 'EOF' > config.json
{
  "host": "0.0.0.0",
  "port": 3000,
  "env": "production"
}
EOF

# Here string: pass a single string as stdin
grep "admin" <<< "admin:x:1000:1000::/home/admin:/bin/bash"

The tee Command

The tee command reads from stdin and writes to both stdout and one or more files simultaneously. This is indispensable when you want to see the output of a pipeline on your terminal while also saving it to a file, or when you need to fork a data stream to multiple destinations.

# See the output AND save it to a file
npm run build 2>&1 | tee build.log

# Append to the file instead of overwriting
ping -c 5 google.com | tee -a network.log

# Write to multiple files at once
echo "config update" | tee file1.txt file2.txt file3.txt
Tip: When debugging complex pipelines, insert tee /dev/stderr at intermediate stages to see what data looks like at that point without disrupting the flow. For example: cat data.txt | grep "ERROR" | tee /dev/stderr | wc -l will show the matched lines on stderr while still passing them to wc -l for counting.

Finding Things

In a large codebase or on a server with a complex directory structure, knowing how to find files, directories, and executables quickly is a critical skill. The find command is the Swiss Army knife here — it can search by name, type, size, modification date, permissions, ownership, and virtually any other file attribute. It can also execute commands on the files it finds, making it a one-stop shop for bulk file operations. While modern tools like fd offer a more user-friendly syntax, find is universally available and its full power is unmatched.

# Find files by name (case-sensitive)
find . -name "package.json"

# Find files by name (case-insensitive)
find . -iname "readme*"

# Find only files (not directories)
find . -type f -name "*.ts"

# Find only directories
find . -type d -name "config"

# Find files modified in the last 24 hours
find . -type f -mtime -1

# Find files modified in the last 30 minutes
find . -type f -mmin -30

# Find files larger than 100MB
find / -type f -size +100M

# Find empty files
find . -type f -empty

# Find and delete all .DS_Store files
find . -name ".DS_Store" -delete

# Find files and execute a command on each one
find . -name "*.log" -exec gzip {} \;

# Find files and pass them all to a single command invocation
find . -name "*.test.js" -exec cat {} +

# Exclude directories from the search
find . -path ./node_modules -prune -o -name "*.js" -print

# Combine multiple conditions (AND is implicit, OR uses -o)
find . -name "*.js" -o -name "*.ts"

# Find files with specific permissions
find . -type f -perm 0777

For simpler searches, locate, which, and whereis provide faster but more narrowly scoped alternatives.

# Find a file by name using the locate database (very fast, but may be stale)
locate nginx.conf

# Update the locate database (run periodically or after major file changes)
sudo updatedb

# Find the path of an executable in your PATH
which node
which python3

# Find binary, source, and man page for a command
whereis git

# Find all versions of a command in your PATH
which -a python
Performance note: The find command traverses the filesystem in real time, which means it is always accurate but can be slow on large directory trees. The locate command uses a pre-built database and returns results almost instantly, but the database must be periodically updated with updatedb. For development work where files change frequently, find is more reliable. For system administration on stable servers, locate is a great time-saver.

Process Management

Every running program on a Linux system is a process with a unique process ID (PID). As a developer, you need to know how to inspect running processes, identify resource hogs, gracefully stop misbehaving applications, and manage background tasks. This is especially critical when you are debugging production servers where a runaway process might be consuming all available memory or CPU, or when you need to restart a service without losing in-flight requests. Process management also comes into play during local development when you need to kill a zombie process that is hogging a port or run long-running tasks in the background while continuing to use your terminal.

# List all running processes (full format)
ps aux

# Search for a specific process
ps aux | grep node

# Show processes in a tree format (parent-child relationships)
ps auxf

# Show processes for the current user
ps ux

# Interactive, real-time process viewer
top

# Better interactive viewer (install with your package manager)
htop

# Show the top 10 memory-consuming processes
ps aux --sort=-%mem | head -n 11

# Show the top 10 CPU-consuming processes
ps aux --sort=-%cpu | head -n 11

Killing Processes

When a process needs to be stopped, Linux provides a signal-based mechanism. The most important signals to understand are SIGTERM (15), which asks a process to shut down gracefully, and SIGKILL (9), which forcefully terminates a process immediately. Always try SIGTERM first, as it allows the process to clean up resources, close database connections, and flush buffers. Use SIGKILL only as a last resort when SIGTERM does not work.

# Send SIGTERM (graceful shutdown) to a process by PID
kill 12345

# Send SIGKILL (force kill) to a process
kill -9 12345

# Kill a process by name
pkill -f "node server.js"

# Kill all processes with a given name
killall node

# Find and kill a process using a specific port
lsof -ti:3000 | xargs kill -9

# Interactive kill: find the PID first, then kill
ps aux | grep "runaway-process"
kill -SIGTERM 12345

Background and Foreground Jobs

# Run a command in the background
npm run dev &

# List background jobs in the current shell session
jobs

# Bring a background job to the foreground
fg %1

# Send a foreground job to the background (Ctrl+Z first to suspend, then bg)
# Press Ctrl+Z to suspend
bg %1

# Run a command that persists after you close the terminal
nohup python3 long-running-script.py &

# Better alternative: use nohup with output redirection
nohup ./deploy.sh > deploy.log 2>&1 &

# Detach a running process from the terminal completely
disown %1
Warning: Using kill -9 does not give the process any chance to clean up. Database processes killed with SIGKILL may leave corrupted data files. Application servers killed this way will drop all active connections without responding. Always use kill (SIGTERM) first and wait a few seconds before resorting to kill -9. In production, use your service manager (systemctl stop) which handles graceful shutdown properly.
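
That escalation policy as a small script fragment (a sketch; the five-second window is illustrative, and sleep 300 stands in for a misbehaving process):

```shell
sleep 300 &                               # stand-in for a misbehaving process
pid=$!

kill "$pid"                               # step 1: polite SIGTERM
for _ in 1 2 3 4 5; do                    # step 2: give it up to ~5 seconds
  kill -0 "$pid" 2>/dev/null || break     # kill -0 only tests existence
  sleep 1
done
if kill -0 "$pid" 2>/dev/null; then       # step 3: still alive? force it
  kill -9 "$pid"
fi
```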

Permissions and Ownership

The Linux permission model is one of the foundational security mechanisms of the operating system, and misunderstanding it is one of the most common sources of frustrating "permission denied" errors and, far worse, security vulnerabilities. Every file and directory on a Linux system has three sets of permissions — for the owner, the group, and everyone else — and each set controls whether the entity can read, write, or execute the file. When you see a permissions string like -rwxr-xr-- in the output of ls -la, it tells you exactly who can do what with that file. The first character indicates the file type (- for regular files, d for directories, l for symlinks). The next nine characters are three groups of three: owner permissions, group permissions, and other permissions.

Permission      Character   Octal   Effect on Files        Effect on Directories
Read            r           4       View file contents     List directory contents
Write           w           2       Modify file contents   Create/delete files in directory
Execute         x           1       Run as a program       Enter the directory (cd into it)
No permission   -           0       Access denied          Access denied

Octal notation combines these values by adding them together. For example, 755 means owner gets read+write+execute (4+2+1=7), group gets read+execute (4+1=5), and others get read+execute (4+1=5). This is the standard permission for executable scripts and directories. The value 644 means owner gets read+write (4+2=6), group and others get read-only (4), which is the standard for regular files.
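
You can verify the arithmetic directly with chmod and stat (GNU stat shown; macOS uses stat -f instead of -c):

```shell
f=$(mktemp)                # scratch file for the demo

chmod 644 "$f"
stat -c "%a %A" "$f"       # -> 644 -rw-r--r--

chmod 755 "$f"
stat -c "%a %A" "$f"       # -> 755 -rwxr-xr-x

rm "$f"
```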

Octal   Permission String   Common Use Case
755     rwxr-xr-x           Executable scripts, directories
644     rw-r--r--           Regular files (config, source code)
700     rwx------           Private scripts, SSH directory
600     rw-------           SSH keys, sensitive config files
777     rwxrwxrwx           Never use this — anyone can do anything
444     r--r--r--           Read-only files (deployed config)

# View permissions of files
ls -la

# Change permissions using octal notation
chmod 755 deploy.sh
chmod 600 ~/.ssh/id_rsa
chmod 644 index.html

# Change permissions using symbolic notation
chmod u+x script.sh          # Add execute for owner
chmod g-w file.txt            # Remove write for group
chmod o-rwx secret.conf       # Remove all permissions for others
chmod a+r public.html         # Add read for all (a = all)

# Recursively change permissions for a directory
chmod -R 755 /var/www/html/

# Change file ownership
chown appuser:appgroup file.txt

# Recursively change ownership
chown -R www-data:www-data /var/www/

# Change only the group
chgrp developers project/

# View the numeric (octal) permissions
stat -c "%a %n" *
Warning: Blindly running chmod 777 to fix permission errors is a dangerous anti-pattern. It grants read, write, and execute access to every user on the system. On a web server, this could allow any user or compromised process to modify your application files. Instead, identify the correct owner and group, and assign the minimum permissions needed. For web servers, the application user typically needs read access to code files, read+write to upload and log directories, and execute on directories.
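
Here is that advice as a dry-run sketch on a scratch directory (paths, user, and group are illustrative; the chown line is commented out because it needs root):

```shell
app=$(mktemp -d)                          # stand-in for /var/www/myapp
mkdir -p "$app/uploads" "$app/logs"
touch "$app/index.php"

# chown -R deploy:www-data "$app"         # needs root; names are hypothetical
find "$app" -type d -exec chmod 750 {} +  # directories: owner rwx, group rx
find "$app" -type f -exec chmod 640 {} +  # code files: owner rw, group read
chmod 770 "$app/uploads" "$app/logs"      # only writable dirs get group write

stat -c "%a %n" "$app/index.php" "$app/uploads"
rm -rf "$app"
```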

Networking Commands

Modern development is fundamentally networked. You interact with APIs, deploy to remote servers, debug connectivity issues, transfer files between machines, and troubleshoot DNS problems regularly. The Linux command line provides a rich set of networking tools that give you direct visibility into what is happening at the network level. These tools are essential for diagnosing issues that manifest as "it doesn't connect" or "it's slow" — vague symptoms that require precise tools to investigate. Whether you are verifying that a newly deployed service is listening on the correct port, testing an API endpoint before writing frontend code, or diagnosing why your application cannot resolve a hostname, these commands will get you to the answer.

curl — The Universal HTTP Client

# Simple GET request
curl https://api.example.com/users

# GET with headers displayed
curl -i https://api.example.com/users

# Only show response headers
curl -I https://api.example.com/users

# POST request with JSON body
curl -X POST https://api.example.com/users \
  -H "Content-Type: application/json" \
  -d '{"name": "Alice", "email": "alice@example.com"}'

# POST with data from a file
curl -X POST https://api.example.com/import \
  -H "Content-Type: application/json" \
  -d @payload.json

# Include authentication header
curl -H "Authorization: Bearer eyJhbGc..." https://api.example.com/me

# Follow redirects
curl -L https://short.url/abc123

# Download a file
curl -O https://example.com/archive.tar.gz

# Download with a custom filename
curl -o myfile.tar.gz https://example.com/archive.tar.gz

# Verbose output (shows full request/response headers and TLS handshake)
curl -v https://api.example.com/health

# Measure response time
curl -w "Total time: %{time_total}s\n" -o /dev/null -s https://example.com

SSH and SCP — Secure Remote Access and File Transfer

# Connect to a remote server
ssh user@192.168.1.100

# Connect on a non-standard port
ssh -p 2222 user@server.example.com

# Execute a single command on a remote server
ssh user@server "df -h && free -m"

# SSH with a specific identity file
ssh -i ~/.ssh/deploy_key user@server.example.com

# Copy a file to a remote server
scp ./deploy.tar.gz user@server:/tmp/

# Copy a file from a remote server
scp user@server:/var/log/app.log ./

# Copy a directory recursively
scp -r ./dist/ user@server:/var/www/html/

# SSH tunneling: forward local port 5432 to remote PostgreSQL
ssh -L 5432:localhost:5432 user@db-server.example.com

# Reverse tunnel: expose your local port 3000 on the remote server as port 8080
ssh -R 8080:localhost:3000 user@server.example.com

Network Diagnostics

# Test connectivity to a host
ping -c 4 google.com

# DNS lookup
dig example.com

# Short DNS answer
dig +short example.com

# Query specific DNS record types
dig MX example.com
dig TXT example.com

# Reverse DNS lookup
dig -x 8.8.8.8

# Show active network connections and listening ports
ss -tulnp

# Legacy equivalent (netstat)
netstat -tulnp

# Check if a specific port is open on a remote host
nc -zv server.example.com 443

# Send a raw TCP request (useful for debugging)
echo -e "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n" | nc example.com 80

# Download a file (alternative to curl)
wget https://example.com/archive.tar.gz

# Download an entire website recursively (for mirroring)
wget -r -l 2 -np https://docs.example.com/
Tip: Use curl -w with format variables to create a quick API performance test without any external tools. The format string "%{time_namelookup} %{time_connect} %{time_appconnect} %{time_starttransfer} %{time_total}\n" breaks down the response time into DNS lookup, TCP connect, TLS handshake, time to first byte, and total time. This is often enough to pinpoint whether a latency issue is in DNS, network, TLS, or application processing.
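
That tip as a ready-to-paste command. A file:// target built on the fly is used here only so the example runs offline; swap it for your real https:// endpoint.

```shell
target=$(mktemp) && echo ok > "$target"    # stand-in for a real endpoint

curl -s -o /dev/null \
  -w "dns=%{time_namelookup}s connect=%{time_connect}s tls=%{time_appconnect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n" \
  "file://$target"

rm "$target"
```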

Disk and System Info

Understanding the state of the system you are working on is crucial, whether it is your local development machine or a production server you have just SSH-ed into for the first time. Running out of disk space is one of the most common causes of mysterious application failures — databases refuse to write, log rotation fails silently, and deployments error out with cryptic messages. Similarly, running out of memory causes the OOM killer to terminate processes seemingly at random, and high CPU usage makes everything sluggish. The commands in this section give you an instant snapshot of system health and help you identify resource bottlenecks before they become outages.

# Show disk space usage for all mounted filesystems
df -h

# Show disk usage for specific directories
du -sh /var/log/
du -sh /home/*

# Find the largest directories under the current path
du -h --max-depth=1 | sort -rh | head -n 20

# Show memory usage (human-readable)
free -h

# Show detailed memory info
cat /proc/meminfo

# Show system information
uname -a

# Show only the kernel version
uname -r

# Show how long the system has been running and load average
uptime

# Show block devices (disks and partitions)
lsblk

# Show CPU information
lscpu

# Show number of CPU cores
nproc

# Show disk I/O statistics
iostat -x 1 5

# Show system load and process summary
vmstat 1 5

# Show the distribution name and version
cat /etc/os-release
Quick health check: When you first SSH into an unfamiliar server, run these four commands in sequence: uptime (is the server overloaded?), free -h (is it out of memory?), df -h (is it out of disk space?), and dmesg | tail (are there kernel-level errors?). This 10-second routine tells you the overall health of the system before you start investigating specific application problems.
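
The routine packaged as a shell function you could drop into your ~/.bashrc (a sketch; free is Linux-only, and dmesg may require root on some systems, in which case its section prints nothing):

```shell
# 10-second triage for an unfamiliar server
healthcheck() {
  echo "== load ==";                   uptime
  echo "== memory ==";                 free -h
  echo "== disk ==";                   df -h
  echo "== recent kernel messages =="; dmesg 2>/dev/null | tail -n 10
}

healthcheck
```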

Shell Productivity

The difference between a developer who is competent at the command line and one who is truly fast often comes down to shell productivity techniques: aliases, shell configuration, history management, and keyboard shortcuts. These are the meta-skills that accelerate everything else you do on the terminal. A well-configured shell environment with thoughtful aliases can eliminate hundreds of keystrokes per day and prevent common mistakes. Learning the keyboard shortcuts for line editing lets you fix typos and restructure commands at the speed of thought instead of laboriously pressing the arrow keys to navigate to the right character position.

Aliases and Shell Configuration

Your shell configuration file (~/.bashrc for Bash, ~/.zshrc for Zsh) is the place to define aliases, functions, environment variables, and custom prompts. Aliases are simple text substitutions that turn long, frequently-used commands into short abbreviations. Functions are more powerful and can accept arguments.

# Add these to your ~/.bashrc or ~/.zshrc

# Navigation shortcuts
alias ..='cd ..'
alias ...='cd ../..'
alias ....='cd ../../..'

# ls improvements
alias ll='ls -alF'
alias la='ls -A'
alias lt='ls -lt --color=auto'

# Safety nets
alias rm='rm -i'
alias mv='mv -i'
alias cp='cp -i'

# Git shortcuts
alias gs='git status'
alias ga='git add'
alias gc='git commit'
alias gp='git push'
alias gl='git log --oneline --graph --decorate -20'
alias gd='git diff'
alias gb='git branch'

# Docker shortcuts
alias dps='docker ps'
alias dcu='docker compose up -d'
alias dcd='docker compose down'
alias dcl='docker compose logs -f'

# Development
alias serve='python3 -m http.server 8000'
alias ports='ss -tulnp'
alias myip='curl -s ifconfig.me'

# Custom function: create a directory and cd into it
mkcd() {
  mkdir -p "$1" && cd "$1"
}

# Custom function: extract any archive format
extract() {
  if [ -f "$1" ]; then
    case "$1" in
      *.tar.bz2)   tar xjf "$1"   ;;
      *.tar.gz)    tar xzf "$1"   ;;
      *.tar.xz)    tar xJf "$1"   ;;
      *.bz2)       bunzip2 "$1"   ;;
      *.gz)        gunzip "$1"    ;;
      *.tar)       tar xf "$1"    ;;
      *.zip)       unzip "$1"     ;;
      *.7z)        7z x "$1"      ;;
      *)           echo "Cannot extract '$1'" ;;
    esac
  else
    echo "'$1' is not a valid file"
  fi
}
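Edits to your rc file do not affect shells that are already open; reload the file with source. Here is a quick check that mkcd behaves as expected (the function is repeated so this snippet stands alone, and /tmp/mkcd-demo is just a scratch path):

```shell
# Reload your configuration in the current shell, guarded in case the
# file does not exist on this machine
if [ -f ~/.bashrc ]; then source ~/.bashrc; fi

# mkcd from the block above, repeated here so the snippet is self-contained
mkcd() { mkdir -p "$1" && cd "$1"; }

# Create a scratch directory and land inside it in one step
mkcd /tmp/mkcd-demo && pwd
```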

History Management

# Search command history
history | grep "docker"

# Re-run the last command
!!

# Re-run the last command with sudo
sudo !!

# Re-run the most recent command that starts with "git"
!git

# Interactive reverse search (press Ctrl+R, then type)
# Ctrl+R → type "deploy" → finds last command containing "deploy"

# Increase history size in ~/.bashrc
HISTSIZE=10000
HISTFILESIZE=20000

# Ignore duplicate commands in history
HISTCONTROL=ignoredups:erasedups

# Add timestamps to history
HISTTIMEFORMAT="%F %T  "
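Two more Bash settings complement the history variables above. Both histappend and PROMPT_COMMAND are standard Bash features, and together they stop parallel terminals from overwriting each other's history:

```shell
# Append to the history file on shell exit instead of overwriting it,
# so multiple open terminals do not clobber each other
shopt -s histappend

# Flush each command to the history file as soon as it runs, making it
# immediately searchable from other sessions
PROMPT_COMMAND='history -a'
```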

Keyboard Shortcuts

These readline shortcuts work in Bash, Zsh, and most other shells. They use Emacs-style keybindings by default. Learning even a handful of these will make you noticeably faster at editing commands.

Ctrl+A: Move cursor to beginning of line
Ctrl+E: Move cursor to end of line
Ctrl+U: Delete from cursor to beginning of line
Ctrl+K: Delete from cursor to end of line
Ctrl+W: Delete the word before the cursor
Alt+D: Delete the word after the cursor
Ctrl+Y: Paste (yank) the last deleted text
Ctrl+L: Clear the screen (same as clear)
Ctrl+R: Reverse search through command history
Ctrl+C: Cancel the current command / send SIGINT
Ctrl+Z: Suspend the current foreground process
Ctrl+D: Exit the current shell (or send EOF)
Alt+B: Move cursor back one word
Alt+F: Move cursor forward one word
Ctrl+_: Undo the last edit
Tab: Auto-complete file/command names
Tab Tab: Show all possible completions
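If you prefer vi-style editing, readline can switch modes in the current shell, and Bash's bind builtin will show you every active binding (the listing is only fully meaningful in an interactive shell):

```shell
# Switch the current shell to vi-style line editing...
set -o vi

# ...and back to the default Emacs mode
set -o emacs

# List the active readline bindings, hiding the noise of plain characters
bind -p 2>/dev/null | grep -v self-insert | head -n 5 || true
```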

Tip: If you use Zsh (the default shell on macOS since Catalina), install Oh My Zsh or the more lightweight zsh-autosuggestions plugin. It suggests commands as you type based on your history, and you can accept a suggestion by pressing the right arrow key. This single plugin can dramatically speed up your workflow once your history contains your commonly used commands.

Essential One-Liners

One-liners are the command line's party trick — compact, powerful expressions that accomplish in a single line what might otherwise require a script or a dedicated application. The following twenty one-liners are not contrived examples; they are commands that working developers actually use on a regular basis. Each one solves a real problem you will encounter in development, deployment, or system administration. Study them not just to memorize the syntax, but to understand how the individual commands are composed together. Once you see the patterns, you will be able to construct your own one-liners on the fly for whatever situation you encounter.

# 1. Find all files containing a specific string recursively
grep -rl "API_KEY" --include="*.env" .

# 2. Replace a string across all files in a project
find . -name "*.js" -exec sed -i 's/oldFunction/newFunction/g' {} +

# 3. Count lines of code in a project (excluding node_modules)
find . -name "*.ts" -not -path "*/node_modules/*" | xargs wc -l | tail -1

# 4. Find the 10 largest files on disk
find / -type f -exec du -h {} + 2>/dev/null | sort -rh | head -10

# 5. Watch a log file for errors in real-time with highlighting
tail -f /var/log/app.log | grep --color=always -E "ERROR|WARN|$"

# 6. Kill all processes listening on a specific port
lsof -ti:8080 | xargs kill -9

# 7. Create a quick backup of a file with timestamp
cp config.yaml config.yaml.backup.$(date +%Y%m%d_%H%M%S)

# 8. Show all unique file extensions in a directory tree
find . -type f | sed 's/.*\.//' | sort -u

# 9. Monitor disk usage every 5 seconds
watch -n 5 'df -h | grep -E "/$|/home"'

# 10. Download and extract a tar.gz in one step
curl -sL https://example.com/archive.tar.gz | tar xz

# 11. Find files modified in the last hour and list them by time
find . -type f -mmin -60 -printf "%T@ %p\n" | sort -rn | cut -d' ' -f2-

# 12. Generate a random 32-character password
openssl rand -base64 32 | tr -d '/+=' | head -c 32

# 13. List all open ports on the current machine
ss -tulnp | awk 'NR>1 {print $5}' | sort -u

# 14. Calculate the total size of all .log files
find . -name "*.log" -exec du -ch {} + | grep total$

# 15. Show the most frequently used commands from your history
history | awk '{print $2}' | sort | uniq -c | sort -rn | head -20

# 16. Convert all filenames in a directory to lowercase
for f in *; do mv -n "$f" "$(printf '%s' "$f" | tr '[:upper:]' '[:lower:]')" 2>/dev/null; done

# 17. Find duplicate files by checksum
find . -type f -exec md5sum {} + | sort | uniq -w32 -dD

# 18. Quick HTTP server in the current directory (Python 3)
python3 -m http.server 8000

# 19. Test if a list of hosts is reachable
for host in server1 server2 server3; do ping -c1 -W2 $host &>/dev/null && echo "$host: UP" || echo "$host: DOWN"; done

# 20. Extract all unique URLs from a file
grep -oP 'https?://[^\s"'"'"'<>]+' page.html | sort -u

Building intuition: Notice how most of these one-liners follow a consistent pattern: generate a list (find, cat, history), filter it (grep, awk), transform it (sed, cut, tr), sort it (sort), deduplicate it (uniq), and take a slice (head, tail). This is the fundamental grammar of command-line data processing. Once you internalize this pattern, you can decompose almost any data processing problem into a chain of these simple operations.
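Here is that grammar applied end to end. The log file and its contents are invented purely for illustration, written to a throwaway path in /tmp:

```shell
# A throwaway log, created only so the pipeline below has input to chew on
printf 'ERROR timeout\nWARN slow\nERROR timeout\nERROR refused\n' > /tmp/pattern-demo.log

# generate -> filter -> transform -> sort -> count duplicates -> rank -> slice
cat /tmp/pattern-demo.log | grep ERROR | awk '{print $2}' | sort | uniq -c | sort -rn | head -5
# the most frequent ERROR type ("timeout", seen twice) comes out on top
```

Swap the printf for a real log and the same six-stage chain answers "what is failing most often?" on any server.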

Conclusion

The Linux command line is not just a tool — it is a force multiplier for everything you do as a developer. In this guide, we covered the complete landscape of essential commands: navigating the filesystem with confidence, performing file operations at scale, processing text with grep, sed, and awk, composing powerful pipelines with pipes and redirection, finding files and executables, managing processes, understanding the permission model, diagnosing network issues, monitoring system health, and optimizing your shell environment for maximum productivity.

Here are the key principles to carry forward as you continue building your command-line skills: favor small tools that each do one thing well, and compose them through pipes; treat everything as a text stream that can be filtered, transformed, sorted, and sliced; invest in your shell configuration, since good aliases, functions, and history settings save keystrokes and prevent mistakes every day; and check system health (load, memory, disk) before chasing application-level symptoms.

The commands in this guide represent the core vocabulary that professional developers use every day. But the command line is a vast landscape, and there is always more to learn. Tools like tmux for terminal multiplexing, jq for JSON processing, xargs for building command lines from standard input, and rsync for efficient file synchronization are all natural next steps once you are comfortable with the fundamentals covered here. The terminal rewards curiosity and practice — the more you use it, the more you discover, and the more productive you become.