If you work with more than one server, the need for a system to manage them all over SSH becomes obvious pretty quickly. I have several Linux machines running all over the place: Raspberry Pis at home, a few production web servers, some backups, and various one-off testing environments. Without a reliable system, I would be digging through sticky notes and browser bookmarks just to log in. Over the years, I have come up with a setup that helps me stay sane, stay organized, and log in fast when something goes sideways.
What I am sharing here isn’t anything revolutionary. It’s just a series of small steps and habits that help me connect to every server I manage without getting overwhelmed. It’s simple, effective, and easy to maintain. I am not using any graphical tools or external dashboards, just good old terminal access and a few text files. Whether you manage three machines or thirty, these same methods scale up well without adding clutter.
Naming and Organizing Hosts
The first thing I do when setting up SSH access to a new server is give it a clear and consistent name. I never rely on IP addresses alone. Instead, I use a naming scheme that immediately tells me what the server is, where it is, or what it’s used for. For example, my Pi-hole server is just “pihole” and my backup machine is “nas01.” If it’s a production web server, I will name it something like “web-prod1.”
I manage these names in my SSH config file, which lives at ~/.ssh/config. This file acts like a shortcut directory for all my servers. Instead of typing a full command like ssh jmoorewv@123.45.67.89 -p 2222, I can just type ssh web-prod1. That one change alone saves more time than I can measure. I include the username, port, and even custom key files in each host entry. Here’s an example:
Host web-prod1
    HostName 123.45.67.89
    User jmoorewv
    Port 2222
    IdentityFile ~/.ssh/web-prod1

Once a host is in the config file, it becomes part of my muscle memory. I don’t have to think about it. And when I add a new server, I just copy and paste an existing entry, change the values, and I’m good to go!
Key-Based Authentication
I don’t use passwords to connect to my servers. Passwords can be guessed, reused, and forgotten. Instead, I generate SSH key pairs and copy the public key to the remote machine. This gives me secure, passwordless access. I do this with the ssh-keygen and ssh-copy-id commands.
To create a new key, I run:
ssh-keygen -t ed25519 -f ~/.ssh/web-prod1
That creates a private and public key. Then I upload the public key to the server:
ssh-copy-id -i ~/.ssh/web-prod1.pub -p 2222 jmoorewv@123.45.67.89
Once the key is in place, I can connect without typing a password every time. I also lock down the server by disabling password authentication entirely in the SSH configuration file, located at /etc/ssh/sshd_config. I set PasswordAuthentication no and restart the SSH service.
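For reference, the change and restart step look something like this; note that the service unit is named ssh on Debian and Ubuntu systems, and sshd on Red Hat style systems:

# In /etc/ssh/sshd_config on the server:
PasswordAuthentication no

# Apply the change (Debian/Ubuntu; use sshd on RHEL-style systems):
sudo systemctl restart ssh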
By using different keys for different roles (one for personal use, another for client machines, and a third for Raspberry Pis), I add a layer of security. If one key is ever compromised, it won’t affect every server I control. I also name each key clearly so I know which one belongs to what.
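In practice, that just means pointing groups of hosts at the right key in ~/.ssh/config. A minimal sketch using my Pi hostnames; the key name pi_key is made up for the example, and IdentitiesOnly stops SSH from offering every other key it knows about:

Host pihole nas01 media-pi
    IdentityFile ~/.ssh/pi_key
    IdentitiesOnly yes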
SSH Agent and Key Forwarding
When I connect to one server and need to hop to another from there, I don’t want to manually deal with keys again. This is where the SSH agent and agent forwarding come in. I use the ssh-add command to load my keys into memory. If I am going to be using SSH for a while, I run ssh-add ~/.ssh/web-prod1 at the start of the session.
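If a shell doesn’t already have an agent running (many desktop environments start one for you), the session setup is just a couple of commands:

eval "$(ssh-agent -s)"     # start an agent and point this shell at it
ssh-add ~/.ssh/web-prod1   # load the key into memory
ssh-add -l                 # confirm which keys the agent is holding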
For jumping between servers, I enable agent forwarding in my SSH config file. I add this line to the host I’m using as a jump box:
Host jumpbox
    HostName 123.45.67.50
    User jmoorewv
    ForwardAgent yes

Now when I log into jumpbox, I can connect to internal machines behind it without needing to copy private keys around. It’s fast, secure, and lets me keep my private keys on my main machine where they belong.
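The hop itself is nothing special; the internal address here is just a placeholder:

ssh jumpbox
# ...then, from the jumpbox, the forwarded agent answers the key challenge:
ssh jmoorewv@10.0.0.12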
Routine Maintenance with Scripts
Some tasks need to be done regularly across multiple machines. Package updates, service restarts, log cleanup, things that keep servers healthy. Instead of logging in manually every time, I use Bash scripts. Here’s a simple one I run to update all my Raspberry Pis:
#!/bin/bash
# Run apt updates on each Raspberry Pi in turn, using the host
# aliases from ~/.ssh/config.
hosts=(pihole nas01 media-pi)

for host in "${hosts[@]}"
do
    echo "Updating $host"
    ssh "$host" "sudo apt update && sudo apt upgrade -y"
done

This script lives in my home folder as update-pis.sh and saves me from logging into each box one by one. I also have similar scripts to restart services, back up databases, or pull logs from multiple sources.
For more complex tasks, I sometimes use Ansible, but only if I’m managing more than five servers doing the same job. For smaller setups, Bash gets the job done without overhead.
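For the curious, the same Pi update as an Ansible ad-hoc command would look roughly like this, assuming an inventory group named pis:

ansible pis -m apt -a "update_cache=yes upgrade=yes" --become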
Keeping Track of What’s What
Even with clean naming and automation, it helps to have a little documentation. I keep a plain text file with details about each server. This file includes the IP address, operating system, main purpose, services running, and last change made. I save it in a ~/server-notes/ directory and name each file after the host.
For example, web-prod1.txt might include:
IP: 123.45.67.89
OS: Ubuntu 22.04 LTS
Services: NGINX, PHP-FPM, MariaDB
Purpose: Main production web server
Last updated: 2025-03-14 - PHP upgraded to 8.3
Having this kind of quick reference is useful when troubleshooting or preparing to make changes. It also makes life easier if you’re handing off the system to someone else temporarily.
Using Multiplexing for Faster Connections
There’s a lesser-known SSH feature called multiplexing that allows multiple SSH sessions to reuse the same connection. It speeds things up when I am making several SSH connections in a row. I enable it in my SSH config:
Host *
    ControlMaster auto
    ControlPath ~/.ssh/sockets/%r@%h:%p
    ControlPersist 10m

I create the sockets directory first with:
mkdir -p ~/.ssh/sockets
Now, when I connect to a server, SSH opens a control connection. If I open another terminal and connect to the same server, it just reuses the existing session. It saves time and CPU cycles, especially when I am copying files, restarting services, and pulling logs in sequence.
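You can check on a control connection, or shut one down early, with ssh’s -O flag:

ssh -O check web-prod1   # is there a live master connection for this host?
ssh -O exit web-prod1    # close the shared connection now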
Monitoring Multiple Servers at Once
Sometimes I need to monitor several servers at the same time. For that, I use tmux. It lets me split my terminal into panes and run different SSH sessions side by side. I start a named session with:
tmux new -s monitor
Inside, I create new windows for each host and connect. One window might tail the NGINX log on the production server, another might monitor system resources on the backup machine. The session can be detached and reattached later, so I don’t lose anything if I get disconnected.
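That layout can even be scripted so one command builds the whole dashboard. A rough sketch using hosts from earlier; the log path is just an example:

tmux new-session -d -s monitor 'ssh web-prod1 "tail -f /var/log/nginx/access.log"'
tmux split-window -h -t monitor 'ssh nas01 htop'
tmux attach -t monitor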
I’ve also used byobu, which builds on tmux and adds some extra info in the status line, like uptime and system load. Either one works well for this kind of multi-server monitoring setup.
Backups and Fail-Safes
Every server I manage gets backed up, one way or another. I use rsync to copy important files and databases to a dedicated backup server on my LAN. SSH makes this easy and secure. A sample command looks like this:
rsync -avz -e ssh jmoorewv@web-prod1:/var/www/ /mnt/backups/web-prod1/
I wrap this in scripts and run it on a schedule with cron. On the backup box, I keep the most recent versions and use simple date-based rotation. It’s not fancy, but it’s reliable.
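A stripped-down sketch of that idea; the paths, schedule, and 14-day window are examples rather than a prescription:

#!/bin/bash
# Pull the web root into a dated folder, then prune old copies.
dest="/mnt/backups/web-prod1/$(date +%F)"
mkdir -p "$dest"
rsync -avz -e ssh jmoorewv@web-prod1:/var/www/ "$dest/"
find /mnt/backups/web-prod1/ -mindepth 1 -maxdepth 1 -type d -mtime +14 -exec rm -rf {} +

# crontab entry, runs nightly at 2 a.m.:
# 0 2 * * * /home/jmoorewv/backup-web.sh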
I also have one remote offsite backup that runs nightly. I send encrypted files to a cloud storage bucket using a combination of tar, gpg, and rclone. If something goes wrong locally, I know I have a copy somewhere else.
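The core of that job is a pipeline along these lines; the rclone remote name is a placeholder, and a real script would handle the gpg passphrase non-interactively (for example with --batch and --passphrase-file):

tar -czf - /var/www | gpg --symmetric --cipher-algo AES256 -o /tmp/web-prod1.tar.gz.gpg
rclone copy /tmp/web-prod1.tar.gz.gpg offsite:backups/web-prod1/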
Wrapping It Up
When you manage multiple servers with SSH, organization matters just as much as technical skill. A clean config file, good naming habits, and a few reusable scripts go a long way. I don’t waste time looking up credentials, retracing commands, or fixing preventable errors. Everything has a place, and most of it runs without me needing to think about it.
This setup might seem basic, but that’s what I like about it. It’s not fragile. It doesn’t break when I update my terminal or reinstall my OS. It works the same on a laptop or desktop. And when I do need to expand or tweak something, it’s just a text file away.
If you’re managing servers and feel like things are slipping through the cracks, try setting up your own SSH config file and building from there. It’s not glamorous, but it will save your sanity in the long run.