
How to lock down your server: a beginner's guide to VPS security

OpenClaw (formerly Moltbot and Clawdbot) has brought a wave of newcomers to VPS hosting. That is not a bad thing, but it does mean more people need to learn basic server security so they do not get burned early. This guide covers the essentials, especially if this is your first time using a VPS. Everything here is free and open-source.

Why should you care?

The moment you spin up a VPS, bots start knocking. Not tomorrow — within minutes. Automated scanners crawl the internet 24/7 looking for servers with weak passwords, open ports, or unpatched software. If your server is on the internet, it’s a target.

You do not need to be a security expert to protect yourself. This guide walks through the practical steps to harden a Linux VPS, in plain English.

By the end, you will have multiple layers working together, so one mistake does not become a full compromise.


Pick your admin access model (this drives everything else)

You have two sane ways to administer a VPS:

  1. Public SSH (classic): Your server is reachable on the public internet for SSH, but you harden it with SSH keys, a firewall, and tools like fail2ban.
  2. Tailscale-only admin (recommended): Your server is not reachable publicly for admin access. You only SSH over Tailscale (private network).

If you choose Tailscale-only, you can skip “change SSH port” and you can treat fail2ban as optional (because the public internet can’t reach your SSH port).


The big picture: think like a building

Imagine your server is a building. Security isn’t one giant lock on the front door — it’s layers:

  • Network: fence around the property → firewall + VPN
  • Access: key-card entry, no master keys floating around → SSH keys, no passwords
  • Application: each office has its own lock → Docker isolation, TLS encryption
  • Monitoring: security cameras + alarm system → file integrity checks, log analysis, alerts
  • Supply chain: vetting the contractors and materials → dependency scanning in CI
  • Recovery: insurance policy + fire escape plan → automated backups, auto-restart, auto-patching

No single layer is perfect. Together, they make breaking in really, really hard — and if someone does get in, you’ll know fast and recover faster.


Before you touch anything: don’t lock yourself out

If you take one thing from this guide, take this:

  • Keep one SSH session open while you change SSH/firewall settings.
  • After each change, open a second terminal and verify you can still connect.
  • Only when the new path works do you close the original session.

If you’re using a cloud provider (DigitalOcean, Hetzner, Linode, AWS), make sure you know where their web console / recovery shell is. That is your last-resort way back in.


Layer 1: network security

What’s the problem?

By default, your server listens on the public internet. Anyone can try to connect. Think of it like leaving every door and window in your building wide open.

1.1 Firewall: close every door you don’t need

A firewall controls which network traffic is allowed in and out. The golden rule:

Block everything by default. Only allow what you specifically need.

On Ubuntu, the built-in firewall is called UFW (Uncomplicated Firewall):

# Set the default: block all incoming traffic
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Allow SSH so you can still log in (use your port; default is 22)
sudo ufw allow 22/tcp

# Turn it on
sudo ufw enable

# Check your rules
sudo ufw status

Beginner tip: If you change your SSH port (optional, below), update the firewall rule to match, or you’ll lock yourself out.

1.2 Change the default SSH port

If you’re using Public SSH, SSH runs on port 22 by default and every bot on the internet knows it. Changing it to something else (like 8822) won’t stop a determined attacker, but it does cut down automated scanning noise.

Edit /etc/ssh/sshd_config:

Port 8822

Then restart SSH and update your firewall:

sudo ufw allow 8822/tcp
sudo systemctl restart ssh

# From a SECOND terminal, confirm you can log in on 8822, then remove the old rule:
sudo ufw delete allow 22/tcp

Warning: Before closing port 22, make sure you can connect on the new port. Keep your current session open as a safety net.

1.3 fail2ban: auto-ban attackers

If you’re using Public SSH, even on a non-standard port some bots will find you. fail2ban watches your login logs and automatically blocks IP addresses that fail too many times.

sudo apt install fail2ban

Create /etc/fail2ban/jail.local:

[sshd]
enabled = true
# if you changed the SSH port from 22, set it here instead (e.g. port = 8822)
port = ssh
maxretry = 5
bantime = 3600
findtime = 600

This means: if someone fails to log in 5 times in 10 minutes, ban them for 1 hour.

sudo systemctl enable fail2ban
sudo systemctl start fail2ban

# Check who's been banned
sudo fail2ban-client status sshd

Analogy: fail2ban is like a bouncer who remembers faces. Mess up too many times and you’re not getting in.
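Two fail2ban-client subcommands are worth knowing before you need them (the `banned` subcommand requires fail2ban 0.11 or later; the IP below is an example address):

```shell
# List currently banned IPs across all jails
sudo fail2ban-client banned

# Unban an address, e.g. if you banned yourself with login typos
sudo fail2ban-client set sshd unbanip 203.0.113.42
```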

1.4 Tailscale: make your server invisible

Tailscale is one of the most impactful security upgrades you can make: it takes admin access off the public internet entirely, so SSH brute-force attempts and port scanners never reach your server in the first place. Tailscale creates a private encrypted network (a “tailnet”) between your devices using WireGuard.

What Is Tailscale?

Think of the internet as a giant city. Your server is a building with an address anyone can look up. Tailscale moves that building into a private gated compound — only people with your specific gate key can even find it, let alone enter.

Under the hood, Tailscale uses WireGuard, a modern VPN protocol that’s faster and simpler than older VPNs like OpenVPN or IPSec. But unlike traditional VPNs, there’s no central VPN server to manage — devices connect directly to each other (peer-to-peer).

Setting It Up

On your VPS:

# Install Tailscale
curl -fsSL https://tailscale.com/install.sh | sh

# Start Tailscale with SSH support
sudo tailscale up --ssh

On your laptop/desktop: Download Tailscale from tailscale.com and sign in with the same account.

Both devices are now on the same private network. Your server gets a private IP like 100.x.x.x that only your Tailscale devices can reach.

Tailscale SSH: Ditch Traditional SSH Keys

One thing many guides skip: Tailscale SSH replaces your entire SSH key management system.

Instead of managing authorized_keys files and SSH key pairs, Tailscale authenticates you based on your Tailscale identity (tied to your Google/GitHub/Microsoft login).

# Connect to your server — no SSH keys needed
ssh your-user@your-server-tailscale-name

# Or use the Tailscale IP
ssh your-user@100.64.0.10

Why this is better

  • No SSH keys to manage, rotate, or lose
  • Authentication tied to your identity provider (Google, GitHub, etc.)
  • Access control managed in the Tailscale admin dashboard
  • Connections are encrypted end-to-end with WireGuard
  • If someone steals your laptop, you can revoke that Tailscale device instantly — no SSH key rotation needed

How to enable it

# On the VPS, start Tailscale with SSH
sudo tailscale up --ssh

# Now you can connect without traditional SSH
ssh user@your-vps-name

You can manage who has SSH access from the Tailscale admin console at login.tailscale.com.

Lock Down the Firewall (Tailscale-Only Access)

Once Tailscale is set up, you can close every public admin port. (If you run a public website/API, you’ll still leave 80/443 open.)

# Allow SSH only over the Tailscale interface
# Pick the port your SSH daemon is actually using (22 if you didn't change it)
sudo ufw allow in on tailscale0 to any port 22 proto tcp

# If you previously allowed SSH publicly, remove that rule:
sudo ufw status numbered
# Then delete the relevant "ALLOW 22/tcp" or "ALLOW 8822/tcp" rule by number, e.g.:
# sudo ufw delete 3

Now your server ignores SSH traffic from the public internet. No port scanning, no brute force attempts, nothing.

Before Tailscale:

Public Internet → Your Server
Anyone can try to connect
Bots scanning 24/7
Brute force attempts every minute

After Tailscale:

Public Internet → [Nothing. Server doesn't respond]
Tailscale Network → Your Server
Only your authorized devices

Tailscale MagicDNS

Tailscale gives your devices human-readable names. Instead of remembering 100.64.0.10, you can do:

ssh your-user@my-vps

MagicDNS is enabled by default. Your devices are named after their hostnames.

Free TLS Certificates via Tailscale

Need HTTPS for a web service? Tailscale can issue TLS certificates automatically for any device on your tailnet:

# Get a TLS certificate for your VPS
sudo tailscale cert your-vps-name.your-tailnet.ts.net

This gives you a valid certificate signed by Let’s Encrypt, no manual renewal needed. Great for internal dashboards, APIs, or any service that needs encryption.

IPv6: The Forgotten Back Door

Another thing most guides skip: your server speaks two network protocols, IPv4 (the 192.168.x.x style you are used to) and IPv6 (the longer 2001:db8::1 style). Many people set up a firewall for IPv4 and forget that IPv6 is still exposed.

The nuance with Tailscale: IPv6 is actually important for Tailscale — it uses IPv6 internally for its mesh network. Disabling it system-wide can break Tailscale connectivity or force it to use slower relay servers.

The right approach: Don’t disable IPv6 — just make sure your firewall covers it too.

# UFW handles both IPv4 and IPv6 by default
# Verify IPv6 is enabled in UFW config
sudo grep IPV6 /etc/default/ufw
# Should show: IPV6=yes

With IPV6=yes, your “default deny incoming” rule applies to both IPv4 and IPv6 traffic. No back door.

Verify nothing is exposed on IPv6:

# Check what's listening on IPv6
sudo ss -tlnp6

# You should see very little here — ideally nothing on public interfaces

If you’re NOT using Tailscale and don’t need IPv6, you can disable it entirely:

# Add to /etc/sysctl.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1

# Apply
sudo sysctl -p

But if you use Tailscale (recommended), keep IPv6 enabled and let the firewall handle it.

What About Availability?

Concern: “What if Tailscale goes down?” Answer: Tailscale’s control plane helps devices find each other, but once connected, traffic flows directly between devices. If Tailscale’s servers go down, existing connections keep working. For new connections, their control plane is usually reliable, and you can also run your own coordination server as a backup.

Concern: “Is my traffic going through Tailscale’s servers?” Answer: Usually no. Tailscale uses “DERP relays” only when direct connections are not possible. The relay servers still cannot decrypt your traffic — it is encrypted end-to-end with WireGuard.


Layer 2: access control

What’s the problem?

Passwords are weak. People reuse them, they can be guessed, and they can be brute-forced. SSH keys are dramatically more secure.

2.1 Create a non-root user (do this first)

If you’re logging in as root, fix that now:

sudo adduser deploy
sudo usermod -aG sudo deploy

Open a new terminal and verify:

SSH_PORT=22 # or 8822 if you changed it
ssh -p "$SSH_PORT" deploy@your-server-ip
sudo -v

Then lock down SSH root login in /etc/ssh/sshd_config:

PermitRootLogin no

Restart SSH:

sudo systemctl restart ssh

2.2 SSH keys: throw away the password

If you’re doing Tailscale-only admin, you can skip this section (Tailscale SSH handles auth). If you’re doing Public SSH, do this and then disable passwords.

SSH keys work like a lock-and-key pair:

  • Public key = the lock (goes on your server)
  • Private key = the key (stays on your computer, never shared)

On your local machine:

# Generate a key pair (Ed25519 is modern and secure)
ssh-keygen -t ed25519 -C "you@example.com"

# Copy the public key to your server
SSH_PORT=22 # or 8822 if you changed it
ssh-copy-id -p "$SSH_PORT" deploy@your-server-ip

On your server, disable password login entirely:

Edit /etc/ssh/sshd_config:

PasswordAuthentication no
PubkeyAuthentication yes

Then restart SSH:

sudo systemctl restart ssh

The result: even if someone guesses “password123,” it does not matter, because the server will never ask for a password.
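To confirm the lockdown from your laptop, force a password-only login attempt (replace the user and host with your own); the server should refuse without ever prompting:

```shell
# Ask for password auth only; pubkey is explicitly disabled for this attempt
ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no deploy@your-server-ip
# Expected: "Permission denied (publickey)."
```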

2.3 Clean up your SSH keys

Over time, your ~/.ssh/authorized_keys file can accumulate old or duplicate keys. Audit it:

# See what keys are authorized
ssh-keygen -l -f ~/.ssh/authorized_keys

Remove any keys you don’t recognize. Every key is a potential entry point.
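A cautious way to retire a key, assuming you identify it by its trailing comment ("old-laptop" here is a placeholder): back the file up, filter it, and re-tighten permissions:

```shell
# Keep a backup, then drop the line whose comment matches the retired key
cp ~/.ssh/authorized_keys ~/.ssh/authorized_keys.bak
grep -v 'old-laptop' ~/.ssh/authorized_keys.bak > ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# Confirm what's left is exactly what you expect
ssh-keygen -l -f ~/.ssh/authorized_keys
```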

2.4 Disable X11 forwarding

X11 forwarding lets you run graphical apps from a server on your local screen. Unless you specifically need this (you almost certainly don’t on a VPS), turn it off:

Edit /etc/ssh/sshd_config:

X11Forwarding no

Why? An attacker who gets SSH access could use X11 to display fake login prompts on your screen and steal credentials.

2.5 Lock down file permissions

Configuration files often contain sensitive data — API keys, database passwords, tokens. Make sure only the owner can read them:

# Secure config files: owner read/write only
chmod 600 ~/.my-app/config.json
chmod 600 ~/.ssh/authorized_keys
chmod 700 ~/.ssh

# Check current permissions
ls -la ~/.my-app/

What do the numbers mean?

  • 600 = owner can read and write, nobody else can do anything
  • 700 = owner can read, write, and enter the directory, nobody else
  • 644 = owner can read/write, everyone else can only read (fine for non-sensitive files)

Why it matters: If a file is readable by “others” (permissions like 644 or 755), any user on the system — or any process running as a different user — can read your secrets. On a shared server or if an attacker gets limited access, this is the difference between a contained incident and a full breach.

Quick audit:

# Find config files that are too open (readable by others)
find "$HOME" \( -name "*.json" -o -name "*.env" -o -name "*.key" \) -print0 2>/dev/null | \
  xargs -0 ls -la 2>/dev/null | grep -v "^-rw-------"

If that command outputs anything, tighten those permissions.


Layer 3: application security

What’s the problem?

Even behind a firewall with perfect SSH config, the applications running on your server need their own protection.

3.1 Docker: keep services isolated

If you run Docker containers, two critical settings:

Bind to localhost only:

# In docker-compose.yml
services:
  my-database:
    ports:
      - "127.0.0.1:5432:5432"   # Only accessible from the server itself
    # NOT "5432:5432"             # This exposes it to the world!

Why? Docker port mappings bypass UFW by default. Even if your firewall blocks port 5432, Docker’s -p 5432:5432 opens it anyway. Binding to 127.0.0.1 prevents this.
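To double-check a container's exposure from the server itself (ss ships with iproute2 on Ubuntu; 5432 matches the example above):

```shell
# See which address the port is actually bound to
sudo ss -tlnp | grep 5432
# Good: 127.0.0.1:5432 (localhost only)
# Bad:  0.0.0.0:5432 or [::]:5432 (world-reachable despite UFW)
```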

Set resource limits:

services:
  my-app:
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 512M

Why? Without limits, a buggy app or a crypto-miner injected by an attacker can eat all your RAM and CPU, crashing everything else on the server.

Analogy: Resource limits are like giving each tenant in your building a power meter. One tenant can’t blow the fuse for the whole building.

3.2 TLS/HTTPS: encrypt everything

Any service that communicates over the network should use TLS (the “S” in HTTPS). This encrypts data in transit so no one can eavesdrop.

For web services, use Let’s Encrypt for free certificates:

sudo apt install certbot
sudo certbot certonly --standalone -d yourdomain.com

If you’re using Tailscale, it provides automatic TLS certificates for your private network — no setup needed.
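Let’s Encrypt certificates expire after 90 days. The certbot package sets up a systemd timer to renew them automatically; it is worth rehearsing once so renewal day holds no surprises:

```shell
# Simulate a renewal without touching real certificates
sudo certbot renew --dry-run

# Confirm the renewal timer is scheduled
systemctl list-timers | grep certbot
```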


Layer 4: monitoring and detection

What’s the problem?

Prevention is not enough. You also need to know when something goes wrong. Without monitoring, issues can sit unnoticed for a long time. With basic monitoring and alerts, you can often catch problems quickly.

4.1 Keep your system updated

Keeping your system updated matters more than most people realise — the majority of successful breaches exploit vulnerabilities that already had patches available. Staying current closes those doors before attackers walk through them:

sudo apt update && sudo apt upgrade -y

Automate it with unattended upgrades:

sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

This automatically installs security patches every day. No action needed from you.

Enable auto-reboot so kernel updates actually take effect:

Edit /etc/apt/apt.conf.d/50unattended-upgrades:

Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "03:00";

Why auto-reboot? Kernel security patches are downloaded and installed, but they don’t actually activate until the server restarts. Without auto-reboot, your server could be running a vulnerable kernel for weeks. With it, the patch activates at 3 AM — about 20 seconds of downtime.
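You can check at any time whether the server is waiting on a reboot; Ubuntu drops a flag file when one is needed:

```shell
# /var/run/reboot-required appears when an installed update needs a restart
if [ -f /var/run/reboot-required ]; then
  echo "Reboot needed for:"
  cat /var/run/reboot-required.pkgs 2>/dev/null
else
  echo "No reboot pending"
fi
```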

4.2 File integrity monitoring (AIDE)

AIDE takes a snapshot of your system files and alerts you if anything changes unexpectedly.

sudo apt install aide

# Create the initial snapshot (takes 15-20 minutes)
sudo aideinit
sudo mv /var/lib/aide/aide.db.new /var/lib/aide/aide.db

Set up a daily check:

# Note: sudo does not apply to shell redirection, so write the file with tee
sudo mkdir -p /var/log/aide
sudo tee /etc/cron.daily/aide-check >/dev/null <<'EOF'
#!/bin/bash
aide --check >> /var/log/aide/daily-$(date +%Y%m%d).log 2>&1
EOF
sudo chmod +x /etc/cron.daily/aide-check

What it catches: If someone modifies your SSH config, replaces a system binary with a trojan, or installs a rootkit — AIDE will flag it.

Analogy: AIDE is like taking a photo of every room in your building each night. If a window is broken or a lock is changed, you’ll see the difference immediately.

4.3 Log analysis (Logwatch)

Your server generates logs for everything. Logwatch reads them all and sends you a daily summary.

sudo apt install logwatch

It’ll tell you things like:

  • “143 failed SSH login attempts from 203.0.113.42” (example IP)
  • “fail2ban banned 2 IPs today”
  • “The gateway service restarted once”

Instead of reading thousands of log lines, you get a one-page summary.
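You do not have to wait for the nightly report; Logwatch can print one on demand:

```shell
# Today's summary, straight to the terminal
sudo logwatch --detail low --range today --output stdout
```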

4.4 Real-time alerts

Daily summaries are great, but for critical issues you want to know now. Set up a simple monitoring script that checks every hour and sends you a message (via Telegram, email, Slack — whatever you prefer) if something’s wrong.

Things worth alerting on:

  • More than 10 failed SSH attempts in an hour (possible brute-force attack)
  • A critical service goes down
  • Docker containers crash

Start simple. A 20-line bash script in cron is better than a complex monitoring stack you never finish setting up.
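As a sketch of what that can look like (the notify() body and the "myapp" service name are placeholders; swap the echo for a curl call to your Telegram bot or Slack webhook):

```shell
#!/bin/bash
# Minimal hourly security check - run it from cron.
# Placeholder notifier: replace the echo with a call to your alert channel.
notify() { echo "ALERT: $1"; }

# 1. Too many failed SSH logins in the last hour?
FAILS=$(journalctl -u ssh --since "1 hour ago" 2>/dev/null | grep -c "Failed password" || true)
if [ "${FAILS:-0}" -gt 10 ]; then
  notify "$FAILS failed SSH logins in the past hour"
fi

# 2. Is the critical service still running? ("myapp" is a placeholder)
if ! systemctl is-active --quiet myapp 2>/dev/null; then
  notify "myapp is not running"
fi
```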


Layer 5: supply chain security

What’s the problem?

Your code might be fine, but your dependencies can still introduce vulnerabilities. A lot of breaches happen through third-party packages, not the application’s own code.

5.1 Scan dependencies in CI

If you use a CI/CD pipeline (GitHub Actions, GitLab CI, etc.), add a dependency audit step:

# In your CI workflow
- name: Audit dependencies
  run: npm audit --audit-level high
  # or: pip audit / pnpm audit / cargo audit

This checks every dependency against known vulnerability databases and fails the build if something critical is found.

5.2 Automated dependency updates

Tools like Dependabot (GitHub) or Renovate automatically create pull requests when your dependencies have updates available:

# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"

You’ll get a PR every week with updated packages. Review it, merge it, and your dependencies stay fresh.

5.3 PR-time vulnerability checks

GitHub’s Dependency Review action scans pull requests for newly introduced vulnerabilities:

# .github/workflows/dependency-review.yml
name: Dependency Review
on: [pull_request]
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/dependency-review-action@v4
        with:
          fail-on-severity: high

If a PR introduces a package with a known vulnerability, it’ll block the merge and leave a comment explaining why.


Layer 6: recovery

What’s the problem?

Even with good security, things break. Hardware fails, and humans make mistakes. You need a plan for getting back up quickly.

6.1 Automated backups

Back up your critical configuration files daily:

#!/bin/bash
# /usr/local/bin/backup-configs.sh
BACKUP_DIR="/root/backups"
DATE=$(date +%Y%m%d)

mkdir -p "$BACKUP_DIR"

tar czf "$BACKUP_DIR/config-$DATE.tar.gz" \
    /etc/ssh/sshd_config \
    /etc/ufw \
    /path/to/your/app/config \
    2>/dev/null

# Keep last 30 days only
find "$BACKUP_DIR" -name "config-*.tar.gz" -mtime +30 -delete

Schedule it:

# Run daily at 2 AM (append, so any existing root cron entries are kept)
( sudo crontab -l 2>/dev/null; echo "0 2 * * * /usr/local/bin/backup-configs.sh" ) | sudo crontab -

Important: Backups on the same server only protect against config mistakes, not hardware failure. For true disaster recovery, copy backups off-server (to S3, another VPS, or your local machine).
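A backup you have never restored is only a hope. A quick periodic check (the path and name pattern match the script above):

```shell
# Pick the newest archive and make sure it's readable
LATEST=$(ls -t /root/backups/config-*.tar.gz 2>/dev/null | head -1)
if [ -n "$LATEST" ]; then
  tar tzf "$LATEST" | head                 # list contents
  mkdir -p /tmp/restore-test
  tar xzf "$LATEST" -C /tmp/restore-test   # restore to scratch, never over /etc
else
  echo "No backups found"
fi
```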

6.2 Auto-restart services

If your app crashes, it should come back on its own. systemd can do this:

# /etc/systemd/system/myapp.service
[Unit]
Description=My app
After=network.target

[Service]
ExecStart=/usr/bin/node /path/to/app.js
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

This means: if the app crashes, wait 10 seconds, then start it again. No human needed.
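To activate the unit (assuming it includes an [Install] section with WantedBy=multi-user.target, which systemctl enable needs):

```shell
# Load the new unit file, start it now, and start it at every boot
sudo systemctl daemon-reload
sudo systemctl enable --now myapp

# Verify it's running
systemctl status myapp --no-pager
```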

What does recovery look like in practice? On our production server, a kernel update triggered a reboot. The server came back up, systemd automatically restarted all services, and the total downtime was 23 seconds. No one had to wake up at 3 AM.


Running a security audit

After you’ve set everything up, run through this audit to verify nothing was missed. This is something you should repeat monthly or after any significant changes.

Quick self-audit script

Run these commands and check the output:

echo "=== 1. SSH Hardening ==="
sudo sshd -T | grep -E 'passwordauthentication|pubkeyauthentication|x11forwarding|port'
# Want: passwordauthentication no, pubkeyauthentication yes, x11forwarding no

echo "=== 2. Firewall ==="
sudo ufw status verbose
# Want: Status active, Default deny incoming

echo "=== 3. fail2ban ==="
sudo fail2ban-client status sshd
# Want: active, with some stats

echo "=== 4. Tailscale ==="
tailscale status
# Want: shows your devices, connected

echo "=== 5. Open Ports ==="
sudo ss -tlnp
# Want: only localhost (127.0.0.1) and Tailscale (100.x.x.x) listeners

echo "=== 6. IPv6 Coverage ==="
sudo grep IPV6 /etc/default/ufw
# Want: IPV6=yes (so firewall covers IPv6 too)

echo "=== 7. Config Permissions ==="
# Run as the user that owns your app config (or adjust the search roots).
find "$HOME" \( -name "*.json" -o -name "*.env" -o -name "*.key" \) -print0 2>/dev/null | \
  xargs -0 ls -la 2>/dev/null | grep -v "^-rw-------" | grep -v "^d"
# Want: no output (everything is 600)

echo "=== 8. Pending Updates ==="
apt list --upgradable 2>/dev/null | tail -n +2
# Want: no output (everything up to date)

echo "=== 9. Unattended Upgrades ==="
grep -E 'Automatic-Reboot' /etc/apt/apt.conf.d/50unattended-upgrades | grep -v '//'
# Want: Automatic-Reboot "true", Automatic-Reboot-Time "03:00"

echo "=== 10. Docker Binding ==="
docker ps --format '{{.Names}}: {{.Ports}}' 2>/dev/null
# Want: all ports bound to 127.0.0.1, not 0.0.0.0

echo "=== 11. Backups ==="
ls -lh /root/backups/ 2>/dev/null | tail -3
# Want: recent backup files

What a passing audit looks like

  • SSH passwords disabled. Pass: passwordauthentication no. Fail: passwordauthentication yes
  • Firewall active. Pass: Status: active, Default: deny. Fail: Status: inactive
  • fail2ban running. Pass: shows jail status. Fail: ERROR: no such jail
  • Tailscale connected. Pass: shows devices. Fail: not installed
  • No public ports. Pass: only 127.0.0.1 and 100.x. Fail: shows 0.0.0.0 listeners
  • IPv6 firewalled. Pass: IPV6=yes. Fail: IPV6=no
  • Config permissions. Pass: no output (all secure). Fail: lists readable files
  • Updates current. Pass: no output. Fail: lists pending packages
  • Auto-reboot enabled. Pass: "true". Fail: "false" or commented out
  • Docker localhost-only. Pass: all 127.0.0.1:. Fail: shows 0.0.0.0:
  • Backups running. Pass: recent .tar.gz files. Fail: empty or old files

If everything passes, your server is in a much better state. Save this checklist and run it once a month.


Putting it all together

Here’s a checklist you can follow in order. Each step builds on the previous one:

Phase 1: the basics (20 minutes)

  • Update all packages: sudo apt update && sudo apt upgrade -y
  • Enable automatic security updates: sudo apt install unattended-upgrades
  • Enable auto-reboot for kernel patches
  • Change SSH port from 22 to something else
  • Disable password authentication (use SSH keys only)
  • Disable X11 forwarding
  • Set up UFW firewall (deny all, allow SSH)

Phase 2: active defense (30 minutes)

  • Install fail2ban
  • Install AIDE for file integrity monitoring
  • Install Logwatch for daily log summaries
  • Set up a simple alerting script (Telegram/email/Slack)

Phase 3: application hardening (20 minutes)

  • Bind Docker ports to 127.0.0.1 only
  • Add resource limits to all containers
  • Enable TLS for any exposed services

Phase 4: network isolation (15 minutes)

  • Install Tailscale on VPS and your devices
  • Enable Tailscale SSH (tailscale up --ssh)
  • Lock UFW to Tailscale-only (100.64.0.0/10)
  • Verify IPv6 is covered by UFW (IPV6=yes)
  • Verify no ports listening on 0.0.0.0 (ss -tlnp)

Phase 5: ongoing protection (20 minutes)

  • Set up automated config backups
  • Add dependency scanning to your CI pipeline
  • Enable Dependabot or Renovate for automatic updates
  • Configure systemd auto-restart for critical services
  • Run the security audit script above

Common mistakes to avoid

1. “I’ll secure it later”

Your server is being scanned within minutes of going live. Secure it before deploying anything.

2. “Nobody would target my little server”

Bots don’t care how important your server is. They scan everything. Your server will be probed thousands of times per day.

3. “I have a strong password”

Passwords can be brute-forced, leaked, or phished. SSH keys are orders of magnitude more secure. Turn off passwords entirely.

4. “I set up the firewall, I’m done”

Docker bypasses UFW by default. A firewall alone isn’t enough — you need localhost-only bindings, resource limits, and monitoring too.

5. “I installed updates once”

New vulnerabilities are discovered daily. Automatic updates are essential, not optional.

6. “I’ll notice if something goes wrong”

No, you won’t. Not without monitoring. Set up alerts and you’ll know within an hour instead of within months.


Quick reference card

Save this somewhere handy:

# --- Daily (automated, nothing to do) ---
# Security updates: auto-installed
# AIDE check: runs at midnight
# Backups: 2 AM
# Auto-reboot (if kernel update pending): 3 AM
# Security monitor: every hour

# --- Weekly check-in (5 minutes) ---
sudo fail2ban-client status sshd      # Who got banned?
ls -lh /root/backups/                  # Are backups running?
tail -20 /var/log/security-monitor.log # Any alerts?

# --- Monthly (15 minutes) ---
docker stats --no-stream               # Resource usage healthy?
ssh-keygen -l -f ~/.ssh/authorized_keys # Any unknown keys?
aide --check                            # Any unexpected file changes?

# --- Emergency ---
sudo fail2ban-client status sshd       # Check bans
sudo journalctl -u myapp -n 50        # Check app logs
sudo ufw status verbose                # Check firewall

Glossary

  • VPS: a virtual server you rent from a cloud provider
  • SSH: a secure way to remotely control your server via the command line
  • Firewall (UFW): rules that control which network traffic is allowed in and out
  • fail2ban: software that automatically blocks IPs that fail to log in too many times
  • SSH key: a cryptographic file pair (public + private) used instead of passwords
  • TLS/SSL: encryption for data traveling over the network (the “S” in HTTPS)
  • Docker: software that runs applications in isolated containers
  • AIDE: tool that detects unauthorized changes to system files
  • Logwatch: tool that summarizes your server logs into a daily report
  • Tailscale: a VPN service that creates a private network between your devices
  • systemd: Linux’s service manager; starts, stops, and restarts your apps
  • Unattended upgrades: automatic security patch installation
  • Dependabot: GitHub tool that automatically proposes dependency updates
  • CI/CD: automated pipeline that tests and deploys your code
  • CVE: a publicly known security vulnerability with a tracking number
  • OWASP Top 10: the 10 most critical web application security risks

What changes after this

Starting from a basic Ubuntu VPS, these steps give you:

  • Time to detect a problem: days or weeks → often under 1 hour
  • Recovery from a crash: manual (you fix it) → automatic (23 seconds in the example above)
  • Pending security updates: weeks old → usually current
  • Monitoring coverage: SSH logs only → files, logs, services, containers, and dependencies
  • Backup strategy: none → daily, 30-day retention
  • Attack surface: wide open → smaller and monitored

Total time: ~90 minutes
Total cost: $0


Security is not about being perfect. It is about reducing avoidable risk and making your server harder to abuse. If you work through this checklist, your VPS will be in much better shape than most.