Last updated: August 2025

Dedicated Server Backup

Note: This article series covers configuring Debian 12 for hosting multiple domains and web sites on a single dedicated server. As such, some strategies may be inappropriate for your environment. Sockets for example are appropriate for communication between services hosted on the same machine but not suited to a set up with distributed services (where you'd use ports). Please consult the overview for more information.

I plan to create a root CRON job that runs the daily backup, creates a compressed file and moves it to the backup user's home directory.

Then my backup server will have a CRON job to pull the data over SSH using the backup user's SSH key.

Initially, you just want to be the root user for a bit, so log in to your server and become root:

sudo -s

First, get the backup script files into the root user's home folder (/root). You can download them here; there's one for MySQL and one for your files and directories:

backupsql.sh
backupserver.sh

Copy them to /root.

Make sure that both files have execute permissions:

chmod +x ~/backupsql.sh
chmod +x ~/backupserver.sh


Because MariaDB uses socket security (e.g. if you type mysql on the local machine as root, it just logs you in to MariaDB as root), we don't need to set up a user or password - we'll be running the backupsql.sh script as root!
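
You can prove that to yourself with a quick check; as root, this logs you straight in via the socket, no password prompt:

mysql -e "SELECT CURRENT_USER();"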

nano ~/backupsql.sh

#!/bin/sh

# Directory where you want the backup files to be placed
backupdir="/root/backup_files/sql"

# List all of the MySQL databases that you want to backup in here, 
# each separated by a space
databases="db1 db2 db3"

# MySQL dump command, use the full path name here
mysqldumpcmd="mysqldump"

# MySQL dump options
dumpoptions="--quick --add-drop-table --add-locks --extended-insert --lock-tables"

# Create our backup directory if not already there
mkdir -p ${backupdir}
if [ ! -d ${backupdir} ] 
then
   echo "Not a directory: ${backupdir}"
   exit 1
fi

# Dump all of our databases
echo "Dumping MySQL Databases"
for database in $databases
do
        $mysqldumpcmd $dumpoptions $database > ${backupdir}/${database}.sql
done

You can test it if you like; it should create backup_files/sql and put your database dumps in there:

./backupsql.sh
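
You can confirm the dumps landed where the script expects them:

ls -la /root/backup_files/sql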

If you add a new database to MariaDB that you want to back up, you'll need to add it to this file (in the databases variable).
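
Alternatively, you could build the list automatically each time the script runs rather than maintaining it by hand - a sketch, relying on the same socket login and skipping MariaDB's usual system schemas:

databases=$(mysql -N -B -e "SHOW DATABASES" | grep -Ev '^(information_schema|performance_schema|mysql|sys)$')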

Add the backup user to your server (you'll be dumping your compressed backup into its home in a mo!). You want this user to have a home directory, so pick a unique name (you don't want to use the default backup user!):

adduser serverbackup

Now it's time to grab all your files, which is what backupserver.sh does: it dumps your databases using backupsql.sh, copies your files, and then compresses the lot.

I like to back up all the server configuration (/etc), the websites (/mnt/d1/www), the mailboxes (/var/mail/vmail) and any custom scripts I might be using (dovecot etc.).

My strategy is to keep everything I need to rebuild the configuration if things go wrong, but feel free to be as comprehensive as you like; you just need to copy the rsync line and modify the path to add your own files.

I like to echo what I'm doing as I go because that will appear in your logs later and can be quite useful when diagnosing a problem.

Being the root user is quite useful in this context because you won't fail to back something up because of a lack of permissions.
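
For reference, the heart of backupserver.sh looks something like the sketch below. Treat it as a sketch only - the variable names here are my assumptions, so the downloaded file is the source of truth:

#!/bin/sh

# A sketch of the core of backupserver.sh - the variable names are assumptions
stagingdir="/root/backup_files"        # where files are staged before compression
backupuser="serverbackup"              # user whose home receives the archive
archive="/home/${backupuser}/backup.tar.gz"
cleanup=0                              # 1 = delete the staged files afterwards

# Dump the databases first
/root/backupsql.sh

# Make sure the staging area exists
mkdir -p "${stagingdir}/files"

# rsync each location; -a preserves permissions and only transfers
# files that changed since the last run (delta copying)
echo "Syncing /etc"
rsync -a /etc "${stagingdir}/files/"
echo "Syncing websites"
rsync -a /mnt/d1/www "${stagingdir}/files/"
echo "Syncing mailboxes"
rsync -a /var/mail/vmail "${stagingdir}/files/"

# Compress the lot and hand it to the backup user
echo "Compressing"
tar -czf "${archive}" -C "${stagingdir}" .
chown "${backupuser}:${backupuser}" "${archive}"

# Optionally remove the staged copies (the next run will be slower -
# rsync loses its delta advantage with nothing to compare against)
if [ "${cleanup}" -eq 1 ]; then
    rm -rf "${stagingdir}"
fi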

Edit the backupserver.sh script accordingly:

nano ~/backupserver.sh

If you set the cleanup option to 1, it'll delete the temp files once it's finished, which saves a bit of hard disk space. It's worth mentioning, though, that the files are copied with rsync, which delta copies (only transfers changed files), so your backups will be faster if you leave the files in place.

It's up to you! Make sure you check/edit the first four variables at the top of the script if you don't want to use the setup I'm suggesting. Test it:

./backupserver.sh

Once it finishes, you can verify the backup file exists by looking in the /home/serverbackup folder:

ls -la /home/serverbackup

You should see a nice compressed file in there called backup.tar.gz.
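
If you want to be sure the archive is sound, you can list its contents without extracting it:

tar -tzf /home/serverbackup/backup.tar.gz | head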

Now we just need to edit the root user's CRON table to schedule the job (at 3AM) and log the output to the fancy systemd journal (add the line at the bottom of the file and save it):

crontab -e

0 3 * * * systemd-run --unit backupserver /root/backupserver.sh

If you want to check the journal later, run as root (sudo):

journalctl -u backupserver.service
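
You can also follow the job live while it runs, or narrow the output to today's run:

journalctl -u backupserver.service -f
journalctl -u backupserver.service --since today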

We can stop being the root user now:

exit

Ok, so the server's backing everything up and putting a compressed version in the serverbackup user's home folder. Now we're going to schedule a job on another Linux PC to download that file and store it somewhere.

Log in to that machine on the command line; I'm using my NAS.

We're going to set up a pair of SSH keys so it can all happen over a secure connection, then we're going to schedule another CRON job so it gets done automatically whilst you're asleep.

First let's create those keys. Keep the path it suggests but change the key name to serverbackup (careful, we don't want to overwrite an existing key!), and don't enter a passphrase - we want it to log in unattended:

ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/pi/.ssh/id_rsa): /home/pi/.ssh/serverbackup
Enter passphrase (empty for no passphrase):
Enter same passphrase again:


It will generate a pair of keys and store them in ~/.ssh (run ls ~/.ssh to see them).
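
If you'd rather skip the prompts, the same thing can be done non-interactively (the path matches the transcript above; adjust for your own user):

ssh-keygen -f /home/pi/.ssh/serverbackup -N ""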

Don't touch the one with no extension (just called serverbackup) - that's the private key, which stays on this computer. The public key (serverbackup.pub) is the one that's safe to share, and it's the one we're going to put into the authorized_keys file in the .ssh folder of your serverbackup user's home on the server:

First get your public key and copy it - on your NAS:

cat ~/.ssh/serverbackup.pub

Copy that text (it's your public key), we're going to put it into an authorized_keys file for your serverbackup user.

On your server, log in and create the relevant files and permissions as root (so sudo -s). SSH is pretty fussy about these, so take care and get them right:

mkdir -p /home/serverbackup/.ssh
echo "PASTE_PUBLIC_KEY_HERE" >> /home/serverbackup/.ssh/authorized_keys

chown -R serverbackup:serverbackup /home/serverbackup/.ssh
chmod 700 /home/serverbackup/.ssh
chmod 600 /home/serverbackup/.ssh/authorized_keys


Now you have to tell your client (NAS) to use the serverbackup key specifically when you connect (the -i option!):

ssh -p 2222 serverbackup@74.201.177.82 -i /home/pi/.ssh/serverbackup

The first time you connect, you'll get a message - enter yes to continue.

The authenticity of host '[74.201.177.82]:2222 ([74.201.177.82]:2222)' can't be established.
ECDSA key fingerprint is SHA256:KxuNu8/Bst6CzCUQIlCLtCEMl7lhCIlBQQdSVn6WnrI.
Are you sure you want to continue connecting (yes/no/[fingerprint])? 
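
Optionally, you can save those connection details in the NAS user's ~/.ssh/config so you don't have to repeat the port and identity options every time (the host alias here is just an example):

Host dedicated-backup
    HostName 74.201.177.82
    Port 2222
    User serverbackup
    IdentityFile /home/pi/.ssh/serverbackup

With that in place, ssh dedicated-backup does the same as the long command above.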
We'll create a small script on our NAS to download the backup file, then schedule it as a root CRON job (so sudo crontab -e) to run at 6AM - that leaves three hours for the main backup to finish! Obviously replace the user paths with your own.

We'll also move yesterday's backup into a local yesterday folder, so we've always got two days' worth:

nano ~/dedicated_backup.sh

#!/bin/bash

backup_path="/mnt/d1/backups/dedicated.com_2025"
backup_filename="backup.tar.gz"

fp="${backup_path}/${backup_filename}"

# ensure yesterday folder exists
mkdir -p "${backup_path}/yesterday"

# move the backup if necessary
if [ -f "${fp}" ]; then
   echo "Moving backup to yesterday..."
   mv -f "${fp}" "${backup_path}/yesterday/${backup_filename}"
fi

echo "Downloading the latest file..."
rsync -avz -e "ssh -p 2222 -i /home/pi/.ssh/serverbackup" serverbackup@74.201.177.82:~/backup.tar.gz "${fp}"

Don't forget to set execute permissions on it:

chmod +x ~/dedicated_backup.sh

Test it:

~/dedicated_backup.sh

If it works then add it to the root crontab (6AM):

sudo crontab -e

0 6 * * * /home/pi/dedicated_backup.sh > /var/log/dedicated_backup.log 2>&1

/home/pi is the path to the user folder where I created the script on my NAS. Make sure you enter the correct full path to the folder where you created your script.
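
The next morning you can check that the job ran cleanly by reading that log (as root):

sudo tail -n 20 /var/log/dedicated_backup.log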

That's it! Now your server will back up all your files and your NAS will download them securely for you!



