Backing up FreeNAS to Backblaze B2

I am not the original author of the content on this page. While searching for information about setting up Backblaze B2 on a FreeNAS installation, I came across this hyperlink: https://blog.justin-tech.com/blog/freenas-b2-backup. Unfortunately, that page now leads to an HTTP 502. Google had it cached.

For what it may be worth to FreeNAS users: I replaced /usr/local/bin/rclone with rclone v1.52.0-023-g399e8c0b-fix-3991-b2-copy-beta and successfully completed a B2 sync that had previously hung on the "Copy source too big for modification time update" issue.

The instructions worked almost perfectly, except for a couple adjustments due to updated versions of the relevant software. The idea that this useful information may disappear into the aether worried me, as it may still help others, and I'll more than likely need it again in the future. In the spirit of open source, I copied the page as rich text in Firefox, ran it through an HTML to Markdown converter, and set about cleaning it up, fixing conversion errors and formatting lost in the text-only view in Google's cache. I am posting the results below in the hope that it remains available and useful for others.

If you are the author of this content and object to this use, please contact me here on GitHub and I will promptly remove it. I humbly thank you for writing it in the first place. It really helped!

If you use FreeNAS, it's probably because you care about your data. Part of data security is ensuring the availability of your data. To that end, you need to ensure that said data is backed up. There are generally two reasonable ways to back up your data from FreeNAS: local backup (using ZFS replication) and cloud backup.

In this article, we will look at setting up cloud backups to Backblaze B2, an economical cloud backup solution similar to Amazon S3.

Step 1: Sign up

Sign up for a Backblaze account here. Once you have created an account, go to the 'My Settings' tab, and under 'Enabled Products', check the box beside B2 Cloud Storage. This enables your account for using B2.

Step 2: Create a Bucket

Once you have enabled your account for B2, you need to create a bucket (where your files are stored). To do this, on the left side of the screen, select 'Buckets' under B2 Cloud Storage. Then, select 'Create a Bucket'. Also on this page, be sure to click on 'Show Account ID and Application Key', mark down your Account ID, and click 'Create Application Key'. Mark this down as well, since you will not be able to see it again, and you will need it when we set up rclone in a later step.

Step 2a: Setup Caps and Alerts

While this step is optional, it is highly recommended so that you get notified about any charges against your account that you may not be expecting. I set mine to a cap of $1 a day for each section. This will give you 6 TB of storage and a good number of API calls.

Step 2b: Have a look around your account

Have a look around your Backblaze account, there is a great get started guide available here.

Step 3: Setting up a FreeBSD Jail on FreeNAS

  1. Login to your FreeNAS GUI, and go to the Jails section.

  2. Click 'Add Jail'.

  3. Enter a Jail Name. I called mine 'b2-backups'.

  4. Click Ok, and your jail will be created. Note that this may take a little bit of time. You should be able to close the dialog box if needed; the jail will be created in the background.

  5. Click on your Jail and click the 'shell' button in the bottom left. This will open a shell session to the Jail.

  6. Enter vi /etc/rc.conf and change sshd_enable='NO' to sshd_enable='YES'. This will enable SSH to the jail.

    FreeBSD's vi works like vim: press i to insert text, D to delete to the end of the line, and use the arrow keys to move around. Save and exit by pressing the ESC key and then typing :wq.

    To get SSH access as root, you will need to run passwd root to set a root password, add PermitRootLogin yes to /etc/ssh/sshd_config, and reboot the jail.

    At this point, you can switch over to SSH, if you prefer that to the shell in the FreeNAS GUI.

  7. Install wget using pkg install wget; this will allow you to download the rclone binary.

  8. Download the latest rclone binary: cd /tmp && wget https://downloads.rclone.org/rclone-v1.38-freebsd-amd64.zip.

  9. Run unzip rclone-v1.38-freebsd-amd64.zip to extract the binary. rclone version 1.38 is the latest stable release at the time of this writing.

  10. Copy the rclone executable to /usr/bin by running cd rclone-v1.38-freebsd-amd64 && cp ./rclone /usr/bin.

Step 3a: Adding storage to the Jail

  1. Create a new folder structure in the Jail where you will mount your FreeNAS datasets; I put mine in /mnt/storage. It is a good idea to make a folder for each dataset you want to mount.
  2. In the FreeNAS GUI, go to the Jails tab, and then the Storage sub-tab.
  3. Click 'Add Storage'
  4. Select the Jail you want to add the storage to.
  5. Select the source dataset.
  6. Select the destination (this will be the folder structure in the jail that you created in Step 3a-1).
  7. Optionally select read-only.
  8. Leave 'Create Directory' selected.
  9. Click 'Ok'.
  10. Repeat steps 3a-4 to 3a-9 for each dataset you want to back up to B2.

Step 4: Configuring rclone

  1. Run rclone config to initiate the configuration of rclone
  2. Press n to create a new remote (a remote is what rclone uses to know where to copy/sync your files).
  3. Enter a name; I chose b2.
  4. Press 3.
  5. Enter your account ID from your B2 account.
  6. Enter your application Key from your B2 account.
  7. Leave endpoint blank.
  8. Press y to save the config.
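Once saved, rclone writes the remote to its config file (~/.config/rclone/rclone.conf for rclone 1.38). For reference, the resulting section should look roughly like the fragment below; the values shown are placeholders, not real credentials:

```
[b2]
type = b2
account = YOUR_ACCOUNT_ID
key = YOUR_APPLICATION_KEY
endpoint =
```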

Step 4a: Configuring encryption

  1. Follow steps 1-3 from Step 4. Note: give this new remote a different name from the previous one.

  2. Press 6 (Encrypt remote).

  3. Enter the name of the remote you created in Step 4, number 3, followed by the name of your bucket. For example, b2:storage in my case.

  4. Choose whether or not you want to encrypt the file names: selecting 1 leaves file names unencrypted, while selecting 2 encrypts them. I chose 2.

  5. Choose y to type in your own password, or g to generate a strong password randomly. If you choose g, you are given an option as to how strong a password you want to generate.

  6. Create a password for the salt. This is recommended if you chose to enter your own password in the previous step. Note that for security, these passwords should not be the same.

  7. Select y to accept the configuration.

    Note: The rclone config file is not encrypted by default, and Application Keys and your encryption passwords are stored in plaintext. It is recommended to set a password for the config file, and/or ensure the security of the rclone.conf file.

    If you need to recover encrypted files from B2, you NEED both passwords (if you set two), otherwise your files will be completely unrecoverable.
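For reference, the crypt remote is stored in the same config file as a section roughly like the one below. The remote name b2-crypt is a hypothetical example, and rclone stores the two passwords in an obscured (but reversible) form rather than as true ciphertext:

```
[b2-crypt]
type = crypt
remote = b2:storage
filename_encryption = standard
password = OBSCURED_PASSWORD
password2 = OBSCURED_SALT_PASSWORD
```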

Step 4b: Creating the bash script

In this section, we will look at creating the shell script we will use with cron in order to back up any changes to our local storage to B2.

  1. Create a new file in /root, I called mine rclone-cron.sh.

  2. Copy the following:

    #!/bin/sh
    if pidof -o %PPID -x 'rclone-cron.sh'; then
        exit 1
    fi
    echo starting storage sync
    rclone copy {/path/to/local/storage} {name of your crypt remote}: -v --log-file={/path/to/log/file} --min-age 15m --copy-links
    exit

Let's break that down a bit and look at what the script actually does:

  1. #!/bin/sh

    Run the script with the sh interpreter.

  2. if pidof -o %PPID -x 'rclone-cron.sh'; then

    If the script is currently being run, then:

    exit 1

    Exit without starting a second copy. This is useful if your initial backup takes a while to run, as cron won't launch rclone again while a previous run is still going.

  3. fi

    closes the if statement.

  4. echo starting storage sync

    Prints a message to the terminal (or the log) that the sync is starting.

  5. rclone copy {/path/to/local/storage} {name of your crypt remote}: -v --log-file={/path/to/log/file} --min-age 15m --copy-links

    Runs rclone with the copy command, which does not delete files from B2 when they are deleted locally. Alternatively, change copy to sync to keep an exact copy on B2 (sync does delete files from B2 that are deleted locally).

    Uses the -v flag for verbosity.

    --log-file={/path/to/log/file} Tells rclone where to create a log file.

    --min-age 15m Tells rclone not to sync files modified less than 15 minutes ago, which helps ensure copied files are complete rather than still being written.

    --copy-links Tells rclone to follow symlinks.

  6. exit

    exits the script when the copy is finished.

Step 4c: Creating the cron entry

  1. Run crontab -e to open the cron editor.

  2. Enter 0 1 * * * /root/rclone-cron.sh

    This will run the script we created in Step 4b once a day, at 1:00 AM.

Step 5: Run the script!

  1. chmod +x /root/rclone-cron.sh

    makes the script executable

  2. cd /root/ && ./rclone-cron.sh

    runs the script.

    rclone does not run in the background. It is recommended to run the script in tmux or similar, or to wait for the crontab to run, as the initial backup will probably take a long time if you have a lot of data like I do.

Step 5a: Check B2 console to see if it's working.

  1. Log into your Backblaze account and take a look at your bucket. You should see that files are being copied to B2.

This completes the guide on setting up rclone to back up to B2 on FreeNAS. Rclone can back up to many cloud providers; have a look at different providers if Backblaze is not your cup of tea.

I’ve been constantly evolving my cloud backup strategies to find the ultimate cheap S3 cloud backup solution.

The reason for sticking to “S3” is because there are tons of cloud provided storage service implementations of the S3 API. Sticking to this means that one can generally use the same backup/restore scripts for just about any service.

The S3 client tooling available can of course be leveraged everywhere too (s3cmd, aws s3, etc…).

BackBlaze B2 gives you 10GB of storage free for a start. If you don’t have too much to backup you could get creative with lifecycle policies and stick within the 10GB free limit.

Current Backup Solution

This is the current solution I’ve setup.

(Screenshot: Backblaze B2 Cloud Storage console)

I have a bunch of files on a FreeNAS storage server that I need to backup daily and send to the cloud.

I’ve setup a private BackBlaze B2 bucket and applied a lifecycle policy that removes any files older than 7 days. (See example screenshot above).

I leveraged a FreeBSD jail to install my S3 client (s3cmd) tooling, and mount my storage to that jail. You can follow the steps below if you would like to setup something similar:

Step-by-step setup guide

Create a new jail.

Enable VNET, DHCP, and Auto-start. Mount the FreeNAS storage path you’re interested in backing up as read-only to the jail.

The first step in a clean/base jail is to get s3cmd compiled and installed, as well as gpg for encryption support. You can use portsnap to get everything downloaded and ready for compilation.

The compile and install process takes a number of minutes. Once complete, you should be able to run s3cmd --configure to set up your defaults.

For BackBlaze you’ll need to configure s3cmd to use a specific endpoint for your region. Here is a page that describes the settings you’ll need in addition to your access / secret key.

Once gpg is compiled and installed, you should find it under the path /usr/local/bin/gpg, so you can use this path in your s3cmd configuration too.

Double check s3cmd and gpg are installed with simple version checks.

A simple backup shell script

Here is a quick and easy shell script to demonstrate compressing a directory path and all of its contents, then uploading it to a bucket with s3cmd.
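The script itself was lost when this page was recovered from Google's cache, so here is a minimal sketch of what such a script might look like. The SRC and BUCKET values are placeholders (not the original author's), and the demo creates its own stand-in data directory so it can run anywhere:

```shell
#!/bin/sh
# Hypothetical sketch: compress a directory and upload the archive with s3cmd.
SRC=${SRC:-/tmp/backup-demo}             # directory to back up (placeholder)
BUCKET=${BUCKET:-s3://my-backup-bucket}  # target bucket (placeholder)
STAMP=$(date +%Y-%m-%d)
ARCHIVE=/tmp/backup-$STAMP.tar.gz

mkdir -p "$SRC"   # stand-in data directory for this demo; use your real mount

# Compress the directory path and all of its contents.
tar -czf "$ARCHIVE" -C "$(dirname "$SRC")" "$(basename "$SRC")"

# Upload the archive if s3cmd is installed; otherwise leave it on disk.
if command -v s3cmd >/dev/null 2>&1; then
    s3cmd put "$ARCHIVE" "$BUCKET/"
else
    echo "s3cmd not found; archive left at $ARCHIVE"
fi
```

Swap the placeholders for your mounted storage path and bucket, and add s3cmd's -e flag if you want gpg encryption before upload.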

Scheduling the backup script is an easy task with crontab. Run crontab -e and then set up your desired schedule. For example, daily at 25 minutes past 1 in the morning:
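The crontab line itself did not survive the conversion; an entry matching that schedule, with a hypothetical script path, would be:

```
25 1 * * * /root/backup.sh
```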

My home S3 backup evolution

I’ve gone from using Amazon S3, to Digital Ocean Spaces, to where I am now with BackBlaze B2. BackBlaze is definitely the cheapest option I’ve found so far.

Amazon S3 is overkill for simple home cloud backup solutions (in my opinion). You can change to use infrequent access or even glacier tiered storage to get the pricing down, but you’re still not going to beat BackBlaze on pure storage pricing.

Digital Ocean Spaces was nice for a short while, but they have an annoying minimum charge of $5 per month just to use Spaces. This rules it out for me as I was hunting for the absolute cheapest option.

BackBlaze currently has very cheap storage costs for B2. Just $0.005 per GB and only $0.01 per GB of download (only really needed if you want to restore some backup files of course).

Concluding

You can of course get more technical and convince a willing friend/family member to host a private S3-compatible storage service for you, like Minio, but I doubt many would want to go to that level of effort.

So, if you’re looking for a cheap S3 cloud backup solution with minimal maintenance overhead, definitely consider the above.

This is post #4 in my effort towards 100DaysToOffload.