unRAID 6 Beta 6: Btrfs Quick-Start Guide



GUIDE OUTDATED

 

With current versions of unRAID 6, you can use btrfs and cache pools natively from within the unRAID webGUI and do not need to resort to the manual methods described in this guide.

 

Btrfs Quick-Start Guide

In order to make use of Btrfs in unRAID 6, you first have to select a non-array drive to format as Btrfs.  If you are an unRAID Plus or Pro user, this can be done with your cache drive benefit in the unRAID webGUI, but if you are a Basic user, or wish to test out Btrfs on a non-array partition other than your existing cache drive, you will need to follow the manual method.  This guide will show you how to set up Btrfs on a single device and how to create a Btrfs pool of multiple devices.  You can use either HDDs or SSDs for Btrfs, although most use cases for Btrfs benefit from SSDs.

 

Plus/Pro vs. Basic Edition Usage

In the current beta, support for Btrfs is enabled for all editions, but the ability to manage it from the unRAID webGUI is limited to Plus and Pro customers through the cache drive benefit at this time.  Basic edition users, however, can still use Btrfs in the current beta by following the manual method documented in this guide.  In addition, Plus or Pro users who do not wish to reformat their existing ReiserFS cache drive with Btrfs yet can also follow the manual method to test out Btrfs on a different device (non-array / non-cache).

 

webGUI Method

In this beta, Plus and Pro customers can format their cache drive with the Btrfs file system directly from within the webGUI.  You can do this with either a new or existing device, but either way, ALL DATA WILL BE PURGED FROM THE DEVICE AS PART OF THIS PROCESS!  If using an existing device, be sure to move all your important data off the device before performing this process.  Once you're ready to begin, first log in to the unRAID webGUI and stop the array (click the checkbox, then click "stop"):

 

btrfsqs-1-1024x533.png

 

Once stopped, assign your new device to the cache drive slot in the GUI and then click the word "cache" to the left of the drop-down box:

 

btrfsqs-2.png

 

On the next screen, you can choose between reiserfs and btrfs from the "File system type" drop-down.  Once you've selected "btrfs," click "apply" and then "done" to return to the main page.  From there, scroll down and click "start" to bring the array online.  Once online, the final step is to format the device, which can be done by clicking the checkbox and then clicking "format" like so:

 

btrfsqs-3-e1403113586979.png

 

It may take a few minutes for the format to complete, but when it does, you can verify it by clicking on the word "Cache" again while the array is running and checking the file system type shown on that page.

 

NOTE REGARDING CACHE POOLS:  If you wish to create a pool of btrfs devices for your cache drive, you can follow step 6 from the manual method below to do so.  This will require command-line access to your unRAID system in the current beta.

 

Command Line Method

In the current beta release, users wishing to format a device other than their cache drive with Btrfs, or wishing to test out Btrfs disk pools, will need to do so manually using command-line access to their system.  This guide provides instructions for using the command line to format a non-array partition and to create a disk pool (multiple devices) with Btrfs.

 

Pre-Requisites

To make use of the new Btrfs capabilities in the unRAID 6 beta, you will first need to select a device to format with the file system.  This device's data will be completely erased for the new file system, so back up your data from this device before proceeding.  In addition, if using a brand-new drive, you will need to create a disk partition before proceeding (using fdisk or gdisk; Step 3 below shows an sgdisk example).  Once you are ready and have the device installed in your unRAID server, you can proceed with this guide.

 

Step 1:  Locate Your Device's Drive Letter Designation in the unRAID webGUI

Once your server is booted up with the drive you intend to format with Btrfs installed, log in to the unRAID webGUI to stop the array and locate the letter designation of the drive you wish to format with Btrfs.  This can be done by clicking a drop-down for a drive assignment and viewing all the options, locating the device by name, and then writing down the drive letter.  For example:

 

btrfsqs-4.png

 

Here I have an OCZ Agility 4 SSD that I wish to use, and I can see from the drop-down that its drive letter is sdb, so I have written that down in preparation for my next command.  If you have multiple drives you wish to use, note the drive letter for each one at this point.  Do not make any actual changes to the drive assignments at this time, and leave the array in a stopped state.

 

Step 2:  Gain Command Line Access to Your Server

Now that you have your drive letter(s) for use in creating your Btrfs device(s), you need command-line access to the server to complete the process.  You can use either local access to your server with a keyboard and mouse, or SSH access to the server.  Once you have command-line access, you are ready to create your first Btrfs device and format it.

 

NOTE ON SSH ACCESS:  You will need to set a password for your unRAID root user (in the webGUI) before you can log in via SSH for the first time.  You can do this under the unRAID webGUI "Users" tab by clicking on "root".  From there, you can assign a password to your root account.
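
 

For example, if your server uses the default name, an SSH login from another machine would look something like this (substitute your own server name or IP address):

 

ssh root@tower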

 

Step 3:  Check if Device is Already Partitioned

At this point, we are just about ready to create our first Btrfs device.  However, if you are using a brand-new device, we will first need to partition it before we can use it with Btrfs.  You can check whether your device is partitioned by typing the following command:

 

lsblk

 

A list will be returned showing the names, sizes, and types of the block devices on your system.  Locate the drive letter you wrote down in Step 1 in the list.  Underneath that entry, if you see an indented entry with the same drive letter designation but with the number "1" after it, your drive is already partitioned.  The example below shows the OCZ Agility drive from Step 1, which has NOT been partitioned yet:

 

btrfsqs-5.png
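
 

In plain text, the relevant lines of the lsblk output for an unpartitioned drive would look roughly like this (the sizes and the other devices shown here are illustrative only):

 

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    1   3.7G  0 disk
└─sda1   8:1    1   3.7G  0 part /boot
sdb      8:16   0 238.5G  0 disk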

 

While there are other methods and tools for partitioning via the command line, the simplest method that covers most scenarios is to use sgdisk.  The command for creating a new partition on the drive in the example above would be as follows:

 

sgdisk -g -N 1 /dev/sdb

 

Substitute "sdb" for the drive letter you wrote down in Step 1.  After running this command, if you run another "lsblk", you will now see both the disk and it's partition in the table:

 

btrfsqs-6.png
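
 

In plain text, the same drive would now show an indented partition entry, roughly like this (sizes are illustrative):

 

sdb      8:16   0 238.5G  0 disk
└─sdb1   8:17   0 238.5G  0 part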

 

At this point, you have a device that is both identified and partitioned properly, and we are ready to format the partition with the Btrfs file system.

 

Step 4:  Format Your Device Partition with Btrfs

With your drive letter handy, enter the following command on the unRAID server, substituting your three-letter drive designation into the following example:

 

mkfs.btrfs -f /dev/sdb1

 

Once you run this command, it may take a little while for the formatting to complete, depending on both the size and type of device you chose (SSDs are typically faster).  Make sure you include the numeric "1" after your drive letter when typing the command to ensure you specify the partition and not the whole disk.

 

Once completed, this device will be formatted with Btrfs and can be mounted for access.
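
 

If you would like to double-check the result before mounting, the btrfs tools can report on the new file system; for example (substituting your own partition):

 

btrfs filesystem show /dev/sdb1

 

This should list the label (if any), the UUID, and the single device belonging to the new file system.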

 

Step 5:  Mounting Your New Btrfs Device

Now that you have a disk both partitioned and formatted with Btrfs, you will need to mount it to make it accessible to the operating system.  You can do this as a one-time operation (for basic testing) or set it up to occur automatically.

 

One-Time Mounting

To do a one-time mount of a btrfs device, perform the following commands:

 

mkdir /mnt/btrfs
mount /dev/sdb1 /mnt/btrfs

 

At this point, you have a Btrfs-formatted device available for use through the path /mnt/btrfs.  When you reboot, not only will you have to run these commands again, but you will also need to double-check that the drive letter designation for the Btrfs device did not change, as these can be different after a reboot.
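
 

If you are unsure which letter the device came back as after a reboot, one quick way to check (a simple sketch, not the only option) is to list file system types and look for the btrfs entry:

 

lsblk -f

 

or:

 

blkid | grep btrfs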

 

Auto-Mounting

To automatically mount your Btrfs device after a reboot, we will first locate the disk by ID.  To do this, type the following command:

 

ls -l /dev/disk/by-id

 

A listing will appear showing all your disks and their partitions, similar to this:

 

btrfsqs-7.png

 

Locate the partition for your disk in this list and copy the entire identifier, from the first "a" in "ata" to the "1" after the word "part".  For our example, I copied the following:

 

ata-OCZ-AGILITY4_OCZ-63DU7JA213Z2272U-part1

 

Now we are ready to edit our unRAID GO script to configure the drive for auto-mounting.  Type the following command into your command line:

 

nano /boot/config/go

 

This will open a text editor on your GO script file.  Add the following on new lines somewhere before "# Start the Management Utility":

 

mkdir -p /mnt/btrfs
mount /dev/disk/by-id/ata-OCZ-AGILITY4_OCZ-63DU7JA213Z2272U-part1 /mnt/btrfs

 

Substitute your own copied by-id text for the partition path shown in the example.  If done correctly, your GO file should look like the following:

 

btrfsqs-8.png
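
 

In plain text, and assuming the stock unRAID go file (which only starts emhttp), the edited file would look roughly like this -- your by-id path will differ:

 

#!/bin/bash
# mount the btrfs device before the webGUI starts
mkdir -p /mnt/btrfs
mount /dev/disk/by-id/ata-OCZ-AGILITY4_OCZ-63DU7JA213Z2272U-part1 /mnt/btrfs
# Start the Management Utility
/usr/local/sbin/emhttp &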

 

Press CTRL+X on your keyboard, then "Y", then "Enter" to save the GO script.  If you haven't manually mounted your Btrfs device at this point, you can reboot your server and it should be mounted for you automatically.
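
 

After the reboot, you can confirm the mount took effect with a quick check such as:

 

df -h /mnt/btrfs

 

If the mount worked, you will see your Btrfs device listed with /mnt/btrfs as its mount point.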

 

Step 6 (Optional):  Add Additional Drives to the Btrfs Pool (RAID1 Method)

After you have created and mounted your first Btrfs device, you can add additional devices to a Btrfs "pool" as follows.  Follow Steps 1-4 as before for the new device you want to add to your pool.  After Step 4 is complete, run the following commands on your server, replacing the drive letter designation in the example below:

 

btrfs device add /dev/sde1 /mnt/btrfs
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/btrfs

 

If you already have auto-mounting configured from Step 5 for your first Btrfs device, you do not need to change anything in the GO script for it to continue working after adding a second Btrfs device.  NOTE:  make sure both of your Btrfs devices are the same size.
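
 

To confirm the pool and the RAID1 conversion afterwards, you can inspect the file system (a quick sketch, assuming the /mnt/btrfs mount point from Step 5):

 

btrfs filesystem show /mnt/btrfs
btrfs filesystem df /mnt/btrfs

 

The first command lists every device in the pool; the second shows the data and metadata profiles, which should both read RAID1 once the balance has finished.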

Link to comment

I thought unRAID already used btrfs? If not, what are the benefits of going this route vs. what we have now?

 

Until now, unRAID only supported ReiserFS.  Btrfs is required for use with Docker in Beta 6, so that's one big advantage.  Another is copy-on-write.  There are plenty of advantages to Btrfs, but to learn more, that's what Google's for ;-).
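
 

As a small illustration of the copy-on-write side (just a sketch, assuming the /mnt/btrfs mount point from the guide and a hypothetical "vmdata" subvolume), snapshots are nearly instant because blocks are shared until they are modified:

 

btrfs subvolume create /mnt/btrfs/vmdata
# ...put some VM images or other data in it...
btrfs subvolume snapshot /mnt/btrfs/vmdata /mnt/btrfs/vmdata-snap

 

The snapshot only consumes new space for blocks that are changed afterwards.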

Link to comment

If I'm going to run Docker and host VMs on my SSD, does anyone know or have an opinion on whether it'd be fine to put it all on a Btrfs partition, or if I should have two partitions, one Btrfs and the other Ext4 for the VMs?

 

There is no reason you need an Ext4 for VMs.  Btrfs will be great for VMs all the same.

Link to comment

I have two SSDs in my system: a 180GB with 1 VM and a 120GB with 2 VMs.

 

I used the following command to stripe sdb & sdc; how would I then choose the 300GB combined RAID0 array as my cache disk?

 

# Stripe the data without mirroring
mkfs.btrfs -d raid0 /dev/sdb /dev/sdc 

 

 

Link to comment

So no converting your existing cache drive in reiserfs to btrfs ?

we have to do it the hard way ?

remove everything and reformat the cache drive ?

i have no sata slots free and no space to put another disk anyway .... :(

 

I think I heard someone in another post mention something about a way to do this via the command line, but I haven't tested that yet.

 

This is a bit of a pain, we know.  :-(.  We don't like it either. 

 

I'm hoping we can find a better solution soon...

Link to comment

personally i wouldn't trust any resier to btr convert tool as any failure in it would result in complete cache drive data loss

 

i would suggest the best way to do this would be two simple rsync commands

 

backing up files to the array, blanking and reformatting the cache and restoring the backup using rsync will be fast and absolutely reliable. at no point would you be doing a hail mary with your data

Link to comment

personally i wouldn't trust any resier to btr convert tool as any failure in it would result in complete cache drive data loss

I referenced the btrfs-convert tool, and a patch for ReiserFS support, in the 'complaints' thread. However, it really needs someone who knows what they are doing to look through it and see what it's up to.

 

My rough understanding, from a quick read, is that it takes the B-tree data structure from ReiserFS and replicates it in Btrfs (e.g. which blocks, where they are), and so in theory it shouldn't muck up your actual data - it's just recreating the data structure in another form. And since Btrfs can do B-trees, it should be relatively straightforward. It should only need a bit of extra free space, in theory.

 

I could be wrong though, it was only a quick look.

 

As I say, someone who knows what they are doing needs to look at it and see if it's likely to screw things up on big, complex data sets.

 

I have a feeling that a converter that works is going to be VERY useful, VERY shortly....

Link to comment

i would suggest the best way to do this would be two simple rsync commands

 

backing up files to the array, blanking and reformatting the cache and restoring the backup using rsync will be fast and absolutely reliable. at no point would you be doing a hail mary with your data

 

this ^^^

 

You could be done by now instead of looking for the "shortcut"  ;-)

Link to comment

personally i wouldn't trust any resier to btr convert tool as any failure in it would result in complete cache drive data loss

 

i would suggest the best way to do this would be two simple rsync commands

 

backing up files to the array, blanking and reformatting the cache and restoring the backup using rsync will be fast and absolutely reliable. at no point would you be doing a hail mary with your data

 

care to provide these commands for the technically challenged ?

don't think it is as simple as rsync /mnt/cache /mnt/disk10/cache

 

Link to comment

personally i wouldn't trust any resier to btr convert tool as any failure in it would result in complete cache drive data loss

 

i would suggest the best way to do this would be two simple rsync commands

 

backing up files to the array, blanking and reformatting the cache and restoring the backup using rsync will be fast and absolutely reliable. at no point would you be doing a hail mary with your data

 

care to provide these commands for the technically challenged ?

don't think it is as simple as rsync /mnt/cache /mnt/disk10/cache

 

yes, please  :P

Link to comment

Actually it probably is as simple as

 

rsync -anv /mnt/cache/ /mnt/diskX/cache

 

and in reverse

 

rsync -anv /mnt/diskX/cache/ /mnt/cache

 

I do not have a test system right now as my spare USB key is dead, so I'm writing this off the top of my head. Obviously the paths will need to be set properly and the "-n" removed to take away the dry-run option. You would also need to make sure nothing is using the drive first, with something like "lsof | grep -i cache".

 

But since -a preserves symbolic links, special and device files, modification times, group, owner, and permissions, it should just work, and IMHO would be vastly less risky than a low-level convert.
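
 

For anyone who wants the whole sequence spelled out, a rough sketch would be the following (diskX and the cache_backup folder are placeholders -- adjust the paths for your own setup, and keep the -n dry run until you are happy with what it reports):

 

# dry run: show what would be copied off the cache drive
rsync -anv /mnt/cache/ /mnt/diskX/cache_backup/
# real copy to an array disk with enough free space
rsync -av /mnt/cache/ /mnt/diskX/cache_backup/
# ...stop the array, reformat the cache drive with btrfs, start the array...
# restore everything back onto the new btrfs cache
rsync -av /mnt/diskX/cache_backup/ /mnt/cache/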

 

Link to comment

I going to play with 3 spare drives and looking for best write/read performance for a pool for my VM

 

Are some of below command preferred? If not what can I do ?

 

 

mkfs.btrfs -m raid5 -d raid5 /dev/sdb /dev/sdc /dev/sdd

 

mkfs.btrfs -d raid1 /dev/sdb /dev/sdc /dev/sdd 

 

mkfs.btrfs -d raid0 /dev/sdb /dev/sdc /dev/sdd 

 

Link to comment

I going to play with 3 spare drives and looking for best write/read performance for a pool for my VM

 

Are some of below command preferred? If not what can I do ?

 

 

mkfs.btrfs -m raid5 -d raid5 /dev/sdb /dev/sdc /dev/sdd

 

mkfs.btrfs -d raid1 /dev/sdb /dev/sdc /dev/sdd 

 

mkfs.btrfs -d raid0 /dev/sdb /dev/sdc /dev/sdd 

 

Hi peter!  Glad to hear you're going to test this out!  We haven't gotten to testing out beyond two drives in a Btrfs pool yet, so looking forward to hearing your results.

 

Are these all SSD or HDD drives?  Same brand/size/speeds or different mixes?  Curious...

Link to comment

I going to play with 3 spare drives and looking for best write/read performance for a pool for my VM

 

Are some of below command preferred? If not what can I do ?

 

 

mkfs.btrfs -m raid5 -d raid5 /dev/sdb /dev/sdc /dev/sdd

 

mkfs.btrfs -d raid1 /dev/sdb /dev/sdc /dev/sdd 

 

mkfs.btrfs -d raid0 /dev/sdb /dev/sdc /dev/sdd 

 

Well, obviously these commands create pools of RAID5, RAID1 and RAID0 respectively,

which you can do with btrfs very nicely.

What do you mean by preferred? Preferred to what?

 

If you are creating the cache drive, then what level of redundancy do you need for it,

and what amount of space do you expect to get out of this setup?

 

RAID0 will give you the most space for data. Keep in mind that even with RAID0 you will not get a volume size equal to the sum of the drives: unless you force "single" mode, the metadata ("-m") will always be in a RAID1 configuration.

RAID1 will give you half of the total amount, i.e. if you use 3x1T drives you get 1.5T of space on the volume,

whereas RAID5 will net you somewhere around 2T of space, plus the security of recovery from any single drive failure.

 

So what do you prefer?
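
 

As an aside, if you did want to force single metadata to squeeze the most space out of a RAID0 data profile, the command would look something like this (illustrative only; with single metadata, losing any one drive can take out the whole volume):

 

mkfs.btrfs -f -d raid0 -m single /dev/sdb /dev/sdc /dev/sdd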

 

 

 

 

Link to comment

I going to play with 3 spare drives and looking for best write/read performance for a pool for my VM

 

Are some of below command preferred? If not what can I do ?

 

 

mkfs.btrfs -m raid5 -d raid5 /dev/sdb /dev/sdc /dev/sdd

 

mkfs.btrfs -d raid1 /dev/sdb /dev/sdc /dev/sdd 

 

mkfs.btrfs -d raid0 /dev/sdb /dev/sdc /dev/sdd 

 

Well, obviously these commands create pools of RAID5, RAID1 and RAID0 respectively,

which you can do with btrfs very nicely.

What do you mean by preferred? Preferred to what?

 

If you are creating the cache drive, then what level of redundancy do you need for it,

and what amount of space do you expect to get out of this setup?

 

RAID0 will give you the most space for data. Keep in mind that even with RAID0 you will not get a volume size equal to the sum of the drives: unless you force "single" mode, the metadata ("-m") will always be in a RAID1 configuration.

RAID1 will give you half of the total amount, i.e. if you use 3x1T drives you get 1.5T of space on the volume,

whereas RAID5 will net you somewhere around 2T of space, plus the security of recovery from any single drive failure.

 

So what do you prefer?

 

I think peter's question was more about confirming the command syntax and that there isn't a more "proper" way to do it.

Link to comment

For speed, RAID1 is best, and you get protection against one drive failure.

I haven't seen any testing done on RAID5 btrfs yet, for speed or anything else for that matter.

Also, RAID5/6 functionality was only just officially moved out of the beta stage, so it's not at a 100% supported level yet. Stick with RAID1; you can always convert later.

 

I have tested conversion from RAID0 to RAID1 on the fly on a system drive (I built out an openSUSE 13.1 system using a btrfs "/" partition): I added a second drive to the "/" volume and rebalanced with a convert to RAID1 on a working, live system. No issues, and the system did not even slow down, not that I noticed anyway. With a cache drive it should not be a problem either.

 

 

 

Link to comment

OK, I did this... and I think I'm OK

 

 mkfs.btrfs -f -m raid1 -d raid1 -L VM-store /dev/sdf /dev/sdh /dev/sdg

 mkdir /boot/btrfs

 mount -t btrfs /dev/sdf /mnt/btrfs/

 df
Filesystem      1K-blocks   Used Available Use% Mounted on
/dev/sda1         3882752 923200   2959552  24% /boot
/dev/sdf       1465159752   1280 976753664   1% /mnt/btrfs

dd if=/dev/zero of=/mnt/btrfs/bench.test bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 0.47854 s, 4.4 GB/s

blkid /dev/sdf
/dev/sdf: LABEL="VM-store" UUID="569b8d06-5676-4e2d-9a22-12d85dd1648d" UUID_SUB="bc85940f-c379-4b4f-b10d-370b6d12de3c" TYPE="btrfs"

root@tower:~# btrfs filesystem df /mnt/btrfs/
Data, RAID1: total=3.00GiB, used=1.95GiB
Data, single: total=8.00MiB, used=0.00
System, RAID1: total=8.00MiB, used=16.00KiB
System, single: total=4.00MiB, used=0.00
Metadata, RAID1: total=1.00GiB, used=2.09MiB
Metadata, single: total=8.00MiB, used=0.00
unknown, single: total=16.00MiB, used=0.00

root@tower:~# btrfs filesystem show /dev/sdf
Label: 'VM-store'  uuid: 569b8d06-5676-4e2d-9a22-12d85dd1648d
        Total devices 3 FS bytes used 1.96GiB
        devid    1 size 465.76GiB used 2.02GiB path /dev/sdf
        devid    2 size 465.76GiB used 3.01GiB path /dev/sdh
        devid    3 size 465.76GiB used 3.01GiB path /dev/sdg

 

 

So what is the best way to mount this in the go file?

Using /dev/sdf doesn't look right, since it might change during boot?

 

 

//Peter

Link to comment
