BTRFS vs. ZFS comparison


jonp


Here’s a nice btrfs overview which includes an interesting comparison with ZFS.  The author also mentions two reasons why we don’t include ZFS in unRAID.  First, ZFS is a real memory hog and would likely blow up typical small unRAID NAS servers.  Second, ZFS’s license (the CDDL) makes our ability to distribute it along with the Linux kernel questionable.  Beyond that, we think for our purposes btrfs is better anyway.  We plan to leverage many btrfs features in future unRAID OS releases, especially snapshots!

 

http://marc.merlins.org/linux/talks/Btrfs-LinuxCon2014/Btrfs.pdf

  • 4 weeks later...

I wants me a RAID0 cache pool setup ability in the unRAID web GUI... please and thank you

 

Posted this elsewhere:

 

You can set up a raid0 cache pool.  After the array is Started and the cache is mounted, click on the 'cache' device and scroll down to the 'Balance' section; you will see the default Balance options are:

-dconvert=raid1 -mconvert=raid1

You can change the RAID level by editing these options and clicking Balance.  For example, to have data raid0 but leave metadata raid1, you could change them to:

-dconvert=raid0 -mconvert=raid1

You can monitor the syslog to watch the balance operation proceed.
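
For reference, the same conversion can also be started from the command line; a minimal sketch, assuming the cache pool is mounted at /mnt/cache (the usual unRAID mount point):

# convert data chunks to raid0 while keeping metadata mirrored as raid1
btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache

# check progress of a running balance
btrfs balance status /mnt/cache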

 

Note that, at present, the software will revert the balance options back to the default:

-dconvert=raid1 -mconvert=raid1

That is, we don't store your selection.  Similarly, if you add a device to an existing pool, it will automatically kick off a balance with the default options.  We plan to address this in a future release, but for now you can experiment with it.

 

  • 10 months later...

I'm really sorry for resurrecting an old thread.

 

I'm planning on getting a second SSD for my cache pool. Is the default still RAID1 in 6.1.9? If I go and change it, will it still revert back? If so, how can I prevent this?

The default is still RAID1. A different config survives a reboot; it only reverts if you change the pool, e.g., add another disk.
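
If you want to check which profiles your pool is currently using, you can do so from the console; a quick sketch, assuming the cache is mounted at /mnt/cache:

# shows the raid profile in use for each chunk type (Data, Metadata, System)
btrfs filesystem df /mnt/cache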

  • 2 months later...

Resurrecting this old thread again to point out how you can easily create a RAID0 cache from the SSH console, in the event that you have a weird setup like mine. I have a 256GB SSD and an older 128GB SSD, and the web interface simply wasn't balancing the arrangement sensibly at all. It was creating an unbalanced RAID1 of 192GB capacity, with 2/3 of the 256GB drive mirrored by the other 1/3 of the 256GB and the 128GB drive.

 

First, take your array offline and set the cache to the number of drives you wish to merge.

 

Then find the /dev/sdx device nodes for the two or more SSDs you wish to add.
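
One easy way to identify the device nodes (a sketch, assuming lsblk is available, as it is on most Linux systems) is to list the drives with their sizes and model names:

# list block devices; the SSDs are easy to spot by size and model
lsblk -o NAME,SIZE,MODEL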

 

Then run the following from a bash prompt while logged into unRAID:

 

mkfs.btrfs -d raid0 -m raid0 -f /dev/sdx /dev/sdx <etc>

 

Where the /dev/sdx entries listed above are the cache drives, in the order you want them.
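
For example, with two hypothetical device nodes (placeholders; substitute the nodes you found above):

mkfs.btrfs -d raid0 -m raid0 -f /dev/sdf /dev/sdg

# verify the result: lists the new filesystem and its member devices
btrfs filesystem show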

 

These steps are only necessary if you want to format the cache manually. A rebalance may also work, but there's one addition you need as of some version of the btrfs tools: the -f switch, to force the conversion. Without it, the balance will error out at the attempt to convert down to a non-redundant RAID0 array.
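
A sketch of that in-place conversion, again assuming the pool is mounted at /mnt/cache; the -f switch forces the balance to reduce metadata redundancy:

btrfs balance start -f -dconvert=raid0 -mconvert=raid0 /mnt/cache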

  • 2 months later...

So kode54, is this the only way to convert the cache drives over to RAID0? I have the same configuration as you: a 240GB and a 128GB SSD, and the current RAID1 setup means I am losing valuable disk space. I do not mind losing RAID1, as I can create backups of appdata, Dockers, and vdisks. This is my current config.

 

Data, RAID0: total=221.52GiB, used=197.20GiB

System, RAID1: total=32.00MiB, used=16.00KiB

Metadata, RAID1: total=1.00GiB, used=218.52MiB

GlobalReserve, single: total=80.00MiB, used=0.00B

 

Total devices 2 FS bytes used 197.41GiB

        devid    1 size 111.79GiB used 111.79GiB path /dev/sdg1

        devid    2 size 223.57GiB used 111.79GiB path /dev/sdf1
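
For reference, output like the above comes from the standard btrfs tools; a sketch, assuming the pool is mounted at /mnt/cache:

# per-chunk-type usage and raid profiles (the Data/System/Metadata lines)
btrfs filesystem df /mnt/cache

# per-device allocation (the devid lines)
btrfs filesystem show /mnt/cache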


So kode54, is this the only way to convert the cache drives over to RAID0? [...]

No, it's not the only way. There is the easy way here.

