ZFS in Unraid: wait for 7.x or go ahead with 6.x for a media server + *arr apps?



Hi there,

I have a Synology DS918+ 4-HDD server that I've outgrown. I tried TrueNAS SCALE on a SuperMicro SS847 with 12 x 10TB SAS drives, Optane, a 10Gb DAC, and an Nvidia P600 GPU, with the goal of using it primarily for media serving and for *arr apps, and maybe as a Plex server. I struggled and wasn't able to get it to work, and community TrueNAS support was lacking IMHO ( 🤷‍♂️ it is free, after all).

I had shied away from Unraid due to cost, but was blown away, in a really positive way, by the conversation with the Co-CEOs, specifically their talk of community engagement and support, in this YouTube video: The Unraid Story: Lime Technology Co-CEOs Discuss the Past and Future of Unraid OS. So I immediately purchased an Unraid Pro license. If the community engagement is as strong as discussed in the video, and based on my limited interactions with Unraid users, I'd be willing to pay an annual support/updates subscription. Everyone's moving towards the subscription model, and it's a good, predictable revenue stream for companies.

My question is: should I install and set up the server now with Unraid 6.x, or wait for 7.x? I want to set up a RAIDZ2 pool, unless there is a better suggestion. I say that because on other OSes, ZFS seems to be the best solution for my requirements, but Unraid does have the ability to add storage and expand pools, much like Synology SHR. Thoughts?

Thanks,

Edited by Bladedude
Added more detail about why I purchased Unraid Pro and a subscription model.

I'd recommend playing with 6.x

 

Not to be a Debbie Downer, but waiting for future versions of Unraid is not productive. They release when stuff is ready, and not a moment sooner. Ask anybody who's been around here for very long what Soon™ means.

 

If you watched through the video, you know that you can sidestep the requirement for having an array disk by using any old spare USB flash drive as Disk 1 in the array slot, then build out your storage pool(s) any way you like. I would recommend adding an SSD or two in a separate pool for system files, most notably the docker image and appdata for said containers. That will keep your media server GUI snappy.

 

The only real downside I see to not waiting is the current lack of automated moves from pool to pool, but that shouldn't be an issue for you if you download and sort the media directly on the final storage pool instead of using a separate download pool.

3 hours ago, JonathanM said:

I'd recommend playing with 6.x [...]

Thanks for the response @JonathanM

Waiting for future versions: understood. USB flash drive as the array disk: I'll have to rewatch the video and do more reading. Downloading and sorting the media on the final storage pool seems like the optimal route, using the TRaSH Guides / hardlinking.
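
As I understand the TRaSH setup, hardlinks only work when the download and media folders sit on the same filesystem/share, hence the single /data share layout. A minimal sketch, with hypothetical paths:

    # Hardlink a completed download into the media library; no data is
    # duplicated on disk, and the seed copy can be removed independently.
    ln "/mnt/user/data/torrents/movies/Movie.2023.mkv" \
       "/mnt/user/data/media/movies/Movie (2023)/Movie.2023.mkv"
    # A link count (second column) of 2 confirms the hardlink.
    ls -l "/mnt/user/data/media/movies/Movie (2023)/Movie.2023.mkv"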

 

HW available, with original plans, below:

  • 12 x 10TB SAS for media in RAIDZ2. Not sure whether to stick with ZFS or use a different method such as XFS + parity for future expansion. Though I could add another pool, since the server has another 24 x 3.5" bays free
  • 2 x 500GB previously used for TrueNAS boot. Not sure what to use them for in Unraid
  • 2 x 1TB NVMe for apps
  • 2 x 56GB Optane for ZFS cache, SLOG, etc.
  • 1 internal USB Type-A slot, and I've ordered a metal-cased 32GB USB stick for the license key

I'd welcome suggestions on how to arrange/optimize this HW for Unraid.
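
For reference, my understanding of the command-line equivalent of the RAIDZ2 I'm after is something like the following (device names hypothetical; Unraid 6.12 would build the pool through the GUI instead):

    # 12-wide RAIDZ2: usable capacity of ~10 drives, any 2 can fail.
    # ashift=12 aligns writes to 4K sectors.
    zpool create -o ashift=12 media raidz2 \
        sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm
    # Confirm layout and health.
    zpool status media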

Edited by Bladedude

ZFS works well enough in the current version of Unraid that I'd definitely recommend using it rather than waiting for the next major version. My system is a lot smaller than yours, though: I've got 2 x 2TB NVMe drives in a ZFS mirror, 2 x 20TB drives in a ZFS mirror, and a standalone (unassigned) 14TB drive for security camera footage. I've got 64GB RAM and am using 32GB of that for the ZFS cache (ARC). So far, I haven't really needed L2ARC for my use cases.
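
In case it's useful: that 32GB cap is just the standard OpenZFS module parameter; on my system it's something like this (assuming the usual sysfs path, with the value being 32GiB in bytes):

    # Cap the ZFS ARC at 32 GiB. Takes effect immediately but resets on
    # reboot unless made persistent (e.g. via a startup script).
    echo 34359738368 > /sys/module/zfs/parameters/zfs_arc_max
    # Check the current ARC size and cap.
    grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats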

 

The only caveat I've found is that currently you need at least one drive in the Unraid array (i.e. outside of a ZFS pool). I have an unused USB stick in mine as a workaround. I've heard they're going to make that more flexible in a future release.

 

There's a plugin called "ZFS Master" that adds a bunch of ZFS features missing from Unraid's UI, like the ability to create and manage datasets and snapshots.
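
It's the standard zfs CLI underneath, so the same operations also work from a terminal (pool/dataset names hypothetical):

    # Create a dataset, snapshot it, list snapshots, roll back if needed.
    zfs create tank/appdata
    zfs snapshot tank/appdata@before-upgrade
    zfs list -t snapshot -r tank/appdata
    zfs rollback tank/appdata@before-upgrade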

Edited by Daniel15

My 2 cents: have a play with the trial.

 

ZFS on spinning drives could be a waste of power, since a striped pool keeps all of its disks spinning.

It's better for data that needs to be read fast, like perhaps video editing etc.

However, you may feel the extra protection is needed; for me, I can just re-acquire my media.

I've added how I run my system below, for an idea of the flexibility Unraid provides.

 

[Main array, XFS] 10 disks with parity, ranging from 12TB down to 3TB. I can easily add disks as my collection grows: it started at 4TB, then I added more old drives I had lying around, and now it's at 67TB, using HBA backplanes to easily add disks. Shucking old USB drives gives me free 8TB drives.

 

[SSD pool, ZFS] 2-disk mirror of 2TB NVMe drives, used for appdata, VMs, and new-file storage until it's copied to the array. One day I'll upgrade them to 4TB. All my media is downloaded here first.

 

[12TB pool, ZFS] 4-disk Z1 (1 disk of parity + 3 data). This required buying 4 x 12TB drives and adding them at the same time. Used for my drone video business to copy large files quickly to storage (up to 1TB at a time from drone SSDs).

Edited by dopeytree
12 hours ago, JonathanM said:

I'd recommend playing with 6.x [...]

Yep, I second that. Some versions take forever to get updated (6.11.5 took around 7 months, I think, and 6.9.2 about 13 months), some never see daylight (6.12.7), and some get updates within days, weeks, or 1-2 months.

 

Since we don't know when 7 will be ready, I would start tinkering with 6.12.8.

Edited by HardwareHarry

Thanks @Daniel15 @dopeytree and @HardwareHarry for the responses and examples of your setups.

Good info on the USB stick workaround for the array, @Daniel15.

@dopeytree For power: I don't think the server can power down spinners, and the power is a sunk cost for me. With no drives it pulls ~200 watts, and ~325 watts with 12 3.5" spinners.

This video from SpaceInvaderOne, "All about Using Multiple Cache Pools and Shares in Unraid 6.9", was helpful, as was "Overview ZFS for Unraid (Create, Expand and Repair ZFS Pool on Unraid)".

This ZFS documentation link mentions: "This section should be completed once Unraid 6.12 has been released with ZFS support included as a standard feature", but maybe it's a backlogged item.

Use case is media server + SMB shares. Given that all the drives will be spinning anyway, ZFS seems like the route to take. Maybe I'll try XFS + parity and see if the server powers down the unused drives.

Edited by Bladedude
7 hours ago, dopeytree said:

However, you may feel the extra protection is needed; for me, I can just re-acquire my media.

Yeah, this depends a lot on your use case. For me, most of the files on my NAS aren't 'disposable' files like TV shows and movies; they're things like family photos, personal documents (taxes, mortgage paperwork, etc.), backups of several other servers, email backups going back nearly 20 years, music I ripped from CD in the past that is very difficult to find these days, and so on. I also don't have many drives. For my use case, ZFS's bitrot protection is more important than drives powering down while idle, and you need to use a ZFS pool to take advantage of that protection.

 

The other option is to use an Unraid array of individual ZFS drives, similar to what you'd do with XFS drives today. It gives some of ZFS's advantages (like snapshots and compression) but doesn't provide its main benefit (bitrot protection). This is described as the "hybrid approach" on this page: https://unraid.net/blog/zfs-guide
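
As far as I know, each ZFS-formatted array drive becomes its own single-disk pool (I believe Unraid names them disk1, disk2, and so on), so the per-dataset features still work drive by drive; a sketch under that assumption:

    # Enable compression and take a manual snapshot on one array drive
    # (pool name assumes Unraid's diskN convention).
    zfs set compression=lz4 disk1
    zfs snapshot disk1@manual-1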

Edited by Daniel15
2 hours ago, Bladedude said:

@dopeytree For power: I don't think the server can power down spinners, and the power is a sunk cost for me. With no drives it pulls ~200 watts, and ~325 watts with 12 3.5" spinners. [...]

 

I'm on mobile right now, but my second backup server is running on Unraid too, and it's 100% ZFS on spinning rust.

 

But I'm not using RAIDZ; rather, individual ZFS drives in the array with a parity drive.

 

That gives me the ability to do snapshots and lets the drives spin down, especially when combined with the Folder Caching plugin (which reduces reads from the HDDs by caching recently/often-used folders in RAM).

 

A ZFS single-drive array has the downside of no automated on-the-fly bitrot protection. I wish RAIDZ support were already there; especially with multiple arrays, that would be even more powerful. As I understand it, that's planned for later.

 

Right now ZFS in Unraid does have some downsides: if you manually create folders within your shares via the command line, this is not handled automatically (yet) and can lead to weird problems / data loss.

 

All that in mind: what the heck is wasting those 200W? Both my servers use less than 60W together...

 

 

 

Edited by jit-010101
23 hours ago, Bladedude said:

primarily for media serving and for *arr apps, and maybe as a Plex server

 

Yeah, storing family photos & documents is different from running a suite of *arrs & Plex.

I'd go full ZFS for the important family stuff, plus a cloud backup.

It's just that once you fill those disks up, you may want to use a basic Unraid array for the Plex media.

Edited by dopeytree

I could be wrong, but I think it's possible to format array drives as ZFS instead of XFS, just as single drives each with their own filesystem, the way Unraid does it. This could be a nice way to have the benefits of the Unraid array while also having some bitrot protection. Maybe someone can confirm this?

2 minutes ago, mackid1993 said:

I could be wrong, but I think it's possible to format array drives as ZFS instead of XFS, just as single drives each with their own filesystem, the way Unraid does it. This could be a nice way to have the benefits of the Unraid array while also having some bitrot protection. Maybe someone can confirm this?

Yes, you can run a ZFS filesystem on each array drive, just like you do with XFS or BTRFS.


Thanks for the additional comments @jit-010101, @mackid1993 and @SimonF. This is more feedback on this one post in 30 hours than I've ever received from the TrueNAS forums or their Discord.

If I didn't have a dozen identical drives, it sounds like XFS would be the way to go, given the primary use case of media + SMB shares. With the identical dozen drives, ZFS sounds like the best use of the existing HW.

21 hours ago, Daniel15 said:

The only caveat I've found is that currently you need at least one drive in the Unraid array (i.e. outside of a ZFS pool). I have an unused USB stick in mine as a workaround. I've heard they're going to make that more flexible in a future release.

@Daniel15 I'll need to find a solution for this. I do have one internal USB Type-A port that I will use for the boot/key USB stick. This post mentions using a "USB DOM", which might work since I have 2 free internal USB headers on the mobo.

 

7 hours ago, jit-010101 said:

All that in mind: What the heck is wasting away those 200W? Both my servers use less then 60W together ...

 

 

 

@jit-010101 The PSU reports it's pulling 280W input at idle: 2 x E5 v2 Xeons, 7 fans plus 2 more in the PSU, and 12 3.5" spinners.

12 hours ago, mackid1993 said:

I could be wrong, but I think it's possible to format array drives as ZFS instead of XFS, just as single drives each with their own filesystem, the way Unraid does it. This could be a nice way to have the benefits of the Unraid array while also having some bitrot protection. Maybe someone can confirm this?

 

My understanding is that single ZFS drives in the main array do not provide full bit rot protection: they can detect bit rot but not repair it, as that's a limitation of a single ZFS disk.

 

Full bit rot protection & healing is provided by full ZFS pools that have parity drives within the self-contained pool, e.g. Z1 / Z2 / Z3.

 

https://forums.unraid.net/topic/140936-now-that-612-has-zfs-what-are-our-options-for-recovering-from-bit-rot/
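
For anyone following along, a scrub is what surfaces (and, with redundancy, repairs) the rot; roughly, with a hypothetical pool name:

    # Walk every block and verify checksums. With RAIDZ/mirror redundancy,
    # bad blocks are rewritten from a good copy; a single disk only reports.
    zpool scrub media
    # Show progress and any checksum errors found.
    zpool status -v media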

Edited by dopeytree
41 minutes ago, dopeytree said:

My understanding is that single ZFS drives in the main array do not provide full bit rot protection: they can detect bit rot but not repair it, as that's a limitation of a single ZFS disk.

Correct

 

41 minutes ago, dopeytree said:

Full bit rot protection & healing is provided in full ZFS pools.

Correct, as long as they are configured with redundancy (which is optional).

 

8 hours ago, dopeytree said:

My understanding is that single ZFS drives in the main array do not provide full bit rot protection: they can detect bit rot but not repair it, as that's a limitation of a single ZFS disk.

 

Technically there is a way to have bitrot protection on a single disk, but it halves the amount of disk space you can use: you can configure ZFS to store two copies of every block on the same disk. That affects write speeds too, since each write actually does two writes.

 

I don't know if anyone actually uses this feature for a whole disk on a production system, though. Where it's more useful is creating a separate dataset and setting copies=2 just for that dataset (e.g. for your most important files).
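
For example, a minimal sketch with a hypothetical dataset name:

    # Keep two copies of every block in this dataset, even on one disk.
    zfs set copies=2 tank/photos
    # Applies to new writes only; existing files aren't duplicated until
    # they're rewritten or copied again.
    zfs get copies tank/photos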

Edited by Daniel15
18 hours ago, Bladedude said:

@jit-010101 The PSU reports it's pulling 280W input at idle: 2 x E5 v2 Xeons, 7 fans plus 2 more in the PSU, and 12 3.5" spinners. [...]

 

Eh well, that explains it: server hardware, maybe a not-so-power-efficient PSU, and 12 drives, probably none of them in spin-down... there is a LOT of potential to save energy here, hehe (just not for FreeNAS).

 

I remember that there was an issue related to the *arrs with shfs (the underlying technology used as the base for Unraid's merged folders), so I'd highly recommend testing the trial to the fullest, so you can face any issues and get an idea of what you might need to do.

Edited by jit-010101
4 hours ago, jit-010101 said:

there was an issue related to the *arrs with shfs (the underlying technology used as the base for Unraid's merged folders), so I'd highly recommend testing the trial to the fullest, so you can face any issues and get an idea of what you might need to do

Based on the identical dozen spinners, I'll likely go with ZFS. Is the share filesystem (shfs) issue still a concern with RAIDZ2? @jit-010101

Edited by Bladedude
8 hours ago, Bladedude said:

Based on the identical dozen spinners, I'll likely go with ZFS. Is the share filesystem (shfs) issue still a concern with RAIDZ2? @jit-010101


I'm honestly not sure; maybe someone from the team can tell, but if you're using pools and not the array, it might not even use shfs. 🤔

I personally never encountered such issues myself. I'm "just" using it to host Paperless, Nextcloud (58k pictures/videos), Home Assistant, Node-RED, Calibre, and stuff like that, and never had an issue. But this is all on a classic array with individual BTRFS drives (SSDs only) of different sizes.

I've yet to try any media-streaming use cases myself.

Edited by jit-010101

One thing I didn't see mentioned, which is a big plus for me about the array setup: if you lose a drive beyond your parity drives, you only lose the data on that one drive. You can just replace the drive and restore the data lost on only that drive (assuming it's not irreplaceable). All the data on the other drives remains and is still usable. With ZFS you will obviously lose everything should you lose that 3rd drive... as unlikely as that may be.

 

In my case I never need the raw read speed of a proper ZFS pool (as opposed to ZFS in the array) for my media, and as long as there aren't several streams from the same drive, there is plenty of read speed available. Theoretically, if I get multiple streams from different drives, the cumulative array output can actually be very high (30 drives each capable of 150-200MB/s, say a conservative 100MB/s apiece). Write speed is obviously limited to the speed of a single drive due to parity, but this can be overcome with cache drives. I don't put anything on my array that needs raw write speed either.

 

In the end I made multiple pools for the data that I decided needed the speed and general benefits of ZFS, and used the array for the easily replaceable media. I'm in a very different position from most in terms of the hardware I'm running, so these considerations might not be an option for everyone, but they're definitely something to consider.

