
Setting up ZFS



Hi all, does anyone have a good guide on setting up a ZFS pool? I want to set up a single large pool that my users in the office can access to upload and download photos. Someone suggested setting up a small disk pool for the main system and then setting the rest of the space as a ZFS pool. I wanted the whole array set as ZFS, but I'm not sure that's possible. I have 16× 18TB drives and want to test out the speed of ZFS. Only photos are going to be stored on it. What do you recommend for this? Thanks!


You seem to be using "pool" interchangeably to describe both the Unraid array and Unraid pools. To add to the confusion, ZFS also uses "pool" to describe its own construct.

 

Unraid needs an array, but the array is just a JBOD protected by parity drives. You don't seem to want that. It can, however, be satisfied with a single drive, even a USB drive. So I suggest you attach a smaller spare drive or a separate USB drive for the array.

 

To run a ZFS pool, you create it as an Unraid pool (if running 6.12 RC) or use the ZFS plugin (if running 6.11). The process differs slightly between the two, so I will explain the simpler 6.12 method, since that is what it will be going forward.
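For reference, the equivalent operations on the command line look roughly like this. This is only a sketch: on 6.12 the GUI does this for you, and the pool name "tank" and the /dev/sdX device names here are placeholders, not anything Unraid-specific.

```shell
# Hypothetical sketch of creating a raidz2 pool by hand; on Unraid 6.12
# you would normally do this from the Pool Devices UI instead.
# "tank" and the /dev/sdX names are placeholders for your own setup.
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde \
                         /dev/sdf /dev/sdg /dev/sdh /dev/sdi
zpool status tank          # confirm the vdev layout
zfs create tank/photos     # a dataset to expose as your photo share
```

The GUI route is preferable on Unraid because it keeps the pool definition in Unraid's own config.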

 

If all you want to use is the ZFS pool, you can expose it as a cache-only share pointing to the corresponding pool.

 

For your use case, though, I have to ask: why use Unraid at all? Something like TrueNAS is better suited to a pure ZFS use case in today's landscape. Do you plan to use other Unraid features?


@apandey

Thanks! It was late last night and I was barely able to think while typing that out lol. To clarify: yes, you pretty much said what I want to do. I want to create a small array for the system, and then a large ZFS pool for data to be stored on. I ran TrueNAS in the past and had horrible luck with it. It was pretty much a perfect storm of events. I don't really want to go into all the details, but I ended up losing two parity drives, and TrueNAS reported errors on half the drives when there weren't actually any errors on them (I pulled them out and double-checked them on another system). So I lost almost 100TB of data, and by running a recovery tool I got about 80% of it back.


Anyway, I am on 6.12 on the server at the moment. I do want to use some other Unraid features, so I would prefer to stick with it if possible. I can put in a couple of small SSDs to run the main array and then create a ZFS pool from the spinners and use that as my actual storage. If I have to set it as a cache-only pool, that's fine. Besides running TrueNAS in the past, I don't have much experience with ZFS, but from what I've read about its speed, it's definitely the way I think I should go. I will be storing all photos on it.


Right now I have 150TB worth of photos. They are accessed by several users and need faster read/write speeds than the standard XFS array delivers. Copying to the server from a machine with an SSD in it, transfers were hitting 50–75 MB/s; if several users were copying files at once, that would drop to 1–10 MB/s. These are all Exos 18TB 7200 RPM SATA drives. I have applied all the usual Unraid tweaks, but nothing seemed to improve the transfer rate, so I figured I would try ZFS and see how it does. I do know about the cache drive, and having users copy files to it and letting it transfer overnight, but I would have to put in probably 6× 4TB SSDs and would prefer not to do that and worry about the data transferring over.


Hopefully this gives a little more detail. Sorry it was fairly long, but I just want to make sure I explained everything properly lol. Thanks again!!


Great, the use case is clear 🙂

 

I am also a novice at ZFS right now, but I have plenty of experience with other RAID / RAID-like systems, so take my advice with that caveat in mind.

 

I guess you are aware of how ZFS uses vdevs. You might want to think that through, because within a vdev there are still limits on performance for a multi-tenant use case like yours. I am myself still trying to understand best practices for ZFS topology, and as of now Unraid doesn't allow every configuration via the UI.
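To make the trade-off concrete, here is a back-of-envelope sketch for your 16× 18TB drives. The layouts and numbers are my own illustration (real usable space will be somewhat lower after metadata and padding):

```shell
# Rough usable capacity for two possible raidz2 layouts of 16x 18TB drives.
drives=16
size_tb=18

# Option A: one 16-wide raidz2 vdev -> 2 parity disks total
opt_a=$(( (drives - 2) * size_tb ))
echo "one 16-wide raidz2: ~${opt_a} TB usable, random IOPS of ~1 disk"

# Option B: two 8-wide raidz2 vdevs -> 4 parity disks total,
# but roughly double the random IOPS since vdevs are striped
opt_b=$(( (drives - 4) * size_tb ))
echo "two 8-wide raidz2:  ~${opt_b} TB usable, random IOPS of ~2 disks"
```

So with this example you'd trade ~36TB of capacity for roughly twice the random IO, which may matter with several users hitting the pool at once.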

 

For sequential IO, you can get close to the theoretical combined throughput of all disks, which should be in stark contrast to the Unraid array. But random IO across multiple users still has the problem that all participating disks in a vdev have to seek to the same location every time, bringing performance closer to that of a single disk.
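As a rough illustration of that contrast, using ballpark per-drive figures I'm assuming for a 7200 RPM SATA drive (not measured numbers), and a two-vdev layout:

```shell
# Performance sketch for two 8-wide raidz2 vdevs (12 data disks total).
# Per-drive figures are ballpark assumptions, not benchmarks.
seq_mbps_per_disk=250
iops_per_disk=150
data_disks=12
vdevs=2

# Sequential IO scales roughly with the number of data disks...
echo "sequential: up to ~$(( data_disks * seq_mbps_per_disk )) MB/s aggregate"
# ...but random IO scales roughly with the number of vdevs, since every
# disk in a vdev has to seek for each record.
echo "random: roughly ~$(( vdevs * iops_per_disk )) IOPS total"
```

Either way, that sequential figure is far beyond the 50–75 MB/s you're seeing from the XFS array; the network will likely become the bottleneck first.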

 

Good luck, and I hope you get a more experienced response than mine.

