Multiple File System Support: Feature Highlights


jonp


Damn!

 

You're right!  I was looking for a Format button on the Disk detail page... Thanks

 

I now have 3 x 3TB XFS drives in the array.  What would be the easiest/fastest way to move the data?  Take the contents of the first 3TB drive and rsync it to a new drive via SSH?  Any recommendations for switches to use, or other commands?

 

I have been using rsync with:

 

rsync -av --progress --remove-from-source /mnt/diskX/ /mnt/diskY/

 

This allows you to restart if it gets interrupted for any reason, and it removes each source file once it has been successfully copied. 

 

Just a warning: I wouldn't use the user shares when copying, since it was found that in certain cases copying from a drive to a share can truncate the data and you can lose files.  It's also best to run from screen so you won't have to worry about the session getting terminated.

 

Thanks, I think I'll migrate some data tonight while sleeping, using screen.  I'll migrate 3 disks at the same time in 3 different screen sessions :)
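Running several migrations in parallel under screen can be sketched as below. This is only an illustration; the session names and disk numbers are made up, and `--remove-source-files` is deliberately left out so an interrupted run leaves the source intact:

```shell
# Start one named, detached screen session per disk migration
# (session names and disk numbers are illustrative)
screen -dmS migrate1 rsync -av --progress /mnt/disk1/ /mnt/disk8/
screen -dmS migrate2 rsync -av --progress /mnt/disk2/ /mnt/disk9/

# List the running sessions
screen -ls

# Reattach to watch progress; press Ctrl-a d to detach again
screen -r migrate1
```

Because each rsync runs inside its own detached session, closing the SSH connection does not kill the copy, and `screen -r` picks it back up later.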

 

Link to comment

Ok...

 

I had a couple (3 or 4) of my WD Red 3TB drives that were in my array but totally empty.  I stopped the array, changed the format to XFS for my last one (Disk 10), then started the array.  It shows up as XFS, but Unformatted.  Is there a WebGUI command to have the disk formatted by unRAID?

When you say "in my array, but totally empty" do you mean they had already been precleared but had not yet been formatted to ReiserFS? Just trying to get a clearer idea of how this works. I normally don't think of a drive that hasn't been formatted as "in the array" yet.

 

On the other hand, if it did already have an empty ReiserFS on it, I would think unRAID would want to clear it first.

 

They were 3 precleared disks, added to the array as ReiserFS, but they were empty (I have 30TB of space, but 20TB free).  So it was easy to stop the array, change those 3 to XFS, then restart the array and format. 

 

Tonight I'll migrate 3 of the other ReiserFS disks to those XFS ones.  They are all the same size (3TB) and none are full, so no risk of a failed copy.

Link to comment

Damn!

 

You're right!  I was looking for a Format button on the Disk detail page... Thanks

 

I now have 3 x 3TB XFS drives in the array.  What would be the easiest/fastest way to move the data?  Take the contents of the first 3TB drive and rsync it to a new drive via SSH?  Any recommendations for switches to use, or other commands?

 

I have been using rsync with:

 

rsync -av --progress --remove-from-source /mnt/diskX/ /mnt/diskY/

 

This allows you to restart if it gets interrupted for any reason, and it removes each source file once it has been successfully copied. 

 

Just a warning: I wouldn't use the user shares when copying, since it was found that in certain cases copying from a drive to a share can truncate the data and you can lose files.  It's also best to run from screen so you won't have to worry about the session getting terminated.

 

It would seem that --remove-from-source is not an option.

 

rsync -av --progress --remove-from-source /mnt/disk8 /mnt/disk9
rsync: --remove-from-source: unknown option
rsync error: syntax or usage error (code 1) at main.c(1554) [client=3.1.0]

Link to comment

So I, like many others I imagine, am undecided as to what filesystem we should use in the near future. I like the sound of Btrfs, but the fact that it's still experimental worries me. How would Btrfs be updated in the future, through an unRAID update? Built-in checksumming is appealing to me, but I don't imagine the auto-compression feature would be much benefit when I only have large media files on my array. I also have no plans to use docker or the like on my array drives. So should I just go with XFS, the safer option for the near future?

Link to comment

....So should I just go with XFS, the safer option for the near future?

 

You may want to check this discussion/poll over in Lounge: http://lime-technology.com/forum/index.php?topic=34776.0

 

I'm going with XFS mainly on Tom's recommendation. It will be interesting when troubles start popping up on systems with BTRFS/XFS. There's so much experience here with RFS problems, but I'm not sure if the same techniques apply.

Link to comment

....So should I just go with XFS, the safer option for the near future?

 

You may want to check this discussion/poll over in Lounge: http://lime-technology.com/forum/index.php?topic=34776.0

 

I'm going with XFS mainly on Tom's recommendation. It will be interesting when troubles start popping up on systems with BTRFS/XFS. There's so much experience here with RFS problems, but I'm not sure if the same techniques apply.

 

Ah, thanks for the link.

Link to comment

Am I correct in assuming that choosing one of the new file systems gives a file system per disk?

 

In other words, 5 disks ReiserFS, 5 disks XFS and 1 parity disk is still 5 Reiser file systems and 5 XFS file systems?  Or is it 5 Reiser file systems and 1 XFS file system with a storage pool of 5 drives in the XFS file system?

Link to comment

In the process of migrating to XFS right now... I have enough room in the array to spare, so I just let some moving jobs do their thing..  It will probably take several weeks, but it's very painless... I would be very reluctant to use an in-place filesystem conversion tool.. brr... real potential of losing everything in one go..

Link to comment

In the process of migrating to XFS right now... I have enough room in the array to spare, so I just let some moving jobs do their thing..  It will probably take several weeks, but it's very painless... I would be very reluctant to use an in-place filesystem conversion tool.. brr... real potential of losing everything in one go..

 

I agree 100% with you... I don't have much trust in automagic tools...  Especially when it is manipulating the data structure in place.  Usually the simple approach is the way to go and the safest...

Link to comment

I have to confess, I've been trying to get my head around what is the best way forward, and I've hit upon a question to do with the best bet in this new (and confusing) world.

 

In general it's been said that you keep the data disks the same size, such that the parity disk is always big enough to deal with any other disk in the array with minimum wasted space.

 

However, with the new capabilities, is it better (possible) to buy and partition one (larger) disk into a data array part that is the same size as the biggest disk/parity drive, and have the other in a BTRFS/RAID1 with an SSD delivering the cache and docker/VM repository with redundancy?

 

Seems like you are going to be limited to HDD write speeds if you have an SSD/HDD combo anyway, and, say, putting a 4TB disk in with 3TB disks gives you a way of moving to larger disks gently. It also reduces the number of disks in your enclosure.

Link to comment

However, with the new capabilities, is it better (possible) to buy and partition one (larger) disk into a data array part that is the same size as the biggest disk/parity drive, and have the other in a BTRFS/RAID1 with an SSD delivering the cache and docker/VM repository with redundancy?

 

Multiple partitions on array disks is not supported at this time - it would be in-sane  ;)

Link to comment

In general it's been said that you keep the data disks the same size, such that the parity disk is always big enough to deal with any other disk in the array with minimum wasted space.

I have never seen it stated as such. Individual data drives can be any size but cannot be larger than the parity drive. Parity works a little differently as I understand it.

However, with the new capabilities, is it better (possible) to buy and partition one (larger) disk into a data array part that is the same size as the biggest disk/parity drive, and have the other in a BTRFS/RAID1 with an SSD delivering the cache and docker/VM repository with redundancy?

To me unRAID will continue to utilize the disk as a whole unit and not allow partitioning. What's changed is the freedom to select the formatting from three file systems (Reiser-XFS-BTRFS).

 

 

Link to comment

In general it's been said that you keep the data disks the same size, such that the parity disk is always big enough to deal with any other disk in the array with minimum wasted space.

 

This has not been stated as far as I know and makes no sense to me.

 

Suggest you read the what is parity link in my sig for some useful background on parity.

 

You are not wasting any space by using a combination of data disk sizes with parity equal in size to only some of them.

 

You could argue that a parity disk larger than any data disk is a waste. For example, having an array of 6 1TB data disks protected by a 6TB parity. But even in this situation, if it is the owner's intent to add or replace disks with 6TB drives in the future it is not a waste.
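The parity behaviour described above can be illustrated with a toy sketch of single-parity XOR. To be clear, this is an illustration of the general technique, not unRAID's actual implementation: shorter disks behave as if zero-padded out to the parity size, which is exactly why mixing data-disk sizes under a largest-disk parity wastes nothing.

```python
# Toy illustration of XOR parity across unequal-size "drives".
# Bytes beyond a shorter drive's end behave as zeros, so mixing
# sizes under a parity as large as the biggest drive wastes nothing.

def parity(drives: list[bytes]) -> bytes:
    size = max(len(d) for d in drives)   # parity matches the largest drive
    out = bytearray(size)
    for d in drives:
        for i, b in enumerate(d):
            out[i] ^= b
    return bytes(out)

def rebuild(surviving: list[bytes], parity_blk: bytes, lost_len: int) -> bytes:
    # XOR of the parity with every surviving drive recovers the lost one
    restored = bytearray(parity_blk)
    for d in surviving:
        for i, b in enumerate(d):
            restored[i] ^= b
    return bytes(restored[:lost_len])

disks = [b"\x01\x02\x03\x04", b"\xff\xee", b"\x10\x20\x30"]
p = parity(disks)
# Drop disk 0 and rebuild it from the other disks plus parity
assert rebuild([disks[1], disks[2]], p, len(disks[0])) == disks[0]
```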

Link to comment

I have to confess, I've been trying to get my head around what is the best way forward, and I've hit upon a question to do with the best bet in this new (and confusing) world.

 

In general it's been said that you keep the data disks the same size, such that the parity disk is always big enough to deal with any other disk in the array with minimum wasted space.

 

However, with the new capabilities, is it better (possible) to buy and partition one (larger) disk into a data array part that is the same size as the biggest disk/parity drive, and have the other in a BTRFS/RAID1 with an SSD delivering the cache and docker/VM repository with redundancy?

 

Seems like you are going to be limited to HDD write speeds if you have an SSD/HDD combo anyway, and, say, putting a 4TB disk in with 3TB disks gives you a way of moving to larger disks gently. It also reduces the number of disks in your enclosure.

 

Why would you keep the data disks the same size?  It is a major benefit of unRAID (even a selling point) that you do not have to do that... The only important thing is that your parity drive needs to be the biggest one..

 

So as soon as you run out of storage space, buy a large drive, make it your parity drive and reuse the old parity drive as a data drive (at least that is the way I do it..)

Link to comment

However, with the new capabilities, is it better (possible) to buy and partition one (larger) disk into a data array part that is the same size as the biggest disk/parity drive, and have the other in a BTRFS/RAID1 with an SSD delivering the cache and docker/VM repository with redundancy?

 

Multiple partitions on array disks is not supported at this time - it would be in-sane  ;)

Hmm, well that makes that simple - I assume there is some reason in how the parity works in UNRAID for not allowing partitioning of space. Thing is, it would be handy.

 

Particularly with the caching, its RAID1 backup, and the VM stores we now have, making the parity drive slightly bigger (say, buying a 4TB rather than a 3TB) and using the extra for these various purposes keeps the hardware count and complexity down.

Link to comment

However, with the new capabilities, is it better (possible) to buy and partition one (larger) disk into a data array part that is the same size as the biggest disk/parity drive, and have the other in a BTRFS/RAID1 with an SSD delivering the cache and docker/VM repository with redundancy?

 

Multiple partitions on array disks is not supported at this time - it would be in-sane  ;)

Hmm, well that makes that simple - I assume there is some reason in how the parity works in UNRAID for not allowing partitioning of space. Thing is, it would be handy.

 

Particularly with the caching, its RAID1 backup, and the VM stores we now have, making the parity drive slightly bigger (say, buying a 4TB rather than a 3TB) and using the extra for these various purposes keeps the hardware count and complexity down.

 

Mm.. the whole point of unraid is to combine physical disks to one functional volume that is split in shares.. why would you feel the need to go the other route and partition a drive to have two volumes that are then combined to one functional volume #mindexplosion#

 

If you need storage outside of the array, that is what the cache drive or cache pool is used for..

Link to comment

Mm.. the whole point of unraid is to combine physical disks to one functional volume that is split in shares.. why would you feel the need to go the other route and partition a drive to have two volumes that are then combined to one functional volume #mindexplosion#

 

If you need storage outside of the array, that is what the cache drive or cache pool is used for..

Sigh

 

And the point is you shouldn't need to add a cache drive, but with the advent of VMs and Containers, you do. For most people, that's an SSD cache/VMs drive - but the latest betas introduce the idea of pooling that with another drive as a RAID1 setup, for redundancy purposes (eg two arrays  #mindexplosion#)

 

Looking to not have two physical drives outside of the normal UNRAID array drives seems like a fairly obvious aim.

 

 

Link to comment

I know unRAID now offers single-drive BTRFS volumes as an array drive format option, alongside the old standby RFS and the other new option, XFS.

 

However, Tom mentioned some issues with BTRFS that appear to be due to its copy-on-write filesystem features.

 

- btrfs is still a real pain to manage in some circumstances.  For example, try moving a directory that contains subvolumes to another partition and preserving the subvolumes! Very difficult.  The 'standard' unix tools: cp, mv, etc. simply don't work (well, they work, but your 4GB docker directory balloons to 30-40-50GB).  By isolating Docker in its own volume image file, it is easier to move around onto other devices.

 

I don't see anyone mentioning these issues when debating which filesystem option is best to use for new array drives.

 

Are the issues Tom mentioned irrelevant when BTRFS is used on an array drive? 

 

What I mean is, were the issues he raised only relevant when using BTRFS on the cache drive with multiple partitions and copying data around on that one drive?  (And thus the reason he's implemented mounting BTRFS loopback volume files rather than requiring BTRFS on the cache drive to support Docker.)

 

-- stewartwb
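The ballooning Tom describes comes from cp/mv copying file contents byte-for-byte, so extents that snapshots share on disk get duplicated in the copy. A rough sketch of the difference is below; the paths are illustrative, and the `btrfs send`/`btrfs receive` pair is the subvolume-aware way to move data between btrfs filesystems:

```shell
# Subvolumes and snapshots share extents on disk, so snapshots are cheap
btrfs subvolume create /mnt/cache/docker
btrfs subvolume snapshot -r /mnt/cache/docker /mnt/cache/docker-snap

# Plain cp copies file contents, so every snapshot becomes a full copy
# and a 4GB tree with many snapshots balloons at the destination:
cp -a /mnt/cache/docker /mnt/disk1/docker

# send/receive preserves the subvolume structure instead
# (requires the source snapshot to be read-only, hence -r above):
btrfs send /mnt/cache/docker-snap | btrfs receive /mnt/disk1/
```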

Link to comment

I know unRAID now offers single-drive BTRFS volumes as an array drive format option, alongside the old standby RFS and the other new option, XFS.

 

However, Tom mentioned some issues with BTRFS that appear to be due to its copy-on-write filesystem features.

 

- btrfs is still a real pain to manage in some circumstances.  For example, try moving a directory that contains subvolumes to another partition and preserving the subvolumes! Very difficult.  The 'standard' unix tools: cp, mv, etc. simply don't work (well, they work, but your 4GB docker directory balloons to 30-40-50GB).  By isolating Docker in its own volume image file, it is easier to move around onto other devices.

 

I don't see anyone mentioning these issues when debating which filesystem option is best to use for new array drives.

 

Are the issues Tom mentioned irrelevant when BTRFS is used on an array drive? 

 

What I mean is, were the issues he raised only relevant when using BTRFS on the cache drive with multiple partitions and copying data around on that one drive?  (And thus the reason he's implemented mounting BTRFS loopback volume files rather than requiring BTRFS on the cache drive to support Docker.)

 

-- stewartwb

 

If you're using advanced functions of BTRFS such as copy-on-write, snapshots, or the like, you may have some issues.  If you're just looking for an alternative to ReiserFS, I would actually recommend XFS over BTRFS for array devices.

 

I've yet to find a really glaring point of benefit to anyone for using BTRFS on the array.  There probably is one, but it's not been very apparent to me in testing thus far...

Link to comment

If you're using advanced functions of BTRFS such as copy-on-write, snapshots, or the like, you may have some issues.  If you're just looking for an alternative to ReiserFS, I would actually recommend XFS over BTRFS for array devices.

 

I've yet to find a really glaring point of benefit to anyone for using BTRFS on the array.  There probably is one, but it's not been very apparent to me in testing thus far...

 

What about when you introduce BTRFS scrubbing?
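For reference, a scrub on a single array disk would look roughly like the sketch below (the disk path is illustrative). One caveat worth hedging: on a single-device volume with the usual default profiles, a scrub verifies checksums and can repair metadata (which is typically duplicated on hard drives), but corrupt data blocks have no second copy, so they are reported rather than fixed.

```shell
# Start a scrub on a single-drive BTRFS array disk (path illustrative)
btrfs scrub start /mnt/disk5

# Check progress and any checksum errors found so far
btrfs scrub status /mnt/disk5
```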

Link to comment
