
Re: Format XFS on replacement drive / Convert from RFS to XFS (discussion only)

888 posts in this topic

Recommended Posts

2 minutes ago, jonathanm said:

It's also a good idea to be sure that your data copy method and destination support sparse files and symlinks, as there are likely to be a bunch of both things on a well used app / vm / cache drive.

 

What method is that?  I used the rsync method for the other disks.

3 minutes ago, pcgirl said:

 

What method is that?  I used the rsync method for the other disks.

rsync to another disk in the array should work fine, as long as you have the correct command line options. rsync has a metric boatload of options, so it's a good idea to look up a reference guide to make sure the command you issued is what you really wanted to happen.


I am using this command from the instructions:  rsync -avPX /mnt/disk10/ /mnt/disk11/

Will this work?


OK guys, this will be my situation, as the screenshot below shows.

 

 

And I will be adding another 2TB drive, and then I want to convert the rest of the drives to XFS, including the parity, so I can use dual parity further down the line.

 

Can anyone be of any help?

[screenshot: 2018-01-05 19_52_57-Thor_Main.png]


Note that the parity doesn't have any file system, so no file system conversion for that drive.

44 minutes ago, pwm said:

Note that the parity doesn't have any file system, so no file system conversion for that drive.

Oh OK, so can I expand the size of the parity drive before I do the conversion, so I have a larger drive to attach to the array?

 


I am running 6.3.5 with dual parity and have an empty disk that is assigned to the array and already formatted with RFS... what's the easiest way to switch it to XFS?  The disk was being used, but it was trivial enough to move the files off of it.

12 minutes ago, dave_m said:

I am running 6.3.5 with dual parity and have an empty disk that is assigned to the array and already formatted with RFS... what's the easiest way to switch it to XFS?  The disk was being used, but it was trivial enough to move the files off of it.

 

Stop the array, click on the disk on the Main page to get to its settings, select the xfs filesystem for the disk, start the array, and let it format the disk.

 

Make sure no other disk appears as unmountable or unformatted before you tell it to start and format. If any other disks are shown as unmountable or unformatted, come back for further advice.

 

Whether or not it is empty, it will be an empty xfs filesystem after the format, and since the format happens with the array started, parity is kept in sync during the format. Format is really just writing an empty filesystem, and all writes in the parity array update parity.


Admittedly, I have not read all 33 pages of this. I did read the first few maybe a year or so ago. 

 

I think I understand the process pretty well. I'm actually injecting some larger drives into the array, as well as making sure certain important files go on these newer drives. I've mapped it out so that the older drives get pulled off to the side and will be converted last. 

 

Here's my conundrum... I'd like to keep my physical slots the same in my case. So after step 17, I'm thinking of physically swapping the slot. So in essence my copy to drive will always be in the same slot (and will assign it Disk 13). 

 

Or is it better to keep walking through the array and then move the physical stuff at the end? 

On 1/5/2018 at 3:52 AM, shanehm2 said:

Oh ok so can I expand the size of the parity drive before I do the conversion so I have a larger drive to attach to the array ?

 

 

Yes, of course.   You can replace the parity drive with any size drive you want (12TB anyone??).    Then you can add a drive of the same size (and format it XFS).   Then you could simply move all of the files off of an RFS disk, and reformat it to XFS ... and repeat the process for all of the RFS drives.

 

1 hour ago, axeman said:

I'm actually injecting some larger drives into the array, as well as making sure certain important files go on these newer drives.

 

While I understand the psychology of putting "important files" on a newer drive; note that there's no difference in how well protected those files are.

 

1 hour ago, axeman said:

Here's my conundrum... I'd like to keep my physical slots the same in my case. So after step 17, I'm thinking of physically swapping the slot. So in essence my copy to drive will always be in the same slot (and will assign it Disk 13). 

 

 

Not sure what difference it makes, but you can certainly do this if you want.   Note that if you change slot assignments, it will invalidate parity if you're using dual parity.   [The assignments can be freely changed with single parity]

 

1 hour ago, garycase said:

 

While I understand the psychology of putting "important files" on a newer drive; note that there's no difference in how well protected those files are.

 

 

Not sure what difference it makes, but you can certainly do this if you want.   Note that if you change slot assignments, it will invalidate parity if you're using dual parity.   [The assignments can be freely changed with single parity]

 

 

Gary - the move to the newer drives is mostly for space/performance reasons. The older ones were "green" 2TB drives; "newer" are HGST NAS drives, about 2 years old, were pre-cleared and in my test array. 

 

I'm planning on using dual parity, but not until all of this is done. I just want the Drive # to match the physical location on the case, cos OCD, perhaps. 

 

Thanks


Understand -- and with single parity you can freely move the drives to any assigned slot you want.

 

I certainly understand a bit of OCD ... my wife would tell you I have a LOT of that :D.    [Just not in my server drive assignments and data distribution]

On 1/29/2018 at 7:00 AM, garycase said:

Understand -- and with single parity you can freely move the drives to any assigned slot you want.

 

I certainly understand a bit of OCD ... my wife would tell you I have a LOT of that :D.    [Just not in my server drive assignments and data distribution]

 

I just realized I don't understand Step 15.

 

Important! Click on each drive name (e.g. Disk 10 and Disk 11) and swap the file system format of the drive - if it's ReiserFS change it to XFS, if it's XFS change it to ReiserFS; it's important to swap the disk formats as well as the physical drive assignments

 

Wouldn't changing this on the target drive require me to format it? In my case I went from Disk 8 to Disk 13. Disk 13 was an empty drive that was formatted xfs-enc. I finished moving everything over, did a New Config, and assigned the new physical disk to Disk 8 and the old disk to slot 13. I understand changing Disk 13 here is desirable (so that the next step can start with a blank disk in the right format), BUT wouldn't changing the new disk's format (now in Disk 8) require a format?


Changing the file system will indeed result in the disk being formatted & all data being lost from that disk.    Not sure exactly which list of steps you're referring to ... this is a very long thread with several lists ... but the concept is simple.   Just PAY ATTENTION to where your data is at all times, and after you've safely moved it off of an RFS disk, you can then change that RFS disk's format to XFS and let UnRAID format it -- then use it for the next target disk.     That process is really independent of "moving the assignments" around to satisfy your organizational OCD.    You can do the reassignments after each format change -- or you could just do them all at the end after you've completed the format changes.

 

There's another LONGER (and arguably safer) way to do this -- it's actually what I did when I finally decided to convert my media server to XFS (really just an OCD thing to have all my servers using the same file system -- since it was just 16 disks of mostly static content there wasn't any real reason to switch the file format) ==>  and this approach didn't result in any disk assignments changes or physical relocation of the disks ....

 

One-at-a-time, just do the following:

 

(a)  Copy all of the content of a disk to another location not on the server  [Being OCD, ALL of my copies were verified, so I just did a TeraCopy of the entire disk to a folder on my main PC called "UnRAID DiskX" ... I changed the X to match the current disk I was converting].      Clearly this requires that you have a large enough disk to hold all of the data [I used a spare 8TB disk that I put in my PC for this purpose]

 

(b)  Change the format of the disk in UnRAID to XFS and format it.

 

(c)  Copy all the data back  [I again used TeraCopy with verification.]

 

That process is simple; safe (although for the time that data only exists on the PC it's "at risk" since it's not fault tolerant -- although you should still have your backups just in case);  and no drive is ever moved or reassigned.     It DOES take a long time -- two complete copies of every disk's data with verification isn't all that fast [The copy TO the PC is pretty fast ... over 100MB/s in my case;  but the copy back to the server is slower, since you're copying to a parity protected array].

 

39 minutes ago, garycase said:

Not sure exactly which list of steps you're referring to ... this is a very long thread with several lists ... but the concept is simple.   Just PAY ATTENTION to where your data is at all times, and after you've safely moved it off of an RFS disk, you can then change that RFS disk's format to XFS and let UnRAID format it -- then use it for the next target disk. 

 

You don't have to follow any list. You just have to know what you're doing.:D

 

If you are accessing everything with user shares, and none of your user shares have include or exclude set, then you don't really even care which disk is where or which disks have which files on them. So there isn't really any need to swap things at all.

 

Every process is based on a few simple facts. Anytime there is a write operation in the parity array, unRAID updates parity at the same time. Everything other than a read is a write. Writing a file is a write, deleting a file is a write (the directory is written), and formatting a disk is a write (an empty filesystem is written to the disk). So when you format a disk in the parity array, parity is updated and remains valid.

 

In order to keep your files when you change the filesystem (format), you have to copy them elsewhere.

 

That's really all there is to it. You can make up your own list of steps if you understand all that. Many of us did this before there was any list to follow, and we probably had some differences in the details of how we accomplished it, but the process was all based on understanding what formatting a disk in the array does.
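The XOR arithmetic behind "anytime there is a write, parity is updated" can be sketched in a few lines of shell (the byte values here are arbitrary examples, not anything unRAID-specific): the new parity for a position is the old parity XOR'd with the old data byte and the new data byte, which is why writing an empty filesystem during a format leaves parity valid.

```shell
# Illustrative single-parity (XOR) update; all byte values are made up.
old_data=$(( 0x5A ))   # byte being overwritten on the data disk
new_data=$(( 0x3C ))   # byte being written (e.g. part of a fresh filesystem)
parity=$((   0xF0 ))   # current parity byte for this position
# parity_new = parity_old XOR data_old XOR data_new
new_parity=$(( parity ^ old_data ^ new_data ))
printf 'new parity byte: 0x%02X\n' "$new_parity"   # prints 0x96
```

The same identity read backwards is what makes rebuilds work: XOR the new parity with every surviving data byte and the missing byte falls out.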


When I did my conversion on both of my servers, I used this procedure:

 

          http://lime-technology.com/wiki/File_System_Conversion#Mirroring_procedure_to_convert_drives

 

I made up a table of the variables that I had to change in the command line for each conversion, and the disk format settings for the two disks involved.  BTW, you can use the BASH shell editing features (UP-arrow and DOWN-arrow to move up and down the 'stack' of commands, coupled with LEFT-arrow and RIGHT-arrow to position the cursor) to edit any command previously entered so as to minimize the required typing.  Most of the time, you are only changing a couple of numbers.  I checked off each step as I did it!

 

I also printed out the instructions and worked through them in my mind (and with a bit of pencil and paper) until I figured out exactly what I would be doing.  This instruction set is not really clear without some very logical forethought about what is to be done.

4 hours ago, garycase said:

Changing the file system will indeed result in the disk being formatted & all data being lost from that disk.    Not sure exactly which list of steps you're referring to ... this is a very long thread with several lists ... but the concept is simple.   Just PAY ATTENTION to where your data is at all times, and after you've safely moved it off of an RFS disk, you can then change that RFS disk's format to XFS and let UnRAID format it -- then use it for the next target disk.     That process is really independent of "moving the assignments" around to satisfy your organizational OCD.    You can do the reassignments after each format change -- or you could just do them all at the end after you've completed the format changes.

 

There's another LONGER (and arguably safer) way to do this -- it's actually what I did when I finally decided to convert my media server to XFS (really just an OCD thing to have all my servers using the same file system -- since it was just 16 disks of mostly static content there wasn't any real reason to switch the file format) ==>  and this approach didn't result in any disk assignments changes or physical relocation of the disks ....

 

One-at-a-time, just do the following:

 

(a)  Copy all of the content of a disk to another location not on the server  [Being OCD, ALL of my copies were verified, so I just did a TeraCopy of the entire disk to a folder on my main PC called "UnRAID DiskX" ... I changed the X to match the current disk I was converting].      Clearly this requires that you have a large enough disk to hold all of the data [I used a spare 8TB disk that I put in my PC for this purpose]

 

(b)  Change the format of the disk in UnRAID to XFS and format it.

 

(c)  Copy all the data back  [I again used TeraCopy with verification.]

 

That process is simple; safe (although for the time that data only exists on the PC it's "at risk" since it's not fault tolerant -- although you should still have your backups just in case);  and no drive is ever moved or reassigned.     It DOES take a long time -- two complete copies of every disk's data with verification isn't all that fast [The copy TO the PC is pretty fast ... over 100MB/s in my case;  but the copy back to the server is slower, since you're copying to a parity protected array].

 

 

Thanks Gary - I was thinking of doing an offline copy to my Media PC as well, but I'm not looking forward to doing it over the network. I'm following the steps of the Mirror Method (the one that @Frank1940 linked).

 

 

3 hours ago, trurl said:

 

You don't have to follow any list. You just have to know what you're doing.:D

 

If you are accessing everything with user shares, and none of your user shares have include or exclude set, then you don't really even care which disk is where or which disks have which files on them. So there isn't really any need to swap things at all.

 

Every process is based on a few simple facts. Anytime there is a write operation in the parity array, unRAID updates parity at the same time. Everything other than a read is a write. Writing a file is a write, deleting a file is a write (the directory is written), and formatting a disk is a write (an empty filesystem is written to the disk). So when you format a disk in the parity array, parity is updated and remains valid.

 

In order to keep your files when you change the filesystem (format), you have to copy them elsewhere.

 

That's really all there is to it. You can make up your own list of steps if you understand all that. Many of us did this before there was any list to follow, and we probably had some differences in the details of how we accomplished it, but the process was all based on understanding what formatting a disk in the array does.

 

Thanks @trurl - I do care where data goes for two drive slots, as I use NFS exports... but I do like having the drive numbers map back to their physical slots in the case, too.

 

3 hours ago, Frank1940 said:

When I did my conversion on both of my servers, I used this procedure:

 

          http://lime-technology.com/wiki/File_System_Conversion#Mirroring_procedure_to_convert_drives

 

I made up a table of the variables that I had to change in the command line for each conversion, and the disk format settings for the two disks involved.  BTW, you can use the BASH shell editing features (UP-arrow and DOWN-arrow to move up and down the 'stack' of commands, coupled with LEFT-arrow and RIGHT-arrow to position the cursor) to edit any command previously entered so as to minimize the required typing.  Most of the time, you are only changing a couple of numbers.  I checked off each step as I did it!

 

I also printed out the instructions and worked through them in my mind (and with a bit of pencil and paper) until I figured out exactly what I would be doing.  This instruction set is not really clear without some very logical forethought about what is to be done.

Thanks @Frank1940 - that's the process I'm using - and I mapped everything out in Excel so I know what's going where, what to physically move when done, and what the end result is.

 

I think the step that got me hung up is sort of a "test" that the array starts up. I thought for some reason the array wouldn't start until the drives were formatted or matched the format. The steps do say to go back and set them properly, but I think I just got nervous knowingly assigning the wrong format.

Edited by axeman


Got a quick question, I'm in the process of migrating to XFS and have successfully transferred and converted one disk already.

 

My question is: after formatting my old ReiserFS drive to XFS, I noticed that 6.60GB was used on Disk9 after the format. Is that supposed to be normal for a 6TB drive? The only thread I could find regarding this is @

 

 

[screenshot: unraid-disk6-9.png]

 

Cheers

 

12 minutes ago, dfx said:

My question is: after formatting my old ReiserFS drive to XFS, I noticed that 6.60GB was used on Disk9 after the format. Is that supposed to be normal for a 6TB drive? The only thread I could find regarding this is @

Yes, it's normal, about 1TB per GB.

10 minutes ago, johnnie.black said:

Yes, it's normal, about 1TB per GB.

 

Awesome! Thanks for putting my mind at ease :)

13 hours ago, johnnie.black said:

Yes, it's normal, about 1TB per GB.

 

The meaning is clear, but the numbers are reversed -- the file system uses about 1GB per TB.

[It would be difficult to allocate a TB for each GB :D]
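As a quick sanity check of that ratio (an estimate only; actual XFS metadata overhead varies with mkfs options), the 6.60GB observed on the 6TB drive above works out to roughly 1.1 GB per TB:

```shell
# Rough XFS metadata overhead estimate: ~1.1 GB per TB of drive size,
# matching the ~6.6 GB observed on the 6 TB drive in this thread.
size_tb=6
awk -v tb="$size_tb" 'BEGIN { printf "~%.1f GB overhead\n", tb * 1.1 }'
```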

 


I thought about commenting on that, but since the user understood it anyway, I just changed my way of looking at it. Each 1TB of disk per 1GB used for the filesystem.:D


