Re: Format XFS on replacement drive / Convert from RFS to XFS (discussion only)



I am aware of the data-rebuild process, as I am working on another concern with that.

The 6TB drive in my set is parity. How can I copy or clone parity to XFS?

Once I have a clone of parity, I then need another empty drive formatted as XFS?

So, if I am understanding this correctly, to convert my whole array to XFS:

01) Add empty drive (formatted XFS)
02) move data from RFS drive to new XFS drive
03) remove RFS drive
04) replace removed RFS drive with new XFS drive (swap spots)
05) parity sync

Won't my parity think the RFS drive I removed is still an RFS drive when I try to sync? Since it is an XFS drive I am putting in that spot, doesn't parity go by the last known filesystem table or something like that?
 

 

On another note, rather than migrating all drives to XFS (converting the whole array): would it be a good idea, as I add or replace drives going forward, to have a mix of RFS and XFS drives in one array?

i.e. the next drive added or replaced would be formatted XFS and put in place of an existing drive. Or wait... we can't do that, because parity won't rebuild the data properly since it sees all the current disks as RFS?

Edited by bombz
Link to comment

Parity doesn't have a filesystem. If you do everything correctly parity will be maintained and won't need to be rebuilt.

 

43 minutes ago, bombz said:

01) Add empty drive (formatted XFS)
02) move data from RFS drive to new XFS drive
03) remove RFS drive → Reformat RFS drive to XFS
04) replace removed RFS drive with new XFS drive (swap spots)
05) parity sync

Repeat 2 and 3 as needed.

 

If you start with the drive with the most used space and work your way down, you shouldn't have any problem copying each disk to the most recently formatted disk.
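
Roughly, each pass looks like this from the command line (just an illustrative sketch -- the disk numbers are placeholders for your own assignments, and the File System Conversion wiki page has the authoritative procedure):

    # copy everything from the fullest ReiserFS disk to the new, empty XFS disk
    # (disk numbers are examples only -- substitute your own)
    rsync -avPX /mnt/disk1/ /mnt/disk12/

    # after verifying the copy: stop the array, change disk1's filesystem to XFS,
    # start the array and format it -- disk1 is now the empty target for the next
    # ReiserFS disk, and you repeat the rsync with the next pair of disk numbers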

Link to comment

I did read it, and I keep looking it over :-)
 
The troublesome part right now is everything I am trying to do.

First off, I am trying to shrink the number of physical disks in my array.
I am trying (eventually) to replace the disks that are smaller than 3TB with 4TB drives.


At this point with the 12 physical disks in my case, the case is full (no more room to mount physical disks)

 

So, given that situation, where you state to start with the largest DATA DISK in the set (4TB) and work down to migrate to XFS, I was looking at starting with the smaller DATA DISKS and working that way instead.

As you can see in my screenshot, I have some 500GB and 1.5TB drives. I could add (1x) 4TB disk and push all the data from those 3 disks (or at LEAST 2 of them) to that (1x) 4TB drive. However, after that is done, I want to remove them (2x 500GB and 1x 1.5TB) completely from the array.

What I am cautious about is unRAID thinking I still have 12 disks when I really want to have (9x) in total at that point. I don't know what parity would do in that case, whether it would say "hey, missing disks / parity not valid".
I think that's the point where you have to start a new config?

 

I think I am being SUPER cautious, but I know once I learn it, I will understand it better.

 

If I can AT LEAST get down to (9X) disks, I can then add another parity (for dual parity) and continue my migration to XFS. 

 

I really appreciate the help

01) Add empty drive (formatted XFS)
02) move data from RFS drive to new XFS drive
03) remove RFS drive → Reformat RFS drive to XFS

 

^^^ these steps make sense and seem simple enough, but I have the added step of removing physical disks out of the array ^^^

 

I added "remove drive" (in steps above earlier) as I need to free up space in my case, a nice number to get to when finalized is (10X) disks. 

Does that make sense, or am I overthinking things again? (I do that sometimes.)

 

Thank you :-) 

Edited by bombz
Link to comment

OK, I can try that method if the current disks in the array have enough room to move data to them.
The other option is to take the current data off to an external source, then remove the disks, add the new XFS disk, and copy the content back to the XFS disk over the network.

 

I wonder how much of a difference XFS is going to make. I hope a lot, performance-wise.

 

Thanks :-)

Edited by bombz
Link to comment

Hm, interesting. Would that plugin read NTFS?
That would save a lot of time, just being able to plug the drive into a USB dock, for sure.

I think that is the best method for me (what I am thinking, anyway):

01) copy data from array (2x) 500 + (1x) 1.5TB to external source
02) Remove disks
03) Run new config
04) parity sync/rebuild
05) add new disk (format XFS)
06) USB dock copy back to the XFS disk
07) Parity sync

Then repeat as I have more space free in the case to add physical disks 

Link to comment
11 minutes ago, bombz said:

Hm, interesting. Would that plugin read NTFS?
That would save a lot of time, just being able to plug the drive into a USB dock, for sure.

I think that is the best method for me (what I am thinking, anyway):

01) copy data from array (2x) 500 + (1x) 1.5TB to external source
02) Remove disks
03) Run new config
04) parity sync/rebuild
05) add new disk (format XFS)
06) USB dock copy back to the XFS disk
07) Parity sync

Then repeat as I have more space free in the case to add physical disks 

Why do you think parity sync is needed at step 7?

Link to comment
6 minutes ago, bombz said:

01) copy data from array (2x) 500 + (1x) 1.5TB to external source

 

What I would propose is to remove these three disks and set them aside.

 

Then proceed with the other steps, except that you would mount these three disks (one at a time) in the USB external housing and use them for the copy in step 06.  It has been some time since I did my conversion, but if you follow the steps correctly, step 07 is merely a verification that parity is correct.  (This assumes that the first XFS-formatted disk is large enough to store all of the files from these three disks and the next one to be converted.)
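
For the copy in step 06, once a removed disk is sitting in the USB housing, the console commands would look something like this (a sketch only -- the device name and mount point are placeholders, and since the removed disks are still ReiserFS you can mount them read-only):

    mkdir -p /mnt/olddisk                             # temporary mount point (name is arbitrary)
    mount -t reiserfs -o ro /dev/sdX1 /mnt/olddisk    # sdX1 = whatever the docked disk shows up as
    rsync -avPX /mnt/olddisk/ /mnt/disk13/            # disk13 = the new XFS disk in your array
    umount /mnt/olddisk

(Or mount it from the GUI with the plugin mentioned earlier and run just the rsync line.)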

Link to comment

Ah
I see what you're saying. I think your process is the 'reverse' of mine.

01) Add new pre-cleared disk (4TB formatted XFS) = 13 disks in array
02) remove (2x500 1x1.5tb)

03) run 'new config'
04) with the 2x500GB and 1x1.5TB out of the array... dock them via USB (attached to the server) and copy data back to the 4TB (XFS)

05) parity sync (for good measure)
06) done?

 

I think... 

 

I can't mount disk 13 in the case properly until I physically remove the other disks (so it will be a temporary 'loose' hookup).

On another note, I can have a mix of XFS and RFS file systems in the array, correct? 

 

Now to read up on how to format the pre-cleared disk as XFS instead of RFS (which has been my default for years).

Edited by bombz
Link to comment
7 hours ago, bombz said:

01) Add new pre-cleared disk (4TB formatted XFS) = 13 disks in array
02) remove (2x500 1x1.5tb)

Reverse these two steps; then you will have room in the case for the new drive.  After the 'New Config' step, you are ready to convert.

 

This is the procedure that I followed:

    https://wiki.lime-technology.com/File_System_Conversion#Mirroring_procedure_to_convert_drives

 

I made a printout of this portion of the instructions and studied it until I completely understood how the procedure worked and formulated in my mind how I was going to do it.  Then I made a paper table with all the steps that had to be done for each disk.  I checked off each step as I completed it.  If you look carefully at the procedure, you will see that only one parameter gets changed in the rsync command.  The shell used for the command line interface has extensive built-in editing features which make editing the commands a very simple task. (Basically, the up-down-left-right arrow keys are your friends in this case...)
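
From memory, the part that changes between passes is just the disk numbers in that rsync command, something along these lines (illustrative only -- follow the wiki page above for the exact commands and the surrounding steps):

    rsync -avPX /mnt/disk1/ /mnt/disk12/    # first pass: fullest ReiserFS disk -> new XFS disk
    rsync -avPX /mnt/disk2/ /mnt/disk1/     # next pass, after disk1 has been reformatted to XFS
    rsync -avPX /mnt/disk3/ /mnt/disk2/     # and so on down the line

Recalling the previous command with the up arrow and editing the two numbers is all there is to it.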

Link to comment

You have been very patient with me and all my questions. I suppose it is a matter of getting my head wrapped around it, because there is that nagging thought in the back of your mind that you don't want to lose data or mess up the array.

I will follow your recommendation, print things off, and make a game plan for when I am ready.
I have (1x) 4TB pre-clearing now, 38 hours in, and another 4TB ordered and in the mail, which will also be pre-cleared.
I don't know exactly when I am going to get to the filesystem changeover, as I have been working on a rebuild this past week with some questionable disks (sigh).

I suppose after I have changed all the file systems over (whenever that is) I will THEN add a second 6TB parity. I am assuming when running dual parity they both HAVE TO BE the same size (a match)?

 

Thank you for all the responses! I really appreciate the assistance, truly!

Edited by bombz
Link to comment
18 minutes ago, bombz said:

I suppose after I have changed all the file systems over (whenever that is) I will THEN add a second 6TB parity. I am assuming when running dual parity they both HAVE TO BE the same size (a match)?

 

No, not in theory anyway.  But the second Parity Drive (or the first, for that matter) must be as large as or larger than the largest data drive.  In this case, since you already have a 6TB parity drive and you are out of space for more drives in the case, it would make much more sense to go with a 6TB drive for that second parity.  As I recall, you could actually use an 8TB drive for that second parity if you wanted to.  (Some folks have gotten good deals on 8TB drives by shucking them from 8TB USB drive enclosures.)

 

About the time required for the conversion, I seem to recall that it took about 2-3 hours per TB (it depends a bit on file sizes) to copy the data over using rsync, so the task is not one that takes forever.  The time to do the other housekeeping tasks is about thirty minutes per disk.  Both of my servers were converted over about a two-week period.
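
(As a rough worked example: a fairly full 4TB data disk at 2-3 hours per TB works out to something on the order of 8-12 hours of copying, plus the half hour or so of housekeeping for that disk.)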

Link to comment

OK thanks man

Another thing I have been noticing with the current setup is terrible performance when writing to the unRAID array from Windows.
I am using TeraCopy for file transfers.

But when I go to push/write a file, the Windows mapped network drive takes a bit to start, it seems (progress bar in the file path), and then will SOMETIMES time out. I then start another copy of the same file (the one that timed out) and the file(s) will transfer across with really good speeds. (Keep in mind I have tested this same method with ALL DISKS in the array spun up and still encounter it.)

 

But I am suffering in performance somewhere; I can't queue up, say, 20GB spanned across 3 mapped disks and walk away, as it seems to get stuck and I have to resend the files.

I am not sure if this has to do with the file system I am using, the hardware, or the copy software. I wish it could be better and faster. Is there a better method that works best for pushing large amounts of data from a Windows system to a mapped array disk (not a folder share)?

 

My mind is a blur this week with work, working on this, and normal life stuff, trying to absorb all this info.

 

I have an LSI SAS card in the mail, but it won't be here for weeks, so I hope that helps over the SuperMicro I have in there now.
Then the plan is to HOPEFULLY push over to XFS, which I have been told is much better performance-wise.

 

Edited by bombz
Link to comment
1 hour ago, bombz said:

Right, for disk-to-disk within the server array that makes sense. I was speaking generally about file transfers in that regard, just trying to figure that out :-)

I would suggest waiting to address this problem until after you have the server converted over to XFS.  ReiserFS apparently has some real issues as disk sizes get larger and when the disks are almost full.  (Development ended back around 2008, when the developer (and maintainer) of this file system was convicted of the murder of his wife and sentenced to life.)

 

EDIT: If you wish to pursue this slow-copy issue at that point, please start a new thread.

Edited by Frank1940
Link to comment

Wowzers
Fair enough. That's why I was looking into XFS, as you and some others made some points that it would assist with many performance concerns :-)

New RAM tomorrow, new HDD to follow, then the LSI SAS card, then a few weeks of testing, then migration time. Sigh!! So much to do, but it's good this is pushing me to get it done!

Link to comment

I have a failing disk - it shows over 1000 reallocated sectors.  So instead of doing the usual rebuild of the disk by replacing it, I was trying this method to copy the files to the new XFS disk.  I had just completed a parity check with no errors, so I thought I would be OK.  I began the disk copy last night following the published procedure. About 2/3 of the way through, the copy rate dropped to about 5-30 KB/sec, and I am seeing 36,000 or so errors on the unRAID display for this copy.

 

Should I abort the copy, remove the failing disk, and rebuild it to a new replacement, and after that work on the XFS conversion?  Or should I just wait for the copy to complete?

 

If I need to abort, how do I do that? I am using a direct console command to do this.

Link to comment
