Re: Format XFS on replacement drive / Convert from RFS to XFS (discussion only)



I just performed a major set of upgrades to my unraid server.  In the process I added a fifth data drive, which is currently empty.  My data drives are 3, 3, 2, 2, and 2 TB.  The fifth data drive is 2TB, which is smaller than the two 3TB data drives.  However, those two drives were recently upgraded from 1.5TB, so each contains 1.5TB or less of data.  Since the data on each drive is less than 2TB, can I use the empty 2TB drive to start this process?  Or do I still need to use a 3TB drive (which I can do -- both my cache/hot spare and second parity drive are 3TB)?

 

Also, I noticed my cache drive is BTRFS.  Should I change it to XFS as well?

 

Thank you.

Link to comment

Shoot... if I remove the fifth drive to format it with XFS, does the system have to perform a parity check?  I know that when you ADD a drive you have to perform a parity check, and it makes sense to me that if you REMOVE a drive you'd have to do one as well.  I ask only because for me that's about a day of waiting (a parity check takes at least 8 hours, which means I should start this process tonight so the check might be done in the morning).  And if I then add the drive back in, I assume that's another parity check?  So, basically 16+ hours of parity checks just to format a single drive that's already part of the array?

Link to comment

Shoot... if I remove the fifth drive to format it with XFS, does the system have to perform a parity check?  [cut]

You must let unRAID do the format of any drive that is part of the array, so removing it isn't what you want to do. If you have already added it and started the array, then unRAID is probably offering to format it right now, if it hasn't already done so. And it is probably set to XFS, since that is the default for array drives in V6.

 

If for some reason you have already formatted it to a different filesystem, stop the array, click on the drive to go to its settings page, and change the format. Then unRAID will format it when you start the array.

 

Not only does unRAID have to format any drive it will use, but formatting a drive that is already in the array also updates parity, so no parity sync is required. Formatting is a fairly quick operation, since it just writes the small amount of metadata that represents an empty filesystem.

Link to comment

Thank you.  I stopped the array, selected XFS for the new drive, restarted, and formatted the drive.  I am on 6.2.4, but I can't remember whether XFS was automatically selected and I overrode it with RFS.  It probably was, as all the other drives were RFS and I likely thought it was better to have them all on the same file system.  It wasn't until I stumbled upon this thread that I realized I should do something else.

Link to comment

Thanks for the responses trurl and garycase.  I had another question and was hoping someone could help.

I am currently moving the data from disk1 -> disk6 (the new disk) using rsync -avPX.  Then I will format disk1 as XFS and move the data back from disk6, again with rsync -avPX.  Next I want to do the same thing for disk2: move its data to disk6, format disk2 as XFS, and move it back.  The plan, with the exact commands sketched after the outline:

 

disk1 (RFS) -> disk6 -> disk1 (XFS)

 

then

 

disk2 (RFS) -> disk6 -> disk2 (XFS)
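
For reference, a sketch of the commands for the first round (assuming the standard /mnt/diskN mount points -- adjust to your own disks; the trailing slash on the source matters, since it copies the contents of the disk rather than creating a disk1 folder on disk6):

  # copy disk1's contents to disk6: -a archive mode, -v verbose,
  # -P progress/resumable transfers, -X preserve extended attributes
  rsync -avPX /mnt/disk1/ /mnt/disk6/

  # (stop the array, set disk1 to XFS, start the array, let unRAID format it)

  # then copy everything back
  rsync -avPX /mnt/disk6/ /mnt/disk1/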

 

My question is: do I need to delete all the data on disk6 before running rsync -avPX from disk2?  Or will that just copy over the existing data on disk6 (disk1's old data from the first transfer)?

 

Link to comment

I am currently moving the data from disk1 -> disk6 (the new disk) using rsync -avPX.  [cut]  My question is: do I need to delete all the data on disk6 before running rsync -avPX from disk2?  Or will that just copy over the existing data on disk6 (disk1's old data from the first transfer)?

You can add --delete to rsync to delete files from the target that don't exist on the source.
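
For example, a minimal sketch (same assumed /mnt/diskN paths as above):

  # copy disk2 to disk6, deleting anything on disk6 that is not on disk2
  # (i.e., disk1's old data from the first transfer)
  rsync -avPX --delete /mnt/disk2/ /mnt/disk6/

Adding -n (dry run) first will list what would be copied and deleted without touching anything.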
Link to comment

Thank you.  I stopped the array, selected XFS for the new drive, restarted, and formatted the drive. 

 

Just a point of reference.  I'm trying to copy about 1.8 TB from an RFS-formatted drive to an XFS-formatted drive, and I started the process last night around 7:30 pm.  It's now 6:13 am (let's say about 12 hours), and 742 GB (less than half) has been copied.  I'm using an ASRock A55M-HVS motherboard with an AMD A4-3300 APU (dual core, with Radeon HD graphics) and 4GB of RAM.  Memory usage is about 31% and CPU load varies between 15 and 40%.  The motherboard supports SATA II.  I don't remember the speed of the RAM; the board supports DDR3 2400+(OC)/1866/1600/1333/1066/800, and I probably bought middle-grade RAM.
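
(For the arithmetic: 742 GB / 12 h ≈ 62 GB/h ≈ 17 MB/s, so the full 1.8 TB at that rate works out to roughly 29 hours.)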

 

Based on this estimate, it could take 24 hours or so to copy each disk, and I have four disks to copy. 

Link to comment

RAM and CPU are irrelevant here; it's I/O bound.  Only the drive speeds, bus speeds, and time costs of the operations performed matter.  Deletions are very slow on ReiserFS -- not nearly as slow as the transfers, but another time cost.  It would be faster not to use --delete and to reformat instead (not a huge time savings, but a little).  Also, I don't think using the --delete option will remove the folder structures.  I would much prefer reformatting as the faster and cleaner way to delete everything and start with a fresh, clean file system.  Plus it gives you the chance to visually compare the two drives before reformatting one -- a sanity check to make sure they look identical (the right drive was copied, the right command was typed, it really did complete, etc.); a quick way to do that check is sketched after the steps below.

 

  disk1 -> disk6  (transfer and check)

  reformat disk1

  disk6 -> disk1  (transfer and check)

  reformat disk6
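
One rough way to do the "check" step (a sketch only -- this compares file counts and space used, not file contents, and the totals may differ slightly between file systems):

  find /mnt/disk1 -type f | wc -l   # number of files on the source
  find /mnt/disk6 -type f | wc -l   # should match on the copy
  du -s /mnt/disk1 /mnt/disk6       # space used should be close

For a content-level check, an rsync dry run with checksums (rsync -nrcv) is more thorough, just much slower.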

 

You have chosen probably the longest possible way to do this, at every step.  But it does preserve parity, safety, and your configuration.  It just seems redundant to me to copy it all twice.

Link to comment

It would be faster not to use --delete and to reformat instead (not a huge time savings, but a little).  [cut]

 

You have chosen probably the longest possible way to do this, at every step.  But it does preserve parity, safety, and your configuration.  It just seems redundant to me to copy it all twice.

 

Thanks for the response, and I appreciate the tip about formatting instead of --delete.  Part of the reason I am doing it this way is that the drives are physically mounted in my server, top to bottom, in order from parity to disk6.  They are front-loading, so I like to know the physical locations in case I ever need to replace a drive.  I don't use user shares and have different data on each disk (disk1 TV, disk2 Blu-ray, disk3 DVD, etc.), so I wasn't sure if I could simply reassign them afterwards to correspond to the physical locations in my server if I used the method you mentioned earlier.  Also, disk6 is new and a different model, and it will eventually become my parity drive once I am done with the copying.  Then my old parity drive will become disk4 and I will be back to a 5-data-drive server.  I was worried I would mess something up with all the reassigning and figured that, while longer, this was a safe approach.

 

I know there are easier ways to do this, and you had a really good example earlier -- that is where I saw the suggestion to use rsync -avPX.  But I barely make any changes to my server, and up until this week I was running v5.  I just use mine to back up my media, and the only reason I started making changes is that the extremely slow disk speeds on disk3 are making me go from RFS to XFS.  The disk is nearly full, and someone mentioned earlier that RFS can be very slow when full.  Time is really not an issue here, and I was just trying to take the simple approach for someone who is not an expert with unraid and didn't want to mess anything up.

 

 

Link to comment

... Time is really not an issue here, and I was just trying to take the simple approach for someone who is not an expert with unraid and didn't want to mess anything up.

 

Nothing at all wrong with that approach.  It does take longer; BUT it's all "computer time" and not "human time".  In other words, the whole process probably doesn't demand more than 5-10 minutes per disk of your attention ... perhaps an hour TOTAL of hands-on time to convert ALL of your disks.  All of the rest of the time is just letting the computer do its thing.

... so, as you noted, time is really not an issue.

 

 

Link to comment

RAM and CPU are irrelevant here; it's I/O bound.  Only the drive speeds, bus speeds, and time costs of the operations performed matter.  [cut]

 

Thank you.  In real-life terms, it's going to take me about 2-3 days for each of these.  I only have 4 disks of real data (plus one newly added disk that happened to be empty).  Given my time constraints, I estimate it'll take a month to do this, as I usually only have time on weekends.

 

One thing I don't understand is, given it could take days to copy and compare, how do you ensure the "old" drive isn't being overwritten?  In other words, I ran rsync yesterday morning (I couldn't check until today).  How do I know that a file on the "old" drive hasn't been modified since then?  Someone had recommended running two command prompts in Windows and using the dir command, which I did this morning.  As of this morning, I know the same number of files and bytes are in use on both drives, but if a file on the "old" drive was updated with the same number of bytes, I wouldn't know.  Or am I being too cautious?  (I'm about to erase the "old" drive, but I'm just wondering.)

Link to comment

One thing I don't understand is, given it could take days to copy and compare, how do you ensure the "old" drive isn't being overwritten?  [cut]

See for example reply 12 in this thread.
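
The key part of that command is the -c flag, which compares actual file checksums.  A minimal sketch, with real paths substituted for the placeholders:

  # -n dry run (report only, change nothing), -r recursive,
  # -c compare by checksum, -v list each file that differs
  rsync -nrcv /mnt/disk1/ /mnt/disk6/ > /boot/verify.txt

Because -c reads and checksums every file, even a file overwritten with different data of the same size will be flagged -- but only as of the moment the check runs.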
Link to comment

Assuming one's system is functioning normally, is there really any compelling reason to migrate to XFS at the moment?  I realize that RFS is no longer being developed, but I'm thinking that until I'm ready to replace a drive there's no real sense of urgency.  I currently have three 2TB data drives and two 2TB parity drives.  I have no free SATA ports on my motherboard and no expansion card, so it is not physically possible to connect another drive at present.  So I'm thinking that until I need more storage and want to invest in new hardware, it seems best to just leave well enough alone.  Your thoughts?

Link to comment

Assuming one's system is functioning normally, is there really any compelling reason to migrate to XFS at the moment?  [cut]

Until you experience issues, leave it be. Many people experience performance issues with Reiser, ranging from slow writes to complete lockups. If your system is running well, no need to switch.
Link to comment

Agree.  The old adage, "don't fix what ain't broken" is good advice.  If the system's working well with Reiser, there's no compelling reason to switch.

 

From my perspective, the only reason to change a drive from Reiser to XFS is if it's a very active drive that's also very close to full => Reiser tends to result in slow writes to the drive in that situation, whereas XFS doesn't slow down at all.    Otherwise, a Reiser drive will perform quite well.

 

Link to comment
Until you experience issues, leave it be.  [cut]

 

Agree.  The old adage, "don't fix what ain't broken" is good advice.  [cut]

 

Thank you, gentlemen.  I appreciate the feedback.  I will leave things be for now.

Link to comment

One thing I don't understand is, given it could take days to copy and compare, how do you ensure the "old" drive isn't being overwritten?  [cut]

See for example reply 12 in this thread.

 

That doesn't answer my question.  I did what is in post 12 (now in post 2).  But if you wait, say, a week after running "rsync -nrcv /mnt/[source] /mnt/[dest]/t >/boot/verify.txt", you can't be sure that a file wasn't changed on the source afterwards -- even one overwritten with data of the same size.  Or am I wrong?

 

OK, I'm going to complain again.  For those of us who use unraid as a NAS -- set it up and forget it until we have to do something -- there's no central repository of information that's easy and quick to understand and follow.  For instance, I'm following the information from post #2 above.  Post #2 does not say anything like "If you're not experiencing problems, you don't have to change file systems."  I saw the first post about corruption and simply thought I HAD to change to XFS.  Furthermore, there's no indication of what problems you would be experiencing.  This is especially true when searching via an Internet search engine: what I find are threads with many posts, some of them about versions before mine.  Do they apply?  I have no idea.  I have to wade through 50+ posts (like this thread) to figure out what I'm supposed to do, and many of them might not even apply.

 

For me, I had information on the cache drive, and the system was unresponsive.  I tried to shut it down from the console/keyboard and display at the server (it may have been shutting down, but it did not appear to be doing so), so I turned the server off and back on.  I lost about 10 hours' worth of work that was on the cache drive (and that I thought was also at least partially on the array, although I could not find it even after repairing the file system).  For me, as an attorney who gets paid for the work I actually do, that's a significant loss of income I had to completely make up.  I subsequently learned that I should have used additional commands to determine what was happening, but when you're trying to start work that morning, you HAVE to have a working system -- you have 15 minutes to decide what to do.  Now I've reconfigured the server and my setup to make it more secure (copy the cache to the array every hour, move all my work files to a share that does not use the cache, set up a Windows backup to another computer every hour, etc.).

 

Was that slowdown caused by RFS?  I don't know.  I've had my unraid server for many years, and I remember having to shut the system down like that only twice: once with no data loss, once with data loss.

 

So, am I wasting all this time converting from RFS to XFS?  I have no idea.  However, since I've started and am on drive 2 of 4, I might as well continue.  But there should at least be some easily accessible resource that tells me why I would want to do this and how.  You have the how in post 2, but not the why.

 

What I'm doing now is keeping notes about things like unraid and other infrequently performed tasks (e.g., opening and closing the pool, cooking Christmas dinner, etc.) and using them later to refresh my memory (and with unraid, there's a lot of refreshing -- I don't even remember that I'm "root" or how to telnet into the server).  The problem is I store these notes on -- wait for it -- the unraid server.  If the unraid server is unresponsive, my main recourse is an Internet search engine.  (I forgot that I do have two backups of the data on the server, one at home and one offsite.  I could have used the one at home, but I did not remember it.  I'll keep that in mind for the future.)

 

Anyway, if there's one thing that would improve unraid, it's a way to easily find information.  I literally stumbled on this thread, for instance.  Upgrading my system (the only reason I started this process) has taught me that I need to revisit my system and these boards more often than I used to.  I plan on reviewing these boards once a month now.

 

I don't want to end on a bad note.  I have used unraid since version 4.7, and 99.9999% of the time it's been great.  The recent improvements in version 6 have been awesome and make it much easier to use.  But if there were a way to make it easier to find information, that would be good.

Link to comment

OK, I'm going to complain again.  For those of us who use unraid as a NAS -- set it up and forget it until we have to do something -- there's no central repository of information that's easy and quick to understand and follow...

 

Anyway, if there's one thing that would improve unraid, it's a way to easily find information.  [cut]

Make sure you have Notifications set up if you are going to "set it and forget it". Notifications can alert you to a problem so you don't ignore it until it becomes worse.

 

Assuming you already know how to search the forum and wiki (see How to Search in my sig), that's about all we have. LimeTech is a very small company, and almost all support comes from unpaid volunteers on this forum. That's probably not going to change.

 

We're always interested in anyone who will pitch in and edit the wiki. ;)

Link to comment

Whoa. 420 posts on how to convert one's server to XFS?  :o

 

That's what I want to do, of course, but I don't suppose there's now a well-defined, "best practices" way to go about it that somebody can point to? I checked the wiki and FAQ, and didn't see anything.  :(

 

Most active forums (not just this one) badly need an editor for their first posts, to keep the best information up to date and at the front. It's a bit nuts to think a casual user is going to read through hundreds of posts, made over a period of years, to try and sift out information that should be a few paragraphs in a manual, FAQ, or wiki someplace.

Link to comment

Assuming one's system is functioning normally, is there really any compelling reason to migrate to XFS at the moment?  [cut]

Until you experience issues, leave it be. Many people experience performance issues with Reiser, ranging from slow writes to complete lockups. If your system is running well, no need to switch.

 

That is the reason I decided to convert to XFS.  Normally it would take me around 20 minutes to copy a 20GB file to my server, and recently it was taking a few hours because my disks were getting full.
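
(For scale: 20 GB in 20 minutes is about 17 MB/s, while 20 GB in 3 hours is under 2 MB/s -- roughly a tenfold slowdown.)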

 

It's a bit nuts to think a casual user is going to read through hundreds of posts, made over a period of years, to try and sift out information that should be a few paragraphs in a manual, FAQ, or wiki someplace.

 

I read through all the posts and am in the process of converting to XFS.  I am a very casual user, and up until a week ago I was running v5 of unraid.  There seem to be a lot of different approaches; I ended up following RobJ's instructions for the most part, with a few minor changes.

Link to comment

One thing I don't understand is, given it could take days to copy and compare, how do you ensure the "old" drive isn't being overwritten?  [cut]

 

That's a really important issue, one I originally underestimated.  I've added warning notes about it to my file system conversion post, at top and bottom in blue.  I've tried to list the common 'agents of change' that HAVE to be stopped or disabled.

 

I've also added a red note to indicate that the swap trick no longer works as of 6.2, as johnnie.black has reminded me several times.  This 6.2 change really disappointed me, as the swap trick made things so much easier.  For 6.2 I need to replace steps 10 through 12, but I don't have a test system to see what's on the screen and write up new steps.  I haven't quite known what to do about it, so I've left it alone for a while.

 

"RobJ's instructions" would be post #299 in this thread? https://lime-technology.com/forum/index.php?topic=37490.msg449941#msg449941

 

See, that's something that should be in a manual/wiki/FAQ....

 

I'll try to move it to its own wiki page, where more users can edit and improve it.  I can then generalize it and add sections about the available file systems, and when and why you might want to convert a file system (or not).  I do have a section about that ->  File systems

Link to comment
