Re: Format XFS on replacement drive / Convert from RFS to XFS (discussion only)



Will this prevent corruption if a drive red balls during the copy?

 

I used rsync -av --progress --remove-source-files /mnt/diskX/ /mnt/diskY/ to convert my disks to XFS and one of them red balled during rsync, in my case it corrupted one file.

 

That is interesting behavior that I would not have expected. I am surprised it kept a transferred file that it didn't verify, which the documentation suggests should not happen.

I think the best course of action is to run two passes then.

 

rsync -av --progress /mnt/diskX/ /mnt/diskY/

This should transfer all the data.

rsync -avc --progress --remove-source-files /mnt/diskX/ /mnt/diskY/

This second pass should not transfer any data; instead it compares the checksums of the files at both locations and deletes each source file only if they match. It will take longer and is more I/O intensive since it has to generate checksums.

 

Of course, now you have the file duplication issue for a little while.

 

 

Link to comment

So how do I change the filesystem for a drive that is already in the array?  Do I have to remove it from the array and put it back in?  Just a little hesitant as I've not done this before.

 

thanks

david

 

This is a drive that you've copied the data from and are ready to format, correct? If not, do not format.

 

Here is what I did on 6b12.

 

1) Verify that your disk is empty and make sure you know which disk you want to format.

2) Stop the array.

3) On the list of disks tab, click on the disk name (Disk 1, Disk 2, etc.). (Make sure you pick the right one.)

4) On that disk's settings page there should be a drop-down for formatting; select XFS.

5) Start the array.

6) Hit Format (the button will appear below where you start the array).

7) Wait a bit, and it should be formatted and part of the array.
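Step 1 can also be double-checked from the command line. A small sketch (/mnt/diskX is a placeholder; set DISK to the real mount point):

```shell
#!/bin/sh
# Count regular files left on the disk; 0 means a format loses nothing.
# /mnt/diskX is a placeholder path; override DISK with your real mount point.
DISK="${DISK:-/mnt/diskX}"
COUNT=$(find "$DISK" -type f 2>/dev/null | wc -l)
echo "files remaining on $DISK: $COUNT"
```

Anything other than 0 means there is still data that a format would destroy.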

 

Link to comment

I'm on 6b12 as well.

 

I know the disk, disk1, that needs to be reformatted from RFS to XFS.

 

I don't understand step 4; where is the settings page for each disk?  I can see Settings -> Disk Settings, and that is set to XFS as the default.  But I don't see how to change a single disk.

 

Sorry for being obtuse,

david

Link to comment

I'm on 6b12 as well.

 

I know the disk, disk1, that needs to be reformatted from RFS to XFS.

 

I don't understand step 4; where is the settings page for each disk?  I can see Settings -> Disk Settings, and that is set to XFS as the default.  But I don't see how to change a single disk.

 

Sorry for being obtuse,

david

From the main page, click on the disk to get to its settings page
Link to comment

Good idea. I'll do that.

 

But note that the disk does not have to be empty, contrary to step 1. If you are confident that the files have been successfully copied to another disk, you can change its file system and reformat it; it will then be empty. Deleting terabytes of data from an RFS disk is pretty time consuming.

 

Even the command options given that remove files in the process are not necessary.

Link to comment
  • 2 weeks later...

I think I then just lost ~1TB of data, so let me make sure I read this right.

 

I had a drive red ball for a write error.

Stopped array, shutdown, swapped in a new precleared drive.

Added the new drive to that same slot, switched the file system to XFS (figured why not, perfect time to do this), and clicked the format button ("I really want to do this; this will format the new drive to XFS and add it to the array"). This completed, the array went back to a green status, and a parity check passed.

 

So in this process, did I basically lose everything that was on the previous drive, with Unraid just adding the new drive with no data and rebuilding parity around that?

 

If so that is unfortunate and confusing!

I still have the old drive and I am sure I can mount it outside of the array and get the data back, just had no idea that was the case.

Link to comment

I think I then just lost ~1TB of data, so let me make sure I read this right.

 

I had a drive red ball for a write error.

Stopped array, shutdown, swapped in a new precleared drive.

Added the new drive to that same slot, switched the file system to XFS (figured why not, perfect time to do this), and clicked the format button ("I really want to do this; this will format the new drive to XFS and add it to the array"). This completed, the array went back to a green status, and a parity check passed.

 

So in this process, did I basically lose everything that was on the previous drive, with Unraid just adding the new drive with no data and rebuilding parity around that?

 

If so that is unfortunate and confusing!

I still have the old drive and I am sure I can mount it outside of the array and get the data back, just had no idea that was the case.

This is not the first case of this happening. Something really needs to be done about the interface to keep people from doing this to themselves.
Link to comment

This is not the first case of this happening. Something really needs to be done about the interface to keep people from doing this to themselves.

 

So that would be a "yes".. Dammit!

 

Ok, will need to load the old drive outside of the array and copy my files back to the new drive.

I would have left it as RFS if I had known this to be the case. Honestly, I don't think this is well described in the messages in the GUI (or I am dumb, but I think I am a tad more technical than the average newbie).

Link to comment

This is not the first case of this happening. Something really needs to be done about the interface to keep people from doing this to themselves.

 

So that would be a "yes".. Dammit!

 

Ok, will need to load the old drive outside of the array and copy my files back to the new drive.

I would have left it as RFS if I had known this to be the case. Honestly, I don't think this is well described in the messages in the GUI (or I am dumb, but I think I am a tad more technical than the average newbie).

 

I am very optimistic you will get the data off the "failed" drive. If not, start a new thread, and many users should be able to help with recovery.

 

Your scenario was the reason I created this sticky thread, although it was covered in announcement threads too. People using betas have to stay current on the forums to get the necessary info to avoid known bugs and pitfalls.

 

IMHO, even a very basic understanding of how parity works would make it obvious that replacing a drive could not change its format.

 

If only we could protect ourselves from ourselves. ;)

Link to comment

Thank you, I think the data will be preserved just fine. I just wouldn't have had a clue this happened if you hadn't mentioned it (thank you BTW!).

 

I think, for ease given my learning curve, I will just mount the drive in a spare PC, load a live version of Linux, and copy the data over. Unless this is not recommended, it seems pretty straightforward.

Link to comment

 

If only we could protect ourselves from ourselves. ;)

 

Admittedly, we learn from these user errors and can hopefully modify the GUI to help prevent people from making the same mistake. You can't prevent it 100%, but adding some warning text or a checkbox might make it harder for users to mess up. As it stands now, it's way too easy to mess up.

Link to comment

I think, for ease given my learning curve, I will just mount the drive in a spare PC, load a live version of Linux, and copy the data over. Unless this is not recommended, it seems pretty straightforward.

It depends on whether the disk is mountable.  It is possible that there was file system corruption and something like reiserfsck might first be needed to correct that.

 

If you have a USB-SATA adapter/case then you could fix this issue on the unRAID system itself. 
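A guarded sketch of that check (reiserfsck comes with the reiserfsprogs package; /dev/sdX1 is a placeholder partition, and the read-only --check pass should always come before any repair option):

```shell
#!/bin/sh
# Run a read-only ReiserFS consistency check on an UNMOUNTED partition.
# /dev/sdX1 is a placeholder; never point this at a mounted filesystem.
DEV="${DEV:-/dev/sdX1}"
if command -v reiserfsck >/dev/null 2>&1 && [ -b "$DEV" ]; then
  # --check only reports problems; it makes no changes to the disk
  reiserfsck --check "$DEV"
  STATUS=$?
else
  STATUS=127
  echo "reiserfsck unavailable or $DEV is not a block device; skipping"
fi
```

Only if --check reports fixable corruption would you move on to a repair option, and even then a backup (or an image of the partition) first is the safe play.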

Link to comment

IMHO, even a very basic understanding of how parity works would make it obvious that replacing a drive could not change its format.

This is true, but I wouldn't be surprised if most unRAID users do not understand how parity works and don't care.

 

The user interface should not allow this to happen. When replacing a disk, there should not be any opportunity for the user to format a disk. If someone really needs to format a replacement instead of rebuilding, they can do a new config.

 

Link to comment

I followed this thread quite closely. Before even starting the RFS-to-XFS copy, I set the mover to a monthly schedule, to be invoked next in 30 days. I also shut down Docker. I have no backups going into unRAID and no plugins installed. I am using Beta 13 (did not run into the cache format bug).

 

I was able to get about 2TB of a 3TB RFS drive copied to a 3TB drive using screen and this command: "rsync -av --progress --remove-source-files /mnt/disk1/ /mnt/disk3/"

 

But then the process stopped, and all disks stopped spinning for more than an hour. So I shut down the screen session and started again using the same command. It never restarted and the screen is now stuck at

 

sending incremental file list

./

 

That's what the screen has looked like for a couple of hours.

 

How to finish?

Link to comment

I followed this thread quite closely. Before even starting the RFS-to-XFS copy, I set the mover to a monthly schedule, to be invoked next in 30 days. I also shut down Docker. I have no backups going into unRAID and no plugins installed. I am using Beta 13 (did not run into the cache format bug).

 

I was able to get about 2TB of a 3TB RFS drive copied to a 3TB drive using screen and this command: "rsync -av --progress --remove-source-files /mnt/disk1/ /mnt/disk3/"

 

But then the process stopped, and all disks stopped spinning for more than an hour. So I shut down the screen session and started again using the same command. It never restarted and the screen is now stuck at

 

sending incremental file list

./

 

That's what the screen has looked like for a couple of hours.

 

How to finish?

 

I'm not 100% sure why rsync is having trouble, but here are a few things you can try. You can add -n to do a dry run; this will tell you what still needs to be transferred. You can also make rsync more verbose, which should help you troubleshoot: -vvv will give you more information (too much IMO, but it might help you figure out what is causing rsync to hang).
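A tiny sketch of the dry-run idea, using throwaway directories (the file name is made up for illustration):

```shell
#!/bin/sh
# -n (dry run) reports what would transfer without copying anything.
SRC=$(mktemp -d); DST=$(mktemp -d)
echo data > "$SRC/pending.txt"
rsync -avn "$SRC/" "$DST/"   # lists pending.txt but does not copy it
```

Because nothing is written, a dry run is a safe way to see where rsync stalls before committing to the real transfer.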

 

 

 

Link to comment

I'm not 100% sure why rsync is having trouble, but here are a few things you can try. You can add -n to do a dry run; this will tell you what still needs to be transferred. You can also make rsync more verbose, which should help you troubleshoot: -vvv will give you more information (too much IMO, but it might help you figure out what is causing rsync to hang).

 

Thanks gundamguy. Verbosity did the trick. I noticed it got stuck on a certain, non-vital, directory. I deleted it and rebooted. Back to a peaceful rsync.

Link to comment

Just doing some reading here in preparation for my ultimate migration to v6.  I must say, I'm surprised there's not a simpler method.  Not that this is overly complicated, but there appears to be different schools of thought/little consensus/lots to keep in mind.

 

For me, I'm  wondering how to handle user shares that span multiple disks.  For example, if I'm copying from disk1 (RFS) to disk16 (XFS), do I later reassign what was disk16 to disk1, preserving my user share config, or do I keep records of all the changes, and ultimately make that change to the user share(s)?  If a particular user share spans 6 disks, do I just delete that user share until I'm done migrating all those disks to XFS, then create it again?  Or am I changing the config of the included disks in that user share 6 times?

 

Is any of this even worth the hassle?

 

Thanks!

 

 

Link to comment

Just doing some reading here in preparation for my ultimate migration to v6.  I must say, I'm surprised there's not a simpler method.  Not that this is overly complicated, but there appears to be different schools of thought/little consensus/lots to keep in mind.

 

For me, I'm  wondering how to handle user shares that span multiple disks.  For example, if I'm copying from disk1 (RFS) to disk16 (XFS), do I later reassign what was disk16 to disk1, preserving my user share config, or do I keep records of all the changes, and ultimately make that change to the user share(s)?  If a particular user share spans 6 disks, do I just delete that user share until I'm done migrating all those disks to XFS, then create it again?  Or am I changing the config of the included disks in that user share 6 times?

 

Is any of this even worth the hassle?

 

Thanks!

I am just doing a new config on the array and reassigning the drives back to their original locations.  So if I copy from disk1 to disk16, then I do a new config and swap disk1 and disk16.
Link to comment

Just doing some reading here in preparation for my ultimate migration to v6.  I must say, I'm surprised there's not a simpler method.  Not that this is overly complicated, but there appears to be different schools of thought/little consensus/lots to keep in mind.

 

For me, I'm  wondering how to handle user shares that span multiple disks.  For example, if I'm copying from disk1 (RFS) to disk16 (XFS), do I later reassign what was disk16 to disk1, preserving my user share config, or do I keep records of all the changes, and ultimately make that change to the user share(s)?  If a particular user share spans 6 disks, do I just delete that user share until I'm done migrating all those disks to XFS, then create it again?  Or am I changing the config of the included disks in that user share 6 times?

 

Is any of this even worth the hassle?

 

Thanks!

 

I believe that, the way user shares work, this isn't truly an issue. If you copy disk1 to disk16, data that was stored under /mnt/disk1/Movies and is now at /mnt/disk16/Movies will still show up under /mnt/user/Movies. I think this is true even if, say, under the Movies share settings you exclude disk16. The exclude/include options control which disks data is written to and don't affect the propagation of that data to the user share, or at least that is my understanding.

 

That said, if you are using the exclude/include options for your shares, you will need to go back and change which disks you are excluding/including to ensure that any new data added to the array goes to the right disk.

Link to comment

Hey Everyone

 

I'm in the process of transitioning my drives to XFS. I have copied all of the content of my original drive to the new XFS drive according to step 7. When I ran the "rsync -nrcv /mnt/[source] /mnt[dest]/t >/boot/verify.txt" command indicated in step 8, the output file appears to contain a list of all of the files on the original disk. I don't believe this is right, as several posts indicate this file should be empty or contain very little information. I wanted to check whether I am missing anything before I delete the files on the original drive and reformat it. Is this the proper way to verify the files are the same, is there another process I should use, or does the output indicate that the copy wasn't successful? Any help will be greatly appreciated. Thanks
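For reference, here is what a successful checksum verification looks like on a toy example (throwaway directories stand in for the real disks, and the file name is made up). One thing worth checking in the real command: rsync treats a source with and without a trailing slash differently; without the slash, it compares against a subdirectory on the destination, which makes every file look untransferred:

```shell
#!/bin/sh
# When two trees are identical, a checksum dry run (-nrcv) lists no files.
SRC=$(mktemp -d); DST=$(mktemp -d)
echo same > "$SRC/a.txt"
cp "$SRC/a.txt" "$DST/a.txt"
OUT=$(rsync -nrcv "$SRC/" "$DST/")
echo "$OUT"    # header/summary lines only; a.txt should not appear
```

If the verify output lists every file, either the copies genuinely differ or the two paths being compared don't line up.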

Link to comment

To my knowledge, the following command is a simplified way to accomplish steps 6, 7 & 8 in one pass while also preserving permissions, timestamps, owners, groups, symlinks, and device files using rsync.

 

rsync -av --progress --remove-source-files /mnt/diskX/ /mnt/diskY/

 

I've been running my upgrades this week using your method.  This has been working perfectly for me.  I am down to my last drive, which is running now.  I turned off all Dockers and have been letting it run without issue.  I do miss using my Plex server, but I should be back up and running tomorrow in full force.

 

Link to comment
