Re: Format XFS on replacement drive / Convert from RFS to XFS (discussion only)


Recommended Posts

There was a ReiserFS problem in one of the V5 betas, caused by a kernel update, that Limetech had to patch; ReiserFS isn't a mainstream file system in Linux any more, so it doesn't get the testing it used to.  I WILL be updating all of my arrays for this reason at some point, but I'm not in a big hurry.  Each new kernel that Limetech picks up to fix other bugs introduces the possibility of a ReiserFS bug similar to the one in the V5 betas, and since ReiserFS isn't supported in all of the Linux distributions, it gets less testing.  XFS is a newer file system and is still used in most (all?) of the Linux distributions (including unRAID), so it should be better tested.  LimeTech is going to have to test very carefully if they want to keep supporting ReiserFS, and may have to patch it again like they did during the V5 betas.  Hopefully any bugs would be caught before future unRAID releases, but I would like to be on a newer, better-supported file system and not tempt fate too much.

Link to comment

Given the number of folks still on v5 (or even v4.7 for that matter) there's a VERY large number of UnRAID systems that are still using Reiser.    As these folks ultimately migrate to v6, that's a lot of RFS disks that will be moved forward.    The RFS bug in the v6 Beta (I forget which #) was found VERY quickly and indeed only caused problems in a very specific situation.    I'm fairly confident LimeTech will thoroughly test any kernel updates to ensure proper RFS support moving forward -- they simply have too many folks still using Reiser to not do so.

 

Users are FAR more likely to encounter the infamous "user share copy" bug -- which results in total data loss -- than a bug in Reiser support.    Indeed, I've seen a LOT of folks lose data in the past year because of that bug, which they encountered as they were trying to migrate their data from Reiser to XFS disks !!

 

I think there's been enough publicity about this that it's relatively unlikely now -- but as I noted above, there's simply no compelling reason to change.  ESPECIALLY if the Reiser disks are full and are effectively "read only" ... which a lot of them probably are in existing media servers.

 

Notwithstanding that, I agree that NEW disks added to an array should be formatted with XFS.    But if you then want to move data off of an older Reiser-formatted disk, be sure you do it VERY cautiously and that you understand the details of the user share copy bug ... otherwise you'll lose all of the data !!
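For anyone doing that move, a rough sketch of the cautious approach (disk numbers are just examples): work only with the disk shares, and never mix a /mnt/user path and a /mnt/diskX path for the same files in one copy, since that mix is what triggers the user share copy bug as I understand it.

# Copy from the old Reiser disk to the new XFS disk, disk paths only:
rsync -av --progress /mnt/disk1/ /mnt/disk2/

# Verify the copy before deleting anything from the source disk:
diff -r /mnt/disk1 /mnt/disk2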

 

 

 

Link to comment

The RFS bug was nasty, as it could affect disks even if no writes occurred to them. The amount of silent corruption caused by the bug is unknown; only someone with meticulous md5s or a complete backup would be in a position to know if and how much corruption occurred. As I recall, it was not found all that quickly.

 

Besides the fact that RFS is becoming far less popular, it was also developed at a time when disks were smaller, and at least some users (me included) have performance problems causing network timeouts as disks get full. XFS has 100% eliminated that issue on my system.

 

If you are running an old version of unRAID with no plans to update, there's no issue with keeping RFS. But if you are running 6.0+ and updating regularly, picking up new kernel versions, I wholeheartedly recommend converting to XFS.

Link to comment

FWIW my oldest server (full of media) has 16 disks ... parity plus 15 data.  One of the data disks is 99% full (I keep a few GB free on it, as that's where I keep the ONE file that changes as I update my media ... the DVD Profiler database backup); 12 of them are 100% full (less than a GB of free space on a 2TB drive); and the other 2 have a fair amount of free space (I recently updated this server to v6.1.3 and upgraded parity and these 2 drives to 4TB).

 

ALL of the drives are Reiser ... and as I have no intention of adding additional drives, they will remain that way [since any new capacity will be gained by replacing a 2TB with a 4TB drive].

 

On the other hand, my test server is all-XFS; and my backup server is mixed Reiser and XFS, since I updated it to v6.1.3 and as I add drives they'll be XFS.

 

Since I DO have "... meticulous MD5s ..." AND a complete set of backups, I CAN confirm that there's not been any corruption on my drives ... I just did a complete MD5 verification after moving to v6, "just for grins."  I doubt very much that Reiser support is going to be removed from any version of unRAID for a LONG time to come ... and it will be very well tested as unRAID evolves, since a very high percentage of users still have RFS data disks.
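For anyone who wants the same peace of mind, here's roughly how a checksum list can be built and later verified for one disk (the disk number and the flash location are only examples):

# Build a checksum file for every file on disk1, stored on the flash drive:
find /mnt/disk1 -type f -exec md5sum {} + > /boot/disk1.md5

# Later, verify the disk against the stored list and show only the failures:
md5sum -c /boot/disk1.md5 | grep -v ': OK$'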

 

Certainly nothing "wrong" about migrating disks to XFS ... but I certainly don't think there's any "compelling reason" to do so -- which was the question.

 

 

 

 

Link to comment

hmm...  sounds like there are reasons for and against.. I think maybe I'll just do my one pre-existing 4TB RFS drive.  It's only 75% full..  The rest are smaller drives (2 and 3TB) and are full.  I usually don't keep updating the OS so if I'm stable now I should be good.  As I replace the smaller drives, I'll make them XFS.

 

Jim

Link to comment

hmm...  sounds like there are reasons for and against.. I think maybe I'll just do my one pre-existing 4TB RFS drive.  It's only 75% full..  The rest are smaller drives (2 and 3TB) and are full.  I usually don't keep updating the OS so if I'm stable now I should be good.  As I replace the smaller drives, I'll make them XFS.

One thing I can confirm from my own experience is that if you get severe file-system-level corruption and have to run the appropriate recovery tool (reiserfsck or xfs_repair), you are much more likely to experience significant data loss using XFS, as reiserfsck does an amazing job of recovery.  Against that, I expect XFS may be less likely to experience such corruption in the first place.

Link to comment

One thing I can confirm from my own experience is that if you get severe file-system-level corruption and have to run the appropriate recovery tool (reiserfsck or xfs_repair), you are much more likely to experience significant data loss using XFS, as reiserfsck does an amazing job of recovery.  Against that, I expect XFS may be less likely to experience such corruption in the first place.

I wondered about that.  I had to recover a ReiserFS drive and got back 90% of my files after accidentally assigning a completely full cache drive as parity and letting the parity sync run for about 5-10 minutes before I caught it.  In other words, the corruption I experienced would have been the same for either ReiserFS or XFS.  That is bad news; maybe I won't switch my drives now.  I was hoping XFS recoveries would be better than ReiserFS, not worse.  What I expected to be worse was btrfs.
Link to comment

One thing I can confirm from my own experience is that if you get severe file-system-level corruption and have to run the appropriate recovery tool (reiserfsck or xfs_repair), you are much more likely to experience significant data loss using XFS, as reiserfsck does an amazing job of recovery.  Against that, I expect XFS may be less likely to experience such corruption in the first place.

I wondered about that.  I had to recover a ReiserFS drive and got back 90% of my files after accidentally assigning a completely full cache drive as parity and letting the parity sync run for about 5-10 minutes before I caught it.  In other words, the corruption I experienced would have been the same for either ReiserFS or XFS.  That is bad news; maybe I won't switch my drives now.  I was hoping XFS recoveries would be better than ReiserFS, not worse.  What I expected to be worse was btrfs.

I think with XFS you might have lost all your data in your scenario.  The xfs_repair tool does not seem to have an option equivalent to reiserfsck --scan-whole-partition that looks for file fragments that can be recovered.  I assume this is because there is not enough redundancy in the file system for such fragments to be properly identified and used in recovery.  It simply repairs starting from the superblock, fixing any 'bad' links by chopping off whatever they originally pointed to (thus leading to data loss).  The only thing I have found that it has that reiserfsck does not is the ability to scan the disk looking for backup copies of the superblock, in case you happen to corrupt the one at the start.
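For reference, the two tools being compared look roughly like this when run against an unRAID array disk (the md device number is only an example, and the array should be started in maintenance mode first):

# ReiserFS: check only, makes no changes:
reiserfsck --check /dev/md1
# ReiserFS: the deep-recovery option discussed above:
reiserfsck --rebuild-tree --scan-whole-partition /dev/md1

# XFS: check only, makes no changes:
xfs_repair -n /dev/md1
# XFS: actual repair; there is no whole-partition scan equivalent:
xfs_repair /dev/md1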
Link to comment

I wondered about that.  I had to recover a ReiserFS drive and got back 90% of my files after accidentally assigning a completely full cache drive as parity and letting the parity sync run for about 5-10 minutes before I caught it.  In other words, the corruption I experienced would have been the same for either ReiserFS or XFS.  That is bad news; maybe I won't switch my drives now.  I was hoping XFS recoveries would be better than ReiserFS, not worse.  What I expected to be worse was btrfs.

I think with XFS you might have lost all your data in your scenario.  The xfs_repair tool does not seem to have an option equivalent to reiserfsck --scan-whole-partition that looks for file fragments that can be recovered.  I assume this is because there is not enough redundancy in the file system for such fragments to be properly identified and used in recovery.  It simply repairs starting from the superblock, fixing any 'bad' links by chopping off whatever they originally pointed to (thus leading to data loss).  The only thing I have found that it has that reiserfsck does not is the ability to scan the disk looking for backup copies of the superblock, in case you happen to corrupt the one at the start.

Definitely sounds like I will not be switching any time soon.  I definitely want recovery over performance.  So I think I have officially changed my opinion.  Thanks.
Link to comment

I wondered about that.  I had to recover a ReiserFS drive and got back 90% of my files after accidentally assigning a completely full cache drive as parity and letting the parity sync run for about 5-10 minutes before I caught it.  In other words, the corruption I experienced would have been the same for either ReiserFS or XFS.  That is bad news; maybe I won't switch my drives now.  I was hoping XFS recoveries would be better than ReiserFS, not worse.  What I expected to be worse was btrfs.

I think with XFS you might have lost all your data in your scenario.  The xfs_repair tool does not seem to have an option equivalent to reiserfsck --scan-whole-partition that looks for file fragments that can be recovered.  I assume this is because there is not enough redundancy in the file system for such fragments to be properly identified and used in recovery.  It simply repairs starting from the superblock, fixing any 'bad' links by chopping off whatever they originally pointed to (thus leading to data loss).  The only thing I have found that it has that reiserfsck does not is the ability to scan the disk looking for backup copies of the superblock, in case you happen to corrupt the one at the start.

Definitely sounds like I will not be switching any time soon.  I definitely want recovery over performance.  So I think I have officially changed my opinion.  Thanks.

I switched to XFS and the system seems more stable since the switch.  However, I do have full offline backups, so I can always recover from those if I get bad corruption.
Link to comment

Do you have the Nerd Tools package installed?  screen is part of that package.

 

Once you do, you type "screen" to start a session.

Then inside screen you can press ctrl-a followed by another key to do various things.  For example, ctrl-a d will detach from the screen session (while keeping it running), "screen -r" will re-attach a session, and ctrl-a ? will list the key bindings.
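In other words, the basic flow is something like this (double-check the bindings with ctrl-a ? on your own system):

screen            # start a new session
# run your long job inside the session, then press ctrl-a followed by d to detach
screen -ls        # list detached sessions
screen -r         # re-attach to the detached session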

 

Jim

Link to comment

Hello, I am in the process of converting my drives to XFS.  I read almost every post here, but never really understood how people are dealing with converting their cache disk.  I have dockers and one cache-only share on there.  I can easily copy the data off to an array disk ... but after I format the cache, how do I get the data back without messing up my dockers?  Do I set all dockers to not auto-start until I can copy back the docker.img and config folders?

Link to comment

Hello, I am in the process of converting my drives to XFS.  I read almost every post here, but never really understood how people are dealing with converting their cache disk.  I have dockers and one cache-only share on there.  I can easily copy the data off to an array disk ... but after I format the cache, how do I get the data back without messing up my dockers?  Do I set all dockers to not auto-start until I can copy back the docker.img and config folders?

I think you will have to disable the docker service itself, not each individual docker.
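Something along these lines should work; the paths are only examples and the exact menu locations may differ by version:

# 1. Settings -> Docker: disable the Docker service (stops all containers and releases docker.img)
# 2. Copy the cache contents to an array disk:
rsync -av /mnt/cache/ /mnt/disk1/cache_backup/
# 3. Stop the array, set the cache drive's file system to XFS, start the array, and format it
# 4. Copy everything back:
rsync -av /mnt/disk1/cache_backup/ /mnt/cache/
# 5. Re-enable the Docker service and any auto-starts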
Link to comment
  • 3 weeks later...

Hi,

 

I have 4x "WD Red" 3TB drives: 1 parity and 3 data drives.

 

I'm in the process of migrating from ReiserFS to the XFS file system. The problem is that I don't have spare space for that and need to use the parity drive as an alternative. But first, do I need to preclear the parity drive, or maybe format it to XFS?

 

thanks.

Link to comment

Hi,

 

I have 4x "WD Red" 3TB drives: 1 parity and 3 data drives.

 

I'm in the process of migrating from ReiserFS to the XFS file system. The problem is that I don't have spare space for that and need to use the parity drive as an alternative. But first, do I need to preclear the parity drive, or maybe format it to XFS?

 

thanks.

unRAID requires that any data drive added to a parity array be clear so parity will remain valid. No parity, no need to clear.
Link to comment

unRAID requires that any data drive added to a parity array be clear so parity will remain valid. No parity, no need to clear.

 

thanks trurl for your help.

 

So you mean that I only need to not assign the drive when I start the array, and then move data from one disk to this drive?

My understanding is that you intend to use your parity drive as a new data drive and run without a parity drive, correct?

 

If so, New Config without a parity drive and assign the parity drive to a data slot. When you start the array, unRAID will format that drive. Then you use it however you want as another data drive.

 

Obviously you won't have any parity protection. Do you have backups of anything important?

Link to comment
My understanding is that you intend to use your parity drive as a new data drive and run without a parity drive, correct?

 

If so, New Config without a parity drive and assign the parity drive to a data slot. When you start the array, unRAID will format that drive. Then you use it however you want as another data drive.

 

Obviously you won't have any parity protection. Do you have backups of anything important?

 

1- Yes, but only at the beginning; the parity drive becomes a new data drive, and then at the end, as the final step, I want to restore this drive as parity.

 

2- I have some backups, but not of all the other stuff that is important to me too.

Link to comment
2- I have some backups, but not of all the other stuff that is important to me too.
Don't mess with the server until you have backups of everything you don't want to lose. Copying data from drive to drive and changing formats is risky; there is a chance of typing a command wrong or not understanding the directions and erasing stuff by accident. Add to that the fact that you want to eliminate the single-drive-failure protection by invalidating parity in order to move stuff, and you have a recipe for disaster unless everything works perfectly.
Link to comment

2- I have some backups, but not of all the other stuff that is important to me too.
Don't mess with the server until you have backups of everything you don't want to lose. Copying data from drive to drive and changing formats is risky; there is a chance of typing a command wrong or not understanding the directions and erasing stuff by accident. Add to that the fact that you want to eliminate the single-drive-failure protection by invalidating parity in order to move stuff, and you have a recipe for disaster unless everything works perfectly.

 

Changing formats as described in this thread is not risky - except for the risk of human error, which is something under each person's control to mitigate.

 

I do not recommend running a non-parity-protected array unless the data of value is separately backed up. Having a parity-protected array backed up to another parity-protected array provides a lot of protection. In that configuration, I could not argue that removing parity from one of them adds a lot of risk, so long as the backup is occurring frequently enough to avoid losing newly added data.

Link to comment
