Moving from ReiserFS to XFS, best way to do it?



Hi

 

I'm planning to move to XFS while I have the storage available to do so.

 

I posted a screenshot so you can see the current setup and understand what I'm talking about.

 

Solution 1:

Right now I could move everything from Disk2 to Disk1. Then format Disk2 to XFS. Then move everything from Disk1 to Disk2. Then format Disk1 to XFS.

But this would leave me with a full Disk1, with the data not evenly split between the drives.

 

Solution 2:

Turn the parity disk into Disk3. Move everything from Disk1 and Disk2 to Disk3. Format Disk1 and Disk2 to XFS. Then move everything from Disk3 back to Disk1 and Disk2 so the data is evenly split.

 

 

 

This is just what I'm thinking; I really don't know what the best way is.

 

I read this: http://lime-technology.com/wiki/index.php/Transferring_Files_Within_the_unRAID_Server

 

So I know about the commands to use. I'm thinking of running "cp -r /mnt/disk# /mnt/disk#" locally on my server because it's safer to copy than to move. After the copying process, I would just have to verify the data and then delete the content from the drive I copied from?

 

Please guide me through this process. I'm planning to do this in the coming week as I'm getting a new mobo+RAM+CPU.


Link to comment

Hi

 

I'm planning to move to XFS while I have the storage available to do so.

 

I posted a screenshot so you can see the current setup and understand what I'm talking about.

 

Solution 1:

Right now I could move everything from Disk2 to Disk1. Then format Disk2 to XFS. Then move everything from Disk1 to Disk2. Then format Disk1 to XFS.

But this would leave me with a full Disk1, with the data not evenly split between the drives.

There is nothing wrong with this approach, but as you say it requires that all the data fits on one disk. Also, if you want the data more evenly split, then once both disks are in XFS format you would need to selectively copy data back from the full disk to the one with more free space.

 

Solution 2:

Turn the parity disk into Disk3. Move everything from Disk1 and Disk2 to Disk3. Format Disk1 and Disk2 to XFS. Then move everything from Disk3 back to Disk1 and Disk2 so the data is evenly split.

Nothing wrong with this approach either.  It solves the problem of not having enough free space to get all the data off one disk by temporarily using the parity disk as a staging area.  As long as you format disk3 as XFS, at the end you can make it one of the two data disks and use the last emptied data disk as the parity disk going forward.  Note, however, that if you want to put disk3 back as a parity disk when you have finished, unRAID does not give you an easy way to reduce the number of data drives in use.  You would need to run the 'New Config' option from the Tools menu to wipe the current disk assignments and then reassign the data and parity disks.  Data already on the disks survives this process, but you need to be very careful at this stage to get the assignments right, as accidentally assigning a data disk as a parity disk would destroy its contents.

 

So I know about the commands to use. I'm thinking of running "cp -r /mnt/disk# /mnt/disk#" locally on my server because it's safer to copy than to move. After the copying process, I would just have to verify the data and then delete the content from the drive I copied from?

If you decide to use the 'cp' command, you need to add the -p option to preserve the permissions and ownership of files.  If you do not, you would need to run the New Permissions tool at the end.
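To make the -p point concrete, here is a throwaway sketch using temporary directories rather than real array disks (the file name and modes are invented for the demo):

```shell
# Demo: without -p the copy's mode is filtered through the umask;
# with -p the source's exact mode (plus timestamps/ownership) survives.
umask 022
src=$(mktemp -d) && dst=$(mktemp -d)
echo "data" > "$src/file"
chmod 777 "$src/file"

cp -r  "$src/file" "$dst/plain"      # plain copy: 777 masked by 022 -> 755
cp -rp "$src/file" "$dst/preserved"  # -p keeps the original 777

mode_plain=$(stat -c '%a' "$dst/plain")
mode_preserved=$(stat -c '%a' "$dst/preserved")
rm -rf "$src" "$dst"
```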

 

I personally prefer 'rsync -a' over 'cp -rp' because if you get interrupted, rerunning the command skips files that were already successfully copied, while the 'cp' command does not.  You also have the option with rsync of adding the --remove-source-files parameter to remove files after they have been successfully copied (although it does NOT remove any empty directories left behind).
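A small sketch of that behaviour on throwaway directories (real usage would target the /mnt/diskX paths instead; the "movies" name is made up):

```shell
# rsync -a copies recursively and preserves permissions/owners/timestamps;
# --remove-source-files deletes each source file once copied cleanly,
# but leaves the emptied directory tree behind.
src=$(mktemp -d) && dst=$(mktemp -d)
mkdir -p "$src/movies"
echo "film" > "$src/movies/a.mkv"

rsync -a --remove-source-files "$src/" "$dst/"

copied=$([ -f "$dst/movies/a.mkv" ] && echo yes || echo no)
source_file_gone=$([ -e "$src/movies/a.mkv" ] && echo yes || echo no)
empty_dir_left=$([ -d "$src/movies" ] && echo yes || echo no)

# the leftover empty directories have to be cleaned up separately:
find "$src" -mindepth 1 -type d -empty -delete
rm -rf "$src" "$dst"
```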


Hi

 

I'm planning to move to XFS while I have the storage available to do so.

 

I posted a screenshot so you can see the current setup and understand what I'm talking about.

 

Solution 1:

Right now I could move everything from Disk2 to Disk1. Then format Disk2 to XFS. Then move everything from Disk1 to Disk2. Then format Disk1 to XFS.

But this would leave me with a full Disk1, with the data not evenly split between the drives.

 

Solution 2:

Turn the parity disk into Disk3. Move everything from Disk1 and Disk2 to Disk3. Format Disk1 and Disk2 to XFS. Then move everything from Disk3 back to Disk1 and Disk2 so the data is evenly split.

 

This is just what I'm thinking; I really don't know what the best way is.

 

I read this: http://lime-technology.com/wiki/index.php/Transferring_Files_Within_the_unRAID_Server

 

So I know about the commands to use. I'm thinking of running "cp -r /mnt/disk# /mnt/disk#" locally on my server because it's safer to copy than to move. After the copying process, I would just have to verify the data and then delete the content from the drive I copied from?

 

Please guide me through this process. I'm planning to do this in the coming week as I'm getting a new mobo+RAM+CPU.

 

Both of your approaches work, but the first one is better. If you like, you could add another step to move half of the files from disk1 to disk2 after disk2 is formatted as XFS. But if you are adding data routinely, you could start adding to disk2 and eventually the data will even out. For me at least, there is no good reason to have the data split 50-50. I actually try to fill up disks before starting to use the next one, so your problem would be my solution. :)

 

The second approach will require breaking parity protection, so if something goes wrong and a disk fails during the conversion, you could lose data. Even if you don't consider this a great risk (which it probably isn't, but why risk it), there is another reason. After your second approach, you would be confronted with making disk3 parity again, and then having to rebuild parity - a time-consuming task that would eat up most of the time saved by the second approach. If you were converting 10 disks, the time savings would pay for the parity rebuild time 10 times over and there would be a better argument for option 2 - which could then be completed in 1/4 or 1/3 the time of option 1.

 

That's the way I see it anyway.

 

(As I was writing, itimpi responded - he has a couple of other interesting twists. Note that adding the "n" option to cp (i.e., -rpn or -rpvn) makes it skip already-copied files, so it too is resumable. :))


Note that adding the "n" option to cp (i.e., -rpn or -rpvn) makes it skip already-copied files, so it too is resumable. :)

The one problem is that if you have a partially copied file, I think the -n option skips that file as well (there may well be other cp options that avoid the problem).  rsync handles partially copied files automatically.

Okay, I have read what you all have to say about this. First of all, thank you for taking the time to help me!

 

So my conclusion based on everything you have said so far:

 

I will go with Solution 1, which most of you seem to prefer/recommend.

I also decided to skip splitting the content evenly because, as "bjp999" said, if I continuously add data (which I do) the disks will eventually even out.

 

So now that I know which process I will follow, the remaining question is which command to use. I read what you have suggested, but I'm really new to this stuff.

 

So if I got it right, I would run this exact command locally on my server, "rsync -a /mnt/disk2 /mnt/disk1", to move everything from disk2 to disk1. Then format disk2 to XFS.

Then use "rsync -a /mnt/disk1 /mnt/disk2" to move everything from disk1 to disk2. Then format disk1 to XFS.

 

If I wrote the commands wrong, please correct them.

 

Also, using this command, can I see a progress bar with a percentage, or do I need to check the disks manually from "Network" on my PC?


I would recommend against using "--remove-source-files", as you will be formatting the drive at the end anyway. There is no need to delete from the originating drive. That way you can compare the sizes of the two drives just before you format, to make sure they are about the same. That's the way I'm doing it, but I am using this command:

 

rsync -ac --progress /mnt/user/disk/* /mnt/user/disk2/

 

 


I would recommend against using "--remove-source-files", as you will be formatting the drive at the end anyway. There is no need to delete from the originating drive. That way you can compare the sizes of the two drives just before you format, to make sure they are about the same. That's the way I'm doing it, but I am using this command:

 

rsync -ac --progress /mnt/user/disk/* /mnt/user/disk2/

 

Yeah, that makes sense. But you wrote -ac instead of -av; what's the difference between the two?


I worry about all these posts on rsync recently. It's just a matter of time before a slip causes someone to delete critical data.

 

We've all done it :(

 

I read a few threads on the forums; it seems like many have used this rsync command without any issues. Is it really that risky?


I worry about all these posts on rsync recently. It's just a matter of time before a slip causes someone to delete critical data.

 

We've all done it :(

 

I read a few threads on the forums; it seems like many have used this rsync command without any issues. Is it really that risky?

rsync can be considered an enhanced variant of cp/mv, so it is no riskier than using those commands!


IMHO it's riskier than cp/mv in two ways:

 

1. It's cryptic, with a load of switches

2. People are using it en masse for XFS migrations, etc.

 

My worry is that use of SSH and rsync is technically out of scope for unRAID. Sure, that sounds silly, but some of the rsync posts I have seen (not this one) ask questions that make me certain users are just copying and pasting without understanding what they are really doing... and that's fine, because they are using a NAS, not Linux... but it's risky.

 

The crux of this is that, with the introduction of new filesystems, unRAID should have an official way to migrate SAFELY.

 

Failing that, we should keep people away from rsync and use the diskmv script. It's safer and tested.


The crux of this is that, with the introduction of new filesystems, unRAID should have an official way to migrate SAFELY.

I would have thought the official way to do it safely is to do it over the network between disk shares?  That avoids any use of the command line.

 

Also, do not forget there is no NEED to move to XFS, and new users will find they are on XFS by default anyway.


The crux of this is that, with the introduction of new filesystems, unRAID should have an official way to migrate SAFELY.

I would have thought the official way to do it safely is to do it over the network and between disk shares?

 

Yeah, I thought about that, but really that's a procedure for just managing files. The special case of migrating files en masse between disks with different filesystems should ideally preserve timestamps, permissions, etc., which can't be done over CIFS.

 

It's not the end of the world, but this could have been handled better rather than pushing everyday users into reading man pages.


So I'm done with one drive, using this command:

 

"rsync -av --progress /mnt/diskX/ /mnt/diskY/"

 

I have verified and checked the data that was transferred; everything went smoothly without any errors.
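For anyone wanting to script that verification step before formatting the source drive, one option is a checksum dry run that reports nothing when the two trees match; sketched here on temporary directories (on the server the arguments would be the real /mnt/diskX/ and /mnt/diskY/ paths):

```shell
src=$(mktemp -d) && dst=$(mktemp -d)
echo "payload" > "$src/file"
cp -p "$src/file" "$dst/file"       # pretend this was the big transfer

# -r recurse, -n dry run, -c full checksum compare, -i itemize differences:
# identical trees produce no itemized output at all
diffs=$(rsync -rnci "$src/" "$dst/")

# diff -r as a second opinion: exit status 0 means the trees match
diff -r "$src" "$dst" && trees_match=yes || trees_match=no
rm -rf "$src" "$dst"
```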

 

But I will not be transferring to the XFS drive just yet, because this transfer made my CPU fan spin at 5400 RPM. I have the NAS in a storage room, but the fans were spinning so loudly you could hear them even with the door to the room closed.

 

My new mobo+CPU+RAM arriving this week runs with a passive cooler and a much lower TDP.

This will be the new set: http://www.zotac.com/products/mainboards/integrated-intel-cpu/zotac-nm10/product/zotac-nm10/detail/nm10-dtx-wifi/sort/starttime/order/DESC/amount/10.html

 

I'm also checking out some smaller mITX cases, as my current setup is mATX in a Fractal Design Node 804 case.

 



I also agree that there should be some user-friendly way to migrate to XFS instead of using these types of commands.

 

Maybe something that could be integrated in the Web GUI.

The only user-friendly way to do it now is through Windows network shares, but that is too slow when moving a lot of data.


I also agree that there should be some user-friendly way to migrate to XFS instead of using these types of commands.

 

Maybe something that could be integrated in the Web GUI.

Unfortunately, this is one of those statements that is easy to make, but very difficult (if not impossible) to implement in a robust way.

 

The ideal would be an in-place conversion, but I do not think a reliable tool that could achieve this is ever likely to materialise.

 

The only user-friendly way to do it now is through Windows network shares, but that is too slow when moving a lot of data.
I agree this is slow, but does that really matter?  It only really becomes important if the final release contains ReiserFS bugs that cannot be worked around.

 

On that basis there is no NEED to migrate to XFS - it is a user choice.  New users will find themselves on XFS unless they make a conscious decision to do otherwise.  Also, if any users add additional disk drives to their array, they will find that those are XFS by default as well.


Moving from RFS to XFS is a one-time activity. I do not think that LT has an obligation to provide a GUI utility unless they were discontinuing support for RFS, in which case it would be required.

 

I happen to disagree with the perspective that leaving well enough alone with existing RFS is acceptable. I think it is an accident waiting to happen. RFS was created years ago when drives were measured in gigabytes, not terabytes, and it just hasn't kept pace with disk size growth and Linux development. The author is in prison, and there is no serious maintenance or enhancement in the works. RFS is a dead-end file system that struggles with big disks. I am not saying upgrading is an emergency, but anyone planning to stay with it long term is making a mistake, IMO.

 

I just completed my last 2 disks this weekend and am happy to be done with RFS. The only good thing about it was its ability to recover data after shooting yourself in the foot. We'll have to see if XFS does as well.

 

I documented a sticky that provided clear instructions (that was my goal anyway) for RFS to XFS conversion. Several have suggested alternatives / enhancements. There is a community script. I think the key is to pick one and do it! Just take your time and verify your data before moving on to the next one.


I would suggest moving from ReiserFS to XFS if you have the time and space.

 

The last update performed by the maintainer let a silent corruption bug slip in.

This is the 2nd time a silent bug slipped in that had a potential for corruption.

The previous one was severe enough to corrupt metadata and/or files unrelated to the files last updated.

 

If you don't have a good backup plan and/or corruption detection plan, I would suggest moving away from ReiserFS.


I've been trying to read up as much as possible on this subject over the last few days, ever since I read that XFS is the new standard for version 6 going forward...

 

I was going to make the move from 5.0.5 to 6b14b this weekend until I started running into reports of issues with version 6 and ReiserFS (http://lime-technology.com/forum/index.php?topic=38434.0, http://lime-technology.com/forum/index.php?topic=38370.0 to reference a few).  In the end I decided to wait until I have a fresh drive or two to make the migration to XFS. 

 

At the very least I think LT could improve their documentation with regard to this major change and any potential issues that longtime users may run into.  An LT comment on the best practice for migration would also be helpful.  Glad this helpful community is being active on this.

  • 4 months later...

I documented a sticky that provided clear instructions (that was my goal anyway) for RFS to XFS conversion. Several have suggested alternatives / enhancements. There is a community script. I think the key is to pick one and do it! Just take your time and verify your data before moving on to the next one.

 

For those of us without the blessing of mind-reading - can you share the sticky/links?


I documented a sticky that provided clear instructions (that was my goal anyway) for RFS to XFS conversion. Several have suggested alternatives / enhancements. There is a community script. I think the key is to pick one and do it! Just take your time and verify your data before moving on to the next one.

 

For those of us without the blessing of mind-reading - can you share the sticky/links?

 

HERE

