chip Posted February 11, 2017

So after I move my data around:

   [source]  [dest]
1  disk3     disk4

to format disk3 to XFS, I will actually need to add it as a new disk, which will become disk5. Won't this throw off parity? Disk 1 would become disk6, etc.

No. It is not even possible to do what you suggest without setting a new config. Assuming you have already moved all the files off the disk: just stop the array, click on the disk to get to its settings page, choose the filesystem, and start the array to reformat it.

So after I've moved stuff from disk3 to disk4: stop the array, choose the XFS format for disk3, start the array, and so on?
dboonthego Posted February 11, 2017

So after I moved stuff from disk 3 to disk 4. Stop the array. Choose the XFS format for Disk 3 and start the array and so on?

Correct. Once you're sure you've moved your data off disk3.
chip Posted February 11, 2017

It also won't matter that my data is arranged on different disks afterwards? So in my example Disk1 data will now get moved to Disk3 after disk 3 is converted to xfs.
trurl Posted February 11, 2017

It also won't matter that my data is arranged on different disks afterwards? So in my example Disk1 data will now get moved to Disk3 after disk 3 is converted to xfs.

It depends on whether you have any settings that refer to specific drives. User shares can be set to include only certain disks or exclude specific disks, for example. It is also possible that you have docker settings specifying certain drives, etc. If you have settings that refer to user shares, such as docker volume mappings, then changing the disk number that those user shares reside on won't make any difference. You will have to consider your specific setup and deal with these if you move the files.
RobJ Posted February 12, 2017

I think it's time for a new read-only sticky post with the agreed-upon best practice for migrating off of ReiserFS. This thread should be referenced from that post and kept alive as ongoing support for people migrating, but I think a succinct summary is needed as a sticky instead of this thread.

+1 I'm ready to convert my fs too, and after reading this thread, I have a headache. I'm looking for the safest process with easy step-by-step instructions. I did read the instructions on the wiki, and they are good, but conflict with other advice in this thread. That is the only reason I have not started. I would be happy to see the experts agree on the best process to follow (how they would convert their fs) and put it in post #1.

You mentioned a wiki page, are you referring to this one -> File System Conversion

If so, I wrote most of it, but others have had a hand in improving it, and I'm perfectly willing to revise it further ... I welcome any suggestions. Can I ask what were the conflicts you saw?
optiman Posted February 12, 2017

I think it's time for a new read-only sticky post with the agreed-upon best practice for migrating off of ReiserFS. This thread should be referenced from that post and kept alive as ongoing support for people migrating, but I think a succinct summary is needed as a sticky instead of this thread.

+1 I'm ready to convert my fs too, and after reading this thread, I have a headache. I'm looking for the safest process with easy step-by-step instructions. I did read the instructions on the wiki, and they are good, but conflict with other advice in this thread. That is the only reason I have not started. I would be happy to see the experts agree on the best process to follow (how they would convert their fs) and put it in post #1.

You mentioned a wiki page, are you referring to this one -> File System Conversion

If so, I wrote most of it, but others have had a hand in improving it, and I'm perfectly willing to revise it further ... I welcome any suggestions. Can I ask what were the conflicts you saw?

Yes, that is it. "Conflicts" may not be the right word, as I see slightly different steps and suggestions in this thread that vary a little from the wiki page. So I wonder if the instructions are slightly out of date in terms of best practice. Given I'm ready to get started, I wanted the safest and fastest (easiest) steps to get my fs converted. If you needed to convert your fs today, would you follow the wiki exactly, or are there a few suggestions in this thread that you would use? Meaning: after reading through this, would you update your wiki page, or follow it as is, again assuming you needed to do this today? If it is good enough for you Rob, it's good enough for me.
SSD Posted February 12, 2017

I created a thread with a link to the wiki instructions, and renamed this thread to add "(discussion only)" to the end. The wiki thread does not allow posting (it is just a short statement and a link to the wiki article). Format XFS on replacement drive / Convert from RFS to XFS
trurl Posted February 12, 2017

I created a thread with a link to the wiki instructions, and renamed this thread to add "(discussion only)" to the end. The wiki thread does not allow posting (it is just a short statement and a link to the wiki article). Format XFS on replacement drive / Convert from RFS to XFS

Maybe it would be appropriate to unsticky this thread now.
SSD Posted February 12, 2017

Maybe it would be appropriate to unsticky this thread now.

Done - with the link in the other post, this will be easy enough to find.
chip Posted February 13, 2017

Ok so I am done with step 1, which was: move data from disk3 to disk4. Now I have stopped the array, reformatted disk3 as xfs, and started moving data from disk1 to disk3. Hopefully the data will be moved by tomorrow at some point, and I will probably start the rest of it next Friday.
RobJ Posted February 13, 2017

I think it's time for a new read-only sticky post with the agreed-upon best practice for migrating off of ReiserFS. This thread should be referenced from that post and kept alive as ongoing support for people migrating, but I think a succinct summary is needed as a sticky instead of this thread.

+1 I'm ready to convert my fs too, and after reading this thread, I have a headache. I'm looking for the safest process with easy step-by-step instructions. I did read the instructions on the wiki, and they are good, but conflict with other advice in this thread. That is the only reason I have not started. I would be happy to see the experts agree on the best process to follow (how they would convert their fs) and put it in post #1.

You mentioned a wiki page, are you referring to this one -> File System Conversion

If so, I wrote most of it, but others have had a hand in improving it, and I'm perfectly willing to revise it further ... I welcome any suggestions. Can I ask what were the conflicts you saw?

Yes, that is it. "Conflicts" may not be the right word, as I see slightly different steps and suggestions in this thread that vary a little from the wiki page. So I wonder if the instructions are slightly out of date in terms of best practice. Given I'm ready to get started, I wanted the safest and fastest (easiest) steps to get my fs converted. If you needed to convert your fs today, would you follow the wiki exactly, or are there a few suggestions in this thread that you would use? Meaning: after reading through this, would you update your wiki page, or follow it as is, again assuming you needed to do this today? If it is good enough for you Rob, it's good enough for me.

Because I couldn't remember any differences, I decided to look back a few pages.
But unless I missed something (and please tell me!), the only difference I saw was the special situation where several users wanted to convert drives but did not have a free drive or a free port to add one. So they had to have special instructions to work around that. But in thinking about it, their situation is just a special case of the regular situation. No matter what, to convert a drive you MUST move all the data off of it before you can format it with a new file system, then copy data to it. So their case just has an extra preliminary step: basically, picking one of the largest drives with the least amount of data, then either using unBalance or manually moving all of its data to the rest of the drives. At that point, they have freed up a drive, and therefore fall into the normal situation, and can use the same procedure as everyone else. There's no way around it, you HAVE to free a drive up! To convert your array, you must start with a free and empty data drive, whether you add it or juggle data around to create it. (I'll try to add this preliminary step to the wiki, for those who need it.)

So yes, if I were running v6.1, I would use the procedure in this post, and if I were running v6.2 or later, I would use the procedure in File System Conversion.

At the moment, chip is doing part of the wiki procedure, but what is different is that he's not restoring the converted drive back to the same logical slot, so he is running into the problem of the data being on different logical drives than the previous share settings expect, and possibly other drive references being wrong now. So chip is either going to have to correct all data and share references, or do a New Config to set all the data back on the original logical drives. Plus, if he is doing copies, he may see file duplication on the shares during the conversions. But chip should probably not listen to me! Because he's got a plan now, and it's working, and he might as well stick to it!
chip Posted February 13, 2017

Most of my share settings are basic. I don't believe I exclude any disks, except maybe in a backup I do. Also, I think my dockers refer to user shares, so they're OK as well. I will have to redo my monthly sync jobs, but that isn't an issue. disk1 is still copying over to disk3; hopefully it will finish up after work today. Looked like 60% moved out of 74%.
htpcguru Posted February 14, 2017

I followed the wiki and step 8 of the first Reiserfs disk is done. However, I notice on the unRaid main page that the used size of the destination XFS disk is larger than the source disk. On the ssh console:

root@UnRaid:~# du -s /mnt/disk3
1727843185  /mnt/disk3
root@UnRaid:~# du -s /mnt/disk4
2083663832  /mnt/disk4
root@UnRaid:~# df /mnt/disk3
Filesystem     1K-blocks       Used  Available Use% Mounted on
/dev/md3      3906899292 1727914992 2178984300  45% /mnt/disk3
root@UnRaid:~# df /mnt/disk4
Filesystem     1K-blocks       Used  Available Use% Mounted on
/dev/md4      3905110812 2083701224 1821409588  54% /mnt/disk4

Is this expected? Does XFS use more space than Reiserfs? Please see the attached screenshot. The red circle is the used size of the XFS disk (which should have the same content as disk3), and blue is the used size of the Reiserfs source disk3.
JonathanM Posted February 14, 2017

Is this expected? Does XFS use more space than Reiserfs?

Depends. Different allocation sizes, different reserved space, different filesystem. Most likely what you are seeing is perfectly normal. A different mix of file sizes will yield a different result.
SSD Posted February 14, 2017

Is this expected? Does XFS use more space than Reiserfs?

Depends. Different allocation sizes, different reserved space, different filesystem. Most likely what you are seeing is perfectly normal. A different mix of file sizes will yield a different result.

Although I agree with jonathanm in theory, I believe that your numbers are too far apart for that to be the reason. If these two disks are supposed to contain exactly the same files and data, 1.77TB vs 2.13TB = 360 GB. When I did my array, I remember the free space was sometimes bigger, sometimes less, but it was always a difference you could call rounding error (a few gigs maybe). This is much larger than anything I saw. I think something is wrong. Can you compare file counts? Or confirm that you have data on disk4 that is not supposed to be on disk3?
htpcguru Posted February 14, 2017

Although I agree with jonathanm in theory, I believe that your numbers are too far apart for that to be the reason. If these two disks are supposed to contain exactly the same files and data, 1.77TB vs 2.13TB = 360 GB. When I did my array, I remember sometimes free space was bigger, sometimes less. But it was always numbers that you could call rounding error (a few gigs maybe). This is much larger than anything I saw.

I have the same line of thinking - the difference is too much to consider normal.

I think something is wrong. Can you compare file counts? Or confirm that you have data on disk4 that is not supposed to be on disk3?

root@UnRaid:/mnt/disk3# find /mnt/disk3 -type f | wc -l
8109
root@UnRaid:/mnt/disk3# find /mnt/disk4 -type f | wc -l
8109
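Matching file counts are a good start; a stricter check is to compare per-file checksums between the two disks. A hypothetical sketch (the disk paths and output filenames are examples, and this can take hours on multi-TB disks):

```shell
# Checksum every file on each disk and compare the two sorted lists.
# Paths are examples; point them at your own source/destination disks.
(cd /mnt/disk3 && find . -type f -exec md5sum {} + | sort -k2) > /tmp/disk3.md5
(cd /mnt/disk4 && find . -type f -exec md5sum {} + | sort -k2) > /tmp/disk4.md5
diff /tmp/disk3.md5 /tmp/disk4.md5 && echo "contents match"
```

Any file that differs in content, or exists on only one disk, shows up in the diff output.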
htpcguru Posted February 14, 2017

If these two disks are supposed to contain exactly the same files and data, 1.77TB vs 2.13TB = 360 GB.

I believe I found where the problem is. Somehow, a vdisk1.img is copied with a much bigger size:

root@UnRaid:~# ls -l /mnt/disk4/vm_backup/Windows\ 10\ Workstation/
total 398458884
-rwxrwxrwx 1 root root 408021893120 Mar  1  2016 vdisk1.img*
root@UnRaid:~# ls -l /mnt/disk3/vm_backup/Windows\ 10\ Workstation/
total 40994716
-rwxrwxrwx 1 root root 408021893120 Mar  1  2016 vdisk1.img*

I know that KVM/Qemu sizes the vm disk dynamically, so that could explain why on disk3 the actual space used (total 40994716) is much smaller than the apparent size of 408021893120. However, I cannot explain why the copied image file is not the same size.
SSD Posted February 14, 2017

I believe I found where the problem is. Somehow, a vdisk1.img is copied with a much bigger size:

root@UnRaid:~# ls -l /mnt/disk4/vm_backup/Windows\ 10\ Workstation/
total 398458884
-rwxrwxrwx 1 root root 408021893120 Mar  1  2016 vdisk1.img*
root@UnRaid:~# ls -l /mnt/disk3/vm_backup/Windows\ 10\ Workstation/
total 40994716
-rwxrwxrwx 1 root root 408021893120 Mar  1  2016 vdisk1.img*

Would you expect this disk image to be 400G? That is frickin' huge! 40G seems more normal. Is the VM running? I would stop the VM and recopy this file over. Good catch! This is the type of thing you are looking for when you do sanity checks during the XFS conversion.
JorgeB Posted February 14, 2017

If the vdisk was sparse before, it's not now; that's the difference. You can make it sparse again by using:

cp --sparse=always /path/to/source.img /path/to/destination.img
SSD Posted February 14, 2017

If the vdisk was sparse before, it's not now; that's the difference. You can make it sparse again by using: cp --sparse=always /path/to/source.img /path/to/destination.img

This!
JorgeB Posted February 14, 2017

I believe rsync also has a sparse flag; users with vdisks on the array should use it.
htpcguru Posted February 14, 2017

If the vdisk was sparse before, it's not now; that's the difference. You can make it sparse again by using: cp --sparse=always /path/to/source.img /path/to/destination.img

Thanks for the tip. Now things are much better:

root@UnRaid:~# du /mnt/disk3/ -s
1727843185  /mnt/disk3/
root@UnRaid:~# du /mnt/disk4/ -s
1725260516  /mnt/disk4/

Moving on to the other Reiserfs disks...
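Before copying the remaining disks, it can be worth checking them for other sparse files that would need the sparse-aware copy. This assumes GNU find (as shipped with unRAID): the %S directive prints allocated size divided by apparent size, so heavily sparse files report well below 1. The disk path is an example:

```shell
# List files whose allocated size is smaller than their apparent size,
# i.e. sparse files that a naive copy would expand. Path is an example.
find /mnt/disk3 -type f -printf '%S\t%p\n' | awk '$1 < 1 {print $2}'
```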
SSD Posted February 14, 2017

root@UnRaid:~# du /mnt/disk3/ -s
1727843185  /mnt/disk3/
root@UnRaid:~# du /mnt/disk4/ -s
1725260516  /mnt/disk4/

This is similar to what I experienced. Off by a couple gigs.
chip Posted February 14, 2017

Ok so I am done with step 3, which was: move data from disk2 to disk1. This finished up right before leaving for work.

Later tonight:
- Stop the array
- Format disk2 as xfs
- Move data from disk4 to disk2, which will take a while as it is at 90% full

Moving along....
chip Posted February 15, 2017

Almost done moving data from disk4 to disk1.

Later tonight:
- Stop the array
- Format disk4 as xfs (last disk to format)
- Move data from disk1 that is labeled Disk3 to disk4

And that should be it. Then:
- Verify shares and excluded disks (don't think I set any up)
- Look at backup cron jobs and change where those point to