bombz Posted September 27, 2017

I am aware of the data rebuild; I am working on another concern with that. The 6TB in my set is parity. How can I copy or clone parity to XFS? Once I have a clone of parity, do I then need another empty drive formatted XFS?

So, if I am understanding this correctly, to convert my whole array to XFS:

01) Add empty drive (formatted XFS)
02) Move data from an RFS drive to the new XFS drive
03) Remove the RFS drive
04) Replace the removed RFS drive with the new XFS drive (swap spots)
05) Parity sync

Won't my parity think the RFS drive I removed is still an RFS drive when I try to sync? Since it is an XFS drive I am putting in that spot, doesn't parity go by the last known filesystem table or something like that?

On another note, rather than migrating all drives to XFS (converting the array), would it be a good idea, as I add or replace drives going forward, to have a mix of RFS and XFS drives in one array? I.e., the next drive added or replaced would be formatted XFS... or wait, can we not do that because parity won't rebuild the data properly, as it sees all current disks as RFS?
trurl Posted September 27, 2017

Parity doesn't have a filesystem. If you do everything correctly, parity will be maintained and won't need to be rebuilt.

43 minutes ago, bombz said:
01) Add empty drive (formatted XFS)
02) move data from RFS drive to new XFS drive
03) remove RFS drive

Change step 3: instead of removing the RFS drive, reformat it to XFS, and drop your steps 4 and 5 entirely:

01) Add empty drive (formatted XFS)
02) Move data from an RFS drive to the new XFS drive
03) Reformat the emptied RFS drive to XFS

Repeat 2 and 3 as needed. If you start with the drives with the most data and work your way down, you shouldn't have any problem copying each disk to the most recently formatted disk.
trurl Posted September 27, 2017

Have you actually read any of this thread? I know it is long, but a lot of it is repetitive. The basic concepts are probably covered on every page. Go back one page, study it, and see if you have any questions.
bombz Posted September 28, 2017

I did read it, and I keep looking it over. :-) The troublesome part right now is everything I am trying to do.

First off, I am trying to shrink the number of physical disks in my array. I am trying to replace disks (eventually) that are less than 3TB with 4TB drives. At this point, with 12 physical disks, the case is full (no more room to mount physical disks).

Given that situation, where you state to start with the largest data disk in the set (4TB) and work down to migrate to XFS, I was looking at starting with the smaller data disks and working that way. As you can see in my screenshot, I have some 500GB and 1.5TB drives. I could add (1x) 4TB disk and push all data from those 3 disks (or at least 2 of them) to the (1x) 4TB drive. However, after that is done, I want to remove them (2x 500GB and 1x 1.5TB) completely from the array.

What I am cautious about is unRAID thinking I still have 12 disks when I really want to have (9x) in total at that point. I don't know what parity would do in that case: whether it would say "hey, missing disks / parity not valid". I think that's the point where you have to start a new config?

I think I am being super cautious, but I know once I learn it, I will understand it better. If I can at least get down to (9x) disks, I can then add another parity drive (for dual parity) and continue my migration to XFS. I really appreciate the help.

01) Add empty drive (formatted XFS)
02) Move data from RFS drive to new XFS drive
03) Remove RFS drive / reformat RFS drive to XFS

^^^ These steps make sense and seem simple enough, but I have the added step of removing physical disks from the array. ^^^ I added "remove drive" (in the steps above) as I need to free up space in my case; a nice number to end up at is (10x) disks. Does that make sense, or am I overthinking things again (I do that sometimes)? Thank you :-)
trurl Posted September 28, 2017

1) Copy the small disks to other disks, remove them, New Config, and rebuild parity.
2) Add a new larger disk, format as XFS.
3) Copy an RFS disk to the XFS disk.
4) Reformat the RFS disk to XFS.

Repeat 3 and 4 as needed.
bombz Posted September 28, 2017

OK, I can try that method if the current disks in the array have enough room to move data to them. The other option is to take the current data off to an external source, remove the disks, add the new XFS disk, and copy the content back to the XFS disk over the network.

I wonder how much difference XFS is going to make. I hope a lot, performance-wise. Thanks :-)
Frank1940 Posted September 28, 2017

I believe you can also use the 'Unassigned Devices' plugin and plug the removed drives into a USB housing connected to a USB port on your server to do the copy locally rather than over the network. This would save the time required to copy to a second device.
bombz Posted September 28, 2017

Hm, interesting. Would that plugin read NTFS? That would save a lot of time, just plugging the drive into a USB dock. I think this is the best method for me (what I am thinking, anyway):

01) Copy data from array (2x 500GB + 1x 1.5TB) to an external source
02) Remove disks
03) Run New Config
04) Parity sync/rebuild
05) Add new disk (format XFS)
06) USB dock: copy data back to the XFS disk
07) Parity sync

Then repeat as I free up more space in the case to add physical disks.
trurl Posted September 28, 2017

11 minutes ago, bombz said:
06) USB dock: copy data back to the XFS disk
07) Parity sync

Why do you think a parity sync is needed at step 7?
Frank1940 Posted September 28, 2017

6 minutes ago, bombz said:
01) Copy data from array (2x 500GB + 1x 1.5TB) to an external source

What I would propose is to remove these three disks and set them aside. Then proceed with the other steps, except that you would mount these three disks (one at a time) in the USB external housing used for the copy in step 06. It has been some time since I did my conversion, but if you follow the steps correctly, step 07 is merely a verification that parity is correct. (This assumes that the first XFS-formatted disk is large enough to store all of the files from these three disks plus the next disk to be converted.)
bombz Posted September 28, 2017

Ah, I see what you're saying. I think your process is the 'reverse' of mine:

01) Add new pre-cleared disk (4TB, formatted XFS) = 13 disks in array
02) Remove (2x 500GB, 1x 1.5TB)
03) Run 'New Config'
04) With the 2x 500GB and 1x 1.5TB out of the array, dock them via USB (attached to the server) and copy the data back to the 4TB (XFS)
05) Parity sync (for good measure)
06) Done? I think...

I can't mount disk 13 in the case properly until I physically remove the other disks (it will be a temporary 'loose' hookup). On another note, I can have a mix of XFS and RFS filesystems in the array, correct? Now to read up on how to format the pre-cleared disk as XFS rather than RFS (which has been my default for years).
Frank1940 Posted September 28, 2017

7 hours ago, bombz said:
01) Add new pre-cleared disk (4TB, formatted XFS) = 13 disks in array
02) Remove (2x 500GB, 1x 1.5TB)

Reverse these two steps; then you will have room in the case for the new drive. After the 'New Config' step, you are ready to convert. This is the procedure I followed: https://wiki.lime-technology.com/File_System_Conversion#Mirroring_procedure_to_convert_drives

I made a printout of this portion of the instructions and studied it until I completely understood how the procedure worked, and formulated in my mind how I was going to do it. Then I made a paper table with all the steps that had to be done for each disk and checked off each step as I completed it.

If you look carefully at the procedure, you will see that only one parameter changes in the rsync command between rounds. The shell used at the command line has extensive built-in editing features, which makes editing the commands a very simple task. (Basically, the up/down/left/right arrow keys are your friends here...)
bombz Posted September 28, 2017

You have been very patient with me and all my questions. I suppose it is a matter of getting my head wrapped around it, because there is that nagging thought that you don't want to lose data or mess up the array. I will follow your recommendation, print things off, and make a game plan for when I am ready. I have (1x) 4TB pre-clearing now, 38 hours in, and another 4TB ordered in the mail, which will also be pre-cleared.

I don't know exactly when I will get to the filesystem changeover, as I have been working on a rebuild this past week with some questionable disks (sigh). I suppose after the filesystems are all changed over (whenever that is), I will then add a second 6TB parity drive. Am I right in assuming that when running dual parity, both drives have to be the same size (a match)?

Thank you for all the responses! I really appreciate the assistance, truly!
Frank1940 Posted September 28, 2017

18 minutes ago, bombz said:
I am assuming when running dual parity they both HAVE TO BE the same size (a match)?

No, not in theory anyway. But the second parity drive (or the first, for that matter) must be as large as or larger than the largest data drive. In this case, since you already have a 6TB parity drive and you are out of space for more drives in the case, it would make much more sense to go with a 6TB drive for that second parity. As I recall, you could actually use an 8TB drive for the second parity if you wanted to. (Some folks have gotten good deals on 8TB drives by shucking them from 8TB USB enclosures.)

About the time required for the conversion: I seem to recall it took about 2-3 hours per TB (depending a bit on file sizes) to copy the data over using rsync, so the task is not one that takes forever. The other housekeeping tasks take about thirty minutes per disk. Both of my servers were converted over about a two-week period.
bombz Posted September 28, 2017

OK, thanks, man. Another thing I have been noticing with the current setup is terrible performance when writing to the unRAID array from Windows. I am using TeraCopy for file transfers. When I go to push/write a file, the Windows mapped network drive takes a while to start (progress bar in the file path) and then will sometimes time out. I then start another copy of the same file (the one that timed out) and the file(s) transfer across with really good speeds. (Keep in mind I have tested this same method with ALL DISKS in the array spun up and still encounter it.)

I am suffering in performance somewhere; I can't queue up, say, 20GB spanned across 3 mapped disks and walk away, as it seems to get stuck and I have to resend the file. I am not sure if this has to do with the filesystem, the hardware, or the copy software I am using. I wish it could be better and faster. Is there a better method for pushing large amounts of data from a Windows system to a mapped array disk (not a folder share)?

My mind is a blur this week with work, working on this, and normal life stuff, trying to absorb all this info. I have an LSI SAS card in the mail, but it won't be here for weeks; I hope it helps over the SuperMicro I have in there now. Then the plan is hopefully to push over to XFS, which I have been told is much better performance-wise.
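One way to narrow down where a slowdown like this lives is to take the network out of the picture and time a raw write on the server itself: if the local write is fast but the SMB copy stalls, the bottleneck is on the network/SMB side rather than the disks. A rough sketch, writing to a temp file here (on a real server you would point OUT at a file on /mnt/diskN, a placeholder path):

```shell
# Write 100 MB of zeros and force it to disk so the timing is honest.
OUT=$(mktemp)
dd if=/dev/zero of="$OUT" bs=1M count=100 conv=fsync 2> dd.log
tail -1 dd.log           # GNU dd's summary line: bytes, seconds, MB/s

BYTES=$(stat -c %s "$OUT")
rm -f "$OUT"
echo "wrote $BYTES bytes"
```

Comparing that MB/s figure against what Windows reports during a transfer gives a first hint whether the array or the network path is the limit.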
trurl Posted September 29, 2017

You can leave Windows and the network completely out of the loop by using the Krusader docker or the built-in mc (Midnight Commander) to copy files on the server.
bombz Posted September 29, 2017

Right, that makes sense for disk-to-disk copies within the server array. I was speaking generally about file transfers in that regard, just trying to figure that out. :-)
Frank1940 Posted September 29, 2017

1 hour ago, bombz said:
Right, for disk to disk within the server array that makes sense. I was generally speaking about file transfers in that regard, trying to figure that out was all :-)

I would suggest waiting to address this problem until after you have the server converted over to XFS. ReiserFS apparently has some real issues as disk sizes get larger and the disks get close to full. (Development ended sometime around 2009, when the developer and maintainer of the filesystem was convicted of the murder of his wife and sentenced to life in prison.)

EDIT: If you wish to pursue this slow-copy issue at that point, please start a new thread.
bombz Posted September 29, 2017

Wowzers. Fair enough. That's why I was looking into XFS, as you and some others made the point that it would help with many of the performance concerns. :-) New RAM tomorrow, a new HDD to follow, then the LSI SAS card, then a few weeks of testing, then migration time. Sigh!! So much to do, but it's good; this is pushing me to get it done!
bombz Posted September 29, 2017

I have 'Unassigned Devices' installed, and I can see the section in my GUI. I should be able to plug a USB dock into the unRAID server with an RFS drive (one of the ones I am removing) and copy each of the 3 drives to the one XFS drive, using:

nohup cp -r /mnt/disk# /mnt/disk#

Correct?
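A caution on that exact cp invocation: `cp -r SRC DST` nests the source directory inside the destination, which is probably not what a disk-to-disk mirror wants, and plain `-r` does not preserve ownership or timestamps the way `cp -a` or `rsync -a` would. A sketch of the difference, with temp directories standing in for the real /mnt/disk# paths:

```shell
SRC=$(mktemp -d); NEST=$(mktemp -d); FLAT=$(mktemp -d)
mkdir -p "$SRC/TV"
echo "x" > "$SRC/TV/ep1.mkv"

# As written in the post: the whole source dir ends up *inside* DST.
cp -r "$SRC" "$NEST"        # -> $NEST/<basename of SRC>/TV/ep1.mkv

# Copying the contents instead puts files where a disk-to-disk
# mirror expects them; -a also keeps permissions and times.
cp -a "$SRC/." "$FLAT"      # -> $FLAT/TV/ep1.mkv
```

The `nohup ... &` part of the plan is sound for a long copy started over SSH, since it keeps the copy alive if the session drops (running it inside screen or tmux works too).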
ijuarez Posted September 29, 2017

30 pages later, you'd think Lime Tech would develop a plugin to help with the conversion. Just saying.
bombz Posted September 29, 2017

Sorry. :-( I know the last time I made a major change it went wrong, and it was my own fault for not clarifying. I will take the chance and hope it works out. I do appreciate the help.
BobPhoenix Posted September 30, 2017

If I knew Linux better, I could confirm the operation. I just used mc (Midnight Commander) to move my files from ReiserFS disks to XFS disks.
bombz Posted September 30, 2017

I am a basic user of Linux; I know some basics I have picked up over the years, but I am always still learning. The next test is to see if I can use NTFS drives outside the array to dump data to and from (eliminating the network completely).
PeteAron Posted October 4, 2017

I have a failing disk; it shows over 1000 reallocated sectors. So instead of doing the usual rebuild of the disk by replacing it, I was trying this method to copy the files to the new XFS disk. I had just completed a parity check with no errors, so I thought I would be OK.

I began the disk copy last night, following the published procedure. About 2/3 of the way through, the copy rate dropped to about 5-30 KB/sec, and I am seeing 36,000 or so errors on the unRAID display for this copy.

Should I abort the copy, remove the failing disk, rebuild it to a new replacement, and work on the XFS conversion after that? Or should I just wait for the copy to complete? If I need to abort, how do I do that? I am using a direct console command to do this.