trurl Posted November 29, 2018
To jump ahead a bit to the disposition of the failing and unmountable disk1: I think we were planning to use a new 10TB for parity and reuse the current 8TB parity as data. Could that 8TB be the target for ddrescue? If so, would it be better to have the 8TB already in the array or out? Or some other plan?
JorgeB Posted November 29, 2018
If using ddrescue, the destination disk should be outside the array; the source disk can be in the array if the clone is done with the array stopped.
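For reference, the usual two-pass GNU ddrescue invocation looks something like the sketch below. The device names (/dev/sdf as the failing disk1, /dev/sde as the spare 8TB) and the map-file path are placeholders for illustration; verify yours with lsblk before running anything, because the wrong destination device will be overwritten.

```shell
# Pass 1: copy everything easily readable, skipping the slow scraping
# phase over bad areas (-n). -f is required when writing to a device.
# The map file records what has been recovered so passes can resume.
ddrescue -f -n /dev/sdf /dev/sde /boot/ddrescue-disk1.map

# Pass 2: go back and retry only the unread areas, up to 3 times (-r3),
# reusing the same map file so good data is not copied again.
ddrescue -f -r3 /dev/sdf /dev/sde /boot/ddrescue-disk1.map
```

The map file is the important part: it lets you stop and restart the rescue, or move the source disk to a different controller between passes, without losing progress.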
trurl Posted November 29, 2018
1 minute ago, johnnie.black said: If using ddrescue the destination disk should be outside the array, source disk can be in if the clone is done with the array stopped.
OK. Well, I don't see any point in having disk1 (the source) in the array, since the array isn't currently protected and we shouldn't rebuild parity with the bad disk1 in it. So we might as well have both source and destination out of the array. After fixing the filesystem on the clone, will we be able to use it in the array with its data intact? If so, I assume that would be a New Config and parity resync.
JorgeB Posted November 29, 2018
Just now, trurl said: After fixing the filesystem on the clone, will we be able to use it in the array with its data intact? If so I assume that would be a New Config and parity resync.
Yes, as long as xfs_repair can repair the filesystem.
trurl Posted November 29, 2018
After he gets disk2 copied off we could New Config without disks 1, 2, 3, but with the new 10TB parity and the still-OK disks 4 and 5 in whatever slots. Or we could just wait until after the cloning and repair of the disk1 filesystem before getting parity synced again, but that would mean leaving the other disks unprotected while working on the disk1 problem. Certainly not as big a risk as the one he has been operating under, though.
trurl Posted November 29, 2018
31 minutes ago, trurl said: Or we could just wait until after the cloning and repair of disk1 filesystem before getting parity synced again
This would be simplest and fastest, and shouldn't be any risk at all if the array isn't written to or even started.
Pyro (Author) Posted November 29, 2018
Copying disk2 has been a pain so far. Ironically, I'm getting a lot more errors than I was with disk3. It's slow going. I'll update when it's done.
Pyro (Author) Posted November 30, 2018
Finally got disks 2 and 3 backed up. Here's another diagnostic file if it helps. tower-diagnostics-20181130-0404.zip
Pyro (Author) Posted November 30, 2018
On 11/29/2018 at 8:59 AM, trurl said: we could just wait until after the cloning and repair of disk1 filesystem before getting parity synced again
Which disk should I clone disk1 to?
JorgeB Posted December 1, 2018
1 hour ago, Pyro said: Which disk should I clone disk1 to?
Any unused new disk, outside the array, same or larger capacity.
Pyro (Author) Posted December 2, 2018
The only new disk I have outside of the array right now is the 10TB that I'm planning to make into the new parity disk. Will that cause issues later, since it's bigger than the current 8TB parity? And are there any negative side effects to trying xfs_repair and failing? I'm a little nervous about this part.
trurl Posted December 2, 2018
You should use the 8TB parity for the clone. Your parity isn't very useful right now, and you will be resetting your disk assignments and building parity from scratch on the 10TB.
JorgeB Posted December 2, 2018
One thing I forgot to mention: while there's no problem using a larger disk as the destination for ddrescue, that disk won't mount in the array, because the partition won't use the full disk. You can, however, mount it with UD, for example, and copy the data to the array.
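The size mismatch is easy to see for yourself. In this hypothetical sketch, /dev/sdb stands in for the 10TB disk that received an 8TB clone; the partition size stays at the source disk's size, and the gap is the space Unraid would reject.

```shell
# List the whole disk and its partitions with exact byte sizes.
# The TYPE=disk row will show ~10TB while the TYPE=part row (the
# cloned partition) still shows ~8TB.
lsblk -b -o NAME,SIZE,TYPE /dev/sdb
```

Unraid requires an array disk's partition to span the whole device, which is why the oversized clone can only be mounted outside the array (e.g. via Unassigned Devices) and copied from.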
trurl Posted December 2, 2018
4 hours ago, johnnie.black said: One thing I forgot to mention, while there's no problem using a larger disk as destination for ddrescue, that disk won't mount in the array, because the partition won't be using the full disk, but you can mount it with for example UD and copy the data to the array.
In that case maybe he should clone to the 10TB, put the 8TB in the array as the target for copying the cloned data plus the data already copied to his PC from disks 2 and 3, then New Config with the 10TB as parity.
JorgeB Posted December 2, 2018
1 minute ago, trurl said: In that case maybe he should clone to the 10, use the 8 in the array to copy the cloned data and the backed up disks 2,3, then New Config with the 10 as parity.
Yes, probably the best and easiest way.
Pyro (Author) Posted December 2, 2018
I cloned disk1 to the 8TB overnight. After I run xfs_repair, can I use UD to copy to my desktop? I think I might have enough room for that.
JorgeB Posted December 2, 2018
Yes. Note that xfs_repair needs to be run on /dev/sdX1, with the 1 at the end (the partition, not the whole device).
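A cautious way to do this, sketched below with /dev/sde1 as a stand-in for whatever the clone's partition actually is on your system, is to do a read-only check before the real repair:

```shell
# -n: no-modify mode. Scans the filesystem and reports what it would
# fix, without writing anything, so a failed attempt costs nothing.
xfs_repair -n /dev/sde1

# If the report looks reasonable, run the actual repair verbosely.
xfs_repair -v /dev/sde1
```

If xfs_repair complains that the filesystem has a dirty log it cannot replay, it may suggest the -L option (zero the log); that can lose the most recent metadata changes, so only use it as a last resort.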
Pyro (Author) Posted December 2, 2018
If I'm understanding the Unraid xfs_repair wiki correctly, I should start the array in maintenance mode (screenshot) and type this into the terminal: xfs_repair -v /dev/dse1 Is this correct? I apologize for the pile of questions; I'm a novice at best and don't want to screw anything up even more.
John_M Posted December 2, 2018
11 minutes ago, Pyro said: Is this correct?
No. There is no such thing as /dev/dse1; you have mistyped it. Also, /dev/sde is your parity disk, so that isn't right either. Look under the Unassigned Devices section that you cut off in your screenshot to find the correct device. It's probably /dev/sdb1, but I can't see it so I can't be sure.
JorgeB Posted December 2, 2018
17 minutes ago, Pyro said: I should start the array in maintenance mode
No need, since the disk is unassigned; it just can't be mounted in UD. And as John_M mentioned, check there for the correct identifier.
Pyro (Author) Posted December 2, 2018
OK, so sde is my (currently useless) parity, but it's also the disk I cloned sdf to. I should have the array stopped, and run: xfs_repair -v /dev/sde1 Am I getting closer? sdb is the 10TB that will become my new parity, but it's unformatted.
John_M Posted December 2, 2018
1 hour ago, Pyro said: Ok, so sde is my (currently useless) parity, but it's also the disk I cloned sdf to.
You didn't unassign it from the parity slot before cloning onto it?
On 12/1/2018 at 12:06 AM, johnnie.black said: Any unused new disk, outside the array, same or larger capacity.
The concern I have is that your screenshot shows 7 writes to it, which will have added to the corruption. Stop the array and unassign it. Otherwise, you have the xfs_repair command correct. You might want to consider re-doing the clone, but you might as well run xfs_repair first and see if it allows you to mount the disk in Unassigned Devices.
Pyro (Author) Posted December 2, 2018
Alright, I guess I'll go ahead and clone disk1 again, this time to the 10TB sdb, since sde is likely to fail anyway.
John_M Posted December 2, 2018
Yes, do that, but leave sde untouched for now in case the old disk1 fails completely during the cloning process. You might be able to get something off it if all else fails.
Pyro (Author) Posted December 2, 2018
Uh... it happened instantly? This seems off. What did I screw up?