Fpsware Posted November 16, 2019

Today I discovered a faulty disk in my system and, as I've done in the past, I replaced it with a new drive. I installed the new drive and now I can't get past "array starting ... mounting disks". I've tried with a different drive, same problem. I've even tried starting the array without the drive installed and it still tries to mount the EMPTY drive (see attached picture). The faulty drive was BTRFS; I've tried selecting BTRFS, XFS and AUTO for the new drive but nothing works. I've been at this for hours now and I'm beyond frustrated.

tower-diagnostics-20191116-0424.zip
JorgeB Posted November 16, 2019

3 hours ago, Fpsware said:
I've even tried starting the array without the drive installed and it still tries to mount the EMPTY drive

That's normal, since it's mounting the emulated disk. Btrfs disks can take several seconds to mount, and an emulated btrfs disk can take much longer because its data is being reconstructed from all the other disks. I don't see any errors in the log; have you waited a few minutes to see if it's just taking longer than usual?
Fpsware (Author) Posted November 16, 2019

Previously when I've replaced a disk it's taken literally a few seconds to get past that point. I've left it for ~30 minutes and it's still showing the same thing.

I've just now checked it again, approx 2 hours later; it's mounted all the disks but the rebuild process is estimated at 267 days! In the past this has taken 24 hours at most.

Total size: 4 TB
Elapsed time: 1 hour, 19 minutes
Current position: 149 MB (0.0 %)
Estimated speed: 171.3 KB/sec
Estimated finish: 267 days, 23 hours, 20 minutes
JorgeB Posted November 16, 2019 Share Posted November 16, 2019 Post current diags Quote Link to comment
Fpsware (Author) Posted November 16, 2019

.... been trying to do that for the last 30 minutes now.... it's still generating the logs.
JorgeB Posted November 16, 2019 Share Posted November 16, 2019 See if you can get just the syslog Quote Link to comment
Fpsware (Author) Posted November 16, 2019

This is the current one. After around 2 hours the other logs had not downloaded. I discovered memory usage was at 100%; after killing a few dockers I was able to download the diagnostics. The rebuild estimate is now down to 50 days.

tower-diagnostics-20191116-0955.zip
JorgeB Posted November 16, 2019 Share Posted November 16, 2019 Disk2 is failing: Nov 16 18:19:03 Tower kernel: res 51/40:20:c0:ec:3e/00:00:14:00:00/e0 Emask 0x9 (media error) ... Nov 16 18:41:23 Tower kernel: md: disk2 read error, sector=2352394520 Nov 16 18:41:23 Tower kernel: md: disk2 read error, sector=2352394528 Nov 16 18:41:23 Tower kernel: md: disk2 read error, sector=2352394536 Nov 16 18:41:23 Tower kernel: md: disk2 read error, sector=2352394544 Nov 16 18:41:23 Tower kernel: md: disk2 read error, sector=2352394552 Quote Link to comment
Fpsware (Author) Posted November 16, 2019

Thanks. Another disk to replace. How am I going to rebuild TWO disks when I only run a single parity disk?
JorgeB Posted November 16, 2019 Share Posted November 16, 2019 9 minutes ago, Fpsware said: How am I going to rebuild TWO disk when I only run a single parity disk? You can't, first make sure disk1 really failed, i.e., not a cable or other problem, if it did but it's still partially readable use ddrecuse on both disk1 and disk2 to recover as much data as possible. Quote Link to comment
JorgeB Posted November 16, 2019 Share Posted November 16, 2019 Also and if a full backup is not an option for you consider at least adding a second parity drive, while not a replacement for backups it's a very small price to pay for the added redundancy. Quote Link to comment
Fpsware (Author) Posted November 16, 2019

Thanks for the help. Currently the rebuild estimate is down to 6 hours; I guess it's past the drive errors. In the years I've been running Unraid I've never had 2 drives fail, so I thought there was no need for a second parity drive. Maybe it's time to change that. Fortunately the data on the damaged drive(s) is not critical and I do have online backups.
JorgeB Posted November 16, 2019 Share Posted November 16, 2019 Note that some data will be corrupt on disk1, since you're running btrfs a scrub will identify which files. Quote Link to comment