jbeck22 Posted December 26, 2019
I'm still running the trial. I ran the disk speed docker to make sure that my drives are performing as they should, and they are (originally my cache drive was only getting 3-4 MB/s; after an upgrade I'm at 150+). Anything docker-related is almost unusable, including starting and stopping of services. Any help would be greatly appreciated!
tower-diagnostics-20191226-1512.zip
JorgeB Posted December 27, 2019
The WD 3TB is failing and is still part of your cache pool, despite not being assigned in the GUI. It's probably best to re-format the cache pool with just the other disk, after backing up what you can if needed.
jbeck22 (Author) Posted December 27, 2019
Thanks Johnnie! Can you tell me how I can find that in the diag? I want to check it after I make your recommended changes.
trurl Posted December 27, 2019
Also, I assume you know parity is disabled? I didn't notice in syslog that you were rebuilding parity. Are you?
JorgeB Posted December 27, 2019
In the diags, system/btrfs-usage.txt, sde is still part of the pool:

Overall:
    Device size:           4.55TiB
    Device allocated:     96.02GiB
    Device unallocated:    4.45TiB
    Device missing:          0.00B
    Used:                 88.28GiB
    Free (estimated):      2.70TiB  (min: 2.23TiB)
    Data ratio:               1.65
    Metadata ratio:           2.00
    Global reserve:       29.31MiB  (used: 400.00KiB)

             Data      Data      Metadata  System
Id Path      single    RAID1     DUP       DUP       Unallocated
-- --------- --------- --------- --------- --------- -----------
 1 /dev/sde1  20.01GiB  37.00GiB   2.00GiB  16.00MiB     2.67TiB
 2 /dev/sdf1         -  37.00GiB         -         -     1.78TiB
-- --------- --------- --------- --------- --------- -----------
   Total      20.01GiB  37.00GiB   1.00GiB   8.00MiB     4.45TiB
   Used       18.75GiB  34.64GiB 128.30MiB  16.00KiB
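For reference, the per-device section of that file is the part to check: any /dev/ entry listed there is still counted as a pool member, whether or not the GUI shows it assigned. A minimal sketch of pulling the device names out (the sample lines below are the table quoted above; on a live btrfs cache, `btrfs filesystem usage /mnt/cache` prints the same table, assuming Unraid's default mount point):

```shell
# List pool-member device names from a btrfs usage table.
# Sample input is the per-device section from the diagnostics above.
grep -oE '/dev/sd[a-z]+[0-9]*' <<'EOF'
 1 /dev/sde1  20.01GiB  37.00GiB   2.00GiB  16.00MiB     2.67TiB
 2 /dev/sdf1         -  37.00GiB         -         -     1.78TiB
EOF
# prints /dev/sde1 and /dev/sdf1 - two members, so sde is still in the pool
```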
trurl Posted December 27, 2019
3 minutes ago, johnnie.black said: system/btrfs-usage.txt
I had never looked at that one, I'll have to remember that. I did notice too much total capacity for cache in df.txt.
jbeck22 (Author) Posted December 27, 2019
46 minutes ago, trurl said: Also, I assume you know parity is disabled? I didn't notice in syslog that you were rebuilding parity. Are you?
Yes, parity is disabled for some reason. Not sure if the disk had some errors or not, but it disabled itself... not sure how to re-enable it.
jbeck22 (Author) Posted December 27, 2019
I have formatted the cache drive as suggested. I can see better overall docker performance for sure, so big win there. I re-ran the diagnostics, but this time I'm not seeing the file that should show me the status of the cache setup. I have attached the new zip file here.
tower-diagnostics-20191227-1121.zip
Maybe it is in a different file now, since I formatted the cache drive to xfs?
JorgeB Posted December 27, 2019
12 minutes ago, jbeck22 said: since I formatted the cache drive to xfs?
That file only exists if the pool is btrfs. To re-enable parity see here: https://wiki.unraid.net/Troubleshooting#Re-enable_the_drive
trurl Posted December 27, 2019
5 minutes ago, jbeck22 said: Yes parity is disabled for some reason. Not sure if the disk had some errors or not, but it disabled itself... not sure how to re-enable it.

Unraid disables a disk any time a write to the disk fails, because the disk is then no longer in sync with the parity calculation.

In the case of a data disk getting disabled, the write to the disk failed but the write to parity succeeded, so the contents of the disk are no longer valid, but the correct contents are represented in the parity array and can be rebuilt to the disk. In the case of parity getting disabled, the write to parity failed, so it is parity that is out of sync with the data in the array.

Unraid will not use a disabled disk until it is rebuilt. In the case of a disabled data disk, instead of reading or writing the disabled disk, Unraid emulates the disk. To read the contents of the emulated disk, it uses the parity calculation to get the correct data by reading parity plus all the other disks. To write to the emulated disk, it reads the emulated disk as explained, then updates parity as if the disk were being written. So even though Unraid has ceased to use the disk at all, the initial failed write, and any subsequent writes to the disk, are represented in the parity array and can be rebuilt to the disk. In the case of a disabled parity disk, parity is no longer written, but writes continue to the data disks, so parity is out of sync.

And, of course, if you have a disabled disk in a single parity array, whether that disabled disk is parity or data, then you no longer have any redundancy. Single parity allows only one disk to be rebuilt. Dual parity still has redundancy when a single disk is disabled, and loses redundancy when 2 disks are disabled. Dual parity allows 2 disks to be rebuilt.

I looked at SMART for your parity disk, and it seems OK. Often these problems are caused by bad connections, cables, etc. and not by a bad disk. You can rebuild parity (or indeed any disk) to itself by:

1. Stop the array
2. Unassign the disk to be rebuilt
3. Start the array so the changed assignment is registered
4. Stop the array
5. Reassign the disk to be rebuilt
6. Start the array to begin the rebuild
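Before committing to a rebuild, the SMART check mentioned above is worth repeating yourself. A rough sketch (the attribute lines below are placeholder SMART output, not data from this server; on the server you would run `smartctl -a` against the real parity device and look at the raw values of the reallocated/pending sector attributes):

```shell
# Flag any non-zero raw value (last field) on the sector-health
# attributes. On the server, replace the printf with real output, e.g.:
#   smartctl -a /dev/sdX | grep -E 'Reallocated_Sector|Current_Pending'
# (/dev/sdX is a placeholder for the parity device.)
printf '%s\n' \
  '  5 Reallocated_Sector_Ct   0x0033 100 100 010 Pre-fail Always - 0' \
  '197 Current_Pending_Sector  0x0012 100 100 000 Old_age  Always - 0' |
awk '$NF > 0 {bad=1; print $2 " raw value:", $NF} END {exit bad}'
# exit status 0 means both raw values are zero (disk looks OK)
```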
trurl Posted December 27, 2019
14 minutes ago, jbeck22 said: I formatted the cache drive to xfs
Just in case you don't know: only btrfs allows multiple disks in cache. If you ever decide to have more than one disk in the cache pool again, you will have to reformat that xfs cache back to btrfs.
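As a quick way to confirm what the cache is currently formatted as, the filesystem type is the third field of the mount entry (mount point /mnt/cache is the Unraid default; the mount line below is a made-up example of what a /proc/mounts entry looks like, not taken from this server):

```shell
# Print the filesystem-type field (3rd column) of a mounts entry.
# On the server: grep /mnt/cache /proc/mounts
line='/dev/sdf1 /mnt/cache xfs rw,noatime,nouuid 0 0'
echo "$line" | awk '{print $3}'   # prints: xfs
```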