manolodf Posted June 17, 2019 (edited)

My parity device was disabled, with a red X next to it. I suspect a physical cause: I moved the box slightly while checking what I needed for the cache drive issue I have been dealing with (the M.2 SSD needs replacing), and after that movement the error appeared. It now says the parity device is disabled, and I am not sure of the proper procedure to get it to see the parity drive again. The short SMART report came back just fine; the long one has been sitting at 10% for a while. I started the read test thinking that would re-enable the parity, but I am not sure of the proper sequence. I did power down the machine and reseat the data and power plugs on the drive and on the board.

Attachments: tower-diagnostics-20190617-2011.zip, tower-smart-20190617-1544.zip
manolodf Posted June 18, 2019

Is my only way to get parity back into the array to do the following?

1. Stop the array
2. Remove the parity drive again
3. Start the array
4. Stop the array
5. Re-add the parity drive
6. Start the array to rebuild parity

Is there no shortcut, since parity was good?
itimpi Posted June 18, 2019

Parity may well not have been perfect: a red 'x' means a write has failed. The steps you listed are the general case, where you make no assumptions about the state of parity. Since you were virtually certain your parity was almost perfect, you could instead have done:

1. Stop the array
2. Use Tools >>> New Config, selecting the option to retain all current assignments
3. On the Main tab, tick the 'parity is correct' checkbox, then start the array
4. Run a correcting parity check, just in case parity was not as perfect as you thought

I would expect at least one correction, corresponding to the original red 'x', and it is most likely to show up near the beginning. If you include the recommended parity check in the above steps, the total elapsed time is not much different from a full rebuild. The difference is that if you were right about parity being valid, you have a better chance of recovering if an array data disk fails during the process.
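For background on why exactly one correction is expected: single parity is effectively a byte-wise XOR across all data disks, so one failed write leaves one spot where stored parity disagrees with recomputed parity, and a correcting check fixes exactly that spot. A toy sketch (plain Python for illustration, not Unraid's actual implementation):

```python
from functools import reduce

def compute_parity(disks):
    """Byte-wise XOR across all data disks (single-parity model)."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*disks))

# Three small "disks" and their parity.
disks = [bytes([1, 2, 3, 4]), bytes([5, 6, 7, 8]), bytes([9, 10, 11, 12])]
parity = bytearray(compute_parity(disks))

# A write to disk 0 fails partway: the data changed on disk, but the
# matching parity update never landed (the situation behind a red 'x').
disks[0] = bytes([1, 99, 3, 4])

# A correcting parity check recomputes parity and fixes any mismatches.
expected = compute_parity(disks)
corrections = sum(1 for i in range(len(parity)) if parity[i] != expected[i])
parity[:] = expected
print(corrections)  # -> 1 (one byte position disagrees, one correction)
```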
manolodf Posted June 18, 2019

OK, I have done so, but maybe too much has happened in the array in the 8 hours since the failure for parity to still be close. It now says Parity Invalid and it is rebuilding. Is it normal for it to be going at ~10 MB/s?
itimpi Posted June 18, 2019

10 MB/s sounds very slow, but your screenshot suggests something may be writing to disk2, which could easily slow it down to that sort of speed; writes during a parity check severely degrade its speed. Post the diagnostics if you want to see whether anyone can suggest anything else.
manolodf Posted June 18, 2019

I disabled the Docker containers and now the speed is more like 100 MB/s. Is that closer to normal?
itimpi Posted June 18, 2019

Quoting manolodf: "I disabled the Docker containers and now the speed is more like 100 MB/s. Is that closer to normal?"

That sounds more typical. Where is your docker.img file located? Ideally, for performance reasons, you want it on the cache drive, since accessing the cache drive does not affect parity check/build speeds. If practical, you also want any Docker-mapped shares located there for the same reason, in particular the 'appdata' share.
manolodf Posted June 18, 2019

The docker image is in /mnt/cache/docker.img. appdata used to be a cache-only share, though I moved it to the array temporarily, since I have been having those vicious cache drive issues and got tired of restoring so often while waiting for my new NVMe drive to arrive. I did update the firmware on the NVMe drive, but for now I am leaving appdata on the array and off the cache. I will set it back once I feel a bit more comfortable.
itimpi Posted June 18, 2019

Quoting manolodf: "The docker image is in /mnt/cache/docker.img ... for now I am leaving appdata on the array and off the cache."

OK. While appdata is on the array, you will need to remember to stop the Docker containers to get decent parity check speeds.
manolodf Posted June 18, 2019

Yep, that's what I did for now: turned the Docker service off for the night, hoping to get the parity rebuild done as fast as possible. Hopefully I can get it all back to normal right when the new drive and heatsink arrive tomorrow.
manolodf Posted June 18, 2019

I have installed the new M.2 NVMe cache SSD, but I am not sure why it seems to be running slower than it should. Is there something in the BIOS I should look for to speed up writes? One example: a 20 GB file that took 8 minutes to transfer to disk2 yesterday (roughly 42 MB/s) took about 18 minutes to transfer to the cache today (roughly 19 MB/s), on an M.2 SSD that should be far faster. Likewise, the ~40 GB of appdata I am transferring is taking hours upon hours, when I remember it previously moving to and from the array much faster. With Docker off, the parity check runs at about 100 MB/s; with it on, about 20 MB/s; with both the mover and Docker off, it can hit 160 MB/s.

Attachment: tower-diagnostics-20190618-2348.zip
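A file copy mixes read and write speed, so it helps to measure the write side in isolation. A minimal sketch of a sustained-write test (my illustration; the `/mnt/cache` target path is an assumption, substitute your own, and a dedicated benchmark such as fio is more rigorous):

```python
import os
import tempfile
import time

def write_speed_mb_s(directory, total_mib=256, chunk_mib=4):
    """Write total_mib MiB to a temp file in `directory`, fsync so the
    data actually reaches the device, and return the speed in MB/s."""
    chunk = os.urandom(chunk_mib * 1024 * 1024)
    fd, path = tempfile.mkstemp(dir=directory)
    try:
        start = time.monotonic()
        with os.fdopen(fd, "wb") as f:
            for _ in range(total_mib // chunk_mib):
                f.write(chunk)
            f.flush()
            os.fsync(f.fileno())  # force the data out of the page cache
        elapsed = time.monotonic() - start
    finally:
        os.remove(path)
    return (total_mib * 1024 * 1024) / elapsed / 1e6

# e.g. print(write_speed_mb_s("/mnt/cache"))  # hypothetical target path
```

Without the fsync, small tests mostly measure RAM (the page cache) rather than the drive, which is why quick copies can look misleadingly fast.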
_0m0t3ur Posted June 19, 2019

When writing to disks in the array, parity writes are always going to be a speed limiter.
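Some context on why that is: in the default read/modify/write mode, each array write costs a read of the old data and old parity plus a write of the new data and new parity (four I/O operations), because the new parity can be derived without reading the other disks. A sketch of that parity update rule (my illustration, not Unraid source code):

```python
def updated_parity(old_parity: int, old_data: int, new_data: int) -> int:
    """Read/modify/write parity update: flip exactly the parity bits
    that the data change flipped, without reading any other disk."""
    return old_parity ^ old_data ^ new_data

# Consistency check against recomputing parity from scratch:
other_disks_xor = 0b1010          # combined XOR of all untouched disks
old_data, new_data = 0b0110, 0b1100
old_parity = other_disks_xor ^ old_data
assert updated_parity(old_parity, old_data, new_data) == other_disks_xor ^ new_data
```

The shortcut saves reading every disk, but the paired head seeks on the data disk and the parity disk are what cap array write speed well below a single drive's raw throughput.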
manolodf Posted June 19, 2019

What about when writing only to the NVMe cache drive and not to disks in the array?
manolodf Posted June 19, 2019

With the parity build paused and Docker off, is it normal for 40 GB of appdata to take over 4 hours to transfer from disk2 to the NVMe cache drive?
_0m0t3ur Posted June 19, 2019

You're right, those speeds seem unusually slow. I don't have an NVMe drive myself, though I've heard there are "good" and "bad" NVMe drives; you might search for "best nvme for unraid" to find out which NVMe drives other unRAID users are running successfully. You should also double-check your overall disk and share setup: for example, which shares are set to reside only on the cache versus those set to utilize the array in some capacity? One last thing comes to mind: have you tried setting up the Netdata docker? I've found Netdata to be a fabulous tool for looking at IOWAIT. Here's a resource regarding IOWAIT: https://bencane.com/2012/08/06/troubleshooting-high-io-wait-in-linux/
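Along the lines of that IOWAIT article, the same number Netdata charts can be read straight from /proc/stat on any Linux box, Unraid included. A minimal sketch (the field order is the standard one from the proc man page; note a single snapshot gives the average since boot, so sample twice and diff for a current reading):

```python
def iowait_fraction(stat_text: str) -> float:
    """Parse the aggregate 'cpu' line of /proc/stat and return the
    fraction of CPU time spent waiting on I/O since boot.
    Field order: user nice system idle iowait irq softirq [...]"""
    for line in stat_text.splitlines():
        parts = line.split()
        if parts and parts[0] == "cpu":   # aggregate line, not cpu0/cpu1/...
            ticks = [int(x) for x in parts[1:]]
            return ticks[4] / sum(ticks)  # ticks[4] is the iowait field
    raise ValueError("no aggregate 'cpu' line found")

# On a live Linux system:
# with open("/proc/stat") as f:
#     print(f"{iowait_fraction(f.read()):.1%}")
```

A persistently high iowait during transfers would point at a storage bottleneck (a struggling drive or controller) rather than CPU or network.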
manolodf Posted June 20, 2019

Quoting DoItMyselfToo: "You're right, these speeds seem unusually slow. ... have you tried setting up Netdata docker?"

I installed Netdata, it's pretty cool! How would you recommend I do a write test to the cache drive? When I was transferring appdata from the array to the cache, my dockers were off, so I could not really watch it there. Do you recommend a different test I could set up to see the write speed? Do you think there is something in the BIOS I need to change to improve this? My previous 256 GB NVMe M.2 SSD did not have any speed issues that I noticed.
_0m0t3ur Posted June 20, 2019 (edited)

My approach was to perform some task I would usually do, then look at the other processes in Netdata and Glances to see which were impacted by that task. From there it's a matter of figuring out what needs to change to improve the performance. For your NVMe, I would set up a typical write you would normally do and then see which other processes, viewed in Glances, show IOWAIT. As I mentioned earlier, I don't have an NVMe drive; my standard SATA III SSDs are all connected to my motherboard's SATA II ports, and my performance is great. The SATA II spec allows up to 300 MB/s, but because of parity writing, the maximum speed writing to the array is much less than that. You mentioned that your previous NVMe drive worked great; why not get another one of those?