BradJ

Members

  • Content Count: 61
  • Joined

Community Reputation: 2 Neutral

About BradJ

  • Rank: Advanced Member
  • Gender: Undisclosed

  1. I have now replaced the drive giving read errors, and everything seems to be working correctly again. Moral of the story: do not buy white label hard drives from goharddrive (at least for a NAS). In total, I have had problems with 3 out of 4 WL drives. One was replaced under warranty with an enterprise-level WL drive, and that one works fine. The others started having weird intermittent errors right after the one-year warranty expired. Thank you to everyone who has helped me through this; I appreciate it. I will now mark this topic as solved.
  2. Lol, yes - it's a white label drive purchased from GoHardDrive. This is the second WL drive from there that Unraid has found problems with. I'll be replacing it with a 10TB WD shucked drive (yes, I'm cheap). I'll report back once the array is rebuilt and a parity check has completed successfully.
  3. Update: the suspect drive passed the SMART extended test. I then started a non-correcting parity check - successful, with no errors. Then it got interesting: I started a correcting parity check and immediately got 134 read errors. I believe something is simply wrong with the drive; what exactly, who knows. Where to go from here is the question now. I'm thinking of just cancelling the current parity check and replacing the drive. Is that a good game plan, or is there anything else I should consider? tower-diagnostics-20201007-1320.zip
  4. This is why I asked about this in the OP. It's a new drive (to this array), so I cannot be 100% certain of its quality. Alright, just to be safe, I will run a non-correcting check first and see what happens. I'll report back with the results.
  5. That would make sense. The Wiki doesn't specifically say the errors will be zeroed out. I guess I jumped to that conclusion after reading about how read errors are fixed automatically (the XOR parity sketch after this list shows the mechanism). I will run a correcting parity check and report back (in 26+ hours).
  6. Ok, that's good advice, and I will run a parity check ASAP. Again, any insight on whether the parity check should be a correcting one or not?
  7. I run a monthly parity check. The new drive has not been through a parity check since I just added it a few days ago. Yes, all of my previous parity checks have had 0 errors. I will start a parity check as soon as the extended SMART check is finished. According to the Wiki, a (successful) parity check should zero out the errors as well. I'm still not sure whether the parity check should be a correcting one or not.
  8. I added a 6TB drive (Disk 8) to my array a few days ago. Woke up this morning to a notification of 4 read errors. I do not currently see any SMART errors, but I am running an extended check to be sure. Reading the Wiki, it says to run a parity check as soon as possible. Should this be a correcting or non-correcting parity check? A previous topic on this said to make sure it is non-correcting, but I want to be sure before I start this 26-hour process. Any other words of advice? Thanks. tower-diagnostics-20201005-0953.zip (Background sketches on parity, check modes, and the SMART test follow this list.)
  9. Due to the constant spamming of the logs, I also had to uninstall this plugin. Otherwise I loved this plugin. What a shame.
     Tower kernel: caller _nv000908rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs
     Tower kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff]
  10. Just an FYI... the issue has corrected itself. I believe this was an error related to the massive growth they have seen over the past few weeks.
  11. I joined the team last night. My Nvidia GPU was doing some work right after I installed it, but now it is sitting idle. I found this in the logs:
      13:07:53:WU01:FS01:Requesting new work unit for slot 01: READY gpu:0:GP104 [GeForce GTX 1060 6GB] from 13.90.152.57
      13:07:53:WU01:FS01:Connecting to 13.90.152.57:8080
      13:08:28:ERROR:WU01:FS01:Exception: Failed to remove directory './work/01': boost::filesystem::remove: Directory not empty: "./work/01"
      Is this an error on my end or on their end? Thanks.
  12. Negative. I upped my firmware from v10 to v12 and that did not help. Any other ideas?
  13. Same here. The latest version has broken functionality. Hopefully it gets fixed in an update soon. If anyone has a workaround, please let us know.
  14. trurl, It was just a user created share. I didn't realize I had one set to "Prefer". It was my mistake. Sometimes I am oblivious to the obvious. Lol. I'm going to mark this SOLVED. Thanks again.
  15. One of my shares was set to "Prefer". I changed it to "Yes". Now the mover appears to be working. Thank you very much, johnnie.black! Brad
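
Background on the parity discussion in posts 3-8: Unraid protects the array with a single XOR parity disk, so any one unreadable block can be rebuilt from the parity block plus the corresponding blocks on the other data disks. Below is a minimal Python sketch of that idea; the block contents are made up for illustration, and this is not Unraid's actual code.

    # Minimal sketch of single (XOR) parity, the scheme behind Unraid's
    # parity disk. Block contents are hypothetical.

    def xor_blocks(blocks):
        """XOR a list of equal-length byte blocks together."""
        result = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                result[i] ^= b
        return bytes(result)

    # Three data disks, one block each (made-up contents).
    disk1 = b"\x10\x20\x30\x40"
    disk2 = b"\x0f\x0e\x0d\x0c"
    disk3 = b"\xaa\xbb\xcc\xdd"

    # Parity = XOR of all data blocks across the same position.
    parity = xor_blocks([disk1, disk2, disk3])

    # A read error on disk2: rebuild its block from parity plus the
    # remaining data disks.
    rebuilt = xor_blocks([parity, disk1, disk3])
    assert rebuilt == disk2  # any single missing block is recoverable

This is why a read error on a data disk can be corrected on the fly (the bad block is reconstructed from the other disks and rewritten), and why the data disks, not parity, are treated as authoritative when parity itself is rewritten.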
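
On the correcting vs. non-correcting question: a non-correcting check only counts sync errors, while a correcting check rewrites parity to match the data disks. The GUI exposes this as the "Write corrections to parity" checkbox; my understanding (an assumption worth verifying against the Unraid docs) is that the underlying command is mdcmd check with an optional NOCORRECT argument. A hedged sketch:

    # Hedged sketch: starting a parity check from a script. The mdcmd
    # path and the NOCORRECT argument reflect my reading of the Unraid
    # docs/forums -- verify on your own system before relying on this.
    import subprocess

    MDCMD = "/usr/local/sbin/mdcmd"  # Unraid's array control utility

    def start_parity_check(correcting: bool) -> None:
        """Start a parity check; NOCORRECT reports errors without fixing."""
        args = [MDCMD, "check"]
        if not correcting:
            args.append("NOCORRECT")
        subprocess.run(args, check=True)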
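
And for the extended SMART test mentioned in posts 3 and 8: Unraid runs this from the disk's settings page, but the same self-test can be driven from a script via the standard smartmontools CLI. A minimal sketch; the device path is an assumption, so substitute your own drive:

    # Sketch: start a SMART extended (long) self-test and read back the
    # result log using smartctl (smartmontools).
    import subprocess

    DEVICE = "/dev/sdb"  # hypothetical device; check Unraid's Main tab

    # Kick off the extended self-test; it runs inside the drive's
    # firmware and can take hours on a large disk.
    subprocess.run(["smartctl", "-t", "long", DEVICE], check=True)

    # Later, read the self-test log to see pass/fail and any bad LBAs.
    result = subprocess.run(
        ["smartctl", "-l", "selftest", DEVICE],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)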