nikiforos

Everything posted by nikiforos

  1. Hm... I don't know what "port multipliers" are, so I assume I don't have any. The parity is being rebuilt at roughly 32 MB/s and is expected to take another 4 days and 20 hours (a quick back-of-the-envelope check on that estimate is sketched below this list). I will then start a manual parity check, which will also take 3-4 days.
  2. Ok, thank you! I have started the rebuild. I will update you in about a week when it is done! Have a nice weekend.
  3. Also, is my thinking correct that I should keep Docker and the VMs inactive during the rebuild (to minimize writing to the disks), or is it fine to turn them back on?
  4. So I should rebuild the parity again? Do I have to use the same trick as before, swapping the two parity drives to force a rebuild, or is there a more elegant solution?
  5. - It is fine that Disk 12 has hardly any data; I had planned on using it only for a specific share.
     - All the disks have green thumbs up on the dashboard (see screenshot).
     - I shut down the server, opened it, checked all the connections, moved the hard drives off the motherboard SATA ports onto a PCI card (no room to move the cache drives too), rebooted and started the array. The cache drives have fixed themselves, but the parity drives seem to still have the same issue. I attached a new diagnostics file. Thanks again! unraidserver-diagnostics-20230128-1921.zip
  6. Hello, thank you for your reply. I replaced data disk 5 (Z2JMJMZT). I will do as you suggested and report back. Thank you!
  7. Hello everyone, thank you for your time, I hope you will be able to help me with my situation! I have been running my Unraid server for roughly three years now without any issues. Until a couple of weeks ago...
     A couple of weeks ago I got a message that one of my array disks has an error and cannot be read from. Since I had been contemplating expanding my storage anyway, I did not spend much time looking into the original error and just bought a bigger hard drive to replace the one with the error. To do that, I:
     1) shut the server down (cleanly)
     2) added the new drive to an empty slot in my case
     3) pre-cleared the new disk (no issues/errors)
     4) replaced the disks in the "Array Devices"
     5) started the array and let the new disk rebuild
     Everything seemed fine after that and I thought the problem was dealt with. Sadly, after the next scheduled parity check, I got an error message that both of my parity drives have errors. Over 1000 each. So I decided to rebuild the parity from the ground up. I'm hoping this wasn't a fatal mistake...
     To rebuild the parity drives, I stopped the array and swapped the two parity drives with each other. After starting the array, Unraid started rebuilding the "new" parity drives. Btw, I also turned off Docker and the VM manager, as I thought it would be best to minimize data being written to the drives while the parity is being rebuilt.
     Once the parity was freshly rebuilt, I manually started a parity check, as I wanted to make sure that everything works fine. Which it did not! Again the parity drives reported over 1000 errors each. I now got the option to start a "Read Check", which I did. It will take about 4 days though.
     I attached a diagnostics .zip file, which I just now downloaded. I'm hoping someone will find useful information in there; I certainly have no clue what to look for. Could you please help me with my next steps? Should I run tests on the two parity drives, or should I wait for the "Read Check" to finish? Did I mess up, or can the disks/data be salvaged?
     Thank you very much for your support!! Greetings from Vienna, Nick
     unraidserver-diagnostics-20230128-1727.zip
  8. Thanks for your reply. I found a workaround which did the trick for me: I removed all the rules and set the NFS export to "No" on all my shares, then deactivated and reactivated NFS sharing in the Unraid settings. Afterwards, I enabled exporting one share at a time, always checking that "exportfs -ra" completed without errors (a rough sketch of that verification step is below this list). Not entirely sure what the problem was, but everything's working now.
  9. Hi there, did you ever find a solution for this? I am having the same issue now. Thanks! Nick
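
A note on the estimate in post 1, since the numbers are easy to sanity-check: at roughly 32 MB/s for another 4 days and 20 hours, the amount of data still to be rebuilt works out to around 13 TB. This is only back-of-the-envelope shell arithmetic using the figures quoted in the post, not anything taken from the diagnostics:

    # Rough check of the rebuild estimate quoted in post 1 (assumed figures:
    # ~32 MB/s sustained, ~4 days 20 hours remaining).
    remaining_seconds=$(( 4 * 86400 + 20 * 3600 ))   # 417,600 s left
    remaining_mb=$(( remaining_seconds * 32 ))       # ~13,363,200 MB
    echo "Roughly $(( remaining_mb / 1000000 )) TB still to rebuild"

That order of magnitude would be consistent with a parity drive in the low teens of terabytes being rebuilt more or less from the start, which matches the multi-day parity check times mentioned elsewhere in the thread.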
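
For anyone landing here with the same NFS problem as in post 8: the workaround itself is done through the Unraid GUI, but the checking step can be run from a terminal. The following is only a sketch of how one might verify each re-enabled export; the commands are standard NFS utilities, not anything specific to this thread, and `showmount` assumes the NFS services are running:

    # After re-enabling NFS export on one share in the GUI, confirm the
    # export table still loads cleanly before moving on to the next share.
    exportfs -ra              # re-read and re-export everything; prints an error if an entry is bad
    exportfs -v               # verbose list of what is currently exported
    showmount -e localhost    # export list as a client would see it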