trussell34

Everything posted by trussell34

  1. Hey everyone, I just wanted to thank you all for the amazing advice and help! The most recent changes allowed the parity check to complete successfully. Thanks so much, everyone!
  2. Ok I will "rinse and repeat". Any guidance on how low to take it?
  3. So I lowered that setting (from 192 to 140) and I still had issues. Please see the attached syslog for more details. Do I lower the number further? Or is this a hardware issue (RAM/HDDs/etc.)? Thanks! syslog
  4. Appreciate the clarification! Looking more closely at my syslog, I can see that the call traces didn't start for 12+ hours after the parity check. I'll keep an eye on it throughout the day though.
  5. Thanks for the advice. I've lowered the md_sync_thresh setting from 192 to 140. I kicked off the parity check again as well.
  6. So I ran the parity check without writing corrections and it did the same thing again. I woke up this morning to find that the server was unreachable and would only respond to a hard reset. I did turn on the syslog settings and have attached the log to this post. Please let me know if you can find anything relevant in there; I'm not entirely sure what I'm looking for (see the syslog-scanning sketch at the end of this list). Thanks in advance! syslog
  7. Apologies, I just stumbled upon this setting this morning and enabled it immediately. If it does crash again, the logs should be on my flash drive and I will upload if/when it occurs. Cheers!
  8. I will try that and let you know how it goes. Appreciate the help!
  9. No worries! Yes all drives show green. I also see messages of "parity is valid". But if I were to re-run the parity check (with write corrections turned on), it would cause the whole unraid device to be unreachable (dockers, shares, GUI, etc). Should I try running parity check without writing corrections?
  10. FYI, the 50% was purely a guess; I haven't been able to determine exactly when the server becomes unreachable. But to answer your question, no, my array disks are all 4TB and smaller (see the rough arithmetic sketch at the end of this list).
  11. I figured it wouldn't have been that easy. Array is online and passes the array health check. I have a few disks with a small amount of reallocated sectors (<5) and plan on replacing them ASAP but would like to get this figured out first. Not being able to complete a parity check scares me a little bit.
  12. Hello all, I'm having an issue where, once a parity check gets to more than 50% done, the server becomes unreachable. I can plug in a monitor and keyboard, but I can only view the screen after a hard reboot of the server. A hard reboot also makes it reachable again, but I need to cancel the parity check for it to stay that way. Here is where I may have gone wrong: I replaced my 4TB parity drive with a new 10TB drive, and I've been having issues ever since. These are the steps I used, and I think I may have messed something up:
      • Stopped the array
      • Shut down
      • Replaced the parity drive with the new 10TB
      • Turned it back on
      • Started the array
      • Parity rebuild
      The issue I'm seeing after some research is that I never set my old 4TB drive as "unassigned" and then stopped the array. Was that step absolutely necessary? If so, where can I go from here? Thanks in advance!
  13. Thank you very much!! Worked like a charm!
  14. Hey strike, do you know of a good way to downgrade Deluge, or how to manually revert that code? I saw the post that some people mentioned in another part of the forum, but I couldn't find the Python code they were referring to...
  15. It works!!! I don't know if that got changed or how it would have gotten changed but thanks so much for your help!
  16. Quoting binhex: "IPVanish will not use a PIA cert to secure the tunnel :-), dunno where you downloaded that from but i would check again, are you setting VPN_PROV to custom, if you have set it to pia then it will auto copy the pia cert to the openvpn sub folder." binhex, I'm having the same issue as Critical, except I'm not using IPVanish. I've been using your docker for 6+ months and it worked great! Then the other day I went to change which gateway I was using and started getting the same issue as Critical (see logs). For clarification, I'm using pia, not custom. Do you have any suggestions for me?
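
On the syslog attached in item 6: as a rough illustration only, here is a minimal Python sketch of how one might scan an exported syslog for the lines that usually matter in a lockup like this. The file name and the search strings (call traces, hung-task warnings, block-layer errors) are assumptions for the example, not taken from the actual log.

    # Sketch: scan a syslog export for lines that usually point at trouble.
    # Assumptions: the log is a plain-text file, and these substrings (call traces,
    # hung tasks, block-layer errors) are what we care about -- adjust as needed.
    import sys

    SUSPECT = ("call trace", "hung task", "blk_update_request", "i/o error")

    def scan(path):
        with open(path, errors="replace") as log:
            for lineno, line in enumerate(log, 1):
                if any(marker in line.lower() for marker in SUSPECT):
                    print(f"{lineno}: {line.rstrip()}")

    if __name__ == "__main__":
        # Usage: python scan_syslog.py syslog.txt
        scan(sys.argv[1])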
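
And on the 50% guess in item 10: a back-of-the-envelope check, under the assumption (not stated in the thread) that the parity check progress percentage is measured against the 10TB parity drive. If that holds, every 4TB-or-smaller data disk has already been read by the 40% mark, so a hang past 50% would fall where only the parity drive is still being touched.

    # Rough arithmetic only; assumes progress is reported relative to parity size.
    parity_tb = 10
    largest_data_tb = 4

    end_of_data_pct = 100 * largest_data_tb / parity_tb
    print(f"Largest data disk ends at about {end_of_data_pct:.0f}% of the parity check")
    # -> 40%, which is why pinning down exactly when the server drops off matters.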