About 905jay


  1. Left the parity to run overnight, and it is at the exact same place it was at yesterday late afternoon: 98.7%, 3 hours 50 minutes remaining, for at least the last 12 hours. I've given up on the process now. I've aborted and will install the LSI card and see if that helps at all. i5unraid-diagnostics-20190829-1236.zip
  2. @Frank1940 thanks for the follow-up on this. Yes, initially the server was rebooting every night for an unknown reason. That seems to have been fixed by not having the docker and VM services running. The parity is taking forever to run; that is the current concern. Due to the random freezing from before (unresolved), the parity was totally messed up. I see what you mean in terms of the reads and writes to the disks in the screenshot, but I'm unable to isolate what is causing that on those disks only. I thought the point of unRAID was to spread it out across all disks (read/write/data), not necessarily evenly, but better than this. Some people point to the fact that 6 disks are on the Intel controller and 2 are on the Marvell controller on the motherboard. I am holding in my hand an LSI card that arrived today. Once the parity rebuild is complete, I intend to install this card and use it exclusively. Do you think this parity will ever rebuild? Or am I best to stop this now, install the card, and rebuild the parity again?
  3. I have no explanation for that other than it is how it was shown to me, and how it was recommended that I set it up. But it seems like there are 10 people, who all know what they are talking about, giving me 10 variations on doing everything under the sun. Typical internet stuff... everyone is an expert on everything behind the cloak of an avatar on a forum. So far a 6TB parity drive sync has taken approx. 2 days and still isn't close to being finished. At 11:30pm last night it showed 93%, 1 hour to complete (approx.). This morning it shows 93% and a day left to complete (the transfer rate went down from 100MB/s to about 5MB/s). From 8am this morning to now, it has moved 3% (give or take).
  4. Mover runs hourly; however, there is nothing running at this time in terms of services that I am aware of. The docker service is disabled, as is the VM service.
  5. Hey @trurl and @johnnie.black, the parity was running for about 24 hours. I started it Monday morning and it ran into Tuesday morning. I decided to stop it yesterday morning and restart it because it was stuck at about 90% and showing 900+ days to complete. I figured something was wrong, and restarted it yesterday morning. At 11:30pm last night it showed 93%, 1 hour to complete (approx.). This morning it shows 93% and a day left to complete (the transfer rate went down from 100MB/s to about 5MB/s). Can anyone help me isolate what the issue is here? It's been stable otherwise, and hasn't crashed on me at all, but that problem has led to this problem. Diagnostics and syslog attached: i5unraid-diagnostics-20190828-1309.zip
  6. @johnnie.black @trurl thanks very much for your input. I have the docker and VM services disabled again until the rebuild is complete, and I implemented the change that @johnnie.black recommended. I will report back here once that is complete.
  7. @trurl would you be able to help me figure out if it is a container (which one, or which combination of containers) or a VM that is causing all these issues that I'm seeing? I've replaced the memory in the server (2x8GB + 2x4GB), formatted the parity disk, and am rebuilding the parity now as we speak. I feel that perhaps I've misconfigured something, but I'm unsure of what it may be, and don't want to constantly have to go through this parity rebuild due to sudden freezing and power-off situations. Is it possible that somewhere I have over-provisioned memory or CPU resources to a docker (or multiple dockers)? Is there anything else that I can provide the community that may be able to point to where the shortcomings are with this system?
  8. Thanks for that info, I have made the change.
  9. @trurl when I get my LSI card (hopefully next week) and I plug it in and boot the server, do I have to completely reconfigure all my disks? Is this a major undertaking on my part? Or will unRAID just not give a shit, as long as it can see all the disks?
  10. BIOS is fully up to date. Yes, defaults in BIOS are being used, no overclocking. Cooling is a Noctua CPU cooler, brand new. I have 2 disks hooked up to the Marvell; that can't be changed until I get my LSI Logic card delivered, so I have to wait on that. I have decided that it may be best to remove the parity disk from the array, preclear it, and add it back in, to see if that at least helps that situation.
  11. Server booted in safe mode, with the docker and VM services disabled, and it's still standing strong. Some observations: The parity rebuild estimate has been increasing, from 20 days, to 190 days, to now over 900 days. Would it make sense for me to format and redo parity from scratch? The parity disk was brand new for this server build 2 weeks ago. It appears that without docker containers and VM services running (and in safe mode) the server is stable for now. Would you agree that this isn't a hardware issue per se, but rather misconfigured software (somehow/somewhere)? Diagnostics file is attached: i5unraid-diagnostics-20190826-1229.zip
  12. So... booted into safe mode and noticed the docker and VM services start automatically. Shouldn't safe mode mean that none of these services would automatically start, so as to enable troubleshooting of issues? I've disabled those services and started the array, so docker and VMs aren't running. Also, I am mirroring the syslog to flash to see if something gets caught that perhaps the syslog server itself is missing.
  13. Memtest has been running for 4.5 hours with no errors.
  14. Thanks again @trurl, did you happen to see anything that may offer a clue? I'm grasping at straws at this point.
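For what it's worth, the stalled-ETA numbers quoted in the posts above can be sanity-checked with a bit of arithmetic: remaining data divided by the observed sync rate. A minimal shell sketch, using the figures from the posts (6 TB parity disk, 93% complete, 5 MB/s); decimal GB/MB are assumed, as drive vendors use them:

```shell
# Sanity check of the parity-sync ETA: remaining data / observed rate.
# The figures below are taken from the forum posts; decimal units assumed.
size_gb=6000     # 6 TB parity disk, in GB
done_pct=93      # progress shown in the unRAID GUI
rate_mb=5        # observed sync rate, in MB/s

# Data still to be synced, and hours remaining at the current rate.
remaining_gb=$(( size_gb * (100 - done_pct) / 100 ))
eta_hours=$(( remaining_gb * 1000 / rate_mb / 3600 ))
echo "${remaining_gb} GB left, roughly ${eta_hours} hours at ${rate_mb} MB/s"
```

That works out to roughly a day for the last 7%, which matches the GUI's "a day left"; at the earlier 100 MB/s the same remaining data would take about an hour. So the estimate itself is consistent, and the collapse in transfer rate is the thing to chase, which fits the suspects already raised in the thread (the drives on the Marvell controller, or a problem disk).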