Everything posted by itimpi

  1. It is normally easier to simply recreate the docker.img file and then reinstall your containers with their previous settings via Apps->Previous Apps.
  2. Those continual resets will be what is slowing things down. You should stop the data rebuild, fix the reset issue, and then restart the rebuild.
  3. You should post your system's diagnostics zip file in your next post in this thread to get more informed feedback. It is always a good idea to post this if your question might involve us seeing how you have things set up or looking at recent logs.
  4. This will be from the continual resets on disk1 that I mentioned. You should abandon the parity build; fix that problem; then start the parity build again.
  5. The parity drive getting disabled means a write to it failed, so in that sense it was not a different drive that caused it; you may still have a problem unless that was the disk you swapped. The diagnostics show continual resets on disk1, so you may need to check the power and SATA cabling to that drive as well as to the parity drive.
  6. You should post your system's diagnostics zip file in your next post in this thread to get more informed feedback. It is always a good idea to post this if your question might involve us seeing how you have things set up or looking at recent logs.
  7. The diagnostics show that there appears to be corruption both on the cache drive itself and internally within the docker.img file (a loop device that is presumably on the same drive). You need to fix the cache level before attempting to fix the docker.img level. It is probably best to start with a scrub of the cache drive to see how that goes. Instructions for recreating/repopulating the docker.img file are in the online documentation, accessible via the Manual link at the bottom of the Unraid GUI.
  8. It looks like it could be a controller issue, as just before parity 0 started showing write errors you got:

     Dec 20 00:23:03 jlw-unRaid kernel: mpt2sas_cm0: SAS host is non-operational !!!!
     ### [PREVIOUS LINE REPEATED 5 TIMES] ###
     Dec 20 00:23:08 jlw-unRaid kernel: mpt2sas_cm0: _base_fault_reset_work: Running mpt3sas_dead_ioc thread success !!!!

     and there is no SMART information for the parity drive in the diagnostics.
  9. You should try enabling the syslog server to get a syslog that survives a reboot, and post that after the next crash so we can see what led up to it. Make sure you either enable the mirror-to-flash option or put the Unraid server's IP address into the Remote Server field of the syslog server settings.
  10. You could try searching Amazon for “SilverStone SST-PP07-BTSBR - 30cm Molex to 4x SATA Sleeved Extension Cable”.
  11. You can get exactly the same problem with SATA -> SATA if the wires go in vertically rather than horizontally.
  12. Carefully check those molex->SATA cables. I much prefer ones that have the cable crimped across the top of the SATA connector - the ones going in vertically, like the cable you show, are more likely to have manufacturing defects that can short two wires together, which can then damage drives.
  13. This is the first time I have ever heard of these exact symptoms, so it is not at all clear why you are seeing them or how to resolve them.
  14. You need to run without -n (the no-modify flag) for anything to be repaired, and if it asks for it, add -L. After that, restart the array in normal mode.
  15. The connectors can definitely work loose. In my experience a SATA connector from a cable on the power supply cannot be reliably split more than 2 ways, whereas a molex connector can normally be split 4 ways without issues.
  16. That looks like a Macvlan-related crash. If you are still using Macvlan for docker networking, have you disabled bridging for eth0 under Settings->Network Settings and rebooted to make sure it is activated?
  17. The release notes say that you need to disable bridging if you do not want Macvlan crashes to eventually crash the server when using Macvlan. Why, I have no idea.
  18. You can always pass in the physical path to the folder on the pool.
  19. In which case, as mentioned in the release notes, you need to ensure that bridging is disabled on eth0.
  20. Unfortunately passing memtest is not a definitive test of memory (whereas failing is).
  21. No ETA, and Limetech never give dates - just a ‘when it is ready’ statement. Could be some time, as it has not yet even started public beta.
  22. Not quite. With the settings shown you will only have the syslog server in listening mode, with nothing being written. As mentioned in the link, you need to put the IP of your Unraid server into the Remote syslog Server field to get the server to log to itself. There is also the mirror-to-flash field if you want it to also (or instead) log to the logs folder on the flash drive.
  23. A drive is disabled and stops being used when a write to it fails. To get it used again you are going to have to rebuild it from parity using the process in the link.
  24. Depending on how the container tries to do the move, it is possible you are falling foul of the behavior described in the Caution on this page of the online documentation, accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page.
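The cache scrub suggested in item 7 can also be run from the console rather than the GUI. A minimal sketch, assuming a btrfs pool mounted at the default /mnt/cache path (adjust the pool name if yours differs):

```shell
# Hedged sketch: scrub a btrfs cache pool from the Unraid console.
# Assumes the pool is mounted at /mnt/cache (the default name) - an assumption,
# substitute your own pool path.
POOL=/mnt/cache

if mountpoint -q "$POOL"; then
    btrfs scrub start -B "$POOL"   # -B: run in the foreground and wait for completion
    btrfs scrub status "$POOL"     # report corrected and uncorrectable error counts
else
    echo "$POOL is not mounted - start the array first"
fi
```

If the scrub reports uncorrectable errors, the affected files need to be restored from backup before moving on to recreating docker.img.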
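The repair sequence described in item 14 can be sketched as console commands. This assumes the disk being repaired is XFS-formatted disk1, exposed as /dev/md1 with the array started in Maintenance mode; substitute your own disk number:

```shell
# Hedged sketch: xfs_repair on an Unraid array disk, assuming disk1 -> /dev/md1
# (an assumption - run from the console with the array in Maintenance mode).
DEV=/dev/md1

if [ -b "$DEV" ]; then
    xfs_repair -n "$DEV"   # -n: check only, modifies nothing
    xfs_repair "$DEV"      # without -n: actually write the repairs
    # Only if xfs_repair asks you to zero the log:
    # xfs_repair -L "$DEV"
else
    echo "$DEV not found - check the disk number and that the array is in Maintenance mode"
fi
```

Repairing the md device rather than the raw sdX device keeps parity in sync with the repairs. Afterwards, stop the array and restart it in normal mode.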