Everything posted by JorgeB

  1. It should be plug and play; please post the diagnostics.
  2. One molex plug won't power 16 drives; it should power 4. If you mean one PSU cable, then yes, you can run 16 drives from one.
  3. You mean the cable at the other end? Still, only 4 pins are used, and it's not standard for modular PSUs. Ideally you need as many molex plugs as there are on the backplane; avoid using splitters. You should be able to get another modular molex cable for the PSU if needed, just make sure it's the correct one for that PSU; as mentioned, there are no standards, and even PSUs from the same brand can use different cables.
  4. It's for the VMs; either the service was left enabled or another one already exists on cache.
  5. Most likely. Run rsync -av /path/to/source/ /path/to/dest/ and it will only copy any missing data.
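     If you want to preview the transfer first, rsync's dry-run flag works well (the paths are placeholders):

       # Dry run: list what would be copied without writing anything
       rsync -avn /path/to/source/ /path/to/dest/
       # Real run: files already identical at the destination are skipped
       rsync -av /path/to/source/ /path/to/dest/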
  6. Enable the syslog server and post that after a crash.
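     As a sketch, enabling "Mirror syslog to flash" in Settings -> Syslog Server keeps a copy that survives the crash (path assumes the stock flash mount):

       # After rebooting from the crash, read the mirrored log
       tail -n 200 /boot/logs/syslog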
  7. For dockers I recommend using the disk path, it's better for performance; with VMs, like mentioned, it should not make a difference, but it won't hurt.
  8. You can also use /mnt/cache/appdata, assuming they are on cache.
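     For example, a container mapping that uses the cache path directly instead of the user share (container and image names are illustrative):

       # Bypass the /mnt/user FUSE layer by mapping appdata from the cache pool
       docker run -d --name myapp \
         -v /mnt/cache/appdata/myapp:/config \
         myimage:latest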
  9. Click on Settings, then Docker and VM Manager settings, and disable both, then run the mover (below the stop array button).
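     The mover can also be started from a terminal; a minimal sketch, assuming a stock Unraid install where the mover script is on the PATH:

       # With the Docker and VM services disabled, move cached shares to the array
       mover
       # Recent releases also accept an explicit subcommand
       mover start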
  10. It's my understanding that with VMs /mnt/user is basically the same as /mnt/disk_share, but if that helps leave it as /mnt/cache.
  11. You cannot check now since there are two invalid disks with single parity, so they cannot both be emulated. If you're not sure, it's best to enable only disk4 and see if disk9 can be correctly emulated:
     - Tools -> New Config -> Retain current configuration: All -> Apply
     - Check all disks are assigned, assign any missing disk(s) if needed
     - IMPORTANT - Check both "parity is already valid" and "maintenance mode" and start the array (note that the GUI will still show that data on parity disk(s) will be overwritten, this is normal as it doesn't account for the checkbox, but it won't be as long as it's checked)
     - Stop array
     - Unassign disk9
     - Start array (in normal mode now); ideally the emulated disk9 will now mount and contents look correct, if it doesn't you should run a filesystem check on the emulated disk (see the sketch after this list)
     - If the emulated disk mounts and contents look correct, stop the array
     - Re-assign disk9 to rebuild and start the array to begin
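     If a filesystem check is needed, it can be run from a terminal; a minimal sketch, assuming disk9 is XFS (the emulated device name varies with the Unraid release, e.g. /dev/md9 on older versions or /dev/md9p1 on 6.12+):

       # Read-only check of the emulated disk; -n makes no changes
       xfs_repair -n /dev/md9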
  12. Yeah, it was already reported; looks like it's still missing a module. It should be fixed for -rc6, but I'll post here when it's released.
  13. Not seeing any errors after the scrub.
  14. Firmware is a possibility; since the disk is pretty new, you should do it anyway, so nothing to lose. If it doesn't help you can try other things; you can also try connecting the disk to the onboard SATA if it's available, that would also confirm if the disk/HBA combo is the problem.
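     To confirm the firmware version before and after updating, smartctl prints it in the identify section (the device name is an example):

       # Shows model, serial number and firmware version
       smartctl -i /dev/sdX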
  15. What is the array problem? Please post the diagnostics.
  16. Was anything written to disk9 after it got disabled?
  17. I would start with replacing that, then you'll need to force enable disk4 to rebuild disk9, or re-enable both if nothing was written to disk9 after it got disabled.
  18. The system share has files on disk1; move them to cache. The services must be disabled first.
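     One way to do the move from a terminal, after the services are disabled (share name and paths taken from this setup, adjust as needed):

       # Move the system share from disk1 to cache, deleting sources as they copy
       rsync -av --remove-source-files /mnt/disk1/system/ /mnt/cache/system/
       # Remove the empty directory tree left behind on disk1
       find /mnt/disk1/system -type d -empty -delete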
  19. Start here: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=819173
  20. You are having issues with multiple apparently healthy disks, this suggests a power/controller/cable problem, do disks 3, 4 and 9 share anything in common, like a miniSAS cable or power splitter?