
trurl

Moderators · 44,362 posts · 137 days won

Everything posted by trurl

  1. Some of them are mounted. It looks like the disks that aren't are on a RAID controller or something. Is that the same controller they were on when you formatted them?
  2. The error says you may not add new disk(s) and remove existing disk(s) at the same time. Did you remove any disks?
  3. Doesn't look like it from those last diagnostics you posted.
  4. Was the repaired emulated disk mountable?
  5. I have merged your threads. Please don't create multiple threads for the same problem.
  6. Go to Settings - Disk Settings and disable autostart. Then when you reboot it won't start anything and you should be in a position where you can proceed with filesystem repair.
  7. Do you have anything accessing user shares? Dockers, VMs, other computers?
  8. SMART for disk4 looks OK. Syslog starts with the disk already disabled and unmountable, so unless you have syslog from before you rebooted, there is nothing else to go on.
  9. DO NOT FORMAT!!!! The disk is disabled, and the emulated disk is unmountable, so there will have to be both filesystem repair and rebuild. Try repairing the emulated filesystem first then if it is mountable you can rebuild. There was a video about filesystem repair on that other thread you replied on. Be sure to capture the output during filesystem repair so you can post it for further advice.
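For reference, a minimal sketch of the repair step described above, assuming the unmountable disk is disk1 so the parity-emulated device is /dev/md1 (the disk number is hypothetical; substitute your own, and run this from a terminal with the array started in maintenance mode):

```shell
# On Unraid, always repair the md device (the parity-emulated disk),
# never the raw sdX device, or parity will be invalidated.
# /dev/md1 below corresponds to disk1 and is only an example.

# Dry run first: -n checks the filesystem and reports problems
# without modifying anything. Capture this output to post for advice.
xfs_repair -n /dev/md1

# If the dry-run output looks reasonable, run the actual repair:
xfs_repair /dev/md1
```

If xfs_repair asks for -L (zeroing the log), understand that it can lose recent metadata changes; post the output and ask before proceeding.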
  10. If you want us to check post diagnostics.
  11. If the webUI is working again, no need to know, since the recommended way to affect this setting is not by editing the file, but by going to Settings - VM Manager.
  12. You can go directly to the correct support thread for any of your dockers by simply clicking on its icon in the Unraid webUI and selecting Support.
  13. Why are your appdata, domains, and system shares set to cache-yes? They should be cache-prefer, and once they have moved to cache, cache-only would be even better. This probably isn't the cause of your issues, but it does cost performance: files on the array are slowed by parity writes, and since these shares are always in use they keep array disks spinning. Did you try deleting the appdata for these dockers? Of course, that would mean starting them from scratch.
  14. It was several weeks since your previous post. Was everything working fine until this update? Do you have any unmountable disks? Are you booting from a USB2 port? (recommended) Go to Tools - Diagnostics and attach the complete diagnostics zip file to your NEXT post.
  15. Go to Tools - Diagnostics and attach the complete diagnostics zip file to your NEXT post.
  16. Mover is the way things get moved based on the use cache settings. Mover can be invoked manually. It also runs on a schedule; the default is daily in the middle of the night. Mover is intended for idle time. There is also a plugin to invoke mover based on how full cache is.
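A sketch of invoking mover by hand from a terminal, assuming a stock install where the script lives at /usr/local/sbin/mover (that path is an assumption; check your version):

```shell
# Run mover immediately instead of waiting for the schedule
# (equivalent to pressing Move Now on the Main page):
/usr/local/sbin/mover

# The schedule itself (default: daily, middle of the night) is set
# in Settings - Scheduler in the webUI, not by editing cron directly.
```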
  17. This is not exactly how it works. Cache-prefer overflows new writes to the array if cache doesn't have Minimum Free (in Global Share Settings). Mover never moves cache-prefer to the array, it only moves from array to cache when cache has room.
  18. The output comes from the Linux xfs_repair utility and isn't usually that large. The fact that it is so large is NOT a good sign. The only idea I have at this point is to unassign the disk so it is emulated by parity again and see if that can be repaired any better. Let me drag @johnnie.black into this thread and see if he has any ideas.
  19. Possibly there were some details about what happened that weren't fully explained or understood.
  20. All credit to @SpaceInvaderOne for all of the great videos. You should check out his whole YouTube channel, lots of great Unraid tutorials there. However, I am the one that posted the "pointer to the video".
  21. There are dockers for that. Unraid assigns drives by their serial number so port doesn't matter.
  22. I don't see any flash issues in those. Another possibility is a browser problem. Clear browser cache and if you have any adblockers whitelist your server.
  23. According to your diagnostics, the share anonymized as b---s has all its files on disk1, and the share anonymized as N-------d has some files on cache and some on disk1. Possibly those dockers are accessing those shares enough to keep the disk spunup.