Everything posted by trurl

  1. Do you expect to need all that capacity soon? I always say each additional disk is an additional point of failure. With so many data disks I definitely recommend a 2nd parity disk.

     Depends on how you use cache. You already have appdata on cache; that will be the largest of the 3 "cache-prefer" shares. The system share shouldn't take much space: the docker and libvirt images are set to 20G and 1G respectively, and that is all that should be needed. The domains share depends on how much space you need for your VM OS vdisks.

     The other shares shouldn't keep things on cache, and you might even consider not caching some of them. If you have large data transfers to do, cache can just get in the way, since those files have to be moved to the array eventually, and mover works better during idle time. You can't move to the slower array as fast as you can write to the faster cache, so for large transfers, don't cache.

     With 224G cache, that should be enough unless you expect to write more than 200G per day on a continual basis, and if you do, you can just not cache some of it. Just consider when the faster cache writes are really beneficial and only cache those shares. For example, I have a share for backups of my PC. Those are scheduled unattended processes and I don't care how fast they get written since I am asleep, so they go directly to the array.
  2. Since your first 3 disks are much larger than the others, I expect them to be used when you set a share to Most Free, since they obviously have the most free space.

     I personally recommend the default setting of Highwater. With so much free space on your first 3 disks those would still get used first, but disk1 would be used mostly until it gets half full. Most Free can make Unraid constantly switch between disks when they have similar amounts of free space. This slows things down and keeps more disks spinning. Highwater is the default for good reason: it is a compromise that spreads files to multiple drives "eventually" without constantly switching between disks.

     Also noticed you have your system and domains shares on the array. It is better for these (and appdata) to be entirely on cache and stay on cache, so your dockers and VMs will perform better and not keep array disks spinning. Since mover can't move open files, there are several steps required to get these moved to cache. Let me know if you want to work on that.
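A simplified model of Highwater (an approximation for illustration based on the behavior described above, not Unraid's actual source) shows why disk1 absorbs the early writes: the high-water mark starts at half the largest disk, each write goes to the lowest-numbered disk whose free space is still above the mark, and the mark halves when no disk qualifies.

```python
# Simplified model of Unraid's High-water allocation (an assumption
# for illustration, not Unraid's actual code).
def highwater_pick(free, mark):
    """Return (disk index, possibly-lowered mark) for the next write."""
    while mark > 0:
        for i, f in enumerate(free):
            if f > mark:          # first disk still above the mark wins
                return i, mark
        mark //= 2                # no disk qualifies: halve the mark
    return None, 0

def simulate(sizes, chunk, writes):
    free = list(sizes)
    mark = max(sizes) // 2        # mark starts at half the largest disk
    picks = []
    for _ in range(writes):
        i, mark = highwater_pick(free, mark)
        if i is None:
            break
        free[i] -= chunk
        picks.append(i)
    return picks, free

# Three large disks and two small ones, writing 1 TB at a time:
picks, free = simulate([12, 12, 12, 4, 4], 1, 10)
print(picks)  # the first disk takes every write until it is half full
```

Note how the model never ping-pongs between the equally-free disks the way Most Free would; it commits to one disk until that disk crosses the mark.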
  3. What exactly are you referring to when you say "It" here? Each user share has its own setting for allocation method. Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.
  4. Looks like your parity disk isn't responding. Do you have another disk you can use?
  5. Preclear is only for testing new disks or disks you think may have some problems. Unraid doesn't even need a clear disk for a rebuild. Nothing obvious in your syslog. Are you seeing anything in the Errors column on Main? Are you trying to read/write lots of data during rebuild? That would slow things down. How are your disks connected? Are any disks showing SMART warnings on the Dashboard?
  6. Noticed some other things you have done wrong. Why do you have an nvme disk in the array? SSDs are not recommended for the parity array for several very good reasons. Also, your cache pool has 2 SSDs of very different sizes. The default raid1 configuration will give a cache pool mirror with usable capacity equal to only the smaller of these disks. Probably the simplest approach is to start over. Do you have anything important on any of these disks?
  7. CRC errors are also connection problems. Shut down, check all connections, SATA and power, including any splitters. Reboot and post new diagnostics.
  8. Looks like you're having connection problems on disk3. Are you seeing anything in the Errors column on Main - Array Devices?
  9. Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.
  10. If the files are on the disks in a folder, then they should be in the user share with the same name as the folder, so I don't think I fully understand what you are seeing. Maybe some screenshots would help better explain what you mean.
  11. How do you know they are present across the 4 data disks? How do you have the security set on the user share?
  12. Getting the docker and libvirt images onto cache and off the array may help, but should be done in any case. In fact, you don't even need libvirt since you aren't doing VMs. Go to Settings - VM Manager and disable VMs, then delete the libvirt image from that same page. Go to Settings - Docker, disable dockers and delete the docker image from that same page. Go to Main - Array Operation and Move Now. Wait for mover to complete and post new diagnostics.
  13. It already tells you if your shares have files on cache/array inconsistent with the settings for the share. Since FCP isn't running when mover runs, how could it know whether mover did what you wanted?
  14. I recommend NOT increasing docker image beyond 20G unless advised to do so. Filling docker image is almost always caused by misconfigured containers. The most common cause is an application writing to a path that isn't mapped. In this case, losing cache was the cause of losing dockers, since cache is where they are normally configured to be. This hardware problem needs to be resolved before attempting any fix.
  15. Seems unlikely; lots of people are running plex without these issues. Post the docker run command for your plex container, as explained at the very first link in the Docker FAQ:
  16. Your cache is full because you have set all your shares to cache-prefer. This setting means: put files on cache if there is room, and move them TO cache when there is room. There will be a few steps to get this straight, with diagnostic checks along the way to see how things are progressing.

      Set your appdata, domains, and system shares to cache-only. This will make mover ignore them until we can get room on cache. Set all other shares to cache-yes. This will make mover move them from cache to the array. Go to Main - Array Operation and Move Now. Wait for it to complete, then post new diagnostics.
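To keep the four "Use cache" settings straight, here is a rough summary of what mover does for each one, matching the explanation above (a sketch of the behavior as described, not Unraid documentation):

```python
# Rough summary of mover's behavior for each "Use cache" setting
# (a sketch per the explanation above, not Unraid source or docs).
def mover_action(use_cache):
    actions = {
        "no":     "mover ignores the share; new files go straight to the array",
        "yes":    "new files land on cache; mover moves them cache -> array",
        "prefer": "mover moves files array -> cache whenever there is room",
        "only":   "files live on cache only; mover never touches them",
    }
    return actions[use_cache]

print(mover_action("prefer"))
```

This is why the temporary cache-only setting above parks appdata/domains/system, while cache-yes drains everything else to the array.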
  17. Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.
  18. Disabled drives are what it rebuilds. It won't attempt to rebuild a disk that isn't missing or disabled. To rebuild to the same disk:

      1. Stop the array.
      2. Unassign the disabled disk.
      3. Start the array with the disabled disk unassigned.
      4. Stop the array.
      5. Reassign the disabled disk.
      6. Start the array to begin the rebuild.

      The method is the same whether rebuilding data or parity; the parity calculation is the same and gives the data for the missing disk. Since you have dual parity, you can do both at once.
  19. You can rebuild disk7 even without parity2. Or you can shrink the array, but I would skip the clear-drive method and just rebuild parity; I think there have been problems with the clear-drive script lately.
  20. If the disks are OK then you can rebuild to the same disks. Can't tell without SMART reports from those disks.
  21. The whole point of dual parity is to allow 2 disabled disks to be rebuilt. Since you have dual parity, you can just rebuild both disk7 and parity2. Neither disk is currently reporting SMART. Check all connections, SATA and power, including splitters, then post new diagnostics.
  22. All of these symptoms are indicating your server can't reach the internet. From the server command line, can you ping github.com?
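On the server console the simplest check is just `ping github.com`. A DNS lookup failing is one of the most common ways "can't reach the internet" shows up, and that part can be checked from a short script too (an illustration, not part of the original advice):

```python
import socket

# Quick name-resolution check: if the server cannot resolve
# github.com, plugin/container updates and similar features will fail.
# (On the Unraid command line, `ping github.com` tests the same thing
# plus actual reachability.)
def can_resolve(host):
    try:
        socket.gethostbyname(host)
        return True
    except OSError:
        return False

print(can_resolve("github.com"))
```

If resolution fails, check the DNS server setting under Settings - Network Settings before suspecting anything else.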
  23. Yes. Probably you won't need the flash backup in this case, but good to have.