Everything posted by trurl

  1. Is your SAS controller configured for non-RAID operation? It isn't passing SMART reports or disk serial numbers, so it isn't providing enough information for Unraid to detect disk problems.
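     A quick way to check from the console (just a sketch; /dev/sdb is only an example, substitute one of your attached drives):

         smartctl -a /dev/sdb
         smartctl -d sat -a /dev/sdb    # some SAS HBAs need the sat device type

     If neither command returns a serial number and SMART attributes, the controller is hiding the drives from Unraid.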
  2. What is the purpose of the user shares anonymized as 'H--s' and 'P--X'? They are cache-only but have files on the array.
  3. Go to Settings - Docker, disable Docker, then delete the docker image. Set your domains and system shares to cache-prefer. Run mover and wait for it to complete. Then post new diagnostics.
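     Once mover finishes, a quick console check (a sketch, assuming the standard share names) should produce no output if everything made it to cache:

         find /mnt/disk*/domains /mnt/disk*/system -type f 2>/dev/null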
  4. Your appdata, domains, and system shares have data on the array. Ideally those shares would only have data on cache, so that docker and VMs would not keep array disks spinning, and so that docker and VM performance would not be impacted by slower parity writes. Do you actually have any VMs? Doesn't look like it since there is no libvirt image mounted.
  5. Go to Tools - Diagnostics and attach the complete diagnostics zip file to your next post.
  6. If your dockers are set up to use User Shares, then it doesn't matter which disk anything is on. User Shares are simply the aggregate of all top-level folders on cache and the array; they are how Unraid allows folders to span disks. You can move appdata to any disk, as long as your dockers are set up to use the appdata user share instead of the corresponding folders on specific disks.
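     To illustrate (hypothetical folder and file names), a top-level Media folder on two different disks shows up as a single Media user share:

         ls /mnt/disk1/Media    # movie1.mkv
         ls /mnt/disk2/Media    # movie2.mkv
         ls /mnt/user/Media     # movie1.mkv  movie2.mkv  (the merged view)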
  7. These are definitely not backups, and dockers aren't tied to particular disks unless you specify specific disks in the volume mappings (not recommended). Normally you would configure dockers to only work with user shares, especially if you don't know why you would do it differently. Since you don't have parity, it's likely you also don't have cache, which is where the appdata, system, and domains shares would ideally reside. If you really want to go down this road instead of adding ports, it is going to be a little more complicated than just swapping disks. You should probably disable dockers (and VMs if you have any) until you get all the data moved. I think you would have to New Config to get it to accept the replacement disk, though I'm not entirely sure how it behaves when you don't have any parity to rebuild from, or indeed any parity that needs to be resynced when you change disk assignments.
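     For example (plex and /config here are just hypothetical names), the difference between a disk-specific mapping and a user share mapping looks like this:

         /mnt/disk1/appdata/plex  ->  /config    # tied to disk1, breaks if the files move
         /mnt/user/appdata/plex   ->  /config    # user share, works wherever the files live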
  8. You have an SSD in the parity array. SSDs are not recommended in the parity array: they can't be trimmed, they can't be written any faster than parity, and there is some question whether some implementations might invalidate parity. Your cache drive is pretty full. I am guessing you filled it up somehow and corrupted the docker and libvirt images; both are corrupt. Your appdata and domains shares have some files on the array instead of on cache where they belong. But your other shares aren't on cache or set to use cache now, so I'm not sure what is taking all that space. From the console, what do you get with this? ls -lah /mnt/cache
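     If that listing doesn't make it obvious, this (just a sketch) sums usage per top-level folder on cache:

         du -sh /mnt/cache/*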
  9. Do you mean a parity check? Most people only run a parity check once per month. A parity check isn't needed to maintain parity; parity is maintained in real time whenever a data drive is written. I don't see what difference it makes whether a problem with an unused disk is noticed during the time it is unused. As soon as it is accessed you would get the notification.
  10. Do you have an attached keyboard and monitor?
  11. Let's just deal with this for now. You are thinking about this all wrong. Just replace the 500GB disk with the 4TB disk and let it rebuild. No need to copy anything.
  12. Those diagnostics seem to indicate your server has the IP 192.168.0.25.
  13. Go to Tools - Diagnostics and attach the complete diagnostics zip file to your next post.
  14. After looking at your syslog, it appears it is correcting the same sectors. Bad memory will typically result in random sectors being corrected, since the specific memory accessed during disk reading will not be deterministic. I suspect a controller issue, or an actual issue with one or more disks causing the same sectors to return bad data. I didn't notice any issues with the SMART reports on any of the array disks. You might try an Extended SMART test for each of them. Click on a disk to get to its page, where you will find the Self-Test option.
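      The same test can also be started from the console if you prefer (substitute your actual device names; /dev/sdb is only an example):

          smartctl -t long /dev/sdb    # start the extended self-test
          smartctl -a /dev/sdb         # progress shows under "Self-test execution status"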
  15. The drive, or perhaps more precisely, the "slot" the drive is assigned to, will be disabled by Unraid when a write to the drive fails. This could be an actual write, or possibly a write-back from the parity calculation when a read to it fails. After it is disabled, Unraid will emulate it from the parity calculation until it is rebuilt. Unraid is NOT RAID: each disk is an independent filesystem, and not every drive is required in order to access the data on the array. Until a read or write is requested on that drive, Unraid will not notice anything, since it will not try to access the disk. Possibly the disk was even already spun down when you pulled it, since Unraid will spin down unused disks after a (configurable) time. A disabled disk is shown with a red X next to it, and if you have Notifications set up correctly, you will also get an alert. You should set up Notifications to alert you immediately by email or another agent as soon as a problem is detected. If you only have single parity, then when a disk is disabled you no longer have any redundancy, though the data for that disk can still be read and written using the parity calculation. Dual parity allows 2 disabled disks before redundancy is lost.
  16. Have you tried to access files on the missing drive?
  17. I doubt any drivers have been removed, so if it worked before...
  18. User Scripts is probably the best approach for handling all the variety of configurations people might need.
  19. You probably have your User Shares set to cache-prefer, which means Unraid tries to keep their files on cache. Disk assignments are stored in config/super.dat on the flash drive.
  20. This indicates it isn't getting DHCP. Check network connections, plugs, cables, sockets.
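      From the server console you can confirm with a standard Linux command (eth0 is Unraid's usual interface name; yours may differ if bridging is enabled):

          ip addr show eth0    # no "inet" line means no address was obtained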
  21. I am not following you there, unless this question has something to do with ESXi. I didn't get the impression that you were actually trying to run Unraid under ESXi. Is that what this is all about?
  22. I would. Much larger cache pool that way, which gives you a lot more flexibility in how you can use cache.