
trurl
Moderator · 44,078 posts · 137 days won

Everything posted by trurl

  1. It tells you where it is stored. This is the specific line: status="20 0 * * *" http://corntab.com/?c=20_0_*_*_*
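     As a sketch of how that schedule string breaks down (assuming the standard five-field cron syntax: minute, hour, day-of-month, month, day-of-week):

     ```python
     # Decode a standard five-field cron schedule like the one above.
     # "20 0 * * *" means minute 20, hour 0, every day -- i.e. daily at 00:20.
     FIELDS = ["minute", "hour", "day-of-month", "month", "day-of-week"]

     def describe_cron(expr: str) -> dict:
         """Map each cron field name to its value ('*' means 'every')."""
         values = expr.split()
         assert len(values) == len(FIELDS), "expected 5 cron fields"
         return dict(zip(FIELDS, values))

     schedule = describe_cron("20 0 * * *")
     print(schedule["minute"], schedule["hour"])  # -> 20 0
     ```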
  2. If that enclosure is USB, you may have other problems besides just speed. To keep the array in sync, you must have permanent connections to the disks, and USB often fails on this score. When a disk disconnects, it has to be rebuilt, and with USB that can be a frequent problem. If you don't use parity, there is nothing to keep in sync, but of course no redundancy either.
  3. As long as the disks stay the same everything important about your configuration is on flash and should work on different hardware since Unraid figures out the hardware it is on each time it boots.
  4. Nobody else has reported this so I think it is most likely you. My dockers can talk to each other. I guess after you post diagnostics we will know which 6.8 you are running.
  5. Looks like cache corruption has caused docker.img corruption. If you think you have power issues you should deal with those before trying to do anything else.
  6. Wrong. Should be Prefer. But we can get to that.
  7. Don't know about Mac but unless that external enclosure has SAS/SATA connections to the computer for each disk you may have some performance and stability issues.
  8. But I am going to tell you that domains and system shares should stay on cache.
  9. The attachment seems not to have gone through. If any of the files for a user share are on cache, and cache has no redundancy, you will get that warning. I have no redundancy on one of my pools (appdata, etc.) but it is backed up. Might as well keep the discussion here. Let me know when you have time.
  10. Unraid IS NOT RAID. Each disk is an independent filesystem, and each disk can be read independently on any Linux system. Each file exists completely on a single disk; folders can span disks (this is called User Shares). Reads from the parity array are at the speed of the single disk which contains the file. Writes to the parity array are somewhat slower due to realtime parity updates. Unraid is not as fast as RAID, but it has other benefits. Since each disk is an independent filesystem:
      - if you lose more than parity can recover, you haven't lost everything, because all good disks still have their complete data
      - you can use different sized disks in the parity array
      - you can easily add more disks to the array without rebuilding the whole array
      - you can easily replace disks with larger disks
      As mentioned, highwater allocation is the default, and for good reason. It is a compromise between eventually using all disks and constantly switching between disks just because one disk temporarily has more free space. Unraid does not automatically move files between array disks, so files already written do not get spread around. When the allocation method says disk1 is done, it will begin writing new files to another disk. Since all your disks are 12TB, highwater is easy to figure out: it is half that, or 6TB. When 6TB have been written to disk1, the next disk in line will be chosen until it has reached highwater, and so on.
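      A minimal sketch of the highwater idea described above, with hypothetical numbers (the real Unraid logic also considers split level, include/exclude settings, and minimum free space):

      ```python
      # Simplified high-water allocation: a disk qualifies for new files while
      # its free space is above the high-water mark (half the largest disk
      # size); once every disk has passed the mark, the mark halves again.

      def pick_disk(free_tb: list[float], largest_tb: float = 12.0) -> int:
          """Return the index of the disk a new file would be written to."""
          mark = largest_tb / 2
          while mark >= 0.1:  # keep halving until some disk qualifies
              for i, free in enumerate(free_tb):
                  if free > mark:
                      return i
              mark /= 2
          # all disks nearly full: fall back to the most free space
          return max(range(len(free_tb)), key=lambda i: free_tb[i])

      # Three empty 12TB disks: disk1 (index 0) is chosen first...
      print(pick_disk([12.0, 12.0, 12.0]))  # -> 0
      # ...and once 6TB has been written to it, disk2 takes over.
      print(pick_disk([6.0, 12.0, 12.0]))   # -> 1
      ```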
  11. I am guessing your use of the word "behind" is exactly opposite your meaning, but just in case. Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.
  12. This is the way Tom described it in some post somewhere years ago, good luck on coming up with the best search terms to find it😉 I don't know what they say on reddit.
  13. This is unlikely to help, and possibly make things worse. Just "pop it in" sounds like another of those ideas lacking in details. You could mount it as an Unassigned Device and possibly get appdata from it, but the docker templates are on flash, and if you mucked those up you would have to go to flash backup to get them.
  14. Parity is invalid until it is built. I assume you are doing the initial parity build. If not, go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.
  15. What are you trying to accomplish? Maybe another plugin would work for you, such as Dynamix Active Streams or Open Files.
  16. You can map the appdata for a specific application to the actual pool instead of to the appdata user share. So, you could use /mnt/cache/appdata in the docker mappings for some apps, and /mnt/apps/appdata in the docker mappings for other apps. And as long as appdata user share is cache-only, it will be ignored by mover whether the appdata is on cache or apps.
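      A sketch of the idea, using hypothetical app names and container paths: the point is that each app's host path names a specific pool (/mnt/cache or /mnt/apps) rather than going through the /mnt/user/appdata user share.

      ```python
      # Hypothetical docker host-path mappings, each app pinned to one pool.
      appdata_mappings = {
          "appA": ("/mnt/cache/appdata/appA", "/config"),  # on the cache pool
          "appB": ("/mnt/apps/appdata/appB", "/config"),   # on the apps pool
      }

      # None of the host paths go through the user share, so mover has no
      # reason to touch them as long as the appdata share stays cache-only.
      for app, (host_path, _container_path) in appdata_mappings.items():
          assert not host_path.startswith("/mnt/user/"), app
      print("all mappings bypass /mnt/user")
      ```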
  17. Note that SSDs in the parity array cannot be trimmed, and can only be written at parity speed.
  18. You may just need to increase the number of cache slots. Or you have chosen the wrong filesystem; a cache pool with multiple devices must be btrfs. Stop the array and post another screenshot showing Cache Devices.
  19. No, basically the same for as long as I've been here. Might be something here for you.