Everything posted by itimpi

  1. The key point is that he must not try to do the rsync until Unraid has successfully started up (in particular its User Share system). Perhaps in the past you were just lucky with the timing?
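    If the rsync is being run from a startup script such as the 'go' file, a minimal sketch of such a wait (the share paths and the timeout are placeholders for illustration, not from the original post):

        # Wait up to ~5 minutes for the User Share system to appear before syncing
        for i in $(seq 1 60); do
            [ -d /mnt/user ] && break
            sleep 5
        done
        if [ -d /mnt/user ]; then
            rsync -av /mnt/user/source/ /mnt/user/backup/   # placeholder paths
        else
            echo "User Shares never appeared - skipping rsync" >&2
        fi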
  2. Many people are now creating ZFS or BTRFS 'arrays' (as Unraid pools), which allow you to go well above the 30 drive limit. If I remember correctly you can have up to 60 drives in a pool, and you can have multiple pools. I think it is unlikely that the main Unraid array type will ever go above the 28+2 limit, because that is a lot of drives to be protected by only 2 parity drives, but hopefully in the future you will be able to have more than one of those arrays.
  3. The parity check time is almost completely determined by the size of the parity drive, since the whole of parity must be read. If it takes 24 hours for 4TB on your setup then expect it to take roughly twice that (about 48 hours) for an 8TB parity drive. However your speeds seem a bit slow (24 hours for 4TB is an average of under 50MB/s), so maybe your disk controller is limiting them. If your checks are taking that long, it is worth installing the Parity Check Tuning plugin so you can run the check as increments in idle times (albeit at the expense of an extended elapsed time).
  4. It is due to the fact that you had an unclean shutdown while a manual check was running, and the plugin does not clear the state information for the manual check after the reboot. This should auto-fix itself when the automatic check finishes, but if you want to stop it immediately you can delete the parity.check.tuning.manual file in the plugin's folder on the flash drive (see the sketch below). This issue is already fixed for the next plugin update, but it did not seem urgent enough to push an update out, so I am sitting on that fix. Unless an urgent issue arises I would like to wait until Unraid 6.12.7 (or even the 6.13 beta) comes out to check that they are not going to require changes to the plugin.
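    From a console session that deletion would look something like this (assuming the flash drive is mounted at /boot as standard and the plugin keeps its state files in a folder matching its name):

        # Remove the stale manual-check state file left over from the unclean shutdown
        rm /boot/config/plugins/parity.check.tuning/parity.check.tuning.manual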
  5. Do you have the volume mapping for the Krusader docker container set to allow that level of access on the Unraid host? If not then you are looking at the location inside the docker container.
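    As an illustration only (the container-side path and the image name here are placeholders, not Krusader's documented defaults), a mapping that exposes all User Shares would look something like:

        # Map the host's User Shares into the container so Krusader can see them
        docker run -d --name krusader \
            -v /mnt/user:/media \
            some/krusader-image   # placeholder - use the repository from your template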
  6. I assume you meant 'unmountable'? Handling of unmountable disks is covered here in the online documentation accessible via the Manual link at the bottom of the Unraid GUI. That would likely have fixed the issue. In addition every forum page has a DOCS link at the top and a Documentation link at the bottom. The Unraid OS->Manual section covers most aspects of the current Unraid release.
  7. Unexpected reboots are normally hardware related (e.g. PSU or CPU overheating). The syslog in the diagnostics is the RAM version that starts afresh every time the system is booted. You should enable the syslog server (probably with the option to Mirror to Flash set) to get a syslog that survives a reboot so we can see what leads up to a crash. The mirror to flash option is the easiest to set up, but if you are worried about excessive wear on the flash drive you can put your server's address into the remote server field.
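    With Mirror to Flash enabled, the persisted log can be inspected after a crash with something like the following (the path is my understanding of the default mirror location):

        # The mirrored syslog lives on the flash drive and survives reboots
        tail -n 100 /boot/logs/syslog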
  8. I wonder if there is a problem related to this, then? The OP's problem could be explained by mover only trying to use the first one listed. I do not currently have a suitable setup to test this.
  9. Yes - but there are multiple mount points for the Data share - is that normal as well? I do not use ZFS in the main array so I have no experience of this, and I am not currently inclined to try, bearing in mind that there is a known problem with performance if you do this.
  10. Did not spot anything obvious either! It might be worth installing the File Activity plugin to see if that gives a clue as to what is accessing the drive.
  11. (Re: BTRFS error) BTW: With the Unraid 6.12.x releases you can achieve the same results on a User Share as using a disk share if the share in question is all on one device/pool and you have enabled the Exclusive share option under Settings->Global Share settings.
  12. The whole point of the lost+found folder is that the files in there are ones for which Unraid could not locate the directory entries required to give them their correct path or filename. You may therefore find files or folders with cryptic numeric names, and in such cases it takes a manual operation to work out what their names should be. The Linux 'file' command can help with that by at least telling you the type of any files that are there.
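    A minimal sketch of that (the disk number is an assumption - point it at whichever drive holds the lost+found folder):

        # Report the detected type of every recovered file under lost+found
        find /mnt/disk1/lost+found -type f -exec file {} +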
  13. This is not relevant to the main array, as in that each drive is an independent file system. I also suspect that it is a legacy requirement and nothing like that is needed on modern systems. ZFS in the main array is known to currently have performance issues; this does not apply to ZFS zpools, which are currently the highest performance option with Unraid.
  14. This is not necessary if the share is an Exclusive share. Parity is realtime so any write to the array also causes a write to the parity drive. You may find this section of the online documentation accessible via the Manual link at the bottom of the Unraid GUI useful in understanding how writes to the array are handled. In addition every forum page has a DOCS link at the top and a Documentation link at the bottom. The Unraid OS->Manual section covers most aspects of the current Unraid release.
  15. Those files get created when file system corruption is detected on the flash drive. No idea why it has that modified date though.
  16. That 'df' output shows something strange in that there appear to be lots of mount points of the form /mnt/diskxx/Data. This is something that I would only expect to see for an Exclusive share - and that can only exist on one drive or pool. Have you done anything to create these? The error message can be explained if Unraid is picking the first one in the list (/mnt/disk11/Data), as that has less free space than the Minimum Free Space setting for the Data share. The other anomaly is that none of the physical drives /mnt/diskxx seem to have any space used, so I have no idea where the data is actually being stored - it may be in RAM and thus will not survive a reboot.
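    For reference, those duplicate mount points can be listed with something like:

        # Show every mount point of the form /mnt/diskNN/Data reported by df
        df -h | grep -E '/mnt/disk[0-9]+/Data'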
  17. You should:
      • Disable the Docker service under Settings (it needs to be disabled to stop it keeping files open).
      • Run mover to transfer the contents of the 'system' share to the cache.
      • (optional) When that completes, confirm that there is now no sign of the 'system' share on disk1 (see the sketch after this list).
      • Set the secondary storage option for the 'system' share to None.
      • Enable the option to use Exclusive shares under Settings->Global Share settings to get the performance advantage of bypassing the Unraid Fuse layer.
      • Re-enable the Docker service.
    Once this is done docker will not be keeping disk1 and parity spun up all the time.
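    The optional confirmation in the list above is just a quick check that mover emptied the share from the array, e.g.:

        # After mover finishes this should report 'No such file or directory'
        ls /mnt/disk1/system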
  18. Your ‘system’ share has files on disk1. If this includes the docker.img file then even having the docker service started will keep disk1 and parity spun up as writes to any array drive automatically update parity at the same time.
  19. With the current 6.12.x Unraid releases you can also get the performance benefit of bypassing Fuse if a share is only on a single pool by enabling the Exclusive share option.
  20. It is most likely to be something you have configured in a docker container.
  21. The reason that 32GB is mentioned is that the flash drive needs to be formatted as FAT32, and Windows does not as standard support doing this on drives over 32GB. Also Unraid only needs something like 1-2GB on the flash drive, so larger drives have lots of space that will never be used.
  22. You are likely to get better informed feedback if you attach your system’s diagnostics zip file to your next post in this thread. It is always a good idea when asking questions to supply your diagnostics so we can see details of your system, how you have things configured, and the current syslog.
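    If the GUI is not reachable, the same zip can also be generated from a console session; the resulting file ends up in the logs folder on the flash drive:

        # Writes <server>-diagnostics-<date>.zip to /boot/logs
        diagnostics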
  23. It is the other way around. After adding users on the page you show, you add permissions for users on the page for any particular share.
  24. I think it can sometimes just mean there is a borderline block rather than the flash drive going bad. That is why it is always worth rewriting all the bz* type files, as doing so often fixes such an issue. I have had a flash drive work fine for years after rewriting the bz* files.
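    A sketch of that rewrite done from the server itself (assumes you have extracted the matching Unraid release zip to /tmp/unraid and that the flash is mounted at /boot - take a backup of the flash first):

        # Overwrite the boot files with freshly extracted copies of the same release
        cp /tmp/unraid/bz* /boot/
        sync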
  25. Are you transferring to a User Share or directly to the drive? I ask because the Fuse layer used to support User Shares can impose that sort of speed limit. If you are transferring to a User Share and it is all on one device/pool so that it can become an Exclusive share (which bypasses Fuse), you can get the same performance as transferring directly to the physical device.