Everything posted by itimpi

  1. @estrim Hopefully the release I have just pushed will fix both of the points you have raised. Please let me know if this is not the case or if you spot any other anomaly.
  2. FYI: There is no built-in version of Tailscale - you must be using either the docker or plugin version.
  3. That looks like a bug: since you have no increment Window set it should resume when mover finishes. I think I can see what is going wrong, as it looks like I do not check whether running in increments is set at that point, so I will check that I can recreate your issue, and issue a fix if I am right about the cause.
  4. Since the log shows it waking up from sleep at around 10:00 am it must have gone to sleep, so any issue with it not powering off properly is related to the S3 Sleep plugin (which seems to have been behaving erratically on the 6.12.x release).
  5. I'll look at changing it to check for 'bin/mover' which is probably generic enough that it will be found whatever the location of the binary and should not fall foul of a Use Case like yours.
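     For illustration only (the exact command is just a sketch), a test along the lines of ps -ef | grep 'bin/mover' | grep -v grep would only match the actual mover binary rather than any process whose command line merely contains the word 'mover'.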
  6. Making the docker image file larger than the default of 20GB is rarely required. If it keeps filling up it nearly always means you have a container writing to a location that is internal to the container which should instead be mapped to a location external to the container.
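     As a purely illustrative example, a download client writing to /incomplete inside the container should have that container path mapped in the container's settings to a host path such as /mnt/user/downloads/incomplete; the exact paths will vary with the container concerned.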
  7. You can disable a plugin without actually removing it by renaming the .plg file to have a different extension and rebooting. It then becomes a matter of trial-and-error. You are likely to get better informed feedback if you attach your system’s diagnostics zip file to your next post in this thread so we can see what plugins you are running.
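     As an example (the plugin name here is hypothetical), renaming /boot/config/plugins/someplugin.plg to someplugin.plg.disabled stops that plugin being loaded on the next boot, and renaming it back re-enables it.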
  8. Not sure, other than to say it works fine for me 😊. What security level is set on the shares (mine are set to Public)?
  9. You could try enabling the syslog server to get a log that can survive a reboot.
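     The syslog server is set up under Settings → Syslog Server; pointing it at the Unraid server's own address (or using the mirror-to-flash option if your release offers it) writes the log to permanent storage so it is still available after the reboot.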
  10. Did you try reinstalling them via Apps->Previous Apps?
  11. Do you have Turbo Write enabled? You are likely to get better informed feedback if you attach your system’s diagnostics zip file to your next post in this thread.
  12. At the moment the test is very simple as it looks for any process containing 'mover' using the statement:
      exec("ps -ef | grep mover", $output);
      $ret = (count($output) > 2) ? 1 : 0;
      and then looks at the number of entries in $output. Perhaps I need to change that test to make it more restrictive or change it in some other way? Just to check - is there a good reason your code does NOT want the parity check to be paused while your process is running?
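      As a rough sketch (untested) of a more restrictive version along the lines discussed above, something like this would only count the actual mover binary:
      exec("ps -ef | grep 'bin/mover' | grep -v grep", $output);  // exclude the grep process itself from the results
      $ret = (count($output) > 0) ? 1 : 0;                        // any remaining entry means mover is running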
  13. There are various error messages after attempting to mount the docker.img file:
      Aug 22 11:11:09 Tower kernel: BTRFS info (device loop2): start tree-log replay
      Aug 22 11:11:09 Tower kernel: BTRFS error (device loop2): incorrect extent count for 1372585984; counted 8726, expected 8727
      Aug 22 11:11:09 Tower kernel: BTRFS: error (device loop2) in btrfs_replay_log:2500: errno=-5 IO failure (Failed to recover log tree)
      Aug 22 11:11:09 Tower root: mount: /var/lib/docker: can't read superblock on /dev/loop2.
      Aug 22 11:11:09 Tower root: dmesg(1) may have more information after failed mount system call.
      Aug 22 11:11:09 Tower kernel: BTRFS error (device loop2: state E): open_ctree failed
      This suggests the file is corrupt and needs to be recreated.
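      The usual way to recreate it (from memory, so check the online documentation for the exact steps) is to disable the Docker service under Settings → Docker, delete the existing docker.img file, re-enable the service so a fresh image is created, and then reinstall your containers via Apps → Previous Apps with their settings intact.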
  14. NerdPack can install components that are incompatible with the Unraid 6.12.x releases and break the system as a result. You definitely want the plugin to be completely removed on the latest Unraid releases.
  15. You probably need to run it again on the rebuilt disk (without the -n option) and if you get similar results the drive should mount after the array is restarted in normal mode.
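     As an illustration only, assuming the disk is XFS (which the -n option implies), from the command line that would be something like xfs_repair /dev/md1 with md1 replaced by the device for the disk in question, or you can run the check from the drive's page in the GUI with the -n removed from the options.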
  16. Was the drive showing as unmountable before the rebuild? The correct handling of unmountable drives is covered here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page.
  17. You first need to upgrade the parity drive:
      • stop the array
      • assign the 8TB drive in its place
      • start the array to build parity on the 8TB drive
      I would recommend keeping the old 4TB parity drive intact until the building of parity on the 8TB drive completes, as this would give recovery options if one of the 4TB data drives happened to fail during this process. Once you have the parity upgraded to 8TB you can add the other 8TB drive and the old 4TB parity drive as new drives to the array.
  18. I would probably just add disk1 as the parity drive in the 3rd step and skip the last 2 steps. Not sure about the 500GB disk; you could either keep it as an array disk, or alternatively use it as a ‘cache’ pool. Does it currently have data on it that needs to be kept?
  19. If you restart in normal mode then the emulated disk should now mount and you can check that it has the data you expect. When you get around to rebuilding then the contents of the emulated disk will be rebuilt onto the physical replacement drive.
  20. At this point Unraid is emulating the missing/failed drive. You need to run without the -n (no modify) option for the emulated drive to be repaired.
  21. No data disk can ever be larger than the smallest parity drive. The parity swap process is designed for exactly the situation you are in where you simultaneously upgrade the parity drive to a larger size and use the old parity drive to replace the failed drive.
  22. You are likely to get better informed feedback if you attach your system’s diagnostics zip file to your next post in this thread.
  23. This is all covered here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page.
  24. Your ‘appdata’, ‘system’, and ‘domains’ shares are set with the ‘cache’ pool as primary storage, the array as secondary storage, and the mover direction as array to cache, which means you want those shares on the ‘cache’ pool if space permits. This is quite normal for maximum performance. At the moment those shares are not completely on the ‘cache’ pool, so you will be getting some performance impact. To get them moved completely to the cache you would need to temporarily disable the Docker and VM services under Settings, as those services hold files open which stops them being moved. However, as you have a relatively small drive as the ‘cache’ pool, I am not sure you have the space to hold them completely on the cache.
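     With the two services stopped you would then run mover manually (the Move button on the Main tab) and re-enable Docker and VMs once it finishes; whether everything actually fits on the pool is something you will only see at that point.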
  25. Difficult to say without knowing how you have your system set up to use the cache and what files/folders are currently there. It is quite normal to keep files used by docker containers and VMs on a pool for performance reasons. You are likely to get better informed feedback if you attach your system’s diagnostics zip file to your next post in this thread.