Everything posted by itimpi

  1. The mover behaviour is what is expected if files end up on the wrong cache pool. If using Krusader you would need to use copy/delete to avoid this behaviour. This is because of a quirk in the way the ‘move’ operation is handled by Krusader: if it thinks both source and target are on the same mount point (/mnt/user in this case) it first tries a simple rename (for speed) and only falls back to a copy/delete operation if that fails. In this case the rename succeeds, so the file is left on the same disk rather than moved to one that is part of the target user share. If you do the move over the network it should always do a copy/delete and so give the expected behaviour, as sketched below.
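A minimal Python sketch of the two strategies, using hypothetical paths; os.rename() succeeds anywhere under the single /mnt/user mount point, which is exactly why the rename-first approach leaves the file on its original disk, while an explicit copy/delete re-writes the data through the target share:

```python
import os
import shutil

def krusader_style_move(src: str, dst: str) -> None:
    # Try a cheap rename first. Under /mnt/user the rename succeeds,
    # so the file stays on whatever physical disk it already occupied.
    try:
        os.rename(src, dst)
    except OSError:
        shutil.move(src, dst)  # only reached if the rename fails

def copy_delete_move(src: str, dst: str) -> None:
    # Explicit copy then delete: the data is rewritten through the
    # target share's allocation rules, so it lands on the right disk.
    shutil.copy2(src, dst)
    os.remove(src)

# e.g. (hypothetical paths under the one /mnt/user mount point):
# krusader_style_move("/mnt/user/downloads/f.bin", "/mnt/user/media/f.bin")
# copy_delete_move("/mnt/user/downloads/f.bin", "/mnt/user/media/f.bin")
```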
  2. Not spotted anything obvious yet. It might be worth running File System Checks on the cache pool and each individual array drive?
  3. I would suggest that you disable mover logging (unless you are actively investigating a mover-related problem) as it generates a lot of output. In addition, by its very nature that information cannot be anonymised, so you may prefer not to have it appear in your diagnostics.
  4. Preclear should not have been affected by whatever activity was happening on other array drives. However it is possible that some other factor (e.g. out-of-memory) happened that did affect it. If you provide your system’s diagnostics zip file (obtained via Tools -> Diagnostics) taken before any reboot then it might be possible to see if something relevant happened.
  5. How are you moving the files between the shares? Depending on the method used this might just be expected behaviour.
  6. This will not be the true available space. When using drives of mixed sizes you can get misleading sizes reported by btrfs. With 2 drives running in the default raid1-style configuration the available space is that of the smaller drive. If you run with the ‘single’ profile, sacrificing redundancy, then the sizes are added (see the sketch below). If you use the Unraid 6.9.0 release you can have multiple pools and each one can be optimised for its particular use.
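As a rough illustration, here is a sketch with hypothetical drive sizes, covering only the two-drive case described above:

```python
def btrfs_usable_space(drive_sizes_tb, profile="raid1"):
    # raid1 keeps two copies of everything, so with two mixed-size
    # drives only the smaller drive's capacity is usable; 'single'
    # stores one copy and simply adds the drives (no redundancy).
    if profile == "raid1":
        return min(drive_sizes_tb)
    if profile == "single":
        return sum(drive_sizes_tb)
    raise ValueError(f"unhandled profile: {profile}")

# e.g. a 1 TB SSD pooled with a 500 GB SSD (hypothetical sizes):
print(btrfs_usable_space([1.0, 0.5], "raid1"))   # 0.5 (TB usable)
print(btrfs_usable_space([1.0, 0.5], "single"))  # 1.5 (TB usable)
```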
  7. I use the ‘makepkg’ command when building the .txz file for my plugins.
  8. The ‘bread’ failures definitely indicate a problem at the hardware level either with the flash drive itself (most likely) or with the port it is plugged into.
  9. Yes - I have 16 drives plugged into mine. It typically costs about 3-4 times the price of the variant that handles 8 drives but the fact it only took up one slot on the motherboard was important to me.
  10. Just as a checkpoint, are you sure the ‘failed’ drive really has failed? It is more common for drives to be disabled because a write to them failed due to an external factor rather than the drive actually failing. Knowing the answer to this might affect whether it is recommended you go with the Parity Swap procedure or not. It is also worth pointing out that the 6.9.0 release supports multiple cache pools, so if you have 2 SSDs you could easily have one dedicated to VM use and the other to Docker use, with either/both/none doing actual caching of writes to shares.
  11. I am afraid this is a manual process. For folders that have lost their names it tends to be relatively easy to work out what they were by examining their contents! For files that have lost their names it is much harder. You have to decide if they are important enough to justify the effort involved. If you think they are then the Linux 'file' command can be of use in at least determining what type of content each file has (and thus probably the file extension that you want).
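A small Python sketch of that approach, assuming a hypothetical lost+found directory; it shells out to the standard Linux file command (the -b flag prints just the description, without the filename):

```python
import subprocess
from pathlib import Path

# Hypothetical location of the recovered, name-less files.
lost_dir = Path("/mnt/disk1/lost+found")

for entry in sorted(lost_dir.iterdir()):
    if entry.is_file():
        # Typical output: "JPEG image data", "Matroska data", etc. -
        # a strong hint for which extension the file should be given.
        desc = subprocess.run(
            ["file", "-b", str(entry)],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        print(f"{entry.name}: {desc}")
```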
  12. Glad to hear that there is not (yet anyway) a bug I need to fix.
  13. Have you enabled the option in the plug-in settings to apply pause/resume to manually started checks in addition to the automatically started ones? The default setting is to not do so. If you have and it is not happening then this is a bug that would need looking at.
  14. Your syslog is full of reset/retry attempts on ata3 which will explain the slow performance. I could not see what drive ata3 referred to.
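If it helps to quantify the problem, a quick sketch along these lines (assuming a copy of the syslog saved at a hypothetical path) counts the reset/retry lines per ata port:

```python
import re
from collections import Counter

resets = Counter()
# Hypothetical path to a syslog copy extracted from the diagnostics zip.
with open("syslog.txt", encoding="utf-8", errors="replace") as log:
    for line in log:
        # Kernel ATA trouble shows up as lines like "ata3.00: ..." with
        # words such as "hard resetting link" or "failed command".
        m = re.search(r"\b(ata\d+)(?:\.\d+)?:.*(reset|retry|failed)", line, re.I)
        if m:
            resets[m.group(1)] += 1

for port, count in resets.most_common():
    print(f"{port}: {count} reset/retry lines")
```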
  15. If the vdisk is on the array it is NOT cached. The caching works at the complete file level, not at writes within a file.
  16. ECC errors are not ‘typical’ - they indicate connection issues! An occasional ECC error is not normally a problem but any more than that will impact performance as the system attempts retries on the drive operation it is trying to perform. If you want any sort of informed feedback you should provide your system’s diagnostics zip file (obtained via Tools -> Diagnostics) attached to your next post in this thread.
  17. The reason you do not normally want a VM on the array is that it badly affects performance: every write carries the penalty of additional writes to update parity. Unraid will not stop you putting the VM on the array if you are prepared to take the performance hit. If you really want the VM there then enabling ‘Turbo Write’ mode will help a little (see the sketch below), but at the expense of keeping your drives all spun up any time the VM is running.
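To put rough numbers on that penalty, here is a sketch of the usual single-parity write arithmetic (my own illustration, not Unraid code): the default read/modify/write mode needs four disk operations per write regardless of array size, while Turbo (reconstruct) write reads every other data drive so data and parity can be written straight out, avoiding the read-before-write on the target drive at the cost of spinning everything up:

```python
def disk_ops_per_write(total_drives: int, turbo: bool) -> int:
    # Single parity drive assumed.
    data_drives = total_drives - 1
    if turbo:
        # Read the other data drives, then write data + parity.
        return (data_drives - 1) + 2
    # Default: read old data + old parity, write new data + new parity.
    return 4

# e.g. a hypothetical 6-drive array (5 data + 1 parity):
print(disk_ops_per_write(6, turbo=False))  # 4 ops, target drive read then written
print(disk_ops_per_write(6, turbo=True))   # 6 ops, but no read-before-write penalty
```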
  18. Might you not also want the contents of the file /etc/cron.d/root to see if that is running anything at those times?
  19. No - that looks like the BIOS for the motherboard. The LSI Disk Controller almost certainly has its own BIOS - you probably have to press a particular keyboard combination to get into its settings.
  20. Do you have "turbo write" enabled? It almost sounds as if you are not writing to the cache as expected but directly to the array so that things slow down as soon as RAM buffers run out.
  21. Many BIOSes get upset if more than 12 drives are potentially bootable. You may need to go into the disk controller BIOS to stop the attached drives from being considered bootable.
  22. Apparently there are known spin-down issues with rc2, so you need to wait for rc3 to see if they have been fixed.
  23. Yes. Unraid is loaded into RAM each time it boots and runs from RAM, so unless you take explicit steps to make changes survive a reboot they will be lost (see the sketch below).
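The usual explicit step is to re-apply the change at every boot from the flash drive, which is the one location that persists. A sketch assuming the standard /boot/config/go startup script and a purely hypothetical tweak:

```python
from pathlib import Path

# /boot is the flash drive; the go script on it runs at every boot,
# so commands appended here are re-applied after the RAM-based root
# filesystem has been rebuilt.
go_script = Path("/boot/config/go")

tweak = "sysctl -w vm.swappiness=10  # hypothetical example tweak\n"
if tweak not in go_script.read_text():
    with go_script.open("a") as f:
        f.write(tweak)
```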
  24. True - certainly something worth trying when I find the time. In practice running in 'headless' mode is not an inconvenience to me, so it is not something I have invested significant effort in trying to resolve. My hardware cannot do hardware pass-through, so I have no real interest in regularly running locally on the server.