Everything posted by itimpi

  1. Not offhand. Having said that I am not sure what happens if you set a pool to not have ANY User Shares on it (i.e. Enable user share assignment: No) for the pool in question.
  2. Have you checked Settings->Global Share settings that you have not limited which drives can be part of User Shares? You are likely to get better informed feedback if you attach your system’s diagnostics zip file to your next post in this thread.
  3. The CRC errors are normally related to connection issues rather than drive issues. They are stored in the drive firmware and are never reset, so fixing the problem simply stops them increasing. You can click the orange ‘thumbs down’ icon against the drive on the Dashboard and select acknowledge, which means you only get notified again if the count changes.
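     If you want to check the raw counter from the command line, something like the following works (smartctl ships with Unraid; replace sdX with the device identifier shown on the Main tab):

         # Show the UDMA CRC error counter (SMART attribute 199) for a drive
         smartctl -A /dev/sdX | grep -i crc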
  4. The SMART report for the drive looks OK. You can rebuild parity to the same drive by using the procedure described here in the online documentation, accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page. As to what caused the issue, it is difficult to say: the syslog is only held in RAM and restarts each time Unraid is booted. If you want a persistent syslog that can survive a reboot then you need to enable the syslog server.
  5. Do you pass any hardware through to a VM? If so, chances are that when you changed the installed hardware the IDs associated with the passed-through hardware changed, and you are now passing through something that should not be passed through. Providing the system's diagnostics would allow us to confirm this.
  6. This is expected as all top level folders on any array drive or pool are automatically treated as User Shares, and if automatically created will have default settings. I agree it would have been better if the 'automatic' creation mechanism had included the ZFS pool but this is not how it (currently anyway) operates. If you want the settings to be different to the defaults then you need to amend them appropriately.
  7. Your parity check speed seems much slower than I would expect - something between 1 and 2 TB per hour is more normal. Regardless, you might want to consider installing and using the Parity Check Tuning plugin to offload parity checking to periods when the system might otherwise be idle, thus trading off disruption to normal working against the total duration to complete.
  8. A parity check tends to be a special case as it is a purely sequential operation, so there is basically no head movement most of the time. Quoted raw disk speeds also tend to be hard to achieve except in special hardware+software test environments. It is really up to you: as you said, it is a trade-off between write speed and the power consumption required to keep all drives spinning.
  9. Seems unlikely to me! Remove the Mover Tuning plugin and try again. If you still do not get anywhere then enable mover logging and post new diagnostics after attempting a move.
  10. That is not an unusual speed - I suspect the initial boost was due to RAM caching. Do you have Turbo Write mode enabled? With it enabled you are likely to get 70 MB/s or more. More information on Unraid array write modes can be found here in the online documentation, accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page.
  11. At the moment all files are on the array. For shares where you want to use Exclusive mode you need them all on the ‘cache’ pool. To achieve this:
        • disable the Docker and VM services under Settings
        • for shares you want on cache, set primary storage to be the ‘cache’ pool and secondary storage to be the array
        • set mover direction to be array -> cache
        • run mover manually from the Main tab to transfer the files
        • when mover finishes, remove the secondary storage option from the shares concerned
        • you can now re-enable the Docker and/or VM services
  12. Not that I know of that is generic for any User Share. It would be easy to write a script to do this, run at your defined schedule via the User Scripts plugin - see the sketch below. However, this raises the question of why you would want to do that in the first place? You could always create a multi-drive pool that has built-in redundancy. There IS a plugin for backing up the ‘appdata’ share, as that is a share that is frequently kept on a pool for performance reasons.
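     As a rough illustration of the sort of script I mean (the share and folder names are made up - adjust to suit), saved as a scheduled script in the User Scripts plugin:

         #!/bin/bash
         # Mirror a pool-based share to a backup folder on the array.
         # 'myshare', 'cache' and 'disk1' are placeholder names.
         SRC=/mnt/cache/myshare/
         DST=/mnt/disk1/backups/myshare/
         mkdir -p "$DST"
         # -a preserves attributes; --delete makes DST an exact mirror of SRC
         rsync -a --delete "$SRC" "$DST"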
  13. Yes - this is quite normal as it gives best performance. If you are using an Unraid 6.11.x release or earlier then you set any share that should be on it as Use Cache=Only. If you are using an Unraid 6.12.x release or later then you set the pool as Primary storage and nothing as secondary storage. You can then take advantage of the Exclusive Share feature to improve performance by bypassing the Fuse layer in Unraid. In both cases if it is a single drive (no redundancy) then you need to make sure regular backups are made (probably to the array) and the VM Backup plugin can help with this. You could also use a multi-drive pool to have built-in redundancy for the pool.
  14. This would be expected - you cannot have multi-drive pools with XFS. Do you realise that BTRFS will support 3 drives running in its variant of RAID1 (giving 1.5TB usable space)? No idea on the stability of its RAID5. Another possibility would be to run the 3 drives as a ZFS raidz pool to get 2TB usable space, although this would involve backing up any existing contents first.
  15. That is not a supported option. Once a disk has been added to the array and formatted it is no longer ‘empty’ as far as parity is concerned: although it may not contain any files, it has the directory structure representing an empty file system that was created by the formatting process, and parity has been updated to reflect this.
  16. You will not get anything like the raw disk speeds. In normal mode I would expect something like ~30 MB/s, while with Turbo Write enabled more like 80 MB/s. Those speeds will be impacted by anything else happening on the main array. These write modes are described here in the online documentation. There is also a non-trivial overhead associated with each file as mover carries out checks, so with lots of small files expect it to be even slower.
  17. If you want to guarantee having 100GB free then the Minimum Free Space value would need to be 100GB + the size of the largest file that will be written, as Unraid only stops writing files to a drive when, at the start of a file, the free space is already less than the Minimum Free Space value.
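     For example, if the largest file you expect to write is 50GB, set Minimum Free Space to 150GB: a write can then only start while at least 150GB is free, so even a full 50GB file still leaves over 100GB. Set to just 100GB, that same write could start with 101GB free and finish with only ~51GB remaining.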
  18. Yes. You could probably simplify it even further by putting them into a folder on the flash drive and copying the contents of that folder using wildcards.
  19. It is really up to you! Files on the flash drive cannot have the executable bit set, so to use them you would need to add lines to the config/go file on the flash drive to copy them somewhere else during boot (probably something like /usr/local/sbin) and set the executable bit.
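     As an illustration (the folder name is just an example - use whatever you create), the lines added to config/go might look like:

         # Copy custom scripts from the flash drive and make them executable.
         # /boot/config/myscripts is an example location for the scripts.
         for f in /boot/config/myscripts/*; do
             cp "$f" /usr/local/sbin/ && chmod +x "/usr/local/sbin/$(basename "$f")"
         done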
  20. You are meant to format drives AFTER adding them to the array.
  21. This is confusing. If you replaced it then Unraid should immediately have started rebuilding it. There is no concept of preparing it during a replacement unless you did something different such as trying to add it as a new drive rather than replacing the failed drive.
  22. That statement is true as it refers to new files: if primary storage is below the Minimum Free Space value then primary storage is bypassed and the file is written directly to secondary storage. For files already on the cache pool it is the mover setting that is relevant.
  23. Prefer means you want files moved from array->cache. You need Yes for cache->array. This is explained in the help text built into the GUI for the Use Cache setting.
  24. You must use the Yes setting for Use Cache if you are not on a 6.12.x release, and the cache->array setting for mover direction if on a 6.12.x release. Have you got the Mover Tuning plugin installed - if so I suggest removing it to see if that helps. You can turn on the mover logging to see what mover is trying to do. Note that mover can be very slow if moving lots of small files.