itimpi

Everything posted by itimpi

  1. This suggests a plugin is causing your problem. The key thing that is different in Safe Mode is that no plugins are loaded.
  2. Are you talking about over the network or when checking locally? If over the network then running the New Permissions tool on the share might fix it. If locally then please provide your system's diagnostics so we can look further.
  3. Use the New Config tool and tell it to keep all current assignments. Then change the one for the new disk and start the array to commit the new set of disks. All disks previously used by Unraid and with data already on them are left untouched.
  4. The mover behaviour IS displayed in the GUI alongside that setting, and you will see it change as you change the setting. One of the Unraid releases messed up the display of this additional text, pushing it to the right so it was less obvious, but that should now be corrected.
  5. The one that has come up several times is when people deliberately WANT to keep some (but not all) files for a share on a fast SSD, and there have been scripts posted to achieve this. What may not be obvious is that the Use Cache setting is specifically about what to do with NEW files. For read purposes, existing files can be found on any drive managed by Unraid (regardless of user share settings) as long as they are under the correct top level folder to make them part of a User Share.
  6. There are valid use cases for NOT doing this, and changing current behaviour would break them.
  7. This is expected behaviour! The help built into the GUI describes how the Use Cache settings work and how they affect mover behaviour.
  8. I wonder if perhaps last time you ran the check without removing the -n option, so the disk got checked but not repaired?
  9. You can also go this way by using the New Config tool to set the drive set you want to keep (although you would then have to rebuild parity to match the new drive set). After doing this you can mount the old drives in UD and copy their contents to the array built from the new drives.
  10. You might want to check what is in the system share on disk5. If it contains the docker.img or libvirt.img files then having the docker/VM services enabled will keep the disk spinning.
  11. Perhaps you could explain why you want to do this? That way we can check that what you are trying to do makes sense for the Use Case you are trying to satisfy.
  12. From past experience one thing that can mess with permissions is a plugin that has been built with the wrong permissions on files it installs. When a plugin is installed any permissions in an associated .tgz file that is unpacked can override the standard ones that are set by Unraid. This can apply to any level of the paths to the files contained within the .tgz file.
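The permission-override behaviour described above can be reproduced with Python's tarfile module. This is a minimal sketch, not Unraid's actual plugin installer: the archive and file names are made up for illustration, but it shows how permissions stored inside a .tgz are applied on extraction, overriding whatever defaults the system would otherwise use.

```python
import io
import os
import stat
import sys
import tarfile
import tempfile

# Build an in-memory .tgz containing a file with unusual permissions (0o777).
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tgz:
    data = b"#!/bin/sh\necho hello\n"
    info = tarfile.TarInfo(name="config/myplugin.cfg")  # hypothetical name
    info.size = len(data)
    info.mode = 0o777  # the "wrong" permissions baked into the archive
    tgz.addfile(info, io.BytesIO(data))

# Extract it, as a plugin install would, and inspect the resulting mode.
# (The filter argument only exists on Python >= 3.12.)
kwargs = {"filter": "fully_trusted"} if sys.version_info >= (3, 12) else {}
buf.seek(0)
with tempfile.TemporaryDirectory() as dest:
    with tarfile.open(fileobj=buf, mode="r:gz") as tgz:
        tgz.extractall(dest, **kwargs)
    target = os.path.join(dest, "config", "myplugin.cfg")
    mode = stat.S_IMODE(os.stat(target).st_mode)
    print(oct(mode))  # the mode stored in the archive wins
```

The same thing happens with the directory entries inside the archive, which is why the damage can apply at any level of the paths contained in the .tgz file.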
  13. A User Share is simply an amalgamated view of all the top level folders of that name on all drives. In that sense if you put the drive into any other system it would see these folders and their contents. When you look at a disk at the drive level in Unraid that is the same contents you would see on another system for the same drive.
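The "amalgamated view" idea can be sketched in a few lines of Python. This is only an illustration of the concept (the disk paths are hypothetical stand-ins for /mnt/disk1, /mnt/disk2, ...), not how Unraid's shfs actually implements it:

```python
import os
import tempfile

def user_share_view(disks, share):
    """Merge the contents of <disk>/<share> across all disks,
    roughly what /mnt/user/<share> presents."""
    merged = set()
    for disk in disks:
        folder = os.path.join(disk, share)
        if os.path.isdir(folder):
            merged.update(os.listdir(folder))
    return sorted(merged)

with tempfile.TemporaryDirectory() as root:
    # Two "disks", each holding a top level folder with the same name.
    disk1 = os.path.join(root, "disk1")
    disk2 = os.path.join(root, "disk2")
    os.makedirs(os.path.join(disk1, "Media"))
    os.makedirs(os.path.join(disk2, "Media"))
    open(os.path.join(disk1, "Media", "film_a.mkv"), "w").close()
    open(os.path.join(disk2, "Media", "film_b.mkv"), "w").close()
    # The "Media" share shows files from both disks as one folder.
    view = user_share_view([disk1, disk2], "Media")
    print(view)
```

Because each disk just holds ordinary folders and files, any other system that mounts the disk sees exactly the per-disk portion of each share.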
  14. You can use the Parity Swap procedure that is designed for exactly this Use Case.
  15. You cannot recover data if you have 2 bad disks and only single parity.
  16. Not quite sure what might be going wrong - for me it “just works”. Maybe post a screenshot of the docker settings you are using for the container. You should be able to get the server component working regardless of whether the Alexa skill is installed.
  17. There is no "right" answer to this. If you do not care how files get split across drives in the array, then let any directory be split, as that requires the least supervision. If that is not what you want, then you need to work out what value suits your requirements. I personally use level 0 (manual control) as it happens to suit my work pattern and the way I organise my media, but many people use values in the low single-figure range depending on how they organise their files.
  18. Pools can use any of the pseudo RAID levels supported by BTRFS. It is a BTRFS-specific implementation of RAID that can be dynamically expanded, can change RAID levels, and can use odd numbers of drives. The downside is that BTRFS seems to be more susceptible to file system level corruption than XFS (which is the default for the main array).
  19. This is true as far as writing to the array is concerned. If you have SSDs then these are normally used in a ‘pool’ external to the main array. Pools can be made redundant by having multiple drives in the pool, but you are still constrained by the slowest component. There is little point in having an SSD as an array drive with a HDD as parity, particularly with the way Unraid updates parity as described in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page. With the drives you mention you could have a single drive array using the HDD, plus a single drive pool using the SSD to host VMs/docker containers and run them with the full performance offered by the SSD. Since this configuration does not provide redundancy, you could do periodic backups (at a frequency chosen by you) of the VM/docker files from the SSD to the HDD in the array. Plugins are available that can automate this process.
  20. I think you are likely to get better informed feedback if you attach your system’s diagnostics zip file to your next post in this thread so we can get an idea of what is happening internally in the server.
  21. @bkastner you have a very restrictive value for the Split Level setting. In the event of contention between the various settings for the share about which disk to select for a new file, the Split Level is the one that wins. This can force a file to be written to a specific drive regardless of the other settings such as allocation method or minimum free space. It is also worth pointing out that if a file already exists, it is always updated in situ on the drive where it already lives.
  22. Unraid does not mind if you leave gaps in the assigned drives. Many people find it triggers their OCD, but as long as it does not worry you, you can leave the gaps in the assignments. When you start the array only the disk slots that have drives assigned will be shown. Just in case it is relevant, it is worth pointing out that Unraid does not care how/where a drive is connected, as drives are identified by their serial number. There is therefore no reason that the assignments to disk slots have to match the physical layout if that is not convenient.
  23. I think you have misunderstood how parity works? The requirement is that the parity drive must be at least as large as the largest data drive, but it can protect multiple drives of that size or smaller.
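The way one parity drive can protect multiple data drives is easiest to see with the XOR arithmetic behind single parity. A toy sketch (tiny byte strings standing in for whole drives; Unraid's real implementation works the same way sector by sector, with shorter drives treated as zero beyond their end):

```python
from functools import reduce

def xor_bytes(blocks):
    """XOR equally sized byte blocks together, column by column."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three equally sized data "drives".
disk1 = b"\x01\x02\x03\x04"
disk2 = b"\x10\x20\x30\x40"
disk3 = b"\xaa\xbb\xcc\xdd"

# One parity "drive" protects all three.
parity = xor_bytes([disk1, disk2, disk3])

# Lose disk2: XOR of parity with the surviving disks rebuilds it exactly.
rebuilt = xor_bytes([parity, disk1, disk3])
print(rebuilt == disk2)
```

This also shows why single parity cannot recover two failed disks at once: with two unknowns in each XOR column there is no longer a unique solution.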
  24. That suggests you forgot to disable the VM service before running mover. Mover will not move open files, and the libvirt.img file would be kept open by the VM service if it is running.
  25. That setting means it will not be visible or accessible on the network. You need it set to Yes to see it on the network.