Everything posted by itimpi

1. Yes. A UPS is always a sensible investment when running a server, particularly if your location is prone to power cuts.
2. You can:
   • use Tools->New Config and select the option to keep all current assignments
   • return to the Main tab and assign the additional drive
   • start the array to commit the assignments and start building parity based on them
   Note that your array will not be protected until the parity build finishes.
  3. That suggests that the disk dropped offline or was unmountable. I would suggest you run a file system check on the drive in case file system corruption is the cause.
4. This does not make sense: since your array drives are only 2TB, there will never be 3TB free on any drive.
5. Definitely the case, since anything other than 0 errors is too many.
6. It looks as if your docker.img file is corrupt (probably because the cache drive ran out of free space). I notice that you have it configured for 75GB, which should be far more than you need - the default of 20GB is normally more than enough as long as you do not have a container misconfigured so that it ends up writing internally to the image. I would suggest:
   • Stop the docker service.
   • Go to Settings->Docker and select the option to delete the current image (you may need to turn on Advanced view).
   • Change the image size to 20GB (which should free up space on the cache).
   • Restart the docker service to create a new 20GB docker.img file.
   • Go to Apps -> Previous Apps to redownload the container binaries and reinstate the containers you select, with their previous settings intact.
   • Make sure that any share you want mover to transfer files to the array has Use Cache=Yes set. If you are not sure of the correct settings for any share, use the GUI built-in help for that field to see how the settings operate and how they affect mover.
   • Set the Minimum Free Space setting for the cache (currently 0) to be larger than the biggest file you expect to transfer, which will help avoid running out of space in the future.
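Before rebuilding, it can be worth confirming the cache situation from the console. A minimal sketch, assuming the default docker.img location under the system share (adjust the paths if your layout differs):

```shell
# Illustrative check of cache free space and docker.img size.
# Paths assume the default Unraid layout; adjust to your system.
CACHE=/mnt/cache
IMG=/mnt/user/system/docker/docker.img

# Free space remaining on the cache pool
if [ -d "$CACHE" ]; then df -h "$CACHE"; fi

# Current size of the docker image file
if [ -f "$IMG" ]; then du -h "$IMG"; fi
```

If `df` shows the cache at or near 100%, that supports the corruption-from-full-cache theory.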
  7. According to those diagnostics the array has not been started yet (in normal mode).
  8. It might make more sense if you think of the setting meaning where new files are to be initially put when they are created. You then need to look at what action mover will subsequently take (if any) to put them into their final location.
  9. The process is covered here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI.
  10. They can certainly become unseated slightly due to vibration.
11. In step 2 you need to actually stop the docker and VM services, as otherwise they will keep files open that mover is then unable to move.
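To see whether anything is still holding files open on the cache (and so blocking mover), a sketch like this can help; the use of `lsof` against the cache mount point here is illustrative:

```shell
# Sketch: list processes holding files open under the cache mount point.
# Run from the Unraid console after stopping the docker and VM services.
MNT=/mnt/cache
if command -v lsof >/dev/null 2>&1 && [ -d "$MNT" ]; then
  lsof +D "$MNT" | head -n 20
fi
```

An empty result means nothing should be in mover's way.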
  12. I did not mean the subnet mask but the actual subnet (e.g. 192.168.0.x) .
  13. You could use the Parity Swap procedure to just replace 1 of the parity drives with a larger one and use the old parity drive to replace the failed one. There is no requirement for both parity drives to be the same size as long as no data drive is larger than the smallest parity drive.
14. You do not necessarily need to bind the nvme drive to pass it through; instead you can set the path to be the /dev/disk/by-id/? value that corresponds to the nvme drive.
15. Not quite:
   • Make sure nothing refers directly to /mnt/cache, but only indirectly via the User Share.
   • Change each share to either Cache:No or Cache:Prefer (the latter to automatically start using the cache again later when one is added).
   • Details on getting files from cache to array are covered here in the online documentation, which can be accessed via the Manual link at the bottom of the Unraid GUI.
   After doing that you can remove the cache.
16. If you are going to pass a drive through to a VM you do NOT want to use a /mnt/disks/? type path, as that is where UD mounts drives, and UD must not mount a drive that is to be passed through to a VM. Instead you must get the /dev/disk/by-id type path for the drive.
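One way to find that stable path from the console is a sketch like the following (the filtering is just an example; match on your drive's model or serial number):

```shell
# List stable device paths; use the full /dev/disk/by-id/... entry that
# matches your drive's serial number in the VM configuration.
if [ -d /dev/disk/by-id ]; then
  ls -l /dev/disk/by-id/ | grep -iv part || true   # hide partition entries
fi
```

Unlike /dev/sdX names, the by-id path does not change between boots.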
17. You can edit the config/network.cfg file on the flash drive to set an explicit IP address, or delete it to revert to DHCP.
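As an illustration, a static-IP network.cfg might look something like this (the key names are from a typical single-NIC configuration and the addresses are placeholders for your own network - treat this as a sketch, not a definitive template):

```
# config/network.cfg on the flash drive - illustrative static-IP example
USE_DHCP="no"
IPADDR="192.168.0.50"
NETMASK="255.255.255.0"
GATEWAY="192.168.0.1"
DNS_SERVER1="192.168.0.1"
```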
18. If you do not need parity protection, why not set it up as an additional pool (which is a feature already available)?
19. You could simply pass the entire drive through to the VM and it should boot into Windows as long as you set the connection type to SATA. If you add the virtio drivers to the Windows system you can then use that connection type as well. The only obvious reason to reformat it to XFS would be because you want to use vdisk image files for the VM rather than use the raw drive directly. Not sure why this should be happening, as the UD plugin should handle standard NTFS drives fine. Note, however, that if passing the entire drive through to the VM you want to set the "PassThru" setting for the drive in UD so it does not try to mount it.
  20. You can do it from the GUI by clicking on the folder icon on the Main tab at the end of the entry for the pool/cache.
21. Could not see anything obviously wrong with the disk. You could try running an extended SMART test on the drive as a check. External factors such as cabling are far more common reasons for a drive to be disabled than the drive actually failing.
22. Have you checked that the router does not have a setting that stops WiFi devices connecting to LAN devices? Many of them do, and it is often on by default.
23. Have you tried connecting via the server address inside the tunnel? I think it will be something like but you can check this under Settings->VPN Manager (you may have to toggle on Advanced view to see this). Also, what subnet is your local LAN using and what subnet is the client machine using?
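If the VPN Manager tunnel is the built-in WireGuard one, you can also inspect it from the console with something like this sketch (the tunnel name wg0 is an assumption - Unraid numbers them wg0, wg1, ...):

```shell
# Show the WireGuard tunnel status and the server address inside it.
TUN=wg0   # first VPN Manager tunnel (assumption; adjust if needed)
if command -v wg >/dev/null 2>&1; then
  wg show "$TUN" 2>/dev/null || true        # peers, endpoints, allowed IPs
  ip -4 addr show "$TUN" 2>/dev/null || true  # server's tunnel-side address
fi
```

Comparing the allowed IPs and the two subnets should show whether the LAN and client ranges overlap.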
24. Not sure you can do this without manually editing the Samba configuration file. Is there a reason it is not set up as a pool (as pools can participate in User Shares)? With the 6.9.x releases supporting multiple pools, you only need to continue using UD mounted devices if they are removable ones.
25. You can try, but I would expect the drive is likely to show up as unmountable when you try. The normal way forward is to provide the -L option to the repair - despite the ominous-sounding warning it rarely results in any data loss.
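From the console, with the array started in Maintenance mode, the repair looks roughly like this sketch; /dev/md1 is a placeholder for whichever array device is affected (disk1 -> md1):

```shell
# Sketch of an XFS check/repair; DEV is a placeholder for the affected
# array device. Run with the array started in Maintenance mode.
DEV=/dev/md1

# Read-only check first: reports problems without changing anything.
if [ -e "$DEV" ]; then xfs_repair -n "$DEV"; fi

# If the repair refuses to run because of a dirty log, rerun with -L to
# zero the log - despite the ominous warning it rarely loses data:
#   xfs_repair -L $DEV
```

Running against the mdX device (rather than sdX) keeps parity in sync with the repairs.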