
itimpi (Moderators) · Posts: 20,701 · Days Won: 56

Everything posted by itimpi

  1. Dealing with unmountable disks is covered here in the online documentation accessible via the ‘Manual’ link at the bottom of the Unraid GUI.
  2. Not enough information to give an answer. Where in Windows? Do you mean locally in a VM, on a share, or something else?
  3. Definitely slow for me at the moment (UK), taking 30 seconds or so to load each page.
  4. I would think that 6.9.0-rc2 is stable enough now to go straight to it, and the additional functionality is useful. You may in fact have no choice if you have recent hardware that needs drivers that are only in the 6.9.0 branch (e.g. many 2.5Gbps NICs).
  5. You are likely to get better speeds if you configure the container to bypass the User Share level and use /mnt/cache/Storage directly. Leave the Storage share set to Use Cache=Yes (as /mnt/cache/Storage is part of that User Share) if you want files to later be moved to the array when mover runs.
  6. There is nothing built into Unraid to handle this.
  7. I think there will be few cases where using UD is better when using Unraid 6.9.0. It was normally done so that you could dedicate device(s) to a particular usage type, but with multiple cache pools you can do this anyway. The only remaining use case I can see is when you want to be able to easily unplug the device and not have it online all the time - something that is not practical with a pool.
  8. When you have the BIOS issue resolved, you may find this section of the online documentation accessible via the 'Manual' link at the bottom of the Unraid GUI to be of use.
  9. You really need to run the extended SMART test (which takes hours) to get any good idea of whether the drive is healthy. Having said that, if the short test will not complete, that is not a good sign.
  10. Until the drive is online it cannot be written to. You may have to power cycle the server to get it back online. The harder problem may be identifying why the drive was disabled in the first place, as if that is not rectified it is likely to happen again.
  11. Since ZFS is not (yet anyway) an officially supported format within Unraid, I do not think User Shares can include files located on your ZFS array.
  12. Have you checked the permissions on the containing directory? I think that ‘wx’ permissions on the directory allow deleting files within it.
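A quick sketch of the point above, using throwaway paths under /tmp (the directory and file names are examples, not from the original thread): for a non-root user, deleting a file is governed by the write ('w') and execute ('x') bits on the directory containing it, not by the file's own permissions.

```shell
# Deleting a file requires 'w' (modify entries) and 'x' (traverse) on the
# directory that holds it; read permission on the directory is not needed.
mkdir -p /tmp/permdemo
touch /tmp/permdemo/victim
chmod 0300 /tmp/permdemo         # d-wx------ : write + execute only
rm -f /tmp/permdemo/victim       # allowed by the directory's wx bits
[ -e /tmp/permdemo/victim ] || echo "victim deleted"
rmdir /tmp/permdemo              # clean up the demo directory
```

Note that root bypasses these checks entirely, so test as the user that is actually failing to delete.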
  13. What does the 'lsblk' command under Unraid show? Note that 'sda' is the whole drive, so it is still possible for there to be a partition (sda1) present that does not cover the whole drive.
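For anyone unfamiliar with reading the output: lsblk nests each partition ('part' rows) under its whole disk ('disk' row), so an sda1 entry beneath sda means a partition table exists. The device names below are whatever your own system exposes; a minimal invocation might be:

```shell
# Show every block device with its size, type (disk vs part), filesystem
# and mount point; partitions appear indented under their parent disk.
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
```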
  14. A drive being disabled means a write to it has failed. A read check will happen if a parity check is started with only 1 parity drive and a drive disabled. The diagnostics do not seem to contain SMART information for the drive (what was its diskX designation, as you have a lot of drives?), which suggests it has dropped offline, as I only see the parity disk being listed as 14TB in the syslog.
  15. I have a 16GB Cruzer Fit for my main Unraid boot drive and its size is reported correctly by Unraid.
  16. An ISO image is the equivalent of a DVD on a physical installation. Not sure from your comment if you are talking about this or an emulated hard disk (vdisk) stored as a file.
  17. If you are using VNC to access the VM, then this is built into the Unraid KVM support (and is the default if no GPU is passed through), so no need to install anything into the VM.
  18. The ISO is typically only required to install Windows (or drivers). There is no reason not to use the same ISO file in multiple VMs if they are the same version of Windows/drivers. Since an ISO is just an image of a DVD, the ISO only needs to be a different one if you would need a different DVD on a physical machine. Up to you. In most cases the ISO is not accessed after the initial install, so from a performance perspective it does not make much difference. Put it wherever is most convenient. Performance might be more of a concern if you are running a game that continually accesses the ISO. Depends on whether they are running at the same time. Each running VM will require its own passed-through card, as while it is running it has exclusive control of any passed-through hardware. If graphics performance is not that important to a particular VM you could get away with using an emulated GPU rather than a real one.
  19. Yes - except that there is no format stage when adding a parity drive (the parity drive has no file system). As soon as you have added a parity drive then on starting the array Unraid will start building the parity information on it.
  20. I would suggest then that you create a new USB stick with the latest release and put the .key file into the ‘config’ folder. As long as Limetech have your key in their database, when you boot Unraid you should be taken through the automatic licence transfer process (this is allowed up to once a year). If you are not given that option then you will need to contact Limetech by email to get the transfer done manually.
  21. All Unraid licences have been valid for all Unraid releases, so if you still have the USB stick and its associated licence file then it is merely a case of redoing the USB stick with the latest Unraid release and putting the licence file into the ‘config’ folder on the USB stick. If that is not the case then we need more information on what state you are in to give advice.
  22. The contents look correct - but then the error should not have been generated, so I do not know what happened.
  23. The motherboard you quoted has a NIC that is only capable of 100Mbps, so your transfer speed is being limited by that. You could get a faster NIC (any Intel gigabit NIC should be OK) to speed up transfers over the network. Alternatively, plug the drives directly into the Unraid server, bypassing the network: use any spare SATA ports or a USB adapter/dock for direct connection, then mount the drives using the Unassigned Devices plugin and do the copy locally within the server.
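Once a drive is mounted by Unassigned Devices it appears under /mnt/disks/&lt;label&gt;, and the local copy is an ordinary filesystem copy into a share. The paths below are placeholders (the demo uses /tmp stand-ins so the commands run anywhere); substitute your actual UD mount point and target share:

```shell
# Stand-ins for the real paths: SRC would be /mnt/disks/<label> (the drive
# mounted by Unassigned Devices), DST would be /mnt/user/<share> on the array.
SRC=/tmp/ud_demo_src
DST=/tmp/ud_demo_dst
mkdir -p "$SRC" "$DST"
echo "sample data" > "$SRC/file.txt"
cp -a "$SRC/." "$DST/"   # archive-mode copy: preserves timestamps/permissions
ls "$DST"                # prints: file.txt
```

rsync -avh would do the same job with progress reporting and the ability to resume an interrupted copy.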
  24. I am not sure if UD supports this - I have only seen references to HFS+ in its description.
  25. When doing a shutdown, if the dockers, VMs and the array do not stop within the timeout periods allowed then a forced close of the array is done, leading to an unclean shutdown. The timeouts have default values suitable for most common setups, but they may not be sufficient for your system and may need increasing. You will need to do manual tests first to see if you can successfully stop the array when NOT doing a shutdown. If not, you need to work out what is not stopping cleanly and fix that.