itimpi (Moderators) · Posts: 19670 · Days Won: 54
Everything posted by itimpi

  1. That would occur if you had something like a docker container configured to use /mnt/cache but do not have a pool called ‘cache’, which would result in /mnt/cache being a location in RAM. Since you did not provide your full system’s diagnostics zip file it is not possible to be certain whether that is actually what is happening.
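     As a rough check (a minimal sketch; the path is just the one mentioned above), you can see from a console whether /mnt/cache is backed by a real pool or only exists in RAM:

         df -h /mnt/cache    # a real pool shows its own filesystem here; rootfs/tmpfs means the path is living in RAM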
  2. In Unraid each data drive is a self-contained file system and there is no requirement for them to be used equally. The rules on how files are allocated to the various drives are controlled by the settings of the individual shares. With a single parity drive you can have any single drive fail without data loss. If more drives fail than you have parity drives then the data on the failed data drives will be lost, but the data on all the other data drives will still be fine. Have you read any of the descriptions of how this all works in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI?
  3. If the disk is not disabled then format writes to the physical disk, but if it is disabled it writes to the emulated one. In both cases Unraid will update parity to reflect this. When you select the format option there is a big warning pop-up telling you this is going to happen and that format is never part of a disk recovery operation.
  4. The link to the drivers does not seem to work (at least for me). I was trying to see whether they have drivers for the latest Linux kernels.
  5. You should follow the procedure documented here in the documentation accessible via the ‘Manual’ link at the bottom of the GUI, which covers rebuilding a drive onto itself.
  6. All you have said looks valid, although you could possibly save time by using the Parity Swap procedure to combine upgrading a parity drive with swapping out a 4TB drive for the old parity drive. This would leave the previous 18TB drive as a data drive instead of the 20TB drive you said you were going to put in place of the 4TB drive, but is that an issue? Again, with dual parity two could be done at the same time, but doing them separately is safer.
  7. That definitely looks like it should work. I have done similar operations myself in the past with no problems. Are you sure you did not mistype anything in the cp command? Remember that capitalisation is significant as Linux is case sensitive.
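     As an illustration (the paths here are made up), these two commands refer to different locations because Linux treats differently-capitalised names as different folders, so a stray capital letter can silently copy to the wrong place:

         cp -r /mnt/user/Media/Films /mnt/disk2/
         cp -r /mnt/user/media/Films /mnt/disk2/    # ‘media’ is a different share/folder as far as Linux is concerned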
  8. There is this section in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI.
  9. In which case, as mentioned, you need to provide your system’s diagnostics to get any sort of informed feedback.
  10. One good thing about Unraid is that, as it effectively reinstalls itself from the archives on the flash drive on every boot, it is relatively easy to get back to a clean state.
  11. I would not open ANY ports to a container unless you NEED to access it from outside your network.
  12. Your split level setting is very restrictive, and it will be what is stopping the other disk being used. Only the top level TVShows folder is allowed to exist on more than one drive. Once you have created a sub-folder under TVShows you are forcing all its contents to always go to the disk that sub-folder was created on, regardless of whether there is enough free space. There is no way this process should have resulted in data disappearing as you describe. Are you sure you have not accidentally moved it into a sub-folder somewhere?
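     For illustration (the show names are just examples), with a split level that only allows the top level directory to be split, the layout ends up looking something like this:

         /mnt/disk1/TVShows/ShowA/Season 1/...    <- ShowA was first created on disk1
         /mnt/disk2/TVShows/ShowB/Season 1/...    <- ShowB can live on a different disk

     New episodes of ShowA must then go to disk1, even if disk2 has far more free space.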
  13. As was mentioned, the parity swap procedure is for exactly this scenario, although it would leave you with the 10TB drive in place of the failed 8TB drive and the other 14TB drive as a spare. If you want to make the 10TB drive a spare you could then follow the standard procedure for upgrading the size of a drive in the array, but maybe it is simpler to keep it as a spare as it could then replace any failed drive.
  14. Single bit errors like you describe will always be hardware related. It is possible that a software upgrade changes the frequency, as it may start regularly using a memory address that was not much used before the upgrade.
  15. Are the NVMe disks being seen at the BIOS level? If not, you need to work out why, as Unraid cannot be expected to see them if the BIOS does not.
  16. You may have a power supply issue, as a parity check is one time when all drives are being accessed at the same time. You can edit the config/disk.cfg file on the flash drive to change the startArray option to “no” to avoid starting the array during the boot sequence. That would give you a chance to disable the docker and VM services to see if that makes a difference and to do further investigations. You can also try booting in Safe Mode (which stops any plugins from loading) to see if that has any effect.
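     As a sketch (the exact contents of your disk.cfg may differ slightly), the line to change would end up looking like this:

         startArray="no"    <- previously startArray="yes"; the array then no longer auto-starts on boot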
  17. My guess is that the process supporting User Shares has crashed (if you post a copy of your system’s diagnostics we can probably confirm this). If that is the case, rebooting the system will be needed to fix this.
  18. That sounds like a bug, although it is not at all clear what could trigger it. Maybe the flash drive had dropped offline so that Unraid could not update the array status on it to say it had been stopped successfully?
  19. @jeffreywhunter you might want to read this section of the online documentation accessible via the ‘Manual’ link at the bottom of the GUI.
  20. Unraid will report on SMART values, but will only mark a drive as failed if a write to it fails.
  21. The diagnostics are a single zip file - please post that instead of all the files inside the zip.
  22. Unraid has had problems getting times (and speeds) correct when there are pauses involved. The plug-in takes account of this in the messages it generates (as opposed to the ones Unraid generates), and also in the history record written on completion.
  23. I use Edge when on Windows, and Safari on my iOS devices. Not sure which I should be voting for?
  24. Was the drive showing as unmountable while it was being emulated before rebuilding it? If so, that would not get cleared by a rebuild. The standard process for handling an unmountable disk is covered here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI.