Everything posted by itimpi

  1. You can just remove the disk, and then when you reboot the system the missing drive will be emulated using the combination of parity plus all the remaining data drives (assuming you currently have valid parity). Since you have dual parity you still have a level of protection against another drive failing. Having said that, there is no SMART information for the failed drive as it appears to have dropped offline. You might want to consider power-cycling the server and getting new diagnostics to see if the SMART information for the disabled disk becomes available. Experience has shown that disks more commonly get disabled for reasons other than the disk itself failing. You should also check that the emulated disk is mounting fine and that its contents are what you expect.
  2. The only way to know where the corruption occurred is if you are using the btrfs file system on your array disks (in which case a scrub will tell you about corrupt files), or if you have checksums for your files when running xfs/reiserfs (a simple way of generating such checksums is sketched below).
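     As an illustration only (this is not a built-in Unraid feature), here is a minimal Python sketch of creating and later verifying SHA-256 checksums for files on an xfs/reiserfs share; the share path and manifest name are just example values:

      import hashlib
      import sys
      from pathlib import Path

      def sha256_of(path: Path) -> str:
          # Hash the file in 1 MiB chunks so large media files do not exhaust RAM
          h = hashlib.sha256()
          with path.open("rb") as f:
              for chunk in iter(lambda: f.read(1 << 20), b""):
                  h.update(chunk)
          return h.hexdigest()

      def create_manifest(root: Path, manifest: Path) -> None:
          # Record "<digest>  <path>" for every file under the share
          with manifest.open("w") as out:
              for p in sorted(root.rglob("*")):
                  if p.is_file():
                      out.write(f"{sha256_of(p)}  {p}\n")

      def verify_manifest(manifest: Path) -> None:
          # Re-hash each recorded file and report any mismatch (i.e. possible corruption)
          for line in manifest.open():
              digest, _, name = line.rstrip("\n").partition("  ")
              if sha256_of(Path(name)) != digest:
                  print(f"MISMATCH: {name}")

      if __name__ == "__main__":
          # Example usage:
          #   python checksums.py create /mnt/user/Media media.sha256
          #   python checksums.py verify media.sha256
          cmd, *args = sys.argv[1:]
          if cmd == "create":
              create_manifest(Path(args[0]), Path(args[1]))
          else:
              verify_manifest(Path(args[0]))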
  3. I think a better solution would be to display the release notes as part of the upgrade process and make the user press an OK button before actually doing the upgrade.
  4. Have you read the 6.10.2 release notes and the issues around the tg3 NIC driver? You may need to take the steps to unblacklist that driver to get your network working if you think your hardware is not one of those affected by the tg3 issue.
  5. Those shares get auto-created by enabling Docker and VM support on Unraid. They are never REQUIRED, but if you use either of those features it makes getting support through the forum easier if you have used the standard locations for those features. The domains and isos shares are created if you have enabled VM support. The appdata and system shares are used as part of Docker support, so they would be used if you are running Plex in a Docker container on Unraid (or any other Docker container).
  6. You would have to force Unraid to rebuild a complete drive - you cannot do anything at a lower level of granularity with parity. It is much easier to restore any affected files from your backups.
  7. The procedure is covered here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI.
  8. Have you checked to see if the EFI folder on the flash drive has a trailing tilde character? If so, you need to remove that character to enable UEFI boot.
  9. There would have been a button in the Array Operations part of the Main tab to allow all ‘unmountable’ disks to be formatted, with the new drive listed as one of the drives to be formatted. The procedure for adding disks to an array is documented here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI.
  10. Are you sure that Unraid is not ‘clearing’ the disk? Rebuilding normally only applies when replacing a failed drive. However, depending on exactly what you did, it is possible you have managed to get it into a state where Unraid is ‘rebuilding’ a disk containing all zeroes. Normally when you add a new disk to the array Unraid will clear it by writing zeroes to every sector on the disk so that parity remains valid (which wipes any existing format); the note below shows why all zeroes leave parity unchanged. If the drive has been pre-cleared (but NOT formatted) then this clear phase is skipped. When the clear completes you then need to format the drive to create an empty file system so that it can be used.
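     For anyone curious why the clear phase cannot invalidate parity: with single parity each parity bit is simply the XOR of the corresponding bits on the data disks, so a new disk that is all zeroes contributes nothing (the same reasoning carries over to the second parity disk, which is also a linear combination of the data disks):

      P = D_1 \oplus D_2 \oplus \cdots \oplus D_n, \qquad P' = P \oplus D_{n+1} = P \oplus 0 = P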
  11. This will only happen if you have set the Automount option for that drive. You do not mention having done this.
  12. That tends to indicate a problem reading a file off the flash drive. You might want to try following the procedure here to see if it helps.
  13. You DO have to have at least one drive in the main array to keep Unraid happy, but for a use case like yours it could be something inconsequential like a flash drive. A point to remember is that you still need backups of anything important or irreplaceable. Having a redundant pool is not enough to guarantee that nothing will go wrong that could cause data loss, as there are lots of ways to lose data other than simple drive failure. Your approach needs to take that into account.
  14. What make of server and NIC do you have? Certain combinations seem to produce this error spuriously if virtualisation is enabled in the server’s BIOS.
  15. Dynamix originally referred to the way the current webGUI is structured, and as a result there are quite a few references internal to Unraid that mention dynamix. The same developer is also responsible for a significant number of plugins that have not (yet, anyway) been absorbed into the standard Unraid releases.
  16. The ‘monitor’ task is a standard Unraid one that is built in and runs very frequently (I think every minute). There is no reason that I know of why it should cause a problem.
  17. There is nothing stopping you using SSDs as pools in Unraid in BTRFS or XFS formats. In fact, using SSDs in pools is something most Unraid users do. It is just that if they are used in the main array then TRIM is not currently supported, which means performance can degrade over time. In terms of using them with the ZFS plugin, that should also work fine, although at this point ZFS is not officially supported (hence the need for a plugin). I believe that official support for using ZFS in pools is a roadmap item, so it will arrive at some point.
  18. True, but it is not as flexible as the Parity Check Tuning plugin (which also has more capabilities than the built-in feature).
  19. Do you mean the server webGUI was still working and it showed the array stopped? If so, this is strange because I do not know of anything in Unraid that can cause this to happen.
  20. Follow this procedure from the online documentation accessible via the ‘Manual’ link at the bottom of the GUI.
  21. The normal recommendation is to go for as big a drive as you can afford. Large drives tend to perform better, consume less power and present fewer points of potential failure. Bear in mind, though, that you have to upgrade the parity drive(s) first before you can use larger data drives. You can use the Parity Check Tuning plugin to minimise the effect of parity checks on daily use if you have large drives.
  22. You can do the following:
      • Tools->New Config, selecting the option to keep all assignments
      • return to the Main tab and correct the ones using old names to use the new ones
      • tick the ‘parity is valid’ checkbox
      • start the array to commit the new names
      and everything should come up as normal.
  23. According to the diagnostics you seem to have a corrupt docker.img file. You need to delete and recreate the docker image and then reinstall any containers you want to keep.
  24. If you cannot select a time then you are going to have more difficulty. I simply chose a time I knew was late enough to be very unlikely to impact anybody. I have thought of upgrading the script to check if the drives are all spun down before attempting the power down, which would be easy enough to do (a rough sketch of that check is shown below). That might well allow me to shut down the server earlier, but I have not bothered to do that so far. I found checking things like LAN traffic to be more problematic.
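     To illustrate the idea only (this is not my actual script), here is a minimal Python sketch that powers the server down only when every listed drive reports standby; the device list is just an example, and it assumes the hdparm utility and Unraid’s powerdown command are available on your system:

      import subprocess

      DRIVES = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]  # example device names only

      def is_spun_down(device: str) -> bool:
          # hdparm -C reports "standby" for a spun-down drive, "active/idle" otherwise
          result = subprocess.run(["hdparm", "-C", device],
                                  capture_output=True, text=True)
          return "standby" in result.stdout

      if __name__ == "__main__":
          if all(is_spun_down(d) for d in DRIVES):
              # assumes Unraid's 'powerdown' command performs a clean shutdown
              subprocess.run(["powerdown"])
          else:
              print("Some drives are still spun up - skipping the shutdown")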
  25. The disk configuration is in the config/super.dat file (which is not human readable) on the flash. You can also use the procedure documented here if needed.