Everything posted by itimpi

  1. Those shares get auto-created by enabling Docker and VM support on Unraid. They are never REQUIRED, but if you use either of those features, sticking to the standard locations makes getting support through the forum easier. The domains and isos shares are created if you have enabled VM support. The appdata and system shares are used as part of Docker support, so they would be used if you are running Plex in a Docker container on Unraid (or any other Docker container).
  2. You would have to force Unraid to rebuild a complete drive; you cannot do anything at a lower level of granularity with parity. It is much easier to restore any affected files from your backups.
  3. The procedure is covered here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI.
  4. Have you checked whether the EFI folder on the flash drive has a trailing tilde character? If so, you need to remove it to enable UEFI boot.
  5. There would have been a button to allow all ‘unmountable’ disks to be formatted in the Array Operations part of the Main tab, with the new drive listed as one of the drives to be formatted. The procedure for adding disks to an array is documented here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI.
  6. Are you sure that Unraid is not ‘clearing’ the disk? Rebuilding normally only applies when replacing a failed drive. However, depending on exactly what you did, it is possible you have managed to get it into a state where Unraid is ‘rebuilding’ a disk containing all zeroes. Normally, when you add a new disk to the array, Unraid will clear it by writing zeroes to every sector on the disk so that parity remains valid (which wipes any existing format). If the drive has been pre-cleared (but NOT formatted) then this clear phase is skipped. When that completes you then need to format the drive to create an empty file system so that it can be used.
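As an aside on why clearing keeps parity valid: parity is (conceptually) a bitwise XOR across the data disks, and XOR with zero changes nothing. A minimal sketch, with made-up single-byte values standing in for whole disks:

```shell
#!/bin/sh
# Illustrative only: model each "disk" as one byte and parity as the XOR
# of the data disks. A cleared (all-zero) disk XORs in as a no-op, which
# is why a cleared disk can be added without invalidating parity.
d1=$(( 0xA5 ))            # existing data disk 1
d2=$(( 0x3C ))            # existing data disk 2
new=0                     # freshly cleared disk: all zeroes
parity_before=$(( d1 ^ d2 ))
parity_after=$(( d1 ^ d2 ^ new ))
echo "parity before: $parity_before, after: $parity_after"
[ "$parity_before" -eq "$parity_after" ] && echo "parity unchanged"
```

The same cancellation holds for every sector on the real disks, so no parity rebuild is needed.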
  7. This will only happen if you have set the Automount option for that drive. You do not mention having done this.
  8. That tends to indicate a problem reading a file off the flash drive. You might want to try following the procedure here to see if it helps.
  9. You DO have to have at least one drive in the main array to keep Unraid happy, but for a use case like yours it could be something inconsequential like a flash drive. A point to remember is that you still need backups of anything important or irreplaceable. Having a redundant pool is not enough to guarantee that nothing will go wrong that could cause data loss, as there are lots of ways to lose data other than simple drive failure. Your approach needs to take that into account.
  10. What make of server and NIC do you have? Certain combinations seem to produce this error spuriously if virtualisation is enabled in the server’s BIOS.
  11. Dynamix originally referred to the way the current webGUI is structured, and as a result there are quite a few references internal to Unraid that mention Dynamix. The same developer is also responsible for a significant number of plugins that have not (yet, anyway) been absorbed into the standard Unraid releases.
  12. The ‘monitor’ task is a standard Unraid one that is built in and runs very frequently (I think every minute). There is no reason I know of why it should cause a problem.
  13. There is nothing stopping you using SSDs as pools in Unraid in BTRFS or XFS formats; in fact, using SSDs in pools is something most Unraid users do. It is just that if they are used in the main array then TRIM is not currently supported, which means performance can degrade over time. In terms of using them with the ZFS plugin, that should also work fine, although at this point ZFS is not officially supported (hence the need for a plugin). I believe official support for using ZFS in pools is a roadmap item, so it will arrive at some point.
  14. True, but it is not as flexible as the Parity Check Tuning plugin (which also has more capabilities than the built-in feature).
  15. Do you mean the server webGUI was still working and that it showed the array stopped? If so, this is strange, because I do not know of anything in Unraid that could cause this to happen.
  16. Follow this procedure from the online documentation accessible via the ‘Manual’ link at the bottom of the GUI.
  17. The normal recommendation is to go for as big a drive as you can afford. Large drives tend to perform better, consume less power, and present fewer points of potential failure. Bear in mind, though, that you have to upgrade the parity drive(s) first before you can have larger data drives. You can use the Parity Check Tuning plugin to minimise the effect of parity checks on daily use if you have large drives.
  18. You can do the following:
      - Tools -> New Config, selecting the option to keep all assignments
      - return to the Main tab and correct the ones using old names to use the new ones
      - tick the ‘parity is valid’ checkbox
      - start the array to commit the new names
      Everything should then come up as normal.
  19. According to the diagnostics you seem to have a corrupt docker.img file. You need to delete and recreate the docker image and then reinstall containers you want to keep.
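For reference, the removal step could look something like the sketch below. The path is the usual Unraid default, but yours may differ (check Settings -> Docker), and the Docker service must be stopped before removing the file; recreation is then done from Settings -> Docker.

```shell
#!/bin/bash
# Hypothetical sketch: remove a corrupt docker.img so it can be recreated
# from Settings -> Docker. The path below is the common Unraid default
# (an assumption); verify it on your own server before running anything.
DOCKER_IMG="/mnt/user/system/docker/docker.img"   # assumed default location
if [ -f "$DOCKER_IMG" ]; then
    rm -- "$DOCKER_IMG"
    echo "removed $DOCKER_IMG"
else
    echo "nothing to remove at $DOCKER_IMG"
fi
```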
  20. If you cannot select a time then you are going to have more difficulty. I simply chose a time I knew was late enough to be very unlikely to impact anybody. I have thought of upgrading the script to check if drives are all spun down before attempting the power down which would be easy enough to do. That might well allow me to shutdown the server earlier, but I have not bothered to do that so far. Checking things like LAN traffic I found to be more problematic.
  21. The disk configuration is in the config/super.dat file (which is not human readable) on the flash. You can also use the procedure documented here if needed.
  22. The script I use, in User Scripts, is:

```shell
#!/bin/bash
#noParity=true
logger "overnight powerdown"
powerdown
```

      The #noParity line stops the shutdown happening if a parity check is running. I then use a custom schedule of 15 0 * * 1-6 to shut down at 15 minutes after midnight except on Sundays (as I have a special task running then that needs the server active). I have set the server BIOS to automatically start the server up again at 9:00 AM, which means it is ready for me about 5 minutes later, which is about the time I am first likely to want to access it. Powering off for about 9 hours per day saves, I estimate, over £100 per year on my electricity bill (maybe more now with recent price rises in the UK). I could also have used WOL as an alternative to a fixed wake-up time, which would let me start the server dynamically from my iPhone/iPad if I wanted more flexibility on the startup time.
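The ‘check the drives are spun down first’ enhancement mentioned a couple of answers back could be sketched like this. The decision logic is pulled into a function so it can be tried with simulated states; on a real server the states would come from something like hdparm -C on each array device (an assumption), and the actual powerdown call is only shown as a comment:

```shell
#!/bin/bash
# Hypothetical sketch: only power down when every array drive reports
# "standby". Drive states are passed in as arguments here; on a live
# server they would be gathered from `hdparm -C /dev/sdX` (assumption).
should_powerdown() {
    for state in "$@"; do
        [ "$state" = "standby" ] || return 1
    done
    return 0
}

# Simulated states instead of live hdparm output:
if should_powerdown standby standby standby; then
    echo "all drives spun down"    # the real script would run: powerdown
else
    echo "powerdown skipped: drives still active"
fi
```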
  23. You cannot move the old 10TB drive to the array and change the parity drive at the same time; is that what you are trying to do?
  24. Did you:
      - stop the array
      - unassign the parity drive
      - assign the 14TB drive as parity
      - start the array to build parity on the new 14TB drive
      If that is all you did, there should be no message saying you cannot start the array. Only when building the 14TB parity drive completes will you be allowed to add the new data drives.
  25. Those diagnostics show you still have a 10TB parity drive. Until you have replaced that with a 14TB drive and built parity on that you will not be able to add the data drives (as one of them is 14TB).