Everything posted by itimpi

  1. Yes, but since you want to end up with dual parity you can also assign the parity2 disk and they will both be built at the same time.
  2. The parity swap procedure is not appropriate to your situation where you are basically just upgrading the parity drive(s). It only applies in the special case of a failed data drive where you want to simultaneously upgrade parity and use the old parity drive to replace the failed data drive. In your case you first upgrade parity and then afterwards add the old parity drive as another data drive.
  3. It looks as though your flash drive has dropped offline.
  4. Not that I know of, other than ensuring that the wait value is large enough. I am not sure how you could tell that a container had finished starting up in any reliable manner.
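     As a hedged illustration (not something from the original post): if a container defines its own HEALTHCHECK, its status can be polled with docker inspect, which would be more reliable than a fixed wait:
         docker inspect --format '{{.State.Health.Status}}' my-container   # "my-container" is a placeholder name
         # prints "healthy" once the container's healthcheck passes; only applicable if the image defines a HEALTHCHECK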
  5. If you want any sort of informed feedback you should post your system’s diagnostics zip file. Without that you are only going to get guesses.
  6. I suspect this is happening when CA Backup runs and stops all containers, but then fails in restarting them all afterwards. It should be easy enough to check if this problem coincides with when you have configured that to run.
  7. The easiest approach is to do it via a User Share rather than a disk share (which is what sharing the cache directly is), as then it does not matter whether a file is on the cache or the array.
  8. No idea on the final version, but there have already been comments that rc3 will have the 5.15 kernel (or later I guess).
  9. With basic Unraid you would have to start again. Normally all array level operations need to be restarted from the beginning if they do not run to completion before stopping the server. However, if you have the Parity Check Tuning plugin installed, you are on Unraid 6.9.0 (or later) and the following conditions are met:
       • you have set the option to resume array operations next time the array starts in the plugin settings [EDIT: I have been thinking of making this the default behaviour so that the user does not need to explicitly request it - any feedback on this idea is welcomed]
       • you pause the sync if you want, although this should not be necessary
       • you successfully stop the array (so that you will not end up with an unclean shutdown)
     then you can shut down the server and, when you next boot it (presumably after the power is OK), the parity operation should be resumed from the point it had already reached when the array is started. Note that I used the word ‘should’ because, although I have tested what I think is this scenario in my development environment, I have not yet had any feedback as to whether it has been handled successfully (or not) in a real world situation. If it does not work for any reason then it can be restarted from the beginning, so it does not hurt to try. I would love to get some feedback from anyone who has used this feature. EDIT: If anyone finds themselves about to use this feature in anger I would be very grateful if they could also set the ‘testing’ mode of logging in the plugin settings and then send me their diagnostics/syslog after restarting the array so I can check if the restart was handled as expected.
  10. It might be worth trying a ‘cp’ command rather than rsync to see if this changes the symptoms in case it is something to do with the way rsync operates.
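      For example (the paths here are purely illustrative, not taken from the thread), these two commands copy the same directory tree so the symptoms can be compared:
          rsync -av /mnt/disk1/data/ /mnt/disk2/data/    # copy using rsync
          cp -a /mnt/disk1/data/. /mnt/disk2/data/       # equivalent copy using cp, preserving attributes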
  11. You several times refer to mnt/* without a leading / character. That makes it a relative path rather than an absolute one, and what it is relative to is context sensitive. Could this be the cause of your issues?
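      A quick illustration of the difference (the working directory is hypothetical):
          cd /root
          ls mnt/user     # relative path: resolves to /root/mnt/user, which probably does not exist
          ls /mnt/user    # absolute path: always refers to the Unraid user shares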
  12. Yes, but with that value no file can be as big as 1GB or you will still have problems, as Unraid does not take the size of a file into account when selecting the location to store it. Unraid never switches to another location once it has selected one for a file - instead you get out-of-space errors when the file does not fit.
  13. I guess that is OK as it is a persistent location.
  14. Click on the drive on the Main tab and select the Scrub option.
  15. Have you explicitly set the Minimum Free Space for the cache as described here, rather than the one at the share level that applies to array disks?
  16. You will need to store the scripts on the flash drive and recopy them any time the system is booted, as that location is in RAM. You will also have to add a command to set the execute bit in the permissions.
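      As a minimal sketch (the /boot/custom/scripts location is just an assumption for illustration), lines like the following could be added to the /boot/config/go file so that the copy and the execute bit are reapplied on every boot:
          # copy scripts from the flash drive (persistent) into a RAM location and make them executable
          cp /boot/custom/scripts/*.sh /usr/local/bin/
          chmod +x /usr/local/bin/*.sh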
  17. No. With 6.9.0 you can have multiple pools (with user specified names), and any of them can be set to cache a share. Under Settings -> Identification.
  18. Looks like it has an old firmware version, so you might want to consider updating it to the most recent version.
  19. Limetech never gives predicted release dates.
  20. You should set the Minimum Free Space setting for the cache (accessed by clicking on it on the Main tab) to be more than the largest file you expect to copy. When the free space on the cache falls below this value Unraid will start by-passing the cache and write new files directly to the array. This should stop the cache getting so full it causes problems.
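      A worked example with purely illustrative numbers: if the largest files you copy are around 40GB, setting Minimum Free Space on the cache to 50GB means that once less than 50GB remains free, new files are written directly to the array instead of risking an out-of-space error part-way through a write to the cache.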
  21. Safe Mode in Unraid (it is one of the options on the Unraid boot menu). Safe mode suppresses loading of any plugins.
  22. If you really have no internet at all then you may have a problem, as that would mean you cannot even email Limetech to get them to issue a replacement licence manually. In terms of having a backup, I would suggest that every time you make a configuration change of any significance you click on the flash drive on the Main tab and select the option to download a backup as a ZIP file, so that you have one locally.
  23. You are getting continual resets on disk4 which suggests a cabling issue with the drive.
  24. The Parity Check Tuning plugin does not take account of other plugins running at the same time. There is a level of reduced performance in the parity check speed while the backup is running, but probably not enough to really matter.