Everything posted by itimpi

  1. It depends on whether you have single or dual parity. If you only have parity1 then you can re-order drives without affecting parity, so you can (optionally) reassign them to remove any gaps. However, if you have parity2 you cannot reorder any drives without invalidating parity2, as that uses the disk slot number as part of its calculation.
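     To see why this is, here is a toy sketch in Python (an illustration of the principle only, not Unraid's actual parity code); it models parity1 as a plain XOR, which is order independent, and parity2 as a RAID-6 style syndrome in which each data byte is weighted by its slot number:

        def gf_mul(a: int, b: int) -> int:
            """Multiply two bytes in GF(2^8) using the 0x11d polynomial."""
            r = 0
            for _ in range(8):
                if b & 1:
                    r ^= a
                b >>= 1
                a <<= 1
                if a & 0x100:
                    a ^= 0x11d
            return r

        def gf_pow(base: int, exp: int) -> int:
            r = 1
            for _ in range(exp):
                r = gf_mul(r, base)
            return r

        def parity1(disks):
            p = 0
            for d in disks:
                p ^= d                              # plain XOR: order does not matter
            return p

        def parity2(disks):
            q = 0
            for slot, d in enumerate(disks):
                q ^= gf_mul(gf_pow(2, slot), d)     # weighted by slot number
            return q

        data      = [0x11, 0x22, 0x33]              # one byte per data slot
        reordered = [0x22, 0x11, 0x33]              # same data, slots 0 and 1 swapped
        assert parity1(data) == parity1(reordered)  # parity1 still valid
        assert parity2(data) != parity2(reordered)  # parity2 invalidated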
  2. As far as I can see that is expected behaviour with your current settings: all shares with the Use Cache=Yes setting (which is required if you want any contents on the cache to be moved to the array) currently have no files on the cache. What share do you expect to be moved to the array? I also note that you have not set a Minimum Free Space value for the cache pool. Setting one is highly advisable to help stop shares with the Use Cache=Prefer setting completely filling the cache.
  3. As far as I know there is no way to change the format while keeping the data intact. As for which format without a 4GB limit is most suitable, that will depend on what systems you want to move the drive between. For example, if it is Windows and Unraid then NTFS is probably the most convenient to use.
  4. That means that Unraid was not able to stop the array successfully before the shutdown timeouts kicked in. Have you tried stopping the array and timing how long it takes? You can then use the answer to check whether the current settings for the various timeouts are sufficient.
  5. The obvious answer is to use a file system that does not have a 4GB file size limit. It might help if we knew more about the drive you are trying to copy the file to.
  6. Do you have data on that server you want to keep, or is it just the drives you want to keep? If you want to keep the data, what file system are they using?
  7. I think it is an issue in that, since standard Unraid does not have preclear built in, there is no status information available to the GUI to indicate a precleared disk. The check for the signature is done at a lower level during the add process. I guess this could be changed in a future Unraid release, but I would not want to bet on it.
  8. The whole zip file, as many of its parts are often looked at to help with diagnosing any particular problem.
  9. The screenshot shows a failure reading off the flash drive. Sometimes rewriting all the bz* type files for the release on the flash drive fixes this type of problem.
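     As a rough illustration of what "rewriting the bz* files" involves, here is a minimal Python sketch that copies them from an extracted copy of the matching release zip back onto the flash drive and verifies each copy by checksum. The two directory paths are assumptions; adjust them for where you unzipped the release and where the flash drive is mounted:

        import hashlib
        import shutil
        from pathlib import Path

        RELEASE_DIR = Path("/tmp/unraid-release")   # extracted release zip (assumed path)
        FLASH_DIR = Path("/boot")                   # mounted Unraid flash drive (assumed path)

        def sha256(path: Path) -> str:
            h = hashlib.sha256()
            with path.open("rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        for src in sorted(RELEASE_DIR.glob("bz*")):
            dst = FLASH_DIR / src.name
            shutil.copy2(src, dst)                  # rewrite the file on the flash drive
            if sha256(src) != sha256(dst):
                raise SystemExit(f"copy of {src.name} did not verify")
            print(f"{src.name}: rewritten and verified")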
  10. No, as building parity overwrites every sector on the drive. Preclear is never REQUIRED in current Unraid releases - the only reason for using it is to carry out a confidence check on the drive before using it in Unraid.
  11. Yes, but since you want to end up with dual parity you can also assign the parity2 disk and they will both be built at the same time.
  12. The parity swap procedure is not appropriate to your situation, where you are basically just upgrading the parity drive(s). It only applies in the special case of a failed data drive where you want to simultaneously upgrade parity and use the old parity drive to replace the failed data drive. In your case you first upgrade parity, and then afterwards add the old parity drive as another data drive.
  13. It looks as though your flash drive has dropped offline.
  14. Not that I know of, other than ensuring that the wait value is large enough. I am not sure how you could tell that a container had finished starting up in any reliable manner.
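     One partial approach, for containers that define a Docker HEALTHCHECK, is to poll the reported health state rather than rely on a fixed wait. A minimal sketch (not an Unraid feature, and it only works for containers that actually define a healthcheck; the container name is just an example):

        import json
        import subprocess
        import time

        def wait_until_healthy(name: str, timeout: float = 120.0) -> bool:
            """Poll `docker inspect` until the container reports healthy."""
            deadline = time.monotonic() + timeout
            while time.monotonic() < deadline:
                out = subprocess.run(
                    ["docker", "inspect", "--format",
                     "{{json .State.Health.Status}}", name],
                    capture_output=True, text=True,
                )
                # inspect errors out if the container has no healthcheck
                if out.returncode == 0 and json.loads(out.stdout or '""') == "healthy":
                    return True
                time.sleep(2)
            return False

        if wait_until_healthy("my-container"):      # example name
            print("container reports healthy; safe to start the next one")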
  15. If you want any sort of informed feedback you should post your system’s diagnostics zip file. Without that you are only going to get guesses.
  16. I suspect this is happening when CA Backup runs and stops all containers, but then fails to restart them all afterwards. It should be easy enough to check whether this problem coincides with when you have it configured to run.
  17. The easiest thing to do is simply to do it via a User Share rather than a disk share (which sharing the cache is), as then it does not matter whether a file is on the cache or the array.
  18. No idea on the final version, but there have already been comments that rc3 will have the 5.15 kernel (or later, I guess).
  19. With basic Unraid you would have to start again. Normally all array level operations need to be restarted from the beginning if they do not run to completion before stopping the server. However, if you have the Parity Check Tuning plugin installed, you are on Unraid 6.9.0 (or later), and the following conditions are met:
      • you have set the option in the plugin settings to resume array operations the next time the array starts [EDIT: I have been thinking of making this the default behaviour so that the user does not need to explicitly request it - any feedback on this idea is welcomed]
      • you pause the sync if you want, although this should not be necessary
      • you successfully stop the array (so that you will not end up with an unclean shutdown)
      then you can shut down the server, and when you next boot it (presumably after the power is OK) the parity operation should be resumed from the point it had already reached when the array is started. Note that I used the word ‘should’ because, although I have tested what I think is this scenario in my development environment, I have not yet had any feedback as to whether it has been handled successfully (or not) in a real world situation. If it does not work for any reason then the check can be restarted from the beginning, so it does not hurt to try. I would love to get some feedback from anyone who has used this feature. EDIT: If anyone finds themselves about to use this feature in anger I would be very grateful if they could also set the ‘testing’ mode of logging in the plugin settings and then send me their diagnostics/syslog after restarting the array, so I can check whether the restart was handled as expected.
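     For anyone curious, the underlying resume technique is conceptually just checkpointing: persist the position reached to a file that survives a reboot, and continue from it on the next start. A simplified Python sketch of the idea only (not the plugin's actual code; the state file path and sector counts are made up):

        from pathlib import Path

        STATE = Path("/boot/config/parity-resume.txt")  # assumed checkpoint location
        TOTAL_SECTORS = 1_000_000                       # illustrative size

        def load_position() -> int:
            # Resume from the checkpoint if one survived the reboot.
            return int(STATE.read_text()) if STATE.exists() else 0

        def check_range(start: int, end: int) -> None:
            for sector in range(start, end):
                # ... compare parity for this sector ...
                if sector % 100_000 == 0:
                    STATE.write_text(str(sector))       # checkpoint progress
            STATE.unlink(missing_ok=True)               # finished: clear the checkpoint

        check_range(load_position(), TOTAL_SECTORS)
        print("parity operation complete")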
  20. It might be worth trying a ‘cp’ command rather than rsync to see if this changes the symptoms in case it is something to do with the way rsync operates.
  21. You several times refer to mnt/* without a leading / character. That makes it a relative path rather than an absolute one, and what it is relative to is context sensitive. Could this be the cause of your issues?
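     The difference is easy to demonstrate (Python here, but any language resolves paths the same way):

        import os

        os.chdir("/tmp")
        print(os.path.abspath("mnt/user/share"))    # -> /tmp/mnt/user/share (relative to cwd)
        print(os.path.abspath("/mnt/user/share"))   # -> /mnt/user/share (absolute)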
  22. Yes, but with that value no file can be as big as 1GB or you will still have problems, as Unraid does not take the size of a file into account when selecting the location to store it. Unraid never switches to another location once it has selected one for a file - instead you get out-of-space errors when the file does not fit.
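     A toy model of this allocation behaviour (illustrative only, not Unraid's actual code): the free space test is against Minimum Free Space at the moment a location is chosen, and the incoming file's size never enters into it:

        GIB = 1024**3

        def location_eligible(free_bytes: int, minimum_free: int) -> bool:
            # Unraid-style check when choosing where a NEW file goes.
            # Note: the incoming file's size is not an input at all.
            return free_bytes > minimum_free

        cache_free = int(1.5 * GIB)
        file_size = 2 * GIB
        if location_eligible(cache_free, minimum_free=1 * GIB):
            # The 2GB file is still sent here because free > minimum,
            # so the write later fails with an out-of-space error.
            print("cache chosen; file fits:", file_size <= cache_free)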
  23. I guess that is OK as it is a persistent location.
  24. Click on the drive on the Main tab and select the Scrub option.