Everything posted by itimpi

  1. Only means new files are created on the cache, but any files for that share already on the array are left there. Use Prefer to cause files to be moved from the array to the cache. Once all files are on the cache you can (optionally) change it to Only or simply leave it at Prefer.
  2. Setting it to Yes means you want files to be moved from the cache to the array when mover runs (the sketch below summarises how mover treats each setting). Use the Help built into the GUI to see how the settings for this value work, or read the online documentation about this setting.
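
     A minimal sketch of those mover rules in Python, paraphrasing the behaviour described in the two answers above (purely an illustration, not Unraid's actual code):

     ```python
     # Toy summary of what mover does for each Use Cache setting.
     def mover_action(setting: str, file_is_on: str) -> str:
         """file_is_on is 'cache' or 'array'."""
         if setting == "yes" and file_is_on == "cache":
             return "move cache -> array"
         if setting == "prefer" and file_is_on == "array":
             return "move array -> cache"
         return "leave in place"  # mover ignores 'only' and 'no' shares entirely

     print(mover_action("only", "array"))    # leave in place - why Only strands files on the array
     print(mover_action("prefer", "array"))  # move array -> cache
     print(mover_action("yes", "cache"))     # move cache -> array
     ```
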
  3. You always need backups of anything important, precisely for this type of issue. The Linux 'file' command can be useful if you want to put the effort into sorting out the lost+found folder (as opposed to restoring from backups), as it will at least give you the content type of files whose names were lost - the sketch below shows the idea.
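
     A minimal sketch of that approach in Python (the lost+found path is just an example - adjust it to wherever the file system repair put the recovered files):

     ```python
     #!/usr/bin/env python3
     # Classify everything in a lost+found folder with the 'file' command
     # so the nameless files can at least be grouped by content type.
     import subprocess
     from pathlib import Path

     LOST_FOUND = Path("/mnt/disk1/lost+found")  # example path - adjust to your disk

     for entry in sorted(LOST_FOUND.iterdir()):
         if entry.is_file():
             # 'file -b' prints just the description, e.g. "JPEG image data"
             desc = subprocess.run(["file", "-b", str(entry)],
                                   capture_output=True, text=True).stdout.strip()
             print(f"{entry.name}: {desc}")
     ```
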
  4. You do realise that any files for the appdata share that are on the array will be left there, as mover ignores any shares with the Only setting? If you want files to be moved from the array to the cache drive you need the Prefer setting.
  5. Even if you do not use any of its features, it might be worth installing the Parity Check Tuning plugin as that will start enhancing the Parity History entries with the type of check that was run.
  6. Yes. Unraid is quite happy for you to leave intermediate slots unpopulated.
  7. The procedure works for removing the drive even with Dual Parity as long as you do not subsequently reorder the remaining drives.
  8. No - I think that will be 30K. If you click on the field name in the GUI it will display the help text that gives you the suffixes you can use.
  9. I think I misread the post and it is a pool device - sorry about that. It might be worth checking that there are no files directly under / that should not be there? I cannot tell from the 'df' output whether this could be the case, as it only shows mount points. You might also want to check whether any other folders that are not mount points (and are therefore located in RAM) appear unexpectedly large (e.g. du -sh /tmp) - see the sketch below.
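
     One way to automate that check, sketched in Python (illustrative only; it sizes each top-level folder that is not a mount point, since on Unraid those live in RAM):

     ```python
     #!/usr/bin/env python3
     # Report the size of each top-level folder under / that is NOT a
     # mount point - on Unraid the root file system lives in RAM, so
     # anything large here is eating memory.
     import os

     def dir_size(path):
         total = 0
         for root, dirs, files in os.walk(path, onerror=lambda e: None):
             # Do not descend into anything mounted further down
             dirs[:] = [d for d in dirs if not os.path.ismount(os.path.join(root, d))]
             for f in files:
                 fp = os.path.join(root, f)
                 if not os.path.islink(fp):
                     try:
                         total += os.path.getsize(fp)
                     except OSError:
                         pass
         return total

     for name in sorted(os.listdir("/")):
         path = os.path.join("/", name)
         if os.path.isdir(path) and not os.path.ismount(path):
             print(f"{path}: {dir_size(path) / 2**20:.1f} MiB")
     ```
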
  10. The SMART reports indicate that you have not been able to successfully complete an extended SMART test, which I would think is not a good sign.
  11. You seem to be using an Unassigned Devices drive for appdata? Are you sure this is mounted before docker starts, as otherwise you will be writing to RAM (a simple guard is sketched below). Is there any reason you are not using an Unraid pool for this purpose, or is that UD device meant to be a pool device?
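
      A trivial guard along those lines, as a Python sketch (the mount point is hypothetical - substitute your actual UD path):

      ```python
      #!/usr/bin/env python3
      # Refuse to start containers if the appdata device is not actually
      # mounted - otherwise writes land in RAM underneath the mount point.
      import os, sys

      APPDATA = "/mnt/disks/appdata"  # hypothetical UD mount point - adjust

      if not os.path.ismount(APPDATA):
          sys.exit(f"{APPDATA} is not mounted - not starting docker")
      print(f"{APPDATA} is mounted - safe to start docker")
      ```
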
  12. It depends on whether you have single or dual parity. If you only have parity1 then you can re-order drives without affecting parity, so you can (optionally) reassign them to remove any gaps. However, if you have parity2 you cannot reorder any drives without invalidating parity2, as that calculation uses the disk slot number - the sketch below shows why.
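
      To make that concrete, here is a toy single-byte model in Python of RAID-6 style P/Q parity (my understanding of how parity1/parity2 behave, not Unraid's actual code): the XOR parity P is unchanged when two disks swap slots, but Q weights each byte by a power of a GF(2^8) generator indexed by the slot number, so it changes.

      ```python
      #!/usr/bin/env python3
      # Why parity1 (XOR) survives re-ordering disks but parity2 (Q) does not.
      def gf_mul(a, b):
          """Multiply in GF(2^8) with the usual RAID-6 polynomial 0x11d."""
          p = 0
          for _ in range(8):
              if b & 1:
                  p ^= a
              carry = a & 0x80
              a = (a << 1) & 0xFF
              if carry:
                  a ^= 0x1D
              b >>= 1
          return p

      def parity(disks):
          p = q = 0
          g = 1                  # g = 2**slot, advanced each iteration
          for byte in disks:     # slot order matters for q, not for p
              p ^= byte
              q ^= gf_mul(g, byte)
              g = gf_mul(g, 2)
          return p, q

      original = [0x11, 0x22, 0x33]  # one byte per data disk, slots 0..2
      swapped  = [0x22, 0x11, 0x33]  # same data, slots 0 and 1 exchanged

      p1, q1 = parity(original)
      p2, q2 = parity(swapped)
      print(f"P: {p1:#04x} vs {p2:#04x}")  # identical - parity1 still valid
      print(f"Q: {q1:#04x} vs {q2:#04x}")  # different - parity2 invalidated
      ```
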
  13. As far as I can see that is expected behaviour with your current settings: all shares with the Use Cache=Yes setting (which is required if you want any contents on the cache to be moved to the array) currently have no files on the cache. What share do you expect to be moved to the array? I also note that you have not set a Minimum Free Space value for the cache pool. This is highly advisable, as it helps stop shares with the Use Cache=Prefer setting completely filling the cache.
  14. As far as I know there is no way to change the format while keeping the data intact. As to which format without a 4GB limit is most suitable, that will depend on what systems you want to move the drive between. For example, if it is Windows and Unraid then NTFS is probably the most convenient to use.
  15. That means that Unraid was not able to successfully stop the array before the shutdown timeouts kicked in. Have you tried stopping the array and timing how long it takes? You can then use the answer to check whether the current settings for the various timeouts are sufficient.
  16. The obvious answer is to use a file system that does not have a 4GB file size limit (the usual culprit is FAT32 - see the sketch below). It might help if we knew more about the drive you are trying to copy the file to.
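
      For reference, FAT32's maximum file size is 4 GiB minus one byte; here is a quick Python check against that limit (the path is hypothetical):

      ```python
      #!/usr/bin/env python3
      # Check whether a file would fit on a FAT32-formatted drive.
      import os

      FAT32_MAX = 2**32 - 1            # 4 GiB minus one byte
      path = "/mnt/user/isos/big.img"  # hypothetical file - adjust

      size = os.path.getsize(path)
      print(path, size, "bytes -",
            "fits on FAT32" if size <= FAT32_MAX else "too big for FAT32")
      ```
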
  17. Do you have data on that server you want to keep, or is it just the drives you want to keep? If you want to keep the data, what file system are they using?
  18. I think it is an issue in that, since standard Unraid does not have preclear built in, there is no status information available to the GUI to indicate a precleared disk. The check for the preclear signature is done at a lower level during the add process. I guess this could be changed in a future Unraid release, but I would not want to bet on it.
  19. The whole zip file, as many of its parts are often looked at to help with diagnosing any particular problem.
  20. The screenshot shows a failure reading off the flash drive. Sometimes rewriting all the bz* type files for the release on the flash drive fixes this type of problem.
  21. No, as building parity overwrites every sector on the drive. Preclear is never REQUIRED in current Unraid releases - the only reason for using it is to carry out a confidence check on the drive before using it in Unraid.
  22. Yes, but since you want to end up with dual parity you can also assign the parity2 disk and they will both be built at the same time.
  23. The parity swap procedure is not appropriate to your situation where you are basically just upgrading the parity drive(s). It only applies in the special case of a failed data drive where you want to simultaneously upgrade parity and use the old parity drive to replace the failed data drive. In your case you first upgrade parity and then afterwards add the old parity drive as another data drive.
  24. It looks as though your flash drive has dropped offline.