
itimpi (Moderators)
  • Posts: 20,694
  • Days Won: 56

Everything posted by itimpi

  1. Not a bad idea. If you cannot easily do this then you could let Unraid be the 'remote' server to itself and log to something like a pool drive or an unassigned device that is less likely to drop offline.
  2. True, but it can mean that it is not impacting performance at times when you know the array will be busy with other work. If it is just the regular scheduled check being done as part of system housekeeping the elapsed time is normally not so critical if it does not impact daily use.
  3. I would expect the check to take about 80% longer, as the time is determined primarily by the size of the parity disk. If you are not already using it then I would think the Parity Check Tuning plugin would be a good idea?
  4. The diagnostics start after the system was booted yesterday at 11:06, which suggests there WAS a shutdown. If you are not aware of one then maybe you have some other problem?
  5. It is a top level folder on the flash. If it is missing when you think there should be a log file there then this will typically mean that the flash had dropped offline before Unraid could write the log file there.
  6. The only column that matters for Pending Sectors is the Raw Value. You want that to stay at 0.
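As a sketch, that raw value can be read from the command line. The device name /dev/sdX and the sample attribute line below are illustrative stand-ins so the snippet is self-contained; on a live server you would pipe real `smartctl -A /dev/sdX` output into the same awk instead.

```shell
# Parse the Raw_Value column (last field) of the Current_Pending_Sector
# attribute, as it appears in `smartctl -A /dev/sdX` output. The sample
# line stands in for real smartctl output here.
sample='197 Current_Pending_Sector 0x0032 100 100 000 Old_age Always - 0'
echo "$sample" | awk '/Current_Pending_Sector/ {print "Pending sectors (raw): " $NF}'
```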
  7. You cannot assume that the permissions on a Docker container folder under appdata will allow access over SMB. Different containers have different requirements over the permissions they set up.
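A minimal illustration of the mismatch, using a temporary directory rather than a real appdata path; the 600 mode mimics what a root-running container might create, and the file name is made up:

```shell
# Simulate a file written by a container with a restrictive mode, then
# relax it so other users (such as the SMB user) can read it.
tmp=$(mktemp -d)
touch "$tmp/settings.json"          # hypothetical container config file
chmod 600 "$tmp/settings.json"      # owner-only: access as another user fails
stat -c '%a %n' "$tmp/settings.json"
chmod 666 "$tmp/settings.json"      # world read/write: SMB access now works
stat -c '%a %n' "$tmp/settings.json"
rm -rf "$tmp"
```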
  8. The normal recommendation is to start by running the repair against the emulated drive. If that results in a lot of files in lost+found (because the emulated drive has bad corruption) then try repairing the physical drive instead (the level of corruption can differ) as a fallback. In many cases repairing the emulated drive goes through error-free.
  9. The correct procedure for handling an unmountable drive is described here in the online documentation accessible via the ‘Manual’ link at the bottom of the Unraid GUI.
  10. If that is the case then you need to go into the settings for the share and set up the Use Cache setting appropriately and specify which pool that share should use. The help built into the GUI can help with picking a value that suits your need. The diagnostics showed that all the shares seem to currently be set to NOT use a pool.
  11. Exactly what share(s) do you want to use with a pool? On a quick check most of them have the Use Cache=No setting.
  12. You should also check that regular scheduled parity checks are set to be non-correcting.
  13. Yes. A UPS is always a sensible investment when running a server, particularly if prone to power cuts.
  14. You can:
      1. Use Tools->New Config and select the option to keep all current assignments.
      2. Return to the Main tab and assign the additional drive.
      3. Start the array to commit the assignments and start building parity based on them.
      Note that your array will not be protected until the parity build finishes.
  15. That suggests that the disk dropped offline or was unmountable. I would suggest you run a file system check on the drive in case file system corruption is the cause.
  16. This does not make sense: since your array drives are only 2TB, there will never be 3TB free on any drive.
  17. Definitely the case, since anything other than 0 errors is too many.
  18. It looks as if your docker.img file is corrupt (probably due to the cache drive running out of free space). I notice that you have it configured for 75GB, which should be far more than you need; the default of 20GB is normally more than enough as long as you do not have a container misconfigured so that it ends up writing internally to the image. I would suggest:
      1. Stop the docker service.
      2. Go to Settings->Docker and select the option to delete the current image (you may need to turn on advanced view).
      3. Change the image size to 20GB (which should free up space on the cache).
      4. Restart the docker service to create a new 20GB docker.img file.
      5. Go to Apps -> Previous Apps to redownload the container binaries and re-instate the containers you select with their previous settings intact.
      Make sure that any share you want mover to transfer files to the array has Use Cache=Yes set. If not sure of the correct settings for any share, use the GUI built-in help for that field to see how the settings operate and how they affect mover. You should also set the Minimum Free Space setting for the cache (currently 0) to be larger than the biggest file you expect to transfer, which will help avoid running out of space in the future.
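One rough way to pick a Minimum Free Space value is to find the biggest file you currently store. A sketch, where the directory path is an example you would replace with your own share:

```shell
# List the single largest file under a directory, size in bytes first,
# to guide the Minimum Free Space setting (which should exceed it).
dir="${1:-.}"                                   # e.g. /mnt/user/media
find "$dir" -type f -printf '%s %p\n' | sort -n | tail -n 1
```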
  19. According to those diagnostics the array has not been started yet (in normal mode).
  20. It might make more sense if you think of the setting meaning where new files are to be initially put when they are created. You then need to look at what action mover will subsequently take (if any) to put them into their final location.
  21. The process is covered here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI.
  22. They can certainly become unseated slightly due to vibration.
  23. In step 2 you need to actually stop the docker and VM services, as otherwise they will keep files open that mover is then unable to move.
  24. I did not mean the subnet mask but the actual subnet (e.g. 192.168.0.x).
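To make the distinction concrete: assuming the common /24 mask (255.255.255.0), the subnet in the 192.168.0.x notation is just the host address with its final octet dropped. The address below is a made-up example:

```shell
# Derive the 192.168.0.x style subnet notation from a host address,
# assuming a /24 (255.255.255.0) subnet mask. Example address only.
addr='192.168.0.37'
subnet="${addr%.*}.x"    # strip the last octet, append x
echo "$subnet"
```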
  25. You could use the Parity Swap procedure to just replace 1 of the parity drives with a larger one and use the old parity drive to replace the failed one. There is no requirement for both parity drives to be the same size as long as no data drive is larger than the smallest parity drive.