Everything posted by itimpi

  1. Technically the size is not critical as long as the drive is formatted as FAT32. The reason 32GB is quoted as the maximum is that Windows will not format a drive larger than 32GB as FAT32 (not sure about macOS). Also, since Unraid only needs about 2GB on the drive to run successfully, larger drives are just a waste of resources.
  2. It should as long as you click the checkbox to confirm this is what you want to do.
  3. I suspect you want the shares containing the files to be set with Use Cache=Only/Prefer. In such a case mover will not try to move the files to the array. This does, however, mean that if you later want files moved to the array, they need to be in a share which IS set to have files only on the array or which has Use Cache=Yes set.
  4. XFS does not support multiple drives in the same pool - you have to be using BTRFS to do this. Switching to BTRFS would involve first copying the existing contents elsewhere (e.g. the main array) and then copying it back after setting up the multi-drive pool as btrfs and formatting the pool drives. Note that if you have a single drive pool already set up as btrfs you can add additional drives to it and/or change the RAID level without reformatting.
  5. Another (although less likely) possibility is that some of the larger files you moved were something like VM disk images. Such files are normally created as ‘sparse’ files which means they will not necessarily occupy the full amount of disk space their logical size allows. This ‘sparseness’ can be lost during a copy and the files then expanded so that their physical size matches their logical size.
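To illustrate the sparse-file effect: a file's logical size and the space it actually occupies on disk can differ enormously. A minimal Python sketch using only the standard library (the file name is made up):

```python
import os
import tempfile

# Create a 100 MiB "sparse" file: seek past the end and write one byte.
# Only the single written block actually occupies disk space.
path = os.path.join(tempfile.mkdtemp(), "vm-disk.img")
with open(path, "wb") as f:
    f.seek(100 * 1024 * 1024 - 1)
    f.write(b"\0")

st = os.stat(path)
logical = st.st_size            # apparent size: 100 MiB
physical = st.st_blocks * 512   # blocks actually allocated on disk

print(f"logical:  {logical} bytes")
print(f"physical: {physical} bytes")
# On most filesystems physical is far smaller than logical; a naive copy
# that does not preserve sparseness expands the file to its full 100 MiB.
```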
  6. Not much point in 1) since in 3) you tell the system the Downloads share only exists on the pool. You need to be a bit careful with point 4 as it can encounter the behaviour described here in the online documentation (accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page). It is not an issue if the shares in question are set to Use Cache=Yes, as then mover will pick them up. An alternative is to use the Dynamix File Manager plugin, which will make sure you do not fall foul of this behaviour as it always uses a copy/delete strategy. Normally you let mover run automatically at a scheduled time; the default is in the middle of the night when the system would otherwise be idle (assuming you leave it switched on overnight).
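The copy/delete strategy mentioned above can be sketched as follows (illustrative Python only, not the plugin's actual code; `safe_move` is a made-up name):

```python
import os
import shutil
import tempfile

def safe_move(src, dst):
    """Move a file by copy-then-delete: copy to the destination, verify the
    size matches, and only then remove the source. Slower than a rename, but
    the source is never lost if the copy fails part-way."""
    shutil.copy2(src, dst)                      # copy data plus metadata
    if os.path.getsize(dst) != os.path.getsize(src):
        os.remove(dst)                          # bad copy: keep the source
        raise IOError("copy verification failed for %s" % src)
    os.remove(src)                              # copy verified: drop source

# Tiny demonstration with throw-away files
d = tempfile.mkdtemp()
src = os.path.join(d, "pool", "file.bin")
dst = os.path.join(d, "array", "file.bin")
os.makedirs(os.path.dirname(src))
os.makedirs(os.path.dirname(dst))
with open(src, "wb") as f:
    f.write(b"x" * 4096)
safe_move(src, dst)
print(os.path.exists(src), os.path.exists(dst))  # False True
```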
  7. Sometimes just rewriting the files to the flash drive can fix this sort of problem. It is as if that rectifies any sectors that are borderline on reading successfully.
  8. It might not be the log file(s), but it should be easy enough to identify which files are taking up all the space on the flash drive.
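Identifying the culprits is straightforward to script; a small Python sketch using only the standard library (on an Unraid system you would point it at /boot, where the flash drive is mounted; the demo below just uses throw-away files):

```python
import os
import tempfile

def biggest_files(root, count=10):
    """Walk a directory tree and return (size, path) for the largest files,
    biggest first."""
    sizes = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                sizes.append((os.path.getsize(path), path))
            except OSError:
                pass  # file vanished or unreadable: skip it
    return sorted(sizes, reverse=True)[:count]

# Demonstration on a temporary directory with files of known sizes
d = tempfile.mkdtemp()
for name, size in [("syslog", 5000), ("config.cfg", 100), ("big.log", 9000)]:
    with open(os.path.join(d, name), "wb") as f:
        f.write(b"\0" * size)

top = biggest_files(d)
for size, path in top:
    print(f"{size:>12}  {path}")
```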
  9. How did you do the ‘balance’? Are you sure that you did not leave any copies of files behind so that you now have duplicates on different drives?
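For reference, Unraid treats a file as duplicated when the same relative path exists on more than one array drive (the user share then only shows one copy). A quick way to check for leftovers, sketched in Python (the /mnt/diskN style paths are how Unraid mounts individual drives; the demo uses temp directories):

```python
import os
import tempfile

def relative_paths(root):
    """Set of all file paths under root, relative to root."""
    paths = set()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            paths.add(os.path.relpath(full, root))
    return paths

def duplicates(disk_roots):
    """Relative paths that appear on more than one disk."""
    seen, dups = set(), set()
    for root in disk_roots:
        for p in relative_paths(root):
            (dups if p in seen else seen).add(p)
    return dups

# On Unraid: duplicates(["/mnt/disk1", "/mnt/disk2", "/mnt/disk3"])
# Demonstration: "Media/a.mkv" left behind on both simulated disks
d1, d2 = tempfile.mkdtemp(), tempfile.mkdtemp()
for root, names in [(d1, ["a.mkv", "b.mkv"]), (d2, ["a.mkv"])]:
    os.makedirs(os.path.join(root, "Media"))
    for n in names:
        open(os.path.join(root, "Media", n), "wb").close()

dups = duplicates([d1, d2])
print(dups)
```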
  10. That section is about avoiding inadvertently assigning data drives to a parity slot so it then gets wiped. If you have no parity drives then order is probably not critical unless you have shares set to include/exclude specific drives. If so that is easily corrected.
  11. Do you have something like mover logging enabled, or the syslog server with the option to mirror to flash? Even so, it would be a bit unusual for either of these to cause this sort of problem. I expect it would be relatively obvious what the culprit is by examining the contents of the flash drive when it starts getting full to see what is taking up the space.
  12. You need to use Yes (not No) if you want mover to move files to the array. Mover ignores shares set with the No value. You also need to make sure the docker service (and possibly also the vm service) are disabled while doing this as they keep files open which can stop mover transferring them.
  13. Not quite sure I understand what happened here - if you cannot start the array, how was a parity-sync initiated? You are likely to get better informed feedback if you attach your system’s diagnostics zip file to your next post in this thread so we can see the current state of things.
  14. You seem to have completely run out of space on your cache drive, and it looks like this has corrupted the docker image file. Btrfs file systems seem to be prone to corruption if they run out of free space. You also seem to have all your array drives completely full, so I think that will also need resolving to stop a similar issue arising shortly after resolving the current one. You also seem to have 50GB allocated to the docker image file? Is there any reason you made it this large - it would be very rare to need anything like that much. If it is because it was running out of free space, this would suggest a docker container was misconfigured so that it was writing to a path internal to the docker image because it was not mapped to external storage. To stop drives completely filling you should use the Minimum Free Space setting that is available for both pools and shares. To resolve your current issue you are going to have to free up space on the cache drive; recreate the docker image file; and then reinstall your docker apps using the Previous Apps section of Community Applications.
  15. If you want persistent logs that survive reboots then you should enable the syslog server.
  16. The last set of output shows that it also had the -n option set, although you said you only used -Lv. Until you run without the -n (no modify) option nothing will be repaired. The diagnostics posted seem to agree, in that they show there is still corruption on disk1.
  17. No. Unraid never automatically moves files between array drives. Changing Share properties only applies to where new files are placed, existing files are not affected.
  18. These files being created indicates corruption has been detected. The fact you say the bz* type files are changing is a bit worrying - that should never happen. Whether it is a problematical USB drive or something else like the USB port is not clear.
  19. You are likely to get better informed feedback if you attach your system’s diagnostics zip file to your next post in this thread (the syslog snippet is not enough).
  20. Parity has no concept of files - just of disk sectors. It therefore does not know that there are no files so will rebuild the sectors it thinks should be there - even if they correspond to an empty file system.
  21. The moment you format a drive you will lose something like 1-2% of the total size as the file system structures are created to hold directory type information.
  22. Sounds as if you have misunderstood how parity works on Unraid. 1) No. Parity is updated in real time, so after a drive failure you are recovering to the current data state. 2) It is! Updating parity on every write to the array is the main limiting factor on array write speed. The parity check process is just an (optional) housekeeping one to check that everything is as it should be.
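The "parity has no concept of files" point follows from how single parity is computed: each parity sector is simply the XOR of the corresponding sector on every data drive, whatever those sectors contain. A toy Python illustration (three tiny byte-string "drives", nothing Unraid-specific):

```python
# Toy single-parity model: each "drive" is a byte string of equal length.
drives = [
    bytes([0x11, 0x22, 0x33, 0x44]),  # disk1
    bytes([0xAA, 0xBB, 0xCC, 0xDD]),  # disk2
    bytes([0x01, 0x02, 0x03, 0x04]),  # disk3
]

def xor_all(blocks):
    """Byte-wise XOR across all blocks. Parity knows nothing about files or
    filesystems -- only raw sector contents."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

parity = xor_all(drives)

# Simulate losing disk2: XOR-ing parity with the surviving drives rebuilds it
# bit-for-bit, including any sectors that belong to "empty" filesystem space.
rebuilt = xor_all([parity, drives[0], drives[2]])
print(rebuilt == drives[1])  # True
```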