
itimpi

Moderators
  • Posts

    20,780
  • Joined

  • Last visited

  • Days Won

    57

Everything posted by itimpi

  1. There is no concept of data being on one parity disk and not the other - they are equal in terms of content. Are you sure that none of the files/folders in lost+found are actually photos? They will have cryptic names, but you can use the Linux ‘file’ command to find out what the file types (and thus their extensions) should be. The only other thing I can think of is whether a disk recovery utility such as UFS Explorer on Windows might be more successful. Are you actually sure the photos were on the drive you are currently working on?
  2. There is no reason to use both parity drives if repairing a single failed drive, as parity has no concept of data - it just holds the information that, in conjunction with the good data drives, allows the bit pattern of the failed drive to be reconstructed. If you have a lost+found folder, it will contain any files/folders for which the repair process could not find the directory entries to give them their correct names. At this point, is the drive being emulated? If so, then the repair happened against the emulated drive and it is possible it might be more successful against the physical drive.
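The reconstruction idea above can be sketched with XOR arithmetic. Single parity works this way bit-for-bit across the whole disks; the byte values here are arbitrary toy data:

```shell
# Toy XOR parity demo: three "data drives", one byte each.
d1=53; d2=170; d3=12
parity=$(( d1 ^ d2 ^ d3 ))            # what the parity drive stores
rebuilt_d2=$(( parity ^ d1 ^ d3 ))    # reconstruct the "failed" drive 2 from parity + survivors
echo "$rebuilt_d2"                    # prints 170
```

The parity value on its own is meaningless; only combined with every surviving data drive does it yield the missing bit pattern, which is why all good drives must be present and readable for a rebuild.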
  3. Yes - do it from the GUI and use -L (and not -n)
  4. But did you use sdk1? You need to include the partition number, not just the device part. Note that using raw device names will invalidate parity. Ideally you should have the array started in Maintenance mode and then use /dev/md3p1 (Unraid 6.12.x) or /dev/md3 (Unraid 6.11.x and earlier), which maintains parity.
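A sketch of the parity-preserving form of the repair, assuming disk 3 and Unraid 6.12.x device naming (on 6.11.x and earlier the device would be /dev/md3), run with the array started in Maintenance mode:

```shell
# Sketch only - adjust the disk number to match your array.
DEV=/dev/md3p1
if [ -e "$DEV" ]; then
  xfs_repair -n "$DEV"   # -n: check and report only; re-run without -n to actually repair
else
  echo "$DEV not found - is the array started in Maintenance mode?"
fi
```

Running against the md device keeps parity in sync as the repair writes; running against the raw sdX device would bypass parity entirely.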
  5. The question is what did you use for ‘drive name’. It is easy to get it wrong when not using the GUI.
  6. What version of Unraid are you running? There was a bug that could cause binaries to become unavailable during the shutdown sequence; it has been fixed in the 6.12.4 rc18/rc19 releases.
  7. This almost certainly means you got the command wrong. What was the actual command you used?
  8. Are you sure your power supply is up to handling the additional drives?
  9. Setting the array to Primary Storage will do nothing to data already on the cache pool. The only time any files get moved to the array is when the cache pool is primary and array is Secondary and mover direction is cache->array. You may also need docker and VM services disabled for the ‘system’ and ‘domains’ shares to be moved.
  10. What makes you think the drive is dead rather than simply disabled? It DOES have a lot of reallocated sectors, so you may want to consider replacing it anyway.
  11. Are you trying to access a User Share? If so, then make sure the SMB settings for the share do not have the Export option set to No (which is the default). For security reasons you have to actively use one of the other options to make it visible on the network. If you had provided your Unraid system’s diagnostics zip file, we would have been able to see whether this was actually the mistake you made.
  12. You do not mention how it is failing to boot. Sometimes downloading the zip file for a release, extracting all the bz* type files and overwriting the copies on the flash drive can help. In terms of backup, all your settings are in the ‘config’ folder on the flash drive, if that can be read.
  13. They are passed through an anonymization process that removes all obvious sensitive information. Only you can determine if you are happy with the results (they are text files so you can check them).
  14. The correct process would be:
        • Set the cache as primary storage and the array as secondary storage.
        • Set the mover direction as array -> cache.
        • Disable the docker and VM services under Settings.
        • Manually run the mover to get all the files transferred to the cache.
        • When that completes, remove the secondary storage option from the share.
        • Re-enable the docker and VM services.
  15. Parity checks of the Unraid type only apply to the main Unraid array. Parity is handled completely differently on btrfs and ZFS based arrays. You would need to read up on those particular file systems to understand how they handle parity, and also the internal checks they have to detect corruption. The closest comparison to a parity check is their scrub operations.
  16. I would think this is a hardware issue such as a dodgy connection in the LAN cable or a pin on the port at either end.
  17. It depends on where you set the logs to be saved. If you used the mirror-to-flash option, then they will be in the 'logs' folder on the flash drive.
  18. For mover to do anything then you need to have a pool set as primary storage, the array set as secondary storage and the mover direction set appropriately.
  19. If you have not rebooted, then you are likely to get better-informed feedback if you attach your system’s diagnostics zip file, covering the actions you took, to your next post in this thread so we can get an idea of what happened under the covers.
  20. Looking at where the files for shares are located, the only anomaly I see is a share that has parts on two different pools:
        appdata shareUseCache="prefer"   # Share exists on cache, nvme
      However, mover does not move files between pools - this needs to be done manually.
  21. If you enable the syslog server, then the next time the problem occurs you will have a log that survives a reboot, so we can see what led up to the issue. It might also be possible to see something if you manage to take some diagnostics after the problem occurs and before you reboot.
  22. Perhaps you need to reboot to get the correct version to register, although I am not sure why! I will see if I can reproduce it, as I would normally have rebooted when testing. If, after rebooting, you can still reproduce the problem, then please enable the Testing mode logging in the plugin's settings and, after reproducing the error, let me have your system's diagnostics so I can see exactly why you still get the error. Maybe the fix I added has a corner case I did not allow for.
  23. Is there a reason you want to do this? Using ZFS-formatted drives in the main Unraid array is reported to give lower performance than XFS.
  24. The obvious answer is to ask for this to be enhanced to support copying between User Shares and UD-managed devices/shares. I do not see why it should not be implemented (other than the effort involved).