
itimpi

Moderators
  • Posts: 20,694
  • Days Won: 56

Everything posted by itimpi

  1. If for any reason your parity was not completely valid at that point, then this can easily happen, as that would mean the emulated drive would have corruption.
  2. Well, the syslog shows you were getting I/O errors on the drive. Have you tried running a file system check and/or an extended SMART test?
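     As an illustration only, the extended SMART test can also be started from a console (a sketch, assuming the drive is /dev/sdX - substitute the real device name; the same test can be started from the disk's page in the GUI):

         smartctl -t long /dev/sdX     # start an extended (long) SMART self-test; it runs in the background on the drive
         smartctl -a /dev/sdX          # review the attributes and self-test log once the test has finished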
  3. Once a drive is disabled Unraid stops using it until a rebuild (or equivalent) has been performed, so there is no need to remove it from the array at that point. That is why a repair on the emulated drive is recommended as the first step - it works more often than not, and it is the fastest and least error-prone process when nothing goes wrong.
  4. Normally one would start by trying to repair the emulated drive, and then, if that works OK, rebuilding the disabled drive (either to itself or, better, to a replacement). That keeps the physical drive intact as long as possible in case you need to try to repair the physical drive as an alternative recovery option.
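     For illustration, a hedged command-line sketch of what the repair step amounts to, assuming disk1, an XFS file system and the array started in Maintenance mode (the Check Filesystem option on the disk's page in the GUI runs the same tool):

         xfs_repair -n /dev/md1        # check-only pass first (-n makes no changes); the device may appear as /dev/md1p1 on newer releases
         xfs_repair /dev/md1           # run the actual repair once you are happy with what the check reports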
  5. Not a possibility, as once you have swapped out the old parity drives they are no longer available to support a rebuild. This is a possibility, especially if you swap out the drive that failed its SMART test first; however, you will not be protected against any other drive failing until the first parity rebuild completes. I have not tried doing simultaneous Parity Swap procedures, but as long as Unraid lets you do that it would be the fastest. Whichever route you go, keep the old data disks intact until you are back fully protected; if the Parity Swap goes wrong in any way there is a good chance that most of the data on those drives would still be recoverable.
  6. You will have to find out what needs doing in the Resilio docker to get the permissions set correctly in the first place. I do not use it myself, so I have no idea what is required.
  7. I suspect that

        Oct 14 08:43:06 Tower root: Creating new image file: /mnt/disks/ua_appdata/appdata/docker/docker.img size: 20G
        Oct 14 08:43:06 Tower root: ERROR: failed to zero device '/mnt/disks/ua_appdata/appdata/docker/docker.img': Input/output error

     is the source of your problem. Is there a problem with that drive?
  8. You can use this process from the online documentation that can be accessed via the Manual link at the bottom of the Unraid GUI.
  9. I think your change of pools is probably not relevant. According to your syslog you are getting BTRFS errors reported on your Samsung SSD. I have no idea how serious these are, but you should try to run a file system check/repair as described here in the online documentation accessible via the ‘Manual’ link at the bottom of the Unraid GUI. Note that Unraid would not automatically move any files/folders between the pools when you make such a change, so you would probably end up with bits of each share on both pools unless you took specific action to move them over.
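     As a rough illustration, assuming the affected pool is mounted at /mnt/cache, the BTRFS error counters and a checksum scrub can also be run from a console (the pool's page in the GUI offers the same scrub option):

         btrfs device stats /mnt/cache       # per-device read/write/corruption error counters for the pool
         btrfs scrub start -B /mnt/cache     # read everything back and verify checksums; -B waits and prints a summary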
  10. Yes. The rebuild process starts by restoring the file system of the disk it is emulating, and then when that completes extends it to fill the whole drive.
  11. Just use the ‘Manual’ link at the bottom of the Unraid GUI to get to the online documentation that covers this under the Storage Management section.
  12. Unraid never automatically moves files between array drives. If you want this then it has to be done manually, and Krusader is one way of doing this.
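     For example, a minimal sketch of doing such a move by hand over SSH, using an illustrative folder moved from disk1 to disk2 (the important point is to work disk-to-disk, i.e. /mnt/diskX paths, and not mix disk and user share paths):

         rsync -avX /mnt/disk1/Media/Movies/ /mnt/disk2/Media/Movies/   # copy first, preserving attributes
         # verify the copy arrived intact, then remove the source
         rm -r /mnt/disk1/Media/Movies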
  13. What you could do is:
      • Go to Tools -> New Config and use the option to retain all current assignments.
      • Return to the Main tab and move the disk mentioned from being a data drive to a parity drive.
      • Start the array to commit the new assignments and build parity based on the assigned data disks.
      The array will show as unprotected until the parity build completes.
  14. Since the Nextcloud share is on disk3, it is almost certainly responsible for keeping that disk spinning. Since a write to any array drive also results in the parity drive spinning, Nextcloud is probably the culprit for that behaviour as well if it is ever writing to that share.
  15. A connection cannot cause sectors to be reallocated. This is always internal to the drive.
  16. You need to stop the Docker and VM services (not just a container) and then run mover from the Main tab to get the share moved to the cache. These services hold files open in the system share while they are active, stopping mover from acting on them. When mover completes you can then re-enable these services.
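     As an illustration, once the Docker and VM services have been stopped under Settings you can check whether anything still has files open and then start mover from a console (a sketch, assuming the share is visible at /mnt/user/system and that the standard mover script is on the PATH, as it normally is on Unraid):

         lsof +D /mnt/user/system     # list any processes still holding files open in the share
         mover                        # invoke mover manually (equivalent to the Move button on the Main tab)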
  17. Yes - Unraid always corrects parity to match the drives. You could combine these steps to remove parity2 and insert the new 8TB parity1 in a single step, but what you wrote would work as well. The rest looks fine, but keep the old 4TB parity drives intact until you have successfully built new parity on the 8TB drive, just in case you get another failure while doing this.
  18. Perhaps you should check the contents of the config/go file on the flash drive to make sure you have not inadvertently removed the line that starts the web GUI?
  19. The problem was caused by the combination of Krusader and using a 'move' operation. It would end up triggering a manifestation of the behaviour described here. If you had done a copy + delete you would have gotten the result you expected.
  20. The web server is started by the entry in the /config/go file on the flash drive. As long as the flash drive mounts correctly that should therefore be automatic. Have you tried the 'df' command to check it is mounted at /boot?
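     For example, a sketch of the sort of checks meant here (the exact output will vary with your hardware):

         df -h /boot                  # the flash device should show as mounted at /boot
         cat /boot/config/go          # normally contains a line such as '/usr/local/sbin/emhttp &' that starts the web GUI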
  21. Did you capture your system's diagnostics zip file when the failure happened? That might give some insight into why you got the failure. At that point, since you still had one valid parity disk, your system should have been able to handle one data disk failing. Have you actually run a parity check to confirm that parity really WAS valid when you put back the original drives? You do not want another drive to fail and then find out that parity was not really valid, leaving the recovery compromised.
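     For next time, the diagnostics can also be grabbed from a console session if the GUI is unresponsive (a sketch; the GUI route is Tools -> Diagnostics):

         diagnostics                  # writes a dated diagnostics zip, normally to the logs folder on the flash drive (/boot/logs)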
  22. If a disk gets listed twice (with one of them being in UD) then that means it dropped offline and then reconnected with a different device id. I am surprised it is not causing any errors, though. My suspicion is that the pool has lost its redundancy.
  23. Partially right - I had committed the .plg update on my local system but had not pushed it up to GitHub. It is now pushed to GitHub, so thanks for letting me know.
  24. Released a version that displays the version number of the plugin that is running in the plugin's GUI pages (top right). This is to help with a recent report where the version of the plugin showing as installed in the syslog did not agree with what was showing on the Plugins tab. It makes it easier to check the version actually running, which should help with support going forward.
  25. Unfortunately not. I assume you have not installed the My Servers plugin and let it make a backup? You can follow the steps shown here to at least get your drive assignments back with your data intact. This will also get your User Shares back, but with default settings. Unfortunately any other setup will need redoing.