itimpi

Moderators · 20,789 posts · 57 days won

Everything posted by itimpi

  1. Handling of unmountable drives is covered here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page. The Unraid OS->Manual section in particular covers most features of the current Unraid release.
  2. If you keep getting random drives having problems then I would be suspicious of the power supply (cabling or PSU) to the drives.
  3. It will move both. It will work through the shares, checking which pool (if any) is associated with each User Share for caching purposes and, if needed, moving the files to/from the main array according to the mover direction set.
  4. No indication that the repair process is having problems. If you rerun without -n and add -L, it should repair things so that when you restart the array in normal mode it mounts OK (see the xfs_repair sketch after this list).
  5. No reason to assume just because a drive shows as unmountable that anything has been lost. Handling of unmountable drives is covered here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page. The Unraid OS->Manual section in particular covers most features of the current Unraid release. The question, however, is why the drives went unmountable. Probably a good idea to attach your system’s diagnostics zip file to your next post in this thread to see if we can spot anything.
  6. You are likely to get better informed feedback if you attach your system’s diagnostics zip file to your next post in this thread so we have some chance of seeing what is going on.
  7. The trailing ~ needs to be removed if you want to be able to boot in UEFI mode.
  8. It is easy to recreate the docker.img file with its previous settings intact, but the fact that it went missing suggests something else is going on. I suggest you post your system's diagnostics zip file in your next post in this thread so we can have a look and to get more informed feedback.
  9. Do you have anything set to write log files to the flash drive? That is the most obvious culprit (see the flash usage sketch after this list). It might be worth posting your system's diagnostics zip file in your next post in this thread to get more informed feedback. It is always a good idea to post this if your question might involve us seeing how you have things set up or looking at recent logs.
  10. It might be worth checking that the problem is not just that the cabling to the drives has been disturbed. Do you have any way of checking whether the drives are physically working (e.g. by plugging them into another machine or a USB dock)? In the worst case, if you cannot get either drive working, you will have lost the data on disk4, but the data on the other array drives will be intact.
  11. Not quite sure what it is you want to do. Unraid will see the contents of all pools for read purposes.
  12. Sounds as if you might be falling foul of the behavior mentioned in the Caution in this section of the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page.
  13. It looks as if disk6 dropped offline. As a result there is no SMART information for the drive. It is possible that the drive would come back online if the server is power-cycled. If that is what happens then it is worth posting new diagnostics so we can get an idea of disk6 health. You might want to check the power and SATA cabling to the drive as cabling issues are far more common than actual drive failures. You could run an extended SMART test on disk6 as that is a good test of disk health (see the smartctl sketch after this list). If this happened while rebuilding disk11, then the rebuild of disk11 will not have been successful, as it is likely to contain data corruption.
  14. Have you made sure the EFI folder does not have a trailing ~ character? It must not be there to boot in UEFI mode (see the rename sketch after this list). Does it even start the boot sequence (i.e. the blue Unraid menu), or does it fail later in the boot sequence? The fact you booted the stick on another machine does at least confirm the flash drive is potentially bootable.
  15. Are you trying to boot in UEFI or legacy mode?
  16. Did you change the volume mapping to point to /mnt, or just try going to /mnt inside the container without changing the mapping? The container can only see host paths that are mapped into it (see the mapping sketch after this list).
  17. I am confused by this statement 🙃 You seem to be saying you moved the files out of the ‘systems’ pool and are surprised they are no longer there? I suspect you meant something different from my interpretation, so you may need to clarify this.
  18. FYI: The standard syslog is included in the diagnostics so does not need posting separately. It is only syslog files from the syslog server that need posting separately as they are not included in the diagnostics.
  19. One thing that can cause problems is if you are moving the disks to/from a hardware RAID controller instead of between HBA controllers. Another is if you pass through any hardware to VMs, as the IDs of the hardware are almost certain to change. If you have catered for the above then most of the time the upgrade is painless.
  20. You have to start with a pool to be able to move files between pools. What you are not allowed to do is copy share->pool or pool->share as this can lead to data loss.
  21. You should post your system's diagnostics zip file in your next post in this thread to get more informed feedback as we might spot something. It is always a good idea to post this if your question might involve us seeing how you have things set up or to look at recent logs.
  22. Not sure how the drive got formatted as Unraid will only format a drive if you tell it to do so. Are you sure it actually got formatted rather than repartitioned? The only easy way I would see being able to recover anything in the scenario you describe would be to use disk recovery software such as UFS Explorer.
  23. This would be the way to go as you do not want to retain any content from disk6. Note that if you use New Config you can add the new 10TB drive immediately to the array and build parity based on it being present. It would still need to be formatted after adding it to make it ready for use, but this can be done at any point (even while building parity).
  24. BTRFS has been an option for a long time, while ZFS has only recently been added and is still being worked on. In addition, BTRFS has the advantage that it can easily add new drives to an existing pool in any combination of numbers and sizes, whereas ZFS is far more strict in this regard (see the btrfs sketch after this list).
  25. Could you also post the output that produces as it is not included in the standard diagnostics.
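
For item 4, a minimal sketch of the repair sequence run from the console with the array started in maintenance mode. The disk number (md1) is a placeholder for whichever disk is unmountable; newer Unraid releases expose the partition as /dev/md1p1 instead.

    xfs_repair -n /dev/md1   # -n: check only, report problems without changing anything
    xfs_repair -L /dev/md1   # no -n; -L zeroes the metadata log so the repair can proceed

Running against the md device (rather than the raw sdX device) keeps parity in step with the repair.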
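For item 9, a quick way to see what is taking space on the flash drive before digging into the diagnostics; /boot is the standard Unraid mount point for the flash.

    du -sh /boot/* | sort -h   # per-folder usage on the flash, largest shown last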
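For item 13, a sketch of running SMART checks from the console once the drive is visible again. /dev/sdX is a placeholder for the device shown on the Main page; a drive in a USB dock may additionally need the -d sat option.

    smartctl -a /dev/sdX            # full SMART report, including previous self-test results
    smartctl -t long /dev/sdX       # start an extended (long) self-test
    smartctl -l selftest /dev/sdX   # check progress and the final result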
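For items 7 and 14, the rename can be done from the Unraid console with the flash mounted at /boot (or on any other machine with the stick plugged in). The folder name assumes the trailing ~ described above; adjust the suffix if your flash shows a different trailing character.

    mv /boot/EFI~ /boot/EFI   # remove the trailing character so UEFI firmware can find the boot files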
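For item 16, a container only sees host paths that are explicitly mapped into it. On Unraid you would normally add this as a Path entry in the container's template; the command below is just the equivalent sketch, with some/image as a placeholder image name.

    docker run -d --name=example -v /mnt:/mnt:ro some/image   # host /mnt visible inside the container at /mnt (read-only here)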
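For item 24, the kind of expansion BTRFS makes easy. Unraid's GUI does this for you when you add a slot to an existing pool; the commands are only an illustration, with /dev/sdX and /mnt/cache as placeholders for the new device and the pool mount point, and the raid1 profiles as an example target layout.

    btrfs device add /dev/sdX /mnt/cache                              # add the new drive to the existing pool
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache    # rebalance data and metadata across the members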