trurl

Moderators
Posts: 43889 · Days Won: 137
Everything posted by trurl

  1. Maybe because they had empty mappings, which is technically a syntax error. Previous versions allowed this. You can get rid of the empty mappings, or install the Docker Patch plugin.
  2. Diagnostics taken after a reboot can't tell us what happened before; set up a syslog server. You have some out-of-date plugins. This one in particular has caused problems recently:

     disklocation-master.plg - 2024.03.10 (Update available: 2024.03.22)

     This might be related:
  3. When you add a disk to a new slot in an array that already has valid parity, Unraid will clear the disk (unless it has already been precleared) so parity will remain valid. All those zeros on a cleared disk have no effect on parity. This is the only scenario where Unraid requires a clear disk.

     After the disk has been cleared and added to the array, you can format it so it can be used for files. A disk that hasn't yet been formatted will appear as unmountable. Main - Array Operation lists all unmountable disks and gives you a checkbox to enable the format button and allow you to format them. You can see this in the screenshot you posted above.

     NEVER format a disk that should contain your data. An unmountable disk that should have data on it needs to have its filesystem repaired; that is what check filesystem is for. Or, as suggested above, maybe its partition needs to be fixed. DO NOT FORMAT!!!
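The point about zeros can be sketched with a toy XOR model of single parity (illustrative only, not Unraid's actual code; the bit patterns are made up). XOR with zero is the identity, so an all-zero disk leaves parity unchanged:

```python
# Toy model of single parity: parity is the XOR of all data disks.
# Adding an all-zero (cleared) disk leaves parity unchanged,
# because x ^ 0 == x for every bit.
from functools import reduce

disks = [0b1011, 0b0110, 0b1100]            # made-up "data disk" contents
parity = reduce(lambda a, b: a ^ b, disks)  # parity before adding a disk

disks.append(0b0000)                        # a freshly cleared disk, all zeros
new_parity = reduce(lambda a, b: a ^ b, disks)

print(new_parity == parity)  # True: parity is still valid
```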
  4. I don't know what you mean by that. Parity is not a substitute for backup, whether in Unraid or any other system. Parity contains none of your data. Parity is just an extra bit that allows a missing bit to be calculated from all the other bits; that is basically how parity works in any system, Unraid or otherwise. The parity bits allow the contents of a failed/missing/disabled/emulated/rebuilding disk to be calculated from the bits on all the other disks.

     https://docs.unraid.net/unraid-os/manual/what-is-unraid/#parity-protected-array

     Parity by itself can recover nothing. All bits of all other disks must be reliably read to reliably rebuild all bits of a disk. And as we have seen, it doesn't appear that your current setup can reliably read all other disks.
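The "extra bit calculated from all the other bits" can be sketched in a few lines of Python (a toy model of single parity, not Unraid's implementation; the bit patterns are made up):

```python
# Toy model: the parity bit at each position is the XOR of the
# corresponding bits on all data disks.
from functools import reduce

disks = [0b1011, 0b0110, 0b1100]            # made-up "data disk" contents
parity = reduce(lambda a, b: a ^ b, disks)  # XOR across all disks

print(format(parity, '04b'))  # 0001
```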
  5. Doesn't look like there were any problems communicating with disk1 during that. Do you have another copy of anything important and irreplaceable?
  6. Filesystem corruption is independent of disk health. It is the data on the disk (actually, the filesystem metadata that keeps track of the other data) that is bad, not the disk. And that does look pretty bad.

     Looks like disk1 is connected to this controller:

     00:17.0 SATA controller [0106]: Intel Corporation Comet Lake SATA AHCI Controller [8086:06d2]
       DeviceName: Onboard - SATA
       Subsystem: Micro-Star International Co., Ltd. [MSI] Comet Lake SATA AHCI Controller [1462:7c79]
       Kernel driver in use: ahci
       Kernel modules: ahci

     So USB is not causing any problems with that disk. Post new diagnostics.
  7. Do you have another copy of anything important and irreplaceable? If you insist on trying to get things all working again while still on USB, then I am going to recommend only attempting rebuild of disk1 to a new disk and leaving the original with its contents alone in case rebuild goes badly due to USB.
  8. When a disk is disabled, it is no longer used. Instead, all other disks are read to get the data for the emulated disk from the parity calculation, and parity is updated to emulate writes to the disk so those can also be recovered by rebuild. The initial failed write, and any subsequent writes to the emulated disk, can be recovered by rebuild. Repairing the filesystem will involve writing the emulated disk.

     But if you try to work with the emulated disk, you are relying on all the other disks working well to emulate it. If other disks disconnect as before, that won't work. And rebuild won't work either, since it must write the emulated data to the rebuilding disk.

     This might be real corruption, since I saw it in the logs after reboot, when presumably all enabled disks were still connected. And since that disk isn't being emulated, we could start with that one; repairing it would only involve disk1 and the remaining parity. Check filesystem on disk1 from the webUI. Capture the output and post it.
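Why emulation depends on every other disk being readable can be sketched with a toy XOR model of single parity (illustrative only; the bit patterns are made up). The missing disk is recovered by XORing parity with all surviving disks, so a single unreadable survivor corrupts the result:

```python
# Toy model: a "failed" disk is emulated by XORing parity with
# every remaining data disk. If any of those reads fail, the
# emulated contents come out wrong.
from functools import reduce

data = [0b1011, 0b0110, 0b1100]             # made-up "data disks"
parity = reduce(lambda a, b: a ^ b, data)   # parity over all data disks

missing = data.pop(1)                       # disk 2 "fails"
emulated = reduce(lambda a, b: a ^ b, data + [parity])

print(emulated == missing)  # True, but only if every survivor read correctly
```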
  9. Have you examined its data? No reason to do that. Most only do monthly or even less frequently.
  10. Don't even think of that word. Format is NEVER part of rebuild. Format is a write operation. It writes an empty filesystem to the disk. If you format a disk in the array, Unraid treats that write operation just as it does any other, by updating parity. So after formatting a disk in the array, the only thing that can be rebuilt is an empty filesystem.
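Why a rebuild after a format can only produce an empty filesystem can be sketched with a toy XOR model of single parity (illustrative only; EMPTY_FS is just a made-up stand-in for a freshly formatted disk):

```python
# Toy model: formatting an array disk is a write like any other,
# so parity is updated to match the new (empty) contents. A later
# rebuild can therefore only reproduce the empty filesystem.
from functools import reduce

def xor_all(vals):
    return reduce(lambda a, b: a ^ b, vals)

disks = [0b1011, 0b0110, 0b1100]   # made-up "data disks"
parity = xor_all(disks)

EMPTY_FS = 0b0000                  # stand-in for a freshly formatted disk
parity ^= disks[1] ^ EMPTY_FS      # parity is updated for the "format" write
disks[1] = EMPTY_FS

rebuilt = xor_all(disks[:1] + disks[2:] + [parity])
print(rebuilt == EMPTY_FS)  # True: only the empty filesystem comes back
```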
  11. No. Several approaches are possible. Usually you repair the emulated filesystem, and if the results look good, rebuild. Otherwise, see if the physical disk contents look better and New Config it back into the array. Or some combination where you repair, then rebuild to a new disk, and use the original to recover any files if necessary. Before doing anything, it would be best to get those disks connected without USB.
  12. Reboot will never fix this. Fortunately, the previous syslog was saved and is in those diagnostics. You were having connection problems with many disks. Those just happened to be the 2 disks that got disabled first, because they couldn't be written and you can't have more than 2 disabled disks. SMART for both disabled disks looks fine; a small number of reallocated sectors on parity is nothing to worry about. And as mentioned, these are not really disk problems. Disabled/emulated disk7 is unmountable though, so that will have to be taken care of before rebuilding. Also looks like you have corruption on disk1, but perhaps that is because it can't really be read. Looks like you are trying to use USB for many of your disks. USB is not recommended for assigned disks for many reasons, including the disconnections that caused all this.
  13. You can't use the same IP address for your Unraid and for your Windows computer.
  14. Do it again without -n. If it asks for it, use -L. Post the results.
  15. For completeness, you would also want to determine which file was "visible" when it existed on more pools than just the one named "cache".
  16. Obviously that is not empty. Do you mean each of those folders is empty? You can't delete appdata until you delete each of those folders.
  17. You want the empty appdata folder removed from the pool named "board"? What do you get from the command line with this?

      ls -lah /mnt/board/appdata
  18. Does it also tell you that pool is read-only?
  19. It doesn't give you any feedback, like "are you sure you want to do this?", or why you can't do this?
  20. You must have rebooted after disk1 became disabled, so no way to see why that happened. SMART for disk1 looks OK so likely a connection problem. Emulated disk1 is mounted and has plenty of data. Should be OK to rebuild on top after checking connections. https://docs.unraid.net/unraid-os/manual/storage-management/#rebuilding-a-drive-onto-itself
  21. Sorry, another side track. When your server boots, it seems to think the date is Dec 31, then later figures out the correct date. This suggests your BIOS doesn't know what the date and time are, which suggests your CMOS battery is dead.
  22. Not related, but I see you have your docker.img in /mnt/cache/docker. However, any folder at the top level of array or pools is automatically a user share. So that is part of a user share named "docker". Similarly for your default appdata folder. Your appdata share is correctly configured to stay on cache, so that's OK. But your docker share is configured to be moved to the array, and it has files on the array. We can look at that more closely after you get disk1 rebuilt. I will have more to say about that in my next post after I examine diagnostics more.