Everything posted by JorgeB

  1. You can, one at a time, if (and it's a big if) the data is still valid on those drives and only the partition info is missing; there's nothing to lose by trying, at least up to the emulated disk. Again, just rebuild on top if the emulated disk mounts and the contents look correct. Not with single parity; you'd be doing a procedure similar to this, though for a different reason:
  2. Just to add that the above can only work if parity is valid, and it won't be 100% valid if this was not done in read-only mode: Though if that was all, you might still be able to get away with it, possibly needing to run xfs_repair on the emulated disk(s).
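     As a minimal sketch of what that repair looks like from the console, assuming disk1 is the emulated disk and the array is started in maintenance mode (the device name is an example; on newer releases it may be /dev/md1p1 instead):

     xfs_repair -n /dev/md1    # dry run, only reports problems
     xfs_repair /dev/md1       # actual repair; only add -L if it asks to zero the log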
  3. Unraid can give an invalid partition error for various reasons, like the MBR not conforming to what is expected or the partition not being the correct size; that can happen, for example, when you change from a RAID controller to a non-RAID controller or vice versa. In your case there are no partitions at all, so something else happened. If just the partitions are missing but the data is still there, you can try unassigning one of the unmountable disks and starting the array; Unraid will recreate the partition, and as long as the disks weren't wiped it might be able to rebuild it correctly. Check if the emulated disk mounts and the data looks correct; if it does, and only if it does, you can rebuild on top, then repeat for the other disks.
  4. If by swapping you mean just changing slots, that wouldn't be a problem, so something else must have happened, or the pool was already missing a device. You can try the old cache device if you still have it untouched; if not, you can't recover the pool with two missing devices.
  5. According to the syslog you're missing two devices, devid 3 and 4, and it's detecting 2 new devices:

     Sep 6 17:31:02 DEMETER emhttpd: cache uuid: c8f42191-039e-41d8-894d-bdd878c15864
     Sep 6 17:31:02 DEMETER emhttpd: cache TotDevices: 4
     Sep 6 17:31:02 DEMETER emhttpd: cache NumDevices: 4
     Sep 6 17:31:02 DEMETER emhttpd: cache NumFound: 2
     Sep 6 17:31:02 DEMETER emhttpd: cache NumMissing: 1
     Sep 6 17:31:02 DEMETER emhttpd: cache NumMisplaced: 0
     Sep 6 17:31:02 DEMETER emhttpd: cache NumExtra: 2
     Sep 6 17:31:02 DEMETER emhttpd: cache LuksState: 0
     Sep 6 17:31:02 DEMETER emhttpd: shcmd (408): mount -t btrfs -o noatime,nodiratime,degraded -U c8f42191-039e-41d8-894d-bdd878c15864 /mnt/cache
     Sep 6 17:31:02 DEMETER kernel: BTRFS info (device sdh1): allowing degraded mounts
     Sep 6 17:31:02 DEMETER kernel: BTRFS info (device sdh1): disk space caching is enabled
     Sep 6 17:31:02 DEMETER kernel: BTRFS info (device sdh1): has skinny extents
     Sep 6 17:31:02 DEMETER kernel: BTRFS warning (device sdh1): devid 3 uuid 91c09af4-c319-4ef1-a89e-17b8b9080b28 is missing
     Sep 6 17:31:02 DEMETER kernel: BTRFS warning (device sdh1): devid 4 uuid 20de7db3-546e-45a6-b08b-919ab79effeb is missing
     Sep 6 17:31:02 DEMETER kernel: BTRFS warning (device sdh1): chunk 1009532010496 missing 2 devices, max tolerance is 1 for writeable mount

     If you just replaced one, there's a problem with another one, which is not being detected as a pool member. Any idea why that would happen, did you do anything else?
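     To see which devices btrfs currently considers part of that pool, something like this from the console can help (assuming the pool is still mounted, degraded, at /mnt/cache):

     btrfs filesystem show /mnt/cache   # lists devid, size and path for each pool member; absent ones show as missing
     btrfs device stats /mnt/cache      # per-device error counters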
  6. The FAQ mentions using another procedure if you don't have a spare port. Please post the diagnostics: Tools -> Diagnostics
  7. It's not normal to get so many unmountable disks. Did you by any chance save the diags when the disk got disabled? They might provide some clues, but for now, when the rebuild finishes, check the filesystem on all the unmountable disks: https://wiki.unraid.net/Check_Disk_Filesystems#Checking_and_fixing_drives_in_the_webGui
  8. Your best bet to avoid future issues is to back up and re-format the cache.
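     A minimal sketch of backing the cache up from the console before re-formatting, assuming disk1 has enough free space and Docker/VMs are stopped so nothing is in use (paths are just examples):

     rsync -avX /mnt/cache/ /mnt/disk1/cache_backup/   # archive copy, preserving attributes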
  9. You can use, for example, Midnight Commander (mc on the console).
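     For instance, mc can be launched with the two directories you want to copy between already open in its panels (paths here are just an example), then F5 copies the selected files from one panel to the other:

     mc /mnt/cache/appdata /mnt/disk1/appdata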
  10. The NVMe device is dropping offline:

      Sep 5 13:38:54 Unraid kernel: nvme nvme0: I/O 771 QID 7 timeout, aborting
      Sep 5 13:38:54 Unraid kernel: nvme nvme0: I/O 772 QID 7 timeout, aborting
      Sep 5 13:38:54 Unraid kernel: nvme nvme0: I/O 773 QID 7 timeout, aborting
      Sep 5 13:38:54 Unraid kernel: nvme nvme0: I/O 234 QID 1 timeout, aborting
      Sep 5 13:39:24 Unraid kernel: nvme nvme0: I/O 771 QID 7 timeout, reset controller
      Sep 5 13:39:54 Unraid kernel: nvme nvme0: I/O 0 QID 0 timeout, reset controller
      Sep 5 13:40:57 Unraid kernel: nvme nvme0: Device not ready; aborting reset
      Sep 5 13:40:57 Unraid kernel: nvme nvme0: Abort status: 0x7
      ### [PREVIOUS LINE REPEATED 3 TIMES] ###
      Sep 5 13:41:27 Unraid kernel: nvme nvme0: Device not ready; aborting reset
      Sep 5 13:41:27 Unraid kernel: nvme nvme0: Removing after probe failure status: -19
      Sep 5 13:41:58 Unraid kernel: nvme nvme0: Device not ready; aborting reset

      Some NVMe devices have issues with power states on Linux, and this sometimes helps: on the Main GUI page click on the flash device, scroll down to "Syslinux Configuration", make sure it's set to "menu view" (on the top right) and add this to your default boot option, after "append" and before "initrd=/bzroot":

      nvme_core.default_ps_max_latency_us=0

      Reboot and see if it makes a difference. If it doesn't, you can try the latest beta, as a newer kernel might also help; if not, try a different model NVMe device if possible.
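      For reference, a sketch of how the default boot entry would end up looking with that parameter added (the label name and any other append options may differ on your system):

      label Unraid OS
        menu default
        kernel /bzimage
        append nvme_core.default_ps_max_latency_us=0 initrd=/bzroot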
  11. Are you using the docker folder plugin? If yes, try without it.
  12. Most free is generally bad for performance, since parity writes will overlap when switching disks; this can be somewhat mitigated by using some split level, but otherwise I recommend using high-water or fill-up for best performance. Most free can be good for an SSD array with faster parity: e.g. I have a small SSD array, and since parity is a much faster NVMe device and can keep up with multiple disk writes, I use most free for best performance.
  13. There are some known issues with Macs and slow SMB performance; you can google that for some ideas. You can also try turbo write if you haven't yet.
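      Turbo write (reconstruct write) can be enabled in Settings -> Disk Settings by setting "Tunable (md_write_method)" to reconstruct write, or toggled temporarily from the console, roughly like this:

      /usr/local/sbin/mdcmd set md_write_method 1   # reconstruct write (turbo write)
      /usr/local/sbin/mdcmd set md_write_method 0   # back to read/modify/write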
  14. See if any of those unmountable disks mount with UD in read-only mode.
  15. Syslog starts over after every reboot, so there's not much to see; you can try this and post that one if it happens again.
  16. With the array stopped change cache slots to 1, click on cache and set filesystem to auto, start the array and then grab and post new diags.
  17. You have a lot of (all?) the nerdpack options installed; install just the ones you need, as some util might be interfering/conflicting with Unraid, and if it still happens try booting in safe mode.
  18. It's possible, but you need to manually copy the data and do a new config to remove the other drives from the array.
  19. Yes, that's not normal, there was some issue with that disk; there's also this later:

      Sep 4 16:44:45 NAS kernel: sd 9:0:3:0: [sde] tag#2701 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x00
      Sep 4 16:44:45 NAS kernel: sd 9:0:3:0: [sde] tag#2701 CDB: opcode=0x88 88 00 00 00 00 00 00 00 00 00 00 00 00 20 00 00
      Sep 4 16:44:45 NAS kernel: print_req_error: I/O error, dev sde, sector 0

      I'm not sure what caused it; UD was spinning down that disk, but I would think that would be unrelated. If all is good for now, ignore it.
  20. I would avoid correcting checks/rebuilds until you find the problem; non-correcting checks and/or read checks are fine. You can still do that, but if there are multiple disk read errors do it after rebooting, so that the filesystem errors are cleared, and possibly xfs_repair won't even be needed anymore.
  21. You have two LSI controllers listed because the 9206-16e uses two chips; both are being detected and initialized correctly by Unraid. What is the current problem?
  22. Don't do that, just reboot and they should go back to normal.
  23. The syslog would confirm if the controller is in IT mode, but yes, assuming you mean the 9207, it works with Unraid in IT mode, which is the default mode for that controller, whether -8i, -8e or -4i4e.
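      To find the relevant lines yourself, something like this from the console will show the LSI driver/firmware initialization messages (exact output varies by firmware version):

      grep -i mpt /var/log/syslog   # LSI SAS2/SAS3 driver init lines, including the reported firmware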
  24. Because it contains a lot of other useful info that can help diagnose the issue.