JorgeB (Moderators) · 67,884 posts · 708 days won

Everything posted by JorgeB

  1. Correct, if you have all 3 replaced disks you can do a new config with them; since parity should be mostly valid you can check "parity is already valid" before array start, but you should then run a correcting check.
  2. It's working for me, please post the diagnostics.
  3. It's a bug with the latest release; for now I suggest downgrading to v6.11.1 to partition the disks, then you can upgrade back if you want.
  4. I'm not very familiar with those controllers, but after taking a second look it looks more like disk2 is causing the HBA-related timeouts. What disk(s) did you last replace before you got read errors from disk2? Do you still have those old disks intact? Was anything written to the array after those disks were removed?
  5. Are you using Firefox? If yes, reboot first, then use a different browser.
  6. Downgrading to v6.9.2 will also work, but you can download the v6.11.1 zip and extract all the bz* files to the flash drive, replacing the existing ones, then reboot (a rough sketch of that copy step is shown after this list).
  7. Still seeing the same HBA issues, do you have a different HBA you could use? Ideally an LSI.
  8. Yes, that should help, since the logged call traces appeared to be related to macvlan.
  9. It's a bug with the latest release; for now I suggest downgrading to v6.11.1 to partition the disk, then you can upgrade back if you want.
  10. Pulling the disk won't fix anything, since the emulated disk will have the same problem:
      Nov 6 17:25:09 1mehien kernel: BTRFS info (device md13: state EA): forced readonly
      P.S. disk14 is also showing issues now:
      Nov 6 17:13:46 1mehien kernel: BTRFS info (device md14: state EA): forced readonly
      For both now.
  11. Doesn't look like an Unraid problem:
      Nov 6 20:31:56 ser kernel: nvme nvme0: Device not ready; aborting reset, CSTS=0x1
      Nov 6 20:31:56 ser kernel: nvme nvme0: Removing after probe failure status: -19
      IIRC another user had a similar intermittent issue with the same or a similar Samsung device; try power cycling the server and/or downgrading back to v6.11.0 to see if it's still detected.
  12. Are you using v6.11.2? There's a bug partitioning >2TB drives, and it looks like UD is also affected.
  13. See here for better pool monitoring, so you are notified immediately if there are similar issues (the general idea is sketched after this list).
  14. Do you mean you are using the disk share for transfer 1? Disk shares are always faster, but a user share should still be much faster than that; diagnostics might show something.
  15. Disk7 is disabled and you cannot start the array without valid parity; you can do a new config, but you will lose any data that was on that disk.
  16. It's a bug with the latest release; for now I suggest downgrading to v6.11.1 to format the disks, then you can upgrade back if you want.
  17. It's a bug with the latest release; for now I suggest downgrading to v6.11.1 to format the disks, then you can upgrade back if you want.
  18. It's a bug with the latest release; for now I suggest downgrading to v6.11.1 to format the disks, then you can upgrade back if you want.
  19. It's a bug with the latest release; for now I suggest downgrading to v6.11.1 to format the disks, then you can upgrade back if you want.
  20. We usually recommend rebuilding, as long as the emulated disk is mounting and its contents look correct; the other option is doing a new config, but you'll need to do a parity check, and that will take as long as the rebuild.
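
Regarding item 6: a minimal sketch in Python of the bz* copy step, assuming the v6.11.1 release zip has already been downloaded and extracted; the paths below are assumptions, and the same thing can be done by hand or with a file manager.

```python
# Hypothetical sketch only: copy the bz* files from an extracted Unraid
# v6.11.1 release onto the flash drive, replacing the existing ones.
# Both paths are assumptions -- adjust them to your own setup.
import shutil
from pathlib import Path

release_dir = Path("/tmp/unraid-6.11.1")  # where the release zip was extracted (assumed)
flash_root = Path("/boot")                # Unraid flash drive mount point

for src in sorted(release_dir.glob("bz*")):
    dest = flash_root / src.name
    print(f"Replacing {dest} with {src}")
    shutil.copy2(src, dest)  # overwrite the existing file on the flash drive

# Reboot the server afterwards so it boots the older release.
```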
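
Regarding item 13: the linked post is about being notified of pool problems sooner. As an illustration only (not the script from that post), a check along these lines could poll the btrfs device error counters, assuming a pool mounted at /mnt/cache and btrfs-progs available.

```python
# Rough illustrative sketch: flag any non-zero btrfs device error counter on a pool.
# The mount point is an assumption; run as root so `btrfs dev stats` works.
import subprocess

POOL_MOUNT = "/mnt/cache"  # assumed pool mount point


def pool_has_errors(mount: str) -> bool:
    # Output lines look like "[/dev/sdb1].write_io_errs   0"
    out = subprocess.run(
        ["btrfs", "dev", "stats", mount],
        capture_output=True, text=True, check=True,
    ).stdout
    return any(
        int(line.rsplit(maxsplit=1)[-1]) != 0
        for line in out.splitlines()
        if line.strip()
    )


if pool_has_errors(POOL_MOUNT):
    print(f"WARNING: non-zero btrfs error counters on {POOL_MOUNT}")
```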