
Everything posted by JorgeB

  1. If you were using /mnt/user for the mappings, everything will keep working; if you were using /mnt/disk1, you need to change it to /mnt/pool_name.
  2. Because if the server is flushing the data more slowly it can cause temporary halts. It could also be a network issue: assuming gigabit, the transfer should start at around 100MB/s while being cached to RAM, then slow down to keep up with device speed. According to your video it's already starting at 50MB/s, which suggests the network has a bandwidth issue; run iperf, e.g. as sketched below.
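     A quick way to check raw TCP throughput, assuming iperf3 is installed on both ends (the 192.168.1.10 address below is just a placeholder for your server's IP):

         iperf3 -s                 # on the Unraid server: listen for the test
         iperf3 -c 192.168.1.10    # on the client: connect to the server and measure

     Gigabit should report somewhere around 940Mbit/s; much less than that points at the network rather than the disks.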
  3. If you enable bifurcation for, for example, PCIe slot 2, only slot 2 will use it.
  4. Just add it and start the array; it should automatically create a mirrored pool (raid1).
  5. After that you can also create a new pool with sdi, but make sure it's wiped first, just in case there's some old filesystem there causing issues. Before adding it to a new pool run: wipefs -a /dev/sdX
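     To verify the wipe afterwards, running wipefs without options only lists whatever signatures remain instead of erasing anything (sdX is the same placeholder as above):

         wipefs /dev/sdX    # no output means no filesystem signatures are left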
  6. OK, you can now add the other NVMe device, if you want to make the pool redundant again.
  7. There's still something strange going on there. Leave sdi disconnected for now, unassign both NVMe devices, start the array, stop the array, re-assign only /dev/nvme0n1p1 to the old pool, start the array, and post new diags.
  8. That suggests the devices are not keeping up with the transfer: your disks are SMR and the cache SSDs are a white-label brand. Do you have a non-SMR disk you could test with?
  9. That device is from the old pool, correct? If yes, the other one was wiped; was the pool redundant?
  10. The old key won't work with the new flash drive; you need to transfer the key yourself or contact support with the old and new GUIDs so that they can transfer it for you.
  11. Also check the log to see if the errors are still spamming it.
  12. The board manual may show that, but not always; worst case, find out by trial and error. No, they should be per slot.
  13. Lots of call traces logged, but it's unclear to me what caused them. Was this a one-time thing, or is the server crashing regularly?
  14. Looks like all 3 devices have the same btrfs UUID, which should never happen unless they were used together in the past:

      Apr 15 14:33:08 Mongo emhttpd: Label: none uuid: d0989cc1-462a-4a25-ac4c-b3fbb1d27a4c
      Apr 15 14:33:08 Mongo emhttpd: #011Total devices 2 FS bytes used 95.33GiB
      Apr 15 14:33:08 Mongo emhttpd: #011devid 1 size 931.51GiB used 100.03GiB path /dev/nvme1n1p1
      Apr 15 14:33:08 Mongo emhttpd: #011devid 2 size 931.51GiB used 82.03GiB path /dev/sdi1
      Apr 15 14:33:08 Mongo emhttpd: #011devid 3 size 931.51GiB used 100.03GiB path /dev/nvme0n1p1

      Physically disconnect /dev/sdi and post the new output of btrfs fi show.
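      For reference, btrfs fi show is just the abbreviated form of the full command; run it from the console with no arguments and it lists every btrfs filesystem with its UUID and member devices:

          btrfs filesystem show    # same output as 'btrfs fi show'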
  15. Is this the new key for this flash drive or for the old one?
  16. Constant errors with the sdr and sdr cache devices; check/replace the cables, or try connecting them to a different controller and post new diags.
  17. parity2 dropped offline; this is usually a power/connection problem, or a weak/failing PSU.
  18. Run it again without -n, and if it asks for -L, use it (see the sketch below).
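     Assuming this is xfs_repair on an Unraid array disk in maintenance mode (the 1 in md1 is a placeholder for your disk number), the sequence would be something like:

         xfs_repair -n /dev/md1    # check only, changes nothing (the run you already did)
         xfs_repair /dev/md1       # run the actual repair
         xfs_repair -L /dev/md1    # only if it explicitly asks you to zero the log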
  19. The 9267 is a RAID controller; I recommend getting an HBA instead, like the 9207. If your current controller is not RAID, then it should be plug and play.
  20. Go to Tools - Registration; if it's not yet correct, copy the new key manually to the /config folder and then reboot (example below).
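     A minimal sketch of the manual copy from the Unraid console; the key filename and source path are placeholders for whatever .key file you received:

         cp /path/to/Pro.key /boot/config/    # /boot is the flash drive on Unraid
         reboot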
  21. You can try copying only super.dat and the pools folder from the old flash (sketched below); that would take care of the assignments, and it should not cause boot issues, but if it does see here: https://docs.unraid.net/unraid-os/manual/changing-the-flash-device/#what-to-do-if-you-have-no-backup-and-do-not-know-your-disk-assignments
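     Assuming the old flash is mounted at /mnt/old_flash (a placeholder, mount it wherever is convenient), the copy would look like:

         cp /mnt/old_flash/config/super.dat /boot/config/
         cp -r /mnt/old_flash/config/pools /boot/config/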
  22. You just enable it in the BIOS for the slot you want. I have the similar X11SPL-F, and IIRC all CPU PCIe slots support bifurcation. Edit: they are not that similar after all, different CPU family; check the BIOS for bifurcation support, but I would still expect it to be supported on that board.