
JorgeB

Moderators
  • Posts: 67,755
  • Days Won: 708
Everything posted by JorgeB

  1. You can recreate them with the old settings and point to the existing vdisks.
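     (For reference: on a default Unraid setup the existing vdisks usually live under /mnt/user/domains/<VM name>/vdisk1.img; that path is an assumption based on the defaults, so adjust it to your actual location. When recreating the VM, set the vDisk location to manual and point it at the existing file.)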
  2. CPU lanes are better for everything when available, since they are not shared with anything else and have lower latency, but depending on the total number of devices going through the DMI, and whether they are used concurrently, it might not make a big difference; the DMI in this case is PCIe 3.0 x4.
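     As a rough back-of-envelope (my arithmetic, not from the post above): PCIe 3.0 carries about 985 MB/s per lane after encoding overhead, so a PCIe 3.0 x4 DMI tops out around 4 x 985 MB/s ≈ 3.9 GB/s, shared by everything behind the PCH. A couple of SATA SSDs (~550 MB/s each) plus a NIC won't saturate it unless they all run flat out at the same time.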
  3. First two slots use CPU lanes, the last two go through the PCH, so they share the DMI with the onboard SATA, NIC, etc.
  4. libvirt.img, you should always have a current backup of that file.
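     A minimal backup sketch, assuming the default Unraid location for libvirt.img (the /mnt/user/backups destination is just an example share, adjust both paths as needed):

        # stop the VM service first (Settings > VM Manager), then:
        cp /mnt/user/system/libvirt/libvirt.img /mnt/user/backups/libvirt-$(date +%Y%m%d).img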
  5. Looks like it's related to an NVMe device; enable the syslog server and post that after a crash, together with the complete diagnostics, it might show more.
  6. Feb 1 03:52:37 Thor kernel: ahci 0000:02:00.1: AHCI controller unavailable!
     Problem with the onboard SATA controller, quite common with some Ryzen boards; look for a BIOS update, and if that doesn't help the best bet is an add-on controller (or a different board).
  7. Looks like xfs_repair succeeded, post new diags after array start in normal mode.
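     For reference, the check can also be run from the console with the array started in maintenance mode; disk1 is just an example, and depending on the Unraid version the device is /dev/md1 or /dev/md1p1:

        xfs_repair -n /dev/md1    # -n = no modify, report problems only

     Dropping the -n flag lets it actually make repairs; the same thing can be done from the GUI via the disk's filesystem check option.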
  8. Changed Status to Closed. Changed Priority to Other.
  9. Default for pools is RAID1, so the available space is correct; if you want to use the full capacity, convert to single, though of course there won't be any redundancy.
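     A sketch of the conversion from the command line, assuming a btrfs pool mounted at /mnt/cache (it can also be done from the pool's balance options in the GUI):

        # convert data to the single profile, keep metadata mirrored
        btrfs balance start -dconvert=single -mconvert=raid1 /mnt/cache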
  10. Disable disk spin down and run the extended test.
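     If you prefer the console over the GUI (Disk Settings for spin-down, the disk's self-test section for the test), the equivalent with smartctl, assuming the disk is /dev/sdb:

        smartctl -t long /dev/sdb    # start the extended (long) self-test
        smartctl -a /dev/sdb         # check progress and results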
  11. You can go to the existing docker support thread by clicking on support:
  12. I have no idea what that crash is about; strange that it only happens during the parity copy, and if that's the case it's unlikely to be hardware related. You could try doing it manually.
  13. You need to rebuild it; make sure the emulated disk is mounting and the contents look correct before rebuilding on top of the old disk. https://wiki.unraid.net/Manual/Storage_Management#Rebuilding_a_drive_onto_itself The other option is to do a new config and re-sync parity; just mentioning it since there's a chance parity is not 100% valid due to the unclean shutdown, but since no errors were found at the beginning of the check, and usually there are some there if there are any, it should be fine.
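     A quick way to eyeball the emulated disk before committing to the rebuild (disk1 is just an example, use the actual disk number):

        ls -lah /mnt/disk1    # contents of the emulated disk should look normal here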
  14. With the array stopped, click on "Add Pool" on the Main page, then select a name for the pool/cache.
  15. If the share folder exists on other disks (even if it's empty), their space will be included; if you run the mover and everything is moved to the cache, those disks will stop being counted.
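     To see which disks currently hold part of a share (ShareName is a placeholder):

        du -sh /mnt/disk*/ShareName 2>/dev/null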
  16. That looks like an xfs_repair issue, though it could be the result of bad hardware; run memtest, and if no issues are detected you can try reporting it to the xfs mailing list.
  17. For the disk issue see here; the NVMe issue is unrelated, it dropped offline:

      Jan 31 17:46:20 Unraid kernel: nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10
      Jan 31 17:46:20 Unraid kernel: nvme 0000:06:00.0: enabling device (0000 -> 0002)
      Jan 31 17:46:20 Unraid kernel: nvme nvme0: Removing after probe failure status: -19

      Look for a BIOS update. This can also help sometimes: some NVMe devices have issues with power states on Linux. On the main GUI page click on flash, scroll down to "Syslinux Configuration", make sure it's set to "menu view" (on the top right) and add this to your default boot option, after "append initrd=/bzroot":

      nvme_core.default_ps_max_latency_us=0

      e.g.:

      append initrd=/bzroot nvme_core.default_ps_max_latency_us=0

      Reboot and see if it makes a difference.
  18. Yes, to "save" you just start the array. Yep, once the parity sync is done.
  19. Only if your server has some controller or other bottleneck; if not, the rebuild is limited by the disk's maximum speed. It can also be a little slower if you have many different capacity disks and the one you want to replace isn't one of the smaller ones.
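      As a rough estimate (my arithmetic, not from the post): rebuild time ≈ capacity / average speed, so an 8 TB disk averaging ~150 MB/s takes about 8,000,000 MB / 150 MB/s ≈ 53,000 s, or roughly 15 hours.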
  20. You need to do a new config without disk1 and re-sync parity, note that any data there will be gone, though it's unmountable anyway.