
JorgeB

Moderators
  • Posts: 67,647
  • Joined
  • Last visited
  • Days Won: 707

Everything posted by JorgeB

  1. If you haven't yet, look for a board BIOS update; if that doesn't help, there's not much you can do other than using a different GPU model (or board), or contacting Gigabyte support.
  2. Yes, but you can also browse /x and copy just what you need.
  3. Sorry, I can't really see what the problem could be; I can only say that I have the exact same board in one of my main servers and don't have any issues.
  4. It works faster for me, but there could also be issues; it's still worth trying.
  5. Start by running a single-stream iperf test to check network bandwidth.
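Once the test finishes, the number that matters is the receiver-side throughput. A small sketch of pulling it out of iperf3's JSON report (produced by running the client with `iperf3 -c <server> -J`); the embedded sample below is illustrative, not real output from any server mentioned here:

```python
import json

# Heavily trimmed sample of the JSON that `iperf3 -c <server> -J` prints;
# the real report contains many more fields and per-interval results.
sample = '''
{
  "end": {
    "sum_received": {
      "bytes": 1176256512,
      "seconds": 10.0,
      "bits_per_second": 941005209.6
    }
  }
}
'''

def throughput_gbps(report_json: str) -> float:
    """Return the receiver-side throughput in Gbit/s from an iperf3 JSON report."""
    report = json.loads(report_json)
    bps = report["end"]["sum_received"]["bits_per_second"]
    return bps / 1e9

print(f"{throughput_gbps(sample):.2f} Gbit/s")
```

On a healthy gigabit link a single TCP stream should land somewhere around 0.94 Gbit/s; a much lower number points at a network problem rather than a disk or share problem.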
  6. The netmask notation on Ubuntu is not the same as on Unraid; /24 is 255.255.255.0
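If you ever need to convert between the two notations, Python's standard `ipaddress` module (shown purely as an illustration, it's not something Unraid itself uses) maps both ways:

```python
import ipaddress

# CIDR prefix -> dotted netmask, e.g. the /24 mentioned above.
net = ipaddress.ip_network("192.168.1.0/24")
print(net.netmask)      # 255.255.255.0

# Dotted netmask -> CIDR prefix.
net2 = ipaddress.ip_network("10.0.0.0/255.255.255.0")
print(net2.prefixlen)   # 24
```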
  7. Do you have another PC you can try with? If not at least try the same PC with a clean OS.
  8. I don't see anything NVMe-related detected. We can't tell whether the adapter itself is working, since it's transparent to the OS. You could try another card in that slot just to make sure the slot is working, or try the adapter in a different PC; other than that, you'd basically need to test with a different adapter, cable, or device, if that's an option.
  9. When the drive is back online post new diags.
  10. A 2nd parity is IMHO a small price to pay for the added redundancy, especially for larger arrays, but as mentioned it's not a backup; first make sure you have backups of anything important. Also, for six array disks, single parity should be OK in most cases.
  11. Try replacing the cables or use a different PSU.
  12. This sometimes helps; some NVMe devices have issues with power states on Linux. Try this: on the main GUI page click on the flash drive, scroll down to "Syslinux Configuration", make sure it's set to "menu view" (on the top right) and add this to your default boot option, after "append initrd=/bzroot":

      nvme_core.default_ps_max_latency_us=0

      e.g.:

      append initrd=/bzroot nvme_core.default_ps_max_latency_us=0

      Reboot and see if it makes a difference.
  13. See if this applies to you: https://forums.unraid.net/topic/70529-650-call-traces-when-assigning-ip-address-to-docker-containers/ See also here: https://forums.unraid.net/bug-reports/stable-releases/690691-kernel-panic-due-to-netfilter-nf_nat_setup_info-docker-static-ip-macvlan-r1356/
  14. Cache device problems:

      May 15 00:00:47 whale kernel: sd 1:0:0:0: [sdc] tag#14 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x06 cmd_age=37s
      May 15 00:00:47 whale kernel: sd 1:0:0:0: [sdc] tag#14 CDB: opcode=0x2a 2a 00 0a f8 92 b8 00 00 10 00
      May 15 00:00:47 whale kernel: blk_update_request: I/O error, dev sdc, sector 184062648 op 0x1:(WRITE) flags 0x800 phys_seg 2 prio class 0
      May 15 00:00:47 whale kernel: sd 1:0:0:0: [sdc] tag#13 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x06 cmd_age=37s
      May 15 00:00:47 whale kernel: sd 1:0:0:0: [sdc] tag#13 CDB: opcode=0x2a 2a 00 0a f8 8f 70 00 00 08 00
      May 15 00:00:47 whale kernel: blk_update_request: I/O error, dev sdc, sector 184061808 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
      May 15 00:00:47 whale kernel: sd 1:0:0:0: [sdc] tag#7 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x06 cmd_age=37s
      May 15 00:00:47 whale kernel: sd 1:0:0:0: [sdc] tag#7 CDB: opcode=0x28 28 00 00 52 c4 70 00 00 08 00
      May 15 00:00:47 whale kernel: blk_update_request: I/O error, dev sdc, sector 5424240 op 0x0:(READ) flags 0x800 phys_seg 1 prio class 0
      May 15 00:00:47 whale kernel: sd 1:0:0:0: [sdc] tag#8 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x06 cmd_age=37s
      May 15 00:00:47 whale kernel: sd 1:0:0:0: [sdc] tag#8 CDB: opcode=0x28 28 00 07 8f 48 80 00 00 20 00
      May 15 00:00:47 whale kernel: blk_update_request: I/O error, dev sdc, sector 126830720 op 0x0:(READ) flags 0x1000 phys_seg 4 prio class 0
      May 15 00:00:47 whale kernel: dm-1: writeback error on inode 209549217, offset 4096, sector 184028976
      May 15 00:00:47 whale kernel: dm-1: writeback error on inode 209549217, offset 8192, sector 184029816
      May 15 00:00:47 whale kernel: XFS (dm-1): metadata I/O error in "xfs_imap_to_bp+0x5c/0xa2 [xfs]" at daddr 0x78ec840 len 32 error 5
      May 15 00:00:47 whale kernel: XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 296 of file fs/xfs/xfs_trans_buf.c. Return address = 00000000b84208d3
      May 15 00:00:47 whale kernel: XFS (dm-1): I/O Error Detected. Shutting down filesystem
      May 15 00:00:47 whale kernel: XFS (dm-1): Please unmount the filesystem and rectify the problem(s)

      Start by replacing cables to see if it helps.
  15. Please post the diagnostics: Tools -> Diagnostics
  16. Did you try updating to v6.9.2 to see if there's any difference? 6.8.x won't be updated.
  17. If the disk you want to remove is empty, you can do a new config: keep assignments, unassign the old parity and the disk you want to remove, assign the new parity, and start the array to begin the parity sync. Any data on the removed disk will no longer be on the new array.
  18. See if this applies to you: https://forums.unraid.net/topic/70529-650-call-traces-when-assigning-ip-address-to-docker-containers/ See also here: https://forums.unraid.net/bug-reports/stable-releases/690691-kernel-panic-due-to-netfilter-nf_nat_setup_info-docker-static-ip-macvlan-r1356/
  19. You can do a new config: keep the old assignments, assign the new disk and parity if needed, and start the array to begin the parity sync. Data on the failed disk will be lost.
  20. That's the Unraid driver, only LT might be able to help with that.
  21. By pool do you mean the array or a cache pool?