JorgeB (Moderators)
  • Posts: 67,600
  • Days Won: 707

Everything posted by JorgeB

  1. NIC supports and is advertising 10GbE:

     Advertised link modes: 100baseT/Half 100baseT/Full 1000baseT/Full 10000baseT/Full

     The problem is the link partner, which only advertises up to 1GbE:

     Link partner advertised link modes: 100baseT/Full 1000baseT/Half 1000baseT/Full
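     For reference, you can see both sides of the negotiation with ethtool (a quick sketch; eth0 is just an example interface name, adjust to match yours):

       # show supported/advertised modes for both ends of the link
       ethtool eth0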
  2. The HBA dropped offline and was re-detected, causing all disks connected to it to become inaccessible:

     Apr 11 19:22:48 YAMATO kernel: md: disk1 read error, sector=0
     Apr 11 19:22:48 YAMATO kernel: md: disk2 read error, sector=0
     Apr 11 19:22:48 YAMATO kernel: md: disk3 read error, sector=0
     Apr 11 19:22:48 YAMATO kernel: md: disk5 read error, sector=0
     Apr 11 19:22:48 YAMATO kernel: md: disk6 read error, sector=0
     Apr 11 19:22:48 YAMATO kernel: md: disk29 read error, sector=0

     Rebooting should fix it, but it might happen again; make sure the HBA is well seated and sufficiently cooled, and you can also try a different slot.
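     If it happens again, the syslog usually shows the controller resetting just before the read errors; something like this can help spot it (a rough sketch, assuming an LSI HBA on the mpt3sas driver):

       # look for controller faults/resets around the time of the errors
       grep -iE 'mpt3sas|fault|reset' /var/log/syslog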
  3. sda is a flash drive, but since it's spamming the log, disconnect it if it's not in use. Enable this, then post that log and the complete diagnostics after a crash.
  4. Also please post new diags; the syslog is empty in the ones posted. Reboot first if needed.
  5. This is still most likely flash drive related; you should try a different one.
  6. CRC errors are a known issue with some Samsung SSDs and those AMD chipsets; the other issue looks like a hardware problem, e.g. a bad PSU, CPU, board, etc.
  7. The best chance of recovery for that issue is btrfs restore; it looks like you tried everything except that, including check --repair, which should only be done as a last resort.
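     In case it helps, btrfs restore is read-only and copies whatever it can salvage to another filesystem; a minimal sketch (the device and destination are placeholders, use your actual pool device and a path with enough free space):

       mkdir -p /mnt/recovery
       # -v prints each file as it's recovered
       btrfs restore -v /dev/sdX1 /mnt/recovery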
  8. That's for LSI only. There's a known issue with those controllers and the newer driver: https://forums.unraid.net/bug-reports/stable-releases/690-691-netapp-pmc-sierra-pm8003-scc-4-port-qsfp-pcie-x8-controller-didnt-find-the-hdds-r1300/?do=getNewComment&d=2&id=1300
  9. Note that slow iperf results can also be caused by an OS issue on the source computer. Basically, you need to test everything involved in the network one at a time (NICs, cables, switch, source computer) until you find the culprit. Also, in your case it's over WiFi, which is notoriously unreliable for consistent speeds; the first thing to try is a cabled connection.
  10. Iperf only tests network bandwidth; if the iperf results are low, any transfer in the same direction will also be slow.
  11. Create a new USB, assign all the disks like in the screenshot above, check "parity is already valid" before array start, then run a parity check.
  12. All the devices report the same serial number:

      Serial Number: DI DE00A

      This won't work for Unraid, since it requires unique serials for all devices; that's how Unraid keeps track of them. What kind of disks are these? White label?
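      For reference, you can list the serials the OS sees with something like this (a quick sketch; every device should show a unique value in the SERIAL column):

        lsblk -o NAME,MODEL,SERIAL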
  13. Also, iperf confirms that for you the problem is the network; it might do the same for the other user.
  14. Start by running a single stream iperf test.
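      Something like this, for example (a minimal sketch; the IP is a placeholder for the server's address):

        # on the server:
        iperf3 -s
        # on the client, single stream:
        iperf3 -c 192.168.1.100 -P 1
        # add -R to test the reverse direction
        iperf3 -c 192.168.1.100 -P 1 -R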
  15. Unraid can't read Linux md raid arrays, since the standard md driver is replaced with custom code.
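      If you need the data, any regular Linux install should be able to read the array; a rough sketch (assuming the array comes up as /dev/md0):

        # scan for and assemble any md arrays found on the attached disks
        mdadm --assemble --scan
        # mount read-only and copy the data off
        mount -o ro /dev/md0 /mnt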
  16. Very strange issue; next time it doesn't mount, please post the complete diagnostics.
  17. What are you talking about? And please stop posting random things on other threads; I've merged your other thread here.
  18. That many errors suggest the controller dropped (or all the disks there dropped; the syslog doesn't show exactly what happened). Rebooting or fixing the underlying issue should bring all your data back.
  19. Looks like it; there might be more info in the system/ipmi event log.
  20. You need to do a new config with the old 2TB disk and re-sync parity, but this: suggests a hardware problem; a server should never shut down on its own. Start by checking if the CPU cooler needs cleaning, since an overheating CPU can cause a shutdown, and that's more likely to happen during a sync, which can be CPU intensive.
  21. NIC appears in the device list (system/lspci.txt in the diags), but no driver is loaded.
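      You can confirm that with lspci; a quick sketch:

        # -k shows the kernel driver in use (if any) for each device;
        # if the "Kernel driver in use" line is missing, no driver is bound
        lspci -k | grep -iA3 ethernet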