
JorgeB

Moderators
  • Posts: 67,696
  • Joined
  • Last visited
  • Days Won: 707

Everything posted by JorgeB

  1. I agree, the problem might be with the interface, and a SMART test won't test that part (see the smartctl sketch after this list).
  2. The problem is the reads, not the writes, and setting it to 1 would cause exactly that.
  3. Because with some disks performance is much worse if it's enabled (see the queue-depth sketch after this list): https://forums.unraid.net/bug-reports/prereleases/unraid-os-version-690-beta30-available-r1076/?do=findComment&comment=11034
  4. Sep 7 13:25:51 plexbeast kernel: macvlan_broadcast+0x10e/0x13c [macvlan]
     Sep 7 13:25:51 plexbeast kernel: macvlan_process_broadcast+0xf8/0x143 [macvlan]
     Macvlan call traces are usually the result of having Docker containers with a custom IP address; upgrading to v6.10 might fix it, or see below for more info (there is also a log-check sketch after this list).
     https://forums.unraid.net/topic/70529-650-call-traces-when-assigning-ip-address-to-docker-containers/
     See also here: https://forums.unraid.net/bug-reports/stable-releases/690691-kernel-panic-due-to-netfilter-nf_nat_setup_info-docker-static-ip-macvlan-r1356/
  5. And I forgot to mention, I believe connecting one IOM to another is an invalid configuration; the out port is for daisy chaining to another shelf, e.g.:
  6. AFAIK you would only connect the second module to a second HBA; speed would be the same, it's for redundancy, but Unraid doesn't support SAS multipath.
  7. Sorry, due to an old forum bug I missed your reply yesterday. You can re-enable disk3, but since the array was mounted without the old disk, parity won't be 100% in sync, so there could be issues replacing the failing disk, though there's not much to lose by trying:
     - Tools -> New Config -> Retain current configuration: All -> Apply
     - Check all assignments are correct and assign any missing disk(s) if needed, including the old disk3
     - IMPORTANT - Check both "parity is already valid" and "maintenance mode" and start the array (note that the GUI will still show that data on parity disk(s) will be overwritten; this is normal, as it doesn't account for the checkbox, and nothing will be overwritten as long as it's checked)
     - Stop the array
     The array is now as it was before, except, as mentioned, parity won't be 100% valid, but with xfs this is usually recoverable. You might need to do a filesystem check once you replace disk4 (see the read-only check sketch after this list), especially because the filesystem on the actual disk is already showing issues, most likely because of the bad sectors.
  8. If it's flashed to IT mode as the link suggests, it's plug and play and a good option.
  9. If the drives still don't show up after replacing all that, make sure they are good by checking whether they are detected in a different computer (see the lsblk sketch after this list).
  10. Yes. That will likely need some modifications, especially if passing through any hardware.
  11. It's plug and play, as long as you don't use RAID controllers.
  12. It's very difficult to diagnose hardware issues, especially remotely. You can try using a different computer as the Unraid server, if you have one; the key is tied to the flash drive, not the hardware.
  13. Basically you'd need to start swapping components and testing.
  14. Very unlikely that the flash drive is the problem, but I can't say 100% for sure.
  15. That's strange, it should work, but you can always run xfs_repair manually after starting the array in maintenance mode (see the usage sketch after this list):
      xfs_repair -v /dev/mapper/md1
  16. Yes, if the first drive is filled past the selected allocation method.
  17. rsync creates all the folders before the transfer, so they will all end up on the first disk; you need to manually delete the empty folders afterwards (see the find sketch after this list) or set the split level to split all for the initial transfer.
  18. That would be logged (see the hardware log sketch after this list); hardware-wise it's usually the board, RAM, PSU, etc.
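For post 1 above, a minimal sketch of running SMART tests from the Unraid console, assuming the disk in question is /dev/sdb (a hypothetical device name); note that these exercise the drive itself, not the cable or controller path:

    # Read the SMART attributes and overall health (drive-side only)
    smartctl -a /dev/sdb
    # Start an extended (long) self-test; check its progress later with smartctl -a
    smartctl -t long /dev/sdb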
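For post 3 above, if (as the linked thread suggests) the setting in question is the NCQ tunable, a minimal sketch for inspecting and changing the queue depth of a single drive, assuming /dev/sdb is the disk (a hypothetical device name):

    # A queue depth of 1 effectively disables NCQ
    cat /sys/block/sdb/device/queue_depth
    # Allow more outstanding commands to re-enable NCQ (31 is a common maximum)
    echo 31 > /sys/block/sdb/device/queue_depth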
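For post 4 above, a minimal sketch for confirming whether the call traces in the log are macvlan-related before changing the Docker custom network setup, assuming the standard /var/log/syslog location:

    # Show recent syslog lines that mention macvlan call traces
    grep -i "macvlan" /var/log/syslog | tail -n 20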
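For post 7 above, a minimal sketch of a read-only filesystem check after the New Config procedure, assuming the affected disk is disk4, the array is started in maintenance mode, and the disk is not encrypted (/dev/md4 is an assumption; on an encrypted array it would be /dev/mapper/md4):

    # -n = no modify: report problems without writing anything to the disk
    xfs_repair -nv /dev/md4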
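For post 9 above, a minimal sketch for checking which drives the system currently detects, usable from the Unraid console or on the other computer:

    # Name, size and model of every block device the kernel sees
    lsblk -o NAME,SIZE,MODEL
    # Stable identifiers for each detected disk
    ls /dev/disk/by-id/ | grep -v part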
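For post 15 above, a short usage sketch around the command shown there, assuming the array is started in maintenance mode and disk1 is the target:

    # Dry run first: report what would be fixed without changing anything
    xfs_repair -nv /dev/mapper/md1
    # If the report looks sane, run the actual repair
    xfs_repair -v /dev/mapper/md1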
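For post 17 above, a minimal sketch for removing the empty folders left behind after the initial rsync transfer, assuming the share path is /mnt/user/Media (a hypothetical share name):

    # List the empty directories first to verify what would be removed
    find /mnt/user/Media -type d -empty
    # Then delete them
    find /mnt/user/Media -type d -empty -delete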
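For post 18 above, a minimal sketch for checking the log for hardware-related events, assuming the standard /var/log/syslog location:

    # Common keywords for hardware problems (machine checks, I/O errors, etc.)
    grep -iE "mce|machine check|hardware error|i/o error" /var/log/syslog | tail -n 40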