Everything posted by JorgeB

  1. The diags are from after rebooting, so we can't see what happened, but you're using a SATA port multiplier and those are a known source of problems; not surprisingly, parity is connected there.
  2. Definitely not a known issue; it's a problem with the ASMedia controller, which dropped both parity disks:

        May 16 18:05:08 Tower kernel: ata7: hard resetting link
        May 16 18:05:18 Tower kernel: ata8: softreset failed (1st FIS failed)
        May 16 18:05:18 Tower kernel: ata8: hard resetting link
        May 16 18:05:18 Tower kernel: ata7: softreset failed (1st FIS failed)
        May 16 18:05:18 Tower kernel: ata7: hard resetting link
        May 16 18:05:28 Tower kernel: ata8: softreset failed (1st FIS failed)
        May 16 18:05:28 Tower kernel: ata8: hard resetting link
        May 16 18:05:28 Tower kernel: ata7: softreset failed (1st FIS failed)
        May 16 18:05:28 Tower kernel: ata7: hard resetting link
        May 16 18:06:03 Tower kernel: ata8: softreset failed (1st FIS failed)
        May 16 18:06:03 Tower kernel: ata8: limiting SATA link speed to 3.0 Gbps
        May 16 18:06:03 Tower kernel: ata8: hard resetting link
        May 16 18:06:03 Tower kernel: ata7: softreset failed (1st FIS failed)
        May 16 18:06:03 Tower kernel: ata7: limiting SATA link speed to 3.0 Gbps
        May 16 18:06:03 Tower kernel: ata7: hard resetting link
        May 16 18:06:08 Tower kernel: ata8: softreset failed (1st FIS failed)
        May 16 18:06:08 Tower kernel: ata8: reset failed, giving up
        May 16 18:06:08 Tower kernel: ata8.00: disabled
        ...
        May 16 18:06:08 Tower kernel: ata7: softreset failed (1st FIS failed)
        May 16 18:06:08 Tower kernel: ata7: reset failed, giving up
        May 16 18:06:08 Tower kernel: ata7.00: disabled
        May 16 18:06:08 Tower kernel: ata7: EH complete

     AFAIK there are no issues with ASMedia controllers and v6.8.3; if there were, I would expect many users to have problems, since it's a very commonly used controller. It might be a power issue: do both disks share a power splitter or similar? Or, since the controller is onboard and an older revision, it might also be a specific problem with that board/revision.
  3. What happened to the other one? In the meantime try this; on the console type:

        mkdir /x
        mount -o ro,nologreplay /dev/sdb1 /x

     If it doesn't mount, post the error from the syslog.
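     A quick way to grab that error, assuming the default Unraid syslog location:

        # show the most recent log entries, including any mount failure
        tail -n 30 /var/log/syslog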
  4. Some split level examples here: https://forums.lime-technology.com/topic/59589-solved-v635-advice-please-on-shares-settings/?do=findComment&comment=584558
  5. You'll likely need to restart the server; the segfault appears to be NFS related, so disable NFS if it's not in use.
  6. Clearly the network is the problem; try a different switch, cable, NIC, source PC, etc. until you find the culprit.
  7. That suggests a hardware problem, which is difficult for us to diagnose; I would start by trying a different PSU.
  8. No, but depending on the firmware in use you might need to do a new config (or even rebuild every disk there), and that is harder with a disabled disk. I would suggest connecting that disk directly to the main server on the newer LSI and rebuilding it, then worry about upgrading the old LSI.
  9. Revert the change you made to the VM and see if it starts. If it does, try again using the onboard sound; if it crashes, that's the problem, and then post the diagnostics.
  10. If all 3 disks are exact copies and parity is valid, yes. The disks also need to be the same capacity, they can't be larger. You'll then need to do a new config and use the invalid slot command; if you get all 3 drives working, let us know and I'll post the instructions.
  11. The disk is on a controller that only supports 2.2TB max, and it appears that because of that Unraid doesn't consider the partition valid. Your best bet is to connect that disk to a controller that supports the full disk capacity, but you'll need to rebuild it again.
  12. It's not quite clear what you mean; please try to describe better what you're trying to do, and also post the diagnostics: Tools -> Diagnostics
  13. Everything VM/docker related is inside the same share on my cache drive with COW set to auto; that folder is actually a btrfs subvolume, so it can be snapshotted and replicated to a different pool daily by a script. This reminds me that the COW setting can't be the only thing causing the write amplification for the VMs, since all 3 vdisks are together and only the Windows Server VM has the high writes issue; the other two Windows VMs have normal writes according to iotop.
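     As a rough sketch of what that kind of daily script does (the paths here are examples, not my actual ones):

        # take a read-only snapshot of the subvolume holding the vdisks
        btrfs subvolume snapshot -r /mnt/cache/domains /mnt/cache/snaps/domains_$(date +%Y%m%d)
        # replicate the snapshot to a different btrfs pool
        btrfs send /mnt/cache/snaps/domains_$(date +%Y%m%d) | btrfs receive /mnt/disks/backup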
  14. Speed being the same suggests a problem with the source. Do you have another computer you could test with? Ideally one running Windows 10, since there are some reports of lower SMB performance for some users with Linux or OSX.
  15. Doesn't look like it is. Enable turbo write and transfer directly to the array; is the speed better or the same?
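     Turbo write can be enabled in Settings -> Disk Settings (md_write_method set to reconstruct write), or from the console:

        # enable turbo (reconstruct) write; set back to 0 for the default read/modify/write
        mdcmd set md_write_method 1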
  16. You need to test iperf3 to iperf3 or iperf2 to iperf2; it won't work with mixed versions.
  17. Yes, mine are all set to auto, but I want them like that, or else btrfs will also stop checksumming the data, and that's more important to me, even if I have to live with the extra writes. Because of this feature I detected silent corruption on an SSD I was using a few years ago with a VM. But you are correct, most users should have it off for vdisks, because of increased fragmentation and likely the additional writes.
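     For anyone wanting to check or change this from the console (the path is just an example, and note that +C only affects files created after it's set):

        # check whether NOCOW (the C attribute) is set on the folder
        lsattr -d /mnt/cache/domains
        # disable COW for new files created inside it
        chattr +C /mnt/cache/domains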
  18. Start by running a single stream iperf test to make sure the network is working correctly.
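     For example, with iperf3 on both ends (the IP below is a placeholder for the server's actual address):

        # on the server
        iperf3 -s
        # on the client, using the server's IP
        iperf3 -c 192.168.1.100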
  19. Click on the share, then check "delete" (it needs to be empty); that will do it.
  20. Just one more screenshot from me on this: I left iotop running since earlier, and you can see it's not even the VMs in general; for me it's mostly the Windows Server VM, which is the main one, but it was mostly idle during this. loop2 has few writes comparatively, and libvirt (loop3) doesn't even appear on the list. I'm not sure if this is related to this topic, but I have been noticing an unusually large amount of writes to my cache device for some time; it's writing on average 1 or 2TB per day, some days more, I just never thought too much of it.
  21. Please post the diagnostics: Tools -> Diagnostics
  22. libvirt.img is already on the cache device on my server.
  23. Most of my writes are also from my 3 Windows VMs. I also have dockers, but there's not much writing going to loop2, at least comparatively; iotop accumulated writes after a couple of minutes:
  24. Disk1 appears to be failing; please post a SMART report.
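     If you can't get it from the GUI, it can be pulled from the console (replace sdX with the actual device):

        # full SMART report for the disk
        smartctl -a /dev/sdX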
  25. There's a little activity on disk1, but it doesn't look like anything out of the ordinary considering the system share exists there. If you want, install NerdPack and use iotop to see exactly what's reading/writing to the disk, as shown below.
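     The invocation I find most useful for this (standard iotop flags):

        # -a: accumulated totals since start, -o: only show processes actually doing I/O
        iotop -ao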