JorgeB

Moderators
  • Posts

    67,405
  • Joined

  • Last visited

  • Days Won

    705

Everything posted by JorgeB

  1. Any Intel gigabit NIC is fine.
  2. You need to give more details or it's very difficult to help. Stuck how? Is it on the Unraid boot menu and you can't select any boot option? You can also post a photo or video to better show the issue.
  3. No, Unraid is controller agnostic, as long as RAID controllers are not used; those can sometimes require a rebuild/new config. It's plug and play, though the HBA should be in IT mode; if 4/5 ports are enough you can also get a JMB585 controller instead. Yes, you can even connect more than that if used together with a SAS expander.
  4. Also note that the H310, even if flashed to RAID mode, can only create two RAID groups, you need a MegaRAID controller for more, starting with the 9240-8i and above.
  5. Unraid is not RAID, you can never use striping on the array, but you can use a raid5 cache pool or, for example, raid0 volumes as array devices; I have a small server where all array devices are two disks in raid0 (including parity).
  6. That might help, but unless you're trying to pass through the SSD to a VM I can't see how it can be a configuration issue.
  7. Since IT mode doesn't support any kind of RAID I thought I was being clear, but yes, trim won't work in any RAID mode with an LSI controller.
  8. The "invalid partition layout" errors suggest something is physically happening to the cache device, changing the partition layout from what's expected; this should never happen just because of an unclean shutdown. Do you have another SSD you could test with?
  9. Any LSI with a SAS2008/2308/3008/3408 chipset in IT mode, e.g., 9201-8i, 9211-8i, 9207-8i, 9300-8i, 9400-8i, etc., and clones like the Dell H200/H310 and IBM M1015; these latter ones need to be crossflashed.
  10. This is happening constantly:

      Feb 25 19:29:19 fractal kernel: e1000e 0000:00:1f.6 eth0: Detected Hardware Unit Hang

      Do you have another NIC you could try?
  11. I can't see why some devices are losing their assignments; is the log from a boot right after just moving the M.2 card? Still, as a workaround you can do this: stop the array; if Docker/VM services are using the cache pool, disable them; unassign all cache devices; start the array to make Unraid "forget" the current cache config; stop the array; reassign all cache devices; re-enable Docker/VMs if needed; start the array. Alternatively you can also do a new config and reassign all devices, then check "parity is already valid" before starting the array.
  12. It's not trimming the SSD, only the loop images; if you want trim to work, connect the SSD to an onboard SATA port.
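One quick way to confirm whether trim can even reach a device on a given port is to check whether the kernel passes discards through. A minimal sketch, assuming the sysfs path `/sys/block/sdX/queue/discard_max_bytes` (the temp file below just stands in for that path so the snippet runs anywhere):

```shell
# Returns success when the given discard_max_bytes file reports a
# non-zero value, i.e. the kernel can pass TRIM down to the device.
# Behind an LSI RAID volume this reads 0 and trim is unavailable.
supports_trim() {
    value=$(cat "$1")
    [ "$value" -gt 0 ]
}

# Simulated sysfs entry so this runs anywhere; on a real server read
# /sys/block/sdX/queue/discard_max_bytes for your SSD instead.
f=$(mktemp)
echo 2147450880 > "$f"
if supports_trim "$f"; then
    echo "TRIM supported"
else
    echo "TRIM not supported"
fi
rm -f "$f"
```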
  13. Yes, if the new disk was precleared and never formatted then parity is valid without it, still you should run a parity check after the new config.
  14. Just FYI, the docker image share on Unraid is NOCOW by default, so checksums are disabled and any corruption can't be fixed with a scrub.
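You can see this from the file attribute flags. A minimal sketch of reading them, assuming `lsattr -d` output like the sample line below (the path is illustrative; adjust to where your docker image actually lives):

```shell
# A 'C' in the lsattr flag field marks No_COW: btrfs skips
# copy-on-write and data checksums for files created inside,
# so a scrub has nothing to verify or repair there.
line='---------------C------ /mnt/user/system/docker'
flags=${line%% *}          # first whitespace-separated field
case $flags in
  *C*) verdict="NOCOW: checksums disabled, scrub cannot fix corruption" ;;
  *)   verdict="COW: data is checksummed" ;;
esac
echo "$verdict"
```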
  15. Disks are most likely fine; you're using a SATA port multiplier, and those are not recommended. Multiple disks got disconnected at the same time:

      Feb 25 20:16:56 Tower kernel: ata9.01: status: { DRDY }
      Feb 25 20:17:06 Tower kernel: ata9.15: softreset failed (1st FIS failed)
      ### [PREVIOUS LINE REPEATED 2 TIMES] ###
      Feb 25 20:17:51 Tower kernel: ata9.15: limiting SATA link speed to 3.0 Gbps
      Feb 25 20:17:56 Tower kernel: ata9.15: softreset failed (1st FIS failed)
      Feb 25 20:17:56 Tower kernel: ata9.15: failed to reset PMP, giving up
      Feb 25 20:17:56 Tower kernel: ata9.15: Port Multiplier detaching
      Feb 25 20:17:56 Tower kernel: ata9.00: disabled
      Feb 25 20:17:56 Tower kernel: ata9.01: disabled
      Feb 25 20:17:56 Tower kernel: ata9.02: disabled
      Feb 25 20:17:56 Tower kernel: ata9.00: disabled

      Reboot and post new diags so we can check SMART, but you should also get rid of the port multiplier or you will very likely continue to have issues.
  16. No, LSI only supports trim on HBAs using IT mode.
  17. What do you mean by this? Where is it stopping in the boot process?
  18. There are no disk related errors in the syslog, but CRC errors are not always logged; monitor the SMART attribute, and if it keeps increasing there's still a problem with the connection.
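A minimal sketch of pulling that attribute out of smartctl output. The sample line imitates a `smartctl -A /dev/sdX` attribute row; on a live system you'd pipe the real command into the same awk:

```shell
# The raw value (last field) of attribute 199 is a lifetime counter
# of CRC errors on the SATA link; it never resets, so what matters
# is whether it keeps growing after you reseat or replace the cables.
sample='199 UDMA_CRC_Error_Count  0x003e  200  200  000  Old_age  Always  -  3'
crc=$(printf '%s\n' "$sample" | awk '/UDMA_CRC_Error_Count/ {print $NF}')
echo "CRC error count: $crc"
```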
  19. Then and like mentioned the most likely reason for checksum errors would be one of the devices having dropped offline for some time and then rejoined the pool, if you post the diagnostics we could confirm if that was the case.
  20. It might be, but it's still an overclock and it's known to cause stability issues with some Ryzen servers; more info here.
  21. I don't see anything in the log; you can try safe mode with all dockers/VMs disabled and/or downgrading to a previous known working release. If there are still issues it's likely a hardware problem.
  22. I won't argue that btrfs doesn't have its bugs, but I've been using it for a long time, as well as following development on the mailing list, and I've never heard of any data checksum related bug; that feature is pretty much bulletproof, i.e., a checksum error means the data doesn't match the checksum stored at write time. This happens most often in Unraid with raid based pools when one of the members drops offline and then comes back online: the old data will be stale and fail checksums, and a scrub will bring it up to date. But if this is happening on a single device filesystem then you can be pretty sure data corruption occurred, or there's a hardware problem, like bad RAM. XFS would never complain, since it doesn't checksum data; it will happily feed you corrupted data. I also use ZFS for a couple of servers, no doubt more stable than btrfs, but it's not perfect, and not as flexible.
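To see what a scrub actually found, the error summary can be read out of `btrfs scrub status`. A sketch parsing captured output (the sample text imitates the command's report format; numbers are illustrative):

```shell
# csum errors are data blocks whose contents no longer match the
# checksum recorded at write time; on a redundant pool a scrub can
# repair them from the good copy ("corrected errors" below), while
# on a single device they can only be detected, not fixed.
status='scrub started at Tue Feb 25 20:00:01 2020 and finished after 00:41:11
    total bytes scrubbed: 1.82TiB with 8 errors
    error details: csum=8
    corrected errors: 8, uncorrectable errors: 0, unverified errors: 0'
csum=$(printf '%s\n' "$status" | awk -F'csum=' '/csum=/ {print $2+0}')
echo "csum errors: $csum"
```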
  23. Yes. Yes, if there is available contiguous space on the filesystem.