Everything posted by JorgeB

  1. It can happen with any Linux distro on kernel 5.15 or newer with Intel VT-d enabled in DMA translation mode, e.g., the same exact issue occurs on Ubuntu 22.04 with ZFS. Mostly HP servers from around 2011 are affected, but I can't rule out the issue occurring with any Intel based server, since there's at least one case with a 6th gen Intel CPU. The issue is very easy to identify though, just look for the: DMAR: ERROR: DMA PTE for vPFN 0xxxxxx already set (to xxxxxx not xxxxxx) errors in the log; when these errors start repeating is when the data corruption starts.
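     For example, a quick way to check for them (assuming the default Unraid syslog location):
     grep 'DMAR: ERROR: DMA PTE' /var/log/syslog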
  2. You should ask in the Parity Check Tuning plugin support thread; stock Unraid will never pause a check on its own.
  3. The log snippet doesn't show the full correcting check; assuming it did run until the end without finding any errors, it suggests the previous error found was unrelated to the unclean shutdown, and was possibly something like a RAM bit flip. Unclean shutdown related errors, when they exist, are mostly at the beginning of the disks, which is where the metadata is stored, assuming an XFS filesystem for the array.
  4. That can't be just from CRC errors, since the devices are dropping offline, but bad enough CRC errors or a cable issue can result in disks dropping.
  5. It doesn't sound like the server is very stable; trying to repair/rebuild disks with an unstable server can cause more issues. See if you can get new diags after array start.
  6. It's fine if it's in IT mode; diags would confirm.
  7. That's a big update, with several changes in between. One thing that changed in this release is the IOMMU pass-through mode; it's unlikely that is the problem, but since it's very easy to try, just add iommu.passthrough=0 to syslinux.cfg and reboot. If it doesn't help, I recommend removing the line.
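     The append line in syslinux.cfg would then look something like this (a sketch assuming an otherwise stock config; keep any other options already on that line):
     append iommu.passthrough=0 initrd=/bzroot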
  8. This suggests a hardware problem, that could also be related to the other issues you are experiencing.
  9. Also, not booting with a previous known good release confirms it's not a release problem; try creating the flash drive manually with diskpart: https://lime-technology.com/forums/topic/49992-update-from-619-to-62-rc5-need-help/?tab=comments#comment-492238
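     A minimal diskpart sequence, assuming the flash drive shows up as disk 1 (verify the disk number with list disk first, clean erases the selected disk):
     diskpart
     list disk
     select disk 1
     clean
     create partition primary
     format fs=fat32 quick label=UNRAID
     active
     assign
     exit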
  10. If it can be flashed to LSI IT mode it won't be a problem; it's always good to take a screenshot of the assignments or save the diags before starting, but it should be straightforward.
  11. Does it stop before or after starting to boot Unraid? If after, a screenshot or photo might help.
  12. I would expect it to change after this, but that could only happen if you didn't use the array after updating to 6.10.2, since otherwise you'd have had the same problem you posted about now.
  13. Note that there's also an option in "Settings -> Display Settings -> Display world-wide-name in device ID" that affects this, but I assume you didn't change anything.
  14. And nothing changed after updating to 6.10.2, correct? The long name includes the world wide name:
      HUS724040ALS640_PCK5U0LX_35000cca05cb3a924
      The short name omits that:
      HUS724040ALS640_PCK5U0LX
  15. That, and check the filesystem on disks 2, 4 and 8.
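      If they're XFS, a read-only check can be run from the console with the array started in maintenance mode, e.g. for disk 2 (device name assumes the v6.10.x naming; -n makes no changes):
      xfs_repair -n /dev/md2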
  16. SAS devices were displaying the long name and are now using the short one, though it's strange this is happening after updating from v6.10.2 to v6.10.3; are you sure you didn't update from v6.9.2?
  17. Kind of unexpected that just updating to v6.10.3 helps with this, but good news anyway.
  18. Nothing obvious to me; I would suggest enabling DHCP, that way we can better see if it's communicating with the router.
  19. The ones that have data on cache that you want to move to the array; system shares are usually preferred on cache, so cache=prefer is OK, but it depends on what you want to do.
  20. First try booting in safe mode and doing a transfer; if the issue persists, enable the syslog server and post that and the diagnostics after a crash.
  21. As long as no RAID controllers are involved it's just plug and play.
  22. The last screenshot shows all shares set to cache=no or cache=prefer; for the mover to move data from a pool to the array you need to set the shares to cache=yes, see the GUI help for more info on every option.
  23. It is weird, there's no device on slot 1 but it should still show the usage graphs; it could be related to the theme you're using, but just assign it to slot 1:
      - stop the array
      - unassign the device from cache2
      - re-assign it to cache1
      - start the array
      Note that if you ever want to use the old cache device in the same server you need to wipe it before array start, you can use:
      blkdiscard -f /dev/sdX
      Replace X with the correct letter.
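      To confirm the correct letter first, something like this can help match the device by model and serial:
      lsblk -o NAME,MODEL,SERIAL,SIZE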
  24. I would guess it's unrelated to the update (if you updated from another v6.10.x release), but please start a new thread in the general support forum and don't forget to include the diagnostics after array start.
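      If the GUI isn't accessible, they can also be generated from the console or SSH; the zip is saved to the logs folder on the flash drive:
      diagnostics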