Everything posted by JorgeB

  1. That's interesting, do you mind posting the output of: udevadm info -q property -n /dev/nvme0n1
  2. Try going back to v6.9.2 to see if the errors go away; IIRC there's another user with a similar issue with the same family CPU.
  3. Besides new diags, see if you can run this now.
  4. You can check if the data on the actual disks is in better shape: unassign both disabled disks, start the array, then stop it; now you can use UD to mount the old disks and check their contents. Note that if you want to mount them with the array started you'll need to change both XFS UUIDs in the UD settings first (see the UUID sketch after this list).
  5. Should be mostly fine, but you should read the release notes of at least the major releases (v6.7.0, v6.8.0 and v6.9.0); there might be some warnings/recommendations.
  6. This suggests a problem with the current config. You might try copying just super.dat and the key and reconfiguring the server; you can also try to copy a few more config files, a few at a time, to see if you can find what's breaking it.
  7. If the cache device is not working correctly it might cause various issues, but I didn't see anything related logged.
  8. Changed Status to Closed. Changed Priority to Other.
  9. Going to close this one for now, since there's already another report and this isn't an Unraid issue anyway.
  10. Didn't the device name change when you updated? I also have the same NVMe device and the name changed when updating to the initial v6.10.0-rc releases; it then changed back with -rc8 or later when it was corrected. If yours also changed, and depending on how you handled that, you might have wiped the device accidentally; the symptom is consistent with wipefs being run on it. I can't see how just upgrading would do that, but glad it's solved.
  11. There's no valid btrfs filesystem on that device, as if it had been wiped; are you sure you're not leaving anything out? If it was really btrfs you can try this, with the array stopped:

      btrfs-select-super -s 1 /dev/nvme0n1p1

      If the command is successful, start the array to see if it mounts (see the superblock sketch after this list).
  12. The main issue with those servers is that updating can corrupt all filesystems; luckily it appears only the cache was affected for you. If you have backups it would be best to re-format the cache and restore the data; if you don't, try to copy everything you can now. It's read-only but it's still mounting, so most data should be accessible.
  13. Yeah, that makes sense, since there's no cache it generates an error.
  14. I would try with a different controller if available.
  15. Cache alone on the SATA controller *should* be OK.
  16. It's working fine for me, did you grab diags?
  17. So clearly not a disk problem but no idea what's going on.
  18. Does it work if you type, for example: mkdir /mnt/disk1/appdata
  19. Then I would try with a different disk if available; you can just try with UD, no need to add it to the array.
  20. Not really, do you have a different disk you could test with? Also does that disk format and mount with the UD plugin?
  21. I would recommend going back to v6.9.2 for now, there have been other users with the same hardware with similar issues: https://forums.unraid.net/bug-reports/stable-releases/upgrade-from-692-to-610-lost-1-disk-in-array-cache-drive-went-offline-r1917/?do=getNewComment&d=2&id=1917
  22. There's nothing logged. I would suggest canceling the sync and running a test with the diskspeed docker to confirm all disks are performing normally; also make sure this high CPU utilization is expected:

        PID  USER  PR  NI   VIRT    RES   SHR  S  %CPU  %MEM      TIME+  COMMAND
      27587  root  20   0  16452  15688  2888  R  96.2   0.1  240:12.17  port_ping+
      30062  root  20   0   6004   4060  2712  R  92.3   0.0    0:01.02  lsof
  23. Very strange, I don't remember seeing anything similar. Run wipefs again (see the wipefs sketch after this list), but reboot first before formatting in case there's some GPT info still in memory that's causing issues, though when a reboot is necessary there's usually a warning.
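Sketch for post 4: a minimal example, assuming /dev/sdX1 as a placeholder for the old disk's partition, of changing an XFS UUID from the command line; UD's "Change UUID" setting does the same thing.

    xfs_repair -n /dev/sdX1          # optional read-only check of the old filesystem first
    xfs_admin -U generate /dev/sdX1  # write a new random UUID so it no longer clashes with the emulated array disk

The new UUID is only needed because the emulated disk in the started array still carries the original one.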
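Sketch for post 11: a hedged example of inspecting the superblock copies around the recovery command from the post; device path as in the post, run with the array stopped.

    btrfs inspect-internal dump-super -a /dev/nvme0n1p1   # read-only dump of all superblock copies
    btrfs-select-super -s 1 /dev/nvme0n1p1                # restore from backup superblock 1, as in the post

If the restore succeeds, start the array to see if the pool mounts.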
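Sketch for post 23: a minimal wipefs example, with /dev/sdX as a placeholder for the problem disk.

    wipefs /dev/sdX      # list existing filesystem/partition-table signatures without changing anything
    wipefs -a /dev/sdX   # erase all signatures; reboot before formatting so no stale GPT info remains in memory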