Everything posted by JorgeB

  1. Yep, if you don't know how to check, you can post new diags after rebooting.
  2. The DiskSpeed docker can also be a good way of finding an underperforming drive.
  3. Those are not similar; they mention nf_conntrack, and those are typical of the macvlan-related crashes with v6.9.2.
  4. Wipe both cache devices: you can use blkdiscard -f /dev/sdX with the array stopped, then start the array and format the pool (see the sketch below).
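     A minimal sketch, assuming the two pool devices show up as the hypothetical sdX and sdY (check the actual letters on the Main page before running anything destructive):

       # array must be stopped; blkdiscard erases everything on the device
       blkdiscard -f /dev/sdX
       blkdiscard -f /dev/sdY
       # then start the array and format the pool from the GUI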
  5. That's it, but there's one more step once it finishes: you can't just unassign the removed device and start the array. Stop the array, unassign all 3 pool devices, and start the array (check the "I'm sure" box) to make Unraid forget that pool config; then stop the array, re-assign the two remaining pool devices, and start the array again.
  6. If the onboard SAS is in IT mode it should be plug and play.
  7. One more thing: make sure the shares to be copied are set to "split any directory"; since rsync creates all the folders before starting to copy, it will fail if they're not.
  8. That's not a problem, it will continue from where it left off; files already copied will be skipped. After removing the disk from the array and mounting it with UD you can do this:

       rsync -av /mnt/disks/UD_disk_name/share/ /mnt/user/share/

     Change UD_disk_name and share to the actual names, and if there are multiple shares on the source, repeat for each one (see the sketch below). Any files on the root of that disk won't be copied this way, but there shouldn't be any, and you can easily check.
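     A minimal sketch for the repeat-per-share case, using the hypothetical share names share1 and share2 and the same UD mount point:

       # run once per share; the trailing slashes matter to rsync
       for s in share1 share2; do
           rsync -av "/mnt/disks/UD_disk_name/$s/" "/mnt/user/$s/"
       done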
  9. I assume sdf was a drive you rebuilt, and that would recreate the partition, but the other drive is missing the partition, which means something (or someone) deleted them. If your original flash drive was also deleted you were likely hacked; there were multiple cases of that recently, with users who had the server exposed to the Internet, see here for more info. If you're sure the filesystem was reiser there's one more thing you can try (the partition must exist), starting with disk2: https://wiki.unraid.net/Check_Disk_Filesystems#Rebuilding_the_superblock After the superblock is rebuilt, run reiserfsck --rebuild-tree on the same disk (sketch below).
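     A minimal sketch of that sequence, assuming the array is started in maintenance mode and disk2 maps to the hypothetical device /dev/md2 (array disks should be addressed through the md device so parity stays in sync):

       # rebuild the superblock first, as described on the wiki page
       reiserfsck --rebuild-sb /dev/md2
       # then rebuild the filesystem tree on the same disk
       reiserfsck --rebuild-tree /dev/md2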
  10. I wouldn't recommend using the GUI since it might not work, but you can still do it manually. If you want to do that I can post instructions; please post diags and let me know which device you want to remove first.
  11. It's still mounting the pool (including the unassigned device), since it's available.
  12. Enable syslog mirror to flash, then post that log after a crash.
  13. Preclear is not needed, but it's not bad practice to test a new disk; you can do that on a different computer by running, for example, an extended SMART test or the manufacturer's testing tool.
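     If the disk is attached to a Linux machine, a minimal sketch using smartctl, assuming the new disk shows up as the hypothetical /dev/sdX:

       # start an extended self-test; it runs in the background on the drive
       smartctl -t long /dev/sdX
       # check progress and the result later
       smartctl -a /dev/sdX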
  14. That attribute doesn't mean the SSD failed, just that it's past its expected life; it can still last a long time, my cache NVMe device is way past its predicted life (127%) and still going strong. Most if not all data should be fine; it went read-only to avoid further filesystem corruption. You can copy the data using your favorite tool; not sure the mover will work correctly, since it can't delete the source files, never tried.
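     A minimal sketch for copying the data off, assuming the pool is still mounted read-only at /mnt/cache and using a hypothetical destination folder on an array disk (writing straight to a disk share keeps the data off the failing pool):

       # nothing is deleted from the read-only source
       rsync -av /mnt/cache/ /mnt/disk1/cache_backup/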
  15. SSDs can get pretty hot during sustained writes; I usually set my warnings to 55°C and 60°C for SSDs.
  16. Possibly, but I doubt it; Unraid is not optimized for those speeds. As an example, see a read check with 5 NVMe devices (no parity): a single NVMe device was already only about 2.4GB/s, but the speed starts decreasing as you add even more.
  17. It's rather strange, the cache initially mounts OK:

       Jun 9 19:30:20 magnas kernel: XFS (nvme0n1p1): Ending clean mount

     But just a few seconds later it fails:

       Jun 9 19:30:26 magnas kernel: nvme nvme0: failed to set APST feature (-19)
       Jun 9 19:30:26 magnas kernel: XFS (nvme0n1p1): metadata I/O error in "xfs_imap_to_bp+0x5c/0xa2 [xfs]" at daddr 0x5e340 len 32 error 5

     Then it's detected again, but now as nvme1, so it's picked up by Unraid as a new device:

       Jun 9 19:30:26 magnas kernel: nvme nvme1: pci function 0000:01:00.0
       Jun 9 19:30:26 magnas kernel: nvme nvme1: missing or invalid SUBNQN field.
       Jun 9 19:30:26 magnas kernel: nvme nvme1: Shutdown timeout set to 10 seconds
       Jun 9 19:30:26 magnas kernel: nvme nvme1: 8/0/0 default/read/poll queues
       Jun 9 19:30:26 magnas kernel: nvme1n2: p1

     As far as Unraid is concerned the cache is offline, because it's looking for nvme0. I don't remember seeing an issue like this before; I would try a different model NVMe device if that's an option.
  18. Next time please use the existing plugin support thread, but the above should be excluded from the plugin since they are files that change every day; probably the same for the other ones.
  19. Wait for the SMART test to finish and act accordingly, but it should be OK for now, i.e., it should pass the test.
  20. You can still use Windows, or you can use, for example, midnight commander (mc on the console); the important part is to only transfer from user share to user share, or disk share to disk share (see the sketch below).
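     A minimal sketch of the disk-share-to-disk-share form, with hypothetical disk and share names (mixing /mnt/user and /mnt/diskX paths in the same transfer is what must be avoided):

       rsync -av /mnt/disk1/share/ /mnt/disk2/share/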
  21. Yes, depending on Unraid release you might need to check a box before array start.
  22. The docker image can easily be recreated, and that's probably better than re-using the existing one, but if you do re-use it, it should be put in:

       docker: DOCKER_IMAGE_FILE="/mnt/user/system/docker/docker.img"
       libvirt: IMAGE_FILE="/mnt/user/system/libvirt/libvirt.img"
       virtio iso: VIRTIOISO="/mnt/user/isos/virtio-win-0.1.190-1.iso"

     Make sure the docker and VM services are stopped before doing it (sketch below).
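     A minimal sketch of putting a saved image back, assuming a hypothetical backup location and that the Docker and VM services are already disabled in Settings:

       # hypothetical backup path; adjust to where your copies actually live
       cp /mnt/user/backups/docker.img /mnt/user/system/docker/docker.img
       cp /mnt/user/backups/libvirt.img /mnt/user/system/libvirt/libvirt.img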
  23. Why not? It should... You can do a new config, then check "parity is already valid" before array start and run a parity check.