Everything posted by JorgeB

  1. Disk dropped offline and reconnected with a different letter. You can reboot to run the SMART tests using the GUI, or run them now manually on sdk; the disk shows a lot of pending sectors, so you should run an extended test.
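     A minimal sketch of running the test manually with smartctl, assuming the disk is still /dev/sdk:

     smartctl -t long /dev/sdk       # start the extended (long) self-test in the background
     smartctl -l selftest /dev/sdk   # check progress/results once it completes
     smartctl -A /dev/sdk            # review attributes, including Current_Pending_Sector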
  2. -rw-r--r-- 1 root root 0 Apr 7 2021 /var/log/packages/eudev-3.2.5-x86_64-2_LT
  3. DEVLINKS=/dev/disk/by-id/nvme-TOSHIBA-RD400_664S107XTPGV
     DEVNAME=/dev/nvme1n1
     DEVPATH=/devices/pci0000:64/0000:64:02.0/0000:66:00.0/nvme/nvme1/nvme1n1
     DEVTYPE=disk
     ID_MODEL=TOSHIBA-RD400
     ID_MODEL_ENC=TOSHIBA-RD400\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
     ID_PART_TABLE_TYPE=dos
     ID_REVISION=57CZ4102
     ID_SERIAL=TOSHIBA-RD400_664S107XTPGV
     ID_SERIAL_SHORT=664S107XTPGV
     ID_TYPE=nvme
     MAJOR=259
     MINOR=0
     SUBSYSTEM=block
     USEC_INITIALIZED=27426685

     Re-posting output from v6.10 below so it's easier to compare without scrolling up:

     DEVLINKS=/dev/disk/by-id/nvme-TOSHIBA-RD400__664S107XTPGV /dev/disk/by-id/nvme-eui.e83a9702000018f5
     DEVNAME=/dev/nvme1n1
     DEVPATH=/devices/pci0000:64/0000:64:02.0/0000:66:00.0/nvme/nvme1/nvme1n1
     DEVTYPE=disk
     DISKSEQ=18
     ID_MODEL=TOSHIBA-RD400
     ID_PART_TABLE_TYPE=dos
     ID_SERIAL=TOSHIBA-RD400_ 664S107XTPGV
     ID_SERIAL_SHORT= 664S107XTPGV
     ID_WWN=eui.e83a9702000018f5
     MAJOR=259
     MINOR=0
     SUBSYSTEM=block
     USEC_INITIALIZED=36829040
  4. I can do that tomorrow. To be honest it wouldn't be a big loss, since I expect to retire this SSD soon anyway because it's way past its predicted life, currently at 180% with >1PB written:
     - Percentage used: 180%
     - Data units read: 311,791,494 [159 TB]
     - Data units written: 2,076,218,700 [1.06 PB]
     On the other hand I'm curious to see how much longer it will last, and though it's not a very common model there might be other users with the same device, so if it can be fixed it's always better.
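     For reference, those wear figures come from the NVMe health log; a quick way to read them, assuming the device is /dev/nvme1n1:

     smartctl -A /dev/nvme1n1   # shows Percentage Used and Data Units Read/Written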
  5. Yep:
     root@Tower1:~# cat /sys/block/nvme1n1/device/serial
     664S107XTPGV
  6. DEVLINKS=/dev/disk/by-id/nvme-TOSHIBA-RD400__664S107XTPGV /dev/disk/by-id/nvme-eui.e83a9702000018f5
     DEVNAME=/dev/nvme1n1
     DEVPATH=/devices/pci0000:64/0000:64:02.0/0000:66:00.0/nvme/nvme1/nvme1n1
     DEVTYPE=disk
     DISKSEQ=18
     ID_MODEL=TOSHIBA-RD400
     ID_PART_TABLE_TYPE=dos
     ID_SERIAL=TOSHIBA-RD400_ 664S107XTPGV
     ID_SERIAL_SHORT= 664S107XTPGV
     ID_WWN=eui.e83a9702000018f5
     MAJOR=259
     MINOR=0
     SUBSYSTEM=block
     USEC_INITIALIZED=36861596

     Hmm, guess all those extra spaces before the serial are the problem?
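     For anyone wanting to compare on their own system, the properties above can be queried directly from udev; a sketch assuming the device node is /dev/nvme1n1:

     udevadm info -q property -n /dev/nvme1n1   # print all udev properties for the device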
  7. Usually yes, though if you're getting constant filesystem corruption without an apparent reason there might be a hardware issue, like bad RAM.
  8. Possibly some incompatibility with the current kernel then; try the next release, if its kernel is newer.
  9. Apr 26 20:19:01 Unraid kernel: mpt2sas_cm0: sending diag reset !!
     Apr 26 20:19:03 Unraid kernel: mpt2sas_cm0: Invalid host diagnostic register value
     Apr 26 20:19:03 Unraid kernel: mpt2sas_cm0: System Register set:
     Apr 26 20:19:03 Unraid kernel: 00000000: ffffffff
     [every register from offset 00000000 through 000000fc reads ffffffff]
     Apr 26 20:19:03 Unraid kernel: mpt2sas_cm0: diag reset: FAILED

     Problem with the HBA, you can try re-flashing it.
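     A rough outline of the re-flash with LSI's sas2flash, for a SAS2-generation card (mpt2sas); the firmware and BIOS image names below are placeholders, use the ones matching your exact HBA model:

     sas2flash -listall                          # confirm the tool can still see the controller
     sas2flash -o -f 2118it.bin -b mptsas2.rom   # flash IT firmware plus boot ROM (placeholder file names)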
  10. Not a big deal, but we usually only change priority to "other" when it isn't/wasn't a bug, and this one was; it's solved, but it was a bug.
  11. This looks like a flash drive problem: backup the current flash, format it, download and unzip -rc5, run make_bootable, then restore the config folder from the backup.
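      If doing the backup/restore part from a shell, a minimal sketch assuming the flash is mounted at /boot and /mnt/user/backup is a placeholder destination that exists:

      cp -r /boot /mnt/user/backup/flash-backup           # backup the whole flash first
      # ...format the drive, extract the -rc5 zip onto it, run make_bootable...
      cp -r /mnt/user/backup/flash-backup/config /boot/   # restore just the config folder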
  12. Not a Mac user, but there was an issue with macOS forcing sync writes with Samba; not sure if current releases are still affected, but worth trying:
  13. Disable VM autostart and post new diags after array start with the 10GbE NIC installed.
  14. Are VMs set to autostart? I can't see the VM XML in the diags, please copy/paste here.
  15. Same deal, I guess this is device related? Strange, since this is my oldest NVMe device and it has never had issues since around v6.4.
  16. Is there supposed to be a cache1 NVMe device? If yes, it's not being detected at the hardware level.
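      A couple of quick checks to see whether the device is visible at the hardware level:

      lspci | grep -i -e nvme -e 'non-volatile'   # is the controller on the PCIe bus at all?
      ls /dev/nvme*                               # does the kernel expose a device node for it?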
  17. If it's a possibility I would return it.
  18. Do you mean 250Mb/s or 250MB/s? If the former, try turbo write; if the latter, that's about the expected speed, since Unraid can never be faster than single disk write speed when writing to the array.
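      Turbo write (reconstruct write) is normally enabled under Settings > Disk Settings; it can also be toggled from the console with Unraid's mdcmd, a sketch assuming current releases still accept these values:

      mdcmd set md_write_method 1   # reconstruct write (turbo)
      mdcmd set md_write_method 0   # back to the default read/modify/write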
  19. The cache filesystem is corrupt, and before that btrfs was detecting data corruption; usually that's RAM related. With your config, RAM speed should be set at 1866MT/s, not 2133MT/s, and that's a known source of data corruption with Ryzen, more info here. After fixing that it's still a good idea to run memtest, then you should backup and re-format the pool and monitor for further errors.
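      After the re-format, one way to monitor the pool for further errors, assuming it's mounted at /mnt/cache:

      btrfs dev stats /mnt/cache     # per-device error counters, should all stay at 0
      btrfs scrub start /mnt/cache   # verify checksums across the whole pool
      btrfs scrub status /mnt/cache  # check scrub progress/results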
  20. Either one is OK, you just have more options when using btrfs; for xfs you can use this: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=511923