JorgeB

Moderators
  • Posts: 67,654
  • Days Won: 707

Everything posted by JorgeB

  1. NVMe device dropped offline:

     Jun 11 16:14:11 Magnus kernel: nvme 0000:01:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0001 address=0xd9b94000 flags=0x0050]
     Jun 11 16:14:41 Magnus kernel: nvme nvme0: I/O 374 QID 1 timeout, aborting
     Jun 11 16:14:42 Magnus kernel: nvme nvme0: I/O 786 QID 3 timeout, aborting
     Jun 11 16:14:42 Magnus kernel: nvme nvme0: I/O 209 QID 4 timeout, aborting
     Jun 11 16:14:42 Magnus kernel: nvme nvme0: I/O 210 QID 4 timeout, aborting
     Jun 11 16:15:12 Magnus kernel: nvme nvme0: I/O 374 QID 1 timeout, reset controller
     Jun 11 16:15:12 Magnus kernel: nvme nvme0: I/O 2 QID 0 timeout, reset controller
     Jun 11 16:16:21 Magnus kernel: nvme nvme0: Device not ready; aborting reset, CSTS=0x1
     Jun 11 16:16:21 Magnus kernel: nvme nvme0: Abort status: 0x371

     Look for a BIOS update, or try a different NVMe device/board; the initial error is similar to the typical SATA controller problem with some Ryzen boards.
  2. If the shares have data you can just rename the folders; if they are empty, delete those shares.
  3. I don't know if that controller is supported, or if it works as an HBA, but if you have it you can always try it: see if any drives connected there appear normally and whether temp/SMART data is available. You can also post diags.
  4. It won't be moved if the share is set to cache=no; existing data on the cache for that share will remain there, and any new data written to that share will go to the array.
  5. Please post the diagnostics: Tools -> Diagnostics
  6. Please post the diagnostics: Tools -> Diagnostics
  7. Back up the current flash, re-do it, then restore only the config folder.
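     As a rough sketch of the backup/restore step above, assuming the flash is mounted at /boot as usual on Unraid (the backup destination path is just an example):

     ```shell
     # 1. Back up the config folder from the current flash drive
     #    (it holds the license key, disk assignments, and all settings):
     cp -r /boot/config /mnt/user/backups/flash-config-backup

     # 2. Recreate the flash drive (e.g. with the Unraid USB Creator),
     #    then restore only the config folder onto the new flash:
     cp -r /mnt/user/backups/flash-config-backup/* /boot/config/
     ```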
  8. If the backplane is SATA only it won't accept SAS drives, they won't fit, but most backplanes are SAS, and those also accept SATA.
  9. That's strange, please post diags.
  10. No, for now you can only do it manually:

      btrfs sub create /mnt/diskX/share_name

      A subvolume will then become a share and can be configured using the GUI. With Unraid each disk is a separate filesystem, so subvolumes and snapshots can never span more than one array disk, and the same is true for any individual file. You can have subvolumes with the same name on multiple disks, working as the same share, but snapshots then need to be made disk by disk; that's how I do it. For now you can only do it on the console or with a script; there's no GUI support or plugin for that.
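      As a sketch of the disk-by-disk approach described above (disk numbers, the share name, the .snaps folder, and the snapshot date are all placeholders):

      ```shell
      # Create a subvolume with the same name on each btrfs array disk the
      # share should span; Unraid will present them together as one share:
      btrfs subvolume create /mnt/disk1/share_name
      btrfs subvolume create /mnt/disk2/share_name

      # Snapshots are per-filesystem, so they have to be taken disk by disk:
      mkdir -p /mnt/disk1/.snaps /mnt/disk2/.snaps
      btrfs subvolume snapshot -r /mnt/disk1/share_name /mnt/disk1/.snaps/share_name_2021-06-11
      btrfs subvolume snapshot -r /mnt/disk2/share_name /mnt/disk2/.snaps/share_name_2021-06-11
      ```

      (`btrfs sub create` in the post above is just the abbreviated form of `btrfs subvolume create`.)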
  11. You can, just move from disk to disk; don't involve shares. rsync can't create more files than existed on the source, and there can't be any duplicates since it was writing to the same disk, unless you used multiple target folders.
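      A minimal sketch of such a disk-to-disk move with rsync (paths are placeholders; `--remove-source-files` deletes each file from the source only after it has been copied):

      ```shell
      # Copy the share's folder from disk1 to disk2, preserving attributes,
      # removing each successfully transferred file from the source:
      rsync -av --remove-source-files /mnt/disk1/share_name/ /mnt/disk2/share_name/

      # rsync leaves the now-empty source directories behind; clean them up:
      find /mnt/disk1/share_name -type d -empty -delete
      ```

      Note that the trailing slashes matter: they copy the folder's contents rather than nesting the folder itself inside the target.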
  12. There's something strange here, but I'm not sure what; only disk1 appears as having contents for that share, which looks wrong. Post a screenshot of the share in the GUI after clicking "compute".
  13. Check filesystem to see if it still can be fixed: https://wiki.unraid.net/Check_Disk_Filesystems#Checking_and_fixing_drives_in_the_webGui
  14. There's a problem reading super.dat, which is where the disk assignments are stored:

      Jun 6 13:57:59 Thales kernel: kernel read not supported for file /config/super.dat (pid: 14582 comm: modprobe)
      Jun 6 13:57:59 Thales kernel: read_file: read error 22
      Jun 6 13:57:59 Thales kernel: md: could not read superblock from /boot/config/super.dat
      Jun 6 13:57:59 Thales kernel: md: initializing superblock

      Try recreating the flash drive; if that fails, use a different one.
  15. Please post the diagnostics: Tools -> Diagnostics, and name the share you are copying to.
  16. That guide is for btrfs, not xfs. The NVMe device dropped offline:

      Jun 7 15:19:21 Tower kernel: nvme nvme0: I/O 998 QID 21 timeout, aborting
      Jun 7 15:19:21 Tower kernel: nvme nvme0: I/O 999 QID 21 timeout, aborting
      Jun 7 15:19:21 Tower kernel: nvme nvme0: I/O 1000 QID 21 timeout, aborting
      Jun 7 15:19:21 Tower kernel: nvme nvme0: I/O 968 QID 5 timeout, aborting
      Jun 7 15:19:21 Tower kernel: nvme nvme0: I/O 969 QID 5 timeout, aborting
      Jun 7 15:19:21 Tower kernel: nvme nvme0: I/O 934 QID 12 timeout, aborting
      Jun 7 15:19:21 Tower kernel: nvme nvme0: I/O 935 QID 12 timeout, aborting
      Jun 7 15:19:21 Tower kernel: nvme nvme0: I/O 936 QID 12 timeout, aborting
      Jun 7 15:19:51 Tower kernel: nvme nvme0: I/O 968 QID 5 timeout, reset controller
      Jun 7 15:20:21 Tower kernel: nvme nvme0: I/O 12 QID 0 timeout, reset controller
      Jun 7 15:21:15 Tower kernel: nvme nvme0: Device not ready; aborting reset, CSTS=0x1
      Jun 7 15:21:15 Tower kernel: nvme nvme0: Abort status: 0x371
      ### [PREVIOUS LINE REPEATED 7 TIMES] ###
      Jun 7 15:21:36 Tower kernel: nvme nvme0: Device not ready; aborting reset, CSTS=0x1
      Jun 7 15:21:36 Tower kernel: nvme nvme0: Removing after probe failure status: -19
      Jun 7 15:21:56 Tower kernel: nvme nvme0: Device not ready; aborting reset, CSTS=0x1
      Jun 7 15:21:56 Tower kernel: XFS (nvme0n1p1): log I/O error -5

      Post diags after rebooting.
  17. Disable spin down or it can interrupt the SMART test, but I would run diskspeed first.
  18. Can't see the reason based on the syslog. One thing you can try is to boot the server in safe mode with all docker/VMs disabled and let it run as a basic NAS for a few days; if it still crashes it's likely a hardware problem, and if it doesn't, start turning the other services back on one by one.
  19. The preclear log isn't really helpful to us, though it can be to the plugin author; you can also post the diags taken after the preclear, and with those we might be able to help.
  20. Jun 10 17:06:36 Warptower kernel: mpt2sas_cm0: LSISAS2008: FWVersion(20.00.07.00), ChipRevision(0x03), BiosVersion(07.39.02.00)

      It's already on the latest one.
  21. Please use the existing plugin support thread:
  22. Yep, if you don't know how to check you can post new diags after rebooting.
  23. The Diskspeed docker can also be a good way of finding an underperforming drive.