Everything posted by JorgeB

  1. Log is full of connection errors from an unassigned disk, and there are also filesystem issues with disk1. Check filesystem on disk1, disconnect or replace the cables on the unassigned disk, and post new diags after array start.
  2. I was talking about disk1 in the array, the one that dropped offline and reconnected. I see, you meant that disk1 is not using USB; you're right, I misread earlier, but it still looks like a power/connection issue.
  3. Apr 4 08:44:44 HomeServer kernel: sd 0:0:0:0: [sda] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=DRIVER_OK cmd_age=0s
     Apr 4 08:44:44 HomeServer kernel: sd 0:0:0:0: [sda] tag#0 Sense Key : 0x3 [current]
     Apr 4 08:44:44 HomeServer kernel: sd 0:0:0:0: [sda] tag#0 ASC=0x11 ASCQ=0x0
     Apr 4 08:44:44 HomeServer kernel: sd 0:0:0:0: [sda] tag#0 CDB: opcode=0x28 28 00 00 ee cf 00 00 00 80 00
     Apr 4 08:44:44 HomeServer kernel: blk_update_request: critical medium error, dev sda, sector 15650560 op 0x0:(READ) flags 0x80700 phys_seg 1
     Problems with the flash drive, and likely why the drive changes weren't recorded.
  4. Apr 4 12:41:56 Frodo kernel: ahci 0000:00:17.0: Found 1 remapped NVMe devices.
     Apr 4 12:41:56 Frodo kernel: ahci 0000:00:17.0: Switch your BIOS from RAID to AHCI mode to use them.
  5. Disk1 is using USB, we don't recommend USB for array or pool devices since it's prone to disconnect issues, disk3 looks more like a power/connection problem.
  6. I would recommend replacing/swapping the cables, and if the disk is still making weird noises and not coming back online, replace it.
  7. It is:
     Apr 2 17:31:09 fileserver kernel: XFS (md1): Please umount the filesystem and rectify the problem(s)
     Check filesystem on disk1.
  8. Please post the diagnostics and a screenshot of the main GUI page.
  9. Looks more like a power/connection issue, but since the disk dropped offline there's no SMART report, check/replace cables and post new diags after the disk comes back online.
  10. As mentioned, dd is not a reliable way to test that. Sorry, can't really help with that.
  11. You have one LSI HBA and one LSI RAID controller:
      01:00.0 Serial Attached SCSI controller [0107]: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 [1000:0097] (rev 02)
              Subsystem: Broadcom / LSI SAS9300-8i [1000:30e0]
              Kernel driver in use: mpt3sas
              Kernel modules: mpt3sas
      02:00.0 RAID bus controller [0104]: Broadcom / LSI MegaRAID SAS-3 3108 [Invader] [1000:005d] (rev 02)
              Subsystem: Broadcom / LSI MegaRAID SAS 9361-8i [1000:9361]
              Kernel driver in use: megaraid_sas
              Kernel modules: megaraid_sas
      Disks are connected to the RAID controller.
  12. They should, though a second one will only work if your board supports PCIe bifurcation.
  13. Run a scrub on the pool and make sure there are no uncorrectable errors, after that recreate the docker image.
  14. It's normal filesystem overhead for xfs with reflink support. It can be handy to access it over the network, but it's disabled by default on newer releases; you can change that in the share settings for the flash drive. You can set the minimum free space for shares, not disks. Correct.
  15. Just a couple of observations: using dd to test /mnt/user doesn't always give relevant results, and it's known that user shares always have some extra overhead vs disk shares. Some users see a much bigger difference than others, though 25MB/s would be extra slow; usually only users with 10GbE notice the difference. If you transfer data from another PC to a share using cache, do you also get 25MB/s?
  16. Please use the existing docker support thread:
  17. Apr 2 13:59:49 unRaid kernel: dmar_fault: 2 callbacks suppressed
      Apr 2 13:59:49 unRaid kernel: DMAR: DRHD: handling fault status reg 3
      Apr 2 13:59:49 unRaid kernel: DMAR: [DMA Read] Request device [05:00.0] PASID ffffffff fault addr fff8b000 [fault reason 06] PTE Read access is not set
      Apr 2 13:59:49 unRaid kernel: DMAR: DRHD: handling fault status reg 3
      Apr 2 13:59:49 unRaid kernel: DMAR: [DMA Read] Request device [05:00.0] PASID ffffffff fault addr ffcb6000 [fault reason 06] PTE Read access is not set
      Apr 2 13:59:49 unRaid kernel: DMAR: DRHD: handling fault status reg 2
      Apr 2 13:59:49 unRaid kernel: DMAR: [DMA Read] Request device [05:00.0] PASID ffffffff fault addr ff740000 [fault reason 06] PTE Read access is not set
      Apr 2 13:59:49 unRaid kernel: DMAR: DRHD: handling fault status reg 2
      There's this constant issue with the NIC, later resulting in a hang:
      Apr 2 13:59:51 unRaid kernel: e1000 0000:05:00.0 eth0: Detected Tx Unit Hang
      Apr 2 13:59:51 unRaid kernel: Tx Queue <0>
      Apr 2 13:59:51 unRaid kernel: TDH <6b>
      Apr 2 13:59:51 unRaid kernel: TDT <7d>
      Apr 2 13:59:51 unRaid kernel: next_to_use <7d>
      Apr 2 13:59:51 unRaid kernel: next_to_clean <68>
      Apr 2 13:59:51 unRaid kernel: buffer_info[next_to_clean]
      Apr 2 13:59:51 unRaid kernel: time_stamp <fffe78a5>
      Apr 2 13:59:51 unRaid kernel: next_to_watch <6c>
      Apr 2 13:59:51 unRaid kernel: jiffies <fffe7f80>
      Apr 2 13:59:51 unRaid kernel: next_to_watch.status <0>
      Apr 2 13:59:53 unRaid kernel: e1000 0000:05:00.0 eth0: Detected Tx Unit Hang
      Apr 2 13:59:53 unRaid kernel: Tx Queue <0>
      Apr 2 13:59:53 unRaid kernel: TDH <6b>
      Apr 2 13:59:53 unRaid kernel: TDT <7d>
      Apr 2 13:59:53 unRaid kernel: next_to_use <7d>
      Apr 2 13:59:53 unRaid kernel: next_to_clean <68>
      Apr 2 13:59:53 unRaid kernel: buffer_info[next_to_clean]
      Apr 2 13:59:53 unRaid kernel: time_stamp <fffe78a5>
      Apr 2 13:59:53 unRaid kernel: next_to_watch <6c>
      Apr 2 13:59:53 unRaid kernel: jiffies <fffe8780>
      Apr 2 13:59:53 unRaid kernel: next_to_watch.status <0>
      Apr 2 13:59:55 unRaid kernel: e1000 0000:05:00.0 eth0: Detected Tx Unit Hang
      Apr 2 13:59:55 unRaid kernel: Tx Queue <0>
      Apr 2 13:59:55 unRaid kernel: TDH <6b>
      Apr 2 13:59:55 unRaid kernel: TDT <7d>
      Apr 2 13:59:55 unRaid kernel: next_to_use <7d>
      Apr 2 13:59:55 unRaid kernel: next_to_clean <68>
      Apr 2 13:59:55 unRaid kernel: buffer_info[next_to_clean]
      Apr 2 13:59:55 unRaid kernel: time_stamp <fffe78a5>
      Apr 2 13:59:55 unRaid kernel: next_to_watch <6c>
      Apr 2 13:59:55 unRaid kernel: jiffies <fffe8f80>
      Apr 2 13:59:55 unRaid kernel: next_to_watch.status <0>
      Apr 2 13:59:56 unRaid kernel: ------------[ cut here ]------------
      Try installing the NIC in a different PCIe slot if available.
  18. Yes, a rebuild would only be required if the array was started with a missing disk; you can also see that by looking at the disk status, all are green.
  19. Could be this issue: https://forums.unraid.net/bug-reports/prereleases/69x-610x-intel-i915-module-causing-system-hangs-with-no-report-in-syslog-r1674/?do=getNewComment&d=2&id=1674
  20. Enable the syslog server and post that after a crash.
  21. Yep, same driver as the SASLP and SAS2LP, and known to be problematic; I would recommend replacing it with an LSI if that's a possibility.
  22. Start here: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=819173
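The filesystem check suggested in item 7 roughly looks like this from the console. This is a sketch only, assuming disk1 maps to /dev/md1 (Unraid's usual array device naming); verify the device and start the array in maintenance mode before running anything:

```shell
# Sketch: assumes disk1 is /dev/md1, verify this on your system first.
# Start the array in maintenance mode before running.
xfs_repair -n /dev/md1   # -n: read-only check, reports problems without modifying
# If problems are reported, repeat without -n to actually repair:
# xfs_repair /dev/md1
```

The same check can be run from the GUI by clicking the disk on the main page with the array in maintenance mode.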
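For the scrub in item 13, a minimal command-line sketch, assuming a btrfs cache pool mounted at /mnt/cache (adjust the path to your pool name):

```shell
# Assumption: the pool is btrfs and mounted at /mnt/cache.
btrfs scrub start -B /mnt/cache   # -B runs in the foreground until finished
btrfs scrub status /mnt/cache     # check the summary for uncorrectable errors
# Only recreate docker.img once the scrub reports no uncorrectable errors
# (Settings > Docker: disable the service, delete the image, re-enable).
```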
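On the dd caveat in items 10 and 15: without direct I/O, writes land in the RAM page cache first, so dd reports inflated speeds that say little about the disk or share. A minimal sketch; TARGET is a hypothetical path, point it at a file on the share you actually want to measure:

```shell
# TARGET is hypothetical; substitute a file on the share under test,
# e.g. /mnt/user/yourshare/ddtest.
TARGET=${TARGET:-/tmp/ddtest}
# Cached write: data lands in RAM first, so the reported speed is inflated.
dd if=/dev/zero of="$TARGET" bs=1M count=64
# Direct write: oflag=direct bypasses the page cache for a more realistic
# number (O_DIRECT is not supported on every filesystem, hence the || true).
dd if=/dev/zero of="$TARGET" bs=1M count=64 oflag=direct || true
rm -f "$TARGET"
```

A dedicated tool like fio gives more controllable results than dd for this kind of test.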