JustinChase Posted May 18, 2019

I've gotten this message emailed to me a few times over the last week. I really have no idea what it's trying to tell me (other than that there's an issue) or how to go about fixing it. Any ideas what needs to be fixed?

media-diagnostics-20190518-1250.zip
JorgeB Posted May 18, 2019

Looks related to the NVMe device being trimmed; there are known kernel issues with some NVMe/trim/VT-d-enabled combinations.
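For anyone who wants to confirm whether VT-d/IOMMU is actually active on their box before chasing this further, a quick sketch from the Unraid console (the exact dmesg wording varies by kernel version):

# Check whether the kernel initialized the Intel IOMMU (VT-d)
dmesg | grep -i -e DMAR -e IOMMU | head -n 20

# See which iommu-related parameters, if any, were passed at boot
cat /proc/cmdline

# List IOMMU groups; empty output usually means the IOMMU is disabled
ls /sys/kernel/iommu_groups/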
JustinChase Posted May 18, 2019 (Author)

That doesn't sound good. How might I try to identify and then fix these issues? I was looking at the SSD trim application, and I think it was set to disabled. I just changed it to daily at 8:50 after posting, as a test. I just looked at the log and see these as the last items in there:

May 18 08:47:12 media apcupsd[26964]: apcupsd 3.14.14 (31 May 2016) slackware startup succeeded
May 18 08:47:12 media apcupsd[26964]: NIS server startup succeeded
May 18 08:50:07 media kernel: DMAR: DRHD: handling fault status reg 3
May 18 08:50:07 media kernel: DMAR: [DMA Read] Request device [02:00.0] fault addr fd1ff000 [fault reason 06] PTE Read access is not set
May 18 08:50:07 media kernel: DMAR: DRHD: handling fault status reg 3
May 18 08:50:07 media kernel: DMAR: [DMA Read] Request device [02:00.0] fault addr ffcae000 [fault reason 06] PTE Read access is not set
May 18 08:50:07 media kernel: DMAR: DRHD: handling fault status reg 3
May 18 08:50:07 media kernel: DMAR: [DMA Read] Request device [02:00.0] fault addr fd1ff000 [fault reason 06] PTE Read access is not set
May 18 08:50:07 media kernel: DMAR: DRHD: handling fault status reg 3
May 18 08:50:12 media kernel: dmar_fault: 242 callbacks suppressed
May 18 08:50:12 media kernel: DMAR: DRHD: handling fault status reg 3
May 18 08:50:12 media kernel: DMAR: [DMA Read] Request device [02:00.0] fault addr f0d0b000 [fault reason 06] PTE Read access is not set
May 18 08:50:12 media kernel: DMAR: DRHD: handling fault status reg 3
May 18 08:50:12 media kernel: DMAR: [DMA Read] Request device [02:00.0] fault addr edd5e000 [fault reason 06] PTE Read access is not set
May 18 08:50:12 media kernel: DMAR: DRHD: handling fault status reg 3
May 18 08:50:12 media kernel: DMAR: [DMA Read] Request device [02:00.0] fault addr edd33000 [fault reason 06] PTE Read access is not set
May 18 08:50:12 media kernel: DMAR: DRHD: handling fault status reg 3
May 18 08:50:19 media kernel: dmar_fault: 467 callbacks suppressed
May 18 08:50:19 media kernel: DMAR: DRHD: handling fault status reg 3
May 18 08:50:19 media kernel: DMAR: [DMA Read] Request device [02:00.0] fault addr f0412000 [fault reason 06] PTE Read access is not set
May 18 08:50:19 media kernel: DMAR: DRHD: handling fault status reg 3
May 18 08:50:19 media kernel: DMAR: [DMA Read] Request device [02:00.0] fault addr fdde8000 [fault reason 06] PTE Read access is not set
May 18 08:50:19 media kernel: DMAR: DRHD: handling fault status reg 3
May 18 08:50:19 media kernel: DMAR: [DMA Read] Request device [02:00.0] fault addr f0b0c000 [fault reason 06] PTE Read access is not set
May 18 08:50:19 media kernel: DMAR: DRHD: handling fault status reg 3
May 18 08:50:24 media root: /etc/libvirt: 28 GiB (30063636480 bytes) trimmed on /dev/loop3
May 18 08:50:24 media root: /var/lib/docker: 12.4 GiB (13295656960 bytes) trimmed on /dev/loop2
May 18 08:50:24 media root: /mnt/cache: 400.4 GiB (429951348736 bytes) trimmed on /dev/nvme0n1p1

Is the drive going bad?
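A note on reading that log: the three "trimmed on ..." lines at the end are the verbose output of fstrim, which is what the scheduled trim job runs, and the DMAR faults begin the moment the 08:50 trim kicks off. You can reproduce this on demand by trimming manually while watching the kernel log (assuming your cache is mounted at /mnt/cache, as in the log above):

# Terminal 1: follow the kernel log, filtering for DMAR faults
dmesg --follow | grep -i DMAR

# Terminal 2: manually trim the cache mount; -v reports bytes trimmed,
# in the same format as the "trimmed on /dev/nvme0n1p1" line above
fstrim -v /mnt/cache

If the faults appear only while fstrim runs, that points at the trim/VT-d interaction rather than a failing drive.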
JorgeB Posted May 18, 2019

31 minutes ago, johnnie.black said:
"there are known kernel issues with some NVMe/trim/VT-d-enabled combinations."
JustinChase Posted May 18, 2019 (Author)

I've had that drive in the machine for over a year, and I've only gotten this message a few times in the last week. I suppose it could be a random kernel issue, as you say, but I wonder if there's any way to actually troubleshoot it, versus assuming it's a known issue and ignoring it.
JorgeB Posted May 18, 2019

29 minutes ago, JustinChase said:
"but I wonder if there's any way to actually troubleshoot it"

Not that I know of. You could get a Samsung NVMe device and those errors would be gone; otherwise, a future kernel or BIOS update might help.
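For completeness, the two workarounds commonly suggested for this class of DMAR fault (neither confirmed in this thread) are disabling VT-d in the BIOS if you don't need it for device passthrough, or booting with the IOMMU in passthrough mode. The latter is a sketch only; the file path assumes a stock Unraid flash layout, so back up before editing:

# Back up the boot config first
cp /boot/syslinux/syslinux.cfg /boot/syslinux/syslinux.cfg.bak

# Edit /boot/syslinux/syslinux.cfg and change the default kernel line
#   append initrd=/bzroot
# to
#   append iommu=pt initrd=/bzroot
# then reboot. iommu=pt keeps VT-d usable for passthrough while
# identity-mapping host-owned devices, which is often reported to
# silence "PTE Read access is not set" faults.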
CyBuzz Posted May 19, 2019

So I started getting this message once I upgraded to 6.7. I don't have an NVMe drive in my machine. This was the first thread that came up when I googled it. I will continue to dig around and come back if I find an answer.
Squid Posted May 19, 2019

2 hours ago, CyBuzz said:
"So I started getting this message once I upgraded to 6.7. I don't have an NVMe drive in my machine."

You should post your diagnostics. But sight unseen, you probably have an SSD attached to an HBA that doesn't support TRIM. Move it to a motherboard connector.
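A quick way to check whether discard/TRIM actually reaches a drive through its current controller path, before and after moving it (the device name below is an example):

# DISC-GRAN and DISC-MAX show discard granularity and the maximum
# bytes per discard request; all-zero values mean discard is not
# supported through the current path (some HBAs block it)
lsblk --discard

# Single-device view, e.g. for an SSD at /dev/sdb
lsblk --discard /dev/sdb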
ryann Posted May 28, 2019

Count me in for this issue. Attached are my diagnostics. Besides the error, everything seems normal.

nazaretianrack-diagnostics-20190528-1341.zip
pervin_1 Posted June 9, 2019

On 5/19/2019 at 11:23 AM, Squid said:
"You should post your diagnostics. But sight unseen, you probably have an SSD attached to an HBA that doesn't support TRIM. Move it to a motherboard connector."

I also received this message for the first time when I tried to TRIM the NVMe drive (an ADATA SX8200 480GB, relatively new SSD) connected directly to the motherboard's M.2 port, which I use as a cache drive. I upgraded to the latest 6.7 stable a few days ago and had never seen this message before. Diagnostics attached. Thank you!

tower-diagnostics-20190609-2320.zip
Squid Posted June 10, 2019

None of those errors are in your log, and your cache drive is being trimmed.
guru69 Posted June 25, 2019

I've recently updated to the Nvidia version of Unraid 6.7.0 and am also noticing the errors. I have four Samsung 960 Pro NVMe SSD cache drives (three on the motherboard, one on a PCIe adapter). They're in a BTRFS RAID10 pool and I have TRIM enabled.

Jun 25 10:04:32 Tank root: HDIO_DRIVE_CMD(setidle) failed: Inappropriate ioctl for device
Jun 25 10:04:32 Tank root: /dev/nvme1n1:
Jun 25 10:04:32 Tank root: setting standby to 0 (off)
Jun 25 10:04:32 Tank emhttpd: shcmd (31778): exit status: 25
Jun 25 10:04:32 Tank emhttpd: shcmd (31779): /usr/sbin/hdparm -S0 /dev/nvme2n1
Jun 25 10:04:32 Tank root: HDIO_DRIVE_CMD(setidle) failed: Inappropriate ioctl for device
Jun 25 10:04:32 Tank root: /dev/nvme2n1:
Jun 25 10:04:32 Tank root: setting standby to 0 (off)
Jun 25 10:04:32 Tank emhttpd: shcmd (31779): exit status: 25
Jun 25 10:04:32 Tank emhttpd: shcmd (31780): /usr/sbin/hdparm -S0 /dev/nvme0n1
Jun 25 10:04:32 Tank root: HDIO_DRIVE_CMD(setidle) failed: Inappropriate ioctl for device
Jun 25 10:04:32 Tank root: /dev/nvme0n1:
Jun 25 10:04:32 Tank root: setting standby to 0 (off)
Jun 25 10:04:32 Tank emhttpd: shcmd (31780): exit status: 25
Jun 25 10:04:32 Tank emhttpd: shcmd (31781): /usr/sbin/hdparm -S0 /dev/nvme3n1
Jun 25 10:04:32 Tank root: HDIO_DRIVE_CMD(setidle) failed: Inappropriate ioctl for device
Jun 25 10:04:32 Tank root: /dev/nvme3n1:
Jun 25 10:04:32 Tank root: setting standby to 0 (off)
Jun 25 10:04:32 Tank emhttpd: shcmd (31781): exit status: 25
JorgeB Posted June 25, 2019

9 minutes ago, guru69 said:
"I've recently updated to the Nvidia version of Unraid 6.7.0 and am also noticing the errors."

Those errors are different, and not trim-related:

9 minutes ago, guru69 said:
"shcmd (31781): /usr/sbin/hdparm -S0 /dev/nvme3n1"

This is the command causing the error. I don't know where it comes from, probably some plugin or script, but it looks harmless.
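To expand on why that command fails: hdparm issues ATA ioctls, which NVMe devices don't implement, hence the "Inappropriate ioctl for device" in the log. A quick way to see which block devices are NVMe versus SATA (a sketch; available columns depend on your lsblk version):

# TRAN shows the transport (sata, nvme, usb, ...); hdparm's ATA
# commands only apply to sata/ata devices, so /dev/nvme* rejects them
lsblk -d -o NAME,TRAN,ROTA,MODEL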
guru69 Posted June 26, 2019

It doesn't seem to be causing any issues. I think Unraid is trying to spin down the SSDs, though it might need a change so it excludes SSD drives from spindown.
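Agreed that spindown only makes sense for rotational disks. As a stopgap for any user script that issues hdparm -S, you could guard on the kernel's rotational flag; a minimal sketch, not Unraid's actual spindown code:

#!/bin/bash
# Apply the spindown setting only to rotational drives; SATA SSDs and
# NVMe devices report 0 in the rotational flag and are skipped
for dev in /sys/block/sd* /sys/block/nvme*; do
    [ -e "$dev/queue/rotational" ] || continue
    if [ "$(cat "$dev/queue/rotational")" = "1" ]; then
        # -S0 disables the standby timer, as in the log above;
        # substitute your preferred timeout value
        /usr/sbin/hdparm -S0 "/dev/$(basename "$dev")"
    fi
done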