JustinChase

HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device


Looks related to the NVMe device being trimmed; there are known kernel issues with some NVMe/TRIM/VT-d-enabled combinations.


That doesn't sound good.  How might I try to identify and then fix these issues?

 

I was looking at the SSD trim application, and I think it was set to disabled.  I just changed it to daily at 8:50 after posting, as a test.

 

I just looked at the log and see these as the last items in there...

 

May 18 08:47:12 media apcupsd[26964]: apcupsd 3.14.14 (31 May 2016) slackware startup succeeded
May 18 08:47:12 media apcupsd[26964]: NIS server startup succeeded
May 18 08:50:07 media kernel: DMAR: DRHD: handling fault status reg 3
May 18 08:50:07 media kernel: DMAR: [DMA Read] Request device [02:00.0] fault addr fd1ff000 [fault reason 06] PTE Read access is not set
May 18 08:50:07 media kernel: DMAR: DRHD: handling fault status reg 3
May 18 08:50:07 media kernel: DMAR: [DMA Read] Request device [02:00.0] fault addr ffcae000 [fault reason 06] PTE Read access is not set
May 18 08:50:07 media kernel: DMAR: DRHD: handling fault status reg 3
May 18 08:50:07 media kernel: DMAR: [DMA Read] Request device [02:00.0] fault addr fd1ff000 [fault reason 06] PTE Read access is not set
May 18 08:50:07 media kernel: DMAR: DRHD: handling fault status reg 3
May 18 08:50:12 media kernel: dmar_fault: 242 callbacks suppressed
May 18 08:50:12 media kernel: DMAR: DRHD: handling fault status reg 3
May 18 08:50:12 media kernel: DMAR: [DMA Read] Request device [02:00.0] fault addr f0d0b000 [fault reason 06] PTE Read access is not set
May 18 08:50:12 media kernel: DMAR: DRHD: handling fault status reg 3
May 18 08:50:12 media kernel: DMAR: [DMA Read] Request device [02:00.0] fault addr edd5e000 [fault reason 06] PTE Read access is not set
May 18 08:50:12 media kernel: DMAR: DRHD: handling fault status reg 3
May 18 08:50:12 media kernel: DMAR: [DMA Read] Request device [02:00.0] fault addr edd33000 [fault reason 06] PTE Read access is not set
May 18 08:50:12 media kernel: DMAR: DRHD: handling fault status reg 3
May 18 08:50:19 media kernel: dmar_fault: 467 callbacks suppressed
May 18 08:50:19 media kernel: DMAR: DRHD: handling fault status reg 3
May 18 08:50:19 media kernel: DMAR: [DMA Read] Request device [02:00.0] fault addr f0412000 [fault reason 06] PTE Read access is not set
May 18 08:50:19 media kernel: DMAR: DRHD: handling fault status reg 3
May 18 08:50:19 media kernel: DMAR: [DMA Read] Request device [02:00.0] fault addr fdde8000 [fault reason 06] PTE Read access is not set
May 18 08:50:19 media kernel: DMAR: DRHD: handling fault status reg 3
May 18 08:50:19 media kernel: DMAR: [DMA Read] Request device [02:00.0] fault addr f0b0c000 [fault reason 06] PTE Read access is not set
May 18 08:50:19 media kernel: DMAR: DRHD: handling fault status reg 3
May 18 08:50:24 media root: /etc/libvirt: 28 GiB (30063636480 bytes) trimmed on /dev/loop3
May 18 08:50:24 media root: /var/lib/docker: 12.4 GiB (13295656960 bytes) trimmed on /dev/loop2
May 18 08:50:24 media root: /mnt/cache: 400.4 GiB (429951348736 bytes) trimmed on /dev/nvme0n1p1

 

Is the drive going bad?
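The DMAR lines above name the faulting PCI device ([02:00.0]), which you can map back to actual hardware to confirm whether the NVMe controller is the one triggering the faults. A hypothetical troubleshooting sketch (the sample log line is copied from the syslog above; `lspci` is part of pciutils):

```shell
# Sample DMAR fault line, copied from the syslog above:
log='May 18 08:50:07 media kernel: DMAR: [DMA Read] Request device [02:00.0] fault addr fd1ff000 [fault reason 06] PTE Read access is not set'

# Extract the bus:device.function between the brackets after "Request device".
addr=$(printf '%s\n' "$log" | sed -n 's/.*Request device \[\([0-9a-f:.]*\)\].*/\1/p')
echo "$addr"

# On the live system you would then run:
#   lspci -s "$addr"
# to confirm whether 02:00.0 is the NVMe controller.
```

If `lspci` shows the NVMe controller at that address, the faults line up with the drive being trimmed rather than with some other device.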


I've had that drive in the machine for over a year, and I've gotten this message a few times in the last week.  I suppose that could be a random kernel issue, as you say, but I wonder if there's any way to actually troubleshoot it, versus assuming it's a known issue and ignoring it.

29 minutes ago, JustinChase said:

but I wonder if there's any way to actually troubleshoot it,

Not that I know of.  You could get a Samsung NVMe device and those errors would be gone; otherwise, a future kernel/BIOS update might help.


So I started getting this message once I upgraded to 6.7.  I don't have an NVMe drive in my machine.  This was the first thread when I googled it.  I will continue to dig around and come back if I find an answer.

2 hours ago, CyBuzz said:

So I started getting this message once I upgraded to 6.7.  I don't have an NVMe drive in my machine.  This was the first thread when I googled it.  I will continue to dig around and come back if I find an answer.

You should post your diagnostics.  But, sight unseen, you probably have an SSD attached to an HBA that doesn't support TRIM.  Move it to a motherboard connector.
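One way to check whether discard (TRIM) actually passes through to a device is `lsblk -D` (or `--discard`): a DISC-MAX of 0B means the kernel sees no usable discard path, which is what you'd expect behind an HBA that drops the command. A sketch using canned sample output (the device names and sizes here are illustrative, not from this thread's diagnostics):

```shell
# Canned example of `lsblk -D` output, trimmed to three columns.
# DISC-MAX of 0B means the device/path does not support discard (TRIM).
sample='NAME DISC-GRAN DISC-MAX
sda          0B        0B
nvme0n1    512B        2T'

# List devices with no usable TRIM:
printf '%s\n' "$sample" | awk 'NR>1 && $3=="0B" {print $1}'
```

On a live system you would run `lsblk -D` directly and look at the DISC-GRAN/DISC-MAX columns for your cache drive.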

On 5/19/2019 at 11:23 AM, Squid said:

You should post your diagnostics.  But, sight unseen you probably have an SSD attached to an HBA that isn't supporting TRIM.  Move it to a mobo connector

I also received this message for the first time when I tried to TRIM the NVMe drive (ADATA SX8200 480GB, a relatively new SSD) connected directly to the motherboard's M.2 slot, used as a cache drive.  I upgraded to the latest 6.7 Stable a few days ago and had never seen this message before.  Diagnostics attached.  Thank you!

tower-diagnostics-20190609-2320.zip


None of those errors are in your log, and your cache drive is being trimmed.


I've recently updated to the Nvidia version of Unraid 6.7.0 and am also noticing the errors.  I have 4 Samsung 960 Pro NVMe SSD cache drives (3 on the motherboard, one on a PCIe adapter).  They are BTRFS/RAID10 and I have TRIM enabled.

 

Jun 25 10:04:32 Tank root:  HDIO_DRIVE_CMD(setidle) failed: Inappropriate ioctl for device
Jun 25 10:04:32 Tank root: /dev/nvme1n1:
Jun 25 10:04:32 Tank root:  setting standby to 0 (off)
Jun 25 10:04:32 Tank emhttpd: shcmd (31778): exit status: 25
Jun 25 10:04:32 Tank emhttpd: shcmd (31779): /usr/sbin/hdparm -S0 /dev/nvme2n1
Jun 25 10:04:32 Tank root:  HDIO_DRIVE_CMD(setidle) failed: Inappropriate ioctl for device
Jun 25 10:04:32 Tank root: /dev/nvme2n1:
Jun 25 10:04:32 Tank root:  setting standby to 0 (off)
Jun 25 10:04:32 Tank emhttpd: shcmd (31779): exit status: 25
Jun 25 10:04:32 Tank emhttpd: shcmd (31780): /usr/sbin/hdparm -S0 /dev/nvme0n1
Jun 25 10:04:32 Tank root:  HDIO_DRIVE_CMD(setidle) failed: Inappropriate ioctl for device
Jun 25 10:04:32 Tank root: /dev/nvme0n1:
Jun 25 10:04:32 Tank root:  setting standby to 0 (off)
Jun 25 10:04:32 Tank emhttpd: shcmd (31780): exit status: 25
Jun 25 10:04:32 Tank emhttpd: shcmd (31781): /usr/sbin/hdparm -S0 /dev/nvme3n1
Jun 25 10:04:32 Tank root:  HDIO_DRIVE_CMD(setidle) failed: Inappropriate ioctl for device
Jun 25 10:04:32 Tank root: /dev/nvme3n1:
Jun 25 10:04:32 Tank root:  setting standby to 0 (off)
Jun 25 10:04:32 Tank emhttpd: shcmd (31781): exit status: 25

9 minutes ago, guru69 said:

I've recently updated to the Nvidia version of Unraid 6.7.0 and am also noticing the errors.

Those errors are different, and not TRIM-related:

9 minutes ago, guru69 said:

shcmd (31781): /usr/sbin/hdparm -S0 /dev/nvme3n1

This is the command causing the error.  I don't know where it comes from, probably some plugin or script, but it looks harmless.

Edited by johnnie.black


It doesn't seem to be causing any issues.  I think Unraid is trying to spin down the SSDs; it might need a change in Unraid so that it excludes SSD drives from spin-down.
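The exit status 25 in the log comes from `hdparm -S0` issuing an ATA-only ioctl (HDIO_DRIVE_CMD) against an NVMe device, which doesn't speak that interface. A minimal sketch of the kind of guard a script could use before issuing the spin-down command (the `is_nvme` helper and device list are hypothetical, not part of Unraid):

```shell
# Hypothetical guard: skip hdparm spin-down for NVMe devices, since the
# HDIO_DRIVE_CMD ioctls it uses only apply to ATA drives.
is_nvme() {
  case "$1" in
    /dev/nvme*) return 0 ;;
    *) return 1 ;;
  esac
}

for dev in /dev/nvme0n1 /dev/sda; do   # example device list
  if is_nvme "$dev"; then
    echo "skip spindown: $dev"
  else
    echo "would run: /usr/sbin/hdparm -S0 $dev"
  fi
done
```

With a check like this, the "Inappropriate ioctl for device" lines would never be logged for the NVMe drives in the first place.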

