blk_update_request: I/O error, dev sdr, sector 8594020512 op 0x0:(READ)



OK, let me iterate over all the debugging steps I have taken. I have been running Unraid since 2012, so I've run into a lot of problems before, but I can't shake this one.

 

Problem

I had a disk fail a while ago, and I don't remember the reason. I restarted Unraid and followed the steps to rebuild the disk onto itself from parity, and all was well. Everything is green across the board, but if I stop the array I get the following error in the logs when it tries to unmount:

Dec 8 09:36:35 Ares emhttpd: Unmounting disks...
Dec 8 09:36:35 Ares emhttpd: shcmd (797811): umount /mnt/disk1
Dec 8 09:36:35 Ares kernel: XFS (md1): Unmounting Filesystem
Dec 8 09:36:35 Ares kernel: sd 9:0:1:0: [sdr] tag#1009 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00 cmd_age=0s
Dec 8 09:36:35 Ares kernel: sd 9:0:1:0: [sdr] tag#1009 CDB: opcode=0x88 88 00 00 00 00 02 00 3e 58 a0 00 00 00 08 00 00
Dec 8 09:36:35 Ares kernel: blk_update_request: I/O error, dev sdr, sector 8594020512 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Dec 8 09:36:35 Ares kernel: md: disk3 read error, sector=8594020448
Dec 8 09:37:00 Ares kernel: sd 9:0:1:0: [sdr] tag#1011 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00 cmd_age=0s
Dec 8 09:37:00 Ares kernel: sd 9:0:1:0: [sdr] tag#1011 CDB: opcode=0x8a 8a 00 00 00 00 02 00 3e 58 a0 00 00 00 08 00 00
Dec 8 09:37:00 Ares kernel: blk_update_request: I/O error, dev sdr, sector 8594020512 op 0x1:(WRITE) flags 0x0 phys_seg 1 prio class 0
Dec 8 09:37:00 Ares kernel: md: disk3 write error, sector=8594020448
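For anyone reading along, the CDB bytes in those log lines can be decoded to confirm that the failed read and the later write retry both hit exactly the sector that blk_update_request reports. This is just an illustrative sketch (not from the original post): bytes 2-9 of a 16-byte READ(16)/WRITE(16) CDB hold the big-endian LBA.

```python
# Decode the LBA out of the READ(16)/WRITE(16) CDBs from the kernel log.
# Opcode 0x88 is READ(16), 0x8a is WRITE(16); bytes 2..9 are the 64-bit
# big-endian logical block address.
read_cdb  = "88 00 00 00 00 02 00 3e 58 a0 00 00 00 08 00 00"
write_cdb = "8a 00 00 00 00 02 00 3e 58 a0 00 00 00 08 00 00"

def cdb_lba(cdb: str) -> int:
    """Return the LBA encoded in a 16-byte CDB given as space-separated hex."""
    b = bytes.fromhex(cdb.replace(" ", ""))
    return int.from_bytes(b[2:10], "big")

print(cdb_lba(read_cdb))   # 8594020512 - matches the blk_update_request sector
print(cdb_lba(write_cdb))  # 8594020512 - the write retry hits the same sector

# md reports sector 8594020448 = 8594020512 - 64; the md device sits on the
# partition, which (I believe) starts 64 sectors into the raw disk on Unraid,
# so both messages point at the same physical spot.
```

So it's one and the same sector failing on both the read and the write path, which fits a device-level dropout rather than a filesystem problem.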

 

So I swapped the disk with another 8TB and everything was good for a while, until it happened again. Thinking this was a problem with the controller, I swapped a known-good disk into the bad slot and put the 8TB in the known-good slot. This gave the same result.

 

I then swapped a 10TB Red into the bad slot. This rebuilt from parity just fine, but after the array stopped, a DIFFERENT 8TB Seagate errored.

 

I then read some posts on the forums about bad cables etc., so I took every cable out and dusted every connection from the disk -> backplane -> controller. I let it rebuild the drive from parity, and of course it failed after another array stop.

 

Any ideas? I'm really struggling with this one.

 

ares-diagnostics-20211208-0955.zip

04:00.0 RAID bus controller [0104]: Marvell Technology Group Ltd. MV64460/64461/64462 System Controller, Revision B [11ab:6485] (rev 01)

 

You've got two Marvell controllers. Marvell is hit and miss since their drivers are rather terrible, and random drop-outs are one of their known problems. An LSI-based controller is recommended.
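(As an aside, it's the `[11ab:6485]` vendor:device pair in the quoted lspci line that identifies the chip, regardless of the descriptive string: PCI vendor ID 0x11ab is Marvell. A quick illustrative sketch, not part of the original reply, for picking that ID out of `lspci -nn`-style output:)

```python
import re

# Sample line quoted above, as printed by `lspci -nn`.
LSPCI_LINE = ("04:00.0 RAID bus controller [0104]: Marvell Technology Group Ltd. "
              "MV64460/64461/64462 System Controller, Revision B [11ab:6485] (rev 01)")

def pci_vendor(line: str) -> str:
    """Return the 4-hex-digit PCI vendor ID from an `lspci -nn` line.

    The class code appears as a lone [xxxx] bracket, so we match only the
    [vendor:device] pair.
    """
    m = re.search(r"\[([0-9a-f]{4}):[0-9a-f]{4}\]", line)
    return m.group(1) if m else ""

print(pci_vendor(LSPCI_LINE))  # 11ab, i.e. a Marvell-based controller
```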

 

Disabling the IOMMU in the BIOS might help, but if your VM uses passthrough then that's not an option.

 

 


@Squid I went ahead and purchased two LSI controllers to swap out the Marvell ones, just so I'm on supported hardware. I remember when I grabbed these back in the day, they were the go-to controllers.

 

@JorgeB Thanks for that link. I had already found it, but since my drives only go into an error state when the array is stopped, I didn't think it applied. Since I am moving to LSI controllers, I'll make sure to run that on all my Seagate drives before I migrate.
