Strange read errors on 4 drives



I have a bit of an oddball situation. Today I noticed that all 4 of the 6TB drives in my array are giving read errors. They all have the same number of errors, and it doesn't matter where in the chassis I move them. If I reboot Unraid, the drives are fine for a time and then they start with these errors again... always these 4 drives and always the same number of errors on each of them. The hardware that attaches these drives to my HBA is a BPN-SAS-846EL1 backplane. Any ideas or comments would be appreciated. I am kind of astonished. lol

 

EDIT:

When the 4 drives get these errors, they also somehow show up under Unassigned Devices, even though they are still sitting in the array. Very strange.
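
For anyone following along, here is roughly how I've been checking things from the console when the errors show up. This is just a sketch: /dev/sdk stands in for one of the affected 6TB drives, and it assumes smartctl is available (it ships with Unraid as part of smartmontools).

# Dump full SMART data for one of the affected 6TB drives to see whether
# there are real media errors (reallocated/pending sectors, UDMA CRC errors)
# or whether the drive itself still looks healthy:
smartctl -x /dev/sdk

# Pull the most recent kernel messages for that device out of the syslog:
grep "\[sdk\]" /var/log/syslog | tail -n 50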


Do you have the NVIDIA plugin, or NVIDIA transcoding?

Can you disable them and check?

 

I see this in the syslog:


Dec  9 01:39:30 baconator kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
Dec  9 01:39:30 baconator kernel: caller _nv000908rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs
Dec  9 01:39:40 baconator kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
Dec  9 01:39:40 baconator kernel: caller _nv000908rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs
Dec  9 01:39:50 baconator kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
Dec  9 01:39:50 baconator kernel: caller _nv000908rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs
Dec  9 01:40:00 baconator kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
Dec  9 01:40:00 baconator kernel: caller _nv000908rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs
Dec  9 01:40:10 baconator kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
Dec  9 01:40:10 baconator kernel: caller _nv000908rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs
Dec  9 01:40:20 baconator kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
Dec  9 01:40:20 baconator kernel: caller _nv000908rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs
Dec  9 01:40:30 baconator kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
Dec  9 01:40:30 baconator kernel: caller _nv000908rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs

 


I have had the Unraid Nvidia build the entire time. That most definitely isn't the issue, and you can look that error up. It's from a plugin that I just uninstalled called [PLUGIN] GPU Statistics, and perhaps from Nvidia Unraid itself.

 

It's annoying, but it's not the reason for the drives dropping.

 


I am seeing this on the drives that are not "sticking" in the array:

Dec 10 00:58:35 baconator kernel: sd 11:0:9:0: [sdk] 11721045168 512-byte logical blocks: (6.00 TB/5.46 TiB)
Dec 10 00:58:35 baconator kernel: sd 11:0:9:0: [sdk] 4096-byte physical blocks
Dec 10 00:58:35 baconator kernel: sd 11:0:9:0: [sdk] Write Protect is off
Dec 10 00:58:35 baconator kernel: sd 11:0:9:0: [sdk] Mode Sense: 7f 00 10 08
Dec 10 00:58:35 baconator kernel: sd 11:0:9:0: [sdk] Write cache: enabled, read cache: enabled, supports DPO and FUA
Dec 10 00:58:35 baconator kernel: sdk: sdk1
Dec 10 00:58:35 baconator kernel: sd 11:0:9:0: [sdk] Attached SCSI disk
Dec 10 00:59:11 baconator emhttpd: ST6000VN0033-2EE110_ZAD55NRY (sdk) 512 11721045168
Dec 10 00:59:11 baconator kernel: mdcmd (12): import 11 sdk 64 5860522532 0 ST6000VN0033-2EE110_ZAD55NRY
Dec 10 00:59:11 baconator kernel: md: import disk11: (sdk) ST6000VN0033-2EE110_ZAD55NRY size: 5860522532
Dec 10 01:28:17 baconator kernel: sd 11:0:9:0: [sdk] tag#0 CDB: opcode=0x85 85 06 20 00 d8 00 00 00 00 00 4f 00 c2 00 b0 00
Dec 10 01:28:19 baconator kernel: sd 11:0:9:0: [sdk] Synchronizing SCSI cache
Dec 10 01:28:30 baconator kernel: sd 11:0:9:0: [sdk] Synchronize Cache(10) failed: Result: hostbyte=0x01 driverbyte=0x00

This is one that isn't having any issues:


Dec 10 00:58:35 baconator kernel: sd 11:0:6:0: [sdh] 27344764928 512-byte logical blocks: (14.0 TB/12.7 TiB)
Dec 10 00:58:35 baconator kernel: sd 11:0:6:0: [sdh] 4096-byte physical blocks
Dec 10 00:58:35 baconator kernel: sd 11:0:6:0: [sdh] Write Protect is off
Dec 10 00:58:35 baconator kernel: sd 11:0:6:0: [sdh] Mode Sense: 7f 00 10 08
Dec 10 00:58:35 baconator kernel: sd 11:0:6:0: [sdh] Write cache: enabled, read cache: enabled, supports DPO and FUA
Dec 10 00:58:35 baconator kernel: sdh: sdh1
Dec 10 00:58:35 baconator kernel: sd 11:0:6:0: [sdh] Attached SCSI disk
Dec 10 00:59:11 baconator emhttpd: WDC_WD140EDFZ-11A0VA0_9RJUWYGC (sdh) 512 27344764928
Dec 10 00:59:11 baconator kernel: mdcmd (6): import 5 sdh 64 13672382412 0 WDC_WD140EDFZ-11A0VA0_9RJUWYGC
Dec 10 00:59:11 baconator kernel: md: import disk5: (sdh) WDC_WD140EDFZ-11A0VA0_9RJUWYGC size: 13672382412

It appears there is a byte mismatch?

Dec 10 01:28:19 baconator kernel: sd 11:0:9:0: [sdk] Synchronizing SCSI cache
Dec 10 01:28:30 baconator kernel: sd 11:0:9:0: [sdk] Synchronize Cache(10) failed: Result: hostbyte=0x01 driverbyte=0x00

I have no idea how to correct something like this.
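
As a sanity check on the byte-mismatch idea, here is a quick sketch (using /dev/sdk and /dev/sdh as placeholders from the logs above) that prints the logical and physical sector sizes of both drives. In the logs both already report 512-byte logical / 4096-byte physical blocks, so the sizes themselves look fine; the part that actually matters seems to be the failed Synchronize Cache, which, if I'm reading the result code right, means the device dropped off the bus (hostbyte=0x01 is DID_NO_CONNECT).

# Compare sector sizes of the "failing" drive and a healthy one:
for dev in /dev/sdk /dev/sdh; do
    echo "$dev: logical $(blockdev --getss "$dev") bytes, physical $(blockdev --getpbsz "$dev") bytes"
done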

 

 

 


I guess the backplane could be the problem, but why aren't any of the other drives having any issues at all? I have eight 14TB drives, one 12TB drive, and four 2TB drives running on this same backplane and haven't had a single error.

 

I don't know if this is important or not, but this is the error from the drive when it fails:

Dec 10 01:28:19 baconator kernel: sd 11:0:9:0: [sdk] Synchronizing SCSI cache
Dec 10 01:28:30 baconator kernel: sd 11:0:9:0: [sdk] Synchronize Cache(10) failed: Result: hostbyte=0x01 driverbyte=0x00
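
Here is a rough sketch for correlating these drops across the four drives, assuming the default /var/log/syslog location (the awk field numbers match the kernel lines quoted above):

# Count Synchronize Cache failures per device name ([sdk], [sdl], ...):
grep "Synchronize Cache(10) failed" /var/log/syslog | awk '{print $8}' | sort | uniq -c

# Print the timestamps to see whether all four 6TB drives drop at the same moment,
# which would point at the shared expander/backplane path rather than the drives:
grep "Synchronize Cache(10) failed" /var/log/syslog | awk '{print $1, $2, $3, $8}'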

Here is the array:

[screenshot: array view showing the four 6TB drives]

And here are the 6TB drives, somehow simultaneously showing up in the Unassigned Devices area as well. It takes 10-30 minutes for them to appear there, possibly after the error I posted above shows up:

[screenshot: Unassigned Devices showing the same four 6TB drives]

 

I've also moved these 6TB drives to numerous other locations in the chassis, and they all still produce the same errors.

 

EDIT: I am trying a parity check to see if it may have something to do with that sync error. It's been running for 50 minutes and the drives are all still in the array, so maybe that's good news...


Thank you for the reply. They weren't being removed or anything; this just happened while the drives were in the chassis. They would be in both the array and Unassigned Devices at the same time as well.

 

The one odd thing I can't shake is why it is only these 4 drives.

 

I am going to try to rebuild them from parity one at a time to see if that helps.

 

I have found that putting these drives in an external eSATA enclosure allows them to work without these strange errors.
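
While they sit in the enclosure, this is a sketch of how I'd confirm the drives themselves are healthy (smartctl assumed available, /dev/sdX is a placeholder, and the extended test takes several hours on a 6TB drive):

# Start an extended (long) SMART self-test on one of the 6TB drives:
smartctl -t long /dev/sdX

# Once it finishes, check the self-test log and the key attributes:
smartctl -l selftest /dev/sdX
smartctl -A /dev/sdX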

