danktankk Posted December 9, 2020 (edited)
I have a bit of an oddball situation. Today I noticed that all 4 of the 6TB drives in my array are giving read errors. They all show the same number of errors, and it doesn't matter where in the chassis I move them. If I reboot Unraid, the drives are fine for a while and then they start with these errors again... always these same 4 drives, and always the same error count on each of them. The hardware that attaches these drives to my HBA is a BPN-SAS-846EL1 backplane. Any ideas or comments would be appreciated. I am kind of astonished. lol
EDIT: when the 4 drives get these errors, they are then somehow also shown under Unassigned Devices, even though they are still sitting in the array. Very strange.
Edited December 10, 2020 by danktankk
trurl Posted December 9, 2020
If possible before rebooting, and preferably with the array started, go to Tools - Diagnostics and attach the complete diagnostics ZIP file to your next post in this thread.
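If the webGUI becomes unresponsive before you can get there, the same ZIP can usually be produced from a console or SSH session instead; a minimal sketch, assuming a stock Unraid shell where the built-in diagnostics command is available:

# writes <hostname>-diagnostics-<date>.zip to the logs folder on the flash drive (/boot/logs)
diagnostics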
danktankk Posted December 9, 2020 Author (edited)
I am currently running an extended self-test, but have added the requested files. Thank you for the effort. Let me know if you need anything else.
baconator-diagnostics-20201209-0153.zip
Edited December 9, 2020 by danktankk (added diagnostics)
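For anyone following along, the extended self-test can also be started and monitored from the console with smartmontools; a hedged sketch, assuming the drive in question is currently enumerated as /dev/sdk:

# start the long (extended) SMART self-test; it runs in the drive's firmware in the background
smartctl -t long /dev/sdk
# check progress, and the self-test log once it completes
smartctl -a /dev/sdk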
danktankk Posted December 10, 2020 Author (edited)
Here is a new diagnostics file, along with some images of these same 4 hard drives being in 2 places at the same time in the Unraid GUI:
array:
unassigned devices:
I have no idea why this is happening, or why it is only these 4 drives. *Any* help would be very appreciated!
baconator-diagnostics-20201209-2336.zip
Edited December 10, 2020 by danktankk
Deen Posted December 10, 2020
Do you have the NVIDIA plugin, or NVIDIA transcoding? Can you disable them and check? I see this repeating in the syslog:
Dec 9 01:39:30 baconator kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
Dec 9 01:39:30 baconator kernel: caller _nv000908rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs
Dec 9 01:39:40 baconator kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
Dec 9 01:39:40 baconator kernel: caller _nv000908rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs
Dec 9 01:39:50 baconator kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
Dec 9 01:39:50 baconator kernel: caller _nv000908rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs
Dec 9 01:40:00 baconator kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
Dec 9 01:40:00 baconator kernel: caller _nv000908rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs
Dec 9 01:40:10 baconator kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
Dec 9 01:40:10 baconator kernel: caller _nv000908rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs
Dec 9 01:40:20 baconator kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
Dec 9 01:40:20 baconator kernel: caller _nv000908rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs
Dec 9 01:40:30 baconator kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
Dec 9 01:40:30 baconator kernel: caller _nv000908rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs
danktankk Posted December 10, 2020 Author (edited)
I have had the Unraid Nvidia build the entire time. That most definitely isn't the issue, and you can look that error up: it's from a plugin I just uninstalled called [PLUGIN] GPU Statistics, and perhaps from Nvidia Unraid itself. It's annoying, but not the reason for drives dropping.
Edited December 10, 2020 by danktankk
danktankk Posted December 10, 2020 Author
I am seeing this on the drives that are not "sticking" to the array:
Dec 10 00:58:35 baconator kernel: sd 11:0:9:0: [sdk] 11721045168 512-byte logical blocks: (6.00 TB/5.46 TiB)
Dec 10 00:58:35 baconator kernel: sd 11:0:9:0: [sdk] 4096-byte physical blocks
Dec 10 00:58:35 baconator kernel: sd 11:0:9:0: [sdk] Write Protect is off
Dec 10 00:58:35 baconator kernel: sd 11:0:9:0: [sdk] Mode Sense: 7f 00 10 08
Dec 10 00:58:35 baconator kernel: sd 11:0:9:0: [sdk] Write cache: enabled, read cache: enabled, supports DPO and FUA
Dec 10 00:58:35 baconator kernel: sdk: sdk1
Dec 10 00:58:35 baconator kernel: sd 11:0:9:0: [sdk] Attached SCSI disk
Dec 10 00:59:11 baconator emhttpd: ST6000VN0033-2EE110_ZAD55NRY (sdk) 512 11721045168
Dec 10 00:59:11 baconator kernel: mdcmd (12): import 11 sdk 64 5860522532 0 ST6000VN0033-2EE110_ZAD55NRY
Dec 10 00:59:11 baconator kernel: md: import disk11: (sdk) ST6000VN0033-2EE110_ZAD55NRY size: 5860522532
Dec 10 01:28:17 baconator kernel: sd 11:0:9:0: [sdk] tag#0 CDB: opcode=0x85 85 06 20 00 d8 00 00 00 00 00 4f 00 c2 00 b0 00
Dec 10 01:28:19 baconator kernel: sd 11:0:9:0: [sdk] Synchronizing SCSI cache
Dec 10 01:28:30 baconator kernel: sd 11:0:9:0: [sdk] Synchronize Cache(10) failed: Result: hostbyte=0x01 driverbyte=0x00
This is one that isn't having any issues:
Dec 10 00:58:35 baconator kernel: sd 11:0:6:0: [sdh] 27344764928 512-byte logical blocks: (14.0 TB/12.7 TiB)
Dec 10 00:58:35 baconator kernel: sd 11:0:6:0: [sdh] 4096-byte physical blocks
Dec 10 00:58:35 baconator kernel: sd 11:0:6:0: [sdh] Write Protect is off
Dec 10 00:58:35 baconator kernel: sd 11:0:6:0: [sdh] Mode Sense: 7f 00 10 08
Dec 10 00:58:35 baconator kernel: sd 11:0:6:0: [sdh] Write cache: enabled, read cache: enabled, supports DPO and FUA
Dec 10 00:58:35 baconator kernel: sdh: sdh1
Dec 10 00:58:35 baconator kernel: sd 11:0:6:0: [sdh] Attached SCSI disk
Dec 10 00:59:11 baconator emhttpd: WDC_WD140EDFZ-11A0VA0_9RJUWYGC (sdh) 512 27344764928
Dec 10 00:59:11 baconator kernel: mdcmd (6): import 5 sdh 64 13672382412 0 WDC_WD140EDFZ-11A0VA0_9RJUWYGC
Dec 10 00:59:11 baconator kernel: md: import disk5: (sdh) WDC_WD140EDFZ-11A0VA0_9RJUWYGC size: 13672382412
It appears there is a byte mismatch?
Dec 10 01:28:19 baconator kernel: sd 11:0:9:0: [sdk] Synchronizing SCSI cache
Dec 10 01:28:30 baconator kernel: sd 11:0:9:0: [sdk] Synchronize Cache(10) failed: Result: hostbyte=0x01 driverbyte=0x00
I have no idea how to correct something like this.
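Two notes for anyone digging through the full syslog. First, one drive's history can be isolated by filtering on its SCSI address or device node; a minimal sketch, assuming the drive is still enumerated as sdk at SCSI address 11:0:9:0 as in the excerpt above:

# pull every kernel message for this drive out of the live syslog
grep -E 'sd 11:0:9:0|\[sdk\]' /var/log/syslog

Second, hostbyte=0x01 is not a byte mismatch between the drives: in the Linux SCSI layer that host status code is DID_NO_CONNECT, meaning the HBA could no longer reach the device when it tried to flush its cache, i.e. the drive had already dropped off the bus at that point.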
JorgeB Posted December 10, 2020
1 hour ago, danktankk said: It's annoying, but not the reason for drives dropping.
It also makes looking at the diags much more difficult; remove that and reboot, then post new clean diags showing just the disk errors.
danktankk Posted December 10, 2020 Author (edited)
I have removed it. The error is still there though, so it must be from the lsio Nvidia Unraid build itself.
baconator-diagnostics-20201210-1213.zip
Edited December 10, 2020 by danktankk
Deen Posted December 10, 2020
Can you take one cable and connect a drive directly to your motherboard as a test? Maybe the BPN-SAS-846EL1 is the problem? Are all the disks connected to the BPN-SAS-846EL1?
danktankk Posted December 10, 2020 Author (edited)
I guess the backplane could be the problem, but then why don't any of the other drives have issues at all? I have 8 14TB drives, 1 12TB drive, and 4 2TB drives running on this same backplane and haven't had a single error.
I don't know if this is important or not, but this is the error from a drive when it fails:
Dec 10 01:28:19 baconator kernel: sd 11:0:9:0: [sdk] Synchronizing SCSI cache
Dec 10 01:28:30 baconator kernel: sd 11:0:9:0: [sdk] Synchronize Cache(10) failed: Result: hostbyte=0x01 driverbyte=0x00
Here is the array:
And here are the 6TB drives also somehow simultaneously showing in the unassigned devices area as well. It takes 10-30 minutes for them to show up in unassigned, possibly after the error I posted above appears:
I've also moved these 6TB drives to numerous other locations in the chassis, and they all still produce the same errors.
EDIT: I am trying a parity check to see if that may have something to do with that sync error. It's been running for 50 minutes and the drives are all still in the array, so maybe that's good news...
Edited December 10, 2020 by danktankk
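As an aside, the parity check's progress can also be watched from the console; a hedged sketch, assuming the stock Unraid mdcmd helper and its status output are available on this build:

# mdResync / mdResyncPos report the size and current position of the running check
mdcmd status | grep -E 'mdResync'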
danktankk Posted December 11, 2020 Author
No luck there either. I have uploaded the several requested diagnostic files, but no feedback on them yet.
itimpi Posted December 11, 2020
If a drive that should be in the array subsequently appears under Unassigned Devices, it means the drive dropped offline for some reason and then came back online with a different device identifier. Unraid is not hot-swap aware, so it does not handle this happening.
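One way to see that re-enumeration from the console: the kernel hands out sdX names in discovery order, while the /dev/disk/by-id/ names are built from the model and serial and so survive a drop and re-attach; a sketch using the Seagate serial from the log excerpts earlier in this thread:

# the symlink target shows which sdX node the drive currently holds
ls -l /dev/disk/by-id/ | grep ZAD55NRY

If the symlink now points at a different sdX than the one Unraid imported at boot, the drive dropped offline and came back under a new node, which is exactly why it shows up under Unassigned Devices while its old slot still sits in the array.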
danktankk Posted December 13, 2020 Author
Thank you for the reply. They weren't being removed or anything; this just happened while the drives were in the chassis. They would be in both the array and Unassigned Devices at the same time as well. The one odd thing I can't shake is why it is only these 4 drives. I am going to try to rebuild them from parity one at a time to see if that helps. I have found that putting these drives in an external eSATA enclosure allows them to work without these strange errors.