
Unable to use more than one disk in Netapp DS4243


theknat


I've recently purchased a Netapp DS4243 disk shelf along with an HP SAS9207-8e HBA and an SFF-8436 to SFF-8088 cable to connect the two. When I added the HBA, it shows up in Unraid as this:

IOMMU group 15
	[1000:0087] 05:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
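A quick way to double-check which kernel driver has claimed the card and what firmware banner it prints is below; a minimal sketch, noting that the driver is mpt2sas on the kernel in use here but mpt3sas on newer ones:

lspci -nnk -d 1000:0087              # filter on the [1000:0087] vendor:device ID shown above
dmesg | grep -iE 'mpt2sas|mpt3sas'   # driver firmware banner and any early errors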

I attached the DS4243 to the system and it showed up under the system devices screen OK as well. I added a single new drive to the shelf, which appeared in Unraid, and I was able to start a disk replacement of one of my existing disks. So far so good.

 

A few hours into the rebuild, I added another drive to the Netapp in preparation, and at that point neither of the drives in the enclosure was available in Unraid any more. After a few more hours of testing, I've come to the conclusion that any time there is more than a single drive in the enclosure, none of them are available. Both of the drives are still listed as SCSI devices:

[1:0:0:0]    disk    ST6000VN 0041-2EL11C   SM 4321  /dev/sdg   6.00TB
[1:0:1:0]    disk    ST2000VN 004-2E4164    SM 4321  /dev/sdh   2.00TB
[1:0:2:0]    enclosu NETAPP   DS424IOM3        0172  -               -

But neither are available when modifying the array status. I've tried this with a number of different drives and the result always appears to be the same.
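One place to start digging is the SAS layer itself: lsscsi can also show transport addresses and sg devices, and sg_ses (from sg3_utils) can query the DS4243's enclosure processor directly. This is only a sketch, assuming sg3_utils is installed and using /dev/sg3 purely as a placeholder for whatever sg device lsscsi -g reports for the "enclosu" entry:

lsscsi -tg                 # devices with SAS addresses and their /dev/sg nodes
sg_ses /dev/sg3            # diagnostic pages the shelf IOM supports
sg_ses --page=2 /dev/sg3   # enclosure status page, including per-slot element status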

 

Anyone got any ideas where I can start trying to work through this?

holodeck-diagnostics-20180308-1809.zip

Link to comment
19 minutes ago, theknat said:

A few hours into the rebuild, I added another drive into the netapp

Not a good idea to change any hardware during a rebuild, even if hotswap is supported.

 

20 minutes ago, theknat said:

neither are available when modifying the array status

Not entirely sure what you mean by "modifying the array status". Normally "array status" is just whether or not the array is started. Do you mean the disk assignments?

 

Don't know much about that particular hardware, but unRAID is not RAID so you might have to do something to make the controller JBOD.

Link to comment
1 minute ago, trurl said:

Not a good idea to change any hardware during a rebuild, even if hotswap is supported.

Yeah, I have realised that now, but I don't think that's the root cause of the issue.

 

1 minute ago, trurl said:

 

Not entirely sure what you mean by "modifying the array status". Normally "array status" is just whether or not the array is started. Do you mean the disk assignments?

Yes, I mean the disk assignments. When trying to modify the disk assignments, the disks don't show up when more than one drive is in the enclosure.

 

1 minute ago, trurl said:

 

Don't know much about that particular hardware, but unRAID is not RAID so you might have to do something to make the controller JBOD.

As far as I am aware, the controller should be supported as it's just a 9207-16e, and all the drives do show up in Unraid. There's no RAID configuration from what I can tell, and I didn't have to set anything to get the single disk working OK; I just plugged it in and away it went.
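One way to back that up is to check which firmware personality the SAS2308 is running; LSI's sas2flash utility reports IT (plain HBA/JBOD) versus IR (integrated RAID) firmware. A sketch only, and it normally has to be run from the console or a boot stick rather than from the webGUI:

sas2flash -listall      # lists every SAS2 controller found, with firmware version
sas2flash -c 0 -list    # full details for controller 0, including the firmware product ID (IT vs IR)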

 

Thanks for the assistance, it's appreciated.

Link to comment
5 minutes ago, johnnie.black said:

Do you have another 2TB or smaller disk? If that is a SAS1 enclosure there could be issues when using one or more >2TB disks.

I'm not actually sure if I tried it with only 2TB drives in it together. I'm just waiting for the rebuild of the array to finish, then I'll try moving one of the other 2TB drives to it.

 

I thought I had read of others using larger drives in this enclosure, but at this point I'm happy to try anything.
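If it does turn out to be a SAS1 limitation, one way to confirm it from the console is to look at the negotiated link rates the kernel exposes; phys behind a SAS1-era expander will sit at 3.0 Gbit even though the 9207 itself is a 6Gb/s card. A sketch, since the phy names depend on the HBA and how the shelf is cabled:

grep . /sys/class/sas_phy/*/negotiated_linkrate   # one line per phy, e.g. 3.0 Gbit or 6.0 Gbit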

Link to comment

Looking at the diags, the 2TB disk dropped offline after an HBA host reset failure, so there might be other issues there:

 

Quote

Mar  8 17:58:12 Holodeck kernel: mpt2sas_cm0: sending port enable !!
Mar  8 17:58:12 Holodeck kernel: igb 0000:07:00.0: removed PHC on eth3
Mar  8 17:58:12 Holodeck kernel: igb 0000:06:00.0: removed PHC on eth2
Mar  8 17:58:12 Holodeck kernel: igb 0000:04:00.0: removed PHC on eth1
Mar  8 17:58:12 Holodeck kernel: igb 0000:03:00.0: removed PHC on eth0
Mar  8 18:02:06 Holodeck kernel: mpt2sas_cm0: _base_send_port_enable: timeout
Mar  8 18:02:06 Holodeck kernel: mf:
Mar  8 18:02:06 Holodeck kernel:     
Mar  8 18:02:06 Holodeck kernel: 06000000
Mar  8 18:02:06 Holodeck kernel: 00000000
Mar  8 18:02:06 Holodeck kernel: 00000000
Mar  8 18:02:06 Holodeck kernel:
Mar  8 18:02:06 Holodeck kernel: mpt2sas_cm0: port enable: FAILED
Mar  8 18:02:06 Holodeck kernel: mpt2sas_cm0: host reset: FAILED scmd(ffff88040d5b4f00)

Mar  8 18:02:06 Holodeck kernel: sd 1:0:0:0: Device offlined - not ready after error recovery
Mar  8 18:02:06 Holodeck kernel: sd 1:0:0:0: [sdg] Write Protect is off
Mar  8 18:02:06 Holodeck kernel: sd 1:0:0:0: [sdg] Mode Sense: 98 00 00 08
Mar  8 18:02:06 Holodeck kernel: sd 1:0:0:0: rejecting I/O to offline device
Mar  8 18:02:06 Holodeck kernel: sd 1:0:0:0: [sdg] Asking for cache data failed
Mar  8 18:02:06 Holodeck kernel: sd 1:0:0:0: [sdg] Assuming drive cache: write through
Mar  8 18:02:06 Holodeck kernel: sd 1:0:0:0: rejecting I/O to offline device
Mar  8 18:02:06 Holodeck kernel: sd 1:0:0:0: rejecting I/O to offline device
Mar  8 18:02:06 Holodeck kernel: sd 1:0:0:0: rejecting I/O to offline device
Mar  8 18:02:06 Holodeck kernel: sd 1:0:0:0: [sdg] Attached SCSI disk
Mar  8 18:02:06 Holodeck kernel: sd 1:0:1:0: [sdh] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x01 driverbyte=0x00
Mar  8 18:02:06 Holodeck kernel: sd 1:0:1:0: [sdh] tag#0 CDB: opcode=0x28 28 00 e8 e0 88 00 00 00 08 00
Mar  8 18:02:06 Holodeck kernel: blk_update_request: I/O error, dev sdh, sector 3907028992
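For anyone retracing this against their own diagnostics, the lines above can be pulled straight out of the zip; a sketch, with the member name matched by wildcard since the exact folder layout inside the archive isn't shown here:

unzip -p holodeck-diagnostics-20180308-1809.zip '*syslog*' | grep -E 'mpt2sas|sd 1:0:'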

 

Link to comment

I've had nothing but issues with HP RAID cards (excluding the H220) on unRAID. Granted, this was back on 6.2.4, but I don't know if they've gotten better or not with software updates.

 

You can try to make sure the card is set to JBOD, if it supports it.

Link to comment
21 hours ago, johnnie.black said:

Do you have another 2TB or smaller disk? If that is a SAS1 enclosure there could be issues when using one or more >2TB disks.

You might be on to something here. I've moved one of my existing 2TB drives over to the enclosure and kept the new 2TB in there as well, and I've been able to assign both drives to the array.

 

It's just rebuilding now onto the disk that I moved over, so I'll try moving another one after that and see what that does.

 

Would this be more likely to be an issue with the enclosure or the HBA?

Link to comment

Well, that would be unfortunate. I read around quite a bit before buying this setup and it seemed like it would be OK with larger drive sizes. There's this link from another Unraid thread where they say they were using 3TB drives:

 

But my setup seemed to crap itself when I moved a 3TB drive over as well. Even that linked thread you've found says it's 4TB drives, although I'll admit it also doesn't say anything about 3TB ones.

 

I'll plough on moving another 2TB drive after this current rebuild (13 hours, yay) and see where I can get from there.

 

Thanks for your help, it's very appreciated and may just stop the wife from killing me.

Link to comment

So after some more playing around, I think it might just be the first slot in the enclosure that causes issues. I've also been a lot more careful to only move drives once the system has been shut down.

 

I moved one of the existing drives into slot 3 of the enclosure; no issues assigning it to the array. I then added a new 2TB drive to the enclosure in slot 4; again no issues, and the drive could be cleared and formatted. So this showed me that the enclosure itself seemed OK and that I could have more than one drive in there.

 

I then added my new 6TB drive to the 2nd slot in the enclosure and it appeared in Unraid; I had the option to add it to the array. So this looked positive, as it showed the array and the HBA both supported larger drive sizes. As I intended to make the 6TB my new parity drive, I moved it to the first slot of the enclosure. After that the system was noticeably slower to start up, and once it had booted there was no GUI access possible; when I shut the machine down it also seemed to crash and dump some output to the screen.

 

I've now moved the 6TB drive back to the 2nd slot in the enclosure, and the system was nice and quick to start up again as I expected; GUI access was restored and I'm now rebuilding parity onto that drive (running 10 mins so far without issue).

 

So I'll continue to move drives over to the enclosure one at a time, being careful to stop the array and power down the system each time before I do, and I'll just never use that first slot again. I'd like to know why it's not usable, but at this point I'm just happy I didn't waste £200.

 

Thanks for everyone's assistance.

Link to comment

