Marvell disk controller chipsets and virtualization



"The Patch" referenced by first post was superseded by the one referenced here:

https://lkml.org/lkml/2014/5/22/685

 

This patch set is indeed in our kernel.

 

Toward the end of kernel bug report https://bugzilla.kernel.org/show_bug.cgi?id=42679

Alex W. (the man) references another change in the 4.2 kernel:

http://git.kernel.org/cgit/linux/kernel/git/helgaas/pci.git/commit/drivers/pci/quirks.c?id=247de694349c2eeea11b8d8936541f5012a09318

 

We are moving to the 4.2 kernel in the unRAID 6.2.x release.

 

 

Link to comment

So it looks like we're waiting for the 4.2 kernel to fix this?

 

Yes.  But as I'm sure you know, patches don't come with guarantees.

I have an Asus Z9PE-D8 WS motherboard with a 9230 controller. I just tested installing Ubuntu and updating to kernel 4.2, then changed ports so the HDD is connected to the 9230 controller, and it did not work. Still the same problem.

So as Rob says, there are no guarantees it's going to work.

 

Edit: I should read some more before I post stuff  ::) I read that iommu=pt can help if you have a 9230, so I just tested booting unRAID with it added to syslinux.cfg, and now unRAID sees the disks connected to the 9230 controller. I haven't tried actually using the disks yet.

Link to comment
  • 4 weeks later...
  • 1 month later...
  • 4 weeks later...

I believe I only have this issue with my 8-port 88SE9485 SAS card, but the controller on the MB is Marvell too... I would like to run a VM, but I get a total lockup upon turning on Virtualization and VT-d.

 

Do you think my Marvell controller on the MB is unaffected by this bug? (see devices below)

I should be able to get another 8-port PCI-X SAS card without a Marvell controller to work.

 

Will this test for this issue be OK without disrupting the array drives?

1. Disable array auto-start, shut down the array.

2. Remove the suspect SAS card and switch all 8 drives to the SiI 3114 card.

3. Start the computer, go into the BIOS, and turn on Virtualization and VT-d.

4. Save changes and reboot.

 

Do you think the above would be a good test?

 

 

 

02:00.0 RAID bus controller: Marvell Technology Group Ltd. MV64460/64461/64462 System Controller, Revision B (rev 01)

04:00.0 Ethernet controller: Qualcomm Atheros Killer E220x Gigabit Ethernet Controller (rev 10)

05:00.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 41)

06:00.0 RAID bus controller: Silicon Image, Inc. SiI 3114 [sATALink/SATARaid] Serial ATA Controller (rev 02)

07:00.0 RAID bus controller: Marvell Technology Group Ltd. 88SE9485 SAS/SATA 6Gb/s controller (rev 03)
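As an aside, a quick way to pull just the Marvell devices out of an lspci dump like the one above (sketched here with the listing inlined as a variable; normally you'd pipe `lspci` straight into `grep`):

```shell
# Two of the controllers in the listing above are Marvell parts;
# count them by filtering the saved lspci output.
lspci_out='02:00.0 RAID bus controller: Marvell Technology Group Ltd. MV64460/64461/64462 System Controller, Revision B (rev 01)
06:00.0 RAID bus controller: Silicon Image, Inc. SiI 3114 [sATALink/SATARaid] Serial ATA Controller (rev 02)
07:00.0 RAID bus controller: Marvell Technology Group Ltd. 88SE9485 SAS/SATA 6Gb/s controller (rev 03)'

printf '%s\n' "$lspci_out" | grep -c Marvell
```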

 

 

Link to comment

I believe I only have this issue with my 8-port 88SE9485 SAS card, but the controller on the MB is Marvell too... I would like to run a VM, but I get a total lockup upon turning on Virtualization and VT-d.

Do you think my Marvell controller on the MB is unaffected by this bug? (see devices below)

I believe there are a few users that found success with a newer firmware, but I can't guarantee that.  Worth trying though.

 

Will this test for this issue be OK without disrupting the array drives?

1. Disable array auto-start, shut down the array.

2. Remove the suspect SAS card and switch all 8 drives to the SiI 3114 card.

3. Start the computer, go into the BIOS, and turn on Virtualization and VT-d.

4. Save changes and reboot.

There's no problem reconnecting drives anywhere: on every boot the kernel detects the hardware from scratch, and unRAID identifies the drives by their serial numbers, no matter which ports they have moved to.
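To illustrate the serial-number point, a tiny shell sketch with a made-up /dev/disk/by-id style name (the id below is hypothetical): the serial is baked into the persistent name, while the sdX letter and the port are not, which is why drives can move between controllers freely.

```shell
# Hypothetical by-id name: the persistent device name embeds the model
# and serial, so it stays the same no matter which port the drive uses.
id="ata-WDC_WD40EFRX-68N32N0_WD-WCC7K1234567"   # made-up disk id

# The serial is everything after the last underscore.
serial="${id##*_}"
echo "$serial"
```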

 

However, as you know, the SiI 3114 only supports 4 drives, so you'd need another controller.

Link to comment

I believe I only have this issue with my 8-port 88SE9485 SAS card, but the controller on the MB is Marvell too... I would like to run a VM, but I get a total lockup upon turning on Virtualization and VT-d.

Do you think my Marvell controller on the MB is unaffected by this bug? (see devices below)

I should be able to get another 8-port PCI-X SAS card without a Marvell controller to work.

Will this test for this issue be OK without disrupting the array drives?

1. Disable array auto-start, shut down the array.

2. Remove the suspect SAS card and switch all 8 drives to the SiI 3114 card.

3. Start the computer, go into the BIOS, and turn on Virtualization and VT-d.

4. Save changes and reboot.

 

Do you think the above would be a good test?

 

 

 

02:00.0 RAID bus controller: Marvell Technology Group Ltd. MV64460/64461/64462 System Controller, Revision B (rev 01)

04:00.0 Ethernet controller: Qualcomm Atheros Killer E220x Gigabit Ethernet Controller (rev 10)

05:00.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 41)

06:00.0 RAID bus controller: Silicon Image, Inc. SiI 3114 [sATALink/SATARaid] Serial ATA Controller (rev 02)

07:00.0 RAID bus controller: Marvell Technology Group Ltd. 88SE9485 SAS/SATA 6Gb/s controller (rev 03)

Have you tried using iommu=pt in syslinux.cfg?

Link to comment
  • 3 weeks later...

I have a Gigabyte X58A-UD3R motherboard, and I have this problem.

The motherboard has 2 SATA ports on a Marvell 9128 controller.

unRAID 6.1.2

 

Same motherboard here, I'm on unRAID 6.1.7. unRAID cannot see any drives I plug into the SATA3 Marvell ports. I tried adding "iommu=pt" to syslinux.cfg, but no luck so far.

Link to comment

Same motherboard here, I'm on unRAID 6.1.7. unRAID cannot see any drives I plug into the SATA3 Marvell ports. I tried adding "iommu=pt" to syslinux.cfg, but no luck so far.

 

This is somewhat surprising. Make sure you added it in the correct place, usually on the first append line, below "label unRAID OS" (there are other append lines for safe mode, etc.). Also, you need to reboot afterwards.
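For reference, an edited syslinux.cfg typically ends up looking something like this (a sketch only; your file may have different menu lines, and any existing append options should be kept alongside iommu=pt):

```
default /syslinux/menu.c32
menu title Lime Technology
prompt 0
timeout 50
label unRAID OS
  menu default
  kernel /bzimage
  append iommu=pt initrd=/bzroot
label unRAID OS Safe Mode (no plugins)
  kernel /bzimage
  append initrd=/bzroot unraidsafemode
```

The key point is that iommu=pt goes on the append line of the boot entry you actually select, not on the safe-mode entry.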

Link to comment

Yeah, it's weird... if I boot a GParted bootable CD (for example), all my drives are there. I can also see them on the POST screens when I power on. Once booted into unRAID, the two drives plugged into those ports don't show up. I've attached my syslinux.cfg as a text file. I had shut down, put the USB stick in another computer to edit the file, then booted back up.

 

I have a Plus license and 11 devices in my computer at the moment; 12 is the limit, so it shouldn't be that.

syslinux.txt

Link to comment

I have also tried the iommu=pt fix and it didn't work.

 

Running a Gigabyte Z87-UD3H, and I have two Mushkin SSDs on the Marvell controller (88SE9172) in RAID 0 that I wanted to use either for my Windows VM, or to run in RAID 1 and use for my cache.

 

I might just be a noob to this stuff, but from reading all of the posts on here it seems like we have to wait for the new kernel in 6.2 to see if it's been fixed? I also don't know how to tell if there's newer Marvell firmware to try, as I thought that was integrated with the BIOS and I'm running the latest one.

 

Any more ideas, guys? I saw something about pci-phantom, but I think that was a completely different issue.

Link to comment

I have also tried the iommu=pt fix and it didn't work.

Running a Gigabyte Z87-UD3H, and I have two Mushkin SSDs on the Marvell controller (88SE9172) in RAID 0 that I wanted to use either for my Windows VM, or to run in RAID 1 and use for my cache.

I might just be a noob to this stuff, but from reading all of the posts on here it seems like we have to wait for the new kernel in 6.2 to see if it's been fixed? I also don't know how to tell if there's newer Marvell firmware to try, as I thought that was integrated with the BIOS and I'm running the latest one.

Any more ideas, guys? I saw something about pci-phantom, but I think that was a completely different issue.

 

Run them as regular disks, not in RAID.

Link to comment
  • 2 weeks later...
  • 4 weeks later...

I just purchased the ASRock EP2C602, which uses the Marvell 88SE9230 controller. The first time booting the system I noticed half the drives were missing or failing. I tried adding iommu=pt to the append line, and it appeared to resolve the issue. After 48 hours, the same issue started and 3 of my disks were disabled/missing.

 

I plan on following the instructions to update the Marvell firmware tomorrow to see if that will resolve the issue.

Link to comment

I just purchased the ASRock EP2C602, which uses the Marvell 88SE9230 controller. The first time booting the system I noticed half the drives were missing or failing. I tried adding iommu=pt to the append line, and it appeared to resolve the issue. After 48 hours, the same issue started and 3 of my disks were disabled/missing.

I plan on following the instructions to update the Marvell firmware tomorrow to see if that will resolve the issue.

 

I'd be curious to know which instructions you are referring to, as I have the same MoBo, with the same issues on the Marvell ports.

 

Do you have a link to the firmware & update files?

Link to comment

I just purchased the ASRock EP2C602, which uses the Marvell 88SE9230 controller. The first time booting the system I noticed half the drives were missing or failing. I tried adding iommu=pt to the append line, and it appeared to resolve the issue. After 48 hours, the same issue started and 3 of my disks were disabled/missing.

I plan on following the instructions to update the Marvell firmware tomorrow to see if that will resolve the issue.

I'd be curious to know which instructions you are referring to, as I have the same MoBo, with the same issues on the Marvell ports.

Do you have a link to the firmware & update files?

I haven't gotten around to it yet, but I found this on their website which provides the file and instructions.

 

http://www.asrockrack.com/support/faq.asp#BMC

Link to comment
  • 3 weeks later...

Is anyone able to confirm/deny that this IOCraft, Marvell 88SE9215-based controller will be fine?

 

Negative, it won't (be fine).

 

I threw mine away. You cannot enable VT-d/IOMMU (even in 6.2.0-beta19 with kernel 4.2), the iommu=pt workaround in syslinux.cfg doesn't work either, and the ports keep disconnecting, preventing you from mounting your array on reboot. A complete nightmare! Run while you can and steer clear of anything that spells MARVELL 9215 controller on unRAID!  :-\  ...for the time being, I guess....

Link to comment

I'm experiencing similar issues, and wonder if this is due to me having a Marvell controller. I purchased a StarTech PEXSAT34SFF PCIe to mini-SAS controller (I've got an HP N54L MicroServer and wanted to run mini-SAS rather than several SATA cables due to space constraints, etc.). I've switched off SVM in the BIOS, but I'm still having issues with drives. Bear in mind that SMART reports for both SSDs and the 1x 4TB connected to this controller show no issues; these drives are less than a month old and have hardly seen any usage due to the above-mentioned issue. Can I disable virtualization within unRAID altogether? I only use Docker; I don't plan on using KVM, etc.

 

Drives all show up fine and register fine on the array; it's only when activity occurs (writes, etc.) that things start playing up.

Link to comment
