3 M1015's in ESXi not showing all drives in unRAID



Uh boy, I'm considering an Ivy Xeon myself. Is there much of a performance hit using the Sandy CPU? It sounds like this also hits the Tyan boards, yes?

 

Would a system with just two of the M1015 cards for unRAID work, with the third used for another VM? It sounded like the 4x slot alone was the issue, yes?


Uh boy, I'm considering an Ivy Xeon myself. Is there much of a performance hit using the Sandy CPU? It sounds like this also hits the Tyan boards, yes?

Yes it does.  I had lots of problems with Ivy Bridge BIOSes, not just the CPU.  I actually don't own an Ivy Bridge Xeon CPU.  I talked to Tyan tech support and they sent me the 1.05 BIOS for my S5512GM4NR (it came with 2.02).  Tech support also sent a custom batch file that was edited to correctly downgrade the BIOS.  I just haven't had time to apply it yet.

Would a system with just two of the M1015 cards for unRAID work, with the third used for another VM? It sounded like the 4x slot alone was the issue, yes?
With the M1015 - yes.  Although I had problems with the 2.02 Tyan BIOS on the PCI port with non-HDD controllers.  My HighPoint 1742 works fine, but USB 2.0 or tuner cards do not work well.
  • 2 months later...

Finally success!!!  I was able to borrow a Xeon E3-1230 Sandy Bridge processor from a coworker and it worked!  So it would seem that the Xeon E3-1230 V2 Ivy Bridge processor was the source of all my problems.  The unRAID VM now boots up without error using all three of my M1015s, and parity checks so far have been averaging ~75 MB/sec, which is exactly what I experienced in bare-metal unRAID.  Also, I no longer get the MPT BIOS fault during boot if I enable OPROM on my PCI-E slots.  So everything seems solved now.  Thank goodness!!  Anyone want to buy a 1230 V2 Ivy Bridge processor  :) ?
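For anyone who wants to sanity-check the same thing, one quick way to confirm all the controllers actually made it into the guest is to list the PCI devices from the unRAID console. A minimal sketch, assuming lspci is present on your unRAID build; the cards in this thread are cross-flashed to IT mode, so they should show up as SAS2008 [1000:0072], and you'd adjust the grep if yours report differently:

lspci -nn | grep -i sas2008
# expect one line per passed-through M1015 (three here), each ending in [1000:0072]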

 

While I'm not running a VM, I've been having lots of slow parity sync speeds, and from what I can tell we have identical hardware.  Hopefully, having found this post, the baldness I've developed from tearing out my hair will start to go away :)

 

CPU is on order!

  • 2 months later...

Hi Guys,

 

I've got a Supermicro X9SCM-F with firmware 2.0a and an Ivy Bridge 1240 V2.

I have connected:

PCI 1 (x8) ATI 6450

PCI 2 (x8) M1015

PCI 3 (x4) Nothing

PCI 4 (x4) ATI 5450

 

The above setup works...

 

I use passthrough for the M1015 to unRAID and each GPU to a dedicated Windows system (Win 7 + Win 8).

This is mainly for development and work stuff.

I hate coding with RDP.
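For reference, the per-VM half of that passthrough ends up as pciPassthru entries in each .vmx file once the devices are marked for DirectPath I/O in the vSphere client. A rough sketch of what one entry looks like (the PCI address and IDs below are examples only, not taken from this system):

# one passthrough entry per card/GPU; address and IDs are placeholders
pciPassthru0.present = "TRUE"
pciPassthru0.vendorId = "0x1000"
pciPassthru0.deviceId = "0x72"
pciPassthru0.id = "02:00.0"

If a VM with a lot of RAM refuses to start with a device passed through, the commonly suggested tweak is adding a pciHole.start = "1200" line, though nothing in this thread says that was needed here.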

 

The above is 3 PCIe cards working, which is like RockDawg's setup except he has 3 M1015s.

 

Anyways, I bought another M1015 and I want to change the setup to:

PCI 1 (x8) M1015 (OpenIndiana / napp-it, for ZFS)

PCI 2 (x8) M1015 (unRAID)

PCI 3 (x4) ATI 6450

PCI 4 (x4) ATI 5450

 

 

So... I'm wondering, will 2 M1015s and 2 GPUs work?


[...]

So... I'm wondering, will 2 M1015s and 2 GPUs work?

 

...what I understood from the OP's setup was that the challenge was to pass all three cards through to the same VM... two M1015s worked.

Your plan is to pass a single card to each individual VM... a different setup.

IMHO, with a setup that includes GPU passthrough you can consider yourself lucky to have a working system at all (with two GPUs, even).

As already pointed out, you will need to take on the challenge and try... only then will you know.

  • 4 weeks later...

I'm wondering if the problem with the 3x M1015 cards and Ivy Bridge was fixed in the refresh of the Supermicro MB.  I'm running the X9SCM-IIF-O motherboard with the Xeon E3-1230V2 Ivy Bridge CPU.  I just installed my third M1015 card last night and was worried that I'd have problems like rockdawg had.

 

I passed through all three controllers to unRAID.  I put at least one drive on each of the controllers, and it was able to see all of the drives.  I haven't done any parity checks though because I'm still configuring my system and haven't actually added the parity drive to the array yet.  I'm copying all of my data over and will do that when I'm done just because I'm assuming it copies faster if it doesn't have to calculate parity.
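If you want to double-check which controller each visible drive is hanging off, the sysfs path of every block device includes the PCI address of its HBA. A small sketch from the unRAID console, assuming a standard sysfs layout:

for d in /sys/block/sd*; do
  echo "$(basename $d) -> $(readlink -f $d)"
done
# the 0000:xx:00.0 segment in each resolved path tells you which M1015 that disk sits behind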

 

From what I gathered though, the problem was that the third card wasn't even visible in the unRAID VM.


I'm wondering if the problem with the 3x M1015 cards and Ivy Bridge was fixed in the refresh of the Supermicro MB.  I'm running the X9SCM-IIF-O motherboard with the Xeon E3-1230V2 Ivy Bridge CPU.  I just installed my third M1015 card last night and was worried that I'd have problems like rockdawg had.

 

Thanks for this info.

Is there a new MB revision or just a new BIOS?

What BIOS are you on, since you are on an X9SCM-IIF instead of the X9SCM-F?

Supermicro lists BIOS v2.0b for the X9SCM-F and v2.0a for the X9SCM-IIF.

  • 1 month later...

I'm not sure it was a question of having the 3 cards pass through successfully - I thought the issue only exhibited itself when you had a lot of drives... I think 16 was the critical number?  I don't have time to review the thread.

 

I have an X9SCM-F with a 1230v2 CPU and the 2.0b BIOS.  I currently have 2 M1015 cards connected, but I am going to see if I get the doorbell errors if I install one of the cards in a 4x slot.  I am using the latest ESXi build.  I do have access to a third M1015, but I only have 10 drives.
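For anyone repeating this kind of slot test, the quickest check I know of is to pull the mpt2sas lines out of the syslog after the VM boots and look for either a fallback to 32-bit DMA addressing or handshake/doorbell faults. A minimal sketch, assuming the stock unRAID log location:

grep -i mpt2sas /var/log/syslog
grep -iE "32 BIT|doorbell|fault" /var/log/syslog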


OK, I couldn't sleep, so I did some tests, and here is what I can report.

 

It does not matter where I put the 2 M1015s: they always report 64-bit addressing and there is no doorbell error.  Here are the syslog bits.

 

Slot 1 and Slot 3

 

Jun  1 22:50:30 VMTower kernel: mpt2sas version 12.100.00.00 loaded

Jun  1 22:50:30 VMTower kernel: mpt2sas0: 64 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (4147004 kB)

Jun  1 22:50:30 VMTower kernel: mpt2sas 0000:03:00.0: irq 72 for MSI/MSI-X

Jun  1 22:50:30 VMTower kernel: mpt2sas0-msix0: PCI-MSI-X enabled: IRQ 72

Jun  1 22:50:30 VMTower kernel: mpt2sas0: iomem(0x00000000d2400000), mapped(0xf84f8000), size(16384)

Jun  1 22:50:30 VMTower kernel: mpt2sas0: ioport(0x0000000000004000), size(256)

Jun  1 22:50:30 VMTower kernel: mpt2sas0: Allocated physical memory: size(7418 kB)

Jun  1 22:50:30 VMTower kernel: mpt2sas0: Current Controller Queue Depth(3307), Max Controller Queue Depth(3432)

Jun  1 22:50:30 VMTower kernel: mpt2sas0: Scatter Gather Elements per IO(128)

Jun  1 22:50:30 VMTower kernel: mpt2sas0: LSISAS2008: FWVersion(15.00.00.00), ChipRevision(0x03), BiosVersion(07.29.00.00)

Jun  1 22:50:30 VMTower kernel: mpt2sas0: Protocol=(Initiator,Target), Capabilities=(TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ)

Jun  1 22:50:30 VMTower kernel: mpt2sas0: sending port enable !!

Jun  1 22:50:30 VMTower kernel: mpt2sas0: host_add: handle(0x0001), sas_addr(0x500605b0046a12a0), phys(8)

Jun  1 22:50:30 VMTower kernel: mpt2sas0: port enable: SUCCESS

Jun  1 22:50:30 VMTower kernel: mpt2sas1: 64 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (4147004 kB)

Jun  1 22:50:30 VMTower kernel: mpt2sas 0000:0b:00.0: irq 73 for MSI/MSI-X

Jun  1 22:50:30 VMTower kernel: mpt2sas1-msix0: PCI-MSI-X enabled: IRQ 73

Jun  1 22:50:30 VMTower kernel: mpt2sas1: iomem(0x00000000d2500000), mapped(0xf8578000), size(16384)

Jun  1 22:50:30 VMTower kernel: mpt2sas1: ioport(0x0000000000005000), size(256)

Jun  1 22:50:30 VMTower kernel: mpt2sas1: Allocated physical memory: size(7418 kB)

Jun  1 22:50:30 VMTower kernel: mpt2sas1: Current Controller Queue Depth(3307), Max Controller Queue Depth(3432)

Jun  1 22:50:30 VMTower kernel: mpt2sas1: Scatter Gather Elements per IO(128)

Jun  1 22:50:30 VMTower kernel: mpt2sas1: LSISAS2008: FWVersion(15.00.00.00), ChipRevision(0x03), BiosVersion(07.29.00.00)

Jun  1 22:50:30 VMTower kernel: mpt2sas1: Protocol=(Initiator,Target), Capabilities=(TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ)

Jun  1 22:50:30 VMTower kernel: mpt2sas1: sending port enable !!

Jun  1 22:50:30 VMTower kernel: mpt2sas1: host_add: handle(0x0001), sas_addr(0x500605b0046a17b0), phys(8)

Jun  1 22:50:30 VMTower kernel: mpt2sas1: port enable: SUCCESS

 

Slot 3 and 4

 

Jun  1 23:04:06 VMTower kernel: mpt2sas version 12.100.00.00 loaded

Jun  1 23:04:06 VMTower kernel: mpt2sas0: 64 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (4147004 kB)

Jun  1 23:04:06 VMTower kernel: mpt2sas 0000:03:00.0: irq 72 for MSI/MSI-X

Jun  1 23:04:06 VMTower kernel: mpt2sas0-msix0: PCI-MSI-X enabled: IRQ 72

Jun  1 23:04:06 VMTower kernel: mpt2sas0: iomem(0x00000000d2400000), mapped(0xf84f8000), size(16384)

Jun  1 23:04:06 VMTower kernel: mpt2sas0: ioport(0x0000000000004000), size(256)

Jun  1 23:04:06 VMTower kernel: mpt2sas0: Allocated physical memory: size(7418 kB)

Jun  1 23:04:06 VMTower kernel: mpt2sas0: Current Controller Queue Depth(3307), Max Controller Queue Depth(3432)

Jun  1 23:04:06 VMTower kernel: mpt2sas0: Scatter Gather Elements per IO(128)

Jun  1 23:04:06 VMTower kernel: mpt2sas0: LSISAS2008: FWVersion(15.00.00.00), ChipRevision(0x03), BiosVersion(07.29.00.00)

Jun  1 23:04:06 VMTower kernel: mpt2sas0: Protocol=(Initiator,Target), Capabilities=(TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ)

Jun  1 23:04:06 VMTower kernel: mpt2sas0: sending port enable !!

Jun  1 23:04:06 VMTower kernel: mpt2sas0: host_add: handle(0x0001), sas_addr(0x500605b0046a17b0), phys(8)

Jun  1 23:04:06 VMTower kernel: mpt2sas0: port enable: SUCCESS

Jun  1 23:04:06 VMTower kernel: mpt2sas1: 64 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (4147004 kB)

Jun  1 23:04:06 VMTower kernel: mpt2sas 0000:0b:00.0: irq 73 for MSI/MSI-X

Jun  1 23:04:06 VMTower kernel: mpt2sas1-msix0: PCI-MSI-X enabled: IRQ 73

Jun  1 23:04:06 VMTower kernel: mpt2sas1: iomem(0x00000000d2500000), mapped(0xf8578000), size(16384)

Jun  1 23:04:06 VMTower kernel: mpt2sas1: ioport(0x0000000000005000), size(256)

Jun  1 23:04:06 VMTower kernel: mpt2sas1: Allocated physical memory: size(7418 kB)

Jun  1 23:04:06 VMTower kernel: mpt2sas1: Current Controller Queue Depth(3307), Max Controller Queue Depth(3432)

Jun  1 23:04:06 VMTower kernel: mpt2sas1: Scatter Gather Elements per IO(128)

Jun  1 23:04:06 VMTower kernel: mpt2sas1: LSISAS2008: FWVersion(15.00.00.00), ChipRevision(0x03), BiosVersion(07.29.00.00)

Jun  1 23:04:06 VMTower kernel: mpt2sas1: Protocol=(Initiator,Target), Capabilities=(TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ)

Jun  1 23:04:06 VMTower kernel: mpt2sas1: sending port enable !!

Jun  1 23:04:06 VMTower kernel: mpt2sas1: host_add: handle(0x0001), sas_addr(0x500605b0046a12a0), phys(8)

Jun  1 23:04:06 VMTower kernel: mpt2sas1: port enable: SUCCESS

 

I didn't post it, but it is the same if the cards are placed in slots 1 and 2.

So there are quite a few things that are different between my setup and rockdawg's.

 

Rockdawg's postings seem to show that he possibly has 2 GB of RAM allocated to his unRAID VM, while I have 4 GB, as indicated by this line (I think rockdawg's shows (2073502 kB)):

Jun  1 23:04:06 VMTower kernel: mpt2sas1: 64 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (4147004 kB)
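The total mem figure the driver prints should line up with what the guest itself reports, so this is easy to cross-check from the unRAID console (just a sanity-check sketch):

grep MemTotal /proc/meminfo
# roughly 4 GB here, versus roughly 2 GB if the VM were given 2048 MB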

My ESXi version is 5.1 build 1065491.

My M1015s are at firmware version 15, flashed to IT mode.

unRAID is version 5.0-RC11.

And possibly the more important bit: my X9SCM-F (board rev 1.11) with a Xeon 1230v2 CPU is running the 2.0b BIOS.  I have not been able to find a changelog for that BIOS version, so I can't say what changed.

 

Another thing: my unRAID VM boots in 25 seconds from power-on, including waiting the 3 seconds for the memtest option, not the 2 minutes or more that rockdawg said his took.

 

So I don't have 16 drives to connect, and I have not tested parity or speed as I have no drives connected.  But I will be migrating my 10-disk install over once we get 5.0 final (and nothing is broken), so I can post it all once I have it up and running.  I will also do a 3 M1015 card test; that card happens to be only at the version 11 flash, so I will also test it in a 4x slot on its own and see what is reported.

 

Also, one thing I have noticed and haven't seen anyone else mention is high CPU usage with no VMs running.  I tracked it down to ESXi's IPMI monitoring (what it uses to get fan speeds, temperatures, etc.).  I disabled this and all is good.
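In case anyone wants to do the same: I believe this refers to the host's CIM hardware-monitoring agent (sfcbd), which is what polls the IPMI sensors. One way to stop it from an SSH session on the host is sketched below; note this is my reading of the post rather than something spelled out in it, and stopping the service also stops hardware health reporting in vSphere:

/etc/init.d/sfcbd-watchdog stop
# stops the CIM agent (and its IPMI sensor polling) until the host is rebooted;
# to keep it off permanently, disable the service from the host's Security Profile instead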


Have you tried disabling or removing the boot ROM on the M1015s? When you cross-flash to IT mode you can leave out the boot ROM part (you don't need it, as it's only used to change BIOS settings, and in IT mode there aren't any really).
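For reference, the boot ROM is just the -b argument when cross-flashing, so leaving it off gives you an IT-mode card with no OPROM at POST. A rough sketch of the usual DOS sas2flsh invocations; the file names are the ones commonly distributed with the IT firmware package, so treat them as examples and match whatever package you flashed from:

With the boot ROM:
sas2flsh -o -f 2118it.bin -b mptsas2.rom

Without the boot ROM (no OPROM prompt at POST):
sas2flsh -o -f 2118it.bin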

 


 

 
