brian89gp


Posts posted by brian89gp

  1. That could explain it. I just noticed that the OS for unRAID is "Other (32-bit)" and not "32-bit FreeBSD". Perhaps that could explain my second problem.

     

     Edit: indeed it does. Thanks for the tip. Do you think this could have some relation to the "Failed to initialize MSI-X interrupts." problem? When passing through the second NIC, I do not need to add a NIC, correct?

     

     It might.  If there is an option, it must do something, right?

     

    That is correct.  Unless you have a specific need for a dedicated NIC it is usually not necessary though.

  2. Another issue: when I attempted to add the network adapter I have only two options, "Flexible" or "E1000", which is different from what I see for my Windows VMs, where I see "VMXNET3". Could it be that Zeron's vmtools is not fully compatible with ESXi 5.1? In vSphere Client I can see VMware Tools are running (3rd-party/Independent). Maybe that's why I do not see the VMXNET3 option.

     

    Any thoughts?

     

     Which options you get for the NIC hardware depends on the OS you select for the guest.  Which one did you choose?

  3. Hi, the board is already in AHCI mode. I tried one of the Gigabyte boards in IDE mode as well, but not the ASUS board, only AHCI. I also tried plugging the HDD into the M1015 card, which also wouldn't work. It is flashed to IT mode, if that upsets ESXi as well?

     

     The ASUS site says your board has the ICH10R and SIL5723 SATA controllers. I can almost guarantee the SIL5723 isn't supported, and I am finding several posts saying the ICH10R is not supported under ESXi (the non-RAID ICH10 is, though). Do you have any options to turn off the RAID features on the ICH10R?  How is the SSD set up on the controller?

     

    As far as the M1015, does ESXi see the card under "Storage Adapters"?

  4. 4972: no fs driver claimed device 'mpx.vmhba32:c0t0:l1' : not supported

    4972: no fs driver claimed device 't10.ata______intel_ssdxxxxxxxx_________

    4972: no fs driver claimed device ' control' : not supported

     

     This is your answer: no driver for the controller the SSD is plugged into.  Just because ESXi can boot from it does not mean it has a driver to support it as a datastore.  For ages ESX/ESXi supported only SCSI/SAS controllers for datastores; only recently has support been introduced for some AHCI (i.e. SATA) controllers, and the supported ones tend to be those found on server motherboard chipsets (Intel).

     

     If your motherboard allows it, try flipping your SATA controllers into AHCI mode.

  5. 1. Just so I totally understand this, I assume that the datastore for the Win7 VM would essentially be my C: drive and contain all the data for that VM?

     

     Datastore is the term for the storage as the virtualization server (ESXi, Xen, etc.) sees it.  A virtual machine is created on the datastore, and the space you allocate on the datastore for it becomes the C:\ drive in your Win7 VM, or however else you want to partition it.  There can be multiple virtual machines on the same datastore, up until the point you run out of space.

     

     2. I’ve read about using passthrough for the HBA controllers (I have two IBM M1015’s) and I understand that you have to pass the entire controller to the OS and that’s the only OS that can then access it.  It looks like the X9SCM-F-O has two separate onboard SATA controllers.  Can I pass the 4 x 3Gbps controller through to unRAID and use the 2 x 6Gbps controller for the datastores?

     

     If I understand you right, yes.  When you pass through a controller, ESXi can no longer use it.  If you don't pass it through, then ESXi can use it.

     

    3. I’ve read some people talk about using a separate NIC for each VM, but it seems more people are doing the virtual switch instead.  (I assume that means that ESXi is handling the network traffic management via a single NIC?).  Which is the best route and why?

     

     The internal VMware vSwitch is 10GbE, and the VMXNET3 NIC is 10GbE.  That is a good reason to put everything on one vSwitch and have one NIC plugged in externally: inter-VM traffic runs at 10GbE.  You can do it the other way, but it is a PITA and there are only a few use cases where you would want to do it that way.

     

     

  6. Mellanox has 3rd party drivers for their cards for certain versions of ESXi. 

     

     You might search for older CX4-based 10Gb NICs; I see them on eBay for under $150 each occasionally.  Running Ethernet on an Ethernet card is likely to be much less trouble and hassle than running Ethernet over InfiniBand.

     

    Also saw this one:

    http://www.ebay.com/itm/Dell-10Gb-NIC-PCIe-Network-Adapter-Card-XR997-0XR997-/251095217717?pt=US_Internal_Network_Cards&hash=item3a7670f635

     

    CAT6A makes for a cheap interconnect.

     

    http://forums.servethehome.com/showthread.php?22-10-Gigabit-Ethernet-(10GbE)-Networking-NICs-Switches-etc

     

  7. Too many people drop mega bucks on switches/routers for their home networks. Unless your home takes up several city blocks the regular home stuff works just fine. I never caught on why everyone wants this high end stuff for home when most people just browse the web. :)

     

     I went through two D-Link gigabit switches and three Netgear gigabit switches.  They would run for 4-6 months, then all of a sudden either start randomly dropping packets or have ports go completely dead.  I've been going on 3 years with the HP switch now.  I could have bought three HP switches for all that I spent on the multiple lower-end switches.

     

     Had the same problem at work with 24-port "enterprise" Linksys gigabit switches.  Went through over 15 of them in less than a year; they kept overheating and dropping ports or groups of ports.

     

     

     

  8. Do you know whether the SUPERMICRO MBD-X9SCM-F-O will support it?

     

     Supermicro calls their lights-out management IPMI; look in the specs for the mobo to find out.

     

    Could I pass the powerdown script through unRaid?

     

     Probably.  Leave SSH turned on in ESXi, then create a script to log in and issue the shutdown command.
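
     A minimal sketch of what that script could look like (Python; the host name is hypothetical, key-based SSH login as root is assumed, and the exact shutdown command varies by ESXi version, so treat this as a starting point rather than the definitive method):

     import subprocess

     ESXI_HOST = "esxi.example.lan"  # hypothetical host name/IP

     def shutdown_esxi(host=ESXI_HOST):
         # SSH in as root and issue the shutdown. Assumption: "poweroff" is an
         # acceptable shutdown command on your ESXi build, and your guests have
         # VMware Tools installed so the host's autostop rules can stop them cleanly.
         subprocess.run(["ssh", "root@" + host, "poweroff"], check=True, timeout=60)

     if __name__ == "__main__":
         shutdown_esxi()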

     

     If I get it right, I have to pass through the USB port to unRAID, make the APCD add-in do a clean powerdown of the unRAID server, and run the VMware remote CLI to shut down the ESXi server, which through rules will power off the VM guests (Windows + Linux). Did I get it right?

     

    Pretty much.  Just remember you usually have to pass through the whole USB controller since they are all attached to the same USB hub on the motherboard.

  9. Is it possible to set up WOL with ESXi?

     I would imagine that if the motherboard supports it, you could WOL an ESXi server.  Most boards you would use for ESXi, though, have lights-out management on them or offer it as an option, and that includes power management and a remote console.  Not sure of your intended purpose, but that might work.

     

     How would it work to ensure data integrity in open VMs?

     There are power-on/power-off rules in ESXi that can power VM guests off and on at system shutdown and boot.  You just have to make sure VMware Tools is installed on all VM guests.

     

     How would the connection with an APC UPS work to shut down the system on a power outage?

     Pass through the USB port to a VM guest and have that guest kick off a script that runs the VMware remote CLI to shut down the ESXi server.

     

  10. My unRAID guest went down hard last night...

     My OCZ SSD cache drive fried. It took the expander with it (or the other way around).

     I could not get it to come back up, even after I pulled the SSD.

     

     I had to gut the server to get it to reboot. I also had to switch to the Molex power to get the expander back online, turn the OPROM back on, reset the BIOS on the M1015, run several new SAS cables (not sure if that helped at all), and re-add it back in ESXi (even ESXi kicked it out).

     

    It is back up and running for now without the SSD... but I think it is just a bandage.

     

     I need to test the SSD, the M1015, and the expander on my test rig as soon as I can get the time.

     The lights on the M1015 and the expander are both lit differently than before.

     (Help me out here: what on the Intel expander is lit in normal operation? I only have one, so I can't check another unit.)

     Luckily I have plenty of spare HBAs if I do need to RMA anything.

     

     This is disturbing.  I use OCZ drives for my datastores attached to an M1015, and I kept having trouble where they would just disappear and I couldn't get them back without reformatting them.  I finally figured out that somehow the partition table was getting lost; enter the "partedUtil setptbl" ESXi command to reset the partition table (on my Agility 3 I have to reset the partition table every 2-3 days; the Vertex 3 is about once a week).  They were also doing this on the onboard LSI 2008 controller.
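
     For reference, here is a rough sketch of that reset, scripted from a management box over SSH (Python; the host and device path are placeholders, and the GPT entry follows VMware's documented VMFS recovery layout of a single partition starting at sector 2048 with the VMFS type GUID, so double-check it against your own disk before writing anything):

     import subprocess

     ESXI_HOST = "root@esxi.example.lan"  # hypothetical host, key-based SSH assumed
     DISK = "/vmfs/devices/disks/t10.ATA_____INTEL_SSD_PLACEHOLDER"  # placeholder device path
     VMFS_GUID = "AA31E02A400F11DB9590000C2911D1B8"  # GPT partition type GUID for VMFS

     def esxi(*args):
         # Run a command on the ESXi host over SSH and return its output.
         return subprocess.check_output(["ssh", ESXI_HOST] + list(args), text=True)

     def reset_vmfs_ptable(disk=DISK):
         # Last usable sector on the disk, as reported by partedUtil getUsableSectors.
         last = esxi("partedUtil", "getUsableSectors", disk).split()[-1]
         # Recreate a single VMFS partition from sector 2048 to the last usable sector.
         # The entry is quoted so the remote shell hands it to partedUtil as one argument.
         entry = '"1 2048 {} {} 0"'.format(last, VMFS_GUID)
         esxi("partedUtil", "setptbl", disk, "gpt", entry)

     if __name__ == "__main__":
         reset_vmfs_ptable()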

     

     I recently got a Plextor M3 and for the life of me I couldn't get it to work right on the LSI 2008 controllers (either of them).  I could format it and copy data to it, but I kept getting random I/O and/or corruption issues.  For example, when trying to install Ubuntu I would either get a message saying my CD was corrupt, or it would refuse to boot due to a corrupt install.

     

     So, I moved the Plextor and the OCZ to the onboard Intel ICH10 controllers and all of the problems have disappeared.

     

     I have nothing to back it up with, but my impression is that the LSI 2008 HBA doesn't like SSDs too much.

  11. Couple thoughts:

     

     1. I would use CAT6a, as it is rated for longer distances at 10GbE.

     2. HP has a PoE-powered 8-port gigabit switch; put one of those in the outside flat, have a PoE switch or PoE power injector in the house, and you don't need a power supply on the one in the flat.

     3. Run twice as many cables as you foresee needing, including running two coax.  Chances are that eventually you will either need more or one will go bad.

     4. Consider having a couple of network cables run for WAPs in central locations.

     5. Crimping network cables is not too hard to learn to do (as long as you are not colorblind).

  12. I have just bought an IBM M1015 for my new unRAID server. As soon as I receive the card I am going to flash it with the usual HBA FW - but!

     

     

     If I one day want to use an OS other than unRAID, say Windows Server 8 maybe, is there any way I can reverse the process and get back the original IBM RAID FW?

    Yes. The same way. Almost all the instructions tell you how.

     


     

     Though as far as RAID cards go, it's not the fastest.

  13. SAS expanders are like port multipliers for SATA.  You connect a SAS controller (like the IBM M1015) to the SAS expander, which then multiplies the number of drives you can connect to the controller.  I currently have my M1015 connected to an Intel RES2SV240 SAS expander so that I can connect more than 8 drives to my single M1015.

     

    Yup.  If you want 24 drives you would need either 3x 8-port cards or 1x 8-port card and a SAS expander.  Both methods work fine, depending on how many PCIe slots your motherboard has.

     

     It also provides an interesting benefit if you are using ESXi.  My motherboard has 7 PCIe slots and one onboard SAS2008 controller; using SAS expanders, I could have 8 unRAID servers with 22+ drives each, all running off the same motherboard.  That wouldn't be possible without SAS expanders, since I would have to use up 3 slots per unRAID server... (not that I would have that many anyway, but it provides options)

  14. If you have a little extra money, use SAS card(s).

     

    Why SAS card?

     

     For enterprise features and reliability.  Think Realtek vs. Intel for a network card: if you had the choice, which would you choose?  With the abundance of IBM M1015 cards, they are within the price range of a lot of home users now.

     

    -SAS expanders

    -true hot swap

     -almost all use SFF-8087 connectors, which makes cable management a lot easier

    -support in ESXi if you ever go that route

    -able to use SAS drives if it tickles your fancy

    -more consistent driver quality

     

     Just double-check your motherboard before you buy.  A lot of consumer desktop boards don't like mass storage controllers being plugged into them, especially into their higher-bandwidth video card slots.

  15. I think again I have to stand up for Best Buy's Seagate Barracuda LP. The 3 TB version is $169.99, and comes with the same 5 year warranty as the 2 TB drive. $10 for three additional years' coverage is a no-brainer IMHO.

     

    http://www.bestbuy.com/site/Seagate+-+Barracuda+3TB+Internal+Serial+ATA+Hard+Drive+for+Desktops/3371132.p?id=1218396591168&skuId=3371132

     

     That is not the LP model; it is the 7200 RPM one.

     

     No-brainer for me: a faster drive with a 5-year warranty for only $10 more.

  16. PCIe 1.0 per lane = 250 MB/s = 2 Gb/s

     PCIe 2.0 per lane = 500 MB/s = 4 Gb/s

     PCIe 3.0 per lane = 1 GB/s = 8 Gb/s

     

     

     First, it is highly unlikely that all 8 disks are going to push the maximum bus speed at the same time.  It is also unlikely that a single SATA drive would even push the maximum bus speed for SATA 3 (unless you are talking about high-end SATA SSDs).

     

     For your theoretical question though, 8 disks at 6 Gb/s each = 48 Gb/s.  It would be possible with a x16 PCIe 2.0 card or a x8 PCIe 3.0 card.
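
     A quick sanity check of that arithmetic (a throwaway Python sketch using the rounded effective per-lane rates from the top of this post):

     LANE_GBPS = {"PCIe 1.0": 2, "PCIe 2.0": 4, "PCIe 3.0": 8}  # effective Gb/s per lane
     SLOT_WIDTHS = (1, 2, 4, 8, 16)                              # standard slot widths

     disks, sata_gbps = 8, 6
     needed = disks * sata_gbps  # 48 Gb/s if all 8 drives saturated SATA 3 at once

     for gen, per_lane in LANE_GBPS.items():
         lanes = -(-needed // per_lane)  # ceiling division
         slot = next((w for w in SLOT_WIDTHS if w >= lanes), None)
         fits = "x%d slot" % slot if slot else "more than a x16 slot"
         print("%s: %d lanes -> %s" % (gen, lanes, fits))

     # PCIe 1.0: 24 lanes -> more than a x16 slot
     # PCIe 2.0: 12 lanes -> x16 slot
     # PCIe 3.0: 6 lanes -> x8 slot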

     

     With the common adoption of 10Gb Ethernet, 16Gb Fibre Channel, and QDR InfiniBand in the server market, the PCIe 3.0 bus is becoming pretty common.  PCIe 2.0 isn't fast enough anymore.