brian89gp


Posts posted by brian89gp

  1. I installed mine to a thumbdrive.

    You installed it to a thumbdrive?

    What about the performance of the different swap files from ESX?  I know they are not much... but still...

     

    ESXi boots to RAM.  It's not like Windows or other OSes; it was built and meant to be run from a flash device.  The only thing you gain from installing it to an HDD is a slightly faster boot.

     

    Keep in mind, this is an enterprise-class OS that VMware just so happens to release for free (no doubt only to compete with the free Hyper-V...)

  2. ok...super noob question I guess...what is the benefit/need for having multiple NICs?

     

    Keep in mind that VM-to-VM traffic on the same vSwitch stays on the vSwitch.  The downside to manually splitting up multiple NICs is that you give this up: if you have two NICs each on their own vSwitch (but still on the same network), or you vmDirectPath one NIC through to one of the VMs, then any traffic between the other VMs and the one with the dedicated NIC/vSwitch has to leave the vSwitch and cross the physical network/switch.

     

    Now, if you have two NICs attached to the same vSwitch, ESXi will balance across them and inter-VM traffic will stay on the server.  Traffic for one VM will go out one NIC; ESXi does not bond the NICs unless you also configure bonding on the switch side.

     

    Staying on the vSwitch and not exiting the ESXi server matters because of the VMXNET3 NIC (10Gb).  If all your VMs have the VMXNET3 NIC, then all VM-to-VM traffic runs at 10GbE speeds.  Stay on the vSwitch at 10Gb, or dedicate a NIC/vSwitch and use your super-fast 1Gb switch...?  (10Gb/s = 1280MB/s, 1Gb/s = 128MB/s; quick conversion sketched at the end of this post.)

     

    No-brainer for me.  90% of everything is on ESXi; the only things that are not are workstations and media players, and a single 1Gb/s NIC is more than enough for those.
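
    For anyone double-checking the numbers, here is a minimal Python sketch of the conversion quoted above (assuming 1 Gb = 1024 Mb and 8 bits per byte; raw line rate, ignoring protocol overhead):

        def link_speed_to_mb_per_s(gb_per_s):
            """Convert a nominal link speed in Gb/s to MB/s (1 Gb = 1024 Mb, 8 bits/byte)."""
            return gb_per_s * 1024 / 8

        print(link_speed_to_mb_per_s(10))  # VMXNET3 vSwitch path: 1280.0 MB/s
        print(link_speed_to_mb_per_s(1))   # physical 1GbE hop:    128.0 MB/s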

  3. It would be better to use VMDirectPath for the USB drive as it will run at full USB 2.0 speeds, not the 1.1 you are seeing with passthrough.  I am unable to get USB via VMDirectPath working properly (hangs during unRAID boot), so I am curious to see if others are successful.

     

    I was able to set it up through VMDirectPath.  It shows 8 USB devices for my motherboard (X8DTH-6F): 6 UHCI and 2 EHCI.  I set VMDirectPath on half of them (3 UHCI and 1 EHCI) since they all seemed to be attached to the same hub and group of ports (a keyboard attached to the same USB port was passed through to the guest OS no matter which of the 4 PCI devices I set to pass through).  I then passed the EHCI PCI device through and it boots fine.

     

    I am having problems with the fastpath fail state, but that is a problem with ESXi 4.0 that was fixed in 4.1.  It is important that you set every USB PCI device that can see a particular USB port for VMDirectPath.  If you do not, ESXi and the guest OS will fight for ownership and you get into the fastpath fail state problems: it completely freezes 4.0, and while the issue was fixed in 4.1 it can still cause problems in the guest OS.

     

    Maybe someone else can explain how the USB devices work, and how one physical USB port seems to be reachable from any of 3 different UHCI controllers while several physical USB ports seem to sit behind the same EHCI controller.  It almost seems like the UHCI and EHCI devices are buses, not physical ports.  EHCI = USB 2.0, so that makes sense: two USB 2.0 buses per motherboard, each bus servicing 4 physical ports.  (See the sketch of that guessed layout at the end of this post.)

     

    TIP:  Install a temporary ESXi instance onto a SATA drive, then start messing around with VMDirectPath of USB devices.  If you install ESXi onto a flash drive and then set that port to VMDirectPath, bad things will happen and the only way to recover is to reinstall.

     

    TIP2: Don't use ES Xeon 5500 processors.  VMware wrote out support in later versions; 4.0.0 is the latest I can run without getting a PSOD on boot.  People with later revisions of ES processors report they can run 4.0 U1, but nothing newer.  Live and learn.
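
    To make the bus guess above concrete, here is a minimal Python sketch of that hypothetical layout: two EHCI (USB 2.0) buses, each fronting 4 physical ports, with 3 UHCI companion controllers per bus.  The controller names and port numbers are illustrative assumptions, not values read off the X8DTH-6F:

        # Hypothetical mapping, guessed from the behavior described above.
        layout = {
            "EHCI-1": {"ports": [1, 2, 3, 4], "uhci": ["UHCI-1", "UHCI-2", "UHCI-3"]},
            "EHCI-2": {"ports": [5, 6, 7, 8], "uhci": ["UHCI-4", "UHCI-5", "UHCI-6"]},
        }

        def controllers_seeing_port(port):
            """All PCI devices that can own a physical port -- per the warning above,
            every one of these should be set for VMDirectPath together."""
            for ehci, group in layout.items():
                if port in group["ports"]:
                    return [ehci] + group["uhci"]
            return []

        print(controllers_seeing_port(3))  # ['EHCI-1', 'UHCI-1', 'UHCI-2', 'UHCI-3']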

  4. I have used the Intel X520 dual-port adapters in both mezzanine card and PCIe card formats; probably 50-60 currently in use.  Solid card, no complaints.  The Emulex 10GbE CNAs are also good cards (though more pricey).

     

    Use twinax (sometimes called CU or copper).  That is $120 total per cable, versus $60 for a 10GbE aqua fiber cable plus at least $600 for the optical transceiver at each end (rough per-link math sketched at the end of this post).

     

    What type of motherboard are you going to use to drive this?  The Intel X520 cards are PCIe 2.0 x8.  What type of switches are on the other end?  If you are planning on actually pushing that much traffic, many cheaper 10GbE switches will start to suffer from buffer overruns.
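
    A minimal sketch of that per-link cost comparison, using the rough prices quoted above and assuming one optic is needed at each end of a fiber run:

        twinax_cable = 120                  # twinax (CU) cable, ends built in
        fiber_cable, transceiver = 60, 600  # aqua fiber cable + optic per end

        twinax_link = twinax_cable
        fiber_link = fiber_cable + 2 * transceiver

        print(twinax_link)  # 120
        print(fiber_link)   # 1260 -- over 10x the cost per link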

  5. I would like to know the same thing; is there a walkthrough or tutorial on how to do it?

     

    The reason for me is to install it in ESXi and get easier config/booting.  Booting, because it wouldn't need an additional bootloader to boot from a USB passthrough.  Config, because only the key would be stored on the USB stick, not the config, so getting the USB key passthrough to multiple unRAID guests mixed up wouldn't be as big of a deal.

  6. If hardware is supported in a previous version, it is almost always supported in a later version.  That has held true for 3.0, 3.5, 4.0, and 4.1 so far.  Most compatibility problems come from using consumer-grade motherboards, storage, and network hardware, and from trying to shove non-server hardware into it (TV tuner cards).  The motherboard he used is server grade, the NIC is Intel, and the storage is industry-standard LSI.  Chances are it will work.

     

    (I work with ESX for a living)

  7. The PCIe card edge on the expander is only for physical mounting and power.  It can be screwed to the side of the case and powered through the Molex plug (i.e., not plugged into the motherboard).  There are no drivers; it is an invisible/silent device.  It is similar to the SATA-to-SATA port multipliers you mentioned, only at a whole different level of speed and features.


  8. $50

    Hauppauge WinTV-HVR-2250 Dual TV Tuner PCIe

     

    $50

    Silicon Dust HDHR-US Dual Networked HD tuner

     

    $150 for the three below (I have since switched to XBMC):

    SageTV HD200 media extender

    SageTV v6 & v7 (upgrade) server license

    Quantity 2 SageTV Windows client licenses

  9. Have the following for sale, cheaper than you can buy just the case for online.  Will ship.  $300

     

    SuperMicro SC743T-645B tower case, 8 hot-swap SATA drive bays with cages, 645W PSU

    Intel SHG2 dual Xeon motherboard

    dual 2.0GHz Xeon processors

    4GB of RAM

    3ware 9550SX-8LP 8-port SATA-II PCI-X RAID card (can be set to JBOD to pass the drives through)

    Intel 4-port SATA PCI-X RAID card

     

  10. All LSI SAS2008 controllers I know about, both in onboard and card form, are PCIe 2.0 x8.  It should give 3Gbps (384MB/s) per drive (6Gbps x 8 channels / 16 drives; quick math at the end of this post).

     

    Just looking to postpone buying the RES2SV240 SAS expander until after I have 8 drives.  According to Rajahal's comments here: http://lime-technology.com/forum/index.php?topic=13094.msg124220#msg124220 it doesn't look like it will have any ill effects, since unRAID goes by serial number and adding the SAS expander will at most change the drive location ID.

     

    This is a new build.  Motherboard is a SuperMicro X8DTH-6F, bandwidth will not be a problem.
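
    For reference, a minimal sketch of the per-drive bandwidth math from the top of this post (assuming the 1 Gb/s = 128 MB/s conversion used elsewhere in this thread, an even split across drives, and no SAS protocol overhead):

        uplink_lanes, lane_gbps, drives = 8, 6, 16  # SAS2008 <-> expander uplink, drives behind it

        per_drive_gbps = uplink_lanes * lane_gbps / drives
        per_drive_mb_per_s = per_drive_gbps * 128   # 1 Gb/s ~= 128 MB/s

        print(per_drive_gbps)      # 3.0 Gb/s
        print(per_drive_mb_per_s)  # 384.0 MB/s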

  11. I am getting a motherboard with the onboard LSI SAS2008 controller and will have fewer than 8 drives to begin with.  Can I plug the SAS2008 controller directly into the backplane, set up unRAID, and then sometime in the future add an Intel RES2SV240 SAS expander (SAS2008 to Intel SAS expander to the drives)?  The drives would be moved from being plugged directly into the controller to being plugged into the Intel SAS expander.  Will unRAID handle this and just recognize the new arrangement of drives?

  12. I've not dealt with the ES drives, I'm curious what the variances between them are.

     

    I am guessing better parts, better burn-in, and more love/attention from the manufacturer.  We have been getting a lot of 2TB 7200rpm SAS drives (Dell calls them nearline something or other) and they have been absolutely rock solid.  These are in 24TB arrays where about 6TB gets written/overwritten in a week, so no lack of activity.

  13. After having a handful of Linksys/D-Link/Netgear switches overheat and start dropping packets, or worse, burn out completely, I got an 8-port HP switch.  http://www.newegg.com/Product/Product.aspx?Item=N82E16833316076

     

    I bleed Cisco green through and through, but for an unmanaged gigabit switch the HP line is both affordable and reliable.

     

    They also have a PoE-powered one, which is pretty cool in my book.

    http://www.newegg.com/Product/Product.aspx?Item=N82E16833316155


  14. Where I work, they bought 200+ 750GB Seagate 7200.9 drives for a handful of backup-to-disk arrays.  I can attest to what WeeboTech says: they do fail in batches.  We would have 5-10 drives fail within a few days of each other, then several months of no failures, then another 5-10 drives fail at a time.  Over 4 years about 25% have failed, and of the reman drives sent as warranty replacements for that 25%, half failed as well.  (Quick tally at the end of this post.)

     

    On the other hand, they bought 45 750GB Seagate ES drives; not one failure in 4 years.
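
    For a sense of scale, a minimal tally of those numbers (assuming exactly 200 drives and the rough percentages above):

        drives, failed_frac, reman_failed_frac = 200, 0.25, 0.50  # rough figures from above

        failed = drives * failed_frac
        reman_failed = failed * reman_failed_frac

        print(int(failed))        # 50 original drives failed over 4 years
        print(int(reman_failed))  # 25 of the warranty replacements failed too
        print(failed_frac / 4)    # 0.0625 -> ~6% naive annualized failure rate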