brian89gp

Posts posted by brian89gp

  1. The free license limits you to 32GB of RAM and 1 processor.  The Essentials and Standard licenses limit you to 32GB per processor.  Only really important if you are buying a multi-socket motherboard, but an important distinction to make (the arithmetic is sketched at the end of this post).

     

    Anybody that has worked with it for a while, including VMware, will still call ESXi by that name; sales and marketing people are always a bit confused and call it both ESX and ESXi.  Version 5.0 is available only as ESXi, so whether ESX or ESXi is used it refers to the same thing.  vSphere is the name they use for a suite of products, usually ESXi and VirtualCenter at a minimum.
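
     

    A minimal sketch of the licensing arithmetic above, assuming the vSphere 5.0-era caps quoted in this post (the license names and numbers simply restate the post; check VMware's current licensing before relying on them):

    ```python
    # Rough sketch of the vSphere 5.0-era RAM limits described above.
    # License names and caps simply restate the post; check VMware's
    # current licensing before relying on them.

    def usable_ram_gb(license_type, sockets, installed_ram_gb):
        """Return how much of the installed RAM the host can actually use."""
        if license_type == "free":
            # Free ESXi: 32GB total and a single populated socket.
            if sockets > 1:
                raise ValueError("free license supports only 1 processor")
            cap = 32
        elif license_type in ("essentials", "standard"):
            # Paid tiers: 32GB per licensed processor.
            cap = 32 * sockets
        else:
            raise ValueError("unknown license type: %s" % license_type)
        return min(installed_ram_gb, cap)

    print(usable_ram_gb("free", 1, 64))      # 32 -> half the RAM goes unused
    print(usable_ram_gb("standard", 2, 64))  # 64 -> the per-socket cap covers it
    ```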

  2. For the purposes of unRAID, are there any functional differences between the M1015 and the BR10i?  I'm guessing that once we flash these to be plain HBAs, they all pretty much perform the same.  Is that correct?  The M1015 seems to be going for $70-80 on eBay and the BR10i can be had for as little as $40.  Not being real familiar with this kind of tech, I'm wondering if there is any reason to spend the extra money on the M1015.

     

    The M1015 supports 6Gb/s SAS and also larger drives.

  3. A much better option would be to run unRAID under Xen.  Xen has much better performance than ESXi.  Network performance of free ESXi seems to be slow.  Xen lets you reassign PCI cards, hard drives, or USB.  I am testing a Server 2008 domain controller on a Xeon 3050 with 1GB assigned and performance is just amazing.  It is strange that boot times and performance are much better than on ESXi with 4GB assigned and an AMD 6-core CPU.

     

    Free ESXi and paid-for ESXi are the same thing and perform the same.  Which version of ESXi are you using?  5.0 does all of those things you mentioned, and 4.1 did as well minus the USB.

     

    Some workloads do run faster on Xen, some on ESXi.  If both are set up properly though then the performance will be pretty similar.

  4. What I'm looking for is a reasonably cost-effective and fast flash drive for these motherboards.

    Did I say "fast"???

     

    Used a generic cheapo thumb drive and a Lexar Firefly, and couldn't tell the difference in boot times for ESXi.  At work there isn't much of a difference between booting from the cheap 2GB HP thumb drive and the onboard 15k SAS drives.

     

    Also, since I'm not familiar with ESXi yet, is it worthwhile to just use an SSD, install to that, and also have it used as a datastore for VMs?

     

    There are times that having ESXi separate from your VMFS stores is a good thing.  The SSD will see constant use and thus wear out faster, while the USB thumb drive is only used when ESXi boots.

     

    Though with that said, I currently have ESXi installed to a thumb drive and will be installing it to an SSD and using the rest as a datastore.  My only reason is that since I am using a Lexar Firefly for ESXi and Lexar Fireflys for unRAID, it is annoyingly difficult to select the right USB device to boot from.  An SSD on a SATA port will make the boot selection process easy because it is the only device on SATA.  I'll use the remainder of the SSD for static files (the non-persistent unRAID vmdk's and ISO images) to keep the wear and tear down.

     

    Do I need a separate flashkey for ESXi?

     

    If you install ESXi to a USB thumb drive, it needs to be its own separate thumb drive used for no other purpose.

     

  5. On the master box couldn't you just run a cable outside the box to the external connector on the CK23601 to get 28 port capability internally?

     

    Also, just to verify...I shouldn't have any problem using the M1015 & CK23601 in the latest v5 beta build of unRaid?

     

    Yes.  But...what would you install the VM guests on if all 24 drives are dedicated to unRAID?

     

    The M1015 has been used quite a lot.  I know JohnM tested the Intel RES2SV240 expander which uses the same chipset as the Chenbro, but I'm not sure if anyone has tested the Chenbro with unRAID yet.

  6. To be honest I prefer XBMC but can't really find a media streamer that I can install XBMC on apart from the Apple TV 2, and I don't really want an Apple TV 2 because it won't do 1080p or the 3D content which I use quite often.

     

    AppleTV 1 with the crystal HD card, full blown XBMC install (NOT the plugin).  Does 1080p without even blinking.

  7. 3) So, if I was looking at virtualizing 2 unRAID servers: I would install (2) M1015's in the "master" unRAID box (4224) and then one CK23601 in the "master" box and one CK23601 in the "slave" box (another 4224)?  And I'd only have to run one cable between them?

     

    Two cables.  You will also need a dual port SFF-8087 to SFF-8088 converter.

    http://www.pc-pitstop.com/sas_cables_adapters/AD8788-2.asp

     

    The CK23601 has 8 external and 28 internal ports.  In the slave 4224 it works out perfectly: 8 external ports in and 24 of the 28 internal ports to drives.  In the master 4224 you will have 8 internal ports in and 20 internal ports out (the port math is tallied in a sketch at the end of this post).  The other 4 bays in your master 4224 can be hooked up to a different controller (onboard SATA if supported will work fine, or a hardware RAID card) for ESX datastores.

     

    The Chenbro 6Gb/s SAS expanders use the LSI 6Gb/s SAS chipset, same as the Intel RES2SV240 expander.  The other chipset SAS expanders, such as the HP expander, have quirks that make them a little less desirable.
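
     

    The port math from the reply above, tallied in a short sketch (the counts simply restate the post):

    ```python
    # Tally of the CK23601 port allocation described above. The numbers come
    # straight from the post: 8 external + 28 internal ports on the expander,
    # 24 drive bays per Norco 4224, 8 lanes from the M1015 (2x SFF-8087).

    EXTERNAL_PORTS = 8
    INTERNAL_PORTS = 28
    BAYS_PER_4224 = 24
    M1015_LANES = 8

    # Master chassis: the M1015 feeds the expander over 8 internal ports and
    # the 8 external ports cable out to the slave, leaving 20 internal ports
    # for drive bays and 4 bays for some other controller.
    master_drive_ports = INTERNAL_PORTS - M1015_LANES          # 20
    master_leftover_bays = BAYS_PER_4224 - master_drive_ports  # 4

    # Slave chassis: fed over the 8 external ports, so all 28 internal ports
    # are free and 24 of them go to the bays.
    slave_drive_ports = min(INTERNAL_PORTS, BAYS_PER_4224)     # 24

    print(master_drive_ports, master_leftover_bays, slave_drive_ports)  # 20 4 24
    ```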

     

  8. 2) As far as i can tell you're not using any of the onboard SATA ports.  If this is the case can you explain why not?

     

    For me, SAS cards support SAS expanders, all 24 drives running full speed off of one card.  That is all the reason I need.

     

    3) I really want to future proof my design as much as possible.  This includes adding on other virtualized unRaid servers when needed.  What SATA cards / expanders do you recommend to make this happen and how would you configure them (with the assumption that I'd just buy another Norco 4224 box when needed)?

    One 8-port SAS card (IBM M1015) can support the maximum number of drives unRAID can handle using a SAS expander.  So, one IBM M1015 SAS card per unRAID VM guest, attached to a slave 4224 with a Chenbro CK23601 SAS expander (or a CK22803 if you only want 20 drives).

    http://usa.chenbro.com/corporatesite/products_cat.php?pos=37

     

    Configuring them would consist of passing your new M1015 card through to your new unRAID guest.  That would be about it.

     

  9. Yes, the M1015 is a hardware RAID card, see here: http://www.redbooks.ibm.com/abstracts/tips0740.html

    A battery is only needed to protect a memory-based cache when power drops, but the M1015 has no cache, thus no battery.

     

    RAID1 write performance isn't that great on the M1015 (or any card that lacks a cache/battery really)

     

    I would suggest the Dell PERC 5/i or 6/i.  The 5/i is a pretty good card that is cheap, has a cheap BBU, and performs pretty well for its price.  The 6/i is better, 6Gb/s, and more expensive.  Anything Dell is almost guaranteed to work out of the box with ESXi.

     

    The P400 isn't too bad.  The only thing I don't like about HP cards in general is they almost always have an "advanced license" of some sort to add extra features.

  10. I currently have 8x 1.5TB 7200rpm drives in my array and am looking at adding a few more.  I am looking at adding 2TB 5400rpm data drives and ideally would like a 3TB 7200rpm parity drive, but due to their current prices I don't want to buy the 3TB yet.  Is it possible to have a 2TB data drive and a 1.5TB parity drive?

  11. 1Gb/s = 128MB/s (the conversion is sketched at the end of this post).

    A 10k or 15k drive might be able to do that sustained.  Is the machine/drive you are copying from able to maintain a sustained read at that rate?

     

    Server grade stuff is usually rated for sustained 100% usage.  I would be more worried about adequate cooling in your case to provide cool air for the OEM CPU heatsink/cooler (and 20 drives) than about whether the OEM heatsink/cooler can keep the CPU cool.
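
     

    The conversion behind the 1Gb/s figure above, as a quick sketch (the 1024-based math matches the post; decimal units give 125MB/s):

    ```python
    # The conversion behind "1Gb/s = 128MB/s" above: divide bits by 8.
    # (Treating 1Gb as 1024Mb like the post does; with decimal units it is
    # 125MB/s, and real-world SMB/NFS throughput will be lower still after
    # protocol overhead.)

    def gbps_to_mb_per_s(gbps):
        megabits_per_s = gbps * 1024   # gigabits -> megabits
        return megabits_per_s / 8      # megabits -> megabytes

    print(gbps_to_mb_per_s(1))   # 128.0
    print(gbps_to_mb_per_s(10))  # 1280.0
    ```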

  12. The standard fare Intel heatsink/fan that is rated for your Xeon CPU is usually more than enough.  The CORSAIR CWCH60 is overkill, but that's just my opinion.

     

    I have a 120GB Vertex 3 I've been using for a VMFS drive that I've been happy with (and I have it on a 3Gb/s port even).

     

    Since you have a SAS-capable card, you might look towards used 10k or 15k SAS drives for your cache.  Cheaper than an SSD, larger, and more than fast enough.  If you are that concerned over a fast cache drive, I would assume that you plan on using it a lot, in which case you might burn through an SSD quicker than you'd want.

     

    I would suggest finding RAM that is on the HCL for your motherboard.  Supermicro boards tend to be picky; other RAM will work but may not be as reliable.

  13. An SSD (even a smallish one) for your main VM will go a long way.  I only have a 60GB SSD in my ESXi build but I only put windows XP on it and it works a treat.  It is 10 times faster running from the SSD than it ever was when running XP on my MacBook Pro.

     

    So if I am planning on installing 4 VMs:

              -- unRAID --> set to 2 GB partition

              -- WinXP (FileZilla box) --> set to 25 GB partition

              -- Win2k3R2 DC (maybe) --> set to 25 GB partition with no pagefile

              -- ClearOS or pfSense firewall distro (assuming I can make it work) --> set to 5 GB

    will a 60 GB SSD work or should I stick with a 250-320 GB SATA drive?

     

    It should.  A SATA drive will work too; the only heavy-IO server you have in that list is the FileZilla box.  (There is a quick capacity tally at the end of this post.)

     

    A 2k3 DC can be done in 10GB if you really wanted.
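
     

    A quick tally of the sizes quoted in the question against a 60GB SSD (the numbers simply restate the post; the VM names are just labels):

    ```python
    # Quick tally of the VM disk sizes listed in the question against a 60GB
    # SSD. The sizes just restate the post; remember ESXi also needs room for
    # VM swap files (roughly equal to unreserved guest RAM) and any snapshots.

    planned_vms_gb = {
        "unRAID boot vmdk":      2,
        "WinXP (FileZilla box)": 25,
        "Win2k3 R2 DC":          25,  # could be squeezed to ~10GB, as noted above
        "ClearOS / pfSense":     5,
    }

    ssd_capacity_gb = 60
    allocated = sum(planned_vms_gb.values())
    print("allocated: %d GB, headroom: %d GB" % (allocated, ssd_capacity_gb - allocated))
    # allocated: 57 GB, headroom: 3 GB -> tight, so thin provisioning or a
    # smaller DC disk buys back some margin.
    ```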

     

     

  14. Something that was mentioned that seemed to generate a bit of interest was the ability to have multiple unRAID servers present as a single share.  Is this something that is really being looked at, going to be looked at after 5 is live, provided as a plugin, or never going to happen?  Just very interested in this as a feature.

     

    You can achieve this already if you are willing to do a little bit of work.  I have it on one of my development machines.

     

    I have two unRAID virtual machines, each using VT-d to access SATA controllers directly.

     

    I have one further VM that mounts the shares from the two unRAID VMs, combines them using a union filesystem (AUFS I think) and then shares that back via samba.

     

    How are writes handled with that setup?  What determines which unRAID server it is written to?  Are there any mirroring features in the union filesystem, such that your ultra important stuff will be mirrored between the two unRAID servers but only presented once?

     

    I do like the idea a lot.  Have one unified filesystem while using more unRAID servers with fewer drives each to keep the parity calc and rebuild times down.  (A rough sketch of how write placement could work follows.)
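
     

    I don't know exactly how that setup places writes, but conceptually a union mount picks one underlying branch per new file according to a create policy.  A rough, hypothetical sketch of a "most free space" policy across the two unRAID mounts (the paths are made up, and AUFS itself does this in the kernel):

    ```python
    # Hypothetical sketch of union-style write placement across two unRAID
    # mounts (the paths are made up). AUFS does this in the kernel with a
    # create policy such as "most free space"; this just illustrates the idea
    # of new files landing on one branch while existing files stay put.
    import os
    import shutil

    BRANCHES = ["/mnt/unraid1/share", "/mnt/unraid2/share"]

    def pick_branch_most_free(branches=BRANCHES):
        """Choose the branch with the most free space for a brand new file."""
        return max(branches, key=lambda path: shutil.disk_usage(path).free)

    def union_open_for_write(relpath):
        """Write an existing file in place on whichever branch already holds
        it; send a new file to the branch chosen by the create policy."""
        for branch in BRANCHES:
            existing = os.path.join(branch, relpath)
            if os.path.exists(existing):
                return open(existing, "r+b")
        target = os.path.join(pick_branch_most_free(), relpath)
        os.makedirs(os.path.dirname(target), exist_ok=True)
        return open(target, "wb")
    ```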

  15. Got it.  Thanks everyone!  So I really should probably have a separate physical drive for every guest that will get regular use, it seems?  So I'll keep the black 1TB for unRAID cache only, use a new 2 TB green drive for vm backups, and get a couple 7200rpm drives for Win7 and another flavor of Linux?  Sound about right?  Guess spending more money was an inevitability!

     

    You will start to notice a performance slowdown after 3-4 average guests on a 7200rpm drive, or 1-2 guests if one is slamming the drive.  For an SSD you will probably be able to run as many guests as you have room for.

  16. Been lurking this thread for a while now.  Finally decided to pull the trigger and ordered up a new mobo, CPU, and RAM to convert my current server into ESXi.  It's going to be a nerdy weekend for me!  Thanks for all the great info!

     

    A few questions that I still have after re-reading this entire thread, if someone wouldn't mind taking some time to answer:

     

    1. I currently use a 1 TB WD Black cache drive for unRAID. I don't think I need all that space anymore and don't want to go SSD quite yet. Can I turn the drive into a data store, reserving half of it for use as the unRAID cache and the rest for other guest VMs?

     

    2. I currently have 1 parity, 7 data, and 1 cache.  Conventional wisdom has all of my mobo SATA ports in use (as it's fastest), with the rest on my MV8.  Assuming I can do #1, the proper way for this configuration in ESXi would be to pass through the MV8 and use it for my 8 drives (minus cache), with the onboard ports going for non-unRAID-exclusive use, right?  And when I want more drives I'm going to need to get a new SAS card?

     

    Thanks! Super excited to join the ranks with you all.

     

    1. Yes, you can use the drive for a datastore and then use some of the datastore as the cache for unRAID.  However, I am going to say that is not "Best Practice".  Best Practice is to pass through controller(s) to the unRAID VM and connect all the drives to those ports.  This prevents using it as a datastore.

     

    2. Yes, but as described by John and above, the onboard ports can be used by unRAID (datastore vmdk or RDM), just not very pretty, not what you are looking for. A second controller is advised, (MV8 or LSI).

     

    1. You will run the risk of bringing every guest on your ESXi server that is on that datastore to a halt every time you write to your unRAID box.

  17. I will start to virtualize my unRAID onto a new ESXi server next week.  Can I use the current flash drive with ESXi already installed on it on the new ESXi server with different hardware?

     

    Basically I will be moving all VMs from my old ESXi server to a new server.

     

    Interesting question. I honestly do not know..

    Since there are two versions, embedded and installable: the embedded (usually an SD card) is server-model specific.

     

    The installable, is.. well.. installed.  I am not sure if it copies everything or just what it needs or if the config is hardware specific.

     

    We usually just reinstall when we change boxes since it only takes a few min to re-import all the guests on the new build.

     

     

    It will work just fine.  ESXi installs all drivers, changing hardware has never been a problem for me as long as you stay on the HCL radar.

     

    The network config, though, binds itself to NICs by MAC address.  When you boot up on a different server you will have to go in through the console and enable the other (new) NICs.

  18. I almost always buy workstation- to server-class hardware and always run everything at stock speeds/voltages, because to me overclocking can only make things less stable.  Now, my "less stable" might be another person's "stable enough".

     

    For my classification of stable, talking servers here because desktop applications throw a wrench in things, I expect my server to be online 100% of the time for 3-4 years with no failures.

     

    So how did you confirm that your hardware is all working ok? You assume it is?

     

    Just because it's server-class hardware doesn't mean you couldn't have got a duff one?

     

     

     

    I assume the CPU and motherboard work if they boot.  I don't know if Supermicro etc have better QA but I have yet to have a problem.  I typically buy open-box specials too...

     

    As for memory, most of these boards use ECC RAM.  If a stick is bad it gets flagged as such and either continues to be used and corrected or is disabled from use entirely.  I.e., if one is bad it tells me, no BSOD here.  Sometimes I run a 1-2 pass memtest, sometimes I don't.

     

    Never had bad RAM bring one of my machines down (I have had a couple bad sticks, since I buy almost 100% used off of eBay) and never had bad RAM bring down one of the servers I deal with at work (currently 60+ servers and 8+ TB of RAM).  ECC is nice in this regard.

     

    As the saying goes, you get what you pay for.

     

     

     

  19. I would appreciate some feedback on what others think of the hardware selection. This is not going to be a cheap build (for home server standards) so I would like to get this right the first time.

    Some areas I would like comment on but not limited to are the following:

    • ESXi datastore options. SSD + hdd vs. 4 x hdd (1 per VM)

    • 16GB memory. Too much? Too little?

    • General overall hardware selection

    • Cooling fans. Do you think this is enough? The use of AC Temperature Controller fans?

     

    I have several VMs running (Plex, SabNZBD/Sickbeard/Couchpotato, MSSQL 2008, Virtual Center appliance, unRAID) on one very old and very tired 400GB 7.2k WD RE2 drive.  I don't push it hard enough to consider anything fancy when it comes to the VMFS stores.  As long as you don't plan on doing a bunch of stuff on every VM at the same time, a single drive will go a long way.  If you do want more performance out of your VMFS store, I would personally go bigger/faster (10k or 15k SAS, or SSD) instead of getting more drives to spread the load across.

     

    With Win7 VMs, 16GB is a good place to start.  Buy it so it is easy to add two more 8GB sticks at a later date.

     

    I would spend a little more and use one M1015 SAS card and then a SAS2 expander.  I think the VMDirectPath limit on passthrough devices is 4 per VM; your 3 M1015 cards would be getting too close to that limit, plus they would tie up most or all of the onboard PCIe slots (a quick count is at the end of this post).  You are presumably buying a large ESXi server for future use, so don't limit yourself from the get-go in ways that require re-working everything in the future.

     

    edit:

    My SabNZBD machine has 2 separate drives to use as complete/incomplete stores.  This keeps the heavy IO off of the main OS drive that is shared by every VM guest.
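
     

    A quick count for the VMDirectPath point above (the 4-device limit is the figure quoted in this post; confirm it against VMware's configuration maximums for your ESXi version):

    ```python
    # Sanity check of the VMDirectPath reasoning above. The 4-devices-per-VM
    # figure is the limit quoted in the post; confirm it against VMware's
    # configuration-maximums document for your ESXi version.

    PASSTHROUGH_LIMIT_PER_VM = 4

    def check_passthrough(devices, limit=PASSTHROUGH_LIMIT_PER_VM):
        print("%d/%d passthrough devices: %s" % (len(devices), limit, ", ".join(devices)))
        if len(devices) >= limit - 1:
            print("  -> little headroom left; one M1015 plus a SAS expander avoids this")

    check_passthrough(["M1015 #1", "M1015 #2", "M1015 #3"])  # three cards, almost at the cap
    check_passthrough(["M1015 #1"])  # the expander hangs off this card, no extra device needed
    ```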