Everything posted by brian89gp

  1. I have 6 of the original Seagate 1.5tb models, used every day for going on 3 years now and not a single problem.
  2. The free license limits you to 32gb of RAM and 1 processor. The Essentials and Standard licenses limit you to 32gb per processor. This only really matters if you are buying a multi-socket motherboard, but it is an important distinction to make. Anybody who has worked with it for a while, including VMware, will still call ESXi by that name; sales and marketing people are always a bit confused and call it both ESX and ESXi. ESX/ESXi 5.0 is available only in the ESXi version, so whether ESX or ESXi is used it refers to the same thing. vSphere is the name they use for a suite of products, usually ESXi and VirtualCenter at a minimum.
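     A minimal sketch of those limits, assuming the figures quoted above (vSphere 5.0-era licensing); the function and tier names are made up for illustration:

     ```python
     # Licensing figures as quoted in the post above, not from current VMware docs.
     LIMITS = {
         "free":       {"max_sockets": 1,    "ram_per_socket_gb": 32},
         "essentials": {"max_sockets": None, "ram_per_socket_gb": 32},
         "standard":   {"max_sockets": None, "ram_per_socket_gb": 32},
     }

     def fits_license(tier, sockets, ram_gb):
         """Hypothetical check: does a host with `sockets` CPUs and `ram_gb` of RAM fit the tier?"""
         lim = LIMITS[tier]
         if lim["max_sockets"] is not None and sockets > lim["max_sockets"]:
             return False
         return ram_gb <= sockets * lim["ram_per_socket_gb"]

     print(fits_license("free", 1, 32))      # True
     print(fits_license("free", 2, 64))      # False: free edition is single socket
     print(fits_license("standard", 2, 64))  # True: 32gb per processor
     ```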
  3. Used a generic cheapo thumb drive and a Lexar Firefly; couldn't tell the difference in boot times for ESXi. At work there isn't much of a difference between booting from the cheap 2gb HP thumb drive and the onboard 15k SAS drives. There are times that having ESXi separate from your VMFS stores is a good thing. The SSD will be used constantly and thus will wear out faster; the USB thumb drive is only used when ESXi is booted. Though with that said, I currently have ESXi installed to a thumb drive and will be installing it to a SSD and using the rest as a datastore. My only reason is that since I am using a Lexar Firefly for ESXi and Lexar Fireflies for unRAID, it is annoyingly difficult to select the right USB device to boot from. A SSD on a SATA port will make the boot selection process easy because it is the only device on SATA. I'll use the remainder of the SSD for static files (the non-persistent unRAID vmdk's and ISO images) to keep the wear and tear down. If you install ESXi to a USB thumb drive, it needs to be its own separate thumb drive used for no other purpose.
  4. Hardly any. Unless you export logs it doesn't even save those through a reboot.
  5. I am going to be buying some 2tb drives; which is the best currently? All are the same price. WD EARX (high rate of DOA, but those are the only bad reviews), Seagate ST2000DL (normal rate of DOA reviews), Hitachi 5k3000 (a lot of reviews mention a high failure rate at 6-8 months).
  6. Yes. But...what would you install the VM guests on if all 24 drives are dedicated to unRAID? The M1015 has been used quite a lot. I know JohnM tested the Intel RES2SV240 expander which uses the same chipset as the Chenbro, but I'm not sure if anyone has tested the Chenbro with unRAID yet.
  7. AppleTV 1 with the Crystal HD card, full-blown XBMC install (NOT the plugin). Does 1080p without even blinking.
  8. Two cables. You will also need a dual port SFF-8087 to SFF-8088 converter. http://www.pc-pitstop.com/sas_cables_adapters/AD8788-2.asp The CK23601 has 8 external and 28 internal ports. In the slave 4224 it works out perfectly, 8 external ports in and 24 of 28 internal ports to drives. In the master 4224 you will have 8 internal ports in and 20 internal ports out. The other 4 bays in your master 4224 can be hooked up to a different controller (onboard SATA if supported will work fine, or a hardware RAID card) for ESX datastores. The Chenbro 6gb SAS expanders use the LSI 6gb SAS chipset, same as the Intel RES2SV240 expander. The other chipset SAS expanders, such as the HP expander, have quirks that make them a little less desirable.
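     A quick sketch of that port math, using the numbers above (8 external + 28 internal ports on the CK23601, 24 bays per 4224); purely illustrative:

     ```python
     EXTERNAL_PORTS = 8
     INTERNAL_PORTS = 28
     BAYS_PER_4224 = 24

     # Slave 4224: the uplink arrives on the external ports, so all 24 bays
     # can hang off internal ports with 4 internal ports to spare.
     slave_drive_ports = min(INTERNAL_PORTS, BAYS_PER_4224)
     print("slave drive ports:", slave_drive_ports)                            # 24

     # Master 4224: the uplink from the M1015 uses 8 internal ports, leaving
     # 20 internal ports for drives; the remaining 4 bays need another controller.
     master_drive_ports = INTERNAL_PORTS - 8
     print("master drive ports:", master_drive_ports)                          # 20
     print("bays on another controller:", BAYS_PER_4224 - master_drive_ports)  # 4
     ```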
  9. For me, SAS cards support SAS expanders: all 24 drives running full speed off of one card. That is all the reason I need. One 8 port SAS card (IBM M1015) can support the maximum number of drives unRAID can handle using a SAS expander. So, one IBM M1015 SAS card per unRAID VM guest, attached to a slave 4224 with a Chenbro CK23601 SAS expander (or CK22803 if you only want 20 drives). http://usa.chenbro.com/corporatesite/products_cat.php?pos=37 Configuring them would consist of passing your new M1015 card through to your new unRAID guest. That would be about it.
  10. RAID1 write performance isn't that great on the M1015 (or any card that lacks a cache/battery, really). I would suggest the Dell PERC 5/i or 6/i. The 5/i is a pretty good card that is cheap, has a cheap BBU, and performs pretty well for its price. The 6/i is better (6gb SAS) and more expensive. Anything Dell is almost guaranteed to work out of the box with ESXi. The P400 isn't too bad. The only thing I don't like about HP cards in general is they almost always have an "advanced license" of some sort to add extra features.
  11. I currently have 8x 1.5tb 7200rpm drives in my array and am looking at adding a few more. I am looking at adding 2tb 5400rpm data drives, and ideally would like a 3tb 7200rpm parity drive, but due to their current prices I don't want to buy the 3tb yet. Is it possible to have a 2tb data drive and a 1.5tb parity drive?
  12. 1Gb/s = 128MB/s. A 10k or 15k drive might be able to do that sustained. Is the machine/drive you are copying from able to maintain a sustained read at that rate? Server grade stuff is usually rated for sustained 100% usage; I would be more worried about adequate cooling in your case to provide cool air for the OEM CPU heatsink/cooler (and 20 drives) than about whether the OEM heatsink/cooler can keep the CPU cool.
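     For reference, the conversion behind that figure (128 assumes binary prefixes; with decimal prefixes it is 125, and real throughput is lower once protocol overhead is counted):

     ```python
     link_gbit = 1
     print(link_gbit * 1024 / 8)   # 128.0 MB/s (binary prefixes, as quoted)
     print(link_gbit * 1000 / 8)   # 125.0 MB/s (decimal prefixes)
     ```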
  13. The standard fare Intel heatsink/fan that is rated for your Xeon CPU is usually more than enough. The CORSAIR CWCH60 is overkill, but that's just my opinion. I have a 120gb Vertex 3 I've been using for a VMFS drive that I've been happy with (and I have it on a 3gb/s port, even). Since you have a SAS capable card, you might look towards used 10k or 15k SAS drives for your cache: cheaper than an SSD, larger, and plenty fast enough. If you are that concerned over a fast cache drive, I assume you plan on using it a lot, in which case you might burn through an SSD quicker than you'd want. I would suggest finding RAM that is on the HCL for your motherboard. Supermicros tend to be picky; other stuff will work but may not be as reliable.
  14. So if I am planning on installing 4 VMs: unRAID set to a 2gb partition, WinXP (FileZilla box) set to a 25gb partition, Win2k3R2 DC (maybe) set to a 25gb partition with no pagefile, and a ClearOS or pfSense firewall distro (assuming I can make it work) set to 5gb. Will a 60gb SSD work or should I stick with a 250-320gb SATA drive? It should. The SATA drive will work too; the only heavy IO server you have in that list is the FileZilla box. A 2k3 DC can be done in 10gb if you really wanted.
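     Summing that plan against the 60gb SSD (sizes as listed above; thin provisioning and VM swap files not counted):

     ```python
     vm_disks_gb = {
         "unRAID": 2,
         "WinXP (FileZilla box)": 25,
         "Win2k3R2 DC": 25,          # could shrink to ~10gb as noted above
         "ClearOS/pfSense": 5,
     }
     total = sum(vm_disks_gb.values())
     print(total, "gb allocated")      # 57 gb allocated
     print(60 - total, "gb headroom")  # 3 gb headroom on a 60gb SSD
     ```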
  15. You can achieve this already if you are willing to do a little bit of work. I have it on one of my development machines. I have two unRAID virtual machines, each using VT-d to access SATA controllers directly. I have one further VM that mounts the shares from the two unRAID VMs, combines them using a union filesystem (AUFS I think), and then shares that back via samba. How are writes handled with that setup? What determines which unRAID server it is written to? Are there any mirroring features in the union filesystem, such that your ultra important stuff will be mirrored between the two unRAID servers but only presented once? I do like the idea a lot: have one unified filesystem while using more unRAID servers with fewer drives each, to keep the parity calc and rebuild times down.
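     As an illustration only, this is roughly how a union filesystem resolves reads across two branches, with the first branch taking precedence; the mount points are hypothetical and this is not the poster's actual AUFS/samba setup:

     ```python
     import os

     BRANCHES = ["/mnt/unraid1", "/mnt/unraid2"]   # hypothetical mounts of the two unRAID shares

     def resolve(relpath):
         """Return the path on the first branch that contains relpath, else None."""
         for branch in BRANCHES:
             candidate = os.path.join(branch, relpath)
             if os.path.exists(candidate):
                 return candidate
         return None

     def merged_listing(relpath=""):
         """Union of directory entries across both branches, duplicates collapsed."""
         names = set()
         for branch in BRANCHES:
             d = os.path.join(branch, relpath)
             if os.path.isdir(d):
                 names.update(os.listdir(d))
         return sorted(names)

     print(resolve("Movies/example.mkv"))
     print(merged_listing("Movies"))
     ```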
  16. You will start to notice a performance slowdown after 3-4 average guests on a 7200rpm drive. 1-2 guests if one is slamming the drive. For a SSD you will probably be able to run as many guests as you have room for.
  17. 1. Yes, you can use the drive for a datastore and then some of the datastore as cache to unRAID. However, I am going to say that is not "Best Practice". Best Practice is to pass the controller(s) through to the unRAID VM and connect all the drives to those ports, which prevents using it as a datastore. 2. Yes, but as described by John and above, the onboard ports can be used by unRAID (datastore vmdk or RDM); it's just not very pretty and not what you are looking for. A second controller is advised (MV8 or LSI). Back to 1: you run the risk of bringing every guest on your ESXi server that is on that datastore to a halt every time you write to your unRAID box.
  18. Did the drive assignments or anything else change when you added the Intel RES2SV240? Or was it plug and play and unRAID was totally unaffected?
  19. Interesting question. I honestly do not know, since there are two versions, embedded and installable. The embedded (usually an SD card) is server-model specific. The installable is, well, installed. I am not sure if it copies everything or just what it needs, or if the config is hardware specific. We usually just reinstall when we change boxes since it only takes a few minutes to re-import all the guests on the new build. It will work just fine. ESXi installs all drivers, and changing hardware has never been a problem for me as long as you stay on the HCL radar. The network config, though, binds itself to NICs by MAC address. When you boot up on a different server you will have to go in through the console and enable the other (new) NICs.
  20. So how did you confirm that your hardware is all working ok? You assume it is? Just because it's server class hardware doesn't mean you couldn't have got a duff one. I assume the CPU and motherboard work if they boot. I don't know if Supermicro etc. have better QA, but I have yet to have a problem, and I typically buy open-box specials too... As for memory, most use ECC RAM. If one is bad it gets flagged as such and either continues to be used and corrected or is disabled from use entirely. i.e., if one is bad it tells me, no BSOD here. Sometimes I run a 1-2 pass memtest, sometimes I don't. Never had bad RAM bring one of my machines down (I have had a couple bad sticks since I buy almost 100% used off of eBay) and never had bad RAM bring down one of the servers I deal with at work (currently 60+ servers and 8+TB of RAM). ECC is nice in this regard. As the saying goes, you get what you pay for.
  21. I have several VMs running (Plex, SabNZBD/Sickbeard/Couchpotato, MSSQL 2008, Virtual Center appliance, unRAID) on one very old and very tired 400gb 7.2k WD RE2 drive. I don't push it hard enough to consider anything fancy when it comes to the VMFS stores. As long as you don't plan on doing a bunch of stuff on every VM at the same time, a single drive will go a long way. If you do want more performance out of your VMFS store, I would personally go bigger/faster (10k or 15k SAS, or SSD) instead of getting more drives to spread the load across. With Win7 VMs, 16gb is a good place to start. Buy it so it is easy to add two more 8gb sticks at a later date. I would spend a little more and use one M1015 SAS card and then a SAS2 expander. I think the VMDirectPath limit on passthrough devices is 4 per VM; your 3 M1015 cards would be getting too close to that limit, plus they would tie up most/all of the onboard PCIe slots. You are presumably buying a large ESXi server for future use, so don't limit yourself from the get-go in ways that require re-working everything in the future. edit: My SabNZBD machine has 2 separate drives to use as complete/incomplete stores. This keeps the heavy IO off of the main OS drive that is shared by every VM guest.
  22. I almost always buy workstation to server class hardware and always run everything at stock speeds/voltages, because to me overclocking can only make things less stable. Now, my "less stable" might be another person's "stable enough". For my classification of stable (talking servers here, because desktop applications throw a wrench in things), I expect my server to be online 100% of the time for 3-4 years with no failures.