JimmyJoe

Everything posted by JimmyJoe

  1. ESXi datastore performance testing - What's a good way to test and measure performance? I currently have 2 datastores:

       M1015 RAID1 - 2x Seagate Barracuda LP 2TB 5900RPM SATA 3Gb/s 32MB cache
       NFS over a 1Gb network to unRAID with a cache disk

     I ran a couple of quick tests using dd like this:

       dd if=/dev/zero bs=1024 count=1000000 of=<datastore>/testfile
       dd if=<datastore>/testfile of=/dev/null bs=1024k

     My quick results for a 1GB file:

       Datastore     Write MB/s   Read MB/s
       Local RAID1       2.9         5.2
       NFS share         5.5        12.2

     I want to test some other datastore configurations like a local drive connected to the MB and an SSD, and I will probably set up ZFS/OI/napp-it with 4 drives. What is a good way to test datastore performance? Is dd good, or is there something better? (A more cache-honest dd variant is sketched below this post.) Thanks!
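     For reference, a more cache-honest variant of that dd test. This is a minimal sketch assuming GNU dd run from a Linux guest against a hypothetical mount point /mnt/datastore1 (ESXi's own busybox dd may not support these flags): conv=fdatasync makes dd flush to disk before reporting write timing, and iflag=direct bypasses the page cache on the read, so the numbers reflect the disk rather than RAM.

       # Write test: 1 GiB in 1 MiB blocks, flushed to disk before timing is reported
       dd if=/dev/zero of=/mnt/datastore1/testfile bs=1M count=1024 conv=fdatasync

       # Read test: bypass the page cache so the disk, not RAM, is measured
       dd if=/mnt/datastore1/testfile of=/dev/null bs=1M iflag=direct

       # Clean up the test file
       rm /mnt/datastore1/testfile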
  2. I'm not backing up my VMs yet. That's on my to-do list. For now my datastore disks are mirrored, so I'm protected from a hard drive failure. I know that's not a backup, and I am not protected from a failure caused by my own stupidity. I'll look into ghettoVCB, thanks. I know SSDs are expensive and can be fairly unreliable. It's the best single upgrade I have ever done to my primary desktop in terms of "Wow... I felt that performance improvement". I am using an Intel X25-M with MLC and have been happy with it for 4+ years so far. Maybe it's overkill for my ESXi server. I'll probably wait a while and see how I like the performance of my VMs. Now I'm reading about using ZFS as a datastore. So many choices. I am really enjoying ESXi so far.
  3. Hmmm... so, as I am sitting here watching Windows 7 SP1 get applied to one of my VMs... I am thinking this would run a lot faster if my datastore were on an SSD. Thought I wasn't going to do that for a while.... Hmmm....
  4. I really like the M1015. I haven't used the MV8 cards, so someone else can speak to those. The MV8s you do need to hack to get them to work with ESXi (see the first page of this thread for details). I flashed 3 M1015s yesterday with P15, two with IT and one with IR. I used the download, USB stick and flash instructions from here, and it really wasn't that hard (the rough command sequence is sketched below this post). I did have to try a few PCs until I found one that worked, but then it was a piece of cake. The MB I used was a Gigabyte P55M-UD4. Granted, it's only been about 36 hours that I have been running all 3 in my system, but not a single issue and rock solid so far, and I have moved around a few TB of data. I would hands down buy them again, just because they worked and gave me no issues. In my unRAID VM I completed a parity sync today and have a parity check running right now with two M1015s passed through. I am getting ~70MB/s with 9 slower drives that are all 4-6 years old. I'm very happy with that. So yes, with the M1015s, if you want to pay ~$100 you are dealing with eBay. I bought mine from this seller and they are refurbished. I couldn't tell the difference from new. Fast shipping and great communication. YMMV. I did need to get high-profile brackets that fit my case. I got my brackets here. You can get M1015s for ~$125-$140 from various retailers too. Yes, you need to flash them. Or you can pay ~$250 and get a new LSI 9211-8i with a full warranty.
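     For reference, the crossflash from guides like the one linked above runs from a DOS boot stick and looks roughly like this. A sketch only, not the exact guide: file names such as sbrempty.bin and 2118it.bin vary by firmware package, and the SAS address placeholder must be replaced with the one printed on the card's sticker.

       rem Wipe the SBR so the card will accept stock LSI firmware
       megarec -writesbr 0 sbrempty.bin
       rem Erase the existing IBM flash (power-cycle back into DOS after this step)
       megarec -cleanflash 0
       rem Flash the IT-mode firmware (use 2118ir.bin for IR mode;
       rem add -b mptsas2.rom only if you want the boot ROM)
       sas2flsh -o -f 2118it.bin
       rem Restore the SAS address from the sticker on the card
       sas2flsh -o -sasadd 500605bXXXXXXXXX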
  5. Me too!!! I might have been fine with an Ivy but went with the Sandy just to be safe, since it's more proven and I just wanted stability. Knock on wood... stable so far! If I ever run out of CPU (or should I say "when I run out of CPU")... maybe I'll buy an Ivy down the road and see what happens. Great advice, I didn't know that. Thanks! I just placed an order to replace my Noctua 120mm fans that aren't cutting it for me:

       3x Panaflo 120x38mm Ultra High Speed
       1x NZXT Sentry Mesh Fan Controller w/ five 30-watt channels

     The Panaflos are 120mm x 38mm, so they are fat, and rated 114.7 CFM @ 2750 RPM, 45.5 dBA, 6.12W, 510mA, 12V DC. They move a lot of air and should have better static pressure than the Noctuas. I went with the controller so I can dial them in and hopefully find a good balance between noise and cooling. We'll see!
  6. Sweet! Congrats! That's really good to hear.
  7. Johnm - Thanks so much for this great thread. I just finished my first ESXi build and have everything up and running with two guests so far, unRAID and Windows 7. Working great. IPMI ROCKS! I followed several of your guides and they helped me a great deal. THANK YOU!
  8. Everything is put together and running great! I updated the original post.
  9. I'm pretty sure the link is wrong and these are 5900 rpm. Seagate isn't saying "officially". The review on Newegg is mine. I did some performance testing of these drives. So far I am very happy with price/performance. http://lime-technology.com/forum/index.php?topic=26140.msg228281#msg228281
  10. Just ran into this problem myself after I upgraded.

     Old Hardware:
       Motherboard - SuperMicro C2SEA - 6 SATA ports
       CPU - Intel Pentium Dual-Core E5200, 2.5 GHz, 2M L2 Cache, 800MHz FSB, LGA775
       Power Supply - CORSAIR 750W TX Series 80 Plus Certified
       Memory - CORSAIR XMS3 4GB (2 x 2GB) 240-Pin DDR3 1333 TW3X4G1333C9
       Controller - Qty 2 - Adaptec 1430SA PCIe x4 - 8 ports (4 each)
       Controller - Qty 1 - SD-SA2PEX-2IR PCIe x1 - 2 ports (Sil3132 chipset)
       Controller - Qty 1 - LSI PCI SATA MegaRAID 150-6 Kit - 6 ports

     New Hardware:
       Motherboard: Supermicro X9SCM-IIF-O, BIOS 2.0a
       CPU: Intel Xeon E3-1220 Sandy Bridge
       Power Supply: CORSAIR HX750
       Memory: 32GB - 4x Super Talent DDR3-1333 8GB ECC Micron
       Controller: 2x IBM M1015 w/P15 IT mode (used for unRAID)
       Controller: 1x IBM M1015 w/P15 IR mode (installed but unused, future use for ESXi)

     Write speeds went down to ~1MB/s for any writes to disks, including dd copies between disks (with no parity). I set mem=4095 and all my writes went back up to normal, ~70MB/s. (A sketch of where that boot parameter goes is below this post.)
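     For anyone who hits this later: on unRAID, mem= is a kernel boot parameter set on the append line of syslinux.cfg on the flash drive. A minimal sketch, assuming a stock unRAID 5 syslinux.cfg (your label and initrd line may differ); 4095M caps usable RAM just under 4GB, which is what works around the slow writes on these 32GB boards:

       label unRAID OS
         kernel bzimage
         append initrd=bzroot mem=4095M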
  11. So I just swapped out a bunch of hardware, and now my drive temps no longer display in MyMain, but the temps display fine in the unRAID Main page. Running 5.0-rc11 with unmenu 1.5 rev 246. Syslog is clean. Parity check speed is good. Any ideas? Thanks! smartctl -a /dev/sd? does report temps (a tidier version of this loop is sketched below this post):

       root@voyagerold:~# for drive in `ls -alF /dev/sd?`; do echo $drive |grep dev; smartctl -a $drive |grep Temp; done
       /dev/sda
       194 Temperature_Celsius     0x0022   116   109   000   Old_age   Always   -   34
       /dev/sdb
       194 Temperature_Celsius     0x0022   116   109   000   Old_age   Always   -   34
       /dev/sdc
       190 Airflow_Temperature_Cel 0x0022   066   059   045   Old_age   Always   -   34 (Lifetime Min/Max 34/34)
       194 Temperature_Celsius     0x0022   034   041   000   Old_age   Always   -   34 (0 11 0 0)
       /dev/sdd
       194 Temperature_Celsius     0x0022   117   108   000   Old_age   Always   -   33
       /dev/sde
       194 Temperature_Celsius     0x0022   112   107   000   Old_age   Always   -   38
       /dev/sdf
       194 Temperature_Celsius     0x0002   157   157   000   Old_age   Always   -   38 (Lifetime Min/Max 13/43)
       plugdev
       /dev/sdg
       Temperature Warning Disabled or Not Supported
       /dev/sdh
       194 Temperature_Celsius     0x0022   114   109   000   Old_age   Always   -   36
       /dev/sdi
       190 Airflow_Temperature_Cel 0x0022   064   060   045   Old_age   Always   -   36 (Lifetime Min/Max 34/36)
       194 Temperature_Celsius     0x0022   036   040   000   Old_age   Always   -   36 (0 9 0 0)
       /dev/sdj
       190 Airflow_Temperature_Cel 0x0022   068   058   000   Old_age   Always   -   32
       194 Temperature_Celsius     0x0022   142   112   000   Old_age   Always   -   32
       /dev/sdk
       190 Airflow_Temperature_Cel 0x0022   071   061   000   Old_age   Always   -   29
       194 Temperature_Celsius     0x0022   151   121   000   Old_age   Always   -   29
       /dev/sdl
       190 Airflow_Temperature_Cel 0x0022   068   062   000   Old_age   Always   -   32
       194 Temperature_Celsius     0x0022   142   124   000   Old_age   Always   -   32
       /dev/sdm
       194 Temperature_Celsius     0x0022   114   108   000   Old_age   Always   -   36
       root@voyagerold:~#

     Old Hardware:
       Motherboard - SuperMicro C2SEA - 6 SATA ports
       CPU - Intel Pentium Dual-Core E5200, 2.5 GHz, 2M L2 Cache, 800MHz FSB, LGA775
       Power Supply - CORSAIR 750W TX Series 80 Plus Certified
       Memory - CORSAIR XMS3 4GB (2 x 2GB) 240-Pin DDR3 1333 TW3X4G1333C9
       Controller - Qty 2 - Adaptec 1430SA PCIe x4 - 8 ports (4 each)
       Controller - Qty 1 - SD-SA2PEX-2IR PCIe x1 - 2 ports (Sil3132 chipset)
       Controller - Qty 1 - LSI PCI SATA MegaRAID 150-6 Kit - 6 ports

     New Hardware:
       Motherboard: Supermicro X9SCM-IIF-O, BIOS 2.0a
       CPU: Intel Xeon E3-1220 Sandy Bridge
       Power Supply: CORSAIR HX750
       Memory: 32GB - 4x Super Talent DDR3-1333 8GB ECC Micron
       Controller: 2x IBM M1015 w/P15 IT mode (used for unRAID)
       Controller: 1x IBM M1015 w/P15 IR mode (installed but unused, future use for ESXi)
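     For reference, a tidier version of that temperature loop. A sketch: globbing /dev/sd? directly avoids the word-splitting that comes from parsing ls -alF, which is also where the stray "plugdev" line in the output above came from.

       # Iterate over the device nodes themselves instead of parsing ls output
       for drive in /dev/sd?; do
         echo "$drive"
         smartctl -a "$drive" | grep -i temp
       done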
  12. Sweet! Tracking numbers show that my controllers and motherboard should be delivered tomorrow!
  13. Rsync is done, several TB copied. Parity Sync and Check completed with no errors. It's running great! Updated first post.
  14. Fans and cables - arrived today. Motherboard - BAH! Yesterday PCNation told me I would have a tracking number last night. Nope. Called again this morning, was told the system would update within an hour and I would get an email. Nope. Called this afternoon, was put on hold and then told they shipped it from a warehouse somewhere to their location in Chicago; they should get it tomorrow and then they will re-ship it out to me. What?!?!? This was to avoid sales tax on the purchase. I would have preferred an option and would gladly have paid sales tax to have them ship it from the warehouse directly to me. Their "free 3-day shipping" turned into 8 days (6 business days) for me. No thank you. I cancelled my order. Superbiiz has the MB back in stock, so I just placed an order. Hopefully they ship today and I get it tomorrow; that would rock!
  15. That's the same problem I had and that fixed it for me too.
  16. I sure wish Superbiiz would have had my MB in stock. I already got my CPU and Memory delivered via UPS Ground, less than 24 hours after I placed the order. I am only a couple hours away from them in CA. As far as the MB, it was ordered at the same time from PCNation and I'm told it will be shipped out today but I probably won't have it until Monday.
  17. Got my M1015s ordered, as well as my cables. Everything is ordered; now I just wait for all my parts to arrive.
  18. I went ahead and ordered my CPU, memory and MB yesterday. Unfortunately Superbiiz went out of stock on the MB just before I went to order, so I ordered the MB from PCNation. I've never ordered from them before; hopefully all goes well.
  19. My preclears all finished successfully, even running seven at a time. The cycle only took about two hours longer than running two at a time, so I'm very pleased with that. For one cycle I am seeing 38-40 hours. I am currently using rsync to copy data from my old array, and it's chugging along at ~70MB/s. This is gonna take a while! I found this post very helpful in quickly setting up rsync (the basic shape of the command is sketched below this post): http://lime-technology.com/forum/index.php?topic=13432.msg127670#msg127670
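     For reference, a disk-to-disk copy between two servers has this general shape. A sketch with hypothetical hostnames and paths, not the exact setup from the linked post; -a preserves ownership and timestamps, and -v lists each file as it is copied.

       # Run on the new server; pulls the contents of disk1 from the old server over SSH.
       # The trailing slash on the source means "contents of disk1", not the directory itself.
       rsync -av root@oldtower:/mnt/disk1/ /mnt/disk1/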
  20. This is what I see if I clear the config: (screenshot attached)

     This is what I see when I add the 3 data disks back and start the array: (screenshot attached)
  21. See this post for the better way to disable the HPA warning for your disks. http://lime-technology.com/forum/index.php?topic=14194.msg134248;topicseen#msg134248

     Yup, I get that, and thanks for all of your hard work and the awesome tool. Sorry, I should have been more clear. I plan on setting hpa_ok to 1 for the drives. I thought it was odd that the disks show up as one size when formatted, and another when unformatted. I also wanted to provide feedback with info on these drives so that can get worked back into a future update; I wasn't sure what the best way to do that was.

     What's really puzzling me is that all the drives are exactly the same size. They were all precleared. unRAID Main, fdisk and hdparm show them all as the same size. MyMain doesn't. I just played around with this a little more and this is what I see in MyMain:

       Parity and data drives show as 3,907,018,532 (same as unRAID Main)
       Unformatted and cache drives show as 3,907,018,552 (different than unRAID Main)

     I can move drives around from data to parity to cache and it stays consistent. For example, when I take the cache drive that shows 3,907,018,552 and move it into a parity or data slot, it changes to 3,907,018,532. If I swap it back, it changes back. Any idea why I am seeing this behavior?

     hdparm shows me that HPA is disabled:

       root@defiant:~# hdparm -N /dev/sda
       /dev/sda:
        max sectors = 7814037168/7814037168, HPA is disabled
       root@defiant:~# hdparm -N /dev/sdb
       /dev/sdb:
        max sectors = 7814037168/7814037168, HPA is disabled
       root@defiant:~# hdparm -N /dev/sdc
       /dev/sdc:
        max sectors = 7814037168/7814037168, HPA is disabled
       root@defiant:~# hdparm -N /dev/sdd
       /dev/sdd:
        max sectors = 7814037168/7814037168, HPA is disabled

     Fdisk output:

       root@defiant:~# fdisk -lu /dev/sda
       WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.
       Disk /dev/sda: 4000.8 GB, 4000787030016 bytes
       256 heads, 63 sectors/track, 484501 cylinders, total 7814037168 sectors
       Units = sectors of 1 * 512 = 512 bytes
       Sector size (logical/physical): 512 bytes / 4096 bytes
       I/O size (minimum/optimal): 4096 bytes / 4096 bytes
       Disk identifier: 0x00000000
          Device Boot      Start         End      Blocks   Id  System
       /dev/sda1               1  4294967295  2147483647+  ee  GPT
       Partition 1 does not start on physical sector boundary.

       root@defiant:~# fdisk -lu /dev/sdb
       WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.
       Disk /dev/sdb: 4000.8 GB, 4000787030016 bytes
       256 heads, 63 sectors/track, 484501 cylinders, total 7814037168 sectors
       Units = sectors of 1 * 512 = 512 bytes
       Sector size (logical/physical): 512 bytes / 4096 bytes
       I/O size (minimum/optimal): 4096 bytes / 4096 bytes
       Disk identifier: 0x00000000
          Device Boot      Start         End      Blocks   Id  System
       /dev/sdb1               1  4294967295  2147483647+  ee  GPT
       Partition 1 does not start on physical sector boundary.

       root@defiant:~# fdisk -lu /dev/sdc
       WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.
       Disk /dev/sdc: 4000.8 GB, 4000787030016 bytes
       256 heads, 63 sectors/track, 484501 cylinders, total 7814037168 sectors
       Units = sectors of 1 * 512 = 512 bytes
       Sector size (logical/physical): 512 bytes / 4096 bytes
       I/O size (minimum/optimal): 4096 bytes / 4096 bytes
       Disk identifier: 0x00000000
          Device Boot      Start         End      Blocks   Id  System
       /dev/sdc1               1  4294967295  2147483647+  ee  GPT
       Partition 1 does not start on physical sector boundary.

       root@defiant:~# fdisk -lu /dev/sdd
       WARNING: GPT (GUID Partition Table) detected on '/dev/sdd'! The util fdisk doesn't support GPT. Use GNU Parted.
       Disk /dev/sdd: 4000.8 GB, 4000787030016 bytes
       256 heads, 63 sectors/track, 484501 cylinders, total 7814037168 sectors
       Units = sectors of 1 * 512 = 512 bytes
       Sector size (logical/physical): 512 bytes / 4096 bytes
       I/O size (minimum/optimal): 4096 bytes / 4096 bytes
       Disk identifier: 0x00000000
          Device Boot      Start         End      Blocks   Id  System
       /dev/sdd1               1  4294967295  2147483647+  ee  GPT
       Partition 1 does not start on physical sector boundary.
  22. I am building out a new system and noticed the HPA warning in MyMain of unMenu for my 4TB drives. This is a fresh install of 5.0-rc11. I was surprised not to see a 4TB entry in MyMain.conf and thought maybe I had an older version. I also thought it was odd that the drive shows up as a different size when formatted and unformatted, and I had to add both sizes to MyMain.conf in order for the HPA warning to go away. Just curious, why does it need both entries?

       #-----------------------
       # Used to check for HPA
       #-----------------------
       SetConstant(ValidPartitionSizes, "MX200G=199148512,200G=195360952,MX250G=245117344,MX300G=293057320,SG300G=293036152,320G=312571192,400G=390711352,640G=625131832, \
       8G=8257000,MX40G=40146592,60G=58615672,WD74G=72613024,80G=78150712,SM120G=117246496,WD120G=117220792,MX160G=160086496,160G=156290872, \
       3T=2930266532,4T=3907018532,4T2=3907018552")

     Here's what I see without the changes to MyMain.conf: (screenshot attached)

     Here's my version of unmenu: (screenshot attached)
  23. My 3rd preclear cycle just finished on 7 of my DM drives. All looks good. One drive has a couple of High Fly Writes. Not sure if I should keep that one or exchange it. Looks like lots of people have seen a few High Fly Writes on Seagate drives and asked the same question... "Is this bad?" I can't find any examples of it actually being a bad thing, so I am just going to keep it. (A quick way to keep watching that attribute is sketched below this post.)

       ============================================================================
       ** Changed attributes in files: /tmp/smart_start_sdc /tmp/smart_finish_sdc
           ATTRIBUTE                 NEW_VAL  OLD_VAL  FAILURE_THRESHOLD  STATUS       RAW_VALUE
           Raw_Read_Error_Rate      =   117      119          6          ok           150353568
           Seek_Error_Rate          =    62      100         30          ok           1590816
           Spin_Retry_Count         =   100      100         97          near_thresh  0
           End-to-End_Error         =   100      100         99          near_thresh  0
           High_Fly_Writes          =    98      100          0          ok           2
           Airflow_Temperature_Cel  =    69       71         45          near_thresh  31
           Temperature_Celsius      =    31       29          0          ok           31
        No SMART attributes are FAILING_NOW

        0 sectors were pending re-allocation before the start of the preclear.
        0 sectors were pending re-allocation after pre-read in cycle 1 of 2.
        0 sectors were pending re-allocation after zero of disk in cycle 1 of 2.
        0 sectors were pending re-allocation after post-read in cycle 1 of 2.
        0 sectors were pending re-allocation after zero of disk in cycle 2 of 2.
        0 sectors are pending re-allocation at the end of the preclear,
          the number of sectors pending re-allocation did not change.
        0 sectors had been re-allocated before the start of the preclear.
        0 sectors are re-allocated at the end of the preclear,
          the number of sectors re-allocated did not change.
       ============================================================================
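     For reference, a quick way to keep an eye on that attribute going forward. A sketch: on Seagate drives High_Fly_Writes is SMART attribute 189 and the raw value is a running count, so what matters is whether it keeps climbing between checks.

       # Print the High_Fly_Writes line for every drive
       for drive in /dev/sd?; do
         echo "$drive: $(smartctl -A "$drive" | grep High_Fly_Writes)"
       done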