fade23

Members
  • Posts: 28
  • Gender: Undisclosed

  1. I'm running all green drives (currently 5 + parity) on an LSI 9211-8i HBA (same chipset as the M1015) with an Intel RES2SV240 expander and have no issues. If you run the HBA in IT mode you bypass the hardware RAID functions entirely, so drive timeouts shouldn't be an issue. I average around 40 MB/s read and 25 MB/s write without a cache drive. I upgraded to ESXi 5 a few weekends ago with no problems.
  2. Has anyone attempted an upgrade to ESXi 5.0 yet? In particular, I'm curious to know whether the "format all unknown disks" behavior is still present, either on a fresh install of 5.0 or on an upgrade from 4.1.
  3. Create the file /config/smb-extra.conf on your flash share (if it does not already exist) and add the following lines:
     map hidden = No
     map system = No
     hide files = /ntuser.*/.AppleDouble/.AppleDesktop/.AppleDB//Network Trash Folder/Temporary Items/TheFindByContentFolder/TheVolumeSettingsFolder/.DS_Store/Icon\r/
     You'll still see the files if you telnet in and ls locally, but this will prevent them from showing up on your SMB exports.
  4. I was getting inconsistent results scripting a clean shutdown of ESXi on power loss from the UPS; I found that if unRAID was set as first to boot, last to shut down, ESXi was initiating the guest powerdown but then powering off before the guest had completely shut down, regardless of the timeout set. I think it would have worked OK if, as you've done, unRAID was the last to boot, first to shut down, but as it was I ended up with unclean shutdowns that resulted in lengthy parity checks on resumption of power. If you need your VMs to shut down in a different order than they start up, you can script your own shutdown sequence to call them one at a time using this Perl library. Here's an example of the shutdown script my Windows 7 VM calls when the UPS reports a power loss:
     perl c:\scripts\esxi\esxi-control.pl --server 192.168.1.2 --username *** --password *** --action shutdown --vmname indra
     perl c:\scripts\esxi\esxi-control.pl --server 192.168.1.2 --username *** --password *** --action shutdown --vmname xbmc
     ping 192.168.99.99 -n 1 -w 120000 > nul
     perl c:\scripts\esxi\esxi-control.pl --server 192.168.1.2 --username *** --password *** --action shutdown --vmname unraid
     ping 192.168.99.99 -n 1 -w 360000 > nul
     perl c:\scripts\esxi\esxi-control.pl --server 192.168.1.2 --username *** --password *** --action host-shutdown
     The ping statements in there are a surrogate for a SLEEP-type command, which Windows batch scripting does not support. The '-w xxxxxx' argument is the number of milliseconds to wait between each command. Using this script I can shut down my VMs in any arbitrary order and then issue the shutdown command to ESXi at the end. The only drawback is that you don't get feedback that the prior VM has completed its shutdown (though the Perl library does support querying the status, so one could probably build that in if it were important). A Python sketch of the same sequence appears after this list.
  5. "REMOVE ALL DRIVES! Remove/Unplug all Hard Disks and Flash Drives from the server! During install, ESXi will erase ALL drives it sees!!" I've seen this mentioned before; what exactly is ESXi doing here? Is it just claiming all the disks as datastores regardless of what's currently on them? Wondering what I'll need to do if and when I upgrade to ESXi 5.
  6. My biggest concern with the free version of ESXi 5.0 will be the 8 GB vRAM limit across all guests.
  7. Quick math gives you about 140 MB/s per device (6 Gbps x 4 channels / 21 drives) with 21 slots all reading at the same time; the arithmetic is worked through after this list. That's if you use a single link; some expander/HBA combos support dual linking to the expander (though obviously you lose ports doing that). Slower than a dedicated port for sure, but if you're running green drives you're more likely to be bottlenecked by your rotational speed than you ever will be by the bus.
  8. Yes, it shows up as a single device, expander and all.
  9. I'm doing exactly that using this tool: http://communities.vmware.com/docs/DOC-11623; I have a CyberPower UPS, not APC, but any should work as long as it supports executing a script when there is a power loss. In my setup, I pass my UPS USB connection through to a Windows 7 VM, which executes the above script to initiate a shutdown on ESXi. At that point it will shut down or suspend all VMs, depending on how you've configured those options in ESXi. Make sure you've installed vmware-tools on unRAID so that it can cleanly unmount the array and shut down, and watch out for your startup/shutdown order and shutdown timeout values (especially if unRAID is the last VM to shut down).
  10. For those of you with BR10i cards passed through, have you tried beta7 with 3 TB drives on them yet? LSI has been a bit cagey on whether the older 1068E-based cards will work with 3 TB drives, but I'm not clear whether that limitation applies only to booting from them or not.
  11. I think the idea is so you can daisy-chain multiple cards in multiple chassis. Not so useful with the 20-disk limit in unRAID, but it allows you to expand to over 128 disks on a single HBA across multiple enclosures.
  12. http://www.scythe-eu.com/en/products/pc-accessory/slot-rafter.html
      http://www.newegg.com/Product/Product.aspx?Item=N82E16817998079&Tpk=SY-MRA25018
      I'm using the latter in my build. The nice thing about that one is that it's a hotswap bay, so I can swap the datastore out without having to open the case.
  13. Keep in mind that though the bandwidth IS shared, in most real-world usage scenarios for unRAID the only time you are accessing all the disks at once is during a parity calculation or rebuild. For day-to-day media usage you're very rarely going to saturate it.
  14. If you are building today you might also want to consider Chenbro's new 36-port expander if you can find it; it uses the same chipset as the Intel one I used but has more ports, so it can fully populate a Norco 4224 from a single-port HBA (or with a dual link). Availability still seems a bit limited though; when I built I was debating waiting for it, but in the end I got impatient. If and when I build slave chassis for additional drives I'll be using one of those.
  15. Yes, I'm definitely planning on going with 3 TB drives from here on out now that they are supported. In fact, one of the main reasons I went with the SAS2008 card is that LSI has stated the 3 Gbps SAS controllers may not support 3 TB drives (though I'm not clear on whether that applies only to booting from them). I have not found any appreciable difference between running the ESXi datastore on a 7200 rpm 2.5" drive vs. a 7200 rpm 3.5" drive. One of the reasons I went with the 2.5" drive is that eventually I will replace it with an SSD, and I can just pop the new drive in via the hotswap bay. Drive temps work fine in 5.0beta6; beta7 supposedly adds spindown support, but I have not upgraded yet.
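
A footnote to post 4 above: here is a minimal Python sketch of the same shutdown sequence, assuming the same esxi-control.pl tool, server address, VM names, and wait times shown in that post (credentials elided just as in the original). time.sleep() stands in for the ping-as-sleep workaround; this is only an illustration of the ordering logic, not the script that was actually used.

    # Hypothetical Python version of the batch sequence in post 4.
    # Assumes esxi-control.pl lives at the same path and takes the same
    # flags shown in the post; credentials are elided (***).
    import subprocess
    import time

    BASE = ["perl", r"c:\scripts\esxi\esxi-control.pl",
            "--server", "192.168.1.2",
            "--username", "***", "--password", "***"]

    def esxi(action, vmname=None):
        """Run one esxi-control.pl command and wait for it to return."""
        cmd = BASE + ["--action", action]
        if vmname:
            cmd += ["--vmname", vmname]
        subprocess.run(cmd, check=True)

    esxi("shutdown", "indra")
    esxi("shutdown", "xbmc")
    time.sleep(120)            # ~2 minutes for the first two guests to power off
    esxi("shutdown", "unraid")
    time.sleep(360)            # unRAID needs longer to unmount the array cleanly
    esxi("host-shutdown")      # finally power off the ESXi host itself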
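
A footnote to post 7: the "about 140 MB/s per device" figure is just the raw link rate split evenly across 21 drives. A quick worked version is below; encoding and protocol overhead on a real SAS link would pull the usable number somewhat lower.

    # Per-drive bandwidth estimate from post 7 (raw line rate, no overhead).
    lanes = 4                  # one SFF-8087 link = 4 SAS lanes
    gbps_per_lane = 6.0        # SAS 2.0 line rate per lane
    drives = 21

    total_mb_s = lanes * gbps_per_lane * 1000 / 8    # 3000 MB/s across the link
    print(round(total_mb_s / drives))                # ~143 MB/s per drive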