fade23

Everything posted by fade23

  1. I'm running all green drives (currently 5 + parity) on an LSI 9211-8i HBA (same chipset as the M1015) with an Intel RES2SV240 expander and have no issues. If you run the HBA in IT mode you will be bypassing any of the hardware RAID functions, so drive timeouts shouldn't be an issue. I average around 40 MB/s read, 25 MB/s write, without a cache drive. I upgraded to ESXi 5 a few weekends ago with no problems.
  2. Has anyone attempted an upgrade to ESXi 5.0 yet? In particular, I'm curious to know if the "format all unknown disks" behavior is still present, either on a fresh install of 5.0 or an upgrade from 4.1.
  3. Create the file /config/smb-extra.conf on your flash share (if it does not already exist) and add the following lines:

     map hidden = No
     map system = No
     hide files = /ntuser.*/.AppleDouble/.AppleDesktop/.AppleDB//Network Trash Folder/Temporary Items/TheFindByContentFolder/TheVolumeSettingsFolder/.DS_Store/Icon\r/

You'll still see the files if you telnet in and ls locally, but this will prevent them from showing up on your SMB exports.
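If you also want to stop clients from creating those files in the first place, Samba's veto files option takes the same slash-separated list; something like the line below should do it (note that vetoed files become completely inaccessible over SMB, not just hidden):

     veto files = /.DS_Store/.AppleDouble/Temporary Items/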
  4. I was getting inconsistent results scripting a clean shutdown of ESXi on power loss from the UPS; I found that if unRAID was set as first to boot, last to shut down, ESXi was initiating the guest powerdown but then powering off before the guest had completely shut down, regardless of the timeout set. I think it would have worked OK if, as you've done, unRAID was the last to boot and first to shut down, but as it was I ended up with unclean shutdowns that resulted in lengthy parity checks when power was restored. If you need your VMs to shut down in a different order than they start up, you can script your own shutdown sequence to call them one at a time using this Perl library. Here's an example of the shutdown script my Windows 7 VM calls when the UPS reports a power loss:

     perl c:\scripts\esxi\esxi-control.pl --server 192.168.1.2 --username *** --password *** --action shutdown --vmname indra
     perl c:\scripts\esxi\esxi-control.pl --server 192.168.1.2 --username *** --password *** --action shutdown --vmname xbmc
     ping 192.168.99.99 -n 1 -w 120000 > nul
     perl c:\scripts\esxi\esxi-control.pl --server 192.168.1.2 --username *** --password *** --action shutdown --vmname unraid
     ping 192.168.99.99 -n 1 -w 360000 > nul
     perl c:\scripts\esxi\esxi-control.pl --server 192.168.1.2 --username *** --password *** --action host-shutdown

The ping statements are a surrogate for a SLEEP-type command, which Windows batch scripting does not support natively; the '-w xxxxxx' argument is the number of milliseconds to wait between commands. Using this script I can shut down my VMs in any arbitrary order, and then issue the shutdown command to ESXi at the end. The only drawback is that you don't have feedback that the prior VM has completed its shutdown (though the Perl library does support querying the status, so one could probably build that in if it were important).
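If your version of Windows ships with timeout.exe (Windows 7 does), a plain delay along these lines should also work in place of the ping trick:

     timeout /t 120 /nobreak > nul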
  5. "REMOVE ALL DRIVES! Remove/Unplug all Hard Disks and Flash Drives from the server! During install, ESXi will erase ALL drives it sees!!" I've seen this mentioned before; what exactly is ESXi doing here? It's just claiming all the disks as datastores regardless of what's currently on them? Wondering what I'll need to do if and when i upgrade to ESXi 5
  6. My biggest concern with the free version of ESXi 5.0 will be the 8 GB vRAM limit across all guests.
  7. Quick math gives you about 140 MB/s per device (6 Gbps x 4 channels / 21 drives) with all 21 slots reading at the same time. That's if you use a single link; some expander/HBA combos will support dual-linking to the expander (though obviously you lose ports doing that). Slower than a dedicated port per drive, for sure, but if you're running green drives you're more likely to be bottlenecked by your rotational speed than you ever will be by the bus.
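Spelling that math out: 4 lanes x 6 Gb/s = 24 Gb/s of uplink bandwidth, and 24 Gb/s / 21 drives ≈ 1.14 Gb/s ≈ 143 MB/s per drive. That's the raw line rate; 8b/10b encoding overhead on 6 Gb/s SAS/SATA knocks the usable figure down somewhat, but the conclusion is the same.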
  8. Yes, it shows up as a single device, expander and all.
  9. I'm doing exactly that using this tool: http://communities.vmware.com/docs/DOC-11623; I have a CyberPower UPS, not APC, but any should work as long as it supports executing a script when there is a power loss. In my setup, I pass my UPS USB connection through to a Windows 7 VM, which executes the above script to initiate a shutdown on ESXi. At that point it will shut down or suspend all VMs depending on how you've configured those options in ESXi. Make sure you've installed vmware-tools on unRAID so that it can cleanly unmount the array and shut down, and watch out for your startup/shutdown order and shutdown timeout values (especially if unRAID is the last VM to shut down).
  10. For those of you with BR10i cards passed through, have you tried beta7 with 3 TB drives on them yet? LSI has been a bit cagey on whether the older 1068E-based cards will work with 3 TB drives, but I'm not clear on whether that is just a limitation on booting from them or not.
  11. I think the idea is so you can daisy chain multiple cards in multiple chassis. Not so useful with the 20 disk limit in unRAID, but it allows you to expand to over 128 disks on a single HBA across multiple enclosures.
  12. http://www.scythe-eu.com/en/products/pc-accessory/slot-rafter.html http://www.newegg.com/Product/Product.aspx?Item=N82E16817998079&Tpk=SY-MRA25018 I'm using the latter link in my build. The nice thing about that one is that it's a hotswap bay, so I can change the datastore out without having to open the case.
  13. Keep in mind that though the bandwidth IS shared, in most real-world usage scenarios for unRAID the only time you are accessing all the disks at once is when you are doing a parity calculation/rebuild. For day-to-day media usage, you're very rarely going to saturate it.
  14. If you are building today you might also want to consider Chenbro's new 36-port expander if you can find it; it uses the same chipset as the Intel one I used, but has more ports, so it can fully populate a Norco 4224 from a single HBA port (or with a dual link). Availability still seems a bit limited, though; when I built, I was debating waiting for it, but in the end I got impatient. If and when I build slave chassis for additional drives, I'll be using one of those.
  15. Yes, I'm definitely planning on going with 3 TB drives from here on out now that they are supported. In fact, one of the main reasons I went with the SAS2008 card is that LSI has stated the 3 Gbps SAS controllers may not support 3 TB drives (though I'm not clear on whether that applies only to booting or not). I have not found any appreciable difference between running the ESXi datastore on a 7200 RPM 2.5" drive vs. a 7200 RPM 3.5" drive. One of the reasons I went with the 2.5" drive is that eventually I will replace it with an SSD, and I can just pop the new drive in via the hotswap bay. Drive temps work fine in 5.0beta6; beta7 supposedly adds spindown support, but I have not upgraded yet.
  16. How are your disks being presented to the VM? VMDirectPath/VT-d passthrough? Raw Device Mapping?
  17. I'm running Windows 7 as a guest alongside unRAID; 7MC will start up OK, but I have not tried it with any capture cards. In theory it should work if you pass the card through. I will say, however, that I have had trouble trying to run XBMC in a guest because it expects direct access to the video card for acceleration; in a VM scenario, ESXi just presents Windows with a basic virtual video adapter. I don't think it's ever even aware of what your actual video hardware is.
  18. I am using a Tyan S5510GM3NR (Socket 1155, C204 chipset, integrated video + IPMI/KVM-over-IP) motherboard with a Xeon E3-1230 to pass through an LSI 9211-8i SAS controller/Intel RES2SV240 expander directly to unRAID via VT-d. Works great, and with the latest beta supports full temps and spindown. Full details of the build are in my UCD thread.
  19. I run Sickbeard/SAB/Deluge under a Gentoo Linux VM in parallel with the unRAID VM (both on the same physical box under ESXi). I have an unRAID share at /mnt/user/Media which is exported via SMB. I also have a cache drive installed, with a non-array directory at /mnt/cache/.Scratch exported via SMB (smb-extra.conf). On Gentoo, these are mounted at /mnt/tank/Media and /mnt/tank/Scratch. Sickbeard points to a series folder at, for example, /mnt/tank/Media/video/series/Firefly; SAB and Deluge use the Scratch share as the work directory, and completed downloads are unpacked to /mnt/tank/Scratch/unsorted.

Now, when Sickbeard processes a new file like /mnt/tank/Scratch/unsorted/Firefly.S01E01/firefly.s01e01.mkv, it will move it to /mnt/tank/Media/video/series/Firefly/. Since the Media share is configured to use the cache drive (the same physical drive the non-array share is on), the end result is that the file has effectively just been moved and renamed on the same disk (an operation that should be almost instantaneous). However, because there are several layers of abstraction from the underlying filesystem at play here, what actually happens is a full copy of the file, which can take some time for larger files.

I understand that part of this is a limitation of the SMB/CIFS protocol, as the mapped shares are not aware that the underlying filesystems are on the same disk. Would NFS do any better in this regard, or will the underlying unRAID driver also force a copy in this scenario, since we're moving files in and out of the array (even though it's actually just to the unprotected cache drive, so parity does not come into play here)?

I realize I could just have SAB unpack directly to the Media share, but then I'm concerned that if the mover script fires while Sickbeard is in the middle of processing, my split levels would not be honored correctly. I also want to keep my Deluge seeds out of the protected array, but still be able to process them through Sickbeard the same way.
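For reference, the Gentoo side is nothing fancy; the two shares are just CIFS mounts, along these lines (the server name 'tower', the share names, and the credentials file are placeholders; adjust to match your own exports):

     //tower/Media    /mnt/tank/Media    cifs  credentials=/root/.smbcreds,iocharset=utf8  0 0
     //tower/Scratch  /mnt/tank/Scratch  cifs  credentials=/root/.smbcreds,iocharset=utf8  0 0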
  20. If you use a Plop CD image ISO, though, you cannot save the settings to automatically boot to USB, can you? What I've done is create a small VMDK and use the install option of Plop to actually install it to that disk; that way I can save the settings and force it to boot directly into unRAID without any user intervention.
  21. I don't think that ESXi currently supports datastores (i.e., your VMDK file) on USB disks.
  22. The motherboard has 3 NICs (1 is shared with IPMI/KVM-over-IP). I'm passing one of those through directly to unRAID, and the other two are shared among the remaining VMs (letting ESXi do the load balancing). I've also added a second virtual NIC to unRAID for communication with the Gentoo VM on a separate subnet, so that traffic never actually goes out to the router. The ESXi configuration itself is pretty standard. Really the only hoops I had to jump through were using Plop Boot Manager on the unRAID VM (since you cannot boot a VM from a passed-through USB drive), and some Perl scripts on the Windows 7 VM (I'll have to dig up the URL for those) to interact with my UPS and initiate a clean powerdown of all of the guest VMs on power failure.
  23. No idea what brand it is. I picked up a bunch of them for $10 each on Craigslist.
  24. This was part of a larger project to consolidate all of my hardware into a single 42U rack-mount server cabinet, as well as virtualize all server duties from multiple machines into a single physical box running ESXi. Though not officially supported, all of the research and experiences in this thread gave me confidence that I could successfully virtualize unRAID alongside multiple other guest operating systems. My goal was maximum incremental expandability, both for unRAID storage and for the ability to add additional virtual machines as needed, without sacrificing performance or flexibility of unRAID itself. To that end, I selected components capable of using VT-d/VMDirectPath to provide direct hardware access to both the drive controller and the NIC for the virtualized unRAID instance. In addition to expansion to 20 data drives in the primary chassis, I can eventually add up to 3 more PCI Express SAS HBAs attached to external chassis in the rack and assign them to additional instances of unRAID, for a total of 60 additional data drives.

OS at time of building: unRAID Server Pro 5.0-beta6a as a guest OS under ESXi 4.1 (on the bleeding edge for the sake of SAS2008 support)
CPU: Intel Xeon E3-1230 Sandy Bridge 3.2 GHz (1 vCPU allocated to unRAID)
Motherboard: TYAN S5510GM3NR
RAM: 16 GB (4 GB x 4) Crucial DDR3 1333 ECC (2 GB allocated to unRAID)
Case: Norco RPC-4224
Drive Cage(s): SYBA SY-MRA2508 expansion-slot cage for the 2.5" ESXi datastore drive
Power Supply: Corsair HX 750 Watt
SAS Expansion Card(s): LSI 9211-8i 6 Gb/s (passed through to unRAID via VMDirectPath) + Intel RES2SV240 SAS expander, for a total of 24 ports
Cables: 6x Norco SFF-8087 to SFF-8087 to the 6 case backplanes; generic SATA to SATA to the internal datastore drive
Fans: 2x Arctic Cooling CF8 PWM 80mm, 3x Scythe SY1225SL12L 120mm
NIC: Intel 82574L (onboard, passed through to unRAID via VMDirectPath) + virtual NIC for exclusive communication with the other VMs on the same machine
ESXi Datastore Drive: WD Scorpio 2.5" 500 GB 7200 RPM
Parity Drive: Seagate 5900 RPM 2 TB
Data Drives: 3x Seagate 5900 RPM + 2x Hitachi 5400 RPM, all 2 TB
Cache Drive: Hitachi 500 GB 7200 RPM
Total Drive Capacity: 10 TB, expandable to 40 TB in the primary chassis, and 160 TB via additional unRAID instances virtualized on the same ESXi installation, given current 2 TB drives
Primary Use: Media serving to XBMC
Add Ons Used: unMenu, VMware Tools
Other VMs on the machine: Windows 7 Enterprise (runs Homeseer home automation, Plex Media Server, and CyberPower Powercenter for UPS management and shutdown/resume automation); Gentoo Linux (runs MySQL for the XBMC library, LDAP, MediaTomb DLNA server, SABnzbd, Sickbeard, Couch Potato, and Deluge); PBX in a Flash (CentOS) Asterisk VoIP server; XBMC Live for central library management

Unassembled components: I used some epoxy to custom-mount the SAS expander card directly to the fan wall of the case. Since the card can be powered by Molex, I did not want to waste a PCIe slot. This also allowed me to use shorter SAS cables.

Fully assembled and ready to move into the rack.

In its final home in the rack along with the UPS (CyberPower OR1500LCDRM2U, 1500 VA / 900 Watt), patch panel, and 24-port gigabit switch (Netgear GS724T-300NAS).

Low-light shot of the same to emphasize the LEDs.

I used a USB label printer with white-on-clear laminated label tape to label each drive bay with the last four digits of the drive's serial number.

The full rack. In addition to the equipment mentioned above, it is also home to my primary desktop PC, an older (and former NAS) box running pfSense for router duties, a testbed box, and a Yamaha AVR.
  25. For those who tested, did anyone try it with a SAS expander in the mix?