Everything posted by brian89gp

  1. Technically, QoS as it is used in the business world is not the QoS you know from home use. If you look for QoS on managed switches you are looking at the business implementation; home-use QoS is more similar to firewall rate limiting. QoS on your switches will do next to nothing unless you also have a router that understands QoS-tagged packets, and cheap ones do not do QoS in this manner. Buy a good unmanaged gigabit switch (8-port HP for $150). If you are indeed pushing enough bandwidth to worry about QoS then you will kill any cheapo switch anyway: the cheap ones have small or no buffers, and once you start loading up more than 1-2 ports with fast/bursty traffic you start getting buffer overflows. If you are not pushing that much data and only want QoS 'just because', then at least the more expensive switch will last a lot longer. YouTube and online games are typically low bandwidth compared to the size of internet circuits today. How big is your internet connection?
  2. Unless I'm mistaken, Tom is still using the Slackware 13.1 development tools for the v5 beta builds, so this wiki topic should guide the kernel recompilation: http://lime-technology.com/wiki/index.php?title=Installing_VirtualBox_in_unRAID The goal is to install the development tools and the kernel source, apply the existing kernel's ".config" options to the source, and then build the headers. After that you can add drivers with the "make menuconfig" tool; a rough sketch of the steps is below. Note that the scripts at the end of that topic are outdated, so some links need to be updated. I'll go that route once it is out of beta, too much work for me to keep up with the versions before then.
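     Roughly, the steps in that wiki boil down to the following. This is only a sketch of the general flow; it assumes the dev tools and the matching kernel source are already installed, and that the running kernel exposes its config at /proc/config.gz (if it doesn't, you have to copy the matching .config in by hand):

         cd /usr/src/linux
         # seed the build with the running kernel's configuration
         zcat /proc/config.gz > .config
         make oldconfig       # answer prompts for any options new to this source tree
         make menuconfig      # enable the extra drivers you want (built in or as modules)
         make bzImage         # kernel image ends up in arch/x86/boot/bzImage
         make modules && make modules_install   # only needed if you enabled any modules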
  3. An easier method might be a custom kernel with VMXNET3 support compiled in. I'm not sure of the exact kernel version, but the driver has been part of the mainline Linux kernel for a while now.
  4. Have a SageTV HD-200 with power cord and remote for sale. $160 shipped via USPS Priority to the USA.
  5. Hynix. Thanks to being heavily used in the OEM server market (Dell, HP...), they can be had used pretty cheaply on eBay. Never had one go bad, and they are almost always on the workstation/server-class motherboard approved memory lists. Where I work the memory failure rate over a 3-year span is around 1 stick out of 100, and each time it was DOA. Have yet to have one fail while in use (and I am talking 10+ TB total worth of RAM here).
  6. Have WD cross-ship your drive. Guaranteed to get a different one that way.
  7. Thanks Johnm. Found this link after my post: http://support.dell.com/support/edocs/storage/Storlink/H200/en/UG/HTML/features.htm I intend to use it to attach external disk arrays without using up excessive PCI slots in the case for SFF-8087 to SFF-8088 converters, so external-only would be a good thing.
  8. Has anybody played around with a Dell D687J HBA yet? What little I can find says it is an H200 card, which in its RAID form is a rebranded LSI SAS2008 chipset card. Wondering if it could be a possible replacement for the LSI 9200-8e.
  9. Could I convince anyone here to recompile the 3.0.3 kernel (beta 12) with the only change from the unRAID build being the inclusion of the vmxnet3 NIC driver? It has been included in kernel versions 2.6.33 and later; the relevant config option is shown below.
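     For reference, to my understanding the only kernel config change needed is a single option (it lives under Device Drivers -> Network device support in menuconfig); something like:

         grep VMXNET3 .config     # check the current setting in the kernel source tree
         # the line you want to end up with (=m would build it as a loadable module instead):
         #   CONFIG_VMXNET3=y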
  10. You should try what the sabnzbd developers themselves suggest: putting the incomplete and complete directories on different drives. Fewer drives, less chance of total loss, and it will probably run at the same speed or better than a 4-disk RAID 0. I am using very old and very used 400GB SATA drives for the incomplete and complete disks, and an unrar of 12GB usually takes under 3 minutes, even while downloading something else at 50mb/s. Also, I am running this inside of ESXi with a single vCPU and 1GB of RAM (a single core off of an E5530 2.4GHz processor). Load average is rarely above 1.5.
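     For reference, the two folders are set under Config -> Folders in the sabnzbd web UI, which writes them to sabnzbd.ini; the paths below are just an example of pointing them at two different disks:

         [misc]
         # temporary/incomplete downloads land on one disk...
         download_dir = /mnt/scratch1/incomplete
         # ...while the post-processing/unrar output goes to a different disk
         complete_dir = /mnt/scratch2/complete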
  11. Found a con. It will let people with the click-happy disease (me) attach the vmdk as a disk in unRAID and then format it. If you suffer from this too, it might be wise to set the vmdk as independent/non-persistent; an example of the setting is below.
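     If anyone wants to do that, the setting can be made in the VM's disk options or directly in the .vmx file; assuming the boot vmdk sits at scsi0:0 (adjust to wherever yours is actually attached, and the file name is just an example), the lines would look roughly like:

         scsi0:0.fileName = "unraid-boot.vmdk"
         scsi0:0.mode = "independent-nonpersistent"
         # non-persistent discards any writes (like an accidental format) at power-off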
  12. USB to ethernet bridges work pretty well, especially if you are running a Windows guest OS.
  13. Using the vmDirectPath method or the USB device method? I am using the USB device method and have not tried the vmDirectPath way (though with the quick boot times, there is little reason to do it that way). I also used the name BOOT; don't know if it matters or not, but maybe worth a try. Config changes have stuck through several reboots for me, and also through alternating boots between beta 11 and 12. Installing packages in unMenu and modifying the go script also stick. I was trying to break it and haven't been able to yet. They have an xHCI mode now for the USB device, but it is still rather slow. USB passthrough was designed for USB software keys and whatnot, so VMware didn't focus on speed, and that is also probably why (guessing) the VM BIOS doesn't support booting from USB.
  14. Did you rename your thumb drive to something other than "UNRAID" before you used WinImage on it?
  15. Not at all, the vmdk will be used only to boot the kernel (bzimage file) and attach the ramfs (bzroot). After that, the system will mount the drive with "UNRAID" label (i.e. the flash drive) at the "/boot" path, and will only use the configuration files from there. Yep. All the packages I have installed are located on the flash drive (unmenu, openssl, openssh, etc).
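     In other words, once bzroot is up the startup scripts just mount whatever device carries the UNRAID label; conceptually it is the equivalent of this (shown only to illustrate the idea, the actual unRAID script may differ):

         mount -L UNRAID /boot    # find the partition labelled UNRAID and mount it
         # from here on, /boot/config (license key, super.dat, go script, packages)
         # is read from the flash drive, not from the vmdk the kernel booted from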
  16. How much money do you want to spend? You will have a low WAF with RDP, so that would leave PCoIP. For PCoIP, AMD has a vmDirectPath-capable video card with PCoIP output for around $500-600. Then for the user side, a standalone PCoIP client is around $350 (EVGA has a new one that is "cute"), and Samsung has two monitors with the client built into the back for about $100 more. I run a 400-user VMware View (software-driven PCoIP) install at work and use one of the Samsung NC240 zero clients. Cool stuff if you are willing to pay for it. USB is capped at 1.1, so keep that in mind. As far as quality goes, the NC240 monitor is a 23" widescreen at 1920x?? resolution. I have watched a 1080p video on it with no problems, and that was on the software PCoIP (VMware View), so the hardware-driven one would be even better. PCoIP was built for this type of thing. www.Teradici.com VMware just licenses the PCoIP tech; it existed as a direct 1-to-1 mapped client-to-workstation solution long before VMware started using it.
  17. A general tip for anyone running unRAID in ESXi (4.1 or 5.0) if you are looking for a very fast boot-up. Make a .vmdk disk image of your thumb drive using WinImage. I used a spare (non-licensed) 2GB flash drive I had laying around, deleted the config directory, and created the image from it (change the name after the "make bootable" part to something other than UNRAID). Upload it to your ESXi server and attach it to your unRAID VM. Still do the USB passthrough like you normally would. What happens is that ESXi boots from the vmdk image (very fast), and sometime during the boot unRAID mounts any flash drive with the name "UNRAID" and reads the config/license data from it.
     Pros:
     1. Boots from the local HDD (less than 10 seconds on mine).
     2. Config and license are still stored on the thumb drive.
     3. Can do away with the plop boot manager/CD.
     4. Never have to remove the thumb drive and attach it to a Windows machine again. All "updating" is done on a spare thumb drive and the images created from it.
     5. Can have a boot vmdk of every version sitting on your server. Booting a different version is as simple as attaching a different vmdk to the VM guest.
     Cons:
     1. Haven't found any yet.
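     One note if you try this: a vmdk built with WinImage on a Windows box is, I believe, in the hosted/workstation format, and ESXi generally wants it converted to its native format before it will boot from it. From an SSH session on the ESXi host that is a one-liner (datastore and file names here are just examples):

         vmkfstools -i /vmfs/volumes/datastore1/uploaded/unraid.vmdk \
                       /vmfs/volumes/datastore1/unraid-vm/unraid-boot.vmdk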
  18. If it does see a RAID SSD array as "just another drive", there are ways to fool ESXi into thinking that it (or any volume) is SSD: http://www.virtuallyghetto.com/2011/07/how-to-trick-esxi-5-in-seeing-ssd.html There are discussions of using this feature with Violin SSD arrays, so there may be some intelligence in the check ESXi performs on a drive to determine whether it is an SSD or not; the gist of the trick is below.
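     Going from memory of that article, the trick is an esxcli claim rule that tags the device with the enable_ssd option; roughly like the below, where the device identifier is just a placeholder for your own (the exact commands are in the link):

         esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL \
             --device mpx.vmhba1:C0:T1:L0 --option "enable_ssd"
         # reclaim the device so the new claim rule takes effect
         esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T1:L0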
  19. Only if your VMs are swapping. Otherwise it would go unused. It could be a cheap and easy way to get a couple dozen low-CPU-usage VMs running at the same time on a small amount of RAM, though. None, unless you are affected by the 2TB-minus-512-bytes limit of VMFS3.
  20. "you installed it to a thumbdrive? what about the perf for the different swap files from esx. I know they are not much.. but still..." ESXi boots to RAM. It's not like Windows or other OSes; it was built and meant to be run from a flash device. The only thing you gain from installing it to an HDD is that it boots a little faster, but that is it. Keep in mind, this is an enterprise-class OS that VMware just so happens to release for free (no doubt only to compete with the free Hyper-V...).
  21. Norco 4224 Thread

     The downside to having multiple NICs manually split up is that all VM-to-VM traffic on the same vSwitch stays on that vSwitch. If you have two NICs each on their own vSwitch (but still on the same network), or vmDirectPath one NIC through to one of the VMs, then any traffic between the other VMs and the one with the dedicated NIC/vSwitch will have to leave the vSwitch and go over the wired network/switch. If instead you have two NICs attached to the same vSwitch, ESXi will balance across them and inter-VM traffic will stay on the server. Traffic for one VM will go out one NIC; ESXi does not bond the NICs unless you also do it on the switch side. Staying on the vSwitch and not exiting the ESXi server is important because of the VMXNET3 NIC (10Gb). If all your VMs have the VMXNET3 NIC then all VM-to-VM traffic runs at 10GbE speeds. Stay on the vSwitch at 10Gb, or dedicate a NIC/vSwitch and use your super-fast 1Gb switch...? (10Gb/s = 1280MB/s, 1Gb/s = 128MB/s.) No-brainer for me. 90% of everything is on ESXi; the only things that are not are workstations and media players, and a single 1Gb/s NIC is more than enough for those.
  22. Created .bat files to make it easier, based on the files from the first post. Will create one for the MegaRAID when the download link starts working again. http://206.126.96.32/sas1068e.zip http://206.126.96.32/sas2008.zip
  23. I was able to set it up through VMDirectPath. It shows 8 USB devices for my motherboard (X8DTH-6F): 6 UHCI and 2 EHCI. I set VMDirectPath on half of them (3 UHCI and 1 EHCI) since they all seemed to be attached to the same hub and group of ports (a keyboard attached to that same USB port was passed through to the guest OS no matter which of the 4 PCI devices I set to pass through). I then passed through the EHCI PCI device and it boots fine. I am having problems with the fastpath fail state, but that is a problem with ESXi 4.0 that was fixed in 4.1. It is important that you set all USB PCI devices that can see a particular USB port for VMDirectPath. If you do not, ESXi and the guest OS will fight for ownership and you get into the fastpath fail-state problems; it completely freezes 4.0, and while the issue was fixed in 4.1 it can still cause problems in the guest OS. Maybe someone else can explain how the USB devices work and how one physical USB port seems to be on any of 3 different UHCI ports while several physical USB ports seem to be on the same EHCI port. It almost seems like the UHCI and EHCI devices are buses, not physical ports. EHCI = USB 2.0, so that makes sense: two USB 2.0 buses per motherboard, with each bus servicing 4 physical ports. TIP: Install a temporary ESXi install onto a SATA drive before you start messing around with VMDirectPath of USB devices. If you install ESXi onto a flash drive and then set that port to VMDirectPath, bad things will happen and the only way to recover is to reinstall. TIP2: Don't use ES Xeon 5500 processors. VMware removed support for them in later versions; 4.0.0 is the latest I can run without getting a PSOD on boot. People with later revisions of ES processors report they can run 4.0 U1 but not any newer versions. Live and learn.
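     If anyone wants to see how their controllers map out before passing anything through, the easiest check (assuming the lspci tool is available on the console) is:

         lspci | grep -i usb
         # typically shows several UHCI (USB 1.1) controllers plus one or two EHCI
         # (USB 2.0) controllers; each controller is a bus fronting several physical
         # ports, which is why one port can show up under more than one PCI device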
  24. 10GB NICs

     I have used the Intel X520 dual-port adapters in both mezzanine card and PCIe card formats. Probably 50-60 currently in use. Solid card, no complaints. The Emulex 10GbE CNAs are also good cards (though more pricey). Use twinax (sometimes called CU or copper): about $120 per cable total, versus $60 for a 10GbE aqua fiber cable plus at least $600 for the optical transceiver at each end. What type of motherboard are you going to be using to drive this? The Intel X520 cards are PCIe 2.0 x8. What type of switch is on the other end? If you are planning on actually pushing that much traffic, many cheaper 10GbE switches will start to suffer from buffer overruns.
  25. If hardware is supported in a previous version it is almost always supported in a later version. That has held true for 3.0, 3.5, 4.0, and 4.1 so far. Most compatibility problems come from using consumer-grade motherboards, storage, and network cards, and from trying to shove non-server hardware into it (TV tuner cards). The motherboard he used is server grade, the NIC is Intel, and the storage is industry-standard LSI. Chances are it will work. (I work with ESX for a living.)