
Posts posted by brian89gp

  1. Is it possible to add an existing datastore disk, which holds 2 VMs, as a datastore to a newly installed ESXi server?

     

    Yes, as long as you didn't somehow reformat it in between.  Go through the "Add Storage" wizard just like you would if you were creating a new datastore; somewhere along the line you will have an option to add an existing VMFS datastore.

     

    ESXi 5.0 uses VMFS5 by default but can still use VMFS3.  ESX 4.x uses VMFS3 and cannot read VMFS5.  This might come into play depending on which version you were on and which one you are going to.

  2. Can you pretty please compile the kernel with the PPTP/PPP modules?  .. :)

     

    What exactly do I need to configure in the kernel?  Do you want slackware's ppp package as well?

     

    If you are taking requests of kernel modules, having the vmxnet3 NIC driver would be nice.

     

    Added it for -rc2.

     

    Was this in response to the PPP addition or the vmxnet3 addition? If the vmxnet3 (very nice touch), what version and from which supplier?

     

    Somewhere in the minor versions of the 2.6 kernel they added a native vmxnet3 driver.

  3. When virtualizing unRAID, will unRAID still boot from the USB stick or from the datastore?

    Either.  Datastore is much much faster.

     

    Where do you put the key, as it is bound to the USB stick?

    It always stays on the USB stick.

     

    Are all unRAID disks RDM'd?

    Any disk you present to unRAID will work, whether you use a VMDK file on a datastore, an RDM, or pass through an entire card.

     

    Can the unRAID disks stay connected to the SATA ports on the motherboard, or do I need an HBA (M1015)?

    Yes, they can.  You would have to either pass through the entire SATA controller (make sure your ESXi datastore is on a different controller) or RDM the individual disks.

     

    That said, the M1015 is a small price to pay for the performance it gets you and the simplicity of passing through just one card.

     

  4. How can I find out which SAS expanders are compatible with unRAID?

     

    The HP SAS expander (PMC chipset) works.

    The Intel RES2SV240 (LSI 2x24 chipset) works.

     

    I would like to find out what other chipsets work.

     

    Any help?

     

    I do believe that SAS expanders are invisible: as long as your SAS card and the expander chipset work together, then at the OS level the only difference you see is that your SAS card has more SAS ports on it.

     

    That aside, I prefer the 6Gb LSI SAS expander chipsets (which include the Intel RES2SV240).

  5. It is interesting that it started to happen after you physically moved it.  Bumped and dislodged something, perhaps?

     

    I would put the unRAID server onto the virtual switch and drop the NIC passthrough; there is no real reason for it.  Back before we made the switch to 10Gb at work we were running over 40 VMs out of a single 1Gb NIC (with a 100Mb NIC as a passive backup), so chances are you won't notice a difference between the vSwitch NIC and a dedicated NIC.  ESXi has an internal switch, so any VM-to-VM traffic stays inside the ESX server on the virtual switch.  When you pass through a NIC to a VM you bypass this nice little feature.

     

    I would do the above regardless.  As for the rest, start eliminating possibilities.  Shut down all other VMs and see if it still happens.  Remove the extra NIC and see if it still happens.  Remove the USB 3.0 passthrough and see if it still happens.

     

     

  6. I wouldn't necessarily assume that, just because there is no "beta" tag in the SnapRAID versioning system, it is more reliable than an unRAID beta.  Look at the release notes for the 1.9 release.  It's great that it "improves the chances of recovering data after an aborted 'sync' ", but think about what that implies: you can lose data with an aborted sync?

     

    I have looked at SnapRAID too, for the same type of storage requirements you are referring to.  It is an interesting concept but I think it is still in its beginning stages.  6-12 months from now, who knows.  If the concept delivers then it might be a serious contender for a lot of products.  The hashing for data integrity has my eye for sure.

     

    I have been running 12a inside of ESXi, and it has been hard power cycled more than a dozen times (CPU overheating and locking up ESXi).  Each time, unRAID came back with no errors and no data loss.

  7. Several years ago I bought 9x 400GB WD RE2 hard drives over a few different orders.  A couple failed within the first few days or were DOA, but the rest ran fine for over 4 years.  Then one failed, the next week two more failed, and by the end of the 3rd week I only had 1 that was still working.

     

    Moral of the story?  Well, it depends on how much you value your data.  Hard drives will fail; the more you value your data, the more preemptive you should be about replacing them.

     

    If mine get any serious errors they get replaced.  If they are nearing the end of their warranty period, they get replaced (because somehow the drive just knows the warranty is about to expire....)

  8. Just works for me.

     

    I use an Apple TV gen 1 with a Broadcom Crystal HD card.  The distro (I forget the name) is built for the ATV and is generally updated once a month; when I reboot, it will pull down any XBMC and distro updates automatically.  It plays 1080p without a problem.  My only complaint is that the CPU in the thing is pretty small, so you can't use an overly fancy skin unless you like sluggishness.

     

    As far as the media cataloguing capabilities of XBMC, it will do it, but it is nowhere near what Plex will do.

     

    I have been contemplating getting a gen 2 Apple TV to play Netflix, Hulu, and iTunes and to run the Plex plugin, then using the gen 1 Apple TV to play the uncatalogued folder and anything else Plex throws its arms up about.

  9. I was running a pre-clear of 3 drives at once, each on its own channel.  unRAID eventually became unusable about 2 days in and I had to reboot it; even canceling the pre-clear tasks didn't clear up the issues.  It doesn't seem to be a bandwidth usage thing, more a problem that comes up after a certain amount of time.  If I had to guess, I would say that is what this is.

  10. Space is measured in MB (megabytes); network transfer is measured in Mb (megabits).  Though nobody can seem to keep them straight, and people regularly swap their uses and the b/B.

     

    800Mb/s = 100MB/s

     

    Win 7 and the like, with support for larger TCP window sizes, plus better NICs and jumbo frames, will sometimes get you above 800Mb/s.  The theoretical maximum is close to 900Mb/s, or 113MB/s.

     

    Keep in mind the actual line usage will be 100%, but 10-20% of that is protocol overhead.
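
    A quick sketch of the unit math above (the 10% overhead figure below is an assumed ballpark for Ethernet/IP/TCP framing, not a measured value):

# Rough gigabit math: network megabits vs. storage megabytes, minus overhead.
# The overhead fraction is an assumed ballpark, not a measurement.
LINE_RATE_MBPS = 1000            # gigabit line rate, in Mb/s
PROTOCOL_OVERHEAD = 0.10         # assume ~10% lost to Ethernet/IP/TCP framing

def mbps_to_MBps(megabits_per_second):
    """Convert network Mb/s to storage MB/s."""
    return megabits_per_second / 8.0

payload_mbps = LINE_RATE_MBPS * (1 - PROTOCOL_OVERHEAD)   # roughly 900 Mb/s of payload
print(f"{payload_mbps:.0f} Mb/s of payload is about {mbps_to_MBps(payload_mbps):.0f} MB/s")
print(f"800 Mb/s is {mbps_to_MBps(800):.0f} MB/s")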

     

     

  11. In the real world it usually caps out at around 800 Mb/s when using TCP on a low-latency LAN; I don't know about the theoretical maximum.  At speeds that high, things such as latency (even 0.2ms) and the TCP window size come into play in a big way.
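
    To put rough numbers on that: a single TCP stream cannot exceed the window size divided by the round-trip time.  A small sketch, where the window values are assumptions (17,520 bytes is a typical XP-era unscaled receive window, 65,536 bytes the classic unscaled maximum):

# Bandwidth-delay product: a single TCP stream tops out at window / RTT.
# Window sizes are illustrative assumptions (17,520 bytes is a typical XP-era
# unscaled receive window; 65,536 bytes is the classic unscaled maximum).
def tcp_cap_mbps(window_bytes, rtt_ms):
    """Upper bound on one TCP stream, in Mb/s."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1_000_000

for window in (17_520, 65_536):
    for rtt in (0.2, 1.0):
        print(f"window {window:>6} B, RTT {rtt} ms -> cap ~{tcp_cap_mbps(window, rtt):,.0f} Mb/s")

    Window scaling, which newer OSes enable by default, is what lifts that ceiling and lets them push past the high-700s on the same LAN.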

  12. Adding and removing NICs from a vSwitch is very easy (like 5-seconds easy) and can be done live.  You don't bond the ports with LACP or EtherChannel or anything; you just put both into the same vSwitch.  ESX will load balance the guests across all available NICs (guests 1, 2, and 4 on NIC1 and guests 3 and 5 on NIC2); there is a rough sketch of this placement at the end of this post.

     

    If I go for a minimal Windows install I usually go for XP SP2 or SP3.  I can usually have it running in 5GB pretty easily with a statically set 256MB pagefile.  A lot of the things I do will compile/modify/change data that is saved elsewhere while the application itself stays 100% static and unchanging, so I save the data elsewhere and set the VMDK for the guest to be non-persistent.  As soon as it is powered off, all changes to the disk are dropped and it is back the way it was.  You can also thin provision everything so that the guests only ever consume the space they actually use.

     

    I have an MS SQL server, Plex, the assortment of Linux apps you mentioned, two unRAID machines, and two other Windows servers all running on a 60GB SSD.
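
    As mentioned above, here is a toy sketch of how the vSwitch spreads guests across uplinks.  It is modeled loosely on the default "route based on originating virtual port ID" behavior; the VM names and port numbers are made up for illustration:

# Toy model of per-VM uplink placement on a vSwitch with two physical NICs.
# Placement is per virtual NIC, not per packet, so any single guest is limited
# to one physical NIC's bandwidth. VM names and port IDs are invented examples.
uplinks = ["vmnic0", "vmnic1"]
vm_ports = {"unRAID": 1, "MythTV": 2, "SQL": 3, "Plex": 4, "Win7": 5}

placement = {vm: uplinks[port % len(uplinks)] for vm, port in vm_ports.items()}
for vm, nic in sorted(placement.items()):
    print(f"{vm:8s} -> {nic}")

    The point is that the balancing is coarse (per guest, not per packet), which is usually plenty for home workloads.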

     

     

  13. Networking – I believe I read somewhere in this thread that Johnm had originally intended on passing one of the gigE ports to unRAID, but then did fine with the bandwidth and left everything going through ESXi.  While my unRAID usage is not very high (the majority of use is media, maybe 2-3 simultaneous streams max), my concern here is how much bandwidth will be used, mostly by the MythTV VM.  With a few streams coming in from the HDHR, as well as potentially streaming out via MythBox to XBMC frontends, I really have no idea how much bandwidth this may consume.

     

    I love the idea of the internal 10G VLAN, especially since the Myth VM will likely be transferring a fair amount of data to unRAID, but am I going to run into problems at some point with all of this going through a single gigE port managed by ESXi?  If so, is there some way (not too terribly hard, or at least one I could figure my way through) to have the MythTV VM passed the second gigE port but still have all traffic to/from the other VMs routed internally?

     

     

    You could technically create one vSwitch and then create two VM port groups inside of it.  Both NICs would be assigned to this vSwitch, and in each port group you can set NIC preferences: either active/passive for failover, or active/don't-ever-use.  VM port group #1 would have NIC1 active and NIC2 passive.  VM port group #2 would have it the opposite way.  You put your Myth box on the VM port group #2 network and everything else on the other.

     

    The downside is that traffic from one port group to the other will go external to the box and through your network switch.

     

    The better idea is to bond both NICs into a single port group and let ESX handle it.  Chances are you will not have a problem; I would only try to fix it after a problem happens.  If you are planning on regularly pushing close to 1Gb, what type of physical switch are you using?  More often than not I find that a cheap "gigabit" switch is anything but; I burnt out several cheap brands before I got an HP switch.

     

    Other random questions:

     

    1) Just to be clear, because of the M1015 I should be running on 5.0b12a at the latest for the LSI issue, right?  Or do I need to go back further?

    2) Non-build-specific question, but hopefully a quick answer - how complicated is moving drives from my current unRAID (plugged into the mb) to the new build (fresh unRAID) with the M1015?  I'll look up more for sure on this before I even think about switching over, but if there's a fast/easy answer, I'd love to hear that it's basically plug and go =)

    3) Any suggestions for the best guest OS to run a usenet box under?  I never touched usenet until running it as an unRAID plugin, so I don’t have prior experience to work from.  Would a thin W7 guest be terrible for this?  I wouldn’t mind dumping an air-video server on this guest as well (airvideo doesn’t get used that much, but I need to put it somewhere).

    4) MythTV being fed from an HDHR Prime – I was planning on taking the recordings from MythTV and stripping/repacking them as mkv’s to be dropped into the unRAID media library – any reason to do this in a separate guest, or is doing it within the same one fine?

     

    (Sidenote: the W7 VM with passthrough graphics I know is off the beaten path and is hit or miss – it’s not critical, but something I’d like to do.)

    1. b-12a has been stable for me.  I did have one instance of the unRAID guest resetting, but I was copying data to the array and preclearing 5 drives at the same time.  I started the exact same transfer and pre-clear after it booted back up and it did not happen again, so I figured it was a fluke.

    3. An old version of Ubuntu LTS server works for me.  Boot time is 5 seconds.  I'm thinking it was version 10.something.

    4. On average, multiple smaller machines tend to perform better than one large one with multiple CPUs.  Either will work, but a dedicated guest might work better.

     

     

    How do you plan on connecting to your W7 guest?

  14. The second one.  If using the nicer chipset SAS expanders (e.g., LSI based), they bond, for lack of a better term, the 4x 6Gb channels to the SAS HBA.  You don't have four 6Gbps channels, you have one 24Gbps channel.

     

    JohnM posted somewhere in his Atlas thread that his drives were maxing out around 150MBps (1.2Gbps...), which is not an interface limitation but a hard drive speed limitation.  Using that math, a single 4x port off the M1015 can feed between 20 and 21 drives at full speed; the quick math is sketched at the end of this post.

     

    As an interesting side note, EMC has finally switched to SAS on their CLARiiON arrays.  The disk shelf loops are, I am pretty sure, 4x 6Gbps SAS.  SAS is high-end stuff in comparison to SATA.
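
    The quick math referenced above, as a sketch (150 MB/s per drive is the observed figure quoted from JohnM; this is raw line-rate arithmetic with no allowance for protocol or encoding overhead):

# Back-of-envelope: how many full-speed drives a 4-lane 6Gb SAS link can feed.
LANES = 4
LANE_GBPS = 6.0              # SAS-2 line rate per lane
DRIVE_MBPS = 150             # sustained throughput per spinning drive, in MB/s

link_gbps = LANES * LANE_GBPS            # 24 Gb/s aggregate
drive_gbps = DRIVE_MBPS * 8 / 1000       # 1.2 Gb/s per drive
print(f"{link_gbps:.0f} Gb/s / {drive_gbps:.1f} Gb/s per drive = "
      f"{link_gbps / drive_gbps:.0f} drives at full speed")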

  15. How many CPUs should I assign to unRAID in my VM guest?

     

    As few as possible, because adding more than is needed can cause it to be slower.  One is working fine for me.

     

    This is due to CPU resource scheduling.  If the guest has 2 vCPUs it has to wait for 2 physical CPUs to open up at the same time to run a job; if it has 1 vCPU it only has to wait for 1 open CPU.  The wait for 1 open CPU is typically shorter than the wait for 2 concurrently open CPUs, and therefore the two-CPU VM will wait longer for CPU time, which from the end-user perspective makes it seem slower.

     

    ESXi 5.0 improves this a lot, but it is still best practice to use as few vCPUs as possible.
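
    A toy simulation of that effect, assuming each physical core is independently busy with some probability at every scheduling tick.  The real ESX scheduler is far more sophisticated; this only illustrates why needing several simultaneously free cores means a longer wait:

import random

# Toy model: average number of scheduling ticks until `needed` physical cores
# are free at the same instant, when each of `total` cores is independently
# busy with probability `busy`. Not the real ESX scheduler, just an
# illustration of why co-scheduling more vCPUs means a longer wait.
def avg_wait_ticks(total, needed, busy, trials=20_000):
    total_ticks = 0
    for _ in range(trials):
        ticks = 1
        while sum(random.random() >= busy for _ in range(total)) < needed:
            ticks += 1
        total_ticks += ticks
    return total_ticks / trials

for vcpus in (1, 2, 4):
    print(f"{vcpus} vCPU guest: ~{avg_wait_ticks(4, vcpus, 0.6):.1f} ticks to get scheduled")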

  16. I think I understand what you are explaining.  unRAID is the only OS that sees the controller, and no other OS can see it at the same time.  If that is correct, then is there something that can be run in unRAID, software or a VM, that can see everything?  "See" meaning view, read, and write the data on the shares.

     

    Why not just use SMB to access space on unRAID from other machines?