brian89gp

Everything posted by brian89gp

  1. Yes, as long as you didn't somehow reformat it in between. Go through the "Add Storage" wizard just like you would if you were creating a new datastore; somewhere along the line you will have an option to add an existing VMFS datastore. ESX 5.0 uses VMFS5 by default but can still use VMFS3. ESX 4.x uses VMFS3 and cannot read VMFS5. That might come into play depending on which version you were on and which one you are going to.
  2. What exactly do I need to configure in the kernel? Do you want Slackware's ppp package as well? If you are taking requests for kernel modules, having the vmxnet3 NIC driver would be nice. Added it for -rc2. Was this in response to the PPP addition or the vmxnet3 addition? If the vmxnet3 (very nice touch), what version and from which supplier? Somewhere in the later 2.6 minor versions the kernel picked up a native vmxnet3 driver.
  3. What exactly do I need to configure in the kernel? Do you want Slackware's ppp package as well? If you are taking requests for kernel modules, having the vmxnet3 NIC driver would be nice.
  4. Yea, I was hoping for a (discounted) used one. Hard to believe the price is going up.
  5. Either. A datastore is much, much faster. It always stays on the USB stick. Any disk you present to unRAID will work, whether you use a vmdk file on a datastore, an RDM, or pass through an entire card (see the RDM sketch after this list). Yes. You would have to either pass through the entire SATA controller (make sure your ESXi datastore is on a different controller) or RDM the individual disks. For the price, though, the M1015 is a small price to pay for the performance it gets you and the simplicity of passing through just one card.
  6. I do believe that SAS expanders are invisible to the OS; as long as your SAS card and expander chipset work together, the only difference you see at the OS level is that your SAS card has more SAS ports on it. That aside, I prefer the 6Gb LSI SAS expander chipsets (which includes the Intel RES2SV240).
  7. Increase the buffer in XBMC or add a cache disk to unRAID. Those aren't the only options, just two of the easier ones.
  8. It is interesting that it started to happen after you physically moved it. Bumped and dislodged something, perhaps? I would put the unRAID server onto the virtual switch and drop the NIC passthrough; there is no real reason for it. Back before we made the switch to 10Gb at work we were running over 40 VMs out of a single 1Gb NIC (with a 100Mb NIC as a passive backup), so chances are you won't notice a difference between the vSwitch NIC and a dedicated NIC. ESXi has an internal switch, so any VM-to-VM traffic stays inside the ESX server on the virtual switch; when you pass through a NIC to a VM you bypass that nice little feature. I would do the above regardless. As for the rest, start eliminating possibilities: shut down all other VMs and see if it still happens, remove the extra NIC and see if it still happens, remove the USB 3.0 passthrough and see if it still happens.
  9. I wouldn't necessarily assume that, just because there is no "beta" tag in SnapRAID's versioning, it is more reliable than an unRAID beta. Look at the release notes for the 1.9 release. It's great that it "improves the chances of recovering data after an aborted 'sync'", but think about what that implies: you can lose data with an aborted sync? I have looked at SnapRAID too, for the same type of storage requirements you are referring to. It is an interesting concept, but I think it is still in its beginning stages. 6-12 months from now, who knows. If the concept delivers, it might be a serious contender for a lot of products. The hashing for data integrity has my eye for sure. I have been running 12a inside of ESXi, and it has been hard power cycled more than a dozen times (CPU overheating and locking up ESXi). Each time, unRAID came back with no errors and no data loss.
  10. SnapRAID looks interesting, similar in end effect to unRAID with a cache drive (old stuff protected, new stuff not, until the copy/sync job is run).
  11. Several years ago I bought 9x 400GB WD RE2 hard drives over a few different orders. A couple failed within the first few days or were DOA, but then they ran fine for over 4 years. Then one failed, the next week two more failed, and by the end of the third week I only had one that was still working. Moral of the story: it depends on how much you value your data. Hard drives will fail; the more you value your data, the more pre-emptive you should be about replacing them. If mine get any serious errors they get replaced. If they are nearing the end of their warranty period, they get replaced (because somehow the drive just knows the warranty is about to expire...).
  12. Just works for me. I use an Apple TV gen 1 with a Broadcom Crystal HD card. The distro (I forget the name) is built for the ATV and is generally updated once a month; when I reboot it will pull down any XBMC and distro updates automatically. Plays 1080p without a problem. The only complaint is that the CPU in the thing is pretty small, so you can't use an overly fancy skin unless you like sluggishness. As far as the media cataloging capabilities of XBMC, it will do it, but it is nowhere near what Plex will do. I have been contemplating getting a gen 2 Apple TV to play Netflix, Hulu, and iTunes and to run the Plex plugin, then using the gen 1 Apple TV to play the uncataloged folder and anything else Plex throws its arms up about.
  13. I was running a pre-clear of 3 drives at once, each on its own channel. unRAID eventually became unusable about 2 days in and I had to reboot it; even canceling the pre-clear tasks didn't clear up the issues. It doesn't seem to be a bandwidth usage thing, more a problem that comes up after a certain amount of time. If I had to guess, I would say that is what you are seeing.
  14. Space is MB, transfer (network) is Mb, though nobody can seem to keep them straight and they regularly swap the uses and the b/B. 800 Mb/s = 100 MB/s. Windows 7 and the like, with support for larger TCP window sizes plus better NICs and jumbo frames, will sometimes get you above 800 Mb/s. Theoretical is close to 900 Mb/s, or about 113 MB/s. Keep in mind the actual line usage will be 100%, but 10-20% of that is protocol overhead (see the throughput sketch after this list).
  15. The general idea of using ESXi to host unRAID is so that you don't need to install any applications on your storage server.
  16. In the real world it usually caps out at around 800 Mb/s when using TCP on a low-latency LAN; I don't know about the theoretical limit. With speeds that high, things such as latency (even 0.2 ms) and the TCP window size come into play in a big way.
  17. Adding and removing NICs from a vSwitch is very easy (like 5 seconds easy) and can be done live. You don't bond the ports with LACP or EtherChannel or anything, you just put both into the same vSwitch. ESX will load balance the guests across all available NICs (guests 1, 2, 4 on NIC1 and guests 3, 5 on NIC2). If I go for a minimal Windows install I usually go for XP SP2 or SP3; I can usually have it running in 5 GB pretty easily with a statically set 256 MB pagefile. A lot of the things I do will compile/modify/change data that is saved elsewhere while the application itself stays 100% static and unchanging, so I save the data elsewhere and set the VMDK for the guest to be non-persistent. As soon as it is powered off, all changes to the disk are dropped and it goes back to the way it was. You can thin provision everything so that each guest only ever consumes the space it actually uses. I have an MS SQL server, Plex, the assortment of Linux apps you mentioned, two unRAID machines, and two other Windows servers all running on a 60 GB SSD.
  18. You could technically create one vSwitch and then create two VM port groups inside of it. Both NICs would be assigned to this vSwitch, and in each port group you can set NIC preferences, either active/passive for failover or active/never-use. VM port group #1 would have NIC1 active and NIC2 passive; VM port group #2 would have the opposite. You put your Myth box on the VM port group #2 network and everything else on the other. The downside is that traffic from one port group to the other goes external to the box and through your network switch. The better idea is to bond both NICs into a single port group and let ESX handle it. Chances are you will not have a problem; I would only try to fix it after a problem happens. If you are planning on regularly pushing close to 1 Gb, what type of physical switch are you using? More often than not I find that a cheap "gigabit" switch is anything but; I burnt out several cheap brands before I got an HP switch. 1. b-12a has been stable for me. I did have one instance of the unRAID guest resetting, but I was copying data to the array and preclearing 5 drives at the same time; I started the exact same transfer and pre-clear after it booted back up and it did not happen again, so I figured it was a fluke. 3. An old version of Ubuntu LTS server works for me; boot time is 5 seconds. I think it was version 10.something. 4. On average, multiple smaller machines tend to perform better than one large one with multiple CPUs. Either will work, but a dedicated guest might work better. How do you plan on connecting to your W7 guest?
  19. The second one. The nicer-chipset SAS expanders (e.g. LSI based) bond, for lack of a better term, the 4x 6Gb channels to the SAS HBA: you don't have four separate 6 Gbps channels, you have one 24 Gbps channel. JohnM posted somewhere in his Atlas thread that his drives were maxing out around 150 MB/s (1.2 Gbps...), not an interface limitation but a hard drive speed limitation. Using that math, a single 4x port off the M1015 can power between 20 and 21 drives at full speed (see the bandwidth sketch after this list). As an interesting side note, EMC has finally switched to SAS on their CLARiiON arrays. The disk shelf loops are, I am pretty sure, 4x 6 Gbps SAS. SAS is high-end stuff in comparison to SATA.
  20. Correct. It's debatable whether this particular shortcoming would even be noticed in a home user setting.
  21. As few as possible, because adding more than is needed can make it slower. One is working fine for me. This is due to CPU resource scheduling: if a VM has 2 vCPUs it has to wait for 2 physical CPUs to open up at the same time to run, while with 1 vCPU it only has to wait for 1 open CPU. The wait for 1 open CPU is typically shorter than the wait for 2 concurrently open CPUs, so the two-CPU VM waits longer for the CPU resource, which from the end-user perspective makes it seem slower (see the toy scheduling sketch after this list). ESXi 5.0 improves this a lot, but it is still best practice to use as few CPUs as possible.
  22. Why not just use SMB to access space on unRAID from other machines?
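
A note on the RDM option mentioned in post 5: on the ESXi host, a per-disk raw device mapping is created with vmkfstools and the resulting vmdk is attached to the unRAID guest as an existing disk. The sketch below is hypothetical and wrapped in Python only to keep all the examples here in one language; the device and datastore paths are placeholders, and on a real host you would normally just run the vmkfstools command directly.

    import subprocess

    # Hypothetical sketch: create a physical-mode (pass-through) RDM for one disk
    # so the unRAID guest sees the raw drive without passing through the whole
    # controller. Both paths below are made-up placeholders.
    DEVICE = "/vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID"        # placeholder
    RDM_VMDK = "/vmfs/volumes/datastore1/unraid/disk1-rdmp.vmdk"      # placeholder

    # vmkfstools -z writes the physical-mode RDM mapping file; add the resulting
    # vmdk to the unRAID VM as an existing disk afterwards.
    subprocess.run(["vmkfstools", "-z", DEVICE, RDM_VMDK], check=True)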
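The MB versus Mb arithmetic in posts 14 and 16 works out as follows. This is a back-of-the-envelope sketch; the 10-20% overhead figures are the rough protocol overhead quoted in the posts, not measured values.

    # Back-of-the-envelope gigabit Ethernet math for posts 14 and 16.
    LINK_MBPS = 1000                     # raw gigabit line rate in megabits/s

    def mbps_to_MBps(mbps):
        """Megabits per second to megabytes per second (8 bits per byte)."""
        return mbps / 8.0

    for overhead in (0.10, 0.20):        # assumed protocol overhead range
        usable = LINK_MBPS * (1 - overhead)
        print(f"{overhead:.0%} overhead: {usable:.0f} Mb/s = {mbps_to_MBps(usable):.1f} MB/s")

    # 10% overhead -> 900 Mb/s, about 112.5 MB/s (the ~113 MB/s "theoretical" figure)
    # 20% overhead -> 800 Mb/s, 100 MB/s (the usual real-world cap over TCP)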
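The drive-count ceiling in post 19 comes straight from the numbers quoted there (four bonded 6 Gbps lanes per wide port, drives topping out around 150 MB/s). The sketch below just spells out that division; it ignores 8b/10b encoding and protocol overhead, so treat it as a rough upper bound.

    # Rough ceiling for drives per 4x wide SAS port, using the figures in post 19.
    LANES = 4                  # lanes bonded into one wide SAS port
    LANE_GBPS = 6.0            # 6 Gbps per lane (SAS 2.0)
    DRIVE_MBPS = 150.0         # ~150 MB/s per drive, as quoted from JohnM's thread

    port_gbps = LANES * LANE_GBPS            # 24 Gbps aggregate
    drive_gbps = DRIVE_MBPS * 8 / 1000.0     # 1.2 Gbps per drive
    print(f"Wide port bandwidth: {port_gbps:.0f} Gbps")
    print(f"Per-drive bandwidth: {drive_gbps:.1f} Gbps")
    print(f"Drives before the port saturates: {port_gbps / drive_gbps:.0f}")
    # -> about 20 drives running flat out before the wide port becomes the bottleneck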
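The scheduling argument in post 21 can be illustrated with a toy model: assume each physical core is independently busy with some probability at every scheduling tick, and a strictly co-scheduled VM can only run when enough cores are free in the same tick. This is purely illustrative with made-up numbers and is not how the ESX scheduler actually works (5.0 in particular relaxes co-scheduling), but it shows why the 2-vCPU guest tends to wait longer.

    import random

    # Toy model of strict co-scheduling for post 21. Core count and busy
    # probability are made-up illustration values.
    CORES = 4            # physical cores in the host
    P_BUSY = 0.6         # chance a given core is busy at any scheduling tick
    TRIALS = 50_000

    def avg_wait_ticks(vcpus):
        """Average ticks until at least `vcpus` cores are free in the same tick."""
        total = 0
        for _ in range(TRIALS):
            ticks = 1
            while sum(random.random() > P_BUSY for _ in range(CORES)) < vcpus:
                ticks += 1
            total += ticks
        return total / TRIALS

    print("1 vCPU :", round(avg_wait_ticks(1), 2), "ticks on average")
    print("2 vCPUs:", round(avg_wait_ticks(2), 2), "ticks on average")
    # The 2-vCPU guest waits noticeably longer for a slot, which is why giving a
    # VM more vCPUs than it needs can make it feel slower.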