ATLAS: My Virtualized unRAID Server



My answers are in RED:

Since I’ll be using only 4 drives, can I plug the M1015 in one of the PCIE x4 ports without sacrificing performance?

Yes, you will be fine.

Thanks

 

I know close to nothing about FreeNAS, but I believe it's a striped RAID. What kind of write/read performance improvement should I expect?

You have a few ways to set it up. You can do what is equivalent to a RAID5 (RAIDZ) or a RAID10 (striped mirrored vdevs). The striped mirrored vdevs will net you more speed at the cost of two drives for redundancy. Look back a few pages, SEE LINK; I'm actually getting better writes than my SATA3 Performance Pro 2 and slightly lower read performance.
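
If it helps to see the difference, this is roughly what the two layouts look like at the command line (a sketch only; the pool name "tank" and device names da0-da3 are placeholders, and FreeNAS would normally do all of this through its GUI):

  # RAIDZ (RAID5-like): one drive's worth of parity across the vdev
  zpool create tank raidz da0 da1 da2 da3

  # Striped mirrors (RAID10-like): two 2-way mirror vdevs striped together
  zpool create tank mirror da0 da1 mirror da2 da3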

Thanks for the link to your post, that was the one that first put me on the idea of using FreeNAS, although I couldn't find it. This thread has become HUGE!

If you were to start again, would you still recommend OpenIndiana? Considering I'm a newbie, FreeNAS seems to have a larger community for support.

 

I was actually thinking of RAIDZ2 (which I believe is equivalent to RAID6). I may seem too paranoid, but right when I started using unRAID I had two drives fail in just 3 days, so I lost some data. You can never be too sure.

The thing about RAID10 is that if two drives within the same vdev fail, you are done, right?

Any other alternatives?

 

Are SSDs really worth it for FreeNAS, or should I go with spinners? If using SSDs, what kind of speed should I expect? What about spinners? Please feel free to recommend parts  :)!

Spinners are fine for this; the reason I say that is space. With 4x 2TB green spinner drives in RAIDZ you will get around 6TB usable (one drive's worth goes to parity) at about 450MB/s read/write. I would expect more in the 500-600MB/s range with the new Seagate 7200RPM 1TB-platter drives.

 

Yes, with 4x 250GB SSDs you will get 750GB at maybe 1000MB/s read and 700MB/s write (I am totally guessing here, I could be wrong [you might even max out the PCIe bus first, although x4 should be OK to handle it]), at a ridiculous cost.

 

When you weigh the size of the array vs. performance vs. cost, the spinners look like more bang for the buck.

Spinners = 6TB @ 450MB/s for $360 vs. SSDs = 750GB @ say 800MB/s(ish) for $750.

You are absolutely right; I just needed someone other than my wife to tell me about the cost of SSD drives  ;)

Besides, once the server is up, I don't really need SSD speeds.

 

Don't forget, you can put your unRAID cache drive on this array...

I didn't mention it, but my goal is to use FreeNAS for the cache drive as well as the datastores. I should say that one of my datastores is a newznab server and it uses a lot of space...

 

When looking around I've seen many reasonably priced 1U SATA II enclosures for 4 drives. However, when looking for SATA III, prices really go up, especially here. I'm Spanish, so apart from the spelling mistakes (sorry!), my options to shop around aren't many. If I go with SSDs, going with SATA III is a no-brainer, but if using spinners, could I get by with SATA II?

SATA2 should be fine for spinners. For SSDs, you'll want SATA3.

I guess I need to see the item you're looking at.

I’ve found these so far:

https://ri-vier.eu/1u-server-case-w-4-hotswappable-satasas-drive-bay-rpc1204-p-3.html?cPath=1_3_4

https://ri-vier.eu/1u-server-case-w-4-hotswappable-satasas-drive-bay-rpc1004-p-1.html?cPath=1_3_4

 

And then suddenly today I found a SATA III one, which is nice thinking ahead:

http://www.xcase.co.uk/X-Case-RM-140-1u-4-drive-hotswap-case-with-rails-p/case-xcase-140.htm

 

Anything else I’m missing?

Yes. If you are making this a 1U standalone server, your gigabit network will max out before your NAS array.

You will feel the slowness of the gigabit.

Consider making the FreeNAS a guest ON the ESXi server itself. Pass through the M1015 to the guest and use that. By staying on the ESXi virtual network, you can then use the 10Gb virtual NICs and get full NAS speed.
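
For what it's worth, the relevant bits of the FreeNAS guest's .vmx end up looking something like this (a sketch from memory; the adapter numbering and the passthrough entry depend on your host, so treat the exact lines as assumptions to verify against your own config):

  ethernet0.virtualDev = "vmxnet3"    # 10Gb paravirtual NIC on the internal vSwitch
  pciPassthru0.present = "TRUE"       # the M1015 handed straight to the guest via VMDirectPath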

 

Also, unless you are going full standalone FreeNAS with an iSCSI target (or NFS), you will still need one datastore drive local on the server.

I already thought of that. I'll make the FreeNAS a guest on the ESXi server to use the vmxnet3 LAN. I'm thinking of using a CF card as the FreeNAS datastore. I ordered this and I'm waiting for it to arrive to test it.

I want it to be a standalone case so I can use the 24 drives of the Norco for unRAID, but it'll be daisy-chained to the server.

 

EDIT:

I looked for some real-world benchmarks for SSDs in RAID5:

http://www.storagereview.com/intel_ssd_510_raid_review

 

Pretty sad, honestly... about a 20% performance gain. So my guesses would be about right with M3s, Performance Pros, or any other max-IOPS drives like the SanDisk Extreme.

Thanks a lot for that, you saved me some serious money and maybe also my wife. I'll go with the spinners.

 

However, this brings up a new issue: cooling. One of the nice features of SSDs, apart from speed and power consumption, is that they don't need much cooling. I'll have to give this some thought.

 

I'll keep you posted.

Link to comment

I hadn't thought about booting FreeNAS or NAS4Free from the same stick as my ESX. How exactly would that be set up? How would you make the stick a part of the datastore?

 

Right now I have most of my VMs on an SSD but two on a 2gig spinner I'd LOVE to dump. I have yet to add any disks to my NAS4Free VM; how would I make a share on it a cache and also make part of it available as a datastore?

 

Also, my NewzNAB server was allocated a bunch of space, but frankly it's not using very much of it at all. I need to shrink this at some point, I think; not sure how best to do so.

Link to comment

I hadn't thought about booting FreeNAS or NAS4Free from the same stick as my ESX. How exactly would that be set up? How would you make the stick a part of the datastore?

 

To boot from a USB stick you would still need a datastore to define the guest; ESXi will boot the VM, which will then boot FreeNAS from the USB. Kind of like unRAID. I'm trying a different approach: booting FreeNAS from a CF card that will also be the datastore. I haven't received the parts yet, so it's too early to say whether this will work.

Link to comment

Okay, as it stands now I boot ESX from USB, and unRAID has a stick dedicated to it as well. I've not figured out how to add a NAS as a datastore, and I'm not sure how I'd do that for a USB stick either, especially not the one ESX comes off of. I can't pass through another USB either; I think all hubs are used currently. If I could use a NAS for a datastore, I'd dump the spinner I've got and bootstrap things from an SSD to get the NAS going, then run the VM from there. Thinking about it, I'd keep unRAID off the other NAS!

Link to comment

@dheg

Thanks for the link to your post, that was the one that first put me on the idea of using FreeNAS, although I couldn't find it. This thread has become HUGE!

If you were to start again, would you still recommend OpenIndiana? Considering I'm a newbie, FreeNAS seems to have a larger community for support.

 

OpenIndiana is a fork of NexentaStor. The main difference between OpenIndiana and NexentaStor is the interface; NexentaStor has the superior interface. Under the covers they perform the same. Please also understand that both OpenIndiana and NexentaStor are Solaris-based: OpenIndiana uses OpenSolaris, and NexentaStor uses regular Solaris.

 

Solaris is different from Linux. One main difference is that block devices are labeled differently. I think Solaris versions of Unix allow for more block devices (instead of Linux's 26).

 

NexentaStor Community (that is the actual edition name) is the free version of the commercial NexentaStor storage appliance. NexentaStor Community supports up to 18TB of storage. It comes with no technical support and no support for plugins. The cost of the commercial product beyond 18TB but up to 32TB is roughly $4,750 for Silver support and $6,950 for Gold support, and much higher (not quoted on the website) for Platinum support. The commercial product can go up to 1 petabyte of protected storage.

 

I have been told by people who work with the NexentaStor product that if you do not need technical support but want a good, solid commercial product, then NexentaStor Community is the way to go. If you don't want any restrictions and at least want community support, go with OpenIndiana / napp-it.

 

Here is a link on how to set up a napp-it server:  http://www.napp-it.org/doc/downloads/napp-it.pdf

 

Here is a link on how to set up an ESXi datastore with napp-it:  http://www.napp-it.org/doc/downloads/all-in-one.pdf

 

Both of the above documents go hand in hand. I have also read in several forums that FreeNAS is not quite ready yet. Though it does support ZFS, I have read that performance is slow; I believe this is due to how ZFS has been implemented in that distribution. Using one of the above options will give you better performance.

 

I will keep looking to see if there is anything else that would be worth posting on this subject.

 

Please also note that any option that utilizes ZFS requires memory. LOTS of memory. The more memory you can throw at it, the better ZFS performs. I do understand this is a lab environment, but when troubleshooting performance issues with ZFS, the more memory the better. If you can't afford gobs of memory, then ZFS should still be allocated as much as you can spare; by that I mean a minimum of 4GB. 2GB is asking for real trouble.
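
If you do end up on FreeNAS and want to cap how much of the guest's RAM the ZFS ARC grabs, the usual knob is a loader tunable (a sketch only, assuming a FreeBSD-based build; the 8G value is just an example for a guest with 12-16GB assigned):

  # /boot/loader.conf
  vfs.zfs.arc_max="8G"    # cap the ARC so the rest of the guest keeps some headroom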

 

--Sideband Samurai

 

Here is another link with OpenIndiana and ESXi 5.0:  http://www.servethehome.com/install-openindiana-esxi-50-allinone-zfs-server/

Link to comment

@SidebandSamurai

 

wow, thanks a lot, I have some more reading to do  :)

RAM won't be a problem; I have 16GB and just ordered two 8GB sticks. My intention is to dedicate at least 8GB, more if needed, for four 1TB HDDs. After Johnm's post I decided to go with this, so 8GB for a 4TB array should be plenty.

Napp-it seems easy, Nexenta nicer... maybe I should try  ;)

 

Has anyone tried this case? The 'Add to Cart' button is calling my name ::)

Link to comment

A couple of comments... The free version of ESX only allows for ONE physical CPU as well as the 32GB limit; don't use two.

 

With the 5 series of ESXi there are no CPU/core count restrictions (I'm almost positive 4.1 was limited to 6 cores per CPU). The limits are 32GB of RAM and no vSphere client.

 

The real changes between 4.1 and 5.x that we noticed in the lab were that 5 supports CPUs with more than 6 cores (not threads), and that in 5 the 32GB memory limit is a hard limit for the entire server, while on 4.1 we could have boxes with 96GB+ and the limit was 32GB of virtual RAM in a guest.

Link to comment

@SidebandSamurai

 

wow, thanks a lot, I have some more reading to do  :)

RAM won't be a problem; I have 16GB and just ordered two 8GB sticks. My intention is to dedicate at least 8GB, more if needed, for four 1TB HDDs. After Johnm's post I decided to go with this, so 8GB for a 4TB array should be plenty.

Napp-it seems easy, Nexenta nicer... maybe I should try  ;)

 

Has anyone tried this case? The 'Add to Cart' button is calling my name ::)

 

The 2TB versions of that drive are probably only a few dollars more; I have seen them for as low as $89 recently. Obviously that might not be the case where you are located, but it might be worth a look.

 

That case rolled off the same assembly line as the Norcos; it just has custom hot-swap trays. It is the same as the RPC-1204.

 

 

Link to comment

These last 2 pages have so much info I am not sure where and how to comment on it all...

 

@dheg

I would definitely consider OI and napp-it if I started over.

It was where I was going to go from the start.

I was building both FreeNAS and OI servers side by side to test performance. Some time mid-build, the drives for the OI guest ended up in another server whose older 1.5TB drives had started failing.

Time and finances (for server toys) have been a bit tight for a while, so I never got to finish the build.

 

 

 

I auto-boot my domain controller and then the ZFS guest first, then have a long timeout before any other guests boot off the ZFS array.

 

I have the ZFS array shared out as NFS (then mapped from inside ESXi as a datastore). You could also use iSCSI.

It is the same as if the ZFS server were a stand-alone server; I just happen to have virtualized it.
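
As a rough sketch of that NFS route (the dataset, IP, path, and names below are made up; FreeNAS would handle the sharing side through its GUI):

  zfs set sharenfs=on tank/vmstore     # export the dataset over NFS from the ZFS guest
  esxcli storage nfs add -H 192.168.1.50 -s /mnt/tank/vmstore -v zfs-datastore   # mount it in ESXi as a datastore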

 

I have my own reasons to keep my SSD datastores, so I have not looked at alternatives to replace them.

 

So my understanding is that you want to virtualize the ZFS server, put the drives into an external chassis (like a DAS with no motherboard), and link it with an external SAS cable?

I want to do something similar: get a 2U-3U, 12-16 drive "head" with my 2 unRAID servers hanging off of it.

 

Link to comment

So my understanding is that you want to virtualize the ZFS server, put the drives into an external chassis (like a DAS with no motherboard), and link it with an external SAS cable?

I want to do something similar: get a 2U-3U, 12-16 drive "head" with my 2 unRAID servers hanging off of it.

Exactly!

 

I only need a 1U case though. Four 1TB drives are more than enough for me. My estimates are:

  • 500-750GB for cache. I don't really need more. I think my highest throughput per day has been around 250GB.
  • 500GB for newznab

 

That leaves me, assuming I'll go with RAIDZ2, 1TB (maybe 600GB to leave some headroom  ;)) for datastores. In the short term this is more than enough, and if I run out of space I can always upsize my drives.
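
For reference, the rough arithmetic behind that (assuming 4x 1TB in RAIDZ2 and ignoring filesystem overhead): usable = (4 - 2) x 1TB = 2TB; 2TB - 0.5TB cache - 0.5TB newznab leaves roughly 1TB for datastores.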

 

Link to comment

@Johnm, siamsquare, SidebandSamurai, et al.

 

You seem to prefer OI over NexentaStor. May I ask why?

 

As SidebandSamurai wrote in an earlier post, they both perform the same, but NexentaStor's interface is nicer? Am I missing something?

 

My decision for OI was this thread at [H]ard. It was well written and had decent community support.

 

As I stated, I never did flip the switch to migrate my FreeNAS to OI / napp-it.

My FreeNAS has worked 100% flawlessly for almost a year now, with a super easy setup (once I found the hacks I needed), a super intuitive interface, performance reporting, and error reporting. Plus it was ESXi-aware and installed itself as a guest with VMware Tools ready to go.

I honestly have not felt that I would gain enough of an upgrade to justify compromising my data in a migration (it should go smoothly, but crap happens; we have all been there).

 

Why would I rebuild with OI?

Because I have done FreeNAS; I wanted to learn something new.

Would I stick with it after I tried it?

I honestly have no clue... I have not used it.

 

Your RAID6 (RAIDZ2) idea, while it might be more fault tolerant, won't perform as fast as striped mirrored vdevs.

 

If you wanted to expand that RAID6, you would need to buy 4 new drives for only 2 more drives' worth of data.

With striped mirrored vdevs you can add drives 2 at a time, losing 1 drive's worth of capacity but gaining higher IO with each expansion. Yes, losing 2 drives in the same vdev = total array loss.

 

It sounds like you are in the mindset that a production RAID is a backup (or needs no backup). It is not (especially on ZFS); you have a greater chance of staying lucky. A backup is the only way to be safe.

 

Remember, you can back up the ZFS to your unRAID... Restoration will be a pain because you have a chicken/egg problem: you have to first recreate the ZFS and unRAID guests before you restore (this is one reason why my unRAID guest is on my SSD and only the cache is on the ZFS).

 

I still back up my entire ZFS array to my unRAID weekly and back up key guests daily (2x 3TB HDDs are enough for my 4x 2TBs).
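
A minimal sketch of what such a weekly job can look like (all names are hypothetical; it assumes an unRAID share is already mounted at /mnt/unraid on the ZFS guest):

  zfs snapshot tank/vmstore@weekly
  zfs send tank/vmstore@weekly | gzip > /mnt/unraid/backups/vmstore-weekly.zfs.gz
  # restore later with: gunzip -c vmstore-weekly.zfs.gz | zfs receive tank/vmstore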

 


 

Some things I should point out about ZFS that are not obvious or mentioned a lot...

First off, ZFS is a bit more complicated and advanced than unRAID,

especially if you get into a Solaris-based OS (FreeNAS is quite a bit simpler).

It is very unforgiving! One mistake and you can lose your data.

 

Don't mix 4K-sector and 512-byte-sector disks. They say to avoid 4K disks, though some people say it's OK to use them with newer builds.

(My array is 4K drives, Samsung F4s.)
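
(For context, one common hack of that era for getting proper 4K alignment on FreeBSD/FreeNAS was the gnop trick; this is only a sketch with placeholder device names, and newer builds expose cleaner ways to force 4K alignment:)

  gnop create -S 4096 da0                        # present da0 as a 4K-sector device (da0.nop)
  zpool create tank raidz da0.nop da1 da2 da3    # the pool inherits 4K alignment from the .nop member
  # afterwards: zpool export tank; gnop destroy da0.nop; zpool import tank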

 

Expanding an array is almost impossible without some knowledge of what you're doing; you really do need a plan ahead of time. When you first build your array, plan out your expansion options for the future.

You can't just add a disk and hit expand like on a hardware RAID. The best way is to add vdevs, and that needs a group of disks matching(ish) your existing array drives.

(I already plan to double my array with another 4-drive vdev when I migrate to a "head".)
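
Expansion by adding a vdev is a one-liner, which is exactly why you want the matching group of disks planned up front (a sketch; pool and device names are placeholders):

  zpool add tank raidz da4 da5 da6 da7    # stripes a second 4-drive RAIDZ vdev alongside the first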

 

Before you build a "production array", test out a few test arrays. Once you are happy with it, then migrate your data to it.

 

I see people time and time again, when expanding ZFS arrays, creating a whole new array and migrating the data over instead of expanding.

Link to comment

It sounds like you are in the mindset that a production RAID is a backup (or needs no backup).

Not really, I am just very risk averse  ;D

 

It is not (especially on ZFS); you have a greater chance of staying lucky. A backup is the only way to be safe.

Did you mean that with ZFS you have a better chance of staying lucky, or the other way around? I thought ZFS was built with data security in mind.

 

Remember, you can back up the ZFS to your unRAID... Restoration will be a pain because you have a chicken/egg problem: you have to first recreate the ZFS and unRAID guests before you restore (this is one reason why my unRAID guest is on my SSD and only the cache is on the ZFS).

You could create an unRAID VM in a new datastore with the unRAID USB and fire the system up; then you could restore the ZFS system. Did I get this right?

 

Before you build a "production array", test out a few test arrays. Once you are happy with it, then migrate your data to it.

That's very good advice and I plan to follow it :D

Link to comment

Yo,

 

Well, I'm running the best of both worlds... my ideal would be the current Synology NAS but custom built!

They have hybrid RAID (you can mix disks of different sizes) and easy RAID expansion.

I have been reading articles about adding RAID expansion to ZFS, but I haven't got a crystal ball for when Solaris will ever have this feature implemented, as ZFS was never intended for home use!

ZFS is fast and stable but, as Johnm says, unforgiving! And indeed you need to plan ahead... therefore I keep unRAID as an archive server that is easy to expand if I need more space!

You need to decide what you want... I own 3 media players, and streaming 3D BD to all of them at the same time doesn't compute on unRAID... my ZFS has no problem with that!

So that's why I run both ZFS and unRAID on ESXi... the best of both worlds until something better comes along.

 

Gr33tz

Link to comment

I apologize in advance if this is already touched on, but the topic at hand seems to overlap significantly with my current situation. I have a VM host that has a variety of guests on it, including passthrough GPU W7 boxes and an unRAID server (an all-in-one as a kind of test platform to date). I'm looking at breaking some of these functions out for better consolidation/grouping of functions, and one of the primary things I want to do is split my storage out to a separate box: a) for ease of storage expansion, b) centralization of most storage, both media and VM datastores, and c) to not have data access down if I want to change around hardware in a more experimental passthrough situation.

 

Which brings me to this: I'm looking at using ZFS on a separate box for VM datastores, with the potential of having that box actually still be a VM host, but only hosting unRAID and ZFS side by side. The major issue I'm considering right now is what type of network interface to use, as I could see 1Gbps being a bottleneck for IOPS across numerous VMs. However, I will be honest when saying my network infrastructure knowledge is my weakest point, and while I'm more than willing to learn, any direction would definitely be welcomed. I've heard of a few people picking up 4Gbps fibre equipment for cheap off eBay, but I'm not sure what all that would entail and quite where to start. Standard teaming/aggregation of 1Gbps NICs seems to have drawbacks and not quite accomplish direct bandwidth gains, especially if I were to run NFS (multipathing doesn't really function there, correct?). And 10Gbps Ethernet appears to be flat-out unaffordable.

 

So, I'm sure I'm mincing certain issues, but please bear with me here, and if you have any thoughts from your experience, I'd love to hear them.

 

 

Link to comment

I apologize in advance if this is already touched on, but the topic at hand seems to overlap significantly with my current situation. I have a VM host that has a variety of guests on it, including passthrough GPU W7 boxes and an unRAID server (an all-in-one as a kind of test platform to date). I'm looking at breaking some of these functions out for better consolidation/grouping of functions, and one of the primary things I want to do is split my storage out to a separate box: a) for ease of storage expansion, b) centralization of most storage, both media and VM datastores, and c) to not have data access down if I want to change around hardware in a more experimental passthrough situation.

 

Which brings me to this: I'm looking at using ZFS on a separate box for VM datastores, with the potential of having that box actually still be a VM host, but only hosting unRAID and ZFS side by side. The major issue I'm considering right now is what type of network interface to use, as I could see 1Gbps being a bottleneck for IOPS across numerous VMs. However, I will be honest when saying my network infrastructure knowledge is my weakest point, and while I'm more than willing to learn, any direction would definitely be welcomed. I've heard of a few people picking up 4Gbps fibre equipment for cheap off eBay, but I'm not sure what all that would entail and quite where to start. Standard teaming/aggregation of 1Gbps NICs seems to have drawbacks and not quite accomplish direct bandwidth gains, especially if I were to run NFS (multipathing doesn't really function there, correct?). And 10Gbps Ethernet appears to be flat-out unaffordable.

 

So, I'm sure I'm mincing certain issues, but please bear with me here, and if you have any thoughts from your experience, I'd love to hear them.

 

Hi Roancea,

 

I have an ESXi host with several guests (Windows, several Linux servers, and unRAID). I'm planning on implementing a new guest to act as a ZFS server (OpenIndiana is very well positioned). To do this, I'll fit a SAS card which will be passed through to this guest. This way everything will be on the same VMXNET3 network @ 10Gbps.

 

 

Link to comment

Hi Roancea,

 

I have an ESXi host with several guests (Windows, several Linux servers, and unRAID). I'm planning on implementing a new guest to act as a ZFS server (OpenIndiana is very well positioned). To do this, I'll fit a SAS card which will be passed through to this guest. This way everything will be on the same VMXNET3 network @ 10Gbps.

 

I realize that you can do it this way if everything you're running is in one box (and in fact it has upsides because of the speed of the internal vmxnet). However, what I'm specifically talking about is having multiple VM hosts and a separate box for shared storage datastores, i.e., all of the VMDKs will reside on a physically separate box from the other VM hosts, which then presents the issue of sufficient network bandwidth/IOPS from the physical datastore box to the physical VM hosts.

Link to comment

I have 2 physical servers at home:

  • my unRAID box that desperately needs an upgrade (approx. 10TB)
  • my media box with the GRUB boot manager, which lets me start either the Ubuntu-based media & TV server (XBMC, VDR) or Windows 7

 

I have just ordered a super silent server case (Fractal Design Define R4) and want to order the mainboard (including CPU & memory) next. The plan is to have the server next to my TV and to switch over to a "real" server environment (a Supermicro board with a Xeon CPU), but I also want to configure it to be as green as possible.

 

I would like to reuse at least some of the existing hardware:

 

  • 8 unRAID data drives
  • GeForce GT 220 card for the VDR (ASUS Bravo 220 Silent, PCIe 2.1 x16)
  • TV card for the VDR (Mystique Satix Dual S2, PCI-e x1)
  • Adaptec 1220SA SATA Adapter, PCI-e x1
  • Blu-ray player for the Windows environment

 

What is the right choice of mobo? Thanks a lot.

Link to comment

I have 2 physical servers at home:

  • my unRAID box that desperately needs an upgrade (approx. 10TB)
  • my media box with the GRUB boot manager, which lets me start either the Ubuntu-based media & TV server (XBMC, VDR) or Windows 7

 

I have just ordered a super silent server case (Fractal Design Define R4) and want to order the mainboard (including CPU & memory) next. The plan is to have the server next to my TV and to switch over to a "real" server environment (a Supermicro board with a Xeon CPU), but I also want to configure it to be as green as possible.

 

I would like to reuse at least some of the existing hardware:

 

  • 8 unRAID data drives
  • GeForce GT 220 card for the VDR (ASUS Bravo 220 Silent, PCIe 2.1 x16)
  • TV card for the VDR (Mystique Satix Dual S2, PCI-e x1)
  • Adaptec 1220SA SATA Adapter, PCI-e x1
  • Blu-ray player for the Windows environment

 

What is the right choice of mobo? Thanks a lot.

 

I think I missed the question. Are you looking to replace just the unRAID box with a bare-metal unRAID? Replace both boxes with one ESXi all-in-one? Replace everything but the HTPC part?

Link to comment

Mmm... honestly, I would not make that move.

 

ESXi was never meant to pass through video cards, and it is really hit or miss; only a few cards have worked. I would spend a few bucks and get a Raspberry Pi or build an HTPC from leftover parts. The headache saved is well worth the extra PC.

I bet you could even hide the Pi inside the same case or mount it behind the TV.

 

As far as everything else goes, let's see:

  • Drives will move over (v5.x will help).
  • Video card: unknown, see above.
  • TV card: I get a lot of Google hits about it and ESXi, all in German; I would check it out. Unknown (Bob might know).
  • Adaptec card: most likely it won't work out of the box.
  • Blu-ray: check, that's good.

 

I think the challenge is whether the DVR will work in passthrough; I am not sure.

If so, your best bet is still to go with 2 boxes: a heftier server (for unRAID and DVR duties along with other VMs) and a light client (a pure HTPC).

 

You could also do a bit of research and see what others have done for video card passthrough. I know people who have done it, and it was a PITA to get working.

 

If you want to continue with this, then we can look at additional hardware.

Link to comment
