3rd M1015 on an X9SCM-iiF MOBO - Performance?


guitarlp

Recommended Posts

I'm running my first two M1015's passed through ESXi directly to unRAID for 16 drives total.

 

I'm considering adding a third M1015 so that I can pass it through to OpenIndiana or FreeBSD and set up a 5- or 6-disk RAID-Z2 array. The purpose of this array would be to serve as an NFS mount for my main MacBook Pro to store all my data.

 

The reason I want to go the ZFS RAID route is that I want redundancy and performance. This is data I'll be accessing all throughout the day, so having the drives sleep isn't important to me. Also, since they will be striped, I'll get better performance than what my unRAID shares would provide.

 

That being said, am I going to be OK running that third M1015 on my X9SCM MOBO? The third and fourth PCIe slots are x4 while the first two are x8. I could always move unRAID to the third slot and live with slower parity checks, but I'm wondering if a RAID-Z2 array of 5-6 disks would suffer any performance hit running in an x4 slot?

Link to comment

Simple math:  An x4 slot on a PCIe v2 board has 2GB/s of bandwidth.  For 6 disks that works out to ~333MB/s per disk => clearly far more than any disk except an SSD requires.

 

... So no, it won't slow anything down  :)

 

Caveat:  I'm not sure if the M1015's will take full advantage of the x4 bandwidth, since they're designed for an x8 slot.  But even if they don't, they should still get at least 250MB/s per lane ... or 1GB/s total ... which is still ~166MB/s per drive.    While that may be a SLIGHT bottleneck on the highest-capacity drives on their outer cylinders, it certainly isn't enough to be concerned about.
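
Spelled out (assuming the usual ~500MB/s per PCIe 2.0 lane, or ~250MB/s per lane if the card were to fall back to PCIe 1.x rates):

  PCIe 2.0 x4:  4 lanes x 500MB/s = 2000MB/s  =>  ~333MB/s per disk across 6 disks
  PCIe 1.x x4:  4 lanes x 250MB/s = 1000MB/s  =>  ~166MB/s per disk across 6 disks

Typical 7200rpm drives peak somewhere around 150-180MB/s sequential on their outer cylinders, so only the worst-case 166MB/s figure even comes close to mattering.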

 

Link to comment

Awesome... thanks for your help yet again Gary :). I'm not 100% up to date on ZFS, but it does support (and recommend) using SSDs for log and cache disks... so that may be something I'll consider. And if I go that route, it may affect the performance of my ZFS pool.
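
From what I've read, adding an SSD log or cache device to an existing pool is a one-liner on any ZFS platform; something like this, where the pool and device names are just placeholders:

  zpool add tank log c2t0d0      # dedicated ZIL (SLOG) device
  zpool add tank cache c2t1d0    # L2ARC read cache
  zpool status tank              # check that the new vdevs show up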

 

Have you heard of anyone testing an unRAID parity rebuild with 2 M1015's in the x8 slots and comparing that to a rebuild with 1 M1015 in an x8 slot and 1 in an x4 slot? I'd be curious what sort of penalty that would have on parity checks (if any).

Link to comment


What kind of scenario are you looking into?

Adding a separate cache (L2ARC) and log (ZIL/SLOG) device is really only worthwhile for enterprise-grade workloads... nothing you couldn't compensate for with plain RAM in your VM for home use.

If you want some extra performance, for example when hosting a ZFS share as your VM datastore, a simple set of mirrored vdevs based on SSDs can provide that; you would probably limit such a pool to 4 SSDs plus a spare.

Also consider that building a dedicated ZFS box might be cheaper and more efficient.
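
As a rough sketch (pool and device names invented for illustration), such an SSD pool would be created with something like:

  zpool create ssdpool mirror c3t0d0 c3t1d0 mirror c3t2d0 c3t3d0 spare c3t4d0

That gives you two striped mirrors (RAID 10 style) plus a hot spare.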

Link to comment


Good to know that an SSD probably won't make a difference in my home environment.

 

I've been moving from a desktop environment to a laptop setup (with a docking station of sorts). The transition has been great except for storage space. I previously ran RAID 10 and RAID 5 on my desktop machines, but being on a laptop, I've lost out on the local storage a desktop provides.

 

Since my new ESXi build has a number of extra drive slots available, and I already have 4 1TB WD RE4 drives from an HP MicroServer I'm retiring, I figure it's a great time to put those into the new build and run a 6-drive RAID-Z2 array (buying 2 more 1TB drives). I can then automount that share via NFS on my laptop and essentially have 4TB of "direct" storage available for all my "big" data and other things I don't or can't store on the laptop's smaller SSD.

 

Besides storing data I frequently access (music and pictures), I also do audio and video editing and I'd like to write those files directly to the NFS share. I was previously considering going the Drobo DAS route since that avoids any network latency, but since I already have this server running ESXi, I figure a ZFS array on an NFS share would probably offer better performance than the proprietary Drobo unit.

 

At the end of the day, when I'm at home on my laptop I'd like to have that 4TB of storage available with redundancy and the speed of striped drives, so that I can access my big music collection quickly (without delays) and also use it for music recording and video editing directly on the share. Recording music shouldn't be a problem... but I'm interested in the read/write performance when it comes to video editing over an NFS RAID-Z share. This is where I was thinking something like SSDs could help out... but I'll have to do a bit more looking into it.
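
On the laptop side, mounting the share from OS X looks simple enough; something along these lines, where the server name, dataset and mount point are made up (resvport is apparently needed because most NFS servers only accept requests from privileged ports):

  sudo mkdir -p /Volumes/tank
  sudo mount -t nfs -o resvport,rw nas:/tank/media /Volumes/tank

For proper automounting I'd let OS X's autofs handle it via /etc/auto_master, so the share mounts on first access.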

Link to comment

...right.

Using/accessing the ZFS array from outside the ESXi host will limit speed to that of your network connection... a gigabit link tops out around 125MB/s, so unless you employ 10GbE NICs, I doubt you will see much of a performance gain from SSDs.

 

Anyway, you know about the possibility of swapping the M1015 slots between unRAID and ZFS.

Another option is to use an expander.

I am using a Solaris-based ZFS VM under ESXi with one M1015 and an expander (19 disks total)... all green drives, and I have no problem saturating a NIC. A scrub shows bandwidth of 800MB/sec, which is OK for me.

For ZFS, look into napp-it (www.napp-it.org), which adds web management to your (Open)Solaris-based filer.
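
As a rough outline (disk and pool names are placeholders; napp-it simply wraps the same commands in a web GUI), a 6-disk RAID-Z2 pool with an NFS share boils down to:

  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
  zfs create tank/media
  zfs set sharenfs=on tank/media   # Solaris/illumos NFS sharing via a ZFS property
  zpool status tank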

Link to comment

Yea... I've been doing a bunch of reading and I'm leaning towards OpenIndiana and napp-it for managing my ZFS pool. FreeNAS is pretty popular, probably because it's the easiest to get going, but it doesn't look like it performs as well as something like Solaris or FreeBSD (likely because it's running an older version of ZFS that still had issues). I'm torn between FreeBSD and OpenIndiana, but I'll probably go with OpenIndiana so I can play with something new :).

Link to comment

...if you're going with Solaris and napp-it, go with OmniOS, as recommended by the napp-it developer.

This is the most recent ZFS development outside of closed Oracle boundaries.

 

I did build a second ZFS filer based on Linux in order to get full-disk encryption working for a special array, using dm-crypt/LUKS and ZFS on Linux, which now has the same features as recent OmniOS.

This is a smaller build but I am very pleased with the performance so far, see: http://lime-technology.com/forum/index.php?topic=27822.msg247349#msg247349
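
Roughly, the layering looks like this (device names invented for illustration): each member disk gets a LUKS container, and the pool is built on the opened mappings.

  cryptsetup luksFormat /dev/sdb           # repeat for each member disk
  cryptsetup luksOpen /dev/sdb crypt_sdb   # repeat for each member disk
  zpool create ctank raidz2 /dev/mapper/crypt_sdb /dev/mapper/crypt_sdc /dev/mapper/crypt_sdd /dev/mapper/crypt_sde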

Link to comment

Never heard of OmniOS, but it looks like a better solution than OI for a home NAS. I do wonder about its future, though, as with all the Solaris forks. FreeBSD will continue to be around for a long, long time. That said, I could always switch from OmniOS to FreeBSD if OmniOS kicks the bucket.

Link to comment

Very interesting side discussion [clearly not unRAID-related, but it IS storage-related :) ] ... and I'm definitely intrigued by the speeds you're noting. Is there a good "beginner's link" for setting up a ZFS system? ... and OmniOS?

 

...look here: http://www.napp-it.org/downloads/omnios_en.html

For a dedicated box, like the HP46L, use: http://www.napp-it.org/manuals/to-go_en.html

Link to comment

Never heard of OmniOS, but it looks like a better solution than OI for a home NAS. I do wonder about its future, though, as with all the Solaris forks. FreeBSD will continue to be around for a long, long time. That said, I could always switch from OmniOS to FreeBSD if OmniOS kicks the bucket.

 

...you should be able to switch the OS as long as the pool and ZFS versions are supported by the ZFS implementation of the new OS.

 

Edit:

To make things clearer: a ZFS implementation is characterized by its pool and ZFS versions, which define the capabilities/features of that implementation.

When a ZFS array is created, this information is stored on the array itself.

You can export an array on one computer and import it on another.

The local ZFS implementation (i.e. the actual software package/library/OS) needs to support the features stored on the array, or the import will fail.
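
In practice the move looks something like this (pool name is just an example):

  zpool export tank        # on the old OS
  zpool import             # on the new OS: lists pools found on the attached disks
  zpool import tank
  zpool get version tank   # pool version; 'zpool upgrade -v' shows what the local implementation supports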

Link to comment
