
ESXi - using UnRaid as datastore.. is this part of the NFS issue?



I don't know if it's me, or if going to 5rc12a broke how this used to work.

 

I have my VMDKs on an NFS share on UnRaid. This worked fine under 4.7, never needing a reboot (of either UnRaid or my VMs).

 

Now my VMs occasionally get pathetically slow or non-responsive (to the point where I have to force power off the VM).

 

I'm using a BR10i card passed through to UnRaid.

 

Typically a reboot of the VM fixes it; sometimes both the Guest and UnRaid need to be reset.

 

With NFS, when I export a disk share, ESXi gives a permissions error. I have the share security set to Private, with a rule of *(rw).
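From what I've read, ESXi mounts NFS datastores as root, so my guess is the rule needs no_root_squash added, otherwise root gets squashed and the mount is denied. Something along these lines might work (the subnet is just a placeholder for whatever your LAN is; I haven't actually tried this on rc12a yet):

192.168.1.0/24(sec=sys,rw,no_root_squash)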

 

I can post a syslog, but I don't recall seeing anything in there other than emails from SF and spindowns.

 

I still haven't been able to trigger it at will. Is anyone else using UnRaid this way and seeing similar issues?

 

Almost tempted to go back to RDM + 4.7.


Are you sure it's not simply a matter of your VMs doing more I/O work than they used to? Having a datastore on unRAID isn't really the best idea... unRAID isn't designed for fast disk access, and if one of your VMs starts doing anything that requires high disk I/O (Sabnzbd downloading, par repairs/unpacking, torrents downloading, etc.), then all the rest are going to suffer.

 

You'd be far better off breaking out a couple of disks on another controller and setting up some sort of RAID storage for your datastore (using something like NAS4Free or FreeNAS in another VM), or using an SSD.

 

 


Are you sure it's not simply a matter of your VMs doing more I/O work than they used to? Having a datastore on unRAID isn't really the best idea... unRAID isn't designed for fast disk access, and if one of your VMs starts doing anything that requires high disk I/O (Sabnzbd downloading, par repairs/unpacking, torrents downloading, etc.), then all the rest are going to suffer.

 

You'd be far better off breaking out a couple of disks on another controller and setting up some sort of RAID storage for your datastore (using something like NAS4Free or FreeNAS in another VM), or using an SSD.

 

Thanks for the reply. Yeah, I'm sure it's not related to other I/O... there aren't a lot of VMs on there, and I shut down the only other resource-intensive VM; still the same.

 

It worked pretty much flawlessly under 4.7. VMs took a while to load, but once loaded, they were pretty snappy.

 

I have an SSD in the host now, but I can't fit more than two VMs' worth of VMDKs on it. My reason for going with UnRaid was that this seemed to work... I have about 3TB worth of VMDKs on the datastore, and all were fine.

 

I can't tell if this is related to the NFS issue or something completely different.


Newer RCs broke datastore hosting on unRAID via NFS for me.

 

I saw more extreme issues than you, though - I would get I/O errors in the guest operating systems very quickly, usually before they had even finished booting. I'm currently on rc3, which works fine in that respect.

 

I'm planning to move all the guests off that datastore as soon as possible to avoid this issue in the future. Given the current state of NFS, I don't have much faith that it will be better in the final release, or that it won't reappear randomly in a future point update of unRAID.


Newer RCs broke datastore hosting on unRAID via NFS for me.

 

I saw more extreme issues than you, though - I would get I/O errors in the guest operating systems very quickly, usually before they had even finished booting. I'm currently on rc3, which works fine in that respect.

 

I'm planning to move all the guests off that datastore as soon as possible to avoid this issue in the future. Given the current state of NFS, I don't have much faith that it will be better in the final release, or that it won't reappear randomly in a future point update of unRAID.

 

Yeah, it's unfortunate. I ended up going with UnRaid after seeing the awesome ESXi write-ups.

 

I actually did have an entire disk go corrupt; thanks to CrashPlan, I was able to recover. One thing that seemed to help a little is that I moved all of the working files off unRAID and onto an SSD datastore. I also used disk shares, to make sure that disk.vmdk and disk-flat.vmdk end up on the same physical disk, and I allocated 4GB of RAM outright instead of just setting a 4GB limit.
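For reference, I mount the disk share (rather than a user share) directly as the datastore, so ESXi only ever touches one physical disk. Roughly like this from the ESXi command line, with the hostname, disk number and datastore label being placeholders for whatever matches your setup (the vSphere Client's Add Storage > Network File System dialog does the same thing):

esxcli storage nfs add -H tower -s /mnt/disk1 -v unraid-disk1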

 

All of the above seemed to have helped a little bit.

 

I think Tom will fix the NFS issue at some point; the question is whether it will be before I lose another disk. If it's not fixed by the time I reach full array capacity (I'm halfway there), I will look into alternatives. Right now the leader is OpenFiler, which can act as an iSCSI target. I haven't dived too deep in, but I'll research it when I get to 80% capacity.

 

 


I use a ZFS pool running on an OpenIndiana VM for my NFS datastore. That's a popular option around here - well, the ZFS pool part is; most people use NAS4Free or FreeNAS, but I like using OpenIndiana. ZFS is super fast, too.

 

I've heard of RAID-Z and ZFS, but never OpenIndiana... how's the performance compared to UnRaid?


ZFS and unRAID are worlds apart. unRAID is bound to a single disk's performance, while ZFS is only limited by the size of the array, the amount of L2ARC/ZIL caching, and other optional extras. To be honest, my current ZFS array with 4x1TB drives in a raidz1 is more than adequate as a datastore.
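If you want to see what's involved, creating a pool like mine is basically a one-liner on the ZFS side. Roughly one of the following, where the pool name and device names are just placeholders that will differ depending on your OS and controller:

zpool create tank raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0   (Solaris/OpenIndiana-style device names)
zpool create tank raidz1 da0 da1 da2 da3               (FreeBSD-style names under NAS4Free/FreeNAS)

Everything else (compression, NFS sharing and so on) is just zfs set properties layered on top.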


ZFS and unRAID are worlds apart. unRAID is bound to a single disk's performance, while ZFS is only limited by the size of the array, the amount of L2ARC/ZIL caching, and other optional extras. To be honest, my current ZFS array with 4x1TB drives in a raidz1 is more than adequate as a datastore.

 

hmm - then what do you do with your UnRaid?


ZFS and unRAID are worlds apart. unRAID is bound to a single disk's performance, while ZFS is only limited by the size of the array, the amount of L2ARC/ZIL caching, and other optional extras. To be honest, my current ZFS array with 4x1TB drives in a raidz1 is more than adequate as a datastore.

 

hmm - then what do you do with your UnRaid?

 

 

Store over 20TB worth of movies and TV shows. ZFS is like traditional RAID in that all disks need to be spun up during reads and writes, which means disk longevity and power consumption come into play. It also means it lacks one of unRAID's best features: only the disk being read from or written to needs to be spun up. I use four 500GB 7200RPM laptop drives for my ZFS pool.


ZFS and unRAID are worlds apart. unRAID is bound to a single disk's performance, while ZFS is only limited by the size of the array, the amount of L2ARC/ZIL caching, and other optional extras. To be honest, my current ZFS array with 4x1TB drives in a raidz1 is more than adequate as a datastore.

 

Trying to understand how you guys do this.  Tell me if I have it right...

 

You have a FreeNAS or NAS4Free VM with a few HDs dedicated to it via passthrough and set up as ZFS or RAID-Z or whatever. You then add the FreeNAS *array* as a datastore in ESXi.

 

Right now I use three SSDs as my datastores for ESXi, but I'm getting close to running out of space and need room to expand. I guess I could always jury-rig more SSDs internally (on the inside wall of the case), but if the FreeNAS option is adequate, I could go that route.

 

John


Store over 20TB worth of movies and TV shows. ZFS is like traditional RAID in that all disks need to be spun up during reads and writes, which means disk longevity and power consumption come into play. It also means it lacks one of unRAID's best features: only the disk being read from or written to needs to be spun up. I use four 500GB 7200RPM laptop drives for my ZFS pool.

 

Ah, so you're only using ZFS where performance is critical, and UnRaid for everything else?

 

I'm tempted to try it out, but it seems like yet another thing to manage. We'll see how this NFS thing plays out; in 4.7, it just worked and worked and worked.



You have a FreeNAS or NAS4Free VM with a few HDs dedicated to it via passthrough and set up as ZFS or RAID-Z or whatever. You then add the FreeNAS *array* as a datastore in ESXi.

 

John

 

 

Bingo. Just share the zpool via NFS and add a new datastore using that NFS share as the target. My zpool disks are hooked up to an IBM M1015 controller card in IT mode passed through to the OpenIndiana VM.
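From memory the whole thing is only a couple of commands. Roughly the following, where the pool/dataset name, subnet, IP and datastore label are just examples for my setup:

(on the OpenIndiana VM)
zfs create tank/datastore
zfs set sharenfs=rw=@192.168.1.0/24,root=@192.168.1.0/24 tank/datastore

(on the ESXi host, or use Add Storage > Network File System in the vSphere Client)
esxcli storage nfs add -H 192.168.1.50 -s /tank/datastore -v zfs-datastore

The root= option matters, since ESXi mounts the share as root.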


Ah, so you're only using ZFS where performance is critical, and UnRaid for everything else?

 

 

Not sure about BetaQuasi, but the only thing my ZFS pool is used for is as a datastore; nothing else is stored on there. My OpenIndiana VM itself is stored on an SSD datastore, and that VMDK is in turn backed up from the SSD to my unRAID server.

 

OpenIndiana, by the way, is the open-source project that spun off from OpenSolaris when Oracle bought Sun and closed the source. ZFS originated in OpenSolaris.


Yep, same here - mine is only used as a datastore. I have various VMs that I bring up depending on what I need to replicate in my lab environment (a couple of Cisco Call Managers, various Server 2008/Server 2012 VMs, etc.). I've also moved any high-I/O VMs to the ZFS datastore, as I wanted to avoid smashing my SSDs (which seem to be showing the impact of lots of I/O after a year in place).

 

Management isn't too bad; the ZFS options are all rock solid (I've tried OpenIndiana, NAS4Free and FreeNAS, and settled on NAS4Free simply because I prefer its GUI). Once you have them up and running, they are pretty much set and forget. The only other thing to manage is boot order, i.e. making sure the ZFS array is up and online before booting any VMs stored on it.

 

Also, I did dick around with iSCSI for ESXi => ZFS for a while, but it turns out that NFS seems to edge it out in terms of performance. It's also worlds easier to add an NFS datastore to ESXi, and to share the ZFS pool out via NFS at the other end.

 

Yes, it does mean slightly higher power consumption, as the disks in the ZFS pool never spin down... that's the only real downside, though.


Yes, it does mean slightly higher power consumption, as the disks in the ZFS pool never spin down... that's the only real downside, though.

 

 

That's why I used laptop drives.  8)

I've got about fifteen 500GB 7200RPM Hitachi laptop drives just sitting around, so I figured I'd put a few to use; if one dies, I've got plenty of spares to replace it with.

  • 2 weeks later...

After rc15, and after setting the value to "never forget", I think my problems have disappeared. I haven't rebooted the VM for about 55 hours... the previous record was under 24 hours. unRAID itself hasn't seen a reboot since the release of rc15, 8 days ago. All of my nvram and other "working" files for the VM are on the SSD; this way, even if the array is shut down, I don't get a "config not found" type error in ESXi.
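For anyone wanting to do the same, I pointed the VM's working files at the SSD datastore with a couple of entries in the .vmx - I'm going from memory on the exact lines, and the paths are just examples for my layout (edit with the VM powered off):

workingDir = "/vmfs/volumes/ssd-datastore/myvm"
nvram = "/vmfs/volumes/ssd-datastore/myvm/myvm.nvram"

workingDir covers the swap and snapshot files; the nvram line moves the BIOS settings file.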

