ATLAS: My Virtualized unRAID Server



 

The problem I have is the motherboard I am using. I have no more PCIe slots.

 

I'm not an expert, but would an expander card not be an option?

 

It wouldn't need to be plugged into a PCIe slot; it could be powered by Molex and just occupy the space of the unused PCI slot.

 

Like you, I have a lack of PCIe slots and have been considering this option, though I was looking to do it with an M1015.

 

Sent from my Samsung Galaxy S2 using Tapatalk 2

 

 


 

Thanks for the reply, and I understand you don't have time at the moment.

 

The part I'm interested in is the ZFS cache for unRAID... care to elaborate? And would RAID-Z parity also be possible for unRAID?

 

Technically, it is just a virtual disk sitting on an NFS share that ESXi is mapped to.

As I stated, you could also use an iSCSI target instead of an NFS share.
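
For anyone who wants to try the NFS route, mounting the export as an ESXi datastore is a one-liner from the ESXi shell. The host name, export path and datastore label below are placeholders, so treat this as a sketch rather than the exact setup used here:

    # Mount an NFS export as an ESXi datastore (ESXi 5.x esxcli syntax)
    # "freenas.local", "/mnt/tank/vmstore" and "zfs-store" are example values
    esxcli storage nfs add --host=freenas.local --share=/mnt/tank/vmstore --volume-name=zfs-store

    # Confirm the datastore is mounted and accessible
    esxcli storage nfs list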

 

As far as a parity drive on the ZFS array goes: as far as unRAID is concerned, it is possible; as far as ESXi is concerned, it is not (depending on your array drive size, see below).

 

The problem you will run into is the maximum VMDK size, which is 2TB. Then you run into the issue that there is 512B of overhead, which would make your parity disk "smaller" than your data disks in an all-2TB-disk array. If you have 3TB+ drives, it all goes out the window before you even get started.

 

So, unless you are running an array of 1.5TB or smaller data drives, I do not think it is possible (this is all untested by me; I am guessing based on my knowledge).

 

There is a chance that ESXi has increased that 2TB limit in 5.1, but I don't recall that it has. (Anyone?)

 

Unless you are pounding on your unRAID with multiple disk writes at once, I don't see a need to do this (especially if your cache is on the ZFS RAID [or a normal cache drive]).

If you are pounding your parity excessively hard, get a 7200rpm parity disk. If that is not enough, then you might need a hardware RAID controller (one that can be passed through in ESXi). [This is all overkill and not really needed for 98% of unRAID users.]


Well, unless I am confused about how this would work, these are my thoughts:

I would replace the SUPERMICRO AOC-SASLP-MV8 with an M1015; then I could use an expander. The problem with this, as it applies to my scenario, is that I pass through the SUPERMICRO AOC-SASLP-MV8. My thought is that if I replace the SUPERMICRO AOC-SASLP-MV8 with the M1015, I would pass through the M1015, and that would also pass through the expander to my unRAID.

 

I certainly could be wrong on this (I certainly hope I am), but I believe it to be true. I do greatly appreciate the suggestion, and I am open to any other suggestions.


I had the same issue with one of my WHS guests on RDM. It turned out in the end that the drive itself was starting to melt down; it was getting all sorts of SMART errors and was close to end of life. After I replaced it, all was good.

I really don't have an answer for you. In theory, it should work.

As Bob pointed out, life might be easier if you pick up a SATA controller just for the Sage guest. You would also then get SMART and spin-down support on the guest.

At this point I have no more RDM guests; everything has its own controller or is on a virtual controller.

Sometimes there is a little fine-tuning needed to make a guest happy, and sometimes the issue is not so obvious.

 

The problem I have is the motherboard I am using. I have no more PCIe slots.

The mobo I have is a SUPERMICRO MBD-X8SIL-F-O. It has 3 PCIe slots plus a regular PCI slot.

1 PCIe slot is used by the SUPERMICRO AOC-SASLP-MV8 for my unRAID machine.

2 are used by the Hauppauge Colossus cards.

The PCI slot is free but shared with the onboard video.

 

I guess I have 2 choices.

1. Change back to the HD-PVR (USB). I have 2, but one of them is not functioning correctly. This move would free up 2 PCIe x4 slots. Then I could buy another controller to pass through to SageTV, plus I would need to purchase a good USB controller to pass through. So this option would be the cost of another HD-PVR ($190), a USB controller ($10-30), and another SATA controller.

2. Stick with the Colossus cards and change mobos: http://www.newegg.com/Product/Product.aspx?Item=N82E16813182235. That is one I have looked at. My processor is already a socket 1156. This would cost $239.00 plus shipping.

So basically the cost is the same for either option, although I tend to lean towards the new mobo. What are your thoughts on these scenarios?

 

Thanks for the help

Bill

For Sage recording drives, a PCI HDD controller would be fine. My personal favorite is the SuperMicro AOC-SAT2-MV8. Even though it is designed for PCI-X slots, it works in PCI and would give you up to 8 ports for recording drives. I had my living room server with 7 recording drives on that port on an ASUS P5Q-EM MB for a couple of years, and it worked really well. I would not recommend that for unRAID, where you have parity checks, but for JBOD recording drives for a Windows SageTV VM it should work well. If you don't want that card, then any 4-port PCI card should work just as well - which might be necessary if a PCI-X card won't actually work in your PCI slot due to MB chips being in the way.
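
Rough numbers behind that recommendation (ballpark figures, not measurements):

    Shared 32-bit/33MHz PCI bus:  ~133 MB/s theoretical, less in practice
    One HD recording stream:      ~1-2.5 MB/s (roughly 8-20 Mbps)

    A handful of simultaneous recordings barely touches the bus, but an unRAID
    parity check wants full sequential throughput from every drive at once
    (100+ MB/s per disk), which a shared PCI bus cannot come close to supplying.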

 

Or, as you said, an M1015 with an expander will free up another PCIe slot. That is how I virtualized my SageTV servers in the basement - I used expanders so that I could have 24-drive unRAID VMs on a single PCIe slot.


 

The problem I have is the motherboard I am using. I have no more PCIe slots.

 

I'm not an expert, but would an expander card not be an option?

 

It wouldn't need to be plugged into a PCIe slot; it could be powered by Molex and just occupy the space of the unused PCI slot.

 

Like you, I have a lack of PCIe slots and have been considering this option, though I was looking to do it with an M1015.

 

Sent from my Samsung Galaxy S2 using Tapatalk 2

 

Ehh... I see the issue. Well, technically, you do not need video with ESXi, but then you run into the possible issue of whether PCI is fast enough for multiple recordings at once.

 

You might try popping the suspect disk into another PC as a slave/secondary drive and running some tests on it to make sure it is happy - SMART and surface scans at the least.

Downtime sucks, but random crashing sucks more.
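
If the suspect disk ends up in a Linux box (or at the unRAID console), something along these lines covers the SMART side; on Windows, the drive vendor's diagnostic tool does the same job. /dev/sdX is whatever the disk shows up as:

    # Dump SMART health and attributes (watch reallocated and pending sector counts)
    smartctl -a /dev/sdX

    # Start a long (surface) self-test, then read the results once it finishes
    smartctl -t long /dev/sdX
    smartctl -l selftest /dev/sdX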

 

In this case an expander won't help... yet.

 

EDIT: lol, I quoted the wrong post, but it still applies.

I see Bob chimed in with some good wisdom: the AOC-SAT2-MV8 and bandwidth for recording.


 

EDIT: lol, I quoted the wrong post, but it still applies.

I see Bob chimed in with some good wisdom: the AOC-SAT2-MV8 and bandwidth for recording.

 

lol no worries, my post was useless anyway - I missed his original post mentioning the problems he was having with Sage... Ignore me ::)

 

Sent from my Samsung Galaxy S2 using Tapatalk 2

 

 


Does anyone here use unRAID as an NFS target within ESXi as a datastore?

I.e., once your unRAID VM has booted, you then map a new datastore to it via NFS and create new guests stored on the unRAID datastore?

I do this, and whilst performance isn't anything to get excited about, it obviously works and is convenient.

However, the latest unRAID beta isn't happy doing this for some reason. ESXi doesn't seem to have an issue with it as a datastore, but existing guests boot and then, after a short period (sometimes even whilst booting), see I/O errors - i.e. problems writing to their virtual disks, which are stored on unRAID via NFS.

If I change nothing else but boot into an older version of unRAID (currently running 5.0-rc3), it's all fine again. I'm aware NFS was a bit of an issue during the RC cycle, but:

- I thought it had been fixed.

- I thought it mainly affected user shares exported via NFS, whereas I'm sharing my cache drive directly.

Is anyone doing this successfully under the latest release, or has anyone seen a similar issue?
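
For reference, two quick checks from the ESXi shell can at least show whether the host still considers the NFS datastore healthy when the guests start throwing I/O errors (a rough sketch; the grep is just a crude filter):

    # Is the NFS datastore still mounted and accessible?
    esxcli storage nfs list

    # Look for NFS-related events around the time of the failures
    grep -i nfs /var/log/vmkernel.log | tail -n 50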


And to add to my post: if you want to, you can use a port multiplier and an external storage cage for your recording drives. I'm using a HighPoint 1742 in the PCI port on my Tyan S5512GM2NR MB right now for recording drives - 5 3TB recording drives. It also has 2 internal ports as well as the 2 external ones. I've got a Rosewill 5-drive port multiplier drive cage holding the recording drives for one of my SageTV VMs and am thinking about the same setup for the 2nd SageTV VM on my other ESXi server next year. I did have to upgrade the drivers to use >2TB drives in the external enclosure, but once I got the most current ones it has been working well. A lot better than when I tried this with a SIL3124 or SIL3132 controller.

Does anyone here use unRAID as an NFS target within ESXi as a datastore?

I.e., once your unRAID VM has booted, you then map a new datastore to it via NFS and create new guests stored on the unRAID datastore?

I do this, and whilst performance isn't anything to get excited about, it obviously works and is convenient.

However, the latest unRAID beta isn't happy doing this for some reason. ESXi doesn't seem to have an issue with it as a datastore, but existing guests boot and then, after a short period (sometimes even whilst booting), see I/O errors - i.e. problems writing to their virtual disks, which are stored on unRAID via NFS.

If I change nothing else but boot into an older version of unRAID (currently running 5.0-rc3), it's all fine again. I'm aware NFS was a bit of an issue during the RC cycle, but:

- I thought it had been fixed.

- I thought it mainly affected user shares exported via NFS, whereas I'm sharing my cache drive directly.

Is anyone doing this successfully under the latest release, or has anyone seen a similar issue?

 

Not I. I am doing the exact opposite: my cache drive is on an NFS share.

Up until about a week ago, I had only been running SMB on my unRAID. I just turned on AFP as a Time Machine target for several Macs, and I have been having odd permission errors on both AFP and SMB since.

If this keeps up and I can't troubleshoot it quickly, I am going to move the TM target to another server or guest.

And to add to my post: if you want to, you can use a port multiplier and an external storage cage for your recording drives. I'm using a HighPoint 1742 in the PCI port on my Tyan S5512GM2NR MB right now for recording drives - 5 3TB recording drives. It also has 2 internal ports as well as the 2 external ones. I've got a Rosewill 5-drive port multiplier drive cage holding the recording drives for one of my SageTV VMs and am thinking about the same setup for the 2nd SageTV VM on my other ESXi server next year. I did have to upgrade the drivers to use >2TB drives in the external enclosure, but once I got the most current ones it has been working well. A lot better than when I tried this with a SIL3124 or SIL3132 controller.

 

Do you record HD? On my mobo, the PCI slot and the onboard video share the same bus. I put a PCI serial card in and tried to pass it through, and it killed ESXi. I don't remember if I passed both devices or just one to the VM. Do you have any experience with PCI pass-through in this situation?

 

You might try popping the suspect disk into another PC as a slave/secondary drive and running some tests on it to make sure it is happy - SMART and surface scans at the least.

Downtime sucks, but random crashing sucks more.

 

What utility do you like to use to run tests on the drive? Could I also do this from the VM, or is it better to remove the disk and attach it to a desktop? I run mainly Windows.

Do you record HD?
Yes: 2 HD-PVRs on a PCIe USB card with pass-through, plus OTA and QAM HD on 6 tuners - 1 AVerMedia Duet (OTA), 1 HVR-2250 (OTA) and 1 HDHomeRun (QAM). The only problems I've had have been HD-PVR related; about 1 recording every 2 months is bad. The latest was actually OK but was 400GB+ in size because the recording never stopped, so it got about 10+ hours of the same channel.
On my mobo, the PCI slot and the onboard video share the same bus. I put a PCI serial card in and tried to pass it through, and it killed ESXi. I don't remember if I passed both devices or just one to the VM. Do you have any experience with PCI pass-through in this situation?
No, I can't say that I have. Try turning off the onboard video - as Johnm said, you don't need video for ESXi. If you need to see video while setting up ESXi, use a plug-in card; then, once it is set up and booting, remove the video card and check that it still boots. I would also turn off the parallel and serial ports if you are not going to use them with a VM. That may also help your booting problems.

OK, now that actually sounds like a solution. I will look through the BIOS and figure out how to turn off the video. Then I could install the PCI serial card (just as a test), configure PCI pass-through and see how it shows up. Then, if that looks OK, I can pull the recording drive out of ESXi and run the suggested programs to check for SMART issues. If it checks out OK, I think I will do the following:

Purchase a suggested PCI SATA card.

Place my boot drive and recording drive on it.

Create a new VM for SageTV with the PCI SATA card passed through.

I want to get my Raptor drive back as the boot drive for SageTV.

 

What are your thoughts on a setup like this? I currently own a Rosewill eSATA II 2-port PCIe HDD controller card (RC-219, Sil3132 chipset). Do you know if this would be backwards compatible with PCI?

 

Thoughts, suggestions? I appreciate any suggestions for controllers that are PCI compatible and will provide the necessary performance. I could also do this: buy an M1015 (although I am running unRAID 5rc8a - is it compatible?) and move the SUPERMICRO AOC-SASLP-MV8 to the PCI slot for SageTV. This would allow me to add the expander in case I wanted to add more drives to my unRAID.

 

Thanks

Bill


I do not have time now; maybe in the future. Here is the basic idea, though.

 

 

Certainly understand; unfortunately I already read that post of yours.

 

I'm not really asking for a tutorial (like some others said); I should be able to work out something like Solaris/FreeBSD (slowly getting better with Linux, thanks to my Raspberry Pi!) and ZFS. I'm mainly asking about your setup:

i.e., what HDDs in what sort of array, on what controller, and what speeds you get.


OK, now that actually sounds like a solution. I will look through the BIOS and figure out how to turn off the video. Then I could install the PCI serial card (just as a test), configure PCI pass-through and see how it shows up. Then, if that looks OK, I can pull the recording drive out of ESXi and run the suggested programs to check for SMART issues.

That sounds like a good plan.
If it checks out OK, I think I will do the following:

Purchase a suggested PCI SATA card.

Place my boot drive and recording drive on it.

Create a new VM for SageTV with the PCI SATA card passed through.

I want to get my Raptor drive back as the boot drive for SageTV.

The only problem I see here is that you may not be able to boot the VM from the PCI card that is passed through. I tried it with a SASLP-MV8 and it wouldn't boot; I had to create a virtual boot drive to boot my VM from. It may have just been me - I didn't try all the options, as I was running out of time and had to get my SageTV server up and running. Another option would be to use your Raptor as a datastore drive and put a virtual boot drive for your SageTV VM on it. That's how I currently have my HD-PVR server. Or you could RDM your Raptor as the boot drive on a virtual controller and get some speed; my other ESXi server's SageTV VM boots that way from an SSD. I will be switching to using another Raptor because ESXi doesn't really support SSDs and I don't want to burn it out. The SSD doesn't have garbage collection, so if Windows isn't using TRIM because the drive is RDM'd (even though Windows says it is enabled), I could be burning out the SSD.
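
For reference, RDM'ing a local disk like the Raptor is done with vmkfstools from the ESXi shell; the device identifier and datastore path below are just examples, so adjust them to the actual disk and datastore:

    # Create a virtual-compatibility RDM pointer file for a local disk
    vmkfstools -r /vmfs/devices/disks/t10.ATA_____EXAMPLE_RAPTOR_SERIAL \
        /vmfs/volumes/datastore1/SageTV/raptor-rdm.vmdk

    # Use -z instead of -r for a physical-compatibility RDM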

What are your thoughts on a setup like this? I currently own a Rosewill eSATA II 2-port PCIe HDD controller card (RC-219, Sil3132 chipset). Do you know if this would be backwards compatible with PCI?
Since it is PCIe, the only way you could get it to work in a PCI port is with a riser card that converts to PCIe at the same time - which may not exist. I do know a PCIe x4 to 4x PCIe x1 splitter exists, but if I remember correctly it was $800. Using this you could take a PCIe x4 slot and split it into 4 PCIe x1 slots for the 2 Colossus cards and your Rosewill controller. You may or may not be able to pass them through individually, but since they would all go to the SageTV VM it may not matter. You would still be in uncharted waters, as I haven't heard of anybody using a PCIe splitter in ESXi - but I haven't looked either.

Thoughts, suggestions? I appreciate any suggestions for controllers that are PCI compatible and will provide the necessary performance. I could also do this: buy an M1015 (although I am running unRAID 5rc8a - is it compatible?) and move the SUPERMICRO AOC-SASLP-MV8 to the PCI slot for SageTV. This would allow me to add the expander in case I wanted to add more drives to my unRAID.

 

Thanks

Bill

This is probably the best option, but also the most expensive other than using a PCIe splitter. I still don't trust port multipliers. When I was using the Sil3132 controller I got with my external 5-port enclosure, it worked just fine for several months and then started dropping drives. I'm hoping my HighPoint controller works better, but I've only had it connected for 48 days so far - not long enough to say it is a complete success yet. If you want performance for your boot drive, I would get an SSD with garbage collection and RDM it to a virtual controller. It still has more speed than a spinner that way - even a Raptor. If my SSDs had garbage collection built in, I would have retired my Raptors.

So it seems from this discussion that my best (yet most expensive) option is to change motherboards. What are your thoughts regarding the mobo I linked? It has 1 PCIe x16, 2 PCIe x8, 1 PCIe x4, and 2 PCIe x1 slots.

 

My thoughts:

The Colossus cards could be placed in the 2 PCIe x8 slots. The AOC card could be placed in the PCIe x4 slot (unRAID). Then I could use the Rosewill card I listed above in a PCIe x1 slot. This would give me the flexibility to add another storage controller if so desired (PCIe x16).

 

If you have a suggestion for a cheap PCI card that will work for my situation and is compatible with ESXi, I am all ears (so to speak). It seems as though my mobo is the limiting factor. I really need to stabilize SageTV.

 

Thanks

Bill


So it seems from this discussion that my best (yet most expensive) option is to change motherboards. What are your thoughts regarding the mobo I linked? It has 1 PCIe x16, 2 PCIe x8, 1 PCIe x4, and 2 PCIe x1 slots.

It has IPMI, so that is a plus. If I had an 1156 CPU, this is the board I would have tried to virtualize. I do see that Newegg says it has 3 x8 electrical PCIe slots, one of which is in a x16 physical slot. I'm only bringing that to your attention so that you don't think you are getting a full x16 electrical PCIe slot; electrically it is the same as the other 2 x8 physical slots.

 

My thoughts:

The Colossus cards could be placed in the 2 PCIe x8 slots. The AOC card could be placed in the PCIe x4 slot (unRAID). Then I could use the Rosewill card I listed above in a PCIe x1 slot. This would give me the flexibility to add another storage controller if so desired (PCIe x16).

 

If you have a suggestion for a cheap PCI card that will work for my situation and is compatible with ESXi, I am all ears (so to speak). It seems as though my mobo is the limiting factor. I really need to stabilize SageTV.

 

Thanks

Bill

Just remember: even if you change your MB, you may not be able to boot from a card passed through to the VM. Here is a short thread on HardForum with your MB where the questioner says he cannot boot from the passed-through MB controller, so it is a similar situation to an actual PCI/PCIe card.

 

Personally, I would set up your SageTV VM to boot from a virtual HDD on an ESXi datastore on your existing hardware first. Get your system stable and virtualized, then tweak it with faster boot hardware. I think you will find that SageTV will work fine off a virtual HDD located on an ESXi datastore. Once you have everything working, you can start experimenting with different configurations that will improve your SageTV performance.
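
Creating that virtual boot HDD is also a one-liner from the ESXi shell if you prefer it over the vSphere client; the size, format and path here are only examples:

    # Thin-provisioned 40GB boot disk for the SageTV VM on an existing datastore
    vmkfstools -c 40G -d thin /vmfs/volumes/datastore1/SageTV/SageTV-boot.vmdk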

 

I'm assuming you are trying to improve the performance of SageTV HD extenders? If you have client computers to watch from, you won't need more performance on the server - at least not on the OS boot drive anyway.

 

You could also try some of the options I didn't. 

 

Like setting up a small virtual HDD as the boot drive but installing the Windows files on a drive off your controller. So your C: drive is small, and your D: drive is where you install Windows. I haven't done a split install recently, but I had an XP desktop set up this way about 10 years ago.

 

Another possibility, especially if Windows doesn't allow a split install anymore, is to install all of Windows on your C: drive but install SageTV to a drive (your Raptor) on the passed-through controller. Then you get the Raptor's speed for everything in the SageTV directory, which should improve extender speed.


I have SageTV installed and working the best I have ever seen as far as performance goes. The issue is the server crashing. I cannot tell you with confidence exactly when this started, but I feel it started when I tried to pass through the serial PCI card and was forced to reinstall ESXi.

Since that day, I have been having weird issues with the SageTV server. This is the only server having the issue.

The first setup I had was a 250GB Raptor RDM'd as the boot drive and a 1TB WD Red RDM'd as the recording drive. The server would randomly crash; it would lose a drive. I first thought it was the Raptor (a brand new drive). I thought this because it was crashing, so I decided to make a datastore out of the Raptor. Well, I lost the Raptor datastore for no reason. So I pulled the Raptor and rebuilt the Sage server. I used my WD Red for the boot drive and a 1TB drive for the recording drive (RDM'd both). Right now, approximately every 1.5-2 days, I lose the recording drive. I don't believe I have lost the boot drive.


Yo,

 

I just installed my 1st All-In-One server with 2 VMs in a Norco RPC-4224.

I would be interested in buying a second one for my unRAID VM and putting it on an Intel RES2SV240 expander card attached to an M1015 in a PCIe x8 slot on my server.

The question is how to power the second chassis, and I want to be able to power it down when not in use!

 

ty


I do not have time now; maybe in the future. Here is the basic idea, though.

 

 

Certainly understand; unfortunately I already read that post of yours.

 

I'm not really asking for a tutorial (like some others said); I should be able to work out something like Solaris/FreeBSD (slowly getting better with Linux, thanks to my Raspberry Pi!) and ZFS. I'm mainly asking about your setup:

i.e., what HDDs in what sort of array, on what controller, and what speeds you get.

The reason for the decision to add a ZFS array was that I did not like having single spinners for data; I have had too many fail in my lifetime. That, and my disk IO was really the weakest link in my ESXi box. (Also, my single 2TB datastore was getting SMART errors - it was time to replace it.)

I was going to get a cheapo RAID card like an Areca or a used HP. I had a few spare M1015s lying about, so I decided: why not try ZFS as a guest?

 

The guest is pretty simple: just a vanilla ZFS RAID-Z on a FreeNAS guest with an NFS export. It has its own M1015 with 4x 2TB Samsung Green F4s. The array gets about 450MB/s.
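
For anyone curious what that looks like under the hood, it boils down to a single RAID-Z vdev across the four disks (FreeNAS normally does this through its GUI; the pool and device names below are examples):

    # One RAID-Z vdev across the four 2TB disks: ~6TB usable, survives one disk failure
    zpool create tank raidz ada0 ada1 ada2 ada3

    # Check layout and health
    zpool status tank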

 

My plan was to later add a second RAID-Z vdev and then combine them into a single zpool for double the IO (think RAID 50). In the end I decided my disk IO is fine and that I really don't need a 6TB datastore. That, and I would have to split my server into a head/DAS configuration. (I might still do this in the future; I just don't know if I want 8 drives spinning 24x7.)
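
That "RAID 50"-style expansion would simply be a second RAID-Z vdev added to the same pool, so ZFS stripes writes across both vdevs (again just a sketch with example device names):

    # Add a second 4-disk RAID-Z vdev; the pool now stripes across two vdevs
    zpool add tank raidz ada4 ada5 ada6 ada7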

 

This post compares the SATA3 SSD vs. the array speeds inside my Usenet guest:

http://lime-technology.com/forum/index.php?topic=14695.msg182730;topicseen#msg182730

You can see the array is writing faster than the SSD and reading almost as fast.

In this image, the 60GB disk is the SSD and the 500GB disk is the ZFS array. Both disks also share IO with other systems, so there is a slight hit here.

[benchmark screenshot comparing the two virtual disks]

 

I have the Usenet guest and my unRAID cache drive on this array (along with several other guests). The array is reading and writing 24x7, and it has gone about 6 months now without a single hiccup.

 

The build was sort of a PITA to get running at first; I had to tinker with it a bit. I had two or three big issues right off the bat. The first was that the version of FreeNAS I was running did not natively support the M1015 card. I also had to add some "tunables"... I can't remember why. I believe that was because the NFS overhead on top of ZFS brought the array to a crawl? I also remember having an issue getting iSCSI to be stable (my plan was iSCSI, not NFS, at first). I almost wish I had made a writeup back then... If I had to rebuild the guest, I would be lost myself.
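
FreeNAS "tunables" are just FreeBSD loader/sysctl variables set from the GUI. The exact ones (and values) this build needed aren't recorded here, so the lines below only illustrate the mechanism and are not a recommendation:

    # /boot/loader.conf style tunables (example values only)
    vfs.zfs.arc_max="4294967296"     # cap the ZFS ARC at 4GB so the guest's RAM isn't exhausted
    vfs.zfs.prefetch_disable=1       # sometimes toggled when NFS workloads misbehave

    # runtime sysctl example
    sysctl kern.ipc.maxsockbuf=2097152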

I had planned on switching to OpenIndiana rather than sticking with FreeNAS (the OI guest is built and works); I just never got around to migrating the data.

 

To give you an idea of Atlas's stability: the FreeNAS guest is at 112 days of uptime, and the server itself has been up longer. I think my last hard reboot was when I replaced my dead expander. It is about time to take it down for cleaning - it is getting dusty inside.

 

PS:

* A nice advantage of the NFS datastore is gigabit speed to the datastore for migrations or backups.

* If your unRAID cache is on it, you can create a hybrid unRAID server: a fast, protected front-end cache array that then migrates to the long-term storage array. You can create "cache only" protected shares for things that you need to be faster than standard unRAID - Lightroom data, for example.



When I compared OpenIndiana with FreeNAS ZFS last year, I saw that OI was almost twice as fast compared to the older kernel of FreeNAS!


When I compared OpenIndiana with FreeNAS ZFS last year, I saw that OI was almost twice as fast compared to the older kernel of FreeNAS!

 

The speed I am getting from this small array is about on par with the speeds I get from Areca hardware RAIDs.

One reason I tried FreeNAS first was how nicely it plays with ESXi - it is VM-aware out of the box - and it was easier to configure.

 

For a larger array, I would go OI.

 

Keep in mind: ZFS is very unforgiving. Keep regular backups of the array!

 

Wow Johnm, that's awesome!

One quick question: you have 4x 2TB drives in the array - how large does it present as? 6TB, like RAID 5?

Yes. RAID-Z is software RAID 5.

If you want better performance, you can make 2 mirrored vdevs and put them in one zpool (similar to RAID 10); from the 4 drives this would net 4TB.
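
With example device names, the two layouts for 4x 2TB disks and their rough usable capacity look like this:

    # Single RAID-Z vdev: (4 - 1) x 2TB = ~6TB usable, any one disk can fail
    zpool create tank raidz ada0 ada1 ada2 ada3

    # Two mirrored vdevs striped together (RAID 10 style): 2 x 2TB = ~4TB usable,
    # better IOPS, one disk per mirror can fail
    zpool create tank mirror ada0 ada1 mirror ada2 ada3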

 

Of course, you don't need 4 disks, but I wanted to put my entire WHS2011 on the array along with my cache and downloading box.

 

 


I have SageTV installed and working the best I have ever seen as far as performance goes. The issue is the server crashing. I cannot tell you with confidence exactly when this started, but I feel it started when I tried to pass through the serial PCI card and was forced to reinstall ESXi.

Since that day, I have been having weird issues with the SageTV server. This is the only server having the issue.

The first setup I had was a 250GB Raptor RDM'd as the boot drive and a 1TB WD Red RDM'd as the recording drive. The server would randomly crash; it would lose a drive. I first thought it was the Raptor (a brand new drive). I thought this because it was crashing, so I decided to make a datastore out of the Raptor. Well, I lost the Raptor datastore for no reason. So I pulled the Raptor and rebuilt the Sage server. I used my WD Red for the boot drive and a 1TB drive for the recording drive (RDM'd both). Right now, approximately every 1.5-2 days, I lose the recording drive. I don't believe I have lost the boot drive.

As for losing the recording drive: if you pass through the controller, that should solve the dropping problem there. But otherwise I'm not sure what to tell you. I would work on one VM at a time until you get them solid.
