ATLAS: My Virtualized unRAID Server



Johnm, I just installed a 240GB SSD for my second datastore. I currently have a 1TB datastore which contains a 400GB RDM pointer for my cache drive. How can I move that RDM to the SSD datastore? Every time I try to move the RDM, I run out of space.

 

Doing "du -sh *.vmdk" on the RDM folder gives the correct size:

0      unRaidCache-rdmp.vmdk

64.0K  unRaidCache.vmdk

 

Link to comment

It only takes a minute or two to create a vmdk boot drive from scratch or from an image.

You will most likely have to take your server down anyway to test that it boots correctly.

 

Just remember to update your vmdk when you update your flash version; the two need to be on the same version.

This is a good reason to learn how to create your own, so you're not waiting for someone else to update an image when a new version hits.
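
For anyone who wants a concrete example, here is a rough sketch of how that sync could be done. It assumes the new unRAID release has already been extracted to /tmp/unraid-new and that the vmdk boot volume is mounted at /boot inside the guest, so adjust the paths for your own setup:

# back up the current kernel and root image on the vmdk boot volume
cp /boot/bzimage /boot/bzimage.bak
cp /boot/bzroot /boot/bzroot.bak
# copy the new release files over, then reboot the guest
cp /tmp/unraid-new/bzimage /tmp/unraid-new/bzroot /boot/
reboot

Then do the same on the physical flash so both stay on the same version.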

Link to comment

Johnm, I just installed a 240GB SSD for my second datastore. I currently have a 1TB datastore which contains a 400GB RDM pointer for my cache drive. How can I move that RDM to the SSD datastore? Every time I try to move the RDM, I run out of space.

 

Doing "du -sh *.vmdk" on the RDM folder gives the correct size:

0      unRaidCache-rdmp.vmdk

64.0K  unRaidCache.vmdk

 

You are correct. The normal behavior of an RDM is similar to a thin-provisioned virtual disk: when you move or copy it, it expands the image out to the full size of the physical disk (or so it thinks). Basically, it copies the actual RDM drive (not only does it trick the guest, it apparently tricks itself).

 

 

Here are the actual instructions from VMware: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1005241

 

When I migrated mine, honestly, I just created a new RDM on the new drive, then had the guest point to the new RDM. I only had one or two disks to migrate, so it only took a minute to recreate the pointer and switch the guest over, and it stopped me from bashing my head against the wall.
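
If it helps anyone following the same route, this is roughly what that looks like from the ESXi console; the device ID and the datastore/folder names below are placeholders, so substitute your own:

# find the vml ID of the physical cache disk
ls -l /vmfs/devices/disks/
# create a new RDM pointer file on the SSD datastore
cd /vmfs/volumes/SSD-datastore/unRAID/
vmkfstools -r /vmfs/devices/disks/vml.XXXXXXXXXXXX unRaidCache.vmdk
# then detach the old pointer from the guest and add the new vmdk
# as an existing disk through the vSphere client

Once the guest boots cleanly from the new pointer, the old pointer files on the 1TB datastore can be deleted.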

 

 

 

 

 

 

 

Link to comment

 

Specs look correct. That's a nice price for the RAM.

 

 

This RAM seems to be working for the past hour... I have two 4GB sticks for 8GB, and I just added one 8GB stick for a total of 16GB.

I will most likely order another stick to fill the other slot... my VMs are running way smoother.

 

Link to comment

Johnm, I just installed a 240GB SSD for my second datastore. I currently have a 1TB datastore which contains a 400GB RDM pointer for my cache drive. How can I move that RDM to the SSD datastore? Every time I try to move the RDM, I run out of space.

 

Doing "du -sh *.vmdk" on the RDM folder gives the correct size:

0      unRaidCache-rdmp.vmdk

64.0K  unRaidCache.vmdk

 

You are correct. The normal behavior of an RDM is similar to a thin-provisioned virtual disk: when you move or copy it, it expands the image out to the full size of the physical disk (or so it thinks). Basically, it copies the actual RDM drive (not only does it trick the guest, it apparently tricks itself).

 

 

Here are the actual instructions from VMware: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1005241

 

When I migrated mine, honestly, I just created a new RDM on the new drive, then had the guest point to the new RDM. I only had one or two disks to migrate, so it only took a minute to recreate the pointer and switch the guest over, and it stopped me from bashing my head against the wall.

 

I did what you suggested and it saved me a lot of frustration. Why did I not just create a new RDM on the new datastore in the first place? lol! Thanks for the advice!

Link to comment

Yo,

 

I'm trying the ALL-IN-ONE and am currently using two tutorials: one from Johnm and one from a guy who wrote a GUI for ZFS.

I was wondering... I want to build a Win7 x64, unRAID, and ZFS server (ZFS for cache), and he puts all the data and VMs on one SSD boot disk! Do I need more disks?

Also, how would you split the RAM (16GB) and cores (4) between Win7, unRAID, and ZFS? I was thinking Win7 (4GB), unRAID (4GB), and ZFS needs the most (8GB).

Another question: where do I put the unRAID parity drive? I have three M1015s, or do I put it on the mobo? Two of the M1015s are in use for ZFS at the moment.

The idea is to move all of my media off ZFS and put it on unRAID. I have one Hitachi 3TB 7200rpm drive for parity and four WD 3TB drives free. I'll be freeing up my two ZFS pools and using those disks to expand unRAID.

 

thanks

Link to comment

Sorry for a slightly off-topic question, but I know lots of people here are using M1015s and are much smarter than me when it comes to how ESXi works!

 

Will an M1015 + expander work in unRAID when not using ESXi?

 

For me, ESXi is a long-term goal, but I will need more SATA connections for bare-metal unRAID before I get the server virtualized.

 

Just trying to cover all my bases here - thanks in advance.

 

Sent from my Samsung Galaxy S2 using Tapatalk 2

 

 

Link to comment

Will an M1015 + expander work in unRAID when not using ESXi?

For me, ESXi is a long-term goal, but I will need more SATA connections for bare-metal unRAID before I get the server virtualized.

 

Yes, in a 5.x release. Not in 4.7.

4.7 has no drivers for the M1015 (or many other LSI-based cards).

 

Link to comment

Yo,

 

I'm trying the ALL-IN-ONE and am currently using two tutorials: one from Johnm and one from a guy who wrote a GUI for ZFS.

I was wondering... I want to build a Win7 x64, unRAID, and ZFS server (ZFS for cache), and he puts all the data and VMs on one SSD boot disk! Do I need more disks?

You need one disk for your datastore, plus the disks for your ZFS and the disks for your unRAID.

If you need more datastore room, you can put additional datastores on the ZFS.

 

Also, how would you split the RAM (16GB) and cores (4) between Win7, unRAID, and ZFS? I was thinking Win7 (4GB), unRAID (4GB), and ZFS needs the most (8GB).

That honestly depends on your setup and the needs of your guests.

 

I have Win7 and 2008 R2 guests that run on 2GB of RAM just fine (but I do not run those as interactive workstations).

RAM for the ZFS guest depends on the size of the array. I am running mine on 6GB with no issues.

For my unRAID, with 22 drives at 95% full and cache_dirs running, I was getting out-of-memory errors when running preclears with 4GB...

 

As far as cores go, that is the one resource ESXi can handle easily even if you over-assign it, although I believe you will get kernel panics in FreeNAS with more than 2 cores on ESXi. I'm not sure about OpenIndiana.

 

 

Another question: where do I put the unRAID parity drive? I have three M1015s, or do I put it on the mobo? Two of the M1015s are in use for ZFS at the moment.

The idea is to move all of my media off ZFS and put it on unRAID. I have one Hitachi 3TB 7200rpm drive for parity and four WD 3TB drives free. I'll be freeing up my two ZFS pools and using those disks to expand unRAID.

 

thanks

For ESXi, I would put it on the M1015 that's in passthrough. For bare-metal unRAID, I'd put it on the mobo.

Since you are starting out with a hybrid system, I'd just put it on the M1015 and go from there.

Don't forget to unplug all of your drives when installing ESXi... otherwise bad things will happen to your data.

Link to comment

OK, question, Johnm: I keep seeing you talking about using a ZFS array as an ESXi datastore and for your unRAID cache.

I've been able to find that you are using Solaris as the Host.

 

What HDDs are you using to get such good speeds? Are they spinners or SSDs? I haven't been able to find much detail on that part of your setup. I could've easily missed it; there are only 58 pages of posts to search through :P

 

I currently use a 300GB Raptor as my cache and 2x500GB WD Blacks as my ESXi datastores.

Would going your way and using, say, 3x500GB 7200RPM drives improve cache and datastore performance?

 

I run an i5-650 (pretty sure); is that grunty enough for it? (You mentioned high CPU usage for the array in one of your posts.)

Link to comment

OK, question, Johnm: I keep seeing you talking about using a ZFS array as an ESXi datastore and for your unRAID cache.

I've been able to find that you are using Solaris as the Host.

 

What HDDs are you using to get such good speeds? Are they spinners or SSDs? I haven't been able to find much detail on that part of your setup. I could've easily missed it; there are only 58 pages of posts to search through :P

 

I currently use a 300GB Raptor as my cache and 2x500GB WD Blacks as my ESXi datastores.

Would going your way and using, say, 3x500GB 7200RPM drives improve cache and datastore performance?

 

I run an i5-650 (pretty sure); is that grunty enough for it? (You mentioned high CPU usage for the array in one of your posts.)

 

Same here, would appreciate a tutorial, but I already asked in another thread!

 

Link to comment

OK, question, Johnm: I keep seeing you talking about using a ZFS array as an ESXi datastore and for your unRAID cache.

I've been able to find that you are using Solaris as the Host.

 

What HDDs are you using to get such good speeds? Are they spinners or SSDs? I haven't been able to find much detail on that part of your setup. I could've easily missed it; there are only 58 pages of posts to search through :P

 

I currently use a 300GB Raptor as my cache and 2x500GB WD Blacks as my ESXi datastores.

Would going your way and using, say, 3x500GB 7200RPM drives improve cache and datastore performance?

 

I run an i5-650 (pretty sure); is that grunty enough for it? (You mentioned high CPU usage for the array in one of your posts.)

 

Same here, would appreciate a tutorial, but I already asked in another thread!

+1 ;D

 

Sent from my GT-I9100 using Tapatalk 2

 

 

Link to comment

Hello, I want to thank you for all your help with virtualization. I have used this thread as basically a bible in virtualizing my servers. I am having a problem with one server in particular and am looking for input from others to help solve this weird issue. It has been an adventure, to say the least. I have virtualized the following servers:

I created a new virtualized homeseer server

virtualized my unraid server (works beautifully)

virtualized sagetv (this is the one with the issue)

created a virtual WHS 2011 (virtually untested)

 

Sagetv setup:

4 processors

4 gigs of ram

2 RDM hard drives

USB-UIRT passed through

colossus x2 passed through

Windows 7 64 bit

 

Problem: Approximately every two days SageTV crashes. Through troubleshooting, I have discovered that ESXi loses the drive (thinks the drive is no longer available). This does not affect the function of the other virtual machines, but it does affect ESXi somewhat.

 

Effects of the lost drive: I cannot access the SageTV server. ESXi cannot shut down the VM; it gets stuck at 95% and sits there. I cannot shut down ESXi either, so I have to shut down the server via the IPMI console (not the ESXi interface). When I turn the ESXi server back on, everything comes up, but when I attempt to start the Sage server, it balks and tells me a drive is not available. If I shut down the server and move the SATA cable (which is attached to an onboard SATA port) to a different port, ESXi will see the drive again. SageTV will then boot and run for approximately two days.

 

What I have done to troubleshoot: I first experienced this with a WD 250GB Raptor, so I replaced the drive and the SATA cable and moved it to a different port. This did not make a difference. I have also checked my other VMs, looked at which SCSI port each was assigned to, and made adjustments so none are on the same controller (i.e. 0:0, 1:0, 2:0). None of these changes made a difference.

 

I am currently waiting on the results of my most recent change. When I added the disks, I did not check the independent and persistent options, so today I shut down SageTV, removed the RDM disks, then re-added them with the independent and persistent options checked.

 

How I RDM'd the drives: I followed the tutorial at the beginning of this thread, using the -r option.

 

Possible contributing factors (in my mind): I was setting up my HomeSeer server and needed a serial port, so I purchased a PCI serial port card. That PCI slot shares the same bus as the onboard graphics. I attempted to pass through the PCI serial port, and this corrupted something in my ESXi install, requiring a reinstall (not a repair) of ESXi. Once I completely recovered from that incident, I found SageTV would not boot, so I created a new SageTV server from the ground up. I do not recall these disk problems prior to this incident, but I also don't recall how long SageTV had been virtualized before it.

 

Facts: SageTV is the only server having issues, although I have been leaving my WHS server off to see if it could be causing the problem. The problem continues with WHS 2011 turned off. SageTV and WHS are the only machines that use RDM disks.

 

My thoughts if I have another incident: get another USB drive and completely reinstall ESXi.

 

I greatly appreciate you guys sharing your knowledge and experience to help me get through this issue. The WAF drops with each incident, and I am almost ready to unvirtualize SageTV. Please help!

Link to comment

I bought a HighPoint 622 HDD controller (the one that supports port multipliers) for one of my SageTV VMs. I pass through the whole controller and use it for the recording drives. I set up my OS boot drive as an RDM'd drive on an IDE controller (I will be changing to a thin-provisioned virtual drive, hopefully before the current RDM'd SSD dies). It sounds like you set yours up as a SATA drive when you created your RDM. I would try it again as an IDE drive by using the IDE parameter instead of lsilogic.

 

So it would be (from Johnm's example above):

vmkfstools -r /vmfs/devices/disks/vml.0100000000202020202020202020202020355944344e355956535432303030 WHS2011RDM.vmdk -a IDE

 

This may not help, but I am not having your problems with my two SageTV VMs, and that is how I have/had my boot drives set up. The only time I came close to your problem was when using a Supermicro X7SBE motherboard. I would definitely try NOT to RDM the recording drives for SageTV.

Link to comment

OK, question, Johnm: I keep seeing you talking about using a ZFS array as an ESXi datastore and for your unRAID cache.

I've been able to find that you are using Solaris as the Host.

 

What HDDs are you using to get such good speeds? Are they spinners or SSDs? I haven't been able to find much detail on that part of your setup. I could've easily missed it; there are only 58 pages of posts to search through :P

 

I currently use a 300GB Raptor as my cache and 2x500GB WD Blacks as my ESXi datastores.

Would going your way and using, say, 3x500GB 7200RPM drives improve cache and datastore performance?

 

I run an i5-650 (pretty sure); is that grunty enough for it? (You mentioned high CPU usage for the array in one of your posts.)

 

Same here, would appreciate a tutorial, but I already asked in another thread!

+1 ;D

 

Sent from my GT-I9100 using Tapatalk 2

 

 

 

I do not have time now, maybe in the future. Here is the basic idea, though.

 

Perhaps this might help.

 

I have two SSDs for local datastores. These hold several high-performance VMs and my domain controller (a single mechanical drive would work fine also).

 

Then I have a ZFS guest on one of the SSDs. This could be OI, FreeNAS, etc. That server then has a ZFS RAID array that is shared via NFS, and on that share are the virtual drives for the rest of my ESXi guests.

You could also use iSCSI targets instead of NFS.
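
In case it helps to see it spelled out, mounting the ZFS guest's NFS export as a datastore looks roughly like this from the ESXi 5.x shell (the IP, export path, and datastore name here are placeholders for your own; the same thing can be done from the vSphere client under Add Storage > Network File System):

# mount the ZFS guest's NFS export as a new datastore
esxcli storage nfs add --host=192.168.1.50 --share=/tank/vmstore --volume-name=zfs-vmstore
# confirm it shows up
esxcli storage nfs list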

 

My first VM to autostart is my ZFS server.

 

The next few VMs that boot are also on the SSDs and start up right away (AD controller, etc.).

 

The next few VMs after that are either hosted on the ZFS server or have virtual drives that reside on it.

These are on a long autostart delay, maybe 2-5 minutes for the first one. This gives the ZFS server time to actually get up on the network and start the shares.

Once the ZFS server is up, you'll see the grayed-out guests start showing up as available to start; that is about when my autostart is set to kick off.

 

 

The performance is much better than JBOD datastores, and I have fault tolerance (450MB/s-ish).

I have several guests and my unRAID cache drive on the ZFS.

I would suggest using the virtual gigabit NICs, since this method will move data across the ESXi box faster than physical gigabit.
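
For the unRAID cache piece, it is just a virtual disk that lives on that ZFS-backed datastore. A rough sketch (the datastore name, folder, and size below are placeholders):

# create a thin-provisioned 400GB virtual disk on the ZFS-backed datastore
# (NFS datastores are effectively thin-provisioned anyway)
mkdir -p /vmfs/volumes/zfs-vmstore/unRAID
vmkfstools -c 400G -d thin /vmfs/volumes/zfs-vmstore/unRAID/cache.vmdk
# then attach it to the unRAID guest as an additional hard disk and
# assign it as the cache drive in the unRAID web GUI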

Link to comment

Hello, I want to thank you for all your help with virtualization. I have used this thread as basically a bible in virtualizing my servers. I am having a problem with one server in particular and am looking for input from others to help solve this weird issue. It has been an adventure, to say the least. I have virtualized the following servers:

I created a new virtualized homeseer server

virtualized my unraid server (works beautifully)

virtualized sagetv (this is the one with the issue)

created a virtual WHS 2011 (virtually untested)

 

Sagetv setup:

4 processors

4 gigs of ram

2 RDM hard drives

USB-UIRT passed through

colossus x2 passed through

Windows 7 64 bit

 

Problem: Approximately every two days SageTV crashes. Through troubleshooting, I have discovered that ESXi loses the drive (thinks the drive is no longer available). This does not affect the function of the other virtual machines, but it does affect ESXi somewhat.

 

 

I had the same issue with one of my WHS guests on RDM. It turned out in the end that the drive itself was starting to melt down; it was getting all sorts of SMART errors and was close to end of life. After I replaced it, all was good.

 

I really don't have an answer for you. In theory, it should work.

 

As Bob pointed out, life might be easier if you pick up a SATA controller just for the Sage guest. You then also get SMART and spin-down support in the guest.

 

At this point I have no more RDM guests; everything either has its own controller or is on a virtual controller.

 

Sometimes there is a little fine tuning needed to make a guest happy, and sometimes the issue is not so obvious.
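
One thing that might at least save you the IPMI power cycles in the meantime: when a guest hangs at 95% like that, you can usually kill just that VM from the ESXi tech support shell instead of resetting the whole host. Roughly (ESXi 5.x syntax; the world ID below is a placeholder you get from the list command):

# list running VMs and note the World ID of the stuck guest
esxcli vm process list
# ask it to stop cleanly first, then escalate if it ignores that
esxcli vm process kill --type=soft --world-id=123456
esxcli vm process kill --type=hard --world-id=123456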

Link to comment

OK, question, Johnm: I keep seeing you talking about using a ZFS array as an ESXi datastore and for your unRAID cache.

I've been able to find that you are using Solaris as the Host.

 

What HDDs are you using to get such good speeds? Are they spinners or SSDs? I haven't been able to find much detail on that part of your setup. I could've easily missed it; there are only 58 pages of posts to search through :P

 

I currently use a 300GB Raptor as my cache and 2x500GB WD Blacks as my ESXi datastores.

Would going your way and using, say, 3x500GB 7200RPM drives improve cache and datastore performance?

 

I run an i5-650 (pretty sure); is that grunty enough for it? (You mentioned high CPU usage for the array in one of your posts.)

 

Same here, would appreciate a tutorial, but I already asked in another thread!

+1 ;D

 

Sent from my GT-I9100 using Tapatalk 2

 

 

 

I do not have time now, maybe in the future. Here is the basic idea, though.

 

Perhaps this might help.

 

I have two SSDs for local datastores. These hold several high-performance VMs and my domain controller (a single mechanical drive would work fine also).

 

Then I have a ZFS guest on one of the SSDs. This could be OI, FreeNAS, etc. That server then has a ZFS RAID array that is shared via NFS, and on that share are the virtual drives for the rest of my ESXi guests.

You could also use iSCSI targets instead of NFS.

 

My first VM to autostart is my ZFS server.

 

The next few VMs that boot are also on the SSDs and start up right away (AD controller, etc.).

 

The next few VMs after that are either hosted on the ZFS server or have virtual drives that reside on it.

These are on a long autostart delay, maybe 2-5 minutes for the first one. This gives the ZFS server time to actually get up on the network and start the shares.

Once the ZFS server is up, you'll see the grayed-out guests start showing up as available to start; that is about when my autostart is set to kick off.

 

 

The performance is much better than JBOD datastores, and I have fault tolerance (450MB/s-ish).

I have several guests and my unRAID cache drive on the ZFS.

I would suggest using the virtual gigabit NICs, since this method will move data across the ESXi box faster than physical gigabit.

 

Thanks for the reply, and I understand you don't have time atm.

 

The part I'm interested in is the ZFS cache for unRAID... care to elaborate? And would RAID-Z parity also be possible for unRAID?

Link to comment
I had the same issue with one of my WHS guests on RDM. It turned out in the end that the drive itself was starting to melt down; it was getting all sorts of SMART errors and was close to end of life. After I replaced it, all was good.

 

I really don't have an answer for you. In theory, it should work.

 

As Bob pointed out, life might be easier if you pick up a SATA controller just for the Sage guest. You then also get SMART and spin-down support in the guest.

 

At this point I have no more RDM guests; everything either has its own controller or is on a virtual controller.

 

Sometimes there is a little fine tuning needed to make a guest happy, and sometimes the issue is not so obvious.

 

The problem I have is the motherboard I am using: I have no more PCIe slots.

The mobo I have is a Supermicro MBD-X8SIL-F-O. It has three PCIe slots plus a regular PCI slot.

One slot is used by a Supermicro AOC-SASLP-MV8 for my unRAID machine.

Two are used by the Hauppauge Colossus cards.

The PCI slot is free, but it shares a bus with the onboard video.

 

I guess I have 2 choices.

1. Change back to the HD PVR (USB). I have two, but one of them is not functioning correctly. This move would free up two PCIe x4 slots, so I could buy another controller to pass through to SageTV, plus I would need to purchase a good USB controller to pass through. So this option would cost another HD PVR ($190), a USB controller ($10-30), and another SATA controller.

2. Stick with the Colossus and change mobos: http://www.newegg.com/Product/Product.aspx?Item=N82E16813182235. That is one I have looked at; my processor is already socket 1156. This would cost $239.00 plus shipping.

 

So basically the cost is about the same for either option, although I tend to lean toward the new mobo. What are your thoughts on these scenarios?

 

Thanks for the help

Bill

Link to comment
