ATLAS My Virtualized unRAID server



Pretty sure I'm on 2.02 or somesuch. I'll check this evening, but it was above the Sandy BIOS rev online and not quite the same as the first Ivy BIOS rev they posted, so it's not 100% clear. I had issues with USB passthru early on too, but I think that was me figuring out settings for the various hardware. I would like to update to 5.1 but haven't gotten to it, and I need to figure out exactly what I'll need to do first. Should've done that before moving my server into a tiny cramped closet. <sigh> First chance I get, I'll try the update and post back on the USB passthru! Maybe I'll do that before trying an Ivy CPU and see how that goes first.

Link to comment

Because you have a passthrough card, you have to set the memory reservation to match the amount of RAM you are using.

 

Just adjust that in the VM's settings (the Resources tab under the hardware settings, if I recall).

 

Thanks Johnm.

 

This is what I did, for others who might have this issue:

Right click on VM and Edit Settings.

On the hardware tab, change memory to desired value.

On the resources tab, select memory settings, and click on reserve all guest memory.
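For reference, the same "reserve all guest memory" setting ends up in the VM's .vmx file. A minimal sketch, assuming an 8 GB guest; the key names below come from ESXi's scheduler settings, so verify them against the .vmx your own host generates before relying on them:

```
memSize = "8192"
sched.mem.min = "8192"
```

With a PCI passthrough device, ESXi refuses to power the VM on unless the reservation covers all guest RAM, which is why the reservation (sched.mem.min, in MB) must equal memSize.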

 

Easy :-)!

 

Link to comment

What are the benefits of installing VMware Tools? Is there already a version for RC10? How do I install it?

 

VMWare tools for RC10 here: http://lime-technology.com/forum/index.php?topic=11449.msg219627#msg219627

 

What are the benefits? Can I install it like a regular plugin?

 

Just installed it; the settings screen looks like the attachment. Is this correct?

 

In particular, should I change anything about shared folders?

vmtools.jpg.b45ef173516157c0b7e1ecdcf69c2b56.jpg

Link to comment

OK.. Its baaaack......

 

It looks like I lost an SSD and a flash drive.

~snip~

 

The SSD is still in the server for now. I'll pull it out after my ZFS scrub and unRAID parity check pass. It might be fine after a reformat; it might not. It might have a ton of burned-out cells. I'll format it, SMART test it, and run SSDLife on it and see what I get. It's got years left on the warranty :) so I might be calling on it. I'll let you guys know.

 

~snip~

 

A quick follow up.

 

I had a power failure today and I was actually home, so it was a good time to pull both SSDs and test them.

 

The good SSD reported zero errors. Back into the server it goes.

 

The bad SSD reported more relocated sectors than I can count. Every time I tried to do anything that was a write to the SSD, it would disconnect and go offline until I unplugged its power. Back to Corsair it goes. It didn't even last a year.

 

I also noticed they have discontinued the Performance Pro series of SSDs, so I am wondering what I'll get back. I also noticed none of their current drive lines have enhanced garbage collection. If they send me anything less than another Performance Pro or a Neutron, I'll be insulted.

Link to comment

I just wanted to post an update to a post I made earlier in this thread about finding a good UPS shutdown solution for ESXi. Well, I finally purchased the CyberPower OR1500LCDRM2U ($299 at Amazon) and it works great with their PowerPanel Business Edition VM appliance. It is based on CentOS and installs as its own VM, and as long as all your VMs have VMware Tools installed, it will do a clean shutdown of each VM and then shut down the host. It will also shut itself off if so configured. You can configure it to send email notifications on numerous events. I've only had it up and running a few days now, but I've tested it numerous times and it works like a charm so far.

 

I post this because I know I and others have longed for a good, easy shutdown solution for the free ESXi, and this seems to be it. Granted, it seems to be proprietary, as it wouldn't work with my APC unit, but at least it's a viable option and it is super easy to set up and use.

 

 

 

Link to comment

John,

 

I've had 2 of those Corsair Performance Pros fail on me in the past year and a half. Not very happy with their performance. Switched over to the Samsung 840 Pro and will hopefully have better results.

 

 

Just recently converted my server to ESXi successfully. Noticing something really strange, though: the server is EATING through RAM. RAM is stuck at 95-100% usage all the time after 2 days of uptime. Checked processes and can't find what is accounting for the 5 GB of RAM (allocated 8 GB; the server usually sits around 2.5 GB).

 

I had SAB, CouchPotato, and Sick Beard running on it at the time, but these three processes were taking up less than 200 MB of RAM. I have disabled them and will be transferring them into a separate VM guest soon.

 

Any thoughts on what could be causing this? I don't think an unRAID guest needs more than 8 GB of RAM, and I was thinking of dropping it to 4 GB after I move the Python add-ons off, but maxing out 8 GB is kind of ridiculous, no?

 

 


Link to comment

That is the nature of *NIX systems.

They will use all spare RAM for cache; if you give them 8 GB, they will eat almost all of it.

 

My Performance Pros ran quite fine until the one died. I will admit, I pounded on that SSD; it never had time to run its garbage collection.

I have a few of the 830s in my Macs and laptop. I'll have to look at the 840's specs.

 

Link to comment

That is the nature of *NIX systems.

They will use all spare RAM for cache; if you give them 8 GB, they will eat almost all of it.

 

Now don't be giving Linux a bad rap. It is true it is probably using the available RAM for cache, but only because that is useful when there is file transfer activity. I run many Linux guests which do not use all their RAM (<50%), and others which use it all. The three biggest consumers of RAM are Windows machines (including vCenter), and the next three are Squid proxies and mail servers. There is even a Windows VM using 50% RAM.

 

Perhaps, "Any good NAS will use whatever RAM it can for caching"?
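A quick way to see that distinction on any Linux guest: "used" memory includes reclaimable page cache, while MemAvailable estimates what is actually free for new work. A minimal sketch, with nothing unRAID-specific assumed (MemAvailable needs a reasonably recent kernel):

```shell
# Overall view: the buff/cache figure is memory the kernel hands back
# on demand, so a high "used" number alone is not a leak.
free -m

# The kernel's own estimate of memory available to new processes:
grep MemAvailable /proc/meminfo
```

If MemAvailable stays healthy while "used" sits near 100%, the RAM is doing useful caching work rather than being lost.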

 

8449690238_ac5c5fc6c4.jpg

Link to comment

I believe I have found the root cause.

 

The RAM was not being used by the system cache; it was just being eaten by the server.

 

Not sure how I didn't notice, but the host date was wrong: set to March 2013 instead of February (today's date).

 

This was screwing up the crontabs (the logs were filled with crontab complaints about the time discrepancy) and some other stuff. Anyway, after fixing that, the server is back to its usual resource usage of about 1.5 GB. Moved the Python stuff off the server and it's now sitting around 800 MB usage. Not bad.
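A wrong clock is cheap to rule out before digging through processes. A minimal sketch of the sanity check; the epoch floor below is arbitrary, so substitute any date you know has already passed:

```shell
# Current time in UTC, for eyeballing against a known-good source.
date -u

# Automated version: complain unless the clock reads after 1 Jan 2013 UTC.
now=$(date -u +%s)
[ "$now" -gt 1356998400 ] && echo "clock looks plausible"
```

On the ESXi host itself the fix is to correct the date and enable NTP in the host's time configuration, so guests syncing time through VMware Tools inherit the right clock.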

 

 


Link to comment
  • 2 weeks later...

Has anyone added a hard drive without shutting down the entire ESXi host? I assume I can hot-add a hard drive (into an empty slot) as long as the unRAID VM is powered down, no? Want to make sure before I screw something up.

Yes, I have done that before. I've also removed a drive without powering down the PC, just the VM. REPLACING a drive does NOT work on my SAS expander/M1015 controller.
Link to comment

Thanks.  I'm not replacing, just adding and pre-clearing.

Have you had any problems pre-clearing drives in a VM? So far, every time I try, preclear gets completely through the zeroing step and then dies saying the MBR isn't cleared correctly. It's NOT a drive problem, because the drives clear fine on my preclear station. Also, moving data, parity checks, and parity builds work fine in a VM once the drive is cleared. It also doesn't matter how many drives I clear at a time in a VM. I will admit I haven't tried clearing very many drives from a VM (maybe 5-6). Most of the time I use my preclear station and then take the drive to the VM. It is also possible that I needed to add the drive with the PC off, but that doesn't affect parity builds/checks, so I'm not sure that is it either. Besides, I thought at least ONE of my attempts was after powering down the PC.
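When preclear complains about the MBR, it can help to look at the first sector directly. A minimal sketch using a scratch image file standing in for a real disk (substitute your own /dev/sdX at your own risk; note that a finished preclear writes its own signature into the MBR, so a successfully precleared drive is not literally all zeros):

```shell
# Scratch "disk" standing in for a freshly zeroed drive (hypothetical path).
dd if=/dev/zero of=/tmp/demo_mbr.img bs=512 count=1 2>/dev/null

# Count non-zero bytes in the first 512-byte sector.
# Prints 0 when every byte is zero.
od -An -v -tu1 /tmp/demo_mbr.img | tr -s ' ' '\n' | grep -cv '^0*$' || true
```

Running the same one-liner against the real device after a failed preclear shows whether the zeroing itself worked or whether stale data survived in sector 0.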
Link to comment


I had that issue when pre-clearing drives connected to a RES2xxxxx expander. I sorted it by updating the firmware.

 

Sent from my GT-I9100 using Tapatalk 2

 

 

Link to comment


Thank you. I hadn't wanted to update mine; if it ain't broke, don't fix it. Well, now I know it's broke. Now to find the SAS expander update thread... Found it.
Link to comment

I'm attempting to virtualize unRAID in ESXi 5.0u2 with the hybrid boot method (Option #2 at this link).

 

In essence, I've:

 

1.  Created a VMDK file (virtual hard drive) on a Win7 guest.

2.  The VMDK was created as 8GB, IDE interface, formatted as FAT32 with the volume named "UNRAID".

3.  Unzipped "unRAID Server 5.0-rc11a AIO.zip" to the VMDK.

4.  Edited "make_bootable.bat" in two lines near the end, from "syslinux -ma" to "syslinux -maf".

5.  Ran "make_bootable.bat" from a command prompt.

6.  Renamed the VMDK volume from "UNRAID" to "unraidboot".

7.  Copied the preclear and unmenu scripts to the "boot" and "unmenu" folders respectively.

8.  Shut down the Win7 guest.

9.  Removed the VMDK file from the Win7 guest.

10.  Created the unRAID VM as outlined in reply #2 of this thread, with the previous VMDK (unraidboot) assigned as the virtual hard drive.

11.  Included a USB controller and USB device in the unRAID VM (step #10).

12.  The USB device in the unRAID VM is an external USB flash drive named "UNRAID"; on this flash drive I only have a folder named "config" and the only file is "pro2.key".  No other files are on the external UNRAID flash other than \config\pro2.key.
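Step 4 above can be scripted rather than hand-edited. A minimal sketch against a stand-in copy of the batch file (the real one ships in the unRAID zip; -f forces syslinux to install even if the volume appears to be in use):

```shell
# Stand-in for make_bootable.bat with the two syslinux lines (hypothetical content).
printf 'syslinux -ma %%1\nsyslinux -ma %%1\n' > /tmp/make_bootable.bat

# Switch both invocations from -ma to -maf.
sed -i 's/syslinux -ma /syslinux -maf /g' /tmp/make_bootable.bat

grep -c 'syslinux -maf' /tmp/make_bootable.bat   # prints 2
```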

 

The unRAID VM appears to boot and has a local IP assigned (192.168.0.123).

The problem is that I cannot access the unRAID VM via HTTP (the web interface), and the VM console is pointing to the external USB "UNRAID" (with /config/pro2.key) rather than the "unraidboot" VMDK.

I was under the impression that you place all unRAID files (including the config folder, unmenu folder, etc.) on the VMDK (unraidboot) and only have the /config/*.key file on the external "UNRAID" USB flash.

I'm sure I'm missing a step or made a wrong one somewhere. I've searched the forum but haven't been able to find anything conclusive. Any ideas, suggestions, or other forum links to point me to? Thank you.

 

unRAID_-_Virtual_Machine_Properties.png.ef2361e1d1b24b675080a5eae7fd1429.png

unRAID_-_Virtual_Machine_Console.png.3a8f0012878422f1a48e229f0ba7ad0c.png

unRAID_-_Virtual_Machine_Console_ifconfig.png.4bd76b249d9df7f0b078a3765d3b7be2.png

Link to comment

bzroot, bzimage, and the hidden bootable stuff are all you require. If you want something to check against, click on my build thread (in my sig below) and download the VMDK I created for rc3. You can then mount it, replace bzroot/bzimage with the version you want to use, and off you go.

 

Everything else should be on the USB stick.

Link to comment

I was under the impression that you place all unRAID files (including the config folder, unmenu folder, etc.) on the VMDK (unraidboot) and only have the /config/*.key file on the external "UNRAID" USB flash.

 

The flash /config folder should contain all the files that are in the /config folder of the unRAID zip file (go, ident.cfg, disk.cfg, etc.), plus your .key file. The rest of the files should be on the VMDK. You could put all the files on the flash in case you want to boot the flash from a different machine for some reason.

 

After the system is booted, the VMDK is not touched again. All configuration updates are done on the flash. The system finds the config files there because the volume name is UNRAID, as opposed to the VMDK, which is unraidboot in your case. That means you should put plugins on the flash as well.

 

unRAID mounts the flash (at /boot) but it doesn't mount the VMDK, so it's easier to just put the preclear script on the flash. If you wanted to go the extra step you could mount the VMDK from the command line, but why bother?
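Putting the split together, the intended layout looks roughly like this (file names taken from the posts above; treat anything not mentioned there as illustrative):

```
VMDK "unraidboot"  (boot only; never touched after boot)
    bzroot
    bzimage
    syslinux boot files

USB flash "UNRAID"  (mounted at /boot; all config lives here)
    config/go, config/ident.cfg, config/disk.cfg, config/*.key
    plugins, preclear script, unmenu
```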

Link to comment
