
Simplify by Complication? ESXi or Separate Servers?


bnevets27


I need some help trying to figure out which is the best road to go down. I have many "plugins" (running 4.7) and other features added on to my unRAID install, and I'll admit to fiddling with it and installing more addons/plugins over time. With that said, there's a good chance my fiddling causes some instability in my unRAID server. I would therefore like to move all my plugin tasks off of my unRAID server and leave it to only store my data. Now I'll be honest, I don't really WANT to do that, but having to do hard shutdowns of a server holding all my data does not leave me with an easy feeling.

 

I figure this leaves me with two options (I am open to suggestions).

 

1) Use a separate computer running a Linux distro (Ubuntu?) to run my plugins. Plugins I would/do use: SABnzbd, Sick Beard, CouchPotato, Headphones, MySQL, Subsonic, possibly a minified version of XBMC, Newznab, and some PVR backend for recording. And anything else I come across that seems interesting. The only problem I see with that is there is going to be a lot of data moving between the two boxes: the Linux box would be doing all of the work, and data would constantly be moving in and out of it from the unRAID box.

 

2) ESXi looks to be popular around here. I know nothing about it other than that it's VM software and that you need compatible hardware to run it with full pass-through support. Having separate VMs seems like it would be nice; I have no real NEED for them, but it would be fun to play with.

 

 

Third option? Find a way to make unRAID not crash due to my messing with it? I may be running out of memory from trying to run so many plugins, but I have a very large swap space set up on an SSD, which I figured would prevent any out-of-memory errors. I know it would be more helpful if I knew what errors I was getting. This looks like an interesting way to prevent emhttp from being killed, though: http://lime-technology.com/forum/index.php?topic=25609.msg222985#msg222985

Also, I think a large portion of the issues I have been having is with my onboard Ethernet chipset (Realtek 8111C), seen here: http://lime-technology.com/forum/index.php?topic=6776.0 I'm currently running an old 10/100 card which is more stable but slooooow. I need to find a replacement that isn't too expensive, but most seem to use Realtek chips and they don't play well with unRAID.
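For reference, the trick in that emhttp thread boils down to telling the kernel's OOM killer to leave emhttp alone. A rough sketch of the idea, run as root on the server (assumes emhttp is the process name; on the 2.6-era kernel that 4.7 uses the knob is /proc/&lt;pid&gt;/oom_adj, while newer kernels use oom_score_adj with a -1000..1000 range):

```shell
# Protect emhttp from the kernel OOM killer.
# -17 means "never kill this process" on 2.6-series kernels.
for pid in $(pidof emhttp); do
    echo -17 > /proc/$pid/oom_adj
done
```

Something like this would have to go in the flash drive's go script to survive a reboot, since /proc settings are lost when the process restarts.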

 

 

What it really comes to is this:

Do I look at getting a 24-bay rack mount that comes up for sale for about $300: http://lime-technology.com/forum/index.php?topic=21958.0

or

Do I do an ESXi build in my current form factor, a CM590 with 5-in-4 cages (not hot-swap bays, which has only started to bother me slightly lately), with a good quality MB? I want it to be rock solid, and I did like my brief experience with IPMI on a Supermicro board I had for a little while. The only thing is, the cost to do this with a Supermicro board and the supporting CPU, RAM, and controller cards is definitely more expensive than the prebuilt server route. Then again, I also see little need for 24 bays; 15 is doing me fine for now, with a slot or two still free and a few low-capacity drives in it.

 

This is what I am currently running:

CPU:	Intel Celeron 3.0GHz Single Core
MB: 	Gigabyte EP45-UD3LR
Mem: 	2x1GB  OCZ
PSU: 	Corsair TX650
Cards: 	4x SYBA 2 port SATA II (SiI3132)
Case: 	CM590
Caddys: 2x Cooler Master 4-in-3
UnRAID: 4.7 Pro

 

I have a second PC with comparable MB, MEM, CPU.

 

All this is keeping in mind that money is definitely a factor; less is better, but I'll spend what I have to so as not to have headaches down the road. Power consumption is currently not a concern, and space/noise could possibly be a factor, but I do have a couple of corners where I may be able to hide a server if it's larger than a normal PC, i.e. the rack mount.

 

Sorry for the long post and what is basically my ramblings. Just need some help on direction.

Link to comment

It seems that whether you physically separate unRAID from another server running apps, or run both in the same box with ESXi, you'll need all of these apps to automatically add files into the unRAID array (e.g., to provide updated TV shows or something). I'm not sure how to have the array shares automatically mount in either scenario, but given that you're using multiple apps, this might be an important consideration.

 

You mention money is a consideration, so if your other motherboard can be upgraded to something like 8 or 16GB of DDR3 RAM, I'd lean towards ESXi in one case. With the Gigabyte EP45-UD3LR mobo, DDR2 RAM prices aren't worth an ESXi build, which would then mean selling both boards to fund a new mobo with cheaper DDR3 RAM.


Well, I was going to just mount my unRAID shares on my "apps box" using CIFS; that way they would function like a local drive. Not being very knowledgeable in Linux, I'm not sure if that will work well or not.
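For what it's worth, a CIFS mount like that is only a couple of commands on most distros. A sketch, assuming the unRAID box is named "tower" and the share is called "Media" (both placeholders), run as root on the apps box with cifs-utils installed:

```shell
# Mount an unRAID share over CIFS so apps see it as a local path.
mkdir -p /mnt/tower/Media
mount -t cifs //tower/Media /mnt/tower/Media -o guest,uid=nobody

# To remount automatically at boot, a line like this in /etc/fstab:
# //tower/Media  /mnt/tower/Media  cifs  guest,uid=nobody,_netdev  0  0
```

The uid=nobody option makes files appear owned by the same user unRAID expects, which helps avoid permission headaches when apps write back to the array.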

 

My other board has a single-core Celeron and uses DDR2; it's just a crappy little computer put together with free parts. The only reason I mention it is that I could start using it as the "apps box" just to see how it would work out.

 

I am looking to spend a little money,  I feel like I need an upgrade. 

 

I can't decide if the 24 bay rack mount server is the right fit for me or if I should try and build an ESXi server.

 

If I go with the rack mount, then I'm really not changing how I run addons; they will have to run within unRAID, since ESXi won't run on that server. But I do like the fact that it is quite powerful, it's a purpose-built server, and it has hot-swap bays.

 

If I do an ESXi build, it looks like it will be a lot more expensive. I'll have to try to build it on the cheap by reusing my PSU (hopefully it would be compatible), case, and hard drive cages.

 

 


If you wanted to build something more powerful you wouldn't necessarily need to buy a huge rackmount server case ($$$). You'd probably just want a new mobo, cpu, and ram. If you reuse everything else to save money such as the SYBA SATA cards, PSU, and case, you could save quite a bit.

 

If it were me, I'd wait several months before making an upgrade based on Newznab requirements, to make sure you're really happy with it and can see yourself using/maintaining it long-term.


That's definitely another option. The only reason I was looking at a rack mount is because I was looking at a specific one, the used ones by tam solutions here: http://lime-technology.com/forum/index.php?topic=21958.0 for about $300. For that price I would have a hard time even buying just the hot-swap bays; a Norco case alone is $400. That's why it interests me. But unfortunately it doesn't support ESXi pass-through.

 

I have no real need for newznab. But it was something I would like to play around with.

 

AndrewT, I just read your signature; you seem to be running just about everything I can think of. What are your specs, and how do you have your unRAID set up? Does it run without crashing unRAID?


That's definitely another option. The only reason I was looking at a rack mount is because I was looking at a specific one, the used ones by tam solutions here: http://lime-technology.com/forum/index.php?topic=21958.0 for about $300. For that price I would have a hard time even buying just the hot-swap bays; a Norco case alone is $400. That's why it interests me. But unfortunately it doesn't support ESXi pass-through.

Do they actually have some back in stock? Last I checked they were gone, and didn't have any idea when they would get more.

If you buy the right stuff, you will definitely simplify your life with ESXi. You will be able to set up a new VM whenever you need one, within only one box, with ease. You will also avoid unRAID's limitations with memory and (until v5 final is ready) its lack of true addon/package support.

 

Here I have a pfSense router, an unRAID server, a Windows 2008 box, and an Ubuntu box all running at the same time, plus the occasional test VM.

 

Here is an excellent topic to start with. I have almost the same configuration, with no hassle since I deployed it.


Do they actually have some back in stock? Last I checked they were gone, and didn't have any idea when they would get more.

 

Not that I know of, but it sounds like they are getting more stock. I'm not in any rush, so I can wait if I decide to go down that route. I'm sure the new batch will be a different but similar config, and I assume it will still be hardware a few generations back and thus not able to run ESXi. If by some chance they do run ESXi, then it's a no-brainer, but I think that's extremely unlikely.


AndrewT, I just read your signature; you seem to be running just about everything I can think of. What are your specs, and how do you have your unRAID set up? Does it run without crashing unRAID?

 

I ran v5 with all of these plugins on a single 4GB RAM stick without *ever* having unRAID act suspiciously slow, much less crash on me. I never tried the v4.7 OS. The only reason I added a second 4GB stick was that I've been trying out different OSes (just for fun) in phpVirtualBox and started trying out Newznab. If Nmatrix were still around, I'd only be checking on it once every couple of weeks. I also run all apps in the /mnt/cache/.apps/ directory, which seems quite common for organization.

 

I'd strongly recommend updating to v5 unRAID... it's here to stay (for quite a while anyway). Two things you really need to know about it: run everything you can (apps and file transfers) as the 'nobody' user instead of 'root' to avoid permission issues, and installplg pluginname.plg installs any v5 plugin you find (no unMenu needed).
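To make that concrete, here's roughly what the console workflow looks like (the plugin filename is a placeholder, and keeping the .plg under /boot/config/plugins so it gets re-installed at boot is my understanding of the v5 convention, so treat this as a sketch):

```shell
# Keep the .plg on the flash drive so it survives reboots,
# then install it immediately without rebooting.
cp sabnzbd.plg /boot/config/plugins/
installplg /boot/config/plugins/sabnzbd.plg

# Fix ownership on the app data directory so things run
# cleanly as 'nobody' instead of 'root'.
chown -R nobody:users /mnt/cache/.apps
```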


Archived

This topic is now archived and is closed to further replies.
