
Considering unRAID, quick Hello and lots of questions


Waltm


Hi folks.  While I have built all my PCs in the past, starting from a 386, I now find myself in unfamiliar territory with a lot of new words and acronyms, so I could use a little guidance.

 

I currently have a PC in a mid-tower case running Ubuntu.  It's an older, dual-core Athlon with 2GB of RAM.  The system has grown to 6 storage drives totaling 14TB (1x4TB, 1x3TB, 3x2TB, 1x1TB, mostly JFS and I think one XFS) along with the 1TB system drive.  Each media drive has been added when storage filled up and mapped to various clients around the house.  The use of this PC has evolved from strictly a MythTV backend to include Sabnzbd, Sonarr, CouchPotato, Headphones and, most recently, Tvheadend as a PVR with HDHomeRun tuners, with Myth not being used any more.  It has also taken on the role of web browser and video player since my main gaming rig was moved to another location with more room for VR gaming.

 

The addition of Tvheadend has really brought to light the deficiencies of this system: what I thought was bad signal reception turns out to be this PC not being able to keep up with a few of the higher-bitrate OTA channels.  These channels play fine if I stream the tuners directly to another machine; there's only a problem playing back or recording on this 'server'.  Also, adding drives, changing save locations in all the software, and adding shares to clients (mostly Kodi running on Raspberry Pis, still a few PCs) is getting kind of old.  Not to mention the lack of drive fault tolerance, where a failed drive is a major inconvenience (been there).

 

I heard about unRAID and have looked around this forum a bit, and I think it is a viable solution for me.  From what I gather, and please correct me if I'm mistaken, most of the software I run as a service can run in 'dockers', and I could also run an instance of Ubuntu for normal desktop use in a 'VM'.  Along with the software listed above, I might look into a central media database (MySQL, Emby, etc.) at some point.  There would very rarely be more than 2 streams running at once, and 90% of the time there would be only one.

 

Assuming I am on the right track, I am totally lost when it comes to hardware choices.  It seems as if there are so many motherboard/CPU options in this forum, with more choices leaning towards enterprise solutions than consumer equipment.  It also sounds like the hardware must specifically support virtual machines, and a second video output device would be needed to run Ubuntu.  If I pull the trigger on this, I would want some room to grow, capability-wise, without going overboard on cost for features or power I would never use.

 

Can anyone offer some guidance and possibly point out a build that they feel would work well for my use as well as further study material that would serve me well on this journey?

 

 

 


I'm not up to date on current hardware, but welcome!  What you want to accomplish is pretty much what I do with my server, which is based on fairly old hardware now.  My server specs are in my signature, and you're correct that dockers will form the basis for your machine's applications.  I'm pretty sure that we (linuxserver) have all the ones you've mentioned.  ;)


It's not strictly necessary to have a video card for a VM, since you can VNC/RDP in if that suits your purpose.  Also, IOMMU is not required if you don't need to pass hardware through to the VM; your VM will still be able to use the network and unRAID storage without IOMMU.  Having said that, I don't personally use any VMs currently, so I can't really advise much further on that aspect.  Dockers do everything I need.


Thanks for the welcome and info!

 

The LSIO dockers look great and it does look like there are more applications than I could ever use.

 

Trurl, I was planning on using the Ubuntu VM at the same physical location as the machine/monitor/keyboard.  Are you saying to use RDP from the unRAID machine into the VM running on the same machine?  I have used RDP a little, but it was laggy; that might have just been my hardware at the time.


If it's going to be at the same physical location then get a discrete video card. 


Ok, thanks!  I'm still trying to wrap my head around the non-physical stuff.

 

If you had an Unraid server running on one side of your house and wanted to use the VM on it over the other side of the house then you'd need a client machine and use VNC or RDP to access it, so no need for a dedicated gfx card in your Unraid server.

 

If you want to use an Unraid VM at the location of your Unraid server, then it's better to stick a gfx card in it and use that to output video.  The alternative is to boot into the Unraid-GUI and then access the machine via the web client over VNC, but it won't be as good an experience.


A good way of deciding on what CPU/motherboard combo you need is to look at the passmark score of the CPU.  Plex (a very popular media server app) recommends a passmark of 2k for each 1080p/10Mbps stream that you want to play.  Personally I think this is a bit high given more efficient modern transcoders, but it's a good guide.

 

http://www.cpubenchmark.net/

https://support.plex.tv/hc/en-us/articles/201774043-What-kind-of-CPU-do-I-need-for-my-Server-

 

So, for your two concurrent streams I'd look for a CPU that has a passmark of at least 4k - this won't set you back much money. 

 

If, as with most people, you think your needs will increase in the future, then add on more power.  At 6-10k passmark you are looking at a very capable system that will reasonably do what you need; above that you are getting into quite specialist requirements, or you want to be able to support lots of concurrent streams.  E.g. I built my system this month aiming to support a minimum of 6-7 1080p streams, to give me a lot of headroom and cover the anticipated requirement for a few 4k streams in the future, as well as a couple of concurrent VMs for the family.
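The sizing rule above can be written down as a back-of-the-envelope sketch.  The 2k-per-stream figure comes from the Plex guideline quoted earlier; the headroom factor is my own assumption, not anything official:

```python
# Rough CPU sizing using Plex's oft-quoted guideline of ~2,000 PassMark
# points per concurrent 1080p/10Mbps transcode.  The headroom factor is
# an illustrative assumption to leave room for dockers and VMs.

PASSMARK_PER_1080P_STREAM = 2000

def required_passmark(streams: int, headroom: float = 1.5) -> int:
    """Estimate the PassMark score needed for `streams` concurrent
    1080p transcodes, padded by a headroom factor."""
    return int(streams * PASSMARK_PER_1080P_STREAM * headroom)

# Two concurrent streams (the original poster's worst case):
print(required_passmark(2))   # -> 6000, comfortably in the "cheap CPU" range
# Six streams, the kind of target mentioned for a bigger build:
print(required_passmark(6))   # -> 18000
```

Treat the numbers as a sanity check, not a hard requirement - hardware-accelerated transcoding changes the picture completely.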

 


Thanks.  I've seen the build in your sig while reading through some of the hardware forums and was thinking of using that MB/CPU combo, as there are still some CPUs on the used market.  It might be overkill for what I'm doing, but at least it's proven to work and is somewhat future-proof, being a newer MB spec.  I was looking for something to fit in an Antec 1200, so I prefer the ATX form factor over a server board.


I agree re ATX and a consumer board - I have a BIOS I actually understand.  I was going to build a dual-Xeon setup, but was put off by the large server boards and the overall age of the system, meaning replacement costs could be high in the future as parts become scarcer.  I got my CPU on eBay at a good price, which made my single-CPU, lower-power system just as cost-effective.


Is it correct that with a multi-core setup like this, the load from several dockers will be spread across different cores/threads, and not all trying to run on the few that are being used by unRAID?

 

 

That's one of the real advantages of unRAID - dockers can access the full 'bare metal' power of your machine.  It's one of the reasons I haven't isolated any of my CPU cores for my VMs - I prefer to let all apps access the full power, so in the rare situation that Plex needs all 14 of my cores, it can have them.

 

 

It's also why you can run multiple VMs sharing cores if you really have to (most people don't; they pin certain cores to each VM and sometimes also isolate them, i.e. tell unRAID not to use them), particularly if the VMs are not always doing CPU-intensive tasks.


In terms of consumer hardware, I've been running this setup since Jan 2014:

 

Gigabyte SKT-AM3+ 990FXA-UD5 Motherboard

AMD FX8320 Black Edition 8 Core (3.5/4.0GHz, 8MB Level 3 Cache, 8MB Level 2 Cache, Socket AM3+, 125W, Retail Boxed)

Arctic Cooling Freezer Xtreme Rev.2 CPU Cooler

Multiple Asus ATI Radeon HD 5450 Silent Graphics Card (1GB, DDR3, PCI-Express)

Corsair CP-9020054-UK RM Series RM650 80 Plus Gold 650W ATX/EPS Fully Modular Power Supply Unit

 

It does all the virtualisation stuff and has no compatibility issues that I've found.  See further details here:

 

http://mediaserver8.blogspot.ie/2014/01/the-great-rebuild.html

 

and my blog in general might be of some help.

 

I'm currently running the TVHeadEnd plugin with Digital Devices tuners, as well as some other dockers and VMs.  Very stable system (not sure if you can get these components any more).

 

Peter


Since I've picked up an E5-2683 V3, would anyone care to sanity check the rest of this build?

 

http://pcpartpicker.com/list/rhqNpb

 

The 8TB Red will be for the parity drive, and the other drives are what I already have.  I think I have a 250GB SSD for the cache/app drive.  The older, smaller drives might be replaced with something else as I start to move files.

 

I also have an external 4-bay eSATA enclosure and controller card in my current PC that will be available.  Would a single graphics card be able to work for unRAID as well as the Ubuntu VM, or would I need a second one for desktop use?  I'm also not sure about sharing a keyboard/mouse between the two systems.  Would I use a standard KVM switch into different ports?


Welcome to the 2683 Club!! Maybe we should start a thread like the 2670 thread ;-)

 

 

Comments on the build:

 

 

- Motherboard, memory and CPU work together, as I've got the same.  You won't be able to access the AURA lighting (I've just left mine on 'breathing' via the BIOS - I think you can also choose the temp gauge here as well), and I've just purchased a cheap PowerColor RGB250 which works really well.

- Cooler: What made you choose the Noctua?  Your CPU won't be overclocked, so you don't need extreme cooling, and you can use any socket 2011 cooler.  I've got a windowed case like you, so I went for the be quiet! as I like the look of it and it goes well with the black/white mobo, but any cheap cooler would do the trick.

- GPU: There is a way to have just one GPU installed and still be able to use it for a VM (https://lime-technology.com/forum/index.php?topic=43644.msg452464#msg452464).  If that doesn't work, or if in the future you want to run a 2nd VM at the same time, then buy a cheap ATI card.  ATI cards can be used in the first slot and passed through easily (see the section on stubbing halfway down https://lime-technology.com/forum/index.php?topic=51874.msg497875#msg497875) - I've done so with my R5 230, which I bought just for this reason, with no problems (I've been told it works better if you boot unRAID in non-GUI mode, or you might get ghosting: https://lime-technology.com/forum/index.php?topic=43644.msg502722#msg502722).

- Keyboard and mouse: You can do pretty much everything you need within unRAID from the Ubuntu VM by accessing the webUI in a browser.  I don't think you need to worry about switching the USB devices, because you've only got one GPU - you need two GPUs if you want to run 'two screens' at the same time.  Unfortunately you can't hotplug USB devices assigned to VMs.  What you can do is assign a whole USB controller to a VM, and then for that controller you can hotplug devices.  Again, unfortunately, with X99 you can't assign the normal USB controllers as they are all bundled into one.  'Luckily', ASUS added another USB 3.1 controller to the X99-A II, so you can assign that to a VM, which gives you some hotplug capability.  What I've done is buy a separate cheap PCIe USB controller for my 2nd VM, so both VMs have 'live' USB slots.

 

 

My build is here - just waiting for the 960 EVO to be launched this month to finish mine.

 

 

http://pcpartpicker.com/list/qYR4gL

 

 


Re the 8TB parity drive, this might be overkill, as your biggest array drive is 4TB so you'll only be using half of its capacity.

 

What I'd probably do is buy the 4TB version for $148.  If in the future you want to start increasing your array disk sizes, move the 4TB Red to the array and replace it with a bigger parity drive, e.g. a 6TB or 8TB.

 

With the $170 you save, you could buy another 4TB Red and give yourself some usable array capacity - or go dual parity!
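The arithmetic behind this advice is the unRAID parity rule: the parity drive must be at least as large as the largest data drive, and usable space is simply the sum of the data drives.  A small sketch (drive sizes in TB; the helper function is mine, not anything from unRAID):

```python
# Sketch of unRAID single-parity capacity math.  Parity must be >= the
# largest data drive; usable capacity is the sum of the data drives.

def array_capacity(parity_tb: float, data_tb: list[float]) -> float:
    """Return usable TB, or raise if the parity drive is too small."""
    if data_tb and parity_tb < max(data_tb):
        raise ValueError("parity must be >= largest data drive")
    return sum(data_tb)

# The original poster's existing drives behind a 4TB parity:
existing = [4, 3, 2, 2, 2, 1]
print(array_capacity(4, existing))        # -> 14 usable TB
# An 8TB parity unlocks future 8TB data drives:
print(array_capacity(8, existing + [8]))  # -> 22 usable TB
```

So the 8TB parity buys nothing today, but is the prerequisite for any future data drive bigger than 4TB.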

- Motherboard, Memory and CPU work together as I've got the same.

 

That's probably the main reason I went with this combo.  You seemed to be doing everything I wanted (and then some), and I'm more comfortable using a known good combination.

 

- Cooler: What made you choose the Noctua?

 

No reason in particular.  Thought they had a good reputation, and there's a 6-year warranty.  Currently it's just over $4 more than the one you listed (the Be Cool! is on sale till the end of the month on Newegg), and I'm open to recommendations.  Hadn't thought of colour coordination...

 

The 8TB parity would be to cover future expansion and allow for larger data disks, as I seem to have a habit of adding larger drives as I expand.  I currently have less than 1TB free across all drives, and I think I'd have to add at least one data drive to the array before I can start moving data, so I might have to sneak in a 6 or 8TB when the wife isn't looking.  Reading around, it seems like a hassle to switch a drive from parity to data.  From a reliability standpoint, I don't know if it's better to go with a larger number of small drives or a smaller number of large drives.  Dual parity is most likely to happen in the future.

 

Feel free to talk me into, or out of, any of this if there are better alternatives/approaches.

 

 


I had a Noctua in the past and technically they are great.  But you don't need brilliant cooling with the Xeon, so I focussed on a quiet cooler and something I liked the look of.  Quietness has become a bit of a theme for me as my PC is on my desk now (I like looking at it!) and the HDD noise can be 'loud' relative to the rest of the PC, although I've still got a lot of files moving around; that should settle down soon.

 

 

I haven't got a parity drive installed yet as I had to RMA mine, so I can't comment properly, but I'm sure it can't be that hard, especially as unRAID can support dual parity (add the bigger drive, sync, and then remove the old parity).  Personally I'd get a 4TB parity which I'd move to the array when needed, and bank or invest the extra money somewhere else in the build.

 

 

Buying a new 4TB would help with your migration - put the new drive in your machine to start the migration, move your files over, and then move drives one by one.  Move the current 4TB last so you can use that as your parity - you want to add parity after all files are on the array for the fastest migration.  If you buy the 8TB, you'll need to add more storage to your array AND the parity drive before you can even get your files across.


The 8TB parity would be to cover future expansion and allow for larger data disks in the future ...

 

This is why I'd choose the 8TB.  Yes, for a time it is a waste of space, but sooner rather than later there would be a second 8TB drive jumping into the array.

  • 2 weeks later...

I don't mean to sound negative, but these builds seem so expensive to me.  You can pick up used servers for so much cheaper; they have enterprise-grade components like ECC memory, and are reliable and rated for 24/7 use.  What you don't get is the low power use of modern CPUs, which IMO is a very worthy tradeoff.

 

e.g. http://www.ebay.com/itm/SUPERMICRO-4U-24-BAY-846E1-R900B-X8DTE-F-2x-E5620-48GB-24x-TRAYS-ASR-5805-/381775604197?hash=item58e39969e5:g:JXMAAOSwTA9X3HMV

 

There is a huge thread on these in the forums somewhere; you just need to replace the fans/PSU to reduce noise, or you can replace all the guts.  This is only if you need lots of expansion, of course, since the case is big and heavy.


I guess it depends on use cases - I wouldn't get far with an 8,288-passmark system, as I'm running multiple VMs, multiple Plex sessions, plus CP/Sabnzbd etc. all at the same time.  Admittedly most of the time my PC isn't taxed, but that's the point - I've built a PC that comfortably meets today's needs and potential needs for years to come.  My old system was 6.5k passmark and it was struggling to do half of what my new machine does.


How many of those are CPU limited?  I have a machine coming with dual L5630s (which I got because of their low power usage); passmark is only 7k, so I'm wondering how capable it'll be.  I thought a lot of those tasks would spread across cores and work well on a multi-core CPU as long as you have enough memory.

 


Plex requires about 2k passmark per stream, so once you have 2 or more running on that system, anything else is going to struggle.

 

 

I frequently have 3 or more running, plus Plex uploading to Cloud Sync, creating thumbnails, or doing background conversions for tablets and phones.  That's before I start considering the impact of Sabnzbd etc. and whatever is going on in VMs - the beauty of my current system is I don't have to!


Archived

This topic is now archived and is closed to further replies.
