zognic

Beginner who needs some advice to start


Hello community,

 

As a future Unraid user, I'm currently building my hardware config.

At this point I need some advice to move forward with the hardware configuration/setup.

 

First, here is what my use would be:

  • Plex HW transcode, locally no more than 2 streams in parallel plus 1 remote (mainly 1080p)
  • Store media & backups
  • Daily download activity with Radarr/Sonarr (small 25 Mb/s internet connection)
  • Simple web hosting (nginx/Let's Encrypt) with very little traffic
  • Some other light Docker containers
  • Be able to run 2 VMs (Linux and Windows) for office activities
  • Optionally, be able to play light games on the Windows VM (2-3 games stored max)

 

My current HW

  • Intel Core i5-9600K (3.7 GHz / 4.6 GHz)
  • Gigabyte Z390M Gaming
  • No GPU card
  • RAM 16 GB (2 x 8 GB) DDR4 2666 MHz
  • Crucial MX500 SATA 2.5" SSD, 500 GB
  • Old Crucial M500 SATA 2.5" SSD, 120 GB (not sure if I will include it in the build)
  • WD Black SN750 NVMe SSD, 250 GB
  • Seagate Barracuda 2 TB
  • Not yet purchased: 1 or 2 more Seagate Barracuda 2 TB

 

For the setup

I haven't had the chance to play with Unraid yet, but here is what I imagine we can do.

 

Use the Seagate for the array (no parity yet; build parity later with the 2 new Seagates).

Create a cache with the Crucial MX500 SSD to store:

  • /appdata
  • /Dockers
  • /Download folder?

Use the WD Black NVMe as a dedicated SSD to store the VMs (possible?)

 

Open points

  • I have some doubts about where to store the Download folder (directly on the cache SSD, or on the array with the "prefer cache" option?)
  • For the VMs, is it possible (and a good idea) to dedicate the NVMe SSD?
  • Could the old 120 GB SSD be an asset in this config?
  • Is it simple (and without losing data) to move from a 1-disk array to 3 disks with 1 parity?

 

Based on all this information, I would like to know whether this setup is coherent, and whether you can suggest ways to improve it.

 

Thanks in advance

 

 

 

2 hours ago, zognic said:

Open points

  • I have some doubts about where to store the Download folder (directly on the cache SSD, or on the array with the "prefer cache" option?)
  • For the VMs, is it possible (and a good idea) to dedicate the NVMe SSD?
  • Could the old 120 GB SSD be an asset in this config?
  • Is it simple (and without losing data) to move from a 1-disk array to 3 disks with 1 parity?

 

Based on all this information, I would like to know whether this setup is coherent, and whether you can suggest ways to improve it.

 

How are you going to access the VM? Directly on the server (i.e. it's a PC + NAS use case and not a pure NAS server use case)?

If that's the case then you need a new GPU for the main VM (let's say Windows) because trying to pass through the iGPU to a VM is more trouble than it's worth.

You can then use the main VM to remote access the Linux VM (or have the Linux VM boot up with the new GPU only when the Windows VM is off).

 

For the temp folder (i.e. heavy writes), mount the old 120GB SSD with Unassigned Devices and use it (assuming you are not downloading Linux ISOs larger than 50GB or so). You already have it (and were considering discarding it), so you may as well put it to good use.

I would be quite interested in how long it takes for your SSD to die, because I have been trying to run various SSDs into the ground to no avail.

It is OK to put the Download folder on the cache, but it's good practice, where possible, to separate heavy write activity from the cache, because if your cache fails, it's very annoying to try to recover appdata, docker, vdisks, etc.

 

If your Windows VM is your main VM then it's a good idea to pass the NVMe through as a PCIe device (i.e. "dedicate" it to the VM) for best performance. You can store the Linux vdisk on the cache.
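One common way to "dedicate" a PCIe device is to stub it with the `vfio-pci.ids` kernel parameter so Unraid leaves it free for the VM. A minimal sketch follows; the vendor:device pair `15b7:5006` is an assumption for the SN750, so verify yours under Tools -> System Devices before touching syslinux.cfg:

```shell
# Sketch only: stub the NVMe so Unraid does not claim it at boot, leaving it
# free to pass through to the Windows VM as a PCIe device.
# 15b7:5006 is an ASSUMED vendor:device ID; check Tools -> System Devices.
nvme_id="15b7:5006"
# This is the kind of line that would go in syslinux.cfg on the Unraid flash
# drive; echoed here for review rather than written anywhere.
echo "append vfio-pci.ids=${nvme_id} initrd=/bzroot"
```

Recent Unraid versions can also bind a device to vfio-pci with a checkbox on the System Devices page, which avoids editing the boot line by hand.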

 

It is simple to add more disks (e.g. in your case add 2 new disks, 1 as parity and 1 more data).

Just follow the instructions on the Unraid wiki (or watch SpaceInvaderOne's guides on YouTube).

A few things to look out for:

  • Preclear your new HDDs before adding them (use the Preclear plugin).
  • Don't accidentally add your current data HDD as parity (the parity build will overwrite whatever is on the parity disk). It's actually quite hard to do if you follow the wiki instructions, but it's worth noting down the current data disk's serial number and double-checking before clicking "Start" on the array.

 

 

 

 

 


Thanks, testdasi, for your reply.

 

To answer your first question: the main use will be a NAS for a complete media-center experience, but I'd also like the option to play with Windows and to have an environment in which to keep improving my Linux skills.

 

For access, the server will be connected to a TV screen (for the first install & troubleshooting), but after that I will connect remotely to the VM (from my Windows laptop) using the WoL plugin & Splashtop, and maybe run games remotely with Moonlight. For the Windows VM, I just saw a YouTube video from SpaceInvaderOne about "how to dual boot windows baremetal & unraid then boot the same windows as a vm", which sounds very interesting. (https://www.youtube.com/watch?v=fnIn6GnA87c)

 

As I wasn't really expecting to buy a dedicated GPU, I imagine the Intel HD graphics could be enough for my use. Is it really necessary to buy a GPU to set up a Windows VM? (I'm very much a newbie on that side.)

 

Could we have the NVMe drive dedicated to Windows (as bare metal) and the old SSD as a dedicated drive for download activity? (If the old SSD crashes, no important data will be lost.)

The other VM will be stored on the SSD cache (MX500) drive.

 

 

 

 

 

Edited by zognic


If you are only remoting into your VM then a dedicated GPU is not necessary, provided you only play games when booting into Windows bare metal (since the Windows VM won't have a GPU, gaming "performance" will be terrible).

 

I still think for a "complete Media Center Experience" you are better off with a dedicated GPU, so you can watch stuff through the VM (via the dedicated GPU) while Unraid does other things (downloads, media management e.g. Plex, NAS storage, etc.) and you don't have to restart your computer. The dedicated GPU does not have to be expensive; it's pretty easy to get a rather affordable GPU that beats the iGPU. If all you need is a display, £20 can get you a dedicated GPU. :)

 

One thing I don't see anyone mentioning about dual-booting is that you might run into issues with Windows activation, since on every boot the OS sees a new "motherboard".

Edited by testdasi

8 hours ago, testdasi said:

One thing I don't see anyone mentioning about dual-booting is that you might run into issues with Windows activation, since on every boot the OS sees a new "motherboard".

Very good remark, and yes, it was mentioned by SpaceInvaderOne: you can get past the Windows activation issue by using the same motherboard UUID (the one Windows was activated with on bare metal) in the VM's XML file.
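As a rough sketch of that trick (paths assumed to be standard Linux sysfs; the `<uuid>` element sits near the top of the VM's libvirt XML, editable via the Unraid VM editor's XML view):

```shell
# Read the motherboard UUID that bare-metal Windows was activated against.
# /sys/class/dmi/id/product_uuid usually needs root; fall back to a hint
# about dmidecode if it is not readable.
uuid=$(cat /sys/class/dmi/id/product_uuid 2>/dev/null || echo "run 'dmidecode -s system-uuid' as root")
# This is the line to carry over into the VM's XML so Windows sees the same
# "machine" it was activated on:
echo "<uuid>${uuid}</uuid>"
```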

 

8 hours ago, testdasi said:

I still think for a "complete Media Center Experience" you are better off with a dedicated GPU, so you can watch stuff through the VM (via the dedicated GPU) while Unraid does other things (downloads, media management e.g. Plex, NAS storage, etc.) and you don't have to restart your computer.

I need more explanation on that point. Do you mean that if I use the iGPU for my Windows VM, other stuff like Plex decoding will not work?

 

Regarding CPU & latency, honestly I can imagine shutting down Plex if I need more resources for the VM, but since I will have 6 cores, I plan to pin some CPUs for dedicated use. In the schema below, I could have 2 different Plex docker containers: one for when I use the VM and another for when the VM is shut down.

(Only 1 VM will be running at the same time.)

 

Maybe this pinned-CPU strategy could be optimized; what do you think?

 

[attached image: CPU pinning schema]

1 hour ago, zognic said:

I need more explanation on that point. Do you mean that if I use the iGPU for my Windows VM, other stuff like Plex decoding will not work?

 

Regarding CPU & latency, honestly I can imagine shutting down Plex if I need more resources for the VM, but since I will have 6 cores, I plan to pin some CPUs for dedicated use. In the schema below, I could have 2 different Plex docker containers: one for when I use the VM and another for when the VM is shut down.

(Only 1 VM will be running at the same time.)

 

Maybe this pinned-CPU strategy could be optimized; what do you think?

To use the iGPU for your Windows VM you need to pass it through, and passing the iGPU through to a VM varies from difficult to impossible (I think the 9-series Intel is in the impossible bucket).

  • Hence, the only way for you to use the iGPU for Windows is to boot it bare metal, which means Unraid is not running, which means no Plex.
  • Yes, you can run Plex in Windows too, but note that your storage is in Unraid, which uses the xfs file system, which Windows doesn't support natively; so still no Plex when booting Windows bare metal.

 

Nice diagram, but you might want to number the vCPUs instead of the (physical) cores, since that's how Unraid pins them. So let's say you number the vCPUs the same way (so vCPU 0 + 1 = CORE 0); then:

  • vCPU 0 should be left unpinned and reserved for Unraid tasks. I think the more recent versions have gone multi-tasking so it's not as critical, but it's always a good idea to leave something reserved for Unraid so everything doesn't grind to a halt.
  • vCPU 1, where possible, should also be left unused, but you can use it for the VM emulator (since that doesn't use much processing power anyway).
  • Plex can share cores with all the other dockers. You can set the Plex docker parameters so that Plex has a higher priority and thus gets more CPU power when it needs it (instead of having Plex-only cores).
  • A trick I use to maximise available cores is to pin only half of each physical core to the VM. So in your example, it would be something like this (numbering by vCPU):
    • 0: reserved for Unraid
    • 1: VM emulator
    • 2,4,6,8,10: dockers
    • 3,5,7,9,11: VM pin
  • The Intel/AMD multi-threading algorithm is surprisingly good. By using half of each physical core that way:
    • When you are not running the VM, Plex has access to 5 physical cores to transcode.
    • When Plex is not transcoding, your VM has access to 5 physical cores for better performance.
    • When both run simultaneously, in my experience, I can tell there's a bit of latency but it's definitely not annoying.
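The half-core pinning above could be sketched with libvirt's `virsh vcpupin` (Unraid normally does this via the GUI / the `<cputune>` section of the VM XML). The VM name "Windows10" is an assumption, and the loop only echoes the commands so they can be reviewed before being run for real:

```shell
# Sketch: pin the VM's 5 guest vCPUs (0-4) to one thread of each spare
# physical core (host CPUs 3,5,7,9,11 from the layout above).
# "Windows10" is an ASSUMED VM name; drop the `echo` to actually apply.
vm="Windows10"
vcpu=0
for host_cpu in 3 5 7 9 11; do
  echo virsh vcpupin "$vm" "$vcpu" "$host_cpu"
  vcpu=$((vcpu + 1))
done
```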

 

 

 

19 minutes ago, testdasi said:

Plex can share cores with all the other dockers. You can set the Plex docker parameters so that Plex has a higher priority and thus gets more CPU power when it needs it (instead of having Plex-only cores).

Interesting; I need to learn how to do this :)

 

Regarding the multi-threading algorithm, that's good news; I thought splitting vCPUs from their physical core could really impact latency.

 

Regarding GPU passthrough, I didn't know it was in the impossible bucket ;) Do you have a recommendation for a decent GPU (around $100) that could do the job?

 

 

6 minutes ago, zognic said:

Interesting; I need to learn how to do this :)

 

Regarding the multi-threading algorithm, that's good news; I thought splitting vCPUs from their physical core could really impact latency.

 

Regarding GPU passthrough, I didn't know it was in the impossible bucket ;) Do you have a recommendation for a decent GPU (around $100) that could do the job?

 

 

 

Read the Docker FAQ topic.
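For what it's worth, the priority setting referred to above boils down to Docker's `--cpu-shares` flag, which in Unraid goes in the container template's "Extra Parameters" field (Advanced view). A minimal sketch, with an assumed image name:

```shell
# Sketch: weight Plex above other containers instead of giving it dedicated
# cores. The default weight is 1024, so 2048 means roughly 2x the CPU time
# of a default container when cores are contended; it costs nothing at idle.
extra_params="--cpu-shares=2048"
# In Unraid, $extra_params goes in the template; the equivalent raw command
# (image name plexinc/pms-docker is an ASSUMPTION) is echoed for illustration:
echo "docker run -d --name=plex $extra_params plexinc/pms-docker"
```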

 

Splitting vCPUs does impact latency; it's a matter of tolerance. If you play the latest games on a 2080 (with graphics settings appropriate for a 2080) then sure, it gets very annoying. If you browse the web, edit a few photos, and play Rocket League casually at medium-low settings, then it's all right.

 

I don't usually recommend specific hardware, due to pricing differences across the globe and potential compatibility issues (there's no way to be 100% sure that something works without physically having that exact hardware).

In general terms though, the 1050 Ti is a good budget choice, and having an iGPU usually reduces the chance of the infamous error code 43 when passing through an Nvidia GPU.

And again, if you just need a display, the GT 710 goes for about £20-£30 this side of the pond. Even my super-niche GT 710 (single slot, passively cooled, PCIe x1) went for about £60.

 

 

 

 


Thanks a lot. Is there any good reason not to move forward with an AMD GPU?

 

 

Schema updated according to your recommendations.

 

[attached image: updated CPU pinning schema]

7 hours ago, zognic said:

Thanks a lot. Is there any good reason not to move forward with an AMD GPU?

 

Schema updated according to your recommendations.

[attached image: updated CPU pinning schema]

No particular reason against an AMD GPU. Basically, the 2 most frequently faced issues when passing through GPUs from the 2 teams are:

  • For AMD: the reset issue (the GPU bind can't be released, so restarting a VM causes the GPU to stop working, requiring an Unraid restart before it works again).
  • For Nvidia: error code 43 (the Nvidia driver detects that it's being run in a virtualised environment and refuses to load, in the hope that you will spend more on a Quadro, for which this issue doesn't exist).

The reset issue is kernel + model related, so if you buy a card that has it with the current kernel version then there isn't really a fix (unless/until patches are released, and as seen recently, the patches can also be unreliable). Also note that lately I have seen a few posts about the reset issue with Nvidia too, so "frequently" doesn't mean exclusively.

There are workarounds to avoid error code 43 that may (or may not) work, but it can happen to any model.

So there are pros and cons with both teams.

 

Regardless of team red or green, I would recommend you do a quick search first (e.g. if you want to buy the AMD RX 580, search for it on the forum to see if others have had issues; hint: don't get the RX 580).

2 minutes ago, testdasi said:

Regardless of team red or green, I would recommend you do a quick search first (e.g. if you want to buy the AMD RX 580, search for it on the forum to see if others have had issues; hint: don't get the RX 580).

🤗 Good hint, I will take it as advice for all kinds of research.

 

For AMD, I thought it was fixed by the latest release of Unraid; maybe I'm wrong?

14 minutes ago, testdasi said:

There are workarounds to avoid error code 43 that may (or may not) work, but it can happen to any model.

So apart from a Quadro there is no recommendation? Is it like playing the lottery?

 

Based on this, I won't spend too many bucks on the graphics card...

 

8 minutes ago, zognic said:

🤗 Good hint, I will take it as advice for all kinds of research.

 

For AMD, I thought it was fixed by the latest release of Unraid; maybe I'm wrong?

So apart from a Quadro there is no recommendation? Is it like playing the lottery?

 

Based on this, I won't spend too many bucks on the graphics card...

 

The fix for AMD was what I referred to in my earlier post. It turned out to be unreliable and was subsequently removed.

 

With regard to error code 43: no, it isn't a lottery.

While there's no guarantee that it won't happen to your GPU, it's not as if a random set of GPUs has it and another random set doesn't.

 

The pattern I have seen is that passing through the primary (or only) GPU (i.e. the GPU that Unraid displays on at boot) almost always leads to having to work around this issue.

You have an iGPU (and presumably the ability to pick it as the boot GPU in the BIOS; please check that first), so if Unraid boots on the iGPU then the chance of error 43 happening to you is very low. A few things you can do to make it even lower:

  • Boot Unraid in legacy mode (i.e. not UEFI)
  • Dump a vbios specific to your actual GPU and use it

I personally have not had any run-in with it, despite doing the "big no no" of turning on Hyper-V. I chalk that up to the 2 points above.

 

9 minutes ago, testdasi said:

Dump a vbios specific to your actual GPU and use it

Thanks a lot for your explanation, testdasi, but I have no idea how to deal with this.

Edited by zognic


 

4 hours ago, zognic said:

Thanks a lot for your explanation, testdasi, but I have no idea how to deal with this.

Watch SpaceInvaderOne's videos on YouTube. :)
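For the vbios dump specifically, the usual sysfs approach looks roughly like this. The PCI address is an assumption (find yours under Tools -> System Devices), and Unraid must not be actively driving the card while you do it; SpaceInvaderOne's vbios video covers the details:

```shell
# Sketch of dumping a GPU vBIOS via sysfs. 0000:01:00.0 is an ASSUMED
# PCI address; substitute the one from Tools -> System Devices.
gpu="/sys/bus/pci/devices/0000:01:00.0"
if [ -e "$gpu/rom" ]; then
  echo 1 > "$gpu/rom"             # make the ROM readable
  cat "$gpu/rom" > /boot/vbios.rom  # save a copy onto the Unraid flash drive
  echo 0 > "$gpu/rom"             # put it back
else
  echo "No GPU found at $gpu; adjust the PCI address"
fi
```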

