2 options - Workstation OR Server grade. Which is best for Plex?


20 posts in this topic


Hi all,

 

I've put together a couple of mobo / CPU combos and I'm just not sure which is the best bang for buck, or whether it could be made cheaper without too much compromise.  Here's the chassis it's going into, where I would like to reuse the PSU and drive cages.  Here's some info on that right here 

 

Quick Sync is an advantage when hardware transcoding in Plex - I'm not sure that is really important if a graphics card such as the P2000 is being used, but I'm wondering if just using the CPU for transcoding rather than a separate GPU for the Workstation option will be OK and net a bit of a saving in the build cost.  I can always add a GPU at a later point in time if I feel I need it, I'm thinking.  Does that make sense, or am I looking at this wrong?  The 2618L is higher performing - but older tech.  Power consumption looks to be the same.

 

I have some 4TB drives from a Synology NAS I might use - got to figure out the storage element a little, TBH, but that's a lot easier.

 


 

 

On 7/6/2020 at 3:30 PM, Jacko_ said:

Hi all,

 

I've put together a couple of mobo / CPU combos and I'm just not sure which is the best bang for buck, or whether it could be made cheaper without too much compromise.  Here's the chassis it's going into, where I would like to reuse the PSU and drive cages.  Here's some info on that right here 

 

Quick Sync is an advantage when hardware transcoding in Plex - I'm not sure that is really important if a graphics card such as the P2000 is being used, but I'm wondering if just using the CPU for transcoding rather than a separate GPU for the Workstation option will be OK and net a bit of a saving in the build cost.  I can always add a GPU at a later point in time if I feel I need it, I'm thinking.  Does that make sense, or am I looking at this wrong?  The 2618L is higher performing - but older tech.  Power consumption looks to be the same.

 

I have some 4TB drives from a Synology NAS I might use - got to figure out the storage element a little, TBH, but that's a lot easier.

 

For a "Plex only" machine, it's not necessary to use server-grade/workstation hardware.

And for transcoding under Plex you can use either the CPU (higher stream quality) or the iGPU (faster, but needs a Plex Pass).

My CPU is from 2012 and can transcode five 720p/4Mbit or four 1080p/8Mbit streams in parallel on the fly.

I don't use iGPU transcoding because of the weak quality.

CAUTION: Transcoding on an Nvidia card needs a "special version" of Plex and/or Unraid - as far as I know...

See this article

 

Edited by Zonediver
  • 3 weeks later...

Hi Zonediver,  thanks for the reply.

 

I don't think I was being clear at all.  What I was trying to say is that I would want to use the iGPU for transcoding, with discrete GPU pass-through to a gaming VM - but I'm just not sure if I will do this, as I don't really get time for games and haven't had a gaming system or console for years.  I like the idea of getting into some sim racing, though - it might never happen.

 

I have chosen the hardware below but I'm struggling with an ETA on the motherboard, and I keep questioning if this option is really the best for Unraid and Plex (I have Plex Pass) but also for running maybe 10 or so Docker containers, some test / dev VMs and even a gaming VM.  Would a better CPU be better, do you think?  I heard that sticking to Intel is best for general Unraid support, but these new AMD CPUs are compelling from a price / feature point of view.  But I would need an integrated GPU for transcoding, so that limits my CPU options to the Ryzen 3 and Ryzen 5, I believe.  I haven't researched how these fare with transcoding vs Intel.

 

Here's what i've ordered so far for this build:

Gigabyte Intel C246-WU4 XEON Server Motherboard

Intel Xeon E-2278G, S 1151, Coffee Lake, 8 Core, 16 Thread, 3.4GHz, 5.0GHz Turbo, 16MB, UHD Graphics P630, 80W
2 x Crucial DDR4 ECC UDIMM 16GB

1 x Noctua NH-D15 dual-tower quiet CPU cooler

Lian Li ATX case (reused from old build)

3 x Icydock 5 x 3.5" hot swap enclosures (reused from old build)

2 x Samsung 970 Evo NVMe 500GB M.2 (reused from old build)

1 x Crucial 2.5" 240GB SSD

5 x 12TB WD Elements drives (2 parity / 3 data)

LSI 9240 in IT mode

Silverstone 750W modular PSU (reused from old build)

 

It's very much overkill CPU-wise, but I'm hoping to deploy a lot of VMs, and I managed to get the CPU and RAM at a good price through work.  I'm not sure on the GPU side of things - I've not played games for a very long time, so I need to research that and look into USB pass-through as well, which may become a problem.

 

Any comments / thoughts on this build and approach are welcome - happy to be proved wrong here, I'm learning as I go.


 

Edited by Jacko_
Added more components

How do you plan to access the VM? Through another physical computer over remote desktop?

If you want to access the VMs using a display attached to the server itself, then you need a dedicated graphics card passed through to one VM, and you use that VM to access the other VMs.

 

Reasons to recommend Intel: the iGPU helps a lot with passing through another GPU to a gaming VM + Intel generally has better single-core performance + Intel doesn't have the inherent latency of Ryzen's CCX/CCD design.

 

Reasons to recommend AMD (Ryzen): better bang for your buck + Intel just annoys people with their pattern of anti-consumer behaviour (e.g. most recently artificially locking memory speed on budget chipsets) + if you have a graphics card for Unraid (e.g. a P2000 for Plex Docker hardware transcoding) then the iGPU benefit isn't important

4 hours ago, testdasi said:

How do you plan to access the VM? Through another physical computer over remote desktop?

If you want to access the VMs using a display attached to the server itself, then you need a dedicated graphics card passed through to one VM, and you use that VM to access the other VMs.

Initially directly on the server, but afterwards I would like it tucked away out of sight and accessed remotely, which is why I need to consider not just graphics pass-through but also USB and audio.  I still need to make sure I fully understand how / what I need to do to make this work.

 

Quote

Reasons to recommend Intel: the iGPU helps a lot with passing through another GPU to a gaming VM + Intel generally has better single-core performance + Intel doesn't have the inherent latency of Ryzen's CCX/CCD design.

 

OK, on that then - do I need to pass through both the iGPU and the dedicated PCIe GPU for the gaming VM?  From my reading I thought I would select the GPU for the specific VM?  If I'm right in thinking that, then I would just pass through the PCIe GPU, and the iGPU can be used for transcoding and for all the other test / dev VMs.  My rationale here is that someone can be transcoding a film while I play games, which hopefully should work OK.  At the moment I transcode using the Celeron CPU in a Synology DS918+ and it can struggle, but that is only a 4-core / 4-thread CPU.  I think that if I dedicate 4 cores / 8 threads to the gaming VM, with the remaining 4 cores / 8 threads for Unraid, Plex, containers and test / dev VMs, it should work OK?  I hope?!?  The test / dev VMs can at least be shut down when not in use.  I don't want to have to do this for the Docker containers if I can avoid it.

 

Edited by Jacko_
48 minutes ago, Jacko_ said:

OK, on that then - do I need to pass through both the iGPU and the dedicated PCIe GPU for the gaming VM?  From my reading I thought I would select the GPU for the specific VM?  If I'm right in thinking that, then I would just pass through the PCIe GPU, and the iGPU can be used for transcoding and for all the other test / dev VMs.  My rationale here is that someone can be transcoding a film while I play games, which hopefully should work OK.  At the moment I transcode using the Celeron CPU in a Synology DS918+ and it can struggle, but that is only a 4-core / 4-thread CPU.  I think that if I dedicate 4 cores / 8 threads to the gaming VM, with the remaining 4 cores / 8 threads for Unraid, Plex, containers and test / dev VMs, it should work OK?  I hope?!?  The test / dev VMs can at least be shut down when not in use.  I don't want to have to do this for the Docker containers if I can avoid it.

 

For the gaming VM you're going to want a dedicated PCIe GPU. Also, if you're going Nvidia for your GPU(s), you will need a minimum of 2 GPUs to get around Nvidia's virtualization inhibitors. I use a 1050 for the Unraid server itself / Plex, and an RTX 2070 SUPER for my gaming VM, and the setup works just fine. Plex / Unraid won't need a high-end GPU - save that for the VM.

 

As far as that CPU allocation goes, 4/8 for each sounds more than reasonable as long as you don't play CPU-intensive games (e.g. the Total War series, or highly modded Minecraft).

1 hour ago, Jacko_ said:

OK, on that then - do I need to pass through both the iGPU and the dedicated PCIe GPU for the gaming VM?  From my reading I thought I would select the GPU for the specific VM?  If I'm right in thinking that, then I would just pass through the PCIe GPU, and the iGPU can be used for transcoding and for all the other test / dev VMs.  My rationale here is that someone can be transcoding a film while I play games, which hopefully should work OK.  At the moment I transcode using the Celeron CPU in a Synology DS918+ and it can struggle, but that is only a 4-core / 4-thread CPU.  I think that if I dedicate 4 cores / 8 threads to the gaming VM, with the remaining 4 cores / 8 threads for Unraid, Plex, containers and test / dev VMs, it should work OK?  I hope?!?  The test / dev VMs can at least be shut down when not in use.  I don't want to have to do this for the Docker containers if I can avoid it.

No. Boot Unraid with the iGPU and pass through the dedicated GPU to the gaming VM.

And configure the docker to use the iGPU for hardware transcoding.

 

The other VMs do not use any GPU and can only be accessed remotely (e.g. via VNC or RDP).

You shouldn't pass through the iGPU to any VM: (a) it's unlikely to be possible, and (b) once passed through, you will not be able to use it for the Docker container (e.g. for transcoding).

 

4 cores is enough for a normal gaming VM.
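For reference, the commonly used way to expose the iGPU to the Plex container on Unraid is a sketch like the following - the i915 module and /dev/dri paths are the standard Intel ones, but treat the exact template settings as an assumption for your own setup:

```shell
# Load the Intel iGPU driver so /dev/dri appears (to persist across
# reboots on Unraid, add this line to /boot/config/go)
modprobe i915
ls /dev/dri    # expect to see card0 and renderD128

# In the Plex Docker template, add a device mapping; the equivalent
# docker run flag is:
#   --device=/dev/dri:/dev/dri
# Then tick "Use hardware acceleration when available" under
# Plex Settings > Transcoder (needs Plex Pass).
```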

12 hours ago, untraceablez said:

For the gaming VM you're going to want a dedicated PCIe GPU. Also, if you're going Nvidia for your GPU(s), you will need a minimum of 2 GPUs to get around Nvidia's virtualization inhibitors. I use a 1050 for the Unraid server itself / Plex, and an RTX 2070 SUPER for my gaming VM, and the setup works just fine. Plex / Unraid won't need a high-end GPU - save that for the VM.

 

As far as that CPU allocation goes, 4/8 for each sounds more than reasonable as long as you don't play CPU-intensive games (e.g. the Total War series, or highly modded Minecraft).

Do I need a 2nd GPU if the CPU has its own?  I was expecting to use the iGPU for Plex hardware transcoding, with a 1070 or whatever for gaming VM pass-through.  Is this going to be a problem, do you know?

 

Games-wise it'll mostly be car racing, but maybe some first-person shooters.  TBH I've not played games for 15-20 years, so I've no idea what's what these days, haha.  I'll more than likely use a dedicated SSD for the gaming VM, perhaps one of the NVMe drives I have.  I haven't quite figured out where the Plex metadata should reside (ideally on something solid state), and I do also have a 240GB SSD not doing anything at the moment, so perhaps I'll use that.

Edited by Jacko_
13 hours ago, testdasi said:

No. Boot Unraid with the iGPU and pass through the dedicated GPU to the gaming VM.

And configure the docker to use the iGPU for hardware transcoding.

Great, that's what I was expecting to do.

13 hours ago, testdasi said:

The other VMs do not use any GPU and can only be accessed remotely (e.g. via VNC or RDP).

You shouldn't pass through the iGPU to any VM: (a) it's unlikely to be possible, and (b) once passed through, you will not be able to use it for the Docker container (e.g. for transcoding).

That makes sense.  I'll make sure I don't do that. :)

 

 

 

Where should my data reside? 

 

  • Gaming VM
    • I want the OS on an SSD of some sort - maybe NVMe or SATA SSD, I've got both options available.  Perhaps NVMe for some additional performance?
    • Games to reside on HDD and be protected by parity.
  • Test / dev VMs on SSD(s)
  • Plex metadata - SSD is fine, I suspect, unless NVMe would just make client thumbnail previews much snappier?
  • Docker containers - I'm not sure yet, still need to research this.
  • Unraid cache - I guess this should ideally be on NVMe; might have to settle for SSD though.  Or I put the Plex metadata on SSD.
  • Personal non-OS / application data on the HDDs.

 

Is this about right?  What should I consider for good performance?  I can add more SSDs if needed.  I'll have a total of 18 SATA ports, of which 15 will be connected to the Icydocks.  I can either mount 2.5" SSDs in the Icydock with a cage adapter OR just double-sided-tape them internally to the chassis.

Edited by Jacko_
44 minutes ago, Jacko_ said:

Where should my data reside? 

 

  • Gaming VM
    • I want the OS on an SSD of some sort - maybe NVMe or SATA SSD, I've got both options available.  Perhaps NVMe for some additional performance?
    • Games to reside on HDD and be protected by parity.
  • Test / dev VMs on SSD(s)
  • Plex metadata - SSD is fine, I suspect, unless NVMe would just make client thumbnail previews much snappier?
  • Docker containers - I'm not sure yet, still need to research this.
  • Unraid cache - I guess this should ideally be on NVMe; might have to settle for SSD though.  Or I put the Plex metadata on SSD.
  • Personal non-OS / application data on the HDDs.

 

Is this about right?  What should I consider for good performance?  I can add more SSDs if needed.  I'll have a total of 18 SATA ports, of which 15 will be connected to the Icydocks.  I can either mount 2.5" SSDs in the Icydock with a cage adapter OR just double-sided-tape them internally to the chassis.

 

With a gaming VM, it would be a good idea to pass through an NVMe drive as a PCIe device.

It's not about additional performance but rather reducing performance variability. With gaming it's not just about max performance but about the most consistent performance (i.e. less lag).
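Before passing an NVMe drive through as a PCIe device, it's worth checking that it sits in its own IOMMU group. A commonly shared sketch for listing the groups from the Unraid console (assuming lspci is available, as it is on Unraid):

```shell
#!/bin/bash
# List each IOMMU group and the PCI devices inside it. A device is a
# clean pass-through candidate when it is alone in its group (or only
# shares it with its own PCI bridge). nullglob makes the loop a no-op
# on systems where IOMMU isn't enabled.
shopt -s nullglob
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
        echo -e "\t$(lspci -nns "${dev##*/}")"
    done
done
```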

 

Having 2 SSDs in the cache pool running as RAID 1 for the test / dev VM vdisks would be sufficient. There's no need to dedicate an SSD to a single VM unless you really need to.

 

Plex isn't snappier with the db on an NVMe.

 

It depends on how much space you need for the VMs. Usually people share the same cache pool between the docker image, appdata and VM vdisks.

 

Don't think of the Unraid "cache" as being a "cache". It's just a fast general storage pool. If you are happy enough with your array write speed (optionally with turbo write turned on), which most users would be with modern HDDs, then there's no need to have a cache pool as a write cache.
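For what it's worth, turbo write can be toggled under Settings > Disk Settings, or (as far as I know) from the console with mdcmd - treat the exact path and values as assumptions for your Unraid version:

```shell
# Switch the array to reconstruct write ("turbo write"): all data disks
# spin up on every write, but the write is no longer limited by the
# read-modify-write cycle of a normal parity update
/usr/local/sbin/mdcmd set md_write_method 1

# Revert to the default read/modify/write method
/usr/local/sbin/mdcmd set md_write_method 0
```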

6 hours ago, Jacko_ said:

Do I need a 2nd GPU if the CPU has its own?  I was expecting to use the iGPU for Plex hardware transcoding, with a 1070 or whatever for gaming VM pass-through.  Is this going to be a problem, do you know?

You will want one to do hardware transcoding, but as I said before, it doesn't need to be high end - honestly, even the 1050 I have is overkill. A $30-$50 GPU will do nicely; it just helps to offload work from your CPU (especially since it'll be running the Dockers and the VMs).

 

Quote

Games-wise it'll mostly be car racing, but maybe some first-person shooters.  TBH I've not played games for 15-20 years, so I've no idea what's what these days, haha.  I'll more than likely use a dedicated SSD for the gaming VM, perhaps one of the NVMe drives I have.  I haven't quite figured out where the Plex metadata should reside (ideally on something solid state), and I do also have a 240GB SSD not doing anything at the moment, so perhaps I'll use that.

 

That's quite a while to go without gaming! If you're into shooters, the big ones today are still CoD (I'm amazed they haven't run out of stuff yet) and Overwatch for more traditional shooters, and then there are the million and one 'battle royale' games where you fight in 100-player swarms. Not my cup of tea, but they're super popular.

 

A note on racers: if you ever plan to do it in VR, you're going to need a multi-chip USB PCIe card to pass through the sensors and the like. I don't know if you're planning to or not, but I know some people play racing games in VR for extra immersion.

7 hours ago, testdasi said:

 

With a gaming VM, it would be a good idea to pass through an NVMe drive as a PCIe device.

It's not about additional performance but rather reducing performance variability. With gaming it's not just about max performance but about the most consistent performance (i.e. less lag).

 

Having 2 SSDs in the cache pool running as RAID 1 for the test / dev VM vdisks would be sufficient. There's no need to dedicate an SSD to a single VM unless you really need to.

 

Plex isn't snappier with the db on an NVMe.

 

It depends on how much space you need for the VMs. Usually people share the same cache pool between the docker image, appdata and VM vdisks.

 

Don't think of the Unraid "cache" as being a "cache". It's just a fast general storage pool. If you are happy enough with your array write speed (optionally with turbo write turned on), which most users would be with modern HDDs, then there's no need to have a cache pool as a write cache.

Perfect, thanks for the very useful info - I'll certainly take it on board.

 

NVMe for the gaming VM - check, thanks for explaining that for me, makes sense.

2 x SSDs in RAID 1 for the cache pool - check, I can make that happen.

Plex on SSD - check; does this need dedicated hardware, or can it be shared as part of the system cache pool?

 

2 hours ago, untraceablez said:

You will want one to do hardware transcoding, but as I said before, it doesn't need to be high end - honestly, even the 1050 I have is overkill. A $30-$50 GPU will do nicely; it just helps to offload work from your CPU (especially since it'll be running the Dockers and the VMs).

 

I was planning on using the CPU's GPU for this, which I think will be more than up to the task.  I'm not expecting any more than 2 transcodes at any one time.

 

2 hours ago, untraceablez said:

That's quite a while to go without gaming! If you're into shooters, the big ones today are still CoD (I'm amazed they haven't run out of stuff yet) and Overwatch for more traditional shooters, and then there are the million and one 'battle royale' games where you fight in 100-player swarms. Not my cup of tea, but they're super popular.

 

A note on racers: if you ever plan to do it in VR, you're going to need a multi-chip USB PCIe card to pass through the sensors and the like. I don't know if you're planning to or not, but I know some people play racing games in VR for extra immersion.

I was really into Unreal Tournament when it first came out.  I haven't really played any games to speak of since then.  I used to enjoy a sim racer but I can't think what it was called - perhaps iRacing.  I've only played a handful of times on a console since then as well, but I enjoyed Forza and Gran Turismo; no idea what's good these days.  I think I'll stick to a normal monitor for now (I do already have a 4K 32" monitor I use for work, so I've made some investment in that area already).  Just need a wheel really to make the most of it.

 

That said, I might have a problem, which I'll show you shortly.


I went and collected the MB and CPU cooler today and spent an hour or so putting it together while I wait for the CPU and RAM, which should hopefully be with me before the week is out.

 

Cabling is a nightmare and I've done the best I can with it for now.  It's not perfect, but TBH it'll do.  My main concern now is the sheer size of the CPU cooler and the available space.  I've mounted the brackets so the cooler is pulling air from the front to the rear.  With it mounted this way, I think it will block the PCIe slot I was hoping to use for the GPU.  I won't know until I can get the CPU installed and the cooler mounted, but it looks like it might be a problem.  Worst case, I could take an angle grinder to the fins.  I think returning the cooler for a smaller model is out of the question now, as I've got my fingerprints all over the thing!

 

Here are some shots nonetheless - any advice you have, please give it.  The drives I've bought are WD Elements, which I have taken out of their housings.  I'm aware of the 3.3V power issue with the SATA power connector, but these Icydocks are old and each of them is powered via 3 x 4-pin Molex, so I don't think that will be an issue for me.  I just hope the Icydock backplane doesn't limit the transfer rate - I'm not sure if they were only rated for SATA II when I bought them many years ago.

 

 

IMG_20200804_170831.jpg

IMG_20200804_170935.jpg

IMG_20200804_171022.jpg

IMG_20200804_171940.jpg

IMG_20200804_172029.jpg

45 minutes ago, Jacko_ said:

Plex on SSD - check; does this need dedicated hardware, or can it be shared as part of the system cache pool?

Either is fine. It really depends on what you have at hand and how big your main cache pool is. The Plex db generally doesn't require dedicated storage, since there isn't enough load for it to be an issue. So it comes down to free space.

40 minutes ago, Jacko_ said:

I went and collected the MB and CPU cooler today and spent an hour or so putting it together while I wait for the CPU and RAM, which should hopefully be with me before the week is out.

 

Cabling is a nightmare and I've done the best I can with it for now.  It's not perfect, but TBH it'll do.  My main concern now is the sheer size of the CPU cooler and the available space.  I've mounted the brackets so the cooler is pulling air from the front to the rear.  With it mounted this way, I think it will block the PCIe slot I was hoping to use for the GPU.  I won't know until I can get the CPU installed and the cooler mounted, but it looks like it might be a problem.  Worst case, I could take an angle grinder to the fins.  I think returning the cooler for a smaller model is out of the question now, as I've got my fingerprints all over the thing!

 

Here are some shots nonetheless - any advice you have, please give it.  The drives I've bought are WD Elements, which I have taken out of their housings.  I'm aware of the 3.3V power issue with the SATA power connector, but these Icydocks are old and each of them is powered via 3 x 4-pin Molex, so I don't think that will be an issue for me.  I just hope the Icydock backplane doesn't limit the transfer rate - I'm not sure if they were only rated for SATA II when I bought them many years ago.

I mean, if you're in the return period you can probably still return the cooler, fingerprints or not. That is some seriously tight clearance though; I'd definitely look at the return option before taking an angle grinder to it, especially given that the fans mount to the fins on the outside, so you'd likely destroy their mounting mechanism in trying to grind down the fins. You'd be better off getting another cooler and eating the cost / using it in another build if possible.

 

Regarding the Icydock: SATA I or SATA II is definitely going to be more limiting than Molex vs. SATA power. If you can find a cheap SATA III backplane, I'd recommend it, because I can say right now that backplane will bottleneck your array performance from the get-go.

Edited by untraceablez
Clarifying answers, adding answer to icydock comment, missed it originally.
5 hours ago, untraceablez said:

I mean, if you're in the return period you can probably still return the cooler, fingerprints or not. That is some seriously tight clearance though; I'd definitely look at the return option before taking an angle grinder to it, especially given that the fans mount to the fins on the outside, so you'd likely destroy their mounting mechanism in trying to grind down the fins. You'd be better off getting another cooler and eating the cost / using it in another build if possible.

 

Regarding the Icydock: SATA I or SATA II is definitely going to be more limiting than Molex vs. SATA power. If you can find a cheap SATA III backplane, I'd recommend it, because I can say right now that backplane will bottleneck your array performance from the get-go.

I'm going to give returning it a shot once I've tried cleaning the prints off with some alcohol and boxing it back up with all of the bits.  What I have done is bought another Noctua which uses a single 120mm fan, so I hope all of the mounting hardware is the same - then opened bags can be swapped for unopened ones etc., which saves me removing the motherboard to get access to the mounting bracket. So, all being well, that should be possible. 

 

With regard to the SATA performance, while I can't say for certain, I'm hoping that the backplane has no logic that will hinder performance.  If data is just passed straight through to the SATA cable from the backplane, then the performance is driven by the connecting controller for each individual drive.  Possibly wishful thinking, but it will save me a lot of money and messing about if this is the case.

 

I'm looking forward to powering it on for testing and setting up.

  • 2 weeks later...

Well, I'm up and running.  I have not added all the drives or any NVMe or SSDs yet, but will do in a few days.

 

I have currently installed version 6.9.0-beta25.  I did so on a 64GB SD card as it was all I had available at the time; despite Unraid stating 32GB max, I wonder if the new code allows for larger USB keys?

 

I added 4 of my 5 12TB WD Elements drives - I have tried adding 2 of them as parity and 2 as data - and I checked the drive stats.  I will need to check on the read / write performance, but the SATA version is showing as 3.2 @ 6Gb/s, which is promising.  If you recall, there was a concern that the Icydock(s) would be a bottleneck, but that has given me hope that they will not be.
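A quick way to confirm the negotiated link speed per drive from the console is smartctl (sdb here is just a placeholder for whichever device you're checking):

```shell
# Show drive identity info, including the negotiated SATA link speed
smartctl -i /dev/sdb | grep -i 'SATA Version'
# A SATA III drive behind a SATA II backplane would negotiate down and
# report something like "current: 3.0 Gb/s" instead of 6.0 Gb/s
```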

 


 

I'm a little confused as to why the parity drives are showing with warnings - I suspect it's due to the parity sync it is doing.  Is this having to align the sectors on each of the parity drives to build up a reference table?  The estimated time to complete is over 16 hours.

 


 

I have the larger cooler ready to return to the supplier, and I bought a replacement online with a much smaller heatsink and a single 120mm fan.  Hopefully I can still keep the temps down.  I have three fans at the rear of the Icydocks, with a PCIe-slot 120mm fan exhausting out the rear, a 120mm fan pulling air through the heatsink, and a 120mm fan pulling air out of the rear of the case.  The PSU is mounted in the top of the chassis and has a 120mm fan (the bearings are noisy, so I want to replace this one).

 



I added some old 1TB drives to the array, just testing if hot-swap worked OK etc. and to see if these slower drives reported their SATA speed correctly - and they do, SATA II @ 3Gb/s.  So this is great news: the Icydock backplane is completely passive with no logic, and will support my shiny new SATA III drives at full speed.

 



Well, after relocating the server to another room I had issues powering it back on.  Not sure what's up with one of the DIMMs, but it doesn't like one of them.  I had got to POST initially with both sticks, but even testing the problematic DIMM on its own in each memory slot, the board was showing a memory error and would not get to POST.  What I didn't think about was the parity check.  It was maybe 8 hours into the estimated 16 when I decided it would be a good time to relocate it, so I powered it down (gracefully); once I'd got it back up again, I had to start the parity check all over again!  Not sure if this was caused by me pulling drives etc. while trying to troubleshoot the RAM issue, or if it's just what is expected if you don't pause the parity check operation prior to shutdown.  Lesson learnt.

 

While the parity check was ticking away, I added some of the apps that SpaceInvader One had suggested in a couple of videos from a year or so ago, had a general play around familiarising myself with the GUI, and got my APC UPS configured to report into Unraid, with graceful shutdown set up on power loss.  It took a bit of messing about and reading up on PowerChute, but it's working great now. 
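Unraid's built-in UPS support runs apcupsd under the hood, so a quick health check from the console looks like this (the field names are apcupsd's standard ones):

```shell
# Query the running apcupsd daemon for live UPS status
apcaccess status
# Fields worth watching: STATUS (ONLINE / ONBATT), BCHARGE (battery %),
# TIMELEFT (estimated runtime on battery, in minutes)
```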

 

I've added the fifth and final 12TB drive, and that is being cleared at the moment.  I'll add a couple of 240GB SSDs I have kicking around tomorrow and configure them in RAID 1 for appdata, Docker, Plex metadata etc., and I'll install one of the NVMe drives (for pass-through to a Windows VM) from the Synology NAS, now that it has rebuilt itself after I replaced one of the failing drives.  I then need to start transferring data from the Synology to Unraid, but I'm expecting I'll have to do this via the terminal, which I'm not familiar with, so I need to figure this out.  I've added one of the Synology shares to Unraid, so I suspect a simple enough command will get this copying over to one of the array shares, but I'm not sure how as yet. 

 

Once everything is copied over to Unraid, it would be great to script an rsync backup from Unraid to the Synology for things like appdata, the boot drive, VMs, containers and some select personal files, so I'll take a look at this in the future as well. 

