AMD vs. Intel for Unraid/Plex - 2020


Recommended Posts

Hey!

I am not sure which configuration to choose for my upcoming Unraid/Plex server, so I would be very thankful for your thoughts.

My options so far:

  1. AMD Ryzen 7 3700X on the X470D4U2-2T motherboard, with transcodes handled by a Quadro P2000 GPU (since the Ryzen doesn't have processor graphics)
  2. Intel Xeon E-2278G on the E3C246D4U2-2L2T motherboard, with transcodes handled by the iGPU (processor graphics)

 

Since I want power consumption to be as low as possible, I am really uncertain what to choose now. The Ryzen has a 65 W TDP, the Intel 80 W, but I have no idea how much power the P2000 will finally draw. It's rated at 75 W. So will I have 140 W of TDP all the time? I really don't know ... I need the graphics unit only for Plex and only when transcodes are requested. On the other hand, I'm not sure if the Intel's graphics unit is strong enough ... I guess I won't have more than two 4K transcodes at a time.

 

I would appreciate your advice!

Link to comment

I’m in a very similar place with my upcoming build. I believe I’ve done a lot of research and can probably help.

 

1. TDP is really a limit to what the processor will operate at under load. Also, AMD and Intel calculate TDP differently, so it's harder to do a direct comparison. Idle power is usually considerably lower with Intel CPUs, while Ryzen processors tend to only idle a little lower than their TDP (but also have a lower ceiling in terms of power consumption). That 80W Intel processor will idle about 20-30W below that Ryzen processor. It's important to consider this because if you're primarily using your server as a Plex/Emby server, it will be at idle most of the time. The P2000 is limited to 75W max but if it's just transcoding and nothing else, it will consume very little power. Ideally, the P2000 should sit below 10W at idle, but due to a bug somewhere between Plex and the P2000, it stays in an active state even after transcoding is done, which causes it to consume somewhere closer to 20W. Plex is looking into it but who knows when that will be fixed.
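To put that idle-power gap in money terms, here's a rough back-of-envelope sketch for a box that idles most of the time (the electricity price is my own assumption, not something from this thread):

```python
# Rough annual electricity cost of an always-on server's average draw.
# 0.30 EUR/kWh is a placeholder price (roughly German residential rates in 2020).
def annual_cost_eur(avg_watts: float, price_per_kwh: float = 0.30) -> float:
    kwh_per_year = avg_watts * 24 * 365 / 1000
    return round(kwh_per_year * price_per_kwh, 2)

# A ~30 W idle difference between the two platforms:
print(annual_cost_eur(30))   # roughly 79 EUR per year
```

So a 20-30W idle difference is real money over a few years, which is why idle draw matters more than the TDP number for a mostly-idle Plex server.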

 

2. I’m not saying that UnRAID with Ryzen is unstable or not adequate, but there are some known quirks that are still in the process of being worked out (especially with the X470D4U and X470D4U2-2T). You will most likely need to set the power state to Typical Idle Power or something similar, and you’ll be without some temperature sensors that are nice to have but won’t be available until UnRAID updates their Linux kernel. You also might run into issues with the PSU you are using causing your processor to run at very low speeds. This seems to be correctable but it’s something you need to be aware of. UnRAID updates for Ryzen also tend to cause a few more issues on average than Intel.
 

3. With the Intel CPU and QuickSync, you should be able to easily do 15+ 1080p transcodes with little effort except for a few changes in your BIOS and in your go file. Otherwise, you can run it with any form of UnRAID as long as it has a newer kernel where your iGPU is supported (generally anything 6.8.0rc-1 and above). With Ryzen and the P2000, you’ll need to install the Nvidia version of UnRAID which isn’t updated as fast as the regular UnRAID version (it’s still relatively fast but if you want to update when a newer version of UnRAID comes out you’ll need to wait for the Nvidia guys to bake in their drivers). Granted, with the P2000, you’ll be able to do 20+ 1080p transcodes with ease.
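For anyone wondering what those go file changes look like: the usual community approach (a sketch based on common guides, so verify against your Unraid version) is to load the i915 driver and open up /dev/dri before emhttp starts:

```shell
#!/bin/bash
# /boot/config/go - runs once at Unraid boot

# Load the Intel iGPU kernel driver so /dev/dri exists
modprobe i915

# Loosen permissions so the Plex container can use the render node
chmod -R 777 /dev/dri

# Start the Management Utility (this line is already in the stock go file)
/usr/local/sbin/emhttp &
```

You then pass /dev/dri through to the Plex Docker container as a device and enable hardware transcoding in Plex's settings (Plex Pass required).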

 

I’m personally leaning toward an E-2288G and Supermicro X11SCH-F (I don’t need 10g and I have a Supermicro chassis). It will cost more, have more security patches in the future that will lower performance, and the CPU won’t be upgradeable. With all that said though, QuickSync is incredibly efficient, the CPU will idle below 40W, I’ll get 2 full speed NVME slots, it will be powerful enough to last me about 5 years, and I can always add a P2000 down the road if I need more transcoding power. Ryzen is a very powerful CPU at a very affordable price and if the newer Linux kernels bring forward some much needed improvements and ASRock keeps updating their BIOS to be better, I might end up going that route. We’ll see what the next few months bring.

Link to comment

Wow, thank you so much for that detailed and super helpful reply. 👍 That really helps me with my decision.
QuickSync is now the crucial factor for choosing one of the Intel 8-cores (E-2278G or even E-2288G).

 

My build so far:

  • MB: Asrock E3C246D4U2-2L2T (or E3C242D4U2-2T)
  • CPU: Intel Xeon E-2288G
  • RAM: 2x 16 GB DDR4-2666 ECC
  • M.2 SSD 1 TB Evo Plus
  • Case: SilverStone CS381
  • PSU: Corsair SF450 Platinum
  • Cooler: Noctua NH-L9i

 

But I'm not sure which HDDs to choose: Western Digital Red or Seagate IronWolf.
Because the server will be in the living room, I would prefer less noisy drives. Any recommendations, or are they pretty similar?

 

I would also be very glad to see your build list 😊


 

Edited by Mor9oth
Link to comment

Great post by ramblinreck47 but I need to make 2 corrections to point number 3.

 

3. With the Intel CPU and QuickSync, you should be able to easily do 15+ 1080p transcodes with little effort except for a few changes in your BIOS and in your go file. Otherwise, you can run it with any form of UnRAID as long as it has a newer kernel where your iGPU is supported (generally anything ~~6.8.0rc-1 and above~~ 6.8.0-rc1 to 6.8.0-rc7, or 6.9.0-rc1 whenever it comes out. Other versions run on a pre-5.x kernel which may not support newer iGPUs.). With Ryzen and the P2000, you’ll need to install the Nvidia version of UnRAID which isn’t updated as fast as the regular UnRAID version (it’s still relatively fast but if you want to update when a newer version of UnRAID comes out you’ll need to wait for the ~~Nvidia~~ LinuxServer.IO guys to bake in their drivers. Nvidia doesn't lift a fingernail; Unraid Nvidia is the excellent and laborious work of the LSIO guys.). Granted, with the P2000, you’ll be able to do 20+ 1080p transcodes with ease.

Link to comment

I'm currently in the same boat - actually I came here to ask a similar question. Thanks for your detailed answer ramblinreck47.

 

I've been racking my brain over the details for two days, and all I've figured out is that current Intel CPU/GPU combinations seem to limit PCIe lanes to 16. Is this correct? And if so, are there Intel APUs with more lanes? Consider 3x PCIe x8 plus 2x M.2 x4. I can't build that with Intel APUs, can I?

 

I really would like to move from "Unraid NVIDIA" to plain "Unraid", but the current Intel APUs seem to be limited, IMHO.

 

Link to comment
31 minutes ago, hawihoney said:

I've been racking my brain over the details for two days, and all I've figured out is that current Intel CPU/GPU combinations seem to limit PCIe lanes to 16. Is this correct? And if so, are there Intel APUs with more lanes? Consider 3x PCIe x8 plus 2x M.2 x4. I can't build that with Intel APUs, can I?

That is a myth, probably caused by confusion between the theoretical max performance of the iGPU (within the CPU) and the PCIe lanes (out of the CPU).

The iGPU is not connected via the standard PCIe lanes, or at least none of Intel's schematics (or third-party technical analyses, e.g. AnandTech) have ever indicated as much.

Edited by testdasi
Link to comment
44 minutes ago, hawihoney said:

Sorry, what is a myth? Must be my bad English, sorry.

You mentioned "current Intel CPU/GPU combinations seem to limit PCI lanes to 16.".

That statement reminded me of a myth that was floating around: that if one uses the iGPU, it reduces the speed of other PCIe peripherals because the total is limited to 16 lanes. That is entirely untrue.

 

From your subsequent reply though, I think you meant that the current generation of Intel CPUs with an iGPU has a maximum of 16 lanes. And yes, that is true.

 

If a high number of PCIe lanes is critical for you, then you have no choice but to use a different platform and either forgo hardware transcoding or use Nvidia NVENC.

Link to comment
2 hours ago, testdasi said:

Great post by ramblinreck47 but I need to make 2 corrections to point number 3.

 

3. With the Intel CPU and QuickSync, you should be able to easily do 15+ 1080p transcodes with little effort except for a few changes in your BIOS and in your go file. Otherwise, you can run it with any form of UnRAID as long as it has a newer kernel where your iGPU is supported (generally anything ~~6.8.0rc-1 and above~~ 6.8.0-rc1 to 6.8.0-rc7, or 6.9.0-rc1 whenever it comes out. Other versions run on a pre-5.x kernel which may not support newer iGPUs.).

I’ll give you the second point because I just didn’t use the right words to describe who has been working on the plugin. The LS.IO guys are phenomenal.

 

The first point, the one quoted above, I still stand by. I originally thought, as you do, that 9th Gen Intel QuickSync needed Linux kernel 4.20 and above, but it turns out several people say they are using it with 6.8.0 even though it's on 4.19. I submit this comment and the following comments on the reddit thread for the 6.8.0 release:

 

Since I don’t currently have a 9th Gen Intel CPU, I can’t verify it. If anyone out there does, it would be nice if you could confirm that it’s working.

Edited by ramblinreck47
Link to comment
11 minutes ago, Mor9oth said:

Any signs that Linux kernel 4.20 is coming soon? Or when?

Unlikely. Based on what LT has announced, 6.9.0 will be on a 5.x kernel (which is why 6.9.0-rc1 will be on a 5.x kernel). The reason 6.8.x is still on 4.19 is that they discovered some strange bugs with Docker networking on the 5.x kernel and need to fix those first.

Link to comment
5 hours ago, Mor9oth said:

would also be very glad to see your build list 

Here’s the build that I’m working on. I’ve already purchased a few things, but I’ll finish it here in the next few months. Just got to get moved to my new job in a new city and finish paying off the last little bit of my student loans.

 

CPU: Intel Xeon E-2288G 3.7 GHz 8-Core Processor = ~$600 (https://www.cdw.com/product/intel-xeon-e-2288g-3.7-ghz-processor/5846100 OR https://www.ebay.com/itm/INTEL-XEON-E-2288G-PROCESSOR-3-70GHZ-SRFB3-8-core-16thread-octa-core-CPU/193291036144?epid=22036063532&hash=item2d010b25f0:g:W7cAAOSw6IVeFRph)

 

CPU Cooler: Noctua NH-D9L 46.44 CFM CPU Cooler = $54.95 (https://www.amazon.com/dp/B00QCEWTAW?tag=pcpapi-20&linkCode=ogi&th=1&psc=1)

 

Motherboard: Supermicro X11SCH-F ATX LGA1151 Motherboard = $252.74 (https://www.shopblt.com/item/super-micro-mbd-x11sch-f-o-c246-cfl-xeon/supmic_mbdx11schfo.html)

 

Memory: 2 x Kingston Technology 16GB DDR4-2666 ECC Unbuffered DIMM CL19 2Rx8 1.2V Micron E Die (Server Premier) = ~$180 (https://www.provantage.com/kingston-technology-ksm26ed8-16me~7KIN93CU.htm)

 

GPU: PNY Quadro P2000 5 GB Video Card = $270 (PURCHASED)

 

Case: Supermicro 3U Chassis - CSE836BE16-R920B = $397.87 (PURCHASED)

 

Case Fan: 2 x Supermicro 80mm Hot-Swappable Middle Axial Fan (FAN-0104L4) = $34.80 (https://store.supermicro.com/80mm-fan-0104l4.html)

 

Case Fan: 3 x Supermicro 80mm Hot-Swappable Middle Axial Fan (FAN-0074L4) = $63.00 (https://store.supermicro.com/80mm-fan-0074l4.html)

 

HBA: LSI 9211-8i in IT Mode = $57 (PURCHASED)

 

If the iGPU for the E-2288G is everything it’s cracked up to be, I’ll sell the P2000 and just use the money towards adding stuff to my rack. My new place will be a rental and won’t have fiber internet, so I’ll be underutilizing this entire setup for 2 years until my wife and I build our own house in a part of town that has fiber.

If you have any questions about what and why I picked any piece above, I have pretty detailed explanations for all of them and I don’t mind sharing.

 

Side note: My thinking in going with the E-2288G over, say, an E-2146G is that it’s much easier to add a GPU in the future than it is to replace a processor. It might not be the most cost-efficient option, but it’s definitely the easiest. If and when the E-2288G becomes old and not as powerful as whatever is new, I’ll just sell the entire motherboard and CPU combo and start over fresh. Intel tends to retain a lot of value on the higher-end processors for almost all of their sockets. The same can’t be said for AMD. Just look at the Ryzen 7 1800X compared to the i7-7700K.

Link to comment

Thank you so much for the build list and all this detailed information! It seems this will be a very powerful server! 😀 Looking forward to seeing the finished server and how it performs. 😉

 

Because I'm an absolute noob, I would like to ask a few more things:

  1. What hard drives will you use for that server? Any recommendations so far? I would prefer less noisy hard drives. "SSD only" doesn't work with Unraid, right?
  2. Is Unraid slow? At the moment I'm using a Synology (DS415play, dual core and only 1 GB RAM) with WD Red hard drives, and it is super, super slow ... I'm a little afraid that Unraid, also based on hard drives, could be slow too. Of course I will use the M.2 SSD as a cache drive, but aren't the reads and writes on the HDDs too slow for a fast server? I have really mixed feelings because of my slow Synology ...
  3. What is the deal with the HBA card? I mean, Unraid already has its parity for redundancy, so what's the point of an additional RAID configuration? Is it for more speed? RAID 0 or something? How many cards do you need for, let's say, 8 HDDs?

 

 

Link to comment
4 hours ago, Mor9oth said:

Thank you so much for the build list and all this detailed information! It seems this will be a very powerful server! 😀 Looking forward to seeing the finished server and how it performs. 😉

 

Because I'm an absolute noob, I would like to ask a few more things:

  1. What hard drives will you use for that server? Any recommendations so far? I would prefer less noisy hard drives. "SSD only" doesn't work with Unraid, right?
  2. Is Unraid slow? At the moment I'm using a Synology (DS415play, dual core and only 1 GB RAM) with WD Red hard drives, and it is super, super slow ... I'm a little afraid that Unraid, also based on hard drives, could be slow too. Of course I will use the M.2 SSD as a cache drive, but aren't the reads and writes on the HDDs too slow for a fast server? I have really mixed feelings because of my slow Synology ...
  3. What is the deal with the HBA card? I mean, Unraid already has its parity for redundancy, so what's the point of an additional RAID configuration? Is it for more speed? RAID 0 or something? How many cards do you need for, let's say, 8 HDDs?

 

 

Yeah, this build is way overkill for what I want to do right now, but I’m hoping not to have to do any major updates to it for the next 4-5 years.

I’ll do the best I can answering your questions. 

1. Right now I have an HP Z220 CMT (i7-3770, 16GB RAM) that I have modified with two hard drive cages, with the whole setup having 6 x 3.5” HDDs (array) and 2 x 2.5” SSDs (RAID 1 cache). The 6 HDDs are a mix of 8TB and 10TB WD White drives that I shucked from a variety of WD Easystores, MyBooks, and Elements. I also have 4 x 8TB WD White drives that I bought super cheap but don’t have room for in the case at the moment, so that’s a big reason I’m really excited to do my build soon.

You are correct that you can’t do an array of SSDs. I know this is a request that some have made, and we’ll probably see it sometime in the future.

2. How do you mean slow? If you’re talking about the OS itself, then no, it’s really fast, because once booted the OS lives in RAM. It doesn’t matter whether you have a cache or not; browsing through the OS is a breeze. I had a Synology DS218+ without a cache drive so I know how slow DSM can move. If you’re talking about accessing drives and uploading data to the drives, it depends. UnRAID doesn’t stripe data across drives (like RAID does), so reading data from a drive is limited to how fast that single drive is. All of my drives are 5400rpm and are still really quick, with speeds greater than 120 MB/s. Using an SMB share to upload data to the drives can be limiting though if you don’t set up a cache for your shares. I see about 50 MB/s without a cache for a particular share. Using a cache for your shares really speeds up the data transfer. I also have all my appdata sitting on the cache (takes up about 35GB), which makes all my Docker containers run very fast and smooth. I’ll probably update to an NVME cache in the future for SABnzbd and downloading when I finally get fiber, but for right now normal SSDs are more than adequate.

3. A HBA card like a LSI 9211-8i will let you expand the total number of SATA data ports. With two breakout cables you can turn the two SAS ports on the 9211-8i into 8 x SATA ports. The HBA card will need to be flashed to IT Mode first though to ensure that the drives are passed through untouched (no RAID) to UnRAID. I didn’t want to do the flashing myself (partly out of fear of messing it up and partly out of convenience). I instead bought a used pre-flashed one on eBay from the Art of the Server. He charges more than it would cost for an unflashed one but he does a perfect job flashing the cards and provides support if you need it. This is the one I bought: https://www.ebay.com/itm/Genuine-LSI-6Gbps-SAS-HBA-LSI-9211-8i-9201-8i-P20-IT-Mode-ZFS-FreeNAS-unRAID/163846248833?hash=item2625ff5981:g:shEAAOSwPKZdbst~
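If you do want to check a card yourself (whether pre-flashed or self-flashed), Broadcom's sas2flash utility will report the running firmware; this is a sketch assuming the tool is installed on your system:

```shell
# List every LSI SAS2 controller and its firmware.
# A 9211-8i in IT mode reports an "IT" firmware product id
# (e.g. P20 IT); "IR" means RAID firmware is still on it.
sas2flash -listall
```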

Two notes on this though. You don’t want to hook up your SSDs to a HBA card (anything older than a 9300-8i) since it won’t support TRIM. Scheduling TRIM on your SSD will help prolong its life, so it's best to connect them straight to your motherboard. Secondly, I’m not going to use breakout cables for my build since the Supermicro chassis I’m going to use has a SAS expander built into the backplane and will take a direct link from the HBA card to expand out to 16 ports. Neither of these notes is really important to most people, and probably not to you either, but I thought I’d throw it out there.
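On the TRIM scheduling point: in Unraid this is typically done with the Dynamix SSD TRIM plugin, but under the hood it's just the standard fstrim command; a sketch (the /mnt/cache path is Unraid's default cache mount, so adjust for your setup):

```shell
# Trim every mounted filesystem that supports it
fstrim --all --verbose

# Or just the cache pool (this is what a weekly scheduled job would run)
fstrim -v /mnt/cache
```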

Link to comment

Sir, you are awesome! Again, thank you so much. Still a noob, but your info is very helpful! Shucking seems to be super awesome! 😀

Do I have to pay attention to anything before buying? Are the bigger drives louder? On Reddit someone said the 14 TB version of the WD My Book is very loud. However, I would tend toward larger drives.

 

 

Edited by Mor9oth
Link to comment
6 hours ago, Mor9oth said:

Sir, you are awesome! Again, thank you so much. Still a noob, but your info is very helpful! Shucking seems to be super awesome! 😀

Do I have to pay attention to anything before buying? Are the bigger drives louder? On Reddit someone said the 14 TB version of the WD My Book is very loud. However, I would tend toward larger drives.

I can’t speak to the loudness of the 12TB and 14TB drives, but I find the 8TB and 10TB very acceptable noise wise. I have my server down in the finished basement and even when sitting next to it during a parity check, all I can really hear is the fans of the server. Since that person in the reddit thread didn’t post any decibel readings, his idea of loud is just his perception and it’s impossible to discern what he really means.

 

EDIT: Also, be aware that practically all the white drives use the new SATA power standard, which repurposes the 3.3v pin. If you shuck these drives, attach them to your SATA power and data cables, and power on your computer, you might find they aren't showing up. Under the newer standard, power on that pin acts as a disable signal, so if your PSU supplies 3.3v there, the HDD essentially won't power on properly. The easiest ways to handle this are to tape over the 3.3v pin (keeping the PSU from sending power to it), snip/remove the wire that goes to the 3.3v pin (I don't generally recommend this), or use a molex to SATA adapter, since molex doesn't carry 3.3v (be extremely careful to only buy crimped ones of these, not molded!). None of this matters though if you have a backplane in your case that is molex powered (like practically all Supermicro servers). You simply plug the hard drive in and it works normally.

Edited by ramblinreck47
Link to comment

Yes, that makes sense. 😀 I also can't imagine why they would make these drives too loud. From a consumer's point of view they have to be quite silent, because they will usually stand on a desk.

 

The new 3.3v pin standard could indeed be a problem, but I guess it's an acceptable and manageable thing for such cheap drives. Taping would work for me, but it gives me mixed feelings because I'm afraid the tape could cause a fire or do some other damage in the server. The molex to SATA adapter wouldn't work because I will have a backplane in my case.

What is a molex-powered backplane? I know the old molex cables, but I guess you don't mean those. I will buy the CS381 case, which uses backplanes for the hot-swappable drive bays. The manual says they are powered with two 6-pin cables (one per bay) that go via 8-pin to the power supply. The manual also says the backplane has circuitry to convert +12V power down to +5V (screenshot attached). Whatever that means ...
Is that what you mean by molex powered? That would be great, because then I could also just plug and play the drives 😌

power.png

Link to comment
19 hours ago, ramblinreck47 said:

3. A HBA card like a LSI 9211-8i will let you expand the total amount of SATA data ports. With two breakout cables you can turn the two SAS ports on the 9211-8i into 8 x SATA ports. The HBA card will need to be flashed to IT Mode first though to ensure that the drives are passed through untouched (no-RAID) to UnRAID. I didn’t want to do the flashing myself (partly out of fear of messing it up and partly out of convenience). I instead bought a used pre-flashed one on eBAY from the Art of the Server. He charges more than it would cost for an unflashed one but he does a perfect job flashing the cards and provides support if you need it. This is the one I bought: https://www.ebay.com/itm/Genuine-LSI-6Gbps-SAS-HBA-LSI-9211-8i-9201-8i-P20-IT-Mode-ZFS-FreeNAS-unRAID/163846248833?hash=item2625ff5981:g:shEAAOSwPKZdbst~

Got it! Also very helpful for me, because the motherboard of my choice (E3C246D4U2-2L2T) has only 7 SATA ports. Is it possible to connect the HBA card (LSI 9211-8i) directly to the backplane of my case (it has a mini-SAS SFF-8643 36-pin connector) instead of directly to the drives (SATA)? That would be great and would save all onboard SATA ports for SSDs or something else.

Cable could be something like this: Mini SAS SFF-8643 to Mini SAS 36Pin SFF-8087

Link to comment

@Mor9oth to respond to some of your last couple of posts:

  • I've never worked with a backplane like that before. As long as it doesn't carry power to the 3.3v pin, you shouldn't need tape (emphasis on should, because I've seen some manufacturers like Norco somehow provide power to the 3.3v pin even with a molex connection). If it does, a little bit of Kapton tape on the 3.3v pin of the hard drive and you should be good to go.
  • Looking at your backplane, the cable in that link you provided should work.
  • I'm luckily in the USA so these WD external hard drives are getting extremely cheap. I've seen those 14TB at $199.99 here but I'm probably going to wait to upgrade until I can afford 2 of them to use in a dual parity setup. 
Link to comment

Oh, lucky you for living in the USA. 😀 I wish I could too ... I just bought 4 of them, because in Germany hard drives are way more expensive than this deal. Even importing from the UK is way cheaper for me. Now that I've grabbed that great deal, I feel "terabyte addicted" and am thinking about buying more.

Link to comment
6 minutes ago, Mor9oth said:

What the hell is going on with Intel!? 😦 I have tried to buy an Intel Xeon E-2288G or E-2278G online, but can't find a single CPU worldwide. Are they out of production and removed from the market or something? Ryzen, on the other hand, is so easy to get ...

These CPUs exist but are scarce in the retail channel because large OEMs are gobbling up the supply from Intel. I have found a couple of vendors in the US that carry the E-2278G and E-2288G, but they only get small quantities (1-3 at a time) which are usually gone within a day. There is a local OEM vendor in the US willing to sell me an ASRock Rack E3C246D4U motherboard, an E-2288G CPU, 64GB of ECC Samsung RAM, and a 256GB NVMe SSD for $1099. That's actually about $150 less than these items at retail, and they have them all in stock.

Link to comment
