Tybio Posted October 18, 2018

I've been using unRAID for almost 10 years with only one small upgrade over 7 years ago. The server has been bulletproof and the MB/CPU have survived three transplants as I grew. This summer I started a ground-up rebuild to move from a 24-bay rack to a system that could reside in my office. This is a work in progress: I upgraded the case/drives/fans over the summer and am now assembling the parts to replace the MB/CPU/RAM. I'm not going to make this a build log, just going to edit this post as I settle on parts and get things running. The basic idea came from snuffy1pro's "Shogun" build.

OS at time of building: 6.6.0
CPU: Intel Xeon E-2176G
Cooler: Noctua NH-D15S
Heatsink Covers: Noctua NA-HC3 chromax.black heatsink covers
Replacement Cooler Fans: Noctua NF-A15 HS-PWM chromax.black.swap premium-grade quiet 140mm fan
Motherboard: Supermicro X11SCA-F-O
Thermal: graphite pad
RAM: 2x Supermicro 16GB 288-pin DDR4-2666 (ECC)
Case: Lian Li PC-D600WB
Drive Cage(s): 3x iStarUSA BPN-DE350SS-BLACK 3x5.25" to 5x3.5" SAS/SATA 6.0Gb/s trayless hot-swap cage
Note: I replaced the fans on the cages with Noctua NF-A8 PWM premium 80mm and ran them to the motherboard, along with the 3 Noctua fans on the back of the drive side of the case; all 6 fans are controlled by the auto-fan plugin, watching HD temps rather than system temps.
Power Supply: Seasonic SSR-650TD 650W 80 Plus Titanium ATX12V with active PFC
SATA Expansion Card(s): LSI SAS 9207-8i
Cables: Cable Matters internal mini-SAS to SATA (SFF-8087 to SATA forward breakout) (2x 3.3', 2x 1.6')
Add-on Network Card: ASUS XG-C100C 10G network adapter, PCI-E x4
Fans: 2x Corsair ML140 Pro LED, blue (1x ML120); 3x Noctua NF-A14 PWM chromax.black.swap
Parity Drive: 12TB Seagate IronWolf
Data Drives: 5x 10TB Seagate IronWolf, 3x 8TB Seagate IronWolf
Cache Drive: Samsung 860 EVO 2TB
VM Drive: Samsung 970 EVO 250GB
Total Drive Capacity: 74TB (67% used)
Primary Use: media storage and streaming (Kodi local, Plex remote)
Likes: love the look; very quiet even under load.
Dislikes: Seagate drives in these trayless bays can "tick" a bit, which is sort of annoying in an otherwise silent system.
Add-ons Used: Radarr, Sonarr, NZBGet, Organizr, DuckDNS and tons of plugins... I'm a nerd, what can I say.
Future Plans: none at the moment
Boot (peak): TBD
Idle (avg): 70W
Active (avg): ~90W
Light use (avg): ~80W
Disk Rebuild: 105W

The highest usage I've seen was the 105W during a data rebuild; even booting my Windows and Linux VMs doesn't push it above the mid 80s. More information to follow when I can order the CPU and swap the core of the system.

Current view from the front (will replace later with full pictures of the finished project), plus pictures of the system side, the system side with chromax fans and heatsink covers, and the drive side.
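The auto-fan idea above (fan speed driven by the hottest drive temperature rather than CPU/system temps) can be sketched roughly like this. To be clear, this is an illustrative sketch, not the plugin's actual code: the function name, thresholds, and the smartctl/sysfs lines in the comments are all assumptions.

```shell
#!/bin/sh
# Illustrative sketch (NOT the auto-fan plugin's actual logic) of mapping
# the hottest drive temperature to a PWM duty cycle. Thresholds are made up.
pwm_for_temp() {
    t="$1"
    if [ "$t" -lt 35 ]; then
        echo 30      # drives cool: run fans slow and quiet
    elif [ "$t" -lt 45 ]; then
        echo 60      # drives warming up: medium speed
    else
        echo 100     # drives hot: full speed
    fi
}

# In a real script the temperature would come from the drives, e.g.:
#   smartctl -A /dev/sdb | awk '/Temperature_Celsius/ {print $10}'
# and the result would be written to a PWM control file, e.g.:
#   echo "$duty" > /sys/class/hwmon/hwmon2/pwm1
pwm_for_temp 38   # prints 60
```

The point of keying on drive temps instead of system temps is that in a case like this the drives and the motherboard sit in separate chambers, so CPU load says nothing about how hot the array is running.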
Tybio Posted October 18, 2018

Reserved for add-on topics:

1> Fan replacement: I used some weather stripping to cover the gap, as the custom fan that came with the cage meant the original cover could not be re-used. Here are some pictures of the process.
bphillips330 Posted October 21, 2018

Nice build! I am just trying to put together a new build for myself. Sounds like we run about the same stuff. I have not built a system in a couple of years, and I'm really torn on processors and motherboards: Xeon, i7/i9, or a 16-core Threadripper. Mostly Plex, but I also do a lot of photography and would like my Win10 VM to be a full-fledged computer: lots of cores and RAM, with a video card passed through for rendering projects. Budget is 1000-1500 for motherboard, RAM, case (to hold 6 or so 3.5" drives), 3 SSDs, power supply, and CPU cooler (never done liquid, but I might get an all-in-one liquid cooler vs. the Noctua dual-heatsink thing). Ugh, so many options... ha.
Tybio Posted October 21, 2018

Yea, check out the thread in the MB/CPU forums on the E-2186G... I've been having some very similar re-thinks, but I finally realized that I have a desktop I'm not getting rid of, so I should focus on server tasks for my unRAID box: Dockers and Plex transcoding, thus the E-21xxG family. If I was doing /anything/ else (other than transcoding) I'd have gone with a 2700X or Threadripper.
Jcloud Posted October 22, 2018

Requesting a picture of the internals, motherboard and such, for tech porn -- I like your case and its style, and would love to see the internal layout when you post all of your pictures.
Tybio Posted October 22, 2018

To tide you over until the rebuild, here is what it looks like today on the MB side:
bphillips330 Posted October 22, 2018

21 hours ago, Tybio said: "Yea, check out the thread in the MB/CPU forums on the E-2186G... So Dockers and Plex Transcoding, thus the E-21xxG family."

That is my dilemma. My current server will become my desktop. It is roughly 4 years old: an Intel 4790K with 24GB of RAM. It will work as a desktop, and Threadripper might be overkill, but it would be nice to have a nice desktop buried in the server: a Win10 VM or whatever to house my Photoshop and Lightroom tasks. I am curious how utilizing that VM remotely would affect performance?
Tybio Posted October 22, 2018

Personally, I wouldn't do remote for a desktop replacement. Generally you want a video card passed through to the VM. I've never tried photo editing over RDP/VNC, but I'd worry about the color depth and accuracy.
greg_gorrell Posted October 22, 2018

9 minutes ago, Tybio said: "Personally, I wouldn't do remote for a desktop replacement..."

Yeah, you definitely want a standalone PC for that.
bphillips330 Posted October 22, 2018

So, save the money and build a decent server for server stuff, then build a new desktop also. Hmm.
Tybio Posted November 6, 2018

Small update: all the parts are in with the exception of the CPU; just waiting for Intel to actually ship the darn things. Right now the ship date from the vendor is tomorrow, but I'm not putting a lot of stock in that :).
Tybio Posted December 22, 2018

The parts are in and the system is rebuilt. I had two issues:

1> I had to force the iGPU to be used in the BIOS. In Auto mode, even without another GPU in the system, it would not bring up video on the on-board ports.
2> I had an LSI card in the first PCI-E slot (closest to the CPU) and was unable to mount any drives, though the card ROM loaded fine and the drives were detected by the LSI card. I moved it to the second PCI-E slot and it is working fine ATM.

Going to update the main post now.
Tybio Posted December 29, 2018

I added some chromax parts from Noctua to the cooler and added a 250GB NVMe drive; updating the main post with a new picture.
casperse Posted January 4, 2019

New to unRAID, and I am building with the same CPU as you but with another MB/RAM setup (need more SATA ports, and I'm all out of PCI-e slots, LOL). I have some questions I hope you can help me with :-)

1> Any reason you selected 1x 2TB cache drive instead of having 2x 1TB for extra security on your cache?
2> Is your VM drive (M.2) set up as a UAD (unassigned device)? I guess that would provide bare-metal speed for your VMs, right?
3> You did a screen capture of Plex transcoding in your other post; what installation of Plex are you using on unRAID? (Linuxserver Docker?)
4> Also, have you placed your Plex metadata on the cache drive or on an SSD mounted as a UAD? (Is there a best recommended practice for this?) I was wondering if it would be better to use the M.2 as a UAD for VMs and also metadata, and maybe even for temporary Plex transcoding. I also "only" have 32GB RAM, and I don't think that would be enough to use a RAM drive as a tmp for transcoding? (But it would prolong my SSD life!)
5> Do you know if M.2 lifetime is also impacted by high usage, like tmp transcoding usage?
6> For shares between different storage servers (I have a 24-drive Synology), is it best to use SMB shares or NFS with unRAID?

Thanks for a very informative thread and finally a happy ending on your very hard journey building your new system. I hope I get the right CPU; still waiting for it...
Tybio Posted January 4, 2019

Let me see if I can answer! Also, what board did you go with? My investigation concluded that there was only so much they could do with the C246 chipset and the PCI-E lanes from the processor; I'd love to see an option with more SATA/PCI-E.

1> I'm going to upgrade to 2x 2TB at some point; I'm doing a lot of UHD downloads at the moment and 1TB was getting over 50% used at times. I'd rather my cache drive never run into size contention than worry about data security, as everything on the cache can be re-downloaded/restored.
2> My VMs are on the M.2 and backed up to the array, so they are also protected. Yes, I use it unassigned, as I have multiple VMs on it. If I were to set up a desktop-replacement VM I'd pass it through to get a little more isolation, but it wouldn't be a function of speed... they are stupid fast to start with.
3> The Linuxserver docker, with just the standard --device options and the go file to discover the iGPU. Totally generic configuration.
4> Metadata is on cache, mostly so the CA Backup script can capture it without having to do anything special. Plex transcoding isn't ever disk-limited on an SSD, so I've left it on the cache; the only reason to move it is to prolong the life of the SSD, as far as I can tell. You don't need much space for the transcoding location; I've not looked into it lately, but IIRC the only thing in there is the 1-minute "look ahead" when throttled. That said, I haven't bothered to go that far, so I'm not an expert :).
5> I assume it is, as M.2 is just flash in a different form factor, so it will suffer from the same limitations. I'm in the "buy a cheap SSD for cache and replace it when needed" camp, but I go high quality on my M.2 as that is where the VMs are, and in the future it might be a desktop replacement. The unRAID cache is never going to be really speed-stressed, so even a degraded SATA SSD is going to outperform the rest of the setup for a good while. When it doesn't... I'll just buy the cheapest option to replace it. That said, I've had SSDs for 2+ years and never noticed degradation, even with both NZBGet post-processing and Plex transcoding going on.
6> I use SMB, as my clients all support it out of the box. I'm mostly using Nvidia Shields as my STBs, and it is just easy. With 10G networks coming about, IMHO it is better to plan for that path than to nit-pick about the protocol used; the overhead is more and more meaningless. So I just went with the one that was most likely to be supported, and turned the others off to prevent spamming my network with useless shares.

These are all just my opinions, mind you, so take them with a grain of salt!
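For reference, the "standard --device options and go file" setup mentioned above usually looks something like the common community recipe below. This is a hedged sketch, not Tybio's exact config: the volume paths and hwmon-style details are illustrative, though the linuxserver/plex image, the /boot/config/go file, and /dev/dri passthrough are the usual pieces of iGPU transcoding on unRAID.

```shell
# /boot/config/go -- load the Intel iGPU driver at boot so /dev/dri exists
# (common recipe for Quick Sync transcoding on unRAID; verify for your board)
modprobe i915
chmod -R 777 /dev/dri

# Docker run fragment: hand the iGPU device node to the Plex container.
# Paths under /mnt/user/... are examples; adjust to your shares.
docker run -d \
  --name=plex \
  --device=/dev/dri:/dev/dri \
  -e PUID=99 -e PGID=100 \
  -v /mnt/user/appdata/plex:/config \
  -v /mnt/user/Media:/media \
  linuxserver/plex
```

With the device passed through, enabling "Use hardware acceleration when available" in Plex's transcoder settings (a Plex Pass feature) lets it use Quick Sync instead of the CPU.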
casperse Posted January 5, 2019

Thanks for sharing your personal opinions 😊 I ended up with the Gigabyte C246-WU4. I needed full-size PCIe slots and 10 SATA ports, and I need 4 full slots at x4 and x8 speeds, since I have a 24+2-drive case that I would like to use 100%. Yes, it all comes down to having enough PCI-e slots... (-;

Using the first M.2 slot with PCIe has no impact, but using the second M.2 slot would mean losing the last PCIe socket. On the positive side, I have the option of one more cache drive as an M.2 SSD :-) I needed x8, x4, x8, x4 for two controller cards, an Ethernet card, and a graphics card. If I use the 8 SATA ports on the board for the rack, I could save one of the LSI cards... So this is actually the only motherboard I could find that supports what I would like to do.

UPDATE: It's very hard to get this motherboard; still waiting for it...
Tybio Posted January 5, 2019

Odd, I'm not sure how they are doing that. I assume that the x4 ports are coming from the chipset and not the CPU, but I'm still not sure how they get 2 more SATA ports and 1 more x4 port out of it.
hawihoney Posted January 6, 2019

Just a thought: if an M.2 slot is shared with a SATA port, does that mean that the M.2 is limited in speed? AFAIK M.2 NVMe devices can offer up to ~3GB/s, while SATA3 is limited to ~6Gb/s. When building a cache RAID1, this would reduce cache writing speed to SATA3 capabilities. IMHO, using an M.2 device doesn't make sense then.
casperse Posted January 6, 2019

@hawihoney I think you are comparing the wrong M.2 drives :-) The loss of a SATA port only happens with an M.2 SATA SSD, and the speed there is the same as a normal SSD. An M.2 PCIe NVMe (the fast one) does not have any impact on my SATA connections. If I select the first M.2 slot for a fast M.2 PCIe NVMe, and the other M.2 slot for a "slow" M.2 SATA SSD, then there will be no impact on existing SATA connections or my PCIEX4_2 slot, and I get one more SSD drive plus one fast M.2 PCIe.
Tybio Posted January 8, 2019

On the SM board used in this build, there is an impact. The SATA controller can only support 8 drives, so if you put a SATA M.2 in, then you lose a MB SATA port (#3, I believe). It isn't about speed in this case; it is about the chipset SATA controller. The Gigabyte has 8x SATA on board and 2x "extra" SATA ports that use an additional SATA controller. That is why it has 10 ports (and you can see the 2 additional ports are different and not in the MB-supported block; they are gold/yellow, not black). Likely what they did is hang a 4x SATA controller off one lane from the chipset; that means they can add the two extra ports on the MB and use the other two SATA connections for M.2 SATA SSDs without impacting the onboard SATA ports at all. A really nice design!
casperse Posted February 11, 2019

@Tybio So what temperature sensor did you end up with on your MB? Did you get it working? So far I can't get temperature readings working in unRAID (Dynamix Temp). If you have it working, and it's somehow tied to the chipset (which is the same), then I would love to hear what you did.

My build is otherwise performing as I hoped. I also got the Plex iGPU working, but would like to try out my new P2000 just to see if I can see a difference. But I don't want to mess with the working Plex docker; do you know if I can just copy it and make a clone docker? Anyway, I hope your server is also performing like you expected.
jrd680 Posted February 26, 2019

How's your server coming along? Any storage upgrades lately?
Tybio Posted April 30, 2019

On 2/26/2019 at 10:46 AM, jrd680 said: "How's your server coming along? Any storage upgrades lately?"

It's been a while since I've been around, but it is humming along nicely. The only issue I've had is the iStar cages rattling (the front doors don't latch as snugly as needed); to fix that, I've used some electrical tape inside the unused bays. I'm nearing 100TB of capacity in the server now, but still have a lot of room to grow. I've had to RMA 2 drives... well, one drive, and then the RMA replacement for that one had to be replaced as well. Parity rebuilds were seamless, temps never got that high, and I was able to finish a 12TB parity rebuild in just (barely) under a day. I'm currently debating a Quadro in a VM for transcoding so I can finally get rid of having two libraries, but I'm holding off until late fall when I see what Intel and AMD are bringing to the table this year.