ramblinreck47

Everything posted by ramblinreck47

  1. I haven't been able to get them working together at the same time. I have onboard graphics enabled and am doing a Legacy boot. As soon as the i915 module loads, the iKVM cuts out. Adding i915.disable_display=1 doesn't help much; the screen freezes instead of going away when i915 loads, but that's about it. The BIOS for the X11SCH-F is considerably different from the X11SCA-F, and the options are very different. I'm sure there is a setting that I'm not seeing, but I don't know what it is. If I can't figure it out sometime soon, I'll just make a script to load the iGPU when the array starts so the iKVM works all the way through the boot process. I think that might work best and would be perfectly acceptable for me, since that is the only time I'll really want the iKVM working. At the very worst, I could just manually load the iGPU whenever I reset the server after it's already booted and the web GUI is back up. Overall, it's not as straightforward as I would have liked, but at least it's not that big of a deal. The boot process gets nearly 80% of the way through before the iKVM cuts out.
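For anyone wanting to try the same workaround, the deferred-load idea could look something like this as a minimal sketch using the CA User Scripts plugin (scheduled "At Startup of Array"); it assumes i915 has been kept from loading at boot (e.g. blacklisted), and the chmod step is only needed if Docker containers will use the iGPU:

```shell
#!/bin/bash
# Hypothetical User Scripts entry, schedule: "At Startup of Array".
# i915 is assumed to be blacklisted so it doesn't load during boot
# (keeping the iKVM alive); we load it here once the array is up.
if ! lsmod | grep -q '^i915'; then
    modprobe i915
fi
# Optional: let Docker containers (e.g. Plex) access the iGPU render nodes.
[ -d /dev/dri ] && chmod -R 777 /dev/dri
```

This trades iKVM access after array start for iGPU transcoding, which matches the use case above since the iKVM mostly matters during boot.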
  2. Sounds like it could be any number of things. Have you checked to see if your BIOS is up to date? Have you tried running with only 1 stick of RAM? Have you set your motherboard to Typical Idle Current in the BIOS?
  3. How much time do you think it takes? Also, are you including researching what you want to buy into that? The research should take the longest time and then the actual build shouldn’t take more than 2-3 hours.
  4. Look at it this way: a LSI 9211-8i (or SAS2008 variant) takes up a PCIE 2.0 x8 lane with a total bandwidth of 4000 MB/s. If you were to use an expander to split those two SAS port links over 24 drives, you would get roughly 140 MB/s (built-in overhead keeps it from being higher) from each drive if they are all running at the same time. That's not bad, but a 5400rpm HDD should be running close to 185 MB/s at full speed. You're limited. If, however, you replace that 9211-8i with a 9300-8i/9207-8i (SAS3008/SAS2308 variant), it takes up a PCIE 3.0 x8 lane with a total bandwidth of 4800 MB/s. Expanding out to 24 drives, that would be approximately 185 MB/s each. The reason I suggested the 9207-8i (SAS2308) equivalent is that it uses the same SAS connectors that the cables on your motherboard will already plug into, has higher speed than SAS2008, and is half the price of SAS3008 (still the same PCIE 3.0 x8). If you were to go to SAS3008, you would need to spend considerably more money and would need 2 cables like these (https://store.supermicro.com/cable/supermicro-minisas-to-minisas-hd-50cm-cable-cbl-sast-0508-02.html). The only benefit you would be getting is support for TRIM (not needed if you don't plan on connecting an SSD to the card) and a slightly newer chipset. That's all you get for double the price.
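As a quick sanity check on those per-drive numbers (the efficiency factors here are assumed fudge factors for protocol overhead, not measured values):

```shell
#!/bin/bash
# Rough per-drive throughput when all drives behind one HBA run at once.
# usage: per_drive LINK_MBPS NUM_DRIVES EFFICIENCY
per_drive() {
    awk -v link="$1" -v n="$2" -v eff="$3" 'BEGIN { printf "%.0f", link * eff / n }'
}

# 9211-8i: PCIe 2.0 x8 ~= 4000 MB/s raw, assumed ~85% usable -> ~140 MB/s/drive
echo "SAS2008: $(per_drive 4000 24 0.85) MB/s per drive"
# 9207-8i: ~4800 MB/s on PCIe 3.0 x8, assumed ~93% after overhead -> ~185 MB/s/drive
echo "SAS2308: $(per_drive 4800 24 0.93) MB/s per drive"
```

Either way, the takeaway is the same: 24 spinners saturate a PCIe 2.0 x8 HBA but roughly match a PCIe 3.0 x8 one.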
  5. The Supermicro 846 backplanes essentially come in 3 varieties:
- "TQ" - 24 x direct SATA3 connections. You need 24 SATA ports between your motherboard and HBA's to connect all 24 drives. It gives you flexibility to add as you go, and if you can find them, they are really cheap. The issue is all the cables needed to make it happen, plus multiple HBA's or an expander.
- "A" - 6 x SAS port connections. You need 6 SAS ports from either your motherboard or HBA's. If you have 3 x 2-port HBA's, you'll get a lot of bandwidth, and 6 total cables isn't much to deal with. The issue is the number of PCIE lanes you're going to need to have available, unless you're willing to spend some money on a 9201-16i (4 ports) + a 9211-8i (2 ports), a 9305-24i (6 ports), or an expander. These backplanes are fairly cheap and probably very similar to how your Norco was set up.
- "SAS(X)-846E1" - Built-in expander; only needs 2 SAS ports to get data from all 24 drives. SAS2 will give the best balance of speed and cost, with SAS1 being rather limited and SAS3 costing way, way more. You keep cables to a minimum (only 2), and you're really only limited by the speed/bandwidth of the HBA you choose and the PCIE lane it's on.
I think the SAS2-846E1 is a great model to go with because it keeps the number of PCIE lanes taken up to a minimum on your motherboard and gives you a lot of flexibility to move up to a better HBA if you need more speed in the future.
  6. You picked a really good server case. I have a Supermicro 835 and 836, and I love them. They're built like tanks and they run great. To address a couple of your concerns:
- The barebones 846's are really hard to find right now, so the fact that you were able to get one without having to purchase a complete system is a really good deal (unless you had to pay a lot). I put my name on the waiting list to buy one eventually, but I'm not in any hurry to replace my 836 while I still have some drive bays open.
- That SAS2 backplane has an expander built in and is the preferred one. You can use just 1 HBA with both ports connected to the backplane and get full use out of that PCIE lane.
- Unless you have a really hefty budget, a bunch of really fast drives, and want the fastest possible setup, I'd stay away from buying a SAS3 backplane. They're very expensive, and if you're not ready to spend serious money, it's not worth it.
- For the HBA, you shouldn't go with the LSI 9266-8i that comes with it. It's a SAS2208 chipset on an actual RAID card, and it's harder to flash than a SAS2008 or SAS2308 HBA.
- You found a good store to buy an HBA from. I have a 9211-8i that I bought from Art of the Server and it's been great. For your system, though, I'd recommend you get a card with a SAS2308 or SAS3008 chipset. You're going to need PCIE 3.0 x8 bandwidth to get the most speed out of your drives, especially when doing a parity check. You'll finish faster. This is a good, inexpensive one that should work with the two SAS cables that will come with your 846: https://www.ebay.com/itm/Lenovo-03X4446-9217-8i-6Gbps-SAS-PCI-E-3-0-HBA-P20-IT-Mode-ZFS-FreeNAS-unRAID/164248205524?hash=item263df4b8d4:g:J38AAOSw8Sde6XeW
- If you're going to go with a non-Supermicro motherboard, you'll need a special cable so you can use the front panel connector: https://store.supermicro.com/supermicro-15cm-16-pin-front-control-panel-split-extension-cable-cbl-0084l.html
- If you ever want to quiet the chassis down, there is a non-destructive way of doing it that'll cost a little over $100. Replace the 3 fans on the fan wall with FAN-0074L4's (trim the webbing on the side of the fan with some pliers; it pops right off) and replace the back 2 fans with FAN-0104L4's. They'll push lots of air without sounding like a jet engine.
  7. I’m starting to think you have a hardware issue somewhere.
  8. Do you have any scripts to run on Array start? Have you tried with a clean unRAID build on a different flash drive?
  9. Legacy or EFI boot? USB 2.0 or USB 3.0 port and flash drive for boot?
  10. What motherboard(s) and RAM? Also, is your motherboard on the most recent BIOS?
  11. It totally depends on what you are trying to do. Do you want to run multiple VM's, need lots of PCIE lanes, etc.?
  12. A few points:
- I think the Asus X570-Pro is a good pick. It's actually the one I'm looking at possibly getting if I make the switch to AMD.
- M.2 heatsinks are only like $10.
- Do you want ECC or do you not care? If you want ECC, there are Kingston 2666mhz ECC modules that are easy to find and relatively inexpensive. 3200mhz modules are just now starting to become available, though, and those would be better (the max speed without having to overclock). If you don't care about ECC, any 3200mhz kit that's on the QVL would be good and cheap if you do some searching and waiting.
- I'm not sure if you'll need a GPU to boot with that motherboard; it'll be worth looking up. If you do need one, there's an Nvidia GT 710 (~$40) that fits in a PCIE 3.0 x1 slot, which you could later use for VM's after you get a proper GPU for transcoding.
  13. First off, it’s good you’re moving away from Norco to Supermicro. Norco is essentially dead and their IPC Store just doesn’t fulfill orders, so you’re at least moving to a better-made server where parts are plentiful in case it ever breaks. I have a Supermicro 835 and 836 and they are both built like tanks and perform flawlessly. Best cases I’ve ever owned.
You’re trying to build a server at a weird time. Yes, older Xeon v2’s are becoming less expensive and ECC DDR3 is plentiful and dirt cheap, but it does come with some caveats. These systems, especially dual CPU systems, are ridiculously power hungry compared to their modern counterparts. You’re also going to have to deal with more restricted PCIE lanes (mostly PCIE 2.0 with some PCIE 3.0) and ports (no M.2, lack of USB 3.0, etc.), which isn’t that bad right now but will become more noticeable in the near future. The Xeon v3’s and v4’s are better in this regard and are also coming down in price. I think they’re a significantly better value right now compared to the v1’s and v2’s. 2133mhz ECC DDR4 is falling in price right now, and it’s easy to find a lot of RAM for a reasonable price on eBay, Reddit, or ServeTheHome.
You have the right idea with the PSU’s. I too have those SQ PSU’s and they’re very quiet and very efficient. I don’t regret going with them at all.
The HBA is fine if you don’t plan on loading up the entire server with drives or don’t mind a significant drop in performance during parity checks. Since it’s a PCIE 2.0 x8 HBA, you’ll definitely feel its limitations when you need to run a parity check. It’ll take a good while to complete. You’d almost be better off selling it and buying a PCIE 3.0 x8 HBA like a LSI 9300-8i or HP H220. They’ll give you the bandwidth you need for just a few dollars more.
You could tear out the fan wall and replace the fans with 120mm fans. It’s been done by quite a few people with decent success. The thing is, though, you really don’t need to unless you want to make more work for yourself or enjoy modding cases. The easiest and least obtrusive option is to replace the 3 fans on the fan wall with FAN-0074L4’s and replace the 2 fans in the rear with FAN-0104L4’s. They’ll cost about the same amount as the fan wall mod. It’ll just require you to remove them from their holders and drop them into the ones in your SC846. They move nearly the same amount of air as your stock fans while making considerably less noise. No cutting metal; no permanent marks. I did this in my 836 and although it’s not quite silent, it is definitely not loud, and I have no problem sitting right beside it and working when it’s being pushed hard.
Honestly, if you have the option of starting from scratch and going with either older server hardware or new hardware, the best option is the new hardware. I’d get a barebones 846 and fill it out with all new hardware. A Ryzen 9 3900X costs as much as those 2 x E5-2695 v2’s and runs circles around them while consuming far less energy. You don’t get the same number of PCIE lanes (shouldn’t really matter unless you want to run a bunch of VM’s), but you do get speed, power efficiency, a tremendous upgrade path, and all sorts of ports that are becoming increasingly useful. Matched with a decent Nvidia GPU, you’ll have a transcoding and encoding monster for years to come. I recently built an E-2278G system and even I’m preparing to move to a Ryzen system whenever 6.9 becomes stable and established. I’m hoping Black Friday is going to bring some really nice Zen 2 deals this year!
  14. Have you tried booting in UEFI and/or moving your BIOS power setting to Typical Idle Current?
  15. I'm trying to get the PASTA plugin up and running, but I have no idea what to put in the "WebUI" section. There is zero info on GitHub and no instructions with the docker container for what needs to go there.
  16. Just had something go wrong on Community Apps while searching for music type plugins. Came here to post this because the message said to do so... "Something really wrong went on during display_content. Post the ENTIRE contents of this message in the Community Applications Support Thread" [full HTML source of the server's "Baymax" login page snipped]
  17. If you're going for PCIE lanes, then yeah, Threadripper, EPYC, and LGA2011-3 are probably your best bets. If you want higher single thread ratings, then Threadripper will get you that. If you want solid ECC support and the ability to go up to 64 cores, then EPYC would be great. If you want to save some money, get solid ECC support, and still have lots of cores, the Intel LGA2011-3 socket would be a good option.
  18. You should go with 3200MHz RAM instead of the 3600MHz. Your Ryzen 9 3900X will run great with 3200MHz and won't need to be overclocked. Your 3600MHz sticks will run at 3200MHz anyway unless you overclock them. It's generally not recommended to overclock RAM for Unraid because it can bring instability. It's up to you, though, whether it's worth the risk.
  19. Don't go with the X11SCL-F if you plan to use your iGPU for transcoding. The C242 chipset doesn't support QuickSync. If you want a Supermicro C246 motherboard (does support QuickSync), the X11SCA-F or X11SCH-F (or X11SCH-LN4F) are fantastic options. I've had no trouble with my X11SCH-F since I got it.
  20. How many hard drives do you currently have plugged into your onboard SATA and SAS ports?
  21. You only need one card for booting Unraid and Plex. No need for an additional GPU.
  22. Objective: With my main Plex server now complete (link in the Links section), I decided to upgrade my dinky Optiplex 7010 DT (i5-3570) to something a little more modern and rack mountable. I knew I wanted to go with something lower powered since this server would primarily be a small backup of my most important pictures and files and would be a backup Plex server for my kids in case I had to take the main one down for maintenance or upgrades. I had planned on just filling it with old, smaller hard drives that I could easily and cheaply replace when they failed. I wanted this to be a server that I could play around with a little more and experiment with in Unraid. The pressure to keep it up and operational was going to be far less.
What I came up with:
Case – Supermicro CSE-835TQ-R800B (Modified) (~$220, eBay) --> I had been checking out the 835’s in case I couldn’t find an 836 for my main Plex server. I came across this 835 when I was just messing around on eBay. It came with a Supermicro X8DAH+, 2 x CPU’s, 8 x 3.5” HDD’s, and a LSI 9750-8i for $220 shipped. It was too good to pass up. Sadly, most of the hard drives were small capacity (< 2TB) and starting to fail, but at least a couple were okay and everything else was easy to gut and sell off. I took the 2 x PWS-801-1R’s out and replaced them with 1 x PWS-501P-1R (very quiet) and a blank (CSE-PT0130L). Since this build would be a lower powered, backup build, the 500W platinum PSU would be more than enough. I then put my old Icy Dock MB994SP-4SB-1 in one of the 5.25” bays to provide 4 x 2.5” bays and added a Silverstone FPS01 to give me some easy-to-access USB 3.0 ports at the front of the case. For the time being, I decided to forgo upgrading the fans to the quiet setup I had on my 836 (3 x FAN-0074L4’s on the fan wall and 2 x FAN-0104L4’s on the back). Although the server is louder than my 836 build, it’s not overwhelmingly so, and all the fans sit at the lowest possible setting 100% of the time.
CPU – Intel Core i3-8100 (4 cores/4 threads) (~$95, eBay) --> Although the Intel G5400, G5500, and G5600 were all good options, the G5400 had a lesser iGPU, and the G5500 and G5600 were roughly the same price as the more powerful i3-8100. This processor is definitely far more powerful than I needed, but it still idles very low and can get through most tasks faster than less powerful CPU’s. It was under $100 and can transcode a whole lot, so I consider this a steal.
CPU Cooler – Arctic Alpine 12 Passive (~$20, Amazon) --> Even though this cooler is only rated for 45W, I figured that with a fan shroud and lots of airflow in my server, it wouldn’t matter. I was right. Even with the fan wall fans sitting at their lowest settings, the i3-8100 stays very cool and usually hovers between 25C and 30C. I can’t believe the Supermicro SC836 air shroud fits perfectly over the Alpine 12, but it does and man, is it cool.
Motherboard – Asus WS C246M Pro (~$70, eBay) --> Besides the server case, finding this motherboard really jumpstarted the whole build. This is probably one of my best finds ever from trolling eBay. It was an Open Box auction where the seller didn’t have a CPU to test it with. I’m guessing everyone else was scared to bid on it, so I won it at an insanely low amount. I figured if it failed, it would only be $70 out of pocket and would be worth the time to test it out. Amazingly, it booted right up and was perfect in every way. It might not be the best in terms of PCIE lanes or other bells and whistles, but it did have two strong qualities that I was looking for: it had the RAM arranged for server airflow (important to me because of the fan shroud), and it had the C246 chipset, where the iGPU could be passed through to a docker container and utilized in Plex. The lack of IPMI wasn’t a big deal.
Memory – 8GB Samsung M391A1G43EB1-CRC 2400mhz ECC UDIMM (~$60, eBay) --> This RAM was on the QVL for the motherboard and was at a decent price.
I memtest’d it when initially setting up the system and had no issues with it. Since this server is only going to run Plex in rare instances and spend most of its time in an idle state, I decided that 8GB was more than enough RAM.
HBA – LSI 9211-8i (IT Mode) (~$65, eBay) --> After buying and testing the HP H220 I used in the 836 build, I simply moved my previously purchased 9211-8i over to this build. I’ve used it for the last two years, and it has been nothing but reliable. I probably paid too much for it, but it came already flashed and had the latest firmware. I highly recommend checking out Art of the Server’s eBay store if you’re just starting out with HBA’s and want a genuine one that has everything you need to simply plug-n-play in your system. His HBA’s cost a little more, but he offers a lot of support, the cards have up-to-date firmware, and you’re never going to end up with a cheap, low quality Chinese knockoff.
Notes from using it for several months:
- I set up Unraid to run in UEFI instead of Legacy. It was very simple to do and something I hadn’t done in any of my previous builds. There weren’t any changes in the BIOS that I needed to make.
- I haven’t really needed to test out the i3-8100’s iGPU in Plex so far, but from the initial tests I ran, it looks like it could do quite a bit (it has the same iGPU as my E-2278G).
- With all the drives spun down, it doesn’t draw that much power, but I haven’t checked to see how much it is actually pulling. It’s connected to my UPS through the network, and the UPS isn’t providing the power draw numbers.
- Regardless of whether I upgrade my other server or not, I’m going to keep this one the way it is for a very long time. The iGPU and processing power are more than enough for my needs, and I see no reason to upgrade the CPU/motherboard for years to come.
- Possible future upgrades for this build: change out the fans like I did on the 836 build, add another Icy Dock cage to the remaining 5.25” bay, maybe add a 10GbE card. Let me know if you have any questions about this setup. I’m more than willing to share everything I know.
Links: Baymax Build: https://forums.unraid.net/topic/92596-baymax-build-–-my-ultimate-plex-server/
  23. Objective: I really wanted to replace my HP Z220 (i7-3770 and 16GB DDR3 Non-ECC). The system ran fine and was dependable, but I had maxed out all the hard drive bays, couldn’t get a HBA and P2000 to work at the same time, and wanted to move to a rack mounted solution. I had just moved to a new city with a significantly better upload speed (from 10Mbps to 50Mbps), and within a few years would be buying a house in an area with either Google or AT&T fiber (up to 1Gbps upload). To handle all this bandwidth, I was shooting for a system that could handle around 15+ 1080p 8Mbps transcodes while not being too power hungry. ECC wouldn’t be necessary since practically all the data was either movies or TV shows, but if I was going to spend all this money, it couldn’t hurt to go ahead with ECC. I had also just gotten a better paying job and paid off all my student loans, so I could afford to go a little overboard. I wanted this build to last me the next 5 years with few alterations needed.
What I came up with:
Case – Supermicro CSE-836BE16-R920B (Modified) (~$500, eBay & Supermicro store) --> I wanted the ability to use full height PCIE cards but didn’t want to commit to having 24 x 3.5” HDD bays largely going unused, so this fit in perfectly. I probably spent too much getting this case souped up, but it’s built like a tank, functions exceptionally well, is relatively quiet, and meets all my needs. The case itself cost a little less than $400 with the 2 x PWS-920P-SQ’s and rear 2 x 2.5” hotswap module included. I then spent about $100 more upgrading all the fans: 3 x FAN-0074L4’s on the fan wall and 2 x FAN-0104L4’s on the back. The 0104L4’s were easy to take out of their green fan holders and screw into the preexisting holders on the back of my case. The 0074L4’s took a little more work to cut the webbing on the side of the fans so that they would fit properly. Just a couple of quick snips with some wire cutters.
With the super quiet PSU’s and new fans installed, the whole case became very quiet.
CPU – Intel Xeon E-2278G (8 cores/16 threads) (~$500, Provantage) --> So, why after already buying a Nvidia P2000 (at the wonderful price of $270!) did I go with an Intel/Quicksync build?! Well, it came down to Intel’s stability with Unraid and the iGPU. Although Ryzen CPU’s are fantastic processors at a very affordable cost, they’re still not 100% compatible/stable with Unraid. Now, I know they work well in most cases, but there are still little quirks that pop up due to AGESA versions, Unraid versions, or RAM peculiarities that make them not fully fleshed out. Intel, on the other hand, has a very stable and mature chipset that works great with Unraid in almost all cases. I wanted to be able to plug-n-play with a setup and not have to worry about crashes or sensors not working or being supported. The iGPU can be easily passed through to a docker container without needing to run an unofficial version of Unraid, like what would be needed to use the P2000. The E-2278G has the processing power to stay relevant for years to come, official support for ECC (it’s not exactly known how well ECC and Ryzen work together), and a top of the line iGPU that can handle a crazy number of transcodes without drawing much power. I wanted the E-2288G but couldn’t find one when I finally had the money to purchase it. I figured the E-2278G, for me, would be just as good and carry some of the same long-term value that the E-2288G would. I also thought about going with an E5-2680 v3 or E5-2690 v3 and the P2000, but I just didn’t feel the need for more cores, more PCIE lanes, and lower single threaded performance. If the E-2278G becomes underpowered in the future, I’ll just sell the whole setup and start over with something new. I’ll probably switch to Ryzen somewhere down the line when that happens, as the hardware and software for it become more mature and refined.
CPU Cooler – Noctua NH-D9L (~$60, Amazon) --> I could have gone with a smaller cooler since the E-2278G only has a TDP of 80W, but I wanted something a little beefier in case I upgrade to a more powerful processor in a different build sometime in the future. It also fits perfectly in a 3U case like the one I have, and its fan is very quiet. The installation was a breeze; Noctua really knows how to make this easy.

Motherboard – Supermicro X11SCH-F (~$300, Amazon) --> Since I already had a Supermicro chassis and wanted to be able to use IPMI, this looked like a fantastic option. The X11SCA-F would also have been fine, but I wanted a separate LAN port for IPMI and didn't need the extra PCIe lanes. I could have also gone with the ASRock Rack E3C246D4U, but I've seen several bugs reported with that brand and motherboard and wasn't sure it would be the best option. The Supermicro IPMI has been very nice and reports everything very well. The fans are controlled with the IPMI plugin in Unraid, and I've had no issues with them after dialing in the right settings.

Memory – 2 x 16GB Supermicro MEM-DR416L-HL01-EU26 (Hynix HMA82GU7CJR8N-VK 2666MHz ECC UDIMM) (~$180, Newegg) --> This RAM was on the QVL for the motherboard and was at a decent price. I ran MEMTEST on it when initially setting up the system and had no issues with it. I didn't really need 32GB, but this gives me plenty of memory headroom for transcoding as well as any VMs I decide to run in the future.

HBA – HP H220 (IT Mode) (~$55, eBay) --> I had an LSI 9211-8i that I was already using in my HP Z220, but I figured I'd get a newer HBA that could use a PCIe 3.0 x8 slot to give more bandwidth to all 16 HDDs when I eventually fill the case up. I bought a "new" one on eBay from seller jiawen2018. The listing was filtered as being located in the US, but the payment definitely went to China. I was very hesitant to use this card, so I tested it pretty thoroughly before swapping out the 9211-8i.
After updating the firmware on the card, everything worked well, and I've been using it without issue for the last couple of weeks (it's even been through a couple of parity checks). It's one of those items that was really unnecessary since I only have 5400rpm drives.

Notes from using it for several months:

- Although the IPMI is nice, I'm finding that I don't really need it all that much. I should be traveling right now for work, but due to the virus, I'm largely stuck in the lab or at home. The need for it might change, but the server rack is very easy to get to in my house, and it's right beside my desk, which has a mouse, keyboard, and monitor. I could have easily bought a simple KVM switch and been just as well off. In the future, I might forgo buying a motherboard with IPMI; it'll save some money that I could use elsewhere.

- The iKVM also freezes whenever the i915 kernel module loads in Unraid. It's not all that serious, since if I'm going to have any issues upgrading or rebooting my server, the problem will likely emerge before the boot process even gets that far. There must be a setting I'm missing somewhere to get it to stop freezing. Everything else in the BMC is operational, though, and the IPMI plugin in Unraid is really nice for making adjustments and monitoring all IPMI-related activity.

- I don't regret getting the E-2278G, but I could have easily been happy with an E-2146G/E-2246G. It would have saved about $150, and I'd be getting roughly the same performance for what I do. The increased number of cores and threads will be useful if I ever start using VMs, but for right now, it's way overkill.

- The E-2278G hasn't been tested all that much, and the most transcodes I've had going at one time is 7 x 1080p 10GB-20GB files down to 720p 4Mbps.
It handled all of them without issue, and I don't think I've seen the overall CPU usage go above 10% for practically any reason (some single-threaded Docker containers might use more, but since this CPU is so powerful, it makes quick work of them). This thing is a monster.

- When starting up the server with everything going full force, it uses about 200W, but with the CPU and all 13 drives sitting at idle, it averages about 140W.

- For some reason, the X11SCH-F won't completely power down the system when I hit the Power Down button in Unraid. Unraid goes offline and by all accounts has a clean shutdown, but everything except the motherboard stays powered on. I have to physically hold the power button down on the chassis to get it to finish powering off. Even the BMC power-down button won't do anything. It's not a big issue, since I'll be doing all future upgrades at home, and if I have to restart the system after a power loss (once the UPS kicks back on), the Power On button in the BMC does work as intended. It's just weird.

- Depending on how Unraid 6.9 stable and future releases work with Ryzen, and if the P2000 goes on sale (it's been below $300 before), I might end up selling this rig and making the transition over to Team Red. Since I only pay about $0.07/kWh, the power consumption difference between the two wouldn't be that much, and I'd have a lot more options for upgrading in the future. It turns out that now that I've finished this build and it's entirely stable, I'm somewhat bored, and I like to tinker with things. We'll see how things play out, because this system is still robust and problem-free. If I did switch, I'd probably drop IPMI and get a regular X570 motherboard. I'm not sure I'd go with ECC memory, since its performance and reporting on Ryzen are suspect (link below). It depends on the availability of 3200MHz ECC UDIMMs (they're theoretically out there right now but impossible to get).
If they don’t become purchasable at a decent price, I’ll just roll with some regular 3200mhz non-ECC UDIMM’s, memtest them, and save some money. - Possible future upgrades for this build: bump up to another 32GB of RAM to give more bandwidth for transcoding and VM’s, replace my 2 x 500GB SSD’s with 2 x 500GB-1TB NVME 3.0 x4 drives, add in a 10Gbe card?... all these are completely unnecessary but that’s really what this build has always been about Let me know if you have any questions about this setup. I’m more than willing to share everything I know. Links: Discussion about X470D4U and ECC reporting on Ryzen boards: https://www.ixsystems.com/community/threads/freenas-build-with-10gbe-and-ryzen.77752/ How I setup my fans in the IPMI plugin: reserved