Tybio

Everything posted by Tybio

  1. Personally, I wouldn't do remote for a desktop replacement. Generally you want a video card passed through to the VM. I've never tried photo editing over RDP/VNC, but I'd worry about the color depth and accuracy.
  2. To tide you over until the rebuild, here is what it looks like today on the MB side:
  3. Yea, check out the thread in the MB/CPU forums on the E-2186G... I've been having some very similar re-thinks about things... but I finally realized that I have a desktop I'm not getting rid of, so I should focus on the server tasks for my unraid box. So Dockers and Plex transcoding, thus the E-21xxG family. If I was doing /anything/ else (other than transcoding) I'd have done a 2700X or TR.
  4. I use the R6 for my water-cooled gaming rig, and I can't tell you how amazing this case is. In gunmetal with the window it is stunning, and if you get the upgraded panel for the front you can have a USB-C port. The case is super simple to work with and has nice separation, with a power supply shroud and the area behind the motherboard extending the full length of the case... so you have a hidden area for cable runs all the way to the front fans. It is a true dream to build in, and with the right fans it is silent. The airflow is well thought out... directed but not constrained... so it isn't just "lots of fans", it is "lots of fans with options". For example, the top panel has a push button to remove... so you can put a rad in the top if you want, or close it off if you don't want top fans at all and not screw up your front-to-back flow.
  5. Reserved for add-on topics: 1> Fan replacement: I used some weather stripping to cover the gap, as the custom fan that came with it meant the cover could not be re-used. Here are some pictures of the process.
  6. I've been using unraid for almost 10 years with only one small upgrade over 7 years ago. The server has been bulletproof, and the MB/CPU have survived three transplants as I grew. This summer I started a ground-up rebuild to move from a 24-bay rack to a system that could reside in my office. This is a work in progress: I upgraded the case/drives/fans over the summer and am now assembling the parts to replace the MB/CPU/RAM. I'm not going to make this a build log, just going to edit it as I settle on parts and get things running. The basic idea came from snuffy1pro's "Shogun" build.
     OS at time of building: 6.6.0
     CPU: E-2176G
     Cooler: Noctua NH-D15S
     Heatsink Covers: Noctua NA-HC3 chromax.black.swap heatsink covers
     Replacement Cooler Fans: Noctua NF-A15 HS-PWM chromax.black.swap premium-grade quiet 140mm fan
     Motherboard: Supermicro X11SCA-F-O
     Thermal: graphite pad
     RAM: 2x Supermicro 16GB 288-pin DDR4 2666 (ECC)
     Case: Lian Li PC-D600WB
     Drive Cage(s): 3x iStarUSA BPN-DE350SS-BLACK 3x5.25" to 5x3.5" SAS/SATA 6.0 Gb/s trayless hot-swap cages
       Note: I replaced the fans on the cages with Noctua NF-A8 PWM premium 80mm and ran them to the motherboard along with the 3 Noctua fans on the back of the drive side of the case; all 6 fans are controlled by the auto-fan plugin watching HD temps rather than system temps (see the sketch after this post).
     Power Supply: Seasonic 650W 80 Plus Titanium ATX12V with Active PFC F3 (SSR-650TD)
     SATA Expansion Card(s): LSI SAS 9207-8i
     Cables: Cable Matters internal mini-SAS to SATA (SFF-8087 to SATA forward breakout) (2x 3.3', 2x 1.6')
     Add-on Network Card: ASUS XG-C100C 10G network adapter, PCI-E x4
     Fans: 2x Corsair ML140 Pro LED, blue (1x ML120); 3x Noctua NF-A14 PWM chromax.black.swap
     Parity Drive: 12TB Seagate IronWolf
     Data Drives: 5x 10TB Seagate IronWolf, 3x 8TB Seagate IronWolf
     Cache Drive: Samsung 860 EVO 2TB
     VM Drive: Samsung 970 EVO 250GB
     Total Drive Capacity: 74TB (67% used)
     Primary Use: media storage and streaming (Kodi local, Plex remote)
     Likes: love the look; very quiet even under load.
     Dislikes: Seagate drives in these trayless bays can "tick" a bit, sort of annoying in an otherwise silent system.
     Add-Ons Used: Radarr, Sonarr, NZBGet, Organizr, DuckDNS and tons of plugins... I'm a nerd, what can I say.
     Future Plans: none at the moment
     Boot (peak): TBD
     Idle (avg): 70W
     Active (avg): ~90W
     Light use (avg): ~80W
     Disk Rebuild: 105W
     The highest usage I've seen was the 105W during a data rebuild; even booting my Windows and Linux VMs doesn't push it above the mid 80s. More information to follow when I can order the CPU and swap the core of the system.
     Current view from the front (will replace later with full pictures of the finished project)
     System side:
     System side with chromax fans and heatsink covers:
     Drive side:
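     For anyone curious what "watching HD temps rather than system temps" works out to, here is a minimal Python sketch of that kind of fan curve. The thresholds and duty cycles are invented for illustration; the auto-fan plugin's actual logic and settings may differ.

       # Hypothetical drive-temperature fan curve, loosely in the spirit of
       # the auto-fan plugin. All thresholds and duty cycles are made up.

       def fan_duty(hottest_drive_c: float) -> int:
           """Map the hottest drive temperature to a PWM duty cycle (0-100%)."""
           low_c, high_c = 30.0, 45.0    # below low: minimum duty; above high: full speed
           min_duty, max_duty = 30, 100  # keep fans spinning even when drives are cool

           if hottest_drive_c <= low_c:
               return min_duty
           if hottest_drive_c >= high_c:
               return max_duty
           # Linear ramp between the two thresholds
           span = (hottest_drive_c - low_c) / (high_c - low_c)
           return round(min_duty + span * (max_duty - min_duty))

       for temp in (28, 35, 40, 46):
           print(f"{temp}C -> {fan_duty(temp)}% PWM")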
  7. Some stores in the EU are listing 11/1 as the date they will have stock on these processors. So another few weeks and perhaps I can do the upgrade!
  8. Both! I have a different level of knowledge, so I defer to him!
  9. I defer to Johnnie in all things unraid; I was basing my comments on a Norco 24-bay chassis I had, which didn't have a built-in expander.
  10. They are different connections: they use mini-SAS connectors, so you get one cable per 4 hard drives... each port on the LSI or the SM cards supports 4 drives (so an 8-port card has 2 ports/cables). You can also get "breakout cables" that have one connection at the card side and 4 standard SATA tails hanging off it. To make things even more complex, you can use an "expander" that takes the two cables from the LSI card and "expands" them. (Not sure if the SM card supports this.) https://www.amazon.com/Intel-RAID-Expander-Card-RES2SV240/dp/B0042NLUVE With that, you take 1 of the LSI cards and connect both ports of that card to the expander... then you get 4x mini-SAS, or 16 drives, doubling the number you can service from one LSI card. (You can do single-link rather than dual-link and get 20 I believe, but they would not get full bandwidth for all 20 drives that way; see the rough math below.)
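     To put rough numbers on that single-link vs dual-link trade-off, here's a back-of-the-envelope sketch in Python. It assumes SAS2 (6 Gb/s per lane, 4 lanes per mini-SAS port), which fits the cards being discussed; treat it as rough math, not a spec sheet.

       # Rough SAS2 bandwidth math for an 8-port HBA, direct vs. expander.
       # Assumes 6 Gb/s per lane and 4 lanes per mini-SAS (SFF-8087) port.

       GBPS_PER_LANE = 6
       LANES_PER_PORT = 4

       def per_drive_gbps(uplink_ports: int, drives: int) -> float:
           """Bandwidth per drive when all drives stream through an expander at once."""
           uplink_gbps = uplink_ports * LANES_PER_PORT * GBPS_PER_LANE
           return uplink_gbps / drives

       # Direct attach: 2 ports x 4 lanes = 8 drives, each with its own 6 Gb/s lane.
       print("direct attach: 8 drives @ 6.0 Gb/s each")

       # Dual link: both HBA ports uplink to the expander, 16 drives share 48 Gb/s.
       print(f"dual link: 16 drives @ {per_drive_gbps(2, 16):.1f} Gb/s each")

       # Single link: one uplink port, ~20 drives share 24 Gb/s.
       print(f"single link: 20 drives @ {per_drive_gbps(1, 20):.1f} Gb/s each")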
  11. Depends on what you mean: a backplane like the Norco chassis's isn't an "expander", it just takes 4 ports from the card and breaks them out to 4 of the hot-swap bays, so you need a 1:1 ratio of card ports to hot-swap bays. If you want an expander like the Intel one, which can take 2 channels from one of these cards and break them out to a lot more drives, that's a different question. I've used both cards in hot-swap chassis (1:1 port ratio) and they both work fine; the only real issue with the SAS2LPs has been some users reporting issues when using VT-d (VM passthrough). The Marvell chip they are based on is the issue, while the LSI cards get strong recommendations almost universally on the forums. If I were trying to play it safe, I'd start with an LSI-based card; you can find them all over eBay (some have to be flashed to "IT" mode, as their RAID mode is not great for unraid). I'm not sure you can go wrong with either one, just know that there is a small risk the SAS2LP cards might cause issues.
  12. The two obvious choices are the AOC-SAS2LP-MV8 or any of the LSI cards. Generally people don't like to recommend the SAS2LP, as "some" users have issues with them (mostly related to running VMs). It seems to be an either-or situation... if you don't have problems at the start it is rock solid for years... or you have a problem nearly instantly and it never goes away. The LSI cards are the choice of the forum, and you can get them off eBay for cheap. When I do my upgrade, I'll be moving from SAS2LP to LSI cards, just to be on the safe side.
  13. Take a look at the E-2176G or E-2186G; they are Xeons with ECC that are basically i7-8700s. The other nice thing is Supermicro has a C246 motherboard with IPMI available for them. The cost of the CPUs is ~$300-$380 and the MB is ~$300, so not much of a premium to get IPMI and ECC with the same power. Actually, the E-2176G with its 80W TDP may even be better than an i7-8700K in terms of heat/power while still having the exact same QSV and Passmark score. http://www.supermicro.com/products/motherboard/X11/X11SCA-F.cfm The only downside is the processors are not shipping just yet; it should be this month. The board should also work with i7s, but SM is not allowed to state that, due to Intel wanting people to stick to the roles Intel thinks make them the most money. You could drop an i7-8700K in there with non-ECC RAM, but it isn't "officially supported", just "tested". You can see SM hinting at this in the CPU section of the page above.
  14. OUCH! How could you put that thought in my head?! Sheesh, perhaps I should return this thing... RAMBUS... egads.
  15. If China wants to waste time watching my downloads, then I'll consider it my patriotic duty to flood them with as much meaningless information as possible. I'm not going to give up the best board vendor for my use-case just because China is trying to hack our government. That really strikes me as "no s**t, Sherlock" kind of information. Remember, the goal of these things is to know but not let on... so even if they could expose my deep dark love of YouTube videos of people getting kicked in the nuts... they wouldn't, as intelligence groups don't give a toss about what I'm doing... I'm nothing but a frustration if impacted, as I have NO value to them and only risk if their system is spying on me. Now for work, where there is proprietary information being stored... yea, I'd worry a little. But there are far less dramatic sources of corporate espionage... this is a government-against-government thing if real, with no impact on my use of SM... in fact I just ordered a new SM board today.
  16. Ordered the SM board from Newegg; they must have only had one in stock, as it now shows as out of stock. I think I might have just taken a detour on a branch of the Intel family that isn't going to really be that widely used :).
  17. So I read the SM manual last night, and according to the block diagram on page 21, it isn't "both" M.2s that take out the x4 slot, just M.2-2. M.2-1 disables the U.2 port. I suppose that's why SM put "M.2:1~2"; not sure what the tilde means in their minds. Anyway, if I'm reading this right, I can use 1 M.2 and keep the x4 slot for a 10G ethernet card in the future.
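     A tiny sketch that encodes that reading of the block diagram, just to make the sharing rules explicit. This is my interpretation of page 21, not anything straight from Supermicro.

       # X11SCA-F sharing rules as I read the block diagram: M.2-1 disables
       # the U.2 port, M.2-2 disables the PCIe x4 slot. My interpretation only.

       def disabled_by(m2_1_used: bool, m2_2_used: bool) -> set:
           disabled = set()
           if m2_1_used:
               disabled.add("U.2")
           if m2_2_used:
               disabled.add("PCIe x4 slot")
           return disabled

       # One M.2 in slot 1 leaves the x4 slot free for a future 10G NIC:
       print(disabled_by(m2_1_used=True, m2_2_used=False))  # {'U.2'}
       print(disabled_by(m2_1_used=True, m2_2_used=True))   # both disabled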
  18. I'm just going with the Supermicro and doing LAG rather than a 10Gb card. I've got the parts in a Newegg cart, but I'm not sure if I want to buy until the CPU is available. Then again, when the CPU comes out the board might be hard to find, as it is a niche product. Have you seen any hint as to the ship date?
  19. Thanks! Ok, so it looks like the same as the SM board really. Hurm, lose IPMI to gain a black PCB. That's almost freaking worth it! Update: Ok, perhaps it is better than the SM. If I'm reading this right, it will operate in 8x/4x/4x/-- (the last slot is disabled to get the SATA ports) AND allow one M.2 to be used. I think the trade-off here is the loss of the U.2 port, which I have no intention of ever using... so perhaps the Asus board is a better option. The SM can only do 8x/8x with the last slot disabled. While that would work as well, I just don't like upgrading and having all expansion used day one. Perhaps that's the bind of being close to the end of this Intel cycle; there just aren't many options until they get to 10nm next year (or in 2025, given how well they've done so far).
  20. I can't find any docs on the Asus board. I think I'm just going to go with the SM and forego the 10Gb port for the moment. I can bond the two 1G ports (as my current server is set up) and that is likely enough for the moment. I'm not sure if I should order parts now or wait for the processor... it's kind of silly to order the MB/RAM/cooler without knowing when the processor is going to be in stock.
  21. Thinking the best compromise will be this: https://www.asrock.com/mb/Intel/Z390 Taichi Ultimate/index.asp#Specification The trade-off right now is the iGPU versus ECC+expansion: I can have one or the other. This ASRock board seems to be the only compromise option, but fully loaded it will only have 1x M.2. That works for me, I think... but it also isn't a server-class setup. ERG, why does it have to be either-or?! I should likely just go TR and suck up the Plex thing.
  22. So I've been digging into this more, and I'm not as convinced this is the right path. From the Supermicro page for the MB:
     M.2 Interface: 2x PCI-E 3.0 x4, RAID 0 & 1
     M.2 Form Factor: 2242/2260/2280/22110
     M.2 Key: M-Key
     M.2 #1~2 are shared with the PCI-E x4 slot; M.2 #2 is shared with U.2
     U.2 Interface: 1x PCI-E 3.0 x4
     This confuses me. If I use M.2 #1 OR #2, then the x4 slot is disabled; if I use #2, then both the U.2 and x4 slots are disabled. So basically I can use M.2 OR PCI-E x4. That means I'd only be able to run in 16x or 8x+8x mode? So one slot for the HBA and one for the 10Gb card, and it is full. More and more I'm thinking of just going AMD to remove the PCIe lane limitations and then shifting Plex transcoding off the server if it becomes a problem.
  23. I haven't really been focused on power; right now I'm using an old Xeon/MB combo that only has SATA2, so I've had to use 2x 8-port HBAs. I'm planning on upgrading soon to the E-2176G to get access to the iGPU (my MB is so old it has an onboard GPU that "blocks" the one in my Xeon). When I do that I'll run 8 SATA off the MB and the rest off a single HBA to trim power/heat a bit more. I generally don't worry about optimizing for network writes; I have a 2x1G LAG setup to the server, but mostly that's to cover my 1Gig Usenet downloads and allow full-rate streaming to my clients on the LAN (rough math below). As I do all the downloading to the server and process the files there, I don't have much reason to push things. That said, I will be going with an M.2 SSD in the new build, mostly to prevent a cable run and clear out the 2.5" drives so everything fits better. That should give me insane performance when transferring files without having to do anything fancy. This is the board I'm going to get: http://www.supermicro.com/products/motherboard/X11/X11SCA-F.cfm 8 SATA ports, IPMI, and basically everything I could need for another long-term workhorse system. If I can get half the time out of the new one that I've gotten out of my current one, then I'll be happy :).
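     For a rough sense of why the 2x1G LAG covers this workload, here's the back-of-the-envelope math; the per-stream bitrate is an assumed figure, not a measurement.

       # Can a 2x1G LAG carry a saturated 1 Gb/s Usenet download plus local
       # streaming? Note that LAG hashing pins any single flow to one 1G link,
       # so the benefit is aggregate across clients, not one faster transfer.

       LINK_MBPS = 1000
       LINKS = 2

       usenet_mbps = 1000   # download pipe running flat out
       stream_mbps = 50     # heavy 4K remux bitrate (assumed)

       headroom_mbps = LINKS * LINK_MBPS - usenet_mbps
       print(f"headroom after downloads: {headroom_mbps} Mb/s")
       print(f"simultaneous heavy streams: {headroom_mbps // stream_mbps}")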
  24. I've never run an i3, but take a close look at the iGPU in your older i3; only the most recent versions support 4K, and only the newest support 4K HDR, I believe. Just a warning, but otherwise, if you don't need processing horsepower, it doesn't seem like a bad idea at all. The new Supermicro C246 board is very nice looking; I've a thread on it in the MB/CPU forum :).
  25. All the disks are spun up ATM; I think Radarr just rescanned... I really wish they would let us control that :(. The UPS is reporting ~100 watts with 8 disks spun up and one Plex transcode going. With 15 disks in the trays and me running 10TB Seagates right now, that means a potential of 150TB. My parity is a 12TB, so I can push that further by expanding with 12TB drives, but I don't really see the need as I'm only 50% populated ATM. Currently the total protected is 74TB on 8 data disks (a mix of 5x 10TB and 3x 8TB).
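     The capacity numbers work out like this (spelled out in Python just to show the arithmetic):

       # Array math from the post: 15 trayless bays, 12TB parity,
       # current data disks are 5x 10TB + 3x 8TB.

       bays = 15
       current_drive_tb = 10
       print(f"raw bay potential at 10TB each: {bays * current_drive_tb} TB")  # 150 TB

       data_disks_tb = [10] * 5 + [8] * 3
       print(f"protected today: {sum(data_disks_tb)} TB on {len(data_disks_tb)} data disks")  # 74 TB on 8

       parity_tb = 12  # parity must be >= the largest data disk, so 12TB data drives fit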