  1. Decto

    Too much?

    Seems like a case for a simple NAS with streaming. You can save some cash with an i3-10100, which is a 4c/8t part; this will be plenty for NAS / streaming, and the built-in GPU does a good job of transcoding in Plex or other media streaming apps. If you want to transcode 4K so you can stream to non-4K devices, you need either the Intel iGPU or an add-in GPU, as transcoding can be very hard on the cores. This chip will easily run a good number of dockers for gathering and streaming media, and even a lightweight VM or two if you need it. More cores would be for heavy VM usage, e.g. running gaming servers, or if you were using this as both a NAS and your main desktop (via a VM).

For the motherboard, a B460 is a more budget-friendly choice with plenty of expansion; the main thing you lose is the option to split the main x16 PCI-E slot into a pair of x8 slots for expansion. Look for a board with at least an x16 and an x4 PCI-E slot; a couple of extra x1 slots are useful as well, but don't get too carried away with premium boards unless you will use the 2.5Gb network etc. Each x1 PCI-E slot can support 2 extra spinning drives at native speed, while the x4 will give you 8 extra drives. If you can get a deal on a B365 board and a similar CPU then that's a good option too; there is little difference other than Intel offering a little more for the money in the 10 series and adding a 10-core part, so a discounted last-gen platform would be fine.

The areas where you may want to spend a little extra are the PSU and drives. That PSU has power plugs for around 8 drives, and splitting power cables isn't recommended; one extra drive per string is OK, but more than that can cause power issues. If you plan on significantly more than 8 drives in the future, I'd recommend a PSU with more SATA power strings, or Molex connectors that can be adapted to SATA. For drives, depending on how much storage you need, fewer larger drives are recommended, e.g. 3x 8TB is preferred to 5x 4TB.

You do waste a little more capacity using a larger drive as parity; however, the expensive part of the whole system is usually the number of physical drives, as each needs power, cooling, a slot in your case and a SATA port. So if you run out of capacity to add drives, you may need a better PSU, a bigger case, a more complex SATA add-in card etc., or even a motherboard with more expansion slots. The cost of the extra 'wasted' space on the parity drive can soon become trivial in comparison. Also, each drive is a potential point of failure. Good luck.
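The capacity trade-off above is easy to sanity-check. A minimal sketch, assuming single parity (as in Unraid, where the largest drive is given up to parity) and placeholder drive prices, not real quotes:

```python
# Rough sketch of the "fewer, larger drives" argument above.
# Drive prices are placeholder assumptions, not quotes.

def usable_tb(drive_tb: int, count: int, parity_drives: int = 1) -> int:
    """Usable capacity with parity: one drive's worth lost per parity drive."""
    return drive_tb * (count - parity_drives)

layouts = {
    "3x 8TB": (8, 3, 150),   # (size in TB, drive count, assumed price per drive in $)
    "5x 4TB": (4, 5, 90),
}

for name, (size, count, price) in layouts.items():
    cap = usable_tb(size, count)
    print(f"{name}: {cap}TB usable, {count} bays/SATA ports, ~${count * price}")
```

Both layouts land on 16TB usable here, but the 5-drive layout burns two extra bays, SATA ports and power connectors, which is the point being made.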
  2. It should idle low, but it will depend on the system. A more basic motherboard with fewer VRM phases will use less power at idle. Use a suitable PSU, gold-rated or better and not oversized. My Xeon E5-2660 V3 idles at 40W for the bare board, a GT710 and 4 registered DIMMs, so I would expect the i3 will be ~20W. You then need to add a watt or two for each spun-down drive and 3-4W for each 120mm cooling fan. Ideally the board will have a decent fan controller so the fans respond to case temp and can be spun down. Cooling is usually the issue with lots of drives, so it's best to have them mounted right behind a fan wall.
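The additive estimate above can be written down directly. A back-of-envelope sketch using the per-component figures from the post (rough values, not measurements):

```python
# Back-of-envelope idle power estimate using the additive numbers above.
# The per-component figures are rough values from the post, not measurements.

BASE_IDLE_W = 20      # assumed i3 board + CPU + RAM at idle
PER_DRIVE_W = 2       # per spun-down HDD
PER_FAN_W = 3.5       # per 120mm fan (midpoint of the 3-4W range)

def idle_estimate(drives: int, fans: int, base: float = BASE_IDLE_W) -> float:
    return base + drives * PER_DRIVE_W + fans * PER_FAN_W

print(idle_estimate(drives=6, fans=3))  # e.g. 6 drives, 3 fans -> 42.5
```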
  3. Decto

    Too much?

    Hi and welcome. Before anyone can help, you need to post your requirements. Is this a simple NAS or a media server? Will it host VMs, and if so, what will those be for? What apps and dockers do you plan to run? How much storage do you plan to attach?
  4. SATA 2 is good for ~250 MB/s; the 195 MB/s is about right for a WD 8TB drive at the start of the preclear, but it will slow down as it moves to the inner disk tracks.
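Since the transfer rate falls off toward the inner tracks, the average over a full pass is well below the outer-track figure. A rough sketch, assuming a roughly linear falloff and a hypothetical ~100 MB/s inner-track speed (the inner figure is an assumption, not measured):

```python
# Rough preclear-time estimate. Assumes the transfer rate falls roughly
# linearly from outer to inner tracks, so the average is the midpoint.
# The inner-track speed here is an assumption, not a measured figure.

def preclear_hours(capacity_tb: float, outer_mb_s: float, inner_mb_s: float) -> float:
    avg = (outer_mb_s + inner_mb_s) / 2          # MB/s, linear-falloff model
    seconds = capacity_tb * 1e6 / avg            # 1 TB = 1e6 MB
    return seconds / 3600

# 8TB drive starting at 195 MB/s, assumed ~100 MB/s on inner tracks:
print(round(preclear_hours(8, 195, 100), 1))    # -> 15.1 (hours per pass)
```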
  5. The EDAZ drives are air-filled and run 5°C or so hotter than other WD drives.
  6. I'd agree with your interpretation; the bandwidth of the third slot is shared by all chipset devices. The same thing happens on the ASUS WS board, except there it is an x4 PCI-E 4.0 bus that is shared among all devices, and the card is presented with an x8 PCI-E 3.0 bus whose bandwidth is shared. There just aren't enough PCI-E lanes to go around on consumer platforms.
  7. The X10SRA-F absolutely does support registered RAM. I have the version without IPMI running 64GB of 2400MHz RDIMM (4 x 16GB), which reports as multi-bit ECC in Unraid. The full 64GB was bought new with lifetime warranty (brown box) for £152 / $200 early this year. Power consumption is 90W idle with 3 installed GPUs and 10 HDDs (spun down). From what I could see, multi-socket boards tend to only support single-bit ECC, so one advantage of a single-Xeon build is more robust ECC. I do agree that in many cases a desktop CPU offers better performance / cost / efficiency; however, in use cases where more PCI-E lanes / slots are desired, the E5 Xeons still make sense at used prices. As for the China-sourced LGA2011 V1/V2 boards, I'd rather not trust my data to recovered components and an unqualified BIOS assembled in an untraceable 'factory' with limited, if any, support. As a home lab for testing, development, learning etc., sure, why not if it's cheap.
  8. I have a number of 4TB 2.5" Seagate Expansion drives I shucked, as they fit in a Dell T20 slim optical / dual 2.5" expansion bay. Worth noting that they are 15mm thick, so they don't fit in most 2.5" docks, and they are also SMR, which can be an issue depending on your write pattern. I use them for media storage in my backup server and they have been fine. 2TB is probably the largest you can get in a standard thickness, and that is likely to be SMR as well. WD externals are native USB, so there's no option to shuck. It has been a couple of years since I bought them, but you can check which drive is enclosed before you crack the case.
  9. I am an advocate of keeping NAS and desktop separate. Where do you back your desktop up to? If it's the same NAS that's hosting the VM, it's not really a backup. Others may disagree, as a combination NAS / desktop is a common question here.

The old AMD hardware should be fine; I ran WHS 2011 on an X4 630 for a number of years. I then moved to Unraid and a 4C/4T Intel E3-1225 V3, which had about 20TB on Plex, a couple of Minecraft servers, a 7 Days to Die server and other odd servers at times. The only time there was any significant load was when I had to unrar a large file, which will always use all the CPU. While the AMD is quite a bit slower than Intel, there is nothing here which needs a lot of resources, especially if you use the Minecraft docker(s).

The only issue you may have is transcoding in Plex. If you need to stream to a different device, then transcoding on the CPU cores is computationally expensive, so it may impact time-sensitive activities such as the Minecraft server etc. One option is to use an Nvidia card to transcode; really this needs to be a GTX-class card for good transcode support: a GTX750 as a minimum, or for more transcode options a GTX1050. The basic 2GB version is good enough.
  10. Decto

    RAM question

    A single stick should run fine, though you'll have less bandwidth for compute-heavy tasks. 8GB is plenty for lightweight use.
  11. My system spec is in my sig, but in case you're on mobile, it's an E5-2660 V3 in a SuperMicro X10SRA, so HEDT, though my test machine which currently has the 1060 uses an X99 board with the same chip. I deliberately went for PCI-E lanes over single-core performance, as I only needed relatively low-power remote gaming VMs. One of these pretty much just runs whichever Roblox game my son is 'boosting' in 24/7, so it's not all demanding.

For cases, I'm using Antec P101S cases, as there are 8 PCI-E expansion slots in the backplane, which helps with an ATX HEDT board where the lower (7th) slot is an x8 PCI-E, since you can use a 1.5- or 2-slot-wide GPU in that slot. I've actually moved 1 of the HDD modules from the test server to the main server, so I now have 10 HDD slots + 2 SSD slots + 4 further SSD slots that I 3D-printed, attached to the PSU tunnel.

If you are using a single GPU, or the primary GPU, did you use a VGA BIOS? These can be downloaded from TechPowerUp. You can also try disabling Hyper-V in the VM settings. There are some threads on Nvidia errors, but once I got around the changing IP address for RDP and needing a dummy plug to fake a monitor, I haven't had many other issues with Nvidia. My AMD GPUs have been somewhat more fickle, resulting in me pulling one from the main server and using the spare Quadro for the gaming VM. There was some info that some AGESA versions on the AMD platform are better than others, so some BIOS versions may work better; however, it's not something I've needed to deal with as yet.
  12. You don't say much about the system, but I have Nvidia passthrough working fine with a 1650 (regular), a Quadro P1000 and a 1060 3GB. I run these as remote gaming VMs with Parsec. I use SeaBIOS and the i440fx-4.2 chipset for the VM, as I keep my main system on the stable Unraid version, currently 6.8.3. I have multiple cards, so the card passed through is not the primary boot card. My setup steps were:

    The card(s) are stubbed using the VFIO-PCI CFG plugin
    Installed Windows
    Installed the VirtIO drivers
    Installed the Nvidia driver

I have a dummy HDMI plug connected to each card so it powers up correctly. I find that sometimes when making changes to the VM, it will suddenly ignore the fixed IP in Windows, but I can usually connect with RDP if I check the router for the MAC address of the VM. Ideally, I fix the IP to the MAC in the router, and then it's stable vs DHCP. What is your system spec?
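For anyone who'd rather stub the card by hand than use the VFIO-PCI CFG plugin, one common approach is to pass the GPU's (and its audio function's) vendor:device IDs to the `vfio-pci.ids` kernel parameter. A minimal sketch of pulling those IDs out of `lspci -nn` output; the sample lines and the `vfio_ids` helper are illustrative, not from a real system:

```python
# Sketch: building a vfio-pci.ids kernel parameter from `lspci -nn` output,
# as a manual alternative to the VFIO-PCI CFG plugin mentioned above.
# The sample lspci lines below are illustrative, not from a real system.
import re

def vfio_ids(lspci_output: str, keyword: str = "NVIDIA") -> str:
    """Collect [vendor:device] IDs for lines matching keyword (GPU + its audio fn)."""
    ids = []
    for line in lspci_output.splitlines():
        if keyword in line:
            m = re.search(r"\[([0-9a-f]{4}:[0-9a-f]{4})\]", line)
            if m:
                ids.append(m.group(1))
    return "vfio-pci.ids=" + ",".join(ids)

sample = """\
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU117 [GeForce GTX 1650] [10de:1f82]
01:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:10fa]
02:00.0 Ethernet controller [0200]: Intel Corporation I210 [8086:1533]
"""
print(vfio_ids(sample))  # -> vfio-pci.ids=10de:1f82,10de:10fa
```

Remember to pass both functions of the card (VGA and audio), otherwise the group won't bind cleanly.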
  13. Hi, I don't think it's a motherboard issue; if a card is sticking in D3, that's a reset issue with the card. The code 43 was a different issue, but again, Nvidia-driver specific. Can you try the GTX1070 in there? I find that I have no issues with SeaBIOS + the i440fx chipset on the GT710, P1000 and GTX1650. I had random issues with AMD, so I have pulled that card for now. I am on an Intel chipset; however, the errors you describe seem to suggest GPU compatibility, and the 8800 GTX is 14 years old now, from a time when VMs were very niche.
  14. That is a very ambitious power target. You should get 40-50W idle for the base system; add ~10W idle for each discrete GPU. What power reading does your current system give? I have an LGA2011 10C/20T CPU, 8 HDDs and 3 SSDs with a Quadro P2000 (~GTX1060), a Quadro P1000 (~GTX1050) and a GTX1650. My current idle with drives spun down but both VMs (P1000 + GTX1650) running and sitting at a Windows desktop is ~100W; if I shut the VMs down, then ~90W. Take out 1 GPU and some memory, use a more efficient platform, and that gets you back to 60-70W idle with 2 GPUs, which would be my best estimate.

I'd also skip the Radeon card; it's likely to be more trouble than it's worth. I've tried 3 different Radeon RX cards in the last few months, and while I got them working, I had a number of random issues, lockups with unclean shutdowns etc. I replaced the card with my spare Quadro and had no more issues. Also consider that you'll only have 3 cores for gaming in a VM, as it's best to keep one for Unraid, so running 2 gaming VMs will be a stretch. I have one at 4 cores and one at 2 cores.
  15. Transcoding on the iGPU is only really an issue if you want to stream media via a paid version of Plex, Emby or similar. I use Plex as it gives a family-friendly interface and works on a significant range of devices; I got the lifetime membership many years ago. With transcoding, the media can be viewed from any PC, tablet, phone or smart stick (Roku) etc. in the house, or externally wherever I can get a reasonable wifi connection. If you just plan to use it as a NAS then it's not an issue; go with whichever solution you prefer. Any PCI-E x1 slots can be populated with a cheap ASMedia 1062 dual-SATA card, which works reliably and is an easy way to add a couple of extra ports, though avoid any with port multipliers (more than 2 ports). The Define 5 has space for a full ATX board, which may give you more connectivity options for later expansion, or let you take a board with 6 SATA ports and add more cheaply.