Decto

Members
  • Content Count

    210
  • Joined

  • Last visited

Community Reputation

27 Good

About Decto

  • Rank
    Advanced Member

Recent Profile Visitors

1142 profile views
  1. I'd agree with your interpretation; the bandwidth of the third slot is shared by all chipset devices. The same thing happens on the ASUS WS board, except there it is an x4 PCIe 4.0 bus that is shared among all devices, and the card is presented with an x8 PCIe 3.0 bus, the bandwidth of which is shared. There just aren't enough PCIe lanes to go around on consumer platforms.
  2. The X10SRA-F absolutely does support registered RAM. I have the version without IPMI running 64GB of 2400MHz RDIMM (4 x 16GB), which reports as multi-bit ECC in Unraid. The full 64GB was bought new with lifetime warranty (brown box) for £152 / $200 early this year. Power consumption is 90W idle with 3 installed GPUs and 10 HDDs (spun down). From what I could see, multi-socket boards tend to only support single-bit ECC, so one advantage of a single Xeon CPU is more robust ECC. I do agree that in many cases a desktop CPU offers better performance / cost / efficiency; however, in use cases where more PCIe lanes / slots are desired, the E5 Xeons still make sense at used prices. As for the China-sourced LGA2011 V1/V2 boards, I'd rather not trust my data to recovered components and an unqualified BIOS assembled in an untraceable 'factory' with limited, if any, support. As a home lab for testing, development, learning etc., sure, why not if it's cheap.
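The multi-bit ECC status that Unraid reports can also be checked from a shell. A minimal sketch (it parses a pasted sample line, since the real `dmidecode -t memory` needs root on actual hardware):

```shell
# Sketch: confirm the ECC mode the board is actually running.
# On real hardware (as root) you would run:
#   dmidecode -t memory | grep -m1 'Error Correction Type'
# Sample output from a board running RDIMMs with full ECC:
sample='Error Correction Type: Multi-bit ECC'
mode=${sample#*: }                 # strip the "Error Correction Type: " label
if [ "$mode" = "None" ]; then
  echo "ECC disabled"
else
  echo "ECC enabled: $mode"
fi
```

Anything other than "None" in that field means ECC is active end to end; "Multi-bit ECC" is what a correctly configured RDIMM setup should show.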
  3. I have a number of 4TB 2.5" Seagate Expansion drives I shucked, as they fit in a Dell T20 slim optical / dual 2.5" expansion bay. Worth noting that they are 15mm thick, so they don't fit in most 2.5" docks, and they are also SMR, which can be an issue depending on your write pattern. I use them for media storage in my backup server and they have been fine. 2TB is probably the largest you can get in a standard thickness, and that is likely to be SMR as well. WD externals are native USB, so no option to shuck. It has been a couple of years since I bought them, but you can check which drive is enclosed before you crack the case.
  4. I am an advocate of keeping NAS and desktop separate. Where do you back your desktop up to? If it's the same NAS that's hosting the VM, it's not really a backup. Others may disagree, as a combined NAS / desktop is a common question here. The old AMD hardware should be fine; I ran WHS 2011 on an X4 630 for a number of years. I then moved to Unraid and a 4C4T Intel E3-1225 V3, which had about 20TB on Plex, a couple of Minecraft servers, a 7 Days to Die server and other odd servers at times. The only time there was any significant load was when I had to unrar a large file, which will always use all the CPU. While the AMD is quite a bit slower than the Intel, there is nothing here which needs a lot of resources, especially if you use the Minecraft docker(s). The only issue you may have is transcoding in Plex. If you need to stream to a different device, transcoding on the CPU cores is computationally expensive, so it may impact time-sensitive activities such as the Minecraft server etc. One option is to use an Nvidia card to transcode; really this needs to be a GTX-class card for good transcode support. GTX750 as a minimum, or for more transcode options a GTX1050; the basic 2GB version is good enough. https://developer.nvidia.com/video-encode-and-decode-gpu-support-matrix-new
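If you're not sure whether a card's driver stack actually exposes NVENC for Plex, one hedged way to check is via ffmpeg's encoder list. A sketch (it greps a pasted sample line, since the real check needs ffmpeg and the Nvidia driver installed on the box):

```shell
# Sketch: check whether the ffmpeg build sees the NVENC hardware encoder.
# On a box with the Nvidia driver you would run:
#   ffmpeg -hide_banner -encoders | grep nvenc
# Sample output line for a GTX 750 / 1050 class card or newer:
sample='V..... h264_nvenc  NVIDIA NVENC H.264 encoder (codec h264)'
if printf '%s\n' "$sample" | grep -q 'h264_nvenc'; then
  echo "NVENC H.264 encoding available"
fi
```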
  5. Decto

    RAM question

    A single stick should run fine, though you'll have less bandwidth for compute-heavy tasks. 8GB is plenty for lightweight use.
  6. My system spec is in my sig, but in case you're on mobile, it's an E5-2660 V3 in a SuperMicro X10SRA, so HEDT, though my test machine which currently has the 1060 uses an X99 board with the same chip. I deliberately went for PCIe lanes over single-core performance, as I only needed relatively low-power remote gaming VMs. One of these pretty much just runs whichever Roblox game my son is 'boosting' in 24/7, so it's not at all demanding. For cases, I'm using Antec P101 Silents, as there are 8 PCIe expansion slots in the backplane, which helps with an ATX HEDT board where the lower (7th) slot is an x8 PCIe, since you can use a 1.5- or 2-slot-width GPU in that slot. I've actually moved 1 of the HDD modules from the test server to the main server, so I now have 10 HDD slots + 2 SSD slots + 4 further SSD slots that I 3D printed, attached to the PSU tunnel. If you are using a single GPU or the primary GPU, did you use a VGA BIOS? These can be downloaded from TechPowerUp. You can also try disabling Hyper-V in the VM settings. There are some threads on Nvidia errors, but once I got around the changing IP address for RDP and needing a dummy plug to fake a monitor, I haven't had many other issues with Nvidia. My AMD GPUs have been somewhat more fickle, resulting in me pulling one from the main server and using the spare Quadro for the gaming VM. There was some info that some AGESA versions on the AMD platform are better than others, so some BIOS versions may work better; however, it's not something I've needed to deal with as yet.
  7. You don't say much about the system, but I have Nvidia passthrough working fine with a 1650 (regular), Quadro P1000 and 1060 3GB. I run these as remote gaming VMs with Parsec. I use SeaBIOS and the i440fx-4.2 machine type for the VM, as I keep my main system on the stable Unraid version, currently 6.8.3. I have multiple cards, so the card passed through is not the primary boot card. The card(s) are stubbed using the VFIO-PCI CFG plugin; then I installed Windows, installed the VirtIO drivers and installed the Nvidia driver. I have a dummy HDMI plug connected to each card so it powers up correctly. I find that sometimes when making changes to the VM, it will suddenly ignore the fixed IP in Windows, but I can usually connect with RDP if I check the router for the MAC address of the VM. Ideally, I fix the IP to the MAC in the router; then it is stable vs. DHCP. What is your system spec?
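To confirm the VFIO-PCI stub actually took before starting the VM, I check which kernel driver claimed the card. A minimal sketch (it parses a pasted sample `lspci` line, since the real command needs the host hardware):

```shell
# Sketch: verify a GPU is bound to vfio-pci rather than nvidia/nouveau.
# On the Unraid host you would run:
#   lspci -nnk -d 10de: | grep 'Kernel driver in use'
# Sample line for a correctly stubbed card:
sample='Kernel driver in use: vfio-pci'
driver=${sample##*: }              # keep only the driver name
if [ "$driver" = "vfio-pci" ]; then
  echo "card is stubbed and ready for passthrough"
else
  echo "card is still bound to $driver"
fi
```

If the driver shown is still nvidia or nouveau, the stub didn't apply and passthrough will fail or taint the host.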
  8. Hi, I don't think it's a motherboard issue; if a card is sticking in D3, that's a reset issue with the card. The code 43 was a different issue, but again, Nvidia driver specific. Can you try the GTX1070 in there? I find that I have no issues with SeaBIOS + the i440fx machine type on a GT710, P1000 or GTX1650. I had random issues with AMD, so I have pulled that card for now. I am on an Intel chipset; however, the errors you describe seem to suggest GPU compatibility, and the 8800 GTX is 14 years old now, from a time when VMs were very niche.
  9. That is a very ambitious power target. You should get 40-50W idle for the base system; add ~10W idle for each discrete GPU. What power reading does your current system give? I have an LGA2011 10C20T CPU, 8 HDDs and 3 SSDs with a Quadro P2000 (GTX1060), Quadro P1000 (GTX1050) and GTX1650. My current idle with drives spun down but both VMs (P1000 + GTX1650) running and at a Windows desktop is ~100W. If I shut the VMs down, then ~90W. Take out 1 GPU and some memory, and a more efficient platform gets you back to 60-70W idle with 2 GPUs, which would be my best estimate. I'd also skip the Radeon card; it's likely to be more trouble than it's worth. I've tried 3 different Radeon RX cards in the last few months, and while I got them working, I had a number of random issues, lockups with unclean shutdowns etc. I replaced the card with my spare Quadro and had no more issues. Also consider that you'll only have 3 cores for gaming in a VM; best to keep one for Unraid, so running 2 gaming VMs will be a stretch. I have one at 4 cores and one at 2 cores.
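The estimate above is just simple addition; as a sketch with my assumed numbers (45W base as the midpoint of 40-50W, ~10W per idle discrete GPU):

```shell
# Sketch of the idle-power estimate: base platform plus ~10W per discrete GPU.
base=45       # W: board, CPU, RAM, fans at idle (assumed midpoint of 40-50W)
per_gpu=10    # W: each idle discrete GPU (rough figure)
gpus=2
echo "estimated idle: $(( base + per_gpu * gpus ))W"
```

With 2 GPUs that lands at 65W, in the middle of the 60-70W range above; spun-up drives add several watts each on top.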
  10. Transcoding on the iGPU is only really an issue if you want to stream media via a paid version of Plex, Emby or similar. I use Plex as it gives a family-friendly interface and works on a significant range of devices; I got the lifetime membership many years ago. With transcoding, the media can be viewed from any PC, tablet, phone or smart stick (Roku) etc. in the house, or externally wherever I can get a reasonable wifi connection. If you just plan to use it as a NAS, then it's not an issue; go with whichever solution you prefer. Any PCIe x1 slots can be populated with a cheap ASMedia 1062 dual SATA card, which works reliably and is an easy way to add a couple of extra ports, though avoid any with port multipliers (more than 2 ports). The Define 5 has space for a full ATX board, which may give you more connectivity options for later expansion, or let you take a board with 6 SATA ports and add more cheaply.
  11. I'm in the big box (tower) camp. I can see the appeal of a 'turnkey' solution, but I consider the Dell to be an inefficient, inflexible box of parts bundled with a stack of high-mileage drives that I wouldn't trust any data to. These servers were designed as compute/web servers, not as storage servers, so they have limited upgrade routes. The drives are fine for a home lab, IT certification practice etc., where reliability is less important (and recovery is actually learning). A couple of 4TB drives is good to get started; ideally 8TB+ so you have a large parity drive at the kick-off. The hardware itself is likely durable, but a pair of low-clocked 6-core CPUs just adds complexity compared to a single more powerful CPU. A single 8-12 core CPU would be preferable if you need the performance, though Unraid and a stack of dockers work fine on a 4-core CPU. I only have more cores so I can assign them to the remote gaming VMs via Parsec, so the kids can play games remotely on a low-end laptop or NUC. For any kind of media streaming you'll likely want a GPU unless you want to hammer the CPUs hard with transcodes. While you can pick up a single-slot Quadro, almost all other cards are 2-slot. The same goes for gaming / game streaming (you need a GTX-class card to stream), even if it's a GTX750 or GTX1050, and almost all of those are 2-slot cards. The 'value' Dell bundle quickly becomes inflexible as your needs grow. A more standard ATX-style tower server from the used market, or some parts added to make your own build, would be a more flexible option. If going with a self-build, parts can also be flipped amongst other 'family' computers as they grow and want independence, or if you find you need more motherboard features, just flip the new board in and retain everything else. Personally, I bought a new single-socket LGA2011 board with used CPU and memory. I now have 10 SATA ports + 2 on a PCIe x1 card, and 3 GPUs: 1 for transcoding and 2 for remote gaming VMs.
I could add a 4th GPU, or an HBA, 10Gb NIC etc. All in, the board, CPU, 32GB, case and PSU would cost little more than the Dell, and if you go with the Ryzen it would be even cheaper, as you have some of the parts. For any VM / gaming etc. I would assume you need 2 GPUs, so make sure the motherboard supports at least 2 x16 PCIe (physical) slots with x8 electrical connections. The primary (boot) GPU can be an Nvidia GT710 or similar from eBay for a few Eur. Gigabyte boards may let you pick which slot's GPU is primary, so it's worth checking the manuals; this may allow you to boot from an x1 or x4 electrical slot with a basic GPU and save the x16 for the 'gaming' GPU, allowing you to buy a less-featured board. This may not be the best solution for you, depending on your needs. Good luck
  12. I'd agree that that case looks like it would be hard work with a significant number of drives in it. I'm using a couple of Antec P101 Silent cases for my main and test systems: 8x 3.5" + 2x 2.5", though the main reason was the 8 rear expansion slots. I wanted 8 slots so I can use a GPU in my lower ATX slot without it choking on the PSU shroud; not an issue for most people though. The Fractal Define 5 has 8x 3.5" + 2x 2.5" and a dual 5.25" bay that could be converted to 3x 3.5". The Fractal Define 6 only comes with trays for ~6 drives, but you could install 11x 3.5" (I think) + 2x 2.5". I have this for my main PC; however, it's full of water cooling, so the 11 drives was my best guess from how the holes line up. All the cases are relatively compact, and the drives sit in front of a fan wall so they get actively cooled. Not hot-swap either, but you can easily get to the rear of the tray to connect / disconnect without needing to drag out a bunch of other wiring. PCPartPicker is a good place to look for cases with a good number of drive bays. Unless you have a specific reason for AMD, I'd usually recommend Intel for general storage / home media, as the integrated GPU is really very good for transcoding media in real time to any device you connect. The B365 boards and CPUs are fine, quad-core or better. Same for the new socket LGA1200 boards, though even some of the Z490s are reasonably priced and have multiple PCIe x16 (physical) slots you can expand into. Memory: branded and on offer; take a look at the native speed supported by the CPU / board and don't overpay for speed you won't use. PSU is tricky. The power 'string' for SATA / Molex is only really suitable for 4 drives; you may get away with 5 or 6, but drives have a relatively high load. Lower-wattage PSUs may only have 2 peripheral (SATA/Molex) strings / plugs. I need to replace the PSU in my test rig, and I'm looking at the TX550/TX650 (3 plugs/strings) and the TX750/TX850 (4 plugs/strings).
Of those, the TX550 has fewer SATA plugs on the cables, so I'd need adapters etc.
  13. Hi, unfortunately while there is excellent support for array issues and data recovery, support for VMs is limited. VMs are somewhat of an add-on feature that requires quite a lot of trial and error as soon as you want to pass through hardware. A good number of the issues are due to improperly implemented standards by device manufacturers, so a chipset may be fine on 1 card but not on a different brand. It's quite unusual to pass through an HBA to a virtual machine within Unraid; usually these are used directly by Unraid. Have you seen the video by SpaceInvader One? He has a good series of videos for most topics; though some are a little out of date, the main details are usually still relevant.
  14. For reasons known only to Intel, the K version has it (VT-d) disabled; however, the i7-4770 has it enabled. As overclocking is not recommended anyway, you may be able to trade for a non-K version for little outlay.
  15. Somewhat a niche board you have there. I had a look at the manual out of interest, so I'm just offering a second set of eyes. I assume you have enabled both VT-d and CPU virtualisation in the BIOS. I was going to suggest trying an earlier BIOS, as updates sometimes break support; however, it looks like there are only two versions. Good luck
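Once those BIOS options are set, you can confirm from a shell that the kernel actually brought the IOMMU up. A sketch (it greps a pasted sample dmesg line, since the real output depends on the host):

```shell
# Sketch: check the kernel log for VT-d / IOMMU initialisation.
# On the host you would run:
#   dmesg | grep -e DMAR -e 'IOMMU enabled'
# Sample line from a system where VT-d is active:
sample='DMAR: IOMMU enabled'
if printf '%s\n' "$sample" | grep -q 'IOMMU enabled'; then
  echo "VT-d / IOMMU active"
else
  echo "IOMMU not found - check BIOS settings"
fi
```

No DMAR / IOMMU lines at all usually means the BIOS setting didn't take or the board doesn't support it.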