Everything posted by DigitalStefan

  1. First test after setting the upstream DNS to my router resulted in no spike in CPU usage, and the download hit 80MB/s (over a 2GB game download). I then uninstalled the game, set the upstream DNS back to the PiHole docker instance and re-downloaded the same game, which again did not spike the server CPU, and the download again topped 80MB/s. Really not sure what's going on.
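     For anyone retracing this, the quickest way I know of checking what clients are actually being told is to query each DNS server directly (the hostname is one of the Steam content hosts lancache intercepts; the IPs here are examples, not my actual setup):

        # Via the PiHole/lancache DNS - should return the lancache server's LAN IP
        dig +short lancache.steamcontent.com @192.168.1.10

        # Via the router directly - should return a real Steam CDN address instead
        dig +short lancache.steamcontent.com @192.168.1.1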
  2. I spoke too soon. Lancache has gone back to pegging the CPU during Steam installs, which now top out at around 24MB/s. I forgot to mention in my previous post that I also upgraded to unRAID 6.8.3 near the start of the troubleshooting process. What's interesting / frustrating is that whilst the web UI shows very high CPU usage on all cores, htop from a terminal does not, and the CPU stats on the web UI docker tab also show very low usage by the lancache-bundle docker instance. There's very little else running on the server: Plex, PiHole (which lancache-bundle gets DNS from) and lancache-bundle. No VMs running. I'm going to see what happens if I set lancache-bundle to pull DNS directly from my router instead.
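     To cross-check the dashboard numbers from a terminal, per-container CPU can be read straight from docker (container name assumed to match the template):

        docker stats --no-stream lancache-bundle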
  3. I'm having a similar issue. I cached Fallout 4 on Steam. I can download it at approx. 8.4MB/s when not cached. It's averaging 15MB/s when cached, but it pegs all CPU cores at 100%. It's a weak CPU (AMD FX8150), but even so this seems disproportionate. Unraid 6.7.2. EDIT: OK, so this was quite a journey to resolve. 1. Hypothesised that the cache slice size of 1MB was likely too low, resulting in lots of read requests saturating the CPU. 2. Added a custom variable to the lancache-bundle docker in the unRAID GUI: name = CACHE_SLICE_SIZE, key = CACHE_SLICE_SIZE, value = 25m. 3. Deleted the CONFIGHASH file in the configured data folder. Ran into problems here. Running Steam kept throwing errors about not being able to log me in. Managed to get logged in after some time and a reboot. Tried downloading games. Very slow downloads. Cache not being created. Lots of cache misses in the log. Reboot. Check DNS. Try more downloads. Curse. Changed CACHE_SLICE_SIZE to 8m in case 25m was just too large for some reason. Reboot. Reboot server. After some time, I noted that the folder structure in the cache data folder had now actually appeared (it was previously empty). Great success. Able to download Cuphead at full speed from the internet. Uninstall. Install. Download now >100MB/s (on my 1Gb LAN).
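     For anyone doing the same outside the unRAID GUI, the equivalent is passing the variable straight to the container - a sketch only, since the image name and host paths depend on your template:

        # Larger slices mean fewer read requests per cached download
        docker run -d --name lancache-bundle \
          -e CACHE_SLICE_SIZE=8m \
          -v /mnt/user/appdata/lancache-bundle:/data \
          macgyverbass/lancache-bundle

     Whichever way it's set, the CONFIGHASH file in the data folder has to go afterwards - as I understand it, that's what triggers the nginx config to regenerate.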
  4. I have a Sabertooth 990FX with an FX8150. I have had 2 x 8GB Crucial ECC 1333MHz installed for some years with no problems. On Friday I added an additional 1 x 8GB Crucial ECC 1600MHz DIMM, installed in the 3rd socket along, counting away from the CPU socket. Again, no problem: 24GB installed and usable. This morning I added another Crucial 8GB ECC 1600MHz DIMM. Now I get 32GB installed but only 24.056GB usable. Confirmed all DIMMs are running nicely at 1333MHz. I haven't twiddled anything in the BIOS. If anyone has had any experience with this and knows how to get 32GB usable, I'd be very pleased to hear how.
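     If anyone else hits this, it's worth first checking what the board itself reports per slot (needs root; output format varies by BIOS):

        dmidecode --type memory | grep -E 'Size|Locator|Speed'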
  5. Yes, this makes way more sense. Thanks. Certainly a lesson learned.
  6. This hadn't actually crossed my mind. Thanks. On checking, only my parity drive is running at 6Gbps speeds. All 3 array drives are running at 3Gbps. They are all connected to the on-board 6Gbps SATA ports, so I think you've given me some very good advice. I'll be ordering some new cables.
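     For reference, the negotiated link speed per port is in the kernel log:

        dmesg | grep -i 'SATA link up'
        # e.g.  ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
        #       ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)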
  7. I think the cables are OK. It might be the SATA controller on my original ASUS Sabertooth 990FX motherboard. The drives coming out of the server are SATA 3Gbps drives and the new ones are 6Gbps. I will change the cables though, to be sure. I may just knock everything back to 3Gbps speeds. It's not like these drives are going to go beyond 200MB/s during transfers.
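     If I do end up pinning everything at 3Gbps, my understanding is the kernel can force it at boot - on unRAID that means the append line in /boot/syslinux/syslinux.cfg (per-port syntax also exists):

        append initrd=/bzroot libata.force=3.0Gbps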
  8. I'm in the process of migrating in some 2TB Seagate drives (cheap ones, not NAS specific). Each of them has so far flagged between 4 and 8 'UDMA CRC error count' events as it goes through the array rebuild process (I'm swapping out some older 500GB drives). That type of error being flagged is a little unnerving, but they are recoverable errors with no data loss. If I see those numbers increasing over time, I'll be concerned. I'm also seeing high 'raw read error rate' numbers. I'm ignoring those as spurious (on Seagate drives the raw value isn't a simple error count anyway).
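     For anyone wanting to watch the same counter, it's SMART attribute 199 (device name is an example):

        smartctl -A /dev/sdb | grep -E 'UDMA_CRC|Raw_Read'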
  9. I recently upgraded to unRAID 6.4.1 and whilst I was there I did a few housekeeping tasks. Installed a different variation of the Plex docker (the existing one refused to do an update) and reinstalled a Deluge docker. I also took the opportunity to install the 'Fix Common Problems' plugin. Fix Common Problems was great, pointing out a couple of small issues that I could easily take care of. I haven't done anything about the "no CPU Scaling driver installed" one, because it doesn't really affect anything. And one small, tiny little notice about a "folder mounted under /mnt" error. I forget the exact wording. So... I plod merrily along, using my server as normal. Then I notice a SMART warning on my parity drive. It's a 6-year-old drive and I wasn't the original owner. I decide it's time to upgrade, so I order 4 x 2TB Toshiba drives for under £200. Then my array starts dropping out. All drives just disappearing from the unRAID control panel. So I think "crap, I'd better get some new drives in quickly. Can't wait the 5 days for eBuyer shipping". I cancel that order and go to Amazon for 4 x 2TB Seagate drives for £215. Prime membership means next-day delivery. Drives arrive and I swap out the 1TB drive for a nice, new (manufactured Jan 2018, nice!) Seagate 2TB drive. Parity is recalculating. Array drops out. I reboot. Parity is recalculating. Array drops out. Crap. Crap. Crap! I check cabling and fire back up. All is well. Parity recalculation completes. I power down and swap out one of the three 500GB array drives. New drive rebuilds. Array drops out. At this point I'm thinking I'm going to have to buy a new motherboard, because I know my SATA cabling is good, but my motherboard is old. It's £200 for an equivalent replacement. I nearly buy it. Then I remember this "folder mounted under /mnt" error. Longer story shorter... I've somehow botched the reinstall of the Deluge docker and it's created a folder '/mnt/NAS/unsorted-downloads/' which is merrily filling up... my 32GB boot USB. When it gets full, it drops the array out. Not 'array offline'... everything just disappears and even the server log is inaccessible. The moral of the story is: pay attention to even small details when Fix Common Problems tells you something. I suspect my old 1TB parity drive still has some life left in it and I've spent £215 on an unnecessary, but thankfully not unwelcome, storage upgrade.
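     The check that would have caught this much earlier - see which filesystem actually backs a suspicious path, and what's growing under /mnt:

        df -h /mnt/NAS/unsorted-downloads/   # if this shows the flash/root device rather than an array disk, that's the problem
        du -sh /mnt/*/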
  10. If anyone knows how to put together a docker image that lets me run ethminer on the GPUs, I'd prefer that to running a Windows VM, since I would likely be able to make use of all the GPUs in the system.
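     Something like this is what I'm imagining - purely a sketch, since the image name is made up and the host would need the Nvidia driver plus the nvidia-docker runtime, which stock unRAID doesn't ship with:

        # Hypothetical image; pool/wallet arguments omitted as they vary by ethminer version
        docker run -d --name ethminer \
          --runtime=nvidia \
          -e NVIDIA_VISIBLE_DEVICES=all \
          someuser/ethminer:cuda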
  11. I've been GPU mining in a Windows VM on unRAID 6.3.5 for a few months now. It's not been a 100% success story. Using an AMD FX8150 and an ASUS Sabertooth 990FX R1.0 motherboard, I'm unable to pass through the 'first' GPU (i.e. the one that initialises during boot to display the unRAID startup stuff) - even if no display is connected and even if I try to manually load the GPU's vbios from a 'rom' file. Second thing, which is something that plagued me when running Windows Server natively on this same hardware and part of the reason I tried out unRAID in the first place: my motherboard cannot reliably power 2 or more GPUs (i.e. it's unable to provide 75W to all PCIe slots). This manifested as a hard power-off whenever CPU usage hit 100% for more than a few seconds. Solved this by using powered PCIe risers on all except 1 GPU. Passing through my GPUs to a Windows VM has been straightforward: 1 x GTX 1070 and 1 x GTX 1060 6GB. Using the Windows 10 Pro evaluation ISO as an install base, I created a VM using OVMF, 2GB RAM, 4 CPU cores and a 50GB drive. Graphics consisted of the VNC adapter as the first display, then each of the GPUs, not forgetting to also add the Nvidia HD Audio (HDMI) devices. Usual Windows install. Left it to auto-update and install some basic Nvidia drivers before installing the drivers from the Nvidia website. Remembered to adjust system standby to 'never' in the power properties. MSI Afterburner works and enables the much-needed memory clock and power adjustments for each card. 'ethminer' works just as it would on native hardware. This has given me a 100% stable system.
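     One detail worth spelling out: the GPU and its HDMI audio show up as two functions on the same PCI address, which is why both get added to the VM (IDs below are from a GTX 1070 as an example):

        lspci -nn | grep -i nvidia
        # 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1070] [10de:1b81]
        # 01:00.1 Audio device [0403]: NVIDIA Corporation GP104 High Definition Audio Controller [10de:10f0]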
  12. No displays attached. I've tried combinations of multiple GPUs attached to a single VM and separate VMs per GPU. I always hit a problem trying to pass through the first GPU - i.e. the one unRAID itself initialises. I'm RDP'ing into the VM. Currently I have one Win 10 VM with 2 GPUs attached to it. The 'main' GPU is just sitting in the machine, doing nothing useful. I'm cryptomining with the GPUs, running Plex, using the box as a NAS and also running Server 2016 Essentials with a separate Win 10 VM for a bit of dev work. As long as I don't thrash the CPU, this machine is happy with month+ uptimes. If this were anything other than a "let's see what we can do" box, I'd probably have replaced it with a Xeon machine from a few generations ago.
  13. Hi Steve, I've concentrated my efforts on OVMF, although I did try to fire up a few SeaBIOS-based VMs, with zero success. I've just spent the last 20 minutes thinking about buying PCIe risers and a sacrificial Nvidia GT710 or other basic adapter, just so I can pass through the 3 GPUs that I actually care about. It will be a cheaper option than changing motherboard and CPU. That being said, my board and CPU do need replacing at some point. If I push all cores on the CPU, the system powers off. I've already replaced the PSU, so I guess it's probably the capacitors on the board being a bit too old now, or a MOSFET or two being a little unhappy.
  14. I've tried the different methods: dumping the BIOS from the machine itself, and dumping it from another machine using GPU-Z and editing it with a hex editor. With an ASUS Sabertooth 990FX r1 and AMD FX8150, nothing will persuade the first GPU to pass through to a Windows VM. I've genuinely spent many hours attempting it. Either the VM never starts, hogging CPU and never initialising the displays/GPUs, or I get the Error 43. I've tried different Windows client versions, including pre-Creators Update Win 10, Win 10 Enterprise and Windows Server 2016. I've resigned myself to the fact that of the 3 GPUs installed, only 2 of them will work. I don't know if this is a BIOS limitation with my motherboard (no newer BIOS exists) or if it's a CPU issue. If anyone has a matching/similar setup with any insights, I'd welcome your comment. My path from here is an upgrade to a Ryzen CPU and ASRock Taichi board (unless anyone knows of another board that properly supports unbuffered ECC RAM?).
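     For completeness, the "dump BIOS from the machine" route goes via sysfs (PCI address is an example, and it generally only works on a card that isn't currently driving a display - which is exactly the catch with the first GPU):

        cd /sys/bus/pci/devices/0000:02:00.0
        echo 1 > rom                 # make the ROM readable
        cat rom > /boot/vbios.rom
        echo 0 > rom                 # lock it again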