DigitalStefan

Members
  • Content Count: 14
  • Joined
  • Last visited

Community Reputation

0 Neutral

About DigitalStefan

  • Rank: Newbie


  1. First test after setting upstream DNS to my router resulted in no spike in CPU usage and the download hit 80MB/s (over a 2GB game download). I then uninstalled the game, set the upstream DNS back to the PiHole docker instance and re-downloaded the same game, which again did not spike the server CPU and the download again topped 80MB/s. Really not sure what's going on.
  2. I spoke too soon. Lancache has gone back to pegging the CPU during Steam installs, which now top out at around 24MB/s. I forgot to mention in my previous post that I also upgraded to unRAID 6.8.3 near the start of the troubleshooting process. What's interesting / frustrating is that whilst the web UI shows CPU usage on all cores being very high, htop from a terminal does not, and the CPU usage stats on the web UI docker tab also show very low usage by the lancache-bundle docker instance (a quick way to cross-check per-container CPU from a terminal is sketched after this list). There's very little else running on the server: Plex, PiHole (which lancache-bundle get
  3. I'm having a similar issue. I cached Fallout 4 on Steam. I can download it at approx. 8.4MB/s when not cached. It averages 15MB/s when cached, but it pegs all CPU cores at 100%. It's a weak CPU (AMD FX8150), but even so this seems disproportionate. unRAID 6.7.2. EDIT: OK, so this was quite a journey to resolve. 1. Hypothesised that the cache slice size of 1MB was likely too small, resulting in lots of read requests saturating the CPU. 2. Added a custom variable to the lancache-bundle docker in the unRAID GUI (a command-line equivalent is sketched after this list). name = CACHE_SLICE_SIZE, key = CACHE_SLIC
  4. I have a Sabertooth 990FX with FX8150. I have had 2 x 8GB Crucial ECC 1333MHz installed for some years with no problems. On Friday I added an additional 1 x 8GB Crucial ECC 1600MHz DIMM. Installed in the 3rd socket along, counting away from the CPU socket. Again, no problem. 24GB installed and usable. This morning, I added another Crucial 8GB ECC 1600MHz DIMM. Now I get 24.056GB Usable, 32GB installed. Confirmed all DIMMs are running nicely at 1333MHz. I haven't twiddled anything in the BIOS. If anyone has had any experience with this and knows how to get 32GB usable, I'd
  5. Yes, this makes way more sense. Thanks. Certainly a lesson learned.
  6. This hadn't actually crossed my mind. Thanks. On checking, only my parity drive is actually running at 6Gbps; all 3 array drives are running at 3Gbps (the smartctl check sketched after this list shows the negotiated link speed per drive). They are all connected to the 6Gbps on-board SATA ports, so I think you've given me some very good advice. I'll be ordering some new cables.
  7. I think the cables are OK. It might be the SATA controller on my original ASUS Sabertooth 990FX motherboard. The drives coming out of the server are SATA 3Gbps drives and the new ones are 6Gbps. I will change the cables though, to be sure. I may just knock everything back to 3Gbps speeds. It's not like these drives are going to go beyond 200MB/s during transfers.
  8. I'm in the process of migrating in some 2TB Seagate drives (cheap ones, not NAS-specific). Each of them has so far flagged up between 4 and 8 'UDMA CRC error count' events as it goes through the array rebuild process (I'm swapping out some older 500GB drives). That type of error being flagged is a little unnerving, but they are recoverable errors with no data loss. If I see those numbers increasing over time, I'll be concerned (a quick way to keep an eye on them is sketched after this list). I'm also seeing high 'raw read error rate' numbers. I'm ignoring those as spurious.
  9. I recently upgraded to unRAID 6.4.1 and, whilst I was there, did a few housekeeping tasks: installed a different variation of the Plex docker (the existing one refused to update) and reinstalled a Deluge docker. I also took the opportunity to install the 'Fix Common Problems' plugin. Fix Common Problems was great, pointing out a couple of small issues that I could easily take care of. I haven't done anything about the "no CPU Scaling driver installed" warning, because it doesn't really affect anything. One small, tiny little notice about a "folder mounted under /mnt" error. I f
  10. If anyone knows how to put together a docker image that lets me use ethminer with the GPUs in this system, I'd prefer that to running a Windows VM, as I would likely be able to make use of all of them.
  11. I've been GPU mining in a Windows VM on unRAID 6.3.5 for a few months now. It's not been a 100% success story. Using an AMD FX8150 and an ASUS Sabertooth 990FX R1.0 motherboard, I'm unable to pass through the 'first' GPU (i.e. the one that initialises during boot to display the unRAID startup output), even if no display is connected and even if I try to manually load the GPU's vbios from a 'rom' file. Second thing, which is something that plagued me when running Windows Server natively on this same hardware and part of the reason I tried out unRAID in the first place: my
  12. No displays attached. I've tried combinations of multiple GPUs attached to a single VM and separate VMs per GPU. I always hit a problem trying to pass through the first GPU, i.e. the one unRAID itself initialises. I'm RDP'ing into the VM. I currently have one Win 10 VM with 2 GPUs attached to it. The 'main' GPU is just sitting in the machine, doing nothing useful. I'm cryptomining with the GPUs, running Plex, using the box as a NAS and also running Server 2016 Essentials with a separate Win 10 VM for a bit of dev work. As long as I don't thrash the CPU, this machine
  13. Hi Steve, I've concentrated my efforts on OVMF, although I did try to fire up a few SeaBIOS-based VMs, with zero success. I've just spent the last 20 minutes thinking about buying PCIe risers and a sacrificial Nvidia GT710 or other basic adapter, just so I can pass through the 3 GPUs that I actually care about. It would be a cheaper option than changing motherboard and CPU. That being said, my board and CPU do need replacing at some point. If I push all cores on the CPU, the system powers off. I've already replaced the PSU, but I guess it's probably capacitors that
  14. I've tried the different methods ... dump the BIOS from the machine (one common way to do this is sketched after this list) ... dump it from another machine using GPU-Z and edit it with a hex editor. With an ASUS Sabertooth 990FX r1 and AMD FX8150, nothing will persuade the first GPU to pass through to a Windows VM. I've genuinely spent many hours attempting it. Either the VM never starts and hogs the CPU ... never initialising the displays/GPUs, or I get the error 43. I've tried different Windows client versions, including pre-Creators Update Win 10, Win 10 Enterprise and Windows Server 2016. I've resigned myself that of 3 GPU
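
For the CPU-usage discrepancy in post 2, one quick cross-check from an unRAID terminal is to compare the Docker daemon's per-container figures against a host-level process view. This is only a generic sketch: the container name lancache-bundle is taken from the posts above, everything else is standard Docker/Linux tooling.

    # Point-in-time CPU/memory for the cache container, as reported by Docker itself
    docker stats --no-stream lancache-bundle

    # Host-level view; in htop, press F4 and filter for "nginx" to see the cache's worker processes
    htop

If docker stats stays low while the dashboard shows every core pegged, the load may be coming from something outside the container rather than from lancache itself.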
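
Post 3 sets CACHE_SLICE_SIZE through the unRAID GUI's custom-variable form. A minimal command-line sketch of the same idea follows, assuming the bundle passes the variable through to nginx's slice module the way the upstream lancache images do. The 8m value, the paths and the image name (upstream lancachenet/monolithic rather than the exact bundle image used above) are illustrative only, since the post is truncated before the value actually used.

    # Larger slices mean fewer ranged reads per download; lancache's default is 1m.
    # Note: changing the slice size generally invalidates the existing cache contents.
    docker run -d --name lancache-bundle \
      -e CACHE_SLICE_SIZE=8m \
      -v /mnt/user/appdata/lancache:/data/cache \
      -p 80:80 -p 443:443 \
      lancachenet/monolithic:latest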
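
For the negotiated link speeds mentioned in post 6, two standard ways to read them from a terminal (the device name sdb is just a placeholder):

    # "SATA Version is: ... (current: 3.0 Gb/s)" shows the speed the drive actually negotiated
    smartctl -i /dev/sdb | grep -i "sata version"

    # The kernel log also records the negotiation per port at boot / hotplug time
    dmesg | grep -i "sata link up"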
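
For the SMART counters in post 8, the raw values can be re-checked periodically from a terminal; exact attribute names vary slightly by vendor, and the device name is again a placeholder.

    # UDMA CRC errors usually implicate the cable or connector rather than the disk surface
    smartctl -A /dev/sdc | grep -Ei "udma_crc|raw_read"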
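
The "dump the BIOS from the machine" approach mentioned in post 14 is commonly done through sysfs. A hedged sketch follows, assuming a GPU at PCI address 01:00.0 (confirm with lspci) and that nothing is actively driving the card while the copy is taken; the output path is illustrative. The resulting file is what a <rom file='...'/> element inside the GPU's hostdev entry in the VM XML points at. Be aware that dumping the primary boot GPU this way can return a shadow copy already modified during POST, which may be part of why the first GPU is so stubborn here.

    # Identify the GPU's PCI address first
    lspci | grep -i vga

    # Expose the option ROM, copy it out, then hide it again
    echo 1 > /sys/bus/pci/devices/0000:01:00.0/rom
    cat /sys/bus/pci/devices/0000:01:00.0/rom > /mnt/user/isos/vbios/gpu.rom
    echo 0 > /sys/bus/pci/devices/0000:01:00.0/rom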