FlamongOle Posted March 17, 2021 (edited)

This container is no longer maintained or supported as of 23.07.2021. Fork it, modify it, do whatever you need with it.

Docker container for Ethereum mining with CUDA (nsfminer) with Nvidia OC capabilities. This container was inspired by docker-nsfminer, which in turn was inspired by docker-ethminer. It uses nsfminer together with the Nvidia driver's OC capabilities, allowing over- and underclocking of Nvidia GPUs for Ethereum mining in a Docker container. Use one Docker template per GPU.

GitHub project: https://github.com/olehj/docker-nsfminerOC

Mining-specific questions about workers and wallets will no longer be answered; there are plenty of guides and information out there. Please Google it. Support here is limited to the Docker container itself, and most of it is answered in this post below.

Requirements
- Unraid 6.9+
- NVIDIA drivers for your GPU installed*
- Docker set to run in privileged mode; this is required for overclocking and for setting the driver to persistence mode.
- GPU with at least 5 GB of memory (the current requirement is above 4.2 GB).

*) Verified working Nvidia driver: v460.73.01 (Production Branch). v465.X does not allow overclocking with the Unraid/Docker combo for unknown reasons.

Installation
Install this container using CA (Community Applications): search for NsfminerOC and install!
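For reference, a manual launch roughly corresponds to the sketch below. This is illustrative only (the CA template is the supported path): the image name and variable names are taken from this thread, the wallet address is a placeholder, and `--runtime=nvidia` is an assumption based on typical Unraid Nvidia setups. The command is built into a string and printed so you can review it before running it.

```shell
# Illustrative sketch only -- review the printed command, then run it manually
# (or drop the final 'echo' indirection). Values are the template defaults.
GPU_ID=0
CMD="docker run -d --name nsfminerOC-gpu${GPU_ID} \
  --privileged --runtime=nvidia \
  -e NSFMINER_GPU=${GPU_ID} \
  -e NSFMINER_GPUPOWERLIMIT=150 \
  -e NSFMINER_ETHADDRESS=0xYourWalletAddress \
  -e NSFMINER_WORKERNAME=unraid-worker \
  -e NSFMINER_ADDRESS1=eu1.ethermine.org -e NSFMINER_PORT1=5555 \
  -e NSFMINER_ADDRESS2=us1.ethermine.org -e NSFMINER_PORT2=5555 \
  olehj/docker-nsfmineroc:latest"
echo "$CMD"
```

One container per GPU: duplicate this with a different GPU_ID and container name for each card.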
Configuration

NSFMINER_GPU (default: 0): GPU ID to use (open a terminal and check "nvidia-smi").
NSFMINER_GPUPOWERLIMIT (default: 150): Power limit for the GPU in watts (set this as low as you can while keeping the highest possible hashrate).
NSFMINER_POWERMIZER (default: 2): PowerMizer performance level (0=adaptive, 1=max performance, 2=auto).
NSFMINER_GPUGFXCLOCKOFFSET (default: 0): GPU graphics clock offset (under- or overclock your GPU, in MHz).
NSFMINER_GPUMEMCLOCKOFFSET (default: 0): GPU memory clock offset (overclock your memory in MHz; NB: these values are often double what they are shown as in Windows, so crank it up).
NSFMINER_HWMON (default: 2): Feedback level from nsfminer (0=off, 1=temp+fan, 2=temp+fan+power).
NSFMINER_TRANSPORT (default: stratum1+ssl): Transport for the worker.
NSFMINER_ETHADDRESS (default: 0x516eaf4546BBeA271d05A3E883Bd2a11730Ef97b): Your worker's Ethereum address (or mine an hour or so for me if you want to support my Docker work ;)
NSFMINER_WORKERNAME (default: unraid-worker): Worker name.
NSFMINER_ADDRESS1 (default: eu1.ethermine.org): Address 1 for the worker; both addresses must be set.
NSFMINER_ADDRESS2 (default: us1.ethermine.org): Address 2 for the worker; both addresses must be set.
NSFMINER_PORT1 (default: 5555): Port for address 1.
NSFMINER_PORT2 (default: 5555): Port for address 2.
NSFMINER_GPUFANCONTROLL (default: 0): GPU fan control; 0 runs auto and the other fan settings are ignored. The GPU MUST have exactly 2 fan controllers available, otherwise the container will fail when this is used.
NSFMINER_GPUFAN1 (default: 0): FAN ID 1 of the GPU (check fan IDs with "nvidia-settings -q fans" in a terminal).
NSFMINER_GPUFANSPEED1 (default: 100): Speed of FAN ID 1, in percent.
NSFMINER_GPUFAN2 (default: 1): FAN ID 2 of the GPU (check fan IDs with "nvidia-settings -q fans" in a terminal).
NSFMINER_GPUFANSPEED2 (default: 100): Speed of FAN ID 2, in percent.

Running
View the logs for worker output.

Overclocking example
Some cards, such as Quadro cards, will report that they are read-only when you try to overclock them. This is normal behavior; they are factory locked.

For on-demand overclocking, open "Logs" to watch the hashrates while the container is running, then open "Console" to enter tuning values manually and find the optimal mining values for your card. When all values are found, store them in the variables on the container's edit page in Unraid. The GPU ID is set to "0" here; adjust yours accordingly. The examples below are for a GTX 1070.

Set the PowerMizer mode (0=adaptive, 1=max performance, 2=auto):
nvidia-settings -a [gpu:0]/GPUPowerMizerMode=1

Adjust the GPU graphics clock offset on all performance levels; crank it up until it starts giving errors, then back off. If you are on a 3000-series card, you might want to underclock this one instead and save on power consumption (see the example settings for other cards below):
nvidia-settings -a [gpu:0]/GPUGraphicsClockOffsetAllPerformanceLevels=200

Adjust the GPU memory clock offset; crank this one up until it gives errors, crashes, or the hashrate decreases, then back it off to a stable value:
nvidia-settings -a [gpu:0]/GPUMemoryTransferRateOffsetAllPerformanceLevels=800

Finally, adjust the power limit. Decrease it as far as possible until you hit the point where the hashrate falls. Optimally, calculate how much power vs. hashrate you can squeeze out; some fine tuning with slightly more or fewer clocks and power draw can give you better profit. A slightly lower hashrate at a lower power draw may be more profitable!
nvidia-smi -i 0 -pl 135

Other GPU value examples

GPU             PowerMizer  GPU GFX        GPU MEM    Power limit  Hashrate (~)    Effective score
--------------------------------------------------------------------------------------------------
RTX 3080        1           (-300)-(-200)  2300-2500  230-235      97.0-98.5 MH/s  0.421-0.419 *
RTX 3070        1           (-600)-(-550)  2300-2400  130-135      60.0-60.2 MH/s  0.462-0.446 *
GTX 1070 stock  1           200            800        135          28 MH/s         0.207
GTX 1070 OC     1           100            400        135          28 MH/s         0.207
Quadro P2000    1           0              0          65           15.6 MH/s       0.24

The effective score is the hashrate divided by the power limit (MH/s per watt), e.g. 97.0 / 230 = 0.421 for the RTX 3080.

Some cards might have higher factory clocks in the VBIOS, like these GTX 1070s: one of them has an OC-optimized VBIOS, the other a standard VBIOS with less cooling. The tuning needed to reach these hashrates will vary, so don't copy this table for your own input; it is just an example and at most a rough reference for where your values should land. The values might also need slight re-tuning after a while, as the memory chips may lose some of their top performance, the ambient temperature rises, etc.

*) The effective score of the RTX 3000 cards shows that it can be better to run at a slightly lower hashrate and power limit than to boost everything up, even by just 5 watts.

Fan curves
You can also play around with the fan curves; setting them low to reduce noise can also impact the hashrate. You may even want to duplicate your container as an "optimized run mode" and a "night mode". Adjusting fan curves might require 2 fan controllers on the graphics card; if the container fails and the GPU has only one controller, use the "auto" setting (default).
Adjust the fan control state (0=auto, 1=manual):
nvidia-settings -a [gpu:0]/GPUFanControlState=1

Adjust the speed for fan 1 (same procedure for fan 2, just replace the number with the other fan ID), value in %:
nvidia-settings -a [fan:0]/GPUTargetFanSpeed=80

Setting up multiple cards/containers

root@Odin:~# nvidia-smi
Tue Apr  6 15:06:36 2021
      +-------------------------------+----------------------+----------------------+
      | NVIDIA-SMI 460.67             | Driver Version: 460.67  CUDA Version: 11.2 |
      |-------------------------------+----------------------+----------------------+
      | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
      | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
      |                               |                      |               MIG M. |
      |===============================+======================+======================|
ID -> |   0  GeForce GTX 1070     On  | 00000000:04:00.0 Off |                  N/A |
      | 51%   76C    P2  135W / 135W  |  4631MiB /  8119MiB  |    100%      Default |
      |                               |                      |                  N/A |
      +-------------------------------+----------------------+----------------------+
ID -> |   1  Quadro P2000         On  | 00000000:83:00.0 Off |                  N/A |
      | 92%   81C    P0   65W /  75W  |  4862MiB /  5059MiB  |    100%      Default |
      |                               |                      |                  N/A |
      +-------------------------------+----------------------+----------------------+
...

The output of "nvidia-smi" in the terminal shows each GPU ID; this is the value you enter in the NSFMINER_GPU variable. If you have multiple GPUs:

1. Install the first NsfminerOC container via CA.
2. Configure the first NsfminerOC.
3. Click "Add container" and select one of your NsfminerOC templates.
4. Configure your second container.
5. Repeat steps 3-4 for the third, fourth, etc.

Edited July 23, 2021 by olehj
End of support
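The per-GPU setup can also be scripted instead of reading the table by eye. A small sketch, assuming standard `nvidia-smi` query flags; the sample output is hard-coded here from the listing in the post above, while on a live system you would capture it from the real command:

```shell
# On a live host, capture the real list instead of this hard-coded sample:
#   SMI_OUT="$(nvidia-smi --query-gpu=index,name --format=csv,noheader)"
SMI_OUT="0, GeForce GTX 1070
1, Quadro P2000"

# The index column is the value for each container's NSFMINER_GPU variable.
echo "$SMI_OUT" | while IFS=, read -r idx name; do
  echo "GPU ${idx} (${name# }) -> set NSFMINER_GPU=${idx} in container nsfminerOC-gpu${idx}"
done
```

The container names are just illustrative; the point is one container, and one NSFMINER_GPU value, per listed index.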
FlamongOle (Author) Posted March 17, 2021 (edited)

Thanks to @ich777 for the Nvidia driver download script for complete driver version support.

Edited April 2, 2021 by olehj
sWampyGround Posted March 31, 2021

When I try to install it, I get:

docker: Error response from daemon: pull access denied for olehj/nsfmineroc, repository does not exist or may require 'docker login': denied: requested access to the resource is denied. See 'docker run --help'.
RazorX Posted March 31, 2021

31 minutes ago, sWampyGround said:
When I try to install it, I get: docker: Error response from daemon: pull access denied for olehj/nsfmineroc, repository does not exist or may require 'docker login': denied: requested access to the resource is denied. See 'docker run --help'.

Yeah, I can't install the Docker container either.
Natebur Posted March 31, 2021

I had the same issue. Try replacing the repository with olehj/docker-nsfmineroc:latest
FlamongOle (Author) Posted March 31, 2021

47 minutes ago, Natebur said:
I had the same issue. Try replacing the repository with olehj/docker-nsfmineroc:latest

You are right! The "docker-" part was missing from the repository name. The issue should be fixed now, but it has to be pulled from CA again before the change applies. In the meantime, it is safe to change the repository from "olehj/nsfmineroc:latest" to "olehj/docker-nsfmineroc:latest".
Natebur Posted March 31, 2021

I am having a different issue now, though. Have you seen this before?
FlamongOle (Author) Posted March 31, 2021

3 minutes ago, Natebur said:
I am having a different issue now, though. Have you seen this before?

Yes, it might be that you are running a different Nvidia driver version. Check that you DON'T run the "latest" Nvidia driver, but set it statically to 460.67.
Natebur Posted March 31, 2021

4 minutes ago, olehj said:
Yes, it might be that you are running a different Nvidia driver version. Check that you DON'T run the "latest" Nvidia driver, but set it statically to 460.67.

Correct, that is how it's set up. I did upgrade to 460.67, and I have restarted the system to apply the new driver.
FlamongOle (Author) Posted March 31, 2021

6 minutes ago, Natebur said:
Correct, that is how it's set up. I did upgrade to 460.67, and I have restarted the system to apply the new driver.

Is there by any chance another Docker container with privileged rights, or a VM, using this GPU? The results might vary. I am running a Quadro P2000 fine myself in both Emby and nsfminerOC. I'm not sure what might be blocking this; mileage may vary from card to card and system to system.
Natebur Posted March 31, 2021

4 minutes ago, olehj said:
Is there by any chance another Docker container with privileged rights, or a VM, using this GPU? The results might vary. I am running a Quadro P2000 fine myself in both Emby and nsfminerOC. I'm not sure what might be blocking this; mileage may vary from card to card and system to system.

I had Plex using it, and the OS was booted into GUI mode. I have now booted the OS into standard mode and removed it from Plex as well. No VMs.
FlamongOle (Author) Posted March 31, 2021

Just now, Natebur said:
I had Plex using it, and the OS was booted into GUI mode. I have now booted the OS into standard mode and removed it from Plex as well. No VMs.

The GUI mode is likely the problem; it should run fine alongside Plex (I use mine with Emby without problems, and that's essentially the same thing).
Natebur Posted March 31, 2021

2 minutes ago, olehj said:
The GUI mode is likely the problem; it should run fine alongside Plex (I use mine with Emby without problems, and that's essentially the same thing).

I've already booted into non-GUI mode and the issue continues. I also removed the GPUstat plug-in in case that was causing it, but the issue remains.
FlamongOle (Author) Posted March 31, 2021

6 minutes ago, Natebur said:
I've already booted into non-GUI mode and the issue continues. I also removed the GPUstat plug-in in case that was causing it, but the issue remains.

GPUstat and Emby (and probably Plex) should cause no issues with this container. Open a regular Unraid terminal and verify that your GPU is set to the correct GPU ID, and check whether there are any hints in the syslog. Sorry, I don't have any other ideas at the moment.
ich777 Posted March 31, 2021

2 hours ago, olehj said:
check that you DON'T run the "latest" nvidia driver, but set it static to 460.67

Wouldn't it be better to check which driver version is installed on the host on container start/restart, and then install the appropriate driver version in the container? I also do this in my DebianBuster-Nvidia container, so you always have the same driver version installed in the container and on the host.
Natebur Posted March 31, 2021

2 hours ago, olehj said:
GPUstat and Emby (and probably Plex) should cause no issues with this container. Open a regular Unraid terminal and verify that your GPU is set to the correct GPU ID, and check whether there are any hints in the syslog. Sorry, I don't have any other ideas at the moment.

What format should the GPU ID be in?
FlamongOle (Author) Posted March 31, 2021

3 minutes ago, ich777 said:
Wouldn't it be better to check which driver version is installed on the host on container start/restart, and then install the appropriate driver version in the container? I also do this in my DebianBuster-Nvidia container, so you always have the same driver version installed in the container and on the host.

Probably, but the latest on Unraid is now 465.X, and that driver isn't available in the Ubuntu repo, the graphics PPA, or at nvidia.com. The other problem is the CUDA version: nsfminer runs CUDA 11, and older Nvidia drivers do not support that as far as I remember, though I'm not sure which CUDA version 455 is at (the lowest offered by the Unraid Nvidia plugin). This container probably has a limited time before mining isn't even meaningful anymore, maybe as early as July. Not sure I'll bother supporting multiple versions with this one; not a priority anyway.
FlamongOle (Author) Posted March 31, 2021

3 minutes ago, Natebur said:
What format should the GPU ID be in?

0, 1, 2, etc.
ich777 Posted March 31, 2021

3 minutes ago, olehj said:
Probably

That would only be a few lines of code...

5 minutes ago, olehj said:
that driver isn't available in the Ubuntu repo, the graphics PPA, or at nvidia.com

You can always get them from here; I also pull from here to build the drivers for the Nvidia-Driver plugin: Click

6 minutes ago, olehj said:
This container probably has a limited time before mining isn't even meaningful anymore, maybe as early as July. Not sure I'll bother supporting multiple versions with this one; not a priority anyway.

Understood, just thought this would be helpful.
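ich777's suggestion could look something like this at container start. A hedged sketch: the `nvidia-smi` query flag is standard, but the download URL layout is an assumption about NVIDIA's public mirror (the link behind "Click" above), so verify it before relying on it; the version is hard-coded here for illustration.

```shell
# On the real host/container this would be queried, not hard-coded:
#   DRIVER_VERSION="$(nvidia-smi --query-gpu=driver_version --format=csv,noheader | head -n1)"
DRIVER_VERSION="460.67"   # sample value from this thread

INSTALLER="NVIDIA-Linux-x86_64-${DRIVER_VERSION}.run"
# Assumed mirror layout -- confirm against the link ich777 posted.
URL="https://download.nvidia.com/XFree86/Linux-x86_64/${DRIVER_VERSION}/${INSTALLER}"
echo "would fetch: ${URL}"
# wget -q "$URL" && sh "./${INSTALLER}" --silent --no-kernel-module
```

Running this at every container start keeps the container's user-space driver in lockstep with whatever the host happens to run.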
Natebur Posted March 31, 2021

4 minutes ago, olehj said:
0, 1, 2, etc.

From this screenshot, what should I put in the GPU variable?
FlamongOle (Author) Posted March 31, 2021

1 minute ago, Natebur said:
From this screenshot, what should I put in the GPU variable?

0
FlamongOle (Author) Posted March 31, 2021

2 minutes ago, ich777 said:
That would only be a few lines of code... You can always get them from here; I also pull from here to build the drivers for the Nvidia-Driver plugin: Click

If you have the code for copy&paste I'll do it 😛 I'm lazy
Natebur Posted March 31, 2021

2 minutes ago, olehj said:
0

That's what I thought. Here is the result of that.
ich777 Posted March 31, 2021

Just now, olehj said:
If you have the code for copy&paste I'll do it 😛 I'm lazy

Actually, yes... but I don't know how your container works. Anyway, it starts here at line 44 and ends at line 100, though I think you won't need a few of the lines in between.
FlamongOle (Author) Posted March 31, 2021

3 minutes ago, ich777 said:
Actually, yes... but I don't know how your container works. Anyway, it starts here at line 44 and ends at line 100, though I think you won't need a few of the lines in between.

I have everything squeezed into the Dockerfile, but I think I'll manage to shrink this and add it in as well.