[Support] PTRFRLL - Docker images


Recommended Posts

Hi, this is a great Docker and I've been mining with a 1070. I now have a spare RTX 2070 that usually hits 43 MH/s on Windows, but here I'm only able to get 37 MH/s. Is there any way I can OC my card by undervolting and changing the memory settings?

 

Thank you!

On 5/18/2021 at 5:03 PM, PTRFRLL said:

 

Thanks for the link to NSFMiner; I hadn't seen that container before. I'll see if I can integrate the OC capabilities into this image as well.

Just a heads up if you're still looking at doing this - I can confirm that the NSFOC container documentation is accurate: it only works with driver version 460.73.01. The latest production branch, 460.80, does not work and you will get 'X server' errors.

4 hours ago, birdwatcher said:

Just a heads up if you're still looking at doing this - I can confirm that the NSFOC container documentation is accurate: it only works with driver version 460.73.01. The latest production branch, 460.80, does not work and you will get 'X server' errors.

But with 460.73.01 I can adjust the memory frequency?

On 5/24/2021 at 11:17 AM, jvlarc said:

Hi, this is a great Docker and I've been mining with a 1070. I now have a spare RTX 2070 that usually hits 43 MH/s on Windows, but here I'm only able to get 37 MH/s. Is there any way I can OC my card by undervolting and changing the memory settings?

 

Thank you!

 

All you can do in Unraid is limit the power with an nvidia-smi command: 'nvidia-smi -pl xx', where xx is the power limit in watts.

 

This is why I moved my setup to a Windows VM for all cards other than the one I use for Plex transcoding.
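
For example, to cap GPU 0 at 150 W (the index and wattage here are placeholders; pick a value that suits your card):

nvidia-smi -i 0 -pl 150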

On 5/18/2021 at 10:03 AM, PTRFRLL said:

 

Thanks for the link to NSFMiner; I hadn't seen that container before. I'll see if I can integrate the OC capabilities into this image as well.

Any luck? I love your container for my P2000, but I'm running a VM for my 2070 Super.

On 5/27/2021 at 10:06 AM, mcai3db3 said:

 

All you can do in Unraid is limit the power with an nvidia-smi command: 'nvidia-smi -pl xx', where xx is the power limit in watts.

 

This is why I moved my setup to a Windows VM for all cards other than the one I use for Plex transcoding.

This is incorrect, and it's why I mentioned the NSFminerOC image and the specific Nvidia driver.

I can console into that container and run this for my 3090. I can then shut down that container, start this T-Rex container, and the OC will persist. It is only lost at reboot:

nvidia-smi -pl 270 && nvidia-settings -a [gpu:0]/GPUPowerMizerMode=1 && nvidia-settings -a [gpu:0]/GPUGraphicsClockOffsetAllPerformanceLevels=-250 && nvidia-settings -a [gpu:0]/GPUMemoryTransferRateOffsetAllPerformanceLevels=1000 
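
The same one-liner broken out with comments (the values are what I use on my 3090; adjust per card):

# cap board power at 270 W
nvidia-smi -pl 270
# prefer maximum performance PowerMizer mode
nvidia-settings -a [gpu:0]/GPUPowerMizerMode=1
# core clock offset of -250 MHz (underclock)
nvidia-settings -a [gpu:0]/GPUGraphicsClockOffsetAllPerformanceLevels=-250
# memory transfer rate offset of +1000
nvidia-settings -a [gpu:0]/GPUMemoryTransferRateOffsetAllPerformanceLevels=1000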


  

On 6/4/2021 at 8:24 AM, birdwatcher said:

This is incorrect, and it's why I mentioned the NSFminerOC image and the specific Nvidia driver.

I can console into that container and run this for my 3090. I can then shut down that container, start this T-Rex container, and the OC will persist. It is only lost at reboot:

nvidia-smi -pl 270 && nvidia-settings -a [gpu:0]/GPUPowerMizerMode=1 && nvidia-settings -a [gpu:0]/GPUGraphicsClockOffsetAllPerformanceLevels=-250 && nvidia-settings -a [gpu:0]/GPUMemoryTransferRateOffsetAllPerformanceLevels=1000 

 

Apologies, I hadn't read the responses when I made my post.

 

I looked at the Docker you mentioned, but had several issues getting it to run due to driver incompatibilities (as you mentioned). Additionally, it doesn't appear to let me use the GPU in my Plex Docker.

 

I think it is also far from ideal to need to install a docker for each card just to configure these settings.

 

I'm still running this PTRFRLL Docker, as it allows me to mine with my Plex card, but for the rest of my cards, in my opinion, it is far more advantageous to run them in a VM at this time.

 

It would be quite magical if OC settings could be included in this Docker, as I could drop this resource-intensive VM and afford to use a much heartier GPU for Plex/tdarr.


Any way to add an email to this? I'm trying to use Nanopool, which requires an email to be passed through. This is from the Nanopool wiki.

 

Make sure your ETH-nanopool.bat file looks like this:

t-rex.exe -a ethash -o stratum+tcp://eth-eu1.nanopool.org:9999 -u YOUR_WALLET_ADDRESS.YOUR_WORKER_NAME/YOUR_EMAIL -p x

Where:

YOUR_WALLET_ADDRESS - your valid Ethereum address

YOUR_WORKER_NAME - a simple short worker name (like worker01). Optional.

YOUR_EMAIL - your email address for notifications. Optional.

12 hours ago, sittingmongoose said:

Any way to add an email to this? I'm trying to use Nanopool, which requires an email to be passed through. This is from the Nanopool wiki.

 

Make sure your ETH-nanopool.bat file looks like this:

t-rex.exe -a ethash -o stratum+tcp://eth-eu1.nanopool.org:9999 -u YOUR_WALLET_ADDRESS.YOUR_WORKER_NAME/YOUR_EMAIL -p x

Where:

YOUR_WALLET_ADDRESS - your valid Ethereum address

YOUR_WORKER_NAME - a simple short worker name (like worker01). Optional.

YOUR_EMAIL - your email address for notifications. Optional.

 

The WALLET variable in the Docker template is passed to the -u flag of T-Rex, so just enter everything in that field:

 

(screenshot: the container edit page with the full string entered in the WALLET field)
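
For Nanopool, that field would then contain something like this (wallet address, worker name, and email are placeholders):

YOUR_WALLET_ADDRESS.worker01/you@example.com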


Came to try this after having no success with power control in PhoenixMiner. I guess it might be the same problem here.

 

My GTX 3080 will only run at full power, which overheats and is not efficient. I really REALLY hope someone can get overclocks to work, but I'd settle for a power limit if I could at least get that to work.

 

Based on what was posted above, I opened a terminal in Unraid (not the Docker) and typed the following:

 

nvidia-smi -i 0 -pl 240

 

The system reply was:

 

Power limit for GPU 00000000:21:00.0 was set to 240.00 W from 370.00 W.

Warning: persistence mode is disabled on device 00000000:21:00.0. See the Known Issues section of the nvidia-smi(1) man page for more information. Run with [--help | -h] switch to get more information on how to enable persistence mode.
All done.

 

However, when I then ran this T-Rex Docker, it showed usage of 291 W (see below):

 

Mining at eth-us-east.flexpool.io:5555, diff: 4.00 G
GPU #0: Gigabyte NVIDIA RTX 3080 - 87.29 MH/s, [T:88C, P:291W, F:100%, E:301kH/W], 6/6 R:0%
Shares/min: 3.068 (Avr. 2.25)
Uptime: 3 mins 51 secs | Algo: ethash | T-Rex v0.20.4
WD: 3 mins 52 secs, shares: 6/6

 

Did I miss something?

22 minutes ago, Ystebad said:

Came to try this after having no success with power control in PhoenixMiner. I guess it might be the same problem here.

 

My GTX 3080 will only run at full power, which overheats and is not efficient. I really REALLY hope someone can get overclocks to work, but I'd settle for a power limit if I could at least get that to work.

 

Based on what was posted above, I opened a terminal in Unraid (not the Docker) and typed the following:

 

nvidia-smi -i 0 -pl 240

 

The system reply was:

 

Power limit for GPU 00000000:21:00.0 was set to 240.00 W from 370.00 W.

Warning: persistence mode is disabled on device 00000000:21:00.0. See the Known Issues section of the nvidia-smi(1) man page for more information. Run with [--help | -h] switch to get more information on how to enable persistence mode.
All done.

 

However, when I then ran this T-Rex Docker, it showed usage of 291 W (see below):

 

Mining at eth-us-east.flexpool.io:5555, diff: 4.00 G
GPU #0: Gigabyte NVIDIA RTX 3080 - 87.29 MH/s, [T:88C, P:291W, F:100%, E:301kH/W], 6/6 R:0%
Shares/min: 3.068 (Avr. 2.25)
Uptime: 3 mins 51 secs | Algo: ethash | T-Rex v0.20.4
WD: 3 mins 52 secs, shares: 6/6

 

Did I miss something?


Run ‘nvidia-smi -pm 1’ first to enable persistence mode. Then run your power limit command as above. 
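
In other words, for GPU 0:

# enable persistence mode so the limit sticks
nvidia-smi -i 0 -pm 1
# then re-apply the power limit
nvidia-smi -i 0 -pl 240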


@mcai3db3 - thanks, that's what I missed. So I guess I have to run that each time I restart Unraid as well? Still hoping for undervolting ability, as that would drop temps a lot, but at least I can run now - appreciate you.

 

Edit: is it possible to set fan speeds manually? I'd like 100% to keep the memory cool.

 

 

31 minutes ago, Ystebad said:

@mcai3db3 - thanks, that's what I missed. So I guess I have to run that each time I restart Unraid as well? Still hoping for undervolting ability, as that would drop temps a lot, but at least I can run now - appreciate you.

 

Edit: is it possible to set fan speeds manually? I'd like 100% to keep the memory cool.

 

 

 

Unfortunately you can't do anything other than set a power limit natively, and there's nothing in this Docker that allows you to set anything else... currently.

 

@birdwatcher posted above about the NSFMiner Docker having these settings, but I had no joy getting that to work.

 

As far as running the script each time you restart Unraid goes, I just have it run as a script within the 'User Scripts' plugin, and tell it to run every time the server starts up.
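
As a sketch, the script body is just the commands from earlier in the thread (GPU index and wattage are placeholders; use whatever suits your card):

#!/bin/bash
# runs at server start via the User Scripts plugin
nvidia-smi -pm 1          # enable persistence mode
nvidia-smi -i 0 -pl 240   # power-limit GPU 0 to 240 W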

 

IMO the best route at the moment to maximize MH/efficiency/temps is to create a Windows VM and host your miner there. But there are plenty of pitfalls with VMs and IOMMU. I could only get 3 of my 5 PCIe slots to work in a VM, so I'm running a VM and a Docker, and crossing my fingers that PTRFRLL will manage to get fan/clock settings into this docker some day.

 

11 minutes ago, mcai3db3 said:

crossing my fingers that PTRFRLL will manage to get fan/clock settings into this docker some day.

 

I haven't had much time to dedicate to this, but it is still on my list. I hesitate to add it directly to this docker, as the current method requires a specific version of the Nvidia driver to be installed and I'd prefer not to force that decision on people (it also makes support a nightmare).

 

My current plan is to create a separate docker that contains the over/under-clocking options and persists those settings across all dockers. That way you could use it in conjunction with the T-Rex one or any others.

10 minutes ago, PTRFRLL said:

 

I haven't had much time to dedicate to this, but it is still on my list. I hesitate to add it directly to this docker, as the current method requires a specific version of the Nvidia driver to be installed and I'd prefer not to force that decision on people (it also makes support a nightmare).

 

My current plan is to create a separate docker that contains the over/under-clocking options and persists those settings across all dockers. That way you could use it in conjunction with the T-Rex one or any others.

 

I mean, whatever you need to do to achieve it would be great. I have zero understanding of the technicalities involved in this, so I would appreciate anything you can conjure up!

On 6/10/2021 at 6:31 PM, PTRFRLL said:

 

I haven't had much time to dedicate to this, but it is still on my list. I hesitate to add it directly to this docker, as the current method requires a specific version of the Nvidia driver to be installed and I'd prefer not to force that decision on people (it also makes support a nightmare).

 

My current plan is to create a separate docker that contains the over/under-clocking options and persists those settings across all dockers. That way you could use it in conjunction with the T-Rex one or any others.

Hello @PTRFRLL, thank you very much for your Docker. So far it has been working fine for me, using NsfminerOC to set the overclocks. I just watched a YouTube video showing that T-Rex miner can apply its own overclock settings; I'm not sure if you've looked at that, but maybe you could add it to the new Docker you're planning to make. The video was done on Windows, so I'm not sure if it will work on Unraid, but it could be something to look at.

 

Regards,

Mortifer


This may have been mentioned before, but I'll state it here: first set all your OC parameters with the nsfminerOC container (one per GPU), start it up, then stop it, and then use this T-Rex container to mine with the previously applied settings. It invokes nvidia-smi to do the OC and fan control, so the settings stay applied even after you stop the container. The only caveat is that you need the 460.73.01 Nvidia driver in Unraid, or else nsfminerOC doesn't work, at least last time I checked. Works like a charm for me though: I'm getting ~120 MH/s with 2x 3060 Tis @ 120 W each with the OC applied from nsfminerOC.
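
The sequence boils down to something like this (container names are placeholders for whatever you called them in Unraid):

# apply OC/fan settings, one container per GPU
docker start nsfminerOC-gpu0 nsfminerOC-gpu1
# once the settings have been applied, stop them again
docker stop nsfminerOC-gpu0 nsfminerOC-gpu1
# start the miner; the clocks/fan settings persist until reboot
docker start trex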

On 7/23/2021 at 8:18 AM, nimaim said:

This may have been mentioned before, but I'll state it here: first set all your OC parameters with the nsfminerOC container (one per GPU), start it up, then stop it, and then use this T-Rex container to mine with the previously applied settings. It invokes nvidia-smi to do the OC and fan control, so the settings stay applied even after you stop the container. The only caveat is that you need the 460.73.01 Nvidia driver in Unraid, or else nsfminerOC doesn't work, at least last time I checked. Works like a charm for me though: I'm getting ~120 MH/s with 2x 3060 Tis @ 120 W each with the OC applied from nsfminerOC.

 

@nimaim So I've managed, via a user script, to set the power limit for my 3080 at array startup. However, if using this container allows more control, that would be nice - but if I understand you correctly, I have to run it once for each card, then stop it/them, and then start the actual mining Docker after that? Do you automate this? Seems clunky, but if it works...

 

I need to add a second card to my server and it's different from the first one, so I'm leery of how to get the power settings running correctly for each card.

On 7/26/2021 at 11:24 AM, Ystebad said:

 

@nimaim So I've managed, via a user script, to set the power limit for my 3080 at array startup. However, if using this container allows more control, that would be nice - but if I understand you correctly, I have to run it once for each card, then stop it/them, and then start the actual mining Docker after that? Do you automate this? Seems clunky, but if it works...

 

I need to add a second card to my server and it's different from the first one, so I'm leery of how to get the power settings running correctly for each card.

Sorry for the late reply, I don't check this forum much. This Docker container does not allow setting any OC. You have to use something that invokes nvidia-smi / nvidia-settings and still works against the host. Currently the only thing that works is the nsfminerOC container (no longer supported, by the way) together with the 460.73.01 driver on the host (also no longer supported; you'd have to manually install it somehow, since the nvidia-driver plugin does not offer this old version anymore). I was lucky to have had it installed before updating the plugin, so I'm still on it.

 

To answer your questions: if you get all this set up, yes, you'd start it once per GPU with settings for each, and then stop it. Once you have one container set up, you can just copy it to a second one as a template and change the GPU ID and settings for that GPU (you can get the GPU ID using the nvidia-smi command in the Unraid terminal). No, I do not automate it; it does not take much time to manually start the two nsfminerOC containers, stop them, and then start this T-Rex one, especially when my server is up for weeks/months at a time.
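
For reference, listing the GPUs with their indexes and UUIDs looks like this (the output is only a sketch; your model strings and UUIDs will differ):

nvidia-smi -L
# GPU 0: NVIDIA GeForce RTX 3060 Ti (UUID: GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx)
# GPU 1: NVIDIA GeForce RTX 3060 Ti (UUID: GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx)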

 

I've posted a screenshot of my 2x 3060 Tis mining in Unraid through this container for about a week; no issues, as you can see. My 3060 Ti FE is my main card inside the case, hence the higher temps, with an Aorus 3060 Ti connected externally. My OC settings are -500 core, +2400 memory, 120 W (50%) power limit, fans at 60%. YMMV.
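
In terms of the raw commands discussed earlier in the thread, those settings correspond roughly to the following per GPU (a sketch only; the fan-control attributes are my assumption, and like everything else here this needs the 460.73.01 driver):

# 120 W power cap
nvidia-smi -i 0 -pl 120
# -500 core clock offset
nvidia-settings -a [gpu:0]/GPUGraphicsClockOffsetAllPerformanceLevels=-500
# +2400 memory offset
nvidia-settings -a [gpu:0]/GPUMemoryTransferRateOffsetAllPerformanceLevels=2400
# take manual control of the fans and set them to 60%
nvidia-settings -a [gpu:0]/GPUFanControlState=1 -a [fan:0]/GPUTargetFanSpeed=60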

 

(screenshot: both 3060 Tis mining through this container)


Thanks for all the hard work on this project! I just wanted to voice support for being able to tweak the OC settings beyond just the power limit. As previous users mentioned, it's no longer possible to (easily) install the old Nvidia driver that works with nsfminerOC.

 

Right now I have a 3060 Ti running great through the T-Rex Docker, but I'm only getting around 52.24 MH/s on ethash. Of course the card is capable of slightly more, but unfortunately my motherboard is old and does not have IOMMU support, so I cannot spin up a VM, pass through the GPU, and OC it that way.

It sounds really tricky, but if you are able to get OC settings working, that would be sublime for my use case. :)


Does anyone know how to restrict this to only use a certain GPU? I had one GPU and it was working fine, but I added a second one and I do not want it used by this Docker, as I plan to use it for VM passthrough.

 

I noticed after adding the second GPU that the devices showed 0,1, and using nvidia-smi I can see that device 1 is the one I want to use (NOT zero). So I tried "devices": "1" within the GUI settings section. That didn't work.

 

Is there something in the docker edit itself I have to change?

7 minutes ago, Ystebad said:

Does anyone know how to restrict this to only use a certain GPU? I had one GPU and it was working fine, but I added a second one and I do not want it used by this Docker, as I plan to use it for VM passthrough.

 

I noticed after adding the second GPU that the devices showed 0,1, and using nvidia-smi I can see that device 1 is the one I want to use (NOT zero). So I tried "devices": "1" within the GUI settings section. That didn't work.

 

Is there something in the docker edit itself I have to change?

Add the NVIDIA_VISIBLE_DEVICES variable to the Docker container and specify the GUID of the GPU you want to use (see the Specify GPU section in the first post).
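
For example (the UUID below is a placeholder; use the one reported for your own card):

NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx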

 

 

8 hours ago, PTRFRLL said:

Add the NVIDIA_VISIBLE_DEVICES variable to the Docker container and specify the GUID of the GPU you want to use (see the Specify GPU section in the first post).

 

 

Thanks - I found that shortly after I posted and was going to update my post... However, despite searching Reddit and Google, I can't find how to determine the actual GUID to put into that variable. Such a basic question, but I've been looking around in Unraid for a while now and can't see where to obtain this magical information.

 

EDIT: in case someone else has this problem - thanks to u/Xionous_ on Reddit, the GUID is found by opening the Nvidia Plugin. Unraid is anything but intuitive. Appreciated.
