PTRFRLL last won the day on November 5 2021

  1. That's odd that the NVIDIA_VISIBLE_DEVICES variable isn't working. Can you post your Docker config (omit any sensitive info)?
  2. I don't think you can mine ethash with a 750 Ti; you need more memory.
  3. Can you check your config.json and see what the value is set to? I don't use LHR and it seems to run just fine for me. I'm guessing the new lhr-autotune-interval param isn't set
  4. Ugh. Wonderful. Thanks for the heads up.
  5. I believe you need to adjust all the parameters in the config file that reference the second GPU. Any that take a comma-separated per-GPU value (dag-build-mode, mt, pl, etc.), e.g. "0,0", should be changed to remove the second number.
  6. I don't personally use the mt option so I can't really speak to getting it working. I thought someone else in this thread was able to get it working but after skimming back through, I can't find a definitive answer. Perhaps someone else can chime in to help. Otherwise, you could look at using this container (or copying/modifying it to add overclocking abilities to T-rex):
  7. In theory, just adding the -mt to the config should do it. What did you set as the value?
  8. You should be able to pass that entire URL, port included, via the "server" variable. This is what my "server" var looks like: stratum+ssl://
  9. My only guess would be something related to that latest driver (510.39.01). You might try downgrading that OR seeing if the new container build (3.8) helps at all
  10. Try this link (substitute your IP and PORT): https://IP:PORT/endpoint/@scrypted/core/public/#/
  11. I believe you add the -i flag to the command to specify the ID or index of the GPU you want. Use nvidia-smi to print all cards and grab the GPU ID: nvidia-smi -i 0 -pl 125 # assuming card 0 is the one you want
  12. I don't believe there is a way to sort at present. You could submit an issue on the project for consideration:
  13. I have not found a good way to include nvidia-settings and X11 into the container without requiring a specific version of nvidia drivers. I hesitate to do that as it would cause compatibility issues with many who use this container. I'm open to suggestions though.
  14. Yes, anything in /etc/pulseway will not persist after a reboot, so you should always edit the one in /boot/pulseway. That said, if you just want to test that your config.xml changes work, you could edit the /etc/pulseway file so you don't have to reboot after every change. Correct, just place the new version in that directory and it will be installed on the next boot. See #1. I've seen this warning as well, but as you mentioned, Pulseway is not available via NerdPack. I believe putting the package in /boot/extra is still the recommended way to install packages at boot, so you should be able to ignore this particular warning. Hope that helps!
  15. @repomanz I've got the container updated to use a new CUDA version: ptrfrll/nv-docker-trex:latest. Also, as @Rhomax pointed out, make sure you don't have the 'disable NVML' option set in your config.
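For anyone comparing against #1 and #8, this is roughly the shape of a working docker run for the container. Treat it as a sketch: the pool host/port are placeholders, and any variable names beyond NVIDIA_VISIBLE_DEVICES and "server" (both mentioned in the posts above) are assumptions to check against your own template.

```shell
# Example shape only; confirm variable names and ports against your template.
# pool.example.com:5555 is a placeholder, not a real pool.
docker run -d --name=trex \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e server=stratum+ssl://pool.example.com:5555 \
  ptrfrll/nv-docker-trex:latest
```

If NVIDIA_VISIBLE_DEVICES is set to a specific GPU rather than "all", it takes the index or UUID that nvidia-smi reports for that card.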
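To make #5 concrete, here's a hypothetical single-GPU config.json excerpt after the edit; a two-GPU file would have had values like "0,0" and "125,125". The parameter names are just the ones named in that post, so check which of them your file actually contains.

```json
{
  "dag-build-mode": "0",
  "pl": "125"
}
```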
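Expanding on #11, a quick way to list the card indices before setting the power limit; the query flags are standard nvidia-smi options, and index 0 / 125 W are just examples for your own values.

```shell
# List GPU index, name, and UUID to find the card you want
nvidia-smi --query-gpu=index,name,uuid --format=csv,noheader
# Then set the power limit (in watts) on that index
nvidia-smi -i 0 -pl 125
```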