mcai3db3

    Members · 56 posts

Posts posted by mcai3db3

  1. 10 minutes ago, PTRFRLL said:

     

    I haven't had much time to dedicate to this but it is still on my list. I hesitate to add it directly to this docker as the current method requires a specific version of nVidia drivers to be installed and I prefer to not force that decision on people (it also makes support a nightmare).

     

    My current plan is to create a separate docker that contains the over/under-clocking options which will persist those settings across all dockers. That way you could use that docker in conjunction with the T-rex one or any others.

     

    I mean, whatever you need to do to achieve it would be great. I have zero understanding of the technicalities involved in this, so I would appreciate anything you can conjure up!

  2. 31 minutes ago, Ystebad said:

    @mcai3db3 - thanks, that's what I missed.  So I guess I have to run that each time I restart unraid then as well?  Still hoping for undervolting ability as would drop temps a lot, but at least I can run - appreciate you.

     

    edit: is it possible to set fan settings manually?  would like 100% to keep memory cool

     

    Unfortunately you can't do anything other than set a power limit natively, and there's nothing in this Docker that lets you set anything else... currently.

     

    @birdwatcher posted above about the NSFMiner Docker having these settings, but I had no joy getting that to work.

     

    As far as running the script each time you restart Unraid goes, I just have it run as a script within the 'User Scripts' plugin, and tell it to run every time the server starts up.
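    For anyone wanting to replicate that, the User Scripts entry is just a tiny wrapper like the sketch below (the GPU index and wattage are examples from this thread, not anything the plugin ships with; adjust for your own cards):

    ```shell
    #!/bin/bash
    # Scheduled "At Startup of Array" in the User Scripts plugin.
    # Enable persistence mode so the limits stick until the next reboot.
    nvidia-smi -pm 1
    # Apply a power limit (in watts) to GPU index 0.
    nvidia-smi -i 0 -pl 240
    ```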

     

    IMO the best route at the moment to maximize MH/efficiency/temps is to create a Windows VM and host your miner there. But there are plenty of pitfalls with VMs & IOMMU. I could only get 3 of my 5 PCIe slots to work on a VM, so I'm running a VM and a Docker, and crossing my fingers that PTRFRLL will manage to get fan/clock settings into this docker some day.

     

  3. 22 minutes ago, Ystebad said:

    Came to try this after no success in Phoenix miner with power control.  Might be same problem here I guess.

     

    My RTX 3080 will only run at full power, which overheats and is not efficient.  I really REALLY hope someone can get overclocks to work, but I'd settle for a power limit if I could at least get it to work.

     

    Based on what was posted above, I opened a terminal in unraid (not the docker) and typed the following:

     

    nvidia-smi -i 0 -pl 240

     

    The system reply was:

     

    Power limit for GPU 00000000:21:00.0 was set to 240.00 W from 370.00 W.

    Warning: persistence mode is disabled on device 00000000:21:00.0. See the Known Issues section of the nvidia-smi(1) man page for more information. Run with [--help | -h] switch to get more information on how to enable persistence mode.
    All done.

     

    However when I then ran this t-rex docker it is showing use of 291W (see below):

     

    Mining at eth-us-east.flexpool.io:5555, diff: 4.00 G
    GPU #0: Gigabyte NVIDIA RTX 3080 - 87.29 MH/s, [T:88C, P:291W, F:100%, E:301kH/W], 6/6 R:0%
    Shares/min: 3.068 (Avr. 2.25)
    Uptime: 3 mins 51 secs | Algo: ethash | T-Rex v0.20.4
    WD: 3 mins 52 secs, shares: 6/6

     

    Did I miss something?


    Run 'nvidia-smi -pm 1' first to enable persistence mode, then run your power limit command as above.
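    Putting the two steps together, run from the Unraid host terminal (not inside the docker); the index and wattage below are the values from your post:

    ```shell
    # Enable persistence mode first so the limit is retained.
    nvidia-smi -i 0 -pm 1
    # Then apply the power limit in watts.
    nvidia-smi -i 0 -pl 240
    # Optionally confirm what was applied.
    nvidia-smi -i 0 --query-gpu=power.limit --format=csv
    ```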

  4.   

    On 6/4/2021 at 8:24 AM, birdwatcher said:

    This is incorrect and why I mentioned the NSFminerOC image and the certain Nvidia driver.

    I can console into that container and run this for my 3090. I can then shutdown that container and start this TREX container and the OC will persist. It will only be lost at reboot:

    nvidia-smi -pl 270 && nvidia-settings -a [gpu:0]/GPUPowerMizerMode=1 && nvidia-settings -a [gpu:0]/GPUGraphicsClockOffsetAllPerformanceLevels=-250 && nvidia-settings -a [gpu:0]/GPUMemoryTransferRateOffsetAllPerformanceLevels=1000 

     

    Apologies, I hadn't read the responses when I made my post.

     

    I looked at the docker you mentioned, but had several issues attempting to get it to run, due to driver incompatibilities (as you mentioned). Additionally, it doesn't appear to let me use the GPU in my Plex Docker.

     

    I think it is also far from ideal to need to install a docker for each card just to configure these settings.

     

    I'm still running this PTRFRLL Docker, as it allows me to mine with my Plex card, but for the rest of my cards, in my opinion, it is far more advantageous to run them in a VM at this time.

     

    It would be quite magical if OC settings could be included in this docker, as I could drop this resource intensive VM, and I could afford to use a much heartier GPU in Plex/tdarr.

  5. On 5/24/2021 at 11:17 AM, jvlarc said:

    Hi, this is a great docker and I've been mining with a 1070. I now have a spare RTX 2070 which can usually hit 43 MH/s on Windows, but I'm only able to get 37 MH/s here. Is there any way I can OC my card by undervolting and changing the memory settings?

     

    Thank you!

     

    All you can do in Unraid is limit the power with an nvidia-smi command: 'nvidia-smi -pl xx', where xx is the power limit in watts.

     

    This is why I moved my setup to a Windows VM. For all cards other than the one I use for Plex transcoding.

  6. Thank you for the guide @dboris, I'm about to try this myself. I was wondering what happens if I just load the default NH image rather than opening up the file and tweaking the BTC address. Is there no way to just update this in the OS itself?

     

    Apologies for the laziness/naivety, I'm just curious.

  7. I had the same issue. I replaced pretty much all my hardware since, and I haven’t seen the issue since. Idk but maybe it’s a network driver issue or something, all I know is that it annoyed me to the point of spending hundreds of dollars on a new mobo/cpu/ram. 

  8. Did you ever work this out? I'm having similar issues and it's obscenely frustrating. I've attached diagnostics, but basically my docker/plugins/main/dashboard pages either time out, or load exceptionally slowly. And several of my dockers are performing terribly too. On 6.8.3.

     

    I've tried several things, like reinstalling Unraid files, changing docker image, removing a bunch of plugins/dockers. And no improvement.

     

    Aug 9 12:35:45 xxx nginx: 2020/08/09 12:35:45 [error] 17148#17148: *20054 upstream timed out (110: Connection timed out) while reading upstream, client: xxx, server: , request: "GET /Docker HTTP/2.0", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "xx.unraid.net", referrer: "https://xx.unraid.net/Docker"

     

     

    Attachment: tower-diagnostics-20200809-1240.zip

  9. This is a crosspost, as I'm not sure if it's an Unassigned Devices issue, or a plugin issue (of which all of my plugins are Phaze plugins), so here goes:

     

    http://lime-technology.com/forum/index.php?topic=45807.msg465130#msg465130

     

    My setup is basically that I have 1 "unassigned device" SSD, and on that disk, I install my various plugins (sabnzbd et al.)

     

    But I'm having this issue where every single time I restart the server/array, the plugins all run their annoying "wizards" as they are failing to read the config folder held on my SSD. I can see in my logs that the SSD (sdd1) is not mounting until after the plugins have tried to install.

     

    So basically, does anyone have a suggestion on how to force phaze plugins to wait until the SSD is ready before trying to start up?  Is there a way of putting a forced delay in? (I'm not even sure if this'll work, given it could delay UD too).
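    To illustrate the forced-delay idea (purely a hypothetical sketch; the mount path, timeout, and `start_plugin` step are assumptions, not anything UD or the Phaze plugins actually provide), a start script could poll for the mount point before continuing:

    ```shell
    #!/bin/bash
    # Hypothetical helper: wait until a directory (e.g. a UD mount point)
    # exists, up to a timeout, then report success or failure.
    wait_for_mount() {
        local path="$1" tries="${2:-30}"   # default: wait up to ~30 seconds
        local i=0
        while [ ! -d "$path" ] && [ "$i" -lt "$tries" ]; do
            sleep 1
            i=$((i + 1))
        done
        [ -d "$path" ]   # exit status: 0 if the path appeared in time
    }

    # Usage sketch: wait_for_mount /mnt/disks/ssd && start_plugin
    ```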

     

    Thanks!

     

    Anyone have any suggestions? :(

     
  10. This is a crosspost, as I'm not sure if it's an Unassigned Devices issue, or a plugin issue (of which all of my plugins are Phaze plugins), so here goes:

     

    http://lime-technology.com/forum/index.php?topic=45807.msg465130#msg465130

     

    My setup is basically that I have 1 "unassigned device" SSD, and on that disk, I install my various plugins (sabnzbd et al.)

     

    But I'm having this issue where every single time I restart the server/array, the plugins all run their annoying "wizards" as they are failing to read the config folder held on my SSD. I can see in my logs that the SSD (sdd1) is not mounting until after the plugins have tried to install.

     

    So basically, does anyone have a suggestion on how to force phaze plugins to wait until the SSD is ready before trying to start up?  Is there a way of putting a forced delay in? (I'm not even sure if this'll work, given it could delay UD too).

     

    Thanks!

  11. Howdy all...

     

    I'm hoping this is a common issue, or me being stupid somehow, but here goes:

     

    My setup is basically that I have 1 "unassigned device" SSD, and on that disk, I install my various plugins (sabnzbd et al.)

     

    But I'm having this issue where every single time I restart the server/array, the plugins all run their annoying "wizards" as they are failing to read the config folder held on my SSD. I can see in my logs that the SSD (sdd1) is not mounting until after the plugins have tried to install.

     

    So basically, how do I force UD to mount this disk before the plugins try and run?

     

    Thanks!

     

    And thanks for keeping this plugin alive dlandon!

     

    p.s. both UnRAID and UD are completely up-to-date.

    By plugins, do you mean Plugins or Docker Applications?

     

    If Docker applications, then you have to stop and start the entire Docker service in order for any apps to see any mounts that UD performs (6.1.x). If you're on 6.2, then you can modify your volume mountings to use the new "slave" option, which should fix this up for you.
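    For context, the "slave" option mentioned there is Docker's bind-propagation flag on a volume mapping. A hypothetical example (the image name and paths are placeholders, not from this thread):

    ```shell
    # With :slave propagation, mounts the host performs after the container
    # starts (e.g. UD mounting the SSD) become visible inside the container.
    docker run -d --name sabnzbd \
      -v /mnt/disks/ssd/appdata:/config:slave \
      some/sabnzbd-image
    ```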

     

    Just plugins I'm afraid. I've never ventured into the world of Docker.

  12. Howdy all...

     

    I'm hoping this is a common issue, or me being stupid somehow, but here goes:

     

    My setup is basically that I have 1 "unassigned device" SSD, and on that disk, I install my various plugins (sabnzbd et al.)

     

    But I'm having this issue where every single time I restart the server/array, the plugins all run their annoying "wizards" as they are failing to read the config folder held on my SSD. I can see in my logs that the SSD (sdd1) is not mounting until after the plugins have tried to install.

     

    So basically, how do I force UD to mount this disk before the plugins try and run?

     

    Thanks!

     

    And thanks for keeping this plugin alive dlandon!

     

    p.s. both UnRAID and UD are completely up-to-date.

  13. No watt meter, no. But it's not that big an array, drive-wise, so I can't imagine it being that much of a hog.

     

    Anyway, once it charged up to 90%, I unplugged it. It immediately decided that 90%  would last about 12 minutes. It hit 10 minutes and then shut down, as desired. When I brought it back online, the tool reported the battery was 55% full.

     

    The array was spun down at the time, but generally when I stop the array it's 15-20 seconds, nothing more.

     

    So yeah, it's not going to do much more than give a minute to see if power is coming back online, but it should be adequate to shut the thing down rather than see it just fail. My power never really just flickers; if it's going off, it's because half of Philadelphia is going off, so I'm happy enough with it just shutting down ASAP.

     

    Obviously it could be a concern when the battery loses some of its charge, but I'll keep an eye on it.

  14. Thanks for the plugin. Just bought a small UPS and the plugin seems to load properly.

     

    However, I'm not sure what the plugin is really supposed to achieve other than showing me the UPS status and shutting down the server if the power fails. (I haven't actually tested whether it powers down yet.)

     

    Obviously that's the major function of the UPS, but I was wondering whether there is any more to this plugin? e.g. is there a way of saying that if the UPS ever kicks in, immediately power down? That kind of thing. No idea if this is possible or not, but I'm genuinely just wondering if I'm missing something, given there's no obvious documentation.

     

    Set the 'Timeout:' to the number of seconds to wait until the array shuts down.  If you set it to zero, the timeout is turned off.  I would set it to something greater than 60 seconds so you don't shut down too quickly in case the power returns.

     

    Ok thanks. The UPS I have is just a little guy, so I think it'd probably max out at ten minutes anyway, so 60 seconds sounds good. (APC Back-UPS® NS 600VA 8-Outlet Power-Saving UPS)

     

    What are the other settings here? (sorry if this is documented, I couldn't find anything)

     

    [attached screenshot of the plugin settings: tvEEsTZ.png]

  15. Thanks for the plugin. Just bought a small UPS and the plugin seems to load properly.

     

    However, I'm not sure what the plugin is really supposed to achieve other than showing me the UPS status and shutting down the server if the power fails. (I haven't actually tested whether it powers down yet.)

     

    Obviously that's the major function of the UPS, but I was wondering whether there is any more to this plugin? e.g. is there a way of saying that if the UPS ever kicks in, immediately power down? That kind of thing. No idea if this is possible or not, but I'm genuinely just wondering if I'm missing something, given there's no obvious documentation.

  16. I've already run it on disk1 and the cache disk. I had errors on the cache disk, which have since been fixed, but the problem persists. I can try doing it on the other disks, but I'm pretty sure the issue exists between these two drives.

     

    Another attempt I made to fix this was to define minimum free space on each share, and to delete all drive names from the "included/not included" parts of the settings. This still didn't work.

     

    Every time the mover runs, this error happens, and my system locks up and requires a hard restart. This is not ideal.

     

    I want to use the cache disk for the purpose of installing plugins, but would be happy to disable it from "cache" usage. Unfortunately my attempts to do this are failing, as I've told each Share not to use the cache.. and yet it continues to do so.  >:(

     

    EDIT: Looks like it actually deleted the files it was trying to move. Great!

    Again, I'll state this for everyone: the best possible scenario is using a cache drive, with the install AND data dir on the cache drive. This drive is outside of the array, so you will not have hangs on array startup/shutdown.

     

    Myk

     

     

    So, I only installed couchpotato about a week and a half ago (pre-fallout), but I get issues where the array does not start properly (constant "starting..." message). Anyway, sab and sick (non-plugin) start up fine; couchpotato does not. If I disable the cp plugin and re-enable it, it seems to work, but it also seemed to forget my library settings, and either way, this is far from desirable. Any advice?

     

    (status of CP is "Status: STOPPED")

     

    Thanks

     

    The way you have Sab and Sickbeard installed, they start after the array has started from the go file. Plugins run before the go file, and sometimes that happens before the array can start. This creates the /mnt/user directory, so when the array tries to start, the directory already exists, causing your issue. Try changing your directory from /mnt/user to /mnt/diskx, with diskx being the actual disk your directories exist on. This should work around the problem until a proper fix can be put in place.

     

    Sent from my HTC Vivid

     

    Thanks for the swift response.

     

    My settings were: install directory in usr/local, and my data directory in /mnt/cache/.custom/yadayada.

     

    I would've assumed that these directories wouldn't suffer the issue you mention, but I'm giving it a bash with an install directory on the cache drive.

     

    Thanks again

     

    Interesting that you'd quote me on this, given that my following post was specifying how I'd now got the install dir and the data dir both on the cache drive... and it still causes hangs on boot.

     

    Canning this for now, not worth the hassle.

    You are right, you shouldn't be suffering from that... I work all day today, but if you don't have results by then, we'll try to figure it out.

     

    If you have or can have a monitor connected to your server reboot and look at what the console says when booting. It's likely cp is getting an error, just gotta figure out what

     

    Sent from my HTC Vivid

     

    I gave it a go shoving both directories on cache, just to see if it'd help... it did not. Will check it with a monitor tonight and report back.

    So, I only installed couchpotato about a week and a half ago (pre-fallout), but I get issues where the array does not start properly (constant "starting..." message). Anyway, sab and sick (non-plugin) start up fine; couchpotato does not. If I disable the cp plugin and re-enable it, it seems to work, but it also seemed to forget my library settings, and either way, this is far from desirable. Any advice?

     

    (status of CP is "Status: STOPPED")

     

    Thanks

     

    The way you have Sab and Sickbeard installed, they start after the array has started from the go file. Plugins run before the go file, and sometimes that happens before the array can start. This creates the /mnt/user directory, so when the array tries to start, the directory already exists, causing your issue. Try changing your directory from /mnt/user to /mnt/diskx, with diskx being the actual disk your directories exist on. This should work around the problem until a proper fix can be put in place.

     

    Sent from my HTC Vivid

     

    Thanks for the swift response.

     

    My settings were: install directory in usr/local, and my data directory in /mnt/cache/.custom/yadayada.

     

    I would've assumed that these directories wouldn't suffer the issue you mention, but I'm giving it a bash with an install directory on the cache drive.

     

    Thanks again