nimaim

Members • 28 posts

nimaim's Achievements: Newbie (1/14) • 2 Reputation
  1. That would be ideal. Here is @olehj's script from the nsfminerOC container: https://github.com/olehj/docker-nsfminerOC/blob/master/worker.sh; perhaps someone more experienced than I am can do something similar in this container (a rough sketch of what that script does is included after this list). The major limitation is that it only works with Nvidia's v460.73.01 driver on the host (again, now unsupported), which is something I've not been able to work around.
  2. Sorry for the late reply, I don't check this forum much. This Docker container does not allow setting any OC; you have to use something that invokes nvidia-smi / nvidia-settings and still works with the host. Currently the only thing that works is the nsfminerOC container (no longer supported, btw) with the 460.73.01 driver on the host (also no longer supported — you'd have to somehow install it manually, since the nvidia-driver plugin no longer offers that old version). I was lucky to have had it installed before updating the plugin, so I'm still on it. To answer your questions: if you get all this set up, yes, you'd start it once per GPU with settings for each, and then stop it. Once you have one container set up, you can copy it to a second one to use as a template and just change the GPU ID and settings for that GPU (you can get the GPU ID with the "nvidia-smi" command in the Unraid terminal). No, I do not automate it; it does not take much time to manually start the two nsfminerOC containers, stop them, and then start this trex one, especially when my server is up for weeks or months at a time. I posted a screenshot of my 2x 3060 Tis mining in Unraid through this container for about a week with no issues, as you can see. My 3060 Ti FE is my main card inside the case, hence the higher temps, with an Aorus 3060 Ti connected externally. My OC settings are -500 core, +2400 memory, 120 W (50%) power limit, fans at 60%. YMMV.
  3. This is probably a unique request and I know it'd be impossible to guarantee compatibility with newer versions, but I figure it's worth asking anyway. Would it be possible to add a way to manually install a specific driver version? For example, there is currently an nsfminerOC container that only works with Nvidia driver 460.73.01 on the Unraid host (the only container I know of that properly tweaks GPU OC/fan settings), but obviously that version is no longer available in this plugin as it's quite old. I'd rather not spin up an entire VM just to pass the card through and invoke nvidia-smi, so any other suggestions? I guess manually installing an older version of this plugin would also work, but I'm not sure that is possible.
  4. Understandable, thank you for your work on this! It was the only thing that worked for me to OC + set fan speed in a Docker container. I'll migrate over to passing the cards through to a VM.
  5. v460.73.01 is no longer available in the nvidia-driver plugin. Are any other driver versions on the Unraid host confirmed to work with this?
  6. This may have been mentioned before, but I'll state it here: first set all your OC parameters in the nsfminerOC container (one per GPU), start it up, then stop it, and then use this trex container to mine with the previously applied settings (the sequence is sketched after this list). It invokes nvidia-smi to do the OC and fan control, so the settings stay applied even after you stop the container. The only caveat is that you need the 460.73.01 Nvidia driver in Unraid or nsfminerOC doesn't work, at least last I checked. Works like a charm for me, though: I'm getting ~120 MH/s with 2x 3060 Tis @ 120 W each with the OC applied from nsfminerOC.
  7. I have linuxserver's letsencrypt (now SWAG) container working just fine, but would like to switch over to this as it makes adding entries so much easier through the UI. I also followed Spaceinvader One's video on putting each container that needs to be proxied on a custom proxynet network (sketched after this list). Is that still necessary? Any other considerations for migrating over? Is there anything like fail2ban in here?
  8. I must have missed it, but was removing the Log column under Docker intentional? Now I have to click on the individual Docker container to bring up the menu in order to view the log. It still works, but it's just annoying.
  9. After finally getting the container set up with the fine help of you folks in here, I was able to successfully view the feed off a cheapo Wyze cam (official RTSP firmware). The only problem I'm seeing is that the error window is being spammed with "FFMPEG STDERR [aac @ 0x14c7b3832a40] Queue input is backward in time" messages every second or two. I used the settings here (except I changed video streaming to HLS; Poseidon had the same issue, though this seems to be an AAC issue): any ideas? EDIT: No errors with "No Audio", which is what I'll set it to for now (roughly what that does is sketched after this list). But I am curious why those errors come up.
  10. Ok I wasn't sure ... if this is the default, is it safe to disable disk shares then?
  11. Yes, I understand the difference between disk and user shares, but should the "cache" disk itself be shared? Prior to 6.9.0 it was not, so I'm just wondering if this is a "feature", a bug, or something else.
  12. I recently upgraded to 6.9.0-beta25 on my test server and "cache" now appears under disk shares. Is this normal? I am pretty certain I never added any disk shares. Can I safely remove it? Also, how do I permanently disable disk shares? I never intend to share an entire disk on my server. EDIT: Found the option to disable them in "Global Share Settings" ... but my question still remains for "cache".
  13. Re: the workaround above ... I could not get this working with Deluge, so I got rid of that container entirely, grabbed LSIO's Transmission container instead (all my other media-related containers are from them), routed all its traffic through binhex-privoxyvpn (sketched after this list), and all seems to be good for now. Not sure what the cause was, but this is a cleaner setup, as I didn't like all-in-one (AIO) containers anyway. Now if binhex-privoxyvpn is not up (i.e. VPN and Privoxy), the containers using it (nzbget/transmission/etc.) will fail to start. Not sure if the bug is on the Deluge side or the Radarr side, because this did work with Radarr v2. I guess some things are expected to be broken in "preview" (beta). Just posting for others facing a similar problem: I'd say try a different application/container instead of wasting hours playing with settings as I did.
  14. Apologies if this has been brought up before, but since upgrading to the :preview tag of the Radarr LSIO container, Radarr is not "seeing" my Deluge torrent downloads, and the completed downloads are stuck in the completed folder. This seemed to work fine in Radarr v2, so I think it's related. I do not have the same problem with nzbget. I am using the binhex-delugevpn container with a mapping from /downloads (I changed this from the default /data as I thought that was the issue) -> /mnt/user/downloads; in Radarr, the same mapping exists (sketched after this list). I have the Deluge client set up in "Download Clients" with a category of radarr. Deluge properly starts the download, moves the completed download to /downloads/completed in the container, and finally (via the Label plugin) to the category folder /downloads/completed/radarr. Nzbget works the same way, moving category "radarr" downloads to /downloads/completed/radarr, but while those get picked up by Radarr, the completed torrent directories do not. Any ideas? I can't seem to get this sorted out. FWIW, even when removing the extra move to /radarr (by ignoring the category/label in Deluge) and leaving the completed downloads in /downloads/completed, Radarr does not pick them up and I have to manually move, rename, and import them. I thought about switching torrent clients altogether, but since all the VPN-based containers are from binhex and behave similarly, I realized I should try to find the root cause first.
  15. Thanks! Looks like the H110 (9211-4i) + RES2SV240 SAS expander is a good solution to get ~20 disks working off a single PCIe x4 port. Limited bandwidth, of course, but more than enough for mechanical drives. Highly recommend this over the Marvell garbage.
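
The sketches referenced in the posts above follow. First, a rough idea of what @olehj's worker.sh does for one GPU (posts 1 and 2). This is not the actual script — see the linked repo for that — just a minimal sketch assuming the Nvidia driver is loaded on the host and an X display is available for nvidia-settings (the real container provides its own). The GPU index, offsets, and limits are the values from post 2.

    #!/bin/bash
    # Minimal per-GPU tuning sketch (not olehj's actual worker.sh).
    # Assumes the Nvidia driver is loaded on the host, and that a running X server
    # with Coolbits enabled is reachable at DISPLAY=:0 (both are assumptions here).
    GPU_ID=0          # list GPUs with `nvidia-smi -L`
    POWER_LIMIT=120   # watts
    CORE_OFFSET=-500  # MHz, applied to performance level 3 (the level index varies by card)
    MEM_OFFSET=2400   # MHz
    FAN_SPEED=60      # percent

    # Power limit works without X; persistence mode keeps it applied.
    nvidia-smi -i "$GPU_ID" -pm 1
    nvidia-smi -i "$GPU_ID" -pl "$POWER_LIMIT"

    # Clock offsets and fan control go through nvidia-settings on an X display.
    DISPLAY=:0 nvidia-settings \
      -a "[gpu:$GPU_ID]/GPUGraphicsClockOffset[3]=$CORE_OFFSET" \
      -a "[gpu:$GPU_ID]/GPUMemoryTransferRateOffset[3]=$MEM_OFFSET" \
      -a "[gpu:$GPU_ID]/GPUFanControlState=1" \
      -a "[fan:$GPU_ID]/GPUTargetFanSpeed=$FAN_SPEED"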
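Next, the start-once-then-stop workflow from posts 2 and 6 as a plain Docker sequence. The container names are hypothetical placeholders; the point is only that the nvidia-smi/nvidia-settings changes survive the OC containers being stopped, so the trex container can then mine with them applied.

    #!/bin/bash
    # Manual sequence from posts 2 and 6 (container names are placeholders).
    for c in nsfminerOC-gpu0 nsfminerOC-gpu1; do
      docker start "$c"
      sleep 15        # give each container a moment to apply its OC/fan settings
      docker stop "$c"
    done
    docker start trex   # the OC stays applied while t-rex mines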
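For post 7: the custom proxynet from Spaceinvader One's guide is just a user-defined Docker bridge network shared by the reverse proxy and the proxied containers. Whether it is still required here is the open question, but if kept it amounts to roughly this (container names are examples):

    # Create the shared bridge once, then attach the reverse proxy and each
    # container that should be reachable by name on that network.
    docker network create proxynet
    docker network connect proxynet swag
    docker network connect proxynet some-webapp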
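For post 9: the "No Audio" monitor setting roughly corresponds to ffmpeg dropping the audio track entirely, which is why the AAC timestamp warnings stop. A hand-rolled equivalent — not the exact command the container builds, and with a placeholder RTSP URL — would look like:

    # Pull the Wyze RTSP stream, copy video, drop audio (-an), and write HLS.
    # URL/credentials are placeholders.
    ffmpeg -rtsp_transport tcp -i "rtsp://user:pass@CAM_IP/live" \
           -c:v copy -an -f hls -hls_time 2 -hls_list_size 5 stream.m3u8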
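For post 13: routing a container through binhex-privoxyvpn means joining the VPN container's network namespace, which is also why the downloaders refuse to start when it is down. A simplified docker run (the real Unraid template adds config volumes, publishes ports on the VPN container, and more) might look like:

    # Transmission shares binhex-privoxyvpn's network stack; its web UI port must
    # be published on the VPN container, not here.
    docker run -d --name transmission \
      --network=container:binhex-privoxyvpn \
      -v /mnt/user/downloads:/downloads \
      lscr.io/linuxserver/transmission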
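For post 14: the path-mapping arrangement described there boils down to both containers mounting the same host folder at the same in-container path, so a completed path Deluge reports (e.g. /downloads/completed/radarr) resolves to the same files inside Radarr. Stripped down to just the relevant flags (everything else from the real templates — VPN settings, config volumes, ports — omitted):

    # Same host folder, same container path, in both containers.
    docker run -d --name binhex-delugevpn -v /mnt/user/downloads:/downloads binhex/arch-delugevpn
    docker run -d --name radarr -v /mnt/user/downloads:/downloads lscr.io/linuxserver/radarr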