ZooMass

Members · 28 posts

  1. Just pulled the latest image as of 2021-10-27 (T-Rex miner version 0.24.2), and it works again with no Nvidia warnings! Thank you for the quick rollback.
  2. I do have --runtime=nvidia in my extra parameters, and I have the latest ptrfrll/nv-docker-trex:cuda11 as of 2021-10-25. Here is my full docker run command:

     /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker create --name='trex' --net='container:vpn' --privileged=true -e TZ="xxxxxxxxxx/xxxxxxxxxx" -e HOST_OS="Unraid" -e 'WALLET'='xxxxxxxxxx' -e 'SERVER'='stratum2+tcp://xxxxxxxxxx.ethash.xxxxxxxxxx.xxx:xxxxx' -e 'WORKER'='1080ti' -e 'ALGO'='ethash' -e 'NVIDIA_VISIBLE_DEVICES'='GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx' -e 'PASS'='xxxxxxxxxx' -e 'NVIDIA_DRIVER_CAPABILITIES'='all' -v '/mnt/user/appdata/trex':'/config':'rw' --runtime=nvidia 'ptrfrll/nv-docker-trex:cuda11'

     And here is my config.json:

     {
       "ab-indexing" : false, "algo" : "ethash",
       "api-bind-http" : "0.0.0.0:4067", "api-bind-telnet" : "127.0.0.1:4068", "api-read-only" : false,
       "autoupdate" : false, "back-to-main-pool-sec" : 600, "coin" : "", "cpu-priority" : 2,
       "dag-build-mode" : "0", "devices" : "0",
       "exit-on-connection-lost" : false, "exit-on-cuda-error" : true, "exit-on-high-power" : 0,
       "extra-dag-epoch" : "-1", "fan" : "t:xx",
       "gpu-init-mode" : 0, "gpu-report-interval" : 30, "gpu-report-interval-s" : 0,
       "hashrate-avr" : 60, "hide-date" : false, "intensity" : "0", "keep-gpu-busy" : false,
       "kernel" : "0", "lhr-low-power" : false, "lhr-tune" : "0", "lock-cclock" : "0",
       "log-path" : "", "low-load" : "0",
       "monitoring-page" : { "graph_interval_sec" : 3600, "update_timeout_sec" : 10 },
       "mt" : "0", "no-color" : false, "no-hashrate-report" : false, "no-nvml" : false,
       "no-strict-ssl" : false, "no-watchdog" : false, "pci-indexing" : false, "pl" : "xxxW",
       "pools" : [ { "pass" : "xxxxxxxxxx", "url" : "stratum2+tcp://xxxxxxxxxx.ethash.xxxxxxxxxx.xxx:xxxxx", "user" : "xxxxxxxxxx", "worker" : "1080ti" } ],
       "protocol-dump" : false, "reconnect-on-fail-shares" : 10, "retries" : 3, "retry-pause" : 10,
       "script-crash" : "", "script-epoch-change" : "", "script-exit" : "", "script-low-hash" : "", "script-start" : "",
       "send-stales" : false, "sharerate-avr" : 600,
       "temperature-color" : "67,77", "temperature-limit" : 0, "temperature-start" : 0,
       "time-limit" : 0, "timeout" : 300, "validate-shares" : false,
       "watchdog-exit-mode" : "", "worker" : "1080ti"
     }
  3. Hi, I'm running ptrfrll/nv-docker-trex:cuda11 on Unraid 6.9.2 with Unraid Nvidia driver 495.29.05 and a single 1080 Ti (not stubbed). The container used to work fine until I reinstalled it from CA using the same template I had before, same GPU ID. Now T-Rex repeatedly fails with this warning:

     20211025 04:07:46 WARN: Can't load NVML library, dlopen(25): failed to load libnvidia-ml.so, libnvidia-ml.so: cannot open shared object file: No such file or directory
     20211025 04:07:46 WARN: NVML error, code 12
     20211025 04:07:46 WARN: Can't initialize NVML. GPU monitoring will be disabled.
     20211025 04:07:47
     20211025 04:07:47 NVIDIA Driver version N/A

     Any idea what might be causing this missing shared Nvidia library? I can run nvidia-smi just fine on my host, and I have tried rebooting. (See the runtime-check sketch after this list.)
  4. Hi, my syslog gets spammed and 99% filled within minutes of booting up with millions of lines like this:

     Jan 18 23:54:53 server kernel: vfio-pci 0000:09:00.0: BAR 3: can't reserve [mem 0xf0000000-0xf1ffffff 64bit pref]
     Jan 18 23:54:53 server kernel: vfio-pci 0000:09:00.0: BAR 3: can't reserve [mem 0xf0000000-0xf1ffffff 64bit pref]
     Jan 18 23:54:53 server kernel: vfio-pci 0000:09:00.0: BAR 3: can't reserve [mem 0xf0000000-0xf1ffffff 64bit pref]

     I am stubbing my graphics card with this plugin on unRAID 6.8.3. The address 09:00.0 is the device "VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070] (rev a1)". HVM and IOMMU are enabled, and the PCIe ACS override is disabled. The graphics card passthrough (with a dumped vbios rom) works in a VM, but only at a fixed 800x600 resolution (Nvidia drivers installed; the Windows VM reports a driver error, code 43), and the VM log says:

     2021-01-19T21:57:24.002296Z qemu-system-x86_64: -device vfio-pci,host=0000:09:00.0,id=hostdev0,bus=pci.0,addr=0x5,romfile=/mnt/disk5/isos/vbios/EVGA_GeForce_GTX_1070.vbios: Failed to mmap 0000:09:00.0 BAR 3. Performance may be slow

     Has anybody seen this before? I can't find anything like it on the forum.

     EDIT: Found some more info. Booting the server without the HDMI cable plugged in removed the spamming line. However, after plugging the HDMI back in and booting the VM, the VM log repeats lines like:

     2021-01-19T22:17:27.637837Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x101afe, 0x0,1) failed: Device or resource busy
     2021-01-19T22:17:27.637849Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x101aff, 0x0,1) failed: Device or resource busy
     2021-01-19T22:17:27.648663Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x4810, 0x1fef8c01,8) failed: Device or resource busy
     2021-01-19T22:17:27.648690Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x4810, 0x1fef8c01,8) failed: Device or resource busy
     2021-01-19T22:17:27.648784Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x102000, 0xabcdabcd,4) failed: Device or resource busy
     2021-01-19T22:17:27.648798Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x102004, 0xabcdabcd,4) failed: Device or resource busy

     Windows Device Manager still reports driver errors, and there are console-like artifacts horizontally across the screen, including a blinking cursor, on top of Windows. It seems like the Unraid console and the Windows VM (or is it the VFIO stubbing?) are fighting over the GPU. I have yet to try the recommendation in the above linked post to unbind the console at boot with the go script. (A sketch of that unbind appears after this list.)
  5. Bumping this post because I am dealing with the same issue. I have the same four containers with these missing-template warnings, pointing to the same A75G templates, and Apply Fix shows the same error for each. I have the following templates on my USB:

     $ ls -lhAF /boot/config/plugins/dockerMan/templates/templates/ | grep jitsi
     -rw------- 1 root root 4.3K Apr 25 2020 jitsi-jicofo.xml
     -rw------- 1 root root 4.0K Apr 25 2020 jitsi-jvb.xml
     -rw------- 1 root root  13K Apr 25 2020 jitsi-prosody.xml
     -rw------- 1 root root 7.2K Apr 25 2020 jitsi-web.xml

     $ ls -lhAF /boot/config/plugins/dockerMan/templates-user/ | grep jitsi
     -rw------- 1 root root  4066 Nov 10 10:36 my-jitsi_bridge.xml
     -rw------- 1 root root  4336 Nov 10 10:37 my-jitsi_focus.xml
     -rw------- 1 root root  7276 Nov 10 10:09 my-jitsi_web.xml
     -rw------- 1 root root 12837 Nov 10 10:35 my-jitsi_xmpp.xml

     I renamed my containers according to the filenames in the templates-user folder.
  6. My question is very similar to this one. I have an arch-rtorrentvpn container (VPN disabled) using the network stack of a dedicated arch-privoxyvpn container via the --net=container:vpn parameter. I am trying to set up port forwarding on the vpn container for rtorrent. The arch-rtorrentvpn container just automatically acquires the forwarded port when the PIA endpoint being used supports it. I am aware of PIA's next-gen upgrades disabling port forwarding, and I am primarily using their Israel, Romania, and CA Montreal servers. The arch-privoxyvpn container connects to those endpoints successfully, but it doesn't do the same automatic port forwarding that the arch-rtorrentvpn and arch-delugevpn containers do. Is there a setting to force this? I assume the container supports it, since the binhex containers share the same startup procedure. Manually creating a STRICT_PORT_FORWARD variable in arch-privoxyvpn (like in the other two containers) has no effect. Even though I am using PIA, there is a log line that says:

     2020-09-16 15:23:54,195 DEBG 'start-script' stdout output: [info] Application does not require port forwarding or VPN provider is != pia, skipping incoming port assignment

     Is using the ADDITIONAL_PORTS variable equivalent to just adding a new port to the template? And is the vpn_options variable just extra parameters for the /usr/bin/openvpn command? (A sketch of these variables on the vpn container appears after this list.)
  7. Jitsi? +1 demand unit. Help us, SpaceInvaderOne, you are our only hope (not tagging you because I'm sure you're already annoyed with the last three @'s in this topic).
  8. Having the same problem accessing the web UI. I am using the manually created network ("docker network create container:vpn") rather than the "--net=container:vpn" extra parameter, on unRAID 6.8.3 with Docker version 19.03.5.
  9. I have been experiencing the same issue with Jackett and LazyLibrarian. There has been some discussion of this web UI issue over on binhex-privoxyvpn (I lay out my details there). For the record, a lot of people are using that container instead of binhex-delugevpn as a dedicated VPN container. Any ideas or advice would be useful!
  10. Thank you for the quick response! My setup looks essentially the same as yours, with the VPN container named simply vpn, and unfortunately I still cannot access the web UI, just a 404. One thing I tried was changing the network from a custom public Docker network I use (to isolate public-facing containers from the rest) to the plain bridge network like yours; the client container still receives the VPN IP, but I still can't access the web UI. I also tried disabling my adblocker even though it should have no effect, and indeed it does not. The container is named jackettvpn because I modified my existing container, but that container's VPN is disabled.
  11. Thank you for these very clear instructions! I was just looking for something like this after hitting my VPN device license limit, and SpaceInvader One released this timely video. Like a lot of you I wanted to use a dedicated VPN container instead of binhex-delugevpn, and binhex-privoxyvpn is perfect for the job. However, I'm unable to access the client containers' web UIs. I've now tested with linuxserver/lazylibrarian (to hide libgen direct downloads) and linuxserver/jackett (migrating from dyonr/jackettvpn, but also tried with a clean image). I'm on unRAID 6.8.3 and I've tried both the manually created network ("docker network create container:vpn") and the "--net=container:vpn" extra parameter. (Also, for the record, "docker run" complains when you select a custom container: network in the dropdown and still have translated ports, so be sure to remove the ports at the same time you change the network.) I've added the client containers' ports (5299 and 9117 respectively for my two test containers) to the binhex-privoxyvpn container named vpn, restarted vpn, and rebuilt and restarted the client containers. I still can't reach the container web UIs on [host IP]:5299 or [host IP]:9117. In the client containers I can curl ifconfig.io and I receive my VPN IP, so the container networking itself seems to work fine; the client web UI seems to be the only issue. I've seen a couple of people in the comments on SpaceInvader One's video report the same issue. Has anyone else experienced this or fixed it? Would love to have this setup work out! (A stripped-down sketch of this setup appears after this list.)
  12. I'm having trouble using the "Tiered FFMPEG NVENC settings depending on resolution" plugin with ID "Tdarr_Plugin_d5d3_iiDrakeii_FFMPEG_NVENC_Tiered_MKV". It says it can't find my GPU.

     Command:

     /home/Tdarr/Tdarr/bundle/programs/server/assets/app/ffmpeg/ffmpeg42/ffmpeg -c:v h264_cuvid -i '/home/Tdarr/Media/Television/Stranger Things/Season 03/Stranger Things - S03E01 - Chapter One- Suzie, Do You Copy [HDTV-1080p].mkv' -map 0 -dn -c:v hevc_nvenc -pix_fmt p010le -rc:v vbr_hq -qmin 0 -cq:V 31 -b:v 2500k -maxrate:v 5000k -preset slow -rc-lookahead 32 -spatial_aq:v 1 -aq-strength:v 8 -a53cc 0 -c:a copy -c:s copy '/home/Tdarr/cache/Stranger Things - S03E01 - Chapter One- Suzie, Do You Copy [HDTV-1080p]-TdarrCacheFile-p1cwX-Dg.mkv'

     ffmpeg version N-95955-g12bbfc4 Copyright (c) 2000-2019 the FFmpeg developers
       built with gcc 7 (Ubuntu 7.4.0-1ubuntu1~18.04.1)
       configuration: --prefix=/home/z/ffmpeg_build --pkg-config-flags=--static --extra-cflags=-I/home/z/ffmpeg_build/include --extra-ldflags=-L/home/z/ffmpeg_build/lib --extra-libs='-lpthread -lm' --bindir=/home/z/bin --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree
       libavutil      56. 36.101 / 56. 36.101
       libavcodec     58. 64.101 / 58. 64.101
       libavformat    58. 35.101 / 58. 35.101
       libavdevice    58.  9.101 / 58.  9.101
       libavfilter     7. 67.100 /  7. 67.100
       libswscale      5.  6.100 /  5.  6.100
       libswresample   3.  6.100 /  3.  6.100
       libpostproc    55.  6.100 / 55.  6.100
     Guessed Channel Layout for Input Stream #0.1 : 5.1
     Input #0, matroska,webm, from '/home/Tdarr/Media/Television/Stranger Things/Season 03/Stranger Things - S03E01 - Chapter One- Suzie, Do You Copy [HDTV-1080p].mkv':
       Metadata:
         encoder         : libebml v1.3.5 + libmatroska v1.4.8
         creation_time   : 2019-07-04T07:03:27.000000Z
       Duration: 00:50:33.63, start: 0.000000, bitrate: 7850 kb/s
       Chapter #0:0: start 306.015000, end 354.521000
         Metadata:
           title           : Intro start
       Chapter #0:1: start 354.521000, end 3033.632000
         Metadata:
           title           : Intro end
       Stream #0:0: Video: h264 (Main), yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], 23.98 fps, 23.98 tbr, 1k tbn, 47.95 tbc (default)
         Metadata:
           BPS-eng         : 7205368
           DURATION-eng    : 00:50:33.573000000
           NUMBER_OF_FRAMES-eng: 72733
           NUMBER_OF_BYTES-eng: 2732251549
           _STATISTICS_WRITING_APP-eng: mkvmerge v21.0.0 ('Tardigrades Will Inherit The Earth') 64-bit
           _STATISTICS_WRITING_DATE_UTC-eng: 2019-07-04 07:03:27
           _STATISTICS_TAGS-eng: BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES
       Stream #0:1(eng): Audio: eac3, 48000 Hz, 5.1, fltp (default)
     ...
     Stream #0:29 -> #0:29 (copy)
     Stream #0:30 -> #0:30 (copy)
     Stream #0:31 -> #0:31 (copy)
     Stream #0:32 -> #0:32 (copy)
     Press [q] to stop, [?] for help
     [hevc_nvenc @ 0x55aaaad84e40] Codec not supported
     [hevc_nvenc @ 0x55aaaad84e40] No capable devices found
     Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
     Conversion failed!

     I have an EVGA GeForce GTX 760, obviously an older card, and nvidia-smi doesn't fully support it:

     Tue Mar 10 13:54:11 2020
     +-----------------------------------------------------------------------------+
     | NVIDIA-SMI 440.59       Driver Version: 440.59       CUDA Version: 10.2     |
     |-------------------------------+----------------------+----------------------+
     | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
     | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
     |===============================+======================+======================|
     |   0  GeForce GTX 760     Off  | 00000000:08:00.0 N/A |                  N/A |
     |  0%   35C    P0    N/A /  N/A |      0MiB /  1997MiB |     N/A      Default |
     +-------------------------------+----------------------+----------------------+

     +-----------------------------------------------------------------------------+
     | Processes:                                                       GPU Memory |
     |  GPU       PID   Type   Process name                             Usage      |
     |=============================================================================|
     |    0                    Not Supported                                       |
     +-----------------------------------------------------------------------------+

     However, my linuxserver/plex and linuxserver/emby containers do manage to use it for hardware transcoding. I made sure to set all the correct Docker template variables, including --runtime=nvidia, NVIDIA_DRIVER_CAPABILITIES=all, and NVIDIA_VISIBLE_DEVICES=<GPU ID>, and I have Linuxserver Unraid Nvidia 6.8.3 installed. Any tips? I would really like to be able to transcode on the GPU; I've been brutally punishing my CPU for days slowly transcoding on Unmanic. (A small NVENC test sketch appears after this list.)
  13. Unmanic is another good container. It's dead simple: you just point it at a directory and it converts H.264 video files to HEVC.
  14. Super happy to see that we have resolved the issue! Thank you @Rich Minear, @limetech, and everyone else! I look forward to finally confidently upgrading from 6.6.7!
  15. That was it! Yes, I am running 6.6.7 because on 6.7+ my machine hit the SQLite data corruption bug that is being investigated, and 6.6.7 did not have the problem. Thanks for your help!
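
A quick check for the NVML problem in posts 2 and 3 is to look inside the running container rather than on the host, since the Nvidia runtime is what mounts libnvidia-ml.so into the container. This is only a sketch, assuming the container is named trex as in the docker run command above; the exact library path varies by driver version.

    # Does the Nvidia runtime expose the GPU and driver inside the container?
    docker exec trex nvidia-smi

    # Is the NVML library actually present in the container's library cache?
    docker exec trex sh -c 'ldconfig -p | grep -i libnvidia-ml'

    # Compare against the host, where nvidia-smi reportedly works fine
    nvidia-smi

If nvidia-smi works on the host but fails inside the container, the runtime/template side (for example --runtime=nvidia not actually being applied when the container was recreated) is the more likely culprit than the driver itself.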
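Post 4 mentions, but had not yet tried, unbinding the Unraid console from the GPU at boot via the go file. A minimal sketch of that workaround is below; which vtcon entry owns the framebuffer, and whether an EFI framebuffer exists at all, depends on legacy versus UEFI boot, so treat the exact paths as assumptions to verify on the system.

    # /boot/config/go (excerpt): release the host console's hold on the GPU
    # so VFIO / the VM can map its BARs without "can't reserve" conflicts
    echo 0 > /sys/class/vtconsole/vtcon0/bind
    echo 0 > /sys/class/vtconsole/vtcon1/bind
    # UEFI boot only: also detach the EFI framebuffer
    echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind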
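For post 6, the variables being asked about (STRICT_PORT_FORWARD, ADDITIONAL_PORTS, VPN_OPTIONS) would be set on the dedicated VPN container itself. The sketch below only illustrates where those settings live; whether binhex/arch-privoxyvpn actually acts on STRICT_PORT_FORWARD is exactly the open question in the post, and the port numbers are placeholders.

    # Sketch only: dedicated VPN container carrying the port-forward related variables.
    # 8118 is Privoxy; 9080 stands in for the rtorrent web UI port.
    docker run -d --name=vpn \
      --cap-add=NET_ADMIN \
      -e VPN_ENABLED=yes \
      -e VPN_PROV=pia \
      -e STRICT_PORT_FORWARD=yes \
      -e ADDITIONAL_PORTS=9080 \
      -e VPN_OPTIONS='' \
      -p 8118:8118 -p 9080:9080 \
      binhex/arch-privoxyvpn

As I understand the other binhex VPN containers, ADDITIONAL_PORTS opens extra ports in the VPN container's internal firewall for applications routed through it; it does not by itself publish a host port the way a template port mapping does.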
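The setup described across posts 8-11, reduced to bare docker commands as a sketch (volumes and other required variables omitted; container names and the ports 9117/5299 are taken from post 11):

    # Publish the clients' web UI ports on the VPN container, since clients that
    # share its network stack cannot publish ports of their own
    docker run -d --name=vpn \
      --cap-add=NET_ADMIN \
      -p 8118:8118 -p 9117:9117 -p 5299:5299 \
      binhex/arch-privoxyvpn

    # Attach the client containers to the VPN container's network stack
    docker run -d --name=jackett --net=container:vpn linuxserver/jackett
    docker run -d --name=lazylibrarian --net=container:vpn linuxserver/lazylibrarian

    # Traffic from a client should exit via the VPN...
    docker exec jackett curl -s ifconfig.io
    # ...and the web UIs should then answer on the host's IP:
    #   http://<host IP>:9117  (Jackett)
    #   http://<host IP>:5299  (LazyLibrarian)

If curl shows the VPN IP but the UIs still do not answer, the port mappings on the vpn container itself are the first thing to re-check, since the client containers' own port settings are ignored once they join another container's network.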
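For post 12, one way to separate "Tdarr can't see the GPU" from "this GPU/driver can't do HEVC NVENC" is to run the bundled ffmpeg against a generated test pattern with the same encoder. A sketch, assuming the Tdarr container is simply named tdarr and using the ffmpeg path from the post:

    # Which NVENC encoders does this ffmpeg build include?
    docker exec tdarr /home/Tdarr/Tdarr/bundle/programs/server/assets/app/ffmpeg/ffmpeg42/ffmpeg \
      -hide_banner -encoders | grep nvenc

    # Try a short HEVC NVENC encode of a synthetic 1080p input; if this also fails with
    # "No capable devices found", the card/driver combination, not the Tdarr plugin, is the limit
    docker exec tdarr /home/Tdarr/Tdarr/bundle/programs/server/assets/app/ffmpeg/ffmpeg42/ffmpeg \
      -hide_banner -f lavfi -i testsrc2=duration=5:size=1920x1080:rate=30 \
      -c:v hevc_nvenc -f null -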