sjaak
Members - 184 posts
Everything posted by sjaak
-
well, I understand that as a "workaround" I can configure it in the router... the only way I found to get IPv6 working inside Docker is to change the container network to br0 and add the extra parameters "--sysctl net.ipv6.conf.all.disable_ipv6=0 --sysctl net.ipv6.conf.eth0.use_tempaddr=2". but I use VLANs and created letsencrypt following the spaceinvaderone video (proxynet network), and I have no idea how to enable IPv6 on that. giving every container its own IP address is too much for me... (we already had a conversation about this on Tweakers)
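as a sketch, those same extra parameters translate to a plain docker run like this (the container name and image are just placeholders for illustration; the two --sysctl flags are the ones quoted above):

```shell
# hypothetical example: run a container on the br0 network with IPv6 re-enabled
# (container name and image are placeholders, not my actual setup)
docker run -d \
  --name=my-container \
  --network=br0 \
  --sysctl net.ipv6.conf.all.disable_ipv6=0 \
  --sysctl net.ipv6.conf.eth0.use_tempaddr=2 \
  linuxserver/firefox
```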
-
well, the problem was that the DHCP server didn't give unRAID an IPv6 DNS server. now it has working DNS (tested with the Firefox container, with the extra parameters "--sysctl net.ipv6.conf.all.disable_ipv6=0 --sysctl net.ipv6.conf.eth0.use_tempaddr=2"). that container has working IPv6, but the other containers don't work with these parameters... damn, IPv6 with Docker is a pain in the ass... I know I have to convert the link-local address to the gateway address, but I don't know how... (IPv6-net::1) (needs to be configured in the router)
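a tiny sketch of the "IPv6-net::1" convention mentioned above: given the /64 prefix, the conventional gateway address is just the prefix with 1 appended after the trailing "::" (the prefix below is the documentation-only example range, not my real one):

```shell
# derive the conventional "prefix::1" gateway from a /64 prefix
# (2001:db8:1:2::/64 is a documentation-only example prefix)
prefix="2001:db8:1:2::/64"
gateway="${prefix%%/*}1"   # strip the /64 suffix, append 1 after the "::"
echo "$gateway"            # prints 2001:db8:1:2::1
```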
-
using categories makes it more organized. personally I would use Radarr to manage my movie library; after setup it's "fire and forget". you can do this job with categories in SABnzbd alone, but I'm not sure I can help you with that (I'm pretty lazy, I use Radarr/Sonarr/Lidarr/etc. to do this work)
-
Unraid V6.8.2 Nvidia build.

the problem: unRAID doesn't work properly with IPv6. I have created an IPv6 tunnel broker (Hurricane Electric) on my OPNsense router to enable IPv6 on my network (my ISP only offers "DS-Lite" (carrier-grade NAT) or IPv4 only). every machine works perfectly fine with IPv6; even the VMs on unRAID work with IPv6, no manual setup needed. only unRAID itself doesn't work with it. for testing I used the website https://test-ipv6.com/ ; it works fine on every machine here (incl. the VMs on unRAID), but not directly on unRAID itself (I use the GUI boot). what did I do wrong, or what is wrong with unRAID? everything is set to auto; the only change was the network protocol from "ipv4" to "ipv4+ipv6".

br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
     inet 10.0.3.2  netmask 255.255.255.0  broadcast 0.0.0.0
     inet6 2001:XX:XX:XX:XX:b29a:ffa5:3ae6  prefixlen 128  scopeid 0x0<global>
     inet6 fe80::e2d5:5eff:XX:XX  prefixlen 64  scopeid 0x20<link>
     inet6 2001:XX:XX:XX:XX:5fff:53c:2c53  prefixlen 64  scopeid 0x0<global>
     ether e0:d5:5e:68:XX:XX  txqueuelen 1000  (Ethernet)
     RX packets 195333  bytes 268545945 (256.1 MiB)
     RX errors 0  dropped 0  overruns 0  frame 0
     TX packets 60352  bytes 12172817 (11.6 MiB)
     TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500
     inet6 fe80::e2d5:5eff:XX:XX  prefixlen 64  scopeid 0x20<link>
     ether e0:d5:5e:68:XX:XX  txqueuelen 1000  (Ethernet)
     RX packets 416632  bytes 593217654 (565.7 MiB)
     RX errors 0  dropped 49  overruns 0  frame 0
     TX packets 165983  bytes 17702085 (16.8 MiB)
     TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
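for anyone debugging the same thing, a couple of quick checks from the unRAID console can narrow down whether it's routing or DNS that's broken (a sketch; the ping target is a well-known public resolver address, not something from my setup):

```shell
# check that the box has a global IPv6 address and a default IPv6 route
ip -6 addr show br0
ip -6 route show default

# test raw IPv6 connectivity (no DNS involved) against a known address
ping -6 -c 3 2606:4700:4700::1111   # Cloudflare public DNS

# test IPv6 DNS resolution + HTTP, forcing IPv6
curl -6 -sS -o /dev/null -w '%{http_code}\n' https://test-ipv6.com/
```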
-
2 GPUs assigned to F@H, the bold ones are working, no CPU slot enabled (it still uses some CPU cores to pass work to the GPUs; for my setup it's using 4 cores at +/-75%).

config.xml:
<slot id='0' type='GPU'/>
<slot id='1' type='GPU'/>

Docker setting: NVIDIA_VISIBLE_DEVICES: all

if you have an Nvidia GPU assigned to a VM, make it disabled for F@H! or the system will crash. I'm not really sure how to do that (I have an AMD card), but if I understand it correctly you can change the syslinux configuration: pci-stub.ids=xxxx:xxxx,xxxx:xxxx (the IOMMU IDs of the graphics/VGA device and its sound device!), then reboot the system after the change.
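a hedged sketch of what that syslinux change could look like (the vendor:device IDs below are placeholders for illustration; the real ones come from your own lspci / System Devices output):

```shell
# find the vendor:device IDs of the GPU and its matching HDMI audio function
lspci -nn | grep -i nvidia
# lines end in bracketed IDs, e.g. [10de:xxxx] for VGA and [10de:yyyy] for audio

# then add both IDs to the "append" line in syslinux.cfg (placeholder IDs!):
#   append pci-stub.ids=10de:xxxx,10de:yyyy initrd=/bzroot
# and reboot so the kernel stubs those devices before any driver claims them
```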
-
yes, 2 Docker images so each user has its own GPU (I have no idea how to assign 2 GPUs to one Docker image; if that is possible I'm going to use 1 image). I changed the config.xml to 2 GPUs because with only one it only sees the GT710, which is NOT assigned to it... that's the weird thing about it... I did experiment with the ID setting, no luck... also, F@H2 sees the 710 but does not assign work to it... I would love to know how to assign 2 specific GPUs to one Docker image, especially for Plex and F@H. (I have 1 PCIe slot free, so another 1050 Ti can fill that up)
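for pinning specific GPUs to one container: the Nvidia runtime accepts a comma-separated list of GPU UUIDs (or indices) in NVIDIA_VISIBLE_DEVICES instead of "all" — a sketch, with placeholder UUIDs:

```shell
# list the installed GPUs with their UUIDs
nvidia-smi -L
# e.g.  GPU 0: GeForce GT 710      (UUID: GPU-aaaaaaaa-....)
#       GPU 1: GeForce GTX 1050 Ti (UUID: GPU-bbbbbbbb-....)

# in the container template, instead of "all", set (placeholder UUIDs):
#   NVIDIA_VISIBLE_DEVICES=GPU-bbbbbbbb-....,GPU-cccccccc-....
# or select by index:
#   NVIDIA_VISIBLE_DEVICES=1,2
```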
-
it needs the CPU to pass work to the GPU. you can limit which CPU cores the Docker image uses; just assign 1 core to it. you can also set, in the 'extra parameters': --memory=Xg --cpu-shares=1 (X = how many GB are allowed), so you can limit its RAM use as well
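as a sketch, the same limits on a plain docker run (the image and container name are placeholders; note that --cpu-shares is a relative weight that only matters under contention, while --cpuset-cpus is what actually pins the container to a core):

```shell
# pin the container to one core, weight its CPU share down,
# and hard-cap its memory at 4 GB (image name is a placeholder)
docker run -d \
  --name=folding \
  --cpuset-cpus=3 \
  --cpu-shares=1 \
  --memory=4g \
  example/foldingathome
```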
-
found a bug in the Docker system. I have 2 F@H Docker images set up. the first one is set up to use the GT710, the second one is set up to use the 1050 Ti, but after creating the second image it only sees the 710 (which is not assigned to it). then I changed the config.xml:

<slot id='0' type='CPU'/>
<slot id='1' type='GPU'/>
<slot id='2' type='GPU'/>

now it's using the 1050 Ti, and the 2nd image has NO work assigned to the GT710, only the 1st one... pretty weird that the 2nd image sees the GPU which is NOT set up to use it...

first image: (screenshot)
second image: (screenshot)

at least 2 GPUs (of the 3 GPUs installed) are working
-
V2020-03-14 here, still on 6.8.2 and it's working fine. Thanks for this plugin!
-
if you removed the external power connectors, I'm pretty sure the 970 won't work anymore... (it's designed to have both connected). if you want to cut down the power, you have to replace it with another 'power-friendly' GPU... the 1050 Ti here runs continuously at 50°C at idle, but there is a SAS card under it and those run hot! even the GT710 for the GUI boot is at 45°C+ while doing nothing. I don't worry that it runs at 50°C continuously
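to keep an eye on those temperatures and power draw from the console, nvidia-smi can report them directly (a sketch; works for any of the Nvidia cards mentioned):

```shell
# one-shot name/temperature/power readout for every installed Nvidia GPU
nvidia-smi --query-gpu=name,temperature.gpu,power.draw --format=csv

# or watch the full status table refresh every 5 seconds
watch -n 5 nvidia-smi
```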
-
there is a bug in the Plex transcoder where the GPU power state stays in P0 (high power). https://forums.plex.tv/t/stuck-in-p-state-p0-after-transcode-finished-on-nvidia/387685/79 for now the command "fuser -kv /dev/nvidia*" works, but only use it when the GPU is 'idling'. many GPU brands turn the fans off below 60°C, so nothing to worry about. keep in mind that the 970 is an older GPU, which uses more power than the newer types (I have a GTX 1050 Ti without extra power connectors; it only uses the PCIe slot power (max 75 watts)). also, the 970 does not support x265, so not every media file will be transcoded by the GPU... https://developer.nvidia.com/video-encode-decode-gpu-support-matrix https://www.elpamsoft.com/?p=Plex-Hardware-Transcoding
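a sketch of checking the P-state before running that kill command — fuser -k kills every process holding the Nvidia device nodes, so make sure nothing is actively transcoding first:

```shell
# show the current performance state (P0 = max clocks, P8 = idle)
nvidia-smi --query-gpu=name,pstate --format=csv

# see which processes are still holding the device nodes (no kill yet)
fuser -v /dev/nvidia*

# only if nothing important is listed: kill them so the GPU can drop to idle
fuser -kv /dev/nvidia*
```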
-
strange... my library is 99% with subtitles, and everything that needs to be transcoded is done by the GTX 1050 Ti with subtitle burn-in. most of the subtitles are .srt. but I did ask my external users to change the default stream setting to 'original' instead of 2 Mbps... (I have a higher upload speed)
-
looks fine to me. sometimes the transcode will use some more RAM, sometimes less. and btw, the transcoder is using the P400 (under "processes", and it's using 65% of the GPU power.)
-
I have the same settings, only I have the access mode set to read-only (Tautulli doesn't need write access)
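for reference, the same read-only idea expressed as a plain docker volume flag — appending :ro to a -v mapping makes it read-only inside the container (host path and image below are placeholders for illustration):

```shell
# mount the Plex logs read-only into the Tautulli container
# (host path and image are placeholders, not necessarily your layout)
docker run -d \
  --name=tautulli \
  -v /mnt/user/appdata/plex/Logs:/plex_logs:ro \
  linuxserver/tautulli
```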
-
I have 3 GPUs and no problems at all: 1 for GUI boot (GT710), 1 for Plex (1050 Ti), and a Vega 64 for the VMs (the reset bug is still there). are you sure you didn't assign the wrong one?
-
you can use a Linux live boot USB to gain access to the drives; I'm not sure how to do it when you have disk encryption... (or remove the drive from the server and mount it on another Linux system). best tip I can give: ALWAYS make a backup before upgrading and save the backup somewhere else... (you can create a full backup through the GUI.)
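a rough sketch of mounting an unRAID data drive from a live Linux boot (device names below are placeholders — check lsblk first; array disks are typically XFS, and the cryptsetup step only applies if disk encryption (LUKS) was enabled):

```shell
# identify the disk and its partition/filesystem (device name is a placeholder)
lsblk -f

# unencrypted XFS array disk: mount it directly
mkdir -p /mnt/rescue
mount -t xfs /dev/sdX1 /mnt/rescue

# LUKS-encrypted disk: unlock first, then mount the mapped device
cryptsetup luksOpen /dev/sdX1 rescue   # prompts for the passphrase
mount /dev/mapper/rescue /mnt/rescue
```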