LesterCovax

Members
  • Posts: 17
  1. TL;DR - Try using the containerized VPN activeeos/wireguard-docker in your host OS (it has to be Ubuntu ≥16.04, it seems). I found this referenced here, BTW: vltraheaven.io: Down the Rabbit Hole - Kata Containers (this site wins the award for form over function destroying user accessibility. Dat font...) -------- So this is the first I'm learning of Kata Containers, and it's certainly intriguing (especially after reading that you can run K8s in it). Regarding its extremely interesting network architecture, it uses MACVTAP, which isn't anything new. There seems to be a good bit of documentation for it regarding use with QEMU and/or libvirt. Basically, due to network hardware's common lack of hairpin support, you'll be running MACVTAP in 'Bridge' mode, which lets all guest containers communicate with each other, but not with the host. This is why a containerized VPN should do the trick (but I haven't tried it and won't be testing it anytime soon). The other option (found referenced several times) is to create a second network interface in the Kata VM. The first one is blind to the host, but the second one can interact with it if set up correctly with a different subnet and such; a rough sketch of a related host-side variant follows below. Again, no clue how well this would actually work, but thought I'd pass on what I found at least. Good luck, and I'll keep an eye on this thread!
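
A related host-side variant of that second-interface idea (a macvlan shim on the host) sketched with standard iproute2 commands; the interface name, parent NIC, and subnet are placeholders, and, like the original suggestion, this is untested:

```
# Create a macvlan interface on the host bound to the physical NIC; guests on
# the same parent can then reach the host via this interface even though
# bridge mode blocks direct guest<->host traffic on the parent itself.
# eth0, shim0, and 10.0.1.0/24 are placeholders -- substitute your own NIC
# and a spare subnet.
ip link add shim0 link eth0 type macvlan mode bridge
ip addr add 10.0.1.2/24 dev shim0
ip link set shim0 up
# A connected route for 10.0.1.0/24 via shim0 is added automatically.
```
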
  2. +1 for the feature suggestion. In the meantime, though, you don't need to extract the entire archive to retrieve a single file/folder from it (unless you're talking about access times and such due to a large single file). Here's a simple CLI guide I found (a couple of example commands follow below). It's even easier if you just open up the archive with something like 7-Zip. https://www.cyberciti.biz/faq/linux-unix-extracting-specific-files/
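
For reference, here's what pulling a single file out of an archive looks like from the CLI; the archive names and inner paths are just placeholders:

```
# Extract one file from a tarball (the path is relative to the archive root)
tar -xzf backup.tar.gz path/inside/archive/file.conf

# List contents first if you don't know the exact path
tar -tzf backup.tar.gz | grep file.conf

# Same idea for zip and 7z archives
unzip backup.zip "path/inside/archive/file.conf"
7z x backup.7z path/inside/archive/file.conf
```
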
  3. It started working after I removed the boot device path from the backup. I'll retest and get diagnostics when I get a chance.
  4. I just noticed the other day that my backups stopped working, with the process having been hung for two months. I rebooted Unraid, manually initiated a backup, and it still hung on backing up notifications. I'll try removing the boot drive from the backup and try again, but it really needs some way to notify users if the process has been running for far too long (e.g. two months ;p ). Here's where it gets stuck, with the 'Abort' button doing nothing, along with my settings (screenshots in the original post). Cheers!
  5. I had to roll back to that version due to tracker whitelisting as well (and even got a nastygram from an admin asking for a lot of proof of my setup due to inconsistencies). You need to re-add the actual `*.torrent` files for the torrents you had active, AFAIK. I first moved/copied everything from the `completed` folder to the `incomplete` folder. I then copied all of my `*.torrent` files from the `/data/.torrents` directory to my `/data/.torrents_add` directory, which is configured to auto-add any torrents in that directory using the `autoadd` plugin (the basic file moves are sketched below). It will populate a torrent for every `*.torrent` file you added and should then check the progress against what you moved from your `completed` to `incomplete` directory. You can select them all and choose "Force Recheck" if it's not doing it for some reason. Then just wait a long, long time depending on how large your torrents are. For any torrents I still don't have the `*.torrent` file for but whose data was in `completed`, I just check a list on the tracker itself for torrents I haven't fully seeded and re-download the torrent file, or find it manually on the tracker if it's not on the list. Royal PITA, it is.
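
A minimal sketch of those file moves, assuming `completed` and `incomplete` live under the same `/data` mount mentioned above (adjust the paths to your own container mapping):

```
# Move finished payload data to where Deluge expects in-progress downloads
mv /data/completed/* /data/incomplete/

# Copy the saved .torrent files into the autoadd watch directory
cp /data/.torrents/*.torrent /data/.torrents_add/

# The autoadd plugin picks them up; if progress doesn't reconcile on its own,
# select the torrents in the Deluge UI and choose "Force Recheck".
```
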
  6. I'm running into the same issue and trying to find workarounds without having to build a custom kernel. It's supposedly built into the Linux kernel by now, with binaries provided for different distros. I'm trying to use usbip/VirtualHere to share a USB device on another machine with a docker container; the usual usbip client steps are sketched below. Here are some basic VirtualHere instructions for obtaining vhci_hcd.ko and usbip-common.ko: https://www.virtualhere.com/client_configuration_faq
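
Once the modules are available, the standard usbip client-side flow looks roughly like this; the remote host IP and bus ID are placeholders:

```
# Load the client-side kernel modules
modprobe usbip-core
modprobe vhci-hcd

# List the USB devices exported by the remote usbip host
usbip list -r 10.0.0.50

# Attach one of them by its bus ID; it then shows up as a local USB device
usbip attach -r 10.0.0.50 -b 1-1.2
```
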
  7. It could be that the local user template is overriding the repo template. Deleting the container, then deleting the user template in ~/.docker/templates-user, and then recreating the container should work (rough commands below). At first I thought that not applying template changes automatically was a bad thing, but it would be annoying if someone made custom changes and had them wiped automatically. The difficulty comes in notifying people of necessary changes if they have everything automated.
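
A rough sketch of that sequence from the CLI, using the template path mentioned above; the container and template names are placeholders, and the final recreation step happens from the repo template in the GUI:

```
# Remove the running container and its stale user template, then recreate
# the container from the repo template so the new defaults apply.
docker stop mycontainer
docker rm mycontainer
rm ~/.docker/templates-user/my-mycontainer.xml
```
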
  8. I can record a video of what's happening on my end if it would help, since the diagnostic bundle apparently isn't showing the full story. If there's anything specific (e.g. particular steps in the GUI, CLI commands) you'd like me to include, let me know. I also tried manually creating the docker container via the CLI, and it's still adding the extra-param:

        root@undou:~# docker images
        REPOSITORY                          TAG      IMAGE ID       CREATED         SIZE
        linuxserver/sonarr                  latest   7491f3da74a2   2 days ago      576MB
        linuxserver/nzbget                  latest   7f1e8f590f23   2 days ago      75.7MB
        linuxserver/tautulli                latest   df4f0f0d6ee9   9 days ago      188MB
        brettm357/unifi                     latest   d259b09081bf   12 days ago     610MB
        nanocurrency/nano-beta              latest   5d46b0c2ac89   13 days ago     122MB
        nanocurrency/nano                   latest   cdecaab45885   13 days ago     122MB
        adamfisher90/docker-deluge-1.3.11   latest   249049a9cda6   17 months ago   142MB

        root@undou:~# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker create --name='radarr' --net='br0' --ip='10.0.0.77' -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'TCP_PORT_7878'='7878' -e 'PUID'='99' -e 'PGID'='100' -v '/mnt/movies/.snatched':'/downloads':'rw' -v '/mnt/movies':'/movies':'rw' -v '/mnt/media/Anime':'/anime':'rw' -v '/mnt/user/appdata/radarr':'/config':'rw' 'linuxserver/radarr'

     Even after saving the template update to remove the extra-param, I checked the actual template file and it persisted, meaning that template authoring mode isn't working correctly for some reason. Because of this, I deleted the image/container once again and created it from an Unraid-provided template that didn't have an extra-param defined (https://forums.unraid.net/topic/53758-support-linuxserverio-radarr/). After creating this new instance, the CPU pinning extra-param resurfaced once again (screenshot in the original post). I had to manually go into "/boot/config/plugins/dockerMan/templates-user/my-Radarr.xml" and change "<ExtraParams>--cpuset-cpus=1</ExtraParams>" to "<ExtraParams/>", along with any other values I wanted to persist (a quick sketch of that edit follows below).

     Port mappings via config settings also aren't working correctly, as it's using the legacy port mapping style (https://docs.docker.com/network/links/#environment-variables). Even though I have the following in my ".../my-NZBget.xml" template, it's still translating to "-e 'TCP_PORT_6789'='6788'" upon creating the docker container, with an effective port mapping of "-p 6789:6789" (which is incorrect). I have to manually add an extra-param of "-p 6788:6789" and remove the config entry for the port mapping for it to work correctly.

        <!-- Snipped Code -->
        <Networking>
          <Mode>br0</Mode>
          <Publish>
            <Port>
              <HostPort>6788</HostPort>
              <ContainerPort>6789</ContainerPort>
              <Protocol>tcp</Protocol>
            </Port>
          </Publish>
        </Networking>
        <!-- Snipped Code -->
        <Config Name="Host Port" Target="6789" Default="6788" Mode="tcp" Description="Container Port: 6789" Type="Port" Display="always" Required="false" Mask="false">6788</Config>

     CPU pinning is also not saving in the template via the GUI. I had to manually change "<CPUset/>" to "<CPUset>0,1</CPUset>" in the XML template. After doing so, the GUI correctly shows the expected pinning when loading the template (screenshot in the original post). I ended up just creating my own repo of templates, which appears to work correctly. The only thing that's still not working is port forwarding (i.e. '-p 6788:6789'), even when it's in the extra params, or even when specifying '/tcp'.
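
A quick, hedged sketch of that manual template edit from the command line; the filename is the one from the post, sed is just one way to do it, and you'd want a backup of the file first:

```
# Back up the user template, then blank out the stray --cpuset-cpus extra-param
cp /boot/config/plugins/dockerMan/templates-user/my-Radarr.xml /boot/my-Radarr.xml.bak
sed -i 's|<ExtraParams>--cpuset-cpus=1</ExtraParams>|<ExtraParams/>|' \
    /boot/config/plugins/dockerMan/templates-user/my-Radarr.xml

# Optionally set the CPU pinning the GUI refused to save
sed -i 's|<CPUset/>|<CPUset>0,1</CPUset>|' \
    /boot/config/plugins/dockerMan/templates-user/my-Radarr.xml
```
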
  9. Just curious, but do you have 'preserve user-defined networks' enabled? I don't and still had br0 enabled with the same settings after the upgrade. You have to disable docker and turn on advanced mode to see these settings, BTW.
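
If it helps, the custom network can also be verified from the CLI after an upgrade; `br0` is the network name used in these posts:

```
# Confirm the user-defined network still exists
docker network ls

# Inspect its driver, parent interface, subnet, and attached containers
docker network inspect br0
```
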
  10. Then, if replacing it is not an option, use my 'smart list' optimization trick to automatically convert 4K content to a more playable format for other devices. There's no point in putting such a heavy load on your server for transcoding when you could just store a 1080p file alongside the 4K one and have Plex create those automatically.
  11. As I mentioned, I already tried deleting and recreating it and even tried using a new template. It continues to add the parameters. I think it's something screwy with the CPU Pinning options in Settings. I just checked and there are values marked again in there that I never set. In fact, I had removed those settings, saved, and rebooted to confirm. It seems to have a mind of its own. Yes, I mentioned that I even created a new template that has neither extra parameters nor the GUI pinning options set. As soon as I create the container from that template, the extra params are added again.
  12. I tried to fix this odd issue for about an hour earlier with no success. Unraid is setting CPU pinning for containers even after I remove these definitions. I've tried ensuring that there's nothing set under "Settings > CPU Pinning" for any docker containers, removing a container/image and recreating it from a template that doesn't contain the `--cpuset-cpus` param (NOTE: there needs to be an inline code option here, not just a code block), rebooting, etc. For example, I'll start off with no CPU pinning defined for my Docker containers in Settings (screenshot in the original post). If I go to one of the problem containers (pretty much all of them, I think), it will have a `--cpuset-cpus` param set that was not there upon creation. If I delete the contents of 'Extra Parameters:', hit 'Apply', and go back to edit the container, the `--cpuset-cpus` param is back again. Even if I turn on template authoring mode, save the changes, and create it from my template, it magically reappears. Pretty annoying, and kind of backwards considering the param was supposed to be deprecated in favor of the GUI pinning options. A way to check what's actually applied is sketched below. undou-diagnostics-20180921-0213.zip
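
One way to confirm what Docker is actually applying to a container, independent of the Unraid GUI; the container name here is a placeholder:

```
# Show the cpuset actually applied to the container (empty means no pinning)
docker inspect --format '{{.HostConfig.CpusetCpus}}' radarr

# Dump the full HostConfig if you want to see every runtime option in effect
docker inspect --format '{{json .HostConfig}}' radarr
```
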
  13. You scared me a little, as I came here trying to find info on the new PCIe ACS Override options (e.g. Downstream, Multifunction) without any luck, but I just checked my VM and GPU pass-through is still working after the upgrade. One difference I noticed between our setups was the ACS Override settings (screenshots of yours and mine in the original post). I'm not sure if you manually added the other portions or not, but mine is much simpler. Both your 6.5.3 and 6.6.0 installs appear to have the same settings, so maybe that's not the issue? I'll attach my diag bundle in case it helps you out. undou-diagnostics-20180921-0213.zip
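
For anyone else searching, the ACS override ends up as a kernel parameter on the append line of /boot/syslinux/syslinux.cfg; the exact flags below are an assumption about what the new Downstream/Multifunction GUI options correspond to, so treat this purely as an illustration:

```
# See which ACS override flags are active on the current boot, if any
grep -o 'pcie_acs_override=[^ ]*' /proc/cmdline

# The flag itself lives on the append line in /boot/syslinux/syslinux.cfg, e.g.:
#   append pcie_acs_override=downstream,multifunction initrd=/bzroot
```
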
  14. You can just set the max bandwidth (on each non-4K device) to 20 Mbps 1080p (or below). I found out earlier today that it sadly does try to enforce the resolution limit and not just the stream bitrate. As for creating a new library, I feel like that's overkill, since you can create a custom search filter for 4K content and then create an auto-optimize profile from that search (which also creates an auto-managed playlist, it looks like). I swear there used to be an option to prefer optimized content for remote devices, but I can't find it now. To get around that, you either set the limit on each client like I mentioned above, or take it a step further and set a remote streaming limit to something like 12 Mbps, with the auto-optimize profile encoding at 10 Mbps. Optionally, you could even specify the IPs of the 4K devices as LAN devices and just have every other device default to the capped bandwidth. IMO, if a 4K device can't direct-play the 4K content (video, that is; transcoding audio is fine), just get a device that does support H.265 / HEVC, like a Chromecast Ultra or Fire TV. Everything from an old i5-based server to a dual-Xeon build will be running at 100% CPU trying to transcode 4K content on the fly. My TV with built-in Fire OS plays 4K HEVC content from Amazon just fine, but the Plex app doesn't advertise itself as supporting it, so I'm forced to use my Chromecast Ultra at the moment, which is pretty annoying.
  15. My br0 network persisted through the upgrade, but I can't recall if I manually created it or not in the first place.