LesterCovax


Posts posted by LesterCovax

  1. On 1/11/2020 at 12:31 PM, primeval_god said:

I will look into containers with OpenVPN, but I suspect the issue may be a limitation of the way the Kata runtime does things. It may be trying to use a TAP/TUN device from the host OS, which it can't do because of the isolation, and the sandbox (VM) does not feature a usable TAP/TUN.

TL;DR - Try using the containerized VPN activeeos/wireguard-docker in your host OS (it has to be Ubuntu ≥16.04, it seems).  I found this referenced here, BTW: vltraheaven.io: Down the Rabbit Hole - Kata Containers (this site wins the award for form over function destroying user accessibility.  Dat font...)

    --------

So this is the first I'm learning of Kata Containers, and it's certainly intriguing (especially after reading that you can run K8s in it).  Regarding its extremely interesting network architecture, it's using MACVTAP, which isn't anything new.  There seems to be a good bit of documentation for it regarding use with QEMU and/or libvirt.

     

Basically, due to network hardware's common lack of hairpin support, you'll be running MACVTAP in 'Bridge' mode, which lets all guest containers communicate with each other, but not with the host.  This is why a containerized VPN should do the trick (but I haven't tried it and won't be testing it anytime soon).

     

The other option (found referenced several times) is to create a second network interface in the Kata VM.  The first one is blind to the host, but the second one can interact with it if set up correctly with a different subnet and such.  Again, no clue how well this would actually work, but I thought I'd pass on what I found at least.  Good luck, and I'll keep an eye on this thread!

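For what it's worth, here's a minimal sketch of the usual host-side macvlan "shim" workaround people use when macvlan/macvtap guests can't reach the host. I have no idea whether it plays nicely with the Kata sandbox specifically, and the interface name, parent NIC, and addresses are all placeholders:

```bash
# Create a macvlan "shim" interface on the host so it can reach guests on the
# macvlan/macvtap segment (their traffic normally bypasses the host's own NIC).
# eth0, the interface name, and all addresses below are examples - adjust them.
ip link add macvlan-shim link eth0 type macvlan mode bridge
ip addr add 192.168.1.250/32 dev macvlan-shim
ip link set macvlan-shim up

# Route traffic destined for the guest subnet through the shim interface.
ip route add 192.168.1.192/27 dev macvlan-shim
```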

  2. 15 hours ago, jmello said:

Suggestion for v3 -- I had a file get corrupted in my Radarr docker appdata and had to retrieve it from backup, and extracting the 2 MB .xml file took forever because my .tar.gz backup file is huge. Can you add a setting to back up each docker container to an individual .tar.gz file, rather than one enormous file?

    +1 for the feature suggestion

     

In the meantime, though, you don't need to extract the entire archive to retrieve a single file or folder from it (unless you're talking about access times and such due to one huge file). Here's a simple CLI guide I found.  It's even easier if you just open the archive with something like 7-Zip.

    https://www.cyberciti.biz/faq/linux-unix-extracting-specific-files/
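Something along these lines should do it; the archive location and the path inside it are just examples, so list the contents first and copy the exact path:

```bash
# List the archive and find the exact path of the file you need
tar -tzf /mnt/user/backups/CA_Backup.tar.gz | grep -i 'radarr.*config.xml'

# Extract only that file into a scratch directory
# (paths inside the archive are relative - no leading /)
mkdir -p /tmp/restore
tar -xzf /mnt/user/backups/CA_Backup.tar.gz -C /tmp/restore 'radarr/config.xml'
```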

I just noticed the other day that my backups had stopped working, with the process having been hung for two months.  I rebooted Unraid, manually initiated a backup, and it still hung on backing up notifications.  I'll try removing the boot drive from the backup and trying again, but it really needs some way to notify users if the process has been running for far too long (e.g. two months ;p )
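In the meantime I may just hack together a watchdog along these lines; the 'backup' process-name pattern is a guess, so match it to whatever the backup actually shows up as in `ps`:

```bash
#!/bin/bash
# Rough watchdog sketch: log a warning if any backup-looking process has been
# running longer than MAX_HOURS. Swap `logger` for Unraid's notification script
# if you want a GUI/email alert instead of a syslog line.
MAX_HOURS=6
ps -eo etimes=,pid=,args= | grep -i '[b]ackup' | while read -r etimes pid args; do
    if [ "$etimes" -gt $((MAX_HOURS * 3600)) ]; then
        logger -t backup-watchdog "PID $pid has been running for over ${MAX_HOURS}h: $args"
    fi
done
```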

     

    Here's where it gets stuck, with the 'Abort' button doing nothing:

[Screenshot: backup progress stuck, Abort button unresponsive]

     

    Here are my settings:

[Screenshot: backup plugin settings]

     

    Cheers!

  4. 5 hours ago, kelmino said:

So I had auto-updates turned on and saw that it updated Deluge to 2.0.  Unfortunately, some of my private trackers do not have that whitelisted, so I went ahead and downgraded with the following build.

     

    binhex/arch-delugevpn:1.3.15_18_ge050905b2-1-04

     

When I did that, it booted up just fine, except it lost all of my torrents (all my settings seem to be good).  I had over 200 torrents.  Is there a simple way to re-add all of them?  I have them all moving to a completed folder when they finish, so I didn't think I could just re-add the torrent files, because I thought it would re-download them all unless I pointed it at the completed folder, which is not something I'd really want to do at this point.

     

    I had to roll back to that version due to tracker whitelisting as well (and even got a nastygram from an admin asking for a lot of proof on my setup due to inconsistencies).

     

You need to re-add the actual `*.torrent` files for the torrents you had active, AFAIK.  Here's what worked for me (rough command sketch below):

1. Move/copy everything from the `completed` folder to the `incomplete` folder.
2. Copy all of the `*.torrent` files from `/data/.torrents` to `/data/.torrents_add`, which is configured to auto-add any torrents in that directory via the `autoadd` plugin.
3. Deluge will populate a torrent for every `*.torrent` file you added and should then check its progress against what you moved from `completed` to `incomplete`.  If it doesn't for some reason, select them all and choose "Force Recheck".
4. Wait a long, long time, depending on how large your torrents are.

For any torrents where I no longer have the `*.torrent` file but the data was in `completed`, I check the tracker's list of torrents I haven't fully seeded and re-download the torrent file, or just find it manually on the tracker if it's not on the list.
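The shuffle from the host side looks roughly like this, assuming binhex-delugevpn's usual `/data` layout; the host path is an example, so check it against your own container mappings:

```bash
# Host-side path that is mapped to /data inside the container - example only
DATA=/mnt/user/downloads

# 1. Put the payload back where Deluge expects in-progress torrents
cp -a "$DATA/completed/." "$DATA/incomplete/"

# 2. Drop the .torrent files into the autoadd watch directory
cp "$DATA/.torrents/"*.torrent "$DATA/.torrents_add/"

# 3. In the Deluge UI: select the newly added torrents -> Force Recheck,
#    then let it verify the data in the incomplete folder.
```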

     

    Royal PITA, it is

I'm running into the same issue, and I'm trying to find workarounds that don't involve building a custom kernel.  usbip is supposed to be built into the mainline Linux kernel by now, with binaries provided for different distros.

     

I'm trying to use usbip/VirtualHere to share a USB device on another machine with a Docker container.  Here are some basic VirtualHere instructions for obtaining vhci_hcd.ko and usbip-common.ko: https://www.virtualhere.com/client_configuration_faq
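For reference, once the kernel modules are actually available, the plain usbip flow looks roughly like this (addresses and bus IDs are placeholders; VirtualHere wraps the same idea in its own client/server):

```bash
# --- On the machine that physically has the USB device ---
modprobe usbip-host
usbipd -D                       # start the usbip daemon
usbip list -l                   # find the device's bus ID, e.g. 1-1.2
usbip bind -b 1-1.2             # export that device

# --- On the Unraid box (this is where the missing vhci_hcd module bites) ---
modprobe vhci-hcd
usbip list -r 192.168.1.42      # list devices exported by the remote host
usbip attach -r 192.168.1.42 -b 1-1.2
```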

  6. On 10/28/2018 at 5:55 PM, spants said:

    np, I'm not sure why it doesn't replicate to everyone. 

    It could be that the local user template is overriding the repo template.  Deleting the container, then deleting the user template in ~/.docker/templates-user, followed by recreating the container should work.
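Something like this, assuming the template path above is right for your setup (the container and template names are just examples):

```bash
# Remove the container so it can be recreated cleanly from the repo template
docker rm -f my-container                      # example container name

# Delete the stale local copy of the template
rm ~/.docker/templates-user/my-container.xml   # example template filename

# Then re-add the container from the repo/CA template in the web UI
```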

     

    At first I was thinking that not updating template changes automatically is a bad thing, but it would be annoying if someone made custom changes and had them wiped automatically.  The difficulty comes in notifying people of necessary changes if they have everything automated.

  7. 13 hours ago, hocky said:

Well, my 8700 server transcodes 4K just fine. But it's true, for the cost of the update you could get a new Fire TV. But replacing gets more expensive when it comes to mobile devices.

Then, if replacing isn't an option, use my 'smart list' optimization trick to automatically convert 4K content into a more playable format for other devices.  There's no point putting such a heavy transcoding load on your server when you could just store a 1080p file alongside the 4K one and have Plex create those automatically.

You can just set the max bandwidth (on each non-4K device) to 20 Mbps / 1080p (or below).  I found out earlier today that it sadly does try to enforce the resolution limit and not just the stream bitrate.

     

As for creating a new library, I feel like that's overkill, since you can create a custom search filter for 4K content and then create an auto-optimize profile from that search (which also creates an auto-managed playlist, it looks like).  I swear there used to be an option to prefer optimized content for remote devices, but I can't find it now.  To get around that, you can either set the limit on each client like I mentioned above, or take it a step further and set a remote streaming limit of something like 12 Mbps, with the auto-optimize profile encoding at 10 Mbps.  Optionally, you could even specify the IPs of the 4K devices as LAN devices and have every other device default to the capped bandwidth.

     

[screenshot attachment]

     

IMO, if a 4K device can't direct play the 4K content (video, that is; transcoding audio is fine), just get a device that does support H.265/HEVC, like a Chromecast Ultra or Fire TV.  Everything from an old i5-based server to a dual-Xeon build will be running at 100% CPU trying to transcode 4K content on the fly.  My TV with built-in Fire OS plays 4K HEVC content from Amazon just fine, but the Plex app doesn't advertise itself as supporting it, so I'm forced to use my Chromecast Ultra ATM, which is pretty annoying.

  9. On 4/5/2018 at 5:09 PM, Jcloud said:

My understanding is it's more like a SIP proxy: TeamViewer's servers help the two endpoints connect to each other without needing/knowing the IP address, DNS address, or DDNS address of either endpoint.

     

I used TeamViewer for years on all of my machines with no issues, but stopped using it completely a few years back when my mouse started moving on its own to the browser address bar, with someone trying to access my eBay account.  I immediately powered down the machine and shat myself.  My account was protected by a high-entropy password, with additional passwords on each machine.  They played the blame game on everyone but themselves, and I won't be going back.

     

I've been trying to find a good solution for my gaming VM, and I haven't found many options for Linux > Windows that utilize hardware rendering.  UltraVNC and Splashtop support it, but aren't available or up to date on Linux.  RDP works fine through Remmina, but terminal-session sharing / console connections are iffy with Steam Link and its need for the session not to be locked.  This can be bypassed somewhat by disabling all sleep/logout events and enabling auto-login via `Start > Run > netplwiz > uncheck "Users must enter a user name and password" > enter your Microsoft-associated email as the user, plus your password`... but session sharing can still be flaky if you're also using VNC.
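For the RDP route, a plain FreeRDP invocation like the one below is what I end up wrapping in Remmina anyway; the address and username are placeholders, so adjust them (and the flags) for your VM:

```bash
# Full-screen RDP session to the Windows gaming VM from Linux.
# 192.168.1.50 and 'gamer' are placeholders - use your VM's address and account.
xfreerdp /v:192.168.1.50 /u:gamer /f +clipboard /sound
```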

     

    On 4/4/2018 at 6:58 AM, uldise said:

Have you tried nomachine.com?

     

    Just tried it out and it works pretty well.  I'd rate it near RDP level.