MowMdown

Everything posted by MowMdown

  1. I just like to keep the first core free when running heavy CPU tasks.
  2. Got it. I assumed the variable followed the manual method of pinning cores to a docker. The documentation doesn't give an example for a specific set of cores, only for "all", which is why I did what I did. I totally forgot I could do it this way... apparently I like doing things the difficult way. Needless to say, I did get it working and it booted up on the first try. Woohoo!
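     To document the manual method for anyone who lands here: as I understand it, it's Docker's own CPU pinning flag, which Unraid passes through via the template's Extra Parameters field. A minimal sketch (the container name and image are just examples, not my exact setup):

       # Pin the container to cores 1-3 and 5-7, leaving core 0 free for the host
       docker run -d --name=example --cpuset-cpus="1,2,3,5,6,7" some/image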
  3. Quick Question/Possible Bug? I changed the Compile CPU Count field from "all" to 1,2,3,5,6,7 and the build failed and threw a bunch of errors. Using "all" yields a positive result. Am I entering the cores wrong, or is there something else at play? I noticed in the log that the "1" was missing, so it looked like this: " ,2,3,5,6,7"
  4. If you have the GPU Plugin installed, can you verify that your GPU returns to the lowest P-State (P8) after a transcode?
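     For anyone checking, this is how I watch it from the Unraid terminal; it's just a stock nvidia-smi query, nothing plugin-specific:

       # Print the performance state and GPU load once per second;
       # it should settle back to P8 shortly after the transcode ends
       watch -n 1 'nvidia-smi --query-gpu=pstate,utilization.gpu --format=csv'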
  5. I believe I solved my issue using the rclone union mount. It seems to be working as expected now. Instead of caching the union: mount (which was how I had it configured), I cached the crypt: mount, mounted it as a volume (crypt_vfs), and then unioned the local dir with the vfs dir. Now when a file is downloaded to the local side it isn't cached, and when it's then moved to the cloud through the rclone move to crypt:, Plex has no problem playing it from crypt_vfs. I also mounted crypt_vfs as read-only, so when sonarr/radarr move files from /mnt/user/download to /mnt/disks/media (this is the union mount), it only writes data to the local mount, which avoids the caching.

     [gsuite]
     type = drive
     client_id =
     client_secret =
     scope = drive
     token =
     root_folder_id =

     [crypt]
     type = crypt
     remote = gsuite:media
     filename_encryption = standard
     directory_name_encryption = true
     password =
     password2 =

     [local]
     type = local
     nounc = true

     [union]
     type = union
     upstreams = /mnt/disks/media_vfs:ro /mnt/user/media/
     action_policy = epall
     create_policy = eplfs
     search_policy = all
     cache_time = 120

     ----

     mkdir -p /mnt/disks/media
     mkdir -p /mnt/disks/media_vfs

     rclone mount \
       --allow-other \
       --dir-cache-time 720h \
       --poll-interval 15s \
       --buffer-size 256M \
       --cache-dir=/mnt/disk3/system/rclone/cache \
       --vfs-cache-mode full \
       --vfs-cache-max-size 200G \
       --vfs-cache-max-age 168h \
       --vfs-read-chunk-size 128M \
       --vfs-read-chunk-size-limit off \
       --syslog \
       crypt: /mnt/disks/media_vfs &

     rclone mount --allow-other union: /mnt/disks/media &
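     For completeness, the upload step mentioned above is just a scheduled rclone move from the local side into crypt:; roughly like this (path and flags per my setup above, adjust to taste):

       # Move finished media off the local disk and up to the encrypted remote
       rclone move /mnt/user/media crypt: --delete-empty-src-dirs --syslog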
  6. Is anybody using the built-in rclone union for their GDrive and local mounts with vfs caching? I seem to be struggling with it in a strange way. I have three rclone remotes:

     1. Gsuite:
     2. Crypt: (this wraps Gsuite:)
     3. Union: (this wraps crypt:media and /mnt/disks/media/; this is also what I am caching with vfs, shown below)

     My issue seems to be that when I issue

       rclone move /mnt/user/media/movies crypt:media/movies --delete-empty-src-dirs

     the move succeeds, but then Plex cannot play the file. It can see it just fine, but I get an input/output error unless I issue a vfs/refresh rclone rc command. Should I be caching crypt: before unionizing it?

     rclone mount \
       --allow-other \
       --dir-cache-time 720h \
       --poll-interval 15s \
       --buffer-size 256M \
       --cache-dir=/mnt/disk3/system/rclone/cache \
       --vfs-cache-mode writes \
       --vfs-cache-max-size 100G \
       --vfs-cache-max-age 168h \
       --vfs-read-chunk-size 128M \
       --vfs-read-chunk-size-limit off \
       --rc \
       --rc-addr 192.168.1.200:5572 \
       --syslog \
       union: /mnt/disks/media &
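     The refresh I'm referring to is the standard rclone rc call, pointed at the --rc-addr from the mount command above (depending on your rclone version, the mount may also need --rc-no-auth for this to go through):

       # Tell the running mount to re-read its directory tree so the moved file shows up
       rclone rc vfs/refresh recursive=true --rc-addr 192.168.1.200:5572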
  7. There is a typo in the docker config: you left off the 'e' at the end of 'moderate', so it reads 'moderat'. I'm afraid that if 'moderat' is left as the default value, there will be issues changing it to 'moderate'. There are some other typos where the trailing 'e' is also missing.
  8. Negative. The plugin is needed to install the nvidia-unraid build to the USB boot drive. Without this plugin, Nvidia GPUs wouldn't work at all with Unraid.
  9. There is a security feature that doesn't allow scripts to be run from /boot anymore. I'm not sure if @johnnie.black's method will work or not on 6.8.2, but I asked this question a while ago and the answer was you can't/shouldn't be doing that anymore.
  10. Updated to 6.8.3. Small WebGUI issue: when clicking on dockers from the dashboard, the "Update" button appears on all dockers even though no updates are available. Edit: this does not happen while in the "Docker" tab.
  11. I have two peers configured in WireGuard: one is "Remote access to LAN", the other is "Remote Tunneled Access". If I enable "Docker Host Access to Custom Networks" and I am away from my home network (LTE) and I turn on either of my WireGuard connections, my remote device can no longer access devices on my network beyond the Unraid machine. For example, I cannot log into my router by going to "192.168.1.1"; it just times out. The same thing happens when I try to navigate to my HDHomeRun network tuner, which has a local IP of 192.168.1.199. (The DHCP pool is set to 192.168.1.201~254 so there is no overlap, and my Unraid server is 192.168.1.200.) Everything else works as normal.

     If I revert the change to the Docker config, it goes back to normal, which got me thinking that whatever this feature enables, it thinks 192.168.1.1 is now part of the custom network br0, which is configured for 192.168.1.0/24. I don't have any dockers on br0 with a custom IP of 192.168.1.1 (the router gateway). In a way I guess this sort of answers why it happens. My next question is: how do I make the change so br0 is on a different subnet?

     Edit: I am unable to even ping anything except for my Unraid server unless I turn that feature off.
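     If anyone wants to compare notes, this is roughly how I looked at it from the Unraid console; with host access enabled it's the routing table that changes (interface names will vary by setup):

       # Look for the extra route Unraid adds for the custom network
       ip route | grep -E 'br0|shim'
       # Confirm which interface traffic to the router actually leaves through
       ip route get 192.168.1.1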
  12. As the title says, when Host Access is enabled in the Docker settings, WireGuard VPN clients can no longer access LAN devices when connected. Not sure if this is a bug or a consequence of Docker host access mode. Disabling this feature restores access to LAN devices, such as the router admin page via the gateway IP.
  13. Handbrake works with NVENC just fine, although it's not the best for encoding quality.
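     If you want to sanity-check NVENC outside the GUI, something like this works from a console (encoder name per current HandBrake builds; the quality value is just an example):

       # Transcode using the NVENC H.264 encoder instead of x264
       HandBrakeCLI -i input.mkv -o output.mkv --encoder nvenc_h264 --quality 22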
  14. Ok, I missed the security change; that would explain it. Thanks
  15. I've recently noticed I can no longer execute scripts from /boot/, nor can I change the permissions to allow me to execute a script. I can still read from and write to the USB, but changing file permissions and executing them is a no-go. Did something change in Unraid to stop the execution of scripts from USB? For example, there was a nice "docker-shell" script from gridrunner that allowed you to easily bash into running containers; it no longer works, nor will it let me fix the script's permissions to execute it while it's on /boot/.
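     For reference, this behavior is consistent with /boot now being mounted noexec; a quick way to confirm, plus the workaround I settled on (the script path here is just an example):

       # Check the mount flags on the flash drive
       mount | grep /boot
       # noexec only blocks direct execution; invoking the interpreter still works
       bash /boot/config/scripts/docker-shell
       # ...or copy the script somewhere executable first
       cp /boot/config/scripts/docker-shell /tmp/ && chmod +x /tmp/docker-shell && /tmp/docker-shell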
  16. One thing I like: my favorite part is the integration between dockers and the file system. One thing I want to see: 2-Factor Authentication when logging in via a remote browser.
  17. It does not use the regular endpoint. It uses: https://artifacts.plex.tv/plex-media-server-alpha/1.16.7.1597-a6e223f7f/debian/plexmediaserver_1.16.7.1597-a6e223f7f_amd64.deb Unfortunately, there is no way to get this container to update to that build.
  18. Is this why I keep getting this spammed in my container logs?

      2019-08-19 17:27:29,249 DEBG 'watchdog-script' stdout output: [info] microsocks not running
  19. I'm not sure what you guys are doing, but using the latest LSIO Plex container with the latest NVDEC script, it works fine for me on Unraid 6.7 with an Nvidia GTX 970. Some of you might be using an outdated NVDEC patch script.
  20. Hydra doesn't work that way if you aren't accessing it by local IP (it's weird, I know). I don't think the dev has any plans to change that either. Basically, if I visit "hydra.mydomain.tld" it generates the links as "hydra.mydomain.tld/getnzb/..." instead of "http://192.168.1.200:5076/getnzb/…". It's fine though, I found a workaround for it; thanks anyway.
  21. Just an FYI: something is broken with Hydra on the 2.6.4 update, and the dev isn't sure why. 2.6.2 seems to be stable. Edit: looks like 2.6.6 fixes this issue.
  22. Basically I want to be able to whitelist the /getnzb endpoint so I can fetch the URL without needing HTTP auth. Sabnzbd has two methods of retrieving NZB files: you can upload the data to Sab, or you can have Sab fetch the data from a URL. The URL needed to fetch the data is behind hydra.mydomain.tld/getnzb/some_nzb_file.nzb; however, because I put hydra.mydomain.tld behind http_auth, Sab is greeted with a 401 (no authorization) error. (Maybe what I am asking is not possible due to limitations of nginx.) I either need to whitelist the /getnzb endpoint ONLY, so no HTTP auth is required, OR somehow allow Sab through the auth. It's not the end of the world, but I would prefer the fetch method over upload. I was trying to edit the .conf using this documentation with no success.

     Edit: I think I figured it out. I went in and created a custom location "/getnzb" and used the same IP:PORT as the main proxy, and it seems to work. However, if you try to access it normally, it will give you an AUTH form, so it's not accessible from a browser. Safe enough for me.
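     In case it helps anyone else, the custom location I described boils down to something like this in the proxy conf (IP:PORT from my setup; the exact file layout depends on your nginx container):

       # Pass /getnzb straight through to Hydra without the basic-auth challenge
       location /getnzb {
           auth_basic off;
           proxy_pass http://192.168.1.200:5076;
       }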