
Zer0Nin3r

Members
  • Posts

    161
  • Joined

  • Last visited

Posts posted by Zer0Nin3r

  1. I'd be curious to know if you can use the 2nd NIC to isolate Unraid traffic to a VPN connection. Since WireGuard is not currently supported natively in the Linux kernel on Unraid v6.8.3, and I am trying to be more resource efficient with my Dockers, I was thinking of having my router connect to my VPN as a client and then route all of the traffic from the second NIC through the VPN.

     

    Sure, @binhex has built VPN support into some of their Dockers, but if I can free up some system resources (extra processes and RAM) by not downloading those dependencies, and offload the VPN work to the router at the expense of VPN speed, that is something I'm willing to explore. Just trying to streamline my Dockers and have my cake and eat it too. That said, I only want specific dockers to use the second NIC; the others can remain in the clear.

     

    I was trying to see if you could specify a particular NIC for individual Dockers, but it does not look like you can:

     

    image.png.d67dc720ed73434fbffa38dd86656b77.png

     

    I tried to get the Wireguard plugin to connect to my VPN provider, but it won't connect using the configuration files that I was given. That being said, I'd be curious to know if we can do split tunneling when the new version of Unraid comes out and Wireguard is baked into the kernel.

     

    Otherwise, I was thinking...maybe I can set up one WireGuard docker and then simply route the individual dockers through that one docker for all of my VPN needs on Unraid. But I don't know how I would go about doing that, and besides, there are other threads discussing this matter.
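    For what it's worth, the "route dockers through one VPN docker" idea usually comes down to sharing a network namespace. A rough sketch with plain Docker follows; the container names and images are just examples, not something I've verified on Unraid:

```shell
# The VPN container owns the network stack:
docker run -d --name wireguard --cap-add NET_ADMIN lscr.io/linuxserver/wireguard
# Any container started with --network container:<name> shares that stack,
# so all of its traffic rides the tunnel:
docker run -d --name qbittorrent --network container:wireguard lscr.io/linuxserver/qbittorrent
```

    One catch with this approach: you reach the second container's web UI through ports published on the VPN container, not on the app container.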

     

    Anyway, thanks to anyone reading this. Just thinking aloud for a moment to see if anyone else may know the answer. Until then, I'll continue searching in my free time. Oh, and if anyone knows of some "Networking IP Tables for Dummies" type of tutorials, let me know. 🙂
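    In the meantime, here's a rough, untested sketch of what router-style policy routing could look like on a Linux box, in the "IP tables for Dummies" spirit (interface names and addresses are made up; the real rules depend on your router and VPN setup):

```shell
# Hypothetical: send everything sourced from the second NIC's subnet out
# through the VPN-side gateway. Names and addresses are illustrative only.
ip route add default via 192.168.2.1 dev eth1 table 100   # VPN gateway lives in table 100
ip rule add from 192.168.2.0/24 lookup 100                # source-based routing rule
# Optional "kill switch": reject anything on eth1 that isn't staying local
iptables -A OUTPUT -o eth1 ! -d 192.168.2.0/24 -j REJECT
```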

  2.   

    On 3/28/2020 at 12:37 AM, dboris said:

    This information REALLY needs to be condensed / communicated (or integrated in the OS ? ) by unraid's dev team.
    It would have saved me so much time.

     

    On 3/28/2020 at 7:31 AM, testdasi said:

    I just skimmed through this topic in less than 10 minutes. If that is "countless hours" for you then you have bigger things to be concerned about.

    Blaming others for your own laziness will get you quite far.

    I'm siding with @dboris on this one. As a layperson who does this in their spare time and not at a professional level, it's hard to be aware of everything, especially since the Unraid wiki is dead, technology, software, and techniques change on a regular basis, and Invision Community's search (and the forum as a platform, in some regards) is lacking. If the wiki isn't updated to act as a central location for insightful information and guides, then we are left using the forum's wonky search, or an external search engine, to parse the forums for the information we're looking for. Even then, the process is horribly inefficient: you bounce between a patchwork of forum topics and threads, sometimes opening multiple tabs to cross-reference other threads that tie in with whatever you're troubleshooting on a given day. And if you get tuckered out or burnt out on a side project, you get to come back and repeat the whole process all over again, because god forbid you forget anything you haven't touched in a long while.

     

    If nobody wants to update the wiki and information stays relegated to fractured forum threads, then at least Discourse has some semblance of link bundling, summary building, and cross-topic suggestions built into its platform.

     

    I've tried different CPU pinnings for the gaming VM on my 1950X using the mappings in the Unraid template, thinking the CPU/HT pairings lined up with the actual CPU 1 & CPU 2 sets, only to find out years later that the reported CPU/HT pairings are incorrect and do not account for Infinity Fabric latency.
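    For anyone who wants to check the real pairings instead of trusting the template, the kernel exposes them directly (standard Linux sysfs paths, nothing Unraid-specific):

```shell
# Each line prints the hyperthread siblings that share one physical core:
for c in /sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list; do
  echo "$c: $(cat "$c")"
done
# lscpu -e=CPU,CORE,NODE additionally shows which NUMA node each CPU sits on,
# which is what matters for Infinity Fabric latency on Threadripper.
```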

     

    But in your defense, @testdasi, I also get where you are coming from. You only get back what you put in: put in half the effort to understand all this and you will get mediocre results; put in the effort to fully understand what is freely shared knowledge and reap the rewards that come with a community coming together to share information.

     

    In closing, I just want to thank everyone in this thread for taking the time to dissect the AMD Threadripper and Ryzen CPUs and for sharing your findings with the rest of us, even if it feels like we are getting "into the weeds" with this knowledge. If I were in a better place, I would take all of this information and make it more accessible for the casual Unraid gamer and the less technically inclined users on this forum. But perhaps that day will come soon. Until then...don't hold your breath. 😉

     

    ---

    Other thoughts

     

    I see that there is a section for guides, but the problem is that when I become tired or bored of a particular project, that thread goes stale. Sure, people can come in and post new findings, suggestions, etc., but then the topic becomes, yet again, fractured. There is no way for me to let others edit my original post so that it could act as a quasi wiki page featuring the latest knowledge and information.

  3. On 6/14/2020 at 9:52 PM, ceyo14 said:

    no way to select a bunch in a row, or all in a docker or all in a row..

    There is a way. Go to Settings > CPU Pinning; from there you can pin entire rows of cores or hyperthreads across the board by clicking on the number(s), which will save some time, and then fine-tune the selections from there. It's not as granular as you described, but it is better than clicking each individual dot.

     

    image.png.a0d84e55608373e5faf239c9657c2450.png

    • Like 1
  4. On 5/9/2020 at 5:06 AM, DaClownie said:

    As I've been going through my library, I've had some mixed results. Some files shrink 70%. Some shrink 25%. The overall goal of me reencoding my library was to save some space.

     

    For me, saving all that time and some space is much more valuable than the additional space saved vs. the heat/power/time requirement of the libx265.

     

    I'm still going to stick with the nvenc. I reencoded one library from 336GB to 206GB. If I can do that to most of my library, I'll save a couple terabytes, which is perfect. Especially with how easily the gpu transcodes h265 content.

    @DaClownie - Have you noticed that with some encodes you end up with an output that is larger than the original input? I've been noticing this phenomenon lately, especially with some .ts (MPEG-2) encoded streams, but it's not consistent. For the most part, encoding with hevc_nvenc does save me some space (along with time and energy usage).

    On 5/6/2020 at 5:26 AM, DaClownie said:

    That's unfortunate but we can play with it. I don't know much about the inner workings, so I have no idea what modifiers to settings you have to play with in the background... In testing NVENC in handbrake, The file sizes were very finicky. I still haven't got it dialed in. Converting a single episode with nvenc was increasing size from 1.6gb to 2.9gb. However, if I turned quality all the way down, it tuned it to 150mb. Ideally I'm going to get a single file, and duplicate it. I'll convert it using your normal docker with the lib265 encoder and see the size, then i'll try your nvenc docker. After that, I'll tune the handbrake file to get me the same relative size as your lib265 encoder to see the quality comparison. 

     

    Are the settings with nvenc changeable on your end? or are they locked in?

    I've noticed this with Handbrake: you really have to crank down the quality when doing h.265 to get encode speeds that rival h.264, but then the files end up bloated, with worse picture quality than what you started with.

     

    I'm kind of torn between hevc_nvenc and libx265. On one hand, I save a boatload of time using the GPU to process the encodes, but I risk some files coming out bloated/enlarged, which negates the entire point of re-encoding to h.265.

     

    On the other hand, I know my encodes will always net me anywhere from 60 – 80% in file size savings at the expense of heat, energy draw, and thermal load on the CPU, not to mention the extended time it takes to perform the encodes with CPU.

     

    I guess I will keep an eye on the outputs, and if it keeps happening, I may just switch back to libx265 and drop the number of workers; that, and wait until we have access to advanced controls / command input that can guarantee hevc_nvenc encodes will always be smaller than the source file.
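    Until then, a dumb little guard could automate that check after each encode; this is just a sketch of the idea (`keep_if_smaller` is a made-up helper, not an Unmanic feature):

```shell
# Keep the NVENC output only if it is actually smaller than the source.
keep_if_smaller() {
  src="$1"; out="$2"
  src_size=$(wc -c < "$src")
  out_size=$(wc -c < "$out")
  if [ "$out_size" -lt "$src_size" ]; then
    echo "keep"      # encode shrank: safe to replace the source
  else
    echo "discard"   # encode grew: keep the original file
  fi
}
```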

  5. On 6/7/2020 at 6:06 AM, JasonK said:

    Nope - as I don't want to remove the subtitles :)

    You will not find success with PGS subtitles when encoding video files that have these subtitles embedded in the container, at least until this project matures. Your best bet is to look towards Handbrake or the other encoder that's been mentioned elsewhere in this thread. (Apologies, I don't remember the other encoder's name off the top of my head.)

    On 6/7/2020 at 1:34 PM, UncleBacon said:

    That's what fixed the same problem for me so you'll have to find another tool to convert them for the time being, as far as I know. I've been playing with Handbrake but it's nowhere near as easy.

    SpaceInvaderOne has a tutorial on how to automate this with Handbrake. It's best to do some test encodes first to make sure your preset is the way you want it, and then let it loose on your library, should you want to take on that risk. A safer way to automate would be with watch folders, where you manually bring the completed encodes into your library, working in batches to ensure everything goes smoothly.

     

    I was doing batches at first with the Handbrake automation, then transitioned over to Unmanic. Yes, I have disabled the subtitles and audio to make my encodes with Unmanic easier, but those are sacrifices I am willing to make for now, until it comes time to convert my library to 4K; it's a problem I'll revisit when I upgrade my audio equipment.

     

  6. On 5/8/2020 at 9:12 PM, jmmille said:

    Do you think different GPUs work better or worse at encoding?

    Unless you're running the latest and greatest GPUs from Nvidia, the CPU gives a better encode with h.265/HEVC: better quality and better compression. It just takes a lot longer, which equates to more electricity being used.

     

    The GPU can perform the encodes a lot faster, but it lacks B-frame support on anything below a GeForce GTX 1660. CPU encoding supports B-frames, which is why we see such a reduction in file sizes. I would be curious to see 1) the final encode sizes on the Turing family of GPUs (sans the GTX 1650) and 2) the encode quality coming from Turing compared to a CPU encode.

    image.thumb.png.2bb064803ff98b4a02798eb4e5a35a7e.png

    image.thumb.png.b785582712d9e9975e10f0e3a0c013b4.png

    source: https://developer.nvidia.com/video-encode-decode-gpu-support-matrix
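    For reference, enabling B-frames on a Turing-or-newer card in plain ffmpeg looks something like this; the flags are real ffmpeg/NVENC options, but the values are illustrative, not tuned:

```shell
ffmpeg -i input.mkv \
  -c:v hevc_nvenc -preset slow -rc vbr -cq 28 \
  -bf 3 \
  -c:a copy \
  output.mkv
```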

    On 5/8/2020 at 7:24 PM, belliott_20 said:

    just got errors when it attempted to go over the stream limit (2). The first two workers would process a file but the 3rd just kept showing errors for file after file.

    image.thumb.png.4ad0fb41bdb0f2a6872475d8775d565d.pngimage.thumb.png.bab282da2f2bd5828a6db9414d1e7d99.png

     

    I noticed this too, @belliott_20, which is what brought me back to the forum tonight. I am probably misreading the Max # of concurrent sessions column, as it may not apply to our case with Unmanic. I did notice that with two workers there were no errors; with three workers, errors.

    Now I am not running in debugging mode, so I don't have any logs to share; sorry I am of no help in this sense.

     

    On the flip side, maybe we are lucky to be able to encode two streams at a time when looking at the Total # of NVENC column?

  7. If you have a secondary GPU and give that to Unraid to use, then you don't have to pass through the vbios for the GPU you want to give to the VM. I tried like mad to get single-GPU pass-through working and couldn't get the VM to POST, even with the vbios passed through. So I ended up purchasing a cheap GPU and putting it in a secondary PCIe slot for Unraid to use. I've been happily passing through my GTX 1060 ever since.

     

    Now, if you want to pass through the GPU that Unraid is currently using, then yes, you will have to pass through the vbios for that GPU.

  8. On 5/27/2020 at 5:54 AM, CoZ said:

    want to add MORE TV Show folders to the container.  For instance, have multiple TV show folders instead of just the TV Show Library.  Right now I've pointed the "Library Movies" to another TV show instead of the Movie library.  When I grab a new movie or TV show I want encoded, I have to go back into editing the container and point it directly to the new folder or said movie or TV Show and then when it's done, go back into the container and switch it back.

    In my use case, I want to encode everything into h.265 for archiving, while the shows I DVR and then delete after watching should not be encoded to h.265.

     

    An easier fire-and-forget method I just set up today was this:

    1. Stop Unmanic so it doesn't try to encode files while you're working on moving around directories and content.
    2. In your media library share create a folder /TV Show_Temp  || example: /mnt/user/Plex/TV Show_Temp
    3. Add /TV Show_Temp to your Sonarr & Plex Docker templates as a new path
    4. Under Sonarr go to: Series > Series Editor
    5. Select the TV Shows you want to work on
    6. Change the root folder of the TV Series with the following drop down menu at the bottom of your screen: Root Folder > Add a Different Path > TV Show_Temp
    7. Log into your Plex dashboard. Settings > Manage > Libraries > TV Shows > Edit > Add Folders > Add TV Show_Temp
    8. Manually move the shows that you do not want Unmanic to see to the new /TV Show_Temp folder
    9. Go back to Plex and scan your TV library so that the file paths automatically update

    What this does from now on: Sonarr will download the shows you don't want encoded (or Unmanic to see) into your TV Show_Temp folder, and Plex will add them to your library automatically. There's no need to set up a separate TV library inside Plex, so all your shows show up together, and no need to keep adding folder paths to Unmanic.
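    The folder shuffle in steps 1, 2, and 8 boils down to something like this (shown under /tmp with a made-up show name so it's safe to dry-run; on Unraid you'd use the real /mnt/user/Plex paths and stop the Unmanic container first):

```shell
LIB="/tmp/Plex"                                         # on Unraid: /mnt/user/Plex
mkdir -p "$LIB/TV Shows/Some DVR Show"                  # pretend this show already exists
mkdir -p "$LIB/TV Show_Temp"                            # step 2: create the temp folder
# steps 3-7 happen in the Sonarr and Plex web UIs
mv "$LIB/TV Shows/Some DVR Show" "$LIB/TV Show_Temp/"   # step 8: hide it from Unmanic
```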

     

    ** You decide which shows get encoded automatically by editing the series you've already added and selecting the TV Show_Temp folder, or by choosing that setting when you add a new show. Don't forget to move the shows you want to hide from Unmanic into the new TV Show_Temp folder you created.

     

    There are also settings in Plex that will delete episodes automatically for you, and settings within Sonarr that will unmonitor episodes when Sonarr sees a file has been deleted.

     

    Hope this helps save some folks time and headaches, allowing more time for play. 🙂

     

    Here are some screenshots if it will help:

    Sonarr Docker:

    image.thumb.png.c1b1e5a6adf8b6d5c99c5df4669aff5c.png

     

    Plex Docker:

    image.thumb.png.8a83916f7fad2990ff43f6937703c20a.png

    image.thumb.png.0f2ee57d9e2baa1c133c0d23a499ef92.png

     

    Sonarr:

    image.png.72340435268d04c9beb3a4058e7c24c1.png

    image.thumb.png.5aa9daf0f17e591cb9e751d8fa3c00f1.png

    Plex:

    image.png.a783ef336ebb5482c6650b83189c9cf9.png

    ---

    image.thumb.png.1760b0cf1842f2f4dc1f44d954c6652a.png

    • Like 1
  9. On 5/6/2020 at 5:26 AM, DaClownie said:

    The file sizes were very finicky. I still haven't got it dialed in. Converting a single episode with nvenc was increasing size from 1.6gb to 2.9gb. However, if I turned quality all the way down, it tuned it to 150mb.

    The default RF on the h.265 presets for Apple & Chromecast in Handbrake is RF 15. Try adjusting to 22 ± 1. Remember, a smaller RF results in better quality but a larger file.

     

    Quote

    In regards to h.264:

     

    High Definition (e.g Blurays 720/1080) Use an RF value of 22 +/- 1 Since HD sources are typically quality, you can get away with a slightly higher RF value than SD content without any perceived difference in quality...

     

    Standard Definition (e.g DVD’s) Use an RF value of around 20 +/- 1 As an example using the AppleTV2 preset at RF20, with 20 different sources, the average size was 925MB per hour of video. (Min: 625MB/h Max:1,503MB/hr)...

     

    To sum up: when converting from a DVD source, there is no reason to go above an RF of ~19, which is roughly equivalent to how heavily the DVD is compressed. If you do go higher, your output will be larger than your input!

     

    Source: https://handbrake.fr/docs/en/latest/technical/video-cq-vs-abr.html

    Even though the Handbrake team advises against it, you can specify a target bitrate (2000 kbps is the default) and reduce file size and encode times that way (single pass) at the expense of quality. Using Constant Quality (CQ) instead scales the stream bitrate higher for more complex scenes.
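    For the curious, the two approaches map to HandBrakeCLI roughly like this (file names are placeholders):

```shell
# Constant quality at RF 22 (bitrate floats with scene complexity):
HandBrakeCLI -i input.mkv -o out_cq.mkv -e x265 -q 22
# Fixed average bitrate, single pass, 2000 kbps (size is predictable, quality is not):
HandBrakeCLI -i input.mkv -o out_abr.mkv -e x265 -b 2000
```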

     

    Not sure how we would pass-through the quality settings we want in Unmanic if we want to experiment with NVENC.

    ---

     

    image.thumb.png.5ae426360fdb680257c6ec90b10d313f.png

    Do we still need to add the --runtime=nvidia under Advanced View now that I see this in the template?

    --

      

    On 5/6/2020 at 12:38 PM, Josh.5 said:

    As far as I could tell, nvenc_hevc is an older now deprecated encoder. Use the newer hevc_nvenc one?

    Yes, hevc_nvenc is the encoder we will want to use now.

     

    Source: https://devblogs.nvidia.com/nvidia-ffmpeg-transcoding-guide/

  10. On 4/10/2020 at 3:08 PM, vcolombo said:

    Is everyone else just turning off audio transcoding?

    Yep. I turned off audio encoding for the same reason(s). I did the research, and passing through the native audio tracks doesn't add much to the final output: anywhere from a couple hundred megabytes to several hundred. For all the headaches audio encoding has caused, I simply turned it off. Plex can transcode the audio, and that doesn't take much processing power compared to Plex transcoding video.

    • Like 1
  11. @Mason736:

    1. Boot up your Catalina VM.
    2. Press 'Esc' key to get into the OVMF BIOS.
    3. Change the resolution to your liking.
    4. Make sure you have a remote desktop client installed e.g., Nomachine, Splashtop.
    5. Highly recommended that you enable auto login, as Nomachine won't be able to initialize the display until someone logs in.

     

  12. Ha. I was just performing a quick search to make sure I wasn't double posting.

     

    Any chance cp --reflink will work with XFS on the next Unraid update?

     

    https://blogs.oracle.com/linux/xfs-data-block-sharing-reflink

     

    Currently the array is XFS while the cache is BTRFS.

     

    You can reflink files within the cache, but you cannot go from the array to the cache (XFS -> BTRFS). I also just tried to reflink the VM image from within the user share (array/XFS), and it results in "Operation not supported".
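    If anyone wants to test this on their own shares, a quick check looks like the following. Note that `--reflink=auto` silently falls back to a normal copy where CoW isn't supported, while `=always` errors out instead:

```shell
printf 'hello' > /tmp/reflink_src
# On BTRFS this shares data blocks; on a pre-reflink XFS it just copies:
cp --reflink=auto /tmp/reflink_src /tmp/reflink_dst
cmp /tmp/reflink_src /tmp/reflink_dst && echo "copies match"
```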

     

    So it would be great to be able to reflink within user shares, which (from my understanding) is the safer way to work with data on the array, versus trying to work directly with the disk "shares".

     

  13. On 3/28/2020 at 11:00 AM, CoZ said:

    Subtitle support would be great.  Almost all of my files fail due to subs.

    Did you check whether this feature request has been submitted to the project's GitHub? The request may (or may not) already be open there.

     

    @flinte You too: check the project's GitHub for existing feature requests, such as your quality control request.

     

    @All: Have a great rest of the week into the weekend! Stay safe! Watch some movies/TV!

    1. Installed just fine.
    2. Ran into the "Big Mouse" issue.
    3. Tried GPU pass-through with an Nvidia GTX 1060 using a GPU bios dump = no such luck.
    4. With GPU pass-through, the screen would flicker and then display correctly, with the OS trying to recover from a system crash.
    5. The system would then reboot and crash all over again in a boot loop.
    6. I run my VMs headless. But judging from these videos, we'll have better luck installing Ubuntu 18.04 LTS and then installing the Steam client in there. Linus Tech Tips | Craft Computing
    7. Also, for those that run headless: not sure what clients you are using, but Parsec supports Ubuntu 18.04 LTS. That's something I want to test when I come back to this side project.
  14. 43 minutes ago, tucansam said:

    Despite dedicating four cores to this in the config, this software is pegging my CPU and turning it into the surface of the sun.

     

    How to fix?  Found the "use no more than XX%" in the RDP config, looks like that works for now.

     

    Got Rosetta@Home, couldn't find Unraid team.

     

    1 hour ago, Reynald said:

    Hello,

    I'm in with Bionic RDP.

    Even using advanced view, I cannot find where to choose the team when adding the project... Any guidance please?

    You can set it up on their website: https://boinc.bakerlab.org/rosetta/home.php

     

    How much disk space is everyone devoting to this? I just set a 10 GB cap for now. Anyone just go with the default settings? How's that working for you?

     

    Also, has anyone found that the BOINC manager lags on macOS?

    Stuck on this screen a lot:

    image.png.6c2139644cacca69a0a870c47b82d0f9.png

    It also appears you can do all of your configuration from the website after you've logged into your BOINC account in the manager software and synced it with the docker client.

    image.thumb.png.d3a4ca58889468779ada3e48bdbad29e.png

    Also, happy to report that I'm up and running! Thanks @Squid for bringing this to my attention with the banner announcement. (Although, it did give me pause real quick as expressed by @alael.)

    image.png.2edb65064f2f55239ccc84d6c4863053.png

    • Thanks 2
  15. That's what I figured. I've just created a document with the necessary changes that I need to remember to copy/paste back into the XML after changes.

     

    Nvidia pass-through has been finicky with Macinabox. The install is amazing and simple, but getting the OS to recognize my display dongle for the GPU, so that I can change the resolution on the thin client (Apple Screen Share or Splashtop), has been a PITA.

     

    Display is stuck at 1280x1024. With no sound.

     

    07-MAR-20

    So I was very careful to update my XML by hand and not use the template editor, pretty much copy/pasting the settings I wanted from my other High Sierra install from before the advent of Macinabox.

     

    I'm happy to report that I have Nvidia GPU pass-through with sound. Video playback is a lot smoother within Splashtop with GPU pass-through.

     

    • Running the Macinabox High Sierra VM headless with an HDMI dongle.
    • Able to adjust the VM display to match that of my thin client.

     

    • Using the template editor modifies the XML in a way that loses all of the custom changes that SpaceInvaderOne was so kind to set up for us. Even though I was catching the majority of the changes when using the template editor, I was still missing some of the nuanced XML customizations.
  16. On 11/8/2019 at 1:44 PM, SpaceInvaderOne said:

    The macinabox template uses  custom ovmf files. If you change

    
    <os>
        <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
        <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
        <nvram>/etc/libvirt/qemu/nvram/e930dfa3-ce5f-4a14-a642-d140ed8035bd_VARS-pure-efi.fd</nvram>
      </os>

    to be as below you will be back without the screen corruption.

    
      <os>
        <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
        <loader readonly='yes' type='pflash'>/mnt/user/domains/MacinaboxCatalina/ovmf/OVMF_CODE.fd</loader>
        <nvram>/mnt/user/domains/MacinaboxCatalina/ovmf/OVMF_VARS.fd</nvram>
      </os>

    hope that helps 😀

    Is there a way to make this change persistent? Every time I make a change using the template editor, this section of the XML is modified.
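    Until there's a proper fix, one workaround is to re-apply the paths with sed after every template edit. This is just a sketch: here it patches a sample fragment written to /tmp, but on Unraid you'd run the same sed against the output of `virsh dumpxml` and feed the result back to `virsh define`:

```shell
# Sample fragment standing in for the VM's XML:
cat > /tmp/vm.xml <<'EOF'
<loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
<nvram>/etc/libvirt/qemu/nvram/e930dfa3-ce5f-4a14-a642-d140ed8035bd_VARS-pure-efi.fd</nvram>
EOF
# Swap both stock OVMF paths back to the Macinabox ones:
sed -i \
  -e 's|/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd|/mnt/user/domains/MacinaboxCatalina/ovmf/OVMF_CODE.fd|' \
  -e 's|/etc/libvirt/qemu/nvram/[^<]*|/mnt/user/domains/MacinaboxCatalina/ovmf/OVMF_VARS.fd|' \
  /tmp/vm.xml
grep -c 'MacinaboxCatalina/ovmf' /tmp/vm.xml   # should print 2
```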

  17. On 2/13/2020 at 8:53 AM, vcolombo said:

    I'm wondering, besides file size, what advantage is there of re-encoding all the audio tracks as 2-channel AAC? It seems to me this would be fine if you're mostly watching your movies on your phone, or through your TV speakers, but if you have any halfway decent home audio setup that you'd want to keep the discrete audio channels. Am I missing something here? Is there some magic that Unmanic is performing that reduces the audio track size while still maintaining quality?

     

    Thanks in advance!

    It's really a fallback measure, or, like you said, a way to make the streams compatible with just about any device out there without having Plex (or insert your media manager/server here) transcode the audio for your other devices.

     

    I, for instance, have an older setup wherein I have to pass my audio from the TV to my receiver via optical, and my TV only supports AC3 out. Eventually I'll upgrade to a receiver that accepts sound via HDMI instead of HDMI video pass-through only.

     

    On 2/14/2020 at 11:56 PM, vcolombo said:

    So after my Unmanic vs Tdarr testing, I've decided to use Tdarr for now. I like the simplicity of Unmanic, but Tdarr supports NVENC, so my encodes there are much faster.

    Does this mean you're using the modded version of Unraid that allows for Nvidia GPU access to docker apps such as Plex?

  18. On 2/21/2020 at 8:43 PM, Bandit_King said:

    Tdarr has an option to put cpu in low priority which this program DON'T HAVE!!! Plus Nvdia GPU encoding hevc encoding.

    Feel free to stop by the GitHub and submit some pull requests. I'm sure Josh.5 would appreciate the help, as he is a one-man army at the moment. I'd love to put my money where my mouth is, but I am not a coder (maybe in the future, though?)

     

    GPU encoding is on the road map as stated elsewhere in this thread.

     

    giphy.gif.9bb857def5cb76c77b709aa6ab3c085c.gif

    • Like 1
  19.  

    On 1/11/2020 at 10:15 AM, randomusername said:

    Is there any way to do this automatically or do you have to manually go through all movies one by one?

     

    On 1/12/2020 at 6:51 AM, rmeaux said:

    Not automatically but you can bulk edit right now and then proceed from there. 

     

    From your movies tab, you can click on "Movie Editor" and then select the movies you're happy with and choose "Unmonitor" and update that way. 

     

    I found it easier when I first started using radarr was to unmonitor everything and only re-monitor the few I needed or waiting on. 

    There is a way to do it automatically:

     

    Radarr > Settings > Media Management > (Toggle Advanced Settings > Shown) > File Management > Unmonitor Deleted Movies

    1387045084_ScreenShot2020-03-01at15_49_59.png.71b0c079383ab817cf78dd1385bda061.png

     

    I am not having the issue you are highlighting; hence, I do not have this option enabled. But I had to do this before, back when I was using SpaceInvaderOne's method of re-encoding from h.264 to h.265 with Handbrake, because Radarr/Sonarr would re-download titles during/after the conversion process.

     

    Hope this helps!

     

  20. 33 minutes ago, itimpi said:

    In my experience that is only true if you have them on a RAM location (which is where /tmp is located).    If you have them on physical media they do not seem to get cleared out.

    Roger that. Then I believe my /tmp folder is in RAM, as the folders are cleared out after a reboot; good to know.

     

    That being said, I haven't had any issues with RAM overflow while encoding with Unmanic (3 workers).
