Everything posted by Zer0Nin3r

  1. I'd be curious to know if you can use the 2nd NIC to isolate Unraid traffic to a VPN connection. Since WireGuard is not yet supported natively in the Linux kernel on Unraid v6.8.3, and I am trying to be more resource efficient with my Dockers, I was thinking of having my router connect to my VPN as a client and then route all traffic from the second NIC through the VPN. Sure, @binhex has built VPN support into some of their Dockers, but if I can skip downloading those dependencies, save the extra processes & RAM, and offload the VPN work to the router at the expense of VPN speed, that is something I'm willing to explore. Just trying to streamline my Dockers and have my cake and eat it too. That said, I only want to designate specific Dockers to use the second NIC; the others can remain in the clear. I tried to find a way to specify a particular NIC for individual Dockers, but it does not look like you can. I also tried to get the WireGuard plugin to connect to my VPN provider, but it won't connect using the configuration files I was given. That being said, I'd be curious to know if we can do split tunneling when the new version of Unraid comes out and WireGuard is baked into the kernel. Otherwise, I was thinking...maybe I can set up one WireGuard Docker and then simply route the individual Dockers through that one container for all of my VPN needs on Unraid (rough sketch below). But I don't know how I would go about doing that, and there are other threads discussing this matter. Anyway, thanks to anyone reading this. Just thinking aloud for a moment to see if anyone else may know the answer. Until then, I'll continue searching in my free time. Oh, and if anyone knows of some "Networking IP Tables for Dummies" type of tutorials, let me know. 🙂
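     For the "route Dockers through one VPN container" idea: Docker can attach one container to another container's network stack, so everything the attached container does rides the tunnel. A rough, untested sketch; the image and container names here are just placeholders for whatever you actually run:

        # Start the VPN container first; it owns the network stack
        docker run -d --name=vpn --cap-add=NET_ADMIN your-vpn-image

        # Attach another container to the VPN container's network.
        # It shares vpn's interfaces, so all of its traffic uses the tunnel.
        docker run -d --name=qbittorrent --net=container:vpn your-qbittorrent-image

        # Note: publish any web UI ports on the vpn container, not the attached one

     One side effect worth knowing: if the VPN container stops, the attached container loses all connectivity, which doubles as a crude kill switch.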
  2. I'm siding with @dboris on this one. As a layperson who does this in their spare time and not at a professional level, it's kind of hard to be aware of everything and anything, especially since the Unraid wiki is dead, technology, software & techniques change on a regular basis, and Invision Community's search (and the forum as a platform) sucks to some degree. If the wiki is not updated to act as a central location for insightful information and guides, then we are left using the forum's wonky search or an external search engine to parse the forums for the information we are looking for. Even then, the efficiency is horrid: you bounce between a patchwork of forum topics and threads, sometimes opening multiple tabs to cross-reference other threads that tie in with whatever problem you are troubleshooting that day. Then, say you get tuckered out or burnt out on whatever side project you're working on; you have to come back and repeat the whole process all over again, because god forbid you forget anything you haven't touched in a long while. If nobody wants to update the wiki and information stays relegated to fractured forum threads, then at least Discourse has some semblance of link bundling, summary building, and cross-topic suggestions built into its platform.

     Case in point: I tried different CPU pinnings for the 1950X gaming VM using the mappings given in the Unraid template, thinking the CPU/HT pairings lined up with the actual CPU 1 & CPU 2 sets, only to find out years later, and only recently, that the reported CPU/HT pairings are incorrect and do not account for the latency of the Infinity Fabric.

     But in your defense @testdasi, I also get where you are coming from. You only get back what you put in. Meaning, put in half the effort to understand all this crap and you will get mediocre results. Put in the effort to fully understand what is openly shared knowledge and reap the benefits and rewards that come with the community coming together to share information.

     In closing, I just want to thank everyone on this thread for taking the time to dissect the AMD Threadripper and Ryzen CPUs and sharing your findings with the rest of us, even if it feels like we are getting "into the weeds" with this knowledge. If I were in a better place, I would take all of this information and make it more accessible for the casual Unraid gamer and the less technically inclined users in this forum. But perhaps that day will be soon. Until then...don't hold your breath. 😉

     ---

     Other thoughts: I see that there is a section for guides, but the problem is that when I become tired or bored of a particular project, that thread goes stale. Sure, people can come in and post new findings, suggestions, etc., but then the topic becomes, yet again, fractured. There is no way for me to let anyone edit my original post to act as a quasi wiki page featuring the latest knowledge and information.
  3. There is a way. Go to Settings > CPU Pinning, and from there you can pin entire cores or hyperthreads across the board by clicking on the number(s). That will help save some time, and you can then fine-tune the selections from there. It's not as granular as you described, but it is better than clicking on each and every individual dot.
  4. @DaClownie - Have you noticed that with some of the encodes, you end up with an output that is larger than the original input? I've been noticing this phenomenon lately, especially with some .ts (MPEG-2) encoded streams, but it's not consistent. For the most part, encoding with hevc_nvenc does save me some space (along with time and energy usage). I've noticed with Handbrake that you really have to crank down the quality for h.265 encode speeds to rival those of h.264, but then the files end up bloated, with worse picture quality than what you started with. I'm kind of torn between hevc_nvenc and libx265. On one hand, I save a buttload of time using the GPU to process the encodes, but I risk some files coming out bloated/enlarged, which negates the entire point of re-encoding to h.265. On the other hand, I know CPU encodes will always net me anywhere from 60 – 80% in file size savings, at the expense of heat, energy draw, and thermal load on the CPU, not to mention the extended time it takes to perform the encodes. I guess I will have to keep an eye on the outputs, and if it keeps happening, I may just switch back to libx265 and drop the number of workers; that, and wait until we have access to advanced controls / command input that can guarantee hevc_nvenc encodes will always be smaller than the source file.
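     For anyone who wants to compare the two encoders on their own material, this is roughly how I'd test a single file from the command line (untested sketch; the -cq/-crf values are just starting points to experiment with):

        # GPU: fast, but on pre-Turing cards (no B-frames) the output can bloat
        ffmpeg -i input.mkv -c:v hevc_nvenc -preset slow -rc vbr -cq 28 -c:a copy gpu.mkv

        # CPU: much slower, but consistently smaller at comparable quality
        ffmpeg -i input.mkv -c:v libx265 -preset medium -crf 28 -c:a copy cpu.mkv

        # Compare the results against the source
        ls -lh input.mkv gpu.mkv cpu.mkv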
  5. You will not find success with PGS subtitles when trying to encode video files that have these subtitles embedded within the container, at least until this project matures. Your best bet is to look towards Handbrake or the other encoder that's been mentioned elsewhere in this thread. (Apologies, I don't remember the other encoder's name off the top of my head.) SpaceInvaderOne has a tutorial on how to automate this with Handbrake. It's best to do some test encodes first to make sure you get your preset the way you want it, and then let it loose on your library, should you want to take on some of that risk. A safer way to automate would be with watch folders, in which you manually bring the completed encodes into your library, working in batches to ensure everything goes smoothly. I was doing batches at first with the Handbrake automation, and then transitioned over to Unmanic. Yes, I have disabled the subtitles and audio to make my encodes with Unmanic easier, but those are the sacrifices I am willing to make for now, until it comes time to convert my library to 4K; it's a problem I'll revisit when it comes time to upgrade my audio equipment.
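     If you'd rather strip the problem subtitle streams up front instead of letting the encoder choke on them, a remux is quick since nothing gets re-encoded. A minimal sketch (untested):

        # Copy video and audio untouched, drop every subtitle stream
        ffmpeg -i input.mkv -map 0:v -map 0:a -c copy -sn stripped.mkv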
  6. Unless you're running the latest and greatest GPUs from Nvidia, CPU gives a better encode with h.265/HEVC: better quality and better compression. It just takes a lot longer, which equates to more electricity being used. The GPU can perform the encodes a lot faster, but lacks B-frame support on anything lower than a GeForce GTX 1660. CPU encoding supports B-frames, which is why we see such a reduction in file sizes. I would be curious to see 1) the final encode sizes on the Turing family GPU chipset (sans the GTX 1650) and 2) the encode quality coming from the Turing family chipset when compared to a CPU encode. Source: https://developer.nvidia.com/video-encode-decode-gpu-support-matrix

     I noticed this too @belliott_20, which is what brought me back to the forum tonight. I am probably misreading the "Max # of concurrent sessions" column, as it probably doesn't apply to our case with Unmanic. I did notice that with two workers = no errors; with three workers = errors. I'm not running in debugging mode, so I don't have any logs to share; sorry I am of no help in this sense. On the flip side, maybe we are lucky to be able to encode two streams at a time, when looking at the "Total # of NVENC" column?
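     If anyone wants to watch the session behavior live while the workers spin up, nvidia-smi can report per-second encoder load (assuming you're on the Nvidia Unraid build with the drivers installed):

        # Per-second GPU stats; the 'enc' column shows NVENC utilization
        nvidia-smi dmon -s u

        # Or just list the processes currently using the GPU
        nvidia-smi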
  7. If you have a secondary GPU and give that one to Unraid to use, then you don't have to pass through the vbios for the GPU you want to give to the VM. I tried like mad to get single-GPU pass-through and couldn't get the VM to POST, even with vbios pass-through. So I ended up purchasing a cheap GPU and put it in a secondary PCIe slot for Unraid to use, and I've been happily passing through my GTX 1060 ever since. Now, let's say you want to pass through the GPU that Unraid is currently using; then yes, you will have to pass through the vbios for that GPU.
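     For reference, when you do need the vbios, it's a single <rom> line inside the GPU's hostdev block in the VM's XML. Something like this; the PCI address and rom path here are only examples, yours will differ:

        <hostdev mode='subsystem' type='pci' managed='yes'>
          <source>
            <address domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
          </source>
          <rom file='/mnt/user/isos/vbios/gtx1060.rom'/>
        </hostdev>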
  8. In my use case, I want to encode everything into h.265 for archiving, except the shows I DVR and delete after watching, which I don't want encoded to h.265. An easier fire-and-forget method I just set up today was this:

     • Stop Unmanic so it doesn't try to encode files while you're moving directories and content around.
     • In your media library share, create a folder /TV Show_Temp || example: /mnt/user/Plex/TV Show_Temp
     • Add /TV Show_Temp to your Sonarr & Plex Docker templates as a new path (see the mapping sketch below).
     • Under Sonarr, go to: Series > Series Editor
     • Select the TV shows you want to work on.
     • Change the root folder of the TV series with the drop-down menu at the bottom of your screen: Root Folder > Add a Different Path > TV Show_Temp
     • Log into your Plex dashboard: Settings > Manage > Libraries > TV Shows > Edit > Add Folders > Add TV Show_Temp
     • Manually move the shows that you do not want Unmanic to see to the new /TV Show_Temp folder.
     • Go back to Plex and scan your TV library so that the file paths automatically update.

     What this does from now on is that Sonarr will download the shows you don't want encoded (or seen by Unmanic) into your TV Show_Temp folder, and Plex will add them to your library automatically. No need to set up a separate TV library inside of Plex, so all your shows show up together, and no need to keep adding folder paths to Unmanic.

     ** You decide which shows get encoded automatically by editing the series you've already added and selecting the TV Show_Temp folder, or by making this selection when you add a new show. Don't forget to move the shows you want to hide from Unmanic into the new TV Show_Temp folder you created.

     There are also settings in Plex that will delete the episodes automatically for you, and settings within Sonarr that will unmonitor episodes when Sonarr sees a file being deleted.

     Hope this helps save some folks some time and headaches, and allows more time for play. 🙂

     (Screenshots of the Sonarr and Plex Docker templates and the corresponding Sonarr and Plex settings pages were attached here.)
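     On the Docker template step: adding the path just means another volume mapping on each container, which ends up as an extra -v flag in the underlying docker run command. Roughly like this (the container-side path /tv_temp is just my example; pick whatever you prefer):

        # Host path on the left, container path on the right
        -v '/mnt/user/Plex/TV Show_Temp':'/tv_temp':'rw'

     Whatever container path you pick, use the same one when you add the root folder inside Sonarr.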
  9. The default RF on the h.265 presets for Apple & Chromecast in Handbrake is RF 15. Try adjusting to 22 ± 1. Remember, a smaller RF results in better quality but a larger file. Even though the Handbrake team advises against it, you can also specify a target bitrate (2000 kbps is the default) and reduce file size and encode times that way (single pass) at the expense of quality. Using Constant Quality (CQ) will scale the stream bitrate higher for more complex scenes. Not sure how we would pass through the quality settings we want in Unmanic if we want to experiment with NVENC (see the sketch below).

     ---

     Do we still need to add --runtime=nvidia under Advanced View, now that I see this in the template? (Screenshot of the template was attached here.)

     Yes, hevc_nvenc is the encoder we will want to use now. Source: https://devblogs.nvidia.com/nvidia-ffmpeg-transcoding-guide/
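     On passing quality settings through to NVENC: if Unmanic ever exposes raw ffmpeg arguments, the constant-quality knobs would look something like this (my guess at a starting point, untested):

        # Constant-quality VBR with hevc_nvenc; lower -cq means better quality, bigger file.
        # -b:v 0 keeps a default bitrate cap from overriding the quality target.
        ffmpeg -i input.mkv -c:v hevc_nvenc -rc vbr -cq 24 -b:v 0 -c:a copy output.mkv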
  10. Yep. I turned off audio encoding for the same reason(s). I did the research, and passing through the native audio streams doesn't add too much to the final output; anywhere from a couple hundred megabytes to several hundred. For all the headaches audio encoding has caused...I simply turned it off. Plex can transcode the audio, and that doesn't take up much processing power compared to Plex transcoding video.
  11. @Mason736: Boot up your Catalina VM and press the 'Esc' key to get into the OVMF BIOS, then change the resolution to your liking. Make sure you have a remote desktop client installed, e.g., NoMachine or Splashtop. I highly recommend enabling auto login, as NoMachine won't be able to initialize the display until someone logs in.
  12. Ask and you shall receive. 😃 https://am.i.mullvad.net/ https://am.i.mullvad.net/torrent
  13. Ah, very cool. I guess there's no chance of getting it to work unless I were to reformat everything, which means I probably won't be able to enjoy the new functionality.
  14. Ha. I was just performing a quick search to make sure I wasn't double posting. Any chance cp --reflink will work with XFS in the next Unraid update? https://blogs.oracle.com/linux/xfs-data-block-sharing-reflink Currently the array is XFS while the cache is BTRFS. You can reflink files within the cache, but you cannot go from the array to the cache (XFS -> BTRFS). I also just tried to reflink the VM image from within the user share (array/XFS), and it results in "Operation not supported". So, it would be great to be able to reflink from within user shares, which (from my understanding) is the safer way to work with data on the array, versus working directly from the disk "shares".
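     For anyone who hasn't played with reflinks, this is the behavior I'm describing; the paths are just examples from my setup:

        # Within the BTRFS cache: instant copy-on-write clone, works today
        cp --reflink=always /mnt/cache/domains/vdisk1.img /mnt/cache/domains/vdisk1-clone.img

        # From an XFS user share: fails until reflink support lands
        cp --reflink=always /mnt/user/domains/vdisk1.img /mnt/user/domains/vdisk1-clone.img
        # cp: failed to clone ... Operation not supported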
  15. Did you check to see if this feature request has been submitted to the project's GitHub? The request may (or may not) already be open there. @flinte: you too; check the project's GitHub for existing feature requests such as your quality control one. @All: Have a great rest of the week into the weekend! Stay safe! Watch some movies/TV!
  16. Installed just fine, though I ran into the "Big Mouse" issue. Tried GPU pass-through with an Nvidia GTX 1060 plus a GPU bios dump; no such luck. With GPU pass-through, the screen would flicker, then the display would come up correctly with the OS trying to recover from a system crash, then the system would reboot and crash all over again in a boot loop. I run my VMs headless, but judging from these videos, we'll have better luck installing Ubuntu 18.04 LTS and then installing the Steam client in there: Linus Tech Tips | Craft Computing. Also, for those who run headless, I'm not sure what clients you are using, but Parsec supports Ubuntu 18.04 LTS. That's something I want to test when I come back to this side project.
  17. You can download the latest Steam OS ISO here: https://repo.steampowered.com/download/ But we are still having display issues.
  18. You can set it up on their website: https://boinc.bakerlab.org/rosetta/home.php How much disk space is everyone devoting to this? I just set a 10 GB cap for now. Anyone just go with the default settings? How's that working for you? Also, has anyone found that the BOINC manager lags on macOS? It gets stuck on one screen a lot (screenshot was attached here). It also appears you can do all of your configuration from the website after you've logged into your BOINC account in the manager software and synced it with the Docker client. Also, happy to report that I'm up and running! Thanks @Squid for bringing this to my attention with the banner announcement. (Although, it did give me pause real quick, as expressed by @alael.)
  19. That's what I figured. I've just created a document with the necessary changes that I need to remember to copy/paste back into the XML after changes. Nvidia pass-through has been finicky with Macinabox. The install is amazing and simple, but getting the OS to recognize my display dongle for the GPU, so that I can change the resolution on the thin client (Apple Screen Sharing or Splashtop), has been a PITA. The display is stuck at 1280x1024, with no sound.

     Update 07-MAR-20: So I was very careful to update my XML by hand and not use the template editor; I pretty much copy/pasted the settings I wanted from my other High Sierra install from before the invention of Macinabox. I'm happy to report that I have Nvidia GPU pass-through with sound, and video playback is a lot smoother within Splashtop with GPU pass-through. I'm running the Macinabox High Sierra VM headless with an HDMI dongle and am able to adjust the VM display to match that of my thin client. Using the template editor modifies the XML in a way that loses all of the custom changes that SpaceInvaderOne was so kind enough to set up for us. Even though I was catching the majority of the changes when I was using the template editor, I was still missing some of the nuanced XML customizations.
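     If you're hand-maintaining the XML like I am, it's worth snapshotting it before touching the template editor. Something like this from the Unraid terminal should work (substitute your own VM name and backup path):

        # Dump the VM's full libvirt XML to a backup file before editing
        virsh dumpxml "Macinabox High Sierra" > /boot/custom/macinabox-backup.xml

        # After a template-editor save, diff to see exactly what got clobbered
        virsh dumpxml "Macinabox High Sierra" | diff /boot/custom/macinabox-backup.xml -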
  20. Is there a way to make this change persistent? Every time I make a change using the template editor, this section of the XML is modified.
  21. It's really a fallback measure, or, like you said, a way to make the streams compatible with just about any device out there without having Plex (or insert your media manager/server here) transcode the audio for your other devices. I, for instance, have an older setup wherein I have to pass my audio from the TV to my receiver via optical, and my TV only supports AC3 out. Eventually I'll upgrade to a receiver that accepts sound via HDMI instead of HDMI video pass-through only. Does this mean you're using the modded version of Unraid that allows Nvidia GPU access for Docker apps such as Plex?
  22. Feel free to stop by the GitHub and submit some commits. I'm sure Josh5 would appreciate the help, as he is a one-man army at the moment. I'd love to put my money where my mouth is, but I am not a coder (maybe in the future though?). GPU encoding is on the roadmap, as stated elsewhere in this thread.
  23. There is a way to do it automatically: Radarr > Settings > Media Management > (toggle Advanced Settings to "Shown") > File Management > Unmonitor Deleted Movies. I am not having the issue that you are highlighting, hence I do not have this option enabled. But I had to do this before, when I was using SpaceInvaderOne's method of re-encoding from h.264 to h.265 with Handbrake, because I was running into the issue wherein Radarr/Sonarr would re-download titles during/after the conversion process. Hope this helps!
  24. Roger that. I believe my /tmp folder is in RAM then, as the folders are cleared out after a reboot; good to know. That being said, I haven't had any issues with RAM overflow while encoding with Unmanic (3 workers).
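     A quick way to confirm it, rather than inferring from reboot behavior (run from the Unraid terminal):

        # If /tmp lives in RAM, the filesystem shows as tmpfs (or rootfs,
        # which Unraid also keeps in RAM)
        df -h /tmp
        findmnt /tmp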
  25. The folders are cleared out automatically after a reboot. No maintenance is necessary, from my understanding. Unless you like strict order vs. chaos. 🤣