Zer0Nin3r

Everything posted by Zer0Nin3r

  1. Looking to modify the MTU size so the excruciatingly slow download speeds caused by ISP constraints (5G internet) get fixed. Adjusting the MTU within Unraid's settings to 1350 has resolved the VPN issues on my other clients with this particular ISP. I am able to set the MTU at the Unraid system level via Settings > Network Settings > Desired MTU > 1350. I tried the following without any luck:
     Key 5 - Container Variable: VPN_OPTIONS
     --fragment 1350 --mssfix
     --fragment 1350 --mssfix 1350
     Setting the 'mssfix 1350' parameter in the OpenVPN configuration file.
     I can see in the logs that @binhex sets the MTU in the script, and I tried to find the script to adjust, but I'm unable to; it's not in the appdata share... that's for sure.
     OPTIONS IMPORT: adjusting link_mtu to 1624
     DEBG 'start-script' stdout output: TUN/TAP device tun0 opened
     net_iface_mtu_set: mtu 1500 for tun0
     net_iface_up: set tun0 up
     Any ideas on how I can resolve this issue? Thanks in advance!
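     To be explicit about what I'm trying, here's a hedged sketch of forcing the MTU directives straight into the OpenVPN client config; the container name and config path below are placeholders I made up, not what binhex actually uses:

```
# Placeholder path: point this at whatever .ovpn your VPN container actually loads.
OVPN=/mnt/user/appdata/binhex-delugevpn/openvpn/client.ovpn

# Standard OpenVPN directives for shrinking the tunnel MTU.
cat >> "$OVPN" <<'EOF'
tun-mtu 1350
fragment 1300
mssfix 1300
EOF

# Restart the container so OpenVPN re-reads the config (name is a placeholder).
docker restart binhex-delugevpn
```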
  2. I recommend that you don't uninstall @dee31797's Handbrake docker until we can get an answer from @Djoss on whether it will be supported in the future. I'm with you though. Handbrake with NVENC support is my secondary docker application. If you are automating your encodes, you may want to check out Unmanic, as it supports hevc_nvenc.
  3. Quite the DVR collection you have there! 🏆 I have my server set up to use GPU HEVC encoding (speed at the cost of quality/artifacts). Nothing wrong with libx265; better quality, but longer encode times and more power usage as it relies on the CPU exclusively. As far as your episode failing goes, it could be a bad H.264 encode, or a small portion of the file is corrupted. This has happened to me before. Throwing those problematic videos into Handbrake allowed me to re-encode them into H.265.
     Subtitles: subtitles would cause video encodes to fail 98% of the time with Unmanic. This is being worked on and has improved in more recent releases.
     /tmp: is your transcoding cache using the /tmp directory?
     I suspect that in my case either my gaming VM is not releasing the GPU fully and that's what is causing the crashes with Unmanic, or I'm running out of RAM when encoding large video files. Either way, a reboot of the server has worked for me, though I'm not sure why.
     **Update** I don't think RAM is a factor in my case at this point, as 30-minute video files are failing now. I've seen it before though.
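     For what it's worth, here's roughly the trade-off in plain ffmpeg terms; this is only a sketch with placeholder filenames and quality values, not Unmanic's exact command line:

```
# CPU encode: better quality per bit, but slow and power-hungry.
ffmpeg -i input.mkv -c:v libx265 -crf 22 -preset medium -c:a copy out_cpu.mkv

# GPU encode via NVENC: much faster, usually larger files at similar quality.
ffmpeg -i input.mkv -c:v hevc_nvenc -preset slow -rc vbr -cq 28 -c:a copy out_gpu.mkv
```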
  4. Just noticed this happened when checking for Docker updates today. Should we uninstall this docker then?
  5. Agreed. I've been having issues with all encodes failing. Looking at the portion of your log you posted, I decided to take a look at my Unraid system log and found this happening over and over in real time:
     Jul 24 23:30:43 Tower kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
     Jul 24 23:30:43 Tower kernel: caller _nv000708rm+0x1af/0x200 [nvidia] mapping multiple BARs
     Jul 24 23:30:44 Tower kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
     Jul 24 23:30:44 Tower kernel: caller _nv000708rm+0x1af/0x200 [nvidia] mapping multiple BARs
     Jul 24 23:30:46 Tower kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
     I had a feeling there was some sort of conflict over the GPU that is causing Unmanic to fail, although I don't have any issues using the same GPU for my gaming VM, and I always stop Unmanic before launching the VM. So something is happening when I re-launch Unmanic and it cannot interface with the GPU for some reason. I have rebooted the Unraid server in the past, and I feel that this clears up the issue when it does occur. I wonder if using Dynamix S3 Sleep is causing an issue, but I didn't really have these kinds of encoding failures until this year. Will edit this post if/when I learn more.
     **Update** Found this in one of the failed encodes:
     [hevc_nvenc @ 0x560418cb3d40] dl_fn->cuda_dl->cuInit(0) failed -> CUDA_ERROR_UNKNOWN: unknown error
     Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
     Conversion failed!
     Also, Unmanic is working again after rebooting the server.
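     If anyone else hits this, a quick hedged way to check whether the card/driver is wedged before resorting to a reboot (the container name is a placeholder):

```
# If the driver itself is wedged, this hangs or errors on the host.
nvidia-smi

# Check whether the Unmanic container can still see the GPU.
docker exec unmanic nvidia-smi

# Sometimes recreating the container is enough; otherwise a reboot it is.
docker restart unmanic
```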
  6. Do we have to input share-specific folders? Would stopping at '/mnt/user/' allow the Spotlight flag to be recursive? This is what the default looks like:
     #unassigned_devices_start
     #Unassigned devices share includes
     include = /tmp/unassigned.devices/smb-settings.conf
     #unassigned_devices_end
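     In case it helps the discussion, this is the sort of per-share stanza I'm asking about; purely a sketch, assuming the Samba build supports the Spotlight option, with the share name and path as placeholders:

```
# Append a hypothetical Spotlight-enabled share stanza to Unraid's SMB extras.
cat >> /boot/config/smb-extra.conf <<'EOF'
[Media]
   path = /mnt/user/Media
   spotlight = yes
EOF
```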
  7. 😳 Gadzooks! Now I know why Google is moving away from unlimited cloud storage even for universities. 😜
  8. This feature has been requested many times over and I believe it is on the roadmap.
     1) Do you have Unmanic set to include closed captions? If so, try turning that off. There were issues with past releases wherein some of the CC tracks embedded in a video file would throw an error in FFmpeg, and Unmanic would keep retrying those files. But I see that this has been fixed now: "Removes the subtitle stream from the container. This is useful if you intend to supply your own subtitles for your library WARNING: Unsupported subtitles will always be removed"
     2) If you set Unmanic so that it is not including CCs and the file still fails, then there is something wrong with the video file. For this I turn to Handbrake (and there is a docker version of Handbrake that supports GPU encoding). I have yet to run into a problematic video file that Handbrake couldn't handle. Try that.
     Also, don't have Unmanic NVENC and Handbrake NVENC trying to access the same GPU at the same time or you're asking for trouble.
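     If you just want to sanity-check a single file, you can also strip the subtitle streams manually and see whether it then encodes; a quick sketch with placeholder filenames:

```
# Remux without subtitle streams (no re-encode) to test whether the subs are the culprit.
ffmpeg -i input.mkv -map 0 -map -0:s -c copy no_subs.mkv
```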
  9. I just noticed that this was happening to me now that I have upgraded to Unraid 6.9. It's a real bummer that we have to do a workaround in the meantime. My only hesitancy with this workaround is that I know I'll forget to remove the added code once this resolves itself. I just submitted a feedback/bug report from within Unraid's dashboard and linked back to this post in the bug report. Hopefully the Unraid team pushes out an official patch. 🤞🏼
  10. Yeah, I didn't get the November 2020 memo either, LoL. So, is it cool if we skip trying to roll back the modified Nvidia Unraid build on 6.8.3 and simply back up our USB thumb drive and upgrade to v6.9? Looks like 6.9 went live since the time of your post. Thanks in advance for the advice!
     **Update** Never mind! Found the answer. 😅
  11. It depends on your GPU and how many H.265 encoding sessions it can handle. Nvidia has a breakdown in one of their developer sections: a grid showing the capabilities of the various GPUs, including how many streams each can handle. I have a GTX 1060 that I use with Unmanic, and I set it to two workers because, believe it or not, I noticed a large queue finished faster with two workers than with three.
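     If you want to see it live, you can watch how many NVENC sessions are actually in use while the workers run; this is a standard nvidia-smi query, nothing Unmanic-specific:

```
# Poll the active NVENC session count and average encode FPS every 5 seconds.
nvidia-smi --query-gpu=encoder.stats.sessionCount,encoder.stats.averageFps --format=csv -l 5
```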
  12. And I have the opposite problem. 😆 I can back up, but cannot restore from the SMB Unraid Time Machine share.
  13. I'm able to do backups to the Unraid Time Machine share, but now that I want to migrate over to a new system, I can connect to the server in Migration Assistant from a fresh install and that's about it. None of the Time Machine backups show up in the next screen after connecting to the server via SMB. It's been hit or miss: I remember a couple of months ago, when doing a restore, I was able to see the Time Machine backups in Migration Assistant, but I wasn't able to restore; I had to use the secondary backup drive. I know this doesn't really help anyone, but at least it goes to show that Time Machine backups over SMB need some TLC from the Unraid team.
  14. Use the HDMI dongle and don't use Microsoft Remote Desktop/RDP. Instead use Parsec. This way you can still hear the audio passed through to your VR headset. You will also want to select your VR audio as the output in Windows' sound settings. Hope this helps. Or if you fixed it already @adnix42, share with us your steps to get VR + sound to work and mark your thread as [SOLVED].
  15. Have you tried an HDMI dongle? I want to say that the GPU drivers will want to detect a physical display. If that's the case, an HDMI dongle that mimics an actual display may be what you are looking for. I game AAA titles in Windows 10 with GPU pass-through no problem; for that to work, I need to have an HDMI dongle. Also, do you have a second GPU for Unraid to use? You need one if you are trying to pass through your primary GPU in the first slot.
  16. I've only needed an HDMI dongle for GPU pass-through for gaming with the Windows 10 VM. Then again, I also had to install a cheap GPU for Unraid; I was never able to get single-GPU pass-through working. Also, I am not running an HDMI dongle on the GPU allocated to Unraid. Running a Gigabyte X399 Designare board with a TR4 socket.
  17. What if you SSH into the server and restart the Xorg display service instead of having to reboot? There is some talk about HDMI dongles in this thread.
  18. So, it's never a good idea to be working directly within /mnt/diskX (from what I have gathered on the forums). What I was referring to in my original post is creating snapshots of my VMs as a means to create lightweight backups without wasting space; the alternative would be to create an exact copy of a VM image, but then you are wasting space. With the cp --reflink command, you are simply noting the changes in the blocks of data from the original image. So, if I were to make some changes to my gaming rig VM and I messed something up, I could just reverse the cp --reflink copy and go back to the state my VM was in before the changes were made. My understanding with snapshots is that if the master file (the disk image in this case) gets corrupted, then the reflinks will be corrupted too.
     Another thing to note with the --reflink flag is that Unraid will/may show the full image size for the reflink copy when, because it is a snapshot, it is not actually taking up all that space. Example: if I have a 100 GB VM image and I take a snapshot after installing, say, a 50 MB program, then the snapshot should only take up about 50 MB, since those are the only data blocks that have changed in the VM image. However, Unraid will show the new reflink snapshot as taking up the full 100 GB when it is not really using that much space (you should be able to see this inside of Terminal or Midnight Commander).
     Going back to my original point: in order to make snapshots of my VMs using the --reflink flag with cp, I have to be in /mnt/cache/domains/location_of_my_VM_image for the command to work. If I try the --reflink flag in /mnt/user/domains/location_of_my_VM_image (which, like I said earlier, is where you typically always want to be working), the command fails because I formatted my disks to XFS before the format revision that now supports the --reflink flag (reflink = reference link). Furthermore, even though the revision to the XFS format lets us take advantage of snapshots, I would have to reformat all of my XFS partitions to use this new feature, which does not seem like a possibility for me right now unless I had another array capable of receiving a backup of my data so I could make the changes.
     Why XFS? Because BTRFS was not recommended in the Unraid documentation as being stable enough for the array if you don't want any chance of data corruption. However, BTRFS was/is fine for the cache pool. So, that is what I am doing when I make my VM snapshots: I work directly in the cache pool, as my VMs reside on the cache per the share preferences.
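     For anyone curious, this is the basic shape of what I'm describing; a sketch with placeholder paths, and it only works on a reflink-capable filesystem (newer XFS or BTRFS):

```
# Instant, space-efficient snapshot: only blocks that change afterwards get stored twice.
# Run against the cache path, not /mnt/user, so the reflink actually happens.
cp --reflink=always /mnt/cache/domains/gaming/vdisk1.img \
   /mnt/cache/domains/gaming/vdisk1.snap.img

# Rolling back is just copying the snapshot over the live image (with the VM stopped).
cp --reflink=always /mnt/cache/domains/gaming/vdisk1.snap.img \
   /mnt/cache/domains/gaming/vdisk1.img
```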
  19. I'd be curious to know if you can use the 2nd NIC to isolate Unraid traffic to a VPN connection. Since WireGuard is not currently supported natively in the Linux kernel on Unraid v6.8.3, and I am trying to be more resource efficient with my dockers, I was thinking of having my router connect to my VPN as a client and then route all of the traffic from the second NIC through the VPN. Sure, @binhex has built VPN support into some of their dockers, but if I can free up some system resources by not having to download those dependencies or use extra system processes and RAM, and offload the VPN work to the router at the expense of VPN speed, that is something I'm willing to explore. Just trying to streamline my dockers and have my cake and eat it too. Although, I only want to designate specific dockers to utilize the second NIC; the others can remain in the clear. I was trying to see if you could specify the NIC for individual dockers, but it does not look like you are able to do so.
     I tried to get the WireGuard plugin to connect to my VPN provider, but it won't connect using the configuration files I was given. That being said, I'd be curious to know if we can do split tunneling when the new version of Unraid comes out and WireGuard is baked into the kernel. Otherwise, I was thinking: maybe I can set up one WireGuard docker and then simply route the individual dockers through that one docker for all of my VPN needs on Unraid (see the sketch below). But I don't know how I would go about doing that, and besides, there are other threads discussing this matter.
     Anyway, thanks to anyone reading this. Just thinking aloud for a moment to see if anyone else may know the answer. Until then, I'll continue searching in my free time. Oh, and if anyone knows of some "Networking IP Tables for Dummies" type of tutorials, let me know. 🙂
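     The usual way to do the "one VPN docker, many clients" routing is Docker's container network mode; here's a hedged sketch where the container and image names are placeholders, not anything specific:

```
# Attach another container to the VPN container's network namespace, so all of
# its traffic enters and leaves through the VPN container.
docker run -d --name other-app --network container:vpn some/other-app-image
# Note: any ports you want to reach must be published on the VPN container itself.
```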
  20. I'm siding with @dboris on this one. As a layperson who does this in their spare time and not at a professional level, it's kind of hard to be aware of everything and anything, especially since the Unraid wiki is dead, technology, software and techniques change on a regular basis, and Invision Community's search and the forum as a platform suck (to some degree). But if the wiki is not updated to act as a central location for insightful information and guides, then we are left to using the forum's wonky search or an external search engine to parse the forums for the information we are looking for. Even then, the efficiency of doing that is horrid: bouncing between a patchwork of forum topics and threads, sometimes opening up multiple tabs to reference other threads whose topics tie in with the problem one is trying to troubleshoot on any given day. Then, say you get tuckered or burnt out on whatever side project you're working on; you have to come back and repeat the whole process all over again, because god forbid you forget anything you haven't touched in a long while. If nobody wants to update the wiki and information is only relegated to fractured forum threads, then at least Discourse has some semblance of link bundling, summary building, and cross-topic suggestions built into its platform.
     I've tried different CPU pinnings for the 1950X for the gaming VM using the given mappings in the Unraid template, thinking the CPU/HT pairings were in line with the actual CPU 1 & CPU 2 sets, only to find out recently, years later, that the reported CPU/HT pairings are incorrect and do not account for the latency of the Infinity Fabric (there's a quick way to check your own pairings in the sketch below).
     But in your defense @testdasi, I also get where you are coming from. You only get back what you put in. Meaning, put in half the effort to understand all this crap and you will get mediocre results. Put in the effort to fully understand what is shared knowledge and reap the benefits and rewards that come with the community coming together to share information.
     In closing, I just want to thank everyone on this thread for taking the time to dissect the AMD Threadripper and Ryzen CPUs and sharing your findings with the rest of us, even if it feels like we are getting "into the weeds" with this knowledge. If I was in a better place, I would take all of this information and make it more accessible for the casual Unraid gamer and the less technically inclined users on this forum. But perhaps that day will be soon. Until then... don't hold your breath. 😉
     --- Other thoughts
     I see that there is a section for guides, but the problem is that when I become tired or bored of a particular project, that thread becomes stale. Sure, people can come in and post new findings, suggestions, etc., but then the topic becomes, yet again, fractured. There is no way for me to let anyone edit my original post to act as a quasi wiki page featuring the latest knowledge and information.
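     On the pinning point: if anyone wants to double-check their own core/hyperthread pairings rather than trusting the template, the standard Linux tools will tell you (nothing Unraid-specific here):

```
# Show which logical CPUs belong to which physical core, socket, and NUMA node.
lscpu --extended=CPU,CORE,SOCKET,NODE

# The same pairing info straight from sysfs, one entry per logical CPU.
grep . /sys/devices/system/cpu/cpu*/topology/thread_siblings_list
```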
  21. There is a way. Go to Settings > CPU Pinning and from there you can pin entire cores or hyperthreads across the board by clicking on the number(s). That will help save some time. And then you can fine tune the selections from there. It's not as granular as you described but it is better than clicking on each and every individual dot.
  22. @DaClownie - Have you noticed that with some of the encodes you end up with an output that is larger than the original input? I've been noticing this phenomenon lately, especially with some .ts (MPEG-2) encoded streams, but it's not consistent. For the most part, encoding with hevc_nvenc does save me some space (along with time and energy usage). I've noticed with Handbrake that you really have to crank down the quality when doing H.265 to get encode speeds that rival H.264, but then the files end up bloated, with worse picture quality than what you started with. I'm kind of torn between hevc_nvenc and libx265. On one hand, I save a butt load of time using the GPU to process the encodes, but I risk that some files come out bloated/enlarged, which negates the entire point of re-encoding to H.265. On the other hand, with libx265 I know my encodes will always net me anywhere from 60-80% in file size savings, at the expense of heat, energy draw, and thermal load on the CPU, not to mention the extended time it takes to perform the encodes on the CPU. I guess I will have to keep an eye on the outputs (see the sketch below), and if it keeps happening, I may just switch back to libx265 and drop the number of workers; that, and wait until we have access to advanced controls / command input that will guarantee hevc_nvenc encodes are always smaller than the source file.
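     A quick-and-dirty way to spot the bloated ones would be something like this; the paths, extensions, and directory layout are hypothetical, so adjust for your own library:

```
#!/bin/bash
# Flag any encode that came out larger than its source file.
for src in /mnt/user/media/originals/*.ts; do
  out="/mnt/user/media/encoded/$(basename "${src%.ts}").mkv"
  [ -f "$out" ] || continue
  if [ "$(stat -c%s "$out")" -gt "$(stat -c%s "$src")" ]; then
    echo "LARGER THAN SOURCE: $out"
  fi
done
```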
  23. You will not find success with PGS subtitles when trying to encode video files that have these types of subtitles embedded within the container, at least until this project matures. Your best bet is to look towards Handbrake or the other encoder that's been mentioned elsewhere in this thread (apologies, I don't remember the other encoder's name off the top of my head). SpaceInvaderOne has a tutorial on how to automate this with Handbrake. It's best to do some test encodes first to make sure you get your preset the way you want it, and then let it loose on your library, should you want to take on some of that risk. A safer way to automate would be with watch folders, where you manually bring the completed encodes into your library, working in batches to ensure everything goes smoothly. I was doing batches at first with the Handbrake automation and then transitioned over to Unmanic. Yes, I have disabled the subtitles and audio to make my encodes with Unmanic easier, but those are the sacrifices I am willing to make for now, until it comes time to convert my library to 4K; it's a problem I'll revisit when it comes time to upgrade my audio equipment.
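     If you want to check whether a file carries the image-based (PGS) subtitles before queueing it, something along these lines works; the filename is a placeholder:

```
# List subtitle streams; PGS tracks show up with the codec name 'hdmv_pgs_subtitle'.
ffprobe -v error -select_streams s \
  -show_entries stream=index,codec_name -of csv=p=0 input.mkv
```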
  24. Unless you're running the latest and greatest GPUs from Nvidia, the CPU gives a better encode with H.265/HEVC: better quality and better compression. It just takes a lot longer, which equates to more electricity being used. The GPU can perform the encodes a lot faster, but lacks B-frame support on anything lower than a GeForce GTX 1660, while CPU encoding supports B-frames, which is why we see such a reduction in file sizes. I would be curious to see 1) the final encode sizes on the Turing family GPU chipset (sans the GTX 1650) and 2) the encode quality coming from the Turing family chipset when compared to a CPU encode. Source: https://developer.nvidia.com/video-encode-decode-gpu-support-matrix
     I noticed this too @belliott_20, which is what brought me back to the forum tonight. I am probably misreading the Max # of concurrent sessions spec, as it probably doesn't apply to our case when it comes to Unmanic. I did notice that with two workers = no errors; with three workers = errors. I am not running in debugging mode, so I don't have any logs to share; sorry I am of no help in that sense. On the flip side, maybe we are lucky to be able to encode two streams at a time when looking at the Total # of NVENC column?
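     For anyone on a Turing card who wants to test the B-frame difference themselves, a hedged ffmpeg sketch; the filenames and values are placeholders, and -b_ref_mode only works on NVENC hardware that supports HEVC B-frames:

```
# hevc_nvenc with B-frames enabled; only GTX 1660-class (Turing) and newer
# NVENC blocks accept B-frames for HEVC.
ffmpeg -i input.mkv -c:v hevc_nvenc -preset slow -rc vbr -cq 28 \
  -bf 4 -b_ref_mode middle -c:a copy out_bframes.mkv
```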
  25. If you have a secondary GPU and give that to Unraid to use, then you don't have to pass through the vbios for the GPU you want to hand to the VM. I tried like mad to get single-GPU pass-through working and couldn't get the VM to POST even with vbios pass-through. So, I ended up purchasing a cheap GPU and putting it in a secondary PCI-E slot for Unraid to use. I've been happily passing through my GTX 1060 ever since. Now, let's say you want to pass through the GPU that Unraid is currently using: then yes, you will have to pass through the vbios for that GPU.