Everything posted by Zer0Nin3r
-
unRAID Nvidia - Graphics Card Sleep Reset Script
Zer0Nin3r replied to veritas2884's topic in General Support
This is clutch! To think that this was all I had to do all these years. This method resolves any GPU initialization issues with Docker containers using an NVIDIA GPU (GTX 1060) after waking from sleep. What this script does is free up the GPU before the system goes to sleep. Lastly, I've found that you don't need to include the shebang in the S3 command fields. @testdasi Thank you for your help.

**Edit** I forgot to mention: you can enable power savings so that the GPU goes into a low-power state. For the longest time I didn't know you could do such a thing. Add the following command to your S3 Sleep plugin under the Wake-up Command field. This way, when the GPU isn't being used, it will go into a low-power state:

nvidia-smi -pm 1
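If you want to confirm the card is actually idling after a wake, here is a rough sketch, assuming the NVIDIA driver and its tools are installed (it degrades gracefully when they are not):

```shell
# Hedged sketch: check persistence mode and power state after waking.
# In the pstate column, P8 is the typical low-power idle state.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=persistence_mode,pstate,power.draw --format=csv
else
    echo "nvidia-smi not found; skipping check"
fi
```
-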
Issue seemed to have resolved itself over time. Thank you for the insights. Have a great week!
-
I've noticed that in the 6.12.2 & 6.12.3 updates, Terminal and Log no longer launch from the GUI while using Safari. Content blockers are off for the GUI, and I don't have any extensions that would interfere; or at least they haven't in the past. Testing in private browsing mode, you are immediately logged out when you try to launch Terminal. **Update** The issue is not present when logging in via IP address, but is present when logging in via the local domain name. Using the local domain name wasn't an issue before.
-
Yeah, the new notification makeover is taking some getting used to. For instance, you can't check or see the notification after upgrading to 6.12.3, because the "Update Unraid OS" window's button doesn't change from "Done" to "Finished" and it doesn't allow the notifications to shine through. If you don't think to open a new tab and check the GUI, you'd be waiting until the cows come home. 🐄
-
Been experiencing this issue more and more lately. Server has been up for three days. You can log in with the GUI, it then slows down, and then you can't even log out and log back in, whether on the local network or connecting over WireGuard. All the Docker services are still up and running, so you can still access those respective services. The issue persists even in incognito mode. I have found that a reboot of the server resolves the issue.
-
[Support] Josh5 - Unmanic - Library Optimiser
Zer0Nin3r replied to Josh.5's topic in Docker Containers
What you can do is stop the Unmanic container prior to launching your VM and vice versa — stopping your VM prior to launching Unmanic. This way you don't crash anything. If you pause Unmanic during an encoding process and then launch your VM, the GPU may not be released to the VM, resulting in issues. Like @Josh.5 said, you don't have to pass through your GPU to Unmanic; Unmanic can still encode using the CPU. @Eddie Seelke It is possible to run a single GPU in Unraid and also pass it through to your VM. You would have to pull the firmware from the GPU card and then load it into your VM. I had very limited success with it a few years ago with a GTX 1060. In the end, I sprung for a cheap GT 710 to keep Unraid happy even as a headless server.
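For anyone who wants to try the single-GPU route, the usual first step is dumping the card's vBIOS so it can be handed to the VM. A rough sketch, assuming a hypothetical PCI address of 0000:01:00.0 (yours will differ; find it with lspci):

```shell
# Hypothetical PCI address; find the real one with: lspci | grep -i nvidia
dev=/sys/bus/pci/devices/0000:01:00.0

if [ -e "$dev/rom" ]; then
    echo 1 > "$dev/rom"              # make the ROM readable
    cat "$dev/rom" > /tmp/vbios.rom  # dump it
    echo 0 > "$dev/rom"              # lock it again
    echo "dumped to /tmp/vbios.rom"
else
    echo "no device/ROM at $dev; adjust the address"
fi
```

The dumped file is then referenced from the VM's XML via a `<rom file='...'/>` line inside the hostdev block. Some cards also need the ROM's header trimmed first, which is beyond this sketch.
-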
For clarity, may I ask what you are trying to accomplish? If you are booting Unraid into GUI — the answer would be yes.
-
Looking to modify the MTU size so the excruciatingly slow download speeds will be fixed due to ISP constraints (5G Internet). Adjusting the MTU to 1350 has resolved the VPN issues on my other clients with this particular ISP. I am able to set the MTU at the Unraid system level via: Settings > Network Settings > Desired MTU > 1350

I tried the following without any luck:

Key 5 - Container Variable: VPN_OPTIONS
--fragment 1350 --mssfix
--fragment 1350 --mssfix 1350

Setting the 'mssfix 1350' parameter in the OpenVPN configuration file.

I can see in the logs that @binhex sets the MTU in the script, and I tried to find the script to adjust, but am unable to; it's not in the Appdata share...that's for sure.

OPTIONS IMPORT: adjusting link_mtu to 1624
DEBG 'start-script' stdout output: TUN/TAP device tun0 opened
net_iface_mtu_set: mtu 1500 for tun0
net_iface_up: set tun0 up

Any ideas on how I can resolve this issue? Thanks in advance!
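For reference, the directives one would normally put in the OpenVPN client config to force a smaller tunnel MTU look like this. The values are illustrative, not a known-good setting for this container, and the container's start script may override them:

```
# Illustrative OpenVPN client config fragment.
# tun-mtu caps the tunnel MTU; mssfix should sit below it
# to leave room for protocol overhead.
tun-mtu 1350
mssfix 1310
```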
-
I recommend that you don't uninstall @dee31797's Handbrake docker until we can get an answer from @Djoss on whether it will be supported in the future. I'm with you though. Handbrake with NVENC support is my secondary docker application. If you are automating your encodes, you may want to check out Unmanic, as it supports hevc_nvenc.
-
[Support] Josh5 - Unmanic - Library Optimiser
Zer0Nin3r replied to Josh.5's topic in Docker Containers
Quite the DVR collection you have there! 🏆 I have my server set up to use GPU HEVC encoding (speed at the cost of quality/artifacts). Nothing wrong with libx265; better quality, but longer encode times and more power usage, as it relies exclusively on the CPU. As for your episode failing, it could be a bad H.264 encode, or a small portion of the file is corrupted. This has happened to me before. Throwing those problematic videos into Handbrake allowed me to re-encode them into H.265.

Subtitles. Subtitles would cause video encodes to fail 98% of the time with Unmanic. This is being worked on and has improved in more recent releases.

/tmp. Is your transcoding cache using the /tmp directory? I suspect that in my case, either my gaming VM is not releasing the GPU fully and that's what is causing the crashes with Unmanic, OR I'm running out of RAM when encoding large video files. Either way, a reboot of the server has worked for me — not sure why though.

**Update** I don't think RAM is a factor in my case at this point, as 30-minute video files are failing now. I've seen it before though.
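On the /tmp angle: /tmp on Unraid is RAM-backed (tmpfs), so a transcode cache there competes with the encoder itself for memory. Two quick checks to see whether that is the bottleneck:

```shell
# tmpfs size and current usage for /tmp
df -h /tmp

# overall memory headroom (MemTotal / MemFree / MemAvailable)
head -n 3 /proc/meminfo
```
-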
Virt-Manager, Intel-GPU-Tools and more Dockers
Zer0Nin3r replied to dee31797's topic in Docker Engine
Just noticed this happened when checking for Docker updates today. Should we uninstall this docker then? -
[Support] Josh5 - Unmanic - Library Optimiser
Zer0Nin3r replied to Josh.5's topic in Docker Containers
Agreed. I've been having issues with all encodes failing. Looking at the portion of your log you posted, I decided to take a look at my Unraid system log and found this happening over and over in real time:

Jul 24 23:30:43 Tower kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
Jul 24 23:30:43 Tower kernel: caller _nv000708rm+0x1af/0x200 [nvidia] mapping multiple BARs
Jul 24 23:30:44 Tower kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
Jul 24 23:30:44 Tower kernel: caller _nv000708rm+0x1af/0x200 [nvidia] mapping multiple BARs
Jul 24 23:30:46 Tower kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]

I had a feeling there was some sort of conflict over the GPU wherein Unmanic is failing. Although, I don't have any issues using the same GPU for my gaming VM. And I always stop Unmanic before launching my gaming VM...so something is happening when I re-launch Unmanic and it cannot interface with the GPU for some reason. I have rebooted the Unraid server in the past and I feel that this clears up the issue when it does occur. I wonder if using Dynamix S3 Sleep is causing an issue...but I didn't really have these kinds of encoding failures until this year. Will edit this post if/when I learn more.

**Update** Found this in one of the failed encodes:

[hevc_nvenc @ 0x560418cb3d40] dl_fn->cuda_dl->cuInit(0) failed -> CUDA_ERROR_UNKNOWN: unknown error
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
Conversion failed!

Also, Unmanic is working again after rebooting the server.
-
Search of SMB shares not working (MacOS client)
Zer0Nin3r commented on CS01-HS's report in Prereleases
Do we have to input share-specific folders? Would stopping at '/mnt/user/' allow the Spotlight flag to be recursive? This is what the default looks like:

#unassigned_devices_start
#Unassigned devices share includes
include = /tmp/unassigned.devices/smb-settings.conf
#unassigned_devices_end
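For anyone experimenting: upstream Samba exposes Spotlight indexing as a per-share option, which suggests the workaround is scoped per share rather than recursively at /mnt/user/. A hypothetical smb-extra.conf fragment (share name and path are examples, and Samba must be built with Spotlight support):

```
[MyShare]
   path = /mnt/user/MyShare
   spotlight = yes
```
-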
😳 Gadzooks! Now I know why Google is moving away from unlimited cloud storage even for universities. 😜
-
[Support] Josh5 - Unmanic - Library Optimiser
Zer0Nin3r replied to Josh.5's topic in Docker Containers
This feature has been requested many times over and I believe is on the roadmap.

1) Do you have Unmanic set to include closed captions? If so, try turning that off. There were issues with past releases wherein some of the CC embedded in a video file would throw an error in FFmpeg, and then Unmanic would keep retrying those files. But I see that this has been fixed now: "Removes the subtitle stream from the container. This is useful if you intend to supply your own subtitles for your library WARNING: Unsupported subtitles will always be removed"

2) If you set Unmanic so that it is not including CCs and it still fails, then there is something wrong with the video file. For this I turn to Handbrake (and there is a Docker version of Handbrake that supports GPU encoding). I have yet to run into a problematic video file that Handbrake couldn't handle. Try that.

Also, don't have Unmanic NVENC and Handbrake NVENC trying to access the same GPU at the same time, or you're asking for trouble.
-
Search of SMB shares not working (MacOS client)
Zer0Nin3r commented on CS01-HS's report in Prereleases
I just noticed that this was happening to me now that I have upgraded to Unraid 6.9. It's a real bummer that we have to do a workaround in the meantime. My only hesitancy with this workaround is that I know I'll forget to remove the added code once this resolves itself. I just submitted a feedback/bug report from within Unraid's dashboard and linked back to this post in the bug report. Hopefully the Unraid team pushes out an official patch. 🤞🏼
-
Yeah, I didn't get the November 2020 memo either, LoL. So, is it cool if we skip trying to roll back the modified Nvidia Unraid build on 6.8.3 and simply back up our USB thumb drive and upgrade to v6.9? Looks like 6.9 went live since the time of your post. Thanks in advance for the advice! **Update** Never mind! Found the answer. 😅
-
[Support] Josh5 - Unmanic - Library Optimiser
Zer0Nin3r replied to Josh.5's topic in Docker Containers
It depends on your GPU and how many H.265 encoding threads it can handle. Nvidia has a breakdown in one of their developer sections with a grid showing the capabilities of the various GPUs, which will tell you how many streams your GPU can handle. I have a GTX 1060 that I use with Unmanic, and I set it to two workers, as I noticed my encodes for a large queue finished faster than with three workers, believe it or not.
-
Guide: Setting up a Time Machine Share on your Unraid 6.7 Server
Zer0Nin3r replied to SpencerJ's topic in General Support
And I have the opposite problem. 😆 I can back up, but cannot restore from the SMB Unraid Time Machine share. -
Guide: Setting up a Time Machine Share on your Unraid 6.7 Server
Zer0Nin3r replied to SpencerJ's topic in General Support
I'm able to do backups to the Unraid Time Machine share, but now that I want to migrate over to a new system, I can connect to the server in Migration Assistant from a fresh install and that's about it. None of the Time Machine backups show up on the next screen after connecting to the server via SMB. It's been hit or miss: I remember a couple of months ago when doing a restore, I was able to see the Time Machine backups in Migration Assistant, but I wasn't able to restore; I had to use the secondary backup drive. I know that this doesn't really help anyone, but at least it goes to show that Time Machine backups over SMB need some TLC from the Unraid team.
-
Use the HDMI dongle, and don't use Microsoft Remote Desktop/RDP. Instead, use Parsec. This way you can still hear the audio passed through to your VR headset. You will also want to select your VR audio device as the output in Windows' sound settings. Hope this helps. Or if you fixed it already, @adnix42, share your steps to get VR + sound working and mark your thread as [SOLVED].
-
Headless hardware accelerated remote desktop on Linux VM
Zer0Nin3r replied to zeus83's topic in VM Engine (KVM)
Have you tried an HDMI dongle? I want to say that the GPU drivers will want to detect a physical display. If that's the case, an HDMI dongle that mimics an actual display may be what you are looking for. I game AAA titles in Windows 10 with GPU pass-through no problem; for that to work, I need to have an HDMI dongle. Also, do you have a second GPU for Unraid to use? You need a second GPU if you are trying to pass through your primary GPU in the first slot.
-
I've only needed an HDMI dongle for GPU pass-through for gaming with the Windows 10 VM. Then again, I also had to install a cheap GPU for Unraid; I was never able to get single-GPU pass-through working. Also, I am not running an HDMI dongle on the GPU allocated to Unraid. Running a Gigabyte Designare X399 board with a TR4 socket.
-
What if you SSH into the server and restart the Xorg display service instead of having to reboot? There are some talks about HDMI dongles in this thread.
-
So, it's never a good idea to be working directly within /mnt/diskX (from what I have gathered on the forums). What I was referring to in my original post is creating snapshots of my VMs as a means of making lightweight backups without wasting space; the alternative would be to create an exact copy of a VM image, but then you are wasting space. With the cp --reflink command, you are simply noting the changes in the blocks of data from the original image. So, if I were to make some changes to my gaming rig VM and I messed something up, I can just reverse the cp --reflink copy and go back to the state my VM was in before the changes were made.

My understanding with snapshots is that if the master file (the disk image in this case) gets corrupted, then the reflinks will be corrupted too. Another thing to note with the reflink flag is that Unraid may show the reflink copy as taking up the full image size when, because it is a snapshot, it is not actually using all that space. Example: if I have a 100 GB VM image and I make a snapshot after installing, say, a 50 MB program, then the snapshot should only take up about 50 MB, since those are the only data blocks that have changed in the VM image. However, Unraid will show the new reflink snapshot as taking up 100 GB when it is not really using that much space (you should be able to see this inside of Terminal or Midnight Commander).

Going back to my original point: in order to make snapshots of my VMs using the --reflink flag on the cp command, I have to be in /mnt/cache/domains/location_of_my_VM_image for the command to work. If I try the --reflink flag in /mnt/user/domains/location_of_my_VM_image (which, like I said earlier, is where you typically always want to be working), the command fails, because I formatted my disks to XFS before the format revision that now supports the --reflink flag (reflink = reference link). Furthermore, I was commenting on the fact that even though the revision to the XFS format allows us to take advantage of snapshots, I would have to reformat all of my XFS partitions to take advantage of this new feature, which does not seem like a possibility for me now unless I had another array capable of receiving a backup of my data so I could make the changes.

Why XFS? Because BTRFS was not recommended in the Unraid documentation as being stable enough for the array if you don't want a possible chance of data corruption. However, BTRFS was/is fine for the cache pool. So, that is what I am doing when making my VM snapshots: I am working directly in the cache pool, as my VMs reside on the cache per the share preferences.
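For anyone curious, a minimal sketch of the snapshot step. The paths here are throwaway examples; a real snapshot would target something like /mnt/cache/domains/<vm>/vdisk1.img. Note that --reflink=always fails fast on a filesystem without reflink support, while --reflink=auto silently falls back to a full copy:

```shell
# Demo in a temp file; a real run would use the VM's vdisk path.
src=$(mktemp)
printf 'pretend this is a vdisk' > "$src"

# Clone data blocks where the filesystem supports it (XFS with
# reflink=1, BTRFS); otherwise fall back to a normal copy.
cp --reflink=auto "$src" "$src.snap"

cmp -s "$src" "$src.snap" && echo "snapshot matches original"
```

You can check whether an existing XFS partition was formatted with reflink support by looking for reflink=1 in the output of xfs_info on that mount.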