Zer0Nin3r


Posts posted by Zer0Nin3r

  1. On 10/26/2022 at 8:47 AM, Masterwishx said:

    Warning:

    Please use A75G's repository template for the official changedetection.io container from https://hub.docker.com/r/dgtlmoon/changedetection.io .

    The image by linuxserver does NOT contain the Playwright content fetcher.

    Is this still necessary? It looks like the Linuxserver version has since updated Playwright. 
     

    Quote

    2024-03-09

    Build Playwright from source because Microsoft's build and packaging process is awful.

    2024-03-08

    Build Playwright-python from source, add libjpeg.

     

  2. On 1/29/2021 at 4:12 AM, doobyns said:

    - First, in Custom commands before sleep:

    #!/bin/bash
    docker stop Plex-Media-Server

     

    - Then, in Custom commands after wake-up:

    #!/bin/bash
    docker start Plex-Media-Server

    This is clutch! To think that this was all I had to do for all these years. This method resolves any GPU initialization issues with Docker containers utilizing an NVIDIA GPU (GTX 1060) after waking from sleep. What this script command does is free up the GPU before the system goes to sleep.


    Lastly, I’ve found that you don’t need to include the shebang in the S3 command fields.

     

    @testdasi Thank you for your help. 

     

    **Edit**
    I forgot to mention: you can enable power saving, wherein the GPU goes into a low-power state. For the longest time I didn't know you could do such a thing. Add the following command to the S3 Sleep plugin under the Wake-up Command field; this way, when the GPU isn't being used, it will drop to a low-power state.

     

    nvidia-smi -pm 1
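    For anyone setting this up today, the combined S3 Sleep plugin fields end up looking something like this (a sketch based on the commands above; the container name is from doobyns' post, so adjust it to your own setup):

```shell
# S3 Sleep plugin -> "Custom commands before sleep":
docker stop Plex-Media-Server    # free the GPU before the system suspends

# S3 Sleep plugin -> "Custom commands after wake-up":
docker start Plex-Media-Server
nvidia-smi -pm 1                 # persistence mode, so the idle GPU can drop to a low-power state
```

    (As noted above, the shebang line isn't needed in the S3 plugin's command fields.)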

     

  3. On 8/7/2023 at 4:58 PM, 1812 said:

     

    Works fine for me either way on both machines I have running 6.12.3. For reference, I'm on Safari Version 16.5.1 (18615.2.9.11.7).

     

    On 8/8/2023 at 6:05 AM, tjb_altf4 said:

    sounds like a browser cache issue, try clearing it out.

    Issue seemed to have resolved itself over time. Thank you for the insights. Have a great week!

  4. I've noticed that with the 6.12.2 & 6.12.3 updates, Terminal and Log no longer launch from the GUI while using Safari.

    • Content blockers are off for the GUI
    • I don't have any extensions that would interfere. Or at least they haven't in the past.
    • Testing in private browser mode — you are immediately logged out when you try to launch Terminal.

     

    **Update**

    • Issue is not present when logging in via IP address, but is present when logging in via local domain name.
    • Using the local domain name wasn't an issue before. 
  5. On 7/15/2023 at 11:17 PM, Pete0 said:

    I only didn't see it right away because of the new notification thing in the Web GUI.

    Yeah, the new notification makeover is taking some getting used to. For instance, you can't check or see the notifications after upgrading to 6.12.3, because the "Update Unraid OS" window's button doesn't change from "Done" to "Finished" and it doesn't allow the notifications to shine through.

     

    If you don't think to open a new tab and check the GUI, you'd be waiting until the cows came home. 🐄

  6. Been experiencing this issue more and more lately. Server has been up for three days. You can log in to the GUI, it then slows down, and then you can't even log out and log back in, whether on the local network or connected via WireGuard. All the Docker services are still up and running, so you can still access those respective services. The issue even persists in incognito mode.
     

    I have been finding that a reboot of the server resolves the issue. 

  7. On 11/13/2022 at 5:27 PM, xlucero1 said:

    I'm sure everyone has their GPU passed through to their VM. The Nvidia Driver plugin says don't install it if you pass the GPU through to a VM. So how do I do it? Download the plugin anyway? VM and Unmanic should still work, or no?

     

    What you can do is stop the Unmanic container prior to launching your VM and vice versa — stopping your VM prior to launching Unmanic. This way you don't crash anything. If you pause Unmanic during an encoding process and then launch your VM, the GPU may not be released to the VM resulting in issues.

     

    Like @Josh.5 said, you don't have to passthrough your GPU to Unmanic; Unmanic can still encode using CPU.

     

    @Eddie Seelke It is possible to run a single GPU in Unraid and also pass it through to your VM. You would have to pull the firmware from the GPU card and then load it into your VM. I had very limited success with it a few years ago with a GTX 1060. In the end, I sprang for a cheap GT 710 to keep Unraid happy even as a headless server.

    • Looking to modify the MTU size to fix the excruciatingly slow download speeds caused by ISP constraints. (5G Internet)
      • Adjusting the MTU within Unraid's settings to 1350 has resolved the VPN issues on my other clients with regards to this particular ISP.
    • I am able to set the MTU at the Unraid system level by: Settings > Network Settings > Desired MTU > 1350

    I tried the following without any luck:

    • Key 5 - Container Variable: VPN_OPTIONS
      • --fragment 1350 --mssfix
      • --fragment 1350 --mssfix 1350
    • Setting the 'mssfix 1350' parameter in the OpenVPN configuration file.
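    In case it helps others searching: the MTU-related directives normally go directly in the client .ovpn file. Here's an untested sketch with illustrative values (mssfix is typically set a bit below tun-mtu to leave room for headers):

```
# client.ovpn fragment (illustrative values, not a tested recommendation)
tun-mtu 1350     # lower the tunnel MTU itself
mssfix 1310      # clamp TCP MSS below tun-mtu
# fragment 1300  # UDP tunnels only; adds overhead, try mssfix alone first
```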

    I can see in the logs that @binhex sets the MTU in the start script, and I tried to find the script to adjust it, but I am unable to; it's not in the Appdata share...that's for sure.

    OPTIONS IMPORT: adjusting link_mtu to 1624
    
    DEBG 'start-script' stdout output:
    TUN/TAP device tun0 opened
    net_iface_mtu_set: mtu 1500 for tun0
    net_iface_up: set tun0 up

     

    Any ideas on how I can resolve this issue? Thanks in advance!
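    For what it's worth, here's how I've been sanity-checking candidate MTU values from a shell (Linux iputils ping; example.com and 1350 are placeholders for your own target and MTU):

```shell
# Probe whether a candidate MTU fits without fragmentation.
mtu=1350
payload=$((mtu - 28))   # subtract 20-byte IP + 8-byte ICMP headers
ping -c 1 -W 2 -M do -s "$payload" example.com \
  || echo "packet of size $payload did not fit; try a smaller MTU"
```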

  8. 7 hours ago, ConnerVT said:

    I've been using Handbrake from the djaydev repository (by dee31797), but he has stopped development and all of his dockers are now deprecated. As I have only a lowly 4-core CPU in the server, this docker was a lifesaver, as it could make use of my Nvidia Quadro.

     

    Do you have any plans to support Nvidia transcoding in the future?

     

    I recommend that you don't uninstall @dee31797's Handbrake docker until we can get an answer from @Djoss whether it will be supported in the future. I'm with you though. Handbrake with NVENC support is my secondary docker application.

     

    If you are automating your encodes, you may want to check out Unmanic as that will support hevc-nvenc.

     

  9. On 7/25/2021 at 5:33 AM, Meller said:

    Why is it trying to use nvenc to encode? I don't have a GPU in my server. If I go to Settings > Video Encoding, I have the Video Codec set to HEVC and the Video Encoder set to libx265.

    I've encoded nearly 25,000 tv show episodes so far, and this is the only one that fails over and over, with a huge log file attached to it. 

    1. Quite the DVR collection you have there! 🏆
    2. I have my server set up to use GPU HEVC encoding (speed at the cost of quality/artifacts). Nothing wrong with libx265: better quality, but longer encode times and more power usage, as it relies exclusively on the CPU.
    3. As for the failing episode: it could be a bad H.264 encode, or a small portion of the file may be corrupted. This has happened to me before. Throwing those problematic videos into Handbrake allowed me to re-encode them into H.265.
      1. Subtitles. Subtitles would cause video encodes to fail 98% of the time with Unmanic. This has been worked on and improved in more recent releases.
      2. /tmp. Is your transcoding cache using the /tmp directory? I suspect that, in my case, either my gaming VM is not releasing the GPU fully and that's what is causing the crashes with Unmanic, OR I'm running out of RAM when encoding large video files. Either way, a reboot of the server has worked for me — not sure why, though.

    **Update**

    I don't think RAM is a factor in my case at this point, as 30-minute video files are failing now. I've seen it before, though.

  10. On 8/6/2021 at 3:28 PM, Squid said:

    this repository and all of the apps contained within have been blacklisted because the dockerHub repositories no longer exist (nor does the template repository CA utilized)

    Just noticed this happened when checking for Docker updates today. Should we uninstall this docker then?

  11. On 7/10/2021 at 10:13 PM, Meller said:

    Yea, I'm having a lot of super weird occurrences after the most recent update also. The UI/Dashboard for Unmanic becomes pretty much unresponsive, I get set_mempolicy: Operation not permitted in my logs, and I have one file that just keeps failing.

    [h264 @ 0x5605116f0680] SEI type 195 size 888 truncated at 48
    [h264 @ 0x5605116f0680] SEI type 170 size 2032 truncated at 928
    ...
    [h264 @ 0x5605116f0680] SEI type 33 size 2024 truncated at 16
    [h264 @ 0x5605116f0680] non-existing PPS 2 referenced
    Guessed Channel Layout for Input Stream #0.1 : 5.1

    Literally all it says in the log.  And it takes a good 10-20 seconds for that to even appear. 

    The most recent unmanic push... is weird.

    Agreed. I've been having issues with all encodes failing. Looking at the portion of your log you posted, I decided to take a look at my Unraid system log and found this happening over and over in real time:

    Jul 24 23:30:43 Tower kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
    Jul 24 23:30:43 Tower kernel: caller _nv000708rm+0x1af/0x200 [nvidia] mapping multiple BARs
    Jul 24 23:30:44 Tower kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
    Jul 24 23:30:44 Tower kernel: caller _nv000708rm+0x1af/0x200 [nvidia] mapping multiple BARs
    Jul 24 23:30:46 Tower kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]

     

    I had a feeling there was some sort of conflict over the GPU that is making Unmanic fail. Although, I don't have any issues using the same GPU for my gaming VM, and I always stop Unmanic before launching my gaming VM...so something is happening when I re-launch Unmanic and it cannot interface with the GPU for some reason.

     

    I have rebooted the Unraid server in the past and I feel that this clears up the issue when it does occur. I wonder if using the Dynamix S3 Sleep plugin is causing an issue...but I didn't really have these kinds of encoding failures until this year. Will edit this post if/when I learn more.

    **Update**
    Found this in one of the failed encodes:

    [hevc_nvenc @ 0x560418cb3d40] dl_fn->cuda_dl->cuInit(0) failed -> CUDA_ERROR_UNKNOWN: unknown error 
    Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height 
    Conversion failed! 

     

    Also, Unmanic is working again after rebooting the server. 

  12. On 3/20/2021 at 7:10 AM, bclinton said:

    Hi folks! Been using Unmanic for a few weeks. Finally finished re-encoding my library and it worked great. I have noticed in the log files that it seems to go back and try to re-encode a number of movies over and over again each time it scans. Is there a way to configure it so it does not try to re-encode something it has already completed? Thanks.

    This feature has been requested many times over and I believe is on the roadmap.

     

    1) Do you have Unmanic set to include closed captions? If so, try turning that off. There were issues with past releases wherein some of the CCs embedded in a video file would throw an error in FFmpeg, and Unmanic would keep retrying those files.

     

    But I see that this has been fixed now:

    "Removes the subtitle stream from the container. This is useful if you intend to supply your own subtitles for your library WARNING: Unsupported subtitles will always be removed"

     

    2) If you set Unmanic so that it is not including CCs and the encode still fails, then there is something wrong with the video file. For this I turn to Handbrake (and there is a Docker version of Handbrake that supports GPU encoding). I have yet to run into a problematic video file that Handbrake couldn't handle. Try that.

     

    Also, don't have Unmanic NVENC and Handbrake NVENC trying to access the same GPU at the same time or you're asking for trouble.

  13. On 2/26/2021 at 5:01 PM, jonathanm said:

    It's been deprecated.

     

    You will need to update to the 6.9 rc to use the replacement.

    Yeah, I didn't get the November 2020 memo either, LoL. So, is it cool that we skip trying to roll back the modified Nvidia Unraid build on 6.8.3 and simply back up our USB thumb drive and upgrade to v.6.9? Looks like 6.9 went live since the time of your post.

    Thanks in advance for the advice!

     

    **Update**

    Never mind! Found the answer. 😅

     

  14. On 1/23/2021 at 12:22 AM, JaseNZ said:

    Curious about the 3-session limit. Is this hard-coded by Nvidia?

    I have a 1070 in the server and anything over 3 sessions (workers) will fail any encoding.

    Not worried about it but was just curious.

    It depends on your GPU and how many H.265 encoding sessions it can handle. Nvidia has a breakdown in one of their developer sections with a grid of the various GPUs' capabilities, which will show you how many streams your GPU can handle. I have a GTX 1060 that I use with Unmanic, and I set it to two workers, as I noticed my encodes for a large queue finished faster than with three workers, believe it or not.

  15. On 4/23/2020 at 5:30 AM, cinereus said:

    This seems to be the most recent topic.

     

    I've followed the guide at https://wiki.unraid.net/UnRAID_6/Configuring_Apple_Time_Machine

     

    Regardless of whether I make it Time Machine (private) or just regular, I can mount the share, but Time Machine itself refuses to let me add it as a backup target.

     

    I've read loads of threads on here but haven't found anything that works...

    And I have the opposite problem. 😆

    I can back up, but cannot restore from the SMB Unraid Time Machine share. 

  16. On 6/18/2020 at 5:39 AM, spants said:

    Has anyone tried a restore to "bare metal" yet? When I last tried some time ago, I couldn't restore because the recovery procedure on the Mac couldn't find the network share.

    I'm able to do backups to the Unraid Time Machine share, but now that I want to migrate over to a new system, I can connect to the server in Migration Assistant from a fresh install and that's about it. None of the Time Machine backups show up in the next screen after connecting to the server via SMB. It's been hit or miss: a couple of months ago, when doing a restore, I was able to see the Time Machine backups in Migration Assistant, but I wasn't able to restore, and I had to use my secondary backup drive. I know that this doesn't really help anyone, but at least it goes to show that Time Machine backups over SMB need some TLC from the Unraid team.

  17. Use the HDMI dongle and don't use Microsoft Remote Desktop/RDP. Instead use Parsec. This way you can still hear the audio passed through to your VR headset. You will also want to select your VR audio as the output in Windows' sound settings. Hope this helps.

     

    Or if you fixed it already @adnix42, share with us your steps to get VR + sound to work and mark your thread as [SOLVED].

  18. Have you tried an HDMI dongle? I want to say that the GPU drivers want to detect a physical display. If that's the case, an HDMI dongle that mimics an actual display may be what you are looking for.

     

    I game AAA titles in Windows 10 with GPU pass-through, no problem. For that to work, I need to have an HDMI dongle.

     

    Also, do you have a second GPU for Unraid to use? You need a second GPU if you are trying to pass through your primary GPU in the first slot.

  19. I've only needed an HDMI dongle for GPU pass-through for gaming with the Windows 10 VM. Then again, I also had to install a cheap GPU for Unraid; I was never able to get single-GPU pass-through working. Also, I am not running an HDMI dongle on the GPU allocated to Unraid.

     

    Running a Gigabyte Designare X399 board with a TR4 socket.

  20. What if you SSH into the server and restart the Xorg display service instead of having to reboot?

     

    There are some talks about HDMI dongles in this thread.

     

  21. On 8/12/2020 at 7:55 PM, gilahacker said:

    Then one of us misunderstood what @Zer0Nin3r was asking about regarding `copy --reflink`, which requires the `reflink=1` flag to be set when the disk is formatted.

    On 8/13/2020 at 3:33 AM, Energen said:

    Since nobody explained what reflink was supposed to be/do I guess it could have also been a cool new pizza oven for XFS and Unraid.. 

    So, it's never a good idea to be working directly within /mnt/disk paths (from what I have gathered on the forums). What I was referring to in my original post is creating snapshots of my VMs as a means of lightweight backups without wasting space; the alternative would be to create an exact copy of a VM image, but then you are wasting space.

     

    With the cp --reflink command, you are simply recording the changes to the data blocks of the original image. So, if I make some changes to my gaming rig VM and mess something up, I can just reverse the cp --reflink copy and go back to the state my VM was in before the changes were made.

     

    My understanding with snapshots is that if the master file (the disk image in this case) gets corrupted, then the reflinks will be corrupted too. Another thing to note with the reflink flag is that Unraid will/may show the reflink copy as taking up the full image's amount of space when, because it is a snapshot, it is not actually using all that space.

     

    Example: if I have a 100 GB VM image and I take a snapshot after installing, say, a 50 MB program, then the snapshot should only take up about 50 MB, since those are the only data blocks that have changed in the VM image. However, Unraid will show the new reflink snapshot as taking up 100 GB, when it is not really using that much space (you should be able to see this inside of Terminal or Midnight Commander).

     

    Going back to my original point: in order to make snapshots of my VMs using the cp command's --reflink flag, I have to be in /mnt/cache/domains/location_of_my_VM_image for the command to work. If I try the --reflink flag in /mnt/user/domains/location_of_my_VM_image (and, as I said earlier, you typically want to be working out of the /mnt/user directory), the command fails, because I formatted my disks to XFS before the format revision that supports the --reflink flag (reflink = reference link).
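    Putting the above together, here is a minimal sketch of the snapshot-and-rollback flow (file names are illustrative; on the server you would run this from the real cache path, e.g. /mnt/cache/domains/..., on XFS formatted with reflink=1 or on BTRFS):

```shell
# Work in a scratch directory for this sketch; on Unraid, cd to the real
# cache path instead (reflinks fail through /mnt/user on pre-reflink XFS).
cd "$(mktemp -d)"
echo "pretend this is a vdisk" > vdisk1.img

# Snapshot: instant, shares unchanged data blocks with the original.
# --reflink=auto silently falls back to a full copy where reflinks are
# unsupported; use --reflink=always to get an error instead.
cp --reflink=auto vdisk1.img vdisk1.img.snap

echo "bad change" >> vdisk1.img                # something goes wrong in the VM...
cp --reflink=auto vdisk1.img.snap vdisk1.img   # ...roll back to the snapshot

cmp vdisk1.img vdisk1.img.snap && echo "rolled back"
```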

     

    Furthermore, I was commenting on the fact that even though the revision to the XFS format lets us take advantage of snapshots, I would have to reformat all of my XFS partitions to use this new feature, which does not seem like a possibility for me right now unless I had another array capable of receiving a backup of my data so I could make the changes.

     

    Why XFS? Because, per the Unraid documentation, BTRFS was not considered stable enough for the array if you want to avoid any chance of data corruption. However, BTRFS was/is fine for the cache pool.

     

    So, that is what I am doing when I am making my VM snapshots. I am working directly in the cache pool when I am making my VM snapshots as my VMs reside on the cache as per the share preferences.


     

     

  22. I'd be curious to know if you can use the 2nd NIC to isolate Unraid traffic to a VPN connection. Since WireGuard is not currently supported natively in the Linux kernel on Unraid v6.8.3, and I am trying to be more resource-efficient with my Dockers, I was thinking of having my router connect to my VPN as a client and then having the router send all traffic from the second NIC through the VPN.

     

    Sure, @binhex has built VPN support into some of their Dockers, but if I can free up some system resources by not having to download those dependencies and use extra system processes and RAM, and instead offload the VPN work to the router at the expense of VPN speed, that is something I'm willing to explore. Just trying to streamline my Dockers and have my cake and eat it too. That said, I only want to designate specific Dockers to use the second NIC; the others can remain in the clear.

     

    I was trying to see if you could specify a particular NIC for individual Dockers, but it does not look like you are able to do so.

     


     

    I tried to get the WireGuard plugin to connect to my VPN provider, but it won't connect using the configuration files I was given. That being said, I'd be curious to know whether we can do split tunneling once the new version of Unraid comes out and WireGuard is baked into the kernel.

     

    Otherwise, I was thinking...maybe I can set up one WireGuard Docker and then simply route the individual Dockers through that one for all of my VPN needs on Unraid. But I don't know how I would go about doing that, and there are other threads discussing this matter.

     

    Anyway, thanks to anyone reading this. Just thinking aloud for a moment to see if anyone else may know the answer. Until then, I'll continue searching in my free time. Oh, and if anyone knows of some "Networking IP Tables for Dummies" type of tutorials, let me know. 🙂