jbear

Members
  • Posts

    170
  • Joined

  • Last visited

Posts posted by jbear

I've tried to get this docker working in Unraid (installed from Docker Hub), but could not get it working 100% correctly, so I gave up.  Would be nice to have a version that has been tweaked for Unraid.

    • Upvote 1
  2. On 6/22/2023 at 9:01 PM, jbear said:

Kind of a feeler post: I'm having issues with transcoding 4K content in Plex; it appears this issue popped up some time during the 6.12 release candidate cycle.  No other changes to my setup, and I've reviewed all of my settings; there's no issue transcoding 1080p to 720p/480p etc.  Was curious if anyone else was having similar issues?

     

    Intel® Xeon E-2288G CPU. 

     

    In the meantime, I will continue to review my settings, and do more testing.

     

    Thanks.

     

I wanted to follow up on my own thread, as I continued to have issues transcoding.  I ended up removing the docker and image, then deleted the cache, codec, and drivers folders from the appdata/plex/library/application support/plex media server folder as referenced in this post: https://www.reddit.com/r/PleX/comments/12ikwup/plex_docker_hardware_transcoding_issue/. Then re-installed the Plex docker using the existing template.

     

    Thought this may help someone else.  Files that I couldn't play locally or remotely before are now working, very strange issue.
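For anyone wanting to script the same cleanup, here is a minimal sketch. The appdata path is an assumption based on a typical Unraid Plex template; point PLEX_LIB at your own location, and stop the Plex container before running it:

```shell
#!/bin/bash
# Sketch of the cleanup described above. PLEX_LIB is an assumed path --
# adjust it to your own Plex appdata location, with the container stopped.
PLEX_LIB="${PLEX_LIB:-/mnt/user/appdata/plex/Library/Application Support/Plex Media Server}"

for d in Cache Codecs Drivers; do
    rm -rf "${PLEX_LIB:?}/${d}"   # :? aborts if PLEX_LIB is unset/empty
done

echo "Removed Cache, Codecs and Drivers from: ${PLEX_LIB}"
```

Plex rebuilds all three folders on the next container start, so nothing permanent is lost.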

  3. 39 minutes ago, craigr said:

     

    Same CPU as you, no issues here:

     

[screenshot attachment omitted]

     


Made a few changes, making progress.  It looks like I disabled HDR tone mapping while troubleshooting another issue; re-enabling it seems to have made a difference.  Will continue to adjust and test.

     

    Thanks for responding.

Kind of a feeler post: I'm having issues with transcoding 4K content in Plex; it appears this issue popped up some time during the 6.12 release candidate cycle.  No other changes to my setup, and I've reviewed all of my settings; there's no issue transcoding 1080p to 720p/480p etc.  Was curious if anyone else was having similar issues?

     

    Intel® Xeon E-2288G CPU. 

     

    In the meantime, I will continue to review my settings, and do more testing.

     

    Thanks.

  5. On 10/13/2020 at 3:53 PM, mgutt said:

This script automatically spins up all defined disks to remove spin-up latency on playback. It should be executed through CA User Scripts on array startup. This script is inspired by @MJFOx's version, but instead of checking the Plex log file, which requires enabling debugging, this script monitors the CPU load of the Plex container (which rises when a Plex client has been started):

    #!/bin/bash
    
    # make script race condition safe
    if [[ -d "/tmp/${0///}" ]] || ! mkdir "/tmp/${0///}"; then exit 1; fi
    trap 'rmdir "/tmp/${0///}"' EXIT
    
    # ######### Settings ##################
    spinup_disks='1,2,3,4,5,6,7' # Note: Usually parity disks aren't needed for Plex
    cpu_threshold=1 # Disks spin up if Plex container's CPU load exceeds this value
    # #####################################
    # 
    # ######### Script ####################
    while true; do
        plex_cpu_load=$(docker stats --no-stream | grep -i plex | awk '{sub(/%/, "");print $3}')
        if awk 'BEGIN {exit !('$plex_cpu_load' > '$cpu_threshold')}'; then
            echo "Container's CPU load exceeded threshold"
            for i in ${spinup_disks//,/ }; do
                disk_status=$(mdcmd status | grep "rdevLastIO.${i}=" | cut -d '=' -f 2 | tr -d '\n')
                if [[ $disk_status == "0" ]]; then
                    echo "Spin up disk ${i}"
                    mdcmd spinup "$i"
                fi
            done
        fi
    done

     

    Explanation

- it requests the container's CPU load every ~2 seconds (the answer time of "docker stats")

- if the load is higher than "cpu_threshold" (default is 1%), it checks the disks' spinning status

- all sleeping "spinup_disks" will be spun up

     

    Downside

- as long as a movie is running, all (unused) disks won't reach their sleep state (they spin down, but are directly spun up again)

     

    Monitoring

If you'd like to monitor the CPU load while (not) using Plex to find an optimal threshold value (or just for fun ;) ), open the WebTerminal and execute this (replace "1" with a threshold of your choice):

    while true; do
        plex_cpu_load=$(docker stats --no-stream | grep -i plex | awk '{sub(/%/, "");print $3}')
        echo $plex_cpu_load
        if awk 'BEGIN {exit !('$plex_cpu_load' > 1)}'; then
            echo "Container's CPU load exceeded threshold"
        fi
    done

On my machine Plex idles between 0.1 and 0.5% CPU load, which is why I chose 1% as the default threshold.

     

Anyone have this script working under 6.10.0-RC5?  The command "mdcmd status" doesn't appear to output rdevLastIO anymore.  Also, from what I can tell, "mdcmd spinup" does not work either.

    • Like 1
  6. 1 hour ago, ich777 said:

I don't think so, since the container listens on all interfaces for incoming connections (outgoing are all routed through the VPN of course).

     

Then I think something is wrong with the settings in Sonarr; that's my best guess. How have you entered the IP address and the port? Do you use the same port in the OpenVPN-Client container as it was in the original template from Hydra2?

     

Keep in mind you could try to create a custom network as in the @SpaceInvaderOne video here (don't forget to turn on "Preserve user defined networks" in the Docker settings, otherwise the networks are deleted after a reboot), put all the containers in this new custom network, let's say "openvpnnet", and then you can work with the container names instead of the IP addresses, like http://NZBHydra2:5076 in Sonarr for example (the name must be exactly the same as how the Docker container is named).

    This will only work if you create a custom network like in the video above, in the default bridge there is no name resolution of the containers.

     

    Hope this makes sense to you...

Appreciate the feedback.  In the meantime, I'm dialing it down and only using the OpenVPN-Client redirect for NZBHydra2, NZBGet and qBittorrent.  Everything is now working as it should with Unraid host IP addressing.  I suppose I wanted to encrypt everything, including lookups when adding new content from Sonarr/Radarr etc., but that is overkill.  Appreciate all you do.
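For reference, the custom-network approach described above can be sketched with a few Docker CLI commands. The network and container names here are illustrative, and on Unraid you would normally assign the network from each container's template rather than with `docker network connect`:

```
# Create a user-defined bridge (name is an example). On user-defined
# networks, containers can resolve each other by container name.
docker network create openvpnnet

# Attach the containers (on Unraid: pick the network in each template instead)
docker network connect openvpnnet NZBHydra2
docker network connect openvpnnet Sonarr

# Sonarr can now reach Hydra as http://NZBHydra2:5076 -- container-name
# resolution works on user-defined networks, but not on the default bridge.
```

This is why the default bridge forces you to use IP addresses: Docker's embedded DNS only serves user-defined networks.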

     

     

    • Like 1
  7. 11 hours ago, ich777 said:

Why are you routing every container through the OpenVPN container? I think NZBGet and NZBHydra2 are enough, or am I wrong?

     

    That is really weird, on my system I can ping the Host from the OpenVPN-Client and the Hydra container just fine (you first have to run "apt-get update && apt-get install iputils-ping" to actually use ping).

    Can you tell me which repo is in your Docker template for the container?

     

    Also on what unRAID version are you?

     

Do you run the OpenVPN container in the default bridge, or do you have it assigned its own IP in a Custom: br0?

     

    ich777/openvpn-client

    Version: 6.10.0-rc2

    bridge

     

    Could this have anything to do with my VPN provider (NordVPN) and related .ovpn?

     

I can console into other docker containers (ones not using the OpenVPN redirect) and can ping my host @ 192.168.10.10 and other hosts on the /24 subnet.

     

I'm definitely stumped.

     

Do appreciate you taking the time to assist.

     


  8. 44 minutes ago, ich777 said:

    Wait, now that I've read it again this should be totally possible and no issue whatsoever because that's the main use case.

     

    In Sonarr you have to enter it like this:

[screenshot attachment omitted]

but only if your OpenVPN-Client instance is running in bridge mode, the IP of unRAID is 192.168.10.10 as you wrote above, and you've created a port mapping in the OpenVPN-Client container from 5076 to 5076 like:

[screenshot attachment omitted]

     

    This should work totally fine. ;)

     

Interesting, I must have a configuration issue.  Prior to routing Sonarr, Radarr, NZBGet, NZBHydra2 etc. through the OpenVPN-Client docker, I had no issues with docker containers being able to access each other via the Unraid host IP and the corresponding docker port #.  What would I be looking for?  After routing all of the above through the VPN and adding the port variables for each one, I can still access the WebGUI for all of them.  Running `curl ifconfig.io` tells me all these dockers are using the OpenVPN client properly based on the returned IP, but for some reason they can no longer talk to each other with the host IP (192.168.10.10), only with the app IP (172.x.x.x).

     

Some kind of weird routing issue; opening a console for NZBGet (which is now going through the VPN), I get no response when pinging my Unraid host (192.168.10.10).

     

    Much appreciated.

     

      

  9. 16 hours ago, ich777 said:

    So this is solved?

    What did you do exactly? Maybe it will help others.

Not sure this is the best solution, as the app IP address seems to change when restarting the server.  My docker app IP addresses are in the 172.17.0.1/24 range; my host IP is 192.168.10.10/24.  I'm unable to reach it from other dockers using the host IP address and docker port #.  Maybe there is a better way to do this?

     

    Any thoughts are appreciated.

  10. First off, I wanted to thank you for the OpenVpn-Client docker.

     

I'm using it to route traffic from other docker containers, which is working fine; accessing the WebUI for these containers works fine with the added Port mappings etc.  The only issue I'm having is with docker-to-docker communication: as soon as I route a docker through OpenVPN, other dockers aren't able to communicate with that docker anymore.  If this question was already asked, I apologize; I looked through the thread and didn't see anything.  For instance, if NZBHydra2 is being routed through the OpenVPN docker, Sonarr can't access NZBHydra2 for searches.

     

    Maybe I'm missing something.  Much appreciated.

     

UPDATE: I figured it out: 172.17.0.4:5076 (the app IP address, NOT the host IP address).  Rather than delete the post, I'll leave it here in case it helps someone else.

  11. https://hub.docker.com/r/songkong/songkong

     

    What would it take to modify this dockerfile?  The last 2 lines are as follows:

     

    12 ENTRYPOINT ["/opt/songkong/songkong.sh"]

    13 CMD ["-r"]

     

The author (the developer of SongKong) is not that versed in Docker and not sure how to make the necessary changes; he stated that I should be able to do it myself.  I'm willing, but not sure where to start.

     

The program allows different command line switches, but I have no idea how to pass a different CMD switch, as the Dockerfile appears to be set to -r (as seen above in line 13).  Could this be a variable?  Could it be removed entirely so that you would be required to pass the switch of your choice using the extra parameters option in Unraid when adding the template?  Here are some of the available CMD switches.  I'd like to automate it going into watch mode (-w /watch), instead of just starting in remote mode, which requires me to manually put the software in watch mode after every Unraid reboot, or Docker start/stop/restart.

     

    Execution Command/Command, so to run the command line help we would just set it to -h

    Execution Command/Command to -m /music

    Execution Command/Command to -w /music

    Execution Command/Command to -w /watch

     

    More details here: https://community.jthink.net/t/how-to-run-songkong-cmdline-with-docker/9321

     

    Any guidance, feedback or other is appreciated.

     

    Thanks.
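For what it's worth, overriding CMD doesn't require changing the Dockerfile at all: CMD is only the default argument list for ENTRYPOINT, and anything passed after the image name at run time (in Unraid, the "Post Arguments" field in the template's advanced view) replaces it, while ENTRYPOINT stays fixed. A sketch, assuming the image from the Docker Hub link above:

```
# CMD ["-r"] is only a default; arguments after the image name override it,
# so this starts SongKong in watch mode instead of remote mode:
docker run -d --name songkong songkong/songkong -w /watch

# Dockerfile equivalent, if the author wanted watch mode as the default:
#   ENTRYPOINT ["/opt/songkong/songkong.sh"]
#   CMD ["-w", "/watch"]
```

Since it survives container restarts, the Post Arguments route should cover the "watch mode after every reboot" case without touching the image.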


  12. I have a 1TB 2.5" SSD (current cache drive formatted XFS) and recently added a 2TB NVMe, which I haven't allocated yet.

     

Ideally I think I would like to separate my Docker APPDATA and cached shares (on separate physical drives), as I have started to run out of cache space for my shares on occasion, requiring me to schedule the mover more frequently (more than once a day), which has its pros and cons.

     

    My APPDATA folder is around 500GB, and includes Plex Server (large media collection), and Nextcloud along with other common dockers (NZBGet, Sonarr, Lidarr, Radarr etc. etc.)  I'm backing up APPDATA to the array once a week via the CA Backup Tool.  I would like to leverage Nextcloud a bit more going forward, storing scanned bills etc., along with other documents.

     

If I mount the new 2TB NVMe via Unassigned Devices and move my APPDATA folder, I assume the CA Backup tool could still back up from the new location?  Then just use the 1TB SSD for cached shares?

     

    Open to feedback and ideas.  Thanks.

     

  13. On 1/31/2020 at 2:07 PM, Hoopster said:

    I am interested in upgrading my server with a Xeon E-2278G CPU (they are finally beginning to appear in retail channels) which has an iGPU for video transcoding.  I am also interested in a server board with IPMI.  On many such boards, the BIOS does not support the iGPU when IPMI is active via the AST2X00 BMC with video output to a VGA port or JAVA/HTML5 console.

     

    From this thread about the Supermicro X11SCA-F, we know that this board does support both IPMI and the use of the iGPU for transcoding.  There are things I don't like about that board, like the placement of the SATA ports stacked at the very edge of the board.  That would be a tight fit and cabling problem in my case.

     

    I asked ASRock Rack if the E3C246D4U board supports both IPMI and the iGPU and they sent me the 2.10A BIOS which they claim implements that feature specifically.  I don't have that board yet, but, I am leaning towards purchasing one to pair with the E-2278G (when I can finally get my hands on one) and 64 GB RAM.  Not having the board yet, I cannot verify that it does what I need; however, I was very specific with ASRock about what I needed and they claim to have BIOS firmware to support it.

     

    The E3C246D4U looks like a good option for my needs. 

     

For those interested in 10G NICs, the E3C246D4U2-2L2T motherboard has two 10G NICs and likely also has BIOS firmware for IPMI + iGPU like its Gigabit NIC sibling.  That board seems impossible to find at the moment.

     

    Curious if the X11SCA-F guys have Turbo Mode issues with their board.

  14. On 5/4/2021 at 6:47 AM, TexasUnraid said:

    Agreed^

     

    Also being able to rename the sensors (fans in particular) would be very helpful.

     

Along these same lines, is it possible to add multiple sensors to the fan control?  Basically my mobo only has 1 PWM control.  I can set the fans to be quiet and everything is fine as long as it is not under load; if the drives start working the fans need to spin up, ok fine.

     

The issue is that if the CPUs with passive heat sinks start working, they also need the fans to spin up, but right now I have to pick only one of those.

     

It would be nice if it could use multiple sensors for fan control and simply set the fans to whichever is higher, thus allowing it to spin the fans up for either the CPU or hard drives.

     

Bonus points for adding even more sensors, as I could spin the fans up if the HBA or memory started getting hot, for example.

     

I second this; multiple fan profiles would be awesome.  In my 24-bay 4U case, under heavy disk load I spin up the fan wall to pull more air through the front to keep the drives cool.  Under a CPU-intensive workload (when the drives are often spun down), I would also like to spin up the fan wall and pull more cool air into the case.

     

    Renaming would be nice also, but not as important as above :)
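The "whichever is higher" logic is easy to sketch in bash. Everything here is hypothetical (the sensor values would really come from /sys/class/hwmon or smartctl, and the ramp endpoints are made up) — it only illustrates combining several temperatures into one PWM value:

```shell
#!/bin/bash
# Hypothetical sketch: drive a single PWM output from several temperature
# sensors by taking the hottest reading. Paths and numbers are examples.

max_temp() {    # print the largest of its integer arguments (degrees C)
    local m=$1; shift
    local t
    for t in "$@"; do (( t > m )) && m=$t; done
    echo "$m"
}

temp_to_pwm() { # linear ramp: <=40C -> 80, >=60C -> 255, linear in between
    local t=$1
    if   (( t <= 40 )); then echo 80
    elif (( t >= 60 )); then echo 255
    else echo $(( 80 + (t - 40) * (255 - 80) / 20 ))
    fi
}

# In a real script these would be read from /sys/class/hwmon/... or smartctl:
cpu_temp=48
disk_temp=39

pwm=$(temp_to_pwm "$(max_temp "$cpu_temp" "$disk_temp")")
echo "$pwm"     # the value you would write to the board's single pwm node
```

The same pattern extends naturally to the HBA/memory sensors mentioned above: add their readings to the `max_temp` call.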

  15. Good point Pixel5.

     

I think many of us read the various hard drive reports (Google, Microsoft, Backblaze), but the reality is those tests were done in purpose-built data centers, where drives idle in the 20s (°C).  It's nice that we all want to replicate that in our homes; realistically, all we can do is optimize air flow through our cases.  Depending on where you live (ambient temps) and how cool you keep your home in the warmer months, you really can't expect to keep drives under 40°C unless you live in a cold climate.  I guess what I'm saying is there is a trade-off, and manufacturers do provide a safe operating range for their drives knowing that not every drive is going to live in a data center.  Personally, I like my drives to idle in the 30°C range and prefer them to stay under 50°C during moderate to heavy load.  Lots of great tools are available to pause parity checks or shut down servers based on the temps you are comfortable with.

     

Will I be more likely to have a failure?  I suppose so.  Have I had a failure?  No.  I'm not spending hundreds of dollars to keep my home as cold as a data center when I'm not home.

     

     

  16. 1 hour ago, Hoopster said:

It only crashed on me when I was running BOINC 24x7.  That was a real stress test.  It would crash anywhere from 16 hours to 3-4 days into continuous high CPU usage.

     

    When I disabled Turbo Boost, the crashes went away and I ran it for weeks on end with no problem. 

     

I have had Turbo Boost re-enabled for months now with no problems.  There is nothing in my normal use of the CPU (including several consecutive HandBrake encodes) that comes anywhere near the stress of BOINC/Folding.

I've been using HandBrake to transcode about 200 older ISOs; after about a day or so of heavy use, it locked up.  When I'm done with this project, I will try re-enabling Turbo and see if I experience issues under more "normal" use.