Kaizac


Posts posted by Kaizac

  1. 11 hours ago, Presjar said:

     

    This is my docker-compose for plex. I have cut out the other services. This loads plex server v1.22.0.4163. Settings on transcode page in attached image.

     

    Last week I used environment VERSION: latest so the container updated on startup to 1.23.6.4881, however it did not seem to help.

     

    Even with tone mapping off I get the odd image corruption when doing UHD to 1080. Image below.

     

    
    
    
    version: "3.7"
    
    ########################### NETWORKS
    networks:
      t2_proxy:
        external:
          name: traefik_proxy
      default:
        driver: bridge
    
    ########################### SERVICES
    services:
    
    #Plex - Plex Server
      plex:
        image: linuxserver/plex:latest
        container_name: plex
        environment:
          PUID: $PUID
          PGID: $PGID
          TZ: $TZ
          VERSION: docker
        ports:
          - 32400:32400/tcp
          - 32469:32469/tcp
          - 1900:1900/udp
          - 32410:32410/udp
          - 32412:32412/udp
          - 32413:32413/udp
          - 32414:32414/udp
        networks:
          - t2_proxy
        volumes:
          - $DOCKERDIR/plex/config:/config
          - /tmp:/transcode
          - $MEDIA:/Media  
        devices:
          - /dev/dri:/dev/dri

     

    You're on too old a version. I have to believe the changelog for version 1.23.0.4497 should have finally solved the issue. You can look through the changelogs here:

    https://forums.plex.tv/t/plex-media-server/30447/426

     

    I would advise going to latest again. And did you read this topic?

     

    You have a UHD 750 with that 11600K, just like me with my 11500, so you should be able to get it working just like I did. If you can't get it solved then just PM me so we can do a 1 on 1.

     

    EDIT: Stand by, I just retested and now it's broken for me again as well. Seems like they broke something in a newer patch.

     

    EDIT2: I went through all the recent versions and it indeed still shows corruption on the occasional 4K video. I drew the wrong conclusion because movies that were corrupted before now play perfectly. So unfortunately there are still some codecs, or whatever it is, that don't play without corruption.

    I still believe this has to do with Linux kernel support for 11th gen not being there yet, since Emby has the same issue and I don't believe Jellyfin is totally free of it either. It really depends on your own library.
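
    In any case, if you want to rule out the iGPU passthrough itself rather than the Plex version, this is the quick sanity check I use. It's a minimal sketch and assumes the container is named plex, as in the compose file above:

    # on the Unraid host: the render nodes should exist
    ls -l /dev/dri            # expect something like card0 and renderD128

    # inside the container: the same nodes should be visible through the devices mapping
    docker exec plex ls -l /dev/dri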

  2. 3 minutes ago, arturovf said:

    As I understand it, there are artifacts and corruption on HDR to SDR tone mapping, according to user reports.

    6 minutes ago, Presjar said:

    Are you using Windows 10?

     

    When I use Unraid and docker I get visual corruption of the transcode. I don't get visual corruption with Jellyfin docker.

    No, I'm using the docker on Unraid. Are you on Plex Pass? What are your docker settings? I used to have the corruption a few versions ago, but it has been solved now.

  3. 1 hour ago, arturovf said:

    It appears almost nobody is running Intel 11th gen processors with Plex, since they don't even notice that it doesn't work.

    I got an 11500 and Plex is HW decoding everything now, even with HDR. This was broken before, but now it's working fine. What exactly is not working for you? Is the iGPU not recognized, or do you have another issue?

    I have issues getting my upload speed to my Nextcloud server maxed out, both when accessing it through the web browser and through the desktop app. When I test on LAN everything is 80-90 MB/s with no issues.

     

    However, when I turn on SWAG and access over WAN, the desktop app runs at around 70-90 MB/s. I do see it pause a lot, probably because of the chunking. When I use the web browser I get 80-90 MB/s download, but upload is only 20 MB/s. So it seems something is wrong with my nginx configs, but I have no idea what it could be. I've already looked at some optimizations, but none seem to fix it. It doesn't help that I don't understand the relationship between SWAG and Nextcloud's own nginx files (nginx.conf and default).

     

    Does anyone have a clue what it could be?
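
    For reference, these are the kind of directives I keep seeing suggested for slow uploads behind nginx, so this is what I plan to test next. Just a sketch: it assumes SWAG's Nextcloud proxy conf at /config/nginx/proxy-confs/nextcloud.subdomain.conf, and the values are illustrative, not a confirmed fix.

    # inside the location block that proxies to Nextcloud
    client_max_body_size 0;           # don't cap upload size in the proxy
    proxy_request_buffering off;      # stream uploads straight through instead of buffering them to disk first
    proxy_buffering off;              # same for responses
    proxy_read_timeout 3600s;         # give long chunked uploads time to finish
    proxy_send_timeout 3600s;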

  5. On 6/15/2021 at 10:23 AM, remotevisitor said:

    You have probably created the symlinks with absolute references which are only valid within the mappings provided inside your dockers.

     

    Let's say you are mapping /mnt/users/Media -> /Media in a docker.

     

    Now say you create a symlink in the docker like /Media/Favourites/A_Film -> /Media/Films/A/A_Film. This works fine within the docker because /Media/Films/A/A_Film is a valid path there, but /Media/Films/A/A_Film is not valid outside your docker.

     

    If the symlink had been created with a relative reference like /Media/Favourites/A_Film -> ../Films/A/A_Film, then it would work both inside and outside the docker, because inside it resolves to /Media/Films/A/A_Film and outside it resolves to /mnt/users/Media/Films/A/A_Film.

     

    You could work around the problem outside the docker by creating the symbolic link /Media -> /mnt/users/Media because then /Media/Films/A/A_Film would now be valid.

     

    You're probably right. I use a script someone else created to make these symlinks, so I have no easy way to change that. And I don't have enough knowledge about symlinks to think about converting them.

     

    Quite unfortunate; I just don't understand why Krusader can follow them and the command line can't. Thanks for thinking with me!
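
    For anyone finding this later: the workaround from the quote would look something like this. It assumes the docker maps /mnt/user/Media to /Media (adjust to your own mapping), and note that on Unraid anything created directly in / lives in RAM, so you'd have to recreate it after a reboot (e.g. from your go file).

    # on the Unraid host: recreate the container-side root so the absolute links resolve here too
    ln -s /mnt/user/Media /Media
    # now a target like /Media/Films/A/A_Film also resolves outside the container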

  6. I have a folder full of symlinks pointing to folders with media in it. I use these to create custom libraries based on trakt lists for example.

    Now I want to copy the actual content that the symlinks point to into a different folder.

     

    I've seen this asked many times on different forums, but none of the solutions seem to work. The strange thing is that I can do it with Krusader using the synchronize folders functionality, but I would like to have it done via a script.

     

    So say I have the following:

    /mnt/user/symlinks/ (full with symlinks)

    /mnt/user/selectedmedia/ (where the media should be copied to)

     

    What command can I use to get this done?

     

    Commands like:

    cp -Hr "/mnt/user/symlinks" "/mnt/user/selectedmedia/"

     

    don't work; they either only copy the symlink or error with "no stat, no such file/directory".

     

    Hopefully someone knows the solution!
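
    For reference, these are the dereferencing variants I would expect to do it, once the link targets actually resolve where the command runs (cp -H only follows symlinks named on the command line, while -L follows all of them). Same example paths as above, so just a sketch:

    # follow every symlink and copy the real files/folders they point to
    cp -rL /mnt/user/symlinks/. /mnt/user/selectedmedia/

    # or with rsync, which reports dangling links more clearly
    rsync -a --copy-links /mnt/user/symlinks/ /mnt/user/selectedmedia/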

  7. 4 hours ago, Andiroo2 said:

     

    It worked!!  I had been tagging the container version with "Latest" but I needed to add "plexpass" to the VERSION variable to make it work.  Thanks for the help!!  HW transcoding works on HDR tone mapped files again.

    I put "docker" in VERSION since the instructions mentioned that. Quite unclear indeed. Then I used latest, and now I find out from you that plexpass is the right one to use.


    Anyway, I thought HDR tone mapping HW transcoding was working. But when I check some files in my library, some movies work perfectly while others still show artifacts. The dashboard shows that transcoding is being done in HW, and I can't find any difference between the files. Maybe it's because I'm on the latest Intel gen, which isn't fully supported yet. But is your whole 4K library playing with tone mapping without artifacts?
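
    For anyone else landing here, my understanding of the VERSION variable on the linuxserver/plex image (from memory, so double-check the image's readme): docker leaves updating to the image itself, latest pulls the newest build you're entitled to on container start, public sticks to public builds, plexpass pulls the Plex Pass builds, and a specific version string pins that exact build. In compose that would look roughly like:

    environment:
      VERSION: plexpass   # or: docker | latest | public | a specific version string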

  8. 6 hours ago, XisoP said:

    Hi all,

     

    Last year we, the LG TV owners, ran into an issue with playback while transcoding with subtitles. There were extreme buffering issues; the only option for a while was running an old server version (1.16.x or older).

    Earlier this year the issue was solved.

    Today I noticed that the issue might have returned in a worse way. Playing a 1080p h264 file @ direct play just froze a couple of minutes into the movie. Forcing transcode didn't change a thing. Upgrading to 4K HDR made things worse.

    After downgrading the docker image from 1.22.3.4392 to 1.22.2.4282 the issue cleared. 4K HDR h265 playback with subs was smooth as butter.

     

    Could it be that something went wrong (a failed line of text or so) while compiling the docker image?

     

    I'm running unraid 3.9.2

    Currently plex 1.22.2.4282

    transcoding is handled by Quadro M4000

     

    Plex currently has issues with HW decoding, especially with HDR (their forums are full of it). They are aware and working on it, but there's nothing that can be done on the docker side; Plex has to solve it. Rolling back to an older version like you did often works around it.
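
    If you want that rollback to survive the container being recreated, pinning the image tag instead of using :latest is the usual way. A minimal compose sketch; the tag itself is a placeholder, so look up the exact linuxserver/plex tag on Docker Hub that matches the server version you want:

      plex:
        image: linuxserver/plex:<tag matching 1.22.2.4282>   # placeholder; use the real tag from Docker Hub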

  9. On 1/11/2021 at 6:51 AM, johner said:

    Hi Kaizac, what did you settle on in the end?

     

    I'm looking to back up Gmail to a local client/docker/something which would keep in sync one way only, so I can then delete old emails from Gmail to free up space without it also deleting them from the local copy.

     

    Sorry I didn't respond earlier. In case you were still wondering I currently have the following setup:

     

    I run mailcow in a VM and just IMAP/POP3 sync with my other mailboxes, so the data is locally stored.

    I also have the Thunderbird docker (found in CA) and run that as a client to also just pull in the e-mails.

     

    That seems to work fine. From there I can back up that data again from my local server to an external HDD and/or the cloud.

  10. On 8/11/2020 at 5:32 AM, Emilio5639 said:

    Scripts

    You're erroring on this part:

     

    #######  check if rclone installed  ##########
    echo "$(date "+%d.%m.%Y %T") INFO: Checking if rclone installed successfully."
    if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: rclone installed successfully - proceeding with upload."
    else
        echo "$(date "+%d.%m.%Y %T") INFO: rclone not installed - will try again later."
        rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
        exit
    fi

    So it can't find the $RcloneMountLocation/mountcheck file. RcloneMountLocation is the same as your RcloneShare, so I would start tracing back from there to check whether you can find that file and whether all the $ variables are set correctly in this script.
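
    A quick way to check that by hand (the path is just an example, substitute whatever your RcloneShare/RcloneMountLocation is set to):

    RcloneMountLocation="/mnt/user/mount_rclone/gdrive_media_vfs"   # example value, use your own
    ls -l "$RcloneMountLocation"
    [[ -f "$RcloneMountLocation/mountcheck" ]] && echo "mountcheck found" || echo "mountcheck missing"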

  11. 4 minutes ago, privateer said:

    Yes I read it, and yes I ran those commands.

     

    The files I have are named sa_gdrive_upload[X].json, but there's no sa_gdrive.json file in there. They are in the correct folder and there are 100 of them. This is the error I've been getting:

     

    Failed to create file system for "gdrive_media_vfs:": failed to make remote gdrive:"crypt" to wrap: drive: failed when making oauth client: error opening service account credentials file: open /mnt/user/appdata/other/rclone/service_accounts/sa_gdrive.json: no such file or directory

    The rename commands DZMM mentions give them the sa_gdrive_upload[X].json names. So if you want a file called sa_gdrive.json, you either have to define that in your rename script or point your rclone config at one of the files that does exist; see the sketch below.

    Dry Run:
    n=1; for f in *.json; do echo mv "$f" "sa_gdrive_upload$((n++)).json"; done
    
    Mass Rename:
    n=1; for f in *.json; do mv "$f" "sa_gdrive_upload$((n++)).json"; done
    

     

    Don't just copy and paste the code from the GitHub pages; also try to understand what it is doing. Otherwise you have no idea where to troubleshoot and you end up breaking your setup.
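
    To spell that out, two options as a sketch (paths taken from your error message; adjust to your setup):

    # Option A: give rclone the exact file name it is looking for
    cd /mnt/user/appdata/other/rclone/service_accounts/
    cp sa_gdrive_upload1.json sa_gdrive.json

    # Option B: point the remote at a file that does exist
    rclone config file   # shows where your rclone.conf lives
    # then edit the gdrive remote so that:
    # service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_gdrive_upload1.json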

  12. 2 hours ago, privateer said:

    Hopefully progressing onward...but with a new issue.

     

    Where should I get a copy of the sa_gdrive.json file? I have the remotes but not sure about that file...

     

    I don't know what the SharedTeamDriveSrcID or the SharedTeamDriveDstID are. Is the DstID the folder inside the teamdrive where I'm going to store things (e.g. teamdrivefolder/crypt)? What should go here...wondering if this is why I don't have the .json file.

    Did you read DZMM's GitHub page?

    https://github.com/BinsonBuzz/unraid_rclone_mount


     

    Quote

     

    Optional: Create Service Accounts (follow steps 1-4). To mass rename the service accounts, use the following steps:

    Place the auto-generated service accounts into /mnt/user/appdata/other/rclone/service_accounts/

    Run the following in terminal/ssh

    Move to directory: cd /mnt/user/appdata/other/rclone/service_accounts/

    Dry Run:

    n=1; for f in *.json; do echo mv "$f" "sa_gdrive_upload$((n++)).json"; done

    Mass Rename:

    n=1; for f in *.json; do mv "$f" "sa_gdrive_upload$((n++)).json"; done

     

     

  13. 4 minutes ago, privateer said:

    python-pip-20.0.2-x86_64-1.txz is installed on my server. It's the only one I see with pip in it (unless I've missed something). I shouldn't need to reboot or anything after an install, right?

     

    pip3 returns this error:

    
    Traceback (most recent call last):
      File "/usr/bin/pip3", line 6, in <module>
        from pkg_resources import load_entry_point
    ModuleNotFoundError: No module named 'pkg_resources'

     

    Then just open the console and reinstall pip (for example with python3 -m ensurepip), since that error means pkg_resources (part of setuptools) is missing.
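
    Something along these lines; a minimal sketch, assuming the ensurepip module is included in your server's Python build (pkg_resources comes from setuptools, which ensurepip restores alongside pip):

    python3 -m ensurepip --upgrade   # reinstall the bundled pip and setuptools
    pip3 --version                   # the pip3 wrapper should work again once pkg_resources is back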

  14. 7 minutes ago, privateer said:

    I've been successfully running the original version (unionfs) for a while and finally decided to take the plunge to team drives, service accounts, and mergerfs.

     

    While trying to upgrade, I ran the following command as listed on the AutoRclone git page:

    
    sudo git clone https://github.com/xyou365/AutoRclone && cd AutoRclone && sudo pip3 install -r requirements.txt

    The output for this command resulted in an error: 

    
    sudo: pip3: command not found

    The rest of the command worked fine. Any idea what's going on here?

    You don't have pip installed on your server. Get it through NerdPack.
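
    Once the NerdPack pip package is installed, something like this should confirm it and let you finish just the part that failed (assuming the git clone from your command already succeeded):

    pip3 --version                            # should now print a version instead of "command not found"
    cd AutoRclone && sudo pip3 install -r requirements.txt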

  15. Just now, Bjur said:

    I don't know, I just started; that's why I'm asking people who have more experience with this.

    If Google stops the unlimited service because of people encrypting, would there be a longer grace period to get the stuff stored locally, or will they just freeze people's things?

    Is this a likely scenario?

    More likely is that they enforce the 5-user requirement to actually get unlimited, and after that they might raise prices. Whether either scenario is still worth it is personal for each person. And I think they will give a grace period if things do drastically change.

     

    I'm using my drive both for my work-related storage and for personal storage. Don't forget there are many universities and data-driven companies that store TBs of data each day; we're pretty much a drop in the bucket for Google. Same with mobile providers: I have an unlimited plan, extra expensive, but most months I don't even use 1 GB (especially now, being constantly at home), and then on other days I rake in 30 GB because I'm streaming on holiday or working without WiFi.

     

    I did start cleaning up my media though. I was storing media I will never watch, which only got in because my automations downloaded it. It gives too much of that Netflix effect: scrolling indefinitely and never watching an actual movie or show.