
saarg

Community Developer
  • Posts

    5,374
  • Joined

  • Last visited

  • Days Won

    8

Posts posted by saarg

  1. 2 hours ago, xrqp said:

How do I set up paths for this Emby container for two TV folders? Here are the two I want:

    1. /data/media/tv/    (used for tv shows from Usenet)
    2. /data/media/tv antenna/    (used for tv shows from antenna and Emby DVR)

    My share is "data".  

     

Here is the mapping for 1. /data/media/tv:

(screenshot of the path mapping)

     

Do I need to create another container path for "tv antenna" for recordings? If yes, what do I put for the container path and host path?

    The following did not work:

(screenshot)

    You can't use the same container path in two folder mappings. They need to be different.
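To illustrate, a minimal sketch of two separate mappings (the host paths and container paths here are hypothetical; in the Unraid template each one is its own "Path" entry):

```shell
docker run -d --name=emby \
  -v /mnt/user/data/media/tv:/data/tv \
  -v "/mnt/user/data/media/tv antenna":/data/tvantenna \
  linuxserver/emby
```

Inside Emby you would then point one library at /data/tv and the other at /data/tvantenna.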

  2. 4 minutes ago, hundsboog said:

    ... or you just setup another docker container and use two additional connections ;-)

     

But is it true that it is really deprecated? I really can't believe it! I mean, this container has a huge community; can somebody clarify this?

    Yes, it's deprecated. It's a pain in the ass to maintain as it breaks on every update. The latest release should work.

  3. 1 hour ago, Mystic said:

    Thanks for the response and confirmation of what I figured....  Now the challenge is why Booksonic will not permit access to any settings features. 

     

When I remove the context path from the docker and log in as the admin, I get the following:

1. Logging in from the docker app, the URL consists of http://ip:port and everything works... I click on settings and I can configure.
2. Logging in from my domain, the URL is http://sub.domain.com:port and when I click on settings, nothing happens.

     

Any ideas?

    Why are you using the port when using your domain? You are reverse proxying it, right?

  4. 36 minutes ago, Mystic said:

I can't get http or https to work. I cannot figure out how to configure NGINX with the docker variable CONTEXT_PATH.

     


     

with Context_Path booksonic = http://sub.domain.com:4040/booksonic - I can't configure Nginx with a subfolder after the port.

w/o Context_Path = http://sub.domain.com:4040 - this works, but then settings and other menus cannot be accessed.

     


     

    - Router is port forwarded to 4040

    - Latest version of Unraid | ROG CROSSHAIR VIII HERO (WI-FI) | AMD Ryzen 9 3900X 12-Core @ 3800 MHz

    - NO-IP subdomain = audio and DNS pointing to domain.com.

     

The URL is http://10.10.1.5:4040/booksonic/ - HOW do I configure this in Nginx? I cannot figure it out. With the variable everything works, but I get BAD GATEWAY when accessing it with my domain.

     

     

    You don't use context path for subdomain. It's only for subfolder.
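For comparison, a subdomain proxy conf sketch in the style of the radarr conf shown later in this thread (the server name and port are assumptions based on the poster's setup):

```nginx
server {
    listen 443 ssl;

    server_name booksonic.*;

    include /config/nginx/ssl.conf;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        # container name on the custom docker network; 4040 is the container port
        set $upstream_booksonic booksonic;
        proxy_pass http://$upstream_booksonic:4040;
    }
}
```

No Context_Path variable is set; the app lives at the root of the subdomain.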

  5. On 6/8/2021 at 8:48 PM, polishprocessors said:

    I've noticed in this version of MariaDB it seems the following are set to Swedish:

collation_database: latin1_swedish_ci

collation_server: latin1_swedish_ci

Is it possible to either switch this to something unicode or manually change the defaults ourselves? This is causing issues with some of my other containers trying to use databases. I presume those devs can also hard-code their collation choice, but it'd be nice to change it at this level, too.

     

That's not something we set, so if you haven't set it yourself, then it's set in the upstream package. In which config file is it set?
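If the default does come from the upstream package, one hedged way to override it is a custom config file (the [mysqld] keys below are standard MariaDB settings; the exact file path depends on how the container loads extra configs):

```ini
# e.g. /config/custom.cnf, assuming the container picks up extra configs from /config
[mysqld]
character-set-server = utf8mb4
collation-server = utf8mb4_unicode_ci
```

New databases created after a restart will then default to utf8mb4; existing databases keep their old collation unless converted.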

  6. On 6/10/2021 at 9:54 AM, ich777 said:

After I updated the container today, I get an error that the execution of /bin/tar has been denied and the container isn't able to start; the message repeats over and over again.

     

    I attached the log:

    tvheadend.log 4.4 kB · 0 downloads

     

    Has something changed that I've been missing?

     

     

Also, may I ask what the difference is between version-9476680f and 9476680f-ls100?

I've now reverted back to version 63784405-ls97, which is running fine.

I would not advise using latest unless there is a feature you need. A new developer started working on it and it breaks now and then.

     

    The two versions are identical.

    • Like 1
  7. On 5/31/2021 at 5:16 PM, Vaslo said:

    Hello,

     
I am trying to do some C++ coding and compiling. In order to get g++ (the compiler), I went through the terminal and installed it and its dependencies. It worked fine, and then I updated the container and now it's gone. I assume this is due to the lack of persistence in the container. Is the best way to have this work every time I start up the container to alter the script?

     

If so, is the script persistent every time I update the container, or do I need to do extra work to save the script? Is there a better way to do this? Sorry - I know this is a more general container question, but I'm thinking someone else has had this issue specific to this container, so I thought I'd start here.

     

    Thanks!

If you use the script function, it will survive updates.
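As a hedged sketch, such a start-up script might look like this (the folder and package manager are assumptions; linuxserver images run scripts from the custom-cont-init.d folder on every container start):

```shell
#!/usr/bin/env bash
# Hypothetical example: /config/custom-cont-init.d/install-gpp.sh
# Re-installs g++ on every container start, since packages added
# inside the container are lost when the image is updated.
apt-get update
apt-get install -y --no-install-recommends g++
```

Because the script lives under /config on the host, it survives image updates even though the installed packages do not.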

  8. 5 minutes ago, Janne said:

I have a problem since I updated to the latest Emby version yesterday. Today when playing a movie via Emby on my Shield, Emby stopped working and the Shield client couldn't connect to the Emby server anymore. So I checked and saw that my cache of 250GB is full. Since only Emby was affected, I checked the Emby files in my appdata folder (which is set to "prefer" regarding cache usage) and Emby had created 2 log files that are more than 200GB combined.

     

    embyserver-63757670400.txt - 73GB (created today at midnight)

    embyserver.txt - 135 GB (created just before emby stopped working today)

     

    Any idea what's happening here? I did not have that problem with any previous version of Emby. Can I just delete those log files and will Emby start building up a new huge logfile then?

     

    Thank you in advance

    Check if you enabled debug logging. You can safely delete those logs.

  9. 9 hours ago, canedje said:

I just tried. Also not working. Only linuxserver/unifi-controller:version-6.2.25 is working.

Then it's still using the version that Ubiquiti pulled as latest. Delete all unifi-controller images, then change something in the template, add it back, and hit apply. Then it should pull the correct version.

  10. 3 hours ago, canedje said:

Yes, I know. But if I use linuxserver/unifi-controller:latest as the repository it is not working, while linuxserver/unifi-controller:version-6.2.25 is working.

    You don't need to specify latest. Remove the latest tag and then do a force update. It's probably using the latest tag on your system and not our latest tag.
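A hedged sketch of what a force update amounts to from the command line (repository name taken from the post):

```shell
# Re-pull the latest tag from the registry, bypassing the stale local copy
docker pull linuxserver/unifi-controller:latest

# Verify the image ID changed
docker images linuxserver/unifi-controller
```

On Unraid the same thing is done from the web UI via the "force update" option.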

  11. 15 minutes ago, Adam64 said:

    Hello,  I've been running the 4.6 beta and now that it's released I'd like to switch to the released version (4.6.50).  When I change the tag from beta to latest, the docker downloads 4.5.  Does lsio need to do something to the docker to indicate that the release is 4.6.50?

    The latest version we released is 4.6.0.50, so you haven't updated your container. It's probably just using the latest version you had on your system, so do a force update.

  12. 5 hours ago, RichardU said:

    I'm also having no luck with this docker right from the start. 

     

    I created the docker and added a password: mypassword

     

    Open the console, type: mysql -u root -p

     

    Enter the password and I get: 

ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)

     

    Same result on another unraid machine.

    Docker run command and container log is needed for others to help.
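For reference, both can be gathered from the Unraid terminal (the container name mariadb is an assumption; adjust to match your template):

```shell
# Container log
docker logs mariadb

# Full container configuration (close to the docker run command);
# on Unraid the exact run command is also printed when you hit Apply
docker inspect mariadb
```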

  13. 14 hours ago, RockDawg said:

     

    Changing to the container port worked.  I never considered that and that is the only container that I run where I changed the default port.  Thanks so much!

     

    I thought I still needed the cname and the wildcard was just allowing any of my cnames through.  So I can delete all my subdomain cnames on Cloudflare?

    As long as you have a wildcard cname set in cloudflare, you can delete the other subdomains.

    • Thanks 1
  14. 57 minutes ago, RockDawg said:

    I've had SWAG set up and running for a couple years now (back before it was SWAG) and it works great.  I have to admit I don't totally understand everything (or even much) about it, but via tutorials and this forum, I was able to get everything working. I switched to a wildcard cert a year or so ago without much issue. But I am having an issue trying to allow for a new subdomain.

     

I have had Radarr working all this time and I recently added another instance to handle 4K content. I thought this would be super easy. I went to my Cloudflare dash and created a cname for radarr4k (the other is just radarr), copied my radarr site conf, renamed it to radarr4k, replaced every instance of radarr inside with radarr4k, and changed the port to the port I use for radarr4k. I thought it would be as simple as that and just work, but I get a 502 error whenever I try to go to https://radarr4k.myserver.com.

     

    Here is my radarr file:

    
    server {
        listen 443 ssl;
    
        server_name radarr.*;
    
        include /config/nginx/ssl.conf;
    
        client_max_body_size 0;
    
        location / {
    #        auth_basic "Restricted";
    #        auth_basic_user_file /config/nginx/.htpasswd;
            include /config/nginx/proxy.conf;
            resolver 127.0.0.11 valid=30s;
            set $upstream_radarr radarr;
            proxy_pass http://$upstream_radarr:7878;
        }
    }

     

     

    Here is my radarr4k file:

    
    server {
        listen 443 ssl;
    
        server_name radarr4k.*;
    
        include /config/nginx/ssl.conf;
    
        client_max_body_size 0;
    
        location / {
    #        auth_basic "Restricted";
    #        auth_basic_user_file /config/nginx/.htpasswd;
            include /config/nginx/proxy.conf;
            resolver 127.0.0.11 valid=30s;
            set $upstream_radarr4k radarr4k;
            proxy_pass http://$upstream_radarr4k:7879;
        }
    }

     

     

    Any ideas what I am doing wrong?

You don't have to set a cname in Cloudflare when using a wildcard. The wildcard covers everything.

     

I guess you are using a custom docker network for swag and the radarrs, so there is no need to change the port in the proxy conf, as swag talks to the containers using their names. It is all internal in the custom network, and therefore you use the container port.
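A hedged sketch of that setup from the command line (the network name is an example; on Unraid the network is usually created once and selected in each container's template):

```shell
# One user-defined network shared by the proxy and the apps
docker network create proxynet
docker network connect proxynet swag
docker network connect proxynet radarr
docker network connect proxynet radarr4k
```

Inside proxynet, swag resolves each app by container name and talks to the container port (7878 for radarr), so host port remappings don't matter in the proxy conf.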

    • Thanks 1
  15. 23 hours ago, guilhem31 said:

    Hi everyone,

     

I'm sad I have to post here... everything was working fine until 3 days ago, when one of my cache pool drives died.
I had problems with my /appdata backup and had to delete files from the swag config (probably chmod problems or busy files at the moment I wanted to back up).

    I changed my cache drive without any other issue.

     

When I try to launch swag, I get this error in my log:

    https://pastebin.com/mMuxi79e

     

    My ovh.ini credentials are good.

    I cleaned the _acme-challenge DNS entry from my OVH manager console.

    My domain DNS are all ok,  A and CNAME.

    Restarted swag container multiple times.... nothing.

     

    I don't understand what I'm doing wrong. Any help would be VERY appreciated !

     

    Use an earlier tag. It's an upstream issue.

  16. 15 hours ago, tris203 said:

I fixed it, for anybody wondering. The issue (stupid me) was that the docker config was set to not allow containers access to host networks.

So by enabling this, I was able to run the docker on its own IP on the genuine port 443.

Then I could add an internal DNS entry for nextcloud.mydomain.com pointing to the internal IP.

     

    There is currently a certificate error, but I think that can be fixed fairly easily

    No, the traffic doesn't go out and in again. Your router is doing hairpin nat. There is no reason to set it to host network mode.

  17. 1 hour ago, Edgard666 said:

     

    That solved my HW transcoding problem too, thanks a lot. ^^

     

    Now we need Jellyfin to stop replacing the newer version of ffmpeg with version 4.3.1-Jellyfin (4.3.1-4).

     

    Is there a way to automatically start the “apt install” command once the container is started to replace the official (not working) ffmpeg Jellyfin version automatically with the one working? It would allow to auto upgrade the container without breaking the HW transcoding.

    https://blog.linuxserver.io/2019/09/14/customizing-our-containers/

    • Thanks 1
  18. It gets copied on every start, so no point in deleting it. Edit it instead of deleting it.

    It is set up this way so we can supply a working setup the first time you use the container and in case the user has messed up the default file, can delete it and get it back to a working state.

  19. 14 minutes ago, unrno.spam said:

    Hi guys,

     

I've been running Unraid and Nextcloud for one year now and it has become an important tool for me, so I don't want to drop it.

     

But now I can't get Nextcloud to work. I'm using the Nextcloud docker container from linuxserver with the SWAG container as reverse proxy and DuckDNS.

     

    In the log of the nextcloud-Container I found the following errors:

     

    
    Brought to you by linuxserver.io
    -------------------------------------
    
    To support LSIO projects visit:
    https://www.linuxserver.io/donate/
    -------------------------------------
    GID/UID
    -------------------------------------
    
    User uid: 99
    User gid: 100
    -------------------------------------
    
    [cont-init.d] 10-adduser: exited 0.
    [cont-init.d] 20-config: executing...
    chown: cannot read directory '/config/www/nextcloud/core': I/O error
    chmod: cannot read directory '/config/www/nextcloud/core': I/O error
    [cont-init.d] 20-config: exited 0.
    [cont-init.d] 30-keygen: executing...
    using keys found in /config/keys
    [cont-init.d] 30-keygen: exited 0.
    [cont-init.d] 40-config: executing...
    [cont-init.d] 40-config: exited 0.
    [cont-init.d] 50-install: executing...
    [cont-init.d] 50-install: exited 0.
    [cont-init.d] 60-memcache: executing...
    [cont-init.d] 60-memcache: exited 0.
    [cont-init.d] 70-aliases: executing...
    [cont-init.d] 70-aliases: exited 0.
    [cont-init.d] 99-custom-files: executing...
    [custom-init] no custom files found exiting...
    [cont-init.d] 99-custom-files: exited 0.
    [cont-init.d] done.
    [services.d] starting services
    [services.d] done.

     

     

So I checked the core from the Nextcloud console with the command "occ integrity:check-core", with this result (just the last few lines, because the list is very long):

     

    
    - current:
        - core/vendor/zxcvbn/dist/zxcvbn.js:
          - expected: 4d994c18563dc4a8f7f2dff99b617327e44cfda0f96a53070dedbaa7499850ca021791bb89b7c751020bab4a903d5df096c9c2c0bf933d134ae2f21618b3a451
          - current:
        - core/webpack.test.js:
          - expected: 9e4bb245226e24e15239416bf8fa8544bfd5b0e75e7054a15c41b834479dae9b256a511d0d4721da0b5824d4776090f0f138c73f37e123d7933d723622a10fdb
          - current:
    root@85de9c98265e:/#

     

Then I stopped the container and set new permissions on the nextcloud share via the Unraid menu.

     

I guess the problem is that when Nextcloud tries to change the owner and permissions to www-data as the webserver user, the I/O errors happen...

     

The log of the SWAG container has no errors in it and states the server is ready.

     

Anyone have a clue how to fix this?

    There is no www-data user in our container.

The I/O error is most likely something wrong with your disk/controller and not an issue in the container, so check the drive and filesystem for errors. If you find errors, you should open a new thread in the general forum to get help, as it's not container related.
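Some hedged first checks from the Unraid terminal (the device and mount point names are examples; adjust to your system):

```shell
# SMART health report for the suspect drive
smartctl -a /dev/sdb

# If the cache pool is btrfs, show its I/O error counters
btrfs device stats /mnt/cache
```

If either reports errors, take that output to a new thread in the general forum.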

  20. 15 minutes ago, DiscoDuck said:

Ok. When I just run ffmpeg I get the output below, and libxml2 isn't enabled. Maybe I've missed something basic in the configuration?


     

    
    ffmpeg version 4.3.1 Copyright (c) 2000-2020 the FFmpeg developers
      built with gcc 9.3.0 (Alpine 9.3.0)
      configuration: --prefix=/usr --enable-avresample --enable-avfilter --enable-gnutls --enable-gpl --enable-libass --enable-libmp3lame --enable-libvorbis --enable-libvpx --enable-libxvid --enable-libx264 --enable-libx265 --enable-libtheora --enable-libv4l2 --enable-libdav1d --enable-postproc --enable-pic --enable-pthreads --enable-shared --enable-libxcb --enable-libssh --disable-stripping --disable-static --disable-librtmp --enable-vaapi --enable-vdpau --enable-libopus --enable-libaom --disable-debug
      libavutil      56. 51.100 / 56. 51.100
      libavcodec     58. 91.100 / 58. 91.100
      libavformat    58. 45.100 / 58. 45.100
      libavdevice    58. 10.100 / 58. 10.100
      libavfilter     7. 85.100 /  7. 85.100
      libavresample   4.  0.  0 /  4.  0.  0
      libswscale      5.  7.100 /  5.  7.100
      libswresample   3.  7.100 /  3.  7.100
      libpostproc    55.  7.100 / 55.  7.100
    Hyper fast Audio and Video encoder
    usage: ffmpeg [options] [[infile options] -i infile]... {[outfile options] outfile}...
    
    Use -h to get full help or, even better, run 'man ffmpeg'

     

Then you need to ask Alpine Linux to enable it in their ffmpeg build.
