aptalca

Community Developer
  • Posts: 3064
  • Days Won: 3

Posts posted by aptalca

  1. Hey guys,

     

    While running a parity check, I was faced with 128 read errors on one of the disks and 22 sync errors. The parity check is still ongoing, and the errors seem to have happened all at once and then stopped. I checked the SMART report for the disk and nothing looks (to me) out of the ordinary. My parity check is non-correcting and still has a few more hours to go.

     

    Any ideas on how I should proceed? The integrity of my data is of utmost importance. I already have a hot spare ready to go, but I want to make sure that my data is intact before I do something like that.

     

    Diagnostics attached

     

    Thanks

    tower-diagnostics-20201113-1557.zip

  2. 3 hours ago, casperse said:

    Maybe a stupid Q

     

    But is it okay to add multiple subdomains like this?

    
    server {
        listen 443 ssl;
        listen [::]:443 ssl;
    
        server_name photo.domain.dk;
        server_name photos.domain.dk;
        server_name piwigo.domain.dk;

    And could I just add a piwigo.domain2.dk also?

    It might work, but I don't want to go against the approved structure.

    You can put multiple names in a single server_name directive; don't use multiple server_name directives.
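
    For example, a minimal sketch of the quoted block with everything in one directive (names taken from the question, second domain included; ssl_certificate directives omitted for brevity):

    
    server {
        listen 443 ssl;
        listen [::]:443 ssl;
    
        # one server_name directive listing every name this block should answer for
        server_name photo.domain.dk photos.domain.dk piwigo.domain.dk piwigo.domain2.dk;
    }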

  3. 2 hours ago, PsiKoTicK said:

    Hey, man, thank you.  It was not a direct answer, but I was able to find enough to realize I needed a second file in the site-confs for the other server, not just another block in the default (cuz it wouldn't let me, who knew?) - and it's up and going, and my wife is happy, and now I know new things.  Appreciate your help.

    To clarify, you CAN edit the default site conf to modify or add server blocks.

    Alternatively, you can add more site conf files like you did. Either works.

  4. 7 hours ago, PsiKoTicK said:

    Ah, I just have the default certificate; I don't "need" the *., just the main b.com and www.b.com, and it errors if I use *., so I am fine not having it.

     

    So should I make a new proxy config? My server block seems to only exist in the proxy subdomain/subfolder .conf files - I admit I am a n00b with this stuff, but I am a tech guy, just... I still find this hard to wrap my head around; I'm getting there, though!

    The main conf is /config/nginx/nginx.conf, which includes (imports) /config/nginx/site-confs/default; that file contains the main server block and also includes (imports) all the proxy confs.

     

    Check out the examples in the default site conf
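
    For reference, a simplified sketch of that include chain (paths as in the reply; contents abbreviated, not the full shipped files):

    
    # /config/nginx/nginx.conf (abbreviated)
    http {
        include /config/nginx/site-confs/*;
    }
    
    # /config/nginx/site-confs/default (abbreviated)
    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        server_name _;
        # subfolder proxy confs are pulled into this main server block
        include /config/nginx/proxy-confs/*.subfolder.conf;
    }
    # subdomain proxy confs define their own server blocks
    include /config/nginx/proxy-confs/*.subdomain.conf;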

  5. On 11/3/2020 at 8:28 PM, PsiKoTicK said:

    Hey y'all.  I'm trying to add a second domain to my container, and I need a wordpress container to be where www. goes for the new domain.

     

    domain a.com is working, and I have both subdomain and subfolder conf files working, and the same subdomains will work with the new b.com domain

     

    My wife does not want to have to use wordpress.b.com; she wants to just use b.com and/or www.b.com. I am not quite sure how to set this up in the container without rebuilding my entire proxynet.

     

    is there a way to set this up?

     

    I have figured out that I cannot use *.b.com in the extra_domains field, so the cert is now good for a.com and b.com, but I am unsure how to force www.b.com and b.com to go to a hosted wordpress container without having a.com and www.a.com go there too. Is there a config I can change so it looks at both and routes based on the domain?

    If you're doing DNS validation, you can get a wildcard for b.com by setting extra domains to "b.com,*.b.com"

     

    Then, to serve wordpress at b.com, set the server_name in the wordpress server block to b.com
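
    A minimal sketch of that server block, assuming the wordpress container is reachable as "wordpress" on port 80 (both assumptions; adjust to your container name and port):

    
    server {
        listen 443 ssl;
        listen [::]:443 ssl;
    
        # b.com and www.b.com route here; a.com keeps its own server block
        server_name b.com www.b.com;
    
        location / {
            # docker's embedded DNS, so the container name resolves
            resolver 127.0.0.11 valid=30s;
            # "wordpress" and port 80 are assumed values
            proxy_pass http://wordpress:80;
        }
    }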

  6. 5 hours ago, romain said:

    I updated git through the docker console.

     

    I had to run this:

    
    sudo apt update
    sudo apt upgrade
    sudo apt-get install software-properties-common
    sudo add-apt-repository ppa:git-core/ppa
    sudo apt update
    sudo apt install git

    After those all finished, I was on 2.29.1 or 2.29.2 or something.

    Then you'll have to create a custom script that runs those steps for you during container start: https://blog.linuxserver.io/2019/09/14/customizing-our-containers/
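
    Something like the following sketch, saved as an executable file in the custom scripts folder described in that post (the filename is hypothetical; init scripts run as root, so no sudo):

    
    #!/usr/bin/env bash
    # hypothetical /config/custom-cont-init.d/install-latest-git.sh
    # re-installs git from the git-core PPA on every container start
    apt-get update
    apt-get install -y software-properties-common
    add-apt-repository -y ppa:git-core/ppa
    apt-get update
    apt-get install -y git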

  7. 8 hours ago, romain said:

    I've got another question - I was on version 3.6.0 and had updated git to 2.29.x. Today I let unRAID update code-server to 3.6.1, and after the update git was back on 2.17. Is there a way to keep git up to date through container updates, or will I just need to do a manual update every time the container updates?

    How did you update it? The bionic repo only has 2.17.

  8. 1 hour ago, Soulflyzz said:

    Hello,

     

    I updated from Letsencrypt to swag and it has been working well for quite a while; for some reason I wanted to do a fresh install.

    I removed the swag docker and used CA to completely remove it before I installed the new one.

    After install, the docker is not in the list, but it is in the appdata folder.

    Any ideas how to get it to show up? I have tried a restart with no luck. Also, if I try to reinstall it, all it does is let me edit the config.

     

    *Update - It also auto boots and looks to be running, but I cannot see it in the running docker apps.

     

    That's a question for Unraid and/or CA.

  9. 1 hour ago, romain said:

    Does anyone know how to get to the preview page from the Live Server extension? I saw someone asking about the extension in the past, but I think my question is different: Live Server says it's running on port 5500. Normally you would go to 127.0.0.1:5500/index.html if you were using VS Code on your local workstation, but since it's a docker container it doesn't work like that. I tried adding a few ports to the docker configuration to let me access ports 5500 through 5505 from my LAN (one port mapping each for 5500, ..., 5505), but that didn't work either.

     

    This may be more of a routing question than a code-server question, but if anyone here can help me out I'd appreciate it!

     


    Perhaps live server needs to bind to 0.0.0.0 so it's accessible from outside the container, as opposed to binding to 127.0.0.1, which is only accessible from inside the container.
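
    If it's ritwickdey's Live Server extension, it exposes a host setting for exactly this; a sketch of the settings change, assuming that's the extension in use (setting names per that extension's docs):

    
    // settings.json in code-server
    {
        // bind Live Server to all interfaces instead of loopback
        "liveServer.settings.host": "0.0.0.0",
        "liveServer.settings.port": 5500
    }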

  10. 26 minutes ago, Crash said:

    I'm looking for a way to limit bandwidth based on IP (NOT on a per-connection basis). I want to allow multiple connections from a single IP, but have the total bandwidth per IP capped at a specified limit.

     

    I found a simple module that does this here, but it seems super old (though there are some newer pull requests that convert it to a dynamic module, and some that fix bugs): https://github.com/bigplum/Nginx-limit-traffic-rate-module

     

    Is there a way to install new modules? Or another way to accomplish this goal?

    No, nginx would have to be compiled with it.
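
    For reference, stock nginx can only approximate per-IP limits: limit_conn caps concurrent connections per IP and limit_rate caps each connection, so together they bound total per-IP bandwidth, at the cost of also restricting the connection count (which is what the quoted module avoids). A sketch:

    
    # http context: track connections per client address
    limit_conn_zone $binary_remote_addr zone=peraddr:10m;
    
    server {
        listen 443 ssl;
        server_name example.com;
    
        # at most 2 connections per IP, each capped at 512 KB/s,
        # so total per-IP bandwidth is bounded around 1 MB/s
        limit_conn peraddr 2;
        limit_rate 512k;
    }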

  11. 16 hours ago, ccsnet said:

    Hi all - feeling a bit of a twit tonight, as I forgot my admin password and I only have one user set up since it is for home use.

     

    Does anyone have the process to reset it via the docker CLI? I believe "passwd username" is possible, but I'm unable to invoke it via the terminal.

     

    Thanks

     

    Terran

    passwd username won't work because it's not using PAM.

     

    See the readme instructions about disabling the "admin" user: reverse that to re-enable the admin user, restart the container, log in with "admin/password", make your changes to your main user, and then disable the admin user again.

  12. 4 hours ago, fc0712 said:

    Hey

     

    I just migrated from letsencrypt to Swag and wanted to set up the UniFi docker as part of my reverse proxy setup. I'm using the subfolder setup for all my dockers, but that isn't supported for the UniFi controller.

     

    Could anyone guide me on how to set up UniFi as a subdomain? It says that I need to set up a CNAME for unifi. What should the CNAME settings be?
     

    I'm using my own domain and a static IP with DNS at Cloudflare.

     

    Create a CNAME in Cloudflare with just an asterisk as the name ("*"), pointing to the A record for your domain, and that's it. Then, in the swag docker settings, set the subdomains variable to "wildcard" without the quotes.
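
    In env-var form (the domain is a placeholder for your own), that corresponds to:

    
    URL=yourdomain.com
    SUBDOMAINS=wildcard
    VALIDATION=dns
    DNSPLUGIN=cloudflare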

  13. 10 hours ago, martinjuhasz said:

    Do you have some more information on that @aptalca? I have a subdomain set up using proxy_domain and swag, but am not able to find anything online about sub-subdomain configuration. 

    It's code-server functionality and it's explained in their docs. Our image supports the domain name setup via an env var, and swag's built-in proxy conf allows it by default.

     

    You just need to add "*.code-server.yourdomain.com" to extra domains in swag so your cert covers the sub-subdomains of code server.

    (Or XXXX.code-server.yourdomain.com if you're doing http validation and can't do wildcard.)

  14. 8 hours ago, vonpelz said:

    How do you get wildcard certs for additional domains? I've set EXTRA_DOMAINS="*.domain2.com", but a wildcard cert is only created for the primary URL domain. Under /etc/letsencrypt/live there is only one folder, which is for the primary domain.

    
    URL=domain1.com
    SUBDOMAINS=wildcard
    EXTRA_DOMAINS=*.domain2.com
    ONLY_SUBDOMAINS=false
    VALIDATION=dns
    DNSPLUGIN=cloudflare
    [email protected]

    I've also tried setting EXTRA_DOMAINS=domain2.com,*.domain2.com, but it didn't make any difference.

     

    Edit: Never mind, my mistake. The certificate created is valid for both domains! And when I provide it as EXTRA_DOMAINS=domain2.com,*.domain2.com, the certificate works for the root as well.

    There is only ever one cert generated with this image, and it contains all the names as SANs.
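
    You can verify by listing the SANs on that one cert (path as in the question):

    
    openssl x509 -noout -text -in /etc/letsencrypt/live/domain1.com/fullchain.pem \
        | grep -A1 'Subject Alternative Name'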

  15. 12 hours ago, mika91 said:

    Hi,

     

    Just installed the swag container yesterday, and all is working fine so far.

    Now I need to limit access to the services.
     

    Regarding access control, after reading the documentation and config files, it seems the choices are: basic auth, LDAP, Authelia, or Organizr auth.

    Except for Jellyfin, all other services just need authorization (sonarr, radarr, jackett, qbittorrent, ...).
    I'd like to keep it as simple as possible, both on the configuration side and for the user experience.

    Ideally:

    • log in once, then access all services (except Jellyfin, as it needs its own authentication and does not support OIDC)
    • centralize unraid users with the reverse proxy ones: Active Directory / LDAP?
    • a web UI to add/edit users

    Another point is getting access to my docker services on both the external and the local network.

    Is there a way, with some kind of DNS override, to access my services locally using the xxx.duckdns.org URL (when connected to my local network, xxx.duckdns.org would resolve to the unraid box IP)?

    Maybe using a services dashboard like heimdall/organizr/ombi will help to access services 'transparently', whether local or external?

     

    Thanks

    I use authelia and it works great. There is no web UI for user management yet (I hear it's in the works), but you can set up the users in a number of ways, including LDAP (I use a simple yaml file).

     

    See here: https://blog.linuxserver.io/2020/08/26/setting-up-authelia/

     

    For accessing the domain on LAN, you need either hairpin NAT or NAT loopback (if your router supports it), or you can set up split DNS (where you tell your local DNS to resolve the domain to the unraid LAN IP).

    The main caveat is that swag has to use port 443 on the host, which means you'll have to change unraid's https port to a different one first. Afterwards, all requests for https://yourdomain.com will resolve to unraid, and the client will connect to swag directly on LAN. (For the http to https redirect, you'd need to change unraid's port 80 as well so swag can use it, but I don't do that; I only use the https endpoint, so only port 443 goes to swag.)

    Google the three terms I mentioned above and you'll find plenty of info for your router/setup.
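
    As a concrete example of the split-DNS option, assuming a dnsmasq-based local resolver (Pi-hole uses the same syntax) and an unraid LAN IP of 192.168.1.100 (both assumptions):

    
    # resolves xxx.duckdns.org and all of its subdomains to the unraid box
    address=/xxx.duckdns.org/192.168.1.100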

  16. 1 hour ago, thunderclap said:

    I'm having an interesting problem with LetsEncrypt. Two issues I've experienced that I would like to resolve: if I use DNS through Cloudflare, my subdomains become unbearably slow; if I do the subdomains through my registrar and forego Cloudflare, anytime I add or remove a subdomain, LetsEncrypt reports a firewall/timeout error for several hours, rendering my subdomains inaccessible. Does anyone know why this is happening?

    You probably had the cloudflare cache/proxy turned on, which we recommend against. It's explained in the docs article linked in the first post.

  17. 2 hours ago, td00 said:

    Hey all - I've had this up and running for a while now - great image, thanks. Just a question though: is it possible to have a wildcard URL entry? Kind of like the way google does with *.google.com?

     

    My current setup just has this:

     

    URL=topleveldomain.com

    SUBDOMAINS=portainer,sonarr,radarr

     

    But when I click to view the cert in the browser, it seems that it sets portainer.topleveldomain.com as the URL and the rest in the SAN, where they should be. I was just looking to see if it's possible to clean that up. Currently, my topleveldomain doesn't point to anything, if that makes a difference.

    Yes, you can get wildcard certs. It's explained in the readme.

  18. 2 hours ago, Muff said:

    Hi,

     

    Anyone else getting this error?

    I've googled around a bit but couldn't find an answer. I've also checked the files in the container but couldn't find anything about "sslforfree".

     

    The error message is:


    
    nginx: [emerg] cannot load certificate "/config/sslforfree/cert.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/config/sslforfree/cert.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)

    My old letsencrypt container didn't give this error message every 0.5 seconds.

    And I've reinstalled swag as well (delete container > delete swag folder under appdata > install swag).

    Looks like you modified your confs and referenced a custom cert. Our image does not use such a cert out of the box.

  19. 3 hours ago, shooga said:

    Thanks @bigmak for the response. I had added :443 while trying different things I found in my research - it didn't work, and I've removed it now.

     

    Turns out I didn't need to add a location for esphome specifically (/a0d7b954_esphome), but needed to add the /api/hassio_ingress location. I saw that in your config and thought it was worth a try. That fixed it! Now it works for esphome and vscode. Thanks again!

     

    Just to be clear for anyone else looking for help, this is the section that I needed to add. Maybe it's in the latest config sample with the container, but it wasn't in mine.

    
        location /api/hassio_ingress {
            resolver 127.0.0.11 valid=30s;
            set $upstream_app 192.168.1.205;
            set $upstream_port 8123;
            set $upstream_proto http;
            proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
    
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }

     

    You shouldn't need that. The latest updates to nginx.conf and proxy.conf auto-enable websockets when needed.
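
    For the curious, a simplified sketch of the map/header approach those defaults use (check your shipped nginx.conf and proxy.conf for the exact form):

    
    # nginx.conf: choose the Connection header based on whether the client asked to upgrade
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }
    
    # proxy.conf: forward the negotiated values to the proxied app
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;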