Posts posted by MikaelTarquin

  1. Never mind! Was able to fix it by using the following docker compose:

    version: '3.3'
    services:
        openbooks:
            container_name: OpenBooks
            image: evanbuss/openbooks:latest
            ports:
                - "8585:80"
            volumes:
                - '/mnt/user/data/media/eBooks/to_import:/books'
            command: --name my_irc_name --persist --no-browser-downloads
            restart: unless-stopped
            environment:
              - BASE_PATH=/openbooks/
            user: "99:100"
    volumes:
        booksVolume:
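
    For anyone finding this later: my understanding (an assumption on my part, not something I've confirmed with the openbooks devs) is that PUID/PGID/UMASK are a linuxserver.io convention that this image doesn't implement, so those environment variables were simply ignored. The piece that actually fixed the ownership is the compose user: directive, i.e. just this part of the file above:

    services:
        openbooks:
            # run the container process as 99:100 (nobody:users on Unraid)
            # so files written to /books are owned by the user the share expects
            user: "99:100"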

     

  2. I'm not very experienced with manually adding docker containers via compose, so forgive me if this is the wrong place to ask. I was able to successfully add openbooks to my Unraid server via docker compose, but for some reason all the files it downloads come in with r-- read only permissions. I naively tried adding the PUID=99, PGID=100, and UMASK=002 lines to the environment section, thinking that might help, but no luck. Is what I am trying to do not possible?

    version: '3.3'
    services:
        openbooks:
            ports:
                - '8585:80'
            volumes:
                - '/mnt/user/data/downloads:/books'
            restart: unless-stopped
            container_name: OpenBooks
            command: --name <username> --persist
            environment:
              - BASE_PATH=/openbooks/
              - PUID=99
              - PGID=100
              - UMASK=002
            image: evanbuss/openbooks:latest
    
    volumes:
        booksVolume:

     

  3. 2 hours ago, deadnote said:

    Open a terminal and execute this command to see the running tasks:

     

    htop

     

    Thank you for the tip. I am not sure what these config.json files are, but they seem to be the cause of the 100% usage on those cores.

    2023-12-07_09-22-56_chrome.png

     

    Edit: I went through each docker container one by one and found that when I stopped qbittorrent (BINHEX - QBITTORRENTVPN), the CPU usage went back to normal. I'll have to keep an eye on that one, I suppose.
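
    In case it helps anyone else: instead of stopping containers one by one like I did, something along these lines should point at the culprit more directly (just a sketch; <PID> is a placeholder for whatever htop shows pinning the core):

    # per-container CPU usage at a glance
    docker stats --no-stream

    # map a hot PID from htop back to the container that owns it
    cat /proc/<PID>/cgroup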

  4. On 2/12/2023 at 5:39 AM, vurt said:

    I did, and modified the conf like this, but it didn't work; it still asked for a password:

     

    	# OPDS feed for eBook reader apps
    	# Even if you use Authelia, the OPDS feed requires a password to be set for
    	# the user directly in Calibre-Web, as eBook reader apps don't support
    	# form-based logins, only HTTP Basic auth.
        location /opds/ {
            auth_basic off;
            include /config/nginx/proxy.conf;
            include /config/nginx/resolver.conf;
            set $upstream_app calibre-web;
            set $upstream_port 8083;
            set $upstream_proto http;
            proxy_pass $upstream_proto://$upstream_app:$upstream_port;
            proxy_set_header X-Scheme $scheme;
        }

     

    EDIT: Turned out it wasn't a reverse proxy issue for a change. Was on KOReader's end.

    How did you fix the issue? I've been trying to get Calibre-Web OPDS to work with my Kindle+KOReader over my reverse proxy, but so far have only had luck locally. Everything I try just results in KOReader saying "Authentication required for catalog. Please add a username and password."

  5. 3 hours ago, JorgeB said:

    P.S. 10 days for a rebuild seems like a lot, probably some controller bottlenecks exist.

    I'm not sure why it's going so slowly. When I initially added them to the array and it did what I assume was a clear operation, it hummed along at something like 180 MB/sec average (220 at first, and slower as it got to the other side of the disks). I am using the array, which I'm sure doesn't help, but normally parity checks still happen at about 90 MB/sec on average for me (about 3-4 days). Not sure why it's going at less than half that now.

  6. So, I am currently adding 2x 18TB drives to my array, which would be disks 16 and 17. It's been a while, but I thought all I had to do was stop the array, add those 2 as additional drives, and start the array to let Unraid do everything. After about a day, things finished, but the array showed the 2 new drives as unmountable. I stopped the array again, attempted to remove them, format them as XFS (like the rest of the array) via Unassigned Devices, and then add them back before starting again.

     

    I did several things, so I may be misremembering the exact sequence, but the end result is that they are now in the array as emulated disks 16 and 17 (I have 2x parity drives, so I wonder what would have happened if I had tried 3 at once), and the only option that looked like it would help is a Data Rebuild. I kicked that off about 12 hours ago now, and it's 6% complete and averaging anywhere from 5 to 50 MB/s. I assume it's just going to spend the next week or two rebuilding "nothing", at the cost of reading through the entirety of every other drive in the array.

     

    Do I really need to just let it go through this Data Rebuild and put all that wear on everything? Or is this likely to fail as well? Is there anything I can do to stop it and add them faster? I assume with those 2 showing emulated, the rest of the array is now unprotected?

    nnc-diagnostics-20230807-0956.zip

  7. 4 hours ago, allanp81 said:

    I still cannot get my nextcloud instance to work with swag. I've tried everything I can think of but can't get it to work.

    Did you upgrade the nextcloud version inside the web UI before upgrading the docker? I finally got mine working again last night and pinned the docker to release 27 so that it doesn't break again. This page helped me a lot:

     

    https://info.linuxserver.io/issues/2023-06-25-nextcloud/

  8. 3 hours ago, alturismo said:

    Yep, SWAG and SSL cert refresh are looking fine.

     

    I guess this is nothing SWAG related; I would search for the error on the self-hosted sites and look at what could cause this...

     

    I see some different reasons, and as I'm not using those services I can't really help from my side. When I google it, it now looks like you are flagged as malware... sorry, maybe someone else can chime in if they have had this before, but I would search for it...

     

    All I've been able to find so far is some fairly unhelpful discussion about security headers, which I think I've already addressed by uncommenting the relevant lines in the SSL configuration file. Swag, duckdns, and namecheap are the only common threads I can think of. Bummer.

  9. 14 hours ago, alturismo said:

    What do the SWAG logs say, especially when you start the docker and it tries to refresh the certs?

     

    Sorry for the screenshot of the log; I don't have a better way of sharing it at the moment. It looks unchanged from what I remember it saying in the past.

     

    My nextcloud reverse proxy has stopped working (again, I swear that thing hates being reverse proxied more than anything), but I don't think that's related. I'm assuming an update broke it for the nth time.

     

    [screenshot of the SWAG log attached]

  10. For the last couple of days, every one of my reverse proxied docker containers has shown this. It's annoying enough for me, but friends trying to access pages I host for them are understandably worried I've been hacked, despite my assurances that it's some quirk of Google security monitoring or something. I've tried updating the ssl.conf in the SWAG appdata folder and uncommenting basically everything at the bottom of it, but it doesn't seem to be helping. How can I make this go away?

    Screenshot_20230711-115115~2.png
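
    For reference, the part of ssl.conf I mean is the optional security-headers block near the bottom. The exact lines vary by SWAG version, but after uncommenting, mine looks roughly like this:

    # optional security headers shipped commented out in SWAG's ssl.conf
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header Referrer-Policy "same-origin" always;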

  11. 30 minutes ago, JonathanM said:

    Yeah, automation only works if all the prerequisites are met. When you step outside the box, you have to learn how to do this kind of stuff manually. It's not really that difficult, just have to step through each bit and follow it back to determine where the values came from, and override with your specific values.

     

    IMHO it's pointless to push usenet through a vpn if the connection is already encrypted. The usenet provider knows who you are, the ISP can see the amount of traffic but can't see the content, which is the same situation with any encrypted traffic. The only reason vpn is useful for torrents is anybody in the torrent group can see everyone else's IP as well as the content. Remove the knowledge of the content with the SSL pipe to the usenet provider, and there is no quick way to analyze you like there is with torrents.

    This is the kind of affirmation I love to hear, thank you! 

  12. 5 hours ago, JonathanM said:

    In the conf file that has your nzbget section, what value is in the proxy_pass line?

    Hmm, I'm not sure how to answer that one, either. The auto proxy docker mod seems to handle that part for me. I don't manually rename any of the sample conf files under swag/nginx/proxy-confs/, so the relevant one is still the default nzbget.subdomain.conf.sample for me. Renaming this to remove the .sample at the end doesn't appear to change the functionality for my setup, either. However, the proxy_pass lines (there are multiple) in that sample file all say:

    proxy_pass $upstream_proto://$upstream_app:$upstream_port;

     

    I'm not sure what happens under the hood to let the auto proxy work, but I would assume it uses the same settings for every container.

     

    For now, I have simply reverted my nzbget container to the docker network "proxynet" that the rest of my swag containers are on, and am relying on SSL within the container rather than routing through deluge's VPN (which has also dramatically improved the performance of nzbget, by about 5x). I think I'd still like to figure out a VPN solution, but I don't think this particular method works well with my SWAG reverse proxy needs.
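
    If I do revisit it, my current understanding (untested, so treat this as a sketch rather than a working config) is that the proxy conf would need to point at the VPN container instead of at nzbget itself, since nzbget shares DelugeVPN's network stack. Assuming the VPN container is reachable from SWAG's network and is named binhex-delugevpn, a manually renamed nzbget.subdomain.conf would override roughly like this:

    location / {
        include /config/nginx/proxy.conf;
        include /config/nginx/resolver.conf;
        # target the VPN container; NZBGet's web UI lives in its network namespace
        set $upstream_app binhex-delugevpn;
        set $upstream_port 6789;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }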

  13. 7 minutes ago, JonathanM said:

    What address are you using in the SWAG config? You should be using the address that works locally.

    Sorry, I'm not sure I understand your question. I am using subdomains in SWAG, and have the subdomain "nzbget", among others that I use, added to the "SUBDOMAINS" SWAG container variable. 

     

    I use the container variable DOCKER_MODS with value "linuxserver/mods:swag-maxmind|linuxserver/mods:universal-docker|linuxserver/mods:swag-auto-reload|linuxserver/mods:swag-auto-proxy" in SWAG, and in the containers getting reverse proxied the container label "swag" with value "enable" to get them working with SWAG, which I learned from this Ibracorp video. Maybe this is the source of my problem? This method has worked great for every other container I have reverse proxied, but maybe I need to learn another way for this one being routed in an odd way.

     

  14. On 2/17/2021 at 5:13 AM, JonathanM said:

    Add the port mapping for jackett's gui to the vpn container.

     

    Would you mind providing more detail on how to do this? I am in a similar situation with NZBGet. NZBGet has its network type set to "None", and extra parameters as needed for routing through my DelugeVPN container. To the DelugeVPN container, I have added both a container port 6789 (which I named "nzbget port") that uses host port 6789 with connection type TCP, and a container variable named "VPN Input Ports" with key "VPN_INPUT_PORTS" and value "6789". I can access NZBGet through the web UI locally at <local ip>:6789, but trying to access it through my reverse proxy (SWAG) just results in Error 502 Bad Gateway. (A rough sketch of the full wiring is at the end of this post.)

     

    I know the reverse proxy can work, since if I revert the network setup for the NZBGet container to be on the proxy net my SWAG container is on, it is accessible just fine (but obviously no longer on the Deluge container's VPN). 

     

    Thank you for your help!
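
    In case it clarifies the wiring I'm describing: I set this up through the Unraid template UI, but a rough compose-style equivalent (container names and images here are illustrative, not my exact settings) would be:

    services:
        delugevpn:
            image: binhex/arch-delugevpn        # the existing VPN container
            ports:
                - "6789:6789"                   # NZBGet's web UI, published via the VPN container
            environment:
                - VPN_INPUT_PORTS=6789          # allow inbound traffic to 6789 inside the VPN container
        nzbget:
            image: lscr.io/linuxserver/nzbget
            network_mode: "service:delugevpn"   # share the VPN container's network stack; no ports of its own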

  15. On 3/11/2023 at 11:29 AM, KluthR said:

    No, that's just right. Both settings can lead to broken backups: the first one if backups are not verified, and the second if errors occur which are silently discarded.

     

    The BackupMainV2 name will be gone with the new version.

    Is this right? As worded, it sounds like setting "Verify Backups?" to "Yes" can result in broken backups. But isn't the point of verification to ensure that they aren't broken?

  16. I still need to run memtest and update Unraid to v6.10, but I have been busy with a move and unable to find the time. However, today I noticed my cache drive is throwing a SMART error again (Reallocated Sector Count). This exact thing happened almost exactly 1 year ago, and I was unable to solve the problem then short of buying a new SSD. Needless to say, seeing an expensive 2TB SSD throw SMART errors after only 1 year and ~30TB of writes is extremely upsetting.

    If it's related: during the move, I also discovered I was unable to boot my server (a Dell T630) until I moved a stick of RAM out of slot B1 (currently slots A1, A2, B2, and B3 are populated). Swapping other DIMMs didn't resolve the error; it was only when that slot was unpopulated that it got to BIOS.

    Am I just screwed?

    nnc-diagnostics-20220613-1909.zip nnc-smart-20220613-1918.zip

  17. OK, is the best way to do a memtest to run it from the boot menu and let it go for a few days?

     

    I replaced the cache drive very recently. It seems Plex and the others are working; how best should I handle the corrupted file system?

     

    EDIT: I saw this post from a few years back saying it's pointless to run memtest with ECC RAM. Is that true? My RAM is ECC (Dell PowerEdge T630).

     

    https://forums.unraid.net/topic/91204-how-to-run-memtest-headless/?do=findComment&comment=846406

  18. A few days ago I had a power outage. My UPS allowed the server to shut down gracefully, but then I was unable to bring it back up the next day. It turns out the USB drive was bad. I didn't have a flash backup, so I made a new USB and used the registration tool to reclaim my license. Everything seemed to go very smoothly, at first.

     

    Today I was notified that my Ombi page isn't working, and sure enough, I can't log in either. While looking for possible causes, I noticed on my dashboard that my log is using 100% of its memory. I am unsure what is causing this, so I have attached the diagnostics here. Would anyone be able to help me figure out why this is happening? Thank you!

    nnc-diagnostics-20220424-0943.zip

  19. So SWAG has been working just fine on my server for several months, but today I suddenly can't access the two docker containers I have that use it, Nextcloud and Ombi. Checking the log for SWAG, all I see is:

     

    -------------------------------------
    _ ()
    | | ___ _ __
    | | / __| | | / \
    | | \__ \ | | | () |
    |_| |___/ |_| \__/
    
    
    Brought to you by linuxserver.io
    -------------------------------------
    
    To support the app dev(s) visit:
    Certbot: https://supporters.eff.org/donate/support-work-on-certbot
    
    To support LSIO projects visit:
    https://www.linuxserver.io/donate/
    -------------------------------------
    GID/UID
    -------------------------------------
    
    User uid: 99
    User gid: 100
    -------------------------------------
    
    [cont-init.d] 10-adduser: exited 0.
    [cont-init.d] 20-config: executing...
    [cont-init.d] 20-config: exited 0.
    [cont-init.d] 30-keygen: executing...
    using keys found in /config/keys
    [cont-init.d] 30-keygen: exited 0.
    [cont-init.d] 50-config: executing...
    Variables set:
    PUID=99
    PGID=100
    TZ=America/Los_Angeles
    URL=[REDACTED]
    SUBDOMAINS=ombi,cloud
    EXTRA_DOMAINS=
    ONLY_SUBDOMAINS=true
    VALIDATION=http
    CERTPROVIDER=
    DNSPLUGIN=
    EMAIL=[REDACTED]
    STAGING=false
    
    Using Let's Encrypt as the cert provider
    SUBDOMAINS entered, processing
    SUBDOMAINS entered, processing
    Only subdomains, no URL in cert
    Sub-domains processed are: -d ombi.[REDACTED].com -d cloud.[REDACTED].com
    E-mail address entered: [REDACTED]
    http validation is selected
    Certificate exists; parameters unchanged; starting nginx
    [cont-init.d] 50-config: exited 0.
    [cont-init.d] 60-renew: executing...
    The cert does not expire within the next day. Letting the cron script handle the renewal attempts overnight (2:08am).
    [cont-init.d] 60-renew: exited 0.
    [cont-init.d] 70-templates: executing...
    **** The following nginx confs have different version dates than the defaults that are shipped. ****
    **** This may be due to user customization or an update to the defaults. ****
    **** To update them to the latest defaults shipped within the image, delete these files and restart the container. ****
    **** If they are user customized, check the date version at the top and compare to the upstream changelog via the link. ****
    /config/nginx/ssl.conf
    /config/nginx/site-confs/default
    /config/nginx/proxy.conf
    /config/nginx/nginx.conf
    /config/nginx/authelia-server.conf
    /config/nginx/authelia-location.conf
    
    **** The following reverse proxy confs have different version dates than the samples that are shipped. ****
    **** This may be due to user customization or an update to the samples. ****
    **** You should compare them to the samples in the same folder to make sure you have the latest updates. ****
    /config/nginx/proxy-confs/ombi.subdomain.conf
    /config/nginx/proxy-confs/nextcloud.subdomain.conf
    
    [cont-init.d] 70-templates: exited 0.
    [cont-init.d] 90-custom-folders: executing...
    [cont-init.d] 90-custom-folders: exited 0.
    [cont-init.d] 99-custom-files: executing...
    [custom-init] no custom files found exiting...
    [cont-init.d] 99-custom-files: exited 0.
    [cont-init.d] done.
    [services.d] starting services
    [services.d] done.
    Server ready

     

    I don't know if those **** lines in blue are the cause of my issue, or if it's something else. Can anyone offer some assistance?

     

    UPDATE: I found the problem was that my ISP had pushed an update to my gateway that broke port forwarding. I resolved the issue by removing the port forwarding from the static IP of my server and instead applying it to the "friendly name" entry in the device list. This was a Pace 5268AC AT&T Fiber router, if that helps anyone looking at this in the future.

  20. Well, I'm at a loss. Plex has suddenly become inaccessible today. My client players can't connect to the server, and the web UI is inaccessible. The log doesn't show anything particularly out of place:
     

    -------------------------------------
    _ ()
    | | ___ _ __
    | | / __| | | / \
    | | \__ \ | | | () |
    |_| |___/ |_| \__/
    
    
    Brought to you by linuxserver.io
    -------------------------------------
    
    To support LSIO projects visit:
    https://www.linuxserver.io/donate/
    -------------------------------------
    GID/UID
    -------------------------------------
    
    User uid: 99
    User gid: 100
    -------------------------------------
    
    [cont-init.d] 10-adduser: exited 0.
    [cont-init.d] 40-chown-files: executing...
    [cont-init.d] 40-chown-files: exited 0.
    [cont-init.d] 45-plex-claim: executing...
    [cont-init.d] 45-plex-claim: exited 0.
    [cont-init.d] 50-gid-video: executing...
    [cont-init.d] 50-gid-video: exited 0.
    [cont-init.d] 60-plex-update: executing...
    Docker is used for versioning skip update check
    [cont-init.d] 60-plex-update: exited 0.
    [cont-init.d] 90-custom-folders: executing...
    [cont-init.d] 90-custom-folders: exited 0.
    [cont-init.d] 99-custom-scripts: executing...
    [custom-init] no custom files found exiting...
    [cont-init.d] 99-custom-scripts: exited 0.
    [cont-init.d] done.
    [services.d] starting services
    [services.d] done.
    Starting Plex Media Server.

     

    I've restarted Docker and then the whole server just to make sure. Any ideas?

     

    Edit: fixed by removing the container and re-adding it from the template.
