MikaelTarquin

Members
  • Posts: 48
  • Joined
  • Last visited


MikaelTarquin's Achievements

Rookie (2/14) · 3 Reputation · 1 Community Answers

  1. Never mind! Was able to fix it by using the following docker compose:

     version: '3.3'
     services:
       openbooks:
         container_name: OpenBooks
         image: evanbuss/openbooks:latest
         ports:
           - "8585:80"
         volumes:
           - '/mnt/user/data/media/eBooks/to_import:/books'
         command: --name my_irc_name --persist --no-browser-downloads
         restart: unless-stopped
         environment:
           - BASE_PATH=/openbooks/
         user: "99:100"
     volumes:
       booksVolume:
  2. I'm not very experienced with manually adding docker containers via compose, so forgive me if this is the wrong place to ask. I was able to successfully add openbooks to my Unraid server via docker compose, but for some reason all the files it downloads come in with r-- read-only permissions. I naively tried adding the PUID=99, PGID=100, and UMASK=002 lines to the environment section, thinking that might help, but no luck. Is what I am trying to do not possible?

     version: '3.3'
     services:
       openbooks:
         ports:
           - '8585:80'
         volumes:
           - '/mnt/user/data/downloads:/books'
         restart: unless-stopped
         container_name: OpenBooks
         command: --name <username> --persist
         environment:
           - BASE_PATH=/openbooks/
           - PUID=99
           - PGID=100
           - UMASK=002
         image: evanbuss/openbooks:latest
     volumes:
       booksVolume:
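     For reference: PUID, PGID, and UMASK are environment variables implemented by linuxserver.io images, and the evanbuss/openbooks image does not appear to read them, which would explain why they had no effect here. A minimal sketch of the compose-level alternative used in the fix above, assuming UID 99 / GID 100 map to nobody:users on Unraid:

     services:
       openbooks:
         # Run the container process itself as UID 99 / GID 100, so files it
         # writes to /books are owned by nobody:users instead of root. This is
         # enforced by docker, not by the image, so it works regardless of
         # whether the image supports PUID/PGID.
         user: "99:100"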
  3. Thank you for the tip. I am not sure what these config.json files are, but they seem to be the cause of the 100% usage on those cores. Edit: I went through each docker container one by one and found that when I stopped qbittorrent (BINHEX - QBITTORRENTVPN), the CPU usage went back to normal. I'll have to keep an eye on that one I suppose.
  4. Recently, half of my server's cores are showing constant 100% CPU usage, and I don't know what is causing it. Does anyone have any advice? I think it started happening after I replaced my cache SSD, in case that is relevant. nnc-diagnostics-20231207-0007.zip
  5. How did you fix the issue? I've been trying to get Calibre-Web OPDS to work with my Kindle+KOReader over my reverse proxy, but so far have only had luck locally. Everything I try just results in KOReader saying "Authentication required for catalog. Please add a username and password."
  6. I'm not sure why it's going so slowly. When I initially added them to the array and it did what I assume was a clear operation, it hummed along at something like 180 MB/sec average (220 at first, slowing as it got toward the other end of the disks). I am using the array, which I'm sure doesn't help, but parity checks normally still average about 90 MB/sec for me (about 3-4 days). Not sure why it's going at less than half that now.
  7. So, is it not possible to stop the rebuild and add them without rebuilding again? I guess I don't really understand what the data rebuild could be doing to the new empty drives that's so important.
  8. So, I am currently adding 2x 18TB drives to my array, which would be disks 16 and 17. It's been a while, but I thought all I had to do was stop the array, add those 2 as additional drives, and start the array to let Unraid do everything. After about a day, things finished but the array showed the 2 new drives as unmountable. I stopped the array again, attempted to remove them, format them as XFS (like the rest of the array) via Unassigned Devices, and then add them back before starting again. I did several things, so I may be misremembering the exact sequence, but the end result is they are now in the array as emulated disks 16 and 17 (I have 2x parity drives, so I wonder what would have happened if I had tried 3 at once), and the only option that looked like it would help is Data Rebuild. I kicked that off about 12 hours ago; it's 6% complete and averaging anywhere from 5 to 50 MB/s. I assume it's just going to spend the next week or two rebuilding "nothing", at the cost of reading through the entirety of every other drive in the array. Do I really need to let it go through this Data Rebuild and put all that wear on everything? Or is this likely to fail as well? Is there anything I can do to stop it and add them faster? I assume that with those 2 showing emulated, the rest of the array is now unprotected? nnc-diagnostics-20230807-0956.zip
  9. Did you upgrade the nextcloud version inside the web UI before upgrading the docker? I finally got mine working again last night and pinned the docker to release 27 so that it doesn't break again. This page helped me a lot: https://info.linuxserver.io/issues/2023-06-25-nextcloud/
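     A minimal sketch of what that pin can look like in compose; the exact tag name is an assumption, so check the image's published tags before relying on it:

     services:
       nextcloud:
         # Pinning to a versioned tag keeps the container on release 27, so
         # "latest" can't jump ahead of the in-app Nextcloud upgrade again.
         # The tag shown here is illustrative, not taken from the post.
         image: lscr.io/linuxserver/nextcloud:27.0.0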
  10. All I've been able to find so far is some fairly unhelpful discussion about Security Headers, which I think I've already addressed by uncommenting the lines in the SSL configuration file. Swag, duckdns, and namecheap are the only common threads I can think of. Bummer.
  11. Sorry for the screenshot log; I don't have a better way of sharing it at the moment. It looks unchanged from what I remember it saying in the past. My Nextcloud reverse proxy has stopped working (again; I swear that thing hates being reverse proxied more than anything), but I don't think that's related. I'm assuming an update broke it for the nth time.
  12. For the last couple of days, every one of my reverse-proxied docker containers has shown this. It's annoying enough for me, but friends trying to access pages I host for them are understandably worried I've been hacked, despite my assuring them this is some quirk of Google security monitoring or something. I've tried updating the SSL.conf in the swag appdata folder and uncommenting basically everything at the bottom of that file, but it doesn't seem to be helping. How can I make this go away?
  13. This is the kind of affirmation I love to hear, thank you!
  14. Hmm, I'm not sure how to answer that one, either. The auto proxy docker mod seems to handle that part for me. I don't manually rename any of the sample conf files under swag/nginx/proxy-confs/, so the relevant one is still the default nzbget.subdomain.conf.sample for me. Renaming this to remove the .sample at the end doesn't appear to change the functionality for my setup, either. However, the proxy_pass lines (there are multiple) in that sample file all say: proxy_pass $upstream_proto://$upstream_app:$upstream_port; I'm not sure what happens under the hood to let the auto proxy work, but I would assume it uses the same settings for every container. For now, I have simply reverted my nzbget container to the docker network "proxynet" that the rest of my swag containers are on, and am relying on SSL within the container rather than routing through deluge's VPN (which has also dramatically improved the performance of nzbget, by about 5x). I think I'd still like to figure out a VPN solution, but I don't think this particular method works well with my SWAG reverse proxy needs.
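      A minimal compose sketch of that reverted arrangement, assuming an external "proxynet" network and the swag-auto-proxy label convention described in the next post:

      services:
        nzbget:
          image: lscr.io/linuxserver/nzbget:latest
          labels:
            # Picked up by the swag-auto-proxy mod, which generates a proxy
            # conf for this container from the sample's $upstream_* variables.
            swag: enable
          networks:
            - proxynet

      networks:
        proxynet:
          # The shared docker network that the SWAG container is attached to.
          external: true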
  15. Sorry, I'm not sure I understand your question. I am using subdomains in SWAG, and have the subdomain "nzbget", among others that I use, added to the "SUBDOMAINS" SWAG container variable. In SWAG I use the container variable DOCKER_MODS with the value "linuxserver/mods:swag-maxmind|linuxserver/mods:universal-docker|linuxserver/mods:swag-auto-reload|linuxserver/mods:swag-auto-proxy", and on the containers being reverse proxied I add the container label "swag" with the value "enable" to get them working with SWAG; I learned this from this Ibracorp video. Maybe this is the source of my problem? This method has worked great for every other container I have reverse proxied, but maybe I need to learn another way for this one being routed in an odd way.
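      For completeness, the SWAG side of that setup might look roughly like this in compose. Only the DOCKER_MODS value is copied from the post above; the domain, validation method, and the rest are illustrative assumptions, not the poster's actual config:

      services:
        swag:
          image: lscr.io/linuxserver/swag:latest
          environment:
            - URL=example.duckdns.org   # placeholder domain, not from the post
            - SUBDOMAINS=nzbget         # plus whatever other subdomains are proxied
            - VALIDATION=duckdns        # assumed, since duckdns is mentioned elsewhere
            # Mod list copied from the post: auto-proxy watches for the
            # swag=enable label, auto-reload picks up conf changes.
            - DOCKER_MODS=linuxserver/mods:swag-maxmind|linuxserver/mods:universal-docker|linuxserver/mods:swag-auto-reload|linuxserver/mods:swag-auto-proxy
          networks:
            - proxynet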