THF13

Everything posted by THF13

  1. If you have the "Fix Common Problems" plugin installed, it adds an extra tool called "Docker Safe New Perms" that excludes the appdata directory and any other directories you specify; it's generally the recommended one to use.
  2. As someone who spent about a month dealing with an unreliable server, I ran into this issue a lot, and wanted to share why I think it comes up for this container more than others.

     The qBittorrent.conf file is being written to constantly, at least once a minute, even when no changes are being made. These frequent writes make it much more likely there will be a corruption issue when the system or container crashes unexpectedly. I doubt you're doing anything intentionally to cause this, but I spun up the linuxserver qBittorrent container (without vpn, proxy, any of the extra things you've added, obviously) and it hasn't made any writes to its qBittorrent.conf file in 20 minutes.

     The other reason I think this issue keeps coming up here is that it's more than an inconvenience. I use categories to place different files at different locations. When the .conf file reset happens, it doesn't just return the instance to a blank slate: it still remembers the active torrents, but none of my category configurations for them. So as soon as the container starts, it applies qBittorrent's automatic torrent management mode with the default location and starts moving every single file I have seeding into the default download directory.

     I've solved my unrelated server instability issues and haven't run into this in a while, but even so I still have the container start manually only, so I can check it first.
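     If you want to verify the write frequency yourself, here's a quick sketch using inotify-tools; the host path below assumes a typical unRAID appdata mapping for a qbittorrentvpn container, so adjust it to your own layout:

         # Log every write event to the config file with a timestamp.
         inotifywait -m --timefmt '%F %T' --format '%T %e %w' \
           -e modify -e close_write \
           /mnt/user/appdata/binhex-qbittorrentvpn/qBittorrent/config/qBittorrent.conf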
  3. Does it need to be a persistent IP, or does the web browser IP you log in from just need to match the current IP you are seeding from? If it's the latter, you can enable the privoxy proxy built into the container and access the site through that. I recommend using an addon like FoxyProxy to be able to quickly switch it on and off.
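     To sanity-check what IP the site will see before relying on it, you can compare your normal egress IP against the proxied one; a quick sketch assuming privoxy is on its default port 8118 and the container is reachable at 192.168.1.100 (both placeholders):

         # Your normal WAN IP:
         curl https://ifconfig.me
         # The IP seen when going through the container's privoxy:
         curl --proxy http://192.168.1.100:8118 https://ifconfig.me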
  4. Exact same scenario here. 6.11.5, the "STARTING SERVICES" message appeared persistently after modifying a share's "Use cache pool" setting, and went away when I changed the share's "Export" option.
  5. Haven't had it happen recently, but I do think the settings resetting is an issue with this container. I personally only ever saw it after my unraid system crashed or shut down unexpectedly, but I have seen a few people post about it in this thread. I never moved from 4.3, so it's unrelated to any 4.3->4.4 issues. When it happened it was a complete loss of the settings, as if the conf file had been deleted and regenerated to a default state. From just having the qBittorrent.conf file open in Notepad++, it seems like it's constantly being saved, every minute or so; maybe if the system or container becomes unresponsive, this can sometimes cause it to be overwritten with an empty file?
  6. I believe I have this solved and the system has been running stably for the past week. iowait was in fact happening, it just wasn't showing up in top as a big problem; viewing the netdata docker container while the issue was occurring made it a lot clearer. I followed the advice in the thread below and used the Tips and Tweaks plugin to reduce "Disk Cache 'vm.dirty_background_ratio' (%):" from 10% to 1%, and "Disk Cache 'vm.dirty_ratio' (%):" from 20% to 2%.

     The effect was night and day when I tested the mover after this change: no pegged CPU cores, the system stayed responsive while the mover was running, and no lockups or crashes for the past week. I don't know why this only happened to me with parity enabled, but I'm glad it's fixed now. The fix has more of an impact the more RAM you have. My system has 128GB of RAM, so the effect was quite extreme, but I think experimenting with this setting is worth testing if you have 16GB or more and your system runs worse than expected when the mover triggers.
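     For anyone who'd rather try this from a terminal before committing to the plugin, the equivalent sysctl calls are below; the values are just what worked for me, not universal recommendations, and they don't persist across a reboot (the plugin handles that part):

         # Lower the dirty page-cache thresholds so writes are flushed
         # sooner and in smaller bursts (defaults are 10 and 20).
         sysctl -w vm.dirty_background_ratio=1
         sysctl -w vm.dirty_ratio=2

         # Confirm the current values:
         sysctl vm.dirty_background_ratio vm.dirty_ratio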
  7. Unraid version: 6.9.2

     Hardware:
     - Older dual-Xeon board with two Intel® Xeon® E5-2660 v2 CPUs (AES-NI supported)
     - 128GB ECC DDR3 RAM
     - Motherboard has a built-in LSI SAS2008; array/parity drives connected through an HP SAS expander, cache drives connected to the motherboard SATA ports
     - Unencrypted cache drive (XFS) for new files
     - 2nd encrypted cache drive (XFS-encrypted) in a different pool for appdata and metadata
     - Mix of 8-16TB encrypted (XFS-encrypted) array drives, 15 drives total
     - 18TB parity drive

     What happens: Usually, when parity is enabled (either building or valid) and data is written to the array (with my current setup this only happens when the mover runs), many CPU threads (but not all) go to 100% and parts of the system become unresponsive. Which parts seems random. Sometimes the system is able to recover and returns to normal; sometimes more and more pieces become unresponsive and the system is unable to restart on its own, requiring a hard power reset. If parity is not present the system is completely stable (~30 days, no issues).

     What is affected: This part is weirdly inconsistent.
     - Most commonly, network access to the shares over SMB.
     - Some docker containers become inaccessible, but not necessarily all and not always the same ones. One time this happened the machine seemed totally unresponsive and I couldn't load the webUI or most of the docker containers I tried, but Emby was perfectly fine, able to browse between pages and play back media from the array without issue.
     - Parts of the Unraid webUI itself. Sometimes it becomes totally inaccessible and won't load at all; sometimes certain pages (like the docker tab or the syslog) won't load but other pages will.

     Ability to shutdown/restart: When the system is locking up it is unable to even shut itself down. If I can access the terminal or webUI and trigger a shutdown, it will just hang and never finish the process.

     What I've tried:
     - Checked the LSI controller firmware for updates
     - Disabled all VMs
     - Unmounted additional unassigned SSDs I had attached via a PCIe card
     - Changed any BTRFS disks to XFS
     - Lowered the priority of the mover process
     - Mirrored syslog to flash
     - Ran drive benchmarks
     - Disabled parity to test stability

     Other details: When the issue is happening and I catch it early, I can usually get the system to recover with "mover stop", but it doesn't go back to normal for anywhere between 10-30 minutes and will still spike the CPU. Stopping docker altogether similarly does not immediately resolve the issue, nor does doing both simultaneously.

     Attached are the unraid diagnostics and the syslog from before the mover process started until I had to hard-reboot the server a few hours later. In this example I could access the unraid webUI but not the docker tab; some docker containers were still working (Emby, for one) but others were not (tdarr server node). Also attached are the recent drive benchmark results. Parity was building during this lockup. I thought the lockups only happened when the mover was running, but in the attached syslog the mover starts running at 4:40 and nothing related to the lockup happens until 8:15; there shouldn't have been enough data on the cache to take that long to move.

     unraid01-diagnostics-20211014-0933.zip syslog.txt
  8. I am working on fixing some intermittent unraid server stability issues that have forced me to hard-reboot the server a couple of times, a few days apart. The past two times it has happened, my qBittorrent settings have reset to defaults, seemingly from the contents of qBittorrent.conf being wiped and replaced with a nearly blank fresh file. Everything else is fine: the VPN files, the active torrents, where they're saved, etc.

     I moved my appdata folder from an unassigned disk to a second cache pool somewhat recently, but can't think of anything else that's changed. No other containers have had similar issues, though I run two qbittorrentvpn containers and it affected both of them each time in exactly the same way. It hasn't happened with clean array starts/stops or with stopping/starting the docker service or the individual container.

     It's pretty easy to fix; I just drag the previous .conf file from a backup back into appdata and restart the container. It is pretty annoying, however, because since my torrents are all in "Automatic" management mode it attempts to move every one to the default save location inside /config. Please let me know if anyone has an idea about why this is happening. Unfortunately I fixed it before thinking to save the log file; if it happens a third time I'll grab that before fixing and restarting it.
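     For anyone else hit by this, my recovery amounts to the following (a sketch; the container name and paths are from my setup, so adjust both):

         # Stop the container before touching its config.
         docker stop binhex-qbittorrentvpn

         # Put the last known-good config back in place.
         cp /mnt/user/backups/qBittorrent.conf \
            /mnt/user/appdata/binhex-qbittorrentvpn/qBittorrent/config/qBittorrent.conf

         docker start binhex-qbittorrentvpn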
  9. If you're bothering to enable https, I assume you are exposing it to the internet. I use the Swag docker container for a reverse proxy, which has a configuration for qBittorrent built in, making sure to set up an .htpasswd file and uncomment the lines in the configuration needed to put basic auth in front of the webUI. The container handles renewing a LetsEncrypt cert automatically, has most of the best practices in terms of protocols and headers configured for you, and includes fail2ban. I'd consider carefully whether you want to expose it to the internet at all before doing so; even with precautions, it is still a lot riskier than having it accessible only locally.
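     For reference, the auth file can be created from inside the swag container (assuming the container is named "swag"):

         # Create /config/nginx/.htpasswd with one user; -c creates or
         # overwrites the file, so drop it when adding more users later.
         docker exec -it swag htpasswd -c /config/nginx/.htpasswd myuser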
  10. If you use DNS validation you only need 443; the only thing you really lose is automatic http->https redirection. DuckDNS is free and supports DNS validation.
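      With the swag container, DNS validation through DuckDNS comes down to a few environment variables; a sketch with placeholder domain and token (PUID/TZ and such omitted):

          docker run -d --name=swag \
            --cap-add=NET_ADMIN \
            -e URL=mydomain.duckdns.org \
            -e SUBDOMAINS=wildcard \
            -e VALIDATION=dns \
            -e DNSPLUGIN=duckdns \
            -e DUCKDNSTOKEN=<your-duckdns-token> \
            -p 443:443 \
            -v /mnt/user/appdata/swag:/config \
            lscr.io/linuxserver/swag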
  11. AutoRemovePlus-0.6.15-py3.8.egg is working for me, though only through the webUI; it's not configurable from the thin client.
  12. Not really sure if this is a better solution, but it is an alternative. Leave the local basic auth as is, and in your reverse proxy add the line:

      proxy_set_header Authorization "Basic YWRtaW46cnV0b3JyZW50";

      This will pass the default admin/rutorrent credentials along after you auth via the reverse proxy. Definitely do not do this if you do not have basic auth set up in your reverse proxy. If you are using a different username and password, you'll need to convert your "user:password" pair to Base64. https://www.opinionatedgeek.com/codecs/base64encoder
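      You can also generate that string locally instead of pasting credentials into a website (shown with the default credentials; the -n matters, since a trailing newline changes the output):

          # Encode "user:password" for the Authorization header.
          echo -n 'admin:rutorrent' | base64
          # -> YWRtaW46cnV0b3JyZW50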
  13. 4.0.3 isn't coming for docker/linux builds apparently; the fix will be in the next patch, which they hope to have out next week. In the meantime I'd recommend Emby users mount their shares as RO. Emby devs respond to just about everything posted on their official forums.
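      If you're unsure how, that's the :ro suffix on the container's volume mapping; a minimal sketch with example paths:

          # Mount the media share read-only inside the Emby container.
          docker run -d --name=emby \
            -v /mnt/user/appdata/emby:/config \
            -v /mnt/user/media:/media:ro \
            emby/embyserver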
  14. Open /config/auth and add a second line with the following:

      <username>:<password>:10

      Then use those credentials to connect from the thin client, obviously using the appropriate IP and port to reach the container.
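      For example, appended from a shell (hypothetical credentials; the trailing 10 is the admin auth level in Deluge's auth file format):

          echo 'myuser:mypassword:10' >> /config/auth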
  15. -e VPN_ENABLED=yes needs to be set, but I'm assuming you probably did that.
  16. I actually used to do just that: I had a VM with Deluge installed, with my router configured so that whole VM could only reach the internet through the VPN. I'm using my current Ubuntu VM for more containers than just DelugeVPN, and honestly the way this container sets up the VPN rules and includes privoxy works better than what I did manually. Also, Docker for Windows sets up its own VM in Hyper-V to work anyway.
  17. I set up an Ubuntu VM using Hyper-V, mounted a Windows share in the VM, and used that for the /data directory.
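      Roughly like this inside the VM, assuming cifs-utils is installed and using placeholder host, share, and credential values:

          # Mount the Windows share at /mnt/data, then map that into the
          # container as /data.
          sudo mkdir -p /mnt/data
          sudo mount -t cifs //WINDOWS-PC/downloads /mnt/data \
            -o username=myuser,password=mypass,uid=1000,gid=1000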
  18. I'd prefer to have this container accessible through my linuxserver/letsencrypt reverse proxy; has anyone got that working well and can post their config? I've been trying it with various options and haven't gotten it quite right. When I refresh the page it seems almost random what happens: sometimes it loads correctly, sometimes it times out, sometimes it takes a long time to load, sometimes it loads some of the interface but with an error. Seems similar to an issue kostecki was having, but that was back in August and I didn't see any response to it. The reverse proxy is on the same local network as rtorrentvpn and uses a subdomain; rtorrentvpn loads correctly when going directly to IP:9080, and other sites using the reverse proxy work normally.

      EDIT: Think I've got it working, will edit this post again if I find an issue after a couple days of testing. Commented out authorization in the rtorrentvpn container by editing nginx.conf, then used the below server and location block on the reverse proxy container:

          # RTorrentVPN
          server {
              listen 443 ssl http2;
              # root /config/www;
              # index index.html index.htm index.php;
              server_name rvpn.domain.com;
              include /config/nginx/ssl.conf;
              client_max_body_size 0;

              location / {
                  proxy_pass_header Authorization;
                  proxy_pass http://192.168.1.XXX:9080;
                  proxy_set_header Host $host;
                  proxy_set_header X-Real-IP $remote_addr;
                  proxy_http_version 1.1;
                  proxy_set_header Connection "";
                  proxy_buffering off;
                  proxy_request_buffering off;
                  client_max_body_size 0;
                  proxy_read_timeout 36000s;
                  proxy_redirect off;
                  proxy_ssl_session_reuse off;
                  auth_basic "Restricted Content";
                  auth_basic_user_file /config/nginx/.htpasswd;
              }
          }