Everything posted by MaxiWheat

  1. Any follow-up on this? I plan to do something similar for the same reason as the OP, since Apple requires HTTPS for autofill, and I would like to use my own domain too. Has the previous method for combining the files worked?
  2. Ok, I think I found what the problem is, or more like how to replicate the issue... I'm not totally sure about the deep root cause, but I have an explanation. I access each webui by the IP address of my Unraid server with the port (http://10.0.0.xxx:8080 and http://10.0.0.xxx:8081). If I open each webui in a different browser tab at the same time and then interact with them (open settings, change filters, etc.), the container memory starts to grow and grow... as long as I keep the tabs with the webuis open. The problem does not occur if I open one webui alone, close it, and then open the other. It also happens if I open both with the server name (hostname) instead of the IP address. I think it has something to do with the cookies and/or the authentication for the URL/hostname used. I was using "Bypass authentication for clients in whitelisted IP subnets" with a single address in it, so that my main desktop computer could access the webui without providing a password. Both containers were set that way and the issue occurred. If I uncheck this setting and have to provide a password to access the webui, then logging into one of the webuis disconnects me from the other one in the other tab, so the problem does not occur, but I can't use both webuis at the same time in the same browser. This can also be solved by using a different hostname to access each webui (the port still needs to be provided); a sketch of that workaround is shown below. So this is it; if you have more details about this issue or tricks to share, they're welcome.
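     A minimal sketch of the different-hostnames workaround mentioned above, assuming 10.0.0.50 stands in for the Unraid server's IP and qbt-a / qbt-b are hypothetical names you would pick yourself:

         # /etc/hosts on the desktop machine that opens both webuis
         # (on Windows: C:\Windows\System32\drivers\etc\hosts)
         10.0.0.50   qbt-a
         10.0.0.50   qbt-b

     Then browse to http://qbt-a:8080 and http://qbt-b:8081 in separate tabs. Browsers scope cookies by hostname (not by port), so each webui keeps its own session instead of the two instances overwriting each other's cookies.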
  3. Hi everyone, I'm having a very strange memory issue with this docker image. I am already running an instance of this image in Docker, and it runs pretty smoothly: I seed ~1200 torrents and it idles at around 300MiB of RAM usage according to the Docker page in Unraid. I'm also running an old rtorrent/rutorrent container serving ~2200 torrents (on average larger than those in the current qBT), which I want to migrate to qBittorrent, so I created a new binhex-qbittorrentvpn container using the previous one as a template and took good care of changing all ports and paths (including appdata) to avoid conflicts. The container started correctly and the webui is accessible (I had to do a little work on the login since adminadmin does not work anymore, but I managed to make it work). I then imported all my torrents from rtorrent into the new qBittorrent instance without starting them, to make sure everything was fine before starting to seed. I stopped my rtorrent container to avoid "double-seeding" and started seeding with the new qBittorrent container. And then the problems began... Both qBittorrent containers started to increase in memory usage. I had not set a limit on their memory usage, so they ended up taking all my Unraid RAM (24GB), to the point that the Unraid webui was not accessible anymore: I had to reboot my server. I then set a memory limit on the containers of 3GB (first qBT) and 6GB (new qBT); a sketch of that kind of limit is shown after this post. But they maxed those out too: sometimes the "watchdog-script" in the container logs detected that qBT was not running anymore (it got killed inside the container) and restarted it, and other times the memory stayed at 5.997GiB and the 2 CPU cores I allowed maxed out at 100%. All of this happens within ~5 minutes. If I only start one or the other of the qBT containers, everything runs fine... but if I run both, dang, the memory goes crazy. Can someone explain what is going on here? Am I missing something? Are there special configs I need to set to avoid issues when running this container twice? Any help would be appreciated. I can also provide more information/context if required.
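     For reference, a minimal sketch of the kind of memory cap described above; the container names here are hypothetical, and on Unraid the same flag can simply be added to the template's "Extra Parameters" field:

         # cap the existing containers (3 GB and 6 GB respectively)
         docker update --memory=3g --memory-swap=3g binhex-qbittorrentvpn
         docker update --memory=6g --memory-swap=6g binhex-qbittorrentvpn-2

         # or, in the Unraid container template, add to "Extra Parameters":
         #   --memory=3g

     The cap keeps a runaway container from exhausting the host's 24GB, but as described above the kernel's OOM killer then just kills qBittorrent inside the container instead of fixing the underlying growth.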
  4. Could you give more details on how you managed to make it work without a VPN at all? Where/how do you specify the port for torrenting? There does not seem to be one like the linuxserver image had. Do I have to add one more port mapping myself and configure it manually in the rtorrent conf file once the container is started (something like the sketch below)?
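     A guess at what that could look like, assuming port 51413 is only an example and that the rtorrent.rc in the container's appdata is the right place to set it (the container may manage or overwrite this file, so treat it as a sketch):

         # map the port on the container first (Unraid: add a Port to the template,
         # or append to Extra Parameters):  -p 51413:51413 -p 51413:51413/udp
         # then pin rtorrent to it in rtorrent.rc (older builds use port_range / port_random)
         network.port_range.set = 51413-51413
         network.port_random.set = no

     The port would also need to be forwarded on the router for incoming peers to reach it.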
  5. Hello everyone, Here is my situation: I used to make backups of my home computers onto external drives which I then brought to work, to avoid losing everything in case of a catastrophe (fire, flood, etc.). I own 4x8TB external drives. I organized things on those drives like this (my computers at home were all Windows):
     Drive 1:
       Computer1\D\Folder1
       Computer1\E\Folder1
       Computer1\E\Folder2
       Computer2\D\Folder1
     Drive 2:
       Computer3\D\Folder1
       Computer3\E\Folder1
       Computer3\E\Folder2
       Computer3\E\Folder3
     etc...
     I was able to map complete drives, or 2 drives from each computer, onto each of my external backup drives. I used rsync (with cygwin) to make the backups, to minimize the time they take. But now with Unraid I'm having a hard time figuring out how to organize that. My files are scattered across multiple drives, and my drive sizes don't match the sizes of my backup drives. I'm aware that my array is 36TB and my backups total 32TB, but I'm currently using 21.6TB on my array, so it should be enough. The same goes for my shares: they don't match the external drive sizes. Any idea how I could make my life easier with backups?
     I would like to be able to make incremental updates (once every 2 months, when I bring my drives back home)
     I would like to not have to think about how I need to "split" my data across my external drives
     I would like my external drives to be readable independently outside Unraid
     Not a hard requirement: if possible, a solution that can run on another computer on the network (using shares to access the files), since my license on the server has a limited number of connected devices and I would not be able to connect all the external drives and mount them in Unassigned Devices at the same time
     A rough sketch of an incremental rsync run is shown after this post. Thank you
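     A rough sketch of the kind of incremental run this could translate to, assuming the Unraid share is reached over the network from another Linux box and that /mnt/unraid/Share1 and /mnt/backup1 are placeholder mount points for the share and the 8TB external drive:

         # pull one Unraid share onto one external drive, only copying changes
         # --archive keeps permissions/timestamps, --delete mirrors removals,
         # --partial --progress make resumed 2-monthly runs less painful
         rsync --archive --delete --partial --progress \
               /mnt/unraid/Share1/ /mnt/backup1/Share1/

     The remaining problem, as noted above, is still deciding which shares go on which 8TB drive, since a single share can easily outgrow one external disk.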
  6. So here is my setup:
     Unraid 6.8.2, connected directly to my router at 1Gbps with a static IP
     User share named 'ZZZ' created in Unraid with Cache set to Yes
     VM running Lubuntu 18.04
       vdisk is in /mnt/user/domains/MyVM/vdisk1.img and currently resides on the cache drive
       uses network bridge br0
       network connection inside the VM is set with a static IP in the same subnet as my Unraid server and the other machines on my network (192.168.1.0/24)
       mounted share 'ZZZ' inside my VM with NFS, like this in fstab:
         192.168.1.94:/mnt/user/ZZZ /home/myname/ZZZ nfs rw,hard,intr,rsize=8192,wsize=8192,timeo=14 0 0
     When I run a speed test from inside the VM at the root mount point (which is on the vdisk on my cache drive) I get very good results:
         myname@MyVM:/$ sudo dd if=/dev/zero of=./speedtest bs=8k count=100k; sudo rm -f ./speedtest
         [sudo] password for myname:
         102400+0 records in
         102400+0 records out
         838860800 bytes (839 MB, 800 MiB) copied, 1.18783 s, 706 MB/s
     However, when I go into my mounted share, I get pretty poor results compared to what I expected; there seems to be a bottleneck somewhere:
         myname@MyVM:~/ZZZ$ sudo dd if=/dev/zero of=./speedtest bs=8k count=100k; sudo rm -f ./speedtest
         102400+0 records in
         102400+0 records out
         838860800 bytes (839 MB, 800 MiB) copied, 8.1232 s, 103 MB/s
     I already checked that this test writes to the cache drive, so it is not a "writing to the array" issue. I think it is a network issue, since the speed I get is near the limit of a 1Gbps link, but I can't find how to resolve it. Any idea? (A small sketch of things worth checking is shown after this post.)
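     A minimal sketch of two checks that could narrow this down, assuming iperf3 is available on both ends and that the larger rsize/wsize values are only a guess to try, not a known fix:

         # 1) measure raw VM-to-host throughput, independent of NFS
         #    on the Unraid host:
         iperf3 -s
         #    in the VM:
         iperf3 -c 192.168.1.94

         # 2) remount with larger NFS transfer sizes than the 8k above
         #    (fstab line, same fields as above; values are an assumption)
         192.168.1.94:/mnt/user/ZZZ /home/myname/ZZZ nfs rw,hard,rsize=1048576,wsize=1048576 0 0

     If iperf3 already shows well above 1Gbps between the VM and the host, the cap is more likely in the NFS mount options or the small dd block size than in the br0 bridge itself.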
  7. I had the exact same problem, using a SATA M.2 drive as my cache... Very poor performance (~15MB/s) when mounting the drive with 9p in fstab; using NFS I get much better results (~400MB/s). Maybe tweaking the mount options would help (something like the sketch below), but I can't find anything by googling around either. IMO, with this much slowness, 9p should never be used.
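     For anyone who still wants to try 9p, a sketch of the fstab tweaks that are usually suggested, assuming 'hostshare' stands in for whatever mount tag the VM template defines and that the paths and values are placeholders to experiment with, not a verified fix:

         # larger msize = bigger 9p packets; cache=loose relaxes coherency for speed
         hostshare /home/myname/ZZZ 9p trans=virtio,version=9p2000.L,msize=262144,cache=loose,_netdev 0 0

     Even so, the NFS numbers above may be hard to beat, so this is probably only worth testing where NFS is not an option.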