activ

Community Developer
Posts: 126

Everything posted by activ

  1. AFAIK the RPC password is the same one used to log into the web interface. I whitelisted both 192.168.x.x and 172.17.x.x
  2. Don't use it myself, but it should be easy enough to configure. Just be aware that the whitelist in Transmission needs to include the internal IP range used by docker inside the containers as well.
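The whitelist mentioned above lives in Transmission's settings.json (typically under the container's /config mount). A minimal sketch of the relevant keys, assuming docker's usual default bridge range of 172.17.*.*; note that Transmission rewrites settings.json on shutdown, so edit it while the daemon is stopped:

```json
{
  "rpc-whitelist-enabled": true,
  "rpc-whitelist": "127.0.0.1,192.168.*.*,172.17.*.*"
}
```

Transmission's whitelist uses * wildcards rather than CIDR notation.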
  3. You can specify the UID and GID to use inside the container; the defaults are 99 and 100, I believe. You will need to change that or assign rights to that user/group. It's a bit of a tricky affair, but once you get the idea you will never have the problem again.
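Two ways to handle that UID/GID mismatch, sketched under the assumption that the image honours PUID/PGID environment variables (many Unraid templates do, but check yours); the paths are illustrative:

```shell
# Option 1: make the container run as Unraid's default nobody:users
# (99:100), assuming the image supports PUID/PGID (an assumption --
# check its documentation).
docker run -e PUID=99 -e PGID=100 ...

# Option 2: leave the container alone and grant 99:100 ownership of
# the share on the host instead.
chown -R 99:100 /mnt/user/downloads
```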
  4. I configured the incomplete folder from the remote GUI; doing it directly in the config file will probably work best for you.
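For the config-file route, these are the relevant settings.json keys; the /data paths are assumptions matching a single data folder with complete/incomplete subfolders:

```json
{
  "download-dir": "/data/complete",
  "incomplete-dir": "/data/incomplete",
  "incomplete-dir-enabled": true
}
```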
  5. You have specified a folder that will map to the data folder inside the container; whatever is in there will be available in the container, including subfolders. Easiest is to create subfolders in the folder you used and select those in Transmission.
  6. All of what you want is easily done. I don't use a watch folder myself, but I do use an incomplete folder. I've added a folder named data to the container, below which I've made subfolders for complete and incomplete. I find that easier than adding lots of folders separately.
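That layout can be sketched as a single volume mapping; the host path and image name are placeholders for whatever your template uses:

```shell
# One host folder mapped into the container as /data; anything created
# under it (complete/, incomplete/) is visible inside automatically.
mkdir -p /mnt/user/downloads/complete /mnt/user/downloads/incomplete
docker run -v /mnt/user/downloads:/data <your-transmission-image>
```

Then point Transmission's download directory at /data/complete and the incomplete directory at /data/incomplete.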
  7. Hmm, weird. If you know your way around the CLI, you could try to connect to a shell inside the container and check connectivity from there.
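Connecting to the container for such checks might look like this; the container name transmission is an assumption, substitute your own:

```shell
# Interactive shell inside the running container:
docker exec -it transmission /bin/sh

# Or one-off checks without an interactive session:
docker exec transmission ping -c 3 8.8.8.8   # basic internet reachability
docker logs --tail 50 transmission           # recent container log output
```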
  8. The container uses a few commonly used ports, so you should check if there is a conflict. It's also quite slow to start; it could take a few minutes.
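A quick way to check for such conflicts on the host, assuming Transmission's usual defaults of 9091 (web UI/RPC) and 51413 (peer port); adjust for your template:

```shell
# Report whether anything on this host is already listening on the
# ports the container wants.
for port in 9091 51413; do
  if ss -tln 2>/dev/null | grep -q ":$port "; then
    echo "port $port is already in use"
  else
    echo "port $port is free"
  fi
done
```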
  9. It does seem fine at first glance. Could you try downloading something well-seeded, like an Ubuntu ISO? Maybe the tracker is blocking VPNs. Also: I've had cases where this happened with very new torrents; just leaving it and checking after a few hours solved it.
  10. Thanks for the tip, but my issue is not related to Windows Explorer. It even exists locally on the server. I am 99% sure it's related to the magic that presents multiple drives as one, as it only exists on user shares and not directly on the disks.
  11. I'm starting to think Unraid is the issue here. If I do a copy directly to a disk (i.e. /mnt/disk1) the performance is fine:
      1073741824 bytes (1.1 GB, 1.0 GiB) copied, 26.1334 s, 41.1 MB/s
      If I do the same thing to a folder in /mnt/user:
      1073741824 bytes (1.1 GB, 1.0 GiB) copied, 117.395 s, 9.1 MB/s
      Don't really know what Unraid does differently for /mnt/user compared to /mnt/disk1, but that seems to be causing a massive slowdown. Any ideas?
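The comparison above can be reproduced with dd. TARGET and the small size here are illustrative (the numbers quoted in these posts came from writing a full 1 GiB); conv=fdatasync forces the data to disk before dd reports a speed, so the page cache does not inflate the result:

```shell
# Sequential-write test: point TARGET at the path to compare,
# e.g. /mnt/disk1/test.bin versus /mnt/user/<share>/test.bin.
TARGET="${TARGET:-/tmp/dd-test.bin}"
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f "$TARGET"
```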
  12. Yes, all HDDs are slow. Will have to borrow a monitor to check the BIOS setting; will do so later this week. I did format all drives btrfs, could that be the problem? If so, is there a way of converting without data loss?
  13. Hey, I have been having slow write speeds on my Unraid system since I started using it, but recently it has started to bother me a lot (it was Solaris before with the same hardware, and speed was much better). So I have started trying to troubleshoot, but don't really know how. Some basic testing gives me this.
      The cache drive:
      1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.80028 s, 224 MB/s
      A normal HDD:
      1073741824 bytes (1.1 GB, 1.0 GiB) copied, 114.004 s, 9.4 MB/s
      Now the cache drive is fine and not relevant, but the normal HDD is not performing well at all. Running "hdparm -tT /dev/sdc" gives me:
      /dev/sdc:
      Timing cached reads: 3412 MB in 2.00 seconds = 1705.82 MB/sec
      Timing buffered disk reads: 332 MB in 3.05 seconds = 108.98 MB/sec
      Anyone got any ideas on how to troubleshoot and fix this? Copying large files takes forever at the moment. Thanks, activ
      tower-diagnostics-20161008-0825.zip
  14. I don't want to add any extra weight to this. There isn't even a real Transmission install in there, only the RPC client connecting to another docker that has the actual Transmission. What you could do to sort of fork mine is have a Dockerfile where the FROM points to mine. That way you only need to add your specific stuff, and each time you build, my latest version is used as the source.
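Such a derived build might look like the sketch below; the FROM image name is hypothetical, substitute the actual repository of this container's image:

```dockerfile
# Hypothetical base image name -- use the real repository here. Every
# rebuild pulls the latest base, so your additions stay layered on top
# of the current version.
FROM activ/transmission-rpc:latest

# Add only your own specifics on top of the base.
COPY my-settings.json /config/settings.json
```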
  15. I like that the remote brings the entire gui to the client. I was not able to find something like that for Deluge (client needs to run on OSX)
  16. Ah, so rar is the main thing here. My preference is Transmission almost entirely because of Transmission remote. Are you using it on Unraid? (Trying to determine the amount of work this is going to cause).
  17. Don't know about rar, but for Deluge it might make more sense to use: http://lime-technology.com/forum/index.php?topic=45812.0 Or is there a specific reason you want both Transmission and Deluge?
  18. UPnP is not required when used with a VPN (as no ports have to be opened). Sounds like a nice setup. I am thinking I'll go with full Unifi, otherwise it would annoy the blip out of me that not all features in the controller software are available to me. I hate greyed-out menu items (a bit OCD, I know).
  19. Hey. First of all, I can confirm that no port forwarding is needed. Is it just the connection to the internet that is not working? Can you access the web interface normally? Are you sure the VPN is working? What I usually do to troubleshoot is: a. check the logs; b. log into the container as if it's a normal machine and see if I can figure out the issue from inside. It might also help to know that another IP is used inside the container, which is then translated by docker (but when using a VPN, the IP inside the container IS the local address for the VPN). P.S. How do you like the EdgeRouter? I'm thinking of getting some UBNT stuff too.
  20. I've changed the first post to the correct link. Thanks for letting me know it was wrong. And yes, I'm using AUR, so that is the newer one. See this page for more info: https://aur.archlinux.org/packages/lazylibrarian-git/
  21. Just checked my log and I see no error about watch; might be because I'm not using the watch folder option. I'm using Transmission remote to feed manually and Flexget for automated stuff. You're welcome, glad you find it useful.
  22. I think Squid's probably right about it being a mapping issue. Don't really understand what you did with a variable to solve this, but glad it helps. I have a mapping called /mnt and that works fine.
  23. Hey, I'm pretty sure the normal update process works in docker, but to make it easier I just triggered a new build.
  24. I moved this docker image to autobuilds and created a new build. This may or may not fix the issue, but LazyLibrarian is up to date again for sure.
  25. Weird, does LL work normally otherwise? Is it possible that it cannot connect to the internet?