Everything posted by Roudy

  1. You will point Deluge to those directories. I would recommend using the "Label" plugin in Deluge so it sorts your content into the correct folder as it downloads. That is just a list of folders that the mount script will create when it runs: the directories where your media awaiting upload (LocalFilesShare) will go, plus the base file structure for your mergerfs. You can add or remove folders from that list as needed.
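A minimal sketch of that folder-creation step, assuming placeholder share paths and category names (swap in the folders you actually use in the script's settings):

```shell
#!/bin/bash
# Hypothetical version of the mount script's folder-creation step.
# The share paths and category names below are placeholders.
LocalFilesShare="/tmp/demo_local"          # e.g. /mnt/user/local on unRAID
MergerfsMountShare="/tmp/demo_mergerfs"    # e.g. /mnt/user/mount_mergerfs

for folder in movies tv music; do
    # mkdir -p only creates what is missing, so re-running is harmless
    mkdir -p "$LocalFilesShare/gdrive_media_vfs/$folder"
    mkdir -p "$MergerfsMountShare/gdrive_media_vfs/$folder"
done

ls "$LocalFilesShare/gdrive_media_vfs"
```

Adding or removing names from the loop adds or removes the matching directories the next time the script runs.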
  2. I had a similar issue where Deluge and the Deluge Web UI weren't starting. I was using an older image and ended up deleting the image and the directory and starting from scratch. It worked after that, but I was unable to determine the root cause from the docker log and the logs in the container. I couldn't even manually start the service.
  3. If the files are uploaded, you should see them on the remote PC. They are either there or they're not, so make sure you are mounting and uploading to the same location. You can keep the cache, just be aware that if you update a file it may appear as a duplicate. It won't really improve performance unless you access the same file multiple times. It will download it the initial time and then play from cache after that. Hope that makes sense.
  4. How often is your upload script running? Can you manually run it and see if the file appears on your remote computer? You will just have to use the first part of the script (Create Rclone Mount); you won't need anything after that. I'd also caution against using a cache with the rclone mount if you are just reading. If you update a file on your system and it isn't the same file extension, it won't overwrite the existing media if it's in the cache, so the media server will see duplicate files for the same media. It will eventually work itself out due to cache size/time limits, but it's something to be aware of.
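For a manual test, the upload step usually boils down to a single rclone move. A sketch with assumed paths, remote name, and flags (the command is echoed rather than executed so it's safe to paste anywhere):

```shell
#!/bin/bash
# Hypothetical manual upload run: move anything older than 15 minutes from
# the local share to the remote. Paths, remote name, and flags are
# assumptions -- match them to your own upload script.
UPLOAD_CMD="rclone move /mnt/user/local/gdrive_media_vfs gdrive_media_vfs: --min-age 15m --transfers 4 --delete-empty-src-dirs -v"

# Echoed here instead of run, so this sketch works without rclone installed:
echo "$UPLOAD_CMD"
```

If the file shows up on the remote after a run like this, the mount on the other PC only needs a directory refresh to see it.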
  5. I'm assuming you are using Djoss's image. In the settings it states "NOTE: This applies only when Network Type is set to Bridge. For other network types, port 5800 should be used instead." Give 5800 a try.
  6. Are the files local on unRAID or on a remote system? Which Sonarr docker are you using? Can you verify the "Root Folders" path is correct under "Settings > Media Management"? Also, you can try running "Docker Safe New Perms" under "Tools" to see if that helps as well.
  7. I'm assuming you are using Plex. On the remote computer, it won't detect the file changes. You will have to manually scan the files or set it to scan periodically. There is also an "Autoscan" docker from Hotio in the APP store that may trigger the scanning as well. I haven't set it up on my remote box because the periodic scanning works fine for my case.
  8. My script is for a Windows computer. It's really just an rclone mount without a cache. For another unRAID box, you would just need to modify the mount script to use the rclone mount section without the cache. I make my mount read-only on the remote box to prevent files from being altered.
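A sketch of what that trimmed-down, cache-less mount might look like on the second box. The remote name and mount point are assumptions, and the command is echoed rather than run:

```shell
#!/bin/bash
# Hypothetical read-only mount for a remote/second box: no VFS cache, and
# --read-only so nothing on this box can alter the cloud files.
MOUNT_CMD="rclone mount gdrive_media_vfs: /mnt/user/mount_rclone/gdrive_media_vfs --read-only --allow-other --dir-cache-time 720h --poll-interval 15s"

# Echoed instead of executed so the sketch is safe to run anywhere:
echo "$MOUNT_CMD"
```

The same encryption keys go in this box's rclone config; --read-only is what keeps the second box from ever writing back to the remote.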
  9. Is there an error in the Sonarr logs at all? Could you post the error or what it says during the import?
  10. I run a similar setup. Server number 2 wouldn't need the mergerfs since it is not downloading/uploading any content to cloud storage. You would just have to mount the rclone instance as a "Read" and use the same encryption keys. I personally created separate oauth accounts to avoid API limits. Hope that helps.
  11. That path is for files that you want to upload to rclone. You still point everything to the mergerfs and it handles the rest. Files awaiting upload will sit there until the upload script runs.
  12. If you do an "ls -l" in the directory of where the files downloaded, what are the permissions/owner of the files?
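As a concrete example of what to look for, and of the kind of blanket fix "Docker Safe New Perms" applies (the directory here is a throwaway stand-in, and the chown line is commented out because it needs root):

```shell
#!/bin/bash
# Demo: inspect ownership/permissions, then open them up roughly the way
# unRAID's "Docker Safe New Perms" does (dirs 777, files 666, nobody:users).
DIR="/tmp/demo_downloads"
mkdir -p "$DIR" && touch "$DIR/episode.mkv"

ls -l "$DIR"    # first column is the mode, third/fourth are owner/group

find "$DIR" -type d -exec chmod 777 {} +
find "$DIR" -type f -exec chmod 666 {} +
# find "$DIR" -exec chown nobody:users {} +   # needs root on unRAID
```

If the owner is root or the mode is missing group/other read bits, that's usually why Sonarr/Plex can't pick the files up.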
  13. You want to map them to the /mnt/user/mount_mergerfs. That is the combination of your local files and rclone cache/mounted folder.
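In docker terms that mapping might look like the sketch below. The container name and internal path are assumptions, and the command is echoed rather than run:

```shell
#!/bin/bash
# Hypothetical volume mapping: the host side is the mergerfs view, so the
# container sees local files and cloud files as one tree.
DOCKER_CMD="docker run -d --name=sonarr -v /mnt/user/mount_mergerfs/gdrive_media_vfs:/data linuxserver/sonarr"

# Echoed instead of executed so the sketch doesn't require docker:
echo "$DOCKER_CMD"
```

The point is that no container should be mapped to mount_rclone or the local share directly; everything goes through the merged path.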
  14. Is there not a folder at /mnt/user/mount_mergerfs?
  15. Is there anything in the JDownloader log about not starting? Can you show the docker settings?
  16. Don't you need privileged mode for Wireguard to function correctly? I've never been able to get it to function properly without it. Just want to verify for everyone.
  17. Look at Q2 in the documentation below. Let us know if that doesn't work for you.
  18. Deluge doesn't just go find files to download. It is either sent to it or placed in a watch folder that it scans. I could be missing something from your vague initial request, but either way, glad you got everything working now.
  19. Can you post logs from your docker container along with your docker settings? I didn't get much from what you posted.
  20. @Starsixer I have heard this as well. I tried it before, but it never improved my speeds, always made it worse. Newshosting posted an article about it a few years back. I posted it below to make it easy.
  21. Sorry for the late response, I went out of town. I switched over to Wireguard and got some speed improvements for sure. Maybe give it another shot?
  22. Deluge doesn't really keep track of files; it just downloads what is sent to it. You may want to look at whatever you have sending downloads to Deluge. It sounds like it loses track of the file once it completes, thinks the download failed, and reinitiates it.
  23. Could be. You can take a look at the link below and see which location closest to you has the most servers and supports the most bandwidth. I would just not use a VPN if speed is your primary focus. Good luck!
  24. That is an issue with PIA. Are you using Wireguard? You may just want to try redownloading from the APP tab unless there is something worth saving in there and you want to try and work through it. You can use the search bar at the top of the page and just select "This Topic" under the "SEARCH IN" section.
  25. I use PIA. The closest server is 7ms.