Everything posted by Kaizac

  1. Upload and download API hits are separate, so your mount can still upload but not download. You can get banned for several reasons; with the current state of rclone, my experience is that a single file being opened too many times is often the issue, and Plex can be the cause of this. You say direct mounting in Windows works. What exactly works? Just opening the file through Windows Explorer? I'm quite sure that if that works, Plex will also work now and your ban has just been lifted. This often happens around midnight CET.
  2. I don't think you can. Maybe @DZMM knows an rclone setting to limit calls; he's better with the commands.
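     That said, rclone does have a generic --tpslimit flag that caps transactions per second to the backend, which might be what you're after. A minimal sketch, assuming a remote named gdrive: and the usual mount path:

     ```
     rclone mount gdrive: /mnt/user/mount_rclone/google_vfs \
       --allow-other \
       --tpslimit 10 --tpslimit-burst 10 &
     ```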
  3. Yeah, then making new APIs won't work because you can't connect more email accounts to the Gdrive. I'm using a Tdrive. The only thing you can do is migrate to a Tdrive first, or shut down your dockers, stop the ones which might be giving you a lot of API hits, and start a process of elimination.
  4. You never did before? Didn't you fill in a client ID and secret while creating the rclone mount?
  5. You can't, as far as I know. But if you play a file in Windows it will give an error after a while; then you know you're banned.
  6. Your API is probably temp-banned; I've had it happen so many times lately. What you can do is give each of your main dockers its own mount and API. I've done this for Plex, Bazarr, Radarr and the rest, so I have 4 unionfs mounts pointing to the same folders but through different APIs. What you need to do is just make a new API and new rclone mounts. You can use the same local folders in your unionfs, but the different mount, and then in the docker settings point each docker to its separate union folder (see the sketch below). If you don't understand what I mean, let me know!
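     A rough sketch with made-up remote names (gdrive_plex:, gdrive_radarr:), each created with its own client ID; the paths follow the usual mount_rclone/mount_unionfs layout and are examples too:

     ```
     # One rclone mount per API/remote:
     rclone mount gdrive_plex:   /mnt/user/mount_rclone/plex   --allow-other &
     rclone mount gdrive_radarr: /mnt/user/mount_rclone/radarr --allow-other &

     # Same local RW folder in every union, different RO remote mount:
     unionfs -o cow,allow_other \
       /mnt/user/rclone_upload=RW:/mnt/user/mount_rclone/plex=RO \
       /mnt/user/mount_unionfs/plex
     unionfs -o cow,allow_other \
       /mnt/user/rclone_upload=RW:/mnt/user/mount_rclone/radarr=RO \
       /mnt/user/mount_unionfs/radarr
     ```

     Then point the Plex docker at /mnt/user/mount_unionfs/plex, Radarr at /mnt/user/mount_unionfs/radarr, and so on.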
  7. Anybody else have Radarr continuously crashing? I've downgraded to 5.14 and upgraded to v3. Even though v3 was a lot faster, it wouldn't mass-search movies. So I've cleared the whole appdata for Radarr and am trying to rebuild my library, but it's still crashing after a while.
  8. @DZMM, the last couple of days I've been getting API bans. I'm currently filling a backlog, so I'm wondering what the cause could be. I suspect it might be both Emby and Plex running on the same API. What do you think? For now I'll just create a new API and mount for Plex only.
  9. Then it's a Syncthing issue as far as I can tell. You should test with another docker.
  10. Have you tried another docker like Radarr to see if you can write files there? Syncthing doesn't let you browse through the UI, so you have to enter exact paths. In your case you'd start with /google (see the example below).
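      As an illustration, with a hypothetical volume mapping like the one below, the path you type in the Syncthing UI is the container-side path:

      ```
      # Docker volume mapping (host path is just an example):
      #   container /google  ->  host /mnt/user/mount_unionfs/google_vfs
      # In the Syncthing UI you then enter the full container path, e.g.:
      #   /google/backups/photos
      ```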
  11. So how is your rclone set up? Can you post your rclone config? Just remove the tokens and such.
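      For reference, a redacted Gdrive remote looks something like this (every value here is a placeholder):

      ```
      [gdrive]
      type = drive
      client_id = REDACTED.apps.googleusercontent.com
      client_secret = REDACTED
      scope = drive
      token = {"access_token":"REDACTED","token_type":"Bearer","expiry":"REDACTED"}
      ```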
  12. What docker are you using? And what happens when you restart the docker? Is it working then?
  13. How do you give the docker access to the share? Can you share a screenshot of that?
  14. Thanks, you're right. My backup share just contains a lot of files. How do you handle this yourself? All the pictures and small files you want to keep safe don't amount to a lot of size, but they do in number of files. Some way to auto-zip them would be best I think, just like CA Backup does, but then for your own shares (something like the sketch below).
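      Something like this is what I have in mind; it's only a sketch, and the share, paths and remote name are examples:

      ```
      #!/bin/bash
      # Pack a share full of small files into one tar.gz, then upload the
      # single archive instead of thousands of objects.
      SRC="/mnt/user/backup"
      DEST="/mnt/user/rclone_upload/archives"
      STAMP=$(date +%Y%m%d)

      mkdir -p "$DEST"
      tar -czf "$DEST/backup_$STAMP.tar.gz" -C "$SRC" .
      rclone move "$DEST" gdrive:archives
      ```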
  15. How are you transferring the books to your devices when Calibre is running on its own IP? I can use the content server, but that will download the epub file and not give me the option to create a library within my ereader.
  16. @DZMM, how would your cleanup script work for a mount that's only connected to mount_rclone (a backup mount, for example, which isn't used in mount_unionfs)? I can't alter your script myself as I'm not 100% sure which lines are necessary.
  17. No. A Team Drive is shared storage which multiple users have access to, so you get 750GB per day per user connected to that Team Drive; with three users connected, for example, that's 3 x 750GB = 2.25TB of upload per day. It's not just extra size added to a specific account.
  18. Yeah, I do the same, but I thought backing up my whole cache was a nice addition. I was wrong ;). On another note, I haven't had any memory problems for a few months now. Maybe rclone changed something, but I'm never running out of memory. Hope it's better for others as well.
  19. I'm currently on a backup of 2TB with almost 400k files (370k). I thought backing up my cache drive would be a good idea, forgetting that Plex appdata is a huge pile of small files. I'm currently also getting the limit-exceeded error, so I'm pretty sure rclone doesn't count folders as objects, but Gdrive does.
  20. How do you make sure the unmount script is run before the mount script?
  21. Just create 2 scripts: one for your rclone mount first start and one for your rclone continuous mount. At the beginning of your first-start script, delete the check files which need to be removed for the script to run properly (see the sketch below).
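      A minimal sketch of the first-start script, assuming the check files live where DZMM's scripts put them; adjust the file names and the mount-script path to match your own setup:

      ```
      #!/bin/bash
      # Clear stale check files left behind by an unclean shutdown,
      # otherwise the mount script thinks it's already running.
      rm -f /mnt/user/appdata/other/rclone/rclone_mount_running
      rm -f /mnt/user/appdata/other/rclone/rclone_upload_running

      # Then kick off the normal mount script (user.scripts path is an example):
      bash /boot/config/plugins/user.scripts/scripts/rclone_mount/script
      ```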
  22. You could use mount_rclone as your RW folder and it would download directly to your Gdrive. However, that will be slowed by your upload speed, and writing directly to the mount will probably also cause problems. rclone copy/move/etc. is intended to solve this issue by doing file checks (see the example below).
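      So instead of writing into mount_rclone, upload from the local folder with something like this (paths, remote name and bandwidth limit are examples); rclone verifies each file after the transfer:

      ```
      rclone move /mnt/user/rclone_upload/google_vfs gdrive: \
        --delete-empty-src-dirs \
        --bwlimit 9M
      ```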
  23. Not sure if I understand you properly. You point Sonarr to your unionfs folder, so it doesn't matter where you store your file.
  24. Thanks! Kerio is indeed paid and quite expensive as far as I can tell. Using local clients on my desktop and then backing those up is possible, but then I'm wasting local storage as well, which is just a waste of expensive space when I have enough of it on my Unraid box. So far, running a mail client like Thunderbird in docker seems most likely.