Kaizac


  1. Then it's a Syncthing issue as far as I can tell. You should test with another docker.
  2. Have you tried another docker like Radarr to see if you can write files there? Syncthing doesn't support browsing through the UI, so you have to enter exact paths. In your case you'd start with /google.
  3. So how is your rclone set up? Can you post your rclone config? Just remove the tokens and such.
  4. Which docker are you using? And what happens when you restart the docker? Is it working then?
  5. How do you give the docker access to the share? Can you share a screenshot of that?
  6. Thanks, you're right. My backup share just contains a lot of files. How do you handle this yourself? All the pictures and small files you want to keep safe don't amount to much in size, but they do in number of files. Some way to auto-zip them would be best, I think. Just like CA Backup does, but for your own shares.
  7. How are you transferring the books to your devices when Calibre is running on its own IP? I can use the content server, but that will download the epub file and not give me the option to create a library on my e-reader.
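As a rough sketch of that auto-zip idea (the `archive_share` helper and the example paths are made up for illustration, not how CA Backup actually does it), something like this would pack each top-level folder of a share into a single archive before backup:

```shell
#!/bin/bash
# Hypothetical sketch: pack each top-level folder of a share into one
# tar.gz so a backup uploads a handful of objects instead of thousands.
archive_share() {
  local src="$1" dest="$2"
  mkdir -p "$dest"
  for dir in "$src"/*/; do
    local name
    name=$(basename "$dir")
    # One archive per top-level folder, created relative to the share root
    tar -czf "$dest/$name.tar.gz" -C "$src" "$name"
  done
}

# Example: archive_share /mnt/user/pictures /mnt/user/backup/archives
```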
  8. @DZMM how would your cleanup script work for a mount you've only connected to mount_rclone (a backup mount, for example, which isn't used in mount_unionfs)? I can't alter your script as I'm not 100% sure whether some lines are necessary.
  9. No. The Team Share is shared storage which multiple users have access to. So you get 750GB per day per user connected to that Team Share. It's not just extra capacity added to a specific account.
  10. Yeah, I do the same, but thought backing up my whole cache was a nice addition. I was wrong ;). On another note, I haven't had any memory problems for a few months now. So maybe rclone changed something, but I'm never running out of memory. Hope it's better for others as well.
  11. I'm currently on a backup of 2TB with almost 400k files (370k)... I thought backing up my cache drive would be a good idea, forgetting that Plex appdata is a huge number of small files. I'm currently also getting the limit-exceeded error, so I'm pretty sure rclone doesn't count folders as objects, but Gdrive does.
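To estimate how many objects a path will create before uploading it, a quick local count of files versus folders helps (the `count_objects` helper below is just illustrative; on the remote side, `rclone size remote:` reports a similar object count):

```shell
#!/bin/bash
# Illustrative helper (not from the thread): count files and directories
# under a path to estimate how many objects a backup would create.
count_objects() {
  local path files dirs
  path="$1"
  files=$(find "$path" -type f | wc -l)
  dirs=$(find "$path" -type d | wc -l)
  # $((...)) strips any whitespace padding some wc implementations emit
  echo "$((files)) files, $((dirs)) dirs"
}

# Example: count_objects /mnt/user/backup
```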
  12. How do you make sure the unmount script is run before the mount script?
  13. Just create 2 scripts: one for your rclone mount's first start and one for your continuous rclone mount. At the beginning of the first-start script, delete the check files that have to be removed for the mount to run properly.
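A minimal sketch of that two-script layout (the check-file path, function names, and the commented-out mount command are placeholders, not DZMM's actual scripts):

```shell
#!/bin/bash
# Hypothetical sketch of the two-script idea; paths are placeholders.
CHECK="${CHECK:-/mnt/user/appdata/other/rclone/mount_running}"

mount_continuous() {
  # Continuous mount script: the check file stops a second copy from running.
  if [ -f "$CHECK" ]; then
    echo "mount already running"
    return 1
  fi
  touch "$CHECK"
  # rclone mount gdrive: /mnt/user/mount_rclone/gdrive ...  (placeholder)
}

mount_first_start() {
  # First-start script: clear any stale check file, then mount as usual.
  rm -f "$CHECK"
  mount_continuous
}
```

The first-start script runs once at array start; the continuous script can then run on a schedule without stacking duplicate mounts.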
  14. You could use mount_rclone as your RW folder and it will download directly to your Gdrive. However, this will be limited by your upload speed, and writing directly to the mount will probably also cause problems. Rclone copy/move/etc. is intended to solve this by doing file checks.
  15. Not sure if I understand you properly. You point Sonarr to your unionfs folder, so it doesn't matter where you store your files.