Everything posted by Kaizac

  1. Yeah, the SAs are created, and so is the new project. The SAs are added to a group, which is added as a member to the team drive. When going into Google's dev console I don't see an OAuth module though; not sure if it's needed. My rclone config looks like this:

     [tdrive]
     type = drive
     scope = drive
     service_account_file = /mnt/user/appdata/other/rclone/service_accounts_tdrive/sa_tdrive.json
     team_drive = XX
     server_side_across_configs = true

     [tdrive_crypt]
     type = crypt
     remote = tdrive:Archief
     filename_encryption = standard
     directory_name_encryption = true
     password = XX
     password2 = XX

     It really starts to annoy me that it's so complicated.
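     A quick sanity check that an SA can actually see the team drive is to list it with that key passed explicitly (a sketch; the remote and path match the config above, adjust to your own):

     # List the top level of the team drive using one specific SA key
     rclone lsd tdrive: --drive-service-account-file=/mnt/user/appdata/other/rclone/service_accounts_tdrive/sa_tdrive.json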
  2. I'm getting the following error when mounting my remotes:

     INFO : Google drive root 'Archief': Failed to get StartPageToken: Get "https://www.googleapis.com/drive/v3/changes/startPageToken?alt=json&prettyPrint=false&supportsAllDrives=true": oauth2: cannot fetch token: 401 Unauthorized
     Response: {
       "error": "deleted_client",
       "error_description": "The OAuth client was deleted."
     }

     Do you also get that? And is there an easy way to use your mount script for multiple remotes?
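     For reference, one way a single mount script could cover several remotes (a minimal sketch, not DZMM's actual script; remote names and options are assumptions):

     #!/bin/bash
     # Mount each crypt remote at its own mountpoint, backgrounded.
     for remote in tdrive_crypt backup_crypt cloud_crypt; do
       mkdir -p /mnt/user/mount_rclone/$remote
       rclone mount $remote: /mnt/user/mount_rclone/$remote \
         --allow-other --dir-cache-time 72h --daemon
     done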
  3. Did you configure the path to the json file through rclone config, or did you just add the line to the rclone config after setting it up? When I try it the rclone config way over SSH it says:

     Failed to configure team drive: config team drive failed to create oauth client: error opening service account credentials file: open sa_tdrive.json: no such file or directory
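     The "no such file or directory" suggests rclone got a relative path; pointing service_account_file at an absolute path in rclone.conf avoids it (the path here is an assumed example):

     [tdrive]
     type = drive
     scope = drive
     service_account_file = /mnt/user/appdata/other/rclone/service_accounts_tdrive/sa_tdrive.json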
  4. Ok, so you set up the remote with one of the SAs you created, number 1 of 100 for example. And then for uploading you rotate between the 100 SAs in the service accounts folder? Am I understanding it correctly then? And if I want another remote to separate my Bazarr traffic, do I create a new project or just use a different SA? I'm not sure at what level the API ban is registered.
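     A hedged sketch of what that rotation could look like in an upload script (file naming and paths are assumptions, not the confirmed setup):

     #!/bin/bash
     # Pick the next SA key on every run so the daily 750GB upload
     # quota is spread across all 100 accounts.
     SA_DIR=/mnt/user/appdata/other/rclone/service_accounts_tdrive
     COUNTER_FILE=/mnt/user/appdata/other/rclone/sa_counter
     COUNT=$(( ( $(cat "$COUNTER_FILE" 2>/dev/null || echo 0) % 100 ) + 1 ))
     echo "$COUNT" > "$COUNTER_FILE"
     rclone move /mnt/user/local/Tdrive tdrive_crypt: \
       --drive-service-account-file="$SA_DIR/sa_tdrive_$COUNT.json"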
  5. So how does rclone know to use the service accounts when streaming media, then?
  6. Unfortunately that doesn't answer my question. In your readme you mention this: So it seems that in your example you don't configure your client ID and secret. But then later on you mention you do need it.
  7. I've tried finding the final consensus in this topic, but it's becoming a bit too large for easy searching. I've created 100 service accounts now and added them to my team drives. How should I now set up my rclone remote? I should only need 2, right (1 drive and 1 crypt of that drive)? And should I set it up with its own client ID/secret when using SAs? According to your GitHub it seems like I just create a remote with rclone's own ID and secret, so no defining on my side.
  8. I have no idea how you manage to do all four of those steps. Care to share some parts of those scripts/merger commands?
  9. I'm trying to set up the Service Accounts for uploading. How does this work when you have multiple team drives for different storage: 1 for media, 1 for backups, 1 for cloud storage, etc.? Would you then create multiple upload scripts, each with its own project and SAs? See the sketch below for what I mean.
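     One possible layout (an assumption, not a confirmed answer): keep one project with all its SAs, but give each team drive its own remote and upload run:

     #!/bin/bash
     # Separate upload runs per team drive; each remote points at a
     # different team drive, but the SA keys can come from one project
     # as long as the SAs are members of every team drive.
     rclone move /mnt/user/local/Media tdrive_media_crypt: \
       --drive-service-account-file=/mnt/user/appdata/other/rclone/service_accounts/sa_1.json
     rclone move /mnt/user/local/Backups tdrive_backup_crypt: \
       --drive-service-account-file=/mnt/user/appdata/other/rclone/service_accounts/sa_2.json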
  10. Well, to each his own. For mainstream media usenet is vastly superior if set up right. If you have access to private trackers and also need non-mainstream media, then torrents can bring more to the table. Either way, I think with your setup/wishes you can use rclone for your backups and replace Crashplan with it. But you don't need all this elaborate configuration for it. Just create a Gdrive/Team Drive and DO NOT mount it. Just upload to it, and let removed/older data be written to a separate folder within Gdrive. If you get infected, the malware can't directly access the files on the remote. And in case encrypted/infected files get uploaded, you will have your old data to roll back to. Just remember that when you want to access your backups you have to mount the rclone remote/Gdrive first to see the files. Or, if you don't use encryption, you can just see them through the browser.
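      A minimal sketch of that backup idea (remote name and paths are placeholders, assuming a remote called gdrive_backup that is never mounted):

      #!/bin/bash
      # Sync to the unmounted remote; files deleted or replaced locally
      # are moved into a dated folder on Gdrive instead of being erased,
      # so older versions survive a ransomware-encrypted upload.
      rclone sync /mnt/user/backups gdrive_backup:current \
        --backup-dir gdrive_backup:old/$(date +%Y-%m-%d)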
  11. Why are you on torrents? Move to usenet and get rid of that seeding bullshit. Also, you can just direct play 4K from your Gdrive; I do with files up to 80 GB and it's fine. You might consider a seedbox though: you can use torrents and move the files to Gdrive at gigabit speed.
  12. Two PSAs:

      1. If you want to use more local folders in your union/merge folder which are RO, you can use the following merge command and Sonarr will work, with no more access denied errors. Use either mount_unionfs or mount_mergerfs depending on what you use:

      mergerfs /mnt/disks/local/Tdrive=RW:/mnt/user/LocalMedia/Tdrive=NC:/mnt/user/mount_rclone/Tdrive=NC /mnt/user/mount_unionfs/Tdrive -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true

      2. If you have issues with the mount script not working at array start because the docker daemon is still starting, just put your mount script on a custom schedule and run it every minute (* * * * *). It will then run after array start and will work.

      @nuhll both these fixes should be interesting for you.
  13. Asking again because I'm very curious: can you share your merger command?
  14. You download to the merge folder, but it will write to your SSD. Rclone then uploads the files from your SSD.
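      In other words, something like this runs on a schedule (a sketch with assumed paths and remote name):

      # Move finished files from the SSD-backed local branch up to the
      # cloud; the merged folder keeps showing them at the same path.
      rclone move /mnt/user/local/Tdrive tdrive_crypt: --min-age 15m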
  15. You mount your dockers to /mnt/user/mount_mergerfs/google_vfs and then the proper subfolder (tv/movies/downloads/etc.). If you just point them at your cache, they will only see the locally stored files.
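      For example (a sketch; the container and subfolders are assumptions):

      # Map the merged view, not the cache, into the container so it
      # sees both local and cloud files under one path.
      docker run -d --name='radarr' \
        -v '/mnt/user/mount_mergerfs/google_vfs/movies':'/movies':'rw' \
        -v '/mnt/user/mount_mergerfs/google_vfs/downloads':'/downloads':'rw' \
        'linuxserver/radarr'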
  16. With unionfs or mergerfs? If mergerfs, would you mind sharing your merger command?
  17. Yep, I removed one of my local folders which was in my mergerfs and now Sonarr works. Too bad having multiple local folders doesn't work.
  18. I keep getting this error with both automatic and manual import, but only with upgrades to existing files. Sab and Sonarr are pointing to the same directory and new series work fine; it's only when an existing file needs to be upgraded. Below are the docker runs of Sab and Sonarr. Hopefully someone has an idea?

      root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d \
        --name='sonarr' --net='br0.90' \
        --log-opt max-size='10m' --log-opt max-file='1' \
        -e TZ="Europe/Berlin" -e HOST_OS="Unraid" \
        -e 'TCP_PORT_8989'='8989' -e 'PUID'='99' -e 'PGID'='100' \
        -v '/dev/rtc':'/dev/rtc':'ro' \
        -v '/mnt/user/mount_unionfs/Tdrive/Series/':'/tv':'rw' \
        -v '/mnt/user/mount_unionfs/Tdrive/Downloads/':'/downloads':'rw' \
        -v '/mnt/user/mount_unionfs/':'/unionfs':'rw,slave' \
        -v '/mnt/cache/appdata/sonarr':'/config':'rw' \
        'linuxserver/sonarr:preview'

      root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d \
        --name='sabnzbd' --net='br0.90' --ip='192.168.90.10' \
        --log-opt max-size='10m' --log-opt max-file='1' \
        -e TZ="Europe/Berlin" -e HOST_OS="Unraid" \
        -e 'TCP_PORT_8080'='8080' -e 'TCP_PORT_9090'='9090' \
        -e 'PUID'='99' -e 'PGID'='100' \
        -v '/mnt/user/mount_unionfs/Tdrive/Downloads/':'/downloads':'rw' \
        -v '/mnt/user/mount_unionfs/Tdrive/Downloads/Incompleet/':'/incomplete-downloads':'rw' \
        -v '/mnt/user/mount_unionfs/':'/unionfs':'rw,slave' \
        -v '/mnt/cache/appdata/sabnzbd':'/config':'rw' \
        'linuxserver/sabnzbd'
  19. I'm not using the recycle bin, but I thought you might be. I just don't get why Sonarr can't upgrade files and gets an access denied, when Radarr works fine with the same settings. For downloads I point to unionfs/Tdrive/Downloads and for series to unionfs/Tdrive/Series, both on rw, with mount_unionfs on rw,slave. I doubt I need remote path mapping because Sonarr and Sab are on different IPs; but that isn't needed for Radarr either.
  20. I'm trying to run an SSH command through User Scripts, so I expect creating a .sh file is the right way to do that. But then how do I trigger it from User Scripts? And if I want to make it more complex by giving it this command when running:

      PYTHONIOENCODING=utf8 * * * * * /path/to/script.sh

      How would I go about that? And what exactly should the path be if I, for example, put the .sh file in the User Scripts/scripts folder?
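      For what it's worth, one way this could look (a sketch; the stored-script path is how the User Scripts plugin usually lays things out, but verify on your system): save a wrapper as a new User Scripts script, set the env var inside it, and put the cron part (* * * * *) in the plugin's custom schedule field rather than in the command:

      #!/bin/bash
      # Wrapper run by User Scripts: export the encoding, then call the
      # real script by absolute path (path is an assumed example).
      export PYTHONIOENCODING=utf8
      bash /boot/config/plugins/user.scripts/scripts/myscript/script.sh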
  21. @DZMM did you configure the recycle bin in your Sonarr instance? If not, would you mind sharing your docker settings: which folders you put on rw,slave and which on normal rw? I'm still having import issues, but only when upgrading files; I'm getting an access denied error.
  22. @DZMM do you never have the problem of the docker daemon not running when you run the mount script at startup? Nuhll has the same problem as me. I've put in a sleep of 30 but that's not enough; I'll increase it further to try to get it fixed. But I find it strange that you don't have the same issue. @nuhll unfortunately I have the permission denied error again. Did it come back for you?
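      Instead of a fixed sleep, a wait loop might be more reliable (a sketch, not the actual mount script):

      #!/bin/bash
      # Poll until the docker daemon answers, up to ~5 minutes, before
      # starting the mounts; avoids guessing at a sleep value.
      for i in $(seq 1 60); do
        docker info >/dev/null 2>&1 && break
        sleep 5
      done
      # ...rclone mount commands follow here...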