DZMM

Everything posted by DZMM

  1. 1. There's nothing in your crypt except your mountcheck file. 2. You shouldn't be seeing the mountcheck file in rclone_upload/google_vfs though, as it's already in the cloud. What does your /mnt/user/mount_rclone/google_vfs look like?
  2. Hmm, everything looks correct. Apologies if these questions seem obvious: 1. Are you definitely using the right password and password2? 2. Are the files that are already in gdrive encrypted (gobbledegook when you look in Google)? 3. Are the Google files in a folder called crypt, i.e. the folder name isn't encrypted, so you can see My Drive/crypt when you look at Google?
  3. No - just substitute your existing remote name into the mount script in place of my gdrive_media_vfs remote. Or, renaming the remote in rclone config might be easier - see the sketch below.
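     A rough sketch of the rename route, assuming a reasonably recent rclone where the interactive config menu offers a rename option (exact prompts may differ between versions, and the old remote name here is hypothetical):
       rclone config
       # at the e/n/d/r/c/s/q prompt choose:  r) Rename remote
       # remote> secure            <- your existing remote's name (hypothetical example)
       # name>   gdrive_media_vfs  <- new name, so it matches the mount script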
  4. post your rclone config (remove passwords and IDs) and mount script
  5. Unionfs is useful if you want to add one monitored folder for sonarr & radarr that combines files on your local drive with files that have already been uploaded. E.g. if you had tv_show1 spread across /mnt/user/rclone_upload/google_vfs/tv_shows/tv_show1 and /mnt/user/mount_rclone/google_vfs/tv_shows/tv_show1: sonarr can only monitor one folder at a time, so when files are moved to the cloud it would think they are missing and download them again, meaning you could end up with multiple copies in the cloud. Also, sonarr wouldn't be able to upgrade files. For radarr the problems are similar. Even if you're not using radarr or sonarr and are just using Plex, it's cleaner to add the unionfs folder to Plex rather than both the upload and cloud folders, because Plex won't waste time re-scanning files when it spots they've moved to the cloud - to Plex they won't have moved, as the unionfs folder 'masks' the real location. See the mount sketch below.
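     A minimal sketch of the kind of unionfs mount I mean, using the folders from the example above - the local upload folder is the RW branch and the rclone mount is the RO branch (the option list is trimmed down and just illustrative):
       unionfs -o cow,allow_other \
         /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO \
         /mnt/user/mount_unionfs/google_vfs
     New downloads land in the RW branch, anything already uploaded is read from the RO branch, and sonarr/radarr/Plex only ever see /mnt/user/mount_unionfs/google_vfs.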
  6. @Kaizac how are you getting on with your td-->td transfers? I'm still nervous about doing them e.g. I just moved two movie folders between td1/adults/ and td1/kids/ using putty and the folders moved ok, but the file inside disappeared! I then moved an individual file using putty and that went ok. I think I'm going to stick with consolidating all my files to gdrive once they've uploaded and doing my moves/edits there. The lesson here is make sure everything is organised correctly before uploading. Edit: Looks like it was a false alarm. Not sure how, but the file was also in td2/adults - the file in td1/adults must have been a phantom or something. I just did the same transfer from td2/adults to td2/kids and all was ok - phew!
  7. It won't work for you as you've added a 2nd RO directory. You need:
       find /mnt/user/mount_unionfs/google_vfs/.unionfs -name '*_HIDDEN~' | while read line; do
         oldPath=${line#/mnt/user/mount_unionfs/google_vfs/.unionfs}
         newPath=/mnt/user/mount_rclone/google_vfs${oldPath%_HIDDEN~}
         newPath1=/mnt/user/rclone_upload/google_vfs${oldPath%_HIDDEN~}
         rm "$newPath"
         rm "$newPath1"
         rm "$line"
       done
       find "/mnt/user/mount_unionfs/google_vfs/.unionfs" -mindepth 1 -type d -empty -delete
     Just add another move:
       rclone move /mnt/user/Archiv/Musik gdrive_media_vfs:Musik -vv --drive-chunk-size 128M --checkers 1 --fast-list --transfers 4 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --fast-list --tpslimit 3 --min-age 8y && \
       rclone move /mnt/user/Archiv/Serien gdrive_media_vfs:Serien -vv --drive-chunk-size 128M --checkers 1 --fast-list --transfers 4 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --fast-list --tpslimit 3 --min-age 8y
     I haven't uploaded music and I'm not sure if that's a good idea, as even an extra pause of a second between tracks would be annoying. Let me know what the experience is like as I'm curious. There's something odd about the file permissions - just delete manually via terminal.
  8. Use the commands via ssh to create the mountcheck file and to transfer it - it's foolproof. If you're not seeing your other files then something's gone wrong with your encryption - are you sure the new vfs mount is using the same keys as your old encrypted mount?
  9. You can copy files directly to the mount, but it's better to use rclone move as it retries if there's an error. What unionfs does is create a merged folder for Plex that combines local files + cloud files in one directory, so you don't get issues in Plex like it thinking files are missing, getting confused when files have been upgraded etc. If you are just copying completed files to rclone you should just modify the upload script i.e.:
       rclone move /mnt/source_folder/ gdrive_media_vfs:destination_folder -vv --drive-chunk-size 512M --checkers 3 --fast-list --transfers 2 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --fast-list --bwlimit 9500k --tpslimit 3 --min-age 30m
     Edit: has your playback improved?
  10. @francrouge also, rc keeps running after your first mount attempt, so if you are trying a second mount attempt without rebooting it will always fail unless you remove --rc, i.e. run:
       rclone mount --allow-other --buffer-size 512M --dir-cache-time 72h --drive-chunk-size 512M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &
     I'm trying to find a way to stop rc in the unmount script to stop this problem tripping people up - rough idea below.
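     This is roughly what I have in mind for the unmount script - just a sketch, and killing the leftover rclone process with pkill is an assumption on my part rather than a settled approach:
       fusermount -uz /mnt/user/mount_rclone/google_vfs   # lazy-unmount the vfs mount
       pkill -f "rclone mount" || true                    # stop any leftover rclone mount (and its rc listener)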
  11. Did you create the mountcheck file?
       touch mountcheck
       rclone copy mountcheck gdrive_media_vfs: -vv --no-traverse
  12. It worked for me as well one day and then it stopped - I think maybe the checks on teamdrives aren't as stringent. I'm going to stick with my new distributed layout as it makes it easier for my cleanup script to work and for me to move files around. E.g. here's my new clean-up script that looks at each media type in each teamdrive:
       echo "$(date "+%d.%m.%Y %T") INFO: starting unionfs cleanup."

       # gdrive - movies
       find /mnt/user/mount_unionfs/google_vfs/.unionfs/movies -name '*_HIDDEN~' | while read line; do
         oldPath=${line#/mnt/user/mount_unionfs/google_vfs/.unionfs/movies}
         newPath=/mnt/user/mount_rclone/google_vfs/movies${oldPath%_HIDDEN~}
         rm "$newPath"
         rm "$line"
       done

       # tdrive_rclone1 - tv
       find /mnt/user/mount_unionfs/google_vfs/.unionfs/tv -name '*_HIDDEN~' | while read line; do
         oldPath=${line#/mnt/user/mount_unionfs/google_vfs/.unionfs/tv}
         newPath=/mnt/user/mount_rclone/tdrive_rclone1_vfs/tv${oldPath%_HIDDEN~}
         rm "$newPath"
         rm "$line"
       done

       find "/mnt/user/mount_unionfs/google_vfs/.unionfs" -mindepth 1 -type d -empty -delete
  13. lol see the post I just did - you need to use multiple tdrives as the quota is 750GB/day per team drive. Not a big issue as each takes 5 mins to set up. I'm doing my td-->td transfers as much as possible within gdrive. What I've noticed so far in mc is that if you are overwriting files it takes as long as downloading, but if you move it's pretty much instantaneous - so just make sure the destination directory is empty.
  14. You encountered the same problem as me that I've just fixed. The info I read on Team Drive limits was wrong - it's 750GB/day per team drive (not just per user), on top of the 750GB/day per-user limit. https://support.google.com/a/answer/7338880# What I've done to fix this is create 3 team drives and then spread my media folders across them e.g.:
       gdrive: tv shows
       td1: movies
       td2: movies_uhd
       etc etc
     Edit: still with a unique token/client_id per team drive. The reason I've separated media types across team drives is that I want to make it easier to move files within Google, so I don't have to download to re-upload or end up with duplicate folders. I noticed yesterday that if I moved tv_show1/season1/episode_6 from td1 to gdrive where there's already a tv_show1/season1 folder, it would create a new tv_show1/season1 folder with tv_show1/season1/episode_6 in it rather than adding episode_6 to the existing folder. This was causing havoc with the mount, so by splitting my media folders I will reduce the number of times I have to move files between team drives. The upload side then just points each media folder at its own remote - sketch below.
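     A rough sketch of what I mean, with hypothetical td1/td2 encrypted remotes and the usual flags trimmed right down:
       rclone move /mnt/user/rclone_upload/google_vfs/tv_shows gdrive_media_vfs:tv_shows -vv --min-age 30m --delete-empty-src-dirs
       rclone move /mnt/user/rclone_upload/google_vfs/movies td1_media_vfs:movies -vv --min-age 30m --delete-empty-src-dirs
       rclone move /mnt/user/rclone_upload/google_vfs/movies_uhd td2_media_vfs:movies_uhd -vv --min-age 30m --delete-empty-src-dirs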
  15. have a read of the first couple of posts and check my scripts on github that are fairly up to date (just need to tweak the teamdrive bits). If you're still stuck, there's a few people in this thread who will help. You need to do a vfs mount for streaming.
  16. Not media file related, but just sharing how I back up to gdrive my local /mnt/user/backup share where I keep my VM backups, CA appdata backup, cloudberry backup of important files from other shares etc:
       rclone sync /mnt/user/backup gdrive_media_vfs:backup --backup-dir gdrive_media_vfs:backup_deleted -vv --drive-chunk-size 512M --checkers 3 --fast-list --transfers 3 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --fast-list --bwlimit 10000k --tpslimit 3 --min-age 30m
       rclone delete --min-age 90d gdrive_media_vfs:backup_deleted --bwlimit 10000k
     sync keeps a copy of my files in the cloud. Any files deleted or moved locally that have already been synced are moved to the backup_deleted directory on gdrive (--backup-dir gdrive_media_vfs:backup_deleted), where the second command deletes them after 90 days.
  17. No. A team drive created by a Google Apps user (who has unlimited storage) can be shared with any Google account user(s). So, you could create team drives for friends to give them unlimited storage. Each user has a 750GB/day upload quota, so as long as each upload to the shared teamdrive is coming from a different user (a different rclone token for the remote, and client_ID to try and avoid API bans) then you can utilise the extra quotas. I've added 3 accounts to my plex teamdrive and it's all working fine so far for 4 uploads (3 shared users and my Google Apps account). I imagine google has a FUP to clamp down on real abuse e.g. creating 100 teamdrives.
  18. rclone move automatically retries failed transfers so you'll be ok - that's why it's best to upload via move rather than writing directly to the mount, because if a write to the mount fails, the failure is permanent. See the retry flags sketch below.
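     For reference, the retry behaviour can also be set explicitly - a minimal sketch, and the values shown are just rclone's normal defaults rather than anything special:
       # --retries re-runs the whole transfer pass; --low-level-retries retries individual operations
       rclone move /mnt/user/rclone_upload/google_vfs gdrive_media_vfs: -vv --retries 3 --low-level-retries 10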
  19. My Mariadb mysql log folder keeps growing and has now hit 10GB - I only have 17GB or so of files in nextcloud! Is there something I'm supposed to do to keep the log size down? Thanks
  20. Just create another encrypted remote for Bazarr with a different client_ID pointing to the same gdrive/tdrive e.g.:
       [gdrive_bazarr]
       type = drive
       client_id = Diff ID
       client_secret = Diff secret
       scope = drive
       root_folder_id =
       service_account_file =
       token = {should be able to use same token, or create a new one if pointed to a teamdrive}

       [gdrive_bazarr_vfs]
       type = crypt
       remote = gdrive_bazarr:crypt
       filename_encryption = standard
       directory_name_encryption = true
       password = same password
       password2 = same password
     One problem I'm encountering is that the multiple upload scripts are using a fair bit of memory, so I'm investigating how to reduce the memory usage by removing things like --fast-list from the upload script. Not a biggie as I can fix it.
  21. This is working very well. I just moved files within Google Drive between My Drive and the Team Drive and, once the dir cache updated, they appeared in the tdrive mount and played perfectly 🙂 I'm going to do a few more trial moves and, if they go well, I'll move all my files to the teamdrive and only upload to it going forwards. I wonder if google realise one user can create multiple teamdrives to share with friends to give them unlimited storage?
  22. I have just one rclone_upload folder and then I created 3 upload scripts - one uploads /mnt/cache/rclone_upload, another /mnt/user0/rclone_upload, and the third is a booster and is currently uploading from /mnt/disk4/rclone_upload. Yes, one tdrive with multiple accounts - one for each upload instance. Then there's only one tdrive to mount. Check my GitHub scripts for how I added in tdrive support. The mount I did for the 3rd user was just temporary, to double-check it all worked as expected. To add each user just create new tdrive remotes with the same tdrive ID, the same rclone passwords and the same remote location, but a different user token for each (and client IDs to spread the API hits) - rough example below.
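     Something like this as a rough sketch for a second user - the remote names and placeholder values are illustrative, not copied from my config:
       [tdrive_user2]
       type = drive
       client_id = user2's own client ID
       client_secret = user2's own secret
       scope = drive
       team_drive = same team drive ID as the main remote
       token = {user2's own token}

       [tdrive_user2_vfs]
       type = crypt
       remote = tdrive_user2:crypt
       filename_encryption = standard
       directory_name_encryption = true
       password = same password as the main tdrive remote
       password2 = same password2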
  23. I set this up last week when I took one of my drives out of my cache pool, so I only had a second drive. I use these two/three scripts in tandem to move files to the array when my cache drive gets to a certain limit. It's a bit clunky, but it works. Add the diskmv script - I added it via custom scripts - then I created custom diskmv commands to move files:
       #!/bin/bash
       echo "$(date "+%d.%m.%Y %T") INFO: moving downloads_import."
       /boot/config/plugins/user.scripts/scripts/diskmv/script -f -v "/mnt/user/downloads_import" cache disk5
       /boot/config/plugins/user.scripts/scripts/diskmv/script -f -v "/mnt/user/downloads_import" cache disk6
       exit
     I've added two disks just in case disk5 is full. Then use this script to set your min and max disk-used thresholds - mine is set to run hourly and it triggers the mover script above:
       #!/usr/bin/php
       <?PHP
       $min = 80;
       $max = 100;
       $diskTotal = disk_total_space("/mnt/cache");
       $diskFree = disk_free_space("/mnt/cache");
       $percentUsed = ($diskTotal - $diskFree) / $diskTotal * 100;
       if (($min <= $percentUsed) and ($percentUsed <= $max)) {
         exec("/boot/config/plugins/user.scripts/scripts/diskmv_all/script");
       }
       ?>