DZMM

Members
  • Content Count: 1803
  • Joined
  • Last visited
  • Days Won: 6

DZMM last won the day on December 15

DZMM had the most liked content!

Community Reputation: 102 Very Good

About DZMM

  • Rank: Advanced Member
  • Birthday: December 30

Converted

  • Gender: Male
  • Location: London

  1. Have a read of the first couple of posts and check my scripts on GitHub, which are fairly up to date (I just need to tweak the teamdrive bits). If you're still stuck, there are a few people in this thread who will help. You need to do a VFS mount for streaming.
  2. Not media-file related, but just sharing how I back up my local /mnt/user/backup share to gdrive (it's where I keep my VM backups, CA appdata backup, CloudBerry backup of important files from other shares, etc.):

```shell
rclone sync /mnt/user/backup gdrive_media_vfs:backup \
  --backup-dir gdrive_media_vfs:backup_deleted \
  -vv --drive-chunk-size 512M --checkers 3 --transfers 3 \
  --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN \
  --exclude .recycle** --exclude *.backup~* --exclude *.partial~* \
  --delete-empty-src-dirs --fast-list --bwlimit 10000k --tpslimit 3 --min-age 30m

rclone delete --min-age 90d gdrive_media_vfs:backup_deleted --bwlimit 10000k
```

sync keeps a copy of my files in the cloud. Any files deleted or moved locally that have already been synced are moved to the backup_deleted directory on gdrive (--backup-dir gdrive_media_vfs:backup_deleted), where the second command deletes them once they are 90 days old.
  3. No. A team drive created by a Google Apps user (who has unlimited storage) can be shared with any Google account user(s). So you could create team drives for friends to give them unlimited storage. Each user has a 750GB/day upload quota, so as long as each upload to the shared teamdrive comes from a different user (a different rclone token for the remote, and a different client_id to try to avoid API bans), you can utilise the extra quotas. I've added 3 accounts to my Plex teamdrive and it's all working fine so far for 4 uploads (3 shared users and my Google Apps account). I imagine Google has a fair-use policy to clamp down on real abuse, e.g. creating 100 teamdrives.
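To illustrate the multi-account idea, here's a minimal sketch of running one upload pass per user-specific remote, so each pass counts against a different user's 750GB/day quota. The remote names (tdrive_user1..3) and the source path are placeholders, not from the post; it assumes each remote is configured with a different user's token and client_id. It only prints the commands by default; set DRYRUN=0 to actually run them:

```shell
#!/bin/bash
# Hypothetical sketch: one rclone move per user-specific remote, so each
# upload counts against a different user's daily quota.
# Remote names and the source path are placeholder assumptions.
DRYRUN="${DRYRUN:-1}"            # default: print commands only
SRC="/mnt/user/rclone_upload"
count=0
for remote in tdrive_user1 tdrive_user2 tdrive_user3; do
  cmd=(rclone move "$SRC" "${remote}:crypt" --bwlimit 10000k --tpslimit 3 --min-age 30m)
  if [ "$DRYRUN" = "1" ]; then
    echo "would run: ${cmd[*]}"  # dry run: just show the command
  else
    "${cmd[@]}"                  # real run: requires rclone and the remotes to exist
  fi
  count=$((count + 1))
done
```

Each pass could also be split into its own user-script so the uploads run in parallel, which is closer to the multiple-upload-scripts setup described elsewhere in this thread.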
  4. rclone move automatically retries failed transfers, so you'll be OK. This is why it's best to upload via move rather than writing directly to the mount: if a write to the mount fails, the failure is permanent.
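For reference, the retry behaviour can also be tuned explicitly; --retries and --low-level-retries are standard rclone flags (the source path and remote name below are placeholders):

```shell
# --retries: retries of the whole operation (rclone default: 3)
# --low-level-retries: retries of individual low-level failures (rclone default: 10)
rclone move /mnt/user/rclone_upload gdrive_media_vfs:media \
  --retries 3 --low-level-retries 10 -v
```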
  5. DZMM

    [Support] Linuxserver.io - Nextcloud

    My MariaDB mysql log folder keeps growing and has now hit 10GB, yet I only have 17GB or so of files in Nextcloud! Is there something I'm supposed to do to keep the log size down? Thanks
  6. Just create another encrypted remote for Bazarr with a different client_id pointing at the same gdrive/tdrive, e.g.:

```ini
[gdrive_bazarr]
type = drive
client_id = different ID
client_secret = different secret
scope = drive
root_folder_id =
service_account_file =
token = {should be able to use the same token, or create a new one if pointed at the teamdrive}

[gdrive_bazarr_vfs]
type = crypt
remote = gdrive_bazarr:crypt
filename_encryption = standard
directory_name_encryption = true
password = same password
password2 = same password
```

One problem I'm encountering is that the multiple upload scripts are using a fair bit of memory, so I'm investigating how to reduce the memory usage by removing things like --fast-list from the upload script. Not a biggie, as I can fix it.
  7. This is working very well. I just moved files within Google Drive between My Drive and the Team Drive, and once the dir cache updated they appeared in the tdrive mount and played perfectly 🙂 I'm going to do a few more trial moves; if they go well, I'll move all my files to the teamdrive and only upload to it going forwards. I wonder if Google realises one user can create multiple teamdrives to share with friends to give them unlimited storage?
  8. I have just one rclone_upload folder, and I created 3 upload scripts: one uploads /mnt/cache/rclone_upload, another /mnt/user0/rclone_upload, and the third is a booster, currently uploading from /mnt/disk4/rclone_upload. Yes, one tdrive with multiple accounts, one account per upload instance; then there's only one tdrive to mount. Check my GitHub scripts for how I added tdrive support. The mount I did for the 3rd user was just temporary, to double-check it all worked as expected. To add each user, just create new tdrive remotes with the same tdrive ID, the same rclone passwords and remote location, but a different user token for each (and different client_ids to spread the API hits).
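To illustrate, a sketch of what the per-user remotes might look like in rclone.conf (the remote names, IDs and the team_drive value are placeholders; only the token and client_id/secret differ between users, while the team_drive ID and crypt passwords stay the same):

```ini
[tdrive_user1]
type = drive
client_id = user1-client-id
client_secret = user1-secret
scope = drive
team_drive = SAME_TDRIVE_ID
token = {user1 token}

[tdrive_user2]
type = drive
client_id = user2-client-id
client_secret = user2-secret
scope = drive
team_drive = SAME_TDRIVE_ID
token = {user2 token}

[tdrive_user1_vfs]
type = crypt
remote = tdrive_user1:crypt
filename_encryption = standard
directory_name_encryption = true
password = same password as the other crypt remotes
password2 = same password2 as the other crypt remotes
```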
  9. I set this up last week when I took one of my drives out of my cache pool, so I only had a second drive. I use these two/three scripts in tandem to move files to the array when my cache drive gets to a certain limit. It's a bit clunky, but it works.

Add the diskmv script (I added it via custom scripts), then create custom diskmv commands to move files:

```shell
#!/bin/bash
echo "$(date "+%d.%m.%Y %T") INFO: moving downloads_import."
/boot/config/plugins/user.scripts/scripts/diskmv/script -f -v "/mnt/user/downloads_import" cache disk5
/boot/config/plugins/user.scripts/scripts/diskmv/script -f -v "/mnt/user/downloads_import" cache disk6
exit
```

I've added two disks just in case disk5 is full. Then use this script to set your min and max disk-used thresholds; mine is set to run hourly, and it triggers the mover script above:

```php
#!/usr/bin/php
<?PHP
$min = 80;
$max = 100;
$diskTotal = disk_total_space("/mnt/cache");
$diskFree = disk_free_space("/mnt/cache");
$percentUsed = ($diskTotal - $diskFree) / $diskTotal * 100;
if (($min <= $percentUsed) and ($percentUsed <= $max)) {
    exec("/boot/config/plugins/user.scripts/scripts/diskmv_all/script");
}
?>
```
  10. DZMM

    [Support] Linuxserver.io - MariaDB

    I have 8.6G of logs. Can I delete all of them, and how can I stop this happening in the future? Thanks

```
root@Highlander:/mnt/cache/appdata/dockers/mariadb# du -h --max-depth=1 | sort -hr
11G     .
8.6G    ./log
1.7G    ./databases
```
  11. OK, I checked that the 3rd user is working properly by creating a new remote using this user's token and then mounting it. It decrypted the existing folders and files correctly, so the password/encryption sync worked 🙂 🙂
  12. Hmm, not sure what was going on. I created a new teamdrive 'crypt' and the obscured file and folder names match up, regardless of what level I place the crypt at. I'm going to ditch the first teamdrive I created and use this one, and just double-check names manually the first time I decide to transfer files, if I do. I've added two users to this teamdrive, so I'm up to 3x750GB per day. I'll only use the 3rd upload manually, as I won't need it very often.
  13. That could be it. My gdrive path is Gdrive/crypt/then_encrypted_media_folders and my team drive is Gdrive/highlander (name of tdrive)/crypt/then_encrypted_media_folders. Maybe if I named the tdrive 'crypt' and put the media folders in the root, the names would match up. You're not really supposed to encrypt the root of a remote (I'll check why, as I'm not sure). I'm going to test now by creating a teamdrive called crypt and adding folders to its root.
  14. Yep, that's what I thought as well. The options I choose are:
  • 2 / Encrypt the filenames
  • 1 / Encrypt directory names
  • Password or pass phrase for encryption: y) Yes, type in my own password
  • Password or pass phrase for salt: y) Yes, type in my own password