Everything posted by DZMM

  1. What to do is explained in the config section of the upload script
  2. You can use a normal drive (with a client ID) or a teamdrive (with SAs) - examples of both configs below.
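     For reference, the two flavours of rclone config look roughly like this (the remote names, IDs, token and paths are placeholders, not values from my setup):

        # normal drive - uses your own client ID/secret and an OAuth token
        [gdrive]
        type = drive
        client_id = xxxxxxxxxxxx.apps.googleusercontent.com
        client_secret = xxxxxxxxxxxx
        scope = drive
        token = {"access_token":"xxxx","token_type":"Bearer","refresh_token":"xxxx","expiry":"xxxx"}

        # teamdrive - authenticates with a service account json file instead
        [tdrive]
        type = drive
        scope = drive
        service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_tdrive.json
        team_drive = xxxxxxxxxxxxxxxxxxxxxxxxx
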
  3. The script changes the SA file used by rclone and overwrites the entry in the rclone config - i.e. I think the script takes care of it even if you don't add an initial value. The script doesn't rename the files: you have to create 1-16 yourself and put them in the right directory, and then the script will rotate which one rclone uses for each run (rough sketch of the idea below).
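     As a very rough sketch of the rotation idea only - this is not the actual script, and the counter file, paths and remote name are just examples:

        #!/bin/bash
        # illustrative only: advance a counter 1-16 and point the rclone remote
        # at the matching service account file before each upload run
        SA_DIR="/mnt/user/appdata/other/rclone/service_accounts"
        COUNTER_FILE="$SA_DIR/counter"
        RCLONE_CONF="/mnt/user/appdata/other/rclone/rclone.conf"

        COUNT=$(cat "$COUNTER_FILE" 2>/dev/null || echo 0)
        COUNT=$(( COUNT % 16 + 1 ))          # wraps back to 1 after 16
        echo "$COUNT" > "$COUNTER_FILE"

        # overwrite the service_account_file entry for the remote (assumes a single entry)
        sed -i "s|^service_account_file = .*|service_account_file = $SA_DIR/sa_tdrive$COUNT.json|" "$RCLONE_CONF"
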
  4. Sorry if I confused you - if you are using SAs you don't need client IDs. For playback, it's best to experiment. The defaults work well except for 4K content. Having the same settings for non-4K will be ok - it'll just mean start times might be 1-2s longer (wow)
  5. Correct - creating a client_id is a doddle: https://rclone.org/drive/#making-your-own-client-id. If you are using service_accounts you don't have to do this. Because you're moving server-side you don't need to do anything fancy:

        rclone move tdrive:crypt/encrypted_name_of_music_folder gdrive:crypt/encrypted_name_of_music_folder \
        --user-agent="transfer" \
        -vv \
        --buffer-size 512M \
        --drive-chunk-size 512M \
        --tpslimit 8 \
        --checkers 8 \
        --transfers 4 \
        --order-by modtime,ascending \
        --exclude *fuse_hidden* \
        --exclude *_HIDDEN \
        --exclude .recycle** \
        --exclude .Recycle.Bin/** \
        --exclude *.backup~* \
        --exclude *.partial~* \
        --drive-stop-on-upload-limit \
        --delete-empty-src-dirs

     Do you mean playback times? Every setup is different. There are a few settings you can play with e.g. --vfs-read-chunk-size, --vfs-read-ahead. e.g. here's what I currently have for my 4K tdrive mount:

        # create rclone mount
        rclone mount \
        $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
        --allow-other \
        --dir-cache-time 5000h \
        --attr-timeout 5000h \
        --log-level INFO \
        --poll-interval 10s \
        --cache-dir=/mnt/user/mount_rclone/cache/$RcloneRemoteName \
        --drive-pacer-min-sleep 10ms \
        --drive-pacer-burst 1000 \
        --vfs-cache-mode full \
        --vfs-read-chunk-size 256M \
        --vfs-cache-max-size 100G \
        --vfs-cache-max-age 96h \
        --vfs-read-ahead 2G \
        --bind=$RCloneMountIP \
        $RcloneRemoteName: $RcloneMountLocation &

  6. You should have around 100 json files if you've done the steps correctly. You need to rename up to 15-16 of them (16 needed to max out a 1Gbps line) as sa_gdrive1.json, sa_gdrive2.json and so on, and put them in a directory of your choosing (see the loop below).
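     Something like this would do the renaming - a rough sketch only (the source directory is wherever AutoRclone generated the json files, and the destination is just an example path):

        # copy the first 16 generated json files into the service accounts folder,
        # renaming them sa_gdrive1.json ... sa_gdrive16.json
        SRC="/path/to/autorclone/accounts"
        DEST="/mnt/user/appdata/other/rclone/service_accounts"
        mkdir -p "$DEST"
        i=1
        for f in "$SRC"/*.json; do
            cp "$f" "$DEST/sa_gdrive$i.json"
            i=$((i+1))
            [ "$i" -gt 16 ] && break
        done
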
  7. 1. Change your rclone config to look something like this:

        [tdrive]
        type = drive
        scope = drive
        service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_tdrive.json
        team_drive = xxxxxxxxxxxxxxxxxxxxxxxxx
        server_side_across_configs = true

     2. In the folder where your service account files are, e.g. in my case /mnt/user/appdata/other/rclone/service_accounts, make sure they are numbered sa_tdrive1.json, sa_tdrive2.json, sa_tdrive3.json and so on

     3. Then fill in the settings in the upload script:

        # Use Service Accounts. Instructions: https://github.com/xyou365/AutoRclone
        UseServiceAccountUpload="Y" # Y/N. Choose whether to use Service Accounts.
        ServiceAccountDirectory="/mnt/user/appdata/other/rclone/service_accounts" # Path to your Service Account's .json files.
        ServiceAccountFile="sa_tdrive_upload" # Enter characters before counter in your json files e.g. for sa_gdrive_upload1.json -->sa_gdrive_upload100.json, enter "sa_gdrive_upload".
        CountServiceAccounts="16" # Integer number of service accounts to use.

     4. If you need to move files from the old gdrive mount to the tdrive, it's best to do this within google drive if there's more than 750GB to avoid quota issues. Stop all your dockers etc until you've finished the move, create the new tdrive mount, and once all the files are available in the right place, restart your dockers

  8. Try adding: --vfs-read-ahead 2G

     I was having the same problem. I think what's happening is the first chunk isn't enough of the file to keep Plex happy, so for 4K/high-bitrate you need more of the file ready before playback starts. Also, try upping --vfs-read-chunk-size to higher than the default (think it's 32M). I use --vfs-read-chunk-size 256M for my 4K files:

        rclone mount \
        $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
        --allow-other \
        --dir-cache-time 5000h \
        --attr-timeout 5000h \
        --log-level INFO \
        --poll-interval 10s \
        --cache-dir=/mnt/user/mount_rclone/cache/$RcloneRemoteName \
        --drive-pacer-min-sleep 10ms \
        --drive-pacer-burst 1000 \
        --vfs-cache-mode full \
        --vfs-read-chunk-size 256M \
        --vfs-cache-max-size 100G \
        --vfs-cache-max-age 96h \
        --vfs-read-ahead 2G \
        --bind=$RCloneMountIP \
        $RcloneRemoteName: $RcloneMountLocation &

  9. All of my clients bar 1 are Nvidia Shields and I have no problems - except with the 2019 sticks, which were iffy with 4K content buffering and sometimes crashing. I added in --vfs-read-ahead 2G which seemed to do the trick. Also, try upping --vfs-read-chunk-size to higher than the default (think it's 32M). I use --vfs-read-chunk-size 256M for my 4K files:

        rclone mount \
        $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
        --allow-other \
        --dir-cache-time 5000h \
        --attr-timeout 5000h \
        --log-level INFO \
        --poll-interval 10s \
        --cache-dir=/mnt/user/mount_rclone/cache/$RcloneRemoteName \
        --drive-pacer-min-sleep 10ms \
        --drive-pacer-burst 1000 \
        --vfs-cache-mode full \
        --vfs-read-chunk-size 256M \
        --vfs-cache-max-size 100G \
        --vfs-cache-max-age 96h \
        --vfs-read-ahead 2G \
        --bind=$RCloneMountIP \
        $RcloneRemoteName: $RcloneMountLocation &

  10. Did you fix this? The script should "self-heal" if all your settings are correct
  11. Script looks fine. My guess is you have set up a docker wrong.
  12. My upload doesn't let me bust 750GB/day anymore (moved home). --drive-stop-on-upload-limit stops the transfer when the quota error occurs, not before, i.e. it is not preventative - it just stops the script constantly hammering away for 24 hours trying to upload files. If you want to upload >750GB/day, use service accounts with the script.
  13. Probably - just create a new instance of the script and --include only that path: https://rclone.org/filtering/ (rough example below)
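     As a rough illustration (the local path, remote name and folder are placeholders, not from my setup), the second instance's move command would end up looking something like:

        # only upload the one folder; everything else is left for the main script
        rclone move /mnt/user/local/gdrive_media_vfs gdrive_media_vfs: \
            --include "Music/**" \
            --delete-empty-src-dirs \
            -vv
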
  14. Easy: RcloneCommand="copy" - it's all explained in the script
  15. Excluded files aren't deleted - they stay local and aren't uploaded
  16. The /Sync+ folder is where Plex stores the temp files that users have selected to download to their devices, until they are actually downloaded. It is not controlled by the transcode folder location, which is where the tmp files created when a file is streamed are stored. If they aren't being removed, then one of your users has (probably) mistakenly selected lots of files, e.g. a whole season or a whole library, to be downloaded, but doesn't have the space on their device to sync them, or enough bandwidth to sync them, or never connects that device again for long enough, meaning the files never make it to their device. Sadly, Plex doesn't make it easy to work out which user is syncing. I don't like the feature for my users as they always do it wrong, meaning my server wastes a lot of resources and space, so I turn off syncing for most users. If you don't want this behaviour, go through each of your users and disable "Allow Downloads", then delete the files from the Sync+ folder.
  17. I just had this problem again. I think the port forwarding on the BT hub is rubbish and fails occasionally. SWAG stopped working for me again - I just had to reboot the hub.
  18. Post your full mount script and rclone config without passwords please
  19. Hi, has anyone succeeded with running the commands to repair a database in the docker? Annoyingly my database backups were never made - and I'm trying to avoid using CA as my backup's a bit old. https://support.plex.tv/articles/repair-a-corrupted-database/

     When I try to run the commands to repair the database (I created a copy of my database) I get this:

        root@Highlander:/# "/usr/lib/plexmediaserver/Plex SQLite" com.plexapp.plugins.library2.db "PRAGMA integrity_check"
        ok
        root@Highlander:/# "/usr/lib/plexmediaserver/Plex SQLite" com.plexapp.plugins.library2.db ".output dump.sql" ".dump"
        root@Highlander:/# del com.plexapp.plugins.library2.db
        bash: del: command not found

     Nothing seems to happen with the dump command, and then the del fails. Any idea what I'm doing wrong? Thanks in advance.

  20. I'm not sure. A quick google turned this up on the rclone forum - I'll keep an eye on the thread
  21. I can't remember what copy is on github these days, but in my local script I'd add it anywhere after the #remove dummy file section at the end of the script (sketch below):

        # remove dummy file
        rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
        echo "$(date "+%d.%m.%Y %T") INFO: Script complete"

     copy = copies new or changed files without deleting anything from the source
     sync = makes the destination match the source - it's one-way, so files deleted from the source are also deleted from the destination

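     Purely as an illustration of where an extra command would slot in (the added rclone copy line, its paths and remote are placeholders, not part of the actual script):

        # remove dummy file
        rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName

        # extra job added here, e.g. a copy that leaves the source files in place
        rclone copy /mnt/user/local/some_folder gdrive_media_vfs:some_folder -vv

        echo "$(date "+%d.%m.%Y %T") INFO: Script complete"
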
  22. Lots of questions there. i) If you need a folder to be present that the upload job deletes at the end because it's empty, just add a mkdir to the right section of the script (see the sketch below). ii) radarr/sonarr usually need starting AFTER the mount - that's why the script has a section to start dockers once a successful mount has been verified. An alternative is to manually restart the dockers.
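     For example, something like this - the folder path, mountcheck location and docker names are illustrative, not what the script actually contains:

        # i) recreate a folder the upload job removes when it's empty
        mkdir -p /mnt/user/local/gdrive_media_vfs/downloads

        # ii) only start the dockers once the mount has been verified
        if [[ -f "/mnt/user/mount_rclone/gdrive_media_vfs/mountcheck" ]]; then
            docker start radarr sonarr
        fi
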
  23. Don't think so - when I want it to get a certificate I temporarily delete one of my sub-domains, restart to get the certs, and then re-add the sub-domain.