DZMM

Members
  • Posts
    2665
  • Joined
  • Last visited
  • Days Won
    8

DZMM last won the day on June 13 2019

DZMM had the most liked content!

8 Followers

About DZMM

  • Birthday
    December 30
  • Gender
    Male
  • Location
    London

Recent Profile Visitors

9627 profile views

DZMM's Achievements

Proficient (10/14)

Reputation: 246

  1. Playing files from the cloud can be bandwidth "intensive" depending on how big the file you're trying to play is, how many concurrent streams you have, etc. versus your line speed. I have a 900/120 connection and, on the download side, my usage (at peak about 5-6 streams) never becomes a concern, although in the evenings I do make sure any other activity is low so that my friends get a good experience. On the upload side I do have to manage things so that my remote Plex demand, seeding, uploading to gdrive etc. are all met. You should set bwlimits that are appropriate for your connection - both upload and download (rough example below).
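     As a rough illustration only (a sketch, not a recommendation - size the limits to your own line, and the local path is a placeholder), rclone's --bwlimit flag takes either a single cap or a timetable, so an upload command can be throttled during the day/evening and opened up overnight:

         # hypothetical example: cap uploads at 9 MB/s (~72 Mbps) from 8am, remove the cap at 11pm
         rclone move /mnt/user/local/gdrive_upload gdrive:crypt \
             --bwlimit "08:00,9M 23:00,off" \
             -vv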
  2. Only for the first run, if you've set up your rclone config that way - for the first run you just need the script to use the value that's in your rclone config. After the first run the script rotates the entry used in the rclone config.
  3. What to do is explained in the config section of the upload script.
  4. You can use a normal drive (with a client ID) or a teamdrive (with SAs) - a sketch of both remote types is below.
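     For reference, the two setups look roughly like this in the rclone config (illustrative entries only - the client_id/client_secret and team_drive values are placeholders, and the service account path is just my layout):

         [gdrive]
         type = drive
         scope = drive
         client_id = xxxxxxxxxx.apps.googleusercontent.com
         client_secret = xxxxxxxxxx

         [tdrive]
         type = drive
         scope = drive
         service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_tdrive.json
         team_drive = xxxxxxxxxxxxxxxxxxxxxxxxx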
  5. The script changes the SA file used by rclone and overwrites the entry in the rclone config - i.e. I think the script takes care of it even if you don't add an initial value. The script doesn't rename the files though - you have to create 1-16 yourself and put them in the right directory, and then the script will rotate which one rclone uses for each run (roughly the idea sketched below).
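     Not the actual script, just a minimal sketch of the rotation idea, assuming sa_tdrive1.json-sa_tdrive16.json already exist (and using rclone's --drive-service-account-file flag for simplicity, whereas the real script rewrites the config entry instead; the local path is a placeholder):

         SA_DIR="/mnt/user/appdata/other/rclone/service_accounts"    # assumed location of the json files
         COUNTER_FILE="$SA_DIR/counter"
         COUNT=$(cat "$COUNTER_FILE" 2>/dev/null || echo 1)          # which SA to use on this run

         rclone move /mnt/user/local/tdrive tdrive:crypt \
             --drive-service-account-file="$SA_DIR/sa_tdrive${COUNT}.json" \
             --drive-stop-on-upload-limit -vv

         echo $(( COUNT % 16 + 1 )) > "$COUNTER_FILE"                # rotate 1..16 then wrap around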
  6. Sorry if I confused you - if you are using SAs you don't need client IDs. For playback, it's best to experiment. The defaults work well except for 4K content. Having the same settings for non-4K will be ok - it'll just mean start times might be 1-2s longer (wow).
  7. Correct - creating a client_id is a doddle: https://rclone.org/drive/#making-your-own-client-id. If you are using service_accounts you don't have to do this. Because you're moving server-side you don't need to do anything fancy:

     rclone move tdrive:crypt/encrypted_name_of_music_folder gdrive:crypt/encrypted_name_of_music_folder \
         --user-agent="transfer" \
         -vv \
         --buffer-size 512M \
         --drive-chunk-size 512M \
         --tpslimit 8 \
         --checkers 8 \
         --transfers 4 \
         --order-by modtime,ascending \
         --exclude *fuse_hidden* \
         --exclude *_HIDDEN \
         --exclude .recycle** \
         --exclude .Recycle.Bin/** \
         --exclude *.backup~* \
         --exclude *.partial~* \
         --drive-stop-on-upload-limit \
         --delete-empty-src-dirs

     Do you mean playback times? Every setup is different. There are a few settings you can play with, e.g. --vfs-read-chunk-size, --vfs-read-ahead. For example, here's what I currently have for my 4K tdrive mount:

     # create rclone mount
     rclone mount \
         $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
         --allow-other \
         --dir-cache-time 5000h \
         --attr-timeout 5000h \
         --log-level INFO \
         --poll-interval 10s \
         --cache-dir=/mnt/user/mount_rclone/cache/$RcloneRemoteName \
         --drive-pacer-min-sleep 10ms \
         --drive-pacer-burst 1000 \
         --vfs-cache-mode full \
         --vfs-read-chunk-size 256M \
         --vfs-cache-max-size 100G \
         --vfs-cache-max-age 96h \
         --vfs-read-ahead 2G \
         --bind=$RCloneMountIP \
         $RcloneRemoteName: $RcloneMountLocation &
  8. You should have around 100 json files if you've done the steps correctly. You need to rename up to 15-16 of them (16 needed to max out a 1Gbps line) as sa_gdrive1.json, sa_gdrive2.json and so on, and put them in a directory of your choosing - e.g. with a quick loop like the one below.
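     If you don't fancy renaming them by hand, a throwaway loop along these lines would do it (a sketch only - the directory is just an example, adjust it to wherever you put the AutoRclone jsons and double-check nothing gets overwritten):

         cd /mnt/user/appdata/other/rclone/service_accounts   # example destination directory
         i=1
         for f in *.json; do
             [ "$i" -gt 16 ] && break
             mv -n "$f" "sa_gdrive${i}.json"                   # -n: never overwrite an existing file
             i=$((i+1))
         done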
  9. 1. Change your rclone config to look something like this:

     [tdrive]
     type = drive
     scope = drive
     service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_tdrive.json
     team_drive = xxxxxxxxxxxxxxxxxxxxxxxxx
     server_side_across_configs = true

     2. In the folder where your service account files are (e.g. in my case /mnt/user/appdata/other/rclone/service_accounts), make sure they are numbered sa_tdrive1.json, sa_tdrive2.json, sa_tdrive3.json and so on.

     3. Then fill in the settings in the upload script:

     # Use Service Accounts.  Instructions: https://github.com/xyou365/AutoRclone
     UseServiceAccountUpload="Y" # Y/N. Choose whether to use Service Accounts.
     ServiceAccountDirectory="/mnt/user/appdata/other/rclone/service_accounts" # Path to your Service Account's .json files.
     ServiceAccountFile="sa_tdrive_upload" # Enter characters before counter in your json files, e.g. for sa_gdrive_upload1.json --> sa_gdrive_upload100.json, enter "sa_gdrive_upload".
     CountServiceAccounts="16" # Integer number of service accounts to use.

     4. If you need to move files from the old gdrive mount to the tdrive, it's best to do this within google drive if there's more than 750GB, to avoid quota issues. Stop all your dockers etc. until you've finished the move, create the new tdrive mount, and once all the files are available in the right place, restart your dockers.
  10. Try adding --vfs-read-ahead 2G. I was having the same problem. I think what's happening is the first chunk isn't enough of the file to keep Plex happy, so for 4K/high-bitrate content you need more of the file ready before playback starts. Also, try upping --vfs-read-chunk-size higher than the default (I think it's 32M). I use --vfs-read-chunk-size 256M for my 4K files:

      rclone mount \
          $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
          --allow-other \
          --dir-cache-time 5000h \
          --attr-timeout 5000h \
          --log-level INFO \
          --poll-interval 10s \
          --cache-dir=/mnt/user/mount_rclone/cache/$RcloneRemoteName \
          --drive-pacer-min-sleep 10ms \
          --drive-pacer-burst 1000 \
          --vfs-cache-mode full \
          --vfs-read-chunk-size 256M \
          --vfs-cache-max-size 100G \
          --vfs-cache-max-age 96h \
          --vfs-read-ahead 2G \
          --bind=$RCloneMountIP \
          $RcloneRemoteName: $RcloneMountLocation &
  11. All of my clients bar one are Nvidia Shields and I have no problems - except with the 2019 sticks, which were iffy with 4K content, buffering and sometimes crashing. I added in --vfs-read-ahead 2G which seemed to do the trick. Also, try upping --vfs-read-chunk-size higher than the default (I think it's 32M). I use --vfs-read-chunk-size 256M for my 4K files:

      rclone mount \
          $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
          --allow-other \
          --dir-cache-time 5000h \
          --attr-timeout 5000h \
          --log-level INFO \
          --poll-interval 10s \
          --cache-dir=/mnt/user/mount_rclone/cache/$RcloneRemoteName \
          --drive-pacer-min-sleep 10ms \
          --drive-pacer-burst 1000 \
          --vfs-cache-mode full \
          --vfs-read-chunk-size 256M \
          --vfs-cache-max-size 100G \
          --vfs-cache-max-age 96h \
          --vfs-read-ahead 2G \
          --bind=$RCloneMountIP \
          $RcloneRemoteName: $RcloneMountLocation &
  12. Did you fix this? The script should "self-heal" if all your settings are correct
  13. Script looks fine. My guess is you have set up a docker wrong.