DZMM

Everything posted by DZMM

  1. I've created a poll to see how much data people are storing - curious to see 1) how many people are using my scripts, and 2) how much people are typically uploading!
  2. It's been 3 years since I posted my GUIDE: HOW TO USE RCLONE TO MOUNT CLOUD DRIVES AND PLAY FILES, and I know many users have saved hundreds - some thousands - of pounds by moving their files to the cloud, as well as avoiding the overhead of running and managing a massive storage array. I'm curious to see just how many users are benefiting!
  3. If the backup is a copy of your appdata folder, with potentially hundreds of thousands of files, I would recommend creating a new tdrive to keep the files away from your media mount, as streaming performance gets worse once a tdrive has more than around 100k files. To be honest, if you're just copying or syncing files you don't need this script - just run a separate rclone sync job (although the script does have the facility to do basic versioning); see the sketch below.
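     As a minimal sketch of what that separate sync job could look like (the remote name "tdrive_backup" and the paths are assumptions, not from my setup - substitute your own):

        # Hedged sketch: one-way sync of a local appdata backup to its own remote/tdrive.
        rclone sync /mnt/user/backup/appdata tdrive_backup:backup/appdata \
            --transfers 4 \
            --checkers 8 \
            --exclude *.partial~* \
            --drive-stop-on-upload-limit \
            -vv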
  4. Are you sure another app or docker isn't creating the folder? E.g. Are all your dockers that use the mount starting AFTER a successful mount, rather than at array start?
  5. This is why I recommend that docker mappings ONLY use the mergerfs folders, otherwise you lose out on all the file-transfer benefits e.g. hardlinking, as unRAID will see files in /local, /mount_rclone and /mount_mergerfs as different files when they might be the same. /local should only be used to manually verify everything is working correctly.
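     For illustration only - the container and paths below are assumptions, not from the guide - a mapping that follows this rule looks like:

        # Hypothetical example: point the docker at the mergerfs union only,
        # never at the /local or /mount_rclone paths directly, so hardlinks
        # and instant moves keep working inside one filesystem.
        docker run -d --name sonarr \
            -v /mnt/user/mount_mergerfs/gdrive_media_vfs:/data \
            linuxserver/sonarr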
  6. Another way to update Plex (better in the long run IMO) is to add your new merged folder to Plex, which contains ALL your media (remote, local & local pending upload), then scan it, and when it's finished scanning, remove your old local folders from Plex - all your media will then be in Plex via just the mergerfs file references. This is a better solution because if you later decide to move a TV show from local-->local-pending-->remote, the move will be transparent to Plex and you won't have to mess around with Emptying Trash or files becoming temporarily unavailable.
  7. It's been a while and I don't use pfsense anymore, but I think I created an alias that included her static IP address, e.g. HIGH_PRIORITY_DEVICES, and then created floating rules (UDP and TCP) for the alias to assign its traffic to the high-priority queue.
  8. Did you at least bother to read the first sentence of the first post?
  9. not sure what's going on - new files should be stored locally until uploaded, so any performance issues shouldn't be down to rclone/mergerfs.
  10. Playing files from the cloud can be bandwidth "intensive" depending on how big the file is you are trying to play, how many concurrent streams there are etc. vs your line speed. I have a 900/120 connection. Re download: my usage (at peak about 5-6 streams) never becomes a concern for me, although in the evenings I do make sure any other activity is low so that my friends get a good experience. Re upload: I do have to manage the balance so that my remote Plex demand, seeding, uploading to gdrive etc. are all met. You should set bwlimits that are appropriate for your connection - both upload and download.
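      As a hedged example of how a cap can be applied (the numbers and remote/path names are illustrative, not a recommendation), rclone's --bwlimit flag accepts a timetable:

        # Cap transfers at 10 MiB/s during the evening, unlimited overnight (illustrative values).
        rclone move /mnt/user/local/gdrive_media_vfs gdrive_media_vfs: \
            --bwlimit "17:00,10M 23:00,off" \
            -vv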
  11. Only for the first run, if you've set up your rclone config that way - for the first run you just need the value in the script to match what's in your rclone config. After the first run the script rotates the entry used in the rclone config.
  12. What to do is explained in the config section of the upload script.
  13. You can use a normal drive (with a client ID) or a teamdrive (with SAs).
  14. The script changes the SA file used by rclone and overwrites the entry in the rclone config - i.e. I think the script takes care of it if you don't add an initial value. The script doesn't rename the files - you have to create 1-16 and put them in the right directory, and then the script will rotate which one rclone uses on each run.
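      A rough sketch of the rotation idea only (the counter file, remote name and paths are illustrative, not the script's actual code):

        # Point the remote at the next service account file, then bump a counter.
        # Assumes sa_gdrive1.json .. sa_gdrive16.json exist in the directory below.
        COUNT=$(cat /mnt/user/appdata/other/rclone/sa_counter 2>/dev/null || echo 1)
        rclone config update tdrive service_account_file \
            "/mnt/user/appdata/other/rclone/service_accounts/sa_gdrive${COUNT}.json"
        NEXT=$(( COUNT % 16 + 1 ))
        echo "$NEXT" > /mnt/user/appdata/other/rclone/sa_counter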
  15. Sorry if I confused you - if you are using SAs you don't need Client IDs. For playback, it's best to experiment. The defaults work well except for 4K content. Having the same settings for non-4K will be ok - it'll just mean start times might be 1-2s longer (wow).
  16. Correct - creating a client_id is a doddle: https://rclone.org/drive/#making-your-own-client-id. If you are using service_accounts you don't have to do this. Because you're moving server-side you don't need to do anything fancy:

        rclone move tdrive:crypt/encrypted_name_of_music_folder gdrive:crypt/encrypted_name_of_music_folder \
          --user-agent="transfer" \
          -vv \
          --buffer-size 512M \
          --drive-chunk-size 512M \
          --tpslimit 8 \
          --checkers 8 \
          --transfers 4 \
          --order-by modtime,ascending \
          --exclude *fuse_hidden* \
          --exclude *_HIDDEN \
          --exclude .recycle** \
          --exclude .Recycle.Bin/** \
          --exclude *.backup~* \
          --exclude *.partial~* \
          --drive-stop-on-upload-limit \
          --delete-empty-src-dirs

      Do you mean playback times? Every setup is different. There are a few settings you can play with, e.g. --vfs-read-chunk-size and --vfs-read-ahead. Here's what I currently have for my 4K tdrive mount:

        # create rclone mount
        rclone mount \
          $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
          --allow-other \
          --dir-cache-time 5000h \
          --attr-timeout 5000h \
          --log-level INFO \
          --poll-interval 10s \
          --cache-dir=/mnt/user/mount_rclone/cache/$RcloneRemoteName \
          --drive-pacer-min-sleep 10ms \
          --drive-pacer-burst 1000 \
          --vfs-cache-mode full \
          --vfs-read-chunk-size 256M \
          --vfs-cache-max-size 100G \
          --vfs-cache-max-age 96h \
          --vfs-read-ahead 2G \
          --bind=$RCloneMountIP \
          $RcloneRemoteName: $RcloneMountLocation &
  17. You should have around 100 json files if you've done the steps correctly. You need to rename up to 15-16 of them (16 needed to max out a 1Gbps line) to sa_gdrive1.json, sa_gdrive2.json and so on, and put them in a directory of your choosing.
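      If it helps, here's a hedged sketch of that rename (the source folder is an assumption - point it at wherever the ~100 generated json files landed):

        # Copy the first 16 generated keys into the sa_gdriveN.json naming scheme.
        i=1
        for f in /mnt/user/appdata/other/rclone/accounts/*.json; do
            [ "$i" -gt 16 ] && break
            cp "$f" "/mnt/user/appdata/other/rclone/service_accounts/sa_gdrive${i}.json"
            i=$((i+1))
        done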
  18. 1. Change your rclone config to look something like this:

            [tdrive]
            type = drive
            scope = drive
            service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_tdrive.json
            team_drive = xxxxxxxxxxxxxxxxxxxxxxxxx
            server_side_across_configs = true

      2. In the folder where your service account files are (e.g. in my case /mnt/user/appdata/other/rclone/service_accounts), make sure they are numbered sa_tdrive1.json, sa_tdrive2.json, sa_tdrive3.json and so on.

      3. Then fill in the settings in the upload script:

            # Use Service Accounts.  Instructions: https://github.com/xyou365/AutoRclone
            UseServiceAccountUpload="Y" # Y/N. Choose whether to use Service Accounts.
            ServiceAccountDirectory="/mnt/user/appdata/other/rclone/service_accounts" # Path to your Service Account's .json files.
            ServiceAccountFile="sa_tdrive_upload" # Enter characters before counter in your json files e.g. for sa_gdrive_upload1.json -->sa_gdrive_upload100.json, enter "sa_gdrive_upload".
            CountServiceAccounts="16" # Integer number of service accounts to use.

      4. If you need to move files from the old gdrive mount to the tdrive, it's best to do this within Google Drive if there's more than 750GB, to avoid quota issues. Stop all your dockers etc. until you've finished the move, create the new tdrive mount, and once all the files are available in the right place, restart your dockers.
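      For step 4, a hedged sketch of a server-side move between the two remotes (the remote and folder names mirror the examples above - substitute your own):

        # Server-side move, so nothing is re-downloaded/re-uploaded through your own connection.
        rclone move gdrive:crypt tdrive:crypt \
            --drive-server-side-across-configs \
            --drive-stop-on-upload-limit \
            --delete-empty-src-dirs \
            -vv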
  19. Try adding: --vfs-read-ahead 2G

      I was having the same problem. I think what's happening is the first chunk isn't enough of the file to keep Plex happy, so for 4K/high-bitrate you need more of the file ready before playback starts. Also, try upping --vfs-read-chunk-size to higher than the default (think it's 32m). I use --vfs-read-chunk-size 256M for my 4K files:

        rclone mount \
          $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
          --allow-other \
          --dir-cache-time 5000h \
          --attr-timeout 5000h \
          --log-level INFO \
          --poll-interval 10s \
          --cache-dir=/mnt/user/mount_rclone/cache/$RcloneRemoteName \
          --drive-pacer-min-sleep 10ms \
          --drive-pacer-burst 1000 \
          --vfs-cache-mode full \
          --vfs-read-chunk-size 256M \
          --vfs-cache-max-size 100G \
          --vfs-cache-max-age 96h \
          --vfs-read-ahead 2G \
          --bind=$RCloneMountIP \
          $RcloneRemoteName: $RcloneMountLocation &
  20. All of my clients bar one are Nvidia Shields and I have no problems - except with the 2019 sticks, which were iffy with 4K content buffering and sometimes crashing. I added in --vfs-read-ahead 2G, which seemed to do the trick. Also, try upping --vfs-read-chunk-size to higher than the default (think it's 32m). I use --vfs-read-chunk-size 256M for my 4K files:

        rclone mount \
          $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
          --allow-other \
          --dir-cache-time 5000h \
          --attr-timeout 5000h \
          --log-level INFO \
          --poll-interval 10s \
          --cache-dir=/mnt/user/mount_rclone/cache/$RcloneRemoteName \
          --drive-pacer-min-sleep 10ms \
          --drive-pacer-burst 1000 \
          --vfs-cache-mode full \
          --vfs-read-chunk-size 256M \
          --vfs-cache-max-size 100G \
          --vfs-cache-max-age 96h \
          --vfs-read-ahead 2G \
          --bind=$RCloneMountIP \
          $RcloneRemoteName: $RcloneMountLocation &
  21. Did you fix this? The script should "self-heal" if all your settings are correct
  22. Script looks fine. My guess is you have set up a docker wrong.