DZMM

Everything posted by DZMM

  1. I'm not sure - I'm tempted to turn the cache off completely as my hit rate must be virtually zero as files don't reside there long enough. Maybe it's Plex scanning a file that leads to rclone downloading the file for it to be analysed?
  2. Sometimes files end up in your mergerfs mount location BEFORE the mount runs. Go into each disk and manually move the files from /mount_mergerfs to /local, then run the mount script again, i.e.:

     /mnt/disk1/mount_mergerfs/.... ----> /mnt/disk1/local/
     /mnt/disk2/mount_mergerfs/.... ----> /mnt/disk2/local/

     and so on, until you've moved all the troublesome files.
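     That cleanup can be sketched as a loop over the disks. This is a minimal sketch, not the thread's actual script: the function name, the default root of /mnt, and the use of rsync are my assumptions; the branch paths follow the /mnt/diskN/mount_mergerfs and /mnt/diskN/local layout described above.

     ```shell
     #!/bin/sh
     # Move stray files from each disk's mergerfs branch back to its local branch.
     # move_stray takes the root to scan (defaults to /mnt, as on unRAID).
     move_stray() {
         root="${1:-/mnt}"
         for disk in "$root"/disk*; do
             src="$disk/mount_mergerfs"
             dst="$disk/local"
             [ -d "$src" ] || continue
             mkdir -p "$dst"
             # preserve relative paths; delete each source file once it is copied
             rsync -a --remove-source-files "$src/" "$dst/"
         done
     }

     move_stray   # tidy every /mnt/diskN, then re-run the mount script
     ```

     The rsync flags only remove source files that copied cleanly, so a failed copy never loses data; empty source directories are left behind and are harmless.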
  3. depends how much you are uploading and at what speed!
  4. Post your upload settings, but I think you've got a ":" in "gdrive_upload_vfs:" that you shouldn't have. The script wants the name without it:

     RcloneRemoteName="tdrive_vfs" # Name of rclone remote mount WITHOUT ':'
  5. I have no idea why your first launch is so slow (the 2nd is fast because it's coming from the local cache). I can see that you've copied my mount settings, and my 1st launch (900/120 connection) is never more than 3-5s. Does your rclone config look something like this:

     [tdrive]
     type = drive
     scope = drive
     service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_tdrive.json
     team_drive = xxxxxxxxxxxxxxxx
     server_side_across_configs = true

     [tdrive_vfs]
     type = crypt
     remote = tdrive:crypt
     filename_encryption = standard
     directory_name_encryption = true
     password = xxxxxxxxxxxxxxxxxxxx
     password2 = xxxxxxxxxxxxxxxxxxxxxxxxx
  6. Hmm, this is interesting (in a wrong way, of course!). I have the same setup, and the only thing I can think of is that maybe rclone upload doesn't like hardlinks, i.e. after it's uploaded the file it deletes the original file rather than respecting the hardlink? Mergerfs definitely supports hardlinks. This would explain why I haven't come across it: I seed for a max of 14 days, whereas because of my slow upload speed rclone doesn't typically upload a file until 14+ days, so the hardlink is already gone by then. I can't think of a solution, other than maybe ditching hardlinks and doing a copy to your media folder so that rclone can move the copy. Worth a test to see if this is the cause?
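     One quick way to test that theory: hardlinked copies share an inode, so if the upload unlinks the media copy rather than respecting the hardlink, the seed copy's link count drops from 2 to 1. A small sketch (the helper names and example paths are mine; stat -c is GNU coreutils):

     ```shell
     #!/bin/sh
     # link_count: how many hard links point at a file's inode (GNU stat).
     link_count() { stat -c '%h' "$1"; }

     # same_inode: true if two paths are hardlinks of the same file.
     same_inode() { [ "$1" -ef "$2" ]; }

     # Usage idea (hypothetical paths): check before the upload script fires,
     #   same_inode /mnt/user/local/downloads/show.mkv /mnt/user/local/media/show.mkv
     # then check link_count on the seed copy afterwards - if it has fallen to 1,
     # the upload removed the media copy's link instead of respecting the hardlink.
     ```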
  7. sounds like an Apple or Plex issue. Have you tried playing a local copy to see what happens?
  8. Not sure, but Enterprise Standard comes with unlimited storage and costs $20/mth
  9. I don't have this problem (used to, but somehow it went away) - there is a script somewhere in the thread that has helped a few users. I think it's a few pages back
  10. My personal rclone mount script was a bit different to the one on github - this is what I have:

      # create rclone mount
      rclone mount \
          $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
          --allow-other \
          --umask 000 \
          --dir-cache-time 5000h \
          --attr-timeout 5000h \
          --log-level INFO \
          --poll-interval 10s \
          --cache-dir=/mnt/user/mount_rclone/cache/$RcloneRemoteName \
          --drive-pacer-min-sleep 10ms \
          --drive-pacer-burst 1000 \
          --vfs-cache-mode full \
          --vfs-cache-max-size 100G \
          --vfs-cache-max-age 96h \
          --vfs-read-ahead 1G \
          --bind=$RCloneMountIP \
          $RcloneRemoteName: $RcloneMountLocation &

      I've updated the github version just now. I think you should see playback and scanning improvements.
  11. I just did a full power cycle (modem, router, switch, server) and it seems ok now.
  12. I had a power cut this morning that caused my server to reboot. The server started "ok" afterwards, but everything is slow, particularly my W10 VMs, which take forever to start and are then unusable as everything is mega slow. Even trying to move the mouse around is impossible - it took over an hour just to boot to the desktop. The extended test for FCP is still running even though it's been going for a few hours. At first I thought it was a dodgy docker, as my CPU usage was at 80%, which is unusual, but even after turning docker off and only running the main VM (Buzz) that I need, the VMs are still running very slow, which means I can't do my job. I'm hoping that there's something in my diagnostics that explains why and that someone can help me please. highlander-diagnostics-20211130-1431.zip
  13. Looks fine - as you're not using mergerfs, cloud files are being handled just by rclone, so things should be even simpler. Have you changed any of the mount entries further down the script? The section should look like this:

      # create rclone mount
      rclone mount \
          $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
          --allow-other \
          --dir-cache-time 5000h \
          --attr-timeout 5000h \
          --log-level INFO \
          --poll-interval 10s \
          --cache-dir=/mnt/user/mount_rclone/cache/$RcloneRemoteName \
          --drive-pacer-min-sleep 10ms \
          --drive-pacer-burst 1000 \
          --vfs-cache-mode full \
          --vfs-cache-max-size 100G \
          --vfs-cache-max-age 96h \
          --vfs-read-ahead 2G \
          --bind=$RCloneMountIP \
          $RcloneRemoteName: $RcloneMountLocation &

      Can you post your rclone config as well please, to eliminate any problems there.
  14. I don't think the cache works well at all, to be honest, as it keeps EVERY file that is accessed, i.e. if Plex does a scan/analysis it stores those files too. In my scenario the cache would have to be very big to have a decent hit rate, so I keep mine fairly small - enough that it can probably cope with someone rewinding a show, but not much else.
  15. You can definitely buy enterprise standard for one user - here's my page (email just arrived). $20/mth is still a good deal. I'm going to purchase the annual plan even though there's no discount, in case Google change their mind
  16. Looks like the 5 user limit isn't enforced: https://forum.rclone.org/t/gsuite-is-evolving-to-workspace-so-prices-and-tbs-too/19594/113?u=binsonbuzz
  17. If they start enforcing the min 5 requirement (they don't now), then I think that might be the solution - users pairing up in groups of 5.
  18. read up on --exclude on the rclone forums - you can stop files being uploaded by folder, type, age, name etc
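      As an illustration, the rules can live in a filter file passed to the upload command. This is a hypothetical sketch: the flags (--filter-from, --min-age) are real rclone options, but the paths, patterns, and remote name are my assumptions, not anyone's actual settings.

      ```shell
      #!/bin/sh
      # Filter rules in rclone's filter syntax: lines starting "-" exclude,
      # lines starting "+" include. Patterns and path are examples only.
      cat > /tmp/upload-filter.txt <<'EOF'
      - *.partial
      - /downloads/incomplete/**
      - *.nfo
      EOF

      # The upload would then skip those patterns, plus anything newer than
      # 15 minutes (shown as a comment, not run here):
      #   rclone move /mnt/user/local/tdrive_vfs tdrive_vfs: \
      #       --filter-from /tmp/upload-filter.txt --min-age 15m
      ```

      --min-age is handy on its own: it stops rclone uploading files that are still being written.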
  19. lol if you were the person who just bought me a few beers - thanks! It is a game changer - I can't imagine how much work it would be to run a server with so many disks involved. And the cost!