DZMM


Everything posted by DZMM

  1. Change this in the script manually. /mount_rclone is a live view of gdrive - if you add a file there, it's transferred to gdrive in real-time. That's not advised: if the transfer fails the file can be lost, whereas rclone move will retry the upload. Also, I believe that with the new cache functionality the file gets cached and then uploaded in the background, which means you lose all control of the upload speed and of when the upload occurs. I can't think of a scenario where I'd advise adding files directly to /mount_rclone - just use the upload script and set the cron to a high frequency.
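A minimal sketch of what such an upload script looks like - the "gdrive:" remote name, the paths, and the flag values here are assumptions, not the exact script from this thread:

```shell
#!/bin/bash
# Sketch of a scheduled upload: rclone move retries failed transfers
# (unlike copying straight into the mounted remote) and --bwlimit keeps
# control of the upload speed. Remote name and paths are placeholders.
rclone move /mnt/user/local gdrive: \
  --min-age 15m \
  --bwlimit 9M \
  --delete-empty-src-dirs \
  --log-file /mnt/user/logs/upload.log -v
```

--min-age skips files that may still be being written, which matters when downloads land in the same folder the script sweeps.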
  2. 1. Reduce the poll time so rclone picks up cloud changes faster. 2. If you want to upload to the cloud more frequently, run your cron more frequently. If you want files to go in real-time, then copy to /mount_rclone not /mount_mergerfs - not recommended, but it's there if you really want to do it.
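For reference, both knobs live in different places - the poll time is a mount flag, the upload frequency is your cron entry. A sketch, with assumed paths and values:

```shell
# Mount sketch (remote name and paths are assumptions): --poll-interval
# controls how quickly rclone notices changes made cloud-side;
# --dir-cache-time is how long the directory listing is trusted between polls.
rclone mount gdrive: /mnt/user/mount_rclone/gdrive \
  --poll-interval 15s \
  --dir-cache-time 720h \
  --allow-other --daemon

# Upload frequency is just the cron schedule on the upload script, e.g.
# every 15 minutes instead of hourly:
# */15 * * * * /boot/config/plugins/user.scripts/scripts/rclone_upload/script
```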
  3. That's the right behaviour - when you add files to /mount_mergerfs they are actually physically added to /local until your upload script moves them to gdrive. The /mount_mergerfs folder "masks" the physical location of the file (local or cloud), so that Plex etc just play it and MergerFS/rclone manage ponying up the file, i.e. to Plex the file hasn't moved. When you add direct to gdrive (not recommended going forward, but I understand you are testing) the changes will be picked up by rclone at the next poll, displayed in /mount_rclone, then picked up by MergerFS immediately and also "added" to /mount_mergerfs.
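The masking comes from how the mergerfs branches are ordered. A sketch of the pairing, with assumed paths and a representative option set (check your own mount command for the exact flags):

```shell
# /mnt/user/local is listed first, so newly created files land there
# physically; /mnt/user/mount_rclone shows what is already on gdrive.
# Plex only ever sees the merged path, so when the upload script later
# moves a file from local to the cloud, its path never changes.
mergerfs /mnt/user/local:/mnt/user/mount_rclone /mnt/user/mount_mergerfs \
  -o rw,use_ino,allow_other,func.getattr=newest,category.create=ff
```

category.create=ff ("first found") is what sends new writes to the first branch, i.e. local disk.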
  4. Config looks fine. How are you adding files to mount_mergerfs in the scenario above? If it's via Krusader, you have to restart the docker after mounting. I don't use it, so I don't know why, but other users have reported problems. Is everything OK if you add via Windows Explorer or Putty/SSH?
  5. The script doesn't stop before 750GB - it stops when Google says it can't upload any more, rather than continuing to run until the API ban is lifted. This allows you to upload a different way, i.e. service accounts.
  6. But it makes no difference if your upload doesn't get blocked - if you upload less than 750GB you're just lowering the cap.....
  7. I'm not sure what you are trying to achieve? You can only upload 750GB/day - if you hit this, that account/ID gets blocked for 24 hours. If you want to upload more, you have to use Service Accounts - there's no other way around it.
  8. This stops the script when Google says the account can't upload any more, i.e. at 750GB. Resets every day. If you want to upload more than 750GB/day you need to use Service Accounts.
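rclone has a flag for exactly this behaviour on the Drive backend - stop as soon as Google returns the daily-quota error instead of retrying until the ban lifts. A sketch (remote name and path are assumptions):

```shell
# --drive-stop-on-upload-limit aborts the transfer when Google reports
# the 750GB/day upload quota has been hit, so the script exits cleanly
# and you can switch to another upload route (e.g. a service account).
rclone move /mnt/user/local gdrive: --drive-stop-on-upload-limit
```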
  9. No, I just had that in there from when I used to have a parity disk, to turn on Turbo mode while diskmv was running. You don't need it.
  10. Have a look at the diskmv script - I use this to create my own little mover that I run on a schedule with User Scripts, to move the files that I don't need 'fast' access to:

    ########################################
    #######   Move Cache to Array   ########
    ########################################

    # check if mover running
    if [ -f /var/run/mover.pid ]; then
      if ps h `cat /var/run/mover.pid` | grep mover ; then
        echo "$(date "+%d.%m.%Y %T") INFO: mover already running. Not moving files."
      fi
    else
      # move files
      ReqSpace=200000000
      AvailSpace=$(df /mnt/cache | awk 'NR==2 { print $4 }')
      if [[ "$AvailSpace" -ge "$ReqSpace" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Space ok - exiting"
      else
        echo "$(date "+%d.%m.%Y %T") INFO: Cache space low. Moving Files."
        # /usr/local/sbin/mdcmd set md_write_method 1
        # echo "Turbo write mode now enabled"
        echo "$(date "+%d.%m.%Y %T") INFO: moving backup."
        bash /boot/config/plugins/user.scripts/scripts/unraid-diskmv/script -f -v "/mnt/user/backup" cache disk2
        echo "$(date "+%d.%m.%Y %T") INFO: moving local."
        bash /boot/config/plugins/user.scripts/scripts/unraid-diskmv/script -f -v "/mnt/user/local/tdrive_vfs/downloads/complete" cache disk2
        bash /boot/config/plugins/user.scripts/scripts/unraid-diskmv/script -f -v "/mnt/user/local/tdrive_vfs/downloads/seeds" cache disk2
        echo "$(date "+%d.%m.%Y %T") INFO: moving media."
        bash /boot/config/plugins/user.scripts/scripts/unraid-diskmv/script -f -v "/mnt/user/media/other_media/books" cache disk2
        bash /boot/config/plugins/user.scripts/scripts/unraid-diskmv/script -f -v "/mnt/user/media/other_media/calibre" cache disk2
        bash /boot/config/plugins/user.scripts/scripts/unraid-diskmv/script -f -v "/mnt/user/media/other_media/magazines" cache disk2
        bash /boot/config/plugins/user.scripts/scripts/unraid-diskmv/script -f -v "/mnt/user/media/other_media/photos" cache disk2
        bash /boot/config/plugins/user.scripts/scripts/unraid-diskmv/script -f -v "/mnt/user/media/other_media/videos" cache disk2
        echo "$(date "+%d.%m.%Y %T") INFO: moving mount_rclone cache."
        bash /boot/config/plugins/user.scripts/scripts/unraid-diskmv/script -f -v "/mnt/user/mount_rclone/cache" cache disk2
        echo "$(date "+%d.%m.%Y %T") INFO: moving software."
        bash /boot/config/plugins/user.scripts/scripts/unraid-diskmv/script -f -v "/mnt/user/software" cache disk2
        # /usr/local/sbin/mdcmd set md_write_method 0
        # echo "Turbo write mode now disabled"
      fi
    fi
  11. Glad you got it all up and running easily (with no help!). The cache filling up quickly is something I'm keeping an eye on on my server, by manually browsing the cache every now and then to see what's in there. My cache is getting populated mainly by Plex's overnight scheduled jobs, i.e. analysing files that haven't been accessed by users. I'm trying to track how long something I've actually watched stays in the cache - if it's getting flushed within a day (or even hours), I'm probably going to turn the cache off. E.g. I've just checked and some of the stuff I watched just last night isn't in the cache 17 hours later..... I'm hesitant to increase the cache size to improve the hit rate, as that's a lot of data to hold (I have 7 teamdrives, so I'm already caching over 2TB) just to get a slightly faster launch time and better seeking every now and then..... My server is doing a lot of scheduled work as I've decided to turn thumbnails back on, so maybe it'll settle down a bit in a month or two.
  12. Are you sure this is 'live'? I think the log is showing what happened when your upload script kicked in.
  13. As @BRiT pointed out, everything you need can be found at the start of this thread - although I think this solution is overkill if you are just backing up or syncing a few photos, as the solution in this thread is about optimising Plex playback from Google Drive. It can probably be re-used for OneDrive, but if you want to learn how to back up a photos folder using rclone, I'd read the rclone sync page https://rclone.org/commands/rclone_sync/ - I don't see why you'd even need a mount. If you need help, please create a new thread.
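For that simpler backup case, a one-liner along the lines of the rclone sync docs is all that's needed - the "onedrive:" remote name and paths here are assumptions:

```shell
# Mirror a local photos folder to a remote, no mount required.
# Run with --dry-run first: sync deletes destination files that no
# longer exist on the source side.
rclone sync /mnt/user/photos onedrive:backup/photos --dry-run -v
```

Drop --dry-run once the planned transfers look right; use rclone copy instead of sync if you never want deletions propagated.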
  14. rc2 is working well for me today. rc1 had a major buffering problem with Plex, and everything from beta25 up to rc1 gave me system lockups.
  15. What version of unRAID are you using? I've had problems with anything above beta25 - lockups, slow Plex - and I think the mergerFS problems I had might have been when using beta25+.
  16. yep - maybe a reboot was needed to get the latest version
  17. I don't think the issue is at Google's end, as the problem is with the MergerFS mount - the rclone mounts seem to be behaving
  18. Have you looked in the file manager to see if the folder exists? I had a similar error a few weeks ago - when I looked at the folder in putty, it was borked and had a ? next to it. I don't know why, but rebooting fixed it.
  19. I don't know if this is the right place, or if this should go in the Plex docker thread. But with RC1 I get terrible buffering in Plex - I rolled back (to beta25 - the higher betas cause full system lockups for me) to check twice, and the problem went away both times. Even with low-Mbps files the buffering made watching anything impossible. highlander-diagnostics-20201215-2118.zip
  20. Is anyone else getting a lot of buffering tonight? I'm wondering if it's linked to Google's outage yesterday?
  21. I've just gone from beta25 to RC1 - no lockups for 5.5 hours, which is promising.
  22. No idea what that is. Maybe try stopping the mount and then manually deleting the whole cache.
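A sketch of that manual cache reset - the mount point and cache location are assumptions, and the cache path only applies if the mount was started with --cache-dir pointing there:

```shell
# Unmount first so rclone isn't writing to the cache while you delete it
# (-z = lazy unmount, in case something still has files open).
fusermount -uz /mnt/user/mount_rclone/gdrive

# Then remove the VFS cache for that remote before remounting.
rm -rf /mnt/user/mount_rclone/cache/gdrive
```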
  23. You can achieve that via unRAID's share settings. Or via Plex: restrict the content ratings your kids can view (recommended), or automatically add a tag to files in certain locations using Tautulli and then restrict your kids to those tags - I do this as it allows me to manually add tags to content I want to expose to my kids, e.g. allowing them to watch Marvel movies but not other 12/12A movies that they aren't ready for. If that doesn't work for you, then do this: