DZMM

Members · Advanced Member
Content Count: 2603 · Days Won: 8 · Community Reputation: 236 (Very Good) · 7 Followers
Last won the day on June 13 2019
Birthday: December 30 · Gender: Male · Location: London
5318 profile views
  1. I want to give ruTorrent another go, but I find the config file hard to set up. Does anyone have a good unRAID one that covers watch folders, private torrents (no DHT etc) and so on that they can share, please, to give me a head start?
  2. Just in 2021? It's been driving me mad this year. I've just turned off "use hardlinks instead of copy", which should confirm my suspicion.
  3. Is anyone else having problems where hardlinked torrents are being uploaded and then deleted, rather than just a copy being uploaded? Over the last week my torrents keep disappearing and I think it's rclone/mergerfs uploading the real file.
  4. Is anyone else having a problem over the last couple of days where all torrents are getting deleted?
  5. Change this in the script manually. /mount_rclone is a live view of gdrive: if you add a file there, it is transferred to gdrive in real time. This isn't advised, because the file can be lost if the transfer fails, whereas rclone move will retry the upload. Also, I believe that with the new cache functionality the file gets cached and then uploaded in the background, which means you lose all control of the upload speed and of when the upload occurs. I can't think of a scenario where I would advise adding files directly to /mount_rclone - just use the upload script.
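As a sketch, the upload side is just an rclone move out of the local folder (the paths, remote name and flag values below are assumptions to illustrate the idea, not the exact script):

```shell
#!/bin/bash
# Move completed local files up to the cloud. Unlike writing into
# /mount_rclone directly, rclone move retries failed transfers, and
# --bwlimit keeps the upload speed under your control.
# "gdrive:" and the paths are assumptions - adjust to your setup.
rclone move /mnt/user/local/gdrive gdrive:media \
  --min-age 15m \
  --transfers 4 \
  --bwlimit 8M \
  --delete-empty-src-dirs
```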
  6. 1. Reduce the poll time so rclone picks up cloud changes faster. 2. If you want to upload to the cloud more frequently, run your cron job more frequently. If you want files to go in real time, copy to /mount_rclone, not /mount_mergerfs - not recommended, but it's there if you really want to do it.
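The poll time is set on the rclone mount itself; a minimal sketch (the remote name and mount point are assumptions - match them to your own script):

```shell
#!/bin/bash
# --poll-interval controls how quickly rclone notices cloud-side changes;
# --dir-cache-time can stay long because polling refreshes changed dirs.
# Remote "gdrive:" and the mount point below are assumptions.
rclone mount gdrive: /mnt/user/mount_rclone/gdrive \
  --poll-interval 15s \
  --dir-cache-time 720h \
  --daemon
```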
  7. That's the right behaviour - when you add files to /mount_mergerfs they are actually physically added to /local until your upload script moves them to gdrive. The /mount_mergerfs folder "masks" the physical location of the file (local or cloud), so that Plex etc just plays it and mergerfs/rclone manage ponying up the file, i.e. to Plex the file hasn't moved. When you add directly to gdrive (not recommended going forward, but I understand you are testing), the changes will be picked up by rclone at the next poll, displayed in /mount_rclone, and then picked up by mergerfs immediately.
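The masking works because mergerfs pools the local branch over the rclone mount; a rough sketch of such a mount (the branch paths are typical of these unRAID scripts, not necessarily your exact config):

```shell
#!/bin/bash
# First branch (/local) is where new writes physically land; the rclone
# mount underneath provides the cloud view. category.create=ff sends new
# files to the first branch, so Plex sees one unchanged path either way.
# All paths here are assumptions - match them to your own script.
mergerfs /mnt/user/local/gdrive:/mnt/user/mount_rclone/gdrive \
  /mnt/user/mount_mergerfs/gdrive \
  -o rw,use_ino,func.getattr=newest,category.create=ff \
  -o cache.files=partial,dropcacheonclose=true
```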
  8. Config looks fine. How are you adding files to mount_mergerfs in the scenario above? If it's via Krusader, you have to restart that docker after mounting. I don't use it, so I don't know why, but other users have reported problems. If you add via Windows Explorer or PuTTY/SSH, is everything ok?
  9. The script doesn't stop before 750GB - it stops when Google says it can't upload any more, rather than continuing to run until the API ban is lifted. This allows you to upload a different way, i.e. via service accounts.
  10. But it makes no difference - your upload doesn't get blocked if you upload less than 750GB, so you're just lowering the cap...
  11. I'm not sure what you are trying to achieve. You can only upload 750GB/day - if you hit this, that account/ID gets blocked for 24 hours. If you want to upload more, you have to use service accounts - there's no other way around it.
  12. This stops the script when Google says the account can't upload any more, i.e. at 750GB. It resets every day. If you want to upload more than 750GB/day, you need to use service accounts.
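In rclone this behaviour comes from the --drive-stop-on-upload-limit flag, which aborts the transfer when Google returns its daily-quota error instead of retrying until the ban lifts (the paths and remote name below are assumptions):

```shell
#!/bin/bash
# Stop immediately when Google reports the 750GB/day quota error,
# rather than retrying for the rest of the 24h ban.
# "gdrive:" and the source path are assumptions - adjust to your setup.
rclone move /mnt/user/local/gdrive gdrive:media \
  --drive-stop-on-upload-limit
```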
  13. No, I just had that in there from when I used to have a parity disk, to turn on Turbo mode while diskmv was running. You don't need it.
  14. Have a look at the diskmv script - I use it to create my own little mover that I run on a schedule with User Scripts, to move the files that I don't need 'fast' access to:

    ########################################
    ####### Move Cache to Array ##########
    ########################################
    # check if mover is already running
    if [ -f /var/run/mover.pid ]; then
      if ps h $(cat /var/run/mover.pid) | grep mover ; then
        echo "$(date "+%d.%m.%Y %T") INFO: mover already running. Not moving files."
      fi
    else
      # move files
      ReqSpace=200000000
      AvailSpace=$(df /mnt/cache | awk 'NR==2 { print $4 }')
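The post is cut off after the df line, but the free-space check it sets up can be sketched as a small bash helper (the function name, threshold and comparison direction here are my own illustration, not part of the original script):

```shell
#!/bin/bash
# free_kb DIR: print the available space (in KB) on the filesystem
# holding DIR, mirroring the df/awk line in the post.
free_kb() {
  df "$1" | awk 'NR==2 { print $4 }'
}

# Hypothetical use: trigger the mover only when the cache is running low.
# The /mnt/cache path and 200000000 KB threshold follow the post; the
# "less than" comparison is an assumption, since the original is cut off.
CACHE=/mnt/cache
if [ -d "$CACHE" ] && [ "$(free_kb "$CACHE")" -lt 200000000 ]; then
  echo "cache low - time to move files to the array"
fi
```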