DZMM

Members

  • Posts: 2800
  • Days Won: 9

DZMM last won the day on June 13 2019

DZMM had the most liked content!

8 Followers

About DZMM

  • Birthday: December 30
  • Gender: Male
  • Location: London

38792 profile views

DZMM's Achievements: Proficient (10/14)

Reputation: 287

  1. I'm on Enterprise Standard. I hope I don't have to move to Dropbox, as I think it will be quite painful to migrate with over a PB stored. I have a couple of people I could move with I think - @Kaizac just use your own encryption passwords if you're worried about security. I actually think if I do move, I'll try and encourage people to use the same collection, e.g. just have one pooled library, and if anyone wants to add anything, just use the web-based sonarr etc. It seems silly all of us maintaining separate libraries when we can have just one. Some of my friends have done that already in a different way - they've stopped their local Plex efforts and just use my Plex server.
  2. I've just read about 10 pages of posts to try and get up to speed on the "shutdown". Firstly, a big thanks to @Kaizac for patiently supporting everyone while I've been busy with work. I wrote the scripts as a challenge project over a few months as someone who isn't a coder - I literally had to Google "what command do I use to do xxx?" for each step - so it's great he's here to help with stuff outside the script, like issues regarding permissions etc, as I wouldn't be able to help! Back to business - can someone share what's happening with the "shutdown" please, as I'm out of the loop? I moved my cheaper Google account to a more expensive one about a year ago, and all was fine until my recent upload problems - but I think those came from my seedbox and were unrelated, as I've started uploading from my unraid server again and all looks ok. I've read mentions of emails and alerts in the Google dashboard - could someone share their email/screenshots please, and also say what Google account they have?
  3. Is anyone else getting slow upload speeds recently? My script has been performing perfectly for years, but for the last week or so my transfer speed has been awful:

2023/06/18 22:37:15 INFO :
Transferred:    28.538 MiB / 2.524 GiB, 1%, 3 B/s, ETA 22y21w1d
Checks:         2 / 3, 67%
Deleted:        1 (files), 0 (dirs)
Transferred:    0 / 1, 0%
Elapsed time:   9m1.3s

It's been so long since I looked at my script that I don't even know what to look at first. Have I missed some rclone / gdrive updates? Thanks
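A quick way to triage a sudden slowdown like the one above is to confirm the installed rclone version and re-run the transfer with verbose logging. This is only a hedged sketch - the remote name tdrive_vfs and local path are assumptions based on the scripts in this thread, and the flags are standard rclone flags, not anything specific this script uses:

```shell
# Sketch: triage helpers for a sudden upload slowdown.
# Remote name "tdrive_vfs" and the local path are assumed from this thread.
check_rclone_version() {
    rclone version --check   # compares the installed version with the latest release
}

retry_upload_verbose() {
    # -vv surfaces pacer/429 throttling messages that a quiet run hides
    rclone copy /mnt/user/local/tdrive_vfs tdrive_vfs: -vv --progress --transfers 4
}
```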
  4. That's one of the drawbacks of the cache - it caches all reads, e.g. even when Plex, Sonarr etc are doing scans. You could turn off any background scans that your apps are doing - I accept it as a necessary evil in return for the amount of storage I'm getting for £11/pm (I think that's what I pay).
  5. I hope so - I think I've been a victim of the bug where my mount would keep disconnecting - and sometimes too fast for my script to fix.
  6. Ahh, on a PC now so I can read better. You need to store your local files in /mnt/user/media/media_archive_gdrive/Tv for it to work, and then: LocalFilesShare="/mnt/user/media" i.e. the union then combines the two /media_archive_gdrive directories.
  7. My load and CPU rarely go over 50%. I think it's something like fs.inotify.max_user_watches being too low. I think I'm under 10TB per day, but it feels like an uncomfortably low ceiling that I'll break through at some point. I'm just going to ditch some of the teamdrives. I've just checked and I've only got an average of 40K items in each. The max is 400K and I think the drives slow down at 150K, so my recent balancing was a bit too excessive. The slowdown at 150K is probably only marginal anyway, and I'm a long way from that.
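If fs.inotify.max_user_watches is the suspect, it can be checked (and, as root, raised) like this. The 524288 figure is just a commonly used value, not an official recommendation:

```shell
# Read the current inotify watch limit; low values can starve apps that
# watch many directories (Plex, Sonarr, etc.).
current=$(cat /proc/sys/fs/inotify/max_user_watches)
echo "fs.inotify.max_user_watches = $current"
if [ "$current" -lt 524288 ]; then
    # as root: sysctl -w fs.inotify.max_user_watches=524288
    echo "limit looks low; consider raising it"
fi
```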
  8. Ok, ditching again - the performance is too slow. It's been running for an hour and it still won't play anything without buffering. Maybe union doesn't use VFS - dunno. I might have to go with dropbox as my problem is definitely from the number of tdrives I have - 10 for my media, 3 for my backup files, and a couple of others. Unless I can find out if it's e.g. an unraid number of connections issue.
  9. Until my recent issues - something in one of my 10 rclone mounts + mergerfs mount keeps dropping - I wasn't tempted to move to Dropbox, as my setup was fine. If union works, although the tdrives are there in the background, I'll have just one mount. How would you move all your files to Dropbox if you did move - rclone server side transfer?
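On the server-side question above: rclone's server-side copies only work within the same provider, so a Google-to-Dropbox move would have to stream through the machine running rclone. A hedged sketch, with hypothetical remote names:

```shell
# Hypothetical remote names; a gdrive -> dropbox copy cannot be server-side,
# so the data passes through the box running rclone.
migrate_to_dropbox() {
    rclone copy "tdrive_vfs:" "dropbox_vfs:" \
        --transfers 8 \
        --checkers 16 \
        --fast-list \
        --progress
}
```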
  10. Playback is a bit slower so far, but I'm doing a lot of scanning - Plex, sonarr, radarr etc to add back all my files. Will see what it's like when it's finished
  11. @Kaizac Ok, I've had another go at it this morning for a few reasons. Firstly, because my mount(s) (not sure which) keep disconnecting every couple of hours or so. The problem started when I added another 5 tdrives to balance out some tdrives that had over 200k files in. I think it's probably an unraid issue with the number of open connections or memory - no idea how to fix. The second reason is that I realised that having >10 tdrives mounted so that they could be combined via mergerfs was using up a lot of resources; with union I only need 1 mount, i.e. this must be saving a lot of resources. Anyway, here's a tidied up mount script I just pulled together - I'll add version numbers etc when I upload to github. You can see how much smaller it is - my old script that mounted >10 tdrives was over 3000 lines; this is now under 200!

rclone config:

[tdrive_union]
type = union
upstreams = /mnt/user/local/tdrive_vfs tdrive_vfs: tdrive1_vfs: tdrive2_vfs: tdrive3_vfs: tdrive4_vfs: tdrive5_vfs:
action_policy = all
create_policy = ff
search_policy = ff

New mount script:

#!/bin/bash

####### Check if script already running ##########
if [[ -f "/mnt/user/appdata/other/scripts/running/fast_check" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Fast check already running"
    exit
else
    mkdir -p /mnt/user/appdata/other/scripts/running
    touch /mnt/user/appdata/other/scripts/running/fast_check
fi

###############################
##### Replace Folders #######
###############################

mkdir -p /mnt/user/local/tdrive_vfs/{downloads/complete/youtube,downloads/complete/MakeMKV/}
mkdir -p /mnt/user/local/backup_vfs/duplicati

###############################
####### Ping Check ##########
###############################

if [[ -f "/mnt/user/appdata/other/scripts/running/connectivity" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Already passed connectivity test"
else
    # Ping Check
    echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if online"
    ping -q -c2 google.com > /dev/null # -q quiet, -c number of pings to perform
    if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
        echo "$(date "+%d.%m.%Y %T") PASSED: *** Internet online"
    else
        echo "$(date "+%d.%m.%Y %T") FAIL: *** No connectivity. Will try again on next run"
        rm /mnt/user/appdata/other/scripts/running/fast_check
        exit
    fi
fi

################################################################
###################### mount tdrive_union ######################
################################################################

# REQUIRED SETTINGS
RcloneRemoteName="tdrive_union"
RcloneMountLocation="/mnt/user/mount_mergerfs/tdrive_vfs"
RcloneCacheShare="/mnt/user/mount_rclone/cache"
RcloneCacheMaxSize="500G"
DockerStart="bazarr qbittorrentvpn readarr plex radarr_new radarr-uhd sonarr sonarr-uhd"

# OPTIONAL SETTINGS

# Add extra commands or filters
Command1=""
Command2="--log-file=/var/log/rclone"
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

CreateBindMount="N"
RCloneMountIP="192.168.1.77"
NetworkAdapter="eth0"
VirtualIPNumber="7"

####### END SETTINGS #######

####### Preparing mount location variables #######

####### create directories for rclone mount and mergerfs mounts #######
mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName # for script files
mkdir -p $RcloneCacheShare/$RcloneRemoteName # for cache files
mkdir -p $RcloneMountLocation

####### Check if script is already running #######
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting mount of remote ${RcloneRemoteName}"
echo "$(date "+%d.%m.%Y %T") INFO: Checking if this script is already running."
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting script as already running."
    rm /mnt/user/appdata/other/scripts/running/fast_check
    exit
else
    echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
    touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
fi

####### Create Rclone Mount #######

# Check If Rclone Mount Already Created
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Success: ${RcloneRemoteName} remote is already mounted."
    rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
else
    echo "$(date "+%d.%m.%Y %T") INFO: Mount not running. Will now mount ${RcloneRemoteName} remote."
    # Creating mountcheck file in case it doesn't already exist
    echo "$(date "+%d.%m.%Y %T") INFO: Recreating mountcheck file for ${RcloneRemoteName} remote."
    touch mountcheck
    rclone copy mountcheck $RcloneRemoteName: -vv --no-traverse
    # Check bind option
    if [[ $CreateBindMount == 'Y' ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
        ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
        if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
            echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
        else
            echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for remote ${RcloneRemoteName}"
            ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
        fi
        echo "$(date "+%d.%m.%Y %T") INFO: *** Created bind mount ${RCloneMountIP} for remote ${RcloneRemoteName}"
    else
        RCloneMountIP=""
        echo "$(date "+%d.%m.%Y %T") INFO: *** Creating mount for remote ${RcloneRemoteName}"
    fi
    # create rclone mount
    rclone mount \
    $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
    --allow-other \
    --umask 000 \
    --dir-cache-time 5000h \
    --attr-timeout 5000h \
    --log-level INFO \
    --poll-interval 10s \
    --cache-dir=$RcloneCacheShare/$RcloneRemoteName \
    --drive-pacer-min-sleep 10ms \
    --drive-pacer-burst 1000 \
    --vfs-cache-mode full \
    --vfs-cache-max-size $RcloneCacheMaxSize \
    --vfs-cache-max-age 24h \
    --vfs-read-ahead 1G \
    --bind=$RCloneMountIP \
    $RcloneRemoteName: $RcloneMountLocation &

    # Check if Mount Successful
    echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 10 seconds"
    # slight pause to give mount time to finalise
    sleep 10
    echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
    if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of ${RcloneRemoteName} mount."
        rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mount failed - please check for problems."
        docker stop $DockerStart
        fusermount -uz /mnt/user/mount_mergerfs/tdrive_vfs
        find /mnt/user/appdata/other/rclone/remotes -name mount_running* -delete
        rm /mnt/user/appdata/other/scripts/running/fast_check
        exit
    fi
fi

####### Starting Dockers That Need Mount To Work Properly #######
if [[ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ]] || [[ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Appdata Backup plugin running - not starting dockers."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
    docker start $DockerStart
fi

echo "$(date "+%d.%m.%Y %T") INFO: ${RcloneRemoteName} Script complete"
rm /mnt/user/appdata/other/scripts/running/fast_check
exit
  12. Experiment over - it got really slow when lots of scans / file access was going on
  13. @Kaizac @slimshizn and others, I need some help with testing. I think I've got rclone union working, i.e. I can remove mergerfs so there are fewer moving parts. Plus, I think that rclone union is faster for our scenario than mergerfs, but let me know what you think. The problem with including /mnt/user/local in the union was that rclone can't poll changes written directly to /mnt/user/local fast enough... so, just don't write to it, and write only to /mnt/user/mount_mergerfs/tdrive_vfs i.e. like we have already been doing. Here are my settings if anyone wants to try them out - basically disable mergerfs by setting MergerfsMountShare="ignore", and then paste in my quick rclone union section - I had to make some quick changes to the rclone mount section that I'll tidy up when I have time.

Here's my rclone config:

[local_tdrive_union]
type = smb
host = localhost
user = rclone
pass = xxxxxxxxxxxxxxxxxxx

[tdrive_union]
type = union
upstreams = local_tdrive_union:local/tdrive_vfs tdrive_vfs: tdrive1_vfs: tdrive2_vfs: tdrive3_vfs: tdrive4_vfs: tdrive5_vfs: tdrive6_vfs:
action_policy = all
create_policy = ff
search_policy = ff

For some strange reason I found the settings above were faster than the settings below, with writes to /mnt/user/local appearing instantly, whereas there was a pause with the settings below. I think when writes are "handled" fully by rclone it works better:

[tdrive_union]
type = union
upstreams = /mnt/user/local/tdrive_vfs tdrive_vfs: tdrive1_vfs: tdrive2_vfs: tdrive3_vfs: tdrive4_vfs: tdrive5_vfs: tdrive6_vfs:
action_policy = all
create_policy = ff
search_policy = ff

And my adjusted script:

####### Create Rclone Mount #######

# Check If Rclone Mount Already Created
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Success: ${RcloneRemoteName} remote is already mounted."
    # ADDED MOUNT RUNNING REMOVAL HERE
    rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
else
    echo "$(date "+%d.%m.%Y %T") INFO: Mount not running. Will now mount ${RcloneRemoteName} remote."
    # Creating mountcheck file in case it doesn't already exist
    echo "$(date "+%d.%m.%Y %T") INFO: Recreating mountcheck file for ${RcloneRemoteName} remote."
    touch mountcheck
    rclone copy mountcheck $RcloneRemoteName: -vv --no-traverse
    # Check bind option
    if [[ $CreateBindMount == 'Y' ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
        ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
        if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
            echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
        else
            echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for remote ${RcloneRemoteName}"
            ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
        fi
        echo "$(date "+%d.%m.%Y %T") INFO: *** Created bind mount ${RCloneMountIP} for remote ${RcloneRemoteName}"
    else
        RCloneMountIP=""
        echo "$(date "+%d.%m.%Y %T") INFO: *** Creating mount for remote ${RcloneRemoteName}"
    fi
    # create rclone mount
    rclone mount \
    $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
    --allow-other \
    --umask 000 \
    --dir-cache-time 5000h \
    --attr-timeout 5000h \
    --log-level INFO \
    --poll-interval 10s \
    --cache-dir=/mnt/user/mount_rclone/cache/$RcloneRemoteName \
    --drive-pacer-min-sleep 10ms \
    --drive-pacer-burst 1000 \
    --vfs-cache-mode full \
    --vfs-cache-max-size 200G \
    --vfs-cache-max-age 24h \
    --vfs-read-ahead 1G \
    --bind=$RCloneMountIP \
    $RcloneRemoteName: $RcloneMountLocation &

    # Check if Mount Successful
    echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 10 seconds"
    # slight pause to give mount time to finalise
    sleep 10
    echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
    if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of ${RcloneRemoteName} mount."
        rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mount failed - please check for problems."
        docker stop $DockerStart
        fusermount -uz /mnt/user/mount_mergerfs/tdrive_vfs
        # ADDED MOUNT RUNNING REMOVAL HERE AND I THINK FAST CHECK REMOVAL
        find /mnt/user/appdata/other/rclone/remotes -name mount_running* -delete
        rm /mnt/user/appdata/other/scripts/running/fast_check
        exit
    fi
fi

# create union mount
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting mount of tdrive_union"

# Check If Rclone Mount Already Created
if [[ -f "/mnt/user/mount_mergerfs/tdrive_vfs/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Success: tdrive_union is already mounted."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Starting mount of tdrive_union."
    mkdir -p /mnt/user/mount_mergerfs/tdrive_vfs
    rclone mount \
    --allow-other \
    --umask 000 \
    --dir-cache-time 5000h \
    --attr-timeout 5000h \
    --log-level INFO \
    --poll-interval 10s \
    --cache-dir=/mnt/user/mount_rclone/cache/tdrive_union \
    --drive-pacer-min-sleep 10ms \
    --drive-pacer-burst 1000 \
    --vfs-cache-mode full \
    --vfs-cache-max-size 100G \
    --vfs-cache-max-age 24h \
    --vfs-read-ahead 1G \
    tdrive_union: /mnt/user/mount_mergerfs/tdrive_vfs &

    # Check if Mount Successful
    echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 10 seconds"
    # slight pause to give mount time to finalise
    sleep 10
    echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
    if [[ -f "/mnt/user/mount_mergerfs/tdrive_vfs/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of tdrive_union mount."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: tdrive_union mount failed - please check for problems."
        docker stop $DockerStart
        fusermount -uz /mnt/user/mount_mergerfs/tdrive_vfs
        rm /mnt/user/appdata/other/scripts/running/fast_check
        exit
    fi
fi

# MERGERFS SECTION AFTER SHOULD BE OK WITHOUT ANY CHANGES IF 'IGNORE' ADDED EARLIER
  14. Just add the service accounts directly to your rclone config file via the plugin editing window. When done, your tdrive remote "pairs" should look like this:

[tdrive]
type = drive
scope = drive
service_account_file = /mnt/user/path_to_first_service_account/sa_tdrive_new.json
team_drive = xxxxxxxxxxxxxxxxxxxx
server_side_across_configs = true

[tdrive_vfs]
type = crypt
remote = tdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
password2 = xxxxxxxxxxxxxxxxxxxxxxxxx

The whole point of the service accounts is that the script automatically rotates the service account in use, so that you can upload 750GB on each run of the script - read the script notes and it will be clear. E.g. if you tell the script to rotate 10 SAs and your SA files start with sa_tdrive_new, then the script will change the SA used on each run (they must all be in the same location):

sa_tdrive_new1.json
sa_tdrive_new2.json
sa_tdrive_new3.json
sa_tdrive_new4.json
sa_tdrive_new5.json
sa_tdrive_new6.json
sa_tdrive_new7.json
sa_tdrive_new8.json
sa_tdrive_new9.json
sa_tdrive_new10.json

and on the 11th run, back to sa_tdrive_new1.json, sa_tdrive_new2.json etc. You need 14-16 SAs to safely max out a gig line.
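The rotation described above can be sketched roughly like this. The counter file location and variable names are made up for illustration - they are not the script's actual internals:

```shell
# Rough sketch of rotating through 10 service accounts, wrapping after the last.
# COUNTER_FILE is a hypothetical location, not what the real script uses.
SA_COUNT=10
COUNTER_FILE="/tmp/sa_rotation_counter"

[ -f "$COUNTER_FILE" ] || echo 0 > "$COUNTER_FILE"
last=$(cat "$COUNTER_FILE")
next=$(( last % SA_COUNT + 1 ))   # 1, 2, ..., 10, then back to 1
echo "$next" > "$COUNTER_FILE"

ServiceAccountFile="/mnt/user/path_to_first_service_account/sa_tdrive_new${next}.json"
echo "This run would upload with: $ServiceAccountFile"
```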