Everything posted by DZMM

  1. Thanks - another cut and paste error. I think you meant it was:

     mkdir -p /mnt/user/appdata/other/rclone/$RcloneRemoteName # for script files

     I've just updated to:

     mkdir -p /mnt/user/appdata/other/rclone/$RcloneUploadRemoteName # for script files

     I made this change so users can keep track of what's going on with each upload remote separately from the mount remote.
  2. Thanks - I spotted this in testing, but with all my cutting and pasting I somehow didn't post that change. I need to find a way to work on files locally and sync with github, as cutting and pasting is a recipe for disaster. Updated.
  3. I use MYDRIVE for my music for this reason - the 400k object limit. I've got 3 team drives: (i) plex, (ii) home photos/videos and (iii) backup. The backup teamdrive hit 400k recently, so I had to remove some older versions of files, but I'm surviving for now. I hope my plex tdrive never goes over 400k as I'm dreading having to split into multiple teamdrives - updating radarr etc will be painful.....although I've just realised I could use symlinks so files won't have 'moved' for the dockers, so it shouldn't be too hard.
  4. New upload script now supports service_accounts - https://github.com/BinsonBuzz/unraid_rclone_mount/blob/latest---mergerfs-support/rclone_upload I've ditched multiple remote support as it's silly now - not surprising as I came up with it.
  5. I just had a quick play and it's easy to do:

     JSONDIR="/your/dir/here"
     SA="sa_tdrive" # enter the bit before the number. I'm assuming the first account will be 1, not 01
     CounterNumber="1" # my way of doing the count is a bit numpty, but I can follow what's happening and keep track
     SA+="$CounterNumber.json"
     rclone config update $RcloneRemoteName service_account_file $JSONDIR/$SA # didn't know you could update the config like this

     Much cleaner. I will do this tomorrow when I'm awake so I don't make any mistakes.
  6. @Thel1988 and @watchmeexplode5 I found an error with the counter which messed it up on the first run. Can you change:

     else
         echo "$(date "+%d.%m.%Y %T") INFO: No counter file found for ${RcloneRemoteName}. Creating counter_1."
         touch /mnt/user/appdata/other/rclone/$RcloneRemoteName/counter_1
     fi

     to:

     else
         echo "$(date "+%d.%m.%Y %T") INFO: No counter file found for ${RcloneRemoteName}. Creating counter_1."
         touch /mnt/user/appdata/other/rclone/$RcloneRemoteName/counter_1
         CounterNumber="1"
     fi

     Or just use the latest script on github.

     @watchmeexplode5 I've tested this with the service accounts (copying the passwords worked perfectly!) using this config and it all worked for sa_tdrive1_vfs, sa_tdrive2_vfs etc:

     UseMultipleUploadRemotes="Y"
     RemoteNumber="16"
     RcloneUploadRemoteStart="sa_tdrive"
     RcloneUploadRemoteEnd="_vfs"
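     For anyone curious how the rotation works, the counter logic boils down to roughly this (a simplified sketch with paraphrased commands, assuming RcloneRemoteName and RemoteNumber are set as in the script config - check github for the real code):

     CounterDir="/mnt/user/appdata/other/rclone/$RcloneRemoteName"
     CounterFile=$(find "$CounterDir" -maxdepth 1 -name 'counter_*' | head -n 1)
     if [[ -n "$CounterFile" ]]; then
         CounterNumber="${CounterFile##*counter_}"  # extract the number from counter_#
     else
         touch "$CounterDir/counter_1"
         CounterNumber="1"
     fi
     # after the upload run, rotate to the next remote, wrapping back to 1 after RemoteNumber
     NextCounter=$(( CounterNumber % RemoteNumber + 1 ))
     mv "$CounterDir/counter_$CounterNumber" "$CounterDir/counter_$NextCounter"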
  7. @Spladge Excellent! @watchmeexplode5 I have a question about the service account script. It created hundreds of json files named gobbledygook.json. Did you just rename the ones you used to SERVICEACCOUNT01.json, SERVICEACCOUNT02.json ...SERVICEACCOUNTXX.json?
  8. @watchmeexplode5 wow, thanks for checking. It seems a weird way to do it - otherwise, as you confirmed, why would rclone config spit out different text when the passwords are the same? Weird! Edit: found the answer here - wish I'd known this before, it would have saved me a lot of time 😞 https://forum.rclone.org/t/crypt-remote-generating-different-hash-each-time-for-the-same-password/13154/11
  9. I'm 100% certain that you need to run rclone config to create each of the additional remotes, and during config enter the same real 'readable' PASSWORD1 and PASSWORD2; rclone then obscures them in the config - otherwise your password would be visible in the config file. If you want to test, upload a new file e.g. test.txt to the root of your crypt with the new gdrive_counter1_vfs and then see if it appears in your main mount. It should appear encrypted in the team drive folder, but it won't appear decrypted in your mount if the passwords don't match.
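     To run that test from the command line rather than via the mount, something like this should do it (remote names as used in this thread):

     echo "test" > test.txt
     rclone copy test.txt gdrive_counter1_vfs:        # upload via the new remote
     rclone ls gdrive_media_vfs: --include "test.txt" # only shows up decrypted here if the passwords match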
  10. I really don't know why, as it works for me (just tested again). Glad you figured it out though. @watchmeexplode5 I was curious, and even though I've already got 17 remotes created my way, this way was definitely quicker. It's nice to know I've got 500 accounts sitting there if I need them! How did you create 100 remotes so quickly though? I'm just checking you didn't cut and paste the same gobbledygook PASSWORD1 and PASSWORD2 into each of the 100 remotes? You still have to create each one individually, don't you, so that each remote has a unique gobbledygook string for the password? You still have to enter the actual passwords into rclone config for rclone to obfuscate, right?
  11. The new scripts are a lot easier to use - it's handy for me as I have 3-4 mounts going, so being able to just edit the config section is great. You've got me confused though about editing the cut command - that just extracts the number from the counter_# tracking file the script creates and should have nothing to do with the remote name. What's the name or format of the multiple upload remotes you're using? It shouldn't matter what they're called as long as you enter the right values for what comes before and after the counter number in the remote name:

      RcloneUploadRemoteStart="gdrive_counter" # Enter characters before counter in your remote names
      ## i.e. for gdrive_counter1_vfs, gdrive_counter2_vfs, ...gdrive_counter15_vfs, gdrive_counter16_vfs enter 'gdrive_counter'
      RcloneUploadRemoteEnd="_vfs" # Enter characters after counter
      ## i.e. for gdrive_counter1_vfs, gdrive_counter2_vfs, ...gdrive_counter15_vfs, gdrive_counter16_vfs enter '_vfs'

      so if your remotes are name_media1_vfs, name_media2_vfs etc:

      RcloneUploadRemoteStart="name_media"
      RcloneUploadRemoteEnd="_vfs"

      which creates RcloneUploadRemoteStart+Counter+RcloneUploadRemoteEnd as the remote to use on each run - see the sketch below.
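      In other words, on each run the script just glues the three pieces together - a one-line sketch:

      RcloneUploadRemoteName="$RcloneUploadRemoteStart$CounterNumber$RcloneUploadRemoteEnd"
      # e.g. "name_media" + "2" + "_vfs" = name_media2_vfs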
  12. @Xaero - you're a star, thanks.
  13. I forgot to add. Because you can create multiple teamdrives, this is a good way to give a mate an unlimited 'gdrive' account i.e. create another teamdrive and share it with them
  14. To move files between your gdrive and new teamdrive, the easiest way is to:

      1. Stop your current rclone mount + plex, radarr etc - any dockers that need to access the mount
      2. Log into gdrive with your 'master' account, i.e. one that can access both the gdrive folder and the teamdrive
      3. Click on 'crypt' in the gdrive folder and use the move command to move the folder to the teamdrive - the files will then get moved fairly quickly
      4. Adjust your rclone mount and upload scripts to use the new tdrive based remotes. It's best to wait until the move is finished before remounting, because rclone might not see server side changes made after mounting for a while
      5. Once the old gdrive folder is empty, start the mount with the new tdrive remote
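      If you'd rather do the move from the command line than the gdrive web UI, rclone can do it server side too - a sketch, assuming your new teamdrive remote is called tdrive (yours will differ):

      # server side move, so nothing passes through your own connection
      rclone move gdrive:crypt tdrive:crypt --drive-server-side-across-configs -P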
  15. Update: instead of getting creative with making google accounts to create the additional remotes, you can now create a service account using this guide and then use as many of the credentials as you need to create the additional remotes - rclone guide here.

      I've just made some more updates to the script that will be of interest to any users who have an upload speed > 70Mbps and want to upload more than the 750GB/day limit set by Google (per remote and per user/team drive), or who just want to upload without a --bwlimit and not get locked out for 24 hours.

      The new script now allows theoretical uploads of nearly 11TB/day with a Gbps connection. I say theoretical as with my Gbps connection I got max upload speeds to Google of around 700-800Mbps, giving a daily potential of around 8TB, but I had other things going on. I probably could have gone faster, as I did some tdrive->tdrive transfers last month and rclone was reporting 1.7Gbps.

      I hadn't shared how I did this before as my script was quite clunky, although a couple of us got it working; I've now managed to make it easier for anyone else to set up in the new scripts. I also didn't share because my old script only worked if you had less than 750GB/day in the upload queue - otherwise it would get stuck for up to 24 hours. Now, thanks to the --drive-stop-on-upload-limit flag added in rclone 1.51, the behaviour is much better: if the upload run exceeds 750GB/day it stops rather than hammering away at google for up to 24 hours. My script takes advantage of this and uses a different account for the next run, i.e. in 5 mins or whatever cron schedule you've set.

      Setup should now take a maximum of 30-60 mins (stage 3 below) if you need the full 14-15 accounts for a 1Gbps upload. You could just dabble with a few and then add more when needed, e.g. 1 extra account would allow 1.5TB/day, which should be enough for most users.

      How It Works

      1. Your remote needs to mount a team drive, NOT a normal gdrive folder. Create one if you don't have one.

      If you haven't done this yet, creating a team drive is easy, and moving the files from your gdrive-->tdrive will be mega fast as you can do it server side using server_side_across_configs = true in your remote settings and this new updated script - just follow these instructions to do it quickly.

      2. Share your new team drive with other UNIQUE google accounts.

      Google's 750GB/day quota is not only per remote, but also per user per team drive, i.e. if you have 2 people sharing a team drive, they can both upload 750GB each = 1.5TB/day, 4 users = 3TB/day and so on. So, to max out your upload you just need to decide how many users need access to the team drive, based on how fast your connection is, how much you might upload daily and how long your upload job is scheduled for. E.g. for a 1Gbps connection:

      - 24x7 upload: 14-15 users (1000/8 x 60 x 60 x 24 = 10.8TB / 0.75TB = 14.4) = 14-15 extra users and remotes
      - Uploading for 8 hours overnight: 5 users (3.6TB) = 5 extra users and remotes
      - Script running every 5 mins with no --bwlimit: as many accounts/remotes as needed to cover how much data you download

      UPDATE: I advise NOT using your existing mounted remote to upload this way, to avoid it getting locked out - use your existing remote just to mount.

      If you want to add 14-15 google accounts with access to the teamdrive you might have to get a bit creative with finding accounts to invite. I had another google apps domain that helped, where I gave those users access, plus I had a few gmail.com accounts I could use as well.

      3. Create the extra remotes and corresponding encrypted remotes.

      Because each of the accounts in #2 above has access to the new teamdrive, they can all create mounts to access the extra 750GB/day per account. To do this, create rclone remotes as usual - BUT for the client_id and client_secret for each remote, CREATE AND USE a DIFFERENT google account from #2. This is because each user can only upload 750GB, regardless of which remote did it. For each of your new remotes, use the SAME TEAMDRIVE and the same CRYPT LOCATION. i.e. if your main config looks like this:

      [gdrive]
      type = drive
      client_id = UNIQUE CLIENT_ID
      client_secret = MATCHING_UNIQUE_SECRET
      scope = drive
      team_drive = SAME TEAM DRIVE
      token = {"access_token":"Google_Generated"}
      server_side_across_configs = true

      [gdrive_media_vfs]
      type = crypt
      remote = gdrive:crypt
      filename_encryption = standard
      directory_name_encryption = true
      password = PASSWORD1
      password2 = PASSWORD2

      then your first new remote for fast uploading should look like this:

      [gdrive_counter1]
      type = drive
      client_id = UNIQUE CLIENT_ID
      client_secret = MATCHING_UNIQUE_SECRET
      scope = drive
      team_drive = SAME TEAM DRIVE
      token = {"access_token":"Google_Generated"}

      [gdrive_counter1_vfs]
      type = crypt
      remote = gdrive_counter1:crypt
      filename_encryption = standard
      directory_name_encryption = true
      password = PASSWORD1
      password2 = PASSWORD2

      For gdrive_counter1:
      - Recommended (so you don't lose track!): give each unencrypted remote the same name before the number (gdrive_counter)
      - Use a unique client_id and client_secret
      - Make sure each remote is using the same TEAM DRIVE
      - When creating the token using rclone config, remember to use the google account that matches the client_id and client_secret

      For gdrive_counter1_vfs:
      - IMPORTANT: each encrypted remote HAS TO HAVE the same characters before the number (gdrive_counter) OR THE SCRIPT WON'T WORK
      - IMPORTANT: each encrypted remote HAS TO HAVE the same characters after the number (_vfs) OR THE SCRIPT WON'T WORK
      - IMPORTANT: the remote needs to end in :crypt to ensure files go in the same place
      - IMPORTANT: PASSWORD1 and PASSWORD2 (i.e. what's entered in rclone config, not the scrambled versions) need to be the same as used for gdrive_media_vfs

      That's it. Once finished, your rclone config should look something like this:

      [gdrive]
      type = drive
      client_id = UNIQUE CLIENT_ID
      client_secret = MATCHING_UNIQUE_SECRET
      scope = drive
      team_drive = SAME TEAM DRIVE
      token = {"access_token":"Google_Generated"}
      server_side_across_configs = true

      [gdrive_media_vfs]
      type = crypt
      remote = gdrive:crypt
      filename_encryption = standard
      directory_name_encryption = true
      password = PASSWORD1
      password2 = PASSWORD2

      [gdrive_counter1]
      type = drive
      client_id = UNIQUE CLIENT_ID
      client_secret = MATCHING_UNIQUE_SECRET
      scope = drive
      team_drive = SAME TEAM DRIVE
      token = {"access_token":"Google_Generated"}

      [gdrive_counter1_vfs]
      type = crypt
      remote = gdrive_counter1:crypt
      filename_encryption = standard
      directory_name_encryption = true
      password = PASSWORD1
      password2 = PASSWORD2

      [gdrive_counter2]
      type = drive
      client_id = UNIQUE CLIENT_ID
      client_secret = MATCHING_UNIQUE_SECRET
      scope = drive
      team_drive = SAME TEAM DRIVE
      token = {"access_token":"Google_Generated"}

      [gdrive_counter2_vfs]
      type = crypt
      remote = gdrive_counter2:crypt
      filename_encryption = standard
      directory_name_encryption = true
      password = PASSWORD1
      password2 = PASSWORD2

      .
      .
      .

      [gdrive_counter15]
      type = drive
      client_id = UNIQUE CLIENT_ID
      client_secret = MATCHING_UNIQUE_SECRET
      scope = drive
      team_drive = SAME TEAM DRIVE
      token = {"access_token":"Google_Generated"}

      [gdrive_counter15_vfs]
      type = crypt
      remote = gdrive_counter15:crypt
      filename_encryption = standard
      directory_name_encryption = true
      password = PASSWORD1
      password2 = PASSWORD2

      4. Enter Values Into Script

      Once complete, just fill in this section in the new upload script:

      # Use multiple upload remotes for multiple quotas
      UseMultipleUploadRemotes="Y" # Y/N. Choose whether to rotate multiple upload remotes for increased quota (750GB x number of remotes)
      RemoteNumber="15" # Integer number of remotes to use.
      RcloneUploadRemoteStart="gdrive_counter" # Enter characters before counter in your remote names
      ## i.e. for gdrive_counter1_vfs, gdrive_counter2_vfs, ...gdrive_counter15_vfs, gdrive_counter16_vfs enter 'gdrive_counter'
      RcloneUploadRemoteEnd="_vfs" # Enter characters after counter
      ## i.e. for gdrive_counter1_vfs, gdrive_counter2_vfs, ...gdrive_counter15_vfs, gdrive_counter16_vfs enter '_vfs'
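      A quick sanity check once the remotes are set up - every _vfs remote should decrypt to the same listing, so something like this (if one comes back empty or as gibberish, its passwords or :crypt path don't match):

      rclone lsd gdrive_media_vfs:
      rclone lsd gdrive_counter1_vfs:
      rclone lsd gdrive_counter15_vfs:
      # all should show identical top-level folders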
  16. I'm trying to do:

      FoldersToCreate="folder1/subfolder,folder2,folder3,folder4"
      mkdir -p /mnt/user/public/test/{$FoldersToCreate}

      which ends up making these folders:

      - {folder1
      - subfolder,folder2,folder3,folder4}

      I've read up on brace expansion, but I can't find a solution to my problem. Help please. Thanks in advance.
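      Edit: for anyone who hits the same thing - the problem is that brace expansion happens before variable expansion in bash, so the braces never see the commas inside $FoldersToCreate. A workaround sketch (splitting the string into an array instead):

      FoldersToCreate="folder1/subfolder,folder2,folder3,folder4"
      IFS=',' read -ra Folders <<< "$FoldersToCreate"  # split on commas
      for Folder in "${Folders[@]}"; do
          mkdir -p "/mnt/user/public/test/$Folder"
      done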
  17. Sorry about that - didn't test that option. Change:

      $RcloneMountLocation &

      to:

      $RcloneRemoteName: $RcloneMountLocation &

      I've fixed it on github. @Porkie apologies as well.
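      For context, that line is the tail end of the rclone mount command, which looks roughly like this (flags illustrative, not the full set from the script):

      rclone mount \
          --allow-other \
          --dir-cache-time 720h \
          --vfs-read-chunk-size 128M \
          $RcloneRemoteName: $RcloneMountLocation &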
  18. Update: Thanks to inspiration from @senpaibox I've made a major revision this evening to the scripts on github:

      - They are now much easier to set up through the use of configurable variables
      - Much better messaging
      - The upload script has better --bwlimit options allowing daily schedules, and faster or slower uploads without worrying about daily quotas (rclone 1.51 upwards needed) - e.g. you can now do a 30MB/s upload job overnight for 7 hours to use up your quota, rather than a slow 10MB/s trickle over the day, or schedule a slow trickle over the day and a max speed upload overnight (see the example below)
      - Option to bind individual rclone mounts and uploads to different IPs. I use this to put my mount traffic in a high-priority queue on pfsense, and my uploads in a low-priority one

      If you haven't switched from unionfs to mergerfs I really recommend you do so now - the layout of the new scripts should make it easier. These are now the scripts I'm using myself (except for my upload script, which is modified to rotate remotes to upload more than 750GB/day), so it'll be easier for me to maintain. I've also updated the first two posts in this thread to align with the new scripts. Any teething problems, please let me know.
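      As an example of the --bwlimit scheduling mentioned above (times, speeds and paths are illustrative, not lifted from the script):

      # 30MB/s overnight from 01:00, then a 1MB/s trickle from 08:00; stops cleanly
      # if the 750GB/day quota is hit (--drive-stop-on-upload-limit needs rclone 1.51+)
      rclone move /mnt/user/local/gdrive_media_vfs gdrive_media_vfs: \
          --bwlimit "01:00,30M 08:00,1M" \
          --drive-stop-on-upload-limit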
  19. Thanks for this. I started this thread not just to share, but also to find ways to improve my own scripts. I'm going to incorporate how you've created the variables (I'll rename some, as I don't think you've used the best names) and a few other things today. I'm hoping you'll then be able to submit pull requests for any improvements on your end in the future.
  20. Sharing a good script I found that lets you control, per tracker, when torrents are deleted automatically by qbittorrent. You just need to create a .qman file for each tracker, e.g.:

      {
          "category": "imported",
          "tracker": "tracker.torrentseeds.org",
          "public": {
              "min_seed_ratio": 1,
              "max_seed_ratio": 20,
              "min_seed_time": 1,
              "max_seed_time": 336,
              "required_seeders": 1
          },
          "private": {
              "min_seed_ratio": 1,
              "max_seed_ratio": 20,
              "min_seed_time": 1,
              "max_seed_time": 336,
              "required_seeders": 1
          },
          "delete_files": true
      }

      https://github.com/Hundter/qBittorrent-Ratio-Manager
  21. I found this great guide for self-hosting wordpress in case anyone else wants to go down this road: https://technicalramblings.com/blog/how-to-set-up-a-wordpress-site-with-letsencrypt-and-mariadb-on-unraid/
  22. I want to create a free website to host a few pages of content/posts that I tend to email out to the various users of my server, to make them easier to access. I'd rather not self-host unless it's easy to do, as it's only going to be a few pages and doesn't need to be snazzy - although I'd prefer the url to be something like mydomain.com/blog, so I might have no choice. I'd like to password protect the pages as I want to control access. I used to run a few serious wordpress blogs about 10 years ago, so I'm fairly familiar with WP, but I'd like to try something else. What would people recommend using now? The key requirements are that it's free and password protected, with an easy content editor and free themes available. Thanks in advance for any tips.
  23. Thanks - I wasn't aware of that command. Saved to useful folder