DZMM

Members
  • Content Count

    1966
  • Joined

  • Last visited

  • Days Won

    8

DZMM last won the day on June 13

DZMM had the most liked content!

Community Reputation

135 Very Good

3 Followers

About DZMM

  • Rank
    Advanced Member
  • Birthday December 30

Converted

  • Gender
    Male
  • Location
    London

Recent Profile Visitors

2392 profile views
  1. It took me a few hours of trial and error to sort the counters. Once you've finished moving your existing content you should be able to upload from just one folder like me, if your upload and download speeds are the same (i.e. content is shifted just as fast as it's added) - you just need enough accounts to ensure no individual account uploads more than 750GB/day and messes up the script for up to 24 hours until the ban lifts. I cap my upload scripts at 70MB/s, so if I uploaded 24/7 I'd do about 6TB/day, so I'd need at a minimum 6000/750 = 8 users... but I use about double that just in case something goes wrong. A rough sanity check of that maths is sketched below.
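     A quick back-of-the-envelope check of those figures (plain shell arithmetic; numbers are the ballpark ones from this post, not exact):

     echo $(( 70 * 86400 / 1000 ))   # ~6048 GB uploaded per day at a flat 70MB/s
     echo $(( 6000 / 750 ))          # = 8 accounts needed at the 750GB/day cap - run roughly double for headroom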
  2. Yes, that'd be the problem. Within gdrive you need to share the teamdrive with each user's email, and then when you create the remote, use that account to create the token. A few tips:
     1. Don't use the remote/user account you mount for uploading, so that your mount always works for playback.
     2. If you're bulk uploading and you're confident there is no more than 750GB in each of your sub upload folders, I would run your move commands sequentially ONCE A DAY rather than all at the same time, with the bwlimit set at say 80% of your max. Running multiple rclone move commands at the same time uses more memory; you'll still get the same max transfer per day, with less RAM usage - see the sketch below.
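     Roughly what the sequential run could look like (a sketch only - the tdrive_01/02 folder and remote names are the ones used later in this thread, and the 8M bwlimit is just a stand-in for 80% of whatever your line can do):

     rclone move /mnt/user/rclone_upload/tdrive_01_vfs/ tdrive_01_vfs: --bwlimit 8M --min-age 10m -v
     rclone move /mnt/user/rclone_upload/tdrive_02_vfs/ tdrive_02_vfs: --bwlimit 8M --min-age 10m -v
     rclone move /mnt/user/rclone_upload/tdrive_03_vfs/ tdrive_03_vfs: --bwlimit 8M --min-age 10m -v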
  3. @neow if all you want to do is back up, read this post
  4. Hmm, it all looks ok. And for each upload command you've got:
     rclone move /mnt/user/rclone_upload/tdrive_01_vfs/ tdrive_01_vfs: .........
     rclone move /mnt/user/rclone_upload/tdrive_02_vfs/ tdrive_02_vfs: .........
     etc etc
  5. Here's what my config looks like:

     [user1]
     type = drive
     client_id = id1
     client_secret = secret1
     scope = drive
     team_drive = SAME_TDRIVE
     token = token1

     [user1_vfs]
     type = crypt
     remote = user1:crypt
     filename_encryption = standard
     directory_name_encryption = true
     password = pass1
     password2 = pass2

     [user2]
     type = drive
     client_id = id2
     client_secret = secret2
     scope = drive
     team_drive = SAME_TDRIVE
     token = token2

     [user2_vfs]
     type = crypt
     remote = user2:crypt
     filename_encryption = standard
     directory_name_encryption = true
     password = pass1
     password2 = pass2

     and my move commands:

     rclone move /mnt/disks/ud_mx500/rclone_upload/google_vfs user1_vfs: --user-agent="unRAID" -vv --buffer-size 512M --drive-chunk-size 512M --checkers 8 --transfers 6 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 70M --tpslimit 8 --min-age 10m --fast-list
     rclone move /mnt/disks/ud_mx500/rclone_upload/google_vfs user2_vfs: --user-agent="unRAID" -vv --buffer-size 512M --drive-chunk-size 512M --checkers 8 --transfers 6 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 70M --tpslimit 8 --min-age 10m --fast-list
  6. Help please - my cert won't renew. It's been so long since I've had problems with LE I can't work out how to fix it:

     Brought to you by linuxserver.io
     We gratefully accept donations at:
     https://www.linuxserver.io/donate/
     -------------------------------------
     GID/UID
     -------------------------------------
     User uid: 99
     User gid: 100
     -------------------------------------
     [cont-init.d] 10-adduser: exited 0.
     [cont-init.d] 20-config: executing...
     [cont-init.d] 20-config: exited 0.
     [cont-init.d] 30-keygen: executing...
     using keys found in /config/keys
     [cont-init.d] 30-keygen: exited 0.
     [cont-init.d] 50-config: executing...
     Variables set:
     PUID=99
     PGID=100
     TZ=Europe/London
     URL=MyDOMAIN.com
     SUBDOMAINS=www,unifi,ha,nextcloud,office,home,heimdall
     EXTRA_DOMAINS=
     ONLY_SUBDOMAINS=false
     DHLEVEL=2048
     VALIDATION=http
     DNSPLUGIN=
     EMAIL=me@email.com
     STAGING=

     2048 bit DH parameters present
     SUBDOMAINS entered, processing
     Sub-domains processed are: -d www.MyDOMAIN.com -d unifi.MyDOMAIN.com -d ha.MyDOMAIN.com -d nextcloud.MyDOMAIN.com -d office.MyDOMAIN.com -d home.MyDOMAIN.com -d heimdall.MyDOMAIN.com
     E-mail address entered: me@email.com
     http validation is selected
     Generating new certificate
     Saving debug log to /var/log/letsencrypt/letsencrypt.log
     Plugins selected: Authenticator standalone, Installer None
     Obtaining a new certificate
     Performing the following challenges:
     http-01 challenge for MyDOMAIN.com
     Waiting for verification...
     Challenge failed for domain MyDOMAIN.com
     http-01 challenge for MyDOMAIN.com
     Cleaning up challenges
     Some challenges have failed.
     IMPORTANT NOTES:
      - The following errors were reported by the server:
        Domain: MyDOMAIN.com
        Type: connection
        Detail: Fetching http://MyDOMAIN.com/.well-known/acme-challenge/r_lFlfJYMg2gmnwGbgo-4gqRceo17BLkfJUj8CXnK2A: Timeout during connect (likely firewall problem)
        To fix these errors, please make sure that your domain name was entered correctly and the DNS A/AAAA record(s) for that domain contain(s) the right IP address. Additionally, please check that your computer has a publicly routable IP address and that no firewalls are preventing the server from communicating with the client. If you're using the webroot plugin, you should also verify that you are serving files from the webroot path you provided.
     ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container
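     (A hypothetical quick check for this kind of "Timeout during connect" failure: from outside your own network, see whether port 80 reaches the container at all - if this also times out, the problem is the router port forward or the ISP blocking inbound 80 rather than the container itself:)

     curl -v http://MyDOMAIN.com/.well-known/acme-challenge/test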
  7. Glad you got it almost right first time. Re your teamdrive setup, I'm assuming:
     1. all files are being loaded to the same teamdrive
     2. the teamdrive is then shared with x different users with unique email addresses
     3. for each user you've created a unique rclone remote, with unique client IDs each time (all loading to the same teamdrive)
     4. you've then created an encrypted version of each unique remote
     5. you've then created unique rclone move commands for each user, i.e.
     rclone move /mnt/user/rclone_upload/google_vfs USER1_remote: ................
     rclone move /mnt/user/rclone_upload/google_vfs USER2_remote: ................
     rclone move /mnt/user/rclone_upload/google_vfs USER3_remote: ................
  8. If you put files in mount_unionfs (which you should!), behind the scenes they are really added to rclone_upload and, once uploaded, removed from rclone_upload - all while staying available and never appearing to 'move' in mount_unionfs (a quick illustration of that write-through behaviour is sketched below). My setup isn't designed for copies or syncs - it's for moving files to gdrive for seamless playback and library management, i.e. plex. If copies/syncs are what you want, you'll need to start a new thread to get help doing that - or read the rclone help pages, which are good. Using my setup for that is overkill, as you would only need to mount if you lose files.
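     A quick illustration of the write-through behaviour (the google_vfs paths are the ones used elsewhere in this thread; the source path and filename are made up):

     cp /mnt/user/downloads/some_movie.mkv /mnt/user/mount_unionfs/google_vfs/movies/
     ls /mnt/user/rclone_upload/google_vfs/movies/    # the file physically lands here until the upload script moves it to gdrive
     ls /mnt/user/mount_unionfs/google_vfs/movies/    # but it stays visible here the whole time - before, during and after the upload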
  9. @neow not quite..... think of the folders in this order:
     1. mount_rclone is the files that are on gdrive, accessed via this mount
     2. rclone_upload is files on your local server waiting to be uploaded to gdrive. Once uploaded they no longer exist locally - they are now on gdrive and accessible via mount_rclone
     3. mount_unionfs is a merged or combined folder that shows the files in both locations - mount_rclone for gdrive and rclone_upload for local files
     Think of 3. this way - you have already uploaded back to the future I, and back to the future II is in the queue to be uploaded:
     - in mount_rclone you will only see ..../movies/back_to_future_I
     - in rclone_upload you will see ..../movies/back_to_future_II
     - in mount_unionfs you will see ..../movies/back_to_future_I AND ..../movies/back_to_future_II
     That is why you should map all dockers to mount_unionfs: it lets them see files that are on gdrive plus those that haven't been uploaded yet, AND it hides when files move from rclone_upload to mount_rclone - i.e. to mount_unionfs it's a non-event. (A sketch of the merged mount is below.)
     If you want to free up space on your server, move files to rclone_upload - they will still be playable until they move.
     If you want to backup files, then tweak the upload script to use "rclone sync" rather than "rclone move" and add the folders you want to backup/sync
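     Roughly what the merged view looks like under the hood (a sketch only, assuming unionfs-fuse and the folder names above - check the actual mount script for the exact options used):

     unionfs -o cow,allow_other \
       /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO \
       /mnt/user/mount_unionfs/google_vfs
     # writes land in the RW branch (rclone_upload); reads fall through to the RO branch (mount_rclone)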
  10. 1. As long as you copy your scripts, rclone config and the check files, you should be fine.
     2 & 3. You'd have to create another mount for your gdrive remote if you want to upload files unencrypted. Anything you add to rclone_mount (not advised), unionfs_mount (ok), or rclone_upload (ok) will get encrypted.
     4. You would add to plex the folders you created in your new gdrive remote mount.
     5. Yes - rclone works on W10. Be careful with making RW changes from 2 machines to the same mount files/folders - read the rclone forums if you need to do this.
     6. My first couple of posts explain why the cleanup script is 'needed'.
     Re upload: yes - I would have the upload script running on a 30m/1hr schedule to make your life easier (a sketch of a schedule is below). All your 'media' dockers - plex, radarr, sonarr etc - should be mapped to the unionfs folder.
     1. I think yes - as above, this is the file 'view' that your dockers etc need to use.
     2. Had to google passerelle, but yes - files added to mount_unionfs that don't exist on gdrive (mount_rclone) are automatically added to rclone_upload for upload.
     3. mount_rclone is a decrypted view of the files ON gdrive. mount_unionfs is a decrypted view of what's ON gdrive (mount_rclone) PLUS what hasn't been uploaded yet (rclone_upload) but still needs to be accessible to plex etc until it 'moves' to gdrive. Never point plex at mount_rclone - always use mount_unionfs.
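     One way to get the 30m/1hr schedule if you're not using the User Scripts plugin's built-in scheduler (a hypothetical cron entry - the script path is made up, point it at wherever your upload script lives):

     # run the upload script at the top of every hour
     0 * * * * /boot/scripts/rclone_upload.sh >> /var/log/rclone_upload.log 2>&1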
  11. linuxserver - never had any docker problems that I can remember
  12. Your API woes might have coincided with a couple of days when google, hopefully in 'error', were blocking rclone's user agent - it was fixed and hasn't happened since. Hopefully this won't happen again.... I get out of memory errors occasionally - I think the likely suspect is plex doing something weird overnight as a scheduled job. Weird for me as well, as I'm up to 96GB now and I usually have around 30-40GB free. Similarly, I haven't got around to checking yet.
  13. That's correct - read the earlier posts. If you've not updated an existing file or deleted a file, there's nothing to clean up!
  14. The info message I added is a bit misleading, as the 'error' could be that the mount failed, not that rclone isn't installed. You successfully mounted previously, so I think your earlier attempts haven't cleared properly or something - try manually removing the files in user/appdata/other/rclone, rebooting your server and seeing what happens.