Posts posted by Kaizac

  1. 2 minutes ago, DZMM said:

    I moved about 70TB in 24 hours i.e. more than the 10TB outgoing limit, so internal google transfers don't count towards the outbound transfer quota.

     

    I'm not sure how you'd automate keeping teamdrives in sync though.  I think something went wrong with your setup, e.g. something downloading to analyse in the background, or something trying to connect too fast.  Have you checked the API console to see if there are any clues?

    I know the internal transfers don't count towards quota, but I wonder if it's the same for copying. Tomorrow I will test it with rclone sync and see how fast it is (rough sketch of the test below).

     

    I think the ban I got today was just an accident because of all the remounting and such. But Bazarr still generates so many API hits (about 200k per day) that it's a risk. Animosity over on the rclone forums also looked at my logs before, and all we can see is that files just get analyzed and opened and closed many times, and it's known that Google doesn't like that behaviour. Plex carries the same risk, since Animosity saw the same thing in his own logs.
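
    Roughly what I plan to run: syncing the encrypted folders directly between the base remotes so nothing gets decrypted and re-encrypted on the way. The remote and folder names here are just placeholders for my own config:

    # dry run first to see what rclone would copy
    rclone sync gdrive:crypt tdrive:crypt --dry-run -v
    # then the real run; watch the speed and the API console to see whether it counts against the quota
    rclone sync gdrive:crypt tdrive:crypt -v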

  2. Well, you can be (API) banned on the upload and the download side. Once you get banned on one side, the other side still works, which is nice. But I think my bans happened because I'm running both Emby and Bazarr, which are both analyzing files to get subtitles. So I disabled subtitles in Emby for now.

     

    Another explanation could be that my several reboots and remounts, which all used --rc and cached the directory, caused problems because Google was seeing too many directory listings.

     

    What happens in my case now is that I just can't play video files; it throws an error that it can't open the file. At around 00:00 PST I think it's reset and everything works again.

     

    For now I will keep my API for playback and my API for Bazarr separated, since it does distribute the API hits. But I'm afraid the bans happen based on the files themselves and not so much on users/APIs (the quotas are very hard to hit anyway).

  3. 1 hour ago, DZMM said:

    How many mounts do you have?  I just have one rclone team drive mount and 1 for unionfs - you only need to mount the team drive once. 

     

    I ditched rc as it only helps populate the dir-cache a little bit faster after mounting, but it was causing problems with remounts that I couldn't be bothered to work out as the benefits are small.  Yes, a 5-10s sleep after each mount helps avoid any problems.

     

    I'm doing ok with my 4 uploads to one team drive for about 40 hours so far - I've done way more than 750GB to the one TD, and more than 750GB/account, so I think your problems are probably because of your bad mounts.

    I have 3 mounts: Gdrive personal drive, Gdrive Team Drive, and a Gdrive Team Drive for Bazarr. I've just tested it with another of my APIs and it also has problems playing media from Gdrive. Some media plays (but that's more recent media, not sure why that matters). So it seems that bans happen on the Team Drive and not on the API/user level.

     

    That's really unfortunate, because Bazarr is a great piece of software, but getting banned at random is really bad.

  4. @DZMM just noticed my other mounts were not working while the main one (which has --rc) does. When I run the mount command on its own it works. Was that also what you were experiencing and why you removed --rc?

     

    EDIT: it seems it was glitching out and not all mount commands came through. So I put a sleep between every mount, rebooted, and it works now (mount section sketched below). I am always getting a docker daemon error on startup though. I've already put in a sleep of 60 seconds but that doesn't seem to solve it. You don't have this issue?

     

    Strangely enough, it seems I just got API banned and both APIs are not working for streaming. So maybe the ban happens at the TD level and not at the API/user level.
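
    For reference, the mount part of my script now looks roughly like this - the paths and remote names are placeholders for my own setup:

    # mount the remotes one at a time, with a pause in between so they don't trip over each other
    rclone mount --allow-other gdrive_vfs: /mnt/user/mount_rclone/gdrive &
    sleep 10
    rclone mount --allow-other tdrive_vfs: /mnt/user/mount_rclone/tdrive &
    sleep 10
    rclone mount --allow-other tdrive_bazarr_vfs: /mnt/user/mount_rclone/tdrive_bazarr &
    sleep 10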

  5. 32 minutes ago, DZMM said:

    @Kaizac Thanks - I've moved all my files to one team drive with 4 uploads running for over a day now with no problems - each upload using a unique google account and different client credentials.  My backlog will be gone today, which is great.

     

    For anyone else wanting to try this, this is how my rclone config looks.  Because the TEAM_DRIVE_ID, PASSWORD_1 and PASSWORD_2 are the same, all uploads go to the same team drive, and then I mount one of the vfs remotes, team_drive1_vfs, and add it to my unionfs mount:

     

     

    Glad you got it working as well! Really nice that we can clear out your backlog this easily. The same technique can be used for your backups: just create a new Tdrive for backups and use the same APIs as for your other Teamdrive, or create new ones.

     

    Regarding removing --rc: what exactly are the problems you've run into that made you remove it? For me it always seems to succeed with a timeout of 5 minutes.
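
    For anyone following along, the idea from the quote is that each upload remote gets its own client_id/client_secret but shares the same team_drive ID and the same crypt passwords, so they all end up writing to the one Tdrive. Something along these lines - the IDs, tokens and names here are placeholders, not anyone's actual config:

    [tdrive_upload1]
    type = drive
    client_id = unique client ID 1
    client_secret = unique client secret 1
    scope = drive
    team_drive = TEAM_DRIVE_ID
    token = {token for google account 1}

    [tdrive_upload1_vfs]
    type = crypt
    remote = tdrive_upload1:crypt
    filename_encryption = standard
    directory_name_encryption = true
    password = PASSWORD_1
    password2 = PASSWORD_2

    The same pattern is then repeated for tdrive_upload2, tdrive_upload3 and so on, each with its own client credentials and google account, and only one of the crypt remotes needs to be mounted for unionfs.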

  6. 14 minutes ago, DZMM said:

    I just use the free version to back up to my local folder, which then gets encrypted when stored on Google.  Until last week I was using Crashplan for a few weeks, but it only managed to upload about 400GB in about 4 months... Duplicati I used for about 2 days and it started throwing up errors, which made me lose confidence in it as a backup solution.  CB did my local backup run really quickly and that's now synced to gdrive.

     

    I did a quick back-of-the-envelope calculation before I started switching to one teamdrive and I'm ok.  I don't upload anything other than the media files - I'm not sure what value the nfo and srt files are to Plex etc., especially now that Plex's subtitles are getting better (99.99% of my content is in English anyway)?

    You use the CloudBerry docker? If so, I didn't know it had a free version - I'll check it out then! Do you just let CB back it up with compression at the file level without encryption, and let rclone handle the encryption?

     

    Yeah, it makes a difference that you use Plex, since it stores its metadata in appdata. With Emby you can choose what you want, so I chose to store it in the media folder, which means I have a lot of small files in each media folder. And not needing subtitles makes a lot of difference; unfortunately I'm not so lucky as to be a native English speaker ;).

  7. 7 minutes ago, Spladge said:

    Also - I use traktarr and a lot of Trakt lists.
    Plus I have a few different libraries, so 4k movies and Documentary movies and so on.
    https://github.com/l3uddz/traktarr

    Thanks for this, I was looking for something to add Trakt lists with but couldn't find an easy way!

     

    @DZMM thanks for the link to your post, I don't know why I overlooked that. Is this your way of backing up your important data now? You wrote that you use Cloudberry, is that the 30 USD license? Do you like it? I'm looking for a solution since Duplicati is dog slow. But I just tested Cloudberry and even though I chose encryption, it uploaded the plain files to Gdrive... Borg also seems nice, but it's so much scripting that I doubt its usefulness when I need to recover/restore.

     

    By the way, since the Teamdrive has a limit of 400k files, it might be smart to keep your nfo and srt files and such stored locally. I've excluded those from my upload and wrote a small script to download all of those files from my Gdrive to my local server.
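
    Something along these lines would do it - the paths and remote name here are just placeholders for my own setup:

    # keep the small metadata/subtitle files out of the upload
    rclone move /mnt/user/local/media tdrive_vfs:media --exclude "*.nfo" --exclude "*.srt" -v
    # and pull the ones that are already on Gdrive back down to the local server
    rclone copy tdrive_vfs:media /mnt/user/local/media --include "*.nfo" --include "*.srt" -v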

  8. 4 minutes ago, DZMM said:

    Thanks - I'm going to give one teamdrive another go.

    The only annoying part is that Gdrive is still moving files from my personal gdrive to my tdrive in the background. This has been taking almost a week or something... So that's something to take into account when deciding on your final choice.

     

    Also, my upload script doesn't cause any rate limit errors and just starts right away. Maybe that's also a key to the multiple upload accounts. Hopefully you get it working too!

  9. On 12/20/2018 at 10:48 AM, DZMM said:

    @Kaizac how are you getting on with your td-->td transfers?  I'm still nervous about doing them, e.g. I just moved two movie folders between td1/adults/ and td1/kids/ using putty and the folders moved ok, but the file inside disappeared!  I just moved an individual file using putty and that went ok.

     

    I think I'm going to stick with consolidating all my files to gdrive once they've uploaded and doing my moves/edits there.  The lesson here is to make sure everything is organised correctly before uploading.

     

    Edit:  Looks like it was a false alarm.  Not sure how, but the file was also in td2/adults - the file in td1/adults must have been a phantom or something... I just did the same transfer from td2/adults to td2/kids and all was ok - phew!

    Sorry, I didn't see your question to me; it got lost in the many posts :).

     

    I'm currently still using only 1 Team Drive, and using 5-6 APIs to upload 24/7, which is still working with the 8000k bwlimit. So for me there is no need to create another TD.

  10. @francrouge did you get it working now?

     

    @DZMM I've been fully migrating to the Team Drive over the last few days. Not sure if you did so already? The mistake I made was to populate the Team Drive beforehand, because when you then move the files through the WebUI of Google Drive it creates duplicate folders; somehow it doesn't merge them. When I tried to move mount_rclone/Gdrive to mount_rclone/Tdrive with Krusader it started a normal transfer. So I think they are seen as separate entities and it will count towards your quota. Maybe your experience is different.

     

    So what I did was create a folder "Transfer" in the Tdrive (through Krusader/Windows) and move the files from Gdrive to Tdrive/Transfer through the WebUI. This starts the move in the background.

    Then I moved the folders from the Transfer folder to the Tdrive itself with Krusader. This does count as a server-side move, so it won't count towards your quota and it's fast. However, I noticed that after 2 days it's still transferring files to my Transfer folder, even though the WebUI makes it look like the transfer is done. So watch out: the background process (whose progress you can't see) takes a while to run.

  11. On 12/16/2018 at 11:34 PM, DZMM said:

    Just create another encrypted remote for Bazarr with a different client_ID pointing to the same gdrive/tdrive, e.g.:

     

    
    [gdrive_bazarr]
    type = drive
    client_id = Diff ID
    client_secret = Diff secret
    scope = drive
    root_folder_id = 
    service_account_file = 
    token = {should be able to use same token, or create a new one if pointed to a teamdrive}
    
    [gdrive_bazarr_vfs]
    type = crypt
    remote = gdrive_bazarr:crypt
    filename_encryption = standard
    directory_name_encryption = true
    password = same password
    password2 = same password

     

    One problem I'm encountering is that the multiple upload scripts are using a fair bit of memory, so I'm investigating how to reduce the memory usage by removing things like --fast-list from the upload script.  Not a biggie as I can fix it.

    Currently running 6 APIs/scripts for uploading (1 API/script per disk), and 1 API dedicated to streaming (only Emby and Plex use this API). Other dockers are set on a separate Docker API. I had an initial error in my unionfs command, so my streaming API was being used instead of my docker API. Since I fixed that I see the API hits being distributed nicely. So this should prevent any further API bans and still allow me to run Bazarr 24/7.

     

    With the upload scripts I had high memory usage as well, and I was getting rate banned at the end of the day. So I think it might have been uploading a bit too much on the 8500k bwlimit. It could also have been too many checkers going through all the files. I changed to the following command last night and it's currently sitting very low in memory usage with 6 uploads of 3 transfers each.

     


    -vv --buffer-size 128M --drive-chunk-size 32M --checkers 3 --fast-list --transfers 3 --delete-empty-src-dirs --bwlimit 8000k --tpslimit 3 --min-age 30m
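
    In full, each upload script is basically one rclone move per disk with those flags, something like this - the local path and remote name are placeholders for my own setup:

    rclone move /mnt/user/local/disk1 tdrive_upload1_vfs: -vv --buffer-size 128M --drive-chunk-size 32M --checkers 3 --fast-list --transfers 3 --delete-empty-src-dirs --bwlimit 8000k --tpslimit 3 --min-age 30m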

     

  12. 7 hours ago, francrouge said:

    Hi, I followed your script but on startup I got an error:

     

    18.12.2018 20:21:29 INFO: mounting rclone vfs.
    2018/12/18 20:21:29 Failed to start remote control: start server failed: listen tcp 127.0.0.1:5572: bind: address already in use
    18.12.2018 20:21:34 CRITICAL: rclone gdrive vfs mount failed - please check for problems.
    Script Finished Tue, 18 Dec 2018 20:21:34 -0500

     

     

    I erased my config and tried with a new one, but it's still the same thing.

     

    Any idea?

     

    thx

    Did you create the file "mountcheck" on your Gdrive mount?
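
    If not: the mount script looks for that dummy file to decide whether the mount came up, so without it the check fails every run, and a rerun can then try to mount again while the previous rclone (holding the --rc port 5572) is still running - which lines up with the "address already in use" error you're seeing. Roughly, with the paths as placeholders for your own setup, the relevant part looks something like this:

    # create the dummy file once, after a successful manual mount:
    #   touch /mnt/user/mount_rclone/gdrive/mountcheck
    # the script then verifies the mount with a check along these lines:
    if [[ -f "/mnt/user/mount_rclone/gdrive/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check successful, gdrive vfs mounted."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: rclone gdrive vfs mount failed - please check for problems."
    fi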
