takeover


Posts posted by takeover

  1. So you just need to do the next step, which is to create an encrypted remote. I would also recommend setting up Service Accounts if you plan on exceeding 750GB/day. Otherwise, here is an example of what your rclone config should look like if you want an encrypted Plex folder:

    [googledrive]
    type = drive
    client_id = **********
    client_secret = **********
    scope = drive
    token = {"access_token":"**********"}
    server_side_across_configs = true
    
    [googledrive_encrypted]
    type = crypt
    remote = googledrive:encrypted_plex_folder
    filename_encryption = standard
    directory_name_encryption = true
    password = **********
    password2 = **********

     
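    If anyone prefers the command line over the interactive wizard, a crypt remote like the one above can also be created non-interactively. A rough sketch, assuming the remote and folder names from the config above (replace the placeholder passwords with your own):

```shell
# Create the crypt remote on top of the existing googledrive remote.
# --obscure makes rclone obscure the plain-text passwords before storing them.
rclone config create googledrive_encrypted crypt \
    remote googledrive:encrypted_plex_folder \
    filename_encryption standard \
    directory_name_encryption true \
    password YOUR_PASSWORD \
    password2 YOUR_SALT \
    --obscure

# Sanity check: list directories through the crypt layer
# (names show up decrypted here, encrypted on Google Drive itself).
rclone lsd googledrive_encrypted:
```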

  2. 23 hours ago, remedy said:

    hmm still not working. it prints everything after the transfer is complete, but until then it still sits at the rclone debug line. i removed "--stats 9999m" and tried "-vvv" and tried "-P", same result.

     

    is there no way to get it to output the upload periodically during the actual upload? i tried "-v" with "--stats 5m" instead and that didn't work either.

    Hmm, it works for me, but I am using scripts on a VPS machine rather than Unraid. I know User Scripts recently got an update, but I'm not sure if that is what's stopping scripts from displaying live progress.

  3. On 4/1/2020 at 10:59 AM, remedy said:

    the upload script doesn't show any progress output for me until the transfer is complete, it just sits at "====== RCLONE DEBUG ======"

     

    any ideas? i'd like to be able to see the live progress.

    The reason for this is the Discord notifications. If you don't care about Discord notifications, you can remove "--stats 9999m" and change "-vP" to "-vvv" or "-P".
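    For reference, a minimal sketch of an upload command with periodic progress output; the local path and remote name here are just examples, not the exact ones from the scripts:

```shell
# "-v --stats 1m" writes a transfer summary to the log every minute, which
# works fine in captured output. "-P" instead redraws a live progress line,
# which only renders properly in a real terminal.
rclone move /mnt/user/rclone_upload/google_vfs googledrive_encrypted: \
    -v --stats 1m --transfers 4 --min-age 15m
```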

    • Like 1
  4. 4 minutes ago, Tuftuf said:

    thanks I've seen that now, but I'm still getting stuck at almost the first step.

     

    Tried a working client id/key to test and created a new one.

    Completed the remote auth and provided response.

    I've selected the correct team drive once it was listed.

     

    But verifying the mount fails.

     

    root@Firefly:~# rclone lsd tdrive
    2020/03/06 22:14:06 ERROR : : error listing: directory not found
    2020/03/06 22:14:06 Failed to lsd with 2 errors: last error was: directory not found

    Did you forget the colon?

    rclone lsd tdrive:

    • Thanks 1
  5. 4 hours ago, Roken said:

    I have a 250gb SSD cache drive that is capable of saturating my bandwidth when downloading.

    I'm currently using the mount location of /mnt/user/... for NZBGet, Sonarr, etc., but because /mnt/user/ is located on my spinners, downloads cap at around 16 MB/s (I have gigabit). Is it advisable to switch from /mnt/user to /mnt/cache on a 250GB SSD to speed up downloading?

    I have never had to use /mnt/cache to max out my download speed. I have set my downloads share to use the SSD cache ('Use cache: Yes' in the share settings), and have also run it without the cache; my download speed didn't change either way.

     

    these are my mappings

    /user > /mnt/user

     

  6. So I got the idea from someone here to use a VPS, and I managed to successfully set this up and it is working great! I'm using the scripts to upload to my teamdrive, freeing up my bandwidth!

     

    So the VPS I use is 1 core, 1GB RAM, 50GB storage, a 1Gbit connection, and 15TB/month of traffic for $35 a year. The setup is to just install Docker and keep it minimal with letsencrypt, NZBGet, and Sonarr.

     

    Pros

    • Scripts work great with a few tweaks to get the best performance
    • Frees up your home bandwidth, or helps you upload even more

    Cons

    • Requires basic knowledge of Linux, the terminal, and Docker
    • Your provider's restrictions
    • Storage space for the initial download (you can't download anything bigger than your VPS drive supports)
    • Extra cost to use this setup
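    For anyone curious, the bootstrap is roughly the following. This is a sketch assuming a Debian/Ubuntu VPS; the linuxserver.io images and host paths are examples, not my exact setup:

```shell
# Install Docker via the official convenience script.
curl -fsSL https://get.docker.com | sh

# NZBGet: downloads land on the VPS disk, so watch the 50GB limit
# when queueing large items.
docker run -d --name=nzbget \
    -p 6789:6789 \
    -v /opt/appdata/nzbget:/config \
    -v /opt/downloads:/downloads \
    lscr.io/linuxserver/nzbget

# Sonarr, sharing the same downloads volume.
docker run -d --name=sonarr \
    -p 8989:8989 \
    -v /opt/appdata/sonarr:/config \
    -v /opt/downloads:/downloads \
    lscr.io/linuxserver/sonarr
```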
  7. I wanna say I appreciate these awesome scripts! They have been working flawlessly for me so far.

    Now for my question: can I restructure the directories? I know I need to change each script and the corresponding directory paths. Would this cause any problems?

    mkdir -p /mnt/user/appdata/other/rclone
    mkdir -p /mnt/user/mount_rclone/google_vfs
    mkdir -p /mnt/user/mount_unionfs/google_vfs
    mkdir -p /mnt/user/rclone_upload/google_vfs
    CHANGE TO
    mkdir -p /mnt/user/appdata/other/rclone
    mkdir -p /mnt/user/plexdrive/media_rclone/media
    mkdir -p /mnt/user/plexdrive/media_unionfs/media
    mkdir -p /mnt/user/plexdrive/media_upload/media

     
