Posts posted by yendi

  1. Thanks!

    I still have some hiccups, like the mountcheck file being duplicated: I have to wait for a full upload to finish to see why I have it in both the upload and the google folders (two identical files in the unionfs folder with 2 different creation dates). But apart from that, it seems pretty solid.

    Quick question that you probably missed: do you have a backup strategy with this, or do you just have one GSuite account, and if it gets closed you will simply re-download all media?

    Anyway, you have really provided great scripts!

  2. 11 minutes ago, DZMM said:

    good

     

    I'm not sure why you'd want to do this.  If you want to test first, just manually copy a tv show or movie or two to see what happens.

     

    If you really want to test the full 40GB, then in the upload script just change rclone move to rclone sync

    I made a typo, it's 40TB.

    If my TV show path is /mnt/user/Media/TV shows, would this command do the trick?

    Quote

    rclone sync "/mnt/user/Media/TV shows" gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 3 --fast-list --transfers 2 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 9500k --tpslimit 3 --min-age 30m

    I put quotes around the path as there is a space, and removed "--delete-empty-src-dirs". Am I correct?
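    A safe way to sanity-check the command before committing to 40TB is rclone's --dry-run flag, which reports what would be transferred without uploading anything; a minimal sketch, assuming the same remote and path:

    # show what rclone sync would do, without transferring or deleting anything
    rclone sync "/mnt/user/Media/TV shows" gdrive_media_vfs: --dry-run -vv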

    Thanks

  3. 23 hours ago, DZMM said:

    Do you have a /mnt/user/mount_rclone/google_vfs folder?  If not, create one.  I think this will solve your problem.

     

    I've added:

     

    
    mkdir -p /mnt/user/mount_rclone/google_vfs

     

    to the mount script.  I think there was a reason why it wasn't there, but I can't remember it, so I'm adding it until someone tells me it causes a problem.

     

    This seems to have solved the problem... So trivial! Thanks :D

    As I have 40TB+ of files, is there a way to make something like a "symbolic link" of my movie and TV show folders so they get uploaded in the background (without being deleted after upload)? That way I can keep a local copy of everything until the full upload is done, and switch over all at once at the end.
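    One way to get this effect without symlinks (a sketch only, not part of the scripts above; the source path is a placeholder) is a one-off rclone copy, which uploads in the background but, unlike the upload script's rclone move, never deletes the source:

    # copy leaves local files untouched; log to a file to follow progress
    nohup rclone copy "/mnt/user/Media" gdrive_media_vfs: --bwlimit 9500k -v --log-file=/mnt/user/appdata/other/rclone/initial_copy.log &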

    Thanks

  4. 18 hours ago, testdasi said:

    That usually means the folder wasn't created for some reason and/or your rclone mount command uses a different path.

     

    There is a mkdir line in the script e.g.:

    
    mkdir -p /mnt/user/mount_rclone/google_vfs

    Do you find an mkdir line in your mount script?

      

    Instead of posting screenshots of the scripts, it's better if you copy-paste your script into a post (remember to use the forum's code functionality - the button that looks like </> - so it's easier to check).

      

    I use the exact script from GitHub; I posted a code insert of it at the top of this page.

  5. 7 hours ago, DZMM said:

    No.  If you can, reboot your server and run the script in the background using the user scripts plugin

    I rebooted, ran it in the background, and I get the same error:

    25.07.2019 16:52:39 INFO: mounting rclone vfs.
    2019/07/25 16:52:40 Fatal error: Can not open: /mnt/user/mount_rclone/google_vfs: open /mnt/user/mount_rclone/google_vfs: no such file or directory
    25.07.2019 16:52:44 CRITICAL: rclone gdrive vfs mount failed - please check for problems.

    Could you please double-check that I am doing it right:

    1. Installed the rclone beta plugin and added this config [screenshot: rclone-beta settings]
    2. Installed unionfs-fuse from NerdPack [screenshot: NerdPack plugin]
    3. Copied all the scripts into User Scripts
    4. Pasted these commands into SSH:
      1. "touch mountcheck"

      2. "rclone copy mountcheck gdrive_media_vfs: -vv --no-traverse"

    5. Started the mount script in the background

    [screenshot: User Scripts plugin]

     

    --> Am I missing something? I started everything all over again with the same result...

     

    Thanks!


  6. 1 minute ago, DZMM said:

    It looks like you've created /mnt/user/mount_rclone/google_vfs so all should be good.  Are you running the script in the background?

    When I created the folders manually, I ran the rclone command directly in an SSH window. When I hit enter, the command runs, but I get no output and no way to type anything else. It is as if the window were blocked.

    Is this normal behavior?
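    For what it's worth, rclone mount stays attached to the terminal by default, so a "blocked" prompt is expected; a minimal sketch of backgrounding it, using the same remote and mount point as the script:

    # either append & as the mount script does, or use rclone's --daemon flag
    rclone mount gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs --allow-other --daemon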

    Thanks

  7. 17 minutes ago, DZMM said:

    Are you using the user scripts plugin to run the mount script?

    Yes

     

    [screenshot: User Scripts plugin running the mount script]

    #!/bin/bash

    #######  Check if script is already running  ##########

    if [[ -f "/mnt/user/appdata/other/rclone/rclone_mount_running" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Exiting script already running."
        exit
    else
        touch /mnt/user/appdata/other/rclone/rclone_mount_running
    fi

    #######  End Check if script already running  ##########

    #######  Start rclone gdrive mount  ##########

    # check if gdrive mount already created
    if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check rclone vfs already mounted."
    else
        echo "$(date "+%d.%m.%Y %T") INFO: mounting rclone vfs."

        # create directories for rclone mount and unionfs mount
        mkdir -p /mnt/user/appdata/other/rclone
        mkdir -p /mnt/user/mount_unionfs/google_vfs
        mkdir -p /mnt/user/rclone_upload/google_vfs

        rclone mount --allow-other --buffer-size 256M --dir-cache-time 72h --drive-chunk-size 512M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &

        # check if mount successful
        # slight pause to give mount time to finalise
        sleep 5

        if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Check rclone gdrive vfs mount success."
        else
            echo "$(date "+%d.%m.%Y %T") CRITICAL: rclone gdrive vfs mount failed - please check for problems."
            rm /mnt/user/appdata/other/rclone/rclone_mount_running
            exit
        fi
    fi

    #######  End rclone gdrive mount  ##########

    #######  Start unionfs mount   ##########

    if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs already mounted."
    else
        unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO /mnt/user/mount_unionfs/google_vfs

        if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs mounted."
        else
            echo "$(date "+%d.%m.%Y %T") CRITICAL: unionfs Remount failed."
            rm /mnt/user/appdata/other/rclone/rclone_mount_running
            exit
        fi
    fi

    #######  End Mount unionfs   ##########

    ############### starting dockers that need unionfs mount ######################

    # only start dockers once
    if [[ -f "/mnt/user/appdata/other/rclone/dockers_started" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: dockers already started"
    else
        touch /mnt/user/appdata/other/rclone/dockers_started
        echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
        docker start plex
        docker start tautulli
        docker start radarr
        docker start sonarr
    fi

    ############### end dockers that need unionfs mount ######################

    exit
  8. 4 hours ago, Kaizac said:

    No. The Team Share is shared storage which multiple users have access to. So you get 750GB per day per user connected to the Team Share. It's not just extra size added to a specific account.

    Quote

     tdrive: - a teamdrive remote.  Note: you need to use a different gmail/google account to the one above (which creates and shares the Team Drive) to create the token - any google account will do.  I recommend creating a 2nd client_id using this account

    • When I request a client_id + secret, should I request it from a different gmail account?
    • Will this remote always stay empty? I don't understand the purpose of this remote, could you please elaborate?

     

    • How do I do the initial upload? Copy media into /mnt/user/mount_unionfs/google_vfs/xxxxx/?
    • How do I see upload progress --> (local files get deleted when the upload finishes, for example?) I have 40TB+ to upload, so I want to plan this:
      • Is there a way to make a symlink or something similar to run a continuous upload over a few weeks and keep my Plex as it is in parallel? So I could switch once everything has been uploaded?
    • How does rclone work with the cache? I have an SSD where all downloads go, and the cache is emptied every night. Should I disable the cache now?
    • Why is there a script for Radarr and not for Sonarr? I don't see where in the tutorial it is used.

    Thanks for the help!

     

    EDIT: I started right after work and I'm kind of stuck:

    Here is my rclone config:

    [gdrive]
    type = drive
    client_id = XXXXXXXXXXXXXXXXXXXXXXXX.apps.googleusercontent.com
    client_secret = XXXXXXXXXXXXXXXXXXXXXXXX
    scope = drive
    token = {"access_token":"XXXXXXXXXXXXXXXXXXXXXXXXXX","token_type":"Bearer","refresh_token":"1/n-7ZOV5GTQUhOYNW_8txP2xIFciNSN6sOtCxjbvSbEQ","expiry":"2019-07-23T19:16:10.643780944+02:00"}
    
    [gdrive_media_vfs]
    type = crypt
    remote = gdrive:crypt
    filename_encryption = standard
    directory_name_encryption = true
    password = XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    password2 = XXXXXXXXXXXXXXXXXXXXXXXXXXX

    When I input the command for mountcheck:

    root@Kanard:~# rclone copy mountcheck gdrive_media_vfs: -vv --no-traverse
    2019/07/23 18:59:45 DEBUG : rclone: Version "v1.48.0-073-g266600db-beta" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone-beta/.rclone.conf" "copy" "mountcheck" "gdrive_media_vfs:" "-vv" "--no-traverse"]
    2019/07/23 18:59:45 DEBUG : Using config file from "/boot/config/plugins/rclone-beta/.rclone.conf"
    2019/07/23 18:59:46 DEBUG : mountcheck: Size and modification time the same (differ by -365.632µs, within tolerance 1ms)
    2019/07/23 18:59:46 DEBUG : mountcheck: Unchanged skipping
    2019/07/23 18:59:46 INFO  : 
    Transferred:             0 / 0 Bytes, -, 0 Bytes/s, ETA -
    Errors:                 0
    Checks:                 1 / 1, 100%
    Transferred:            0 / 0, -
    Elapsed time:        1.1s
    
    2019/07/23 18:59:46 DEBUG : 5 go routines active
    2019/07/23 18:59:46 DEBUG : rclone: Version "v1.48.0-073-g266600db-beta" finishing with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone-beta/.rclone.conf" "copy" "mountcheck" "gdrive_media_vfs:" "-vv" "--no-traverse"]

    but then, when I try to start the mount script, I get an error:

    23.07.2019 18:44:56 INFO: mounting rclone vfs.
    2019/07/23 18:44:58 Fatal error: Can not open: /mnt/user/mount_rclone/google_vfs: open /mnt/user/mount_rclone/google_vfs: no such file or directory
    23.07.2019 18:45:01 CRITICAL: rclone gdrive vfs mount failed - please check for problems.

    I tried running only the rclone mount command, but I get the same path error.

     

    If I manually create the path, I can run the command, but it hangs in the SSH window (I never get the prompt back), so I assume something is fishy. Permission issues?

     

    Thanks

  9. Just now, nuhll said:

    Yes, he uses it to get around the limits.

     

    But I don't see how a "normal user" would hit these limits (after the initial upload).

    So it adds 750GB on top of the initial user's allowance, and the Team Share itself is never used? So with this you could upload 1500GB a day? That is great!

    I have 1000/400 fiber internet, and during my tests I was hitting this limit every day (initial upload).

  10. @DZMM Thanks for all those scripts, it is greatly appreciated.

    I played around a few months ago with Google Apps for Business in a similar setup, but using a Windows solution instead of rclone (Stablebit CloudDrive).

    I ended up building my current unraid server, but because it keeps growing in size, the cost of HDDs is becoming very high.

    I am considering testing your solution, so I am super interested in your tutorial...

    What is the purpose of the teamdrive? Do you use it and merge it into Plex at some point, or is it only there to provide a second 750GB/day allowance?

    My worry at the time was that Google would one day enforce the per-user limit and I would lose all my content. Do you have a backup strategy somewhere? I saw that you had a second Gdrive?

    Thanks again for sharing all this stuff!

  11. One other issue: I mapped the path /downloads -> /mnt/user/Media/Downloads/, but when I start a download I get an error: No space left on device (/downloads/xxxxxxxxxxx.mkv) --> I have 120GB left and the download is 4GB.

     

    Container config:

    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='transmission' --net='bridge' --privileged=true -e TZ="Europe/Paris" -e HOST_OS="Unraid" -e 'PUID'='99' -e 'PGID'='100' -p '9091:9091/tcp' -p '51413:51413/tcp' -v '/mnt/user/Media/Downloads/':'/downloads':'rw' -v '/mnt/user/Media/Downloads/_Watch/':'/watch':'rw' -v '/mnt/user/appdata/transmission':'/config':'rw' 'linuxserver/transmission' 
    c5ae215adda4c5875af4e91665f328ef2ae6402653cd376429cfe61f835bc229
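    One way to narrow down where "No space left on device" comes from (a sketch; the container name and paths are taken from the config above) is to compare the free space the container sees with the free space on the backing share:

    # free space as Transmission sees it inside the container
    docker exec transmission df -h /downloads
    # free space on the Unraid share backing the /downloads mapping
    df -h /mnt/user/Media/Downloads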

     

  12. Hello,

     

    I have issues with my first docker app: Transmission.

    I cannot get settings to save: I installed the docker, then just went into the preferences in the web view, changed a few things, and restarted the container --> the preferences are reverted to defaults.

    I tried reinstalling the container with the "privileged" setting, without luck.

    I also tried "Docker Safe New Perms" and the "Fix Common Problems" plugin, but no errors were found.

    Could you please tell me what I am doing wrong? Is it a permission issue? How do I correct it?
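    If it is a permission issue, one quick check (a sketch; the 99/100 values are the PUID/PGID from the container config in the previous post) is whether the container's user can write to the mapped /config path:

    # the appdata folder must be writable by the container's user (PUID 99 / PGID 100)
    ls -ld /mnt/user/appdata/transmission
    # if ownership looks wrong, match it to the container's user
    chown -R 99:100 /mnt/user/appdata/transmission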

    Thanks
