
Bolagnaise

Members
  • Posts

    152
  • Joined

  • Last visited

Posts posted by Bolagnaise

  1. 37 minutes ago, livingonline8 said:

    So I have rclone set up and running... great!

    I used the SpaceInvader One script to mount Google Drive as a share and I can access that share now... amazing!

    I installed Emby and tried to point the media path to the Google share, but I cannot see it... I see all my other shares that are physically on my Unraid server, but not the mounted Google Drive one?!

     

    Can anyone help me with this please? 

    I’ve never used SpaceInvader's script; are you creating directories and mounting them using it? I highly recommend using the scripts created on page 1 by DZMM for mounting Gdrive, as they will stop you getting API bans from Google.

    • Like 1
    2. @DZMM So I tried running the upload script last night, and the mount immediately disconnected and was throwing errors in the mount log saying it couldn’t pull the API key, which made me realise exactly what the original issue was: I had BW set to 9000 in the script, but I use a 5G router to perform the uploads and it only has a 4G uplink speed of around 45 Mbps. So basically, every time the script ran it would crash my router and the mount would disconnect; as soon as I stopped the upload or rebooted, it would work again.

     

    Maybe add a warning to everyone that changing the BW limit is a must. @jamesac2 only has a 10 Mbps upload, so maybe that’s why he’s also getting disconnects.
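    As a rule of thumb for anyone copying the scripts: rclone's --bwlimit flag takes its value in KBytes/s by default, so a value well above your real uplink does nothing to protect the router. A minimal sketch of picking a limit with headroom; the 45 Mbps figure and the 75% factor are illustrative assumptions, not from DZMM's script:

    ```shell
    # Illustrative only: derive a --bwlimit value (KBytes/s) that leaves 25%
    # headroom on an assumed 45 Mbit/s uplink so the router isn't saturated.
    uplink_mbps=45                                  # assumed measured uplink
    safe_limit=$(( uplink_mbps * 125 * 75 / 100 ))  # 1 Mbit/s ~= 125 KBytes/s
    echo "--bwlimit ${safe_limit}"                  # goes on the upload command
    ```

    On a 45 Mbps link this prints --bwlimit 4218; setting it near the full 5625 KBytes/s (let alone 9000) risks exactly the router crash described above.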

  3. 2 minutes ago, DZMM said:

    I need to re-review adding --vfs-cache-mode writes, as a quick glance now makes sense and it might help when I occasionally write direct to the mount.

     

    I don't think buffer-size should be set to 0, but again I'll research as I haven't touched my settings for almost a year.

    No worries man, that's the way it is: everything works completely fine... until it doesn't. You have done everyone a service, so I don't mind a few late nights troubleshooting; you're literally saving me money with this script.
     

    Anyway, 12 hours of uptime now, zero dismounts, and I successfully moved everything onto a brand new Unraid build on new hardware. Looks like it's fixed.

  4. 26 minutes ago, DZMM said:

    @Bolagnaise not sure what's going on. Are you sure your dockers are writing to /mount_unionfs and not /mount_rclone? The mention of vfs-cache-mode writes seems to indicate something is. Writing direct to the mount without using the rclone upload script is risky, as it doesn't recover if something goes wrong.

    Nothing is pointed at mount_rclone; everything is using mount_unionfs. I've also just done a brand new Unraid install on new hardware.

     

    This seems like the issue you mentioned before: https://forums.unraid.net/bug-reports/stable-releases/67x-very-slow-array-concurrent-performance-r605/?do=findComment&comment=5488

     

  5. @DZMM I'm getting these errors appearing in the mount log now.

     

    2019/09/23 23:06:40 ERROR : Movies/3 Lives (2019)/3.Lives.2019.WEBDL-1080p.mp4: WriteFileHandle: Can't open for write without O_TRUNC on existing file without --vfs-cache-mode >= writes

     

    So I added

    --vfs-cache-mode writes

    to the mount script (https://github.com/rclone/rclone/issues/961).

     

    I got 

     

    2019/09/23 23:12:31 INFO : Cleaned the cache: objects 1 (was 4), total size 0 (was 0)

     

    I have no idea what I've done, but the error has gone away.

     

    No idea if it's fixed my error yet.
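    For anyone hitting the same WriteFileHandle error: the flag belongs on the rclone mount line itself. This is only a sketch based on the mount command from the mount script posted in this thread (same paths and the gdrive_media_vfs: remote); check DZMM's current script on page 1 before copying anything.

    ```shell
    # Sketch: the thread's mount command with the write cache enabled.
    # --vfs-cache-mode writes buffers writes on local disk first, which is what
    # the "Can't open for write without O_TRUNC" error is asking for.
    rclone mount --allow-other --buffer-size 128M --dir-cache-time 72h \
        --drive-chunk-size 512M --fast-list --log-level INFO \
        --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off \
        --vfs-cache-mode writes \
        gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &
    ```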

     

  6. 12 hours ago, DZMM said:

    What did the logs say? 

     

    Do you have a lot of files in /mnt/user/mount_unionfs/google_vfs/.unionfs? Maybe the script can't cope with the cleanup.

     

    I hate this part of unionfs. It looks like rclone union is really coming soon, as work resumed last week.

    Logs said input/output error, but now I'm running cleanup scripts and it's working, so I don't know. When rclone union is released, will you write up a new tutorial? I will love you long time if you do 😘

  7. @DZMM I just ran the cleanup script and it immediately killed the mount. Could that be an issue?

     

    #!/bin/bash

    #######  Check if script already running  ##########
    if [[ -f "/mnt/user/appdata/other/rclone/rclone_cleanup" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
        exit
    else
        touch /mnt/user/appdata/other/rclone/rclone_cleanup
    fi
    #######  End Check if script already running  ##########

    ################### Clean-up UnionFS Folder  #########################
    echo "$(date "+%d.%m.%Y %T") INFO: starting unionfs cleanup."
    find /mnt/user/mount_unionfs/google_vfs/.unionfs -name '*_HIDDEN~' | while read line; do
        oldPath=${line#/mnt/user/mount_unionfs/google_vfs/.unionfs}
        newPath=/mnt/user/mount_rclone/google_vfs${oldPath%_HIDDEN~}
        rm "$newPath"
        rm "$line"
    done
    find "/mnt/user/mount_unionfs/google_vfs/.unionfs" -mindepth 1 -type d -empty -delete

    rm /mnt/user/appdata/other/rclone/rclone_cleanup
    exit

  8. 1 minute ago, DZMM said:

    @Bolagnaise I think you've got a rogue docker, as the script is spot on. @yendi had similar problems that he resolved by doing some rebuilding; maybe he can help.

    Yeah, I'm going to test them one by one as suggested; I'm only running Radarr, Ombi, LetsEncrypt, and Tautulli on the machine. My next step, if none of that works, is to stop everything and rebuild Unraid on a new machine. I want to switch my Plex server and everything else over to Unraid, as it's all on Windows currently. Just need to do it.

  9. 3 minutes ago, DZMM said:

    Post your mount script please. Have you tried running without dockers and then turning them on one at a time, say every hour?

    #!/bin/bash

    #######  Check if script is already running  ##########
    if [[ -f "/mnt/user/appdata/other/rclone/rclone_mount_running" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Exiting script already running."
        exit
    else
        touch /mnt/user/appdata/other/rclone/rclone_mount_running
    fi
    #######  End Check if script already running  ##########

    #######  Start rclone gdrive mount  ##########
    # check if gdrive mount already created
    if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check rclone vfs already mounted."
    else
        echo "$(date "+%d.%m.%Y %T") INFO: mounting rclone vfs."

        # create directories for rclone mount and unionfs mount
        mkdir -p /mnt/user/appdata/other/rclone
        mkdir -p /mnt/user/mount_rclone/google_vfs
        mkdir -p /mnt/user/mount_unionfs/google_vfs
        mkdir -p /mnt/user/rclone_upload/google_vfs

        rclone mount --allow-other --buffer-size 128M --dir-cache-time 72h --drive-chunk-size 512M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &

        # check if mount successful
        # slight pause to give mount time to finalise
        sleep 5
        if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Check rclone gdrive vfs mount success."
        else
            echo "$(date "+%d.%m.%Y %T") CRITICAL: rclone gdrive vfs mount failed - please check for problems."
            rm /mnt/user/appdata/other/rclone/rclone_mount_running
            exit
        fi
    fi
    #######  End rclone gdrive mount  ##########

    #######  Start unionfs mount   ##########
    if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs already mounted."
    else
        unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO /mnt/user/mount_unionfs/google_vfs
        if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs mounted."
        else
            echo "$(date "+%d.%m.%Y %T") CRITICAL: unionfs Remount failed."
            rm /mnt/user/appdata/other/rclone/rclone_mount_running
            exit
        fi
    fi
    #######  End Mount unionfs   ##########

    exit

     

     

     

     

    I haven't tested the dockers yet.

  10. I have 16 GB and the buffer is set to 128 MB. I did have the low-RAM issue and fixed it about 5 weeks ago; this feels vastly different. The SMB share becomes entirely inaccessible, whereas when I was running out of memory before, I could always access the share and see that the mount had disconnected. Now the only way to make //tower accessible over SMB again is to unmount and then remount.
     

    Another clue: it seems that running a Plex scan does cause it to crash, but I do not have thumbnails turned on, and the RAM usage during a Plex scan right now is only 20%. As I said, this issue has only started in the last 2 days or so, and I never had unmount crashes during scans before.

     

    Edit: OK, I have done some more investigating, as everything pointed to an SMB issue, and I seem to have fixed it. Here's what I think was the issue.

     

    1. My network had reverted back to a public profile. By default, network sharing is turned off on public profiles, and therefore SMB as well, but somehow it was still connecting. I switched back to a private network profile.

    2. I enabled this setting: https://forums.unraid.net/topic/77442-cannot-connect-to-unraid-shares-from-windows-10/

    3. I went to windows credential manager and deleted all saved credentials for mapped drives.

     

    That seems to have done the trick; everything is now much faster.

     

     

    EDIT EDIT: Problem still not fixed. At my wits' end.

  11. For the last 2 days, I've been getting a constant issue. I will mount a crypt Gdrive and everything works fine; I can access the folders through Windows Explorer and Plex can read them. After about 30 minutes, the mount appears to drop and I can no longer access any share on Unraid, not even local ones. If I unmount and then remount using User Scripts, the shares instantly reappear and everything works again.
     

    Any ideas?

  12. DISREGARD. I'll leave this here in case anyone else has the same issue: I re-copied the script from GitHub and re-ran it, and the issue is gone.

     

     

    I have had an issue ever since I got this working: the unionfs cleanup script throws this error every time something is deleted from the Plex server or via Sonarr/Radarr, and then after a rescan the file is back. Am I missing something?

     

    Script location: /tmp/user.scripts/tmpScripts/rclone_cleanup/script
    30.08.2019 19:12:16 INFO: starting unionfs cleanup.
    rm: cannot remove '/mnt/user/mount_rclone/google_vfs/mnt/user/mount_unionfs/google_vfs/.unionfs/Movies/Aquaman (2018)/Aquaman (2018) Remux-2160p.mkv': No such file or directory

     

     

     

    Here's the script as copied from GitHub.

     

    #!/bin/bash

    #######  Check if script already running  ##########
    if [[ -f "/mnt/user/appdata/other/rclone/rclone_cleanup" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
        exit
    else
        touch /mnt/user/appdata/other/rclone/rclone_cleanup
    fi
    #######  End Check if script already running  ##########

    ################### Clean-up UnionFS Folder  #########################
    echo "$(date "+%d.%m.%Y %T") INFO: starting unionfs cleanup."
    find /mnt/user/mount_unionfs/google_vfs/.unionfs -name '*_HIDDEN~' | while read line; do
        oldPath=${line#/mnt/user/mount_unionfs/google_vfs/.unionfs}
        newPath=/mnt/user/mount_rclone/google_vfs${oldPath%_HIDDEN~}
        rm "$newPath"
        rm "$line"
    done
    find "/mnt/user/mount_unionfs/google_vfs/.unionfs" -mindepth 1 -type d -empty -delete

    rm /mnt/user/appdata/other/rclone/rclone_cleanup
    exit
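    For anyone debugging the "No such file or directory" error above: the cleanup's path mapping is plain bash parameter expansion. ${line#prefix} strips the unionfs prefix and ${oldPath%_HIDDEN~} strips the hidden-file marker. A minimal sketch using the Aquaman path from the error; note that if the prefix searched by find and the prefix stripped in the loop don't match exactly (easy to do when a copy-paste mangles the script), the strip is a no-op and rm targets a doubled path just like the one in the error.

    ```shell
    # One _HIDDEN~ marker file as unionfs creates it on delete.
    line='/mnt/user/mount_unionfs/google_vfs/.unionfs/Movies/Aquaman (2018)/Aquaman (2018) Remux-2160p.mkv_HIDDEN~'

    # Strip the unionfs prefix, drop the _HIDDEN~ suffix, retarget the cloud mount.
    oldPath=${line#/mnt/user/mount_unionfs/google_vfs/.unionfs}
    newPath=/mnt/user/mount_rclone/google_vfs${oldPath%_HIDDEN~}

    echo "$newPath"
    # -> /mnt/user/mount_rclone/google_vfs/Movies/Aquaman (2018)/Aquaman (2018) Remux-2160p.mkv
    ```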

  13. On 8/4/2019 at 5:31 AM, yendi said:

    So with the help of the rclone guys, I might have found the issue:

     

    I have 12 GB of RAM, and when I upload with all the services running on unRAID, I am using about 8.5 GB of it.

    When Plex is doing the thumbnails, it seems to consume all the remaining RAM for the job: some for Plex itself, plus --buffer-size 256 * the number of opened files. Apparently it's 4-5 files simultaneously.

     

    I lowered the buffer-size variable to 128 MB and I have not seen the issue in 24 hours.

     

    Hope it helps someone who faces this issue!

    Thank you so much. I have only 10 GB of RAM currently and was seeing rclone crashes in Unraid and out-of-memory issues. I'm upgrading to 16 GB tomorrow.

  14. 2 hours ago, SoloLab said:

    Do I then move the media files, once they show up there, to their correct path after they've been uploaded and encrypted in Google?

    Use Binhex-Krusader to move your current movie and TV show folders to mount_unionfs. The way it's set up, rclone_upload is your local disk storage and unionfs is the link between local and cloud (rclone_mount). When you add folders to mount_unionfs, they are actually placed into rclone_upload. Then adjust the upload script to suit your needs, and it will transfer files that meet the age requirements from rclone_upload to rclone_mount. My current upload script uses --min-age 7d --max-age 8d, so it stores newly downloaded TV shows and movies for a week and then uploads them the next day; I run the script daily to check.
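    The --min-age 7d --max-age 8d pair defines a one-day age window: only files whose modification time falls between 7 and 8 days ago get moved on a given run. A rough, runnable analogue of that filter using find (an illustration of the semantics only, not the upload script itself; find -mtime 7 matches files whose age, truncated to whole days, is exactly 7):

    ```shell
    # Build three dummy files around the 7-8 day window.
    tmp=$(mktemp -d)
    now=$(date +%s)
    touch -d "@$(( now - 2*86400 ))"         "$tmp/too_new.mkv"   # 2 days old
    touch -d "@$(( now - 7*86400 - 43200 ))" "$tmp/in_window.mkv" # 7.5 days old
    touch -d "@$(( now - 10*86400 ))"        "$tmp/too_old.mkv"   # 10 days old

    # Analogue of --min-age 7d --max-age 8d: only the 7.5-day-old file matches.
    find "$tmp" -type f -mtime 7
    ```

    Running the script daily means each file falls inside the window on exactly one run, which is what makes the week-long local buffer work.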

     

    • Like 1
  15. 1 minute ago, nuhll said:

    If I'm correct, it uses the "created" date. (If I download old Linux movies, they get uploaded even if I set it to 1y.)

     

    I wouldn't bother uploading such fresh data.

    Well, min-age 30m means, if I understand it correctly, that all data at least 30 minutes old is uploaded. I ran this script and tested it, and it started to upload TBs of data, so I switched to max-age 2d, and it is now taking a long time to filter out files. I initially tried max-age 7d and it was also taking a long time, so I thought I would reduce it to 2 for a quick test to see what it would upload. I'm currently waiting for the script to finish.
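    On the min-age point: --min-age 30m is a lower bound on file age, so everything older than 30 minutes qualifies for upload, which is exactly why the first run tried to move terabytes. A tiny find analogue of that filter (illustration only; -mmin +30 matches files modified more than 30 minutes ago):

    ```shell
    tmp=$(mktemp -d)
    now=$(date +%s)
    touch -d "@$(( now - 600 ))"  "$tmp/fresh.mkv"  # 10 minutes old: kept local
    touch -d "@$(( now - 7200 ))" "$tmp/old.mkv"    # 2 hours old: would upload

    # Analogue of --min-age 30m: only old.mkv matches.
    find "$tmp" -type f -mmin +30
    ```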
