DZMM

Members
  • Posts: 2801
  • Days Won: 9

Posts posted by DZMM

  1. 13 minutes ago, FranticPanic said:

    Am I missing or misunderstanding something here? I have content that was downloaded in October still showing in the mount_rclone directory, and looking at the last time it was played was in November.

    I'm not sure - I'm tempted to turn the cache off completely, as my hit rate must be virtually zero because files don't reside there long enough. Maybe it's Plex scanning a file that leads to rclone downloading the file for it to be analysed?

  2. 1 hour ago, Kevin Clark said:

    11.01.2022 07:50:02 INFO: Creating gdrive_media_vfs mergerfs mount.
    mv: cannot move '/mnt/user/gmedia/mount_mergerfs/gdrive_media_vfs' to '/mnt/user/gmedia/local/gdrive_media_vfs/gdrive_media_vfs': File exists
    fuse: mountpoint is not empty
    fuse: if you are sure this is safe, use the 'nonempty' mount option

    Sometimes files end up in your mergerfs mount location PRIOR to the mount running. Go into each disk and manually move the files from /mount_mergerfs --> /local, then run the mount script again:

     

    i.e. /mnt/disk1/mount_mergerfs/.... ----> /mnt/disk1/local/

    /mnt/disk2/mount_mergerfs/.... ----> /mnt/disk2/local/

     

    etc. until you've moved all the troublesome files - there's a scripted version of the same idea sketched below.
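    If you have a lot of disks this gets tedious, so here's a minimal scripted sketch of the same sweep. It assumes the standard /mnt/diskN layout and uses no-clobber copies, so nothing already in /local gets overwritten - check what's left behind before deleting anything:

    # Sketch: sweep stray files from each disk's mount_mergerfs folder
    # into its local folder before running the mount script again.
    # Verify the paths match your setup before running.
    for disk in /mnt/disk*; do
        if [ -d "$disk/mount_mergerfs" ]; then
            mkdir -p "$disk/local"
            # -n = no-clobber: never overwrite a file already in /local
            cp -vrn "$disk/mount_mergerfs/." "$disk/local/"
        fi
    done
    # review anything still left in each mount_mergerfs folder by hand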

  3. 45 minutes ago, Raneydazed said:

    So I'm new to the forum and forum discussions, please don't judge lol. I tried to set this up yesterday and it did not go as planned. The upload script is giving me some errors. I'm thinking it's due to how I set up the scripts, which was done by me and my lack of knowledge.

    12/31 05:47:01 DEBUG : rclone: Version "v1.57.0" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "move" "/mnt/user/mount_mergerfs/gdrive_media_vfs/gdrive_media_vfs" "gdrive_upload_vfs:" "--user-agent=gdrive_upload_vfs" "-vv" "--buffer-size" "512M" "--drive-chunk-size" "512M" "--tpslimit" "8" "--checkers" "8" "--transfers" "4" "--order-by" "modtime,ascending" "--min-age" "15m" "--exclude" "downloads/**" "--exclude" "*fuse_hidden*" "--exclude" "*_HIDDEN" "--exclude" ".recycle**" "--exclude" ".Recycle.Bin/**" "--exclude" "*.backup~*" "--exclude" "*.partial~*" "--drive-stop-on-upload-limit" "--bwlimit" "01:00,off 08:00,15M 16:00,12M" "--bind=" "--delete-empty-src-dirs"]
    2021/12/31 05:47:01 DEBUG : Creating backend with remote "/mnt/user/mount_mergerfs/gdrive_media_vfs/gdrive_media_vfs"
    2021/12/31 05:47:01 DEBUG : Using config file from "/boot/config/plugins/rclone/.rclone.conf"
    2021/12/31 05:47:01 DEBUG : Creating backend with remote "gdrive_upload_vfs:"
    2021/12/31 05:47:01 Failed to create file system for "gdrive_upload_vfs:": didn't find section in config file
    31.12.2021 05:47:01 INFO: Not utilising service accounts.
    31.12.2021 05:47:01 INFO: Script complete
     

     

    Last half of log (for rclone_upload) in userscripts


    Sent from my iPhone using Tapatalk

    Post your upload settings, but I think you've got a ':' in "gdrive_upload_vfs:" that you shouldn't have:

     

    RcloneRemoteName="tdrive_vfs" # Name of rclone remote mount WITHOUT ':'.
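    For anyone hitting the same "didn't find section in config file" error: the script appends the ':' itself, so the name in the script must exactly match a section header in your rclone config. An illustrative pairing (the remote details below are just an example, not your actual config):

    # in the upload script - bare remote name, no ':'
    RcloneRemoteName="gdrive_upload_vfs"

    # matching section header in /boot/config/plugins/rclone/.rclone.conf
    [gdrive_upload_vfs]
    type = crypt
    remote = gdrive:crypt
    filename_encryption = standard
    directory_name_encryption = true
    password = xxxxxxxxxxxxxxxx
    password2 = xxxxxxxxxxxxxxxx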

     

  4. 11 hours ago, francrouge said:

    Edit: It's weird, but the first time I play a movie it takes 1 min or so, and after the first try it's much faster, like 5 sec.

     

     

    # create rclone mount
        rclone mount \
        $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
        --allow-other \
        --umask 000 \
        --dir-cache-time 5000h \
        --attr-timeout 5000h \
        --log-level INFO \
        --poll-interval 10s \
        --cache-dir=/mnt/user/mount_rclone/cache/$RcloneRemoteName \
        --drive-pacer-min-sleep 10ms \
        --drive-pacer-burst 1000 \
        --vfs-cache-mode full \
        --vfs-cache-max-size 100G \
        --vfs-cache-max-age 96h \
        --vfs-read-ahead 1G \
        --bind=$RCloneMountIP \
        $RcloneRemoteName: $RcloneMountLocation &

    I have no idea why your first launch is so slow (the 2nd is fast because it's coming from the local cache).

     

    I can see that you've copied my mount settings, and my 1st launch (on a 900/120 connection) is never more than 3-5s. Does your rclone config look something like this:

     

    [tdrive]
    type = drive
    scope = drive
    service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_tdrive.json
    team_drive = xxxxxxxxxxxxxxxx
    server_side_across_configs = true
    
    [tdrive_vfs]
    type = crypt
    remote = tdrive:crypt
    filename_encryption = standard
    directory_name_encryption = true
    password = xxxxxxxxxxxxxxxxxxxx
    password2 = xxxxxxxxxxxxxxxxxxxxxxxxx

     

  5. On 12/30/2021 at 10:07 AM, Akatsuki said:

    Scrolling through this topic and noticed that I am now also having the same issue as others, with hardlinks being removed randomly (as per the post below).

     

    Not 100% sure what's doing it. Possibly my mappings? I have Plex/*arrs and my qBittorrent docker all pointing /cloud to /mnt/user/mount_mergerfs/gdrive_media_vfs/. Do you think I should change this and have it pointing to /mnt/user instead? Just noticed I've lost about 2TB of seeding torrents is all :(

    Hmm, this is interesting (in a bad way, of course!). I have the same setup, and the only thing I can think of is that maybe rclone upload doesn't like hardlinks, i.e. after it's uploaded the file it deletes the original rather than respecting the hardlink? Mergerfs definitely supports hardlinks.

     

    This would explain why I haven't come across it: I only seed for a max of 14 days, whereas because of my slow upload speed rclone doesn't typically upload a file until 14 days+.

     

    I can't think of a solution, other than maybe ditching hardlinks and doing a copy to your media folder so that rclone can move the copy - see the sketch below.

     

    Worth a test to see if this is the cause?
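    For anyone testing it, a minimal sketch of the copy-instead-of-hardlink idea (paths are examples based on the mappings above) - the seed copy stays put, and rclone's move only ever deletes the independent library copy:

    # Sketch: copy instead of hardlinking, so the upload can't touch the seed.
    # Instead of:  ln "$seed" "$library"   (hardlink - one shared inode)
    # make an independent copy (paths below are examples):
    seed="/mnt/user/mount_mergerfs/gdrive_media_vfs/downloads/seeds/film.mkv"
    library="/mnt/user/mount_mergerfs/gdrive_media_vfs/Plex/Films/film.mkv"
    cp "$seed" "$library"   # rclone can now move $library; $seed keeps seeding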

  6. 49 minutes ago, ceddybu said:

    What is the best cloud storage for a 10 TB library? It looks like Google has stopped the unlimited plans; now I'd have to get an "Enterprise" plan for 10TB :(

     


     

    Not sure, but Enterprise Standard comes with unlimited storage and costs $20/mth.

  7. 4 hours ago, dja said:

    What is the proper way to set these scripts up so that the mounts do not hold the array open when trying to stop it / reboot?  Maybe I have missed a step, but I am having to ps -ax | grep rclone and kill IDs and then kill mounts with fusermount -uz

     

    What am I missing? 

    I don't have this problem (I used to, but somehow it went away) - there is a script somewhere in the thread that has helped a few users; I think it's a few pages back. The gist of it is sketched below.
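    For anyone who can't find it, those cleanup scripts are essentially a scripted version of what dja describes - a minimal sketch, with example mount paths (adjust to your own):

    # Sketch: stop rclone and lazily unmount the fuse mounts so the
    # array can stop cleanly. Paths are examples - use your own.
    pkill -f "rclone mount"
    sleep 5
    fusermount -uz /mnt/user/mount_rclone/gdrive_media_vfs     # rclone mount
    fusermount -uz /mnt/user/mount_mergerfs/gdrive_media_vfs   # mergerfs mount
    # if your mount script drops checker files, clear those too so the
    # next mount run starts fresh (path below is an example)
    rm -f /mnt/user/appdata/other/rclone/remotes/gdrive_media_vfs/mount_running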

  8. 7 minutes ago, francrouge said:

    Just curious, but what is the enhanced setting? Thx

    Sent from my Pixel 5 using Tapatalk
     

    My personal rclone mount script was a bit different to the one on GitHub - this is what I have:

     

    # create rclone mount
    	rclone mount \
    	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
    	--allow-other \
    	--umask 000 \
    	--dir-cache-time 5000h \
    	--attr-timeout 5000h \
    	--log-level INFO \
    	--poll-interval 10s \
    	--cache-dir=/mnt/user/mount_rclone/cache/$RcloneRemoteName \
    	--drive-pacer-min-sleep 10ms \
    	--drive-pacer-burst 1000 \
    	--vfs-cache-mode full \
    	--vfs-cache-max-size 100G \
    	--vfs-cache-max-age 96h \
    	--vfs-read-ahead 1G \
    	--bind=$RCloneMountIP \
    	$RcloneRemoteName: $RcloneMountLocation &

    I've updated the GitHub version just now. I think you should see playback and scanning improvements.

I had a power cut this morning that caused my server to reboot.  The server started "ok" afterwards, but everything is slow, particularly my W10 VMs, which take forever to start and are unusable because everything is mega slow.  Even trying to move the mouse around is impossible - it took over an hour just to boot to the desktop.  The extended FCP test is still running even though it's been going for a few hours.

     

    At first I thought it was a dodgy docker, as my CPU usage was at 80%, which is unusual, but even after turning docker off and only running the main VM (Buzz) that I need, the VMs are still running very slowly, which means I can't do my job.

     

    I'm hoping that there's something in my diagnostics that explains why and that someone can help me please.

     

     

    highlander-diagnostics-20211130-1431.zip

  10. 23 hours ago, francrouge said:

    1 min+, and the files are like 20 GB

     

     

     

    
    # REQUIRED SETTINGS
    RcloneRemoteName="gdrive_media_vfs" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
    RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
    RcloneMountDirCacheTime="720h" # rclone dir cache time
    LocalFilesShare="/mnt/user/mount_rclone_upload" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
    RcloneCacheShare="/mnt/user0/mount_rclone" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
    RcloneCacheMaxSize="100G" # Maximum size of rclone cache
    RcloneCacheMaxAge="336h" # Maximum age of cache files
    MergerfsMountShare="ignore" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
    DockerStart="plex" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
    MountFolders=\{"downloads/complete,downloads/incomplete,downloads/seeds,Plex/Films,Plex/Series/Francais,Plex/Series/Anglais"\} # comma separated list of folders to create within the mount
    
    # Note: Again - remember to NOT use ':' in your remote name above

     

    Looks fine - as you're not using mergerfs, the cloud files are being handled just by rclone, so things should be even simpler. Have you changed any of the mount entries further down the script? The section should look like this:

     

    # create rclone mount
    	rclone mount \
    	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
    	--allow-other \
    	--dir-cache-time 5000h \
    	--attr-timeout 5000h \
    	--log-level INFO \
    	--poll-interval 10s \
    	--cache-dir=/mnt/user/mount_rclone/cache/$RcloneRemoteName \
    	--drive-pacer-min-sleep 10ms \
    	--drive-pacer-burst 1000 \
    	--vfs-cache-mode full \
    	--vfs-cache-max-size 100G \
    	--vfs-cache-max-age 96h \
    	--vfs-read-ahead 2G \
    	--bind=$RCloneMountIP \
    	$RcloneRemoteName: $RcloneMountLocation &

     

    Can you post your rclone config as well, please, to eliminate any problems there?

  11. 3 hours ago, veritas2884 said:

    First: Thanks so much for these scripts and updates over the years. It has been amazing.

    Second: General knowledge question, what is the rclone cache doing and does expanding it beyond 400GB increase performance?

     

    If I put a dedicated SSD into the system with 1TB, will it benefit a system that is serving up 8-10 concurrent streams? I have a 1gbps up/down connection.

    To be honest, I don't think the cache helps much, as it keeps EVERY file that is accessed, i.e. if Plex does a scan/analysis it stores those files too. In my scenario the cache would have to be very big to get a decent hit rate, so I keep mine fairly small (via --vfs-cache-max-size and --vfs-cache-max-age) - enough that it can probably cope with someone rewinding a show, but not much else.
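    If you want to see what your cache is actually holding, something like this works - the path matches the --cache-dir in the mount scripts above, with an example remote name (substitute your own):

    # Sketch: inspect the VFS cache size and its biggest files.
    du -sh /mnt/user/mount_rclone/cache/gdrive_media_vfs
    # largest cached files first
    find /mnt/user/mount_rclone/cache/gdrive_media_vfs -type f -printf '%s %p\n' | sort -rn | head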

  12. 8 minutes ago, DZMM said:

    If they start enforcing the min 5 requirement (they don't now), then I think that might be the solution - users pairing up in groups of 5.

     

     

    Looks like the 5 user limit isn't enforced:

     

    https://forum.rclone.org/t/gsuite-is-evolving-to-workspace-so-prices-and-tbs-too/19594/113?u=binsonbuzz

     

    Quote:

    I'm running Workspace Enterprise Standard with one license and got unlimited space. Checked with support when signing up and they told me it should remain unlimited. Pay $20 per month, so ends up cheaper than Dropbox which does require 3 active licenses.

     

  13. 8 hours ago, Roudy said:

     

    I just got mine as well. I wasn't too worried before because I had heard that Enterprise Standard had unlimited storage for just $20 a month, but I saw today after the email that you need at least 5 people in your organization…. I'm hoping they offer some type of grandfathering for the G Suite users, but if they don't, anyone want to start a Google business?…. Hahaha

    If they start enforcing the min 5 requirement (they don't now), then I think that might be the solution - users pairing up in groups of 5.

     

     

  14. 2 hours ago, stefan416 said:

    After thinking this all over, it would seem that having everything bound to 'mount_mergerfs' would be optimal. The only problem with that is the original mount script is set up for a cloud-only storage solution instead of a hybrid local/cloud setup like mine, right?

    Read up on --exclude on the rclone forums - you can stop files being uploaded by folder, type, age, name etc. There's a sketch below.
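    For example, the upload script already excludes a few folders, and adding your own filters is just more of the same. A sketch with example paths and patterns - anything matched stays local:

    # Sketch: rclone move with filters (paths/patterns are examples).
    # --min-age skips recently modified files; each --exclude skips a pattern.
    rclone move /mnt/user/local/gdrive_media_vfs gdrive_media_vfs: \
        --min-age 15m \
        --exclude "downloads/**" \
        --exclude "seeds/**" \
        --exclude "*.partial~*" \
        --delete-empty-src-dirs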

  15. 2 hours ago, Neo_x said:

    Now this will be interesting. 

     

    thank you DZMM -> I must admit, trusting your script to perform its magic is really something

     

    owe you a beer or three by now!

    lol if you were the person who just bought me a few beers - thanks!

     

    It is a game changer - I can't imagine how much work it would be to run a server with so many disks involved.  And the cost!