Everything posted by DZMM

  1. The mount script checks if there's internet connectivity. Add a cron job so the script runs again if the test fails.

```shell
####### Checking have connectivity #######

echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if online"
ping -q -c2 google.com > /dev/null # -q quiet, -c number of pings to perform
if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
    echo "$(date "+%d.%m.%Y %T") PASSED: *** Internet online"
else
    echo "$(date "+%d.%m.%Y %T") FAIL: *** No connectivity. Will try again on next run"
    rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    exit
fi
```
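For the scheduling side, a hedged sketch of a cron entry (the script path below is a placeholder - User Scripts generates its own path per script, so point it at wherever your copy of the mount script lives):

```
*/10 * * * * bash /boot/config/plugins/user.scripts/scripts/rclone_mount/script
```

Every 10 minutes cron re-runs the script; runs that fail the connectivity check above remove the `mount_running` lock file and exit early, so the next pass retries cleanly.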
  2. I just spotted this in my upload logs:

```
2020/10/16 08:37:03 INFO  :
Transferred:        1.344T / 9.679 TBytes, 14%, 13.292 MBytes/s, ETA 1w14h39m29s
Checks:                286 / 290, 99%
Deleted:               143
Renamed:               143
Transferred:           143 / 2190, 7%
Elapsed time:   29h27m1.0s
```

I wonder if Google have removed/increased the 750GB/day transfer limit? Edit: I think it's still 750GB/day - my elapsed time is high, so I must have fluked transferring less than 750GB over any 24-hour window.
  3. My cache isn't big enough for a permanent vfs cache - choose your own preferred location. Put it to 'ignore' - I left it in as an example; in my real script all 6 are populated with various tdrives.
  4. Here's my updated mount script. I had problems adding @watchmeexplode5's pull that creates a single config file. It adds support for vfs local caching and also changes the mount defaults, which, for me, have significantly improved playback and skipping. Edit: I just spotted I've hardcoded the cache size at 400GB - I'll make this configurable when I have time to sort out github.

```shell
#!/bin/bash

######################
#### Mount Script ####
######################
### Version 0.96.8 ###
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="tdrive_vfs"
RcloneMountShare="/mnt/user/mount_rclone"
LocalFilesShare="/mnt/user/local"
MergerfsMountShare="/mnt/user/mount_mergerfs"
DockerStart="duplicati lazylibrarian LDAPforPlex letsencrypt nzbget ombi organizrv2 plex qbittorrentvpn radarr radarr-uhd radarr-collections sonarr sonarr-uhd tautulli"
MountFolders=\{"downloads/complete,downloads/seeds,documentaries/kids,documentaries/adults,movies_adults_gd,movies_kids_gd,tv_adults_gd,tv_kids_gd,uhd/tv_adults_gd,uhd/tv_kids_gd,uhd/documentaries/kids,uhd/documentaries/adults"\}

# OPTIONAL SETTINGS

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="/mnt/user/mount_rclone/gdrive_media_vfs" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder. Enter 'ignore' to disable
LocalFilesShare3=""
LocalFilesShare4=""
LocalFilesShare5=""
LocalFilesShare6=""

# Add extra commands or filters
Command1="--rc"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
RCloneMountIP="192.168.1.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
VirtualIPNumber="2" # creates eth0:x e.g. eth0:1. I create a unique virtual IP address for each mount & upload so I can monitor and traffic shape each of them

####### END SETTINGS #######

###############################################################################
#####    DO NOT EDIT ANYTHING BELOW UNLESS YOU KNOW WHAT YOU ARE DOING    #####
###############################################################################

####### Preparing mount location variables #######
LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName"
RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName"
MergerFSMountLocation="$MergerfsMountShare/$RcloneRemoteName"

####### create directories for rclone mount and mergerfs mounts #######
mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName # for script files
mkdir -p /mnt/user0/mount_rclone/cache/$RcloneRemoteName # for cache files
if [[ $LocalFilesShare == 'ignore' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Not creating local folders as requested."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Creating local folders."
    eval mkdir -p $LocalFilesLocation/"$MountFolders"
fi
mkdir -p $RcloneMountLocation
mkdir -p $MergerFSMountLocation

####### Check if script is already running #######
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting mount of remote ${RcloneRemoteName}"
echo "$(date "+%d.%m.%Y %T") INFO: Checking if this script is already running."
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting script as already running."
    exit
else
    echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
    touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
fi

####### Create Rclone Mount #######

# Check If Rclone Mount Already Created
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Success ${RcloneRemoteName} remote is already mounted."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Mount not running. Will now mount ${RcloneRemoteName} remote."
    # Creating mountcheck file in case it doesn't already exist
    echo "$(date "+%d.%m.%Y %T") INFO: Recreating mountcheck file for ${RcloneRemoteName} remote."
    touch mountcheck
    rclone copy mountcheck $RcloneRemoteName: -vv --no-traverse
    # Check bind option
    if [[ $CreateBindMount == 'Y' ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
        ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
        if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
            echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
        else
            echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for remote ${RcloneRemoteName}"
            ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
        fi
        echo "$(date "+%d.%m.%Y %T") INFO: *** Created bind mount ${RCloneMountIP} for remote ${RcloneRemoteName}"
    else
        RCloneMountIP=""
        echo "$(date "+%d.%m.%Y %T") INFO: *** Creating mount for remote ${RcloneRemoteName}"
    fi
    # create rclone mount
    rclone mount \
        --allow-other \
        --dir-cache-time 720h \
        --log-level INFO \
        --poll-interval 15s \
        --cache-dir=/mnt/user0/mount_rclone/cache/$RcloneRemoteName \
        --vfs-cache-mode full \
        --vfs-cache-max-size 400G \
        --vfs-cache-max-age 336h \
        --bind=$RCloneMountIP \
        $RcloneRemoteName: $RcloneMountLocation &

    # Check if Mount Successful
    echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 10 seconds"
    # slight pause to give mount time to finalise
    sleep 10
    echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
    if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of ${RcloneRemoteName} mount."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mount failed - please check for problems."
        docker stop $DockerStart
        find /mnt/user/appdata/other/rclone/remotes -name mount_running* -delete
        rm /mnt/user/appdata/other/scripts/running/fast_check
        exit
    fi
fi

####### Start MergerFS Mount #######

if [[ $MergerfsMountShare == 'ignore' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Not creating mergerfs mount as requested."
else
    if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount in place."
    else
        # check if mergerfs already installed
        if [[ -f "/bin/mergerfs" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs already installed, proceeding to create mergerfs mount"
        else
            # Build mergerfs binary
            echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs not installed - installing now."
            mkdir -p /mnt/user/appdata/other/rclone/mergerfs
            docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm trapexit/mergerfs-static-build
            mv /mnt/user/appdata/other/rclone/mergerfs/mergerfs /bin
            # check if mergerfs install successful
            echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 10 seconds"
            sleep 10
            if [[ -f "/bin/mergerfs" ]]; then
                echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs installed successfully, proceeding to create mergerfs mount."
            else
                echo "$(date "+%d.%m.%Y %T") ERROR: Mergerfs not installed successfully. Please check for errors. Exiting."
                docker stop $DockerStart
                find /mnt/user/appdata/other/rclone/remotes -name mount_running* -delete
                rm /mnt/user/appdata/other/scripts/running/fast_check
                exit
            fi
        fi
        # Create mergerfs mount
        echo "$(date "+%d.%m.%Y %T") INFO: Creating ${RcloneRemoteName} mergerfs mount."
        # Extra Mergerfs folders - empty slots are treated like 'ignore'
        if [[ $LocalFilesShare2 != 'ignore' && $LocalFilesShare2 != '' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare2} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare2=":$LocalFilesShare2"
        else
            LocalFilesShare2=""
        fi
        if [[ $LocalFilesShare3 != 'ignore' && $LocalFilesShare3 != '' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare3} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare3=":$LocalFilesShare3"
        else
            LocalFilesShare3=""
        fi
        if [[ $LocalFilesShare4 != 'ignore' && $LocalFilesShare4 != '' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare4} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare4=":$LocalFilesShare4"
        else
            LocalFilesShare4=""
        fi
        if [[ $LocalFilesShare5 != 'ignore' && $LocalFilesShare5 != '' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare5} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare5=":$LocalFilesShare5"
        else
            LocalFilesShare5=""
        fi
        if [[ $LocalFilesShare6 != 'ignore' && $LocalFilesShare6 != '' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare6} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare6=":$LocalFilesShare6"
        else
            LocalFilesShare6=""
        fi
        # mergerfs mount command
        mergerfs $LocalFilesLocation:$RcloneMountLocation$LocalFilesShare2$LocalFilesShare3$LocalFilesShare4$LocalFilesShare5$LocalFilesShare6 $MergerFSMountLocation -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
        # check if mergerfs mount successful
        echo "$(date "+%d.%m.%Y %T") INFO: Checking if ${RcloneRemoteName} mergerfs mount created."
        if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount created."
        else
            echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mergerfs mount failed."
            docker stop $DockerStart
            find /mnt/user/appdata/other/rclone/remotes -name mount_running* -delete
            rm /mnt/user/appdata/other/scripts/running/fast_check
            exit
        fi
    fi
fi

####### Starting Dockers That Need Mergerfs Mount To Work Properly #######
echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
docker start $DockerStart

rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
echo "$(date "+%d.%m.%Y %T") INFO: ${RcloneRemoteName} Script complete"
exit
```
  5. Prefer cache is a preference - go with wherever you want files stored.

     Correct. Advice is to run using user scripts on a schedule e.g. every 2-5 mins, so if for whatever reason the mount fails, it will stop the dockers to try and avoid any meltdowns.

     Since rclone 1.53 (I think) full works better - before it was a bad idea. I've updated my local scripts, but I had a few problems updating github so that's a bit behind (it also takes advantage of the new cache feature, which makes a BIG difference). I'll try to update github again this weekend.

     Correct - rclone added a nice feature a few releases back that stops a transfer job when the 750GB limit is hit. SAs allow a different user/quota to be used on the next run, so if you set a cron job you can do multiple lots of 750GB/day e.g. 16 SAs allow you to max out a 1Gbps upload 24/7.

     Yeah, you won't get the full benefit of mergerfs. I would honestly just do this for all your dockers: /user --> /mnt/user, and then within radarr add /user/gDrive/ShareName/Movies etc as your media locations. I just find it easier to do this as all dockers will always match up, and I think the convention of mapping /downloads --> xxxxxx, /media --> xxxxx etc just makes life more complicated. Moving is easy:

     1. add the new /user --> /mnt/user mapping
     2. add /user/whatever as a new media location in radarr
     3. go to Movie Editor, create a custom filter e.g. show movies with /ShareName/Movies etc in path
     4. select all and move them to /user/whatever within Radarr
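The SA rotation above can be sketched in shell. This is a hypothetical, minimal version of the idea (not the actual upload script): a counter file remembers which service account the last run used, and each run advances to the next JSON key before invoking rclone. The file names and paths are assumptions.

```shell
#!/bin/bash
# Hypothetical service-account rotation: cycle through 16 SA key files so each
# cron run uploads under a fresh 750GB/day quota.
CounterFile="/tmp/rclone_sa_counter"   # remembers the last SA used
MaxSA=16                               # number of service-account JSON files

# Read last counter (0 if this is the first run), then advance, wrapping 1..16
Count=$(cat "$CounterFile" 2>/dev/null || echo 0)
Count=$(( (Count % MaxSA) + 1 ))
echo "$Count" > "$CounterFile"

# Assumed key location/naming - adjust to wherever your SA files live
ServiceAccount="/mnt/user/appdata/other/rclone/sa_gdrive${Count}.json"
echo "Using service account: $ServiceAccount"

# The actual upload would then pass the key to rclone, e.g.:
# rclone move /mnt/user/local/tdrive_vfs tdrive_vfs: \
#     --drive-service-account-file="$ServiceAccount" --drive-stop-on-upload-limit
```

With a cron schedule, each run picks the next account automatically, so 16 SAs give you up to 16 x 750GB per day.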
  6. Thanks. I was too scared to click on the upgrade link before.... I went for it this time and it's spitting out £15.30/mth in the UK - strangely no discount for annual. I'm paying £7.82/mth at the moment - double the price, but still great value for over 600TB of storage!! I'm not upgrading either yet - unless I can find a coupon for Enterprise Standard. It's a good sign that Google aren't shutting down the facility - there are bigger things behind this change, rather than trying to address users like us. I still believe we are a drop in the ocean Vs some institutions e.g. research universities.
  7. Ok, logic fail in my script. If you don't want to create a mergerfs mount you need to set: MergerfsMountShare="ignore". You haven't, so it's trying to create a mergerfs mount - but it can't, because LocalFilesShare is set to ignore. To fix, either set MergerfsMountShare to ignore, or add a location for LocalFilesShare.
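In terms of the script's settings block, the two fixes look like this (the paths are just the script's example values):

```shell
# Option A: you only want the plain rclone mount - skip mergerfs entirely
MergerfsMountShare="ignore"

# Option B: you want the merged view - give both settings real local paths
LocalFilesShare="/mnt/user/local"
MergerfsMountShare="/mnt/user/mount_mergerfs"
```

Either option is valid on its own; the broken combination is a real MergerfsMountShare paired with LocalFilesShare="ignore".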
  8. We've never been able to pinpoint the problem as it seems intermittent. I haven't had problems for a few months now.
  9. I think you've got a formatting error in your script options - post your script please
  10. Can you post a copy of your email please, as mine didn't have that detail
  11. No changes needed - just use the correct remote names
  12. I got an email yesterday with my migration options, but it's voluntary at the moment, so I'd advise everyone hangs tight for now. On reddit some users have said the Enterprise unlimited price is $20/mth, although mine was listed as "please contact sales"
  13. Looks like existing users/legacy accounts will be ok..
  14. Ahhh, the logic in my upload script isn't quite right. It looks for the mountcheck file in the mount location - if you don't mount, then there is no mountcheck file!
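The check in question can be sketched as follows. This is a hedged, simplified version of what the upload script does (variable names mirror the mount script; they're not the exact upload-script code): it looks for the mountcheck file in the mount location, so if the remote was never mounted the file can't exist and the upload should be skipped.

```shell
#!/bin/bash
# Simplified sketch of the upload script's mount check (paths assumed from
# the mount script's defaults - adjust to your setup).
RcloneMountShare="/mnt/user/mount_rclone"
RcloneRemoteName="tdrive_vfs"
RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName"

if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    MountOK="yes"
    echo "INFO: mountcheck found - safe to upload"
else
    MountOK="no"
    echo "INFO: no mountcheck file - run the mount script first"
fi
```

So the logic flaw: anyone uploading without mounting trips the "no mountcheck" branch every time, even though nothing is actually wrong.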
  15. I don't see why it would cause a problem. To be safe, I'd probably add the SMB share using UD.
  16. If it was working before, then my suspicion is that beta 29 could be the culprit. I've just had to rollback from beta29 as I've had 3 machine lockouts in 24 hours. Re speeds - rather than using UDs, why not make your download drive a new pool drive with a cache/pool-only share 'downloads'? For me the biggest benefit of the 6.9 betas is having more options for drives with shares that don't need to touch the array (even though I don't have a parity drive), avoiding potential R/W slave issues, without having to resort to UD. E.g. here's my current structure, where only my cache has shares that are moved to the array
  17. You need to specify where you want the mergerfs mount to be. You don't have to create your folders this way - I was just trying to make life easier for new users. I think though leaving this empty does create problems - so just list two or more of your current folders (again, I think there's a bug if you only list one folder)
  18. I woke up to an unresponsive system this morning. My Windows 10 VMs were locked, but my pfsense VM I think was still running, as I had Wi-Fi connectivity on other devices. I couldn't connect to unRAID though - even with my laptop via ethernet. I've had to rollback to beta25, as that's 3 lockups in 24 hours on beta29, whereas I had no issues with beta25. Diags attached after reboot, so not sure if they will help - I couldn't get the previous diags as I had to shutdown with the hardware power button. highlander-diagnostics-20200930-0826.zip
  19. I've had 2 lockups today on v29 which is very rare - had to use power button to shutdown as totally locked out/full crash. I'm not sure if the diags after boot will shed any light highlander-diagnostics-20200929-2245.zip
  20. Phew - I'd only just started hours of moving files off my pools when I saw this. It's probably worth adding to the main post as other people will be in the same boat i.e. luckily did this when they created new pools.
  21. Yes. If you want to add more folders to an existing mergerfs mount, you can add the extra paths in the script
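In the script's settings block, extra paths go in the spare LocalFilesShare slots; each populated slot becomes another mergerfs branch. The first value below is the script's own example; the UD path is hypothetical:

```shell
# Extra mergerfs branches - the first line is the script's shipped example,
# the second is a hypothetical unassigned-devices path, and unused slots
# stay on 'ignore'.
LocalFilesShare2="/mnt/user/mount_rclone/gdrive_media_vfs"
LocalFilesShare3="/mnt/disks/ud_media/tv"
LocalFilesShare4="ignore"
```

On the next run of the mount script, the new paths are appended to the mergerfs branch list.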
  22. Dunno - have you upgraded unRAID? If not, maybe upgrade unRAID and make your download drive a pool drive rather than UD.
  23. I'm guessing that Mergerfs doesn't like using an unassigned drive. Maybe try using a normal user share and see if the error goes away?
  24. I don't think so. My scripts do first found (ff) so it would write the file to LocalFileShare1 - I'm not sure what mergerfs does if LocalFileShare1 is full. If LocalFileShare1 was a normal unRAID share, then UnRAID would control where the file is written. You'll have to read up on mergerfs options, but that's outside the scope of this thread.