Posts posted by neeiro

  1. 2 hours ago, sheldz8 said:

    Make sure the cloud drive and mergerfs mounts aren't connected, then try

     

    rm -R /mnt/user/GDrive/* && rm -R /mnt/user/GMerged/*

     

    this is what my mount script looks like https://paste.ofcode.org/35uqHBeTNxgT9jRr6eiPcRG

     

    The reason for that error is that the folder already exists, so it can't mount, and using --allow-non-empty is actually bad

    Thanks - That seems to have got it working
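
    For anyone else hitting this, I double-checked that nothing was still mounted before deleting anything - roughly like this (a quick sketch using my own paths, so adjust to your setup):

    # confirm neither mount is active before deleting (both should say "is not a mountpoint")
    mountpoint /mnt/user/GDrive/secure
    mountpoint /mnt/user/GMerged/secure
    # only then clear the folders
    rm -R /mnt/user/GDrive/* && rm -R /mnt/user/GMerged/*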

  2. 1 hour ago, sheldz8 said:

    Yes it looks correct

    OK, getting this error now (tried a reboot too)...

     

    21.08.2021 15:51:06 INFO: Creating secure mergerfs mount.
    mv: cannot move '/mnt/user/GMerged/secure' to '/mnt/user/Glocal/secure/secure': File exists
    fuse: mountpoint is not empty
    fuse: if you are sure this is safe, use the 'nonempty' mount option
    21.08.2021 15:51:06 INFO: Checking if secure mergerfs mount created.
    21.08.2021 15:51:06 CRITICAL: secure mergerfs mount failed. Stopping dockers.

     

    deleted /mnt/user/Glocal/secure/secure

     

    ran script again...

    2021/08/21 16:30:43 DEBUG : 5 go routines active
    21.08.2021 16:30:43 INFO: *** Creating mount for remote secure
    21.08.2021 16:30:43 INFO: sleeping for 5 seconds
    2021/08/21 16:30:43 NOTICE: Serving remote control on http://localhost:5572/
    2021/08/21 16:30:44 Fatal error: Can not open: /mnt/user/GDrive/secure: open /mnt/user/GDrive/secure: transport endpoint is not connected
    21.08.2021 16:30:48 INFO: continuing...
    21.08.2021 16:30:48 CRITICAL: secure mount failed - please check for problems. Stopping dockers
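
    If anyone else sees the "transport endpoint is not connected" error, my understanding is it usually means a stale rclone FUSE mount is still registered at that path, so before re-running the script I cleaned it up roughly like this (sketch only, paths from my setup; as far as I can tell the mergerfs mount ends up at <MergerfsMountShare>/<remote>):

    fusermount -uz /mnt/user/GDrive/secure   # lazily unmount the stale rclone mount (fusermount3 on newer builds)
    fusermount -uz /mnt/user/GMerged/secure  # and the mergerfs mount if it's stuck too
    # then re-run the mount script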

  3. 14 minutes ago, sheldz8 said:

    No I meant for the path to the local and mergerfs folders

    Is this correct now?

     

    My scripts look like this...

    #!/bin/bash
    
    ######################
    #### Mount Script ####
    ######################
    ## Version 0.96.9.2 ##
    ######################
    
    ####### EDIT ONLY THESE SETTINGS #######
    
    # INSTRUCTIONS
    # 1. Change the name of the rclone remote and shares to match your setup
    # 2. NOTE: enter RcloneRemoteName WITHOUT ':'
    # 3. Optional: include custom command and bind mount settings
    # 4. Optional: include extra folders in mergerfs mount
    
    # REQUIRED SETTINGS
    RcloneRemoteName="secure" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
    RcloneMountShare="/mnt/user/GDrive" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
    RcloneMountDirCacheTime="720h" # rclone dir cache time
    LocalFilesShare="/mnt/user/Glocal" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
    RcloneCacheShare="/mnt/user0/GDrive" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
    RcloneCacheMaxSize="400G" # Maximum size of rclone cache
    RcloneCacheMaxAge="336h" # Maximum age of cache files
    MergerfsMountShare="/mnt/user/GMerged" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
    DockerStart="nzbget plex sonarr radarr ombi" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
    #MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\} # comma separated list of folders to create within the mount
    # Note: Again - remember to NOT use ':' in your remote name above
    
    # OPTIONAL SETTINGS
    
    # Add extra paths to mergerfs mount in addition to LocalFilesShare
    LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder.  Enter 'ignore' to disable
    LocalFilesShare3="ignore"
    LocalFilesShare4="ignore"
    
    
    
    

    I've taken out RcloneUploadRemoteName...

     

    #!/bin/bash
    
    ######################
    ### Upload Script ####
    ######################
    ### Version 0.95.5 ###
    ######################
    
    ####### EDIT ONLY THESE SETTINGS #######
    
    # INSTRUCTIONS
    # 1. Edit the settings below to match your setup
    # 2. NOTE: enter RcloneRemoteName WITHOUT ':'
    # 3. Optional: Add additional commands or filters
    # 4. Optional: Use bind mount settings for potential traffic shaping/monitoring
    # 5. Optional: Use service accounts in your upload remote
    # 6. Optional: Use backup directory for rclone sync jobs
    
    # REQUIRED SETTINGS
    RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
    RcloneRemoteName="secure" # Name of rclone remote mount WITHOUT ':'.
    RcloneUploadRemoteName="" # If you have a second remote created for uploads put it here.  Otherwise use the same remote as RcloneRemoteName.
    LocalFilesShare="/mnt/user/Glocal" # location of the local files without trailing slash you want to rclone to use
    RcloneMountShare="/mnt/user/GDrive" # where your rclone mount is located without trailing slash  e.g. /mnt/user/mount_rclone
    MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
    ModSort="ascending" # "ascending" oldest files first, "descending" newest files first
    
    # Note: Again - remember to NOT use ':' in your remote name above
    
    # Bandwidth limits: specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. Or 'off' or '0' for unlimited.  The script uses --drive-stop-on-upload-limit which stops the script if the 750GB/day limit is achieved, so you no longer have to slow 'trickle' your files all day if you don't want to e.g. could just do an unlimited job overnight.
    BWLimit1Time="01:00"
    BWLimit1="off"
    BWLimit2Time="08:00"
    BWLimit2="15M"
    BWLimit3Time="16:00"
    BWLimit3="12M"
    
    # OPTIONAL SETTINGS
    
    # Add name to upload job
    JobName="_daily_upload" # Adds custom string to end of checker file.  Useful if you're running multiple jobs against the same remote.
    
    # Add extra commands or filters
    Command1="--exclude downloads/**"
    Command2=""
    Command3=""
    Command4=""
    Command5=""
    Command6=""
    Command7=""
    Command8=""

     

  4. 41 minutes ago, sheldz8 said:

    Edit your script and remove what I presume is your remote name 'secure', because it auto-creates that folder inside GMerged and Glocal

     

    It will always duplicate the path if you don't change the path in the script

    Thanks,

     

    I've commented out:

    #RcloneRemoteName="secure" # Name of rclone remote mount WITHOUT ':'.

     

    In both the upload and mount scripts - is this correct?

  5. Having a few teething problems with this guide:

     

    1. On a server restart or array restart, the mount script says the secure folder already exists.

    If I delete the folder GMerged/secure/secure it starts correctly - it seems like I've got another 'secure' folder inside the secure folder - both have movies, downloads etc. in them

     

    2. Radarr and Sonarr are downloading the files OK but are not moving them out of the queue - they seem to end up in the secure folder inside the secure folder.

     

    Mount script settings are

     

    # REQUIRED SETTINGS
    RcloneRemoteName="secure" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
    RcloneMountShare="/mnt/user/GDrive" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
    RcloneMountDirCacheTime="720h" # rclone dir cache time
    LocalFilesShare="/mnt/user/Glocal" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
    RcloneCacheShare="/mnt/user0/GDrive" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
    RcloneCacheMaxSize="400G" # Maximum size of rclone cache
    RcloneCacheMaxAge="336h" # Maximum age of cache files
    MergerfsMountShare="/mnt/user/GMerged" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
    DockerStart="nzbget plex sonarr radarr ombi" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
    MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\} # comma separated list of folders to create within the mount
    # Note: Again - remember to NOT use ':' in your remote name above

     

    Upload settings are 

     

    RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
    RcloneRemoteName="secure" # Name of rclone remote mount WITHOUT ':'.
    RcloneUploadRemoteName="secure" # If you have a second remote created for uploads put it here.  Otherwise use the same remote as RcloneRemoteName.
    LocalFilesShare="/mnt/user/Glocal" # location of the local files without trailing slash you want to rclone to use
    RcloneMountShare="/mnt/user/GDrive" # where your rclone mount is located without trailing slash  e.g. /mnt/user/mount_rclone
    MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
    ModSort="ascending" # "ascending" oldest files first, "descending" newest files first

     

    is the setting RcloneUploadRemoteName="secure" the problem?
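
    In case it helps, this is roughly how I've been checking whether the folders have ended up nested (nothing clever, just ls):

    ls /mnt/user/GMerged/secure/         # should list movies, tv, downloads etc.
    ls /mnt/user/GMerged/secure/secure/  # if this also exists with the same folders, the paths are doubled up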

  6. I have a 4-port NIC in my server and one in a Windows Server PC - is it possible to set up teams/bonds directly between the 2 computers without a switch?

     

    I've tried bonding on the Unraid side with IP 192.168.10.1/24

     

    and setting up a NIC team on the Windows server with IP 192.168.10.2

     

    but this doesn't seem to work - is it possible?
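
    For what it's worth, this is how I've been testing the direct link from the Unraid side once the addresses are set (assuming iperf3 is installed, e.g. via the Nerd Tools plugin, and running as a server on the Windows box - otherwise plain ping still shows whether the link is up):

    ping -c 4 192.168.10.2        # basic reachability to the Windows side
    iperf3 -c 192.168.10.2 -t 10  # throughput test against an iperf3 -s server on Windows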

     

     

  7. Hi - I've just received 2 new hard drives (8TB each) and I'm wondering the best way to incorporate them into my server. I've only been using Unraid since the start of the year so I'm still getting my head around it - I'm on Unraid Plus.

     

    Some of the drives are starting to get old and I would prefer to reduce the number of drives in the server. All drives have 2 backups on different servers. 

     

    I use the server for media and a few VMs so I don't really want to rebuild the whole system.

     

    Originally I was going to remove a 2TB drive and two 3TB drives, replace them with one 8TB, and then use the second 8TB as a backup.

     

    My current setup is:

     

    Disk 1    WDC_WD20EZRX-00D8PB0_WD-WMC4M0H9WRW2 - 2 TB (sdl) Used 1.27 TB Free 726 GB
    Disk 2    WD1003FBYX-88_LEN_WD-WCAW30CRFVZ8 - 1 TB (sdk) Used 7 GB Free 993 GB
    Disk 3    TOSHIBA_DT01ABA300_13P9ULWAS - 3 TB (sdg) Used 2.06 TB Free 942 GB
    Disk 4    TOSHIBA_DT01ABA300_Z5OP3GBGS - 3 TB (sdh) Used 2.25 TB Free 747 GB
    Disk 5    WDC_WD30EFRX-68EUZN0_WD-WCC4NNSYFXPP - 3 TB (sdi) Used 1.51 TB Free 1.49 TB
    Disk 6    WDC_WD30EFRX-68EUZN0_WD-WCC4N0695182 - 3 TB (sdj) Used 1.30 TB Free 1.70 TB

    Array of six devices 15 TB Used 8.40 TB Free 6.59 TB

    Cache    SanDisk_Ultra_II_240GB_154624442293 - 240 GB (sdf) Used 95.9 GB Free 143 GB

    No Parity


    Drive ages:
    Disk 1 - 4 years
    Disk 2 - 4 years
    Disk 3 - 7 years
    Disk 4 - 1 year
    Disk 5 - 5 years
    Disk 6 - 3 years
