Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


17 hours ago, slimshizn said:

Yeah, we have noticed a jump in electricity costs here as well since a couple of years ago. I'm doing some serious monitoring of my setup with Grafana to see what I can do to reduce that overall.

Look into Powertop if you haven't already. It can make quite a difference depending on your build(s). I don't think moving to a seedbox for electricity costs will be worth it. You'd still have to run dual systems like DZMM does, or you would have to use just one server from Hetzner, for example, and do everything there. But you're easily looking at 40-50 euros per month already, and then you still won't have a comparable system (in my case at least).

 

15 hours ago, maxse said:

Wow, my mind is still blown with all this lol.

I think I'm just going to stick to Unraid; it's too confusing to learn the paths on Synology, and I don't want to spend all that time learning and then have no place to troubleshoot.

 

Quick question: will this work with an education account? I currently use it for backup with rclone, but I haven't seen anyone use it for streaming like this. Will this work, or do some more features need to be enabled on an enterprise Gdrive that I can't use on an edu account?

 

And lastly, if I name my folders the same way, it's basically just a copy/paste of the scripts, correct?

 

Can someone please post a screenshot of the paths to set in the *arr apps and Sab? I remember having an issue years ago and followed SpaceInvader One's guide, but now that the paths are going to be different, I want to make sure the apps all know where to look...

 

*edit*

Also, can someone explain the BW limits and how the time windows work? I don't understand it exactly. Like, if I don't want it time-based but just want to upload at 20MB/s until 750GB is reached, starting at say 2AM, how would I set the parameters?

You removed your earlier posts, I think, regarding the 8500T. Just to be clear: your Synology is only good for serving data, it will never be able to transcode anything efficiently. Maybe a single 1080p stream, but not multiple and definitely not 4K. If you actually plan on running dockers and especially serving transcodes, I would personally only consider a Synology as an offsite backup solution.

 

Regarding the education account: I suspect you are not the admin/owner of that? Edu accounts are, as far as I know, unlimited as well, but I don't think you can create Team Drives there as a non-admin. I don't know what the storage limits are for your personal drive in that case. With the change from G Suite to Google Workspace it seems like the personal Gdrive became 1 TB and the Team Drives became unlimited, so you would have to test whether you can store over 1 TB on your personal drive if you don't have access to Team Drives. You also don't have access to Service Accounts, so you will only have your personal account with access to your personal Gdrive, and you have to be careful with API hits/quotas there. It should be fine if you just build up slowly.

 

If you name drives/folders the same it is indeed a copy/paste, aside from the parameters you have to choose (backup yes/no, etc.). Just always do test runs first; that's rule 1 of starting with this stuff.
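For those test runs, a dry run is a safe way to see what a script would actually transfer before letting it loose (a rough sketch; the remote name and paths below are only examples, not necessarily yours):

#!/bin/bash
# Show what WOULD be copied without transferring anything (remote/paths are placeholders).
rclone copy /mnt/user/local/gdrive_media_vfs/movies gdrive_media_vfs:movies --dry-run -vv
# When the output looks right, drop --dry-run (or switch to 'rclone move' for uploads).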

 

Regarding the paths: since you will probably only have 1 mount, you just have to remove all the custom directories/names of the docker. So Plex often has /tv and /movies and such in its docker template. Remove those, replace them in the dockers in your workflow with a single /user path, and point that to /mnt/user/mount_unionfs/gdrive or whatever the name of your mount will be. This is important for mergerFS, since it will be able to hardlink, which makes moving data a lot faster (like importing from Sab to Sonarr).
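Roughly what that single mapping looks like if you spell it out as a docker run (on Unraid you'd set this in the template instead; the image name, port and appdata path here are only examples):

docker run -d --name=sonarr \
  -e PUID=99 -e PGID=100 \
  -v /mnt/user/appdata/sonarr:/config \
  -v /mnt/user/mount_unionfs/gdrive:/user \
  -p 8989:8989 \
  linuxserver/sonarr
# Inside Sonarr/Plex/Sab every path then starts with /user
# (e.g. /user/downloads/complete and /user/tv), so an import is a
# rename/hardlink on one filesystem instead of a copy between mappings.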

 

BW limits are the limits at which you will upload. You'll have to look at the upload speed of your WAN connection and then decide what you want to do. With Google Drive, rclone now has a flag to respect the Google upload quota (--drive-stop-on-upload-limit). The situation you described is not really possible, I think. You would just set the bwlimit to 20MB/s and leave it running; if it hits the quota it will stop thanks to the above flag. Cancelling an upload job while it's running is not really possible or safe to do without risking data loss. So you either blast through your 750GB and let the upload stop, or you set a bwlimit that it can run on continuously.
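A rough sketch of what the upload side ends up doing with those two things combined (remote name and paths are placeholders; the real upload script wraps this in a lot more logic):

#!/bin/bash
# Move local files to the cloud at a steady 20MB/s and stop for the day
# once Google's 750GB/day upload quota is hit.
rclone move /mnt/user/local/gdrive_media_vfs gdrive_media_vfs: \
  --bwlimit 20M \
  --drive-stop-on-upload-limit \
  --delete-empty-src-dirs \
  -vv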

 

But I would first advise checking the edu account limits, then configuring the mount itself with encryption, and then seeing if you can actually mount it and get mergerfs running. After that you can start fine-tuning all the limits and such.
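A minimal manual test of the mount, assuming you have already created an encrypted remote with rclone config (the remote name and mount path are placeholders):

#!/bin/bash
# Quick sanity check that the encrypted remote mounts and decrypts correctly.
mkdir -p /mnt/user/mount_rclone/gdrive_media_vfs
rclone mount gdrive_media_vfs: /mnt/user/mount_rclone/gdrive_media_vfs \
  --allow-other --vfs-cache-mode writes &
sleep 5
ls /mnt/user/mount_rclone/gdrive_media_vfs   # should list the decrypted contents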

11 hours ago, Kaizac said:

Regarding the paths: since you will probably only have 1 mount, you just have to remove all the custom directories/names of the docker. So Plex often has /tv and /movies and such in its docker template. Remove those, replace them in the dockers in your workflow with a single /user path, and point that to /mnt/user/mount_unionfs/gdrive or whatever the name of your mount will be. This is important for mergerFS, since it will be able to hardlink, which makes moving data a lot faster (like importing from Sab to Sonarr).

 

BW limits are the limits at which you will upload. You'll have to look at the upload speed of your WAN connection and then decide what you want to do. With Google Drive, rclone now has a flag to respect the Google upload quota (--drive-stop-on-upload-limit). The situation you described is not really possible, I think. You would just set the bwlimit to 20MB/s and leave it running; if it hits the quota it will stop thanks to the above flag. Cancelling an upload job while it's running is not really possible or safe to do without risking data loss. So you either blast through your 750GB and let the upload stop, or you set a bwlimit that it can run on continuously.

 

Thank you!

So I don't plan to do Team Drives or those SAs; I just want to keep things as simple as possible. I don't want to spend time learning about them, as that's where I see people post issues: it's with the service accounts (not even sure what they are, so I don't want to get into it). I'm okay with the 750GB/day and things just taking longer.

 

The edu account is still unlimited for a while. I just don't know if I can make a client ID since I'm not the admin for it?

I would not be using my personal gdrive for this project...

 

I think I'm just going to use Unraid for this. I need two 3.5" drives though, to store family videos and photos, and the SFF Dell is too small for that. The reason for 2 drives is to have them in a RAID mirror, just in case...

 

# Bandwidth limits: specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. Or 'off' or '0' for unlimited. The script uses --drive-stop-on-upload-limit which stops the script if the 750GB/day limit is achieved, so you no longer have to slow 'trickle' your files all day if you don't want to e.g. could just do an unlimited job overnight.

BWLimit1Time="01:00"

BWLimit1="off"

BWLimit2Time="08:00"

BWLimit2="15M"

BWLimit3Time="16:00"

BWLimit3="12M"

 

This is the part I didn't understand. Do I just erase what I don't need? So if I want to upload at 200Mbps until the 750GB limit is reached, how would I change the above?

Do I just erase what I don't need?

BWLimit1=20M --drive-stop-on-upload-limit

and erase all the other lines?

 

Also still confused about the paths. So for Plex I erase the /tv and /movies

and add a path /user that points to /mnt/user/mount_unionfs/gdrive, and then just select the individual movies or shows subfolder within the Plex program itself when I add libraries, correct?

 

Now for, say, Radarr: there's the /data folder (where downloads go) and the /media folder where they get moved to. Do I just have /data point to /mnt/user/mount_unionfs/gdrive and /media point to /mnt/user/mount_unionfs/gdrive/Movies?

like that?

 

Thank you again sooo much!

 

Edited by maxse
10 hours ago, maxse said:

Thank you!

So I don't plan to do Team Drives or those SAs; I just want to keep things as simple as possible. I don't want to spend time learning about them, as that's where I see people post issues: it's with the service accounts (not even sure what they are, so I don't want to get into it). I'm okay with the 750GB/day and things just taking longer.

 

The edu account is still unlimited for a while. I just don't know if I can make a client ID since I'm not the admin for it?

I would not be using my personal gdrive for this project...

 

I think I'm just going to use Unraid for this. I need two 3.5" drives though, to store family videos and photos, and the SFF Dell is too small for that. The reason for 2 drives is to have them in a RAID mirror, just in case...

 

# Bandwidth limits: specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. Or 'off' or '0' for unlimited. The script uses --drive-stop-on-upload-limit which stops the script if the 750GB/day limit is achieved, so you no longer have to slow 'trickle' your files all day if you don't want to e.g. could just do an unlimited job overnight.

BWLimit1Time="01:00"

BWLimit1="off"

BWLimit2Time="08:00"

BWLimit2="15M"

BWLimit3Time="16:00"

BWLimit3="12M"

 

This is the part I didn't understand. Do I just erase what I don't need? So if I want to upload at 200Mbps until the 750GB limit is reached, how would I change the above?

Do I just erase what I don't need?

BWLimit1=20M --drive-stop-on-upload-limit

and erase all the other lines?

 

Also still confused about the paths. So for Plex I erase the /tv and /movies

and add a path /user that points to /mnt/user/mount_unionfs/gdrive, and then just select the individual movies or shows subfolder within the Plex program itself when I add libraries, correct?

 

Now for, say, Radarr: there's the /data folder (where downloads go) and the /media folder where they get moved to. Do I just have /data point to /mnt/user/mount_unionfs/gdrive and /media point to /mnt/user/mount_unionfs/gdrive/Movies?

like that?

 

Thank you again sooo much!

 

 

You won't be able to make a client ID indeed, which will mean a drop in performance. You should still be able to configure your rclone mount to test things. When you decide this is the way to go for you, I would honestly think about just getting an enterprise Google Workspace account. It's 17 euros per month and you won't be running the risk of getting into problems with your edu account's owner. But I can't see the depths of your wallet ;).

 

For the BWLimit settings you would just put everything on 20M; don't erase anything and don't add any flags there. The --drive-stop-on-upload-limit flag is already applied further below in the list of flags used for the upload.

 

Quote

BWLimit1Time="01:00"

BWLimit1="20M"

BWLimit2Time="08:00"

BWLimit2="20M"

BWLimit3Time="16:00"

BWLimit3="20M"

 

For Plex, yes, you just use your subfolders for libraries. So /user/Movies and /user/TV-Shows.

 

For all dockers in your workflow, like Radarr, Sonarr, Sab/NZBGet and maybe Bazarr, you remove their own /data or whatever paths they use and add the /user path as well. So Radarr should also look into /user, but then probably /user/downloads/movies, and your Sab will download to /user/downloads/movies so Radarr can import from there. So don't use the /media and /data paths, because then you won't have the speed advantage of mergerfs.

 

Just be aware that when you remove these paths and put in /user, you also have to check inside the docker (software) that the /user path is actually used. If it's configured to use /data, then you have to change that to /user as well.
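As a rough checklist of what that looks like inside the apps (the folder names follow the examples above and are only illustrative):

# Sab    : Completed Download Folder     -> /user/downloads/movies
# Radarr : imports completed downloads from /user/downloads/movies
#          Root Folder for the library   -> /user/Movies
# Sonarr : Root Folder for the library   -> /user/TV-Shows
# Every app now refers to the same files through the same /user mapping,
# never through its old /data or /media path.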

Edited by Kaizac
On 6/21/2022 at 4:53 AM, Bolagnaise said:

I had been experiencing permission issues since upgrading to 6.10 as well, and I think I finally fixed all the issues.

 

RCLONE PERMISSION ISSUES:

Fix 1: Prior to mounting the rclone folder using User Scripts, run 'Docker Safe New Permissions' from Tools for all your folders. Then mount the rclone folders using the script.

 

 

I no longer recommend using the information below; running Docker Safe New Permissions should resolve most issues.

 

Fix 2: If that doesn't fix your issues, add ownership/umask flags to the 'create rclone mount' section of the mount script, or add them to the extra parameters section; this will mount the rclone folders as user ROOT with a UMASK of 000.

 

Alternatively, you could mount it as USER:NOBODY with uid 99 and gid 100.
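For reference, the two variants would look roughly like this on the mount command (remote name and mount point are placeholders; only the ownership/umask flags are the point here):

# Variant 1: mount owned by root, wide-open permissions
rclone mount gdrive_media_vfs: /mnt/user/mount_rclone/gdrive_media_vfs \
  --allow-other --umask 000 &

# Variant 2: mount owned by nobody:users (99:100), which matches most Unraid dockers
# (later posts in this thread also add --umask 002)
rclone mount gdrive_media_vfs: /mnt/user/mount_rclone/gdrive_media_vfs \
  --allow-other --uid 99 --gid 100 &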

 

DOCKER CONTAINER PERMISSIONS ISSUES FIX (SONARR/RADARR/PLEX)

 

Fix 1: Change PUID and PGID to user ROOT 0:0 and add an environment variable for UMASK of 000 (NUCLEAR STRIKE OPTION)

[screenshot: docker template with PUID/PGID set to 0 and a UMASK environment variable of 000]

 

Fix 2: Keep PUID and PGID at 99:100 (USER:NOBODY) and, using the User Scripts plugin, update the permissions of the docker containers' paths with the following script. Change the /mnt/ path to reflect your Docker path setup, and rerun it for each container's path after changing it.

#!/bin/bash
for dir in "/mnt/cache/appdata/Sonarr/"
do
    echo "$dir"
    chmod -R ug+rw,ug+X,o-rwx "$dir"
    chown -R nobody:users "$dir"
done

 

 

IMPORTANT PLEX UPDATE:

 

After running Docker Safe New Permissions, if you experience EAC3 or audio transcode errors where the video never starts to play, it is because your Codecs folder and/or your mapped /transcode path does not have the correct permissions.

 

To rectify this issue, stop your Plex container, navigate to your Plex appdata folder and delete the Codecs folder. Then navigate to your mapped /transcode folder, if you are using one, and also delete that. Restart your Plex container and Plex will redownload your codecs and recreate your mapped transcode folder with the correct permissions.
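From the command line that boils down to something like this (the container name and paths are placeholders; the appdata layout depends on your Plex container):

#!/bin/bash
# Stop Plex, remove the Codecs folder and the mapped transcode folder,
# then start Plex so it recreates both with the correct permissions.
docker stop plex
rm -rf "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Codecs"
rm -rf /mnt/user/transcode/*
docker start plex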

 

Hi @Bolagnaise: I have updated Unraid to 6.10 stable in order to avoid any problems, but I have problems with Sonarr not moving files because of permissions.

 

My folders are showing this.

 

Which of your fixes should I use?

Should I just add

--uid 98 \
--gid 99 \

 

to my scripts, or do I need to do some extra work?

 

[screenshot: folder permissions]

 

1 hour ago, Bjur said:

 

Hi @Bolagnaise: I have updated Unraid to 6.10 stable in order to avoid any problems, but I have problems with Sonarr not moving files because of permissions.

 

My folders are showing this.

 

Which of your fixes should I use?

Should I just add

--uid 98 \
--gid 99 \

 

to my scripts, or do I need to do some extra work?

 

[screenshot: folder permissions]

 

You don't show the owners of your media folders?

 

But I think it's an issue with the docker paths. You need to show your docker templates for Sab and Sonarr.

On 8/28/2022 at 7:58 AM, Kaizac said:

Look into Powertop if you haven't already. It can make quite a difference depending on your build(s). I don't think moving to a seedbox for electricity costs will be worth it. You'd still have to run dual systems like DZMM does, or you would have to use just one server from Hetzner, for example, and do everything there. But you're easily looking at 40-50 euros per month already, and then you still won't have a comparable system (in my case at least).

 

Awesome, thank you for the input; I will be looking this over. Much appreciated!

21 hours ago, Kaizac said:

You don't show the owners of your media folders?

 

But I think it's an issue with the docker paths. You need to show your docker templates for Sab and Sonarr.

This is how it looks.

But it was never a problem with 6.9.2; 6.10 is the problem.

[screenshots: docker template path mappings]

 

Sonarr is PUID 99 and PGID 100

 

[screenshots: Sonarr docker template]

16 minutes ago, Bjur said:

This is how it looks.

But it was never a problem with 6.9.2; 6.10 is the problem.

[screenshots: docker template path mappings]

 

Sonarr is PUID 99 and PGID 100

 

[screenshots: Sonarr docker template]

Well I'm not a big fan of your mappings. I don't really see direct conflicts there, but I personally just removed all the specific paths like /incomplete and such. I'm talking about the media paths here, not paths like the /dev/rtc one.

And only use /user (or in your case /mnt/user). And then within the docker use that as start point to get to the right folder. Much easier to prevent any path issues, but that's up to you.

 

I also had the permission issues, so what I did was add these flags to my mount script (not the merger, but the actual mount script for your rclone mount). Those root/root folders are the issue, since Sonarr is not running as root.

--uid 99 --gid 100

And in case you didn't have it already (I didn't): --umask 002

 

Add these and reboot, see if that solves the importing issue.
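In the mount part of the user script, that ends up looking roughly like this (trimmed down for illustration; only the three added flags are the point, and they need to sit before the final line that backgrounds the mount):

    rclone mount \
    --allow-other \
    --uid 99 \
    --gid 100 \
    --umask 002 \
    --dir-cache-time 720h \
    --vfs-cache-mode writes \
    --bind=$RCloneMountIP \
    $RcloneRemoteName: $RcloneMountLocation &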

57 minutes ago, Kaizac said:

Well I'm not a big fan of your mappings. I don't really see direct conflicts there, but I personally just removed all the specific paths like /incomplete and such. I'm talking about the media paths here, not paths like the /dev/rtc one.

And only use /user (or in your case /mnt/user). And then within the docker use that as start point to get to the right folder. Much easier to prevent any path issues, but that's up to you.

 

I also had the permission issues, so what I did was add these flags to my mount script (not the merger, but the actual mount script for your rclone mount). Those root/root folders are the issue, since Sonarr is not running as root.

--uid 99 --gid 100

And in case you didn't have it already (I didn't): --umask 002

 

Add these and reboot, see if that solves the importing issue.

Thanks for the answer.

I'm not sure I follow. What's wrong with my mappings?

When I start a download, it puts it in the local folder before uploading it.

The dockers' paths are mapped to the media folder in the mount_merger folders.

All of this is what I was advised in here, and it has worked before.

The /dev/rtc path is only shown once in the template, but that's default.

 

The other screenshots from SSH show the direct paths, as asked for.

I'm not an expert, so please share advice on what I should map differently. 

9 minutes ago, Bjur said:

Thanks for the answer.

I'm not sure I follow. What's wrong with my mappings?

When I start a download, it puts it in the local folder before uploading it.

The dockers' paths are mapped to the media folder in the mount_merger folders.

All of this is what I was advised in here, and it has worked before.

The /dev/rtc path is only shown once in the template, but that's default.

 

The other screenshots from SSH show the direct paths, as asked for.

I'm not an expert, so please share advice on what I should map differently. 

With the mappings, what I'm saying is that you can remove the paths /incomplete and /downloads for Sab and /downloads and /series for Sonarr, and just replace those with 1 path to /mnt/user/. Then inside the dockers, for example Sab, you just point your incomplete folder to /mnt/user/downloads/incomplete instead of /incomplete. That way you keep the paths uniform and the dockers look at the same file through the same route (this is often a reason for file errors).

 

What I find confusing when looking at your screenshots again is that you point to the local folders. Why are you not using the /mnt/unionfs/ or /mnt/mergerfs/ folders?

 

9 minutes ago, Bjur said:

PS: in regards to the mount scripts.

Is it the mount script in User Scripts that mounts the shares? Otherwise I don't know where it's located.

I don't even know where the merger mount is.

I'm talking about the user script, but within it (depending on what your script looks like) you have a part that is the mount, and after the mount you merge the mount (cloud) with a local folder. So I was talking about the 2/3 flags that you need to add to the mount part of your user script. If you use the full template from DZMM then you can just add those 2 flags (--uid 99 --gid 100) in the list with all the other flags.

 

Hope this makes more sense to you?

1 hour ago, Kaizac said:

With the mappings, what I'm saying is that you can remove the paths /incomplete and /downloads for Sab and /downloads and /series for Sonarr, and just replace those with 1 path to /mnt/user/. Then inside the dockers, for example Sab, you just point your incomplete folder to /mnt/user/downloads/incomplete instead of /incomplete. That way you keep the paths uniform and the dockers look at the same file through the same route (this is often a reason for file errors).

 

What I find confusing when looking at your screenshots again is that you point to the local folders. Why are you not using the /mnt/unionfs/ or /mnt/mergerfs/ folders?

 

I'm talking about the user script, but within it (depending on what your script looks like) you have a part that is the mount, and after the mount you merge the mount (cloud) with a local folder. So I was talking about the 2/3 flags that you need to add to the mount part of your user script. If you use the full template from DZMM then you can just add those 2 flags (--uid 99 --gid 100) in the list with all the other flags.

 

Hope this makes more sense to you?

Thanks Kaizac. In regards to the paths, that makes sense with /mnt/user only. The only problem I see if I do it that way is that Sonarr, and especially Plex, would need to refresh the libraries because of the file path change.

One of my scripts looks like this below.

 

 

I'm guessing the uid/gid/umask should go under this section?:

 

 --drive-pacer-min-sleep 10ms \
    --drive-pacer-burst 1000 \
    --vfs-cache-mode full \
        --vfs-read-chunk-size 256M \
    --vfs-cache-max-size 100G \
    --vfs-cache-max-age 96h \
    --vfs-read-ahead 2G \

 

What I find confusing when looking at your screenshots again is that you point to the local folders. Why are you not using the /mnt/unionfs/ or /mnt/mergerfs/ folders?

 

The reason is that I point all downloaded stuff to the local folder, and at night it will upload.

If I do it like you write, won't it take longer to move and use upload bandwidth initially, or what is the benefit?

 

 

 

 

#!/bin/bash

######################
#### Mount Script ####
######################
### Version 0.96.6 ###
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="googleSh_crypt" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="sonarr" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
LocalFilesShare="/mnt/user/local" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
MountFolders=\{"media/tv,downloads/complete"\} # comma separated list of folders to create within the mount

# Note: Again - remember to NOT use ':' in your remote name above

# OPTIONAL SETTINGS

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder.  Enter 'ignore' to disable
LocalFilesShare3="ignore"
LocalFilesShare4="ignore"

# Add extra commands or filters
Command1=""
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

--allow-other \
    --dir-cache-time 5000h \
    --attr-timeout 5000h \
    --log-level INFO \
    --poll-interval 10s \
    --cache-dir=/mnt/user/mount_rclone/cache/$RcloneRemoteName \
    --drive-pacer-min-sleep 10ms \
    --drive-pacer-burst 1000 \
    --vfs-cache-mode full \
        --vfs-read-chunk-size 256M \
    --vfs-cache-max-size 100G \
    --vfs-cache-max-age 96h \
    --vfs-read-ahead 2G \
    --bind=$RCloneMountIP \
    $RcloneRemoteName: $RcloneMountLocation &

CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
RCloneMountIP="192.168.1.253" # My unraid IP is 172.30.12.2 so I create another similar IP address
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
VirtualIPNumber="3" # creates eth0:x e.g. eth0:1.  I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them

####### END SETTINGS #######

###############################################################################
#####   DO NOT EDIT ANYTHING BELOW UNLESS YOU KNOW WHAT YOU ARE DOING   #######
###############################################################################

####### Preparing mount location variables #######
LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName" # Location for local files to be merged with rclone mount
RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location for rclone mount
MergerFSMountLocation="$MergerfsMountShare/$RcloneRemoteName" # Rclone data folder location

####### create directories for rclone mount and mergerfs mounts #######
mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName #for script files
if [[  $LocalFilesShare == 'ignore' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Not creating local folders as requested."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Creating local folders."
    eval mkdir -p $LocalFilesLocation/"$MountFolders"
fi
mkdir -p $RcloneMountLocation
mkdir -p $MergerFSMountLocation

#######  Check if script is already running  #######
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting mount of remote ${RcloneRemoteName}"
echo "$(date "+%d.%m.%Y %T") INFO: Checking if this script is already running."
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting script as already running."
    exit
else
    echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
    touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
fi

#######  Create Rclone Mount  #######

# Check If Rclone Mount Already Created
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Success ${RcloneRemoteName} remote is already mounted."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Mount not running. Will now mount ${RcloneRemoteName} remote."
# Creating mountcheck file in case it doesn't already exist
    echo "$(date "+%d.%m.%Y %T") INFO: Recreating mountcheck file for ${RcloneRemoteName} remote."
    touch mountcheck
    rclone copy mountcheck $RcloneRemoteName: -vv --no-traverse
# Check bind option
    if [[  $CreateBindMount == 'Y' ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
        ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
        if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
            echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
        else
            echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for remote ${RcloneRemoteName}"
            ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
        fi
        echo "$(date "+%d.%m.%Y %T") INFO: *** Created bind mount ${RCloneMountIP} for remote ${RcloneRemoteName}"
    else
        RCloneMountIP=""
        echo "$(date "+%d.%m.%Y %T") INFO: *** Creating mount for remote ${RcloneRemoteName}"
    fi
# create rclone mount
    rclone mount \
    --allow-other \
    --buffer-size 256M \
    --dir-cache-time 720h \
    --drive-chunk-size 512M \
    --log-level INFO \
    --vfs-read-chunk-size 128M \
    --vfs-read-chunk-size-limit off \
    --vfs-cache-mode writes \
    --bind=$RCloneMountIP \
    $RcloneRemoteName: $RcloneMountLocation &

# Check if Mount Successful
    echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 5 seconds"
# slight pause to give mount time to finalise
    sleep 5
    echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
    if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of ${RcloneRemoteName} mount."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mount failed - please check for problems."
        rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
        exit
    fi
fi

####### Start MergerFS Mount #######

if [[  $MergerfsMountShare == 'ignore' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Not creating mergerfs mount as requested."
else
    if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount in place."
    else
# check if mergerfs already installed
        if [[ -f "/bin/mergerfs" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs already installed, proceeding to create mergerfs mount"
        else
# Build mergerfs binary
            echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs not installed - installing now."
            mkdir -p /mnt/user/appdata/other/rclone/mergerfs
            docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm trapexit/mergerfs-static-build
            mv /mnt/user/appdata/other/rclone/mergerfs/mergerfs /bin
# check if mergerfs install successful
            echo "$(date "+%d.%m.%Y %T") INFO: *sleeping for 5 seconds"
            sleep 5
            if [[ -f "/bin/mergerfs" ]]; then
                echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs installed successfully, proceeding to create mergerfs mount."
            else
                echo "$(date "+%d.%m.%Y %T") ERROR: Mergerfs not installed successfully.  Please check for errors.  Exiting."
                rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
                exit
            fi
        fi
# Create mergerfs mount
        echo "$(date "+%d.%m.%Y %T") INFO: Creating ${RcloneRemoteName} mergerfs mount."
# Extra Mergerfs folders
        if [[  $LocalFilesShare2 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare2} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare2=":$LocalFilesShare2"
        else
            LocalFilesShare2=""
        fi
        if [[  $LocalFilesShare3 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare3} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare3=":$LocalFilesShare3"
        else
            LocalFilesShare3=""
        fi
        if [[  $LocalFilesShare4 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare4} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare4=":$LocalFilesShare4"
        else
            LocalFilesShare4=""
        fi
# mergerfs mount command
        mergerfs $LocalFilesLocation:$RcloneMountLocation$LocalFilesShare2$LocalFilesShare3$LocalFilesShare4 $MergerFSMountLocation -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
# check if mergerfs mount successful
        echo "$(date "+%d.%m.%Y %T") INFO: Checking if ${RcloneRemoteName} mergerfs mount created."
        if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount created."
        else
            echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mergerfs mount failed."
            rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
            exit
        fi
    fi
fi

####### Starting Dockers That Need Mergerfs Mount To Work Properly #######

# only start dockers once
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: dockers already started."
else
    touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started
    echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
    docker start $DockerStart
fi

rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"

exit

Edited by Bjur
7 hours ago, Bjur said:

Thanks Kaizac. In regards to the paths, that makes sense with /mnt/user only. The only problem I see if I do it that way is that Sonarr, and especially Plex, would need to refresh the libraries because of the file path change.

Easily fixed in Plex (a bit harder in Sonarr). I've covered it somewhere in this thread in more detail, but the gist of the change is:

 

1. Add new /mnt/user/movies paths to Plex in the app (after adding path to docker of course)

2. Scan all your libraries to find all your new paths

3. ONLY when the scan has completed, delete the old paths from the libraries

4. Empty trash

 

This only works if you don't have the setting to auto delete missing files on.

 

Sonarr is a bit of a pain as it tends to crash a lot for big libraries, so do backups!

 

1. Add new paths to docker and then app

2. Turn off download clients so doesn't try to replace files

3. Go to manage libraries and change location of folders to new location. Decline move files to new location

4. Do a full scan and it'll scan the new paths and find the existing files

5. Turn download clients back on

 

Works well if your files are all named nicely.

 

https://trash-guides.info/Sonarr/Sonarr-recommended-naming-scheme/

 

 

 

 

Edited by DZMM
Change link

Thanks for the answer @DZMM. I will strongly consider whether it's worth the effort, because that part is working fine.

 

But can you answer whether the uid/gid/umask should go in the script above?

 

Also, I've seen a couple of times that my movies share suddenly disappears without any reason; none of the other scripts have this. Am I the only one who has seen this?

On 9/2/2022 at 12:02 AM, Bjur said:

 

Hi @Bolagnaise: I have updated Unraid to 6.10 stable in order to avoid any problems, but I have problems with Sonarr not moving files because of permissions.

 

My folders are showing this.

 

Which of your fixes should I use?

Should I just add

--uid 98 \
--gid 99 \

 

to my scripts, or do I need to do some extra work?

 

[screenshot: folder permissions]

 

 

 

Nope, not needed.

 

You need to run the New Permissions option inside Tools, after stopping the dockers, to update their perms.

You can also run this script to update permissions for other folders, separate from the New Permissions tool, or to force it.

#!/bin/sh
for dir in "/mnt/user/!!your folder path here!!"
do
    echo "$dir"
    chmod -R ug+rw,ug+X,o-rwx "$dir"
    chown -R nobody:users "$dir"
done

 

The reason it works in 6.9 is that 6.10-rc3 introduced a bug fix: previously, user share file system permissions were not being honoured and containers with permissions assigned as 99:100 (nobody:users) actually had root access. 6.10 fixed this.

Limetech should really run this New Permissions tool by default upon the first boot of 6.10, as a lot of people have had this issue.

 

 

2 hours ago, Bjur said:

Thanks for the answer @DZMM. I will strongly consider whether it's worth the effort, because that part is working fine.

 

But can you answer whether the uid/gid/umask should go in the script above?

 

Also, I've seen a couple of times that my movies share suddenly disappears without any reason; none of the other scripts have this. Am I the only one who has seen this?

 

For the GID and UID you can try the script Bolagnaise put above this post. Make sure you alter the directory to your folder structure. For me this wasn't the solution, because it often got stuck on my cloud folders. But just try it; maybe it works and you don't need to bother further.

If that doesn't solve your issue, then yes, you need to put the GID and UID flags where you suggested.

 

We discussed your local folder structure earlier. The whole point of using mergerfs and the merged folder structure (/mnt/user/mount_unionfs/gdrive instead of /mnt/user/local/gdrive) is that it doesn't matter to your server and dockers whether the files are actually local or in your cloud storage; they are treated the same.

If you then use the upload script correctly, it will only upload the files that are actually stored locally, because the upload script looks at the local files, not the merged files/folders.
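To make the folder relationship concrete, the layout these scripts use is roughly this (the folder names follow the scripts above and may differ on your system):

# /mnt/user/local/<remote>          <- files still on the array; the upload script moves these
# /mnt/user/mount_rclone/<remote>   <- the rclone mount of the cloud storage
# /mnt/user/mount_mergerfs/<remote> <- local + cloud merged; the dockers should only ever see this one
# A file shows up in the merged folder as soon as it is downloaded and stays visible
# there after the upload script has moved it to the cloud.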

 

The disappearing of your folders is not something I've noticed myself. But are you looking at the local folder or the merged folder when you see that happening? If you're looking at the local folders, it would explain it to me, because the upload script deletes source folders when it's done and they are empty. If it happens when you look at your merged folder, then it seems strange to me.

 

5 hours ago, Bjur said:

Thanks for the answer @DZMM. I will strongly consider whether it's worth the effort, because that part is working fine.

 

But can you answer whether the uid/gid/umask should go in the script above?

 

Also, I've seen a couple of times that my movies share suddenly disappears without any reason; none of the other scripts have this. Am I the only one who has seen this?

 "disappear" - when and where?

 

I'm guessing that you're not starting the dockers after the mount is active.

On 8/29/2022 at 3:47 AM, Kaizac said:

Just be aware that when you remove these paths and put in /user, you also have to check inside the docker (software) that the /user path is also used. If it's programmed to use /data then you have to change that to /user as well.

In terms of setting up Plex itself: am I right that it would be set not to scan the library periodically, but to only scan when a change is detected, with a partial scan of that changed directory? Will Plex be able to detect the change?

 

Also, does this mean that I won't be able to have Plex generate thumbnails? Because it would scan the entire library in the cloud and get API banned for scanning all of the files? I currently like having the thumbnails, intro detection, etc...

 

Also, how would I check what you mentioned above? I'm basically a newbie when it comes to dockers. I understand how path mappings work, but I just followed SpaceInvader One's videos when I originally set them up. I use binhex for the *arrs... do you happen to know if it will work by removing the /data folder like you mentioned? I'm assuming just mapping /data to whatever mergerfs mount I need isn't the same, correct?

 

Again, thank you so much for all this. I appreciate you! I'm actually thinking of building an ITX build just for this, with 2 18TB drives mirrored to store irreplaceable files, and just consolidating my server! I just want to make sure I understand everything and will be able to do it before I start to spend the $ on new hardware, etc...

Edited by maxse
On 9/4/2022 at 3:10 AM, maxse said:

In terms of setting up Plex itself: am I right that it would be set not to scan the library periodically, but to only scan when a change is detected, with a partial scan of that changed directory? Will Plex be able to detect the change?

 

Also, does this mean that I won't be able to have Plex generate thumbnails? Because it would scan the entire library in the cloud and get API banned for scanning all of the files? I currently like having the thumbnails, intro detection, etc...

 

Also, how would I check what you mentioned above? I'm basically a newbie when it comes to dockers. I understand how path mappings work, but I just followed SpaceInvader One's videos when I originally set them up. I use binhex for the *arrs... do you happen to know if it will work by removing the /data folder like you mentioned? I'm assuming just mapping /data to whatever mergerfs mount I need isn't the same, correct?

 

Again, thank you so much for all this. I appreciate you! I'm actually thinking of building an ITX build just for this, with 2 18TB drives mirrored to store irreplaceable files, and just consolidating my server! I just want to make sure I understand everything and will be able to do it before I start to spend the $ on new hardware, etc...

 

For Plex you can indeed use only the partial scan. And then from Sonarr and Radarr I would use the Connect option to send Plex a notification; it will then trigger a scan. Later on, when you are more experienced, you can set up Autoscan, which will send the triggers to Plex.

Thumbnails will be difficult, but if they are really important to you, you can decide to keep your media files on your local storage for a long while so Plex can do everything it needs to do. Generally it's advised to just disable thumbnails because they require a lot of CPU power and time, especially if you're going for a low-power build. It all depends on your library size as well, of course.

 

I've also disabled all the other scheduled tasks in the maintenance settings, like creating chapters and such, and especially media analysis. What I do use is the intro detection for series; I can't do without that. And generally series are easier to grab in good quality right away and require fewer upgrades than movies, so upgrades are less of a problem.

 

Regarding the paths: you can remove or edit the /data paths and add /user in the docker template. Within the docker itself you need to check the settings of that docker and change the paths accordingly. I know binhex uses /data, so if you mostly use binhex it will often work OK. But because you use different path/mapping names, Unraid will not see it as the same drive and will treat it as a move from one drive to another instead of within the same drive. Moving server-side is pretty much instant, but if you use the wrong paths it will go through your server and back to the cloud storage.

 

So again: check your docker templates and the mappings/paths that point to media files and such, delete those, and only use 1 high-level path like /mnt/user or /mnt/user/mount_unionfs/gdrive. Then go into the docker and change the paths used inside. Once you know what you are doing and what to look for, it's very simple and quick. Just do it docker by docker.

 

Regarding your ITX build, I would recommend, if you have the money, getting at least 1 cache SSD; better still is 2 SSDs (1 TB per SSD or more) for cache, put in a pool with BTRFS. It's good for your appdata, but you can also run your downloads/uploads and Plex library from it. Especially the downloading/uploading will be better from the SSD, because it does not have to switch between reading and writing like a HDD. Using 1 SSD for cache is fine as well; just be careful about what other data you let go through your cache SSD, because if your SSD dies (and it will often die instantly, unlike a HDD), your data will be lost. And get backups, of course. Just general good server housekeeping ;).

 

EDIT: If you are unsure whether you are doing the mappings right, just show screenshots of before and after, from the template and inside the docker, and we can check it for you. Don't feel bad about doing that; I think many struggle with the same thing in the beginning.

Edited by Kaizac
29 minutes ago, Kaizac said:

EDIT: If you are unsure whether you are doing the mappings right, just show screenshots of before and after, from the template and inside the docker, and we can check it for you. Don't feel bad about doing that; I think many struggle with the same thing in the beginning.

IMO you can't go wrong with using /user and /disks, at least for your dockers that have to talk to each other. I think the background to Dockers is that they were set up as a good way to ring-fence access and data. However, we want certain dockers to talk to each other easily!

Life gets so much easier setting paths within docker WebGUIs when your media mappings look like my Sonarr mappings below:

[screenshot: Sonarr docker path mappings]


@DZMM can you help me with your brainpower?

 

I'm using separate rclone mounts for the same Team Drives but with different service accounts (like Bazarr on its own mount, Radarr and Sonarr, Plex, Emby, and 1 for normal usage). Sometimes one of those gets API banned, mostly Plex lately, so I have a script that mounts a backup mount, swaps that one into the mergerfs with the Plex local folder, and restarts Plex. It then works again.
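Not the exact script, but a rough sketch of that swap, assuming a banned remote, a backup remote on another service account, and the folder layout used earlier in this thread (all names and paths are placeholders):

#!/bin/bash
# Swap an API-banned Plex mount for a backup remote and restart Plex.
fusermount -uz /mnt/user/mount_mergerfs/plex_gdrive    # drop the merged view
fusermount -uz /mnt/user/mount_rclone/plex_gdrive      # drop the banned rclone mount
rclone mount plex_backup_gdrive: /mnt/user/mount_rclone/plex_gdrive \
  --allow-other --vfs-cache-mode writes &
sleep 5
mergerfs /mnt/user/local/plex_gdrive:/mnt/user/mount_rclone/plex_gdrive \
  /mnt/user/mount_mergerfs/plex_gdrive \
  -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
docker restart plex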

 

However, I'm wondering if you can run 2 Plex instances: 1 to do all the API stuff, like scanning and metadata refreshing, and 1 Plex instance for the actual streaming. You can point the 2 Plex instances to their own mergerfs folders, which contain the same data. And then I'm thinking you can share the library. But I don't think this will work, right? Because the streaming Plex won't get real-time updates through the changed library data from the other Plex instance, right? And you would have to be 100% sure you disable all the jobs and tasks on the streaming Plex instance.

 

What do you think, possible or not?

12 minutes ago, DZMM said:

IMO you can't go wrong with using /user and /disks, at least for your dockers that have to talk to each other. I think the background to Dockers is that they were set up as a good way to ring-fence access and data. However, we want certain dockers to talk to each other easily!

Life gets so much easier setting paths within docker WebGUIs when your media mappings look like my Sonarr mappings below:

[screenshot: Sonarr docker path mappings]

 

Correct, like that. I would only use /disks, though, when you actually use unassigned drives, because you also have to think about the Read/Write - Slave setting then. If you just use cache and array folders, /user alone is sufficient. But I think @maxse is mostly "worried" about setting the right paths/mappings inside the docker itself, and that just requires going through the settings, checking the used mappings and altering them where needed.

8 minutes ago, Kaizac said:

However, I'm wondering if you can run 2 Plex instances: 1 to do all the API stuff, like scanning and metadata refreshing, and 1 Plex instance for the actual streaming. You can point the 2 Plex instances to their own mergerfs folders, which contain the same data. And then I'm thinking you can share the library. But I don't think this will work, right? Because the streaming Plex won't get real-time updates through the changed library data from the other Plex instance, right? And you would have to be 100% sure you disable all the jobs and tasks on the streaming Plex instance.

 

What do you think, possible or not?

Sounds dangerous and a bad idea sharing metadata and I think they'd need to share the same database, which would be an even worse idea.

I'd try to solve the root cause and see what's driving the API ban, as I've had like 2 in 5 years or so. E.g. could you maybe schedule Bazarr to only run during certain hours? Or only give it 1 CPU so it runs slowly?
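For the CPU idea, something like this would do it (the container name is just an example):

# Limit a running Bazarr container to a single CPU so its scans trickle along.
docker update --cpus=1 binhex-bazarr
# Or pin it to one core via the Unraid template's Extra Parameters, e.g. --cpuset-cpus=3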

I don't use Bazarr but I think I'm going to start as sometimes in shows I can't make out the dialogue or e.g. there's a bit in another language and I need the subtitles, but I've no idea how Plex sees the files. I might send you a few DMs if I get stuck

26 minutes ago, DZMM said:

Sounds dangerous and a bad idea sharing metadata and I think they'd need to share the same database, which would be an even worse idea.

I'd try to solve the root cause and see what's driving the API ban, as I've had like 2 in 5 years or so. E.g. could you maybe schedule Bazarr to only run during certain hours? Or only give it 1 CPU so it runs slowly?

I don't use Bazarr but I think I'm going to start as sometimes in shows I can't make out the dialogue or e.g. there's a bit in another language and I need the subtitles, but I've no idea how Plex sees the files. I might send you a few DMs if I get stuck

Yeah, I was afraid it would be a bad idea, and so it is. I'll have to find a way to trigger the script automatically when it gets API banned.

API bans didn't happen before, but I had to rebuild all my libraries over the last weeks after some hardware changes and bad upgrades from Plex.

 

Regarding Bazarr, you can hit me up in my DMs, yes. I have everything you need, and also how to get it to work with Autoscan so Plex sees the subtitles.

On 9/2/2022 at 3:06 PM, Kaizac said:

Well I'm not a big fan of your mappings. I don't really see direct conflicts there, but I personally just removed all the specific paths like /incomplete and such. I'm talking about the media paths here, not paths like the /dev/rtc one.

And only use /user (or in your case /mnt/user). And then within the docker use that as start point to get to the right folder. Much easier to prevent any path issues, but that's up to you.

 

I also had the permission issues, so what I did was add these flags to my mount script (not the merger, but the actual mount script for your rclone mount). Those root/root folders are the issue, since Sonarr is not running as root.

--uid 99 --gid 100

And in case you didn't have it already (I didn't): --umask 002

 

Add these and reboot, see if that solves the importing issue.

 

@Kaizac I tried using the UID/GID/UMASK in the User Scripts mount and added it to this section in the mount script:

# create rclone mount
    rclone mount \
    --allow-other \
    --buffer-size 256M \
    --dir-cache-time 720h \
    --drive-chunk-size 512M \
    --log-level INFO \
    --vfs-read-chunk-size 128M \
    --vfs-read-chunk-size-limit off \
    --vfs-cache-mode writes \
    --bind=$RCloneMountIP \
    $RcloneRemoteName: $RcloneMountLocation &
    --uid 99
    --gid 100
    --umask 002
 

Sonarr still won't get import access from the complete local folder where it's at.

 

My rclone mount folders are still showing root:

[screenshot: folder listing showing root:root ownership]

 

 

@Bolagnaise If I try the Docker Safe New Permissions tool, I would risk breaking the Plex transcoder, which I don't want.

 

Also, if I run the tool, would I only have to run it once, or each time I reboot?

 

@DZMM In regards to the rclone share going missing: it has happened a few times, even while watching a movie, where I need to reboot to get that specific share working again while the other ones keep working.

Edited by Bjur
