Plexdrive


Recommended Posts

On 8/3/2018 at 7:24 AM, DZMM said:

I've been importing from array-->array or ud-->array.  I'm going to try some ud-->cache imports today to see if it's been an io problem on my array.  I'm pretty sure it's not, but that will confirm it either way.

 

Edit: 08/10/2018 - Updated rclone mount, upload script, uninstall script

 

Edit: 11/10/2018 - Tidied up and updated scripts

 

Sharing below what I've got in case it helps anyone else.  I use the rclone plugin, the User Scripts plugin and unionfs via Nerd Pack to make everything below work.

 

Docker mappings:

 

For my dockers I create two mappings: /user --> /mnt/user and /disks --> /mnt/disks (RW slave).
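As a rough sketch of what those two mappings amount to (illustrative only - on unRAID they're set in each container's template rather than on the command line):

docker run -d --name plex \
  -v /mnt/user:/user \
  -v /mnt/disks:/disks:rw,slave \
  plexinc/pms-docker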

 

 

rclone vfs mount: /mnt/user/mount_rclone/google_vfs

 

So, my rclone mount below is referenced within dockers at /user/mount_rclone/google_vfs

 

I don't think mounting in the top-level folder is safe, and I also created a google_vfs sub-folder in case I add other mounts in the future:

rclone mount --allow-other --buffer-size 1G --dir-cache-time 72h --drive-chunk-size 32M --fast-list --log-level INFO --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit off gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs --stats 1m &

 

Local Files awaiting upload:  /mnt/user/rclone_upload/google_vfs

 

A separate script uploads these to gdrive on my preferred schedule using rclone move.

 

unionfs mount: /mnt/user/mount_unionfs/google_vfs

 

Unionfs combines the gdrive files with local files that haven't been uploaded yet.  New writes land in the =RW branch, the gdrive mount is =RO, and cow (copy-on-write) means deletes/edits of cloud files are recorded locally as whiteouts in a hidden .unionfs folder - the cleanup script further down turns those whiteouts into real deletes on gdrive.

 

My unionfs mount below is referenced within dockers at /user/mount_unionfs/google_vfs.  All my dockers (Plex, radarr, sonarr etc) look at the movie and tv_shows sub-folders within this mount, which masks whether files are local or in the cloud:

 

unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO /mnt/user/mount_unionfs/google_vfs

 

My full scripts are below, which I've annotated a bit.

 

Rclone install

 

I run this every 5 mins so it remounts automatically (hopefully) if there's a problem.
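For reference, if you were scheduling it with plain cron rather than the User Scripts plugin, the entry would look something like this (script path hypothetical):

*/5 * * * * /boot/config/scripts/rclone_mount.sh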

 

#!/bin/bash

#######  Check if script is already running  ##########

if [[ -f "/mnt/user/mount_rclone/rclone_install_running" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting - script already running."
    exit
else
    touch /mnt/user/mount_rclone/rclone_install_running
fi

#######  End check if script is already running  ##########

# create mountpoints in case they don't exist yet
mkdir -p /mnt/user/mount_rclone/google_vfs
mkdir -p /mnt/user/mount_unionfs/google_vfs

#######  Start rclone_vfs mount  ##########

if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Check rclone vfs mount success."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Installing and mounting rclone."
    # install via script as there's no connectivity at unraid boot
    /usr/local/sbin/plugin install https://raw.githubusercontent.com/Waseh/rclone-unraid/beta/plugin/rclone.plg
    rclone mount --allow-other --buffer-size 1G --dir-cache-time 72h --drive-chunk-size 32M --fast-list --log-level INFO --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit off gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs --stats 1m &
    # pause briefly to give the mount time to initialise
    sleep 5
    if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check rclone vfs mount success."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: rclone_vfs mount failed - please check for problems."
        rm /mnt/user/mount_rclone/rclone_install_running
        exit
    fi
fi

#######  End rclone_vfs mount  ##########

#######  Start unionfs mount  ##########

if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs Movies & Series mounted."
else
    # unmount before remounting
    fusermount -uz /mnt/user/mount_unionfs/google_vfs
    unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO /mnt/user/mount_unionfs/google_vfs
    if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs Movies & Series mounted."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: unionfs Movies & Series remount failed."
        rm /mnt/user/mount_rclone/rclone_install_running
        exit
    fi
fi

#######  End unionfs mount  ##########

###############  Start dockers that need the unionfs mount or connectivity  ###############

# only start dockers once
if [[ -f "/mnt/user/mount_rclone/dockers_started" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: dockers already started."
else
    touch /mnt/user/mount_rclone/dockers_started
    echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
    docker start plex
    docker start letsencrypt
    docker start ombi
    docker start tautulli
    docker start radarr
    docker start sonarr
    docker start radarr-uhd
    docker start lidarr
    docker start lazylibrarian-calibre
fi

###############  End dockers that need the unionfs mount or connectivity  ###############

rm /mnt/user/mount_rclone/rclone_install_running

exit

rclone uninstall

 

Run at array shutdown.

 

Edit 08/10/18: also run at array start, just in case of an unclean shutdown.

 

#!/bin/bash

# unmount the rclone and unionfs mounts
fusermount -uz /mnt/user/mount_rclone/google_vfs
fusermount -uz /mnt/user/mount_unionfs/google_vfs

# remove the rclone plugin and its temp files
plugin remove rclone.plg
rm -rf /tmp/rclone

# clean up the dummy control files in case a script didn't exit cleanly

if [[ -f "/mnt/user/mount_rclone/rclone_install_running" ]]; then
    echo "install running - removing dummy file"
    rm /mnt/user/mount_rclone/rclone_install_running
else
    echo "Passed: install already exited properly"
fi

if [[ -f "/mnt/user/mount_rclone/rclone_upload" ]]; then
    echo "upload running - removing dummy file"
    rm /mnt/user/mount_rclone/rclone_upload
else
    echo "rclone upload already exited properly"
fi

if [[ -f "/mnt/user/mount_rclone/rclone_backup_running" ]]; then
    echo "backup running - removing dummy file"
    rm /mnt/user/mount_rclone/rclone_backup_running
else
    echo "backup already exited properly"
fi

if [[ -f "/mnt/user/mount_rclone/dockers_started" ]]; then
    echo "removing docker run-once dummy file"
    rm /mnt/user/mount_rclone/dockers_started
else
    echo "docker run-once file already removed"
fi

exit

 

rclone upload

 

I run this every hour.

 

Edit 08/10/18: (i) exclude the .unionfs/ folder from the upload; (ii) run against my cache first to try and stop files going to the array - aka a 'google mover'.  I also make it cycle through one array disk at a time, to stop multiple disks spinning up for the transfers and to increase the odds of the uploader moving files off the cache before the mover moves them to the array.

 

#!/bin/bash

#######  Check if script is already running  ##########

if [[ -f "/mnt/user/mount_rclone/rclone_upload" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
    exit
else
    touch /mnt/user/mount_rclone/rclone_upload
fi

#######  End check if script is already running  ##########

#######  Check if rclone is installed  ##########

if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: rclone installed successfully - proceeding with upload."
else
    echo "$(date "+%d.%m.%Y %T") INFO: rclone not installed - will try again later."
    rm /mnt/user/mount_rclone/rclone_upload
    exit
fi

#######  End check if rclone is installed  ##########

# move files

echo "$(date "+%d.%m.%Y %T") INFO: Uploading cache then each array disk in turn."

# shared rclone options - exclude patterns quoted so the shell doesn't expand them
move_args=(
    -vv --drive-chunk-size 512M --checkers 5 --fast-list --transfers 3
    --exclude ".unionfs/**" --exclude "*fuse_hidden*" --exclude "*_HIDDEN"
    --exclude ".recycle**" --exclude "*.backup~*" --exclude "*.partial~*"
    --bwlimit 9000k --tpslimit 6
)

# alternate cache/disk so only one array disk spins up at a time, and the cache
# gets emptied before the mover pushes files to the array
for disk in disk1 disk2 disk3 disk4 disk5 disk6; do
    rclone move /mnt/cache/rclone_upload/google_vfs/ gdrive_media_vfs: "${move_args[@]}"
    rclone move /mnt/"$disk"/rclone_upload/google_vfs/ gdrive_media_vfs: "${move_args[@]}"
done

# remove dummy file
rm /mnt/user/mount_rclone/rclone_upload

exit

 

unionfs cleanup:

 

I run this daily and manually.  I don't trigger it from dockers anymore, as it was running too often and was overkill.

#!/bin/bash

###################  Clean up unionfs folder  #########################

echo "$(date "+%d.%m.%Y %T") INFO: starting unionfs cleanup."

# unionfs records deletes of read-only (cloud) files as *_HIDDEN~ whiteouts in .unionfs;
# for each whiteout, delete the matching file on the rclone mount, then the whiteout itself
find /mnt/user/mount_unionfs/google_vfs/.unionfs -name '*_HIDDEN~' | while IFS= read -r line; do
    oldPath=${line#/mnt/user/mount_unionfs/google_vfs/.unionfs}
    newPath=/mnt/user/mount_rclone/google_vfs${oldPath%_HIDDEN~}
    rm "$newPath"
    rm "$line"
done
find "/mnt/user/mount_unionfs/google_vfs/.unionfs" -mindepth 1 -type d -empty -delete

###########  Remove empty upload folders  ##################

echo "$(date "+%d.%m.%Y %T") INFO: removing empty folders."

find /mnt/user/rclone_upload/google_vfs -empty -type d -delete

# recreate key folders in case they were deleted, so future mounts don't fail
mkdir -p /mnt/user/rclone_upload/google_vfs/movies_adults_gd/
mkdir -p /mnt/user/rclone_upload/google_vfs/movies_kids_gd/
mkdir -p /mnt/user/rclone_upload/google_vfs/movies_uhd_gd/adults/
mkdir -p /mnt/user/rclone_upload/google_vfs/movies_uhd_gd/kids/
mkdir -p /mnt/user/rclone_upload/google_vfs/tv_adults_gd/
mkdir -p /mnt/user/rclone_upload/google_vfs/tv_kids_gd/

######################  Clean up import folders  #################

echo "$(date "+%d.%m.%Y %T") INFO: cleaning usenet import folders."

find /mnt/user/mount_unionfs/import_usenet/ -empty -type d -delete
mkdir -p /mnt/user/mount_unionfs/import_usenet/movies
mkdir -p /mnt/user/mount_unionfs/import_usenet/movies_uhd
mkdir -p /mnt/user/mount_unionfs/import_usenet/tv

exit

 

Edited by DZMM
updated scripts
Link to comment
27 minutes ago, Kaizac said:

 

I'm afraid something is going wrong indeed.  The mount logs show it's still transferring, but Sonarr can't import since it gets access denied.  I did everything like you, only I use /mnt/user/Media instead of /mnt/disks.  Really frustrating.

 

Have you got any apps other than plex looking at your mounts, e.g. kodi?  Or maybe one of your dockers is not configured correctly and is mapped directly to the vfs mount rather than the unionfs folder.
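If it helps, you can dump what a container is actually mapped to with something like this (container name is just an example):

docker inspect -f '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ println }}{{ end }}' sonarr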

Link to comment
39 minutes ago, DZMM said:

 

Have you got any apps other than plex looking at your mounts, e.g. kodi?  Or maybe one of your dockers is not configured correctly and is mapped directly to the vfs mount rather than the unionfs folder.

 

No other apps.  I'm going to start over again from the info you just provided.  Can you tell me how you did the /tv and /movies mappings in Sonarr and Radarr?  And did you add /unionfs as RW to these dockers just for permissions?

Link to comment
14 minutes ago, Kaizac said:

 

No other apps.  I'm going to start over again from the info you just provided.  Can you tell me how you did the /tv and /movies mappings in Sonarr and Radarr?  And did you add /unionfs as RW to these dockers just for permissions?

I added /unionfs mapped to /mnt/user/mount_unionfs (RW slave) in the docker mappings, and then within sonarr etc added the relevant folders, e.g. /unionfs/google_vfs/tv_kids_gd and /unionfs/google_vfs/tv_adults_gd, and in radarr /unionfs/google_vfs/movies_kids_gd, /unionfs/local_media/movies_hd/kids etc etc.
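In the docker template that's just one extra path entry per container; as a command-line sketch (illustrative only):

-v /mnt/user/mount_unionfs:/unionfs:rw,slave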

Edited by DZMM
Link to comment
10 minutes ago, DZMM said:

I added /unionfs mapped to /mnt/user/mount_unionfs (RW slave) in the docker mappings, and then within sonarr etc added the relevant folders, e.g. /unionfs/google_vfs/tv_kids_gd and /unionfs/google_vfs/tv_adults_gd, and in radarr /unionfs/google_vfs/movies_kids_gd, /unionfs/local_media/movies_hd/kids etc etc.

 

Sorry, I don't understand what you mean by "I added /unionfs mapped to /mnt/user/mount_unionfs RW slave in the docker mappings".  Do you mean in the individual docker settings, or is there some general docker mapping?

 

And looking through your scripts I only see movies covered.  Is that correct?  So did you stop splitting your upload folders for movies/shows?

And I also don't understand why you are binding. I understand your situation is different with a UD used for torrenting, which I don't have/need. But I'm not sure how to translate it to my situation.

 

And did you also change something in your Sabnzbd mappings?

 

Sorry for all the questions, I'm feeling quite incompetent at the moment.

Edited by Kaizac
Link to comment
31 minutes ago, Kaizac said:

 

Sorry, I don't understand what you mean by "I added /unionfs mapped to /mnt/user/mount_unionfs RW slave in the docker mappings".  Do you mean in the individual docker settings, or is there some general docker mapping?

[screenshot: 'Update Container' template showing the /unionfs --> /mnt/user/mount_unionfs (RW slave) path mapping]

31 minutes ago, Kaizac said:

And looking through your scripts I only see movies covered.  Is that correct?  So did you stop splitting your upload folders for movies/shows?

Rather than doing several unionfs mounts/merges for my different local media and google folders, I've just done one, and then for each docker I've pointed it at the relevant /unionfs sub-folders within the unionfs mount/merge.

 

For the upload folders I'm still doing individual uploads from each media type sub-folder.  I'm doing this as I'm playing it safe for now, because I don't want rclone to accidentally move the top-level folders.  Once I've researched how rclone deletes empty folders a bit more, I'll probably just have one upload job as well.

 

31 minutes ago, Kaizac said:

And I also don't understand why you are binding. I understand your situation is different with a UD used for torrenting, which I don't have/need. But I'm not sure how to translate it to my situation.

 

Playing it safe again.  Hardlinking doesn't work between mappings, e.g. you can't hardlink from /import to /media, so I'm trying to see if I can get unraid to hardlink by placing my torrents and media in /unionfs.

 

I've used a bind mount so that my local media, which is located somewhere else on my server, appears to the dockers as being located at /unionfs.  Dockers move files with no io within a single mapping - but if a file is moved within a docker from /import/file.mkv to /media/file.mkv, i.e. between two mappings, the file is actually copied across, even on the same disk.  I'm trying to avoid copying by having all file references based on /unionfs/..... (because my import folders are now e.g. /unionfs/import_usenet, I want my local folders to be at /unionfs too).
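To make that concrete, a sketch (paths illustrative rather than my exact shares):

# hardlinking across two docker mappings fails - they look like different filesystems:
ln /import/file.mkv /media/file.mkv    # fails: Invalid cross-device link
# bind mount so the local media also appears inside the unionfs share:
mount --bind /mnt/user/local_media /mnt/user/mount_unionfs/local_media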

 

31 minutes ago, Kaizac said:

And did you also change something in your Sabnzbd mappings?

 

 

I use nzbget, but yes - radarr, sonarr, plex, nzbget etc all use /unionfs/... e.g. the nzbget docker moves movies to /unionfs/import_usenet/movies (/mnt/user/mount_unionfs/import_usenet/movies being the real location).

 

Edit: made a few edits

Edited by DZMM
Link to comment
8 minutes ago, Kaizac said:

 

Thanks for the info again. Did you do the above in the individual dockers? If so, how would Sonarr look for example? Did you point /tv to /unionfs/TVshows for example and then also add /unionfs as RW Slave like the above?

 

All dockers are set up as per the image, so that they are all referencing the same mapping, which is important for good comms between dockers.  Within each docker I added the relevant sub-folders, e.g. here's one of my plex libraries:

 

[screenshot: Plex library settings with folders pointing at /unionfs sub-folders]

Edited by DZMM
Link to comment
31 minutes ago, Kaizac said:

@DZMM: do you just create a folder named /mountcheck within your gdrive crypt folder?  The script is running into an error at the vfs mount check, even though there is a folder named /mountcheck which I can see from mount_rclone/gdrive.

 

It's not a folder - it's an empty file.  Create it via the command line:

 

touch /mnt/user/wherever_you_want/mountcheck
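and then copy it up to the remote so it's visible through the mount - that's my reading of the missing step, something like:

rclone copy /mnt/user/mountcheck gdrive_media_vfs: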

 

Link to comment

Something I might be overthinking: you are now creating the unionfs at the top level.  Before, I would put a unionfs on the gdrive_movies and movies_upload folders, which combined showed me both offline and online movies.  Now there is no unionfs at that level anymore.  So is this still going well with Sonarr and Radarr?  How does it know it needs movies_upload to upload movies and series_upload for series?

Edited by Kaizac
Fixed the first problem
Link to comment
37 minutes ago, Kaizac said:

Something I might be overthinking: you are now creating the unionfs at the top level.  Before, I would put a unionfs on the gdrive_movies and movies_upload folders, which combined showed me both offline and online movies.  Now there is no unionfs at that level anymore.  So is this still going well with Sonarr and Radarr?  How does it know it needs movies_upload to upload movies and series_upload for series?

Not quite following you....

 

My unionfs mount is at level 2 in my user share i.e. /mnt/user/mount_unionfs/google_vfs - within the google_vfs folder are all my merged google and local files.  

 

I've created the docker mapping /unionfs at the top level /mnt/user/mount_unionfs because I have other things in the /mnt/user/mount_unionfs user share - maybe my naming is confusing you.

Link to comment
1 minute ago, DZMM said:

Not quite following you....

 

My unionfs mount is at level 2 in my user share i.e. /mnt/user/mount_unionfs/google_vfs - within the google_vfs folder are all my merged google and local files.  

 

I've created the docker mapping /unionfs at the top level /mnt/user/mount_unionfs because I have other things in the /mnt/user/mount_unionfs user share - maybe my naming is confusing you.

 

Hopefully I can express myself better this time.

 

You have google_vfs which is a union of your google drive (your cloud files) and your local upload folder (in which you have different upload folders like "tv_kids", "movies_uhd", etc.).

Normally you would create a union between your google drive tv_kids_gd and tv_kids_upload.  So when you download to the union folder tv_kids, Sonarr knows it has to place the files in tv_kids_upload.  But since you are not creating a union at sub-folder level, how does Sonarr know where to move the files, so that it can still see all the series I have (both online and offline)?

Link to comment
1 hour ago, Kaizac said:

 

Hopefully I can express myself better this time.

 

You have google_vfs which is a union of your google drive (your cloud files) and your local upload folder (in which you have different upload folders like "tv_kids", "movies_uhd", etc.).

Normally you would create a union between your google drive tv_kids_gd and tv_kids_upload.  So when you download to the union folder tv_kids, Sonarr knows it has to place the files in tv_kids_upload.  But since you are not creating a union at sub-folder level, how does Sonarr know where to move the files, so that it can still see all the series I have (both online and offline)?

 

I've got the same directory structure/folder names in /mnt/user/rclone_upload/google_vfs as in /mnt/user/mount_rclone/google_vfs, so that I only need one unionfs mount to make my life easier - i.e. /mnt/user/mount_unionfs/google_vfs/tv_kids_gd, or the docker mapping /unionfs/tv_kids_gd, is a union of /mnt/user/mount_rclone/google_vfs/tv_kids_gd and /mnt/user/rclone_upload/google_vfs/tv_kids_gd.
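To illustrate how the branches line up (using tv_kids_gd as the example):

/mnt/user/rclone_upload/google_vfs/tv_kids_gd   <- RW branch: new local files land here
/mnt/user/mount_rclone/google_vfs/tv_kids_gd    <- RO branch: files already on gdrive
/mnt/user/mount_unionfs/google_vfs/tv_kids_gd   <- merged view, seen by dockers as /unionfs/google_vfs/tv_kids_gd

Anything sonarr writes to the merged view lands in the RW rclone_upload branch, which is exactly what the upload script then moves to gdrive.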

 

 

Edited by DZMM
Link to comment

I haven't had any slowdowns on speeds when transferring files over at all.  I have sonarr/radarr/deluge/nzbget all pointed towards a UD with a download folder on it, appdata is also on a UD.  Plex is also on a UD.  Media and updated media from sonarr/radarr is downloaded and moved to the cache, and then to the array automatically via the mover at 3AM.  If I want new media I just select the cloud folder and it gets uploaded and updated with zero issues, as it's all done behind the scenes.  Manually moving files to my cloud unionfs folder goes from about 130 MB/s to 230 MB/s.  I'm unsure why there would be any slowdowns in speeds unless there was high IO.

Edited by slimshizn
Link to comment
3 hours ago, slimshizn said:

I haven't had any slowdowns on speeds when transferring files over at all.  I have sonarr/radarr/deluge/nzbget all pointed towards a UD with a download folder on it, appdata is also on a UD.  Plex is also on a UD.  Media and updated media from sonarr/radarr is downloaded and moved to the cache, and then to the array automatically via the mover at 3AM.  If I want new media I just select the cloud folder and it gets uploaded and updated with zero issues, as it's all done behind the scenes.  Manually moving files to my cloud unionfs folder goes from about 130 MB/s to 230 MB/s.  I'm unsure why there would be any slowdowns in speeds unless there was high IO.

 

Hmm, I can't understand why I can't get fast speeds when, like you, I have everything processing on UDs and my cache.

 

3 hours ago, slimshizn said:

Manually moving files to my cloud unionfs folder goes from about 130 MB/s to 230 MB/s.  I'm unsure why there would be any slowdowns in speeds unless there was high IO.

Is this from cache to array?

Link to comment
11 hours ago, DZMM said:

 

I've got the same directory structure/folder names in /mnt/user/rclone_upload/google_vfs as in /mnt/user/mount_rclone/google_vfs, so that I only need one unionfs mount to make my life easier - i.e. /mnt/user/mount_unionfs/google_vfs/tv_kids_gd, or the docker mapping /unionfs/tv_kids_gd, is a union of /mnt/user/mount_rclone/google_vfs/tv_kids_gd and /mnt/user/rclone_upload/google_vfs/tv_kids_gd.

 

 

 

Thanks man, that was the trick indeed.  Amazing that it works like that!  Everything seems to be working fine now.  Currently putting the system to the test by doing full downloads, both Emby and Plex library updates, and playing from the local drive.  Before, my memory usage would go to 70+%, but now it's at the normal 20%, so unionfs is not using memory for all the indexing going on, which is good.

 

The only thing I wonder is whether it would be possible to have a union of your gdrive (cloud movies), your local_upload movies and your local_not_to_be_uploaded movies.  That would create one folder that truly merges all the media you have.

 

And another thing I was wondering about is your upload config.  You constrict it to about 8MB/s to prevent the 750GB upload ban for gdrive, but you are putting this limit on multiple upload folders.  Does it still limit your total upload speed to gdrive, or does it just limit the separate upload folders and thus still cause a ban when you upload >750GB?  I can't test it myself since I won't be on fibre for a few months.

 

Oh, and for other people reading this later: I fixed my Sonarr/Radarr access/permission errors by running "New Permissions" (not on the rclone mount and unionfs) and disabling "Set permissions" in Sonarr and Radarr.

Edited by Kaizac
Link to comment
2 hours ago, Kaizac said:

The only thing I wonder is whether it would be possible to have a union of your gdrive (cloud movies), your local_upload movies and your local_not_to_be_uploaded movies.  That would create one folder that truly merges all the media you have.

 

Easy to do if you want one folder view, e.g. for plex.  I don't think having the local media as RW, i.e. 2x RW folders, would work, as I'm not sure how unionfs would know which RW folder to add content to.

unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/upload_folder/=RW:/mnt/user/local_media/=RO:/mnt/user/google_media=RO /mnt/user/unionfs_mount

 

2 hours ago, Kaizac said:

And another thing I was wondering about is your upload config.  You constrict it to about 8MB/s to prevent the 750GB upload ban for gdrive, but you are putting this limit on multiple upload folders.  Does it still limit your total upload speed to gdrive, or does it just limit the separate upload folders and thus still cause a ban when you upload >750GB?  I can't test it myself since I won't be on fibre for a few months.

I've got a couple of TBs queued, so at the moment it works, as the upload runs constantly and over the course of a day it never uploads more than 750GB.  It runs the rclone move commands sequentially, so it'll never go over the 750GB, as each job goes no faster than 8MB/s.
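For anyone checking the maths (my own back-of-envelope - rclone's k suffix is KiB/s):

--bwlimit 9000k ≈ 9.2MB/s ≈ 796GB/day if run flat out
--bwlimit 8500k ≈ 8.7MB/s ≈ 752GB/day if run flat out
750GB ÷ 86,400s ≈ 8.7MB/s, i.e. ~8400k to stay strictly under

so these limits sit right at the cap, and it's the gaps between the sequential jobs that keep the daily total under 750GB.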

 

On my low-priority to-do list is finding a way to do one rclone move that doesn't remove the top-level folders if they are empty.

 

Edit: the upload script fix was simple - I used to have rclone delete empty directories, which was the problem.  I also had a separate rclone remote for uploading, because I used to use rclone cache.  Now that I'm mounting a vfs remote rather than rclone cache, that's no longer needed.

 

Now I've just got one move line, uploading to the rclone vfs remote:

 

#!/bin/bash

#######  Check if script is already running  ##########

if [[ -f "/mnt/user/mount_rclone/rclone_upload" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
    exit
else
    touch /mnt/user/mount_rclone/rclone_upload
fi

#######  End check if script is already running  ##########

#######  Check if rclone is installed  ##########

if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: rclone installed successfully - proceeding with upload."
else
    echo "$(date "+%d.%m.%Y %T") INFO: rclone not installed - will try again later."
    rm /mnt/user/mount_rclone/rclone_upload
    exit
fi

#######  End check if rclone is installed  ##########

# move files (exclude patterns quoted so the shell doesn't expand them)
rclone move /mnt/user/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 10 --fast-list --transfers 4 --exclude "*fuse_hidden*" --exclude "*_HIDDEN" --exclude ".recycle**" --exclude "*.backup~*" --exclude "*.partial~*" --bwlimit 8500k

# delete dummy file
rm /mnt/user/mount_rclone/rclone_upload

exit

 

Edited by DZMM
Link to comment
3 hours ago, DZMM said:

 

Easy to do if you want one folder view, e.g. for plex.  I don't think having the local media as RW, i.e. 2x RW folders, would work, as I'm not sure how unionfs would know which RW folder to add content to.


unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/upload_folder/=RW:/mnt/user/local_media/=RO:/mnt/user/google_media=RO /mnt/user/unionfs_mount

 

I've got a couple of TBs queued, so at the moment it works, as the upload runs constantly and over the course of a day it never uploads more than 750GB.  It runs the rclone move commands sequentially, so it'll never go over the 750GB, as each job goes no faster than 8MB/s.

 

On my low-priority to-do list is finding a way to do one rclone move that doesn't remove the top-level folders if they are empty.

 

Awesome, using unionfs like this is so convenient!  And good to know that the upload jobs run sequentially and not in parallel.  I'll put more time into the different rclone jobs (like sync and move) when I have my fibre access.  Thanks for the help again!

 

Link to comment
7 minutes ago, Kaizac said:

@DZMM: Why did you remove the bind mounts and what did you do with your local folders and imports of torrents and usenet?

 

I see you also switched your upload folders to the mount_rclone?

I realised it was easier just to write the files direct to /mnt/user/mount_unionfs so that they're in /unionfs!!!

 

I switched my upload remote to my vfs remote - the old upload remote was a carryover from when I used an rclone cache remote, before I switched to a vfs remote.

Link to comment
6 minutes ago, DZMM said:

I realised it was easier just to write the files direct to /mnt/user/mount_unionfs so that they're in /unionfs!!!

 

I switched my upload remote to my vfs remote - the old upload remote was a carryover from when I used an rclone cache remote, before I switched to a vfs remote.

 

Sorry, I didn't see that in your edit.  Thanks for clarifying.

Link to comment

I'm trying a new mount which makes sense based on what I learnt last night in this thread:

 

https://forum.rclone.org/t/my-vfs-sweetspot-updated-21-jul-2018/6132/77?u=binsonbuzz

 

and here:

 

https://github.com/ncw/rclone/pull/2410

 

Quote
  • Any value for vfs-read-chunk-size will reduce the chance of hitting download limits, since the current default will always request the whole file.
  • vfs-read-chunk-size-limit can be set to off to allow the chunk size to grow infinitely. This only affects consecutive reads and reduces the number of requests for large files.
  • vfs-read-chunk-size should be greater than buffer-size to prevent too many requests from being sent when opening a file.
  • Values less than 32M should only be used if the usage pattern allows it. For fast connections this can cause many requests and may result in rate limiting.

From my own testing and the values seen being used by other users I suggest using --vfs-read-chunk-size 128M and --vfs-read-chunk-size-limit off as the new defaults. They should not affect any user negatively.

 

Apparently for vfs, only the buffer is stored in memory.  The chunk isn't stored in memory - the chunk settings control how much data rclone requests; it isn't 'downloaded' to the machine until plex or rclone requests it.

 

To stop the buffer asking for extra chunks at the start, you need to make sure the first chunk is bigger than the buffer - this keeps API hits down.

 

Plex will use memory to cache data depending on whether you are direct playing (no), streaming (yes) or transcoding (yes) - if transcoding, it's controlled by the time-to-buffer setting in plex.  I've got 300 seconds for my transcoder, but other users have higher, even 900, which seems excessive to me.

 

So, I'm going with:

rclone mount --allow-other --dir-cache-time 72h --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --buffer-size 100M --log-level INFO gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs --stats 1m

 

Edited by DZMM
Link to comment
21 minutes ago, DZMM said:

I'm trying a new mount which makes sense based on what I learnt last night ... So, I'm going with:

rclone mount --allow-other --dir-cache-time 72h --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --buffer-size 100M --log-level INFO gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs --stats 1m

I also saw that info this morning (I'm also following the VFS sweetspot topic) and will be trying different mount options.  The thing I run into now is that Plex takes very long to direct stream, or doesn't start at all.  Emby does direct stream (both on Nvidia Shield and on PC/laptop) within a few seconds.  I think Emby is known for working better for cloud streaming, but I want them both to work reliably and quickly enough.

Link to comment
9 hours ago, DZMM said:

I'm trying a new mount which makes sense based on what I learnt last night ... So, I'm going with:

rclone mount --allow-other --dir-cache-time 72h --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --buffer-size 100M --log-level INFO gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs --stats 1m

Have you been able to test it already?  I'm very pleased with these settings: both movies and series start in Plex after around 3 seconds.  Emby seems a bit slower at around 4 seconds.  Still a big improvement on my previous start times.  Curious what your start times are (without any caching beforehand).

Link to comment
