Sparkum

Plexdrive

172 posts in this topic


Keep getting errors like this with the /mnt/user/Media/Cloud folders, almost every time an update happens:

[screenshot: permission errors in /mnt/user/Media/Cloud]

Running Docker Safe New Perms again to see if it will fix it.

Edit: It did not.

Edited by slimshizn


Is this the culprit?

 

Quote

unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/Media/Cloud/Seriestmp=RW:/mnt/disks/crypt/Series=RO /mnt/user/Media/Cloud/Series
unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/Media/Cloud/Moviestmp=RW:/mnt/disks/crypt/Movies=RO /mnt/user/Media/Cloud/Movies

 

4 minutes ago, slimshizn said:

Is this the culprit?

 

 

Shouldn't be. But I've separated the upload and Gdrive folders (in your case the unionfs would be in /Media/Series, the upload in /Media/Uploads/Seriestmp, and the gdrive in /Media/Cloud/Series). Maybe it makes a difference? I just checked my logs and didn't have these errors. My config is this:

 

unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/Media/Uploaden/Films_upload=RW:/mnt/disks/GdriveStream/Films=RO /mnt/user/Media/Films
unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/Media/Uploaden/Series_upload=RW:/mnt/disks/GdriveStream/Series=RO /mnt/user/Media/Series

 

Quote

 

#!/bin/bash

mkdir -p /mnt/disks/crypt

#######  Check if script already running  ##########

if [[ -f "/mnt/user/software/rclone_install_running" ]]; then
    exit
else
    touch /mnt/user/software/rclone_install_running
fi

#######  Check if rclone vfs mount is mounted  ##########

if [[ -f "/mnt/disks/crypt/Movies/mountcheck" ]] && [[ -f "/mnt/disks/crypt/Series/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Check rclone_vfs mounted success."
else
    # Mount rclone vfs mount
    rclone mount --allow-other --dir-cache-time 48h --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit 2G --buffer-size 512M --umask 002 --cache-dir=/mnt/disks/Samsung_SSD_960_EVO_500GB_S3X4NB0K137602E/rclone/vfs --vfs-cache-mode writes --log-level INFO --stats 1m uploadcrypt: /mnt/disks/crypt &
fi

sleep 5

if [[ -f "/mnt/disks/crypt/Movies/mountcheck" ]] && [[ -f "/mnt/disks/crypt/Series/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: rclone_vfs mount success."
else
    echo "$(date "+%d.%m.%Y %T") CRITICAL: rclone_vfs mount failed - please check for problems."
    rm /mnt/user/software/rclone_install_running
    exit
fi

#######  Mount unionfs  ##########

# check if mounted
if [[ -f "/mnt/user/Media/Cloud/Series/mountcheck" ]] && [[ -f "/mnt/user/Media/Cloud/Movies/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs Movies & Series mounted."
    rm /mnt/user/software/rclone_install_running
    exit
else
    # Unmount before remounting
    fusermount -uz /mnt/user/Media/Cloud/Series
    fusermount -uz /mnt/user/Media/Cloud/Movies

    unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/Media/Cloud/Seriestmp=RW:/mnt/disks/crypt/Series=RO /mnt/user/Media/Cloud/Series
    unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/Media/Cloud/Moviestmp=RW:/mnt/disks/crypt/Movies=RO /mnt/user/Media/Cloud/Movies

    if [[ -f "/mnt/user/Media/Cloud/Series/mountcheck" ]] && [[ -f "/mnt/user/Media/Cloud/Movies/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs Movies & Series mounted."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: unionfs Movies & Series Remount failed."
    fi
fi

#######  End Mount unionfs  ##########

rm /mnt/user/software/rclone_install_running

exit

 


Here's the entire thing.

I'm using a separate folder to upload what I want to the cloud, which is /mnt/user/Media/Cloud/Movies, plus a /Series dir as well. I move whatever I want there and then have Sonarr and Radarr point to that directory. I'm not sure what I'm doing wrong here. /mnt/disks/crypt is the gdrive encrypted mount, and /mnt/user/Media/Cloud/Seriestmp and Moviestmp are for the upload, correct? Wonder what's going on?
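One gotcha worth flagging: the mount script above only tests for the `mountcheck` marker files - it never creates them, so they need to exist in each folder on the remote beforehand. A small sketch of creating them once while the rclone mount is up (the `create_mountcheck` helper name is made up; the paths are the ones the script checks):

```shell
#!/bin/bash
# Create the "mountcheck" marker files the mount script tests for.
# Run once while the rclone mount is up, so the markers get uploaded
# to the remote and survive a remount.

create_mountcheck() {
    local dir="$1"
    mkdir -p "$dir"          # make sure the media folder exists
    touch "$dir/mountcheck"  # empty marker the mount script looks for
}

# Paths taken from the mount script; skipped when /mnt/disks isn't
# writable (e.g. when trying this snippet outside unRAID).
if [ -w /mnt/disks ]; then
    create_mountcheck /mnt/disks/crypt/Movies
    create_mountcheck /mnt/disks/crypt/Series
fi
```

Because unionfs merges the crypt folders into /mnt/user/Media/Cloud, the same markers also satisfy the unionfs-side checks.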

Edit: I'm currently uploading, so would that have any effect on it?

Edited by slimshizn

On 7/30/2018 at 2:57 PM, DZMM said:

Unionfs merges the upload folder and the Google folder. When Sonarr tries to add a new file, it is actually written to the upload folder, not the Google folder - and when you move it to the Google folder, to Sonarr it looks like it hasn't moved, because Sonarr is looking at the unionfs folder.

 

Radarr/Sonarr etc. should be pulling files in from SAB/NZBGet etc., not SAB writing directly to your media folders.



So maybe that's the problem? Do I have the directories mixed up there?


Sonarr should point to /mnt/user/Media/Cloud/Series in your case. Then it will automatically add it to /mnt/user/Media/Cloud/Seriestmp for the upload cron job DZMM uses.

 

Does that change things for you?
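The behaviour described above can be illustrated without real unionfs: reads resolve against the RW (upload) branch first and fall back to the RO (gdrive) branch, while every new write lands in the RW branch. A pure-bash sketch, with temp dirs as hypothetical stand-ins for .../Seriestmp (RW) and /mnt/disks/crypt/Series (RO):

```shell
#!/bin/bash
# Pure-bash illustration of unionfs branch resolution (no unionfs needed).

RW=$(mktemp -d)   # stands in for the upload branch (.../Seriestmp)
RO=$(mktemp -d)   # stands in for the cloud branch (read-only in the mount)

union_read() {    # resolve a name the way the merged view would
    local name="$1"
    if [ -e "$RW/$name" ]; then cat "$RW/$name"
    elif [ -e "$RO/$name" ]; then cat "$RO/$name"
    else return 1
    fi
}

union_write() {   # cow: new files always go to the RW branch
    printf '%s' "$2" > "$RW/$1"
}

printf '%s' "already uploaded" > "$RO/old.mkv"   # file already in the cloud
union_write new.mkv "fresh import"               # what Sonarr's write does

union_read old.mkv   # -> "already uploaded" (served from the RO branch)
union_read new.mkv   # -> "fresh import" (served from the RW branch)
```

This is why Sonarr should point at the merged folder: it sees both branches, while the upload cron job only has to drain the RW branch.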


That's already where I have it pointed, actually. Still getting the perm error, though uploads and deletes are working fine. Not sure why it won't transfer.

Edited by slimshizn


 

Just now, slimshizn said:

That's already where I have it pointed, actually. Still getting the perm error.

 

Remove the --umask part from your rclone mount line. I think that caused my previous errors as well. Might be worth trying.

2 minutes ago, slimshizn said:

Would turning on privileged do it?

 

Is the unassigned cache mounted as RW or RO?


Okay, so I removed --umask 002 and re-ran the script, hit manual transfer again and VOILA, it's transferring. So that seems to be the issue I was having. Thank you.
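For anyone else hitting this: a umask clears permission bits from files the mount presents, which is the mechanism behind flags like rclone's `--umask`. A quick, generic shell check of what two common values do (standard umask behaviour, nothing rclone-specific):

```shell
#!/bin/bash
# Demonstrate what a umask value means: the mask clears permission bits
# from newly created files (default mode 666), so the value handed to a
# mount's --umask flag decides what other users - e.g. a docker running
# under a different uid - may do with the files.

demo=$(mktemp -d)

umask 002                             # clear only the other-write bit
touch "$demo/group_writable"
stat -c '%a' "$demo/group_writable"   # 664: owner and group can write

umask 022                             # also clear the group-write bit
touch "$demo/group_readonly"
stat -c '%a' "$demo/group_readonly"   # 644: only the owner can write
```

So 002 on its own should still leave group write intact; whatever rclone does when the flag is dropped evidently suited the docker users here better.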

4 hours ago, slimshizn said:

Keep getting errors like this with the /mnt/user/Media/Cloud folders, almost every time an update happens:

[screenshot: permission errors in /mnt/user/Media/Cloud]

Running Docker Safe New Perms again to see if it will fix it.

Edit: It did not.

Looks like you've got your radarr docker mappings wrong and maybe you're missing a '/' for /Cloud/Movies/.

 

Or, have you not mounted /CloudMovies RW slave?

 

Edit: Just saw in a previous post that you got it sorted.

Edited by DZMM


So I was looking at your cleanup script. You created ADDITIONAL mounts to make that work? Probably why mine is not working.

Edit: I found my issue - I had _gd at the end of one of my mounts. I took it out, ran it, and it deleted the files that weren't supposed to be there. Nice.

Edited by slimshizn

3 minutes ago, slimshizn said:

So I was looking at your cleanup script. You created ADDITIONAL mounts to make that work? Probably why mine is not working.

Oh, I see what you mean. Yes, I have separate unionfs mounts for, say, movies/kids and movies/adults - you just need a cleanup for each of your unionfs mounts.

On 8/1/2018 at 4:42 AM, slimshizn said:

Try moving your download folder to an unassigned drive, as well as appdata. After I did, my IO issues are almost nonexistent.

I've just added a 2TB unassigned drive for my usenet and torrent downloads - hopefully this will speed up imports to my unionfs folders, as I'm only getting around 5MB/s. It's going to take a while to know, as I need to wait for old torrents to finish downloading before moving them.

 

I sometimes wonder if my unionfs mount at /mnt/disks rather than /mnt/user/somewhere is the culprit. If the UD doesn't work, I think I'll give /mnt/user/somewhere another crack.

 

I aborted my last attempt because, for some weird reason, sonarr/radarr/plex etc. sometimes wouldn't see the mounted files - anyone else come across this? I could see the files via putty/SMB etc., but the dockers would be temperamental.

 

 

9 hours ago, DZMM said:

I've just added a 2TB unassigned drive for my usenet and torrent downloads - hopefully this will speed up imports to my unionfs folders, as I'm only getting around 5MB/s. It's going to take a while to know, as I need to wait for old torrents to finish downloading before moving them.

I sometimes wonder if my unionfs mount at /mnt/disks rather than /mnt/user/somewhere is the culprit. If the UD doesn't work, I think I'll give /mnt/user/somewhere another crack.

I aborted my last attempt because, for some weird reason, sonarr/radarr/plex etc. sometimes wouldn't see the mounted files - anyone else come across this? I could see the files via putty/SMB etc., but the dockers would be temperamental.

 

 

 

I think I have the same problem now. My Gdrive gets disconnected after a while. I also get out-of-memory notifications, which is weird since I have 28GB of RAM and I don't see anything using the memory. So apparently it's spiking and then just disconnecting the Gdrive. This causes Sonarr to stop importing, since it gets an access denied (even though the local upload folder is still up).

Very frustrating.

28 minutes ago, Kaizac said:

 

I think I have the same problem now. My Gdrive gets disconnected after a while. I also get out-of-memory notifications, which is weird since I have 28GB of RAM and I don't see anything using the memory. So apparently it's spiking and then just disconnecting the Gdrive. This causes Sonarr to stop importing, since it gets an access denied (even though the local upload folder is still up).

Very frustrating.

Your problem is different. My mounts are fine, as I can browse files via putty, SMB etc., but my dockers won't see anything. I used to run into memory problems with my move jobs - post your rclone lines.

6 hours ago, slimshizn said:

They were very temperamental with plexdrive for me. Are you running the mounts as rw slave?

I've tried RW and that didn't help. Are they supposed to be RW slave even if mounted at /mnt/user?

 

I've been importing from array-->array or UD-->array. I'm going to try some UD-->cache imports today to see if it's been an IO problem on my array. I'm pretty sure it's not, but that will confirm it either way.

15 minutes ago, DZMM said:

Your problem is different. My mounts are fine, as I can browse files via putty, SMB etc., but my dockers won't see anything. I used to run into memory problems with my move jobs - post your rclone lines.

I've tried RW and that didn't help. Are they supposed to be RW slave even if mounted at /mnt/user?

I've been importing from array-->array or UD-->array. I'm going to try some UD-->cache imports today to see if it's been an IO problem on my array. I'm pretty sure it's not, but that will confirm it either way.

 

You mount unionfs on /mnt/disks, right? Isn't that on the array directly, so writes to it would be slowed? I've mounted on /mnt/user/Media and put the Media share on cache as well (with the mover invoked at night), and those writes are fast now.

 

Edit: I'm talking bullshit - your /mnt/disks are not on your array. So maybe it just has to do with not mounting a /mnt/user/XX folder from the docker.

 

My rclone lines. I've just added --attr-timeout 60s, because apparently the out-of-memory issue crashes the mount (probably because of Plex and Emby scanning the libraries, putting too much strain on the system). This flag should mitigate it. I've also disabled thumbnail creation and such in both dockers. Waiting for my server to reboot, and then hopefully the mounts are correct.

I'm writing to GdriveStream which is also the mount that keeps disconnecting.

rclone mount --allow-other --dir-cache-time 48h --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit 1G --buffer-size 1G --log-level INFO --stats 1m --attr-timeout 60s Gdrive_Alles_Crypt: /mnt/disks/GdriveStream &
rclone mount --max-read-ahead 512M --allow-non-empty --allow-other OnedriveBen: /mnt/disks/Onedrive &
rclone mount --max-read-ahead 512M --allow-non-empty --allow-other OnedriveGroot: /mnt/disks/OnedriveGroot &
rclone mount --allow-other --dir-cache-time 48h --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit 1G --buffer-size 1G --log-level INFO --stats 1m --attr-timeout 60s Gdrive_Alles_Media: /mnt/disks/MediaLibGdrive &

 

Edited by Kaizac


Does anyone know what this means? I see it in the log of my mount script for Gdrive. I don't understand why it's transferring files, since I'm not running the upload script.

 

Quote

2018/08/03 11:15:39 INFO :
Transferred: 601.335 MBytes (110.333 kBytes/s)
Errors: 0
Checks: 0
Transferred: 731
Elapsed time: 1h33m0.9s
Transferring:
* ...- S01E03 - The Entire History of You.mp4: 0% /271.934M, 69.993k/s, 1h5m44s

2018/08/03 11:15:41 ERROR : Series/Black Mirror/Season 1/Black Mirror - S01E03 - The Entire History of You.mp4: ReadFileHandle.Read error: unexpected EOF
2018/08/03 11:15:41 ERROR : Series/Black Mirror/Season 1/Black Mirror - S01E03 - The Entire History of You.mp4: ReadFileHandle.Read error: low level retry 1/10: unexpected EOF
2018/08/03 11:15:41 ERROR : Series/Black Mirror/Season 1/Black Mirror - S01E03 - The Entire History of You.mp4: ReadFileHandle.Read error: low level retry 2/10: unexpected EOF
2018/08/03 11:15:42 ERROR : Series/Black Mirror/Season 1/Black Mirror - S01E03 - The Entire History of You.mp4: ReadFileHandle.Read error: low level retry 3/10: unexpected EOF
2018/08/03 11:15:42 ERROR : Series/Black Mirror/Season 1/Black Mirror - S01E03 - The Entire History of You.mp4: ReadFileHandle.Read error: low level retry 4/10: unexpected EOF
2018/08/03 11:15:43 ERROR : Series/Black Mirror/Season 1/Black Mirror - S01E03 - The Entire History of You.mp4: ReadFileHandle.Read error: low level retry 5/10: unexpected EOF
2018/08/03 11:15:50 ERROR : Series/Black Mirror/Season 1/Black Mirror - S01E03 - The Entire History of You.mp4: ReadFileHandle.Read error: low level retry 6/10: unexpected EOF
2018/08/03 11:15:51 ERROR : Series/Black Mirror/Season 1/Black Mirror - S01E03 - The Entire History of You.mp4: ReadFileHandle.Read error: low level retry 7/10: unexpected EOF
2018/08/03 11:15:51 ERROR : Series/Black Mirror/Season 1/Black Mirror - S01E03 - The Entire History of You.mp4: ReadFileHandle.Read error: low level retry 8/10: unexpected EOF
2018/08/03 11:15:52 ERROR : Series/Black Mirror/Season 1/Black Mirror - S01E03 - The Entire History of You.mp4: ReadFileHandle.Read error: low level retry 9/10: unexpected EOF
2018/08/03 11:15:57 ERROR : Series/Black Mirror/Season 1/Black Mirror - S01E03 - The Entire History of You.mp4: ReadFileHandle.Read error: low level retry 10/10: unexpected EOF

 

Edited by Kaizac

6 hours ago, Kaizac said:

 

You mount unionfs on /mnt/disks, right? Isn't that on the array directly, so writes to it would be slowed? I've mounted on /mnt/user/Media and put the Media share on cache as well (with the mover invoked at night), and those writes are fast now.

Edit: I'm talking bullshit - your /mnt/disks are not on your array. So maybe it just has to do with not mounting a /mnt/user/XX folder from the docker.

My rclone lines. I've just added --attr-timeout 60s, because apparently the out-of-memory issue crashes the mount (probably because of Plex and Emby scanning the libraries, putting too much strain on the system). This flag should mitigate it. I've also disabled thumbnail creation and such in both dockers. Waiting for my server to reboot, and then hopefully the mounts are correct.

I'm writing to GdriveStream which is also the mount that keeps disconnecting.


rclone mount --allow-other --dir-cache-time 48h --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit 1G --buffer-size 1G --log-level INFO --stats 1m --attr-timeout 60s Gdrive_Alles_Crypt: /mnt/disks/GdriveStream &
rclone mount --max-read-ahead 512M --allow-non-empty --allow-other OnedriveBen: /mnt/disks/Onedrive &
rclone mount --max-read-ahead 512M --allow-non-empty --allow-other OnedriveGroot: /mnt/disks/OnedriveGroot &
rclone mount --allow-other --dir-cache-time 48h --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit 1G --buffer-size 1G --log-level INFO --stats 1m --attr-timeout 60s Gdrive_Alles_Media: /mnt/disks/MediaLibGdrive &

 

Your mount scripts look fine, although I don't know what the implications of having two vfs mounts are - e.g. whether they behave properly and don't interfere with each other.

 

Post your move scripts as well - those are what used to cause my unraid server to run out of memory.

4 hours ago, Kaizac said:

Does anyone know what this means? I see it in the log of my mount script for Gdrive. I don't understand why it's transferring files, since I'm not running the upload script.

 

 

 

I think somehow you are writing directly to the vfs mount, and the error is showing the write failing - which means you lose the file, I think, or at least it can't be retried:

 

https://rclone.org/commands/rclone_mount/#file-caching
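Per the file-caching docs linked above, a mount without a VFS cache mode can only write files sequentially, so if something really is writing straight into the mount, adding `--vfs-cache-mode writes` (which the mount script earlier in this thread already uses) may help. A sketch of the first mount line with the flag added - untested, other flags as posted:

```shell
rclone mount --allow-other --dir-cache-time 48h \
    --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit 1G \
    --buffer-size 1G --vfs-cache-mode writes \
    --log-level INFO --stats 1m --attr-timeout 60s \
    Gdrive_Alles_Crypt: /mnt/disks/GdriveStream &
```

With this mode, writes land in rclone's local cache first and are uploaded (and retried) in the background instead of failing outright.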

2 minutes ago, DZMM said:

Your mount scripts look fine, although I don't know what the implications of having two vfs mounts are - e.g. whether they behave properly and don't interfere with each other.

Post your move scripts as well - those are what used to cause my unraid server to run out of memory.

 

The 2 vfs mounts are connected to separate Gdrives (one is my own and the other is a friend's share, read-only). I found out it was Plex that was filling my memory - every item it scanned took up memory, which stacked. Shutting down the Plex docker dropped the memory use.

 

The upload script is currently only used for Series:
 

#!/bin/bash

#######  Check if script already running  ##########

if [[ -f "/mnt/user/Rclone/rclone_upload" ]]; then
    exit
else
    touch /mnt/user/Rclone/rclone_upload
fi

# set folders
uploadfolderSeries="/mnt/user/Media/Uploaden/Series_upload"
#uploadfolderFilms="/mnt/user/Media/Uploaden/Films_upload"

# move files
rclone move $uploadfolderSeries Gdrive_Alles_Upload:/Series -vv --drive-chunk-size 512M --delete-empty-src-dirs --checkers 10 --fast-list --transfers 4 --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle/** --exclude *.backup~* --exclude *.partial~* --bwlimit 8500k
#rclone move $uploadfolderFilms Gdrive_Alles_Upload:/Films -vv --drive-chunk-size 512M --delete-empty-src-dirs --checkers 10 --fast-list --transfers 4 --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle/** --exclude *.backup~* --exclude *.partial~* --bwlimit 8500k

rm /mnt/user/Rclone/rclone_upload

exit
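One fragility in the touch/rm lock pattern above: if the script is killed mid-upload, the stale rclone_upload file stays behind and every later run exits immediately. A hedged sketch of a more robust variant (`run_upload` is a made-up helper name; the lock path is the one from the script):

```shell
#!/bin/bash
# Lock-file wrapper: runs a command under a lock, and guarantees the lock
# is removed even if the command fails or the subshell is interrupted.

run_upload() {
    local lockfile="$1"; shift
    [[ -f "$lockfile" ]] && return 0   # another run is active: bail out
    (
        trap 'rm -f "$lockfile"' EXIT  # fires on normal exit AND on errors
        touch "$lockfile"
        "$@"                           # the actual upload command(s)
    )
}

# Example (the real call would be the rclone move line from the script above):
# run_upload /mnt/user/Rclone/rclone_upload \
#     rclone move /mnt/user/Media/Uploaden/Series_upload Gdrive_Alles_Upload:/Series ...
```

The subshell keeps the EXIT trap scoped to the locked section, so the rest of the script is unaffected.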

 

3 minutes ago, DZMM said:

 

I think somehow you are writing directly to the vfs mount, and the error is showing the write failing - which means you lose the file, I think, or at least it can't be retried:

 

https://rclone.org/commands/rclone_mount/#file-caching

 

I'm afraid something is indeed going wrong. It's still transferring in the mount logs, and Sonarr can't import since it gets access denied. But I did everything like you, only I use /mnt/user/Media instead of /mnt/disks. Really frustrating.

