Posts posted by Bolagnaise
-
4 minutes ago, DZMM said:
Make sure all traces of the folder in mount_unionfs and probably rclone_upload are deleted before you mount
I found it; you put me on the right path. It was a shares issue: I had a 4K movies folder left over on disk 2 of my array, which was appearing in the union mount before the script ran.
-
@DZMM I'm sure the answer is in here somewhere but I'll ask anyway. I created a 4K movies folder in unionfs, but I accidentally did it before the mount was active. Now every time I reboot the server, that folder is always there (completely empty) and I have to manually delete it to get the mount to start. How do I fix this?
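For anyone hitting the same thing, a minimal sketch of the pre-mount cleanup (clearing empty leftover folders so they can't shadow the union), assuming the paths used by the scripts on page 1; `clean_union_root` is a name made up for illustration:

```shell
# Made-up helper: remove empty leftover directories under a union root
# before (re)mounting. "-empty -delete" only removes directories with no
# contents, so real media is never touched; -mindepth 1 keeps the root itself.
clean_union_root() {
  local root="$1"
  find "$root" -mindepth 1 -depth -type d -empty -delete
}

# On the server this would run before the mount script, e.g.:
# clean_union_root /mnt/user/mount_unionfs/google_vfs
```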
-
37 minutes ago, livingonline8 said:
So I have rclone setup and running...great!
I used SpaceInvader's script to mount Google Drive as a share and I can access that share now... amazing!
I installed Emby and I tried to point the media path to the Google share, but I cannot see it... I see all my other shares that are physically on my unRAID server, but not the mounted Google Drive one?!
Can anyone help me with this please?
I've never used SpaceInvader's script; are you creating directories and mounting them using the script? I highly, highly recommend using the scripts created on page 1 by DZMM for mounting G drive, as they will stop you getting API bans from Google.
-
38 minutes ago, nuhll said:
Hm, do you know how I can find out if I have the latest version?
Edit: "rclone version" in terminal does the trick.
Is that the correct version?
rclone v1.49.5
- os/arch: linux/amd64
- go version: go1.12.10
No idea, my rclone plugin version in unRAID says:
"Version 2019.10.13b: Fusermount compatibility fix for future unRaid versions"
-
On 10/13/2019 at 6:00 AM, nuhll said:
Ill try that.
Edit: okay i updated (but with that entry in GO file) seems to work just fine, AND SO FAST!
Even GUI feels way faster, new login page, wow!
The latest rclone update fixes the fusermount3 issue. Unmount, upgrade rclone, remove the GO file symlink line, and remount.
-
41 minutes ago, Kaizac said:
Then I think the issue is with your router being 5G or 4G. It's sort of like having wifi non-stop, which just isn't stable. So when it drops, you also lose your mount.
Yeah, it's definitely related to the router, but as I said, I just reduced the BW limit to 3000 and it no longer drops.
-
4 hours ago, Kaizac said:
How are you running the scripts? As "run now" or "run in background"?
Background using cron jobs.
-
@DZMM So I tried running the upload script last night and the mount immediately disconnected, throwing errors in the mount log saying it couldn't pull the API key, which made me realise exactly what the original issue was. I had BW set to 9000 in the script, and I use a 5G router to perform the uploads, but it only has a 4G uplink speed of around 45 Mbps. So basically, every time the script ran it would crash my router and the mount would disconnect; as soon as I stopped the upload or rebooted, it would work again.
Maybe a warning to everyone that changing the BW limit is a must. @jamesac2 only has a 10 Mbps upload, so maybe that's why he's also getting disconnects.
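For reference, rclone's --bwlimit is read as KiB/s unless you add a suffix (M = MiB/s), while uplinks are quoted in megabits/s, so BW 9000 in the script is roughly 70 Mbps, well over a 45 Mbps 4G uplink. A hedged sketch of picking a safe limit; mbps_to_bwlimit is a made-up helper, not part of any script in the thread:

```shell
# Made-up helper: convert an uplink speed in megabits/s into an rclone
# --bwlimit value in MiB/s, leaving ~25% headroom so the router isn't
# saturated (divide by 8 for bits -> bytes; MiB vs MB difference ignored here)
mbps_to_bwlimit() {
  awk -v mbps="$1" 'BEGIN { printf "%.1fM", mbps / 8 * 0.75 }'
}

# e.g. for the ~45 Mbps 4G uplink above:
# rclone move /mnt/user/rclone_upload/google_vfs gdrive_media_vfs: \
#   --bwlimit "$(mbps_to_bwlimit 45)" --min-age 30m
```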
-
2 minutes ago, DZMM said:
I need to re-review adding --vfs-cache-mode writes, as a quick glance now makes sense and it might help with when I occasionally write direct to the mount.
I don't think buffer-size should be set to 0, but again I'll research as I haven't touched my settings for almost a year.
No worries man, that's the way it is: everything works completely fine... until it doesn't. You have done everyone a service, so I don't mind a few late nights troubleshooting; you're literally saving me money with this script.
Anyway, 12 hours uptime now, zero dismounts, and I successfully moved everything onto a brand new unRAID build on new hardware. Looks like it's fixed.
-
Everything has been stable for the past 6 hours, no drops.
Things I have done:
1. Downgraded unRAID to 6.6.7
2. Switched to the non-beta rclone
3. Added --vfs-cache-mode writes to my mount script
I read that when using the VFS cache, --buffer-size should be set to 0. Is that correct?
-
26 minutes ago, DZMM said:
@Bolagnaise not sure what's going on - are you sure your dockers are writing to /mount_unionfs and not /mount_rclone? The mentions of vfs-cache-mode writes seems to indicate something is - writing direct to the mount without using rclone upload is risky, as it doesn't recover if something goes wrong
Nothing is pointed to mount_rclone; everything is using mount_unionfs. I've just done a brand new unRAID install as well, on new hardware.
This seems like the issue you mentioned before: https://forums.unraid.net/bug-reports/stable-releases/67x-very-slow-array-concurrent-performance-r605/?do=findComment&comment=5488
-
@DZMM I'm getting these errors in the mount log now.
2019/09/23 23:06:40 ERROR : Movies/3 Lives (2019)/3.Lives.2019.WEBDL-1080p.mp4: WriteFileHandle: Can't open for write without O_TRUNC on existing file without --vfs-cache-mode >= writes
so I added
--vfs-cache-mode writes
to the mount script (https://github.com/rclone/rclone/issues/961)
I got
2019/09/23 23:12:31 INFO : Cleaned the cache: objects 1 (was 4), total size 0 (was 0)
I have no idea what I've done, but the error has gone away.
No idea if it's actually fixed yet.
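For anyone following along, this is roughly where the flag sits in the page-1 mount command; a trimmed command fragment (not a drop-in script), with flag values as used elsewhere in the thread:

```shell
# --vfs-cache-mode writes buffers files opened for writing in a local cache
# and uploads them on close, which avoids the WriteFileHandle / O_TRUNC
# error above when something writes directly to the mount
rclone mount --allow-other \
  --buffer-size 128M \
  --dir-cache-time 72h \
  --vfs-read-chunk-size 128M \
  --vfs-cache-mode writes \
  gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &
```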
-
12 hours ago, DZMM said:
What did the logs say?
Do you have a lot of files in /mnt/user/mount_unionfs/google_vfs/.unionfs ?? Maybe the script can't cope with the cleanup.
I hate this part of unionfs - it looks like rclone union is really coming soon, as work resumed last week.
Logs said input/output error. But now I'm running the cleanup script and it's working, so IDK. When rclone union is released, will you write up a new tutorial? I will love you long time if you do 😘
-
@DZMM I just ran the cleanup script and it immediately killed the mount. Could that be an issue?
#!/bin/bash
####### Check if script already running ##########
if [[ -f "/mnt/user/appdata/other/rclone/rclone_cleanup" ]]; then
echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
exit
else
touch /mnt/user/appdata/other/rclone/rclone_cleanup
fi
####### End Check if script already running ##########
################### Clean-up UnionFS Folder #########################
echo "$(date "+%d.%m.%Y %T") INFO: starting unionfs cleanup."
find /mnt/user/mount_unionfs/google_vfs/.unionfs -name '*_HIDDEN~' | while read line; do
oldPath=${line#/mnt/user/mount_unionfs/google_vfs/.unionfs}
newPath=/mnt/user/mount_rclone/google_vfs${oldPath%_HIDDEN~}
rm "$newPath"
rm "$line"
done
find "/mnt/user/mount_unionfs/google_vfs/.unionfs" -mindepth 1 -type d -empty -delete
rm /mnt/user/appdata/other/rclone/rclone_cleanup
exit
-
1 minute ago, DZMM said:
@Bolagnaise I think you've got a rogue docker as script is spot on. @yendi had similar problems that he resolved by doing some rebuilding, maybe he can help
Yeah, I'm going to test them one by one as suggested; I'm only running Radarr, Ombi, LetsEncrypt, and Tautulli on the machine. My next step, if none of that works, is to stop everything and rebuild unRAID on a new machine. I want to switch my Plex server and everything over to unRAID, as it's all on Windows currently. Just need to do it.
-
3 minutes ago, DZMM said:
post your mount script please. Have you tried running without dockers on and then turning on one at a time say every hour?
#!/bin/bash
####### Check if script is already running ##########
if [[ -f "/mnt/user/appdata/other/rclone/rclone_mount_running" ]]; then
echo "$(date "+%d.%m.%Y %T") INFO: Exiting script already running."
exit
else
touch /mnt/user/appdata/other/rclone/rclone_mount_running
fi
####### End Check if script already running ##########
####### Start rclone gdrive mount ##########
# check if gdrive mount already created
if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
echo "$(date "+%d.%m.%Y %T") INFO: Check rclone vfs already mounted."
else
echo "$(date "+%d.%m.%Y %T") INFO: mounting rclone vfs."
# create directories for rclone mount and unionfs mount
mkdir -p /mnt/user/appdata/other/rclone
mkdir -p /mnt/user/mount_rclone/google_vfs
mkdir -p /mnt/user/mount_unionfs/google_vfs
mkdir -p /mnt/user/rclone_upload/google_vfs
rclone mount --allow-other --buffer-size 128M --dir-cache-time 72h --drive-chunk-size 512M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &
# check if mount successful
# slight pause to give mount time to finalise
sleep 5
if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
echo "$(date "+%d.%m.%Y %T") INFO: Check rclone gdrive vfs mount success."
else
echo "$(date "+%d.%m.%Y %T") CRITICAL: rclone gdrive vfs mount failed - please check for problems."
rm /mnt/user/appdata/other/rclone/rclone_mount_running
exit
fi
fi
####### End rclone gdrive mount ##########
####### Start unionfs mount ##########
if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs already mounted."
else
unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO /mnt/user/mount_unionfs/google_vfs
if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs mounted."
else
echo "$(date "+%d.%m.%Y %T") CRITICAL: unionfs Remount failed."
rm /mnt/user/appdata/other/rclone/rclone_mount_running
exit
fi
fi
####### End Mount unionfs ##########
exit
I haven't tested the dockers yet.
-
I have 16GB and the buffer is set to 128MB. I did have the low-RAM issue and fixed it about 5 weeks ago; this feels vastly different. The SMB share becomes entirely inaccessible, whereas when I was running out of memory before, I could always access the share and see that the mount had disconnected. Now the only way to access //tower over SMB is to unmount and then remount, after which the server becomes accessible again.
To further the clue, running a Plex scan does seem to cause the crash, but I do not have thumbnails turned on, and RAM usage during a Plex scan right now is only 20%. As I said, this issue only started in the last 2 days or so; I never had unmount crashes during scans before.
Edit: OK, I have done some more investigating, since everything pointed to an SMB issue, and I seem to have fixed it. Here's what I think was the issue:
1. My network had reverted back to a public profile; by default, network sharing (and therefore SMB) is turned off on public networks, but somehow it was still connecting. I switched back to a private network profile.
2. I enabled this setting: https://forums.unraid.net/topic/77442-cannot-connect-to-unraid-shares-from-windows-10/
3. I went to Windows Credential Manager and deleted all saved credentials for mapped drives.
That seems to have done the trick; everything is now noticeably faster.
EDIT EDIT: Problem still not fixed. At my wits' end.
-
For the last 2 days I've been getting a constant issue. I mount a crypt G drive and everything works fine: I can access the folders through Windows Explorer and Plex can read them. After about 30 minutes, the mount appears to drop and I can no longer access any share on unRAID, not even local ones. If I unmount and then remount using User Scripts, the shares instantly reappear and everything works again.
Any ideas?
-
DISREGARD. I'll leave this here in case anyone else has the same issue: I recopied the script from GitHub and reran it, and the issue is gone.
I have had an issue ever since I got this working: the unionfs cleanup script throws this error every time something is deleted from the Plex server or via Sonarr/Radarr, and then after a rescan the file is back. Am I missing something?
Script location: /tmp/user.scripts/tmpScripts/rclone_cleanup/script
30.08.2019 19:12:16 INFO: starting unionfs cleanup.
rm: cannot remove '/mnt/user/mount_rclone/google_vfs/mnt/user/mount_unionfs/google_vfs/.unionfs/Movies/Aquaman (2018)/Aquaman (2018) Remux-2160p.mkv': No such file or directory
Here's the script as followed from GitHub.
#!/bin/bash
####### Check if script already running ##########
if [[ -f "/mnt/user/appdata/other/rclone/rclone_cleanup" ]]; then
echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
exit
else
touch /mnt/user/appdata/other/rclone/rclone_cleanup
fi
####### End Check if script already running ##########
################### Clean-up UnionFS Folder #########################
echo "$(date "+%d.%m.%Y %T") INFO: starting unionfs cleanup."
find /mnt/user/mount_unionfs/google_vfs/.unionfs -name '*_HIDDEN~' | while read line; do
oldPath=${line#/mnt/user/mount_unionfs/google_vfs/.unionfs}
newPath=/mnt/user/mount_rclone/google_vfs${oldPath%_HIDDEN~}
rm "$newPath"
rm "$line"
done
find "/mnt/user/mount_unionfs/google_vfs/.unionfs" -mindepth 1 -type d -empty -delete
rm /mnt/user/appdata/other/rclone/rclone_cleanup
exit
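The "No such file or directory" above shows the whiteout path being glued onto the rclone path, which happens when the prefix strip in oldPath fails (e.g. a mangled copy of the script). A sketch of the translation the script intends, with hidden_to_real as a made-up name:

```shell
# Made-up helper: map a unionfs whiteout like
#   .../.unionfs/Movies/Film (2018)/Film.mkv_HIDDEN~
# to the relative path of the real file, Movies/Film (2018)/Film.mkv
hidden_to_real() {
  local rel=${1#*/.unionfs/}      # strip everything up to ".unionfs/"
  printf '%s\n' "${rel%_HIDDEN~}" # drop the _HIDDEN~ whiteout suffix
}

# the cleanup loop would then do, per whiteout found:
# rm "/mnt/user/mount_rclone/google_vfs/$(hidden_to_real "$line")"
```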
-
On 8/4/2019 at 5:31 AM, yendi said:
So with the help of rclone guys I might have found the issue:
I have 12 gb of ram and when I upload + all the services running on unRAID i am using about 8.5 of the ram.
When Plex is doing the thumbnails it seems that it consume all remaining ram for the job: it consume some ram for Plex itself + the --buffer-size 256 * number of opened files. Apparently its 4-5 files simultaneously.
I lowered the buffer-size variable to 128mb and I have not seen the issue since 24h.
Hope it helps someone who would face this issue !
Thank you so much. I have only 10GB of RAM currently and was seeing rclone crashes in unRAID and out-of-memory issues. I'm upgrading to 16GB tomorrow.
-
2 hours ago, SoloLab said:
Do I then move the media files, once they show up there, to their correct path after they've been uploaded and encrypted in Google?
Use Binhex-Krusader to move your current movie and TV show folders to mount_unionfs. The way it's set up, rclone_upload is your local disk storage and unionfs is the link between local and cloud (rclone_mount). When you add folders to mount_unionfs, they are actually placed in rclone_upload. Then adjust the upload script to suit your needs and it will transfer files that meet the age requirements from rclone_upload to the cloud. My current upload script uses --min-age 7d --max-age 8d, so newly downloaded TV shows and movies stay local for a week and are uploaded the next day; I run the script daily to check.
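Sketched as a command, assuming the thread's remote and paths, that window looks like this; only files whose modification time falls between 7 and 8 days old are moved on each run:

```shell
# --min-age 7d: skip anything younger than a week (stays on local storage)
# --max-age 8d: skip anything older than 8 days (moved on a previous run)
rclone move /mnt/user/rclone_upload/google_vfs gdrive_media_vfs: \
  --min-age 7d --max-age 8d --log-level INFO
```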
-
Transferred: 316.811G / 316.811 GBytes, 100%, 4.853 MBytes/s, ETA 0s
Errors: 0
Checks: 621 / 621, 100%
Transferred: 559 / 559, 100%
Elapsed time: 18h34m6.2s
@DZMM You are da man
-
1 minute ago, nuhll said:
If I'm correct, it uses the "created" date (if I download old Linux movies, they get uploaded even if I set it to 1y).
I wouldn't bother uploading such fresh data.
Well, min age 30m means, if I understand correctly, that all data at least 30 minutes old is uploaded. I ran the script to test it and it started to upload TBs of data, so I switched to max age 2d, and it is now taking a long time to filter out files. I initially tried max age 7d and that was also taking a long time, so I thought I would reduce it to 2d for a quick test to see what it would upload; I'm currently waiting for the script to finish.
-
Just now, nuhll said:
What do you mean?
If you set the script to max age 2, then it will upload files older than 2 days.
Right, so I guess max age uses the file date and the server date, and then only uploads files with an age of exactly 2 days? I'm confusing myself, I know.
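For anyone else tripped up here: per the rclone filtering docs, --min-age transfers only files older than the given duration, and --max-age only files newer (younger) than it, both based on modification time. So max age 2d uploads files modified within the last 2 days, not files older than that. The same window expressed locally with find, as a quick check in a throwaway directory:

```shell
# -mtime -2 matches files modified within the last 2 days, i.e. the set of
# files rclone's --max-age 2d would transfer
tmp=$(mktemp -d)
touch -d '3 days ago' "$tmp/old.mkv"   # too old for --max-age 2d
touch "$tmp/new.mkv"                   # inside the 2-day window
find "$tmp" -type f -mtime -2          # prints only new.mkv
```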
[Support] Plex-Discord Role Management Docker
in Docker Containers
Posted
I cannot get it to connect to Tautulli; I keep getting this error. API access is enabled in Tautulli and it's the correct API key.
message: 'invalid json response body at http://192.168.0.101:8181/api/v2?apikey=''TOKENREMOVED''&cmd=get_activity reason: Unexpected token < in JSON at position 0',
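The "Unexpected token < in JSON at position 0" means the first byte of the response was "<", i.e. an HTML page (login screen, proxy error) came back instead of JSON. Also worth checking: the doubled quotes around the key in the URL above suggest the API key may have been pasted with quote characters included. A sketch of the first-byte check that error implies; looks_like_json is a made-up name:

```shell
# Made-up helper: succeeds if the first non-space character is { or [,
# i.e. the body looks like JSON rather than an HTML page starting with "<"
looks_like_json() {
  local body=$1
  local trimmed=${body#"${body%%[![:space:]]*}"}  # strip leading whitespace
  case $trimmed in
    '{'*|'['*) return 0 ;;
    *)         return 1 ;;
  esac
}

# usage against the endpoint from the error (apikey elided as in the post):
# body=$(curl -s "http://192.168.0.101:8181/api/v2?apikey=...&cmd=get_activity")
# looks_like_json "$body" || echo "HTML came back - check the API key and URL"
```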