DZMM Posted May 16, 2020

1 hour ago, bugster said: Question: Every time the script is run I get the following error on multiple Movies/TV. I'm not even trying to upload the one listed, but somehow they show up in the log. Why is it doing this?

You have a duplicate directory in your destination, not your source - rclone picks this up. Gdrive allows duplicate directories and files with the same name. If you're particular about file management, you can investigate and manually delete the dupes from gdrive, or check out the rclone dedupe commands, which are really good and can clean up server-side dupes. Either way, dupes on gdrive's side don't affect the mount, as they aren't loaded.
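For anyone who wants to try the dedupe route, a rough sketch of the workflow (the remote name "gdrive:" is a placeholder - substitute your own, and preview with --dry-run before letting it change anything):

```shell
# Sketch only -- "gdrive:" is a placeholder remote name.
# Preview what dedupe would change without touching anything:
rclone dedupe --dry-run gdrive:

# Non-interactively keep the newest copy of each duplicated file
# (identically named duplicate directories get merged as part of dedupe):
rclone dedupe newest gdrive:
```

Running it without a mode drops you into interactive mode, which is the safer choice the first time.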
1activegeek Posted May 17, 2020

I think this is obvious, but wanted to check - should I create a duplicate version of the script set to run on array start, and then a second one that runs every, say, 10 minutes like your git outlines? Reason I'm asking: I restarted the server earlier today. The current script is set up with a run-every-10-minutes cron like your example, but since the restart the server doesn't have anything mounted, and it appears it hasn't for most of the day. Just curious if I'm missing something, or if I'm facing a different issue.
bugster Posted May 17, 2020

12 hours ago, 1activegeek said: I think this is obvious, but wanted to check - should I create a duplicate version of the script set to run on array start, and then a second one that runs every, say, 10 minutes like your git outlines? ...

Have you checked the log to see why it's not mounting?
axeman Posted May 17, 2020

Hi - when creating the service accounts from the optional section... we can do that on a machine other than Unraid, right? I'm thinking of spinning up a VM just to get the accounts created.
1activegeek Posted May 17, 2020

2 hours ago, bugster said: Have you checked the log to see why it's not mounting?

I hadn't, but it looks like I was just premature. I saw it did mount while I was making some changes today. Sorry for muddying the thread!!
watchmeexplode5 Posted May 17, 2020

@axeman Yup, you can create the service account .json files from any machine. It's a little less error-prone if you do it on Linux, but I've done it on Windows without issues. You can also do it on unraid easily - just make sure you have Python installed via Nerd Tools. If you run into issues like not having pip, just SSH to your box and run: python3 get-pip.py
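One detail the post skips: get-pip.py isn't shipped with Python, so it has to be downloaded first. A minimal sketch using the official bootstrap URL:

```shell
# Fetch the pip bootstrap script, then run it with the python you have:
curl -sSLO https://bootstrap.pypa.io/get-pip.py
python3 get-pip.py
```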
axeman Posted May 18, 2020

14 minutes ago, watchmeexplode5 said: @axeman Yup, you can create the service account .json files from any machine. ...

Thanks... I ended up doing it in a VM and have the accounts created. Honestly, I feel like a script kiddie on this one. I normally understand what I'm doing, but with this I'm just copy/pasting. Slowly but surely, getting there.
axeman Posted May 18, 2020

Okay - so here's my first actual implementation question.

Quote: Place Auto-Generated Service Accounts into /mnt/user/appdata/other/rclone/service_accounts/

I don't have the appdata folder... is this because I don't have Docker enabled? I created my service accounts outside of Unraid. I'm pretty sure the rclone installation went OK, because if I type rclone at the command prompt I get the usage screen. I installed CA User Scripts and rclone via CA Apps. Should I just create the path above?

Follow-up: when I created the service accounts, did that already create a unique client_id, or do I need to create another one?
watchmeexplode5 Posted May 18, 2020

@axeman The mount script creates the appdata/other/rclone folder, but feel free to create it yourself.

For your mount remote's client id and secret, use the API credentials you've likely already created - those are used to mount the drive. (Theoretically you could auth with an sa.json, but let's not overcomplicate the setup 😝.) No need for additional ids/secrets for the SAs. When an upload is run, your service .json files will be referenced for credentials and the client id/secret will be ignored (i.e. you upload as a new unique user with a clean quota - 750GB per service account per day).

You can also define a custom location for the accounts if you want; it's just cleaner to keep it all in the other/rclone folder.
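To illustrate the rotation mechanic (a simplified sketch, not DZMM's actual script - the file names, counter location, and account count here are assumptions): each upload run just has to hand rclone a different service account .json, which overrides the remote's client id/secret for that run.

```shell
#!/bin/bash
# Simplified service-account rotation sketch. Paths and the number of
# accounts are assumptions -- adjust to match your own setup.
SA_DIR=/mnt/user/appdata/other/rclone/service_accounts
COUNTER_FILE=/tmp/sa_counter
SA_TOTAL=14   # assumed number of sa_gdrive_*.json files

next_service_account() {
  local n=0
  [ -f "$COUNTER_FILE" ] && n=$(cat "$COUNTER_FILE")
  n=$(( (n % SA_TOTAL) + 1 ))      # wrap around 1..SA_TOTAL
  echo "$n" > "$COUNTER_FILE"      # persist the counter for the next run
  echo "$SA_DIR/sa_gdrive_$n.json" # print the account to use this run
}

# Usage sketch -- --drive-service-account-file makes rclone authenticate
# as the service account instead of the remote's client id/secret:
# rclone move /mnt/user/local gdrive_upload: \
#   --drive-service-account-file "$(next_service_account)"
```

Each run then counts against a fresh 750GB/day quota, which is the whole point of the rotation.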
norbertt Posted May 19, 2020

On 5/7/2020 at 5:27 PM, DZMM said: That wouldn't work for me. It can literally freeze and not shut down, and I need to do a hard shutdown. It's the main reason why I don't reboot often - e.g. last month I got a disk error from a hard shutdown, which was painful to fix. Sometimes it's seamless - I actually think there's something in 6.8.x that causes the problem.

On 5/8/2020 at 8:55 AM, DZMM said: But - the checker files are created and removed by a script. Just because they aren't there doesn't mean rclone isn't running - sleeping for a combined 20s isn't going to fix hangs

Do you know a better solution for unmounting? Sometimes I need to shut down the array - for example, this weekend I will upgrade my hardware setup - so it would be nice to have a script for a clean unmount.
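For what it's worth, a minimal clean-unmount sketch (the mount paths are assumptions - adjust them to match your mount script) that could run as a user script before stopping the array:

```shell
#!/bin/bash
# Minimal clean-unmount sketch. fusermount -uz does a lazy unmount,
# the usual way to detach a busy rclone FUSE mount.
unmount_if_mounted() {
  local mnt="$1"
  if mountpoint -q "$mnt"; then
    fusermount -uz "$mnt" && echo "unmounted $mnt"
  else
    echo "$mnt is not mounted, nothing to do"
  fi
}

# Unmount the mergerfs union first, then the underlying rclone mount
# (paths assumed from the typical setup in this thread):
unmount_if_mounted /mnt/user/mount_mergerfs/gdrive_vfs
unmount_if_mounted /mnt/user/mount_rclone/gdrive_vfs
```

No guarantee this fixes a genuinely hung mount - if rclone itself is wedged, even a lazy unmount can leave the shutdown stuck.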
Bjur Posted May 19, 2020

Hi, a question: if I want to overwrite an older version of a video with a new one, is it best to move the new file to the local folder first (which would be empty, because the old video is already uploaded and appears in mergerfs) and run the upload script to let it overwrite, or should I move it directly to the mergerfs folder and run the upload script? I don't want duplicate copies of the video - I just want the older version that's already in the cloud overwritten.
DZMM Posted May 19, 2020

1 hour ago, Bjur said: Hi, a question: if I want to overwrite an older version of a video with a new one, is it best to move the new file to the local folder first... ...

You're overthinking things. Just treat the mergerfs folder like a normal folder where you add, move, edit etc. files; don't worry about what happens in the background, and let rclone, via the scripts, deal with everything.
Bjur Posted May 19, 2020

Okay, thanks for the information. And I guess the same goes for /mnt/local in regards to moving files there first? I just followed watchmeexplode5's advice to move downloaded stuff into local first. I remembered you writing a post earlier about files not being overwritten correctly, which caused multiple copies, so I wanted to make sure.
DZMM Posted May 19, 2020

1 hour ago, Bjur said: Okay, thanks for the information. And I guess the same goes for /mnt/local in regards to moving files there first? ...

Not sure of the context of @watchmeexplode5's advice, but it's best to forget /mnt/user/local exists and focus all file management activity on /mnt/user/mount_mergerfs, as there's less chance of something going wrong.
francrouge Posted May 19, 2020

Hi all, I'm a bit late, but what is the main purpose of mergerfs exactly? I'm a bit wary of changes, so just curious before I make the move. Thx all

Sent from my Pixel 2 XL using Tapatalk
watchmeexplode5 Posted May 19, 2020

@DZMM @Bjur I often unpack and write to the local mount because of the minor performance penalty of a FUSE filesystem, but that penalty is very small. It's easiest to follow DZMM's advice and do most of your work in the mergerfs mount.
watchmeexplode5 Posted May 19, 2020

@francrouge Short answer: mergerfs supports hard links, is actively developed and fixed, and makes it easier to clean up the by-products of merged directories (fewer scripts for the end user). It's generally agreed to be better for our rclone use case.

Complex answer, from trapexit's GitHub: UnionFS is more like aufs than mergerfs in that it offers overlay / CoW features. If you're just looking to create a union of drives and want flexibility in file/directory placement then mergerfs offers that, whereas unionfs is more for overlaying RW filesystems over RO ones.
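For reference, this is roughly the kind of call the mount script builds - the branch paths and option list here are illustrative, not necessarily DZMM's exact flags:

```shell
# Illustrative mergerfs invocation. Branch order matters: the local
# branch is listed first, so with category.create=ff (first found) new
# writes land locally, and reads prefer the local copy over the slower
# rclone mount.
mergerfs /mnt/user/local/gdrive_vfs:/mnt/user/mount_rclone/gdrive_vfs \
  /mnt/user/mount_mergerfs/gdrive_vfs \
  -o rw,use_ino,allow_other,func.getattr=newest,category.create=ff
```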
francrouge Posted May 19, 2020

Quote: @francrouge Short answer: mergerfs supports hard links, is actively developed and fixed... ...

Thx, I will try it then

Sent from my Pixel 2 XL using Tapatalk
Bjur Posted May 20, 2020

13 hours ago, watchmeexplode5 said: @DZMM @Bjur I often unpack and write to the local mount... ...

@DZMM @watchmeexplode5 Thanks for the answer. So should I create my download folder in the root of /mnt/mergerfs/, since I have 2 separate mounts, like I did on local? I don't think it would make sense to create it in movies and afterwards move the completed files to the other drive if it's not a movie. Can you follow?
pgbtech Posted May 20, 2020

I love the idea of this plugin and the simplicity of getting it all set up and running smoothly. I greatly appreciate the work that has gone into it!

One quick question: I have the upload script copying (vs moving) my libraries to a team share now. If I want to give remote drive playback a trial, could I just delete a sample file from the /mnt/user/local/gcrypt folder? This should leave the file in the mergerfs mount, and then playback would occur via the team drive, right?

I am thinking of writing a simple age-off script where files older than 30 days are removed locally (via /mnt/user/local), then live only in the team share. Thanks again!
DZMM Posted May 20, 2020

53 minutes ago, pgbtech said: If I want to give remote drive playback a trial, could I just delete a sample file from the /mnt/user/local/gcrypt folder?

Yes. Mergerfs looks at the local location first, so if you want to play the cloud copy, you need to delete the local copy.

53 minutes ago, pgbtech said: I am thinking of writing a simple age-off script where files older than 30 days are removed locally (via /mnt/user/local), then live only in the team share. Thanks again!

The script already does this - just set the upload script to 'move' and then MinimumAge to 30d.
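Under the hood, that move-with-MinimumAge combination amounts to something like the following one-liner (remote and paths are placeholders; --dry-run is included so it only previews):

```shell
# Move files older than 30 days from the local branch to the remote.
# Once moved they exist only in the team drive but still appear in the
# mergerfs mount. "gdrive_media_vfs:" is a placeholder remote name.
rclone move /mnt/user/local/gdrive_vfs gdrive_media_vfs: \
  --min-age 30d --dry-run
```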
pgbtech Posted May 20, 2020

@DZMM Thanks for the assist! I should have known the plugin could accommodate my thoughts. Once my initial copy upload job finishes, I'll switch to move with a 30d MinimumAge.
markrudling Posted May 21, 2020

Hi everyone, I am looking for some assistance. I have very slow Plex scan speeds over SMB from another computer; Plex running in Docker on my unraid machine is acceptable.

I have quite a few users on slow connections, so I have a second i7 machine running Windows 10 and Plex that I send them to, leaving the Unraid server with some headroom to do everything else it does. The Windows PC has a read-only mapped network drive to the gdrive folder in the mount_mergerfs share from unraid. Browsing this share can be slow, though sometimes it's fairly fast. Copying from this share can be fast, but it is very intermittent; most of the time I get the full 200meg copy speed though, so this is acceptable.

When running a scan, network activity on the Windows PC is as expected - fairly low. However, network activity on Unraid and my router goes nuts. What seems to be happening is that Plex on the Windows PC is scanning the directory, asking for just a bit of each file, and rclone/unraid is attempting to serve much more of the file, meaning each file takes a long time to scan. I have tested the Windows PC with a drive mounted via RaiDrive, and scans through there are VERY fast, using only 1-3meg of my line. I think Windows and Unraid are not playing well together in this configuration. Can anyone offer some settings or advice? My mount settings are stock.
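Not a guaranteed fix, but since the symptom is rclone reading far more of each file than the scanner asks for, the VFS read flags on the mount are the usual place to experiment - a smaller initial chunk size means a metadata-only scan pulls less of each file before Plex moves on. A possible starting point (remote name and paths are placeholders):

```shell
# Experimental mount tuning sketch -- these are standard rclone mount
# flags, but the values are guesses to test, not known-good settings:
rclone mount gdrive_media_vfs: /mnt/user/mount_rclone/gdrive_vfs \
  --allow-other \
  --dir-cache-time 720h \
  --vfs-read-chunk-size 32M \
  --vfs-read-chunk-size-limit off \
  --buffer-size 32M &
```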
JohnJay829 Posted May 21, 2020

Using:
### Upload Script ####
### Version 0.95.5 ###

The script starts fine but doesn't complete. I get this in the readout:

/usr/sbin/rclone: line 3: 18008 Killed rcloneorig --config $config "$@"
21.05.2020 15:53:14 INFO: Created counter_20 for next upload run.
21.05.2020 15:53:14 INFO: Script complete
Script Finished May 21, 2020 15:53.14
watchmeexplode5 Posted May 22, 2020

@JohnJay829 That looks like an error with the rclone plugin, not the scripts. What version of the rclone plugin are you running? Try updating and/or running the beta rclone plugin.