Stupifier Posted June 4, 2020
1 minute ago, xl3b4n0nx said: I have Google Drive and an encrypted folder mounted as google and secure. I set this up following Spaceinvaderone's tutorial. The problem I am having is with the unmount script. 'fusermount -u /mnt/disks/secure' returns 'Invalid argument' and can't properly unmount the secure folder. How can I fix this error?
Use this if fusermount doesn't work:
umount -l /mnt/disks/google
xl3b4n0nx Posted June 4, 2020
20 minutes ago, Stupifier said: Use this if fusermount doesn't work: umount -l /mnt/disks/google
When I try that, it claims that /mnt/user/secure (the one causing the most problems) isn't mounted.
Stupifier Posted June 4, 2020
58 minutes ago, xl3b4n0nx said: It claims that /mnt/user/secure (the one causing the most problems) isn't mounted when I try that.
Then it's unmounted. You should see nothing in that directory; you're good to go.
xl3b4n0nx Posted June 4, 2020
It turns out the directory created for the mount got into a weird state. I unmounted it, removed the directory, and let the mount script recreate it, and then it was fine.
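The recovery steps described above (unmount, remove the stale directory, let the mount script recreate it) can be sketched as a small script. The path comes from the posts above; everything else is an assumption about your setup, not the exact commands anyone in the thread used:

```shell
#!/bin/bash
# Recovery sketch for a stale rclone FUSE mount, assuming the
# /mnt/disks/secure path from the posts above.
MOUNTPOINT="/mnt/disks/secure"

cleanup_mount() {
    local mp="$1"
    # Try a normal FUSE unmount first; fall back to a lazy unmount.
    fusermount -u "$mp" 2>/dev/null || umount -l "$mp" 2>/dev/null
    # Remove the now-empty directory so the mount script can recreate
    # it in a clean state on its next run.
    if [ -d "$mp" ] && [ -z "$(ls -A "$mp" 2>/dev/null)" ]; then
        rmdir "$mp"
    fi
}

# Guarded so the function can be sourced without side effects.
if [ "${1:-}" = "run" ]; then
    cleanup_mount "$MOUNTPOINT"
fi
```

The directory-removal step only fires when the directory is empty, so real data left behind by a failed mount is never deleted.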
Truk.22 Posted June 5, 2020
I have a small question regarding --bwlimit. I have two scripts that run every hour to copy files from a remote server to the local server. I use the --bwlimit flag and it works sometimes, but when there is a lot of data to copy it ends up saturating my bandwidth, making browsing and other online activities slow. What I think is happening is that --bwlimit applies to each script individually, not globally: at hour 0 the script starts with the bandwidth limit set to 4M; if it is still running at hour 1, a new session is created with its own 4M limit, for a total of 8M, and so on. Can anyone shed some light on this interaction? And if the above is the case, is there a way to set a global limit to prevent this stacking of sessions? Thanks in advance for any help, and thanks to the devs for this great plugin.
Stupifier Posted June 5, 2020
rclone does not have any special logic that checks for other active rclone instances; you'll have to program that yourself. There are many ways to skin a cat; you just need to be a little creative and willing to dive into a little bash coding (or whatever you are most familiar with). Alternatively, you could just run both at 2M permanently if you would rather not write a few lines of code.
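To make the suggestion above concrete: yes, each rclone process enforces its own --bwlimit, so two overlapping hourly runs can use 8M in total. One common bash approach is a lock that makes an hourly run skip its tick while the previous run is still going. This is a sketch; the lock path, remote name, and paths are placeholders, not taken from the post:

```shell
#!/bin/bash
# Sketch: keep hourly rclone jobs from overlapping so --bwlimit is
# never applied twice in parallel. Lock path and rclone arguments
# are placeholders.
LOCKFILE="/tmp/rclone-copy.lock"

run_exclusive() {
    # flock -n fails immediately if another process holds the lock,
    # so a scheduled run simply skips instead of stacking bandwidth.
    local lock="$1"; shift
    flock -n "$lock" "$@"
}

if [ "${1:-}" = "run" ]; then
    run_exclusive "$LOCKFILE" \
        rclone copy remote:source /mnt/user/dest --bwlimit 4M
fi
```

rclone's --bwlimit also accepts a timetable (e.g. --bwlimit "08:00,4M 23:00,off"), but that still applies per process; the lock is what actually prevents stacking.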
Januszmirek Posted June 13, 2020
On 7/20/2019 at 11:12 PM, ProphetSe7en said: Thank you. I need to look into that and see what I can make of it. I have made a script for testing Discord messages. This one sends a message to Discord: all it does is show a bell, ping my user, and post the text "this is a test". Now I need to figure out how to integrate it into the script to get the correct message.
curl -X POST "webhookurl" \
-H "Content-Type: application/json" \
-d '{"username":"borg", "content":":bell: Hey <@userid> This is a test"}'
There is also a script that uses borg + rclone. At the end it sends an email reporting whether there was an error or the backup finished without errors. It should be possible to change this to use Discord, I just don't know how yet. https://pastebin.com/8WGmJgiQ
Hello, did you have any success with notifications for rclone? It's pretty much the only feature I'm missing from rclone. Nothing fancy needed; a Discord notification or an email sent on sync errors only would be enough ;)
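Building on the curl snippet quoted above, an error-only notification is mostly a matter of checking rclone's exit status. The following is a sketch: the webhook URL, sync source, and destination are placeholders, and the payload is kept deliberately simple:

```shell
#!/bin/bash
# Sketch: fire a Discord webhook only when rclone sync fails.
# WEBHOOK_URL and the sync source/destination are placeholders.
WEBHOOK_URL="webhookurl"

build_payload() {
    # Keep the message plain to avoid JSON-escaping surprises.
    printf '{"username":"rclone","content":"%s"}' "$1"
}

notify_discord() {
    curl -s -X POST "$WEBHOOK_URL" \
        -H "Content-Type: application/json" \
        -d "$(build_payload "$1")"
}

if [ "${1:-}" = "run" ]; then
    if ! rclone sync /mnt/user/data remote:backup; then
        notify_discord ":bell: rclone sync failed on $(hostname)"
    fi
fi
```

The same notify_discord call can be reused after a borg run by checking its exit code the same way.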
mgutt Posted June 21, 2020
Any chance to get this version through the rclone beta channel?
Waseh (Author) Posted June 21, 2020
No, sorry. You will have to wait until it's merged into the main beta channel or modify the plugin yourself / replace the binary 😄
mgutt Posted June 21, 2020
10 minutes ago, Waseh said: No, sorry. You will have to wait until it's merged into the main beta channel or modify the plugin yourself / replace the binary 😄
Ok, thanks. I'll try rsync. I hope it works: https://serverfault.com/a/450856/44086
HALPtech Posted June 24, 2020
Hi. I'm having trouble getting rclone to actually mount using Spaceinvaderone's guide. When I view my Dropbox mount from the terminal (rclone lsd Dropbox:) I can see all of the folders in the root of my Dropbox. However, when I try to mount it using the following script, nothing appears in the /mnt/disks/Dropbox folder:
mkdir -p /mnt/disks/Dropbox
rclone mount --max-read-ahead 1024k --allow-other Dropbox: /mnt/disks/Dropbox &
Why can the terminal see my Dropbox folders and files, but not Krusader when I run the mount script?
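One way to narrow a problem like this down is to confirm the FUSE mount actually came up after the backgrounded rclone mount returns, e.g. with mountpoint(1). A sketch using the same paths as the script above (the retry count is an arbitrary choice):

```shell
#!/bin/bash
# Sketch: mount in the background, then verify the FUSE mount is
# actually live before relying on it. Paths match the quoted script.
MNT="/mnt/disks/Dropbox"

wait_for_mount() {
    local mp="$1" tries="${2:-10}"
    # mountpoint -q returns 0 only once something is mounted there.
    for _ in $(seq "$tries"); do
        mountpoint -q "$mp" && return 0
        sleep 1
    done
    return 1
}

if [ "${1:-}" = "run" ]; then
    mkdir -p "$MNT"
    rclone mount --allow-other Dropbox: "$MNT" &
    if ! wait_for_mount "$MNT"; then
        echo "mount did not come up; check the rclone output" >&2
    fi
fi
```

If the check fails, running the same rclone mount command in the foreground (without the trailing &) and with -vv usually prints the actual error.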
scubieman Posted June 25, 2020
I moved rclone from one server to another. On the Scripts tab it appears to be running fine, but if I go to a terminal and run rclone edit, it doesn't know the command. Any ideas?
Stupifier Posted June 25, 2020
That is because rclone edit isn't a command. Try rclone config. Refer to the official rclone documentation for the full list of commands.
scubieman Posted June 25, 2020
7 hours ago, Stupifier said: That is because rclone edit isn't a command. Try rclone config.
Wow, so simple. Thanks Stupifier!!!!
crazyhorse90210 Posted June 25, 2020
I can't get my cache to mount because, for some reason, --allow-non-empty is unknown to this version of fusermount:
root@Ovi:/mnt/disk3# rclone mount gcache: /mnt/disk3/gdrive --allow-other --cache-db-purge --allow-non-empty --buffer-size 32M --use-mmap --dir-cache-time 72h --drive-chunk-size 16M --timeout 1h --vfs-cache-mode minimal --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit 1G &
[1] 36504
2020/06/25 14:01:00 mount helper error: fusermount: unknown option 'nonempty'
2020/06/25 14:01:00 Fatal error: failed to mount FUSE fs: fusermount: exit status 1
Yet if I try to mount the cache without it, it tells me to use it:
root@Ovi:/mnt/disk3# rclone mount gcache: /mnt/disk3/gdrive --allow-other --cache-db-purge --buffer-size 32M --use-mmap --dir-cache-time 72h --drive-chunk-size 16M --timeout 1h --vfs-cache-mode minimal --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit 1G &
[1] 44383
2020/06/25 14:06:25 Fatal error: Directory is not empty: /mnt/disk3/gdrive If you want to mount it anyway use: --allow-non-empty option
Does anyone have any idea?
crazyhorse90210 Posted June 25, 2020
On 6/21/2020 at 2:01 PM, Waseh said: No, sorry. You will have to wait until it's merged into the main beta channel or modify the plugin yourself / replace the binary 😄
I think it's been merged into v1.53 now.
Waseh (Author) Posted June 25, 2020
@crazyhorse90210 please search this thread regarding --allow-non-empty; it's already been discussed. And if it's merged, then just update your version of rclone by either rebooting your machine or reinstalling the plugin; both methods will pull the newest version.
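For reference, the usual resolution to the error above: newer fusermount builds reject the nonempty option, so rather than passing --allow-non-empty, make sure the mount point is empty before mounting. A sketch (the /mnt/disk3/gdrive path comes from the earlier mount command; the trimmed-down rclone flags are an assumption):

```shell
#!/bin/bash
# Sketch: refuse to mount over leftover files instead of passing
# --allow-non-empty, which newer fusermount versions reject.
MNT="/mnt/disk3/gdrive"

ensure_empty() {
    local mp="$1"
    mkdir -p "$mp"
    # A non-empty mount point usually means stale files from a failed
    # run; mounting over them would just hide them.
    if [ -n "$(ls -A "$mp")" ]; then
        echo "refusing to mount: $mp is not empty" >&2
        return 1
    fi
}

if [ "${1:-}" = "run" ]; then
    ensure_empty "$MNT" && rclone mount gcache: "$MNT" --allow-other &
fi
```

Failing loudly here is deliberate: files hidden underneath a mount are easy to forget about and reappear as "Directory is not empty" the next time the mount script runs.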
mgutt Posted June 25, 2020
11 minutes ago, crazyhorse90210 said: I think it's been merged into v1.53 now.
That does not help, as Nextcloud/Owncloud is not able to update the modification date. The only possibility is to re-upload everything. WebDAV sucks.
thingie2 Posted July 4, 2020
I'm having an issue using rclone with OneDrive. Because rclone does not support bidirectional syncing, I have been investigating options to provide this. The best route I have found so far is to use rclone to mount OneDrive, then use two separate Syncthing Dockers to sync between the OneDrive mount and the local folder. Unfortunately, doing this throws up errors in Syncthing saying there is insufficient space in the folder. From a bit of investigation, it looks like the reason is that the OneDrive mount is seen as being 1MB in size rather than the 1TB it actually is. Is there a way that either the size can be correctly shown, or at the very least increased (e.g. to 100MB), to let Syncthing know there is sufficient space?
Stupifier Posted July 4, 2020
People use tools like Cloudplow to do what you want. You set it up to use rclone's sync function and run it at regular intervals. Just a suggestion: https://github.com/l3uddz/cloudplow
thingie2 Posted July 4, 2020 (edited)
I hadn't come across Cloudplow whilst looking around, but from a quick initial look it seems like it should do what I want, thanks. I'll take a better look and give it a try as an alternative.
Edited July 4, 2020 by thingie2
Stupifier Posted July 4, 2020
The Cloudplow developer has also made a complete rewrite of it called "crop". It is still sort of beta, but it is like a Swiss army knife for rclone: https://github.com/l3uddz/crop
thingie2 Posted July 18, 2020
On 7/4/2020 at 5:54 PM, Stupifier said: The Cloudplow developer has also made a complete rewrite of it called "crop". It is still sort of beta, but it is like a Swiss army knife for rclone: https://github.com/l3uddz/crop
Thanks, I've been having a bit of a look into this and into Cloudplow; however, I'm a little unclear on the setup instructions and can't find a guide/walkthrough. Are you aware of one anywhere?
Stupifier Posted July 18, 2020 Share Posted July 18, 2020 (edited) 2 hours ago, thingie2 said: Thanks, I've been having a bit of a look into this & Cloudplow, however I'm a little unclear on the setup instructions & can't find a guide/walkthrough. Are you aware of one anywhere? Both are on GitHub and include readme files with sample configurations. The cloudplow readme also includes a full breakdown of everything as well as that tool is already very mature. That is all I used to get setup. You can also ask people in the Cloudbox discord. They usually help but are pretty strictly focused on Cloudbox setups of cloudplow. For Crop, the readme doesn't have QUITE as much hand holding because we'll....it is still in very active development. Oh and also for Cloudplow it uses a systemd service.....but since we don't have that on Unraid you could just schedule manual runs via Userscripts plugin Good luck Edited July 18, 2020 by Stupifier Quote Link to comment
thingie2 Posted July 18, 2020
I've been having another look, and I hadn't realised I needed the Nerd Pack plugin in order to install Python modules, which is why the installation instructions weren't working. I've at least got further now; Unraid just doesn't recognise the "cloudplow" command, so I've got to work that out now.