yendi Posted August 1, 2019
Often the forum adds invisible characters to commands that you copy. Paste the command into Notepad and copy it again from there.
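A quick way to check a pasted command for those hidden characters (a sketch; the `check_clean` helper and the sample rclone command are illustrative, not from anyone's actual setup):

```shell
# Count bytes outside printable ASCII in a pasted command. A non-zero count
# means the forum inserted hidden characters (e.g. a non-breaking space)
# and the command should be laundered through a plain-text editor.
check_clean() {
  hidden=$(printf '%s' "$1" | LC_ALL=C tr -d ' -~\n' | wc -c)
  if [ "$hidden" -eq 0 ]; then
    echo "clean"
  else
    echo "contains $hidden hidden byte(s)"
  fi
}

check_clean 'rclone mount gdrive: /mnt/user/mount_rclone'
```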
Cliff Posted August 1, 2019
Ok, I was prompted to add "--allow-non-empty", so I did. But it has been running for 30 minutes now and I don't see any files in any of the mount folders. If I look at the Google console Drive API, there are a couple of hundred requests that increment very slowly, about one every 5 seconds. The only thing I've noticed that has changed is that I now have a crypt folder on my Google Drive with a single file in it.
yendi Posted August 1, 2019
On 7/31/2019 at 2:09 PM, DZMM said: you could try reducing the number of uploads and checkers in the upload script to reduce ram usage. An extra 8GB might do the trick, but you might need to change some of the buffer settings in the mount command if you anticipate having a lot of concurrent streams
I think there is an issue with rclone: I watched the dismount happen live and saw that rclone had multiple processes using 100% of available memory. Then it crashes and I have to run the unmount script before being able to remount. Is there any possible explanation? Thanks
yendi Posted August 3, 2019
On 8/1/2019 at 9:37 PM, yendi said: I think there is an issue with rclone: I watched the dismount happen live and saw that rclone had multiple processes using 100% of available memory. Then it crashes and I have to run the unmount script before being able to remount. Is there any possible explanation? Thanks
So with the help of the rclone guys I might have found the issue: I have 12 GB of RAM, and with uploads plus all the services running on unRAID I am using about 8.5 GB. When Plex is generating thumbnails, it seems to consume all the remaining RAM for the job: some RAM for Plex itself, plus the --buffer-size of 256M multiplied by the number of open files, apparently 4-5 files simultaneously. I lowered buffer-size to 128M and I have not seen the issue in 24 hours. Hope it helps someone who faces this issue!
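The arithmetic behind that fix, for anyone sizing their own setup (the 4-5 stream count is yendi's observation, so treat these numbers as estimates, not exact figures):

```shell
# Worst-case rclone read-buffer memory is roughly buffer-size times the
# number of simultaneously open files. With 12 GB total and ~8.5 GB already
# in use, a 256M buffer leaves little headroom at 5 streams.
streams=5

buffer_mb=256
echo "buffer RAM at ${buffer_mb}M x ${streams} streams: $(( buffer_mb * streams )) MiB"

buffer_mb=128
echo "buffer RAM at ${buffer_mb}M x ${streams} streams: $(( buffer_mb * streams )) MiB"
```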
Cliff Posted August 11, 2019
I have a strange problem. If I use rclone mount to mount a Google Drive folder, it gets created correctly and I see all the files on my Google Drive. I can read and write to the share without problems using a file browser on unRAID, but whenever I add the mount to a Docker container I can't write to it. Does anyone know how to solve this?
sauso Posted August 13, 2019
Sounds like you aren't mounting it as RW. I don't have any issues. Post a screenshot of the folder mapping.
Cliff Posted August 13, 2019
7 hours ago, sauso said: Sounds like you aren't mounting it as RW. I don't have any issues. Post a screenshot of the folder mapping.
But I can read and write to it with no problems when using a file browser on unRAID; it only fails when trying to write to the share while it is mounted in a Docker container.
Kaizac Posted August 13, 2019
How do you give the Docker container access to the share? Can you share that screenshot?
Cliff Posted August 13, 2019
The "LadingZone" path is a local path on the drive and it works. But the Google Drive path does not seem to allow writes. I can see 0-byte tmp files and empty folders being created on my Google Drive, but no actual files get created.
yendi Posted August 13, 2019
Maybe this is a stupid question, but could this come from the mount script executing AFTER the Docker container starts?
Cliff Posted August 13, 2019
3 minutes ago, yendi said: Maybe this is a stupid question, but could this come from the mount script executing AFTER the Docker container starts?
No, I have been manually starting the Docker containers after the script runs.
DZMM Posted August 13, 2019 (Author)
1 hour ago, Cliff said: The "LadingZone" path is a local path on the drive and it works. But the Google Drive path does not seem to allow writes. I can see 0-byte tmp files and empty folders being created on my Google Drive, but no actual files get created.
You are mounting rclone at /mnt/disks - for host path 2, have you selected R/W Slave?
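For reference, unRAID's "RW/Slave" access mode corresponds to the following bind-mount flag in plain docker terms (the container name, image, and paths here are illustrative, not taken from Cliff's actual setup):

```shell
# "slave" mount propagation lets a FUSE mount created on the host *after*
# the container starts still appear inside the container; a plain "rw"
# bind-mount would show an empty directory in that situation.
docker run -d --name syncthing \
  -v /mnt/user/appdata/syncthing:/config:rw \
  -v /mnt/disks/google:/google:rw,slave \
  linuxserver/syncthing
```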
Cliff Posted August 13, 2019
Yes, I have tried everything. Right now it is mounted as R/W Slave.
Kaizac Posted August 13, 2019
1 hour ago, Cliff said: Yes, I have tried everything. Right now it is mounted as R/W Slave.
Which Docker container are you using? And what happens when you restart it? Does it work then?
Cliff Posted August 14, 2019
9 hours ago, Kaizac said: Which Docker container are you using? And what happens when you restart it? Does it work then?
I am using the Syncthing Docker container. I have tried restarting both the container and the server multiple times without change. I also tried moving the mount point between different disks and tried modifying the mount script. But I noticed a strange thing today: I changed the remote folder to a new folder on Google Drive, and this time folders were created again, along with some small files like JPEG thumbnails and .nfo files. But no "big" files were created.
Kaizac Posted August 14, 2019
1 hour ago, Cliff said: I am using the Syncthing Docker container. I have tried restarting both the container and the server multiple times without change. I also tried moving the mount point between different disks and tried modifying the mount script. But I noticed a strange thing today: I changed the remote folder to a new folder on Google Drive, and this time folders were created again, along with some small files like JPEG thumbnails and .nfo files. But no "big" files were created.
So how is your rclone set up? Can you post your rclone config? Just remove the tokens and such.
Cliff Posted August 14, 2019 Share Posted August 14, 2019 (edited) 15 minutes ago, Kaizac said: So how is your rclone set up? Can you post your rclone config? Just remove the tokens and such. Here is my rclone config. But as I think I said earlier, I can read/write to the mount without problem if I use the Cloud Commander app Edited August 14, 2019 by Cliff Quote Link to comment
DZMM Posted August 14, 2019 (Author)
39 minutes ago, Cliff said: Here is my rclone config. But as I think I said earlier, I can read/write to the mount without problems if I use the Cloud Commander app.
Have you tried mounting at /mnt/user instead of /mnt/disks? I've had problems with /mnt/disks in the past. Also, what about other Docker containers - do they work ok?
Cliff Posted August 14, 2019
1 minute ago, DZMM said: Have you tried mounting at /mnt/user instead of /mnt/disks? I've had problems with /mnt/disks in the past. Also, what about other Docker containers - do they work ok?
Yes, I have tried /mnt/user too. The Cloud Commander Docker container works and is able to read/write to the mount. But I don't know how that works, as that container mounts "/" and then you browse using the web UI.
Kaizac Posted August 14, 2019
10 minutes ago, Cliff said: Yes, I have tried /mnt/user too. The Cloud Commander Docker container works and is able to read/write to the mount. But I don't know how that works, as that container mounts "/" and then you browse using the web UI.
Have you tried another Docker container like Radarr to see if you can write files there? Syncthing doesn't support browsing through the UI, so you have to enter exact paths. So in your case you start with /google.
Cliff Posted August 14, 2019
20 minutes ago, Kaizac said: Have you tried another Docker container like Radarr to see if you can write files there? Syncthing doesn't support browsing through the UI, so you have to enter exact paths. So in your case you start with /google.
Yes, I have tried mounting /google and even /google/<some other folder on the drive>. As the Google Drive is already mounted, I can browse and select the folder from the Docker settings. And when syncing I see that folders and files are created on Google Drive, but the problem is that only folders and very small files (0 to ~5 KB) are created.
Kaizac Posted August 14, 2019
Just now, Cliff said: Yes, I have tried mounting /google and even /google/<some other folder on the drive>. As the Google Drive is already mounted, I can browse and select the folder from the Docker settings. And when syncing I see that folders and files are created on Google Drive, but the problem is that only folders and very small files (0 to ~5 KB) are created.
Then it's a Syncthing issue as far as I can tell. You should test with another Docker container.
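Before swapping containers, a quick dd write test can show whether large writes fail at the mount level, independent of Syncthing (a sketch; the `write_test` helper is illustrative, and the path argument is whatever your container maps, e.g. /google):

```shell
# Write a 10 MiB file into the given directory, verify its size, then delete
# it. "write ok" means large writes work at that path; a dd error or size
# mismatch points at the mount rather than at Syncthing.
write_test() {
  f="$1/rclone_write_test.$$"
  dd if=/dev/zero of="$f" bs=1M count=10 2>/dev/null || { echo "write failed"; return 1; }
  size=$(wc -c < "$f")
  rm -f "$f"
  if [ "$size" -eq $(( 10 * 1024 * 1024 )) ]; then
    echo "write ok"
  else
    echo "size mismatch"
  fi
}

write_test /tmp
```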
testdasi Posted August 14, 2019
4 hours ago, Cliff said: I am using the Syncthing Docker container. I have tried restarting both the container and the server multiple times without change. I also tried moving the mount point between different disks and tried modifying the mount script. But I noticed a strange thing today: I changed the remote folder to a new folder on Google Drive, and this time folders were created again, along with some small files like JPEG thumbnails and .nfo files. But no "big" files were created.
Oh dear, don't use Syncthing directly on the mount. It's a rather long explanation as to why it doesn't work, but just trust me, it won't work.
mestep Posted August 15, 2019
Sometimes files start to disappear from the local unionfs mount; they are still in the cloud. I have to unmount/remount unionfs for things to show back up. Any way to fix this? I changed the buffer size to 32MB; will see how it goes.
Edited August 15, 2019 by mestep
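For the remount itself, something along these lines works without restarting the server (the mount point and branch paths are assumptions based on the common unRAID mount scripts; substitute your own):

```shell
# Lazy-unmount the stale unionfs, then rebuild it. The branch order
# (local upload folder as RW first, rclone mount as RO second) follows
# the usual mount-script layout.
fusermount -uz /mnt/user/mount_unionfs/google_vfs
unionfs -o cow,allow_other \
  /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO \
  /mnt/user/mount_unionfs/google_vfs
```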
Bolagnaise Posted August 16, 2019
On 8/4/2019 at 5:31 AM, yendi said: So with the help of the rclone guys I might have found the issue: I have 12 GB of RAM, and with uploads plus all the services running on unRAID I am using about 8.5 GB. When Plex is generating thumbnails, it seems to consume all the remaining RAM for the job: some RAM for Plex itself, plus the --buffer-size of 256M multiplied by the number of open files, apparently 4-5 files simultaneously. I lowered buffer-size to 128M and I have not seen the issue in 24 hours. Hope it helps someone who faces this issue!
Thank you so much. I have only 10GB of RAM currently and was seeing rclone crashes in unRAID and out-of-memory issues. I'm upgrading to 16GB tomorrow.