yendi

Members
  • Content Count: 60
  • Community Reputation: 2 Neutral
  • Rank: Advanced Member

  1. Thanks for all that info, man! You've really saved me thousands in HDDs. Have a nice weekend
  2. Ok, thanks! So I will put 15d, giving me a 15-day local cache to ease thumbnail creation and prevent huge bandwidth usage when many people are playing recent content. Do you have any Plex settings that you deactivate because they are incompatible with rclone? I have left everything on (even the partial scans etc.) and have not seen any issue.
  3. Playing only 1080p remuxes & some 4K. Why? Do you see a noticeable difference?
  4. @DZMM Quick question: if I put --min-age 15d to use it as a somewhat local cache, will it interfere with the directory caching time or any other setting? My idea is to keep a new file locally for a few days, because, for example, if a new episode of a show is out, many people with access to my server will watch it in the few days after release (see the sketch below). Would it work? Thanks
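
    A minimal sketch of the upload job this question implies, assuming the usual local-folder-then-move setup from this thread; the paths and the remote name (gdrive_media_vfs:) are placeholders, not the actual script:

      # Hypothetical upload job: only move files older than 15 days,
      # so recent media stays on the local disk as a warm cache.
      rclone move /mnt/user/local/google_vfs gdrive_media_vfs: \
          --min-age 15d \
          --transfers 4 \
          --log-level INFO

    Note that --min-age only filters which files the move picks up; it is independent of the mount's --dir-cache-time, which just controls how long the mount caches directory listings.
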
  5. Cool to know that my issue is helping others! I can confirm that since I upgraded my RAM (added 8 GB) and lowered the buffer to 128 MB, I have not faced any issues. So far I have noticed no difference between a 128 MB and a 256 MB buffer size on 1000/300 fiber. Cheers!
  6. Maybe that's a stupid idea, but could this come from the mounting script executing AFTER the Docker containers start? (Guard sketch below.)
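
    If start order is the suspect, one hedged workaround (assuming unRAID's User Scripts plugin and a container named plex, both placeholders) is to guard the container start on the mount being live:

      # Hypothetical guard: only start Plex once the rclone mount is up,
      # so the container never sees an empty mount folder.
      if mountpoint -q /mnt/user/mount_rclone/google_vfs; then
          docker start plex
      else
          echo "rclone mount not ready, not starting plex" >&2
      fi
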
  7. I have the same issue: the search is broken and I don't know why. Does anyone have any idea what could have caused that?
  8. So with the help of the rclone guys I might have found the issue: I have 12 GB of RAM, and between uploads and all the services running on unRAID I am using about 8.5 GB of it. When Plex is generating thumbnails, it seems to consume all the remaining RAM for the job: some RAM for Plex itself, plus --buffer-size 256M * the number of open files, apparently 4-5 files simultaneously. I lowered --buffer-size to 128M and have not seen the issue in the last 24 hours. Hope this helps someone facing the same issue! (Rough numbers below.)
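
    The rough arithmetic, taking the 4-5 simultaneous open files reported above at face value:

      256 MB x 5 open files = 1280 MB of read buffers
      128 MB x 5 open files =  640 MB of read buffers

    With ~8.5 GB of 12 GB already in use (~3.5 GB free), the 256 MB setting alone can eat a third of the remaining headroom before Plex's own thumbnail work is counted, which is consistent with the out-of-memory crashes described here.
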
  9. I think there is an issue with rclone: I watched the dismount happen live and saw this: rclone had multiple processes using 100% of available memory. Then it crashes and I have to use the unmount script before being able to remount. Is there any possible explanation? Thanks (recover-and-remount sketch below)
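
    For reference, a minimal recover-and-remount sketch along the lines of the unmount script mentioned above; this is an assumption about what such a script does, not its actual contents, and reuses the paths quoted elsewhere in these posts:

      # Hypothetical cleanup after a crashed mount: lazy-unmount the dead
      # FUSE mount, then bring the rclone mount back up in the background.
      fusermount -uz /mnt/user/mount_rclone/google_vfs
      rclone mount --buffer-size 128M --dir-cache-time 72h \
          gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &
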
  10. Often the forum adds invisible characters to commands that you copy. Paste it into Notepad and copy it again. (Or check for them directly; see below.)
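
    If you'd rather verify than re-copy, one way (plain coreutils, nothing forum-specific) is to dump the pasted command byte by byte so any invisible character becomes visible:

      # Paste the copied command between the quotes; any hidden byte shows
      # up in the dump (e.g. a non-breaking space appears as 302 240).
      printf '%s' 'paste command here' | od -c
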
  11. The unionfs part allows you to use the upload system and to have both the cloud and upload folders merged. So no, you don't need that if you just want to mount a gdrive: you just need the "rclone mount" command. You should check the rclone help to see which arguments to add so the command suits your needs, or just try one like this: rclone mount --buffer-size 256M --dir-cache-time 72h --drive-chunk-size 32M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs (the unionfs counterpart is sketched below for contrast)
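
    For contrast, the unionfs merge this post says you can skip looks roughly like the following; treat the exact options and paths as assumptions about the thread's scripts rather than their literal contents:

      # Merge a writable local upload folder with the read-only cloud
      # mount into one combined view that Plex/Sonarr can point at.
      unionfs -o cow,allow_other \
          /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO \
          /mnt/user/mount_unionfs/google_vfs
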
  12. I have had multiple rclone crashes since yesterday (the mount suddenly becomes empty, not only when using Sonarr but also during Plex scans), but I think I found the issue: I "only" have 12 GB of RAM, and the upload + services running on unRAID eat up 70% of the memory. I read that rclone tends to crash when no RAM is available, and I saw that during a library scan memory usage climbs to around 90%, so I ordered 8 GB more RAM. I will report back when it arrives to confirm whether the shortage is what makes rclone crash.
  13. Sorry, I typed that on my cellphone so it made a typo. My question was: is it OK to add a cache to the 3 shares? As they are unmounted and recreated on each boot via the scripts, I wanted to know whether it would work and be safe.
  14. Do you think I can use the cache disk without issue with the 3 mounts? As they are created at boot with a script, I prefer to ask.