Posts posted by goni05

  1. So I tried it again, and it failed again, but now I know why!  I was testing out your docker container to see if it would work properly.  I would be syncing ~14TB, but I didn't want to download everything, as I wanted to transfer my files over from my other computer and then sync (to keep any significant downloads from occurring).  To avoid wasting my time transferring all of that only to find it had messed everything up, I wanted to test it first.  Once it had downloaded a certain amount, I stopped the docker container.  What I found, after reading through your scripts, was that the permissions modification only occurs once the sync completes (that's a long time to wait if I tried to download 14TB).  This would NOT be a normal scenario, but because I was stopping the docker container, the chmod never ran on the mount point.  I ended up causing my own problem.  Once I set up a very tiny sync, the process worked as expected.
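For what it's worth, one way the script could cover this early-stop case is to hang the permission fix-up off a shell trap instead of only running it after the sync finishes. This is just a sketch; the directory, mode bits, and sync command are placeholders, not the container's real names:

```shell
#!/bin/sh
# Sketch only: run the permission fix-up on ANY exit, not just after a
# completed sync.  SYNC_DIR and the final command are placeholders.
SYNC_DIR="${SYNC_DIR:-/tmp/sync-demo}"
mkdir -p "$SYNC_DIR"

fix_perms() {
    # u+rwX,g+rX: owner read/write, group read; capital X only marks
    # directories (and already-executable files) as traversable
    chmod -R u+rwX,g+rX "$SYNC_DIR"
}

# Fires on a normal exit AND on `docker stop` (SIGTERM) / Ctrl-C
# (SIGINT), so a half-finished sync still ends up readable.
trap fix_perms EXIT TERM INT

sleep 1   # stand-in for the real long-running sync command
```

That way, stopping the container mid-sync would still leave the files readable.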

     

    Like you said, this is not ideal, but it works (knowing the limitations).  I still think figuring out how to pass the PUID and PGID would make this write as the appropriate user from the start and avoid the problem entirely.  If I get some time to dabble in it, I can assist, but I have zero experience with docker, not to mention how Unraid, Docker Hub, and GitHub all work together.  If you have a little tutorial on how you initially got it set up, I can mess around with my own setup until I figure it out.  I would like to help you get this resolved, as I prefer the sync client over rclone setups.
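On the PUID/PGID idea: the usual pattern (linuxserver.io images work this way) is an entrypoint that remaps an in-container user to the IDs passed from the Unraid template before the app starts. A sketch of what that could look like; the user name "abc", the /data path, and the use of su-exec are assumptions, not the container's actual layout:

```shell
#!/bin/sh
# Sketch of a PUID/PGID-aware entrypoint.  "abc" and /data are assumed
# names; the real image may differ.

PUID="${PUID:-99}"    # Unraid's default: uid of "nobody"
PGID="${PGID:-100}"   # Unraid's default: gid of "users"

# Remapping is only possible when we start as root and the user exists.
if [ "$(id -u)" = "0" ] && id abc >/dev/null 2>&1; then
    groupmod -o -g "$PGID" abc
    usermod  -o -u "$PUID" abc
    chown "$PUID:$PGID" /data    # own the mount point up front
    exec su-exec abc "$@"        # drop to abc before the app starts
fi

exec "$@"
```

With something like that in place, the chmod-after-sync step wouldn't be needed at all, since every file would be written as the right user from the beginning.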

  2. Quote

     

    Specify whether new files and directories written on the share can be written onto the Cache disk/pool if present. This setting also affects mover behavior.

     

    No prohibits new files and subdirectories from being written onto the Cache disk/pool. Mover will take no action so any existing files for this share that are on the cache are left there.

     

    Yes indicates that all new files and subdirectories should be written to the Cache disk/pool, provided enough free space exists on the Cache disk/pool. If there is insufficient space on the Cache disk/pool, then new files and directories are created on the array. When the mover is invoked, files and subdirectories are transferred off the Cache disk/pool and onto the array.

     

    Only indicates that all new files and subdirectories must be written to the Cache disk/pool. If there is insufficient free space on the Cache disk/pool, create operations will fail with an out-of-space status. Mover will take no action so any existing files for this share that are on the array are left there.

     

    Prefer indicates that all new files and subdirectories should be written to the Cache disk/pool, provided enough free space exists on the Cache disk/pool. If there is insufficient space on the Cache disk/pool, then new files and directories are created on the array. When the mover is invoked, files and subdirectories are transferred off the array and onto the Cache disk/pool.

     

    NOTE: Mover will never move any files that are currently in use. This means if you want to move files associated with system services such as Docker or VMs then you need to disable these services while mover is running.

     

     

    So those are the definitions.  I have a question about how this should behave, because the definitions and the behavior don't align.  I am running 6.9.0-rc2, and I have a share with the Use Cache Pool setting set to Yes.  I have it set up as an SMB share and have been attempting to copy an archive into my array (the array is 14TB, the data being copied is 5TB, and the cache disk is 1TB), but Windows keeps failing the copy once the cache drive fills up, stating "There is not enough space on [share name]."  According to the definition, if the cache drive runs out of space, why does it not start writing to the array instead?  The ONLY thing I can think of is that the write starts, and when the cache runs out of space, the file it's attempting to write fails, and Windows just stops writing any further with that error message.  Even when I clicked Try Again, it failed again.  Am I missing something, or should Unraid just handle this?
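In case it helps anyone reproduce this: with the stock Unraid mount layout you can watch where the in-flight file actually lands while the copy runs. "MyShare" is a placeholder here; /mnt/cache is the pool and /mnt/user0 is the array-only view of the user shares:

```shell
# Placeholder share name; run this while the SMB copy is in progress.
SHARE="MyShare"

# Free space on the pool vs. the array (skips mounts that don't exist)
for p in /mnt/cache /mnt/user0; do
    if [ -d "$p" ]; then df -h "$p"; fi
done

# Is the partial file sitting on the cache, the array, or both?
ls -lh "/mnt/cache/$SHARE/" "/mnt/user0/$SHARE/" 2>/dev/null || true
```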

  3. I gave your recent snapshot a try, but the read permission issue still persists.  I also cross-posted on GitHub.  I hope just changing the Repository setting to add :snapshot to the end was enough to pull what it needed (it seemed to trigger a new download).  I will say that if the files are first created locally and then synced, the issue does not present itself.  It is only a problem if Google Drive has the file and it is newly written to disk.  If you get this figured out, I see it as set-it-and-forget-it.  I appreciate what you have done to get it here and easily available to everyone.  I wish I knew more about how all this gets set up, because I would help, but I'm not familiar with the inner workings.  Let me know if I can help in any other way.
