[Support] Linuxserver.io - Syncthing


Application Name: Syncthing
Application Site: https://syncthing.net/
Docker: https://hub.docker.com/r/linuxserver/syncthing/
Github: https://github.com/linuxserver/docker-syncthing


On 2/8/2019 at 5:16 AM, saarg said:

 

You seem to not understand how docker and volume mappings work. Take a look in the Docker FAQ for a better understanding.

Your docker containers will only see the container path and not the host path.

So Krusader will not see syncthings /sync folder. For Krusader to see what syncthing sees in the /sync folder, you need to map the same host path to Krusader (/mnt/user/). Notice that the folder you see in Krusader will be whatever you set it to in the container path.

 

Hi saarg!

 

I know I need help with docker and volume mapping ... could you tell me where to find the Docker FAQ? I've recently managed to install Syncthing on my unRAID server ... but I can't figure out 'mapping', or the proper way to create new folders inside of Syncthing.

 

Thank you for all your help!

  • 2 weeks later...

Hello guys, new user here.

 

Sadly there are no out-of-the-box tutorials for almost any unRAID apps (unlike the gazillion there are for Ubuntu), which makes it more and more frustrating to use. Especially for a noob...

 

So i have a Server with a cache drive and one array drive (parity will follow).

 

While browsing through this topic i saw that it's recommended to sync to cache (faster).

 

So host path 2 has to be on /mnt/cache/syncthing... and then I create the data sync folders inside that folder.

 

So when I add host path 2 inside the docker, I can only select /mnt/user/syncthing.

 

Do I have to create it manually beforehand?

I didn't find any place to add a share on the cache drive; the shares default to the array drive. Or does selecting "Use cache: YES" automatically create it on /mnt/cache/syncthing?

 

And btw, via the unRAID GUI I can create shares at the root level - how can I create shares inside other shares? Do I have to resort to the horrible terminal?

 

After the shares are created, do I really have to mess around with permissions in the terminal? Is there an easy way to do it via the GUI?

 

Would appreciate any help!

Link to post

For anyone experiencing write-permission errors on Samba shares as I did, please make sure that:

  • Syncthing Container (Docker) configuration has a configuration key set as follows:
    UMASK_SET => 000 (UMASK_SET as "Key" / 000 as "Value")

    Such a key can be added by clicking on "Add another Path, Port, Variable, Label or Device".

[Screenshot: the UMASK_SET variable added to the container settings]
     
  • to enable "Ignore Permissions" on each individual shared folder inside the Syncthing Web UI on your UNRAID machine.
[Screenshot: "Ignore Permissions" enabled in the Syncthing folder settings]
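As a quick local illustration of what that variable controls (plain shell, no Docker involved): a newly created file gets mode 0666 minus the umask bits, so the common default of 022 strips the group/other write bits while 000 leaves files world-writable - which is what Samba clients need here.

```shell
# Demonstrate the effect of umask on newly created files:
# 022 (common default) -> 644, 000 (as set via UMASK_SET) -> 666.
rm -f /tmp/umask_022_file /tmp/umask_000_file
umask 022
touch /tmp/umask_022_file
umask 000
touch /tmp/umask_000_file
stat -c '%a' /tmp/umask_022_file /tmp/umask_000_file
```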

 

I have been investigating for hours why my files would keep ending up with the wrong permissions despite the supposedly correct UMASK of 000. Syncthing will, by default, try to synchronize the file permissions when exchanging data between two machines - beware of this functionality. Especially if you sync folders between another Linux machine and your UNRAID machine, Syncthing will initially create the file on your UNRAID machine respecting the UMASK 000 but later modify the permissions to reflect those on the original machine.

 

By ensuring that your synced folders are all set to "Ignore Permissions", you basically tell Syncthing to stay out of setting permissions altogether and respect the default local permissions (which you have previously set by handing the UMASK 000 to the container).
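For reference, the same toggle ends up as a per-folder element in Syncthing's config.xml (the folder attributes below are made-up examples; only the <ignorePerms> element is the point):

```xml
<!-- Hypothetical folder entry in config.xml -->
<folder id="photos" label="Photos" path="/sync/photos" type="sendreceive">
    <ignorePerms>true</ignorePerms>  <!-- "Ignore Permissions" in the Web UI -->
</folder>
```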

 

Hope this helps anyone experiencing permission troubles on Samba shares. 🙂

 

On 10/22/2020 at 6:54 AM, MMeirolas said:

Hello guys, new user here. [...] So host path 2 has to be on /mnt/cache/syncthing... And then create the data sync folders inside that folder. So when I add host path 2 inside the docker, I can only select /mnt/user/syncthing. Do I have to create it manually beforehand? [...]

 

Do not do this - "/mnt/cache" is the disk share equivalent of the cache drive. If you make this available to your Syncthing instance it will clog up your cache drive and the synced files will not be moved to your disk array at all. This means the files will not be protected by parity and also that necessary cache space is taken away from other dockers, VMs etc...! Not being able to select this path in your Docker settings is actually a security mechanism to prevent exactly what you want to do. ;-)

 

What is meant by using the cache is that you should create a user share utilizing the cache, not only writing directly to the cache itself. If you create such a cache-enabled user share, you allow Syncthing to initially write files to the (fast) cache. This is especially useful because the system does not need to be kept busy trying to keep track of the parity while the files are still syncing/being built. The mover will later move the files to your (parity protected) array and free up space on the cache for new sync operations and other applications. This is a reasonable compromise between speed and a relatively short amount of unprotected time for the newly downloaded files, which the mover will later send to the protected array.

 

So if you want to get this set up you need to:

  • Create a user share with "Use cache (for new files/directories):" set to "Yes".
  • Expose this user share to Syncthing by editing your Syncthing container and setting the Host Path 2 accordingly.
  • Set the correct path in Syncthing's Web UI for your shared folder(s). (in this case starting with /sync/)
  • Make sure the mover is running on a schedule in order to move the files from cache to the array regularly.
    You can check this by clicking on "Schedule" on the Main page next to the "Check" button.

This is an example configuration assuming your setup:

  1. Gathering from your message you already have a user share called "syncthing" set up - good!
  2. Now edit this user share and enable the cache for it - set it to "Yes".
  3. Next edit your Syncthing Docker container and point Host Path 2 (/sync) to "/mnt/user/syncthing/".
  4. In your Syncthing web interface create a new shared folder and point it to "/sync/" or "/sync/<folder name>" if you have multiple shares.
  5. You got it all set up - share this folder as usual. 🙂
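The steps above, expressed as a docker-compose style sketch (values taken from this example setup; the unRAID template fields correspond to the same volume and environment entries, so treat this as an illustration rather than a drop-in file):

```yaml
services:
  syncthing:
    image: linuxserver/syncthing
    environment:
      - PUID=99         # unRAID default user "nobody"
      - PGID=100        # unRAID default group "users"
      - UMASK_SET=000   # permissive umask, as covered earlier in the thread
    volumes:
      - /mnt/user/appdata/syncthing:/config   # container config
      - /mnt/user/syncthing:/sync             # Host Path 2: the cache-enabled user share
    ports:
      - 8384:8384    # Web UI
      - 22000:22000  # sync protocol
```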

 

On 7/4/2020 at 1:31 AM, johnwhicker said:

This place is DEAD :)

 

This stuff ain't working :(   It scanned for 2 freaking days and now it's STUCK. Any ideas? It won't move or transfer anything :( 

 

 

[Screenshot: Syncthing web UI showing the stuck scan]

 

 

This place ain't as dead as your system will be with those settings in time. 😱

You are exposing your whole UNRAID operating system to the Syncthing docker:

DO NOT DO THIS.

 

What's even worse, it seems like a sync operation for your WHOLE OPERATING SYSTEM is in progress:

STOP SYNCTHING NOW.

 

The reason that Syncthing is stuck is because it is indexing your whole operating system (1,268,954 files!) and working in folders that it should not even come close to: potentially changing system files, messing with permissions and who knows what else. The whole point of Docker is to limit the exposure of your system to certain applications and "container"-ize applications in sandboxed environments, so they can do less harm when malfunctioning or when set up badly by the end-user.

 

I do not know how much damage is already done by that sync operation shown in your screenshots, but if you actually have it set up this way you can easily destroy both your parity and whole UNRAID configuration, if not lose your data.

 

Syncthing is way too risky a tool to just randomly expose folders without knowing what you are doing - sorry if this sounds harsh.

 

Please watch some tutorials on how Docker works, as well as UNRAID-specific Docker configuration tutorials.

You need to set up a user share and expose only that user share to your Syncthing instance - see explanation to the other user above.

14 hours ago, Rysz said:

 

Do not do this - "/mnt/cache" is the disk share equivalent of the cache drive. [...]

You are exposing your whole UNRAID operating system to the Syncthing docker: DO NOT DO THIS. [...] You need to set up a user share and expose only that user share to your Syncthing instance. [...]

 
Thanks much. I moved on already. Command line and scripting is king :) I tried several ways and they didn't work well. Perhaps too much data, who knows.

RAID-1 actually was not my operating system or the entire system. It's basically an 8TB drive that I use for a share named RAID-1. The high number of files was basically all the video and pictures I store there.

I didn't hurt the unRAID operating system during this process :)

 

Thanks again for responding Sir

 

6 hours ago, johnwhicker said:

 

Thanks much. I moved on already. [...] I didn't hurt the unRAID operating system during this process :)

Fair enough - still, exposing / to the container exposes your whole operating system structure to it.

Much better to just go a few directories deeper and expose /mnt/user/<share name> rather than exposing / and picking the folder from the whole tree.
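To make the contrast concrete (paths are examples only), the difference is just the host side of the volume mapping:

```
# Too broad - hands the container the whole OS tree:
#   Host Path: /                 ->  Container Path: /sync
# Scoped - hands it a single user share:
#   Host Path: /mnt/user/RAID-1  ->  Container Path: /sync
```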

 

Edited by Rysz
11 hours ago, Rysz said:

 

Fair enough - still, exposing / to the container exposes your whole operating system structure to it. [...]

 

Thanks much. I am sure we can continue learning from here so keep up the good work Sir


Very basic question. I have several folders within other existing shares that I'd like to sync with Syncthing. I'm somewhat confused as to what path should go in the Host Path 2 field. Would I just put /mnt/user so it has access to all the shares and I can pick and choose from there, or does it need its own separate share apart from all of the others? I obviously wouldn't want to sync my entire server.


I installed Syncthing and it works great so far.

 

But I have one issue:

For testing I copied a 500 MB file to a source folder that should be synced via Syncthing to the unRAID share.

The sync started, but I deleted this file from the source folder before syncing had completed.

Now the summary shows "Syncing 95%" and the deleted file as an "Out of Sync Item". See screenshot:

 

[Screenshot: "Syncing 95%" with the deleted file under "Out of Sync Items"]

 

Is there a way, maybe under Settings -> Advanced -> Folder, to make Syncthing figure out that the file was deleted in the meantime and does not have to be synced in the future?

A rescan did not resolve this status.

 

Thanks and best regards.

 

Edit:

For some reason the 500 MB file was still in the source folder (don't know why), but not on the unRAID share (which is ok).

As soon as I deleted the file from the source folder, the sync was "Up to date" again.

Edited by Padi75

Yes.  But my search results didn't show me how to reset the password via GUI.

 

Saw this, but don't know how to use it.

 

[Screenshot: Syncthing GUI settings]

 

 

***Update***

 

Solved this by connecting to the console. I found the config folder and the config.xml file, and looked for something like this:

 

<gui enabled="true" tls="false" debugging="false">
    <address>127.0.0.1:8384</address>
    <user>syncguy</user>
    <password>$2a$10$s9wWHOQe...Cq7GPye69</password>
    <apikey>9RCKohqCAyrj5RjpyZdR2wXmQ9PyQFeN</apikey>
    <theme>default</theme>
</gui>

 

Though I remembered the password after looking at the username.
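For anyone else locked out: a sketch of the same reset done with sed instead of editing by hand (file location and layout are assumptions based on the snippet above - back up the real config.xml first; removing the <user> and <password> elements leaves the GUI without authentication until you set a new login in the Web UI):

```shell
# Demo on a throwaway copy; the real file lives somewhere like /config/config.xml.
CONFIG=/tmp/demo_config.xml
cat > "$CONFIG" <<'EOF'
<gui enabled="true" tls="false" debugging="false">
    <address>127.0.0.1:8384</address>
    <user>syncguy</user>
    <password>$2a$10$s9wWHOQe...Cq7GPye69</password>
</gui>
EOF
# Delete the credential elements, then restart Syncthing and set a new login in the UI.
sed -i '/<user>/d; /<password>/d' "$CONFIG"
cat "$CONFIG"
```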

Edited by jang430

I'm having an issue trying to sync a 2+ TB folder. It runs the scan, and then (three times now) the "source" docker running on unRAID displays a message that the folder marker is missing and stops. I have added the UMASK_SET 000 variable to my docker, with no change.

 

The .stfolder does seem to be missing, but I can't figure out why Syncthing cannot write to that directory. It is owned by "nobody" and seems to have the correct permissions:

 

drwxrwxrwx 1 nobody users 4096 Nov 18 04:40 xxxxxxxxx/ 

 

Any ideas?
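For what it's worth, if only the marker itself has gone missing, recreating it is a known workaround - the .stfolder is just an empty directory Syncthing uses to confirm the folder root still exists. A sketch (the path below is a stand-in; on unRAID it would be the synced folder itself, e.g. /mnt/user/<share>/<folder>):

```shell
# Recreate the folder marker; an empty directory named .stfolder is sufficient.
SYNC_DIR=/tmp/demo_sync_folder      # stand-in for the real synced folder
mkdir -p "$SYNC_DIR/.stfolder"
ls -ld "$SYNC_DIR/.stfolder"
```

If the marker keeps vanishing, something outside Syncthing (a cleanup script, the mover, another tool touching the share) is likely deleting it, which is worth ruling out.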

  • 2 weeks later...

The mapping in the unRAID template was incorrect. It's mapped to /sync in the container, but Syncthing defaults to looking in /config/sync. It took me a bit to realize why my files weren't showing up in my share. Mind you, I know most people will have custom mappings set up, but for someone just testing it out, it really threw me off.

Edited by mkono87

Since today I can't access the Syncthing UI - Firefox loads the page forever.

 

Nothing wrong in the logs:

[YOAYO] 00:28:35 INFO: QUIC listener ([::]:22000) starting
[YOAYO] 00:28:35 INFO: TCP listener ([::]:22000) starting
[YOAYO] 00:28:35 INFO: GUI and API listening on [::]:8384
[YOAYO] 00:28:35 INFO: Access the GUI via the following URL: http://127.0.0.1:8384/
[YOAYO] 00:28:35 INFO: My name is "NAS"
[YOAYO] 00:28:35 INFO: Device xxxxx is "Seedbox" at [dynamic]
[YOAYO] 00:28:49 INFO: Detected 1 NAT service
[YOAYO] 00:28:54 INFO: quic://0.0.0.0:22000 detected NAT type: Port restricted NAT
[YOAYO] 00:28:54 INFO: quic://0.0.0.0:22000 resolved external address quic://xxxx:22000 (via stun.syncthing.net:3478)
[YOAYO] 00:29:01 INFO: Joined relay relay://163.172.132.71:587
[YOAYO] 00:29:11 INFO: Established secure connection to xxxxx at xxx:22000-xxx:60428/tcp-server/TLS1.3-TLS_AES_128_GCM_SHA256
[YOAYO] 00:29:11 INFO: Device xxxxx client is "syncthing v1.6.1" named "xxx" at xxx:22000-46.xxx/tcp-server/TLS1.3-TLS_AES_128_GCM_SHA256

Tried to restart. Same behaviour.

Edited by Alex.b

I have a directory within a share that I wish to sync:

/mnt/user/Unraid_Public/Sissy/Music

 

I put that path into a clean install of the Syncthing app on my unRAID system. I repeatedly got permission errors when mkdir tried to create the path.

 

Every single directory was drwxrwxrwx below /mnt/

I even tried changing /mnt/ with a chmod 777.  Still no joy.

 

I uninstalled Syncthing thinking I would start fresh. But then I found that uninstalling it left droppings -- /mnt/user/appdata/syncthing/ with various files and directories under it. How is that an uninstall?

 

I'm sure that there must be simple directions somewhere, perhaps in a wiki, that explains how to sync an existing directory using Syncthing under Unraid.  This stuff of going through pages of Docker and Syncthing settings and trying to figure out what settings need to change is nuts.

  • 4 weeks later...

Hi.. I have some issues getting Syncthing to work with my secondary network... I would appreciate it if someone could look at whether I am doing something wrong:

 

 

EDIT: Guess I was just TOO anxious... It took a while for the system to complete scanning... It's running 300 Mb/s between the boxes now :-)

Edited by Helmonder
  • 3 weeks later...

Hi all,

 

I have syncthing installed and working, but do have a couple of "best practice" questions. Not specifically Linuxserver related, but definitely Unraid related!

 

I am syncing a large photo directory between Mac and Unraid.

 

Does anyone have any suggested (recommended) settings for:

 

1. Full rescan interval

2. Watch for changes

 

I assume (2) should be on for both Mac and Unraid.

I presume setting (1) to something bigger like 24 hours would be suitable for Unraid so as not to keep spinning up drives.
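Those two settings live as attributes on the folder element in config.xml; as a sketch with the values discussed (the folder id and path are made up):

```xml
<!-- Hypothetical folder entry: daily full rescan, filesystem watcher enabled -->
<folder id="photos" path="/sync/photos" rescanIntervalS="86400" fsWatcherEnabled="true">
</folder>
```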

 

I've searched but not found anything specifically Unraid related.


I hope this is the right location for my question.

I'm using unRAID v6.9.0-rc2 with Syncthing v1.12.1 installed as a Docker container. I've attached/created an unRAID NFS share mounting a shared folder on one of my Ubuntu servers. I have Syncthing configured to sync to this NFS-mounted share. The synchronization part works as it should.

 

The problem is that the folders and files created on the NFS-mounted share have the permissions UID:99 and GID:100, which I know is the user & group Syncthing is running under. These UID/GID don't match any UID/GID on my Ubuntu box. If I change the UID:GID on these files and folders after creation via the command line, then Syncthing complains.

 

The user I'm typically logged into Ubuntu as uses 1000:1000. There is an Ubuntu group "users" with GID:100, of which my typical user is a member. Suggestions on how to fix this NFS-mounted share permission issue?
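One hedged option: linuxserver images take PUID/PGID environment variables that control which UID/GID the container (and thus Syncthing) runs as, so matching them to the Ubuntu user may avoid the mismatch - assuming nothing else on the unRAID side depends on 99:100 ownership:

```
# In the container template (Variables), hypothetically:
#   PUID = 1000   # match the Ubuntu user's UID
#   PGID = 1000   # match the Ubuntu user's GID
```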

  • 1 month later...

also, totally unrelated to my previous post, I don't think this container is properly reporting CPU usage - I see my CPUs at 60-70% (8 threads), sometimes pinning at 98% for an extended period across all of them, but the docker is showing 4-7%. This is the situation on both my unRAID boxes running different hardware.

 

I tried setting the docker switches --cpu-shares=256 --cpus=6 to control usage and keep cycles free for other stuff, but --cpus=6 doesn't appear to be working as it does in other containers, because the CPU usage itself looks incorrect...?

 

cheers

Edited by tiwing
  • 2 weeks later...
On 2/26/2021 at 4:19 PM, tiwing said:

Hi, can anyone please tell me how to restart the docker one time with -reset-deltas ? I had an issue that caused the folders to think they are synced and this seems to be the solution... thanks!

Hi tiwing, have you ever figured this out?

