[Support] ich777 - Application Dockers


ich777


20 minutes ago, ich777 said:

I mean, if someone has the binary or the source zip file, I can change the container to use a different URL, because right now the container isn't able to pull down the archive any more.

Yes, the download has indeed been failing.

17 minutes ago, Alex1989 said:

Apart from this app, is there another app you would recommend for synchronization?

I have now updated the container and it's working again.

Please do the following:

  1. Remove the container
  2. Remove the folder for dirsyncpro from your appdata directory
  3. Pull a fresh copy from the CA App
  4. In the Docker template at the bottom click on "Show more settings..."
  5. At DL_URL change the URL to:
    https://github.com/ich777/docker-dirsyncpro/raw/master/executable/DirSyncPro-1.53-Linux.tar.gz
  6. Click "Apply"

 

This is only necessary for the next few hours until the template in the CA App is updated.
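For context, DL_URL is just an environment variable that the template hands to the container, so the override in the steps above can be sketched as a plain docker run. This is illustrative only; the image name and host path below are assumptions, not the exact template values, which is why the docker run line is left commented out:

```shell
# The new archive location, as given in step 5:
DL_URL="https://github.com/ich777/docker-dirsyncpro/raw/master/executable/DirSyncPro-1.53-Linux.tar.gz"

# Illustrative docker run equivalent of the template change (commented out;
# the image name and host path are assumptions about a typical setup):
# docker run -d --name DirSyncPro \
#   -e DL_URL="$DL_URL" \
#   -v /mnt/user/appdata/dirsyncpro:/dirsyncpro \
#   ich777/dirsyncpro

echo "$DL_URL"
```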

17 minutes ago, ich777 said:

I have now updated the container and it's working again.

Please do the following:

  1. Remove the container
  2. Remove the folder for dirsyncpro from your appdata directory
  3. Pull a fresh copy from the CA App
  4. In the Docker template at the bottom click on "Show more settings..."
  5. At DL_URL change the URL to:
    https://github.com/ich777/docker-dirsyncpro/raw/master/executable/DirSyncPro-1.53-Linux.tar.gz
  6. Click "Apply"

 

This is only necessary for the next few hours until the template in the CA App is updated.

I'll try it now.

19 hours ago, ich777 said:

I have now updated the container and it's working again.

Please do the following:

  1. Remove the container
  2. Remove the folder for dirsyncpro from your appdata directory
  3. Pull a fresh copy from the CA App
  4. In the Docker template at the bottom click on "Show more settings..."
  5. At DL_URL change the URL to:
    https://github.com/ich777/docker-dirsyncpro/raw/master/executable/DirSyncPro-1.53-Linux.tar.gz
  6. Click "Apply"

 

This is only necessary for the next few hours until the template in the CA App is updated.

A new issue: I now need to synchronize a lot of very large files, and this is causing the container's volume to grow very large. What could be causing this? Can I persist certain folders to disk to solve this problem?

1 hour ago, Alex1989 said:

A new issue: I now need to synchronize a lot of very large files, and this is causing the container's volume to grow very large. What could be causing this? Can I persist certain folders to disk to solve this problem?

I think you mean it consumes a lot of RAM, correct?

You can try luckyBackup; maybe that fits your needs better, since it uses rsync as the backend and won't consume that much RAM.



1 hour ago, Alex1989 said:

A new issue: I now need to synchronize a lot of very large files, and this is causing the container's volume to grow very large. What could be causing this? Can I persist certain folders to disk to solve this problem?

It's not RAM, but the container has a large volume. As follows:

CONTAINER ID   NAME         CPU %     MEM USAGE / LIMIT     MEM %     NET I/O          BLOCK I/O
23b39db36d78   DirSyncPro   136.73%   2.603GiB / 31.11GiB   8.37%     1.86GB / 451GB   742GB / 52.8GB (52.8G used)

1 minute ago, Alex1989 said:

It's not RAM, but the container has a large volume.

Then it's most likely the case that a path is configured wrong and the container is actually syncing into the volume and not into a bind mount.

Please check your mounts and make sure that the paths are correct, both in the template and in DirSyncPro itself.
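A hedged way to verify this from the host: list the container's mounts and its writable-layer size with the Docker CLI. The container name and paths below are examples, not taken from the actual template:

```shell
# Commands to run on the Unraid host (shown as comments so the sketch is
# safe to paste anywhere; the container name "DirSyncPro" is an example):
#   docker inspect -f '{{range .Mounts}}{{.Type}} {{.Source}} -> {{.Destination}}{{"\n"}}{{end}}' DirSyncPro
#   docker ps -s --filter name=DirSyncPro   # SIZE column = writable-layer growth
#
# Any path the app writes to that is NOT listed as a mount destination ends
# up in the container's writable layer and inflates the container itself.
# A healthy sync target should appear as a bind mount, for example:
expected_mount="bind /mnt/user/appdata/dirsyncpro -> /dirsyncpro"
echo "$expected_mount"
```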

4 minutes ago, ich777 said:

Then it's most likely the case that a path is configured wrong and the container is actually syncing into the volume and not into a bind mount.

Please check your mounts and make sure that the paths are correct, both in the template and in DirSyncPro itself.

I understand what you mean, but I have only configured one task, and the destination of that task has received the corresponding files, so I am still investigating the problem.

On 5/21/2024 at 10:46 PM, AwesomeAustn said:

I may have figured out the FSID (File System ID) errors with the new version of MEGAsync. It wasn't an issue with version 4.12.2. Starting on v5, it happened daily. Sometimes MEGAsync has an issue and sometimes it doesn't. I tested with an empty new folder. If it has an issue right away, I have to remove the syncs and re-add them, and it scans and adds the new files. If everything seems fine, the FSID errors show up and the syncs get disabled the next morning, and I have to remove and re-add them.

 

I use a cache drive and the files go to that first. My cache drive uses the btrfs filesystem and my regular drives use xfs. I disabled the cache drives for my syncs that go to MEGA a couple of days ago, and I haven't seen the FSID error yet.

 

Oh, that's interesting, since it's driving me crazy!!

I'll check my share config, and perhaps look at creating an xfs cache for it and see how that works.

 

I'll report back if I find out more.

 

Cheers,

Pacman

 

On 6/25/2024 at 9:04 AM, SudoPacman said:

 

Oh, that's interesting, since it's driving me crazy!!

I'll check my share config, and perhaps look at creating an xfs cache for it and see how that works.

 

I'll report back if I find out more.

 

Cheers,

Pacman

 


Nope, no luck for me @AwesomeAustn

 

I've tried a single-drive xfs cache pool and a dual-drive mirrored zfs pool, and both give me the "File fingerprint missing" error for ~75% of new files. It's very annoying!

 

At least it means I have finally got around to adding a ZFS pool, though, and getting some high-traffic, but not performance-sensitive, data off my NVMe pool.

 

Cheers,

Pacman


My TeamSpeak3 container seems to have reverted to default. There are no channels and my TS client no longer recognizes me as the owner.

Is there a way to recover my old state? I have my original privilege key but the client tells me that key is not valid.

5 hours ago, Siege said:

Is there a way to recover my old state? I have my original privilege key but the client tells me that key is not valid.

What path is set in the Docker template?

 

If /mnt/cache/appdata/... is set, make sure that your appdata share is configured to stay on the Cache and is not moved to the Array. In other words, set the Cache as primary storage, and for the Mover action make sure it either takes no action or, even better, moves the data from the Array to the Cache.

That's most likely what's causing your issue here.
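One hedged way to check whether appdata has ended up on the Array: on most Unraid releases, /mnt/user0 is the Array-only view of the user shares, so appdata appearing there means it left the Cache. The path is an assumption about a typical setup; the check is read-only:

```shell
# Read-only sketch: /mnt/user0 shows user shares as stored on the Array only
# (Unraid convention), so appdata visible there has been moved off the Cache.
if [ -d /mnt/user0/appdata ] && [ -n "$(ls -A /mnt/user0/appdata 2>/dev/null)" ]; then
  on_array=yes
else
  on_array=no
fi
echo "appdata on Array: $on_array"
```

If it prints "yes", the container is reading stale or split data and the symptoms above are expected.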


I seem to be having trouble getting InspIRCd set up. Hopefully I'm just missing something obvious, but after filling out all the details I get the following error and the Docker container won't start.

 

[*] Module file could not be found: m_userip.so

 

I tried disabling that module and it proceeded, but then it gave errors relating to SSL.

1 hour ago, DeadPixel said:

I tried disabling that module and it proceeded, but then it gave errors relating to SSL.

I'll look into that a bit later today; they released InspIRCd 4.0 over the weekend, and maybe that is causing the issue.

 

I'll let you know when I know more.

18 hours ago, ich777 said:

If /mnt/cache/appdata/... is set, make sure that your appdata share is configured to stay on the Cache and is not moved to the Array,

Yeah, my mover action is moving from Cache to Array.

I am still new to Unraid, thanks for the help. I will try changing my mover action to Array => Cache and let you know how it goes.

 

Pre-change settings attached. I am kind of new to this, so if there are any changes I could make here to improve this setup, please let me know.

[screenshot: share settings]


@ich777 Well, after the mover did its thing I was not able to get back to my previous state, but it sounds like with the new settings I should not have a problem any more.

 

I ended up removing the container, deleting the folder, and reinstalling it to get a new admin key. My TS server wasn't overly complicated, so this wasn't really a big deal.

 

Thanks again.

2 hours ago, Siege said:

I am still new to Unraid, thanks for the help. I will try changing my mover action to Array => Cache and let you know how it goes.

In my opinion the appdata share, alongside the system and domains shares, should always stay on the Cache and be configured accordingly, since this reduces overhead by avoiding the FUSE filesystem (/mnt/user/...) and also ensures that your applications have fast access to the files in your appdata.

 

23 minutes ago, Siege said:

@ich777 Well, after the mover did its thing I was not able to get back to my previous state.

This is maybe because the mover couldn't move all the data back properly and mixed up some of the old and new data.

 

Anyways, glad to hear that you are now up and running again! :)

20 hours ago, DeadPixel said:

I seem to be having trouble getting InspIRCd set up. Hopefully I'm just missing something obvious, but after filling out all the details I get the following error and the Docker container won't start.

Is this a new installation or an already existing one?

 

The container is running but you have to exclude:

<module name="ssl_gnutls">

and

<module name="userip">

 

so that it starts properly, but that is part of the configuration process. I might release an update for new installations so that this is fixed by default. <- This is now fixed: if you remove the container and the directory for InspIRCd and reinstall it, it will work right out of the box.
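To illustrate the exclusion (a sketch only; the exact location of the config file inside your appdata may differ, and InspIRCd's config format uses hash comments), the two module lines can be commented out in inspircd.conf like this:

```
# inspircd.conf: comment out these two module loads so the server starts
#<module name="ssl_gnutls">
#<module name="userip">
```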


Could you please implement something to limit the download/upload speed of the Docker container? I am currently running into issues where the download uses too much network bandwidth while downloading, causing other Docker containers not to work properly.

51 minutes ago, Ricky4K said:

Could you please implement something to limit the download/upload speed of the Docker container? I am currently running into issues where the download uses too much network bandwidth while downloading, causing other Docker containers not to work properly.

Which container are we talking about?

25 minutes ago, Ricky4K said:

Talking about LANCache Prefill.

Sorry, but I don't have any plans to implement such a feature. However, do you route the traffic over the network?

You should also be able to route the traffic internally between the containers (that's what I would recommend).

 

Another option would be to create a new VLAN, limit the bandwidth that this VLAN can use, and attach the container to that VLAN.

 

IMHO the downloads should always be done at full speed so that the prefill task finishes as quickly as possible, but what's not clear to me is why the traffic in your case is routed over the real network interface; usually it should be handled on your server so that it won't cause any other network issues.

 

You can also configure it differently so that the Prefill container uses the container network of your LANCache container, and set its DNS to 127.0.0.1; with that, the traffic should be routed completely internally and won't even leave the Docker network.
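As a rough sketch of that last option (service and image names here are placeholders, not the actual templates), a compose fragment that shares LANCache's network namespace could look something like:

```yaml
# Illustrative only: service and image names are assumptions.
services:
  lancache:
    image: lancachenet/monolithic     # example LANCache image
  prefill:
    image: example/prefill            # placeholder for the prefill image
    # Share LANCache's network stack: 127.0.0.1 inside "prefill" then
    # reaches LANCache's DNS/proxy directly, so downloads stay on-host
    # and never touch the physical network interface.
    network_mode: "service:lancache"
```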


 

32 minutes ago, ich777 said:

Sorry, but I don't have any plans to implement such a feature. However, do you route the traffic over the network?

You should also be able to route the traffic internally between the containers (that's what I would recommend).

 

Another option would be to create a new VLAN, limit the bandwidth that this VLAN can use, and attach the container to that VLAN.

 

IMHO the downloads should always be done at full speed so that the prefill task finishes as quickly as possible, but what's not clear to me is why the traffic in your case is routed over the real network interface; usually it should be handled on your server so that it won't cause any other network issues.

 

You can also configure it differently so that the Prefill container uses the container network of your LANCache container, and set its DNS to 127.0.0.1; with that, the traffic should be routed completely internally and won't even leave the Docker network.

First of all, thanks for the response and the solutions.

 

Here is some information to help you understand the current network situation.

I'm currently connected to my ISP via a wireless router which provides internet to my network.

This router does not support any VLAN configuration, meaning no traffic shaping or QoS is available.

 

When using the prefill, the server uses all or most of the network's bandwidth for the download, which leads to other clients having slower internet speeds (for example, watching some Netflix). Don't get me wrong, using the whole bandwidth during non-peak hours is totally fine, but if I want to use it during working hours I also want to be able to join a Teams meeting.

 

I will try to add some kind of firewall or another router to my network in the future, which should solve this problem. However, this will be a very time-consuming project, because I will legitimately have to redo my whole network structure and the placement of the networking hardware, and I don't have time for that right now. This means I have to settle for some kind of temporary solution until either you or Unraid supports such a feature.
