Running private and public services on the same machine



Hey there,

Unraid newcomer here. I'm currently in the process of migrating from a Syno over to Unraid.

One of the reasons was the plan to share (and receive) large files with users outside my home network. With the Syno box I did not expose any services directly to the outside, but rather had a VPN running on my (edge, bare-metal) pfSense. Now I'm wondering whether I might find a better solution with the shiny new server.

With Unraid I get the feeling that it is almost "common practice" to make containers (such as Nextcloud, Plex etc.) available through a reverse proxy. However, in my eyes, this does not solve the problem of eventually being subject to brute-force attempts (or worse). I know of the approach of using an Argo/Cloudflare Tunnel to avoid having open ports to the outside. Nevertheless, the actual line of defense is still the HTTP auth in the proxy and/or the login screen of the application. This still seems like a problem to me if an exploit in one of those applications is discovered.

Therefore I was thinking about segregating private and public services by actually duplicating the containers (i.e. having a "public" Nextcloud exposed to the Internet and a "private" Nextcloud only available locally) and giving each of them separate shares to access.
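Roughly what I have in mind, just as a sketch (container names, host ports and share paths are placeholders, not a tested setup):

# "private" instance, only used on the LAN, backed by its own share
docker run -d --name nextcloud-private \
  -p 8080:80 \
  -v /mnt/user/nextcloud_private:/var/www/html \
  nextcloud

# "public" instance, the only one ever exposed via proxy/tunnel,
# backed by a separate share that holds nothing irreplaceable
docker run -d --name nextcloud-public \
  -p 8081:80 \
  -v /mnt/user/nextcloud_public:/var/www/html \
  nextcloud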

 

However, as I'm not that deep into the Docker space, I don't know whether this is best practice, whether running multiple instances of the same container creates a big (management and processing) overhead, and whether it actually brings any security advantage.

 

Therefore, I'm looking for answers to the following questions:

 

1.) Is there any security benefit to running the same service in two different container instances and subnets? I especially want to prevent "container breakout" (between containers, but also to the host). (I already know of this thread, but I just want to know whether there are other vectors, such as Dirty Pipe and its possible threats, or the general "plugins run as root" problem.)
2.) Is there a better way to make certain services available to the public while at the same time making sure that the "most precious" private data stays safe from things like ransomware, attackers gaining root within a container, etc.?

If there is no good answer to these questions, I might stick with my current stance of only running everything locally behind the firewall, but maybe I'm just missing something, or you have good arguments for why the points I listed are not actually problems.

Thanks in advance!!!

  • 1 month later...

These questions aren't that simple to answer.


First of all, you should make sure you know which container has access to which files/folders. You should also check whether these containers actually need root privileges or not.
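If you want to check this for a container that is already running, something like the following should work (the container name "nextcloud" is just an example):

docker inspect -f '{{ json .Mounts }}' nextcloud    # shows which host folders are mapped into the container
docker inspect -f '{{ .Config.User }}' nextcloud    # empty output usually means the image default, which is often root
docker exec nextcloud id                            # effective user inside the container (works if the image ships an "id" binary)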

Ransomware is a program that encrypts files when it is started. So, as mentioned, you need a clear setup where you can tell the "outside" to only use folders xyz, so that if someone does run ransomware, no other files can be corrupted, or at least it isn't a big deal if those files get corrupted. You can also move files to other folders after an upload.

Ransomware mostly comes down to the human part. If you just run every program you come across, that's your fault.

Nobody on the planet can give you a perfect solution against such bad things... Just be careful with programs when you don't know where they come from.

As mentioned before, you can run the same container multiple times, just with different ports and maybe different folders mounted into them. If you mount a folder like /mnt/user/docker1 into one container and /mnt/user/docker2 into another, the first container can't access the second one's files and vice versa.

But if you mount something like /mnt/user into one of the containers, it can see all files/folders within /mnt/user... Be aware of this configuration. You should never mount all user shares into a container that is reachable over the internet. Bad things can always happen, but they don't have to.
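To make the difference concrete, a small sketch (the image name and paths are just examples):

# scoped: this container can only ever touch its own share
docker run -d --name docker1 -v /mnt/user/docker1:/data some-image

# too broad for anything internet-facing: every user share is visible inside the container
docker run -d --name docker2 -v /mnt/user:/data some-image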

You should always configure your router's firewall, or better, if available, an "external" firewall appliance, so that only the specific ports you actually want exposed are forwarded.

Nextcloud, as an example, is nice to have reachable over the internet; why would you run a Nextcloud without the "cloud" functionality? xD So when configuring Nextcloud, set up a share where your Nextcloud files are saved, e.g. /mnt/user/nextcloud, then map the container ports: for 80 use, for example, 12405, and for HTTPS (443) use, for example, 12410. Then start Nextcloud and tell your router to port-forward 12405 and 12410 to Unraid. That way you can only use and see the Nextcloud web UI from outside, while the other parts of Unraid remain "secured"...
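As a rough example of that mapping (ports and path taken from above; whether the container actually listens on 443 depends on the image/template you use, many Nextcloud containers only serve plain HTTP and leave TLS to a reverse proxy):

docker run -d --name nextcloud \
  -p 12405:80 \
  -p 12410:443 \
  -v /mnt/user/nextcloud:/var/www/html \
  nextcloud

# then on the router: forward external 12405 and 12410 to the Unraid host's IP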

Zero-day exploits and so on are mostly only usable if the system to be exploited is reachable online on certain ports. If you keep an eye on which ports you allow for external access, you can and should be fine. But if, as I already said, you like to run every program you find, then the problem is still on the human side of life... xD

Forgive me if there are some grammar fails. I am ill.
