VM versus docker



I'm not sure what to do. I currently have a plain Debian server, no virtualization.

On that server I run nginx with php-fpm serving Nextcloud for 7 people, on both LAN and WAN, so it also runs MariaDB/MySQL. I run dnsmasq on it as a speedy LAN DNS resolver and use it to filter/block (similar to Pi-hole). It also serves as a backup for remote servers in a datacenter and as a secondary MX, a failover for when the external mailserver is down. On top of that it runs a few tiny open-directory websites and Syncthing, which mostly auto-backs-up mobile devices and desktop stuff; all of it runs behind Let's Encrypt certs. It runs CSF/LFD for iptables firewall control, and I share its block/allow lists with other external servers. And last but not least, it also runs several rsync tools and scripts.

It has been doing most of this for years without much trouble.

 

But now Unraid has arrived, which I very much prefer over that server's hardware handling (its RAID is useless: slow, unreliable, and when a disk fails it takes ages to fix things). Plus, Nextcloud is getting used more and more recently, as my users slowly move away from other, less private, space-constrained cloud services. I needed more disk space and more powerful hardware, so I built a new machine and plan on selling the old hardware.

 

Let's assume there is plenty of RAM and a fast M.2 SSD cache. That makes me wonder:

- Is it smarter to run a debian VM with nextcloud on it? And most of the stuff mentioned above on that same VM?

- Or is it smarter to run all in separate docker instances? If so, why?

I've been working a lot with Docker containers at work, and I have to say: I really don't like it all that much. Updating is never as easy as just running apt update && apt upgrade on a Debian instance. To be honest, I've seen Docker containers fail far more often than services on a plain server.
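For comparison, the two update flows contrasted above look roughly like this; a sketch only, with example image and container names (not executed here, just printed):

```shell
# On the Debian box, everything updates in one shot:
#   apt update && apt upgrade
# With Docker, an "update" is really pull + stop + rm + run; a small helper
# that composes those steps and prints them rather than executing them:
update_container() {
    img=$1; name=$2
    echo "docker pull $img && docker stop $name && docker rm $name && docker run -d --name $name $img"
}

update_container linuxserver/syncthing syncthing
```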

- For nextcloud and its database, why would running them in separate containers be any more efficient for them to interact while on the exact same hardware?

- Is there an obvious advantage for disk-IO for docker over VM that I'm missing?

Or anything else?

And then there's the security implications of docker containers; I need to firewall all of it. When it's on that VM, I just run CSF/LFD on it and I'm done.

 

I already installed and use Syncthing as a Docker instance on the Unraid machine, since that seemed easier to maintain. Syncthing is a bit of a GUI-based tool anyway, which doesn't really fit a 'server' OS. But I still have to migrate most of the other stuff, and I was tempted to just copy the way Nextcloud runs now: on one server with nginx, MariaDB, Let's Encrypt through Cloudflare DNS, etc.

19 hours ago, fluisterben said:

Let's assume there is plenty of RAM and a fast M.2 SSD cache. That makes me wonder:

- Is it smarter to run a debian VM with nextcloud on it? And most of the stuff mentioned above on that same VM?

- Or is it smarter to run all in separate docker instances? If so, why?

The VM will use more RAM and CPU to do the same job, since you're running a Linux VM on a Linux server. You're also adding an additional point of failure security-wise: instead of just the software running in a Docker container, you now also have a full OS on the same IP as a point of failure. The upside to the VM route is that you get the latest patches faster, if you keep on top of it. The upside to Docker is auto-updates: set and forget, and if it's an important patch you can apply it manually before the container gets updated. Containers are very easy to lock down, so firewalling should be easy; by nature they only see what you allow them to see, and when one crashes the rest of your server is still chugging along.

 

  

On 5/10/2019 at 12:45 PM, nicksphone said:

The VM will use more RAM and CPU to do the same job, since you're running a Linux VM on a Linux server.

Yes, but it allows me to do entire system snapshots twice a day or more.
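A twice-daily snapshot schedule like that can be driven by cron and virsh; a rough sketch, where the VM name and the script path are hypothetical:

```shell
# Compose a virsh external-snapshot command for a given VM.
# Printed rather than executed, since the VM name is only an example.
snapshot_cmd() {
    vm=$1
    echo "virsh snapshot-create-as $vm auto-$(date +%F-%H%M) --disk-only --atomic"
}

snapshot_cmd debian-nextcloud

# crontab entry for midnight and noon:
# 0 0,12 * * * /usr/local/bin/vm-snapshot.sh debian-nextcloud
```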

 

Quote

You're also adding an additional point of failure security-wise: instead of just the software running in a Docker container, you now also have a full OS on the same IP as a point of failure. The upside to the VM route is that you get the latest patches faster, if you keep on top of it.

Please allow me to chime in with this: https://security.stackexchange.com/questions/169642/what-makes-docker-more-secure-than-vms-or-bare-metal

If you require constant interaction between services within a VM, I'd say concentrating them within a VM is a better option. Especially with the easy mount options used for Unraid's VMs.

 

Someone needs to bench-test disk IO for one and the same service, run from within a Docker container to/from the mounted data/content, versus the same service from within a VM to/from the mounted data/content. I'd be very interested to see the results for Unraid mounts in such a benchmark.


I run a Nextcloud and MariaDB docker container. 

I backup my data share with CA backup and backup my mariaDB container with a bash script nightly. 
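A nightly MariaDB dump like that is usually a cron'd one-liner around docker exec; a sketch where the container name, database name and backup path are all assumptions:

```shell
# Build a dated backup filename; the directory is an example Unraid share path.
backup_file() {
    echo "/mnt/user/backups/nextcloud-db-$(date +%F).sql.gz"
}

backup_file

# The nightly cron job would then run, roughly:
# docker exec mariadb mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" nextcloud \
#   | gzip > "$(backup_file)"
```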

 

Recently I accidentally blew away my entire Nextcloud config share and I was able to come back from that with the two backups I perform.

 

I don't see the benefit of running this in a VM, just makes no sense. 

13 minutes ago, exist2resist said:

I backup my data share with CA backup and backup my mariaDB container with a bash script nightly. 

 

Recently I accidentally blew away my entire Nextcloud config share and I was able to come back from that with the two backups I perform.

 

I don't see the benefit of running this in a VM, just makes no sense. 

And I don't see the benefit of running this in separate containers; it just makes no sense to me. On the exact same hardware, no less.

Why would you want to create networks between Docker containers so the webserver (nginx) and the database can talk to each other, when you can have it all on localhost within one VM? Again, I mentioned all the arguments in favor, like the CSF/LFD firewall I can run it all behind.

I have yet to see one argument in favor of using containers for a Nextcloud install.

 

Quote

I run a Nextcloud and MariaDB docker container.

No, you also run a webserver for it; nginx, preferably. And you run Let's Encrypt for it. Really, I've seen that video by SpaceInvader One and it's like opening a can of worms. I can have Let's Encrypt use the Cloudflare API for DNS verification, all within one VM; much easier and cleaner too.

1 minute ago, fluisterben said:

And I don't see the benefit of running this in separate containers; it just makes no sense to me. On the exact same hardware, no less.

Why would you want to create networks between Docker containers so the webserver (nginx) and the database can talk to each other, when you can have it all on localhost within one VM? Again, I mentioned all the arguments in favor, like the CSF/LFD firewall I can run it all behind.

I have yet to see one argument in favor of using containers for a Nextcloud install.

 

I see little point in trying to convince you to use Docker containers, as it's obvious you're comfortable with the method you use, and there's nothing wrong with it. Personally I love the immutable nature of containers, but as long as it works and you're able to maintain it, do what suits you best.

 

I've run webservers using the same methods you're using, and still do for some things, but the main attraction for containers is it significantly lowers the barrier to entry to enable people who don't have the same degree of sysadmin skills you do, to accomplish what they want.

 

Containers have little overhead, and I suspect your overall overhead using Docker containers vs running in a VM would be similar. But let's face it: if you're not familiar with Docker and its nuances, you could probably spin up the VM and get your webserver doing what you need quicker.

 

You sound like a seasoned old sysadmin, and like myself, when you've been doing something one way for many years, are comfortable with it and know how to do it, there often isn't a compelling reason to change how you do things. (I don't work in IT, but the same principle arises, and I often find myself having a similar discussion with much younger colleagues when they try to educate me on the latest and greatest thing!)

 

Whatever you decide, welcome.

16 minutes ago, fluisterben said:

No, you also run a webserver for it; nginx, preferably. And you run Let's Encrypt for it. Really, I've seen that video by SpaceInvader One and it's like opening a can of worms. I can have Let's Encrypt use the Cloudflare API for DNS verification, all within one VM; much easier and cleaner too.

My Let's Encrypt/webserver container, which uses nginx, serves up more than just Nextcloud.

I used to have VMs now I have none.

To each their own. 

9 minutes ago, CHBMB said:

You sound like a seasoned old sysadmin, and like myself, when you've been doing something one way for many years, are comfortable with it and know how to do it, there often isn't a compelling reason to change how you do things. (I don't work in IT, but the same principle arises, and I often find myself having a similar discussion with much younger colleagues when they try to educate me on the latest and greatest thing!)

I've worked in IT for two decades, and if I were doing the same things I did 20 years ago, I'd have been left irrelevant in the IT space.

I adapt to new, better technologies; the problem is most sysadmins are lazy and like pressing a button.

I have dealt with people like that my entire career, I have no idea how these individuals survive in the space. 

Not saying that anyone in here reflects that mentality either. 

 


Like I wrote earlier:

Someone needs to bench-test disk IO for one and the same service, run from within a Docker container to/from the mounted data/content, versus the same service from within a VM to/from the mounted data/content.

I currently don't have enough time to run benchmarks between the two on Unraid, but I'd be very interested to see the results for the mounts.

 

In the example above: take the Nextcloud data storage folder. In a VM it's reached through

"trans=virtio,version=9p2000.L,_netdev,rw 0 0" in fstab. I'm curious how that would fare against a Docker-mounted data folder.
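If anyone does want to run that comparison, the same fio job pointed at both mounts would do it; a sketch (the paths are examples: one run inside the VM over the 9p mount, one inside a container over its bind mount):

```shell
# Compose an identical fio job for a given directory, so that the VM and
# container results are directly comparable; printed rather than executed here.
fio_cmd() {
    echo "fio --name=seqrw --directory=$1 --rw=readwrite --bs=1M --size=1g --runtime=60 --group_reporting"
}

fio_cmd /mnt/nextcloud-data   # inside the VM (9p mount)
fio_cmd /data                 # inside the container (bind mount)
```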

 

I've been working in IT for 3 decades now ;) I'm a CISSP and CHFI, and do server admin work for a hosting/webdev company. I can message you my LinkedIn page if you're interested.

2 minutes ago, fluisterben said:

Like I wrote earlier:

Someone needs to bench-test disk IO for one and the same service, run from within a Docker container to/from the mounted data/content, versus the same service from within a VM to/from the mounted data/content.

I currently don't have enough time to run benchmarks between the two on Unraid, but I'd be very interested to see the results for the mounts.

 

In the example above: take the Nextcloud data storage folder. In a VM it's reached through

"trans=virtio,version=9p2000.L,_netdev,rw 0 0" in fstab. I'm curious how that would fare against a Docker-mounted data folder.

 

Personally I see little point in benchmarking, as unless there was a tangible real-world benefit, I wouldn't be interested in switching my services to a VM. There'd only really be a point if the benchmarking would be a deciding factor for you in how you accomplish things. It seems to me, and I may be wrong, that you're pretty set on a VM setup.

 

My gut feeling is that disk IO with Docker carries very little, if any, overhead for locally mapped drives. I'm sure someone out there has looked at it at some point, maybe not specifically on Unraid, but I suspect the ballpark would be the same.

12 hours ago, fluisterben said:

What do you mean by that? How does one 'pass through the shares' ? Also, why are you assuming docker speeds are faster? Do you have the benchmarks that prove that?

When you pass a share through to a VM, it acts as a local mount when you map to it, but you're still accessing it as a network share, which adds an extra layer to every transaction. This can only be done for Linux VMs; click the advanced tab and you can add them there. If you do a bit of googling you can find people who have benchmarked the differences between the two. It's all down to personal choice, but like I said before, you made up your mind before asking.

On 5/9/2019 at 4:24 PM, fluisterben said:

Or is it smarter to run all in separate docker instances? If so, why?

Most definitely smarter to run separate Docker containers. If you run it all in one large VM and that VM goes bang (bad update, whatever), then you've lost the lot; sure, you can restore the VM's vdisk from backup, but that takes time. The other thing to keep in mind is bad application releases. They do happen, and sometimes you want to roll the application back; that's harder to do in a VM environment, and a 100% clean rollback isn't always possible.

 

If you have it all as separate Docker containers and one goes bang, you lose one app and only one app, and you can recover within minutes: delete the container, re-create it, and you're up and running! Sure, you may get into situations where the config data is corrupt; that can happen, but it's not common, and a restore of the config data (significantly smaller than a VM restore!) gets you back. If there's a bad application release and you want to roll back, you simply stop the container, delete it, pull the image with the version you want, and re-create the container. Done; no risky uninstall/reinstall or downgrade, it's a clean install every time!
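The rollback flow described here amounts to four commands once you pin an image tag; a sketch with hypothetical container, image and tag names:

```shell
# Print the stop/rm/pull/run sequence for rolling a container back to a pinned tag.
rollback_cmds() {
    name=$1; img=$2; tag=$3
    printf '%s\n' \
        "docker stop $name" \
        "docker rm $name" \
        "docker pull $img:$tag" \
        "docker run -d --name $name $img:$tag"
}

rollback_cmds nextcloud nextcloud 18.0.4
```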

On 5/14/2019 at 1:12 PM, binhex said:

Most definitely smarter to run separate Docker containers. If you run it all in one large VM and that VM goes bang (bad update, whatever), then you've lost the lot; sure, you can restore the VM's vdisk from backup, but that takes time. The other thing to keep in mind is bad application releases. They do happen, and sometimes you want to roll the application back; that's harder to do in a VM environment, and a 100% clean rollback isn't always possible.

 

If you have it all as separate Docker containers and one goes bang, you lose one app and only one app, and you can recover within minutes: delete the container, re-create it, and you're up and running! Sure, you may get into situations where the config data is corrupt; that can happen, but it's not common, and a restore of the config data (significantly smaller than a VM restore!) gets you back. If there's a bad application release and you want to roll back, you simply stop the container, delete it, pull the image with the version you want, and re-create the container. Done; no risky uninstall/reinstall or downgrade, it's a clean install every time!

I still don't see much of a difference there. Like I wrote, I maintain a lot of Docker instances at work, as well as a lot of VMs, VPSs and bare-metal servers. For someone like me, making sure apps in a VM don't crash is no harder or easier than doing the same for a container.

In fact, combining containers like nginx, Let's Encrypt, MariaDB and Nextcloud is much harder to maintain and more time-consuming than having those four in one VM.

In VMs it's also easier to control the use of resources, as in, keep them from accidentally taking over all the resources of the hardware.

 

I posted here because I wanted to know what others think about the differences in actual use, and whether I was missing something, but I don't see valid reasons not to pick a VM.

 

I do have some docker containers on my unraid machine, like syncthing, but mostly because they don't need isolation.

Containers are interesting if you're a developer changing configs often; if not, you should prefer isolation (more secure) and use a VM.

On 5/14/2019 at 11:22 AM, nicksphone said:

When you pass a share through to a VM, it acts as a local mount when you map to it, but you're still accessing it as a network share, which adds an extra layer to every transaction. This can only be done for Linux VMs; click the advanced tab and you can add them there. If you do a bit of googling you can find people who have benchmarked the differences between the two. It's all down to personal choice, but like I said before, you made up your mind before asking.

I didn't need an explanation of how to mount an external dir in a VM on Unraid (there's only one way to do that for Linux on Unraid); what I wanted to know is how the performance of an 'external' storage dir on Unraid compares between access from a VM and from a Docker container. I did not make up my mind before asking, which is why I asked:

I would tend to favor a container if it yielded a LOT faster disk IO for Nextcloud, but thus far (I've been testing this side by side on my Unraid just now) the VM wins this battle. Probably because in the VM, Nextcloud, MariaDB, nginx and php-fpm are all accessed directly, without network protocol or port conversions. I can run nginx on the VM's IP on port 443 and nothing needs to be redirected or proxied.


A VM with all the apps would only be faster than multiple containers if the data flows over unix sockets; otherwise it's the same thing.

A VM is about as secure as a container

A VM is only an option if your server has hardware virtualization support, and it can still get really slow when doing certain cryptographic operations unless the CPU does them in hardware.

A VM can only access other server hardware if the server has hardware passthru support.

A container can access server hardware if the server OS has drivers for it.

A container works without needing hardware support, and will always run at baremetal speeds - not counting possible networking issues.

A container doesn't even need firewalling as only the application running in it would be exposed on the network interface.

A container doesn't need OS patching, only checking whether the application and related libraries have vulnerabilities.

 

I'm also an old-school sysadmin, but I see that containers work better than VMs in many situations, unless you need multi-tenancy, hardware passthrough, and/or complex firewalling. I mean, do you really need to emulate an RTC or a floppy controller to run a web server?

On 5/16/2019 at 2:21 AM, ken-ji said:

A VM with all the apps would only be faster than multiple containers if the data flows over unix sockets; otherwise it's the same thing.

 

Eeh, no, and no. Even with network translation, ports, bridging and hops in between, it will be slower than, for example, using a localhost MariaDB server directly from Nextcloud. Likewise, you don't even need unix sockets for nginx+php to perform faster when it serves Nextcloud locally instead of from an external container and/or a proxy.

Quote

A VM is about as secure as a container

No, it's not. Inside a VM everything can be isolated and tracked; a container is open to whatever exploit can be run directly against the apps it serves.

https://security.stackexchange.com/questions/169642/what-makes-docker-more-secure-than-vms-or-bare-metal

 

Quote

A VM is only an option if your server has hardware virtualization support, and it can still get really slow when doing certain cryptographic operations unless the CPU does them in hardware.

We're in 2019 now. The cheapest boards and CPUs on the market today offer HVM and IOMMU by default.

On 5/16/2019 at 12:00 AM, fluisterben said:

In VMs it's also easier to control the use of resources, as in, keep them from accidentally taking over all the resources of the hardware.

I'm not an expert by any means, but I think you're wrong. You can easily use cpu pinning on a container to make sure of that.
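Pinning a container's resources is indeed just flags on docker run; a sketch, where the container/image names and the limits are examples only:

```shell
# Compose a docker run with CPU pinning and a memory cap; printed, not executed.
pinned_run() {
    echo "docker run -d --name nextcloud --cpuset-cpus=2,3 --memory=4g nextcloud"
}

pinned_run
```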

 

1 hour ago, fluisterben said:

Inside a VM everything can be isolated

You can isolate a container too (from the host) using macvlan.
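For reference, a macvlan setup like that is two commands; a sketch with an assumed subnet, gateway and parent NIC (br0 is Unraid's usual bridge):

```shell
# Print the commands to give a container its own LAN IP via a macvlan network.
macvlan_cmds() {
    printf '%s\n' \
        "docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=br0 lan" \
        "docker run -d --name nextcloud --network lan --ip 192.168.1.50 nextcloud"
}

macvlan_cmds
```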

 

Again, I'm not an expert by any means, but it surprises me that someone who manages containers and VMs on a daily basis at work doesn't know this.

And I have to agree with @binhex comment as well.

 

I don't know about performance, app-wise, versus a VM, but I find it way easier to run containers than to run everything in a VM. Even if there were a performance difference, I doubt it would be significant enough for me to drop containers in favor of a VM.

 

And like was said earlier in the thread, it sounds like your mind is set, so there's really no point in discussing it.

 

14 hours ago, fluisterben said:

Eeh, no, and no. Even with network translation, ports, bridging and hops in between, it will be slower than, for example, using a localhost MariaDB server directly from Nextcloud. Likewise, you don't even need unix sockets for nginx+php to perform faster when it serves Nextcloud locally instead of from an external container and/or a proxy.

No, it's not. Inside a VM everything can be isolated and tracked; a container is open to whatever exploit can be run directly against the apps it serves.

https://security.stackexchange.com/questions/169642/what-makes-docker-more-secure-than-vms-or-bare-metal

 

We're in 2019 now. The cheapest boards and CPUs on the market today offer HVM and IOMMU by default.

Not going to try to convince you, but as a PSA: containers vs VMs have the same practical security unless your hardware and OS support CPU and memory isolation (IBM's Power series is the only thing I've seen that does this), in which case a VM wins hands down.

 

The important part of security is the application, because even with netfilter protection a VM's entire network stack is still technically exposed to any possible vulnerabilities. With containers and macvlan (almost forgot about that one), the attack surface shrinks to the application running in the container on its dedicated IP. And standard containers using the default bridge networking are like having a simple Linux router in front to provide port forwarding. The usual port forwarding through a router in front of either a VM or a container helps a lot with security.

 

AFAIK, exploiting a Nextcloud vulnerability in a container would leave the attacker inside the Nextcloud container, which presumably has nothing else to leverage, and they would need to figure out a way to move to another target: the data in the MySQL container? Or the host? In the VM scenario, the attacker would be closer to the data and about as close to the host.

 

The bit about HVM: we are in 2019, but not all countries and users have the money for, or access to, 2015+ hardware; some are still rocking something from the early 2000s, and for them only containers work. I, for example, decided on a Pentium G4620, because an Emby container with HW transcoding on the iGPU works even better than an i3/i5/i7 + Nvidia GPU in a VM, at a fraction of the cost. (The fact that I'm using an ITX board and need the only slot for my HBA is also a factor here.)

  • 1 year later...

This debate between VMs and containers has been at the forefront of my thoughts for some time as well. For me it's a matter of comfort: I'm just more familiar with VMs, and containers are new and unfamiliar. I also subscribe to the mantra that VMs are fully isolated, which is a security advantage; however, that is likely just my lack of understanding of container technologies.

 

I'm curious in one regard...

Let's say you shift an app to a container-based solution, and the app requires network storage such as NFS mounts. In my limited experience, mounting NFS shares inside a container required granting the container a sort of privileged execution state, which some consider a security risk.

 

I realize this is a new post to a fairly old thread; I appreciate any insight from commenters. BTW, I'm in the process of transitioning to Unraid, so my question here isn't specific to Unraid (yet!)

 

Thanks in advance...

3 hours ago, tech101us said:

Let's say you shift an app to a container-based solution, and the app requires network storage such as NFS mounts. In my limited experience, mounting NFS shares inside a container required granting the container a sort of privileged execution state, which some consider a security risk.

You would tend to mount the share on the host with the Unassigned Devices plugin, with whatever privileges the remote allows, and then pass the UD mount point to the container, the same as any local path. No privilege escalation etc. is required in the container.
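In practice that means mounting on the host first and then handing the path to the container with -v; a sketch, where the share name, container and the /mnt/remotes prefix (Unassigned Devices' mount root on recent Unraid versions) are assumptions:

```shell
# Compose the docker run that passes a host-mounted NFS share to a container
# as a plain bind mount; note there is no --privileged flag involved.
ud_run() {
    echo "docker run -d --name syncthing -v /mnt/remotes/nas_media:/media:rw linuxserver/syncthing"
}

ud_run
```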

