
Extremely slow VM boot up in Unraid 6.3.5 with large RAM size


thupig


Hi there,

 

Recently I've been bothered by the slow boot time of my VMs on Unraid 6.3.5. By slow, I mean over 20 minutes. I'll list my specs in detail below.

 

I am allocating 10 cores (20 vCPUs) to a VM running Ubuntu 16.04 Desktop. 32 GB of RAM is assigned to the VM (about 50 GB is left for Unraid). I use GPU passthrough to pass a GTX 1080 Ti to the Ubuntu 16.04 guest. There is no secondary GPU for Unraid itself.
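For reference, here is a rough way to double-check what is actually allocated, from the Unraid command line (the VM name below is just a placeholder for whatever the VM is called in virsh list --all):

    # Summary of vCPUs and memory libvirt has assigned to the VM
    virsh dominfo "Ubuntu 16.04"

    # vCPU count, memory size, and passed-through PCI devices straight from the XML
    virsh dumpxml "Ubuntu 16.04" | grep -E "<vcpu|<memory|hostdev"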

 

When I first set up the VM, I was using about 4 GB of RAM and there was no problem with boot-up time. However, when I changed the RAM to 8 GB, the Ubuntu VM started taking noticeably longer to boot (about 5 minutes). When I changed it to 32 GB, the boot time became extremely slow; sometimes it takes 25-30 minutes to boot. There has to be something wrong with the box.

 

I've seen someone here with a similar problem. That post was back in 2016, and no actual solution was proposed in it. Any ideas why the VM boots so slowly when a large amount of RAM is assigned?

 

Is anyone else having the same problem?

 

Thanks in advance.


That is quite slow with 8GB. I've had slow times but never more than 30 seconds or so with up to 12GB.

 

NOW, I have had very slow times when I boot with 32-64GB of ram. I don't know the specific reason for it, but this is my hypothesis:

 

First: unRaid has to allocate the memory. You can watch it do this in System Stats and see the RAM usage graph rise (after a fresh boot). This takes some time.

Then: it's as if once the VM starts, the BIOS has to check the RAM, or at least register it as being there. That takes some time.

Finally: once the OS starts to load, it has to check the RAM, or at least account for it. Depending on the OS, this can also take time.

 

When I boot with 32GB+ it can take anywhere from 4-8 minutes for me (OSX), but never the 25-30 you've stated.
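If you want to watch the allocation happen from the unRaid console, here is a rough sketch using standard Linux tools (the process and VM names below are just placeholders and may differ on your box):

    # Watch overall RAM usage climb while the VM is starting
    watch -n 1 free -m

    # Watch the resident memory of the VM's qemu process grow
    # ("ubuntu" is a placeholder for part of your VM's name)
    watch -n 1 'ps -o pid,rss,etime,args -C qemu-system-x86_64 | grep -i ubuntu'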

 

 

 


It could be the case that your memory gets fragmented, and when you try to launch your VM it takes a long time to get 32GB of contiguous memory allocated for it.
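If you want to check whether fragmentation is actually the issue, here is a rough sketch using the standard Linux interfaces (assuming your kernel has memory compaction enabled):

    # Free-page counts by order; near-zero values in the right-hand columns mean
    # the free memory is heavily fragmented
    cat /proc/buddyinfo

    # Optionally flush the page cache first so there is more free memory to work with
    sync && echo 3 > /proc/sys/vm/drop_caches

    # Ask the kernel to compact memory before starting the VM
    echo 1 > /proc/sys/vm/compact_memory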

 

You could try the approach described here to see if that helps:

 

     https://lime-technology.com/forums/topic/58855-regular-out-of-memory-problems/#comment-577424

 

 

With the default Linux settings, that would be some ~15GB of memory allowed for this function on your system. There is no way this function could ever need that amount! (The function provides temporary RAM storage for disk-bound data so that the 'slow' writes to the hard disks do not make the system look like it is hanging while the writes are taking place.)
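Assuming the linked post is about the usual vm.dirty_* settings (those are what control this write-cache behaviour; the defaults are percentages of total RAM), checking and lowering them looks roughly like this:

    # Show the current values (defaults are usually 10 and 20 percent of RAM,
    # which is roughly where the ~15GB figure comes from on an ~80GB box)
    sysctl vm.dirty_background_ratio vm.dirty_ratio

    # Lower them for the running system (the values here are only an example)
    sysctl -w vm.dirty_background_ratio=1
    sysctl -w vm.dirty_ratio=2

    # To keep the change across reboots on unRaid, add the same sysctl -w lines
    # to your /boot/config/go file.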

55 minutes ago, 1812 said:

That is quite slow with 8GB. I've had slow times but never more than 30 seconds or so with up to 12GB.

 

NOW, I have had very slow times when I boot with 32-64GB of ram. I don't know the specific reason for it, but this is my hypothesis:

 

First: unRaid has to allocate the memory. You can watch it do this in System Stats and see the RAM usage graph rise (after a fresh boot). This takes some time.

Then: it's as if once the VM starts, the BIOS has to check the RAM, or at least register it as being there. That takes some time.

Finally: once the OS starts to load, it has to check the RAM, or at least account for it. Depending on the OS, this can also take time.

 

When I boot with 32GB+ it can take anywhere from 4-8 minutes for me (OSX), but never the 25-30 you've stated.

 

 

 

 

Thanks for the reply. How many cores did you assign to your OSX VM? I saw in the post (link) I mentioned that someone claimed that the more cores, the slower the boot time.

Also, did you pass through any GPU to your VM? See the link above. Someone did some experiments comparing boot time with and without GPU passthrough, and it looks like GPU passthrough can somehow affect boot-up as well.

57 minutes ago, Frank1940 said:

It could be the case that your memory gets fragmented, and when you try to launch your VM it takes a long time to get 32GB of contiguous memory allocated for it.

 

You could try the approach described here to see if that helps:

 

     https://lime-technology.com/forums/topic/58855-regular-out-of-memory-problems/#comment-577424

 

 

With the default Linux settings, that would be some ~15GB of memory allowed for this function on your system. There is no way this function could ever need that amount! (The function provides temporary RAM storage for disk-bound data so that the 'slow' writes to the hard disks do not make the system look like it is hanging while the writes are taking place.)

 

Thanks. I also think memory fragmentation could be a potential cause. Do you think rebooting my Unraid server would reduce the memory fragmentation?

53 minutes ago, thupig said:

 

Thanks for the reply. How many cores did you assign to your OSX VM? I saw in the post (link) I mentioned that someone claimed that the more cores, the slower the boot time.

Also, did you pass through any GPU to your VM? See the link above. Someone did some experiments comparing boot time with and without GPU passthrough, and it looks like GPU passthrough can somehow affect boot-up as well.

 

Always with GPU passthrough, and using anywhere from 20-64 vcpu threads.


I wanted to add an update to this:

 

I booted up my smaller 20-core machine today with only 2 cores assigned to a VM. The start was nearly instantaneous with 24GB of RAM. When I shut it down and changed it to 18 cores, it took about 2 minutes to get to the boot loader. So for me, it's clearly the CPU count that causes the long wait on boot, not the RAM quantity.
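If anyone wants to repeat the comparison, here is a rough way to time it from the console (the VM name is a placeholder, and the timing only covers getting the VM itself started, not the guest OS load):

    # Time how long libvirt/qemu takes to bring the VM up
    # ("Ubuntu 16.04" is a placeholder for the name shown in virsh list --all)
    time virsh start "Ubuntu 16.04"

    # Change the assigned core count between runs by editing <vcpu> and <cputune>
    # in the VM's XML (the same thing the unRaid VM settings page does)
    virsh edit "Ubuntu 16.04"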

 

I have been reading around a bit, and this seems to have been an intermittent problem for a few years across many Linux distros. If I stumble on a solution, I'll post it here.


