Existing VM not booting anymore


gerard6110

For approximately 6-9 months now (since one of the unRAID OS stable upgrades), one of the Linux-based VMs on my unRAID Basic system has not been booting anymore - meaning it starts to boot but never completes, which renders it useless. All other Linux- and Windows-based VMs are running fine.

I've tried all kinds of things, except for starting from scratch (as I don't want to lose my data, VMs, and dockers).

Starting the VM in safe mode (no plugins): to no avail.

Creating a new one with the exact same settings (as before): to no avail.

Creating a new one with the exact same settings (as before) in safe mode: to no avail.

Creating a new one with the exact same settings on my other unRAID Pro system: no problem at all; even copying the Basic system's vdisk1.img virtual disk to the Pro server and using that - again, no problem at all.

What I find very strange is that the VM XML files on Basic and Pro are quite different, despite the same starting settings?!

The point is that I want to run the VM on my Basic system, as it is set up to use as little energy as possible (for 24/7 operation), whereas the other media server is too power-hungry (and thus only runs when required).

I'm almost certain one of the files is corrupt. But which? bzimage and bzroot have already been replaced, likewise to no avail.

It would be very very nice to get this one running again ...


Thanks for your reply (although the email notification is showing more than appears above?):

"Tools - Diagnostics.

The XMLs being different is possibly because you have differing hardware between your Pro and Basic systems.

Beyond drive limitations, there are no differences between Pro / Basic / Trial on any version of unRAID."

 

I know, and that's why it's so strange that creating and running the VM on my Pro system works like a breeze while on my Basic system it does not. For instance, in the Pro version's XML there are all kinds of aliases, whereas in the Basic one there are none; also, in the Basic XML the VNC port and websocket are -1, whereas on the Stacker they are 5900 and 5700. Changing them in the Basic XML does not stick.
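For reference, here is a sketch of what the <graphics> element typically looks like in the stored (inactive) XML - an illustration, not copied from my diagnostics:

<graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0'>
  <listen type='address' address='0.0.0.0'/>
</graphics>

If I understand libvirt correctly, with autoport enabled the real ports (5900, 5700, ...) and the device aliases are only assigned when the domain starts and only show up in the live XML, which would also explain why my manual edits don't stick.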

 

Please find attached the (hopefully relevant) diagnostics files of my Stacker with unRAID Pro and my Elite with unRAID Basic.

Only the SyNAS VM on my Basic system is not booting, that is, not completing its boot.

 


There's no difference between autostart and manual start.

As it has a special boot sequence, it boots the boot.iso first, but does not complete it: after the initial options screen, where one can select 'proceed' (the default option), debug, or install/upgrade, nothing happens (black VNC screen), whereas on the unRAID Pro system it boots properly. I'm not sure what is happening exactly, because after the initial boot, control is handed over to the vdisk, but I'm unsure at which moment.

Still, the Pro system handles this fine, whereas the Basic system does not, even though both are on the same latest version and have the same plugins. The only differences are that the Basic system runs a few additional dockers (which in principle should not interfere) and an additional VM (which I believe should also not interfere). Which is why I am lost.
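For context, that special boot sequence is simply the boot order in the VM's XML, roughly like this (a sketch only; the paths are examples, not my actual ones):

<disk type='file' device='cdrom'>
  <source file='/mnt/user/isos/boot.iso'/>
  <target dev='hda' bus='ide'/>
  <readonly/>
  <boot order='1'/>
</disk>
<disk type='file' device='disk'>
  <source file='/mnt/user/domains/SyNAS/vdisk1.img'/>
  <target dev='hdc' bus='virtio'/>
  <boot order='2'/>
</disk>

So the ISO is tried first and its boot menu then continues onto the system on the vdisk; the hang happens somewhere right after that menu.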

 

I have attached the full diagnostics of both systems (for comparison).

elite-diagnostics-20170502-1905 - unraid basic.zip

 


P.S. It got broken after moving from 6.2.3 to 6.2.4. Hope this helps (also).

 

From the diagnostics files you may have noticed that the 6.3.3 Pro system is running on:

Model: CM 690
M/B: ASUSTeK COMPUTER INC. - P8B75-M LX PLUS
CPU: Intel® Core™ i3-3225 CPU @ 3.30GHz
HVM: Enabled
IOMMU: Disabled
Cache: 128 kB, 512 kB, 3072 kB
Memory: 8 GB (max. installable capacity 16 GB)
Network: eth0: 1000 Mb/s, full duplex, mtu 1500
Kernel: Linux 4.9.19-unRAID x86_64
OpenSSL: 1.0.2k
 
Whereas the 6.3.3 Basic system is running on:
Model: Elite 110
M/B: ASRock - AM1H-ITX
CPU: AMD Athlon™ 5350 APU with Radeon™ R3 @ 2050
HVM: Enabled
IOMMU: Disabled
Cache: 256 kB, 2048 kB
Memory: 8 GB (max. installable capacity 8 GB)
Network: eth0: 1000 Mb/s, full duplex, mtu 1500
Kernel: Linux 4.9.19-unRAID x86_64
OpenSSL: 1.0.2k

Follow-up on my Elite with the AMD CPU and unRAID Basic:

After changing to "Settings:VM Manager:Advanced View" I noticed the "View libvirt log" option. This showed the following errors:

2017-05-04 20:51:57.324+0000: 29397: error : x86FeatureInData:780 : internal error: unknown CPU feature __kvm_hv_spinlocks
2017-05-04 20:51:57.324+0000: 29397: error : x86FeatureInData:780 : internal error: unknown CPU feature __kvm_hv_vendor_id
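As far as I can tell, those kvm_hv_* names correspond to the Hyper-V enlightenments in the <features> section of the VM's XML, which in a typical template looks roughly like this (a sketch; the exact flags and values may differ from my template):

<features>
  <acpi/>
  <apic/>
  <hyperv>
    <relaxed state='on'/>
    <vapic state='on'/>
    <spinlocks state='on' retries='8191'/>
    <vendor_id state='on' value='none'/>
  </hyperv>
</features>

libvirt translates <spinlocks> and <vendor_id> into the hv_spinlocks and hv_vendor_id CPU flags for QEMU, which seems to be exactly where the two 'unknown CPU feature' errors point.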

 

Googling suggested it is affected by the VM's CPU mode (set in advanced view when creating the VM). Although 'Host Passthrough' with the same AMD CPU had worked before, I changed it to 'Emulated QEMU64' (just trying ...), and bam! The VM booted instantly.
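In XML terms, the change amounts to roughly this (a sketch, not copied from my template). The failing configuration:

<cpu mode='host-passthrough'/>

And the working one:

<cpu mode='custom' match='exact'>
  <model fallback='allow'>qemu64</model>
</cpu>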

 

What is strange, however, is that on my Stacker with the Intel CPU and unRAID Pro, 'Host Passthrough' is working fine. As the unRAID Basic and Pro versions are exactly the same (except for the maximum number of drives), it apparently matters which CPU one uses since version 6.2.4, because since then my VM has not booted anymore with 'Host Passthrough'.

 

Hope this info helps to pinpoint the exact root cause.

At least I'm glad my VM is running again, albeit (a little) slower than with 'Host Passthrough'.

