bennymundz Posted March 2, 2019

Hello all, I am hoping someone might be able to assist. I recently upgraded to the latest RC5 and noticed weirdness on my Unraid box, so I decided to downgrade back to the latest stable release, 6.6.7, to fix the issue. However, my problem is that my VMs are all still crashing. Until I did the upgrade everything ran perfectly with no issues for 60+ days, humming along nicely, and now I cannot even get 60 minutes out of it before my VMs crash. This is the log from libvirt, though I don't specifically know what it means:

2019-03-02 10:30:03.592+0000: 6591: info : libvirt version: 4.7.0
2019-03-02 10:30:03.592+0000: 6591: info : hostname: mrblack
2019-03-02 10:30:03.592+0000: 6591: warning : qemuDomainObjTaint:7640 : Domain id=1 name='UTD01' uuid=8ca2aaf4-c5ec-a8a3-774d-9fde82c3d944 is tainted: high-privileges
2019-03-02 10:30:03.592+0000: 6591: warning : qemuDomainObjTaint:7640 : Domain id=1 name='UTD01' uuid=8ca2aaf4-c5ec-a8a3-774d-9fde82c3d944 is tainted: host-cpu
2019-03-02 10:30:03.780+0000: 6591: warning : qemuDomainObjTaint:7640 : Domain id=2 name='AMS01' uuid=d80df609-ca7b-33e2-ab90-59de51a176af is tainted: high-privileges
2019-03-02 10:30:03.992+0000: 6591: warning : qemuDomainObjTaint:7640 : Domain id=3 name='DLH01' uuid=c02d5c00-ad2c-c6e9-6be2-ca553682a971 is tainted: high-privileges
2019-03-02 10:57:37.319+0000: 6575: error : qemuMonitorIO:718 : internal error: End of file from qemu monitor
2019-03-02 11:33:18.005+0000: 6575: error : qemuMonitorIO:718 : internal error: End of file from qemu monitor

Really hoping someone can point me in the right direction to fix this annoying issue. Thanks
lotetreemedia Posted March 2, 2019

According to this post it looks like the first few messages are normal. The last line, I think, just means the VM has been shut down. To help, please post your diagnostics: Tools -> Diagnostics
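For reference, the "End of file from qemu monitor" line only says that the QEMU process behind the VM went away; the reason usually ends up elsewhere. A minimal way to dig further, assuming the standard libvirt log layout that Unraid uses (and substituting your own VM name for UTD01), is:

# Per-VM QEMU log, which often records why the process exited
cat /var/log/libvirt/qemu/UTD01.log

# Confirm which VMs libvirt still considers running
virsh list --all

Diagnostics can also be generated without the web GUI by running the diagnostics command from the console or SSH; the zip should be written to the flash drive (typically under /boot/logs/).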
bennymundz Posted March 4, 2019 (Author)

I've since updated back to 6.7 RC5 trying to fix this, to no avail. I have attempted the following:

- Deleted /mnt/user/system/libvirt/libvirt.img and let it be recreated; no resolution
- Increased the size of /mnt/user/system/libvirt/libvirt.img from 1 GB to 2 GB
- Created new VM XMLs
- Fully power cycled my system

Again this morning I woke up and a VM had crashed again. libvirt log:

2019-03-04 13:00:00.362+0000: 6701: info : libvirt version: 4.10.0
2019-03-04 13:00:00.362+0000: 6701: info : hostname: mrblack
2019-03-04 13:00:00.362+0000: 6701: warning : qemuDomainObjTaint:7831 : Domain id=1 name='AMS01' uuid=61fa935c-ce3b-6c32-3dcd-cea3cece8ee1 is tainted: high-privileges
2019-03-04 14:03:07.467+0000: 6697: error : qemuMonitorIO:718 : internal error: End of file from qemu monitor

Would love it if anyone had any suggestions at all. After a VM crashes, until I reboot I get this error:

Execution error: internal error: process exited while connecting to monitor: qemu: qemu_thread_create: Resource temporarily unavailable

mrblack-syslog-20190304-2238.zip
mrblack-diagnostics-20190304-2248.zip
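For reference, "qemu_thread_create: Resource temporarily unavailable" generally means QEMU could not spawn a new thread, which points at a process/thread limit or overall resource exhaustion rather than at libvirt.img itself. A rough way to check from the Unraid console, sketched with standard Linux tools only, is:

# Threads currently in use across the whole system
ps -eLf | wc -l

# Kernel-wide ceilings those threads count against
cat /proc/sys/kernel/threads-max
cat /proc/sys/kernel/pid_max

# Per-user process/thread limit for the current shell
ulimit -u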
rlust Posted March 5, 2019

I am having the same issue with an Ubuntu VM. It had been working fine for a couple of months on 6.6. I upgraded to 6.7 yesterday and now it crashes within an hour with the same error message.
bennymundz Posted March 5, 2019 (Author)

I'm in exactly the same boat: my Unraid box was fine running multiple VMs for months, then I upgraded and now no VM runs for longer than 24 hours.
John_M Posted March 5, 2019

@bennymundz The diagnostics zip you posted three messages up contains zero-length log files. Reboot, start your VMs, wait for them to go wrong and grab diagnostics again. @rlust Start your own thread and do the same.
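As a side note, one way to confirm a diagnostics zip actually contains data before attaching it, assuming unzip is available on the console and the zip was saved to the flash drive, is to list its contents and check the reported file sizes:

unzip -l /boot/logs/mrblack-diagnostics-*.zip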
bennymundz Posted March 5, 2019 (Author)

@John_M That's weird; I will do that and post back. Thanks.
bennymundz Posted March 5, 2019 (Author)

@John_M mrblack-diagnostics-20190305-2219.zip

New diags with log files populated. Please let me know if you need anything else.
jonp Posted March 11, 2019

What we really need is for someone to have a monitor attached to their server via a graphics device that is not going to be assigned to a VM. Leave the Unraid console open on this system and run the following command:

tail /var/log/syslog -f

This will begin printing the log to the screen, and when the server crashes, take a picture of what's on the monitor and post it here. This will show us exactly what is going on when the system crashes.
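A possible variation, if leaving a monitor attached is awkward, is to mirror the live syslog to the flash drive so the last lines survive a crash. This is only a sketch (continuously writing to flash is not ideal long term, and /boot is assumed to be the usual Unraid flash mount):

# Print the log to the screen and copy it to flash at the same time
tail -f /var/log/syslog | tee /boot/syslog-live.txt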
bennymundz Posted March 11, 2019 (Author)

@jonp I will do that and report back. However, I think I might have fixed the issue: there were some Docker containers which were working fine but for some reason could not be updated (I realise my issue was with VMs). I ditched all of those containers and deleted their configs, and the VMs have all been stable since. I did notice the containers were using an unusual amount of CPU, which led me to trash all of them; perhaps their config was corrupted.
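For anyone wanting to check for the same symptom before deleting anything, a quick way to see per-container CPU and memory usage is Docker's built-in stats command, for example:

# One-shot snapshot of CPU %, memory and process counts per running container
docker stats --no-stream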
jonp Posted March 11, 2019

Interesting!! Keep us apprised if anything changes!!
bennymundz Posted March 11, 2019 (Author)

@jonp I will do. I've successfully had three VMs running for two days now; before deleting those containers I would be lucky to get two hours out of them. After a week I will consider this resolved. For now I'm putting this down to a corrupted Docker config causing pain, perhaps locking resources and causing KVM to kill the VMs.
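One way to test the resource theory after the fact, as a rough sketch using only standard tools, is to look for out-of-memory kills or other kernel complaints around the time a VM died:

# Kernel ring buffer; OOM-killer entries name the process that was killed
dmesg -T | grep -iE 'out of memory|oom|killed process'

# The same search against the syslog
grep -iE 'out of memory|oom-killer' /var/log/syslog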
bennymundz Posted March 19, 2019 (Author)

VMs stable after one week. The corrupted Docker containers, which all appeared to run without issue, were the problem. Once they were deleted, everything started working as expected.
NewDisplayName Posted March 19, 2019

It would be interesting to know which dockers caused this.
bennymundz Posted March 20, 2019 (Author)

Binhex Sonarr/Radarr, along with UniFi Video, UNMS, and the UniFi Controller, were all present.