greg_gorrell

  • Members
  • Content Count: 152
  • Community Reputation: 0 Neutral

About greg_gorrell

  • Rank: Advanced Member
  • Birthday: 07/02/1989
  • Gender: Male
  • URL: http://www.facebook.com/screensaversrepairs
  • Location: Bradford, PA

  1. Hey guys, for two days in a row now my VMs have crashed. I am not sure what is causing this, but I have to stop and restart the array to get them to run again. I will post the relevant log info here and attach the diagnostics from right when it happened. Please note that for some reason the VM logs show 18:34 when everything else is showing 13:34.

     Windows 7:
     2019-01-24 18:34:04.562+0000: shutting down, reason=crashed

     pfsense:
     2019-01-24 18:33:23.053+0000: shutting down, reason=crashed

     Syslog:
     Jan 24 07:52:21 Tower avahi-daemon[5409]: Joining mDNS multicast group on interface vnet2.IPv6 with address fe80::fc54:ff:feaf:f563.
     Jan 24 07:52:21 Tower avahi-daemon[5409]: New relevant interface vnet2.IPv6 for mDNS.
     Jan 24 07:52:21 Tower avahi-daemon[5409]: Registering new address record for fe80::fc54:ff:feaf:f563 on vnet2.*.
     Jan 24 07:53:44 Tower avahi-daemon[5409]: Interface vnet2.IPv6 no longer relevant for mDNS.
     Jan 24 07:53:44 Tower avahi-daemon[5409]: Leaving mDNS multicast group on interface vnet2.IPv6 with address fe80::fc54:ff:feaf:f563.
     Jan 24 07:53:44 Tower kernel: br0: port 4(vnet2) entered disabled state
     Jan 24 07:53:44 Tower kernel: device vnet2 left promiscuous mode
     Jan 24 07:53:44 Tower kernel: br0: port 4(vnet2) entered disabled state
     Jan 24 07:53:44 Tower avahi-daemon[5409]: Withdrawing address record for fe80::fc54:ff:feaf:f563 on vnet2.
     Jan 24 09:45:40 Tower kernel: mdcmd (43): spindown 1
     Jan 24 13:33:22 Tower avahi-daemon[5409]: Interface vnet0.IPv6 no longer relevant for mDNS.
     Jan 24 13:33:22 Tower avahi-daemon[5409]: Leaving mDNS multicast group on interface vnet0.IPv6 with address fe80::fc54:ff:feb3:52ec.
     Jan 24 13:33:22 Tower kernel: br0: port 2(vnet0) entered disabled state
     Jan 24 13:33:22 Tower kernel: device vnet0 left promiscuous mode
     Jan 24 13:33:22 Tower kernel: br0: port 2(vnet0) entered disabled state
     Jan 24 13:33:22 Tower avahi-daemon[5409]: Withdrawing address record for fe80::fc54:ff:feb3:52ec on vnet0.
     Jan 24 13:33:22 Tower emhttpd: error: shcmd_test, 1188: Resource temporarily unavailable (11): system
     Jan 24 13:33:23 Tower kernel: pci-stub 0000:03:04.0: claimed by stub
     Jan 24 13:33:23 Tower kernel: pci-stub 0000:03:04.1: claimed by stub
     Jan 24 13:34:04 Tower avahi-daemon[5409]: Interface vnet1.IPv6 no longer relevant for mDNS.
     Jan 24 13:34:04 Tower avahi-daemon[5409]: Leaving mDNS multicast group on interface vnet1.IPv6 with address fe80::fc54:ff:fe49:dbdb.
     Jan 24 13:34:04 Tower kernel: br0: port 3(vnet1) entered disabled state
     Jan 24 13:34:04 Tower kernel: device vnet1 left promiscuous mode
     Jan 24 13:34:04 Tower kernel: br0: port 3(vnet1) entered disabled state
     Jan 24 13:34:04 Tower avahi-daemon[5409]: Withdrawing address record for fe80::fc54:ff:fe49:dbdb on vnet1.

     libvirt:
     2019-01-24 18:33:22.852+0000: 6528: error : qemuMonitorIO:718 : internal error: End of file from qemu monitor
     2019-01-24 18:34:04.320+0000: 6528: error : qemuAgentIO:598 : internal error: End of file from agent monitor
     2019-01-24 18:34:04.361+0000: 6528: error : qemuMonitorIO:718 : internal error: End of file from qemu monitor

     tower-diagnostics-20190124-1336.zip
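The five-hour gap between the VM logs and syslog is most likely just libvirt stamping its per-VM logs in UTC while syslog uses the host's local timezone. A quick sanity check, as a sketch assuming GNU date and a US-Eastern host (a guess based on the poster's PA location):

```shell
# libvirt/QEMU per-VM logs are stamped in UTC; syslog uses local time.
# Converting shows 18:34 UTC and 13:34 EST are the same moment, so the
# logs line up after all. America/New_York is an assumption here.
TZ=America/New_York date -d '2019-01-24 18:34:04 UTC' '+%Y-%m-%d %H:%M:%S'
# → 2019-01-24 13:34:04
```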
  2. Yeah, you definitely want a standalone pc for that.
  3. Yes, I do have the Advanced Buttons plugin installed, and like I said above, it just won't uninstall. I deleted it off the flash drive and will reboot my system when I get home, as I am remoted into a VM right now and can't risk losing access. Thanks Squid, I commend you for all you do here. Your dedication to these forums is admirable.
  4. Currently on unRAID version 6.5, I have a few plugins at the top of the list showing that they have updates available:
     CA Backup/Restore - 2018.03.15 (current version) - 2018.07.15 (update)
     ControlR - v2018.03.21
     Tips and Tweaks - 2018.03.21
     Custom Tab - 2017.12.13
     Unassigned Devices - 2018.03.21
     User Scripts - 2018.02.16
     These will not update at all. When I go to update them, I see the notification box pop up with this:
     plugin: updating: cabackup2:.plg
     plugin: not installed
     Then it says Plugin Update has finished. This has only recently occurred, and I am not sure if it is only these plugins or if it is something with my system. I will also note I have the Advanced Buttons plugin installed, and I do know it is not compatible with my version of unRAID. I am unable to delete it: it says it is uninstalling and then tells me it was successfully uninstalled, but it still remains on the list of plugins. I will also note that other plugins have updated to more recent versions; my latest updated plugin is Fix Common Problems, which was just updated on 8/5/2018. Has anyone had issues like this before or know what I can do? Is there a way to manually remove plugins and reinstall them?
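On the manual-removal question, a minimal sketch of the approach described elsewhere in the thread (delete the .plg from the flash drive, then reboot). It assumes the usual unRAID layout where boot-time plugins live under /boot/config/plugins; the plugin file name used here is hypothetical, so check the actual file name on your flash drive:

```shell
# Sketch: manually remove a stuck unRAID plugin by deleting its .plg
# from the flash drive so it is not re-installed at next boot.
PLUGINS_DIR="${PLUGINS_DIR:-/boot/config/plugins}"  # usual unRAID flash path
PLG="advanced.buttons.plg"                          # hypothetical file name

# rm -f succeeds quietly even if the file is already gone.
rm -f "$PLUGINS_DIR/$PLG"

# Confirm it is gone before rebooting.
if [ ! -e "$PLUGINS_DIR/$PLG" ]; then
  echo "removed: $PLG"
fi
```

After the reboot, reinstalling the plugin from Community Applications should give you a clean copy.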
  5. Works just fine for me.. sounds like you have some other issues going on.
  6. I keep having issues trying to get the live feed. When I open the live feed, I'll catch the first couple of frames and then an error message pops up saying it is unable to load the stream and to make sure port 7446 is open on the NVR. I don't understand what could be causing this; it seems like some type of SSL issue.
  7. Anyone ever take this request seriously?
  8. "You need a GPU in order to pass anything through." I don't know if I am misunderstanding something here, but tons of motherboards come with onboard graphics, and there are even boards with APUs that combine CPU and GPU operations. Since you are using AMD, you will only need one video card per VM, unless you plan on using only one at a time, in which case you can just assign the card to whichever VM you want prior to starting it in the VM manager.
  9. This sounds more like a resource issue than network. Are all your VMs using different cores? Sent from my iPad using Tapatalk
  10. Hey guys, I had some time to play last night and I got everything working now without any errors or XML editing. I don't know whether it was luck or just that they fixed it, but here's what I did. I went in and shut down all my VMs (ubuntu, pfSense, Windows 7, Windows 10, OpenELEC), stopped the array, and backed everything up. I went from 6.1.9 to 6.2.4 via the plugin page; just go in and click update on the unRAID plugin. Reboot your server, delete your docker.img file, and fire up the array. You'll notice all your VMs are there, but you will need to go in and reselect the locations of the iso and the vdisk for some reason. Mine kept saying they couldn't be found until I went back and just reselected them on the edit page. Windows 10 worked without any issues; Windows 7, on the other hand, took some toying with. First, it would hang on the "Windows is loading files" screen. I knew there was an issue with the video driver, so I switched that to Cirrus and it did get to the startup repair screen. For some reason I had to reload all the virtIO drivers, as Windows could not find the OS to boot; I hope to understand this a little better. Either way, I got it to a repair window where I was able to load the drivers in the proper order as if I was installing Windows fresh. As soon as I installed the viostor driver, it froze for about 5 minutes, then went into the desktop. Now I am not sure whether QEMU was "fixed" or what, but I did not have to change anything in the XML. This is what unRAID generated based on my selections on the edit page:
      <cpu mode='host-passthrough'>
        <topology sockets='1' cores='3' threads='1'/>
      </cpu>
      I know it is hard to get any help from anyone on these issues, but I got sick of being stuck on 6.1.9 last night and finally had the time to let my network go down and get some things figured out. If anyone has any questions, I would be glad to try and help. This has been so frustrating and I am so relieved to finally have it working like it should be.
Note: I still do have this error showing in the log, but unless something stops working, I am going to disregard it. The same block of warnings repeats over and over for the same set of bits (0-9, 12-17, 23, 24):
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 0]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 1]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 2]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 3]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 4]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 5]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 6]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 7]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 8]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 9]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 12]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 13]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 14]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 15]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 16]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 17]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 23]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 24]
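Those warnings mean QEMU asked for CPUID feature bits that the host CPU does not report, so it simply masked them off. A rough way to see which feature flags the host actually exposes, as a sketch for a Linux host:

```shell
# Print the feature flags the host CPU advertises (Linux /proc interface).
# Any feature QEMU warned about will be missing from this sorted list.
grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2 | tr ' ' '\n' | sed '/^$/d' | sort -u
```

Comparing that list against what the guest's CPU model requests shows exactly which features are being dropped; as long as the guest boots, the warnings are harmless.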
  11. Can anyone point me to where the VM settings are stored, or explain this phenomenon to me? When on 6.1.9, I created a few VMs such as ubuntu, Windows, and pfsense. When I upgraded to 6.2, I added a couple more, including OpenFLIXR. When I downgraded, I deleted the OpenFLIXR VM, but when I upgrade back to 6.2, it shows up on the list again. I noticed this with a couple of VMs. Why are some only showing in 6.2 and not in 6.1.9? Is there a config file somewhere that stores this information and differs between versions? I understand QEMU is a different version, and maybe that is to blame.
  12. I currently have pfsense running as a vm on my unraid box with an Intel dual nic passed through. It works very well, the only issue you may experience is if you lose power to unraid, you lose your network. Make sure to keep a machine set up with a static ip or you won't be able to connect in that situation. The host will not receive any packets from the nic if you have it passed through to pfsense vm, so no security concerns there. Sent from my SM-N910V using Tapatalk
  13. Just stick with 6.1.9 until they try the new QEMU in 6.3. It sucks, I know, but no one is going to try and fix this for us until then.
  14. Why do you have your dockers appdata on a user share?
  15. Gotcha. After doing a little research, it's too bad something like RemoteFX couldn't be made to work on Windows 10. (https://technet.microsoft.com/en-us/library/ff817578%28v=ws.10%29.aspx?f=255&MSPPError=-2147217396) Over on one of the forums where the guys are using SETI@home, I read that they can start their software that utilizes the GPU via VNC, then start the RDP client, and the application continues using the GPU rather than the emulated adapter. I might test this out a little later, not that I would have a whole lot of use for it. Sorry to hijack the thread anyway. EDIT - I did find RemoteFX in the Group Policy Editor of my Windows 7 Ult VM, but it may need to be run on Hyper-V.