
vw-kombi
Members · Content Count: 182 · Community Reputation: 3 (Neutral) · Rank: Advanced Member


  1. Only one VM, pfSense. Not Windows.
  2. Had my first power failure that lasted longer than the UPS. I got an email saying the time limit had been reached (10 mins left) and shutdown was beginning. When it all came back up, it was running a parity check, so I assume it did not manage to shut down cleanly? I can't see any special settings on the UPS. Any ideas where I can look?
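Unraid's built-in UPS support is apcupsd, so one place to look is the thresholds that `apcaccess status` reports (MINTIMEL, MBATTCHG, MAXTIME control when the shutdown fires). A minimal sketch, with an illustrative captured sample inlined so it runs anywhere; on the server you would pipe real `apcaccess status` output into the same filter:

```shell
#!/bin/sh
# Print the apcupsd shutdown thresholds and remaining runtime.
# The here-doc below is sample output, not from the original post;
# on the server, replace it with:  apcaccess status | show_thresholds

show_thresholds() {
    awk -F' *: *' '/^(MINTIMEL|MBATTCHG|MAXTIME|TIMELEFT)/ {print $1 "=" $2}'
}

show_thresholds <<'EOF'
TIMELEFT : 22.5 Minutes
MBATTCHG : 10 Percent
MINTIMEL : 10 Minutes
MAXTIME  : 0 Seconds
EOF
```

If MINTIMEL is 10 minutes, the "10 mins left" email matches the daemon starting its shutdown at that threshold; whether the array then stopped cleanly before battery exhaustion is what the post-restart parity check calls into question.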
  3. My mags are not in folders, and I assume that is why I get errors. I can use a utility to make folders out of the files and move them in, but I'm wondering if there is an option to have it work when they are not in individual folders?
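Failing a per-file option in the app, the "utility" step is easy to script: give every file its own folder named after the file. A minimal sketch; the file names and the demo directory are hypothetical:

```shell
#!/bin/sh
# Move each regular file in a directory into its own folder,
# named after the file minus its extension.

foldify() {
    for f in "$1"/*; do
        [ -f "$f" ] || continue
        base=$(basename "$f")
        dir="$1/${base%.*}"        # folder name = file name without extension
        mkdir -p "$dir"
        mv "$f" "$dir/"
    done
}

# demo on a throwaway directory with placeholder files
demo=$(mktemp -d)
touch "$demo/Mag-2020-01.pdf" "$demo/Mag-2020-02.pdf"
foldify "$demo"
ls "$demo"
rm -rf "$demo"
```

Run it against the magazine share once, and each issue ends up in its own folder, which is the layout the scanner expects.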
  4. I would like this also. My VM images are not on the cache disk but on high-speed Unassigned Devices drives, one step further removed from the array.
  5. I have gone through all the troubleshooting stuff on the pfSense forums. This is so frustrating.
  6. Any update on this? I note that once I edit the XML with the Skylake CPU stuff, you can no longer save the VM from the non-XML form view, as it throws the message 'xml error: invalid cpu feature name'. Not sure if that's just me?
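For reference, custom CPU features in a libvirt domain XML look like the fragment below; the model and feature name are illustrative, not taken from the original post. The error quoted above is exactly what libvirt raises when a `<feature>` `name` is not one it recognises:

```xml
<cpu mode='custom' match='exact'>
  <model fallback='allow'>Skylake-Client</model>
  <!-- policy must be one of force/require/optional/disable/forbid,
       and name must be a feature libvirt knows; an unknown name
       fails validation with "invalid cpu feature name" -->
  <feature policy='require' name='hypervisor'/>
</cpu>
```

So if the form view rejects the edited XML, a first check is whether every `<feature>` name matches one listed by `virsh cpu-models x86_64` / libvirt's CPU map for that release.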
  7. I have an old physical laptop with 2 extra USB NICs, max 100 Mbps, so I want to virtualise it into Unraid and use gigabit interfaces all round. Once I create a VM with 2 CPUs and 3 GB RAM and pass in the 4-port server NIC, it all works fine except for one thing (note: it is exactly the same config apart from the interface names; I restore the config between them). The issue: my pfSense OPT1 network has the IoT and internet-connected stuff in it, and my pfSense LAN has the servers, Unraid, Docker etc., one of which is the Emby container. My Shield TV (1 Gbps ethernet connection) is on the pfSense OPT1 interface, and it pauses all the time when watching stuff from Emby. If I connect it to a LAN port instead, it is all fine, but I want it in the same network as all the other stuff (Google Homes etc.) so they can all communicate. The exact same config on the old laptop pfSense does not have the issue. Everything else is fine on the VM pfSense, just not this one thing. Any ideas?
  8. The FreeFileSync app allowed this to process without any Unraid lockup. I will use that for any mass updates from now on.
  9. Caused it all to lock up again; however, this time I was able to free it up by killing the file copy job from my PC. I am now going to try my old FreeFileSync app to see if that works; it states there is only 65 GB that needs replacing (31,035 files).
  10. I have run these jobs many times in the past on older Unraid releases. A week ago I upgraded to the latest Unraid release; not sure if this is related? I have a job that copies about 250 GB of photos up to Unraid if there are any differences, and there are large differences now, as faces were all written to the photo metadata. If I run this robocopy job, the Unraid server crashes in some way: I get alerts that my Dockers etc. are not responding, and console access is frozen both on the attached monitor and over the network. I can only hit the reset button, which earns me a parity check. I just repeated this and it happened again. I am now trying a manual copy with Windows File Explorer rather than the robocopy script and will report back. The diagnostics I ran afterwards the first time are attached if requested. tower-diagnostics-20200510-1026.zip
  11. I have now converted all VMs to qcow2. I have a shedload, so I am never going to do the vdi thing again.
  12. I thought I would revisit this and tried the latest 6.8.3, and this one went in with no issues at all. I had a few months off after 6.8.0 and 6.8.1, skipped 6.8.2, and now 6.8.3 went in without any issues, just like all my pre-6.8 upgrades.
  13. I'd love something like this for my VMs more than my Docker containers. I have sorted my containers with correct delays on auto-start and they are fine as they are, but my pages and pages of VMs are a hassle; ideally I would like to group them together, or at least have them sortable/movable like the Docker containers can be.
  14. I seem to have fixed this issue. I converted these two vdi images to qcow2 with:
     qemu-img convert -f vdi -O qcow2 VASDEP3.vdi VASDEP3.qcow2
     qemu-img convert -f vdi -O qcow2 VASENT3.vdi VASENT3.qcow2
     As qcow2, the servers are stable, don't crash, and no longer bring down the VM system in Unraid. Maybe there is some sort of compatibility issue with VirtualBox VDI files?
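With a shedload of VMs to convert, the two commands above generalise to a loop. A minimal sketch; the demo directory and image names are placeholders, and this version only prints the commands (dry run) so nothing is touched until you swap `echo` for the real call:

```shell
#!/bin/sh
# Print a qemu-img conversion command for every .vdi image in a directory.
# Drop the leading `echo` to actually convert.

convert_all() {
    for vdi in "$1"/*.vdi; do
        [ -e "$vdi" ] || continue      # no matches: the glob stayed literal
        echo qemu-img convert -f vdi -O qcow2 "$vdi" "${vdi%.vdi}.qcow2"
    done
}

# demo on a throwaway directory with two empty placeholder images
demo=$(mktemp -d)
touch "$demo/VASDEP3.vdi" "$demo/VASENT3.vdi"
convert_all "$demo"
rm -rf "$demo"
```

After converting, each VM's disk path in its XML also needs updating from the .vdi to the new .qcow2 file before the change takes effect.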
  15. I recently added a number of vdi images from VirtualBox to KVM in Unraid. These are two Windows 2016 machines. I did NOT convert them, just added them as their .vdi images. I start them up, log on, and all is fine until I start using them under any sort of load. One of them (VASENT3-32bit), the main database server, is dropped from VNC Viewer. They each have two CPUs that are not used by anything else. I can no longer ping the machine, so it is gone, but on the Unraid dashboard they are both still showing as running. The VMS tab in Unraid, however, is unresponsive and shows nothing but the jumping lines forever. The ONLY way I can recover is to go into settings, turn off VMs, delete the libvirt.img file, then restart and re-add the VMs. A plain off/on of the VMs does not help; it has to be a delete and re-create, which is a pain as I have about 10 VMs configured there. Diagnostics are attached. Can anyone give me an idea of what is going on? This is an extract from libvirt.log:

2020-04-09 10:15:34.467+0000: 14305: error : qemuDomainObjBeginJobInternal:7209 : Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainGetBlockInfo)
2020-04-09 10:15:46.176+0000: 14304: warning : qemuDomainObjBeginJobInternal:7187 : Cannot start job (modify, none, none) for domain VASENT3-32bit; current job is (query, none, none) owned by (14306 remoteDispatchDomainGetBlockInfo, 0 <null>, 0 <null> (flags=0x0)) for (1287s, 0s, 0s)
2020-04-09 10:15:46.176+0000: 14304: error : qemuDomainObjBeginJobInternal:7209 : Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainGetBlockInfo)
2020-04-09 10:16:04.468+0000: 14308: warning : qemuDomainObjBeginJobInternal:7187 : Cannot start job (query, none, none) for domain VASENT3-32bit; current job is (query, none, none) owned by (14306 remoteDispatchDomainGetBlockInfo, 0 <null>, 0 <null> (flags=0x0)) for (1306s, 0s, 0s)
2020-04-09 10:16:04.468+0000: 14308: error : qemuDomainObjBeginJobInternal:7209 : Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainGetBlockInfo)
2020-04-09 10:16:56.961+0000: 14303: error : qemuMonitorIORead:611 : Unable to read from monitor: Connection reset by peer
2020-04-09 10:19:00.047+0000: 17921: info : libvirt version: 5.1.0
2020-04-09 10:19:00.047+0000: 17921: info : hostname: Tower
2020-04-09 10:19:00.047+0000: 17921: warning : qemuDomainObjTaint:7986 : Domain id=1 name='VASENT3-32bit' uuid=c752cd72-ff12-d630-06be-91c1200f1974 is tainted: high-privileges
2020-04-09 10:19:00.047+0000: 17921: warning : qemuDomainObjTaint:7986 : Domain id=1 name='VASENT3-32bit' uuid=c752cd72-ff12-d630-06be-91c1200f1974 is tainted: host-cpu
2020-04-09 10:20:14.826+0000: 17922: warning : qemuDomainObjTaint:7986 : Domain id=2 name='VASDEP3-32bit' uuid=65d3d573-4a1f-fb65-aa6b-a721ad2ca435 is tainted: high-privileges
2020-04-09 10:20:14.826+0000: 17922: warning : qemuDomainObjTaint:7986 : Domain id=2 name='VASDEP3-32bit' uuid=65d3d573-4a1f-fb65-aa6b-a721ad2ca435 is tainted: host-cpu

tower-diagnostics-20200410-0830.zip