
vw-kombi

Members
  • Content Count: 175
  • Joined
  • Last visited

Community Reputation: 3 Neutral

About vw-kombi
  • Rank: Advanced Member

  1. The FreeFileSync app allowed this to process without any unraid lockup. I will use that for any mass updates from now on.
  2. This caused it all to lock up again - however, this time I was able to free it up by killing the file copy job from my PC. I am now going to try my old FreeFileSync app to see if that works - it states there is only 65GB that needs replacing (31,035 files).
  3. I have run these jobs many times in the past on the old unraid releases. A week ago I upgraded to the latest unraid release - not sure if this is related? I have a robocopy job that syncs about 250GB of photos up to unraid, copying anything that differs. There are large differences now, as faces have all been written to the photo metadata. If I run this robocopy job, the unraid server crashes in some way - I get alerts that my dockers etc. are not responding, and console access is frozen both on the attached monitor and over the network. I can only click the reset button and get a parity check. I just repeated this and it happened again. I am now trying a manual copy with Windows File Explorer rather than the robocopy script (there is a sketch of that kind of job after this list). Will report back. The diagnostics I ran afterwards the first time are attached in case they are requested: tower-diagnostics-20200510-1026.zip
  4. I have now converted all VMs to qcow2. I have a shedload, so I am never going to do the vdi thing again.
  5. I thought I would revisit this and used the latest 6.8.3, and this one went in with no issues at all. I had a few months off after the 6.8.0 and 6.8.1 attempts, skipped 6.8.2, and now 6.8.3 went in without any issues, just like all my pre-6.8 upgrades.
  6. I'd love something like this for my VMs more than my docker containers. I have sorted my containers with the correct delays on autostart and they are fine as they are, but my pages and pages of VMs are a hassle which ideally I would like to group together, or at least have them sortable/movable like the docker containers can be.
  7. I seem to have fixed this issue. I converted these two vdi images to qcow2 with:
       qemu-img convert -f vdi -O qcow2 VASDEP3.vdi VASDEP3.qcow2
       qemu-img convert -f vdi -O qcow2 VASENT3.vdi VASENT3.qcow2
     As qcow2, the servers are stable and don't crash and bring down the VM system in unraid (see the disk XML note after this list). Maybe there is some sort of compatibility issue with VirtualBox VDI files?
  8. I recently added a number of vdi images from VirtualBox to KVM in unraid. These are two Windows 2016 machines. I did NOT convert them, just added them as their .vdi images. I start them up, log on, and all is fine until I start using them under any sort of load. One of them (VASENT3-32bit, the main database server) gets dropped from VNC Viewer. They have two CPUs each that are not used by anything else. I can no longer ping the machine, so it is gone, but on the unraid dashboard they are both still showing as running. The VMS tab in unraid, however, is unresponsive and shows nothing but the jumping lines forever. The ONLY way I can recover this is to go into settings, turn off VMs, delete the libvirt.img file, then restart and re-add the VMs. A turn off/on of the VMs on its own does not help - it has to be a delete and re-create - which is a pain as I have about 10 VMs configured there. Diagnostics are attached. Can anyone give me an idea of what is going on? (There is a short virsh sketch after this list.) This is an extract from libvirt.log:
       2020-04-09 10:15:34.467+0000: 14305: error : qemuDomainObjBeginJobInternal:7209 : Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainGetBlockInfo)
       2020-04-09 10:15:46.176+0000: 14304: warning : qemuDomainObjBeginJobInternal:7187 : Cannot start job (modify, none, none) for domain VASENT3-32bit; current job is (query, none, none) owned by (14306 remoteDispatchDomainGetBlockInfo, 0 <null>, 0 <null> (flags=0x0)) for (1287s, 0s, 0s)
       2020-04-09 10:15:46.176+0000: 14304: error : qemuDomainObjBeginJobInternal:7209 : Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainGetBlockInfo)
       2020-04-09 10:16:04.468+0000: 14308: warning : qemuDomainObjBeginJobInternal:7187 : Cannot start job (query, none, none) for domain VASENT3-32bit; current job is (query, none, none) owned by (14306 remoteDispatchDomainGetBlockInfo, 0 <null>, 0 <null> (flags=0x0)) for (1306s, 0s, 0s)
       2020-04-09 10:16:04.468+0000: 14308: error : qemuDomainObjBeginJobInternal:7209 : Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainGetBlockInfo)
       2020-04-09 10:16:56.961+0000: 14303: error : qemuMonitorIORead:611 : Unable to read from monitor: Connection reset by peer
       2020-04-09 10:19:00.047+0000: 17921: info : libvirt version: 5.1.0
       2020-04-09 10:19:00.047+0000: 17921: info : hostname: Tower
       2020-04-09 10:19:00.047+0000: 17921: warning : qemuDomainObjTaint:7986 : Domain id=1 name='VASENT3-32bit' uuid=c752cd72-ff12-d630-06be-91c1200f1974 is tainted: high-privileges
       2020-04-09 10:19:00.047+0000: 17921: warning : qemuDomainObjTaint:7986 : Domain id=1 name='VASENT3-32bit' uuid=c752cd72-ff12-d630-06be-91c1200f1974 is tainted: host-cpu
       2020-04-09 10:20:14.826+0000: 17922: warning : qemuDomainObjTaint:7986 : Domain id=2 name='VASDEP3-32bit' uuid=65d3d573-4a1f-fb65-aa6b-a721ad2ca435 is tainted: high-privileges
       2020-04-09 10:20:14.826+0000: 17922: warning : qemuDomainObjTaint:7986 : Domain id=2 name='VASDEP3-32bit' uuid=65d3d573-4a1f-fb65-aa6b-a721ad2ca435 is tainted: host-cpu
     tower-diagnostics-20200410-0830.zip
  9. It would be great if goaccess could be integrated into this. I use a custom BR network and separate IP addresses for these containers so they can be managed by my firewall, and I can't get the goaccess container to read the logs from the separate nginx container. Anyone know how? Edit - ignore - I figured it out. I had configured my own custom access.log files in nginx, so the default access.log was empty. I have edited goaccess.conf to point to emby.log instead of access.log (see the goaccess.conf sketch after this list) and I have data now.
  10. I have an unused USB-C connection on my unraid motherboard. I have seen a number of these devices out there, but this one specifically says Linux support: https://www.amazon.com/StarTech-com-5Gbps-Ethernet-Network-Adapter/dp/B081SM5CMY I am sure the price will come down over time, but I figured this may be a way of squeezing loads more out of my server NIC connection than the current 1Gbps. It seems a good option if I can't do 10GbE cards/wires/switch. What I don't understand, however, is that if the network switch is 10/100/1000, how can it also allow connections with theoretical speeds of up to 5000Mbps for this USB-C adapter? I know the speed from the client device will control this for a 1-on-1 connection, but unraid talks to lots of stuff in my network, so I am assuming gains could be had? Any comments? Anyone tried it?
  11. Thanks for that - had a read, but I'm not too technical. Does not matter, I guess, now that I am getting my pfsense router to email me when WAN_DHCP goes up, to collect all the ISP outages.
  12. It would be great if the log produced by this could have the date/time by each entry.
  13. Ta - I have already set that folder - it is the two underneath I am referring to. It creates a 'get-iplayer' folder for temp files while downloading and a 'completed' folder under the top-level one specified by the /data parameter. I wondered if these second-level folders can be changed.
  14. Is there any way I can set the output folder to a different area in this docker container?
  15. Just resurrecting this again. I'm going to update the BIOS to the latest version, remove all the ACS stuff and current reservation options, reboot, and see what I have IOMMU-wise then (see the syslinux.cfg sketch after this list). I hope the NICs are able to be pulled out; if not, I will make small steps on the ACS front. I suspect it's an issue between the latest release and my IOMMU grouping with the mass ACS overrides - not being seen correctly for some reason.
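
On item 3 above, a minimal sketch of the kind of robocopy mirror job described there - the source path, share name and log file are assumptions for illustration, not the actual script:

    robocopy "C:\Photos" "\\TOWER\photos" /MIR /R:1 /W:5 /LOG:C:\logs\photo-sync.log

/MIR mirrors the destination to the source, so once every photo's metadata has changed, the job re-copies the whole library rather than a handful of files.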
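
On item 7, after converting with qemu-img the VM's disk definition also needs to reference the new qcow2 file. A minimal sketch of the relevant libvirt disk XML, with the file path assumed rather than taken from the post:

    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='writeback'/>
      <source file='/mnt/user/domains/VASDEP3/VASDEP3.qcow2'/>
      <target dev='hdc' bus='virtio'/>
    </disk>

Re-selecting the disk in the unraid VM editor should normally set this, but the XML view is where the driver type can be confirmed.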
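
On item 8, a few standard virsh commands that can be run from the unraid console to inspect a hung guest before resorting to deleting libvirt.img - a diagnostic sketch only, not a guaranteed recovery:

    virsh list --all                 # list defined domains and their states
    virsh domstate VASENT3-32bit     # report the state of the stuck guest
    virsh destroy VASENT3-32bit      # hard power-off of just that guest

If the state change lock is held as in the log extract, virsh itself may block, but that at least shows whether one domain or the whole libvirt service is wedged.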
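
On item 9, the fix described amounts to pointing goaccess.conf at the custom log. A minimal sketch, assuming the nginx logs are reachable under the goaccess container's /config mapping (the exact path depends on how the containers share their log directories):

    # goaccess.conf
    log-file /config/log/nginx/emby.log
    log-format COMBINED

log-file and log-format are standard GoAccess directives; emby.log matches the custom access log named in the post.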
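
On item 15, the "mass ACS overrides" refer to the kernel parameter unraid passes at boot. A sketch of a syslinux.cfg boot entry with the override enabled, assuming an otherwise stock entry:

    label Unraid OS
      menu default
      kernel /bzimage
      append pcie_acs_override=downstream,multifunction initrd=/bzroot

Removing the pcie_acs_override=... portion and rebooting shows the motherboard's native IOMMU grouping, which is the "see what I have IOMMU-wise" step.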