richardsim7

Everything posted by richardsim7

  1. Nevermind, I used the method at the bottom of this thread: https://forums.lime-technology.com/topic/54953-solved-how-to-mount-vdiskimg-files/
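
     For reference, a minimal sketch of the loop-mount approach (this may not match the linked thread's method exactly; it assumes a raw-format vdisk, and the path below is a placeholder):

         # Attach the image to a free loop device and scan its partition table
         VDISK=/mnt/user/domains/Windows/vdisk1.img   # hypothetical path - adjust
         LOOP=$(losetup -f -P --show "$VDISK")

         # See which partitions the image contains
         lsblk "$LOOP"

         # Mount the Windows data partition read-only (the partition number may
         # differ, and an NTFS driver such as ntfs-3g must be available)
         mkdir -p /mnt/vdisk
         mount -o ro "${LOOP}p2" /mnt/vdisk

         # When done: umount /mnt/vdisk && losetup -d "$LOOP"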
  2. So I gave in and did a fresh install on the NVMe. Very fast, very nice, love it. I want to transfer files across from my old VM (vdisk), ideally by mounting it in this VM while booting from the NVMe. However, every time I try, it just boots into the vdisk. How do I access the boot menu, or mount the vdisk so it won't boot from it?
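
     One way to stop the VM booting from the old vdisk, sketched below under the assumption that the VM is named "Windows", is to demote or remove the vdisk's boot-order entry in the libvirt XML:

         # Open the VM definition in an editor (libvirt validates it on save)
         virsh edit Windows

         # Inside the vdisk's <disk> element, change
         #     <boot order='1'/>
         # to a later slot, e.g. <boot order='2'/>, or delete the line entirely
         # so the passed-through NVMe drive is tried first.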
  3. Ah ok, interesting. So yeah, it would be interesting to see if we can get this working natively instead. Like you say, @billington.mark's instructions are for starting from scratch.
  4. Check your motherboard's site for a BIOS update if possible
  5. From this page. Just wanted to say that it worked for me: unRAID v6.3.5, Windows 7 Pro to Windows 10 (latest build as of today). I think I had it on 1 CPU core from when I installed Windows 7, just to be on the safe side.
  6. FYI, it looks like Gigabyte have released BIOS updates for lots of recent motherboards. Not sure how far back they're supporting the fix.
  7. Cheers, managed to figure it out (sort of). I think the VPN server has gone, so I changed to a different one, but then couldn't get that to work either. Turns out it doesn't resolve DNS names, so I had to put the IP of the new server, rather than the URL, in the "VPN_REMOTE" field.
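
     For anyone hitting the same thing, resolving the hostname yourself and pasting the resulting IP into VPN_REMOTE looks roughly like this (the hostname below is a placeholder):

         # Resolve the VPN endpoint's hostname to an IP on the unRAID host
         nslookup nl.example-vpn.com
         # or, using the system resolver directly:
         getent hosts nl.example-vpn.com

         # Then put the returned IP (not the hostname) in the container's
         # VPN_REMOTE field and restart the Docker container.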
  8. DelugeVPN seems to have stopped working; I think the latest update did it. Running unRAID v6.2.4. I've checked the VPN address and user/pass, so it's not them. I noticed it when nothing was downloading, restarted the Docker container, and now Chrome says the web UI "refuses to connect".
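
     A first diagnostic step when it stops like this is to check the container log for OpenVPN errors; a sketch, assuming the container is named "binhex-delugevpn" (check `docker ps -a` for your actual name):

         # Show the most recent log output from the DelugeVPN container
         docker logs --tail 50 binhex-delugevpn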
  9. Rolled back to 6.2.4 and the VM boots again. Any ideas why 6.3.2 isn't working?
  10. I did a quick search but couldn't find anything: I upgraded from 6.2.4 to 6.3.2, and now my Windows 10 VM won't boot. SeaBIOS just says "No bootable device". Any ideas? nas-diagnostics-20170302-2017.zip
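
     A quick sanity check before digging into the diagnostics zip: confirm the vdisk file is still present and readable from the host (a sketch; the path below is an example, matching the vdisk location in my VM XML):

         # Verify the vdisk exists and report its format and size
         qemu-img info /mnt/user/virtualmachines/Windows/vdisk1.img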
  11. I started the VM with an Ubuntu Live CD as the install ISO so I could access the Windows files from there. I haven't had any luck completely removing Avast that way (maybe you will), but at least I could access my files and transfer them somewhere safe.
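
     Once the Ubuntu live session is up, mounting the Windows partition read-only looks roughly like this (a sketch; the device name and partition number depend on your VM, so check lsblk first):

         # Identify the vdisk as seen from inside the guest
         lsblk

         # Mount the Windows (NTFS) partition read-only
         sudo mkdir -p /mnt/windows
         sudo mount -t ntfs-3g -o ro /dev/vda2 /mnt/windows

         # Copy files somewhere safe, then unmount:
         # sudo umount /mnt/windows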
  12. With Avast installed, does the VM freeze immediately or after a few minutes? In my case, Avast was also causing my Windows 10 VM to freeze, but it stayed running long enough for me to uninstall it the normal way through the Control Panel. After installing Sophos, it was smooth sailing for my Windows 10 VM. I had to force Avast to walk the plank. I have never tried it, but if your VM freezes immediately, can you boot into Windows Safe Mode and uninstall Avast from there? EDIT: Oops, missed the part about not even being able to boot the VM. You will probably have to recreate the VM if you can't even boot into it with Avast installed. Another killer of Windows 10 is apparently CoreTemp RC6. I had that installed and it locked up Windows 10 every time until I uninstalled it. It does not even have to be running, just installed. It locked up during installation of Avast itself :\ New VM, sans Avast, all good now.
  13. Thank god I'm not the only person having this issue! Installing Avast kills my VM and crashes my server, and from then on I can no longer boot into my VM. Any ideas how one would uninstall Avast from the VM image?
  14. It's a release candidate, so other than bug fixes, it theoretically should stay the same. That said, there were some pretty major changes made after it went to RC status, so it's tough to say where those changes will land: either baked into the final, or reverted back out. There is some contention over whether docker appdata must live on a specific disk or can use a relative user location. Either option helps some things and breaks others, so I think the effort right now is to figure out a way to make relative user locations work for all dockers. The only other reasons to wait would be:
a. Documentation - final docs and migration instructions won't settle until after the release goes final.
b. Offline usage - trials, betas and RCs require internet connectivity to start the array; the final release should start without an active internet connection as long as you have a valid license.
Disclaimer: I do not work for, or have inside knowledge from, Limetech. These are just my opinions from observation and long experience.
Thanks for the input, useful stuff. It's just that I'm itching to upgrade so I can set up my VM again. Upgrading from 6.1.7 to 6.1.9 broke my VM (and maybe my dockers), so I thought screw it, upgrade to 6.2 (beta 21) and go from there. After much screwing around trying to fix my VM and reverting back to 6.1.9 (tried 6.1.7 too, nope, didn't work), I gave up. So I'm just waiting for the 6.2 final so I can start my VM from scratch.
  15. Reckon much will change between this and release? Tempted to upgrade now...
  16. So I updated from 6.1.7 to 6.1.9 and my Windows VM stopped booting. So I thought, might as well upgrade to 6.2beta21 and see what happens. Well, it got worse. When my VM freezes at the Windows loading screen, the rest of the NAS just...dies. I can't access the Flash disk etc. and I have to reboot it to get it back. Here's my XML:

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>Windows</name>
  <uuid>50d3fa99-2a1c-88cf-21a3-d58d88d08c39</uuid>
  <metadata>
    <vmtemplate name="Custom" icon="windows.png" os="windows"/>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <memoryBacking>
    <nosharepages/>
    <locked/>
  </memoryBacking>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='2' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/virtualmachines/Windows/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <controller type='usb' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:48:8f:00'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/Windows.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x045e'/>
        <product id='0x0800'/>
      </source>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </memballoon>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-device'/>
    <qemu:arg value='ioh3420,bus=pci.0,addr=1c.0,multifunction=on,port=2,chassis=1,id=root.1'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=01:00.1,bus=root.1,addr=00.1'/>
  </qemu:commandline>
</domain>

If I edit the XML after it's crashed, this appears at the top of the page:

Warning: file_put_contents(/boot/config/domain.cfg): failed to open stream: Input/output error in /usr/local/emhttp/plugins/dynamix.vm.manager/classes/libvirt_helpers.php on line 375

And the log just has this:

May 10 23:44:53 NAS shfs/user: shfs_read: read: (5) Input/output error
May 10 23:44:53 NAS shfs/user: shfs_read: read: (5) Input/output error
May 10 23:44:53 NAS shfs/user: shfs_read: read: (5) Input/output error
May 10 23:44:53 NAS shfs/user: shfs_read: read: (5) Input/output error

More log:

May 11 00:26:56 NAS kernel: ata3.00: failed command: WRITE FPDMA QUEUED
May 11 00:26:56 NAS kernel: ata3.00: cmd 61/18:e0:40:6e:cc/00:00:08:00:00/40 tag 28 ncq 12288 out
May 11 00:26:56 NAS kernel: res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
May 11 00:26:56 NAS kernel: ata3.00: status: { DRDY }
May 11 00:26:56 NAS kernel: ata3.00: failed command: WRITE FPDMA QUEUED
May 11 00:26:56 NAS kernel: ata3.00: cmd 61/48:e8:78:6e:cc/00:00:08:00:00/40 tag 29 ncq 36864 out
May 11 00:26:56 NAS kernel: res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
May 11 00:26:56 NAS kernel: ata3.00: status: { DRDY }
May 11 00:26:56 NAS kernel: ata3.00: failed command: WRITE FPDMA QUEUED
May 11 00:26:56 NAS kernel: ata3.00: cmd 61/18:f0:c0:6e:cc/00:00:08:00:00/40 tag 30 ncq 12288 out
May 11 00:26:56 NAS kernel: res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
May 11 00:26:56 NAS kernel: ata3.00: status: { DRDY }
May 11 00:26:56 NAS kernel: ata3: hard resetting link
May 11 00:26:56 NAS kernel: ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
May 11 00:26:56 NAS kernel: ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
May 11 00:27:01 NAS kernel: ata4.00: qc timeout (cmd 0xec)
May 11 00:27:01 NAS kernel: ata4.00: failed to IDENTIFY (I/O error, err_mask=0x4)
May 11 00:27:01 NAS kernel: ata4.00: revalidation failed (errno=-5)
May 11 00:27:01 NAS kernel: ata4: hard resetting link
May 11 00:27:01 NAS kernel: ata3.00: qc timeout (cmd 0xec)
May 11 00:27:01 NAS kernel: ata3.00: failed to IDENTIFY (I/O error, err_mask=0x4)
May 11 00:27:01 NAS kernel: ata3.00: revalidation failed (errno=-5)
May 11 00:27:01 NAS kernel: ata3: hard resetting link
May 11 00:27:02 NAS kernel: ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
May 11 00:27:02 NAS kernel: ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
May 11 00:27:12 NAS kernel: ata4.00: qc timeout (cmd 0xec)
May 11 00:27:12 NAS kernel: ata4.00: failed to IDENTIFY (I/O error, err_mask=0x4)
May 11 00:27:12 NAS kernel: ata4.00: revalidation failed (errno=-5)
May 11 00:27:12 NAS kernel: ata4: limiting SATA link speed to 3.0 Gbps
May 11 00:27:12 NAS kernel: ata4: hard resetting link
May 11 00:27:12 NAS kernel: ata3.00: qc timeout (cmd 0xec)
May 11 00:27:12 NAS kernel: ata3.00: failed to IDENTIFY (I/O error, err_mask=0x4)
May 11 00:27:12 NAS kernel: ata3.00: revalidation failed (errno=-5)
May 11 00:27:12 NAS kernel: ata3: limiting SATA link speed to 3.0 Gbps
May 11 00:27:12 NAS kernel: ata3: hard resetting link
May 11 00:27:12 NAS kernel: ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
May 11 00:27:12 NAS kernel: ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
May 11 00:27:13 NAS kernel: ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
May 11 00:27:13 NAS kernel: ata2.00: failed command: READ DMA EXT
May 11 00:27:13 NAS kernel: ata2.00: cmd 25/00:08:18:29:54/00:00:57:00:00/e0 tag 21 dma 4096 in
May 11 00:27:13 NAS kernel: res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
May 11 00:27:13 NAS kernel: ata2.00: status: { DRDY }
May 11 00:27:13 NAS kernel: ata2: hard resetting link
May 11 00:27:13 NAS kernel: ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
May 11 00:27:13 NAS kernel: ata1.00: failed command: READ DMA EXT
May 11 00:27:13 NAS kernel: ata1.00: cmd 25/00:08:18:29:54/00:00:57:00:00/e0 tag 17 dma 4096 in
May 11 00:27:13 NAS kernel: res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
May 11 00:27:13 NAS kernel: ata1.00: status: { DRDY }
May 11 00:27:13 NAS kernel: ata1: hard resetting link
May 11 00:27:13 NAS kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
May 11 00:27:13 NAS kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
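
     Given the ata timeouts and link resets in that log, a sensible next step would be SMART checks on the affected drives; a sketch (the device names are examples; map ataN ports to devices via the unRAID GUI or dmesg):

         # Quick overall health verdict for a drive
         smartctl -H /dev/sdb

         # Full SMART attributes and the drive's error log
         smartctl -a /dev/sdb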
  17. I had a similar problem: everything was fine until I installed graphics drivers, then I got an IRQ 16 error on boot and everything was suuuuper slow. And in Windows 10 it just caused a BSOD loop. What VirtIO drivers are you using? I found stepping back to .109 fixed everything.
  18. Ah ok, fair enough. My issue was that everything was fine with the .112 drivers until my Nvidia card got drivers installed. Then everything just went downhill. In Windows 8.1, the system became VERY slow (and the IRQ errors got spat out when it booted). In Windows 10, it just got stuck in a reboot loop. Luckily for me, the .109 drivers fixed all that, but I've no idea why.
  19. This may or may not help, but from skim-reading this thread, I had a very similar issue, with the kernel throwing out the same error. Using the virtio .109 drivers fixed everything for me.
  20. It's up on YouTube now. Surprisingly good results! I may have to give this a go someday...
  21. Perfect!! Now how do I mark this thread as solved? "Edit the first post." Done, thanks again.