johnsanc

Everything posted by johnsanc

  1. Any other suggestions for this? Is there a process for starting clean and restoring VMs / Dockers / etc?
  2. Same issue across all browsers and devices in safe mode. I did notice that Firefox gives a slightly more descriptive console error: Diagnostics also attached. tower-diagnostics-20190513-1936.zip
  3. After upgrading to 6.7.0 my VMs tab of the webui continuously loads and I cannot manage any of my VMs. The VMs actually run just fine in the background, it's just the webui that is inoperable. I had no issues with 6.6.7. I get an unhandled exception error in the console: I also see that the /sub/var call is permanently pending... but I think that is intended for websockets: I tried rebooting, stopping and starting the VM service, backing up and deleting my libvirt.img and nothing has made a difference with the webui. Any ideas? (apologies for the double post - this issue got buried in the 6.7.0 announcement thread, if this belongs elsewhere please move it or let me know the right area to post.)
  4. Just a quick update on my VMs issue with the webui... no luck. I even tried backing up and then wiping out my libvirt.img. I still get the same issue where the VMs page just looks like it loads forever. I've dug around in the forums but couldn't find any info about this particular problem. Is there a process outlined to back up my VM configurations and start completely fresh? I thought wiping out the libvirt.img would basically do that. I also noticed that occasionally the libvirt log would show errors from trying to find references to ISOs in directories I deleted well over a year ago. It would be nice to clean up all this old stuff.
  5. I don't mean to double post - but didn't want this to get buried on the other page and lost amongst the string of troubleshooting posts above - If there is a better place to post this let me know. Since the VMs actually work fine I fear it may be a defect in the VM Manager web code. Having an unusable interface to manage VMs is a deal breaker and I may need to downgrade.
  6. Nope, I only have 1 VM called "windows10pro" Screenshot of where the error shows for reference:
  7. Upgrade seems to have gone OK, however there is something wrong with the VMS tab. My auto-start VMs start up just fine, no errors in the system log or libvirt log as far as I can tell. The VMs tab just shows the loading animation and never actually loads. In the browser console log I do see this: "Uncaught SyntaxError: Invalid or unexpected token" I also see that the request for /sub/var is pending with a 101 status code. Any ideas?
  8. Just following up with this. The issue was in fact a networking issue. Under high network load the VirtIO Network Adapter would just become disabled with no way to re-enable unless the VM was restarted. Turns out I was using drivers that were too new. I reverted back to a stable driver from the 0.1.141 ISO and that seems to have resolved the issue.
  9. I've been having an issue for a long time now where my Windows 10 VM will appear to crash after a while. This seems to occur during periods of high network activity in the VM and/or disk reads from the array. Has anyone seen anything like this before and know what the root cause might be? I use remote desktop to access the VM, and when it "crashes" it appears as if it is still running but I cannot access it. At that point it will not shut down gracefully and I have to force stop and restart it. Also, when it crashes I can tell it's not just a network issue: I've had it crash while in the middle of zipping a file, and the temp file's modified date is the moment the VM crashed. This file will be locked for a few minutes afterwards.
  10. No worries, no critical data was lost. However, considering this involves data loss, I would consider it a pretty serious issue, even if it is an edge case. Especially since this is meant to be a feature (cache pool) that's explicitly built with a default configuration to prevent data loss (RAID1). It would be a little different if I was trying to build something totally custom and unsupported. The process isn't dummy-proof, and one would have to hunt for info buried in forums to not shoot themselves in the foot. I'll hop off my soapbox now
  11. Well, now I know. That's not readily apparent based on the way the cache UI is set up. Unlike the array UI, which shows a fixed number of slots, the cache UI does not. This difference in UI behavior is actually what made me think I was supposed to align the number of slots to the number of drives I was using. I suppose you can consider this a user experience enhancement suggestion to avoid confusion.
  12. Maybe I don't understand. Why wouldn't I change the number of slots if I am reducing the number of drives in the pool? It doesn't make sense at all to me why I would leave 3 slots open when I am only using 2 drives. Anyways, thanks for your help, hopefully some extra checks are eventually put in place to keep other people from making the same mistake I did.
  13. Well, that's not very intuitive... so you have to start the array with 2 filled slots and 1 blank slot? I didn't even think that was possible.
  14. Just for my own reference, how should I have done this? I needed to remove the disk in slot #1 (old drive) and move disks #2 and #3 into positions #1 and #2. Are you saying I would have needed to do the following, starting with my original state of a single cache drive?
      1. Shut down array
      2. Increase cache slots from 1 to 3
      3. Add new disks to positions 2 and 3
      4. Start array and let the pool balance
      5. Stop array
      6. Rearrange all disks to a different order so that the disk I need to remove is in position 3
      7. Start array and wait until any starting processes complete
      8. Stop array
      9. Remove disk 3 from the cache pool
      10. Change cache pool from 3 to 2 disks without making any other position switches
      11. Start array
      I might be missing something, but I cannot think of another way to do this one step at a time, since the disk I needed to remove was in position #1. Also, thank you so much for taking the time to try to reproduce this, log a bug report, and update the FAQ. Thankfully I didn't have any data on the cache drive that cannot be replaced.
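For reference, the UI sequence above maps roughly onto btrfs's own device operations, where the ordering matters for the same reason (device names below are hypothetical; Unraid normally drives this through the UI, so treat this as a sketch of the order of operations, not a recommendation):

```shell
# Sketch only -- these require root and a real btrfs pool; devices are hypothetical.
# 1. Grow the pool onto the new SSDs and convert to RAID1:
#      btrfs device add /dev/sdX1 /dev/sdY1 /mnt/cache
#      btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache
# 2. Remove the old drive LAST, via delete, so btrfs migrates its chunks
#    off the device before releasing it:
#      btrfs device delete /dev/sdZ1 /mnt/cache
# Key point: "device delete" relocates data first; simply unassigning a
# device from the pool does not.
```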
  15. Diagnostics attached. Hopefully it contains some clues as to what went wrong here (or more likely what I did wrong...) tower-diagnostics-20181128-1921.zip
  16. How could I have removed the disk incorrectly? Here is what I did after all 3 drives were balanced and the cache config showed "No balance found on /mnt/cache" (which is very confusing, but I digress):
      1. Shut down array
      2. Removed all cache disks (old 320GB HDD + 2x new 1TB SSDs)
      3. Changed cache drives from 3 to 2
      4. Reassigned the 2 SSDs -- at this point Cache 2 warned that data would be erased on that disk... wasn't really sure why
      5. Proceeded to start array
      6. Rebalancing automatically took place
      7. Cache drives had no data
      I never physically removed any disks, only unassigned and reassigned. Another weird thing I didn't understand: when all 3 drives were connected, it said the total cache space was 1.3-ish TB, so basically the 320GB + the 1TB (mirrored, I assume?). If this was really RAID1, shouldn't the max space be the smallest of the drives?
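On the capacity question above: btrfs RAID1 mirrors at the chunk level across any two devices, not in whole-disk pairs, so usable space is roughly half the pool's total (as long as the largest device is no bigger than the rest combined), not the size of the smallest drive. A rough sketch of that math for the pool described above (320GB + 2x1TB, sizes approximate):

```shell
total=$((320 + 1000 + 1000))   # GB, approximate
largest=1000
rest=$((total - largest))
# btrfs RAID1 needs two copies of every chunk on two different devices:
if [ "$largest" -le "$rest" ]; then
  usable=$((total / 2))        # every chunk can find a partner device
else
  usable=$rest                 # the largest device can only mirror what the others hold
fi
echo "${usable}GB usable"      # prints "1160GB usable"
```

That ~1.16TB is in the same ballpark as the "1.3-ish TB" the UI reported (the UI may be showing raw rather than usable space).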
  17. No, I did not disconnect the disk. Yeah, this is not documented well at all. This should really be a power-user feature until the Unraid UI is updated to warn against stuff like this.
  18. Yes, it was RAID1. None of the non-destructive repair options worked. After creating the temp "x" directory, both "mount -o recovery,ro /dev/sdm1 /x" and "mount -o degraded,recovery,ro /dev/sdm1 /x" return an error saying "mount: /x: mount point does not exist." Trying "btrfs restore -v /dev/sdm1 /mnt/disk13/Restore" I get:
      warning, device 4 is missing
      warning, device 2 is missing
      bytenr mismatch, want=1718833954816, have=0
      ERROR: cannot read chunk root
      Could not open root, trying backup super
      warning, device 4 is missing
      warning, device 2 is missing
      bytenr mismatch, want=1718833954816, have=0
      ERROR: cannot read chunk root
      Could not open root, trying backup super
      warning, device 4 is missing
      warning, device 2 is missing
      bytenr mismatch, want=1718833954816, have=0
      ERROR: cannot read chunk root
      Could not open root, trying backup super
      I know very little about btrfs, and this is my first time attempting a cache pool. I have to say the implementation is far from intuitive for a newbie, and it looks like I may have gotten myself into trouble by adding and removing cache disks in the process. I tried to completely remove my old cache drives as well and go back to a single cache drive, but the Unraid UI warns me that it will erase all data if I do that... so now I'm kind of stuck and don't know what to do. If these were supposedly mirrored in RAID1, then shouldn't the data still all be there? Especially since this was the disk I originally used for my single-drive cache. Can't I just undo the RAID1 status of that disk somehow and mount it? As of right now this is the only disk with any of my old cache data on it. My other 2 SSDs, which now occupy my cache pool, are empty (which was a surprise, because they were balanced before I removed the old cache drive... I would have expected them to have the data from this old cache drive). Anyways, any other ideas, or do I have to basically wipe out all the data and chalk this up to user error?... which is fair, but at no point was I warned that the data would be corrupted somehow.
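One note on the "mount point does not exist" error quoted above: that message means the directory passed to mount was not present at that exact path when mount ran, so creating it immediately beforehand should at least get past that particular failure (a sketch; /tmp/x and /dev/sdm1 stand in for the real paths, and the mount itself needs root):

```shell
mkdir -p /tmp/x                        # the mount point must exist before mounting
[ -d /tmp/x ] && echo "mount point ready"
# Then, as root, retry the degraded read-only mount against the real device:
#   mount -o degraded,recovery,ro /dev/sdm1 /tmp/x
```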
  19. I recently added 2 SSDs to my cache pool and I want to remove the old spinner disk. There is still a little bit of data on that cache drive I would like to restore. I added the drives to the cache pool, balanced, then shut down the array and removed the old cache disk. Now I am trying to mount that old cache disk in Unassigned Devices and I get this. Any idea how I can get this disk to be mountable so I can retrieve the data from it?
      Nov 28 12:23:13 Tower unassigned.devices: Adding disk '/dev/sdm1'...
      Nov 28 12:23:13 Tower unassigned.devices: Mount drive command: /sbin/mount -t btrfs -o auto,async,noatime,nodiratime '/dev/sdm1' '/mnt/disks/oldcache'
      Nov 28 12:23:13 Tower kernel: BTRFS info (device sdm1): disk space caching is enabled
      Nov 28 12:23:13 Tower kernel: BTRFS error (device sdm1): devid 2 uuid e98a22d0-63c9-484b-b6ea-90ca68386ff1 is missing
      Nov 28 12:23:13 Tower kernel: BTRFS error (device sdm1): failed to read the system array: -2
      Nov 28 12:23:13 Tower kernel: BTRFS error (device sdm1): open_ctree failed
      Nov 28 12:23:13 Tower unassigned.devices: Mount of '/dev/sdm1' failed. Error message: mount: /mnt/disks/oldcache: wrong fs type, bad option, bad superblock on /dev/sdm1, missing codepage or helper program, or other error.
      Nov 28 12:23:13 Tower unassigned.devices: Partition 'ST3320620AS_6QF28RW3' could not be mounted...
  20. Here is my XML. Can anyone see any issues with it that might be causing the issues that tayshun12 and I are having?
<domain type='kvm' id='1'>
  <name>windows10pro</name>
  <uuid>d2f62486-79d7-1d78-145a-70e63187f893</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>6</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='4'/>
    <vcpupin vcpu='1' cpuset='5'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='6'/>
    <vcpupin vcpu='4' cpuset='3'/>
    <vcpupin vcpu='5' cpuset='7'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-q35-2.11'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='3' threads='2'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/disks/vmdisk/iso/Windows_10_Pro-20170304.iso'/>
      <backingStore/>
      <target dev='hda' bus='sata'/>
      <readonly/>
      <boot order='2'/>
      <alias name='sata0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/disks/vmdisk/iso/virtio-win-0.1.149.iso'/>
      <backingStore/>
      <target dev='hdb' bus='sata'/>
      <readonly/>
      <alias name='sata0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/disks/vmdisk/vm/windows10pro/vdisk1.img'/>
      <backingStore/>
      <target dev='hdc' bus='sata'/>
      <boot order='1'/>
      <alias name='sata0-0-2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <controller type='usb' index='0' model='nec-xhci' ports='15'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'>
      <alias name='pcie.0'/>
    </controller>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x8'/>
      <alias name='pci.1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x9'/>
      <alias name='pci.2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0xa'/>
      <alias name='pci.3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0xb'/>
      <alias name='pci.4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0xc'/>
      <alias name='pci.5'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0xd'/>
      <alias name='pci.6'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:76:56:bd'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/0'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/0'>
      <source path='/dev/pts/0'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-1-windows10pro/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x045e'/>
        <product id='0x0719'/>
        <address bus='3' device='3'/>
      </source>
      <alias name='hostdev2'/>
      <address type='usb' bus='0' port='1'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x046d'/>
        <product id='0xc52b'/>
        <address bus='3' device='2'/>
      </source>
      <alias name='hostdev3'/>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>
  21. Upgraded to 6.5.3 and my Windows 10 VM would not start. Tried rolling back to 6.5.2 and I get the same issue. After changing the display to VNC instead of GPU passthrough, I can now see a green-screen Windows error with an INACCESSIBLE BOOT DEVICE message. Tried switching the vDisk bus from VirtIO to SATA and the VM booted without issue. Any idea why the VirtIO vDisk bus stopped working and how I can fix it? I tried installing the latest VirtIO drivers from within the VM, but it still won't boot properly when VirtIO is selected for the vDisk bus. Also, I checked my Windows update history and no new updates were installed during the timeframe of upgrading from 6.5.2 to 6.5.3.
  22. Upgraded to 6.5.3 and my Windows 10 VM would start, but there was no display. Tried rolling back to 6.5.2 and I get the same issue. Changing the display to VNC instead of GPU passthrough, I now get a green-screen Windows error with an INACCESSIBLE BOOT DEVICE message. Tried switching the disk from VirtIO to SATA and the VM booted both with and without GPU passthrough. Anyone know how to make this work with the VirtIO bus? (Moving this to the VM section for visibility.)
      2018-07-06 16:43:39.896+0000: starting up libvirt version: 4.0.0, qemu version: 2.11.1, hostname: Tower
      LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ QEMU_AUDIO_DRV=none /usr/local/sbin/qemu -name guest=windows10pro,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-windows10pro/master-key.aes -machine pc-q35-2.7,accel=kvm,usb=off,dump-guest-core=off,mem-merge=off -cpu host -m 16384 -realtime mlock=off -smp 6,sockets=1,cores=3,threads=2 -uuid d2f62486-79d7-1d78-145a-70e63187f893 -display none -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-1-windows10pro/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on -device pcie-root-port,port=0x8,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x1 -device pcie-root-port,port=0x9,chassis=2,id=pci.2,bus=pcie.0,addr=0x1.0x1 -device pcie-root-port,port=0xa,chassis=3,id=pci.3,bus=pcie.0,addr=0x1.0x2 -device pcie,vhostfd=25 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:76:56:bd,bus=pci.1,addr=0x0 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-1-windows10pro/org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -device vfio-pci,host=01:00.0,id=hostdev0,x-vga=on,bus=pci.4,addr=0x0 -device vfio-pci,host=01:00.1,id=hostdev1,bus=pci.5,addr=0x0 -device usb-host,hostbus=3,hostaddr=3,id=hostdev2,bus=usb.0,port=1 -device usb-host,hostbus=3,hostaddr=2,id=hostdev3,bus=usb.0,port=2 -device virtio-balloon-pci,id=balloon0,bus=pci.6,addr=0x0 -msg timestamp=on
      2018-07-06 16:43:39.897+0000: Domain id=1 is tainted: high-privileges
      2018-07-06 16:43:39.897+0000: Domain id=1 is tainted: host-cpu
      2018-07-06T16:43:39.945022Z qemu-system-x86_64: -chardev pty,id=charserial0: char device redirected to /dev/pts/0 (label charserial0)
  23. I'm no expert, but it just seems really weird that GUID partition table info would be injected into the middle of a file... I'm almost certain it happened sometime after the initial creation, because I have a synced version elsewhere that is correct.
  24. I experienced a really weird issue where one of my files appears to be corrupted. No change in modify time or anything, but exactly 512 bytes of the file have been replaced with a bunch of zeroes and what looks like part of a GUID Partition Table. Has anyone ever experienced something like this? Any idea what could cause this in an unRAID array? Note this disk reports no issues in its SMART stats.
      45 46 49 20 50 41 52 54 00 00 01 00 5C 00 00 00
      B2 D1 CB C3 00 00 00 00 AF 2A 81 A3 03 00 00 00
      01 00 00 00 00 00 00 00 22 00 00 00 00 00 00 00
      8E 2A 81 A3 03 00 00 00 E8 AA 45 BE F7 99 9B 47
      B6 71 85 D2 BF 8A BF 78 8F 2A 81 A3 03 00 00 00
      80 00 00 00 80 00 00 00 27 05 B2 F9 00 00 00 00
      https://en.wikipedia.org/wiki/GUID_Partition_Table
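The first 8 bytes of that dump, 45 46 49 20 50 41 52 54, are ASCII for "EFI PART", the GPT header signature, which is why the data looks like partition-table content. A quick way to check whether that signature appears anywhere inside a file is grep with byte offsets (demonstrated here on a throwaway demo file; substitute the corrupted file's real path):

```shell
f=/tmp/gpt-scan-demo.bin
printf 'AAAA' > "$f"          # filler bytes standing in for normal file content
printf 'EFI PART' >> "$f"     # embed the GPT signature for the demo
grep -abo "EFI PART" "$f"     # -a treat binary as text, -b byte offset, -o match only
# prints: 4:EFI PART
```

The reported offset tells you where in the file the stray GPT header landed, which can hint at whether it aligns to a 512-byte sector boundary.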