goinsnoopin

Everything posted by goinsnoopin

  1. My cache drive has btrfs errors. When I start the array and check the file system, it lists a bunch of errors. When I add the --repair option, it starts and then asks a y/n question that I am unable to answer since I ran this from the webGUI. Any suggestions on how to proceed (see the sketch after this list)? Diagnostics are attached. Thanks, Dan tower-diagnostics-20200412-2108.zip
  2. I was looking at this list and thought I was okay: https://www.elpamsoft.com/?p=Plex-Hardware-Transcoding Guess I need to do a little more research. Looking at pricing, a GT 710 runs somewhere around $30 used....but a GTX 1050 Ti can be had for $70 or so used on Reddit hardwareswap. Maybe I should just go for that and call it a day.
  3. I am currently using an Nvidia GTX 950 for my Windows 10 VM. I am thinking of installing the Nvidia plugin to use that GPU for transcoding. I only use Windows 10 for internet browsing, YouTube, Netflix and general office use (i.e. no gaming or video editing). Any recommendations for an inexpensive GPU to pass through to my Windows VM? I read through the recommended hardware posting and see people have mentioned the GT 710 and the GT 730. Would these GPUs be sufficient? Any others to consider? I would like to stay with Nvidia to keep drivers simple if I ever need to switch back to the GTX 950. The GT 710 comes in 1GB and 2GB versions. Is it worth spending the extra for a 2GB model? Thanks, Dan
  4. Is there a command that I can run over SSH on Unraid that is the equivalent of unplugging and reinserting a USB device (see the sketch after this list)? I am passing a USB Zigbee coordinator (CC2531) to my zigbee2mqtt docker.... everything works fine, however when the docker updates it does not start. When I am home this is no big deal, as I just unplug and replug the USB dongle and then start the docker container....however sometimes I am away, so I would love a command line way to do this. Thanks, Dan
  5. @limetech My issue is resolved. I had the serial adapter plugged into a USB 3 port. When I moved the device to a USB 2.0 port, everything started working as expected. Unraid assigned it to /dev/ttyUSB0 and all is well. Dan
  6. @limetech I updated to 6.7 stable and am trying to get my USB-to-serial adapter working. Here is a link to the CH340T adapter I am using: https://www.amazon.com/gp/product/B00NKAJGZM/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1 When I plug it in, I get the following in syslog (full diagnostics attached; I did unplug and replug it a couple of times):
     May 13 19:38:42 Tower kernel: usb 3-10: new full-speed USB device number 10 using xhci_hcd
     May 13 19:38:42 Tower kernel: usb 3-10: Device not responding to setup address.
     May 13 19:38:43 Tower kernel: usb 3-10: Device not responding to setup address.
     May 13 19:38:43 Tower kernel: usb 3-10: device not accepting address 10, error -71
     May 13 19:38:43 Tower kernel: usb 3-10: new full-speed USB device number 11 using xhci_hcd
     May 13 19:38:43 Tower kernel: usb 3-10: Device not responding to setup address.
     May 13 19:38:43 Tower kernel: usb 3-10: Device not responding to setup address.
     May 13 19:38:44 Tower kernel: usb 3-10: device not accepting address 11, error -71
     May 13 19:38:44 Tower kernel: usb usb3-port10: attempt power cycle
     May 13 19:38:44 Tower kernel: usb 3-10: new full-speed USB device number 12 using xhci_hcd
     May 13 19:38:44 Tower kernel: usblp 3-10:1.0: usblp0: USB Bidirectional printer dev 12 if 0 alt 0 proto 2 vid 0x1A86 pid 0x7584
     May 13 19:38:44 Tower kernel: usbcore: registered new interface driver usblp
     May 13 19:39:14 Tower kernel: usb 3-10: USB disconnect, device number 12
     May 13 19:39:14 Tower kernel: usblp0: removed
     May 13 19:39:23 Tower kernel: usb 3-10: new full-speed USB device number 13 using xhci_hcd
     May 13 19:39:40 Tower kernel: usb 3-10: device descriptor read/64, error -110
     May 13 19:39:40 Tower kernel: usb 3-10: Device not responding to setup address.
     May 13 19:39:40 Tower kernel: usb 3-10: Device not responding to setup address.
     May 13 19:39:40 Tower kernel: usb 3-10: device not accepting address 13, error -71
     May 13 19:40:03 Tower kernel: usb 3-10: new full-speed USB device number 15 using xhci_hcd
     May 13 19:40:19 Tower kernel: usb 3-10: device descriptor read/64, error -110
     May 13 19:40:34 Tower kernel: usb 3-10: device descriptor read/64, error -110
     May 13 19:40:35 Tower kernel: usb 3-10: new full-speed USB device number 16 using xhci_hcd
     May 13 19:40:50 Tower kernel: usb 3-10: device descriptor read/64, error -110
     tower-diagnostics-20190515-1746.zip
  7. Johnnie, I really appreciate all of your assistance. Disk3 rebuild #4 completed. I rebooted and everything started as expected. The old VMs are now gone. While the last rebuild was going on was the only time I was able to edit the old VMs and change the XML to the USB3 PCI card. After editing them I removed them, thinking that since the edits stuck...maybe removing them would stick as well. Once they were removed I created a test Linux VM and it started successfully. I then shut it down and recreated my original VM from an XML backup. The VM worked fine. On reboot only these two new VMs show up. Not sure I completely understand, but I am happy to be back to normal. I rebooted in normal mode...all was fine. Then updated back to 6.6.7 and all is fine. Johnnie - THANKS AGAIN!
  8. Tried safe mode and rolled back to 6.6.6. Same result: the old VMs showed up, autostarted and crashed the 2 disks. Could something have stuck in the RAM? Wondering if I should run memtest after disk3 rebuild #4 is finished. During the rebuild I enabled VMs, got the old listings, deleted the VMs, and added my KitchenPC from the backup XML and used it for hours, all while the rebuild was going on. Decided to disable VMs then re-enable to see if the old VMs came back, and got a new error...libvirt service failed to start. See diagnostics... libvirt.img in use cannot mount tower-diagnostics-20190507-2248.zip
  9. Or.....in looking at Update OS under Tools, should I restore/revert to Unraid 6.6.6 stable, see if it works as it used to, and then go back up to 6.6.7 stable?
  10. Agreed, it doesn't make sense. Doing my 3rd data rebuild of disk3...so I can't do anything for 5 hours or so. When the old VMs came up, I tried to edit them, i.e. eliminate autostart and change the passthrough to the USB card...but when I hit update it goes to "Updating" and stays there. Do you think something on the flash could be corrupted?
  11. Somehow the offending VM is coming back. See attached screenshots. I ran find /mnt/ -name libvirt.img and it returned what I expected: the one libvirt.img file. I then went to Settings > VM Manager, disabled VMs and deleted the libvirt.img. I went back to SSH and ran the find again, and as you can see the file is no longer found anywhere in /mnt, so the existing libvirt.img was successfully removed. I then rebooted Unraid, went to Settings > VM Manager and enabled VMs, and it immediately dropped disk3 and the other unassigned disk (I understand why from previous messages: the offending VM has the wrong PCI device passed through and is autostarting). Then I went to the VM tab and all those old VMs show up??? How can this be created if there is no libvirt.img??? Hope the attached shows that I have eliminated the multiple-libvirt.img issue.
  12. Johnnie, I truly have tried both of those. If I delete the VMs (definitions only, as I want to keep the disks)....they disappear from the Unraid VM webGUI, however on reboot they reappear and the offending one auto starts....passing the incorrect SATA card and then disabling the disk. I have used the PCI IDs to list the devices for passthrough. Is there an opposite to that command, to prevent this SATA controller from being passed to the VM while I figure this out? At this point I am wondering if there is some corruption on the flash drive from the original hard crash... contemplating pulling the USB flash and doing a clean install.
  13. Johnnie, how is the libvirt.img created? Is it getting VM information from the flash drive? If I delete libvirt.img and then turn VMs back on, it keeps auto-creating my 6 old VMs, and the Bobby_steam VM is one of them, with the wrong SATA card passthrough...and it's set to auto start. Any ideas on how to break this cycle (see the virsh sketch after this list)? I have rebuilt disk3 two times now. I even verified libvirt.img was gone via a telnet session and checked each disk. At one point I tried to edit the Bobby_steam VM to change the PCI address to the USB card....hit update and it changed to "Updating" and never completed.
  14. Quick step back: before the crash, libvirt.img was in a share that was allowed to be on any disk; it looked like this data was on disk3. Reading the forums, most people had libvirt.img on the cache drive. I then went to my cache drive and saw a libvirt folder, which contained a libvirt.img. I went to VM settings and changed my path to point to the libvirt.img on the cache drive. Once I did this, a bunch of old VMs like the Bobby_steam VM showed up. These VMs haven't been used by me in years. They may have been set to autostart, but I can't tell, as today I deleted the libvirt.img on the cache drive and then removed all VMs from the webGUI. So the only VM in my webGUI is called KitchenPC. This is the one that won't start. Do I have some corruption? Maybe I should make the VM via the form vs. copying the backup XML? I did review the template and it looked good; it passed the correct USB controller. Should I rebuild disk 3 first?
  15. Don't understand that??? I don't think I attempted to start any VM other than KitchenPC. At this point I want all VMs gone...I will create new ones. Is removing them from the webGUI enough? Are there any other fragments elsewhere? Disk3 became unassigned in the last reboot, so I assume I should assign it and rebuild the data, and not touch any VMs until it is rebuilt?
  16. I am not using the Bobby_steam VM. I removed all VM definitions in VM Manager and then created a new one called KitchenPC. However, you are right, both drives dropped??? That forced disk3 to be unassigned on reboot. I assume I should rebuild disk 3 and not touch any VM stuff until the rebuild is complete?
  17. Disk 3's emulated contents look fine. I deleted libvirt.img and tried to start my original Win10 VM, and I get an execution error: Operation failed: unable to find any master var store for loader: /usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd The unassigned disk sdg no longer shows up; I believe that was the drive that failed. Libvirt was not on the cache drive, it was in a share called system...looking at the emulated contents, this was on disk3. Any suggestions...should I rebuild disk3 to get to a normal state and then deal with the VMs? tower-diagnostics-20190506-1326.zip
  18. I think my plan is to verify emulated disk3 by starting the array, then delete the libvirt image and create the VMs from backup XML files....but I will double check the PCI assignments, as hardware changes have given them different addresses since the backup (see the lspci sketch after this list). Then, if all looks good, I will rebuild disk3.
  19. Johnnie....I am confused. I have 3 PCIe cards that I pass through: two Nvidia GPUs (one to each VM) and one USB3 PCIe card to one of the VMs. I have no SATA PCIe cards in my case....however my motherboard, an ASRock Z97 Extreme6, does have a 4-port ASMedia ASM1061 SATA controller. I haven't made changes to this in forever...back to my original post...I did restore an old libvirt, and the diagnostics were run with this old libvirt....could that be the issue?
  20. Johnnie....I am not passing any SATA controllers to VMs. The second disk that crashed at the same time is an unassigned disk that has an image file mounted to one of the VMs; it was being used for my son's video game storage, i.e. a D: drive. The Win10 VM's C: drive is on the SSD cache drive. Does this second thought impact your suggestion of rebuilding on the same disk? 😄
  21. Here are the current diagnostics. The disk 3 extended SMART report just completed successfully with no errors. Thanks guys for providing feedback! tower-diagnostics-20190506-0946.zip
  22. Unraid 6.6.7. Here is what I am experiencing:
     VMs kept going to a paused "resume" state. I attempted to shut them down by hitting resume and then quickly hitting stop. One of the two VMs shut down; when doing the same for the second VM, Unraid crashed: no webGUI, no SSH/telnet access. The server is headless, so I don't have a keyboard or monitor (will in the future), and I was unable to get diagnostics for this original crash. That said, I know my cache drive was over-utilized, as this happened once before (VMs going to resume) when a docker log filled the cache drive. I also know a couple of the array disks had little free space, and I know that can sometimes be an issue.
     Due to the hard crash, I power cycled Unraid. It came up and a parity check started. My two VMs that were running during the crash were no longer listed in my VM tab (my two other VMs that were not running at the time were the only two listed). A handful of dockers were running; the balance of them would not start. I decided to reboot Unraid via the webGUI to see if they would return. It came back exactly the same way: only 2 VMs and a handful of dockers. Also note that it says the parity check was canceled, so I probably should not have rebooted while the parity check was in progress. I deleted docker.img and recreated it with 10 or so of my key dockers from templates; that worked with no issues. Went to bed.
     This morning I read some forums and decided to run a btrfs balance on the cache drive. While it was running, I looked into the VM issue. I realized I had an older libvirt.img in another location and switched to this image. The VM tab then showed a bunch of older VMs from, let's say, a year or so ago, all stopped. I then went to my VM XML backups, copied the XML for my main Windows 10 VM, went to XML mode, pasted the backup, unchecked start on creation and created the new VM. I immediately got a disk 3 read error, followed by a notification that disk 3 was disabled. The btrfs balance was still running, so I canceled it, stopped the array, turned off array auto-start in settings, rebooted the server and started the array in maintenance mode.
     After rebooting in maintenance mode, disk 3 shows up, has a SMART report, and in looking at the SMART attributes I don't see any issues. I am currently running an extended SMART test on this disk and am waiting for results. I know this is a lot...any advice on how to proceed? tower-diagnostics-20190506-0648.zip
  23. I am having an issue with starting, stopping and resuming a Windows 10 VM. I keep getting the following in my libvirt log....any suggestions (see the sketch after this list)? Full diagnostics are also attached. My cache drive has only 18G free...could that be an issue? Eventually, after repeating this error for 529 seconds, the VM started. Not fact...but sometimes it seems like if I go to the Unraid webGUI from another PC or phone, I am able to control the VM.
     2019-02-02 15:26:21.112+0000: 1900: error : qemuDomainObjBeginJobInternal:6865 : Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainCreate)
     2019-02-02 15:26:51.112+0000: 1899: warning : qemuDomainObjBeginJobInternal:6843 : Cannot start job (query, none, none) for domain Win10_Boys; current job is (async nested, none, start) owned by (1898 remoteDispatchDomainCreate, 0 <null>, 1898 remoteDispatchDomainCreate (flags=0x0)) for (529s, 0s, 529s)
     2019-02-02 15:26:51.112+0000: 1899: error : qemuDomainObjBeginJobInternal:6865 : Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainCreate)
     Thanks, Dan tower-diagnostics-20190202-1032.zip
  24. Thanks jonathanm and johnnie.black...cache disk space was my issue. Made some room and am all set! Dan
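
Regarding post 1 (the btrfs --repair y/n prompt): a minimal sketch of running the file system check interactively over SSH instead of from the webGUI, so the prompts can be answered. The device path /dev/sdX1 is a placeholder rather than something taken from the diagnostics; substitute the actual cache partition, and make sure the pool is not mounted (array stopped or in maintenance mode) before repairing.

    # List btrfs devices to identify the cache partition (placeholder: /dev/sdX1)
    btrfs filesystem show
    # Read-only pass first, to see what it reports
    btrfs check /dev/sdX1
    # Repair pass; any y/n questions can now be answered at the terminal
    btrfs check --repair /dev/sdX1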
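
Regarding post 4 (simulating an unplug/replug of a USB device from the command line): one common approach is to unbind and rebind the device's port in sysfs. A minimal sketch, assuming the CC2531 shows up as port "1-4" (hypothetical; read the real bus-port ID from lsusb and the kernel log) and that the container is named zigbee2mqtt:

    # Find the dongle and note its bus-port ID (e.g. "1-4") in the kernel log
    lsusb
    dmesg | grep -i usb | tail -n 20
    # Unbind and rebind the port, which behaves much like pulling and reinserting the plug
    echo '1-4' > /sys/bus/usb/drivers/usb/unbind
    sleep 2
    echo '1-4' > /sys/bus/usb/drivers/usb/bind
    # Restart the container once the device has re-enumerated
    docker restart zigbee2mqtt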
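
Regarding post 13 (old VM definitions that keep reappearing and auto-starting): the definitions and their autostart flags live in libvirt, so they can also be inspected and removed with virsh while the libvirt service is running. A minimal sketch; Bobby_steam is the domain name taken from the posts, and the commented --nvram variant is only needed for OVMF/UEFI domains:

    # List every domain libvirt knows about, running or not
    virsh list --all
    # Stop the offending domain from starting automatically with the service
    virsh autostart --disable Bobby_steam
    # Remove the definition only; the vdisk files are left untouched
    virsh undefine Bobby_steam
    # For an OVMF/UEFI domain, remove its nvram file along with the definition:
    # virsh undefine --nvram Bobby_steam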
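
Regarding post 18 (double-checking PCI assignments before re-creating VMs from backup XML): the current addresses can be listed and compared against the hostdev entries in the saved XML. A minimal sketch; the grep patterns and the backup path are only examples, not paths from the posts:

    # Show current PCI addresses and vendor:device IDs for the cards being passed through
    lspci -nn | grep -Ei 'nvidia|usb'
    # Compare with the addresses recorded in the backup XML (path is hypothetical)
    grep -A 5 '<hostdev' /path/to/backups/kitchenpc.xml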
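
Regarding the "cannot acquire state change lock" errors on the Win10_Boys domain (post 23): since the follow-up post confirms the nearly full cache drive was the cause, checking free space is the first step; if a start/stop job is genuinely wedged, the domain can also be forced off and started again. A minimal sketch, assuming the standard /mnt/cache mount point:

    # How full is the cache pool holding the vdisks and the docker/libvirt images?
    df -h /mnt/cache
    # If a job is stuck holding the state change lock, force the domain off and retry
    virsh destroy Win10_Boys
    virsh start Win10_Boys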