RyanServer711

Members
  • Posts: 23
Everything posted by RyanServer711

  1. I had to put in the actual IP address and not the local hostname. For example: http://192.168.x.x:8080, not qbittorrent:8080. Hope that helps.
  2. Figured it out: we need to add the path /var/run/docker.sock for both host and container, set as read-only (found in the YouTube comments of the IBRACORP video). Just edit the Homarr container and click "Add another Path, Port, Variable, Label or Device". Here is a pic of the inputs.
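For anyone running Homarr outside the Unraid template UI, the same socket mapping can be sketched as a docker run command. The image name, tag, and web port below are assumptions — match them to your own template:

```shell
# Hypothetical docker run equivalent of the template edit described above.
# The ":ro" suffix makes the socket read-only, as in the post.
SOCK=/var/run/docker.sock

docker run -d --name homarr \
  -p 7575:7575 \
  -v "$SOCK:$SOCK:ro" \
  ghcr.io/ajnart/homarr:latest || true  # image/tag/port are assumptions
```

Mounting the socket read-only lets Homarr list containers; start/stop/restart actions may need a writable mount.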
  3. I'm in the same boat; I just saw the IBRACORP video and tried installing. I see a possible solution on this page, under Homarr v0.8.0 (20th of July 2022). I assume we need to put the command listed below into the container — I'm not super savvy on the deeper details of Docker. -v /var/run/docker.sock:/var/run/docker.sock "Docker integration provides a simple way to start, stop, restart and delete containers. To get started, simply mount your docker socket by adding -v /var/run/docker.sock:/var/run/docker.sock to your Homarr container!"
  4. Crashed it. Is there a recommended mount point? I just picked a folder in my backup share on my array: /mnt/disk1/backup/HATEST. I'm going to try again.
  5. Wait?!? Is the VM running out of memory? Isn't the default cap lower than 32G?
  6. This is what I have with the correct list command:

     Device       Start      End  Sectors  Size Type
     /dev/nbd0p1   2048    67583    65536   32M EFI System
     /dev/nbd0p2  67584   116735    49152   24M Linux filesystem
     /dev/nbd0p3 116736   641023   524288  256M Linux filesystem
     /dev/nbd0p4 641024   690175    49152   24M Linux filesystem
     /dev/nbd0p5 690176  1214463   524288  256M Linux filesystem
     /dev/nbd0p6 1214464 1230847    16384    8M Linux filesystem
     /dev/nbd0p7 1230848 1427455   196608   96M Linux filesystem
     /dev/nbd0p8 1427456 67108830 65681375 31.3G Linux filesystem

     Also, I did the dirty bit removal on this copy of the qcow2.
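With the image attached as /dev/nbd0 via qemu-nbd, the large final partition (p8) is the HAOS data volume, and Home Assistant's own backups normally sit under a supervisor/backup directory on it. A read-only mount keeps the rescue attempt from damaging things further. The mount point and the exact backup path here are assumptions:

```shell
# Assumes /dev/nbd0p8 exists (qcow2 already attached via qemu-nbd) and root.
# Mount read-only so a possibly damaged filesystem isn't modified further.
MNT=/mnt/rescue
mkdir -p "$MNT" 2>/dev/null || true
mount -o ro /dev/nbd0p8 "$MNT" 2>/dev/null || true
ls "$MNT/supervisor/backup" 2>/dev/null || true  # HA backup .tar files, if present
# Copy anything found out to the array before detaching, e.g.:
# cp "$MNT"/supervisor/backup/*.tar /mnt/disk1/backup/
```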
  7. I assumed they were lost in the corrupted qcow2. Either way, I'm learning a very good lesson about backing up my stuff.
  8. So loading a fresh install would have my backups in there? I thought they would be inside the corrupted qcow2? I've done regular backups within Home Assistant, but I didn't think they left the VM unless specifically exported.
  9. I tried fsck /dev/nbd0p1 and received this message:

     fsck /dev/nbd0p1
     fsck from util-linux 2.37.4
     fsck.fat 4.2 (2021-01-31)
     Dirty bit is set. Fs was not properly unmounted and some data may be corrupt.
     1) Remove dirty bit
     2) No action
     [12?q]?

     Should I go ahead and try to remove the dirty bit? Thanks for the help btw 👍
  10. I did the first 2 commands and received this on the 3rd:

     fdisk -1 /dev/nbd0
     fdisk: invalid option -- '1'

     I tried the first 2 again and received this message. I assume this means the first 2 commands worked:

     qemu-nbd: Failed to blk_new_open '/mnt/disks/00000000000000001358/domains/home_assistant/haos_ova-7.5.qcow2': Failed to get "write" lock
     Is another process using the image [/mnt/disks/00000000000000001358/domains/home_assistant/haos_ova-7.5.qcow2]?
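Two separate things are going on in the post above: the fdisk option is a lowercase letter l (list), not the digit 1, and the "write lock" error usually means something else still has the image open — typically the VM itself, or an earlier qemu-nbd attach that was never disconnected. With the VM stopped, the whole sequence (run as root) looks roughly like this; the image path is the one from the post:

```shell
IMG='/mnt/disks/00000000000000001358/domains/home_assistant/haos_ova-7.5.qcow2'

modprobe nbd max_part=16            || true  # load the network block device module
qemu-nbd --connect=/dev/nbd0 "$IMG" || true  # attach the qcow2 (fails while the image is in use)
fdisk -l /dev/nbd0                  || true  # lowercase L: list the partitions
# ...fsck or mount individual partitions here...
qemu-nbd --disconnect=/dev/nbd0     || true  # always detach, or the write lock stays held
```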
  11. I had some issues with my server. Long story short, I now have a seemingly corrupted qcow2 image file. Every time I try to use it, it either doesn't load or crashes my system. I tried to copy it to my Windows machine, but the copy fails partway through. Is there any way to open it up or search through the files? I'm desperately trying to get at the backups hidden inside and really don't want to start from scratch again. Any help would be greatly appreciated.
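Before carving into a suspect image, qemu-img (part of the QEMU tooling that ships with Unraid's VM support) can report and sometimes repair qcow2 metadata damage; working on a copy is safer than touching the original. The output path below is an assumption:

```shell
IMG='/mnt/user/domains/home_assistant/haos_ova-7.5.qcow2'  # path as used elsewhere in the thread

qemu-img check "$IMG"          || true  # read-only corruption report
qemu-img check -r leaks "$IMG" || true  # repair leaked clusters only (the safer repair mode)
# Converting into a fresh file can sometimes salvage a partially damaged image:
qemu-img convert -O qcow2 "$IMG" /mnt/disk1/backup/haos-recovered.qcow2 || true
```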
  12. Post-realization: after I updated the BIOS, "Fast Boot" was enabled. That was the reason I had trouble restarting.
  13. So I had a close call with a bad NVMe drive. Wishing I had the VM backup plugin. Either way, I tried plugging back in all the VM files. I have a libvirt image and a domains folder with a homeassistant folder in it. Is that all I need? I plugged them back in and the HA VM shows up, but I get this screen when it starts, and these logs. I feel like I'm close but don't want to mess anything up further. Any help would be greatly appreciated.

     -device pcie-root-port,port=18,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \
     -device pcie-root-port,port=19,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \
     -device pcie-root-port,port=20,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 \
     -device ich9-usb-ehci1,id=usb,bus=pcie.0,addr=0x7.0x7 \
     -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pcie.0,multifunction=on,addr=0x7 \
     -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pcie.0,addr=0x7.0x1 \
     -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pcie.0,addr=0x7.0x2 \
     -device virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x0 \
     -blockdev '{"driver":"file","filename":"/mnt/user/domains/home_assistant/haos_ova-7.5.qcow2","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
     -blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"qcow2","file":"libvirt-1-storage","backing":null}' \
     -device virtio-blk-pci,bus=pci.3,addr=0x0,drive=libvirt-1-format,id=virtio-disk2,bootindex=1,write-cache=on \
     -netdev tap,fd=36,id=hostnet0 \
     -device virtio-net,netdev=hostnet0,id=net0,mac=52:54:00:80:f8:1f,bus=pci.1,addr=0x0 \
     -chardev pty,id=charserial0 \
     -device isa-serial,chardev=charserial0,id=serial0,index=0 \
     -chardev socket,id=charchannel0,fd=34,server=on,wait=off \
     -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
     -device usb-tablet,id=input0,bus=usb.0,port=1 \
     -audiodev '{"id":"audio1","driver":"none"}' \
     -vnc 0.0.0.0:0,websocket=5700,audiodev=audio1 \
     -k en-us \
     -device qxl-vga,id=video0,max_outputs=1,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,bus=pcie.0,addr=0x1 \
     -device virtio-balloon-pci,id=balloon0,bus=pci.4,addr=0x0 \
     -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
     -msg timestamp=on
     char device redirected to /dev/pts/8 (label charserial0)
     qxl_send_events: spice-server bug: guest stopped, ignoring
     2022-08-04T18:52:42.524401Z qemu-system-x86_64: terminating on signal 15 from pid 3921 (/usr/sbin/libvirtd)
     2022-08-04 18:52:42.724+0000: shutting down, reason=destroyed
     2022-08-04 18:53:44.670+0000: starting up libvirt version: 8.2.0, qemu version: 6.2.0, kernel: 5.15.46-Unraid, hostname: Anton
     LC_ALL=C \
     PATH=/bin:/sbin:/usr/bin:/usr/sbin \
     HOME='/var/lib/libvirt/qemu/domain-1-Home Assistant' \
     XDG_DATA_HOME='/var/lib/libvirt/qemu/domain-1-Home Assistant/.local/share' \
     XDG_CACHE_HOME='/var/lib/libvirt/qemu/domain-1-Home Assistant/.cache' \
     XDG_CONFIG_HOME='/var/lib/libvirt/qemu/domain-1-Home Assistant/.config' \
     /usr/local/sbin/qemu \
     -name 'guest=Home Assistant,debug-threads=on' \
     -S \
     -object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-1-Home Assistant/master-key.aes"}' \
     -blockdev '{"driver":"file","filename":"/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}' \
     -blockdev '{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}' \
     -blockdev '{"driver":"file","filename":"/etc/libvirt/qemu/nvram/7306f1ca-1770-ff99-1d17-f0c0add58e65_VARS-pure-efi.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}' \
     -blockdev '{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}' \
     -machine pc-q35-5.1,usb=off,dump-guest-core=off,mem-merge=off,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format,memory-backend=pc.ram \
     -accel kvm \
     -cpu host,migratable=on,host-cache-info=on,l3-cache=off \
     -m 2048 \
     -object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":2147483648}' \
     -overcommit mem-lock=off \
     -smp 2,sockets=1,dies=1,cores=1,threads=2 \
     -uuid 7306f1ca-1770-ff99-1d17-f0c0add58e65 \
     -no-user-config \
     -nodefaults \
     -chardev socket,id=charmonitor,fd=35,server=on,wait=off \
     -mon chardev=charmonitor,id=monitor,mode=control \
     -rtc base=utc,driftfix=slew \
     -global kvm-pit.lost_tick_policy=delay \
     -no-hpet \
     -no-shutdown \
     -boot strict=on \
     -device pcie-root-port,port=16,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 \
     -device pcie-root-port,port=17,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 \
     -device pcie-root-port,port=18,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \
     -device pcie-root-port,port=19,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \
     -device pcie-root-port,port=20,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 \
     -device ich9-usb-ehci1,id=usb,bus=pcie.0,addr=0x7.0x7 \
     -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pcie.0,multifunction=on,addr=0x7 \
     -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pcie.0,addr=0x7.0x1 \
     -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pcie.0,addr=0x7.0x2 \
     -device virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x0 \
     -blockdev '{"driver":"file","filename":"/mnt/user/domains/home_assistant/haos_ova-7.5.qcow2","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
     -blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"qcow2","file":"libvirt-1-storage","backing":null}' \
     -device virtio-blk-pci,bus=pci.3,addr=0x0,drive=libvirt-1-format,id=virtio-disk2,bootindex=1,write-cache=on \
     -netdev tap,fd=36,id=hostnet0 \
     -device virtio-net,netdev=hostnet0,id=net0,mac=52:54:00:80:f8:1f,bus=pci.1,addr=0x0 \
     -chardev pty,id=charserial0 \
     -device isa-serial,chardev=charserial0,id=serial0,index=0 \
     -chardev socket,id=charchannel0,fd=34,server=on,wait=off \
     -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
     -device usb-tablet,id=input0,bus=usb.0,port=1 \
     -audiodev '{"id":"audio1","driver":"none"}' \
     -vnc 0.0.0.0:0,websocket=5700,audiodev=audio1 \
     -k en-us \
     -device qxl-vga,id=video0,max_outputs=1,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,bus=pcie.0,addr=0x1 \
     -device virtio-balloon-pci,id=balloon0,bus=pci.4,addr=0x0 \
     -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
     -msg timestamp=on
     char device redirected to /dev/pts/8 (label charserial0)
     qxl_send_events: spice-server bug: guest stopped, ignoring
  14. Well, I got all my Dockers up and running. It hasn't crashed for some time now; I think I'm out of the woods. I'll attribute my problems to a bad NVMe drive. My issue now is restoring my Home Assistant VM. I'm pretty sure I have all the VM files backed up; I'm just having trouble knowing where to plug everything in as far as the file structure goes. I'll make another post for that. Thanks for the help.
  15. Yes, but after a couple power cycles it seems to be working.
  16. Removed the NVMe drive and everything seemed to be working fine, no crashes. Popped in the new drive and it was still running fine. Formatted the drive and set it as the new cache. Unfortunately, my Home Assistant VM and all my Dockers are not there, since they are on the old cache. Tried loading a backup with the Appdata Backup/Restore plugin, which started a new round of instant boot crashes. I feel like I'm close to losing the hundreds of hours spent setting up all the Dockers and Home Assistant. Attached are my syslogs. syslog2.txt
  17. I'll try that and grab the logs. Thanks
  18. Here is my current syslog file. Thanks syslog.txt
  19. Joined the Unraid community and made my first server. I've been up and running for about 5 months with minimal issues. Monday I noticed all my services were down while the machine was still running. Complete lock-up. Haven't been able to get it working since. Every time I log in, it crashes within a minute or two. I've been able to boot in safe mode, and it works enough to snoop around. Disabling Docker seems to help, but I still crash if I try to compute the drive size of my NVMe drive. Starting Docker results in a crash in about a minute. I did notice in my logs that it crashed after this message:

     Aug 2 18:32:27 Anton kernel: nvme nvme0: I/O 323 QID 4 timeout, aborting
     Aug 2 18:32:57 Anton kernel: nvme nvme0: I/O 323 QID 4 timeout, reset controller
     Aug 2 18:33:28 Anton kernel: nvme nvme0: I/O 24 QID 0 timeout, reset controller

     I have received that before crashing more than once. Bought another NVMe drive in case that's the issue. I updated my motherboard BIOS and am running Unraid 6.10.3. Any insight on my issue would be greatly appreciated. System Info.txt anton-diagnostics-20220802-1452.zip