stoutpanda

Members
  • Posts: 19
  • Joined
  • Last visited
stoutpanda's Achievements

Noob (1/14)

Reputation: 0
  1. Hello all. I think I have messed up a VM's disk image, but I'm having trouble understanding why or how. I had a raw vdisk image for a VM stored on a non-array disk mounted at /mnt/vmstorage. Using the shell, I used the move command (mv) to move the file to /mnt/user/domains/vmstorage/XubuntuHomeVM/vdisk.img. After the move completed, I edited the VM and pointed it to the new location. Now the VM will not boot, so I tried to see whether I could access the files on the image or mount it, and no partitions are visible. This was an Xubuntu VM, but I don't remember the exact details of its file system.

         root@Sekhmet:/mnt/disks/vmstorage# fdisk -l vdisk1.img
         Disk vdisk1.img: 18.81 GiB, 20174798848 bytes, 39403904 sectors
         Units: sectors of 1 * 512 = 512 bytes
         Sector size (logical/physical): 512 bytes / 512 bytes
         I/O size (minimum/optimal): 512 bytes / 512 bytes

         root@Sekhmet:/mnt/disks/vmstorage# losetup -f --show vdisk1.img
         /dev/loop4

     No partitions are found on the disk when I set it up with losetup either. I can rebuild the VM, and the only important data is backed up. I just want to understand what I did wrong, or what could have caused this.
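     For what it's worth, a raw image can also hold a filesystem directly, with no partition table at all, in which case fdisk has nothing to list. A quick way I know of to tell the two cases apart (just a sketch, assuming the /dev/loop4 device from above is still attached; /mnt/imgtest is only an example mount point):

         # What actually sits at the start of the image?
         file -s /dev/loop4      # e.g. "Linux rev 1.0 ext4 filesystem data" vs. "DOS/MBR boot sector"
         blkid /dev/loop4        # prints TYPE="ext4" (or similar) if a filesystem starts at sector 0

         # If a filesystem is reported, it can be mounted directly; no partition offset is needed
         mkdir -p /mnt/imgtest
         mount -o ro /dev/loop4 /mnt/imgtest

         # If the image really is partitioned, re-attach it with partition scanning enabled
         losetup -d /dev/loop4
         losetup -f --show --partscan vdisk1.img   # exposes /dev/loopNp1, /dev/loopNp2, ...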
  2. Uptime: 8 days, 1 hour, 5 minutes, with no further issues. I'm feeling very hopeful that the crashes I was having have been resolved by the newer kernel. To anyone with EPYC seeing similar errors: Linux kernel 4.18.14 and Unraid 6.6.2 appear to have resolved everything for me.
  3. Wanted to report that on 6.6.2 the IPMI/KVM Java console is working fine for my H11SSL-NC w/ ASPEED AST2500 BMC.
  4. Updated to 6.6.2 hoping that the newer kernel may alleviate some of this!
  5. No events other than powering on / powering off, fortunately/unfortunately.
  6. Another hang-up today. This time I was able to access the web page (though anything related to Docker, plugins, or the Apps page wasn't loading) and get in via ssh, so I was able to get diagnostics this time. I was unable to stop dockers via the stop or kill commands. Tried a shutdown via the web page and it hung; then tried a powerdown command from the console and it stuck again at ccp 0000:06:00.2: disabled. sekhmet-diagnostics-20181012-1524.zip
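     In case it's useful to anyone else hitting this: /var/log is a RAM disk on Unraid, so I copy the syslog to the flash drive before power cycling, and if the shell still responds the diagnostics script can be run from the console as well (a rough sketch; the output file name is just an example, and I believe the diagnostics zip lands on the flash drive):

         # /var/log lives in RAM, so copy the syslog somewhere that survives a hard reset
         cp /var/log/syslog /boot/syslog-$(date +%Y%m%d).txt

         # If the shell still responds, the diagnostics script can also be run from the console
         diagnostics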
  7. While reviewing the syslogs I did notice the following, which happened right before the rcu_sched errors started this time, but I'm having trouble determining what program may have caused it. BUG: unable to handle kernel NULL pointer dereference at 00000000000000300
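     For anyone digging through a similar trace: the oops dump that follows a "BUG: unable to handle kernel NULL pointer dereference" line normally includes a "Comm:" field naming the task that was on the CPU, so grepping a little context out of the syslog can show which process was running at the time (a sketch, assuming the default /var/log/syslog path):

         # Show the oops plus the dump that follows it; the "Comm:" line names the running task
         grep -A 30 "BUG: unable to handle kernel NULL pointer dereference" /var/log/syslog
         grep "Comm:" /var/log/syslog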
  8. Well, it just happened again after 4 days 16 hours of uptime following the memtest. This time I wasn't able to access my VMs once it happened; they dropped my VNC/RDP sessions and then the whole server became unresponsive except from the console itself again. This time it was even more difficult to work with, as I couldn't even get it to list the status of the dockers without it just sitting there waiting for the command to run. Could not shut down or destroy VMs from the console (device or resource busy). Again unable to run diagnostics before power cycling, but I was able to copy the syslog. Ran diagnostics after boot again and am attaching those. sekhmet-diagnostics-20181011-1515.zip syslog-2018-10-11.zip
  9. Well, I ran the memtest with no issues or errors through this morning. I'm going to bring the server back online into Unraid now and see how long until it happens again. I am hopeful that it may be a bug similar to the one in the above thread for Ryzen and that future kernel updates may fix it, unless anyone else has ideas.
  10. It might be overkill for your needs, but you can run pfSense in a VM and have it run a DNS server as well as many other services.
  11. Started the memtest over with all cores and ran it last night. So far no errors. I've found some similar errors for Ryzen CPUs on various Ubuntu kernels, but haven't found much for EPYC. https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1690085
  12. Here was the usage. I'm recreating it as a 20GB file now, but just to show it before I change anything.

         Filesystem      Size  Used Avail Use% Mounted on
         /dev/loop2       60G  3.9G   55G   7% /var/lib/docker

         root@Sekhmet:~# docker ps -s
         CONTAINER ID  IMAGE                             COMMAND                  CREATED      STATUS                   PORTS                   NAMES             SIZE
         571f9dd9722e  linuxserver/unifi                 "/init"                  2 days ago   Up 25 seconds                                    unifi             101MB (virtual 559MB)
         0a5ad4b8f382  mlebjerg/steamcachebundle:latest  "/scripts/bootstrap.…"   3 days ago   Up 41 seconds                                    SteamCacheBundle  3.65MB (virtual 27MB)
         390ebd87726a  linuxserver/mariadb               "/init"                  5 days ago   Up 28 seconds            0.0.0.0:3306->3306/tcp  mariadb           340kB (virtual 362MB)
         a62d83fdcae4  linuxserver/duplicati             "/init"                  5 days ago   Up 30 seconds            0.0.0.0:8200->8200/tcp  duplicati         301kB (virtual 596MB)
         00ca105186bf  pihole/pihole:latest              "/s6-init"               7 days ago   Up 48 seconds (healthy)                          pihole            10.5MB (virtual 356MB)
         8d2543550e7c  binhex/arch-deluge                "/usr/bin/tini -- /b…"   13 days ago  Up 35 seconds                                    binhex-deluge     1.15MB (virtual 1.04GB)

     Edit: Docker image file resized to 20GB (containers deleted and recreated from templates).

         root@Sekhmet:/mnt/user/system/docker# ls -lh
         total 20G
         -rw-rw-rw- 1 nobody users 20G Oct 5 14:24 docker.img

         /dev/loop2       20G  3.4G   15G  19% /var/lib/docker
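     For reference, I didn't shrink the file in place; the resize was basically delete-and-recreate (a rough outline of the steps, assuming the default /mnt/user/system/docker path shown above and that the containers are re-added from their saved templates afterwards):

         # Stop the Docker service (Settings > Docker > Enable Docker: No, or from the shell)
         /etc/rc.d/rc.docker stop

         # Remove the oversized image
         rm /mnt/user/system/docker/docker.img

         # Set the new 20G size in Settings > Docker and re-enable the service;
         # Unraid recreates docker.img at that size on start.

         # Re-install the containers from their templates (Apps > Previous Apps in CA).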
  13. Yes, you are correct. That was the issue, and it was also before I found the CA plugin, figured out the /mnt/user shares, and had a better understanding of how things in Unraid were organized. I actually ran into the FAQ by Squid that you have linked in your signature. I mentioned that I got rid of the image just because I wanted to let you know that that image in particular shouldn't be causing any further trouble. I can reboot the server and will gladly provide any info regarding the Docker configurations for further review, and I greatly appreciate your time and help.
  14. This was due to an earlier issue with a Deluge docker that wasn't configured correctly and was writing data into the docker image. I have since removed that docker image/container and went with the binhex-deluge one suggested in a thread on the forums here, but never went back and resized the docker image. Edit: I am willing to shrink that down, of course, if needed.