willum0330

Members
  • Posts: 12
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed


willum0330's Achievements

Noob (1/14)

4 Reputation

  1. Did you ever solve this? Having a similar issue, and it doesn't seem to be drive related... Thanks!
  2. blaine07 - YOU GOT IT! Working for me as well - finally! Everything seems to be up and running, and the VMs came online without any complaint! Thanks for taking the time to look into this further.
  3. I have isolated the problem to something in the hardware virtualization that Unraid uses to support VMs. I started by resetting the BIOS on my T610, created a fresh USB install, and booted 6.10.0 RC3 - it booted successfully. I then copied over all my configurations, plugins, etc. and booted again - again, it booted successfully. I went through to see if everything was working and noticed that my VMs hadn't come online. At the top of the web GUI, it indicated that the hardware did not support KVM. I figured at this point that resetting the BIOS must have disabled the virtualization options. I went into the BIOS, enabled the virtualization technology on the processor, and rebooted. The Unraid boot splash came up and it continued to boot normally until it hit the "bzroot checksum failed" error. This is the ONLY change I made, so it must be related to enabling the virtualization technology in the BIOS (see the KVM and checksum sketches after this list). Now what???
  4. Same issue. I tried to update from 6.9.2 to 6.10.0 RC3 using the web GUI and at first got a boot failure. I took the USB out, created a new USB with the Unraid USB Creator tool, and then copied over my customizations. Now I have the bzroot checksum error (see the checksum-verification sketch after this list). I have a T610 that works fine on 6.9.2… but I was really wanting to try ARM emulation, which is the reason for updating. I have 64 GB of RAM, and memtest passes fine.
  5. Agreed - and the server also gives no sign of issues... except that the only place the VMs exist is on the cache pool. Same thing with Docker - all the affected services are on the cache pool...
  6. I am running Unraid 6.6.0-rc4 with a cache pool of two Kingston 128 GB SSDs in RAID 0. I am getting "No space left on device" emails from dynamix, and when I go and check on the server, one VM will be shut down, and within another 10 minutes my second VM will be shut down as well. The system essentially becomes more and more crippled until I reboot the server. Once rebooted, it will stay running for about a day and then have the same problem. The cache drive is used regularly, as it is caching a couple of live camera video streams... but the mover runs every 4 hours, and the cache drive always has about 40-60 GB available. Right now, it is reporting 58.4 GB available (see the btrfs chunk-usage sketch after this list). The last time this happened, I had to completely wipe out my btrfs cache pool, reformat, and then create a new cache pool. That worked for several months... but I am not going to do that every few months. Hopefully someone here can point me in the right direction. Thanks! tower-diagnostics-20181204-1416.zip
  7. I am not sure what has changed recently. I did upgrade to Unraid 6.5.3 from 6.5.1, but reverting to the old installation shows the same issues. I have two VMs, one Ubuntu and one Windows 10 (both have been running for years...), and now, after about 1.25 days of running fine, both VMs will crash. When I attempt to restart them (doing one at a time has the same effect), they start booting, and once they have run for about 30 seconds, they crash again. The only way to resolve this state is to reboot the Unraid server. Last entry from the VM log: "qemu: qemu_thread_create: Resource temporarily unavailable". Libvirt log entry when this occurs: "qemuMonitorIO:721 : internal error: End of file from qemu monitor" (see the thread-limit sketch after this list). If you take a look through the logs, you will see that I was having some issues with the cache drive filling. It was a two-part problem - I have some security camera storage and didn't realize how much space it would take, and I also had the share it uses set to the wrong cache usage settings. This has been resolved and I am normally at 60% or less usage of the cache pool. Mover logging is on because I wanted to see if that was potentially a trigger. It is not, as far as I can tell. Looking for suggestions on where to look next. tower-diagnostics-20180820-1146.zip FCPsyslog_tail.txt
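
A note on the KVM banner mentioned in item 3: the web GUI's "hardware does not support KVM" message generally means the kvm module could not find the CPU virtualization extensions, so the first thing to confirm after a BIOS change is whether the vmx/svm flag is visible and whether /dev/kvm exists. The following is a minimal sketch (plain Python, run over SSH or at the console); it assumes a standard Linux /proc layout and is not tied to any particular Unraid release.

    import re
    from pathlib import Path

    def cpu_has_virtualization() -> bool:
        """True if /proc/cpuinfo advertises Intel VT-x (vmx) or AMD-V (svm)."""
        cpuinfo = Path("/proc/cpuinfo").read_text()
        return re.search(r"\b(vmx|svm)\b", cpuinfo) is not None

    def kvm_device_present() -> bool:
        """True if the kvm module has created /dev/kvm, i.e. KVM is actually usable."""
        return Path("/dev/kvm").exists()

    if __name__ == "__main__":
        print("CPU virtualization flag (vmx/svm):", cpu_has_virtualization())
        print("/dev/kvm present:", kvm_device_present())

If the vmx flag shows up from a working boot (for example back on 6.9.2), the BIOS change took effect and the remaining question is the checksum failure addressed next.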
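
For the "bzroot checksum failed" reports in items 3 and 4, it can help to verify the boot files on the flash drive directly, to separate a genuinely corrupted copy from a problem that only appears at boot time. This is a sketch only: it assumes the flash is mounted at /boot and that each boot image ships with a matching .sha256 file (the case on recent releases; adjust the names if yours differ).

    import hashlib
    from pathlib import Path

    # Assumed Unraid flash mount point; adjust if the stick is mounted elsewhere.
    FLASH = Path("/boot")

    def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
        """Stream the file so large images (bzroot is hundreds of MB) fit in memory."""
        digest = hashlib.sha256()
        with path.open("rb") as fh:
            for chunk in iter(lambda: fh.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify(name: str) -> None:
        image = FLASH / name
        reference = FLASH / f"{name}.sha256"   # assumed to ship alongside the image
        if not image.exists() or not reference.exists():
            print(f"{name}: missing image or .sha256 file, cannot verify")
            return
        expected = reference.read_text().split()[0].strip().lower()
        actual = sha256_of(image)
        status = "OK" if actual == expected else "MISMATCH"
        print(f"{name}: {status}")

    for boot_file in ("bzimage", "bzroot", "bzroot-gui", "bzmodules", "bzfirmware"):
        verify(boot_file)

If a file reports MISMATCH, re-copying it from the release zip onto the flash drive is the usual fix; if everything reports OK yet the boot-time check still fails, the USB port or the stick itself becomes the prime suspect.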
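
On the "No space left on device" issue in item 6: btrfs allocates space in separate data and metadata chunks, so writes can fail with ENOSPC even while tens of gigabytes show as free, typically when the metadata chunks are full and there is no unallocated space left to grow them. The sketch below shells out to `btrfs filesystem df` and flags chunk types that are nearly full; /mnt/cache is an assumed mount point for the cache pool, and the 90 % threshold is arbitrary.

    import re
    import subprocess

    # Assumed mount point of the Unraid cache pool.
    POOL = "/mnt/cache"

    UNITS = {"B": 1, "KiB": 1024, "MiB": 1024**2, "GiB": 1024**3, "TiB": 1024**4}

    def to_bytes(value: str, unit: str) -> float:
        return float(value) * UNITS[unit]

    def check_pool(mount: str = POOL) -> None:
        out = subprocess.run(
            ["btrfs", "filesystem", "df", mount],
            capture_output=True, text=True, check=True,
        ).stdout
        for line in out.splitlines():
            m = re.match(r"(\w+), (\w+): total=([\d.]+)(\w+), used=([\d.]+)(\w+)", line.strip())
            if not m:
                continue
            kind, profile, t_val, t_unit, u_val, u_unit = m.groups()
            total = to_bytes(t_val, t_unit)
            used = to_bytes(u_val, u_unit)
            pct = 100 * used / total if total else 0
            flag = "  <-- nearly full" if pct > 90 and kind != "GlobalReserve" else ""
            print(f"{kind:14s} {profile:8s} {pct:5.1f}% of allocated chunks used{flag}")

    if __name__ == "__main__":
        check_pool()

If Metadata sits near 100 % while Data has headroom, a filtered balance (something like `btrfs balance start -dusage=75 /mnt/cache`) usually reclaims partially used data chunks and avoids the wipe-and-reformat route described in the post.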
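
The "qemu: qemu_thread_create: Resource temporarily unavailable" entry in item 7 is QEMU reporting EAGAIN from pthread_create, which points at a host-side limit (thread/process count or memory) rather than anything inside the guests. Below is a small sketch to snapshot the relevant limits right after the VMs crash; which limit, if any, is actually being exhausted is not something the posts confirm.

    import resource
    from pathlib import Path

    def read_int(path: str) -> int:
        return int(Path(path).read_text().strip())

    def current_thread_count() -> int:
        """Total scheduling entities (threads) is the denominator of the 4th field in /proc/loadavg."""
        running_total = Path("/proc/loadavg").read_text().split()[3]
        return int(running_total.split("/")[1])

    if __name__ == "__main__":
        soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
        print("kernel.threads-max :", read_int("/proc/sys/kernel/threads-max"))
        print("kernel.pid_max     :", read_int("/proc/sys/kernel/pid_max"))
        print("vm.max_map_count   :", read_int("/proc/sys/vm/max_map_count"))
        print("RLIMIT_NPROC       :", soft, "(soft) /", hard, "(hard)")
        print("threads in use     :", current_thread_count())

Comparing "threads in use" against threads-max and RLIMIT_NPROC at the moment of a crash should show whether a count limit is the culprit; low free memory can produce the same EAGAIN, so the memory figures in the attached diagnostics are worth checking as well.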