HELP! Issue after upgrading to 6.9 and rolling back to 6.8.3



Hi all,

 

This morning I updated to 6.9. At first it seemed that all was good, but my hassio VM was not responding and CPU was constantly at 100%!

 

So I decided to go back to the working 6.8.3, but this erased all my Docker containers and my VMs. (!)

 

I have a backup of appdata, but I'm wondering what the best way is to restore my Docker containers.

 

And also, how can I restore a VM from an image?

 

TY in advance!

Link to comment

These were the notes from Beta22, where multiple pools were introduced. Is your cache missing?

 

 

Multiple Pools

This feature permits you to define up to 35 named pools, of up to 30 storage devices per pool.  The current "cache pool" is now simply a pool named "cache".  Pools are created and managed via the Main page.

 

Note: When you upgrade a server which has a cache pool defined, a backup of config/disk.cfg will be saved to config/disk.cfg.bak, and then cache device assignment settings are moved out of disk.cfg and into a new file, config/pools/cache.cfg.  If later you revert back to a pre-6.9 Unraid OS release you will lose your cache device assignments and you will have to manually re-assign devices to cache.  As long as you reassign the correct devices, data should remain intact.
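The file handling described in that note can be sketched like this. This is a sandbox illustration only, using the paths from the note; the actual contents of cache.cfg are written by Unraid itself, so the grep is just a stand-in for the real migration:

```shell
# Sketch of what the 6.9 upgrade does on the flash (paths from the release note).
# Runs against a demo dir, NOT the real flash; 'cacheId' is an invented stand-in value.
demo=/tmp/flashdemo && mkdir -p "$demo/config" && cd "$demo"
echo 'cacheId="demo-device"' > config/disk.cfg          # stand-in cache assignment
cp config/disk.cfg config/disk.cfg.bak                  # backup kept for later downgrades
mkdir -p config/pools
grep '^cache' config/disk.cfg > config/pools/cache.cfg  # assignments move out of disk.cfg
```

The disk.cfg.bak copy is what makes the downgrade path below possible.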

 

When you create a user share, or edit an existing user share, you can specify which pool should be associated with that share.  The assigned pool functions identically to current cache pool operation.

 

Something to be aware of: when a directory listing is obtained for a share, the unRAID array disk volumes and all pools which contain that share are merged in this order:

 

 

Link to comment
14 minutes ago, SimonF said:

This was notes from Beta22 where multiple pools where created. Is your cache missing?

 

 

 

 

 

Yes, I noticed now that the cache drive was not mounted. What to do now? Let the appdata restore finish, or delete the restored appdata and mount the cache drive?

Link to comment

This has just been added to the 6.9 announcement by Limetech.

 

Reverting back to 6.8.3

If you have a cache disk/pool it will be necessary to either:

restore the flash backup you created before upgrading (you did create a backup, right?), or

on your flash, copy 'config/disk.cfg.bak' to 'config/disk.cfg' (restore 6.8.3 cache assignment), or

manually re-assign storage devices assigned to cache back to cache

 

This is because to support multiple pools, code detects the upgrade to 6.9.0 and moves the 'cache' device settings out of 'config/disk.cfg' and into 'config/pools/cache.cfg'.  If you downgrade back to 6.8.3 these settings need to be restored.
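For the second option above, the copy on the flash would look something like this. On a live server the flash is mounted at /boot; a sandbox path is used here so the sketch can be tried safely:

```shell
# Restore the pre-6.9 cache assignments before booting 6.8.3.
FLASH=/tmp/boot-demo                                      # replace with /boot on the server
mkdir -p "$FLASH/config"
echo 'old 6.8.3 settings' > "$FLASH/config/disk.cfg.bak"  # written by the 6.9 upgrade
cp "$FLASH/config/disk.cfg.bak" "$FLASH/config/disk.cfg"  # cache assignment restored
```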

 

 

And feedback from JorgeB: just stop the array and re-assign all cache pool devices — there must not be an "all data on this device will be deleted" warning for any cache device — then start the array.

Link to comment
On 3/2/2021 at 9:26 AM, Jokerigno said:

This morning I updated to 6.9. At first it seemed that all was good, but my hassio VM was not responding and CPU was constantly at 100%!

 

 

Did anyone have the same problem? I am keeping my Hassio VM off for now and will work on it during the weekend to try to fix it. If anyone has the same problem, it would be a pleasure to work together and find a solution.

 

All the best,

 

Lucas

 

Link to comment
9 hours ago, Jokerigno said:

Can it be somehow related to the XML file of the VM?

In my case, yes and no.

 

I found two problems.

 

A - Qcow2 disk full (no relation to Unraid 6.9.0)

 

B - Some craziness with XML files. 

 

Problem A: Disk full! How I fixed it.

1 - Back up the qcow2 disk

 

cd /mnt/user/domains/Hassos/

cp hassos_ova-5.8.qcow2 hassos_ova-5.8.bkp

 

2 - Then I increased it by 1 GB using the command

 

qemu-img resize hassos_ova-5.8.qcow2 +1G
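To confirm the resize took effect before booting, `qemu-img info` reports the new virtual size. The path is the one from the steps above; the check is guarded in case qemu-img is not installed where you run it:

```shell
# Verify the image's new virtual size after the resize.
IMG=/mnt/user/domains/Hassos/hassos_ova-5.8.qcow2
if command -v qemu-img >/dev/null 2>&1 && [ -f "$IMG" ]; then
  qemu-img info "$IMG"            # "virtual size" should show the extra 1G
else
  echo "qemu-img or image not available here"
fi
```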

 

Restarted the VM. 

 

It did not start. It stayed on a black screen with a dot. I realized it was just some VNC problem there.

I pressed "2" and "Enter" and it booted.

I found out what was filling my qcow2 disk (it was my media folder). I connected via Samba and deleted all the old videos there. I also stopped recording videos from my Ring doorbell inside the qcow2 drive (a stupid idea I had) LOL. 
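Inside the VM (or on any mounted copy of the data), `du` is a quick way to see which folder is eating the space. The target path here is only an example; point it at the suspect share or folder:

```shell
# Rank the largest entries under a path, biggest first.
TARGET=${TARGET:-/tmp}                                  # example path, adjust to yours
du -sh "$TARGET"/* 2>/dev/null | sort -rh | head -n 10  # top 10 space consumers
```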

 

 

BOOM!!! Everything works fine now.

 

Problem B:

In some of my attempts, I had some problems with the EFI boot.

Something related to the XML files (which I was not patient enough to dig into and figure out). I had a copy of the original XML from before I started making changes. But Lucas = No Patience. LOL 

 

So, to fix it I just deleted the VM (without deleting the vdisk!!!) and made a new one following the "HA OS on Unraid" instructions (link: https://community.home-assistant.io/t/ha-os-on-unraid/59959 ), using my newly enlarged qcow2 disk as a SATA disk. 

 

I hope that helps you to have Unraid 6.9.0 installed with your Hassio VM working!

 

All the best,

 

Lucas

 

 

 

 

 

 

Edited by DrLucasMendes
Link to comment
40 minutes ago, Jokerigno said:

I disabled all VMs and Docker. I still see a CPU peak related to this process.

 

Anyone can support me?

Cattura.PNG

Hi @Jokerigno

 

I am checking your logs here, looking for anything that might be in some "infinite loop" and kicking your CPU high.

 

It seems it is still your Hassio kicking it high, at 79.4% of your CPU:

 

 

___________________

"root     15472 79.4 10.9 2558664 2228892 ?     SLl  19:17  27:13 /usr/bin/qemu-system-x86_64 -name guest=Hass.io,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-Hass.io/master-key.aes -blockdev {"driver":"file","filename":"/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"} -blockdev {"driver":"file","filename":"/etc/libvirt/qemu/nvram/c72b2bf3-b7fe-ef94-e9fa-acdefb509fec_VARS-pure-efi.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"} -machine pc-q35-2.11,accel=kvm,usb=off,dump-guest-core=off,mem-merge=off,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format -cpu host,migratable=on,host-cache-info=on,l3-cache=off -m 2048 -overcommit mem-lock=off -smp 1,sockets=1,dies=1,cores=1,threads=1 -uuid c72b2bf3-b7fe-ef94-e9fa-acdefb509fec -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=32,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on -device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 -device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 -device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 -device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 -device ich9-usb-ehci1,id=usb,bus=pcie.0,addr=0x7.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pcie.0,multifunction=on,addr=0x7 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pcie.0,addr=0x7.0x1 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pcie.0,addr=0x7.0x2 -device 
virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x0 -blockdev {"driver":"file","filename":"/mnt/cache/domains/hassos_ova-4.6.qcow2","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"qcow2","file":"libvirt-1-storage","backing":null} -device ide-hd,bus=ide.2,drive=libvirt-1-format,id=sata0-0-2,bootindex=1,write-cache=on -fsdev local,security_model=passthrough,id=fsdev-fs0,path=/mnt/user/cctv/ -device virtio-9p-pci,id=fs0,fsdev=fsdev-fs0,mount_tag=cctv,bus=pci.1,addr=0x0 -netdev tap,fd=34,id=hostnet0,vhost=on,vhostfd=35 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:1b:e4:0c,bus=pci.4,addr=0x0 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,fd=36,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0,bus=usb.0,port=2 -vnc 0.0.0.0:0,websocket=5700 -k it -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pcie.0,addr=0x1 -device virtio-balloon-pci,id=balloon0,bus=pci.3,addr=0x0 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
root      7548 22.9  0.1 1038296 34560 ?       Ssl  19:10   9:22 /usr/local/sbin/shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0
root      9121  9.3  0.1 1334300 22620 ?       Sl   19:11   3:47 /usr/sbin/libvirtd -d -l -f /etc/libvirt/libvirtd.conf -p /var/run/libvirt/libvirtd.pid
root      8152  4.7  0.4 2285284 86492 ?       Sl   19:10   1:55 /usr/bin/dockerd -p /var/run/dockerd.pid --log-opt max-size=10m --log-opt max-file=1 --log-level=er

 

___________________

 

 

 

Do you have Nextcloud installed? dynamix.cache.dirs is using some CPU (79%), scanning /mnt/disk4/nextcloud:

 

___________

root      7807  0.1  0.0   5208  3484 ?        S    19:10   0:03 /bin/bash /usr/local/emhttp/plugins/dynamix.cache.dirs/scripts/cache_dirs -e appdata -e backup -e domains -e isos -e system -l off
root      5244  0.0  0.0   4184  2808 ?        S    19:51   0:00  \_ /bin/bash /usr/local/emhttp/plugins/dynamix.cache.dirs/scripts/cache_dirs -e appdata -e backup -e domains -e isos -e system -l off
root      5425  0.0  0.0   2648   820 ?        S    19:51   0:00      \_ /bin/timeout 30 find /mnt/disk4/nextcloud -noleaf
root      5428 79.0  0.1  29008 27212 ?        R    19:51   0:03          \_ find /mnt/disk4/nextcloud -noleaf

_______________________
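The listings above can be reproduced from the Unraid shell; sorting `ps aux` by its %CPU column makes the top offenders obvious (this uses only portable `ps`/`sort` options):

```shell
# List the five most CPU-hungry processes (column 3 of `ps aux` is %CPU),
# the same view as the listings above, just pre-sorted.
ps aux | awk 'NR>1' | sort -k3 -rn | head -n 5
```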

 

 

 

 

 

 

 

 

Link to comment
1 minute ago, Jokerigno said:

Hey, thank you.

 

I removed Dynamix Cache and stopped Nextcloud. If the VM is off, the peaks are gone and CPU usage seems regular.

I tried your XML, but the dedicated CPU is still at 100% all the time :(

 

 

 

Great!!! Now we at least know where the problems are. ;-)

 

Try reinstalling Dynamix Cache and see if everything keeps running with low CPU usage.

 

When your Hassio is on (kicking the CPU to the sky), is it working?

If yes, let's check its logs. Also, be sure to have it updated. ;-)

 

All the best,

Lucas

 

Link to comment
