6.12 Stable update stuck in boot loop

Unraid is getting stuck in a boot loop after upgrading from 6.11.5 to 6.12. On the monitor connected to the server, I can see that it reaches the boot menu and gets as far as

Loading /bzroot...

and then just reboots. I have managed to revert to 6.11.5 manually and restart on the same flash drive. I am using a 16GB SanDisk micro USB flash drive, which has worked for almost four years. I also tried another flash drive but ended up with the same result.


The Update Assistant doesn't find any issues; I know that's not definitive. I am unsure what is causing the problem, hence this post.

(Screenshot attached: Screenshot 2023-06-15 195344.png)

3 hours ago, JorgeB said:

Try booting with a new flash drive using a stock Unraid install, no key needed, to rule out any config issues.

I will try that, but just to be clear - I did try booting into safe mode from the GUI options - Unraid OS Safe Mode (no plugins, no GUI). That also did not work. 


Yes, but using a flash drive with a stock install will rule out any config settings. If it does the same with that one, it likely means there's some incompatibility between the new kernel and your hardware; if it boots, it's something in your config, which you can recreate.


I haven't done the above step yet; I have the USB drive ready but am waiting for some stuff to finish on the server before I shut down. I had another question:


Ever since the upgrade issue and manual revert to 6.11.5, I am seeing this error in the GUI:

Warning: file_put_contents(): Only -1 of 154 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/include/DefaultPageLayout.php on line 715

What is this warning about - anything I can do to fix this?


Now a docker container is refusing to start because of a similar "no space left on device" error. Not sure what is going on. All drives, including the flash device and the cache drive (appdata), have enough space on them.

docker: Error response from daemon: failed to start shim: symlink /var/lib/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f619bbfa000746382d32a164bac78f53035dc52c88d74ae2936d0b58e91a2149 /var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f619bbfa000746382d32a164bac78f53035dc52c88d74ae2936d0b58e91a2149/work: no space left on device: unknown.


I think this is somehow related to copying the config folder back after reinstalling 6.11.5, but I am not sure what is going on or how to fix it. All help is appreciated.
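For context, from what I've read, "no space left on device" with free blocks still showing can also mean a full tmpfs or exhausted inodes on one of the small filesystems, rather than the data drives. A rough sketch of the checks (paths here are just examples; on the server itself I'd also include /run and /var/lib/docker):

```shell
# Compare block usage vs. inode usage: either one hitting 100% produces
# "no space left on device". Paths are illustrative; add /run and
# /var/lib/docker when running this on the Unraid box itself.
df -h / /var/log
df -i / /var/log
```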


I have the same problem. I did try booting from a "stock" Unraid stick and got the same issue: it gets to bzroot, then reboots. I reverted to 6.11.5. Since I can't get it through the boot sequence with 6.12, there are no logs. What else can I send you to help diagnose?


BTW, the only issue I had was a disk that died. I replaced and rebuilt it prior to trying the OS upgrade.

3 hours ago, JorgeB said:

This usually means something is full, post output of:

df -h


or the diags.

df -h output below:

Filesystem      Size  Used Avail Use% Mounted on
rootfs           47G  2.1G   45G   5% /
tmpfs            32M   32M     0 100% /run
/dev/sda1        15G  979M   14G   7% /boot
overlay          47G  2.1G   45G   5% /lib/firmware
overlay          47G  2.1G   45G   5% /lib/modules
devtmpfs        8.0M     0  8.0M   0% /dev
tmpfs            47G     0   47G   0% /dev/shm
cgroup_root     8.0M     0  8.0M   0% /sys/fs/cgroup
tmpfs           128M  8.3M  120M   7% /var/log
tmpfs           1.0M     0  1.0M   0% /mnt/disks
tmpfs           1.0M     0  1.0M   0% /mnt/remotes
tmpfs           1.0M     0  1.0M   0% /mnt/addons
tmpfs           1.0M     0  1.0M   0% /mnt/rootshare
/dev/md1        4.6T  3.8T  838G  83% /mnt/disk1
/dev/md2        3.7T  2.6T  1.2T  69% /mnt/disk2
/dev/md3        3.7T  2.6T  1.2T  70% /mnt/disk3
/dev/md4        4.6T  3.5T  1.2T  76% /mnt/disk4
/dev/md5        2.8T  1.9T  900G  68% /mnt/disk5
/dev/md6        3.7T  2.2T  1.5T  61% /mnt/disk6
/dev/md7        5.5T  3.6T  2.0T  65% /mnt/disk7
/dev/md8        2.8T  1.2T  1.7T  42% /mnt/disk8
/dev/md9        3.7T  1.9T  1.9T  51% /mnt/disk9
/dev/md10       7.3T  5.1T  2.3T  69% /mnt/disk10
/dev/md11       2.8T  1.4T  1.4T  51% /mnt/disk11
/dev/md12       7.3T  5.1T  2.3T  69% /mnt/disk12
/dev/md13       4.6T  2.7T  1.9T  60% /mnt/disk13
/dev/md14       7.3T  5.1T  2.3T  69% /mnt/disk14
/dev/md15       7.3T  5.4T  2.0T  74% /mnt/disk15
/dev/md16       7.3T  5.2T  2.2T  71% /mnt/disk16
/dev/md17       7.3T  4.8T  2.5T  66% /mnt/disk17
/dev/md18       7.3T  5.2T  2.2T  71% /mnt/disk18
/dev/md19       9.1T  5.8T  3.4T  64% /mnt/disk19
/dev/md20       7.3T  4.4T  3.0T  60% /mnt/disk20
/dev/sdu1       1.9T  947G  961G  50% /mnt/cache
shfs            110T   73T   37T  67% /mnt/user0
shfs            110T   73T   37T  67% /mnt/user
/dev/loop2      100G   50G   50G  51% /var/lib/docker
/dev/loop3      1.0G  6.6M  903M   1% /etc/libvirt


Diags attached as well.
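Side note: the 32M tmpfs mounted at /run is the only filesystem at 100% in the output above, which would fit both the PHP write warning and the docker symlink failure. If it helps, something like this should show what's filling it (a sketch, not verified on this box; needs appropriate permissions):

```shell
# List the largest entries under /run, the tmpfs that df shows as full.
du -ah /run 2>/dev/null | sort -rh | head -n 20
```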



Thank you @JorgeB - what should I do to investigate/fix this?


The problem seems to have gone away, at least for now. CA Backup ran last night per the cron schedule and stopped and started all my containers, which I suspect is what did it. Plex was also restarted.


Is there anything more I need to do now, or should I just watch to see if it happens again?


I compared abhi.ko's df -h output with mine, which has the same reboot after bzroot. I can't see any common thread that would help solve the original problem, since my tmpfs isn't at 100%. Other than minor differences, all the mounts line up closely, so I can't find a common anomaly. Of course, I can't download my diagnostics for some reason to compare both. Oh well, I tried...

On 6/16/2023 at 3:16 AM, JorgeB said:

Try booting with a new flash drive using a stock Unraid install, no key needed, to rule out any config issues.


So coming back to this: I just tried with a stock 6.12.1 version of Unraid on a different USB drive, and it had the exact same result. It got to

Loading bzroot...

and then rebooted.


So does that mean something within my hardware is not compatible with the new version of Unraid? Any next steps to identify what exactly is causing the issue?  
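One thing I plan to rule out is corrupt bz* files on the flash drive. If the release's *.sha256 files are present next to the images (they are in recent releases, as far as I can tell), they can be checked against the images with something like this (a sketch; the mount point is just an example):

```shell
# Verify the boot images against their published checksums, if the
# *.sha256 files are present next to them. Mount point is illustrative.
FLASH=/boot   # adjust to wherever the flash drive is mounted
for f in bzimage bzroot bzroot-gui bzmodules bzfirmware; do
  if [ -f "$FLASH/$f.sha256" ]; then
    (cd "$FLASH" && sha256sum -c "$f.sha256")
  else
    echo "no checksum file for $f, skipping"
  fi
done
```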

11 hours ago, trurl said:

Are you using the USB Creator? Have you tried Manual Install Method?


The USB Creator didn't always work for me; it was hit or miss. It got stuck midway most of the time, but completed a few times.


So I have been using the manual install method, following the same instructions. The flash drive is bootable (I think), since it gets to the screen with the boot options; my limited understanding is that it fails when it tries to load bzroot. Happy to try something else.
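For reference, these are the sanity checks I run after the manual copy, before pulling the drive (a rough sketch; the mount point is just an example):

```shell
# Sanity-check a manually prepared Unraid flash drive: the key files the
# boot loader needs should all be present. Mount point is illustrative.
FLASH=/mnt/flash
for p in syslinux/syslinux.cfg bzimage bzroot; do
  if [ -e "$FLASH/$p" ]; then
    echo "found: $p"
  else
    echo "MISSING: $p"
  fi
done
```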

