I broke it - OCI runtime create failed: container_linux.go:380



So I failed at swapping over my cache SSD

 

I set cache to Yes and ran the mover, but took a backup of appdata and system anyway

 

What I failed to do was set the shares back to Prefer and let the mover move everything onto the new cache SSD automatically, which would have retained the permissions

 

Long story short, permissions are borked and I'm not sure how to fix it

 

I've tried restoring a backup, but appdata and system are on the array so they'll continue to have borked permissions

 

Tried deleting appdata so it'd be recreated but still can't get containers to start

 

Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/bin/sh": stat /bin/sh: no such file or directory: unknown

 

ganymede-diagnostics-20220801-1759.zip


Not entirely sure how to read df results when docker image is a folder, but 132G used seems excessive. Did you make it a folder because you had been filling docker.img? There may be good reasons for making it a folder, but doing it just so you can allow it to grow as large as possible isn't really a good reason. The usual reason for filling docker.img is an application writing to a path that isn't mapped.
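For what it's worth, a quick way to see what is actually eating that 132G is to ask du for the largest first-level entries inside the docker folder. This is only a sketch: the path below is an assumption (the thread later mentions /mnt/user/system/docker), so adjust it to wherever your docker folder actually lives.

```shell
# Assumed path -- adjust to your docker folder's real location.
DOCKER_DIR="/mnt/user/system/docker"

# Largest first-level subdirectories first; a container writing to an
# unmapped path (downloads, transcodes, logs) usually stands out here.
du -h -d1 "$DOCKER_DIR" 2>/dev/null | sort -hr | head -20
```

The same pipeline pointed at /mnt/user/appdata will show whether something like Plex metadata is the culprit instead.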

 

5 hours ago, Akshunhiro said:

appdata and system are on the array

According to those diagnostics appdata is on cache, perhaps because you

5 hours ago, Akshunhiro said:

Tried deleting appdata so it'd be recreated

system has files on disk2.

 

domains also has files on disk2, probably not necessary if you do your VMs right. They typically won't need large vdisks since they can access Unraid storage. And if you have appdata, domains, system on the array, performance is impacted and array disks can't spin down.

 

Disable Docker in Settings and delete that docker folder. Then see if you can make it work as an img of 20G while you reinstall each docker one at a time from Previous Apps.

 

9 hours ago, trurl said:

Nothing can move open files. Did you disable Docker and VM Manager in Settings before attempting move?

 

Thanks trurl, I didn't, and that was my first mistake.

 

I did get a pop-up from Fix Common Problems saying that Docker would break if files were moved

The Docker tab was showing a lot of broken containers with the LT logo and broken link icon

 

Pretty sure I turned off Docker and VM Manager when I got that pop-up, then ran the mover again. I was left with ~7GB on the old cache drive and, after checking the contents, assumed it was done

 

My problem was likely caused by manually copying (cp -r) the appdata, system and domains folders (all originally set to cache: Prefer) to the new cache drive rather than letting the mover do it, as I don't think cp retained the correct permissions
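For anyone hitting the same thing: cp -r copies file contents but not metadata, while cp -a (or rsync -a) runs in archive mode and keeps mode, ownership and timestamps. A minimal sketch under /tmp (demo paths, not real shares; preserving ownership additionally needs root):

```shell
# Demo of cp -r vs cp -a metadata handling (throwaway demo paths).
rm -rf /tmp/perm_demo
mkdir -p /tmp/perm_demo/src
echo data > /tmp/perm_demo/src/file
chmod 666 /tmp/perm_demo/src/file
touch -d '2020-01-01' /tmp/perm_demo/src/file
umask 022

cp -r /tmp/perm_demo/src /tmp/perm_demo/copy_r   # new files: umask applied, fresh mtime
cp -a /tmp/perm_demo/src /tmp/perm_demo/copy_a   # archive mode: metadata kept

stat -c '%a %y' /tmp/perm_demo/copy_r/file   # 644, today's timestamp
stat -c '%a %y' /tmp/perm_demo/copy_a/file   # 666, 2020-01-01 preserved
```

Run as root, something like rsync -a oldcache/appdata/ newcache/appdata/ (hypothetical mount names) would have kept ownership too, which is effectively what the mover does.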

 

With the other changes to my setup, I'd read that having a docker folder rather than an image was a better way to go, so that change was made a while ago

I no longer have any vDisks as I recently grabbed a Synology E10M20-T1, so I have an M.2 passed through to each of my VMs

I can't remember where Plex metadata is stored but suspect that's what's contributing to the 132GB

 

The only container I have working is the Cloudflare tunnel, but that was set up through the CLI

I tried removing vm_custom_icons but got the same error message when re-installing it and attempting to start it

VMs are still working

 

I think I need to purge Docker completely and set it up again, but I'm not sure what the correct method would be

I also tried installing 6.11.0-rc2 to upgrade Docker, without any luck. I suspect it's because either system or appdata still has broken permissions, so it won't pick up those folders properly
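On the broken-permissions suspicion: Unraid's convention for user shares is nobody:users (UID 99 / GID 100), and Tools > New Permissions (or, if I remember right, the Docker Safe New Permissions tool that Fix Common Problems adds) resets shares to that. A rough manual equivalent, sketched against a demo path so it is safe to run as-is; on the server you would point TARGET at the affected share and run it as root:

```shell
# Demo-safe sketch of a manual permission reset. On a real Unraid box,
# TARGET would be the affected share (e.g. /mnt/user/system), run as root.
TARGET="${TARGET:-/tmp/unraid_perm_demo}"
mkdir -p "$TARGET/sub"            # demo scaffolding only
touch "$TARGET/sub/conf"
chmod 400 "$TARGET/sub/conf"

# Unraid's share convention is nobody:users (99:100); chown needs root,
# so the demo just reports if it can't.
chown -R nobody:users "$TARGET" 2>/dev/null || echo "chown skipped (not root)"

chmod -R u+rw,g+rw "$TARGET"                      # owner/group read-write
find "$TARGET" -type d -exec chmod u+x,g+x {} +   # directories need execute
```

The GUI tool is the safer route, though: containers that manage their own internal UIDs (Plex and friends) can be sensitive to blanket chowns of appdata, which is exactly why the "Docker Safe" variant skips it.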

9 hours ago, trurl said:

Disable Docker in Settings, delete that docker folder. Then see if you can make it work as an img of 20G while you reinstall each docker one at a time from Previous Apps

 

Oh sorry, missed this

 

Will give it a try when I get home from work


Ah yep

Damn, wonder why it's so big then haha

 

So I've tested reverting back to a 20GB btrfs image and Docker is working fine now

 

I'll make a note of the container configs just in case and purge the /mnt/user/system/docker directory tonight (I assume that's fine to do since it's an option in the GUI? And yes, I'll do it through the GUI)

 

I'll then keep Docker set up as a directory instead of an image; I think I changed it because of excess write concerns on the SSD

 

Thanks for your assistance!

