Docker Service Failed to Start (SOLVED)


ucliker


I tried to do that but it won't let me. I keep getting Btrfs errors on my cache drive: parent transid verify failed on 1275643199488 wanted 2588463 found 2587150.

 

I copied what I could off of the drive. At first I thought it was a bad cable, but after switching cables and even pulling the drive and putting it in my Arch Linux box, it still throws the same errors.
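For anyone hitting the same transid errors: something along these lines is the usual way to salvage what you can from a pool that won't mount cleanly. The device name and destination path below are only examples, adjust them for your own system.

# try a read-only mount using the backup tree roots first (device and paths are examples)
mkdir -p /mnt/rescue
mount -o ro,usebackuproot /dev/sdX1 /mnt/rescue

# if it still refuses to mount, btrfs restore can copy files out without mounting at all
btrfs restore /dev/sdX1 /mnt/disk1/cache_rescue/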


To reformat a drive you need to stop the array: click on the disk to be formatted and change the file system to the one you want; restart the array and the disk will now show as unmountable. You can now use the option to format unmountable disks.

 

NOTE: this will erase any existing data on the drive, so make sure you first copy anything you want to keep to another location.
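If you are unsure how to copy the data off first, a rough example would be an rsync from the pool to a folder on an array disk before formatting. The source and destination names here are placeholders, not a recommendation for your exact layout.

# copy everything from the cache pool to an array disk (paths are examples only)
rsync -avh --progress /mnt/cache/ /mnt/disk1/cache_backup/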

3 minutes ago, itimpi said:

To reformat a drive you need to stop the array: click on the disk to be formatted and change the file system to the one you want; restart the array and the disk will now show as unmountable. You can now use the option to format unmountable disks.

NOTE: this will erase any existing data on the drive, so make sure you first copy anything you want to keep to another location.

Thanks! Yeah, I copied what I could but the filesystem on the cache drive is corrupted. 

  • 1 year later...

Hi everyone

 

I might be in a similar situation.

 

I wanted to change my Cache to Btrfs encrypted...

I then copied all data to my UAD drive and copied it back again, and now I get: "Docker Service failed to start."

 

Did I do something wrong? This is what I did:

1) Stop docker service

2) Copy appdata (Not system) back to the Cache drive

3) Start docker service again

 

UPDATE: I did the above twice more and then it worked (some transfer error?).
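In case it helps anyone who sees the same thing: a checksum-based dry run with rsync is one way to confirm that the copy back to the cache actually matches the source, which helps rule out a silent transfer error. The paths below assume an Unassigned Devices mount and are only examples.

# list any files whose contents differ between the backup and the cache; nothing is changed
rsync -rnc --itemize-changes /mnt/disks/backup_drive/appdata/ /mnt/cache/appdata/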
  • 11 months later...
On 9/15/2018 at 7:05 PM, JorgeB said:

Change cache slots to 1

 

We have one German user with a degraded RAID because his SSD in slot 1 became defective. Now his Docker service does not start anymore. Is it really necessary to convert the RAID back to single and move the remaining SSD to slot 1 (while he is waiting for the new SSD to re-create the RAID1)?

 

I wonder because in both cases the path would be /mnt/cache/system or /mnt/user/system, or is the docker.img mounted in a different way?
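Not sure if it answers the question, but it might help to see what the pool itself reports. Assuming it is mounted at /mnt/cache, something like this shows whether a device is missing and which profile the pool is actually using:

# list the devices btrfs thinks belong to the pool (a failed member shows up as missing)
btrfs filesystem show /mnt/cache

# show the data/metadata profiles (single, RAID1, ...) and space usage
btrfs filesystem usage /mnt/cache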

9 minutes ago, JorgeB said:

If the pool was redundant you can have just slot 2 populated; if it's not working there are other issues.

 

It looks like Unraid mounted the cache pool read-only, which would explain his problem. We suggested he make a backup with rsync first and then try to re-assign the SSD as a single cache drive in slot 1 if he doesn't want to wait for the replacement SSD.
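For reference, whether the pool really went read-only can be checked from the console. This is a generic Linux check rather than anything Unraid-specific, and the mount point is assumed to be /mnt/cache:

# the mount options will contain "ro" if btrfs forced the pool read-only after errors
grep /mnt/cache /proc/mounts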

16 minutes ago, mgutt said:

No? Isn't it possible to downgrade the RAID1 to a single disk this way?

I guess you mean re-format? If that's it then yes, but it can be left in the slot, that won't matter: back up the cache and re-format. I assume it's this post:

Looking at the diags there were errors on the remaining device:

 

Jan 20 09:12:47 Unraid-Tower kernel: BTRFS info (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 1314931, rd 1043949, flush 26985, corrupt 1428, gen 0

 

This suggests that device also dropped offline in the past and it was never corrected; when the other device failed it was already corrupt. Tell the user to take a look here for better pool monitoring.
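As a rough illustration of what that kind of monitoring looks at (the mount point is just an example), the per-device error counters btrfs keeps can be read, and reset once the pool is healthy again, with:

# print write/read/flush/corruption/generation error counters for each pool member
btrfs device stats /mnt/cache

# once the pool has been repaired, zero the counters so new errors are easy to spot
btrfs device stats -z /mnt/cache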
