LyhjeHylje

6.6.6 mover messes up vm and docker

11 posts in this topic


Sometimes after scheduled mover activity my VM (Linux Mint) starts complaining that its disk is read-only, and I need to restart Unraid and the VM to sort things out. My docker containers also tend to break at the same time; some of them (duplicati) complain about not being able to write to disk.

I have set appdata and domains to "cache: prefer". Should I change that to "only"?

19 hours ago, LyhjeHylje said:

Sometimes after scheduled mover activity my VM (Linux Mint) starts complaining that its disk is read-only, and I need to restart Unraid and the VM to sort things out. My docker containers also tend to break at the same time; some of them (duplicati) complain about not being able to write to disk.

I have set appdata and domains to "cache: prefer". Should I change that to "only"?

If a share is set to Cache:prefer and files for that share exist on the array, Mover will try to move them to the cache drive. As long as there is sufficient space on the cache drive to hold these files, this will not cause issues.
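A quick sanity check is to compare the free space on the cache against the size of the shares Mover would pull over. A sketch, assuming the standard Unraid mount points (`/mnt/cache` for the cache pool, `/mnt/user0` for the array-only view of each share):

```shell
# Free space currently available on the cache pool.
df -h /mnt/cache

# Size of the appdata/domains files still sitting on the array
# (/mnt/user0 shows shares excluding the cache), i.e. roughly what
# Mover would need to fit onto the cache.
du -sh /mnt/user0/appdata /mnt/user0/domains 2>/dev/null
```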

 

The one thing that can be affected is a VM, if the vdisk it uses is on the cache drive and its physical size is less than the allocated size. Vdisks are normally set up as 'sparse' files, so they only use as much physical space as they need, which can be a lot less than their allocated size. This means that as the VM runs and writes to a vdisk, more physical space needs to be allocated to it. If there is not enough free space on the cache drive to allow this to happen, the VM can act up.
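The sparse-file behaviour is easy to see with a throwaway file (the filename here is made up for the demo):

```shell
# Create a sparse 20G file -- the same technique vdisk images use.
truncate -s 20G demo_vdisk.img

# Apparent (allocated) size: reported as the full 20G.
ls -lh demo_vdisk.img

# Physical space actually consumed on disk: close to zero until
# the file is written to, which is why a vdisk can keep growing
# long after it was created.
du -h demo_vdisk.img

rm demo_vdisk.img
```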


Thanks for the help. The VM disk has a 20G allocation and I have some 370G of free space on the cache.

One thing I forgot to mention is that I have mounted a non-cache share inside the VM. It is probably this connection that mover breaks. I usually have to boot the VM several times before it starts properly.

The containers that I have seen affected by mover (krusader, duplicati, plex) also have access to non-cache shares, but they do not seem to break every time.


Today I got this error:

Quote

internal error: process exited while connecting to monitor: 2019-02-14T17:16:47.729691Z qemu-system-x86_64: -drive file=/mnt/cache/domains/vm_mint.qcow2,format=qcow2,if=none,id=drive-virtio-disk2,cache=writeback: Could not open '/mnt/cache/domains/vm_mint.qcow2': Read-only file system

And if I stop any container and try to start it again:

Quote

Execution error

Error code 403

I have no idea what is going on. I do not think today's errors were caused by mover, since it has not run.

Edited by LyhjeHylje
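When that qemu error appears, one thing worth checking is whether the kernel has remounted the cache filesystem read-only after an I/O error. A sketch, using the `/mnt/cache` path from the error above:

```shell
# Print the mount options for the cache filesystem straight from the kernel.
# If the options start with "ro", the filesystem has been flipped read-only
# (btrfs does this after an aborted transaction), and every write -- including
# qemu opening the vdisk -- will fail until a remount or reboot.
awk '$2 == "/mnt/cache" { print $4 }' /proc/mounts
```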


Unraid log shows:

Quote

 

Feb 14 19:15:44 Tessu kernel: BTRFS warning (device loop3): Skipping commit of aborted transaction.
Feb 14 19:15:44 Tessu kernel: BTRFS: error (device loop3) in cleanup_transaction:1847: errno=-5 IO failure
Feb 14 19:15:44 Tessu kernel: BTRFS info (device loop3): delayed_refs has NO entry
Feb 14 19:16:25 Tessu kernel: loop: Write error at byte offset 3450691584, length 4096.
Feb 14 19:16:25 Tessu kernel: print_req_error: I/O error, dev loop2, sector 6739632
Feb 14 19:16:25 Tessu kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 23, rd 0, flush 0, corrupt 0, gen 0
Feb 14 19:16:25 Tessu kernel: loop: Write error at byte offset 3466641408, length 4096.
Feb 14 19:16:25 Tessu kernel: print_req_error: I/O error, dev loop2, sector 6770784
Feb 14 19:16:25 Tessu kernel: loop: Write error at byte offset 3467952128, length 4096.
Feb 14 19:16:25 Tessu kernel: print_req_error: I/O error, dev loop2, sector 6773344
Feb 14 19:16:25 Tessu kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 24, rd 0, flush 0, corrupt 0, gen 0
Feb 14 19:16:25 Tessu kernel: loop: Write error at byte offset 3468738560, length 4096.
Feb 14 19:16:25 Tessu kernel: print_req_error: I/O error, dev loop2, sector 6774880
Feb 14 19:16:25 Tessu kernel: loop: Write error at byte offset 3470049280, length 4096.
Feb 14 19:16:25 Tessu kernel: print_req_error: I/O error, dev loop2, sector 6777440
Feb 14 19:16:25 Tessu kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 25, rd 0, flush 0, corrupt 0, gen 0
Feb 14 19:16:25 Tessu kernel: loop: Write error at byte offset 3470835712, length 4096.
Feb 14 19:16:25 Tessu kernel: print_req_error: I/O error, dev loop2, sector 6778976
Feb 14 19:16:25 Tessu kernel: loop: Write error at byte offset 3472146432, length 4096.
Feb 14 19:16:25 Tessu kernel: print_req_error: I/O error, dev loop2, sector 6781536
Feb 14 19:16:25 Tessu kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 26, rd 0, flush 0, corrupt 0, gen 0
Feb 14 19:16:25 Tessu kernel: loop: Write error at byte offset 3472932864, length 4096.
Feb 14 19:16:25 Tessu kernel: print_req_error: I/O error, dev loop2, sector 6783072
Feb 14 19:16:25 Tessu kernel: loop: Write error at byte offset 3474243584, length 4096.
Feb 14 19:16:25 Tessu kernel: print_req_error: I/O error, dev loop2, sector 6785632
Feb 14 19:16:25 Tessu kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 27, rd 0, flush 0, corrupt 0, gen 0

 

Are my hard drives failing? What are loop devices?

A reboot seems to clear the problems, but I have no idea what is going on or how to fix it.

 

Edited by LyhjeHylje
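On the loop-device question: `loop2` and `loop3` are not physical disks but loopback devices that Unraid creates so that image files (typically the docker image and the libvirt image) can be mounted as block devices. A sketch of how to see which file backs which device:

```shell
# List all active loop devices together with the image files backing them.
# On Unraid, loop2/loop3 usually map to the docker and libvirt image files,
# so the BTRFS errors in the log above are being reported against those
# images rather than directly against a physical drive.
losetup -a
```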

