LyhjeHylje Posted February 12, 2019
Sometimes after scheduled mover activity my VM (Linux Mint) starts to complain that the disk is read-only, and I need to restart Unraid and the VM to sort things out. My Docker containers also tend to break at the same time; some of them (duplicati) complain about not being able to write to disk. I have set appdata and domains to "cache: prefer". Should I change that to "only"?
JorgeB Posted February 13, 2019
Cache:prefer is fine as long as that share doesn't also exist on the array.
itimpi Posted February 13, 2019
19 hours ago, LyhjeHylje said: Sometimes after scheduled mover activity my VM (Linux Mint) starts to complain that the disk is read-only, and I need to restart Unraid and the VM to sort things out. My Docker containers also tend to break at the same time; some of them (duplicati) complain about not being able to write to disk. I have set appdata and domains to "cache: prefer". Should I change that to "only"?

If any share is set to Cache:prefer and files for that share exist on the array, Mover will try to move them to the cache drive. As long as there is sufficient space on the cache drive to hold these files, this will not cause issues. The one thing that can be affected is a VM, if the vdisk it uses is on the cache drive and its physical size is less than its allocated size. Vdisks are normally set up as 'sparse' files, so they only use as much physical space as they need, which can be a lot less than their 'allocated' size. This means that while the VM is running, more physical space may need to be allocated to the vdisk as the VM writes to it. If there is not enough free space on the cache drive to allow this, the VM can act up.
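The sparse-file behaviour described above is easy to see for yourself. A minimal sketch (the filename is purely illustrative, not from this system): create a sparse file and compare its apparent size with the space it actually occupies on disk.

```shell
# Create a 20G sparse file, like a freshly allocated vdisk
# (filename is illustrative).
truncate -s 20G demo_vdisk.img

# Apparent (allocated) size: reports 20G.
ls -lh demo_vdisk.img

# Physical space actually used: close to zero until data is written.
du -h demo_vdisk.img

# Clean up the demo file.
rm demo_vdisk.img
```

This is why a vdisk can fit on the cache today but still run the pool out of space later, as the VM fills in blocks that were never physically allocated.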
LyhjeHylje Posted February 13, 2019 (Author)
Thanks for the help. The VM disk has a 20G allocation and I have some 370G of free space on the cache. One thing I forgot to mention is that I have mounted a non-cache share inside the VM. It is probably this connection that gets broken by mover. I usually have to boot the VM several times before it starts properly. The containers that I have seen affected by mover (krusader, duplicati, plex) also have access to non-cache shares, but it seems they do not break every time.
LyhjeHylje Posted February 14, 2019 (Author)
Today I got this error:

internal error: process exited while connecting to monitor: 2019-02-14T17:16:47.729691Z qemu-system-x86_64: -drive file=/mnt/cache/domains/vm_mint.qcow2,format=qcow2,if=none,id=drive-virtio-disk2,cache=writeback: Could not open '/mnt/cache/domains/vm_mint.qcow2': Read-only file system

And if I stop any container and try to start it again:

Execution error
Error code 403

I have no idea what is going on. I do not think today's errors are caused by mover, since it has not been run. Edited February 14, 2019 by LyhjeHylje
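That "Read-only file system" message suggests the cache filesystem itself was remounted read-only, not just the vdisk. One quick way to check (assuming the cache is mounted at /mnt/cache, as the error path implies) is to read the mount options from /proc/mounts:

```shell
# Print the mount options for /mnt/cache. A healthy pool shows
# options starting with "rw"; btrfs remounts itself "ro" after an
# aborted transaction to prevent further damage.
awk '$2 == "/mnt/cache" {print $4}' /proc/mounts
```

If this prints options beginning with "ro", everything on the cache (vdisks, docker.img) will fail writes until the underlying fault is fixed and the pool is remounted.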
JorgeB Posted February 14, 2019
Please post the diagnostics: Tools -> Diagnostics
LyhjeHylje Posted February 14, 2019 (Author)
The Unraid log shows:

Feb 14 19:15:44 Tessu kernel: BTRFS warning (device loop3): Skipping commit of aborted transaction.
Feb 14 19:15:44 Tessu kernel: BTRFS: error (device loop3) in cleanup_transaction:1847: errno=-5 IO failure
Feb 14 19:15:44 Tessu kernel: BTRFS info (device loop3): delayed_refs has NO entry
Feb 14 19:16:25 Tessu kernel: loop: Write error at byte offset 3450691584, length 4096.
Feb 14 19:16:25 Tessu kernel: print_req_error: I/O error, dev loop2, sector 6739632
Feb 14 19:16:25 Tessu kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 23, rd 0, flush 0, corrupt 0, gen 0
Feb 14 19:16:25 Tessu kernel: loop: Write error at byte offset 3466641408, length 4096.
Feb 14 19:16:25 Tessu kernel: print_req_error: I/O error, dev loop2, sector 6770784
Feb 14 19:16:25 Tessu kernel: loop: Write error at byte offset 3467952128, length 4096.
Feb 14 19:16:25 Tessu kernel: print_req_error: I/O error, dev loop2, sector 6773344
Feb 14 19:16:25 Tessu kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 24, rd 0, flush 0, corrupt 0, gen 0
Feb 14 19:16:25 Tessu kernel: loop: Write error at byte offset 3468738560, length 4096.
Feb 14 19:16:25 Tessu kernel: print_req_error: I/O error, dev loop2, sector 6774880
Feb 14 19:16:25 Tessu kernel: loop: Write error at byte offset 3470049280, length 4096.
Feb 14 19:16:25 Tessu kernel: print_req_error: I/O error, dev loop2, sector 6777440
Feb 14 19:16:25 Tessu kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 25, rd 0, flush 0, corrupt 0, gen 0
Feb 14 19:16:25 Tessu kernel: loop: Write error at byte offset 3470835712, length 4096.
Feb 14 19:16:25 Tessu kernel: print_req_error: I/O error, dev loop2, sector 6778976
Feb 14 19:16:25 Tessu kernel: loop: Write error at byte offset 3472146432, length 4096.
Feb 14 19:16:25 Tessu kernel: print_req_error: I/O error, dev loop2, sector 6781536
Feb 14 19:16:25 Tessu kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 26, rd 0, flush 0, corrupt 0, gen 0
Feb 14 19:16:25 Tessu kernel: loop: Write error at byte offset 3472932864, length 4096.
Feb 14 19:16:25 Tessu kernel: print_req_error: I/O error, dev loop2, sector 6783072
Feb 14 19:16:25 Tessu kernel: loop: Write error at byte offset 3474243584, length 4096.
Feb 14 19:16:25 Tessu kernel: print_req_error: I/O error, dev loop2, sector 6785632
Feb 14 19:16:25 Tessu kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 27, rd 0, flush 0, corrupt 0, gen 0

Are my hard drives failing? What are loop devices? A reboot seems to clear the problems, but I have no idea what is going on or how to fix it. Edited February 14, 2019 by LyhjeHylje
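On the loop-device question: a loop device is a block device backed by an ordinary file. On Unraid, the Docker image (docker.img) and the libvirt image are mounted this way, so errors on loop2/loop3 usually mean writes are failing inside those image files (which live on the cache), not on a separate physical drive. You can list which file backs each loop device:

```shell
# List active loop devices and the regular file backing each one.
# On a typical Unraid box this will show paths such as docker.img
# and libvirt.img under the cache mount (exact paths vary by setup).
losetup -a
```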
JorgeB Posted February 14, 2019
7 minutes ago, johnnie.black said: Please post the diagnostics: Tools -> Diagnostics
LyhjeHylje Posted February 14, 2019 (Author)
17 minutes ago, johnnie.black said: Please post the diagnostics: Tools -> Diagnostics

Too bad I noticed your message only after rebooting. Hope there is some clue to be found. tessu-diagnostics-20190214-1941.zip
JorgeB Posted February 14, 2019
Previous read and write errors on the SSD:

Feb 14 19:34:38 Tessu kernel: BTRFS info (device sdb1): bdev /dev/sdb1 errs: wr 14, rd 57, flush 0, corrupt 0, gen 0

Likely it's dropping offline. See here for how to better monitor the cache: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=700582
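Along the same lines as the linked FAQ, a small script can be run on a schedule to warn as soon as btrfs starts logging errors. This is purely an illustrative sketch (the log path and message pattern are assumptions, adjust for your setup):

```shell
#!/bin/sh
# Count btrfs error lines in the syslog and warn if any appear.
# The default LOG path is an assumption; pass a different file as
# the first argument if your log lives elsewhere.
LOG="${1:-/var/log/syslog}"

# grep -c prints the match count; "|| true" keeps the script going
# when there are no matches (grep exits non-zero in that case).
errors=$(grep -c 'BTRFS error' "$LOG" 2>/dev/null || true)

if [ "${errors:-0}" -gt 0 ]; then
    echo "WARNING: $errors btrfs error line(s) found in $LOG"
fi
```

Run from cron (or the User Scripts plugin) this catches a dropping SSD early, before the filesystem goes read-only and takes the VMs and containers down with it.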
LyhjeHylje Posted February 14, 2019 (Author)
Seems plausible. I'll check my cables and keep an eye on it. Thank you for the help.