
[Solved] VM pause


damonwilson24


All of a sudden my two VMs go into a paused state without notice.  It does not let me resume them, and the only way to get them out of it is a force stop.  It just started happening, and it happens within minutes of booting them up.  Any ideas where to start looking?  I was just about to start building out my home lab.  Thanks!

Link to comment

One of these machines is a Windows 7 machine, and it has been running fine since I built it two weeks ago.  Today I was connected to it from work doing a download and it just dropped me.  When I got back home I saw it was in a paused state.  I have started it up three times since, and it keeps pausing.

Link to comment

I have a 500GB and 64GB cache pool with 219GB free.  So unless the pool is not working right, it is something else.  I just looked through the VM log and the unRAID log on the dashboard, and I am seeing errors.  I am new to unRAID, but I would guess these errors are not good.  This is not all of it, but they both seem to repeat the same errors, so I just pasted part of it:

 

 

VM log:

 

2015-08-19 23:50:26.966+0000: starting up libvirt version: 1.2.15, qemu version: 2.3.0

Domain id=30 is tainted: high-privileges

Domain id=30 is tainted: host-cpu

char device redirected to /dev/pts/0 (label charserial0)

qemu: terminating on signal 15 from pid 15552

2015-08-20 00:45:58.223+0000: shutting down

2015-08-20 00:46:00.953+0000: starting up libvirt version: 1.2.15, qemu version: 2.3.0

Domain id=31 is tainted: high-privileges

Domain id=31 is tainted: host-cpu

char device redirected to /dev/pts/0 (label charserial0)

 

 

Dashboard log:

Aug 19 21:03:21 unraid kernel: blk_update_request: I/O error, dev loop0, sector 9251096

Aug 19 21:03:21 unraid kernel: BTRFS: bdev /dev/loop0 errs: wr 6454, rd 0, flush 0, corrupt 0, gen 0

Aug 19 21:03:22 unraid kernel: loop: Write error at byte offset 4736688128, length 4096.

Aug 19 21:03:22 unraid kernel: blk_update_request: I/O error, dev loop0, sector 9251344

Aug 19 21:03:22 unraid kernel: BTRFS: bdev /dev/loop0 errs: wr 6455, rd 0, flush 0, corrupt 0, gen 0

Aug 19 21:03:22 unraid kernel: loop: Write error at byte offset 4736696320, length 4096.

Aug 19 21:03:22 unraid kernel: blk_update_request: I/O error, dev loop0, sector 9251360

Aug 19 21:03:22 unraid kernel: BTRFS: bdev /dev/loop0 errs: wr 6456, rd 0, flush 0, corrupt 0, gen 0

Aug 19 21:03:22 unraid kernel: loop: Write error at byte offset 4736823296, length 4096.

Aug 19 21:03:22 unraid kernel: blk_update_request: I/O error, dev loop0, sector 9251608

Aug 19 21:03:22 unraid kernel: BTRFS: bdev /dev/loop0 errs: wr 6457, rd 0, flush 0, corrupt 0, gen 0

Aug 19 21:03:22 unraid kernel: loop: Write error at byte offset 4736950272, length 4096.

Aug 19 21:03:22 unraid kernel: blk_update_request: I/O error, dev loop0, sector 9251856

Aug 19 21:03:22 unraid kernel: BTRFS: bdev /dev/loop0 errs: wr 6458, rd 0, flush 0, corrupt 0, gen 0
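For anyone reading this thread later: each pair of lines in that log describes the same failed 4KB write. The kernel block layer reports positions in 512-byte sectors, so multiplying the sector number from a blk_update_request line by 512 gives the byte offset in the matching "loop: Write error" line. A quick sanity check with the numbers from the log above:

```python
SECTOR_SIZE = 512  # bytes; the kernel block layer's sector unit

# (sector from blk_update_request, byte offset from the matching
#  "loop: Write error" line in the log above)
pairs = [
    (9251344, 4736688128),
    (9251360, 4736696320),
    (9251608, 4736823296),
    (9251856, 4736950272),
]

for sector, offset in pairs:
    # Each blk_update_request sector maps exactly onto the loop write offset.
    assert sector * SECTOR_SIZE == offset
    print(f"sector {sector} -> byte offset {offset}")
```

In other words, these are all the same failures seen at two layers: btrfs writing into the loop-mounted image file, and the loop device failing the write underneath it, which is consistent with the backing storage running out of usable space rather than with two independent problems.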

Link to comment


Cache pool support is currently set to RAID1 with btrfs.  This means that if you have a 500GB and a 64GB SSD, your net usable space will be roughly 64GB, less the space btrfs consumes for metadata.  Please go to Tools -> Diagnostics, click Collect, and upload the resulting zip file here for review.  Thank you!
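To make the space math concrete: btrfs RAID1 keeps two copies of every chunk on different devices, so a two-device pool can only hold as much data as the smaller device. A rough sketch of that calculation (illustrative only; real btrfs allocation also reserves space for metadata and system chunks, which this ignores):

```python
def btrfs_raid1_usable_gb(*device_sizes_gb):
    """Approximate usable space for a btrfs RAID1 pool, in GB.

    Every chunk is mirrored onto two different devices, so the
    largest device can never hold more data than all of the other
    devices combined can mirror.
    """
    sizes = sorted(device_sizes_gb, reverse=True)
    largest, rest = sizes[0], sum(sizes[1:])
    # With two devices this caps usable space at the smaller device.
    return min(largest, rest)

print(btrfs_raid1_usable_gb(500, 64))   # 64, not 500 + 64
print(btrfs_raid1_usable_gb(250, 250))  # 250
```

So the "219GB free" shown earlier in the thread reflects raw capacity, not what RAID1 can actually mirror, which is why the VM images hit write errors long before the pool looked full.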

Link to comment

You are correct.  This morning I was chatting with a coworker who had a similar problem with memory, and then it dawned on me that maybe it was just using the 512GB drive for parity and only leaving me with the smaller 64GB drive (even though it said I had 129GB remaining).  Once I broke the pool, removed the smaller drive, and moved my data back, everything seems to be working fine.  If that changes, I will let you know.  I want to thank everyone for their input in getting me through this.  The unRAID forums are some of the best around, and I appreciate all the assistance.

Link to comment


I would just run my VM images off the 64GB drive and use the 512GB as the cache drive.

Link to comment

Archived

This topic is now archived and is closed to further replies.
