6.9 nothing but problems.



<rant>

 

Tbh, I'm getting sick of problems since 6.9 and I'm thinking about starting to look at alternative solutions to Unraid.  I appreciate that this may not be the experience of others, but prior to 6.9 I had no problems, zero, none, nada.  I didn't expect 6.9 to cause so many problems from day one.

 

The VM performance is killing my machine; I'm seeing load averages of 70-700 while the VMs are idle, with qemu consuming all the CPU.  When I first updated I had an issue where every VM was locking up its graphics console: the VMs themselves were still running, but I couldn't VNC into them.  I eventually edited one of the VMs' config files in the editor and suddenly all the VMs stopped locking up.  I've seen that others are having this problem too, and I've tried the "fixes" that have been mentioned, but nothing so far has solved it.
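For what it's worth, this is roughly how I've been trying to pin down which qemu process the load is actually coming from.  It's just a rough sketch from the command line, and <vm-name> is a placeholder for whatever your VM is called:

# qemu processes sorted by CPU usage, to see which VM is burning cycles
ps -eo pid,pcpu,pmem,etime,args --sort=-pcpu | grep [q]emu | head -n 5

# per-VM CPU time as reported by libvirt (replace <vm-name> with your VM's name)
virsh cpu-stats --total <vm-name>

# overall picture of where top thinks the CPU is going
top -b -n 1 -o %CPU | head -n 20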

 

Between that and the continual "blank pages" (i.e. I get the Unraid header but no actual content) returned from the web server, with sections empty or missing, it's starting to become a really miserable experience.  Long-standing issues like the VNC port resetting when making changes in the GUI, which I've mentioned several times over the course of a few years, still haven't been fixed; this is *basic* stuff.

 

The GUI feels clunky.  The mover has always been a resource hog, killing the machine when it runs, and this is a machine with 64GB of RAM and SSDs for the cache and VMs.

 

I'm at a loss as to what is actually going on, but it has got to the point where I'm losing faith in my Unraid solution.


There's an optical drive generating a lot of timeout errors; these can make the server unresponsive.  You should fix it (by replacing cables) or disconnect it if cables don't help.

 

May  9 21:50:40 Tower kernel: sr 3:0:0:0: [sr0] tag#12 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=21s
May  9 21:50:40 Tower kernel: sr 3:0:0:0: [sr0] tag#12 Sense Key : 0x4 [current]
May  9 21:50:40 Tower kernel: sr 3:0:0:0: [sr0] tag#12 ASC=0x3e ASCQ=0x2
May  9 21:50:40 Tower kernel: sr 3:0:0:0: [sr0] tag#12 CDB: opcode=0x28 28 00 00 00 72 5e 00 00 03 00
May  9 21:50:40 Tower kernel: blk_update_request: I/O error, dev sr0, sector 117112 op 0x0:(READ) flags 0x0 phys_seg 2 prio class 0
May  9 21:50:57 Tower kernel: sr 3:0:0:0: [sr0] tag#12 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=16s
May  9 21:50:57 Tower kernel: sr 3:0:0:0: [sr0] tag#12 Sense Key : 0x4 [current]
May  9 21:50:57 Tower kernel: sr 3:0:0:0: [sr0] tag#12 ASC=0x3e ASCQ=0x2
May  9 21:50:57 Tower kernel: sr 3:0:0:0: [sr0] tag#12 CDB: opcode=0x28 28 00 00 00 72 5e 00 00 03 00
May  9 21:50:57 Tower kernel: blk_update_request: I/O error, dev sr0, sector 117112 op 0x0:(READ) flags 0x0 phys_seg 2 prio class 0
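If swapping cables doesn't help and you can't physically unplug the drive straight away, you should be able to take it offline from the command line until the next reboot.  This is just a rough sketch; sr0 is taken from your log, so confirm it really is the optical drive before removing it:

# confirm what sr0 actually is before doing anything
cat /proc/scsi/scsi
cat /sys/block/sr0/device/model

# tell the kernel to drop the device so it stops generating timeouts (lasts until reboot)
echo 1 > /sys/block/sr0/device/delete

# check that the errors have stopped
dmesg | grep sr0 | tail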

 

10 hours ago, JorgeB said:

There's an optical drive generating a lot of timeout errors; these can make the server unresponsive.  You should fix it (by replacing cables) or disconnect it if cables don't help.

 



 

The drive has only been used over the past couple of days, and these are issues I've been suffering with since 6.9 came out, but I'll check that out.

 

The VMs causing excessive load is what's killing me at the moment; they're idle with no load inside the VM, but I'm seeing ridiculously high load in Unraid.
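My next step is to work out whether this is real CPU time or processes stuck in I/O wait, since uninterruptible I/O wait also inflates the load average and would point back at that optical drive.  Roughly what I plan to run (nothing Unraid-specific, and <vm-name> is again a placeholder):

# is the load CPU or I/O wait? a high "wa" column with idle CPUs points at a blocked device
vmstat 5 3

# any processes stuck in uninterruptible sleep (state D) dragging the load average up?
ps -eo pid,stat,wchan:32,args | awk '$2 ~ /D/'

# per-thread CPU for one VM's qemu process
top -H -b -n 1 -p "$(pgrep -f 'qemu.*<vm-name>' | head -n 1)" | head -n 25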

 
