Re: unRAID Server Release 6.0-beta12-x86_64 SPINDOWN/SPINUP ISSUES



I was able to recreate it tonight on one of my test machines at home.  This is the FOURTH machine I've tested on.  The good news is that on an internal development build, I'm NOT having this issue anymore, so I think some of the lower-level changes we made internally in this build just happened to fix it.  Crossing my fingers on this, but until the next release is pushed out and we get feedback from folks here, we won't know for sure.  At least it looks like there is light at the end of the tunnel.

 

So much for it being a plugin issue; I could have told you it wasn't from my testing. ;)

 

Can we have an ETA for this build? My 4 EARS drives have been spinning continuously since b12 was released, and I don't want to roll back if b13 is coming later this week.



Very soon.


Thank you for all the testing efforts!


No problem.

 

I hope folks understand where I was really coming from with plugins.  I realize that for folks who use these plugins every day, it may seem ludicrous when I suggest that we need them removed to isolate the problem, but we at LT do NOT use those plugins every day.  We DON'T know what they are doing "under the covers," and technically they could be doing a million different things that could impact overall system stability / reliability.  And I can tell you from a history of having to help folks with technical problems: it all comes down to isolating problems by process of elimination.  If I installed every plugin that is available for V6 right now, I would almost guarantee that there'd be some issues.  Could be memory related, could be CPU related, etc.  Just because something works once for someone, or even for a few people, doesn't mean it's guaranteed to work for everyone, all the time, across all versions.

 

This is especially true for the unRAID 6 beta, where we've consistently been upgrading our kernel version.  We've gone all the way from 3.9, which was what was in 5.x, up to 3.17 now.  There is ALWAYS a chance that some functionality a plugin provides may break when kernel versions are adjusted, so assuming that they all work perfectly just because the system booted for you isn't really a safe assumption at all.

 

At the end of the day, I had a sneaking suspicion this was NOT plugin related, but again, needed to rule out everything because we couldn't get the issue to repeat in our lab.

 

Lastly, this is a beta still.  Folks need to curb expectations a bit when it comes to things like this in a beta.

  • 2 weeks later...

I understand this is a beta, but you'd think a bug like this, with the potential to reduce the lifespan of hard drives, would have been fixed a little quicker. You don't say "thanks" to your community of volunteer beta testers by letting something like this go on for months!

 

It hasn't been "months".  It's been "weeks" (first reported on December 6th).


Ok, it's been weeks, not months, but my point was: you couldn't help out your testers by fixing this one issue that is negatively affecting hardware? I really like my unRAID server, but you guys are really not doing yourselves any favors by failing to communicate effectively with your community (which, in my opinion, is what makes your product great).


Roll back to b11

 

 

As you said, this is a beta, and I assume you made a good backup of b11 before upgrading.  It should be a simple matter of downgrading.  It would have been even easier had you done it weeks ago when the problem popped up (less differential between then and now).

 

 

unRAID is under no obligation to give us beta access; in fact, if 100% reliable / bug-free is what's required, you shouldn't be on a beta at all.  I want this fixed too, as most of my drives are spinning up / remaining spun up, but I get to have Plex in a Docker, and my old HTPC hardware replaced with a VM, all because we have beta access!

 

 

 

 


I think people are starting to confuse a beta with a developer free-for-all! I understand it's a beta and will be buggy; I don't think it's unreasonable to ask for an update on what's going on with the next beta. I also think it's a two-way street: if LT expect the community to beta test the software for them, then they should understand people are going to want some communication and bug fixing during the beta period.

 

What's the hold-up on fixing the spindown bug? Does it really have to wait for the next beta release?


I don't believe I'm confused about what "beta" means.  The simple solution to your issue, if you cannot wait for the spindown fix, is to roll back to a beta version that works for you.

 

 

I'm holding out simply because I'm not *that* worried about the extra spin time for a month, knowing that they have replicated the issue internally (it looks like on 1/7 Jon said he replicated it on their test system).  If there were no indication that a fix was coming "shortly," I'd probably roll back myself.  But for a few weeks or a month, I'm waiting it out.

 

 

 

 


Not to mention there isn't solid evidence that spinning time has any impact on longevity. Constant spin-up/spin-down cycling has some supporting evidence, as does constant head parking (the load/unload cycle issue WD Greens had a while back). But just spinning isn't a real issue beyond power draw.


I honestly don't care whether a disk is spun up or down, because I have seen reports that made clear that this doesn't affect the lifespan of a disk.

 

However, for the sake of feedback: none of my WDC_WD20EARX drives are spinning down.

No VM and no Docker active; only the plugins are running (please see signature).

 

I have upgraded all Plugins according to bonienl's advice from yesterday.


Have you guys with issues already used wdidle3 to disable head parking on your drives?  I have six 3TB and five 4TB WD Green drives, and I always disable parking through the HD firmware.  I do not have the spindown problem.

 

Maybe it was already pondered in this thread; I haven't read the entire thing.  Hope it helps.

 

craigr
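[Editor's note: for anyone without a DOS boot disk handy for wdidle3, the open-source idle3-tools package on Linux provides idle3ctl, which can read or disable the same idle3 (head-parking) timer on WD Green drives. A rough sketch; /dev/sdX is a placeholder device, and the DRY_RUN guard (an addition for safety, not part of idle3ctl) just prints the commands instead of running them:]

```shell
#!/bin/sh
# Sketch: disable the WD idle3 (head-parking) timer with idle3ctl from
# idle3-tools, a Linux alternative to the DOS wdidle3 utility.
# /dev/sdX is a placeholder; with DRY_RUN=1 (the default here) the
# commands are only printed, never executed.
DRY_RUN=${DRY_RUN:-1}
DISK=${DISK:-/dev/sdX}

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run idle3ctl -g "$DISK"   # read the current idle3 timer value
run idle3ctl -d "$DISK"   # disable head parking entirely
# Power the drive fully off and on again (not just a warm reboot)
# before checking that the new setting took effect.
```

[This only applies to WD drives that expose the idle3 timer; run with DRY_RUN=0 and a real device once you've confirmed the drive model.]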


Good thought/idea, but alas mine have all been set to not park.


Same issue.  Running beta 12 with only the CrashPlan docker, with appdata on cache.  Three disks will not stay spun down despite the spindown command appearing in the log.  This includes the disk where the CrashPlan data is stored, and parity.  I thought that CrashPlan was the culprit, so I stopped the docker.  This did help with spinning down the parity drive and the drive where the CrashPlan backup is.  However, disk 4, a data disk (Western Digital), does not remain spun down.

 

Right now disk 4 is spinning despite this last line in the log.

 

Jan 25 05:41:43 Tower kernel: mdcmd (50): spindown 4

 

The log does not show when disks spin up.  It would be easy for me to uninstall or stop my CrashPlan docker and test further if needed.

 

Thanks
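[Editor's note: since the syslog only records the spindown commands, one way to catch the phantom spin-ups is to poll the drive's power state yourself. `hdparm -C` reports the state without waking a sleeping drive. A sketch, with /dev/sdd as a stand-in for disk 4's device node, that logs every state change:]

```shell
#!/bin/sh
# Sketch: log spin-up/spin-down transitions that the syslog misses.
# /dev/sdd below is a placeholder for the disk that won't stay spun down.

# Pull the state word ("active/idle" or "standby") out of hdparm -C output,
# which looks like:
#   /dev/sdd:
#    drive state is:  active/idle
parse_state() {
    awk -F': *' '/drive state is/ {print $2}'
}

watch_disk() {
    dev=$1
    prev=""
    while :; do
        state=$(hdparm -C "$dev" 2>/dev/null | parse_state)
        if [ -n "$state" ] && [ "$state" != "$prev" ]; then
            echo "$(date '+%b %d %H:%M:%S') $dev drive state: $state"
            prev=$state
        fi
        sleep 60
    done
}

# watch_disk /dev/sdd >> /boot/spinlog.txt &   # uncomment on the server
#                                              # (log path is just an example)
```

[Timestamps in that log lined up against the `mdcmd spindown` lines in the syslog should show exactly how soon after each spindown the disk comes back.]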



After applying the no-park fix on all of my 2TB HDDs, they are spinning down as they should.

Thanks craigr.

