unRAID OS version 6.3.2 Stable Release Available


limetech


I can't even download any dockers... I have tried on two machines, one located in Toronto, Canada, the other in VB, USA...

 

I restarted unRAID and it showed all dockers as needing updates! This machine was having issues, and we just deleted the Docker image because of this.

 

Imgur is also having problems right now; something is going down.

 

Looks like Amazon is having some issues...

 

https://status.aws.amazon.com/

 

 

Link to comment
1 hour ago, Mettbrot said:

I don't know how the versioning works on Linus's GitHub though. I suppose this is only a mirror of the current development branch...

 

Yes, the patch has made its way to linux-next but has not yet been distributed to maintainers.  The way it works is that all changes are applied to 'next' first and tested.  If a patch works there, it is then sent down to the maintainers of the various stable releases.  Probably a matter of days before this happens for this particular patch, I would guess.  So let's see what the next kernel point release brings us.  If we're ready to generate our own point release and we haven't seen this patch yet, then I'll go ahead and put it into our kernel.
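(For anyone who wants to track it, here is one illustrative way to check whether a patch has reached a stable tree, assuming you have a clone of the stable repository and know the patch's subject line; the subject below is just a placeholder:)

git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
cd linux-stable
# list commits added to the 4.9 stable branch since the 4.9.10 we currently ship
git log --oneline v4.9.10..origin/linux-4.9.y --grep='patch subject here'

If the commit shows up in that range, the fix will be in an upcoming 4.9.x point release.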

  • Upvote 3
Link to comment
12 hours ago, nexusmaniac said:

Can confirm... It doesn't even load that far for me haha! :D

 

This was due to the AWS outage, which we posted an announcement about earlier today (we've taken it down now that the outage is over).  We temporarily moved our download hosting to another location today, but that only helps folks downloading via the website, not automatic updates within the webGui.  All services are now restored and the downloads page is back up and running.

  • Upvote 1
Link to comment
57 minutes ago, jonp said:

 

This was due to the AWS outage, which we posted an announcement about earlier today (we've taken it down now that the outage is over).  We temporarily moved our download hosting to another location today, but that only helps folks downloading via the website, not automatic updates within the webGui.  All services are now restored and the downloads page is back up and running.

I noticed the outage, but didn't see the announcement :)

 

Indeed, thankfully all my dockers are back to normal and don't all think they're in need of updates! ;)

Link to comment
10 hours ago, limetech said:

 

Yes, the patch has made its way to linux-next but has not yet been distributed to maintainers.  The way it works is that all changes are applied to 'next' first and tested.  If a patch works there, it is then sent down to the maintainers of the various stable releases.  Probably a matter of days before this happens for this particular patch, I would guess.  So let's see what the next kernel point release brings us.  If we're ready to generate our own point release and we haven't seen this patch yet, then I'll go ahead and put it into our kernel.

Great! Thank you for your ongoing support!! :)

Link to comment

At some stage in a recent update, a little icon appeared towards the top-right of the Main tab of the webGui; the pop-up help says "Toggle reads/writes display".  When I click on this, all the disk read/write counts change to '0.0 B/s'.  How does one get this to display non-zero values?  My disks can be busy, with read/write counts increasing quite rapidly, but the B/s never reads anything but 0.0.

Link to comment
47 minutes ago, Woodpusherghd said:

My monthly parity check finished with a speed of 60.4 MB/s.  My previous parity checks were all around 80 MB/s.  Anyone else experience a significant slowdown with 6.3.2?

Slower CPUs have displayed this tendency, particularly with dual-parity setups.  What is your hardware, and to what earlier version(s) are you comparing the times?  (Recent versions of unRAID have a 'History' option on the 'Array Operation' tab that will give you a more complete picture.)  Watching the progress in the GUI will also slow down the parity check.

Link to comment
2 hours ago, Woodpusherghd said:

Anyone else experience a significant slowdown with 6.3.2?

 

Yes, as I noted earlier in this thread (see link below).  As Frank noted, it's likely due to slower processors -- although I'm surprised an E6300 isn't enough to do a full-speed parity check with single parity.  But folks with more powerful CPUs don't seem to be having any parity-check slowdown issues, so that's almost certainly the reason.

 

 

  • Upvote 1
Link to comment

Upgraded on Monday from 6.2.4. It had been up for 70 days without any problems.

 

Now 2 different VMs have crashed. One today, and one yesterday.

 

The logs seem to show that problems started almost exactly 24 hours after boot.

 

Boot up 

Feb 27 09:14:51 Tower liblogging-stdlog:  [origin software="rsyslogd" swVersion="8.23.0" x-pid="2368" x-info="http://www.rsyslog.com"] start
Feb 27 09:14:51 Tower kernel: Linux version 4.9.10-unRAID (root@develop64) (gcc version 5.4.0 (GCC) ) #1 SMP PREEMPT Wed Feb 15 09:38:14 PST 2017


 

1st problem

Feb 28 09:15:17 Tower kernel: makemkv: page allocation stalls for 11303ms, order:0, mode:0x2400840(GFP_NOFS|__GFP_NOFAIL)
Feb 28 09:15:17 Tower kernel: CPU: 11 PID: 3226 Comm: makemkv Not tainted 4.9.10-unRAID #1

Lots of problems followed, continuing until the VM crashed at around 16:30 yesterday.

 

A different VM crashed at around 16:30 today.

Mar 1 16:20:45 Tower kernel: Out of memory: Kill process 29390 (qemu-system-x86) score 52 or sacrifice child
Mar 1 16:20:45 Tower kernel: Killed process 29390 (qemu-system-x86) total-vm:3836896kB, anon-rss:3434208kB, file-rss:24kB, shmem-rss:16416kB
Mar 1 16:20:45 Tower kernel: oom_reaper: reaped process 29390 (qemu-system-x86), now anon-rss:0kB, file-rss:20kB, shmem-rss:20kB
Mar 1 16:20:46 Tower kernel: br0: port 2(vnet0) entered disabled state
Mar 1 16:20:46 Tower kernel: device vnet0 left promiscuous mode
Mar 1 16:20:46 Tower kernel: br0: port 2(vnet0) entered disabled state

 

Diagnostics attached. Any assistance much appreciated.

 

 

tower-diagnostics-20170301-2057.zip

Edited by al_uk
Link to comment
1 hour ago, al_uk said:

Upgraded on Monday from 6.2.4. It had been up for 70 days without any problems.

 

Now 2 different VMs have crashed. One today, and one yesterday.

 

The logs seem to show that problems started almost exactly 24 hours after boot.

 

You are loading a huge amount of stuff, some of it clearly memory-hungry, like multiple Java apps, Plex, and more.  You would think that with 64GB you should not be having memory issues, but they started, as you said, about 24 hours after booting.  At that point, page allocations were numerous but very slow, taking 10 to 20 seconds each at first and 10 to 50 seconds some time later, an obvious latency issue.  I believe they were caused by garbage-collection efforts as the memory filled up, the attempt to reorganize memory chunks to satisfy the requests.  The stalls were clearly averaging longer and longer, so it was only a matter of time before an allocation request failed.  The OOM (Out Of Memory) kill was the final straw.  While the system did carry on valiantly for quite a while, you probably should have rebooted when the allocation issues first began.

 

My guess is that you have something with a serious memory leak.  I can't say what it is; prime suspects would be Java itself, a Java app, makemkv, a Plex component, or a corrupted btrfs file system.  I would look for updates for all apps, then check the file systems on all drives formatted with btrfs.  If at all possible, stop loading anything you aren't actually using.  For example, do you really need so many of the NerdPack packages?
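For the btrfs check, a scrub can be run while the file system is mounted; a rough sketch (adjust the mount point to your setup, e.g. /mnt/cache for a btrfs cache drive):

btrfs scrub start /mnt/cache     # verify checksums in the background
btrfs scrub status /mnt/cache    # check progress and error counts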

 

If the problem continues, and I suspect it will, you will have to run without selected apps, trying different combinations, to figure out which apps are using up the memory.  You do have cAdvisor; perhaps it could be used to monitor all resource usage and see what grows too large and never shrinks back.  Right now the Java processes are enormous, and there are a number of them, so they could be suspect.
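As a quick sanity check alongside cAdvisor, something like the following from the console would show the biggest memory consumers and whether the allocation stalls are still recurring (a rough sketch; run it periodically and compare):

ps aux --sort=-rss | head -n 15                          # top processes by resident memory
grep 'page allocation stalls' /var/log/syslog | tail     # recent stall messages, if any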

 

This is a support issue, so it will probably be moved to the support board.

  • Upvote 1
Link to comment

Hi Rob, thank you for the pointers.

 

CrashPlan is probably the prime suspect - I'll start closing down some of the dockers and see what happens. This system has been running fine for a year using about 40% of the memory.

 

No problem if this moves to support.

Link to comment
8 hours ago, garycase said:
11 hours ago, Woodpusherghd said:

Anyone else experience a significant slowdown with 6.3.2?

 

Yes, as I noted earlier in this thread (see link below).  As Frank noted, it's likely due to slower processors -- although I'm surprised an E6300 isn't enough to do a full-speed parity check with single parity.  But folks with more powerful CPUs don't seem to be having any parity-check slowdown issues, so that's almost certainly the reason.

 

 

I'd like to offer one more possible cause of slowing parity checks; it's perhaps not responsible for either of your slower times, but it might be for someone.  My parity checks seemed to be slowing a little over time, but with no known issues, no SMART attributes dropping, etc.  Recently they began dropping more dramatically, and I noticed the check was much slower than normal during the initial phases, when it was dealing with the older and smaller drives, then returned to normal fast speeds once past them.  Again, no apparent issues, fine SMART reports.  Once it slowed to an average of about 30MB/s, I decided it was past time to investigate, and installed jbartlett's diskspeed tool, a very nice drive-performance tester that produces graphs of drive speed at various points on every drive.  I discovered my Hitachi 1TB drive was struggling badly; in fact, in less than a day, a critical SMART attribute went from excellent to bottomed out.  I replaced it, which restored my parity check speeds.

 

I'd like to suggest adding diskspeed to our scheduled maintenance, perhaps a quarterly run.  I haven't gotten around to it, but I considered adding a wrapper script for the User Scripts plugin that would rename the diskspeed.html file to include a timestamp and save it (with its graphs) to a diskspeed folder on the boot drive.  But perhaps @jbartlett would prefer to add an option to do that automatically.  I like the defaults it ships with: nothing to learn, just run diskspeed.sh.
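Something like this minimal sketch is what I had in mind (hypothetical; it assumes diskspeed.sh is on the PATH and writes diskspeed.html to the current directory):

#!/bin/bash
# run jbartlett's diskspeed, then archive the report with a timestamp
mkdir -p /boot/diskspeed
diskspeed.sh
mv diskspeed.html /boot/diskspeed/diskspeed-$(date +%Y%m%d-%H%M%S).html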

 

Again, this may not apply to either of you, but I think that if a user sees slowing parity checks, running diskspeed should be one of the recommendations.

 

   Drive performance testing (diskspeed.sh)

Edited by RobJ
add link
  • Upvote 1
Link to comment
3 hours ago, RobJ said:

I'd like to suggest adding diskspeed to our scheduled maintenance, perhaps a quarterly run. 

 

Not a bad idea -- and just for grins I will start doing this check periodically.  But I can confirm this is NOT the reason for the increased check times -- every time I upgrade to a new version I run a parity check "just because" ... and both of the last couple of times it increased markedly, I downgraded back to the previous version and ran the check again to confirm the slowdown was version-dependent -- and sure enough, the times returned to their previous level with the earlier version.

 

Link to comment
6 hours ago, RobJ said:

I'd like to suggest adding diskspeed to our scheduled maintenance, perhaps a quarterly run.  I haven't gotten around to it, but I considered adding a wrapper script for the User Scripts plugin that would rename the diskspeed.html file to include a timestamp and save it (with its graphs) to a diskspeed folder on the boot drive.  But perhaps @jbartlett would prefer to add an option to do that automatically.  I like the defaults it ships with: nothing to learn, just run diskspeed.sh.

 

   Drive performance testing (diskspeed.sh)

 

I'd be happy to add a date stamp to the file name by default, but there's a command switch (-o) to specify a file name.
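For example, something like this (illustrative only, assuming -o accepts a full path) would give you a date-stamped report without needing a wrapper:

diskspeed.sh -o /boot/diskspeed/diskspeed-$(date +%Y%m%d).html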

 

If you suspect issues with a specific drive, I'd recommend running it as follows:

diskspeed.sh -s 51 -n sdx

Replace "sdx" with the drive in question. This will test just that drive at every 2% of its capacity, making issues more obvious.

 

I've also started developing a new plugin that will map out and display every storage controller and list its ports and the drives attached to them (if any). It'll perform simultaneous drive tests to saturate the controllers and the ATA/SATA/SCSI buses. My ultimate goal is to display a heat map of each platter showing the read speeds.

Edited by jbartlett
  • Upvote 2
Link to comment
4 hours ago, garycase said:

Not a bad idea -- and just for grins I will start doing this check periodically.   But I can confirm this is NOT the reason for the increased check times -- every time I upgrade to a new version I run a parity check "just because" ... and both of the last couple times it increased markedly, I downgraded it back to the previous version and ran it again to confirm it was version dependent -- and sure enough the times returned to their previous level with the earlier version.

 

Are the Tunable settings set to custom values or the defaults? There may be a ghost in the machine that is tweaking things just enough to show up there. Maybe recheck them if you use custom values?

Link to comment
9 hours ago, jbartlett said:

 

Are the Tunable settings set to custom values or the defaults? There may be a ghost in the machine that is tweaking things just enough to show up there. Maybe recheck them if you use custom values?

 

I've tweaked the tunables several times, and have tried it again on the new versions, but to no avail.    Not a big deal -- how long parity checks take really isn't a big issue.

 

Link to comment

I'm not sure if it's just my server, but sometime after upgrading to 6.3, when I copy a file to my idle server from a Windows box, it will spin up all my drives and then tell me the file already exists.  If I don't overwrite, it creates a 0-byte file.  If I do overwrite, it copies to the cache drive normally.  Has anyone else experienced this?

Link to comment
19 minutes ago, earthworm said:

I'm not sure if it's just my server, but sometime after upgrading to 6.3, when I copy a file to my idle server from a Windows box, it will spin up all my drives and then tell me the file already exists.  If I don't overwrite, it creates a 0-byte file.  If I do overwrite, it copies to the cache drive normally.  Has anyone else experienced this?

So is this a file that is on your Windows computer, and you are copying it to your unRAID server? Are you copying it to a user share? Can you read that user share from Windows OK?

Link to comment
1 hour ago, trurl said:

So is this a file that is on your Windows computer, and you are copying it to your unRAID server? Are you copying it to a user share? Can you read that user share from Windows OK?

All yes.  Everything else works great.  I noticed the SMB speed increase from 6.3.1 to 6.3.2 and am thankful for that.

Link to comment
