unRAID Server Release 6.0-rc3-x86_64 Available



I'm a big unRAID fan, but I do have to point out this conversation ...

 

Yes, that conversation hit the nail on the head.    Clearly they didn't listen to NAS's advice ...

 

We have been stung with this before (see the last release cycle). Resist the urge to premature RC :P

Link to comment

I upgraded to rc2 (from b15) yesterday, rebooting from the webGui without powering down.  Following a power cut during the night, I found that unRAID was not running.

 

I could ping the machine, but there was no web access and no telnet or ssh access.

 

I connected via IPMI but the IPMI console was unresponsive, with only a few lines displayable, showing some services having been stopped.  It was not possible to ascertain the time at which these lines had been produced.

 

I performed a reset via IPMI and the system came up, but with a parity check running.  This is the first time in several months, and well in excess of 100 power cuts, that I've experienced an unclean shutdown.  The powerdown plugin had not left a logfile on the flash drive.  Coincidence, or indicative of a problem?

 

I suspect that we will have another power cut before the parity check completes (currently showing just over 7 hours to go).

Link to comment

An RC gives a certain expectation of nearly complete, if not complete, code.

 

But unRAID does tend to have its own way of doing things.

LT made a mistake.  So long as RC2 doesn't repeat it, I have no issues.  For my use case, both RC1 and RC2 have been rock solid.
Link to comment

An RC gives a certain expectation of nearly complete, if not complete, code.

 

But unRAID does tend to have its own way of doing things.

LT made a mistake.  So long as RC2 doesn't repeat it, I have no issues.  For my use case, both RC1 and RC2 have been rock solid.

 

RC1 still had the connection issues with Docker containers in a mixed Docker and VM environment for me; I disabled autostart of my VMs and that "cured" it with RC1.

 

I'm on RC2 now. I haven't tested booting with VMs on autostart to see if it's no longer an issue, as I've been working on "that damn container"...

Link to comment

I'm a big unRAID fan, but I do have to point out this conversation ...

 

Yes, that conversation hit the nail on the head.    Clearly they didn't listen to NAS's advice ...

 

We have been stung with this before (see the last release cycle). Resist the urge to premature RC :P

 

Gentlemen ----

 

This is a semantics argument!  Limetech uses a public beta program.  What is the difference between v6b16 and v6rc1?  Basically, what they did was eliminate releasing 6b16 by calling it 6rc1.  By releasing it as rc1, they have now said it will be the final underlying OS version and feature set of the software prior to the formal release of version 6.0, reserving the right to correct any bugs encountered.  There have been a few bugs and they are addressing them...

 

 

Link to comment

Gentlemen ----

 

This is a semantics argument!  Limetech uses a public beta program.  What is the difference between v6b16 and v6rc1?  Basically, what they did was eliminate releasing 6b16 by calling it 6rc1.  By releasing it as rc1, they have now said it will be the final underlying OS version and feature set of the software prior to the formal release of version 6.0, reserving the right to correct any bugs encountered.  There have been a few bugs and they are addressing them...

 

Exactly that. No more features will be added, just bugs will be corrected.

Link to comment

The Log button in the upper right is only showing a blank page.  Just started happening.  Tried Pale Moon and IE 11 browsers.  Same thing.  Shows a blank page.

I can still go to Tools/System Log and get my log, though. That shows fine.

Link to comment

The Log button in the upper right is only showing a blank page.  Just started happening.  Tried Pale Moon and IE 11 browsers.  Same thing.  Shows a blank page.

I can still go to Tools/System Log and get my log, though. That shows fine.

 

It can take a little while sometimes for something to show in there, particularly if the system is under load.
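
In the meantime, the same information is available straight from a console or SSH session, since the page is just showing the standard Linux syslog:

# follow the live system log
tail -f /var/log/syslog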

Link to comment

... what they did was eliminate releasing 6b16 by calling it 6rc1.

 

Agree ==>  However ...

 

(a)  Words matter ... they can definitely impact perceptions, and in this case it clearly wasn't actually ready for prime time (else there wouldn't have been an RC2 less than 24 hours later).

 

and

 

(b)  It was pretty clear from the ongoing discussion that there were a lot of post-B15 changes.    There was widespread expectation of an impending B16 due to the # of changes.  They simply chose to jump straight to an RC ... one which, in my opinion, had clearly not been subjected to a thorough internal testing cycle.  NAS very clearly warned them about doing exactly that:

... Resist the urge to premature RC :P

 

But what's done is done.  As Squid so succinctly noted:

... LT made a mistake.

 

... and I agree that as long as RC2 has resolved the issues that shouldn't have been overlooked, then all is well  :)

 

Link to comment

Without malice, and as I said a few times now, it was absolutely certain RC1 was going to be a flop.

 

The change list we knew of was far too large, and it was by its very definition a beta and not an RC.

 

Also, no amount of internal testing can come close to the wider and wilder public testing that every other beta received... otherwise users would never find any bugs.

 

It is a lesson we seem never to learn, and that is very frustrating.

Link to comment

I'm a big unRAID fan, but I do have to point out this conversation ...

 

Yes, that conversation hit the nail on the head.    Clearly they didn't listen to NAS's advice ...

 

We have been stung with this before (see the last release cycle). Resist the urge to premature RC :P

 

Gentlemen ----

 

This is a semantics argument!  Limetech uses a public beta program.  What is the difference between v6b16 and v6rc1?  Basically, what they did was eliminate releasing 6b16 by calling it 6rc1.  By releasing it as rc1, they have now said it will be the final underlying OS version and feature set of the software prior to the formal release of version 6.0, reserving the right to correct any bugs encountered.  There have been a few bugs and they are addressing them...

Like I said, I have no issues with RC1 or RC2.  However, since LT does use a public beta program, the version that qualified for RC status was B15.  B16 never hit the wild, and therefore was never tested on a wide variety of hardware, and that's why (presumably) the issues with VMs surfaced.  But, like you said, it's all semantics.

 

The important thing is that now 6.0 is feature complete.  No more features will be added to 6.0 - only bug fixes.  Based upon my own experience with the beta series, and my experience thus far with the RCs, I don't expect it to remain RC for long.  But I do not run VMs.

Link to comment

... It is a lesson we seem never to learn, and that is very frustrating.

 

Agree.    And you very clearly warned LT several times about the need for another Beta cycle to test the large # of changes that were incorporated after Beta 15.

 

It's one thing to have a Beta that needs a very quick "a" release to resolve something that's been overlooked.    An RC, by definition, should be stable enough that it doesn't need a new release on the SAME DAY it was released!!

 

Link to comment

For me, Docker doesn't work; after updating to rc1 I started getting this error:

time="2015-05-17T01:28:46+01:00" level=info msg="+job init_networkdriver()"
time="2015-05-17T01:28:46+01:00" level=info msg="+job serveapi(unix:///var/run/docker.sock)"
time="2015-05-17T01:28:46+01:00" level=info msg="Listening for HTTP on unix (/var/run/docker.sock)"
Could not find a free IP address range for interface 'docker0'. Please configure its address manually and run 'docker -b docker0'
time="2015-05-17T01:28:46+01:00" level=info msg="-job init_networkdriver() = ERR (1)"
time="2015-05-17T01:28:46+01:00" level=fatal msg="Shutting down daemon due to errors: Could not find a free IP address range for interface 'docker0'. Please configure its address manually and run 'docker -b docker0'"

 

After downgrading to beta15, everything is working.

 

UPDATE:

I know now where the problem is: I'm using the OpenVPN client with the option to start with the array.

When I stopped the OpenVPN client, Docker started without a problem.

 

Now the question is: why is this a problem on rc1 when it works fine on b15?

 

Starting in rc1 we moved when the Docker and VM services start up (from 'disks_mounted' to 'started') to eliminate some race conditions.  Since the openvpn client's start event is still 'disks_mounted', it'll start before Docker now.
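
If anyone wants to see that ordering on their own box: emhttp events are just scripts under each plugin's event directory, so something like this should show who hooks what (the path below is the usual plugin layout; verify it against your install):

# plugins hooking the early event (where openvpn still starts) ...
ls /usr/local/emhttp/plugins/*/event/disks_mounted 2>/dev/null
# ... versus the later event where Docker and VMs now start
ls /usr/local/emhttp/plugins/*/event/started 2>/dev/null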

 

Are you using openvpn to connect to another unRAID box running the openvpn server, or to some router?  I'm curious whether you have access to the other end, the openvpn server, to look at the settings there, because it might be set up to redirect local traffic over to the openvpn server instead of using your local internet connection and router for DHCP requests.

 

I'm no OpenVPN expert by any stretch, but here are some of my OpenVPN *Server* settings that I think help prevent clients from using the server's internet connection and instead just use their own (PeterB probably has a much better understanding):

Pushing DHCP options to clients: DNS Local gateway

Redirect-gateway: No

Push LAN subnet to the clients: Yes
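
For reference, those GUI toggles map (roughly) onto plain OpenVPN server.conf directives. A sketch only; the 192.168.1.x addresses are placeholders for your own LAN values:

# rough server.conf equivalents of the GUI settings above
cat >> /etc/openvpn/server.conf <<'EOF'
push "route 192.168.1.0 255.255.255.0"   # push the LAN subnet to clients
push "dhcp-option DNS 192.168.1.1"       # push a LAN DNS server
# note: deliberately NOT pushing "redirect-gateway def1", so clients
# keep their own internet connection and local DHCP/router
EOF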

 

Link to comment

An RC gives a certain expectation of nearly complete, if not complete, code.

 

But unRAID does tend to have its own way of doing things.

LT made a mistake.  So long as RC2 doesn't repeat it, I have no issues.  For my use case, both RC1 and RC2 have been rock solid.

 

RC1 still had the connection issues with Docker containers in a mixed Docker and VM environment for me; I disabled autostart of my VMs and that "cured" it with RC1.

 

I'm on RC2 now. I haven't tested booting with VMs on autostart to see if it's no longer an issue, as I've been working on "that damn container"...

 

 

I was just about to IRC a support forum for a-n-other piece of software and fired up my IRC Docker. Same connection issues that I have had since the introduction of the VM bridge, until I changed from host to bridge and back to host again.
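
For what it's worth, you can confirm which mode a container actually ended up in from the command line ("ircclient" below is just a placeholder for the container's name):

# check the network mode the container is really running with
docker inspect -f '{{ .HostConfig.NetworkMode }}' ircclient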

syslog.zip

Link to comment

... It is a lesson we seem never to learn, and that is very frustrating.

 

Agree.    And you very clearly warned LT several times about the need for another Beta cycle to test the large # of changes that were incorporated after Beta 15.

 

It's one thing to have a Beta that needs a very quick "a" release to resolve something that's been overlooked.    An RC, by definition, should be stable enough that it doesn't need a new release on the SAME DAY it was released!!

 

This is being greatly exaggerated.  The issues that were present in RC1 were in faulty webGui code only, and were not issues with the more fundamental mechanics of the operating system.

 

As for why we put out an RC2 so quickly: we are going through an accelerated RC phase.  Bug squashing is top priority now as we are on feature freeze.  We are already working to generate an RC3 as we progress towards final.

 

Please stop the nonconstructive back-and-forth posts over a few bugs that were resolved in under 24 hours from posting.  That doesn't lead to productive conversation or help us squash other bugs.  If you wish to discuss this, please take it to the lounge.

 

RC1 doesn't mean bug-free; it means stable.  It means your data is safe.  It doesn't mean every application feature is bug-free yet.  We are still squashing bugs on those features as we progress our way to final.

Link to comment

Running into a potential issue with a Windows 10 VM.  It goes through most of the install process, but after booting and selecting privacy options (public/private network, tracking, etc.), it goes to a checking-connection screen, then reboots.  I have tried both loading the virtio Ethernet driver and not loading it, since it seems like a network issue, and noted the same behavior.  It installed correctly in beta 15.  It's build 10041 on AMD hardware, if that matters.

Link to comment

Running into a potential issue with a Windows 10 VM.  It goes through most of the install process, but after booting and selecting privacy options (public/private network, tracking, etc.), it goes to a checking-connection screen, then reboots.  I have tried both loading the virtio Ethernet driver and not loading it, since it seems like a network issue, and noted the same behavior.  It installed correctly in beta 15.  It's build 10041 on AMD hardware, if that matters.

 

I've had some flakiness with Windows 10 as a VM with VirtIO drivers as well.  As it's not a fully released OS yet, it's not really supported, but some folks have had better luck than others with stability.  For a stable Windows VM, you can revert to Windows 8.1, or look at the libvirt documentation to modify the XML to use an IDE drive as opposed to VirtIO.
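
If anyone wants to try the IDE fallback, it's a one-line change in the domain XML; a sketch ("Windows10" is a placeholder for your VM's name, and the exact dev letters depend on your config):

# open the VM's libvirt XML for editing
virsh edit Windows10
# then change the disk's target line from the virtio bus:
#   <target dev='vda' bus='virtio'/>
# to IDE:
#   <target dev='hda' bus='ide'/>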

Link to comment

Running into a potential issue with a Windows 10 VM.  It goes through most of the install process, but after booting and selecting privacy options (public/private network, tracking, etc.), it goes to a checking-connection screen, then reboots.  I have tried both loading the virtio Ethernet driver and not loading it, since it seems like a network issue, and noted the same behavior.  It installed correctly in beta 15.  It's build 10041 on AMD hardware, if that matters.

 

I have a Windows 10 VM working with the .103 virtio drivers (latest).  I am not passing through any hardware.  There is an issue with Windows 10 recognizing all the network devices, i.e., the Windows 10 VM does not see my unRAID server.  I believe this is a Windows 10 issue based on what I've seen on the Internet.

Link to comment

 

I have a Windows 10 VM working with the .103 virtio drivers (latest).  I am not passing through any hardware.  There is an issue with Windows 10 recognizing all the network devices, i.e., the Windows 10 VM does not see my unRAID server.  I believe this is a Windows 10 issue based on what I've seen on the Internet.

 

Do you have the link for the .103 virtio drivers?  All I can find is .100 and .96.  The .100 worked previously, but perhaps that was just luck.  I'm not passing any hardware either.

Link to comment

Wow, why are you guys all going crazy? Beta, RC, it's still a beta. RC doesn't mean it's perfect. Anyone running the beta is "testing" the next version of unRAID.

So what if there was a slight issue? Release RC2 and repeat. Any other issues? Release RC3. And so on.

Link to comment

 

I have a Windows 10 VM working with the .103 virtio drivers (latest).  I am not passing through any hardware.  There is an issue with Windows 10 recognizing all the network devices, i.e., the Windows 10 VM does not see my unRAID server.  I believe this is a Windows 10 issue based on what I've seen on the Internet.

 

Do you have the link for the .103 virtio drivers?  All I can find is .100 and .96.  The .100 worked previously, but perhaps that was just luck.  I'm not passing any hardware either.

 

https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/

Link to comment

I upgraded to rc2 (from b15) yesterday, rebooting from the webGui without powering down.  Following a power cut during the night, I found that unRAID was not running.

 

I could ping the machine, but there was no web access and no telnet or ssh access.

 

I connected via IPMI but the IPMI console was unresponsive, with only a few lines displayable, showing some services having been stopped.  It was not possible to ascertain the time at which these lines had been produced.

 

I performed a reset via IPMI and the system came up, but with a parity check running.  This is the first time in several months, and well in excess of 100 power cuts, that I've experienced an unclean shutdown.  The powerdown plugin had not left a logfile on the flash drive.  Coincidence, or indicative of a problem?

 

I suspect that we will have another power cut before the parity check completes (currently showing just over 7 hours to go).

 

If you have powerdown 2.14 and the unassigned_devices plugin, there is a problem that I am working on.
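
In the meantime, you can check from a console whether the plugin captured anything at all; /boot/logs below is, as far as I recall, the plugin's default location on the flash drive, so adjust it if your powerdown settings differ:

# list any logs powerdown saved to the flash drive
ls -l /boot/logs/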

Link to comment
This topic is now closed to further replies.