unRAID Server Release 6.2.0-beta20 Available



How long does it normally take to upgrade? It has been saying "syncing - please wait..." for about 30 minutes now, and I'm starting to think something is wrong.

OK, it has been an hour now.  Should I cancel the operation, or is that dangerous?


Do you have anything running in the background, such as drive preclears, heavy torrent or Usenet downloading, or anything else that is writing to any of the drives?


Thanks for the response.  Yes, a drive is being cleared; I probably should have stopped that prior to the upgrade.  Should I just let it run then?  I was hoping to have my array online while it was clearing, which is why I decided to try this upgrade.  Right now I just don't want to screw anything up.


You can't upgrade unRAID without rebooting, so you couldn't keep the array online while applying the upgrade anyway.

If you want the upgrade to go through, you need to do one of two things:

Wait until the preclear finishes, or

Stop the preclear, reboot with the upgrade, then restart the preclear after the reboot.

By the way, there is a new preclear beta that detects when an upgrade or sync is in flight and pauses the preclear process.


I was (am) using the standard "Clear" function of the system rather than a preclear.  Once I saw that it would keep my array offline for 12+ hours, I looked up the preclear function, but then found that the newest release allows the same thing: clearing a new disk while the array stays online.  So I decided to do that instead, which puts me where I am.  Does that change the answer at all?  I can't access the GUI at the moment; I would have to cancel the upgrade while it is "syncing".


Did you do anything to test the disk? The built-in clear will not really test it. I think many users will continue to preclear or use some other method to test new disks despite the improvement to the built-in clear capability.

As for your current situation, you'll probably have to kill the dd process or something similar. Someone with a test server (johnnie.black seems to test everything) might know.
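If it does come to stopping the clear by hand, a rough sketch of the idea (assuming the built-in clear really is a dd zeroing the new disk, which is only a guess here) would be:

# assumption: the builtin clear is a dd writing zeros to the new disk
ps -ef | grep '[d]d if='          # confirm the dd process and which device it targets
pkill -f 'dd if=/dev/zero'        # stop it - double-check the match before running this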


I just went ahead and canceled the upgrade, with no problems thankfully.  I then canceled the clear, upgraded through the update function (it looked as if it was half updated; it was showing both 6.1 and 6.2) and rebooted.  I am back online with the disk clearing.  Thank you both for your help.

I will keep the preclear function in mind for future disks; this one is a year old, so I'm not too concerned about infant mortality.


 

For my Test Bed server (specs below) the time was 7:51:00 +/- 2 minutes running 6.1.*.  With dual parity it was 7:55:02.  As you can see, that system has a CPU that is about as low as you can go on the AMD totem pole. (However, I did run the optimization script on it and used its output to reset the md tunables.)

 

I would say that you might have a problem.  Have a look at the SMART reports for your drives as a starting point.

 

Yes, there is absolutely something wrong with one of my disks, or more likely a SATA cable / connection problem. After running overnight, only my three 3TB drives are still running the parity sync (the rest are smaller), and the speed was still around 50 MB/s. I did the disk speed check from the wiki, "for ((i=0;i<12;i++)); do hdparm -tT /dev/hda; done", while the sync was running: I got 35 MB/s on the WD Red, 65 MB/s on the first parity drive, and 1 MB/s on the second parity drive, which is the same model as parity 1. I just did a 2x preclear on those drives; no problems were found and they finished just a minute apart (over USB 3). Moved to SATA, and now one is much slower.
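For checking every suspect drive rather than a single device, a small variation on that wiki one-liner works; the /dev/sdX names below are placeholders, so substitute your own devices:

# placeholder device names - replace with the drives you want to test
for dev in /dev/sdb /dev/sdc /dev/sdd; do
    echo "== $dev =="
    hdparm -tT "$dev"    # -T: cached reads, -t: buffered disk reads
done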

 

I'll let the sync finish so that the array is protected, and then see if I can find the error.

 

Thanks!

 


Thank you for confirming that this worked as expected.  We will be adding this to 6.2 (it was a very trivial thing to fix).  This was nothing more than a basic oversight when we first implemented keymap support for VNC.

 

Glad to hear this involves just a simple change. Looking forward to it.  :)

 

I need you to submit this as a feature request.

 

On the request: please find it here: https://lime-technology.com/forum/index.php?topic=48031.0


Still in the hospital recovering from anaesthetic (stupid wisdom teeth), so I have nothing to do but mess about on my iPad and read forums. Anyway, that is the context as to why I can't just log on to my 6.2 test server to check this out myself:

 

Starting VMs independently of the array is a feature that has been requested a lot, and one I'd love to have in order to run pfSense. Did this "feature" make it into 6.2?


No, it has not (so far, at least).

 

I already run my VMs from a disk external to the array, so I would be quite happy if there were a way to simply get unRAID not to shut down VMs when the array stops.  In the short term I would be quite happy to handle this myself from the command line, or from the 'go' and 'stop' scripts run at system startup and shutdown.
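A very rough sketch of that manual approach, purely as an illustration: it assumes libvirt is already running when the script fires and that a VM named pfsense lives on a disk outside the array, neither of which is guaranteed on a stock setup.

# hypothetical addition to /boot/config/go - assumes libvirtd is already running
# and the VM "pfsense" is defined on a disk outside the array
virsh start pfsense

# and the counterpart for a shutdown/stop script:
virsh shutdown pfsense    # asks the guest for a clean shutdown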


That's a feature I'm looking forward to. I can't tell you how many times I have stopped the array and forgotten my VM was running, so it got stopped as well, LOL.


That sounds like a wonderful and very reasonable compromise.

 

On array shutdown, perhaps unRAID could just do a quick check to see whether the VM is running from a vdisk hosted on an "Array" drive, and if it is not, leave it alone. On boot-up we could just use the go script, as you suggest, to get things started.
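As a purely hypothetical sketch of that check (the VM name pfsense and the path patterns are assumptions; /mnt/diskN and /mnt/user are the usual array paths on unRAID):

# hypothetical: decide whether a VM's first vdisk sits on an array path
vdisk=$(virsh domblklist pfsense --details | awk '$2 == "disk" {print $4; exit}')
case "$vdisk" in
    /mnt/disk[0-9]*|/mnt/user/*) echo "vdisk is on the array - stop the VM" ;;
    *)                           echo "vdisk is off-array - leave the VM running" ;;
esac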

 

I know things are not always that simple BUT I wonder if LT would be willing to compromise here?


Perhaps LT could consider having independent start/stop control of the cache drive/pool?

 

This would be more in line with the devices available in a plain unRAID configuration.

 

That makes sense too. I guess LT would not want to tie any of their code / features etc. to something which is out of their control (e.g. unassigned devices).

 

Maybe I should take the rest of this conversation outside this thread.


 

Yeah, it is not really part of this thread; it should go under feature requests.

 


 

No more discussion on this here please, peeps. Moved over to the Feature Request Board:

 

http://lime-technology.com/forum/index.php?topic=48036.0


Little bug / oversight here

 

On 6.1.x, dockerMan would take any name for the container and sanitize it (at the very least, it would remove spaces from the name)

 

On 6.2, dockerMan takes the name as entered and passes it to the docker run command.  Spaces are disallowed in the docker name, and the command will fail.

 

It should probably revert to the old behaviour and sanitize the input (or at least throw up a warning about invalid characters). This will happen a lot as users rename plex or PlexMediaServer to "Plex Media Server".
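Purely as an illustration (this is not dockerMan's actual code), a sanitizing check along the lines of the old 6.1.x behaviour might look like this; the regex approximates Docker's restriction on container names:

# illustration only - not dockerMan's actual code
# Docker container names are restricted to roughly [a-zA-Z0-9][a-zA-Z0-9_.-]*
name="Plex Media Server"
if [[ ! "$name" =~ ^[a-zA-Z0-9][a-zA-Z0-9_.-]*$ ]]; then
    name="${name// /}"    # strip spaces, as the 6.1.x behaviour at least did
    echo "Container name sanitized to: $name"
fi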

 

http://lime-technology.com/forum/index.php?topic=38548.msg460341#msg460341

 

This is the list of templates that previously would have worked under 6.1.9 and will now fail under 6.2 without editing the name:

 

Air Video HD

Calibre Server2

Duck DNS

FileBot UI

Mumble Server (both of them)

Netatmo Librato


Today I experienced a GUI failure in 6.2.0-beta20 that appears to be identical to what I reported in 6.2.0-beta19.  Specifically, my Dashboard, Main, Shares and VM pages show that the array isn't started.  I am not able to download diagnostics (I get a 404 error) and can't generate them from the command line either.  I am able to get the system log from Tools, and the Docker tab is fully accessible.  All Dockers still work, the VMs are still working, and I am able to access the shares from Windows as well as from MC.  This has been happening on my system since moving to 6.2, and the strange thing is that the interval between failures seems to be at least four days.  As I asked in beta19, any advice is appreciated.

[Attachments: shares.png, vms.png, trying_to_get_diagnostics.png, March_31_system_log.txt]


You ran out of memory... Possibly you're allocating too much memory to the VMs, or one of your Docker containers is saving files into RAM because of bad path mappings?

 

Reboot and try the diagnostics again.  Failing that,

 

Edit the file config/disk.cfg on your flash drive.

 

Change the line that says startArray="yes" to startArray="no" and reboot.
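For example, from the console (assuming the flash drive is mounted at /boot, which is the usual location), the edit can be made like this; treat it as a sketch and keep a backup first:

cp /boot/config/disk.cfg /boot/config/disk.cfg.bak               # keep a backup
sed -i 's/^startArray="yes"/startArray="no"/' /boot/config/disk.cfg
reboot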


Diagnostics attached.  I changed my two OpenELEC VMs from 4GB to 2GB.  Honestly, I'm not sure what they should be, but I didn't think it would be an issue given that I have 32GB in my system.  How would I know if something was saving files into RAM?  Not sure what to look for.  Thanks for the help.

tower-diagnostics-20160331-1224.zip


Very likely you ran out of space in /var/log, which by default can hold only 128MB. The syslog, VM logs and Docker logs are all stored in this space.

 

Look at the Dashboard page in the GUI; it will show the utilization percentage of /var/log. Alternatively, log into the CLI and run

df -h /var/log
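If /var/log does turn out to be full, a quick way to see which logs are taking the space (standard tools, nothing unRAID-specific):

du -ah /var/log 2>/dev/null | sort -rh | head -n 10    # ten largest files under /var/log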

 


Right off the hop, if you have continuing OOM errors, I'd get rid of the checksum plugin, as it can be a real pig on memory usage when it runs scheduled checks against tons and tons of files.

 


Starting VMs independently of the array is a feature that has been requested a lot, and one I'd love to have in order to run pfSense. Did this "feature" make it into 6.2?

This is not going to happen anytime soon, if ever.  It interferes with features we have planned for the future.  "Array Start/Stop" is really a misnomer: in this context "array" refers to the entire set of devices attached to the server, not just the ones that are assigned to the parity-protected array.

This topic is now closed to further replies.