
unRAID Server Release 6.2 Stable Release Available


I've done that; it is only appearing on limetech's version of Plex...

 

Pulling image: limetech/plex:latest
IMAGE ID [latest]: Pulling from limetech/plex.
IMAGE ID [6ffe5d2d6a97]: Already exists.
IMAGE ID [f4e00f994fd4]: Already exists.
IMAGE ID [e99f3d1fc87b]: Already exists.
IMAGE ID [a3ed95caeb02]: Already exists.
IMAGE ID [ededd75b6753]: Already exists.
IMAGE ID [1ddde157dd31]: Already exists.
IMAGE ID [79321844ebba]: Pulling fs layer. Downloading 100% of 772 B. Verifying Checksum. Download complete. Extracting. Pull complete.
IMAGE ID [ebe499b4c161]: Pulling fs layer. Downloading 100% of 119 MB. Download complete.
IMAGE ID [cb387480a3c1]: Pulling fs layer. Download complete.

TOTAL DATA PULLED: 119 MB

 

Command:

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name="PlexMediaServer" --net="host" --privileged="true" -e TZ="America/Los_Angeles" -e HOST_OS="unRAID" -v "/mnt/user/appdata/plexmediaserver/":"/config":rw limetech/plex
Unable to find image 'limetech/plex:latest' locally
latest: Pulling from limetech/plex
6ffe5d2d6a97: Already exists
f4e00f994fd4: Already exists
e99f3d1fc87b: Already exists
a3ed95caeb02: Already exists
a3ed95caeb02: Already exists
ededd75b6753: Already exists
1ddde157dd31: Already exists
a3ed95caeb02: Already exists
79321844ebba: Already exists
ebe499b4c161: Pulling fs layer
cb387480a3c1: Pulling fs layer
cb387480a3c1: Download complete
ebe499b4c161: Download complete
ebe499b4c161: Pull complete
ebe499b4c161: Pull complete
cb387480a3c1: Pull complete
cb387480a3c1: Pull complete
docker: layers from manifest don't match image configuration.
See '/usr/bin/docker run --help'.

 

The command failed.

You're sure you deleted the docker.img file?  We've never seen that fail to fix the problem...
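If the error persists even after recreating docker.img, another thing worth trying is a manual re-pull of just that image, which discards the locally cached layers. Sketched as a dry run here (the run wrapper only echoes the commands; swap it out to actually execute on the unRAID box):

```shell
# Dry-run sketch: the usual manual fix for "layers from manifest don't match
# image configuration" is to discard the cached image and pull it fresh.
run() { echo "+ $*"; }                    # replace the echo with "$@" to really run
run docker rmi -f limetech/plex:latest    # drop the locally cached layers
run docker pull limetech/plex:latest      # re-fetch the manifest and layers
```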


I haven't been participating in the betas, but I'm looking forward to giving this a spin. A few questions:

 

If I need to stub more than one device, is it a comma-separated format like this:

 

append vfio-pci.ids=8086:1528, 8077:4456, 1234:5678  initrd=/bzroot

 

(Would be useful to add this to the first post)

 

 

 

Secondly, is there a recommended process for migrating to the new system share scheme when doing an upgrade? The notes in the OP seem to discuss a fresh install. For Docker, I gather that deleting the existing docker image and re-adding containers via 'Previous Applications' would be the recommendation?

 

My main concern is my TvHeadEnd plug-in. I want to be sure I don't lose my card, channel, and recording setups. Any advice here?

 

Thanks for all the great work

 

Peter

Drop the spaces after the commas; I don't think you want them there.  Also, mine starts with "pci-stub" rather than "vfio-pci".  This is how I have it:
pci-stub.ids=1b4b:9230,1b21:0612,1131:7160,8086:10d3
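For reference, a complete syslinux.cfg label then looks something like this. This is only a sketch: the device IDs are placeholders (use your own from lspci -n), and the rest of the label should match what's already on your flash drive:

```
label unRAID OS
  kernel /bzimage
  append pci-stub.ids=1b4b:9230,1b21:0612 initrd=/bzroot
```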


A suggestion for Tom ...

 

Good idea.  Please use the existing post #2 for this purpose.

 

Thank you Tom!

 

I've made a start, now need moderators and Lime Technology staff to keep it up to date and complete.

 

Users, before you upgrade, please read the first post!  Then review the second.  I believe you are more likely to have a better upgrade experience if you do read both.



ok... How about something like this from me:

 

Unlike 6.1.9, the docker system in 6.2 no longer supports the docker.img file being located on a disk mounted with the Unassigned Devices plugin.  You must locate it either on the cache drive or on the array (but performance will suffer if you do).

 

Unlike 6.1.9, you can now reference your appdata shares within the docker templates as /mnt/user/appdata/appName and have it work correctly.

 

6.2 now also better supports host volume shares stored outside of your array.  See this Docker FAQ entry for more details.
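To make the appdata point concrete, a 6.2 container template volume mapping can reference the user share path directly; appName below is just a placeholder:

```
Host path:      /mnt/user/appdata/appName
Container path: /config
Access mode:    Read/Write
```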

 

 


Upgraded from 6.1.9, no problems.  Thank you Tom, LimeTech team and community!  :)

 

-Update:  minor issue-

  • I have two Win7 VMs that were migrated from 6.1.9 to 6.2. I followed the suggested QXL video driver edit for both.
  • Upon booting, the VNC Remote window froze at "Starting Windows" on both VMs; however, I was able to use Windows Remote Desktop from another PC on the LAN, log into the VM, and everything was normal (VM fully booted, desktop accessible, etc.).
  • After restarting the Win7 VMs, the VNC Remote window functions normally. So in my case the VNC Remote window froze until the second boot.

 


@Limetech -> fantastic job on the 6.2 release.  Have you had an opportunity to test dual parity on lower-end hardware?  I am GUESSING I won't be able to run this on my D525, but if you have some testing to share with your D525 I would love to hear it. Cheers.

 

@Garycase -> I just upgraded my Supermicro D525 to 6.2 from 6.1.9 (after uninstalling the Powerdown plugin).  So far it seems to operate as well as 6.1.9.  Single-disk parity speed is good at 119.4 MB/sec, and with no other load the CPU jumps between 22-80% usage.

I have not yet checked copy speeds to the server's SSD cache... hopefully it will be faster than the max of 40 MB/sec it has had since the jump from v5 to v6.



 

Nice to know.  I may (finally) upgrade my D525... it's the only system I still have on v5, but it's been "Oh So Stable" that I've been resisting.  I have two servers on v6.2 with dual parity; one other I'm going to upgrade this week; and the trusty old Atom.  I gather you've been happy with it on v6 => is that right?  Are parity check speeds as fast as they were on v5?

 

r.e. dual parity ==> I don't think it's right that you won't "... be able to run this on my D525" => but it may indeed be noticeably slower.  It'll be interesting to see if Tom's done any testing in this regard.  If I do decide to upgrade this server, I won't be able to test that, as it already has the max drives supported by the motherboard, so it can't support a 2nd parity drive.  If you happen to test it, let us know the results.

 

 


Updated from 6.1.9 to 6.2 (after getting the weird Dynamix bug; I should probably disable auto-updates now).

Encountered some weird issues powering down (but then, I had the powerdown plugin installed and didn't restart after attempting to upgrade).

I guess my biggest gripe is that I really need to get another decent flash drive now. 256MB won't support in-place plugin-based upgrades anymore. :C

 

 


Upgrade of Main Server from 6.2 RC5 to 6.2 Stable was smooth with no issue.

 

Wanted to add that I have also used some of the new 6.2 features of unRAID for the first time and had no issues.

 

I decided that the time I had set aside for upgrading to 6.2 would be used for some basic admin too, so I figured I'd add a drive as well. I ended up nudging a cable, which resulted in a "red ball", sorry, "red X". I decided to let unRAID rebuild the drive, just to see.

 

Anyway, the new drive added fine. The drive rebuilt fine. All VMs and Dockers are working fine. Network running fine.

 

Life is good!  :)


Haven't upgraded yet, but today I noticed the web GUI is acting up.  Has something changed in Community Apps for 6.2 that is breaking 6.1.9?

 

Warning: parse_ini_file(state/network.ini): failed to open stream: No such file or directory in /usr/local/emhttp/plugins/dynamix/template.php on line 43
Warning: extract() expects parameter 1 to be array, boolean given in /usr/local/emhttp/plugins/dynamix/template.php on line 43

 


Just a question here.....

 

By any chance, have the BTRFS balancing options been updated in the stable build? I remember from a few beta builds ago that there was going to be a fix for switching the BTRFS cache pool from RAID 1 to RAID 0, so that it would not rebuild the pool back to RAID 1 every time you added or removed a drive. I know I may be one of the very few users on here running their cache pool in RAID 0, but I do have my reasons (space and speed).


I was going to try to stop the array, but it just goes on forever saying it is retrying the unmount.  How does a non-updated system start doing this?



 

The root of the problem is the dynamix.plg plugin; I had the exact same issue as yours this morning.  I couldn't even do a "powerdown -r" via the terminal, as the system wouldn't respond and the web interface kept looping while trying to unmount.

 

I ended up doing a "shutdown -r now" via the terminal, as I had no other choice.  Once the system was back up, I deleted the dynamix.plg located at /config/plugins/dynamix.plg on the flash drive, then rebooted the server one more time... the GUI problem was solved. However, my system initiated a parity check as soon as I started the array.
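The recovery above boils down to something like the following sketch (the flash mount is assumed to be /boot, as on a stock unRAID install, and it is guarded so it only acts when the stale plugin is actually present):

```shell
# Remove the stale dynamix.plg from the flash drive, then reboot.
if [ -f /boot/config/plugins/dynamix.plg ]; then
    rm /boot/config/plugins/dynamix.plg   # drop the stale plugin
    shutdown -r now                       # reboot so the stock GUI loads
else
    echo "no stale dynamix.plg found"
fi
```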

 

 

You can read all about it here https://lime-technology.com/forum/index.php?topic=51891.0


Upgraded from RC5.  All good.

 

Was there going to be some script/tool to let people know whether dual parity would speed up or slow down their servers?

You will never get a speed-up from using dual parity.  The only question is whether the slowdown during parity checks is negligible or significant.


 

 

ok... How about something like this from me:

Unlike 6.1.9, the docker system in 6.2 no longer supports the docker.img file being located on a disk mounted with the Unassigned Devices plugin.  You must locate it either on the cache drive (if you don't have one, use your Unassigned Devices drive and set all of your shares to not use the cache drive) or on the array (but performance will suffer if you do).

 

So this means that I cannot continue to use my current setup? I have a 256GB SSD mounted via UD that stores my docker.img and that my downloads are written to. Then I have a regular 500GB hard drive as the cache disk, where these downloads get moved once they are finished.

 

I would love to store the Docker.img on the SSD for performance reasons.

 

So my only option right now, as I see it, is to use the SSD as cache, remove the old cache disk, and run the mover more than once a day?! I could use the "move at percentage" script from the User Scripts plugin...
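For what it's worth, the "move at percentage" idea reduces to a few lines. This is only a sketch under assumptions (the /mnt/cache path and the /usr/local/sbin/mover location), not the actual User Scripts plugin code:

```shell
#!/bin/bash
# Invoke unRAID's mover once cache usage crosses a threshold (sketch only).
CACHE="${1:-/mnt/cache}"     # mount to watch; an assumption, adjust as needed
THRESHOLD="${2:-70}"         # percent-used trigger
USED=$(df --output=pcent "$CACHE" 2>/dev/null | tail -1 | tr -dc '0-9')
if [ -n "$USED" ] && [ "$USED" -ge "$THRESHOLD" ]; then
    echo "cache at ${USED}%, invoking mover"
    [ -x /usr/local/sbin/mover ] && /usr/local/sbin/mover
else
    echo "cache at ${USED:-?}%, below threshold; nothing to do"
fi
```

Run from cron (or the User Scripts schedule) as often as you like; it is a no-op while the cache stays below the threshold.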

 

Is my understanding of this correct?



 

OK, but was there going to be some tool/script to let people know whether their system will noticeably degrade once dual parity is enabled?



 

LT's view is that the performance impact will be "minimal". One might interpret that to mean that if your hardware performs well with single parity (i.e. on 6.1.9), then 6.2 should be fine.

 

Dual parity

 

For large arrays, 'dual parity' (the facility to have a second parity disk that is not simply a mirror of the first) would be useful. This would permit two simultaneous drive failures without losing data. unRAID does not have dual parity at present, but 'P + Q redundancy' is part of the future roadmap. (dead link)

In a P + Q redundancy system (as in a RAID-6 system), there would be two redundancy disks: ‘P’, which is the ordinary XOR parity, and ‘Q’, which is a Reed-Solomon code. This would allow unRAID to recover from any 2 disk errors, with minimal impact on performance.
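The 'P' half of that is easy to see with a toy example: three data bytes and their XOR parity. The numbers below are arbitrary, nothing unRAID-specific:

```shell
# XOR parity demo: lose any one value and the parity rebuilds it.
d1=37; d2=142; d3=9
p=$(( d1 ^ d2 ^ d3 ))          # the 'P' parity byte
rebuilt=$(( p ^ d1 ^ d3 ))     # pretend the disk holding d2 died
echo "$rebuilt"                # prints 142, i.e. d2 recovered
```

The 'Q' half uses Galois-field arithmetic rather than plain XOR, which is what makes recovering from two simultaneous failures possible.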

 

I have not investigated or discussed this to confirm, though. I can't imagine LT would make a statement like this, however, if it was going to be even close to incorrect. I would imagine that if there were likely to be a big impact on performance, or on the required hardware, they would have called it out.

 

They have not defined "minimal", but I would say a reasonable reading of it is less than a 10% slowdown.

 

I think you're safe.


Though it's certainly not the fault of 6.2... has anyone else had issues (i.e., found it impossible) installing Ubuntu 16.10 (Yakkety Yak) server as a VM on unRAID? :)

The only progress I've made so far is getting the installer to open :P For that, I ended up changing every setting a few times; by trial and error I found that OVMF was the 'issue', and SeaBIOS is seemingly OK. But I don't really want to use SeaBIOS instead of OVMF; OVMF has better UEFI support, IIRC.

 

Anyone got any ideas? :)



 

The server has all Seagate 5900rpm 4TB data disks: 3 data, 1 parity, and 1 pre-cleared hot-spare disk, all pulled from external enclosures back when that was the only way to get hold of the 4TB units.  I will need to add another data disk by the end of the year.

 

The single-disk parity check, with nothing else running, takes 10 hours 4 minutes.  I no longer remember the parity check time on v5.

 

Drawbacks to v6 on the D525:

1 - Writes to the SSD cache are about half as fast as on v5; a Samsung 850 PRO maxes out at about 40 MB/sec on 6.1.9.  Will update with 6.2 speeds when able.

2 - The web GUI is slow to update.

3 - The ReiserFS-to-XFS conversion was brutal.

4 - Read times during parity checks suffer.
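For cache-write numbers like the "40" above, a crude but repeatable check is a flushed dd write. The target path is whatever mount you want to test; /tmp below is only for illustration (on unRAID you would point it at your cache mount instead):

```shell
# conv=fdatasync forces a flush so the reported MB/s is honest, not cached.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync 2>&1 | tail -1
rm -f /tmp/ddtest              # clean up the scratch file
```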

 

Testing dual parity (slightly nervous to be the first on the D525 and have something go south, as this is my main data server):

1 - Anything I should be concerned about prior to stopping the array and adding the second disk?


Updated and everything working well. I had to recreate my docker image but that went smoothly.

 

One question about VMs. I have one Windows 10 machine that I created in 6.1.9. It worked fine after the upgrade. I was surprised that I did not have to do the post-update VM procedure, as it booted to Windows right away, but then I noticed it was a SeaBIOS machine. Is there an advantage to using OVMF instead for this VM? I am not well versed in the difference. I initially installed the VM using the video guide from limetech.

 

Thanks for the great work. I love my unRaid server.



 

From the very first beta that included dual parity, implementation and operation have been very smooth.  In fact, I can't recall a single report of even small issues with starting it or using it!  Go for it!  The performance impact appears to be small, possibly more significant on under-powered systems.  Let us know.


Second time attempting to upgrade to 6.2. I deleted the dynamix.plg; the server reboots, but there's no web GUI and no shares are accessible. I have IPMI running, and I could see during boot that br0 lost its carrier ("br0: carrier lost"). Not sure if this helps. I'm able to access the console through IPMI. Very frustrating.

 

****

I reverted back to 6.1.9 and went through the logs:

Sep 17 10:28:36 Tower dhcpcd[1601]: dhcpcd-6.8.1 starting
Sep 17 10:28:36 Tower dhcpcd[1601]: br0: executing `/lib/dhcpcd/dhcpcd-run-hooks' PREINIT
Sep 17 10:28:36 Tower dhcpcd[1601]: br0: executing `/lib/dhcpcd/dhcpcd-run-hooks' NOCARRIER
Sep 17 10:28:36 Tower dhcpcd[1601]: br0: waiting for carrier
Sep 17 10:28:39 Tower kernel: e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Sep 17 10:28:39 Tower kernel: br0: port 2(eth1) entered listening state
Sep 17 10:28:39 Tower kernel: br0: port 2(eth1) entered listening state
Sep 17 10:28:55 Tower kernel: br0: port 2(eth1) entered learning state
Sep 17 10:29:10 Tower kernel: br0: topology change detected, propagating
Sep 17 10:29:10 Tower kernel: br0: port 2(eth1) entered forwarding state
Sep 17 10:29:10 Tower dhcpcd[1601]: br0: carrier acquired
Sep 17 10:29:10 Tower dhcpcd[1601]: br0: executing `/lib/dhcpcd/dhcpcd-run-hooks' CARRIER
Sep 17 10:29:10 Tower dhcpcd[1601]: br0: delaying IPv4 for 0.9 seconds
Sep 17 10:29:10 Tower dhcpcd[1601]: br0: using ClientID 01:00:25:90:57:9e:a8
Sep 17 10:29:10 Tower dhcpcd[1601]: br0: soliciting a DHCP lease
Sep 17 10:29:10 Tower dhcpcd[1601]: br0: sending DISCOVER (xid 0xacde3e83), next in 3.4 seconds
Sep 17 10:29:11 Tower dhcpcd[1601]: br0: offered 192.168.1.6 from 192.168.1.1
Sep 17 10:29:11 Tower dhcpcd[1601]: br0: sending REQUEST (xid 0xacde3e83), next in 4.6 seconds
Sep 17 10:29:11 Tower dhcpcd[1601]: br0: acknowledged 192.168.1.6 from 192.168.1.1
Sep 17 10:29:11 Tower dhcpcd[1601]: br0: probing for 192.168.1.6
Sep 17 10:29:11 Tower dhcpcd[1601]: br0: ARP probing 192.168.1.6 (1 of 3), next in 2.0 seconds
Sep 17 10:29:13 Tower dhcpcd[1601]: br0: ARP probing 192.168.1.6 (2 of 3), next in 1.7 seconds
Sep 17 10:29:14 Tower dhcpcd[1601]: br0: ARP probing 192.168.1.6 (3 of 3), next in 2.0 seconds
Sep 17 10:29:16 Tower dhcpcd[1601]: br0: leased 192.168.1.6 for 172800 seconds
Sep 17 10:29:16 Tower dhcpcd[1601]: br0: renew in 86400 seconds, rebind in 151200 seconds
Sep 17 10:29:16 Tower dhcpcd[1601]: br0: writing lease `/var/lib/dhcpcd/dhcpcd-br0.lease'
Sep 17 10:29:16 Tower dhcpcd[1601]: br0: adding IP address 192.168.1.6/24
Sep 17 10:29:16 Tower dhcpcd[1601]: br0: adding route to 192.168.1.0/24
Sep 17 10:29:16 Tower dhcpcd[1601]: br0: adding default route via 192.168.1.1
Sep 17 10:29:16 Tower dhcpcd[1601]: br0: executing `/lib/dhcpcd/dhcpcd-run-hooks' BOUND
Sep 17 10:29:16 Tower dhcpcd[1601]: forking to background
Sep 17 10:29:16 Tower dhcpcd[1639]: br0: ARP announcing 192.168.1.6 (1 of 2), next in 2.0 seconds
Sep 17 10:29:16 Tower dhcpcd[1601]: forked to background, child pid 1639

br0 is the problem, I believe. Do I need to disable the bridge?
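For anyone debugging the same thing, a couple of generic iproute2 checks show what the bridge and its ports are doing. Note the log above shows br0 spending roughly 30 seconds in the listening/learning states, which is ordinary STP forward delay rather than a lost link, and the DHCP lease does eventually arrive:

```shell
# Quick bridge diagnostics (generic iproute2, not unRAID-specific).
command -v ip >/dev/null || { echo "iproute2 not installed"; exit 0; }
ip -br link show            # per-interface carrier state (look for LOWER_UP)
ip link show type bridge    # list bridge devices and their flags
```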

