Unraid OS version 6.9.1 available



Recommended Posts

5 minutes ago, ultimz said:

I should have... but not 100% sure. Thanks for a great plugin... it was quick to solve my issue after the reboot.

I could eventually add a pause so the plugin waits again for about 20, or a maximum of 30, seconds for an internet connection.

 

You don't virtualize pfSense/IPFire/OPNsense, or have Pi-hole running on Unraid or something similar, so that you have no connection?
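A pause like the one proposed could be sketched as a small retry loop. The function name, the connectivity check, and the 30-second cap here are illustrative assumptions, not the plugin's actual code:

```shell
#!/bin/bash
# Sketch: wait up to N seconds for a connectivity check to succeed.
# The check command is supplied by the caller, so it is easy to swap or test.
wait_for_net() {
  local timeout="$1"; shift
  local waited=0
  until "$@"; do
    [ "$waited" -ge "$timeout" ] && return 1   # give up after the cap
    sleep 1
    waited=$((waited + 1))
  done
  return 0
}

# Example: treat one successful ping to the gateway as "online".
# wait_for_net 30 ping -c1 -W1 192.168.1.1
```

Passing the check command as arguments keeps the timeout logic separate from the definition of "online", which matters when the router itself is a VM that comes up after the array.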

Link to post

18 minutes ago, ich777 said:

I could eventually add a pause so the plugin waits again for about 20, or a maximum of 30, seconds for an internet connection.

 

You don't virtualize pfSense/IPFire/OPNsense, or have Pi-hole running on Unraid or something similar, so that you have no connection?

Sounds like a good idea.

 

And no, I have a physical router, and I did have internet access on other devices when I rebooted the server.

Link to post

Hi all, hoping that someone will be able to help me out. I was following the instructions below for updating the cache pool to the 1MiB partition alignment. I followed the unassign/re-assign method as described, and made it through steps 1-4, with a balance running and completing at each step. I am now ready to move on to the second drive of my two-drive cache and am hitting a problem.

 

When I did the above unassign/re-assign for the first time, I chose the second of my two drives: first removing it from the pool, then starting the array, letting the balance occur, then stopping and re-adding the drive. After the balance completed for the second time, I could see that the second drive now has 1MiB alignment.

 

When I now remove the 1st SSD and restart the array, leaving only the 2nd (newly balanced) SSD, I get an error saying "No BTRFS devices / unmountable: no file system", asking me to format the drive.

 

I tried to stop the array and select both drives again, or just the first drive, and am getting the same error. I am not sure what to do... would really appreciate help. I have attached diagnostics: thrizznetunraid-diagnostics-20210310-1201.zip
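As a side note, the pool membership and the partition alignment can both be checked from the command line. `/dev/sdb` below is a placeholder for one of the cache SSDs, and the commands are guarded so the sketch is safe to run anywhere:

```shell
#!/bin/bash
# Inspect BTRFS pool membership and partition alignment.
DEV="${1:-/dev/sdb}"   # placeholder: substitute your cache SSD

# Which devices does each BTRFS filesystem actually contain?
command -v btrfs >/dev/null && btrfs filesystem show

# A 1MiB-aligned partition starts at sector 2048 (with 512-byte sectors).
command -v fdisk >/dev/null && fdisk -l "$DEV" 2>/dev/null | grep "^${DEV}" \
  || echo "could not read a partition table from $DEV"
```

If `btrfs filesystem show` lists only one device for the pool after a drive was removed, mounting the remaining member alone can fail exactly as described, which is why the diagnostics are the right thing to post.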

 

 

Link to post

I have a lot of these messages in the syslog. No "old" (pre-upgrade) or other browser sessions exist to the Unraid server. Any ideas?

 

Mar 10 12:32:06 Juno nginx: 2021/03/10 12:32:06 [error] 9704#9704: *225050 limiting requests, excess: 20.104 by zone "authlimit", client: 192.168.168.10, server: , request: "PROPFIND /login HTTP/1.1", host: "junol"
Mar 10 12:32:06 Juno nginx: 2021/03/10 12:32:06 [error] 9704#9704: *225056 limiting requests, excess: 20.829 by zone "authlimit", client: 192.168.168.10, server: , request: "PROPFIND /login HTTP/1.1", host: "junol"
Mar 10 12:32:07 Juno nginx: 2021/03/10 12:32:07 [error] 9704#9704: *225058 limiting requests, excess: 20.731 by zone "authlimit", client: 192.168.168.10, server: , request: "PROPFIND /login HTTP/1.1", host: "junol"
Mar 10 12:32:07 Juno nginx: 2021/03/10 12:32:07 [error] 9704#9704: *225068 limiting requests, excess: 20.582 by zone "authlimit", client: 192.168.168.10, server: , request: "PROPFIND /login HTTP/1.1", host: "junol"
Mar 10 12:32:07 Juno nginx: 2021/03/10 12:32:07 [error] 9704#9704: *225071 limiting requests, excess: 20.456 by zone "authlimit", client: 192.168.168.10, server: , request: "PROPFIND /login HTTP/1.1", host: "junol"
Mar 10 12:32:07 Juno nginx: 2021/03/10 12:32:07 [error] 9704#9704: *225074 limiting requests, excess: 20.360 by zone "authlimit", client: 192.168.168.10, server: , request: "PROPFIND /login HTTP/1.1", host: "junol"
Mar 10 12:32:08 Juno nginx: 2021/03/10 12:32:08 [error] 9704#9704: *225076 limiting requests, excess: 20.207 by zone "authlimit", client: 192.168.168.10, server: , request: "PROPFIND /login HTTP/1.1", host: "junol"
Mar 10 12:32:08 Juno nginx: 2021/03/10 12:32:08 [error] 9704#9704: *225086 limiting requests, excess: 20.076 by zone "authlimit", client: 192.168.168.10, server: , request: "PROPFIND /login HTTP/1.1", host: "junol"
Mar 10 12:32:08 Juno nginx: 2021/03/10 12:32:08 [error] 9704#9704: *225092 limiting requests, excess: 20.815 by zone "authlimit", client: 192.168.168.10, server: , request: "PROPFIND /login HTTP/1.1", host: "junol"
Mar 10 12:32:09 Juno nginx: 2021/03/10 12:32:09 [error] 9704#9704: *225094 limiting requests, excess: 20.682 by zone "authlimit", client: 192.168.168.10, server: , request: "PROPFIND /login HTTP/1.1", host: "junol"
Mar 10 12:32:09 Juno nginx: 2021/03/10 12:32:09 [error] 9704#9704: *225096 limiting requests, excess: 20.583 by zone "authlimit", client: 192.168.168.10, server: , request: "PROPFIND /login HTTP/1.1", host: "junol"
Mar 10 12:32:09 Juno nginx: 2021/03/10 12:32:09 [error] 9704#9704: *225100 limiting requests, excess: 20.428 by zone "authlimit", client: 192.168.168.10, server: , request: "PROPFIND /login HTTP/1.1", host: "junol"
Mar 10 12:32:09 Juno nginx: 2021/03/10 12:32:09 [error] 9704#9704: *225103 limiting requests, excess: 20.296 by zone "authlimit", client: 192.168.168.10, server: , request: "PROPFIND /login HTTP/1.1", host: "junol"
Mar 10 12:32:10 Juno nginx: 2021/03/10 12:32:10 [error] 9704#9704: *225105 limiting requests, excess: 20.200 by zone "authlimit", client: 192.168.168.10, server: , request: "PROPFIND /login HTTP/1.1", host: "junol"
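These lines come from nginx's request-rate limiting (the "authlimit" zone) on the login endpoint. A quick way to see which clients are tripping the limit is to tally the `client:` field from the log; the syslog path is an assumption about where your log lives:

```shell
#!/bin/bash
# Count rate-limited requests per client IP in a syslog excerpt.
count_limited_clients() {
  grep -o 'client: [0-9.]\+' | sort | uniq -c | sort -rn
}

# Usage on a live system (log path is an assumption):
#   count_limited_clients < /var/log/syslog
```

In this case every entry already names the same client, but on a busier network the tally makes the offender obvious at a glance.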

 

Link to post

Now that there are at least two different Seagate drive models with issues after upgrading, can anyone confirm they upgraded with no issues with the same drives I have?

 

12TB

ST12000NE0008

fw:  EN01

 

8TB

ST8000NM0055

fw:  SN04

 

LSI 9305-24i x8 controller

Link to post
32 minutes ago, Gico said:

I have a lot of these messages in the syslog. No "old" (pre-upgrade) or other browser sessions exist to the Unraid server. Any ideas?

 


Mar 10 12:32:06 Juno nginx: 2021/03/10 12:32:06 [error] 9704#9704: *225050 limiting requests, excess: 20.104 by zone "authlimit", client: 192.168.168.10, server: , request: "PROPFIND /login HTTP/1.1", host: "junol"
[... same message repeated through 12:32:10, see the full log above ...]

 

@Gico Do you know which client on your network is 192.168.168.10? It's connecting to your server repeatedly via /login, and nginx is complaining about it.
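To put a name to that IP, the usual first steps are a reverse DNS lookup and the ARP/neighbor table. These are generic Linux commands (run from the Unraid shell), guarded so they no-op where unavailable:

```shell
#!/bin/bash
# Identify the device behind an IP address.
ip_addr="${1:-192.168.168.10}"

# Reverse DNS, if your DNS server has a PTR record for it.
command -v nslookup >/dev/null && timeout 5 nslookup "$ip_addr"

# MAC address from the neighbor table; the OUI (first three octets)
# often reveals the hardware vendor.
command -v ip >/dev/null && ip neigh show "$ip_addr"

true  # keep a clean exit status even if both lookups were skipped
```

The MAC vendor prefix is frequently the fastest clue: it narrows "some client on my LAN" down to, say, a particular phone or NAS brand.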

Link to post
12 minutes ago, BRiT said:

@optiman did you do the steps listed by another user who said it fixed things for him in the 6.9.0 thread?

 

 

 

I saw it, but I would rather not have to disable the EPC or the low current spin up settings if I don't need to.  That's why I'm asking if anyone out there has the same drives as I do and upgraded with no issues.

Edited by optiman
Link to post

The update to v6.9.0 was flawless, so I installed v6.9.1 without much thought... lesson learned.

 

  • 1st reboot... nothing working. No network, no nothing. Cold sweat started running already... move the NAS to a display, or a display to the NAS... either way this is going to be a disaster...
  • 2nd reboot... (just in case)... NAS is accessible through the network!

Not too bad! Starting the array... (more cold sweat)

Array started... parity check started (ooookay, so obviously no clean shutdown/reboot there).

 

  • Opening the Docker page... big red letters: containers are starting...
  • 5 minutes later... no change; reload the page, still "containers are starting"... damn... OK, the parity check does add a performance hit to the tiny A10 CPU running it xD
  • 8 minutes later, reloading the page... ahhh, the red letters are gone, all containers appear to be running!

Next check: running some manual tests to verify all custom user scripts have run...

To my big surprise, at this point they all had.

 

This was a bit more excitement than what I am looking for from a NAS, but definitely time for some beer after all that cold sweat.

Cheers, guys, thank you for all the hard work and for the Docker upgrades that appear to be coming out xD

 

Link to post
2 hours ago, ken-ji said:

@Gico Do you know which client on your network is 192.168.168.10? It's connecting to your server repeatedly via /login, and nginx is complaining about it.

Thanks. I restarted that PC and these errors are gone for now.

Link to post
1 hour ago, Gico said:

Thanks. I restarted that PC and these errors are gone for now.

Well, it's back, without any app running on that PC. The only cause I can think of is that I run the Pi-hole docker on the server and this PC connects to it, although the docker is running and all seems fine.

I reset this PC's TCP/IP configuration not to use Pi-hole, in order to try to isolate the issue.

Link to post
4 hours ago, optiman said:

 

I saw it, but I would rather not have to disable the EPC or the low current spin up settings if I don't need to.  That's why I'm asking if anyone out there has the same drives as I do and upgraded with no issues.

 

You won't notice much of a difference, if any, from disabling EPC and low-current spinup. Try just the EPC if you are so inclined. Drives will still spin down; it's not like your power bill will double. All we are doing here is making the drives far less aggressive with their sleep modes so the controller doesn't freak out. I'd rather have this fix than the alternative of drives dropping.

 

I believe it to be an issue with a recent merge into the combined mpt3sas driver and kernel; it was all fine under 4.19. Disable the features and await a non-firmware fix later; you can then re-enable the aggressive power saving if you wish.
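For reference, the EPC/low-current-spinup change discussed in these posts is done with Seagate's SeaChest utilities. The sketch below follows the commands reported in the 6.9.0 thread; verify the exact flags against your SeaChest build's --help, since option names vary between releases, and note that both settings can be turned back on with `enable`:

```shell
#!/bin/bash
# Sketch: disable EPC and low-current spinup on one Seagate drive.
# /dev/sg4 is a placeholder; list drives first with: SeaChest_Info --scan
DRIVE="${1:-/dev/sg4}"

if command -v SeaChest_PowerControl >/dev/null; then
  SeaChest_PowerControl -d "$DRIVE" --showEPCSettings      # sanity check first
  SeaChest_PowerControl -d "$DRIVE" --EPCfeature disable   # drop aggressive sleep states
  SeaChest_Configure    -d "$DRIVE" --lowCurrentSpinup disable
else
  echo "SeaChest tools not installed; nothing changed"
fi
```

Showing the current EPC settings before changing anything gives you a record to restore from if you later want the power saving back.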

 

I have had zero issues since this fix across all of my LSI-based controllers.

 

Kev.

Link to post
9 hours ago, optiman said:

Now that there are at least two different Seagate drive models with issues after upgrading, can anyone confirm they upgraded with no issues with the same drives I have?

 

12TB

ST12000NE0008

fw:  EN01

 

8TB

ST8000NM0055

fw:  SN04

 

LSI 9305-24i x8 controller

 

I'm seeing almost identical error numbers on two ST4000NM0023 drives (via the onboard LSI SAS2308 on a Supermicro X9DRD-7LN4F) one day after updating to 6.9.1, which is also one day after running a successful parity check (on 6.9.0, just prior to upgrading to 6.9.1).

I've never seen issues with these drives before. The only other change was the installation of the SAS spin-down plugin (which I'm now removing to test).

 

Diags attached.

 

[two screenshots of the drive errors attached]

 

preston-diagnostics-20210311-1519.zip

Link to post
8 hours ago, TDD said:

Try the EPC disable / low-current spinup disable per my posts. They are reversible if nothing better happens after a reboot.

I've had no issue since.

 

Kev.

To second this, I tried it last night and it seems to be going well.  It was easier than I was expecting.

 

I'm just creating a General Support post collating the entries spread across the 6.9.0 and 6.9.1 topics, including the step-by-step that I followed, if you don't mind.

Link to post
On 3/9/2021 at 7:21 PM, Ender331 said:

I am having an issue attempting to upgrade to 6.9.1. Every time I attempt to apply the upgrade, I get the following error:


plugin: updating: unRAIDServer.plg
plugin: downloading: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.9.1-x86_64.zip ... done
plugin: downloading: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.9.1-x86_64.md5 ... done

writing flash device - please wait...
Archive: /tmp/unRAIDServer.zip
plugin: run failed: /bin/bash retval: 1

After that, I received the "flash drive is not read/write" error. Is it possible my flash drive is failing?
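Before concluding the flash has failed, one can confirm it really has gone read-only. On Unraid the flash mounts at /boot (that path is the Unraid convention; the helper name here is mine):

```shell
#!/bin/bash
# Probe whether a mount point is still writable.
flash_writable() {
  local mnt="${1:-/boot}"
  local probe="$mnt/.write_test.$$"
  if touch "$probe" 2>/dev/null; then
    rm -f "$probe"
    echo "$mnt is writable"
  else
    echo "$mnt is NOT writable"
    return 1
  fi
}

# flash_writable /boot
# dmesg | grep -iE 'usb.*(error|reset)'   # kernel-side clues on a dying stick
```

A stick that mounts fine but fails the write probe is exactly the symptom the upgrade script reports with "flash drive is not read/write".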

 

If I need to replace the flash drive, is it possible to re-use an old 16GB SSD or eMMC module attached to a USB enclosure? I would think it would have better endurance than a USB flash drive.

As an update: it seems that my flash drive had failed. I ended up replacing it, and now everything works well.

 

And to answer my other question, I ended up replacing the flash drive with this SSD that I reclaimed from an old HP 620 Plus, together with this enclosure. As of this moment all seems well, and the license transferred without a hitch. Thanks for the help.

Link to post

Well... this was a painful update. At reboot I was surprised with "Boot error"; putting a backup on the flash drive still gave this error, so recreating a new flash drive and putting the config back on it fixed the problem.

Then I got problems with the network card (Mellanox ConnectX-3 Pro): it's a dual NIC, but Unraid was showing 3 NICs 🤔

I found out that it was detecting the second NIC twice. The GUI wasn't helping, so I edited network.cfg manually; rebooted, not fixed...

Then I found network-rules.cfg and edited that; rebooted, and this fixed it. (network-rules.cfg is gone after a reboot.)
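To debug a case like this, it helps to compare what the kernel actually enumerated against what Unraid pinned in its persistent-rules file (paths per Unraid convention; a duplicated NIC would show up as two eth* names mapped to the same MAC):

```shell
#!/bin/bash
# Compare kernel-enumerated NICs against Unraid's persistent mapping.
echo "== interfaces the kernel sees =="
ip -brief link show 2>/dev/null || true

RULES=/boot/config/network-rules.cfg
echo "== persistent interface rules =="
if [ -f "$RULES" ]; then
  cat "$RULES"      # a duplicated NIC shows as two eth* entries for one MAC
else
  echo "$RULES not present (it is regenerated at boot)"
fi
```

That regeneration is also why the file can be "gone" after a reboot, as noted above: Unraid rebuilds it from the hardware it detects.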

Edited by sjaak
Link to post

Updated my main server to 6.9.0, then 6.9.1, and have run into two issues that only make sense in relation to the upgrade:

 

1) Unassigned Devices and Preclear don't seem to be able to recognize drives plugged in via USB. I've connected both a brand-new 8TB drive that I want to preclear (for testing purposes only; I know it's not required) and an older 4TB drive that's known to be good but has been retired. It recognizes neither: UD doesn't show any devices, and preclear_binhex.sh -l reports "No unassigned disks detected".

I've confirmed that the USB dock I'm using works by plugging it into my Win10 machine; both drives show up in disk management there, so the dock and both drives are functional.
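When UD shows nothing, the first question is whether the kernel itself registered the USB disk. A couple of generic checks, with no Unraid-specific tooling assumed:

```shell
#!/bin/bash
# If the plugin shows nothing, ask whether the kernel sees the disk at all.

usb_disks() {
  # Block devices whose transport is USB, per lsblk's TRAN column.
  lsblk -o NAME,TRAN 2>/dev/null | awk '$2 == "usb" {print $1}'
}

echo "USB-attached block devices:"
usb_disks

# Kernel-side evidence that the dock enumerated at all:
# dmesg | grep -iE 'usb.*storage|attached scsi'
```

If the disk appears here but not in UD, the problem is in the plugin layer; if it doesn't appear at all, the kernel/driver side (or the dock on that particular port) is the place to look.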

 

2) In an attempt to double-check the preclear results, I tried to open a PuTTY terminal and was presented with an error message (screenshot attached).

I don't know if my version of PuTTY is seriously out of date (I haven't a clue when I last updated it), or if there's something that I need to add back into Unraid to get it to accept a PuTTY connection, but this is mildly frustrating.

 

nas-diagnostics-20210311-1428.zip

 

This is the first issue I've had with an upgrade since I transitioned to 6.x, so I'm certainly NOT complaining! I'm the one who wants you guys to fix the Windows Update process! :) I'm just mildly disappointed and a bit frustrated because I've got this new drive to get installed...

Link to post

Is anyone else noticing that the cache pool size isn't showing correctly now? It shows correctly initially, but then, for example, mine drops from 76.9GB used down to 36.7GB used.

 

My docker.img and libvirt.img have a combined size of 53.6GB and reside on the cache pool, so I know it's wrong.
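Because BTRFS reports space in several different ways (allocated chunks vs. data actually used vs. raw device capacity), the GUI number can be sanity-checked directly against the filesystem. /mnt/cache is Unraid's conventional pool mount point, an assumption here:

```shell
#!/bin/bash
# Compare what BTRFS itself reports against the GUI number.
POOL="${1:-/mnt/cache}"   # Unraid's conventional pool mount; adjust to yours

if command -v btrfs >/dev/null && [ -d "$POOL" ]; then
  btrfs filesystem df "$POOL"      # data/metadata usage inside allocated chunks
  btrfs filesystem usage "$POOL"   # overall view, including RAID overhead
else
  echo "btrfs not available or $POOL not mounted"
fi
```

If `btrfs filesystem df` agrees with the larger figure, the bug is in how the GUI reads the numbers rather than in the pool itself.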

Link to post
7 hours ago, Ender331 said:

And to answer my other question, I ended up replacing the Flash drive with  This SSD that I reclaimed from an old HP 620 plus and this enclosure.  As of this moment all seems well, and the license transferred without a hitch.  Thanks for the help.

 

Enclosures are risky. If each enclosure has a unique flash GUID then you are good to go, but if they all share the same flash GUID then that GUID will get blacklisted. See  https://wiki.unraid.net/USB_Card_Readers

Link to post
12 minutes ago, FreeMan said:

if there's something that I need to add back into Unraid to get it to accept a PuTTY connection

 

I can help with this part... remove everything from your go script that deals with /root/.ssh/

 

Your authorized_keys and other files should be placed in /boot/config/ssh/root/ now and they will be symlinked to /root/.ssh automatically.  

 

See https://wiki.unraid.net/Unraid_OS_6.9.0#SSH_Improvements 
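The migration described above boils down to moving the key files and letting Unraid create the symlink on boot. A non-destructive sketch (it copies rather than moves; the function name is mine, the paths are from the post):

```shell
#!/bin/bash
# Copy SSH keys to the persistent location used by Unraid 6.9+.
migrate_ssh_keys() {
  local src="${1:-/root/.ssh}" dst="${2:-/boot/config/ssh/root}"
  mkdir -p "$dst" || return 1
  local f
  for f in "$src"/authorized_keys "$src"/id_*; do
    # Unraid symlinks $dst back to /root/.ssh automatically at boot.
    [ -f "$f" ] && cp "$f" "$dst"/
  done
  return 0
}

# migrate_ssh_keys   # defaults match the paths in the post above
```

After copying, also remove any /root/.ssh handling from the go script, per the advice above, so the two mechanisms don't fight.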

Link to post
25 minutes ago, ljm42 said:

 

I can help with this part... remove everything from your go script that deals with /root/.ssh/

 

Your authorized_keys and other files should be placed in /boot/config/ssh/root/ now and they will be symlinked to /root/.ssh automatically.  

 

See https://wiki.unraid.net/Unraid_OS_6.9.0#SSH_Improvements 

Close, but no cigar. :(

 

I had keys used to connect to my backup server stored in /root/.ssh/, along with a few lines in the go script to move them on boot, and I did move them per the linked instructions. Unfortunately, PuTTY is still throwing the same error when I try to connect to my server.

Link to post
10 minutes ago, FreeMan said:

Close, but no cigar. :(

 

I had keys used to connect to my backup server stored in /root/.ssh/, along with a few lines in the go script to move them on boot, and I did move them per the linked instructions. Unfortunately, PuTTY is still throwing the same error when I try to connect to my server.

 

Updating my PuTTY client from 0.62 to 0.74 seems to have done the trick. I guess that makes the SSH updates a GoodThing™. :)

Link to post
