Unraid OS version 6.9.0 available

On 3/7/2021 at 8:56 AM, h0schi said:

Update to 6.9 was successful, but automatic spindown is not working anymore :(

 

I'm not an expert, but people have said the Dynamix Autofan plugin or Telegraf docker are culprits. For me, disabling [inputs.smart] in Telegraf fixed it as was suggested earlier in this thread.
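For reference, "disabling [inputs.smart]" means commenting out (or removing) the smart input section in your Telegraf config. A sketch of what that looks like, assuming a stock telegraf.conf (option names shown are from Telegraf's smart plugin; check your own config before editing):

```toml
# telegraf.conf -- sketch: the smart input section commented out so Telegraf
# stops shelling out to smartctl, which can wake or keep disks spinning.
# [[inputs.smart]]
#   ## path = "/usr/bin/smartctl"
#   ## attributes = true
```

Restart the Telegraf container after the change so the new config is picked up.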

 

Edit: Here's the relevant bug report:

 

16 minutes ago, mudsloth said:

 

I'm not an expert, but people have said the Dynamix Autofan plugin or Telegraf docker are culprits. For me, disabling [inputs.smart] in Telegraf fixed it as was suggested earlier in this thread.

 

Edit: Here's the relevant bug report:

 

Thx @mudsloth.

I do not use AutoFan or Telegraf.

I solved it yesterday by increasing the "Tunable (poll_attributes)" value under Settings -> Disk Settings.

 

I set the spindown delay to 1 hour and increased the Tunable (poll_attributes) to 3700 seconds.
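The reasoning behind those two numbers can be sketched as a quick check: the SMART poll interval is deliberately set slightly longer than the spindown delay, so an attribute poll cannot land inside every idle window and reset the spindown timer (values taken from the post above; the variable names are just for illustration):

```shell
spindown_delay_s=$((60 * 60))   # spindown timer set to 1 hour
poll_attributes_s=3700          # Tunable (poll_attributes), in seconds

# The poll interval exceeds the spindown delay by a 100-second margin,
# so SMART polling cannot keep the disks awake on every cycle.
margin_s=$((poll_attributes_s - spindown_delay_s))
echo "$margin_s"  # 100
```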

7 hours ago, itimpi said:

Have you made sure that you do not have a console (or screen) session open with the current directory set to one on the array? This will stop the array from completing the unmounting correctly and can thus lead to a subsequent unclean shutdown.

 

I assume that reply was meant for me, but yes, I had closed all open console sessions, and even took the proactive step of shutting down Docker containers manually before attempting the reboots. As I've got the pre- and post-diagnostics for this latest occurrence, I'll start doing some comparison today.

 

I just find it odd that I've experienced 3 supposed unclean shutdowns since the upgrade to 6.9.0 stable. I rebooted numerous times while using 6.9.0 RC2 but don't recall a single unclean shutdown occurring. And as reported, I believe they're completely false errors, as all of my parity checks in the last year have passed with no errors found. When I first started with unRAID in 2019 using my original storage chassis (an old Norco RPC-4020), I had numerous actual errors, but not a single one since I moved to the Supermicro CSE-847.

 

6 hours ago, JorgeB said:

There have been a couple of reports where it looks like the shutdown time-out is not being honored, i.e., Unraid considers an unclean shutdown after a couple of seconds even if the set time is much longer, changing the setting (Settings -> Disk Settings) to re-apply it fixed it.

 

That's something I'll try. It could indeed be related to a settings issue, just like the minor issue with the incorrect colors being shown for disk utilization thresholds. Although I just noticed a few minutes ago that the Dashboard tab has reverted back to all drives showing as green, while the Main tab shows them correctly.

 

Regardless, I'll try resetting the timeout for unclean shutdowns and hopefully after this current parity check completes, I won't see another likely false one. Thanks!

 

12 hours ago, AgentXXL said:

add the 2 x 16TB new disks to the array. After the array started and the disks were formatted, disk utilization has now returned to using the proper colors. The 2 new disks are both green as they're empty, and the rest are accurately showing as red as I let them fill as completely as possible.

 

Does the coloring function work relative to the disks in the array rather than to 100%? Your description made me think of conditional formatting in Excel, like so (rounding up applied to all percentages to avoid the middle yellow color, for the purposes of this example).

 

[Attached image: Excel conditional-formatting example]

5 minutes ago, Cull2ArcaHeresy said:

 

Does the coloring function work relative to the disks in the array rather than to 100%? Your description made me think of conditional formatting in Excel, like so (rounding up applied to all percentages to avoid the middle yellow color, for the purposes of this example).

 

[Attached image: Excel conditional-formatting example]

 

From what I know of how it works, it's based on the disk utilization thresholds and is out of 100%. The thresholds are set up in Disk Settings for both a warning level (the color should be orange) and a critical level (red). Green should only be used below the warning threshold. As I prefer to fill my disks as completely as possible, my warning threshold is at 95% and my critical threshold is at 99%. This is just to alert me when I need to purchase more disks.
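The threshold behavior described above can be sketched as a small function. This is a guess at the logic, not Unraid's actual code; the function name usage_color and the default thresholds are assumptions based on this post:

```shell
# Map a disk's utilization percentage to a status color:
# green below the warning threshold, orange between warning and
# critical, and red at or above critical.
usage_color() {
  local used=$1 warning=${2:-95} critical=${3:-99}
  if [ "$used" -ge "$critical" ]; then
    echo red
  elif [ "$used" -ge "$warning" ]; then
    echo orange
  else
    echo green
  fi
}

usage_color 50   # green
usage_color 96   # orange
usage_color 99   # red
```

With thresholds of 95/99 as above, a nearly full disk showing green (as on the Dashboard tab here) would mean the thresholds are not being read, not that the logic differs.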

 

Regardless, it's unusual that it's displaying correctly on the Main tab, but not on the Dashboard tab. A very minor issue though, compared to my other reported issues with the flash drive disconnect and 3 x 'unclean shutdowns' that all appear to be false.

 

I'm OK with tolerating the issues, as 6.9.0 stable has only been out for just over a week, and no matter what, there are always issues found in stable releases. Once a release is exposed to users of all levels of knowledge, and to a huge increase in the diversity and configuration of users' hardware platforms, more bugs are likely to be found. But in my experience, the Limetech and community devs are outstanding in their support availability.

 


Does anyone know how to resolve these errors that keep flashing across the top of the page every time I go to a different tab?

 

WARNING: syntax error, unexpected '$' in Unknown on line 13 in /usr/local/emhttp/plugins/dynamix/include/PageBuilder.php +line 34
WARNING: syntax error, unexpected '$' in Unknown on line 13 in /usr/local/emhttp/plugins/dynamix/include/PageBuilder.php +line 34
WARNING: syntax error, unexpected '$' in Unknown on line 17 in /usr/local/emhttp/plugins/dynamix/include/PageBuilder.php +line 34

 

8 minutes ago, doremi said:

Does anyone know how to resolve these errors that keep flashing across the top of the page every time I go to a different tab?

 


WARNING: syntax error, unexpected '$' in Unknown on line 13 in /usr/local/emhttp/plugins/dynamix/include/PageBuilder.php +line 34
WARNING: syntax error, unexpected '$' in Unknown on line 13 in /usr/local/emhttp/plugins/dynamix/include/PageBuilder.php +line 34
WARNING: syntax error, unexpected '$' in Unknown on line 17 in /usr/local/emhttp/plugins/dynamix/include/PageBuilder.php +line 34

 

 

On 3/2/2021 at 8:23 AM, limetech said:

Reverting back to 6.8.3

If you have a cache disk/pool it will be necessary to either:

  • restore the flash backup you created before upgrading (you did create a backup, right?), or
  • on your flash, copy 'config/disk.cfg.bak' to 'config/disk.cfg' (restore 6.8.3 cache assignment), or
  • manually re-assign storage devices assigned to cache back to cache

 

This is because, to support multiple pools, the code detects the upgrade to 6.9.0 and moves the 'cache' device settings out of 'config/disk.cfg' and into 'config/pools/cache.cfg'. If you downgrade back to 6.8.3, these settings need to be restored.

 

 

 

I reverted back to 6.8.3 because 6.9.0 does not support my Netapp SAS 4-Port 3/6 GB QSFP PCIE 111-00341+B0 Controller (PMC Sierra PM8003).

 

I originally upgraded via the link on the GUI, without looking here first.

 

I had assumed that the GUI upgrade process took a backup of the current image, so I did not take a separate one myself.

 

I had copied 'config/disk.cfg.bak' to 'config/disk.cfg' as suggested here.
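For anyone following along, the copy step from limetech's instructions is a single file copy on the flash drive. The sketch below runs against a scratch directory so it is safe to dry-run; on a real server the flash is mounted at /boot, and the file contents (the cacheId line here is purely hypothetical) are whatever your pre-upgrade 6.8.3 settings were:

```shell
# Stand-in for the flash mount point (/boot on a live Unraid server)
FLASH=$(mktemp -d)
mkdir -p "$FLASH/config"

# Hypothetical pre-upgrade settings preserved as disk.cfg.bak by the upgrade
echo 'cacheId="SAMPLE_CACHE_DEVICE"' > "$FLASH/config/disk.cfg.bak"

# The actual restore step: put the 6.8.3 cache assignment back in place
cp "$FLASH/config/disk.cfg.bak" "$FLASH/config/disk.cfg"

grep cacheId "$FLASH/config/disk.cfg"
```

Do this with the array stopped, before booting back into 6.8.3.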

 

I then manually took a backup of unraid.

 

I downgraded back to 6.8.3, but no drives were shown in the Cache. 

 

I can add both drives back, but they both show the blue square meaning new device.

 

Other drives have appeared correctly with the green circle.

 

I have not restarted the array yet, as I am concerned I may lose data on the cache drives.

 

Any suggestions?

 

 


Upgrade from 6.8.3 to 6.9.0 was quick and all dockers are working properly.

 

Passthrough of my 2nd video card (Nvidia 1070) to Windows 10 VMs is broken after the upgrade.

 

I have tried passing a ROM (I didn't need to in the past) and even tried recreating the VM from scratch. It goes into a boot loop immediately after the point where the video card is upgraded to the Nvidia-specific driver.

 

If the VM is left as VNC (i.e. no passthrough), it works fine.

 

Changing the VM back to VNC after it is broken does not work; it goes into a boot loop if the VM is restarted.

 

I have also noted that in the new VM (in VNC), a SteelSeries Apex M800 keyboard does not work. I had to plug in a more basic keyboard to allow setup-screen data entry. I pass the USB controller to the VM, and a SteelSeries Rival 310 eSports mouse plugged into that controller works fine.

tower-diagnostics-20210307-1042.zip

7 hours ago, pete69 said:

I can add both drives back, but they both show the blue square meaning new device.

You can do that; as long as there's no "all data on this device will be deleted after array start" warning in front of the cache devices, Unraid will import the existing pool.

48 minutes ago, JorgeB said:

You can do that; as long as there's no "all data on this device will be deleted after array start" warning in front of the cache devices, Unraid will import the existing pool.

I can confirm this. I upgraded from 6.8.3 to 6.9 and hit the issue with the 8TB Ironwolf ST8000VN004 parity disk becoming invalid.

I reverted back to 6.8.3 via the web interface and my devices were not active after booting into 6.8.3. I had to add them again (blue circle), and all data on them (containers/VMs) became available again the moment the cache disks were mounted.

Any idea what could be the root cause for the ST8000VN004 issue with 6.9.0? I got two of them and all my HDDs are attached to an IBM M1015, crossflashed to LSI9211-8i in IT mode. Is it a driver/kernel-module issue?

On 3/4/2021 at 2:46 PM, NAStyBox said:

I upgraded from 6.8.3 with no issues.

 

However before I went ahead with the upgrade I read this thread. So just for giggles I did the following before upgrading. 

1. Disabled auto-start on all dockers
2. Disabled VMs entirely
3. Set Domains and Appdata shares to Cache "Yes", and ran mover to clear my SSDs just in case an issue came up. They're XFS.
4. Backed up flash drive
5. Rebooted
6. Ran upgrade
7. Rebooted
8. Let it run 20 minutes while I checked the dash, array, and NIC for any issues.
9. Re-enabled Docker autostarts and VMs without starting them
10. Rebooted

...and I'm good as gold. In fact the whole house uses an Emby Docker and the array is so fast I think I might leave it there. 

 

 

Just wanted to say thanks for posting this. I followed it (except step 8, as I wouldn't know what to look for!) and didn't seem to have any problems, except that my Windows VM took two goes to boot; other than that all is good and my cache is now back to normal.


One of my private plugins requires certain users to be configured in the /etc/passwd file.

I used to achieve this by ensuring that the required users were present in /boot/config/passwd and, during system startup, that file was copied to /etc.

 

Since upgrading to 6.9.0, this passwd file copy no longer seems to be happening.  I don't remember whether I performed the copy, or whether the standard system was doing it.  Has something been changed which would account for this?
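One way to stop depending on stock behavior that may have changed is to perform the copy explicitly from the flash drive's go script (/boot/config/go), which runs at boot. A sketch, demonstrated on a scratch tree so it can be dry-run; the svcuser entry is purely hypothetical, and on a real server the paths would be /boot/config/passwd and /etc/passwd with no $ROOT prefix:

```shell
# Scratch stand-ins for /boot/config/passwd and /etc/passwd
ROOT=$(mktemp -d)
mkdir -p "$ROOT/boot/config" "$ROOT/etc"
printf 'svcuser:x:1001:1001::/home/svcuser:/bin/false\n' > "$ROOT/boot/config/passwd"

# The line you would add to /boot/config/go:
cp "$ROOT/boot/config/passwd" "$ROOT/etc/passwd"

grep -c svcuser "$ROOT/etc/passwd"
```

Note that replacing /etc/passwd wholesale will clobber any users the system created after boot, so merging entries (e.g. with grep/append) may be safer than a plain copy.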


Given I run only Seagate drives, does anyone know if the following drives have any issues with this upgrade?

 

12TB

ST12000NE0008

fw:  EN01

 

8TB

ST8000NM0055

fw:  SN04

13 minutes ago, optiman said:

Given I run only Seagate drives, does anyone know if the following drives have any issues with this upgrade?

AFAIK there have only been issues with the ST8000VN004, and only when used with an LSI HBA, not clear so far if it's a general issue with that combo or it only affects some users.

51 minutes ago, JorgeB said:

AFAIK there have only been issues with the ST8000VN004, and only when used with an LSI HBA, not clear so far if it's a general issue with that combo or it only affects some users.

Is there a general topic for this, to maybe pool resources? I spent a couple of hours earlier going through my diagnostics and Googling, and it seems to be a fairly common issue with NAS systems and this controller/drive combo (the 10TB drive was also mentioned).

3 hours ago, JorgeB said:

AFAIK there have only been issues with the ST8000VN004, and only when used with an LSI HBA, not clear so far if it's a general issue with that combo or it only affects some users.

Any indication of an issue before starting the array? I was thinking of putting a clean trial copy of Unraid on another USB stick and booting from it before upgrading. Could this be a valid way to check hardware/drive compatibility in advance?


If you have Fix Common Problems installed, then Tools -> Update Assistant will let you know of anything it knows will cause you problems. It's not definitive though.

 

But a new trial key will work. You don't need to set up any drives; if they show up asking to be assigned, then you're probably good to go.

14 hours ago, falsenegative said:

I can confirm this. I upgraded from 6.8.3 to 6.9 and hit the issue with the 8TB Ironwolf ST8000VN004 parity disk becoming invalid.

I reverted back to 6.8.3 via the web interface and my devices were not active after booting into 6.8.3. I had to add them again (blue circle), and all data on them (containers/VMs) became available again the moment the cache disks were mounted.

Any idea what could be the root cause for the ST8000VN004 issue with 6.9.0? I got two of them and all my HDDs are attached to an IBM M1015, crossflashed to LSI9211-8i in IT mode. Is it a driver/kernel-module issue?

 

See my earlier post on how I fixed this.  I have had no issues since.

 

Kev.

