Unraid OS version 6.9.0 available


Recommended Posts

6 hours ago, JorgeB said:

There have been a couple of reports where it looks like the shutdown time-out is not being honored, i.e., Unraid considers it an unclean shutdown after a couple of seconds even if the set time is much longer. Changing the setting (Settings -> Disk Settings) to re-apply it fixed it.

 

That's something I'll try. It could indeed be related to a settings issue, much like the minor issue with the incorrect colors shown for the disk utilization thresholds. That said, I noticed a few minutes ago that the Dashboard tab has reverted to showing all drives as green, while the Main tab shows them correctly.

 

Regardless, I'll try resetting the timeout for unclean shutdowns, and hopefully once the current parity check completes I won't see another likely false one. Thanks!
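
As a sanity check after re-applying it, I'll also peek at what's actually stored on the flash. A quick look from the console, assuming the shutdown time-out is kept in config/disk.cfg (I haven't confirmed the exact key name):

# Look for the stored shutdown time-out on the flash -- this assumes it lives in
# /boot/config/disk.cfg (key name not confirmed here).
grep -i timeout /boot/config/disk.cfg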

 

Link to comment
12 hours ago, AgentXXL said:

...add the 2 x 16TB new disks to the array. After the array started and the disks were formatted, disk utilization returned to using the proper colors. The 2 new disks are both green as they're empty, and the rest are accurately showing as red since I let them fill as completely as possible.

 

Does the coloring function work relative to the other disks in the array rather than to a fixed 0-100% scale? Your description made me think of conditional formatting in Excel, like the example below (percentages rounded up to avoid the middle yellow color, for the purposes of this illustration).

 

[attached image: Excel conditional-formatting example]

Link to comment
5 minutes ago, Cull2ArcaHeresy said:

 

Does the coloring function work relative to the other disks in the array rather than to a fixed 0-100% scale? Your description made me think of conditional formatting in Excel, like the example below (percentages rounded up to avoid the middle yellow color, for the purposes of this illustration).

 

[attached image: Excel conditional-formatting example]

 

From what I know of how it works, it's based on the disk utilization thresholds and is measured against 100% of each disk. The thresholds are set up in Disk Settings, with both a warning level (which should show orange) and a critical level (red). Green should only be used when utilization is below the warning threshold. As I prefer to fill my disks as completely as possible, my warning threshold is at 95% and my critical threshold is at 99%. This is just to alert me when I need to purchase more disks.
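
To illustrate how I understand the logic (a hypothetical sketch only, not the actual dynamix webGUI code):

# Hypothetical sketch of the threshold-based coloring -- not the real webGUI code.
# The warning/critical values mirror Settings -> Disk Settings.
utilization_color() {
  local used_pct=$1 warning=${2:-95} critical=${3:-99}
  if   [ "$used_pct" -ge "$critical" ]; then echo red
  elif [ "$used_pct" -ge "$warning" ]; then echo orange
  else echo green
  fi
}

utilization_color 97   # -> orange with my 95/99 thresholds
utilization_color 40   # -> green

So each disk is judged against its own percentage used, not against the other disks in the array.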

 

Regardless, it's unusual that it displays correctly on the Main tab but not on the Dashboard tab. It's a very minor issue, though, compared to my other reported issues with the flash drive disconnect and the 3 x 'unclean shutdowns' that all appear to be false.

 

I'm OK with tolerating these issues, as 6.9.0 stable has only been out for just over a week, and no matter what, issues are always found in stable releases. Once a release is exposed to users of all knowledge levels, and to a huge increase in the diversity and configuration of users' hardware platforms, more bugs are likely to be found. But in my experience, the Limetech and community devs are outstanding in their support availability.

 

Link to comment

Does anyone know how to resolve these errors that keep flashing across the top of the page every time I go to a different tab?

 

WARNING: syntax error, unexpected '$' in Unknown on line 13 in /usr/local/emhttp/plugins/dynamix/include/PageBuilder.php +line 34
WARNING: syntax error, unexpected '$' in Unknown on line 13 in /usr/local/emhttp/plugins/dynamix/include/PageBuilder.php +line 34
WARNING: syntax error, unexpected '$' in Unknown on line 17 in /usr/local/emhttp/plugins/dynamix/include/PageBuilder.php +line 34

 

Link to comment
On 3/2/2021 at 8:23 AM, limetech said:

Reverting back to 6.8.3

If you have a cache disk/pool it will be necessary to either:

  • restore the flash backup you created before upgrading (you did create a backup, right?), or
  • on your flash, copy 'config/disk.cfg.bak' to 'config/disk.cfg' (restore 6.8.3 cache assignment), or
  • manually re-assign storage devices assigned to cache back to cache

 

This is because, to support multiple pools, the code detects the upgrade to 6.9.0 and moves the 'cache' device settings out of 'config/disk.cfg' and into 'config/pools/cache.cfg'. If you downgrade back to 6.8.3, these settings need to be restored.

 

 

 

I reverted back to 6.8.3 because 6.9.0 does not support my Netapp SAS 4-Port 3/6 GB QSFP PCIE 111-00341+B0 Controller (PMC Sierra PM8003).

 

I originally upgraded via the link on the GUI, without looking here first.

 

I had assumed that the GUI upgrade process took a backup of the current image, so I did not take a separate one myself.

 

I had copied 'config/disk.cfg.bak' to 'config/disk.cfg' as suggested here.

 

I then manually took a backup of unraid.

 

I downgraded back to 6.8.3, but no drives were shown in the Cache. 

 

I can add both drives back, but they both show the blue square meaning new device.

 

Other drives have appeared correctly with the green circle.

 

I have not restarted the array yet, as I am concerned I may lose data on the cache drives.

 

Any suggestions?

 

 

Link to comment

Upgrade from 6.8.3 to 6.9.0 was quick and all dockers are working properly.

 

Pass-through of the 2nd video card (Nvidia 1070) to Windows 10 VMs is broken after the upgrade.

 

I have tried passing the ROM (didn't need to in the past) and even recreating the VM from scratch. It goes into a boot loop immediately after the point where the video card switches to the Nvidia-specific driver.

 

If the VM is left on VNC (i.e. no pass-through), it works fine.

 

Changing the VM to use VNC after it breaks does not help; it goes into a boot loop if the VM is restarted.

 

I have also noted that in the new VM (on VNC), a SteelSeries Apex M800 keyboard does not work. I had to plug in a more basic keyboard to get through the setup screens. I pass the USB controller to the VM, and a SteelSeries Rival 310 eSports mouse plugged into the same controller works fine.

tower-diagnostics-20210307-1042.zip

Edited by Letoh
Link to comment
7 hours ago, pete69 said:

I can add both drives back, but they both show the blue square meaning new device.

You can do that; as long as there's no "all data on this device will be deleted after array start" warning in front of the cache devices, Unraid will import the existing pool.

Link to comment
48 minutes ago, JorgeB said:

You can do that; as long as there's no "all data on this device will be deleted after array start" warning in front of the cache devices, Unraid will import the existing pool.

I can confirm this. I upgraded from 6.8.3 to 6.9 and hit the issue with the 8TB Ironwolf ST8000VN004 parity disk becoming invalid.

I reverted back to 6.8.3 via the web interface, and my cache devices were not active after booting into 6.8.3. I had to add them again (blue circle), and all the data on them (containers/VMs) became available again the moment the cache disks were mounted.

Any idea what could be the root cause of the ST8000VN004 issue with 6.9.0? I've got two of them, and all my HDDs are attached to an IBM M1015 crossflashed to an LSI 9211-8i in IT mode. Is it a driver/kernel-module issue?

  • Like 1
Link to comment
On 3/4/2021 at 2:46 PM, NAStyBox said:

I upgraded from 6.8.3 with no issues.

 

However, before I went ahead with the upgrade I read this thread. So, just for giggles, I did the following before upgrading.

1. Disabled auto-start on all dockers
2. Disabled VMs entirely
3. Set Domains and Appdata shares to Cache "Yes", and ran mover to clear my SSDs just in case an issue came up. They're XFS.
4. Backed up flash drive
5. Rebooted
6. Ran upgrade
7. Rebooted
8. Let it run 20 minutes while I checked the dash, array, and NIC for any issues.
9. Reenabled Docker autostarts and VMs without starting them
10. Rebooted

...and I'm good as gold. In fact the whole house uses an Emby Docker and the array is so fast I think I might leave it there. 

 

 

Just wanted to say thanks for posting this. I followed it (except step 8; I wouldn't know what to look for!) and didn't seem to have any problems, other than my Windows VM taking two goes to boot. Apart from that, all is good and my cache is now back to normal.

Edited by figrin_dan
  • Like 2
Link to comment

One of my private plugins requires certain users to be configured in the /etc/passwd file.

I used to achieve this by ensuring that the required users were present in /boot/config/passwd and, during system startup, that file was copied to /etc.

 

Since upgrading to 6.9.0, this passwd file copy no longer seems to be happening. I don't remember whether I performed the copy myself or whether the stock system was doing it. Has something been changed that would account for this?
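
In the meantime, one workaround I'm considering is doing the copy explicitly myself. A minimal sketch, assuming the stock /boot/config/go startup script still runs at boot (which I believe it does):

# Added to /boot/config/go -- copy the persisted users back into place at startup.
# Adjust (or extend to group/shadow files) as the plugin requires.
cp /boot/config/passwd /etc/passwd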

Link to comment
13 minutes ago, optiman said:

Given I run only Seagate drives, does anyone know if the following drives have any issues with this upgrade?

AFAIK there have only been issues with the ST8000VN004, and only when used with an LSI HBA; it's not clear so far whether it's a general issue with that combo or whether it only affects some users.
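
If you want to check whether you have that combination before upgrading, something like this from the console should show it (the drive letter is just a placeholder):

# Check for the affected combo: an ST8000VN004 behind an LSI HBA.
smartctl -i /dev/sdX | grep -i 'device model'   # replace sdX; look for ST8000VN004
lspci | grep -iE 'lsi|broadcom'                 # look for an LSI/Broadcom SAS HBA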

Link to comment
51 minutes ago, JorgeB said:

AFAIK there have only been issues with the ST8000VN004, and only when used with an LSI HBA; it's not clear so far whether it's a general issue with that combo or whether it only affects some users.

Is there a general topic for this, to maybe pool resources? I spent a couple of hours earlier going through my diagnostics and Googling, and it seems to be a fairly common issue with NAS systems and this controller/drive combo (the 10TB drive was also mentioned).

Link to comment
3 hours ago, JorgeB said:

AFAIK there have only been issues with the ST8000VN004, and only when used with an LSI HBA; it's not clear so far whether it's a general issue with that combo or whether it only affects some users.

Is there any indication of an issue before starting the array? I was thinking of putting a clean trial copy of Unraid on another USB stick and booting from it to check before upgrading. Would that be a valid way to check hardware/drive compatibility before upgrading?

Link to comment

If you have Fix Common Problems installed, then Tools -> Update Assistant will let you know of anything it knows will cause you problems. It's not definitive, though.

 

But yes, a new trial key installed on another USB stick will work. You don't need to set up any drives; if they show up asking to be assigned, then you're probably good to go.

Link to comment
14 hours ago, falsenegative said:

I can confirm this. I upgraded from 6.8.3 to 6.9 and hit the issue with the 8TB Ironwolf ST8000VN004 parity disk becoming invalid.

I reverted back to 6.8.3 via the web interface, and my cache devices were not active after booting into 6.8.3. I had to add them again (blue circle), and all the data on them (containers/VMs) became available again the moment the cache disks were mounted.

Any idea what could be the root cause of the ST8000VN004 issue with 6.9.0? I've got two of them, and all my HDDs are attached to an IBM M1015 crossflashed to an LSI 9211-8i in IT mode. Is it a driver/kernel-module issue?

 

See my earlier post on how I fixed this.  I have had no issues since.

 

Kev.

  • Thanks 1
Link to comment
7 minutes ago, TDD said:

 

See my earlier post on how I fixed this.  I have had no issues since.

Do you mean the post where you ran SeaChest to change drive settings? I've got four ST8000VN004s in my array via LSI, and two of them have dropped off, one of them twice (I think during spin-up).

 

It would be difficult to run them off the motherboard, and I'm just waiting for a rebuild to finish before potentially downgrading.
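
For anyone else reading along, this is my understanding of the SeaChest route that keeps getting mentioned. The tool and flag names below are from memory, so treat them as assumptions and check each tool's --help output before running anything:

# Assumed SeaChest commands commonly suggested for ST8000VN004 drop-outs behind
# LSI HBAs: disable the EPC feature and low-current spin-up. The downloaded
# binaries usually carry a version/arch suffix, and flags can differ between
# releases -- verify with --help. Replace /dev/sgX with the actual device.
SeaChest_PowerControl -d /dev/sgX --EPCfeature disable
SeaChest_Configure -d /dev/sgX --lowCurrentSpinup disable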

Link to comment
On 3/9/2021 at 8:43 AM, pete69 said:

I had copied 'config/disk.cfg.bak' to 'config/disk.cfg' as suggested here.

 

I then manually took a backup of unraid.

 

I downgraded back to 6.8.3, but no drives were shown in the Cache. 

 

I can add both drives back, but they both show the blue square meaning new device.

 

Other drives have appeared correctly with the green circle.

 

I have not restarted the array yet, as I am concerned I may lose data on the cache drives.

 

Any suggestions?

 

Answering my own question.

 

After the above did not work, I re-copied 'config/disk.cfg.bak' to 'config/disk.cfg' after the downgrade had been applied.

 

And the Cache worked again.

 

Hope that helps someone else.
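
For reference, that re-copy step from the console; a minimal sketch, assuming the flash drive is mounted at /boot as usual:

# After downgrading to 6.8.3, restore the old cache assignment by copying the
# backup config back into place on the flash, then start the array.
cp /boot/config/disk.cfg.bak /boot/config/disk.cfg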

Edited by pete69
Link to comment
