Unraid OS version 6.9.0 available


Recommended Posts

1 minute ago, ken-ji said:

Anybody understand why the dashboard doesn't seem to be able to show the correct orange and red disk usage bars anymore?

[Screenshot: Dashboard disk usage bars]

I've tried resetting the disk free thresholds, but no luck.

 

Also, IPv6 routing has new entries that the GUI is not parsing properly:

 


root@MediaStore:/usr/local/emhttp# ip -6 route
::1 dev lo proto kernel metric 256 pref medium
2001:xxxx:xxxx:xxxx::/64 dev br0 proto ra metric 216 pref medium
fd6f:3908:ee39:4000::/64 dev br0 proto ra metric 216 pref medium
fe80::/64 dev br0 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev vnet0 proto kernel metric 256 pref medium
multicast ff00::/8 dev br0 proto kernel metric 256 pref medium
multicast ff00::/8 dev eth0 proto kernel metric 256 pref medium
multicast ff00::/8 dev vnet0 proto kernel metric 256 pref medium
default via fe80::ce2d:e0ff:fe50:e7b0 dev br0 proto ra metric 216 pref medium

 

 

I've reported the same issue earlier in the thread regarding the disk usage thresholds incorrectly showing the disks as green. Upon first boot after upgrading, there were notifications from every array drive stating that the drive had returned to normal utilization. There's a note in the release notes stating that users not using the unRAID defaults will have to reconfigure them, but as you, I and others have found, resetting the disk usage thresholds in Disk Settings hasn't corrected the issue.

 

I and others are also seeing the IPv6 messages, but they seem pretty innocuous, so they're not a big concern at this time. We'll get these small issues solved eventually.

 

Link to comment
3 minutes ago, AgentXXL said:

 

I've reported the same issue earlier in the thread regarding the disk usage thresholds incorrectly showing the disks as green. Upon first boot after upgrading, there were notifications from every array drive stating that the drive had returned to normal utilization. There's a note in the release notes stating that users not using the unRAID defaults will have to reconfigure them, but as you, I and others have found, resetting the disk usage thresholds in Disk Settings hasn't corrected the issue.

 

I and others are also seeing the IPv6 messages, but they seem pretty innocuous, so they're not a big concern at this time. We'll get these small issues solved eventually.

 

I was fairly sure they were getting reported properly on my first boot post upgrade, but I don't have any evidence for that.

The IPv6 issue is harmless, but I worry people might break their server's network connectivity (until they reboot) if they try to delete the mishandled routing lines. I didn't try, since I'm doing all of this remotely without an IPMI/KVM device.
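
If anyone does want to poke at this, just looking at the table is read-only and safe; it's only deleting the RA-learned default route that would cut off IPv6 until it's re-learned or the box reboots. Roughly (the address is simply the one from my output above):

# safe: list the kernel's IPv6 routes that the GUI should be parsing
ip -6 route show dev br0

# risky on a remote box: this would drop the IPv6 default route
# ip -6 route del default via fe80::ce2d:e0ff:fe50:e7b0 dev br0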

Link to comment

Seems like this little change in /etc/profile by @limetech is giving some of my scripts grief:

# limetech - modified for unRAID 'no users' environment
export HOME=/root
cd $HOME

It's causing my scripted tmux sessions to all open (uselessly) in /root rather than in the directories I've specified.
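
A workaround might be to pass the start directory explicitly when each session is created, instead of relying on the inherited working directory; something like this (session name and path are just placeholders):

# create a detached session with an explicit start directory
tmux new-session -d -s media -c /mnt/user/scripts

But that means touching every script rather than just leaving /etc/profile alone.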

Can we not do this?

Link to comment
10 hours ago, TDD said:

Seagate Ironwolf parity drive under RC2.

There have been multiple users with issues with the 8TB Ironwolf + LSI; for now it's best to stick with v6.8.3 or connect those disks to a non-LSI controller.

Link to comment

Any issues with the Ironwolf/LSI combo are software *only*, since it works fine under the old kernel/driver. Based on my reading and testing, I'm hopeful that disabling some of the Ironwolf's aggressive power-saving modes may cover up any faults currently in the driver that will need to be addressed.

 

No matter what, kudos to Seagate (I have all WD drives BTW!) for having tools and documentation to tweak the settings.  WD could learn a lot here - but admittedly my WD drives always 'just worked' without any tweaks...
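
For anyone wanting to experiment, the tweak I have in mind can be done with Seagate's openSeaChest utilities; a rough sketch (the device path is just an example, and exact option names may differ between versions):

# show the drive's EPC (Extended Power Conditions) settings
openSeaChest_PowerControl -d /dev/sg2 --showEPCSettings

# disable the EPC power-saving states entirely
openSeaChest_PowerControl -d /dev/sg2 --EPCfeature disable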

Link to comment
19 minutes ago, JorgeB said:

8TB Ironwolf + LSI,

Hi JorgeB, 

 

I found this article which points to a firmware update for the 10TB Ironwolfs. The post also says people are having issues with the 8TB models, but I'm not sure if there is new firmware for those. It may not be related, but maybe worth a look?

 

Finally a solution?:
Now the point of this topic, digging deeper it turns out Seagate released a firmware update for the ST10000VN0004 and ST10000NE0004 last month. They bumped from firmware SC60 to SC61 and in that topic it's stated that this is because of "flush cache timing out bug that was discovered during routine testing" in regards to Synology systems.

As it turns out, write cache (and I believe internally NCQ) had been turned off for these specific drives in Synology systems for a while now because of "stability" issues. Since this firmware update it gets turned on again and all is well.

That got me thinking: if a Synology is having this issue, maybe this was more disk firmware related than anything else. So, since I still have all my data on other drives anyway, I went ahead and flashed all my 8x ST10000VN0004 from SC60 to SC61. This worked without a problem and even a ZFS scrub found no issues with the data still on there.

But... I was able to finish a scrub twice of 20TB on the pool now without a single read, write or CRC error. I've been hitting the drive with TBs of DD and Bonnie++ and not a single error anymore. So this might actually be a fix for topics like this one I found and this one.

 

https://www.truenas.com/community/threads/seagate-ironwolf-10tb-st10000vn0004-vs-lsi-it-firmware-controllers.78772/

Edited by SimonF
  • Like 1
Link to comment

I did check for revised firmware past my SC60. Nothing available.

 

This particular issue seems to be limited to the older 10TB drives.  If it is a bug in the 8TBs, they are certainly dragging their feet on a fix.
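
For reference, checking which firmware a drive is currently running doesn't need the Seagate tools; smartctl reports it (the device path is just an example, and drives behind some LSI HBAs may need -d sat added):

# print the drive identity, including the Firmware Version line
smartctl -i /dev/sdb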

Link to comment
15 minutes ago, TDD said:

I did check for revised firmware past my SC60. Nothing available.

 

This particular issue seems to be limited to the older 10TB drives.  If it is a bug in the 8TBs, they are certainly dragging their feet on a fix.

Found this post from Jun 2020, so maybe worth checking again? 

 

Just to put a full stop to this for my issues. I engaged with Seagate tech support and they supplied me with another firmware for my ST8000VN004 8TB drives. Good news - I could enable NCQ again and not have any issues with SMART Command_Timeout accumulation, and I could do a full Nakivo backups verification, plus ZFS scrub and not have any issues. So I consider this issue resolved.

I used the Seagate utility to boot from a USB stick to update the drives. Thankfully it worked with the LSI RAID card, so that I didn't have to tediously transpose the drives into another caddy to put into a Windows server to run. The weird thing is that the "new" firmware still says SC60 and not SC61. You'd think they'd at least give it an engineering code like SC60e or something or even SE60. Anyways - there's other IT challenges to tackle. Finally I can put this hard drive madness to rest! (touch wood).

Link to comment

@limetech Could this be parameterized in Disk Settings, as non-rotational devices do have power-saving options? I know most people may not need that feature as they're running VMs etc., but I think it should be an option for people that may use it, i.e. those not running Docker/VMs.

  • Like 1
Link to comment
26 minutes ago, Gdtech said:

Anybody know when multiple Array Pools will be available ?

Not super clear.

 

Multiple Pools are available as per the release notes.

Multiple Arrays are not available.

 

You can use several Pools in conjunction with the Array to act as Cache. You can choose which pool to use for each Share.

Link to comment

Thanks for 6.9. Excited about what's to come in 7.0.

 

6.9 is working fine, except for cache performance. Current performance:

 

RAID 1:

500-600 MB/s via the user share path, about 2000 MB/s outside the user share path. No difference between RAID 1 and RAID 10.

 

The performance outside the user share path is fine; what's strange is that there's no performance gain with RAID 10.

 

Should the "user path" reduce performance by 3/4?
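
In case it helps with reproducing the comparison: the "user path" here is the /mnt/user FUSE (shfs) mount, while /mnt/cache goes straight to the pool, so a rough test looks something like this (share and file names are placeholders, and oflag=direct keeps the page cache from inflating the numbers):

# through the user share (FUSE/shfs) layer
dd if=/dev/zero of=/mnt/user/test/ddtest.bin bs=1M count=8192 oflag=direct

# directly to the cache pool, bypassing the FUSE layer
dd if=/dev/zero of=/mnt/cache/test/ddtest.bin bs=1M count=8192 oflag=direct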

 

After upgrading to 6.9, the cache has been formatted/rebuilt, permissions reset, and the system restarted.

 

[Screenshot: cache performance results]

Link to comment
10 hours ago, SimonF said:

Found this post from Jun 2020, so maybe worth checking again? 

 

Just to put a full stop to this for my issues. I engaged with Seagate tech support and they supplied me with another firmware for my ST8000VN004 8TB drives. Good news - I could enable NCQ again and not have any issues with SMART Command_Timeout accumulation, and I could do a full Nakivo backups verification, plus ZFS scrub and not have any issues. So I consider this issue resolved.

I used the Seagate utility to boot from a USB stick to update the drives. Thankfully it worked with the LSI RAID card, so that I didn't have to tediously transpose the drives into another caddy to put into a Windows server to run. The weird thing is that the "new" firmware still says SC60 and not SC61. You'd think they'd at least give it an engineering code like SC60e or something or even SE60. Anyways - there's other IT challenges to tackle. Finally I can put this hard drive madness to rest! (touch wood).

I upgraded from 6.8.3 to 6.9.0 without issue.

 

I have 10TB Seagate Ironwolf Pros (firmware EN01) and 8TB Ironwolfs (SC61), all on an LSI controller with NCQ on.

 

The 8TBs had about 2 years on a ZFS system as a mirror, also without issues, before I moved them to the unRAID system.
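
In case it's useful for comparison, a quick way to confirm NCQ is actually active on a drive is to read its queue depth from sysfs (the device name is just an example); a value greater than 1 means NCQ is in use, while echoing 1 into the same file disables it:

# 31 or 32 typically means NCQ is on; 1 means it is off
cat /sys/block/sdc/device/queue_depth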

 

Edited by rilles
Link to comment

My 8TB is a very recent manufacture (a VN004) and has the SC60 firmware.  Is yours the slightly older VN0022?  That would explain the SC61.

 

I haven't seen any update for the VN004 as of yet.

Link to comment
17 minutes ago, TDD said:

My 8TB is a very recent manufacture (a VN004) and has the SC60 firmware.  Is yours the slightly older VN0022?  That would explain the SC61.

 

I haven't seen any update for the VN004 as of yet.

ST8000VN0022

 

Not sure why a newer drive would have an older version of the firmware.

 

Link to comment

The upgrade to 6.9.0 did not go well for me: no drives visible.

 

Based on reports from other people with the same issue, the drivers in 6.9.0 do not work with the NetApp SAS 4-Port 3/6 GB QSFP PCIe 111-00341+B0 controller (PMC Sierra PM8003).

 

Peter.

 

Link to comment

@limetech Just an update to the issue with disk usage thresholds - my media unRAID system has been 'corrected'. As mentioned previously, it was showing all disks as 'returned to normal utilization' and showing as 'green' after the upgrade to 6.9.0. As per the release notes, I tried numerous times to reset my thresholds in Disk Settings, along with a few reboots. Nothing had corrected it.

 

After I was sure other aspects were working OK, I proceeded to add the 2 x 16TB new disks to the array. After the array started and the disks were formatted, disk utilization returned to using the proper colors. The 2 new disks are both green as they're empty, and the rest are accurately showing as red since I let them fill as completely as possible.

 

Note that the preclear signature was valid even though the disks may not have fully completed the post-read process due to my previously reported USB disconnection error with the unRAID flash drive. I know they passed the pre-read and zero 100% successfully, so I just took the chance on adding them to the array, expecting that unRAID might need to run its own clear process but that wasn't the case.

 

Note that my backup unRAID which was upgraded from 6.8.3 to 6.9.0 still shows the incorrect colors for disk utilization thresholds. I'll be adding/replacing some disks on it next month, so I'll watch to see if it also corrects this utilization threshold issue.

 

Alas I've now got another potential issue.... sigh.

 

Before adding the new drives I shut down all active VMs and Docker containers and then attempted to stop the array. This hung the system, which kept reporting 'retry unmounting user shares' in the bottom left corner of the unRAID webgui. I then grabbed diagnostics and attempted a reboot, which succeeded. But upon restart, the system again reported an unclean shutdown, even though the reboot itself showed no errors on the monitor directly attached to the unRAID system.

 

This is the 3rd time since the upgrade to 6.9.0 that the system has reported an unclean shutdown. On all 3 occasions I've been able to watch the console output on the monitor directly attached to the unRAID system, with no noticeable errors during the reboot. On the 1st occasion (immediately after the upgrade from 6.9.0 RC2), I did an immediate reboot and that cleared the need for a parity check.

 

The 2nd unclean shutdown was reported after another apparently clean reboot on Mar 2nd. On this 2nd occasion I let unRAID proceed with the parity check and it completed with 0 errors found. I'm letting it proceed again with this most recent 'unclean shutdown' error, but I suspect it's a false warning and no errors will be found.

 

I've got the diagnostics created just before adding the 2 new drives, and also a diagnostics grab from just a few moments ago. I can provide them if you wish, but would prefer to send them directly via PM. Let me know if you have any questions or want the diagnostics, but hopefully these 'unclean shutdowns' are just a small quirk with the stable 6.9.0 release.

 

Edited by AgentXXL
Clarify + re-wording; add statement about backup unRAID
Link to comment

Have you made sure that you do not have a console (or screen) session open with its current directory somewhere on the array? That will stop the array from completing the unmount correctly and can thus lead to a subsequent unclean shutdown.
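
A quick way to check before stopping the array is to look for anything that still has files open (or its working directory) under the user shares; something like this, where an empty result means nothing is holding the mount:

# list processes with open files or a cwd under the user share mount
lsof /mnt/user

# alternative: fuser shows PIDs and access type (c = current directory)
fuser -vm /mnt/user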

 

 

  • Like 2
Link to comment
5 hours ago, AgentXXL said:

The 2nd unclean shutdown was reported after another apparently clean reboot

There have been a couple of reports where it looks like the shutdown time-out is not being honored, i.e. Unraid declares an unclean shutdown after a couple of seconds even when the set time is much longer. Changing the setting (Settings -> Disk Settings) to re-apply it fixed it.

  • Like 1
Link to comment
