Unraid OS version 6.9.1 available



1 hour ago, ljm42 said:

 

Unraid uses ttyd ( https://github.com/tsl0922/ttyd ) to provide the terminal window, which in turn uses xterm.js ( https://github.com/xtermjs/xterm.js ). I've poked around a bit, but I don't see any configuration options to change the font or font size.

 

If you can find some configuration options we can figure out whether that should be handled by a plugin or Unraid core.

Let's move this discussion to the feature request:

I found an add-on, but it will have to be done on the Unraid core side.
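For what it's worth, ttyd itself does accept xterm.js settings via its -t/--client-option flag, so a font change would need to happen wherever Unraid core launches ttyd. A sketch (the font values and the bash command are illustrative, not Unraid's actual invocation):

```shell
# ttyd forwards -t key=value pairs to the xterm.js client;
# fontSize/fontFamily here are example values, not Unraid defaults
ttyd -t fontSize=20 -t 'fontFamily=DejaVu Sans Mono' bash
```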


Upgraded to 6.9.1 from 6.8.3.

 

Some messages now seem to report the disks as "devX" rather than "sdX". Why was this changed?

 

e.g. old warning

Event: unRAID device sdi SMART health [187]
Subject: Warning [MARS] - reported uncorrect is 117
Description: ST2000DL003-9VT166_5YD2SZRE (sdi)
Importance: warning

 

New warning 

Event: Unraid device dev1 SMART health [187]
Subject: Warning [MARS] - reported uncorrect is 117
Description: ST2000DL003-9VT166_5YD2SZRE (dev1)
Importance: warning

 

 

Yes, I can tell from the serial number, but what are dev1, dev2, etc.?

 

Some messages still use sdX

Event: Unraid Disk 5 message
Subject: Notice [MARS] - Disk 5 returned to normal utilization level
Description: WDC_WD40EZRX-00SPEB0_WD-WCC4E0262246 (sdg)
Importance: normal

 

Ah, I see "Dev 1" and "Dev 2" as the first column of unassigned devices on the Main page. Then I guess the question becomes: why doesn't the space message use "Disk 1" etc.? Not consistent.

Edited by Shonky

I'm experiencing extremely high CPU activity in my Windows 10 VM since upgrading from 6.8.x to 6.9.1. Is anyone else seeing this?

 

The VM is a BlueIris CCTV server and uses a passed-through NV710 PCI-E GPU for decoding; I've checked that GPU decoding is working, and it is. But even when I stop the BlueIris service, the base VM when idle consumes 100% of the three CPUs assigned to it. Task Manager keeps highlighting system interrupts, so I implemented the two known workarounds for those, but it hasn't made a tangible difference.


Does anyone know how to get back the spin-down (standby) command for SSDs? It works with hdparm via the console, but I would like it to be automatic, like it was in 6.8.3.

It really messes up my idle power: the lower power states for SSDs are not working, and the package C-states don't work on my system either.
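As a stopgap until automatic spin-down returns, the manual hdparm commands look like this (a sketch; /dev/sdX is a placeholder for the SSD's device node):

```shell
# Put the drive into standby (spin-down) immediately
hdparm -y /dev/sdX

# Or let the drive enter standby on its own after ~10 minutes idle;
# -S values from 1 to 240 are multiples of 5 seconds (120 * 5 s = 600 s)
hdparm -S 120 /dev/sdX
```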

 

@limetech

Edited by TGP
On 3/9/2021 at 2:41 PM, kizer said:

 

Personally, and I do mean personally, I always do the following:

Stop all Dockers

Spin up all drives (the step below does it anyway, but...)

Stop the Array

Shutdown/Reboot 

 

I do that simply because if a Docker container hangs, I can wait for it to shut down rather than wondering what's hung and why my machine isn't shutting down. So I take control of each step, because I don't like unclean shutdowns and having to wait for a parity check to fire up if something goes sideways. ;)

 

I've skipped that a few times and had good results, but there were a few times in the past when I eventually had to log in, force it to shut down, and pray nothing would go wrong.

My drives are spun up 100% of the time, but if they weren't, why would spinning them up and then immediately stopping the array be beneficial?

1 minute ago, Gunny said:

My drives are spun up 100% of the time, but if they weren't, why would spinning them up and then immediately stopping the array be beneficial?

 

When you click the stop-array button, Unraid spins up all drives first, then disconnects/unmounts them.

It's just part of my mental preparedness procedure when I shut things down. Is it needed? Probably not, but it's part of my OCD. ;)

 


Last week upgraded from 6.9.0-rc2 to 6.9.0.

Just now upgraded to 6.9.1.

Both were quick and pain free.

 

My thanks to all of the Lime Technology team, everyone who has developed plugins and Dockers, and the folks who spend their time here helping others. It is a great platform, as well as a great community.


Just want to point out two issues I ran into, and how I solved them, after updating to 6.9.1.

 

My br0 network is an 802.3ad bonded pair with bridging enabled. After the first reboot, any Docker container that was using br0 stopped working. To solve this, I ran the following two lines from the terminal console:

rm /var/lib/docker/network/files/local-kv.db
/etc/rc.d/rc.docker restart

 

The virtual machine "VNC Remote" view in the web browser stopped working with a "SyntaxError: The requested module '../core/util/browser.js" error.

 

Clearing Chrome's "Cached images and files" fixed this.


Hello

 

I just upgraded from 6.8 to 6.9 last night. Since the upgrade, I've noticed that my drives won't spin down anymore. I saw something about this being fixed in 6.9.1, but that is the version I upgraded to.

 

I have telegraf installed to monitor the server in Grafana. I noticed that all the drives are constantly reading at a low rate. The rate changes slightly, but it's always the same for all drives:

 

[screenshots: Grafana "Ultimate UNRAID Dashboard" showing constant low read rates across all drives]

 

Any idea what's causing it or how to troubleshoot? Thanks

14 hours ago, gulo said:

I have telegraf installed to monitor the server in Grafana. I noticed that all the drives are constantly reading at a low rate. The rate changes slightly, but it's always the same for all drives:

 

I had the same issue with Telegraf in 6.9.0.  This has been fixed for me in 6.9.1.  Drives now spin down properly even with telegraf running. 

 

If this does not work for you, perhaps something besides telegraf is causing a problem. With telegraf, it appeared to be the call to "/usr/sbin/smartctl" that was preventing the drives from spinning down, as Limetech changed how they deal with smartctl a bit.

Edited by Hoopster
1 hour ago, Hoopster said:

 

I had the same issue with Telegraf in 6.9.0.  This has been fixed for me in 6.9.1.  Drives now spin down properly even with telegraf running. 

 

If this does not work for you, perhaps something besides telegraf is causing a problem. With telegraf, it appeared to be the call to "/usr/sbin/smartctl" that was preventing the drives from spinning down, as Limetech changed how they deal with smartctl a bit.

I do not use telegraf; does it have options for checking status and/or temps? If people are checking temps, this could be the cause, i.e. smartctl with -a.
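If a monitoring stack does poll SMART data, smartctl's power-mode check is worth knowing about: it lets a poll skip drives that are already asleep instead of waking them. A sketch (/dev/sdX is a placeholder):

```shell
# -n standby makes smartctl exit (status 2 by default) without touching
# the drive if it is already spun down, so polling won't wake it;
# -A prints the vendor attribute table (temperature is attribute 194)
smartctl -n standby -A /dev/sdX
```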

On 3/15/2021 at 7:12 AM, ThatDude said:

I'm experiencing extremely high CPU activity in my Windows 10 VM since upgrading from 6.8.x to 6.9.1. Is anyone else seeing this?

 

The VM is a BlueIris CCTV server and uses a passed-through NV710 PCI-E GPU for decoding; I've checked that GPU decoding is working, and it is. But even when I stop the BlueIris service, the base VM when idle consumes 100% of the three CPUs assigned to it. Task Manager keeps highlighting system interrupts, so I implemented the two known workarounds for those, but it hasn't made a tangible difference.

I've had the same thing happening to me. Just a normal Windows 10 VM with a GPU passed through. Actually, I'm seeing WAY more CPU usage than before. I barely ever saw 100% usage, but now it happens all the time.


  

On 3/15/2021 at 1:12 PM, ThatDude said:

I'm experiencing extremely high CPU activity in my Windows 10 VM since upgrading from 6.8.x to 6.9.1. Is anyone else seeing this?

 

The VM is a BlueIris CCTV server and uses a passed-through NV710 PCI-E GPU for decoding; I've checked that GPU decoding is working, and it is. But even when I stop the BlueIris service, the base VM when idle consumes 100% of the three CPUs assigned to it. Task Manager keeps highlighting system interrupts, so I implemented the two known workarounds for those, but it hasn't made a tangible difference.

 

Yeah, I'm also seeing a LOT higher CPU usage on my gaming VM since the update.

 

Upgraded from 6.8.3 to 6.9.1. It was fine at the start; after a while I noticed that mouse movement would freeze and stutter. I checked the load on the VM's isolated cores in the Unraid webGui: 99-100% on all of them.

 

Since then I've been trying loads of edits to make it better, including updating the XML to use Q35-5.1, since that is supposed to be a lot better with VFIO. Still haven't come up with anything really good though. 😒

Edited by sorano
7 minutes ago, snowboardjoe said:

SMART settings were altered after the 6.8.3 → 6.9.1 upgrade. This includes temperature settings and the monitoring of attribute 197 on my SSDs. Is that expected when upgrading the Unraid OS?

Expected; as mentioned in the release notes, upgrading to 6.9.0+ resets the SMART configuration settings.

This should be a one-off reset, and it was needed to allow management of SMART configurations for multi-pool and UD disks.

https://wiki.unraid.net/Unraid_OS_6.9.0#SMART_handling_and_Storage_Threshold_Warnings

On 3/12/2021 at 2:30 PM, MothyTim said:

Hi, autofan has stopped working after upgrading to 6.9.1; it was working fine in 6.9.0 and all the betas and RCs. It's annoying, as my drives finally spin down and now something else is broken. Not sure what info is needed, but I have this in syslinux: acpi_enforce_resources=lax, and this in my go file: modprobe it87 force_id=0x8628. Hopefully someone knows what has changed to stop this working? I have also asked on the plugin's page! Cheers, Tim

 

tower-diagnostics-20210312-1429.zip (134 kB)

Just bumping this, hoping someone's got an answer.

Cheers,

Tim


Updated from 6.9.0-beta1; I just had to delete the vfio-pci plugin and remap my cache drive. The update was really smooth, and now my VMs feel waaaay faster than before. Is it just a placebo effect, or is there a real reason?

 

Anyway thank you devs, great work!

Edited by Vulneraria

First off, I'd like to say that my entry into the Unraid universe happened last spring, and I began with 6.8.3. This platform's viability and stability have blown me away. Now, with 6.9.x, it's my first major update.

I had been holding it off for a while since 6.8.3 has been so good, but last night I decided it was time to make the jump to 6.9.1.

 

Everything went really smoothly; I ran the upgrade assistant and followed its advice, like uninstalling the VFIO-PCI plugin, etc.

 

I have a question pertaining to VM's and Windows VMs in particular.

I noticed that since 6.9.1 runs on a newer version of the hypervisor layer, there is a new VirtIO ISO version available. Which of the statements below is the most accurate?

  1. You should upgrade the drivers in your windows 10 guests to the latest (virtio-win-0.1.190-1) as soon as possible.
  2. If it works fine, don't touch it!
  3. It's not necessary, but you could see some performance benefits by upgrading your VirtIO drivers.

 

Thank you.

Edited by bunkermagnus
