Unraid OS version 6.9.0 available


Recommended Posts

Only minor issue here is that it looks like my 12-year-old 1GB flash drive is now too full. It wouldn't upgrade until I cleared some space (after backing up); now Fix Common Problems tells me it's at 90%.

 

Sounds trivial, but is there still recommended hardware for Unraid OS flash drives, or will anything do?

Link to comment
2 minutes ago, Cessquill said:

Only minor issue here is that it looks like my 12-year-old 1GB flash drive is now too full. It wouldn't upgrade until I cleared some space (after backing up); now Fix Common Problems tells me it's at 90%.

 

Sounds trivial, but is there still recommended hardware for Unraid OS flash drives, or will anything do?

I use a Sandisk Cruzer Fit 16GB. Been solid for 5ish years now. I like it because it fits in the internal port on my mobo and is short enough not to hit the chassis when I slide the mobo tray out (learnt that lesson the hard way once...).

I think it's recommended to use USB 2.0 drives/ports, but I don't know if that's just superstition or not.

Link to comment
11 hours ago, dlandon said:

You will probably see that the CPU Scaling Governor driver shows as "intel_cpufreq" on the right side of Tips and Tweaks. This appears to be a new driver for some Intel CPUs. I am working on a change to Tips and Tweaks so this driver is handled properly; that's why you are not seeing Turbo Boost as an option. Right now it only shows as an option for the intel_pstate (Intel Pstate in Tips and Tweaks) driver.

 

It is recommended that you use 'Performance' for the best power savings and performance.

 

I don't believe "On Demand" is an option for the intel_cpufreq driver. Tips and Tweaks is showing it as an option and probably shouldn't.

 

You are correct that it shows intel_cpufreq as the driver.

 

I'm going to have to give up the 200 MHz boost for now: setting it to Performance kicks all cores up to 2.6 GHz, whereas On Demand lets them throttle down to 1.3 GHz (or less) but never lets them go above 2.4 GHz.

 

I'd rather give up the 200 MHz than have all 24 cores at 2.4 GHz all of the time.

 

Either way, it's better than the 1.3 GHz I was stuck at after the upgrade, when the governor got set to Power Save for some reason.

 

If you can get the Turbo setting working though that'd be great!
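For reference, the active driver and governor can be checked outside of Tips and Tweaks. A minimal sketch, assuming the standard Linux cpufreq sysfs layout (these paths may not be exposed on every kernel/CPU combination); the commented-out write needs root:

```shell
# Standard cpufreq sysfs location for the first core (assumed layout).
cpu0=/sys/devices/system/cpu/cpu0/cpufreq
if [ -r "$cpu0/scaling_governor" ]; then
    report="driver: $(cat "$cpu0/scaling_driver"), governor: $(cat "$cpu0/scaling_governor")"
else
    report="cpufreq sysfs interface not exposed on this system"
fi
echo "$report"
# To force a governor on all cores (run as root), e.g. performance:
# for f in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
#     echo performance > "$f"
# done
```

This only reads sysfs, so it's safe to run while comparing what the Tips and Tweaks UI reports.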

Edited by jlficken
Link to comment
6 hours ago, jademonkee said:

 

I have the  intel_cpufreq driver, and when I set it to 'On Demand' I get:


root@Percy:~# grep MHz /proc/cpuinfo
cpu MHz         : 3097.473
cpu MHz         : 3101.104
cpu MHz         : 3094.179
cpu MHz         : 3091.279
cpu MHz         : 3100.118
cpu MHz         : 3093.355
cpu MHz         : 3092.385
cpu MHz         : 3099.522

So I'm guessing it has the effect of just running at full frequency all the time?

 

 

However, when I set it to 'Performance' I get:


root@Percy:~# grep MHz /proc/cpuinfo
cpu MHz         : 1596.710
cpu MHz         : 1596.466
cpu MHz         : 1596.533
cpu MHz         : 1596.398
cpu MHz         : 1596.510
cpu MHz         : 1596.519
cpu MHz         : 1596.589
cpu MHz         : 1596.520

 

I'm currently rebuilding a disk while also performing an 'erase and clear' on an old disk, so my CPU isn't close to idle (occasional 90+% peaks), so I would expect at least some of the cores to be running at full speed with a profile that scales speed with demand. Is 'Performance' just keeping them at half speed, or is it more likely that my demand isn't big enough to warrant scaling up to full speed for any significant period of time?

In the meantime I'll keep it on 'On Demand' and keep an eye on temps.

 

 

Mine works the exact opposite of yours...weird.
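As a quick way to compare runs, the `grep MHz /proc/cpuinfo` output above can be summarised with a one-liner. A sketch using two sample lines copied from the posts; on a live system, pipe `grep 'cpu MHz' /proc/cpuinfo` into the awk command instead:

```shell
# Two 'cpu MHz' lines borrowed from the output above, as sample input.
sample='cpu MHz         : 3097.473
cpu MHz         : 1596.710'
# Split on ':' and average the numeric field across all cores.
summary=$(printf '%s\n' "$sample" | awk -F: '{sum += $2; n++} END {printf "avg %.1f MHz over %d cores", sum/n, n}')
echo "$summary"
```

For the sample above this prints "avg 2347.1 MHz over 2 cores"; run it twice, once per governor, to compare.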

  • Like 1
Link to comment

After upgrading to 6.9.0 (previously on 6.9.0 RC2), it seems GPU passthrough is causing my VMs to freeze after installing Nvidia drivers, or to cycle into recovery on boot. I know a few others have had this happen and have reverted back to 6.8.3.

If you could please test this and apply a fix for the next update that rolls out, it would be appreciated.

 

Also, yes. Please push the 460.56 driver updates.

 

Thanks,

Link to comment
51 minutes ago, fearlessknight said:

After upgrading to 6.9.0 (previously on 6.9.0 RC2), it seems GPU passthrough is causing my VMs to freeze after installing Nvidia drivers, or to cycle into recovery on boot. I know a few others have had this happen and have reverted back to 6.8.3.

If you could please test this and apply a fix for the next update that rolls out, it would be appreciated.

 

Also, yes. Please push the 460.56 driver updates.

 

Thanks,

 

Having the same issue - Win10 VM hangs. Running the VNC video driver and it's still hanging.

 

Figures that I'd do this in the middle of my work day.. but ya know... whatever 😛

 

I disabled Docker and was able to boot my Win10 box.

Edited by TheSnotRocket
Link to comment
11 minutes ago, smashingtool said:

Am I understanding correctly that if I want to set up multiple SSDs in a "pool" where the pool size is the sum of each drive's size, I need to (or should?) wait for the mentioned future support of "Unraid array pools"?


Not quite sure what you want here.

 

In Unraid 6.9.0 you can set up multi-drive pools using the "Single" btrfs profile, which means the available space is the sum of the drives' sizes, but the data is not protected by parity.

 

If you want multiple arrays that work like the current data array, where the available space is the sum of the data drives but you can still have parity protection, then this IS a future roadmap item.

 

Link to comment

Very minor issue after update. After update, trying to access any VM via VNC the following error appeared:

 

Quote

The requested module '../core/util/browser.js' does not provide an export named 'hasScrollbarGutter'

 

Doing a cache refresh (CTRL + F5) fixed the issue.

Edited by T321
Link to comment

Just upgraded from 6.8.3 to 6.9.0 today. Now none of my drives are spinning down. Open Files doesn't show anything open on the drives I would expect to spin down, but something is reading from them (~20k reads on each drive in a little under ten hours).

 

Turbo write is disabled (Data drives spun down: script not running; Write mode: Unraid determined).

How do I identify what is accessing the drives in question?  Should I revert to 6.8.3?
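One way to chase this down is to ask which processes hold files open on the affected disk. A sketch, assuming the disk is mounted at /mnt/disk1 (a placeholder - substitute your array slot) and that lsof is available (it may need installing separately on stock Unraid):

```shell
mountpoint=/mnt/disk1   # hypothetical path; use the disk that won't spin down
if command -v lsof >/dev/null 2>&1 && [ -d "$mountpoint" ]; then
    # List the first few processes with files open under the mount point.
    lsof +D "$mountpoint" 2>/dev/null | head -n 20
    result="checked $mountpoint"
else
    result="lsof not installed or $mountpoint not mounted"
fi
echo "$result"
```

Note that lsof only catches files open at the moment you run it; a docker polling the disk periodically may need several tries to catch in the act.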

Link to comment
7 minutes ago, PeterB said:

Just upgraded from 6.8.3 to 6.9.0 today.  Now, none of my drives are spinning down.  Openfiles doesn't show anything open on the drives I would expect to see spin down, but something is reading from them (~20k reads on each drive, in a little under ten hours).

 

Turbowrite is disabled  (Data drives spun down: script not running, Write mode: unraid determined).

How do I identify what is accessing the drives in question?  Should I revert to 6.8.3?

Cache Dirs installed? IIRC some changes need to be made to its setup.

Link to comment
38 minutes ago, Squid said:

Cache Dirs installed? IIRC some changes need to be made to its setup.

Thanks for the response.

 

Yes, I do have directory caching enabled.

Researching possible conflicts, but I haven't found any information yet.

 

UPDATE:

It appears that this issue is being caused by the BinHex Couch Potato docker.

Edited by PeterB
  • Like 1
Link to comment
4 hours ago, itimpi said:


Not quite sure what you want here.

 

In Unraid 6.9.0 you can set up multi-drive pools using the "Single" btrfs profile, which means the available space is the sum of the drives' sizes, but the data is not protected by parity.

 

If you want multiple arrays that work like the current data array, where the available space is the sum of the data drives but you can still have parity protection, then this IS a future roadmap item.

 

I want to be able to use SSDs in the array, but per the warnings from "Fix Common Problems", that could cause issues with rebuilding from parity due to SSD garbage collection.

 

So what I currently do instead is use 2 extra SSDs in Unraid via Unassigned Devices. I also have a cache drive SSD, but have never messed with any pool functionality due to the warnings about BTRFS RAID1.

 

What I ultimately want is to be able to have a user share spread out over multiple SSD drives. I'm not currently concerned with parity protection for said drives, but maybe down the road, some sort of fault tolerance would be worthwhile.

 

I don't see how to do "multi-drive pools to use the “Single” btrfs profile". Per the help toggle in the UI, "When configured as a multi-device pool, Unraid OS will automatically select btrfs-raid1 format". https://wiki.unraid.net/Unraid_OS_6.9.0#Multiple_Pools also seems to lack a mention of this functionality, but maybe I am missing something.

 

Having multiple pools seemed like a way of getting around using Unassigned devices, but based on "When you create a user share or edit an existing user share, you can specify which pool should be associated with that share. ", I assume that I can't have a user share span multiple pools...

Link to comment
3 minutes ago, smashingtool said:

I don't see how to do "multi-drive pools to use the “Single” btrfs profile"


In the 6.9.0 release, if you click on a cache pool and then go to the Balance section, there is a drop-down that lets you select which profile the Balance should use, with options for the current profile, single, raid0, or raid1.
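For anyone who prefers the command line, the same conversion can be done with a btrfs balance. A sketch, assuming the pool is mounted at /mnt/cache (adjust for your pool name); -dconvert=single rebalances data into the single profile while keeping metadata mirrored as raid1:

```shell
pool=/mnt/cache   # assumed mount point for the pool; adjust to yours
if command -v btrfs >/dev/null 2>&1 && [ -d "$pool" ]; then
    # Convert data to 'single' (sum of drive sizes, no redundancy),
    # keep metadata as raid1 for safety.
    btrfs balance start -dconvert=single -mconvert=raid1 "$pool"
    status="balance requested on $pool"
else
    status="btrfs tools or $pool not available here"
fi
echo "$status"
```

Keeping metadata as raid1 is a common choice even on "single" data pools, since losing metadata redundancy risks the whole filesystem rather than individual files.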

Link to comment
5 hours ago, TheSnotRocket said:

 

Having the same issue - Win10 VM hangs. Running the VNC video driver and it's still hanging.

 

Figures that I'd do this in the middle of my work day.. but ya know... whatever 😛

 

I disabled Docker and was able to boot my Win10 box.

Were you able to passthrough your GPU after disabling the docker? Or was it still booting from VNC?

Link to comment

6.8.3 to 6.9 went fine on this end. I just shut down dockers and disabled autostart, backed up flash and updated; 5 minutes later I was up and running. Original Intel iGPU passthrough to VMs seems fine - Plex can use it with no issue. Any reason to change to the new method for Intel iGPU passthrough, besides cleaning up the go file?

 

Edit: Minor panic attack after I posted this, lol. The Zigbee2MQTT docker wasn't starting - it couldn't find the device, plus multiple other errors in the logs. I just unplugged the USB stick (CC2531 dongle), waited about 30 seconds and plugged it back in. The docker started like a champ and Home Assistant started receiving updates. Just sayin', that would have been miserable if I'd lost that docker; the rest of my night would have been shot re-adding a ton of devices and resetting entity ID tags, lol.

Edited by sminker
Link to comment
