Unraid OS version 6.9.0 available



5 hours ago, TheSnotRocket said:

 

Having the same issue: Win10 VM hangs. Running the VNC video driver and it's still hanging.

 

Figures that I'd do this in the middle of my work day.. but ya know... whatever 😛

 

I disabled docker and was able to boot my win10 box.

Were you able to pass through your GPU after disabling Docker? Or was it still booting with VNC?

Link to comment

6.8.3 to 6.9 went fine on this end. Just shut down Dockers and disabled autostart, backed up flash, and updated. Five minutes later I was up and running. The original Intel iGPU passthrough to VMs seems fine; Plex can use it with no issue. Any reason to change to the new method for Intel iGPU passthrough, besides cleaning up the go file?
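For reference, my understanding of the two methods (the paths below are as I've seen them described for 6.9, so verify against the release notes before changing anything):

# Old method: a "modprobe i915" line in /boot/config/go loads the driver at boot.
# New 6.9 method (as I understand it): delete that line and instead create an
# empty conf file on the flash drive, then reboot so Unraid loads i915 itself:
touch /boot/config/modprobe.d/i915.conf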

 

Edit: Minor panic attack after I posted this, lol. The Zigbee2mqtt docker wasn't starting: couldn't find the device, plus multiple other errors in the logs. I just unplugged the USB stick (CC2531 dongle), waited about 30 seconds, and plugged it back in. The docker started like a champ and Home Assistant started receiving updates. Just sayin', that would have been miserable if I had lost that docker. The rest of my night would have been shot re-adding a ton of devices and resetting entity ID tags lol.

Edited by sminker
Link to comment
3 hours ago, smashingtool said:

I don't see how to do "multi-drive pools to use the “Single” btrfs profile"

Since "single" doesn't have any redundancy, you could just forgo btrfs and make each disk XFS each in its own separate pool. They would all be part of user shares. I have my dockers and VMs on a "fast" pool which is just one NVME using XFS.

  • Thanks 1
Link to comment

Updated from 6.8.3 NVIDIA - went smoothly! I did remove some deprecated plugins and apps afterwards and had zero issues installing the new NVIDIA drivers; no changes to my Plex install for transcoding were needed, much to my relief. I'm on an AMD X570 mobo (Taichi) with a 3700, and with some fiddling in the temp settings app I now have temp readings for the first time! I am also seeing fan speeds for seven fans - very happy! Temps for my SSD and HDD continue to work. Not seeing power readings from my UPS, but I can finally see my GPU readings, so that's nice. I'll fiddle with the UPS when it's not 4 am, lol.

 

All in all this came through nice and smooth so far. My thanks to all those who tested and posted about their experiences, as I wasn't able to test this time around.

Link to comment
7 hours ago, sminker said:

.......The rest of my night would have been shot re-adding a ton of devices and resetting entity ID tags lol.

 

That's one of the primary reasons I switched to Hubitat. Not only do they allow you to back up the config completely, but the backup runs automatically by default. Off-topic, I know, but I have to plug that thing. After years of constantly addressing SmartThings issues it's nice to just USE smart home devices. Sorry, @limetech

Link to comment
19 hours ago, NAStyBox said:

I upgraded from 6.8.3 with no issues.

 

However, before I went ahead with the upgrade I read this thread. So just for giggles I did the following before upgrading.
.

.

3. Set Domains and Appdata shares to Cache "Yes", and ran mover to clear my SSDs just in case an issue came up. They're XFS. 

 

 

 

Hi, the order you list here is how I was going to do this. I just wanted to ask you about step 3: don't you mean you set the Domains and Appdata share Cache setting to "No" and then ran mover to move any data FROM the SSD?

 

Just want to clarify that I understand the intent correctly before proceeding.

Link to comment
5 minutes ago, bunkermagnus said:

Hi, the order you list here is how I was going to do this. I just wanted to ask you about step 3: don't you mean you set the Domains and Appdata share Cache setting to "No" and then ran mover to move any data FROM the SSD?

 

Just want to clarify that I understand the intent correctly before proceeding.

 

No prohibits new files and subdirectories from being written onto the Cache disk/pool. Mover will take no action so any existing files for this share that are on the cache are left there.

 

Yes indicates that all new files and subdirectories should be written to the Cache disk/pool, provided enough free space exists on the Cache disk/pool. If there is insufficient space on the Cache disk/pool, then new files and directories are created on the array. When the mover is invoked, files and subdirectories are transferred off the Cache disk/pool and onto the array.

 

Only indicates that all new files and subdirectories must be written to the Cache disk/pool. If there is insufficient free space on the Cache disk/pool, create operations will fail with out of space status. Mover will take no action so any existing files for this share that are on the array are left there.

 

Prefer indicates that all new files and subdirectories should be written to the Cache disk/pool, provided enough free space exists on the Cache disk/pool. If there is insufficient space on the Cache disk/pool, then new files and directories are created on the array. When the mover is invoked, files and subdirectories are transferred off the array and onto the Cache disk/pool.
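If you'd rather check a share's current setting from the console instead of the webGUI, it's stored per share on the flash drive. A quick sketch (the path and the shareUseCache key name are what I see on my own box, so treat them as an assumption):

# Print the cache mode recorded for every user share:
grep shareUseCache /boot/config/shares/*.cfg
# Hypothetical output:
# /boot/config/shares/appdata.cfg:shareUseCache="prefer"
# /boot/config/shares/domains.cfg:shareUseCache="yes"
# /boot/config/shares/isos.cfg:shareUseCache="no"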

 

  • Like 1
Link to comment
4 minutes ago, Squid said:

 

No prohibits new files and subdirectories from being written onto the Cache disk/pool. Mover will take no action so any existing files for this share that are on the cache are left there.

 

Yes indicates that all new files and subdirectories should be written to the Cache disk/pool, provided enough free space exists on the Cache disk/pool. If there is insufficient space on the Cache disk/pool, then new files and directories are created on the array. When the mover is invoked, files and subdirectories are transferred off the Cache disk/pool and onto the array.

 

Only indicates that all new files and subdirectories must be written to the Cache disk/pool. If there is insufficient free space on the Cache disk/pool, create operations will fail with out of space status. Mover will take no action so any existing files for this share that are on the array are left there.

 

Prefer indicates that all new files and subdirectories should be written to the Cache disk/pool, provided enough free space exists on the Cache disk/pool. If there is insufficient space on the Cache disk/pool, then new files and directories are created on the array. When the mover is invoked, files and subdirectories are transferred off the array and onto the Cache disk/pool.

 

Thank you for clearing that up! It's the small nuances that get me; the devil is, as usual, in the details between "Prefer" and "Yes". I must have read these several times in the F1 help in the web UI, but the mover part escaped me somehow.

Link to comment
59 minutes ago, bunkermagnus said:

but the mover part escaped me somehow.

 

It is probably worth pointing out that when you select a setting, the text alongside it changes to tell you what action mover will take. This means you can tell without having to activate the help.

  • Like 1
Link to comment
On 3/4/2021 at 9:09 AM, jademonkee said:

 

I have the  intel_cpufreq driver, and when I set it to 'On Demand' I get:



root@Percy:~# grep MHz /proc/cpuinfo
cpu MHz         : 3097.473
cpu MHz         : 3101.104
cpu MHz         : 3094.179
cpu MHz         : 3091.279
cpu MHz         : 3100.118
cpu MHz         : 3093.355
cpu MHz         : 3092.385
cpu MHz         : 3099.522

So I'm guessing it has the effect of just running at full frequency all the time?

 

 

However, when I set it to 'Performance' I get:



root@Percy:~# grep MHz /proc/cpuinfo
cpu MHz         : 1596.710
cpu MHz         : 1596.466
cpu MHz         : 1596.533
cpu MHz         : 1596.398
cpu MHz         : 1596.510
cpu MHz         : 1596.519
cpu MHz         : 1596.589
cpu MHz         : 1596.520

 

I'm currently rebuilding a disk while also performing an 'erase and clear' on an old disk, so my CPU isn't close to idle (occasional 90+% peaks), so I would expect at least some of the cores to be running at full speed with a profile that scales speed with demand. Is 'Performance' just keeping them at half speed, or is it more likely that my demand is not high enough to warrant scaling up to full speed for any significant period of time?

In the meantime I'll keep it on 'On Demand' and keep an eye on temps.

 

Update for @dlandon and @jlficken :

I swapped it from 'On Demand' to 'Performance' just now, and I now get:

root@Percy:~# grep MHz /proc/cpuinfo
cpu MHz         : 3097.960
cpu MHz         : 3050.607
cpu MHz         : 3108.310
cpu MHz         : 3117.604
cpu MHz         : 3124.331
cpu MHz         : 3112.003
cpu MHz         : 3113.727
cpu MHz         : 3097.004

So I don't know why I was at half speed under 'Performance' mode previously. So weird.

 

FWIW, if I set it to 'Power Save' I get the following:

root@Percy:~# grep MHz /proc/cpuinfo
cpu MHz         : 1596.545
cpu MHz         : 1596.504
cpu MHz         : 1596.528
cpu MHz         : 1596.444
cpu MHz         : 1596.444
cpu MHz         : 1596.515
cpu MHz         : 1596.500
cpu MHz         : 1596.498

 

EDIT:

ARRGH! Now I set it back to 'Performance' and I get:

root@Percy:~# grep MHz /proc/cpuinfo
cpu MHz         : 1596.634
cpu MHz         : 1596.535
cpu MHz         : 1596.515
cpu MHz         : 1596.617
cpu MHz         : 1596.523
cpu MHz         : 1596.548
cpu MHz         : 1596.502
cpu MHz         : 1596.543

So I don't know what's going on.
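For anyone else trying to pin down what the governor is actually doing: /proc/cpuinfo only gives a momentary snapshot, so idle cores will often read low no matter the setting. The standard Linux cpufreq sysfs files (nothing Unraid-specific, assuming your driver exposes them) show the active driver, governor and frequency limits directly:

# Which scaling driver and governor are in use:
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# Current frequency per core, in kHz (re-run this while something CPU-heavy is going):
grep . /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq
# The range the governor is allowed to work within:
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq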

Edited by jademonkee
Link to comment
2 hours ago, bunkermagnus said:

Hi, the order you list here is how I was going to do this. I just wanted to ask you about step 3: don't you mean you set the Domains and Appdata share Cache setting to "No" and then ran mover to move any data FROM the SSD?

 

Just want to clarify that I understand the intent correctly before proceeding.

Yes, I read something in this thread about some weirdness with the SSD pool, so I moved it all to the array. If you look at the cache settings, you have to set a share to "Yes" if you want Mover to move it to the array. If you set it to "No" it leaves everything on the cache drives.
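Spelled out as commands, this is roughly the pre-upgrade flow (a sketch: it assumes the mover script is still invokable from the shell as it is on my box, and that Docker/VM services are stopped first, otherwise in-use files won't move):

# With the share set to Cache "Yes" in the webGUI, kick off mover by hand:
mover
# When it finishes, check that nothing is left on the pool for those shares
# (share names here are just examples):
ls /mnt/cache/appdata /mnt/cache/domains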

  • Like 1
Link to comment

Upgraded from 6.8.3 to 6.9, and when trying to log in with either PuTTY or WinSCP I get a failed login for user root. After 'refreshing' the same password in the GUI, all is well again.

Just wanted to let anyone know who faces the same issue... you're not crazy ;)

 

Thanks for the great job guys, everything went smoothly!

 

Link to comment

I performed the upgrade this morning and went from 6.8.3 > 6.9. The upgrade went fine; I rebooted and it came back online. A few minutes later, the UI became unresponsive and I could no longer ping the machine. I performed a hard reboot after a short while, and the same thing occurred again.

 

Thoughts on how to resolve this, considering it's only accessible for a short time before going unresponsive/offline?

 

FYI - it's running headless so I'm unable to see what is occurring in the logs atm...

 

Quick follow-up: After a third reboot it's now back online again and hasn't shut down. It has disabled my VM Manager, though. I had two VMs, one for Windows 10 and one for Home Assistant. I'm guessing the Nvidia 3070 passed through to the Windows 10 VM has something to do with it?

 

Suggestions before I proceed with re-enabling it and possibly getting into a boot loop again?

 

Logs attached.

jared-server-diagnostics-20210305-0942.zip

Edited by irandumi
additional info.
Link to comment
12 hours ago, trurl said:

Since "single" doesn't have any redundancy, you could just forgo btrfs and make each disk XFS each in its own separate pool. They would all be part of user shares. I have my dockers and VMs on a "fast" pool which is just one NVME using XFS.

Tried this, but from the looks of it, shares can only be assigned to one pool. So having one share span multiple pools doesn't seem possible.

Link to comment
4 hours ago, Rendo said:

Upgraded from 6.8.3 to 6.9, and when trying to log in with either PuTTY or WinSCP I get a failed login for user root. After 'refreshing' the same password in the GUI, all is well again.

Just wanted to let anyone know who faces the same issue... you're not crazy ;)

 

Thanks for the great job guys, everything went smoothly!

 

 

EDIT: SOLVED - I don't understand why, but it took two reboots and changing/refreshing the root password three times before I could use ssh from my Mac again. Regardless, it's working for now.

 

Original Message Starts:

 

I'm having this issue as well, but only on my backup unRAID system. Refreshing the root password hasn't worked. I'm using the macOS Terminal like I always have, and under the 'New Remote Connection' dialog it shows I'm issuing the command:

 

ssh -p 22 root@AnimDL.local

 

After refreshing the password I also tried using the built-in shell access from the unRAID GUI and it works:

 

[Screenshot: root login to AnimDL working via the unRAID GUI web terminal]

 

Looking at the syslog shows that it's using the right user (uid=0 which is root), but it's not accepting the password. This worked fine before the upgrade to 6.9.0. Note that the backup system was upgraded from 6.8.3 whereas my media server (which I can ssh into) was upgraded from 6.9.0 RC2.

 

Suggestions? Thanks!
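In case it helps anyone else chasing this, a verbose client run (standard OpenSSH, nothing Unraid-specific) shows whether password authentication is even being attempted, and watching the server's syslog while retrying usually says why it was rejected:

# On the Mac, watch the authentication exchange:
ssh -v -p 22 root@AnimDL.local
# From the Unraid GUI terminal, while retrying the login:
tail -f /var/log/syslog | grep -i sshd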

 

Edited by AgentXXL
Link to comment
2 hours ago, smashingtool said:

Tried this, but from the looks of it, shares can only be assigned to one pool. So having one share span multiple pools doesn't seem possible.

 

You can only have a single pool assigned to a user share for the purpose of caching writes to that share; however, a user share is still defined as the union of all root-level directories of the same name across all array disks and pools. So you could have a user share span multiple pools, but you'd have to copy files to some of them manually.
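A quick illustration of that union, using a hypothetical "Media" share and the "fast" pool name from above (adjust to your own disks and pools):

# The same top-level folder can exist on array disks and on any pool...
ls /mnt/disk1/Media /mnt/disk2/Media /mnt/fast/Media
# ...and /mnt/user/Media presents the merged view of all of them.
ls /mnt/user/Media
# Writes through /mnt/user only go to the single pool (if any) set to cache the share;
# files copied manually into /mnt/fast/Media still show up in the user share.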

  • Thanks 1
Link to comment

***I think this is my Chrome that is broken so this can be ignored.***

 

I have an error with VNC Remote.

 

SyntaxError: The requested module '../core/util/browser.js' does not provide an export named 'hasScrollbarGutter'

 

Anyone else had this? I am using Chrome and I haven't tried rebooting yet. 

 

Edit: I can VNC into the machines with Remmina or similar.

Edited by PRG
Link to comment
On 3/4/2021 at 6:42 AM, ezhik said:

 

 

 


 

Same here, lots of entries - I tried to delete them, but nothing happens.


The logs show:

 

Mar  3 14:42:00 unraid3 dnsmasq[12378]: no servers found in /etc/resolv.conf, will retry
Mar  3 14:42:00 unraid3 dnsmasq[12378]: reading /etc/resolv.conf
Mar  3 14:42:00 unraid3 dnsmasq[12378]: using nameserver 9.9.9.9#53

Same thing for me after upgrading. Could anyone shed some light on why these IPv6 routes were created?
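Not an answer to the "why", but for poking at them from the console the standard iproute2 commands work (a sketch; the prefix and interface in the delete example are purely hypothetical):

# List the current IPv6 routes and addresses:
ip -6 route show
ip -6 addr show
# Remove a specific unwanted route (substitute the real prefix and interface):
ip -6 route del 2001:db8::/64 dev br0
# Anything created automatically will probably reappear on reboot unless whatever
# added it (e.g. the IPv6 setting under Network Settings) is changed.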

Link to comment

Updated to 6.9.0 with success. At first I thought it had gone horribly wrong, as the web GUI would not load... spent an hour troubleshooting, got fed up, and left it. Came back 45 minutes later and nothing was wrong; I guess there was a delay before the web GUI went live. Impatient, most likely.

 

Just in case anyone else panics: give it some time after the update before trying to access the web GUI.

Link to comment
