Unraid OS version 6.9.0 available



3 hours ago, smashingtool said:

I don't see how to do "multi-drive pools to use the “Single” btrfs profile"

Since "single" doesn't have any redundancy, you could just forgo btrfs and format each disk as XFS, each in its own separate pool. They would all be part of user shares. I have my dockers and VMs on a "fast" pool which is just one NVMe using XFS.

  • Thanks 1
Link to comment

Updated from 6.8.3 NVIDIA - went smoothly! I removed some deprecated plug-ins and apps afterwards and had zero issues installing the new NVIDIA drivers - no changes to my Plex install were needed for transcoding, much to my relief. I'm on an AMD X570 mobo (Taichi) with a 3700, and with some fiddling in the temp settings app I now have temp readings for the first time! I am also seeing fan speeds for seven fans - very happy! Temps for my SSD and HDD continue to work. I'm not seeing power readings from my UPS, but I can see my GPU readings finally, so that's nice. I'll fiddle with the UPS when it's not 4am lol.

 

All in all this came through nice and smooth so far; my thanks to all those who tested and posted about their experiences, as I wasn't able to test this time around.

Link to comment
19 hours ago, NAStyBox said:

I upgraded from 6.8.3 with no issues.

 

However, before I went ahead with the upgrade, I read this thread. So just for giggles I did the following before upgrading.
...

3. Set Domains and Appdata shares to Cache "Yes", and ran mover to clear my SSDs just in case an issue came up. They're XFS. 

 

 

 

Hi, the order you list here is how I was going to do this. I just wanted to ask you about step 3: don't you mean you set the Domains and Appdata shares' Cache setting to "No" and then ran mover to move any data FROM the SSD?

 

Just want to clarify that I understand the intent correctly before proceeding.

Link to comment
5 minutes ago, bunkermagnus said:

Hi, the order you list here is how I was going to do this. I just wanted to ask you about step 3: don't you mean you set the Domains and Appdata shares' Cache setting to "No" and then ran mover to move any data FROM the SSD?

 

Just want to clarify that I understand the intent correctly before proceeding.

 

No prohibits new files and subdirectories from being written onto the Cache disk/pool. Mover will take no action so any existing files for this share that are on the cache are left there.

 

Yes indicates that all new files and subdirectories should be written to the Cache disk/pool, provided enough free space exists on the Cache disk/pool. If there is insufficient space on the Cache disk/pool, then new files and directories are created on the array. When the mover is invoked, files and subdirectories are transferred off the Cache disk/pool and onto the array.

 

Only indicates that all new files and subdirectories must be written to the Cache disk/pool. If there is insufficient free space on the Cache disk/pool, create operations will fail with an out-of-space status. Mover will take no action so any existing files for this share that are on the array are left there.

 

Prefer indicates that all new files and subdirectories should be written to the Cache disk/pool, provided enough free space exists on the Cache disk/pool. If there is insufficient space on the Cache disk/pool, then new files and directories are created on the array. When the mover is invoked, files and subdirectories are transferred off the array and onto the Cache disk/pool.
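
For anyone who wants the four options side by side, here is a tiny shell summary of the behaviour described above (my own paraphrase of the help text, not anything from Unraid itself):

# Paraphrase of the per-share "Use cache" setting, per the descriptions above.
cache_behavior() {
  case "$1" in
    no)     echo "New writes go to: array.                 Mover: does nothing." ;;
    yes)    echo "New writes go to: cache (array if full). Mover: cache -> array." ;;
    only)   echo "New writes go to: cache (fail if full).  Mover: does nothing." ;;
    prefer) echo "New writes go to: cache (array if full). Mover: array -> cache." ;;
    *)      echo "Unknown setting: $1" ;;
  esac
}
cache_behavior yes      # the nuance vs. prefer is only in the direction mover moves files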

 

  • Like 1
Link to comment
4 minutes ago, Squid said:

 

No prohibits new files and subdirectories from being written onto the Cache disk/pool. Mover will take no action so any existing files for this share that are on the cache are left there.

 

Yes indicates that all new files and subdirectories should be written to the Cache disk/pool, provided enough free space exists on the Cache disk/pool. If there is insufficient space on the Cache disk/pool, then new files and directories are created on the array. When the mover is invoked, files and subdirectories are transferred off the Cache disk/pool and onto the array.

 

Only indicates that all new files and subdirectories must be written to the Cache disk/pool. If there is insufficient free space on the Cache disk/pool, create operations will fail with an out-of-space status. Mover will take no action so any existing files for this share that are on the array are left there.

 

Prefer indicates that all new files and subdirectories should be written to the Cache disk/pool, provided enough free space exists on the Cache disk/pool. If there is insufficient space on the Cache disk/pool, then new files and directories are created on the array. When the mover is invoked, files and subdirectories are transferred off the array and onto the Cache disk/pool.

 

Thank you for clearing that up! It's the small nuances that get me; the devil is, as usual, in the details between "Prefer" and "Yes". I must have read these several times in the F1 help in the web UI, but the mover part escaped me somehow.

Link to comment
59 minutes ago, bunkermagnus said:

but the mover part escaped me somehow.

 

It is probably worth pointing out that when you select a setting, the text alongside it changes to tell you what action mover will take. This means you can tell without having to activate the help.

  • Like 1
Link to comment
On 3/4/2021 at 9:09 AM, jademonkee said:

 

I have the  intel_cpufreq driver, and when I set it to 'On Demand' I get:



root@Percy:~# grep MHz /proc/cpuinfo
cpu MHz         : 3097.473
cpu MHz         : 3101.104
cpu MHz         : 3094.179
cpu MHz         : 3091.279
cpu MHz         : 3100.118
cpu MHz         : 3093.355
cpu MHz         : 3092.385
cpu MHz         : 3099.522

So I'm guessing it has the effect of just running at full frequency all the time?

 

 

However, when I set it to 'Performance' I get:



root@Percy:~# grep MHz /proc/cpuinfo
cpu MHz         : 1596.710
cpu MHz         : 1596.466
cpu MHz         : 1596.533
cpu MHz         : 1596.398
cpu MHz         : 1596.510
cpu MHz         : 1596.519
cpu MHz         : 1596.589
cpu MHz         : 1596.520

 

I'm currently re-building a disk while also performing an 'erase and clear' on an old disk, so my CPU isn't close to idle (occasional 90+% peaks), so I would expect that at least some of the cores would be running at full speed with a profile that scales speed with demand. Is 'Performance' just keeping them at half speed, or is it more likely that my demand is not big enough to warrant scaling up to full speed for any significant period of time?

In the meantime I'll keep it on 'On Demand' and keep an eye on temps.

 

Update for @dlandon and @jlficken :

I swapped it from 'On Demand' to 'Performance' just now, and I now get:

root@Percy:~# grep MHz /proc/cpuinfo
cpu MHz         : 3097.960
cpu MHz         : 3050.607
cpu MHz         : 3108.310
cpu MHz         : 3117.604
cpu MHz         : 3124.331
cpu MHz         : 3112.003
cpu MHz         : 3113.727
cpu MHz         : 3097.004

So I don't know why I was at half speed under 'Performance' mode previously. So weird.

 

FWIW, if I set it to 'Power Save' I get the following:

root@Percy:~# grep MHz /proc/cpuinfo
cpu MHz         : 1596.545
cpu MHz         : 1596.504
cpu MHz         : 1596.528
cpu MHz         : 1596.444
cpu MHz         : 1596.444
cpu MHz         : 1596.515
cpu MHz         : 1596.500
cpu MHz         : 1596.498

 

EDIT:

ARRGH! Now I set it back to 'Performance' and I get:

root@Percy:~# grep MHz /proc/cpuinfo
cpu MHz         : 1596.634
cpu MHz         : 1596.535
cpu MHz         : 1596.515
cpu MHz         : 1596.617
cpu MHz         : 1596.523
cpu MHz         : 1596.548
cpu MHz         : 1596.502
cpu MHz         : 1596.543

So I don't know what's going on.
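
For anyone else chasing this: the /proc/cpuinfo MHz values are just momentary samples, so they can look inconsistent between reads. The standard cpufreq sysfs files (assuming the intel_cpufreq driver is loaded, as shown above) report what the kernel thinks it is doing:

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver        # active frequency driver
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor      # governor actually in effect, per core
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq \
    /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq      # allowed frequency range, in kHz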

Edited by jademonkee
Link to comment

Upgraded from 6.8.3 to 6.9 and when trying to connect with either PuTTY or WinSCP, I get a failed login for user root. After 'refreshing' the same password in the GUI, all is well again.

Just wanted to let anyone know who faces the same issue... you're not crazy ;)

 

Thanks for the great job guys, everything went smoothly!

 

Link to comment

I performed the upgrade this morning and went from 6.8.3 > 6.9. The upgrade went fine; it rebooted and came back online. A few minutes later, the UI became unresponsive and I could no longer ping the machine. I performed a hard reboot after a short while, and the same thing occurred again.

 

Thoughts on how to resolve this, considering it's only accessible for a short amount of time before going unresponsive/offline?

 

FYI - it's running headless so I'm unable to see what is occurring in the logs atm...

 

Quick follow-up: After a 3rd reboot, it's now back online again and hasn't shut down.  It's disabled my VM Manager though.  I had 2 VMs, one for Windows 10 and one for Home Assistant.  I'm guessing the Nvidia 3070 passed through to the Windows 10 VM has something to do with it?

 

Suggestions before I proceed with re-enabling it and possibly get into a boot loop again?
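
One thing that might be worth checking before re-enabling it: whether the 3070 is still bound to vfio-pci for passthrough or has been claimed by a host driver after the upgrade (standard lspci output, nothing Unraid-specific):

lspci -nnk | grep -iA3 nvidia     # "Kernel driver in use:" should show vfio-pci for a passed-through card
cat /proc/cmdline                 # confirm any vfio-pci.ids / stubbing options survived the upgrade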

 

Logs attached.

jared-server-diagnostics-20210305-0942.zip

Edited by irandumi
additional info.
Link to comment
12 hours ago, trurl said:

Since "single" doesn't have any redundancy, you could just forgo btrfs and format each disk as XFS, each in its own separate pool. They would all be part of user shares. I have my dockers and VMs on a "fast" pool which is just one NVMe using XFS.

Tried this, but from the looks of it, shares can only be assigned to one pool. So having one share span multiple pools doesn't seem possible.

Link to comment
4 hours ago, Rendo said:

Upgraded from 6.8.3 to 6.9 and when trying to connect with either PuTTY or WinSCP, I get a failed login for user root. After 'refreshing' the same password in the GUI, all is well again.

Just wanted to let anyone know who faces the same issue... you're not crazy ;)

 

Thanks for the great job guys, everything went smoothly!

 

 

EDIT: SOLVED - I don't understand why, but it took 2 reboots and 3 x changing/refreshing the root password before I could again use ssh from my Mac. Regardless, it's working for now.

 

Original Message Starts:

 

I'm having this issue as well, but only on my backup unRAID system. Refreshing the root password hasn't worked. I'm using the macOS Terminal like I always have, and under the 'New Remote Connection' dialog it shows I'm issuing the command:

 

ssh -p 22 root@<server IP>

 

After refreshing the password I also tried using the built-in shell access from the unRAID GUI and it works:

 

[Animated screenshot: shell access from the unRAID GUI logging in successfully]

 

Looking at the syslog shows that it's using the right user (uid=0 which is root), but it's not accepting the password. This worked fine before the upgrade to 6.9.0. Note that the backup system was upgraded from 6.8.3 whereas my media server (which I can ssh into) was upgraded from 6.9.0 RC2.

 

Suggestions? Thanks!
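
In case it helps with debugging, a few standard OpenSSH/CLI checks (the syslog path below is Unraid's default; substitute your server's IP):

ssh -vvv -p 22 root@<server IP>        # verbose client output shows which auth methods are offered and where it fails
tail -f /var/log/syslog | grep sshd    # on the server (e.g. via the GUI terminal that works), watch sshd's side
passwd root                            # re-set the root password from the CLI instead of the web GUI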

 

Edited by AgentXXL
Link to comment
2 hours ago, smashingtool said:

Tried this, but from the looks of it, shares can only be assigned to one pool. So having one share span multiple pools doesn't seem possible.

 

You can only have a single pool assigned to a user share for the purpose of caching writes; however, a user share is still defined as the union of all root-level directories of the same name across all array disks and pools. So you could have a user share span multiple pools, but you'd have to copy files to some of them manually.
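
As a concrete (hypothetical) example, with a share called Media and two made-up pools named fast and slow:

# The same top-level folder can exist on array disks and on pools...
ls -d /mnt/disk1/Media /mnt/disk2/Media /mnt/fast/Media /mnt/slow/Media
# ...and the user share is simply the union of them all:
ls /mnt/user/Media
# A file copied manually onto a pool still shows up in the share:
cp movie.mkv /mnt/fast/Media/ && ls /mnt/user/Media/movie.mkv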

  • Thanks 1
Link to comment

***I think this is my Chrome that is broken so this can be ignored.***

 

I have an error with VNC Remote.

 

SyntaxError: The requested module '../core/util/browser.js' does not provide an export named 'hasScrollbarGutter'

 

Anyone else had this? I am using Chrome and I haven't tried rebooting yet. 

 

Edit: I can VNC into the machines with Remmina or similar.

Edited by PRG
Link to comment
On 3/4/2021 at 6:42 AM, ezhik said:

 

 

 

[quoted post showing the unexpected IPv6 route entries]

 

Same here, lots of entries - I tried to delete them, nothing happens.


The logs show:

 

Mar  3 14:42:00 unraid3 dnsmasq[12378]: no servers found in /etc/resolv.conf, will retry
Mar  3 14:42:00 unraid3 dnsmasq[12378]: reading /etc/resolv.conf
Mar  3 14:42:00 unraid3 dnsmasq[12378]: using nameserver 9.9.9.9#53

Same thing for me after upgrading. Could anyone shine a light on why these IPv6 routes were created?
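
If it helps anyone dig further, the routes can be inspected (and removed) from the CLI with standard iproute2 commands; br0 is the usual Unraid bridge, and the prefix in the delete example is just a placeholder:

ip -6 route show                          # list the IPv6 routes the kernel currently has
ip -6 addr show dev br0                   # see whether the bridge picked up link-local/SLAAC addresses
ip -6 route del 2001:db8::/64 dev br0     # delete a route (placeholder prefix shown)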

Link to comment

Updated to 6.9.0 with success. At first I thought it had gone horribly wrong as the web GUI would not load... spent an hour troubleshooting, got fed up and left it. Came back 45 mins later and nothing was wrong; I guess there was a delay before the web GUI was live... impatient, most likely.

 

Just in case anyone else panics, give it some time after the update before trying to access the web GUI.

Link to comment
10 minutes ago, witalit said:

Just in case anyone else panics, give it some time after the update before trying to access the web GUI

I am glad that worked for you (others as well), but it is certainly not normal or expected. 

 

Have you tried a reboot after it came up and did it take that long again?  If so, your configuration should be checked as something is not right.

Link to comment

NEW POOL CLARIFICATION

If I'm following the new pool stuff right, it basically allows 30 * 34 * disk size of total storage (35 pools if no cache). A share will show files from all pools, but using multiple pools for the same share requires manual movement. But downsides:

  • mover only goes between the main array and the cache (no cache/fast/slow 3-tier option, and no cache/other-pool option)
  • a pool can only be raid0 or raid1 (with 2, 3, or 4 copies of the data). Raid6 is replaced with raid1c3, meaning instead of (#disks - 2 parity) / #disks you only get 1/3 of the raw capacity at most? Is drastically reduced capacity (for more than a few drives), in exchange for less stress during rebuild and higher redundancy, the only option here?

 

WHAT I AM TRYING TO DO, RELATED TO NEW POOLS BUT CAN MOVE TO ANOTHER THREAD IF NEEDED

I have 14 x 4TB SAS drives in my main array that I would like to move into a pool, along with some more 4TB drives that are not in use. I was looking at getting another server just for this pool of 4TB disks, probably running TrueNAS, but for now I do not need more server horsepower and the pool will be under 30 disks. To do this, my thought is to:

  1. make a pool with the ~10 drives that are not in use
  2. move the share that I want on this pool onto it from the main array (or at least as much as will fit)
  3. use unbalance to clear the rest of the contents of the 4TB drives that are in the main array
  4. remove them from the array, which will result in rebuilding double parity for 2 days
  5. add the drives to the pool and, I assume, do some kind of rebalancing operation since the drive count would have just gone up 2.4 times

Depending on available space, I might need to do steps 2-5 in 2 or 3 batches instead of all at once. Either way this will be at least a few days out, as I only have a few open bays and need to order another DAS. After expanding the bay count, and upgrading from 6.8.3, is that the right way to do it? The 4TB drives in the main array are disk11-disk24, so after this whole process is over I'll have to reassign disk positions so the main array is contiguous (I know it doesn't have to be, but I'd rather). Unless it would be better to reorder the drives before moving to the new pool, but either way I have to do parity rebuilds. I've been wanting a reorder anyway, as it goes 12TB*2, 6TB*3, 12TB*5, 4TB*14, 6TB*1, 8TB*3 (the 6TB*3 came from another system; the 6TB*1 and 8TB*3 were shucked externals), but have been putting it off due to parity rebuilds.

Link to comment
