Unraid OS version 6.9.0 available



7 minutes ago, TDD said:

 

See my earlier post on how I fixed this.  I have had no issues since.

Do you mean where you ran SeaChest to change drive settings?  I've got four ST8000VN004 drives in my array via LSI, and two of them have dropped off, one of them twice (I think during spin-up).

 

It would be difficult to run them off the motherboard, and I'm just waiting for a rebuild to finish before potentially downgrading.


On 3/9/2021 at 8:43 AM, pete69 said:

I had copied 'config/disk.cfg.bak' to 'config/disk.cfg' as suggested here.

 

I then manually took a backup of unraid.

 

I downgraded back to 6.8.3, but no drives were shown in the Cache. 

 

I can add both drives back, but they both show the blue square meaning new device.

 

Other drives have appeared correctly with the green circle.

 

I have not restarted the array yet, as I am concerned I may lose data on the Cache drives.

 

Any suggestions?

 

Answering my own question.

 

After the above did not work I recopied 'config/disk.cfg.bak' to 'config/disk.cfg' after the downgrade had been applied.

 

And the Cache worked again.

 

Hope that helps someone else.
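For anyone following along, the restore step amounts to something like this from the console (a sketch assuming the stock flash mount at /boot; the .new filename is just a placeholder for keeping a copy of the current settings):

```shell
# keep a copy of the post-upgrade settings, then restore the pre-upgrade ones
cp /boot/config/disk.cfg /boot/config/disk.cfg.new
cp /boot/config/disk.cfg.bak /boot/config/disk.cfg
# reboot afterwards so the restored settings take effect
```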

Edited by pete69
4 hours ago, Cessquill said:

Do you mean where you ran SeaChest to change drive settings?  I've got four ST8000VN004 drives in my array via LSI, and two of them have dropped off, one of them twice (I think during spin-up).

 

It would be difficult to run them off the motherboard, and I'm just waiting for a rebuild to finish before potentially downgrading.

 

 


Link to Seachest https://www.seagate.com/gb/en/support/software/seachest/

 

SeaChest commands use sg device names rather than sd; you can use sg_map to find the sg-to-sd mapping:

 

root@Tower:/tmp/Seachest# sg_map
/dev/sg0  /dev/sda
/dev/sg1  /dev/sdb
/dev/sg2  /dev/sdc
/dev/sg3  /dev/sdd
/dev/sg4  /dev/sde
/dev/sg5  /dev/sdf
/dev/sg6  /dev/sdg
/dev/sg7  /dev/sdh
/dev/sg8  /dev/sdi
/dev/sg9  /dev/sdj
/dev/sg10  /dev/sdk
/dev/sg11  /dev/sdl
/dev/sg12  /dev/sdm
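The settings change TDD mentioned is typically done with something like the following - a sketch only, since option and binary names vary between SeaChest releases (the shipped tools often carry version suffixes), and /dev/sg4 is a placeholder for the affected drive; check each tool's --help first:

```shell
cd /tmp/Seachest
# disable Extended Power Conditions (EPC) on the drive
./SeaChest_PowerControl -d /dev/sg4 --EPC disable
# disable low-current spinup, reported to cause drop-offs on some LSI HBAs
./SeaChest_Configure -d /dev/sg4 --lowCurrentSpinup disable
# verify the EPC state afterwards
./SeaChest_PowerControl -d /dev/sg4 --showEPCSettings
```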

 

Edited by SimonF

I don't have a way of testing, but do the options for GPU drivers need to be looked at? E.g. for people getting black screens in GUI mode, etc.

 

Disabling modesetting
You may want to disable KMS for various reasons, such as getting a blank screen or a "no signal" error from the display, when using the Catalyst driver, etc. To disable KMS add nomodeset as a kernel parameter. See Kernel parameters for more info.

Along with the nomodeset kernel parameter, for Intel graphics you need to add i915.modeset=0, and for Nvidia graphics you need to add nouveau.modeset=0. For an Nvidia Optimus dual-graphics system, you need to add all three kernel parameters (i.e. "nomodeset i915.modeset=0 nouveau.modeset=0").
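On Unraid those parameters would go on the append line in syslinux.cfg (Main -> Flash -> Syslinux Configuration). E.g., for a stock GUI-mode boot entry - your existing append line may already carry other options that should be kept:

```
label Unraid OS GUI Mode
  kernel /bzimage
  append nomodeset i915.modeset=0 nouveau.modeset=0 initrd=/bzroot,/bzroot-gui
```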

 

On 3/8/2021 at 12:48 AM, AgentXXL said:

@limetech 

 

This is the 3rd time since the upgrade to 6.9.0 that the system has reported an unclean shutdown. On all 3 occasions I've been able to watch the console output on the monitor directly attached to the unRAID system, with no noticeable errors during the reboot.

 

 

Just a quick update - the 3rd 'false' parity check completed with 0 errors found, as I expected. I've increased the timeout to 120 seconds as @JorgeB suggested. I've also just successfully upgraded to 6.9.1 and hope that these 'false' unclean shutdowns won't re-occur.

 

Also, just to confirm - 6.9.1 shows the correct colors for disk utilization thresholds on both the Dashboard and Main tabs. My OCD thanks you @limetech for correcting this. 🖖

On 3/4/2021 at 8:46 AM, NAStyBox said:

I upgraded from 6.8.3 with no issues.

 

However before I went ahead with the upgrade I read this thread. So just for giggles I did the following before upgrading. 

1. Disabled auto-start on all dockers

2. Disabled VMs entirely
3. Set Domains and Appdata shares to Cache "Yes", and ran mover to clear my SSDs just in case an issue came up. They're XFS. 
4. Backed up flash drive

5. Rebooted

6. Ran upgrade

7. Rebooted

8. Let it run 20 minutes while I checked the dash, array, and NIC for any issues.
9. Reenabled Docker autostarts and VMs without starting them
10. Rebooted

...and I'm good as gold. In fact the whole house uses an Emby Docker and the array is so fast I think I might leave it there. 

 

 

I upgraded to 6.9.1 from 6.8.3 and it went well using the above process from @NAStyBox! Thanks unRaid Devs!

 

However, I think step 3 moved my VMs off of the SSD cache and into the array.


Can I just move them back to the SSD?

14 minutes ago, nraygun said:

I upgraded to 6.9.1 from 6.8.3 and it went well using the above process from @NAStyBox! Thanks unRaid Devs!

 

However, I think step 3 moved my VMs off of the SSD cache and into the array.


Can I just move them back to the SSD?

Set the share back to Cache:Prefer and run the mover.

4 hours ago, Rick Gillyon said:

Set the share back to Cache:Prefer and run the mover.

Thanks @Rick Gillyon - that did it, but I think a little too well!

I could swear I had my appdata and domains on the cache before I upgraded. When I set the cache to prefer on these two shares, it did move the contents to the cache, but it exceeded the size of the cache drive.

Is there a way to only move some of the VMs to the cache drive? Does everyone run docker containers off the array?

7 hours ago, nraygun said:

I upgraded to 6.9.1 from 6.8.3 and it went well using the above process from @NAStyBox! Thanks unRaid Devs!

 

However, I think step 3 moved my VMs off of the SSD cache and into the array.


Can I just move them back to the SSD?

Yes, you would just change the setting "Use cache pool (for new files/directories):" in the share back to "Prefer" and run mover. Do this with VMs down. 

5 hours ago, nraygun said:

Thanks @Rick Gillyon - that did it, but I think a little too well!

I could swear I had my appdata and domains on the cache before I upgraded. When I set the cache to prefer on these two shares, it did move the contents to the cache, but it exceeded the size of the cache drive.

Is there a way to only move some of the VMs to the cache drive? Does everyone run docker containers off the array?

I run dockers and appdata from cache. If you set system and appdata shares to Cache:Prefer and ran out of space, it should already have mixed array and cache - nothing is missing. If you want to change the mix of what's on array and cache (e.g. to get certain VMs on cache), you'll have to move things around manually. If you want to move the appdata share to the array, just set it back to Cache:Yes and invoke the mover.

 

Also worth checking that none of your dockers are massive, as some can misbehave with the logging.

8 hours ago, nraygun said:

Is there a way to only move some of the VMs to the cache drive? Does everyone run docker containers off the array?

 

The easiest thing to do is NOT to set the Domains share to "Prefer" but to "Only" or "No", which will stop mover taking any action on the share.  Then manually move the vdisk files for the VMs you want on the cache to that location.  It is also not mandated that vdisk files HAVE to be in the Domains share - that is just the default.

 

You then handle backing up any vdisk files on the cache (or wherever you have placed them) as needed using either your own backup script or the VM Backup plugin.

 

Note that for existing vdisk files they will be found on cache/array regardless of the setting - in such cases the setting just determines where NEW files get created.

 

46 minutes ago, itimpi said:

 

The easiest thing to do is NOT to set the Domains share to "Prefer" but to "Only" or "No", which will stop mover taking any action on the share.  Then manually move the vdisk files for the VMs you want on the cache to that location.  It is also not mandated that vdisk files HAVE to be in the Domains share - that is just the default.

 

You then handle backing up any vdisk files on the cache (or wherever you have placed them) as needed using either your own backup script or the VM Backup plugin.

 

Note that for existing vdisk files they will be found on cache/array regardless of the setting - in such cases the setting just determines where NEW files get created.

 

Thanks @itimpi.

I set the appdata share to prefer to get it on the cache. I think it's only 30GB or so, so I should be good there.
For the VMs, if I have to move things around manually, I was thinking of setting up a new share called "vmcache" and setting it to prefer, then manually moving over the VM directories for the VMs I want on the cache. I'd then change the vdisk location inside each VM's configuration to point to this share. This should allow me to choose which VMs run off the cache.
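That manual move might look roughly like this from the console (the share and VM names are just the examples from this scheme; stop the VM first, and verify the copy before deleting anything):

```shell
# create the VM's directory on the cache-preferred share and copy the vdisk over
mkdir -p /mnt/cache/vmcache/myvm
rsync -avh /mnt/user0/domains/myvm/vdisk1.img /mnt/cache/vmcache/myvm/
# after verifying, remove the array copy and point the VM template at
# /mnt/user/vmcache/myvm/vdisk1.img
```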

Does this scheme sound right?


Backed up the flash drive and ran the upgrade to 6.9.1 with seemingly no issues.  Thanks for a successful upgrade process.  I'd really appreciate being able to apply security patches more quickly on the unraid system though, as patches come out all the time.  Is there a way to ship security-only fixes in the future without waiting for big upgrades?  A lot of security items get fixed in the ~10 months between releases.  Thanks for all you do!


I'm not quite following the implications the changes to VFIO have in this update.

In order to pass the iGPU (on a 6700K CPU) _with sound_ through to a VM, the following syslinux change had been added:

pcie_acs_override=downstream vfio-pci.ids=<my_device_id> modprobe.blacklist=i2c_i801,i2c_smbus

"my_device_id" being the vendor:device of the audio device.

 

Is this still needed or there's another way this should be solved now?

Edited by tuxbass
40 minutes ago, tuxbass said:

I'm not quite following the implications the changes to VFIO have in this update.

In order to pass the iGPU (on a 6700K CPU) _with sound_ through to a VM, the following syslinux change had been added:



pcie_acs_override=downstream vfio-pci.ids=<my_device_id> modprobe.blacklist=i2c_i801,i2c_smbus

"my_device_id" being the vendor:device of the audio device.

 

Is this still needed or there's another way this should be solved now?

 

Rather than hardcode your <my_device_id> in syslinux, you can now bind it to vfio-pci by checking the checkbox next to the device on the System Devices page. You don't *have* to change to the new method, but it is recommended:

https://forums.unraid.net/topic/93781-guide-bind-devices-to-vfio-pci-for-easy-passthrough-to-vms/
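If I understand the new method correctly, the checkbox records the selection in a file on the flash drive, /boot/config/vfio-pci.cfg, in roughly this form (the PCI address and vendor:device ID below are made-up examples):

```
BIND=0000:00:1f.3|8086:a170
```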

 

37 minutes ago, ljm42 said:

 

Rather than hardcode your <my_device_id> in syslinux, you can now bind it to vfio-pci by checking the checkbox next to the device on the System Devices page. You don't *have* to change to the new method, but it is recommended:

https://forums.unraid.net/topic/93781-guide-bind-devices-to-vfio-pci-for-easy-passthrough-to-vms/

Ah, I'm mixing things up. So only the vfio binding is to be removed from syslinux when the new binding method is used; the acs_override and the i2c_i801/i2c_smbus module blacklisting still remain there. Thanks!


A couple of questions re. the ssh changes:

Quote

In addition, upon upgrade we ensure the config/ssh/root directory exists on the USB flash boot device; and, we have set up a symlink: /root/.ssh to this directory.  This means any files you might put into /root/.ssh will be persistent across reboots.

Now /boot/config/ssh looks like this:

┌─[Tower]─[/boot/config/ssh]
└──╼ + ls -lt
total 96K
drwx------ 2 root root 8.0K Mar 19 12:00 root
-rw------- 1 root root  812 Feb 18 11:32 authorized_keys
-rw------- 1 root root  177 Dec  1 23:09 known_hosts
-rw------- 1 root root  352 Dec  1 23:09 known_hosts~
-rw------- 1 root root  668 May 11  2019 ssh_host_dsa_key
-rw------- 1 root root  600 May 11  2019 ssh_host_dsa_key.pub
-rw------- 1 root root  227 May 11  2019 ssh_host_ecdsa_key
-rw------- 1 root root  172 May 11  2019 ssh_host_ecdsa_key.pub
-rw------- 1 root root  399 May 11  2019 ssh_host_ed25519_key
-rw------- 1 root root   92 May 11  2019 ssh_host_ed25519_key.pub
-rw------- 1 root root 1.7K May 11  2019 ssh_host_rsa_key
-rw------- 1 root root  392 May 11  2019 ssh_host_rsa_key.pub

with the root/ dir being empty; I think the update introduced it. So far I had been creating the /root/.ssh -> /boot/config/ssh symlink myself from the go file.

 

1) Is it safe to move all the keys files from /boot/config/ssh to /boot/config/ssh/root?

 

Note the keys were created by unraid (likely during the very initial installation years ago), as I haven't generated them myself. Unsure what such a move might affect.

-----------------------------------------------------------------------

My /etc/ssh/sshd_config has following line:
    PasswordAuthentication no

 

After adding said line and restarting sshd (via /etc/rc.d/rc.sshd restart), password login is still allowed.
2) What has changed - why isn't key-only login enforced anymore?

Edited by tuxbass
7 hours ago, tuxbass said:

So far I had been creating /root/.ssh -> /boot/config/ssh symlink myself from the go file.

 

Not ideal, as /etc/rc.d/rc.sshd has always managed the files in /boot/config/ssh. It was not expecting you to put other files in there with the files it manages.

 

7 hours ago, tuxbass said:

1) Is it safe to move all the keys files from /boot/config/ssh to /boot/config/ssh/root?

 

No.  You could either ignore the extra files, or delete the /boot/config/ssh dir and reboot, letting Unraid set everything up fresh. Then put your authorized keys in /boot/config/ssh/root

 

7 hours ago, tuxbass said:

What has changed, why isn't key-only login enforced anymore?

 

Editing files outside of the gui isn't technically supported, but this seems like something that should work. If you clean everything and still have the issue, might be worth a bug report.

 

11 hours ago, tuxbass said:

1) Is it safe to move all the keys files from /boot/config/ssh to /boot/config/ssh/root?

 

Note the keys were created by unraid (likely during the very initial installation years ago), as I haven't generated them myself. Unsure what such a move might affect.

-----------------------------------------------------------------------

My /etc/ssh/sshd_config has following line:
    PasswordAuthentication no

 

After adding said line and restarting sshd (via /etc/rc.d/rc.sshd restart), password login is still allowed.
2) What has changed - why isn't key-only login enforced anymore?

Only the files that belong in /root/.ssh should be in /boot/config/ssh/root. All the other files in /boot/config/ssh are copied (non-recursively) into /etc/ssh whenever the sshd service is started/restarted, before sshd comes up.

 

SSH key-only login was never enabled by default.

To enable it, copy /etc/ssh/sshd_config to /boot/config/ssh and edit that copy.
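As a sketch of that edit - demonstrated here on a scratch copy so nothing on your flash drive is touched; on Unraid the persistent copy to edit would be /boot/config/ssh/sshd_config as described above:

```shell
# Demonstration on a scratch copy of an sshd_config
conf=$(mktemp)
printf '#PasswordAuthentication yes\nPermitRootLogin yes\n' > "$conf"
# remove any existing directive (commented or not), then append the enforced value
sed -i '/^#\{0,1\}PasswordAuthentication/d' "$conf"
echo 'PasswordAuthentication no' >> "$conf"
cat "$conf"
```

Restart sshd afterwards (/etc/rc.d/rc.sshd restart) so the change is picked up.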

 

 

On 3/19/2021 at 8:41 PM, ljm42 said:

 

Not ideal, as /etc/rc.d/rc.sshd has always managed the files in /boot/config/ssh. It was not expecting you to put other files in there with the files it manages.

In creating the symlink, the only thing manually created was the link /root/.ssh itself, pointing to /boot/config/ssh, so the latter's contents were not modified.

 

Quote

delete the /boot/config/ssh dir and reboot

Ah, so it's actively managed on startup? Good call, will give that a try.

 

On 3/20/2021 at 12:19 AM, ken-ji said:

 all the files in /boot/config/ssh are copied to /etc/ssh (non recursively) before sshd is started

 

ssh key-only login was never enabled by default.

To enable this, you copy /etc/ssh/sshd_config to /boot/config/ssh and edit that one.

 

Great tips - will try copying sshd_config to /boot/config/ssh as opposed to editing the file in /etc via the go file.

I know key-only login was never enabled, but the method I described used to work until 6.9.0.


You also need to set

# To disable tunneled clear text passwords, change to no here!
PasswordAuthentication no

to disable password login.

Or, since Limetech insists that only the root user be used:

PermitRootLogin prohibit-password
#PermitRootLogin yes

 

Edited by ken-ji

That's pretty much what I've been doing so far. In the go file there's this section for the ssh changes:

_ssh="/root/.ssh"
sshd_conf="/etc/ssh/sshd_config"

[[ -d "$_ssh" ]] || ln -s -- /boot/config/ssh "$_ssh"
find -L "$_ssh/" \( -type f -o -type d \) -exec chmod 'u=rwX,g=,o=' -- '{}' \+

sed --follow-symlinks -i '/^PermitEmptyPasswords.*/d' "$sshd_conf"
sed --follow-symlinks -i '/^PasswordAuthentication.*/d' "$sshd_conf"
echo 'PermitEmptyPasswords no' >> "$sshd_conf"
echo 'PasswordAuthentication no' >> "$sshd_conf"
# restart sshd service:
/etc/rc.d/rc.sshd restart
### /sshd

 
