falsenegative Posted March 10, 2021
Thanks for this info @TDD! For reference to others who stumble upon this: TDD is talking about this post:
SimonF Posted March 10, 2021
Link to SeaChest: https://www.seagate.com/gb/en/support/software/seachest/
Commands use sg names rather than sd; you can use sg_map to find the sg-to-sd mapping:

root@Tower:/tmp/Seachest# sg_map
/dev/sg0  /dev/sda
/dev/sg1  /dev/sdb
/dev/sg2  /dev/sdc
/dev/sg3  /dev/sdd
/dev/sg4  /dev/sde
/dev/sg5  /dev/sdf
/dev/sg6  /dev/sdg
/dev/sg7  /dev/sdh
/dev/sg8  /dev/sdi
/dev/sg9  /dev/sdj
/dev/sg10 /dev/sdk
/dev/sg11 /dev/sdl
/dev/sg12 /dev/sdm
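As a sketch, the sg name for a particular sd device can be picked out of that mapping with awk. The listing below is hard-coded sample data for illustration; on a real system you would pipe the output of sg_map itself (e.g. `sg_map | awk -v dev=/dev/sdb '$2==dev{print $1}'`):

```shell
#!/bin/sh
# Print the sg device that corresponds to a given sd device by parsing
# sg_map-style two-column output. The mapping here is inlined sample data;
# replace the heredoc with a pipe from the real sg_map command.
find_sg() {
  awk -v dev="/dev/$1" '$2 == dev { print $1 }' <<'EOF'
/dev/sg0 /dev/sda
/dev/sg1 /dev/sdb
/dev/sg2 /dev/sdc
EOF
}

find_sg sdb   # prints /dev/sg1
```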
SimonF Posted March 10, 2021
I don't have a way of testing, but do the options for GPU drivers need to be looked at? i.e. for people getting black screens in GUI mode, etc.

Disabling modesetting
You may want to disable KMS for various reasons, such as getting a blank screen or a "no signal" error from the display when using the Catalyst driver, etc. To disable KMS, add nomodeset as a kernel parameter. See Kernel parameters for more info. Along with the nomodeset kernel parameter, for an Intel graphics card you need to add i915.modeset=0, and for an Nvidia graphics card you need to add nouveau.modeset=0. For an Nvidia Optimus dual-graphics system, you need to add all three kernel parameters (i.e. "nomodeset i915.modeset=0 nouveau.modeset=0").
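For reference, on Unraid those kernel parameters go on the append line of /boot/syslinux/syslinux.cfg (editable via Main -> Flash -> Syslinux Configuration). A sketch only, assuming the default boot label; the exact label name and initrd entry may differ on your install:

```
label Unraid OS
  menu default
  kernel /bzimage
  append nomodeset i915.modeset=0 nouveau.modeset=0 initrd=/bzroot
```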
AgentXXL Posted March 10, 2021
On 3/8/2021 at 12:48 AM, AgentXXL said:
@limetech This is the 3rd time since the upgrade to 6.9.0 that the system has reported an unclean shutdown. On all 3 occasions I've been able to watch the console output on the monitor directly attached to the unRAID system, with no noticeable errors during the reboot.

Just a quick update: the 3rd 'false' parity check completed with 0 errors found, as I expected. I've increased the timeout to 120 seconds as @JorgeB suggested. I've also just successfully upgraded to 6.9.1 and hope that these 'false' unclean shutdowns won't recur. Also, just to confirm: 6.9.1 shows the correct colors for disk utilization thresholds on both the Dashboard and Main tabs. My OCD thanks you @limetech for correcting this. 🖖
nraygun Posted March 10, 2021
On 3/4/2021 at 8:46 AM, NAStyBox said:
I upgraded from 6.8.3 with no issues. However, before I went ahead with the upgrade I read this thread. So just for giggles I did the following before upgrading.
1. Disabled auto-start on all dockers
2. Disabled VMs entirely
3. Set Domains and Appdata shares to Cache "Yes", and ran mover to clear my SSDs just in case an issue came up. They're XFS.
4. Backed up flash drive
5. Rebooted
6. Ran upgrade
7. Rebooted
8. Let it run 20 minutes while I checked the dash, array, and NIC for any issues.
9. Re-enabled Docker autostarts and VMs without starting them
10. Rebooted
...and I'm good as gold. In fact the whole house uses an Emby Docker and the array is so fast I think I might leave it there.

I upgraded to 6.9.1 from 6.8.3 and it went well using the above process from @NAStyBox! Thanks unRaid Devs! However, I think step 3 moved my VMs off of the SSD cache and into the array. Can I just move them back to the SSD?
Rick Gillyon Posted March 10, 2021
14 minutes ago, nraygun said:
However, I think step 3 moved my VMs off of the SSD cache and into the array. Can I just move them back to the SSD?

Set the share back to Cache:Prefer and run the mover.
nraygun Posted March 11, 2021
4 hours ago, Rick Gillyon said:
Set the share back to Cache:Prefer and run the mover.

Thanks @Rick Gillyon - that did it, but I think a little too well! I could swear I had my appdata and domains on the cache before I upgraded. When I set the cache to Prefer on these two shares, it did move the contents to the cache, but they exceeded the size of the cache drive. Is there a way to move only some of the VMs to the cache drive? Does everyone run docker containers off the array?
Rick Gillyon Posted March 11, 2021
5 hours ago, nraygun said:
Is there a way to only move some of the VMs to the cache drive? Does everyone run docker containers off the array?

I run dockers and appdata from cache. If you set the system and appdata shares to Cache:Prefer and ran out of space, it should already have mixed array and cache - nothing is missing. If you want to change the mix of what's on array and cache (e.g. to get certain VMs on cache), you'll have to move things around manually. If you want to move the appdata share to the array, just set it back to Cache:Yes and invoke the mover. Also worth checking that none of your dockers is massive, as some can misbehave with the logging.
itimpi Posted March 11, 2021
8 hours ago, nraygun said:
Is there a way to only move some of the VMs to the cache drive? Does everyone run docker containers off the array?

The easiest thing to do is NOT have the Domains share set to "Prefer" but to "Only" or "No", which will stop mover taking any action on the share. Then manually move the vdisk files for the VMs you want on the cache to that location. It is also not mandated that vdisk files HAVE to be in the Domains share - that is just the default. You then handle backing up any vdisk files on the cache (or wherever you have placed them) as needed, using either your own backup script or the VM Backup plugin. Note that existing vdisk files will be found on cache/array regardless of the setting - in such cases the setting just determines where NEW files get created.
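A sketch of that manual move, using stand-in paths under /tmp so the commands are safe to try as-is. On an actual server the source and destination would be something like /mnt/disk1/domains/&lt;vm&gt; (array) and /mnt/cache/domains/&lt;vm&gt;, the VM name is illustrative, and the VM should be stopped first:

```shell
#!/bin/sh
# Move one VM's vdisk directory from an "array" path to a "cache" path.
# Demo paths under /tmp for illustration; substitute the real
# /mnt/diskN/domains and /mnt/cache/domains paths on a live system.
src=/tmp/demo-array/domains/MyVM
dst=/tmp/demo-cache/domains/MyVM

mkdir -p "$src" "$(dirname "$dst")"
touch "$src/vdisk1.img"                # stand-in vdisk file

cp -a "$src" "$dst" && rm -rf "$src"   # copy, then remove the source

ls "$dst"   # prints vdisk1.img
```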
nraygun Posted March 11, 2021
46 minutes ago, itimpi said:
The easiest thing to do is NOT have the Domains share set to "Prefer" but to "Only" or "No", which will stop mover taking any action on the share. Then manually move the vdisk files for the VMs you want on the cache to that location.

Thanks @itimpi. I set appdata to Prefer to get it onto the cache. I think it's only 30GB or so, so I should be good there. For the VMs, if I have to move things around manually, I was thinking of setting up a new share called "vmcache" and setting it to Prefer, then manually moving over the directories for the VMs I want on the cache. Then I'd change the file location inside each VM's configuration to point to this share. This should let me choose which VMs run off the cache. Does this scheme sound right?
nlz Posted March 14, 2021
Backed up flash drive, ran the upgrade to 6.9.1 with seemingly no issues. Thanks for a successful upgrade process. I'd really appreciate being able to remediate security patches quicker on the Unraid system though, as patches come out all the time. Is there a way to apply security-only fixes in the future without waiting for big upgrades? A lot of security items take ~10 months to get fixed. Thanks for all you do!
tuxbass Posted March 18, 2021
I'm not quite following the implications the changes to VFIO have in this update. In order to pass the iGPU (on a 6700K CPU) _with sound_ through to a VM, the following syslinux config change had been added:

pcie_acs_override=downstream vfio-pci.ids=<my_device_id> modprobe.blacklist=i2c_i801,i2c_smbus

"my_device_id" being the vendor:device ID of the audio device. Is this still needed, or is there another way this should be solved now?
ljm42 Posted March 18, 2021
40 minutes ago, tuxbass said:
Is this still needed, or is there another way this should be solved now?

Rather than hardcoding your <my_device_id> in syslinux, you can now bind it to vfio-pci by ticking the checkbox next to the device on the System Devices page. You don't *have* to change to the new method, but it is recommended: https://forums.unraid.net/topic/93781-guide-bind-devices-to-vfio-pci-for-easy-passthrough-to-vms/
tuxbass Posted March 18, 2021
37 minutes ago, ljm42 said:
Rather than hardcoding your <my_device_id> in syslinux, you can now bind it to vfio-pci by ticking the checkbox next to the device on the System Devices page.

Ah, I'm mixing things up. So only the vfio binding is to be removed from syslinux when the new binding method is used; the acs_override and the i2c_i801,i2c_smbus module blacklisting still remain there. Thanks!
tuxbass Posted March 19, 2021
A couple of questions re. the ssh changes:

Quote
In addition, upon upgrade we ensure the config/ssh/root directory exists on the USB flash boot device; and, we have set up a symlink: /root/.ssh to this directory. This means any files you might put into /root/.ssh will be persistent across reboots.

Now /boot/config/ssh looks like this:

┌─[Tower]─[/boot/config/ssh]
└──╼ + ls -lt
total 96K
drwx------ 2 root root 8.0K Mar 19 12:00 root
-rw------- 1 root root  812 Feb 18 11:32 authorized_keys
-rw------- 1 root root  177 Dec  1 23:09 known_hosts
-rw------- 1 root root  352 Dec  1 23:09 known_hosts~
-rw------- 1 root root  668 May 11  2019 ssh_host_dsa_key
-rw------- 1 root root  600 May 11  2019 ssh_host_dsa_key.pub
-rw------- 1 root root  227 May 11  2019 ssh_host_ecdsa_key
-rw------- 1 root root  172 May 11  2019 ssh_host_ecdsa_key.pub
-rw------- 1 root root  399 May 11  2019 ssh_host_ed25519_key
-rw------- 1 root root   92 May 11  2019 ssh_host_ed25519_key.pub
-rw------- 1 root root 1.7K May 11  2019 ssh_host_rsa_key
-rw------- 1 root root  392 May 11  2019 ssh_host_rsa_key.pub

with the root/ dir being empty; I think the update introduced it. So far I had been creating the /root/.ssh -> /boot/config/ssh symlink myself from the go file.

1) Is it safe to move all the key files from /boot/config/ssh to /boot/config/ssh/root? Note the keys were created by Unraid (likely during the very initial installation years ago), as I haven't generated them myself. Unsure what such a move might affect.

2) My /etc/ssh/sshd_config has the following line:

PasswordAuthentication no

After adding said line and restarting sshd (via /etc/rc.d/rc.sshd restart), password login is still allowed. What has changed? Why isn't key-only login enforced anymore?
ljm42 Posted March 19, 2021
7 hours ago, tuxbass said:
So far I had been creating the /root/.ssh -> /boot/config/ssh symlink myself from the go file.

Not ideal, as /etc/rc.d/rc.sshd has always managed the files in /boot/config/ssh. It was not expecting you to put other files in there with the files it manages.

7 hours ago, tuxbass said:
1) Is it safe to move all the key files from /boot/config/ssh to /boot/config/ssh/root?

No. You could either ignore the extra files, or delete the /boot/config/ssh dir and reboot, letting Unraid set everything up fresh. Then put your authorized keys in /boot/config/ssh/root.

7 hours ago, tuxbass said:
What has changed? Why isn't key-only login enforced anymore?

Editing files outside of the GUI isn't technically supported, but this seems like something that should work. If you clean everything up and still have the issue, it might be worth a bug report.
ken-ji Posted March 19, 2021
11 hours ago, tuxbass said:
1) Is it safe to move all the key files from /boot/config/ssh to /boot/config/ssh/root?

Only the files that should be in /root/.ssh should be in /boot/config/ssh/root. All the files in /boot/config/ssh are copied (non-recursively) to /etc/ssh whenever the sshd service is started or restarted.

ssh key-only login was never enabled by default. To enable it, copy /etc/ssh/sshd_config to /boot/config/ssh and edit that one.
tuxbass Posted March 22, 2021
On 3/19/2021 at 8:41 PM, ljm42 said:
Not ideal, as /etc/rc.d/rc.sshd has always managed the files in /boot/config/ssh. It was not expecting you to put other files in there with the files it manages.

In creating the symlink, the only file that was manually created was the link at /root/.ssh pointing to /boot/config/ssh, so the latter's contents were not modified.

Quote
delete the /boot/config/ssh dir and reboot

Ah, so it's actively managed on startup? Good call, will give that a try.

On 3/20/2021 at 12:19 AM, ken-ji said:
all the files in /boot/config/ssh are copied to /etc/ssh (non-recursively) before sshd is started. ssh key-only login was never enabled by default. To enable it, copy /etc/ssh/sshd_config to /boot/config/ssh and edit that one.

Great tips, will try copying sshd_config to /boot/config/ssh as opposed to editing the file in /etc via the go file. I know key-only login was never enabled by default, but the method I described used to work until 6.9.0.
ken-ji Posted March 22, 2021
You also need to set

# To disable tunneled clear text passwords, change to no here!
PasswordAuthentication no

to disable password login, and, since Limetech insists that only the root user be used:

PermitRootLogin prohibit-password
#PermitRootLogin yes
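Those two edits can be scripted with sed against a copy of sshd_config. A sketch using a two-line sample file written to /tmp for illustration; on Unraid you would point cfg at the copy in /boot/config/ssh and then restart sshd with /etc/rc.d/rc.sshd restart:

```shell
#!/bin/sh
# Flip PasswordAuthentication and PermitRootLogin in an sshd_config copy,
# whether or not the directives are currently commented out.
# Sample file path; use the real copy in /boot/config/ssh on a server.
cfg=/tmp/sshd_config.demo
printf '%s\n' '#PasswordAuthentication yes' '#PermitRootLogin yes' > "$cfg"

sed -i -e 's/^#\{0,1\}PasswordAuthentication.*/PasswordAuthentication no/' \
       -e 's/^#\{0,1\}PermitRootLogin.*/PermitRootLogin prohibit-password/' "$cfg"

cat "$cfg"
# PasswordAuthentication no
# PermitRootLogin prohibit-password
```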
tuxbass Posted March 22, 2021
That's pretty much what I've been doing so far. In the go file there's this section for ssh changes:

_ssh="/root/.ssh"
sshd_conf="/etc/ssh/sshd_config"
[[ -d "$_ssh" ]] || ln -s -- /boot/config/ssh "$_ssh"
find -L "$_ssh/" \( -type f -o -type d \) -exec chmod 'u=rwX,g=,o=' -- '{}' \+
sed --follow-symlinks -i '/^PermitEmptyPasswords.*/d' "$sshd_conf"
sed --follow-symlinks -i '/^PasswordAuthentication.*/d' "$sshd_conf"
echo 'PermitEmptyPasswords no' >> "$sshd_conf"
echo 'PasswordAuthentication no' >> "$sshd_conf"
# restart sshd service:
/etc/rc.d/rc.sshd restart
### /sshd
ken-ji Posted March 22, 2021
3 minutes ago, tuxbass said:
That's pretty much what I've been doing so far. In the go file there's this section for ssh changes: [...]

Looks about right (for 6.8.3), though it is unnecessarily convoluted, and it's wrong for 6.9 as discussed above. So just:
1. Put your authorized_keys file in /root/.ssh (which should now be symlinked to /boot/config/ssh/root)
2. Make a copy of /etc/ssh/sshd_config in /boot/config/ssh and edit that
3. Restart sshd with /etc/rc.d/rc.sshd restart, which should make the new changes active
ChatNoir Posted April 7, 2021
The original release notes link is no longer valid; the new one is: https://wiki.unraid.net/Manual/Release_Notes/Unraid_OS_6.9.0 (I suppose the content is the same.)
ChatNoir Posted April 10, 2021
3 hours ago, 6of6 said:
Unraid doesn't work.

It does for most people. I understand it can be frustrating if you cannot do what you expect, but simply saying that it does not work will not improve the situation. We have nothing to work from to try to solve the issues you might have with Unraid. Please make a post in General Support explaining your issues in detail: what you expect to do, what steps you took, what is happening in return. Pictures of errors can help. Attaching your Diagnostics will also give us information to help you (Tools / Diagnostics).
brhersh Posted April 13, 2021
Upgraded from 6.8.X to 6.9.2 with no issues. Containers and VMs seem to be running fine. I did see (too late) that AFP shares are no longer supported per https://unraid.net/blog/unraid-6-9-stable. I could not be more disappointed in this decision. Is there a workaround or container project that might get this feature back?