doron Posted March 2, 2021 6.9.0-rc2 -> 6.9.0, smooth, no issues. One tiny data point: After the upgrade, I received three notice messages for 3 of my drives: "Notice [TOWER] - Disk 3 returned to normal utilization level" I never saw a warning state while in previous versions. Have the default utilization thresholds changed? Great job, Limetech folks!!
Squid Posted March 2, 2021 5 hours ago, Arragon said: After updating to 6.9.0 all my user shares are gone. /mnt looks like this d--------- 1 root root 0 Jan 1 1970 user/ and I can't change permissions on it: # chmod 1777 /mnt/user chmod: changing permissions of '/mnt/user': No such file or directory You should create a new post in General Support and include your diagnostics.
bastl Posted March 2, 2021 Update from 6.8.3 went smoothly as always. Temp sensors on TRX40 are working now. 😘 Switching from the old pci-vfio syslinux config to the new method also went without any issues. But there's a small thing I noticed: the syslog doesn't start with the normal boot process entries; the first entries for me are from some plugin checks/updates. Is this normal behaviour? I tried it with Firefox (my main browser) and Brave (never used for my server before) with the same result. Is there now a max line limit for showing the logs and I just can't find the setting?
xruchai Posted March 2, 2021 After the update to 6.9 final, the HDDs (SATA) no longer go into standby after 30 minutes; they don't spin down. I have not changed any settings or installed any new plugins.
Spritzup Posted March 2, 2021 I'm excited to upgrade, great work guys! Just a quick question (and I did read the release notes): are there any major changes to networking functionality? Specifically, communication between bridges and/or a method to set a default bridge for Docker containers (similar to what exists for VMs). Thanks again for all the hard work everyone!! ~Spritz
weirdcrap Posted March 2, 2021 10 minutes ago, bastl said: Update from 6.8.3 went smoothly as always. Temp sensors on TRX40 are working now. 😘 Switching from the old pci-vfio syslinux config to the new method also went without any issues. But there's a small thing I noticed: the syslog doesn't start with the normal boot process entries; the first entries for me are from some plugin checks/updates. Is this normal behaviour? I tried it with Firefox (my main browser) and Brave (never used for my server before) with the same result. Is there now a max line limit for showing the logs and I just can't find the setting? I believe this is due to the size of the syslog buffer provided to the WebUI; it was mentioned in one of the beta or RC threads. The full syslog should still be available on the filesystem itself. EDIT: Bug report already opened for it:
bastl Posted March 2, 2021 3 minutes ago, weirdcrap said: I believe this is due to the size of the syslog buffer provided to the WebUI; it was mentioned in one of the beta or RC threads. The full syslog should still be available on the filesystem itself. Thanks for the info. Is there any setting to show the full syslog in the WebUI again? Edit: I was too quick and noticed your edit too late. Thanks for the link
weirdcrap Posted March 2, 2021 2 minutes ago, bastl said: Thanks for the info. Is there any setting to show the full syslog in the WebUI again? Not that I'm aware of at this time. I assume it would be rather trivial to increase the buffer size, but I'm not sure where in Unraid this is handled without digging through the code. If you follow that bug report @John_M opened, I bet Limetech will come up with a workaround pretty quickly for those who want it.
SimonF Posted March 2, 2021 6 minutes ago, bastl said: Thanks for the info. Is there any setting to show the full syslog in the WebUI again? Edit: I was too quick and noticed your edit too late. Thanks for the link sed -i 's/1000/3000/' /usr/local/emhttp/plugins/dynamix/include/Syslog.php Run this in a terminal to fix it.
bastl Posted March 2, 2021 4 minutes ago, SimonF said: sed -i 's/1000/3000/' /usr/local/emhttp/plugins/dynamix/include/Syslog.php run this in terminal to fix. Thanks. Does this survive a server reboot?
weirdcrap Posted March 2, 2021 Just now, bastl said: Thanks. Does this survive a server reboot? I imagine not, as those files are unpacked fresh from the flash drive on each boot. You would need to add it to your go file.
SimonF Posted March 2, 2021 1 minute ago, weirdcrap said: You would need to add it to your go file. Yes, it needs to go in the go file.
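Since everything under /usr/local/emhttp is re-created from the flash drive on every boot, the one-liner has to be re-applied at startup. A minimal sketch of what the go-file addition might look like (the sed command and path come from SimonF's post above; the guard and the 3000-line value are just one way to do it, not official guidance):

```shell
#!/bin/bash
# /boot/config/go -- runs at every boot, after the OS files are unpacked
/usr/local/sbin/emhttp &

# Raise the WebUI syslog display limit from 1000 to 3000 lines.
# Guarded so this is a no-op if the file moves in a future release.
f=/usr/local/emhttp/plugins/dynamix/include/Syslog.php
[ -f "$f" ] && sed -i 's/1000/3000/' "$f"
```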
weirdcrap Posted March 2, 2021 With the GPU changes (https://wiki.unraid.net/Unraid_OS_6.9.0#GPU_Driver_Integration), is the old method of enabling Intel QuickSync via the go file no longer recommended? # Enable Intel QuickSync HW Transcoding modprobe i915 chmod -R 777 /dev/dri
SimonF Posted March 2, 2021 @limetech @SpencerJ Can this be appended to the first post so people know that if they revert they need to add the cache back in? I think it's buried in the release notes and people are not seeing it: Note: A pre-6.9.0 cache disk/pool is now simply a pool named "cache". When you upgrade a server which has a cache disk/pool defined, a backup of config/disk.cfg will be saved to config/disk.cfg.bak, and then cache device assignment settings are moved out of config/disk.cfg and into a new file, config/pools/cache.cfg. If later you revert back to a pre-6.9.0 Unraid OS release you will lose your cache device assignments and you will have to manually re-assign devices to cache. As long as you reassign the correct devices, data should remain intact. Also, if they start the array having missed this step (or have auto-start set to yes), can they stop it and add the devices?
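A quick way to confirm the migration described in that note actually ran is to look for the files it mentions on the flash drive (paths are taken straight from the release note; this is only a sanity check, not an official procedure):

```shell
# After upgrading, the pre-6.9 cache assignment should have been moved:
ls -l /boot/config/disk.cfg.bak     # backup of the old combined config
ls -l /boot/config/pools/cache.cfg  # new per-pool file with the cache devices
```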
Ziryab Posted March 2, 2021 Updated from 6.9.0-rc2 to 6.9.0 stable and everything went smoothly as expected. Thanks for all the hard work by the unRAID team, to @ich777 for his amazing plugins and support, and to the @linuxserver.io developers for packaging almost any software you can imagine in a container and maintaining them.
Zorlofe Posted March 2, 2021 Everything went smoothly for me with the upgrade as well. So far so good! Thanks to all those involved for getting this rolled out!
limetech Posted March 2, 2021 Author
4 hours ago, pervel said: Is it necessary to upgrade if you're on 6.9.0-rc2? Has anything changed?
Yes. The wiki doc shows the consolidated changes (changes since 6.8.3), but here are the changes from -rc2 to stable:

Version 6.9.0 2021-02-27 (vs -rc2)

Base distro:
  bind: [removed]
  btrfs-progs: version 5.10
  ca-certificates: version 20201219
  curl: version 7.74.0 (CVE-2020-8286 CVE-2020-8285 CVE-2020-8284)
  dnsmasq: version 2.84 (CVE-2020-25681 CVE-2020-25682 CVE-2020-25683 CVE-2020-25684 CVE-2020-25685 CVE-2020-25686 CVE-2020-25687)
  intel-microcode: version 20210216
  kernel-firmware: version 20210211_f7915a0
  openssl: version 1.1.1i
  openssl-solibs: version 1.1.1i
  p11-kit: version 0.23.22 (CVE-2020-29361 CVE-2020-29361 CVE-2020-29361)
  php: version 7.4.15
  samba: version 4.12.11
  sudo: version 1.9.3p2 (CVE-2021-23239 CVE-2021-23240)
  wireguard-tools: version 1.0.20210223

Linux kernel: version 5.10.19

Management:
  bug fix: rename /etc/krb.conf to /etc/krb5.conf
  emhttpd: bug fix: initial device temperatures not being displayed
  emhttpd: bug fix: no SMART data for non-standard controller type
  plugin: support sha256 file validation
  smart-one.cfg keeps SMART info per-ID instead of per-slot; disk warning/critical config moved to disk/pool cfg
  wireguard support: rc.wireguard: add iptables rules
  webgui: Add notification agent for Discord
  webgui: Fix: Dashboard / Docker scrolling on iPad devices
  webgui: dockerMan: Selectable start upon install
  webgui: fix: login prompt when switching between servers
  webgui: sanitize input on tail_log
  webgui: SysDevs: warn if leaving page without saving
  webgui: Diagnostics: Remove SHA256 hashes
  webgui: Display settings: colors should be 3 or 6 character hex digits
  webgui: Fix: properly set samesite cookie (fix login issue with Safari)
  webgui: Switch Diagnostics to web socket
xanvincent Posted March 2, 2021 I just wanted to say that this is the best feature set of a major release I've seen in a long time. You guys managed to pack in a lot of community-requested items and we love to see it. Great work!
limetech Posted March 2, 2021 Author 2 hours ago, doron said: 6.9.0-rc2 -> 6.9.0, smooth, no issues. One tiny data point: After the upgrade, I received three notice messages for 3 of my drives: "Notice [TOWER] - Disk 3 returned to normal utilization level" I never saw a warning state while in previous versions. Have the default utilization thresholds changed? Great job, Limetech folks!! Yes, see here.
BVD Posted March 2, 2021 7 hours ago, hawihoney said: I do have two system/appdata folders. One on an unassigned device and one on an array device. The system share is set to "Use cache=No". E.g.: /mnt/disk17/system/ /mnt/disks/NVMe1/system/ VMs are running on array, Dockers are split between both. Docker/VM settings are set to: /mnt/disk17/system/docker.img /mnt/disk17/system/libvirt.img Should I expect problems? I would say it's a possibility... One of my prod systems uses a completely non-standard ZFS setup, with appdata living on a pair of P3500 NVMe drives and everything else on a 24-drive ZFS pool, with the mount point for appdata handled in the go file on boot. As I haven't had enough time to make sure I can stick around to fix things if they go south, I haven't touched it yet. Definitely back up your flash drive before updating, and if things aren't working afterwards, compare the domain and network config files post-upgrade for unexpected changes.
BVD Posted March 2, 2021 1 hour ago, weirdcrap said: With the GPU changes (https://wiki.unraid.net/Unraid_OS_6.9.0#GPU_Driver_Integration), is the old method of enabling Intel QuickSync via the go file no longer recommended? # Enable Intel QuickSync HW Transcoding modprobe i915 chmod -R 777 /dev/dri Correct. You'll instead un-blacklist the driver by creating a file that overrides the driver blacklist: touch /boot/config/modprobe.d/i915.conf
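Putting the two posts together, a sketch of the 6.9 way to enable QuickSync (the override path is from BVD's answer above; the verification step afterwards is my assumption, not something from the release notes):

```shell
# Create an empty override file on the flash drive; its presence cancels
# the built-in i915 blacklist, so the driver loads on the next boot.
touch /boot/config/modprobe.d/i915.conf

# After rebooting, check that the render device exists (the old
# "chmod -R 777 /dev/dri" go-file step should no longer be needed):
ls -l /dev/dri
```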
limetech Posted March 2, 2021 Author 23 minutes ago, SimonF said: @limetech @SpencerJ Can this be appended to the first post so people know that if they revert they need to add the cache back in? I think it's buried in the release notes and people are not seeing it: Note: A pre-6.9.0 cache disk/pool is now simply a pool named "cache". When you upgrade a server which has a cache disk/pool defined, a backup of config/disk.cfg will be saved to config/disk.cfg.bak, and then cache device assignment settings are moved out of config/disk.cfg and into a new file, config/pools/cache.cfg. If later you revert back to a pre-6.9.0 Unraid OS release you will lose your cache device assignments and you will have to manually re-assign devices to cache. As long as you reassign the correct devices, data should remain intact. Also, if they start the array having missed this step (or have auto-start set to yes), can they stop it and add the devices? Good suggestion. I added a note about reverting to the OP.
KingHorse Posted March 2, 2021 Is anyone else seeing that drives don't spin down after the timer has expired? I was surprised by the update and installed it right away hoping to see my issue fixed, but unfortunately it is still there.
BVD Posted March 2, 2021 8 minutes ago, limetech said: Good suggestion. I added a note about reverting to the OP. I think it'd also be helpful to note, within the original announcement, the specific 'gotchas' that were addressed in the earlier beta release announcements. Many haven't touched 6.9 yet and are moving straight from 6.8.3, so information on things like the blacklisting of video drivers isn't readily at hand for them. I.e. "For those with manual edits to their /boot/config/go file, please take note of <modprobe stuff>"
archedraft Posted March 2, 2021 5 hours ago, mgrx said: I've just updated, and I get a noVNC error on every VM I try to access using the integrated VNC client: Same issue on my end as well. Clearing the browser cache corrected this.