giafidis Posted March 4, 2021 2 hours ago, itimpi said: "Did you check that no hardware IDs for passed-through hardware had changed under 6.9.0? If they had, then this could explain why this happened." How can I check that out?
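For reference, one quick way to check this from a console is to list the PCI vendor:device IDs and compare them against the IDs referenced in the VM's XML (a sketch; Tools -> System Devices in the webGUI shows the same list, and the grep at the end just pulls out the ID pairs):

```shell
# List all PCI devices together with their [vendor:device] IDs,
# e.g. "01:00.0 VGA compatible controller ... [10de:1b81]"
lspci -nn

# Pull out just the vendor:device pairs for a quick diff between releases
lspci -nn | grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' | sort -u
```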
Cessquill Posted March 4, 2021 Only minor issue here is that it looks like my 12-year-old 1GB flash drive is now too full. It wouldn't upgrade until I cleared some space (after backing up); now Fix Common Problems tells me it's up to 90%. Sounds trivial, but is there still recommended hardware for Unraid OS flash drives? Or will anything do?
Squid Posted March 4, 2021 Kingston DT-SE9 (USB 2 version)
jademonkee Posted March 4, 2021 2 minutes ago, Cessquill said: "Sounds trivial, but is there still a recommended hardware for Unraid OS flash drives? Or will anything do?" I use a SanDisk Cruzer Fit 16GB. Been solid for 5ish years now. I like it because it fits in the internal port on my mobo and is short enough not to hit the chassis when I slide the mobo tray out (learnt that lesson the hard way once...). I think it's recommended to use USB 2.0 drives/ports, but I don't know if that's just superstition or not.
jlficken Posted March 4, 2021 11 hours ago, dlandon said: "You will probably see that the CPU scaling governor driver says 'intel_cpufreq' on the right side of Tips and Tweaks. This appears to be a new driver for some Intel CPUs. I am working on a change to Tips and Tweaks so this driver will be handled properly. That's why you are not seeing Turbo Boost as an option; right now it only shows as an option for the intel_pstate driver (Intel Pstate in Tips and Tweaks). It is recommended that you use 'Performance' for the best power savings and performance. I don't believe 'On Demand' is an option for the intel_cpufreq driver; Tips and Tweaks is showing it as an option and probably shouldn't." You are correct that it shows intel_cpufreq as the driver. I'm going to have to just give up the 200MHz boost for now, as setting it to Performance kicks all cores up to 2.6GHz, compared to On Demand, which lets them throttle down to 1.3GHz (or less) but causes them to never go above 2.4GHz. I'd rather give up the 200MHz than have all 24 cores at 2.4GHz all of the time. Either way it's better than the 1.3GHz that was happening after the upgrade, when it got set to Power Save for some reason. If you can get the Turbo setting working though, that'd be great!
jlficken Posted March 4, 2021 6 hours ago, jademonkee said: "I have the intel_cpufreq driver, and when I set it to 'On Demand' I get:

root@Percy:~# grep MHz /proc/cpuinfo
cpu MHz : 3097.473
cpu MHz : 3101.104
cpu MHz : 3094.179
cpu MHz : 3091.279
cpu MHz : 3100.118
cpu MHz : 3093.355
cpu MHz : 3092.385
cpu MHz : 3099.522

So I'm guessing it has the effect of just running at full frequency all the time? However, when I set it to 'Performance' I get:

root@Percy:~# grep MHz /proc/cpuinfo
cpu MHz : 1596.710
cpu MHz : 1596.466
cpu MHz : 1596.533
cpu MHz : 1596.398
cpu MHz : 1596.510
cpu MHz : 1596.519
cpu MHz : 1596.589
cpu MHz : 1596.520

I'm currently re-building a disk while also performing an 'erase and clear' on an old disk, so my CPU isn't close to idle (occasional 90+% peaks), so I would expect that at least some of the cores would be running at full speed with a profile that scales speed with demand. Is 'Performance' just keeping them at half speed, or is it more likely that my demand is not big enough to warrant scaling up to full speed for any significant period of time? In the meantime I'll keep it on 'On Demand' and keep an eye on temps."

Mine works the exact opposite of yours... weird.
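For anyone comparing notes, the active scaling driver and governor can also be read straight from sysfs (a sketch; cpu0 stands in for any core, and the paths assume a standard Linux cpufreq setup like Unraid's):

```shell
# Which frequency-scaling driver and governor cpu0 is currently using
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

# Snapshot of per-core frequencies (wrap in `watch -n1` to follow live
# while changing the Tips and Tweaks setting)
grep "cpu MHz" /proc/cpuinfo
```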
Cessquill Posted March 4, 2021 Thanks @Squid and @jademonkee - SanDisk on order (only because it was quicker delivery). Mine used to be on the front, and I'm surprised it's lasted this long and not snapped off. Yay for internal USB!
fearlessknight Posted March 4, 2021 After upgrading to 6.9.0 (previously on 6.9.0 RC2), it seems GPU passthrough is causing my VMs to freeze after installing Nvidia drivers, or to cycle into recovery upon boot. I know a few others have had this happen and have reverted back to 6.8.3. If you could please test this and apply a fix for the next update that rolls out, it would be appreciated. Also, yes, please push the 460.56 driver updates. Thanks!
CrashnBrn Posted March 4, 2021 Upgrade went without a hitch. I did find a weird bug, though: when trying to start all my docker containers using the Start All button, less than half actually started. Hitting it again did nothing; I had to start them one by one.
smashingtool Posted March 4, 2021 Am I understanding correctly that if I want to set up multiple SSDs in a "pool" where the pool size is the sum of each drive's size, I need to (or should?) wait for the mentioned future support of "Unraid array pools"?
TheSnotRocket Posted March 4, 2021 51 minutes ago, fearlessknight said: "It seems GPU passthrough is causing my VMs to freeze after installing Nvidia drivers. Or cycle into recovery upon boot." Having the same issue - Win10 VM hangs. Running the VNC video driver and still hanging. Figures that I'd do this in the middle of my work day... but ya know... whatever 😛 I disabled docker and was able to boot my Win10 box.
itimpi Posted March 4, 2021 11 minutes ago, smashingtool said: "Am I understanding correctly that if I want to set up multiple SSDs in a 'pool' where the pool size is the sum of each drive's size, that I need to (or should?) wait until the mentioned future support of 'Unraid array pools'?" Not quite sure what you want. In Unraid 6.9.0 you can set up multi-drive pools using the "Single" btrfs profile, which means the available space is the sum of the drives' sizes, but the data is not protected by parity. If you want multiple arrays that work like the current data array - where the available space is the sum of the data drives, but you can still have parity protection - then this IS a future roadmap item.
fearlessknight Posted March 4, 2021 5 minutes ago, TheSnotRocket said: "Having the same issue - Win10 VM hangs. Running VNC video driver and still hanging." I'm able to run with the VNC video alone. Try using DDU to run a clean uninstall of your Nvidia drivers and then try booting your VM again.
T321 Posted March 4, 2021 Very minor issue after the update: trying to access any VM via VNC, the following error appeared: "The requested module '../core/util/browser.js' does not provide an export named 'hasScrollbarGutter'". Doing a cache refresh (CTRL + F5) fixed the issue.
trurl Posted March 4, 2021 1 minute ago, T321 said: "cache refresh" Doing that should be standard practice with any release, since any cached browser code may not work with the new code.
Fiala06 Posted March 4, 2021 Update went smooth for me. Not sure if it helped or not, but I uninstalled the old Nvidia plugin before updating. Also, holy crap, docker starts A LOT faster, same for updates. Thanks devs!
PeterB Posted March 4, 2021 Just upgraded from 6.8.3 to 6.9.0 today. Now none of my drives are spinning down. Openfiles doesn't show anything open on the drives I would expect to see spin down, but something is reading from them (~20k reads on each drive in a little under ten hours). Turbowrite is disabled (data drives spun down: script not running, write mode: Unraid determined). How do I identify what is accessing the drives in question? Should I revert to 6.8.3?
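In case it helps anyone else hunting this, one way to narrow it down from a console is to sample the kernel's per-device read counters and then see what holds files open on the busy disk (a sketch; /mnt/disk1 and the sdX device names are placeholders for your own disks):

```shell
# Field 4 of /proc/diskstats is completed reads per device; sample it
# twice a few seconds apart to see which disks the reads are landing on
awk '$3 ~ /^sd[a-z]+$/ {print $3, $4}' /proc/diskstats
sleep 5
awk '$3 ~ /^sd[a-z]+$/ {print $3, $4}' /proc/diskstats

# Then list any processes with files open on the suspect disk
# (nothing printed means nothing is held open right now)
lsof /mnt/disk1 2>/dev/null || true
```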
Squid Posted March 4, 2021 7 minutes ago, PeterB said: "How do I identify what is accessing the drives in question?" Cache dirs installed? IIRC some changes to its setup need to be made.
Hoopster Posted March 4, 2021 18 minutes ago, PeterB said: "Now, none of my drives are spinning down." In my case I traced it to the telegraf docker. Having [inputs.smart] (which reads /usr/sbin/smartctl) enabled in telegraf.conf for my Grafana dashboard was preventing the disks from spinning down.
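In case anyone wants to check their own setup, the stanza in question lives in telegraf.conf and looks roughly like this (the `path` option shown is an assumption; option names vary between telegraf versions). Commenting the whole block out stops telegraf polling smartctl, which is what keeps the disks awake:

```toml
# telegraf.conf - disable the S.M.A.R.T. input so array disks can spin down
# [[inputs.smart]]
#   path = "/usr/sbin/smartctl"
```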
PeterB Posted March 4, 2021 38 minutes ago, Squid said: "Cache dirs installed? IIRC some changes to the setup of it needs to be done." Thanks for the response. Yes, I do have directory caching enabled. Researching possible conflicts, but haven't found any information yet. UPDATE: It appears that this issue is being caused by the BinHex Couch Potato docker.
Kir Posted March 4, 2021 Quite a smooth upgrade. Only had a scary popup "Warning [STORAGE] - Cache pool BTRFS missing device(s)" after reboot, but all clear in the syslog. Just finished cache re-balancing after changing the alignment. Good job, team!
smashingtool Posted March 4, 2021 4 hours ago, itimpi said: "In Unraid 6.9.0 you can set up multi-drive pools to use the 'Single' btrfs profile, which means the available space is the sum of the drives' sizes, but the data is not protected by parity." I want to be able to use SSDs in the array, but per the warnings from "Fix Common Problems", that could cause issues with rebuilding from parity due to SSD garbage collection. So what I currently do instead is use an extra 2 SSDs in Unraid via Unassigned Devices. I also have a cache drive SSD, but have never messed with any pool functionality due to the warnings about BTRFS RAID1. What I ultimately want is to have a user share spread out over multiple SSDs. I'm not currently concerned with parity protection for those drives, but maybe down the road some sort of fault tolerance would be worthwhile. I don't see how to do "multi-drive pools to use the 'Single' btrfs profile". Per the help toggle in the UI, "When configured as a multi-device pool, Unraid OS will automatically select btrfs-raid1 format". https://wiki.unraid.net/Unraid_OS_6.9.0#Multiple_Pools also seems to lack a mention of this functionality, but maybe I am missing something. Having multiple pools seemed like a way of getting around using Unassigned Devices, but based on "When you create a user share or edit an existing user share, you can specify which pool should be associated with that share.", I assume that I can't have a user share span multiple pools...
itimpi Posted March 4, 2021 3 minutes ago, smashingtool said: "I don't see how to do 'multi-drive pools to use the Single btrfs profile'" In the 6.9.0 release, if you click on a cache pool and then go to the Balance section, there is a drop-down that lets you select which profile you want the Balance to use, with the options being the current profile, single, raid0 or raid1.
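For the terminally curious, that drop-down is doing the equivalent of a btrfs balance with convert filters; from a console it would look something like this sketch (the /mnt/cache mount point is an assumption, and the webGUI route above is the safer one on Unraid):

```shell
# Convert data chunks to the "single" profile (usable space = sum of the
# drives, no redundancy) while keeping metadata redundant on raid1
btrfs balance start -dconvert=single -mconvert=raid1 /mnt/cache

# Confirm the resulting allocation profiles
btrfs filesystem df /mnt/cache
```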
fearlessknight Posted March 4, 2021 5 hours ago, TheSnotRocket said: "I disabled docker and was able to boot my win10 box." Were you able to pass through your GPU after disabling docker? Or was it still booting from VNC?
sminker Posted March 5, 2021 6.8.3 to 6.9 went fine on this end. I just shut down dockers and disabled autostart, backed up flash, and updated. 5 minutes later I was up and running. Original Intel iGPU passthrough to VMs seems fine; Plex can use it, no issue. Any reason to change to the new method for Intel iGPU passthrough, besides cleaning up the go file? Edit: Minor panic attack after I posted this, lol. The Zigbee2mqtt docker wasn't starting - couldn't find the device, plus multiple other errors in the logs. I just unplugged the USB stick (CC2531 dongle), waited about 30 seconds and plugged it back in. The docker started like a champ and Home Assistant started receiving updates. Just sayin', that would have been miserable if I'd lost that docker - the rest of my night would have been shot re-adding a ton of devices and resetting entity ID tags, lol.