DivideBy0 Posted March 2, 2021
Upgraded to 6.9 and lost all my Docker containers, as others have experienced. I think it's related to losing the cache drive assignments. See the attached diagnostics file. I will re-assign the cache manually and see if that fixes the issue. nas-unraid-bkp-diagnostics-20210302-1011.zip
DivideBy0 Posted March 2, 2021
So I added the cache drives manually and created a pool, and now everything is back to normal. Got my Docker containers back as well. Why can't the upgrade process do all that? Luckily this is my backup Unraid server, but I have a production one with very important data, so I am really afraid to touch and upgrade that one. Can the Unraid team advise please?
SimonF Posted March 2, 2021
Ah OK, but you have added the NVMe into the cache now, as it was showing three devices in the pool.
DivideBy0 Posted March 2, 2021
5 minutes ago, SimonF said: Ah OK, but you have added the NVMe into the cache now, as it was showing three devices in the pool on the 6.9 thread.
That's because I manually re-assigned the cache drives and added the NVMe as well. But the point is the 6.9 upgrade doesn't migrate/retain the cache configuration to where it needs to be. Could it be a bug? In 6.8.3 I had two 500GB SSDs forming the cache pool and the NVMe was unassigned. After 6.9.x the cache was gone, which caused my dockers to disappear as I had my docker.img on the cache. I ended up manually re-assigning all 3 and it works fine now.
BVD Posted March 2, 2021
24 minutes ago, johnwhicker said: So I added the cache drives manually and created a pool, and now everything is back to normal. Got my Docker containers back as well. Why can't the upgrade process do all that? Luckily this is my backup Unraid server, but I have a production one with very important data, so I am really afraid to touch and upgrade that one. Can the Unraid team advise please?
@limetech is this perhaps a function of how 6.9 now treats the cache as 'just another pool' as far as disk assignments are concerned? @johnwhicker what's your cache filesystem type?
DivideBy0 Posted March 2, 2021
1 minute ago, BVD said: @johnwhicker what's your cache filesystem type?
btrfs, sir
trurl Posted March 2, 2021
4 minutes ago, johnwhicker said: I manually re-assigned the cache drives and added the NVMe as well.
As I mentioned in your other thread, it would be better to make that NVMe a separate pool. Mixing sizes in the same pool can produce unexpected results unless you understand btrfs RAID better than I do.
BVD Posted March 2, 2021
@johnwhicker Yeah, I was asking about btrfs R5 vs R0 - the change in cache accounting is where the issue is:
Mar 1 19:43:50 Apollo emhttpd: import 30 cache device: (sdc) Samsung_SSD_840_Series_S19HNEAD493526E
Mar 1 19:43:50 Apollo emhttpd: import 31 cache device: no device
I'd file a bug; it's looking for a device as though it was previously set up as cache in earlier beta releases, not how it's configured in 6.8.3 and earlier.
@trurl mixing drive sizes is fine with btrfs (unless you're doing RAID 6 btrfs, which is... unpleasant); it's kind of the impetus for its existence. It's no problem.
trurl Posted March 2, 2021
1 minute ago, BVD said: mixing drive sizes is fine with btrfs (unless you're doing RAID 6 btrfs, which is... unpleasant); it's kind of the impetus for its existence. It's no problem.
I know it's allowed, but as far as I know there is still an issue with determining the size of the pool in some situations with mixed sizes.
BVD Posted March 2, 2021
That might be specific to Unraid... Btrfs allows mixed drive sizes by writing out files in 'chunks' as opposed to the full stripes used with standard RAID, so there can be some storage that's considered 'lost' to the overall array depending on chunk sizes and where they end up fitting. So with RAID 1 level btrfs, you're storing two copies of each 'chunk' regardless of how many disks you have, and with mixed drive sizes I could see how it might be a bit complex for Unraid to determine % utilized... There also used to be a space accounting bug a ways back (it still exists with the RAID 6 equivalent unless that's changed), but it was patched out some time ago. I'll admit, I don't really use btrfs much and don't see it much in the industry, but for home use it's kind of perfect thanks to online migrations and allowing mixed device sizes.
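(If anyone wants to sanity-check their own pool, btrfs's own tooling reports allocation and estimated free space more reliably than a single percent-used number. A minimal sketch, assuming the pool is mounted at /mnt/cache - adjust the path for your setup:
# Overall view: per-device sizes, allocated chunks, and an estimate of free space for the pool
btrfs filesystem usage /mnt/cache
# Per-profile breakdown (data / metadata / system) showing how much of each allocation is actually used
btrfs filesystem df /mnt/cache
The gap between "unallocated" and "free (estimated)" in that output is the part that's awkward to boil down to one percentage with mixed drive sizes.)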
DivideBy0 Posted March 2, 2021
I think the unassigned NVMe was the problem. I just upgraded my primary Unraid server and the upgrade went fine. I only have 2 SSD drives for the cache on this one.
Rick Gillyon Posted March 2, 2021
Sorry, I don't really understand the GPU talk here. I've upgraded to 6.9.0 fine, no issues, but I still have this in my go file:
modprobe i915
chmod -R 777 /dev/dri
It doesn't seem to be causing a problem. I put it there to get hardware transcoding with Emby, which is probably working. Am I doing it wrong?
John_M Posted March 2, 2021
3 minutes ago, Rick Gillyon said: Am I doing it wrong?
That will still work. There's now a better way of doing it that also works with other GPUs and also keeps your go file cleaner.
Rick Gillyon Posted March 2, 2021
1 minute ago, John_M said: That will still work. There's now a better way of doing it that also works with other GPUs and also keeps your go file cleaner.
Thanks. So what would the new way entail for me (apart from cleaning the go file)?
BVD Posted March 2, 2021
4 minutes ago, Rick Gillyon said: Thanks. So what would the new way entail for me (apart from cleaning the go file)?
I put the command up further back, just head to the earlier comments.
Rick Gillyon Posted March 2, 2021
2 minutes ago, BVD said: I put the command up further back, just head to the earlier comments.
Thanks, I didn't link the two. I assume this is a one-off?
BVD Posted March 2, 2021
It's a new way of loading/unloading drivers in Unraid - essentially they implemented the ability for us to utilize modprobe.d in Unraid, something that's been the norm for controlling drivers in Linux for some time now. Think of it like Device Manager in Windows - here are their notes on it: https://wiki.unraid.net/Unraid_OS_6.9.0#Better_Module.2FThird_Party_Driver_Support
As for why you need to edit this file, it's due to their having blacklisted the driver - the only place I know of where these specific parts of the modprobe implementation are documented is here:
@limetech @SpencerJ - Could we get these customizations (those that were noted in earlier beta/RC release notices) noted in the 6.9 (GA) release notice? I'll get it updated on the wiki if/when I get access. (Edit: I guess maybe this is more so in the 'Project' related field, so I probably should've pinged @ljm42 I think? Sorry guys!)
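(For context, a minimal sketch of what the new approach looks like for the i915 case, based on my reading of the release notes - double-check them for your exact hardware before relying on this:
# Create an (empty) modprobe.d config on the flash drive so the i915 module is loaded at boot
touch /boot/config/modprobe.d/i915.conf
# Then remove the old 'modprobe i915' and 'chmod -R 777 /dev/dri' lines from the go file and reboot
The chmod workaround shouldn't be needed any more on 6.9 as far as I can tell, but that's worth confirming against the release notes rather than taking on faith.)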
interwebtech Posted March 2, 2021
At the risk of jinxing it, I want to report that I am no longer getting the erroneous notifications about disk utilization. My guess is that the changes to "Storage Threshold Warnings" reset something under the hood that cleared whatever was causing this. I am a happy camper.
AgentXXL Posted March 2, 2021
I've upgraded both of my systems to 6.9.0 with no major issues, the backup going from 6.8.3 and the media server from 6.9 RC2. Really the only issue I'm seeing is like @S80_UK and @doron reported: both systems showed numerous notifications that 'drive utilization returned to normal'. On my backup unRAID I still have the UI set to the default color (grey for space used), but on my media unRAID I changed it to show the color based on utilization. The Dashboard tab shows all drives as green, yet they are VERY HIGH on utilization. The Main tab shows them all as red, as I expect.
I also had my media unRAID report that it was an unclean shutdown following the upgrade reboot, yet when watching the process on the monitor attached to it, it appeared to restart normally with no errors noticed. I had no tasks writing to any of the drives, so I expected parity was still valid. I performed one more reboot and of course that cleared the 'unclean shutdown' error. I'll be running a full parity check shortly, after I clean up a few more things.
Any idea why the Dashboard tab shows the drives as green even though they should be red, as shown on the Main tab? Note that I have gone in and reconfigured the thresholds as noted in the release notes, but the media unRAID still shows green on the Dashboard tab and red on the Main tab.
takkkkkkk Posted March 2, 2021
I realized many of my HDs aren't spinning down even though the server isn't doing anything, and I'm seeing that this spindown issue isn't specific to my server. Anyone know the culprit?
John_M Posted March 2, 2021
24 minutes ago, takkkkkkk said: Anyone know the culprit?
It's difficult to say without diagnostics, but elsewhere the Dynamix Autofan plugin has been suggested. Have you tried safe mode?
SimonF Posted March 2, 2021
29 minutes ago, takkkkkkk said: culprit
Are you using Telegraf, Autofan or anything else that may be using smartctl outside of the array functions?
takkkkkkk Posted March 2, 2021
17 minutes ago, SimonF said: Are you using Telegraf, Autofan or anything else that may be using smartctl outside of the array functions?
I had Telegraf installed, although I don't use it. I've removed it from the server; just waiting to see if the drives spin down...
RecycledBits Posted March 2, 2021
Upgraded my Threadripper system with Nvidia drivers from 6.8.3 to 6.9.0 without any problems. GPU, dockers and VMs all seem to work fine after the upgrade. Thanks for a good product, and to ezhik for the little extra guidance regarding the Nvidia drivers.
glennv Posted March 2, 2021
Upgraded my main production and backup Unraid servers (using the kernel build docker from ich777 to include the AMD GPU reset bug patch and ZFS), and besides the spindown issue that still did not go away with the upgrade from RC2 to final, all went smooth. After breaking my head over why, even with all dockers, VMs and shares down, it still would not spin down my array drives, I finally found it was a custom script of my own running in the User Scripts plugin that ran smartctl every 5 minutes to collect drive standby statistics for my Splunk dashboard. Once I disabled that, all was smooth sailing. I replaced the smartctl calls with hdparm calls, which give me the same data but apparently do not disturb the standby behavior of Unraid. So if you experience this issue, look for anything that can fire off smartctl calls (Grafana/Telegraf and the like have already been pointed at in other recent posts).
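(For anyone wanting to poll drive state without waking disks, a minimal sketch of the two approaches - replace /dev/sdX with your device, and note behavior can vary by drive and controller:
# hdparm -C reports the current power state (active/idle vs standby) without spinning the drive up
hdparm -C /dev/sdX
# smartctl can also be told to skip a drive that is already in standby rather than waking it
smartctl -n standby -A /dev/sdX
Whether -n standby alone would avoid the spin-ups in every case is something I haven't verified; switching to hdparm was simply the easier fix here.)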