Unraid OS version 6.9.0 available



So I added the cache drives manually and created a pool, so now everything is back to normal. I got my Docker containers back as well.

Why can't the upgrade process do all that? Luckily this is my backup Unraid server, but my production server holds very important data, so I'm really afraid to touch and upgrade that one.

 

Can the Unraid team advise please?

6.9-CACHE-FIXED.JPG

5 minutes ago, SimonF said:

Ah OK, but you have added the NVMe into the cache now, as it was showing three devices in the pool on the 6.9 thread.

 

That's because I manually re-assigned the cache drives and added the NVMe as well. But the point is that the 6.9 upgrade doesn't migrate/retain the cache configuration to where it needs to be. Could it be a bug?

 

In 6.8.3 I had two 500 GB SSDs forming the cache pool, and the NVMe was unassigned.

After 6.9.x the cache was gone, which caused my Dockers to disappear since I had my docker.img on the cache. I ended up manually re-assigning all 3 and it works fine now.
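For anyone in the same spot, one quick sanity check after re-assigning is to count the member devices btrfs itself reports for the pool. A small sketch (the `/mnt/cache` mount point and the helper name are mine; it just counts `devid` lines):

```shell
# Count member devices reported by `btrfs filesystem show` (read on stdin).
# On the server you would feed it live output, e.g.:
#   btrfs filesystem show /mnt/cache | pool_devices
# A rebuilt 3-device pool should print 3, and the raw output should not
# contain any "missing" entries.
pool_devices() {
  grep -c '^[[:space:]]*devid'
}
```

This is only a convenience wrapper around eyeballing the `btrfs filesystem show` output yourself.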

Edited by johnwhicker
24 minutes ago, johnwhicker said:

So I added the cache drives manually and created a pool, so now everything is back to normal. I got my Docker containers back as well.

Why can't the upgrade process do all that? Luckily this is my backup Unraid server, but my production server holds very important data, so I'm really afraid to touch and upgrade that one.

 

Can the Unraid team advise please?


 

@limetech is this perhaps a function of how 6.9 now treats the cache as 'just another pool' as far as disk assignments are concerned?

@johnwhicker what's your cache filesystem type?

4 minutes ago, johnwhicker said:

I manually re-assigned the cache drives and I added the nvme as well. 

As I mentioned in your other thread, it would be better to make that NVMe a separate pool. Mixing sizes in the same pool can produce unexpected results unless you understand btrfs RAID better than I do.


@johnwhicker Yeah, I meant btrfs RAID5 vs RAID0 - the change in cache accounting is where the issue is:

Mar  1 19:43:50 Apollo emhttpd: import 30 cache device: (sdc) Samsung_SSD_840_Series_S19HNEAD493526E
Mar  1 19:43:50 Apollo emhttpd: import 31 cache device: no device

 

I'd file a bug; it's looking for a device as though the cache had been set up in an earlier beta release, not as it's configured in 6.8.3 and earlier.

 

@trurl mixing drive sizes is fine with btrfs (unless you're doing RAID6 btrfs, which is... unpleasant); it's kind of the impetus for its existence. It's no problem.

Edited by BVD
Link to comment
1 minute ago, BVD said:

mixing drive sizes is fine with btrfs (unless you're doing RAID6 btrfs, which is... unpleasant); it's kind of the impetus for its existence. It's no problem.

I know it's allowed, but as far as I know there is still an issue with determining the size of the pool in some situations with mixed sizes.


That might be specific to Unraid. Btrfs allows mixed drive sizes by writing out files in 'chunks' rather than the full stripes used by standard RAID, so some storage can end up 'lost' to the overall array depending on chunk sizes and where they fit. With btrfs RAID1 you're storing two copies of each chunk, regardless of how many disks you have, so with mixed drive sizes I can see how it might be complex for Unraid to determine % utilized.
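To make the raid1 accounting concrete: because every chunk is written twice to two different devices, usable capacity works out to half the raw total, capped at the sum of the smaller devices when one device is bigger than all the others combined. A rough sketch (the function and the GB figures are mine, for illustration only):

```shell
# Approximate usable capacity of a btrfs raid1 pool (sizes in GB).
# Each chunk is stored on two different devices, so usable space is
# half the raw total -- unless one device exceeds the sum of all the
# others, in which case the smaller devices are the limit.
raid1_usable() {
  local total=0 largest=0 d
  for d in "$@"; do
    total=$(( total + d ))
    if (( d > largest )); then largest=$d; fi
  done
  local half=$(( total / 2 ))
  local others=$(( total - largest ))
  if (( others < half )); then
    echo "$others"
  else
    echo "$half"
  fi
}
```

For example, two 500 GB SSDs plus a hypothetical 1000 GB NVMe give `raid1_usable 500 500 1000` = 1000, while a badly lopsided 100 + 1000 pool is limited to 100.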

There also used to be a space accounting bug a ways back (it still exists with the RAID6 equivalent, unless that's changed), but it was patched out some time ago.

 

I'll admit I don't really use btrfs much, and don't see it much in the industry, but for home use it's kind of perfect thanks to online migrations and support for mixed device sizes.


Sorry, I don't really understand the GPU talk here. I've upgraded to 6.9.0 fine, no issues, but I still have this in my go file:

 

modprobe i915
chmod -R 777 /dev/dri

 

It doesn't seem to be causing a problem. I put it there to get hardware transcoding with Emby, which seems to be working. Am I doing it wrong?

4 minutes ago, Rick Gillyon said:

Thanks. So what would the new way entail for me (apart from cleaning the go file)?

 

 

I put the command further up the thread - just head back to the earlier comments.
 

Edited by BVD

It's a new way of loading/unloading drivers in Unraid - essentially they implemented the ability for us to use modprobe.d, which has been the norm for controlling drivers in Linux for some time. Think of it like Device Manager in Windows - here are their notes on it:

https://wiki.unraid.net/Unraid_OS_6.9.0#Better_Module.2FThird_Party_Driver_Support
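For the GPU drivers specifically, my reading of those notes is that the stock image ships a conf file blacklisting i915, and a same-named file placed on the flash overrides it at boot, so the old go-file lines shouldn't be needed. Treat the paths as a sketch to verify against the release notes for your version:

```shell
# Unraid 6.9 copies /boot/config/modprobe.d/* over /etc/modprobe.d/ early
# in boot. An empty i915.conf on the flash replaces the stock file that
# contains "blacklist i915", so the driver loads on its own -- no
# "modprobe i915" or chmod in the go file.
touch /boot/config/modprobe.d/i915.conf
```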

 

As to why you need to edit this file, it's due to their having blacklisted the driver - the only place I know of where these specific parts of the modprobe implementation are documented is here:

 

@limetech @SpencerJ - 

 

Could we get these customizations (those noted in earlier beta/RC release notices) documented in the 6.9 (GA) release notes? I'll get it updated on the wiki if/when I get access.

 

(Edit: I guess this is more in the 'Project' field, so I probably should've pinged @ljm42 instead? Sorry guys!)

 

Edited by BVD

I've upgraded both of my systems to 6.9.0 with no major issues - the backup going from 6.8.3 and the media server from 6.9-RC2. Really the only issue I'm seeing is the one @S80_UK and @doron reported: both systems showed numerous notifications that 'drive utilization returned to normal'.

 

On my backup unRAID I still have the UI set to default color (grey for space used) but on my media unRAID, I changed it to show the color based on utilization. The Dashboard tab shows all drives as green, yet they are VERY HIGH on utilization. The Main tab shows them all as red, as I expect.

 

I also had my media unRAID report that it was an unclean shutdown following the upgrade reboot, yet when watching the process on the monitor attached to it, it appeared to restart normally with no errors noticed. I had no tasks writing to any of the drives so I expected parity was still valid. I performed one more reboot and of course that cleared the 'unclean shutdown' error. I'll be running a full parity check shortly after I clean up a few more things.

 

Any idea why the Dashboard tab shows the drives as green even though they should be red, as shown on the Main tab? Note that I have gone in and reconfigured the thresholds as noted in the release notes, but the media unRAID still shows green on the Dashboard tab and red on the Main tab.

 

Edited by AgentXXL
Added note about reconfig of thresholds

Upgraded my main production and backup Unraid servers (using the kernel build Docker from ich777's master to include the AMD GPU reset bug patch and ZFS), and apart from the spindown issue that still didn't go away with the upgrade from RC2 to final, everything went smoothly.

After racking my brain over why, even with all Dockers, VMs and shares down, the array drives still would not spin down, I finally found it was a custom script of my own running in the User Scripts plugin that ran smartctl every 5 minutes to collect drive standby statistics for my Splunk dashboard. Once I disabled that, all was smooth sailing. I replaced the smartctl calls with hdparm calls, which give me the same data but apparently don't disturb Unraid's standby behavior.

So if you experience this issue, look for anything that can fire off smartctl calls (Grafana/Telegraf and the like have already been pointed out in other recent posts).
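For reference, both `hdparm -C` and smartctl's `-n standby` option can query a drive's power state without spinning it up. A small sketch of the safer polling approach (the `drive_state` helper is mine; it just parses `hdparm -C` output):

```shell
# Print a drive's power state parsed from `hdparm -C` output on stdin, e.g.:
#   hdparm -C /dev/sdb | drive_state    ->  "active/idle" or "standby"
# hdparm -C (like `smartctl -n standby`) reads the state without waking
# the disk, whereas a plain smartctl poll can keep it from spinning down
# (the issue described above).
drive_state() {
  awk -F': *' '/drive state is/ { print $2; exit }'
}
```

With `smartctl -n standby -A /dev/sdX`, smartctl exits early when the drive is asleep instead of waking it, which may be another way to keep stats collection from blocking spindown.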

