[REQUEST] More total disks in Array when pooling them in BTRFS cache



I know there's currently a combined limit of array disks + cache, and the UI allows us to allocate more slots to cache at the cost of array disks.

 

I run a 4-SSD BTRFS cache pool in RAID-1 (btrfs mode).

 

Would it be possible to allow BTRFS disks to not count against the array disk limit?

Unfortunately not without a major rewrite. Currently disks can only use the 3-letter designation instead of 4, so there is a hard limit of /dev/sda-/dev/sdz. I think I remember seeing this somewhere in the long-term goals, but I wouldn't hold my breath.
Link to comment

You're trying hard, but even that doesn't address my request.

 

I'm not trying to go beyond 24-26 devices in the array...

 

I'm highlighting that when I use BTRFS in RAID-1 as the array cache device, it uses up 4 drives in Linux (sde-sdf-sdg-sdh)

 

[screenshot: YyadRfd.png]

 

but really, as far as the md driver is concerned, only 1 drive is used (sdg1) in this case.
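
For anyone unfamiliar with how btrfs pooling looks from the OS side, a pool like mine is built and mounted roughly like this (illustrative commands only, not what unRAID runs internally; device names match my layout):

# mkfs.btrfs -d raid1 -m raid1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1
# mount /dev/sdg1 /mnt/cache    # mounting any one member brings in the whole 4-drive pool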

 

So I'm not trying to go over the 24-drive limit; I'm pointing out that when I assign a pooled BTRFS array as the cache device, the UI decreases the number of array drives available for data, even though the drive naming convention isn't the limiting factor.

 

Again, I'm not going beyond 24 data drives, so I'm not asking to tweak buffers, device assignment, etc.

 

And the statements to the effect that "sdaa" / "sdab" won't work, I can show are wrong, because it's already running OK on my system and everything starts up fine...
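
If anyone wants to verify, listing the block devices from the console shows the kernel naming rolling past sdz without complaint (just an example check, nothing unRAID-specific):

# ls -1 /dev/sd[a-z] /dev/sd[a-z][a-z] 2>/dev/null   # 3- and 4-letter device nodes side by side
# lsblk -d -o NAME,SIZE | grep '^sd'                 # every whole disk the kernel has named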

 

So the only valid statement I read from Tom is that, when it comes to the specific feature I'm requesting, it hasn't been tested.

 

Link to comment


The per-license device restriction doesn't distinguish between array and cache: cache devices count against the licensed limit just the same as data disks.

Link to comment

Then my request is simple: keep counting the cacheID device against the license as it does now, but don't count the cacheId.XXX member disks against the license.

 

Being penalized on data array capacity while still using 1 cache, just because I pool it for optimization, is a letdown.

 

The reality is that unraid is charging extra disks against the license strictly for using the UI.

 

I can already do manually what I'm requesting become a feature:

 

If I mount the cache drive manually, and it so happens to be pooled in BTRFS RAID1, it still counts as 1 disk against the license as far as Unraid is concerned, because I am still system-mounting a single disk.

 

However, if I use the UI to do the same thing, you ding me 4 disks of usage and still mount the same single disk.

 

So here I am doing it manually, having swung the 24 disk allocations back to the data array in the UI, so I can run 24 array disks:

[screenshot: JZsR5RU.png]

 

And still running a btrfs raid1 cache device, and mounting a single disk.
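
For completeness, the manual mount boils down to something like this (a rough sketch; udev usually handles the scan on its own):

# btrfs device scan                     # make sure the kernel has registered all pool members
# mount -t btrfs /dev/sdg1 /mnt/cache   # a single member device mounts the entire RAID-1 pool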

 

# df /mnt/cache
Filesystem    1K-blocks    Used  Available Use% Mounted on
/dev/sdg1      976771588 23204008 1211440272  2% /mnt/cache

 

# btrfs filesystem show
Label: none  uuid: a97f3ee8-7459-4518-bc4e-8012fae4f360
        Total devices 4 FS bytes used 22.11GiB
        devid    1 size 465.76GiB used 136.03GiB path /dev/sdh1
        devid    2 size 465.76GiB used 136.00GiB path /dev/sde1
        devid    3 size 465.76GiB used 136.03GiB path /dev/sdf1
        devid    4 size 465.76GiB used 136.00GiB path /dev/sdg1
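
(As an aside, plain df is misleading on a btrfs RAID-1 pool; the btrfs tools give the real per-profile numbers if anyone wants them:)

# btrfs filesystem df /mnt/cache        # shows data/metadata allocation per RAID profile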

 

 

So I hope you understand what I'm trying to show: the subsystem uses 1 disk for cache, but the UI takes a 4-disk allocation for a feature (btrfs pooling) that isn't native to it.

 

Link to comment
3 weeks later...

So it looks like you removed 4-letter device support between b12 and b14.

 

I guess my thread wasn't constructive...

 

There was never anything done with 4-letter device support one way or the other. If it works, it works "by accident", and there will surely be some things that won't work if you have enough devices to start rolling over to 4 letters. It's not a huge amount of work to fix; however, it will not get fixed in 6.0.

 

The 'cache devices' counting toward the key limit has been fixed, though probably not how you want it  ;)  The fix is to not permit mounting a 'cache pool' unless all devices in the file system, as reported by 'btrfs fi show', are also assigned to cache slots. The intention is that devices managed by unRaid OS count against the key device limit. If this adversely affects you personally, please send me an email: [email protected]
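
In rough pseudo-shell terms, the check amounts to something like this (illustrative only, not the actual unRaid code; the slot assignments shown are just an example):

# refuse to mount the pool unless every member reported by btrfs is assigned to a cache slot
POOL_DEV=/dev/sdg1                                   # device in the primary cache slot (example)
ASSIGNED="/dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1"   # devices assigned to cache slots in the UI
for dev in $(btrfs filesystem show "$POOL_DEV" | awk '/ path /{print $NF}'); do
    case " $ASSIGNED " in
        *" $dev "*) ;;                               # member is assigned to a slot -- fine
        *) echo "refusing to mount: $dev is in the pool but not assigned"; exit 1 ;;
    esac
done
mount -t btrfs "$POOL_DEV" /mnt/cache                # all members accounted for -- allow the mount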

Link to comment
