
Multiple Arrays



Posted

Apologies if this one has been requested a few times already. I did a little searching through the forum and didn't see anything about it.

I'd like the ability to have multiple (more than one) arrays. I guess the main reason I could see for doing this is being able to have a massive media library stored on one array, and maybe personal storage for files on a different array. Having multiple arrays would allow you to use different disk sizes for each array and not be limited by the smallest HDD capacity.

 

Example:
(Array 1) - 3 x 10TB HDD + 1 x 10TB parity for mass media storage

(Array 2) - 1 x 4TB + 1 x 4TB parity for NextCloud storage with multiple users

 

I understand you can split the array up into multiple shares, but I guess I'd prefer to have separate arrays to hold different sets of data.

  • Upvote 3
Posted
4 minutes ago, PiMan314159265 said:

Having multiple arrays would allow you to use different disk sizes for each array and not be limited by the smallest HDD capacity.

It's been asked before and is a valid request, but not for the reason above, since you can have multiple disk sizes in an Unraid array and the full capacity will be used.

 

Also, we'll likely see multiple cache pools before multiple arrays.

  • Upvote 1
Posted

I'd like to second this. The example given above isn't desirable only for performance and media reasons.

 

While we can already locate shares on specific disks, I would like to be able to demarcate a smaller array with its own parity disks, so that the more important data has a shorter rebuild.

 

(Array 1) - 3 x 10TB HDD + 1 x 10TB parity for mass media storage

(Array 2) - 1 x 4TB + 1 x 4TB parity for NextCloud storage with multiple users

 

Sticking with the given example, with 4 x 10TB and 2 x 4TB disks in a single Unraid server, even if I explicitly locate my media and user shares, if I lose a 4TB disk I'm waiting for 6 x 4TB + 4 x 6TB of parity processing before my user data can be considered "safe".

  • 2 months later...
Posted

As others have said, I also like the idea of multiple arrays, but for me it's about disk usage and not size.

 

In my case, I would like to put MotionEye onto my array storage, but I have discovered that in doing so, it keeps the parity drive and one of the data disks active 24/7, which is not a good use of the array.

 

So I have resorted to using an Unassigned Devices drive to store my MotionEye data, which is not an ideal way to keep it safe.

 

  • Like 1
  • Upvote 1
Posted
2 hours ago, chris_netsmart said:

As others have said, I also like the idea of multiple arrays, but for me it's about disk usage and not size.

 

In my case, I would like to put MotionEye onto my array storage, but I have discovered that in doing so, it keeps the parity drive and one of the data disks active 24/7, which is not a good use of the array.

 

So I have resorted to using an Unassigned Devices drive to store my MotionEye data, which is not an ideal way to keep it safe.

 

I don't see how multiple arrays help in this use case. If you want redundancy, you're always going to have a minimum of 2 drives active.

  • 3 months later...
Posted

I would like to see multiple arrays as well.

 

But I would like to be able to use one array as a backup to the primary array.

 

One array for everything, and the 2nd array (which does not have to be online at all times) to back up important or non-replaceable data.

 

Based on my current hardware, I would like to see:

 

Array 1

8 TB Parity

6 TB Data

3 TB Data

3 Drives of 1 TB each

500 GB Data (drive was just lying around and there is space in the server)

320 GB Data (drive was just lying around and there is space in the server)

8 TB drive, used either as a 2nd parity drive or as a data drive

 

Array 2

6 TB Data

4 TB Data

As a backup, it does not need to be parity protected.

 

Just my thoughts

 

 

  • Thanks 1
  • 2 months later...
Posted

Or, theoretically, with multiple JBOD-style enclosures like a NetApp disk shelf, you could expand your server almost indefinitely and create as many arrays as you want.

Obviously there would be some trade-offs with having many arrays, but I'd assume features like staggered parity checks would be implemented, so that only one array would ever be getting a parity check at a time, and I'd dare say a staggered drive spin-up option for parity check or cache mover operations would be necessary with large numbers of drives, as I could easily see some people building ridiculous 100+ drive servers.

 

Either way, I think a multiple-arrays feature would be amazing for Unraid and could even result in an increase in sales, as I know for a fact that many people choose other options over Unraid due to the maximum array size / not being able to create multiple arrays.

  • Upvote 1
Posted (edited)

Just an additional thought:

 

Modern HBAs like the LSI 9300 series can address up to 1024 SAS/SATA drives through expander features. Many backplanes have expander chips on board and work perfectly with these HBAs. What could be done in theory is to cascade many backplanes, hosting for example 24 drives each, from just one single HBA.

 

Currently, if you want several parity-protected arrays, this can be done with Unraid VMs running on Unraid. The drawback is that you need an HBA passed through to every VM. *) So there's currently a built-in limit on the number of parity-protected arrays running in Unraid VMs: the number of free PCIe slots on your motherboard and the number of USB ports for the licensed flash drive that every Unraid instance currently needs.

 

*) In theory, cascading should be possible without HBAs passed through to VMs, if one could pass through e.g. 24 individual disks to every VM. But I could never get that to work.

 

I'm dreaming of a chassis hosting 48 or 60 or even 90 drives, running as one single server with one motherboard and one HBA, hosting several parity-protected arrays ... ;-)

 

Edited by hawihoney
  • 1 month later...
Posted (edited)

+1 for multiple arrays

 

Here is another use case 

 

My issue is not disk space or the number of drives. I am sure many of you have the same setup: you keep all your file shares (documents, media, ...) on the array and use either standalone SSDs, or in my case RAID1 SSDs, for VMs, cache and Dockers to make them run faster.

 

As for the cache, because it is temporary storage and supports RAID redundancy, there really isn't any point in "unraiding" that.

 

For VMs and Dockers, you want the best performance you can get, and running them on the cache drive (SSD) gives you that; however, that setup isn't very flexible.

 

I realize that you could run Unraid in a VM for your file shares and keep the main Unraid for only VMs and Dockers using an SSD "Unraid" RAID setup, but then you are dealing with two or three admin consoles and the extra overhead of the nested VM. This would work, but having everything in one management console would be ideal.

 

 

My ideal setup would be:

 

Media and such on the HDDs using "Unraid" RAID #1

Cache disks using SSDs and RAID1 (just in case of a failure), but could go RAID0

Docker & VMs stored on SSDs using "Unraid" RAID #2

 

or even 

Docker on "Unraid" RAID  #2 (using SSDs)

and 

VMs on "Unraid" RAID  #3 (using SSDs)

 

This setup would allow you to configure disk arrays of different sizes for different purposes and be protected against disk failures.

 

An added bonus would be if you could configure shares that span multiple arrays, but that is for a different thread :) and there is no point in asking for that if Unraid doesn't support multiple arrays in the first place.

 

 

Now there is another approach: maybe instead of supporting multiple arrays, you go with multiple Unraid servers for different purposes, and the management console itself is changed to enable managing multiple Unraid servers as if they were one server with multiple arrays. That would solve the administrivia nightmare of having multiple Unraid servers (VM'd or not). This would also be good for backup servers (which I have).

 

Administering multiple Unraid servers from one console would solve a bunch of other issues, but this post is getting too long as it is.

 

 

 

 

 

Edited by piyper
  • Like 1
  • 3 weeks later...
Posted

How about the use case for upgrading to new drives?

Let's say I have 10 x 8TB data drives and 2 x 8TB parity drives. Your case is quickly running out of physical space.

Right now, if I wanted to introduce 14TB drives into the array, the solution is usually to push them in as array or parity disks until you have enough of them to hold the full array's contents for a copy. In the meantime, you are not utilizing the disk capacity above 8TB for these drives, unless I am missing something.

If you allowed a second array, you could spin it up with two 14TB drives and use it on day one, adding a third for parity, and get running pretty quickly with no downtime on your first array, which would still be utilizing its disks. Over time, you could either run the arrays in parallel or start copying data to the new array as you can, to eventually decommission the first array.
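To put rough numbers on the two routes (just a sketch: it assumes 8TB existing disks, three new 14TB disks, decimal TB, and ignores that the replaced 8TB disks could be redeployed elsewhere):

```python
# Rough usable capacity gained from three new 14TB drives, two ways.
new_tb, old_tb = 14, 8

# Route 1: second array on day one -> 2 data disks + 1 parity disk
second_array_gain = 2 * new_tb                    # 28 TB of new protected space

# Route 2: replace-and-rebuild in the existing array
# the first new drive must become parity (adds no capacity),
# each further swap of an 8TB data disk adds only the size difference
existing_array_gain = 0 + 2 * (new_tb - old_tb)   # 12 TB after all three swaps

print(f"second array: +{second_array_gain} TB, in-place upgrade: +{existing_array_gain} TB")
```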

Posted
9 minutes ago, Casey said:

Right now, if I wanted to introduce 14TB drives into the array, the solution is usually to push them in as array or parity disks until you have enough of them to hold the full array's contents for a copy. In the meantime, you are not utilizing the disk capacity above 8TB for these drives, unless I am missing something.

Not sure if you are missing something, or I just don't understand what you mean by "push" and "copy".

 

No copying is needed to introduce larger disks to the array. Just replace smaller disks with larger disks and let it rebuild, one at a time, as needed. There is no requirement for all disks to be the same size, but parity must be at least as large as the largest data disk, so parity would have to be the first disk to upsize, of course.

Posted
10 minutes ago, trurl said:

Just replace smaller disks with larger disks and let it rebuild, one at a time, as needed.

For example, I have always had a small case and few ports. When I started with Unraid years ago I was using 2TB drives. Now they are all 6TB, except I recently did a parity swap with a failing 6TB data disk to get to 8TB parity, ready for the next data disk upsize.

 

At no point did I need to increase my drive count or copy anything to make this happen.

  • 3 weeks later...
Posted
8 hours ago, madbrayniak said:

I would like to see this as well, but I was thinking it would be nice to make an all-SSD array for my gaming VMs and then have the main array with SSD cache for everything else.

Ditto what itimpi said. You can have multiple cache pools with 6.9.0 (currently beta 25 but IME very stable) and run the SSD "array" as a pool (e.g. btrfs RAID 5 for parity). The pros are that it's faster than normal array parity, supports TRIM, and can have a vdisk larger than a single drive.

 

If you just want to divide your array into SSD and non-SSD sections, it's already doable by changing your share settings to include the SSDs in, or exclude the HDDs from, the gaming VM share. There's no need for a separate array.

  • Like 1
Posted (edited)
47 minutes ago, -Daedalus said:

Isn't BTRFS RAID 5/6 still a no-no?

The "no-no" regarding BTRFS RAID-5 has been outdated for about 2 years (4.17 - Jun 2018).

It's kinda funny how some outdated noise can have such a reverberating impact.

 

BTRFS RAID-6 was a no-no until 5.5 (Jan 2020), which added the RAID1C3 implementation so metadata chunks have the same resistance to a 2-drive failure as data chunks. Before 5.5, it would be pointless to run BTRFS RAID-6, because a 2-drive failure meant irrecoverable loss of some metadata chunks (assuming metadata is in RAID-1).
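For illustration, moving an existing pool to that layout is a single balance with convert filters; here is a minimal sketch (the mount point /mnt/pool is hypothetical, and it assumes kernel/btrfs-progs 5.5 or newer):

```python
# Sketch: convert data chunks to RAID-6 and metadata chunks to RAID1C3
# on an existing multi-device btrfs filesystem (hypothetical mount point).
import subprocess

POOL = "/mnt/pool"  # adjust to the pool's actual mount point

subprocess.run(
    ["btrfs", "balance", "start",
     "-dconvert=raid6",    # data: tolerates two failed drives
     "-mconvert=raid1c3",  # metadata: three copies, also tolerates two failures
     POOL],
    check=True,
)
```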

 

I have had a few disks dropped offline (not simultaneously) in my BTRFS RAID-5 pool and recovering was basically running scrub.

 

The only outstanding issue that causes it to be marked "unstable" is the write hole problem, but hardware RAID also has a write hole problem and nobody calls it "unstable". There are ways to mitigate it, and the risk of it happening is rather low to begin with.

 

So no, it ain't a no-no. 😉

Edited by testdasi
Posted
1 hour ago, testdasi said:

The "no-no" regarding BTRFS RAID-5 has been outdated for about 2 years (4.17 - Jun 2018).

It's kinda funny how some outdated noise can have such a reverberating impact.

 

BTRFS RAID-6 was a no-no until 5.5 (Jan 2020), which added the RAID1C3 implementation so metadata chunks have the same resistance to a 2-drive failure as data chunks. Before 5.5, it would be pointless to run BTRFS RAID-6, because a 2-drive failure meant irrecoverable loss of some metadata chunks (assuming metadata is in RAID-1).

 

I have had a few disks dropped offline (not simultaneously) in my BTRFS RAID-5 pool and recovering was basically running scrub.

 

The only outstanding issue that causes it to be marked "unstable" is the write hole problem, but hardware RAID also has a write hole problem and nobody calls it "unstable". There are ways to mitigate it, and the risk of it happening is rather low to begin with.

 

So no, it ain't a no-no. 😉

Since 6.9 has multi-pool support, I set up a BTRFS RAID-5 pool with three 14TB HDDs. The issue I find is the slow scrub speed, only around 40MB/s; it might take 3 days for a single scrub if I have 10TB of data on it. Is that normal?
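For reference, the 3-day figure is roughly what the arithmetic gives (a quick sketch, assuming the 40MB/s is sustained for the whole run):

```python
# Scrub-time estimate: data to read divided by observed scrub throughput.
data_tb = 10            # TB of data on the pool
speed_mb_s = 40         # observed scrub speed in MB/s

seconds = data_tb * 1_000_000 / speed_mb_s   # 1 TB = 1,000,000 MB (decimal units)
print(f"~{seconds / 3600:.1f} hours (~{seconds / 86400:.1f} days)")
# -> ~69.4 hours, i.e. just under 3 days
```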

Posted
13 minutes ago, trott said:

Since 6.9 has multi-pool support, I set up a BTRFS RAID-5 pool with three 14TB HDDs. The issue I find is the slow scrub speed, only around 40MB/s; it might take 3 days for a single scrub if I have 10TB of data on it. Is that normal?

It's normal for scrub to be slower than your normal speed. Long scrub time is partly the reason why I don't recommend BTRFS RAID-5/6 with HDDs.

 

Shameless plug: I just did a relatively long write up about BTRFS RAID-5/6 in my build log so have a read (which among other things explains why long scrub time is probably not something you want).

 

 

Posted
17 minutes ago, testdasi said:

It's normal for scrub to be slower than your normal speed. Long scrub time is partly the reason why I don't recommend BTRFS RAID-5/6 with HDDs.

 

Shameless plug: I just did a relatively long write up about BTRFS RAID-5/6 in my build log so have a read (which among other things explains why long scrub time is probably not something you want).

 

Yeah, this is also my concern. I just ordered another 14TB HDD and plan to convert the pool to RAID 10.

Posted
33 minutes ago, trott said:

Yeah, this is also my concern. I just ordered another 14TB HDD and plan to convert the pool to RAID 10.

I would say you are better off getting the ZFS plugin and running your HDDs in ZFS RAIDZ1 instead.

More space and no write hole (so lower risks from slow scrub).
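To put numbers on "more space" (a sketch for four 14TB drives, ignoring filesystem overhead):

```python
# Usable-capacity comparison for four 14TB drives (overhead ignored).
drives, size_tb = 4, 14

raidz1_tb = (drives - 1) * size_tb   # RAIDZ1: one drive's worth of parity
raid10_tb = (drives // 2) * size_tb  # RAID 10: everything mirrored

print(f"RAIDZ1: ~{raidz1_tb} TB usable, RAID 10: ~{raid10_tb} TB usable")
# -> RAIDZ1: ~42 TB, RAID 10: ~28 TB
```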

Posted
11 minutes ago, johnnie.black said:

but no fix for now.

I was suggested one workaround, which is scrubbing each disk individually, but in a quick test the total time looked like it would be about the same; I still need to test it more thoroughly.
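For anyone wanting to try that workaround, a minimal sketch of scrubbing one pool member at a time (the device paths are hypothetical; btrfs scrub accepts a single device of a mounted filesystem, and -B keeps it in the foreground):

```python
# Sketch: run the per-device scrub workaround sequentially.
import subprocess

POOL_DEVICES = ["/dev/sdb1", "/dev/sdc1", "/dev/sdd1"]  # hypothetical pool members

for dev in POOL_DEVICES:
    # scrub only this member and wait for it to finish before starting the next
    subprocess.run(["btrfs", "scrub", "start", "-B", dev], check=True)
    print(f"scrub finished for {dev}")
```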
