Multiple Arrays



I would definitely also want multiple arrays. That would give the ultimate flexibility to organize the server's storage.

Every array with its own parity disk(s) would be an independent fault-tolerance unit, with no speed or security degradation when a drive fails in another array and that array needs to be rebuilt.

It would be possible to spread the risk over more disks when the data is split across multiple arrays.

Statistically, the overall reliability gets lower and lower the more drives you add to a single array with its maximum of two parity disks. Multiple arrays would prevent this.
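A quick back-of-the-envelope sketch of this claim (my own illustration; the 2% per-drive failure probability per rebuild window is made up, not a real drive statistic):

```python
# Sketch of the reliability argument: a dual-parity array survives any
# two simultaneous drive failures, so data loss requires 3+ failures
# within one rebuild window. p is an assumed, illustrative probability
# that a given drive fails during that window.
from math import comb

def p_loss(n_drives: int, p: float = 0.02, parity: int = 2) -> float:
    """Probability that more than `parity` of n_drives fail together."""
    survive = sum(comb(n_drives, k) * p**k * (1 - p)**(n_drives - k)
                  for k in range(parity + 1))
    return 1 - survive

single = p_loss(26)                 # one array: 24 data + 2 parity
split = 1 - (1 - p_loss(10))**3     # three arrays: 8 data + 2 parity each
print(f"one 26-drive array:    {single:.3%}")   # ~1.5%
print(f"three 10-drive arrays: {split:.3%}")    # ~0.26%
```

With these made-up inputs, both layouts hold 24 data disks, yet the single large array is several times more likely to lose data than the three smaller arrays combined - at the cost of four extra parity drives.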

 


I am interested in running two dual-parity protected arrays in a single chassis w/ Unraid on bare metal.

 

I have a smaller array for testing and for data that would be a minor headache to replace (6 data drives + 2 parity). I also have my "production" Unraid array that houses my primary data, which would be a huge issue to lose (18 data drives and growing, + 2 parity, with full daily offsite backups).

 

My chassis is 36 bays and I currently run Unraid on top of ESXi, passing through the USB keys along with HBAs and NVMe cache drives to get as close to bare metal as I can.

 

 

Edit: After ZFS lands, surely multiple Unraid parity arrays will be next…?
 

 

Edited by BrianAz
Learned that ZFS is officially planned for next release…

+1 for multiple arrays.

I'd settle for just a second array for my SSD storage. That way my frequently used data, such as Time Machine backups, phone backups, newly "acquired" media and my "to be played" Steam library, would not need to spin up my spinning disks all the time, relegating those to infrequently used media.

Having my largest capacity (expensive) disk spinning all the time isn't great.

This is the main feature I'm missing to move from an aging Synology NAS to Unraid.

 

Given Synology's horrible track record with me over the last two years, I would gladly pay double the Pro fee to get this feature.

10 hours ago, dada051 said:

at least dual array

 

Please don't aim that low.

 

With 30 cache pools currently available, array pools should not be limited to just two.

 

The market for 45-, 60- or 90-drive systems is growing, and using these systems with multiple Unraid VMs to work around the current restriction only goes so far: every Unraid VM needs a USB slot for an additional license/OS stick and a PCIe slot for an additional HBA to pass through. So available USB and PCIe slots are currently the limiting factor if you want to run big systems.

 

I would like to create several small and big arrays - e.g. one with dual parity/two data disks, and three with dual parity/22 data disks each. And all of that based on battle-tested Unraid arrays, not experimental BTRFS cache pools (my opinion, based on several bad experiences with BTRFS).

 


I would like multiple arrays for what I think is a unique edge case: a second array for special-purpose drives like WD Purples to store my backup security-cam footage, while keeping my Reds for all the other data in a separate array.


+1 for writing Chia plots to a second array without parity.

Initially I had the idea to make a "no cache" share on multiple disks with "most free" allocation.

But writing the 101 GB plot to a parity-protected array is slowed down by 50-70% (every write also has to update the parity disks), which takes too much time when plotting in parallel.

So the only reasonable way to plot is against unassigned devices, rotating the destination disks in the plotting script.
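A minimal sketch of such a rotation, assuming the unassigned devices are mounted under /mnt/disks/ (the paths and size threshold below are hypothetical; adjust them to your system):

```python
# Hypothetical destination rotation for a plotting script: cycle through
# the unassigned devices and pick the next mount with enough free space
# for a finished plot.
import itertools
import shutil

DESTS = ["/mnt/disks/plots1", "/mnt/disks/plots2", "/mnt/disks/plots3"]
PLOT_SIZE = 102 * 1024**3  # a k32 plot is ~101 GiB; leave a little headroom

_rotation = itertools.cycle(DESTS)

def next_destination() -> str:
    """Return the next unassigned disk that can hold a full plot."""
    for _ in range(len(DESTS)):
        dest = next(_rotation)
        if shutil.disk_usage(dest).free > PLOT_SIZE:
            return dest
    raise RuntimeError("no destination disk has enough free space")
```

Rotating the destination per plot keeps parallel writers on different spindles - the same effect the "most free" allocation would give, just without the parity penalty.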

On 5/17/2021 at 6:10 AM, dada051 said:

I found another use case: one array with XFS, and another with encrypted XFS, so we can start and use Docker containers/VMs even after a reboot.

I find this interesting. Should LT decide not to require ALL arrays to start before allowing the Docker and VM services to start, then you essentially bypass the current restriction.

 

Pondering this for just two minutes, I guess you would have to define some sort of array hierarchy: which arrays come online first, in what order, and which one is responsible for allowing other services to start (#1?). If it were #1, you would just create a small (even RAM-disk) array to get services running.

 

Like I said ... interesting.


+1 for multiple arrays.

 

Ralf and some others made good points. This adds flexibility and new options, many of which people have never thought of.

 

An encrypted and an unencrypted array would be great.

Also, I have lots of disks and don't want to stress the parity drives for everything: spin array 1 down when only array 2 is working. Alternatively: load-balance two simultaneous workloads.

My disks are of quite different sizes. It just seems wrong to have giants and dwarfs on the same team - in my case, their use cases are different sports.

 

I can already see the situation where I need to move my data to bigger disks and use that as an opportunity for a cleanup. Copying between two arrays would be much easier.

As mentioned already: my backups won't need parity.

Also, I farm Chia. I don't want parity for that.

 

Some might also fancy a BTRFS RAID 5 or 6 as an additional array.


+1 for multiple arrays

 

By now I have four 3 TB drives (2 for data, 2 for parity).

After migrating from my old NAS I have two 1 TB disks that I could reuse.

But Unraid doesn't let me use them the way I would like: as a second, mirrored 1 TB array.

I'm still torn between Unraid and TrueNAS.

 


You can use a cache pool to create a RAID 1 with your two 1 TB disks. But you would then be forced to add only 1 TB disks to this pool if you want to grow it (and change to RAID 5 or 6).

And I find a "cache pool" not appropriate for HDDs (or for the usage we make of them).

2 hours ago, dada051 said:

You can use a cache pool to create a RAID 1 with your two 1 TB disks. But you would then be forced to add only 1 TB disks to this pool if you want to grow it (and change to RAID 5 or 6).

And I find a "cache pool" not appropriate for HDDs (or for the usage we make of them).

 

Nope. You can add any size disks you want to a pool, and unRAID will figure it out.

I personally ran a pool of 1 TB + 2x500 GB disks with no problem.
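For context, a rough sketch of how BTRFS RAID 1 usable capacity works out with mixed disk sizes (my own approximation of the allocator's behavior, not Unraid or btrfs code):

```python
# BTRFS RAID 1 mirrors every chunk onto two different devices, so the
# usable space is roughly half the total, and the largest disk can only
# mirror against the combined space of the others.
def btrfs_raid1_usable(sizes_gb: list[int]) -> int:
    total = sum(sizes_gb)
    others = total - max(sizes_gb)
    return min(total // 2, others)

print(btrfs_raid1_usable([1000, 500, 500]))  # -> 1000 (the pool above)
```

So the 1 TB + 2x500 GB pool yields about 1 TB usable, precisely because mixed sizes are allowed.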

