hawihoney

How do you guys work with more than 24 drives?


I can't resist. There are so many used and beefy Supermicro systems on eBay. Many of them have 36-45 bays. And adding cheap JBOD cases containing a single CSE-PTJBOD-CB2 powerboard to connect to these servers looks so promising. How do you guys work with that much storage?

 

Unraid protects up to 26 data drives with up to two parity drives (=28). I would always go the M.2 way for cache (=30). AFAIK, an XFS RAID does not exist, so one has to go with BTRFS and Unassigned Devices (=32, RAID1).

 

And then?

 

- Is there a way to simulate a protected XFS array like Unraid does?

 

- What is a 24-drive cache pool good for?


- Do you add lots of BTRFS RAID1 pools through Unassigned Devices?

 

- And do you think we will see multiple protected arrays in Unraid "soon"?

 


You can try setting up multiple arrays right now (install the second/third... as VMs on the main array) and then organize all access to the data through your main array. You can mount the second array on the main one with the Unassigned Devices plugin via NFS, for example, and then share it if you want..
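For example, a manual NFS mount from the main server could look roughly like this (the server name "tower2", the share "media" and the mount point are just placeholders; the Unassigned Devices plugin essentially performs the same mount for you through its GUI):

```bash
# On the second Unraid server: enable NFS and set the share to Export: Yes.
# Then, on the main server, a manual mount of that share looks like this:
mkdir -p /mnt/remotes/tower2_media      # mount point on the main server (example name)
mount -t nfs tower2:/mnt/user/media /mnt/remotes/tower2_media

# ...and to detach it again:
umount /mnt/remotes/tower2_media
```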

OK, right now this requires a lot of manual work, but who knows whether this multiple-array feature will come to Unraid or not..


Hmm, what do you mean by multiple arrays? Do you mean multiple physical servers? In fact I'm using Unassigned Devices to map the drives of a second server into the first one. With lots of shares and drives it's really a pain.

 

That solution has a small drawback: you need two complete servers, each with its own mainboard, RAM and CPU.

 

One 36-45-bay server, or a main server with a JBOD expansion box giving 48 drives in total, seems to be the more promising solution, yet it's not supported by base Unraid.

 

Just an example - I don't know the backplane model. SAS2 is not mentioned, so I guess it's SAS-only. Always be careful here; one needs to check before buying:

 

https://ebay.us/zN4sGa

1.) Connect that beast as it is to your main server. Use the Unassigned Devices plugin to connect the drives. That gives 36 additional drives.

2.) Throw the mainboard and stuff out and add a powerboard. For under EUR 100 for a used powerboard and proper cables it can expand your existing server.

 

The problem with both solutions: base Unraid can't handle it. You need Unassigned Devices, and you need BTRFS if you want some kind of RAID level, ...

 


No, you don't need two physical servers. Look at my example - I have one physical 24-bay server with VMware ESXi as the host OS. Then I have two Unraid server VMs in it. Both are running just fine - I simply pass through one HBA controller to each VM. And then I can organize all access to the data through one VM, if I want.

You can run a similar setup with Unraid as the host OS, and then add more Unraid VMs as needed.

Your idea with a second case and a power board in it works very well; @johnnie.black has a very similar setup. In this case you probably have more than one HBA in your setup, and with proper pass-through you can organize all your server resources to run two or more Unraids. But again, I agree, it will require some manual steps to be performed :)
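If you want to check up front whether pass-through will even work on your box, something along these lines from the host shell lists the HBAs and their IOMMU groups (plain Linux tools, nothing Unraid-specific; an HBA that shares its group with other devices can only be passed through together with them):

```bash
# List the LSI HBAs with their PCI addresses and vendor:device IDs:
lspci -nn | grep -i LSI

# Show which IOMMU group every PCI device belongs to:
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    group=$(basename "$(dirname "$(dirname "$dev")")")
    echo "IOMMU group $group: $(lspci -nns "${dev##*/}")"
done
```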


That sounds interesting. Two or more unRAID VMs running on unRAID. The unRAID VMs are responsible for "their" set of disks including parity. And these disks are Unassigned Devices in the host unRAID?

 

And this will work? Can't believe that ...

 

What about the USB drives and license within the unRAID VMs?

 


Yes, each Unraid VM is responsible for its own disks with parity. And with that many disks, I would export user shares to the main array. If you have, say, 24 drives, it's a big hassle to operate with them as 24 disk shares - just group your disks into user shares according to data usage. You can start with one or two disks per user share and add more disks later if you need space for that share.
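As a rough picture of what a user share means on disk (standard Unraid layout; the share and disk names here are just examples):

```bash
# Every data disk is mounted at /mnt/diskN. A user share is the merged view of the
# same-named top-level folder across the disks you include in it, and that merged
# view is what gets exported over SMB/NFS:
ls /mnt/disk1/Movies /mnt/disk2/Movies   # the files, spread over two disks
ls /mnt/user/Movies                      # one combined share as seen by clients
```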

Regarding USB and licences - you need a separate Unraid licence for each VM, plus one for the host.

But see here for a complete tutorial:

 


Here is a much newer guide to running Unraid in a VM on an Unraid host; it takes advantage of some functionality that was added in 6.4.

 

5 hours ago, hawihoney said:

Unraid protects up to 26 data drives with up to two parity drives (=28)

Actually it's 28 data + 2 parity.


Thanks a lot. It would be better if unRAID could handle multiple arrays and disk counts beyond the current limits. But I will run some experiments with your solution.

 


I have 4 unRAID servers in 2 48-bay Chenbro cases. Each box has one bare metal unRAID server hosting a VM unRAID server. The 48 bays are split at 24 drives per backplane, and the backplanes are SAS expanders, so I can't split them up except by backplane. So it was natural to give the VM and the bare metal unRAID server on each box 24 drives each. I have one 9211-8i class card for the bare metal server and another passed through to the VM server, each going to one of the 2 backplanes.


Thanks, that sounds terrific.

 

Can you do parity checks in your Unraid VM with 24 drives connected? What speed does that give, and is it reliable - I mean no crashes, etc.?

 

My idea: for every 24-bay JBOD expansion box (just a CSE-PTJBOD-CB2 powerboard) I would place one LSI 9300-8e in the Unraid host. Then I would give one HBA to one Unraid VM. So every JBOD expansion has its own HBA and drives. And on these drives the Unraid VM will manage dual parity with 22 data drives each. Is this possible?

 

And if I get that working, how can I use those drives, which are managed by the Unraid VM client, in the Unraid host? Can I simply use them? I mean, the HBA is mapped to an Unraid VM. Will it still work?

 

My first steps did not look promising. I can't map an HBA to the VM. If I look at "Other PCIe devices", it's always empty.


Do you guys mind giving some first-steps help? Your ideas are highly appreciated.

 

On 1/30/2019 at 1:02 AM, hawihoney said:

Can you do parity checks in your Unraid VM with 24 drives connected? What speed does that give, and is it reliable - I mean no crashes, etc.?

Yes, parity check runs separately for the VM unRAID and the host unRAID. Since I am using expanders rather than connecting the drives directly to a controller, the speed may be reduced, but I think it is mostly the drives. It has nothing to do with unRAID running in a VM.

unRAID VM: average speed 104.6 MB/sec with an 11-device array of WD 4TB 5400rpm Red drives. Speed is limited by the drives.

unRAID host: average speed 139.2 MB/sec with a 17-device array of HGST 6TB 7200rpm NAS drives. Speed might be limited by the expander.

On 1/30/2019 at 1:02 AM, hawihoney said:

My idea: for every 24-bay JBOD expansion box (just a CSE-PTJBOD-CB2 powerboard) I would place one LSI 9300-8e in the Unraid host. Then I would give one HBA to one Unraid VM. So every JBOD expansion has its own HBA and drives. And on these drives the Unraid VM will manage dual parity with 22 data drives each. Is this possible?

Sounds possible.  That is essentially what I'm doing except internally.  But I have not tried it so cannot confirm it will work.

On 1/30/2019 at 1:02 AM, hawihoney said:

And if I get that working, how can I use those drives, which are managed by the Unraid VM client, in the Unraid host? Can I simply use them? I mean, the HBA is mapped to an Unraid VM. Will it still work?

The drives are not visible to the host when you pass through the controller to the VM. You use the standard GUI connected to the VM to control them. NOTE: I had to use xen-pciback.hide instead of vfio-pci.ids since I have two LSI controllers with the same ID information and only one of them is passed through to the unRAID VM; the other controller is used on the unRAID host.
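For the usual case (controllers with different IDs), a rough sketch of the vfio-pci.ids approach in /boot/syslinux/syslinux.cfg looks like this - the ID 1000:0097 is only an example, check lspci -nn for your card, and with two identical controllers you have to hide by PCI address instead, as noted above:

```bash
# /boot/syslinux/syslinux.cfg (excerpt) - stub the HBA by vendor:device ID so the
# host never touches it and it can be handed to the VM.
# 1000:0097 is an example ID (LSI SAS3008); identical twin controllers need
# address-based hiding instead, since binding by ID would grab both of them.
label Unraid OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=1000:0097 initrd=/bzroot
```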

Edited by BobPhoenix


One more thing. I was having a hell of a time getting my Unraid VM to work on my Unraid host. It should have been working; I spent hours a day doing research over the course of several weeks and tried every possible combination of configurations multiple times, but I could never get the damnable thing to boot. Then, like a flash of lightning, it struck me to try updating the motherboard BIOS. I needed to do it anyway because of Meltdown and Spectre but had never gotten around to it, so I figured what the hell. Applied the updated BIOS and miraculously everything started working. Yippee!!


Thanks for this insight.

 

In the meantime I've set up everything. There's one nasty thing left. I will post about this in General Support today. It's about copying large files between servers (VM and bare metal).

 

My new environment - mostly used stuff from eBay:

 

3x Supermicro SC846E16 chassis (1x bare metal, 2x JBOD expansion)

 

Bare metal:

1x Supermicro BPN-SAS2-EL1 backplane

1x Supermicro X9DRi-F Mainboard

2x Intel E5-2680 v2 CPU

2x Supermicro SNK-P0050AP4 Cooler

8x Samsung M393B2K70CMB-YF8 16GB RAM --> 128 GB

1x LSI 9300-8i HBA connected to internal backplane (two cables)

2x LSI 9300-8e HBA connected to JBOD expansions (two cables each)

2x Lycom DT-120 PCIe x4 M.2 Adapter

2x Samsung 970 EVO 250GB M.2 (Cache Pool)

 

For each JBOD expansion:

1x Supermicro BPN-SAS2-EL1 backplane

1x Supermicro CSE-PTJBOD-CB2 Powerboard

1x SFF-8088 to SFF-8087 slot bracket adapter

2x SFF-8644/SFF-8088 cables (to HBA in bare metal)

2019-03-05: Removed the two back fans (only in a cool environment)

2019-03-05: Replaced the three fans in the fan wall with the drop-in replacement Supermicro FAN-0074L4 (only in a cool environment)

 

For each expansion box I've set up an unRAID VM. One 9300-8e is passed through to every VM. Every VM has 16GB RAM and 4x2 CPUs. All disks in the expansion boxes are mounted in the bare metal server. And here is the problem I will report today in General Support:

 

I can happily copy a 70GB file with rsync from server to server. But if I use cp or MC I bring everything down: 100% CPU on the target server, 100% memory usage on the target server, and ls on the mount points stalls the system. I even waited for an hour before I had to power cycle everything. For files over 30GB I use USB sticks to copy between servers 8-(
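For reference, the kind of rsync call I mean - the host name, paths and direction here are only placeholders:

```bash
# Pull a single large file over SSH with rsync instead of using cp/mc on the SMB mount:
rsync -av --progress root@tower2:/mnt/user/Backup/bigfile.img /mnt/user/Backup/
```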

 

 

2019-03-05: Because I could never find a solution to my "copy large files from bare metal to VM" problem, I changed the copy/move direction for all SMB copy/move commands between bare metal and VMs. Always fetch files from the bare metal server, don't copy/move to the VMs.

 

Edited by hawihoney


@hawihoney I think I'm going to try this too. How's your setup running these days? Any additional advice you'd add?


It's running pretty well. However, there are some things I changed in the meantime:

 

The fans in the JBOD expansion boxes run nearly at full speed all the time (this powerboard has no temperature sensor) and scream like hell. But the JBODs don't need that much cooling, so I removed the two fans in the back. In addition, I replaced the three fans in the fan wall with Supermicro FAN-0074L4 fans (green ones, a direct replacement). However, I would not do that in a warm environment.

 

The "copy large files from bare metal to VM via SMB problem" is still there. I changed copy direction and do fetch files on bare metal from VM now, always, always, always.

 

I do not mount all these drives with Unassigned Devices. I created individual User Scripts for all mount and unmount commands. This way they don't show up on the Main page; I experienced a lot of problems when the Main page refreshed before all mount points had been collected and shown.
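The scripts themselves are trivial - roughly like this, with server name, share and mount point as placeholders (NFS shown here; SMB works the same way with "mount -t cifs" plus credentials):

```bash
#!/bin/bash
# User Scripts: "mount_jbod1" - mount a share of the JBOD VM on the bare metal server.
mkdir -p /mnt/remotes/jbod1_data
mountpoint -q /mnt/remotes/jbod1_data || \
    mount -t nfs jbod1:/mnt/user/data /mnt/remotes/jbod1_data
```

The matching "unmount_jbod1" script simply runs "umount /mnt/remotes/jbod1_data".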

 

Starting the whole thing is a manual activity:

 

1. Physically start the JBODs.

2. Physically start the bare metal server.

3. Do not start VMs automatically on bare metal server (starting two Unraid VMs at once did result in some problems with USB license sticks).

4. Automatically start only those Dockers (bare metal and JBODs) that don't need access to the other servers.

5. On the bare metal server, manually start the VMs one after another and wait until they have completed (JBOD web GUIs are reachable and their arrays are started).

6. Mounts in the VMs to the bare metal server start automatically with the JBOD arrays (mount/unmount commands in User Scripts attached to Array Start/Array Stop).

7. Mounts in bare metal server to JBODs are started manually after the VMs are up and running.

8. Now manually start all those Dockers on the bare metal server that need access to the JBOD mounts.

 

Edited by hawihoney

Helpful tips, @hawihoney, thank you! I would likely have run into some of the same situations you did - much appreciate the advice.
