SpencerJ

How many TB's is your Unraid Server?


632 members have voted


81 posts in this topic

Recommended Posts

On 9/22/2019 at 10:19 AM, jonathanm said:

Huh? If you have SSD data drives mixed with a spinning-rust parity drive in the main array (not cache), you are getting the worst possible combo for performance and data safety.

 

Either have an all-SSD array, parity included, and deal with possible parity sync errors and lack of TRIM, or only put SSDs in the cache pool. Mixing them in the main array means you still have the possibility of parity corruption and no TRIM, plus all writes will be limited by the speed of the parity drive.

 

Hopefully I misunderstood what you were saying.
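A note on the write-speed point above: with Unraid's default read/modify/write mode, every array write also reads and rewrites the matching parity block (XOR out the old data, XOR in the new), so a spinning parity drive caps write throughput even when the data drives are SSDs. A toy single-byte sketch of that parity update in Python:

```python
def update_parity(old_parity: int, old_data: int, new_data: int) -> int:
    """Read/modify/write parity update for one byte:
    XOR out the old data, XOR in the new data."""
    return old_parity ^ old_data ^ new_data

# Parity over three data bytes:
data = [0x11, 0x22, 0x33]
parity = data[0] ^ data[1] ^ data[2]

# Overwrite the middle byte; parity must be read and rewritten too:
old = data[1]
data[1] = 0x55
parity = update_parity(parity, old, data[1])

assert parity == data[0] ^ data[1] ^ data[2]
```

(Unraid's "turbo write" mode avoids the parity read by reading all data disks instead, but the parity disk is still written on every write.)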

Currently I've had no trouble that I can pinpoint to an actual configuration problem. But to clarify: no, the array is all SSD, with a single 4 TB HDD as parity for now. The goal is to go fully SSD eventually. There was an HDD/SSD mix in the array, but I have recently upgraded all data drives to SSD.

 

I'm not a hundred percent clear on what the issue would be with that (besides performance limitations due to the parity drive, which I plan to upgrade as well).

 

Where would the sync errors come from in an SSD prominent system? Is there a lack of TRIM in Unraid? I am not clear on what is being conveyed.

Edited by RogueWolf33

13 minutes ago, RogueWolf33 said:

Where would the sync errors come from in an SSD prominent system? Is there a lack of TRIM in Unraid?

https://unraid.net/blog/unraid-14th-birthday-blog

 

Read the answer to the second question.

 

SSDs in the parity-protected portion of the array are not currently supported. That will change, but for now you are treading in somewhat uncharted waters.


30 TB. Going to be throwing a little more storage in there next week when the new machine arrives, upgrading from my current hodgepodge mess in a Norco 4224 to a Supermicro 24-bay. The case upgrade is mostly just a quality-level thing; the best part is the CPU going from an Athlon II x4 965 to a pair of Xeon 2630 v2's, and from 16 GB to 32 GB of RAM. That, and having 34 SATA ports available for drives; the current machine only has 16 at the moment, more bays than I can hook up.

 

*edit* The new machine will see the 2 TB cache drive join the array, along with an additional 2 TB, a 1 TB, and a 500 GB drive I have lying around (currently preclearing on a spare machine). I'm also considering tossing in a few 2.5-inch drives I have, and will be using 8x 256 GB SSDs to replace the cache drive.

unraidsize.png

Edited by ibixat

1 hour ago, stefan marton said:

You use only 1 disk for parity?

I use 2 parity disks once I go over 14 data disks.

There are good points for both single and dual parity. It's a choice on which way to go.
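Some context on the difference: Unraid's single parity (P) is a byte-wise XOR across all data disks, so any one failed disk can be rebuilt from the survivors plus parity; dual parity adds a second, differently computed syndrome (Q) so two simultaneous failures are recoverable. A toy sketch of the XOR case in Python, with small byte strings standing in for disks:

```python
from functools import reduce

def xor_parity(disks):
    """Byte-wise XOR across equal-length 'disks' (here, plain byte strings)."""
    return bytes(reduce(lambda a, b: a ^ b, block) for block in zip(*disks))

def rebuild_missing(survivors, parity):
    """Rebuild the single missing disk: XOR the surviving disks with parity."""
    return xor_parity(survivors + [parity])

disks = [bytes([1, 2, 3]), bytes([4, 5, 6]), bytes([7, 8, 9])]
parity = xor_parity(disks)

# Lose disk 1, rebuild it from the other disks plus parity:
assert rebuild_missing([disks[0], disks[2]], parity) == disks[1]
```

Dual parity protects against a second drive dying during the rebuild window, which is why it's commonly suggested once the disk count gets into the teens.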

You use only 1 disk for parity?
Yeah, 1 disk for parity. I need a new case; I've run out of room for more drives.

Sent from my SM-G975F using Tapatalk

On 9/23/2019 at 12:58 PM, jonathanm said:

https://unraid.net/blog/unraid-14th-birthday-blog

 

Read the answer to the second question.

 

SSDs in the parity-protected portion of the array are not currently supported. That will change, but for now you are treading in somewhat uncharted waters.

This mentions parity, I agree. However, if the parity drive is still mechanical and the array drives are SSD, I only see a write-speed hit, if I am not mistaken. I am currently trying to confirm that my drives support DZAT, to be sure I will not run into issues in the future. I appreciate you putting me on this track so I can ensure I do not run into any issues.
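For anyone else chasing this: DZAT is "Deterministic Read Zero After TRIM", meaning trimmed ranges deterministically read back as zeros, which is what keeps parity consistent. `hdparm -I /dev/sdX` prints a drive's TRIM capabilities; here is a small Python sketch that checks for the relevant line (the sample text below is illustrative, not captured from a real drive):

```python
def supports_dzat(identify_output: str) -> bool:
    """True if `hdparm -I` output advertises deterministic zeros after TRIM."""
    return "Deterministic read ZEROs after TRIM" in identify_output

# On a live box you would feed it real output, e.g. from:  hdparm -I /dev/sdX
# (device name is a placeholder)
sample = """
   *    Data Set Management TRIM supported (limit 8 blocks)
   *    Deterministic read ZEROs after TRIM
"""
assert supports_dzat(sample)
```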


I’ve got 106 TB of usable space, plus 32 TB as double parity. Some time ago I got rid of my Supermicro 5-in-3 cages and replaced them with 4-in-3 cages with 120 mm fans (much quieter). I had to reduce the number of array drives, so it took some time to replace the previous drives with larger ones. I’m going for 16 TB drives at the moment (2 of them used for parity). The smallest one is 8 TB.

 

I’m planning to replace the current drives with larger ones when I need more space. I don’t plan any kind of case upgrade to increase the number of drives again.

Edited by stomp


my little box

image.png

 

I have 3 more 12 TB drives, but I am in the middle of migrating everything into a rack. This is the new case I will use:

image.png

 

 


And the little 1U that you can see basically does not do a lot at the moment, but it can be spooled up to take up some slack...

I just now need a way to manage them both at the same time, to make it more like FreeNAS (which has issues with bonded ports and Threadripper 2990s).

Screenshot 2019-12-11 at 11.41.40.png


Currently 41 TB + 1.6 TB cache. Soon it will be 45 TB; I'm waiting on a SAS-to-SATA cable...

It will probably run out of space within 6 months...

Schermafdruk van 2019-12-11 16-22-34.png

 

IMG_20191015_112029.jpg

 

The pic is missing the 3rd GPU (GTX 1050 Ti); I didn't take pictures when installing it...

Yeah, the cable management could be much better; I will improve it soon... (probably not 😇)

 


3x Supermicro SC846E16 chassis (1x bare metal, 2x JBOD expansion)

 

Bare metal (Nvidia Unraid):

1x Supermicro BPN-SAS2-EL1 backplane

1x Supermicro X9DRi-F Mainboard

2x Intel E5-2680 v2 CPU

2x Supermicro SNK-P0050AP4 Cooler

8x Samsung M393B2K70CMB-YF8 16GB RAM --> 128 GB

1x LSI 9300-8i HBA connected to internal backplane (two cables)

2x LSI 9300-8e HBA connected to JBOD expansions (two cables each)

2x Lycom DT-120 PCIe x4 M.2 Adapter

2x Samsung 970 EVO 250GB M.2 (Cache Pool)

1x One-slot NVIDIA 1050ti graphics card

 

For each JBOD expansion (Unraid VM):

1x Supermicro BPN-SAS2-EL1 backplane

1x Supermicro CSE-PTJBOD-CB2 Powerboard

1x SFF-8088/SFF-8087 Slot Sheet

2x SFF-8644/SFF-8088 cables (to HBA in bare metal)

 

3x Unraid USB license sticks from different companies, for easy passthrough to VMs.

 

For each JBOD expansion box I've set up an Unraid VM. A 9300-8e is passed through to every Unraid VM. Every Unraid VM has 16 GB RAM and 4x2 CPUs. All disks in the expansion boxes are mounted (SMB) on the bare-metal server.

 

 

*** EDIT: *** Mostly used stuff from eBay. I even buy used hard disks ...

 

 

Tower.jpg

TowerVM01.jpg

TowerVM02.jpg

Towers.jpg

Tower binnen.jpg

TowerVM0x binnen.jpg

Edited by hawihoney

12 hours ago, hawihoney said:

3x Supermicro SC846E16 chassis (1x bare metal, 2x JBOD expansion)

[...]

Whoa, those chassis look deeper than 4U. How many U are they, or is it just the camera angle (last pic particularly)?

13 hours ago, hawihoney said:

3x Supermicro SC846E16 chassis (1x bare metal, 2x JBOD expansion)

[...]

Lovely bit of kit you have....

So if I am getting this correctly, for the expansion bays you have created a VM on the primary server and then passed through the required hardware plus the expansion hardware... Is this simply to get around the 30-device limit for the array? If so, I can see the logic, but it would seem an awful waste of resources on the primary server, and of licensing costs...

 

Forgive my ignorance; I am new here and really want to understand the thinking and design principles of this build. For an array of that size I personally would have gone down the route of either ZFS, following the guide Wendell did for Gamers Nexus, or creating JBOD pools of disks using a RAID adapter (I have the 9365-28i). Using the 9365-28i I have to pass the devices through as JBODs anyway, since there is no IT mode (nor would I want to downgrade a £1k card), and it works. If I required that many disks I would have gone the same route of JBOD-ing the drives, so that 3 physical drives look like 1 drive to the OS...

Sorry for the waffle, just thinking out loud, but I honestly want to hear your thoughts on choosing this method..


@je82: It's standard 4U. I'm not good at photography ... The last picture is one of the two JBODs. The picture before that is the server.

 

@ghoule: Yes, one VM with one HBA passed through for each attached JBOD. Previously they were three complete servers, but that was a waste of material. A single HBA is more than enough for one JBOD. To be honest, even the HBAs for the JBODs are not necessary; it's only the drive limit and the lack of multiple array pools that drove my decision. The day multiple array pools are implemented in Unraid, I will throw out the two 9300-8e HBAs, use the expander feature of the backplanes, and make that beast one single shiny server.

 

The main server does see all individual disks of the JBODs. I do not export user shares to the server, because all disks are usually spun down, and spinning up a whole user share for a single file was a show stopper. I know about the disk cache plugin, but reading 60 disks regularly is no option for me. So all applications work with individual disks and build their own pools/shares.
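For anyone wanting to replicate the per-disk SMB mounts, each remote disk can be sketched as an /etc/fstab-style entry (the hostname, share name, and credentials path below are placeholders, not the actual setup; Unraid users often do this via the Unassigned Devices plugin instead):

```
# Hypothetical entry: mount one JBOD-VM disk share on the bare-metal host
//towervm01/disk1   /mnt/remotes/towervm01_disk1   cifs   credentials=/root/.smbcreds,vers=3.0,_netdev   0   0
```

Mounting per disk rather than per user share is what lets a single file access spin up only the one drive that holds it.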

 

BTW, this combination has been running without any problems for over a year. It's cool to see three parity checks running in parallel on one server ...

 

 

Edited by hawihoney


I really thought OP was being conservative with those storage ranges until I saw the results.

I'm at 74 TB, but I've got 5 drives waiting (another 37 TB, probably: 4x 8 TB and 1x 5 TB, though I highly doubt I ever use the 5 TB other than to replace a dead drive) and no more space.

I've now got a 24-bay server for a second Unraid build so I can continue adding more drives.

 

12 TB drives are now hitting $180 on sale, so I really thought there would be very few responses in the 2-20 TB range. The amount of storage you can fit per drive is getting serious.

  

On 9/4/2019 at 9:03 AM, ashman70 said:

Wow, nice, too rich for my blood though.

You can get just the chassis on eBay relatively cheap. Actually, the chassis and some parts (pretty sure it was fully working). I would have bought one, but I wanted the ease of the 24-bay configuration and it was an extra $200 for 6 more bays. I figured worst case, I'll get it and move my current Unraid server's 15 drives into it, giving me 54 bays total, and if that's not enough then I don't know what's wrong with me.

Unless you're a business and can't buy used or build your own, going to a shop like that directly is like going to a boutique gaming-PC website: you're paying a huge markup for a "boutique" build when you could easily buy the parts and put it together yourself for far less.

 

Edited by tential

