RogueWolf33 Posted September 23, 2019 On 9/22/2019 at 10:19 AM, jonathanm said: Huh? If you have SSD data drives mixed with a spinning-rust parity drive in the main array (not cache) you are getting the worst possible combo for performance and data safety. Either have an all-SSD array, parity included, and deal with possible parity sync errors and lack of TRIM, or only put SSDs in the cache pool. Mixing them in the main array means you still have the possibility of parity corruption and no TRIM, plus all writes will be limited by the speed of the parity drive. Hopefully I misunderstood what you were saying. Currently I've had no trouble that I can pinpoint to an actual configuration problem. But to clarify: no, I have an all-SSD array, with a single 4 TB HDD parity drive for now. The goal is to go full SSD eventually. The array used to be an HDD/SSD mix, but I have recently upgraded all data drives to SSD. I'm not a hundred percent clear on what the issue with that would be (besides the performance limit from the parity drive, which I plan to upgrade as well). Where would the sync errors come from in a mostly-SSD system? Is there a lack of TRIM in Unraid? I am not clear on what is being conveyed.
JonathanM Posted September 23, 2019 13 minutes ago, RogueWolf33 said: Where would the sync errors come from in a mostly-SSD system? Is there a lack of TRIM in Unraid? https://unraid.net/blog/unraid-14th-birthday-blog Read the answer to the second question. SSDs in the parity-protected portion of the array are not currently supported. That will change, but for now you are treading in somewhat uncharted waters.
ibixat Posted September 24, 2019 30 TB. Going to be throwing a little more storage in there next week when the new machine arrives, upgrading from my current hodgepodge mess in a Norco 4224 to a Supermicro 24-bay. The case upgrade is mostly just a quality bump; the best part is the CPU going from an Athlon II X4 965 to a pair of Xeon E5-2630 v2s, and going from 16 GB to 32 GB of RAM. That, and having 34 SATA ports available for drives; the current machine only has 16 at the moment, fewer ports than bays I can hook up. *edit* The new machine is going to see the 2 TB cache drive join the array, as well as an additional 2 TB, 1 TB, and 500 GB drive I have lying around (currently preclearing on a spare machine), and I'm considering tossing in a few 2.5-inch drives I have as well. I will be using 8x 256 GB SSDs to replace the cache drive.
stefan marton Posted September 25, 2019 On 9/24/2019 at 12:20 PM, tabac1987 said: 62TB You use only 1 disk for parity?
HK-Steve Posted September 25, 2019 1 hour ago, stefan marton said: You use only 1 disk for parity? I use 2 parity drives once I go over 14 data disks. There are good points for both single and dual parity; it's a choice on which way to go.
tabac1987 Posted September 27, 2019 You use only 1 disk for parity? Yeah, 1 disk for parity. I need a new case; I've run out of room for more drives. Sent from my SM-G975F using Tapatalk
RogueWolf33 Posted October 6, 2019 On 9/23/2019 at 12:58 PM, jonathanm said: https://unraid.net/blog/unraid-14th-birthday-blog Read the answer to the second question. SSDs in the parity-protected portion of the array are not currently supported. That will change, but for now you are treading in somewhat uncharted waters. This mentions parity, I agree. However, if the parity drive is still mechanical and the array drives are SSD, I only see a write hit, if I am not mistaken. I am currently attempting to confirm that my drives support DZAT (Deterministic read Zeroes After TRIM) to be sure I will not run into issues in the future. I appreciate you putting me on this track.
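In case it helps anyone else checking the same thing: TRIM and DZAT support show up in the identify data that hdparm prints. This is just a sketch; the device path /dev/sdb is a placeholder, and the sample text below is a hypothetical excerpt of hdparm output, not from my actual drives.

```shell
# On a real box, inspect the drive directly (run as root):
#   hdparm -I /dev/sdb | grep -i trim
# Here we grep a hypothetical excerpt to show the two lines to look for:
sample='   *    Data Set Management TRIM supported (limit 8 blocks)
   *    Deterministic read ZEROs after TRIM'

if echo "$sample" | grep -qi 'TRIM supported' &&
   echo "$sample" | grep -qi 'Deterministic read ZEROs after TRIM'; then
  echo "DZAT: supported"
else
  echo "DZAT: not reported"
fi
```

If only the first line shows up, the drive TRIMs but does not guarantee zeroes afterwards, which is the case that could throw parity off.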
stomp Posted October 19, 2019 I've got 106 TB of usable space, plus 32 TB as double parity. Some time ago I got rid of my Supermicro 5-in-3 cages and replaced them with 4-in-3 cages with 120 mm fans (much quieter). I had to reduce the number of array drives, so it took some time to replace the previous drives with larger ones. I'm going for 16 TB drives at the moment (2 of them currently used for parity); the smallest one is 8 TB. I plan to replace current drives with larger ones when I need more space, rather than doing any kind of case upgrade to increase the number of drives again.
ijuarez Posted October 21, 2019 27 minutes ago, denellum said: 66TB pimpdaddy... nice
SavellM Posted December 11, 2019 128 TB usable. Only using 25 TB currently. Plenty of room to grow.
testdasi Posted December 11, 2019 1 hour ago, SavellM said: 128 TB usable. Only using 25 TB currently. Plenty of room to grow. Need pics!
je82 Posted December 11, 2019 My little box. I have 3 more 12 TB drives, but I am in the middle of migrating everything into a rack; this is the new case I will use.
ghoule Posted December 11, 2019 Here is a quick update on mine.
bonienl Posted December 11, 2019 Congrats. You have some serious box there.
mrbilky Posted December 11, 2019 1 hour ago, ghoule said: Here is a quick update on mine. Nice!
ghoule Posted December 11, 2019 And the little 1U that you can see basically does not do a lot at the moment, but can be spooled up to take up some slack. Now I just need a way to manage them both at the same time, to make it more like FreeNAS (which has issues with bonded ports and Threadripper 2990s).
SavellM Posted December 11, 2019 5 hours ago, testdasi said: Need pics! Here you go:
sjaak Posted December 11, 2019 Currently 41 TB + 1.6 TB cache; soon it will be 45 TB, waiting on a SAS-to-SATA cable... It will probably run out of space within 6 months... The pic is missing the 3rd GPU (GTX 1050 Ti); I didn't take pictures when installing it... Yeah, the cable management could be much better. Will improve it soon... (probably not 😇)
je82 Posted December 12, 2019 12 hours ago, hawihoney said:
3x Supermicro SC846E16 chassis (1x bare metal, 2x JBOD expansion)
Bare metal (Nvidia Unraid):
1x Supermicro BPN-SAS2-EL1 backplane
1x Supermicro X9DRi-F mainboard
2x Intel E5-2680 v2 CPU
2x Supermicro SNK-P0050AP4 cooler
8x Samsung M393B2K70CMB-YF8 16 GB RAM --> 128 GB
1x LSI 9300-8i HBA connected to internal backplane (two cables)
2x LSI 9300-8e HBA connected to JBOD expansions (two cables each)
2x Lycom DT-120 PCIe x4 M.2 adapter
2x Samsung 970 EVO 250 GB M.2 (cache pool)
1x one-slot NVIDIA 1050 Ti graphics card
For each JBOD expansion (Unraid VM):
1x Supermicro BPN-SAS2-EL1 backplane
1x Supermicro CSE-PTJBOD-CB2 power board
1x SFF-8088/SFF-8087 slot sheet
2x SFF-8644/SFF-8088 cables (to HBA in bare metal)
3x Unraid USB license sticks from different companies for easy passthrough to VMs. For each JBOD expansion box I've set up an Unraid VM. There's a 9300-8e passed to every Unraid VM. Every Unraid VM has 16 GB RAM and 4x2 CPUs. All disks in the expansion boxes are mounted (SMB) in the bare-metal server. *** EDIT: *** Mostly used stuff from eBay. I even buy used hard disks ...
Woah, those chassis look deeper than 4U. How many U are they, or is it just the camera angle (the last pic particularly)?
ghoule Posted December 12, 2019 13 hours ago, hawihoney said: 3x Supermicro SC846E16 chassis (1x bare metal, 2x JBOD expansion) [full spec list quoted in the previous reply] Lovely bit of kit you have. So if I am getting this correctly: for the expansion bays you have created a VM on the primary server and then passed through the required HBA plus the expansion hardware. Is this simply to get around the 30-device limit for the array? If so I can see the logic in it, but it seems an awful waste of resources on the primary server, and of licensing costs. Forgive my ignorance; I am new here and really want to understand the thinking and design principles of this build.
As I am looking at it, I personally would have gone down the route of either ZFS for that size of array, following Wendell's guide that he did for Gamers Nexus, or creating JBOD pools of disks using a RAID adapter (I have the 9365-28i). Using the 9365-28i I have to pass through the devices as JBODs anyway, as it has no IT mode (nor would I want to downgrade a £1k card), and it works. If I required that number of disks I would have gone down this route myself, JBOD'ing the drives so that 3 physical drives look like 1 drive to the OS. Sorry for the waffle, just thinking out loud, but I do honestly want to hear your thoughts on choosing this method.
hawihoney Posted December 13, 2019 @je82: It's standard 4U, I'm just not good at photography ... The last picture is one of the two JBODs; the picture before that is the server. @ghoule: Yes, one VM with one HBA passed through per attached JBOD. Previously they were three complete servers, but that was a waste of material. A single HBA is more than enough for one JBOD. To be honest, even the HBAs for the JBODs are not strictly necessary, so it's only the drive limit and the lack of multiple array pools that drove my decision. The day multiple array pools are implemented in Unraid, I will throw out the two 9300-8e HBAs, use the expander feature of the backplanes, and make that beast one single shiny server. The main server does see all the individual disks of the JBODs. I do not mount the VMs' user shares on the main server: all disks are usually spun down, and spinning up a whole user share for a single file was a show stopper. I know about the disk cache plugin, but reading 60 disks regularly is no option for me. So all applications work with individual disks and build their own pools/shares. BTW, this combination has been running without any problems for over a year. It's cool to see three parity checks running in parallel on one server ...
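To illustrate the SMB approach, here is a rough sketch of how mounting every JBOD disk on the bare-metal host could look. The hostnames (jbod1, jbod2), share names (disk1..disk24), and mount options are guesses for illustration, not the actual setup; the loop only echoes the commands, so nothing is mounted until you drop the echo.

```shell
# Sketch: mount each disk share exported by the two JBOD VMs on the
# bare-metal server. Hostnames, share names, and options are assumptions.
# 'echo' prints the commands instead of executing them; remove it to run.
for host in jbod1 jbod2; do
  for i in $(seq 1 24); do
    dir="/mnt/remotes/${host}_disk${i}"
    echo mkdir -p "$dir"
    echo mount -t cifs "//${host}/disk${i}" "$dir" -o guest,vers=3.0
  done
done
```

With 24 bays per JBOD that prints 96 commands (a mkdir plus a mount per disk), which makes it easy to review before running anything for real.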
tential Posted December 13, 2019 I really thought OP was being conservative with those storage ranges until I saw the results. I'm at 74 TB, but I've got 5 drives waiting (probably another 37 TB: 4x 8 TB and 1x 5 TB; I highly doubt I ever use the 5 TB other than to replace a dead drive) and no more space. I've got a 24-bay server now for a second Unraid server so I can continue adding more drives. 12 TB drives are now hitting $180 on sale, which is why I really thought there would be very few people in the 2-20 TB range. The amount of storage you can fit per drive is getting serious. On 9/4/2019 at 9:03 AM, ashman70 said: Wow, nice, too rich for my blood though. You can get just the chassis on eBay relatively cheap. Actually, the chassis and some parts (pretty sure it was fully working). I would have bought one, but I wanted the ease of the 24-bay configuration and it was an extra $200 for 6 more bays. I figured worst case I'll just get it and move my current Unraid server, which has 15 drives, into it, giving me 54 bays total; if that's not enough then I don't know what's wrong with me. Unless you're a business that can't buy used or build your own, going to a shop like that is like going to a boutique gaming-PC website: you're paying a huge markup for a "boutique" build and can easily buy the parts and put it together for far less.
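Those sale prices work out to a pretty striking cost per TB. A quick one-liner to do the math (the $180 / 12 TB figures are the sale prices mentioned above):

```shell
# Cost per TB for a $180 12 TB sale drive.
price=180
tb=12
awk -v p="$price" -v t="$tb" 'BEGIN { printf "$%.2f per TB\n", p / t }'
# prints: $15.00 per TB
```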
Pauven Posted January 2, 2020 On 9/5/2019 at 8:39 AM, jonathanm said: Sure. 2,000 Blu-rays backed up at 50 GB/disc. When you have well over $20,000 worth of Blu-ray discs, surely you would want a backup of your data, right? 🤣 Cool, you've got the right idea. Just wondering if you are fully leveraging it with a front end so you don't have to insert discs. Essentially, your discs are your backup, and your array is your media server. I've got 1800+ movies stored away in boxes in the basement (my backup) and watch everything directly from my array. Using my own GUI front end, of course... 😉
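For anyone sizing an array for the same use case, the arithmetic behind jonathanm's quote is easy to sketch (2,000 discs and 50 GB per disc are his figures; decimal units assumed):

```shell
# Rough capacity needed to back up a Blu-ray collection at full quality.
discs=2000
gb_per_disc=50
echo "$(( discs * gb_per_disc / 1000 )) TB needed"
# prints: 100 TB needed
```

Which lines up with the ~100 TB+ arrays appearing in this thread.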