r4ptor

  1. What short distance? Are you referring to DAC cables?
  2. My current bottleneck is my connection to the server, which I'm planning to eliminate soon. Is there a good dual-port 10GbE SFP+ NIC that works out of the box and isn't too expensive? I looked through the forum and found several suggestions; so far I have these: Chelsio CC2-N320E-SR, Chelsio T320, Mellanox ConnectX-2, Intel X520. Are there any other 10GbE cards that work with Unraid that you'd recommend?
  3. You're right, I misread that. If you want to max out your disks' performance while staying on a budget, a 9207-8i with a RES2SV240 seems to be the way to go. I noticed the 9300-8i is about evenly matched in price; would that give even better performance than the 9207-8i? Also, what is the PCIe connector on the RES2SV240 for? Power, maybe, but can't you power it from the Molex connector on top instead?
  4. That's awesome. But if I'm using the 9201-16i, the speed would be 3000/16 = 187.5 MB/s, whereas if I went with the 9207-8i and a RES2SV240 using dual link, I'd get 275 MB/s. Could that really be the case? Both use PCIe 2.0 x8 (see the first sketch after this list).
  5. Thanks for the fast reply. So it would be better to skip the expander and simply connect 16 drives directly to the 9201-16i. Should I then get 4000/16 = 250 MB/s per drive?
  6. I'm building a new NAS that's going to have 16 or 24 drives, but I can't decide on the components, which is why I could use some recommendations. Primarily it's going to be a NAS, but I also want Plex on it with 1-3 streams. What I have lying around is a Supermicro X11SSH-LN4F motherboard with an i3-6300T CPU, a 500 W PSU, and a 24-bay Norco case (which I might sell for one with fewer bays). The motherboard has a free PCIe 3.0 x8 slot for an HBA; the rest is going to be occupied by an NVMe cache SSD and a dual SFP+ NIC. So how am I going to put this server together with a controller and without a bottleneck?
     1. The LSI SAS 9305-24i is expensive ($500-600), but it's simple plug-and-play, has enough ports for all the drives, and utilizes PCIe 3.0 x8.
     2. On the other hand, the LSI SAS 9201-16i is cheap but has only 16 ports (I'm fine with that); can it handle drives over 4 TB and max out the drives' speed?
     3. If I went with the 9201-16i and added an Intel RES2SV240, could it handle 24 drives at full performance?
  7. You're right, I mixed it up with another card. Will there be a performance bottleneck doing it this way, with a SAS expander? (Just found this page.)
  8. Can't find a RES2CV360 for sale anyway. How about the Intel RES3TV360 with an LSI 9300-8i? Both seem to support 12 Gb/s, and combined they go for $400-420 instead of $620 for a 9305-24i. Is it possible to use one of these just to power the expander?
  9. Thanks for your tip, johnnie.black. I've been reading up on this topic and you seem to be right. I decided to go with a Norco 4324 case, 24 bays with 6 SAS backplanes (1 per row of 4 drives), but I might need some suggestions on a new controller; the 9305-24i is a huge hole in my wallet. Is there a good SAS expander + controller combo that can handle 24 drives with no bottleneck (preferably cheaper, too)? Does the SAS expander need a PCIe slot for data, or could I simply use a riser to power it, since I'm running out of slots on my motherboard?
  10. Sure, the transfer from the cache to the array of data disks will be slow, as johnnie.black stated earlier. But I want 1 GB/s from my PC to the cache; if it then takes 10 minutes for 1 GB to get from the cache to the data disks, that isn't important, for now at least. Are you suggesting that I go for 4x SSDs, then?
  11. Of course, I meant a 1 gigabyte-per-second transfer speed between PC and NAS on a 10-gigabit network. I read somewhere on this forum that it's possible to stripe the cache drives but not the data drives. If that works, then I won't need an NVMe drive but rather 2 regular SSDs. The 150 MB/s you speak of would be the bottleneck on a 1 Gb network, but internally, from cache to array, it would be limited to up to 600 MB/s as per SATA3 (see the second sketch after this list).
  12. I currently have an X11SSH-LN4F lying around that I want to repurpose for Unraid. 20x WD Red SATA 6 Gb/s drives for data and 2-4x Samsung 860 EVO 500 GB SATA 6 Gb/s SSDs are what I'm going to stuff the NAS with. Since the board doesn't have 24 SATA or SAS ports, I have to get a controller. I'm also planning on getting a 10 Gb network, with the goal of a whopping 1 GB/s transfer speed. But that sacrifices one PCIe x8 slot for the NIC, leaving me one x8 and one x4 slot for a controller that Unraid can use. So my questions are:
      1. Looking at the LSI SAS 9305-24i (PCIe x8, SAS 12 Gb/s), is it enough for my setup without a bottleneck?
      2. Is it better to use 2 controllers rather than 1 controller for all the drives if I want 1 GB/s speeds?
      3. Any other recommendations for controllers better suited to my situation?
  13. Thanks for the answer. Is it possible to add 2 extra cache disks, making it a total of 4 disks? What about controllers: is it smarter to use 2 or more for the array, or is a single controller the better option?
  14. I've gone back and forth for a while now on whether I should get a prebuilt NAS or build it myself; you know the process. Now I have decided to build my own. I have a few things that I can use, but there are still some uncertainties about how everything should be set up. Hopefully I can get some answers here.
      - What I'm after is a system for 20-24 drives with at least 2 parity drives; for extra redundancy, is it possible to use 3 or even 4 parity disks?
      - Regarding the controller, I'm looking at an LSI SAS 9305-24i, which works out of the box and supports all 24 drives; is it a good idea to use two or even more controllers rather than a single controller (regarding safety and performance)?
      - I already have 2x 500 GB SSDs that I can use for cache; would it be possible to use them in RAID1 for extra redundancy?
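
A quick back-of-the-envelope sketch of the per-drive figures discussed in posts 4 and 5 above. The link totals are assumptions taken from the posts themselves (~3000 MB/s usable for a PCIe 2.0 x8 HBA, ~4400 MB/s usable for a dual 6 Gb/s SAS2 link to the expander); real-world throughput will vary:

```python
# Per-drive bandwidth when all drives on a shared link stream at once.
# The link totals below are assumptions, not measured values.

def per_drive_mb_s(link_total_mb_s: float, drives: int) -> float:
    """Evenly split a shared link's usable bandwidth across all drives."""
    return link_total_mb_s / drives

# 9201-16i, 16 drives attached directly:
# ~3000 MB/s usable over its PCIe 2.0 x8 host link (figure quoted in post 4).
print(per_drive_mb_s(3000, 16))  # -> 187.5 MB/s per drive

# 9207-8i dual-linked to a RES2SV240:
# 8 lanes x 6 Gb/s SAS2 between HBA and expander ~= 4400 MB/s usable.
print(per_drive_mb_s(4400, 16))  # -> 275.0 MB/s per drive
```

That also addresses post 4's closing question: the two cards are not both PCIe 2.0 parts. The 9207-8i is PCIe 3.0 x8, so its host link has headroom and the dual SAS2 link to the expander becomes the ceiling, while the 9201-16i is capped by its PCIe 2.0 x8 host link.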
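A similar sketch for the 10 GbE cache question in posts 11 and 12, assuming roughly 10% protocol overhead and ~500 MB/s sustained sequential write per SATA SSD (both are assumptions, not measurements):

```python
import math

# What a 10 GbE link can deliver vs. what a striped SATA-SSD cache can absorb.
raw_mb_s = 10 * 1000 / 8        # 10 Gbit/s on the wire = 1250 MB/s
usable_mb_s = raw_mb_s * 0.9    # assume ~10% TCP/SMB overhead -> ~1125 MB/s

ssd_write_mb_s = 500            # assumed sustained write of one SATA3 SSD
                                # (the SATA3 interface itself tops out at 600 MB/s)

ssds_needed = math.ceil(usable_mb_s / ssd_write_mb_s)
print(ssds_needed)              # -> 3: two striped SSDs (~1000 MB/s) fall just short
```

So two striped 860 EVOs would get close to, but not quite, 10 GbE line rate; a third SSD in the stripe, or a single NVMe drive, would cover the gap.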