Tybio

Everything posted by Tybio

  1. Had to dig this up, the best description around for options with Cache pools: https://lime-technology.com/forums/topic/46802-faq-for-unraid-v6/?do=findComment&comment=480421
  2. You can do many RAID options with cache: 0/1/5/6, and I'm not sure about 10 but I think so. There are some issues with RAID5 in the driver space ATM, so I'd go 0 or 1 for multiple disks. I use mine in RAID0 for the size boost, but I don't really mind if the cache takes a hit every 4-5 years from a disk failure ;).
  3. For cache, I'd just back up the required things to the array and do RAID0. Why have all that protected space and then create another bucket of protected space on the device you want the most performance out of? I totally get why some people do, just how I've come down in my thinking...
  4. Drat, wonder if something like this would work: https://www.amazon.com/Noctua-NA-SAV3-silicone-anti-vibration-mounts/dp/B071PFPFV2/ref=pd_rhf_gw_p_img_6?_encoding=UTF8&psc=1&refRID=EBXEXZ7AGGW2QAXP4MH1
  5. If you can't reach an IP, ICMP will sometimes get a response from the gateway instead. Check the ARP table on your router or on another system on the network.
  6. True enough, mine are not exposed so I tend to just check every now and then. I believe you can even check from the UI, but anyway, the code is not hidden behind a support contract; anyone can download it.
  7. I don't think that is the case for the small business switches; at least I have no contract and haven't needed one for upgrades...but then, the software is so well baked that I also don't need to upgrade. See for yourself, no login needed to download: https://www.cisco.com/c/en/us/support/switches/sg300-10-10-port-gigabit-managed-switch/model.html#~tab-downloads
  8. Any hint on what screws to use? Or just take one apart, head to Home Depot, and rummage around for something that looks like it works?
  9. These come with a web UI that isn't horrid. However, they are so feature-rich that the UI has a LOT of options: QoS, VLANs, SNMP, 802.1x, bonding...the list goes on and on...so they aren't "simple" either.
  10. So I'm now on the path. Thanks for this thread, as it gave me a lot of ideas, and my goals were the same: to move from a 24-bay rack to a server that will fit in my office. I'm using the guts of my current server and just ordered the case/drive bays and fans. A little expensive for downsizing, but I'm paying a premium for making it as quiet as possible. The one fear I have is replacing the fans on the cages; I'm going to get them and see how they are by default before I make any decisions on that front. Is there a narrow option for the 80mm on the iStar cages that will fit out of the box? Or is it best to just go with a normal 80mm Noctua? Edit: I found this on Amazon. Can someone with an iStar cage that's mentioned in this thread comment on the depth of the default fan? Is it 15mm by chance? https://www.amazon.com/Noiseblocker-NB-BlackSilentPro-PC-P-Ultra-Silent/dp/B0083A0BIA/ref=sr_1_1?ie=UTF8&qid=1398784528&sr=8-1&keywords=noiseblocker+pcp
  11. I really like the Cisco small office switches, they are expensive but have almost the same feature set as the enterprise switches, and they are /tanks/: https://www.amazon.com/Cisco-SG300-10-10-port-Gigabit-SRW2008-K9-NA/dp/B0041ORN6U/ref=sr_1_6?s=electronics&ie=UTF8&qid=1525268104&sr=1-6&keywords=cisco+switch
  12. I have to admit, I picked one up. I'm in a 24 bay 4U right now and with drive sizes where they are I've decided to go "backwards" to a nice setup like the one in this thread rather than have a massive block of server in the basement sounding like a jet engine :). I'm now debating on which 5-in-3 to use here, I like the trayless ones in this post, but I don't want to have to deal with the fan replacement on them...so I'm thinking of just going with the old stand-by of the SuperMicro ones. The only thing I could wish is that it had /one/ more 5.25 bay for one of these: https://www.amazon.com/dp/B071NP2M6L/ref=sspa_dk_detail_0?psc=1&pd_rd_i=B071NP2M6L&pd_rd_wg=puaSY&pd_rd_r=0K9QZ54NFZH0B2TMED5J&pd_rd_w=Y69mw&smid=A1GQQIQD0YQTUY
  13. FYI all, the case has a nice discount on Newegg: $148 off for a final price of $222 (list is $369). The code is good until 4/30: NAFDECLIA422
  14. All, I'm running a huge 24-bay rack mount right now, and with larger drives I'm using less and less of it over time (down to 10 disks now, and most of those are 6TB). I'm pondering decommissioning my monolith and switching to a small form factor desktop with some JBOD enclosures...this should help with noise (I hope), let me separate the disk storage from the system, and allow for more effective builds without trying to fit everything in one case. I found this: https://www.amazon.com/Sans-Digital-TowerRAID-Modularize-TR8X6G/dp/B00NOTUWAC It looks like two of those would get me 16 drives, and I could put cache/VM/non-array drives in an SFF case like a Node or something and downsize in amazing ways. I'm wondering if anyone has done this? With the -16e cards being so bloody cheap, I'm struggling to see why I would go with my first plan...which was a PC-D600 with 5-in-4s.
  15. 1080p gaming is almost trivial compared to transcoding 4k Blu-rays. Hit up the PassMark scores: figure a 1080p transcode takes 2,000 passmarks per stream, and 4k is 4 times more data...and with H.265 being a BEAR to process, you are looking at north of 15-20k passmarks to do one stream at BEST. The hardware offload for H.265 is just not there yet, and the data in a 4k encode is massively larger than a 1080p encode. Honestly, to transcode one 4k stream you are looking at a minimum of an i9 or a Threadripper 1950X...and even then some 4k Blu-rays will buffer and have issues. Perhaps one of the newest high-end Xeons in a dual-CPU setup could give you reliable transcoding, but at the moment I haven't heard of anyone having success with it. There are threads about this on this forum and the Plex forum; I'd search around before making it a requirement. Outside of that, everything is rather tame. I'd say anything that gets you 12k+ passmarks and has enough cores to divide between the VMs should be a good start, perhaps as low as 10k. However, passmarks also come off the top for each 1080p transcode, so look at the PassMark scores and figure out what "CPU worth of power" you want for each VM, add up the total with the 4k transcode budget, and leave 1-2k for the OS; that should help you isolate the level of CPU you want to start with.
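The budgeting described above can be sketched as a quick back-of-the-envelope calculation. The per-stream numbers are the rough estimates from the post (not measured benchmarks), and the function name is just for illustration:

```python
# Rough CPU sizing using the per-stream PassMark estimates from the post.
# These are ballpark figures, not benchmarks.
PASSMARK_1080P = 2_000   # per simultaneous 1080p transcode
PASSMARK_4K = 16_000     # per 4k H.265 transcode (post estimates 15-20k at best)
PASSMARK_OS = 2_000      # headroom left for Unraid itself

def required_passmarks(streams_1080p, streams_4k, vm_budgets):
    """Total PassMark score to target for a build."""
    return (streams_1080p * PASSMARK_1080P
            + streams_4k * PASSMARK_4K
            + sum(vm_budgets)
            + PASSMARK_OS)

# Example: two 1080p transcodes, one 4k transcode, two VMs at 3k each
print(required_passmarks(2, 1, [3_000, 3_000]))  # 28000
```

Plugging in your own stream counts and per-VM budgets gives a quick sanity check against a CPU's published PassMark score before buying.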
  16. I just upgraded to this one: https://www.asus.com/us/Networking/RT-AC86U/ I have a RJ45 connection from Fios, so I'm not using the coax MOCA junk, which might be one of the reasons it is so trivial for me.
  17. I use an Asus router with my Fios connection; it has a built-in VPN and works much more reliably than the Actiontec crap they sell. It also costs a lot less, but I guess for non-techies being able to call Verizon is a big benefit. I just found it to be useless most of the time.
  18. Ok, the limitation is the writes to the array, not the VM. What I did was get an SSD and mount it outside the array with the Unassigned Devices plugin; that left my cache free for use, and the VM can move files /much/ more quickly to the array. That doesn't explain why the reads from the array to the VM are so slow; might need someone with a bit more experience to chime in at this point. It feels like there is something simple going on, just having trouble seeing it ATM.
  19. I don't think so, those speeds sound like you are pushing directly to the array rather than to the cache disk. When you do the file copy, do you see the Parity disk writes incrementing? Did you mount the user0 version of the folder? Is "use cache" enabled for the share you are using? Are you using a disk share to test? (This would bypass the cache disk and result in direct writes to the protected array).
  20. The only way to really test is the VM to the Cache. Everything else will be limited by the write operations of parity. Your network will not matter, if this is all local from a VM to the array then it will use a virtual adapter and not go over the wire.
  21. Very cool! Would love to hear how it goes and what hardware you are using :).
  22. Did you put the 7920X in your server? Or is it running in a desktop to the side?
  23. Indeed, so only during checks or rebuilds, etc. I'm thinking my next server will have 15 drives in the array max, and the cache/extra drives will go on the MB controller...so this solution would save me from getting a big LSI card, or two of the smaller ones. Then I can dual-link it and have no limitation in the chain (not that it would really matter much!). Thanks for the info.
  24. So that would mean 110MB/s per channel on single link (2200/20). Is that the right way to look at it?
  25. Are you dual-linking it, CHBMB? I've been thinking about going this route, but the larger Intel expander is too expensive, and I wasn't sure the smaller one would work single-linked...if it is dual-linked, then the number of ports becomes a sticking point (4 or 16 drives). I'm just not sure how the math works out for spinners, single vs dual.