Everything posted by heffe2001

  1. The main reason I went with the Areca card initially was the port density per dollar of the older cards. I didn't intend to use any of the raid functionality (and didn't for quite a while). Once the 8tb Archive drives came out, and I bought a few, I wanted a faster alternative to using one for parity, and I had 2 WD Red 4tb's here. I set them up in a striped array as a single 8tb drive and use it as the parity drive on the system. I could just buy a faster 8tb drive now, but at the time the Archives were the only game in town (and the 8tb non-archive drives are still somewhat pricey). The nice thing about Unraid is it'll take the drives either way, passed through or in an array.
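     Purely for illustration, if you were building that same 2x4tb stripe in plain Linux software RAID instead of in the Areca firmware, it'd look something like the line below (sdX/sdY are placeholders, and whether Unraid would accept an md device in its array is a separate question; mine is done entirely on the card):
     # stripe two 4tb drives into one ~8tb block device (wipes whatever is on them)
     mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdX /dev/sdY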
  2. I'm currently using an Areca card with 2 4tb WD Red drives in a RAID 0 array for my parity drive in my main system, with 8 other drives in pass-thru mode on the card. Unraid sees the raid array as a single 8tb drive (I did this due to the speed of the 8tb drives I'm using, which are the Seagate Archive drives). The system currently has 35tb in it, but I have 2 more 8tb drives to add when I move it all over to my new box (Cisco UCS C200 M2 with a Dell MD1000 raid array, just waiting on my drive trays to show up). The Areca cards work pretty well with Unraid (I've had no troubles), but aren't 100% supported. You'll need to check the thread here about how to get it set up, and you won't get accurate drive temp readings or proper spinup/spindown management from Unraid (but what's built into the card does work). I'm moving to a different card on the new system since I need external ports for the MD1000 (LSI 9201-16e), but the Areca has worked well for me for the last 2 years or so. Just wish it had better support in Unraid.
  3. Use code EMCRFBJ24 for an extra $10 off the Newegg price. Just ordered 2 for work. With our Premier account, got them for $339.98 shipped..
  4. The LSI SAS9201-16e card would definitely be a better purchase, and won't cost you much more. It supports drives bigger than 2tb as well.. If you're wanting to do a 30-drive Unraid system, you really should replace that motherboard with something more suited to it. Personally, I wouldn't void my warranty by cutting part of a 1x slot off just to fit a card.
  5. NewEgg, socket 1151 boards with 2+ PCI-e 16x slots: all of those boards have 2 or more PCI-E 16x slots on them, so there's plenty of boards that do. My first Unraid box was (and currently still is) running on an 'enthusiast' board as opposed to a 'server'-class board. My next system (that's in process) is running on server-grade hardware, with 1 8x slot and 1 16x slot, plus an onboard 4-port SAS card (plus an additional 4-port plug going to the onboard SATA controller). A good server-grade motherboard that will do what you need (and includes IPMI, which is certainly handy in an Unraid setup): SUPERMICRO MBD-X11SSL-F-O. This board has 2 8x slots and 1 16x, which should give you the 30+ drives you're wanting (with 2 of the cards suggested above) and STILL leave you a 16x slot you can use for video, or additional drives down the road..
  6. Lol, you're probably right. I do remember reading an article a while back when we first got the MD1000 about the interposers, and remember reading something about them being the cause of speed issues, but what I didn't remember was the post that challenged that assumption. It's something to do with SAS drives being multi-ported where SATA drives are single-ported, and SAS drives being full-duplex where SATA drives aren't. At least that's what I had in my notes from the research we did at the time. All that being said, I got my 9201-16e, and it appears to work fine in my Cisco C200 M2. Now to score another cheap MD1000 and I'll be ready to move from my old box to my new one..
  7. It's not getting the full bandwidth because I'm betting you aren't using the interposer cards.
  8. Just around half the bays, but probably a whole lot quieter, lol. The MD1000 sounds like 3 vacuums running at once when it's at full fan speed, lol.
  9. Yeah, it's just a storage box without any sort of CPU. Has redundant power supplies, and most times redundant interfaces on the back (or you can split the array in it into 2 halves, one with 8 drives and one with 7, each set controlled by one of the rear controllers; that's probably how I'll use it, with each rear controller connected to a different port on the 8e card). If I remember correctly, you can chain 3 of them together (that may be an MD3000 + 2x MD1000's, can't exactly remember). I just wish they offered it in a tower version instead of just a rack-mount version. Had to 3d print a set of feet for it to sit vertically at our office to use with a T610 Hyper-V box that needed more drives..
  10. Yep, that's what I need for my situation, not sure about the OP though. These Cisco C200 M2 boxes are pretty nice for what they are: 1u chassis with 2 X5650's, capable of up to 192gb ram, with 4 hot-swap bays. If you use the onboard 1068e controller you're limited to 2tb drives per slot up front, 8tb max (6tb with parity), but you can always put a different controller in the 8x PCIe slot and plug the front drives into that, and use a controller with external ports in the 16x slot going to something like an MD1000/MD3000 external chassis. That'd give you an additional 15 slots for drives and, depending on what controller you use, the larger 4tb+ drives, pushing upwards of 150tb depending on the drives used..
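      Rough back-of-envelope on those numbers, purely for illustration (the 10tb figure for the external bays is just an assumption to show where ~150tb comes from):
      echo "front bays: $((4 * 2))tb raw on the 1068e, $((4 * 2 - 2))tb usable with one drive as parity"
      echo "MD1000:     $((15 * 10))tb with fifteen 10tb drives"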
  11. Yeah, I'm looking at the 9200-8e at the moment, still relatively cheap too. I've got an Areca ARC-1231ML in my current server that works great with this setup (I have 2 4tb WD Red drives in a stripe set for parity, and several 8tb Seagate drives plus a couple 4tb's in the array, with a 300gb Raptor for cache and a 512gb SSD for all my docker stuff, everything but the SSD connected to the Areca card). I'm contemplating using 4 2tb Reds for parity on the new setup (using the onboard 1068e controller), and the external box containing all the 8tb's (plus a couple new ones, I'm running low on space at the moment, lol).
  12. Just noticed myself that it's basically a single card with 2 1068e's built on. Guess it won't work for my current application (need it to run several 8tb drives in an MD1000 chassis). Good thing they aren't expensive, lol. Oh, and the 1068e controllers ARE well supported with Unraid, I'm using one in a Cisco C200 M2 at the moment for testing, and it works perfectly fine with Unraid (aside from the size limitations).
  13. I'm not 100% sure on the Unraid side, but as far as your board goes, it'll take one. I have one of those cards on order, so hopefully I can tell you one way or another in a few days as far as Unraid is concerned. LSI has Linux drivers for most of their cards, so I'm fairly certain this one will work though..
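      Once it shows up, a generic Linux sanity check (nothing Unraid-specific, just a sketch) would be something like:
      lspci -k | grep -A3 -i lsi    # lists the controller and which kernel driver grabbed it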
  14. I have an older Cisco 1u box that has dual X5650's in it already, so as far as the hardware goes I already have that much of it. I'd need to get another MD1000 box and a controller to run it if I were to move my existing storage over to that. It would also open up more expansion bays than what I currently have..
  15. How much of a performance upgrade would I see going from a single FX-8320 8-core AMD CPU to a system with dual 6-core Xeon X5650's? I know in raw cores I'll have 4 more, plus whatever gains hyperthreading gets. The individual cores are faster on the FX (at least by MHz rating, but I'd bet core for core the Intel would keep up with, or surpass, the FX cores). I'm just wondering if I'll see a major increase with my docker containers, as I'm getting issues with Plex telling me that my system can't keep up from time to time, as Pynab seems to want to hog the system..
  16. I'll definitely add a vote for a dark theme, white-background sites give me way too much eye strain, lol.
  17. Updated without problems. Just wish my Unraid would read the individual temps and rotational status from my Areca card.
  18. Trying to install over RC9, and get this message:
      plugin: installing: https://raw.githubusercontent.com/limetech/unRAIDServer/master/unRAIDServer.plg
      plugin: downloading https://raw.githubusercontent.com/limetech/unRAIDServer/master/unRAIDServer.plg
      plugin: downloading: https://raw.githubusercontent.com/limetech/unRAIDServer/master/unRAIDServer.plg ... done
      plugin: not installing older version
  19. Was this stealth-updated a couple days ago? I see it as having an update, and it LOOKS like in the docker hub logs it may be pulling 1.01? Going to try updating it and see what I get.. **EDIT** Looks like it was. Went ahead and blew out the old one I had installed, and reinstalled, and now have full 1.01. Not sure if it saved any space or anything, or if it's worth updating if you already did the manual update or not..
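      For anyone wanting to double-check what version their container actually ended up with (assuming it's still named Handbrake and the CLI package is in there), something like this should print it:
      docker exec Handbrake HandBrakeCLI --version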
  20. I was also able to export my presets from my windows machine, to a directory accessible by the handbrake docker, then import it. Once it was imported correctly (as a plist legacy type), I closed the handbrake program (used the X on the handbrake window to shut it down), restarted the docker, and my presets had stuck. Not sure if people were still having problems with this or not, but it worked for me..
  21. If you manually connect to the docker while it's running, and use:
      apt-get update
      apt-get install handbrake
      It will update to version 0.9.9+dfsg-2~2.gbpa4c3e9build1 (actually a downgrade?). That's the latest version that's compiled for the version of Debian he's using. I just did it and it's encoding something now. It updated several other packages at the same time as the handbrake update that were also required. It also leaves a few packages that aren't needed anymore, but I left those alone.
      Actually, if you connect to the docker when it's running, you can update it to 1.01 manually. I ended up doing each part of handbrake separately, and I show I'm running 1.0.1-zhb-1ppa1~trusty1. Ignore the above, and do this: attach to the docker container in a shell with
      docker exec -i -t Handbrake /bin/bash
      (change Handbrake to either the container number or the name that you're using; case matters on the name), then:
      apt-get update
      apt-get install handbrake
      apt-get install handbrake-cli
      apt-get install handbrake-gtk
      That ended up getting me everything updated to the latest (1.01).
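      Same steps rolled into one command, if you'd rather not attach interactively (the -y just answers the install prompts; the container name is still assumed to be Handbrake):
      docker exec Handbrake /bin/bash -c 'apt-get update && apt-get install -y handbrake handbrake-cli handbrake-gtk'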