prostuff1

Moderators
  • Posts: 6378

Everything posted by prostuff1

  1. I think these would make very poor parity drives, especially if you need to do a parity check or rebuild... it would take forever. A parity check, or rebuild of another data drive should go as fast as the drive can read. It's yet to be fully determined how a parity generate (sync) and/or a data rebuild on one of these drives is handled. From what I saw on a review, it could drop as low as 40MB/s near the end of the drive. I would imagine the head would be swinging from the outer cache tracks to write to the inner tracks unless the hard drive's cache algorithm manages it well. Yup, meant to only type rebuild/sync in there.
  2. I think these would make very poor parity drives, especially if you need to do a parity check or rebuild... it would take forever.
  3. I got my three a little while back now. Came packaged just fine. I got mine only because they were so cheap. I had been planning to reuse the Chenbro expander that is in my current build but with this being cheaper I figured why not.
  4. Yes, you can create BTRFS pools from the command line to experiment with if you'd like. Cool!! I figured that would be the answer but wanted to make sure before venturing down that path.
  5. So jonp, I have not had a chance to mess with this but I was wondering if I could perhaps do this via the command line manually? Kind of like what was done initially for the cache pool in unRAID v6 beta 6. I probably won't be playing with this for another week or two but figured I would ask the question now. (A rough command-line sketch of a hand-built BTRFS pool follows this list.)
  6. Docker.img can be anywhere -> it doesn't matter, as long as it's referred to in the docker tab by the disk share, not by the user share. I.e.: I have a user share called docker set to use the cache drive only, and I tell docker to store my image as /mnt/cache/docker/docker.img Yup, which is exactly what I plan on doing when I move to v6. For customers that I set docker up for I usually do exactly what you just put above, with an "apps" folder inside the docker folder where everything can live and I only have to grab/copy one folder to get everything back up and running.
  7. Yes. It has to be in a folder to be considered as part of a User share. I have mine set as /mnt/cache/docker.img and it functions just fine, and also does not get moved or show up in a User share (although it can be seen in the 'cache' disk share). This does work... but I personally don't like it. I create an "Apps" cache-only share and then install everything in it. (A rough sketch of that layout follows this list.)
  8. Sure you can... you just have to set that user share as a cache-only share No you can't. We now require the docker image to be stored in a disk share (this was in the previous release notes as well). You cannot have /mnt/user in your path when specifying your docker image file location. Fair enough, my terminology is a little different. A user share to me is something that shows up under the shares section of the webGUI regardless of the specific disk it directs to. If it shows up under \\tower\NAME_OF_FOLDER it is a user share in my mind.
  9. Sure you can... you just have to set that user share as a cache-only share
  10. It may be annoying to you but it is the way it should have been from the beginning IMO.
  11. Did the same thing, going to use one in my server rebuild and stash the others for customer builds.
  12. #2 and #3 along with more storage
  13. I would like to upvote this also. I am in the process of building myself a new server based on updated hardware and am very much wanting to do a RAID0 of 2 240GB SSD drives for my cache drive. I'd actually think that even better than a RAID0 would be a btrfs "single" where it's basically a JBOD (yes, even though it's called "single", it does support multiple devices). So it's like unRAID but without a parity device. So still no protection from device failure (just like RAID0), but at least if you lose 1 disk, you don't lose everything in your pool. Thoughts? I would think this is better than RAID0, especially for SSDs. I realize that RAID0 has write-performance benefits, but I don't think those benefits are very strong when A) dealing with SSDs and B) dealing with a NAS. I personally would rather have the RAID0 for maximum speed on my cache drive. I don't run a cache pool right now but I want maximum speed on my cache drive. I plan on backing up the docker image to the parity protected array just in case, so I don't care about "losing" one of the drives in the RAID0 cache. (A rough sketch of the single vs RAID0 profiles follows this list.)
  14. I would like to upvote this also. I am in the process of building myself a new server based on updated hardware and am very much wanting to do a RAID0 of 2 240GB SSD drives for my cache drive.
  15. I'm glad we were able to help you out and get everything sorted. Pleasure to work with and very helpful when I needed something done to the server that I could not do remotely.
  16. You just described almost exactly the setup I have. The only difference is the ESXi box currently only has 10GB of RAM in it. I have 382 days of uptime on that ESXi box so I don't want to take it down to put more RAM in it.
  17. We have never had any problems with the Norco cages. We do swap out the fan on the back for a quieter 80mm though.
  18. damn good price... in for three... time to take some of my old 1TB drives out of service.
  19. Norco's own page says it is an 80mm fan: http://www.norcotek.com/item_detail.php?categoryid=8&modelno=ss-500
  20. No, it's an 80mm fan... unless they have changed something.
  21. We replace all fans on these cages. They are not overly loud, but all our customer builds strive to be as quiet as possible.
  22. That processor is way overkill for most anything. Something around the Intel Xeon E3-1230v3 or above range is the best bang for the buck. Not sure if you have a Micro Center near you, but the E3-1240 can be had for a little cheaper (even with sales tax) than buying it online.
  23. The controller card may be useful later. It really comes down to how much he wants to spend right now. My main box is still running on a quad-core LGA 775 socket processor with ESXi (4.1, I think) as the virtualization layer. Quite frankly, it won't be changing until something decides to fubar itself. I have nearly a year of uptime on that box right now and I am not touching it until the power goes out long enough that the UPS can't keep it online. I have another 8GB of RAM to add to the box but have not wanted to shut everything down to install it. Yup, which will likely cost about $200 for the motherboard and another $200 for the processor... with another ~$200 for 16GB of RAM.
  24. But you will need new RAM and a processor to go in that board. You would likely end up spending more than just getting one of the cards I mentioned.
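
For the command-line BTRFS experiment asked about in post 5 (and confirmed possible in post 4), here is a minimal sketch. The device names and mount point are placeholders, not taken from the posts, and this is just one way to hand-build a pool to play with; it is not how unRAID itself manages the cache pool.

    # Placeholder devices and mount point -- substitute real, empty disks.
    # Build a two-device btrfs pool (raid1 chosen here purely as an example profile).
    mkfs.btrfs -f -d raid1 -m raid1 /dev/sdX /dev/sdY

    # Mounting any member device mounts the whole pool.
    mkdir -p /mnt/btrfs_test
    mount /dev/sdX /mnt/btrfs_test

    # Inspect the pool layout and space usage.
    btrfs filesystem show /mnt/btrfs_test
    btrfs filesystem df /mnt/btrfs_test

A pool built this way is cheap to throw away and recreate, which suits a quick experiment before committing to a layout.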
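Posts 6 through 9 describe keeping docker.img and an application folder on a cache-only share and pointing the Docker settings at the disk share path rather than the user share path. A rough sketch of that layout, assuming a cache-only share named "apps" (the exact names here are illustrative):

    # Create the folder directly on the cache disk; the "apps" share is set to
    # cache-only in the webGUI so the mover never touches it.
    mkdir -p /mnt/cache/apps/docker

    # In the Docker settings, reference the image by the disk share path:
    #   /mnt/cache/apps/docker/docker.img
    # not by the user share path:
    #   /mnt/user/apps/docker/docker.img

    # Getting everything back is then a matter of copying the one folder
    # (the destination path is a placeholder):
    cp -a /mnt/cache/apps /mnt/disk1/backup/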
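For the single-versus-RAID0 discussion in posts 13 and 14, here is a minimal sketch of how the two btrfs data profiles would be created by hand. Device names are placeholders, and on unRAID the cache pool profile is normally handled by the webGUI rather than typed in like this.

    # "single" data profile: files are allocated across the devices but not striped,
    # so losing one device does not necessarily take every file in the pool with it.
    mkfs.btrfs -f -d single -m raid1 /dev/sdX /dev/sdY

    # raid0 data profile: data is striped across both devices for maximum speed,
    # but losing either device loses the whole pool.
    mkfs.btrfs -f -d raid0 -m raid1 /dev/sdX /dev/sdY

    # An existing pool can be converted between data profiles with a balance,
    # e.g. (mount point is a placeholder):
    btrfs balance start -dconvert=raid0 /mnt/cache_pool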