distracted

Members
  • Posts: 21
  • Gender: Undisclosed

distracted's Achievements
  • Rank: Noob (1/14)
  • Reputation: 0

  1. This is your only option. I pass through my onboard USB3 controller along with a GPU (HD6450). Unfortunately, since I use the extra onboard controller, I don't have a standalone pci-e card to recommend.
  2. As far as other solutions go: we already have 4TB HDDs readily available, and supposedly in 2H 2014 we'll see 6TB hard drives. If I had to do it all over again, I don't think I'd build a file server. I think I'd use individual drives with cold-storage mirrored backups. File Server != Backups
  3. Quoting the previous poster: "Okay, not absolutely necessary, but how much confidence do you have in the substantial part of the drive which has not been accessed while in service as a parity drive? The other problem with this regime is that I, like many others, use the parity drive as non-volatile storage for the apps I have running on the server. So, it would be necessary to hold a spare cache drive." I'm guessing that when you say parity drive you mean cache drive. Personally, I test every drive I purchase, so in my mind it would be good to go. Plus, I would run a parity check immediately after the rebuild, and I would check SMART before the rebuild and after the parity check for any anomalies (see the SMART-check sketch after this list). However, I can understand why you may want to run a preclear. There is nothing wrong with a little peace of mind.
  4. Quoting the previous poster: "One disadvantage of using a spare drive as cache is that you still have to run a lengthy preclear before it can be put into service. It would be ideal if you could keep a precleared drive sitting on the shelf (or installed, but unassigned), ready to be put into service at a moment's notice." Why would you need to run a preclear before using it to replace a failed drive? Not trying to be sarcastic; from my understanding this isn't necessary. Am I mistaken?
  5. Quoting my earlier post (#6 below) and the reply to it: "No worries, none taken. While I have witnessed close to theoretical 125MB/s transfers many times on enterprise-grade gear, I have rarely seen it on consumer-grade gear. I should have been clearer: I was stating a real-world throughput of ~100MB/s on consumer-grade gear, which in my experience is what most home/small-business GbE networks effectively top out at. EDIT: Thought I would clarify further that it is my assumption that the vast majority of unRAID installs are in homes/small businesses on consumer-grade networking gear." I didn't use anything too crazy, just some Intel PCIe NICs and a Trendnet Gb switch (see the throughput arithmetic after this list). I can be picky about my NICs; I find Intel gives me the least trouble and the best performance. I'd love to have a nice managed switch, but I don't think my wife would be happy with the noise. I envy all of you that have a basement; no chance of that here in Florida...
  6. If you only get 100MB/s, you are doing something wrong or you need better NICs/switches. I've had sustained transfers of ~120MB/s for hours during large file transfers between RAID arrays hosted in different servers (specifically, a 9500S under Win7 and a Linux VM-based md array exported via iSCSI to a Win7 VM, both under vSphere). Granted, this doesn't change your argument, but I felt I needed to make this point for anyone who thinks 100MB/s is acceptable. Edit: To be clear, I am not picking on you specifically; I just used your message as an example. No offense meant.
  7. Quoting my earlier suggestion (#8 below) and the objection that "to work properly he'd need a CPU with vt-d, hence, server equipment": granted, it's preferred, but RAW mappings work fine. Edit: As for server equipment, my Asrock 990FX-based motherboard with a Phenom II works great and fully supports IOMMU. I have my M1015 passed through to an unRAID VM.
  8. You could always get a server motherboard with IPMI. Or virtualize using vSphere.
  9. Thanks! $36.00, but still a bargain.
  10. There may be a problem, but to be fair, Newegg's packaging is horrendous since they moved away from the peanuts. I've had HDDs from them taped into a styrofoam holder, which wouldn't be too bad if it covered the entire drive. Unfortunately, one entire side was not protected and could easily smack against the sides of the box, which is entirely likely when shipped via UPS.
  11. I disagree with the topic. My load cycle counts were climbing like crazy until I disabled it. As others have noted, you must power-cycle the drive afterwards. Regarding /d vs /s300: all /d does is max out the sleep timer; it doesn't disable it altogether.
  12. Just the green drives, AFAIK. That same utility should work for all green drives.
  13. I have 5 of these with no issues so far, but only 3 are in an array at this time. Don't forget to change/disable the idle3 timer using wdidle3. If you don't, you can and will get excessive head parking under Linux (see the head-parking check after this list).
  14. 1000. 1024 would be MiB/s and Mib/s (worked example after this list). Edit: Typed it backwards. Edit2: Reference: http://physics.nist.gov/cuu/Units/binary.html
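
For the SMART check mentioned in #3, here is a minimal sketch of what "check SMART before the rebuild and after the parity check" could look like. It assumes smartmontools is installed; /dev/sdX is only a placeholder, and the attribute names and column layout are the ones smartctl -A typically prints, which can vary by drive.

```python
#!/usr/bin/env python3
"""Sketch for post #3: print the SMART counters worth comparing before a
rebuild and again after the follow-up parity check.
Assumes smartmontools is installed; /dev/sdX is only a placeholder."""
import subprocess
import sys

# Attributes where any increase across the rebuild is a red flag.
WATCHED = ("Reallocated_Sector_Ct", "Current_Pending_Sector",
           "Offline_Uncorrectable", "UDMA_CRC_Error_Count")

def smart_report(device):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        parts = line.split()
        # Typical '-A' row: ID NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW
        if len(parts) >= 10 and parts[1] in WATCHED:
            print(f"{parts[1]:>24}: raw={parts[-1]}")

if __name__ == "__main__":
    # Run once before the rebuild and once after the parity check,
    # then diff the two outputs; any counter that grew deserves a closer look.
    smart_report(sys.argv[1] if len(sys.argv) > 1 else "/dev/sdX")
```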
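For the numbers in #5 and #6, the arithmetic behind the "theoretical 125MB/s" figure is simple enough to show. This is a back-of-the-envelope sketch that assumes standard 1500-byte frames and TCP/IPv4 overhead, not a measurement of anyone's actual hardware.

```python
#!/usr/bin/env python3
# Back-of-the-envelope numbers for posts #5/#6: what gigabit Ethernet can
# deliver, assuming standard 1500-byte frames and TCP over IPv4 (no jumbo
# frames, no VLAN tags). Textbook overheads, not measured results.

LINK_BPS = 1_000_000_000                 # 1 Gb/s line rate

raw_mb_s = LINK_BPS / 8 / 1e6            # the "theoretical" figure people quote
print(f"Line rate:             {raw_mb_s:.1f} MB/s")   # 125.0 MB/s

# Each 1500-byte Ethernet payload also costs preamble (8), header (14),
# FCS (4) and inter-frame gap (12) = 38 bytes on the wire, and the payload
# itself loses 40 bytes to IPv4 + TCP headers.
wire_bytes = 1500 + 38
tcp_payload = 1500 - 20 - 20
goodput_mb_s = raw_mb_s * tcp_payload / wire_bytes
print(f"Best-case TCP goodput: {goodput_mb_s:.1f} MB/s")  # ~118.7 MB/s

# So ~110-120 MB/s sustained (post #6) is near the practical ceiling, while
# ~100 MB/s (post #5) leaves something on the table but is common on
# consumer-grade NICs and switches.
```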
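For the idle3/head-parking issue in #11 and #13, this is a rough way to check whether a drive really is parking excessively: divide Load_Cycle_Count by Power_On_Hours as reported by smartctl. Again /dev/sdX is a placeholder, and the 20-cycles-per-hour threshold is only my rule of thumb, not a WD specification.

```python
#!/usr/bin/env python3
"""Sketch for posts #11/#13: estimate how fast a drive is parking its heads
by dividing Load_Cycle_Count by Power_On_Hours (both read via `smartctl -A`).
Assumes smartmontools is installed; /dev/sdX is a placeholder and the
20 cycles/hour threshold is only a rough rule of thumb."""
import subprocess
import sys

def raw_value(device, attr_name):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        parts = line.split()
        if len(parts) >= 10 and parts[1] == attr_name:
            return int(parts[-1])   # assumes a plain integer raw value
    raise KeyError(f"{attr_name} not reported by {device}")

if __name__ == "__main__":
    dev = sys.argv[1] if len(sys.argv) > 1 else "/dev/sdX"
    cycles = raw_value(dev, "Load_Cycle_Count")
    hours = raw_value(dev, "Power_On_Hours")
    rate = cycles / max(hours, 1)
    print(f"{cycles} load cycles over {hours} h = {rate:.1f} cycles/hour")
    if rate > 20:  # WD Greens are rated for roughly 300k load cycles in total
        print("Looks excessive; consider adjusting the idle3 timer with wdidle3.")
```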
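And for the decimal-vs-binary prefixes in #14 (per the NIST page linked there), a two-line comparison of what gigabit Ethernet's line rate looks like in MB/s versus MiB/s:

```python
#!/usr/bin/env python3
# Decimal vs. binary prefixes from post #14 (per the NIST page linked there):
# 1 MB = 1000**2 bytes, 1 MiB = 1024**2 bytes; same idea for Mb vs. Mib.
gbe_bytes_per_s = 1_000_000_000 / 8              # gigabit Ethernet line rate in bytes/s

print(f"{gbe_bytes_per_s / 1000**2:.1f} MB/s")   # 125.0 MB/s  (decimal prefix)
print(f"{gbe_bytes_per_s / 1024**2:.1f} MiB/s")  # 119.2 MiB/s (binary prefix)
```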