distracted

Everything posted by distracted

  1. This is your only option. I pass through my onboard USB3 controller along with a GPU (HD6450). Unfortunately, since I use the extra onboard controller, I don't have a standalone PCIe card to recommend.
  2. As far as other solutions go: we already have 4TB HDDs readily available, and supposedly in 2H 2014 we'll see 6TB hard drives. If I had to do it all over again, I don't think I'd build a file server. I think I'd use individual drives with cold storage mirrored backups. File Server != Backups
  3. Okay, not absolutely necessary, but how much confidence do you have in the substantial part of the drive which has not been accessed while in service as a parity drive? The other problem with this regime is that I, like many others, use the parity drive as non-volatile storage for the apps I have running on the server. So, it would be necessary to hold a spare cache drive.

     I'm guessing when you say parity drive you mean cache drive. Personally I test every drive I purchase, so in my mind it would be good to go. Plus, I would run a parity check immediately after the rebuild, and I would check SMART before the rebuild and after the parity check for any anomalies (see the command sketch after this list). However, I can understand why you may want to run a preclear. There is nothing wrong with a little peace of mind.
  4. One disadvantage of using a spare drive as cache is that you still have to run a lengthy preclear before it can be put into service. It would be ideal if you could keep a precleared drive sitting on the shelf (or installed, but unassigned), ready to be put into service at a moment's notice.

     Why would you need to run a preclear before using it to replace a failed drive? Not trying to be sarcastic; from my understanding this isn't necessary. Am I mistaken?
  5. If you only get 100MB/s you are doing something wrong or you need better NICs/switches. I've had sustained transfers of ~120MB/s for hours during large file transfers between RAID arrays hosted in different servers (specifically a 9500s under Win7 and a Linux VM-based md array exported via iSCSI to a Win7 VM, both under vSphere). Granted, this doesn't change your argument, but I felt I needed to make this point for anyone who thinks 100MB/s is acceptable. Edit: To be clear, I am not picking on you specifically, I just used your message. No offense meant.

     No worries, none taken. While I have witnessed close to the theoretical 125MB/s transfers many times on enterprise-grade gear, I have rarely seen it on consumer-grade gear. I should have been clearer: I was stating a real-world throughput of ~100MB/s on consumer-grade gear, which in my experience is what most home/small business GbE networks effectively top out at. EDIT: Thought I would clarify further that it is my assumption that the vast majority of unRAID installs are in homes/small businesses on consumer-grade networking gear.

     I didn't use anything too crazy, just some Intel PCIe NICs and a Trendnet Gb switch (see the throughput sketch after this list). I can be picky about my NICs; I find Intel gives me the least trouble and the best performance. I'd love to have a nice managed switch, but I don't think my wife would be happy with the noise. I envy all of you who have a basement; no chance of that here in Florida...
  6. If you only get 100MB/s you are doing something wrong or you need better NICs/switches. I've had sustained transfers of ~120MB/s for hours during large file transfers between RAID arrays hosted in different servers (specifically a 9500s under Win7 and a Linux VM-based md array exported via iSCSI to a Win7 VM, both under vSphere). Granted, this doesn't change your argument, but I felt I needed to make this point for anyone who thinks 100MB/s is acceptable. Edit: To be clear, I am not picking on you specifically, I just used your message. No offense meant.
  7. You could always get a server motherboard with IPMI. Or virtualize using vSphere.

     To work properly he'd need a CPU with VT-d, hence, server equipment.

     Granted, it's preferred, but RAW mappings work fine. Edit: As for server equipment, my ASRock 990FX-based motherboard with a Phenom II works great and fully supports IOMMU. I have my M1015 passed through to an unRAID VM.
  8. You could always get a server motherboard with IPMI. Or virtualize using vSphere.
  9. Thanks! $36.00 but still a bargain.
  10. There may be a problem, but to be fair, Newegg's packaging is horrendous since they moved away from the peanuts. I've had HDDs from them taped into a styrofoam holder, which wouldn't be too bad if it covered the entire drive. Unfortunately, one entire side was not protected and could easily smack against the sides of the box, which is entirely likely when shipped via UPS.
  11. I disagree with the topic. My load cycle counts were climbing like crazy until I disabled the idle3 timer. As others have noted, you must power cycle the drive afterwards. Regarding /d vs /s300: all /d does is max out the sleep timer; it doesn't disable it altogether. (See the wdidle3 sketch after this list.)
  12. Just the green drives, AFAIK. That same utility should work for all green drives.
  13. I have 5 of these, no issues so far, but only 3 are in an array at this time. Don't forget to change/disable the idle3 timer using wdidle3. If you don't, you can and will get excessive head parking under Linux.
  14. 1000. 1024 would be MiB/s and Mib/s. Edit: Typed it backwards. Edit2: Reference: http://physics.nist.gov/cuu/Units/binary.html
  15. There is no 2TB limit with RDM; just make sure to use '-z' when creating the RDM mapping (see the vmkfstools sketch after this list). Personally I would go with a VT-d/IOMMU setup, though I did use RDMs to test unRAID. If you do run torrents in a VM, make sure you either preallocate the entire VMDK or use RDM. If not, you will be hating life while the VMDK constantly allocates and expands.
  16. Phenom II + 890FX/990X/990FX chipsets will work, as long as the motherboard has a BIOS that truly supports IOMMU. Finding reliable reports is the hard part (see the IOMMU check after this list). I picked up an ASRock Extreme III that supposedly works with DirectPath, but I have another project that I need to complete beforehand. IOMMU is a function of the chipset, while AMD-V is a function of the processor. Some Athlon 64, Athlon II, etc. processors support an earlier version of AMD-V.
  17. I get exactly the same thing; I don't think it's related to the download. It's probably related to how unRAID interacts with the emulated controller. I get it with both the LSI controller and the PV controller. Edit: It's probably some unsupported command. The logical place to start would be any changes between the last version without this error message and RC5, be it the kernel driver or a changed/added drive/controller-related command.
  18. Mine does the same thing but there are no apparent issues.

     Do you pass through your controller or do you use RDM?

     I use RDM at the moment. If I had to guess, I would say it is related to the emulation layer between the drive and the OS.
  19. I would automatically convert it, just my 2 cents.
  20. I've been lurking for a while waiting for 5.0 to go final. I have the trial up and working under VMware and plan on purchasing the Pro version. This isn't relevant to this post, but I want to make it clear I'm not some transient troll.

      The dedicated servers at SoftLayer are better than the VPS, but IMHO that isn't saying much. We had about 80 VPS (they use XenServer) and 15-ish dedicated servers. We had multiple problems with our dedicated servers in which the power cords just fell out or became loose. This is according to their own tech support. Why they don't use some sort of positive retention mechanism is beyond me. The VPS servers were downright slow and overcrowded, with poor transfer rates and latency. Their tech support was slow and unresponsive (some of the above power issues took 6+ hours to resolve). I apologize for the somewhat malicious first post, but please pick anywhere but SoftLayer.

      Edit: If anyone is curious, our VPS (cloud) servers were in their Seattle DC. Our dedicated servers were in Dallas, though I don't recall which DC. I believe they have two in Dallas.
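
For the rebuild workflow discussed in posts 3 and 4, here is a rough command sketch of the checks mentioned there. It assumes the community preclear_disk.sh script has been copied to the flash drive, that smartmontools is available on the server, and that /dev/sdX stands in for the actual replacement disk; treat it as an illustration rather than an exact procedure.

     # Optional peace-of-mind step: preclear the spare before putting it into service (lengthy)
     /boot/preclear_disk.sh /dev/sdX

     # Check SMART attributes before assigning the disk and starting the rebuild
     smartctl -a /dev/sdX

     # After the rebuild and a parity check from the web GUI, re-check SMART for new anomalies
     smartctl -a /dev/sdX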
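
On the throughput numbers debated in posts 5 and 6: gigabit line rate is 1000 Mbit/s ÷ 8 = 125 MB/s (about 119 MiB/s), so sustained transfers in the 110-120MB/s range are close to the practical ceiling. A quick way to test the network path independently of the disks is an iperf run between the two machines; this sketch assumes iperf is installed on both ends, and 'server-hostname' is a placeholder.

     # On the receiving machine, start iperf in server mode
     iperf -s

     # On the sending machine, run a 60-second throughput test against it
     iperf -c server-hostname -t 60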
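
A rough sketch of the wdidle3 usage referred to in posts 11-13. wdidle3 is a DOS utility, so it is normally run from a bootable DOS/FreeDOS USB stick with the drive attached to an onboard SATA port, and the drive must be power cycled afterwards; the switches below are the commonly cited ones.

     WDIDLE3 /R      (report the current idle3 timer setting)
     WDIDLE3 /S300   (set the timer to 300 seconds)
     WDIDLE3 /D      (disable the timer; per post 11, on some drives this just maxes it out)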
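
For the RDM mapping in post 15, the usual way to create a physical-mode (pass-through) RDM on ESXi is with vmkfstools. The vml identifier and datastore path below are placeholders; list the host's disks first and substitute the real values.

     # List the physical disks the ESXi host can see
     ls -l /vmfs/devices/disks/

     # Create a physical-compatibility RDM pointer file ('-z') for the chosen disk
     vmkfstools -z /vmfs/devices/disks/vml.XXXXXXXX /vmfs/volumes/datastore1/unraid/disk1-rdm.vmdk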
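
As a quick check for post 16's point about boards that truly support IOMMU: booting any recent Linux live environment and searching the kernel log for AMD-Vi/IOMMU messages is a simple way to verify that the BIOS actually enables it. A minimal sketch:

     # Look for AMD-Vi / IOMMU initialization messages in the kernel log
     dmesg | grep -i -e "AMD-Vi" -e "IOMMU"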