distracted


Posts posted by distracted

  1. Actually it is a fact.  I'm not purchasing licenses until there's an official release with >2TB support.  And if I find a better solution elsewhere in the meantime, then the opportunity to gain me as a customer will be lost.  That's the long and short of it.

     

    As far as other solutions go: we already have 4TB HDDs readily available, and supposedly in 2H 2014 we'll see 6TB hard drives.

     

    If I had to do it all over again, I don't think I'd build a file server.  I think I'd use individual drives with cold storage mirrored backups.

     

    File Server != Backups

  2. Why would you need to run a preclear before using it to replace a failed drive? Not trying to be sarcastic. From my understanding this isn't necessary. Am I mistaken?

    Okay, not absolutely necessary, but how much confidence do you have in the substantial part of the drive which has not been accessed while in service as a parity drive?

     

    The other problem with this regime is that I, like many others, use the parity drive as non-volatile storage for the apps I have running on the server.  So, it would be necessary to hold a spare cache drive.

     

    I'm guessing when you say parity drive you mean cache drive. Personally I test every drive I purchase, so in my mind it would be good to go. Plus I would run a parity check immediately after the rebuild, and I would check SMART before the rebuild and after the parity check for any anomalies. However, I can understand why you may want to run a preclear. There is nothing wrong with a little peace of mind.
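
    For reference, a rough sketch of the SMART checks I mean, run from the console (/dev/sdX is a placeholder; substitute the actual device):

      smartctl -H /dev/sdX    # quick overall health verdict
      smartctl -a /dev/sdX    # full attribute dump; compare Reallocated_Sector_Ct,
                              # Current_Pending_Sector and Offline_Uncorrectable
                              # before the rebuild and after the parity check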

  3. I use a 3TB drive as my cache drive. It acts as a warm spare.

     

    One disadvantage of using a spare drive as cache is that you still have to run a lengthy preclear before it can be put into service.

     

    It would be ideal if you could keep a precleared drive sitting on the shelf (or installed, but unassigned), ready to be put into service at a moment's notice.

     

    Why would you need to run a preclear before using it to replace a failed drive? Not trying to be sarcastic. From my understanding this isn't necessary. Am I mistaken?
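
    (For anyone landing here from a search: the preclear being discussed is the community preclear_disk.sh script. A minimal run, assuming the disk shows up as /dev/sdX and is not assigned to the array, and if I remember the options right, looks something like this:)

      preclear_disk.sh -l            # list unassigned disks that are safe candidates
      preclear_disk.sh /dev/sdX      # one full pre-read / zero / post-read cycle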

  4. both far in excess of GbE real world max throughput of ~100MB/s.

     

    If you only get 100MB/s you are doing something wrong, or you need better NICs/switches. I've had sustained transfers of ~120MB/s for hours during large file transfers between RAID arrays hosted in different servers (specifically a 9500S under Win7 and a Linux VM-based md array exported via iSCSI to a Win7 VM, both under vSphere). Granted, this doesn't change your argument, but I felt I needed to make this point for anyone who thinks 100MB/s is acceptable.

     

    Edit: To be clear, I am not picking on you specifically; I just used your message :) No offense meant.

     

    No worries, none taken.  While I have witnessed close to the theoretical 125MB/s many times on enterprise-grade gear, I have rarely seen it on consumer-grade gear.  I should have been clearer: I was stating a real-world throughput of ~100MB/s on consumer-grade gear, which in my experience is what most home/small-business GbE networks effectively top out at.

     

    EDIT: Thought I would clarify further that it is my assumption that the vast majority of unRAID installs are in homes/small businesses on consumer-grade networking gear.

     

    I didn't use anything too crazy. Just some Intel PCIe NICs and a Trendnet Gb switch. I can be picky about my NICs; I find Intel gives me the least trouble and the best performance. I'd love to have a nice managed switch, but I don't think my wife would be happy with the noise :) I envy all of you who have a basement, no chance of that here in Florida...
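
    If anyone wants to see what their own network tops out at with the disks taken out of the equation, iperf is the easy way. A rough sketch (the hostname is just an example):

      iperf -s                     # on the server
      iperf -c tower -t 60 -i 5    # on the client: a 60-second run, reporting every 5 seconds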

  5. both far in excess of GbE real world max throughput of ~100MB/s.

     

    If you only get 100MB/s you are doing something wrong, or you need better NICs/switches. I've had sustained transfers of ~120MB/s for hours during large file transfers between RAID arrays hosted in different servers (specifically a 9500S under Win7 and a Linux VM-based md array exported via iSCSI to a Win7 VM, both under vSphere). Granted, this doesn't change your argument, but I felt I needed to make this point for anyone who thinks 100MB/s is acceptable.

     

    Edit: To be clear, I am not picking on you specifically; I just used your message :) No offense meant.

  6. Freezing while streaming video, telnet, and browsing webpage interface

     

    unRAID version: 5.0-rc11

    Motherboard: Gigabyte GA-G41MT-D3V

    Processor: Intel Core 2 Duo E8400 @ 3.00 GHz

    Memory: 8GB RAM

     

     

    Feb 14 20:07:30 Tower kernel: eth0: Identified chip type is 'RTL8168E-VL/8111E-VL'.

     

    I looked up my motherboard "Gigabyte GA-G41MT-D3V"

     

    They list this as my NIC:

    1 x Realtek RTL8111E chip

     

    Is 'RTL8168E-VL/8111E-VL' the proper driver for that NIC?

     

    I notice the "VL"; any idea what that means?

     

    If I had a processor issue, is there a way I can check?

     

    You may want to post that in the RC11 thread :)
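
    (If you want to confirm which kernel driver is bound to that Realtek chip, something like the following from the console should show it, assuming ethtool is included in your build and the interface is eth0:)

      ethtool -i eth0                  # reports the driver in use (typically r8169 for RTL8168/8111 parts)
      lspci -nn | grep -i ethernet     # shows the NIC and its PCI vendor:device ID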

  7. I also use it all the time, since I don't normally have a monitor hooked up.  But that's kind of my point.  PuTTY only works once the server is up and running.  I can't see the startup options from PuTTY, or without a monitor connected to the machine, so if a 'start without plugins' option were added but was only selectable at startup, I couldn't really use it, since I don't have a monitor connected and PuTTY isn't reachable until the server has already started.

     

    You could always get a server motherboard with IPMI.  ;)

     

    Or virtualize using vSphere.

     

     

    To work properly he'd need a CPU with VT-d; hence, server equipment.

     

    Granted, it's preferred, but raw device mappings (RDM) work fine.

     

    Edit: As for server equipment, my ASRock 990FX-based motherboard with a Phenom II works great and fully supports IOMMU. I have my M1015 passed through to an unRAID VM.

  8. I also use it all the time, since I don't normally have a monitor hooked up.  But that's kind of my point.  PuTTY only works once the server is up and running.  I can't see the startup options from PuTTY, or without a monitor connected to the machine, so if a 'start without plugins' option were added but was only selectable at startup, I couldn't really use it, since I don't have a monitor connected and PuTTY isn't reachable until the server has already started.

     

    You could always get a server motherboard with IPMI.  ;)

     

    Or virtualize using vSphere.

  9. Has anybody had any failures on these?  The reviews on Newegg are pretty bad, and they haven't been in stock anywhere in almost two months.  It makes me believe there is a problem and WD isn't shipping any more until it's fixed.

     

    There may be a problem, but to be fair, Newegg's packaging has been horrendous since they moved away from the peanuts. I've had HDDs from them taped into a styrofoam holder, which wouldn't be too bad if it covered the entire drive. Unfortunately, one entire side was not protected and could easily smack against the sides of the box, which is entirely likely when shipped via UPS.

  10. http://support.wdc.com/product/download.asp?groupid=609&sid=113

     

    ????

     

    Is there one for each specific drive or series of drives?  Do I have to do this to ALL the WD drives I buy?

     

    Fine, but I guess there really aren't any drives that "just work."  Hrmph.  In the last 25 years, I have never had to do anything with a hard disk other than plug it in and go, unless it was a SCSI ID or termination change.

     

    Just the Green drives, AFAIK. That same utility should work for all Green drives.
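
    (Assuming the utility behind that link is WD's wdidle3 tool for the Green drives' head-parking timer, which is my reading of it but not something I've verified, typical usage from a DOS boot disk is roughly:)

      WDIDLE3 /R       # report the current idle3 (head-parking) timer
      WDIDLE3 /S300    # set the timer to 300 seconds
      WDIDLE3 /D       # or disable the timer entirely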

  11. Now I get it. Plus I probably can't create a VMDK bigger than 2TB, so I will need to lurk more and rework the build from the basics.

    The first thing I need to decide is whether to go the adventurous AMD-Vi route, which is about half the price of an Intel VT-d mobo+CPU combo.

     

    Currently I'm thinking more like:

    Intel Core i5 2400S - 4610 - 65W TDP, Intel VT-d, ESXi5 compatible

    Intel Cougar Point DQ67OWB3 - 2726 - Intel NIC, Intel VT-d, non-ECC, probably ESXi5 compatible

    2x 4GB DDR3 1333 - 900

    IBM Express ServeRAID M1015 - 3100 - HBA, ESXi5 compatible

    Norco RPC-3216 - 9500 - 16 HDDs, standard PSU, need to buy 1 SATA backplane (found for sale in the Netherlands)

    ST-LAB A-214 - 350 - Sil3114, for the torrenting HDD

    Chieftec Smart Series GPS-600A8 600W - 1238 - 46A on 12V, not a total no-name manufacturer

     

    About $1100 to buy it all here, so it still seems quite a budget build, but more server-like :)

     

    There is no 2TB limit with RDM; just make sure to use '-z' when creating the RDM mapping. Personally I would go with a VT-d/IOMMU setup, though I did use RDMs to test unRAID. If you do run torrents in a VM, make sure you either preallocate the entire VMDK or use RDM. If not, you will be hating life while the VMDK constantly allocates and expands.
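
    (For anyone searching later, creating the RDM from the ESXi console looks roughly like this; the device ID and datastore paths are placeholders, so substitute your own:)

      # physical-compatibility RDM (-z); -r would create a virtual-compatibility RDM instead
      vmkfstools -z /vmfs/devices/disks/<device-id> /vmfs/volumes/datastore1/unraid/disk1-rdm.vmdk

      # if you go the VMDK route for the torrent disk instead, preallocate it up front
      vmkfstools -c 500G -d eagerzeroedthick /vmfs/volumes/datastore1/torrents/torrents.vmdk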

  12. ...I am not on my turf with AMD CPUs, but you would need one with the Intel VT-d equivalent (not just VT-x).

    AFAIK this is called AMD-Vi (not just AMD-V) and is only available in the Opteron family of CPU models, isn't it?

     

    Phenom II + 890FX/990[X,FX] chipsets will work, as long as the MB has a BIOS that truly supports IOMMU. Finding reliable reports is the hard part. I picked up an ASRock Extreme III that supposedly works with DirectPath, but I have another project that I need to complete beforehand. IOMMU is a function of the chipset, with AMD-V a function of the processor. Some Athlon 64, Athlon II, etc. processors support an earlier version of AMD-V.
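
    (One way to verify a board actually enables it, rather than trusting the spec sheet: boot any Linux live CD with IOMMU switched on in the BIOS and check the kernel log. A rough sketch; the grep covers both the AMD and Intel messages:)

      dmesg | grep -i -e "AMD-Vi" -e "IOMMU" -e "DMAR"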

     

  13. I get exactly the same thing; I don't think it's related to the download. It's probably related to how unRAID interacts with the emulated controller. I get it with both the LSI controller and the PV controller.

     

    Edit: It's probably some unsupported command. The logical place to start would be any changes between the last version without this error message and RC5, be it the kernel driver or a changed/added drive/controller-related command.

  14. On ESXi I have unRAID running; I followed the directions on the forum and everything was working great.  When I went from B14 to R2, the green light stayed blinking.  Attached is a screenshot of the error that I got from the console window.  Please let me know if you need any more information.

     

    Mine does the same thing but there are no apparent issues. Do you pass through your controller or do you use RDM? I use RDM at the moment. If I had to guess I would say it is related to the emulation layer between the drive and the OS.

  15. It also sounds like the typical oversell-the-capacity-and-hope-no-one-uses-it business approach. That, or they're simply incompetent.

     

    Fortunately, for the sites I run, we opted to go with dedicated hardware from the get-go to avoid hassles such as these. The only issues we've run into in 10+ years of doing so are hardware failures, but those happen in either situation.

     

    Currently we're pleased with the support and assistance SoftLayer has provided.

     

    I've been lurking for a while waiting for 5.0 to go final. I have the trial up and working under VMware and plan on purchasing the Pro version. This isn't relevant to this post, but I want to make it clear I'm not some transient troll.

     

    The dedicated servers at SoftLayer are better than the VPS, but IMHO that isn't saying much. We had about 80 VPS instances (they use XenServer) and 15 or so dedicated servers. We had multiple problems with our dedicated servers in which the power cords just fell out or came loose; this is according to their own tech support. Why they don't use some sort of positive retention mechanism is beyond me. The VPS servers were downright slow and overcrowded, with poor transfer rates and latency. Their tech support was slow and unresponsive (some of the above power issues took 6+ hours to resolve).

     

    I apologize for the somewhat malicious first post, but please pick anywhere but SoftLayer.

     

    Edit: If anyone is curious, our VPS (cloud) instances were in their Seattle DC. Our dedicated servers were in Dallas, though I don't recall which DC. I believe they have two in Dallas.