Everything posted by Johnm

  1. These last 2 pages have so much info I am not sure where and how to comment on it all.. @dheg I would definitely consider OI and napp-it if I started over. It was where I was going to go from the start. I was building both FreeNAS and OI servers side-by-side to test performance. Some time mid-build, the drives for the OI guest ended up in another server that had older 1.5TB drives that started failing.. time and finances (for server toys) have been a bit tight for a while, so I never got to finish the build. I auto-boot my domain controller, then the ZFS guest, then have a long timeout before any guests that live on the ZFS array boot. I have the ZFS array shared out as NFS (then mapped from inside ESXi as a datastore; a rough sketch of that mount is below). You could also use iSCSI. It is the same as if the ZFS server were a stand-alone server.. I just happened to have virtualized it. I have my own needs to keep my SSD datastores, so I have not looked at alternatives to replace them. So my understanding is that you want to virtualize the ZFS, put it into an external chassis (like a DAS with no motherboard), and link it with an external SAS cable? I want to do similar: get a 2U-3U 12-16 drive "head" with my 2 unRAID servers hanging off of it.
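In case it helps anyone wiring up the same loop-back, here is a minimal sketch of mapping the ZFS guest's NFS export as an ESXi datastore via esxcli. The host IP, export path, and datastore name are placeholders, and you can of course just type the esxcli line directly in the ESXi shell instead of wrapping it:

```python
# Minimal sketch: register a ZFS guest's NFS export as an ESXi datastore.
# The IP, export path, and datastore name below are made-up placeholders.
import subprocess

def mount_nfs_datastore(nfs_host: str, share: str, volume_name: str) -> None:
    """Run esxcli (from the ESXi shell) to map an NFS export as a datastore."""
    subprocess.run(
        ["esxcli", "storage", "nfs", "add",
         "--host", nfs_host,             # IP/hostname of the ZFS guest
         "--share", share,               # exported path, e.g. /tank/vmstore
         "--volume-name", volume_name],  # name ESXi will show for the datastore
        check=True,
    )

mount_nfs_datastore("192.168.1.50", "/tank/vmstore", "zfs-datastore")
```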
  2. The second port is redundant; only one port is needed. The rumor is that it is for a redundant PSU, yet I know of no redundant PSU so wired. Of my 3 Norcos, I have 1 single-wired and 2 double-wired (knowing that both are on the same rail). I used to run them all single-wired. I just wanted to make sure that I have good connections on the backplane plugs.
  3. The 2TB versions of that drive are probably only a few $$ more. I have seen them for as low as $89 recently. Obviously that might not be the case where you are located, but it might be worth a look. That case rolled off the same assembly lines as the Norcos; it just has custom hotswap trays. It is the same as the RPC-1204.
  4. With the 5 series of ESXi there are no CPU/core count restrictions (I'm almost positive 4.1 was limited to 6 cores per CPU). The limits are 32GB of RAM and no vSphere client. The real changes between 4.1 and 5.x that we noticed in the lab were that 5 supports CPUs with more than 6 cores (not threads), and that the 32GB memory limit is a hard limit on the entire server in 5, while on 4.1 we could have boxes with 96GB+ and the limit was 32GB of virtual RAM in a guest.
  5. Actually, in said example, the PCI drives, if not the whole server, are probably out of commission during parity checks due to bandwidth issues. 2TB at 13MB/s? Eek.. that's a long time to be down (the arithmetic is sketched below). As mentioned, for a few drives you should be OK. I would not try to write to any other drive on the PCI bus while streaming multiple Blu-rays.
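For scale, the back-of-envelope math on that figure:

```python
# Back-of-envelope: how long a parity check takes at shared-PCI-bus speeds.
drive_bytes = 2e12     # 2TB drive
throughput = 13e6      # ~13MB/s, the PCI-bottleneck figure above
seconds = drive_bytes / throughput
print(f"{seconds / 3600:.0f} hours")   # ~43 hours of degraded service
```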
  6. +1 I have mentioned this before. I want unRAID to TELL ME there is a problem, then ask what I want to do, not just start fixing what it thinks is wrong.. PS. you are the second or third person to have this issue in the last month.
  7. My answers are in RED. EDIT: I looked for some real-world benchmarks for SSDs in RAID 5: http://www.storagereview.com/intel_ssd_510_raid_review Pretty sad, honestly.. about a 20% performance gain (rough math below), so my guesses would be about correct with M3's, Performance Pros, or any other max-IOPS drives like the SanDisk Extreme.
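Rough math on why ~20% is disappointing, assuming idealized n-1 read striping for RAID 5. The drive count and single-drive speed here are made-up placeholders, not numbers from the review:

```python
# Idealized RAID 5 read scaling vs. the ~20% gain observed in the review.
single_drive = 450e6   # hypothetical SSD sequential read, bytes/s
drives = 3             # hypothetical array size
ideal = single_drive * (drives - 1)   # RAID 5 stripes reads across n-1 drives
observed = single_drive * 1.2         # the ~20% gain the review measured
print(f"ideal: {ideal/1e6:.0f} MB/s, observed: ~{observed/1e6:.0f} MB/s")
```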
  8. Just a follow-up to this error: I did just what I said I would. 1. I migrated some test guests to my ZFS datastore from the SSD, thus freeing up about 1/3 of the drive (33% free). 2. I powered down all guests and went out shopping for a few hours. 3. When I got home I bounced the entire server. When it came back up, it had self-healed. Back to full speed. The auto garbage collection seems to work just fine when you give it a chance to do its job....
  9. Don't have a Samsung part number. I don't recall seeing it anywhere I get memory from. I am pretty sure it is on the Supermicro memory list (I had checked mine). This is the same stuff, http://www.superbiiz.com/detail.php?name=W1333EB8GS re-branded as Super Talent (for half the price I paid 8 months ago). I know you have asked before and I think you have gotten the answer before; just search this thread for "samsung" and you might even get the part number I ordered. You can also zoom in on the pic in this post http://lime-technology.com/forum/index.php?topic=14695.msg175152#msg175152
  10. I am doing the same. I have one 3TB disk with 4x 500GB Time Machine shares on it for 4 different Macs. When I had one share for 4 Macs, I was having issues with my 2 Mac Pros: they kept saying the backup was corrupt.
  11. No, they aren't. eBuyer has been selling Hitachi 4TB (in enclosures) for £129 and you can get a bare 7200rpm for £159: http://www.ebuyer.com/393234-hitachi-4tb-7k4000-desktar-hard-drive-hds724040ale640 The only problem here is: I actually did bust one of those open, only to have it fail 2 weeks later.. I voided the warranty opening it.. expensive paperweight..
  12. First off, I want to point out that I am not arguing with your concern; I am pointing out that you can do it now, and it will only get better...

The bottleneck is the mechanical disk. That is the unfortunate nature of the beast.. and as drives get larger, they are only getting marginally faster as they pack more data per square inch. The newest Seagate 3TB (1TB per platter) drives are much faster than any brand's 2TB of just 3 years ago; they are about on par with the VelociRaptors of a few years back. They are now tweaking 1.25TB platters, which by design have to be faster (unless they slow down the platter). Also, SSDs are getting larger and cheaper.. in a few years, as we start to see 8TB+ mechanical drives, we will probably see 2TB SSDs at a consumer-affordable price. You would be able to at least have an SSD cache drive, or maybe a full SSD server for beastly performance.

A write speed of 30-40MB/sec: I am way past that speed with my systems.. I am pretty sure I posted the speeds for my systems. My write speeds to cache: "Goliath" gets about the full bandwidth of gigabit, 85-110MB/s.. "Spartacus" gets the same unless I am pounding my ZFS array at the same time from another guest.. "unRAID Mini" is a bit slower; it hovers around 65-85MB/s due to the weak processor and the laptop mechanical spinner cache drive. (A quick sanity check of those gigabit numbers is sketched below.)

I'll admit TM backups can be a little slower due to the fact that you can't use a cache drive, but mine are much faster than 10MB/s, about 4 times that. Yes, the first backup can take an hour, but since it is a transparent process to the user, speed is not really that important as long as it can reasonably keep up; that much data can't change each hour. (It is still about the same speed as it was to my Time Capsule.)

Keep in mind that a major bottleneck is your source drive. Your average i5 Win7 Dell desktop will have a bit of a hard time breaking 45MB/s in transfer speed. I deal with this issue all day at work when moving TBs of data, backing up desktops or restoring over copper to SAN arrays, with speeds that would make you drool. In my home network, all of my desktops are SSD and all of my physical Windows servers are RAID5 or RAID6. I can push data back and forth at full NIC speeds; my unRAID read speed is my bottleneck. I'll admit I have an SSD cache in one server and a RAID5 cache drive in my other server. With write speeds of over 450MB/s to both servers, this does eliminate my write issues when using cache writes. I know that's overkill for the average user, but it shows you can push the envelope.

I think you are worried about the future, but the future will be self-healing... the unRAID servers of a few years ago were writing straight to the array at 25MB/s-ish transfer speeds; 2 years ago, 40MB/s-ish. I am getting 60-ish with the new Seagates (and yes, I dislike Seagate, but I'm running them due to low cost). Your performance really depends on your server hardware.

PS. I am thinking about building a dedicated TM unRAID with only 2 disks to see how that works. I really want to get my Time Machine backups off my unRAID, especially since 2 of my Macs will be "wireless only" as soon as I move them into the basement studio.

EDIT: I noticed your build is quite respectable. I am going to guess your WD Greens and the slow cache drive are your bottlenecks. I bet you can squeeze quite a bit more from that build. Add another Black as a TM-only drive and upgrade to a 7200 RPM cache drive or even an SSD.
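The gigabit sanity check mentioned above. The ~6% protocol-overhead allowance is my own rough assumption, not a measured number:

```python
# Sanity check: what "full bandwidth of gigabit" looks like in MB/s.
link_bits = 1_000_000_000      # 1 Gbit/s line rate
raw = link_bits / 8 / 1e6      # 125 MB/s before any overhead
usable = raw * 0.94            # rough allowance for TCP/IP + SMB overhead
print(f"raw: {raw:.0f} MB/s, realistic ceiling: ~{usable:.0f} MB/s")
# So 85-110MB/s writes really are close to saturating the NIC.
```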
  13. You can get a Norco down to desktop noise levels with Noctua fans while maintaining cool temps. My drives are the noisiest part of my servers. Parts I used are in my sig.
  14. I would wager that because the drive is now green and not red without a rebuild (unRAID didn't have to rebuild the data, which would have put that 300MB into the rebuild), it knows nothing about the now-lost data.. that is, until you test parity.. it will probably fail, and then it probably has no clue where to put that data... I would try to run a parity check with "correct" set to off and see if you get any errors.... It is possible you might not have good parity anymore and your parity might be corrupt. There is also a slight possibility that you can put a new 2TB in the slot that was red and rebuild the lost data, if you have not already messed up the parity. There is also a good chance that if any other drive fails right now, it will NOT be able to be rebuilt, based on the now-bad parity info. I would not try to rebuild onto the drive that WAS red; I would try a new drive, to prevent any further data loss. In the end you might have to rebuild parity.
  15. Ah, now that sounds interesting! What do they consider a "real" email address? Something corporate? If so, that I can handle no problem. I will take a look at that one for sure; the price is certainly right! I need to find a good backup package for use at work too. I've got some UEFI machines that Acronis barfs on, and I've got the very latest version. Thankfully this won't be an issue with VMs. Thank you both!

Yes, a corporate email account (they will spam and call you to buy it, also). I actually use another program on my work production boxes (corporate mandate), but I use the Veeam free on lab PCs and at home. The Veeam free is manual backups only; you need to buy it for automated backups.
  16. Veeam Backup Free Edition is pretty fast for migrating and backing up guests. I use it to migrate guests from one datastore to another, or from one server to another. I just run it from my management guest on the ESXi box itself. The only limit is you must have a real email address.
  17. Don't forget your largest drive must be your parity drive. This will only work if your parity drive is already 3TB or larger (sorry if you mentioned this and I missed it). The rule is trivial to check; see the sketch below.
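The rule as a trivial check (the drive sizes are made-up examples):

```python
# unRAID rule of thumb: parity must be at least as large as the biggest data drive.
def parity_ok(parity_tb: float, data_tb: list[float]) -> bool:
    return parity_tb >= max(data_tb)

print(parity_ok(3.0, [2.0, 2.0, 3.0]))   # True: 3TB parity covers a 3TB data drive
print(parity_ok(2.0, [2.0, 3.0]))        # False: a 3TB data drive exceeds 2TB parity
```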
  18. unRAID was never designed with high IO performance in mind. It is mass storage at low cost with little overhead on desktop hardware. It has the side benefits of low energy costs and good recoverability in case of total disaster. It is really a high-volume storage appliance for long-term data storage for things like media libraries and backups (most people use unRAID as a W.O.R.M. drive: write once, read many). If you need a high-IO production file server, look elsewhere.. (PS, unRAID is great for backing up that high-IO production file server.) However.... You can tweak performance a bit by using 7200 rpm drives (reducing the power savings). You can further tweak write speeds by using the built-in cache drive feature (losing the parity benefit temporarily; the idea is sketched below). You can use an SSD for the cache drive for even greater performance. Some people ~cough~ have gone so far as to use a RAID for the cache drive (and even parity drives). If price is not an object, you can tweak unRAID into a bit of a beast.... but.. it is not needed. (For example, an ESXi server running both ZFS and unRAID as a hybrid server.)
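For anyone unfamiliar with the cache-drive feature: writes land on a fast unprotected disk, and a scheduled "mover" later migrates them into the parity-protected array. A toy sketch of that idea only; the real mover is a script built into unRAID, and the paths here are placeholders:

```python
# Toy illustration of unRAID's cache/mover concept: fast writes land on the
# cache disk, then get migrated to the parity-protected array on a schedule.
# Paths are hypothetical; this is not the actual unRAID mover.
import shutil
from pathlib import Path

CACHE = Path("/mnt/cache")   # fast, unprotected cache disk
ARRAY = Path("/mnt/array")   # parity-protected array

def run_mover() -> None:
    # Snapshot the file list first so moving files doesn't disturb iteration.
    for src in list(CACHE.rglob("*")):
        if src.is_file():
            dest = ARRAY / src.relative_to(CACHE)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(src), dest)  # parity gets updated on the array write

run_mover()
```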
  19. The M1015 has no cache memory; unless your RAID is all SSD, the array will not perform very well.. Plus, a card with a battery unit is really important: it allows you to turn on the write-back cache and gain a significant performance boost. I have personally used the following cards for what you are asking: Areca ARC-1222 (needs a driver), Areca ARC-1882i (needs a driver), HP P400 (limited to 2TB drives and smaller; these are dirt cheap, mine was $50 with battery a year ago, but old, so performance is not the best for a production server), HP P410 (supports larger than 3TB drives), and Dell PERC 5. If you already had, and really wanted to use, the M1015 and save some money.. you "could" create a ZFS array with the M1015 and host your ESXi guests on that. You would still need another datastore for that guest to live on. This is quite a bit more advanced and not as convenient as a stand-alone hardware RAID card; however, the performance will be very similar. Whatever way you do it, I recommend some sort of a backup plan. RAID is not a backup; it is just a form of redundancy for always-available data (not counting RAID 0), sometimes with a performance boost. Here is the official VMware HCL: http://www.vmware.com/resources/compatibility/search.php?deviceCategory=io&releases=76&deviceTypes=12,13,14&page=1&display_interval=10&sortColumn=Partner&sortOrder=Asc Many cards not on this list unofficially work.
  20. I am still way behind on RCs.. I have not been home more than a few days now and then in the last few months. I have not dared to upgrade and then leave town, in case of issues.
  21. Honestly, it is your own preference. I have had 4x LP Hitachis and 4x Seagate 7200s; they were all about the same performance-wise. The Seagates with the 1TB platters did seem to preclear and parity-check a tad faster. I would prefer Hitachi, but those are now history.... Some reviews online say the WD Greens tend to be a little slower of all the 3TB drives. I have never used one, so I can't verify that.
  22. This tutorial/guide might be handy for you if you have not seen it yet. It is what I used to set up my first unRAID server. http://lime-technology.com/wiki/index.php?title=Configuration_Tutorial