marcusone Posted February 2, 2012

What, from my point of view, would also be nice would be a RAID1 mode/option for the parity drive(s). The parity drive is the one with the most traffic, so from that point of view it would make sense to me. cu

But it is the least important drive in actuality. You can lose it and not lose any data.

Agree: you can lose parity without losing any data, and RAID1 on a parity drive is counterproductive. The performance hit would make unRAID very slow. Also, the parity drive is not the most used drive; it's only the most used drive while you are populating the array. Once it's populated with your movies and music, only updates are written to the parity drive. What would be better is RAID1 on cache. Anything placed on cache can be lost, for a maximum period of about 24 hours. The possibility of loss is worse if you use it as an application drive. For those who say they would spend more, the answer is an ARC-1200: http://www.newegg.com/Product/Product.aspx?Item=N82E16816151031

What can be done is a twofold aid to unRAID. Using a SAFE33 arrangement and a pair of sizable drives, you can have a RAID0 parity and a RAID1 cache. This provides a nice performance boost for writes to the array and a safe area for cache and apps. I do this and have had great success with it. Plus, you can have that today instead of waiting for Limetech to develop it.

Won't an M1015 in IR mode work as well (and provide more SATA connections for less than the card you mention)? I've only just started playing with it, but unRAID 5b12a saw the RAID0 virtual drive that I set up on my M1015-IR.
WeeboTech Posted February 2, 2012

I wish I knew the answer to your question. I do not have that controller, so I cannot say whether it works in RAID mode. If someone has that controller, maybe you can test for all of us:

1. Does the controller allow assignment of 2 drives to a RAID set?
2. Does the controller allow the RAID set to have two volumes, of different RAID types?
3. Does unRAID see each volume as a disk?

For the ARC-1200 I can answer yes to each of these. For the 3ware cards, you can create a RAID set that is one volume of one RAID type, RAID0 or RAID1, but not both. unRAID will see the RAID volume as one disk.
marcusone Posted February 2, 2012

Once I get my second M1015 and test rig set up, I'll let you know what I find. I don't believe it can set up multiple RAID types on a pair of drives as you have done with the ARC-1200, but I do know it can create a RAID drive that unRAID sees as a single drive.
WeeboTech Posted February 2, 2012

It will be an interesting comparison. For a PCIe x1 card and two drives, it provides a nice performance bang for the buck; since I bought it used on eBay for under a hundred, I've been quite pleased with it. At the very least, my hybrid RAID1 cache/apps/local/swap drive gives me peace of mind too.

Is a RAID1 set on the M1015 seen by unRAID?
marcusone Posted February 2, 2012

I've only tried a RAID0 set, and it was seen by unRAID. I haven't tried using it yet, but it did show up as a single "double"-capacity drive. (unRAID 5b12a)
WeeboTech Posted February 2, 2012

I wonder how it would improve your parity operation speed. I know the Arecas have an onboard RAM cache, which helps.
Joe L. Posted February 2, 2012

I wonder how it would improve your parity operation speed. I know the Arecas have an onboard RAM cache, which helps.

Probably will not affect it much, unless the data drive can rotate faster. unRAID's write speed is determined by the slower rotational speed of the two disks involved. If the RAID controller can cache an entire stripe of the parity disk, then the data disk will be the slower of the two. Joe L.
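Joe L.'s slower-of-the-two-disks point can be captured in a toy model (my own simplification, not the md driver's real behavior; the halving factor stands in for the extra read pass of the read-modify-write cycle, and the numbers are illustrative):

```shell
# Toy model: every array write must read-modify-write BOTH the data disk
# and the parity disk, so throughput is bounded by the slower member,
# roughly halved by the extra read pass. Illustrative only.
unraid_write_speed() {
    data=$1; parity=$2
    slower=$(( data < parity ? data : parity ))
    echo $(( slower / 2 ))   # MB/s, crude integer estimate
}
unraid_write_speed 100 280   # fast RAID0 parity, slower data disk: prints 50
unraid_write_speed 280 100   # the bottleneck just moves to the data disk: prints 50
```

Speeding up only the parity side moves the bottleneck to the data disk, which is exactly the point being made.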
WeeboTech Posted February 2, 2012

Probably will not affect it much, unless the data drive can rotate faster. unRAID's write speed is determined by the slower rotational speed of the two disks involved. If the RAID controller can cache an entire stripe of the parity disk, then the data disk will be the slower of the two. Joe L.

This is not entirely true. If both disks are slow, then the two operations will be slow. If one of the disks is faster than the other, the operation becomes a little faster, albeit not much. If I can read and write from one disk faster, then the operation is not held back by the second operation; i.e., the other operation benefits from the first operation completing sooner. The problem is that it's not much of a savings unless there is a good-sized cache on the parity disk.

I've tested this operation in so many ways with RAID0 on parity, and it has always shown an improvement in write operations. It just doesn't show enough of an improvement to warrant the cost for most people, unless your controller has write-back cache. Even with the Silicon Image SteelVine processor, which has no cache, I saw an improvement of 3MB/s when using RAID0 on the cache. For most people the $100 cost of a SteelVine processor or the $100-$199 cost of the Areca controller doesn't warrant the upgrade, but I've tested and found that it does increase performance slightly, no matter what the data disk is, 5400 or 7200 rpm.

The real savings is when you have all-day constant updating processes on multiple disks. A torrent server or a Usenet downloader will see the benefit of a RAID0 parity, since the array will be dealing with multiple writes to different locations.
Joe L. Posted February 2, 2012

Basically, you are saying exactly the same thing as I did. I did overlook the multiple "writing" processes, though, and in that case the improvement might be more noticeable when the hardware RAID striping happens to access multiple disks.
marcusone Posted February 3, 2012

Well, I've hit a bit of a bump. I was going to test the performance of my M1015 in IR mode with a RAID0 setup for parity, but it turns out that 2x1TB in RAID0 comes out slightly smaller than 1x2TB. I have two WD 1TB Blacks in RAID0 and one Seagate 2TB drive I was going to use for data, but unRAID won't let me, because the parity drive must be the largest disk. Is there any way to work around this? I don't have another smaller drive right now.
prostuff1 Posted February 3, 2012

HPA the 2TB drive to make it look smaller than it is (i.e., use the drive's host protected area to hide part of its capacity).
marcusone Posted February 3, 2012

HPA the 2TB drive to make it look smaller than it is.

I'm fairly techie, but I don't have a clue how to do that. I assume it's an easy command inside unRAID?
WeeboTech Posted February 3, 2012

I think you can enter this command to see what the current sizes are:

hdparm -N /dev/[hs]d[a-z]

Post the results. I forget the syntax to set it, but once we see your results, I'm sure someone can post the command line to set it.
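For reference, hdparm's -N option can both report and set the visible size. This is a sketch: /dev/sdX and the sector counts are placeholders, and as I understand the option, the leading 'p' makes the clip survive a power cycle, so double-check against your drive's numbers before committing:

```shell
# Show native vs. visible capacity (HPA status); /dev/sdX is a placeholder
hdparm -N /dev/sdX
#   max sectors   = 3907029168/3907029168, HPA is disabled

# Clip the drive to an illustrative sector count just under the size of the
# 2x1TB RAID0 set; the 'p' prefix makes the new limit permanent across
# power cycles (omit it and the clip is lost at the next power-on)
hdparm -N p3900000000 /dev/sdX
```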
marcusone Posted February 3, 2012

Found a smaller drive, but I hope to use the 2TB as a test. hdparm -N fails on the RAID0 drive (likely because the M1015 is presenting a "virtual" drive built from 2 drives). I'll post the sizes that unRAID reports when I'm done testing with the smaller drive, and we can see if we can get the 2TB working (as it's a newer drive and may be faster).

Preliminary results: no surprise to Joe and WeeboTech, there is no real noticeable difference in speeds when writing directly to the protected array (about 30 MB/s; it bounces around more than my "production" box for some reason, but averages about that, likely due to other things going on in the network, as I don't really have a dedicated 'clean' test environment).

Where I do notice the difference is in the parity check/sync: it maxes out the slowest drive, and when it got to the point where only the parity drive was left, it flew at 280 MB/s. I'm going to try a few more tests just in case I'm wrong, but it's looking like RAID0 for parity is not worth doing (at least on the M1015). I'll also try a RAID1 for parity and see what that does to the performance.
WeeboTech Posted February 3, 2012

I'll also try a RAID1 for parity and see what that does to the performance.

This isn't worth your time. There will be a write penalty; without a cache on the controller, it will hurt every parity operation that does a write.

Before you break the RAID0 array, do the writeread10gb test from my Google Code page. Choose a data drive that is freshly formatted; that will show you the maximum potential.

http://code.google.com/p/unraid-weebotech/downloads/detail?name=writeread10gb

Log in to unRAID as root, then run:

./writeread10gb /mnt/disk1/test.dd
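Roughly, a dd-based benchmark like writeread10gb boils down to a sequential write, a sync, and a sequential read of the same file. The sketch below is my approximation as a shell function, not WeeboTech's actual script:

```shell
# Approximation of a writeread10gb-style benchmark (NOT the real script):
# sequential write in 1 KiB blocks, sync, sequential read, cleanup.
writeread_test() {
    target=$1
    blocks=${2:-10000000}   # 10,000,000 x 1 KiB is roughly 10 GB
    # write phase; dd prints its throughput summary on stderr, so merge streams
    dd if=/dev/zero of="$target" bs=1024 count="$blocks" 2>&1 | tail -1
    sync
    # read phase
    dd if="$target" of=/dev/null bs=1024 2>&1 | tail -1
    rm -f "$target"
}
# usage: writeread_test /mnt/disk1/test.dd
```

A smaller block count (the second argument) gives a quick sanity check before committing to a full 10 GB run.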
marcusone Posted February 4, 2012

writing 10240000000 bytes to: /mnt/disk1/10gbfile
206300+0 records in
206300+0 records out
211251200 bytes (211 MB) copied, 4.95318 s, 42.6 MB/s
324436+0 records in
324436+0 records out
332222464 bytes (332 MB) copied, 9.82195 s, 33.8 MB/s
445496+0 records in
445496+0 records out
456187904 bytes (456 MB) copied, 14.8556 s, 30.7 MB/s
577960+0 records in
577960+0 records out
591831040 bytes (592 MB) copied, 19.987 s, 29.6 MB/s
696016+0 records in
696016+0 records out
712720384 bytes (713 MB) copied, 24.9593 s, 28.6 MB/s
817408+0 records in
817408+0 records out
837025792 bytes (837 MB) copied, 29.9948 s, 27.9 MB/s
951568+0 records in
951568+0 records out
974405632 bytes (974 MB) copied, 35.0269 s, 27.8 MB/s
1073426+0 records in
1073426+0 records out
......
9451878+0 records in
9451878+0 records out
9678723072 bytes (9.7 GB) copied, 361.396 s, 26.8 MB/s
9584998+0 records in
9584998+0 records out
9815037952 bytes (9.8 GB) copied, 366.4 s, 26.8 MB/s
9831238+0 records in
9831238+0 records out
10067187712 bytes (10 GB) copied, 371.39 s, 27.1 MB/s
9961128+0 records in
9961128+0 records out
10200195072 bytes (10 GB) copied, 376.439 s, 27.1 MB/s
10000000+0 records in
10000000+0 records out
10240000000 bytes (10 GB) copied, 378.492 s, 27.1 MB/s
write complete, syncing
reading from: /mnt/disk1/10gbfile
10000000+0 records in
10000000+0 records out
10240000000 bytes (10 GB) copied, 135.964 s, 75.3 MB/s
removing: /mnt/disk1/10gbfile
removed `/mnt/disk1/10gbfile'
WeeboTech Posted February 4, 2012

Thanks. How much RAM do you have? Also, there are some tunings you can do via emhttp for the md_write stripe and other values. I think I tried the normal values *2 and also *3, and it did beef up the speeds. But from what I'm seeing, you are about average.
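For reference, the md tunables WeeboTech alludes to can also be set from the console with mdcmd. The path, the tunable names, and the "stock defaults times two" values below are from my memory of unRAID 4.x/5.x and should be verified against your version before use:

```shell
# Illustrative only: roughly 2x the stock unRAID defaults as I recall them
# (md_num_stripes 1280, md_write_limit 768, md_sync_window 288).
# Verify the names and path on your release before running.
/root/mdcmd set md_num_stripes 2560
/root/mdcmd set md_write_limit 1536
/root/mdcmd set md_sync_window 576
# confirm the values took effect
/root/mdcmd status | grep -E 'md_(num_stripes|write_limit|sync_window)'
```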
marcusone Posted February 4, 2012

That was to an old 320GB drive; I'll try again on a newer drive and play with those settings. The PC I'm using right now is a Core 2 Quad with 8GB of RAM (until it gets re-purposed).
marcusone Posted February 6, 2012

With a "newer" drive I got a 40 MB/s average, which is 10 MB/s faster than my current "production" setup... still not likely worth the headache.
dikkiedirk Posted August 25, 2012

Putting some life in an old thread: I finally got around to experimenting a bit more with an ARC-1200 and perhaps an M1015. I set up two 2TB WD Green drives (4TB total), creating a 3TB RAID0 volume for parity and a 500GB RAID1 volume for cache, and threw in a 1TB Samsung for data on the M1015. This is on an X9SCM.

I see parity sync/rebuild speeds of about 42 MB/s from 0 to 100%, which seems a bit slow to me. What is your opinion on this? Which settings on the ARC-1200 influence speed the most? Stripe sizes, or more?
wkearney99 Posted August 25, 2012

The 1200 card is only going to work as fast as the x1 PCIe lane allows, which isn't going to be very fast: roughly 250-500 MB/s for the slot itself, depending on the PCIe generation. Then you're limited by whatever that card will do internally. This also assumes the underlying PCIe fabric has enough lanes free and/or decent performance; I doubt many motherboards put an emphasis on performance for any x1 slots present. So getting 42 MB/s from it doesn't seem unreasonable.
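As a sanity check on the slot-bandwidth argument, the x1 ceiling can be worked out from the per-lane signaling rate and the 8b/10b encoding overhead (rates taken from the PCIe 1.x and 2.0 specs; this ignores packet/protocol overhead, which lowers the usable figure further):

```shell
# x1 ceiling = lane rate (GT/s) x 8b/10b efficiency (0.8) / 8 bits per byte
awk 'BEGIN { printf "PCIe 1.x x1: %.0f MB/s\n", 2.5e9 * 0.8 / 8 / 1e6 }'
awk 'BEGIN { printf "PCIe 2.0 x1: %.0f MB/s\n", 5.0e9 * 0.8 / 8 / 1e6 }'
```

Either generation's raw lane ceiling sits well above 42 MB/s, so any slot-level limit would have to come from the shared PCIe fabric or the card's internals rather than the lane rate itself.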
dikkiedirk Posted August 25, 2012

Do you use or have experience with the ARC-1200? I know it is a PCIe x1 card, but in the X9SCM it sits in at least a PCIe x4 slot, so one would think there should be enough lanes.
WeeboTech Posted August 26, 2012

42 MB/s is not what I would expect from RAID0, even on an x1 lane. You could try moving it to other slots to see what the deal is. On my card, I enabled write-back cache.

The best test is to use the writeread10gb script on the raw drives to determine maximum speed before you build the array. That is, with the 3 drives and emhttp down, format each drive with reiserfs manually. After that, use the writeread10gb script on each drive. That will be the maximum you could possibly achieve on each drive; from there you can factor in all the other issues.

I can say that if you are doing a parity operation (read/write/sync, etc.) and write data to the cache drive at the same time, there will be a performance penalty, and it could be significant, so I do those two operations separately.

If you have a UPS, you can enable the write-back cache in the controller firmware before doing anything else to see if it helps.

I have 15 data drives on one ARC-1200 in an x1 slot, and I get 75-80 MB/s at the start, dropping to about 55 MB/s near the end:

Jul 14 21:22:10 atlas kernel: md: sync done. time=18244sec rate=53538K/sec

I have two Seagate 1.5TB 7200rpm drives with 32MB cache as my RAID0/RAID1 hybrid. If you are using 5400rpm drives for yours, that could be an issue.
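The per-drive test WeeboTech describes might look like this. It is only a sketch: the device and partition names are placeholders, mkreiserfs destroys everything on the target, and the array must already be stopped with emhttp not running:

```shell
# Sketch only: benchmark one raw drive before building the array.
# /dev/sdb1 is a placeholder -- substitute the real device, and note
# that mkreiserfs DESTROYS everything on it.
mkreiserfs /dev/sdb1                # asks for confirmation before formatting
mkdir -p /mnt/test
mount -t reiserfs /dev/sdb1 /mnt/test
./writeread10gb /mnt/test/test.dd   # raw per-drive ceiling
umount /mnt/test
```

Note that the script writes a test file, so it needs a mounted filesystem path (e.g. /mnt/test or /mnt/disk1), not a raw /dev/sdX device.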
dikkiedirk Posted August 26, 2012

Thanks, Weebo. Should I format the 2 drives on the ARC-1200 separately, so not building any RAID yet, and then do the writeread10gb test on these separate drives? I use WD 5900rpm drives but am tempted to switch to some 7200rpm drives; any recommendations for this? I have two x4 slots and an x8 slot on the X9SCM. Could using a different x4 slot make any difference?

I have a UPS, and WB cache is enabled somewhere in the RAID settings. I will break the RAID and try to test each drive separately, if they show up in unRAID. As always, I want to go too fast, assembling before doing any testing.

Can't I use writeread10gb on /dev/sd?? I get an error: /dev/sd?/test.dd is not a directory. So I guess I can only use /mnt/disk?.

Come to think of it, I had this setup running on an old DFI Socket 775 board and was getting parity sync speeds in the 70s MB/s. Maybe the X9SCM motherboard is the problem, or the BIOS.
dikkiedirk Posted August 26, 2012

I'm totally lost here! I want to do a speed test on all disks or volumes using writeread10gb. Can I do this on the 2 volumes I created on the ARC-1200, or on the 2 individual disks? How? And how do I bring down emhttp? Is unmenu gone too then?