Everything posted by Pauven
-
I'm considering using one as well. The 160W PicoPSU would definitely be sufficient to power my build at max load (parity check), the concern is drive spinup. With 5 HDD, the initial power on will be real close to max output of the PSU. As long as it can successfully boot up it should be fine, as it would only have to spin up 4 drives to run a parity check since the cache drive never spins down. If it did work, it wouldn't leave any overhead for additional drives, but I don't see myself adding any more drives any time soon. And it would certainly free up a lot of room in that cramped little case!

r.e. "free up a lot of room in that cramped little case" ==> You can do that with an SFX unit like this: http://www.newegg.com/Product/Product.aspx?Item=N82E16817151063 But obviously the PicoPSU would free up a LOT of room [i.e. the entire PSU bay ] But I agree the PicoPSU is VERY tempting => no minimum load requirement; over 90% efficiency; and supports a peak load of 15A (180W) on the 12v line, so it shouldn't have any problem spinning up my 6 WD Reds (~ 20W spinup current). I could probably get my idle consumption down to ~ 18W or so with that PSU !!

I did a bit of research yesterday trying to determine the maximum number of drives the PicoPSU 160 could spin up. I never found a solid answer, but the general consensus was about 5 3.5" drives. I don't think the full 15A is available for spinning drives once you're powering a PC with it too. I'd be very interested in some real results if one of you takes the plunge.

I'm also getting conflicting information about how efficient these PicoPSUs really are. The PicoPSU takes 12V power in (from a power brick), and converts some of it to 5V & 3.3V for the PC, and passes some unconverted as 12V. The PicoPSU's conversion from 12V to 5V/3.3V is very efficient, 94%+. But some people are saying that the power brick itself, which converts the household AC to 12VDC, is only about 70% efficient. This is very confusing to me, as the PicoPSU power draw numbers seem to be 5W-10W below what normal power supplies consume. I don't know what to believe. I've never really read anything negative about the PicoPSUs, so perhaps that 70% number is made up. Not knowing has caused me to shy away from pursuing a home-made power supply utilizing PicoPSUs plus extra power adapters for the hard drives, because that would be a total waste of time if the power bricks are really that inefficient.

Regardless, I picked up a KingWin LZP-650, which is a 650W Platinum power supply. It has excellent low power efficiency in the neighborhood of 40W to 45W - only about 7W lost as heat, making it 80%+ efficient at ~6% load. This is significantly better than most power supplies of this size. It was a pricey $170, making it more expensive than the processor, motherboard and memory combined!

I did some unRAID spin-up testing yesterday, and everyone was right, unRAID starts all drives simultaneously. Shameful. That's why I had to get the 650W beastie instead of a 300W petite.

By the way, garycase, I'm a little confused by your build. You mentioned you had a 20TB server, made with 6 Red 3TB drives. At first I thought you had just rounded up a bit, from 18TB to 20TB, and that you really had a 7th drive for parity that you hadn't mentioned (since your case has room for a 7th drive internally). Now that you're talking about spin-up for only 6 drives, the math tells me you can't have more than 15TB if one is for parity. Was the 20TB just a typo?
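P.S. For anyone else eyeing the PicoPSU route, here's the spin-up arithmetic I keep scribbling on napkins, in code form. Every figure in it is an assumption pulled from this thread (the 15A peak claim, the ~21W Red spin-up, and a guessed 35W of 12V board draw at power-on), not a measurement:

# Back-of-the-napkin 12V budget for a PicoPSU 160 during spin-up.
PICO_12V_PEAK_A = 15.0        # claimed 12V peak for the 160 (assumption)
SPINUP_W_PER_DRIVE = 21.0     # WD Red 3TB, ~1.73 A @ 12 V (assumption)
BOARD_12V_W = 35.0            # guess at CPU/board 12V draw during power-on

def drives_that_spin(rail_a=PICO_12V_PEAK_A, board_w=BOARD_12V_W):
    budget_w = rail_a * 12.0 - board_w            # watts left over for drives
    return int(budget_w // SPINUP_W_PER_DRIVE)    # whole drives that fit

print(drives_that_spin())   # 6 with these guesses -- fine for 5 drives, but no headroom

With those guesses it lands right at 6 drives, which is why a 5-drive build feels like it should squeak by while anything bigger is asking for trouble.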
-
For starters, it only has 1/4th of the bandwidth of the 2760A

If you're talking PCIe bandwidth, it's not that bad. With 16 drives, each drive would see 250MB/s of bandwidth. The 2760A, with 24 drives, has 333MB/s. So, technically, it has 3/4 the bandwidth of the 2760A. Remember, the 2760A is PCIe 2.0 only, not 3.0. I'm sure the PCIe 3.0 version will be announced real soon, now that I've placed my order... I think 250MB/s is a good number.

That said, I'm not sure what chips are in play on the board itself. The 2760A is special in that each drive gets the full SATA bandwidth spec. If this LSI card is using port multiplier technology, then each drive would have less than the full SATA bandwidth spec. So even though the PCIe bus connection gives plenty of bandwidth for each drive, the card itself may not be the highest performer (but probably not bad either, just average).

By the way, MvL, your numbers are a touch off. You would need 8 on-board SATA ports to get up to 24 total with this card. Pricewise, yes it is cheaper, but not cheaper per port: $409/16=$25/port, $542/24=$23/port. Also consider the motherboard I just picked up was only $50 (yes, it only had 2 SATA ports, but I don't need any motherboard ports). I only found 1 socket 1155 motherboard that had at least 8 SATA3 ports, and it was $400 (ouch!). The cheapest motherboard with 6 SATA3 ports (with enough SATA2 ports to get you to 8 total) was $150. If you drop down to 4 SATA3 and 4 SATA2 ports, you can find options in the $75 range, but they don't look like good choices for motherboards - I think you would be back up around $100 for a decent option.

All said, the price premium for the 2760A is not much when you start running the numbers. And assuming it lives up to the performance potential, I don't think anything can touch it for the money. Of course, we are dealing with 5400rpm drives here, so chances are none of this will amount to much.
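The per-drive numbers above are nothing fancier than slot bandwidth divided by drive count; here it is as a quick sketch, using the usual approximate effective per-lane rates (treat the constants as rough assumptions):

# Rough usable bandwidth per drive: slot bandwidth divided by drive count.
GBPS_PER_LANE = {1: 0.25, 2: 0.5, 3: 0.985}   # approx GB/s per lane by PCIe gen

def per_drive_mb(gen, lanes, drives):
    return GBPS_PER_LANE[gen] * lanes * 1000.0 / drives

print(per_drive_mb(2, 8, 16))    # ~250 MB/s: 16 drives on a PCIe 2.0 x8 card
print(per_drive_mb(2, 16, 24))   # ~333 MB/s: 2760A, 24 drives on PCIe 2.0 x16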
-
Good question. Both processors idle the same. The 1610T is capped on the high end (lower max frequency) to keep heat in check. This allows you to use the processor in smaller cases, with smaller fans, and with smaller power supplies. I wasn't worried about any of those issues. Since my only concern was idle power, I saved the money and went with the regular 1610, which will also have a little extra performance.
-
Amazon's and NewEgg's (among many many others) return policies were just too risky for me to buy from them, especially considering I could be stuck with a very expensive paperweight. Luckily, ProVantage carries the 2760A, and they have a 30 day return policy, even used. Even better, the price was only $542! The savings paid for $78 worth of SFF-8087 cables (6 cables x $13). For once I'm glad NewEgg had such a horrible return policy, otherwise I would have never found this deal.

Since they have such a generous return policy, I also picked up a Celeron G1610 CPU and Foxconn H61S Motherboard. Yes, the H61S is older, but it still supports Ivy Bridge, Gigabit Ethernet, DDR3-1600 memory and PCIe 3.0 x16. The chipset is slightly lower voltage than the H77's, plus the motherboard is barebones, so I'm not wasting any money or electricity on features I don't need for unRAID. The only things I'd be giving up versus the newer H77 boards (upgraded audio, better video, USB 3.0) are features I don't need. When it all comes in (maybe next week), I'll power it up and try unRAID on it.
-
Yes, it looks like a VERY nice card. Now if somebody would just spring for one and test it with UnRAID ... [i'll be happy to test it and post the results if somebody wants to buy me the card ] I've been looking for a place to buy the 2760A that allows returns. So far, no dice. Every company seems to have some disclaimer, that ranges from no returns for opening or installing, to simply receiving it. If anyone finds a source for the Highpoint 2760A that allows returns (for an opened and installed product), please let me know. Thanks, Paul
-
Hey MvL, not sure why Tom would have to allow more drives to use that case, as unRAID supports 24 drives, same as that case. The case uses port expanders, which I believe is the same thing as port multipliers. Bandwidth will be limited by the expanders, same as some of the other options we discussed in this thread. For the price (approx $1k), you are not getting $600 in value over a Norco 4224. I would stay away.
-
Staggered spin-up would also help Tom on his server builds (smaller, cheaper power supplies and more efficient servers), but I just took a look at his Atom based server and he sells it with a 520W power supply by default, and offers a 650W upgrade option; my napkin based calculations lead me to believe a 400W power supply would be better suited to this 14-drive Atom based server, especially considering 4 of those drives are 2.5". I'm thinking energy efficiency isn't high on Tom's priority list. I'm not discounting that unRAID technology is the most energy efficient media serving fault tolerant server, since it only has to spin up a single drive to serve media, but without staggered spin-up the power supply can't be properly sized for maximum efficiency, especially for full-scale servers.

I'm wondering if there's a way I could utilize multiple power supplies? I'm thinking along the lines of a PicoPSU for the motherboard itself, and one or two 'something else' power supplies for the HD array. With each of the power supplies being smaller, and more efficient, the total power consumption could come way down. Looking at these charts: http://www.silentpcreview.com/forums/viewtopic.php?f=6&t=65244 The Seasonic X650 loses about 12W at 40W output (roughly 77% efficiency). The Seasonic SS-350TGM loses about 7W at the same output (roughly 85% efficiency). So the larger power supply costs an extra 5W of waste heat, turning this 45W idling server into a 50W idling server. A PicoPSU, being about 97% efficient, would lose about 1W.

The PicoPSU can handle peak currents of 15A on the 12V rail, or enough to spin up 6-7 drives simultaneously... looks like I would need 5 of these (one for the motherboard, four for the 24 drives). Not exactly cost effective, assuming I could even make it work. Alternatively I could use one PicoPSU, plus two more 12V power adapters, and some creative wiring...

I recently designed and built a pinball machine from scratch, including the power supplies and power driver circuits for the solenoids. Perhaps I need to design and make a MEGApicoPSU that can handle a 24-drive server... Can't be that hard. Sounds like a fun challenge (yes, I'm weird like that). Thanks for starting the staggered spin-up topic!
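Putting those chart readings into code, working from the loss figures (the watt losses are my rough reads off the SPCR plots, so treat them as assumptions, not specs):

# Efficiency at this server's ~40W idle, computed from the watt-loss readings.
def efficiency(load_w, loss_w):
    return load_w / (load_w + loss_w)

for name, loss_w in [("Seasonic X650", 12.0),
                     ("Seasonic SS-350TGM", 7.0),
                     ("PicoPSU DC-DC stage", 1.2)]:
    print(name, round(100 * efficiency(40.0, loss_w)), "% efficient at 40 W out")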
-
Yes, that 236W power consumption is an estimated max parity check value. My best estimate for idle is around 45W. The 2760A does support staggered spin-up, and I was hoping this would work with unRAID. You know for sure that it won't? Do you think the 2760's staggered spin-up will work during the boot cycle, after power on and before unRAID has loaded? Doesn't unRAID support a software based staggered spin-up? I thought this feature had been added a few years ago, but maybe I'm mistaken and it was only requested, or maybe I'm confusing it with Spin-Up Groups. It doesn't sound like too hard of a feature to implement in software, but I'm not the expert. If Staggered Spin-Up is not available in all possible scenarios, you're absolutely right that I would need a larger power supply. From what I understand, the Red 3TB have a 21 watt spin-up. Other drives may be different, so I'll add 20% to that number and assume a 25W spin-up requirement per drive. With 24 drives, that's 600 watts, plus another 130W for the rest of the system, and I need to be looking at 750W power supplies... I hate this because the 20% efficiency range of the 80+ rated power supplies is pushed up to 150W. At idle, this box would only be pulling about 6% of the power supply's rating. Most power supplies are horrible below 10%. I really, really want staggered spin up.
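Here's that worst-case sizing spelled out; the 25W per-drive figure is my padded assumption and the 130W for the rest of the system is an estimate:

# Worst-case PSU sizing if unRAID really does spin every drive up at once.
SPINUP_W_PER_DRIVE = 25.0    # WD Red ~21 W, padded ~20% for other drive models
REST_OF_SYSTEM_W = 130.0     # board, CPU, controller, fans -- my estimate

def spinup_watts(drives):
    return drives * SPINUP_W_PER_DRIVE + REST_OF_SYSTEM_W

print(spinup_watts(24))      # 730 W -> shopping in the 750 W class
print(0.20 * 750)            # 150 W: where a 750 W unit's 80 PLUS sweet spot starts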
-
That's good to know that the 2760 uses supported Marvell 88SE chips, though perhaps not the switching chip. I think I'll go ahead and pick one up (once I verify the return policy) to test in my current server, before I buy any other components.

The 2760/2760A is the only board I found that truly delivers on the bandwidth front, with a very unique arrangement of four 88SE9485's, though technically only 3 of the 4 are used on the 2760A. HighPoint also sells a 2782, with 2 more ports (external this time) that make use of the 4th chip, and supports a total of 32 drives. I think the 4th chip is still present on the 2760, just not utilized. HighPoint also sells lower end versions of the same basic board, like the 2740, which could give someone a more affordable way to test the chipset.

Regarding heat, the spec sheets show a maximum of 45 watts (35W on the 12V, and 10W on the 3.3V). I've been using that figure in my power supply calculations. I think 45W is reasonable for what this card does. Some users have installed a small cooling fan on the heat sink to deal with the heat, which is probably a good idea. Also consider that this card supports several hardware RAID modes that will not be utilized with an unRAID build, so I'm slightly hopeful it won't draw the full wattage rating in this use case.

I really wish they made a 250W PicoPSU...

CPU: 55W
Chipset: 20W
Memory: 5W
2760A: 45W
Fans: 5W
24 HD: 106W
Total: 236W
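And just to show where the 236W comes from (and why I'm wishing for a 250W PicoPSU instead of settling for the 160W model), the same budget as a quick sum; every entry is my estimate from above:

# Max-load budget from the list above; all values are estimates.
budget_w = {"CPU": 55, "Chipset": 20, "Memory": 5,
            "2760A": 45, "Fans": 5, "24 HD": 106}
total_w = sum(budget_w.values())
print(total_w)          # 236 W at full tilt
print(total_w > 160)    # True: beyond what the biggest current PicoPSU supplies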
-
Thanks DirtySanchez, that's pretty close to what I'm now thinking. Instead of the i3-3220T, I'm looking at a Celeron G1610. It has weaker graphics (which is actually good from a power perspective), is on the same Ivy Bridge 22nm process, and seems to idle slightly lower than the i3-3220T. Performance wise, it is very similar. If I went this path, I would run it on something like a H77 based mini-ITX, and your ASUS P8H77-I is on the short list of options I'm considering. Another board I'm considering is the ASRock H61MV-ITX. It's slightly older, but still supports that Celeron G1610, has one PCIe 3.0 x16, and only costs $60. The G1610 is only $50. Add in 4GB of RAM, and I'm out the door for under $150 with an energy efficient and powerful server. I think this combo is gonna be pretty hard to beat. Add in $60 for a 300W 80+ Gold power supply, $620 for the HighPoint 2760A controller card, $450 for a Norco RPC-4224, and another $50 for some SAS cables, and my total is around $1330. That's a cost of only $56 a drive bay, which is not bad at all, especially considering each drive gets dedicated bandwidth of 333MB/s, and the server should have a very low idle power consumption (for a 24 drive server). Still... does no one have any confirmation that the HighPoint 2760A will work with unRAID? I haven't found any other single controller cards that I feel to be an acceptable alternative to the 2760A for a 24-bay server. The port multipliers split bandwidth, which I don't like. The other 24 port controller cards are both more expensive and slower. If the 2760A won't work, I need to take a very different path.
-
Keep in mind that PCIe 1.x x4 (1GB/s) is half the bandwidth of PCIe 2.x x4 (2GB/s), which in turn is half the bandwidth of PCIe 3.x x4 (3.94 GB/s). In this case, it is the AOC-SASLP-MV8 that is the limiting factor as (unless I'm mistaken) it is only PCIe 1.x capable. If 125MB/s per drive is enough bandwidth, that is a limit of 8 drives on that card.

You're absolutely right about the idle power of these drives, so that's something I've neglected in my calcs. Assuming the motherboard+processor are ~20W, and 24 Red drives @ 0.6W each are another 14.4W, this is already a 35 watt server at idle, at best. Add in the HighPoint 2760A, which I'm guessing might idle ~10W, and we're at 45W. That's still a significant improvement over my current system, even if it doesn't come close to my original goal. But like you said, a sub 35W 24 drive server probably isn't possible, at least not without customized circuitry.

I don't really have a specific capacity goal as much as an expandability goal. My personal perspective is that a 24-bay server is cheaper, per drive and per gigabyte, than a small 7-drive server like what you are running, as there are shared hardware costs that are amortized over a larger number of drives. I expand, as needed, using the highest capacity NAS rated drives available at the time. Adding drives ends up cheaper than replacing drives, and 24 bays allows me to keep adding far longer.

I am looking forward to 5TB Reds, though... that will help. A 115TB server, now we're talking! It seems to me that we've been stuck at the current 4TB HD capacity for way too many years, and I have little faith that 5TB Red drives will be out this year.
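For what it's worth, here's that idle estimate as a sketch; every constant in it is a guess (board+CPU, per-drive standby, and especially the 2760A idle figure):

# Best-case idle estimate for a 24-bay build; every constant is a guess.
BOARD_AND_CPU_W = 20.0
RED_STANDBY_W = 0.6          # WD Red 3TB standby, per drive
CONTROLLER_IDLE_W = 10.0     # my guess for the 2760A at idle

def idle_watts(drives):
    return BOARD_AND_CPU_W + drives * RED_STANDBY_W + CONTROLLER_IDLE_W

print(idle_watts(24))        # ~44.4 W, i.e. the ~45 W floor mentioned above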
-
Wow, everyone's contributing some really great info, thanks! I guess I didn't expect my sub 20W goal to be so... challenging. Last year I built an always on HTPC using an AMD A10-5700, and I hit a 23W idle out of the box, no problem (though adding in the CableCard adapter pulled it up to about 35W). I guess I was expecting that the use of a mobile processor would somehow magically idle lower, and I have read (unconfirmed) reports that Intel's mobile processors idle at 800 MHz, whereas their desktop processors idle at 1600MHz. Anyone know if this is true? This could have a significant impact on idle wattage.

That's a great suggestion BetaQuasi. From looking at the specs, this looks almost like a mobile processor in a desktop socket. Any idea at what MHz these idle? I'm not as familiar with Intel's offerings, so your link got me to researching. On the desktop processor scene, the T processors denote their low power offerings. I see many other T processors, many quite a bit newer and faster, all within the same 35W TDP envelope. The only downside I see is that most socket 1155 motherboards have more features, and from an energy efficiency standpoint, more features = more power draw. Any suggestions on a power efficient 1155 motherboard? It doesn't have to be mini-ITX, but that form factor would probably be among the most energy efficient.

Congrats garycase, awesome achievement! I think your post was the most eye-opening for me, as in my mind the Atom processors are a significant step down in horsepower and power consumption, and yet you just barely achieved 20W with no add-in boards. I looked at the specs for the HighPoint 2760A, and it has a 45W max power consumption, but I didn't see any idle power consumption figures. I'm guessing it might... might idle around 10W if I'm lucky. Maybe this needs to be the sub 30W 24-drive server build... I hadn't even considered these Atom processors.

This does bring up several questions for me: Does adding more drives increase CPU utilization during parity checks and rebuilds? I ask because you are getting great performance with 6 drives, but what happens when you have 24? Wouldn't the processor be seeing a much larger workload? As electron286 pointed out, Tom is using the same platform (thanks for the tip!), but that chassis maxes out at 14 drives. I really have no idea how much the processor impacts parity check/rebuild performance, but I'm concerned this platform may not have the grunt for 24 drives.

As a quick note, the SuperMicro X7SPA-H-D525 motherboard's PCIe x16 connection is the physical size only; the electrical connection is PCIe 2.0 x4 (the x8 and x16 pins are not wired up). That means 2GB/s max throughput. The board I listed, the JetWay JNF9G-QM77, has a PCIe 3.0 x16 electrical connection, which is 15.76GB/s max throughput, or roughly 8x faster (but only if used with both a processor and an add-in card that support PCIe 3.0 x16). This PCIe 2.0 x4 connection is still twice as fast as the AOC-SASLP-MV8 needs, but it's not going to match up well to the HighPoint 2760A.

Your 8-hour parity check for 3TB is fantastic, and I would be happy with that, but I would say you currently have the best possible scenario - all drives have a direct, dedicated connection to the motherboard. As you correctly point out, adding the AOC-SASLP-MV8 doesn't get you to 24 drives, so you would still need some port multipliers. I have two concerns/questions with this approach:

1. Does the use of port multipliers affect performance?
I think it has to, because you're taking dedicated bandwidth for 1 drive and sharing it with 5 drives. We're talking IDE levels of bandwidth per drive, but I think this would only rear its ugly head during parity checks/rebuilds. Does anyone know?

2. The AOC-SASLP-MV8 only has a PCIe 1.0 x4 connection, so 1GB/s max throughput. The X7SPA-H-D525 motherboard only has 6 SATA ports, so that means 18 drives would need to be connected through the AOC-SASLP-MV8, and during a parity check each drive's share of the 1GB/s max throughput would be 55.5 MB/s, at best! I'm fairly certain the performance will be horrible! When I ran one of my Adaptec 1430SA's at PCIe 1.0 x1, each of the 4 drives was allocated only 62.5 MB/s, and I was getting 3TB parity checks closer to 30 hours! Unacceptable!

On second thought - it would be better to use the port multipliers on the motherboard SATA ports, and not on the AOC-SASLP-MV8 (is this possible?). That means only 8 drives are sharing that PCIe slot's 1GB/s bandwidth, for about 125MB/s each, so not as bad, but this still goes back to my question about the performance impact of port multipliers. For comparison, the HighPoint 2760A would allocate 333MB/s of bandwidth per drive (on a compatible motherboard), probably way faster than rotating hard drives need, but it certainly wouldn't be holding them back.

I think you're on to something Ford Prefect (love the name, btw)! In years past, Celerons were horrible processors to be avoided at all costs, but the current offerings are basically just the i3 processor minus a few features. Some people are calling them Atom killers, since they are faster than Core 2 Duos and they idle like an Atom: I read a report of one person with a Celeron G1610 Ivy Bridge 2.6GHz 55W processor seeing a 20W idle, with an 860W power supply (good grief!). With a good, efficient 1155 motherboard and super efficient power supply, this might be the right ticket.

Thanks b0ssman. I had stumbled onto some of the same info (in German too) during my searching. I need to do some more research to fully understand how they do it, but it seems one of the main things they are doing is using a PicoPSU - think of a laptop power supply - which is much more efficient. I'm not sure if these are physically compatible with the Norco/X-Case backplane power connections, and even if they were, do they have enough power to spin up 24 drives? The WD Red 3TB peak current is 1.73A @ 12VDC, so I think that is a 21 watt max spin-up power draw. Hopefully a staggered spin-up would work with an add-on controller card, but I'm not sure if the motherboard SATA ports do a staggered spin-up in all scenarios (including boot, before unRAID is loaded). Normal read/write power requirement for the Red drives is 4.4W, so for 24 drives that would be 106W, plus whatever the motherboard, processor, and add-in card draw. I'm not sure if you could do a 24-drive server with less than a 225W power supply (and keep in mind, there are minimum requirements on the 12V rails, so even 225W may not be big enough with all those drives). The biggest PicoPSU I see is rated for 160W.

Thanks for all the info! It's nice to see positive, constructive responses to responsible computing - I was actually expecting many comments along the line of this being a waste of money with no ROI. I'd love to hear some more thoughts and ideas.

Paul
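P.S. To put a floor under those parity-check numbers, the naive math is: divide the link bandwidth among the drives sharing it, then see how long reading a whole drive takes at that rate. Real checks come in slower than this floor (my x1 Adaptec runs took ~30 hours against a ~13 hour floor), but it shows why starving drives hurts so much. The link speeds and drive counts below are just the examples from this post:

# Naive lower bound on parity-check time when drives share one link.
def parity_floor_hours(drive_tb, link_mb_s, drives_sharing):
    per_drive_mb_s = link_mb_s / float(drives_sharing)
    return drive_tb * 1000000.0 / per_drive_mb_s / 3600.0

print(parity_floor_hours(3, 1000, 18))   # ~15 h: 18 drives behind the SASLP's 1 GB/s
print(parity_floor_hours(3, 1000, 8))    # ~6.7 h: only 8 drives behind it
print(parity_floor_hours(3, 250, 4))     # ~13.3 h: my 1430SA stuck at PCIe x1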
-
Hey guys, I'm looking to build an energy efficient and fast 24 bay server, and I could use your feedback. I'm not sure if I'm about to make a big mistake.

Background: I've been running an 18 drive unRAID in a Norco 4020 enclosure for about 4 years now (and luvin' it). I primarily use the server for music and movie playback (Blu-Ray ISO's), plus PC backups. I don't do any extra stuff like web/db serving, torrenting, or encoding, so the horsepower requirements are on the low side. The most challenging thing the server ever does is rebuild data or run a parity check. I used to sleep the server (I like to conserve energy whenever possible), but over the years I have come to find that problematic for various reasons. Lately I've been letting the server run 24-7, and I love the always-on convenience. Unfortunately it draws 108 watts at idle, and 270 during parity checks. I'm looking to build a replacement server, and want to make it more energy efficient this time around. My goal is to idle at 20 watts or less. I'm also looking to improve parity check/rebuild performance, something that seems limited in my current box by the four Adaptec 1430SA cards.

Here's the build:
Processor: Core i5-3320M (Mobile G2 Socket) - $240
Motherboard: JetWay JNF9G-QM77 (Mobile G2 Socket) - $200
Memory: 4 Gigs total (2×204pin SO-DIMM, DDR3-1333) - $35
HD Controller: HighPoint RocketRAID 2760A (24 drive SAS, PCIe 2.0 16x) - $620
Enclosure: Norco RPC-4224 ($450) or X-Case RM424 Pro ($900). Either way, this means SAS.
Total cost is about $1.5-$2k. This is obviously not a value server, but I'm okay with that, as my goals are energy efficiency and performance.

How I chose the components: Starting with the processor, I've never noticed much CPU activity on my current server (AMD Athlon 64 X2 5600+, 2.9 GHz), and I don't anticipate I need a high horsepower CPU. Looking at energy efficient options, I'm leaning towards an Intel mobile processor (Core i5-3320M) in the G2 socket. This is the type of processor that would normally go in a laptop, and it is rated at only 35W TDP, not bad for a dual core that runs at 2.6 GHz and has video built in. I'm expecting this alone would save 50-60 watts over my current build. Am I making a mistake in choosing too weak of a CPU? I do see my Athlon 64 X2 5600+ hit 100% utilization during 18-drive parity checks, but only momentarily. I also think my current performance is bottle-necked by the 4 Adaptec 1430SA cards - if I end up with a faster HD controller card, the CPU might become the bottle-neck. I think the Core i5-3320M is similar in performance, but I'm not really sure. There is a 35W mobile quad-core, the Core i7-3840QM, but at over $600 I don't think that the price premium is worth it.

There aren't many desktop motherboards that have a G2 socket, so the easy option is the JetWay JNF9G-QM77 Socket G2. It only has one PCIe slot, but it's a nice and fast 3.0 16x slot, so tons of bandwidth on that one slot. Of course, since the motherboard only has one slot, that means I need a 24 drive single card SAS adapter to go with the case. I think only having one adapter, instead of my current four, should also reduce power consumption a bit, helping me reach my 20 watt idle goal.

Most of the 24-drive adapters (LSI/Areca/3ware) only have a PCIe 2.0 8x connection, which is technically slower per drive than my current Adaptec 1430SA solution (167 MB/s for 24 drives vs. 250 MB/s for 4 drives), and they are all super expensive, so I'm not considering them.
There's one option, the HighPoint RocketRAID 2760A, that has a PCIe 2.0 16x connection, which could theoretically deliver 333 MB/s to each of the 24 drives, so this would be a 33% increase in bandwidth, and it is also much cheaper. I would really like a PCIe 3.0 16x adapter, which would give 657 MB/s per drive, but I didn't find any. Any thoughts on minimum bandwidth per drive to allow the drives to perform at optimum? I know I see a parity check slowdown if I set my Adaptec 1430SA in 1x mode instead of 4x, but I've never been able to test higher bandwidth to see if they speed up any.

I don't see anyone using the HighPoint RocketRAID 2760A on the forums, and I'm hesitant to be the first, but it seems to be the only card that fits the bill. If this card won't work with unRAID, then my alternatives require multiple SAS adapter cards instead of one, and that would force a different motherboard with more PCIe slots, which would force a regular processor instead of a G2 socket mobile processor. From everything I've looked at, the 2760A with an Intel mobile CPU is probably going to be the most energy efficient 24-bay server I can build, but I have no idea if it will even work. Does anyone have unRAID experience with the 2782/2760/2744/2740 family of HighPoint products? Anyone else build an ultra-low power server? If so, how'd you do it? Thanks for any help! I need it! Paul

EDIT JUN 9, 2013 with the results from the actual build:

THE BUILD
Power Supply: Kingwin Lazer Platinum LZP-650 - SPCR tested this power supply to have 80+ efficiency even below 10% utilization; this server will be idling at 12.5% utilization
Motherboard: Foxconn H61S - Mini-ITX Intel 1155 socket motherboard with minimal features, delivering excellent performance at high efficiency
CPU: Intel Celeron G1610 - Ivy Bridge CPU with low-power HD2500 graphics, very affordable, energy efficient and great performance
Memory: G.SKILL ECO 4GB (2 x 2GB) 240-Pin DDR3 1600 (PC3 12800) - Low voltage (1.35V) for this memory size/speed
SAS Controller: HighPoint RocketRAID 2760A - 24-drive SAS controller with 3x Marvell 88SE9485 HD controllers, PCIe 2.0 x16 connection, very high per-drive bandwidth
24-bay Server Chassis: X-Case RM-424 - Excellent build quality, great SAS backplanes, incredible cooling performance from 3x 120mm fans (MB connection, controllable via Linux)

THE NUMBERS (tested under unRAID 5.0 RC13, Linux kernel 3.9.3)
Foxconn H61S + Intel Celeron G1610 + 4GB RAM (1.35V): 17.50 Watts (cumulative 17.5 Watts) - New Intel power saving features in Linux 3.9.3 saved about 0.5W over 3.4.36 (5RC12a)
Above + X-Case RM-424 SAS BackPlanes: 1.00 Watts (cumulative 18.5 Watts) - Measured with no drives inserted
Above + X-Case RM-424 3x 120mm Case Fans: 6.50 Watts (cumulative 25.0 Watts) - Measured with PWM set to 70, lowest fan idle speed **
Above + HighPoint 2760A SAS Controller: 28.00 Watts (cumulative 53.0 Watts) - Measured connected to SAS BackPlanes with no drives attached
Above + 16 Hard Drives: 1.15 Watts each (cumulative 71.0 Watts) - Standby wattage for 2 x Samsung F2 1.5TB, 3 x Samsung F3 2TB, 11 x WD Red 3TB; all drives averaged about the same wattage
Estimated with 24 Drives: 81.0 Watts - Estimated power consumption of above build when all 24 drives are installed - UNTESTED

**NOTE: I was not able to lower the case fan speed under RC12a for two reasons - (1) The IT8772E chipset (fan control) on the H61S motherboard didn't have support added to the IT87.KO driver until Linux kernel 3.8, and (2) unRAID 5RC12a was delivered without many kernel module drivers, including the IT87.KO driver.
My weak Linux skills were insufficient for me to figure out how to get a newer IT87 driver compiled onto RC12a, but that's a waste of time now anyway since Tom has taken unRAID onto Linux 3.9.x. I was able to load the IT87 driver under 5RC13 and control the case fan speeds from Linux.

Idle wattage on 5RC12a, with identical hardware configuration, is 75W. The 4W savings under RC13 primarily come from the ability to control the case fan speed on my particular motherboard, with small additional power savings from newer Intel CPU power saving features. There was no observable difference in 2760A idle wattage between RC12a and RC13.

Additional numbers coming soon: Idle Wattage with all drives spinning, Wattage with 1 drive spinning playing a Blu-Ray ISO, Wattage while copying data to array, Max Cold Boot Wattage, Max unRAID All-Drive Spin-Up Wattage, Max Parity Check Wattage, Parity Check Speed/Time with only WD Red 3TB drives
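For anyone who wants to try the same fan-speed trick, this is roughly how the it87 route works once the driver is loaded (kernel 3.8+). Treat it as a sketch: the hwmon index and pwm channel vary by board, and on some kernels these files sit under a device/ subdirectory instead:

# Sketch: set a case-fan PWM through the it87 hwmon interface.
import glob

def find_it87():
    for hw in glob.glob("/sys/class/hwmon/hwmon*"):
        try:
            if open(hw + "/name").read().strip().startswith("it87"):
                return hw
        except IOError:
            continue
    return None

def set_fan_pwm(hw, channel, value):                        # value is 0-255
    open(hw + "/pwm%d_enable" % channel, "w").write("1")    # 1 = manual PWM control
    open(hw + "/pwm%d" % channel, "w").write(str(value))

hw = find_it87()
if hw:
    set_fan_pwm(hw, 1, 70)    # the "PWM set to 70" idle setting from the table above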
-
I just read about the new HighPoint Rocket 750. It is a 40 port (40 hard drives!) controller card, and each port is a 6Gb/s SATA port. At first I thought I was looking at a SAS card (which I was hoping it would be), but it looks like the card's 10 connectors work with 4-way breakout cables, so it really is just SATA and not SAS. It also uses one PCIe 2.0 x8 connection. I would have liked x16, but perhaps it's not necessary. It's supposed to come out this month. I'm currently using 4 PCIe cards (and have found it impossible to get them all running at x4 speed due to BIOS issues) plus another 5 ports from the motherboard. This seems like a nice (albeit expensive) solution. http://www.xbitlabs.com/news/storage/display/201304092345011_HighPoint_Launches_40_Port_Serial_ATA_6Gb_s_Controller_Card.html Now where do I find an affordable 40 bay case....
-
Tom, I'm just catching up and would like to cast my vote if it isn't too late:

1) Freeze any feature development and resolve any open 'CRITICAL' bugs with the 5.0 RC's - a task which, it sounds like, may already be complete (performance issues on select hardware configurations are not critical bugs). As part of this, you might want to post a "Last Call for 5.0 RC10 'CRITICAL' Bug Reports" on the forum.

2) Release 5.0.0-x32-Final. Include in the release notes that this 32-bit version does not support memory configurations greater than 4GB, which have been noted to sometimes result in performance issues. Also list any hardware platforms that may have performance issues which may or may not be related to the amount of installed RAM. Users are certainly entitled to run unRAID on any hardware configuration they prefer, but should be informed that choosing to run certain problematic hardware or excessive memory limits their support. Of course, any currently 'Known Issues' should be documented in the release notes.

3) Provide immediate and ongoing support for any new 5.0.0-x32-Final issues that are rated 'HIGH' or 'CRITICAL', releasing 5.0.n-x32 versions as necessary.

4) Once support calms down to only 'MEDIUM' or lower issues, give unRAID a 4-week 64-bit evaluation window. With the clock ticking, create and release 5.1.0-x64-Alpha1 with no changes from 5.0.n-x32-Final other than those necessary to recompile under x64. This release is purely to test the waters and determine if 64-bit is a currently viable path forward for unRAID. Release only to a selected alpha test group, and if it proves successful, proceed to a 5.1.0-x64-Beta1 general population test release. During the 4-week window you may code bug fixes, as long as that effort does not extend beyond the 4-week evaluation window. At the end of the 4-week evaluation window, if x64 unRAID is not ready for a Release Candidate, revert to x32 development.

5) Regardless of the outcome of the x64 experiment, please share the results with the community.

6) Resume feature development on the chosen platform, either x32 or x64, but not both.

Other Thoughts: You may want to take a poll to see how many users are running legacy 32-bit processors that are not 64-bit capable. I would hate to see a large 32-bit hardware only user base alienated (if it even exists), even though I feel it is time for unRAID to make the move to 64-bit. If the 32-bit only user base is small, then it may be worth sun-setting their support. I also believe unRAID's development resources (Tom and, errr... just you Tom) are spread too thin to support both x32 builds and x64 builds going forward, so I see going 64-bit as abandoning 32-bit for future versions. I have concerns that future betas and release candidates may hop around between x32 and x64 builds in much the same way as I have seen test 5.0 versions hop around between various kernel builds looking for a stable base, so I would rather see x32 development take precedence over searching for a stable x64 build. That means that if x64 alpha or beta tests come back with disappointing results, let's parking-lot x64 development in favor of x32 feature development that has long been promised.

As always, thank you Tom for developing and supporting this unique and very capable product. </brown-nosing>

Paul

Paul's Bug Rating Scale:
CRITICAL - On a 'Stock Installation': System crashes, data corruption, common major performance issues that limit usability or broken core functionality, all with no workarounds.
HIGH - On a 'Stock Installation': System crashes, data corruption, common major performance issues or broken core functionality, all with viable documented workarounds. Also common minor performance issues.
MEDIUM - On a 'Stock Installation': Broken non-core functionality and rare performance issues (affecting less than 5% of users). On a 'Non-Stock Installation with 3rd Party Plugins': System crashes, data corruption, and broken core functionality.
LOW - Feature requests, module upgrade requests, specific driver requests.
-
WD Red - new NAS optimized product line
Pauven replied to gorbachev's topic in Storage Devices and Controllers
Both drives that are dead exhibit the same issue: They don't show in BIOS, and during startup unRAID sees them and tries to connect to them. These connection attempts go on for several minutes, delaying the server startup. Eventually it looks like Linux gives up on them and unRAID finishes booting. Inside the GUI, the dead drives are not visible. So these drives are so dead I can't get a SMART report.

Here's the last set of error messages for the two drives. Earlier error messages have the same errors (softreset failed / SRST failed type messages).

Jul 29 16:56:28 Tower kernel: ata19: softreset failed (device not ready) (Minor Issues)
Jul 29 16:56:28 Tower kernel: ata20: softreset failed (device not ready) (Minor Issues)
Jul 29 16:56:28 Tower kernel: ata20: limiting SATA link speed to 1.5 Gbps (Drive related)
Jul 29 16:56:28 Tower kernel: ata20: softreset failed (device not ready) (Minor Issues)
Jul 29 16:56:28 Tower kernel: ata20: reset failed, giving up (Minor Issues)
Jul 29 16:56:28 Tower kernel: ata19: softreset failed (device not ready) (Minor Issues)
Jul 29 16:56:28 Tower kernel: ata19: softreset failed (1st FIS failed) (Minor Issues)
Jul 29 16:56:28 Tower kernel: ata19: limiting SATA link speed to 1.5 Gbps (Drive related)
Jul 29 16:56:28 Tower kernel: ata19: softreset failed (device not ready) (Minor Issues)
Jul 29 16:56:28 Tower kernel: ata19: reset failed, giving up (Minor Issues)

And yes, before you ask, I have tried these drives on other slots / controllers in the server. The only thing I haven't done with them yet is plug them into my Windows machine to see how they behave on another computer.
-
WD Red - new NAS optimized product line
Pauven replied to gorbachev's topic in Storage Devices and Controllers
Update: Out of 6 WD Red 3TB drives, 1 was DOA, and 1 appears to have died during clearing. Of the 4 that survived, one is now Parity, one replaced a 1TB data drive, and the other 2 are pending data drive upgrades. So, for me that's a 33% failure rate, as compared to my 0% failure rate for my Samsungs. Man I miss Samsung. I did have a power failure, apparently long enough to drain my UPS, near the end of the preclear on the last two drives, and the power outage appears to have been between hour 35 and 36 of the preclear, so I'm thinking the preclear actually finished before the power outage. The drive that survived has a valid preclear signature, while the other drive is the one that is now dead. I'm thinking I will run another preclear on the drive that lived, even though the SMART report is flawless. Helmonder, how did your preclear go? -
WD Red - new NAS optimized product line
Pauven replied to gorbachev's topic in Storage Devices and Controllers
Finished a 1 pass pre-clear on 3 of the WD Red 3TB drives with flying colors (everything that matters is still a zero). Of the remaining 3 drives, 1 may be defective (didn't show on boot) and 2 are still in the wrapper. Here's the pre-clear reports: Jul 25 04:03:30 Tower preclear_disk-diff[26320]: ========================================================================1.13 Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == invoked as: ./preclear_disk.sh -A /dev/sdv Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == WDC WD30EFRX-68AX9N0 WD-WMC1T0077429 Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == Disk /dev/sdv has been successfully precleared Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == with a starting sector of 1 Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == Ran 1 cycle Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == Using :Read block size = 8225280 Bytes Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == Last Cycle's Pre Read Time : 7:55:20 (105 MB/s) Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == Last Cycle's Zeroing time : 7:12:13 (115 MB/s) Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == Last Cycle's Post Read Time : 18:04:48 (46 MB/s) Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == Last Cycle's Total Time : 33:13:22 Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == Total Elapsed Time 33:13:22 Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == Disk Start Temperature: 33C Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == Current Disk Temperature: 30C, Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == Jul 25 04:03:30 Tower preclear_disk-diff[26320]: ============================================================================ Jul 25 04:03:30 Tower preclear_disk-diff[26320]: ** Changed attributes in files: /tmp/smart_start_sdv /tmp/smart_finish_sdv Jul 25 04:03:30 Tower preclear_disk-diff[26320]: ATTRIBUTE NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS RAW_VALUE Jul 25 04:03:30 Tower preclear_disk-diff[26320]: Temperature_Celsius = 120 117 0 ok 30 Jul 25 04:03:30 Tower preclear_disk-diff[26320]: No SMART attributes are FAILING_NOW Jul 25 04:58:14 Tower preclear_disk-diff[6222]: ========================================================================1.13 Jul 25 04:58:14 Tower preclear_disk-diff[6222]: == invoked as: ./preclear_disk.sh -A /dev/sds Jul 25 04:58:14 Tower preclear_disk-diff[6222]: == WDC WD30EFRX-68AX9N0 WD-WMC1T0076840 Jul 25 04:58:14 Tower preclear_disk-diff[6222]: == Disk /dev/sds has been successfully precleared Jul 25 04:58:14 Tower preclear_disk-diff[6222]: == with a starting sector of 1 Jul 25 04:58:14 Tower preclear_disk-diff[6222]: == Ran 1 cycle Jul 25 04:58:14 Tower preclear_disk-diff[6222]: == Jul 25 04:58:14 Tower preclear_disk-diff[6222]: == Using :Read block size = 8225280 Bytes Jul 25 04:58:14 Tower preclear_disk-diff[6222]: == Last Cycle's Pre Read Time : 8:09:59 (102 MB/s) Jul 25 04:58:14 Tower preclear_disk-diff[6222]: == Last Cycle's Zeroing time : 7:26:46 (111 MB/s) Jul 25 04:58:14 Tower preclear_disk-diff[6222]: == Last Cycle's Post Read Time : 18:29:42 (45 MB/s) Jul 25 04:58:14 Tower preclear_disk-diff[6222]: == Last Cycle's Total Time : 34:07:29 Jul 25 04:58:14 Tower preclear_disk-diff[6222]: == Jul 25 04:58:14 Tower preclear_disk-diff[6222]: == Total Elapsed Time 34:07:29 Jul 25 04:58:14 Tower 
preclear_disk-diff[6222]: == Jul 25 04:58:14 Tower preclear_disk-diff[6222]: == Disk Start Temperature: 42C Jul 25 04:58:14 Tower preclear_disk-diff[6222]: == Jul 25 04:58:14 Tower preclear_disk-diff[6222]: == Current Disk Temperature: 34C, Jul 25 04:58:14 Tower preclear_disk-diff[6222]: == Jul 25 04:58:14 Tower preclear_disk-diff[6222]: ============================================================================ Jul 25 04:58:14 Tower preclear_disk-diff[6222]: ** Changed attributes in files: /tmp/smart_start_sds /tmp/smart_finish_sds Jul 25 04:58:14 Tower preclear_disk-diff[6222]: ATTRIBUTE NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS RAW_VALUE Jul 25 04:58:14 Tower preclear_disk-diff[6222]: Temperature_Celsius = 116 108 0 ok 34 Jul 25 04:58:14 Tower preclear_disk-diff[6222]: No SMART attributes are FAILING_NOW Jul 25 06:35:12 Tower preclear_disk-diff[23441]: ========================================================================1.13 Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == invoked as: ./preclear_disk.sh -A /dev/sdt Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == WDC WD30EFRX-68AX9N0 WD-WMC1T0073984 Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == Disk /dev/sdt has been successfully precleared Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == with a starting sector of 1 Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == Ran 1 cycle Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == Using :Read block size = 8225280 Bytes Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == Last Cycle's Pre Read Time : 8:23:43 (99 MB/s) Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == Last Cycle's Zeroing time : 7:40:33 (108 MB/s) Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == Last Cycle's Post Read Time : 19:01:48 (43 MB/s) Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == Last Cycle's Total Time : 35:07:05 Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == Total Elapsed Time 35:07:05 Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == Disk Start Temperature: 41C Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == Current Disk Temperature: 33C, Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == Jul 25 06:35:12 Tower preclear_disk-diff[23441]: ============================================================================ Jul 25 06:35:12 Tower preclear_disk-diff[23441]: ** Changed attributes in files: /tmp/smart_start_sdt /tmp/smart_finish_sdt Jul 25 06:35:12 Tower preclear_disk-diff[23441]: ATTRIBUTE NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS RAW_VALUE Jul 25 06:35:12 Tower preclear_disk-diff[23441]: Temperature_Celsius = 117 109 0 ok 33 Jul 25 06:35:12 Tower preclear_disk-diff[23441]: No SMART attributes are FAILING_NOW -
unRAID Server Release 5.0-rc6-r8168-test Available
Pauven replied to limetech's topic in Announcements
Just a quick note to let Tom know that I recently upgraded from 4.7 to 5.0-RC6-R8168-Test, and I've been up a week and have had zero issues. My build is in my sig. For those that care, Parity Check dropped from ~53 MB/s to ~48 MB/s. Thank you Tom. -
WD Red - new NAS optimized product line
Pauven replied to gorbachev's topic in Storage Devices and Controllers
I have 6 WD Red 3TB drives arriving today, and will spend the next several days pre-clearing them. I'll post back later on my impressions of the drives. I'm expecting it will take me a week to integrate them into the array. I currently have 18 Samsung drives (F1's, F2's and F3's), some of which are 3.5 years old, most of which are over 2 years old, and none of which have ever failed. I have at least 4 other Samsung F1's in various computers in the home, and none of those have ever failed. Never had a DOA either. I know everyone's experience is unique, and not all have been as fortunate as I have been, but my word that is an amazing track record. I briefly had a 2TB Western Digital RE4-GP as a parity drive, a really expensive high-end 5-year warranty bugger, but it died within a couple months. I was so P/O'd I never even had it replaced (but with that 5 year warranty, I guess it's not too late). I had no intention of abandoning the Samsung drives, and a very high reluctance to go back to Western Digital, but in the current HD market the new Red's just seemed like a logical choice. Really hoping I don't get burned again. -
+1 me too. Willing to pay extra for a ParityPlus license option. Thank you for everything Tom, I'm very happy with 4.7. Just getting nervous as my 2 yr old array (just checked, 3 years this month!) approaches 20 drives. In my experience when drives start dropping they often like to go in groups. They're clicky like that...
-
I'm a little unsure of where to post this, since we've moved on to Beta 13 and the issue may be hardware, not B12 related. Sorry if I get this wrong. One of the reasons I'm posting this in the B12 thread is that another user had the same error up above (Tower kernel: ata20.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0) which is leading to parity write errors on my server. This issue was not apparent on B7 (and I have not yet reverted to B7 to see if it goes away), and it began the very first time I booted B12. My Parity drive is now giving errors that are causing it to become disabled. On B12, I no longer have parity.

The very first time I booted on B12, it didn't recognize parity and forced me to run a parity sync to start the array. After it completed, the parity drive button color was still red, and I realized there were errors. I rebooted and tried again to create the parity. Here's the odd part: It gets through approximately 75% of the drive before there are any errors. My parity drive is 2 TB, and my largest data drive is 1.5 TB. So it seems to be getting through all the data drive parity creation before erroring. This, of course, is probably just a coincidence, but I didn't want to leave any detail out. Speaking of detail, I've had occasional errors that get corrected by parity checks since day one (a few months ago), but it has never disabled parity before. I've attached the log file with errors.

Here's the SMART report for the parity drive:

Statistics for /dev/sdo WDC_WD2002FYPS-01U1B0_WD-WCAVY0102731
smartctl version 5.38 [i486-slackware-linux-gnu] Copyright (C) 2002-8 Bruce Allen
Home page is http://smartmontools.sourceforge.net/

=== START OF INFORMATION SECTION ===
Device Model: WDC WD2002FYPS-01U1B0
Serial Number: WD-WCAVY0102731
Firmware Version: 04.05G04
User Capacity: 2,000,398,934,016 bytes
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: 8
ATA Standard is: Exact ATA specification draft version not indicated
Local Time is: Sat Dec 5 11:09:08 2009 EST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status: (0x84) Offline data collection activity was suspended by an interrupting command from host. Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run.
Total time to complete Offline data collection: (41160) seconds.
Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported. General Purpose Logging supported.
Short self-test routine recommended polling time: ( 2) minutes.
Extended self-test routine recommended polling time: ( 255) minutes.
Conveyance self-test routine recommended polling time: ( 5) minutes.
SCT capabilities: (0x303f) SCT Status supported. SCT Feature Control supported. SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x002f 197 197 051 Pre-fail Always - 41708
3 Spin_Up_Time 0x0027 164 155 021 Pre-fail Always - 8791
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 452
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0
9 Power_On_Hours 0x0032 095 095 000 Old_age Always - 3667
10 Spin_Retry_Count 0x0032 100 100 000 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 100 100 000 Old_age Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 192
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 90
193 Load_Cycle_Count 0x0032 196 196 000 Old_age Always - 14697
194 Temperature_Celsius 0x0022 127 107 000 Old_age Always - 25
196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 196 196 000 Old_age Always - 1462
198 Offline_Uncorrectable 0x0030 196 196 000 Old_age Offline - 1326
199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 1
200 Multi_Zone_Error_Rate 0x0008 183 169 000 Old_age Offline - 3580

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged. [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

I tried running both the SHORT and LONG SMART tests, but clicking the button in unMENU under B12 doesn't seem to do anything.

As far as next steps, I plan to try using different slots for the parity drive, to rule out cable and controller related issues (the current controller is on the motherboard (ASUS A379T Deluxe, SB700 in AHCI mode) and the alternate controller I will try is an Adaptec 1430SA). I also plan to try IDE mode, different hard drives, and possibly Beta7 again. Unfortunately, it takes about a day to perform each test.

Is there any possibility that this is not a hardware issue and is B12 related? Thanks in advance for any help and insight.
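(Side note for anyone following along: the handful of SMART counters worth watching while I run those tests can be pulled out of the smartctl dump with something like this. It's only a sketch; it assumes smartmontools is installed, and /dev/sdo is just the parity drive's current device name.)

# Pull just the SMART counters worth watching out of a smartctl dump.
import subprocess

WATCH = ("Reallocated_Sector_Ct", "Current_Pending_Sector",
         "Offline_Uncorrectable", "UDMA_CRC_Error_Count")

def smart_summary(dev):
    out = subprocess.check_output(["smartctl", "-A", dev]).decode()
    for line in out.splitlines():
        fields = line.split()
        if len(fields) > 1 and fields[1] in WATCH:
            print(fields[1], "raw =", fields[-1])

smart_summary("/dev/sdo")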
-
How to check assigned PCIe Lanes in OS?
Pauven replied to Pauven's topic in General Support (V5 and Older)
As expected, contacting customer service and explaining the issue was challenging. At this point my case is logged and has been submitted to the testing team. I hope to hear back this year.

The previous generation motherboard to my M4A79 Deluxe is the M3A79 Deluxe, and aside from a slightly older chipset it has one notable difference: the BIOS allows you to manually allocate PCIe lanes. I stumbled upon this feature during my research online today, and I pointed this out to ASUS in the hope that they would incorporate those BIOS parameters into the newer board.

As luck would have it, I actually have the M3A79 Deluxe installed in my HTPC. I booted up into the BIOS to double check the parameters, and found I had to hit the secret key (F4) to make them appear. Since these two motherboards are basically the same, I plan to swap them this weekend. The newer M4A79 is actually better suited to my HTPC anyway, so this seems a logical choice. -
How to check assigned PCIe Lanes in OS?
Pauven replied to Pauven's topic in General Support (V5 and Older)
I made progress in analyzing the performance issue, and thought I would share my troubleshooting steps and results - hopefully they can help others. I wasn't able to find a method to determine PCIe lane allocation in the unRAID linux distribution. I was also concerned about corrupting my array via testing with various cards removed. To sidestep both issues, I installed Windows on a spare drive attached to the motherboard. This allowed me to use Windows based tools to analyze the issue, and remove cards without fear of unRAID seeing the activity and invalidating the array. I will return hardware back to the original configuration before allowing the system to boot off the unRAID flash drive again. I then installed SiSoftware Sandra Lite, which is a fantastic suite of free software tools that can benchmark your system and tell you all kinds of nitty gritty detail. Sure enough, the information on PCI lane allocation was easy to find. Please note, I only browsed the hardware information. I did not run any benchmarks or attempt to access the unRAID drives in any way, for fear of corrupting unRAID data. I don't care about Windows performance anyway, only what PCIe lane width was being assigned to the RAID controller cards. I then set about testing every combination of BIOS settings to see how they affected lane allocation. No matter what I did, I got this: The only discrepancy I found was that in Linux, buses 4 and 5 are actually reported as 5 and 6. That small discrepancy aside, I was able to directly correlate the fast drives to the controllers on buses 2 and 5(aka 6), and the slow drives to the controllers on buses 3 and 4(aka 5). I was also able to see that the Adaptec 1430SA was reporting x4 capability in spite of the motherboard/BIOS assigning only x1 to the card. This testing does not illustrate why I am seeing performance degradation on an x1 link with only 1 drive attached, only that it is related to the issue. Perhaps that is just an idiosyncrasy of the Adaptec 1430SA. I am hopeful that simply establishing an x4 connection will resolve the performance gap. I also tried pulling a controller card that was operating at x4, and this allowed the sister card that was operating at x1 to speed up to x4. Since it seems the hardware is fully capable of running all cards at x4, the limiting factor is the motherboard BIOS. I will be contacting ASUS for resolution. -
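For reference, a stock Linux install exposes the negotiated link width through sysfs as well (lspci -vv shows the same thing on its LnkSta lines), which would avoid the Windows detour; whether unRAID's slimmed-down build ships these paths I can't say, so treat this as a sketch:

# Read the negotiated PCIe link width/speed for every device from sysfs.
import glob, os

def link_status():
    for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
        width_file = os.path.join(dev, "current_link_width")
        if os.path.exists(width_file):
            width = open(width_file).read().strip()
            speed = open(os.path.join(dev, "current_link_speed")).read().strip()
            print(os.path.basename(dev), "x" + width, speed)

link_status()   # compare against the LnkSta lines in `lspci -vv`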
How to check assigned PCIe Lanes in OS?
Pauven replied to Pauven's topic in General Support (V5 and Older)
I'm willing to try two/three card configurations, but not at the expense of my array. Can I remove a controller/4 drives from my system that are part of the array, boot up in a non-functioning mode (being sure NOT to hit repair/restore/format), test the drive speed, then reinstall the card and boot normally? I'm scared I could corrupt my array by testing with active cards and drives removed. I haven't tried 2/3 card setups to this point, as I built the server from the beginning to support 20 drives (the last 4 drives are on the motherboard), and everything was installed from the get-go. I didn't discover the speed issue until later, as I continued to expand the array with new drives. I haven't tried contacting Adaptec yet. My gut instinct is telling me this is a MB issue, so I would probably contact ASUS first. That's the main reason I want a tool that tells me the lane width, so I can provide ASUS (or Adaptec) with some hard data. I don't have any controls for NB to SB lane allocation. On the AMD platform these communications travel over the HyperTransport link. I have moved controllers around, back in the beginning when I first built the server. Haven't tried that since I noticed the performance issue.