Everything posted by JorgeB

  1. You have file system corruption on disk7. Carefully follow the instructions here: you need to run xfs_repair on md7 (see the sketch after this list). Depending on how bad the corruption is there's a risk of data loss, so make sure your backups are up to date.
  2. Of the disks I currently have I'm very impressed with the 8TB Seagate; I just hope they are reliable, as Seagate is not usually my first choice. The 3TB Toshiba is also a good performer; both can do ~200MB/s on the outer cylinders.
  3. He has 5 disks with HPA enabled (see the hdparm sketch after this list):
     ata1.00: HPA detected: current 3907027055, native 3907029168
     ata1.00: ATA-8: SAMSUNG HD204UI, S2HFJ1BZ902507, 1AQ10001, max UDMA/133
     ata2.00: HPA detected: current 1953523055, native 1953525168
     ata2.00: ATA-8: ST31000340AS, 5QJ0Z5VD, SD15, max UDMA/133
     ata3.00: HPA detected: current 3907027055, native 3907029168
     ata3.00: ATA-8: Hitachi HDS722020ALA330, JK1101YAG54KDV, JKAOA20N, max UDMA/133
     ata4.00: HPA detected: current 1953523055, native 1953525168
     ata4.00: ATA-8: WDC WD1001FALS-00J7B0, WD-WMATV1318749, 05.00K05, max UDMA/133
     ata6.00: HPA detected: current 1953523055, native 1953525168
     ata6.00: ATA-8: ST31000340AS, 9QJ1XX25, SD15, max UDMA/133
     A 2TB disk should be: 2,000,398,934,016 bytes [2.00 TB]
  4. Although the 1430SA uses a Marvell chipset it does not use the same MVSAS driver as the SASLP and SAS2LP; I believe the increase in performance on your older system using the tweak was more from lower CPU utilization and less from any benefit to the 1430. The 1430SA has been a very consistent performer since Unraid V5 and always delivers more than 200MB/s per disk, which is consistent with its max theoretical bandwidth of 1000MB/s. Different hardware and tunable settings can make a small difference, but I doubt that anyone with an Intel socket 1155 or above CPU will see any gains from the changes, as the card is performing at maximum speed as is. Average parity check with 4 SSDs - Intel G1620 2.7GHz (MB/s):
     V5.0.6: 213.4 | V6.0.0: 214.9 | V6.0.1: 213.4 | V6.1.0: 213.4 | V6.1.1: 213.4 | V6.1.2: 214.9 | V6.1.3: 214.9 | V6.1.3 with nr_requests=8: 209.3
     AMD users may see some difference as speed is not so consistent. Average parity check with 4 SSDs - AMD X2 4200+ 2.2GHz (MB/s):
     V5.0.6: 180.6 | V6.0.0: 200.1 | V6.0.1: 200.1 | V6.1.0: 181.9 | V6.1.1: 196.4 | V6.1.2: 180.9 | V6.1.3: 201.4
     Unfortunately my AMD board is not working at the moment so I can't test with the tweak.
  5. The latest BIOS is 2507; you can download it from the Adaptec website and flash it. http://www.adaptec.com/en-us/downloads/bios_fw/bios_fw_ver/productid=aar-1430sa&dn=adaptec+serial+ata+ii+raid+1430sa.html
  6. Slightly OT, but I thought that a redball only results from a write error. Here is what I thought happened with a read error, and that might produce a redball: if the data cannot be read, then unRAID will "reconstruct" the data from the other disks + parity, and then attempt to write that data back to the disk. If that write fails then you get a redball. But if parity is being built to a new disk how can it reconstruct the data that can't be read? Is this a special case for redballing, or have I got it all wrong? I think you're right; it was some time ago, and I think what I got were several read errors.
  7. OK, I misunderstood, restore device configuration is restoring from a flash backup, not doing a new config. This is what happened to me once and why I'd like this feature request:
     Server was all green
     I started to upgrade my parity drive
     Forgot to make a flash backup of the old config, yes I know I should have
     During the parity sync one of the data disks redballed with read errors
     So I had to put the old parity back, did a new config and started the array; thankfully the problem disk errors were not at the beginning and I was able to stop it without invalidating my parity and then rebuilt the problem disk. If the disk was completely dead I believe I could not have recovered from this without this feature request.
  8. Yes, you can have up to 12 devices. Your board should work with 2 x 1430SA, so you can buy one now and another in the future. Another option would be to buy an 8 port SAS card now; the most commonly used with Unraid are:
     Supermicro SASLP – plug n play, probably one of the cheapest 8 port cards and works OK with Unraid, but somewhat bandwidth limited; with 8 disks on the controller it will limit your parity checks to ~80MB/s
     Supermicro SAS2LP – plug n play, there was a performance issue but it was solved and won't be a problem on the next Unraid release
     IBM M1015 or Dell Perc H310 – IMO one of the best 8 port cards you can buy now for Unraid, but it has to be flashed to IT mode
  9. If you think you will not need more than 4 ports I recommend the Adaptec 1430SA, relatively cheap on eBay and it works very well with Unraid; just make sure it's using the latest firmware for >2TB support.
  10. In this case you would not use the 'trust parity' box. Instead you restore the device configuration you had before, then type a command at the console or telnet/ssh session: mdcmd set invalidslot <N>, where <N> is the disk number that you want disabled. After typing this command, you click 'Start' back on the webGui (without doing an intervening browser refresh). The array will come up with that disk disabled, and if a device has been assigned, it will kick off the reconstruct. [The way the 'trust parity' box is implemented is, if checked, emhttp will execute 'mdcmd set invalidslot 99' just before starting the driver.] I'm trying to test this procedure but I think I'm doing something wrong. This is what I did (see the sketch after this list):
      Created a parity + 2 disk array, xfs formatted, copied some data to disk1
      Stopped array
      New config
      Selected same parity and disk 2
      Selected different disk as disk 1
      Without starting the array typed on the console: mdcmd set invalidslot 1
      Started array
      Instead of rebuilding disk 1, it appears as unmountable and Unraid starts doing a parity sync. Also tried invalidslot 2 with the same result, as I was not sure if slot 1 is parity or disk 1. Can anyone see what I'm doing wrong?
  11. Although the tweak fixed the issue for me (and from reading this thread it appears to work for everyone else with the SAS2LP), I'm very happy that Tom has fixed the underlying issue; I believe the SAS2LP performance has degraded further with almost every new kernel release since V5beta12, so there was a risk that the issue could reemerge in the future. Hmmm, I wonder if that means I'll have to buy a new disk for my servers soon…
  12. I have 8... but 2 are not in the array... in fact I may have had less than 6, as I'm not sure where my cache disks (2) are located! Jim If I trust the ordering sda-sdx... I had 5 disks in the parity check on the SASLP controller (11 total in the array including parity). I expected 6 array disks as your speed is very close to what I got from my tests with 6 SSDs: default - 93.1MB/s, nr_requests=8 - 105MB/s. In any case your improvement is in line with what I get with 5 to 8 disks, between 9 and 12%, so I believe most SASLP users will also benefit from this tweak, naturally only if there aren't any other bottlenecks.
  13. No idea. Looking at Supermicro's specs, it IS based on a Marvell chip (Marvell 6480), but I don't know if that has the same issues as the 9480 chip used in the SAS2LP. Very easy to test => just do a parity check (just run it for perhaps 10 minutes and update the status), then change the nr_requests values and repeat the process. OK, with the default I was at 90-95MB/s, with the change I was at ~102-106MB/s, so there is a speed up for the SASLP original card as well. I guess I put this in the go script? (see the sketch after this list) Jim Just out of curiosity and to see how your speed compares with my own testing, how many disks are on the SASLP? I'm guessing no more than 6?
  14. That seems incorrect to me. Even with its only 4 PCIe lanes (assuming PCIe v2) that card offers a bandwidth of ~2GB/s. That should be enough for 8 drives at ~200MB/s (which I haven't yet seen). Supermicro even states 300MB/s per channel - referring to the 8 SATA channels. A parity check is only as fast as your slowest drive's read rate. Maybe that's your 80MB/s cap? The SASLP is PCIe gen1 4x, max theoretical bandwidth is 1000MB/s; the max I could get in Unraid V6 was ~580MB/s, and with the same tweak that helps the SAS2LP it goes to ~640MB/s max. You can see more details in my post where I tested various controllers with SSDs; the SASLP was particularly slow when compared for example with the Adaptec 1430SA, also PCIe gen1 4x but able to deliver ~840MB/s max.
  15. I haven't built the system, but I'm planning to eventually have 24 drives in a Norco 4224 with a Xeon E5-1650v3 and a SUPERMICRO MBD-X10SRA-F-O. I would just get one 16 port or two 8 port SAS cards and not mess with a SAS expander. I currently use an LSI 9211-8i that is flashed to IT mode. There is a Supermicro AOC-SASLP-MV8 3Gbps SAS card that a lot of users here use; it is cheaper than the LSI cards and since we use mechanical hard drives there really isn't any bottleneck. I was able to get the AOC-SASLP-MV8 for about $75. The SASLP is somewhat bandwidth limited; in my experience a fully loaded card will limit a parity check/sync and disk rebuild to 80MB/s max, and while that is a perfectly acceptable speed it will be a bottleneck for the slowest disks on the market today and a bigger one for the fastest ones that can do 200MB/s+ on the outer tracks. Just so you are not surprised after getting one; decide now if that's an acceptable speed for you.
  16. My backup strategy is split two ways: data I can't lose and data that if lost will make me cry for a day or two but that I can mostly recover with some time and work. Data I can't lose is backed up weekly to another Unraid server, but for most of my servers I have no backups; I plan to install dual parity when available, and although not a backup I believe the risk of losing data will be low enough. If I could afford it, both in cost and space, I would backup every Unraid server to another.
  17. It does improve a little for me, back to V5 speed. SASLP with 8 SSDs (MB/s):
      V5.0.6: 80.6 | V6.0.0: 72.4 | V6.0.1: 73.3 | V6.1.0: 70.1 | V6.1.1: 71.0 | V6.1.2: 71.3 | V6.1.3: 71.3 | V6.1.3 (nr_requests=8): 80.4
      Unfortunately the SASLP is somewhat bandwidth limited, so there's not a big improvement, but in a big array you can gain 1 or 2 hours.
  18. I also noticed that with controllers that show no improvement in speed the CPU utilization during a parity check is lower with the tweak, e.g.:
      1430SA with 4 SSDs: default - ~31%, nr_requests=8 - ~24%
      Dell H310 with 8 SSDs: default - ~47%, nr_requests=8 - ~40%
      Speed was very similar with the 1430SA and identical with the H310, so I believe that servers with close to max CPU utilization can show a little speed improvement even if the controller used is not affected.
  19. During the early stages of the parity check on my SAS2LP setup (first 10-20% of the array maybe?) the change to nr_requests more than doubled the speed, but after that it started tapering off and eventually settled on the old speeds. After the parity check completed, the average speed and duration remained pretty much the same as before, with nr_requests left at the default of 128. Because you have 4 different sizes of disks it's normal for the speed to vary greatly: at about 20% you will be on the last third of your 1.5TB disks, which will bring your speed way down to below 100MB/s; from 30% to 40% you will be on the last quarter of your 2TB disks, so again speed will be low; then from 40% to 60% you're on the last third of your 3TB disks; speed should pick up a little after 60%, but soon after you're going to reach the inner tracks of your 5TB drives, so low speed again. So you'll never have great parity check speed with so many different sizes; in your case the biggest difference with the tweak should be between 0 and 15% of your array.
  20. That's interesting, for me the 1430SA has been very consistent since V5, but maybe the hardware used makes a difference. 1430SA with 4 SSDs (MB/s):
      V5.0.6: 213.4 | V6.0.0: 214.9 | V6.0.1: 213.4 | V6.1.0: 213.4 | V6.1.1: 213.4 | V6.1.2: 214.9 | V6.1.3: 214.9 | V6.1.3 (nr_requests=8): 209.3
      Besides the SAS2LP, I did find some small gains on the SASLP with the tweak, back to V5 speeds. SASLP with 8 SSDs (MB/s):
      V5.0.6: 80.6 | V6.0.0: 72.4 | V6.0.1: 73.3 | V6.1.0: 70.1 | V6.1.1: 71.0 | V6.1.2: 71.3 | V6.1.3: 71.3 | V6.1.3 (nr_requests=8): 80.4
      But anyone experiencing lower parity check speed since V5 should try it to see if it makes any difference.
  21. That’s how it should be, the sequential read speed slows down as the disk moves from the outer to the inner tracks, I have mostly WD green drives, parity check starts at ~150MB/s and ends at ~80MB/s, average speed in the end is a little over 100MB/s.
  22. After testing with 8 faster 120GB SSDs I'm satisfied that this fix (or workaround) solves the issue; this result is very close to what I get with a Dell H310 on the same server, more than enough speed for the fastest HDDs on the market and then some.
      default:
      Notice [TESTV6] - Parity check finished (0 errors) Duration: 20 minutes, 53 seconds Average speed: 95.8 MB/sec
      nr_requests=8:
      Notice [TESTV6] - Parity check finished (0 errors) Duration: 6 minutes, 31 seconds Average speed: 307.0 MB/sec
      Also it does not appear to have a negative impact on writes, at least on parity syncs and disk rebuilds:
      default:
      Notice [TESTV6] - Parity sync: finished (0 errors) Duration: 6 minutes, 1 second Average speed: 332.5 MB/sec
      Notice [TESTV6] - Data rebuild: finished (0 errors) Duration: 5 minutes, 59 seconds Average speed: 334.4 MB/sec
      nr_requests=8:
      Notice [TESTV6] - Parity sync: finished (0 errors) Duration: 6 minutes, 1 second Average speed: 332.5 MB/sec
      Notice [TESTV6] - Data rebuild: finished (0 errors) Duration: 6 minutes, 6 seconds Average speed: 328.0 MB/sec
      Many thanks to LT and Eric for fixing an issue that although not critical was very frustrating. P.S.: as Eric pointed out, the disk tunable settings can have a huge impact on parity check speed with the SAS2LP; with these SSDs and nr_requests=8 the default settings gave me ~150MB/s, and after running the tunables tester I ended up with 300MB/s+.
  23. I wasn't expecting a speed increase for SSDs, but I guess I wasn't expecting really (old?) SSDs to be used. From the Main tab, click on one of the SSDs to view its details, then click on the Identity tab. What is listed for the 'ATA Version' and 'SATA Version' fields? The SSDs are ACS-2, and the parity check speed on the SAS2LP doubled from ~100MB/s to >200MB/s. They are all the same model (see the smartctl sketch after this list):
      Device Model: TS32GSSD370S
      Serial Number: C160931700
      Firmware Version: N1114H
      User Capacity: 32,017,047,552 bytes [32.0 GB]
      Sector Size: 512 bytes logical/physical
      Rotation Rate: Solid State Device
      Device: Not in smartctl database [for details use: -P showall]
      ATA Version: ACS-2 (minor revision not indicated)
      SATA Version: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
      Local Time: Sat Oct 31 08:49:03 2015 GMT
      SMART support: Available - device has SMART capability.
      SMART support: Enabled
      SMART overall-health: Passed
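
A minimal sketch of the xfs_repair procedure mentioned in post 1, assuming the array is started in maintenance mode so /dev/md7 exists but is not mounted (the device name is an example, adjust for your disk):

    # Dry run: check the file system without making any changes
    xfs_repair -n /dev/md7

    # If problems are reported, run the actual repair
    xfs_repair /dev/md7

    # If xfs_repair refuses to run because of a dirty log it will suggest -L;
    # zeroing the log can lose recent metadata changes, so use it as a last resort:
    # xfs_repair -L /dev/md7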
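
For the HPA entries in post 3, a hedged sketch of inspecting and removing an HPA with hdparm; the device name and sector count are examples taken from the log above, and changing the max address rewrites the disk's reported capacity, so be sure nothing depends on the hidden area first:

    # Show current vs native max sectors (an HPA is present when they differ)
    hdparm -N /dev/sdb

    # Restore the native max address, e.g. for the 2TB Samsung above;
    # the 'p' prefix makes the change permanent across power cycles
    hdparm -N p3907029168 /dev/sdb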
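
A sketch of the invalidslot sequence being tested in post 10, exactly as described there; slot numbering has changed between Unraid versions, which may be why the test did not behave as expected, so treat this as the procedure under discussion rather than a verified recipe:

    # After doing the new config and assigning disks, but before starting the array:
    mdcmd set invalidslot 1     # <N> = slot of the disk to be disabled and rebuilt

    # Then click 'Start' on the webGui without refreshing the browser;
    # the array should come up with that slot disabled and kick off the rebuild.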
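
The nr_requests tweak discussed in posts 11-20 is just a sysfs write; a minimal sketch of applying it from the go script, where the device names are examples and the value 8 is the one used in the tests above (check whether it helps your controller before making it permanent):

    # Lower nr_requests for the array disks attached to the SASLP/SAS2LP
    for dev in sdb sdc sdd sde; do
        echo 8 > /sys/block/$dev/queue/nr_requests
    done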
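
The identity details quoted in post 23 come from SMART data; a quick sketch of pulling the same fields from the command line with smartctl (the device name is an example):

    # Print drive identity, including the ATA Version and SATA Version fields
    smartctl -i /dev/sdb

    # Full SMART report (identity, attributes, self-test log)
    smartctl -a /dev/sdb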