gillman99 Posted July 1, 2015
I had an old x4 SATA controller card that was working great, but I needed to expand. I replaced that card with the AOC-SAS2LP-MV8, with everything else staying exactly the same, and now a parity check takes 24 hours vs 8 hours previously. Please note, I'm only checking, not correcting. I'm running unRAID v6 with 9 drives total (5 on the motherboard, 4 on the SuperMicro card), connected with a forward breakout cable. Any ideas where to look to see what's causing the performance issue?
uldise Posted July 1, 2015
We need more info about the motherboard and which PCIe slot you're using for the SuperMicro card. Are any other PCIe cards installed? If yes, in which slots?
gillman99 Posted July 1, 2015
I'm using an ECS A885GM-A2 motherboard. I have no other cards installed, and the SAS2LP is in slot #21 as denoted in the manual. There are 2 PCIe Gen2 slots, but the manual doesn't say whether that's slot 1 or 2.
uldise Posted July 1, 2015
I looked at your board's manual and everything seems correct; that slot is x16 electrically. You could try the second PCIe slot too, just for testing.
gillman99 Posted July 1, 2015
I moved the card to the upper slot and the estimate now says 16 hours. So, here's where I'm at:
Old card: 8 hours
New card in bottom slot: 24 hours
New card in upper slot: 16 hours
A little annoying, but I guess I could manage if this is as good as it gets. Any pointers on what else to configure or look into to improve this would be appreciated.
Squid Posted July 1, 2015
Are there any errors in the syslog that indicate any problems with the drives? Post the diagnostics file.
gillman99 Posted July 1, 2015
Here's the syslog so far. Currently 2.3% of the way through the parity check.
Jul 1 10:01:55 ss emhttp: shcmd (42): mkdir /mnt/user
Jul 1 10:01:55 ss emhttp: shcmd (43): /usr/local/sbin/shfs /mnt/user -disks 510 -o noatime,big_writes,allow_other -o remember=0 |& logger
Jul 1 10:01:55 ss emhttp: shcmd (44): rm -f /boot/config/plugins/dynamix/mover.cron
Jul 1 10:01:55 ss emhttp: shcmd (45): /usr/local/sbin/update_cron &> /dev/null
Jul 1 10:01:55 ss emhttp: shcmd (46): :>/etc/samba/smb-shares.conf
Jul 1 10:01:55 ss emhttp: Restart SMB...
Jul 1 10:01:55 ss emhttp: shcmd (47): killall -HUP smbd
Jul 1 10:01:55 ss emhttp: shcmd (48): cp /etc/avahi/services/smb.service- /etc/avahi/services/smb.service
Jul 1 10:01:55 ss emhttp: shcmd (49): pidof rpc.mountd &> /dev/null
Jul 1 10:01:55 ss emhttp: shcmd (50): /etc/rc.d/rc.atalk status
Jul 1 10:01:55 ss emhttp: Start AVAHI...
Jul 1 10:01:55 ss emhttp: shcmd (51): /etc/rc.d/rc.avahidaemon start |& logger
Jul 1 10:01:55 ss logger: Starting Avahi mDNS/DNS-SD Daemon: /usr/sbin/avahi-daemon -D
Jul 1 10:01:55 ss avahi-daemon[1596]: Found user 'avahi' (UID 61) and group 'avahi' (GID 214).
Jul 1 10:01:55 ss avahi-daemon[1596]: Successfully dropped root privileges.
Jul 1 10:01:55 ss avahi-daemon[1596]: avahi-daemon 0.6.31 starting up.
Jul 1 10:01:55 ss avahi-daemon[1596]: WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
Jul 1 10:01:55 ss avahi-daemon[1596]: Successfully called chroot().
Jul 1 10:01:55 ss avahi-daemon[1596]: Successfully dropped remaining capabilities.
Jul 1 10:01:55 ss avahi-daemon[1596]: Loading service file /services/sftp-ssh.service.
Jul 1 10:01:55 ss avahi-daemon[1596]: Loading service file /services/smb.service.
Jul 1 10:01:55 ss avahi-daemon[1596]: Loading service file /services/ssh.service.
Jul 1 10:01:55 ss avahi-daemon[1596]: Joining mDNS multicast group on interface eth0.IPv4 with address 192.168.0.130.
Jul 1 10:01:55 ss avahi-daemon[1596]: New relevant interface eth0.IPv4 for mDNS.
Jul 1 10:01:55 ss avahi-daemon[1596]: Network interface enumeration completed.
Jul 1 10:01:55 ss avahi-daemon[1596]: Registering new address record for 192.168.0.130 on eth0.IPv4.
Jul 1 10:01:55 ss avahi-daemon[1596]: Registering HINFO record with values 'X86_64'/'LINUX'.
Jul 1 10:01:55 ss emhttp: shcmd (52): /etc/rc.d/rc.avahidnsconfd start |& logger
Jul 1 10:01:55 ss logger: Starting Avahi mDNS/DNS-SD DNS Server Configuration Daemon: /usr/sbin/avahi-dnsconfd -D
Jul 1 10:01:55 ss avahi-dnsconfd[1605]: Successfully connected to Avahi daemon.
Jul 1 10:01:55 ss emhttp: Starting Docker...
Jul 1 10:01:55 ss logger: /usr/bin/docker not enabled
Jul 1 10:01:56 ss avahi-daemon[1596]: Server startup complete. Host name is ss.local. Local service cookie is 4200031360.
Jul 1 10:01:57 ss avahi-daemon[1596]: Service "ss" (/services/ssh.service) successfully established.
Jul 1 10:01:57 ss avahi-daemon[1596]: Service "ss" (/services/smb.service) successfully established.
Jul 1 10:01:57 ss avahi-daemon[1596]: Service "ss" (/services/sftp-ssh.service) successfully established.
Jul 1 10:03:21 ss kernel: mdcmd (47): check NOCORRECT
Jul 1 10:03:21 ss kernel: md: recovery thread woken up ...
Jul 1 10:03:21 ss kernel: md: recovery thread checking parity...
Jul 1 10:03:21 ss kernel: md: using 1536k window, over a total of 1953514552 blocks.
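The "mdcmd (47): check NOCORRECT" line above confirms this is a read-only (non-correcting) check. For anyone following along at the console, a minimal sketch of driving and watching the same check by hand, assuming unRAID v6's mdcmd wrapper is on your path and that nocheck is the cancel subcommand (both assumptions worth verifying on your own install before relying on them):

    # start a non-correcting parity check (the same command emhttp logged above)
    mdcmd check NOCORRECT
    # watch the raw progress counters the unRAID md driver exposes (field names may vary by release)
    grep -i resync /proc/mdstat
    # cancel a running check (assumed subcommand; confirm it on your version first)
    mdcmd nocheck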
Squid Posted July 1, 2015
A full syslog would be more helpful. Also, parity checks etc. are highly dependent upon the disk tunables (and the values required change depending upon the hardware). You might want to try this: http://lime-technology.com/forum/index.php?topic=29009.0 and see if you get some speed back.
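For context on what that tester adjusts: unRAID's md driver has a handful of tunables (md_num_stripes, md_write_limit, md_sync_window) that are saved in /boot/config/disk.cfg and can be changed at runtime. A minimal sketch of checking and test-driving them by hand, assuming those tunable names and the mdcmd set syntax apply to your release (the linked script is the authoritative, automated version of this):

    # show the tunables currently saved on the flash drive
    grep -E 'md_(num_stripes|write_limit|sync_window)' /boot/config/disk.cfg
    # try a larger sync window for the next check (assumed syntax; the value is only an example)
    mdcmd set md_sync_window 1024
    # then start a non-correcting check and compare the speed after a few minutes

Values applied this way don't survive a reboot; to make them permanent they would normally be entered under Settings -> Disk Settings in the webGUI.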
gillman99 Posted July 1, 2015
Attached is the syslog: syslog.txt
bkastner Posted July 1, 2015
It would be helpful to know what speed the parity check is completing at. Someone posted about these cards a few days ago and I mentioned I average around 95 MB/s, I think (I'm partway through my monthly check right now, so I can't confirm). There is also a tunables script you can run to help optimize things. Not saying this is your issue, but something to be aware of.
gillman99 Posted July 1, 2015
I'm getting about 32 MB/s.
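A rough sanity check on those numbers: the syslog shows the check covering 1,953,514,552 blocks, i.e. roughly 2 TB of parity, and the check can only finish once the largest disk has been read end to end. At a sustained 32 MB/s that works out to about 2,000,000 MB / 32 MB/s ≈ 62,500 s ≈ 17.4 hours, which matches the ~16-hour estimate from the upper slot. The old card's 8-hour checks imply an average of roughly 2,000,000 MB / 28,800 s ≈ 70 MB/s, and bkastner's ~95 MB/s would cover the same 2 TB in under 6 hours. In other words, the reported durations are consistent with a genuine throughput drop at the controller, not a reporting quirk.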
JorgeB Posted July 1, 2015
I have the same problem: my parity check speed went from 80 MB/s with the old SASLP to 40 MB/s with the new SAS2LP. Interestingly, parity syncs work as they should, at speeds above 100 MB/s; only parity checks and disk rebuilds are affected. You can read about my problem and all the tests I made here: http://lime-technology.com/forum/index.php?topic=39125.msg386462#msg386462
gillman99 Posted July 1, 2015
Thanks Johnnie Black. Hopefully they fix this soon.
JorgeB Posted July 1, 2015
I hope so. I sent an email to Lime Tech over a week ago and haven't gotten a reply, but they should be aware of the issue, and this is a fairly common card in unRAID builds. In the meantime I'm back to using the old SASLP. Bear in mind that as you add more disks, parity checks will slow down even more; if you connect 8 disks to the SAS2LP, you can expect half the speed you're getting now with 4.
BRiT Posted July 1, 2015
What is your pollable parameter set to, and what is the UI update value set to?
bkastner Posted July 1, 2015
I don't necessarily agree with this. I have all my disks attached to 2 SAS2LP cards, 15 disks in total, so both cards are almost fully utilized. My monthly parity check just finished half an hour ago:
Last checked on Wed 01 Jul 2015 06:46:51 PM EDT (today), finding 0 errors.
Duration: 17 hours, 46 minutes, 50 seconds. Average speed: 93.8 MB/sec
gillman99 Posted July 2, 2015
BRiT, can you tell me how to check these parameters? Thanks, Mark
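BRiT didn't answer in this thread, but assuming "pollable parameter" means the SMART polling tunable and "ui update value" means the webGUI refresh interval (my reading of the question, not his), both are stored on the flash drive and can be checked from the console. A minimal sketch under those assumptions:

    # SMART attribute polling interval (also visible under Settings -> Disk Settings)
    grep poll_attributes /boot/config/disk.cfg
    # dynamix webGUI display/refresh settings (file name is an assumption; browse /boot/config/plugins/dynamix if it differs)
    cat /boot/config/plugins/dynamix/dynamix.cfg

Neither setting should change raw check throughput much, but a very aggressive polling or refresh interval can add periodic drive and CPU activity during a check.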
JorgeB Posted July 2, 2015
Your average speed is not bad, but since you have different-size disks it's not a very good indicator: past the 3 TB mark you lose half your disks, and from 4 TB to 6 TB the check is running with only 2 disks, which greatly inflates the average. Can you please start a parity check, wait 5 minutes, and post the speed you're seeing? In the link I posted earlier, on the same test server with a SAS2LP, the parity check went from 197 MB/s with unRAID v5 to 70 MB/s with v6, so clearly there is an issue, although maybe some builds are more affected than others.
@gillman99: You can try running the tunables tester; some people get improved speeds. In my case it didn't make a significant difference, and I expect you won't get near the speeds of your old card.
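To illustrate why an overall average can mislead (hypothetical numbers, not bkastner's actual disk mix): suppose parity is 6 TB and most data disks are 3 TB. The first 3 TB of the check reads every disk at once and might crawl at 60 MB/s, taking about 14 hours; the remaining 3 TB only has to read the two or three largest disks and could run at 150 MB/s, taking about 5.5 hours. The reported average would then be 6 TB over roughly 19.5 hours, about 85 MB/s, even though the part of the check that actually stresses the controller ran at 60 MB/s. That is why the speed a few minutes into a check, with every disk being read, is the more telling number.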
Archived
This topic is now archived and is closed to further replies.