calvinandh0bbes Posted January 15, 2011
I'm having some performance problems since I added data disk number 7 of 8. This was my first EARS drive (disk 8 was the second) and it precleared in a reasonable amount of time, but ever since, my write speeds have been limited to 10 MB/s and reads are around 22 MB/s. Write speed before had been all over the place, but it would reach the mid 30s sometimes. My switch lights indicate Gb/s connections between the two computers. Disk 7 was also the first to go on the Supermicro AOC SAS card. My BIOS is set to AHCI mode for the onboard ports (ASUS P5B Deluxe). Both EARS drives are jumpered (and were jumpered before preclearing). When I get my next drive, is it worth moving all the data from disk 7 to the new one and reclearing it without the jumper, using the new option? Write and read speed aren't critical to me, but when moving 100 GB worth of files, 30-40 MB/s would certainly be nicer than 10. Any thoughts?
Joe L. Posted January 15, 2011
Please supply a syslog. Just moving files between drives will not help without knowing the cause.
lionelhutz Posted January 15, 2011
Can you explain the exact steps you took? For example:
- Upgraded to 4.7-beta1.
- Installed the drive without the jumper.
- Pre-cleared with the new script using the -A switch.
- Added the drive to the array.
Then, a syslog and a SMART check on the drive would not be a bad idea.
Peter
Joe L. Posted January 15, 2011
Yes. First, this is a general support question, not a preclear question, so I moved it to the general support forum and gave it an appropriate title. Second, for ANY analysis you need to supply a system log. No, just moving data around will not help until you prove a disk is failing.
Joe L.
calvinandh0bbes Posted January 17, 2011
> Please supply a syslog. Just moving files between drives will not help without knowing the cause.
I'm not looking to just move the files to another drive; I'm basically asking if reformatting the drives with the new preclear (and unjumpering them) will help me. Anyway, the syslog is attached. My setup is an Asus P5B Deluxe with:

Parity: Seagate 2 TB, onboard SATA
Disk 1: Hitachi 1 TB, onboard SATA
Disk 2: WD EADS 1 TB, onboard SATA
Disk 3: WD EADS 1 TB, onboard SATA
Disk 4: Samsung 1 TB, onboard SATA
Disk 5: Hitachi 1 TB, onboard SATA
Disk 6: Samsung 1.5 TB, onboard SATA
Disk 7: WD 2 TB EARS, jumpered, Supermicro AOC card
Disk 8: WD 2 TB EARS, jumpered, Supermicro AOC card

It was when I added disk 7 that performance went south. I don't think any drives are failing (knock on wood); I just think something isn't set up correctly.
Thanks for any insight,
c+h
syslog-2011-01-17.txt
Joe L. Posted January 17, 2011
The only thing that stands out is that the motherboard's clock is constantly losing time.

Jan 16 00:48:33 lionfish ntpd[2342]: synchronized to 208.75.88.4, stratum 2
Jan 16 06:04:00 lionfish ntpd[2342]: no servers reachable
Jan 16 06:21:05 lionfish ntpd[2342]: synchronized to 208.75.88.4, stratum 2
Jan 16 10:37:14 lionfish ntpd[2342]: no servers reachable
Jan 16 10:54:17 lionfish ntpd[2342]: synchronized to 208.75.88.4, stratum 2
Jan 16 14:36:16 lionfish ntpd[2342]: time reset +0.155407 s
Jan 16 14:36:59 lionfish ntpd[2342]: synchronized to 208.75.88.4, stratum 2
Jan 16 19:24:36 lionfish ntpd[2342]: time reset +0.464753 s
Jan 16 19:26:04 lionfish ntpd[2342]: synchronized to 208.75.88.4, stratum 2
Jan 16 19:27:11 lionfish ntpd[2342]: no servers reachable
Jan 16 19:28:14 lionfish ntpd[2342]: synchronized to 208.75.88.4, stratum 2
Jan 16 19:40:00 lionfish ntpd[2342]: time reset -0.180060 s
Jan 16 19:41:18 lionfish ntpd[2342]: synchronized to 208.75.88.4, stratum 2
Jan 16 19:44:13 lionfish ntpd[2342]: no servers reachable
Jan 16 19:45:17 lionfish ntpd[2342]: synchronized to 208.75.88.4, stratum 2
Jan 16 20:04:34 lionfish ntpd[2342]: no servers reachable
Jan 16 20:16:22 lionfish ntpd[2342]: synchronized to 208.75.88.4, stratum 2
Jan 16 20:18:34 lionfish ntpd[2342]: no servers reachable
Jan 16 20:20:44 lionfish ntpd[2342]: synchronized to 208.75.88.4, stratum 2
Jan 16 20:20:44 lionfish ntpd[2342]: time reset +0.273020 s
Jan 16 20:21:28 lionfish ntpd[2342]: synchronized to 208.75.88.4, stratum 2

You can get some idea of the read speed of the individual disks by typing:
hdparm -tT /[hs]d?
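As a side note, the drift can be quantified from the log itself. The sketch below totals ntpd's "time reset" corrections; the sample file embeds the reset lines quoted above, and in practice you would point awk at the real syslog instead (the /tmp path is just for illustration):

```shell
# Sum the absolute ntpd "time reset" corrections from a syslog.
# Sample lines copied from the log above; substitute your real syslog.
cat > /tmp/ntpd_sample.txt <<'EOF'
Jan 16 14:36:16 lionfish ntpd[2342]: time reset +0.155407 s
Jan 16 19:24:36 lionfish ntpd[2342]: time reset +0.464753 s
Jan 16 19:40:00 lionfish ntpd[2342]: time reset -0.180060 s
Jan 16 20:20:44 lionfish ntpd[2342]: time reset +0.273020 s
EOF
awk '/time reset/ { n++; v = $8; sum += (v < 0 ? -v : v) }
     END { printf "%d resets, %.6f s total drift\n", n, sum }' /tmp/ntpd_sample.txt
# -> 4 resets, 1.073240 s total drift
```

Over half a second of correction in a few hours, combined with the repeated "no servers reachable" gaps, is what points at the motherboard clock.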
calvinandh0bbes Posted January 17, 2011
> The only thing that stands out is that the motherboard's clock is constantly losing time.
Interesting. I'm always having to correct the time on my main desktop computer as well (also a Linux box).
> You can get some idea of the read speed of the individual disks by typing: hdparm -tT /[hs]d?
Typing what you typed gives me "/[hs]d?: No such file or directory", the same without the "?". Do you mean hdparm -tT /dev/sda? If so...

hdparm -tT /dev/sda (flash drive)
/dev/sda:
 Timing cached reads: 7912 MB in 2.00 seconds = 3961.89 MB/sec
 Timing buffered disk reads: 70 MB in 3.06 seconds = 22.91 MB/sec

hdparm -tT /dev/sdb (WD 2 TB EARS, Supermicro AOC)
/dev/sdb:
 Timing cached reads: 7876 MB in 2.00 seconds = 3944.49 MB/sec
 Timing buffered disk reads: 314 MB in 3.02 seconds = 104.04 MB/sec

hdparm -tT /dev/sdc (WD 2 TB EARS, Supermicro AOC)
/dev/sdc:
 Timing cached reads: 7832 MB in 2.00 seconds = 3922.21 MB/sec
 Timing buffered disk reads: 362 MB in 3.01 seconds = 120.19 MB/sec

hdparm -tT /dev/sdd (Hitachi, onboard SATA)
/dev/sdd:
 Timing cached reads: 7930 MB in 2.00 seconds = 3970.91 MB/sec
 Timing buffered disk reads: 338 MB in 3.02 seconds = 112.08 MB/sec

hdparm -tT /dev/sde (Hitachi, onboard SATA)
/dev/sde:
 Timing cached reads: 7758 MB in 2.00 seconds = 3885.16 MB/sec
 Timing buffered disk reads: 326 MB in 3.00 seconds = 108.66 MB/sec

hdparm -tT /dev/sdf (WD 1 TB EADS, onboard SATA)
/dev/sdf:
 Timing cached reads: 7824 MB in 2.00 seconds = 3918.32 MB/sec
 Timing buffered disk reads: 294 MB in 3.01 seconds = 97.75 MB/sec

hdparm -tT /dev/sdg (Samsung 1 TB, onboard SATA)
/dev/sdg:
 Timing cached reads: 8328 MB in 2.00 seconds = 4170.99 MB/sec
 Timing buffered disk reads: 310 MB in 3.01 seconds = 102.84 MB/sec

hdparm -tT /dev/sdh (WD 1 TB EADS, onboard SATA)
/dev/sdh:
 Timing cached reads: 7828 MB in 2.00 seconds = 3920.24 MB/sec
 Timing buffered disk reads: 258 MB in 3.01 seconds = 85.82 MB/sec

hdparm -tT /dev/sdi (Seagate, parity)
/dev/sdi:
 Timing cached reads: 7776 MB in 2.00 seconds = 3893.94 MB/sec
 Timing buffered disk reads: 342 MB in 3.01 seconds = 113.68 MB/sec

hdparm -tT /dev/sdj (Samsung 1.5 TB, onboard SATA)
/dev/sdj:
 Timing cached reads: 8050 MB in 2.00 seconds = 4030.97 MB/sec
 Timing buffered disk reads: 316 MB in 3.01 seconds = 104.85 MB/sec
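Output like the above is easy to condense so the buffered-read numbers for all drives line up. This is just a sketch: the sample file embeds two of the entries from this post, and the /tmp filename is arbitrary; on the real server you would save the loop's output (for d in /dev/sd?; do hdparm -tT $d; done > /tmp/speeds.txt) and filter that file instead.

```shell
# Condense hdparm -tT output to one buffered-read line per device.
# Sample data mimics two of the entries above; use real output in practice.
cat > /tmp/speeds.txt <<'EOF'
/dev/sdb:
 Timing cached reads:   7876 MB in  2.00 seconds = 3944.49 MB/sec
 Timing buffered disk reads:  314 MB in  3.02 seconds = 104.04 MB/sec
/dev/sdc:
 Timing cached reads:   7832 MB in  2.00 seconds = 3922.21 MB/sec
 Timing buffered disk reads:  362 MB in  3.01 seconds = 120.19 MB/sec
EOF
awk '/^\/dev\// { dev = $1 }                          # remember current device
     /buffered disk reads/ { print dev, $(NF-1), "MB/sec" }' /tmp/speeds.txt
# -> /dev/sdb: 104.04 MB/sec
#    /dev/sdc: 120.19 MB/sec
```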
Joe L. Posted January 17, 2011
Sorry... a typo. I meant:
hdparm -tT /dev/[hs]d?
But you did it individually, and the read speeds all look pretty decent.
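For anyone puzzled by the corrected command: /dev/[hs]d? is a shell glob, where [hs] matches 'h' or 's' and ? matches exactly one character, so it expands to every whole-disk node (IDE hdX or SATA/SCSI sdX) while skipping partition nodes like sda1. A quick sketch using throwaway files in a temporary directory, not real device nodes:

```shell
# Demonstrate how the [hs]d? glob expands, using plain files
# instead of real /dev entries.
mkdir -p /tmp/glob_demo && cd /tmp/glob_demo
touch hda sdb sdc sdc1     # sdc1 has two characters after "sd", so it won't match
echo [hs]d?
# -> hda sdb sdc
```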
calvinandh0bbes Posted January 17, 2011
I did this on my Linux box (the source box for copying files to the unRAID box), and I get:

/dev/sda:
 Timing cached reads: 2330 MB in 2.00 seconds = 1165.45 MB/sec
 Timing buffered disk reads: 216 MB in 3.02 seconds = 71.62 MB/sec

Maybe it's not an unRAID thing; maybe my Linux drive has some issues.

Edit: I checked my BIOS, SATA was in IDE mode, and I changed it to AHCI... no real change in performance:

/dev/sda:
 Timing cached reads: 2366 MB in 2.00 seconds = 1182.86 MB/sec
 Timing buffered disk reads: 228 MB in 3.02 seconds = 75.40 MB/sec

Hmmmm...
bcbgboy13 Posted January 18, 2011
You have an older but feature-rich motherboard. There are a few possible solutions for you to try.
1. First, make sure you have the latest BIOS. Then go into the BIOS, load the default settings, save them, restart, go into the BIOS again, and manually disable every feature you are not going to use (serial and parallel ports, floppy, IDE controller, FireWire, audio, etc.). Try it and see if this improves the speed.
2. You can also consider moving the SM card to the second PCIe x4 port and see what speed you get.
3. You have dual gigabit LAN, but it is Marvell Yukon based, and according to the Asus specs: "Dual Gigabit LAN controllers, both featuring AI NET2 Marvell® PCI-E and PCI Gigabit LAN controllers". You must disable one of them. The tricky part is that one is apparently PCIe, and there is a possibility that under certain settings it shares some of the PCIe lanes with the SM controller. This may be why you experienced the drop in speed once you attached HDs to the SM card: they fight with the gigabit controller for bandwidth. You will probably have to take a good look at your manual to work out the optimal placement and which LAN port you should enable.
Good luck and let us know.
calvinandh0bbes Posted January 18, 2011
> 1. First, make sure you have the latest BIOS... manually disable every feature you are not going to use...
My transfer times were fine before I added the 7th disk. Back when I built the unRAID server, I disabled anything that wasn't used.
> 2. You can also consider moving the SM card to the second PCIe x4 port...
My AOC card is in the second x4 slot, as the first is used by my graphics card. There is a setting in the BIOS to force the second slot to "fast mode" (which I believe is x4), and it is set as such.
> 3. ... You must disable one of them ...
It definitely isn't a case of "must disable one of them", as the server works with both active. I didn't know one used the PCIe bus; I'll play around with disabling one and see what happens. I know I am getting Gbit speeds, as I can copy from the server at 20+ MB/s, which is greater than 100 Mbit/s allows.
> You will probably have to take a good look at your manual...
I'll play with the LAN setting when I get a chance and see what I can get.
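The bandwidth arithmetic in the reply above checks out: one MB/s is eight Mbit/s, so a sustained 20 MB/s copy needs roughly 160 Mbit/s on the wire (ignoring protocol overhead), which indeed rules out a 100 Mbit link. A trivial sanity check:

```shell
# Express a 20 MB/s sustained transfer in megabits per second.
mb_per_s=20
echo "$((mb_per_s * 8)) Mbit/s"
# -> 160 Mbit/s
```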
Archived
This topic is now archived and is closed to further replies.