X9SCM-F slow write speed, good read speed



That makes sense, but what doesn't make sense is why I get 100+MB/s parity checks on beta12 and earlier, while everything after that is under 70MB/s. I know quite a few people were reporting the same thing when the first RCs were rolling out, and I'm wondering where they all went (or how they fixed the issue).

 

My server with 18 data drives takes over 24 hours to parity check every month. That's just not cutting it, especially since I'll be moving to 4TB drives in the future. My 3x SAS2LP setup is much slower at parity checks than my old setup, which used far slower PCI-X cards.

 

What's your CPU usage? Mine's fairly high during a check-and-correct with 6 drives; I'd imagine it's higher with more.

 

What does top say when you're running a sync? Anywhere near 100%?
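
(For reference, a quick way to grab that snapshot from the console; plain top in batch mode, nothing unRAID-specific:)

    # One snapshot of top while the sync runs; check overall CPU% and
    # whether any single process is pegged:
    top -b -n 1 | head -n 20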

 

I have i3-2120s in my servers; the highest peak was around 10% usage.

 

If I transfer directly from a data disk to my computer, I get 85MB/s over the network. I've tested all the drives in the past: they were all at least 20MB/s faster than what I get during parity checks, and that's without knowing where on the platters the data I'm transferring is stored. The start of each drive reads at about 120MB/s and the end at about 80MB/s (exactly what I see during parity checks on beta12a)... but on anything newer, my parity check starts at about 63MB/s and ends at about 44MB/s.

 

It makes no sense. No one else seems to have this issue now, and it's really annoying to have my servers parity check for an entire day every month.

Link to comment


Have you attached a syslog recently?  (My apologies if I missed it)  Perhaps someone here might spot something...
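
(For anyone gathering one: on a stock unRAID box the live log is /var/log/syslog, and it's lost on reboot, so the usual move is to copy it to the flash drive before posting:)

    # Copy the current syslog to the flash drive so it survives a reboot
    # and can be attached to a forum post:
    cp /var/log/syslog /boot/syslog.txt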

Link to comment

@tyrindor: Others, including myself, have reported the slower parity checks. Tom made mention of unRAID polling the drives every 10 seconds, which was changed in RC9. But SimpleFeatures also does this in its disk health plugin. I don't know if it's related, but I plan to test by disabling it.

 

http://lime-technology.com/forum/index.php?topic=25250.msg220439.msg#220439

 

This was my original report after upgrading to RC5. BTW, I believe the slow writes and the slow parity checks have separate root causes.

http://lime-technology.com/forum/index.php?topic=21269.msg188995.msg#188995
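
(For context, the "polling" in question is the kind of periodic SMART health query a disk-health plugin issues. A minimal sketch of such a poll, assuming smartmontools is present and /dev/sdb as a stand-in device:)

    # One SMART health poll; "-n standby" skips the query instead of
    # spinning the drive up if it is spun down:
    smartctl -n standby -H /dev/sdb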


 

Link to comment

Have you attached a syslog recently?

Here's one from the server I already upgraded to RC10.

 

Sadly, I just did a fresh reboot to update, after an uptime of 98 days.

 

I don't know if it's related, but I plan to test by disabling it.

I also have SimpleFeatures; could you please report back your results after disabling it?

syslog.txt

Link to comment


I don't see anything wrong in your syslog, but I did note two things.

 

You are limited to about 3500MB of RAM, not the 4000-odd you should be seeing. I suspect the difference has been allocated to the onboard graphics, which means about 512MB lost to video that you aren't even using with UnRAID. Check your BIOS settings and drop the RAM allocation for video as low as it will let you go. 8MB is probably enough for the character consoles we use, but that may not be an option; 128MB is more likely the lowest, which would still free up about 384MB.
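
(An easy way to confirm how much of the 4GB the kernel actually sees, from the console:)

    # Total usable RAM as the kernel sees it; ~3500MB reported instead of
    # ~4000MB suggests memory reserved for onboard video or PCI space:
    free -m
    dmesg | grep -i memory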

 

The other item - I just have to ask: why aren't you using the 6 onboard SATA ports? Aren't they the fastest SATA ports you have? I'd move a few drives from the SAS cards to the onboard ports, especially the parity drive. They all claim 6Gbps SATA speed. This would also let you spread your bandwidth needs across otherwise unused channels, possibly improving performance.

Link to comment


The top 2 cards run at PCI-E 2.0 x8 bandwidth, and the bottom card runs at PCI-E 2.0 x4. The bottom card isn't full yet, but even when it is, bandwidth should not be an issue. If I downgrade to beta12a, I get 100+MB/s parity checks with the current setup. I don't have any SATA-to-SAS cables to test your theory, and they aren't cheap. I really don't think it's an issue with my hardware or setup; it's something that was changed in unRAID.

 

I see nothing in the BIOS to change the onboard video memory. Is this a feature on these boards?

Link to comment


I do not have any SATA-to-SAS cables to test your theory, and they aren't very cheap.

I'm not following you. These look like normal SATA drives; can't you use a simple SATA cable to connect them to the motherboard port? (I haven't actually seen one of these boards.)

 

I see nothing in the BIOS to change the onboard video memory. Is this a feature on these boards?

The boards with onboard video that I have seen all had an option to adjust the assigned RAM up or down, so obviously I haven't seen enough boards! It's unfortunate if you don't have that option; it means 512MB of very usable memory in the first 4GB is unavailable to users of this board. One more reason for owners of this motherboard to want a 64-bit UnRAID!
Link to comment

I'm not following you. These look like normal SATA drives; can't you use a simple SATA cable to connect them to the motherboard port?

Well, yeah, I can do that, but I'd have nowhere to put the drives (my case uses SAS hot-swap bays). I guess I could move the parity drive and 3 data drives over to the motherboard and just set them to the side to test.

 

I also just tested all my drives' speeds with hdparm, and the slowest read was 121MB/s, so it's definitely not a single slow drive causing this. I really don't think it's a bandwidth issue either: I get 100-120MB/s on beta12a and earlier, and during a parity build I also get 100-120MB/s. It's only parity checks that are really slow. I'll test when I get time, but I think this rules out bandwidth issues.
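
(For anyone wanting to run the same comparison: hdparm -t only reads from the start of the disk, so checking the slower end of the platters takes something like dd with an offset. A sketch, assuming /dev/sdb as a stand-in and a roughly 2TB drive; adjust "skip" for your capacity:)

    # Buffered sequential read at the start of the disk:
    hdparm -t /dev/sdb

    # Read ~2GB starting near the end of the disk; iflag=direct bypasses
    # the page cache so the rate reflects the platters:
    dd if=/dev/sdb of=/dev/null bs=1M skip=1800000 count=2048 iflag=direct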

 


 

I triple-checked the BIOS but may have missed it. Do any X9SCM owners know if this video-RAM setting exists?

Link to comment

I'm using an X9SCM-F. My internal copy speeds (disk to disk, using mc) are 35-50MB/s, averaging 40MB/s; internal copies (disk to SSD) run about 95-100MB/s; and transfers to and from disks over the network are 35-50MB/s, averaging 40MB/s. Parity check averaged over 100MB/s with v5.0-rc5 and about 90MB/s with v5.0-rc10. I have stock BIOS settings (v2.0a), and I'm not using the 'memory' limiter or the 'dirty' flag. See sig for system details.

Link to comment


 

You are on 4GB... that makes the difference. The parameter limits the amount of memory the server uses to 4GB, but you already have 4GB. I have mine ordered; wondering if my speeds will go up to yours.
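
(For readers hunting for the "memory limiter" being referenced: it's a kernel boot parameter added to the flash drive's syslinux config. A sketch of the relevant stanza, assuming the stock /boot/syslinux/syslinux.cfg layout of that era; verify against your own file before editing:)

    label unRAID OS
      kernel /bzimage
      append initrd=/bzroot mem=4095M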

Link to comment

Yeah, I should read through the whole thread, but... to summarize, does the 'slow write' issue exist for:

 

Via network:

a) writing to a disk share,

b) writing to a non-cache user share,

c) writing to the cache disk share,

d) writing to a cache-enabled user share.

 

Via command line:

a) writing to cache disk,

b) writing to array disk.

 

For those experiencing slow writes, do they exist for all cases above?  If not, which subsets?

 

I just want to get this nailed down before release.  Thanks!
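
(A simple way to test the two command-line cases for anyone reporting back; assumes the standard /mnt/cache and /mnt/diskN mount points, and that disk1 is a parity-protected data disk:)

    # Write 1GB to the cache disk and to an array disk; conv=fdatasync
    # forces the data to disk before dd reports a rate:
    dd if=/dev/zero of=/mnt/cache/ddtest bs=1M count=1024 conv=fdatasync
    dd if=/dev/zero of=/mnt/disk1/ddtest bs=1M count=1024 conv=fdatasync
    rm /mnt/cache/ddtest /mnt/disk1/ddtest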

Link to comment

I can validate a subset of the cases listed.  (I only have one data disk and no cache disk.)

 

Via network:

a) writing to a disk share, - yes

b) writing to a non-cache user share, - yes

c) writing to the cache disk share,

d) writing to a cache-enabled user share.

 

Via command line:

a) writing to cache disk,

b) writing to array disk.

 

 

Link to comment

I don't know which speeds are being discussed here, but yesterday I was moving files from one disk in the array to another in mc. Speeds reported by mc were between 28 and 42MB/s; 42MB/s was reported with files larger than 4GB. My VM was assigned 4GB of memory; total memory is 16GB. I don't know what is "normal", BTW.
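
(If anyone wants a number outside mc, a one-liner that reports throughput while copying between two array disks; the mount points and file name are placeholders:)

    # Copy a large file disk-to-disk and report progress/throughput:
    rsync --progress /mnt/disk1/Media/bigfile.mkv /mnt/disk2/Media/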

Link to comment

I've been noticing some slowness but attributed it to all the plugins and the age of the cache drive. I recently found this thread, so I gave sysctl vm.highmem_is_dirtyable a try. Even without formal testing I can see MySQL is functioning much better. Newznab was painfully slow while populating the database; I was considering abandoning it entirely. It's definitely better now. Looking forward to an x64 version; I'm sure the DB plugin would benefit greatly.
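
(For anyone else trying this: the tweak referenced is a standard Linux sysctl on 32-bit kernels with highmem. Setting it at runtime, plus the usual unRAID way of persisting it via the go script, looks roughly like this:)

    # Let highmem count toward dirtyable page cache, which raises the dirty
    # thresholds and can unthrottle buffered writes on 32-bit kernels:
    sysctl -w vm.highmem_is_dirtyable=1

    # Reapply on every boot by appending the same command to the flash
    # drive's startup script (standard unRAID location):
    echo 'sysctl -w vm.highmem_is_dirtyable=1' >> /boot/config/go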

Link to comment


If you're talking about parity-protected drives, then that is normal. I get 16MB/s on some drives, up to 45MB/s on others. I'm working on replacing the slower drives. I have completely replaced them on one unRAID server and now get a consistent 40+MB/s when I copy, dropping to around 30MB/s as the drives get closer to full.
Link to comment

Yeah, I should read through the whole thread, but... to summarize, does the 'slow write' issue exist for:

 

Via network:

a) writing to a disk share, yes

b) writing to a non-cache user share, yes

c) writing to the cache disk share, no

d) writing to a cache-enabled user share. no

 

Via command line:

a) writing to cache disk, no

b) writing to array disk. yes

 


 

Anything involving writing to the cache is fine; anything else is not. When the cache mover is running, it is also very slow. My 1TB cache drive was full, and it took roughly 28 hours to move everything off.

Link to comment


tyrindor,

Did you provide the answers, other than those provided by moose? The quoting confuses me.

 

Link to comment


 

I put the bolded text in the quote. In short: writing to any drive in the protected array is slow, no matter whether you transfer via disk share or user share. If I enable a cache drive, writing directly to it, or to a share with the cache drive enabled, gives normal transfer speeds. However, as soon as the cache mover runs, it takes ages to move the files off the drive... leading me to believe internal/command-line transfers are also slow.

 

My case may be a little different from most, because I'm still affected by slow parity-check speeds on newer unRAID builds. I'm hoping these two issues are the same, but my best guess is that they are separate. By the way, mbryanr, I removed SimpleFeatures and the parity check issue is still there with 100% stock unRAID.  :-\
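
(One way to check whether command-line writes are equally slow is to time the mover directly against a known amount of data on the cache drive; the script path below is the standard unRAID location, so verify it on your build:)

    # Run the mover by hand and time it:
    time /usr/local/sbin/mover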

Link to comment

If you're talking about parity-protected drives, then that is normal.

 

Yes, it is parity protected. My drives are a mixture of 7200rpm Hitachis, green 5400rpm WDs, and older 7200 and 5400rpm Samsungs.

Link to comment

 

I'm working on replacing the slower drives.

Which drives have you found to be slow? Are these the typical "green" drives? I wouldn't stop using them, because of the power consumption, but if you've found something else that identifies a "slow" drive, I'd like to know.

Link to comment
