X9SCM-F slow write speed, good read speed



Just an idea... has anyone running unRAID natively (not in a VM) tried simply disabling the NX bit in the BIOS setup (which should disable PAE, I think) to see what happens? That way PAE should be disabled for sure. I'm not sure the mem=4xxx parameter really does the same thing: remember that video memory and other PCI allocations also take up address space (how much depends on the hardware), and I guess the mem parameter only limits RAM, so PAE could still be in use for a few MB.

 

How would we test this?

Link to comment

Just an idea... has anyone running unRAID natively (not in a VM) tried simply disabling the NX bit in the BIOS setup (which should disable PAE, I think) to see what happens? That way PAE should be disabled for sure. I'm not sure the mem=4xxx parameter really does the same thing: remember that video memory and other PCI allocations also take up address space (how much depends on the hardware), and I guess the mem parameter only limits RAM, so PAE could still be in use for a few MB.

 

How would we test this?

 

 

"execute-disable bit capability", if I'm not in error disabling it should also disable pae (at least on my desktop pc with XP it only shows PAE enabled when such option is enabled on bios).

Link to comment

I have 24GB of memory and no virtualization (I tried passthrough, but had no luck).

My motherboard is an ASUS P6T7

 

I installed the RC10 Test version and I also added the mem=4095M parameter to syslinux.cfg
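For anyone wanting to reproduce this, the boot stanza in syslinux.cfg on the flash drive ends up looking roughly like the sketch below; the label and kernel lines are assumed to be the stock unRAID defaults, and only the append line changes:

    label unRAID OS
      kernel bzimage
      append mem=4095M initrd=bzroot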

Here are some results:

Writing - Console Command

Cache Drive - dd if=/dev/zero of=./testhd2 bs=1024M count=1 (1.1 GB) - (81.5; 110; 112) MB/s

Cache Drive - dd if=/dev/zero of=./testhd2 bs=1024M count=10 (11 GB) - (110; 99.4; 113) MB/s

Cache Drive - dd if=/dev/zero of=./testhd2 bs=1024M count=4 (4.3 GB) - (99.5; 119; 99.4) MB/s

Cache Drive - dd if=/dev/zero of=./testhd2 bs=1M count=1000 (1 GB) - (91.5; 110; 119) MB/s

Normal Drive - dd if=/dev/zero of=./testhd2 bs=1024M count=1 (1.1 GB) - (23.3; 26.6; 33) MB/s

Normal Drive - dd if=/dev/zero of=./testhd2 bs=1024M count=10 (11 GB) - (26.8; 31.2; 26.7) MB/s

Normal Drive - dd if=/dev/zero of=./testhd2 bs=1024M count=4 (4.3 GB) - (29.1; 24.9; 26.9) MB/s

Normal Drive - dd if=/dev/zero of=./testhd2 bs=1M count=1000 (1 GB) - (27.1; 32.1; 29) MB/s
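
Each line above shows the results of repeated runs of the same command; for reference, a rough sketch of how such repeated runs could be scripted (testhd2 is just a scratch file on the drive being tested):

    # run the same write test three times and print only the throughput line dd reports
    for i in 1 2 3; do
      dd if=/dev/zero of=./testhd2 bs=1M count=1000 2>&1 | tail -n 1
      sync             # flush dirty pages so the next run starts from a comparable state
      rm -f ./testhd2  # remove the scratch file between runs
    done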

 

Upload - Network Transfer - smb - win7 > /mnt/{cache|disk2}

Cache Drive - 1 File ~ 1GB - (30; 27; 29)MB/s

Cache Drive - 1 File ~ 4GB - (31; 30; 31)MB/s

Cache Drive - 1 File ~ 8GB - (30; 31; 30)MB/s

Cache Drive - 1055 Files ~ 6GB - (32; 33; 32)MB/s

Normal Drive - 1 File ~ 1GB - (20; 25; 23)MB/s

Normal Drive - 1 File ~ 4GB - (25; 23; 24)MB/s

Normal Drive - 1 File ~ 8GB - (24; 24; 23)MB/s

Normal Drive - 1055 Files ~ 6GB - (23; 22; 22)MB/s

 

Download - Network Transfer - smb - /mnt/disk2 > win7

1 File ~ 1GB - (51; 46; 47)MB/s

1 File ~ 4GB - (42; 42; 41)MB/s

1 File ~ 8GB - (39; 41; 36)MB/s

1055 Files ~ 6GB - (32; 31; 32)MB/s

 

Without the mem parameter the writing speed was about 2MB/s.

 

Cheers

Max

Link to comment

I'm also running an X9SCM-F with 8GB of RAM.

I don't seem to be having any issues.

My server has been running for a couple of months now. It has been running since rc8.

 

I'm not running it virtualized.

 

I'll post the complete specs tonight with some speed figures.

Normally the read speed is about 120+ MB/s and the write speed is above 50 MB/s with the cache drive.

 

Attached is an info printout from my server.

[Attachment: Tapatalk screenshot of the server info]

Link to comment

I have 4x8GB sticks (32GB RAM) installed.  I plan to run ESXi, but wanted to get unRAID fully working before virtualizing.  I have ESXi and a Win7 VM setup, but haven't completed the unRAID VM steps yet.

 

With a 1GB test file, "append mem=4095M initrd=bzroot" in the syslinux.cfg file, and sysctl vm.highmem_is_dirtyable=0, I get an initial 5-second burst of 45 MB/s that decays to 8 MB/s write speed within ~35 seconds.

With a 1GB test file, "append mem=8190M initrd=bzroot" in the syslinux.cfg file, and sysctl vm.highmem_is_dirtyable=0, I get an initial 5-second burst of 45 MB/s that decays to 8 MB/s write speed within ~35 seconds.

 

I seem to be the only one for whom the "mem=4095M" change is not working. If everyone else is having better success with it and I'm the outlier, I wonder whether the cause is the total amount of RAM installed (4x8GB) or the fact that the modules are 8GB sticks. Unfortunately I don't have any smaller memory sticks for this motherboard.

 

I'll try to fit in another quick test before leaving for work this morning, pulling sticks to retest at 16GB and 24GB with the "mem=4095M" change.

 

I understand the 32-bit/4GB RAM barrier.  The question I have is why 5.0-rc5 and earlier releases don't have any issue.  One answer is that 5.0-rc5 had kernel 3.0.33.  I know (and agree) we aren't going backward with kernel releases but I wish we knew the underlying cause of the change in behavior once we went from kernel 3.0.33 to kernel 3.4.x (5.0-rc5 to 5.0-rc6 and beyond).

 

Edit:  With 16GB RAM installed (2x8GB), a 4+GB test file and "append mem=4095M initrd=bzroot" in the syslinux.cfg file and sysctl vm.highmem_is_dirtyable=0, I get ~ 21 MB/s write speed which is not great but acceptable.  I'm guessing that running an unRAID VM in ESXi and the eventual 64-bit version of unRAID would improve the write speed in my setup.  I just wish we knew the underlying cause and if it is a kernel change, what changed beyond 3.0.33 with respect to this problem.  (I appreciate the great efforts of Tom and the unRAID community!  :))
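For reference, the dirtyable setting mentioned above can be applied and checked from the console as shown below; appending it to the go script is one way to make it survive a reboot (the /boot/config/go path is the stock unRAID location, adjust if yours differs):

    # apply for the current boot and confirm the kernel picked it up
    sysctl vm.highmem_is_dirtyable=0
    sysctl vm.highmem_is_dirtyable

    # optional: persist across reboots via the go script
    echo 'sysctl vm.highmem_is_dirtyable=0' >> /boot/config/go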

Link to comment

I have 24GB of memory and no virtualization (I tried passthrough, but had no luck).

My motherboard is an ASUS P6T7

 

I installed the RC10 Test version and I also added the mem=4095M parameter to syslinux.cfg

...

Without the mem parameter the writing speed was about 2MB/s.

 

Cheers

Max

 

This is good (bad) news. At least it shows us the issue is not limited to a single motherboard.

 

 

Moose> Thanks for your input!

Link to comment

I finally got my drives pre-cleared on my new native v5.0-rc10 system (no virt. and no cache disk).

 

I did some "testing" without any changes to the v5.0-rc10 (not the special version that Tom gave people with this SM MB to try). Here are my results (MB/s are average and approx.) while writing/copying 2 files over my network from my other UnRAID v4.7 box to my new v5.0-rc10 system (one file was approx 1 GB, the other was approx. 15 GB - ultimately, the speeds for each write seemed to mirror each other, no matter the file size):

 

Without the "dirty" script:

8GB ram (one stick): 90MB/s for approx. the first minute, then down to 45MB/s for the remainder of the write/copy;

16GB ram (two sticks): 90MB/s for approx. the first minute, then down to 45MB/s for the remainder of the write/copy;

24GB ram (three sticks): 1MB/s

32GB ram (four sticks): 1MB/s

 

With the "dirty" script:

32GB ram (four sticks): 90MB/s for approx. the first minute, then down to 45MB/s for the remainder of the write/copy;

 

BTW, a few times, I moved the ram around to various slots and it didn't seem to make a difference. The only scripts that I had running were the email scripts.

 

On a related note, during the "fast" copies above, what is the reason for the halving of the speed after a minute of copying/writing a file? Is there any way to improve this? A cache disk?

 

One more question that I've seen asked, but seems to remain unanswered: what is the downside of utilizing the "dirty" script?

 

Thanks much.

Link to comment

Does anyone read my posts?

 

Look at my board... it's a cheap Intel board, and I have this problem too :(

 

Well, I fixed it by going back to my old 4GB of RAM... but still.

 

I really don't think it's specific to the board or the processor.

 

I have this X9 board with an Ivy Bridge CPU and it works fine (inside ESXi). I think I said that before somewhere on here.

 

Sent from my SGH-I727R using Tapatalk 2

 

 

Link to comment

Just to chime in: I'm getting abysmal write speeds of 1-2 MB/s. I will try the mem parameter.

 

System:

ESXi 5.1 Host: X8SIL-F, x3430, 16GB, 2X1015M

 

unRAID (rc10) vm: 4GB

Personally, I would just reduce the memory allocated to the VM to 3GB. That is working for me on a Tyan S5512GM2NR motherboard with 16GB of host memory.
Link to comment

Just to chime in: I'm getting abysmal write speeds of 1-2 MB/s. I will try the mem parameter.

 

System:

ESXi 5.1 Host: X8SIL-F, x3430, 16GB, 2X1015M

 

unRAID (rc10) vm: 4GB

Personally, I would just reduce the memory allocated to the VM to 3GB. That is working for me on a Tyan S5512GM2NR motherboard with 16GB of host memory.

 

Thanks for the suggestion; that was going to be my next step if the mem parameter had no effect. But, thankfully, my write speeds are back to normal with

mem=4095M

 

I also noticed you can tweak the allocated memory in MB increments in ESXi, but for some reason 4095 always went back to 4096; I guess it doesn't like odd numbers.

Link to comment

Just to chime in: I'm getting abysmal write speeds of 1-2 MB/s. I will try the mem parameter.

 

System:

ESXi 5.1 Host: X8SIL-F, x3430, 16GB, 2X1015M

 

unRAID (rc10) vm: 4GB

Personally, I would just reduce the memory allocated to the VM to 3GB. That is working for me on a Tyan S5512GM2NR motherboard with 16GB of host memory.

 

Thanks for the suggestion; that was going to be my next step if the mem parameter had no effect. But, thankfully, my write speeds are back to normal with

mem=4095M

 

I also noticed you can tweak the allocated memory in MB increments in ESXi, but for some reason 4095 always went back to 4096; I guess it doesn't like odd numbers.

You do realize that unRAID is only using ~3.0-3.5GB of that memory. The memory above 3.0GB is reached through PAE and used for disk buffering - maybe ONLY disk buffering. A 32-bit OS has to reserve address space for IO devices, and that usually reduces the memory available: on 32-bit Windows I had 3.0GB usable in a VM and 3.5GB on bare-metal 32-bit Windows.
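
A rough way to see how that split works out on a given box is to look at the low/high memory counters the 32-bit kernel exposes (just a sketch; the LowTotal/HighTotal lines only appear on highmem-enabled 32-bit kernels):

    # total RAM seen by the kernel, split into low (directly mapped) and high (PAE) memory
    grep -E 'MemTotal|LowTotal|HighTotal' /proc/meminfo

    # free -l reports the same low/high split
    free -l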
Link to comment

Just an idea... has anyone running unRAID natively (not in a VM) tried simply disabling the NX bit in the BIOS setup (which should disable PAE, I think) to see what happens? That way PAE should be disabled for sure. I'm not sure the mem=4xxx parameter really does the same thing: remember that video memory and other PCI allocations also take up address space (how much depends on the hardware), and I guess the mem parameter only limits RAM, so PAE could still be in use for a few MB.

 

How would we test this?

 

Does anyone already know?

 

I have been running with the MEM parameter for a few days now and I notice a couple of things:

 

- Decent copy speeds

- No OOM errors

- No issues with plugins whatsoever

 

I am running quite a few plugins and I do not notice any adverse effects from running with the parameter. I have ordered a 4GB stick (faster memory; it will be nice to see if that makes a difference too).

 

My 16GB is built from four 4GB sticks: PC3-10600 (DDR3-1333)

My 4GB will be one stick: 1600 MHz (PC3-12800)

 

The 4GB memory is 20% faster

 

The only reason I got 16GB is that the memory was really cheap, but I must say it now looks like that was utter nonsense...

 

Link to comment

I tried everything suggested in this thread, but still have parity sync speed issues on the later 5.0 RCs. Beta12a is fine and my parity syncs at 100+MB/s. Anything after that and it's about 63MB/s. If I wipe my parity and build a new parity from scratch, I get 100+MB/s. However, after parity is valid and I do a parity check/sync, it goes back to 63MB/s.

 

What gives? If anything, building fresh parity should be way more taxing than just checking whether it's valid.

Link to comment

What gives? If anything, building fresh parity should be way more taxing than just checking whether it's valid.

I agree it SHOULD be, but it isn't. The freshly written parity is NOT read back; it's just written. So a check adds another disk read to the mix, whereas an initial sync is a blind write that assumes the write succeeded as long as the disk didn't report an error. 99.9999 percent of the time that's OK.

 

A parity check does a read on all the disks, and does the math to verify parity. An initial sync reads the data disks, and writes the parity disk with the math result.

 

I don't know why writing the parity is faster than reading it, but it is. Ideally, I think parity generation should take much longer by forcing a non-correcting check to run after the parity is written, as part of the process. I guess people would object to more than doubling parity generation time, but in my opinion it would add a layer of confidence in the validity of the array. As it is now, it's just strongly recommended that you manually trigger a parity check after you are done generating it.

Link to comment

What gives? If anything, building fresh parity should be way more taxing than just checking whether it's valid.

I agree it SHOULD be, but it isn't. The freshly written parity is NOT read back; it's just written. So a check adds another disk read to the mix, whereas an initial sync is a blind write that assumes the write succeeded as long as the disk didn't report an error. 99.9999 percent of the time that's OK.

 

A parity check does a read on all the disks, and does the math to verify parity. An initial sync reads the data disks, and writes the parity disk with the math result.

 

I don't know why writing the parity is faster than reading it, but it is. Ideally, I think parity generation should take much longer by forcing a non-correcting check to run after the parity is written, as part of the process. I guess people would object to more than doubling parity generation time, but in my opinion it would add a layer of confidence in the validity of the array. As it is now, it's just strongly recommended that you manually trigger a parity check after you are done generating it.

 

That makes sense, but what doesn't make sense is why I get 100+MB/s parity checks on Beta12 and earlier, while on everything after that it's under 70MB/s. I know there were quite a few people reporting the same thing when the first RCs were rolling out, and I'm wondering where they all went (or how they fixed the issue).

 

My server with 18 data drives takes over 24 hours to parity check every month. That's just not cutting it, especially when I'll be going to 4TB drives in the future. My 3x SAS2LP setup is much slower at parity checks than my old setup that used the much slower PCI-X cards.

Link to comment

 

 

That makes sense, but what doesn't make sense is why I get 100+MB/s parity checks on Beta12 and earlier, while on everything after that it's under 70MB/s. I know there were quite a few people reporting the same thing when the first RCs were rolling out, and I'm wondering where they all went (or how they fixed the issue).

 

My server with 18 data drives takes over 24 hours to parity check every month. That's just not cutting it, especially when I'll be going to 4TB drives in the future. My 3x SAS2LP setup is much slower at parity checks than my old setup that used the much slower PCI-X cards.

 

What's your CPU usage? Mine's fairly high during a check-and-correct with 6 drives; I'd imagine it's higher with more.

 

What does top say when you're running a sync? Anywhere near 100%?
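
Something like the following, run from a second console while the check is going, would capture a few CPU snapshots for comparison (just a sketch using plain top in batch mode):

    # take three CPU samples, five seconds apart, while the parity check runs
    top -b -d 5 -n 3 | grep -E '^(top|%?Cpu)'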

Link to comment
