X9SCM-F slow write speed, good read speed



If this works for the rest of the gang, I'm jumping on board and voting to release 5.0 final. Gentlemen, please test your rigs with the new parameters.

 

As Helmonder said, it would be good to mention where to set this parameter for unRAID.

 

Also mention that this means forcing unRAID to use only 4GB, regardless of how much RAM users have.

 

And for 5.0 final, it would be good to add this in and include in the release notes that unRAID only supports up to 4GB of RAM and will use at most 4GB no matter how much RAM the user has.

 

 

If this is actually the problem, that is.

Link to comment

If this works for the rest of the gang, I'm jumping on board and voting to release 5.0 final. Gentlemen, please test your rigs with the new parameters.

 

As Helmonder said, it would be good to mention where to set this parameter for unRAID.

 

Also mention that this means forcing unRAID to use only 4GB, regardless of how much RAM users have.

 

And for 5.0 final, it would be good to add this in and include in the release notes that unRAID only supports up to 4GB of RAM and will use at most 4GB no matter how much RAM the user has.

 

I was using 8GB for many years with excellent results; someone should test with mem=8192M.

Link to comment

I have been re-reading the thread, looking forward to implementing the memory parameter.

 

I am now pretty confident that the issue lies in X9 motherboards combined with more than 4GB of memory allocated, NOT physical memory. There are people reporting using this motherboard with 16GB without an issue, but all those mentions point to systems running virtual machines with less than 4GB allocated.

 

Also found someone without VMs running without any issues... 4GB of RAM:

 

http://lime-technology.com/forum/index.php?topic=25306.msg220274#msg220274

 

 

Link to comment

StevenD explained quite clearly what to do (I just read it and am rebooting now). There is a .cfg file on your flash drive that you need to change.

 

The file is called syslinux.cfg.

 

Open it in EditPad (do not use Notepad) or vi and change the following line:

 

append initrd=bzroot

 

to:

 

append mem=4095M initrd=bzroot

 

Save the file and reboot.
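
For reference, the whole label section of a stock syslinux.cfg looks roughly like this after the edit (this is an assumed/typical layout; the exact lines can differ between unRAID releases, and only the append line needs changing):

default /syslinux/menu.c32
menu title Lime Technology
prompt 0
timeout 50
label unRAID OS
  menu default
  kernel bzimage
  append mem=4095M initrd=bzroot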

Link to comment

Well... there is a definite effect... I am running without the dirtyable parameter. Speeds are around 20MB/s to 25MB/s; that is nowhere near what you are getting, but definitely not the 500KB/s I used to get... And about the same effect I got when running with the dirtyable parameter set to 1!

 

I was able to add the dirtyable parameter, but it still crashed my system. I've been running solid for a few hours now. Transferred approx. 200GB with no issues.
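
For anyone wondering, the "dirtyable parameter" people keep referring to appears to be the vm.highmem_is_dirtyable sysctl. Assuming that is the setting in question, it can be checked and toggled on a running system from the console like this:

sysctl vm.highmem_is_dirtyable        # show the current value (0 or 1)
sysctl -w vm.highmem_is_dirtyable=1   # let highmem pages count toward the dirty-page limits

Putting that second command in the go script on the flash drive would keep it across reboots.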

Link to comment

Well... there is a definite effect... I am running without the dirtyable parameter. Speeds are around 20MB/s to 25MB/s; that is nowhere near what you are getting, but definitely not the 500KB/s I used to get... And about the same effect I got when running with the dirtyable parameter set to 1!

 

I guess StevenD gets much higher speeds because he is running parity on 2x 7200RPM drives set up as RAID0.

 

Link to comment

Well... there is a definite effect... I am running without the dirtyable parameter. Speeds are around 20MB/s to 25MB/s; that is nowhere near what you are getting, but definitely not the 500KB/s I used to get... And about the same effect I got when running with the dirtyable parameter set to 1!

 

I guess StevenD gets much higher speeds because he is running parity on 2x 7200RPM drives set up as RAID0.

 

 

That's for parity, and it does help, a little.  However, I'm also writing to a cache drive on a controller that has 256MB of cache.  For writes to the protected array without cache, I get 35MB/s to 40MB/s.

Link to comment

Interesting and possibly related post here:

 

X9SCM-F/82574L/e1000e lag / high latency (e1000e/Intel bug)

http://comments.gmane.org/gmane.linux.kernel/1372917

From said post (onboard Intel eth0 / 82574L):

 

$ ping windowspc
PING windowspc (192.168.0.1) 56(84) bytes of data.
64 bytes from windowspc (192.168.0.1): icmp_req=1 ttl=128 time=0.544 ms
64 bytes from windowspc (192.168.0.1): icmp_req=2 ttl=128 time=0.193 ms
64 bytes from windowspc (192.168.0.1): icmp_req=3 ttl=128 time=0.619 ms
64 bytes from windowspc (192.168.0.1): icmp_req=4 ttl=128 time=0.642 ms
64 bytes from windowspc (192.168.0.1): icmp_req=5 ttl=128 time=0.426 ms
64 bytes from windowspc (192.168.0.1): icmp_req=6 ttl=128 time=0.464 ms
64 bytes from windowspc (192.168.0.1): icmp_req=7 ttl=128 time=0.696 ms
64 bytes from windowspc (192.168.0.1): icmp_req=8 ttl=128 time=1353 ms
64 bytes from windowspc (192.168.0.1): icmp_req=9 ttl=128 time=353 ms
64 bytes from windowspc (192.168.0.1): icmp_req=10 ttl=128 time=0.492 ms
64 bytes from windowspc (192.168.0.1): icmp_req=11 ttl=128 time=0.618 ms
64 bytes from windowspc (192.168.0.1): icmp_req=12 ttl=128 time=0.474 ms
64 bytes from windowspc (192.168.0.1): icmp_req=13 ttl=128 time=0.542 ms
64 bytes from windowspc (192.168.0.1): icmp_req=14 ttl=128 time=0.471 ms
64 bytes from windowspc (192.168.0.1): icmp_req=15 ttl=128 time=0.645 ms
64 bytes from windowspc (192.168.0.1): icmp_req=16 ttl=128 time=0.394 ms
64 bytes from windowspc (192.168.0.1): icmp_req=17 ttl=128 time=0.537 ms
64 bytes from windowspc (192.168.0.1): icmp_req=18 ttl=128 time=0.706 ms
64 bytes from windowspc (192.168.0.1): icmp_req=19 ttl=128 time=0.465 ms
64 bytes from windowspc (192.168.0.1): icmp_req=20 ttl=128 time=0.707 ms
64 bytes from windowspc (192.168.0.1): icmp_req=21 ttl=128 time=348 ms
64 bytes from windowspc (192.168.0.1): icmp_req=22 ttl=128 time=0.703 ms
64 bytes from windowspc (192.168.0.1): icmp_req=23 ttl=128 time=0.560 ms
64 bytes from windowspc (192.168.0.1): icmp_req=24 ttl=128 time=0.554 ms
64 bytes from windowspc (192.168.0.1): icmp_req=25 ttl=128 time=0.585 ms
64 bytes from windowspc (192.168.0.1): icmp_req=26 ttl=128 time=0.508 ms
64 bytes from windowspc (192.168.0.1): icmp_req=27 ttl=128 time=345 ms
64 bytes from windowspc (192.168.0.1): icmp_req=28 ttl=128 time=0.374 ms
64 bytes from windowspc (192.168.0.1): icmp_req=29 ttl=128 time=0.728 ms
64 bytes from windowspc (192.168.0.1): icmp_req=30 ttl=128 time=0.537 ms
64 bytes from windowspc (192.168.0.1): icmp_req=31 ttl=128 time=0.190 ms
64 bytes from windowspc (192.168.0.1): icmp_req=32 ttl=128 time=0.204 ms
64 bytes from windowspc (192.168.0.1): icmp_req=33 ttl=128 time=0.239 ms

 

Same test (copy test) with Samba as above, but now with an Intel 4-port NIC:

$ ping windowspc
64 bytes from windowspc (192.168.0.1): icmp_req=1 ttl=128 time=0.175 ms
64 bytes from windowspc (192.168.0.1): icmp_req=2 ttl=128 time=0.332 ms
64 bytes from windowspc (192.168.0.1): icmp_req=3 ttl=128 time=0.276 ms
64 bytes from windowspc (192.168.0.1): icmp_req=4 ttl=128 time=0.221 ms
64 bytes from windowspc (192.168.0.1): icmp_req=5 ttl=128 time=0.518 ms
64 bytes from windowspc (192.168.0.1): icmp_req=6 ttl=128 time=0.157 ms
64 bytes from windowspc (192.168.0.1): icmp_req=7 ttl=128 time=0.222 ms
64 bytes from windowspc (192.168.0.1): icmp_req=8 ttl=128 time=0.605 ms
64 bytes from windowspc (192.168.0.1): icmp_req=9 ttl=128 time=0.335 ms
64 bytes from windowspc (192.168.0.1): icmp_req=10 ttl=128 time=0.679 ms
64 bytes from windowspc (192.168.0.1): icmp_req=11 ttl=128 time=0.223 ms
64 bytes from windowspc (192.168.0.1): icmp_req=12 ttl=128 time=0.189 ms
64 bytes from windowspc (192.168.0.1): icmp_req=13 ttl=128 time=0.432 ms
64 bytes from windowspc (192.168.0.1): icmp_req=14 ttl=128 time=0.235 ms
64 bytes from windowspc (192.168.0.1): icmp_req=15 ttl=128 time=0.386 ms
64 bytes from windowspc (192.168.0.1): icmp_req=16 ttl=128 time=0.658 ms
64 bytes from windowspc (192.168.0.1): icmp_req=17 ttl=128 time=0.430 ms
64 bytes from windowspc (192.168.0.1): icmp_req=18 ttl=128 time=0.494 ms
64 bytes from windowspc (192.168.0.1): icmp_req=19 ttl=128 time=0.411 ms
64 bytes from windowspc (192.168.0.1): icmp_req=20 ttl=128 time=0.737 ms
64 bytes from windowspc (192.168.0.1): icmp_req=21 ttl=128 time=0.543 ms
64 bytes from windowspc (192.168.0.1): icmp_req=22 ttl=128 time=0.564 ms
64 bytes from windowspc (192.168.0.1): icmp_req=23 ttl=128 time=0.571 ms
64 bytes from windowspc (192.168.0.1): icmp_req=24 ttl=128 time=0.407 ms
64 bytes from windowspc (192.168.0.1): icmp_req=25 ttl=128 time=0.518 ms
64 bytes from windowspc (192.168.0.1): icmp_req=26 ttl=128 time=0.482 ms
64 bytes from windowspc (192.168.0.1): icmp_req=27 ttl=128 time=0.904 ms
64 bytes from windowspc (192.168.0.1): icmp_req=28 ttl=128 time=0.478 ms
64 bytes from windowspc (192.168.0.1): icmp_req=29 ttl=128 time=1.16 ms
64 bytes from windowspc (192.168.0.1): icmp_req=30 ttl=128 time=0.656 ms
64 bytes from windowspc (192.168.0.1): icmp_req=31 ttl=128 time=0.613 ms
64 bytes from windowspc (192.168.0.1): icmp_req=32 ttl=128 time=0.475 ms
64 bytes from windowspc (192.168.0.1): icmp_req=33 ttl=128 time=0.562 ms

 

So it appears, at the moment, that if you have problems with eth0, you should try using eth1 or buy another network card.
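
If anyone wants to reproduce that comparison on their own box, a quick sanity check is to bind ping to each interface in turn and compare the summary lines (this assumes both NICs are up with an address on the LAN; substitute your own client's IP):

ping -I eth0 -c 100 192.168.0.1 | tail -2   # onboard 82574L
ping -I eth1 -c 100 192.168.0.1 | tail -2   # second port or add-in NIC

A healthy gigabit link should show sub-millisecond averages with a small mdev; spikes into the hundreds of milliseconds on eth0 only would match the quoted behaviour.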

 

Has anyone tried more than 4GB on the ESXi unRAID VM?

Is anyone with the problem able to try another Ethernet card, or force the connection to eth1?

 

Link to comment

I run a different SM board with an X3470 Xeon proc. When I test under ESXi 5.0, I allocate only 6GB of RAM to the unRAID guest (the host has 32GB). With 4GB I run into issues when the mover has a lot of data to move; anything above 6GB has issues with clearing a drive under unRAID. So for virtualization, 6GB works best for me.

 

For networking, I have dual onboard Intel 82574L and a 4-port Intel 82571EB (HP):

vmnic0 (82574L) / vmnic4 (82571EB) for VM management and vMotion

vmnic1,2,3 (82571EB) / vmnic5 (82574L) for VM port groups (various, for different VLANs)

 

I don't see the behavior described in that write-up, but then again I am running a different SuperMicro board, and switches/routers make a BIG difference (which you will see with pings; you can't compare some rinky-dink Linksys/Dell etc. switch/router). I'm also not facing the issues the guys with the X9SCM-F are here, so the issue very much seems to stem from this particular SM board.

 

P.S. No plugins/add-ons (just VM Tools when I test under ESXi); I run what I need from other hosts/VMs (Plex, SAB, Sick, etc.), though many of them write to unRAID at the same time.

 

Link to comment

I have been re-reading the thread, looking forward to implementing the memory parameter.

 

I am now pretty confident that the issue lies in X9 motherboards combined with more than 4GB of memory allocated, NOT physical memory. There are people reporting using this motherboard with 16GB without an issue, but all those mentions point to systems running virtual machines with less than 4GB allocated.

 

Also found someone without VMs running without any issues... 4GB of RAM:

 

http://lime-technology.com/forum/index.php?topic=25306.msg220274#msg220274

 

More than 4GB of RAM is not the issue. I am running with 8GB and lots of plugins, so it's being utilized, and I do not have any issues.

 


Link to comment

More than 4GB of RAM is not the issue. I am running with 8GB and lots of plugins, so it's being utilized, and I do not have any issues.

 

It seems to be memory related for sure, though. Some of us here have been able to fix this problem by lowering the RAM (like I did).

 

Maybe it's not specific to 4GB, but this seems to fix some issue.

 

Also, I don't think it's specific to X9 boards, because I have a different motherboard and have the same problem.

Link to comment

mrow, you should share your BIOS settings, memory (brand/specs/slots used), NIC(s) used, whether you are virtualized or not, etc., so that others with the same board can see if there is a difference.

 

BIOS settings are whatever the defaults are for a board that shipped with firmware 2.0a. I'm at work now, so I can't check, though.

 

RAM is two sticks of Kingston KVR16E11/4 in Bank 0, Channel A, DIMM 0 and 1, which I believe are slots 1 and 3, with slot 1 being the furthest from the top of the board.

http://www.newegg.com/Product/Product.aspx?Item=N82E16820239223

 

I'm using the IPMI port and Ethernet port number 2 built into the board. I'm not virtualized. I have 10 disks: 8 are hooked up to a Supermicro AOC-SASLP-MV8, including the parity disk, and 1 array disk plus the cache disk are hooked up to onboard SATA II ports. The flash drive is plugged into the internal USB port on the board. I'm running 18 plugins, which I can list if someone really wants me to.

 

My network: all my PCs, Macs and iDevices are assigned static IPs via DHCP on an Apple AirPort Extreme router. Wired PCs are hooked up to a TrendNet TEG-S80G 8-port gigabit switch.

 

Here are the types of disks I have for those interested:

[screenshot of drive models]

 

 

 

If I've missed something or you want to know more, let me know.

Link to comment

As far as I have read, this problem only affects those with >4GB installed, and a 4GB limit would seem to suggest that the problem has something to do with PAE.

 

While reports are still sketchy, I suspect that it isn't just SM boards, or even the C204 chipset, that are affected. Perhaps it's just that those running more recent hardware are more likely to install more RAM.

 

However, there are some running with more than 4GB on the X9SCM board who are adamant that their system is not afflicted. Perhaps we need to start looking at BIOS versions/revision dates. Has anyone looked at changelogs for the SM BIOS?

 

Link to comment

However, there are some running with more than 4GB on the X9SCM board who are adamant that their system is not afflicted. Perhaps we need to start looking at BIOS versions/revision dates. Has anyone looked at changelogs for the SM BIOS?

 

I am running BIOS version 2.0a with 8GB of RAM and no issues. Looking at Supermicro's website, it seems they don't give changelogs for BIOS versions, which seems very strange to me.
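
For anyone who wants to confirm their BIOS version and build date without rebooting into setup, dmidecode (if it is present on your build) will report it from the console; a quick diagnostic suggestion, not something quoted from this thread:

dmidecode -t bios | grep -i -e version -e "release date"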

Link to comment

I only have 4GB of RAM installed, but I implemented the 4095M memory parameter to test whether it was causing my write speed of 12-15MB/s.

 

I just completed a 30GB transfer and it sped up throughout, ending at 21.4MB/s as the transfer finished.

 

Not sure why limiting to 4GB of RAM helped when I only have 4GB of RAM installed, so I started a 600GB transfer; 15 minutes in, the transfer speed has stabilized at 24.3MB/s, exactly my previous pre-rc8a speed.

 

*edit* 

Still climbing, 25.1 MB/s now

 

*edit*

Consistently copying varied file sizes at 24-26 MB/s now, very happy!

 

 

Link to comment

Just an idea... has anyone running unRAID natively (not in a VM) tried simply disabling the NX bit in the BIOS setup (that should disable PAE, I think?) to see what happens? That way PAE should be disabled for sure. I'm not sure that mem=4xxx actually does the same thing... remember that video memory and other PCI allocations take up address space (the amount depends on hardware), and I suspect the mem parameter only limits RAM, so PAE could still be in use for a few MB.
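
A rough way to check what the kernel is actually doing with highmem/PAE after booting with (or without) the mem= limit; this is just a diagnostic sketch, not something taken from this thread:

grep -i pae /proc/cpuinfo    # does the CPU advertise PAE at all?
grep High /proc/meminfo      # HighTotal: 0 kB means the 32-bit kernel is not using highmem
dmesg | grep -i highmem      # what the kernel mapped at boot

If HighTotal drops to zero with mem=4095M but is non-zero without it, that would lend weight to the PAE/highmem theory.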

Link to comment

Wow, setting the maximum memory to 4095M has had a HUGE impact! I'm seeing write speeds in excess of 100MB/s ...

 

This didn't help my other problem involving "attempted task abort!" in the syslog during parity check, which I'm still troubleshooting...

Link to comment
