unRAID Server Release 5.0-rc10 Available


limetech


I also have write speed issues. I installed 5.0-rc10 yesterday by replacing the two files, commenting out the Samba fix in the go file, and re-running the New Permissions script once the array was up.

 

A little while ago I tried to move a 4.5GB ISO to the array, and TeraCopy indicated the transfer speed was 512KB/sec. :o I let it go for a few minutes; it jumped to 1MB/sec at one point but mostly stayed at 512KB/sec.

 

Seeing the comment above, I checked and the link speed is 1000Mb.  I don't have time to play with it today so I put rc8a back in place (2 files, re-enabled the updated Samba, didn't run permissions script), and the same file started transferring at 34MB/sec.  I will try to get a syslog when I have time to test it again, this time removing Simple Features.
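(A quick way to take the network and Samba out of the picture is a local dd write test on the server. A sketch below; TARGET and COUNT are placeholders that default to /tmp and 8MB here, so point TARGET at an array disk such as /mnt/disk1 and raise COUNT to 1024 for a meaningful 1 GiB test.)

```shell
#!/bin/sh
# Local write-speed probe (sketch). TARGET and COUNT are assumptions:
# point TARGET at an array disk (e.g. /mnt/disk1) and use COUNT=1024
# for a 1 GiB test. conv=fdatasync forces the data to disk, so dd
# reports sustained speed instead of the initial RAM-cache burst.
TARGET=${TARGET:-/tmp}
COUNT=${COUNT:-8}
dd if=/dev/zero of="$TARGET/ddtest.bin" bs=1M count="$COUNT" conv=fdatasync
SZ=$(stat -c %s "$TARGET/ddtest.bin")
echo "wrote $SZ bytes"
rm -f "$TARGET/ddtest.bin"
```

With conv=fdatasync the reported rate should roughly match what the network copy settles at; without it the first seconds just fill RAM.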

Link to comment

Made the switch to rc10 from rc8a, have Realtek 8111E - no problems.  Ran iperf on rc8a before switch: 112MB/s to/from, ran same test with rc10 and got same values.  Client is Xubuntu 12.10 with Realtek 8111DL.  Have not tested NFS or SMB yet. 

 

Update: Ran some NFS tests with rc10.  I set up a couple of 3GB tmpfs mounts - one on unRaid with an NFS export and the other on my Xubuntu PC.  Using rsync on Xubuntu and a csv capture with bwm-ng on unRaid:

 

File sizes: several different TV shows of 2.2GB each

Ram to Ram tmpfs via NFS: 117MB/s

 

unRaid WD Black to Xubuntu Ram tmpfs via NFS: 103MB/s

(WD Black reported 125MB/s with hdparm -tT)

 

unRaid WD Black to Xubuntu disk via NFS: 65-73MB/s (w/ 1-3 second hiccups and wild fluctuations)

(Disk limited on Xubuntu, hdparm -tT reported 73.76MB/s)

 

Will do SMB next.

 


Unraid 5.0-rc10 - Asus M5A78L-MLX Plus - AMD Athlon II X3 450 Rana 3.2GHz - 8GB DDR3 - Antec NEO ECO 620W - Antec Three Hundred Case - 1x Rosewill RC-211 - Parity: 1T Seagate ST1000DM005/HD103SJ - DATA: 3x WD Black 750G - 1x 1T Seagate ST1000DM003 - 1x 500G Seagate ST500DM002, Cache: Intel X25-V SSD 40GB

Link to comment

I installed rc10 successfully; however, I'm unable to stop my array (I was powering down the server).

 

I've gone through the standard commands to see if anything is holding things up, but I can't find anything...

 

root@Tower:/boot# fuser -k /mnt/disk4
root@Tower:/boot# fuser -k /mnt/disk5
root@Tower:/boot# fuser -k /mnt/disk6
root@Tower:/boot# fuser -k /mnt/disk7
root@Tower:/boot# fuser -k /mnt/disk8
root@Tower:/boot# fuser -k /mnt/disk9
root@Tower:/boot# fuser -k /mnt/disk10
root@Tower:/boot# fuser -k /mnt/disk11
root@Tower:/boot# fuser -k /mnt/disk12
root@Tower:/boot# fuser -k /mnt/disk13
root@Tower:/boot# fuser -k /mnt/cache
root@Tower:/boot# fuser -mv /mnt/disk* /mnt/user/*

 

all that reveals nothing... yet my array won't stop :(
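(Side note: the per-disk fuser calls above can be collapsed into one pass. A sketch - check_mounts is a made-up helper name, and -mv only lists the processes holding a mount rather than killing them as -k does:)

```shell
#!/bin/sh
# Sketch: check every array mount point in one loop instead of one
# fuser call per disk. -mv lists holders of the mount; stderr is
# redirected because fuser's verbose output goes there.
check_mounts() {
    for m in "$@"; do
        [ -d "$m" ] || continue
        echo "== $m =="
        fuser -mv "$m" 2>&1 || true
    done
}
# Stock unRAID layout (adjust to taste):
check_mounts /mnt/disk* /mnt/cache /mnt/user
```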

 

here is the output of my ps -ef

 

root@Tower:/boot# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 04:22 ?        00:00:04 init
root         2     0  0 04:22 ?        00:00:00 [kthreadd]
root         3     2  0 04:22 ?        00:00:00 [ksoftirqd/0]
root         6     2  0 04:22 ?        00:00:00 [migration/0]
root         7     2  0 04:22 ?        00:00:00 [migration/1]
root         9     2  0 04:22 ?        00:00:00 [ksoftirqd/1]
root        11     2  0 04:22 ?        00:00:00 [migration/2]
root        13     2  0 04:22 ?        00:00:00 [ksoftirqd/2]
root        14     2  0 04:22 ?        00:00:00 [migration/3]
root        16     2  0 04:22 ?        00:00:00 [ksoftirqd/3]
root        17     2  0 04:22 ?        00:00:00 [migration/4]
root        18     2  0 04:22 ?        00:00:00 [kworker/4:0]
root        19     2  0 04:22 ?        00:00:00 [ksoftirqd/4]
root        20     2  0 04:22 ?        00:00:00 [migration/5]
root        22     2  0 04:22 ?        00:00:00 [ksoftirqd/5]
root        23     2  0 04:22 ?        00:00:00 [migration/6]
root        25     2  0 04:22 ?        00:00:00 [ksoftirqd/6]
root        26     2  0 04:22 ?        00:00:00 [migration/7]
root        28     2  0 04:22 ?        00:00:00 [ksoftirqd/7]
root        29     2  0 04:22 ?        00:00:00 [khelper]
root       178     2  0 04:22 ?        00:00:01 [sync_supers]
root       180     2  0 04:22 ?        00:00:00 [bdi-default]
root       182     2  0 04:22 ?        00:00:00 [kblockd]
root       335     2  0 04:22 ?        00:00:00 [ata_sff]
root       345     2  0 04:22 ?        00:00:00 [khubd]
root       456     2  0 04:22 ?        00:00:00 [rpciod]
root       496     2  0 04:22 ?        00:00:00 [kswapd0]
root       558     2  0 04:22 ?        00:00:00 [fsnotify_mark]
root       578     2  0 04:22 ?        00:00:00 [nfsiod]
root       581     2  0 04:22 ?        00:00:00 [cifsiod]
root       587     2  0 04:22 ?        00:00:00 [crypto]
root       748     2  0 04:22 ?        00:00:00 [deferwq]
root       876     2  0 04:22 ?        00:00:00 [scsi_eh_0]
root       877     2  0 04:22 ?        00:00:00 [fw_event0]
root       879     2  0 04:22 ?        00:00:00 [scsi_eh_1]
root       880     2  0 04:22 ?        00:00:00 [scsi_eh_2]
root       881     2  0 04:22 ?        00:00:00 [scsi_eh_3]
root       882     2  0 04:22 ?        00:00:00 [scsi_eh_4]
root       883     2  0 04:22 ?        00:00:00 [scsi_eh_5]
root       884     2  0 04:22 ?        00:00:00 [scsi_eh_6]
root       887     2  0 04:22 ?        00:00:00 [kworker/u:4]
root       890     2  0 04:22 ?        00:00:00 [kworker/u:7]
root       953     2  0 04:22 ?        00:00:00 [scsi_eh_7]
root       955     2  0 04:22 ?        00:00:00 [usb-storage]
root       969     2  0 04:22 ?        00:00:00 [poll_0_status]
root      1246     1  0 04:22 ?        00:00:00 /usr/sbin/syslogd -m0
root      1250     1  0 04:22 ?        00:00:00 /usr/sbin/klogd -c 3 -x
root      1276     1  0 04:22 ?        00:00:00 /sbin/dhcpcd -t 10 -h Tower eth0
root      1368     1  0 04:22 ?        00:00:00 /sbin/rpc.statd
root      1378     1  0 04:22 ?        00:00:00 /usr/sbin/inetd
root      1387     1  0 04:22 ?        00:00:00 /usr/sbin/acpid
root      1402     1  0 04:22 ?        00:00:00 /usr/sbin/crond -l notice
daemon    1404     1  0 04:22 ?        00:00:00 /usr/sbin/atd -b 15 -l 1
root      1967     2  0 05:33 ?        00:00:00 [kworker/4:1]
root      2410     2  0 05:35 ?        00:00:00 [kworker/5:0]
root      3282     2  0 05:39 ?        00:00:00 [kworker/0:0]
root      3516     2  0 05:41 ?        00:00:00 [kworker/7:1]
root      5904     2  0 05:52 ?        00:00:00 [kworker/2:0]
root      7549     2  0 06:00 ?        00:00:00 [kworker/1:1]
root      7989     2  0 06:03 ?        00:00:00 [kworker/6:0]
root      8239     2  0 06:04 ?        00:00:00 [kworker/1:2]
root      9052     2  0 06:08 ?        00:00:00 [kworker/3:2]
root      9657     2  0 06:11 ?        00:00:00 [kworker/5:1]
root      9879     2  0 06:12 ?        00:00:00 [kworker/0:2]
root     10388     2  0 06:15 ?        00:00:00 [kworker/3:0]
root     10415     1  0 04:22 ?        00:00:00 /usr/sbin/sshd
root     10547     2  0 06:16 ?        00:00:00 [kworker/2:3]
root     13695 20741  0 06:43 pts/0    00:00:00 ps -ef
root     18476     1  0 04:22 ?        00:00:05 /usr/local/sbin/emhttp
root     18477     1  0 04:22 tty1     00:00:00 /sbin/agetty 38400 tty1 linux
root     18478     1  0 04:22 tty2     00:00:00 /sbin/agetty 38400 tty2 linux
root     18479     1  0 04:22 tty3     00:00:00 /sbin/agetty 38400 tty3 linux
root     18480     1  0 04:22 tty4     00:00:00 /sbin/agetty 38400 tty4 linux
root     18481     1  0 04:22 tty5     00:00:00 /sbin/agetty 38400 tty5 linux
root     18482     1  0 04:22 tty6     00:00:00 /sbin/agetty 38400 tty6 linux
root     18729     2  0 04:23 ?        00:00:30 [mdrecoveryd]
root     18734     2  0 04:23 ?        00:00:00 [spinupd]
root     18735     2  0 04:23 ?        00:00:00 [spinupd]
root     18736     2  0 04:23 ?        00:00:00 [spinupd]
root     18737     2  0 04:23 ?        00:00:00 [spinupd]
root     18738     2  0 04:23 ?        00:00:00 [spinupd]
root     18739     2  0 04:23 ?        00:00:00 [spinupd]
root     18740     2  0 04:23 ?        00:00:00 [spinupd]
root     18741     2  0 04:23 ?        00:00:00 [spinupd]
root     18742     2  0 04:23 ?        00:00:00 [spinupd]
root     18743     2  0 04:23 ?        00:00:00 [spinupd]
root     18744     2  0 04:23 ?        00:00:00 [spinupd]
root     18745     2  0 04:23 ?        00:00:00 [spinupd]
root     18746     2  0 04:23 ?        00:00:00 [spinupd]
root     18747     2  0 04:23 ?        00:00:00 [spinupd]
root     18809     2  0 04:23 ?        00:01:03 [unraidd]
root     18837     2  0 04:23 ?        00:00:00 [reiserfs]
root     20740  1378  0 04:28 ?        00:00:00 in.telnetd: 192.168.1.11
root     20741 20740  0 04:28 pts/0    00:00:00 -bash
root     27485     2  0 04:59 ?        00:00:00 [kworker/6:1]
root     29233     2  0 05:07 ?        00:00:00 [kworker/7:2]

 

I would try to stop the array through the CLI; however, I cannot access the wiki page :(

 


 

Suggestions?

 

BTW, rc10 did not help with my super-slow parity sync and my "attempted task abort!" issues.  I just purchased another M1015 RAID card that I want to try out...

Link to comment

Made the switch to rc10 from rc8a, have Realtek 8111E - no problems. [...] Ram to Ram tmpfs via NFS: 117MB/s [...] Will do SMB next.

 

Unevent, could you by chance post how to create those tmpfs mounts on unRaid?  I'll be moving data to and from a Win7 box, but at least it is an SSD, and mostly I want to confirm that unRaid and my unRaid link aren't the problems.  I can move data to and from my two Win7 boxen at link speed.

 

Thanks.

Link to comment

I installed rc10 successfully, however I'm unable to stop my array (was powering down the server). [...] all that reveals nothing... yet my array won't stop :( [...] Suggestions?

 

I successfully stopped/rebooted by performing the following commands:

 

/root/samba stop
umount /dev/md1
...
umount /dev/md13
/root/mdcmd stop
reboot

 

The array registered as valid, ... I was getting some errors on a hard drive, so I'm running a long SMART test while I go to work...

Link to comment

1) Network copy > I confirm what I said before - speed around 2.3 MB/s writing to a "disk..." share

2) Network copy > Copying to the cache drive gets about 7.5 MB/s

3) dd command on "disk..." (/mnt/disk2) gets 2.4 MB/s

4) dd command on cache (/mnt/cache) gets 2.4 MB/s

 

I did check with the ethtool eth0 command and I can confirm that the link is 1000 and full duplex.

 

Doing more tests, I noticed that when copying to /mnt/disk2 I got about 30MB/s for the first 7-8 seconds; after that it drops to 2 MB/s.

 

I'd also like to ask again if someone can share an old 5.0 RC version so I could try it.

 

Cheers

Max

Link to comment

1) Network copy > I confirm what I said before - speed around 2.3 MB/s writing to a "disk..." share [...] I'd also like to ask again if someone can share an old 5.0 RC version so I could try it.

 

 

How much usable ram is in your machine for unRAID's use?

In the other thread we've been discussing setting a limit of 4G for unRAID's usage.

Read from here -> http://lime-technology.com/forum/index.php?topic=22675.msg220296#msg220296
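(For reference, the limit discussed there is applied with the kernel's mem parameter in syslinux.cfg on the flash drive. A sketch of what the boot entry looks like with the cap added - the exact value, and whether your board needs it at all, is what the linked thread covers:)

```
label unRAID OS
  menu default
  kernel bzimage
  append initrd=bzroot mem=4095M
```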

 

 

Link to comment

I'm having issues with the parity check. I have 7 2TB drives (5 data + parity + cache). All of them are WD green drives.

 

In 4.7 my parity checks lasted about 6h. They increased to 9h on 5rc8a but since some people were having some issues, I didn't pay much attention. However in 5rc10 (I didn't upgrade to rc9) duration has increased to 13h. I have run two parity checks two days in a row and duration has been +13h in both of them.

 

This is my syslog:

 

http://tny.cz/2ca38aac

 

Any hints?

Link to comment

I'm having issues with the parity check. In 4.7 my parity checks lasted about 6h; in 5rc10 they take 13+ hours. [...] Any hints?

 

I think these lines in the syslog may indicate the problem, but someone with more experience may need to confirm:

 

Jan 13 17:43:18 Hercules kernel: sd 0:0:6:0: attempting task abort! scmd(d1267b40)

Jan 13 17:43:18 Hercules kernel: sd 0:0:6:0: [sdg] CDB: cdb[0]=0x28: 28 00 00 5b 0c 40 00 04 00 00

Jan 13 17:43:18 Hercules kernel: scsi target0:0:6: handle(0x0010), sas_address(0x5001e6739eda2ff0), phy(16)

Jan 13 17:43:18 Hercules kernel: scsi target0:0:6: enclosure_logical_id(0x5001e6739eda2fff), slot(16)

Jan 13 17:43:19 Hercules kernel: sd 0:0:6:0: task abort: SUCCESS scmd(d1267b40)

 

Those lines repeat every minute or so during the parity check, but because they occur on multiple drives it's probably not the drives themselves.  I'm not sure whether the error points at cable, power, or controller though.  I doubt it's related to upgrading unRAID to RC10 unless other people have started seeing the same error.

Link to comment

I'm having issues with the parity check. In 4.7 my parity checks lasted about 6h; in 5rc10 they take 13+ hours. [...] Any hints?

 

I get super slow parity speeds, and I've attributed them to the "Attempting task abort!" entries in my syslog, which I see you have as well.  Did you not have those entries before?

Link to comment

I'm having issues with the parity check. In 4.7 my parity checks lasted about 6h; in 5rc10 they take 13+ hours. [...] Any hints?

 

As was stated above, it seems related to the Task Abort attempts: about one a minute for the entire 13 hours, but none before or after the parity check.  None occurred on the parity drive, the fewest on Disk 1, the most on disk 2, with many on the other three data drives as well.  No clue here as to why; nothing else appears to be an issue.  Have you checked for the latest BIOS for your SAS card?  Are others with the same problem also running VMware?

Link to comment

I'm having issues with the parity check. In 4.7 my parity checks lasted about 6h; in 5rc10 they take 13+ hours. [...] Any hints?

 

I think these lines in the syslog may indicate the problem, but someone with more experience may need to confirm:

 

Jan 13 17:43:18 Hercules kernel: sd 0:0:6:0: attempting task abort! scmd(d1267b40)

Jan 13 17:43:18 Hercules kernel: sd 0:0:6:0: [sdg] CDB: cdb[0]=0x28: 28 00 00 5b 0c 40 00 04 00 00

Jan 13 17:43:18 Hercules kernel: scsi target0:0:6: handle(0x0010), sas_address(0x5001e6739eda2ff0), phy(16)

Jan 13 17:43:18 Hercules kernel: scsi target0:0:6: enclosure_logical_id(0x5001e6739eda2fff), slot(16)

Jan 13 17:43:19 Hercules kernel: sd 0:0:6:0: task abort: SUCCESS scmd(d1267b40)

 

Those lines repeat every minute or so during the parity check; it could be a problem with this disk: WDC_WD20EARX-00PASB0_WD-WCAZA8449068.  I'm not sure whether the error points at disk, cable, power, or controller though.  I doubt it's related to upgrading unRAID to RC10 unless other people have started seeing the same error.

 

I never had an issue with that drive, and its SMART tests run OK. I'm running a Norco 4224 with many empty slots; I could move that drive to another backplane. Would this help?

Link to comment

I get super slow parity speeds, and I've attributed them to the "Attempting task abort!" entries in my syslog, which I see you have as well.  Did you not have those entries before?

 

I don't have any previous logs, didn't care to save them  :(

Link to comment

As was stated above, it seems related to the Task Abort attempts [...] Have you checked for the latest BIOS for your SAS card?  Are others with the same problem also running VMware?

My M1015 SAS card has F14 firmware and I'm running an expander. Parity and the disk3 drive are on the SAS card; the others are on the expander.

I have a spare SAS card - would it help to test with it?

Link to comment
I never had an issue with that drive, and its SMART tests run OK. I'm running a Norco 4224 with many empty slots; I could move that drive to another backplane. Would this help?

 

I skimmed the syslog too quickly the first time; it looks like the error happens on multiple drives, so the drives themselves are probably fine.  Swapping the drives to a different backplane and reseating the cables are both worth trying.

Link to comment

How much usable ram is in your machine for unRAID's use?

In the other thread we've been discussing setting a limit of 4G for unRAID's usage.

Read from here -> http://lime-technology.com/forum/index.php?topic=22675.msg220296#msg220296

 

Is this exclusively related to this motherboard? I've got 8GB installed in a Supermicro C2SEA and haven't noticed this problem... Will have to go and test this now!

 

On another note, I've successfully upgraded to RC10. Parity check has just completed successfully, and took pretty much the same time as on RC8a.

 

EDIT: Forgot to mention, my board has a Realtek RTL8111C which seems to be working fine too.

 

Sent from my Nexus 7 using Tapatalk HD

 

Link to comment

There is much more I could write about my opinion of Realtek's Linux support.  Let me just say that I no longer have any motherboards around with Realtek NICs, and we will probably never again offer motherboards with Realtek NICs in any new server product moving forward, 'nuff said.

 

Does this mean Limetech's official position is that you do not recommend Realtek NICs? I know that Intel NICs are the cream of the crop, especially for unRAID use, but most motherboards come with Realtek.

Link to comment

Is this exclusively related to this motherboard? I've got 8GB installed in a Supermicro C2SEA and haven't noticed this problem... Will have to go and test this now!

 

That issue seems to be limited to that board... I'm eager to get home and try the fix (as I'm one of the people with that board).

Link to comment

Unevent, could by chance post how to create those temps on unraid?  I'll be moving data to and from a win7 box but at least it is an SSD and mostly I want to confirm that unraid and my unraid link aren't the problems.  I can move data to and from my two win7 boxen at link speed.

Thanks.

 

You can do it two ways: create a mount point under an existing SMB or NFS export (i.e., under an existing share), or create a new export.  NFS will be faster than SMB, but if you're on Windows there's not much choice.

 

On unRaid:

mkdir -m 777 /tmp/ramdrv (then create your own export manually), or mkdir /mnt/disk1/ramdrv (to piggyback an existing export)

 

_Stop all addons_ (unmenu is fine) and clear the caches, then see what memory is left when choosing the size:

sync && echo 3 > /proc/sys/vm/drop_caches

free -lm

 

Create the ram drive:

mount -t tmpfs -o size=3G tmpfs /tmp/ramdrv

The size is important and should be chosen conservatively if you don't have a swap partition enabled.  G = gig, M = meg, and so on.  The tmpfs will be created with a fixed maximum size and will not grow beyond what you tell it.  It will also be swappable if you have a swap partition enabled, which is nice for those oops moments as it keeps the server from crashing.
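(If you want a starting point for that size, the "free" column of free -m can drive it. A sketch - halving free memory is just a conservative rule of thumb, not something prescribed in this thread, and suggest_tmpfs_size is a made-up helper name:)

```shell
#!/bin/sh
# Rule-of-thumb helper (an assumption, not from this thread): suggest
# a tmpfs size of half the "free" column of `free -m`. Field $4 of the
# Mem: line is "free" in both old and new procps layouts.
suggest_tmpfs_size() {
    awk '/^Mem:/ { print int($4 / 2) "M" }'
}
# e.g.: mount -t tmpfs -o size=$(free -m | suggest_tmpfs_size) tmpfs /tmp/ramdrv
command -v free >/dev/null 2>&1 && free -m | suggest_tmpfs_size || true
```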

 

If you piggybacked an existing export, browse (in Windows) to the network, choose disk1, and it will be there.  If you want to roll your own export, edit the exports file (NFS) or the Samba config and restart the service.

 

To capture a csv file of the transfers with bwm-ng on unRaid (install bwm-ng via unmenu), telnet in and:

bwm-ng --output csv -F transfer_log.csv --count 1000 --interfaces eth0 

 

Start the command and then run your transfer.  Hit CTRL-C if the transfer finishes while bwm-ng is still logging, or increase the count if logging ends early (scary).

 

Edit the file to add the headings below to the top:

unix_timestamp;iface_name;bytes_out;bytes_in;bytes_total;packets_out;packets_in;packets_total;errors_out;errors_in

 

Import into your favorite spreadsheet program.  Delete the trailing columns with no headings.  Sort to separate out the "total" rows and delete them, then sort ascending by timestamp.  Plot against timestamp however you want.  I'd post one of my NFS plots, but I don't have a pic hosting site.  The plots are only exciting when they're crazy with hiccups and dropouts in the transfers.
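(For anyone who'd rather skip the spreadsheet, the same cleanup can be done with awk. A sketch against the field order above - the sample rows are made up:)

```shell
#!/bin/sh
# bwm-ng csv rows: unix_timestamp;iface_name;bytes_out;bytes_in;...
# Made-up sample data, including one of the "total" rows to discard:
cat > /tmp/transfer_log.csv <<'EOF'
1358100000;eth0;104857600;524288
1358100000;total;104857600;524288
1358100001;eth0;110100480;393216
EOF
# Keep eth0 rows only (which drops the totals) and print
# timestamp;bytes_out;bytes_in, ready to plot:
awk -F';' '$2 == "eth0" { print $1 ";" $3 ";" $4 }' /tmp/transfer_log.csv
```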

 

Another tidbit is to watch for dropped packets if you suspect network trouble, in another telnet session:

watch -n.1 'ifconfig|grep dropped'

 

Drops while nothing is going on are most likely framing errors being reported as dropped packets.  Drops during a transfer are more important to watch for.
Link to comment
