unRAID Server Release 6.1.3 Available


limetech


EDIT:  Just checked, and SAB (SABnzbd) is running a repair on a download.  Not sure if it is related, but I figured I would throw it out there.  Maybe my VM and SAB are competing for the same CPUs?

 

Just got these in my syslog.  Nothing has crashed.  CPUs 12 and 13 are pinned to one of my OE VMs that I PXE boot (using NFS for the storage disk).

 

Oct 10 19:54:00 unRAID kernel: ------------[ cut here ]------------
Oct 10 19:54:00 unRAID kernel: WARNING: CPU: 12 PID: 2866 at fs/nfsd/nfsproc.c:756 nfserrno+0x45/0x4c()
Oct 10 19:54:00 unRAID kernel: nfsd: non-standard errno: -38
Oct 10 19:54:00 unRAID kernel: Modules linked in: xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ebtable_filter ebtables kvm_intel kvm vhost_net vhost macvtap macvlan tun iptable_mangle xt_nat veth ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_nat_ipv4 iptable_filter ip_tables nf_nat md_mod igb i2c_algo_bit mvsas ahci i2c_i801 libsas libahci ptp scsi_transport_sas pps_core ipmi_si acpi_cpufreq
Oct 10 19:54:00 unRAID kernel: CPU: 12 PID: 2866 Comm: nfsd Not tainted 4.1.7-unRAID #3
Oct 10 19:54:00 unRAID kernel: Hardware name: Supermicro X8DTH-i/6/iF/6F/X8DTH, BIOS 2.1b       05/04/12  
Oct 10 19:54:00 unRAID kernel: 0000000000000009 ffff88000223bca8 ffffffff815eff9a 0000000000000000
Oct 10 19:54:00 unRAID kernel: ffff88000223bcf8 ffff88000223bce8 ffffffff810477cb 0000000000000000
Oct 10 19:54:00 unRAID kernel: ffffffff811e0f27 ffff880c61672408 ffff880c61672548 ffff880c43ecc0c0
Oct 10 19:54:00 unRAID kernel: Call Trace:
Oct 10 19:54:00 unRAID kernel: [] dump_stack+0x4c/0x6e
Oct 10 19:54:00 unRAID kernel: [] warn_slowpath_common+0x97/0xb1
Oct 10 19:54:00 unRAID kernel: [] ? nfserrno+0x45/0x4c
Oct 10 19:54:00 unRAID kernel: [] warn_slowpath_fmt+0x41/0x43
Oct 10 19:54:00 unRAID kernel: [] nfserrno+0x45/0x4c
Oct 10 19:54:00 unRAID kernel: [] nfsd_link+0x1f5/0x299
Oct 10 19:54:00 unRAID kernel: [] nfsd3_proc_link+0xb1/0xc0
Oct 10 19:54:00 unRAID kernel: [] nfsd_dispatch+0x93/0x14e
Oct 10 19:54:00 unRAID kernel: [] svc_process+0x3c3/0x60f
Oct 10 19:54:00 unRAID kernel: [] nfsd+0x106/0x158
Oct 10 19:54:00 unRAID kernel: [] ? nfsd_destroy+0x6f/0x6f
Oct 10 19:54:00 unRAID kernel: [] kthread+0xd6/0xde
Oct 10 19:54:00 unRAID kernel: [] ? kthread_create_on_node+0x172/0x172
Oct 10 19:54:00 unRAID kernel: [] ret_from_fork+0x42/0x70
Oct 10 19:54:00 unRAID kernel: [] ? kthread_create_on_node+0x172/0x172
Oct 10 19:54:00 unRAID kernel: ---[ end trace 3fbf405d57f675a1 ]---
Oct 10 19:54:00 unRAID kernel: ------------[ cut here ]------------
Oct 10 19:54:00 unRAID kernel: WARNING: CPU: 12 PID: 2866 at fs/nfsd/nfsproc.c:756 nfserrno+0x45/0x4c()
Oct 10 19:54:00 unRAID kernel: nfsd: non-standard errno: -38
Oct 10 19:54:00 unRAID kernel: Modules linked in: xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ebtable_filter ebtables kvm_intel kvm vhost_net vhost macvtap macvlan tun iptable_mangle xt_nat veth ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_nat_ipv4 iptable_filter ip_tables nf_nat md_mod igb i2c_algo_bit mvsas ahci i2c_i801 libsas libahci ptp scsi_transport_sas pps_core ipmi_si acpi_cpufreq
Oct 10 19:54:00 unRAID kernel: CPU: 12 PID: 2866 Comm: nfsd Tainted: G        W       4.1.7-unRAID #3
Oct 10 19:54:00 unRAID kernel: Hardware name: Supermicro X8DTH-i/6/iF/6F/X8DTH, BIOS 2.1b       05/04/12  
Oct 10 19:54:00 unRAID kernel: 0000000000000009 ffff88000223bca8 ffffffff815eff9a 0000000000000000
Oct 10 19:54:00 unRAID kernel: ffff88000223bcf8 ffff88000223bce8 ffffffff810477cb 0000000000000000
Oct 10 19:54:00 unRAID kernel: ffffffff811e0f27 ffff880c61672408 ffff880c61672548 ffff880c19431680
Oct 10 19:54:00 unRAID kernel: Call Trace:
Oct 10 19:54:00 unRAID kernel: [] dump_stack+0x4c/0x6e
Oct 10 19:54:00 unRAID kernel: [] warn_slowpath_common+0x97/0xb1
Oct 10 19:54:00 unRAID kernel: [] ? nfserrno+0x45/0x4c
Oct 10 19:54:00 unRAID kernel: [] warn_slowpath_fmt+0x41/0x43
Oct 10 19:54:00 unRAID kernel: [] nfserrno+0x45/0x4c
Oct 10 19:54:00 unRAID kernel: [] nfsd_link+0x1f5/0x299
Oct 10 19:54:00 unRAID kernel: [] nfsd3_proc_link+0xb1/0xc0
Oct 10 19:54:00 unRAID kernel: [] nfsd_dispatch+0x93/0x14e
Oct 10 19:54:00 unRAID kernel: [] svc_process+0x3c3/0x60f
Oct 10 19:54:00 unRAID kernel: [] nfsd+0x106/0x158
Oct 10 19:54:00 unRAID kernel: [] ? nfsd_destroy+0x6f/0x6f
Oct 10 19:54:00 unRAID kernel: [] kthread+0xd6/0xde
Oct 10 19:54:00 unRAID kernel: [] ? kthread_create_on_node+0x172/0x172
Oct 10 19:54:00 unRAID kernel: [] ret_from_fork+0x42/0x70
Oct 10 19:54:00 unRAID kernel: [] ? kthread_create_on_node+0x172/0x172
Oct 10 19:54:00 unRAID kernel: ---[ end trace 3fbf405d57f675a2 ]---
Oct 10 19:54:00 unRAID kernel: ------------[ cut here ]------------
Oct 10 19:54:00 unRAID kernel: WARNING: CPU: 13 PID: 2866 at fs/nfsd/nfsproc.c:756 nfserrno+0x45/0x4c()
Oct 10 19:54:00 unRAID kernel: nfsd: non-standard errno: -38
Oct 10 19:54:00 unRAID kernel: Modules linked in: xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ebtable_filter ebtables kvm_intel kvm vhost_net vhost macvtap macvlan tun iptable_mangle xt_nat veth ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_nat_ipv4 iptable_filter ip_tables nf_nat md_mod igb i2c_algo_bit mvsas ahci i2c_i801 libsas libahci ptp scsi_transport_sas pps_core ipmi_si acpi_cpufreq
Oct 10 19:54:00 unRAID kernel: CPU: 13 PID: 2866 Comm: nfsd Tainted: G        W       4.1.7-unRAID #3
Oct 10 19:54:00 unRAID kernel: Hardware name: Supermicro X8DTH-i/6/iF/6F/X8DTH, BIOS 2.1b       05/04/12  
Oct 10 19:54:00 unRAID kernel: 0000000000000009 ffff88000223bca8 ffffffff815eff9a 0000000000000000
Oct 10 19:54:00 unRAID kernel: ffff88000223bcf8 ffff88000223bce8 ffffffff810477cb 0000000000000000
Oct 10 19:54:00 unRAID kernel: ffffffff811e0f27 ffff880c61672408 ffff880c61672548 ffff880a33698d80
Oct 10 19:54:00 unRAID kernel: Call Trace:
Oct 10 19:54:00 unRAID kernel: [] dump_stack+0x4c/0x6e
Oct 10 19:54:00 unRAID kernel: [] warn_slowpath_common+0x97/0xb1
Oct 10 19:54:00 unRAID kernel: [] ? nfserrno+0x45/0x4c
Oct 10 19:54:00 unRAID kernel: [] warn_slowpath_fmt+0x41/0x43
Oct 10 19:54:00 unRAID kernel: [] nfserrno+0x45/0x4c
Oct 10 19:54:00 unRAID kernel: [] nfsd_link+0x1f5/0x299
Oct 10 19:54:00 unRAID kernel: [] nfsd3_proc_link+0xb1/0xc0
Oct 10 19:54:00 unRAID kernel: [] nfsd_dispatch+0x93/0x14e
Oct 10 19:54:00 unRAID kernel: [] svc_process+0x3c3/0x60f
Oct 10 19:54:00 unRAID kernel: [] nfsd+0x106/0x158
Oct 10 19:54:00 unRAID kernel: [] ? nfsd_destroy+0x6f/0x6f
Oct 10 19:54:00 unRAID kernel: [] kthread+0xd6/0xde
Oct 10 19:54:00 unRAID kernel: [] ? kthread_create_on_node+0x172/0x172
Oct 10 19:54:00 unRAID kernel: [] ret_from_fork+0x42/0x70
Oct 10 19:54:00 unRAID kernel: [] ? kthread_create_on_node+0x172/0x172
Oct 10 19:54:00 unRAID kernel: ---[ end trace 3fbf405d57f675a3 ]---

 

I think these errors show up when an app or VM attempts to create a hardlink; hardlinks are not supported on user shares.  I couldn't find a definitive answer on whether SAB uses hardlinks during repairs, though.
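For what it's worth, errno 38 on Linux is ENOSYS ("Function not implemented"), which fits the hardlink theory: if the user-share filesystem doesn't implement the link operation, nfsd gets back an errno it doesn't recognize as a standard NFS error.  A quick way to check from the console (a sketch; the paths are hypothetical and should point at a file on a user share):

import errno, os

print(errno.errorcode[38], os.strerror(38))  # -> ENOSYS Function not implemented

try:
    # hypothetical paths; the source file must already exist on a user share
    os.link("/mnt/user/test/original.txt", "/mnt/user/test/hardlink.txt")
except OSError as e:
    print(e.errno, errno.errorcode.get(e.errno))  # expect 38 (ENOSYS) if hardlinks are unsupported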


First off, I would like to say thank you for the added SMART support for non-array drives! One of my non-array drives that houses some of my VMs just started showing multiple pending sectors and offline uncorrectable sectors (4 hours ago). Luckily I like to check my logs each morning, and I happened to notice some write errors to that drive, so it was super easy to check out what was happening with it. Would it be possible to add SMART notifications for non-array drives as well?


Got this message in my syslog for the first time:

 

Oct 15 03:15:00 unRAID emhttp: need_authorization: getpeername: Transport endpoint is not connected

 

The fact that it appears at exactly 03:15:00 suggests to me that it was triggered by a scheduled job.

 

Any ideas?

 

John


None of the standard unRAID jobs starts at 03:15:00; perhaps it's one of your plugins?

 

Since version 6.1, unRAID is stricter about webGUI access: anything trying to access the GUI from an address other than localhost requires authorization from emhttp.
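If it helps track down the 03:15 job, one way is to scan the cron files for a "15 3" schedule. A rough sketch (the cron locations are assumptions and can vary between unRAID versions and plugins):

import glob

# look for entries scheduled at minute 15, hour 3
for path in glob.glob("/etc/cron.d/*") + glob.glob("/var/spool/cron/crontabs/*"):
    try:
        with open(path) as f:
            for line in f:
                fields = line.split()
                if len(fields) >= 2 and fields[:2] == ["15", "3"]:
                    print(path, line.strip())
    except OSError:
        pass  # unreadable entry; skip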

 


My log is full and showing as 100%.

I thought this would rotate after a day, but it's been like this for a couple of days now.

I haven't had any odd behavior, other than that the system log display in the UI is cut off and doesn't scroll correctly.

Been up for 14 days now; I assume a reboot will fix this temporarily.

 

Any known causes or fixes?

log.png
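For anyone hitting this: /var/log on unRAID is a small tmpfs, so a single chatty message source can fill it. A quick sketch (standard paths assumed) to confirm usage and spot the largest files:

import os, shutil

usage = shutil.disk_usage("/var/log")  # /var/log is a tmpfs on unRAID
print(f"{usage.used / usage.total:.0%} used of {usage.total // 2**20} MiB")

# the five largest entries in /var/log
for entry in sorted(os.scandir("/var/log"), key=lambda e: e.stat().st_size, reverse=True)[:5]:
    print(entry.stat().st_size, entry.name)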

Here

I've read most of this thread, but not every thread on the board. Has the slow parity check/build problem been identified? This usually takes about 14 hours on my setup (6 drives on motherboard SATA, 2 on a 1430SA), not 31 hours!  :-\

 

 

Is that on an HP MicroServer? I've never had any issue with mine, always a good constant speed in every unRAID release, but I only have 4 array disks + cache.

 

My last parity check:

 

Last checked on Wed 28 Oct 2015 06:20:53 AM GMT (today), finding 0 errors. 
Duration: 10 hours, 12 minutes, 35 seconds. Average speed: 108.9 MB/sec

Yes, that's what I got before this release. Are you running 6.1.3? I have an Adaptec 1430SA as well, so this release appears not to like it (and other controllers like the SAS2LP?).


I’m on 6.1.3.

 

I also have the 1430SA on other servers, and unlike the SAS2LP (and even the SASLP to a lesser extent), parity check speed has been constant since v5.

 

How is your CPU usage during a check? Some releases have slightly higher usage, and because you have 8 disks it can make a difference.

 


I haven't heard of a resolution.  It only seems to impact certain controllers.  My old LSI SAS3041E completes the parity check in about the same time as before, if not a little faster.


Is that on an HP MicroServer? I've never had any issue with mine, always a good constant speed in every unRAID release, but I only have 4 array disks + cache.

Rather than the number of disks, it would be more useful to know the parity size when comparing parity checks.
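As a sanity check, the duration is essentially parity size divided by average sustained speed, so the 10h12m check at 108.9 MB/s quoted earlier implies roughly a 4TB parity disk:

# duration ≈ parity_size / average_speed
seconds = 10 * 3600 + 12 * 60 + 35
print(seconds * 108.9e6 / 1e12)  # -> ~4.0 TB of parity covered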

My N40L connects the last internal drive in IDE mode.  It doesn't matter which slot: I've tried 2, 3 and 4 drives in the internal slots, and it is always the last drive (i.e. slot 2, 3 or 4, depending on how many drives I've installed, counting from left to right).  It is not limited to 6.1.3 for me; I've seen it on 6.0.1.  When I connect 8 drives to my H310 flashed to IT mode, all connect at SATA speeds.  So you might check your logs for this.


It seems to have sped up and is now at over 100MB/sec with 62% done. Weird. I guess we'll see how quickly it finishes overnight!

 

Looking more carefully at your screenshot, your average speed is not that bad: 929GB in 2:44h is about 95MB/s average. If nothing was using the array, the momentary slowdown could be a disk hitting some slow sectors.


I also experience slowdowns during parity checks.

I just checked the performance of my disks, and one disk shows a lower average speed; the graph shows low performance over the first 2 TB of the disk (see the attachment). Can this somewhat troublesome disk cause bad speeds?

diskspeed.zip


I would suggest attaching a SMART report for disk 11.  Something is going on with that disk and/or its interface (i.e., its controller or cabling).


Turn off the spindown timer for that disk and issue a SMART long/extended test, then review the report.

If there are any pending sectors, the retries might slow things down for a short period.

However, there may be other issues as well.
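If it's easier, the test can also be started from the command line with smartctl; a sketch via Python's subprocess (replace /dev/sdX with the real device node; some controllers need an extra -d option):

import subprocess

dev = "/dev/sdX"  # placeholder; use the actual device
subprocess.run(["smartctl", "-t", "long", dev], check=True)   # start the extended self-test
# after the estimated polling time has passed, review the self-test log:
subprocess.run(["smartctl", "-l", "selftest", dev], check=True)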


All my disks are in CSE-M35T drive cages. I already relocated this disk to another slot by swapping disks, so it is using a different cable and controller channel, and it still shows the same result.

SMART report is attached.

tower1-smart-20151030-0715.zip


The short test is not sufficient.

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%     24865         -

 

 

The drive's spin-down timer needs to be disabled temporarily, and a long/extended test of the whole surface needs to be run.

196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0022   100   100   000    Old_age   Always       -       0
5   Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0

Nothing else seems to stand out.

 

Another option might be a badblocks test of the whole drive in read-only mode.

That may or may not trigger any events for weak sectors.  However, badblocks reads (like kernel reads) are retried, so the SMART long/extended test will reveal a problem earlier.
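For reference, read-only is badblocks' default mode, so a non-destructive whole-surface pass could look like this (a sketch; /dev/sdX is a placeholder, and the disk should be spun up and idle):

import subprocess

# -s: show progress, -v: verbose; no -w flag, so the pass is read-only
subprocess.run(["badblocks", "-sv", "/dev/sdX"], check=True)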

 

The extended test will take approximately 8~9 hours to finish, as estimated by the SMART recommended polling time.

Extended self-test routine
recommended polling time:     ( 492) minutes.


The only thing that I saw was the Throughput_Performance attribute.  You have a number of Hitachi Deskstar 7K3000s in this array; have a look at their SMART reports to see if anything stands out.

 

You can do this quickly by clicking on 'Disk 11' on the 'Main' tab and then on the 'Attributes' tab on that page.  Look at your other Hitachi Deskstar 7K3000s and see how that number compares.
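If you'd rather compare them all at once from the console, a rough sketch that pulls the Throughput_Performance line from each disk's SMART attributes (the device list is a placeholder):

import subprocess

for dev in ["/dev/sdb", "/dev/sdc", "/dev/sdd"]:  # placeholder list of array disks
    out = subprocess.run(["smartctl", "-A", dev], capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Throughput_Performance" in line:
            print(dev, line.strip())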
