unRAID Server Release 6.2.0-rc3 Available



Everything was working perfectly with CA's appdata backup to user shares until I wound up installing a new container just to test that it installed ok.

 

Now I'm getting this error:

 

2016/08/02 12:44:34 [6720] rsync: mknod "/mnt/user/Backups/Docker Appdata/[email protected]/gitlab-ce/data/gitlab-rails/sockets/gitlab.socket" failed: Function not implemented (38)
2016/08/02 12:44:34 [6720] rsync: mknod "/mnt/user/Backups/Docker Appdata/[email protected]/gitlab-ce/data/gitlab-workhorse/socket" failed: Function not implemented (38)
2016/08/02 12:44:34 [6720] rsync: mknod "/mnt/user/Backups/Docker Appdata/[email protected]/gitlab-ce/data/postgresql/.s.PGSQL.5432" failed: Function not implemented (38)

  (lots of fun finding 3 errors in a log composed of 200K lines  :)  )

 

Going directly to a disk share instead of a user share works perfectly.  TBH I'm not sure how commonly used this function is (I suspect it's very rarely used in appdata).

 

Those are Unix sockets and are probably pretty rare.  I'll need to check with Tom to see if SHFS (FUSE) is capable of storing Unix sockets -- the poor guy just got hardlink support in  :P
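For anyone who wants to check whether a given mount point can hold Unix sockets at all, here is a minimal probe sketch (the function name and the probe filename are made up; point it at whatever share you want to test):

```python
import os
import socket

def supports_unix_sockets(directory):
    """Try to bind a Unix domain socket inside `directory`.
    On a filesystem that cannot store socket inodes, bind() fails
    with ENOSYS -- "Function not implemented (38)", the same errno
    rsync reports from its mknod() calls."""
    path = os.path.join(directory, ".socket-probe")
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.bind(path)
        return True
    except OSError:
        return False
    finally:
        s.close()
        if os.path.exists(path):
            os.unlink(path)

# e.g. supports_unix_sockets("/mnt/user/Backups")
#      vs. supports_unix_sockets("/mnt/disk1/Backups")
```

Comparing the result for a user share against a disk share should reproduce the difference described above.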

Link to comment

Recently I built a 2nd unRAID server with a cheap add-on 5-bay USB enclosure (Plus licence with an 8+4 bay config).

In general it works, but the enclosure always reports the disk(s) as "standby", so I think unRAID won't check SMART/temperature.

Even if I hit "Spin up all disks" or set those disks to never spin down, they still report standby.

 

I can still check SMART manually with "smartctl -d sat".

 

Is it possible to override this behaviour? (I'd hope for a checkbox on each disk to ignore the reported disk state.)

 

Thanks

 

 

root@Tower:~# hdparm -C /dev/sda

/dev/sda:
drive state is:  standby
root@Tower:~# hdparm -C /dev/sdb

/dev/sdb:
drive state is:  standby
root@Tower:~# hdparm -C /dev/sdc

/dev/sdc:
drive state is:  standby
root@Tower:~# hdparm -C /dev/sdd

/dev/sdd:
drive state is:  standby
root@Tower:~#

 


Link to comment

Dashboard only displays 3 cores of my 4 core AMD CPU.  It's correctly identified as 4 core in System Profiler.  Am I missing something? Screen shot of CPU Load and info from System Profiler attached.

 

What does it show under Tools => System Devices => CPU Thread Pairings ?

 

Link to comment


 

Not possible to override; temperatures are read using the "-n standby" option of smartctl to prevent disks from spinning up.
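As an illustration of why those drives get skipped, the logic amounts to something like this sketch (function name is made up; the input is `hdparm -C` output of the form shown above):

```python
import re

def drives_safe_to_poll(hdparm_output):
    """Parse concatenated `hdparm -C` output and return the devices
    that are NOT in standby -- the only ones a `smartctl -n standby`
    call would actually query, since polling a sleeping disk would
    spin it up."""
    states = re.findall(r"(/dev/\S+):\s*drive state is:\s*(\S+)", hdparm_output)
    return [dev for dev, state in states if state != "standby"]
```

With the output captured above, every device parses as "standby", so nothing would be polled -- which matches the missing SMART/temperature readings.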

 

Link to comment


 

Under CPU Thread Pairings it shows 0 1 2.  I'm working on posting a diagnostics file once I shrink it to the allowed size. (Limetech, how about increasing the attachment size limit?)

Link to comment

The CPU (pairs) are read from the linux subsystem.

 

Can you telnet into your system and show the output of:

 

ls -l /sys/devices/system/cpu

 

Ps. Are you sure all your cores are enabled in your BIOS?

 

See attached.  Dashboard was showing 4 cores prior to upgrading to RC2.

 

Dashboard will show the number of CPUs which linux presents.

 

For some reason linux "thinks" your system has 3 CPUs.
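That count can be reproduced straight from sysfs; a rough Python equivalent of the `ls` above (a sketch, Linux only, function name is made up):

```python
import glob
import os
import re

def kernel_cpu_count(base="/sys/devices/system/cpu"):
    """Count the cpuN directories the kernel exposes under sysfs --
    the same population the dashboard's core display is built from.
    Entries like cpufreq/cpuidle are filtered out."""
    dirs = glob.glob(os.path.join(base, "cpu[0-9]*"))
    return sum(1 for d in dirs if re.fullmatch(r"cpu\d+", os.path.basename(d)))
```

If this returns 3 on the affected box, the missing core is being hidden at the kernel level (or below, e.g. BIOS), not by the dashboard.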

 

Which version of unRAID did you upgrade from?

 

 

Link to comment

I originally upgraded to 6.2 RC2 from 6.1.9; 6.1.9 was showing 4 cores.

 

unRAID 6.1.9 runs linux 4.1.x while unRAID 6.2.0 runs linux 4.4.x. I don't know if other people using AMD experience the same issue as you; perhaps somebody with the same processor can give feedback?

 

Link to comment


I'm using an AMD CPU and I'm seeing 4 cores. Check the BIOS to make sure there isn't some sort of power feature enabled that disables cores.

 

EDIT: Someone seems to be having a similar issue on 4.4 kernel with proxmox so maybe it is a 4.4 kernel issue. https://forum.proxmox.com/threads/missing-cpu-cores.27219/

Link to comment

I cannot stress enough how important keeping your BIOS up to date is when it comes to unRAID. This may be my personal experience only, and it may only be related to my motherboards specifically, but on 2 separate systems I have had issues in unRAID that were corrected simply by updating the motherboard BIOS. Having come from the Windows world with the mindset of "only update the BIOS when your system has an issue", it was quite the change for me to actively check and keep my BIOSes updated. As I mentioned before though, multiple issues in 6.1.9 and the 6.2 betas were eliminated just from a BIOS update. In fact, I even updated before rc3, lol (no issues btw).

 

I might also recommend that if you are updating from 6.1.9 to 6.2rc3 and suddenly nothing seems to work, try doing a clean install of 6.2rc3. This cleared up all of my remaining issues when I migrated from 6.1.9 to 6.2beta. Since getting onto the 6.2 line though, the GUI updater has worked flawlessly for me.

Link to comment

@afoard, can you post your diagnostics?

 

Diags #2

 

There is a MAC address conflict between your Realtek NIC and Atheros NIC, resulting in incorrect renaming of eth1.

 

This is the part in the log initiating the NICs:

Jul 29 06:55:03 MiirUnraid kernel: r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded
Jul 29 06:55:03 MiirUnraid kernel: r8169 0000:0a:00.0: can't disable ASPM; OS doesn't have ASPM control
Jul 29 06:55:03 MiirUnraid kernel: r8169 0000:0a:00.0 eth0: RTL8168e/8111e at 0xffffc900035be000, 30:b5:c2:05:3b:2f, XID 0c200000 IRQ 32
Jul 29 06:55:03 MiirUnraid kernel: r8169 0000:0a:00.0 eth0: jumbo features [frames: 9200 bytes, tx checksumming: ko]

Jul 29 06:55:03 MiirUnraid kernel: alx 0000:05:00.0 eth1: Qualcomm Atheros AR816x/AR817x Ethernet [30:b5:c2:05:3b:2f]
Jul 29 06:55:03 MiirUnraid kernel: alx 0000:05:00.0 eth118: renamed from eth1
Jul 29 06:55:03 MiirUnraid kernel: e1000e: Intel(R) PRO/1000 Network Driver - 3.2.6-k
Jul 29 06:55:03 MiirUnraid kernel: e1000e: Copyright(c) 1999 - 2015 Intel Corporation.
Jul 29 06:55:03 MiirUnraid kernel: e1000e 0000:09:00.0: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
Jul 29 06:55:03 MiirUnraid kernel: e1000e 0000:09:00.0 eth1: registered PHC clock
Jul 29 06:55:03 MiirUnraid kernel: e1000e 0000:09:00.0 eth1: (PCI Express:2.5GT/s:Width x1) 68:05:ca:26:60:65
Jul 29 06:55:03 MiirUnraid kernel: e1000e 0000:09:00.0 eth1: Intel(R) PRO/1000 Network Connection
Jul 29 06:55:03 MiirUnraid kernel: e1000e 0000:09:00.0 eth1: MAC: 3, PHY: 8, PBA No: E46981-008
Jul 29 06:55:03 MiirUnraid kernel: e1000e 0000:09:00.0 eth117: renamed from eth1
Jul 29 06:55:03 MiirUnraid kernel: e1000e 0000:09:00.0 eth2: renamed from eth117

 

Your NICs are detected as follows:

# PCI device 0x1969:0xe091 (alx)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="fc:aa:14:99:b5:3c", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"

# PCI device 0x8086:0x10d3 (e1000e)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="68:05:ca:26:60:65", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth2"

# PCI device 0x10ec:0x8168 (r8169)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="30:b5:c2:05:3b:2f", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
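Conflicts like this are easy to miss by eye in a long log; a small sketch (function name made up) that scans a kernel log excerpt for MAC addresses claimed by more than one line:

```python
import re
from collections import Counter

def duplicate_macs(kernel_log):
    """Return MAC addresses that appear more than once in a kernel
    log excerpt.  In the log above, the r8169 and alx adapters both
    report 30:b5:c2:05:3b:2f, which is what derails the ethN renaming."""
    macs = re.findall(r"\b([0-9a-f]{2}(?::[0-9a-f]{2}){5})\b", kernel_log, re.I)
    return sorted(m for m, n in Counter(m.lower() for m in macs).items() if n > 1)
```

Feeding it the NIC initialisation lines from the syslog would flag the shared Realtek/Atheros address immediately.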

Start by doing two things and see if it makes a difference:

 

1. Change bonding type from "802.3ad" to "active-backup" and select the correct bonding members

2. Change the NIC assignments, see "interface rules"

 

Ps. If these NICs are on your motherboard, check your BIOS settings.

 

Link to comment

Did anyone manage to get the preclear script/plugin to work with RC3? There is a beta plugin, which I didn't get to work at all. The "standard" plugin may work after some edits with sed (see https://lime-technology.com/forum/index.php?topic=13054.msg481622#msg481622 for context).

 

Any experience what is working for you?

I've never had any issues with the beta plugin when preclearing drives on the RCs. You may want to raise your issues in the plugin's support thread.

Link to comment

I have upgraded to this release candidate (from 6.1.9) and I now have a sluggish/slow web UI, where pages can take 20-30 seconds or more to load.

I am seeing this error over and over again in the logs.  Did I miss something in the upgrade steps (which I thought I followed)?  I turned off Docker and the VMs, but it had no effect.  The user shares work just fine as well; it's basically just the web UI.

 

Aug  5 13:52:02 DarkTower emhttp: shcmd (20782): /etc/rc.d/rc.avahidaemon start |& logger
Aug  5 13:52:02 DarkTower root: Starting Avahi mDNS/DNS-SD Daemon:  /usr/sbin/avahi-daemon -D
Aug  5 13:52:02 DarkTower avahi-daemon[20325]: Failed to find user 'avahi'.
Aug  5 13:52:22 DarkTower root: Timeout reached while wating for return value
Aug  5 13:52:22 DarkTower root: Could not receive return value from daemon process.
Aug  5 13:52:22 DarkTower emhttp: shcmd (20783): /etc/rc.d/rc.avahidnsconfd start |& logger
Aug  5 13:52:22 DarkTower root: Starting Avahi mDNS/DNS-SD DNS Server Configuration Daemon:  /usr/sbin/avahi-dnsconfd -D
Aug  5 13:52:22 DarkTower avahi-dnsconfd[20428]: connect(): No such file or directory
Aug  5 13:52:22 DarkTower avahi-dnsconfd[20428]: Failed to connect to the daemon. This probably means that you
Aug  5 13:52:22 DarkTower avahi-dnsconfd[20428]: didn't start avahi-daemon before avahi-dnsconfd.

I noticed someone posted the same issue in the RC2 thread but there was no response.  I can usually resolve issues from other people's posts, Google, etc., but this one has stumped me for the past 2 days.

Any Ideas?
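The "Failed to find user 'avahi'" line in the log suggests the avahi system account is missing from the passwd database on that boot. A quick check sketch (function name made up):

```python
import pwd

def system_user_exists(name):
    """Look a user up in the passwd database.  avahi-daemon refuses to
    start when the account it drops privileges to (here 'avahi') does
    not exist, which then cascades into the avahi-dnsconfd connect
    failures seen above."""
    try:
        pwd.getpwnam(name)
        return True
    except KeyError:
        return False

# e.g. system_user_exists("avahi") returning False would match the log above
```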

darktower-diagnostics-20160805-2002.zip

Link to comment
This topic is now closed to further replies.