unRAID Server Release 6.2.0-beta22 Available



allow kernel append parameter 'unraidlabel' to override boot device label (default: UNRAID)

 

I was excited to try this, but I'm not having a lot of success.  I had b21 running in a VM based on the instructions here:

https://www.linuxserver.io/index.php/2015/12/14/creating-an-unraid-virtual-machine-to-run-on-an-unraid-host/

 

I copied the new bz* files over to my unraid-vm.img and booted into b22 fine, but when I re-labeled the flash drive to UNRAID-CONF and modified syslinux.cfg by adding "unraidlabel=UNRAID-CONF" to each of the "append" lines like this:

append unraidlabel=UNRAID-CONF initrd=/bzroot

then emhttp segfaults:

Jun  9 20:07:29 Tower kernel: emhttp[1412]: segfault at 528 ip 000000000040bace sp 00007ffe9b808150 error 4 in emhttp[400000+22000]

Diag attached.  Any ideas?

 

This would be a great feature to have, because it means we would be able to test the betas without having to pass through a USB controller.

tower-diagnostics-20160609-2014.zip


- include year in parity history

 

Am I right that this change applies to new data but it doesn't try to edit historical data?  If we wanted to fix historical data, would it simply be a matter of adding the year to the beginning of each line in /boot/config/parity-checks.log?

 

i.e. change this:

Dec  2 10:03:00|30779|130.0 MB/s|0
Dec 17 07:31:22|33588|119.1 MB/s|0
Jan  2 10:07:36|31054|128.8 MB/s|0
Feb  4 08:45:39|30901|129.5 MB/s|0

 

to this?

2015 Dec  2 10:03:00|30779|130.0 MB/s|0
2015 Dec 17 07:31:22|33588|119.1 MB/s|0
2016 Jan  2 10:07:36|31054|128.8 MB/s|0
2016 Feb  4 08:45:39|30901|129.5 MB/s|0

 


- include year in parity history

 

Am I right that this change applies to new data but it doesn't try to edit historical data?  If we wanted to fix historical data, would it simply be a matter of adding the year to the beginning of each line in /boot/config/parity-checks.log?

 

i.e. change this:

Dec  2 10:03:00|30779|130.0 MB/s|0
Dec 17 07:31:22|33588|119.1 MB/s|0
Jan  2 10:07:36|31054|128.8 MB/s|0
Feb  4 08:45:39|30901|129.5 MB/s|0

 

to this?

2015 Dec  2 10:03:00|30779|130.0 MB/s|0
2015 Dec 17 07:31:22|33588|119.1 MB/s|0
2016 Jan  2 10:07:36|31054|128.8 MB/s|0
2016 Feb  4 08:45:39|30901|129.5 MB/s|0

 

Correct, new entries get the year added; any existing entries need to be adjusted manually. Simply add the year in front of each line, as in your example.
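If there are many historical lines, sed can do the bulk edit. A minimal sketch, assuming the same month-to-year mapping as the example (Dec entries are 2015, Jan/Feb are 2016) — adjust the mapping for your own history and work on a backup copy of /boot/config/parity-checks.log:

```shell
# Hypothetical helper: prepend the year to legacy parity-check entries.
# The month->year mapping is an assumption based on the example above.
add_year() {
  sed -e 's/^Dec /2015 Dec /' \
      -e 's/^Jan /2016 Jan /' \
      -e 's/^Feb /2016 Feb /'
}

# Demonstrated on two of the entries shown above:
printf 'Dec  2 10:03:00|30779|130.0 MB/s|0\nJan  2 10:07:36|31054|128.8 MB/s|0\n' | add_year
```

Against the real file this would be something like `add_year < parity-checks.log.bak > parity-checks.log`, after taking the backup.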

 


I got ethernet issues as well.

 

Updated and rebooted, no ethernet connection.

 

Good thing I have IPMI so I can at least see the console (not at home right now).

 

My board has two ethernet ports, and I was using the second one. I'm assuming unraid by default is trying to connect through the first?

 

Any ideas how I can change that setting through the console?

 

Thanks

 

EDIT: Tried to boot into GUI mode through IPMI. Firefox came up, but it was not able to connect to localhost. I guess emhttp is not running either. I'm confused.

 

EDIT2: After several minutes, the GUI finally came up. The issue was the "validation error": no ethernet, no validation.

 

EDIT3: The GUI won't let me into the Settings tab. The info box shows both ethernet ports as disconnected. I'll revert to beta 21 until I get home.

 

Before anything is changed, can you post the contents of your config/network.cfg file?

 

If you have any network settings done in your go file, please post these as well.

Going forward it is no longer necessary to make network adjustments from the go file; everything can be done from the GUI.
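For anyone stuck without network at the console in the meantime, one recovery sketch (an assumption, not an official procedure — the key name comes from the legacy network.cfg format) is to reduce config/network.cfg on the flash drive to a minimal DHCP setup and reboot so the settings are regenerated:

```shell
# /boot/config/network.cfg -- hypothetical minimal fallback.
# Removing the static/bonding keys and leaving only this line, then
# rebooting, lets the system come up with DHCP on the default interface.
USE_DHCP="yes"
```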

 


I like the new GUI dashboard look of how the CPU usage per core is displayed. Looks great.

However, now you can only see the % for each core and not the speed, so I can't see which core is on max turbo as opposed to all-core turbo speed.

For example, 2 of my cores will turbo to 3.2 GHz whilst all the other cores run at 3.0.

Before, I could see which cores were doing what, but with the new system a core at 3.2 will show 100% and a core at 3.0 will show 100%.

Anyway, is there a way to show core speed again?

 


I like the new GUI dashboard look of how the CPU usage per core is displayed. Looks great.

However, now you can only see the % for each core and not the speed, so I can't see which core is on max turbo as opposed to all-core turbo speed.

For example, 2 of my cores will turbo to 3.2 GHz whilst all the other cores run at 3.0.

Before, I could see which cores were doing what, but with the new system a core at 3.2 will show 100% and a core at 3.0 will show 100%.

Anyway, is there a way to show core speed again?

 

Are you saying you see the CPU load of the individual cores all at 100%?

 

Under normal circumstances the individual load per CPU is displayed and you can see which CPU (core) is getting loaded. The frequencies don't say anything about the load, which is the reason they are replaced by the load indicator.

 


Here is my network config,

 

After I updated to v22, unRAID changed my default NIC to the 2nd one (which I wasn't using). I have now hooked it up and bonded the NICs (active-backup) in case unRAID changes the default NIC again.

 

I do have the following showing on my console, but it seems everything is working (I need to do some more testing). Logs also attached.

 

 

default via 192.168.0.1 dev br0 linkdown

 

 

# Generated settings:
IFNAME[0]="br0"
BONDNAME[0]="bond0"
BONDING_MIIMON[0]="100"
BRNAME[0]="br0"
BRSTP[0]="no"
BRFD[0]="0"
BONDING_MODE[0]="1"
BONDNICS[0]="eth0 eth1"
BRNICS[0]="bond0"
DESCRIPTION[0]=""
USE_DHCP[0]="no"
IPADDR[0]="192.168.0.10"
NETMASK[0]="255.255.255.0"
GATEWAY="192.168.0.1"
DHCP_KEEPRESOLV="yes"
DNS_SERVER1="192.168.0.1"
DNS_SERVER2=""
DNS_SERVER3=""
MTU[0]=""
SYSNICS="1"

server-diagnostics-20160610-1849.zip


Here is my network config,

 

After I updated to v22, unRAID changed my default NIC to the 2nd one (which I wasn't using). I have now hooked it up and bonded the NICs (active-backup) in case unRAID changes the default NIC again.

 

I do have the following showing on my console, but it seems everything is working (I need to do some more testing). Logs also attached.

 

 

default via 192.168.0.1 dev br0 linkdown

 

 

# Generated settings:
IFNAME[0]="br0"
BONDNAME[0]="bond0"
BONDING_MIIMON[0]="100"
BRNAME[0]="br0"
BRSTP[0]="no"
BRFD[0]="0"
BONDING_MODE[0]="1"
BONDNICS[0]="eth0 eth1"
BRNICS[0]="bond0"
DESCRIPTION[0]=""
USE_DHCP[0]="no"
IPADDR[0]="192.168.0.10"
NETMASK[0]="255.255.255.0"
GATEWAY="192.168.0.1"
DHCP_KEEPRESOLV="yes"
DNS_SERVER1="192.168.0.1"
DNS_SERVER2=""
DNS_SERVER3=""
MTU[0]=""
SYSNICS="1"

 

what is the output of the command:

[b]ip route show[/b]


I like the new GUI dashboard look of how the CPU usage per core is displayed. Looks great.

However, now you can only see the % for each core and not the speed, so I can't see which core is on max turbo as opposed to all-core turbo speed.

For example, 2 of my cores will turbo to 3.2 GHz whilst all the other cores run at 3.0.

Before, I could see which cores were doing what, but with the new system a core at 3.2 will show 100% and a core at 3.0 will show 100%.

Anyway, is there a way to show core speed again?

 

Are you saying you see the CPU load of the individual cores all at 100%?

 

Under normal circumstances the individual load per CPU is displayed and you can see which CPU (core) is getting loaded. The frequencies don't say anything about the load, which is the reason they are replaced by the load indicator.

 

 

No, I don't see them all at 100%. But seeing the speeds was useful when I was isolating cores for a gaming VM.

What I found was that if I isolated all my cores except core 1 (for me, 0,14), pinned them to the VM, and then pinned the emulatorpin cpuset to 0,14, core 0,14 was turboing to 3.2 GHz. I don't want this, as I want the highest single cores to be pinned to the VM.

So I found that if I pinned fewer cores to the VM but isolated one core purely for the emulatorpin cpuset, then core 0,14 didn't turbo to 3.2, nor did the core pinned to the emulatorpin cpuset.

The advantage of this is that I could make sure the highest-turbo cores would be in the gaming VM, which I think is advantageous.

I had been experimenting with multiple tests of pinning cores, isolating cores, and different core counts, running CPU benchmarks for different configs whilst watching the core speeds in unRAID.

I hope I have explained this properly and it makes sense!!


root@Server:~# ip route show

default via 192.168.0.1 dev br0

127.0.0.0/8 dev lo  scope link

192.168.0.0/24 dev br0  proto kernel  scope link  src 192.168.0.10

 

This means everything is alright.

 

Likely the console message was displayed due to the transition from the old to the new configuration. You can ignore it.

 


root@Server:~# ip route show

default via 192.168.0.1 dev br0

127.0.0.0/8 dev lo  scope link

192.168.0.0/24 dev br0  proto kernel  scope link  src 192.168.0.10

 

This means everything is alright.

 

Likely the console message was displayed due to the transition from the old to the new configuration. You can ignore it.

 

Was it normal for unRAID to change the default NIC? (The message still comes up after a reboot; is it something that needs to be fixed at Limetech's end?)


root@Server:~# ip route show

default via 192.168.0.1 dev br0

127.0.0.0/8 dev lo  scope link

192.168.0.0/24 dev br0  proto kernel  scope link  src 192.168.0.10

 

This means everything is alright.

 

Likely the console message was displayed due to the transition from the old to the new configuration. You can ignore it.

 

Was it normal for unRAID to change the default NIC? (The message still comes up after a reboot; is it something that needs to be fixed at Limetech's end?)

 

That's normal. It is now possible to make fixed port assignments; see Interface rules. The first time (when no assignment exists) the rules are created dynamically, but they stay fixed afterwards, ensuring the interface assignment stays the same between system reboots.

 


bonienl, I hit a potentially harmful setting. I'm using eth1 only, and the webGUI let me disable it even though it's the only connected interface. It just left my server without network after I accidentally hit a button. I had to log into iKVM to bring it up again.

 

IMO, the GUI should only let you disable an interface if it's not the only one connected.


I like the new GUI dashboard look of how the CPU usage per core is displayed. Looks great.

However, now you can only see the % for each core and not the speed, so I can't see which core is on max turbo as opposed to all-core turbo speed.

For example, 2 of my cores will turbo to 3.2 GHz whilst all the other cores run at 3.0.

Before, I could see which cores were doing what, but with the new system a core at 3.2 will show 100% and a core at 3.0 will show 100%.

Anyway, is there a way to show core speed again?

 

Are you saying you see the CPU load of the individual cores all at 100%?

 

Under normal circumstances the individual load per CPU is displayed and you can see which CPU (core) is getting loaded. The frequencies don't say anything about the load, which is the reason they are replaced by the load indicator.

 

 

No, I don't see them all at 100%. But seeing the speeds was useful when I was isolating cores for a gaming VM.

What I found was that if I isolated all my cores except core 1 (for me, 0,14), pinned them to the VM, and then pinned the emulatorpin cpuset to 0,14, core 0,14 was turboing to 3.2 GHz. I don't want this, as I want the highest single cores to be pinned to the VM.

So I found that if I pinned fewer cores to the VM but isolated one core purely for the emulatorpin cpuset, then core 0,14 didn't turbo to 3.2, nor did the core pinned to the emulatorpin cpuset.

The advantage of this is that I could make sure the highest-turbo cores would be in the gaming VM, which I think is advantageous.

I had been experimenting with multiple tests of pinning cores, isolating cores, and different core counts, running CPU benchmarks for different configs whilst watching the core speeds in unRAID.

I hope I have explained this properly and it makes sense!!

 

I requested bonienl to make the change from cpu frequencies to cpu load because it provides more useful information.  The way some of the frequency governors work, the frequency may shift up at a fairly low load and all the cpu frequencies at max only means that the cpu is beyond a threshold of load.

 

I know what you are trying to accomplish with tuning your VMs, but there are other considerations besides cpu frequencies.  You want your cpu load to be spread amongst the cpus and be sure that you have minimized latency in cpu context switching.  Using the emulatorpin in the xml puts the emulator tasks onto cpus other than the VM's and helps with the cpu context switching latency.  Using the cpu loads, you can be sure you have not overloaded certain cpus.  If you start out a VM with 4 cpus and find the load is staying low, you don't need to add more cpus to the VM.

 

Cpu loads will provide you much more useful information.
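For anyone who still wants the raw per-core speeds, they remain visible from the console via /proc/cpuinfo. A sketch (wrap the live command in `watch -n1` for a continuous view), demonstrated here on a captured two-line fragment so the expected shape is clear:

```shell
# Extract per-core clock speeds as the kernel reports them.
# On a live system you would read /proc/cpuinfo directly, e.g.:
#   awk -F': *' '/cpu MHz/ {printf "core %d: %s MHz\n", n++, $2}' /proc/cpuinfo
to_core_speeds() {
  awk -F': *' '/cpu MHz/ {printf "core %d: %s MHz\n", n++, $2}'
}

# Demonstrated on a captured /proc/cpuinfo fragment:
printf 'cpu MHz\t\t: 3200.000\ncpu MHz\t\t: 3000.000\n' | to_core_speeds
```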


On a side note: in 6.2 are we not allowed to pass through devices with the following code?

  <qemu:commandline>
    <qemu:arg value='-device'/>
    <qemu:arg value='ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=00:1a.0,bus=root.1,addr=00.1'/>
  </qemu:commandline>

 

 

I keep getting this error, which I was not getting in 6.1:

2016-06-10T01:43:42.236119Z qemu-system-x86_64: -device vfio-pci,host=00:1a.0,bus=root.1,addr=00.1: vfio: error opening /dev/vfio/5: Operation not permitted
2016-06-10T01:43:42.236161Z qemu-system-x86_64: -device vfio-pci,host=00:1a.0,bus=root.1,addr=00.1: vfio: failed to get group 5
2016-06-10T01:43:42.236169Z qemu-system-x86_64: -device vfio-pci,host=00:1a.0,bus=root.1,addr=00.1: Device initialization failed

 

You can, but you just have to edit qemu.conf and add the IOMMU groups.

 

See here: https://lime-technology.com/forum/index.php?topic=43428.0

 

I had to do that for the El Capitan VM :-)

 

Just edit the file /etc/libvirt/qemu.conf, find the line for "cgroup_device_acl" and add "/dev/vfio/5" to the list, then restart the VM manager and it should work.
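As a sketch of that edit: the group number (5) comes from the "failed to get group 5" error above, and the ACL entries shown are abbreviated — the real cgroup_device_acl list in /etc/libvirt/qemu.conf is longer. Demonstrated on a shortened sample fragment rather than the live file:

```shell
# Demonstrate appending /dev/vfio/5 to a (shortened) cgroup_device_acl
# fragment. On the real system you would edit /etc/libvirt/qemu.conf
# directly and then restart the VM manager so libvirt rereads it.
conf='cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero"
]'
printf '%s\n' "$conf" | sed 's|"/dev/zero"|"/dev/zero", "/dev/vfio/5"|'
```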

 

Ah, very nice. The post that is linked makes it sound like it was an early bug in 6.2 that was supposed to have been fixed?

 

Umm, I just passed the USB devices to my El Capitan VM using <hostdev>, without modifying qemu.conf, and it seems to be working fine. For example, for mouse and keyboard:

 

<hostdev mode='subsystem' type='usb' managed='no'>
  <source>
    <vendor id='0x046d'/>
    <product id='0xc07d'/>
  </source>
</hostdev>
<hostdev mode='subsystem' type='usb' managed='no'>
  <source>
    <vendor id='0x05ac'/>
    <product id='0x0250'/>
  </source>
</hostdev>
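As an aside for anyone building these <hostdev> entries: the vendor/product pairs come from lsusb output. A sketch, shown on a captured sample line (the Logitech device here is hypothetical):

```shell
# Turn an `lsusb` line into the <vendor>/<product> ids a <hostdev>
# entry needs. On a live system: lsusb | to_hostdev_ids
to_hostdev_ids() {
  awk '{split($6, id, ":");
        printf "<vendor id=\"0x%s\"/> <product id=\"0x%s\"/>\n", id[1], id[2]}'
}

# Captured sample line (hypothetical device):
printf 'Bus 001 Device 003: ID 046d:c07d Logitech, Inc. M185 Mouse\n' | to_hostdev_ids
```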

 

EDIT: never mind, I didn't read well... you are passing through an entire PCIe USB controller!


bonienl, I hit a potentially harmful setting. I'm using eth1 only, and the webGUI let me disable it even though it's the only connected interface. It just left my server without network after I accidentally hit a button. I had to log into iKVM to bring it up again.

 

IMO, the GUI should only let you disable an interface if it's not the only one connected.

 

There is a safeguard built in: eth1 (or higher) can only be brought down/up when port eth0 is connected.

 

Did you set up a special configuration to let your single interface be eth1? Normally this would be eth0.


allow kernel append parameter 'unraidlabel' to override boot device label (default: UNRAID)

 

I was excited to try this, but I'm not having a lot of success.  I had b21 running in a VM based on the instructions here:

https://www.linuxserver.io/index.php/2015/12/14/creating-an-unraid-virtual-machine-to-run-on-an-unraid-host/

 

I copied the new bz* files over to my unraid-vm.img and booted into b22 fine, but when I re-labeled the flash drive to UNRAID-CONF and modified syslinux.cfg by adding "unraidlabel=UNRAID-CONF" to each of the "append" lines like this:

append unraidlabel=UNRAID-CONF initrd=/bzroot

then emhttp segfaults:

Jun  9 20:07:29 Tower kernel: emhttp[1412]: segfault at 528 ip 000000000040bace sp 00007ffe9b808150 error 4 in emhttp[400000+22000]

Diag attached.  Any ideas?

 

This would be a great feature to have, because it means we would be able to test the betas without having to pass through a USB controller.

 

Thank you for including diagnostics!

 

You do have a /boot set up, so I don't think the problem is with the new label feature. But there are many other 'non-standard' aspects of your setup, any of which could be the source of the segfault:

* running the beta in a VM

* '/boot' drive is not bootable (no bz* or syslinux*)  (probably not a problem)

* no super.dat  (probably not a problem)

* no apparent device setup of eth0  (don't know, but strange, potentially a problem)

* "emhttp: device_by_name: device_by_name: () not found"  (possibly a problem)

* QEMU drives only, unassigned  (don't know, no experience with these)

* Unregistered  (probably not a problem)

* Docker is enabled, but no drives are assigned and no docker.img exists  (almost certainly a problem! docker.img is configured on /mnt/user, so that and the btrfs scan will fail)

 

It's a rather unusual setup!  ;)


bonienl, I hit a potentially harmful setting. I'm using eth1 only, and the webGUI let me disable it even though it's the only connected interface. It just left my server without network after I accidentally hit a button. I had to log into iKVM to bring it up again.

 

IMO, the GUI should only let you disable an interface if it's not the only one connected.

 

There is a safeguard built in: eth1 (or higher) can only be brought down/up when port eth0 is connected.

 

Did you set up a special configuration to let your single interface be eth1? Normally this would be eth0.

 

Nope, the port I use has always been detected and assigned eth1, so I bonded eth0 and eth1 into br0.

 

D1h5rmk.png

 

FC9FtBd.png


Really nice to see the improvements in the diagnostics!

* Version is immediately apparent

 

* Syslog tail of 200 lines, named intuitively; I assume this is preparatory for log truncation, so that ALL diagnostics zip files will be small enough to attach here!

  For normal small syslogs, the tail isn't necessary, but does provide a quick way to see the last activity

 

* The new df.txt report is proving very interesting, the more I look the more little tidbits of useful info I see.

  One thing concerns me, but I'll admit to being a Linux memory management neophyte.  The following lines show an apparently huge memory 'overcommit' (user has 32GB of RAM) -

Filesystem      Size  Used Avail Use% Mounted on
rootfs           16G  424M   16G   3% /
tmpfs            16G  244K   16G   1% /run
devtmpfs         16G  4.0K   16G   1% /dev
cgroup_root      16G     0   16G   0% /sys/fs/cgroup
/dev/loop0       20G  4.7G   14G  26% /var/lib/docker
/dev/loop1      1.0G   17M  905M   2% /etc/libvirt

  The next OOM we see may show some interesting results in this report.  I realize some of those should never grow, in real memory used, but it's still concerning to me, coming from standard memory management worlds!

 

* The new folders.txt already helped me understand a boot drive that wasn't a boot drive (no syslinux). It provides a nice list of installed plugins, the contents of /var/log including file sizes, anything in /extra, and an easy way to check their basic installation (files and folders in the right place, no v5 stuff).

 

* There's new networking stuff, new entries in network.cfg and a new network-rules.cfg.  We'll probably need a little time to fully understand them.


bonienl, I hit a potentially harmful setting. I'm using eth1 only, and the webGUI let me disable it even though it's the only connected interface. It just left my server without network after I accidentally hitting a button. I had to log into iKVM to bring it up again.

 

IMO, the GUI should only let you disable an interface if it's not the only one connected.

 

There is a safeguard built in: eth1 (or higher) can only be brought down/up when port eth0 is connected.

 

Did you set up a special configuration to let your single interface be eth1? Normally this would be eth0.

 

Nope, the port I use has always been detected and assigned eth1, so I bonded eth0 and eth1 into br0.

 

D1h5rmk.png

 

FC9FtBd.png

 

Need to check what happened... It reports eth0 as down (see footer), and in that case the button to disable eth1 should not be allowed. Until then, 'be careful' :)

 


Need to check what happened... It reports eth0 as down (see footer), and in that case the button to disable eth1 should not be allowed. Until then, 'be careful' :)

 

In all the updates and changes I guess I missed a git upload somewhere which disables the button as required. It will be added (back) in a future release.

 


I got ethernet issues as well.

 

Updated and rebooted, no ethernet connection.

 

Good thing I have IPMI so I can at least see the console (not at home right now).

 

My board has two ethernet ports, and I was using the second one. I'm assuming unraid by default is trying to connect through the first?

 

Any ideas how I can change that setting through the console?

 

Thanks

 

EDIT: Tried to boot into GUI mode through IPMI. Firefox came up, but it was not able to connect to localhost. I guess emhttp is not running either. I'm confused.

 

EDIT2: After several minutes, the GUI finally came up. The issue was the "validation error": no ethernet, no validation.

 

EDIT3: The GUI won't let me into the Settings tab. The info box shows both ethernet ports as disconnected. I'll revert to beta 21 until I get home.

 

Before anything is changed, can you post the contents of your config/network.cfg file?

 

If you have any network settings done in your go file, please post these as well.

Going forward it is no longer necessary to make network adjustments from the go file; everything can be done from the GUI.

 

There are no network settings in my go file, just a couple of lines to start the rsync daemon:

 

#!/bin/bash
# Start the Management Utility
/usr/local/sbin/emhttp &

#start rsync daemon
if ! grep ^rsync /etc/inetd.conf > /dev/null ; then
cat <<-EOF >> /etc/inetd.conf
rsync   stream  tcp     nowait  root    /usr/sbin/tcpd  /usr/bin/rsync --daemon
EOF
killall -HUP inetd
fi
cp /boot/rsyncd.conf /etc

This topic is now closed to further replies.