unRAID Server Release 6.2.0-beta18 Available



Where can we find the documentation on the new XML attributes for Docker containers (i.e. hidden, required, etc.)?

 

Or do we just have to play around with the options in dockerMan and see what user templates it creates?
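
(Until official documentation lands, here is the general shape of a <Config> entry the new dockerMan writes into user templates. The attribute names below are taken from a beta-generated template as I understand it, so treat this as illustrative rather than authoritative:)

  <Config Name="WebUI Port" Target="8080" Default="8080" Mode="tcp"
          Description="Container port for the web interface" Type="Port"
          Display="always" Required="true" Mask="false">8080</Config>

(Roughly: Display controls whether the field is shown or hidden in the basic view, Required forces the user to supply a value before the container can be created, and Mask obscures the value on screen, e.g. for passwords.)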


NFS appears to be failing.  Log entries:

 

Mar 11 23:49:08 Tower rpc.statd[8155]: Version 1.3.3 starting

Mar 11 23:49:08 Tower sm-notify[8156]: Version 1.3.3 starting

Mar 11 23:49:08 Tower sm-notify[8156]: Already notifying clients; Exiting!

Mar 11 23:49:08 Tower rpc.statd[8155]: failed to create RPC listeners, exiting

Mar 11 23:49:08 Tower root: Starting NFS server daemons:

Mar 11 23:49:08 Tower root:  /usr/sbin/exportfs -r

Mar 11 23:49:08 Tower root:  /usr/sbin/rpc.nfsd 8

Mar 11 23:49:08 Tower root: rpc.nfsd: address family AF_INET6 not supported by protocol TCP

Mar 11 23:49:08 Tower root: rpc.nfsd: unable to set any sockets for nfsd

Mar 11 23:49:08 Tower root:  /usr/sbin/rpc.mountd

Mar 11 23:49:08 Tower root: rpc.mountd: svc_tli_create: could not open connection for udp6

Mar 11 23:49:08 Tower root: rpc.mountd: svc_tli_create: could not open connection for tcp6

Mar 11 23:49:08 Tower root: rpc.mountd: svc_tli_create: could not open connection for udp6

Mar 11 23:49:08 Tower root: rpc.mountd: svc_tli_create: could not open connection for tcp6

Mar 11 23:49:08 Tower root: rpc.mountd: svc_tli_create: could not open connection for udp6

Mar 11 23:49:08 Tower root: rpc.mountd: svc_tli_create: could not open connection for tcp6

Mar 11 23:49:08 Tower rpc.mountd[8166]: mountd: No V2 or V3 listeners created!
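
(For anyone hitting this: "sm-notify: Already notifying clients" together with "statd: failed to create RPC listeners" may point at a stale statd instance still holding the RPC ports. A rough console check - rc.nfsd is the script unRAID's own startup log references, the rest is standard tooling:)

  ps aux | grep '[r]pc.statd'            # is an old statd still running?
  netstat -tulpn | grep -E ':111|:2049'  # which processes own the RPC/NFS ports?
  /etc/rc.d/rc.nfsd restart              # then restart the NFS daemons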

 


On my test server, from the minute I turn it on, I get a repeating set of messages for each device in the syslog, like this:

 


Mar 12 18:21:02 test kernel: sdb: sdb1
Mar 12 18:21:02 test emhttp: shcmd (1141): rmmod md-mod |& logger
Mar 12 18:21:02 test kernel: md: unRAID driver removed
Mar 12 18:21:02 test emhttp: shcmd (1142): modprobe md-mod super=/boot/config/super.dat |& logger
Mar 12 18:21:02 test kernel: md: unRAID driver 2.6.0 installed
Mar 12 18:21:02 test emhttp: Pro key detected, GUID: 0718-048A-07B2-1607419A1FD1 FILE: /boot/config/Pro.key
Mar 12 18:21:02 test emhttp: Device inventory:
Mar 12 18:21:02 test emhttp: shcmd (1143): udevadm settle
Mar 12 18:21:02 test emhttp: TDKMedia_Gold_Flash_07B21607419A1FD1-0:0 (sda) 7811040
Mar 12 18:21:02 test emhttp: WDC_WD800AAJS-00PSA0_WD-WMAPA1162808 (sdb) 78150712
Mar 12 18:21:02 test emhttp: WDC_WD800JD-60LSA5_WD-WMAM9MF49542 (sdc) 78150712
Mar 12 18:21:02 test emhttp: ST380815AS_5RW18GXH (sdd) 78150712
Mar 12 18:21:02 test emhttp: WDC_WD800JD-60LSA5_WD-WMAM9PV29584 (sde) 78150712
Mar 12 18:21:02 test emhttp: WDC_WD800AAJS-00PSA0_WD-WMAPA1147612 (sdf) 78150712
Mar 12 18:21:02 test kernel: mdcmd (1): import 0
Mar 12 18:21:02 test kernel: md: import_slot: 0 empty
Mar 12 18:21:02 test kernel: mdcmd (2): import 1
Mar 12 18:21:02 test kernel: md: import_slot: 1 missing
Mar 12 18:21:02 test kernel: mdcmd (3): import 2
Mar 12 18:21:02 test kernel: mdcmd (4): import 3
Mar 12 18:21:02 test kernel: mdcmd (5): import 4
Mar 12 18:21:02 test kernel: mdcmd (6): import 5
Mar 12 18:21:02 test kernel: mdcmd (7): import 6
Mar 12 18:21:02 test kernel: mdcmd (8): import 7
Mar 12 18:21:02 test kernel: mdcmd (9): import 8
Mar 12 18:21:02 test kernel: mdcmd (10): import 9
Mar 12 18:21:02 test kernel: mdcmd (11): import 10
Mar 12 18:21:02 test kernel: mdcmd (12): import 11
Mar 12 18:21:02 test kernel: mdcmd (13): import 12
Mar 12 18:21:02 test emhttp: import 30 cache device: no device
Mar 12 18:21:02 test emhttp: import flash device: sda
Mar 12 18:21:02 test kernel: mdcmd (14): import 13
Mar 12 18:21:02 test kernel: mdcmd (15): import 14
Mar 12 18:21:02 test kernel: mdcmd (16): import 15
Mar 12 18:21:02 test kernel: mdcmd (17): import 16
Mar 12 18:21:02 test kernel: mdcmd (18): import 17
Mar 12 18:21:02 test kernel: mdcmd (19): import 18
Mar 12 18:21:02 test kernel: mdcmd (20): import 19
Mar 12 18:21:02 test kernel: mdcmd (21): import 20
Mar 12 18:21:02 test kernel: mdcmd (22): import 21
Mar 12 18:21:02 test kernel: mdcmd (23): import 22
Mar 12 18:21:02 test kernel: mdcmd (24): import 23
Mar 12 18:21:02 test kernel: mdcmd (25): import 24
Mar 12 18:21:02 test kernel: mdcmd (26): import 25
Mar 12 18:21:02 test kernel: mdcmd (27): import 26
Mar 12 18:21:02 test kernel: mdcmd (28): import 27
Mar 12 18:21:02 test kernel: mdcmd (29): import 28
Mar 12 18:21:02 test kernel: mdcmd (30): import 29
Mar 12 18:21:02 test kernel: md: import_slot: 29 empty
Mar 12 18:21:02 test emhttp: shcmd (1149): /etc/rc.d/rc.nfsd start |& logger
Mar 12 18:22:37 test login[6398]: ROOT LOGIN  on '/dev/tty1'
Mar 12 18:23:33 test in.telnetd[29155]: connect from 192.168.1.2 (192.168.1.2)
Mar 12 18:23:35 test login[29156]: ROOT LOGIN  on '/dev/pts/1' from 'main.danioj.lan'

 

For context on what I am doing, see here: https://lime-technology.com/forum/index.php?topic=47427.0

 

I am currently preclearing drives using gfjardim's new preclear plugin/script: http://lime-technology.com/forum/index.php?topic=39985.msg453938#msg453938

 

BUT, from the syslog, that doesn't appear to be the cause IMHO, because the messages start as soon as the server boots.

 

No disks are assigned at the current time, and as such the array is not started.

 

test-diagnostics-20160312-1830.zip


On my test server, from the minute I turn it on, I get a repeating set of messages for each device in the syslog, like this: [...]

I think that you will find that is quite normal and will stop when the array is started.

On my test server, from the minute I turn it on, I get a repeating set of messages for each device in the syslog, like this: [...]

I think that you will find that is quite normal and will stop when the array is started.

 

It does seem that you are correct. I have just started the array, a parity sync is underway, and the messages have stopped.


changed appdata default to /mnt/cache/appdata from /mnt/user/appdata (/mnt/user/appdata just plain doesn't work for some dockers, including postgres and anything that uses sqlite)

 

Mover ran at my set time of 8:00 am, and now I have an appdata folder on disk1, which is something new...
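
(A quick way to see what the mover actually did - the /mnt mount points are the unRAID defaults, and the share config path is an assumption on my part, so adjust as needed:)

  ls -d /mnt/disk*/appdata /mnt/cache/appdata 2>/dev/null  # which devices hold an appdata folder?
  cat /boot/config/shares/appdata.cfg                      # the share settings the mover follows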


I'd like to know if anyone (with Dual Parity) is experiencing a dip in performance over v6.1.x.

 

According to the Windows 10 file-copy GUI stats, on my test server - which I admit has some "older" disks - I can't seem to get more than 26 MB/s write for a 4 GB file (after an initial burst of about 90 MB/s for about 5 seconds). If I enable Turbo Write I get about 60 MB/s (after a similar initial burst), sustained for the same file.

 

To LT Staff - if you are reading - is there expected to be a performance penalty as a result of implementing Dual Parity?

 

Without Turbo Write Enabled:

 

Screen_Shot_2016_03_12_at_8_02_45_PM.png

 

With Turbo Write Enabled:

 

Screen_Shot_2016_03_12_at_8_20_01_PM.png
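
(For A/B testing the two write modes from the console rather than the GUI - to my understanding Turbo Write maps to the md_write_method tunable, but verify on your own build before relying on this:)

  mdcmd set md_write_method 1   # reconstruct write ("turbo")
  mdcmd set md_write_method 0   # back to the default read/modify/write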


VM and QEMU

 

In the new XML file I now see this line for QEMU:

 

  <emulator>/usr/local/sbin/qemu</emulator>

On 6.1.9 it was:

<emulator>/usr/bin/qemu-system-x86_64</emulator>

 

Is this right?

 

//Peter

Yes, this new wrapper script is critical to some of the upgrades to VM manager.
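
(To illustrate what "wrapper script" means here - a sketch of the general shape only, not the script unRAID actually ships:)

  #!/bin/bash
  # A thin wrapper lets VM Manager adjust arguments or environment
  # in one place before handing off to the real emulator binary.
  exec /usr/bin/qemu-system-x86_64 "$@"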


 

 

I'd like to know if anyone (with Dual Parity) is experiencing a dip in performance over v6.1.x. [...]

 

Can you really claim "slower"? Show us the same tests with the same hardware on 6.1.9 with single parity; otherwise we have no baseline comparison.


[...] Mover ran at my set time of 8:00 am, and now I have an appdata folder on disk1, which is something new...

Hmm, need to see your diagnostics.


 

[...] Yes, this new wrapper script is critical to some of the upgrades to VM manager.

Thanks, do we still need the qemu.conf patch for larger group numbers?


 

[...] Thanks, do we still need the qemu.conf patch for larger group numbers?

Nope.


 

 

[...] Can you really claim "slower"? Show us the same tests with the same hardware on 6.1.9 with single parity; otherwise we have no baseline comparison.

 

I haven't claimed it's slower, for exactly that reason (no benchmark testing was done on this hardware beforehand), so to everyone reading this: I guess it is purely anecdotal.

 

What I asked was:

 

To LT Staff - if you are reading - is there expected to be a performance penalty as a result of implementing Dual Parity?

 

 

[...] To LT Staff - if you are reading - is there expected to be a performance penalty as a result of implementing Dual Parity?

Only in cases of extremely weak hardware (CPU).

 


Hi jonp, firstly congrats on getting this out, awesome job! Do you know if the tweak to put iptable_mangle support back in got included in this release?

Hi bin,

 

Sorry for not replying to the PMs on the subject; I had forwarded them internally for review. I believe this was taken care of with 6.2, but if you could confirm for me, that'd be great.
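
(An easy way to confirm on a running box - standard iptables tooling, nothing unRAID-specific:)

  modprobe iptable_mangle    # succeeds silently if the module is present
  iptables -t mangle -L -n   # lists the mangle table's chains if support is in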

This topic is now closed to further replies.