limetech

unRAID OS version 6.3.0 Stable Release Available

324 posts in this topic

Upgraded nicely. I was looking forward to being rid of my own build of apcupsd, but modbus is still disabled in this update; can you enable it, please?

All recent mid-to-high-end APC UPSes (Smart-UPS) implement it, and it's the only way to get any useful data from them; classic USB just gives runtime.

I don't mind using my own build, but since I have no control over what the PHP part is doing and I'm not sure how you configured your build, it still feels iffy...

 

Thanks for all the hard work.

 


Upgraded nicely. I was looking forward to being rid of my own build of apcupsd, but modbus is still disabled in this update; can you enable it, please?

All recent mid-to-high-end APC UPSes (Smart-UPS) implement it, and it's the only way to get any useful data from them; classic USB just gives runtime.

I don't mind using my own build, but since I have no control over what the PHP part is doing and I'm not sure how you configured your build, it still feels iffy...

 

Thanks for all the hard work.

 

Modbus with a serial cable works now, but yeah it would be nice to be able to use modbus with a USB cable.
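For anyone else setting this up: a hypothetical apcupsd.conf fragment for the working serial-cable MODBUS case might look like the following. The directive names (UPSNAME, UPSCABLE, UPSTYPE, DEVICE) are standard apcupsd ones; the UPS name and device path are example values only.

```
UPSNAME smartups     # arbitrary label for this UPS
UPSCABLE smart       # APC smart-signaling serial cable
UPSTYPE modbus       # talk MODBUS instead of the legacy apcsmart/usb protocols
DEVICE /dev/ttyS0    # serial port the UPS is attached to
```

MODBUS over USB is a separate driver mode and, as far as I recall, needs apcupsd built with libusb support; presumably that is the part still disabled in the stock build.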


So, I reverted back to 6.2.4; Docker is working again and all is fine. There must be a massive fault in 6.3.0, because it kills even Docker and all Docker plugins... ???

I also did some cross-testing: as soon as 6.3 is installed, Docker and all plugins stop working.


So, I reverted back to 6.2.4; Docker is working again and all is fine. There must be a massive fault in 6.3.0, because it kills even Docker and all Docker plugins... ???

I also did some cross-testing: as soon as 6.3 is installed, Docker and all plugins stop working.

It doesn't appear that you ever posted your diagnostics from 6.3, which means there's not much anyone can do to see what went wrong. But Docker does indeed work on 6.3.


One of my servers had the setting for confirmation of reboot and powerdown set to No.

The other had the same setting set to Yes.

Both rebooted with no confirmation using the Reboot button on the Main page.

With the buttons being at the bottom of the page now, I've already accidentally clicked them twice while trying to start an app from the Mac dock with the unRAID webUI still open.

Both servers reboot regardless of what is set there; this is going to get annoying rather quickly.

Have you tried toggling the confirmation setting and applying?


So, I reverted back to 6.2.4; Docker is working again and all is fine. There must be a massive fault in 6.3.0, because it kills even Docker and all Docker plugins... ???

I also did some cross-testing: as soon as 6.3 is installed, Docker and all plugins stop working.

 

Be sure you do not have any old plugins installed like the dynamix.plg that was a beta test version.  I would start in safe mode and see if that works, and then start re-installing plugins to see if one is breaking unRAID.


With the updated SMB I did have to change a few mount points on my end as well. I had to change from ntlm to ntlmssp auth.
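For what it's worth, on a Linux client the ntlm-to-ntlmssp change is just a mount option. A minimal sketch, assuming a hypothetical server name tower, share name share, and user John:

```
# Hypothetical /etc/fstab line: mount an unRAID share using NTLMSSP
# authentication (sec=ntlmssp) instead of plain NTLM (sec=ntlm).
# Server, share, mount point, and username are placeholders.
//tower/share  /mnt/share  cifs  username=John,sec=ntlmssp  0  0
```

The same option can be passed directly to mount -t cifs via -o sec=ntlmssp.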

 

If you are having issues authenticating to the unRAID server, you will need to specify the domain you are authenticating against.

 

for example, if your unraid hostname is unraid-awesome1337 you would authenticate with:

 

Username: unraid-awesome1337\John

Password: ********

 

Give it a shot.

 

so:

tried that, no luck

 

Edit: Actually it's made it worse. Now I can't access secure shares either. Gotta run out again; I'll try later.

Edit 2: Went in and deleted my credentials for tdm. Now I have access to the secure shares, but still no luck with private shares.

Edit 3: While I can access the now-secure share, it's as a guest: I only have read-only access, not read/write as I set it up for.

There were 2 sets of credentials listed for tdm. One under Windows Credentials, with a modify date of today, which I am assuming is the one not working.

I had another set of credentials listed under Generic Credentials, with a last-modified date of November. Stupidly, I deleted both without doing any testing to see whether deleting just the more recent one might fix things.

I have a new set of credentials, which don't work, under Windows Credentials.

 

 


So, I reverted back to 6.2.4; Docker is working again and all is fine. There must be a massive fault in 6.3.0, because it kills even Docker and all Docker plugins... ???

I also did some cross-testing: as soon as 6.3 is installed, Docker and all plugins stop working.

It doesn't appear that you ever posted your diagnostics from 6.3, which means there's not much anyone can do to see what went wrong. But Docker does indeed work on 6.3.

 

I did some cross-tests with 6.2.4 and also some reboots, so there is no diagnostic file. But as soon as I start with 6.3, Docker isn't working anymore.


So, I reverted back to 6.2.4; Docker is working again and all is fine. There must be a massive fault in 6.3.0, because it kills even Docker and all Docker plugins... ???

I also did some cross-testing: as soon as 6.3 is installed, Docker and all plugins stop working.

It doesn't appear that you ever posted your diagnostics from 6.3, which means there's not much anyone can do to see what went wrong. But Docker does indeed work on 6.3.

I did some cross-tests with 6.2.4 and also some reboots, so there is no diagnostic file. But as soon as I start with 6.3, Docker isn't working anymore.

Understood, but without the diagnostics from 6.3 showing your issue, nobody can help you with your problem, since you're the only one reporting it.


So, I reverted back to 6.2.4; Docker is working again and all is fine. There must be a massive fault in 6.3.0, because it kills even Docker and all Docker plugins... ???

I also did some cross-testing: as soon as 6.3 is installed, Docker and all plugins stop working.

 

Be sure you do not have any old plugins installed like the dynamix.plg that was a beta test version.  I would start in safe mode and see if that works, and then start re-installing plugins to see if one is breaking unRAID.

 

I tried this already but was not able to start Docker with 6.3.

I uninstalled all Docker plugins and deactivated Docker, then upgraded to 6.3, and then I tried to create a Docker image and start it. Creating was possible, but it would not start.

I did it all from scratch several times but had no luck: as soon as 6.3 is on, Docker can't be started.


So, I reverted back to 6.2.4; Docker is working again and all is fine. There must be a massive fault in 6.3.0, because it kills even Docker and all Docker plugins... ???

I also did some cross-testing: as soon as 6.3 is installed, Docker and all plugins stop working.

 

Be sure you do not have any old plugins installed like the dynamix.plg that was a beta test version.  I would start in safe mode and see if that works, and then start re-installing plugins to see if one is breaking unRAID.

 

I tried this already but was not able to start Docker with 6.3.

I uninstalled all Docker plugins and deactivated Docker, then upgraded to 6.3, and then I tried to create a Docker image and start it. Creating was possible, but it would not start.

I did it all from scratch several times but had no luck: as soon as 6.3 is on, Docker can't be started.

Start 6.3, attempt to start Docker, and provide the logs. If no Docker is even shown for you to start, then post the logs anyway. Either way, without logs no one can help.


My upgrade from v6.2.4 seems to be working fine. I do have tons of this line in my syslog, occurring every 1 or 2 seconds, and I don't know what it is or if I should be worried about it, other than it filling up the syslog.

 

Feb  5 09:01:08 FileSvr kernel: xhci_hcd 0000:00:14.0: URB transfer length is wrong, xHC issue? req. len = 0, act. len = 4294967288

 

Diagnostics attached.

 

Gary

filesvr-diagnostics-20170205-0908.zip


My upgrade from v6.2.4 seems to be working fine. I do have tons of this line in my syslog, occurring every 1 or 2 seconds, and I don't know what it is or if I should be worried about it, other than it filling up the syslog.

 

Feb  5 09:01:08 FileSvr kernel: xhci_hcd 0000:00:14.0: URB transfer length is wrong, xHC issue? req. len = 0, act. len = 4294967288

 

At the rate it's filling the syslog, it's clearly going to be trouble, probably causing you to have to reboot every other day.  That number corresponds to -8, but I have no idea what table to check for an error return of 8.  I do notice that you are very tight on memory, about as tight as I've ever seen, so it's possible that's an indirect indication of something not being able to allocate the memory it requested.  It first occurred at 10:05pm, with no other clues associated.  It's associated with the onboard Intel USB controller.
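The -8 reading above is just 32-bit wraparound: 4294967288 is what -8 looks like when stored in an unsigned 32-bit field. A quick check in any shell:

```shell
# 4294967288 == 2^32 - 8, i.e. the unsigned reinterpretation of -8;
# subtracting 2^32 recovers the signed value.
printf '%d\n' $(( 4294967288 - 4294967296 ))   # prints -8
```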

 

Perhaps consider decreasing by 1GB the RAM assigned to one VM?

 

One other thing I noticed, you may want to raise the tunable md_sync_thresh to at least 320, but 600 or 610 may be even better, for better parity check performance.


I've updated two servers as well.

 

One was running 6.2.4 and another 6.3RC9. Both upgrades were successfully completed. No issues.

 

Thank you LimeTech!

 

Cheers.


I would like to try IGD passthrough. I can see the IGD option in the VM menus; however, I have no choice for audio. I am guessing that isolating the HD audio is still not possible on the Skylake platform? I had read in some thread somewhere that a possible workaround was being worked on. Here are my IOMMU groups:

 

IOMMU group 0
  00:00.0 Host bridge [0600]: Intel Corporation Skylake Host Bridge/DRAM Registers [8086:191f] (rev 07)
IOMMU group 1
  00:01.0 PCI bridge [0604]: Intel Corporation Skylake PCIe Controller (x16) [8086:1901] (rev 07)
IOMMU group 2
  00:01.1 PCI bridge [0604]: Intel Corporation Skylake PCIe Controller (x8) [8086:1905] (rev 07)
IOMMU group 3
  00:02.0 VGA compatible controller [0300]: Intel Corporation HD Graphics 530 [8086:1912] (rev 06)
IOMMU group 4
  00:14.0 USB controller [0c03]: Intel Corporation Sunrise Point-H USB 3.0 xHCI Controller [8086:a12f] (rev 31)
  00:14.2 Signal processing controller [1180]: Intel Corporation Sunrise Point-H Thermal subsystem [8086:a131] (rev 31)
IOMMU group 5
  00:16.0 Communication controller [0780]: Intel Corporation Sunrise Point-H CSME HECI #1 [8086:a13a] (rev 31)
IOMMU group 6
  00:17.0 SATA controller [0106]: Intel Corporation Sunrise Point-H SATA controller [AHCI mode] [8086:a102] (rev 31)
IOMMU group 7
  00:1b.0 PCI bridge [0604]: Intel Corporation Sunrise Point-H PCI Root Port #17 [8086:a167] (rev f1)
IOMMU group 8
  00:1b.2 PCI bridge [0604]: Intel Corporation Sunrise Point-H PCI Root Port #19 [8086:a169] (rev f1)
IOMMU group 9
  00:1c.0 PCI bridge [0604]: Intel Corporation Sunrise Point-H PCI Express Root Port #1 [8086:a110] (rev f1)
IOMMU group 10
  00:1c.4 PCI bridge [0604]: Intel Corporation Sunrise Point-H PCI Express Root Port #5 [8086:a114] (rev f1)
IOMMU group 11
  00:1d.0 PCI bridge [0604]: Intel Corporation Sunrise Point-H PCI Express Root Port #9 [8086:a118] (rev f1)
IOMMU group 12
  00:1d.4 PCI bridge [0604]: Intel Corporation Sunrise Point-H PCI Express Root Port #13 [8086:a11c] (rev f1)
IOMMU group 13
  00:1f.0 ISA bridge [0601]: Intel Corporation Sunrise Point-H LPC Controller [8086:a145] (rev 31)
  00:1f.2 Memory controller [0580]: Intel Corporation Sunrise Point-H PMC [8086:a121] (rev 31)
  00:1f.3 Multimedia audio controller [0401]: Intel Corporation Sunrise Point-H HD Audio [8086:a170] (rev 31)
  00:1f.4 SMBus [0c05]: Intel Corporation Sunrise Point-H SMBus [8086:a123] (rev 31)
IOMMU group 14
  01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP107 [GeForce GTX 1050 Ti] [10de:1c82] (rev a1)
  01:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fb9] (rev a1)
IOMMU group 15
  02:00.0 Serial Attached SCSI controller [0107]: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] [1000:0072] (rev 02)
IOMMU group 16
  04:00.0 Ethernet controller [0200]: Qualcomm Atheros Killer E220x Gigabit Ethernet Controller [1969:e091] (rev 10)
IOMMU group 17
  05:00.0 PCI bridge [0604]: Intel Corporation DSL6540 Thunderbolt 3 Bridge [Alpine Ridge 4C 2015] [8086:1578]
IOMMU group 18
  06:00.0 PCI bridge [0604]: Intel Corporation DSL6540 Thunderbolt 3 Bridge [Alpine Ridge 4C 2015] [8086:1578]
IOMMU group 19
  06:01.0 PCI bridge [0604]: Intel Corporation DSL6540 Thunderbolt 3 Bridge [Alpine Ridge 4C 2015] [8086:1578]
IOMMU group 20
  06:02.0 PCI bridge [0604]: Intel Corporation DSL6540 Thunderbolt 3 Bridge [Alpine Ridge 4C 2015] [8086:1578]
IOMMU group 21
  06:04.0 PCI bridge [0604]: Intel Corporation DSL6540 Thunderbolt 3 Bridge [Alpine Ridge 4C 2015] [8086:1578]
IOMMU group 22
  09:00.0 USB controller [0c03]: Intel Corporation DSL6540 USB 3.1 Controller [Alpine Ridge] [8086:15b6]


Updated from 6.2.4.

For now, everything looks good.

Running 16 plugins, 8 dockers and 3 VMs.

A huge thanks to limetech and all the volunteers for great work.

 

 

 

Sent from my iPhone using Tapatalk


My upgrade from v6.2.4 seems to be working fine. I do have tons of this line in my syslog, occurring every 1 or 2 seconds, and I don't know what it is or if I should be worried about it, other than it filling up the syslog.

 

Feb  5 09:01:08 FileSvr kernel: xhci_hcd 0000:00:14.0: URB transfer length is wrong, xHC issue? req. len = 0, act. len = 4294967288

 

Diagnostics attached.

 

Gary

 

Is this a new issue vs. 6.2.4?

 

This issue might be caused by an attached USB device auto-suspending or auto-powering off. A plugin, container, or VM might be trying to do this. Please post the output of these commands and we might be able to figure out which device it is:

 

lsusb
for i in /sys/bus/usb/devices/*/power/autosuspend; do echo -n "$i "; cat $i; done
for i in /sys/bus/usb/devices/*/power/level; do echo -n "$i "; cat $i; done

 

(Click [select] above, then Ctrl-C to copy to the clipboard. Then, in a telnet/ssh window open to the server, paste those commands; with PuTTY you can use a right-mouse click or hit Shift-Insert.)
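If one of those devices does turn out to be auto-suspending, a sketch of how you might inspect (and, once the culprit is identified, override) the setting via the standard sysfs runtime power-management files; the actual write is left commented out since it needs root and the right device:

```shell
# List each USB device's runtime power-management mode ("auto" means the
# kernel may autosuspend it; "on" keeps it powered).
for f in /sys/bus/usb/devices/*/power/control; do
    [ -e "$f" ] || continue              # skip if the glob matched nothing
    printf '%s: %s\n' "$f" "$(cat "$f")"
    # echo on > "$f"                     # uncomment (as root) to disable autosuspend
done
```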

 

 


With the updated SMB I did have to change a few mount points on my end as well. I had to change from ntlm to ntlmssp auth.

 

If you are having issues authenticating to the unRAID server, you will need to specify the domain you are authenticating against.

 

for example, if your unraid hostname is unraid-awesome1337 you would authenticate with:

 

Username: unraid-awesome1337\John

Password: ********

 

Give it a shot.

 

so:

tried that, no luck

 

Edit: Actually it's made it worse. Now I can't access secure shares either. Gotta run out again; I'll try later.

Edit 2: Went in and deleted my credentials for tdm. Now I have access to the secure shares, but still no luck with private shares.

Edit 3: While I can access the now-secure share, it's as a guest: I only have read-only access, not read/write as I set it up for.

There were 2 sets of credentials listed for tdm. One under Windows Credentials, with a modify date of today, which I am assuming is the one not working.

I had another set of credentials listed under Generic Credentials, with a last-modified date of November. Stupidly, I deleted both without doing any testing to see whether deleting just the more recent one might fix things.

I have a new set of credentials, which don't work, under Windows Credentials.

 

Just to verify: please boot the server once in 'Safe Mode' and see if the same behavior exists. Also make sure "NetBIOS over TCP/IP" is enabled for your Win10 PC under network adapter Properties / Internet Protocol Version 4 (TCP/IPv4) / Advanced / WINS tab.


There's a minor display problem when browsing folders within the Dynamix GUI if a folder contains the ampersand "&" character. The folder name is truncated at the ampersand and the folder is displayed as empty. Here's what it looks like at the command line:

 

root@Northolt:~# ls -l "/mnt/user/N_Public/Ripper/CD/FLAC/The Killers__Day & Age"
total 323592
-rw-rw-r-- 1 nobody users 32838728 Feb  5 17:08 01\ Losing\ Touch.flac
-rw-rw-r-- 1 nobody users 29543389 Feb  5 17:11 02\ Human.flac
-rw-rw-r-- 1 nobody users 35121479 Feb  5 17:14 03\ Spaceman.flac
-rw-rw-r-- 1 nobody users 29049279 Feb  5 17:17 04\ Joy\ Ride.flac
-rw-rw-r-- 1 nobody users 25815710 Feb  5 17:19 05\ A\ Dustland\ Fairytale.flac
-rw-rw-r-- 1 nobody users 28058602 Feb  5 17:21 06\ This\ Is\ Your\ Life.flac
-rw-rw-r-- 1 nobody users 23348194 Feb  5 17:22 07\ I\ Can't\ Stay.flac
-rw-rw-r-- 1 nobody users 21549017 Feb  5 17:24 08\ Neon\ Tiger.flac
-rw-rw-r-- 1 nobody users 34590571 Feb  5 17:26 09\ The\ World\ We\ Live\ In.flac
-rw-rw-r-- 1 nobody users 43984558 Feb  5 17:29 10\ Goodnight,\ Travel\ Well.flac
-rw-rw-r-- 1 nobody users 27431861 Feb  5 17:31 11\ A\ Crippling\ Blow.flac
-rw-rw-r-- 1 nobody users     1034 Feb  5 17:31 Killers,\ The\ -\ Day\ &\ Age.m3u
root@Northolt:~#

 

and attached are GUI screen shots.

Share_with_problem_folder.png

Problem_folder_contents.png
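Truncation at "&" usually points at a string being dropped unescaped into a URL query or into HTML, where "&" is a delimiter. Purely as an illustration (this is a guess at the cause, not a statement about the Dynamix code), percent-encoding the folder name before it goes into a query string avoids the split; a bash sketch:

```shell
# Percent-encode a string for use in a URL query (bash).
urlencode() {
    local s=$1 out= c i
    for (( i = 0; i < ${#s}; i++ )); do
        c=${s:i:1}
        case $c in
            [a-zA-Z0-9._~-]) out+=$c ;;           # unreserved characters pass through
            *) out+=$(printf '%%%02X' "'$c") ;;   # everything else, incl. "&" and space
        esac
    done
    printf '%s\n' "$out"
}
urlencode "The Killers__Day & Age"   # prints The%20Killers__Day%20%26%20Age
```

With the "&" encoded as %26, the browser no longer treats everything after it as a new parameter, which is consistent with the truncation seen in the screenshots.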


I'm booting into the unRAID GUI and the GUI does not seem to be refreshing correctly. For example, if I start the array, the screen should refresh to show that the array has started, but instead the drive assignments get to the point where they are grayed out and then it just sits there. If I then go to Dashboard and come back to Main, it will show that the array has started. (I can also just click on Main to refresh the page, and that works as well.) The same thing happens when shutting down the array, and I notice the same type of behavior when starting and stopping VMs. (The VMs were doing that in 6.2.4 as well.)

 

Everything else seems to be working normally with the update to 6.3.0 and it's just a small annoyance at this point.

turtle-diagnostics-20170205-1220.zip


I'm booting into the unRAID GUI and the GUI does not seem to be refreshing correctly. For example, if I start the array, the screen should refresh to show that the array has started, but instead the drive assignments get to the point where they are grayed out and then it just sits there. If I then go to Dashboard and come back to Main, it will show that the array has started. (I can also just click on Main to refresh the page, and that works as well.) The same thing happens when shutting down the array, and I notice the same type of behavior when starting and stopping VMs. (The VMs were doing that in 6.2.4 as well.)

 

Everything else seems to be working normally with the update to 6.3.0 and it's just a small annoyance at this point.

 

Go to 'Settings' >> 'Display Settings' and see what the setting is for "Page update frequency". Click on the '?' with the left mouse button for more information.

