Unraid OS version 6.9 beta1 available


Posted (edited)

Hey, thanks for this beta; I really appreciate having access to the 5.5 kernel. I have an ASRock X570 Taichi, an AMD Ryzen 9 3950X, and a Gigabyte Titan Ridge TB3 card installed, with a Razer Core X (Alpine Ridge based) eGPU enclosure attached to that.

The upgrade went fine and Unraid generally works well so far. It also detects the TB3 card fine, and if I hotplug the Core X after boot (it doesn't see it straight away on boot), Unraid also detects the eGPU enclosure, but not the Zotac 1080 Ti inside it.

If only I could get Unraid to detect the graphics card, I could then pass it through to a VM as a PCIe IOMMU device. Passing through the whole TB3 card also makes the Core X visible in a Windows 10 VM, but not the graphics card inside it.

 

If I boot Ubuntu 19.10 bare metal, I can see the enclosure and can use the 1080 Ti inside it as a display with no problem, so I know it can work. I'm just unsure how Ubuntu can see it when Unraid can't; I can only imagine boltd/boltctl has some involvement. I'm also booting in legacy mode at the moment, and I'm not sure whether a UEFI Unraid boot would have any impact.
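One thing worth ruling out: stock Unraid doesn't ship boltd, and with Thunderbolt security enabled in the BIOS the kernel will leave unauthorized devices (and anything behind them) unconnected. The upstream kernel exposes the authorization state through sysfs; here's a rough sketch that checks and force-authorizes devices. The sysfs path is the standard kernel one, and the TB_SYSFS variable is parameterized here purely for illustration:

```shell
#!/bin/sh
# Sketch: check and authorize Thunderbolt devices via sysfs.
# On a real box TB_SYSFS would be /sys/bus/thunderbolt/devices;
# it is overridable here so the logic can be exercised elsewhere.
TB_SYSFS="${TB_SYSFS:-/sys/bus/thunderbolt/devices}"

authorize_all() {
    for dev in "$TB_SYSFS"/*/authorized; do
        [ -f "$dev" ] || continue
        if [ "$(cat "$dev")" = "0" ]; then
            echo "authorizing $(dirname "$dev")"
            # Writing 1 authorizes the device; this only matters when the
            # BIOS security level is "user" (SL1) or stricter.
            echo 1 > "$dev"
        fi
    done
}

authorize_all
```

If every device already reads 1 (or the BIOS is set to "No Security" and nothing changes), then authorization probably isn't the culprit.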

 

Has anyone else managed to get this working in a similar configuration? I was hoping the updated kernel might work better with Titan Ridge and/or nested Alpine Ridge hubs (the eGPU), but I guess the problem is something else, as the behaviour here is exactly the same as on 6.8.x.

 

Any ideas anyone? Thanks!

Edited by gzilla

2 hours ago, gzilla said:

Hey thanks for this beta, really appreciate having access to the 5.5 kernel. [...] Unraid also detects the eGPU enclosure, but not the Zotac 1080ti inside the enclosure. [...] Any ideas anyone? Thanks!

Yes: post diagnostics.

40 minutes ago, Gdtech said:

Anybody know when multiple cache or array pools will be available?

 

yes


Posted (edited)

Updated to this today, but I still can't get the https://www.sonnettech.com/product/allegro-usbc-4port-pcie.html USB expansion card to work directly with Unraid. Unraid doesn't see any of the devices I try to connect to it: mouse, keyboard, USB stick or NIC.

I bought this card to give my server USB 3 ports so I can back up data faster to cold storage, and to have the option of connecting extra or faster NICs to a very limited 1U chassis.

When I run lsusb -t, the hubs seem to be there, but nothing I connect to them seems to do anything.

$ lsusb -t
/:  Bus 12.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/2p, 10000M
/:  Bus 11.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/2p, 480M
/:  Bus 10.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/2p, 10000M
/:  Bus 09.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/2p, 480M
/:  Bus 08.Port 1: Dev 1, Class=root_hub, Driver=uhci_hcd/2p, 12M
/:  Bus 07.Port 1: Dev 1, Class=root_hub, Driver=uhci_hcd/2p, 12M
/:  Bus 06.Port 1: Dev 1, Class=root_hub, Driver=uhci_hcd/2p, 12M
/:  Bus 05.Port 1: Dev 1, Class=root_hub, Driver=uhci_hcd/2p, 12M
    |__ Port 2: Dev 2, If 0, Class=Human Interface Device, Driver=usbhid, 12M
    |__ Port 2: Dev 2, If 1, Class=Human Interface Device, Driver=usbhid, 12M
/:  Bus 04.Port 1: Dev 1, Class=root_hub, Driver=uhci_hcd/2p, 12M
/:  Bus 03.Port 1: Dev 1, Class=root_hub, Driver=uhci_hcd/2p, 12M
/:  Bus 02.Port 1: Dev 1, Class=root_hub, Driver=ehci-pci/4p, 480M
/:  Bus 01.Port 1: Dev 1, Class=root_hub, Driver=ehci-pci/4p, 480M
    |__ Port 3: Dev 2, If 0, Class=Hub, Driver=hub/4p, 480M
        |__ Port 3: Dev 3, If 0, Class=Mass Storage, Driver=usb-storage, 480M
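In case it helps narrow this down: `lspci -nnk` on the console will show whether the card's controllers have a kernel driver bound at all (cards like this are xHCI controllers behind a PCIe switch). The same check can be scripted against sysfs; here's a sketch. The class code 0x0c0330 is the standard PCI class for xHCI, and PCI_SYSFS is parameterized only for illustration:

```shell
#!/bin/sh
# Sketch: list PCI devices of class 0x0c0330 (xHCI USB controller)
# that have no kernel driver bound. On a live system PCI_SYSFS is
# /sys/bus/pci/devices.
PCI_SYSFS="${PCI_SYSFS:-/sys/bus/pci/devices}"

unbound_xhci() {
    for dev in "$PCI_SYSFS"/*; do
        [ -f "$dev/class" ] || continue
        # A device with no "driver" entry has nothing bound to it.
        if [ "$(cat "$dev/class")" = "0x0c0330" ] && [ ! -e "$dev/driver" ]; then
            echo "no driver bound: $(basename "$dev")"
        fi
    done
}

unbound_xhci
```

If controllers show up here, the problem is driver binding rather than the hubs themselves.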

 

rs1-diagnostics-20200319-2257.zip

Edited by fireflower

On 3/18/2020 at 4:13 AM, Gdtech said:

Anybody know when multiple cache or array pools will be available?

 

Multiple cache pools are being internally tested now. Multiple array pools are not in the cards for this release.


3 hours ago, limetech said:

Multiple cache pools are being internally tested now. Multiple array pools are not in the cards for this release.

woot woot =0

5 hours ago, limetech said:

Multiple cache pools are being internally tested now. Multiple array pools are not in the cards for this release.

Just curious: I understand the need for multiple array pools, but I don't have a clue what multiple cache pools are good for.

 

Just curious: I understand the need for multiple array pools, but I don't have a clue what multiple cache pools are good for.
 

Not sure if I've understood it correctly, but I would like my VM drives on a cache pool separate from the one my Docker containers use, to reduce wear and tear and the risk of failure; plus I can buy smaller SSDs for each task.




Ah, I see. Thanks. So to be clear: I need multiple array pools, not multiple cache pools ;-)

 


I'm getting the following errors in my system log, and I've been seeing this issue over the last several versions. I thought I would try the beta in the hope it would address the issue, which it hasn't.

 

I can't download diagnostics, the web interface becomes unstable/partially functional, and eventually my log hits 100% after 2-3 days, forcing me to reboot.

 

The log is filled with this:

 

Mar 25 14:56:35 Tower nginx: 2020/03/25 14:56:35 [alert] 7994#7994: worker process 20475 exited on signal 6
Mar 25 14:56:37 Tower nginx: 2020/03/25 14:56:37 [alert] 7994#7994: worker process 20476 exited on signal 6
Mar 25 14:56:39 Tower nginx: 2020/03/25 14:56:39 [alert] 7994#7994: worker process 20493 exited on signal 6
Mar 25 14:56:41 Tower nginx: 2020/03/25 14:56:41 [alert] 7994#7994: worker process 20497 exited on signal 6
Mar 25 14:56:42 Tower nginx: 2020/03/25 14:56:42 [alert] 7994#7994: worker process 20500 exited on signal 6
Mar 25 14:56:44 Tower nginx: 2020/03/25 14:56:44 [alert] 7994#7994: worker process 20502 exited on signal 6
Mar 25 14:56:46 Tower nginx: 2020/03/25 14:56:46 [alert] 7994#7994: worker process 20505 exited on signal 6
Mar 25 14:56:46 Tower nginx: 2020/03/25 14:56:46 [alert] 7994#7994: worker process 20510 exited on signal 6
Mar 25 14:56:48 Tower emhttpd: error: publish, 244: Connection reset by peer (104): read
Mar 25 14:56:48 Tower nginx: 2020/03/25 14:56:48 [alert] 7994#7994: worker process 20511 exited on signal 6
Mar 25 14:56:50 Tower nginx: 2020/03/25 14:56:50 [alert] 7994#7994: worker process 20515 exited on signal 6
Mar 25 14:56:50 Tower nginx: 2020/03/25 14:56:50 [alert] 7994#7994: worker process 20516 exited on signal 6
Mar 25 14:56:52 Tower nginx: 2020/03/25 14:56:52 [alert] 7994#7994: worker process 20517 exited on signal 6
Mar 25 14:56:52 Tower nginx: 2020/03/25 14:56:52 [alert] 7994#7994: worker process 20526 exited on signal 6
Mar 25 14:56:54 Tower nginx: 2020/03/25 14:56:54 [alert] 7994#7994: worker process 20527 exited on signal 6
Mar 25 14:56:54 Tower nginx: 2020/03/25 14:56:54 [alert] 7994#7994: worker process 20529 exited on signal 6
Mar 25 14:56:56 Tower nginx: 2020/03/25 14:56:56 [alert] 7994#7994: worker process 20530 exited on signal 6
Mar 25 14:56:56 Tower nginx: 2020/03/25 14:56:56 [alert] 7994#7994: worker process 20533 exited on signal 6
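Until diagnostics are downloadable, a rough way to quantify how fast the log is filling is to bucket those crashes per minute straight from the syslog. A small sketch; it reads log text on stdin, so on the server itself you would run something like `crash_rate < /var/log/syslog`:

```shell
# Sketch: count nginx "exited on signal 6" events per minute from
# syslog-format input on stdin. Field 3 is HH:MM:SS; keep only HH:MM
# so uniq -c buckets consecutive crashes by minute.
crash_rate() {
    grep 'exited on signal 6' | awk '{print $1, $2, substr($3, 1, 5)}' | uniq -c
}
```

At eight-plus crashes a minute, as in the excerpt above, a fixed-size log partition filling in 2-3 days is about what you'd expect.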

 

Any input or ideas are appreciated.  If I could download diagnostics, I would.


From the 6.8 release:

 

"Linux kernel

We started 6.8 development and initial testing using Linux 5.x kernel.  However there remains an issue when VM's and Docker containers using static IP addresses are both running on the same host network interface.  This issue does not occur with the 4.19 kernel.  We are still studying this issue and plan to address it in the Unraid 6.9 release."

 

Is this (or will this be) solved in this release?

On 3/10/2020 at 1:42 PM, gzilla said:

Hey thanks for this beta, really appreciate having access to the 5.5 kernel. [...] Unraid also detects the eGPU enclosure, but not the Zotac 1080ti inside the enclosure. [...] Has anyone else managed to get this working at all in a similar configuration?

Hi guys,

 

Just wanted to drop in and say I managed to solve this!

 

I noticed when booting into Windows that it would fail to find enough resources to allocate to PCI devices, resulting in error 12 on a PCIe root port or, at times, on the 1080 Ti graphics card itself!

Looking at the Unraid logs, I found the same kind of thing happening there, which I had missed when debugging the logs last time:

 

Mar 27 14:57:47 gnznas kernel: pci 0000:04:00.0: PCI bridge to [bus 05]
Mar 27 14:57:47 gnznas kernel: pci 0000:04:00.0:   bridge window [mem 0xf2100000-0xf21fffff]
Mar 27 14:57:47 gnznas kernel: pci 0000:06:00.0: BAR 15: no space for [mem size 0x18000000 64bit pref]
Mar 27 14:57:47 gnznas kernel: pci 0000:06:00.0: BAR 15: failed to assign [mem size 0x18000000 64bit pref]
Mar 27 14:57:47 gnznas kernel: pci 0000:06:00.0: BAR 14: assigned [mem 0xe7800000-0xe8ffffff]
Mar 27 14:57:47 gnznas kernel: pci 0000:06:00.0: BAR 13: assigned [io  0xc000-0xcfff]
Mar 27 14:57:47 gnznas kernel: pci 0000:07:01.0: BAR 15: no space for [mem size 0x18000000 64bit pref]
Mar 27 14:57:47 gnznas kernel: pci 0000:07:01.0: BAR 15: failed to assign [mem size 0x18000000 64bit pref]
Mar 27 14:57:47 gnznas kernel: pci 0000:07:01.0: BAR 14: assigned [mem 0xe7800000-0xe8ffffff]
Mar 27 14:57:47 gnznas kernel: pci 0000:07:01.0: BAR 13: assigned [io  0xc000-0xcfff]
Mar 27 14:57:47 gnznas kernel: pci 0000:08:00.0: BAR 1: no space for [mem size 0x10000000 64bit pref]
Mar 27 14:57:47 gnznas kernel: pci 0000:08:00.0: BAR 1: failed to assign [mem size 0x10000000 64bit pref]
Mar 27 14:57:47 gnznas kernel: pci 0000:08:00.0: BAR 3: no space for [mem size 0x02000000 64bit pref]
Mar 27 14:57:47 gnznas kernel: pci 0000:08:00.0: BAR 3: failed to assign [mem size 0x02000000 64bit pref]  
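For anyone hitting similar symptoms, these allocation failures are easy to check for right after boot. A sketch that filters them out of kernel log text on stdin; on a live system you'd pipe the log in, e.g. `dmesg | bar_failures`:

```shell
# Sketch: pull PCI BAR allocation failures out of kernel log input,
# deduplicated, so you can see which devices didn't get resources.
bar_failures() {
    grep -E 'BAR [0-9]+: (no space for|failed to assign)' | \
        sed 's/.*pci /pci /' | sort -u
}
```

An empty result means every BAR got assigned and the resource problem is gone.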

 

In bare-metal Windows there are a number of options you can go through to fix this issue, outlined here: https://egpu.io/forums/pc-setup/fix-dsdt-override-to-correct-error-12/. You can do a DSDT patch with Clover and remove a PCIe root port you don't care about, although that can completely hide a second graphics card from Windows (I also have an AMD RX 580 and an Nvidia GT 710 in there).

I thought this might also work: https://khronokernel-4.gitbook.io/disable-unsupported-gpus/

I couldn't get either of those working, but I did find an acceptable workaround for now: press F11 at boot, and when the boot selection menu comes up, re-plug the eGPU box into the Thunderbolt port. Then boot into Unraid. The devices now show up in the PCI list as follows:

           +-01.2-[01-2f]----00.0-[02-2f]--+-01.0-[03-22]----00.0-[04-22]--+-00.0-[05]----00.0  Intel Corporation JHL7540 Thunderbolt 3 NHI [Titan Ridge 4C 2018]
           |                               |                               +-01.0-[06-13]----00.0-[07-08]----01.0-[08]--+-00.0  NVIDIA Corporation GP102 [GeForce GTX 1080 Ti]
           |                               |                               |                                            \-00.1  NVIDIA Corporation GP102 HDMI Audio Controller
           |                               |                               +-02.0-[14]----00.0  Intel Corporation JHL7540 Thunderbolt 3 USB Controller [Titan Ridge 4C 2018]
           |                               |                               \-04.0-[15-22]--

...

           +-03.1-[31]--+-00.0  NVIDIA Corporation GK208B [GeForce GT 710]
           |            \-00.1  NVIDIA Corporation GK208 HDMI/DP Audio Controller
           +-03.2-[32]--+-00.0  Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590]
           |            \-00.1  Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere HDMI Audio [Radeon RX 470/480 / 570/580/590]

 

But when I passed the 1080 Ti through to a Windows 10 VM, the device wouldn't start because of error 43, which is something SpaceInvaderOne has documented very well in his YouTube videos (check them out if you haven't already; they're all really good!).

So I went through all of his suggestions, and _still_ it didn't work.

 

Looking at the logs again, there were still some PCI devices whose BARs were failing to be assigned.

After much digging around the Linux kernel man pages, I found that adding `pci=realloc` to the boot arguments on the flash drive sorts out the resource accounting completely on the next boot, and all PCI devices find resources and start successfully! Yay.
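For anyone wanting to do the same: on Unraid the boot arguments live in syslinux/syslinux.cfg on the flash drive (also editable from the webGUI under Main → Flash → Syslinux Configuration). The stock boot entry with `pci=realloc` added looks roughly like this; your other append arguments may differ:

```
label Unraid OS
  menu default
  kernel /bzimage
  append pci=realloc initrd=/bzroot
```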

 

Make sure you also have CSM disabled and Above 4G Decoding enabled in your BIOS, and use a vBIOS ROM in your libvirt XML (all documented by SpaceInvaderOne). I also set `Above 4G MMIO` to 36-bit (I thought that was related to the 36-bit large-memory setup the egpu.io page describes, but I'm not sure whether it matters beyond the other Above 4G setting in the CSM area), and enabled 'PCIE Devices Power On' in the BIOS too.
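For reference, a sketch of the relevant libvirt XML pieces. The PCI address and ROM path are placeholders for your own values; the `<features>` additions are the commonly used workarounds for Nvidia's error 43 on older drivers, and `<rom file=...>` is where the vBIOS ROM goes:

```xml
<!-- Under <features>: hide the hypervisor from the Nvidia driver -->
<features>
  <hyperv>
    <vendor_id state='on' value='none'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>

<!-- Under <devices>: the passed-through GPU with a vBIOS ROM attached.
     The bus/slot address and ROM path below are placeholders. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
  </source>
  <rom file='/mnt/user/isos/vbios/gtx1080ti.rom'/>
</hostdev>
```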

 

Hope this helps someone. I now have working macOS and Windows VMs running simultaneously, each with its own graphics card!

Posted (edited)
On 3/25/2020 at 3:13 PM, jbear said:

I'm getting the following errors in my system log, I've been seeing this issue in the last several versions. [...] I can't download diagnostics, the web interface becomes unstable / partially functional, eventually my log will hit 100% after 2-3 days, forcing me to reboot. [...] Tower nginx: [alert] 7994#7994: worker process 20475 exited on signal 6 [...]

To follow up on my own post: from what I can tell, this manifests if I fail to log out of the browser-based admin interface. After about 24 hours of being logged in, the system log starts to fill with the entries seen above, and after 48-72 hours or so it is at 100%. If I log out, they stop. Obviously logging out is the answer, but you wouldn't expect that to be the end result.

 

Is there a way to automatically log out a session after X minutes of inactivity?

 

I've never had this issue until the 6.8 releases; I'm currently on 6.9 beta 1. I will continue to monitor, but this is where I'm at for now.

 

Any feedback is appreciated.  Thanks.

 

tower-diagnostics-20200401-1525.zip

Edited by jbear

On 3/28/2020 at 11:07 AM, jowi said:

From the 6.8 release: "...there remains an issue when VM's and Docker containers using static IP addresses are both running on the same host network interface. ... We are still studying this issue and plan to address it in the Unraid 6.9 release." [...] Is this (or will this be) solved in this release?

Any comments on this?


There have been no comments on any question whatsoever for over a month...

Posted (edited)
6 hours ago, jowi said:

There has been no comments on any question whatsoever for over a month... 

They busyyyyy...

 

Testing the multi-language and multi-cache-pool support.

 

Those are huge changes.

 

Also, they have personal lives, which could be affected by COVID-19.

(That's not our business)

Edited by Dazog

On 5/2/2020 at 6:42 AM, Tucubanito07 said:

Anything new for beta 2?

A little bird told me that they're already at beta 8; huge changes coming...


On 5/7/2020 at 9:21 AM, starbetrayer said:

A little bird told me that they're already at beta 8; huge changes coming...

I hope that little birdie is correct. Obviously I hope the Unraid team is safe; I just wanted to see if they had an update, that's all. Be safe out there and have a great day.

