trypowercycle
Member · 25 posts
Everything posted by trypowercycle
-
I'm also interested in LTO support. I have a Quantum LTO5 library I'm gonna test out this weekend. Are there any drivers built into the base install?
-
Feel free to have me delete this and rename my old thread that this stemmed from. I am running an 11th Gen Intel system and cannot get any devices to work in the first (x16) slot on the system. I previously thought it was a network card driver issue, but even after swapping it around with my SAS controller card, I am getting the same error in the logs: "0000:01:00.0: can't change power state from D3cold to D0". So far I've updated my BIOS to the latest release. I've also tried adding "pcie_aspm=off" as a boot flag (thinking this issue had something to do with power states), but that did not work. I've attached the log files. Any other ideas would be appreciated. unraid-diagnostics-20211117-2351.zip
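For anyone else hitting the D3cold error, this is roughly how I've been checking what power state the kernel thinks the device is in. It's a sketch, not a tested recipe: the sysfs path is the standard Linux layout, and 0000:01:00.0 is the address from my own syslog, so substitute yours.

```shell
# Sketch (assumptions: standard Linux sysfs; 0000:01:00.0 is the device
# address from my logs - replace with the address from your lspci output).
DEV=0000:01:00.0
STATE_FILE=/sys/bus/pci/devices/$DEV/power_state

if [ -r "$STATE_FILE" ]; then
  # A healthy, driver-bound device should report "D0" here.
  cat "$STATE_FILE"
else
  echo "no power_state entry for $DEV (device absent or not enumerated)"
fi
```

If the device never reaches D0, boot flags like pcie_aspm=off (which I already tried) or pcie_port_pm=off are the usual things people suggest poking at.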
-
6.10 RC2 Mellanox ConnectX3 Not Functional
trypowercycle commented on trypowercycle's report in Prereleases
I'm going to close out this thread and file a new, more general bug report for the "can't change power state from D3cold to D0 (config space inaccessible)" error occurring on any PCIe device plugged into the x16 slot on my board.
-
6.10 RC2 Mellanox ConnectX3 Not Functional
trypowercycle commented on trypowercycle's report in Prereleases
Well, the plot thickens... I guess this is solved. I swapped it with my SAS controller and now the Mellanox works but the SAS controller doesn't... So something is preventing devices in that x16 slot from working.
-
6.10 RC2 Mellanox ConnectX3 Not Functional
trypowercycle commented on trypowercycle's report in Prereleases
That's worth a shot... I suppose I could try a BIOS update as well for the hell of it. Googling that issue shows a bunch of Nvidia driver issues (which kinda makes sense, since Nvidia owns Mellanox now) and talk about PCIe power settings.
-
6.10 RC2 Mellanox ConnectX3 Not Functional
trypowercycle commented on trypowercycle's report in Prereleases
Yeah, I updated to that firmware version last night to see if that was the issue... No dice unfortunately. -
6.10 RC2 Mellanox ConnectX3 Not Functional
trypowercycle commented on trypowercycle's report in Prereleases
Any chance you could query your card so I can compare it to mine with a mstconfig -d 'device-id' query? I wonder if I have some funky config. Mine is below:

Device type:  ConnectX3
Device:       01:00.0
Configurations:              Next Boot
  SRIOV_EN                   False(0)
  NUM_OF_VFS                 8
  LINK_TYPE_P1               ETH(2)
  LINK_TYPE_P2               ETH(2)
  LOG_BAR_SIZE               3
  BOOT_PKEY_P1               0
  BOOT_PKEY_P2               0
  BOOT_OPTION_ROM_EN_P1      True(1)
  BOOT_VLAN_EN_P1            False(0)
  BOOT_RETRY_CNT_P1          0
  LEGACY_BOOT_PROTOCOL_P1    PXE(1)
  BOOT_VLAN_P1               1
  BOOT_OPTION_ROM_EN_P2      True(1)
  BOOT_VLAN_EN_P2            False(0)
  BOOT_RETRY_CNT_P2          0
  LEGACY_BOOT_PROTOCOL_P2    PXE(1)
  BOOT_VLAN_P2               1
  IP_VER_P1                  IPv4(0)
  IP_VER_P2                  IPv4(0)
  CQ_TIMESTAMP               True(1)
-
6.10 RC2 Mellanox ConnectX3 Not Functional
trypowercycle commented on trypowercycle's report in Prereleases
I'm starting to think this is an issue with the firmware having SR-IOV enabled and my motherboard not supporting SR-IOV. I'm going to see if I can disable it in the firmware and go from there. -
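For anyone following along, this is roughly the mstconfig invocation I'm planning to use to disable it. Treat it as a sketch: it assumes Mellanox's MFT tools are installed, and 01:00.0 is my card's address, not a universal one.

```shell
# Sketch, assuming Mellanox MFT (mstconfig) is installed; 01:00.0 is the
# device address from my own lspci/syslog output - substitute yours.
DEV=01:00.0

if command -v mstconfig >/dev/null 2>&1; then
  # Show the current SR-IOV setting, then disable it in the firmware
  # config. The change only takes effect after a reboot / power cycle.
  mstconfig -d "$DEV" query | grep SRIOV_EN
  mstconfig -y -d "$DEV" set SRIOV_EN=0
else
  echo "mstconfig not found; install the Mellanox MFT package first"
fi
```

mstconfig writes to the card's non-volatile config, so if this goes sideways the same tool should be able to set SRIOV_EN back to 1.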
6.10 RC2 Mellanox ConnectX3 Not Functional
trypowercycle commented on trypowercycle's report in Prereleases
Hmm, interesting, so I guess the driver is functional if it's working on yours. Is that a ConnectX-3 card? I wonder if it doesn't like my flavor of the card for some reason…
-
Upgraded to 6.10 RC2 from the latest stable release and lost my Mellanox ConnectX-3 card as a network interface. It is on the recommended firmware for Linux kernel 5.14, firmware version 2.42.5000. https://docs.mellanox.com/display/kernelupstreamv514/Linux+Kernel+Upstream+Release+Notes+v5.14 I forgot to grab the log file before I reverted back to stable because my wife was bugging me to have Plex up and running again, but I can grab one tomorrow. Best I have for now is a picture I took of the syslog in the local GUI while I was doing some initial googling for the error. The lines that seemed relevant were:

kernel: mlx4_core: Mellanox ConnectX core driver v4.0-0
Nov 16 21:32:57 Unraid kernel: mlx4_core: Initializing 0000:01:00.0
Nov 16 21:32:57 Unraid kernel: mlx4_core 0000:01:00.0: can't change power state from D3cold to D0 (config space inaccessible)
Nov 16 21:32:57 Unraid kernel: mlx4_core 0000:01:00.0: Multiple PFs not yet supported - Skipping PF
Nov 16 21:32:57 Unraid kernel: mlx4_core: probe of 0000:01:00.0 failed with error -22

Let me know if you'd like that full log or if this is a known issue already, thanks!
-
Just got my first kernel panic on RC2. What is the best way to make these logs persist so I can share them if it happens again? I have a picture of the console screen from the attached monitor, but I assume there isn't enough info on it to be useful.
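Partially answering my own question after some digging: Unraid has a built-in option to mirror the syslog to the flash drive (under Settings > Syslog Server, if I remember right). As a crude fallback, something like the sketch below could be run periodically to keep a copy on the USB stick; the /boot path is Unraid's flash mount, and the rest is my assumption, not a tested recipe.

```shell
# Rough, untested sketch: copy the in-RAM syslog to the USB flash so it
# survives a panic/reboot. /boot is Unraid's flash mount (assumption);
# adjust DST for other setups.
SRC=/var/log/syslog
DST=${DST:-/boot/logs}

mkdir -p "$DST" 2>/dev/null
if [ -f "$SRC" ]; then
  # Timestamped copy so repeated runs don't overwrite each other.
  cp "$SRC" "$DST/syslog-$(date +%Y%m%d-%H%M%S).txt"
  echo "copied syslog to $DST"
else
  echo "no $SRC found on this system"
fi
```

The built-in syslog mirror is the cleaner option since it captures lines right up to the panic; this is just a belt-and-suspenders copy.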
-
Unraid OS version 6.9.0-beta35 available
trypowercycle commented on limetech's report in Prereleases
I have also experienced a kernel panic. I don't have any logs collecting, but I took a picture of the console when it happened. Not sure if there is enough there to be useful though... I am not running any Nvidia drivers, only using the Intel iGPU.
-
Unraid OS version 6.9.0-beta35 available
trypowercycle commented on limetech's report in Prereleases
Is anyone else having issues with their virtual machines disappearing from the dashboard after starting any of them?
-
Unraid OS version 6.9.0-beta30 available
trypowercycle commented on limetech's report in Prereleases
Thanks for the writeup and for sending that over. For what it's worth, I took the line back out for the aio write size to let it default to on. I tested with aio write off and on, and it didn't seem to make a difference. I put aio read into the extras file as you suggested and it is working as expected, so all is good with that. I guess if other people have the same controller they will just have to do the same; I'd assume it is a fairly common controller. Perhaps the fix could be documented somewhere so others can find it. It is strange that this particular controller performs better with it off, though...
-
Unraid OS version 6.9.0-beta30 available
trypowercycle commented on limetech's report in Prereleases
Nice! That makes a ton of sense as to why yours was working on the same hardware, then. Do you know if it is preferable to make these changes in the smb.conf file vs. the Samba extra settings? I imagine the smb.conf can possibly be overwritten when updates come out, whereas the Samba extra settings would persist?
-
Unraid OS version 6.9.0-beta30 available
trypowercycle commented on limetech's report in Prereleases
I figured out the problem. I compared the smb.conf between beta 25 and beta 30, and the only difference was that the lines below were missing. In beta 25 there is even a comment in the file about how they are probably not needed anymore. After I put the lines back in on beta 30, file transfers worked as expected on my array on the LSI card. It seems they are still needed for some reason and should probably be added back in.

aio read size = 0
aio write size = 4096
-
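For anyone searching later, the two lines go into the Samba extra settings box (Settings > SMB > Samba extra configuration), which Unraid appends to the generated smb.conf. I believe that lands in /boot/config/smb-extra.conf on the flash, but treat the path and the [global] wrapper as my assumptions about how the include works:

```
# Samba extra settings (assumed location: /boot/config/smb-extra.conf)
[global]
   aio read size = 0
   aio write size = 4096
```

The nice part of the extra-settings route is that it survives OS updates, whereas edits to the generated smb.conf get rebuilt at boot.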
Unraid OS version 6.9.0-beta30 available
trypowercycle commented on limetech's report in Prereleases
I have unassigned devices connected to the onboard SATA, plus my NVMe cache drive, and both perform normally when I read from them over Samba, in terms of CPU usage as well as speeds. It seems to be only devices attached to the HBA...
-
Unraid OS version 6.9.0-beta30 available
trypowercycle commented on limetech's report in Prereleases
** Update: I tested this on a few of the past beta releases, and the problem outlined below began in 6.9.0-beta29 and remains a problem in 6.9.0-beta30. On 6.9.0-beta25 I got the expected read speeds of around 150 MB/s over SMB from the drives on my H310 LSI card.

This is my first post on this forum, so please let me know if there is a separate place to report these bugs. I'm running a Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03), and the card does show up; however, it cut my read speeds over Samba nearly in half and also spikes my CPU usage way up compared to 6.8.3. Over my 10Gb connection I get the full read speeds of my 5200 rpm drives, around 130-150 MB/s, on 6.8.3. I'm now getting about 75 MB/s on average on the 6.9 beta. I imagine this is due to these driver problems? Funny enough, if I run a diskspeed test I get my expected hard drive speeds... Does anyone have any suggestions to fix the Samba speeds? Or just let me know if this is something that needs to be worked out in the driver. I can post logs if needed. I am going to stay on the betas for now since I just bought a 10th gen i3 and need the iGPU support under 6.9.
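Footnote on how I'm measuring the local (non-Samba) numbers: a plain dd sequential read, roughly like the sketch below. /dev/sdX is a placeholder for one of the drives on the H310; the default here is /dev/zero only so the snippet is safe to run as-is.

```shell
# Sketch of the local sequential-read check. DISK defaults to /dev/zero
# so this runs safely anywhere; point it at a real drive (e.g. /dev/sdb)
# and add iflag=direct to bypass the page cache for a meaningful number.
DISK=${DISK:-/dev/zero}
dd if="$DISK" of=/dev/null bs=1M count=256 2>&1 | tail -n 1
```

Comparing that raw number against the SMB transfer speed is how I'm separating disk/HBA throughput from Samba overhead.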