[Support] ich777 - AMD Vendor Reset, CoralTPU, hpsahba,...



8 minutes ago, Econaut said:

I do have these items:

Yes, I've built in a routine that checks if the module is already enabled because enabling twice doesn't make much sense.

Are you on 6.10.0-rc1? If so, the kernel module is enabled on boot.

 

8 minutes ago, Econaut said:

Is there a way to use this amdgpu module on other dockers mentioned as well? (trying out your Jellyfin shortly)

Just pass through the device /dev/dri.

Please go to the support thread or read the container description; I've put everything in there.

Radeon TOP is necessary if you want to install, for example, GPU Statistics from @b3rs3rk so that you can see the GPU utilization on your unRAID dashboard.
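
For example, outside of the unRAID template the same passthrough on a plain docker run would look roughly like this (the image name here is just a placeholder):

docker run -d --name jellyfin \
  --device /dev/dri:/dev/dri \
  -p 8096:8096 \
  -v /mnt/user/appdata/jellyfin:/config \
  your-jellyfin-image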

 

8 minutes ago, Econaut said:

Cool! Worth a shot - I was interested in trying all 3 (Jellyfin, Plex, Emby) if I can pass hardware transcode capability to each docker.

I think Plex doesn't support AMD GPUs currently.

9 minutes ago, ich777 said:

Yes, I've built in a routine that checks if the module is already enabled because enabling twice doesn't make much sense.

Are you on 6.10.0-rc1? If so, the kernel module is enabled on boot.

 

Just pass through the device /dev/dri.

Please go to the support thread or read the container description; I've put everything in there.

Radeon TOP is necessary if you want to install, for example, GPU Statistics from @b3rs3rk so that you can see the GPU utilization on your unRAID dashboard.

 

I think Plex doesn't support AMD GPUs currently.

 

Yes I am on 6.10.0-rc1 - does that mean I don't need to add the modprobe line to the /boot/config/go file?

 

If I add the same parameter as in your Docker config to other Docker containers (a Device with the value /dev/dri), will that allow the other containers to use the hardware? (I guess just Emby additionally, maybe?)

 

You may be right; Plex has this posted in their article:

 

Quote

*Note: Our hardware-transcoding system has technical support for many dedicated AMD graphics cards, but we haven’t done official, full testing on those. Support for AMD GPUs is provided “as is” and your mileage may vary. It is recommended that you use Intel Quick Sync Video or a dedicated NVIDIA GPU.

 

5 minutes ago, Econaut said:

Yes I am on 6.10.0-rc1 - does that mean I don't need to add the modprobe line to the /boot/config/go file?

Exactly. The line isn't needed with the Radeon TOP plugin either, because it also does a modprobe if necessary.
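
For reference, on older unRAID versions without the plugin, the go file entry would look roughly like this (assuming the usual amdgpu module for modern AMD cards):

# add to /boot/config/go - not needed on 6.10.0-rc1+ or with the Radeon TOP plugin installed
modprobe amdgpu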

 

6 minutes ago, Econaut said:

If I add the same parameter as in your Docker config to other Docker containers (a Device with the value /dev/dri), will that allow the other containers to use the hardware? (I guess just Emby additionally, maybe?)

Exactly, everything that uses VAAPI needs that device passed through.

Please remember that not every container supports AMD, since the necessary dependencies have to be installed in the container too.
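
A quick way to check from inside a running container is something like this, assuming the image ships vainfo (part of libva-utils); the container name is just an example:

docker exec -it jellyfin vainfo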

 

From what I know, Plex doesn't support it, but as you quoted, mileage may vary... :D

4 minutes ago, Econaut said:

I am not sure how to check, but the binhex Jellyfin docker does not work with the device passthrough as such.

I thought you were trying it with my container?

My container does indeed work with AMD.

 

5 minutes ago, Econaut said:

Seems like AMF

No, because AMF needs the AMD Pro drivers in the container to work, and I really don't want to create a container with these drivers...

VAAPI does work fine.

11 minutes ago, ich777 said:

I thought you were trying it with my container?

My container does indeed work with AMD.

 

Yes indeed - at first I was unable to log in to your container, perhaps because I already had the binhex container. It seems it was a port conflict, and changing the port allowed it to work. Dumb question... but how can I differentiate between CPU & GPU transcode?

 


 

1 minute ago, Econaut said:

Yes indeed - at first I was unable to log in to your container, perhaps because I already had the binhex container.

Exactly. You have to change the port to access my container if you already have another container running on your system.

 

2 minutes ago, Econaut said:

Dumb question... but how can I differentiate between CPU & GPU transcode?

Have you read my post in the Jellyfin support thread?

Install the GPU Statistics plugin from @b3rs3rk, select in the plugin options that you have an AMD GPU, and you will see the utilization on the unRAID dashboard.

 

You have to force a transcode when playing a video file with the little gear icon at the bottom of the Jellyfin player, and you also have to configure VAAPI.
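
If you prefer the command line over the dashboard, the Radeon TOP plugin also brings the radeontop tool, so a rough check while a transcode is running would be:

radeontop   # live GPU load and VRAM usage, run in a terminal on the host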


Yes, I did read that and installed & set that up. At first there was huge CPU usage, which could have just been something else... and very minimal GPU usage, but then the CPU usage settled after a while and the GPU usage stayed low. Looks like it's working great 👍

I was hoping Jellyfin might be more definitive on its own (whether I am using hardware vs. software, for instance).

 

I actually have 3 AMD GPUs plugged into this system (one integrated, two discrete). I don't suppose /dev/dri specifies one over another, or is maybe just the primary selected?

20 minutes ago, Econaut said:

Yes, I did read that and installed & set that up. At first there was huge CPU usage, which could have just been something else... and very minimal GPU usage, but then the CPU usage settled after a while and the GPU usage stayed low. Looks like it's working great 👍

In Jellyfin, the main problem is that throttling doesn't work even if you enable it in the settings; it caused trouble, so they hardcoded it to always be deactivated regardless of whether you enable or disable it in the settings.

 

Don't forget that audio needs to be transcoded too; that is done on the CPU and also causes a huge load when throttling doesn't work.

 

22 minutes ago, Econaut said:

I actually have 3 AMD GPUs plugged into this system (one integrated, two discrete). I don't suppose /dev/dri specifies one over another, or is maybe just the primary selected?

Have you bound the 2 other GPUs to VFIO, or are they currently used in a VM? At least it looks like you did, or they are currently in use in a VM, because you only have renderD128 in the directory. If you had multiple GPUs visible to unRAID (which VFIO would prevent), you would also have renderD129, renderD130, and so on in /dev/dri.
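
You can check that quickly on the host; with only one GPU visible, the output looks roughly like this (illustrative):

ls /dev/dri
# by-path  card0  renderD128          <- only one GPU visible to unRAID
# additional visible GPUs would add card1/renderD129, card2/renderD130, ...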

17 minutes ago, Econaut said:

(still can't get passthrough working for them, but they are bound anyway).

If these are AMD cards and they are affected by the AMD reset bug (mostly RX cards), try upgrading to 6.10.0-rc1, then install the AMD Vendor Reset Patch from the CA App, reboot, and try to pass them through again; maybe also try recreating the VM.


I don't have my Unraid build set up yet, but I was hoping to get this Docker container ahead of time since I'll need a custom driver module. Is the container available anywhere for download? I see the links have been removed from the first post.

 

Thanks,

Harry

14 minutes ago, HarryMuscle said:

I don't have my Unraid build set up yet, but I was hoping to get this Docker container ahead of time since I'll need a custom driver module. Is the container available anywhere for download? I see the links have been removed from the first post.

Oh, you've already found it...

Please keep in mind that it currently only supports up to 6.9.2 reliably...

 

Sure thing, it's on DockerHub.

1 hour ago, HarryMuscle said:

Are the scripts for the docker container available on GitHub or anywhere else?  I found the plug-in on your GitHub account but can't find anything for the docker container's source files.  I'm curious to see how you accomplished this.

The script for the container is copied over to the root directory if you enable the custom build mode. This is basically a Debian Bullseye container that cross-compiles everything and packs it up into custom images. Keep in mind that I will be deprecating the container soon, because I will move away from custom images since you can now integrate nearly everything with plugins.

 

I made a package for you from which it is possible to create a plugin:

I compiled the linked GitHub repo and attached the files for you to test; please keep in mind these files will only work on unRAID v6.9.2:

qnap_it8528-plugin-5.10.28-Unraid-1.txz

qnap_it8528-plugin-5.10.28-Unraid-1.txz.md5

 

To install the files, first place them somewhere on unRAID, navigate to the folder where you've put them, and issue these commands:

installpkg qnap_it8528-plugin-5.10.28-Unraid-1.txz
depmod -a
modprobe qnap-ec

I can't test these files since I've got no QNAP hardware, and I also don't know if you need to run 'qnap-ec' for everything to work.
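
To confirm the module actually loaded, something like this should do (I can't verify the output without QNAP hardware either):

lsmod | grep qnap    # the module should show up as qnap_ec
sensors              # fan and temperature readings, if lm-sensors is available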

 

 

May I also recommend that you change the Makefile a bit so that it respects DESTDIR?

Since Slackware has a slightly different layout for where the shared libraries go, I would also recommend changing 'LIBRARY1_PATH' & 'LIBRARY2_PATH' to '/usr/lib64' (those are 64-bit libraries, I think; if not, the path should be '/usr/lib'), and the last thing I would recommend is changing 'HELPER_PATH' to '/usr/bin'.
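
To illustrate what I mean, a DESTDIR-aware install step would boil down to something like this (the file names are placeholders, not the repo's actual ones):

DESTDIR=/tmp/qnap-ec-package
install -D -m 0644 some-library.so  "$DESTDIR"/usr/lib64/some-library.so
install -D -m 0755 qnap-ec-helper   "$DESTDIR"/usr/bin/qnap-ec-helper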

 

Feel free to contact me again if you have further questions. :)

8 hours ago, ich777 said:

I made a package for you from which it is possible to create a plugin: [...] Feel free to contact me again if you have further questions. :)

Awesome.  Thanks.  I'll look into making the suggested changes.

8 hours ago, ich777 said:

Have you tested it yet?
I'm curious if it is working... :)

Sent from my C64
 

No, not yet; I still have to set up Unraid on the QNAP NAS. The only testing so far has been on Debian running on the QNAP NAS, and the driver works great. Hopefully I'll be able to test the file you provided soon...ish.

No, not yet; I still have to set up Unraid on the QNAP NAS. The only testing so far has been on Debian running on the QNAP NAS, and the driver works great. Hopefully I'll be able to test the file you provided soon...ish.
You should be able to test this without starting the Array, so you don't have to change too much on your current config.

Sent from my C64

8 minutes ago, ich777 said:

@HarryMuscle any news on the QNAP packages? Do they work?

 

We had to make some changes to the driver code, so the original packages you created became outdated, but your Docker build script was very helpful in creating a new package (we based a simplified packaging workflow on how your build script does things; the current package can be found here: https://github.com/Stonyx/QNAP-EC/releases/tag/1.0.0), which does indeed work for reporting the fan speeds to Unraid. However, we came across a bug in the Auto Fan Control code in Unraid that prevents it from being able to control the fans (https://forums.unraid.net/bug-reports/stable-releases/auto-fan-control-assumes-pwm-enable-sysfs-attribute-exists-r1617), so we're working on a possible solution for that.

14 minutes ago, HarryMuscle said:

so we're working on a possible solution for that.

Should I create a plugin for this where the package is compiled every time a new unRAID version is released, like it is for my Nvidia Driver, DVB Driver, NCT Driver Plugin,...

I also compile packages for other community developers like ZFS for @steini84 and USBIP & iSCSI for @SimonF.

 

The process is automated and executed every time a new unRAID version is released.

Just hook me up with a short PM.

13 minutes ago, ich777 said:

Should I create a plugin for this where the package is compiled every time a new unRAID version is released, like it is for my Nvidia Driver, DVB Driver, NCT Driver Plugin,...

I also compile packages for other community developers like ZFS for @steini84 and USBIP & iSCSI for @SimonF.

 

The process is automated and executed every time a new unRAID version is released.

Just hook me up with a short PM.

I might take you up on that. I'm having trouble figuring out exactly who supports the Auto Fan Control related code (according to the Dynamix GitHub page it's no longer a plugin but part of Unraid, yet according to the Unraid response to my bug report it's still a plugin), so we might add a "solution" to the driver itself to deal with the issue and allow Unraid to control the fan speeds, not just read them. I'd like to hold off until that is figured out before we make it widely available.

 

