Everything posted by ich777

  1. I think because you tried to save power while the VM wasn't running, maybe? Exactly: because there is a newer driver available for the legacy cards and it can't find your specified driver version, it will fall back to the latest available one. Glad to hear that everything is up and running again.
  2. Keep in mind that if nvidia-persistenced is running when you try to start the VM, it is possible that the VM won't start and/or even crashes your entire server. I would recommend binding the card that you plan to use in a VM to VFIO, because then nvidia-persistenced will only manage the cards that are not bound to VFIO. This is basically the same as what the script from SpaceInvader One does, since it will also pull the cards into P8, and they can of course ramp up to whatever power mode they need; just keep the VM caveat in mind. As a workaround, if you don't want to bind the GPU to VFIO, you can also put something like this in your go file:
     nvidia-persistenced
     sleep 5
     kill $(pidof nvidia-persistenced)
     This will start nvidia-persistenced, wait 5 seconds and kill it after those 5 seconds, so that the cards go to P8 when you boot the server. Of course there are more advanced ways to pull the card into P8 again after you've ended a VM; see the sketch below.
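     A minimal sketch of that more advanced idea, assuming you trigger it after VM shutdown (for example from a User Scripts job); the guard and the 5-second delay are placeholders, not a tested recipe:
        #!/bin/bash
        # Hypothetical post-VM helper: start nvidia-persistenced briefly so
        # the GPU drops back into P8, then stop it again so it cannot block
        # the next VM start.
        if ! pidof nvidia-persistenced > /dev/null; then
            nvidia-persistenced
            sleep 5
            kill $(pidof nvidia-persistenced)
        fi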
  3. You don't use the GPU in VMs, or am I wrong? If you are only using the GPU for Docker containers, simply put this line in your go file and everything should work as it does with the script:
     nvidia-persistenced
     and of course reboot after that.
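     For reference, a sketch of what the stock /boot/config/go file looks like with that line appended (the stock contents may differ slightly between Unraid versions):
        #!/bin/bash
        # Start the Management Utility
        /usr/local/sbin/emhttp &
        # Keep the NVIDIA driver initialized so containers can always use the GPU
        nvidia-persistenced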
  4. Please disable this script for now, since it also includes an outdated command: nvidia-persistenced is now a dedicated application and nvidia-smi --persistence-mode should not be used anymore, because it will be deprecated in the future. Please try to remove that script, reboot your server and try again. The output seems fine to me... Please keep me updated; it's possible that I won't answer very quickly in the next two days...
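     For comparison, a sketch of the deprecated call versus the dedicated daemon (both run as root):
        # Deprecated: enabling persistence mode through nvidia-smi
        nvidia-smi --persistence-mode=1
        # Preferred: run the dedicated daemon instead
        nvidia-persistenced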
  5. What are the contents of nvidia-powersave/script? Is only one container affected, or are multiple containers affected? If it is only one container, please post the container configuration (Docker template). I test every stable driver release on every new Unraid version and I have had no issues so far on 6.10.0+ with Jellyfin utilizing NVENC. Please open up a terminal and post the output from: cat /proc/sys/kernel/overflowuid (screenshot preferred). What also catches my eye are the modifications to the go file; do you still need them?
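     For reference, on a stock kernel that command usually prints the default overflow UID:
        cat /proc/sys/kernel/overflowuid
        # 65534 on most systems (the traditional "nobody" overflow UID)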
  6. Isn't Sundtek supported by the in-tree modules? Without your hardware IDs or your diagnostics I really can't tell much. Is Sundtek such a good value? My go-to was always DigitalDevices on the high end and TBS in the mid range; also please note that these two brands, or rather their drivers, work OOB with the in-tree drivers (LibreELEC package from my plugin).
  7. Why not control it from the BIOS? I really can't help because the NCT6687D isn't really supported by the NCT6683 driver. You can of course use force=1, but from what I know this won't show the real numbers... Have you tried forcing your chip's ID yet, like force_id=0xd590 or something like that? See the sketch below.
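     A sketch of both suggestions as modprobe calls; whether force_id is honored depends on the driver build, and 0xd590 is just the example ID from above:
        # Load the nct6683 driver even for a chip it does not officially support
        modprobe nct6683 force=1
        # Or, if the driver exposes it, force a specific chip ID
        modprobe nct6683 force_id=0xd590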
  8. Since I don't know what hardware you are using, I can't give any advice... I would say try it out on 6.10 and you will see if it works... This is always some kind of lottery...
  9. In the main window click on File -> Quit (actually I don't know what it looks like in English since I'm using the German version); this will save the filters. Press ESC. Definitely not; this happens because the profile doesn't get saved quite right if you restart/stop it from the Unraid WebGUI, but I can't really do anything about that.
  10. Can you post a screenshot please? Yes, a user had that issue too; how do you end the container? Do you restart it from the Unraid WebGUI, or am I wrong? When you are finished setting up your filters, click File -> Exit from within the noVNC Thunderbird GUI and wait for the container to restart, and you will see that they are still there. Anyway, I can't reproduce this issue on my server: even if I set up a custom filter, it saves the filters regardless of whether I restart the container from the Unraid WebGUI or from inside the container. But the method above, clicking File -> Exit, should save your filters. Logs won't help here.
  11. Then this is a pure Plex issue and I would recommend that you post on the Plex forums or in the corresponding support thread for the container.
  12. Sorry for the late reply; the server runs flawlessly for me and I can connect...
  13. Have you tried 6.10.2 yet? What container are you using for transcoding? If Plex, have you tried anything else so far?
  14. What CPU are you using? Keep in mind this thread is not for Alder Lake! On 11th Gen+ GuC is enabled automatically. Read this:
  15. Crashing and such... Please read the comment from @Nackophilz. BTW, you have to mention/quote me so that I actually see such messages; I'm not subscribed to every thread that I'm writing in...
  16. But this is mainly an Alder Lake, or rather Tiger Lake+, issue and it needs to be activated there so that everything works properly...
  17. I think the simplest way to describe it is here. Maybe also worth mentioning: kernel 5.18.2 and 5.18.3 in combination with Unraid work fine with Alder Lake and HW transcoding (currently tested with Jellyfin); I think Plex even fixed their custom version of FFmpeg that caused crashes with Alder Lake. ...please also note that this is not an Alder Lake bug thread!!!
  18. That's not entirely true. You can of course leave everything as it is with /dev/dri in the template, and in the container the Jellyfin app will automatically get /dev/dri/renderD128 (this is also the common way to do it; if you have changed it once, then it may be set to /dev/dri instead of /dev/dri/renderD128). If you do it this way, it also ensures that if you have more VAAPI-capable GPUs on the host you can, for example, switch in the container from one card to another (eg: /dev/dri/renderD128 or /dev/dri/renderD129 and so on...); see the sketch below. BTW: You should also be able to use Quick Sync in my container if you have Intel 8th gen+.
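     As a sketch of that setup (the image name and the Jellyfin menu path are placeholders, not taken from the actual template):
        # Docker template equivalent: pass the whole DRI directory through
        docker run --device /dev/dri:/dev/dri ... jellyfin
        # Then, inside Jellyfin's playback settings, point VA-API at one
        # specific render node, e.g. /dev/dri/renderD128 (first GPU) or
        # /dev/dri/renderD129 (second GPU).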
  19. Are we talking about 1.5 MB/s up? Even if you only have that upload it should be enough for 2 clients, but I could be wrong about that... Valheim was also a little bit buggy when there was not enough bandwidth available... It depends, and I really can't give you a clear answer here, but Intel has given me no trouble so far, whereas on AMD platforms I had various issues where the container wouldn't start and so on... I think I also read somewhere about something similar to your issue, but I really can't remember if that was on Intel or AMD.
  20. You are doing this with a YouTube video from what I see... Can you try to install VLC player and try it there with a file? Encode and decode should work fine too since 3D is working for you; I think the main issue on your system is that it doesn't make use of it, but that's out of my control...
  21. They changed the name of ffmpeg; that's why it failed. Fixing this was actually on my todo list, but currently I'm really busy; I had planned to fix it this evening if nobody requested it...
  22. But in the screenshot above the UHD 630 is shown in the Device Manager? Are you trying to install another driver over the existing one? Are you using Windows 10 or 11? It looks like 10, or am I wrong? I can't open the diagnostics from above...
  23. First of all, please remember that I run this server through WINE because Linux is not supported yet, and also please note that this game is in a pretty early alpha, so such issues are very common. In my experience AMD platforms have the most issues with V Rising, but have you tried restarting the container yet? What upload do you have; is maybe something downloading in the background?