ich777
Community Developer · 15,758 posts · 202 days won

Everything posted by ich777

  1. Because you've typed it in twice. Just follow it step by step: type in su $USER, then type in cd…, and so on.
  2. This is really strange. Do you have another PC to test the 4090 in?
  3. Yes, I've already seen that in the Diagnostics. This release only changed one line in the plugin, which is responsible for the log output in the container runtime and should not affect anything in terms of how the driver is working. Are you sure that it was working before and didn't already have the same issue as now, since Plex will fall back to SW transcoding if the card isn't working anymore? Anyway, if it was working before this seems rather strange to me, because the driver package is still the same as before and now it reports that the card isn't supported.
  4. Have you tried installing another driver version, like 530.41.03, to see if the issue persists (please reboot after installing another version)? The strange thing is that the driver reports that your card isn't supported:

       Jul 8 02:29:59 UNRAID kernel: NVRM: The NVIDIA GPU 0000:06:00.0 (PCI ID: 10de:2684)
       Jul 8 02:29:59 UNRAID kernel: NVRM: installed in this system is not supported by the
       Jul 8 02:29:59 UNRAID kernel: NVRM: NVIDIA 535.54.03 driver release.
       Jul 8 02:29:59 UNRAID kernel: NVRM: Please see 'Appendix A - Supported NVIDIA GPU Products'
       Jul 8 02:29:59 UNRAID kernel: NVRM: in this release's README, available on the operating system
       Jul 8 02:29:59 UNRAID kernel: NVRM: specific graphics driver download page at www.nvidia.com.

     Did you change anything in terms of hardware, or do a BIOS upgrade of some sort? This July version changed only a minor thing which doesn't affect how the driver is working; it only changes one line for the container runtime, which again doesn't affect how the plugin is working. BTW, if the log lists something like nvidia-smi having failed, then a restart of Docker won't fix anything (see the nvidia-smi sketch after this list).
  5. From what I see in the Diagnostics your GPU falls off the bus:

       Jul 7 19:51:42 SmasheNas kernel: NVRM: GPU at PCI:0000:01:00: GPU-cc1373f9-9d64-fd0f-2406-6b90e4430287
       Jul 7 19:51:42 SmasheNas kernel: NVRM: Xid (PCI:0000:01:00): 79, pid='<unknown>', name=<unknown>, GPU has fallen off the bus.
       Jul 7 19:51:42 SmasheNas kernel: NVRM: GPU 0000:01:00.0: GPU has fallen off the bus.
       Jul 7 19:51:42 SmasheNas kernel: NVRM: A GPU crash dump has been created. If possible, please run
       Jul 7 19:51:42 SmasheNas kernel: NVRM: nvidia-bug-report.sh as root to collect this data before
       Jul 7 19:51:42 SmasheNas kernel: NVRM: the NVIDIA kernel module is unloaded.

     This usually happens because of too little power over the external PCIe power connector or some aggressive power-saving measures. Did you change anything on the hardware (add/remove hardware)? You can also try to reseat the card in the PCIe slot; that has also helped some users. What power supply are you using? Do you have a machine where you can put the card in to test it and put a 3D load on it, like FurMark, for at least 30 minutes to an hour? (See the nvidia-bug-report sketch after this list.)
  6. Please force an update from the container itself and see if it's working afterwards.
  7. The query port is 2457, so you have to try it with IP:2457 (see the port check sketch after this list). What Unraid version are you on? Please make sure that you are at least on 6.12.x. How much RAM and CPU is the container using? (You can see this by enabling Advanced View in the top right corner - please don't forget to disable Advanced View again.)
  8. Yes, in my opinion this is a bit sad, since nobody has replied yet, neither on GitHub nor on their forums, but of course it's the weekend... Everything that was built before 2023-07-07 should work fine: Click
  9. Please remember that Windows is not Linux. I don't know how the devs of the game are doing things on Windows and Linux. My container has full access to all of your cores and RAM (as long as you are not limiting the resources - see the resource limit sketch after this list). For example, if you are running ARK or RUST it will absolutely destroy your server on startup because it will use all resources which are available.
  10. This is something that you have to ask on the 7DtD forums/community hub, since the container behaves basically like running the game on bare metal and this question is application specific. BTW, such high core count CPUs are most of the time a bad idea for game servers, because game servers usually favour high clock speeds. Most of the time game servers only use one or a few cores.
  11. @Lucas Mietke did your problem resolve itself? They changed the download URL again; if this is not an easy fix I will deprecate the Terraria Mobile container.
  12. What you are describing should not be possible, since once it works it keeps working. The image index is downloaded once and is valid for about 7 days I think; during those 7 days it is pulled from the cache, so if it worked once it will keep working for those 7 days (or at least for however long the index is valid - I'm not sure about the 7 days). Yes, look at the linked GitHub issue from above; two other users also reported that it's not working with images from today. I've also reported that on their forums here. If you have a GitHub account, maybe also make a short post here.
  13. It seems that something is wrong with the newest LXC container builds from today; I've already created an issue on GitHub over here: Click There is nothing I can do about that, this is something that Linux Containers have to fix. BTW, the Alpine edge image was still working fine a few hours ago.
  14. In these Diagnostics the plugin isn't even installed; please install this plugin: ...reboot and then post the new Diagnostics. Please also be aware that the IP address of Unraid may change after installing this plugin.
  15. Have you read the linked article? Especially this: it seems the Nvidia GPU is only used for transcoding videos. Have you tried placing a video file in your import folder yet? Does your CPU support AVX? (See the AVX sketch after this list.) If you search in this thread you will find people with the same question who got it working.
  16. @chris smashe please read the first post in this thread, especially the red text on top.
  17. Please post your Diagnostics the next time this happens, since all the relevant information is in there. Please remove those scripts; they are outdated IIRC, because they use a binary that will soon be dropped by Nvidia and therefore the scripts won't work anymore. The only thing that you need is to run this once: nvidia-persistenced Put this in the go file or run it through User Scripts on startup (see the go file sketch after this list); please don't run this command multiple times, one time is enough.
  18. From what I see in the documentation one variable is wrong: https://docs.photoprism.app/getting-started/advanced/transcoding/ PHOTOPRISM_FFMPEG_ENCODER should be: "nvidia" (see the sketch after this list). Maybe this is why it's not working.
  19. Sorry, but I'm only aware of ffmpeg HW acceleration in Photoprism: https://docs.photoprism.app/getting-started/advanced/transcoding/ Are you sure that TensorFlow in Photoprism works with Nvidia GPUs? The documentation only mentions CPUs.
  20. As @alturismo said, the default fan curve is fine and should prevent the card from overheating (every other FE card is using that fan profile too)… You can't control the fans since there are too many libraries missing, and since there is no X environment by default you can't use nvidia-settings.
  21. Did you also add --runtime=nvidia to the Extra Parameters with Advanced View enabled (see the sketch after this list)? Shouldn't it use TensorFlow by default, because it's built around TensorFlow? Sorry, but I can't confirm that Photoprism has no documentation; they have really good documentation, because without it the template for Photoprism wouldn't exist.
  22. No, since I won't downgrade OpenVPN further. Can't you use the built-in WireGuard for this? I'm not super into WireGuard, but AFAIK it should be possible to use the built-in WireGuard for certain containers too.
  23. Thank you for the report; please update the container itself, it should be fixed by now.
  24. What do you mean by that? You can't disable CG-NAT; that's something your ISP has to change by giving you a real IP. It should only listen on IPv4, since the default value of "IpAddress" is "0.0.0.0", which tells the game to only listen on IPv4. Maybe ask your ISP if they block some UDP ports; another user reported a similar issue, but I can't reproduce this on my end, everything is just working fine with the default ports. What does the log tell you? It will tell you if a port isn't properly forwarded or, better said, not reachable from the outside world.
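
Sketch for item 4: two quick checks to confirm whether the installed driver actually sees the card, assuming a terminal on the Unraid host; the exact output wording varies between driver versions.

     # Show the card as it appears on the PCI bus (vendor:device ID, e.g. 10de:2684)
     lspci -nn | grep -i nvidia

     # Query the loaded driver; if this prints an error instead of a status table,
     # the driver does not currently see or support the GPU
     nvidia-smi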
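
Sketch for item 5: collecting the crash data the kernel log asks for, assuming it is run as root on the Unraid host before the NVIDIA kernel module gets unloaded (i.e. before a reboot).

     # Ships with the NVIDIA driver; writes nvidia-bug-report.log.gz
     # into the current directory for later analysis
     nvidia-bug-report.sh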
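
Sketch for item 7: verifying that the query port is actually published by Docker, assuming the container is named Valheim (a hypothetical name) and uses the default port mapping.

     # List the port mappings Docker created for the container;
     # expect UDP entries such as 2456/udp and 2457/udp here
     docker port Valheim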
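
Sketch for item 9: if you do want to limit a container's resources, standard Docker flags can go into the Extra Parameters field (Advanced View); the core range and memory cap below are placeholders, not recommendations.

     # Pin the container to host cores 0-3 and cap its RAM at 8 GiB
     --cpuset-cpus=0-3 --memory=8G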
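
Sketch for item 15: checking whether the CPU advertises AVX, assuming a terminal on the Unraid host.

     # Prints "avx" once if the CPU flag is present, nothing otherwise
     grep -o -w -m1 avx /proc/cpuinfo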
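
Sketch for item 17: running nvidia-persistenced once at boot via the go file, assuming the usual Unraid location /boot/config/go; the line only needs to be appended once.

     # Append the command to the go file so it runs a single time at every boot
     echo "nvidia-persistenced" >> /boot/config/go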
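
Sketch for item 18: the variable from the linked PhotoPrism transcoding docs, written here as it would look on a manual docker run; in the Unraid template it is simply an environment variable on the container.

     # Tell PhotoPrism to use the NVIDIA ffmpeg encoder for transcoding
     -e PHOTOPRISM_FFMPEG_ENCODER="nvidia"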
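
Sketch for item 21: the pieces typically needed to hand an NVIDIA GPU to a container when the Nvidia driver plugin is installed; the GPU UUID below is a placeholder, your own is shown on the plugin's settings page.

     # Extra Parameters (Advanced View):
     --runtime=nvidia

     # Container variables:
     NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
     NVIDIA_DRIVER_CAPABILITIES=all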