Everything posted by saarg

  1. Why are you making something simple into something complicated? Linuxserver has one place where we document things, and that is GitHub. We are not Unraid specific, so to simplify things we keep the documentation in one place. The container has one task, and that is to run ddclient to monitor the public IP and update it if needed. Thank you for your suggestions, but the container will not be changed to how you want it to be.
  2. Thanks for notifying us about the new version. I'll see if I can get it running today and, if it works, push the new build.
  3. 1 & 2: The config file in your appdata folder. You don't have to think about any other files to edit. 3: I won't see a reason to not keep the container running. It's not using much resources. Schedule a stop and start script using cron if you are worried it eats to much resources. There is a plugin for that. The container runs the ddclient binary which takes care of checking for a new IP as configured in the config file.
  4. No, you need to install the drivers on the host. So you need to install the dvb build to get it working.
  5. I think it might have to do with your go file and where you placed the iGPU stuff. Probably emhttp binds the iGPU to vfio, and then the driver isn't available and therefore no /dev/dri. Placing the iGPU part last in the go file would probably have fixed it as well.
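    As a rough sketch, and assuming an Intel iGPU using the i915 driver, a go file (/boot/config/go) with the iGPU part placed last could look like this (the chmod is only an example of opening up the devices for containers):

      #!/bin/bash
      # Start the Management Utility first
      /usr/local/sbin/emhttp &

      # Load the iGPU driver and open up the render devices afterwards
      modprobe i915
      chmod -R 777 /dev/dri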
  6. I'm afraid I can't help much with your issue. I haven't tried habridge after I made the container and I don't have Alexa. I do have a woman by another name, but she is incompatible with habridge unfortunately... I do know that it can be a little bit tricky to connect the Amazon device, so have you checked the habridge github page for troubleshooting info?
  7. Please post the diagnostics after rebooting the server, but before doing anything. Also try to move the modprobe and chmod after the emhttp entry. Do you have any VMs using the iGPU?
  8. The shortage is caused by the hoarding and I guess most of the hospitals don't have a large stock as they weren't prepared for this. The whole world wants to buy, so it's no wonder it's hard to get protective gear.
  9. I have no issues here on a dual CPU board. You should leave core 0 and its HT core for Unraid. You have to be more specific about your setup and what is happening. Have you tried pinning cores to the container?
  10. Without you supplying anything about what you actually have in your go file and the outputs of the commands you run, it's hard to help. Post the output of ls -al /dev/dri after the array has started and again after applying the command, while the array is still running. Also post the output of any commands you run, and of course the command you used. Post the content of the go file regarding the setup of the iGPU as well. Remember that there are now some restrictions on running commands from the go file.
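    For reference, on a box where the iGPU driver has loaded, ls -al /dev/dri usually shows something like the listing below (dates, sizes and group ownership will differ):

      crw-rw---- 1 root video 226,   0 Apr  1 10:00 card0
      crw-rw---- 1 root video 226, 128 Apr  1 10:00 renderD128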
  11. I'm not sure if the binhex plexpass works with nvidia transcoding. If you don't get it to work, try ours.
  12. Post your docker run command. Did you enable hardware transcoding in the plex settings?
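    For comparison, a run command for our Plex container with Nvidia transcoding usually contains something along these lines (the GPU UUID and paths are placeholders, so substitute your own):

      docker run -d \
        --name=plex \
        --net=host \
        --runtime=nvidia \
        -e NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx \
        -e NVIDIA_DRIVER_CAPABILITIES=all \
        -e VERSION=docker \
        -v /mnt/user/appdata/plex:/config \
        -v /mnt/user/Media:/data \
        linuxserver/plex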
  13. You might be able to change the default folder, but then you have to check the code-server documentation. This isn't multiuser software, so /config will always be available to the user using code-server. If that feels wrong to you, you will have to find an alternative to code-server.
  14. I can't help with the Handbrake container, but please use our Plex container to get it working there first. If you have Plex Pass, that is.
  15. Which GPU do you use? If it's an iGPU, you have to add the chmod command in the go file so it's applied every boot. It doesn't make sense that /dev/dri doesn't exist with the array started. Do you pass through the gpu to a VM?
  16. There isn't really a need to do it this way anymore. If the USB card has its own ID (four digits:four digits), you can stub the card in the syslinux.cfg file and then choose the card in the Other PCI Devices part of the VM template. If the USB card shares an ID with a USB controller on the motherboard, you have to use the new method of stubbing the card using a config file and the PCI number.
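    Rough sketches of both methods (the ID and PCI address below are made up, so substitute the ones from your own system devices page):

      # syslinux.cfg method: add vfio-pci.ids to the append line
      append vfio-pci.ids=1912:0014 initrd=/bzroot

      # config file method: /boot/config/vfio-pci.cfg with the PCI number
      BIND=0000:03:00.0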
  17. You can change back to your old cache drive, copy the content to the array, and then copy it to the new cache drive after swapping the cache drives again. Just remember to stop the Docker and VM services before swapping the drives.
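    Purely as an illustration (the share names are examples and the Docker and VM services must be stopped first), the copy could be done from the console like this:

      # old cache -> array (/mnt/user0 is the user shares without the cache)
      rsync -avh /mnt/cache/appdata/ /mnt/user0/appdata/
      rsync -avh /mnt/cache/system/ /mnt/user0/system/

      # after swapping to the new cache drive: array -> new cache
      rsync -avh /mnt/user0/appdata/ /mnt/cache/appdata/
      rsync -avh /mnt/user0/system/ /mnt/cache/system/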
  18. Squid is 50!

    Happy birthday @Squid Since I'm in the future, I can already wish you a happy birthday 🎂 I'm going to have a big party, as usual all alone, for my own birthday on Monday...
  19. I have to correct you there. We do have a crystal ball, but it's still in the repair shop waiting for a part that's not in stock.
  20. If you mean the docker log that pops up when you click the log icon in the webui, that is Unraid's department. But you can view the log on the command line with timestamps. It might only be possible if you tail the log, but try it out using the link below. https://docs.docker.com/engine/reference/commandline/logs/
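    For example, to follow a container's log with timestamps from the terminal (replace plex with your container name):

      docker logs -f -t plex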
  21. To make all GPUs available to one container you just set NVIDIA_VISIBLE_DEVICES to all. You then also need to adjust the XML to reflect all GPUs, and the slot id part in config.xml:

    <!-- Folding Slots -->
    <slot id='0' type='GPU'/>
    <slot id='1' type='GPU'/>

    In the above config I removed the CPU slot as I use the CPU for BOINC. If you want all three, add one more slot line.
  22. So, just so I understand what you want: you want the two containers to see just one GPU each? As far as I know, when setting only one specific device in the variable, the container should only see that one. I saw that you have two GPU devices in the xml you posted. Remove one and it will only use one of the GPUs. You might have to experiment with the ID to get the correct card.
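    A sketch of how that usually looks: list the cards and their UUIDs with nvidia-smi, then give each container its own UUID in the variable (the UUIDs below are placeholders):

      nvidia-smi -L
      # GPU 0: GeForce GTX 1070 (UUID: GPU-aaaaaaaa-1111-2222-3333-444444444444)
      # GPU 1: GeForce GTX 1660 (UUID: GPU-bbbbbbbb-5555-6666-7777-888888888888)

      # container 1
      NVIDIA_VISIBLE_DEVICES=GPU-aaaaaaaa-1111-2222-3333-444444444444
      # container 2
      NVIDIA_VISIBLE_DEVICES=GPU-bbbbbbbb-5555-6666-7777-888888888888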
  23. What did you set in the NVIDIA_VISIBLE_DEVICES variable for the containers?
  24. You still need the Nvidia build to use hardware transcoding with an Nvidia card. You probably confused Plex's hardware decode ability with the need for the Nvidia build. If you can't restore the database, your only option is to start from scratch.