ich777 · Community Developer
Posts: 15,759 · Days Won: 202

Everything posted by ich777

  1. What do you mean by getting disabled? In the Mega app or on Unraid?
  2. Can this be the cause of the issue? I'm not able to reproduce that over here. Have you upgraded Unraid recently or changed something in the share settings?
  3. At night, sometimes, maybe… 😅 I just looked into that and I think this is an odd choice of name for a debug release: (in my humble opinion not even worth extracting). By the way, I build a new package from time to time based on the latest master branch if they haven't made a new release in a long time… Maybe yes, but don't expect everything to work fully on Linux, since they are clearly more focused on Windows to compete with the other two main players. Still, it is working well enough in my opinion for now, and transcoding from/to h264/h265 is also blazing fast.
  4. Is this folder mounted or shared via NFS by any chance?
  5. This is strange, I just updated the container a few days ago, maybe that's the cause of the issue. I will look into it, please give me a few days. So only this one folder is affected, correct?
  6. Emby is also working fine. I tested ARC GPUs a long time ago, but don't expect that everything is working, because this is basically Intel's first dGPU project in a long time (which is actually good) and the main focus currently lies definitely on other things (mainly Windows). AV1 encoding is also not working on Linux; decoding works but encoding does not... Because an iGPU is not a dGPU, and the iGPU from the N100 is Xe-LP and not Alchemist like the dGPU A380. For example, the ARC GPUs report their temperatures differently and you can read them by issuing `sensors` from the terminal. Can you please tell me where you got that information from? All the official tags are here.
  7. Please look at this repository, this is basically a Docker container which downloads a precompiled Kernel for your running Unraid version so you can then compile whatever you want. I can't support that officially because it's too risky for me: if something breaks or is not working properly it can be a nightmare in terms of support. I'm also not sure if this violates the Nvidia EULA, but from what I read it should not... Seems like it is possible; you can for example try to compile the open source driver first with my script from my Nvidia Driver plugin repository. The main issue is that you have to create a package (which is also done in the compile_opensource.sh file) and install it on every boot, and even if you upgrade the Unraid version you have to recompile the package for the new Kernel (see the driver-package sketch after this list). I think with the above linked Docker container it should be pretty simple. Just a question: wouldn't it be simpler to pass the card through to a VM and then install this driver in the VM?
  8. And what does the Prometheus page tell you? Is the exporter online or offline? The pihole exporter is definitely working because you get output from /metrics. I think there is something wrong with the access, but I really can't tell you where the issue in your case is…
  9. Have you restarted your Docker service yet?
  10. The output is correct. Can you try to restart the Docker service? It seems that Host Access is not working properly. You can easily test that by opening up a container terminal (from Grafana or Prometheus) and trying to ping Unraid (see the host-access sketch after this list).
  11. Is host access enabled? What does 192.168.111.200:9617/metrics give you back if you open it in a browser?
  12. I think you have to wait until the new 6.13 beta/RC is released, since your CPU/iGPU is so new that it isn't fully supported by the Kernel yet, which causes these crashes.
  13. No, that hasn't been necessary for a long time. Intel-GPU-Top is only a binary which is installed by the plugin, and if you don't call it or don't go to the Dashboard it won't be executed either. I don't think that's related, since as said before Intel-GPU-Top is only a binary and does nothing on its own. I don't remember anyone having that issue.
  14. I'm not 100% sure what's going on on your system, but it's working fine on Unraid. I can only think of a permission issue or something similar. Are your distribution and Docker up to date?
  15. Is the system now working? If not, please boot into safe mode or connect the USB flash device to your local computer and manually delete the radeontop .plg file from the folder /boot/config/plugins/ (or \config\plugins\) (see the plugin-removal sketch after this list). Can you please post your Diagnostics?
  16. I think those are the problematic values. Are you sure that this UID and GID exist on your system and that they are accessible for Docker? These values are usually for Unraid; on Debian-based systems the default is UID 1000 and GID 1000 (see the UID/GID check sketch after this list).
  17. Can you please share your docker run command? It seems like some permissions are off and the container is running into multiple race conditions which then ultimately lead to a crash.
  18. Oh sorry, I overlooked your post. Unraid uses ttyd to create a terminal session in the browser. However, if a user experiences such an issue I would recommend that they connect natively through SSH with something like PuTTY, the Windows Terminal, or whatever they prefer, rather than the built-in web terminal, since that always also depends on the browser and so on. So in conclusion, I think it's not an issue with CoreFreq; it's more of an issue with ttyd.
  19. Is this the feature where you log in to your account and it should sync? If yes, this isn't supported anymore with Chromium; Google only allows that from a real Chrome browser.
  20. Please update the container itself and add a variable to the template: Key: NOVNC_TITLE, Value: Your .-_ Title (only alphanumeric characters, spaces and . - _ are allowed). See the NOVNC_TITLE sketch after this list for an example.
  21. Please create a variable in the template: Key: CONNECTED_CONTAINERS, Value: 27286 (this will enable the Connected Containers service). For my containers it's pretty simple: also create a variable in the templates: Key: CONNECTED_CONTAINERS, Value: 127.0.0.1:27286. For these containers you have to place one script on your server, let's say at /mnt/user/appdata/scripts/connected-containers-alpine.sh (make sure the script is executable on the host), and then mount it into the container like: Container Path: /etc/cont-init.d/91-connected-containers, Host Path: /mnt/user/appdata/scripts/connected-containers-alpine.sh, Access Mode: Read Only. With that, all containers should restart with OpenVPN-Client when you restart the OpenVPN-Client container. Please make sure that all containers have: in them so that the container actually restarts. Of course all of that will only work if you connected them with: --net=container:OpenVPN-Client (see the Connected Containers sketch after this list).
  22. I'll look into the containers to see what needs to be done.
  23. You don't have to switch them over, I think. Do you route all containers through the VPN container? Only the ones that you route through the VPN container need to be restarted.
  24. The main issue with that is that when the container restarts, all the containers which are routed through it need to be restarted too, and that's why I came up with the "Connected Containers" idea. I just need the container names and the repository/maintainer, since I have to check whether that can be implemented easily; if you are using my *arr containers then it is pretty simple.
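
Regarding post 7 (driver-package sketch): a minimal sketch of what installing a self-compiled open source driver package on every boot could look like. The package path and filename are assumptions for illustration (the actual package produced by compile_opensource.sh may be named differently), and the snippet assumes it is called from /boot/config/go.

```bash
#!/bin/bash
# Hypothetical snippet for /boot/config/go (runs on every Unraid boot).
# Assumes the package built by compile_opensource.sh was copied to a
# made-up location on the flash drive; adjust the path/filename to your build.
PKG="/boot/config/packages/nvidia-open-$(uname -r).txz"

if [ -f "$PKG" ]; then
    # installpkg is the standard Slackware package installer that Unraid ships with
    installpkg "$PKG"
else
    echo "No driver package for kernel $(uname -r) - recompile it for this Unraid release" >&2
fi
```

This also shows why the package has to be rebuilt after every Unraid upgrade: the filename is tied to the kernel version, so the check fails on a new kernel until a matching package exists.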
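
Regarding posts 10 and 11 (host-access sketch): a quick way to check both ends. The IP and port are the ones quoted in the posts; the container name Grafana is just an example, and if the image ships without a ping binary you can fall back to wget or curl inside the container.

```bash
# Does the pihole exporter answer at all? (run from any machine on the LAN)
curl -s http://192.168.111.200:9617/metrics | head

# Can a container reach the Unraid host? (requires Host Access for custom networks)
# Replace "Grafana" with the actual container name on your system.
docker exec -it Grafana ping -c 3 192.168.111.200
```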
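
Regarding post 15 (plugin-removal sketch): removing the plugin file by hand from the Unraid terminal in safe mode, or from another computer with the flash drive attached. The exact .plg filename is an assumption, so list the directory first.

```bash
# List the installed plugin files first - the exact radeontop filename may differ
ls /boot/config/plugins/*.plg

# Then remove the radeontop one and reboot
rm /boot/config/plugins/radeontop.plg
```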
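
Regarding post 16 (UID/GID check sketch): how you could verify on a Debian-based host that the UID/GID you hand to the container actually exist.

```bash
# Show the UID/GID of the current user (typically 1000/1000 on Debian-based systems)
id

# Check whether a specific UID or GID is known to the host
getent passwd 1000   # user with UID 1000
getent group 1000    # group with GID 1000
```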
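
Regarding post 20 (NOVNC_TITLE sketch): on Unraid the variable is added through the template, but on plain Docker the same thing can be passed with -e. The image and container names below are placeholders; use the actual repository/tag of the container in question.

```bash
# Placeholder names - substitute the container name and repository/tag you actually run
docker run -d --name=my-novnc-container \
  -e NOVNC_TITLE="My Custom Title" \
  ich777/example-novnc-image
```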
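
Regarding post 21 (Connected Containers sketch): a sketch of how the pieces from that post could fit together on plain Docker, outside of Unraid templates. As I read the post, the 27286 value goes on the VPN container itself and the 127.0.0.1:27286 value on the routed containers; only the paths and values quoted in the post are taken from it, the image and container names are placeholders.

```bash
# VPN container: enables the Connected Containers service on port 27286
docker run -d --name=OpenVPN-Client \
  -e CONNECTED_CONTAINERS=27286 \
  ich777/example-openvpn-image   # placeholder image name

# A container routed through the VPN: shares the VPN container's network
# namespace and, for non-ich777 images, mounts the helper script read-only
docker run -d --name=my-app \
  --net=container:OpenVPN-Client \
  -e CONNECTED_CONTAINERS=127.0.0.1:27286 \
  -v /mnt/user/appdata/scripts/connected-containers-alpine.sh:/etc/cont-init.d/91-connected-containers:ro \
  example/alpine-based-image   # placeholder image name
```

With a setup along those lines, restarting OpenVPN-Client should also restart the routed container, which is the point of the Connected Containers service.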