ich777

Community Developer
Everything posted by ich777

  1. Yup, that's why I would recommend that you install your DNS containers in a LXC container. I plan to do a write-up on this in the next few weeks, because you don't need Host Access with a LXC container, and in a LXC container you can run, like me for example, Unbound, AdGuard and SteamCache all on one IP and even get the benefit of DoH <- AdGuard is able to utilize DoH OOB, but of course you could also use PiHole if preferred. Please restart the Docker service one time and Host Access should work. Have you set the port like mentioned above?
  2. Please enable it and see if it helps. If not, let me know and I will look into it. I've now tried a fresh install on a second machine and it works just fine OOB, even with cgroup v1. Can you post your Diagnostics please? I think something is messing with the cgroups on your system... No issue over here: ...even on cgroup v1
  3. Can you ping Unraid from both of the containers? In my container you first have to install ping by opening up a Docker console and copy-pasting the following:

     apt-get update
     apt-get install iputils-ping

     ...after that you can ping from my container. Have you tried it yet with Intra on Android or DNSCloak on iOS? What do you get back when you enter the URL? You actually have to use 192.168.178.66:8083/dns-query with the appropriate port, because the server listens on that port. Also, I really can't help with Nginx Proxy Manager because I've never used it. You don't have to add a single port, since if you run a container on Custom: br0 all ports are exposed; you only have to specify ports in the template if you run it on a bridge network.
  4. What IP did you assign to the DoH-Server? Have you read the linked GitHub README.md yet? Did you enable Host access on Unraid too?
  5. Can you please be a bit more specific? Where do you click start? Can you maybe post screenshots from what isn't working exactly? On what Unraid version are you? Can you maybe post your Diagnostics? Have you read the first page on how to use LXC? Have you seen that you have to enable cgroup2 on Unraid for some distributions?
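For context, on Unraid 6.11.x and earlier cgroup v2 is opt-in and (if I remember the flag correctly) gets enabled by adding unraidcgroup2 to the kernel append line on the flash drive. A sketch of what the relevant part of /boot/syslinux/syslinux.cfg would look like, assuming the stock default boot entry:

```
label Unraid OS
  menu default
  kernel /bzimage
  append unraidcgroup2 initrd=/bzroot
```

A reboot is needed for the change to take effect.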
  6. Can you please share your configuration from the DoH Server? Usually I recommend putting the DoH Server on br0. Have you also read the README.md over on GitHub? There are detailed instructions on how to get it working: Click
  7. Do you have the RadeonTOP plugin installed? There is basically nothing I can do about that, but maybe also check if you have blacklisted amdgpu on your system... This has nothing to do with the Nvidia Driver plugin anymore, and I'm not too familiar with the GUI mode on Unraid because I don't see a point in running Unraid like that myself; I know there might be some use cases but, as said, not for me... I can only think that the driver won't load because of the above-mentioned reasons. The desktop environment won't do much in terms of performance, since this is a really lightweight task even for a T400... If you someday use Jellyfin, you could already use your AMD iGPU for transcoding in the container. Not too sure about Plex, but I wouldn't bet that they have it ready yet...
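As a quick way to check the blacklist point above, here is a minimal sketch that greps the modprobe config for an amdgpu blacklist entry. The /boot/config/modprobe.d path is an assumption about where Unraid keeps custom modprobe files; adjust it if yours differ:

```shell
#!/bin/bash
# Look for a "blacklist amdgpu" line in the modprobe config files.
# MODPROBE_DIR is an assumption; on Unraid custom modprobe files
# usually live under /boot/config/modprobe.d.
MODPROBE_DIR="${MODPROBE_DIR:-/boot/config/modprobe.d}"

# -r: recurse, -q: quiet, -s: no error if the directory is missing
if grep -rqs '^blacklist[[:space:]]\+amdgpu' "$MODPROBE_DIR"; then
  AMDGPU_BLACKLISTED=yes
else
  AMDGPU_BLACKLISTED=no
fi
echo "amdgpu blacklisted: $AMDGPU_BLACKLISTED"
```

If this prints "yes", remove the blacklist line and reboot before testing again.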
  8. Yes and no. The reason why it is like this for the legacy driver is that I only compile the latest legacy driver that is available when a new Unraid version is released; in this case Nvidia driver version 470.141.03 was the latest legacy driver when Unraid 6.11.5 was released on November 3rd 2022: (BTW 5.19.17 is the Kernel version from Unraid 6.11.5, and this version lets the plugin identify exactly which Unraid version a user is running and which driver needs to be downloaded, since the driver depends on the Kernel version.) This driver should run just fine with all cards that need the legacy driver. On the other hand, for stable Unraid versions I compile every new Nvidia driver (for recent cards) that is released in the life cycle of a specific Unraid version; as you can see for 6.11.5 there are a lot since November 3rd: If you want to use this card in a VM then please uninstall this plugin; this plugin is only meant for using your Nvidia card in one, or even multiple, Docker container(s). See the first post of this thread:
  9. What is wrong with driver version 470.141.03? From what I see in your Diagnostics the driver is loaded and your card is recognized just fine. BTW, if you want to use this card for hardware transcoding, that's a pretty bad choice because it doesn't even support h265 (HEVC). The screenshot that you've posted above is for another Unraid version too...
  10. This is caused by the implementation in the BIOS and there is mostly nothing I can do about it... However, may I ask why you want to use the iGPU for the console or, better said, the Unraid GUI output? It wouldn't make much of a difference if you, for example, disabled the iGPU and displayed it through the Nvidia GPU; of course you would then have to set disable_xconfig back to false.
  11. On what Unraid version are you? Please post your Diagnostics. May I ask why you need that specific driver?
  12. Please run this command from a Unraid Terminal: sed -i "/disable_xconfig=/c\disable_xconfig=true" /boot/config/plugins/nvidia-driver/settings.cfg and reboot after that.
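To illustrate what that sed does: the /…/c\… form replaces every line matching disable_xconfig= with the text after c\. Here is a small demo run against a throwaway file instead of the live settings.cfg (the file contents are made up for the demo):

```shell
#!/bin/bash
# Demo of the sed edit on a throwaway file instead of the real
# /boot/config/plugins/nvidia-driver/settings.cfg
cfg="$(mktemp)"
printf '# demo settings file\ndisable_xconfig=false\n' > "$cfg"

# "c\" replaces the whole matching line with the given text
sed -i "/disable_xconfig=/c\disable_xconfig=true" "$cfg"

RESULT="$(cat "$cfg")"
echo "$RESULT"
rm -f "$cfg"
```

The other lines in the file are left untouched; only the disable_xconfig line is rewritten.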
  13. Glad that your server is now back up and running. Again, I don't think that the Nvidia Driver plugin caused the issue; please keep me updated if installing the plugin changes anything.
  14. As far as I know, no; hope that answers your question. A few things to keep in mind: even if the M-Chip powered Mac Minis draw only about 40W, you have to add power for external storage devices if you want to use one as some kind of NAS, and that's maybe why someone recommended an Odroid SBC; for a low power NAS such a board makes much more sense because of the built-in SATA ports and expandable storage.
      Another thing is that not every application on DockerHub has an ARM version, and a translation layer like Rosetta for the M-Chips on Linux would have to be built first to support x86_64. Of course you could maybe use QEMU and run a VM on your M-Chip powered Mac Mini for your x86_64 Docker applications, but that kind of defeats the purpose of Docker. Besides that, there are also multiple ARM revisions out there, v7 and arm64 being the most common ones; every application has to be compiled for each individual ARM version and a Docker container created for it. Not to mention that the M-Chips also have some kind of special sauce (instructions) which could maybe be used to further improve performance <- since there is no documentation on these chips, no one knows how they work and everything has to be reverse engineered.
      I looked into that myself and into what is possible with somewhat more powerful ARM hardware than a RaspberryPi on Linux, but it turns out that you can't do much because of lacking Linux support (especially Rockchip). Transcoding is a bit rough (no support for the SBC that I used back then), and the container maintainers would need to implement every codec/dependency to support each individual ARM GPU <- this is, in my opinion, a really convoluted space.
      ARM is cool tech indeed and can of course be power efficient, but consumer-grade powerful ARM hardware that is truly Linux compatible has not been released as of today AFAIK; please correct me if I'm wrong about that. The exception is of course the M-Chip powered Macs, but these are not really, or better said not fully, Linux compatible, at least not at the time of writing. From my perspective, ARM needs to establish itself a bit more and become more widespread in the consumer space to be a viable option, especially for some kind of NAS with lots of storage and transcoding capabilities. I would really like to hear your thoughts about that; please keep in mind that these are my personal opinions on ARM powered hardware and how far it has come as of the time of writing.
  15. I would recommend that you create a post in the General Support sub-forum, since I'm not too familiar with Supermicro boards. I really can't think why the Nvidia Driver plugin should cause such an issue. No changes to the hardware or software were made? Does the motherboard have an onboard video card where it maybe outputs the console?
  16. Did you enable Flash Backup in the My Servers plugin? If yes, you can restore your Flash backup like this: Click Do you have a backup of your USB Boot device? If yes, you can simply create a new USB Boot device with the USB Creator Tool for Unraid and replace the whole "config" folder on the new USB Boot device with the one from your backup. You can also try to download the latest release (as of the time of writing 6.11.5) from here and replace only the bz* files in the root of the USB Boot device with the ones from your archive; maybe this will fix the issue, but I would recommend that you buy a new USB Boot device since it is likely that something corrupted the files. Did you by any chance change anything in the BIOS or in the config on your USB Boot device? I would also recommend that you go through this thread for a new USB Boot device (I personally recommend the Transcend JetFlash 600 USB2.0 32GB as a Flash drive).
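The bz* replacement step above boils down to copying only the boot files and leaving config alone. Here is a sketch using throwaway directories in place of the extracted release zip and the USB root (all paths are stand-ins for the demo; on a real stick the destination is the root of the USB Boot device):

```shell
#!/bin/bash
# SRC stands in for the extracted Unraid release zip,
# DEST for the root of the USB Boot device (normally mounted at /boot).
SRC="$(mktemp -d)"
DEST="$(mktemp -d)"

# fake a release archive and an existing boot stick for the demo
touch "$SRC/bzimage" "$SRC/bzroot" "$SRC/bzroot-gui" "$SRC/bzmodules" "$SRC/bzfirmware"
mkdir -p "$DEST/config"      # your config/ folder must stay untouched
touch "$DEST/bzimage"        # old boot file that will be overwritten

cp "$SRC"/bz* "$DEST"/       # replace only the bz* files in the root
ls "$DEST"
```

Note that config/ is deliberately not touched; that folder holds your license key and all settings.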
  17. I've pushed a fix to the plugin and it should now work as intended. There was an issue in the old package detection routine which is now fixed; please update the plugin and upgrade to Unraid 6.11.5, it should work right OOB.
  18. I also have these moments... ...but in all seriousness, the plugin, or better said the driver, should not prevent your server from booting. I have a suspicion that we may be dealing with a bad boot drive here, or another issue related to the boot device. As @alturismo said, please remove the plugin manually; since your server obviously isn't booting, do this from another machine and delete this file too: /config/plugins/nvidia-driver.plg Does it go immediately to a blinking cursor and display nothing? On what machine are you running Unraid?
  19. Try to set "Run as Root" to "true" in the template. You could also try to fix the permissions on the disk, if these are only media files which require the user and group 99:100, by issuing these two commands from an Unraid Terminal:

      chown -R 99:100 /mnt/disks/DISKNAME
      chmod -R 777 /mnt/disks/DISKNAME

      (of course replace DISKNAME with the exact disk name)
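A harmless way to see what those two commands do is to run them against a scratch directory instead of a real /mnt/disks mount. 99:100 is nobody:users, the default owner for Unraid shares; the chown step is skipped here when not running as root, since changing ownership needs root privileges:

```shell
#!/bin/bash
# Scratch directory standing in for /mnt/disks/DISKNAME
DISK="$(mktemp -d)"
mkdir -p "$DISK/media"
touch "$DISK/media/movie.mkv"

# chown needs root privileges, so only attempt it as root
if [ "$(id -u)" -eq 0 ]; then
  chown -R 99:100 "$DISK"    # nobody:users, default Unraid share owner
fi
chmod -R 777 "$DISK"         # world read/write, as for media shares

PERMS="$(stat -c '%a' "$DISK/media/movie.mkv")"
echo "permissions: $PERMS"
```

The -R flag applies the change recursively, so every file and folder below the disk root is covered.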
  20. As said above, I will look into it ASAP and let you know.
  21. As said, I can't test that right now because I can't reboot my server currently. As said above, running a Firewall or a Router in a VM is never recommended (at least from my side) because it can have some security implications and can cause issues like the ones you are experiencing.
  22. You can use RCON, at least when the game server supports it, and write custom scripts, but I think that's not your goal here. Sorry, I only provide the basic functionality; everything further is up to the user. The containers of course all support modding and so on, but I can't support that because I can't know every mod, and even if I play some of the games, I can't know how to mod every game. If you find something to monitor your game server, let me know; I've never had the need to monitor my game servers closely because I only play with a few friends on them (if I have time to play games... ). Who told you to do that? Sure, it maybe doesn't hurt, but I know a few people with 117+ days of uptime on Unraid and even more... I would recommend restarting the game server on a daily basis with the command provided above, since the containers pull updates from the dedicated server itself through SteamCMD on every start/restart.
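One way to automate that daily restart is a scheduled job; on Unraid this is typically set up through the User Scripts plugin, but as a plain cron line it would look something like this (the container name is a placeholder):

```
# Hypothetical cron entry: restart the game server container every day
# at 05:00 so SteamCMD pulls updates on the next start
0 5 * * * docker restart my-gameserver
```

Pick a time when nobody is playing, since the restart kicks everyone off the server.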
  23. Exactly, the zip is only downloaded when it isn't found; this is why I asked you to execute the command and post the output here. You have to understand that nothing in the installation routine is different between 6.11.0 and 6.11.5, or even if you install the plugin on 6.10.x
  24. There is your error; this is caused because you don't have an active Internet connection at boot:

      Jan 23 21:50:57 Ajian-unraid root: plugin: creating: /boot/config/plugins/unraid.iSCSI/packages/libffi-3.3-x86_64-3.txz - downloading from URL https://raw.githubusercontent.com/SimonFair/unraid.iSCSI/main/packages/libffi-3.3-x86_64-3.txz
      Jan 23 21:51:07 Ajian-unraid root: plugin: downloading: libffi-3.3-x86_64-3.txz ...#015plugin: libffi-3.3-x86_64-3.txz download failure: Network failure

      (This is actually caused because you run OpenWRT on Unraid <- this is something that I never recommend; a Firewall or a Router should always run on a dedicated machine to avoid such issues.) I really can't imagine why this is working on Unraid 6.11.0; the process of installing the plugin is exactly the same, and the package will also be downloaded if it's not found. Please execute this command from an Unraid Terminal and post the output here:

      ls -la /boot/config/plugins/unraid.iSCSI/*/*
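The download-if-missing behaviour described above boils down to a simple file check. Here is a sketch with stand-in paths and a stubbed download step (the real routine fetches from the GitHub URL shown in the log):

```shell
#!/bin/bash
# PKG_DIR stands in for /boot/config/plugins/unraid.iSCSI/packages
PKG_DIR="$(mktemp -d)"
PKG="libffi-3.3-x86_64-3.txz"

fetch() {
  # stub for the real wget/curl download
  echo "downloading $1"
  touch "$PKG_DIR/$1"
}

download_if_missing() {
  if [ -f "$PKG_DIR/$1" ]; then
    echo "found $1, skipping download"   # no network needed
  else
    fetch "$1"                           # only hit when no cached copy exists
  fi
}

download_if_missing "$PKG"   # first run: downloads the package
download_if_missing "$PKG"   # later runs: uses the copy on the flash drive
```

This is why the error only shows up when the package is missing from the flash drive and the network is down at the same time.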
  25. Yes, exactly; the PoE converter basically just turns PoE into 5V USB. Since I don't have a power outlet or a USB port anywhere near there to plug into, that was the logical solution for me.