Everything posted by ich777

  1. This mode is notorious for not working, even if you change all the settings in the BIOS to the highest setting. That indicates the above-mentioned mode isn't working and you can only use the next mode below.
  2. Did you change your game client to the beta branch? What exactly is not working? Do you have at least a log or something like that?
  3. I don't think that anything other than the vanilla server will actually ensure crossplay is working. Have you tried yet to disable crossplay and connect to the server with a PC client with BepInEx installed and enabled? Please remember most mods are client-side only, and if you have server mods installed the client needs them too, or at least BepInEx installed. Please also keep in mind that modding is always up to the user and in general I can't give support for mods because I simply can't know how every mod works.
  4. Please post your Diagnostics, without them I can't know what's going on.
  5. Oh, sorry, now I get it... Steps four and five are for the Docker container, not the plugin, sorry for that... The Prometheus Docker container that is needed is simply the bridge between the Prometheus Node Exporter plugin and Grafana and, strictly speaking, has nothing to do with the plugin itself. Yes, sure, I'll agree on that, but that's something I can't change since this is not my Docker container. I've often thought about removing those steps (including installing Grafana) and only advising users to install Grafana and Prometheus themselves, because those steps gave me issues in the past: on some systems the Prometheus Docker container simply wouldn't work (which is out of my control). The plugin can't install a configuration file for a Docker container because, first, I don't like changing things that are not directly controlled by the plugin, and second, the plugin can't know where the configuration for your Prometheus Docker container is; there are simply too many variables (name, paths, ...). Usually you don't have to restart the plugin itself because the config you are changing relates to the Docker container, not the plugin. Yes, it's actually exactly the same on my system, but TBH I've never noticed it because I've always run it as a service. I've already found another report here on the forums, but that's related to the Docker container. Please keep in mind that Unraid doesn't use the default md driver; instead it uses its custom md driver for the Array. I think the issue is caused by that, which is also mentioned in the bug report for the Docker container. Anyways, thank you for the report, I will look into it when I have a bit more spare time. For now you can, of course if you want to, disable the mdadm metrics entirely.
Simply edit your "/boot/config/plugins/prometheus_node_exporter/settings.cfg" file so that it looks like this: start_parameters=--no-collector.mdadm Of course you have to restart your server because, as you've mentioned earlier, there is no restart function implemented in the plugin (passing arguments to the plugin itself is more of a niche use case). However, if you want to apply it without a reboot, first kill the running exporter process with something like: kill $(pidof prometheus_node_exporter) and then start it in the background as usual: echo "/usr/bin/prometheus_node_exporter --no-collector.mdadm" | at now or, if you want to test the command first without starting it in the background: /usr/bin/prometheus_node_exporter --no-collector.mdadm Hope that helps for now!
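The settings-file step above can be sketched as a tiny script. Note the `/tmp` path is used here only so the sketch runs anywhere; on Unraid the real file is the `/boot/config/plugins/prometheus_node_exporter/settings.cfg` path mentioned above:

```shell
# Write the exporter settings file disabling the mdadm collector.
# CFG points to /tmp only for illustration; on Unraid it would be
# /boot/config/plugins/prometheus_node_exporter/settings.cfg
CFG=/tmp/settings.cfg
printf 'start_parameters=--no-collector.mdadm\n' > "$CFG"

# Show the file to verify it contains exactly the intended start parameter
cat "$CFG"
```

After writing the file, the exporter picks the flag up on the next start (reboot, or the kill/at-now sequence above).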
  6. Why are you doing that by hand? What settings did you change? I installed the plugin and have no issue whatsoever and can connect to it just fine. The plugin is only meant to be installed and that's it. You can however run it with custom parameters but I think in your case something is wrong if you changed some settings, that's my best guess.
  7. Currently not, because that's not how iSCSI should be used, but I will look into implementing a Reconnect button and report back. Please give me a few days.
  8. I will try that after the weekend and report back.
  9. Are you sure those are actually based on ZFS? On a ZFS pool, not every dataset necessarily has to be a ZFS filesystem; you can also create subvolumes with any filesystem you like. Otherwise, as already written above, I would strongly recommend you open an issue on the OpenZFS GitHub -> you'll get help there. EDIT: Besides, the dataset /mnt/cache is shown to you, so everything below it is also on ZFS; your folders like Nextcloud etc. are simply not their own datasets. Here's another link for further reading. EDIT2: Is it perhaps possible that you've already played around with other tools like ZFS Master etc.? Normally only the original mount point is shown if you don't explicitly create datasets.
  10. Have you tried yet to remove the Nvidia Driver plugin and see if you get crashes without it installed too (after removing, you have to reboot)? If you've done this already, I would strongly recommend that you create a post in the General sub-forums here on the forums.
  11. Please check your BIOS. Are you sure that you've got Above 4G Decoding and Resizable BAR Support enabled in your BIOS?
  12. Please read up on what that command actually is and does: click. It lists everything, and you only posted part of the listing above; are you sure they aren't somewhere in between? And if you run it the way you did, it lists everything, not just a single ZFS filesystem. This is not a classic `ls`; you are playing with specific ZFS commands here, and you really have to be an expert if you want to understand all of it, because ZFS is really not a simple topic and is really extensive. If you play around with commands like these, read the documentation first to see exactly what they do -> see the link above. If you really have a problem with it, or have found a bug and something really isn't listed, you would pretty much have to open an issue in the OpenZFS GitHub repository, since ZFS on Unraid is based on it.
  13. A little bit more information would be helpful. Where do you get that error? I can't reproduce that.
  14. You have to select console mode (checkbox) for the Schedules in luckyBackup, otherwise it won't work (this is also mentioned in the description from the container).
  15. Did you do anything custom with your server? Please post your Diagnostics. Intel GPU TOP is working just fine on 6.12.0-rc2.
  16. That's definitely not the case... Some routers sometimes need to be restarted. Please delete the forwarding of the other ports, this is a security risk!!! I always provide the necessary ports in my templates and you never need to forward more ports, even if other sites suggest that; otherwise I wouldn't release the containers to the public. Another side note: even if you forward these ports they point to nothing (but it's still a security risk), because if they are not forwarded in the template they can't be reached anyway. Please trust me on that, I do my research and test every single container before releasing it.
  17. Why do you forward that many ports? That is definitely wrong... The only ports which need to be forwarded are the ones listed in the container template: 7777/UDP 15777/UDP 15000/UDP You don't need any other ports. I assume you are trying to connect to your public IP from your LAN, correct? If yes, I've had people where hairpin NAT doesn't work correctly and therefore they weren't able to connect through the public IP from their LAN, but from outside their LAN it was working fine. It could also be an issue with your ISP blocking some ports <- some providers do that, but this is a really rare case.
  18. You can see that yourself. `zfs list` is a command for ZFS itself, and it lists all existing ZFS pools, snapshots and volumes. Volumes are also, for example, Docker images. With BTRFS you also don't use `btrfs filesystem show -a /PFAD/ZUR/BTRFSDISK` instead of `ls`, do you (here `btrfs filesystem show -a /PFAD/ZUR/BTRFSDISK` is roughly comparable to `zfs list`)? `ls` is a UNIX shell command and does something entirely different, namely list directories.
  19. Are you trying to connect with your public IP or your local IP? Are you sure the correct ports with the appropriate protocol are forwarded in your Firewall? Have you yet tried if you can connect with your local IP?
  20. For the moment, if you want, you can run this command in a terminal on Unraid: sed -i 's/$DockerStopped = pgrep('\''dockerd'\'')===false;/exec("\/etc\/rc.d\/rc.docker status",$dummy,$DockerStopped);/g' /usr/local/emhttp/plugins/dynamix.docker.manager/DockerSettings.page After that the problem is solved. However, you would have to run the command again after every reboot on 6.12.0-rc2 (or earlier); the fix will be implemented.
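If you'd like to see what the substitution actually does before touching the real file, you can try it on a throwaway copy first; the `/tmp` path below is only for illustration, the real target is the DockerSettings.page path from the command above:

```shell
# Try the sed substitution on a throwaway file first; /tmp/DockerSettings.page
# is just for illustration, the real file is
# /usr/local/emhttp/plugins/dynamix.docker.manager/DockerSettings.page
F=/tmp/DockerSettings.page
echo "\$DockerStopped = pgrep('dockerd')===false;" > "$F"
sed -i 's/$DockerStopped = pgrep('\''dockerd'\'')===false;/exec("\/etc\/rc.d\/rc.docker status",$dummy,$DockerStopped);/g' "$F"

# The pgrep check should now be replaced by the rc.docker status call
cat "$F"
```

The replacement swaps the process-grep check for a call to the actual rc.docker service status, which is what corrects the GUI's wrong "Docker running" state.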
  21. The server is running fine if you see this line: Setting breakpad minidump AppID = 346110 Have you tried yet to query it in the Steam Server Browser (View -> Servers -> Server Browser -> Favorites -> Add Server -> Enter the IP:27015 -> Add -> Refresh -> Refresh)? Yes, with the appropriate protocol. You usually don't need to forward the RCON port and I also encourage you not to do it (RCON is completely unencrypted).
  22. Please try to check the log from the cron jobs itself; you see the path to the logs in the output from 'crontab -l'. Do something like 'cat /root/.luckyBackup/logs/default-LastCronLog.log' Btw: you can see the crontab also in the GUI on the Schedules page. I'm really curious if it's related to the network shares, but I don't think so...
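A minimal sketch of checking the last cron run; the `/tmp` path and the sample log line are stand-ins so the sketch runs anywhere, while the real log would live under the luckyBackup logs directory mentioned above:

```shell
# Sketch: inspect the most recent cron log. LOG points to /tmp only for
# illustration; the real file would be something like
# /root/.luckyBackup/logs/default-LastCronLog.log
LOG=/tmp/default-LastCronLog.log
printf 'rsync finished: 0 errors\n' > "$LOG"   # stand-in for real cron output

# Show the last line of the log to check the most recent run's result
tail -n 1 "$LOG"
```

If the log shows rsync errors (or is missing entirely), the schedule never ran in console mode, which points back to the console-mode checkbox mentioned in post 14.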
  23. This will be fixed in one of the next Unraid versions! It's now confirmed: it's a display error, and the GUI thinks Docker is running even though the service isn't. Could you please bookmark this thread and mark it as solved when the new Unraid version is released? Or, if you want, you can do that right away, up to you.
  24. Please install the updated plugin from @SimonF with this link until @b3rs3rk merges his Pull Request (simply go to Plugins -> Install Plugin -> paste the link -> click Install): https://raw.githubusercontent.com/SimonFair/gpustat-unraid/master/gpustat.plg It also has some cool improvements like multi-GPU support.