Everything posted by ich777

  1. There are many ways to achieve what you want to do. You can change the network for each individual container manually through its config file. However, I would strongly recommend that you set the static IP address inside the container and not in the container's configuration file, because I had some issues in the past with certain distributions in my testing. Here is a quick tutorial on how to do that for a Debian based container (Bullseye+): Click It is also possible to create a VLAN, use another physical NIC for the container, macvlan, proxy and so on... You can read more about that over here: Click (just scroll down to where it says Network to get all the available options) If you need help with anything then please feel free to reach out again, I'm here to help.
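     A minimal sketch of the in-container variant for a Debian Bullseye+ container, assuming the interface is eth0 and a 192.168.1.0/24 LAN (the linked tutorial is the authoritative version, these addresses are examples):

         # /etc/network/interfaces inside the container -- example values
         auto eth0
         iface eth0 inet static
             address 192.168.1.60/24
             gateway 192.168.1.1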
  2. This path in the container would be: .../WINE64/drive_c/users/steam/AppData/Local Are you sure that these are server mods? I think there are both client and server mods out there for Astroneer. The path that you've mentioned above is for the client savegames/mods, I think. The save path for the server is: .../Astro/Saved/SaveGames
  3. If you tell me which game server we are talking about, I can maybe help you.
  4. This is a known issue and there is nothing that I can do about it, because intel_gpu_top reports that value incorrectly after some time. If you reboot your system, everything is back in a working state. I have no solution for that, sorry.
  5. @SpencerJ, @JorgeB, @ljm42 do you know about this site and that they sell servers with Unraid Plus licenses?
  6. Do you have IPv6 enabled? I don't have public IPv6 and had really bad response times there (600ms and higher). But don't forget that you are querying the root servers directly (non-recursive) and not caching, at least not for long, which can feel different from using a resolver like 8.8.8.8 or 1.1.1.1, since those cache everything and refresh it in the background to offer users very fast DNS. You could certainly also tune Unbound a bit more and turn up the caching and so on. Around 100ms is actually a normal value for queries made directly to the root servers. Yep, I see about 100ms here. You could also try disabling DNSSEC in Unbound and see whether that brings an improvement, since validation then happens there (don't forget: whenever you change something, restart either the container or the Unbound service). Unbound is configured as strictly as possible by default, but you can also try pointing AdGuard to a public resolver such as 8.8.8.8 or 1.1.1.1, whichever you prefer.
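     A few unbound.conf knobs for that tuning, as a sketch; the sizes/TTLs are example values to adjust, and the commented-out line shows where DNSSEC validation would be switched off for testing:

         server:
             # larger caches and prefetching for faster repeat lookups
             msg-cache-size: 64m
             rrset-cache-size: 128m
             cache-min-ttl: 300
             cache-max-ttl: 86400
             prefetch: yes
             # to test without DNSSEC validation, drop the validator module:
             # module-config: "iterator"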
  7. Sorry, but I'm not familiar with mods and Astroneer; please keep in mind that modding is always up to the user, because I can't know how every mod works. In general, however, you have to put it in the same location as you would on a regular bare metal Astroneer server and configure it the same way as on bare metal. Do you by any chance have validation enabled? If yes, please disable it, since this can also cause trouble <- only use validation if an update of the dedicated server is not working as expected.
  8. First of all, automation always has its up- and downsides, and that is a completely different discussion which does not belong in this thread. However, you mentioned a GUI above, which neither LXD nor Incus is; they are just, let's call it, a "management interface" on top of the CLI for LXC. Sure, you can install various dependencies through "cloud-init", but that doesn't always work because of the many, many, many different distributions and how they install/manage their packages, and of course it is meant more for initialization, so that you can use the container as a LAMP stack or similar. After you've set up the container with "cloud-init", it is also up to you to maintain the applications running inside that container. ...and if you plan on always deploying a new container, that is not easily possible with LXC, because you always have to destroy the container and rebuild it, and possibly mount a path where the data persists <- LXC is not intended to be used like that.

     Set up a cron schedule that runs a script and sends the output somewhere and you are good to go; you could even do that through a User Script within Unraid if you want to go fancy (see the sketch below). I completely understand what you are trying to do, but that always introduces some maintenance, and I would never recommend that you, for example, run a distribution upgrade or even package updates automated, because you know, some thing(s) will most certainly go wrong... Why not use Ansible or something like that to maintain the containers?

     For example, if you look into most of my Docker containers, they are self maintained, meaning that they check on every restart whether a newer version of the application is available and update the application if necessary with a relatively simple script. The test containers that I've made for LXC include PiHole, AdGuard-Home (these two because they are way easier to set up in an LXC container than in Docker), HomeAssistant and AMP. All of these containers have a cron schedule set up which updates the various applications running inside the container, but not the base packages themselves (again, I've had some horrible experiences in the past). It is also possible for the user to disable the cron schedule and do everything manually. Look for example at this repository; it is basically done on the frontend, but the backend needs a bit more love (if you want to try this, send me a short PM and I will tell you how to install it, it's pretty easy).

     I'm not familiar with that, but it looks pretty much the same as the LXC GUI in Unraid: you can start/stop/freeze/kill the container, open up a terminal, set/limit resources (through the config),... I hope I've covered all your points and that this helps; however, I would recommend that we continue this conversation elsewhere, because this is, strictly speaking, the support thread for the LXC plugin.
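     The User Script variant could look roughly like this, assuming Debian based containers and that you only want a report of pending updates rather than automatic upgrades (container names are placeholders):

         #!/bin/bash
         # run via the User Scripts plugin, e.g. daily at 04:00 (custom cron: 0 4 * * *)
         for ct in pihole adguard; do                  # placeholder container names
             lxc-attach -n "$ct" -- apt-get update -qq
             # -s only simulates the upgrade, so this merely logs what is pending
             lxc-attach -n "$ct" -- apt-get -s upgrade | grep '^Inst' \
                 >> /var/log/lxc-pending-"$ct".log
         done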
  9. Which values are we talking about? Do you have any examples? If it is, for example, -crossplay or similar, it belongs in GAME_PARAMS. However, I didn't know that you can change the difficulty and other settings... EDIT: I see it now; if you are talking about, for example, -preset hard, it belongs in GAME_PARAMS.
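     For example, the variable in the template would then look something like this (the flags here are assumed purely for illustration, they depend on the game):

         GAME_PARAMS: -crossplay -preset hard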
  10. Most likely hairpin NAT is not working correctly. Have you tried yet to connect with your local IP? It seems more like a client issue if you have the problems that you describe here; are you sure all your game files are valid? Are you on @Spectral Force‘s Discord server yet? He helps me out with 7DtD, but I assume that either your NAT is not working properly internally or your client has some kind of issue.
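      To rule out the internal NAT part, you can probe the server from another machine on your LAN; 192.168.1.10 is a placeholder for your server's address and 26900 is the default 7DtD port:

          # TCP probe of the game port
          nc -zv 192.168.1.10 26900
          # the game traffic itself is UDP, so probe that too (UDP results are less reliable)
          nc -zvu 192.168.1.10 26900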
  11. Yeah, a few things have changed since I created the plugin, but it should be easier now, and there are also help texts all over the Settings page.

      I don't understand what you mean by that... Distrobuilder is used to create custom images for LXC, but not images with, for example, HomeAssistant or PiHole preinstalled. However, I have a few templates that you can try out, created specifically for Unraid and for installation through the CA App, but many things need to be sorted out before this can be released.

      This looks very much like an LXD replacement, because LXD was ripped out of the hands of the community and is now maintained, more or less closed source, by Canonical. I never used LXD because it was based on Python (which would introduce Python itself as a dependency for LXC, which I never wanted) and because it was already somewhere on my radar that LXD would be commercialized. Don‘t forget, if you use Distrobuilder you also need a way of providing the images to the users; I‘ve already come up with ideas for how that is achievable on Unraid so that nearly everyone can publish containers.

      You already have a GUI in Unraid where you can manage your containers; what exactly do you want to do with it? What do you need Incus for? You can already do most of it in the GUI. I'm also planning on releasing some kind of help page listing some very useful examples of how to pass through a TUN device, an Intel iGPU,... but that is something for later down the road (see the sketch below for the TUN case).

      LXC is basically a VM with the advantages of Docker, so to speak shared resources, and you are in charge of maintaining it. But don't mix it up with Docker, where a container is easy to update; an LXC container is not easy to update, you have to maintain it and make sure that everything is up to date, so to speak with the downsides of a VM (which could also be an upside).
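      As a taste of what will be on that help page, the TUN device case usually boils down to two lines in the container config (a sketch, assuming a cgroup v2 system):

          # allow access to the TUN character device (major 10, minor 200)
          lxc.cgroup2.devices.allow = c 10:200 rwm
          # bind-mount the host node into the container, creating it if missing
          lxc.mount.entry = /dev/net/tun dev/net/tun none bind,create=file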
  12. Same for me... I already have container "templates" which may become available through the CA App; there are still many things to sort out. However, LXC is cool because you are in control, but of course the major downside of that is that you are in control, and it is a bit more to maintain than a Docker container. In the case of my HomeAssistant container though, it has a built-in updater which uses cron and is based on HomeAssistant core; if you are interested, let me know and write me a PM. Thanks for the detailed explanation, that helps a lot, but I'm still not sure why this happens on some systems and not on others. However, from my testing it is safe to use ZFS as the storage backing type; I use it on a daily basis <- LXC also supports BTRFS as a backing storage type, which also works with snapshots and send/receive. For ZFS I have to use a secondary dataset, because otherwise it could ultimately mess up other containers if I put it directly into your main LXC path.
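      What that secondary dataset amounts to, sketched with placeholder pool/dataset names:

          # dedicated child dataset so container datasets don't live in the main LXC path
          zfs create cache/lxc-containers
          # per-container snapshots then stay isolated, e.g.:
          zfs snapshot cache/lxc-containers/homeassistant@before-update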
  13. This is caused by the static IP that you‘ve assigned to the container, which is why the port mappings that you see in the container template are no longer valid: if you assign a static IP to a Docker container, it behaves like a local computer on your network where all ports are exposed, and you don‘t have to do any port forwarding in the template anymore because, as said, all ports are open. But that doesn‘t change how the container works, and the default ports are still reachable. May I ask why you assign a static IP? For most game servers that is overkill and not necessary.
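      For illustration, such a container is typically started like this on the CLI; br0 and the address are placeholders for your setup, and note the absence of any -p mappings:

          # with a dedicated IP every port of the container is reachable on that IP
          docker run -d --name=gameserver \
            --network=br0 --ip=192.168.178.50 \
            your-gameserver-image    # placeholder image name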
  14. You can also take snapshots with lxc-autosnapshot from the command line, maybe with a User Script. lxc-autosnapshot is a unique Unraid feature which I wrote specifically for that use case. Sorry, but I'm not that deep into ZFS, because I find it a bit overkill for home use, at least for what most people do with it... So to speak: the dataset was there before you configured the plugin, correct? If not, and the dataset was created after the plugin created the folder, this is the culprit.
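      The User Script route could look roughly like this; the exact lxc-autosnapshot arguments are an assumption on my side, so check its help output first:

          #!/bin/bash
          # run via the User Scripts plugin on a custom cron schedule
          for ct in container1 container2; do      # placeholder container names
              lxc-autosnapshot "$ct"               # assumed invocation, verify with --help
          done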
  15. Did you maybe create the dataset for LXC after you started and configured everything in the plugin? If so, this is most likely why you got this error. A restart of the LXC service would have solved it too. Please let me know if the dataset was created after you configured everything. Please note that you can also use ZFS here as the backing storage type; this will create a dedicated dataset where the containers/snapshots are stored.
  16. No worries, this issue pops up from time to time on new installations, but I really don't know why... As said, I can't reproduce it over here...
  17. I don‘t know why that happens on some systems; I can‘t reproduce it. Please try the command from this post: (simply copy and paste it)
  18. There you'll find everything you need: Click It looks like you can even set this up with a Docker plugin. These are all assumptions, because I don't know what realdebrid is and I also don't use Plex. So, from what I can tell from the documentation, the paths should be changed like this: ~/rclone/config to something like /mnt/user/appdata/rclonerd/config, ~/rclone/cache to something like /mnt/user/appdata/rclonerd/cache, and realdebrid:/tmp/myvolume to something like realdebrid:/mnt/user/yourstoragelocation
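      Put together, the mount command from the docs would then look roughly like this; realdebrid is the remote name from their documentation, the rest are the adjusted paths (all assumptions on my side):

          rclone mount realdebrid: /mnt/user/yourstoragelocation \
            --config /mnt/user/appdata/rclonerd/config/rclone.conf \
            --cache-dir /mnt/user/appdata/rclonerd/cache \
            --allow-other --daemon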
  19. Please leave everything as it is for now. No, C7. As far as I know, it is unlikely that your CPU goes into C10 as soon as you have even one PCIe expansion card in the system. It would also be very helpful if you posted your diagnostics, so the people who really understand power saving can see your system devices. But I can hardly imagine that the C-states are the reason your power consumption is so much higher.
  20. Phew, that is already way too much. And that one still... My system in this config drew just 70 to 75W in idle with 1x HDD, 2x SSD & 2x NVMe active. I now have almost no add-on cards left in it, but one more SSD instead, and I'm at about 50 - 55W in idle. Try resetting your BIOS; normally you don't need to set anything explicitly for idle, the defaults are usually already well chosen. I have to say, though, that I use the powersave governor.
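      For reference, checking and setting the governor by hand looks like this, assuming the cpufreq sysfs interface is present on your system:

          # show the current governor of core 0
          cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
          # switch every core to powersave
          for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
              echo powersave > "$g"
          done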
  21. This is caused by the changelog not reflecting the actual version: https://valheim.thunderstore.io/package/denikson/BepInExPack_Valheim/ I have now pushed a fix to the container; please update the container itself.
  22. You don't have to do that, because the container listens on all interfaces, so to speak 0.0.0.0, and you only have to set up a port forward in your router; as long as your domain points to your public IP, you can connect through it. Just install the container, do the port forwarding in your router, and you can connect to the container through your domain.
  23. This game was designed without Linux in mind and won't work on Linux. There is a different branch available in my repository for CoreKeeper: Click This is mainly because it needs other start parameters and the application is located elsewhere. I'm assuming you don't use Unraid, correct? If you are using Unraid, I have a template for each branch in the CA App for easy installation on Unraid. In every branch there should be a run example for each game; hope that helps.
  24. I've now changed what's done/executed on array start, so that should fix your issue. I will push the update maybe today. I'm not entirely sure if I'll find anything else that I want to change... Anyway, thank you for the report!