Everything posted by ich777

  1. Yes, this error will always be there, look here: Click. If you bind port 27016 (in the Game Parameters) to the RCON port, you also have to create a new port entry to be reachable from outside. Please don't use host mode, it makes things more complicated if you run more containers... Just click on 'Add another Path, Port, Variable, Label or Device', select 'Port' from the drop-down menu, enter your RCON port number at Container Port and the same number at Host Port, and click Add (if necessary, select the right protocol, but I think RCON is TCP; correct me if I'm wrong). Then the port is exposed from the container and reachable in bridge mode. EDIT: If you add another port or something to a container, you always have to add a port forwarding in the template with the method above if you are in bridge mode (I recommend staying in that mode to keep things simple). You could also use Custom so all ports are exposed on a static IP in your network, but that's often not that easy if you have many containers; keep it simple and try bridge mode. EDIT2: Should look something like this:
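     For reference, such a template entry is just an ordinary Docker port mapping. A minimal sketch of the equivalent docker run call, assuming RCON on TCP 27016 and a placeholder image name (not the exact command the Unraid template generates):

       # Bridge mode: map container port 27016 to host port 27016 (TCP only)
       docker run -d --name gameserver \
         -p 27016:27016/tcp \
         ich777/steamcmd:example   # placeholder image/tag, adjust to your template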
  2. What did you change, and where? Did you edit the startup script inside the container itself, or did you just insert it in the template at 'Game Parameters'? If so, please send me a screenshot of your template.
  3. Yes, that's possible, and it is also already set up in my test environment; I was waiting for your response to release the container.
  4. Sorry, I can't help since I don't know what the problem is... It seems everything is fine. Do you know that you can bind cards in the IOMMU menu to the VFIO driver? Maybe something else is the problem; I just tried a GTX 1050Ti and a GTX 750 in one machine and had no problems.
  5. Okay, then I can't help, sorry. I don't know what the issue could be... You are the first person for whom the images don't work. Is your bzroot built with this container about 200MB?
  6. No, this is a Windows-only game and was designed without Linux in mind. Even through WINE it's not possible.
  7. So the 440.100 driver works when you build with my container? Yes, I think that's because I use a different compression. How much bigger are they? A typical bzroot from my container with the nVidia driver should be around 200MB to 230MB, depending on the nVidia driver version...
  8. Are those the ones we talked about further up, i.e. the ones with the SMART values?
  9. I can't imagine that it's not working, because I don't do anything different from linuxserver.io in the build process... Maybe the new drivers are the problem. Have you also tried it with the old nVidia driver version? You can set the driver version from 'latest' to your preferred version. Yes, that can be a problem if there is no cache drive in the system. That should be possible... I was going to ask whether something was using one of the cards during the build, but you also said that you tried the prebuilt one and that doesn't work either, so that can't be the problem. Can you try a completely clean build with nVidia drivers and Custom Build turned off? Oh, and are you sure that nothing uses the nVidia card while the images are being built? If something does, the build will fail and not work.
  10. Can you try to disable all your VMs and also your Docker containers at startup, reset all your IOMMU assignments, reboot, and then try to assign it and see if it works? I can't imagine why this isn't working... EDIT: Have you also tried the prebuilt one from the first post in this thread?
  11. Maybe the game developers haven't updated the Linux version of the game itself. As I said, it's updated through SteamCMD, and the container doesn't need to be updated. Maybe you'll understand it if I explain what the container does: the container looks for SteamCMD and installs it if it isn't found; then the container starts SteamCMD, and SteamCMD checks whether the game is installed and downloads it if it isn't; then the container starts the server executable that was downloaded through SteamCMD. If you restart the container, this happens in exactly the same order as described above. I hope you now understand why the container doesn't need to be updated: the container just runs a script that starts SteamCMD and executes all commands through it, and SteamCMD does the updating. I would recommend going to the Avorion forums and asking there whether the Linux version of the dedicated server has stopped being updated. EDIT: The container itself only needs to be updated if the developers change something about the startup, for example if they change the server executable or something like that, but that would result in the server not starting or constantly looping (restarting).
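     A rough sketch of that start flow as a shell script, just to illustrate the logic; the AppID, paths, and start command are placeholders, not the actual script from the container:

       #!/bin/bash
       STEAMCMD_DIR=/serverdata/steamcmd
       SERVER_DIR=/serverdata/serverfiles
       GAME_ID=565060   # placeholder AppID

       # 1) Install SteamCMD if it isn't found
       if [ ! -f "${STEAMCMD_DIR}/steamcmd.sh" ]; then
           mkdir -p "${STEAMCMD_DIR}"
           wget -qO- https://steamcdn-a.akamaihd.net/client/installer/steamcmd_linux.tar.gz | tar xz -C "${STEAMCMD_DIR}"
       fi

       # 2) Install/update the game through SteamCMD (this is where updates happen)
       "${STEAMCMD_DIR}/steamcmd.sh" +force_install_dir "${SERVER_DIR}" \
           +login anonymous +app_update "${GAME_ID}" +quit

       # 3) Start the server executable that SteamCMD downloaded (placeholder command)
       cd "${SERVER_DIR}" && ./server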
  12. Yes, that should be possible. These containers update on a restart of the container itself; if that doesn't happen, please set the variable 'Validate gamefiles' to 'true'. All games are updated through SteamCMD, which is executed on every start/restart of the container (I can't include the dedicated server files, that would be a copyright horror... ). Hope this makes sense to you. EDIT: Forgot to say that this is mentioned in the description of all my containers: 'Update Notice: Simply restart the container if a newer version of the game is available.'
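     Setting the validate variable roughly corresponds to adding SteamCMD's 'validate' option to the update call, which rechecks every local game file against Steam's manifest and redownloads anything that differs. A sketch with a placeholder AppID:

       # Update and verify all game files (placeholder AppID and path)
       ./steamcmd.sh +force_install_dir /serverdata/serverfiles \
           +login anonymous +app_update 565060 validate +quit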
  13. Have you tried to compile it with the nVidia driver alone, without your patch file? Do you have a cache drive installed in your system? Which Unraid version are you on? Can you provide a full log? You can enable the 'save to log' option to write the output to a logfile. Something seems wrong; the file should be created when the modules are compiled.
  14. Yes, that is related to it. Have you actually tried swapping the SATA cable yet? Cancel it immediately, that won't get you anywhere; you are creating a parity that isn't valid. Why? Are the disk/disks connected to the motherboard or through a controller?
  15. I'm not 100% sure, but I think so... (I personally don't own the game and can't test anything.) That's possible, but only if you move it to the right place. The user data folder must be here: /serverdata/serverfiles/User and the save folder must be here: /serverdata/serverfiles/Saves (those are the folders User and Saves in your 7DtD directory in your appdata folder). I can't do it any other way, since the container would otherwise wipe your savegame every time you change something in the Docker template or an update comes in. The container works for sure, because I know a few players who have no problem with it.
  16. Is your serverconfig variable set to the right file? It will always reset if you rename it or something. Also, you can't rename the path to Saves, and also not User, because otherwise it will break the container: 7DtD saves everything to the home folder, which is not accessible from the container, so on every update or change of one of the variables your save would be gone... That's why you can't change some things. The servername and password are not affected by this! Also, you didn't answer the whole question: have you set the variable 'Validate game files' to 'true'?
  17. Which settings do you want to change? The settings won't be overwritten; do you have validation turned on?
  18. Correct, I think the driver's make script should be built for that and should remove the old driver and replace it with the new one (most drivers do this when you compile them). I'm strictly against that; my beta container uses a different gcc version and much other different stuff. Do it the way you want, but if something breaks I can't help; I would recommend that you compile it with the SourceForge driver. Btw, have you read the instructions for the driver? There it is described how to update the driver if it's already installed; I would strongly recommend starting there... Also, what's wrong with building it in the script? From what I've read, these are the commands: 'tar zxf e1000e-<x.x.x>.tar.gz', then 'cd e1000e-<x.x.x>/src/', then 'make install'. You could also do it that way. Like I said, you can do it your way, but I have no hardware to test and can't help. Hope you get it to work.
  19. Set the variable CUSTOM_BUILD to 'true' and the container copies the build script for the kernel and every additional build stage (nVidia, ZFS, DVB,...) over to the main directory. Correct, that's described in the first post of this thread. I think you are talking about my build script and then including your linked driver in the script. No problem, it was a little late yesterday... I would do it this way (a sketch of the resulting addition follows below):
      1. Download the container from the CA App.
      2. Make your configuration by setting the things you need to 'true', or simply leave them empty if you don't need them.
      3. Set 'Custom Build' to 'true' (then the container copies over the build script and some other stuff and will sleep indefinitely).
      4. Go to your appdata directory, download your linked driver, and place it there.
      5. In that folder you will also find the file 'buildscript.sh'. Open that file and edit it.
      6. Add the necessary build steps (extract, make,...) there; the best place is after '## Copy Kernel image to output folder' (simply search for that text in 'buildscript.sh').
      7. After you made the changes/additions, save and close 'buildscript.sh' and open up a console terminal (click on the icon on the Docker page and select 'Console').
      8. Go to the data directory (type 'cd $DATA_DIR') and execute 'buildscript.sh' (with './buildscript.sh').
      If everything went well, you have everything ready in the output folder. NOTE: the path to the working directory is set with a variable that you can also use in your additional build steps for the driver: '$DATA_DIR' (points to '/usr/src' inside the container). Hope this helps.
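     A sketch of what such an addition to 'buildscript.sh' could look like for the e1000e driver, inserted after '## Copy Kernel image to output folder'; the version number is a placeholder and the exact make invocation may differ depending on the driver's readme:

       ## Build the out-of-tree e1000e driver (hypothetical addition)
       cd ${DATA_DIR}
       tar zxf e1000e-3.8.4.tar.gz     # placeholder version
       cd e1000e-3.8.4/src
       make install                    # compiles the module and installs it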
  20. There is also a readme in the linked file that describes how to compile the drivers. So, as I said, it would be best to append those steps to the buildscript after the bzimage is built; that is right before the nVidia build stage.
  21. Wait, the basic container is for 6.8.3; I think you didn't read the first post. You don't have to pull anything from GitHub; I built in a custom mode where the container copies all necessary files over to the main directory, so you can edit the buildscript and then run it from the Docker console. Just add your steps after the kernel is built and exported; if you do everything right, it should compile and pack everything into the new images. EDIT: Please don't pull anything from a beta25 build into a 6.8.3 build; these are two different kernels. You have to compile the driver that you linked with the 6.8.3 version of the container.
  22. And what's bad about that? I run it on my main server, and so does a buddy of mine on an Intel 10th gen platform. You could make a stable version of Unraid worse than a beta if you don't know what you are doing, keep that in mind... Just saying... If you want to do that, you must extend the buildscript or do every build step manually, and then I think you also need to compile the driver manually (please don't ask me, since I don't have such a card to test and I don't want to destroy anything on your server, or your entire server).
  23. Yes, this is possible. But you have to find someone who does a backport of the driver and then implement it; the implementation is not the problem. ;) Why are you not running the beta25?
  24. Well, the log says otherwise... I'm also not sure whether the Synology NAS constantly checks SMART; it would also be interesting to know what he had running on it. Possibly the timeout for the disks was set wrong, and they kept running into the timeout and spinning right back up. But that's another story... I don't want to accuse anyone of anything. In any case, the disks look defective, or are at least close to their final death...
  25. I have to look this up myself, since I don't have multiple graphics cards in my system (I personally have a Xeon, not AMD) and I only use the nVidia GPU for my Docker containers.