ich777 (Community Developer · Posts: 15,714 · Days Won: 202)

Everything posted by ich777

  1. No, sorry, that was a poor explanation; English is not my native language... As the developer I can expose ports in the Dockerfile when building a container, but I don't do this in my SteamCMD containers because I see no reason for it: if you give the container a static IP address on a custom network, all ports are exposed anyway. Think of it the other way around: if someone else uses the container on a custom network and gives it another port, let's say 27019 instead of 27015, and I had exposed port 27015 in the Dockerfile, Unraid would display port 27015 as exposed, but that actually isn't true because all ports are exposed. These values simply come from the Dockerfile itself and don't restrict any port, even if the line is empty.

     I recommend using duckdns.org or some other kind of DynDNS if you don't have a static public IP, so that your friends can connect with, for example, 'myawesomserver.duckdns.org:27015'.

     I don't understand the last sentence: what is the 'right IP', your internal or your external IP, and what do you mean by the server not showing? No, then I think something is wrong with the port forwarding (some ISPs block, for example, ports 80, 443, 22 and other common ports). Have you edited any other setting in the 'server.cfg' file? Can your friends see your server if they enter your public IP and the port? I would also recommend trying it with bridge instead of br0. I can only try to connect to your server if you want; if you want proof that the container is working, you can connect to my server.
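The "can your friends reach the port" question above can be sketched as a quick script. This is only a generic reachability check, not part of the container; the hostname and port are placeholders, and note it tests TCP, while Source game traffic also uses UDP 27015, so a real game connection is the definitive test:

```shell
#!/bin/bash
# Minimal TCP reachability check using bash's /dev/tcp pseudo-device.
# Host and port below are placeholder values; adjust them to your setup.
check_port() {
  local host=$1 port=$2
  timeout 2 bash -c "echo > /dev/tcp/${host}/${port}" 2>/dev/null
}

# Example: test the forwarded game port from outside your LAN
if check_port "myawesomserver.duckdns.org" 27015; then
  echo "port reachable"
else
  echo "port NOT reachable - check your port forwarding"
fi
```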
  2. These parameters seem right to me. This is just normal behavior, since it's not a LAN game; a dedicated server is a different thing. A LAN game is when somebody is playing the game and hosting it so that other players can play with them. Yes, the port forwarding is working; if you can reach it from outside and others can connect, then everything is fine. Does CS:GO show the public IP? I don't think so... You can add the server simply by going into your 'Favourites', clicking 'Add Server', entering PUBLICIP:PORT (or LANIP:PORT), clicking 'Add' and then 'Refresh'; you should now see the server. This isn't weird, the explanation is simple: if you use bridge mode, you define a container port and a host port, and Unraid shows the port mapping with the IPs (because you mapped the port in the template). If you choose a custom network, Unraid looks for exposed ports from the Dockerfile, but in the SteamCMD containers I see no reason to expose a port in the Dockerfile, because on a custom network all ports are exposed since the container gets its own IP. Hope this all makes sense to you.
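The bridge vs. custom network difference above can be illustrated with plain docker CLI calls. This is a dry-run sketch only: the commands are echoed instead of executed, and the image name, subnet name and IP address are assumptions for the example, not the exact template values:

```shell
#!/bin/bash
# Dry-run illustration of bridge port mapping vs. a custom network.
# Image name ("ich777/steamcmd:csgo"), network name and IP are
# assumptions; the commands are only echoed, never executed.

# Bridge mode: you map container port -> host port explicitly,
# which is why Unraid can display the mapping in the template.
bridge_cmd="docker run -d --name csgo -p 27015:27015/udp -p 27015:27015/tcp ich777/steamcmd:csgo"

# Custom network (br0-style): the container gets its own IP,
# so every port is reachable and no -p mappings are needed at all.
custom_cmd="docker run -d --name csgo --network br0 --ip 192.168.1.50 ich777/steamcmd:csgo"

echo "$bridge_cmd"
echo "$custom_cmd"
```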
  3. Maybe something got unplugged... Also try to reseat the card.
  4. I'm also able to see the card when the 8+6-pin power connector is not seated right, but I can't use it properly. Have you done anything to your syslinux config?
  5. Can you check if your power cables for the card are seated correctly? Please also try to reseat the card in the PCIe slot. Have you also set up a VM to check whether the card works there?
  6. Please don't quote your own posts... Can you open up a terminal on Unraid itself and give me the output of 'nvidia-smi'?
  7. I think I'll add a note, since not everyone needs the extra path, and you can't leave it empty, otherwise the container won't start. Glad to hear it; luckyBackup offers the option to sync directly over SSH, and I also built in something that creates the keys for the container, which you can then add to the known hosts (I still have to change something; currently you only get them via the CLI).
  8. For the time being, yes and no. The plugin checks on every installation/start/restart of the server whether a newer version of the driver is available and installs it on boot. In the future I will implement a notification system so that you get a notification in the Unraid GUI when a newer driver is available, but you have to reboot manually to install it. The reboot is required because:
     • If something is using the card during an update, the update will simply fail and your server can/will crash.
     • If nothing is using the card, you have to restart the Docker daemon in order to pick up the new driver version, and that's something I don't want to do automatically.
  9. Yes, exactly. That always depends on the user's use case. I can of course add a note that if you want to sync to a disk mounted via UD you have to/should create an extra path, but that was never how I planned it; I myself use it over SSH to sync to another server. I'd be reluctant to enter /mnt/ instead of /mnt/user, since a lot more can go wrong there, but I may update the description. First I have to look at what changed in UD with RC1; I believe there is now an additional subpath in /mnt/ for SMB shares. Thanks for the heads-up, btw.
  10. I only read along when I have time... What is this about?
  11. The container is now finished and uploaded; it will take a few hours to show up in the CA App.
  12. You probably have a really old version of the template itself; I implemented the automatic update on container start/restart about half a year after the release of the container. This is how the template should look (you can redownload it from the CA App; just be sure to set the game path and the other parameters to the same values as the old ones):
  13. Just restart the container and it will download the latest version, provided you set the version number to 'latest'.
  14. I will look into this, give me one or two days
  15. Tom already mentioned this in his Q&A; see also the comment from Squid: yes, put simply, BTRFS always distributes two identical data blocks (for redundancy) across two different disks. BTRFS RAID1 ≠ conventional RAID1, since you can mix disks of different sizes, and SSDs, HDDs, ...; even running with 3 disks is possible. As I said, opinions on TRIM differ widely; also don't forget that the parity disk would then be put under extreme load. As far as I know, the Samsung Evo 850 does automatic TRIM internally without you having to do anything.
  16. EDIT: If you run TRIM in the array, the parity bits no longer match and a rebuild of the array would no longer work.
  17. Because TRIM is not possible in the array, problems can occur, but they don't have to. Opinions on TRIM differ widely... For example, I don't have a TRIM plugin installed for my SSDs. I would still use the SSDs in the cache pool, especially now since the betas that let you create multiple cache pools; that would make much more sense. Unraid is really designed so that your array serves as the "data grave" (cold storage) and your cache(s) as storage for your daily-used files (hot storage). As a workaround, if you use the SSDs as cache, you could attach an additional USB stick and use it as an array disk (the array can't be started without an assigned array disk), but keep in mind that the additional USB stick then also counts as a "storage device" and thus uses up one drive of your license.
  18. Can you try to set 'Write Back' to 'true'? This will cache smaller files in RAM and write them to the img once it has caught up. ATTENTION: Please keep in mind that if you restart, all your fileIO images will be gone from the iSCSI GUI; they are still there, but you have to assign them to the LUN again, after which everything works as normal. There is currently a bug in the plugin (iSCSI-Target) that removes the images from the config; I will push a fix ASAP. Can you give me the configuration of the TrueNAS iSCSI target? Write Back enabled? On which filesystem is the img located? Sorry for so many questions, but I'm not too familiar with TrueNAS. I will also open up a proper support thread for this once the plugin is working correctly.
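Re-attaching a fileIO image after a restart can also be sketched from the CLI. This is a hypothetical dry run around targetcli (the backstore tool such iSCSI setups build on); the backstore name and image path are made up for the example, and the command is only echoed, not executed:

```shell
#!/bin/bash
# Sketch: build the targetcli command that recreates a fileIO backstore
# with write-back caching enabled. Echoed as a dry run; "lun0" and the
# image path are assumptions for illustration only.
attach_fileio() {
  local name=$1 img=$2
  echo "targetcli /backstores/fileio create name=${name} file_or_dev=${img} write_back=true"
}

attach_fileio "lun0" "/mnt/user/iscsi/lun0.img"
```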
  19. You should have a 'openvpn-client' folder in your appdata directory where you have to put the 'vpn.ovpn' (if it's named differently please rename it to 'vpn.ovpn').
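The rename step above can be scripted; a small sketch, assuming the file arrived as 'client.ovpn' (a hypothetical original name) in the default Unraid appdata location:

```shell
#!/bin/bash
# Rename an arbitrary .ovpn file to the name the container expects.
# "client.ovpn" and the appdata path are assumptions; adjust to yours.
# The function is a no-op if the source file does not exist.
rename_ovpn() {
  local dir=$1 src=$2
  if [ -f "${dir}/${src}" ]; then
    mv "${dir}/${src}" "${dir}/vpn.ovpn"
  fi
}

rename_ovpn "/mnt/user/appdata/openvpn-client" "client.ovpn"
```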
  20. This should help: https://www.zabbix.com/documentation/current/manual/appendix/install/db_scripts
  21. I have to look this up and try it myself; I will report back. Can you try to delete the cronjob and add it again? I updated the container yesterday with a newer rsync version.
  22. Have you already tried to restart the container, and does it work again after the restart?
  23. Thank you for the heads up, already updated the description (will take a few hours to update in the CA App).
  24. You don't have to use this tool anymore. Simply download the stock RC1, reboot, go to the CA App, search for 'Nvidia Driver' and install it, do the same for 'ZFS', reboot, and you're done. No more compiling needed, and when a new version of Unraid is released, the plugins will update the drivers automatically.