growlith

Members
  • Posts: 13
  • Joined
  • Last visited
Everything posted by growlith

  1. Sounds good to me! Feel free to use anything that I have written, although it's not great Python code. I think the useful parts are figuring out which Steam API endpoint to check against, and how to manage other containers from inside a container.
  2. @Cyd Thanks for the help with -automanagedmods the other day. For anyone running a modded Ark server/cluster, I have created a companion container to run alongside it and automate the whole process of checking whether mods have updated and triggering restarts. It scans a given mounted /mods folder and queries the Steam API to see if a mod has updated. If one has, it broadcasts warnings in the servers and then restarts them. If you have a cluster, it will restart the primary server (the one set up with -automanagedmods) and stop all of the others. Once that server has updated your mods and is running, it will start the other ones that were running back up (leaving stopped ones untouched). Feel free to use it, integrate the concept, or modify it however you want. https://github.com/jalbertcory/ArkDockerModUpdater
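For anyone who wants to reproduce the check by hand: the updater's core question ("has this Workshop item changed?") can be asked with a single call to Steam's public GetPublishedFileDetails endpoint. A minimal sketch, assuming `curl` and `jq` are available; the mod id is a placeholder:

```shell
# Ask Steam when a Workshop item was last updated (epoch seconds).
# 731604991 is a placeholder mod id; substitute one of your own.
curl -s -d 'itemcount=1' -d 'publishedfileids[0]=731604991' \
  'https://api.steampowered.com/ISteamRemoteStorage/GetPublishedFileDetails/v1/' \
  | jq '.response.publishedfiledetails[0].time_updated'
```

Comparing that timestamp against one stored at the last restart is enough to decide whether to trigger the warning/restart cycle.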
  3. You need to make sure that you tell Ark what ports you are using, e.g. `?Port=7785?QueryPort=27064`, and it's not just that you need to remove and re-add them: the port inside the container needs to be the same port exposed externally. This is because ARK reports its ports back to Steam, and you can't change the container port without adding a new port mapping. So from a fresh setup, you should remove the query port and add a 27016<>27016 mapping. In addition, there is no command-line argument for the second UDP port (the one following 7777); it's always the first UDP port +1. So when you tell Ark ?Port=7785, you are also saying the second UDP port is 7786. ?QueryPort=27064 does what it says. If you want RCON (do you need this?), then there is more you have to do, such as adding ?RCONPort=27075?RCONEnabled=True?ServerAdminPassword=<yourpass> to the command line.
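Putting those rules together, a sketch of what a consistent setup could look like (container name and password are placeholders; the key points are that each host port equals the container port, and that Port+1 gets its own mapping):

```shell
# Host port == container port on every mapping, since ARK reports its ports to Steam.
# 7786 is implied by ?Port=7785 (always Port+1); it has no command-line flag of its own.
docker run -d --name='ARK-Example' \
  -p 7785:7785/udp -p 7786:7786/udp \
  -p 27064:27064/udp -p 27075:27075/tcp \
  -e 'GAME_PARAMS'='?Port=7785?QueryPort=27064?RCONEnabled=True?RCONPort=27075?ServerAdminPassword=mypass' \
  'ich777/steamcmd:arkse'
```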
  4. There are probably some template XML changes as well, right? This would only add the modIds to the game params. I imagine there is a volume mapping needed to double mount the steamcmd folder so that Ark can find it in its expected location. If you tell me what mappings you use, I can double check that things work, if that helps.
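As a sketch of the double mount being described, something like the following fragment of a docker run command could work; the second in-container path is an assumption about where ARK expects to find steamcmd, not a verified value:

```shell
# Hypothetical: mount the same host steamcmd folder twice, once where the
# container expects it and once where ARK's -automanagedmods might look for it.
-v '/mnt/user/appdata/steamcmd':'/serverdata/steamcmd':'rw' \
-v '/mnt/user/appdata/steamcmd':'/serverdata/serverfiles/Engine/Binaries/ThirdParty/SteamCMD/Linux':'rw'
```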
  5. I have verified that I placed this flag into Extra Game Parameters. I went ahead and captured the output from when I edit the Docker container:

     root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='ARK-TheIsland' --net='bridge' -e TZ="America/Chicago" -e HOST_OS="Unraid" -e 'GAME_ID'='376030' -e 'USERNAME'='' -e 'VALIDATE'='' -e 'PASSWRD'='' -e 'MAP'='TheIsland' -e 'SERVER_NAME'='Call-The-Island' -e 'SRV_PWD'='' -e 'SRV_ADMIN_PWD'='ARKAdmin' -e 'GAME_PARAMS'='?Port=7777 ?QueryPort=27015 ?RCONEnabled=True ?RCONPort=27016 ?bAllowUnlimitedRespecs=true ?FastDecayUnsnappedCoreStructures=true' -e 'GAME_PARAMS_EXTRA'='-server -log -exclusivejoin -automanagedmods -NoBattlEye -clusterid=call1 -ClusterDirOverride=/serverdata/clusterfiles' -e 'UID'='99' -e 'GID'='100' -p '7777:7777/udp' -p '7778:7778/udp' -p '27015:27015/udp' -p '27016:27016/tcp' -v '/mnt/user/appdata/steamcmd':'/serverdata/steamcmd':'rw' -v '/mnt/user/appdata/ark-se/island':'/serverdata/serverfiles':'rw' -v '/mnt/usr/appdata/ark-se/cluster':'/serverdata/clusterfiles':'rw' --restart=unless-stopped 'ich777/steamcmd:arkse'
     30e4fd7cbd207015fc8a7ac69096a52b41cd6befba92f8fe5c4ba03e2a84b864

     Whenever I include the -automanagedmods flag, I get this error looping in my log until I stop the container. Removing only the -automanagedmods flag allows the server to start and run as expected, except that it does not check/update the installed mods. I am having the same issue with the -automanagedmods flag: a seg fault upon starting the server. I do not have any spaces in my game parameter string. Any ideas of things I might try? This is what my start command looks like from Unraid:
/usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='ARKSurvivalEvolved' --net='bridge' -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'GAME_ID'='376030' -e 'USERNAME'='' -e 'VALIDATE'='' -e 'PASSWRD'='' -e 'MAP'='Ragnarok' -e 'SERVER_NAME'='rag' -e 'SRV_PWD'='somepassword' -e 'SRV_ADMIN_PWD'='otherpassword' -e 'GAME_PARAMS'='?AllowFlyerCarryPvE=true?AllowFlyingStaminaRecovery=true?AllowCaveBuildingPVE=true?OverrideOfficialDifficulty=5.0?ForceAllowCaveFlyers=true?AllowAnyoneBabyImprintCuddle=true' -e 'GAME_PARAMS_EXTRA'='-automanagedmods -server -log' -e 'UID'='99' -e 'GID'='100' -p '7777:7777/udp' -p '7778:7778/udp' -p '27015:27015/udp' -p '27020:27020/tcp' -v '/mnt/user/appdata/steamcmd':'/serverdata/steamcmd':'rw' -v '/mnt/cache/appdata/ark-se':'/serverdata/serverfiles':'rw' --restart=unless-stopped 'ich777/steamcmd:arkse'
  6. This is broken for me as well. The container seems to run fine for a few hours, then it thrashes in a boot loop, consuming tons of CPU with the constant restarting. I believe it's an issue with the application itself.
  7. Thanks for looking at it for me. Recovering the disk2 data proved to be beyond my ability, but I was able to confirm that I did not lose any important data, just a whole bunch of media. I got lucky: all of my Docker data is on the cache drive, and I had just moved my personal files around in bulk, so they were all placed together on the same drive (a drive which survived). So I just formatted those two disks and created a new config. Things are up and running now, and I am in the process of a parity build for a second parity drive. Now I need to let my family know that I lost a lot of stuff from Plex that will take a while to get back, and that their Plex user history is going to be lost (the Plex container is apparently the only one with data on the array, even though /mnt/docker/ is cache only).
  8. My server has a collection of 4TB and 8TB drives. When I opened it up to add another 8TB to start double parity, I moved one of the 4TB drives to keep the 8TB drives next to each other. When I booted back up, the 4TB would not show up. After trying various drive locations, and then checking whether the drive was recognized on another computer, I decided to just use the new drive to rebuild the now non-working one. The rebuild seemed very strange because it was running at 3.5Gbps, which does not seem possible. When I checked back after it was complete, another 4TB drive got a "UDMA CRC error count" of 13 and was disabled (I don't know whether this happened during the rebuild or not). Stopping the array and starting it again triggered another rebuild onto the errored drive. Long story short, I now have 2 drives showing "Unmountable: No file system", with the 4TB one maybe having data on it. Currently running the xfs_repair checks on both drives through the GUI with the array in maintenance mode. Is there any way to get the 4TB drive with write errors attached to the array long enough to rebuild the missing drive, so that I can then replace it with another new drive and rebuild that? The timing is unfortunate because I was just thinking about how I should be better protected: I was going for double parity and actually pricing out a less powerful server to ship to my parents to back myself up offsite. Lesson learned, I guess. tower-diagnostics-20200113-0551.zip
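For reference, the kind of check being run here: with the Unraid array in maintenance mode, xfs_repair targets the md device for the disk slot. A hedged example; the device number is a placeholder for the actual slot:

```shell
# Dry run first: -n reports problems without writing anything to the disk.
xfs_repair -n /dev/md2
# Only after reviewing the dry-run output, run the actual repair:
xfs_repair /dev/md2
```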
  9. Were you able to get this to work with Google DNS? I have 25 subdomains, and a wildcard cert seems like it would make more sense at this point. I get to the acme-challenge step and it says that it cannot find a TXT record. I set up the service account, the DNS API, and the managed zone. Not sure what I am missing.
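One way to narrow down a failing dns-01 challenge like this is to check whether the TXT record is actually visible from the outside. A small sketch; the domain is a placeholder:

```shell
# Query Google's public resolver directly for the challenge record.
dig +short TXT _acme-challenge.example.com @8.8.8.8
# Empty output means the record has not propagated (or was created in the wrong zone).
```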
  10. I was under the (potentially mistaken) impression that if I did not go to 6.2 first, I would have to rebuild my Docker containers and their configuration. At this point, I just want to get the new version running so I can make changes to my setup. So I will back up everything I can, record all the configuration in case everything goes upside down, and then try manually upgrading.
  11. I am running 6.1.9 and want to update my system. Following https://lime-technology.com/wiki/UnRAID_6/Upgrade_Instructions, I have made sure that I am ready to upgrade to 6.2 so that I can then upgrade to 6.5.x. The problem is that the upgrade plugin links are broken. The 6.2 link: https://raw.githubusercontent.com/limetech/unRAIDServer-6.2/master/unRAIDServer.plg I was able to find the .plg file here: https://github.com/limetech/unRAIDServer/blob/0486b4614bc384cc81eb0413335645d070a6588c/unRAIDServer.plg and, knowing how GitHub URLs work, I tried installing https://raw.githubusercontent.com/limetech/unRAIDServer/0486b4614bc384cc81eb0413335645d070a6588c/unRAIDServer.plg. The problem with that was that the zip file in S3 no longer exists: "plugin: wget: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.2-x86_64.zip download failure (Invalid URL / Server error response)" How should I go about updating my system / are there working links?
  12. Just wanted to comment here that I was having the same issue: the flash drive would continuously disconnect and reconnect. I solved this by moving to one of the USB 2.0 connections, as suggested. Sadly, the ports behind the lock on my server are all 3.0, so now the drive is accessible, but it is a micro drive and the back of the server is hard to get to, so I will live. Thanks for the help!
  13. I am also having the same problem with this plugin (invalid argument). For now I just upgraded my RAM to take care of occasionally hitting the old cap.