
Posts posted by JDK

1. So I'm a bit stuck here. I'm still at the start of troubleshooting and investigating this, but in an attempt to minimize my downtime, I'm posting in the meantime to see if anyone has a quick solution.

     

I upgraded to 6.12.0 this morning with no issues there; things were running fine after the upgrade. I then decided to convert my cache pool to ZFS per this guide from Space Invader. Again, no issues following this, but when it came time to format the cache pool after the array startup, it now looks like it wants to format my array disks as well! :(

     

     

    Has anyone run into this?

     

    server-diagnostics-20230617-1519.zip

     

     

[screenshot: the format prompt listing my array disks alongside the cache pool]

2. Updating to 4.7.6.0 fixed my image extraction issue. At the time I tried, the binhex image hadn't been updated to 4.7.6.0 yet, so I just switched to the official Docker image. I simply had to point it to the previous appdata location and it fired right up.

3. I've had this issue for a while now, but only on certain video files, not all. Take a look at this thread: https://emby.media/community/index.php?/topic/110537-image-extraction-error/&do=findComment&comment=1169206

     

I was able to get things working again by rolling back to binhex/arch-emby:4.6.7.0-3-01.

     

I also just noticed from the post above that binhex/arch-emby:4.6.7.0-4-02 works, so I tried that and can confirm it works for me as well.

4. You should be able to use one of the other Elastic dockers and just specify the tag. For example, I remember using FoxxMD's image; if you look at the template you'll see where to change it to "elasticsearch:5.6.16". That's how I had v1.5 running before.
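
If you'd rather sanity-check the tag outside the unRAID template first, the rough docker equivalent would be something like this (the container name and appdata path are just my guesses; adjust to your setup):

docker run -d --name=elasticsearch \
  -p 9200:9200 \
  -v /mnt/user/appdata/elasticsearch:/usr/share/elasticsearch/data \
  elasticsearch:5.6.16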

     

I won't get to try installing v1.5 for a couple of days, but I'll update here with my progress once I get back to it.

5. Awesome, thanks. I'll give this a poke tomorrow and see if I can get v1.5 back up.

     

v2 was actually really simple, with fewer steps, since it no longer seems to require Redis or a specific version of Elastic (I have it running against 7.10.2). That said, there is now a lot of confusing information out there: so many people struggled getting v1.5 up in the first place that there are loads of loose bits of info that can throw you off course, when v2 really doesn't need them.

     

If it helps you, these are the exact steps I followed while installing v2:

    1. Install CA User Scripts
      • Create a new script named vm.max_map_count
• Contents of the script:
    #!/bin/bash
    sysctl -w vm.max_map_count=262144
      • Set script schedule to At Startup of Array.
      • Add the docker container.
    2. Install Elasticsearch container from CA (for me, 7.10.2 works just fine using d8sychain's release)
    3. Install diskover container from CA
• Configure all your variables, but do not start the container upon creation. If you did, no problem; just power it down.
  • I had to manually edit the config file (/appdata/diskover/diskover-web.conf.d/Constants.php) to set the ES_HOST variable; it does not seem to update from the docker template, resulting in a failed connection to ES.
  • Start the container.
4. The diskover Application Setup documentation mentions the issue "The application doesn't start an index by default.", so I had to manually create the first index using the command below.
      • docker exec -u abc -d diskover python3 /app/diskover/diskover.py -i diskover-my_index_name /data
• You can exclude the "-i diskover-my_index_name" part and it will create an index using the timestamped pattern "diskover-data-211128210916".
  • The index may take a while to crawl. You can see its progress at http://server:9200/_cat/indices?v, where 9200 is your ES port.
5. Optional: go back to CA User Scripts and create a new script using the command from the previous step (a minimal sketch follows this list).
  • I may have missed something in the config, but the v2 container does not create indices automatically like the v1.5 container did.
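
A minimal sketch of that optional script (it is just the step #4 command with the fixed index name dropped, so each scheduled run creates a fresh timestamped index):

#!/bin/bash
# start a crawl inside the diskover container;
# omitting -i lets diskover name the index diskover-data-<timestamp>
docker exec -u abc -d diskover python3 /app/diskover/diskover.py /data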

For me it took a few minutes for the first index to become accessible so I could actually use the diskover app. One thing to note: be careful not to delete the index you have selected in diskover (the "Save selection" option). I did this and the app broke completely, not defaulting to one of the other indices. The only way I could fix it was to recreate the index using the command from step #4... at least the error gives you the name of the missing index.

     

    • Like 3
6. So I had been using this container for a couple of years at least, then a few weeks (maybe months now?) ago it broke with some error in the logs that I didn't get to figure out at the time. Fast forward to this week: I finally got back to fixing this container on my server and see that we are now on v2. That seems to bring with it the fact that most of the great features are now locked behind paid editions. The v1.5 tags are still available, and I realize now that the initial issue that broke my container was likely due to an update, because I can pull one of the more recent v1.5 tags and the same error is there, but going back a few versions (~JUN 2021) there is no error.

But the template is no longer available... Does anyone still have v1.5 running and mind posting a screenshot of the template so I can see which variables to set up again?

  7. Hi everyone,

     

I was really trying not to create a new topic just for this, but I could not find a solution: my understanding is that when I set up syslog to mirror to flash, the logs should go to the /logs/ folder on flash; unless I have the wrong location, that should be /boot/logs/. I have Krusader set up to map a path variable to this location, and I see historical logs there in zip files, but nothing recent enough.
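
(For reference, I'm checking for recent files there with a plain directory listing from the terminal, newest first:)

# list the flash logs folder, newest files first
ls -lt /boot/logs/ | head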

     

I am trying to troubleshoot a freezing issue with my server that has been plaguing me for the last couple of weeks, so I also set up remote logging since I couldn't track down the logs on the flash.

     

Below is how I configured the syslog options:

[screenshot: my syslog server settings]

     

    Here is what is available in /boot/logs/

[screenshot: contents of /boot/logs/]

     

    This is how I mapped the path in Krusader, if that helps.

[screenshot: the Krusader path mapping]

  8. On 10/15/2019 at 2:41 AM, ich777 said:

As @JDK said, you could use these tools.

I've also made a CSMM 7DtD (Catalysm Server Monitor & Manager for 7 Days to Die) Docker to manage the servers, but it's a little bit complicated to set up: you need a reverse proxy and a real domain name with a subdomain, then you can host this docker.

     

I've been meaning to ask: are there any high-level instructions for getting this set up on Unraid?

     

  9. 3 minutes ago, phat_cow said:

Just installed the 7daystodie docker. How do I manage the server?

    Depending on which version you installed, you can use one of the management tools like 7D2D RAT (Windows), Botman (Linux), or simply telnet to the server and issue commands that way.

If you're just looking to change some of the basic server setup, you would use serverconfig.xml for that; though since you already have the server up and running, I'm assuming you're looking for something beyond that.
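
If you go the telnet route, a quick sketch of a session (the IP is an example, and I'm assuming the default TelnetPort of 8081 from serverconfig.xml, with telnet enabled there):

telnet 192.168.1.100 8081
(then, inside the session, after entering the TelnetPassword if one is set:)
help
say "restarting in 10 minutes"
shutdown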

     

    • Thanks 1
  10. 10 hours ago, ich777 said:

    Add it after the AppID.

This should be a simple fix; I will look into it after work.

EDIT: Are you sure this is not a bug in the game itself that they will patch in the next few days? I've downloaded the stable version and it works without a flaw; then I deleted the whole folder and the docker, installed latest_experimental, and it's the same as in your screenshot.

I even looked at whether the folder structure itself is different, but it's not; it's exactly the same, and even the missing steamclient.so is in the main directory...

Is this the correct term for the beta build: '-beta latest_experimental', or is '-beta' enough?

     

EDIT2: Fixed the docker. Please click 'Check for Updates' on the Docker screen in Unraid and update the servers. The latest experimental build now runs fine. ;)

    You are too awesome, thanks @ich777.

     

    8 minutes ago, jordanmw said:

Looks like mine works now too - and I have to say, JDK, the experimental version of ALPHA 18 is fully worth it! They have really changed and improved so many things... and this is from a guy who has 1982 hours in that game. I particularly like the re-balancing and the removal of the level cap for perks. I was so pissed in A17 being level 80 and not being able to craft a 4x4. And the Turrets Syndrome perk looks like an awesome addition.

Yeah, I only spent about 15 minutes on A18 yesterday. Now that the docker has been updated I can put in a few more hours to really get a feel for it, but good vibes so far ✌️

     

     

     

    • Thanks 1
  11. 4 minutes ago, jordanmw said:

An experimental build of an ALPHA game... right. Can't help but be a little jaded after being a die-hard and seeing so many builds with glitches that get the whole community pissy. Eh, hey... maybe I'll give it a shot, just this once. How do you add -beta latest_experimental to the steamcmd update command?

From the release notes it looks like they have spent a lot of time implementing positive features and listening to the community closely. Performance seems better from just a quick test too, so that is very positive for me :)

     

I just updated the GAME ID field to be "294420 -beta latest_experimental". This downloads the correct server version, but I haven't yet been able to get it to run. Still tinkering...

[screenshot: the docker template with the updated GAME ID field]
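
For context, that field gets passed through to steamcmd, so the underlying update call is presumably something like this (the install directory is just my guess at the container's layout):

steamcmd +force_install_dir /serverdata/serverfiles \
  +login anonymous \
  +app_update 294420 -beta latest_experimental validate \
  +quit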

     

     

    • Like 1
12. @ich777, with 7 Days To Die A18 potentially dropping to general availability tomorrow, will the dockers be updated automatically, or do you need to release something new? It will be an experimental release on Steam, i.e. opt-in.

     

I run a couple of 7DtD servers using your docker and I would like them to stay unaffected, but I might want to try out A18 if there is a way without you needing to release something new.

  13. 13 hours ago, ich777 said:

Okay, that's strange if the Unraid terminal itself reported 2 Mbit/s and you reach your 800 Mbit/s within the VMs on Unraid...

     

Yes, this would be the best solution, and I think it is some kind of network problem.

     

One more question: do you have more than one NIC in your server, with one used for the VMs and one for Unraid?

Also try changing the network cables; often there can be an issue with one cable (I remember having this kind of problem in the past with an Unraid server that had two NICs and a load-balance setup).

     

I figured this out: a few months ago I installed the Pi-hole docker and decided to just set my router's primary DNS to the Pi-hole docker IP so that everything in the house is covered. Well, with unRAID set as a DHCP client, it picked up those DNS settings by default and the traffic was following a crazy route, I guess. Once I explicitly set unRAID's DNS entries, things came right back. Thanks for your help!
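
(If anyone hits the same thing, a quick way to see which DNS servers unRAID actually picked up from DHCP:)

# show the nameservers the system is currently using
cat /etc/resolv.conf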

    • Like 1
  14. Hey all, looking for some help with a strange issue.

     

First off: I'm quite new to the Linux side of things and still have a lot to learn, so assume I know nothing.

     

So recently, while downloading a larger game docker of about 6GB, I noticed it was taking several hours to complete, which didn't seem right since I'm on a gigabit connection. I had never done a speed test directly from the unRAID terminal; I've really only downloaded the more common dockers and plugins that help with managing the instance. I have done speed tests from within VMs on the unRAID box before, and they always came back within the expected range.

     

So I did a speed test from within the unRAID terminal and compared that to the same speed test from an Ubuntu Server VM on the same box. The results are below; even though the VM's results are not near the maximum, I believe that is because the provider throttles the service a bit during business hours. Still, they are far better than what I'm getting from the unRAID terminal.

     

Also, regular LAN traffic to the box is fine, easily maxing out around 110-120 MB/s, so I never had reason to suspect any issues.

     

Has anyone seen this issue before and can help me resolve it?

     

    unRAID

[screenshot: unRAID terminal speed test results]

     

    VM (Ubuntu Server)

[screenshot: Ubuntu Server VM speed test results]

  15. 13 hours ago, ich777 said:

Nope, no setting or anything; it should pull at full speed.

7DtD finishes on my 90 Mbit connection in about 15 minutes or so.

Is it possible that there is another limitation, disk speed or Steam itself (please keep in mind that steamcmd works over SteamPipe)?

Do you have any caching service for Steam running on your network?

     

I'm certainly not an expert at the finer Unraid/Linux configurations, but storage-wise I think my setup is pretty typical: appdata lives on my cache drive, which is an SSD. I don't have issues with any other docker images, but I also don't have anything else nearly that size.

     

So now, for the first time, I thought to do a speed test from the unRAID terminal to see if that is fine... it's not!
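
For anyone wanting to run the same check, I used something like the below; this assumes the sivel speedtest-cli script and python3 being available on the box:

# fetch and run the command-line speed test
wget -qO /tmp/speedtest.py https://raw.githubusercontent.com/sivel/speedtest-cli/master/speedtest.py
python3 /tmp/speedtest.py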

    Testing download speed................................................................................
    Download: 2.35 Mbit/s
    Testing upload speed................................................................................................
    Upload: 3.23 Mbit/s

I have tested the speed from several Windows-based VMs in Unraid, and they all reported much closer to the typical speeds I expect, which is around 800 Mbps.

I'll spend some time trying to figure this out, and I'll take it to another part of the forum if I need more help.

16. Is there some sort of setting I'm missing that would make these dockers download extremely slowly? Pulling down 5-6GB for a new docker (7daystodie in this case) takes several hours. I initially thought it was due in part to the game getting set up and generating the map, etc., but setting up a fresh instance now and watching the logs, after nearly 2 hours it is only around 50% complete.

     

I'm on a gigabit connection and pulling down any game from Steam is typically very fast. Not sure why these downloads take so long.

  17. 20 hours ago, ich777 said:

Since I don't own the game, testing is a little bit difficult, but it should work; I had a few testers and it worked.

     

Look one page back (page 6, I think); there was a similar post to yours and it was solved.

     

Are you sure you've opened the base port on TCP and UDP, then the following 3 UDP ports, and also a port in the range 27015 to 27030?

     

    Can you post a picture from the firewall or router with the port mapping?

     

Have you also set a different port in the config? If yes, please try deleting all the port mappings from the docker and entering them manually with the same ports for Container and Host.

     

     

Man, I feel a bit silly. It was all right there on P6 as you said - thank you. I had thought the ports were visible only to the container and the host, and that a mapping was done as part of that interface.

I made the changes to have the ports the same between Host and Container, as well as in the serverconfig.xml, and it came right up. Thanks again!

     

I know someone offered earlier in this thread, but if you need the game I'm happy to gift it to you. PM me your Steam details, or at least some way to buy you a beer and say thanks for this work!

    • Like 1
18. Firstly, thanks for these dockers. I've tried the one for 7 Days To Die and it works a charm, with easy setup.

Has anyone been able to get their server up on the public list? I must have a port blocked or something, because mine does not show up, but I am able to connect to it both locally and remotely. I have it set up on port 25550 since that is where I had it running while under Windows (dedicated); I had to open up the port range 25550-25552 on TCP and UDP beforehand, and everything worked. (The relevant serverconfig.xml line is sketched below.)
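
For reference, this is the serverconfig.xml line I mean (the property name is from the stock config; the value is just my setup):

<!-- game port; the host and container port mappings need to match this -->
<property name="ServerPort" value="25550"/>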

     

Right now the only thing missing is getting the server on the public list so it is easy to find. I tried opening up 26903 (since changed to 25553, also open) and 27015 as well, without it making a difference.

     

    Here are my port mappings if this helps. Appreciate any feedback.

[screenshot: my docker port mappings]

  19. On 4/7/2019 at 9:16 PM, BrianB said:

I don't know what you mean or how to do that. I just went back to the current version and restart NZBGet every day or two. Does anyone know if SAB has this problem? Any reason to stick with NZBGet?

I just find NZBGet the easiest to configure and use. I haven't used SAB in years now, so maybe they have improved on that front since.

Anyway, here are some steps you can try to get NZBGet going:

    1. Go to your docker page and add a new container

2. From the template dropdown, select "binhex-nzbget" (presuming you haven't already deleted the template). Basically, this restores all the settings you used last time, but it is also the cause of the 404 you got when checking out the 19.1 tag: you are most likely mixing the config data.

3. Make the changes to get the previous tag going - I used 19.1-1-02 (https://hub.docker.com/r/binhex/arch-nzbget/tags).

Below is a screenshot of how I changed it. The most important part is the AppData Config Path setting, so make sure you change that from the template value.

4. Once you've started the container, you will need to configure NZBGet again via its internal settings page (news server, paths, etc.), because with the change in appdata location you are basically starting fresh. (If you prefer the command line, a rough docker run equivalent is sketched below.)
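
A rough docker run equivalent of the rolled-back container (the container name and host paths are just my examples; adjust them to your setup):

docker run -d --name=binhex-nzbget-19 \
  -p 6789:6789 \
  -v /mnt/user/appdata/binhex-nzbget-19:/config \
  -v /mnt/user/Downloads:/data \
  binhex/arch-nzbget:19.1-1-02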

     

    Hope this helps!

     

[screenshot: the binhex-nzbget container template settings]

  20. On 3/29/2019 at 11:36 PM, BrianB said:

So, I went ahead and rolled back to v19, but now I can't get the WebUI to function. It just says "404 Not Found." For the heck of it I went up to v20 and got the same problem. What am I doing wrong?

     

    B.

The same happened to me. I just pulled the tag into a new container, building it from the template I was using for 20.0.

    Seems to be working OK for me now.
