JDK

Everything posted by JDK

  1. My dude! Thanks for the quick reply. That did do the trick, and I can now wipe the nervous sweat off my brow and move on - thank you!
  2. So I'm a bit stuck here, still at the start of troubleshooting and investigating this, but in an attempt to minimize how long I stay down, I'm posting in the meantime to see if anyone has a quick solution. I upgraded to 6.12.0 this morning with no issues; things were running fine after the upgrade. I then decided to convert my cache pool to ZFS per this guide from Space Invader. Again, no issues following this, but when it came time to format the cache pool after the array startup, it now looks like it wants to format my array disks as well! Has anyone run into this? server-diagnostics-20230617-1519.zip
  3. Were you able to get renaming to work? From the worker logs I don't even see an attempt to call the External Post-processor Script.
  4. Updating to 4.7.6.0 fixed my image extract issue. At the time I tried, the binhex image hadn't updated to 4.7.6.0 yet, so I just switched to the official docker image. I simply had to point it to the previous appdata location and it fired right up.
  5. I've had this issue for a while now, but only on certain video files, not all. Take a look at this thread: https://emby.media/community/index.php?/topic/110537-image-extraction-error/&do=findComment&comment=1169206 I was able to get things working again by going to binhex/arch-emby:4.6.7.0-3-01. I also just noticed from the post above that binhex/arch-emby:4.6.7.0-4-02 works, so I tried that and can confirm it works for me as well.
  6. You should be able to use one of the other Elastic dockers and just specify the tag; for example, I remember using FoxxMD's image. If you look at the template you'll see where to change it to "elasticsearch:5.6.16" - that's how I had v1.5 running before. I won't get to try installing v1.5 for a couple of days, but I'll update here with my progress once I get back to it.
  7. Awesome, thanks. I'll give this a poke tomorrow to see if I can get v1.5 back up. v2 was really simple actually, and fewer steps, since it no longer seems to require Redis or a specific version of Elasticsearch (I have it running against 7.10.2). That said, there is now so much confusing information, because so many struggled getting v1.5 up in the first place, that there are loads of loose bits of info that can throw you off course when v2 really doesn't need it. If it helps you, these are the exact steps I followed while installing v2:
     1. Install CA User Scripts. Create a new script named vm.max_map_count with the following contents, and set the script schedule to At Startup of Array: #!/bin/bash sysctl -w vm.max_map_count=262144
     2. Add the docker containers: install the Elasticsearch container from CA (for me, 7.10.2 works just fine using d8sychain's release), then install the diskover container from CA.
     3. Configure all your variables, but do not start the diskover container upon creation (if you did, no problem, just power it down). I had to manually edit the config file (/appdata/diskover/diskover-web.conf.d/Constants.php) to set the ES_HOST variable; it does not seem to update from the docker template, resulting in a failed connection to ES. Then start the container.
     4. The diskover Application Setup documentation mentions the issue "The application doesn't start an index by default.", so I had to manually create the first index using: docker exec -u abc -d diskover python3 /app/diskover/diskover.py -i diskover-my_index_name /data (you can exclude "-i diskover-my_index_name" and it will create an index using the pattern "diskover-data-211128210916"). The index may take a while to crawl; you can watch its progress at http://server:9200/_cat/indices?v, where 9200 is your ES port.
     5. Optional: go back to CA User Scripts and create a new script using the command from the previous step. I may have missed something in the config, but the v2 container is not creating indices automatically like the v1.5 container did.
For me it took a few minutes for the first index to become accessible so I could actually use the diskover app. One thing to note: be careful not to delete the index you have selected in diskover ("Save selection" option). I did this and the app broke completely, not defaulting to one of the other indices. The only way I could fix this was to recreate the index using the command from step #4... at least the error gives you the name of the missing index.
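The user script and the manual index command above can be sketched in shell. This is a hedged sketch: the /tmp path below is just for illustration (on Unraid the User Scripts plugin stores the script for you), and the docker exec line is left as a comment because it needs the running diskover container.

```shell
# The User Scripts entry: raises the mmap count limit that
# Elasticsearch requires before it will start.
cat > /tmp/vm_max_map_count.sh <<'EOF'
#!/bin/bash
sysctl -w vm.max_map_count=262144
EOF
chmod +x /tmp/vm_max_map_count.sh

# The manual first-crawl command (run once the diskover container
# is up; shown as a comment here):
# docker exec -u abc -d diskover python3 /app/diskover/diskover.py \
#     -i diskover-my_index_name /data
```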
  8. So I had been using this container for at least a couple of years, then a few weeks (maybe months now?) ago it broke with some error in the logs that I didn't get to figure out at the time. Fast forward to this week: I finally got back to fixing this container on my server and see that we are now on v2. That seems to bring with it the fact that most of the great features are now locked behind paid editions. The v1.5 tags are still available, and I realize now that the initial issue that broke my container was likely due to an update, because I can pull one of the more recent v1.5 tags and the same error is there, but going back a few versions (~JUN 2021) there is no error. But the template is no longer available... does anyone still have v1.5 running and mind posting a screenshot of the template so I can see which variables to set up again?
  9. The server was stable for a good 5 or so days, but then just crashed again. I don't see any new logs on the flash, and of course the logs get flushed when the server comes back up. I'm not sure what I'm missing. Any thoughts?
  10. Hi everyone, I was really trying not to create a new topic just for this, but I could not find a solution: my understanding is that when I set up syslog to mirror to flash, the logs should go to the /logs/ folder on flash. Unless I have the wrong location, it should be /boot/logs/. I have Krusader set up to map a path variable to this location, and I see historical logs there in zip files, but nothing recent enough. I am trying to troubleshoot a freezing issue with my server that has been plaguing me for the last couple of weeks, so I also set up remote logging since I couldn't track down logs on the flash. Below is how I configured the syslog options. Here is what is available in /boot/logs/. This is how I mapped the path in Krusader, if that helps.
  11. I've been meaning to ask: are there any high-level instructions for getting this set up on Unraid?
  12. Depending on which version you installed, you can use one of the management tools like 7D2D RAT (Windows) or Botman (Linux), or simply telnet to the server and issue commands that way. If you're just looking to change some of the basic server setup, you would use serverconfig.xml for that, though since you already have the server up and running, I'm assuming you're looking for something beyond that.
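If serverconfig.xml is the route you take, the basic settings are plain property elements. A hedged fragment for illustration (property names are from the stock 7 Days To Die config; the values shown are examples, not recommendations):

```xml
<!-- Fragment of serverconfig.xml; values are examples only -->
<property name="ServerName"           value="My 7DTD Server"/>
<property name="ServerPort"           value="26900"/>
<property name="ServerMaxPlayerCount" value="8"/>
<property name="GameDifficulty"       value="2"/>
<property name="TelnetEnabled"        value="true"/>
<property name="TelnetPort"           value="8081"/>
<property name="TelnetPassword"       value="changeme"/>
```

With TelnetEnabled set, the telnet route mentioned above becomes available on TelnetPort for issuing admin commands.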
  13. You are too awesome, thanks @ich777. Yeah, I only spent about 15 min on A18 yesterday. Now that the docker has been updated I can roll in a few more hours to really get a feel for it, but good vibes so far ✌️
  14. From the release notes it looks like they have spent a lot of time implementing positive features and have listened to the community closely. Performance seems better from just a quick test too, so that is very positive for me. I just updated the GAME ID field to be "294420 -beta latest_experimental". This downloads the correct server version, but I haven't yet been able to get it to run. Still tinkering...
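For reference, that GAME ID value ends up as steamcmd's app_update arguments, with -beta selecting the experimental branch instead of stable. A hedged sketch of the resulting invocation (the install path is an assumption about the container's layout, and the command is echoed rather than run since steamcmd needs a Steam install):

```shell
# Sketch: how the GAME_ID field maps onto a steamcmd app_update call.
# -beta latest_experimental pulls the opt-in branch instead of stable.
GAME_ID="294420 -beta latest_experimental"
echo "steamcmd +force_install_dir /serverdata/serverfiles +login anonymous +app_update ${GAME_ID} validate +quit"
```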
  15. @ich777, with 7 Days To Die A18 potentially dropping to general availability tomorrow, will the dockers be updated automatically, or do you need to release something new? It will be an experimental release on Steam, i.e. opt-in. I run a couple of 7DTD servers using your docker and I would like them to stay unaffected, but I might want to try out A18 if there is a way without you needing to release something new.
  16. I figured this out: a few months ago I installed the pi-hole docker and decided to just set my router's primary DNS to the pi-hole docker IP so that everything in the house is covered. Well, with unRAID set as a DHCP client it picked up those DNS settings by default, and the traffic was following a crazy route, I guess. Once I specifically set unRAID's DNS entries, things came right back. Thanks for your help!
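For anyone hitting the same symptom, a couple of standard checks narrow it down quickly (the hostname below is an example, and the lookup is commented out so the snippet is safe to run offline):

```shell
# Show which DNS servers the box actually picked up (via DHCP or
# manual settings); a pi-hole IP here confirms the routing detour.
cat /etc/resolv.conf
# A slow response from the next command implicates the resolver:
# time nslookup steamcdn-a.akamaihd.net
```

Manually set resolvers (Settings > Network Settings in Unraid) show up in the same file once applied.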
  17. Hey all, looking for some help with a strange issue. First off: I'm quite new to the Linux side of things and have a lot to learn still, so assume I know nothing. Recently, while downloading a larger game docker of about 6GB, I noticed that it was taking several hours to complete, which didn't seem right since I'm on a gigabit connection. I've never done a speed test directly from the unRAID terminal before; I've really only downloaded the more common useful dockers and plugins to help with management of the instance. I've done speed tests from within VMs on the unRAID box before, and they always came back within the expected target. So I did a speed test from within the unRAID terminal and compared that to the same speed test from an Ubuntu Server VM on the same box. The results are below, and even though the results from the VM are not near max, I believe that is because they throttle the service a bit during business hours. Still, far better than what I'm getting from the unRAID terminal. Also, regular LAN traffic to the box is fine, easily maxing out around 110-120 MBps, so I never had reason to suspect any issues. Has anyone seen this issue before and can help me resolve it? unRAID VM (Ubuntu Server)
  18. I'm certainly not an expert at the finer unraid/linux configurations, but storage-wise I think my setup is pretty typical: appdata lives on my cache drive, which is an SSD. I don't have issues with any other docker images, but I also don't have anything else nearly that size. So now, for the first time, I thought to do a speed test from the unraid terminal to see if that is fine... it's not!
Testing download speed................ Download: 2.35 Mbit/s
Testing upload speed.................. Upload: 3.23 Mbit/s
I have tested the speed from several Windows-based VMs in unraid, and they all reported much closer to the typical speeds I am expecting, which is closer to 800 Mbps. I'll spend some time trying to figure this out, and take it to another part of the forum if I need more help.
  19. Is there some sort of setting I'm missing that would make these dockers download extremely slowly? Pulling down 5-6GB for a new docker (7daystodie in this case) takes several hours. I initially thought that it was partly due to the game getting set up and generating the map, etc., but setting up a fresh instance now, I was looking at the logs, and after nearly 2 hrs it is only around 50% complete. I'm on a gigabit connection, and pulling down any game from Steam is typically very fast. Not sure why these downloads take so long.
  20. I wish this was more accessible. Didn't even think to go back to the apps section... Couple beers on the way!
  21. Man, I feel a bit silly. It was all right there on P6 as you said - thank you. I thought the ports were visible only to the container and the host, and that a mapping is done as part of that interface. I made the changes to have the ports the same between Host and Container, as well as in serverconfig.xml, and it came right up. Thanks again! I know someone offered earlier in this thread, but if you need the game I'm happy to gift it to you. PM me your Steam details, or at least some way to buy you a beer and say thanks for this work!
  22. Firstly, thanks for these dockers. I've tried the one for 7 Days To Die and it works a charm with easy setup. Has anyone been able to get their server onto the public list? I must have a port blocked or something, because mine does not show up, but I am able to connect to it both locally and remotely. I have it set up on port 25550, since that is where I had it running while under Windows (dedicated). I had to open up the port range 25550-25552 on TCP and UDP beforehand, and everything worked. Right now the only thing missing is getting the server onto the public list so it is easy to find. I tried opening up 26903 (since changed to 25553, also open) and 27015 as well, without making a difference. Here are my port mappings, if this helps. Appreciate any feedback.
  23. I just find NZBGet the easiest to configure and use. I haven't used SAB in years now, so maybe they've improved on that front since. Anyway, here are some steps you can try to help get NZBGet going:
     1. Go to your docker page and add a new container.
     2. From the template dropdown, select "binhex-nzbget", presuming you haven't already deleted the template. Basically, this just restores all the settings you used last time, but it is also the cause of you getting the 404 when checking out the 19.1 tag: you are most likely mixing the config data.
     3. Make the changes to get the previous tag going - I used 19.1-1-02 (https://hub.docker.com/r/binhex/arch-nzbget/tags). Below is a screenshot of how I changed it. The most important part is the AppData Config Path setting, so make sure you change that from the template value.
     4. Once you've started the container, you will need to configure NZBGet again via its internal settings page (news server, paths, etc.). This is because of the change in the appdata location - basically starting fresh.
Hope this helps!
  24. Same happened to me. I just pulled the tag into a new container, building it from the template on which I was using 20.0. Seems to be working OK for me now.