andrew207 Posted June 20, 2019

Overview: Docker image for Splunk based on Alpine Linux.
Application: Splunk https://www.splunk.com/
Docker Hub: https://hub.docker.com/r/atunnecliffe/splunk
GitHub: https://github.com/andrew207/splunk
Documentation: https://github.com/andrew207/splunk/blob/master/README.md and https://docs.splunk.com/Documentation/Splunk

Splunk can grow pretty big, so the default Docker image size is tight; you should increase it if you plan on using this image. If you hit any issues, let me know here.

If you want to reset your Splunk Enterprise trial license, there are instructions in the GitHub readme.

You can specify version tags to download starting from 8.0.0 (8.0.0, 8.0.1, and so on), or just use "latest".

If you upgrade to 9.0.0 and the web interface fails to start after ~10 minutes, just restart the container. Splunk changed the way it handles SSL certs, and in some cases it needs a restart to regenerate them properly.
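If you'd rather pin a version than track "latest", pulling and running a tag looks something like this (a sketch; 8.0.1 is just an example tag, and 8000 is the web UI port):

    # pull a pinned Splunk version by tag
    docker pull atunnecliffe/splunk:8.0.1

    # minimal run exposing the web UI on port 8000
    docker run -d --name splunk -p 8000:8000 atunnecliffe/splunk:8.0.1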
rpnater Posted July 3, 2019

Hi! As a Splunk admin, thanks a lot for your work; I was on the verge of installing it on a full CentOS VM. I'll let you know if I need anything.
queueiz Posted July 6, 2019

This image doesn't seem to be working fully. If you change or add another Path, Port, Variable, Label, or Device, or set an IP for the container, the whole install gets reset back to factory settings.
andrew207 Posted July 6, 2019

@queueiz, if you want full persistence of your entire install, you'll need to add a volume for the entire /opt/splunk directory. I haven't tested this; I may need to patch the installer script so it checks for an existing installation in /opt/splunk/ rather than just an existing installer file. You can stop and start the container all you want, but if you rebuild it you will lose config by default (by design, in a perhaps misguided attempt to follow Splunk best practice); the container will only persist your indexed data if you have created a volume for /opt/splunk/var.

Edit: @queueiz I just tried to properly configure a volume for /opt/splunk for a fully persistent install, but Splunk started throwing some very obscure errors. I'll look further into this; perhaps I'll try to configure a persistent mount for the config in /opt/splunk/etc alongside the indexed data in /opt/splunk/var, but I'll probably have to leave the rest of the application to be installed on every container rebuild. Feel free to try swapping to the Docker Hub tag "fullpersist" (i.e. in unRAID set Repository to "atunnecliffe/splunk:fullpersist"), removing any /opt/splunk/var volume and adding an /opt/splunk volume.
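For anyone trying the fullpersist tag outside unRAID's template UI, the equivalent plain docker run would look roughly like this (a sketch; the host path is an assumption, and 8000/8089/9997 are the three ports the image exposes):

    docker run -d --name splunk \
      -p 8000:8000 -p 8089:8089 -p 9997:9997 \
      -v /mnt/user/appdata/splunk:/opt/splunk \
      atunnecliffe/splunk:fullpersist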
queueiz Posted July 7, 2019

I tried just that and it seems to be working better now. Thank you.
GHunter Posted July 9, 2019

On 7/7/2019 at 8:23 AM, queueiz said: "I tried just that and it seems to be working better now. Thank you."

@queueiz Can you be more specific? What did you try exactly to get this working? Thanks!
wedge22 Posted July 23, 2019

I too would like to know what was changed to make this work as a persistent container.
andrew207 Posted July 23, 2019

@wedge22 @GHunter If you want a fully persistent install, for some reason Splunk throws some pretty odd errors. They don't seem to hinder functionality, so if you're OK ignoring them, you can do the following: add a volume for /opt/splunk/etc, the directory that stores all of your customisation. By default we already have a volume for /opt/splunk/var, the directory that stores all indexed data, so with these two your install should feel fully persistent.
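As a sketch, the same two-volume setup as a plain docker run (the host paths under appdata are assumptions; adjust for your setup):

    docker run -d --name splunk \
      -p 8000:8000 -p 8089:8089 -p 9997:9997 \
      -v /mnt/user/appdata/splunk/etc:/opt/splunk/etc \
      -v /mnt/user/appdata/splunk/var:/opt/splunk/var \
      atunnecliffe/splunk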
wedge22 Posted July 24, 2019

Thanks for the reply, @andrew207. I have tried to make changes, but as of this morning the container is no longer working for me, even from a clean install.
andrew207 Posted July 26, 2019

Hey @wedge22, that one may be due to a bad download; it doesn't happen on my end on unRAID or on Win 10 hypervisors. In an attempt to make this answer a bit more useful, here's a screenshot showing my container config, the only change being the added "App Data" volume:

[screenshot: container config with the added "App Data" volume]
GHunter Posted July 29, 2019

Thanks for the help Andrew. I'll give this a try.
mgiggs Posted September 19, 2019

How do I set the timezone correctly in this container? Everything I've found about setting timezones for Splunk, Linux, and Docker doesn't work with this container, so all my timestamps are incorrect.
andrew207 Posted September 22, 2019

The container runs as GMT, and so does the Splunk app. To change the displayed timezone, set a timezone in Splunk under your user's preferences and Splunk will apply the offset to any events at search time. From Splunk Answers:

"You can set a user time zone using the Splunk Web UI: navigate to Settings > Users and Authentication > Access controls > Users. This will enable users to see search results in their own time zone, although it won't change the time zone of the event data."
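If you prefer setting it on disk rather than through the UI, the per-user preference lives in user-prefs.conf; something like this should do it (a sketch; the "admin" username and the timezone value are just examples):

    # /opt/splunk/etc/users/admin/user-prefs/local/user-prefs.conf
    [general]
    tz = Australia/Melbourne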
mgiggs Posted September 23, 2019

Thanks for the response Andrew, that fixed it. I could not find that setting anywhere, and all the resources pointed to changing the system time. One more question I'm hoping you can help with: I use Splunk as a syslog server, so I set up both TCP and UDP port 514 Data Inputs, but it seems these data inputs were lost, so I'm guessing the inputs.conf file is not set up to be persistent. I can of course copy this folder and map it to a persistent location using Docker, but I'm interested in your input as to whether I have misunderstood something or if you have a better way. Cheers, Michael
andrew207 Posted September 23, 2019

To make your inputs.conf persistent, add a volume for /opt/splunk/etc as described above. As for your port 514, you'll have to expose that port yourself, as it's not done by default; I only expose 8000 (web), 8089 (HTTPS API), and 9997 (Splunk data). Just make it available through unRAID's Docker edit screen the same way those three are configured and it should work fine. FYI, I have a new version of this container coming out soon that makes a few fixes, including defaulting to persistent storage of config as well as data, and rebasing to Alpine Linux: https://github.com/andrew207/splunk/tree/openshift
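For reference, syslog inputs on 514 are defined in inputs.conf along these lines (a sketch; the sourcetype and connection_host values are common choices, not something this image sets for you):

    # /opt/splunk/etc/system/local/inputs.conf
    [tcp://514]
    sourcetype = syslog
    connection_host = ip

    [udp://514]
    sourcetype = syslog
    connection_host = ip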
mgiggs Posted September 23, 2019

Thanks for the response; that confirmed what I thought, and I already had the ports mapped. Looking forward to the new version as well. Thanks for your help!
wedge22 Posted September 26, 2019

I was also going to ask about persistent data, as I have been having the same problem. I might just wait for the latest version of the container.
andrew207 Posted September 26, 2019

Yeah @wedge22, persistent data has been challenging: when I convert the /opt/splunk/var/lib/ directory (i.e. where Splunk stores indexed data) to a volume, the KV store crashes. You can make it a volume yourself; search, report, alert, etc. will still work, but you get a permanent "message" about the KV store being dead. I'm working on it though; we'll hopefully get there in the next week or so, alongside a rebase to Alpine. https://github.com/andrew207/splunk/tree/openshift
wedge22 Posted September 26, 2019

Thanks for the update. I was just trying to configure it again now, but I may as well wait.
andrew207 Posted September 26, 2019

@wedge22 @mgiggs Fixed; it's now published and available through unRAID's interface. If you want it now, set your config up as shown; note the :openshift tag added to the repository, and make sure the volumes are exact:

[screenshot: container config with the :openshift tag and volume mappings]

For anyone interested, I'm still not certain why the KV store was failing previously (probably volume permissions?); the issue was mitigated by moving SPLUNK_DB to a proper standalone volume (and separating the KV store from SPLUNK_DB). This means your KV store isn't persistent, but chances are that won't matter for a small standalone install.
mgiggs Posted September 26, 2019

Thanks Andrew, great support, and thanks for your time putting this together for the community!
wedge22 Posted September 27, 2019

Thanks for making the changes so quickly. I installed the latest version and I am using the volumes exactly as described, but this means I have very little space left on my Docker volume. Did you increase the size of your Docker volume? Mine is currently set to 20GB and has multiple containers running.
andrew207 Posted September 27, 2019

Ah, good point. I don't see this, as my Docker volume is set to 300GB due to hitting the limit in the past; I also have it stored on a non-array SSD for performance, alongside most of my appdata folders. The Splunk image at absolute full blast should only ever consume 5-7GB as a maximum, and more likely only about 1.5GB, so 20GB could be very limiting depending on what else you run. The increase in size is due to Splunk's "dispatch" directory, which resides in /opt/splunk/var/run/splunk/dispatch and fills up with "search artifacts" based on how frequently and what you search. This directory can be (fairly) safely deleted in a pinch, but the best solution is probably to increase your Docker volume size; I wouldn't consider this directory a candidate for a Docker volume due to its volatility. If your dispatch directory is getting large due to search volume, you can set ttl or remote_ttl in limits.conf, perhaps to something crazy low like 10 minutes, so things don't get hoarded there like they do by default.
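For the ttl tweak, a minimal limits.conf might look like this (a sketch; 600 seconds is the 10 minutes suggested above, and both values are in seconds):

    # /opt/splunk/etc/system/local/limits.conf
    [search]
    # drop completed search artifacts from dispatch after 10 minutes
    ttl = 600
    remote_ttl = 600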
wedge22 Posted September 27, 2019

Thanks for the reply. I can increase the current size of my Docker image from 20GB to maybe 40GB, as I do not have a spare SSD to use as a non-array drive.
guitarlp Posted May 2, 2020

Is this container still up to date and okay to use? It failed to install a couple of times, and now that it has installed, unRAID is showing the version as "not available".