
[Support] atunnecliffe - Splunk


Overview: Docker image for Splunk based on Alpine Linux.

Application: Splunk https://www.splunk.com/

Docker Hub: https://hub.docker.com/r/atunnecliffe/splunk

GitHub: https://github.com/andrew207/splunk

Documentation: https://github.com/andrew207/splunk/blob/master/README.md // https://docs.splunk.com/Documentation/Splunk


Splunk can get pretty big, and unRAID's default Docker image size is tight. You should increase it if you plan on using this image.


Any issues let me know here.


If you want to reset your Splunk Enterprise trial license, there are instructions in the GitHub readme.


You can specify branches to download starting from 8.0.0, or just use "latest":
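For example, assuming the Docker Hub tags mirror the release branches (a version tag like 8.0.0 is an illustration here, not a confirmed tag name), pinning a version versus tracking the newest build looks like:

```shell
# Pull a pinned Splunk release (branches exist from 8.0.0 onward)
docker pull atunnecliffe/splunk:8.0.0

# Or always track the newest build
docker pull atunnecliffe/splunk:latest
```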





### If you upgrade to 9.0.0 and the web interface fails to start after ~10 minutes, just restart the container. Splunk changed the way it handles SSL certs, and in some cases a restart is needed to regenerate them properly.

  • 2 weeks later...

Queueiz, if you want full persistence of your entire install you'll need to add a volume for the entire /opt/splunk directory. I haven't tested this; I may need to patch the installer script so it checks for an existing installation in /opt/splunk/ rather than just an existing installer file.


You can stop/start the container all you want, but if you rebuild it you will lose config by default (by design, in a perhaps misguided attempt to follow Splunk best practice); the container will only persist your indexed data if you have created a volume for /opt/splunk/var.


--- edit:


I just tried to properly configure a volume for /opt/splunk for a fully persistent install, but Splunk started throwing some very obscure errors. I'll look into this further; perhaps I'll try a persistent mount for the config in /opt/splunk/etc alongside indexed data in /opt/splunk/var, but I'll probably have to leave the rest of the application to be installed on every container rebuild.


Feel free to try swapping to the dockerhub tag "fullpersist" (i.e. in unraid set Repository to "atunnecliffe/splunk:fullpersist"), removing any /opt/splunk/var volume and adding an /opt/splunk volume.
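As a sketch of what that looks like with plain docker run (the host path is an example, adjust for your system; on unRAID you'd set the same things in the container edit screen):

```shell
# Full-persistence experiment: single volume covering all of /opt/splunk
docker run -d --name splunk \
  -p 8000:8000 \
  -v /mnt/user/appdata/splunk:/opt/splunk \
  atunnecliffe/splunk:fullpersist
```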

  • 2 weeks later...

If you want a fully persistent install, for some reason Splunk throws some pretty odd errors. They don't seem to hinder functionality, so if you're cool ignoring them you can do the following, @wedge22 @GHunter:


Just add a volume for /opt/splunk/etc.


The /opt/splunk/etc directory stores all of your customisation. By default we already have a volume for /opt/splunk/var, the directory that stores all indexed data, so with these two your install should feel fully persistent.
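Put together, a minimal sketch of the two-volume setup (host paths are illustrative; on unRAID you'd add the /opt/splunk/etc mapping through the container edit screen):

```shell
# Config in etc, indexed data in var: both survive container rebuilds
docker run -d --name splunk \
  -p 8000:8000 \
  -v /mnt/user/appdata/splunk/etc:/opt/splunk/etc \
  -v /mnt/user/appdata/splunk/var:/opt/splunk/var \
  atunnecliffe/splunk:latest
```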


Hey @wedge22 that one may be due to a bad download -- doesn't happen on my end on UnRAID or on Win 10 hypervisors.


In an attempt to make this answer a bit more useful, here's a screenshot showing my container config, change being the added "App Data" volume:


  • 1 month later...

The container runs in GMT, and so does the Splunk app. To change the displayed timezone, set a timezone under your user's preferences in Splunk, and Splunk will apply the offset to events at search time.


From Splunk Answers:


You can set a user time zone using the Splunk Web UI: navigate to Settings > Users and Authentication > Access controls > Users. This will enable users to see search results in their own time zone, although it won't change the time zone of the event data.



Thanks for the response Andrew, that fixed it. I could not find that setting anywhere, and all resources pointed to changing the system time.


One more question I'm hoping you can help with. I use Splunk as a syslog server, so I set up both TCP and UDP port 514 data inputs, but it seems these data inputs were lost, so I'm guessing the inputs.conf file is not persistent. I can of course copy this folder and map it to a persistent location using Docker, but I'm interested in your input: have I misunderstood, or do you have a better way?
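For reference, syslog listeners like the ones described above are typically defined in an inputs.conf along these lines (the stanza syntax is standard Splunk; the sourcetype and connection_host values are common choices, not requirements):

```ini
# inputs.conf, e.g. under /opt/splunk/etc/system/local/
[tcp://514]
sourcetype = syslog
connection_host = ip

[udp://514]
sourcetype = syslog
connection_host = ip
```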





To make your inputs.conf persistent, map its path (under /opt/splunk/etc) as a volume:


As for your port 514, you'll have to expose that port yourself as it's not done by default; I only expose 8000 (web), 8089 (HTTPS API), and 9997 (Splunk data). Just make it available through unRAID's Docker edit screen the same way those three are configured and it should work fine.
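Outside unRAID, the equivalent docker run port mappings might look like this (a sketch; 514 added in both TCP and UDP alongside the three defaults):

```shell
# 8000 = web UI, 8089 = HTTPS API, 9997 = Splunk data, 514 = syslog
docker run -d --name splunk \
  -p 8000:8000 -p 8089:8089 -p 9997:9997 \
  -p 514:514 -p 514:514/udp \
  atunnecliffe/splunk:latest
```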


FYI I have a new version of this container coming out soon that does a few fixes including defaulting to persistent storage of config as well as data, as well as rebasing to Alpine Linux: https://github.com/andrew207/splunk/tree/openshift



Yeah @wedge22, persistent data has been challenging: when I convert the /opt/splunk/var/lib/ directory (i.e. where Splunk stores indexed data) to a volume, the KV store crashes. You can make it a volume yourself -- search/report/alert etc. will still work, but you get a permanent "message" about the KV store being dead.


I'm working on it though, we'll get there hopefully in the next week or so alongside a rebase to Alpine. https://github.com/andrew207/splunk/tree/openshift






@wedge22 @mgiggs Fixed. It's now published and available through unRAID's interface.


If you want it now, set your config up like this. Note the :openshift tag added to the repo, and make sure the volumes are exact:



For anyone interested, I'm still not certain why the KV store was failing previously (probably volume permissions?); the issue was mitigated by moving SPLUNK_DB to a proper standalone volume (and separating the KV store from SPLUNK_DB). This means your KV store isn't persistent, but for a small standalone install that likely won't matter.
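As I understand the layout, it looks roughly like this. The SPLUNK_DB default path (/opt/splunk/var/lib/splunk) is standard Splunk; treat the exact volume split and host paths here as a sketch rather than the container's exact template:

```shell
# Sketch: config and SPLUNK_DB each on their own volume,
# KV store left inside the container (not persistent)
docker run -d --name splunk \
  -p 8000:8000 \
  -v /mnt/user/appdata/splunk/etc:/opt/splunk/etc \
  -v /mnt/user/appdata/splunk/db:/opt/splunk/var/lib/splunk \
  atunnecliffe/splunk:openshift
```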


Thanks for making the changes so quickly. I installed the latest version and I am using the volumes exactly as described, but this means I have very little space left on my Docker volume. Did you increase the size of yours? Mine is currently set to 20GB and has multiple containers running.


Ah, good point. I don't see this as my Docker volume is set to 300GB due to hitting the limit in the past. I also have it stored on a non-array SSD for performance, alongside most of my appdata folders.

At absolute full blast the Splunk image should only ever consume 5-7GB, and more likely only about 1.5GB, so 20GB could be very limiting depending on what else you run. The growth comes from Splunk's "dispatch" directory at /opt/splunk/var/run/splunk/dispatch, which fills up with search artifacts depending on how frequently (and what) you search. This directory can be fairly safely deleted in a pinch, but the best solution is probably to increase your Docker volume size. I wouldn't consider this directory a candidate for a Docker volume due to its volatility.


Based on my link above, if your dispatch directory is getting large due to search volume, you can set `ttl` or `remote_ttl` in `limits.conf`; perhaps set it to something crazy low like 10 minutes so artifacts don't get hoarded there like they do by default.
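For example, a limits.conf tweak along those lines (ttl values are in seconds, so 600 = 10 minutes; placing it in system/local is one common location, adjust to taste):

```ini
# limits.conf, e.g. under /opt/splunk/etc/system/local/
[search]
# ad-hoc search artifacts expire after 10 minutes
ttl = 600
# artifacts from searches dispatched by remote peers
remote_ttl = 600
```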

  • 1 month later...


