[Support] atunnecliffe - Splunk



Ok, so I can set it persistent as above, I will try that.

But to confirm, are you saying worst case scenario is it stops ingestion and pops up with errors?


Am I right in understanding that if that happens, I can set the license type to free in the GUI at any point thereafter, and things will swim along?

Forwarders, as I understand it, cache everything if they can't send it, so I may get a bit of log build-up, but once that license is set to free, should it just open the flood gates? Or is it not quite as simple as logging in once in a while and setting the license to free via the GUI?

 

Also, could you please confirm what you mean by
"Delete all default indexes from disk"
Is that referring to the "main" index? Does this mean data loss? Would the workaround there (if I wanted to keep using the trial) be to create a new custom index straight away and keep everything in that?

Link to comment

edit: I've just updated the container from 8.0.4 to Splunk 8.0.5, so you should be able to test a real upgrade once it pushes out to dockerhub/unraid.

 

Worst case scenario (meaning you're outside 60 days and the free license doesn't take for some reason) is it'll stop ingestion and pop up with errors, you're right.

 

Yes, you can always change the license type in the GUI, but realistically if you set up a persistent server.conf it'll be fine. If it works for you once it'll keep working; Splunk is an enterprise-grade product and it's generally good at applying the config you set, as long as it's in the right place.

 

Forwarders will cache up to a limit and then "open the flood gates" once the server is accepting data again. By default this cache is an in-memory queue of just 500KB, since the forwarder is designed to have a minimal footprint, but you can raise it and optionally add a disk-backed "persistent queue" by configuring the input on the forwarder (a sketch follows): https://docs.splunk.com/Documentation/Splunk/8.0.4/Data/Usepersistentqueues. Remember, queuing only really applies when log files aren't just sitting on disk -- so if you're monitoring, for example, nginx log files, it doesn't matter how long the server isn't accepting data; as long as the log file hasn't rolled, Splunk will still have a checkpoint in the file to continue from.
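For reference, a minimal sketch of what that could look like on the forwarder -- the port and sizes are purely illustrative, and persistent queues only apply to network or scripted inputs, not to monitored files:

# inputs.conf on the forwarder (example values only)
[udp://514]
sourcetype = syslog
queueSize = 1MB                # in-memory queue; the default is 500KB
persistentQueueSize = 100MB    # optional disk-backed queue used once the in-memory queue fills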

 

If the bandwidth from "opening the floodgates" is a concern, you can also set maxKBps in limits.conf to put a firm limit on the bandwidth consumed by the forwarder -- https://docs.splunk.com/Documentation/Splunk/8.0.4/Admin/Limitsconf.
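A minimal limits.conf sketch on the forwarder (the number is only an illustration, roughly 10MB/s):

# limits.conf on the forwarder
[thruput]
maxKBps = 10240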

 

"Delete all default indexes from disk" means delete basically the following indexes:

_audit, main, _internal, _introspection, _telemetry. Yes, this means data loss -- however, ideally you would be storing your data in a non-default index; for example, I store my pfsense logs in a "pfsense" index. That index is also defined with a persistent indexes.conf file, which means that if you ever need to reset your license (which it doesn't sound like you will if you set up the free one properly) it'll stay around. Also note that even if you don't persist your index properly and it disappears from the UI, the data won't get deleted -- when you re-add the index the data will all magically appear again.

 

Basically if you have all the pieces in place it'll be fine. For you this sounds like:

 

1. Define indexes.conf on your server to split hot/warm and cold into separate Docker volumes

2. Define indexes.conf on your server to add new non-default index(es) for your data, and ensure your forwarders are sending to those indexes (see the inputs.conf sketch after this list)

3. Consider upping the queue limits on your forwarder, and consider limiting the bandwidth (I generally just reduce it from the default unlimited to something crazy high like 10MBps, just so things won't break if an endpoint starts logging like mad and decides to redline for a few days)

4. Double check the config by swapping Docker branches from :latest to :8.0.4 to force a full container redownload, simulating an upgrade and testing all your persistent config.
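For step 2, the forwarder side could look something like this -- the path, index name, and sourcetype are only placeholders for whatever you actually collect:

# inputs.conf on the forwarder
[monitor:///var/log/pfsense/*.log]
index = pfsense
sourcetype = pfsense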

Edited by andrew207
Link to comment
  • 2 months later...

Hey @VictorCharlie, the tesla app seems to use data models to present its data, and it does not create the "tesla" index it uses by default. This creates two bits of work you need to do yourself in addition to the documented process of filling out your API keys. I don't know how much you know about Splunk, so I'll be specific in my responses.

 

1. Ensure data is being persisted

You'll need to ensure that the indexed data is being persisted by filling out the DataPersist param in the config for the container in the unRAID UI.

 

2. Create the tesla index (this seems to be required as all the datamodels have index=tesla hardcoded).

From the GUI, go to Settings --> Indexes and create a new index called "tesla". Make sure you use the default directories, which will end up in your DataPersist directory (or modify them and ensure they remain in there).
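If you'd rather do it in config than the GUI, a rough indexes.conf equivalent would be something like the below (placed somewhere persistent, e.g. an app directory) -- since the container points SPLUNK_DB at your DataPersist volume, these default-style paths should land in the right place:

[tesla]
homePath   = $SPLUNK_DB/tesla/db
coldPath   = $SPLUNK_DB/tesla/colddb
thawedPath = $SPLUNK_DB/tesla/thaweddb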

 

The Tesla app doesn't seem to use any KVStore, rather it uses data models. Data models persist fine in the container as long as you have DataPersist set in your unRAID docker template.

 

I can't see any reason it wouldn't work if the above are followed. I can't test because unfortunately I don't own a Tesla and therefore don't have API keys etc. Feel free to take this to PM if you want me to debug further, perhaps I can use your keys to test it.

Edited by andrew207
Link to comment

@andrew207 Thank you for your response. I should have been clearer in my last post. I am able to get the app working: I created the index, set the inputs, and it runs perfectly. The issue is with the Configuration tab in the app. You do not need the API key to see the problem. After I add the inputs (you can put anything, it does not matter) and save them, then edit the Splunk docker and go back to the app, the configuration pages break. The frame of the page loads, but the content such as the form and the submit buttons never loads. Anything that was created as an input is missing. From what I can see, something created when the inputs are entered is not persistent. This is the best Splunk docker for unRAID out there -- I would hate to have to run a VM just because I can't get this app to work. Thanks for your help.

Link to comment

@VictorCharlie Thanks for the extra info! Pretty sure I've got it solved.

When this app (and all properly configured apps) saves passwords, they are encrypted using your splunk.secret key. By design this key is unique to every Splunk installation, so when you rebuild the container it reinstalls itself and generates a new key.

 

To fix this you should be able to manually add a volume for the single file required, /opt/splunk/etc/auth/splunk.secret. Make that specific file (or the whole auth directory) a volume in your unRAID docker template config and your tesla app should persist nicely.
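In the unRAID template that's just another path mapping, something like this (the host path is only an example):

Container path: /opt/splunk/etc/auth
Host path:      /mnt/user/appdata/splunkenterprise/etc/auth
(the equivalent of docker run -v /mnt/user/appdata/splunkenterprise/etc/auth:/opt/splunk/etc/auth)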

 

For the next release I'll add in a default volume for the key file or the whole auth directory once I figure out which is best, because this would be affecting all apps that use encrypted passwords.

Edited by andrew207
Link to comment

Hey @Moka
First, you'll need to configure Splunk to listen on a port for syslog. You can do this in the GUI by following the instructions in the documentation: https://docs.splunk.com/Documentation/Splunk/latest/Data/Monitornetworkports#Add_a_network_input_using_Splunk_Web
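If you'd rather do it in config than the GUI, the equivalent is a network input stanza in inputs.conf -- 514/udp here is only an example, use whatever port your devices send to:

[udp://514]
sourcetype = syslog
connection_host = ip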

 

Once you've got Splunk listening on a port for syslog, you'll need to make sure your Docker template config in unRAID exposes that port. You might need to add a "port" to your Docker template (the same way the default template has port 8000 added for the web interface).
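For example, to match the 514/udp input sketched above you'd add another port mapping in the template (values are examples):

Container port: 514    Host port: 514    Protocol: UDP
(the equivalent of docker run -p 514:514/udp)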

 

Once you've got Splunk listening and that port added to your Docker template, you can configure your network devices to send their syslog to Splunk by following their documentation, making sure to use the port you configured in the unRAID Docker template. Once configured, you will be able to search the new syslog data in Splunk.

Link to comment
  • 1 month later...
Quote

Unable to initialize modular input "journald" defined in the app "journald_input": Introspecting scheme=journald: Unable to run "/opt/splunk/etc/apps/journald_input/bin/journald.sh --scheme": child failed to start: No such file or directory.

This message is persistent in my Messages. Is this error expected?

Link to comment

Hey @Caduceus, this seems to be something new Splunk added in the latest update that will only work on systemd based systems.

 

I've just committed a change to disable this in the container.

 

Commit is here: https://github.com/andrew207/splunk/commit/ebf5f696c1458fd9ca0be3402f0fc930d2cfd1a2

 

Such is life living on the bleeding edge! :) Thanks for pointing this out. You can switch to the 8.1.0 branch/dockerhub tag if you want to try the fix now, otherwise I'll push it to master in the next couple of days.

Link to comment
On 11/10/2020 at 8:03 AM, andrew207 said:

Hey @Caduceus, this seems to be something new Splunk added in the latest update that will only work on systemd based systems.

 

I've just committed a change to disable this in the container.

 

Commit is here: https://github.com/andrew207/splunk/commit/ebf5f696c1458fd9ca0be3402f0fc930d2cfd1a2

 

Such is life living on the bleeding edge! :) Thanks for pointing this out. You can switch to the 8.1.0 branch/dockerhub tag if you want to try the fix now, otherwise I'll push it to master in the next couple of days.

I was mistaken -- this change doesn't seem to fix it. But in $SPLUNK_HOME/etc/apps/journald_input/local/app.conf

 

if we add:

[install]
state = disabled

That seems to fix it by disabling the app.
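If you'd rather not edit the file by hand, the CLI equivalent should be something like this, run inside the container (assuming the default install path and admin credentials), followed by a restart:

/opt/splunk/bin/splunk disable app journald_input -auth admin:<yourpassword>
/opt/splunk/bin/splunk restart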

Edited by Caduceus
was mistaken
Link to comment
On 7/11/2020 at 12:08 AM, andrew207 said:

Hey @4554551n thanks for your interest, here are some answers to your questions.

> Resetting trial license

Yeah sure, you can set it to the free license if you want. Whenever you upgrade the container you'll just need to set it to the free license again.

 

> Splunk index data location / splitting hotwarm and cold

You can't split hot and warm, but you can split hot/warm and cold. With Splunk there are a lot of ways to split cold data off into its own location, I'd use "volumes". Here's a link to the spec for the config file we'll be editing: https://docs.splunk.com/Documentation/Splunk/8.0.4/Admin/Indexesconf

 

In my docker startup script I run code to change the default SPLUNK_DB location to the /splunkdata mount the container uses. SPLUNK_DB contains both hot/warm and cold. OPTIMISTIC_ABOUT_FILE_LOCKING fixes an unrelated bug. We set this in splunk-launch.conf, meaning the SPLUNK_DB variable is set at startup and is persistent through the whole Splunk ecosystem. As you correctly identified from the Splunk docs, SPLUNK_DB is used as the default storage location for all index data and all buckets; this config splits it off into a volume.


printf "\nOPTIMISTIC_ABOUT_FILE_LOCKING = 1\nSPLUNK_DB=/splunkdata" >> $SPLUNK_HOME/etc/splunk-launch.conf

1. Create a new volume in your docker config to store your cold data (e.g. /mynewcoldlocation)
2. Create an indexes.conf file, preferably in a persistent location such as $SPLUNK_HOME/etc/apps/<app>/default/indexes.conf
3. Define hotwarm / cold volumes in your new indexes.conf, here's an example:


[volume:hotwarm]
path = /splunkdata
# Roughly 3GB in MB
maxVolumeDataSizeMB = 3072

[volume:cold]
path = /mynewcoldlocation
# Roughly 50GB in MB
maxVolumeDataSizeMB = 51200

It would be up to you to ensure /splunkdata is stored on your cache disk and /mynewcoldlocation is in your array as defined in your docker config for this container.

4. Configure your indexes to utilise those volumes by default by updating the same indexes.conf file:


[default] 
# 365 days in seconds 
frozenTimePeriodInSecs = 31536000 
homePath = volume:hotwarm/$_index_name/db
coldPath = volume:cold/$_index_name/colddb
# Unfortunately we can't use volumes for thawed path, so we need to hardcode the directory. 
# Chances are you won't need this anyway unless you "freeze" data to an offline disk. 
thawedPath = /mynewFROZENlocation/$_index_name/thaweddb 
# Tstats should reside on fastest disk for maximum performance 
tstatsHomePath = volume:hotwarm/$_index_name/datamodel_summary

5. Remember that Splunk's internal indexes won't follow config in [default], so if we want Splunk's own self-logging to follow these rules we need to hard-code it:


[_internal]
# 90 days in seconds
frozenTimePeriodInSecs = 7776000
# Override defaults set in $SPLUNK_HOME/etc/system/default/indexes.conf
homePath = volume:hotwarm/_internaldb/db
coldPath = volume:cold/_internaldb/colddb

[_audit]
# 90 days in seconds
frozenTimePeriodInSecs = 7776000
# Override defaults set in $SPLUNK_HOME/etc/system/default/indexes.conf
homePath = volume:hotwarm/audit/db
coldPath = volume:cold/audit/colddb

# ... etc etc for other indexes you want to honour this.

> Freezing buckets

When data freezes it is deleted unless you tell it where to go.

You'll see in my config above I set the config items "maxVolumeDataSizeMB" and "frozenTimePeriodInSecs". For our volumes, once the entire volume hits that size it'll start moving buckets to the next tier (hotwarm --> cold --> frozen). Additionally, each of our individual indexes will also have similar max size config that will control how quickly they freeze off, iirc the default is 500GB.

 

Each individual index can also have a "frozenTimePeriodInSecs", which will freeze data once it hits a certain age. If you have this set, data will freeze either when it is the oldest bucket and you've hit your maxVolumeDataSizeMB, or when it's older than frozenTimePeriodInSecs.

 

The easiest way to tell the data where to go is by setting a coldToFrozenDir in your indexes.conf for every index. For example, if in our same indexes.conf we have an index called "web", it might look like this (here's some doco to explain further: https://docs.splunk.com/Documentation/Splunk/8.0.4/Indexer/Automatearchiving):


[web]
# 90 days in seconds
frozenTimePeriodInSecs = 7776000
coldToFrozenDir = /myfrozenlocation
# alternatively,
coldToFrozenScript = /mover.sh
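For what it's worth, a coldToFrozenScript can be as simple as the sketch below -- Splunk calls it with the bucket directory as the first argument and, as I understand it, removes the bucket itself once the script exits successfully, so the script's only job is to copy the data somewhere safe first (the archive path is just an example):

#!/bin/sh
# mover.sh (illustrative only): archive the frozen bucket before Splunk deletes it
set -e
mkdir -p /myfrozenlocation
cp -r "$1" /myfrozenlocation/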

Hope this helps.

 

I tried to follow these instructions to the letter... unfortunately, every time I restart the container or just restart splunkd, none of my indexed data is searchable after the restart. The index data can still be seen in the persistent shares (i.e. /splunkdata and /splunkcold). If I stop Splunk, delete everything in /splunkdata and /splunkcold, and then start Splunk again, everything re-indexes. If I don't delete the data, I cannot search it whether I restart or not, so some record of the files having been indexed remains persistent in the fishbucket db.

I have no idea why my other data seems to disappear on restart. I was doing some Splunk Labs for a Sales Engineer 2 accreditation and spent half my time wondering why my files weren't being indexed... it turned out they had been indexed but disappeared on restart, and because there was still data in the fishbucket they wouldn't be re-parsed with any changes I made and re-indexed.

 

Here are my docker volume mappings:

/splunkcold <> /mnt/user/splunk-warm-cold/  #this is a unraid cached array share for cold buckets
/opt/splunk/etc/licenses <> /mnt/user/appdata/splunkenterprise/license #I persist a developer license here
/test <> /mnt/user/appdata/splunkenterprise/test/  #ingesting test files from here
/opt/splunk/etc/system/local <> /mnt/user/appdata/splunkenterprise/etc/system/local   #indexes.conf lives here
/splunkdata <> /mnt/user/appdata/splunkenterprise/splunkdata #hot-warm data is persisted here on ssd cache
/opt/splunk/etc/apps <> /mnt/user/appdata/splunkenterprise/etc/apps
/opt/splunk/etc/auth <> /mnt/user/appdata/splunkenterprise/etc/auth

 

Here is my indexes.conf stored in /opt/splunk/etc/system/local which is persisted to a share (/mnt/user/appdata/splunkenterprise/etc/system/local):

[volume:hotwarm]
path = /splunkdata
maxVolumeDataSizeMB = 3072

[volume:cold]
path = /splunkcold
maxVolumeDataSizeMB = 51200

[default]
# ~91 days in seconds
frozenTimePeriodInSecs = 7884000
homePath = volume:hotwarm/$_index_name/db
coldPath = volume:cold/cold/$_index_name/colddb
thawedPath = /splunkcold/$_index_name/thaweddb
tstatsHomePath = volume:hotwarm/$_index_name/datamodel_summary


# Splunk Internal Indexes
[_internal]
frozenTimePeriodInSecs = 7884000
homePath = volume:hotwarm/_internaldb/db
coldPath = volume:cold/cold/_internaldb/colddb
thawedPath = /splunkcold/_internaldb/thaweddb
tstatsHomePath = volume:hotwarm/_internaldb/datamodel


[_audit]
frozenTimePeriodInSecs = 7884000
homePath = volume:hotwarm/audit/db
coldPath = volume:cold/cold/audit/colddb
thawedPath = /splunkcold/audit/thaweddb
tstatsHomePath = volume:hotwarm/audit/datamodel

[_introspection]
frozenTimePeriodInSecs = 7884000
homePath = volume:hotwarm/_introspection/db
coldPath = volume:cold/cold/_introspection/colddb
thawedPath = /splunkcold/_introspection/thaweddb
tstatsHomePath = volume:hotwarm/_introspection/datamodel

[_metrics]
frozenTimePeriodInSecs = 7884000
homePath = volume:hotwarm/_metricsdb/db
coldPath = volume:cold/cold/_metricsdb/colddb
thawedPath = /splunkcold/_metricsdb/thaweddb
tstatsHomePath = volume:hotwarm/_metricsdb/datamodel

[_metrics_rollup]
frozenTimePeriodInSecs = 7884000
homePath = volume:hotwarm/_metrics_rollup/db
coldPath = volume:cold/cold/_metrics_rollup/colddb
thawedPath = /splunkcold/_metrics_rollup/thaweddb
tstatsHomePath = volume:hotwarm/_metrics_rollup/datamodel

[_telemetry]
frozenTimePeriodInSecs = 7884000
homePath = volume:hotwarm/_telemetry/db
coldPath = volume:cold/cold/_telemetry/colddb
thawedPath = /splunkcold/_telemetry/thaweddb
tstatsHomePath = volume:hotwarm/_telemetry/datamodel

[_thefishbucket]
frozenTimePeriodInSecs = 7884000
homePath = volume:hotwarm/fishbucket/db
coldPath = volume:cold/cold/fishbucket/colddb
thawedPath = /splunkcold/fishbucket/thaweddb
tstatsHomePath = volume:hotwarm/fishbucket/datamodel

[history]
frozenTimePeriodInSecs = 7884000
homePath = volume:hotwarm/historydb/db
coldPath = volume:cold/cold/historydb/colddb
thawedPath = /splunkcold/historydb/thaweddb
tstatsHomePath = volume:hotwarm/historydb/datamodel

[summary]
frozenTimePeriodInSecs = 7884000
homePath = volume:hotwarm/summarydb/db
coldPath = volume:cold/cold/summarydb/colddb
thawedPath = /splunkcold/summarydb/thaweddb
tstatsHomePath = volume:hotwarm/summarydb/datamodel

[main]
frozenTimePeriodInSecs = 7884000
homePath = volume:hotwarm/defaultdb/db
coldPath = volume:cold/cold/defaultdb/colddb
thawedPath = /splunkcold/defaultdb/thaweddb
tstatsHomePath = volume:hotwarm/defaultdb/datamodel

# Begin Custom Indexes
[splunk_labs]
homePath = volume:hotwarm/splunk_labs/db
coldPath = volume:cold/cold/splunk_labs/colddb
thawedPath = /splunkcold/splunk_labs/thaweddb
tstatsHomePath = volume:hotwarm/splunk_labs/datamodel

[win_logs]
homePath = volume:hotwarm/win_logs/db
coldPath = volume:cold/cold/win_logs/colddb
thawedPath = /splunkcold/win_logs/thaweddb
tstatsHomePath = volume:hotwarm/win_logs/datamodel

[screenshot: Monitoring Console / Data / Indexes before a restart]

 

Any ideas how I can fix this? I love the idea of having a container but I can't live with it as is :P  Thanks!  I also attached the container logs to see if that gives any insight.

 

 

****UPDATE****

 

I think I discovered the answer in the logs:

11-17-2020 09:07:31.535 +0000 INFO BucketMover - will attempt to freeze: candidate='/splunkdata/splunk_labs/db/db_1505895227_1388693545_13' because frozenTimePeriodInSecs=7884000 is exceeded by the difference between now=1605604051 and latest=1505895227
11-17-2020 09:07:31.535 +0000 INFO IndexerService - adjusting tb licenses
11-17-2020 09:07:31.545 +0000 INFO BucketMover - AsyncFreezer freeze succeeded for bkt='/splunkdata/splunk_labs/db/db_1505895227_1388693545_13'

splunk_docker_logs.txt

It turns out that the Splunk Lab files they give you have dates going back to 2014... so the event dates in my case (and in most people's cases) far exceed the frozenTimePeriodInSecs. On restart, Splunk sends all of my just-indexed data straight to frozen because of the latest event date in that newly indexed bucket, and therefore I can't see the data any more. It seems the data was persisting just fine and this is expected behaviour. For the lab data I will set frozenTimePeriodInSecs to something greater than the age of the events. I hope this helps others, even if they're just looking at the screenshots and config files to do the same thing.
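For reference, the arithmetic from that BucketMover log line:

now - latest = 1605604051 - 1505895227 = 99,708,824 seconds (roughly 1,154 days)
frozenTimePeriodInSecs = 7,884,000 seconds (roughly 91 days)

Since the newest event in the bucket is already far older than the retention window, the bucket is frozen (and, with no coldToFrozenDir set, deleted) as soon as it rolls.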

Edited by Caduceus
RESOLVED
Link to comment

Hey @Caduceus Thanks for posting your self-solve there. What you posted makes sense, if you're ingesting data that's older than your frozenTimePeriodInSecs it'll freeze off pretty quickly.

 

And yes, I did notice my little quick-fix attempt didn't work (fortunately I noticed before pushing it to master), so I'll follow your lead and just disable the app entirely :). Thanks for that.

 

Happy to help with any other questions, and appreciate you reading through past answers to help work through the issues you were hitting.

Link to comment
  • 2 months later...

First, thank you for creating this! It's super smooth!

I'm not sure if the issue I'm experiencing is due to something I'm doing wrong related to this splunk docker or if it is just a coincidence, but I notice it specifically here so figured I'd start here.

 

Any time I adjust ports or change paths on my splunk docker config my total used docker image storage goes up by a few gigs. Despite this, the container sizes stay reasonable. I searched around the internet a bit, and found info on pruning, but that doesn't seem to do the trick. I deleted all my containers, and the seemingly zombie used storage remains behind. After deleting the docker.img file, and rebuilding all containers the size is again reasonable, but each time I change settings on the splunk container it rises and space is not returned to free.

 

I assume there is something obvious I have not yet learned in regards to Docker, but my searches thus far have not led me in the right direction. Is there anything obvious that jumps out at you @andrew207? Thank you so much for your help!

Link to comment
  • 2 months later...
On 1/18/2021 at 1:51 PM, Mervin said:

First, thank you for creating this! It's super smooth!

I'm not sure if the issue I'm experiencing is due to something I'm doing wrong related to this splunk docker or if it is just a coincidence, but I notice it specifically here so figured I'd start here.

 

Any time I adjust ports or change paths on my splunk docker config my total used docker image storage goes up by a few gigs. Despite this, the container sizes stay reasonable. I searched around the internet a bit, and found info on pruning, but that doesn't seem to do the trick. I deleted all my containers, and the seemingly zombie used storage remains behind. After deleting the docker.img file, and rebuilding all containers the size is again reasonable, but each time I change settings on the splunk container it rises and space is not returned to free.

 

I assume there is something obvious I have not yet learned in regards to Docker, but my searches thus far have not led me in the right direction. Is there anything obvious that jumps out at you @andrew207? Thank you so much for your help!


Hi, did you ever find a solution to this?  I seem to be having issues with this container using up docker image space as well.
I can easily replicate it by doing a "force upgrade" on the container itself.

Link to comment

Hi @ShadeZeRO, I was able to replicate and figure out why.

This would be due to the way the container installs itself on startup. The container includes the installer grabbed directly from splunk.com during image build, and on first run it untars the installer. This causes your unRAID docker.img to grow every time you rebuild the image (i.e. "force upgrade", or real upgrade). If you have your indexed data stored in the container rather than on an external volume this will accentuate your docker.img disk usage.

The same occurs for other containers that have an installer process or that use their internal volumes extensively -- for example, most of the SteamCMD game-server containers that download big files on startup, anything that can generate a lot of data like Nessus or Minecraft, or a downloader (sab/deluge/etc) configured to download inside the container rather than to a volume. In all of those cases you'll also see your docker.img size increase a lot on upgrade/rebuild.


You can view the space used by running 'docker system df'. Here's mine; as you can see, I've been working hard to ignore this issue by having 98% reclaimable space in my local volumes.

[screenshot: docker system df output]

 

Running the following will reclaim all of this space. BE CAREFUL RUNNING IT -- THIS COMMAND WILL DELETE ALL STOPPED CONTAINERS AND ALL THEIR DATA. Read the docs first; there are probably safer flags to use.

 

docker system prune --volumes -f

 

Results speak for themselves :) lol

[screenshot: docker system df after pruning]

https://docs.docker.com/engine/reference/commandline/system_df/
https://docs.docker.com/engine/reference/commandline/system_prune/
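If you'd rather not touch stopped containers at all, narrower commands should reclaim most of the same space -- behaviour varies a little between Docker versions, so again, read the docs first:

docker volume prune    # unused local volumes only
docker image prune -a  # unused images only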

Link to comment
16 hours ago, andrew207 said:

If you have your indexed data stored in the container rather than on an external volume this will accentuate your docker.img disk usage.


Hi, thank you for looking into this.  Just for clarification:

My indexes were stored on /splunkdata (mapped to my disk pool on Unraid).


[screenshots: unRAID Docker template volume mappings]

 

Where does the image build script download/untar the installer to? Maybe that path needs to be mapped externally as well.
 

Link to comment

Unfortunately I tested that and it won't work -- Splunk is very particular about the permissions on a bunch of files and I was unable to get them working in a volume. I documented some of this in the readme on github. You'll get errors like KVStore failing to start, modular inputs failing to execute, some search commands not working -- a whole lot of pain.

 

I think the solution is to prune your volumes after upgrades as I previously described. Perhaps unRAID could add a feature to do this automatically, or add a GUI button for it.

 

I will note this in the readme on the next release.

Edited by andrew207
extra jngo
Link to comment
  • 7 months later...

Hi,

 

I am having some issues getting splunk set up for the HTTP Event Collector (HEC). I installed the template and changed the "DataPersist" to one of my data shares, and set it as my custom network type (not bridge).

 

I was able to post to one of the tokens using Postman.

 

I set up two tokens for HEC and attached these extra arguments to one of my containers

Quote

--log-driver=splunk --log-opt splunk-token=<TOKEN> --log-opt splunk-url=http://192.168.68.46:8088

 

The Docker container starts up fine (it failed with ECONNREFUSED before I had the HEC tokens enabled), but nothing is being written to Splunk. What else should I do to fix this?

 

My understanding is that if I set those flags above, the docker container should use the splunk log driver and automatically write all the logs that would appear in the Docker Popup Log window into Splunk?

Edited by 97WaterPolo
Link to comment

Hey @97WaterPolo, can you give me some more details please?

1. Is the HEC set up in the Splunk container?
2. Is the port exposed/mapped appropriately through unraid's config?
3. Are you able to successfully send a HEC message to Splunk (e.g. with Postman) and then see it as a search result in Splunk?
 

If you can complete #3 then that feels like the limit of what I can do -- I'm not really sure about setting up Splunk as a log output for other containers using the method you described. Generally I just mount the log directory of my other containers as a volume and have Splunk "monitor" those volumes (rough example below).
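For example (the container name, paths, index and sourcetype here are just placeholders): map the other container's log directory into the Splunk container read-only, then monitor it.

# extra volume on the Splunk container's template / docker run
-v /mnt/user/appdata/nginx/log:/hostlogs/nginx:ro

# inputs.conf in the Splunk container
[monitor:///hostlogs/nginx]
index = docker_logs
sourcetype = nginx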

Link to comment

Hi @andrew207,

 

Thanks for the quick reply!

 

Yes, I am able to complete #3 -- I can successfully send a HEC message via Postman and search it. I followed the documentation at https://docs.docker.com/config/containers/logging/splunk/, where I should be able to use

Quote

docker run --log-driver=splunk --log-opt splunk-token=VALUE --log-opt splunk-url=VALUE

and append those arguments to the end and have it write to the HEC.

 

Could you go into some detail on how you set up your containers with the volumes? One of the reasons I really liked the HEC concept was that all I had to do was generate a token for each of my 8 containers and change that argument, and it would be searchable by the token name. I'm not opposed to your approach, but I would just like to get all my Docker container logs in a single place and sort through them.

 

EDIT: Never mind, I got it to work using the HEC via the CMD line arguments! Thanks!

Edited by 97WaterPolo
Link to comment

@97WaterPolo this is really cool, I haven't seen it before. Back in the old days we had to do some hack with fluentd to get this type of logging working.

 

Anyway, it worked immediately for me; attached is a pic of the logs for my scrutiny container in both Splunk and in the logs view in unRAID. My scrutiny container runs on a "bridged" network.

 

I just pasted those settings into the "Extra Parameters" section in the container config and it worked right away.

--log-driver=splunk --log-opt splunk-token=9cfe33zz-zzzz-zzzz-zzzz-zzzzzzzzzzzz --log-opt splunk-url=http://<unraidipaddress>:8088

 

[screenshot: scrutiny container logs shown in Splunk and the unRAID log viewer]

 

It sounds like you're having an issue with your custom network type. Perhaps you could try bridge mode on your container, or try routing through a proxy -- I'm not the right person to advise on Docker networking, sorry! But I'm happy to test what I can for you.

 


Link to comment
  • 1 month later...

This is probably a docker noob question, but I haven't run into this problem with other dockers yet. The container will start, but the log says :

 

ERROR: Couldn't read "/opt/splunk/etc/splunk-launch.conf" -- maybe $SPLUNK_HOME or $SPLUNK_ETC is set wrong?

 

I assume this is because I'm not mapping my volumes correctly. I've read through this thread and done some internet searching, but I can't figure it out. Can anyone set me straight? Here are my volume mappings:

 

[screenshot: Docker volume mappings]

Link to comment