andrew207

  1. @97WaterPolo this is really cool, I hadn't seen it before. Back in the old days we had to do a hack with fluentd to get this type of logging working. Anyway, it worked immediately for me; attached is a pic of the logs for my Scrutiny container, both in Splunk and in the logs view in unRAID. My Scrutiny container runs as a "bridged" network. I just pasted those settings into the "Extra Parameters" section in the container config and it worked right away: --log-driver=splunk --log-opt splunk-token=9cfe33zz-zzzz-zzzz-zzzz-zzzzzzzzzzzz --log-opt splunk-url=http://<unraidipaddress>:8088 It sounds like you're having an issue with your custom network type. Perhaps you could try bridge mode on your container, or try routing through a proxy. I'm not the right person to advise on Docker networking, sorry, but I'm happy to test what I can for you.
  2. Hey @97WaterPolo, can you give me some more details please? 1. Is the HEC set up in the Splunk container? 2. Is the port exposed/mapped appropriately through unRAID's config? 3. Are you able to successfully send a HEC message to Splunk (e.g. with Postman) and then see it as a search result in Splunk? If you can complete #3 then that's about the limit of what I can do; I'm not really sure about setting up Splunk as a log output for other containers using the method you described. Generally I just mount the log directory of my other containers as a volume and have Splunk "monitor" those volumes.
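For #3, a quick way to exercise the HEC without Postman is curl. A minimal sketch -- the IP, port, and token below are placeholders you'd swap for your own:

```shell
# Placeholder values -- replace with your unRAID IP and your real HEC token.
HEC_URL="http://192.168.1.10:8088/services/collector/event"
HEC_TOKEN="00000000-0000-0000-0000-000000000000"
PAYLOAD='{"event": "hello from unRAID", "sourcetype": "hec_test"}'

# A working HEC replies with {"text":"Success","code":0}.
# The || keeps the script going if the endpoint isn't reachable yet.
curl -k -s --max-time 5 "$HEC_URL" \
  -H "Authorization: Splunk $HEC_TOKEN" \
  -d "$PAYLOAD" || echo "HEC not reachable"
```

If that comes back with Success, searching for sourcetype=hec_test in Splunk should show the event end to end.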
  3. Using Monica, the web server doesn't seem to bind appropriately:
     Monica v3.2.0 is set up, enjoy. Database ready.
     AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1. Set the 'ServerName' directive globally to suppress this message
     (98)Address already in use: AH00072: make_sock: could not bind to address [::]:80
     (98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80
     no listening sockets available, shutting down
     Nothing is written to the "logs" folder in the default volume. In the unRAID UI, port 80 is mapped to an unused high port. I couldn't find any environment variables or volumised config for changing port 80 to something else. Any ideas? Probably something silly!
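If it helps anyone debugging the same thing: a quick way to see what's already holding port 80 inside the container is to shell in (docker exec -it <container> sh) and check the listening sockets. ss may not be installed in minimal images, so this falls back to netstat:

```shell
# Show whatever (if anything) is listening on port 80.
# ss is preferred; minimal images may only ship netstat, or neither.
{ ss -tlnp 2>/dev/null || netstat -tlnp 2>/dev/null; } | grep ':80 ' \
  || echo "nothing visibly bound to :80"
```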
  4. Unfortunately I tested that and it won't work -- Splunk is very particular about the permissions on a bunch of files and I was unable to get them working in a volume. I documented some of this in the readme on GitHub. You'll get errors like KVStore failing to start, modular inputs failing to execute, some search commands not working -- a whole lot of pain. I think the solution is to prune your volumes after upgrades as I previously described. Perhaps unRAID could add a feature to do this automatically, or add a GUI button for it. I will note this in the readme in the next release.
  5. Hi @ShadeZeRO, I was able to replicate this and figure out why. It's due to the way the container installs itself on startup: the container includes the installer grabbed directly from splunk.com during image build, and on first run it untars the installer. This causes your unRAID docker.img to grow every time you rebuild the image (i.e. "force upgrade", or a real upgrade). If your indexed data is stored in the container rather than on an external volume, this will accentuate your docker.img disk usage. The same occurs for other containers that have an installer process (or use their internal volumes extensively): for example, most of the SteamCMD game server containers that download big files on startup, anything that can generate a lot of data like Nessus or Minecraft, or a downloader (sab/deluge/etc) configured to download inside the container rather than to a volume -- all of these will make your docker.img size increase a lot on upgrade/rebuild. You can view the space used by running 'docker system df'. Here's mine; as you can see I've been working hard to ignore this issue by having 98% reclaimable space in my local volumes. Running the following will reclaim all this space. BE CAREFUL RUNNING IT, THIS COMMAND WILL DELETE ALL STOPPED CONTAINERS AND ALL THEIR DATA. Read the docs etc, there are probably safer flags to use.
     docker system prune --volumes -f
     Results speak for themselves lol
     https://docs.docker.com/engine/reference/commandline/system_df/
     https://docs.docker.com/engine/reference/commandline/system_prune/
  6. Hey @Caduceus Thanks for posting your self-solve there. What you posted makes sense: if you're ingesting data that's older than your frozenTimePeriodInSecs, it'll freeze off pretty quickly. And yes, I did notice my little quick-fix attempt didn't work (fortunately I noticed before pushing it to master), so I'll follow your lead and just disable the app entirely :). Thanks for that. Happy to help with any other questions, and I appreciate you reading through past answers to work through the issues you were hitting.
  7. Hey @Caduceus, this seems to be something new Splunk added in the latest update that will only work on systemd-based systems. I've just committed a change to disable this in the container. Commit is here: https://github.com/andrew207/splunk/commit/ebf5f696c1458fd9ca0be3402f0fc930d2cfd1a2 Such is life living on the bleeding edge! Thanks for pointing this out. You can switch to the 8.1.0 branch/Docker Hub tag if you want to try the fix now; otherwise I'll push it to master in the next couple of days.
  8. Hey @Moka First, you'll need to configure Splunk to listen on a port for syslog. You can do this in the GUI by following the instructions in their documentation: https://docs.splunk.com/Documentation/Splunk/latest/Data/Monitornetworkports#Add_a_network_input_using_Splunk_Web Once you've got Splunk listening on a port for syslog, you'll need to make sure your Docker template config in unRAID exposes that port. You might need to add a "port" to your Docker template (the same way the default template has port 8000 added for the web interface). Once you've got Splunk listening and that port added to your Docker template, you can configure your network devices to send their syslog to Splunk by following their documentation, making sure to use the port you configured in the unRAID Docker template. Once configured, you'll be able to search the new syslog data in Splunk.
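Under the hood, that GUI step just writes an inputs.conf stanza. Here's a sketch of what it ends up looking like -- the app directory here is illustrative (in the container it lands under /opt/splunk/etc/apps/<app>/local/), and the sourcetype is an example:

```shell
# Write an example of the inputs.conf the GUI generates for a UDP syslog input.
mkdir -p ./example-app/local
cat > ./example-app/local/inputs.conf <<'EOF'
[udp://514]
sourcetype = syslog
connection_host = ip
EOF

# Confirm the stanza is present
grep '^\[udp' ./example-app/local/inputs.conf
```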
  9. @VictorCharlie Thanks for the extra info! Pretty sure I've got it solved. When this app (and any properly configured app) saves passwords, they are encrypted using your splunk.secret key. By design this key is unique to each Splunk installation, so when you rebuild the container it reinstalls itself and generates a new key. To fix this you should be able to manually add a volume for the single file required: /opt/splunk/etc/auth/splunk.secret. Make that specific file (or the whole auth directory) a volume in your unRAID Docker template config and your Tesla app should persist nicely. For the next release I'll add a default volume for the key file or the whole auth directory once I figure out which is best, because this affects all apps that use encrypted passwords.
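In unRAID you'd add this as a Path in the Docker template; the equivalent docker -v flag looks like the fragment below. The host path is an assumption -- adjust it to wherever your appdata actually lives:

```shell
# Hypothetical host path -- adjust to your appdata layout.
# Mapping the whole auth directory (which contains splunk.secret) keeps the
# encryption key stable across container rebuilds:
-v /mnt/user/appdata/splunk/auth:/opt/splunk/etc/auth
```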
  10. Hey @VictorCharlie, the Tesla app seems to use data models to present its data, and it does not create the "tesla" index it uses by default. That creates two bits of work you need to do yourself in addition to the documented process of filling out your API keys. I don't know how much you know about Splunk, so I'll be specific.
      1. Ensure data is being persisted. You'll need to ensure the indexed data is persisted by filling out the DataPersist param in the config for the container in the unRAID UI.
      2. Create the tesla index (this seems to be required, as all the data models have index=tesla hardcoded). From the GUI, Settings --> Indexes, create a new index called "tesla". Make sure you use the default directories, which will end up in your DataPersist directory (or modify them and ensure they remain in there).
      The Tesla app doesn't seem to use any KVStore; rather it uses data models, which persist fine in the container as long as you have DataPersist set in your unRAID Docker template. I can't see any reason it wouldn't work if the above are followed. I can't test because unfortunately I don't own a Tesla and therefore don't have API keys etc. Feel free to take this to PM if you want me to debug further; perhaps I can use your keys to test it.
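If you'd rather define the index in config than click through the GUI (so it survives rebuilds alongside your other persistent config), a sketch -- the app directory name is illustrative, and the paths rely on Splunk's own $SPLUNK_DB variable:

```shell
# Persistent definition of the "tesla" index.
# In the container this would live under /opt/splunk/etc/apps/<app>/local/.
mkdir -p ./tesla-example/local
cat > ./tesla-example/local/indexes.conf <<'EOF'
[tesla]
homePath   = $SPLUNK_DB/tesla/db
coldPath   = $SPLUNK_DB/tesla/colddb
thawedPath = $SPLUNK_DB/tesla/thaweddb
EOF

# Confirm the stanza is present
grep '^\[tesla\]' ./tesla-example/local/indexes.conf
```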
  11. Yeah, I've just started getting this one too. I ran memtest and got 2 full successful passes, not really sure what to do.
      :~# mcelog
      mcelog: ERROR: AMD Processor family 23: mcelog does not support this processor.
      Please use the edac_mce_amd module instead.
      CPU is unsupported
  12. edit: I've just updated the container from Splunk 8.0.4 to 8.0.5, so you should be able to test a real upgrade once it pushes out to Docker Hub/unRAID. Worst case scenario (meaning you're outside 60 days and the free license doesn't take for some reason), you're right, it'll stop ingestion and pop up with errors. Yes, you can always change the license type in the GUI, but realistically if you set up a persistent server.conf it'll be fine. If it works for you once it'll keep working; Splunk is an enterprise-grade product and it's generally good at applying the config you set, as long as it's in the right place. Forwarders will cache up to a limit and then "open the flood gates" once the server is accepting data again. By default a forwarder buffers events in a small in-memory queue (around 500KB, since it's designed to have a minimal footprint); if you need more headroom you can enable "persistent queues", which spill to disk and are configured per input. https://docs.splunk.com/Documentation/Splunk/8.0.4/Data/Usepersistentqueues. Remember, queuing only really applies when log files aren't just sitting on disk -- if you're monitoring, for example, nginx log files, it doesn't matter how long the server isn't accepting data; as long as the log file hasn't rolled, Splunk will still have a checkpoint in the file where it will continue from. If the bandwidth from the "opening floodgates" is a concern, you can also set maxKBps in limits.conf to put a firm limit on the bandwidth consumed by the forwarder -- https://docs.splunk.com/Documentation/Splunk/8.0.4/Admin/Limitsconf. "Delete all default indexes from disk" means delete basically the following indexes: audit, main, _internal, _introspection, _telemetry. Yes, this means data loss -- however, ideally you would be storing your data in a non-default index; for example, I store my pfSense logs in a "pfsense" index. The index is also defined with a persistent indexes.conf file, which means that even if you need to reset your license (which it doesn't sound like you will if you set up a free one properly) it'll stay around. Also note that even if you don't persist your index properly and it disappears from the UI, the data won't get deleted; when you re-add the index the data will all magically appear again. Basically, if you have all the pieces in place it'll be fine. For you this sounds like:
      1. Define indexes.conf on your server to split hotwarm and cold to separate Docker volumes.
      2. Define indexes.conf on your server to add new non-default index(es) for your data, and ensure your forwarders are sending to those indexes (see inputs.conf definitions).
      3. Consider upping the persistent cache limit on your forwarder, and consider limiting the bandwidth (I generally just reduce it from the default to something crazy high like 10MBps, so things won't break if the endpoint starts logging like mad and decides to redline for a few days).
      4. Double check the config by swapping Docker branches from :latest to :8.0.4 to force a full container redownload, simulating an upgrade and testing all your persistent config.
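To make point 3 concrete, here's a sketch of the forwarder-side limits.conf; 10240 KBps is the "crazy high" ~10MBps cap mentioned above. The local directory here stands in for $SPLUNK_HOME/etc/system/local (or an app) on the forwarder:

```shell
# Forwarder-side bandwidth cap. On a real forwarder this goes in
# $SPLUNK_HOME/etc/system/local/limits.conf (or an app's local dir).
mkdir -p ./uf-example/local
cat > ./uf-example/local/limits.conf <<'EOF'
[thruput]
# ~10 MB/s cap so a runaway endpoint can't saturate the link
maxKBps = 10240
EOF

# Confirm the setting took
grep 'maxKBps' ./uf-example/local/limits.conf
```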
  13. Yeah, it's set to Enterprise by default because that's what I use. Persisting the free license is a little more awkward than, for example, modifying your bucket locations, due to the precedence Splunk reads config files with. There is a server.conf file located in $SPLUNK_HOME/etc/system/local that contains your license details, and this file takes precedence over anything you put in $SPLUNK_HOME/etc/apps/. You could make this file (or folder) a volume in your Docker config, then modify the file to instruct Splunk to operate under a free license. Such a server.conf entry might look like this:
      [lmpool:auto_generated_pool_free]
      description = auto_generated_pool_free
      quota = MAX
      slaves = *
      stack_id = free
      [license]
      active_group = Free
      So the TL;DR is: set your license type to "free" in the GUI, then ensure the file $SPLUNK_HOME/etc/system/local/server.conf is persisted in a Docker volume. If you do this, your free license will work fine and will persist through container updates/autoupdates. The only reason it wouldn't work is if the config got messed up somehow. It's easy to check: just restart Splunk and make sure you're still on the free license; you can even swap your Dockerfile from the :latest branch to the :8.0.4 branch to force it to redownload the whole image, and if you're still on free after that then you're golden. If your config didn't apply for whatever reason (e.g. you removed read permissions on the file), Splunk will start up under the trial license. Splunk checks a bunch of things to figure out the 60 day limit (I discuss how to reset it in the GitHub readme), but basically if you're within the 60 days all is good and you can change back to free and debug your issue; if you're over 60 days it will stop ingesting data and you will be unable to search non-internal indexes. You'll also get an alert banner on the login page and a heap of messages in the interface telling you so. Thanks
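Written out as a file, that looks like the sketch below. The local directory stands in for the container's /opt/splunk/etc/system/local, which is the path you'd make a volume:

```shell
# The free-license server.conf described above. In the container this
# belongs at /opt/splunk/etc/system/local/server.conf (persisted as a volume).
mkdir -p ./system-local-example
cat > ./system-local-example/server.conf <<'EOF'
[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
quota = MAX
slaves = *
stack_id = free

[license]
active_group = Free
EOF

# Confirm the license group is set
grep 'active_group' ./system-local-example/server.conf
```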
  14. Hey @4554551n, thanks for your interest, here are some answers to your questions.
      > Resetting trial license
      Yeah sure, you can set it to the free license if you want. Whenever you upgrade the container you'll just need to set it to the free license again.
      > Splunk index data location / splitting hotwarm and cold
      You can't split hot and warm, but you can split hot/warm and cold. With Splunk there are a lot of ways to split cold data off into its own location; I'd use "volumes". Here's a link to the spec for the config file we'll be editing: https://docs.splunk.com/Documentation/Splunk/8.0.4/Admin/Indexesconf In my Docker startup script I run code to change the default SPLUNK_DB location to the /splunkdata mount the container uses. SPLUNK_DB contains both hot/warm and cold. OPTIMISTIC_ABOUT_FILE_LOCKING fixes an unrelated bug. We set this in splunk-launch.conf, meaning the SPLUNK_DB variable is set at startup and is persistent through the whole Splunk ecosystem. As you correctly identified from the Splunk docs, SPLUNK_DB is used as the storage location for all index data and all buckets by default; this config was made to split it off into a volume.
      printf "\nOPTIMISTIC_ABOUT_FILE_LOCKING = 1\nSPLUNK_DB=/splunkdata" >> $SPLUNK_HOME/etc/splunk-launch.conf
      1. Create a new volume in your Docker config to store your cold data (e.g. /mynewcoldlocation).
      2. Create an indexes.conf file, preferably in a persistent location such as $SPLUNK_HOME/etc/apps/<app>/default/indexes.conf.
      3. Define hotwarm / cold volumes in your new indexes.conf. Here's an example:
      [volume:hotwarm]
      path = /splunkdata
      # Roughly 3GB in MB
      maxVolumeDataSizeMB = 3072
      [volume:cold]
      path = /mynewcoldlocation
      # Roughly 50GB in MB
      maxVolumeDataSizeMB = 51200
      It would be up to you to ensure /splunkdata is stored on your cache disk and /mynewcoldlocation is in your array, as defined in your Docker config for this container.
      4.
Configure your indexes to utilise those volumes by default by updating the same indexes.conf file:
      [default]
      # 365 days in seconds
      frozenTimePeriodInSecs = 31536000
      homePath = volume:hotwarm/$_index_name/db
      coldPath = volume:cold/$_index_name/colddb
      # Unfortunately we can't use volumes for thawedPath, so we need to hardcode the directory.
      # Chances are you won't need this anyway unless you thaw "frozen" data back from an offline disk.
      thawedPath = /mynewFROZENlocation/$_index_name/thaweddb
      # Tstats summaries should reside on the fastest disk for maximum performance
      tstatsHomePath = volume:hotwarm/$_index_name/datamodel_summary
      5. Remember that Splunk's internal indexes won't follow config in [default], so if we want Splunk's own self-logging to follow these rules we need to hard-code it:
      [_internal]
      # 90 days in seconds
      frozenTimePeriodInSecs = 7776000
      # Override defaults set in $SPLUNK_HOME/etc/system/default/indexes.conf
      homePath = volume:hotwarm/_internaldb/db
      coldPath = volume:cold/_internaldb/colddb
      [_audit]
      # 90 days in seconds
      frozenTimePeriodInSecs = 7776000
      # Override defaults set in $SPLUNK_HOME/etc/system/default/indexes.conf
      homePath = volume:hotwarm/audit/db
      coldPath = volume:cold/audit/colddb
      # ... etc for other indexes you want to honour this.
      > Freezing buckets
      When data freezes it is deleted unless you tell it where to go. You'll see in my config above I set "maxVolumeDataSizeMB" and "frozenTimePeriodInSecs". For our volumes, once the entire volume hits that size it'll start moving buckets to the next tier (hotwarm --> cold --> frozen). Additionally, each individual index has similar max-size config that controls how quickly data freezes off; iirc the default is 500GB. Each individual index can also have a "frozenTimePeriodInSecs", which will freeze data once it hits a certain age.
If you have this set, data will freeze either when it is the oldest bucket and you've hit your maxVolumeDataSizeMB, or when it's older than frozenTimePeriodInSecs. When data freezes it is deleted unless you tell it where to go. The easiest way to tell it where to go is by setting a coldToFrozenDir in your indexes.conf for every index. For example, say in our same indexes.conf we have an index called "web"; it might look like this (here's some doco to explain further: https://docs.splunk.com/Documentation/Splunk/8.0.4/Indexer/Automatearchiving):
      [web]
      # 90 days in seconds
      frozenTimePeriodInSecs = 7776000
      coldToFrozenDir = /myfrozenlocation
      # alternatively, coldToFrozenScript = /mover.sh
      Hope this helps.
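Pulling steps 3 and 4 together, the whole file looks like the sketch below. Sizes and paths mirror the example above; the local directory stands in for $SPLUNK_HOME/etc/apps/<app>/default in the container:

```shell
# The volume + [default] stanzas from steps 3-4 as one indexes.conf.
mkdir -p ./indexes-example/default
cat > ./indexes-example/default/indexes.conf <<'EOF'
[volume:hotwarm]
path = /splunkdata
maxVolumeDataSizeMB = 3072

[volume:cold]
path = /mynewcoldlocation
maxVolumeDataSizeMB = 51200

[default]
# 365 days in seconds
frozenTimePeriodInSecs = 31536000
homePath = volume:hotwarm/$_index_name/db
coldPath = volume:cold/$_index_name/colddb
EOF

# Count the stanzas we just wrote -- prints 3
grep -c '^\[' ./indexes-example/default/indexes.conf
```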
  15. @loheiman Nice job figuring out you need to define UDP 514 as a Data Input through the user interface. This config will be applied in an "inputs.conf" file in /opt/splunk/etc/apps/*/local/inputs.conf (where * is whatever app context you were in when you created the input; defaults to "search" or "launcher"), so as long as you used the ConfigPersist volume configuration in the unRAID Docker template you're all good there -- even the default is fine. 9997 is the port used by default for Splunk's own (somewhat) proprietary TCP format. It supports TLS and compression, which is why it's generally preferred. This port is generally used by Splunk's "Universal Forwarder", an agent that gets installed on endpoints you want to monitor. I say "somewhat" proprietary because several third parties have implemented Splunk's TCP input stream in their own commercial applications, so theoretically anyone could too. Splunk can also listen for HTTP or HTTPS data using an HTTP Event Collector data input, which defaults to port 8088. HTTP is available because, similar to syslog over 514, HTTP is very universal and supported by just about everything.
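For reference, the stanzas behind those other two ports look roughly like this. This is a sketch only -- the GUI writes these for you, the input name and token are placeholders, and the exact app directory varies:

```shell
# Illustrative inputs.conf stanzas for the Splunk-to-Splunk port and the HEC.
mkdir -p ./ports-example/local
cat > ./ports-example/local/inputs.conf <<'EOF'
# Splunk's own TCP format, used by Universal Forwarders
[splunktcp://9997]

# HTTP Event Collector input; the token below is a placeholder
[http://my_hec_input]
token = 00000000-0000-0000-0000-000000000000
EOF

# Show the two port stanzas
grep '://' ./ports-example/local/inputs.conf
```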