About andrew207


  1. Hey @Caduceus Thanks for posting your self-solve there. What you posted makes sense: if you're ingesting data that's older than your frozenTimePeriodInSecs, it'll freeze off pretty quickly. And yes, I did notice my little quick-fix attempt didn't work (fortunately I caught it before pushing to master), so I'll follow your lead and just disable the app entirely :). Thanks for that. Happy to help with any other questions, and I appreciate you reading through past answers to work through the issues you were hitting.
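     For anyone hitting the same thing: retention is governed per index in indexes.conf. A minimal sketch; the index name and 30-day value are illustrative, not from this thread:

     ```ini
     # indexes.conf -- per-index retention (illustrative values)
     [main]
     # Buckets whose newest event is older than this many seconds are frozen
     # (deleted by default, unless coldToFrozenDir/coldToFrozenScript is set).
     frozenTimePeriodInSecs = 2592000
     ```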
  2. Hey @Caduceus, this seems to be something new Splunk added in the latest update that will only work on systemd-based systems. I've just committed a change to disable this in the container. Commit is here: Such is life living on the bleeding edge! Thanks for pointing this out. You can switch to the 8.1.0 branch/dockerhub tag if you want to try the fix now, otherwise I'll push it to master in the next couple of days.
  3. Hey @Moka First, you'll need to configure Splunk to listen on a port for syslog. You can do this by following the instructions in their documentation, or directly in the GUI. Once Splunk is listening on a port for syslog, you'll need to make sure your Docker template config in unRAID exposes that port. You might need to add a "port" to your Docker template (the same way the default template has port 8000 added for the web interface). Once you've
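     If you'd rather see what the GUI writes, a syslog input on UDP 514 ends up as an inputs.conf stanza roughly like this (the index name here is an assumption; pick your own):

     ```ini
     # inputs.conf -- UDP syslog listener (the GUI writes this into an app's local/ dir)
     [udp://514]
     sourcetype = syslog
     index = main           # assumption: route syslog to main
     connection_host = ip   # record the sender's IP as the host field
     ```

     Remember the container still needs the matching 514/udp port mapping in the unRAID template or the traffic never reaches Splunk.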
  4. @VictorCharlie Thanks for the extra info! Pretty sure I've got it solved. When this app (like any properly configured app) saves passwords, they are encrypted using your splunk.secret key. By design this key is unique to every Splunk installation, so when you rebuild the container it reinstalls itself and generates a new key. To fix this you should be able to manually add a volume for the single file required: /opt/splunk/etc/auth/splunk.secret. Make that specific file (or the whole auth directory) a volume in your unRAID docker template config and your tesla app should persist nic
  5. Hey @VictorCharlie, the tesla app seems to use data models to present its data, and it does not create the "tesla" index it uses by default. This creates two bits of work you need to do yourself in addition to the documented process of filling out your API keys. I don't know how much you know about Splunk, so I'll be specific in my responses. 1. Ensure data is being persisted You'll need to ensure that the indexed data is being persisted by filling out the DataPersist param in the config for the container in the unRAID UI. 2. Create the tesla index (this seems to b
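     The index-creation step can be done in the GUI (Settings > Indexes) or with a small indexes.conf stanza. A sketch; the paths are an assumption based on the /splunkdata DataPersist volume mentioned above:

     ```ini
     # indexes.conf -- create the "tesla" index the app expects
     [tesla]
     homePath   = /splunkdata/tesla/db
     coldPath   = /splunkdata/tesla/colddb
     thawedPath = /splunkdata/tesla/thaweddb
     ```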
  6. Yeah, I've just started getting this one too. I ran memtest and got 2 full successful passes; not really sure what to do. :~# mcelog mcelog: ERROR: AMD Processor family 23: mcelog does not support this processor. Please use the edac_mce_amd module instead. CPU is unsupported
  7. edit: I've just updated the container from 8.0.4 to Splunk 8.0.5, so you should be able to test a real upgrade once it pushes out to dockerhub/unraid. Worst case scenario (meaning you're outside 60 days and the free license doesn't take for some reason) is it'll stop ingestion and pop up with errors, you're right. Yes, you can always change the license type in the GUI, but realistically if you set up a persistent server.conf it'll be fine. If it works for you once it'll keep working; Splunk is an enterprise-grade product and it's generally good at applying the config you se
  8. Yeah, it's set to enterprise by default because that's what I use. Persisting the free license is a little more awkward than, for example, modifying your bucket locations, due to the precedence Splunk reads config files with. There is a server.conf file located in $SPLUNK_HOME/etc/system/local that contains your license details, and this file will take precedence over anything you put in $SPLUNK_HOME/etc/apps/. You could make this file (or folder) a volume in your docker config, then modify the file to instruct Splunk to operate under a free license, such a server.conf entry might loo
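     For reference, a server.conf entry that switches Splunk to the free license group looks roughly like this (verify the stanza against the server.conf spec for your Splunk version):

     ```ini
     # server.conf -- run under the Free license group
     [general]
     active_group = Free
     ```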
  9. Hey @4554551n thanks for your interest, here are some answers to your questions. > Resetting trial license Yeah sure, you can set it to the free license if you want. Whenever you upgrade the container you'll just need to set it to the free license again. > Splunk index data location / splitting hotwarm and cold You can't split hot and warm, but you can split hot/warm and cold. With Splunk there are a lot of ways to split cold data off into its own location; I'd use "volumes". Here's a link to the spec for the config file we'll be editing:
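     The "volumes" approach looks roughly like this in indexes.conf; the mount point and size cap are illustrative assumptions:

     ```ini
     # indexes.conf -- send cold buckets to their own storage via a volume
     [volume:cold_store]
     path = /mnt/slow_disk/splunk_cold   # assumption: larger/slower array mount
     maxVolumeDataSizeMB = 500000        # optional cap on total cold usage

     [main]
     homePath   = $SPLUNK_DB/defaultdb/db
     coldPath   = volume:cold_store/defaultdb/colddb
     thawedPath = $SPLUNK_DB/defaultdb/thaweddb
     ```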
  10. @loheiman Nice job figuring out you need to define UDP 514 as a Data Input through the user interface. This config will be applied in an "inputs.conf" file in /opt/splunk/etc/apps/*/local/inputs.conf (where * is whatever app context you were in when you created the input, defaults to "search" or "launcher"), so as long as you used the ConfigPersist volume configuration in the UnRAID Docker template you're all good there -- even the default is fine. 9997 is the port used by default for Splunk's own (somewhat) proprietary TCP format. It supports TLS and compression which
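     For reference, the 9997 receiver is its own stanza type, distinct from the plain UDP input above; a minimal sketch:

     ```ini
     # inputs.conf -- listen for Splunk's forwarder-to-indexer protocol
     [splunktcp://9997]
     disabled = 0
     # the TLS variant uses a [splunktcp-ssl:9997] stanza plus an [SSL] stanza
     # with your cert paths -- see the inputs.conf spec for the details
     ```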
  11. It will wipe anything that's not in a volume. So make sure you map your /splunkdata to preserve your indexes, and make sure your settings are in /etc/apps/* (and not /etc/system/local/*) as well as ensuring /etc/apps is a volume as well. In the UnRAID template XML I have called these two "DataPersist" and "ConfigPersist". Your KVStore will be wiped as it is not in a volume by default. If you want to retain your KVStore (tbh I just use lookups instead for the simple things I do in Docker) you'll just need to do a backup + restore which is simple too --
  12. @Phoenix Down Yes this installs the 60 day free enterprise trial license. In the Github repo readme there are instructions on how to reset the license when it runs out.
  13. @tkohhh I'm testing a fix for this now. It'll be available temporarily under the :tzfix dockerhub tag, and I'll push it to master (and update this post) later this arvo once I've fully tested it. The issue seems to be related to my recent update to Alpine Linux -- they changed the way timezones are applied and I didn't notice. EDIT: Now working fine. Update your container and set your timezone in user preferences in the UI, and you should be all good.
  14. TL;DR: seems to be a UI bug, unsure what's causing it. Am looking into it. Generally there are 3 things to consider: 1. The timezone of your log source. If the source data has a correct TZ specified in it then you shouldn't need to change anything. Splunk will use this timestamp and create a metadata timestamp in GMT for storage and retrieval. 2. The timezone of your Splunk server. Generally servers are set in UTC, but unRAID docker will apply an environment variable that is set to your TZ. This container always runs the server in GMT (per Splunk best pra
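     If your source data has no timezone in it, point 1 is usually handled with a TZ override in props.conf; a sketch, where the sourcetype name and zone are examples, not from this thread:

     ```ini
     # props.conf -- force a timezone for a source that omits one
     [my_syslog_sourcetype]   # assumption: replace with your actual sourcetype
     TZ = Australia/Sydney    # without this, Splunk falls back to the server TZ
     ```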
  15. When you set any user's timezone in the UI, it is saved in a file under /opt/splunk/etc/users/ (user-prefs.conf). You'll need to put this file in a volume if you want timezone changes to persist after a restart. I suggest making /opt/splunk/etc/users/ a volume, as making the whole /opt/splunk/etc parent directory a volume will cause other issues. Perhaps in the next version I'll add this to the UnRAID template.