andrew207
Everything posted by andrew207

  1. Thanks @napalmyourmom, I'll add persisting the license and user folders as options in the unRAID GUI in a future update (should be 9.0.2, shortly after Xmas).
  2. God dammit you're right @HQToast. It certainly looks like the answer. Might suggest the devs write a message to syslog :P. Cheers.
  3. At some point it changed from every night to once a week, I have no idea why. It still happens reliably once a week at ~3AM, same as before. My specific problem with this downtime was that a few automations would kick off in Home Assistant when sensors dropped to zero as the Mosquitto container died; I was able to work around that with application logic that ignores triggers when everything is reporting zero. So no, I never resolved it, but the problem has eased up and I've worked around it.
  4. That should work. There are two things you can check:

1. Make sure the port is actually open/listening. Just open a terminal to the Docker container and check netstat to make sure port 514 is open and listening. I just added 514 to my server, you can see it listed below as being open.

/opt/splunk # netstat -anp | grep splunkd | grep 'LISTEN\|udp'
tcp        0      0 0.0.0.0:9997    0.0.0.0:*    LISTEN    71/splunkd
tcp        0      0 0.0.0.0:8000    0.0.0.0:*    LISTEN    71/splunkd
tcp        0      0 0.0.0.0:8089    0.0.0.0:*    LISTEN    71/splunkd
tcp        0      0 0.0.0.0:8088    0.0.0.0:*    LISTEN    71/splunkd
udp        0      0 0.0.0.0:514     0.0.0.0:*              71/splunkd
/opt/splunk #

2. Send a test syslog message. I'm sending to 10514 because that's what I mapped the port to as described in my previous post. You should send to your custom IP:514, because that's what you have configured. Something like:

~ >>> nc -v -u 192.168.1.121 10514
hi, i'm sending some data to my syslog listener
wow!
^CExiting.
~ took 10s >>>

Check if it lands in the index. It should, mine did just fine. If this works for you then the problem lies elsewhere, perhaps ask in a broader unRAID support section about remote syslogging.
  5. Hey @tknx. First, configure Splunk to listen for TCP/UDP 514, and make sure you set an index for the data. https://docs.splunk.com/Documentation/Splunk/latest/Data/Monitornetworkports#Add_a_network_input_using_Splunk_Web Then add a port mapping in your Docker config for the Splunk container. In the edit screen for the container, click "add a port, path, variable, label, or device" and fill it out appropriately, perhaps like the image I've attached. In my example I've mapped the container's port 514 to the local br0 port 10514. In unRAID's syslog settings (Settings --> Syslog), under "remote syslog server", type in the IP of your unRAID server, and set the port to the mapped port, in our example 10514. Then the data should be available in Splunk under the index you specified when creating the listener. Hope this helps.
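For reference, outside of the unRAID template UI that mapping is just a normal Docker port publish. Something like the sketch below (the image name is a placeholder, and in unRAID you'd do this through the template rather than docker run):

docker run -d \
  --name splunk \
  -p 8000:8000 \
  -p 10514:514/udp \
  <your-splunk-image>

Then point unRAID's remote syslog server at <unraid-ip>:10514, as above.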
  6. unRAID PRO 6.9.2, ASRock X370, Ryzen 5 1600X (C-states disabled), 16GB RAM.

Suspicions started when Home Assistant was showing sudden drops to zero on all my sensors, including those sourced from scripts and MQTT from local sources. So I got Uptime Kuma to track as many services as I can. Sure enough, just about every day at 3AM and/or midnight the 60-second health checks from Uptime Kuma say everything is down, generally "ECONNREFUSED" for local services (e.g. MariaDB, Mosquitto, Nginx) and "Client network socket disconnected before secure TLS connection was established" for external services (e.g. google.com, 1.1.1.1). Sometimes Uptime Kuma misses the downtime spikes, maybe because it only checks every 60 seconds, but Home Assistant sees them every single time because my sensors all post every 5 seconds.

There are no cronjobs at this time, and there is nothing in syslog -- seriously, the most boring syslog ever. Actually nothing in syslog pertaining to this at all. Always midnight, 3AM, or both. Below are a bunch of screenshots showing stuff, as well as syslog from late last night to now, where things all went offline like normal. Does anyone have any ideas?

root@rack:~# tail -n50 /var/log/syslog
Mar 16 22:00:12 rack emhttpd: spinning down /dev/sdg
Mar 16 22:00:14 rack emhttpd: spinning down /dev/sdj
Mar 16 22:00:14 rack emhttpd: spinning down /dev/sdh
Mar 16 22:00:14 rack emhttpd: spinning down /dev/sde
Mar 16 22:00:14 rack emhttpd: spinning down /dev/sdb
Mar 16 22:00:14 rack emhttpd: spinning down /dev/sdc
Mar 16 22:00:14 rack emhttpd: spinning down /dev/sdi
Mar 17 00:27:48 rack root: /etc/libvirt: 23.9 MiB (25100288 bytes) trimmed on /dev/loop3
Mar 17 00:27:48 rack root: /var/lib/docker: 118.1 GiB (126809567232 bytes) trimmed on /dev/loop2
Mar 17 00:27:48 rack root: /mnt/disks/Kingston240SSD: 113.2 GiB (121536778240 bytes) trimmed on /dev/sdk1
Mar 17 02:00:17 rack emhttpd: read SMART /dev/sdj
Mar 17 02:00:19 rack emhttpd: read SMART /dev/sde
Mar 17 02:00:53 rack emhttpd: read SMART /dev/sdb
Mar 17 02:01:08 rack emhttpd: read SMART /dev/sdi
Mar 17 02:01:18 rack emhttpd: read SMART /dev/sdg
Mar 17 02:02:01 rack emhttpd: read SMART /dev/sdc
Mar 17 02:02:30 rack emhttpd: read SMART /dev/sdh
Mar 17 03:14:10 rack emhttpd: spinning down /dev/sdh
Mar 17 03:14:12 rack emhttpd: spinning down /dev/sdg
Mar 17 03:14:12 rack emhttpd: spinning down /dev/sde
Mar 17 03:14:12 rack emhttpd: spinning down /dev/sdc
Mar 17 03:14:12 rack emhttpd: spinning down /dev/sdi
Mar 17 03:14:33 rack emhttpd: spinning down /dev/sdb
Mar 17 03:14:35 rack emhttpd: spinning down /dev/sdj
Mar 17 03:40:16 rack crond[1914]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
Mar 17 05:50:20 rack emhttpd: read SMART /dev/sdh
Mar 17 06:37:33 rack emhttpd: read SMART /dev/sde
Mar 17 06:38:19 rack emhttpd: read SMART /dev/sdj
Mar 17 06:38:19 rack emhttpd: read SMART /dev/sdb
Mar 17 06:38:19 rack emhttpd: read SMART /dev/sdc
Mar 17 06:38:42 rack emhttpd: read SMART /dev/sdg
Mar 17 06:38:42 rack emhttpd: read SMART /dev/sdi
Mar 17 07:23:04 rack webGUI: Successful login user root from [redacted]
Mar 17 07:38:44 rack emhttpd: spinning down /dev/sdj
Mar 17 07:38:44 rack emhttpd: spinning down /dev/sdg
Mar 17 07:38:44 rack emhttpd: spinning down /dev/sde
Mar 17 07:38:44 rack emhttpd: spinning down /dev/sdb
Mar 17 07:38:44 rack emhttpd: spinning down /dev/sdc
Mar 17 07:38:44 rack emhttpd: spinning down /dev/sdi
Mar 17 07:40:26 rack emhttpd: spinning down /dev/sdh
  7. @97WaterPolo this is really cool, I haven't seen it before. Back in the old days we had to do some hack with fluentd to get this type of logging working. Anyway, it worked immediately for me; attached is a pic of the logs for my Scrutiny container in both Splunk and in the logs view in unRAID. My Scrutiny container runs on a "bridged" network. I just pasted these settings into the "Extra Parameters" section in the container config and it worked right away:

--log-driver=splunk --log-opt splunk-token=9cfe33zz-zzzz-zzzz-zzzz-zzzzzzzzzzzz --log-opt splunk-url=http://<unraidipaddress>:8088

It sounds like you're having an issue with your custom network type. Perhaps you could try bridge mode on your container, or you could try routing through a proxy -- I'm not the right person to advise on Docker network stuff, sorry! But I'm happy to test what I can for you.
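If it helps to test outside unRAID, the same log driver options in a plain docker run look roughly like this (just a sketch -- the image name and token are placeholders):

docker run -d \
  --log-driver=splunk \
  --log-opt splunk-token=<your-hec-token> \
  --log-opt splunk-url=http://<unraidipaddress>:8088 \
  <your-container-image>

If that works but the unRAID template version doesn't, the problem is almost certainly the custom network rather than the driver config.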
  8. Hey @97WaterPolo, can you give me some more details please?

1. Is the HEC set up in the Splunk container?
2. Is the port exposed/mapped appropriately through unRAID's config?
3. Are you able to successfully send a HEC message to Splunk (e.g. with Postman) and then see it as a search result in Splunk?

If you can complete #3 then that feels like the limit of what I can do; I'm not really sure about setting up Splunk as a log output for other containers using the method you described. Generally I just mount the log directory of my other containers as a volume and have Splunk "monitor" those volumes.
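For #3, if you don't have Postman handy, a quick test from any terminal looks something like this (rough example -- the token is a placeholder, and use https plus curl's -k if you've enabled SSL on the HEC):

curl http://<unraidipaddress>:8088/services/collector/event \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "hello from curl", "sourcetype": "manual"}'

You should get a small JSON "Success" response back, and the event should show up in whatever index the token is tied to.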
  9. Using Monica, the web server doesn't seem to bind appropriately:

Monica v3.2.0 is set up, enjoy.
Database ready.
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1. Set the 'ServerName' directive globally to suppress this message
(98)Address already in use: AH00072: make_sock: could not bind to address [::]:80
(98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down

Nothing is written to the "logs" folder in the default volume. In the unRAID UI, port 80 is mapped to an unused high port. I couldn't find any environment variables or volume-based config for changing port 80 to something else. Any ideas? Probably something silly!
  10. Unfortunately I tested that and it won't work -- Splunk is very particular about the permissions on a bunch of files and I was unable to get them working in a volume. I documented some of this in the readme on GitHub. You'll get errors like the KVStore failing to start, modular inputs failing to execute, some search commands not working -- a whole lot of pain. I think the solution is to prune your volumes after upgrades as I previously described. Perhaps unRAID could add a feature to do this automatically, or add a GUI button for it. I will note this in the readme on the next release.
  11. Hi @ShadeZeRO, I was able to replicate this and figure out why. It's due to the way the container installs itself on startup. The container includes the installer grabbed directly from splunk.com during image build, and on first run it untars the installer. This causes your unRAID docker.img to grow every time you rebuild the image (i.e. "force upgrade", or a real upgrade). If you have your indexed data stored in the container rather than on an external volume, this will accentuate your docker.img disk usage. The same occurs for other containers that have an installer process (or use their internal volumes extensively): for example, most of the SteamCMD game server containers that download big files on startup, anything that can generate a lot of data like Nessus or Minecraft, or even a downloader (sab/deluge/etc) configured to download inside the container rather than to a volume -- in all of these you'll see your docker.img usage increase a lot on upgrade/rebuild.

You can view the space used by running 'docker system df'. Here's mine; as you can see I've been working hard to ignore this issue by having 98% reclaimable space in my local volumes. Running the following will reclaim all this space. BE CAREFUL RUNNING IT, THIS COMMAND WILL DELETE ALL STOPPED CONTAINERS AND ALL THEIR DATA. Read the docs etc, there are probably safer flags to use.

docker system prune --volumes -f

Results speak for themselves lol

https://docs.docker.com/engine/reference/commandline/system_df/
https://docs.docker.com/engine/reference/commandline/system_prune/
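If you want something less destructive than the full prune, a rough middle ground (verify against the Docker docs linked above for your version) is to inspect first and then only remove unused images and volumes:

# See the per-container / per-volume breakdown before deleting anything
docker system df -v

# Remove images not used by any container
docker image prune -a

# Remove volumes not attached to any container (behaviour varies a bit by Docker version -- check first)
docker volume prune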
  12. Hey @Caduceus, thanks for posting your self-solve there. What you posted makes sense: if you're ingesting data that's older than your frozenTimePeriodInSecs, it'll freeze off pretty quickly. And yes, I did notice my little quick-fix attempt didn't work (fortunately I noticed before pushing it to master), so I'll follow your lead and just disable the app entirely :). Thanks for that. Happy to help with any other questions, and I appreciate you reading through past answers to work through the issues you were hitting.
  13. Hey @Caduceus, this seems to be something new Splunk added in the latest update that only works on systemd-based systems. I've just committed a change to disable it in the container. Commit is here: https://github.com/andrew207/splunk/commit/ebf5f696c1458fd9ca0be3402f0fc930d2cfd1a2 Such is life living on the bleeding edge! Thanks for pointing this out. You can switch to the 8.1.0 branch/Docker Hub tag if you want to try the fix now, otherwise I'll push it to master in the next couple of days.
  14. Hey @Moka. First, you'll need to configure Splunk to listen on a port for syslog. You can do this by following the instructions in their documentation; it can all be done in the GUI. https://docs.splunk.com/Documentation/Splunk/latest/Data/Monitornetworkports#Add_a_network_input_using_Splunk_Web Once you've got Splunk listening on a port for syslog, you'll need to make sure your Docker template config in unRAID exposes that port. You might need to add a "port" to your Docker template (the same way the default template has port 8000 added for the web interface). Once you've got Splunk listening and that port added to your Docker template, you can configure your network devices to send their syslog to Splunk by following their documentation, making sure to use the port you configured in the unRAID Docker template. Once configured, you will be able to search the new syslog data in Splunk.
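For reference, the GUI step above just writes an inputs.conf entry behind the scenes. A minimal equivalent, if you prefer config files, looks roughly like this (the index name is only an example -- create the index first):

[udp://514]
index = network
sourcetype = syslog
connection_host = ip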
  15. @VictorCharlie Thanks for the extra info! Pretty sure I've got it solved. When this app (and any properly configured app) saves passwords, they are encrypted using your splunk.secret key. By design this key is unique to every Splunk installation, so when you rebuild the container it reinstalls itself and generates a new key. To fix this you should be able to manually add a volume for the single file required, /opt/splunk/etc/auth/splunk.secret. Make that specific file (or the whole auth directory) a volume in your unRAID Docker template config and your Tesla app should persist nicely. For the next release I'll add a default volume for the key file or the whole auth directory once I figure out which is best, because this would be affecting all apps that use encrypted passwords.
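In plain Docker terms it's just another bind mount, something like the sketch below (the host path is a placeholder -- use whatever appdata location you prefer; in the unRAID template this is a new "Path" with the container side set to /opt/splunk/etc/auth):

-v /mnt/user/appdata/splunk/auth:/opt/splunk/etc/auth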
  16. Hey @VictorCharlie, the Tesla app seems to use data models to present its data, and it does not create the "tesla" index it relies on by default. That creates two bits of work you need to do yourself in addition to the documented process of filling out your API keys. I don't know how much you know about Splunk, so I'll be specific.

1. Ensure data is being persisted. You'll need to make sure the indexed data is persisted by filling out the DataPersist param in the config for the container in the unRAID UI.

2. Create the tesla index (this seems to be required as all the data models have index=tesla hardcoded). From the GUI, Settings --> Indexes, and create a new index called "tesla". Make sure you use the default directories, which will end up in your DataPersist directory (or modify them and ensure they remain in there). A rough indexes.conf equivalent is sketched below.

The Tesla app doesn't seem to use any KVStore; rather it uses data models. Data models persist fine in the container as long as you have DataPersist set in your unRAID Docker template. I can't see any reason it wouldn't work if the above are followed. I can't test because unfortunately I don't own a Tesla and therefore don't have API keys etc. Feel free to take this to PM if you want me to debug further; perhaps I can use your keys to test it.
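If you'd rather define the index in config than through the GUI, a minimal stanza in a persistent indexes.conf might look like this (paths assume the container's default $SPLUNK_DB; adjust to your DataPersist setup):

[tesla]
homePath   = $SPLUNK_DB/tesla/db
coldPath   = $SPLUNK_DB/tesla/colddb
thawedPath = $SPLUNK_DB/tesla/thaweddb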
  17. Yeah I've just started getting this one too now. I ran memtest and got 2 full successful passes, not really sure what to do.

:~# mcelog
mcelog: ERROR: AMD Processor family 23: mcelog does not support this processor. Please use the edac_mce_amd module instead.
CPU is unsupported
  18. Edit: I've just updated the container from Splunk 8.0.4 to 8.0.5, so you should be able to test a real upgrade once it pushes out to Docker Hub/unRAID.

Worst case scenario (meaning you're outside 60 days and the free license doesn't take for some reason) is that it'll stop ingestion and pop up with errors, you're right. Yes, you can always change the license type in the GUI, but realistically if you set up a persistent server.conf it'll be fine. If it works for you once, it'll keep working; Splunk is an enterprise-grade product and it's generally good at applying the config you set as long as it's in the right place.

Forwarders will cache up to a limit and then "open the flood gates" once the server is accepting data again. This caching is called "persistent queues", where the forwarder will store data in RAM. By default this is 500KB as the forwarder is designed to have a minimal footprint, but it can easily be configured using limits.conf on the forwarder: https://docs.splunk.com/Documentation/Splunk/8.0.4/Data/Usepersistentqueues. Remember, queuing only really applies when log files aren't just sitting on disk -- so if you're monitoring, for example, nginx log files, it doesn't matter how long the server isn't accepting data; as long as the log file hasn't rolled, Splunk will still have a checkpoint in the file where it will continue from. If the bandwidth from the "opening floodgates" is a concern, you can also set maxKBps in limits.conf to put a firm limit on the bandwidth consumed by the forwarder -- https://docs.splunk.com/Documentation/Splunk/8.0.4/Admin/Limitsconf (see the example below).

"Delete all default indexes from disk" means delete basically the following indexes: audit, main, _internal, _introspection, _telemetry. Yes, this means data loss -- however ideally you would be storing your data in a non-default index, for example I store my pfSense logs in a "pfsense" index. The index is also defined with a persistent indexes.conf file, which also means if you need to reset your license (which it doesn't sound like you will if you set up a free one properly) it'll stay around. Also note that even if you don't persist your index properly and it disappears from the UI, the data won't get deleted; when you re-add the index the data will all magically appear again.

Basically, if you have all the pieces in place it'll be fine. For you this sounds like:

1. Define indexes.conf on your server to split hotwarm and cold to separate Docker volumes.
2. Define indexes.conf on your server to add new non-default index(es) for your data, and ensure your forwarders are sending to those indexes (see inputs.conf definitions).
3. Consider upping the persistent cache limit on your forwarder, and consider limiting the bandwidth (I generally just reduce it from the default unlimited to something crazy high like 10MBps, just so things won't break if the endpoint starts logging like mad and decides to redline for a few days).
4. Double-check the config by swapping Docker branches from :latest to :8.0.4 to force a full container redownload, simulating an upgrade and testing all your persistent config.
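For point 3, the bandwidth cap on the forwarder is a one-liner in limits.conf. A rough example (10240KBps is the ~10MBps figure I mentioned; tune to taste):

# $SPLUNK_HOME/etc/system/local/limits.conf on the forwarder
[thruput]
maxKBps = 10240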
  19. Yeah, it's set to Enterprise by default because that's what I use. Persisting the free license is a little bit more awkward than, for example, modifying your bucket locations, due to the precedence Splunk reads config files with. There is a server.conf file located in $SPLUNK_HOME/etc/system/local that contains your license details, and this file will take precedence over anything you put in $SPLUNK_HOME/etc/apps/. You could make this file (or folder) a volume in your Docker config, then modify the file to instruct Splunk to operate under a free license. Such a server.conf entry might look like this:

[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
quota = MAX
slaves = *
stack_id = free

[license]
active_group = Free

So the TL;DR is you can set your license type to "free" in the GUI, then ensure the file $SPLUNK_HOME/etc/system/local/server.conf is persisted in a Docker volume. If you do this your free license will work fine, and will persist through container updates/autoupdates.

-----

The only reason it wouldn't work is if the config was messed up somehow. It's easy to check: just restart Splunk and make sure you're still on the free license, then you can even swap your Docker tag from :latest to :8.0.4 to force it to redownload the whole image; if you're still on free after that then you're golden. If your config didn't apply for whatever reason (e.g. you removed read permissions on the file), Splunk will start up under the trial license. Splunk checks a bunch of things to figure out the 60-day limit (I discuss how to reset it in the GitHub readme), but basically if you're within the 60 days then all is good and you can change back to free and debug your issue; if you're over 60 days it will stop ingesting data and you will be unable to search non-internal indexes. You'll also get an alert banner on the login page and a heap of messages in the interface telling you so. Thanks
  20. Hey @4554551n, thanks for your interest, here are some answers to your questions.

> Resetting trial license

Yeah sure, you can set it to the free license if you want. Whenever you upgrade the container you'll just need to set it to the free license again.

> Splunk index data location / splitting hotwarm and cold

You can't split hot and warm, but you can split hot/warm and cold. With Splunk there are a lot of ways to split cold data off into its own location; I'd use "volumes". Here's a link to the spec for the config file we'll be editing: https://docs.splunk.com/Documentation/Splunk/8.0.4/Admin/Indexesconf

In my Docker startup script I run the line below to change the default SPLUNK_DB location to the /splunkdata mount the container uses. SPLUNK_DB contains both hot/warm and cold. OPTIMISTIC_ABOUT_FILE_LOCKING fixes an unrelated bug. We set this in splunk-launch.conf, meaning the SPLUNK_DB variable is set at startup and is persistent through the whole Splunk ecosystem. As you correctly identified from the Splunk docs, SPLUNK_DB is used as the storage location for all index data and all buckets by default; this config was made to split it off into a volume.

printf "\nOPTIMISTIC_ABOUT_FILE_LOCKING = 1\nSPLUNK_DB=/splunkdata" >> $SPLUNK_HOME/etc/splunk-launch.conf

1. Create a new volume in your Docker config to store your cold data (e.g. /mynewcoldlocation).

2. Create an indexes.conf file, preferably in a persistent location such as $SPLUNK_HOME/etc/apps/<app>/default/indexes.conf.

3. Define hotwarm / cold volumes in your new indexes.conf, here's an example:

[volume:hotwarm]
path = /splunkdata
# Roughly 3GB in MB
maxVolumeDataSizeMB = 3072

[volume:cold]
path = /mynewcoldlocation
# Roughly 50GB in MB
maxVolumeDataSizeMB = 51200

It would be up to you to ensure /splunkdata is stored on your cache disk and /mynewcoldlocation is in your array, as defined in your Docker config for this container.

4. Configure your indexes to utilise those volumes by default by updating the same indexes.conf file:

[default]
# 365 days in seconds
frozenTimePeriodInSecs = 31536000
homePath = volume:hotwarm/$_index_name/db
coldPath = volume:cold/$_index_name/colddb
# Unfortunately we can't use volumes for thawed path, so we need to hardcode the directory.
# Chances are you won't need this anyway unless you "freeze" data to an offline disk.
thawedPath = /mynewFROZENlocation/$_index_name/thaweddb
# Tstats should reside on fastest disk for maximum performance
tstatsHomePath = volume:hotwarm/$_index_name/datamodel_summary

5. Remember that Splunk's internal indexes won't follow config in [default], so if we want Splunk's own self-logging to follow these rules we need to hard-code it:

[_internal]
# 90 days in seconds
frozenTimePeriodInSecs = 7776000
# Override defaults set in $SPLUNK_HOME/etc/system/default/indexes.conf
homePath = volume:hotwarm/_internaldb/db
coldPath = volume:cold/_internaldb/colddb

[_audit]
# 90 days in seconds
frozenTimePeriodInSecs = 7776000
# Override defaults set in $SPLUNK_HOME/etc/system/default/indexes.conf
homePath = volume:hotwarm/audit/db
coldPath = volume:cold/audit/colddb

# ... etc etc for other indexes you want to honour this.

> Freezing buckets

When data freezes it is deleted unless you tell it where to go. You'll see in my config above I set the config items "maxVolumeDataSizeMB" and "frozenTimePeriodInSecs". For our volumes, once the entire volume hits that size it'll start moving buckets to the next tier (hotwarm --> cold --> frozen).
Additionally, each of our individual indexes will also have similar max size config that controls how quickly they freeze off; iirc the default is 500GB. Each individual index can also have a "frozenTimePeriodInSecs", which will freeze data once it hits a certain age. If you have this set, data will freeze either when it is the oldest bucket and you've hit your maxVolumeDataSizeMB, or when it's older than frozenTimePeriodInSecs.

When data freezes it is deleted unless you tell it where to go. The easiest way to tell it where to go is by setting a coldToFrozenDir in your indexes.conf for every index. For example, in our same indexes.conf we have an index called "web"; it might look like this (here's some doco to explain further: https://docs.splunk.com/Documentation/Splunk/8.0.4/Indexer/Automatearchiving):

[web]
# 90 days in seconds
frozenTimePeriodInSecs = 7776000
coldToFrozenDir = /myfrozenlocation
# alternatively, coldToFrozenScript = /mover.sh

Hope this helps.
  21. @loheiman Nice job figuring out you need to define UDP 514 as a Data Input through the user interface. This config will be applied in an "inputs.conf" file in /opt/splunk/etc/apps/*/local/inputs.conf (where * is whatever app context you were in when you created the input, defaults to "search" or "launcher"), so as long as you used the ConfigPersist volume configuration in the unRAID Docker template you're all good there -- even the default is fine.

9997 is the port used by default for Splunk's own (somewhat) proprietary TCP format. It supports TLS and compression, which is why it's generally preferred. This port is generally used by Splunk's "Universal Forwarder", an agent that gets installed on endpoints you want to monitor. I say 'somewhat' proprietary because several third-party applications have implemented Splunk's TCP input stream in their own commercial applications, so theoretically anyone could too. Splunk can also listen for HTTP or HTTPS data using a HTTP Event Collector data input; this defaults to port 8088. HTTP is available because, similar to syslog over 514, HTTP is very universal and supported by just about everything.
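For reference, those other two listeners show up as inputs.conf stanzas too. Rough sketch only -- the GUI writes these for you, the token name/value are placeholders, and HEC config normally lives under the splunk_httpinput app:

# Splunk-to-Splunk listener used by Universal Forwarders (port 9997)
[splunktcp://9997]
disabled = 0

# HTTP Event Collector token
[http://my_hec_token]
token = <generated-guid>
disabled = 0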
  22. It will wipe anything that's not in a volume. So make sure you map your /splunkdata to preserve your indexes, and make sure your settings are in /etc/apps/* (and not /etc/system/local/*), as well as ensuring /etc/apps is a volume too. In the unRAID template XML I have called these two "DataPersist" and "ConfigPersist". Your KVStore will be wiped as it is not in a volume by default. If you want to retain your KVStore (tbh I just use lookups instead for the simple things I do in Docker) you'll just need to do a backup + restore, which is simple too -- https://docs.splunk.com/Documentation/Splunk/8.0.4/Admin/BackupKVstore.
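From memory the backup/restore is just two CLI calls, roughly like the below -- double-check against the doc above for your version, and keep the archive somewhere that's mapped to a volume so it survives the rebuild:

# Inside the container, before the upgrade
/opt/splunk/bin/splunk backup kvstore -archiveName pre_upgrade

# After the upgrade/rebuild, with the archive copied back into place
/opt/splunk/bin/splunk restore kvstore -archiveName pre_upgrade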
  23. @Phoenix Down Yes this installs the 60 day free enterprise trial license. In the Github repo readme there are instructions on how to reset the license when it runs out.
  24. @tkohhh I'm testing a fix for this now. It'll be available temporarily under the :tzfix Docker Hub tag; I'll push it to master (and update this post) later this arvo once I've fully tested it. The issue seems to be related to my recent update to Alpine Linux -- they changed the way timezones are applied and I didn't notice. EDIT: Now working fine. Update your container and set your timezone in user preferences in the UI, should be all good.
  25. TL;DR: seems to be a UI bug, unsure what's causing it. Am looking into it.

Generally there are 3 things to consider:

1. The timezone of your log source. If the source data has a correct TZ specified in it then you shouldn't need to change anything. Splunk will use this timestamp and create a metadata timestamp in GMT for storage and retrieval.

2. The timezone of your Splunk server. Generally servers are set to UTC, but unRAID Docker will apply an environment variable that is set to your TZ. This container always runs the server in GMT (per Splunk best practices). All data in Splunk is stored with a GMT timestamp.

3. The timezone of your end user, as set in the UI or by default through your web browser settings. All data presented to the end user has the GMT timestamp converted to their local time. THIS SETTING DOES NOT SEEM TO BE HONOURED BY SPLUNK.

If you have logs that are being created and sent in realtime (i.e. most syslogs) then you can check the difference between your server time and user time by running this search:

index=myindex | eval indextime=strftime(_indextime,"%Y-%m-%d %H:%M:%S") | table _time indextime

_time should be the local time of the event, and indextime should be the GMT time of the event. It looks like both my GMT and local time appear the same despite me setting a timezone in the UI, so there must be a bug somewhere. For now you can just pretend you're in GMT, or you can hard-code a conversion into any searches you're running -- e.g. there's a 28800 second difference between UTC and PST, so you can run | eval _time=_time-28800. I'll look into this TZ issue. Here's some extra reading: https://docs.splunk.com/Documentation/Splunk/latest/Search/Abouttimezones