andrew207 (Members)

Content Count: 19 | Community Reputation: 1 (Neutral) | Rank: Member

  1. Ah, good point. I don't see this because my Docker volume is set to 300GB after hitting the limit in the past; I also keep it on a non-array SSD for performance, alongside most of my appdata folders. The Splunk image at absolute full blast should only ever consume 5-7GB as a maximum, more likely only about 1.5GB, so 20GB could be very limiting depending on what else you run. The growth comes from Splunk's "dispatch" directory at /opt/splunk/var/run/splunk/dispatch, which fills up with "search artifacts" depending on how frequently (and what) you search. That directory can be (fairly) safely deleted in a pinch, but the best solution is probably to increase your Docker volume size; I wouldn't consider it a candidate for a Docker volume due to its volatility. Per my link above, if your dispatch directory is getting large due to search volume, you can set `ttl` or `remote_ttl` in `limits.conf`, perhaps to something crazy low like 10 minutes so artifacts don't get hoarded there like they do by default. A sketch of that change follows below.
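     For reference, a minimal sketch of that limits.conf change, assuming the container is named "Splunk", that system/local is the right scope for your install, and that ~10 minutes is an acceptable artifact lifetime (ttl and remote_ttl are in seconds):

        # Append a [search] stanza capping dispatch artifact lifetime, then restart Splunk.
        docker exec -it Splunk sh -c 'printf "[search]\nttl = 600\nremote_ttl = 600\n" >> /opt/splunk/etc/system/local/limits.conf'
        docker restart Splunk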
  2. @wedge22 @mgiggs Fixed. I'll be pushing it to master shortly and it'll be available through unRAID's interface whenever they update their repos, probably a day or two. It's now published and available. If you want it now, set your config up as in the sketch below; note the added :openshift tag on the repo, and make sure the volumes are exact. For anyone interested, I'm still not certain why the kvstore was failing previously (probably volume permissions?); the issue was mitigated by moving SPLUNK_DB to a proper standalone volume (and separating the KV store from SPLUNK_DB). This means your KV store isn't persistent, but chances are that won't matter for a small standalone install.
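     A rough docker-run equivalent of that config: the moving parts are the :openshift tag and a dedicated volume for SPLUNK_DB. The host path and the container-side mount point below are assumptions; mirror the exact volume mappings from the updated unRAID template:

        # Sketch only: unRAID's template builds this for you.
        # Host path and SPLUNK_DB mount point are assumed; check the template/README for the real ones.
        docker run -d --name Splunk \
          -p 8000:8000 -p 8089:8089 -p 9997:9997 \
          -v /mnt/user/appdata/splunk/db:/opt/splunk/var/lib/splunk \
          atunnecliffe/splunk:openshift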
  3. Yeah @wedge22, persistent data has been challenging: when I convert the /opt/splunk/var/lib/ directory (i.e. where Splunk stores indexed data) to a volume, the KV store crashes. You can make it a volume yourself -- searching / reporting / alerting etc. will still work, but you get a permanent "message" about the KV store being dead (a sketch follows below). I'm working on it though; we'll get there hopefully in the next week or so, alongside a rebase to Alpine. https://github.com/andrew207/splunk/tree/openshift
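     If you do want to try that volume yourself in the meantime (accepting the KV store warning), a sketch, with the host path as a placeholder:

        # Sketch: persist indexed data by mounting the index directory.
        # Expect the permanent "KV store unavailable" message described above.
        docker run -d --name Splunk \
          -p 8000:8000 -p 8089:8089 -p 9997:9997 \
          -v /mnt/user/appdata/splunk/var-lib:/opt/splunk/var/lib \
          atunnecliffe/splunk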
  4. To make your inputs.conf persistent, add a volume for the directory it lives in (see the sketch below). As for port 514, you'll have to expose that port yourself as it's not done by default; I only expose 8000 (web), 8089 (https api) and 9997 (splunk data). Just make it available through unRAID's docker edit screen the same way those three are configured and it should work fine. FYI, I have a new version of this container coming out soon with a few fixes, including defaulting to persistent storage of config as well as data, plus a rebase to Alpine Linux: https://github.com/andrew207/splunk/tree/openshift
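     A sketch covering both points. Assumptions: a hand-edited inputs.conf usually lives in /opt/splunk/etc/system/local, and the host path and the 514 protocols below are placeholders; match whatever your inputs.conf actually listens on:

        # Sketch: persist the local config directory and expose syslog on 514.
        docker run -d --name Splunk \
          -p 8000:8000 -p 8089:8089 -p 9997:9997 \
          -p 514:514/tcp -p 514:514/udp \
          -v /mnt/user/appdata/splunk/etc-local:/opt/splunk/etc/system/local \
          atunnecliffe/splunk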
  5. The container runs in GMT, and so does the Splunk app. To change the displayed timezone, set a timezone under your user's preferences in Splunk and it will apply the offset to events at search time. There's a fuller explanation of this behaviour on Splunk Answers.
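     If you'd rather do it from the command line than the web UI, the per-user setting lives in a user-prefs.conf under that user's directory. A sketch, assuming the container is named "Splunk", the user is admin, and Australia/Melbourne is the zone you want (a restart may be needed before the UI picks it up):

        # Sketch: set the displayed timezone for the admin user (search-time offset only;
        # the container itself keeps running in GMT).
        docker exec -it Splunk sh -c 'mkdir -p /opt/splunk/etc/users/admin/user-prefs/local && printf "[general]\ntz = Australia/Melbourne\n" >> /opt/splunk/etc/users/admin/user-prefs/local/user-prefs.conf'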
  6. Hey @wedge22, that one may be due to a bad download; it doesn't happen on my end on unRAID or on Win 10 hypervisors. In an attempt to make this answer a bit more useful: my container config matches the stock template, the only change being an added "App Data" volume, roughly equivalent to the mapping sketched below.
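     The host and container paths here are assumptions (the screenshot had the authoritative values); the point is just the one extra volume mapping on top of the stock template:

        # Assumed equivalent of the added "App Data" volume:
        #   host path:      /mnt/user/appdata/splunk/etc
        #   container path: /opt/splunk/etc
        docker run -d --name Splunk \
          -p 8000:8000 -p 8089:8089 -p 9997:9997 \
          -v /mnt/user/appdata/splunk/etc:/opt/splunk/etc \
          atunnecliffe/splunk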
  7. If you want a fully persistent install, Splunk throws some pretty odd errors for some reason. They don't seem to hinder functionality, so if you're cool ignoring them then you can do the following, @wedge22 @GHunter: just add a volume for /opt/splunk/etc. The /opt/splunk/etc directory stores all of your customisation. By default we already have a volume for /opt/splunk/var, the directory that stores all indexed data, so with these two your install should feel fully persistent. A sketch of the resulting volumes follows below.
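     Put together, a docker-run sketch of that setup; host paths are placeholders, and in unRAID these are just two volume mappings on the edit screen:

        # Sketch: config persisted via /opt/splunk/etc, indexed data via /opt/splunk/var.
        docker run -d --name Splunk \
          -p 8000:8000 -p 8089:8089 -p 9997:9997 \
          -v /mnt/user/appdata/splunk/etc:/opt/splunk/etc \
          -v /mnt/user/appdata/splunk/var:/opt/splunk/var \
          atunnecliffe/splunk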
  8. Queueiz, if you want full persistence of your entire install you'll need to add a volume for the entire /opt/splunk directory. I haven't tested this; I may need to patch the installer script so it checks for an existing installation in /opt/splunk/ rather than just an existing installer file. You can stop/start the container all you want, but if you rebuild it you will lose config by default (by design, in a perhaps misguided attempt to follow Splunk best practice); the container will only persist your indexed data if you have created a volume for /opt/splunk/var. --- edit: @queueiz I just tried to properly configure a volume for /opt/splunk for a fully persistent install, but Splunk started throwing some very obscure errors. I'll look further into this; perhaps I'll configure a persistent mount for the config in /opt/splunk/etc alongside indexed data in /opt/splunk/var, but I'll probably have to leave the rest of the application to be installed on every container rebuild. Feel free to try swapping to the Docker Hub tag "fullpersist" (i.e. in unRAID set Repository to "atunnecliffe/splunk:fullpersist"), removing any /opt/splunk/var volume and adding an /opt/splunk volume; a sketch follows below.
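     If you want to try that tag, a sketch; the host path is a placeholder, and per the above any existing /opt/splunk/var volume should be removed first:

        # Sketch: whole-install persistence via the fullpersist tag.
        docker run -d --name Splunk \
          -p 8000:8000 -p 8089:8089 -p 9997:9997 \
          -v /mnt/user/appdata/splunk:/opt/splunk \
          atunnecliffe/splunk:fullpersist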
  9. Overview: Docker image for Splunk based on Alpine Linux.
     Application: Splunk https://www.splunk.com/
     Docker Hub: https://hub.docker.com/r/atunnecliffe/splunk
     GitHub: https://github.com/andrew207/splunk
     Documentation: https://github.com/andrew207/splunk/blob/master/README.md // https://docs.splunk.com/Documentation/Splunk
     Any issues, let me know here.
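     For anyone who wants to try it outside unRAID's template, a minimal quick-start sketch (ports as used by the template; first-run credentials and options are whatever the README says, so check it first):

        # Minimal sketch: run the image, then open the web UI on http://<host>:8000
        docker run -d --name Splunk \
          -p 8000:8000 -p 8089:8089 -p 9997:9997 \
          atunnecliffe/splunk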
  10. THIS WAS RESOLVED WITH A SERVER REBOOT. Posting anyway for others. Diag attached (rack-diagnostics-20190512-2023.zip). I upgraded to 6.7 24 hours ago and all worked fine up front (for >12 hours). Some time in the past few hours all my /mnt/user/ shares became unavailable. My "Unassigned Devices" disk still works fine. I can access /mnt/disk[1,2,3,4,5,6,7,8] fine and read all the data; I cannot access /mnt/user/. I have pasted some commands below to demonstrate the errors.

      root@rack:/mnt# ls
      /bin/ls: cannot access 'user': Transport endpoint is not connected
      cache/  disk2/  disk4/  disk6/  disk8/  user/
      disk1/  disk3/  disk5/  disk7/  disks/  user0/

      root@rack:/mnt# ls -lah
      /bin/ls: cannot access 'user': Transport endpoint is not connected
      total 32K
      drwxr-xr-x 14 root   root  280 May 13 06:21 ./
      drwxr-xr-x 19 root   root  400 May 13 06:23 ../
      drwxrwxrwx  1 nobody users   0 May 13 06:21 cache/
      drwxrwxrwx 12 nobody users 215 Mar  3  2018 disk1/
      drwxrwxrwx  9 nobody users 137 Nov 16  2017 disk2/
      drwxrwxrwx  9 nobody users 137 Aug 10  2018 disk3/
      drwxrwxrwx  7 nobody users 124 Nov 16  2017 disk4/
      drwxrwxrwx  4 nobody users  36 Nov 16  2017 disk5/
      drwxrwxrwx  8 nobody users 119 Nov 16  2017 disk6/
      drwxrwxrwx  8 nobody users  91 Nov 16  2017 disk7/
      drwxrwxrwx  1 nobody users 124 May 12 07:38 disk8/
      drwxrwxrwx  3 nobody users  60 May 13 06:20 disks/
      d?????????  ? ?      ?       ?            ? user/
      drwxrwxrwx  1 nobody users 215 May 12 07:38 user0/

      root@rack:/mnt# cd ./user
      -bash: cd: ./user: Transport endpoint is not connected

      root@rack:/mnt# mount
      proc on /proc type proc (rw)
      sysfs on /sys type sysfs (rw)
      tmpfs on /dev/shm type tmpfs (rw)
      tmpfs on /var/log type tmpfs (rw,size=128m,mode=0755)
      /dev/sda1 on /boot type vfat (rw,noatime,nodiratime,flush,umask=0,shortname=mixed)
      /boot/bzmodules on /lib/modules type squashfs (ro)
      /boot/bzfirmware on /lib/firmware type squashfs (ro)
      /mnt on /mnt type none (rw,bind)
      shfs on /mnt/user type fuse.shfs (rw,nosuid,nodev,noatime,allow_other)
      /dev/md1 on /mnt/disk1 type xfs (rw,noatime,nodiratime)
      /dev/md2 on /mnt/disk2 type xfs (rw,noatime,nodiratime)
      /dev/md3 on /mnt/disk3 type xfs (rw,noatime,nodiratime)
      /dev/md4 on /mnt/disk4 type xfs (rw,noatime,nodiratime)
      /dev/md5 on /mnt/disk5 type xfs (rw,noatime,nodiratime)
      /dev/md6 on /mnt/disk6 type xfs (rw,noatime,nodiratime)
      /dev/md7 on /mnt/disk7 type xfs (rw,noatime,nodiratime)
      /dev/md8 on /mnt/disk8 type btrfs (rw,noatime,nodiratime)
      shfs on /mnt/user0 type fuse.shfs (rw,nosuid,nodev,noatime,allow_other)
      /dev/sdj1 on /mnt/disks/Kingston240SSD type xfs (rw,noatime,nodiratime,discard)
      /dev/sdk1 on /mnt/cache type btrfs (rw,noatime,nodiratime)
  11. Disk 2 has been auto-disabled for about a week. Seems like I just got a bad disk; it's an 8TB IronWolf that's only about 9 months old. Diag attached, along with a SMART report for the disk that died (Disk 2). The SMART numbers look very severe, but I don't fully understand their implication. In particular:

      ID# ATTRIBUTE_NAME       FLAGS   VALUE WORST THRESH FAIL RAW_VALUE
        1 Raw_Read_Error_Rate  POSR--  082   064   044    -    164804696

      I'd like to know whether the disk is broken enough that I should immediately send it off for warranty, or if I can just enable it again and it'll work fine for another few months. Thanks all. rack-smart-20190407-1138.zip rack-diagnostics-20190407-1142.zip
  12. OK, I got it working; honestly not sure how it was working before, since the only port forwarded is 443. I changed Validation to duckdns, added the DUCKDNSTOKEN variable, and set subdomains to wildcard (the relevant settings are sketched below for anyone hitting the same thing). All good now, thanks very much!
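      For anyone else seeing the same renewal failure, the change was roughly the following container settings. A sketch assuming the linuxserver letsencrypt template's variable names, with placeholder values:

        # Sketch: switch validation from http-01 to DuckDNS DNS validation.
        docker run -d --name letsencrypt \
          -p 443:443 \
          -e URL=yourdomain.duckdns.org \
          -e SUBDOMAINS=wildcard \
          -e VALIDATION=duckdns \
          -e DUCKDNSTOKEN=your-duckdns-token \
          linuxserver/letsencrypt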
  13. Log output from the renewal attempts (sensitive values replaced with [...]):

      <------------------------------------------------->
      <------------------------------------------------->
      cronjob running on Mon Feb 11 20:09:20 AEDT 2019
      Running certbot renew
      Saving debug log to /var/log/letsencrypt/letsencrypt.log
      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      Processing /etc/letsencrypt/renewal/[...]
      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      Cert is due for renewal, auto-renewing...
      Plugins selected: Authenticator standalone, Installer None
      Running pre-hook command: if ps aux | grep [n]ginx: > /dev/null; then s6-svc -d /var/run/s6/services/nginx; fi
      Renewing an existing certificate
      Performing the following challenges:
      http-01 challenge for [...]
      http-01 challenge for www.[...]
      Waiting for verification...
      Cleaning up challenges
      Attempting to renew cert ([...]) from /etc/letsencrypt/renewal/[...].conf produced an unexpected error:
      Failed authorization procedure.
      [...] (http-01): urn:ietf:params:acme:error:connection :: The server could not connect to the client to verify the domain :: Fetching http://[...]/.well-known/acme-challenge/[...]: Timeout during connect (likely firewall problem),
      www.[...] (http-01): urn:ietf:params:acme:error:connection :: The server could not connect to the client to verify the domain :: Fetching http://[...]/.well-known/acme-challenge/T9kiSatf7ElU1UhFQHSwUAG4udfx58cCUOkRiXQ8Rac: Timeout during connect (likely firewall problem).
      Skipping.
      All renewal attempts failed. The following certs could not be renewed:
      /etc/letsencrypt/live/[...]/fullchain.pem (failure)
      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      All renewal attempts failed. The following certs could not be renewed:
      /etc/letsencrypt/live/[...]/fullchain.pem (failure)

      Hi guys, my cert expired so I checked my logs and saw the failures above. I manually executed the update script a couple of times and got the same result as the cron-executed runs. This config has worked for well over a year with no worries (and no changes on my part!), so I'm not sure what to do. I can still access my configured domain remotely on 443 like always, just now with a cert error. Any ideas? Is anyone else unable to renew for the same reason? Should I just nuke the container and reinstall it? Cheers.
  14. Hi lurkers, if you want to start Firefox in full-screen mode, use this add-on: https://addons.mozilla.org/en-US/firefox/addon/autofullscreen (for whatever reason there isn't a command-line arg that does this).
  15. The timestamp was incorrect for me, so I applied a band-aid fix: installing NTP inside the container. Note that if anyone else does this (bad) fix, you will need to re-install NTP after every update.

      docker exec -it MotionEye bash
      apt-get install ntp

      Then restart the container. Done. Note I have snapshots + Motion recording (h264) working on a pair of Amcrest ip2m-841b. Thanks for the template.