
andrew207

Members
  • Content Count

    14
  • Joined

  • Last visited

Community Reputation

1 Neutral

About andrew207

  • Rank
    Member


  1. Hey @wedge22, that one may be due to a bad download -- it doesn't happen on my end on UnRAID or on Win 10 hypervisors. In an attempt to make this answer a bit more useful, here's a screenshot showing my container config, the only change being the added "App Data" volume:
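     If it is a corrupted download, forcing a fresh pull of the image is a quick first check (a minimal sketch, assuming the default "latest" tag):

         # Re-pull the image, then recreate the container from the template.
         docker pull atunnecliffe/splunk:latest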
  2. If you want a fully persistent install: for some reason Splunk throws some pretty odd errors, but they don't seem to hinder functionality, so if you're cool ignoring them then you can do the following @wedge22 @GHunter. Just add a volume for /opt/splunk/etc -- that directory stores all of your customisation. By default we already have a volume for /opt/splunk/var, the directory that stores all indexed data, so with these two volumes your install should feel fully persistent. A sketch of the resulting mapping is below.
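     A minimal sketch of the two-volume layout as a docker run command (the host paths under /mnt/user/appdata are assumptions; in Unraid you would add the same mappings through the container template):

         # Persist config (etc) and indexed data (var) on the host.
         docker run -d --name splunk \
           -p 8000:8000 \
           -v /mnt/user/appdata/splunk/etc:/opt/splunk/etc \
           -v /mnt/user/appdata/splunk/var:/opt/splunk/var \
           atunnecliffe/splunk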
  3. Queueiz, if you want full persistence of your entire install you'll need to add a volume for the entire /opt/splunk directory. I haven't tested this; I may need to patch the installer script so it checks for an existing installation in /opt/splunk/ rather than just an existing installer file. You can stop/start the container all you want, but if you rebuild it you will lose config by default (by design, in a perhaps misguided attempt to follow Splunk best practice); the container will only persist your indexed data if you have created a volume for /opt/splunk/var.

     --- edit: @queueiz I just tried to properly configure a volume for /opt/splunk for a fully persistent install, but Splunk started throwing some very obscure errors. I'll look further into this; perhaps I'll try to configure a persistent mount for the config in /opt/splunk/etc alongside indexed data in /opt/splunk/var, but I'll probably have to leave the rest of the application to be installed on every container rebuild. Feel free to try swapping to the Docker Hub tag "fullpersist" (i.e. in Unraid set Repository to "atunnecliffe/splunk:fullpersist"), removing any /opt/splunk/var volume and adding an /opt/splunk volume -- a sketch of that setup follows below.
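     The experimental full-persistence setup described above, expressed as a docker run sketch (the host path is an assumption; adjust it to your own appdata share):

         # Swap to the fullpersist tag and map the whole install directory.
         docker run -d --name splunk \
           -p 8000:8000 \
           -v /mnt/user/appdata/splunk:/opt/splunk \
           atunnecliffe/splunk:fullpersist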
  4. Overview: Docker image for Splunk. Allows arbitrary version (currently defaults to 7.3.0) / auto-install apps / more.
     Application: Splunk - https://www.splunk.com/
     Docker Hub: https://hub.docker.com/r/atunnecliffe/splunk
     GitHub: https://github.com/andrew207/splunk
     Documentation: https://github.com/andrew207/splunk/blob/master/README.md // https://docs.splunk.com/Documentation/Splunk
     Any issues let me know here.
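     A minimal quick-start sketch from the command line (port 8000 is Splunk Web's default; see the README for the environment variables and volumes the image actually supports):

         # Pull and run with only the web UI port exposed.
         docker pull atunnecliffe/splunk
         docker run -d --name splunk -p 8000:8000 atunnecliffe/splunk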
  5. THIS WAS RESOLVED WITH A SERVER REBOOT. Posting anyway for others. Diag attached: rack-diagnostics-20190512-2023.zip

     I upgraded to 6.7 24 hours ago and all worked fine up front (for >12 hours). Some time in the past few hours all my /mnt/user/ shares became unavailable. My "Unassigned Devices" disk still works fine. I can access /mnt/disk[1,2,3,4,5,6,7,8] fine and read all the data, but I cannot access /mnt/user/. I have pasted some commands below to demonstrate the errors.

         root@rack:/mnt# ls
         /bin/ls: cannot access 'user': Transport endpoint is not connected
         cache/  disk2/  disk4/  disk6/  disk8/  user/
         disk1/  disk3/  disk5/  disk7/  disks/  user0/

         root@rack:/mnt# ls -lah
         /bin/ls: cannot access 'user': Transport endpoint is not connected
         total 32K
         drwxr-xr-x 14 root   root  280 May 13 06:21 ./
         drwxr-xr-x 19 root   root  400 May 13 06:23 ../
         drwxrwxrwx  1 nobody users   0 May 13 06:21 cache/
         drwxrwxrwx 12 nobody users 215 Mar  3  2018 disk1/
         drwxrwxrwx  9 nobody users 137 Nov 16  2017 disk2/
         drwxrwxrwx  9 nobody users 137 Aug 10  2018 disk3/
         drwxrwxrwx  7 nobody users 124 Nov 16  2017 disk4/
         drwxrwxrwx  4 nobody users  36 Nov 16  2017 disk5/
         drwxrwxrwx  8 nobody users 119 Nov 16  2017 disk6/
         drwxrwxrwx  8 nobody users  91 Nov 16  2017 disk7/
         drwxrwxrwx  1 nobody users 124 May 12 07:38 disk8/
         drwxrwxrwx  3 nobody users  60 May 13 06:20 disks/
         d?????????  ? ?      ?       ?            ? user/
         drwxrwxrwx  1 nobody users 215 May 12 07:38 user0/

         root@rack:/mnt# cd ./user
         -bash: cd: ./user: Transport endpoint is not connected

         root@rack:/mnt# mount
         proc on /proc type proc (rw)
         sysfs on /sys type sysfs (rw)
         tmpfs on /dev/shm type tmpfs (rw)
         tmpfs on /var/log type tmpfs (rw,size=128m,mode=0755)
         /dev/sda1 on /boot type vfat (rw,noatime,nodiratime,flush,umask=0,shortname=mixed)
         /boot/bzmodules on /lib/modules type squashfs (ro)
         /boot/bzfirmware on /lib/firmware type squashfs (ro)
         /mnt on /mnt type none (rw,bind)
         shfs on /mnt/user type fuse.shfs (rw,nosuid,nodev,noatime,allow_other)
         /dev/md1 on /mnt/disk1 type xfs (rw,noatime,nodiratime)
         /dev/md2 on /mnt/disk2 type xfs (rw,noatime,nodiratime)
         /dev/md3 on /mnt/disk3 type xfs (rw,noatime,nodiratime)
         /dev/md4 on /mnt/disk4 type xfs (rw,noatime,nodiratime)
         /dev/md5 on /mnt/disk5 type xfs (rw,noatime,nodiratime)
         /dev/md6 on /mnt/disk6 type xfs (rw,noatime,nodiratime)
         /dev/md7 on /mnt/disk7 type xfs (rw,noatime,nodiratime)
         /dev/md8 on /mnt/disk8 type btrfs (rw,noatime,nodiratime)
         shfs on /mnt/user0 type fuse.shfs (rw,nosuid,nodev,noatime,allow_other)
         /dev/sdj1 on /mnt/disks/Kingston240SSD type xfs (rw,noatime,nodiratime,discard)
         /dev/sdk1 on /mnt/cache type btrfs (rw,noatime,nodiratime)
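     If a reboot isn't convenient, a possible first check (a sketch, not official Unraid guidance) is to see whether the shfs user-share process has died and, if so, lazily detach the stale FUSE mountpoint before restarting the array from the web UI:

         # Is the user-share FUSE helper still running?
         ps aux | grep '[s]hfs'

         # If it has crashed, lazily detach the dead mountpoint; then stop and
         # restart the array (or reboot) so /mnt/user is mounted cleanly again.
         umount -l /mnt/user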
  6. Disk 2 has been auto-disabled for about a week. It seems like I just got a bad disk; it's an 8TB IronWolf that's only about 9 months old. Diag attached, along with the SMART report for the disk that has died (Disk 2). The SMART numbers look very severe, but I don't fully understand their implication. In particular:

         ID# ATTRIBUTE_NAME        FLAGS   VALUE WORST THRESH FAIL RAW_VALUE
           1 Raw_Read_Error_Rate   POSR--  082   064   044    -    164804696

     I'd like to know whether the disk is broken enough that I should immediately send it off for warranty, or if I can just enable it again and it'll work fine for another few months. Thanks all.

     rack-smart-20190407-1138.zip
     rack-diagnostics-20190407-1142.zip
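     One way to get more signal on a suspect drive (a sketch; the device name is an assumption, so substitute the real one shown in the Unraid device list) is to run an extended SMART self-test and then review the result:

         # Start a long (extended) offline self-test; this can take many hours.
         smartctl -t long /dev/sdX

         # Once it completes, check the self-test log and attribute table.
         smartctl -a /dev/sdX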
  7. OK, I got it working -- honestly not sure how it was working before, as the only port forwarded is 443. I changed Validation to duckdns, added a DUCKDNSTOKEN variable, and set subdomains to wildcard. All good now, thanks very much!
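     Roughly the setup described above as a docker run sketch (the image and variable names follow the linuxserver letsencrypt template, but treat the exact spelling as an assumption and check the container's documentation):

         docker run -d --name letsencrypt \
           -p 443:443 \
           -e URL=yourdomain.duckdns.org \
           -e SUBDOMAINS=wildcard \
           -e VALIDATION=duckdns \
           -e DUCKDNSTOKEN=<your-duckdns-token> \
           linuxserver/letsencrypt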
  8. <------------------------------------------------->
     <------------------------------------------------->
     cronjob running on Mon Feb 11 20:09:20 AEDT 2019
     Running certbot renew
     Saving debug log to /var/log/letsencrypt/letsencrypt.log
     - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
     Processing /etc/letsencrypt/renewal/[...]
     - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
     Cert is due for renewal, auto-renewing...
     Plugins selected: Authenticator standalone, Installer None
     Running pre-hook command: if ps aux | grep [n]ginx: > /dev/null; then s6-svc -d /var/run/s6/services/nginx; fi
     Renewing an existing certificate
     Performing the following challenges:
     http-01 challenge for [...]
     http-01 challenge for www.[...]
     Waiting for verification...
     Cleaning up challenges
     Attempting to renew cert ([...]) from /etc/letsencrypt/renewal/[...].conf produced an unexpected error:
     Failed authorization procedure.
     [...] (http-01): urn:ietf:params:acme:error:connection :: The server could not connect to the client to verify the domain :: Fetching http://[...]/.well-known/acme-challenge/[...]: Timeout during connect (likely firewall problem),
     www.[...] (http-01): urn:ietf:params:acme:error:connection :: The server could not connect to the client to verify the domain :: Fetching http://[...]/.well-known/acme-challenge/T9kiSatf7ElU1UhFQHSwUAG4udfx58cCUOkRiXQ8Rac: Timeout during connect (likely firewall problem).
     Skipping.
     All renewal attempts failed. The following certs could not be renewed:
     /etc/letsencrypt/live/[...]/fullchain.pem (failure)
     - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
     All renewal attempts failed. The following certs could not be renewed:
     /etc/letsencrypt/live/[...]/fullchain.pem (failure)

     Hi guys, my cert expired so I checked out my logs and saw the failures above. Sensitive values replaced with [...]. I manually executed the update script a couple of times and got the above, the same as the cron-executed runs. I have been able to use this config for well over a year with no worries (and no changes on my part!), so I'm not sure what to do. I can still access my configured domain remotely on 443 like always, it's just that now I get a cert error. Any ideas? Is anyone else unable to renew for the same reason? Should I just nuke the container and reinstall it? Cheers.
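     For what it's worth, the http-01 challenge needs inbound port 80 to reach the container. A quick sanity check (a sketch -- the LAN IP is an assumption, and the challenge path will simply 404 if connectivity is fine) is to confirm something answers on port 80 at all, then repeat the test from outside your network (e.g. a phone off Wi-Fi):

         # A 404 here still proves the web server is answering on port 80;
         # a timeout suggests the port isn't reachable or isn't forwarded.
         curl -v http://192.168.1.10/.well-known/acme-challenge/test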
  9. Hi lurkers, if you want to start Firefox in full-screen mode, use this: https://addons.mozilla.org/en-US/firefox/addon/autofullscreen -- for whatever reason there isn't a command-line arg that does this.
  10. The timestamp was incorrect for me, so I applied a band-aid fix -- installing NTP inside the container. Note that if anyone else does this (bad) fix, you will need to re-install NTP after every update.

          docker exec -it MotionEye bash
          apt-get install ntp

      Then restart the container. Done. Note I have snapshots + Motion recording (h264) working on a pair of Amcrest IP2M-841B cameras. Thanks for the template.
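      A less fragile alternative, if the root cause is a timezone mismatch rather than host clock drift (an assumption, and whether this particular image honours TZ is also an assumption), is to hand the host's timezone to the container via the template instead of installing NTP inside it:

          # Extra flags to add to the container template / docker run line:
          #   -e TZ=Australia/Sydney
          #   -v /etc/localtime:/etc/localtime:ro

          # Verify the container's clock afterwards:
          docker exec MotionEye date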
  11. Apparently not with my network configuration, for whatever reason. The config was changed on the router; I never touched networking beyond network.cfg in Unraid.
  12. Fixed -- the issue was related to the MTU on the router being too high. I switched to a router with DD-WRT installed, changed the MTU to 1400, and the issue was resolved.
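      If you suspect an MTU problem, one way to probe the largest packet that passes without fragmentation (a sketch; 8.8.8.8 is just an example target, and the flag syntax is Linux iputils ping) is:

          # 1372 bytes of ICMP payload + 28 bytes of headers = 1400-byte packets.
          # Raise or lower -s until the pings stop or start succeeding.
          ping -c 4 -M do -s 1372 8.8.8.8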
  13. Thanks for the reply. As I said in the OP, not only have I tried a huge number of devices, I've also removed all the extra hardware (switches, extenders, firewall) and replaced it with a simple ISP-provided router, to no avail.
  14. Hi all. This setup has been working flawlessly for over a year now, but yesterday everything died. The only change was deleting my Zoneminder docker, which was constantly redlining the CPU for some reason. All the dockers stopped responding on their ports (although upon SSH inspection they were all functioning), and the WebGUI basically died too. I found that if I left it loading it would eventually show up. As you can see, dynamix.js is taking forever to load. If you leave it for anywhere between 5-15 minutes it finishes, but if you refresh the page (and break the cache) it has to redownload and you're screwed. During this time SMB worked fine, but none of my dockers responded to web requests (even though they were working -- Sonarr sent requests to SABnzbd, which successfully downloaded things).

      I disabled docker in /boot/config/docker.cfg and rebooted, to no avail. Installed the "Fix Common Problems" plugin; nothing significant there (lack of auto-updates for docker). Then I disabled all virtualisation and deleted all my docker files/configs and VMs, no fix. Then I removed all non-essential plugins, no fix // booted in safe mode, no fix. I have just formatted the USB and installed a fresh copy of UnRAID (keeping only my pro.key, super.dat and disk.cfg). Still no fix... same issue, dynamix.js takes ages to load and sometimes just fails. I've tried on several machines, phones, a TV's web browser and a VM, as well as two different routers, static/DHCP and a couple of other things too. I even culled my network layout down to just the Unraid server and a laptop connected to a simple ISP-provided router. I've just reinstalled again, no fix, and taken a diag: tower-diagnostics-20171116-0131.zip

      Not sure what to do! Hoping for some magician to assist! Here are some hardware details; UnRAID 6.3.5 by the way.

          M/B: Gigabyte Technology Co., Ltd. - H97N-WIFI
          CPU: Intel® Core™ i7-4790K CPU @ 4.00GHz
          HVM: Enabled
          IOMMU: Enabled
          Cache: 256 kB, 1024 kB, 8192 kB
          Memory: 16 GB (max. installable capacity 32 GB)
          Network: eth0: 1000 Mb/s, full duplex, mtu 1500
                   eth1: not connected
          Kernel: Linux 4.9.30-unRAID x86_64
          OpenSSL: 1.0.2k
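      One way to separate a slow web server from a slow network path (a sketch; the IP is an assumption, and the exact dynamix.js URL is best copied from the browser's developer tools) is to time the download of the offending file directly from another machine on the LAN:

          # Paste the dynamix.js URL from the browser's network tab:
          URL='http://192.168.1.10/path/copied/from/devtools/dynamix.js'
          # Time the raw transfer straight from the Unraid web server.
          curl -o /dev/null -s -w 'size: %{size_download} bytes, time: %{time_total}s\n' "$URL"

      If the raw transfer is fast, the bottleneck is more likely on the WebGUI/server side than in the network.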