kimocal

Everything posted by kimocal

  1. My wife and I are the only users of our Nextcloud instance. We recently started having an issue where the connection will die and I have to restart the container. Below is the syslog I am getting from the syslog server. Any ideas based on the syslog?

Apr 11 15:11:14 tower rsyslogd: [origin software="rsyslogd" swVersion="8.2102.0" x-pid="23380" x-info="https://www.rsyslog.com"] start
Apr 11 15:14:33 tower ool www[30194]: /usr/local/emhttp/plugins/dynamix/scripts/rsyslog_config
Apr 11 15:14:35 tower rsyslogd: [origin software="rsyslogd" swVersion="8.2102.0" x-pid="30614" x-info="https://www.rsyslog.com"] start
Apr 11 16:35:02 tower kernel: docker0: port 2(veth048550e) entered disabled state
Apr 11 16:35:02 tower kernel: veth4f0e390: renamed from eth0
Apr 11 16:35:02 tower kernel: docker0: port 2(veth048550e) entered disabled state
Apr 11 16:35:02 tower kernel: device veth048550e left promiscuous mode
Apr 11 16:35:02 tower kernel: docker0: port 2(veth048550e) entered disabled state
Apr 11 16:35:02 tower kernel: docker0: port 2(veth4583332) entered blocking state
Apr 11 16:35:02 tower kernel: docker0: port 2(veth4583332) entered disabled state
Apr 11 16:35:02 tower kernel: device veth4583332 entered promiscuous mode
Apr 11 16:35:02 tower kernel: docker0: port 2(veth4583332) entered blocking state
Apr 11 16:35:02 tower kernel: docker0: port 2(veth4583332) entered forwarding state
Apr 11 16:35:02 tower kernel: eth0: renamed from veth4daad3b
Apr 11 16:35:02 tower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth4583332: link becomes ready
Apr 11 16:35:02 tower kernel: docker0: port 9(vethdcf57da) entered disabled state
Apr 11 16:35:02 tower kernel: veth0389db9: renamed from eth0
Apr 11 16:35:03 tower kernel: docker0: port 9(vethdcf57da) entered disabled state
Apr 11 16:35:03 tower kernel: device vethdcf57da left promiscuous mode
Apr 11 16:35:03 tower kernel: docker0: port 9(vethdcf57da) entered disabled state
Apr 11 16:35:03 tower kernel: docker0: port 9(vethd8cddd5) entered blocking state
Apr 11 16:35:03 tower kernel: docker0: port 9(vethd8cddd5) entered disabled state
Apr 11 16:35:03 tower kernel: device vethd8cddd5 entered promiscuous mode
Apr 11 16:35:03 tower kernel: docker0: port 9(vethd8cddd5) entered blocking state
Apr 11 16:35:03 tower kernel: docker0: port 9(vethd8cddd5) entered forwarding state
Apr 11 16:35:03 tower kernel: eth0: renamed from veth976f3a7
Apr 11 16:35:03 tower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethd8cddd5: link becomes ready
Apr 11 17:15:04 tower kernel: veth2e3f8cd: renamed from eth0
Apr 11 17:15:04 tower kernel: eth0: renamed from veth79ca70e
  2. **UPDATE** I forgot to add the IP of the server in the remote part. It's working now.

____________________________________________________________________________

I am running unRAID 6.12.3 on this server at the moment. I am currently trying to troubleshoot a Nextcloud container that needs a restart every once in a while. I set up the local syslog server per the directions in this post, with the following settings initially:

I didn't see anything in the syslog_local share after 24 hours. I then changed the syslog server settings to this:

I've been letting it run and I am still not seeing anything in the syslog_local share. Thoughts?
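For context, the unRAID syslog settings end up generating an rsyslog forwarding rule (the rsyslog_config script shows up in the log above). A minimal sketch of what that rule looks like once the remote server field is actually filled in; the IP and port below are placeholders, not my settings:

```
# rsyslog forwarding rule (hypothetical values)
*.* @192.168.10.5:514    # single '@' = forward over UDP; '@@' would use TCP
```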
  3. Changed Status to Solved. Solution: Force Update on the containers.
  4. Here's an update that fixed the Container WebUI issues. I updated 2 of my docker containers that had new updates and they are now showing the WebUI link. I then force-updated some others and they are now showing the WebUI link as well (before/after screenshots were attached). After force updating all my containers, they are all showing the WebUI link.
  5. Another user going from 6.9.2 to 6.12.1 is reporting the same issue. I have asked them to post their diagnostics here to help.
  6. It's a bit odd that it's happening on 2 different unRAID servers. The common factor is that both were updated from 6.9.2.
  7. No. I clicked through every container on both boxes and none have a WebUI link: Both boxes were running for about 18 hours. I rebooted both of them about an hour ago and the issue is still present.
  8. **FIX** Force Update the containers to get the WebUI link again.

I upgraded both of my 6.9.2 boxes to 6.12.1 and there are no WebUI links when accessing the docker containers:

unRAID server #1
unRAID server #2

I can still access them by plugging in the correct IP:port. Diagnostics attached.

server01-diagnostics-20230621-1533.zip
server02-diagnostics-20230621-1533.zip

Tried multiple web browsers (Chrome, Firefox, Brave) and also cleared the cache on the browsers. No difference.
  9. Is anyone else's container not downloading? It was working fine a day ago for me.

Edit #1: It appears that it can't connect to eweka: TLS handshake failed for news.eweka.nl. I'm using PIA and OpenVPN.

Edit #2: I swapped over to WireGuard to see if there is a difference and am getting the same error. I then tried changing the port to 119 and setting Encryption to No, and it is able to connect. Setting CertCheck to No doesn't make a difference either with SSL enabled.

Edit #3: So if I'm using port 119 and WireGuard is up, should it be safe to run it that way? The news traffic is no longer encrypted, correct, or does PIA still mask the traffic when using 119? Another weird thing: after I restart NZBgetvpn with port 119 set, I can successfully connect to eweka, but still nothing downloads.

Edit #4: I tried using GrabIt on my Win box and was able to connect to eweka without any issue. Really at a loss now with NZBGet 😕

Edit #5: I ran this command from both the unRAID terminal and the container console:

openssl s_client -showcerts -connect news.website:563

The terminal returns a cert, but the container gets stuck on CONNECTED(00000003).

Edit #6: It must be eweka, as I am able to use newsdemon without any issue on their SSL ports =P

Edit #7: Last update for the time being. I disabled the VPN part of the container, and eweka now connects fine over both SSL ports. Maybe PIA updated something recently that broke the ability to connect. I tried routing the nzbget container traffic through a binhex-privoxy container with PIA and the same issue happens.
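For anyone else debugging this: a quick way to compare TLS reachability from the unRAID host versus from inside the VPN container is to run the same openssl test in both places. The container name below is a placeholder; substitute your own.

```sh
# From the unRAID terminal -- should print the server certificate chain:
openssl s_client -showcerts -connect news.eweka.nl:563 </dev/null | head -20

# Same test from inside the VPN container -- if this hangs at
# CONNECTED(00000003), the handshake is being blocked inside the tunnel:
docker exec nzbgetvpn openssl s_client -showcerts -connect news.eweka.nl:563 </dev/null | head -20
```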
  10. I have a general best-practices question: are there any issues to consider when running multiple DBs in one MariaDB container? I originally set up MariaDB to be used with Nextcloud. I then created another DB to use with my HomeAssistant VM. Both have been working great. I'm about to add another DB for SuiteCRM to the same MariaDB docker instance. Is it better practice to have multiple MariaDB dockers for this scenario, or should there not be any real issues doing this? There will only be 2 users with SuiteCRM. There are currently only 2 Nextcloud users too.
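In case it helps frame the question: each app can get its own database and dedicated user inside the single container, so the apps never touch each other's data. A rough sketch of what that would look like for SuiteCRM (container name, user, and password below are placeholders):

```sh
# Create an isolated database and dedicated user for SuiteCRM inside the
# existing MariaDB container (all names/passwords here are placeholders):
docker exec -it mariadb mysql -uroot -p -e "
  CREATE DATABASE suitecrm CHARACTER SET utf8mb4;
  CREATE USER 'suitecrm'@'%' IDENTIFIED BY 'changeme';
  GRANT ALL PRIVILEGES ON suitecrm.* TO 'suitecrm'@'%';
  FLUSH PRIVILEGES;"
```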
  11. I don't know why it won't work when placed in the AppData location but I downloaded the cert.pem file linked above and placed it in my "Downloads" folder. I then modified the settings in NZBGETVPN and it started downloading just now.
  12. I don't know if it will work for this docker but for the jshridha/docker-nzbgetvpn version, I copied the cert.pem file from here and placed it in my "Downloads" folder. I then modified the settings in NZBGETVPN and it started downloading just now.
  13. This fixed the recent connection issues I've been having with the PIA VPN in nzbgetvpn too.
  14. How can I trigger a docker script to run by accessing a specific website/IP? I was watching this video about a PiHole docker and want to do something similar, minus the Alexa part. I know I can run a docker script through the command line via the docker exec command, but I only know how to run a script via a cron job or the command line. The end goal is to make it so my wife or I only need to click a bookmark in a browser to run either the blockdomains or unblockdomains scripts shown in the above video. These bookmarks would just point to local LAN addresses of the PiHole container, like 192.168.10.5:XXXX/block or similar, as shown in the YouTube video. Any tips or links to learn more about how to do this?
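One possible approach, sketched under assumptions (socat available on the host; a PiHole container named pihole with block/unblock scripts at /scripts/ inside it; all of these names are hypothetical): run a tiny listener that maps the requested URL path to a docker exec call.

```sh
#!/bin/sh
# handler.sh -- run once per HTTP request by socat (all names are placeholders).
# Start the listener on the unRAID host with:
#   socat TCP-LISTEN:8080,reuseaddr,fork EXEC:/boot/scripts/handler.sh
# A browser bookmark to http://192.168.10.5:8080/block then triggers the script.
read -r METHOD REQ_PATH PROTO
case "$REQ_PATH" in
  /block)   docker exec pihole /scripts/blockdomains.sh ;;
  /unblock) docker exec pihole /scripts/unblockdomains.sh ;;
esac
# Send a minimal HTTP response so the browser shows a confirmation
printf 'HTTP/1.0 200 OK\r\nContent-Type: text/plain\r\n\r\nRan: %s\r\n' "$REQ_PATH"
```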
  15. Thanks @uberchuckie It's up and running. I pointed to my Maria DB docker so not sure if that's what fixed it. I was able to add my Ubiquiti ER-X device for it to monitor, as all I'm really interested in is tracking my monthly data usage thru my ISP. Beyond that I haven't had much time to tinker with it though.
  16. I too am having the issue of the default login/password not working. I've stopped the container, deleted the contents of the appdata folder, and restarted. I've also deleted the container image and reinstalled it.

Sometimes when accessing the web interface it gives this error:

DB Error 2002: No such file or directory

A common error I've seen in the logs (ANSI color codes stripped) is:

MySQL ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION

And after it's been running for a while it seems to repeat this MariaDB entry over and over:

Starting MariaDB...
200731 10:43:51 mysqld_safe Logging to '/config/databases/ea65e1ef1086.err'.
200731 10:43:51 mysqld_safe Starting mariadbd daemon with databases from /config/databases
Database exists.
Starting MariaDB...
200731 10:43:52 mysqld_safe Logging to '/config/databases/ea65e1ef1086.err'.
200731 10:43:52 mysqld_safe Starting mariadbd daemon with databases from /config/databases
Database exists.
(this repeats every second)

Attached is a copy/paste of the Observium log. I've also tried running the container as Privileged. I'm running unRAID 6.8.3.

**EDIT** I also haven't touched the config.php in the Observium appdata folder. Are we able to point to an existing MariaDB container?

**EDIT #2** Somehow I got it working. I edited the config.php to point to my MariaDB container used by other containers:

// Database config --- This MUST be configured
$config['db_extension'] = 'mysqli';
$config['db_host'] = '10.0.0.90:3306';
$config['db_user'] = 'observium';
$config['db_pass'] = 'xxxxxxxxxxxxxxxxxx'; // <--- SAME PASSWORD GENERATED IN OBSERVIUM LOGS
$config['db_name'] = 'observium';

// Base directory
#$config['install_dir'] = "/opt/observium";

// Default community list to use when adding/discovering
$config['snmp']['community'] = array("public");

// Authentication Model
$config['auth_mechanism'] = "mysql"; // default, other options: ldap, http-auth, please see documentation for config help

// Enable alerter
// $config['poller-wrapper']['alerter'] = TRUE;

//$config['web_show_disabled'] = FALSE; // Show or not disabled devices on major pages.

// Set up a default alerter (email to a single address)
//$config['email']['default'] = "user@your-domain";
//$config['email']['from'] = "Observium <observium@your-domain>";
//$config['email']['default_only'] = TRUE;

// End config.php

I also edited the Observium docker container settings by enabling Privileged, deleted the appdata files except the edited config.php, and lo and behold, it's working now. No idea what the problem was.

Observium_LOG_07312020.txt
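A quick sanity check for this kind of external-DB setup, assuming the Observium image ships a mysql client and using the same credentials as the config.php above, is to connect from inside the container:

```sh
# Verify the Observium container can reach the external MariaDB instance
# (host/user/database taken from config.php; it will prompt for the password):
docker exec -it Observium mysql -h 10.0.0.90 -P 3306 -u observium -p observium -e "SHOW TABLES;"
```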
  17. Great. Once my replacement SSD arrives I can give this a go. Thanks again.
  18. I have only 1 cache drive formatted as BTRFS and want to replace it with a larger one. I want to make sure this step is correct: Does that mean I just create 2 slots in the Cache Pool like below? Then in Slot 1, I choose the larger SSD to replace the existing one? Then afterwards I run the following command since the replacement drive is larger: btrfs fi resize 1:max /mnt/cache
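For reference, two read-only checks that should confirm the pool actually grew after the swap and the resize command:

```sh
# Show the devices in the pool -- the new, larger SSD should be listed:
btrfs filesystem show /mnt/cache

# Show allocation -- total device size should now match the new SSD:
btrfs filesystem usage /mnt/cache
```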
  19. I'm currently rebuilding parity after replacing a 3 TB with a new 14 TB, this plugin is going to be helpful.
  20. When editing a monitor/camera, be sure to turn on Advanced Settings:
  21. I keep trying to export the Timelapse but it seems to get stuck and never creates the export.
  22. Thanks for sharing. Here's my modified script that also includes Unassigned Devices mounting/unmounting the USB drive.

#!/bin/sh
LOGFILE="/mnt/user/logs/b0rgLOG.txt"
LOGFILE2="/mnt/user/logs/b0rg-RClone-DETAILS.txt"

# Exit if borg or rclone is already running
if pgrep "borg" > /dev/null || pgrep "rclone" > /dev/null
then
    echo "$(date "+%m-%d-%Y %T") : Backup already running, exiting" 2>&1 | tee -a $LOGFILE
    exit
fi

# Exit if parity sync is running
#PARITYCHK=$(/root/mdcmd status | egrep STARTED)
#if [[ $PARITYCHK == *"STARTED"* ]]; then
#    echo "Parity check running, exiting"
#    exit
#fi

# Find and mount the USB drive via Unassigned Devices
UD_DRIVE=$(ls -l /dev/disk/by-id/*My_Passport* | grep -v part1 | grep -o '...$')
echo "The My Passport HDD is located at $UD_DRIVE"
echo "Mounting My Passport HDD"
echo "....."
/usr/local/sbin/rc.unassigned mount /dev/$UD_DRIVE
echo "My Passport HDD Mounted"
echo "....."

# This is the location your Borg program will store the backup data to
export BORG_REPO='/mnt/disks/My_Passport/b0rg'

# This is the location you want Rclone to send the BORG_REPO to
#export CLOUDDEST='GDrive:/Backups/borg/TDS-Repo-V2/'

# Setting this, so you won't be asked for your repository passphrase:
export BORG_PASSPHRASE='XXXXXXXXXXXXXXXXXXXXXXXXXXXX'
# or this to ask an external program to supply the passphrase: (I leave this blank)
#export BORG_PASSCOMMAND=''

# I store the cache on the cache drive instead of /tmp so Borg has persistent records after a reboot.
export BORG_CACHE_DIR='/mnt/user/appdata/borg/cache/'
export BORG_BASE_DIR='/mnt/user/appdata/borg/'

# Back up the most important directories into an archive (I keep a list of excluded directories in the excluded.txt file)
SECONDS=0
BORG_OPTS="--verbose --info --list --filter AMEx --files-cache=mtime,size --stats --show-rc --compression lz4 --exclude-caches"

echo "$(date "+%m-%d-%Y %T") : Borg backup has started" 2>&1 | tee -a $LOGFILE
borg create $BORG_OPTS $BORG_REPO::'{hostname}-{now}' /mnt/user/backup/testB0RG/ >> $LOGFILE2 2>&1
backup_exit=$?

# Use the `prune` subcommand to maintain 7 daily, 4 weekly and 6 monthly
# archives of THIS machine. The '{hostname}-' prefix is very important to
# limit prune's operation to this machine's archives and not apply to
# other machines' archives also:
echo "$(date "+%m-%d-%Y %T") : Borg pruning has started" 2>&1 | tee -a $LOGFILE
borg prune --list --prefix '{hostname}-' --show-rc --keep-daily 7 --keep-weekly 4 --keep-monthly 6 >> $LOGFILE2 2>&1
prune_exit=$?

# Use the highest exit code as the global exit code
global_exit=$(( backup_exit > prune_exit ? backup_exit : prune_exit ))

# Report success or failure
if [ ${global_exit} -eq 0 ]; then
    borgstart=$SECONDS
    echo "$(date "+%m-%d-%Y %T") : Borg backup completed in $((borgstart / 3600))h:$((borgstart % 3600 / 60))m:$((borgstart % 60))s" 2>&1 | tee -a $LOGFILE
    # Reset timer
    # SECONDS=0
    # echo "$(date "+%m-%d-%Y %T") : Rclone Borg sync has started" >> $LOGFILE
    # rclone sync $BORG_REPO $CLOUDDEST -P --stats 1s -v 2>&1 | tee -a $LOGFILE2
    # rclonestart=$SECONDS
    # echo "$(date "+%m-%d-%Y %T") : Rclone Borg sync completed in $((rclonestart / 3600))h:$((rclonestart % 3600 / 60))m:$((rclonestart % 60))s" 2>&1 | tee -a $LOGFILE
else
    echo "$(date "+%m-%d-%Y %T") : Borg exited with error code: $global_exit" 2>&1 | tee -a $LOGFILE
fi

echo "....."
echo "Unmounting My Passport HDD"
/usr/local/sbin/rc.unassigned umount /dev/$UD_DRIVE
echo "My Passport HDD Unmounted"
echo "....."

exit ${global_exit}
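As a follow-up, a couple of commands that can verify the repository after a run, assuming the same repo path and BORG_* exports as the script above:

```sh
# List archives to confirm the new backup landed (uses BORG_PASSPHRASE if set):
export BORG_BASE_DIR='/mnt/user/appdata/borg/'
borg list /mnt/disks/My_Passport/b0rg

# Lightweight integrity check (add --verify-data for a deep, slow check):
borg check --repository-only /mnt/disks/My_Passport/b0rg
```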
  23. Thanks for sharing. I just installed it. Now to figure out how to use it, lol.
  24. Thanks for the guidance. I'll do the update later tonight. I've been running 6.8.0-rc4 since it came out, and this issue only just popped up.
  25. I was recently notified that something is filling my log file. Which file should I look at to determine what is eating up the space? Attached are the system diagnostics. TIA

tower-diagnostics-20200113-1751.zip
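In the meantime, a few read-only commands that can narrow down what is consuming the log space:

```sh
# How full is the log filesystem?
df -h /var/log

# Largest files under /var/log, biggest first:
du -ah /var/log | sort -rh | head -10

# Watch new entries arrive in the main syslog:
tail -f /var/log/syslog
```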