Waddoo

Members
  • Posts

    29
  • Joined

  • Last visited

Converted

  • Gender
    Male
  • Location
    Missouri
  • Personal Text
    An Engineer lost in the networking.



  1. Would that mean I would need to pass these arguments as logic into the rclone script? For context, the backup runs at 02:45 and an rclone script then pushes the backups into the cloud at 06:00. I tried putting the third flag in an if statement, but it is still not running:

```shell
# Check if the third argument is "true"
if [ "$3" = "true" ]; then
    echo "Deleting previous log"
    rm /mnt/user/backups/BackBlaze_Logs/backups_log.txt

    echo "Running RClone Sync of backups Share to B2 backups Bucket"
    rclone sync \
        --progress \
        --stats-one-line-date \
        --transfers 4 \
        --verbose \
        --exclude ".Recycle.Bin/**" \
        --exclude "UniFi_Protect/**" \
        --links \
        --log-file /mnt/user/backups/BackBlaze_Logs/backups_log.txt \
        /mnt/user/backups/ \
        b2_buckets:my_bucket

    echo "Updating permissions of log file"
    chmod 755 /mnt/user/backups/BackBlaze_Logs/backups_log.txt
else
    echo "Skipping backup operation as the third argument is not 'true'"
fi
```
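The conditional above hinges on `$3` actually being set when the script is invoked. A minimal sketch of that argument check, assuming bash (the function name `check_third` and the sample arguments are made up for illustration, not part of the original script):

```shell
#!/bin/bash
# check_third: prints "run" when its third argument is exactly "true",
# otherwise prints "skip" -- mirrors the if-test in the script above.
check_third() {
    if [ "${3:-}" = "true" ]; then
        echo "run"
    else
        echo "skip"
    fi
}

check_third /mnt/user/backups b2_buckets:my_bucket true   # prints "run"
check_third /mnt/user/backups b2_buckets:my_bucket        # prints "skip"
```

If the scheduler launches the script without passing any arguments, `$3` is empty and the else branch always fires, which would produce exactly the "still not running" behavior described.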
  2. Highly doubt it. A checksum is computed against a single file. I think the only way to generate one checksum for a set of files is to zip/tar/compress them and checksum that archive each time.
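One way to sketch a single digest over many files without rebuilding an archive each time, assuming GNU coreutils (`dir_checksum` is a made-up helper name): hash every file's contents together with its relative path, then hash the sorted list, so any changed, added, or removed file changes the final digest.

```shell
#!/bin/bash
# dir_checksum DIR: print one digest covering every file under DIR.
# Sorting the file list makes the result deterministic across runs.
dir_checksum() {
    (cd "$1" && find . -type f -print0 | sort -z \
        | xargs -0 sha256sum | sha256sum | cut -d' ' -f1)
}

# Demo against a throwaway directory:
tmp=$(mktemp -d)
echo demo > "$tmp/a.txt"
dir_checksum "$tmp"   # prints a single 64-hex-character digest
```

The trade-off versus the zip/tar approach is that nothing gets written to disk, but the digest format is specific to this helper rather than a standard archive checksum.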
  3. Ran verification last night, same results: I continue to get emails about updated .nfo files from the media and Nextcloud shares. Likewise, exclusion rules do not seem to be respected. Will there be upcoming updates to the app so that updated hashes don't trigger an email, and only files that are actually corrupted are included?
  4. I am currently trying to iron out my File Integrity plugin's settings; however, multiple excluded shares and custom files continue to appear in my verification logs, namely the Nextcloud and media shares (Linux ISOs) and `*.log` files. Is there something I am not selecting correctly in the checkbox for excluded shares? What is the proper format to exclude multiple custom files? It reads: However, I am finding that `*.nfo` and `*.log` files continue to appear in the logs, even for mere modifications. I am running Unraid 6.11.5 and have attached a screenshot of my settings and a snippet of the verification log. Thanks, any help would be greatly appreciated.

```
Event: unRAID file corruption
Subject: Notice [ALEXANDRIA] - bunker verify command
Description: Found 2 files with MD5 hash key corruption
Importance: alert
MD5 hash key mismatch, /mnt/disk2/backups/Unraid/CA_Backup/Appdata_Backup/[email protected]/CA_backup.tar is corrupted
MD5 hash key mismatch, /mnt/disk2/backups/Unraid/CA_Backup/Appdata_Backup/[email protected]/backup.log is corrupted
MD5 hash key mismatch (updated), /mnt/disk2/media/LINUXISO/mint.nfo was modified
```
  5. So it turns out that Unraid 6.10 changed this. For context, I am running Unraid 6.9.x, so the fix landed in the version right after mine [link](https://wiki.unraid.net/Manual/Release_Notes/Unraid_OS_6.10.0#Docker):

> The new ipvlan mode is introduced to battle the crashes some people experience when using macvlan mode. If that is your case, change to ipvlan mode and test. Changing of mode does not require reconfiguring anything on the Docker level, internally everything is being taken care of.

Sigh, maybe it's time to upgrade sooner rather than later.
  6. Yes, in fact it just crashed recently...

```
Apr 25 12:04:41 Alexandria kernel: WARNING: CPU: 9 PID: 20303 at net/netfilter/nf_conntrack_core.c:1120 __nf_conntrack_confirm+0x9b/0x1e6 [nf_conntrack]
Apr 25 12:04:41 Alexandria kernel: Modules linked in: nvidia_uvm(PO) xt_mark macvlan xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ip6table_mangle ip6table_nat iptable_mangle nf_tables vhost_net tun vhost vhost_iotlb tap veth xt_nat xt_tcpudp xt_conntrack nf_conntrack_netlink nfnetlink xt_addrtype br_netfilter xfs nfsd lockd grace sunrpc md_mod nvidia_drm(PO) nvidia_modeset(PO) drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops nvidia(PO) drm backlight agpgart iptable_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libblake2s blake2s_x86_64 libblake2s_generic libchacha ip6table_filter ip6_tables iptable_filter ip_tables x_tables mlx4_en mlx4_core r8169 realtek edac_mce_amd kvm_amd kvm mpt3sas crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel crypto_simd cryptd wmi_bmof i2c_piix4 glue_helper i2c_core wmi raid_class scsi_transport_sas k10temp
Apr 25 12:04:41 Alexandria kernel: ccp ahci rapl libahci button acpi_cpufreq [last unloaded: mlx4_core]
Apr 25 12:04:41 Alexandria kernel: CPU: 9 PID: 20303 Comm: kworker/9:2 Tainted: P S O 5.10.28-Unraid #1
Apr 25 12:04:41 Alexandria kernel: Hardware name: Micro-Star International Co., Ltd. MS-7B79/X470 GAMING PLUS MAX (MS-7B79), BIOS H.60 06/11/2020
Apr 25 12:04:41 Alexandria kernel: Workqueue: events macvlan_process_broadcast [macvlan]
Apr 25 12:04:41 Alexandria kernel: RIP: 0010:__nf_conntrack_confirm+0x9b/0x1e6 [nf_conntrack]
```

Unsure what to do next about this issue; we aren't even running similar hardware based on your logs.
  7. All of my "public" containers remain accessible through my domain via nginx. I'm not sure if that is because I do not hardcode IPs, but I'm unsure of y'all's setup.
  8. Will changing the network type affect my Docker network as it is? Or nginx, for example?
  9. I have been getting these macvlan kernel panics, at least I think I am...

```
Mar 22 19:39:10 Alexandria kernel: ------------[ cut here ]------------
Mar 22 19:39:10 Alexandria kernel: WARNING: CPU: 5 PID: 1298 at net/netfilter/nf_conntrack_core.c:1120 __nf_conntrack_confirm+0x9b/0x1e6 [nf_conntrack]
Mar 22 19:39:10 Alexandria kernel: Modules linked in: nvidia_uvm(PO) xt_mark macvlan xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ip6table_mangle ip6table_nat iptable_mangle nf_tables vhost_net tun vhost vhost_iotlb tap veth xt_nat xt_tcpudp xt_conntrack nf_conntrack_netlink nfnetlink xt_addrtype br_netfilter xfs nfsd lockd grace sunrpc md_mod nvidia_drm(PO) nvidia_modeset(PO) drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops nvidia(PO) drm backlight agpgart iptable_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libblake2s blake2s_x86_64 libblake2s_generic libchacha ip6table_filter ip6_tables iptable_filter ip_tables x_tables mlx4_en mlx4_core r8169 realtek edac_mce_amd kvm_amd kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel crypto_simd cryptd mpt3sas wmi_bmof glue_helper i2c_piix4 wmi k10temp raid_class scsi_transport_sas i2c_core
Mar 22 19:39:10 Alexandria kernel: ccp rapl ahci libahci button acpi_cpufreq [last unloaded: mlx4_core]
Mar 22 19:39:10 Alexandria kernel: CPU: 5 PID: 1298 Comm: kworker/5:1 Tainted: P S O 5.10.28-Unraid #1
Mar 22 19:39:10 Alexandria kernel: Hardware name: Micro-Star International Co., Ltd. MS-7B79/X470 GAMING PLUS MAX (MS-7B79), BIOS H.60 06/11/2020
Mar 22 19:39:10 Alexandria kernel: Workqueue: events macvlan_process_broadcast [macvlan]
Mar 22 19:39:10 Alexandria kernel: RIP: 0010:__nf_conntrack_confirm+0x9b/0x1e6 [nf_conntrack]
```

I am also using a UDMP-SE. How would I go about switching to ipvlan? Reading through threads, it sounds like it is something to be changed in Unraid?
  10. 

```
Mar 22 19:39:10 Alexandria kernel: WARNING: CPU: 5 PID: 1298 at net/netfilter/nf_conntrack_core.c:1120 __nf_conntrack_confirm+0x9b/0x1e6 [nf_conntrack]
Mar 22 19:39:10 Alexandria kernel: Modules linked in: nvidia_uvm(PO) xt_mark macvlan xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ip6table_mangle ip6table_nat iptable_mangle nf_tables vhost_net tun vhost vhost_iotlb tap veth xt_nat xt_tcpudp xt_conntrack nf_conntrack_netlink nfnetlink xt_addrtype br_netfilter xfs nfsd lockd grace sunrpc md_mod nvidia_drm(PO) nvidia_modeset(PO) drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops nvidia(PO) drm backlight agpgart iptable_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libblake2s blake2s_x86_64 libblake2s_generic libchacha ip6table_filter ip6_tables iptable_filter ip_tables x_tables mlx4_en mlx4_core r8169 realtek edac_mce_amd kvm_amd kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel crypto_simd cryptd mpt3sas wmi_bmof glue_helper i2c_piix4 wmi k10temp raid_class scsi_transport_sas i2c_core
Mar 22 19:39:10 Alexandria kernel: ccp rapl ahci libahci button acpi_cpufreq [last unloaded: mlx4_core]
Mar 22 19:39:10 Alexandria kernel: CPU: 5 PID: 1298 Comm: kworker/5:1 Tainted: P S O 5.10.28-Unraid #1
Mar 22 19:39:10 Alexandria kernel: Hardware name: Micro-Star International Co., Ltd. MS-7B79/X470 GAMING PLUS MAX (MS-7B79), BIOS H.60 06/11/2020
Mar 22 19:39:10 Alexandria kernel: Workqueue: events macvlan_process_broadcast [macvlan]
Mar 22 19:39:10 Alexandria kernel: RIP: 0010:__nf_conntrack_confirm+0x9b/0x1e6 [nf_conntrack]
```

Is this similar to the error this thread was getting? How has the stability been after switching to ipvlan? I am still running Unraid OS 6.9.2.
  11. I have been trying to set up Authentik following the video posted by Ibracorp. However, I am getting the following issues using this config (nginx_advanced), where I replaced the proxy_pass at ~line 30 with my serverIP:9000 and replaced ~line 53 with my auth.domain.com... URL. Once set up, my app.domain.com gets the advanced config but resolves to the wrong port, ending up at app.domain.com:4443, which is my Nginx Proxy Manager internal Docker port; nowhere is Authentik set up to redirect to that port. Otherwise, deviations from this setup, or replacing the proxy_pass at ~line 9, produce server error 500. Unsure if related: Authentik shows my Outpost integration as unhealthy, even though I am pointing it to unix:///var/run/docker.sock as noted in the environment variable and documentation. Would this be causing my bad redirect? Anyone willing to help me set this up? Also, why was npm added to npm in the video? nginx_advanced.txt
  12. Do you only use the docker-compose or are you building up your unraid server with ansible playbooks?
  13. Curious, how are you using docker-compose? Preface: I am using CA Backup. But I want to automate the installation of my Docker containers using Ansible as a learning exercise. Instead of clicking through and setting up all the docker run parameters, they can all be contained within a docker-compose template/file. Thoughts?
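A rough sketch of the compose idea mentioned above (the image name and port are placeholders for illustration, not an actual stack): every flag that would live in a long `docker run` command becomes a declarative line in a YAML file that Ansible, or plain `docker compose`, can apply.

```shell
#!/bin/bash
# Write a minimal compose file; each docker run parameter (image, port
# mapping, restart policy) becomes one declarative line.
cat > docker-compose.yml <<'EOF'
services:
  whoami:
    image: traefik/whoami   # placeholder image
    ports:
      - "8080:80"
    restart: unless-stopped
EOF

# docker compose up -d   # commented out: requires a running Docker daemon
```

From Ansible, the same file could be deployed with a template task and brought up with a command or community module, which keeps the container definitions in version control rather than in the Unraid UI.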
  14. I seem to be having a similar no-handshake problem, seen in the client-side logs of my WireGuard configuration. The caveat to my issue is that I have a tested method to both make it work and break my WireGuard instance, meaning I can replicate the issue but cannot understand why it occurs. My server has two network connections, br0 and br1: br0 is a 10GbE NIC, while br1 is the motherboard's 1GbE connection. The IP scheme is identical apart from the last octet, .14 (br1) and .15 (br0). My WireGuard is set up in a 'Remote access to LAN' configuration. When I add a default route via br0 (post-array start, since Unraid defaults to using br1 as the default route [a separate issue currently solved with a boot script]), WireGuard can no longer establish the handshake between my phone and the Unraid server. When I type `ip route delete default dev br0` in the console and restart the VPN connection from my client, it connects and I can sign in and see my Unraid dashboard with no handshake issues. Any ideas why this may be occurring? Perhaps this is a bad networking configuration and WireGuard exacerbates my problem? My other Docker containers and DDNS setups all continue to work as intended, same as before the 10GbE addition.
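A side note on the route juggling described above, assuming iproute2 (the gateway address below is made up, and `default_dev` is a hypothetical helper, not from the post): it can help to script a quick check of which interface currently holds the default route before and after WireGuard connects.

```shell
#!/bin/bash
# default_dev: read `ip route show default` style output on stdin and
# print the outgoing device (e.g. br0 or br1).
default_dev() {
    sed -n 's/.* dev \([^ ]*\).*/\1/p' | head -n1
}

# Typical workflow from the post (needs root, shown here as comments):
#   ip route show default
#   ip route delete default dev br0
#   ip route add default via 192.168.1.1 dev br0   # made-up gateway

echo "default via 192.168.1.1 dev br1" | default_dev   # prints "br1"
```

With two bridges on the same subnet, logging the output of `ip route show default | default_dev` around each test makes it unambiguous which NIC replies were actually leaving through when the handshake fails.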
  15. DuckDNS does not dictate whether you can use http vs https. But yes, I am using DuckDNS and serving my external connections via https. Look into nginx; there are multiple Docker options: Nginx Proxy Manager, SWAG, etc.