spants

Community Developer

Posts posted by spants

  1. Sometimes my SMB server (an old D-Link DNS-323 unit running ALT-F firmware, which supports larger drives) sleeps or disconnects, and my rclone beta backup then seems to write to /mnt/disks/xxxxxx (or RAM?), causing my Unraid server to lock up.

     

    Not sure if it's Unassigned Devices or the external unit causing the problem.

     

    Can rclone talk to an SMB share with a username/password?
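
    On the lock-up: one mitigation is to guard the backup so it refuses to run when the share is not actually mounted, since rclone writing into an unmounted /mnt/disks path lands on Unraid's RAM-backed root filesystem. A minimal sketch, assuming the backup is driven by a script and /mnt/disks/backup is a hypothetical Unassigned Devices mount point:

    #!/bin/bash
    # Abort if the SMB share is not mounted, so rclone cannot write into
    # the empty mount-point directory (which lives in RAM on Unraid).
    # /mnt/disks/backup and the sync paths are hypothetical examples.
    if ! mountpoint -q /mnt/disks/backup; then
        echo "SMB share not mounted - skipping backup" >&2
        exit 1
    fi
    rclone sync /mnt/user/data /mnt/disks/backup/data

    (On the direct question: recent rclone releases also ship a native SMB backend that takes a host, username and password via rclone config, which would avoid the mount entirely.)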

  2. So I noticed a disk problem just before the drive failed completely.

     

    I decided to run a Plan with unBalance (no writing to disks) to see if I could move the files from the emulated disk to real ones... it started reading, then disk 7 disappeared completely.

    What is the best way to proceed without losing the data on the emulated disk?

     

    This is the log for the disk 7 error. I am now on Unraid 6.7.2:

    Nov 10 23:55:20 Tower kernel: XFS (md7): Mounting V4 Filesystem
    Nov 10 23:55:20 Tower kernel: XFS (md7): Starting recovery (logdev: internal)
    Nov 10 23:55:21 Tower kernel: XFS (md7): Internal error XFS_WANT_CORRUPTED_GOTO at line 1862 of file fs/xfs/libxfs/xfs_alloc.c.  Caller __xfs_free_extent+0xdf/0x146 [xfs]
    Nov 10 23:55:21 Tower kernel: CPU: 3 PID: 14896 Comm: mount Not tainted 4.19.56-Unraid #1
    Nov 10 23:55:21 Tower kernel: Hardware name: System manufacturer System Product Name/P8Z68-V PRO, BIOS 3603 11/09/2012
    Nov 10 23:55:21 Tower kernel: Call Trace:
    Nov 10 23:55:21 Tower kernel: dump_stack+0x5d/0x79
    Nov 10 23:55:21 Tower kernel: xfs_free_ag_extent+0x3d2/0x5ff [xfs]
    Nov 10 23:55:21 Tower kernel: __xfs_free_extent+0xdf/0x146 [xfs]
    Nov 10 23:55:21 Tower kernel: xfs_trans_free_extent+0x27/0x5d [xfs]
    Nov 10 23:55:21 Tower kernel: xfs_efi_recover+0x14b/0x199 [xfs]
    Nov 10 23:55:21 Tower kernel: xlog_recover_process_efi+0x2d/0x43 [xfs]
    Nov 10 23:55:21 Tower kernel: xlog_recover_process_intents+0xa6/0x1b0 [xfs]
    Nov 10 23:55:21 Tower kernel: xlog_recover_finish+0x13/0x80 [xfs]
    Nov 10 23:55:21 Tower kernel: xfs_log_mount_finish+0x5a/0xc3 [xfs]
    Nov 10 23:55:21 Tower kernel: xfs_mountfs+0x50d/0x72f [xfs]
    Nov 10 23:55:21 Tower kernel: ? xfs_mru_cache_create+0x12b/0x151 [xfs]
    Nov 10 23:55:21 Tower kernel: xfs_fs_fill_super+0x448/0x527 [xfs]
    Nov 10 23:55:21 Tower kernel: ? xfs_test_remount_options+0x53/0x53 [xfs]
    Nov 10 23:55:21 Tower kernel: mount_bdev+0x12f/0x17c
    Nov 10 23:55:21 Tower kernel: mount_fs+0x10/0x6b
    Nov 10 23:55:21 Tower kernel: vfs_kern_mount+0x62/0xfa
    Nov 10 23:55:21 Tower kernel: do_mount+0x7b3/0xa2f
    Nov 10 23:55:21 Tower kernel: ? __kmalloc_track_caller+0x65/0x122
    Nov 10 23:55:21 Tower kernel: ? _copy_from_user+0x2f/0x4d
    Nov 10 23:55:21 Tower kernel: ? memdup_user+0x39/0x55
    Nov 10 23:55:21 Tower kernel: ksys_mount+0x6d/0x92
    Nov 10 23:55:21 Tower kernel: __x64_sys_mount+0x1c/0x1f
    Nov 10 23:55:21 Tower kernel: do_syscall_64+0x57/0xf2
    Nov 10 23:55:21 Tower kernel: entry_SYSCALL_64_after_hwframe+0x44/0xa9
    Nov 10 23:55:21 Tower kernel: RIP: 0033:0x14ee3912bfca
    Nov 10 23:55:21 Tower kernel: Code: 48 8b 0d c9 7e 0c 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 49 89 ca b8 a5 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 96 7e 0c 00 f7 d8 64 89 01 48
    Nov 10 23:55:21 Tower kernel: RSP: 002b:00007ffe41fa7648 EFLAGS: 00000206 ORIG_RAX: 00000000000000a5
    Nov 10 23:55:21 Tower kernel: RAX: ffffffffffffffda RBX: 000000000040d2b0 RCX: 000014ee3912bfca
    Nov 10 23:55:21 Tower kernel: RDX: 000000000040d4c0 RSI: 000000000040d540 RDI: 000000000040d520
    Nov 10 23:55:21 Tower kernel: RBP: 000014ee392baf24 R08: 0000000000000000 R09: 0000000000000000
    Nov 10 23:55:21 Tower kernel: R10: 0000000000000c00 R11: 0000000000000206 R12: 0000000000000000
    Nov 10 23:55:21 Tower kernel: R13: 0000000000000c00 R14: 000000000040d520 R15: 000000000040d4c0
    Nov 10 23:55:21 Tower kernel: XFS (md7): Internal error xfs_trans_cancel at line 1041 of file fs/xfs/xfs_trans.c.  Caller xfs_efi_recover+0x15e/0x199 [xfs]
    Nov 10 23:55:21 Tower kernel: CPU: 3 PID: 14896 Comm: mount Not tainted 4.19.56-Unraid #1
    Nov 10 23:55:21 Tower kernel: Hardware name: System manufacturer System Product Name/P8Z68-V PRO, BIOS 3603 11/09/2012
    Nov 10 23:55:21 Tower kernel: Call Trace:
    Nov 10 23:55:21 Tower kernel: dump_stack+0x5d/0x79
    Nov 10 23:55:21 Tower kernel: xfs_trans_cancel+0x52/0xcd [xfs]
    Nov 10 23:55:21 Tower kernel: xfs_efi_recover+0x15e/0x199 [xfs]
    Nov 10 23:55:21 Tower kernel: xlog_recover_process_efi+0x2d/0x43 [xfs]
    Nov 10 23:55:21 Tower kernel: xlog_recover_process_intents+0xa6/0x1b0 [xfs]
    Nov 10 23:55:21 Tower kernel: xlog_recover_finish+0x13/0x80 [xfs]
    Nov 10 23:55:21 Tower kernel: xfs_log_mount_finish+0x5a/0xc3 [xfs]
    Nov 10 23:55:21 Tower kernel: xfs_mountfs+0x50d/0x72f [xfs]
    Nov 10 23:55:21 Tower kernel: ? xfs_mru_cache_create+0x12b/0x151 [xfs]
    Nov 10 23:55:21 Tower kernel: xfs_fs_fill_super+0x448/0x527 [xfs]
    Nov 10 23:55:21 Tower kernel: ? xfs_test_remount_options+0x53/0x53 [xfs]
    Nov 10 23:55:21 Tower kernel: mount_bdev+0x12f/0x17c
    Nov 10 23:55:21 Tower kernel: mount_fs+0x10/0x6b
    Nov 10 23:55:21 Tower kernel: vfs_kern_mount+0x62/0xfa
    Nov 10 23:55:21 Tower kernel: do_mount+0x7b3/0xa2f
    Nov 10 23:55:21 Tower kernel: ? __kmalloc_track_caller+0x65/0x122
    Nov 10 23:55:21 Tower kernel: ? _copy_from_user+0x2f/0x4d
    Nov 10 23:55:21 Tower kernel: ? memdup_user+0x39/0x55
    Nov 10 23:55:21 Tower kernel: ksys_mount+0x6d/0x92
    Nov 10 23:55:21 Tower kernel: __x64_sys_mount+0x1c/0x1f
    Nov 10 23:55:21 Tower kernel: do_syscall_64+0x57/0xf2
    Nov 10 23:55:21 Tower kernel: entry_SYSCALL_64_after_hwframe+0x44/0xa9
    Nov 10 23:55:21 Tower kernel: RIP: 0033:0x14ee3912bfca
    Nov 10 23:55:21 Tower kernel: Code: 48 8b 0d c9 7e 0c 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 49 89 ca b8 a5 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 96 7e 0c 00 f7 d8 64 89 01 48
    Nov 10 23:55:21 Tower kernel: RSP: 002b:00007ffe41fa7648 EFLAGS: 00000206 ORIG_RAX: 00000000000000a5
    Nov 10 23:55:21 Tower kernel: RAX: ffffffffffffffda RBX: 000000000040d2b0 RCX: 000014ee3912bfca
    Nov 10 23:55:21 Tower kernel: RDX: 000000000040d4c0 RSI: 000000000040d540 RDI: 000000000040d520
    Nov 10 23:55:21 Tower kernel: RBP: 000014ee392baf24 R08: 0000000000000000 R09: 0000000000000000
    Nov 10 23:55:21 Tower kernel: R10: 0000000000000c00 R11: 0000000000000206 R12: 0000000000000000
    Nov 10 23:55:21 Tower kernel: R13: 0000000000000c00 R14: 000000000040d520 R15: 000000000040d4c0
    Nov 10 23:55:21 Tower kernel: XFS (md7): xfs_do_force_shutdown(0x8) called from line 1042 of file fs/xfs/xfs_trans.c.  Return address = 000000007285336a
    Nov 10 23:55:21 Tower kernel: XFS (md7): Corruption of in-memory data detected.  Shutting down filesystem
    Nov 10 23:55:21 Tower kernel: XFS (md7): Please umount the filesystem and rectify the problem(s)
    Nov 10 23:55:21 Tower kernel: XFS (md7): Failed to recover intents
    Nov 10 23:55:21 Tower kernel: XFS (md7): log mount finish failed
    Nov 10 23:55:21 Tower root: mount: /mnt/disk7: mount(2) system call failed: Structure needs cleaning.
    Nov 10 23:55:21 Tower emhttpd: shcmd (71): exit status: 32
    Nov 10 23:55:21 Tower emhttpd: /mnt/disk7 mount error: No file system
    Nov 10 23:55:21 Tower emhttpd: shcmd (72): umount /mnt/disk7
    Nov 10 23:55:21 Tower root: umount: /mnt/disk7: not mounted.
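
    The trace shows XFS aborting during log recovery and shutting the filesystem down, which is normally addressed by running xfs_repair against the emulated disk. A hedged sketch of the usual Unraid sequence (array started in Maintenance mode, md7 matching the disk number in the log; read-only pass first, and -L only as a last resort since zeroing the log can discard recent metadata):

    # Array in Maintenance mode; dry-run check of emulated disk 7 first:
    xfs_repair -nv /dev/md7

    # If the dry run looks reasonable, run the actual repair:
    xfs_repair -v /dev/md7

    # Only if xfs_repair refuses to proceed without zeroing the log:
    # xfs_repair -L -v /dev/md7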

     

  3. 13 hours ago, nblain1 said:

    On Unraid 6.8.0 rc3

     

    I've noticed that if I enable my VMs that use br0 (Pihole uses the same bridge), Pihole is inaccessible. The web UI for Pihole comes back as soon as I shut down my VMs.

     

    EDIT: Just updated to 6.8.0 rc4 and now Pihole and my VMs are running together and playing nicely.  Not sure what the problem was on rc3...

     

    EDIT 2: Spoke too soon... it seems to still be happening...

    Could it be related to this:

     

  4. On 9/27/2019 at 3:47 AM, BRiT said:

    As I linked earlier, the cost is around $10-$12 per month for unlimited storage, as Google doesn't enforce the 5-user minimum that is nominally required. It used to be around $8 a year ago. The person who started that thread and is helping a lot of us out has over 400 TB stored there.

    @BRiT Google Drive does enforce the 1TB limit - I just hit it... (using a legacy Google Apps account)

  5. 12 minutes ago, Squid said:

    Yeah, and that's the problem.  Currently @spants has 2 identical apps (exactly the same repository) which FCP cannot distinguish between, so at the end of the day, every time you hit "Fix" it simply bounces the URL from one template to the other.

     

    Since he just uploaded the "template" version 4 hours ago, maybe he's going to delete the other shortly or something, i.e. give this a bit to play itself out.  If after a day or so @spants doesn't clean things up, then @dockerPolice will wind up flipping a coin and blacklisting one of them.

     

    For some reason the GitHub client doesn't update the repository quickly - I have deleted pihole-template and changed pihole to use the new settings. Also uploaded a new Node-RED template.

  6. On 9/30/2019 at 4:03 AM, frakman1 said:

    I noticed that when I changed the TZ to America/New_York, it still uses the default America/Los_Angeles. Some template values don't seem to make it into the docker run command for some reason.

     

    
    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='pihole' --net='br0' --ip='192.168.86.2' --log-opt max-size='50m' --log-opt max-file='1' -e TZ="America/Los_Angeles" -e HOST_OS="Unraid" -e 'TCP_PORT_53'='53' -e 'UDP_PORT_53'='53' -e 'TCP_PORT_80'='80' -e 'PUID'='99' -e 'PGID'='100' -e 'ServerIP'='192.168.86.2' -e 'ServerIPv6'='' -e 'DNS1'='9.9.9.9' -e 'DNS2'='149.112.112.112' -e 'IPv6'='False' -e 'TZ'='America/New_York' -e 'WEBPASSWORD'='admin' -e 'INTERFACE'='br0' -e 'DNSMASQ_LISTENING'='all' -v '/mnt/user/appdata/pihole/pihole/':'/etc/pihole/':'rw' -v '/mnt/user/appdata/pihole/dnsmasq.d/':'/etc/dnsmasq.d/':'rw' --cap-add=NET_ADMIN --dns 127.0.0.1 --dns 1.1.1.1 --restart=unless-stopped 'pihole/pihole:4.3.1-4_amd64' 
    WARNING: Localhost DNS setting (--dns=127.0.0.1) may fail in containers.
    

     

     

    Yes - I will need to create a new template for it (hopefully today), as some of the parameters have changed.

    Just a note: I have only created the simple template for this - it is the official Docker image underneath.
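
    A quick way to check which of the duplicated -e TZ values actually won inside the running container (container name pihole, as in the command above):

    # Print the TZ value the container actually received:
    docker exec pihole printenv TZ

    # Confirm what local time the container thinks it is:
    docker exec pihole date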

  7. I would like to know this as well. I see that there is an update in the web UI, but I'm unsure whether I need to update within Pi-hole with pihole -up or whether the Docker image eventually gets updated. Is the cron job necessary with the latest version?

    Thanks
    I must get round to cleaning up the instructions; unfortunately I travel a lot for work, so that becomes difficult.

    With Unraid, if you make changes to the template, these are not seen by existing users... the only way seems to be to create a new pihole app.

    Just to be clear, my only involvement with pihole is creating the original template for Unraid. I don't touch the Docker files at all.

    Hopefully a new template in a week!

    Sent from my SM-N950F using Tapatalk
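
    On the update question above: with the Docker version of Pi-hole, running pihole -up inside the container is not the supported route; the usual way is to pull a newer image and recreate the container (Unraid's "apply update" on the Docker tab does this for you). A rough command-line equivalent, assuming the container is named pihole:

    # Pull the newer image, then recreate the container from it.
    docker pull pihole/pihole:latest
    docker stop pihole && docker rm pihole
    # ...then re-apply the Unraid template (or the original docker run command)
    # so the fresh container reuses the same /mnt/user/appdata/pihole volumes.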

  8. On 9/5/2019 at 2:40 PM, karlpox said:

     

    Just wondering: sometimes my pihole Docker container doesn't auto-start on boot, but it starts properly if I start it manually. Anybody know why that happens?

     

    
    
    WARNING Misconfigured DNS in /etc/resolv.conf: Primary DNS should be 127.0.0.1 (found 127.0.0.11)
    nameserver 127.0.0.11

     

     

    Wondering if it is due to this error? I remember someone else having that and solving it... Unfortunately I am traveling - take a look back through this thread; it might be there.
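
    For background (my understanding, not confirmed in this thread): 127.0.0.11 is Docker's embedded DNS resolver on user-defined networks, which is what trips Pi-hole's check for 127.0.0.1. The template already pins the container's own resolvers via extra parameters, as visible in the docker run command quoted earlier:

    --dns 127.0.0.1 --dns 1.1.1.1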

  9. 22 hours ago, dgwharrison said:

    Hi @spants, thanks for the pi-hole docker. 

     

    I'd like to set this up so I can use it with the Let's Encrypt reverse proxy; however, I notice that when I set a custom password for key 9, WEBPASSWORD, it doesn't seem to work... The default 'admin' still works, but not what goes in the field. I can't see anywhere in the UI to set the password, so I'm assuming it's in a config file; hence you'd have to SSH into the Docker container, and even if you changed it there it wouldn't be persistent across image updates.

     

    Is there something I should check, or is this a known issue?

     

    I know it isn't going to help, but it works for me! (But I do not use a reverse proxy for it.)
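
    Worth noting (standard Pi-hole behaviour rather than anything template-specific): the web password can also be set from inside the running container, and because /etc/pihole is volume-mapped to appdata it should persist across image updates:

    # Set the admin web password for the container named pihole:
    docker exec -it pihole pihole -a -p 'newpassword'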

  10. 5 hours ago, FreeMan said:

    I keep having my main and backup server drop offline:

    [screenshot: ZeroTier Central status page showing both servers offline]

     

    If I restart the Docker container, they will, of course, come back up. On a couple of occasions, I've browsed to the ZT IP address from Windows Explorer, and the server has responded, and my.zerotier will then show that it's online. Of course, this is NOT very helpful if I'm not at home and am trying to get access to the server.

     

    This is all I see in unRAID's log for the ZT container:

    Again, not very useful...

     

    I'm running the default 1.2.12 build version on both of my servers. Any idea why this is happening or what I can do to fix it?

    I had something similar - so I used the User Scripts plugin to restart it daily...

    Content of the script: (hope it helps)

     

     

    #!/bin/bash
    # Restart the ZeroTier container (scheduled daily via the User Scripts plugin)
    docker restart ZeroTier

  11. 4 minutes ago, mattekure said:

    Quick question: I have the Pi-hole Docker container set up and working correctly (I think). The dashboard shows queries being blocked and the blacklists in effect. I currently have it set up so that my router points to Pi-hole for DNS queries.

     

    On the Pi-hole network overview, it shows all the queries as originating from the router. I believe this is the expected behaviour, as the clients are passing the request to the router, which then passes it to Pi-hole. I just wanted to be sure that is what I should be seeing.

    What you are seeing is normal. You can instead change the DHCP server in your router to hand out Pi-hole's IP as the DNS server, and then you will see the clients' IPs (or just use Pi-hole as the DHCP server).
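
    As an illustration only (router-dependent and hypothetical): on a dnsmasq-based router, handing out Pi-hole's IP as the DNS server is a single DHCP option, here using the 192.168.86.2 address from the template earlier in the thread:

    # DHCP option 6 = DNS server(s) handed out to clients
    dhcp-option=6,192.168.86.2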

  12. 5 hours ago, zzgus said:

    Is there any way to access the Pi-hole admin page from another location using OpenVPN?
    (I can only access Pi-hole locally, inside the same LAN.)

     

    Thank you
    Gus

    Yes - if you have OpenVPN installed, you can reach the admin panel (I tested it). If you want to block advertisements whilst remote, you will need to route all traffic over the VPN and make sure your DNS is pointing at Pi-hole.
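
    For reference, a hedged sketch of the matching OpenVPN server directives (assuming Pi-hole at 192.168.86.2 as in the template earlier; exact setup varies):

    # Hand Pi-hole to VPN clients as their DNS server, and route all
    # client traffic through the tunnel so ads are blocked remotely.
    push "dhcp-option DNS 192.168.86.2"
    push "redirect-gateway def1"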