Shonky

Everything posted by Shonky

  1. Live demo link in unRAID Apps ( https://dash.mauz.io/ ) goes to possibly dangerous websites via multiple redirects. Should be https://dash.mauz.dev/
  2. Not sure exactly what fixed it for me, but I can say that running the latest stable release the issue has not appeared for quite some months now. I did make some minor changes to macvlan/ipvlan but I seem to remember they didn't help.
  3. Well, this docker template is not the same as the one I linked to. If you're having trouble with the other one, then this is the wrong thread. The thread I linked is, I believe, the best and most up-to-date one. There have been a couple, and there have also been significant changes to CrashPlan over time, e.g. the removal of the Home product.
  4. Well firstly, have you found the service.log.0 file and checked whether it really is the same issue? Similar symptoms don't necessarily mean it's the same problem. If it is the same problem, go to the Docker tab, click the CrashPlan container, then ">_ Console". Then you should be able to just run "chmod 777 /usr/local/crashplan" (without quotes) and, from memory, the restore should just start. Note that this is essentially a hack to make it work and possibly won't last if the container is restarted. The other container I use ended up getting a permanent fix.
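     A rough sketch of those console steps as shell. The log path under /usr/local/crashplan/log is an assumption on my part - find where your container actually writes service.log.0 first:

     ```shell
     # Run inside the container console (Docker tab -> CrashPlan -> ">_ Console").
     # 1. Confirm it really is the same issue, e.g. permission errors in the log
     #    (the log location here is an assumption - locate your service.log.0):
     grep -i "permission" /usr/local/crashplan/log/service.log.0

     # 2. The temporary workaround described above. This is a hack: it opens
     #    the directory wide up and may not survive a container restart.
     chmod 777 /usr/local/crashplan
     ```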
  5. Take a look at this. Could be the same or a similar thing. Check Crashplan's log files. It creates quite a few.
  6. Did some more digging and found the code in /usr/local/emhttp/webGui/nchan/update1:

     #!/usr/bin/php -q
     <?PHP
     /* Copyright 2005-2021, Lime Technology
      * Copyright 2012-2021, Bergware International.
      *
      * This program is free software; you can redistribute it and/or
      * modify it under the terms of the GNU General Public License version 2,
      * as published by the Free Software Foundation.
      *
      * The above copyright notice and this permission notice shall be included in
      * all copies or substantial portions of the Software.
      */
     ?>
     <?
     $docroot = '/usr/local/emhttp';
     $varroot = '/var/local/emhttp';
     require_once "$docroot/webGui/include/publish.php";

     while (true) {
       unset($memory,$sys,$rpms,$lsof);
       exec("grep -Po '^Mem(Total|Available):\s+\K\d+' /proc/meminfo",$memory);
       exec("df /boot /var/log /var/lib/docker|grep -Po '\d+%'",$sys);
       exec("sensors -uA 2>/dev/null|grep -Po 'fan\d_input: \K\d+'",$rpms);
       $info  = max(round((1-$memory[1]/$memory[0])*100),0)."%\0".implode("\0",$sys);
       $rpms  = count($rpms) ? implode(" RPM\0",$rpms).' RPM' : '';
       $names = array_keys((array)parse_ini_file("$varroot/shares.ini"));
       exec("LANG='en_US.UTF8' lsof -Owl /mnt/disk[0-9]* 2>/dev/null|awk '/^shfs/ && \$0!~/\.AppleD(B|ouble)/ && \$5==\"REG\"'|awk -F/ '{print \$4}'",$lsof);
       $counts = array_count_values($lsof);
       $count  = [];
       foreach ($names as $name) $count[] = $counts[$name] ?? 0;
       $count = implode("\0",$count);
       publish('update1', "$info\1$rpms\1$count");
       sleep(5);
     }
     ?>

     My regex skills aren't the best, so perhaps excluding the Apple files and just counting open files? But why does it even do this if there's no WebGui reading it?
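     For what it's worth, the counting part of that pipeline can be exercised on its own. This is just the same shfs/REG filter from update1 with a sort | uniq -c added on the end to show open-file counts per share (the lsof flags are the ones from the script; the output obviously depends on what's open at the time):

     ```shell
     # Count open regular files per top-level share. The share name is the
     # 4th "/"-separated component of the line (/mnt/diskN/<share>/...),
     # which is exactly what the second awk in update1 extracts.
     lsof -Owl /mnt/disk[0-9]* 2>/dev/null \
       | awk '/^shfs/ && $5=="REG"' \
       | awk -F/ '{print $4}' \
       | sort | uniq -c
     ```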
  7. Did you figure out what "feature" causes this? I see an lsof task running every 5-10 seconds as a process spawned by update1. I have made sure I closed all WebUI interfaces (closed the browser completely, made sure no processes were still running) but they persist. This is the command that keeps running:

     sh -c LANG='en_US.UTF8' lsof -Owl /mnt/disk[0-9]* 2>/dev/null|awk '/^shfs/ && $0!~/\.AppleD(B|ouble)/ && $5=="REG"'|awk -F/ '{print $4}'

     I don't have any Apple devices, so whilst mine's not pegging the CPU at 100% for extended periods, it's not necessary - it just runs at 100% for maybe a second each time. It seems to look for hidden files left by Macs and then prints them somewhere. Seems kind of pointless to me. I see a sleep 2, so perhaps it's running every two seconds:

     ps aux | grep ls[o]f -A5 -B5
     root 17957  0.0  0.1  88744 29040 ?      SL   Jul03  24:23 /usr/bin/php -q /usr/local/emhttp/webGui/nchan/wg_poller
     root 17960  0.0  0.1  88744 29000 ?      SL   Jul03   6:40 /usr/bin/php -q /usr/local/emhttp/webGui/nchan/update_1
     root 17963  0.0  0.1  88944 29716 ?      SL   Jul03  34:08 /usr/bin/php -q /usr/local/emhttp/webGui/nchan/update_2
     root 17966  0.1  0.1  88884 29408 ?      SL   Jul03  64:21 /usr/bin/php -q /usr/local/emhttp/webGui/nchan/update_3
     root 18095  0.0  0.0      0     0 ?      I    17:31   0:00 [kworker/2:0]
     root 18149  0.0  0.0   3936  2984 ?      S    17:31   0:00 sh -c LANG='en_US.UTF8' lsof -Owl /mnt/disk[0-9]* 2>/dev/null|awk '/^shfs/ && $0!~/\.AppleD(B|ouble)/ && $5=="REG"'|awk -F/ '{print $4}'
     root 18150 13.0  0.0   5340  3292 ?      S    17:31   0:00 lsof -Owl /mnt/disk1 /mnt/disk2 /mnt/disk3 /mnt/disk4 /mnt/disk5
     root 18151  0.0  0.0   8376  2560 ?      S    17:31   0:00 awk /^shfs/ && $0!~/\.AppleD(B|ouble)/ && $5=="REG"
     root 18152  0.0  0.0   8244  2496 ?      S    17:31   0:00 awk -F/ {print $4}
     root 18155  0.0  0.0   2464   732 ?      S    17:31   0:00 sleep 2
     root 18156  0.0  0.0   4860  2908 pts/27 R+   17:31   0:00 ps aux
     root 18157  0.0  0.0   3980  2228 pts/27 S+   17:31   0:00 grep ls[o]f -A5 -B5
  8. Right but the Change time is being updated (see original post) which appears to be what hashbackup is seeing.
  9. If I knew I could hit it within a few days I would probably roll back to 6.8.3 and try, but mine's just as likely to run for 2 months without an issue, so I think I prefer to go with 6.10-rc2 for now. One of those hard-to-prove-a-negative things. If it fails again I'll revert and/or try your suggestions. BTW: I don't really follow this ipvlan/macvlan thing, but my 6.10-rc2 has a macvlan kernel module loaded (and no ipvlan).
  10. Is this the right post? You say host access *disabled*, but in the quote above you say "with host access". I don't have any bridging enabled, but the host access option does create some sort of bridge. I found my firewall was complaining that my unRAID machine's IP was alternating between two MAC addresses, which is bad (tm), so that's how I ended up turning that host access thing off today.
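      If it helps anyone else spot the same symptom, here's a hedged sketch (plain iproute2/coreutils, not anything firewall-specific) for finding two interfaces on the unRAID box that share one MAC address, which is roughly what that shim interface situation looks like:

      ```shell
      # Print "MAC interface" for every link, then show only MACs that
      # appear more than once. -w17 compares just the 17-character MAC.
      ip -o link \
        | awk '{for (i = 1; i <= NF; i++) if ($i == "link/ether") print $(i+1), $2}' \
        | sort | uniq -D -w17
      ```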
  11. 6.9.2 was where I'd been for a long time with it occurring semi regularly. Upgraded to 6.10-rc2 just the other day and it happened within about 12ish hours. I don't think 6.10-rc2 is any worse, just luck of the draw. At this point the released 6.9.2 was just as bad so saying release vs unreleased isn't really a thing on my setup.
  12. It did appear to be the file integrity plugin, as updating one of the file date attributes resulted in hashbackup backing them up again. I've stopped the plugin for now.
  13. I came across this solution separately, and then found this other thread just now. Seems like it could be a possible solution. My router (pfSense) was complaining about an IP having two different MAC addresses (the real hardware and a virtual interface called something like br0-shim, which I presume responds to ARP requests, resulting in packets to one IP coming in on two different network interfaces).
  14. How did you go with this? I've been suffering this issue for a while and independently from this thread came across this as a possible cause. I've only been running a few hours and it can take anywhere between 12 hours and 30+ days for it to occur.
  15. Turn the cache off, or if doing it on the machine itself, copy direct to /mnt/user0/...
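      As a concrete example of that second option ("media" is a placeholder share name - substitute your own; /mnt/user/media goes via the cache if the share uses one, /mnt/user0/media is the same share writing straight to the array):

      ```shell
      # Copy directly onto the array, bypassing the cache drive.
      # Paths here are placeholders for illustration.
      rsync -a --progress /mnt/disks/usbdrive/stuff/ /mnt/user0/media/stuff/
      ```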
  16. Upgrade your CrashPlan docker. CrashPlan tries to upgrade itself, but that doesn't work in the unRAID docker. Should be v8.8.1 from memory. Mine had to be upgraded twice recently, although I don't think the 8.8.1 version itself changed. CrashPlan was downloading the same ~20MB file hundreds of times into its /tmp directory.
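      If anyone wants to check for the same pile-up, a quick sketch: group files by content hash and print only the duplicated groups (plain find + md5sum, nothing CrashPlan-specific; point it at the container's /tmp or wherever you suspect):

      ```shell
      #!/bin/sh
      # Usage: dupes.sh [directory]   (defaults to /tmp)
      # Hashes every file and prints groups of identical copies separated
      # by blank lines - hundreds of copies of one ~20MB download stand out.
      dir="${1:-/tmp}"
      find "$dir" -type f -exec md5sum {} + 2>/dev/null \
        | sort | uniq -w32 --all-repeated=separate
      ```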
  17. System is a Microserver Gen 8 with "07:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9230 PCIe 2.0 x2 4-port SATA 6 Gb/s RAID Controller (rev 11)" connected to an external drive enclosure via an eSATA port through a port multiplier. It works fine in 6.9.2 and has been part of my system for at least 4 years. Drives sdb, sdc, sdd and sde are in the main chassis; sdi and sdg are in the enclosure. sdh and sdj are also installed but not part of the array. The drives are getting quite full; I was going to add a new drive soon.

      Upgraded from 6.9.2 to 6.10.0-rc2. Initially everything started OK, but shortly after booting, as I was just checking things over, it started dropping all the drives in the enclosure - both in the array and not. The log ending below is from the 6.10.0-rc2 boot, from /boot/logs. I didn't request it - is it created automatically?

      Nov 8 12:22:00 Mars root: Fix Common Problems Version 2021.08.05
      Nov 8 12:26:54 Mars kernel: ata8: SATA link down (SStatus 0 SControl 300)
      Nov 8 12:27:00 Mars kernel: ata8: SATA link down (SStatus 0 SControl 300)
      Nov 8 12:27:05 Mars kernel: ata11: SATA link down (SStatus 0 SControl 300)
      Nov 8 12:27:10 Mars kernel: ata12: SATA link down (SStatus 0 SControl 300)
      Nov 8 12:27:15 Mars kernel: ata12: SATA link down (SStatus 0 SControl 300)
      Nov 8 12:27:21 Mars kernel: ata8: SATA link down (SStatus 0 SControl 300)
      Nov 8 12:27:21 Mars kernel: ata8.00: disabled
      Nov 8 12:27:21 Mars kernel: ata11: SATA link down (SStatus 0 SControl 300)
      Nov 8 12:27:21 Mars kernel: sd 8:0:0:0: rejecting I/O to offline device
      Nov 8 12:27:21 Mars kernel: blk_update_request: I/O error, dev sdg, sector 3910845296 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
      Nov 8 12:27:21 Mars kernel: md: disk5 read error, sector=3910845232
      Nov 8 12:27:26 Mars kernel: ata13: SATA link down (SStatus 0 SControl 300)
      Nov 8 12:27:31 Mars kernel: ata8.00: detaching (SCSI 8:0:0:0)
      Nov 8 12:27:31 Mars kernel: sd 8:0:0:0: [sdg] Synchronizing SCSI cache
      Nov 8 12:27:31 Mars kernel: sd 8:0:0:0: [sdg] Synchronize Cache(10) failed: Result: hostbyte=0x04 driverbyte=DRIVER_OK
      Nov 8 12:27:31 Mars kernel: sd 8:0:0:0: [sdg] Stopping disk
      Nov 8 12:27:31 Mars kernel: sd 8:0:0:0: [sdg] Start/Stop Unit failed: Result: hostbyte=0x04 driverbyte=DRIVER_OK
      Nov 8 12:27:31 Mars kernel: ata11: SATA link down (SStatus 0 SControl 300)
      Nov 8 12:27:31 Mars kernel: ata11.00: disabled
      Nov 8 12:27:31 Mars kernel: ata12: SATA link down (SStatus 0 SControl 300)
      Nov 8 12:27:31 Mars kernel: ata12.00: disabled
      Nov 8 12:27:31 Mars kernel: ata13: SATA link down (SStatus 0 SControl 300)
      Nov 8 12:27:31 Mars kernel: sd 12:0:0:0: rejecting I/O to offline device
      Nov 8 12:27:31 Mars kernel: blk_update_request: I/O error, dev sdi, sector 3910845296 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
      Nov 8 12:27:31 Mars kernel: md: disk4 read error, sector=3910845232
      Nov 8 12:27:31 Mars kernel: XFS (md5): metadata I/O error in "xfs_da_read_buf+0xa3/0x103 [xfs]" at daddr 0xe91ac330 len 8 error 5
      Nov 8 12:27:31 Mars kernel: XFS (md5): metadata I/O error in "xfs_da_read_buf+0xa3/0x103 [xfs]" at daddr 0xe91ac330 len 8 error 5
      Nov 8 12:27:31 Mars kernel: XFS (md5): metadata I/O error in "xfs_da_read_buf+0xa3/0x103 [xfs]" at daddr 0xe91ac330 len 8 error 5
      Nov 8 12:27:31 Mars kernel: XFS (md5): metadata I/O error in "xfs_da_read_buf+0xa3/0x103 [xfs]" at daddr 0xe91ac330 len 8 error 5
      Nov 8 12:27:31 Mars kernel: XFS (md5): metadata I/O error in "xfs_da_read_buf+0xa3/0x103 [xfs]" at daddr 0xe91ac330 len 8 error 5
      Nov 8 12:27:31 Mars kernel: XFS (md5): metadata I/O error in "xfs_da_read_buf+0xa3/0x103 [xfs]" at daddr 0xe91ac330 len 8 error 5
      Nov 8 12:27:31 Mars kernel: XFS (md5): metadata I/O error in "xfs_da_read_buf+0xa3/0x103 [xfs]" at daddr 0xe91ac330 len 8 error 5
      Nov 8 12:27:31 Mars kernel: XFS (md5): metadata I/O error in "xfs_da_read_buf+0xa3/0x103 [xfs]" at daddr 0xe91ac330 len 8 error 5
      Nov 8 12:27:31 Mars kernel: XFS (md5): metadata I/O error in "xfs_da_read_buf+0xa3/0x103 [xfs]" at daddr 0xe91ac330 len 8 error 5
      Nov 8 12:27:31 Mars kernel: XFS (md5): metadata I/O error in "xfs_da_read_buf+0xa3/0x103 [xfs]" at daddr 0xe91ac330 len 8 error 5
      Nov 8 12:27:31 Mars kernel: md: disk5 read error, sector=6143051080
      Nov 8 12:27:31 Mars kernel: blk_update_request: I/O error, dev sdi, sector 6143051144 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
      Nov 8 12:27:31 Mars kernel: md: disk4 read error, sector=6143051080
      Nov 8 12:27:32 Mars kernel: md: disk5 read error, sector=6143089464
      Nov 8 12:27:32 Mars kernel: blk_update_request: I/O error, dev sdi, sector 6143089528 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
      Nov 8 12:27:32 Mars kernel: md: disk4 read error, sector=6143089464
      Nov 8 12:27:37 Mars kernel: ata12.00: detaching (SCSI 12:0:0:0)
      Nov 8 12:27:37 Mars kernel: ata11.00: detaching (SCSI 11:0:0:0)
      Nov 8 12:27:37 Mars kernel: sd 11:0:0:0: [sdh] Synchronizing SCSI cache
      Nov 8 12:27:37 Mars kernel: sd 11:0:0:0: [sdh] Synchronize Cache(10) failed: Result: hostbyte=0x04 driverbyte=DRIVER_OK
      Nov 8 12:27:37 Mars kernel: sd 11:0:0:0: [sdh] Stopping disk
      Nov 8 12:27:37 Mars kernel: sd 11:0:0:0: [sdh] Start/Stop Unit failed: Result: hostbyte=0x04 driverbyte=DRIVER_OK
      Nov 8 12:27:37 Mars kernel: sd 12:0:0:0: [sdi] Synchronizing SCSI cache
      Nov 8 12:27:37 Mars kernel: sd 12:0:0:0: [sdi] Synchronize Cache(10) failed: Result: hostbyte=0x04 driverbyte=DRIVER_OK
      Nov 8 12:27:37 Mars kernel: sd 12:0:0:0: [sdi] Stopping disk
      Nov 8 12:27:37 Mars kernel: sd 12:0:0:0: [sdi] Start/Stop Unit failed: Result: hostbyte=0x04 driverbyte=DRIVER_OK
      Nov 8 12:27:37 Mars kernel: ata13: SATA link down (SStatus 0 SControl 300)
      Nov 8 12:27:37 Mars kernel: ata13.00: disabled
      Nov 8 12:27:37 Mars emhttpd: error: get_device_size, 1549: No such device or address (6): open: /dev/sdj
      Nov 8 12:27:37 Mars emhttpd: error: device_inventory, 1704: No such file or directory (2): readlink: /sys/dev/block/8:112
      Nov 8 12:27:37 Mars emhttpd: error: device_inventory, 1704: No such file or directory (2): readlink: /sys/dev/block/8:129
      Nov 8 12:27:37 Mars emhttpd: error: device_inventory, 1704: No such file or directory (2): readlink: /sys/dev/block/8:113
      Nov 8 12:27:37 Mars emhttpd: error: device_inventory, 1704: No such file or directory (2): readlink: /sys/dev/block/8:128
      Nov 8 12:27:37 Mars kernel: ata13.00: detaching (SCSI 13:0:0:0)
      Nov 8 12:27:37 Mars kernel: sd 13:0:0:0: [sdj] Synchronizing SCSI cache
      Nov 8 12:27:37 Mars kernel: sd 13:0:0:0: [sdj] Synchronize Cache(10) failed: Result: hostbyte=0x04 driverbyte=DRIVER_OK
      Nov 8 12:27:37 Mars kernel: sd 13:0:0:0: [sdj] Stopping disk
      Nov 8 12:27:37 Mars kernel: sd 13:0:0:0: [sdj] Start/Stop Unit failed: Result: hostbyte=0x04 driverbyte=DRIVER_OK
      Nov 8 12:27:37 Mars unassigned.devices: Warning: Can't get rotational setting of 'sdj'.
      Nov 8 12:27:44 Mars kernel: md: disk4 read error, sector=282765536
      Nov 8 12:27:44 Mars kernel: md: disk5 read error, sector=282765536
      Nov 8 12:27:44 Mars kernel: XFS (md4): metadata I/O error in "xfs_da_read_buf+0xa3/0x103 [xfs]" at daddr 0x10daa8e0 len 8 error 5
      Nov 8 12:27:44 Mars kernel: md: disk5 read error, sector=5988858984
      Nov 8 12:27:44 Mars kernel: md: disk5 read error, sector=5988858992
      Nov 8 12:27:44 Mars kernel: md: disk5 read error, sector=5988859000
      Nov 8 12:27:44 Mars kernel: md: disk5 read error, sector=5988859008
      Nov 8 12:27:44 Mars kernel: md: disk4 read error, sector=5988858984
      Nov 8 12:27:44 Mars kernel: md: disk4 read error, sector=5988858992
      Nov 8 12:27:44 Mars kernel: md: disk4 read error, sector=5988859000
      Nov 8 12:27:44 Mars kernel: md: disk4 read error, sector=5988859008
      Nov 8 12:27:44 Mars kernel: md: disk4 read error, sector=6008286176
      Nov 8 12:27:44 Mars kernel: md: disk5 read error, sector=6008286176
      Nov 8 12:27:44 Mars kernel: XFS (md4): metadata I/O error in "xfs_da_read_buf+0xa3/0x103 [xfs]" at daddr 0x1661f2be0 len 8 error 5
      Nov 8 12:27:44 Mars kernel: md: disk5 read error, sector=4035502192
      Nov 8 12:27:44 Mars kernel: md: disk5 read error, sector=4035502200
      Nov 8 12:27:44 Mars kernel: md: disk5 read error, sector=4035502208
      Nov 8 12:27:44 Mars kernel: md: disk5 read error, sector=4035502216
      Nov 8 12:27:44 Mars kernel: md: disk4 read error, sector=4035502192
      Nov 8 12:27:44 Mars kernel: md: disk4 read error, sector=4035502200
      Nov 8 12:27:44 Mars kernel: md: disk4 read error, sector=4035502208
      Nov 8 12:27:44 Mars kernel: md: disk4 read error, sector=4035502216
      Nov 8 12:27:44 Mars kernel: md: disk5 read error, sector=5990007848
      Nov 8 12:27:44 Mars kernel: md: disk5 read error, sector=5990007856
      Nov 8 12:27:44 Mars kernel: md: disk5 read error, sector=5990007864
      Nov 8 12:27:44 Mars kernel: md: disk5 read error, sector=5990007872
      Nov 8 12:27:44 Mars kernel: md: disk4 read error, sector=5990007848
      Nov 8 12:27:44 Mars kernel: md: disk4 read error, sector=5990007856
      Nov 8 12:27:44 Mars kernel: md: disk4 read error, sector=5990007864
      Nov 8 12:27:44 Mars kernel: md: disk4 read error, sector=5990007872
      Nov 8 12:27:50 Mars kernel: md: disk4 read error, sector=66720
      Nov 8 12:27:50 Mars kernel: md: disk5 read error, sector=66720
      Nov 8 12:27:50 Mars kernel: XFS (md4): metadata I/O error in "xfs_da_read_buf+0xa3/0x103 [xfs]" at daddr 0x104a0 len 8 error 5
      Nov 8 12:27:50 Mars kernel: XFS (md4): metadata I/O error in "xfs_da_read_buf+0xa3/0x103 [xfs]" at daddr 0x104a0 len 8 error 5
      Nov 8 12:27:50 Mars kernel: XFS (md4): metadata I/O error in "xfs_da_read_buf+0xa3/0x103 [xfs]" at daddr 0x104a0 len 8 error 5
      Nov 8 12:27:50 Mars kernel: XFS (md4): metadata I/O error in "xfs_da_read_buf+0xa3/0x103 [xfs]" at daddr 0x104a0 len 8 error 5
      Nov 8 12:27:50 Mars kernel: XFS (md4): metadata I/O error in "xfs_da_read_buf+0xa3/0x103 [xfs]" at daddr 0x104a0 len 8 error 5
      Nov 8 12:27:50 Mars kernel: XFS (md4): metadata I/O error in "xfs_da_read_buf+0xa3/0x103 [xfs]" at daddr 0x104a0 len 8 error 5
      Nov 8 12:27:50 Mars kernel: XFS (md4): metadata I/O error in "xfs_da_read_buf+0xa3/0x103 [xfs]" at daddr 0x104a0 len 8 error 5
      Nov 8 12:27:50 Mars kernel: XFS (md4): metadata I/O error in "xfs_da_read_buf+0xa3/0x103 [xfs]" at daddr 0x104a0 len 8 error 5
      Nov 8 12:27:50 Mars kernel: md: disk4 read error, sector=2303691736
      ....

      I didn't really want to hang around long, so I fairly quickly reverted back to 6.9.2 and everything is working again. I started a parity check and that's going OK after 200GB of checking. I wouldn't call this urgent, but it's not minor or an annoyance either - I would consider it a major problem, certainly for a next release. The priority options aren't the best IMO.

      mars-diagnostics-20211108-1230.zip
  18. It would be much clearer with a simple "none of the above", and people would likely actually vote. It's in no way clear that clicking "view results" counts as an abstention. The current numbers show 135 out of 219 voted, i.e. "no vote" is the second choice, by a slim margin, behind Apple Pay (89 vs 84).
  19. https://forums.unraid.net/topic/51633-plugin-rclone ?
  20. Polls don't work in Tapatalk.
  21. None of the above also. Google Pay should be there if you're offering Apple Pay. As an Australian, the only usable option there is a buy-now-pay-later service, which requires another account etc. and isn't something I'd want to bother with.
  22. OK, well at risk of bursting your bubble, that's how mine is set up anyway and I still had the problem. I have a /24 LAN. 1-99 are static, which I just assign manually - some are dockers, some are things like routers/printers. DHCP is 100+. That's the way it's always been, and kind of has to be really: if you have static IPs in the middle of a DHCP server's range you're going to have a bad time (tm) at some point. Putting dockers above or below the DHCP range makes no difference.
  23. I have recently implemented hashbackup to replace CrashPlanPRO, but I believe hashbackup has just made the problem more visible. I had seen CrashPlan backing up files more than once but never investigated why, mainly because the size wasn't affecting me. Anyway, I'm seeing files that have not been changed in years having their modified/changed times altered. e.g. I do a hashbackup, then run it again a few hours later and something has modified some files. A particularly random one just now:

      ~# stat "/mnt/user/store/manuals and drivers/A8N-SLI Premium Drivers/15.23_nforce_winxp32_international_whql.exe"
        File: /mnt/user/store/manuals and drivers/A8N-SLI Premium Drivers/15.23_nforce_winxp32_international_whql.exe
        Size: 53370976   Blocks: 104248     IO Block: 4096   regular file
      Device: 2fh/47d    Inode: 648799821402998927  Links: 1
      Access: (0666/-rw-rw-rw-)  Uid: (   99/  nobody)   Gid: (  100/   users)
      Access: 2017-04-04 06:21:53.417766486 +1000
      Modify: 2008-10-05 09:03:49.000000000 +1000
      Change: 2021-10-10 17:53:26.528479637 +1000
       Birth: -

      There's no way I touched it just now - that change is from about 20 minutes ago. Whole chunks (not every file in a folder though) randomly have their change time altered and the backup program picks it up. Could the file integrity plugin do anything like this? I've set it up recently, but I'm pretty sure the repeated backups were happening long before that. If a file is not actually being changed, how else could this be happening? All disk file systems are xfs. Not entirely sure what else would be relevant - nothing in syslog at the change time. Checking the direct /mnt/diskx/... file shows the same info but with the Birth field populated (seemingly just before the access time).
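      One way to catch this in the act, a sketch using GNU find: drop a marker file right after a backup finishes, then later list everything whose ctime is newer than the marker (-cnewer compares a file's change time against the marker's modification time). The paths here are placeholders:

      ```shell
      # Right after a backup run, record the time:
      touch /tmp/backup-marker

      # Hours later, list files whose ctime was bumped since the marker -
      # these are the ones the backup would pick up again:
      find /mnt/user/store -type f -cnewer /tmp/backup-marker
      ```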
  24. How's your fix holding up? I get this sporadically - certainly not every 3-4 days; more like every 1-3 months. I do not have the mentioned switch. I have a static IP but don't use VLANs within unRAID nor on my network. I don't really follow the fix though. I do have a br0 set up and some of my dockers connect that way to get a specific LAN IP of their own. My docker br0 is:

      IPv4 custom network on interface br0:
      Subnet: 192.168.99.0/24   Gateway: 192.168.99.1   DHCP pool: not set

      That's my LAN too. Is the suggested fix to have docker restricted to a sub-range of the full /24 and then, for each docker that needs it, only use IPs within that range? My DHCP range for my LAN is 192.168.99.100-192.168.99.255. Below .100 I reserve for statically assigned IPs, and that's where my dockers that have their own IP run from. @corgan if you're still getting the problem, doesn't that mean you didn't really fix it?
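      For what it's worth, the manual Docker equivalent of "restrict Docker to a sub-range" looks roughly like this (a sketch only - unRAID normally builds this network for you; the subnet/gateway are just the values from my setup, and the /26 sub-range is an example):

      ```shell
      # The network spans the whole /24 so containers can talk to the LAN,
      # but Docker only auto-assigns container IPs from 192.168.99.0/26
      # (.0-.63), staying well clear of a DHCP pool starting at .100.
      docker network create -d macvlan \
        --subnet=192.168.99.0/24 \
        --gateway=192.168.99.1 \
        --ip-range=192.168.99.0/26 \
        -o parent=br0 custom_br0
      ```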
  25. Jinxed it. Caught it happening again tonight, so none of the above things fixed anything. I only touched two dockers to bring it back around: Gitea and Photoprism. The latter did seem to be scanning - I'm only playing with it, so I killed it pretty quickly. Previously it did seem like CrashPlanPRO was a cause, but not always. Dockers that generate a lot of IO (Plex scanning, Photoprism scanning, CrashPlanPRO indexing/scanning etc.) seem to trigger it. Stop one or two of them (or sometimes even others) and it comes back around. Almost like it's disk thrashing on a mechanical disk.