SPOautos

Members · 383 posts



  1. Hello, I have an AMD Radeon GPU that does nothing but sit in my server. It's never used, and typically the fans don't spin at all, or spin very slowly. However, I was just updating my plugins, and as soon as I hit the button to update them, the GPU fans spun up to 100% and won't come down. I have two plugins I thought might be the issue: a fan control plugin and a Radeon plugin. I have deleted both of those plugins and it didn't make any difference. I'm currently running 6.10.3 but was updating everything in preparation to update to 6.11.5. Could this be because my Unraid is out of date, and updating to 6.11.5 might correct it? Or should I try to resolve this before I update the OS? Thanks for any advice!
  2. As I was updating my dockers and plugins, something odd happened. As soon as I clicked to update the plugins, my GPU fan ramped up to 100% (older AMD Radeon GPU) and I can't seem to get it back down. Should I try to resolve this before upgrading to 6.11.5, or is it possible that this is happening because I updated a plugin while still on the old 6.10.3 OS, in which case upgrading my Unraid may resolve the issue? I have a plugin for fan control and a plugin for Radeon GPUs; neither seems to have any effect on the fan being at 100%, as I have tried disabling and enabling them several times. *I actually completely removed the fan control and Radeon plugins, but that did not help anything.* For the record, my GPU is never used, and I have never heard the fan running at 100%. It just sits there doing nothing, and normally if the fan spins at all, it's very little. Even right now, it's doing nothing except spinning the fan wide open. Thoughts?
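For a stuck amdgpu fan like the one described above, one thing worth checking from the console is whether something left the card's PWM control in manual mode. This is a hypothetical sketch using the kernel's standard hwmon interface for the amdgpu driver (writing 2 to `pwm1_enable` requests automatic fan control); the card number and hwmon paths vary by system, so verify them on your own hardware first.

```shell
#!/bin/sh
# Sketch: ask the amdgpu driver to resume automatic fan control after
# something (e.g. a fan-control plugin) left the PWM in manual mode.
# Assumes the standard hwmon sysfs layout; paths vary per system.

reset_amdgpu_fans() {
    base="${1:-/sys/class/drm}"
    for ctrl in "$base"/card*/device/hwmon/hwmon*/pwm1_enable; do
        [ -w "$ctrl" ] || continue
        # In the amdgpu hwmon interface: 1 = manual PWM, 2 = automatic
        echo 2 > "$ctrl" && echo "reset: $ctrl"
    done
}

reset_amdgpu_fans "$@"
```

If no writable `pwm1_enable` exists, the loop simply does nothing, which is a hint the driver never exposed fan control in the first place and the fan behavior is coming from firmware or BIOS instead.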
  3. Did you do anything before updating to help ensure it went smoothly?
  4. Hey guys, I'm still on 6.10.3 and everything has been running well; I haven't touched my Unraid since I did the 6.10.3 update. BUT I don't want it to get too far behind, so I'm considering updating to this latest one. I am NOT very knowledgeable about Unraid. Am I likely to have a smooth transition, or will there probably be issues? Are there things I can do before updating to give it a higher chance of not messing up? I'll make a backup copy of the flash, update all my apps and plugins, and run a parity check. Is there anything else that might help before updating?
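The flash backup mentioned above can be scripted. This is a minimal sketch, assuming the Unraid USB stick is mounted at `/boot` (the default); the destination path is a placeholder, and the only real requirement is that the archive lands somewhere that is NOT on the flash drive itself.

```shell
#!/bin/sh
# Sketch of a pre-upgrade flash backup. Paths are illustrative.

backup_flash() {
    src="${1:-/boot}"
    dest="${2:-/mnt/user/backups}"
    [ -d "$src" ] || { echo "no such source: $src" >&2; return 1; }
    mkdir -p "$dest" || return 1
    stamp=$(date +%Y%m%d-%H%M%S)
    # Archive the whole tree (including config/), so a restore is a
    # single extract back onto a freshly prepared USB stick.
    tar -czf "$dest/flash-backup-$stamp.tar.gz" -C "$src" . && \
        echo "$dest/flash-backup-$stamp.tar.gz"
}

# usage: backup_flash [source] [destination]
backup_flash "$@" || echo "backup skipped/failed (adjust paths for your system)" >&2
```

The Unraid web UI's own "Flash backup" button produces a zip with the same content; a scripted copy is just easier to run right before an upgrade.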
  5. Just to have a general idea, how much data were you moving and how long was it taking? If I'm moving 2TB between two drives, should it move at the speed of the hardware, or does unBALANCE slow it way down? I'll probably move 2TB at a time, for a total of 8TB. Will that take hours, days, or weeks?
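A back-of-the-envelope answer: at an assumed ~100 MB/s sustained (a rough ballpark for disk-to-disk sequential copies; parity-protected array writes can be considerably slower), the math works out to hours, not days:

```shell
#!/bin/sh
# Rough transfer-time estimate. Throughput (MB/s) is an assumption;
# measure your own array to refine it.

estimate_hours() {
    tb="$1"; mbps="$2"
    # 1 TB = 1,000,000 MB; seconds = MB / (MB/s); hours = seconds / 3600
    awk -v tb="$tb" -v mbps="$mbps" \
        'BEGIN { printf "%.1f\n", tb * 1000000 / mbps / 3600 }'
}

estimate_hours 2 100   # one 2 TB move: ~5.6 hours
estimate_hours 8 100   # all 8 TB:     ~22.2 hours
```

So each 2TB chunk is an overnight job at best, and the whole 8TB is on the order of a day of total copy time if the assumed speed holds.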
  6. Thank you! It seems to be going smoothly so far!
  7. Thank you! I'm on 6.9.2, which has been very stable for me, and I'm thinking of upgrading to 6.10.3. Do you think I should hold off and add the new drive, move all the data around, etc., before upgrading? Or should I just go ahead and upgrade, make sure it's stable, and then change the drives and such? I won't get the new drive for a week or so, so I thought maybe I should go ahead and upgrade the OS while I'm waiting. Do you have any thoughts on that?
  8. I am still on 6.9.2 and my server runs very well with no issues, but I was thinking of updating to the latest 6.10.3. The issue is that I'm not much of a computer/Unraid expert, so I'm nervous about it not going well. Are there certain things (like hardware and applications) that are more likely to cause trouble? Thanks!
  9. I have four 6TB drives and a 12TB parity. My 6TB drives are nearly full (each has between 5.5 and 5.75TB used) and I have a new 12TB drive in the mail. Is there a way I can move some of the data from each 6TB drive to the 12TB drive so that they are not so close to maxed out? Maybe move about 2TB from each of the 6TB drives to the new 12TB drive. If not, is there a way I can tell Unraid to stop adding data to the 6TB drives and only use the 12TB drive? Thanks!
  10. But all of my containers added together are only 19.9GB of disk utilization, right? How is the percentage so high? Is this just based on a setting somewhere where I'm only allowing a certain amount of space dedicated to containers?
  11. I recently set up a syslog file, and this is what it shows for the 11th, 12th, and 13th, but I don't think I'm seeing anything in here that would be causing an issue of docker image disk utilization being high. I do have mining running and it's been running for months, so I doubt that is the cause; it doesn't use much memory, it uses the GPU and half the CPU. I have no idea what all these entries are regarding port numbers and entering blocking/forwarding/disabled states.

Jun 11 00:00:32 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
  [... the mover cron entry above repeats hourly, around the clock, through Jun 13 ...]
Jun 11 04:00:47 Tower root: /var/lib/docker: 12.9 GiB (13833793536 bytes) trimmed on /dev/loop2
Jun 11 04:00:47 Tower root: /mnt/cache: 331.4 GiB (355793293312 bytes) trimmed on /dev/nvme0n1p1
Jun 11 04:40:01 Tower root: Fix Common Problems Version 2021.05.03
Jun 11 07:44:09 Tower kernel: docker0: port 5(veth2de21d8) entered blocking state
Jun 11 07:44:09 Tower kernel: docker0: port 5(veth2de21d8) entered disabled state
Jun 11 07:44:09 Tower kernel: device veth2de21d8 entered promiscuous mode
Jun 11 07:44:09 Tower kernel: docker0: port 5(veth2de21d8) entered blocking state
Jun 11 07:44:09 Tower kernel: docker0: port 5(veth2de21d8) entered forwarding state
Jun 11 07:44:09 Tower kernel: docker0: port 5(veth2de21d8) entered disabled state
Jun 11 07:44:09 Tower kernel: eth0: renamed from veth5f484ab
Jun 11 07:44:09 Tower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth2de21d8: link becomes ready
Jun 11 07:44:09 Tower kernel: docker0: port 5(veth2de21d8) entered blocking state
Jun 11 07:44:09 Tower kernel: docker0: port 5(veth2de21d8) entered forwarding state
  [... similar docker0/veth blocking/forwarding/disabled cycles repeat through the morning of Jun 11 as containers start and stop (veth00d4a84, vetha6de89e, veth3969633, veth3cc4ddf, veth6bc6288, vethb4fba36) ...]
Jun 12 04:00:43 Tower root: /var/lib/docker: 3.1 GiB (3280904192 bytes) trimmed on /dev/loop2
Jun 12 04:00:43 Tower root: /mnt/cache: 326.5 GiB (350595309568 bytes) trimmed on /dev/nvme0n1p1
Jun 12 04:40:01 Tower root: Fix Common Problems Version 2021.05.03
Jun 12 04:40:15 Tower root: FCP Debug Log: root 25980 1198 0.8 3598248 274764 ? Sl Jun11 13867:55 ./xmrig --url=xmr-us-east1.nanopool.org:14433 --coin=monero --user= *I removed the user data from the copy/paste*
Jun 12 04:40:15 Tower root: Fix Common Problems: Warning: Possible mining software running ** Ignored
Jun 13 04:00:42 Tower root: /var/lib/docker: 3 GiB (3194228736 bytes) trimmed on /dev/loop2
Jun 13 04:00:42 Tower root: /mnt/cache: 326.7 GiB (350819770368 bytes) trimmed on /dev/nvme0n1p1
Jun 13 04:40:01 Tower root: Fix Common Problems Version 2021.05.03
Jun 13 04:40:14 Tower root: FCP Debug Log: root 25980 1198 0.8 3596196 277248 ? Sl Jun11 31122:13 ./xmrig --url=xmr-us-east1.nanopool.org:14433 --coin=monero --user= *I removed the user data from the copy/paste*
Jun 13 04:40:14 Tower root: Fix Common Problems: Warning: Possible mining software running ** Ignored
Jun 13 04:40:15 Tower apcupsd[6677]: apcupsd exiting, signal 15
Jun 13 04:40:15 Tower apcupsd[6677]: apcupsd shutdown succeeded
Jun 13 04:40:18 Tower apcupsd[11753]: apcupsd 3.14.14 (31 May 2016) slackware startup succeeded
Jun 13 04:40:18 Tower apcupsd[11753]: NIS server startup succeeded
Jun 13 08:56:11 Tower emhttpd: cmd: /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin update unassigned.devices.plg
Jun 13 08:56:11 Tower root: plugin: running: anonymous
Jun 13 08:56:11 Tower root: plugin: creating: /boot/config/plugins/unassigned.devices/unassigned.devices-2021.06.11.tgz - downloading from URL https://github.com/dlandon/unassigned.devices/raw/master/unassigned.devices-2021.06.11.tgz
Jun 13 08:56:12 Tower root: plugin: checking: /boot/config/plugins/unassigned.devices/unassigned.devices-2021.06.11.tgz - MD5
Jun 13 08:56:12 Tower root: plugin: creating: /tmp/start_unassigned_devices - from INLINE content
Jun 13 08:56:12 Tower root: plugin: setting: /tmp/start_unassigned_devices - mode to 0770
Jun 13 08:56:12 Tower root: plugin: skipping: /boot/config/plugins/unassigned.devices/unassigned.devices.cfg already exists
Jun 13 08:56:12 Tower root: plugin: skipping: /boot/config/plugins/unassigned.devices/samba_mount.cfg already exists
Jun 13 08:56:12 Tower root: plugin: skipping: /boot/config/plugins/unassigned.devices/iso_mount.cfg already exists
Jun 13 08:56:12 Tower root: plugin: skipping: /tmp/unassigned.devices/smb-settings.conf already exists
Jun 13 08:56:12 Tower root: plugin: skipping: /tmp/unassigned.devices/config/smb-extra.conf already exists
Jun 13 08:56:12 Tower root: plugin: skipping: /tmp/unassigned.devices/add-smb-extra already exists
Jun 13 08:56:12 Tower root: plugin: setting: /tmp/unassigned.devices/add-smb-extra - mode to 0770
Jun 13 08:56:12 Tower root: plugin: skipping: /tmp/unassigned.devices/remove-smb-extra already exists
Jun 13 08:56:12 Tower root: plugin: setting: /tmp/unassigned.devices/remove-smb-extra - mode to 0770
Jun 13 08:56:12 Tower root: plugin: running: anonymous
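The syslog above won't show what's filling the docker image; the Docker CLI itself is better for that. These are standard Docker commands (run from the Unraid console), shown here as a guarded sketch since the point is just which commands to reach for, not a specific script.

```shell
#!/bin/sh
# Quick ways to see what is consuming space inside docker.img.

if command -v docker >/dev/null 2>&1; then
    docker system df   # totals for images, containers, volumes, build cache
    docker ps -as      # per-container size: writable layer (+ image size)
else
    echo "docker CLI not found"
fi
```

A large "writable layer" figure in `docker ps -as` usually means a container is writing logs or downloads inside the image instead of to a mapped `/mnt` path, which is the classic cause of "docker image disk utilization" warnings growing over time.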
  12. Also, if it makes a difference, everything is up to date: 6.9.2 with all containers and plugins updated. I'm pretty sure this is going to keep getting higher and higher over a couple of days. I'm pretty sure this happened a couple of weeks ago and eventually made my server crash (well, it crashed and I never found out why, and it was running fine when it booted back up, so I just *think* it must have been because of this). I've looked through the log, but I don't see anything jumping out at me; then again, I don't know enough about what goes on behind the scenes to know what everything means. I'm not an IT/computer guy; we have a children's clothing store and I do construction, lol. To be honest, I don't even know how I got all this working and being useful, except that the forum community is awesome.
  13. Hey guys, on the Dashboard under Memory, then Docker, it's at 71% and growing; it's normally a very low number. I attached a diagnostic; can anyone tell what's causing it? I also have a notification of "Warning [TOWER] - Docker image disk utilization of 71%". When I look under the Docker tab with advanced view, none of the containers are using a lot of memory. Thanks... tower-diagnostics-20210613-0439.zip
  14. Were you ever able to get this resolved? I seem to be having a similar issue after updating: I can't find the server from the other devices in the Workgroup. Wondering what fixed it for you. Similarly, my settings have "yes" for Workgroup.